Blog: Jill Dyché

Jill Dyché

There you are! What took you so long? This is my blog and it's about YOU.

Yes, you. Or at least it's about your company. Or people you work with in your company. Or people at other companies that are a lot like you. Or people at other companies that you'd rather not resemble at all. Or it's about your competitors and what they're doing, and whether you're doing it better. You get the idea. There's a swarm of swamis, shrinks, and gurus out there already, but I'm just a consultant who works with lots of clients, and the dirty little secret - shhh! - is my clients share a lot of the same challenges around data management, data governance, and data integration. Many of their stories are universal, and that's where you come in.

I'm hoping you'll pour a cup of tea (if this were another Web site, it would be a tumbler of single-malt, but never mind), open the blog, read a little bit and go, "Jeez, that sounds just like me." Or not. Either way, welcome on in. It really is all about you.

About the author

Jill is a partner and co-founder of Baseline Consulting, a technology and management consulting firm specializing in data integration and business analytics. Jill is the author of three acclaimed business books, the latest of which is Customer Data Integration: Reaching a Single Version of the Truth, co-authored with Evan Levy. Her blog, Inside the Biz, focuses on the business value of IT.

Editor's Note: More articles and resources are available in Jill's BeyeNETWORK Expert Channel. Be sure to visit today!


By Stephen Putman, Senior Consultant


I recently read Rob Gonzalez's blog post I've Got a Federated Bridge to Sell You (A Defense of the Warehouse) with great interest - a Semantic Web professional defending a technology that could be displaced by semantics! I agree with Mr. Gonzalez that semantically federated databases are not the answer in all business cases. However, traditional data warehouses and data marts are not the best answer in all cases either, and there are also cases where neither technology is the appropriate solution.


The appropriate technological solution for a given business case depends on a great many factors; balancing them is what I like to call "Three-Dimensional Chess."


An organization needs to consider many factors in choosing the right technology to solve an analytical requirement, including:



  • Efficiency/speed of query return - Is the right data stored or accessed in an efficient manner, and can it be accessed quickly and accurately?  

  • Currency of data - How current is the data that is available?  

  • Flexibility of model - Can the system accept new data inputs of differing structures with a minimum of remodeling and recoding?  

  • Implementation cost, including maintenance - How much does it cost to implement and maintain the system?  

  • Ease of use by end users - Can the data be accessed and manipulated by end users in familiar tools without damage to the underlying data set?  

  • Relative fit to industry and organizational standards - This deals with long-term maintainability of the system, which I addressed in a recent posting, Making It Fit.

  • Current staff skillsets/scarcity of resources to implement and maintain - Can your staff implement and maintain the system, or alternately, can you find the necessary resources in the market to do so at a reasonable cost?




Fortunately, new tools and methodologies are constantly being developed that optimize one or more of these factors, but balancing all of these sometimes mutually exclusive factors is a very difficult job. Few system architects are well versed in all of the applicable systems, so architects tend to advocate the types of systems they are familiar with, bending requirements to fit the characteristics of those systems. This produces the undesirable tendency captured in the saying, "When all you have is a hammer, everything looks like a nail."
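One way to counteract that tendency is to make the trade-offs explicit. Below is a minimal sketch in Python of scoring candidate architectures against the factors listed above; the weights, candidates, and 1-5 ratings are purely illustrative assumptions, not recommendations for any particular technology.

```python
# Minimal sketch: score candidate architectures against the factors above.
# The factor weights and the 1-5 ratings are illustrative assumptions only.

FACTORS = {                       # relative weight of each factor (sums to 1.0)
    "query_speed": 0.20,
    "data_currency": 0.15,
    "model_flexibility": 0.15,
    "implementation_cost": 0.20,  # higher rating = lower cost
    "ease_of_use": 0.10,
    "standards_fit": 0.10,
    "staff_skills": 0.10,
}

candidates = {                    # 1 (poor) to 5 (excellent) on each factor
    "traditional_warehouse": {"query_speed": 5, "data_currency": 2, "model_flexibility": 2,
                              "implementation_cost": 2, "ease_of_use": 4, "standards_fit": 5,
                              "staff_skills": 4},
    "federated_semantic":    {"query_speed": 3, "data_currency": 5, "model_flexibility": 5,
                              "implementation_cost": 3, "ease_of_use": 3, "standards_fit": 3,
                              "staff_skills": 2},
}

def weighted_score(ratings: dict) -> float:
    """Combine per-factor ratings into one weighted total."""
    return sum(FACTORS[factor] * ratings[factor] for factor in FACTORS)

for name, ratings in candidates.items():
    print(f"{name}: {weighted_score(ratings):.2f}")
```

The point is not the arithmetic but the discipline: every factor gets scored for every candidate, so no requirement gets quietly bent to fit a favorite tool.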


Make sure that your organization is taking all factors into account when deciding how to solve an analytical requirement by developing or attracting people who are skilled at playing "three-dimensional chess."


  




Stephen Putman has over 20 years of experience supporting client/server and internet-based operations, from small offices to major corporations. He has extensive experience with a variety of front-end development tools, as well as relational database design and administration, and is extremely effective in project management and leadership roles. He is the co-author of The Data Governance eBook, available at baseline-consulting.com/ebooks.





Posted February 16, 2011 6:00 AM


By Stephen Putman, Senior Consultant




I begin today with an invitation to a headache... click this link: The Linking Open Data Cloud Diagram.


Ouch! That is a really complicated diagram. I believe that the Semantic Web suffers from the same difficulty that many worthy technologies do - the near impossibility of describing the concept in simple terms, using ideas familiar to the vast majority of the audience. When this happens, the technology gets buried under well-meaning but hopelessly complex diagrams like this one. The concept is very powerful if you take the time to understand it, but all the circles and lines immediately turn off most people.


Fortunately, there are simple things that you can do in your organization today that will introduce the concept of linked data to your staff and begin to leverage the great power that the concept holds. It will take a little bit of transition, but once the idea takes hold you can take it in several more powerful directions.


Many companies treat their applications as islands unto themselves in their basic operations, regardless of any external feeds or reporting that occurs. One result of this is that basic, seldom-changing concepts such as Country, State, and Date/Time are replicated in each system throughout the company. A basic tenet of data management states that managing data in one place is preferable to managing it in several - every time something changes, it must be maintained in however many systems use it.


One of the basic concepts of linked data is that applications will use a common repository for data like State, for example, and publish Uniform Resource Identifiers (URIs), or standardized location values that act much like Web-based URLs, for each value in the repository. Applications will then link to the URI for the lookup value instead of the proprietary codes in use today. There are efforts to make global shared repositories for this type of data, but it is not necessary to place your trust in these data stores right away - all of this can occur within your company's firewall.
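As a concrete illustration, here is a minimal sketch using Python's built-in sqlite3 module. The http://data.example.com URI scheme is a hypothetical internal convention, not a published standard; the point is simply that the parent table stores a URI for the State value rather than a proprietary code.

```python
# Minimal sketch of linking to a URI instead of a proprietary lookup code.
# The URI scheme (http://data.example.com/ref/state/...) is a hypothetical
# internal convention, not a published standard.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Shared reference data: one row per State, keyed by its URI.
cur.execute("CREATE TABLE ref_state (state_uri TEXT PRIMARY KEY, state_name TEXT)")
cur.execute("INSERT INTO ref_state VALUES ('http://data.example.com/ref/state/IL', 'Illinois')")

# Application ("parent") table stores the URI rather than a local code like 'ILL' or '17'.
cur.execute("""CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT,
                                      state_uri TEXT REFERENCES ref_state(state_uri))""")
cur.execute("INSERT INTO customer VALUES (1, 'Acme Corp', 'http://data.example.com/ref/state/IL')")

# Any application can resolve the URI to the shared description of the value.
cur.execute("""SELECT c.name, r.state_name
               FROM customer c JOIN ref_state r ON c.state_uri = r.state_uri""")
print(cur.fetchall())  # [('Acme Corp', 'Illinois')]
```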


The transition to linked data does not need to be sudden or comprehensive, but can be accomplished in an incremental fashion to mitigate disruption to existing systems. Here are actions that you can begin right now to start the transition:



  • If you are coding an application that uses these common lookups, store the URI in the parent table instead of the proprietary code.

  • If you are using "shrink wrap" applications, construct views that reconcile the URIs and the proprietary codes, and encourage their use by end users (a brief sketch follows this list).

  • Investigate usage of common repositories in all future development and packaged software acquisition.

  • Begin investigation of linking company-specific common data concepts, such as department, location, etc.
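For the second item above, a reconciliation view can expose the shared URIs without touching the packaged application. The sketch below uses sqlite3 again; the mapping table and the code values are illustrative assumptions.

```python
# Sketch of a reconciliation view for a "shrink wrap" application that still
# stores its own proprietary state codes. The mapping table and the code
# values are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE erp_order (order_id INTEGER PRIMARY KEY, ship_state_code TEXT);  -- packaged app, untouched
CREATE TABLE state_code_map (proprietary_code TEXT PRIMARY KEY, state_uri TEXT);

INSERT INTO erp_order VALUES (100, 'ILL');
INSERT INTO state_code_map VALUES ('ILL', 'http://data.example.com/ref/state/IL');

-- The view exposes the shared URI so end users can join to the common
-- repository without modifying the packaged application.
CREATE VIEW v_erp_order_linked AS
SELECT o.order_id, m.state_uri
FROM erp_order o
JOIN state_code_map m ON o.ship_state_code = m.proprietary_code;
""")
print(cur.execute("SELECT * FROM v_erp_order_linked").fetchall())
```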




Once the transition to a common data store is under way, your organization will have lower administration costs and more consistent data throughout the company. You will also be leading your company toward the linked data future that is fast approaching.


photo by steve_lodefink via Flickr (Creative Commons License)




Stephen Putman has over 20 years of experience supporting client/server and internet-based operations, from small offices to major corporations. He has extensive experience with a variety of front-end development tools, as well as relational database design and administration, and is extremely effective in project management and leadership roles. He is the co-author of The Data Governance eBook, available at information-management.com.





Posted February 1, 2011 6:00 AM


By Stephen Putman, Senior Consultant


I just finished reading a post on the Netflix blog - 5 Lessons We've Learned Using Amazon Web Services (AWS). Even though the article is specific to a high-traffic, cloud-based technology platform, I think it holds great lessons for the optimization of any computer system, especially one that relies on outside sources, such as a business intelligence system.


Netflix develops their systems with the attitude that anything can fail at any point in the technology stack, and their systems should respond in as graceful a way as possible. This is a wonderful attitude to have for any system, and their lessons can be applied to a BI system just as easily:


1. You must unlearn what you have learned. Many people who develop and maintain BI systems come from the transactional application world and apply that experience to a BI system, which is fundamentally different in several ways - for example, the optimization goal of a transactional system is the individual transaction, while the optimization goal of a BI system is the retrieval and manipulation of often huge data sets. Managers and developers who do not recognize these differences are doomed to failure with their systems, while people who successfully make the transition meet organizational goals much more easily.


2. Co-tenancy is hard. The BI system must manage many different types of loads and requests on a daily basis while simultaneously appearing to be as responsive to the user as all other software used. The system administrator must balance data loads, operational reporting requests, and the construction and manipulation of analysis data sets, often at the same time. This is the same sort of paradigm shift as in lesson 1 - people who do not realize the complications of this environment are doomed to failure since the success of a BI system is directly proportional to the frequency of use, and an inefficient system quickly becomes unused.


3. The best way to avoid failure is to fail constantly. This lesson seems counterintuitive, but I've seen so many failed systems that assumed things would work perfectly - source feeds would always have valid data, in the same place, at the same time, every time - that this philosophy gains more credence daily. Systems should be tested for outages at every step of the process, and coded so that the response is graceful and as invisible to end users as possible. If you don't rehearse this in development, you will fail in production - take that to the bank.
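As a small illustration of that mindset, here is a hedged sketch of a feed loader that assumes the source can fail: it retries, then degrades gracefully to the previous extract and flags the load as stale rather than crashing. The file paths, retry counts, and fallback policy are illustrative assumptions, not a prescription.

```python
# Sketch: assume the source feed can be missing or malformed, retry, then fall
# back to the last good extract instead of failing the whole load.
# Paths, retry counts, and the fallback policy are illustrative assumptions.
import csv
import logging
import time
from pathlib import Path

log = logging.getLogger("feed_loader")

def load_feed(path: Path, retries: int = 3, wait_seconds: int = 5):
    """Return the feed's rows, retrying before giving up; None on failure."""
    for attempt in range(1, retries + 1):
        try:
            with path.open(newline="") as f:
                rows = list(csv.DictReader(f))
            if not rows:
                raise ValueError("feed is empty")
            return rows
        except (OSError, ValueError) as exc:
            log.warning("attempt %d/%d failed for %s: %s", attempt, retries, path, exc)
            if attempt < retries:
                time.sleep(wait_seconds)
    return None  # signal failure to the caller instead of raising into the scheduler

rows = load_feed(Path("/data/feeds/sales_today.csv"))
if rows is None:
    # Graceful degradation: use yesterday's extract and flag the load as stale
    # so downstream reports can disclose the data's currency to end users.
    log.error("feed unavailable; using previous extract and marking load as stale")
    rows = load_feed(Path("/data/feeds/sales_previous.csv")) or []
```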


4. Learn with real scale, not toy models. The need for performance testing on hardware and networks equivalent to production, with full data sets, would seem self-evident, but many development shops see it as an unnecessary expense that adds little to the finished product. As in lesson 3 above, if you do not rehearse the operation of your system at the same scale as your production environment, you have no way of knowing how the system will respond in real-world situations, and you are effectively gambling with your career. The smart manager avoids this sort of gamble.


5. Commit yourself. This message surfaces in many different discussions, but it should be re-emphasized frequently - a system as important as your enterprise business intelligence system should have strong and unwavering commitment from all levels of your organization to survive the inevitable struggles that occur in the implementation of such a large computer system.


It is sometimes surprising to realize that even though technology continues to become more complex and distributed, the same simple lessons can be learned from every system and applied to new systems. These lessons should be reviewed frequently in your quest to implement successful data processing systems.


photo by PseudoGil via Flickr (Creative Commons License)




Stephen Putman has over 20 years of experience supporting client/server and internet-based operations, from small offices to major corporations. He has extensive experience with a variety of front-end development tools, as well as relational database design and administration, and is extremely effective in project management and leadership roles. He is the co-author of The Data Governance eBook, available at information-management.com.





Posted January 18, 2011 6:00 AM


By Dick Voorhees, Senior Consultant


The New Year is upon us. And for many, the coming of the New Year involves making new resolutions, or reaffirming old ones. This resolution-making process includes corporations and organizations, not just individuals. In terms of personal resolutions, some undertake this process in earnest, but many seem to deal with resolutions superficially, or at least not very effectively. The same is frequently true for organizations as well.


So how then should an organization go about deciding which "resolutions" to pursue in the New Year, which goals and objectives are both worthy and achievable? Often there are no "good" or "bad" opportunities, a priori, but some are more likely to result in a successful outcome and/or have more significant payoff than others.



  1. Take stock of the opportunities and develop a list of key potential initiatives (or review the existing list, if one exists). Consider recent or imminent changes in the marketplace, competitors' actions, and governmental regulations. Which of these initiatives offers the possibility of consolidating or increasing market share, improving customer service, or making necessary future investments (in the case of regulations)? And which best supports the existing goals and objectives of the organization?

  2. Assess the capabilities and readiness of the organization to act on these initiatives. An opportunity might be a significant one, but if the organization can’t respond effectively and in a timely manner, then the opportunity will be lost, and the organization might better focus its attention and resources on another opportunity with lesser potential payback, but that has a much greater chance of success.

  3. Develop a roadmap, a tactical plan, for addressing the opportunity. Determine which resources are required (hardware, software, capital, and most importantly people), what policies and procedures must be defined or changed, and so on.


Then be prepared to act! Sometimes the best intentions for the New Year fail not for lack of thought or foresight, but for lack of effective follow through. Develop the proper oversight/governance mechanisms, put the plan into action, and then make sure to monitor progress on a regular basis.


These are not difficult steps to follow, but organizations sometimes need help doing so. We’ve found that clients who call us have learned the hard way – either directly or through stories they’ve heard in their industries – that some careful planning, deliberate program design, and – if necessary – some skill assessment and training can take them a long way in their resolutions for success in 2011. Good luck!


photo by L.C.Nøttaasen via Flickr (Creative Commons)


  




Dick Voorhees is a seasoned technology professional with more than 25 years of experience in information technology, data integration, and business analytic systems. He is highly skilled at working with and leading mixed teams of business stakeholders and technologists on data enabling projects.



Posted January 11, 2011 6:00 AM


By Stephen Putman, Senior Consultant




The implementation of a new business intelligence system often requires the replication of existing reports in the new environment. In the process of designing, implementing, and testing the new system, issues of data elements not matching existing output invariably come up. Often, these discrepancies arise from data elements extrapolated from seemingly unrelated sources, or from calculations embedded in the reports themselves, many of which pre-date the tenure of the project team implementing the changes. How can you mitigate these issues in future implementations?


Issues of post-report data manipulation can range from simple - lack of documentation of the existing system - to complex and insidious - "spreadmarts" and stand-alone desktop databases that use the enterprise system for a data source, for example. It is also possible that source systems make changes to existing data and feeds that are not documented or researched by the project team. The result is the same - frustration from the business users and IT group in defining these outliers, not to mention the risk absorbed by the enterprise in using unmanaged data in reports that drive business decisions.


The actions taken to correct the simple documentation issues center on organizational discipline:



  • Establish (or follow) a documentation standard for the entire organization, and stick to it!

  • Implement gates in application and report development that ensure undocumented objects are not released to production

  • Perform periodic audits to ensure compliance (a minimal audit sketch follows this list)
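Here is one way such an audit might look. The catalog tables and their schemas are hypothetical assumptions about how an organization could track this metadata; the point is that the audit can be a query, run on a schedule.

```python
# Minimal audit sketch: flag production report columns with no documentation
# entry. Both tables and their schemas are hypothetical assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE report_columns (report_name TEXT, column_name TEXT);        -- what is deployed
CREATE TABLE column_documentation (report_name TEXT, column_name TEXT,
                                   definition TEXT, source_system TEXT); -- what is documented

INSERT INTO report_columns VALUES ('monthly_sales', 'net_revenue');
INSERT INTO report_columns VALUES ('monthly_sales', 'adj_margin_pct');
INSERT INTO column_documentation VALUES ('monthly_sales', 'net_revenue',
                                         'Invoiced amount less returns', 'ERP');
""")

# Anything deployed but undocumented fails the audit.
undocumented = cur.execute("""
    SELECT rc.report_name, rc.column_name
    FROM report_columns rc
    LEFT JOIN column_documentation cd
      ON rc.report_name = cd.report_name AND rc.column_name = cd.column_name
    WHERE cd.column_name IS NULL
""").fetchall()
print(undocumented)  # [('monthly_sales', 'adj_margin_pct')]
```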


Reining in the other sources of undocumented data is a more complicated task. The data management organization has to walk a fine line between controlling the data produced by the organization and curtailing the freedom of end users to respond to changing data requirements in their everyday jobs. The key is communication - business users need to be encouraged to communicate data requirements through an easy-to-use system and to understand the importance of sharing this information with the entire organization. If there is even a hint of disdain or punitive action regarding this communication, it will stop immediately, and these new derivations will remain a mystery until another system is designed.


The modern information management environment is moving more and more toward transparency and accountability, which are demanded by both internal and external constituencies. A well-documented reporting system supports this change in attitude, reducing risk in external reporting and increasing confidence in the veracity of internal reports, allowing all involved to make better decisions and drive the profitability of the business. It is a change whose time has come.


photo by r h via Flickr (Creative Commons License)




Stephen Putman has over 20 years of experience supporting client/server and internet-based operations, from small offices to major corporations. He has extensive experience with a variety of front-end development tools, as well as relational database design and administration, and is extremely effective in project management and leadership roles. He is the co-author of The Data Governance eBook, available at information-management.com.





Posted December 21, 2010 6:00 AM


By Stephen Putman, Senior Consultant


In my recent blog posting, "Metadata is Key," I talked about one way of changing the mindset of managers and implementers in support of the coming "semantic wave" of linked data management. Today, I give you another way to prepare for the coming revolution, and also become more disciplined and effective in your project management whether you're going down the semantic road or not...


  rathole (n) - [from the English idiom "down a rathole" for a waste of money or time] A technical subject that is known to be able to absorb infinite amounts of discussion time without more than an infinitesimal probability of arrival at a conclusion or consensus.


Anyone who has spent time implementing computer systems knows exactly what I'm talking about here. Meetings can sometimes devolve into lengthy discussions that have little to do with the subject at hand. Frequently, these meetings become quite emotional, which makes it difficult to refocus the discussion on the meeting's subject. The end result is frustration felt by the project team at "wasting time" on unrelated subjects, along with a lack of clarity and the potential for schedule overruns.


One method for mitigating this issue is the presence of a "rathole monitor" in each meeting. I was introduced to this concept at a client several years ago, and I was impressed by the focus they had in meetings, much to the project's benefit. A "rathole monitor" is a person who does not actively participate in the meeting, but who understands the scope and breadth of the proposed solution very well and has enough standing in the organization to be trusted. This person listens to the discussion in the meeting and interrupts when they perceive that the conversation is veering off in an unrelated direction. It is important for this person to record the divergence and relay it to the project management team for later discussion - the discussion is usually useful to the project, and if these new ideas are not addressed later, people will keep their ideas to themselves, which could be detrimental to the project.


This method will pay dividends in current project management, but how does it relate to semantics and linked data? Semantic technology is all about the context and relationships of data objects - in fact, without these objects and relationships being well defined, semantic processing is impossible. Therefore, developing a mindset of scope and context is essential to the successful implementation of any semantically enabled application. Training your staff to think in these terms makes your organization perform in a more efficient and focused manner, which will lead to increased profitability and more effective operations.


photo by xJasonRogersx via Flickr (Creative Commons License)




Stephen Putman has over 20 years of experience supporting client/server and internet-based operations, from small offices to major corporations. He has extensive experience with a variety of front-end development tools, as well as relational database design and administration, and is extremely effective in project management and leadership roles. He is the co-author of The Data Governance eBook, available at information-management.com.





Posted December 16, 2010 6:00 AM


By Stephen Putman, Senior Consultant


One of the most promising developments in data management over the last ten years is the rise of semantic processing, commonly referred to as the "Semantic Web." Briefly described, semantic processing creates a "web of data" complementing the "web of documents" of the World Wide Web. The benefits of such an array of linked data are many, but the main benefit could be the ability for machines to mine for needed data to enhance searches, recommendations, and the like - tasks that humans perform manually today.


Unfortunately, the growth of the semantic data industry has been slower than anticipated, mainly due to a "chicken and egg" problem - these systems need descriptive metadata added to existing structures to function efficiently, but major data management companies are reluctant to invest a great deal in creating tools to do this until an appropriate return on investment is demonstrated. I feel that there is an even more basic issue with the adoption of semantics that has nothing to do with tools or investment - we need the implementers and managers of data systems to change their thinking about how they do their jobs, and to make metadata production central to the systems they produce.


The interoperability and discoverability of data are becoming increasingly important requirements for organizations of all types - the financial industry, for example, is keenly aware of the requirements of XBRL-enabled reporting systems. Even leaving external requirements aside, the same capabilities can benefit the internal reporting of the organization as well. Reporting systems go through extended periods of design and implementation, with their contents and design a seemingly well-guarded secret. Consequently, considerable effort is required for departments not originally included in the system design to discover and use the appropriate data for their operations.


The organization and publication of metadata about these reporting systems can mitigate the cost of this discovery and use by the entire organization. Here is a sample of the metadata produced by every database system, either formally or informally:



  • System-schema-table-column

  • Frequency of update

  • Input source(s)

  • Ownership-stewardship

  • Security level


The collection and publication of such metadata in standard forms will prepare your organization for the coming "semantic wave," even if you do not have a specific application that can utilize this data at the present time. This will give your organization an advantage over companies that wait until these requirements are imposed and then have to play catch-up. You will also gain the advantage of your staff thinking in terms of metadata capture and dissemination, which will help your company become more efficient in its data management functions.
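As a starting point, even a simple machine-readable record per table goes a long way. The sketch below shows one possible layout; the field names and sample values are illustrative assumptions rather than a specific standard, and a fuller effort would map them to a published vocabulary (Dublin Core, DCAT, and the like).

```python
# Sketch: capture the metadata listed above in a machine-readable form.
# The field names and sample values are illustrative assumptions.
import json

table_metadata = {
    "system": "sales_dw",
    "schema": "mart",
    "table": "fact_orders",
    "columns": ["order_id", "customer_id", "order_date", "net_amount"],
    "update_frequency": "daily at 02:00",
    "input_sources": ["erp.orders", "crm.accounts"],
    "owner": "Sales Operations",
    "steward": "jane.doe@example.com",
    "security_level": "internal",
}

# Publishing even this much, consistently, makes the data discoverable by
# other departments today and by semantic tooling later.
print(json.dumps(table_metadata, indent=2))
```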


photo by ~Brenda-Starr~ via Flickr (Creative Commons License)




Stephen Putman has over 20 years of experience supporting client/server and internet-based operations, from small offices to major corporations. He has extensive experience with a variety of front-end development tools, as well as relational database design and administration, and is extremely effective in project management and leadership roles. He is the co-author of The Data Governance eBook, available at information-management.com.





Posted December 14, 2010 6:00 AM


By Stephen Putman, Senior Consultant


I've spent the last eighteen months at clients that have aging technology infrastructures and are oriented toward building applications rather than buying more integrated software packages. All of these organizations face a decision similar to the famed "build vs. buy" decision made when implementing a new enterprise computer system - do we acquire new technology to fulfill requirements, or adapt our existing systems to accomplish business goals?


Obviously, there are pros and cons to each approach, and external factors such as enterprise architecture requirements and resource constraints factor into the decision. However, there are considerations independent of those constraints whose answers may guide you to a more effective decision. These considerations are the subject of this article.


Ideally, there would not be a decision to make here at all - your technological investments would be well managed, up-to-date, and flexible enough to adapt easily to new requirements. Unfortunately, this is rarely the case. Toolsets are cobbled together from developer preferences (based on previous experience), enterprise standards, and OEM components bundled with larger software packages such as ERP systems or packaged data warehouses. New business requirements often appear that do not fit neatly into this environment, which makes this decision necessary.


Acquire New


The apparent path of least resistance in addressing new business requirements is to purchase specialized packages that solve tactical issues well. This approach has the benefit of being the solution that would most closely fit the requirements at hand. However, the organization runs the risk of gathering a collection of ill-fitting software packages that could have difficulty solving future requirements. The best that can be hoped for in this scenario is that the organization leans toward obtaining tools that are based on a standardized foundation of technology such as Java. This enables future customization if necessary and ensures that there will be resources available to do the future work without substantial retraining.


Modify Existing Tools


The far more common approach to this dilemma is to adapt existing software tools to the new business requirements. The advantage to this approach is that your existing staff is familiar with the toolset and can adapt it to the given application without retraining. The main challenge in this approach is that the organization must weigh the speed of adaptation against the possible inefficiency of the tools in the given scenario and the inherent instability of asking a toolset to do things that it was not designed to do.


The "modify existing" approach has become much more common in the last ten to twenty years because of budgetary constraints imposed upon the departments involved. Unless you work in a technology company in the commercial product development group, your department is likely perceived as a cost center to the overall organization, not a profit center, which means that money spent on your operations is an expense instead of an investment. Therefore, you are asked to cut costs wherever possible, and technical inefficiencies are tolerated to a greater degree. This means that you may not have the opportunity to acquire new technology even if it makes the most sense.


The decision to acquire new technology or extend existing technology to satisfy new business requirements is often a decision between unsatisfactory alternatives. The best way for an organization to make effective decisions given all of the constraints is to base its purchase decisions on standardized software platforms. This way, you have the maximum flexibility when the decision falls to the "modify existing" option.


photo by orijinal via Flickr (Creative Commons License)




Stephen Putman has over 20 years of experience supporting client/server and internet-based operations, from small offices to major corporations. He has extensive experience with a variety of front-end development tools, as well as relational database design and administration, and is extremely effective in project management and leadership roles. He is the co-author of The Data Governance eBook, available at information-management.com.





Posted December 10, 2010 6:00 AM


By Mary Anne Hopper, Senior Consultant


I've written quite a bit about the importance of establishing rigor around the process of project intake and prioritization. If you're sitting there wondering how to even get started, I believe it is important to understand where these different work requests come from, because unlike application development projects, BI projects tend to have touch points across the organization. I tend to break the sources into three main categories - stand-alone, upstream applications, and enhancements.


Stand-alone BI projects are those that are not driven by new source system development. Project types include new data marts, reporting applications, or even the re-architecting of legacy reporting environments. Application projects are driven by changes in any of the upstream source systems we utilize in the BI environment, including new application development and changes to existing applications. Always remember that the smallest of changes in a source system can have the largest of impacts on the downstream BI application environment. The enhancements category is the catch-all for low-risk development that can be accomplished in a short amount of time.


Just as important as understanding where work requests come from is prioritizing those requests. All three categories need to be considered in the same prioritization queue - a step that challenges a lot of the clients I work with. So, why is it so important to prioritize the work together? The first reason is resource availability. Resource impact points include project resources (everyone from the analysts to the developers to the testers to the business customers), environment availability and capacity (development and test), and release schedules. And, most importantly, prioritizing all work together ensures the business is getting its highest-value projects completed first.
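To make the idea concrete, here is a hedged sketch of a single backlog that scores all three request types with one formula. The sample requests and the value-over-effort scoring are illustrative assumptions; the point is simply that everything sits in the same queue.

```python
# Sketch: one prioritization queue for stand-alone, application-driven, and
# enhancement requests. Sample requests and the scoring rule are assumptions.
requests = [
    {"name": "New finance data mart",       "type": "stand-alone", "business_value": 9, "effort": 8},
    {"name": "ERP upgrade feed changes",    "type": "application", "business_value": 7, "effort": 5},
    {"name": "Add region filter to report", "type": "enhancement", "business_value": 4, "effort": 1},
]

# One queue, one score: business value relative to effort, regardless of type.
for req in sorted(requests, key=lambda r: r["business_value"] / r["effort"], reverse=True):
    score = req["business_value"] / req["effort"]
    print(f"{req['name']} ({req['type']}): {score:.2f}")
```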


  





Mary Anne has 15 years of experience as a data management professional in all aspects of successful delivery of data solutions to support business needs. She has worked in the capacity of both project manager and business analyst to lead business and technical project teams through data warehouse/data mart implementation, data integration, tool selection and implementation, and process automation projects.




Posted November 2, 2010 6:00 AM


By Mary Anne Hopper, Senior Consultant


As you can imagine, I travel quite a bit as a consultant for Baseline. Over my tenure, I have developed a standard routine for getting through the airport. More often than not, things have gone pretty smoothly for me. Until this week - my bag was pushed into the extra screening area, where it turned out there was an over-sized tube of toothpaste that had to be thrown away. How did this happen when, week in and week out, I use the same bag for my stuff and always get through without a hitch? Well, I deviated from my process.


You see, the prior week I actually checked a bag and was able to throw a full tube of toothpaste in the ditty bag, and I never checked it when I was packing for this week's trip. I deviated from my standard process. If you've ever implemented a "small" or "low impact" change that has blown up an ETL job, changed the meaning of a field, or caused a report to return improper results, you know where I'm going with this.


Process is important. Discipline in following that process is even more important. Am I proposing that every small change go through an entire full-blown project lifecycle? Absolutely not. But there should be a reasonable life cycle for everything that goes into a production-quality environment. Taking consistent steps in delivery helps to ensure that even the smallest of changes do not result in high-impact outages. This can be achieved by taking the time to analyze, develop, and then test changes prior to implementation. The right level of rigor depends on the impact of the environment being unavailable or incorrect.


So, what did I learn from my experience with the toothpaste? My deviation only cost me about $3.50, some embarrassment in the TSA line, and an unplanned trip to CVS. I learned that I will no longer change my travel packing plans (whether or not I check luggage). What can you learn? There is a cost in time and/or dollars if you don't follow a set process. The best starting place is to work with your business and/or IT partners to reach consensus on the right level of rigor - and stick with it.


Photo provided by CogDogBlog via Flickr (Creative Commons License).





Mary Anne has 15 years of experience as a data management professional in all aspects of successful delivery of data solutions to support business needs. She has worked in the capacity of both project manager and business analyst to lead business and technical project teams through data warehouse/data mart implementation, data integration, tool selection and implementation, and process automation projects.




Posted October 28, 2010 6:00 AM