Thoughts on Data Correctness

I have posted some thoughts on data validity, data correctness, and some approaches for considering the difference and taking action, which I am currently developing at my Data Quality book website. Please check them out!

http://www.beyenetwork.be/blogs/loshin/archives/2011/04/thoughts_on_dat.php Fri, 1 Apr 2011 12:56:02 MST
Papers on the Value of Data Quality Improvement

I have had a great opportunity to put some thoughts to paper (courtesy of Informatica) regarding methods for understanding business impacts related to data quality improvement and how they can be isolated, organized, measured, and communicated to a variety of stakeholders across the organization.

Here are links for downloading the papers:

Understanding the Financial Value of Data Quality Improvement

Improved Risk Management via Data Quality Improvement

Improving Organization Productivity through Improved Data Quality

Please check them out and let me know your thoughts!



http://www.beyenetwork.be/blogs/loshin/archives/2011/03/papers_on_the_v.php Mon, 14 Mar 2011 06:12:21 MST
Charlotte NC: Data Quality and MDM Event TOMORROW!

It is not too late to sign up for tomorrow's breakfast event in Charlotte, NC. I have been invited by data quality and MDM tool company Ataccama to be the guest speaker at a series of breakfast seminar events in early March, with the last one tomorrow. Sign up now!



http://www.beyenetwork.be/blogs/loshin/archives/2011/03/charlotte_nc_da.php Wed, 2 Mar 2011 09:29:36 MST
The Dreaded Stairs

By Stephen Putman, Senior Consultant

Recently, a friend of mine posted a link on Facebook that reinforced a philosophy I have held for a long time, one that applies to all activities in life that are not duty-bound:

The Dreaded Stairs (part of  The Fun Theory project)

I have long felt that humans do things for two reasons:

A) They're fun

B) They're lucrative

This applies to the field of Data Governance and Quality as it does to everything else. One of the reasons data governance and quality initiatives are not more widely adopted and followed is that the work is not terribly fun - data owners must be identified, policies and processes must be adopted, and the entire process must be monitored and attended to once it is in place. It's also not seen as lucrative in a direct sense - the act of cleansing the data in a transaction usually doesn't provide immediate financial reward, and while the implementation of governance and quality initiatives can affect the company's bottom line, the benefits are very difficult to quantify in a traditional sense.

Phil Simon has produced a terrific series for The Data Roundtable on incentive ideas for data quality programs, so I will not address those here - he says it much better than I can. I am concerned with "fun." The video above demonstrates an innovative idea for turning a mundane but healthy activity (climbing stairs) into a joyful experience. What sort of innovative programs can be created to make managing high-quality data fun?

"Fun" is a difficult concept because it means something different to everyone. One way to find out what is "fun" to your employees is by conducting surveys or workshops to ask them directly. Another possibility could be to have a "company carnival" in your parking lot, and award employees who identify quality issues with raffle tickets or a "boss' dunk tank." The White House holds a  yearly contest  with government employees for the best quality improvement or cost-savings idea (this is more of an incentive, but some people also consider contests like this fun).

These are just a few ideas off the top of my head - do you have creative people who can come up with others? If it is indeed true that fun makes unpleasant activities more palatable, this would be time well spent to reinforce data governance and quality in your organization.

photo by Robin Fensom via Flickr (Creative Commons license)


Stephen Putman has over 20 years of experience supporting client/server and internet-based operations, from small offices to major corporations. He has extensive experience in a variety of front-end development tools, as well as relational database design and administration, and is extremely effective in project management and leadership roles. He is the co-author of The Data Governance eBook, available at baseline-consulting.com/ebooks.




http://www.beyenetwork.be/blogs/dyche/archives/2011/02/the_dreaded_sta.php Tue, 22 Feb 2011 06:00:00 MST
Three-Dimensional Chess

By Stephen Putman, Senior Consultant

I recently read Rob Gonzalez' blog post  I've Got a Federated Bridge to Sell You (A Defense of the Warehouse)  with great interest - a Semantic Web professional who is defending a technology that could be displaced by semantics! I agree with Mr. Gonzalez that semantically federated databases are not the answer in all business cases. However, traditional data warehouses and data marts are not the best answer in all cases either, and there are also cases where neither technology is the appropriate solution.

The appropriate technological solution for a given business case depends on a great many factors; balancing them is what I like to call "Three-Dimensional Chess."

An organization needs to consider many factors in choosing the right technology to solve an analytical requirement, including:

  • Efficiency/speed of query return - Is the right data stored or accessed in an efficient manner, and can it be accessed quickly and accurately?  
  • Currency of data - How current is the data that is available?  
  • Flexibility of model - Can the system accept new data inputs of differing structures with a minimum of remodeling and recoding?  
  • Implementation cost, including maintenance - How much does it cost to implement and maintain the system?  
  • Ease of use by end users - Can the data be accessed and manipulated by end users in familiar tools without damage to the underlying data set?  
  • Relative fit to industry and organizational standards - This deals with long-term maintainability of the system, which I addressed in a recent posting –  Making It Fit.
  • Current staff skillsets/scarcity of resources to implement and maintain - Can your staff implement and maintain the system, or alternately, can you find the necessary resources in the market to do so at a reasonable cost?

Fortunately, new tools and methodologies are constantly being developed that can optimize one or more of these factors, but balancing all of these sometimes mutually exclusive factors is a very difficult job. There are very few system architects who are well versed in many of the applicable systems, so architects tend to advocate the types of systems they are familiar with, bending requirements to fit the characteristics of the system. This causes the undesirable tendency that is represented in the saying, "When all you have is a hammer, everything looks like a nail."
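One lightweight way to force every factor onto the table, offered here purely as an illustration (nothing in this sketch comes from the original post; the factor names, weights, and scores are hypothetical), is a weighted scoring matrix: weight each factor by how much it matters to your organization, score each candidate architecture against it, and compare the totals.

```python
# Hypothetical weighted scoring of candidate analytical architectures.
# Factor weights and candidate scores (1-5) are illustrative assumptions.

factors = {                      # factor: organizational weight
    "query_speed": 5,
    "data_currency": 3,
    "model_flexibility": 4,
    "implementation_cost": 4,
    "ease_of_use": 3,
    "standards_fit": 2,
    "staff_skills": 4,
}

candidates = {                   # candidate: score per factor (1 = poor, 5 = excellent)
    "traditional_warehouse": {
        "query_speed": 5, "data_currency": 2, "model_flexibility": 2,
        "implementation_cost": 2, "ease_of_use": 4, "standards_fit": 5,
        "staff_skills": 5,
    },
    "federated_semantic": {
        "query_speed": 3, "data_currency": 5, "model_flexibility": 5,
        "implementation_cost": 3, "ease_of_use": 2, "standards_fit": 3,
        "staff_skills": 2,
    },
}

def weighted_score(scores: dict) -> int:
    """Sum of factor scores weighted by organizational priority."""
    return sum(factors[f] * scores[f] for f in factors)

for name, scores in candidates.items():
    print(f"{name}: {weighted_score(scores)}")
```

The totals are not the decision; they simply make the trade-offs, and the biases behind them, visible before anyone reaches for a familiar hammer.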

Make sure that your organization is taking all factors into account when deciding how to solve an analytical requirement by developing or attracting people who are skilled at playing "three-dimensional chess."

 


Stephen Putman has over 20 years of experience supporting client/server and internet-based operations, from small offices to major corporations. He has extensive experience in a variety of front-end development tools, as well as relational database design and administration, and is extremely effective in project management and leadership roles. He is the co-author of The Data Governance eBook, available at baseline-consulting.com/ebooks.




http://www.beyenetwork.be/blogs/dyche/archives/2011/02/three-dimension.php Wed, 16 Feb 2011 06:00:00 MST
Linked Data Today!

By Stephen Putman, Senior Consultant


I begin today with an invitation to a headache...click this link:  The Linking Open Data Cloud Diagram

Ouch! That is a really complicated diagram. I believe that the Semantic Web suffers from the same difficulty that many worthy technologies do - the relative impossibility of describing the concept in simple terms, using ideas familiar to the vast majority of the audience. When this happens, the technology gets buried under well-meaning but hopelessly complex diagrams like this one. If you take the time to understand it, the concept is very powerful, but all the circles and lines immediately turn off most people.

Fortunately, there are simple things that you can do in your organization today that will introduce the concept of  linked data  to your staff and begin to leverage the great power that the concept holds. It will take a little bit of transition, but once the idea takes hold you can take it in several more powerful directions.

Many companies treat their applications as islands unto themselves in their basic operations, regardless of any external feeds or reporting that occurs. One result of this is that basic, seldom-changing concepts such as Country, State, and Date/Time are replicated in each system throughout the company. A basic tenet of data management states that managing data in one place is preferable to managing it in several - every time something changes, it must be maintained in however many systems use it.

One of the basic concepts of linked data is that applications will use a common repository for data like State, for example, and publish  Uniform Resource Identifiers  (URIs), or standardized location values that act much like Web-based URLs, for each value in the repository. Applications will then link to the URI for the lookup value instead of proprietary codes in use today. There are efforts to make global shared repositories for this type of data, but it is not necessary to place your trust in these data stores right away - all of this can occur within your company's firewall.
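To make the idea concrete, here is a minimal sketch of what a shared State repository and a reconciling view might look like; the URIs, table names, and codes are invented for illustration and follow no particular standard.

```python
# Minimal illustration: a shared State lookup keyed by URI, plus a view that
# reconciles a hypothetical application's proprietary code with that URI.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE state_lookup (
        uri  TEXT PRIMARY KEY,   -- e.g. an internal, firewall-local URI
        code TEXT NOT NULL,      -- proprietary or ISO-style abbreviation
        name TEXT NOT NULL
    );
    INSERT INTO state_lookup VALUES
        ('http://data.example.local/ref/state/NC', 'NC', 'North Carolina'),
        ('http://data.example.local/ref/state/IL', 'IL', 'Illinois');

    CREATE TABLE customer (      -- legacy application still stores its own code
        id         INTEGER PRIMARY KEY,
        state_code TEXT
    );
    INSERT INTO customer VALUES (1, 'NC'), (2, 'IL');

    -- View exposing the shared URI alongside the proprietary code.
    CREATE VIEW customer_linked AS
    SELECT c.id, c.state_code, s.uri AS state_uri, s.name AS state_name
    FROM customer c JOIN state_lookup s ON s.code = c.state_code;
""")

for row in conn.execute("SELECT * FROM customer_linked"):
    print(row)
```

New applications would store the URI directly; the view is the bridge for systems that cannot change their schemas yet.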

The transition to linked data does not need to be sudden or comprehensive, but can be accomplished in an incremental fashion to mitigate disruption to existing systems. Here are actions that you can begin right now to start the transition:

  • If you are coding an application that uses these common lookups, store the URI in the parent table instead of the proprietary code.
  • If you are using "shrink wrap" applications, construct views that reconcile the URIs and the proprietary codes, and encourage their use by end users.
  • Investigate usage of common repositories in all future development and packaged software acquisition.
  • Begin investigation of linking company-specific common data concepts, such as department, location, etc.

 Once the transition to a common data store is under way, your organization will have lower administration costs and more consistent data throughout the company. You will also be leading your company into the future of linked data processing that is coming soon.

photo by steve_lodefink via Flickr (Creative Commons License)


Stephen Putman has over 20 years of experience supporting client/server and internet-based operations, from small offices to major corporations. He has extensive experience in a variety of front-end development tools, as well as relational database design and administration, and is extremely effective in project management and leadership roles. He is the co-author of The Data Governance eBook, available at information-management.com.




http://www.beyenetwork.be/blogs/dyche/archives/2011/02/linked_data_tod.php Tue, 1 Feb 2011 06:00:00 MST
Meet Me for Breakfast, Data Quality, and MDM - 3 Upcoming Events

I have been invited by data quality and MDM tool company Ataccama to be the guest speaker at a series of breakfast seminar events in early March at the following locations:

March 1 Bridgewater, NJ

March 2 Chicago, IL

March 3 Charlotte, NC

The topic is "Strategic Business Value from your Enterprise Data," and I will be discussing aspects of business value drivers for Data Quality and MDM. I believe that attendees will also get a copy of my book "Master Data Management."

I participated in a few similar events at the end of 2010 and found that some of the attendees posed some extremely interesting challenges, and I hope to share some new insights at these upcoming events!



http://www.beyenetwork.be/blogs/loshin/archives/2011/01/meet_me_for_bre.php Thu, 20 Jan 2011 11:24:38 MST
Succeed Despite Failing

By Stephen Putman, Senior Consultant

I just finished reading a post on the Netflix blog - 5 Lessons We've Learned Using Amazon Web Services (AWS). Even though this article is specific to a high-traffic cloud-based technology platform, I think that it holds a great lesson for the optimization of any computer system, and especially a system that relies on outside sources such as a business intelligence system.  

Netflix develops their systems with the attitude that anything can fail at any point in the technology stack, and their systems should respond in as graceful a way as possible. This is a wonderful attitude to have for any system, and their lessons can be applied to a BI system just as easily:

1. You must unlearn what you have learned. Many people who develop and maintain BI systems come from the transactional application world and apply their experience to a BI system, which is fundamentally different in several ways - for example, the optimization goal of a transactional system is the individual transaction, while the optimization goal of a BI system is the retrieval and manipulation of often huge data sets. Managers and developers who do not recognize these differences are doomed to failure with their systems, while people who successfully make the transition meet organizational goals much more easily.

2. Co-tenancy is hard. The BI system must manage many different types of loads and requests on a daily basis while simultaneously appearing to be as responsive to the user as all other software used. The system administrator must balance data loads, operational reporting requests, and the construction and manipulation of analysis data sets, often at the same time. This is the same sort of paradigm shift as in lesson 1 - people who do not realize the complications of this environment are doomed to failure since the success of a BI system is directly proportional to the frequency of use, and an inefficient system quickly becomes unused.

3. The best way to avoid failure is to fail constantly. This lesson seems counter-intuitive, but I have seen so many failed systems that assumed things would work perfectly - source feeds would always have valid data, in the same place, at the same time - that this philosophy gains more credence with me daily. Systems should always be tested for outages at any step of the process, and coded so that the response is graceful and as invisible to end users as possible. If you don't rehearse this in development, you will fail in production - take that to the bank.
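As a small illustration of that mindset (the feed layout, file paths, and fallback policy here are hypothetical and not taken from the Netflix post), a load step can anticipate a missing or malformed source feed and degrade gracefully instead of aborting the entire run:

```python
# Hypothetical sketch: load a source feed defensively, fall back to the last
# known-good snapshot, and log the failure instead of letting the run abort.
import csv
import logging
from pathlib import Path

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("feed_loader")

def load_feed(path: Path, required_columns: set) -> list:
    """Read a CSV feed; raise ValueError if expected columns are missing."""
    with path.open(newline="") as f:
        rows = list(csv.DictReader(f))
    if not rows or not required_columns.issubset(rows[0].keys()):
        raise ValueError(f"{path} is missing required columns {required_columns}")
    return rows

def load_with_fallback(primary: Path, last_good: Path, required: set) -> list:
    """Try the live feed first; on any failure, log it and use the last good copy."""
    try:
        return load_feed(primary, required)
    except (OSError, ValueError) as exc:
        log.warning("Live feed failed (%s); falling back to %s", exc, last_good)
        return load_feed(last_good, required)

# Usage (paths and column names are placeholders):
# rows = load_with_fallback(Path("feeds/sales_today.csv"),
#                           Path("feeds/sales_last_good.csv"),
#                           {"order_id", "amount"})
```

The same pattern, rehearsed deliberately in development, is what keeps a production failure from becoming a production outage.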

4. Learn with real scale, not toy models. It would seem that the value of proper performance testing on systems equivalent to production hardware and networking, with full data sets, would be self-evident, but many development shops look at this as an unnecessary expense that adds little to the finished product. But, as in lesson 3 above, if you do not rehearse the operation of your system at the same scale as your production environment, you have no way of knowing how the system will respond in real-world situations, and you are effectively gambling with your career. The smart manager avoids this sort of gamble.

5. Commit yourself. This message surfaces in many different discussions, but it should be re-emphasized frequently - a system as important as your enterprise business intelligence system should have strong and unwavering commitment from all levels of your organization to survive the inevitable struggles that occur in the implementation of such a large computer system.

It is sometimes surprising to realize that even though technology continues to become more complex and distributed, the same simple lessons can be learned from every system and applied to new systems. These lessons should be reviewed frequently in your quest to implement successful data processing systems.

photo by PseudoGil via Flickr (Creative Commons License)


Stephen Putman has over 20 years of experience supporting client/server and internet-based operations, from small offices to major corporations. He has extensive experience in a variety of front-end development tools, as well as relational database design and administration, and is extremely effective in project management and leadership roles. He is the co-author of The Data Governance eBook, available at information-management.com.




http://www.beyenetwork.be/blogs/dyche/archives/2011/01/succeed_despite.php Tue, 18 Jan 2011 06:00:00 MST
New Year's Resolutions: Assess and Revise Your BI Strategy

By Dick Voorhees, Senior Consultant

The New Year is upon us. And for many, the coming of the New Year involves making new resolutions, or reaffirming old ones. This resolution-making process includes corporations and organizations, not just individuals. In terms of personal resolutions, some undertake this process in earnest, but many seem to deal with resolutions superficially, or at least not very effectively. The same is frequently true for organizations as well.

So how, then, should an organization go about deciding which "resolutions" to pursue in the New Year - which goals and objectives are both worthy and achievable? Often there are no "good" or "bad" opportunities a priori, but some are more likely to result in a successful outcome and/or to have a more significant payoff than others.

  1. Take stock of the opportunities, and develop a list of key potential initiatives (or review the existing list, if one exists). Consider recent or imminent changes in the marketplace, competitors' actions, and governmental regulations. Which of these initiatives offers the possibility of consolidating/increasing market share, improving customer service, or represents necessary future investment (in the case of regulations)? And which best supports the existing goals and objectives of the organization?
  2. Assess the capabilities and readiness of the organization to act on these initiatives. An opportunity might be a significant one, but if the organization can't respond effectively and in a timely manner, then the opportunity will be lost, and the organization might better focus its attention and resources on another opportunity with lesser potential payback, but that has a much greater chance of success.
  3. Develop a roadmap, a tactical plan, for addressing the opportunity. Determine which resources are required – hardware, software, capital, and most importantly people – and what policies and procedures must be defined or changed.

Then be prepared to act! Sometimes the best intentions for the New Year fail not for lack of thought or foresight, but for lack of effective follow through. Develop the proper oversight/governance mechanisms, put the plan into action, and then make sure to monitor progress on a regular basis.

These are not difficult steps to follow, but organizations sometimes need help doing so. We've found that clients who call us have learned the hard way – either directly or through stories they've heard in their industries – that some careful planning, deliberate program design, and – if necessary – some skill assessment and training can take them a long way in their resolutions for success in 2011. Good luck!

photo by L.C.Nøttaasen via Flickr (Creative Commons)

 


Dick Voorhees is a seasoned technology professional with more than 25 years of experience in information technology, data integration, and business analytic systems. He is highly skilled at working with and leading mixed teams of business stakeholders and technologists on data enabling projects.



http://www.beyenetwork.be/blogs/dyche/archives/2011/01/new_years_resol.php Tue, 11 Jan 2011 06:00:00 MST
Do You Know What Your Reports Are Doing?

By Stephen Putman, Senior Consultant


The implementation of a new business intelligence system often requires the replication of existing reports in the new environment. In the process of designing, implementing, and testing the new system, issues of data elements not matching existing output invariably come up. Many times, these discrepancies arise from data elements that are extrapolated from seemingly unrelated sources, or from calculations embedded in the reports themselves, which often pre-date the tenure of the project team implementing the changes. How can you mitigate these issues in future implementations?

Issues of post-report data manipulation can range from simple - lack of documentation of the existing system - to complex and insidious - "spreadmarts" and stand-alone desktop databases that use the enterprise system for a data source, for example. It is also possible that source systems make changes to existing data and feeds that are not documented or researched by the project team. The result is the same - frustration from the business users and IT group in defining these outliers, not to mention the risk absorbed by the enterprise in using unmanaged data in reports that drive business decisions.

 The actions taken to correct the simple documentation issues center around organizational discipline:

  • Establish (or follow) a documentation standard for the entire organization, and stick to it!
  • Implement gateways in development of applications and reports that ensure that undocumented objects are not released to production
  • Perform periodic audits to ensure compliance (a minimal sketch of such an audit follows this list)
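A minimal sketch of what such a gateway or audit might look like, assuming the report catalog can be exported as simple records; the catalog structure and field names below are invented for illustration only:

```python
# Hypothetical audit: flag report objects that lack documentation before release.
# The catalog layout and field names are assumptions, not a real product's API.

report_catalog = [
    {"report": "Monthly Revenue", "field": "net_margin",
     "derivation": "revenue - cost", "documented": True, "owner": "Finance"},
    {"report": "Monthly Revenue", "field": "adj_factor",
     "derivation": None, "documented": False, "owner": None},
]

def undocumented_objects(catalog):
    """Return entries missing a derivation, a documentation flag, or an owner."""
    return [entry for entry in catalog
            if not entry["documented"] or not entry["derivation"] or not entry["owner"]]

for violation in undocumented_objects(report_catalog):
    print(f"BLOCK RELEASE: {violation['report']}.{violation['field']} is undocumented")
```

Run as a release gate, this blocks undocumented objects from reaching production; run on a schedule, it doubles as the periodic compliance audit.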

Reining in the other sources of undocumented data is a more complicated task. The data management organization has to walk a fine line between controlling the data produced by the organization and curtailing the freedom of end users to respond to changing data requirements in their everyday jobs. The key is communication - business users need to be encouraged to communicate data requirements through an easy-to-use system and to understand the importance of sharing this information with the entire organization. If there is even a hint of disdain or punitive action regarding this communication, it will stop immediately, and these new derivations will remain a mystery until another system is designed.

The modern information management environment is heading more and more toward transparency and accountability, which are being demanded by both internal and external constituencies. The well-documented reporting system supports this change in attitude, reducing risk in external reporting and increasing confidence in the veracity of internal reports, allowing all involved to make better decisions and drive the profitability of the business. It is a change whose time has come.

photo by r h via Flickr (Creative Commons License)


Stephen Putman has over 20 years of experience supporting client/server and internet-based operations, from small offices to major corporations. He has extensive experience in a variety of front-end development tools, as well as relational database design and administration, and is extremely effective in project management and leadership roles. He is the co-author of The Data Governance eBook, available at information-management.com.




http://www.beyenetwork.be/blogs/dyche/archives/2010/12/do_you_know_wha.php Tue, 21 Dec 2010 06:00:00 MST
Keep It On Track

By Stephen Putman, Senior Consultant

In my recent blog posting, "Metadata is Key," I talked about one way of changing the mindset of managers and implementers in support of the coming "semantic wave" of linked data management. Today, I give you another way to prepare for the coming revolution, and also become more disciplined and effective in your project management whether you're going down the semantic road or not...

rathole (n) - [from the English idiom "down a rathole" for a waste of money or time] A technical subject that is known to be able to absorb infinite amounts of discussion time without more than an infinitesimal probability of arrival at a conclusion or consensus.

 Anyone who has spent time implementing computer systems knows exactly what I'm talking about here. Meetings can sometimes devolve into lengthy discussions that have little to do with the subject at hand. Frequently, these meetings become quite emotional, which makes it difficult to refocus the discussion on the meeting's subject. The end result is frustration felt by the project team on "wasting time" on unrelated subjects, with the resulting lack of clarity and potential for schedule overruns.

One method for mitigating this issue is the presence of a "rathole monitor" in each meeting. I was introduced to this concept at a client several years ago, and I was impressed by the focus they had in meetings, much to the project's benefit. A "rathole monitor" is a person who does not actively participate in the meeting, but understands the scope and breadth of the proposed solution very well and has enough standing in the organization to be trusted. This person listens to the discussion in the meeting and interrupts when he perceives that the conversation is veering off in an unrelated direction. It is important for this person to record this divergence and relay it to the project management team for later discussion - the discussion is usually useful to the project, and if these new ideas are not addressed later, people will keep their ideas to themselves, which could be detrimental to the project.

 This method will pay dividends in current project management, but how does it relate to semantics and linked data? Semantic technology is all about context and relationships of data objects - in fact, without these objects and relationships being well defined, semantic processing  is impossible.  Therefore, developing a mindset of scope and context is essential to the successful implementation of any semantically enabled application. Training your staff to think in these terms makes your organization perform in a more efficient and focused manner, which will surely lead to increased profitability and more effective operations.

photo by xJasonRogersx via Flickr (Creative Commons License)


Stephen Putman has over 20 years of experience supporting client/server and internet-based operations, from small offices to major corporations. He has extensive experience in a variety of front-end development tools, as well as relational database design and administration, and is extremely effective in project management and leadership roles. He is the co-author of The Data Governance eBook, available at information-management.com.




http://www.beyenetwork.be/blogs/dyche/archives/2010/12/keep_it_on_trac.php Thu, 16 Dec 2010 06:00:00 MST
Metadata is Key

By Stephen Putman, Senior Consultant

One of the most promising developments in data management over the last ten years is the rise of semantic processing, commonly referred to as the "Semantic Web." Briefly described, semantic processing creates a "web of data" complementing the "web of documents" of the World Wide Web. The benefits of such an array of linked data are many, but the main benefit could be the ability for machines to mine for needed data to enhance searches, recommendations, and the like - work that humans do manually today.

Unfortunately, the growth of the semantic data industry has been slower than anticipated, mainly due to a "chicken and egg" problem - such systems need descriptive metadata to be added to existing structures to function efficiently, but major data management companies are reluctant to invest heavily in tools to do this until an appropriate return on investment is demonstrated. I feel that there is an even more basic issue with the adoption of semantics that has nothing to do with tools or investment - we need the implementers and managers of data systems to change their thinking about how they do their jobs, and to make metadata production central to the systems they produce.

The interoperability and discoverability of data is becoming an increasingly important requirement for organizations of all types - the financial industry, for example, is keenly aware of the requirements of XBRL-enabled reporting systems. Setting external requirements aside, the same capabilities can benefit the organization's internal reporting as well. Reporting systems go through extended periods of design and implementation, with their contents and design a seemingly well-guarded secret. Consequently, effort is required for departments not originally included in the system design to discover and use appropriate data for their operations.

The organization and publication of metadata about these reporting systems can mitigate the cost of this discovery and use by the entire organization. Here is a sample of the metadata produced by every database system, either formally or informally:

  • System-schema-table-column
  • Frequency of update
  • Input source(s)
  • Ownership-stewardship
  • Security level

The collection and publication of such metadata in standard forms will prepare your organization for the coming "semantic wave," even if you do not have a specific application that can utilize this data at the present time. This will give your organization an advantage over those companies that wait for these requirements to be imposed and then have to play catch-up. You will also gain the advantage of your staff thinking in terms of metadata capture and dissemination, which will help your company become more efficient in its data management functions.
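To make that concrete, here is one possible shape for such a published record, sketched as a plain Python structure; the field names simply mirror the list above and are not drawn from any particular metadata standard.

```python
# Illustrative only: one possible shape for a published metadata record.
# Field names mirror the bullet list above; they follow no formal standard.
from dataclasses import dataclass, asdict
import json

@dataclass
class DatasetMetadata:
    system: str
    schema: str
    table: str
    column: str
    update_frequency: str      # e.g. "daily", "hourly"
    input_sources: list
    owner: str                 # ownership / stewardship contact
    security_level: str        # e.g. "internal", "confidential"

record = DatasetMetadata(
    system="sales_dw", schema="mart", table="orders", column="order_total",
    update_frequency="daily", input_sources=["erp.order_lines"],
    owner="finance-data-stewards@example.com", security_level="internal",
)

# Publishing "in standard forms" can start as simply as emitting JSON
# to a shared catalog location that other teams can read.
print(json.dumps(asdict(record), indent=2))
```

Even this small a record answers the questions a downstream department asks first: where the data lives, how fresh it is, where it came from, who owns it, and how sensitive it is.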

photo by ~Brenda-Starr~ via Flickr (Creative Commons License)


Stephen Putman has over 20 years of experience supporting client/server and internet-based operations, from small offices to major corporations. He has extensive experience in a variety of front-end development tools, as well as relational database design and administration, and is extremely effective in project management and leadership roles. He is the co-author of The Data Governance eBook, available at information-management.com.




http://www.beyenetwork.be/blogs/dyche/archives/2010/12/metadata_is_key.php Tue, 14 Dec 2010 06:00:00 MST
Making It Fit

By Stephen Putman, Senior Consultant

I've spent the last eighteen months at clients that have aging technology infrastructures and are oriented toward building applications rather than buying more integrated software packages. All of these organizations face a decision similar to the famed "build vs. buy" decision that is made when implementing a new enterprise computer system - do we acquire new technology to fulfill requirements, or adapt our existing systems to accomplish business goals?

Obviously, there are pros and cons to each approach, and external factors such as enterprise architecture requirements and resource constraints factor into the decision. However, there are considerations independent of those constraints whose answers may guide you to a more effective decision. These considerations are the subject of this article.

Ideally, there would not be a decision to make here at all - your technological investments are well managed, up-to-date, and flexible enough to adapt easily to new requirements. Unfortunately, this is rarely the case in most organizations. Toolsets are cobbled together from developer biases (from previous experience), enterprise standards, or inclusion of OEM packages with larger software packages such as ERP systems or packaged data warehouses. New business requirements often appear that do not fit neatly into this environment, which makes this decision necessary.

Acquire New

The apparent path of least resistance in addressing new business requirements is to purchase specialized packages that solve tactical issues well. This approach has the benefit of being the solution that would most closely fit the requirements at hand. However, the organization runs the risk of gathering a collection of ill-fitting software packages that could have difficulty solving future requirements. The best that can be hoped for in this scenario is that the organization leans toward obtaining tools that are based on a standardized foundation of technology such as Java. This enables future customization if necessary and ensures that there will be resources available to do the future work without substantial retraining.

Modify Existing Tools

The far more common approach to this dilemma is to adapt existing software tools to the new business requirements. The advantage to this approach is that your existing staff is familiar with the toolset and can adapt it to the given application without retraining. The main challenge in this approach is that the organization must weigh the speed of adaptation against the possible inefficiency of the tools in the given scenario and the inherent instability of asking a toolset to do things that it was not designed to do.

The "modify existing" approach has become much more common in the last ten to twenty years because of budgetary constraints imposed upon the departments involved. Unless you work in a technology company in the commercial product development group, your department is likely perceived as a cost center to the overall organization, not a profit center, which means that money spent on your operations is an expense instead of an investment. Therefore, you are asked to cut costs wherever possible, and technical inefficiencies are tolerated to a greater degree. This means that you may not have the opportunity to acquire new technology even if it makes the most sense.

The decision to acquire new technology or extend existing technology to satisfy new business requirements is often a decision between unsatisfactory alternatives. The best way for an organization to make effective decisions given all of the constraints is to base its purchase decisions on standardized software platforms. This way, you have the maximum flexibility when the decision falls to the "modify existing" option.

photo by orijinal via Flickr (Creative Commons License)


Stephen Putman has over 20 years of experience supporting client/server and internet-based operations, from small offices to major corporations. He has extensive experience in a variety of front-end development tools, as well as relational database design and administration, and is extremely effective in project management and leadership roles. He is the co-author of The Data Governance eBook, available at information-management.com.




http://www.beyenetwork.be/blogs/dyche/archives/2010/12/making_it_fit.php Fri, 10 Dec 2010 06:00:00 MST
Webinar: Fundamental Techniques To Maximize the Value of Your Enterprise Data

I will be presenting at a webinar hosted by Talend on December 2 at 2:00 PM EDT (11:00 AM PDT) on Fundamental Techniques to Maximize the Value of Your Enterprise Data. In this presentation I will discuss the convergence of the value of three interconnected techniques: master data management, data integration, and data quality. As data repurposing grows, so do the challenges in centralizing semantics, and we will look at some common challenges. Join me on December 2!

http://www.beyenetwork.be/blogs/loshin/archives/2010/11/webinar_fundame.php Mon, 29 Nov 2010 12:56:03 MST
The Practitioner's Guide to Data Quality Improvement

Just published! My new book on data quality improvement, The Practitioner's Guide to Data Quality Improvement, was released a few weeks ago and is now available. The book provides practical information about the business impacts of poor data quality and offers pragmatic suggestions on building your data quality roadmap, assessing data quality, and adapting data quality tools and technology to improve profitability, reduce organizational risk, increase productivity, and enhance overall trust in enterprise data.

I have an accompanying web site for the book at www.dataqualitybook.com. At that site I am posting my ongoing thoughts about data quality (and other topics!) and you can download a free sample chapter on data quality maturity!

Please visit the site, check out the chapter, and let me know your thoughts by email: loshin@knowledge-integrity.com.



http://www.beyenetwork.be/blogs/loshin/archives/2010/11/the_practitione.php Wed, 10 Nov 2010 13:45:15 MST