Blog: David Loshin

David Loshin

Welcome to my BeyeNETWORK Blog. This is going to be the place for us to exchange thoughts, ideas and opinions on all aspects of the information quality and data integration world. I intend this to be a forum for discussing changes in the industry, as well as how external forces influence the way we treat our information asset. The value of the blog will be greatly enhanced by your participation! I intend to introduce controversial topics here, and I fully expect that reader input will "spice it up." Here we will share ideas, vendor and client updates, problems, questions and, most importantly, your reactions. So keep coming back each week to see what is new on our Blog!

About the author

David is the President of Knowledge Integrity, Inc., a consulting and development company focusing on customized information management solutions including information quality solutions consulting, information quality training and business rules solutions. Loshin is the author of The Practitioner's Guide to Data Quality Improvement, Master Data Management, Enterprise Knowledge Management: The Data Quality Approach and Business Intelligence: The Savvy Manager's Guide. He is a frequent speaker on maximizing the value of information. David can be reached at loshin@knowledge-integrity.com or at (301) 754-6350.

Editor's Note: More articles and resources are available in David's BeyeNETWORK Expert Channel. Be sure to visit today!

Recently in Governance Category

As a by-product of some of our current activities in data governance, I was interested in looking at ways that people model performance metrics. Interestingly, half an hour's worth of web searching turned up surprisingly few artifacts that describe how to model a performance metric. Perhaps my search vocabulary is limited to the phrases I assumed would produce hits; after all, I am confident that every BI tool vendor has an embedded model for performance metrics.

However, the failed search exercise has triggered the dreaded next step: having to think about it myself. My first thoughts revolve around "metric basics" (a rough sketch of how these might fit together follows the list):

- who are the stakeholders,
- what are the performance objectives,
- what is being measured,
- what are the units of measure,
- how is the measurement performed,
- how often is the measurement done,
- is the measurement process automated or manual,
- how is the result reported,
- how are individual measurements rolled up into more comprehensive scores,
- what are the benchmark values,
- what are the critical thresholds,
- who is notified of a low score,
- how are issues forwarded into the issues tracking system.
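
For what it's worth, here is one way these basics might hang together as a model. This is a minimal sketch in Python, with attribute names I made up for illustration; it is not drawn from any vendor's metadata model, and the "rolling up" question is only hinted at by a weight attribute.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class PerformanceMetric:
    """One way to capture the 'metric basics' as a single definition.

    All attribute names are illustrative, not taken from any BI tool.
    """
    name: str
    stakeholders: List[str]                 # who cares about the result
    objective: str                          # the performance objective
    measured_object: str                    # what is being measured
    unit_of_measure: str                    # e.g., "percent", "count", "hours"
    measure: Callable[[], float]            # how the measurement is performed
    frequency: str                          # how often, e.g., "daily"
    automated: bool                         # automated or manual collection
    reporting_channel: str                  # how the result is reported
    rollup_weight: float = 1.0              # contribution to a composite score
    benchmark: Optional[float] = None       # expected/target value
    critical_threshold: Optional[float] = None   # alert below this value
    notify_on_low_score: List[str] = field(default_factory=list)
    issue_tracker_queue: Optional[str] = None    # where issues get forwarded

    def evaluate(self) -> dict:
        """Take one measurement and flag it if it breaches the threshold."""
        value = self.measure()
        breached = (self.critical_threshold is not None
                    and value < self.critical_threshold)
        return {
            "metric": self.name,
            "value": value,
            "unit": self.unit_of_measure,
            "breached": breached,
            "notify": self.notify_on_low_score if breached else [],
            "forward_to": self.issue_tracker_queue if breached else None,
        }

# Hypothetical example: completeness of customer postal codes, measured daily
postal_code_completeness = PerformanceMetric(
    name="customer postal code completeness",
    stakeholders=["data steward", "marketing"],
    objective="improve deliverability of direct mail",
    measured_object="customer address records",
    unit_of_measure="percent",
    measure=lambda: 94.2,          # stand-in for a real profiling query
    frequency="daily",
    automated=True,
    reporting_channel="data quality scorecard",
    benchmark=98.0,
    critical_threshold=95.0,
    notify_on_low_score=["data steward"],
    issue_tracker_queue="DQ-ISSUES",
)
print(postal_code_completeness.evaluate())
```

A composite score would then be some weighted combination of evaluate() results using rollup_weight, which is exactly the part I would expect the tool vendors to have modeled already.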

Any other suggestions?


Posted July 31, 2007 7:23 AM

My company has been involved in a lot of data governance work recently. Two of the main drivers are regulatory compliance and consistency in reporting (which often rolls back to compliance). Interestingly, in some of the client industries, fraud detection seems to be an additional driver. This is a little curious to me. On the one hand, fraud detection fits into the compliance framework: looking for non-conformance to business policies. In both cases, we essentially identify critical policies, define rules that indicate conformance to those policies, and generate alerts when those policies are violated.
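
To make that shared mechanism concrete, here is a minimal sketch (Python, with a hypothetical purchasing-limit policy and record layout of my own) of the pattern both cases follow: a policy paired with a conformance rule, and an alert generated whenever the rule is violated. Whether the records describe our own processes or someone else's transactions is what separates the compliance case from the fraud case.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Policy:
    """A critical business policy plus a rule that tests conformance."""
    name: str
    conforms: Callable[[dict], bool]   # True when the record conforms

def check(records: List[dict], policies: List[Policy]) -> List[str]:
    """Generate an alert for every policy a record violates."""
    alerts = []
    for record in records:
        for policy in policies:
            if not policy.conforms(record):
                alerts.append(f"{policy.name} violated by record {record.get('id')}")
    return alerts

# Hypothetical example: a single-purchase limit applied to transaction records
policies = [Policy("single-purchase limit", lambda r: r["amount"] <= 10000)]
print(check([{"id": 17, "amount": 25000}], policies))
```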

The difference is that compliance is introspective while fraud detection is outward-looking. Compliance guards your own behavior, checking how well the organization is living up to everyone else's expectations; fraud detection looks outward, seeking to figure out how your own rules are being transgressed by others.

I can imagine another significant difference: fraud is committed deliberately, with the perpetrators intentionally trying to avoid detection. Compliance issues may be intentional, but control processes certainly also target inadvertent non-compliance.

This raises a different business challenge: it may be possible that there are corporate business policies that conflict with externally imposed regulations. If so, does the issue of compliance change from self-policing to weighing the risk of non-compliance against the risk of getting caught? And if the latter is the case, it suggests that internal governance programs are "window dressing," especially when the real (i.e., intentional) transgressions are going to be well hidden.


Posted May 13, 2007 5:46 PM