
Data quality from discovery tools: Do you have a false sense of security?

Michael Nicholas

Do you trust the data quality in your company's CMDB? Would you bet your job on it? Would you bet the future of your company on it? You may be unknowingly placing those bets every day, and if you knew the real quality of the data you're dealing with, you'd likely be surprised. The data in your CMDB is one of the most important sets of data your IT department manages. It is the authoritative inventory of the technology assets your company manages and uses, along with the web of dependencies describing how those assets relate to each other and to your business operations.


This data is used for calculating taxes, ensuring security patches are up to date, determining how many software licenses to pay for, and as the basis for your company's business continuity planning. If the data is wrong, it could open your company up to tax audit liabilities, costly and embarrassing security breaches, paying for more licenses than you need, and being unable to recover effectively in the event of a disaster. Those are big risks with significant consequences, so it is worth asking the question again: do you trust the data in your company's CMDB, and would you bet your job and your company's future on it?


ITSM and discovery tool vendors would have you believe that the quality of data in your CMDB is pretty good, and that if you are concerned, you can simply use discovery tools to bring in more data and make it even better. But is it good enough? Many companies are being lulled into a false sense of security by the claims of discovery tool vendors, yet experience has shown that CMDB data imported from discovery tools and other source systems can have inaccuracy rates as high as 30%, including obsolete records, duplicates, conflicting records, missing data and incorrect relationships. Hopefully that 30% isn't the critical data you need; otherwise your bet just got a lot riskier.
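To make that risk concrete, here is a minimal sketch of how you might measure an import's error rate yourself instead of taking a vendor's word for it. The field names and the 90-day staleness threshold are illustrative assumptions, not any particular tool's schema.

from datetime import datetime, timedelta

def error_rate(records, now, stale_after=timedelta(days=90)):
    """Fraction of records that are duplicates (repeated serial numbers)
    or stale (not observed within the staleness window)."""
    seen, flagged = set(), 0
    for rec in records:
        is_duplicate = rec["serial"] in seen
        is_stale = now - rec["last_seen"] > stale_after
        if is_duplicate or is_stale:
            flagged += 1
        seen.add(rec["serial"])
    return flagged / len(records) if records else 0.0

# Hypothetical discovery feed: one duplicate and one stale record.
feed = [
    {"serial": "X1", "last_seen": datetime(2024, 5, 2)},
    {"serial": "X1", "last_seen": datetime(2024, 5, 2)},   # duplicate
    {"serial": "X2", "last_seen": datetime(2023, 11, 1)},  # stale
    {"serial": "X3", "last_seen": datetime(2024, 4, 20)},
]
print(error_rate(feed, now=datetime(2024, 5, 10)))  # 0.5

Run against a real export, a number like this tells you whether your own feed is closer to 3% or 30% before you bet anything on it.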


The key to data quality is applying the right set of data validation, correlation and reconciliation rules to the data coming into your CMDB from the various source systems and discovery tools. Each input source may do its best to clean up its individual data stream, but what your company cares about is the big picture. Each data stream provides one piece of that picture, and unfortunately the pieces you collect won't fit together perfectly. There will be gaps, overlaps, and data from different points in time that are each "correct" on their own but create problems when you bring them together.
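As a rough illustration of what correlation and reconciliation can look like, here is a minimal Python sketch. The correlation key (hardware serial number), the field names, and the "most recent observation wins" rule are all illustrative assumptions, not any specific product's logic.

from datetime import datetime

def reconcile(records_by_source):
    """Merge per-source record lists into one view keyed on serial number,
    preferring the most recently observed record and flagging conflicts."""
    merged, conflicts = {}, []
    for source, records in records_by_source.items():
        for rec in records:
            key = rec["serial"]              # correlation key
            existing = merged.get(key)
            if existing is None:
                merged[key] = {**rec, "source": source}
                continue
            # Conflict: two sources report different hostnames for one asset.
            if existing["hostname"] != rec["hostname"]:
                conflicts.append((key, existing["source"], source))
            # Reconciliation rule: trust the most recent observation.
            if rec["last_seen"] > existing["last_seen"]:
                merged[key] = {**rec, "source": source}
    return merged, conflicts

# Two hypothetical feeds that disagree about the same asset.
agent = [{"serial": "X1", "hostname": "db01", "last_seen": datetime(2024, 5, 2)}]
netscan = [{"serial": "X1", "hostname": "db01-old", "last_seen": datetime(2024, 4, 1)}]
merged, conflicts = reconcile({"agent": agent, "netscan": netscan})
print(merged["X1"]["hostname"])  # db01 -- the newer record wins
print(conflicts)                 # [('X1', 'agent', 'netscan')]

The point is not this particular rule; it is that some explicit, auditable rule has to sit between the feeds and the CMDB, because no individual source can resolve disagreements it cannot see.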


To solve these problems, companies need to apply a set of data quality capabilities on top of the input streams that fix and reconcile the data as it enters the CMDB. These capabilities will not only improve the quality of the data you have but also highlight areas where the data cannot be fixed (for example, where something isn't being collected at all), so you can focus your data improvement efforts where they will have the greatest impact.
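One way to sketch the "highlight what cannot be fixed" idea: audit the reconciled records against a required-attribute list and report the biggest gaps first. The attribute names below are illustrative assumptions.

from collections import Counter

REQUIRED = ("serial", "hostname", "owner", "os_version")

def audit(merged):
    """Count, per required attribute, how many records have no value,
    so improvement effort can target the largest gaps first."""
    gaps = Counter()
    for rec in merged.values():
        for attr in REQUIRED:
            if not rec.get(attr):
                gaps[attr] += 1
    return gaps

# Hypothetical reconciled CMDB view with a few missing values.
merged = {
    "X1": {"serial": "X1", "hostname": "db01", "owner": None, "os_version": "22.04"},
    "X2": {"serial": "X2", "hostname": "", "owner": "dba-team", "os_version": ""},
}
for attr, count in audit(merged).most_common():
    print(f"{attr}: missing in {count} records")

An attribute that is missing everywhere is a collection gap that no amount of in-CMDB cleanup will close; that is exactly where to aim the improvement effort.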


Would you rather bet your company's future on 70% quality data, or on something a lot higher? Don't let the discovery tool vendors lull you into a false sense of security; it's time to take action.

