
During the past few years, automated discovery capabilities have greatly expanded the capacity and efficiency of IT operations management and service management functions. Highly manual processes for inventorying and classifying IT assets and solution components have been replaced with capabilities that identify changes to the IT ecosystem and update CMDB records in near-real time. The data that discovery tools provide enables IT staff to make better operational decisions and tune the IT environment with far greater precision than was previously possible.

There is no question that discovery capabilities will continue to be a key enabler of IT for the foreseeable future, but as technology and business continue to evolve, companies should beware of relying on their existing discovery capabilities alone. Doing so is likely to give leaders a false sense of confidence in the information they are viewing and the decisions they make. To understand this risk, one must look at what discovery capabilities are (and are not) and how they fit into the broader ITSM picture.

Discovery tools use pre-programmed rules to identify components in the IT environment that are of known types or that exhibit a certain set of characteristics. Most discovery tools specialize in a particular type of technology, and many are vendor- or manufacturer-specific. Most mid- to large-size companies use 5–15 different discovery tools to provide adequate coverage of the technology in their environment (different tools for network devices, desktop software inventory, data center device configurations, software licenses, etc.). The data these tools collect is typically recorded as configuration items in the CMDB within an ITSM system, where the discovered information is managed, stored and used as the basis for reporting.
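
To make the mechanics concrete, the sketch below shows, in simplified and hypothetical form, how a rule-based discovery tool might classify a scanned component and shape it into a configuration item record. The rule set, attribute names and source label are illustrative assumptions, not any particular vendor's schema.

```python
from datetime import datetime, timezone

# Hypothetical discovery rules: each rule maps observed attributes to a CI class.
# Real tools ship with thousands of vendor- and technology-specific patterns.
DISCOVERY_RULES = [
    {"match": {"snmp_vendor": "cisco", "device_type": "switch"}, "ci_class": "network_switch"},
    {"match": {"os_family": "windows", "role": "workstation"}, "ci_class": "windows_desktop"},
    {"match": {"os_family": "linux", "listening_port": 5432}, "ci_class": "postgresql_server"},
]

def classify(observed: dict) -> str:
    """Return the CI class of a scanned component, or 'unclassified' if no rule matches."""
    for rule in DISCOVERY_RULES:
        if all(observed.get(key) == value for key, value in rule["match"].items()):
            return rule["ci_class"]
    return "unclassified"

def to_configuration_item(observed: dict, source: str) -> dict:
    """Shape a raw scan result into a CMDB-style configuration item record."""
    return {
        "name": observed.get("hostname", "unknown"),
        "ci_class": classify(observed),
        "attributes": observed,
        "source": source,  # which discovery tool reported this component
        "last_discovered": datetime.now(timezone.utc).isoformat(),
    }

# Example: a scanned switch is classified and recorded as a 'network_switch' CI.
scan = {"hostname": "core-sw-01", "snmp_vendor": "cisco", "device_type": "switch"}
print(to_configuration_item(scan, source="network_scanner")["ci_class"])
```

Each of the 5–15 tools in a typical environment applies its own version of this logic, which is one reason the same asset can end up represented several times in the CMDB.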

The issue with discovery tools is their focus: discovery, and only discovery. They do not check the data they collect for accuracy, context, freshness or duplication. Because the information entering the system through discovery is not validated prior to entry, it creates a textbook garbage-in, garbage-out scenario.
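
To illustrate the kind of validation a raw discovery feed skips, here is a small hypothetical sketch of checks a CMDB import could run on incoming records before loading them. The required fields and record layout are assumptions made for the example.

```python
# Hypothetical checks a CMDB import could apply to incoming CI records.
# A raw discovery feed loads records as-is, without any of these tests.
REQUIRED_FIELDS = ("name", "ci_class", "serial_number")

def quality_issues(record: dict, seen_serials: set) -> list:
    """Return the data-quality problems found in a single incoming CI record."""
    issues = [f"missing {field}" for field in REQUIRED_FIELDS if not record.get(field)]
    serial = record.get("serial_number")
    if serial in seen_serials:
        issues.append(f"duplicate serial number {serial}")  # same asset reported twice
    elif serial:
        seen_serials.add(serial)
    return issues

# Example: the second record is missing its class and duplicates the first asset.
feed = [
    {"name": "core-sw-01", "ci_class": "network_switch", "serial_number": "FTX123"},
    {"name": "core-sw-1", "ci_class": "", "serial_number": "FTX123"},
]
seen = set()
for record in feed:
    print(record["name"], quality_issues(record, seen))
```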

An additional issue is the way data from different tools is integrated in the CMDB. For a typical company relying on existing discovery capabilities, almost one-third of the data in the CMDB is inaccurate, incomplete, redundant or missing – meaning the overall picture of the IT environment that is created (and used for decision making) has many gaps.

Furthermore, discovery tools only touch the assets they “discover” on a scheduled basis (once a week, once a month, etc.). Any change that occurs in the interval between scans goes unreported, so the corresponding CMDB record silently becomes inaccurate and can trigger all sorts of negative downstream effects.
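
The blind spot between scans is easy to quantify. The hypothetical sketch below flags CMDB records whose last discovery timestamp falls outside the scan interval; the field names and the weekly interval are assumptions for illustration.

```python
from datetime import datetime, timedelta, timezone

SCAN_INTERVAL = timedelta(days=7)  # assumed weekly discovery schedule

def stale_records(cmdb_records: list, now: datetime) -> list:
    """Return CIs not seen within one scan interval; their attributes may no longer be true."""
    return [
        record for record in cmdb_records
        if now - datetime.fromisoformat(record["last_discovered"]) > SCAN_INTERVAL
    ]

# Example: db-01 was last seen twelve days ago, so any change made since then
# (a patch, a decommission, a new IP address) is invisible to the CMDB.
records = [
    {"name": "app-01", "last_discovered": "2024-05-20T08:00:00+00:00"},
    {"name": "db-01", "last_discovered": "2024-05-12T08:00:00+00:00"},
]
now = datetime(2024, 5, 24, tzinfo=timezone.utc)
print([record["name"] for record in stale_records(records, now)])  # ['db-01']
```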

While 70% accuracy may be adequate to provide a general sense of the landscape, it is not sufficient to satisfy many modern IT needs, and it is certainly not sufficient to form the basis for sound business decisions. Improving CMDB data quality by 10–15% can make an enormous difference – the difference between passing and failing a compliance audit, between repelling a hacking attempt and succumbing to it, between correctly stating tax liabilities and underpaying them, between right-sizing support contracts and overspending on them. The list of downstream benefits is long and growing.

Many IT vendors will propose solving the data quality problem by adding more tools to the mix – more data to fill the holes. The problem with this approach is that additional discovery tools may help with missing data, but they are likely to make redundancy and conflict issues worse and so perpetuate the overall problem. Meanwhile, IT leaders are led to believe that data quality is improving, further increasing their confidence in the decisions being made – a scary thought, considering the potential business impact of a mistake. Sadly, the press provides examples of this almost every day.

There is a way to leverage existing investments in discovery tools. Leading companies have begun adopting a complementary approach to discovery and CMDB data quality: they add a layer of data quality management capabilities on top of their discovery tools to improve the way data is integrated and reconciled across all sources – removing duplication, resolving conflicts and filling gaps. This both improves the overall quality of data in the CMDB and provides a clearer understanding of where problems remain, so IT staff can work to resolve them and decision makers have a justified reason to be more confident in their decisions.
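
As a rough illustration of what such a reconciliation layer does, the hypothetical sketch below merges CI records from two discovery sources: it deduplicates by serial number, lets the most recently discovered value win attribute conflicts and fills gaps from whichever source has the data. The field names and precedence rule are assumptions for the example, not a description of any specific product.

```python
from datetime import datetime

def reconcile(records: list) -> dict:
    """Merge CI records from multiple discovery sources into one record per asset."""
    merged = {}
    # Newest observations win conflicts, so process oldest-first and overwrite as we go.
    for record in sorted(records, key=lambda r: datetime.fromisoformat(r["last_discovered"])):
        key = record["serial_number"]              # dedupe key across all sources
        target = merged.setdefault(key, {})
        for field, value in record.items():
            if value not in (None, ""):            # fill gaps; never overwrite data with blanks
                target[field] = value
    return merged

# Example: two tools report the same server; the merged CI takes the newer OS version
# from the agent scan and keeps the location that only the network scan knew about.
feed = [
    {"serial_number": "SRV42", "source": "network_scan", "os": "RHEL 8.8",
     "location": "DC-East", "last_discovered": "2024-05-10T00:00:00"},
    {"serial_number": "SRV42", "source": "agent_scan", "os": "RHEL 8.10",
     "location": "", "last_discovered": "2024-05-23T00:00:00"},
]
print(reconcile(feed)["SRV42"])
```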

Blazent provides a data quality management platform that ingests discovery data and intelligently reconciles it with data from finance and IT management tools. The resulting high-quality data is almost 100% accurate, timely and complete. You can learn more in the Ensuring Data Quality white paper, or contact us directly at sales@blazent.com.