
Anyone reading this would probably agree that nearly all enterprises are in an endless struggle to maintain their operational data. This is not new; it has been an ongoing challenge since the first companies began to operate at scale. In the very early days there were people whose actual job title was “Computer” (that is, one who computes). Even then, employees were a key element of data entry and correction, and they did an acceptable job of it.

 

Of course, that was then and this is now. The problem is that as data volumes increased, employees were no longer able to keep up with the deluge, and accuracy (that is, quality) began to suffer. The problem continues to grow, and there is realistically no end in sight. To survive (or thrive, which would be better), organizations need to develop new methodologies that let them stay ahead of the work of maintaining quality data, and as we see in the press nearly every day, that isn’t happening as often as it should.

 

Organizations have always needed high-quality, accurate data, whether it was delivered in the 1800s by a “Computer” named Sears Cook Walker (the first person to actually hold that title) or whether it’s delivered now by a cloud-based AWS server farm. An interesting side effect is that advances in technology have not only brought more capabilities, they’ve brought a steady increase in the ability to measure and track nearly every detail of an operation. With measurement capability and data volume growing in parallel, the process is no longer manageable by humans (sorry, Sears). Even with blindingly fast machines doing the work, there are growing gaps in what can be validated, and the pressure to process more data faster only increases the likelihood of mistakes. Mistakes that become endemic to business processes are unacceptable for an organization relying on accurate operational data to make important business decisions.

 

The good news is that the continued advancement of automation offers opportunities to address the issues of volume, speed, and accuracy. As with any technology (or anything, really), the right tool must be selected and then implemented properly to realize the benefits. Automation also offers process consistency, which under the current workload is well beyond what humans can manage. As a side benefit, automation tools address resource issues that arise when individuals can’t, or don’t want to, perform mundane and repetitive data quality tasks.
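To make the point concrete, here is a minimal sketch in Python of the kind of repetitive validation task automation can apply identically to every record, every time. The field names, sample records, and rules are hypothetical, chosen purely for illustration; a real implementation would use whatever rules and tooling fit your data.

```python
import re

# Hypothetical example records; field names and rules are assumptions for illustration.
records = [
    {"id": "1001", "email": "pat@example.com", "order_total": "125.50"},
    {"id": "1002", "email": "sam@@example", "order_total": "89.99"},
    {"id": "1003", "email": "lee@example.com", "order_total": "-10"},
]

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate(record):
    """Apply the same rules to every record, with no fatigue or drift."""
    problems = []
    if not EMAIL_RE.match(record.get("email", "")):
        problems.append("invalid email")
    try:
        if float(record.get("order_total", "")) < 0:
            problems.append("negative order total")
    except ValueError:
        problems.append("non-numeric order total")
    return problems

for rec in records:
    issues = validate(rec)
    if issues:
        print(f"record {rec['id']}: {', '.join(issues)}")
```

The value isn’t in the rules themselves, which are trivial; it’s that they run the same way on the millionth record as on the first.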

 

People want to feel that they are contributing to the growth and success of the organization. Entering and correcting data is simply a task most employees would rather not do on a regular basis, and it’s a poor use of resource dollars. Automation technologies can help by taking over these lower-end tasks and performing them better, faster, and more accurately.

 

Automation technologies are still maturing, but many already offer capabilities that are unrealistic to expect from individuals working manually. Even the simplest aggregation and normalization of data from two sources is an immense effort for a person. For automation technology, however, this is just a starting point, and better still, it can run 24/7 with consistent results every time. The benefit grows considerably when you consider that aggregation typically spans far more than two data sources, something no individual can realistically perform on a regular basis, and certainly not with acceptable speed, consistency, or quality.
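As a rough illustration, here is a minimal Python sketch, using made-up source feeds and field names, of what “aggregate and normalize two sources” involves even in the simplest case: reconciling field names, date formats, and number formats before anything can be combined.

```python
from datetime import datetime

# Two hypothetical source feeds; field names, date formats, and values are
# assumptions made purely for illustration.
source_a = [
    {"Customer": " Acme Corp ", "OrderDate": "03/15/2024", "Amount": "1,250.00"},
]
source_b = [
    {"customer_name": "acme corp", "order_date": "2024-03-18", "total": "980.5"},
]

def normalize_a(row):
    # Source A uses US-style dates and thousands separators.
    return {
        "customer": row["Customer"].strip().lower(),
        "order_date": datetime.strptime(row["OrderDate"], "%m/%d/%Y").date(),
        "amount": float(row["Amount"].replace(",", "")),
    }

def normalize_b(row):
    # Source B uses ISO dates and plain decimals.
    return {
        "customer": row["customer_name"].strip().lower(),
        "order_date": datetime.strptime(row["order_date"], "%Y-%m-%d").date(),
        "amount": float(row["total"]),
    }

# Aggregate into one consistent record set, then total spend per customer.
combined = [normalize_a(r) for r in source_a] + [normalize_b(r) for r in source_b]

totals = {}
for rec in combined:
    totals[rec["customer"]] = totals.get(rec["customer"], 0.0) + rec["amount"]

print(totals)  # {'acme corp': 2230.5}
```

Every additional source just means another small normalize step, which is exactly the kind of growth a machine absorbs easily and a person doesn’t.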

 

Companies will continue to struggle with the aggregation and normalization of data, which is why they need to look more closely at automation tools that deliver higher quality, consistently and faster. They also gain the added benefit of relieving employees of the mundane tasks, freeing those resources to analyze and assess the results rather than collect and sort the raw data.

 

To thrive, enterprises need to take a step back and look at the longer-term patterns that are driving their operational decisions. Anytime there is a disconnect, anytime something happens that makes you want to hit the pause button, chances are there is an underlying data issue; more specifically, an underlying data quality issue. The fact that we live in a fully connected, mobile, social, cloud-based environment should be seen as the massive opportunity it is, but that opportunity comes with data issues that need to be addressed head-on. Want to get ahead of the data quality curve? Click here for more information.