
Over the next five years, machine learning is poised to play a pivotal, transformational role in how IT infrastructure is managed. Two key scenarios stand out: transforming infrastructure from a set of under-utilized capital assets into a highly efficient set of operational resources through dynamic, consumption-driven provisioning; and identifying configurations, dependencies, and the cause-and-effect relationships in usage patterns through correlation analysis.


In the world of IT infrastructure, it’s all about efficient use of resources. With on-premises infrastructure (compute, storage, and network) utilization rates for most organizations in the low single digits, the cloud has sold the promise of a breakthrough. For organizations moving to Infrastructure as a Service (IaaS), utilization in the mid-to-high teens is possible, and for those moving to Platform as a Service (PaaS), utilization in the mid-twenties is within reach. That said, where is the promised breakthrough? It will not come from cloud adoption alone, but from applying machine learning to dynamically provision the right scale and type of resources at the moment they are needed.


Dynamic provisioning driven by demand follows the same operational concept as power grids and municipal water systems: capacity allocation driven by where resources are consumed rather than where they are produced, made possible by a near-frictionless resource allocation and transport infrastructure. When a user expresses demand for an IT service, the resources needed to provide that service are dynamically provisioned from an available pool of capacity to fulfill the demand in real time. When the resources are no longer needed, they are returned to the pool for provisioning elsewhere. Infrastructure capacity that today sits reserved and idle will effectively disappear, because capacity will only be allocated when it is needed.
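The pool-based allocation cycle described above can be sketched in a few lines. This is a minimal illustration, not any vendor's provisioning API: the `ResourcePool` class and service names are hypothetical, standing in for whatever orchestration layer actually manages the capacity pool.

```python
# Minimal sketch of demand-driven provisioning: a shared capacity pool
# hands out units when a service expresses demand and reclaims them on
# release, so no capacity sits reserved while idle.
class ResourcePool:
    def __init__(self, total_units):
        self.total = total_units
        self.allocated = {}          # service name -> units in use

    def available(self):
        return self.total - sum(self.allocated.values())

    def provision(self, service, units):
        """Allocate units from the pool at the moment of demand."""
        if units > self.available():
            raise RuntimeError(f"pool exhausted: need {units}, have {self.available()}")
        self.allocated[service] = self.allocated.get(service, 0) + units
        return units

    def release(self, service):
        """Return a service's resources to the pool for reuse elsewhere."""
        return self.allocated.pop(service, 0)

pool = ResourcePool(total_units=100)
pool.provision("web-frontend", 30)   # demand appears -> capacity allocated
pool.provision("batch-report", 50)
print(pool.available())              # 20 units remain for new demand
pool.release("batch-report")         # demand gone -> capacity reclaimed
print(pool.available())              # 70 units back in the pool
```

The point of the sketch is the lifecycle: allocation happens only at the moment of demand, and release makes the same units immediately available to any other service.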


The second part of the breakthrough relates to right-sizing infrastructure. Whether the resource is network capacity or virtual machine size, machine learning will enable analysis of user behavior patterns and correlate them with the consumption of infrastructure resources. Eventually, the benefit of machine learning in this scenario will be predictive analysis of infrastructure needs, anticipating demand to deliver more efficient resource allocation.
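As a toy illustration of predictive right-sizing, the sketch below forecasts near-term CPU demand from a trailing average of observed usage and picks the smallest VM size that covers the forecast plus headroom. The size table, the 20% headroom, and the trailing-average forecast are illustrative assumptions, far simpler than a production machine learning model, not a Blazent or cloud-provider method.

```python
# Assumed size catalog (vCPUs per size) -- illustrative only.
VM_SIZES = {"small": 2, "medium": 4, "large": 8}

def forecast_demand(samples, window=3):
    """Trailing-average forecast of the next usage sample (in vCPUs)."""
    recent = samples[-window:]
    return sum(recent) / len(recent)

def right_size(samples, headroom=0.2):
    """Choose the smallest VM whose capacity covers forecast + headroom."""
    needed = forecast_demand(samples) * (1 + headroom)
    for name, vcpus in sorted(VM_SIZES.items(), key=lambda kv: kv[1]):
        if vcpus >= needed:
            return name
    return max(VM_SIZES, key=VM_SIZES.get)   # fall back to the largest size

usage = [1.5, 1.8, 2.1, 2.4, 2.6]            # observed vCPU usage over time
print(right_size(usage))                     # forecast ~2.37 vCPU -> "medium"
```

A real system would replace the trailing average with a model trained on correlated behavior patterns, but the decision step is the same: forecast consumption, then allocate the smallest resource that satisfies it.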


In the near term, these benefits will be more tactical. Automated discovery combined with behavioral correlation analysis will virtually eliminate the need for manual inventory and mapping of components and configuration items in the IT ecosystem, revealing how the ecosystem is actually operating. Some manual activity may still be needed to articulate how the ecosystem was designed to function, but this can be done with a declarative approach that describes the desired behavior.
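The declarative approach can be sketched as data plus a drift check: the designed behavior of the ecosystem is declared as a desired state, and automated discovery supplies the observed state to compare against. The component names and attributes below are hypothetical, chosen only to show the pattern.

```python
# Desired state: how the ecosystem was designed to function, declared as data.
desired = {
    "web-server": {"instances": 4, "version": "2.4"},
    "db-cluster": {"instances": 2, "version": "12.1"},
}

# Discovered state: what automated discovery actually found running.
discovered = {
    "web-server": {"instances": 3, "version": "2.4"},   # one instance missing
    "db-cluster": {"instances": 2, "version": "12.1"},
}

def drift(desired, discovered):
    """Report every component attribute where reality differs from intent."""
    issues = []
    for component, spec in desired.items():
        found = discovered.get(component, {})
        for attr, want in spec.items():
            have = found.get(attr)
            if have != want:
                issues.append((component, attr, want, have))
    return issues

for component, attr, want, have in drift(desired, discovered):
    print(f"{component}: {attr} should be {want}, found {have}")
```

The declaration describes only the desired end state; the discovery and correlation machinery, not a human inventory, does the work of determining whether the ecosystem matches it.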


Today, IT has the opportunity to automate the mapping of components in its infrastructure to provide a more accurate and actionable picture. Blazent’s Data-powered IT Service Management white paper, linked here, illustrates how an organization can maintain an accurate picture of service components to improve the effectiveness of IT Service Management (ITSM) processes. If you have immediate questions, please contact us at 510 851 6073.