AI, GenAI, Causal AI, and AIOps are the latest IT terms promising higher-quality business services at reduced cost, enabled, of course, by AI automation. If you think about it, these significant improvements have been promised for a couple of decades now. Has anything really changed? And is it worth taking another look?


What has stopped us from realizing the value of observability?

Observability refers to the ability to monitor, measure, and understand the state of a system or application by examining its outputs, logs, and performance metrics. In modern software systems and cloud computing, observability plays an increasingly crucial role in ensuring the reliability, performance, and security of applications and infrastructure.

Now, application performance monitoring (APM) vendors have been promising the value of observability since the technology first emerged in the early 1990s. Vendors and their solutions such as IBM Tivoli and HP OpenView consolidated the market, with many others joining (and leaving) over the years. The market is currently led by Dynatrace, Datadog, and New Relic. Through the years, customers have seen the value of observability, but only in small, isolated pockets. Why is that? And what lessons can we learn?

Establishing key parameters in observability

If we are to extract the full value from investments in observability, here are a few areas that we need to understand and consider:

ITSM integration

A classic example of missing integration is the scenario where the monitoring technology detects an incident, and potentially even identifies the root cause, yet the service desk still works through a manual L1, L2, and L3 support process.
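
The gap above can be closed by turning an observability alert directly into an ITSM incident instead of re-triaging it by hand. The sketch below is illustrative only: the alert structure and severity mapping are assumptions, while the target field names (short_description, cmdb_ci, urgency) follow ServiceNow's standard incident table.

```python
# Hedged sketch: translate a monitoring alert into a ServiceNow incident
# payload so the service desk does not repeat manual L1/L2/L3 triage.
# The alert dictionary shape is hypothetical; the output keys mirror
# standard columns on ServiceNow's incident table.

SEVERITY_TO_URGENCY = {"critical": "1", "major": "2", "minor": "3"}

def alert_to_incident(alert: dict) -> dict:
    """Map an observability alert to a ServiceNow incident payload."""
    return {
        "short_description": f"[{alert['severity'].upper()}] {alert['title']}",
        "description": alert.get("root_cause", "Root cause not yet identified"),
        "cmdb_ci": alert["entity"],  # the affected configuration item
        "urgency": SEVERITY_TO_URGENCY.get(alert["severity"], "3"),
    }

alert = {
    "severity": "critical",
    "title": "Response time degradation on checkout-service",
    "entity": "checkout-service",
    "root_cause": "Connection pool exhaustion on db-node-3",
}
incident = alert_to_incident(alert)
print(incident["short_description"])
# In a real integration this payload would be POSTed, with authentication,
# to the instance's Table API (/api/now/table/incident).
```

In practice the observability platform's native ServiceNow integration would handle this, but the payload shape shows why the manual triage step becomes redundant once the two systems share incident context.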

In my experience, talking to customers about their observability-ITSM integration quite often uncovers a lack of awareness of what is now possible, especially now that ITSM vendors such as ServiceNow are entering the observability market.

Today, it is quite possible to integrate at the CMDB level, using observability data to help populate service maps within tools such as ServiceNow.
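
As a minimal sketch of what CMDB-level integration means, the snippet below flattens topology discovered by an observability platform into configuration items and relationships. The input shape is an assumption for illustration; cmdb_ci_server, cmdb_ci_service, and the "Runs on" relationship type stand in for ServiceNow's standard CI and relationship tables.

```python
# Hedged sketch: convert discovered topology (hosts and the services
# running on them) into CMDB configuration items plus the relationships
# that a ServiceNow service map is built from.
# Table and relationship names are illustrative placeholders.

def topology_to_cmdb(topology: dict) -> tuple[list, list]:
    """Flatten a topology snapshot into CI records and CI relationships."""
    cis, rels = [], []
    for host in topology["hosts"]:
        cis.append({"table": "cmdb_ci_server", "name": host["name"]})
        for service in host.get("services", []):
            cis.append({"table": "cmdb_ci_service", "name": service})
            # Each relationship record links a service CI to its host CI.
            rels.append({"parent": service, "type": "Runs on", "child": host["name"]})
    return cis, rels

topology = {"hosts": [{"name": "web-01", "services": ["checkout-service"]}]}
cis, rels = topology_to_cmdb(topology)
```

The design point is that the observability platform, not a human, keeps the service map current: every discovery cycle regenerates the CI and relationship records, so the CMDB reflects what is actually running.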

Lifecycle adoption

The typical use case for observability is to monitor and accelerate fixes for business services running in production environments.

Given that an incident identified before entering production typically costs around a tenth as much to fix, wouldn't it be sensible to deploy observability across development, testing, and production? The answer is obviously yes, but it's not quite that simple, especially when different partners deliver infrastructure, applications, and network services, each with their own tools.

The first step here is to deploy the technology as standard across environments. This is less a technical challenge and much more one of organizational change and technology adoption. From a technology perspective, deploying the observability vendor's infrastructure/PaaS agent as standard is a cost-effective first step. It then allows migration to full-stack capability, where the full value is delivered, with much less friction.

Another historic barrier to wider observability adoption has been prohibitive cost. Recent changes to observability vendors' licensing allow a degree of consumption-based charging, making wider adoption more feasible from a cost perspective.

Access to trained staff

According to Melissa Davis, VP Analyst at Gartner, “While the skills and capabilities of AI are concentrated to highly technical roles, the status of AI is rapidly changing as industry executives begin to realize the importance of a workforce knowledgeable in data, analytics and AI. To build such a workforce, organizations require data literacy and AI literacy as core competencies.”

Access to staff with the necessary observability and AIOps skills can be a challenge. Retaining them is an even bigger one.

Using a partner with years of experience delivering solutions across multiple vendors makes more sense here. Their expertise in solution design and vendor management will be key. I have often heard customers ask their key vendors to work together for the common good. Without an experienced partner managing this, customers can be subject to pressure based on the vendors' view of success, which may differ from the customer's.

Managed services partner

A managed service partner who can provide access to pre-built observability and AIOps capabilities is the best course of action, ideally one that has gone through the challenge of internally transforming both its own toolset and the associated processes.


Observability and AIOps in action

At Eviden, we have done just that.

"We are super excited to offer an enhanced Cloud Ops experience as standard, powered by premier Observability and Service Management technology stacks from Dynatrace and ServiceNow."

Simon Withers | Eviden Cloud - Head of Portfolio and Engineering

The Eviden Digital Performance Management practice, with 12 years of observability experience, has worked with Eviden Cloud to implement observability and AIOps as standard, using the Dynatrace platform integrated with ServiceNow.

We are looking forward to the first phase where initial Cloud customers will be able to migrate from Splunk-based observability/log management to the new platform. This is slated to take place in July 2024.

All new Eviden Cloud customers will benefit from Dynatrace-based infrastructure/PaaS monitoring, with a simple upgrade path to consumption-based full-stack observability, including DevSecOps integration through to SRE and business process monitoring capabilities.

  • Connect with me to learn more about how you can implement observability to accelerate your IT transformation journey and maximize the benefits.
  • Read more about our client stories and demonstrated capabilities.