Customer Service

Incident Resolution Recommendation

IT Service Management (ITSM) deals with the efficient resolution of IT issues and requests while ensuring that incidents and business interruptions are prevented or kept to a minimum. It is also a common approach to creating, supporting, and managing IT services. One of the core practices of ITSM is incident management. At the heart of incident management lies a ticketing system that generates and stores substantial amounts of prioritization and subject-matter classification data, i.e., training and testing data.

ITSM optimization can benefit from AI/ML by proactively and rapidly identifying patterns across open incidents and routing them to the appropriate resolution groups. Actions and content can be recommended to help service desk agents solve issues faster, enhance search capabilities, look up additional assistance, and improve knowledge management. For example, users can search for solutions and resolve incidents through virtual assistants such as chatbots or tools similar to Google Assistant. With the help of MLOps and past incidents, models can be trained to scan incoming tickets and provide users with automatic solutions. If required, the service desk can automatically send any relevant knowledge base articles that might help the user. Supervised ML solutions, which use historical data to automatically classify tasks and incidents at scale, can help categorize, prioritize, and triage issues with greater ease, reducing manual work and errors. Unsupervised ML solutions, which identify patterns through continuous segmenting and grouping of similar items, can improve cluster analysis opportunities.
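As an illustration of the supervised approach described above, the sketch below trains a minimal multinomial Naive Bayes classifier on a handful of hypothetical historical tickets and routes a new ticket to a resolver group. The ticket texts, group names, and whitespace tokenizer are invented for illustration only; a production system would use a richer NLP pipeline and far more data.

```python
import math
from collections import Counter, defaultdict

# Hypothetical historical tickets: (text, resolver group) pairs.
# In practice these would come from the ITSM ticketing system.
HISTORY = [
    ("cannot connect to vpn from home office", "Network"),
    ("vpn tunnel drops every few minutes", "Network"),
    ("outlook password reset required", "Identity"),
    ("account locked after failed logins", "Identity"),
    ("laptop screen flickers after update", "Hardware"),
    ("docking station not detecting monitor", "Hardware"),
]

def tokenize(text):
    # Naive whitespace tokenizer; real pipelines would normalize further.
    return text.lower().split()

class NaiveBayesRouter:
    """Minimal multinomial Naive Bayes for ticket triage."""

    def __init__(self, tickets):
        self.group_counts = Counter()
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for text, group in tickets:
            self.group_counts[group] += 1
            for tok in tokenize(text):
                self.word_counts[group][tok] += 1
                self.vocab.add(tok)
        self.total = sum(self.group_counts.values())

    def route(self, text):
        """Return the most likely resolver group for a new ticket."""
        best_group, best_score = None, float("-inf")
        v = len(self.vocab)
        for group, count in self.group_counts.items():
            # Log prior plus log likelihood with Laplace smoothing.
            score = math.log(count / self.total)
            denom = sum(self.word_counts[group].values()) + v
            for tok in tokenize(text):
                score += math.log((self.word_counts[group][tok] + 1) / denom)
            if score > best_score:
                best_group, best_score = group, score
        return best_group

router = NaiveBayesRouter(HISTORY)
print(router.route("vpn connection keeps dropping"))  # Network
```

Even this toy model shows the mechanism: tickets sharing vocabulary with past incidents of a group score higher for that group, which is the basis for automated categorization and triage at scale.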

By combining the power of our flagship framework with a decade of experience, we ensure that AI projects are delivered in a timely and robust manner. Organizations achieve a successful AI transformation journey with our systematic approach to enterprise AI and MLOps solutions.

Our approach to an enterprise MLOps journey begins with robust data intake and elaborate analysis to frame the problem areas. The second part includes applying an engineering mindset, identifying the enablers required for AI initiatives to succeed at enterprise scale, and preparing cognitive models. Finally, these models are deployed and managed throughout the lifecycle of the solution. Throughout the lifecycle, pre-built frameworks are used to push the required transformations ahead. Coupled with robust uptime for the applications we use and a near-zero chance of interruption, this means unwavering support for customers all the way. So, what would typically have taken months to deliver can be developed and deployed in weeks, and most importantly, at a fraction of the cost.


Incident tickets going back and forth can cause an outage: Around 30–40% of tickets risk incorrect triage, and by the time a ticket reaches the right team, the issue might have become an outage.

MLOps-based service desk processing depends on categorization: Inaccurate manual ticket categorization can prevent tickets from being routed to the right support team, delaying the troubleshooting process. This typically results in repeated attempts to reassign tickets and ultimately breaches SLAs, putting business operations at risk.

Data exclusion due to the ‘Other’ category: An ‘Other’ bucket for tickets that don’t resemble anything else is a necessary category. However, it can quickly become overused. During training and inference, the model will attempt to identify common patterns or features across the collection of ‘Other’ incidents; since those incidents vary greatly, there is little the model can leverage, often leading to this data being excluded from training.
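One way to keep an overused ‘Other’ bucket from silently degrading a model is to monitor its share of the corpus and exclude it from the training set. The sketch below is a minimal illustration under assumptions: the 10% threshold, the label names, and the sample tickets are all invented, not recommendations.

```python
from collections import Counter

def audit_other_bucket(tickets, other_label="Other", max_share=0.10):
    """Flag an overused 'Other' category and return a training set
    with those heterogeneous tickets excluded.

    tickets: list of (text, label) pairs.
    max_share: assumed threshold above which 'Other' is considered overused.
    """
    counts = Counter(label for _, label in tickets)
    total = sum(counts.values())
    share = counts[other_label] / total if total else 0.0
    overused = share > max_share
    # Exclude 'Other' tickets, which carry little learnable signal.
    training_set = [(t, l) for t, l in tickets if l != other_label]
    return overused, share, training_set

# Hypothetical sample corpus.
tickets = [
    ("vpn drops", "Network"), ("password reset", "Identity"),
    ("weird noise from fan", "Other"), ("printer jam", "Hardware"),
    ("misc question", "Other"),
]
overused, share, clean = audit_other_bucket(tickets)
```

Flagged ‘Other’ tickets can then be reviewed and relabeled into proper categories rather than discarded outright.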

Lack of adequate training can collapse the system: When introducing new technology such as AI/ML in ITSM, education and awareness are paramount. Any ambiguity among stakeholders, especially service desk agents, can lead to adoption failure and a breakdown of the entire ITSM optimization exercise.

Data quality can affect accuracy and results: AI/ML relies heavily on historical and present data for data modeling to arrive at solutions, predict recommendations, and automate processes. The immense volume of unutilized structured and unstructured data can influence results, as the training process depends heavily on existing data to learn from. Thus, low-quality data or an inconsistent process can drastically reduce initial accuracy and produce poor results.

Cognitive Solutions

  • Customers expect short SLAs and quick responses from the service desk. Customer service efficiency suffers when large volumes of tickets are created and incidents surge.
  • We can create a cognitive recommendation system based on ML, DL, and NLP methods that checks for similar incidents in the past. 
  • For the resolution of logged tickets, we can create a cognitive recommendation engine that can be trained using a corpus of historic tickets. 
  • This engine is able to read tickets enhanced with information from the ticket enrichment module, understand the user intent, suggest the most relevant incident resolver group, and provide possible resolutions.
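A minimal sketch of the retrieval step behind such an engine: represent each historic ticket as a bag of words and return the resolution of the most similar past ticket by cosine similarity. The example tickets and resolutions below are hypothetical, and a real engine would use richer NLP (intent models, the ticket enrichment module described above) rather than raw word counts.

```python
import math
from collections import Counter

# Hypothetical resolved tickets: (description, resolution) pairs.
KNOWLEDGE = [
    ("user cannot connect to vpn", "Reset VPN profile and reissue certificate"),
    ("outlook keeps asking for password", "Clear cached credentials and re-add account"),
    ("shared drive not mounting on startup", "Re-map drive via login script"),
]

def bow(text):
    """Bag-of-words vector as a token-count mapping."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(new_ticket, knowledge=KNOWLEDGE):
    """Return the resolution of the most similar past ticket."""
    q = bow(new_ticket)
    best = max(knowledge, key=lambda item: cosine(q, bow(item[0])))
    return best[1]
```

Calling `recommend("cannot connect to vpn at all")` retrieves the VPN resolution, since that past ticket shares the most vocabulary with the query.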

Solution Approach

Use Case Discovery

We actively engage with our clients to capture the business requirements while observing the problem. Traditional model development methods are lengthy, tedious, often prone to human bias, and lack accuracy in the absence of proper data collection, data models, and data cleansing. We can effectively identify the relevant datasets and formulate a use case-based approach that solves the business problem or improves/predicts actionable insights to mitigate it.

We can assist in setting up the required infrastructure: the framework provides out-of-the-box development platforms, and all functionality can be accessed through a Jupyter Notebook, ensuring zero-delay, plug-and-play availability of high-end hardware. Development images configured from pre-defined templates can be installed on-premises or in a development VM within the infrastructure. This enables authentication using LDAP and seamless project setup using Bitbucket, Jenkins, and Docker (ensuring build and deployment without software compatibility issues). The project can be started seamlessly, with the relevant environments created automatically.

The framework leverages the latest ML and DL tools for preparing models and includes Pachyderm-based data versioning, deployment using a Kubernetes orchestration system, Kubeflow- and Spark-based ML and DL build and deployment, an Istio-based service mesh enabling a microservice architecture, and ELK-based monitoring capability, all contributing to reduced latency.

Data Engineering

The MLOps framework has different data adapters available through a common catalog of services that simplify interoperability and scalability concerns, enable APIs, and abstract all the technical complexities from the service consumer. This allows establishing rapid, inexpensive Alluxio- and Presto-based data connectivity and data collection from diverse sources (in structured, unstructured, and streaming formats) arriving at high velocity and in huge volumes.

All the data sources are funneled into the data storage layer after proper validation and cleansing. The storage landscape, with different storage types and extreme flexibility, is built to manipulate, filter, select, and correlate different data formats.

Infrastructure and MLOps Automation

The collected details, project code, data preparation workflows, and models can be easily versioned in a repository (Bitbucket, Git, etc.), and datasets can be versioned through on-premise/cloud storage. These can be added as exploratory variables by using two excellent features of the platform:

  1. Data Connectivity Marketplace libraries
  2. Data Versioning

The attributes obtained are used for categorization (employing Pachyderm-based data versioning), followed by univariate, bivariate, and Bag of Words analysis for both structured and unstructured datasets through xpresso Exploratory Data Analysis (Data and Statistical Analysis). Different datasets and their versions can be easily controlled and stored in the xpresso Data Model (XDM)-enabled data store, which enables easy retrieval and storage of datasets/files in the internal XDM. MLOps automation allows teams to create pipelines, train with as much data and as much accuracy as possible, achieve the fastest time to inference, and rapidly retrain. xpresso Data Pipeline Management (Rapid Model Training and Experimentation) uses Kubeflow-enabled pipelines, so multiple experiments using different models and datasets can be created, tested, paused, and restarted to gain better insight.
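The Bag of Words analysis mentioned above amounts to aggregating term frequencies across the ticket corpus, which is often a first look at what the unstructured data contains. A minimal sketch with an invented corpus (xpresso Exploratory Data Analysis would provide this, along with univariate and bivariate statistics, out of the box):

```python
from collections import Counter

def bag_of_words(corpus):
    """Aggregate term frequencies across a ticket corpus (Bag of Words)."""
    counts = Counter()
    for doc in corpus:
        counts.update(doc.lower().split())
    return counts

# Hypothetical ticket corpus for illustration.
corpus = ["vpn is down", "vpn timeout again", "printer is offline"]
top = bag_of_words(corpus).most_common(2)  # most frequent terms first
```

Frequent terms (here "vpn" and "is") immediately hint at dominant incident themes and at stop words that should be filtered before modeling.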

We can vastly improve an organization’s ticket resolution turnaround time, reduce the incident lifecycle from hours to seconds by enabling rapid response to incidents, and identify patterns for any incident. By correlating similar events and anticipating incidents that might occur in the future via our analysis and real-time actionable insights, we can provide effective recommendations. This also means that customer satisfaction, process, and team management are enhanced while business impacts caused by delays in service are minimized.

How xpresso.ai can help Customer Service Organizations transform their journey to cognitive AI solutions

xpresso.ai is an AI/ML Application Lifecycle Management Platform. It enables complete lifecycle management of AI/ML solutions, addressing the AI transformation journey of enterprises on any cloud platform of choice. The platform offers functionality essential for building AI/ML solutions, primarily enabling data scientists to rapidly build predictive and prescriptive models. It provides a user-friendly interface to develop, deploy, and manage AI/ML solutions at scale. In addition, it supports the incorporation of these solutions into business processes, surrounding infrastructure, products, and applications.

Key benefits of the platform include:

  • Empowers data scientists to transform AI/ML research into solutions 
  • Improves the productivity of data scientists by enabling them to focus on the business problem, develop algorithms, and rapidly experiment with models 
  • Addresses the shortage of skilled data science resources with automated workflows, toolkits and frameworks 
  • Manages AI transformation journey costs without any wastage of R&D efforts 
  • Provides an enterprise-ready and secure environment for complete lifecycle management of AI/ML applications
  • Enables at-scale deployment of enterprise AI/ML applications on-premise, cloud (AWS, GCP, Azure), or hybrid environments

Additional details on the platform are available on request. We can schedule a demo for anyone interested in learning more.

Have Any Questions?

Need more information about the platform?