Chargemaster Analytics

About the Customer & Challenges Faced:

The chargemaster, or charge description master (CDM), is a comprehensive listing of items billable to a hospital patient or a patient's health insurance provider. It contains data elements such as charge descriptions, billing codes, and pricing.

In practice, it usually contains highly inflated prices, often several times the hospital's actual costs. Our client, a healthcare payer based in Indiana, wanted to quantify and verify whether quarterly increases in payments to providers were within the permissible limits dictated by contract language, the charge description master, caps, and attestation information.

Solution and Approach:

  • The chargemaster analytics solution quantifies and verifies whether payments to providers are within the permissible limits laid down by the contract language and the CDM.
  • The xpresso framework provides out-of-the-box development platforms. The project was started seamlessly, with the relevant environments created automatically. Development images configured from pre-defined templates were installed on-premises or in a development VM within the infrastructure. This enabled authentication using LDAP and seamless project setup using Bitbucket, Jenkins, and Docker, ensuring build and deployment without software compatibility issues. xpresso's MLOps framework establishes efficient, Alluxio- and Presto-based data connectivity for collecting data from diverse sources.
  • The framework made available by xpresso leverages the latest ML and DL tools for model preparation: Pachyderm-based data versioning, deployment on a Kubernetes orchestration system, Kubeflow- and Spark-based ML and DL build and deployment, an Istio-based service mesh enabling a microservice architecture, and ELK-based monitoring. Together, these contributed to a reduction in latency.
  • xpresso's MLOps framework allows establishing high-end data connections and collecting data from diverse sources. This was leveraged to analyze the extensive repository of claims data and providers' historical data. We used advanced statistical and machine learning algorithms, in both the discrete and continuous domains, to analyze these data and detect exceptional increases in billing amounts.
  • We analyzed all claims data and calculated the quarter-over-quarter increase in billing for different revenue and CPT codes. Increases determined quantitatively were checked against the contract language to verify whether they were allowed. For cost increases outside permissible limits, confidence intervals at high confidence levels were constructed to support the results. The final outcomes of the negotiations were fed back into the algorithms as a feedback loop to improve accuracy.
  • The details collected were added as explanatory variables using libraries and analyzed. The attributes obtained were used for categorization (employing Pachyderm-based data versioning), followed by univariate, bivariate, and Bag-of-Words analysis of both structured and unstructured datasets through xpresso Exploratory Data Analysis (Data and Statistical Analysis). Different datasets and their versions were easily controlled and stored in an xpresso Data Model (XDM)-enabled data store, which allowed easy retrieval and storage of datasets and files in the internal XDM. This was achieved using two features of the platform:
  1. Data Connectivity Marketplace libraries
  2. Data Versioning
  • Finally, we drew in-depth insights from the historical behavior of various providers and detected over $50 million in potential billing increases beyond permissible limits. These insights provided much-needed clarity when negotiating payer-provider contracts.
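The quarter-over-quarter check described above can be sketched as follows. All names, the 5% cap, and the toy claim lines are illustrative assumptions, not the client's actual schema or contract terms; the real analysis ran over the payer's full claims repository with statistical support such as confidence intervals.

```python
from collections import defaultdict

# Toy claim lines: (CPT code, quarter, billed amount). Illustrative only.
claims = [
    ("99213", "2021Q1", 100.0), ("99213", "2021Q1", 110.0),
    ("99213", "2021Q2", 150.0), ("99213", "2021Q2", 160.0),
    ("71045", "2021Q1", 80.0),  ("71045", "2021Q2", 82.0),
]

# Assumed contractual cap on quarter-over-quarter billing increase (5%).
PERMISSIBLE_INCREASE = 0.05

def qoq_flags(claims, cap):
    """Total billing per (code, quarter), then flag each code whose
    quarter-over-quarter increase exceeds the contractual cap."""
    totals = defaultdict(float)
    for code, quarter, amount in claims:
        totals[(code, quarter)] += amount
    flags = {}
    for code in {c for c, _ in totals}:
        quarters = sorted(q for c, q in totals if c == code)
        for prev, curr in zip(quarters, quarters[1:]):
            increase = totals[(code, curr)] / totals[(code, prev)] - 1.0
            flags[(code, prev, curr)] = (round(increase, 4), increase > cap)
    return flags

flags = qoq_flags(claims, PERMISSIBLE_INCREASE)
# Code 99213 rises (150+160)/(100+110) - 1 ≈ 47.6%, outside the cap;
# code 71045 rises 82/80 - 1 = 2.5%, within the cap.
```

Flagged (code, quarter-pair) combinations would then be checked against the contract language, and only exceedances with strong statistical support would be raised in negotiations.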


Benefits:
  • With xpresso, one can leverage high-end data connectivity and efficient data versioning, perform exploratory data analysis, and generate inferences through an intuitive, industry-standardized process.
  • The unique containerized, platform-centric approach offered by xpresso can be used to provision the required infrastructure and deploy rapidly to multiple high-availability environments while aligning with best-in-class DevSecOps practices.
  • xpresso also brings in-depth QA/QC testing and logging frameworks, synchronous and asynchronous monitoring, and performance-tracking ability.
  • xpresso also offers single sign-on (SSO) across its various built-in tools and subsystems, making platform access seamless throughout.
  • In a nutshell, all of the above features under one roof make xpresso an unbeatable MLOps framework.
