1. What is xpresso?

The xpresso platform enables a comprehensive approach to building enterprise artificial intelligence (AI) solutions. With its five-stage, process-based cognitive journey (figure below), xpresso delivers AI solutions in a timely and robust manner using a reproducible methodology. The enterprise AI journey starts with use case discovery and continues through data intake, data preparation, and cognitive modelling, leading to actionable insights.

2. How does xpresso help data scientists build cognitive solutions?

xpresso provides scalable, reliable, easy-to-use, automated tool kits and accelerators for building complex AI solutions with our MLOps environment and microservices repository. For each phase of the journey, xpresso provides accelerators that are built for specific cognitive goals and enterprise-tested with our customers. Depending on the cognitive maturity of the enterprise, xpresso leverages the appropriate accelerators to manage AI/ML model development and production deployment. xpresso is optimized for AI-based analysis across the entire operating environment, from accounts, packaging, source versioning, data management, and application development to deployment, operations, and monitoring. It also provides additional data engineering capabilities to manage Big Data.

3. What differentiates xpresso?

With our long history of delivering production-grade ML services at Abzooba, we’ve learned that there can be many pitfalls in operating production ML-based systems. Understanding these challenges, we designed xpresso to manage the entire AI/ML application development life cycle using a scalable MLOps architecture.

MLOps is an ML engineering culture and practice that aims at unifying ML system development (Dev) and ML system operation (Ops). Practicing MLOps means that we advocate for automation and monitoring at all steps of ML system construction, including integration, testing, releasing, deployment, and infrastructure management.

Data scientists can implement and train an ML model with good predictive performance, given relevant training data for their use case. However, the real challenge is not building an ML model; the challenge is building an integrated ML system and continuously operating it in production. xpresso focuses on enabling these enterprise capabilities to transform individual analysts’ research into production-capable solutions.

4. How do I get started with xpresso?

To get started with xpresso, we encourage all users to start with a business use case and let us help you productionize your machine learning models on any infrastructure within a 6-week timeframe:

  • 1 week to Discover, where we define the business use case, identify the data sources, and design the data science algorithm and framework requirements.
  • 2 weeks to Experiment, where we help you set up the infrastructure, design the solution and the workflow, and bring in data from multiple sources.
  • 3 weeks to Deploy your model to production, where you can train multiple models in parallel and track and compare them. You then choose the best model, and we help you deploy it in production and set up post-deployment monitoring.

5. Can I run xpresso on my on-prem infrastructure?

Yes, xpresso is available to be installed on any on-premises infrastructure. We have automated scripts that can help you install it on any Linux-based virtual machines.

6. Does xpresso support installation on cloud services?

Yes, xpresso can be installed on all the major cloud services — GCP, Azure, AWS.

7. How does the pricing work?

We have flexible pricing for various degrees of use. You can refer to our Pricing Page for further details.

8. Does xpresso have GPU support?

Yes, GPUs can be used for deployment of components and pipelines and consequently for running experiments on training pipelines. Please contact the xpresso team for configuring GPUs in your existing installation.

Control Center

1. How do I access Control Center?

For every installation, you will be provided with a URL for the Control Center along with admin credentials to further create roles and users within that installation. Once you click on the URL, you will be directed to a login page, where you can now login using your credentials.

2. My training pipeline is ready on Dev. How do I run it on Production?

xpresso supports promotion of a pipeline from one instance to another. This action can be performed by a user with the appropriate role.

3. How can I make the best use of the same components under different pipelines?

Once a component is created, it can be used within any pipeline. When you create a pipeline, all components defined as “pipeline_job” or “job” will automatically appear on the right-hand side of the Solution Builder. You can simply drag and drop that component to the pipeline. The step can be repeated for the other pipelines.

4. How do I bring code from Jupyter notebooks to my code repo?

You should create a new component in your solution using the “jupyter” flavor. This will create a blank notebook for you with a few cells pre-populated with skeleton code. You can open this notebook by using the “Edit Code” link after clicking the component in Solution Builder. You can then write your code into these cells or add new cells, as required. After coding and testing the notebook, you can check it into the code repository by clicking the “Push” button on the customized notebook. You can then build and deploy the component, as usual.

5. Is Bitbucket supported in xpresso?

Yes, xpresso integrates closely with Git protocol-based repositories such as Bitbucket. When you create a new solution or add a new component to a solution, xpresso automatically creates the solution repository with the appropriate folder structure and some skeleton code to get you started.

6. How do I use GitLab?

You can directly go to your solution repository by clicking on the GitLab icon on the top right side.

When you create a solution under xpresso and define components, it automatically creates the code repository within GitLab and provides a folder structure for all your components with some sample code to get you started. The sample code is dependent on the component type and the flavor you have chosen when building the solution.

Data Ops

1. How do I connect to different data sources in xpresso?

You can use the Data Connectivity component from the xpresso Component Library, or you can use the Data Connectivity library to create your own custom Data Connectivity component. The detailed documentation can be found here.
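As a rough illustration of what a custom data-connectivity component might look like, the sketch below defines a uniform fetch interface over interchangeable sources, using SQLite as a stand-in for a real data source. The class names and interface here are hypothetical, not xpresso's actual Data Connectivity library.

```python
import sqlite3
from abc import ABC, abstractmethod


class DataConnector(ABC):
    """Hypothetical base class: xpresso's Data Connectivity library
    defines its own interface; this only illustrates the idea."""

    @abstractmethod
    def fetch(self, query: str) -> list:
        ...


class SQLiteConnector(DataConnector):
    """One concrete source; other connectors (JDBC, REST, files)
    would implement the same fetch() contract."""

    def __init__(self, path: str):
        self.conn = sqlite3.connect(path)

    def fetch(self, query: str) -> list:
        return self.conn.execute(query).fetchall()


# Usage: an in-memory database stands in for a real data source.
conn = SQLiteConnector(":memory:")
conn.conn.execute("CREATE TABLE t (x INTEGER)")
conn.conn.execute("INSERT INTO t VALUES (1), (2)")
rows = conn.fetch("SELECT x FROM t ORDER BY x")
```

Because every source exposes the same `fetch()` contract, downstream pipeline components do not need to know where the data came from.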

2. How can I version my data?

There are three methods through which xpresso can version your data.

3. Does xpresso support SSD?

Yes, SSDs are supported and can be used instead of NFS. This can be configured at installation time.

4. How does xpresso help with data exploration?

There are three methods through which you can do basic exploration of your data.

5. How does xpresso ensure data security?

We have functionality to encrypt data both at rest and in motion.

Dev Ops
1. What is a Docker container and how does xpresso support Docker containers?

Docker enables developers to easily pack, ship, and run any application as a lightweight, portable, self-sufficient container that can run virtually anywhere. It also lends itself to CI/CD: continuous integration/continuous deployment.

In xpresso, once you have created the code for a component, you need to build it. This will create a Docker image of the component, enabling you to deploy it to the target environment. You can find the steps on how to do it here.
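xpresso generates the container build for you, but for readers new to Docker, the generic Dockerfile below shows what packaging a Python component as an image typically looks like. The file names and base image are illustrative only, not what xpresso actually generates.

```dockerfile
# Illustrative component image -- xpresso generates its own build files;
# this only shows the general shape of containerizing a component.
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
ENTRYPOINT ["python", "main.py"]
```

The resulting image bundles the component's code and dependencies, so the same artifact runs identically on Dev, QA, and Production.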

2. Can I do multiple builds simultaneously?

Yes, you can select multiple components at a time and build them in parallel.

3. Can I deploy my pipeline on Kubeflow?

xpresso allows you to deploy your machine learning pipelines on both Kubeflow and Spark. You can find detailed documentation here.

4. Can I use Spark clusters for deployment?

xpresso allows you to deploy your machine learning pipelines both on Kubeflow and Spark. You can find detailed documentation here.

5. Can I schedule my production runs on xpresso?

This can be achieved by scheduling an experiment: create a pipeline for inference, then schedule it using the “Schedule Run” functionality.
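Conceptually, a scheduled run is just a job fired on a timer. The stand-alone sketch below mimics the idea with Python's standard `sched` module; the “Schedule Run” feature itself is configured in the xpresso UI, and the intervals here are shortened for illustration.

```python
import sched
import time

# Hypothetical stand-in for a scheduled inference run: fire the same
# job at fixed intervals using only the standard library.
runs = []

def inference_job():
    # A real job would invoke the deployed inference pipeline here.
    runs.append("inference run")

scheduler = sched.scheduler(time.time, time.sleep)
# Schedule three runs, 0.01 s apart (a production schedule would use
# hours or days, typically expressed as a cron-style rule).
for i in range(3):
    scheduler.enter(0.01 * i, 1, inference_job)
scheduler.run()  # blocks until all scheduled jobs have fired
```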

Model Ops

1. What are the different machine learning frameworks supported in xpresso?

We support all popular machine learning frameworks. We have special support through pre-built components in the xpresso Component Library for some of these: XGBoost, Sklearn, Keras, and LightGBM.  

2. Does xpresso support distributed learning?

xpresso provides support for training pipelines created in PySpark. When deployed, these pipelines will run on Spark clusters, which can be scaled up or down as needed.

3. Can I train my neural network models in xpresso?

Yes, xpresso supports training of neural networks. GPUs can be used for the deployment of components and pipelines and can therefore be used for experimentation, if needed. xpresso provides special support for popular DL libraries like Keras. When such libraries are used within xpresso, several relevant metrics are reported directly to the xpresso Dashboard, with minimal coding required.

4. How does xpresso version my model?

Output models generated as part of training pipelines are automatically versioned by xpresso in a Model Repository. A Model Repository consists of branches corresponding to experiment runs, with each branch containing commits. Each commit has a folder where all the models for that run are saved as files. You can also download the files directly from the Model Repository.
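To make the branch/commit layout concrete, here is a minimal sketch of a commit-per-run model store. The directory layout and the content-hash commit id are assumptions for illustration; xpresso's actual Model Repository may organize things differently.

```python
import hashlib
import tempfile
from pathlib import Path

# Hypothetical commit-per-run model repository:
#   <repo>/<branch (experiment)>/<commit>/model.pkl
def save_model(repo: Path, branch: str, model_bytes: bytes) -> str:
    """Save a model under a branch; derive the commit id from content."""
    commit = hashlib.sha1(model_bytes).hexdigest()[:8]
    folder = repo / branch / commit
    folder.mkdir(parents=True, exist_ok=True)
    (folder / "model.pkl").write_bytes(model_bytes)
    return commit

# Usage: each experiment run lands in its own commit folder.
repo = Path(tempfile.mkdtemp())
commit = save_model(repo, "experiment-1", b"fake-model-weights")
saved = (repo / "experiment-1" / commit / "model.pkl").read_bytes()
```

Deriving the commit id from the model's content means re-saving identical weights maps to the same commit, while any change produces a new one.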

5. How is model lineage supported in xpresso?

xpresso saves the history of all model runs, with the following versions saved as part of each experiment run:

  • Version of data used
  • Version of pipeline
  • Version of the parameters used 
  • Version of the output model
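Taken together, those four versions form a lineage record per run. The snippet below shows one plausible shape for such a record; the field names and values are illustrative, not xpresso's actual schema.

```python
import json

# Hypothetical lineage record -- field names are illustrative only.
lineage = {
    "run_id": "exp-42",
    "data_version": "v3",
    "pipeline_version": "1.2.0",
    "parameters_version": "p7",
    "model_version": "m15",
}

# With all four versions recorded, any deployed model can be traced back
# to the exact data, pipeline code, and parameters that produced it.
record = json.dumps(lineage, sort_keys=True)
```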

6. Is there a support for feature store?

A feature store is part of our upcoming release.

7. How does xpresso help in A/B testing?

A/B Tests enable data scientists to test multiple deployed models simultaneously to check which one works best. xpresso allows you to create a service mesh of multiple models that can run using a single API. You can find the details here.
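The core of such a service mesh is a weighted traffic splitter behind a single endpoint. The sketch below illustrates the routing idea only; xpresso's mesh handles this for you, and the model names and 90/10 split here are made up for the example.

```python
import random

# Hypothetical weighted A/B router behind a single API endpoint.
def route(models: dict, weights: dict, rng: random.Random):
    """Pick a model according to its configured traffic share."""
    names = list(models)
    chosen = rng.choices(names, weights=[weights[n] for n in names], k=1)[0]
    return chosen, models[chosen]

# Two candidate models behind one API, with a 90/10 traffic split.
models = {"model_a": lambda x: x + 1, "model_b": lambda x: x + 2}
weights = {"model_a": 0.9, "model_b": 0.1}
rng = random.Random(0)  # seeded for reproducibility

counts = {"model_a": 0, "model_b": 0}
for _ in range(1000):
    name, predict = route(models, weights, rng)
    counts[name] += 1
```

Over many requests the traffic converges to the configured split, letting you compare the two models' live metrics before promoting a winner.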

8. Can I run multiple trainings simultaneously?

Yes, you can run multiple experiments at the same time. Based on the compute allocated, xpresso will automatically place experiments in a queue if needed and run them once the previous experiments have completed.
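The queueing behaviour amounts to a FIFO queue drained in batches sized to the available compute. This toy sketch illustrates that behaviour only; the slot count and experiment names are made up, and xpresso's scheduler is, of course, more sophisticated.

```python
from collections import deque

# Hypothetical FIFO experiment queue over a fixed number of compute slots.
def run_experiments(experiments, slots: int):
    """Return the batches in the order they would run."""
    waiting = deque(experiments)
    batches = []
    while waiting:
        # Fill up to `slots` slots; these experiments run in parallel.
        batch = [waiting.popleft() for _ in range(min(slots, len(waiting)))]
        batches.append(batch)
        # The next batch starts once this one has completed.
    return batches

# Usage: five experiments submitted at once, two compute slots available.
batches = run_experiments(["exp1", "exp2", "exp3", "exp4", "exp5"], slots=2)
```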

9. How can I monitor my models in production?

Model monitoring is an upcoming feature that will let you view the operation and stability metrics related to each model. You will also be able to generate and configure alerts for the same.