DevOps

1. What is a Docker container and how does xpresso support Docker containers?

Docker enables developers to easily pack, ship, and run any application as a lightweight, portable, self-sufficient container that can run virtually anywhere. It also lends itself well to CI/CD (continuous integration/continuous deployment). In xpresso, once you have written the code for a component, you need to build it. The build creates a Docker image of the component, which can then be deployed to the target environment. You can find the steps on how to do this here.
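
To make the "build produces a Docker image" step concrete, here is a minimal sketch using the Docker SDK for Python (`docker-py`), outside of xpresso's own build flow; the component path and image tag are placeholders.

```python
import docker

# Connect to the local Docker daemon (assumes Docker is installed and running).
client = docker.from_env()

# Build an image from a directory containing a Dockerfile.
# "./my-component" and the tag are placeholders for your component's code.
image, build_logs = client.images.build(
    path="./my-component",
    tag="my-component:latest",
)

# Run the image as a self-contained container.
container = client.containers.run("my-component:latest", detach=True)
print(container.id)
```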

2. Can I directly deploy a component using a pre-built Docker image?

Yes. If you already have a pre-built Docker image, you do not need to go through the build process; you can provide the Docker registry URL directly when deploying. To do so, click the “Advanced Setting” icon next to the component you want to deploy and enter the URL under “Custom Docker image”.
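
For context, this is roughly how a pre-built image ends up at a registry URL you can paste into “Custom Docker image”, sketched with the Docker SDK for Python; the registry host and image names are placeholders.

```python
import docker

client = docker.from_env()

# Tag a locally built image with your registry's address
# ("registry.example.com" is a placeholder).
image = client.images.get("my-component:latest")
image.tag("registry.example.com/my-component", tag="1.0")

# Push it so the registry URL can be entered under "Custom Docker image".
for line in client.images.push(
    "registry.example.com/my-component", tag="1.0", stream=True, decode=True
):
    print(line)
```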

3. Can I do multiple builds simultaneously?

Yes, you can select multiple components at a time and build them in parallel.
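
The same idea can be illustrated outside the UI; here is a sketch that builds several component images concurrently with the Docker SDK for Python, where the component directories and tags are placeholders.

```python
import docker
from concurrent.futures import ThreadPoolExecutor

client = docker.from_env()

def build(component_dir: str, tag: str):
    # Each build runs in its own worker thread.
    image, _ = client.images.build(path=component_dir, tag=tag)
    return image.tags

components = {
    "./data-prep": "data-prep:latest",
    "./train": "train:latest",
    "./serve": "serve:latest",
}

with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(build, d, t) for d, t in components.items()]
    for f in futures:
        print(f.result())
```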

4. Can I deploy my pipeline on Kubeflow?

xpresso allows you to deploy your machine learning pipelines on both Kubeflow and Spark. You can find detailed documentation here.
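
For orientation, here is what a Kubeflow pipeline looks like with the plain Kubeflow Pipelines SDK (kfp v1); this is not xpresso's own deployment mechanism, and the host and image names are placeholders.

```python
import kfp
from kfp import dsl

@dsl.pipeline(name="example-pipeline", description="Two-step ML pipeline")
def example_pipeline():
    prep = dsl.ContainerOp(
        name="data-prep",
        image="registry.example.com/data-prep:latest",
    )
    train = dsl.ContainerOp(
        name="train",
        image="registry.example.com/train:latest",
    )
    train.after(prep)  # run training once data prep completes

# Submit the pipeline to a Kubeflow Pipelines endpoint (placeholder host).
client = kfp.Client(host="http://kubeflow.example.com/pipeline")
client.create_run_from_pipeline_func(example_pipeline, arguments={})
```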

5. Is there a provision to provide runtime parameters during deployment?

Yes, you can specify the environment parameters passed to the component as name-value pairs. Use the “+” icon to add more environment parameters and the “-” icon to remove any.
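
Inside the component, those name-value pairs arrive as ordinary environment variables; a minimal sketch, where the variable names are illustrative.

```python
import os

# Read runtime parameters supplied at deployment time
# (names and defaults below are examples, not xpresso-defined).
batch_size = int(os.environ.get("BATCH_SIZE", "32"))
log_level = os.environ.get("LOG_LEVEL", "INFO")

print(f"Running with batch_size={batch_size}, log_level={log_level}")
```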

6. Can I use Spark clusters for deployment?

xpresso allows you to deploy your machine learning pipelines on both Kubeflow and Spark. You can find detailed documentation here.
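
As a point of reference, a Spark-based pipeline step typically boils down to a job like the following plain PySpark sketch; the paths and column name are placeholders.

```python
from pyspark.sql import SparkSession

# A minimal Spark job of the kind a pipeline step might run.
spark = SparkSession.builder.appName("batch-inference").getOrCreate()

# Placeholder input/output paths and filter column.
df = spark.read.parquet("hdfs:///data/input")
result = df.filter(df["score"] > 0.5)
result.write.mode("overwrite").parquet("hdfs:///data/output")

spark.stop()
```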

7. How do I deploy my models?

Deploying a trained model creates a REST API endpoint for it, which can be used to send requests to the model and get predictions back. To do this, the selected model must be wrapped in an Inference Service: a specialized web service that queries a trained model and returns its prediction. You can deploy one or more trained models at a time; each must be coupled with its own Inference Service. To deploy trained models, click the “Deploy Model” button in the “Model Repository”.
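
Once deployed, querying the endpoint is a plain HTTP call; a minimal sketch, where the URL and request/response schema are assumptions that depend on how the Inference Service is defined.

```python
import requests

# Placeholder endpoint and payload.
url = "http://inference.example.com/predict"
payload = {"instances": [[5.1, 3.5, 1.4, 0.2]]}

response = requests.post(url, json=payload, timeout=10)
response.raise_for_status()
print(response.json())  # e.g. {"predictions": [...]}
```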

8. Can I auto scale the infrastructure where my production model is running?

Coming soon!

9. Can I schedule my production runs on xpresso?

Yes. This can be achieved by scheduling an experiment: create a pipeline for inference, then schedule it using the “Schedule Run” functionality.
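
For comparison, the same idea expressed with the plain Kubeflow Pipelines SDK (kfp v1) is a recurring run; the host, IDs, and cron expression below are placeholders.

```python
import kfp

client = kfp.Client(host="http://kubeflow.example.com/pipeline")

# Placeholder experiment and pipeline IDs; the 6-field cron expression
# runs the inference pipeline every day at 02:00.
client.create_recurring_run(
    experiment_id="my-experiment-id",
    job_name="nightly-inference",
    cron_expression="0 0 2 * * *",
    pipeline_id="my-pipeline-id",
)
```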