Testing a Deployed Model

Once a trained model has been deployed, it can be tested using tools such as POSTMAN or curl, or, of course, by creating a UI that accepts inputs and invokes the inference service.

Inference Service URL

The URL of the Inference Service associated with the deployed model is made available to developers when the service is deployed. The inference REST API endpoint is formed by appending “/predict” to that URL, e.g., http://172.16.6.51:30935/predict

Inference Service Request

The request to an inference service is a JSON object with a single attribute, “input”, whose value is a JSON object containing whatever attribute-value pairs the service requires. See the example below.

{"input":
   { "Store": 238.0, "DayOfWeek": 5.0, "Promo": 0.0,
     "StateHoliday": 0.0, "SchoolHoliday": 0.0, "StoreType": 3.0,
     "Assortment": 2.0, "CompetitionDistance": 610.0, "Promo2": 0.0, "Day":
     1.0, "Month": 7.0, "Year": 1.0, "isCompetition": 0.0, "NewAssortment":
     3, "NewStoreType": 1
   }
}

Testing the Deployed Model

Some useful tips when testing the deployed model (a sample curl invocation follows the list):

  • Set the request method to POST

  • Set the request header “Content-Type” to “application/json”

  • Pass the JSON request (as in the example above) in the request body (in POSTMAN, select “raw” format)
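
Putting these tips together, a minimal curl invocation might look like the sketch below. The service URL and feature values are the illustrative ones from the earlier examples; substitute the URL reported for your own deployment. The service’s JSON response is printed to standard output.

# POST the example request to the /predict endpoint of the inference service
curl -X POST http://172.16.6.51:30935/predict \
     -H "Content-Type: application/json" \
     -d '{"input":
            { "Store": 238.0, "DayOfWeek": 5.0, "Promo": 0.0,
              "StateHoliday": 0.0, "SchoolHoliday": 0.0, "StoreType": 3.0,
              "Assortment": 2.0, "CompetitionDistance": 610.0, "Promo2": 0.0,
              "Day": 1.0, "Month": 7.0, "Year": 1.0, "isCompetition": 0.0,
              "NewAssortment": 3, "NewStoreType": 1
            }
          }'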