API Requests

API Environment

Currently, only the MLflow environment is supported.

MLflow

The MLflow inference server provides the following URLs:

  • /invocations: The inference path. Input data is sent in the body of a POST request, and the inference result is returned in the response.
  • /ping: Used for health checks.
  • /health: Same as /ping.
  • /version: Returns the MLflow version.


For more details, please refer to the page below.

https://mlflow.org/docs/latest/deployment/deploy-model-locally.html#inference-server-specification
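As a quick sanity check against the health endpoints above, a minimal Python sketch (standard library only) might look like the following. The base URL here is a hypothetical local MLflow server; substitute your own deployment's URL.

```python
# Sketch: checking inference-server health via /ping.
# BASE_URL is a placeholder for a locally running MLflow server,
# not a real deployment.
import urllib.request

BASE_URL = "http://127.0.0.1:5000"  # hypothetical local server

def health_url(base_url: str) -> str:
    """Build the health-check URL from the server's base URL."""
    return base_url.rstrip("/") + "/ping"

def is_healthy(base_url: str, timeout: float = 5.0) -> bool:
    """Return True if the server answers /ping with HTTP 200."""
    try:
        with urllib.request.urlopen(health_url(base_url), timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False
```

The same helper works for /health, since it behaves identically to /ping.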

Making API Requests

Every request must include an API key; requests without one are treated as unauthorized and will not be processed.

You can find the API URL in the API information. Append the appropriate path to this URL when making a request.

Making MLflow Requests

You can make requests by appending the desired path to the API URL described above. Below are a few examples.

  • When making a /ping request
    • Add /ping to the API URL: https://api-cloud-function.elice.io/2ff51a26-9c2d-414c-86dc-56ae903291a5/ping
  • When making an inference request
    • Add /invocations to the API URL: https://api-cloud-function.elice.io/2ff51a26-9c2d-414c-86dc-56ae903291a5/invocations


If you were to make the request via the curl command, it would look like this:

curl --location 'https://api-cloud-function.elice.io/{{api_id}}/invocations' \
  --header 'Authorization: Bearer {{api_key}}' \
  --header 'Content-Type: application/json' \
  --data '{
    "inputs": [ {{your_data}} ]
  }'
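The same request can be assembled in Python with only the standard library. This is a sketch mirroring the curl command above; the API ID, API key, and sample inputs are placeholders you must replace with your own values.

```python
# Sketch: the /invocations request from the curl example, in Python.
# API_ID and API_KEY are placeholders, not real credentials.
import json
import urllib.request

API_ID = "{{api_id}}"    # substitute your deployment's ID
API_KEY = "{{api_key}}"  # substitute your API key

def build_invocation_request(api_id: str, api_key: str, inputs: list) -> urllib.request.Request:
    """Assemble a POST request matching the curl example above."""
    url = f"https://api-cloud-function.elice.io/{api_id}/invocations"
    body = json.dumps({"inputs": inputs}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending the request (requires a valid API key):
# with urllib.request.urlopen(build_invocation_request(API_ID, API_KEY, [[1, 2, 3]])) as resp:
#     result = json.load(resp)
```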

To verify the data formats accepted by MLflow, please refer to the following document: https://mlflow.org/docs/latest/deployment/deploy-model-locally.html#accepted-input-formats