Accessing a Real-Time Service (Token-based Authentication)

If a real-time service is in the Running state, it has been deployed successfully. The service provides a standard RESTful API for you to call. Before integrating the API into the production environment, commission it. You can use any of the following methods to send an inference request to the real-time service:

Method 1: Use GUI-based Software for Inference (Postman)

  1. Download Postman and install it, or install the Postman Chrome extension. Alternatively, use other software that can send POST requests. Postman 7.24.0 is recommended.
  2. Open Postman.
  3. Set parameters on Postman. The following uses image classification as an example.
    • Select POST as the request method and copy the API URL into the POST text box. To obtain the API URL of the real-time service, switch to the Usage Guides tab on the page providing details about the real-time service.

    • On the Headers tab page, set Key to X-Auth-Token and Value to the obtained token.

    • On the Body tab page, file input and text input are available.
      • File input

        Select form-data. Set KEY to the input parameter of the AI application, for example, images. Set VALUE to the image to be inferred (only one image can be inferred per request).

      • Text input

        Select raw and then JSON (application/json). Enter the request body in the text box below. An example request body is as follows:

        {
          "meta": {
            "uuid": "10eb0091-887f-4839-9929-cbc884f1e20e"
          },
          "data": {
            "req_data": [
              {
                "sepal_length": 3,
                "sepal_width": 1,
                "petal_length": 2.2,
                "petal_width": 4
              }
            ]
          }
        }

        meta can carry a universally unique identifier (UUID). When you call the API, the system provides a UUID; when the inference result is returned, the UUID is returned with it so that the request can be traced. If you do not need this function, leave meta blank.

        data contains a req_data array for one or more pieces of input data. The parameters of each piece of data are determined by the AI application, such as sepal_length and sepal_width in this example.

  4. After setting the parameters, click Send to send the request. The result is displayed in Response.
    • Inference result using file input: The field values in the returned result vary with the AI application.
    • Inference result using text input: The response body contains meta and data. If the request carries a uuid, the same uuid is returned in the response; otherwise, uuid is left blank. data contains a resp_data array for the inference results of one or more pieces of input data. The parameters of each result are determined by the AI application, such as sepal_length and predictresult in this example. An illustrative response is shown below.
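
      For illustration, a response to the text-input request above might look like the following. The structure and field names depend on the AI application; the uuid and predictresult values here are placeholders only.

        {
          "meta": {
            "uuid": "10eb0091-887f-4839-9929-cbc884f1e20e"
          },
          "data": {
            "resp_data": [
              {
                "sepal_length": 3,
                "predictresult": "<prediction>"
              }
            ]
          }
        }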

Method 2: Run the cURL Command to Send an Inference Request

When you send an inference request with the cURL command, the input can be provided either as a file or as text.
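
For reference, the following is a minimal sketch of both cases. The API URL, token value, input parameter name (images), file name, and request fields are placeholders that must be replaced with the values for your own service and AI application.

Input as text (JSON request body):

  curl -X POST \
    -H "X-Auth-Token: <your-token>" \
    -H "Content-Type: application/json" \
    -d '{"data": {"req_data": [{"sepal_length": 3, "sepal_width": 1, "petal_length": 2.2, "petal_width": 4}]}}' \
    https://<real-time-service-API-URL>

Input as a file (form-data):

  curl -X POST \
    -H "X-Auth-Token: <your-token>" \
    -F "images=@test.jpg" \
    https://<real-time-service-API-URL>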

Method 3: Use a Python Script to Send an Inference Request

When you send an inference request with a Python script, the input can be provided either as a file or as text.
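
A minimal Python sketch of both cases is shown below, using the requests library. The service URL, token, input parameter name (images), file name, and request fields are placeholders that depend on your own service and AI application.

  # A minimal sketch. Replace the URL, token, file name, and field names
  # with the values for your own real-time service and AI application.
  import requests

  url = "https://<real-time-service-API-URL>"   # from the Usage Guides tab
  headers = {"X-Auth-Token": "<your-token>"}

  # Input as text: JSON request body, matching the example above.
  body = {
      "data": {
          "req_data": [
              {"sepal_length": 3, "sepal_width": 1,
               "petal_length": 2.2, "petal_width": 4}
          ]
      }
  }
  resp = requests.post(url, headers=headers, json=body)
  print(resp.status_code, resp.text)

  # Input as a file: form-data; the key (for example, images) must match
  # the input parameter of the AI application.
  with open("test.jpg", "rb") as f:
      resp = requests.post(url, headers=headers, files={"images": f})
  print(resp.status_code, resp.text)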