from openlayer import Openlayer

# Let's say we want to stream the following row, which represents a model prediction:
data = {
    "user_query": "what's the meaning of life?",
    "output": "42",
    "tokens": 7,
    "cost": 0.02,
    "timestamp": 1620000000,  # Unix timestamp, in seconds
}

# Prepare the config for the data, which depends on your project's task type. In this
# case, we have an LLM project:
from openlayer.types.inference_pipelines import data_stream_params

config = data_stream_params.ConfigLlmData(
    input_variable_names=["user_query"],
    output_column_name="output",
    num_of_token_column_name="tokens",
    cost_column_name="cost",
    timestamp_column_name="timestamp",
    prompt=[{"role": "user", "content": "{{ user_query }}"}],
)

client = Openlayer()
data_stream_response = client.inference_pipelines.data.stream(
    inference_pipeline_id="182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
    rows=[data],
    config=config,
)
Example response:

{
  "success": true
}

Use this endpoint to stream individual inference data points to Openlayer. To upload many inferences in one go, use the batch upload method instead.
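A failed request surfaces as an exception rather than a {"success": false} body. A minimal sketch of handling failures, assuming the SDK follows the common pattern of raising APIConnectionError for network problems and APIStatusError for non-2xx responses (verify the exception names against your installed version):

import openlayer
from openlayer import Openlayer

client = Openlayer()
try:
    # Reuses the data and config objects built in the example above.
    client.inference_pipelines.data.stream(
        inference_pipeline_id="182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
        rows=[data],
        config=config,
    )
except openlayer.APIConnectionError:
    # The request never reached the server (network problem, timeout, etc.).
    print("Could not reach the Openlayer API")
except openlayer.APIStatusError as e:
    # The server answered with a non-2xx status code.
    print(f"Request failed with status {e.status_code}")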

Authorizations

Authorization (string, header, required)

Bearer authentication header of the form Bearer <token>, where <token> is your workspace API key. See "Find your API key" for more information.
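The Python client attaches this header for you. By default it reads the key from the OPENLAYER_API_KEY environment variable, the SDK's documented default; it can also be passed explicitly:

import os
from openlayer import Openlayer

# Two equivalent ways to authenticate the client.
client = Openlayer()  # reads OPENLAYER_API_KEY from the environment
client = Openlayer(api_key=os.environ["OPENLAYER_API_KEY"])  # explicit key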

Path Parameters

inferencePipelineId (string, required)

The inference pipeline id (a UUID).
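Since the id must be a UUID, it can be validated locally before making the request. A small sketch using only the standard library (the OPENLAYER_INFERENCE_PIPELINE_ID variable name is just an illustration, not an SDK convention):

import os
import uuid

# Hypothetical environment variable holding the pipeline id.
inference_pipeline_id = os.environ["OPENLAYER_INFERENCE_PIPELINE_ID"]

# uuid.UUID raises ValueError if the string is not a well-formed UUID.
uuid.UUID(inference_pipeline_id)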

Body

application/json
rows (object[], required)

A list of inference data points, each with inputs and outputs.
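Because rows is a list, several inference data points can be streamed in a single call, as long as each row carries the column names referenced in the config. A sketch reusing the LLM config from the example above:

rows = [
    {"user_query": "what's the meaning of life?", "output": "42",
     "tokens": 7, "cost": 0.02, "timestamp": 1620000000},
    {"user_query": "what's the meaning of death?", "output": "unclear",
     "tokens": 5, "cost": 0.015, "timestamp": 1620000060},
]

client.inference_pipelines.data.stream(
    inference_pipeline_id="182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
    rows=rows,
    config=config,  # the ConfigLlmData built above
)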

config (object, required)

Configuration for the data stream. Its shape depends on your Openlayer project's task type.
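For instance, a tabular classification project would use a different config type than the LLM example above. A hedged sketch, assuming the SDK exposes a ConfigTabularClassificationData type with roughly these field names (check openlayer.types.inference_pipelines.data_stream_params in your installed version before relying on them):

from openlayer.types.inference_pipelines import data_stream_params

# Assumed type and field names for a tabular classification project; verify
# them against data_stream_params in your SDK version.
config = data_stream_params.ConfigTabularClassificationData(
    class_names=["churned", "retained"],
    feature_names=["age", "plan", "monthly_spend"],
    predictions_column_name="prediction",
    timestamp_column_name="timestamp",
)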

Response

200 (application/json)

Status OK.
success (boolean, required)
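The SDK parses the response body into an object, so the flag can be read as an attribute. A minimal sketch:

# data_stream_response comes from the stream call in the example above;
# the 200 body {"success": true} is exposed as a boolean attribute.
assert data_stream_response.success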