- The ground truths for the data streamed to the platform were not available at inference time, but became available later.
- You want to add human feedback associated with a request, but the feedback was not available at inference time.
How to update data
Every row streamed to Openlayer has an inference_id, a unique identifier for the row. You can provide the inference_id at stream time; if you don't, Openlayer will assign unique IDs to your rows.
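For instance, a batch of rows streamed with caller-supplied IDs might look like the following (a sketch; the column name inference_id is illustrative and should match your stream configuration):

```python
import uuid

import pandas as pd

# Supply your own inference_id per row at stream time so the rows
# can be referenced and updated later.
rows = pd.DataFrame(
    {
        "inference_id": [str(uuid.uuid4()) for _ in range(3)],
        "input": ["What is 2 + 2?", "Capital of France?", "Largest planet?"],
        "output": ["4", "Paris", "Jupiter"],
    }
)
```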
Enhanced Tracing with Metadata: When using the @trace decorator, you can dynamically add metadata and set custom inference IDs:
- Custom Inference IDs: Use update_current_trace(inferenceId="your_id") for request correlation and future data updates
- Trace Metadata: Add context with update_current_trace(user_id="123", session="abc")
- Step Metadata: Enrich individual steps with update_current_step(model="gpt-4", tokens=150)
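A sketch of these calls inside a traced function; the import path and the call_model helper are assumptions, so adjust them to your SDK version and application:

```python
# Assumed import path; adjust to your installed openlayer SDK version.
from openlayer.lib import trace, update_current_trace, update_current_step


@trace()
def generate_answer(question: str, request_id: str, user_id: str) -> str:
    # Custom inference ID for request correlation and future data updates.
    update_current_trace(inferenceId=request_id)
    # Trace-level metadata for extra context.
    update_current_trace(user_id=user_id, session="abc")
    answer = call_model(question)  # call_model is a hypothetical helper
    # Step-level metadata enriching this traced step.
    update_current_step(model="gpt-4", tokens=150)
    return answer
```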
Key Benefit: Custom inference IDs let you easily attach user feedback, ground truth labels, and other signals after the initial request. See the Tracing guide for comprehensive examples.
When updating data, use the inference_id to specify the rows you want to update.
Let's say that you want to add a column called label with ground truths. If you have your data in a pandas DataFrame similar to:
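A minimal sketch of such a DataFrame (the example IDs and label values are illustrative):

```python
import pandas as pd

# inference_id identifies the rows to update; label carries the
# ground truths that were unavailable at inference time.
df = pd.DataFrame(
    {
        "inference_id": ["id-001", "id-002", "id-003"],
        "label": ["positive", "negative", "positive"],
    }
)
```

You would then pass a DataFrame like this to the Openlayer client's data-update method, matching rows by inference_id; the exact call varies by SDK version, so check the API reference.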
Using Custom Inference IDs for User Feedback and Signals
When you use custom inference IDs with the @trace decorator, you can easily update rows with user feedback, ratings, and other signals after the initial request. This creates powerful feedback loops for improving your AI system.
Complete Workflow Example
Here's a comprehensive example showing how to set up custom inference IDs for future updates.
Advanced Use Cases
You can also update rows with more sophisticated signals:
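For example, the feedback collected after a request might be shaped like this (a sketch; the column names user_rating and converted are illustrative, and the rows are keyed by the same inference_id values set via update_current_trace during tracing):

```python
import pandas as pd

# Feedback and business signals gathered after the initial requests,
# keyed by the custom inference IDs set during tracing.
feedback = pd.DataFrame(
    {
        "inference_id": ["req-001", "req-002"],
        "user_rating": [5, 2],       # explicit user feedback
        "converted": [True, False],  # downstream business signal
    }
)
```

Push a DataFrame like this through the Openlayer client's data-update method to enrich the corresponding rows; again, the exact call depends on your SDK version.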
Benefits of This Approach
- Seamless Integration: Custom inference IDs created during tracing are automatically available for updates
- Rich Feedback Loops: Collect user ratings, business signals, and ground truth labels
- Better Model Evaluation: Use real user feedback to assess model performance
- Continuous Improvement: Identify patterns in user satisfaction to improve your AI system
- A/B Testing: Track different model versions and their user satisfaction rates
Schedule regular syncing of feedback data to Openlayer (e.g., hourly or daily)
to keep your monitoring dashboard up-to-date with real user sentiment and
business metrics.
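One way to structure such a sync job (a sketch; collect_new_feedback and push_updates are hypothetical stand-ins for your feedback store and the Openlayer update call):

```python
def sync_once(collect_new_feedback, push_updates):
    """Run one sync cycle: fetch pending feedback rows and push any found.

    collect_new_feedback() -> list of row dicts, each with an inference_id
    push_updates(rows)     -> sends the rows to Openlayer (hypothetical)
    Returns the number of rows pushed.
    """
    rows = collect_new_feedback()
    if rows:
        push_updates(rows)
    return len(rows)


# Schedule sync_once with cron, APScheduler, or a simple hourly loop.
```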