Openlayer integrates with Anthropic in two different ways:

  • If you are building an AI system with Anthropic LLMs and want to evaluate it, you can use the SDKs to make Openlayer part of your workflow.

  • Some tests on Openlayer are based on a score produced by an LLM evaluator. You can set any of Anthropic’s LLMs as the LLM evaluator for these tests.

This integration guide explores each of these paths.

Evaluating Anthropic LLMs

You can set up Openlayer tests to evaluate your Anthropic LLMs in development and monitoring.

Development

In development mode, Openlayer becomes a step in your CI/CD pipeline, and your tests are automatically evaluated whenever they are triggered, for example, by a new commit push.

Openlayer tests often rely on your AI system’s outputs on a validation dataset. As discussed in the Configuring output generation guide, you have two options:

  1. either provide a way for Openlayer to run your AI system on your datasets, or
  2. before pushing, generate the model outputs yourself and push them alongside your artifacts (see the sketch after this list).
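
If you go with the second option, a minimal sketch of precomputing outputs is shown below. It assumes your validation set is a CSV file with a prompt column; the file name, column names, and model choice are illustrative, not requirements of Openlayer.

Python
# Sketch of option 2: compute your system's outputs yourself before pushing.
# Assumes a validation set with a "prompt" column; names here are illustrative.
import anthropic
import pandas as pd

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

validation_set = pd.read_csv("validation_set.csv")

def generate_output(prompt: str) -> str:
    """Run the Anthropic LLM on a single validation prompt."""
    message = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text

# Store the model outputs alongside the inputs and push the dataset as usual.
validation_set["output"] = validation_set["prompt"].apply(generate_output)
validation_set.to_csv("validation_set_with_outputs.csv", index=False)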

For AI systems built with Anthropic LLMs, if you are not computing your system’s outputs yourself, you must provide your API credentials.

To do so, navigate to “Settings” > “Workspace secrets,” and add the ANTHROPIC_API_KEY secret.

If you don’t add the required Anthropic API key, you’ll encounter a “Missing API key” error when Openlayer tries to run your AI system to get its outputs.

Monitoring

To use the monitoring mode, you must set up a way to publish the requests your AI system receives to the Openlayer platform. This process is streamlined for Anthropic LLMs.

To set it up, you must follow the steps in the code snippet below:

Python
# 1. Set the environment variables
import anthropic
import os

os.environ["ANTHROPIC_API_KEY"] = "YOUR_ANTHROPIC_API_KEY_HERE"
os.environ["OPENLAYER_API_KEY"] = "YOUR_OPENLAYER_API_KEY_HERE"
os.environ["OPENLAYER_INFERENCE_PIPELINE_ID"] = "YOUR_OPENLAYER_INFERENCE_PIPELINE_ID_HERE"

# 2. Import the `trace_anthropic` function and wrap the Anthropic client with it
from openlayer.lib import trace_anthropic

anthropic_client = trace_anthropic(anthropic.Anthropic())

# 3. From now on, every message creation call made with
# the `anthropic_client` is traced by Openlayer. E.g.,
completion = anthropic_client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "How are you doing today?"}
    ],
)

See full Python example

Once the code is set up, all your Anthropic LLM calls are automatically published to Openlayer, along with metadata, such as latency, number of tokens, cost estimate, and more.

If you navigate to the “Requests” page of your Openlayer inference pipeline, you can see the traces for each request.

If the Anthropic LLM call is just one of the steps of your AI system, you can use the code snippet above together with tracing. In this case, your Anthropic LLM calls get added as a step of a larger trace. Refer to the Tracing guide for details.
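
As an illustration, the sketch below nests the traced Anthropic client inside a larger traced function. It assumes the `trace` decorator exported from `openlayer.lib` (described in the Tracing guide); the function names and retrieval step are hypothetical placeholders.

Python
# Sketch of tracing a multi-step system that includes an Anthropic LLM call.
# Assumes the `trace` decorator from `openlayer.lib`; names are illustrative.
import anthropic
from openlayer.lib import trace, trace_anthropic

anthropic_client = trace_anthropic(anthropic.Anthropic())

@trace()
def retrieve_context(query: str) -> str:
    # Placeholder for a retrieval step (e.g., a vector database lookup).
    return "Some context relevant to the query."

@trace()
def answer_question(query: str) -> str:
    context = retrieve_context(query)
    completion = anthropic_client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=1024,
        messages=[{"role": "user", "content": f"{context}\n\n{query}"}],
    )
    return completion.content[0].text

# The Anthropic call shows up as a step within the `answer_question` trace.
answer_question("How are you doing today?")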

Once your AI system’s requests are being continuously published and logged by Openlayer, you can create tests that run at a regular cadence on top of them.

Refer to the Monitoring overview for details on Openlayer’s monitoring mode, to the Publishing data guide for more information on setting it up, or to the Tracing guide to understand how to trace more complex systems.

Anthropic LLM evaluator

Some tests on Openlayer rely on scores produced by an LLM evaluator, such as tests that use Ragas metrics and the custom LLM evaluator test.

You can use any of Anthropic’s LLMs as the underlying LLM evaluator for these tests.

You can change the default LLM evaluator for a project on the project settings page. To do so, navigate to “Settings” > select your project in the left sidebar > click “Metrics” to go to the metric settings page. Under “LLM evaluator,” choose the Anthropic LLM you want to use.

Also, make sure to add your ANTHROPIC_API_KEY as a workspace secret, as described above.