Skip to main content

Documentation Index

Fetch the complete documentation index at: https://openlayer.com/docs/llms.txt

Use this file to discover all available pages before exploring further.

Definition

The prompt injection test (built with Llama) checks for prompt injection, malicious strings and jailbreak attempts in the input data of your system.

Taxonomy

  • Task types: LLM.
  • Availability: and .

Why it matters

  • Prompt injection is a type of attack that exploits an AI system and deviates it from its intended behavior.
  • It is important to detect and prevent prompt injection attacks to ensure the reliability and security of your system.

Test configuration examples

If you are writing a tests.json, here are a few valid configurations for the character length test:
[
  {
    "name": "No prompt injection",
    "description": "Asserts that the input data has no prompt injection attempts",
    "type": "integrity",
    "subtype": "hasPromptInjectionCount",
    "thresholds": [
      {
        "insightName": "hasPromptInjectionCount",
        "measurement": "hasPromptInjectionPercentage",
        "operator": "<=",
        "value": 0.0
      }
    ],
    "subpopulationFilters": null,
    "mode": "development",
    "usesValidationDataset": true, // Apply test to the validation set
    "usesTrainingDataset": false,
    "usesMlModel": false,
    "syncId": "b4dee7dc-4f15-48ca-a282-63e2c04e0689" // Some unique id
  }
]