Documentation Index
Fetch the complete documentation index at: https://openlayer.com/docs/llms.txt
Use this file to discover all available pages before exploring further.
Definition
The prompt injection test (built with Llama) checks for prompt injection, malicious strings and jailbreak attempts in the input data of your system.Taxonomy
- Task types: LLM.
- Availability: and .
Why it matters
- Prompt injection is a type of attack that exploits an AI system and deviates it from its intended behavior.
- It is important to detect and prevent prompt injection attacks to ensure the reliability and security of your system.
Test configuration examples
If you are writing atests.json, here are a few valid configurations for the character length test:

