ai-prompt-safety-waf
AI prompt safety in application code: input sanitization, output validation, structured output enforcement, content filtering, token budget management, system message protection, and secure logging for LLM-integrated applications.
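As a minimal sketch of two of the practices listed above (input sanitization and token budget management), the following Python snippet is illustrative only — the function names, injection patterns, and budget limit are assumptions, not part of the instruction file itself:

```python
import re

# Illustrative patterns; a production filter would be far more extensive.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
]

MAX_INPUT_TOKENS = 512  # hypothetical budget


def approx_token_count(text: str) -> int:
    # Rough whitespace-based estimate; a real app would use the
    # model's own tokenizer for an accurate count.
    return len(text.split())


def sanitize_user_input(text: str) -> str:
    """Strip non-printable characters, reject likely injections,
    and enforce a simple token budget before prompting the model."""
    cleaned = "".join(ch for ch in text if ch.isprintable() or ch in "\n\t")
    for pattern in INJECTION_PATTERNS:
        if pattern.search(cleaned):
            raise ValueError("possible prompt injection detected")
    if approx_token_count(cleaned) > MAX_INPUT_TOKENS:
        raise ValueError("input exceeds token budget")
    return cleaned
```

A real implementation would combine this kind of pre-flight check with output validation and content filtering on the model's response, per the instruction's scope.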
Overview
| Property | Value |
|---|---|
| Type | Instruction |
| File | instructions/ai-prompt-safety-waf.instructions.md |
| Applies To | **/*.py, **/*.ts, **/*.js |
| WAF Alignment | responsible-ai, security, cost-optimization |
| Lines | 230 |
How It Works
Instructions are automatically applied to files matching the applyTo glob pattern. When a developer opens a matching file in VS Code with GitHub Copilot, this instruction's content is injected into the AI context.
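For illustration, a Copilot instructions file typically declares its glob scope in YAML frontmatter; the body text below the frontmatter is what gets injected. The exact wording here is a hypothetical sketch, not the actual file contents:

```markdown
---
applyTo: "**/*.py, **/*.ts, **/*.js"
---

# AI Prompt Safety

- Sanitize all user input before interpolating it into prompts.
- Validate and schema-check model output before acting on it.
- Never log raw prompts containing secrets or PII.
```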
Source
Auto-generated from the FrootAI primitive catalog.