
ai-prompt-safety-waf

AI prompt safety in application code: input sanitization, output validation, structured output enforcement, content filtering, token budget management, system message protection, and secure logging for LLM-integrated applications.

Overview

Type: Instruction
File: instructions/ai-prompt-safety-waf.instructions.md
Applies To: **/*.py, **/*.ts, **/*.js
WAF Alignment: responsible-ai, security, cost-optimization
Lines: 230
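As a rough sketch of how the properties above typically appear in the instruction file itself, GitHub Copilot instruction files (`*.instructions.md`) carry an `applyTo` glob in YAML front matter followed by the instruction body. The body text here is illustrative, not the actual 230-line content of this file:

```markdown
---
applyTo: "**/*.py, **/*.ts, **/*.js"
description: AI prompt safety in application code
---

# AI Prompt Safety (WAF)

Sanitize all user input before interpolating it into prompts...
```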

How It Works

Instructions are automatically applied to files matching the applyTo glob pattern. When a developer opens a matching file in VS Code with GitHub Copilot, this instruction’s content is injected into the AI context.
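The matching step described above can be sketched in Python. This is a simplified illustration of glob matching against the `applyTo` patterns, not the actual VS Code implementation; `matches_apply_to` and `APPLY_TO` are hypothetical names:

```python
from pathlib import PurePosixPath

# The applyTo patterns from this instruction's metadata
APPLY_TO = ["**/*.py", "**/*.ts", "**/*.js"]

def matches_apply_to(path: str, patterns=APPLY_TO) -> bool:
    """Return True if the open file should receive this instruction's content."""
    p = PurePosixPath(path)
    # "**/*.ext" matches a file with that extension at any depth.
    # PurePath.match() anchors patterns from the right, so matching the
    # trailing "*.ext" component reproduces that behavior.
    return any(p.match(pat.removeprefix("**/")) for pat in patterns)
```

When a matching file is open, the editor injects the instruction body into the model's context alongside the developer's prompt.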

Source


Auto-generated from the FrootAI primitive catalog.
