
Responsible Disclosure (AI)

Reviewed 9 April 2026 · Canonical definition

Responsible disclosure for AI is the practice of privately reporting vulnerabilities found in AI systems — such as prompt injection flaws, MCP server weaknesses, or agent authentication bypasses — to the affected organisation before publishing them, giving it time to remediate. It is the AI analogue of the coordinated vulnerability disclosure (CVD) practices long established in traditional cybersecurity.
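In traditional cybersecurity, one common way an organisation advertises its private reporting channel is a `security.txt` file (RFC 9116) served at `/.well-known/security.txt`; the same mechanism can point researchers at an AI-specific disclosure contact. A minimal illustrative example, with placeholder contact address and policy URL:

```txt
# Illustrative security.txt (RFC 9116) — contact and URLs are placeholders
Contact: mailto:ai-security@example.com
Expires: 2027-01-01T00:00:00Z
Policy: https://example.com/ai-vulnerability-disclosure-policy
Preferred-Languages: en
```

The `Contact` and `Policy` fields tell a researcher where to send a report and what remediation timeline to expect before publication; `Expires` is required by the RFC so stale contact details are not trusted indefinitely.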