← Back to glossary
Glossary

AI Red Teaming

Reviewed 20 March 2026 Canonical definition

AI red teaming is the practice of systematically probing an AI system for vulnerabilities, biases, and failure modes by simulating real-world attacks and edge cases. It is increasingly required by regulation for high-risk AI systems.