Glossary
AI Red Teaming
AI red teaming is the practice of systematically probing an AI system for vulnerabilities, biases, and failure modes by simulating real-world attacks and edge cases. It is increasingly required by regulation for high-risk AI systems.