Glossary
Meaningful Human Control
Meaningful human control is the requirement that a human retain genuine authority over high-stakes AI agent decisions: not merely theoretical oversight, but the practical ability to understand, review, override, or stop an agent's actions. The concept is increasingly embedded in AI regulations and risk frameworks. It goes beyond token approval steps: human reviewers must be given sufficient context about what the agent proposes, why, and with what consequences, so that their approval or refusal is an informed decision rather than a rubber stamp.
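One way to make the definition concrete is a gate that refuses to run a high-stakes action unless a human reviewer, shown the agent's full proposal and rationale, explicitly approves. The sketch below is purely illustrative: the names (`ProposedAction`, `execute_with_human_control`) and the approve/reject/stop vocabulary are assumptions for this example, not drawn from any particular regulation or framework.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    """An agent's proposed high-stakes action plus the context a reviewer needs."""
    description: str   # what the agent wants to do
    rationale: str     # why the agent chose this action
    reversible: bool   # whether the action can be undone afterward

def execute_with_human_control(
    action: ProposedAction,
    reviewer: Callable[[ProposedAction], str],
    run: Callable[[], str],
) -> str:
    """Run `action` only if the reviewer, given full context, approves.

    The reviewer returns "approve", "reject", or "stop". Any other
    response is treated as a rejection, so the safe default is inaction.
    """
    decision = reviewer(action)
    if decision == "approve":
        return run()
    if decision == "stop":
        return "halted: reviewer stopped the agent"
    return "blocked: reviewer did not approve"
```

Passing the reviewer in as a callable keeps the control point explicit and testable; in practice it would surface the `ProposedAction` fields in a review UI, giving the human the context the definition requires before anything irreversible happens.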