Glossary
Guardrails
Guardrails are runtime constraints that limit what an AI agent can do, what data it can access, and how it can respond. They are designed to remain enforceable even if the agent reasons toward an unsafe action.
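The enforcement-outside-the-reasoning-loop idea can be sketched in a few lines. This is a minimal illustration, not a real framework API: the tool names, the `guarded_call` helper, and the allowlist policy are all hypothetical.

```python
# Hypothetical sketch: a guardrail enforced outside the agent's reasoning.
# Because the check wraps every tool invocation, it holds even if the agent
# "decides" to attempt a blocked action.

ALLOWED_TOOLS = {"search", "calculator"}  # assumed policy: explicit allowlist


def guarded_call(tool_name, tool_fn, *args):
    """Refuse to execute any tool that is not on the allowlist."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"guardrail blocked tool: {tool_name}")
    return tool_fn(*args)


# An allowed tool runs normally:
result = guarded_call("calculator", lambda a, b: a + b, 2, 3)

# A disallowed tool is rejected before it executes, regardless of what
# reasoning led the agent to request it:
try:
    guarded_call("delete_files", lambda path: None, "/tmp")
except PermissionError as exc:
    print(exc)
```

The key design point is that `guarded_call` sits between the agent and the tool: the constraint is checked by ordinary code the agent cannot rewrite, which is what makes it enforceable rather than advisory.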