The AI Governance Stack: Where Does Each Tool Fit?
As AI systems become more autonomous, a complex ecosystem of governance tools has emerged. Each addresses different aspects of AI safety, but confusion about when to use which tool is widespread. Let's map the AI governance landscape and show how Vienna OS complements existing tools.
The key insight? Different layers of the AI stack require different governance approaches. A chatbot needs content filtering. An autonomous agent needs execution control.
The Four Layers of AI Governance
Layer 1: Prompt Layer (Input/Output Filtering)
What it governs: Content generated by AI models
Primary tools: Guardrails AI, OpenAI moderation, Azure Content Safety
Timeline: Reactive (after generation, before display)
Layer 2: Observability Layer (Model Performance)
What it governs: Model behavior, performance drift, data quality
Primary tools: Arthur, Arize, WhyLabs, MLflow
Timeline: Reactive (continuous monitoring)
Layer 3: Documentation Layer (Compliance)
What it governs: Model documentation, risk assessments
Primary tools: Credo AI, ModelOp, H2O Model Risk Management
Timeline: Proactive + Ongoing documentation
Layer 4: Enforcement Layer (Execution Control)
What it governs: Actions taken by AI agents in production
Primary tools: Vienna OS
Timeline: Proactive (before execution)
The crucial distinction: Layers 1-3 are advisory—they tell you when something might be wrong. Layer 4 is enforcement—it prevents wrong actions from happening.
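The advisory/enforcement distinction can be sketched in a few lines. This is an illustrative example, not any tool's actual API: the advisory check mirrors Layers 1-3 (it observes and warns, but the action proceeds), while the enforcement check mirrors Layer 4 (an unauthorized action cannot run at all).

```python
def advisory_check(action: dict) -> dict:
    """Layers 1-3: observe and report; the action proceeds regardless."""
    if action["amount"] > 10_000:
        print(f"WARNING: large transfer flagged: {action}")
    return action  # always returned -- nothing is stopped


def enforcement_check(action: dict) -> dict:
    """Layer 4: the action simply cannot proceed without authorization."""
    if action["amount"] > 10_000 and not action.get("warrant_approved"):
        raise PermissionError("blocked: no approved warrant for large transfer")
    return action


advisory_check({"amount": 50_000})        # flagged, but still allowed through
try:
    enforcement_check({"amount": 50_000})  # blocked outright
except PermissionError as e:
    print(e)
```

The difference is structural, not just a matter of severity: an advisory tool's output can be ignored by downstream code, while an enforcement check sits in the execution path itself.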
Guardrails AI (Prompt Layer)
Guardrails AI excels at content-focused AI applications, filtering inputs and outputs for safe, appropriate content.
How Guardrails AI Works
```python
from guardrails import Guard
from guardrails.hub import ProfanityFree, ToxicLanguage

# Validators run against the model's output; on_fail="reask"
# asks the model to regenerate when a check fails.
guard = (
    Guard()
    .use(ProfanityFree, on_fail="reask")
    .use(ToxicLanguage, threshold=0.8, on_fail="reask")
)

response = guard(
    messages=[{"role": "user", "content": user_input}],
    model="gpt-4",
)
```
Guardrails AI Strengths
- Filtering inputs and outputs for safe, appropriate content
- A library of reusable validators (profanity, toxicity, format checks)
- Flexible failure handling: reask the model, fix the output, or raise
Where Guardrails AI Falls Short
Guardrails AI can't help with:
- Controlling what an agent actually does once its output passes the filter
- Authorizing or blocking actions in production systems (transfers, trades, deletions)
- Providing an audit trail of which actions were approved or denied
Vienna OS (Enforcement Layer)
Vienna OS operates at the enforcement layer, controlling what AI agents can actually do in production systems.
How Vienna OS Works
```typescript
const vienna = new ViennaClient();

async function transferFunds(amount: number) {
  // Request authorization before acting, not after.
  const warrant = await vienna.requestWarrant({
    intent: 'transfer_funds',
    payload: { amount },
    justification: 'Customer refund request'
  });

  // The transfer only runs with an approved warrant.
  if (warrant.approved) {
    await bank.transfer(warrant.payload);
  }
}
```
Vienna OS's Unique Position
Vienna OS is the only layer in this stack that operates before execution. Instead of flagging a problem after the fact, it requires an approved warrant before an agent's action can run at all. The other three layers inform; the enforcement layer decides.
The Complete Stack Working Together
┌─────────────────────────────────────────┐
│ AI Application │
├─────────────────────────────────────────┤
│ Layer 4: Execution Control (Vienna OS) │
│ ✓ Warrant-based authorization │
├─────────────────────────────────────────┤
│ Layer 3: Documentation (Credo AI) │
│ ✓ Risk assessments & compliance │
├─────────────────────────────────────────┤
│ Layer 2: Observability (Arthur) │
│ ✓ Model performance monitoring │
├─────────────────────────────────────────┤
│ Layer 1: Content Safety (Guardrails) │
│ ✓ Input/output filtering │
└─────────────────────────────────────────┘
Real-World Integration
Here's how all layers work together in an AI trading system:
```python
# Layer 1: Content Safety -- validate the human-readable trade description
safe_description = content_guard.parse(trade_description)

# Layer 2: Model Monitoring -- log the prediction for drift detection
arthur.log_prediction(inputs=market_data, outputs=signal)

# Layer 3: Documentation -- risk assessment and compliance docs
# are generated and maintained out of band

# Layer 4: Execution Control -- request authorization before acting
warrant = await vienna.requestWarrant({
    "intent": "execute_trade",
    "resource": symbol,
    "justification": "Model-generated signal",
})

if warrant.approved and arthur.model_healthy and safe_description:
    await broker.execute_trade(warrant.payload)
```
Tool Comparison Matrix
| Tool | Layer | What It Governs | Timeline | Mode |
|---------|---------------|--------|----------|-----------|
| Guardrails AI | 1: Prompt | Generated content | Reactive | Advisory |
| Arthur / Arize / WhyLabs | 2: Observability | Model behavior & drift | Reactive | Advisory |
| Credo AI / ModelOp | 3: Documentation | Risk assessments & compliance | Proactive + ongoing | Advisory |
| Vienna OS | 4: Enforcement | Agent actions in production | Proactive | Enforcement |
When to Use Each Tool
Use Guardrails AI When:
- Your AI generates content shown to users (chat, summaries, documents)
- You need input/output filtering for safe, appropriate content
Use Vienna OS When:
- Your AI agents take actions in production systems
- A wrong action would be costly or irreversible (payments, trades, deletions)
- You need authorization before execution, not an alert after it
The Key Insight: Enforcement vs Advisory
*Advisory Tools (Layers 1-3):* They observe, measure, and document. They can tell you something might be wrong, but the system can still act on a bad output.
*Enforcement Tools (Layer 4):* They sit in the execution path. A disallowed action is not merely flagged; it never runs.
This is why Vienna OS's guiding principle is: "We don't ask agents to behave — we remove their ability to misbehave."
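"Removing the ability to misbehave" is a capability pattern: the agent is never handed the raw privileged operation, only a gated wrapper. This is a minimal sketch of that idea; all names here (`GatedBroker`, `is_approved`) are illustrative and not Vienna OS's actual API.

```python
class GatedBroker:
    """The agent holds this gated handle, never the real broker."""

    def __init__(self, broker_call, is_approved):
        self._broker_call = broker_call   # the real, privileged operation
        self._is_approved = is_approved   # warrant lookup, e.g. a policy service

    def execute(self, warrant_id: str, order: dict):
        # Deny by default: no approved warrant, no execution.
        if not self._is_approved(warrant_id):
            raise PermissionError(f"warrant {warrant_id!r} not approved")
        return self._broker_call(order)


# Usage: the agent only ever sees `gated`, never `real_broker`.
approved_warrants = {"w-123"}
real_broker = lambda order: f"executed {order['symbol']}"
gated = GatedBroker(real_broker, approved_warrants.__contains__)
```

Because the unwrapped operation is simply out of the agent's reach, there is no code path by which a misbehaving agent can skip the check.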
Building Your Strategy
When planning AI governance:
1. Start with risk assessment for each layer
2. Implement content safety first (often easiest)
3. Add monitoring as you deploy ML systems
4. Implement execution control for high-stakes AI
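The rollout order above can be captured as a simple plan: each system type gets the layers appropriate to its risk, added in the sequence the steps describe. The system-type names and layer labels here are illustrative, not a standard taxonomy.

```python
# Hypothetical mapping from system type to the governance layers to
# implement, in rollout order (content safety first, execution control
# reserved for high-stakes, action-taking AI).
GOVERNANCE_PLAN = {
    "chatbot":          ["content_safety"],
    "ml_scoring":       ["content_safety", "observability"],
    "regulated_model":  ["content_safety", "observability", "documentation"],
    "autonomous_agent": ["content_safety", "observability",
                         "documentation", "execution_control"],
}


def required_layers(system_type: str) -> list:
    # Unknown systems still get the easiest, broadest control.
    return GOVERNANCE_PLAN.get(system_type, ["content_safety"])
```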
The most robust implementations use tools from all layers. They're complementary technologies addressing different aspects of AI risk.
Learn more about building a complete AI governance stack. Read our documentation →