
The Vienna Manifesto

Principles for governed AI execution

We are building a world where AI agents act autonomously on behalf of humans and organizations. They will deploy code, move money, manage infrastructure, make business decisions, and interact with the physical world. This is not a future scenario — it is happening now.

The question is not whether AI agents will be autonomous. The question is whether that autonomy will be governed. These are our beliefs about what governed means.

Principle I

No execution without authorization

Every autonomous action must be explicitly authorized before it happens. The default for any AI agent should be denial, not permission. This isn't a limitation — it's the foundation of trust between humans and machines.

Principle II

Authorization must be cryptographic, not cosmetic

A log entry is not authorization. A permissions matrix is not authorization. True authorization means a verifiable, tamper-evident, time-limited token that binds intent to approval to execution. If you can't prove it mathematically, you can't prove it.
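The idea of a verifiable, tamper-evident, time-limited token can be sketched in a few lines. This is a minimal illustration, not a real Vienna OS API: the function names (`mint_token`, `verify_token`), the field layout, and the HMAC-over-JSON scheme are all assumptions made for the example.

```python
import hashlib
import hmac
import json
import time

# Illustrative signing key; a real system would use a managed key, never a constant.
SECRET = b"governance-signing-key"


def mint_token(intent: dict, approver: str, ttl_seconds: int = 60) -> dict:
    """Bind a stated intent to an approval, signed and time-limited."""
    payload = {
        "intent": intent,                          # what the agent wants to do
        "approver": approver,                      # who authorized it
        "expires_at": time.time() + ttl_seconds,   # time-limited by design
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return payload


def verify_token(token: dict, intent: dict) -> bool:
    """A token is valid only if untampered, bound to this exact intent, and unexpired."""
    claimed = dict(token)
    sig = claimed.pop("signature", "")
    body = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(sig, expected)        # tamper-evident
        and claimed["intent"] == intent           # bound: approval covers this action only
        and time.time() < claimed["expires_at"]   # not replayable forever
    )
```

Note what the binding buys: a token approved for one intent fails verification against any other, and any edit to the payload after signing invalidates the signature.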

Principle III

Risk must be classified, not assumed

Not all actions are equal. A health check and a wire transfer have fundamentally different risk profiles and deserve fundamentally different governance. Systems that treat all actions identically are either too restrictive to be useful or too permissive to be safe.

Principle IV

Humans must remain in the loop — but not in the way

Human oversight is non-negotiable for high-risk actions. But a system that requires human approval for every read query is a system that will be circumvented. The art is knowing which actions need human judgment and which don't. Risk tiering solves this.
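Risk tiering turns "human in the loop" into a routing decision rather than a blanket rule. The route names below ("auto", "policy_check", "human") are hypothetical; the point is that only the high tier blocks on a person, and anything unrecognized escalates to one.

```python
def approval_route(tier: str) -> str:
    """Map a risk tier to an approval mechanism (illustrative names)."""
    routes = {
        "low": "auto",            # approved automatically under standing policy
        "medium": "policy_check", # approved by an automated policy engine
        "high": "human",          # blocks until a person approves
    }
    # Unknown tiers escalate to a human rather than slipping through.
    return routes.get(tier, "human")
```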

Principle V

Audit trails must be immutable and complete

When something goes wrong — and it will — you need an unimpeachable record of what happened, who authorized it, and why. Mutable logs are fiction. Only cryptographically linked, append-only records can be trusted as evidence.
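"Cryptographically linked, append-only" usually means a hash chain: each entry commits to the hash of the one before it, so rewriting any past entry breaks every link after it. This is a minimal sketch with assumed field names, not a production log format.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel hash for the first entry


def append_entry(log: list, event: dict) -> list:
    """Append an event whose hash covers both the event and the previous hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({
        "event": event,
        "prev": prev_hash,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    })
    return log


def verify_chain(log: list) -> bool:
    """Recompute every link; any edited or reordered entry breaks the chain."""
    prev_hash = GENESIS
    for entry in log:
        body = json.dumps({"event": entry["event"], "prev": prev_hash},
                          sort_keys=True)
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True
```

The chain does not prevent tampering; it makes tampering evident, which is what turns a log into evidence.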

Principle VI

Governance must be infrastructure, not afterthought

Security bolted on after deployment is theater. Governance must be in the execution path — not watching from the sidelines. The agent doesn't decide whether to ask permission; the system won't execute without it.
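"In the execution path" can be made concrete: the governance check is the only way to reach the action, so there is no code path that runs without it. A sketch, with hypothetical names:

```python
class AuthorizationError(Exception):
    """Raised when execution is attempted without valid authorization."""


def execute(action, token, verify):
    """Run `action` only if `verify(token)` passes; the check is not optional.

    `action` is a zero-argument callable; `verify` is whatever token
    verification the system uses. The agent never chooses whether to ask.
    """
    if not verify(token):
        raise AuthorizationError(f"no valid authorization for {action!r}")
    return action()  # only reachable after the check succeeds
```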

Principle VII

The cost of governance must be less than the cost of failure

If your governance system is slower than your agents, people will route around it. If it's more expensive than the risks it prevents, no one will use it. Governance that works is governance that's fast enough to be invisible and cheap enough to be default.

These principles are not aspirational. They are implemented in code. Vienna OS exists because we believe the gap between “AI agents should be governed” and “AI agents are governed” must be closed with infrastructure, not intentions.

If you believe the same, join us.

— The Vienna OS Team, March 2026