Agent tool use has no permission model — it is all-or-nothing trust
When you give an AI agent access to a tool (shell, database, file system, API), it gets full, unrestricted access. An agent with shell access can rm -rf / as easily as it can run a test. There is no equivalent of Linux capabilities, OAuth scopes, or IAM policies for agent tools.

So what? Every enterprise CTO faces a binary choice: give the agent full access (unacceptable risk) or no access (useless agent). They choose no access. This is why enterprise agent adoption is near zero for anything touching production systems.

Why does this matter in the first place? The highest-value agent use cases, such as database migrations, infrastructure changes, deployment pipelines, and customer data processing, are exactly the ones that require access to dangerous tools. The low-risk tasks (writing docs, answering questions) are low-value. Agents are stuck doing easy, cheap work because they cannot be trusted with hard, valuable work.

The structural reason: LLMs are non-deterministic, so you cannot prove in advance that an agent will not take a destructive action. Traditional software gets permissions because it is deterministic: you can audit the code. An agent's behavior depends on its prompt, its context, and the model weights, which makes static analysis of that behavior impossible. Nobody has built the runtime permission enforcement layer that would make fine-grained agent permissions viable.
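To make the missing layer concrete, here is a minimal sketch of what runtime enforcement could look like: a declarative allow/deny policy checked on every tool call, default-deny. Everything here (ToolPolicy, guarded_shell, the glob-pattern rules) is hypothetical; no agent framework ships this today.

import fnmatch
import shlex
import subprocess
from dataclasses import dataclass, field


@dataclass
class ToolPolicy:
    # Declarative allow/deny rules over shell commands, evaluated per call.
    # Deny patterns win over allow patterns; anything unmatched is denied.
    allow: list[str] = field(default_factory=list)  # e.g. "git status*"
    deny: list[str] = field(default_factory=list)   # e.g. "rm *"

    def permits(self, command: str) -> bool:
        if any(fnmatch.fnmatch(command, p) for p in self.deny):
            return False
        return any(fnmatch.fnmatch(command, p) for p in self.allow)


def guarded_shell(policy: ToolPolicy, command: str) -> str:
    # Runtime enforcement: the check happens at call time, because the
    # agent's behavior cannot be verified ahead of time.
    if not policy.permits(command):
        raise PermissionError(f"policy denies: {command!r}")
    result = subprocess.run(shlex.split(command), capture_output=True, text=True)
    return result.stdout


policy = ToolPolicy(allow=["ls*", "pytest*", "git status*"],
                    deny=["rm *", "git push*"])
print(guarded_shell(policy, "ls -la"))  # allowed: matches "ls*"
try:
    guarded_shell(policy, "rm -rf /")   # denied: matches "rm *"
except PermissionError as exc:
    print(exc)

The key design choice is that the policy lives outside the prompt: the agent can still emit any command it likes, but enforcement happens in the tool wrapper, not in the model.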
Evidence
- Claude Code requires manual approval for destructive commands but has no declarative policy file (a sketch of such an approval gate follows below).
- The OpenAI Assistants API has no tool-level permissions.
- LangChain tools have no authorization layer.
- Reports of agents accidentally deleting files and dropping tables: https://news.ycombinator.com/item?id=39847123
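Claude Code's manual-approval flow suggests a third option between allow and deny: escalate to a human at runtime. Below is a minimal sketch of such a gate, under the assumption that destructive commands can be recognized by pattern; is_destructive, gated_run, and the DESTRUCTIVE list are all hypothetical, not any tool's actual API.

import fnmatch

# Patterns that should trigger escalation; illustrative only.
DESTRUCTIVE = ["rm *", "drop table*", "truncate *", "git push --force*"]


def is_destructive(command: str) -> bool:
    return any(fnmatch.fnmatch(command.lower(), p) for p in DESTRUCTIVE)


def gated_run(command: str, execute) -> str:
    # Escalate to a human instead of failing outright; a real system would
    # queue the request for review rather than block on input().
    if is_destructive(command):
        answer = input(f"Agent wants to run {command!r}. Allow? [y/N] ")
        if answer.strip().lower() != "y":
            raise PermissionError(f"human denied: {command!r}")
    return execute(command)

Used as, say, gated_run("drop table users;", db.execute), this reproduces the approval-prompt behavior generically: safe commands pass through, dangerous ones wait for a human.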