Agents Are Steps, Not Owners
Designing Agentic Systems with Good Habits: Embedded in Workflows
The temptation of “God Mode”
Once an agent can understand intent, call tools, and reason over context, there is a strong temptation to let it handle everything end to end.
At first, this feels efficient. There is less plumbing. Fewer handoffs. One place to reason.
The agent receives the request. The agent figures out what to do. The agent calls internal systems. The agent decides the outcome. The agent explains the result.
What actually happens is quieter. Workflows dissolve. Policies become suggestions. And the agent slowly becomes the system.
By the time something goes wrong, it is no longer clear where the rules live.
This article is part of a series where I walk through how I think about designing agentic systems using a set of practical habits.
Rather than talking about agents in the abstract, I’m using a concrete example throughout the series: a customer support chatbot for a power utility company. This is not a real system and not something I’ve built. It’s a thought exercise designed to surface real design tradeoffs that show up in production systems.
The chatbot helps customers start and stop service, ask billing questions, report outages, request maintenance, challenge bills, apply for rebates, and request refunds. Behind the scenes, it interfaces with identity systems, billing platforms, internal policy documents, regulatory constraints, and operational workflows.
In other words, it looks simple from the outside and is complicated everywhere else.
Across this series, I evolve this system one habit at a time. Each article focuses on a single habit and shows how it changes the way the system is designed, where responsibility lives, and how risk is managed.
This article explores the habit: Embedded in Workflows.
In the Agent Habits framework, this habit emphasizes placing agents inside existing systems and processes rather than giving them end-to-end ownership.
Agents are not workflows
Most organizations already have workflows that matter. Billing workflows. Service activation workflows. Refund and dispute workflows. Outage and maintenance workflows.
These exist for a reason. They encode policy, regulatory constraints, validation rules, approvals, and auditability.
When we put an agent in charge of the workflow instead of inside it, we slowly migrate those rules out of hard systems and into prompts, memory, and model behavior.
That is not flexibility. It is entropy.
The habit here is simple. Agents should be embedded as steps in workflows, not positioned as owners of them.
Applying this to the utility chatbot
Let’s look at what this means for our power utility example. When a customer wants to dispute a bill, the conversational agent should not decide whether the dispute is valid.
Instead, it should:
Understand the customer’s intent
Collect the required information and documents
Submit that information to an existing billing dispute workflow
Receive a structured outcome
Communicate that outcome clearly to the customer
The decision does not live in the agent. The agent is the interface between a human and a system that already knows how to make that decision.
The same applies to refunds, rebates, service changes, and maintenance requests.
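The step breakdown above can be sketched in code. This is a minimal, hypothetical Python sketch, not a real implementation: `DisputeRequest`, `DisputeOutcome`, `submit_dispute`, and `handle_dispute_turn` are all invented names, and the workflow call is a stand-in for whatever billing dispute system already exists. The point is the shape: the agent builds a structured request, hands it to the workflow, and translates the structured outcome back into language.

```python
from dataclasses import dataclass

# Hypothetical structured types. In a real system these would mirror the
# billing platform's own schema, not something the agent invents.
@dataclass
class DisputeRequest:
    account_id: str
    bill_id: str
    reason: str
    documents: list

@dataclass
class DisputeOutcome:
    status: str      # e.g. "under_review", "approved", "rejected"
    reference: str
    next_step: str

def submit_dispute(req: DisputeRequest) -> DisputeOutcome:
    """Stand-in for the existing billing dispute workflow.

    The decision logic lives here (or rather, in the real system this
    represents) -- never in the agent."""
    return DisputeOutcome(
        status="under_review",
        reference=f"DSP-{req.bill_id}",
        next_step="A billing specialist will respond within 5 business days.",
    )

def handle_dispute_turn(account_id: str, bill_id: str,
                        reason: str, documents: list) -> str:
    # Steps 1-2: intent and required information were gathered by the
    # conversational layer before this function is called.
    req = DisputeRequest(account_id, bill_id, reason, documents)

    # Steps 3-4: submit to the workflow, receive a structured outcome.
    outcome = submit_dispute(req)

    # Step 5: translate the structured outcome into customer-facing language.
    return (f"Your dispute for bill {bill_id} has been filed "
            f"(reference {outcome.reference}). {outcome.next_step}")
```

Notice that the agent never branches on whether the dispute is valid; it only relays a structured decision that the workflow already made.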
Agents as translation layers
A useful way to think about agents is as translation layers.
They translate user intent into structured inputs.
They translate system outputs into human language.
They translate errors into explanations.
They translate constraints into expectations.
They do not replace the systems that hold durable logic and policy. This is how you avoid rebuilding your company inside a prompt.
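One concrete instance of this translation role is error handling. The sketch below, with invented error codes and messages, shows an agent translating a billing system's structured error codes into explanations, with a safe generic fallback so the model never improvises policy for a code it does not recognize.

```python
# Hypothetical mapping from internal billing-system error codes to
# customer-facing explanations. The durable policy lives in the billing
# system; the agent layer only translates its outputs.
ERROR_EXPLANATIONS = {
    "E_PAST_DUE": (
        "This bill can't be adjusted right now because the account "
        "has a past-due balance."
    ),
    "E_WINDOW_CLOSED": (
        "The dispute window for this bill has closed, so it can no "
        "longer be challenged through this channel."
    ),
}

def explain(code: str) -> str:
    # Unknown codes get a generic, honest message instead of a guess.
    return ERROR_EXPLANATIONS.get(
        code,
        "We couldn't complete that request. A support agent will follow up.",
    )
```

The same pattern works in the other direction: a schema-validated request object is the translation of user intent into structured input.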
Why ownership matters
When agents own workflows, business logic becomes implicit. Policy enforcement becomes probabilistic. Accountability becomes unclear.
When something goes wrong, it becomes difficult to answer basic questions: Why did this decision happen? Who owns it? What rule was applied?
When agents are embedded in workflows, those questions remain answerable. Failures become diagnosable events instead of surprising behavior.
Autonomy is about placement, not power
This habit is sometimes misinterpreted as limiting what agents can do. That misses the point.
Autonomy is not about giving agents more authority. It is about placing their capabilities where they create leverage without creating risk.
An agent embedded in a workflow can still reason, adapt, and assist. It simply does not silently assume ownership of outcomes it should not own.
Connecting the habits
Clear roles define what an agent is responsible for. Embedding agents in workflows defines where that responsibility is exercised.
When either is missing, systems become fragile. When both are present, agentic systems become easier to reason about, easier to govern, and easier to evolve.
What’s next
In the next article, I’ll focus on permission boundaries. What agents are allowed to do, when they must defer, and why explicit constraints are often the difference between a system that scales and one that fails loudly.
The broader habits framework lives here:
https://agent-habits.github.io/