HBR’s Framework Is Right: AI Agents Aren’t Tools — They’re Team Members
Source: “To Scale AI Agents Successfully, Think of Them Like Team Members” — Rahul Telang, Muhammad Zia Hydari, and Raja Iqbal. Harvard Business Review, March 23, 2026.
Harvard Business Review published something important this week. The article, by researchers Rahul Telang, Muhammad Zia Hydari, and Raja Iqbal, lays out the governance framework that will define enterprise agentic AI policy across regulated industries by Q3 2026.
The headline is “To Scale AI Agents Successfully, Think of Them Like Team Members.” That framing is deliberately provocative, and it’s exactly right.
The Core Argument
When an AI agent can execute — update records, issue refunds, route approvals, send communications on your organization’s behalf — it’s no longer a tool. It’s a participant. And participants require the same governance infrastructure you apply to human participants:
- Defined identity. Who is this agent? What role does it hold? Who owns it?
- Bounded authority. What can it do? What can it absolutely not do, even if instructed?
- Trusted information sources. What data can it read? What context is it operating in?
- Audit trails. What did it do, when, and why — with enough fidelity to reconstruct decisions?
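The four requirements above can be sketched as a single registry record per agent. This is a minimal illustration, not the authors' implementation; all field and function names are assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentRecord:
    # Defined identity: who the agent is, what role it holds, who owns it
    agent_id: str
    role: str
    owner_email: str
    # Bounded authority: allow-list of actions; everything else is denied
    allowed_actions: frozenset
    # Trusted information sources: the only data the agent may read
    readable_sources: frozenset

def is_authorized(record: AgentRecord, action: str) -> bool:
    """Deny by default: only explicitly granted actions pass."""
    return action in record.allowed_actions

# Hypothetical refund-processing agent with a narrow scope
billing_agent = AgentRecord(
    agent_id="agent-billing-01",
    role="refund-processor",
    owner_email="ops-lead@example.com",
    allowed_actions=frozenset({"issue_refund", "read_invoice"}),
    readable_sources=frozenset({"billing_db"}),
)

assert is_authorized(billing_agent, "issue_refund")
assert not is_authorized(billing_agent, "delete_customer")
```

The deny-by-default allow-list is the key design choice: an agent's scope is whatever is written down, not whatever it can be talked into.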
These aren’t compliance checkboxes. They’re operational necessities for any system that can take consequential action at machine speed.
The Companion Piece
HBR published a second article the same week: “Create an Onboarding Plan for AI Agents.” The framing makes the governance model concrete: agents need structure, feedback loops, and evaluation criteria — the same things a new hire needs in week one.
Most organizations are skipping this entirely. They’re deploying agents the way they deployed SaaS tools in 2015 — sign up, configure, ship. That workflow made sense for software that couldn’t take consequential action. It doesn’t work for software that can update your CRM, process your transactions, or communicate with your customers.
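What an onboarding plan looks like in practice is graduated autonomy: the agent earns execution rights only after clearing an evaluation gate, the way a new hire finishes probation. A minimal sketch, with stage names and thresholds that are purely illustrative assumptions:

```python
# Onboarding stages, from watch-only to fully autonomous.
# "gate" is the evaluation bar the agent must clear to advance.
STAGES = [
    {"name": "shadow",     "can_execute": False, "gate": {"reviewed": 100, "accuracy": 0.98}},
    {"name": "supervised", "can_execute": True,  "gate": {"reviewed": 500, "accuracy": 0.99}},
    {"name": "autonomous", "can_execute": True,  "gate": None},  # terminal stage
]

def eligible_for_promotion(stage_index: int, metrics: dict) -> bool:
    """True if the agent's human-reviewed metrics clear the current stage's gate."""
    gate = STAGES[stage_index]["gate"]
    if gate is None:  # already at the terminal stage
        return False
    return (metrics["reviewed"] >= gate["reviewed"]
            and metrics["accuracy"] >= gate["accuracy"])

# 120 reviewed actions at 99% accuracy clears the shadow-stage gate
assert eligible_for_promotion(0, {"reviewed": 120, "accuracy": 0.99})
# The same record does not clear the stricter supervised-stage gate
assert not eligible_for_promotion(1, {"reviewed": 120, "accuracy": 0.99})
```

The point of the structure is that promotion is a recorded decision against explicit criteria, not a default that happens because nobody objected.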
The Meta Incident as Object Lesson
When Meta’s AI system behaved unexpectedly in production on March 19-20, the governance failures weren’t technical. They were structural: unclear ownership of the agent’s behavior, undefined authority boundaries, and insufficient audit trail to reconstruct what happened and why.
That’s the pattern. Not a model failure — those are recoverable. A governance architecture failure — those compound.
The HBR framework directly addresses this structure. Bounded authority means the agent can’t exceed its defined scope even if instructed to. Explicit identity means a single owner is accountable when something goes wrong. Audit trails mean you can reconstruct decisions without relying on the agent to explain itself after the fact.
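Bounded authority and audit trails reinforce each other: the same gate that blocks an out-of-scope action should also record the attempt. A sketch of that pattern, with all names invented for illustration:

```python
import datetime

class AuthorityViolation(Exception):
    """Raised when an agent is asked to act outside its defined scope."""

class GovernedAgent:
    """Enforces a fixed action scope and keeps an append-only audit trail."""

    def __init__(self, agent_id: str, owner: str, allowed_actions):
        self.agent_id = agent_id
        self.owner = owner
        self.allowed_actions = set(allowed_actions)
        self.audit_log = []  # append-only in this sketch

    def act(self, action: str, reason: str) -> dict:
        """Record every attempt, executed or denied, with who/what/when/why."""
        entry = {
            "agent": self.agent_id,
            "owner": self.owner,
            "action": action,
            "reason": reason,
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }
        if action not in self.allowed_actions:
            entry["outcome"] = "denied"
            self.audit_log.append(entry)  # denials are evidence too
            raise AuthorityViolation(f"{action} is outside {self.agent_id}'s scope")
        entry["outcome"] = "executed"
        self.audit_log.append(entry)
        return entry
```

Note that the denial is logged before the exception is raised: reconstructing "what was the agent asked to do?" later depends on refusals being in the record, not just successes.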
Why the “Digital Employee” Framing Will Win
The governance gap for AI agents isn’t a lack of tooling — it’s the absence of organizational discipline applied to a new class of system actor. The “digital employee” framing wins because it maps to infrastructure that already exists.
You know how to onboard employees. You know how to define roles, grant permissions, conduct performance reviews, and terminate access. The compliance frameworks exist. The HR workflows exist. The accountability structures exist.
Applying this infrastructure to agents isn't a new capability build. It's a scope extension, with a few agent-specific technical additions: kill-switch mechanisms, prompt-injection defenses, and behavioral logging at the action level.
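Of those additions, the kill switch is the simplest to make concrete: a halt flag an operator can flip, checked immediately before every consequential action. A minimal sketch under assumed names:

```python
import threading

class KillSwitch:
    """A process-wide halt flag an operator can flip to stop agent activity."""

    def __init__(self):
        self._halted = threading.Event()  # thread-safe flag

    def halt(self):
        self._halted.set()

    def is_halted(self) -> bool:
        return self._halted.is_set()

def guarded_execute(switch: KillSwitch, action_fn):
    """Check the switch immediately before the action, not just at startup."""
    if switch.is_halted():
        raise RuntimeError("halted: kill switch engaged")
    return action_fn()

switch = KillSwitch()
assert guarded_execute(switch, lambda: "refund issued") == "refund issued"
switch.halt()  # operator intervenes; every subsequent action is refused
```

Checking the switch per action, rather than once at launch, is what makes the mechanism useful against a system that acts at machine speed.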
The Adoption Timeline
Q3 2026 is when this becomes mandatory for regulated industries. Financial services, healthcare, and public sector will see the first wave of audit requirements specifically targeting agent identity and authority boundaries. The CSAI Foundation, AAIF, and OWASP’s AIVSS are all building the standards layer now.
The enterprises that have been building governance infrastructure quietly will have a structural advantage. The ones treating agent deployment as a purely technical project — without a governance owner, without authority documentation, without audit logging — will be scrambling to reconstruct months of deployment history under regulatory pressure.
Three Questions for Your Next Leadership Meeting
- Does every deployed agent have a named human owner who is accountable for its behavior?
- Is the authority boundary for each agent explicitly documented and technically enforced — not just described in a configuration file?
- If an agent takes an unexpected action today, can you reconstruct the full decision chain within 30 minutes?
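The third question is answerable in minutes only if each audit entry records what caused it, so the chain behind any action can be walked back mechanically. A sketch with an invented log schema (the `caused_by` field and entries are illustrative assumptions):

```python
# Illustrative audit entries: each records the entry that triggered it.
AUDIT_LOG = [
    {"id": "e1", "action": "read_invoice", "caused_by": None},
    {"id": "e2", "action": "flag_dispute", "caused_by": "e1"},
    {"id": "e3", "action": "issue_refund", "caused_by": "e2"},
]

def reconstruct_chain(log, action_id):
    """Rebuild the decision chain behind one action, earliest step first."""
    by_id = {entry["id"]: entry for entry in log}
    chain, current = [], by_id.get(action_id)
    while current is not None:
        chain.append(current)
        parent = current["caused_by"]
        current = by_id.get(parent) if parent else None
    return list(reversed(chain))

# Why was the refund issued? Walk back from e3 to its root cause.
chain = reconstruct_chain(AUDIT_LOG, "e3")
assert [e["action"] for e in chain] == ["read_invoice", "flag_dispute", "issue_refund"]
```

Without the causal link between entries, an audit log answers "what happened" but not "why", and reconstruction falls back on asking the agent to explain itself after the fact.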
If the answer to any of those is “not sure” or “we’d have to check,” the HBR framework is worth a two-hour working session with your governance team this week.
The article is free to read. The governance debt it describes isn’t.