The agentic AI gold rush continues. Everyone seems to agree that AI agents will reforge the contours of work and business. What is less clear is the mental model to apply when conceptualizing the role of agents within the enterprise. Is an AI agent an employee? An enterprise application? An RPA bot on steroids? A callable service?
This is not a trivial question; the mental model that we apply to agentic AI has implications for how well we succeed at conceptualizing, designing, deploying, scaling, and managing agents to effect true transformation. Without a clear metaphor, organizations risk committing to limited agentic applications, fuzzy agentic operating models, misaligned expectations between business and technology teams, and ultimately, localized or suboptimal value creation.
The Fallacy Of “Agent As Employee”
One of the more common mental models we come across with our clients is that of “agent as employee.” This model conceives of an agent as a digital worker employed in a specific context to execute a specific set of tasks (or processes). This conceptualization is somewhat useful for establishing identity, permissions, and access management for the agent. It has a fundamental flaw, however: It frames agents as tactical work units instead of as reusable cognitive assets. The anthropomorphic framing of “agent as employee” also bestows a false sense of autonomy, judgment, and accountability on the agent, which leads enterprises to sidestep crucial system-level controls and governance models, choosing to manage individual agents instead of the agentic portfolio as a system.
More broadly, the “agent as employee” metaphor also reinforces the organizational status quo. By treating agents as employees, you imply that humans and agents are interchangeable cogs within operating models that can remain essentially unchanged. It suggests that existing structures, built on real-world constraints of human skills, labor costs, and regulatory limits, can simply be lifted and shifted onto agents. This is both wrong and dangerous, as it tempts firms to view agents as drop-in replacements for their people rather than catalysts for process redesign and innovation. Moreover, thinking of agents as employees brings the added risk of collapsing two distinct concerns, capability design and system operation, into one, thus obscuring the need to manage them separately.
Here’s the good news: A better mental framing really does exist.
The Dual Identity Of AI Agents
An AI agent can be thought of as possessing a dual identity:
On one hand, an agent is a skill. Each agent represents a discrete, clearly bounded cognitive capability that is modular, reusable, and extensible enough to fit several adjacent use cases. For example, a retail firm may build an agent whose skill is real-time demand forecasting, blending sales, promotions, and supply signals to adjust inventory. Or a legal firm may deploy an agent whose skill is case law summarization, involving packaging legal precedents into concise, lawyer-ready briefs. The skills-based framing allows a crisp articulation of an agent’s value as well as a pathway for enhancing each individual skill over time.
On the other hand, an agent is also a product. Operationally, this identity requires the agent to reside on a foundational platform with managed dependencies, telemetry, governance, and policy enforcement to ensure safety and scale. Strategically, it requires that you apply considerations such as user experience design, business value alignment, clear roadmaps and lifecycle management, funding, and ownership so that agents evolve as durable enterprise capabilities rather than standalone curiosities. The product foundation makes your enterprise’s agentic AI “skills” usable at scale.
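To make the dual identity concrete, here is a minimal sketch of how the two identities might be modeled side by side. This is illustrative only: all class names, fields, and values (including the team and dependency names) are assumptions, not a reference implementation of any specific platform.

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """The 'skill' identity: a bounded, reusable cognitive capability."""
    name: str         # e.g. "case_law_summarization"
    description: str  # what the capability does, in business terms
    version: str      # skills evolve release by release

@dataclass
class AgentProduct:
    """The 'product' identity: ownership, lifecycle, and platform hooks."""
    skill: Skill
    owner: str  # accountable product owner, not just a builder
    platform_dependencies: list[str] = field(default_factory=list)
    telemetry_enabled: bool = True  # observability is a platform concern

# A single agent carries both identities at once (values are hypothetical):
summarizer = AgentProduct(
    skill=Skill(
        name="case_law_summarization",
        description="Distill lengthy rulings into concise, lawyer-ready briefs",
        version="1.0",
    ),
    owner="legal-tech-product-team",
    platform_dependencies=["observability", "grounding", "safety-controls"],
)
print(summarizer.skill.name, summarizer.skill.version)
```

The point of the split is that the `Skill` half can be enhanced release by release while the `AgentProduct` half carries the ownership and governance concerns that keep it safe at scale.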
This Dual Identity Leads To A Dual Roadmap
The dual identity offers a clear implementation path. In this framing, most enterprises that are implementing AI agents at scale will develop two parallel, synchronized roadmaps:
The roadmap for each individual skill defines which cognitive capabilities you want to make available to the business. The specific sequencing of the rollout of these skills will be a function of both the value and the feasibility of each skill. For example, in the case of the “case law summarization” agent, the first version may simply introduce a baseline cognitive capability of distilling lengthy rulings into concise briefs. Subsequent releases may expand the feature set of this skill to classify cases by legal domain and extract cited precedents, or to retrieve and compare outcomes across jurisdictions.
A “foundations” roadmap focuses on delivering the foundational platform capabilities required to stand up these skills. Every new wave of skills should be anchored to a minimum viable platform foundation. For example, if you’re releasing an initial tranche of basic skills, the foundations roadmap will provide the basic observability, grounding, and safety controls to go live. As your skills roadmap expands, so must the platform. That means an ever-expanding substrate of foundational capabilities for stable governance, delivered in sync with the growing catalog of skills.
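One way to picture the twin tracks is as paired release waves, where each wave of skills carries the minimum platform foundations it needs to go live. The sketch below is a hypothetical illustration; the release labels, skill descriptions, and foundation names are assumptions drawn loosely from the case law example, not a prescribed roadmap.

```python
# Hypothetical dual roadmap: each wave pairs skill releases with the
# minimum viable platform foundations required to ship them safely.
dual_roadmap = [
    {
        "release": "R1",
        "skills": ["summarize rulings into concise briefs"],
        "foundations": ["basic observability", "grounding", "safety controls"],
    },
    {
        "release": "R2",
        "skills": ["classify cases by legal domain", "extract cited precedents"],
        "foundations": ["policy enforcement", "telemetry dashboards"],
    },
    {
        "release": "R3",
        "skills": ["retrieve and compare outcomes across jurisdictions"],
        "foundations": ["cross-jurisdiction access governance"],
    },
]

def foundations_through(release: str) -> list[str]:
    """Cumulative platform substrate required up to a given release."""
    required: list[str] = []
    for wave in dual_roadmap:
        required.extend(wave["foundations"])
        if wave["release"] == release:
            break
    return required

# The platform substrate grows in sync with the skills catalog:
print(foundations_through("R2"))
```

The key design point is that `foundations_through` is cumulative: later skill releases depend on everything the platform has delivered so far, which is why the foundations roadmap must expand in lockstep with the skills roadmap.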
This twin-track approach ensures that business value is unlocked progressively while also building a long-term agentic architecture.
Here is an example to illustrate how a dual roadmap might work. The table below depicts the roadmap for the “case law summarization” skill, alongside the “foundations” roadmap that enables the firm to stand up each skills release in a stable, scaled, and secure manner. (Note that, in the real world, at each release you will be building an MVP foundations roadmap to support multiple skills or skill enhancements, rather than just a single skill.)
I predict that in early phases, most of your AI or IT team’s effort will be focused on building and shipping specific agentic skills. But as adoption grows and the complexity of your agentic portfolio increases, that ratio will flip. In the long term, much of your team’s investment and effort will go toward platform enablement as your firm’s business functions, partners, and citizen developers take on the ownership of developing specific skills on the secure foundations that you have established.
At Forrester, we have a lot of thoughts on how to do agentic AI right. See, for example, this report by Sam Higgins and me that describes the four ways in which agentic AI changes operating models and what it means for enterprise leaders and tech strategists. You can also connect with me to schedule an inquiry.