
Approaching agentic AI: the missing layer between intelligence and execution


AI is evolving from answering questions to taking action. For years, most AI systems were reactive: they interpreted input and returned predictions or classifications. But the latest generation of models can go further. Beyond answering questions, they can interpret goals, plan multi-step actions, call external tools and adapt their behavior over time. This marks a critical shift: from AI as a passive assistant to AI as an active agent.

This change is strategic, not just technical. Organizations across industries are exploring how agentic AI systems can take on meaningful tasks with minimal oversight. And the pace is accelerating. 

Gartner forecasts that by 2028, 33% of enterprise software will incorporate agentic AI capabilities, up from less than 1% today, and that 15% of daily work decisions will be made autonomously by AI agents, compared to virtually none in 2024. IDC reports that 70% of organizations expect agentic AI to disrupt their business models within the next 18 months, signaling how rapidly this paradigm is reshaping expectations. 

The business case is equally compelling. According to McKinsey, implementing agentic AI could boost operational efficiency by up to 30%, driven by faster execution, reduced manual oversight and more adaptive, real-time decision-making. 

But with new capabilities come new design questions: What makes agentic systems different from traditional automation? Where should we draw boundaries between augmentation and autonomy? And how do we validate systems that act independently in dynamic, real-world environments? 

This article explores those questions and offers a practical lens for engaging with this new class of systems, starting with a foundational concept: the Minimum Viable Agent (MVA). 

What is agentic AI? 

Agentic AI refers to a system designed to autonomously pursue a defined goal or desired outcome by solving complex problems with limited or no human oversight. In other words, it is a system capable of autonomous decision-making and action-taking: analyzing the context surrounding the goal, decomposing it into smaller tasks if needed, designing an execution plan and orchestrating its implementation. To do so, it also manipulates a variety of tools (APIs, robotic systems, internal and external data sources) as needed.

Attributes of an agentic AI system

To expand on this definition, agentic AI systems typically exhibit a set of defining characteristics that distinguish them from traditional automation or task-specific models (a short code sketch after the list shows how these fit together):

  • Memory and context awareness 
    They maintain a persistent state, enabling recall of past actions, conversations or system conditions across time. 
  • Planning and execution 
    They break down goals into actionable steps and carry out workflows that may span minutes, hours or days. 
  • Autonomy 
    They initiate and complete tasks without the need for continuous human guidance or step-by-step instruction. 
  • Tool and environment use 
    They can invoke APIs, run functions, query databases or manipulate the system environment to fulfill objectives. 
  • Collaboration and orchestration  
    They coordinate with other agents or tools using delegation, sequencing or parallelism to achieve complex goals. 
  • Governance and safety 
    They are built with oversight in mind, featuring guardrails, audit logs, escalation paths and policy enforcement mechanisms to manage operational risk. 
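
To make these attributes concrete, here is a toy agent loop in Python. Every name in it (the Tool and Agent classes, the plan and act methods, the audit log) is hypothetical scaffolding rather than a reference to any specific framework; a real agentic stack would wire an LLM into the planning step and enforce far richer guardrails.

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Tool:
    """A capability the agent can invoke: an API call, a query, a function."""
    name: str
    run: Callable[[str], str]


@dataclass
class Agent:
    goal: str
    tools: dict[str, Tool]
    memory: list[str] = field(default_factory=list)       # context across steps
    audit_log: list[str] = field(default_factory=list)    # governance trail

    def plan(self) -> list[str]:
        # A real system would let an LLM decompose the goal into steps;
        # the plan is hard-coded here to keep the sketch self-contained.
        return ["lookup", "summarize"]

    def act(self) -> str:
        for step in self.plan():
            result = self.tools[step].run(self.goal)
            self.memory.append(result)                    # recall past actions
            self.audit_log.append(f"{step} -> {result}")  # auditable trace
        return self.memory[-1]


agent = Agent(
    goal="status of invoice INV-123",
    tools={
        "lookup": Tool("lookup", lambda g: f"raw record for '{g}'"),
        "summarize": Tool("summarize", lambda g: f"summary of '{g}'"),
    },
)
print(agent.act())
```

Even in this toy form, the pieces map onto the list above: memory persists across steps, the plan decomposes the goal, tools do the work and the audit log provides the oversight hook.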

From traditional automation to agentic AI  

The reasoning capabilities and autonomous nature of agentic AI allow us to take enterprise workflow automation to the next level, overcoming many of the limitations inherent in traditional software systems. Within this domain, we can distinguish between two broad types of agentic AI systems: agentic workflows and AI agents.

Agentic workflows integrate autonomous decision-making into selected phases of an existing business process, augmenting traditional systems incrementally. AI agents, by contrast, operate as independent entities capable of reasoning, planning and executing tasks, often in coordination with other agents across system boundaries. 

Agentic AI vs. traditional automation

Real-world use case: invoice processing

Consider the task of automating invoice handling: receiving, extracting, validating and paying incoming invoices. 

With traditional automation, the system must be explicitly programmed to handle known invoice formats. Each time a new vendor format appears, the system must be updated. Processing flows follow strict rule sets, and every policy change requires corresponding updates to the system. Typically, a human remains in the loop to verify extracted data and authorize payments, especially for high-value transactions. 

Now, imagine turning this into an agentic workflow. We could delegate the invoice parsing step to an AI, trusting it to extract key data points like invoice amount, due date, supplier ID or line-item totals. With modern computer vision and natural language processing capabilities, this task becomes relatively low risk, particularly if the AI reliably handles, say, 999 out of 1,000 invoices. 

But how acceptable is a 1-in-1000 failure rate? That depends on context. If misclassified invoices are flagged for human review or if payments above a certain threshold require additional validation, risk can be mitigated. Even with those safeguards, the organization gains substantial efficiency by removing the human from significant portions of parsing and data entry work. 
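
As a concrete illustration, here is a minimal sketch of that routing logic in Python. The Invoice fields, the confidence score and both thresholds are assumptions made up for the example, not recommendations; the point is the shape of the safeguard: low extraction confidence or a high payment amount escalates to a human, everything else flows through automatically.

```python
from dataclasses import dataclass


@dataclass
class Invoice:
    supplier_id: str
    amount: float
    confidence: float  # extractor's self-reported confidence (assumed field)


# Illustrative thresholds only, not recommendations.
MIN_CONFIDENCE = 0.98
MAX_AUTO_AMOUNT = 10_000.0


def route(invoice: Invoice) -> str:
    """Decide whether the agent may proceed or must escalate to a human."""
    if invoice.confidence < MIN_CONFIDENCE:
        return "human_review"    # the rare misread invoice gets flagged
    if invoice.amount > MAX_AUTO_AMOUNT:
        return "human_approval"  # high-value safeguard
    return "auto_process"


print(route(Invoice("ACME", 420.0, 0.995)))     # auto_process
print(route(Invoice("ACME", 50_000.0, 0.999)))  # human_approval
```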

Finally, to make the system fully agentic, we could authorize AI to handle the entire process end-to-end, including final payment approval. But this raises a different class of challenges: can the system understand edge cases? Can it be audited? Do we trust it to act autonomously in high-risk scenarios? 

Answering those questions requires deeper investment in system design, oversight mechanisms and organizational comfort with risk. And that leads to the most strategic question: Do we even want AI to do this autonomously yet?  

The experience of interaction  

Our experience interacting with agentic AI systems may differ dramatically from that with “traditional” software. Instead of submitting data through static, predefined forms and inputs, we can now express our needs using different modalities such as voice, text or even graphical input, almost mimicking the way we interact with other human beings. 

Equally important is how these systems interact with one another. In modern enterprise environments, system-to-system communication has traditionally relied on rigid data exchange contracts and predefined APIs. Agentic systems shift this paradigm by enabling dynamic coordination across tools and services, reacting to events, adapting plans and cooperating with other autonomous systems in real time. 
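
To make that contrast concrete, the sketch below replaces a fixed point-to-point integration with a small capability registry that an agent consults at runtime. The registry, event names and escalation rule are invented for illustration; real systems layer an LLM planner or a tool-calling protocol on top of the same idea.

```python
# A hypothetical capability registry: services describe what they can handle,
# and the agent matches an incoming event to a capability at runtime instead
# of following a hard-coded point-to-point integration.
registry = {
    "invoice.received": lambda e: f"parsed {e['id']}",
    "payment.failed":   lambda e: f"retry scheduled for {e['id']}",
}


def handle(event_type: str, event: dict) -> str:
    handler = registry.get(event_type)
    if handler is None:
        # No capability matches: escalate rather than fail silently.
        return f"escalated unknown event '{event_type}' to a human"
    return handler(event)


print(handle("invoice.received", {"id": "INV-7"}))
print(handle("vendor.onboarded", {"id": "V-2"}))  # no handler -> escalate
```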

Conceptual layout of an agentic system

As software gains the ability to hear, see and talk, are we on the verge of fundamentally rethinking how we design interfaces for both humans and machines?  

Minimum viable agent (MVA): redefining the MVP for the next generation of software   

The latest wave of AI breakthroughs is unlocking unprecedented opportunities but also introducing a new class of challenges. These challenges fall into two broad categories: feasibility and risk assessment, and experience design. 

Before committing to scale and long-term investment, we must answer two foundational questions: 

  • Can we trust the AI to take on this problem? 
    This is a question of feasibility and risk. Agentic systems are inherently probabilistic and non-deterministic, making them harder to test, verify and reason about in production. What’s the acceptable margin of error? What are the consequences of a mistake? Can we clearly define failure modes and design around them? A minimal testing sketch of this idea follows the list. 
  • Are we comfortable interacting with AI to solve this problem?  
    This is a question of experience. How natural, predictable or comfortable will interacting with the system feel? Will users understand what the agent is doing and why? Beyond enterprise settings, where interactions with AI tend to be less personalized, adoption is growing in more intimate and emotionally sensitive domains such as medicine, mental health and elder care. In these fields, interactions with AI can feel uncomfortable, exposing a gap between human expectations and machine behavior. Success here isn’t just about capability. It’s about usability, transparency, security and emotional trust in how the AI acts.
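
On the feasibility question, one pragmatic pattern is to treat trust as a measurable quantity: fix an explicit error budget, replay a scenario suite against the agent many times and grant autonomy only while the observed failure rate stays inside the budget. The harness below is a minimal sketch of that idea; run_agent is a stand-in for whatever system is under test, and the simulated 99.5% success rate is an arbitrary assumption.

```python
import random


def run_agent(scenario: str) -> bool:
    """Stand-in for the system under test; returns True on success.
    Randomness simulates non-deterministic agent behavior."""
    return random.random() < 0.995


def evaluate(scenarios: list[str], trials: int, error_budget: float) -> bool:
    failures = sum(
        not run_agent(s) for s in scenarios for _ in range(trials)
    )
    total = len(scenarios) * trials
    print(f"{failures}/{total} failures ({failures / total:.3%})")
    return failures / total <= error_budget


# Accept autonomy only if the agent stays within a 1-in-1,000 error budget.
ok = evaluate(["parse", "validate", "pay"], trials=1000, error_budget=0.001)
print("within budget" if ok else "needs human-in-the-loop safeguards")
```

With the assumed 0.5% failure rate, this run lands outside the 0.1% budget, which is exactly the signal to keep a human in the loop rather than grant full autonomy.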

At first glance, the Minimum Viable Agent (MVA) may seem like just another perspective on the Minimum Viable Product (MVP). And in terms of implementation mindset—start small, validate fast—that’s still true. But agentic systems differ in fundamental ways that require a new approach to ideation, experience design and defining success. 

Unlike traditional software, where behavior is fully deterministic and success is defined by feature completeness or task execution, an agent’s value is emergent. It depends on how reliably it can pursue goals, adapt to new inputs and deliver outcomes without strict control. In addition, the MVA emphasizes how users communicate and interact with the agent. Since AI agents process multiple data types such as voice, text and video, the user experience must be intuitive and natural. The nature of the agent calls for a seamless, multimodal conversational interface that adapts to the user’s needs in real time.  

This shift demands that we move beyond checklist thinking and start validating behavior, judgment and risk boundaries, not just functionality. 

The MVA is about building trust, not just code.  

Conclusion: toward a new paradigm of autonomy  

While multi-agent systems promise transformational efficiency gains, they also entail high implementation complexity and operational risk. Agentic workflows, meanwhile, offer a pragmatic middle ground, enabling organizations to embed intelligence where it adds the most value, manage risk and evolve existing systems gradually. 

As agentic systems evolve, their ability to autonomously reason and act opens the door to significant efficiency and scale, but not without introducing a new dimension of implementation complexity. The challenge isn’t just whether AI can take over a process, but whether we can design systems we’re comfortable living and working with. 

That’s why Minimum Viable Agents represent more than a development tactic; they’re a new strategic lens. Instead of focusing solely on features, we must validate judgment, resilience and risk tolerance from day one. It’s not enough to build a working system; we must build one we can trust. 

The organizations that embrace this mindset will be the ones to harness agentic AI not as a novelty, but as a durable advantage.