Agentic AI in 2025: promise, practice and the gap between them

The ISG report on Agentic AI in 2025 shows a gap between promise and practice. Only 9% of organizations use agents structurally, while the rest are still experimenting. The technology is there, but collaboration, frameworks and trust are lacking. Success requires work redesign, clear roles and human-centered adoption.

Agentic AI promises fundamentally different work: software that makes its own decisions, organizes itself and performs work processes autonomously. But where does this technology really stand? The ISG State of the Agentic AI Market Report 2025 shows an interesting paradox. Organizations are experimenting abundantly with agents, but structural adoption lags. There is potential, certainly, but practice forces a rethink. What is already working? What blocks scaling up? And what does this require from technology and people?

A realistic picture of Agentic AI in 2025

The ISG State of the Agentic AI Market Report 2025 paints a picture that is both hopeful and confrontational. Organizations are experimenting with agentic AI on a large scale, but only a fraction actually succeed in applying this technology structurally.

The promise of agents performing autonomous work is alive and well, but the road to mature adoption is proving stubborn. Agents function mostly behind the scenes - in IT processes, financial systems and DevOps - and are rarely visible in customer-facing or strategic domains.

The barriers are rarely technical: they involve trust, collaboration, governance and clarity of roles. While the technology is delivering on its promise in small steps, organizations struggle primarily with organizational embedding. The human-agent relationship turns out to be more complex than expected, especially if autonomy is not to remain an empty promise. The report therefore calls for reflection: not only on what agentic AI can do, but above all on how to make it work in practice.

What's the real state of affairs? Ten hard truths from 2025

The ISG report paints a nuanced picture with hard data:

  1. Only 9% of organizations have agentic AI in structural use.
  2. Over 70% do run pilots and POCs, but scaling up remains limited.
  3. Agents are mostly applied in the IT back office, DevOps and finance.
  4. Large organizations in particular are reluctant, due to governance and complexity.
  5. Technology is not the obstacle; trust, transparency and explainability are.
  6. Human-agent collaboration falls short: users do not know how to direct or control agents.
  7. Legacy systems and data integrations are a major bottleneck.
  8. There is a need for standardization in language, frameworks and responsibilities.
  9. Many pilots founder on unclear goals, roles and structures.
  10. We call agentic AI a colleague, but often still treat it as a tool - and that friction shows.

Together, these observations make clear: agentic AI is already technically mature, but organizationally still young.

This must be done differently: five lessons from the report

The ISG report provides clear messages for those who want to go beyond experimentation:

  • Start with clear use cases. Choose a concrete task (e.g., DevOps, expense claims, reporting) where autonomy makes a real difference.
  • Make agent behavior visible. Who does what, why and when? Transparency is not optional.
  • Provide human control points. Agents must be supervised - not as a liability, but as a safeguard (see the sketch after this list).
  • Invest in collaboration, not just technology. Employees must learn to cooperate with, correct and direct agents.
  • Develop clear frameworks and roles. Define who gets to decide what - for people as well as agents.

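To make the middle two lessons concrete, here is a minimal Python sketch of what "visible agent behavior" and "human control points" can look like in practice: every proposed action is written to a structured audit trail, and high-impact actions require explicit human approval before they run. The agent loop, the action names and the impact threshold are hypothetical illustrations under our own assumptions, not taken from the ISG report.

```python
# Minimal sketch: an audit trail that makes agent decisions visible,
# plus a human approval gate for high-impact actions.
# All names and thresholds here are hypothetical examples.

import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("agent-audit")


@dataclass
class ProposedAction:
    task: str    # what the agent wants to do
    reason: str  # why it proposes this (transparency)
    impact: str  # "low" or "high" - drives the control point


def audit(event: str, action: ProposedAction) -> None:
    """Write every decision to a structured audit trail: what, why, when."""
    log.info(json.dumps({
        "time": datetime.now(timezone.utc).isoformat(),
        "event": event,
        **asdict(action),
    }))


def human_approves(action: ProposedAction) -> bool:
    """Human control point: a person confirms high-impact actions."""
    answer = input(f"Agent proposes '{action.task}' because {action.reason}. Approve? [y/N] ")
    return answer.strip().lower() == "y"


def run(action: ProposedAction) -> None:
    audit("proposed", action)
    if action.impact == "high" and not human_approves(action):
        audit("rejected_by_human", action)
        return
    # ... the agent would execute the task here ...
    audit("executed", action)


if __name__ == "__main__":
    run(ProposedAction(task="restart billing service", reason="error rate spike", impact="high"))
```

The point of the sketch is not the specific implementation but the pattern: decisions are logged before and after execution, and a person stays in the loop where the impact is highest.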
These steps make it clear that adopting agentic AI is about much more than technology alone. People have to live, work and decide alongside it.

What the report teaches us about collaborating with agents

The report makes clear: agentic AI will not come into its own without a fundamental overhaul of how we organize work. Technology alone is not enough. It's about shaping human-agent collaboration - including clear roles, oversight, and room for autonomy. This touches on a key point in how Augmentic looks at the future of work: agents are not replacements, but colleagues with digital responsibilities. If we approach agents as full-fledged team members, we can make work smarter, more consistent, and more people-centric.

The report emphasizes that agentic AI only becomes successful when humans and technology work together in a new way. That requires not only tools, but also clear frameworks, shared responsibilities and mutual trust. An agent is not a replacement for a human, but a colleague with its own ground rules.

What can you do today?

The move to agentic AI does not have to be a complicated journey. Three concrete actions to start now:

  1. Select one process with a lot of delays or repetition (e.g., reporting, back office, help desk). Investigate whether agentic AI can deliver direct value there.
  2. Interview your users about current AI use: what are they already using, what works, what does not? Bring their practical experience in from the start.
  3. Organize a short workshop on roles and oversight: what does autonomy mean within your team? Who takes which responsibility?

Would you like to bounce ideas around, dig deeper or simply check whether you are on the right track? Augmentic is happy to think along with you - no fuss, but with a plan.

Source: ISG State of the Agentic AI Market Report 2025, ISG AI Advisory (Loren Absher & Olga Kupriyanova), published June 2025.