- Businesses are rapidly deploying agentic AI—autonomous systems that act without human guidance—but governance is not keeping pace, creating significant risks during adoption.
- A survey of over 500 data professionals by Drexel University revealed that 41% of organizations already use agentic AI in daily operations, moving beyond mere experimentation.
- However, only 27% of organizations believe their governance frameworks are mature enough to oversee and control these systems.
- Governance in this context is not about rigid regulation, but about clearly defining responsibilities, monitoring AI behavior, and determining when human intervention is necessary.
- This gap between deployment and governance becomes dangerous when AI acts autonomously in real-world situations faster than humans can react.
- For example, during a power outage in San Francisco, autonomous robotaxis got stuck at an intersection, obstructing emergency vehicles despite the systems functioning “as designed.”
- In finance, AI fraud detection can automatically block transactions in real time; customers often find out only when their card is declined, raising questions about who is liable if the AI is wrong.
- Many organizations have humans “in the loop,” but they only participate after the AI has made a decision, making oversight more about troubleshooting than actual accountability.
- Without governance from the start, small issues accumulate, undermining trust even if the systems do not show obvious failures.
- The survey indicates that organizations with strong governance translate initial autonomous AI benefits into better long-term efficiency and revenue growth.
- The OECD emphasizes that accountability and human oversight must be designed in from the beginning, not added as an afterthought.
Conclusion: Agentic AI is moving into daily operations faster than governance can follow: 41% of surveyed organizations already use it operationally, yet only 27% consider their governance frameworks mature enough to oversee it. Governance here means clearly defining responsibilities, monitoring AI behavior, and identifying where human intervention is needed. Many organizations keep humans “in the loop,” but they participate only after the AI has made a decision, rendering oversight a reactive fix rather than true accountability.

