Center for AI Policy: AI Agents – Governing Autonomy in the Digital Age
The report outlines the rapid shift of AI agents from research to deployment, emphasizing their autonomous, goal-directed capabilities along a five-level spectrum. It identifies three primary risks—catastrophic misuse, gradual human disempowerment, and extensive workforce displacement—and recommends policies such as an Autonomy Passport, continuous oversight, mandatory human control over high-stakes decisions, and annual workforce impact studies to ensure safe and beneficial integration of these agents.
The report presents an overview of AI agents, defined as autonomous systems capable of carrying out complex tasks without constant human supervision, and highlights their rapid progression from research to real-world application.
It identifies three major risks: catastrophic misuse through malicious applications, gradual human disempowerment as decision-making shifts to algorithms, and significant workforce displacement due to automation of cognitive tasks.
The report proposes four policy recommendations for Congress: an Autonomy Passport system for registration and oversight, mandatory continuous monitoring with recall authority, required human oversight for high-consequence decisions, and annual workforce impact research to address potential job losses. These measures aim to mitigate the risks while allowing the beneficial aspects of AI agent development to continue.
- AI agents represent a significant shift in AI capabilities, moving from research to widespread deployment. Unlike chatbots, these systems are autonomous and goal-directed, capable of taking a broad objective, planning their own steps, using external tools, and iterating without continuous human prompting. They can operate across multiple digital environments and automate decisions, not just steps. Agent autonomy exists on a spectrum, categorized into five levels ranging from shift-length assistants to frontier super-capable systems.
- The widespread adoption of autonomous AI agents presents three primary risks: catastrophic misuse, where agents could enable dangerous attacks or cyber-intrusions; gradual human disempowerment, as decision-making power shifts to opaque algorithms across economic, cultural, and governmental systems; and workforce displacement, with projections indicating that tasks equivalent to roughly 300 million full-time global positions could be automated, affecting mid-skill and cognitive roles more rapidly than previous automation waves.
- To mitigate these risks, the report proposes four key policy recommendations for Congress: creating a federal Autonomy Passport system for registering high-capability agents before deployment; mandating continuous oversight and recall authority (including containment and provenance tracking) so that problematic deployments can be suspended quickly; requiring oversight by qualified human professionals for high-consequence decisions in domains like healthcare, finance, and critical infrastructure; and directing federal agencies to monitor workforce impacts annually.
- The proposed policy measures are designed to be proportional to the level of agent autonomy and the domain of deployment, concentrating rigorous oversight where autonomy creates the highest risk while allowing lower-risk innovation to proceed. For instance, the Autonomy Passport requirement and continuous oversight mechanisms target agents classified at Level 2 or higher on the five-level autonomy scale (a minimal sketch of this tiering logic appears after this list).
- Early deployments demonstrate significant productivity gains, and experts project that by 2029 agents could tackle projects equivalent to a full human work-month. However, AI agent capabilities are advancing faster than the governance frameworks designed to contain their risks, creating a critical mismatch and underscoring the need for proactive policy intervention before the next generation of agents is widely deployed.
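To make the report's proportionality principle concrete, here is a minimal sketch in Python of how the five-level autonomy scale and its policy triggers could be modeled. This is an illustration, not a specification from the report: only the two endpoint level names (shift-length assistants and frontier super-capable systems) and the three high-consequence domains come from the summary above, while every class, function, and placeholder name is hypothetical.

```python
from dataclasses import dataclass
from enum import IntEnum


class AutonomyLevel(IntEnum):
    """Five-level autonomy spectrum described in the report.

    Only the two endpoints are named in the summary; the intermediate
    labels below are hypothetical placeholders, not report terminology.
    """
    SHIFT_LENGTH_ASSISTANT = 1   # named in the report
    LEVEL_2 = 2                  # placeholder label
    LEVEL_3 = 3                  # placeholder label
    LEVEL_4 = 4                  # placeholder label
    FRONTIER_SUPER_CAPABLE = 5   # named in the report


# Domains the report flags as high-consequence.
HIGH_CONSEQUENCE_DOMAINS = {"healthcare", "finance", "critical_infrastructure"}


@dataclass
class AgentDeployment:
    name: str
    autonomy_level: AutonomyLevel
    domain: str


def requires_autonomy_passport(deployment: AgentDeployment) -> bool:
    """Passport registration and continuous oversight apply at Level 2+."""
    return deployment.autonomy_level >= AutonomyLevel.LEVEL_2


def requires_human_oversight(deployment: AgentDeployment) -> bool:
    """High-consequence decisions stay under qualified human control."""
    return deployment.domain in HIGH_CONSEQUENCE_DOMAINS


if __name__ == "__main__":
    # Hypothetical examples: a low-autonomy office assistant vs. a
    # mid-autonomy agent operating in a high-consequence domain.
    scheduler = AgentDeployment(
        "scheduler", AutonomyLevel.SHIFT_LENGTH_ASSISTANT, "office_productivity"
    )
    triage = AgentDeployment("triage", AutonomyLevel.LEVEL_3, "healthcare")

    print(requires_autonomy_passport(scheduler), requires_human_oversight(scheduler))
    # -> False False: below the passport threshold, low-stakes domain
    print(requires_autonomy_passport(triage), requires_human_oversight(triage))
    # -> True True: Level 2+ triggers registration; healthcare keeps a human in the loop
```

The point of the sketch is that the report's proportionality principle maps naturally onto a simple tiered rule: registration requirements trigger on an autonomy-level threshold, while human-in-the-loop requirements trigger on the deployment domain, so the two obligations can be evaluated independently.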