The Association for Computing Machinery’s Europe Technology Policy Committee (ETPC) has released 'Systemic Risks Associated with Agentic AI: A Policy Brief,' examining how European Union regulation should adapt to autonomous, goal-directed LLM systems.
The brief defines agentic AI as systems capable of setting or refining plans and executing tasks with minimal or no human oversight, highlighting distinctive traits such as persistent operation, adaptive learning, and self-reflection. The document also identifies systemic risks associated with agentic AI, including potential economic displacement, the propagation of misinformation, and environmental impacts.
While acknowledging that the EU AI Act establishes a baseline for governance, the brief concludes that gaps remain for technologies with autonomous and self-optimizing behavior, and offers recommendations:
Foresight frameworks: Differentiate between regulatory foresight (laws and compliance mechanisms for systemic risk) and societal foresight (longer-term cultural and behavioral implications).

Dynamic governance: Incorporate continuous oversight during system operation—such as real-time monitoring and adaptive controls—to address fairness, accountability, transparency, security, accuracy, and interpretability.

Legislative updates: Complement and amend the EU AI Act and related instruments to address agentic characteristics explicitly.
The brief includes short chapters that outline core concepts of agentic AI, how self-optimizing systems could challenge human control, categories of adverse risk, and the current EU policy landscape.