As Europe accelerates its regulatory efforts around artificial intelligence, one area draws particular attention: general-purpose AI models (GPAI). These are the powerful, adaptable systems—like large language models (LLMs)—that can serve a wide variety of downstream tasks, from summarising legal documents to generating code or automating customer support.
In July 2025, the European Commission published an important clarification:
“Guidelines on the scope of the obligations for general-purpose AI models established by Regulation (EU) 2024/1689”.
These guidelines help clarify how the AI Act applies to GPAI, especially for developers and deployers of such models. But they also raise important questions for any organisation that uses, integrates, or relies on GPAI within its workflows. In this article, we explore what the guidelines mean in practice—and how organisations can take responsible, structured action.
From capability to responsibility: what’s new in the EU guidance?
The document outlines how the EU distinguishes between:
GPAI model providers – the actors that develop or train the foundation model
GPAI system providers – the actors that package the model into a usable application
Downstream deployers – the organisations that use or integrate the system into business processes
Each of these layers comes with its own set of obligations, particularly for models that may qualify as GPAI models with systemic risk under the Act, for example those with wide adoption, significant autonomy, or the ability to influence public discourse.
The EU guidance outlines key responsibilities such as the following (illustrated in the sketch after this list):
Providing technical documentation and summaries
Publishing transparency statements
Conducting risk assessments and implementing mitigation measures
Enabling system-level controls for downstream use
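As a rough illustration of the first two items, the sketch below shows what a downstream-facing transparency summary could capture. It is a minimal, assumed structure for illustration only; the actual required content is defined by the Act's annexes and the guidelines themselves, not by this example.

```python
# Hypothetical sketch of a transparency/documentation summary for a
# GPAI-based system. All keys and values are illustrative assumptions;
# consult the Act's annexes and the guidelines for the required content.
transparency_summary = {
    "model": "open-source LLM (example)",
    "intended_use": "summarising internal legal documents",
    "known_limitations": ["may produce incorrect citations", "English only"],
    "risk_mitigations": ["human review before publication", "PII filtering"],
    "point_of_contact": "ai-governance@example.org",  # hypothetical address
}

# Print the record in a readable form for review.
for key, value in transparency_summary.items():
    print(f"{key}: {value}")
```

Keeping such a record per model or system makes the audit questions in the next section far easier to answer.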
What matters most: even if you’re not the developer of a foundation model, you still carry responsibility for how it is used in your organisation.
What this means for European businesses and public sector organisations
If you use general-purpose models (like GPT-4, Gemini, Claude, or open-source equivalents), the EU guidelines directly affect you—even if you’re just integrating these models via third-party tools or APIs.
Key questions to consider:
Have we mapped where GPAI models are used in our organisation? (See the sketch after this list.)
Do we understand the risk levels and control requirements for each use case?
Can we demonstrate that our use is transparent, lawful, and safe?
Are our employees trained to understand and monitor GPAI tools?
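To make the mapping question concrete, here is a minimal sketch of a GPAI use-case register in Python. All field names, categories, and values are illustrative assumptions, not terms prescribed by the AI Act or the guidelines.

```python
# Illustrative sketch of a GPAI use-case register. The schema is an
# assumption for this example, not one prescribed by the AI Act.
from dataclasses import dataclass


@dataclass
class GPAIUseCase:
    name: str                  # e.g. "Customer-support summarisation"
    model: str                 # e.g. "GPT-4 via third-party API"
    role: str                  # "model provider", "system provider", or "deployer"
    integration: str           # how the model enters the workflow (API, plugin, embedded)
    risk_notes: str = ""       # known risks and mitigations for this use case
    owner: str = ""            # accountable team or person
    documented: bool = False   # is transparency/technical documentation on file?


# A register is simply a list of such records that can be reviewed and audited.
register: list[GPAIUseCase] = [
    GPAIUseCase(
        name="Contract summarisation",
        model="Claude via vendor API",
        role="deployer",
        integration="third-party SaaS tool",
        risk_notes="Confidential input data; output reviewed by legal staff",
        owner="Legal operations",
        documented=True,
    ),
]

# Simple check: which use cases still lack documentation on file?
undocumented = [u.name for u in register if not u.documented]
print("Use cases without documentation on file:", undocumented)
```

Even a lightweight register like this gives governance teams a single place to check ownership, risk notes, and documentation status before questions arrive from auditors or regulators.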
This is not just a technical challenge—it’s a strategic and organisational one. And that’s precisely why frameworks for adoption are needed.
How the AI Adoption Framework helps
At AI Adoption Framework, we provide a clear and structured approach to help European organisations navigate the dual challenge of innovation and regulation. Our framework is built around seven pillars—from strategic alignment and data readiness to ethics, governance, and measurable value.
These pillars directly support the implementation of the GPAI guidance:
Responsible AI, ethics & security – align GPAI use with EU legal expectations
Governance & risk management – map responsibilities between providers, system owners and internal teams
Platform, tooling & integration – understand how general-purpose models are embedded into workflows
People, skills & change adoption – raise AI literacy and reduce blind trust in powerful models
In short: the AI Adoption Framework helps you move from reading the regulation to acting on it—responsibly and effectively.
Next steps: from reading to readiness
The new guidelines for general-purpose AI are a clear signal: regulators expect visibility, responsibility, and control—even when organisations rely on external models. But this isn’t a reason to avoid AI. It’s a call to act intentionally.
✅ Download the full guidelines here:
Guidelines on the scope of GPAI obligations (PDF)
➡️ Need a roadmap to align your use of GPAI with EU rules?
Visit AI Adoption Framework and explore our Maturity Scan, tools and playbook.