OpenAI’s Open-Source Models: Do They Pass the EU AI Act Test?

07-Aug-2025

In a bold move that’s shaking up the AI landscape, OpenAI released two open-source models on August 5, 2025: the gpt-oss-120b (a hefty 120-billion-parameter beast) and its smaller sibling, the gpt-oss-20b (20 billion parameters). These text-only reasoning models are tailored for tasks like coding, math, science, and general knowledge, boasting performance that rivals proprietary heavyweights like OpenAI’s o4-mini and o3-mini on key benchmarks. Released under the permissive Apache 2.0 license with an added gpt-oss usage policy promoting safe and responsible deployment, these models include weights, architecture details, tokenizers (via the open-sourced TikToken library), and even training code snippets. Trained on trillions of filtered text tokens (with a June 2024 knowledge cutoff) and refined using chain-of-thought reinforcement learning, they’re designed to minimize harm while maximizing utility.
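To make the tokenizer point concrete: TikToken implements byte-pair encoding (BPE). The toy sketch below shows the core BPE training loop on a made-up three-word corpus; the merge rules and corpus are purely illustrative and have nothing to do with gpt-oss’s actual published encodings.

```python
# Illustrative sketch only: a minimal byte-pair-encoding (BPE) training loop,
# the algorithm family behind tokenizers like the open-sourced TikToken
# library. The corpus and merge count are invented for demonstration.
from collections import Counter

def most_frequent_pair(tokens):
    """Return the most common adjacent token pair, or None if fewer than 2 tokens."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return pairs.most_common(1)[0][0] if pairs else None

def bpe_train(text, num_merges):
    """Greedily learn up to `num_merges` merge rules from raw text."""
    tokens = list(text)  # start from individual characters
    merges = []
    for _ in range(num_merges):
        pair = most_frequent_pair(tokens)
        if pair is None:
            break
        merges.append(pair)
        merged, i = [], 0
        while i < len(tokens):
            # Replace each occurrence of the chosen pair with one merged token.
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
                merged.append(tokens[i] + tokens[i + 1])
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return tokens, merges

tokens, merges = bpe_train("low lower lowest", num_merges=3)
print(tokens)   # shared prefix "low" gets merged into a single token
```

Real production tokenizers work on bytes rather than characters and ship a fixed, pre-trained merge table, but the greedy merge-the-most-frequent-pair idea is the same.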

But here’s the question: In an era of tightening regulations, do these models qualify as truly “open-source” under the EU AI Act? And more crucially, are they compliant with key provisions like Articles 53 and 55? Let’s dive in, breaking it down step by step based on the EU AI Act (Regulation (EU) 2024/1689).


Are These Models “Open-Source” According to the EU AI Act?

The short answer? Yes, they fit the bill as general-purpose AI (GPAI) models released under a “free and open licence.” The EU AI Act, in Recital 102 and the open-source carve-out in Article 53(2), sets a clear bar: models must publicly share parameters (weights, architecture, usage info) and allow access, usage, modification, distribution, study, and improvement, all while giving credit to the original provider and ensuring similar terms for derivatives.

OpenAI’s release checks these boxes. The Apache 2.0 license is a gold standard for openness, enabling commercial use, tweaks, and sharing without roadblocks. The accompanying usage policy stresses responsibility (e.g., users are expected to maintain safeguards against misuse, such as fine-tuning that strips out safety behavior) but doesn’t clamp down on core freedoms. No paywalls, no data grabs: just pure, accessible AI. This openness grants the models exemptions from some hefty transparency rules (more on that below), as long as they don’t veer into high-risk territory.

Breaking Down Compliance: Articles 53 and 55 Under the Microscope

The EU AI Act isn’t just about labels; it’s about compliance. Article 53 lays out baseline duties for GPAI providers, like documentation and copyright respect, while Article 55 ramps it up for “systemic risk” models — think those capable of causing widespread havoc. Open-source models get some leeway, but not a free pass.

To make this crystal clear, here’s a short compliance breakdown in table form. (Note: This is based on public info from OpenAI’s model cards and announcements; full verification would need regulatory scrutiny.)

[Compliance breakdown table comparing the models against Articles 53 and 55 is embedded in the original Medium post.]

What Does This Mean for the Future of AI?

OpenAI’s release is a win for democratizing AI — developers can now tinker, build, and innovate without starting from scratch. By aligning with the EU AI Act’s open-source spirit, these models sidestep some regulatory hurdles, potentially setting a precedent for others. That said, the devil’s in the details: While they seem compliant on paper, nuances like copyright processes could invite scrutiny.

As AI evolves, expect more releases like this to balance innovation with responsibility. If you’re a developer or policymaker, keep an eye on the EU AI Office for updates. What do you think — will this spark a wave of truly open AI, or is it just the tip of the iceberg? Drop your thoughts below!


OpenAI’s Open-Source Models: Do They Pass the EU AI Act Test? was originally published in Coinmonks on Medium, where people are continuing the conversation by highlighting and responding to this story.
