
According to Sam Altman’s internal roadmap, GPT-5 will fuse two very different lineages: the fast, conversational GPT models and the slower, deliberate “o”-series reasoning models.
Today you have to decide which temperament fits your prompt; route it to the wrong model and you waste time, tokens, or quality. GPT-5’s mission is to make that choice for you. Think of it as a personal assistant that knows when to fire up turbo mode for a calculus proof and when to coast on economy settings for a shopping list. If the plumbing works, users should see a best-of-both-worlds blend of speed, cost control, and brainpower without touching a dropdown.
Altman’s plan (subject to the usual “this-is-AI-so-things-change” disclaimer):
| Subscription | Access Level | Rough Translation |
|---|---|---|
| Free | GPT-5, “standard intelligence” | Better than GPT-4, with no throttling on basics. |
| Plus ($20/mo) | Mid-tier intelligence | A noticeable IQ bump; think honors class. |
| Pro | Highest intelligence, larger context windows, premium features | The full Tony-Stark suit: voice, canvas, deep research, the whole shebang. |
Whether Plus keeps enough extra oomph to justify its $20 after free users taste GPT-5 is an open question, and a sneaky upsell risk for OpenAI.

*Sam Altman teased the release of GPT-5 on X*
Altman is already dialing down the hype. GPT-5 will still be “experimental” and not the mysterious International Math Olympiad gold-medal model lurking in OpenAI’s skunkworks. Meanwhile the company is also cooking its first open-source LLM since GPT-2, a move likely intended to blunt pressure from Meta’s Llama line and keep the research community onside.
Right now AI feels like alphabet soup: GPT-4o, o4, o3, turbo, “reasoning,” “creative,” and so on. Pick the wrong spoon and you slurp thin gruel. Nick Turley, head of ChatGPT, frames GPT-5’s auto-selector as the cure: “Our goal is that the average person does not need to think about which model to use.” In practice, that means GPT-5 deciding for itself when a prompt needs deep reasoning and when a quick, cheap answer will do.
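OpenAI hasn’t published how the selector works; conceptually, though, it amounts to a router sitting in front of two model endpoints. Here is a minimal sketch of that idea, using made-up model names and a deliberately crude complexity heuristic (none of this is OpenAI’s actual implementation):

```python
# Hypothetical auto-selector: route each prompt to a "reasoning" or a "fast"
# model based on a crude complexity heuristic. Model names are invented.
REASONING_MODEL = "gpt-5-thinking"  # hypothetical: slow, deliberate
FAST_MODEL = "gpt-5-main"           # hypothetical: quick, cheap

# Keywords that suggest the prompt needs careful, multi-step reasoning.
REASONING_HINTS = ("prove", "derive", "step by step", "debug", "why")

def pick_model(prompt: str) -> str:
    """Return the model a router might choose for this prompt."""
    text = prompt.lower()
    looks_hard = any(hint in text for hint in REASONING_HINTS)
    is_long = len(text.split()) > 150  # long prompts get the heavy model
    return REASONING_MODEL if (looks_hard or is_long) else FAST_MODEL

print(pick_model("Prove that sqrt(2) is irrational"))   # gpt-5-thinking
print(pick_model("Make me a shopping list for tacos"))  # gpt-5-main
```

A production router would almost certainly be a learned classifier rather than a keyword list, but the economics are the same: pay for expensive inference only when the prompt earns it.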
OpenAI promised fireworks last December when lab tests suggested its new large language model got sharper the longer you let it think. Reality was messier. Once engineers wrapped that brainy prototype into a chatty “o3” version for customers, most of the wow factor evaporated. Two insiders say the gains essentially fell back to GPT-4-class performance.
So what broke? A cocktail of hard problems.
Despite the hiccups, GPT-5 is ready. People who’ve test-driven it report steady, incremental gains rather than a leap.
Don’t expect a GPT-3-to-GPT-4-level quantum leap, but incremental still matters when ChatGPT is already a cash geyser. Even small upgrades could help justify OpenAI’s reported plan to torch $45 billion on rented servers over the next 3½ years, and keep Microsoft (likely to hold ~33% of the equity after a looming restructure) happily on the hook.
Internal strains persist. Meta has poached a dozen OpenAI researchers with “soccer-star” pay packages, and Slack spats have flared between research boss Mark Chen and deputies. Yet leadership insists momentum is back, thanks to a “universal verifier” that automates quality checks during reinforcement learning. VP Jerry Tworek even floated the idea that this RL machinery might already be OpenAI’s proto-AGI.
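OpenAI hasn’t described the universal verifier’s internals, but conceptually it slots into the reinforcement-learning loop as an automated reward signal: generate candidate answers, let the verifier grade them, and reinforce the winners. A toy sketch of that loop, with a stand-in verifier (a real one would itself be a learned model, not a string check):

```python
def verifier(prompt: str, answer: str) -> float:
    """Stand-in for a 'universal verifier': scores an answer from 0 to 1.
    Here it's a trivial check; in practice it would be a trained model."""
    return 1.0 if "42" in answer else 0.0

def rl_step(prompt: str, candidates: list[str]) -> tuple[str, float]:
    """One RL-style selection step: score each sampled candidate with the
    verifier and return the best answer plus its reward. A full training
    loop would use these rewards to update the policy model's weights."""
    rewards = [verifier(prompt, c) for c in candidates]
    best = max(range(len(candidates)), key=lambda i: rewards[i])
    return candidates[best], rewards[best]

answer, reward = rl_step(
    "What is 6 x 7?",
    ["It is 41.", "It is 42.", "No idea."],
)
print(answer, reward)  # the verifier-approved candidate and its score
```

The appeal is obvious: if quality checks no longer need human graders, the RL pipeline can scale with compute instead of headcount.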
CEO Sam Altman naturally dialed the hype to eleven, telling comedian Theo Von that “GPT-5 is smarter than us in almost every way.” Rivals such as Google, Anthropic, and Elon Musk’s xAI aren’t laughing; they’re doubling down on the same reinforcement-learning tricks.
GPT-5 should land this week or next: smarter, steadier, but not sorcerous. The real test isn’t whether it beats humans at trivia; it’s whether it keeps OpenAI a step ahead in the GPU-gobbling arms race the company itself kicked off.