
The latest GPT-5 features leak is now partly confirmed and partly unresolved. Reports on April 22 said an internal Codex model picker briefly exposed GPT-5.5, oai-2.1, Arcanine, and Glacier-alpha; OpenAI then announced GPT-5.5 on April 23 and updated availability on April 24.[1][5] The confirmed direction is clear: the GPT-5 family is moving from chat toward longer, tool-using work in coding, computer use, knowledge work, and research. The unresolved part is just as important. OpenAI has not published official details for the leaked codenames, and model names in a dropdown are not a release schedule.
What actually leaked
The leak story has two layers. The older layer came on August 7, 2025, when reports said GitHub briefly published GPT-5 launch material before OpenAI’s event. Those reports described four variants: GPT-5, GPT-5 Mini, GPT-5 Nano, and GPT-5 Chat. They also reported separate language pointing to GPT-5 Pro access for higher-tier users.[3] Android Authority separately reported that the GitHub post pointed to stronger reasoning, coding, user experience, and agentic capabilities.[4]
The newer layer came on April 22, 2026. Moneycontrol reported that an internal Codex model selection menu became visible to a limited set of users and showed GPT-5.5, oai-2.1, and codenamed builds.[1] The Agent Times reported a similar dropdown exposure involving GPT-5.5, Arcanine, and Glacier-alpha, while noting that OpenAI had not commented at the time and that the sighting had not been independently verified.[2]
That matters because the leak did not come from a vague anonymous claim. It appeared to involve product surfaces that developers actually use. Readers who want the broader release context can start with our GPT-5 launch timeline, then work forward through the GPT-5.x updates.
Why the picker matters
A model picker is not just a menu. It is a routing surface. If a nonpublic model appears there, it can reveal how a lab is segmenting capability, speed, access, and product packaging. In this case, the important signal was not that every leaked name would ship. The signal was that OpenAI was testing more specialized GPT-5 family entries around coding, tool use, and long-running work.
What it did not reveal
The leak did not publish model weights, training data, or a full technical report. It also did not prove that Arcanine, Glacier-alpha, or oai-2.1 will become public products. OpenAI has not published official details for these codenames, and the GPT-5.5 announcement does not list them as released models.[5]

What OpenAI has confirmed since the leak
OpenAI announced GPT-5.5 on April 23, 2026, then updated the post on April 24 to say GPT-5.5 and GPT-5.5 Pro were available in the API.[5] The announcement describes GPT-5.5 as a model for writing and debugging code, researching online, analyzing data, creating documents and spreadsheets, operating software, and moving across tools until a task is finished.[5]
That official description lines up with the leak’s strongest theme: GPT-5 is becoming less of a single chat model and more of a work system. The confirmed features are concentrated in agentic coding, computer use, knowledge work, research support, and safeguards for higher-risk capabilities.
| Confirmed item | What it means | Status as of publication |
|---|---|---|
| GPT-5.5 Thinking and GPT-5.5 Pro | GPT-5.5 Thinking is rolling out to Plus, Pro, Business, and Enterprise users. GPT-5.5 Pro is rolling out to Pro, Business, and Enterprise users.[5] | Official |
| Codex availability | OpenAI says GPT-5.5 is available in Codex for Plus, Pro, Business, Enterprise, Edu, and Go plans with a 400K context window.[5] | Official |
| API pricing | OpenAI lists GPT-5.5 at $5 per 1M input tokens and $30 per 1M output tokens, and GPT-5.5 Pro at $30 per 1M input tokens and $180 per 1M output tokens.[5][8] | Official |
| API context | OpenAI says GPT-5.5 has a 1M context window in the API.[5] | Official |
| Fast mode in Codex | OpenAI says GPT-5.5 Fast mode generates tokens 1.5x faster for 2.5x the cost.[5] | Official |
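Those listed rates make per-request cost easy to estimate. The sketch below is a minimal calculator using the official per-1M-token prices from the table; treating Fast mode as a flat 2.5x multiplier on the whole request is an assumption for illustration, since OpenAI’s post states the multiplier without detailing how it is billed.

```python
# Estimate API cost from the per-1M-token rates quoted above.
# Rates are USD per 1M tokens. Billing Fast mode as a flat 2.5x
# multiplier on the full request is an assumption, not a documented rule.

RATES = {
    "gpt-5.5": {"input": 5.00, "output": 30.00},
    "gpt-5.5-pro": {"input": 30.00, "output": 180.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int,
                 fast_mode: bool = False) -> float:
    """Return the estimated USD cost of one request."""
    rate = RATES[model]
    cost = (input_tokens * rate["input"]
            + output_tokens * rate["output"]) / 1_000_000
    return cost * 2.5 if fast_mode else cost

# A 50K-token prompt with a 5K-token answer on GPT-5.5:
base = request_cost("gpt-5.5", 50_000, 5_000)                  # 0.25 + 0.15 = 0.40
fast = request_cost("gpt-5.5", 50_000, 5_000, fast_mode=True)  # 0.40 * 2.5 = 1.00
```

Even a rough calculator like this makes the 1.5x-speed-for-2.5x-cost trade concrete: Fast mode only pays off when latency is worth more than the price gap.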
The confirmed release does not validate every screenshot or every interpretation. It does validate the general direction of the GPT-5 features leak: more autonomy, more tool use, larger work surfaces, and more plan-based execution.
Leak claims versus official GPT-5 features
OpenAI officially introduced GPT-5 on August 7, 2025, as a unified system with a fast model, a deeper reasoning model called GPT-5 thinking, and a real-time router that chooses between them based on the conversation and task.[6] That original launch is the key reference point for judging the leaks.
| Leak claim or report | Officially confirmed feature | How to read it |
|---|---|---|
| Four GPT-5 variants appeared in a deleted GitHub post: GPT-5, GPT-5 Mini, GPT-5 Nano, and GPT-5 Chat.[3] | OpenAI API docs list GPT-5, GPT-5 mini, and GPT-5 nano; the GPT-5 model page also lists GPT-5 chat-latest pricing in the broader GPT-5 family.[9] | Mostly confirmed, but packaging changed across ChatGPT and API surfaces. |
| Reports pointed to better reasoning and agentic capabilities.[4] | OpenAI described GPT-5 as a unified system with a router and deeper reasoning mode.[6] | Directionally confirmed. |
| Reports suggested a Pro tier for harder work.[3] | OpenAI launched GPT-5 pro for extended reasoning and later GPT-5.5 Pro for higher-accuracy work.[6][5] | Confirmed as a product pattern. |
| Reports emphasized coding and developer workflows. | OpenAI said GPT-5 scored 74.9% on SWE-bench Verified and 88.0% on Aider Polyglot in its developer launch materials.[7] | Confirmed with official benchmark claims. |
| The April 2026 Codex leak exposed GPT-5.5 and codenames such as Arcanine and Glacier-alpha.[1][2] | OpenAI confirmed GPT-5.5, but has not published official details for Arcanine, Glacier-alpha, or oai-2.1.[5] | Partly confirmed. Treat codenames as unverified. |
| Leaks implied more autonomous work loops. | OpenAI’s GPT-5 system card describes a real-time router, GPT-5 thinking, GPT-5 thinking mini, and GPT-5 thinking pro within the broader system.[10] | Confirmed as architecture direction, not as a promise that every leaked label ships. |
A leak can be accurate at the product-direction level and still miss the final names, limits, access rules, or price. For side-by-side model positioning, use our GPT model comparison rather than a screenshot from a pre-release menu.

What the leak means for ChatGPT users
For ChatGPT users, the confirmed trend is not a secret prompt or a hidden setting. It is a product shift. GPT-5 introduced automatic routing between a fast model and a deeper reasoning model.[6] GPT-5.5 pushes the same direction into longer, messier tasks that involve code, files, tools, and online research.[5]
Expect more automatic routing
OpenAI’s original GPT-5 launch made the model the default ChatGPT experience for signed-in users and moved the product toward automatic reasoning when a prompt benefits from it.[6] The practical result is fewer moments where a user must choose the exact model before typing. If you track these changes day to day, bookmark our ChatGPT updates in 2026 changelog.
Expect better work on files, code, and long tasks
The most useful upgrade for many users will not be a clever answer to a trivia question. It will be a model that can take a messy repository, a pile of documents, or a multi-step spreadsheet task and keep working without constant hand-holding. OpenAI says GPT-5.5 is designed to plan, use tools, check its work, navigate ambiguity, and keep going.[5]
Do not treat leaked codenames as selectable options
If you see Arcanine, Glacier-alpha, or another codename discussed online, treat it as unconfirmed until it appears in OpenAI’s official documentation or in your own account’s supported model picker. OpenAI has not published official details for those codenames, and a visible string is not the same thing as stable product access. When a confirmed model change does reach your account:
- Retest your most important prompts after a model change.
- Check whether old chats now use a different model path.
- Use file-heavy and code-heavy examples, not only short Q&A prompts.
- Watch for changes in refusal behavior and safety boundaries.
- Save outputs from before and after a rollout so you can compare quality.

What developers should watch
The developer impact is larger than the consumer headline. GPT-5 added more API control through reasoning_effort, verbosity, and custom tools, giving developers ways to trade off speed, response length, and tool format.[7] GPT-5.5 raises the same question at a higher price and larger context scale: when does a more capable model reduce retries enough to justify the per-token cost?
| Developer question | Official data point | Practical takeaway |
|---|---|---|
| How big is the original GPT-5 API context? | OpenAI says GPT-5 models can accept 272,000 input tokens and emit 128,000 reasoning and output tokens, for a 400,000-token total context length.[7] | Design retrieval and chunking around usable input, not just headline context. |
| What does the GPT-5 model page list? | OpenAI’s GPT-5 model docs list a 400,000-token context window, 128,000 max output tokens, and $1.25 input / $10.00 output pricing per 1M tokens.[9] | Use official model docs for production cost estimates. |
| How does GPT-5.5 change the API ceiling? | OpenAI says GPT-5.5 will support a 1M context window in the API and lists GPT-5.5 at $5 input / $30 output per 1M tokens.[5][8] | Long-context workflows may become simpler, but output cost matters. |
| What should teams test first? | OpenAI’s GPT-5 developer launch emphasizes coding, agentic tasks, tool calls, reasoning controls, and custom tools.[7] | Benchmark on your own repository, tool chain, and latency budget. |
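The documented limits can be turned into a simple pre-flight check. This sketch uses only the official GPT-5 figures cited above (272,000 input tokens, 128,000 reasoning and output tokens, 400,000 total); the token counts you pass in would come from your own tokenizer, which is outside the sketch.

```python
# Pre-flight budget check against GPT-5's documented API limits:
# 272K input tokens, 128K reasoning + output tokens, 400K total context.

GPT5_MAX_INPUT = 272_000
GPT5_MAX_OUTPUT = 128_000

def fits_budget(prompt_tokens: int, requested_output_tokens: int) -> bool:
    """True if a request stays within the documented input and output caps."""
    return (prompt_tokens <= GPT5_MAX_INPUT
            and requested_output_tokens <= GPT5_MAX_OUTPUT)

def usable_input(reserved_output_tokens: int) -> int:
    """Input tokens left after reserving output, never exceeding the input cap."""
    total = GPT5_MAX_INPUT + GPT5_MAX_OUTPUT  # the 400K headline figure
    return min(GPT5_MAX_INPUT, total - reserved_output_tokens)
```

This is the practical meaning of “design around usable input, not headline context”: reserving the full 128K output budget still leaves at most 272K for the prompt, retrieval chunks, and tool results combined.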
For implementation planning, pair this story with our context window comparison and OpenAI API pricing guide. If you need a broader model shortlist, use all GPT models compared rather than choosing solely from launch-day claims.

What not to assume from the leak
The leak is useful, but it is easy to overread. A dropdown tells you what was present in a product surface at one moment. It does not tell you the final release plan, safety policy, access level, or benchmark performance.
- Do not assume every codename becomes a product. OpenAI confirmed GPT-5.5, not every leaked label.[5]
- Do not assume temporary access means entitlement. A model can appear briefly and still return unsupported-model errors later.
- Do not assume anecdotes are benchmarks. Developer screenshots can be useful, but they do not replace controlled evaluations.
- Do not assume a higher version is cheaper. GPT-5.5 has a higher official API price than GPT-5, even if OpenAI argues it can use fewer tokens on some work.[5][8][9]
- Do not assume safeguards loosen. OpenAI says GPT-5.5 has stronger safeguards, including more cyber-specific controls.[5]
The safest reading is narrow. The leak previewed OpenAI’s near-term emphasis on agentic coding, tool use, and longer work loops. It did not prove a full roadmap.
How this fits OpenAI’s 2026 model strategy
The GPT-5 features leak fits a broader pattern: OpenAI is building a family of models and product modes rather than a single static chatbot. The public GPT-5 launch already split the experience into a fast path, a thinking path, and a Pro path.[6] The developer launch added model sizes, tool controls, and pricing tiers.[7] GPT-5.5 extends that pattern into more autonomous work.
That cadence makes version tracking important. If you are comparing outputs across time, read our GPT-5.1 update, GPT-5.2 release notes, and GPT-5.3 release alongside this story. Small version changes can affect routing, available tools, context windows, and pricing.
The enterprise angle is also central. OpenAI said GPT-5 was launching across Microsoft 365 Copilot, Copilot, GitHub Copilot, and Azure AI Foundry.[7] That makes model roadmap signals relevant beyond ChatGPT. For partnership context, see OpenAI and Microsoft news, and for daily release coverage see today’s OpenAI news.
The bottom line is simple. The GPT-5 features leak was useful because it pointed to OpenAI’s product direction. It was not authoritative enough to set budgets, procurement rules, or launch expectations. Use leaks to decide what to watch. Use official documentation to decide what to build on.
Frequently asked questions
Was the GPT-5 features leak real?
Reports say real product surfaces briefly exposed GPT-5-related information. The August 2025 leak involved a reportedly deleted GitHub post, while the April 2026 leak involved a Codex model picker shown to some users.[3][1] OpenAI later confirmed GPT-5.5, but not every codename reported in the leak.[5]
Did the leak reveal GPT-6?
No. The reports covered GPT-5 family labels, GPT-5.5, and several codenames. OpenAI has not published official information about a GPT-6 release in connection with these reports. Treat any GPT-6 claim as speculation unless OpenAI publishes it.
Which leaked features were confirmed?
The strongest confirmed themes are multiple GPT-5 family entries, deeper reasoning, Pro-tier reasoning, agentic coding, and tool-heavy work.[3][6][5] The exact labels and access rules changed across ChatGPT, Codex, and the API. That is normal for pre-release material.
Is GPT-5.5 available now?
As of this article’s April 25, 2026 publication date, OpenAI says GPT-5.5 is rolling out in ChatGPT and Codex to paid plans, and its April 24 update says GPT-5.5 and GPT-5.5 Pro are available in the API.[5] Availability can still vary by plan, workspace, and rollout timing.
Does GPT-5.5 cost more than GPT-5 in the API?
Yes on headline token price. OpenAI lists GPT-5 at $1.25 per 1M input tokens and $10.00 per 1M output tokens, while GPT-5.5 is listed at $5 per 1M input tokens and $30 per 1M output tokens.[9][8] Total workflow cost can still depend on retries, output length, caching, and tool use.
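Because the listed rates differ by a fixed multiple (4x on input, 3x on output), the comparison comes down to retries and token efficiency. A minimal sketch, assuming a workload where the only variable is how many full attempts each model needs; the attempt counts are hypothetical inputs, not measured figures:

```python
# Compare total workflow cost for GPT-5 vs GPT-5.5 at the listed rates,
# treating each retry as a full-cost repeat request. Attempt counts
# below are hypothetical, chosen only to illustrate the arithmetic.

def workflow_cost(input_rate: float, output_rate: float,
                  input_tokens: int, output_tokens: int,
                  attempts: int) -> float:
    """Total USD cost of `attempts` identical requests at the given rates."""
    per_attempt = (input_tokens * input_rate
                   + output_tokens * output_rate) / 1_000_000
    return per_attempt * attempts

# 100K-token prompt, 10K-token answer:
gpt5 = workflow_cost(1.25, 10.00, 100_000, 10_000, attempts=3)   # 0.225 * 3 = 0.675
gpt55 = workflow_cost(5.00, 30.00, 100_000, 10_000, attempts=1)  # 0.50 + 0.30 = 0.80
```

On this profile GPT-5 is still cheaper even at three attempts, which is the point: the newer model wins on cost only when it also saves enough tokens, attempts, or engineering time.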
Should developers wait for the next leaked model?
No. Build with model abstraction, test suites, and clear cost controls so you can swap models when official support changes. A leak can tell you what to evaluate next, but it should not be the foundation for a production roadmap.
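One way to keep that flexibility is a thin registry between application code and the provider, so a model swap is a config edit rather than a code change. A minimal sketch; the role names, model identifiers, and `ModelConfig` fields are illustrative, not any particular SDK’s API.

```python
# A thin model-abstraction layer: application code asks for a role
# ("coding", "cheap"), and the registry maps it to a concrete model.
# Swapping models when official support changes is then one edit here.

from dataclasses import dataclass

@dataclass(frozen=True)
class ModelConfig:
    name: str              # provider model identifier
    max_output_tokens: int

# Role -> model mapping lives in one place, not scattered through code.
REGISTRY: dict[str, ModelConfig] = {
    "coding": ModelConfig("gpt-5.5", 16_000),
    "cheap": ModelConfig("gpt-5-mini", 4_000),
}

def resolve(role: str) -> ModelConfig:
    """Look up the model for a role, failing loudly on unknown roles."""
    try:
        return REGISTRY[role]
    except KeyError:
        raise ValueError(f"no model configured for role {role!r}")
```

Pair a registry like this with a regression test suite per role, and evaluating a newly released (or newly leaked-then-confirmed) model becomes a branch, not a rewrite.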
