Subquadratic has launched its first AI model, claiming it can process a context window of up to 12 million tokens using a new architecture called Subquadratic Selective Attention (SSA).

The company said SSA scales linearly in compute and memory with sequence length, avoiding the quadratic scaling of the attention mechanism in traditional transformer models.
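To see why quadratic scaling becomes prohibitive at this context size, a rough back-of-envelope calculation helps: standard attention computes a score for every pair of tokens, so at 12 million tokens the score matrix alone is enormous. The numbers below are illustrative, assuming fp16 (2 bytes per score entry), not figures from Subquadratic.

```python
# Back-of-envelope cost of a full (dense) attention score matrix
# at a 12-million-token context, assuming fp16 (2 bytes/entry).
n = 12_000_000
quadratic_entries = n * n                      # one score per token pair
bytes_fp16 = quadratic_entries * 2

print(f"{quadratic_entries:.2e} score entries")    # ~1.44e14
print(f"{bytes_fp16 / 1e12:.0f} TB for scores alone")  # ~288 TB

# A linear-scaling scheme that keeps, say, k = 1024 relationships
# per token needs only n * k entries instead:
k = 1024
linear_entries = n * k
print(f"{linear_entries:.2e} entries with k={k}")  # ~1.23e10
```

This is why sub-quadratic attention schemes avoid ever materializing the full pairwise score matrix.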

Subquadratic also said it plans to release a model with a 50-million-token context window later.

The startup said its model scored 83 on the MRCR v2 benchmark, outperforming GPT-5.5 by nine points. It also claimed 92.1% accuracy on a needle-in-a-haystack retrieval benchmark at 12 million tokens.

On SWE-Bench Verified, Subquadratic reported a score of 82.4%, slightly ahead of Claude Opus 4.6 and Gemini 3.1 Pro.

The company added that its architecture delivers a 52.2× speedup over dense attention systems at a one-million-token context length.

According to chief technology officer Alex Whedon, SSA selects only the token relationships that matter for each prompt instead of processing every possible relationship.

The company said this helps reduce processing costs while handling larger context windows more efficiently.

Subquadratic is launching a beta API with support for the full 12-million-token context window alongside a coding tool called SubQ Code and a research tool named SubQ Search.

The startup has raised $29 million at a $500 million valuation and was previously known as Aldea before pivoting from speech models to general AI systems.
