
Nous Research Releases Token Superposition Training to Speed Up LLM Pre-Training by Up to 2.5x Across 270M to 10B Parameter Models

Pre-training large language models is expensive enough that even modest efficiency improvements can translate into meaningful cost and time savings. Nous Research is releasing Token Superposition Training (TST), a technique that significantly reduces pre-training wall-clock time at fixed compute without touching the model architecture, optimizer, tokenizer, parallelism strategy, or training data.

At the 10B-A1B mixture-of-experts scale, TST reaches a lower final training loss than a matched-FLOPs baseline while consuming 4,768 B200-GPU-hours versus the baseline's 12,311, roughly a 2.5x reduction in total pre-training time.

https://arxiv.org/pdf/2605.06546

The Problem TST is Solving

Modern LLM pre-training is heavily data-driven. Recent training regimes routinely overtrain well beyond compute-optimal estimates, and raw text throughput (how much text a model can process per FLOP) has become a key lever. Subword tokenizers like BPE already boost throughput by compressing sequences, and research suggests much of the BPE advantage over byte-level models comes simply from shorter sequences, which means the model sees more text per unit of compute.

TST asks whether that throughput lever can be pulled further during training, independently of the tokenizer and without permanently altering the model.

How TST Works: Two Phases

TST modifies the standard pre-training loop in two sequential phases:

Phase 1 — Superposition: For the first r fraction of total training steps (the paper finds r ∈ [0.2, 0.4] to be close to optimal across tested scales), the model doesn't receive individual tokens. Instead, the input sequence of length L is segmented into non-overlapping bags of s contiguous tokens. In the embedding layer, each bag is collapsed into a single latent "s-token" by averaging the s token embeddings. The transformer then processes a sequence of length L/s.
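A minimal sketch of that bagging step, using a toy vocabulary and a made-up 2-dimensional embedding table (pure Python for clarity; the paper's implementation operates on tensors):

```python
# Toy illustration of Phase 1 input folding: fold a length-L token
# sequence into L/s bags and average each bag's embeddings.
# The vocabulary, embedding table, and sizes here are made up.

def fold_and_average(tokens, embed, s):
    """Return one averaged 's-token' vector per bag of s token ids."""
    assert len(tokens) % s == 0, "sequence length must be divisible by s"
    bags = [tokens[i:i + s] for i in range(0, len(tokens), s)]
    latents = []
    for bag in bags:
        dim = len(embed[bag[0]])
        avg = [sum(embed[t][d] for t in bag) / s for d in range(dim)]
        latents.append(avg)
    return latents  # length L/s, each entry a d-dim vector

# 4-token toy embedding table, d = 2
embed = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0], 3: [2.0, 0.0]}
latents = fold_and_average([0, 1, 2, 3], embed, s=2)
print(latents)  # [[0.5, 0.5], [1.5, 0.5]]
```

Note that the transformer downstream of the embedding layer never sees individual tokens during this phase, only the averaged latents.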

Crucially, each TST step is kept equal-FLOPs to a standard training step by increasing the data sequence length by a factor of s during the superposition phase. Because each latent position corresponds to s source tokens, the model ingests s times as much text per unit of compute, which is what drives the throughput gain.

On the output side, each latent position predicts the next bag of s tokens rather than a single next token. The standard cross-entropy loss is replaced with a multi-hot cross-entropy (MCE) loss, which assigns equal probability mass 1/s to each token in the target bag. The MCE loss reduces to a simple mean of standard cross-entropy terms over the s targets; it can be implemented using the existing fused CE kernels already present in any major pre-training library, without writing a new kernel or adding an auxiliary head.
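A toy sketch of that reduction, with made-up logits over a 4-token vocabulary: cross-entropy against the multi-hot target is identical to the mean of the s one-hot cross-entropy terms, by linearity of the log-probability sum.

```python
import math

def cross_entropy(logits, target_idx):
    """Standard CE of a single one-hot target against raw logits."""
    z = sum(math.exp(l) for l in logits)
    return -math.log(math.exp(logits[target_idx]) / z)

def mce(logits, bag):
    """Multi-hot CE: 1/s probability mass on each token in the bag."""
    z = sum(math.exp(l) for l in logits)
    s = len(bag)
    return -sum((1.0 / s) * math.log(math.exp(logits[t]) / z) for t in bag)

logits = [0.2, -1.0, 0.7, 0.1]   # toy logits over a 4-token vocab
bag = [0, 2]                      # target bag, s = 2

mean_ce = sum(cross_entropy(logits, t) for t in bag) / len(bag)
assert abs(mce(logits, bag) - mean_ce) < 1e-12  # identical by linearity
```

This equivalence is why no new kernel is needed: the fused CE kernel is simply called once per bag position and the results averaged.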

Phase 2 — Recovery: After the superposition phase, training resumes from the saved checkpoint with standard next-token prediction for the remaining 1 - r fraction of steps. The TST code is fully removed at this boundary to avoid any experimental contamination. A transient loss spike occurs at the transition, typically between 1 and 2 nats, which resolves within a few thousand steps. After that, the recovered model crosses below the equal-FLOPs baseline and stays there.

The model produced at the end of Phase 2 is architecturally identical to one produced by conventional pre-training, with the same next-token prediction inference behavior.

What the Experiments Show

TST was validated at four scales: 270M and 600M dense (SmolLM2 shapes adapted to the Llama3 modeling code, with the Llama3-8B tokenizer and untied input/output embeddings, which makes the 270M model equal in size to SmolLM2-135M and the 600M to SmolLM2-360M), 3B dense (SmolLM3 shape), and a 10B-A1B MoE in the Qwen3 family. Training used the DCLM dataset for the smaller runs and a 50/50 mixture of DCLM and FineWeb-Edu for the MoE run. All runs used AdamW with the Warmup-Stable-Decay learning rate schedule and were run in TorchTitan under FSDP parallelism, on 64 NVIDIA B200 GPUs for the larger models and eight B200 GPUs for the smaller ones.

At the 3B scale with bag size s = 6 and step ratio r = 0.3, TST at 20,000 steps reaches a final loss of 2.676, nearly matching a 36,000-step baseline at 2.677, while using 247 B200-GPU-hours versus 443. The 20k-step TST run scores 62.4 on HellaSwag and 66.3 on ARC-Easy, versus 62.3 and 65.9 for the 36k baseline.

At the 10B-A1B MoE scale with s = 16 and r ≈ 0.25, the TST run processes 2T data tokens and achieves a final loss of 2.236, below the baseline's 2.252 after 1.05T tokens, while beating it on all four reported benchmarks: HellaSwag (71.2 vs. 70.1), ARC-Easy (74.2 vs. 73.8), ARC-Challenge (47.3 vs. 46.3), and MMLU (39.0 vs. 37.4).

The research team presents three comparison views against the baseline: equal-FLOPs, equal-loss, and equal-data. Under equal-FLOPs and equal-loss conditions, TST consistently wins. Under equal total token consumption, the baseline wins, because TST's effective compute budget per data token is smaller. This is an important boundary condition that determines where TST applies.

Two Distinct Mechanisms

An ablation study isolates the input-side and output-side components. Both independently outperform the baseline; combining them produces further improvement without signs of interference. The authors interpret this as evidence that TST is two orthogonal mechanisms rather than a single trick.

The output-side mechanism, next-bag-of-tokens prediction, is conceptually related to multi-token prediction (MTP). Unlike MTP, which adds k independent prediction heads and extra parameters, TST keeps a single output head and replaces only the target. This makes it the least expensive member of a growing class of future-signal auxiliary objectives. Unlike MTP, it shows consistent gains across all tested scales, including small models where MTP has been shown to degrade performance.

The input-side mechanism has no direct analog in the existing pre-training literature. The research team offers two plausible explanations: it may implicitly regularize the embedding geometry (since many random s-grams of tokens must remain linearly separable once averaged), or it may act as a form of pre-pre-training, exposing the model to a coarser version of the true data before fine-resolution language modeling begins.

A targeted ablation directly tests what happens when representation continuity is broken. The research team runs a 3B TST experiment where the input embedding and output LM head are randomly re-initialized at the start of Phase 2. The result: final loss jumps to 2.938, worse than both the TST run (2.676) and the standard baseline (2.808). The Phase 1 TST steps contributed nothing to the final model. This confirms that shared representations across both phases aren't incidental to TST's success: they are what makes it work.

Marktechpost’s Visual Explainer

Token Superposition Training — Practical Guide
arXiv 2605.06546

01 / Overview

What Is Token Superposition Training?

Token Superposition Training (TST) is a two-phase pre-training method from Nous Research that increases token throughput per FLOP without changing the model architecture, optimizer, tokenizer, parallelism, or training data.

The core idea: instead of feeding one token at a time, average s contiguous token embeddings into one "s-token," train on that for the first r fraction of steps, then switch back to standard next-token prediction. The final model is architecturally identical to one trained normally.
  • Phase 1 (Superposition) — model reads bags of s tokens, predicts the next bag
  • Phase 2 (Recovery) — standard next-token prediction resumes from the checkpoint
  • Inference — entirely unchanged; no new heads, no new parameters
  • Validated at 270M, 600M, 3B dense and 10B-A1B MoE
TST gains compute efficiency at the cost of higher data consumption. Best suited to compute-bound pre-training, not data-bound.

02 / Phase 1

Phase 1 — The Superposition Phase

For the first r fraction of total training steps, the input sequence of length L is split into non-overlapping bags of s contiguous tokens. Their embeddings are averaged into a single latent s-token. The transformer processes a sequence of length L/s, but each position corresponds to s real tokens, so throughput is higher at the same FLOPs.

Equal-FLOPs trick: to keep each step equal-FLOPs to baseline, the data sequence length is increased by a factor of s, not the batch size. Every TST step costs the same compute as a standard step.

On the output side, the loss target shifts from a single next token to the next bag of s tokens. The multi-hot cross-entropy (MCE) loss assigns equal probability mass 1/s to each token in the target bag:

# L_MCE = mean of s standard CE terms
loss = 0.0
for i in range(superposition_bag_size):
    target = labels[..., i].flatten(0, 1)
    loss += torch.nn.functional.cross_entropy(pred, target)
loss = loss / superposition_bag_size

No new kernel needed; this reuses the existing fused CE kernel in your pre-training library.

03 / Phase 2

Phase 2 — The Recovery Phase

After r × total_steps of superposition training, resume from the checkpoint with the TST code fully removed. Standard next-token prediction runs for the remaining (1 - r) × total_steps.
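The hard switch amounts to a single condition on the step counter. A sketch with illustrative names (`superposition_ratio`, `bag_size`, and `run_training` are not from the paper's code):

```python
# Hypothetical driver showing the hard phase switch. Phase 1 folds
# inputs into bags and uses the MCE loss; Phase 2 is an unmodified
# next-token-prediction step.
def run_training(total_steps, superposition_ratio, bag_size):
    boundary = int(superposition_ratio * total_steps)
    schedule = []
    for step in range(total_steps):
        if step < boundary:
            # Phase 1: bag folding + MCE loss, with data sequence
            # length scaled by `bag_size` to stay equal-FLOPs.
            schedule.append(("superposition", bag_size))
        else:
            # Phase 2: standard next-token prediction, TST code removed.
            schedule.append(("standard", 1))
    return schedule

schedule = run_training(total_steps=10, superposition_ratio=0.3, bag_size=6)
print(schedule[2], schedule[3])  # ('superposition', 6) ('standard', 1)
```

In practice the switch happens by reloading the Phase 1 checkpoint into an unmodified training script, not by branching inside one loop; the condition above just makes the boundary explicit.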

What happens at the switch: a loss spike of 1–2 nats occurs at the phase boundary. It resolves within a few thousand steps. After that, the model crosses below the equal-FLOPs baseline and stays there.
  • Remove the TST code fully — don't keep it as an auxiliary loss during Phase 2
  • Do not re-initialize the input embedding or LM head at the boundary
  • Shared representations across both phases are what make TST work
Re-initializing the embedding or LM head at the phase boundary completely breaks TST. In a 3B ablation, this raised final loss from 2.676 to 2.938, worse than the 2.808 baseline. The Phase 1 steps contributed nothing.

04 / Implementation

PyTorch Implementation

Three changes to the standard training loop: input folding, averaged embedding lookup, and MCE loss.

# 1. Input folding (inside train loop)
if superposition_bag_size is not None and superposition_bag_size > 1:
    bs, seq = inputs.shape
    inputs = inputs.reshape(
        bs, seq // superposition_bag_size, superposition_bag_size
    )
# 2. Averaged embedding lookup (inside model forward)
if len(tokens.shape) == 3:
    bs, sp_seq, superposition_bag_size = tokens.shape
    h_dtype = self.tok_embeddings.weight.dtype  # training dtype
    h = self.tok_embeddings(tokens[..., 0]).float()
    for i in range(1, superposition_bag_size):
        h = h + self.tok_embeddings(tokens[..., i]).float()
    h = (h / superposition_bag_size).to(h_dtype)
else:
    h = self.tok_embeddings(tokens)
Note: sum in float32 for numerical precision, then cast back to the training dtype. The embedding layer is the only forward-pass change.
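The third change, the MCE loss, needs the labels folded the same way as the inputs, so that each latent position's target is the next bag of s tokens. A pure-Python sketch with an illustrative helper name (`fold_labels` is not from the paper's code):

```python
# Sketch of label folding for the MCE target: latent position j, which
# covers source tokens [j*s, (j+1)*s), predicts the next bag of s tokens.
def fold_labels(token_ids, s):
    n_bags = len(token_ids) // s
    bags = [token_ids[j * s:(j + 1) * s] for j in range(n_bags)]
    # position j's target is bag j+1; the last position has no target
    return list(zip(bags[:-1], bags[1:]))

pairs = fold_labels([10, 11, 12, 13, 14, 15], s=2)
for inp_bag, tgt_bag in pairs:
    print(inp_bag, "->", tgt_bag)
# [10, 11] -> [12, 13]
# [12, 13] -> [14, 15]
```

The folded targets are exactly what the `labels[..., i]` slices in the MCE loss snippet above iterate over.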

05 / Hyperparameters

Tuning Bag Size s and Step Ratio r

Two hyperparameters control TST. Both have well-defined practical ranges validated across model scales.

Step Ratio r
0.2 – 0.4
Fraction of total steps run in superposition mode. Robust across all tested scales. Below 0.2, the throughput gain is too small. Above 0.5, Phase 2 can't fully recover.
Bag Size s
3 – 16
U-shaped optimum that shifts with model size. Start in the flat basin; overshooting makes the bag target too lossy to recover from.

Model Size     Recommended s   Recommended r
270M           3 – 8           0.2 – 0.4
600M           6 – 10          0.2 – 0.4
3B             6 (tested)      0.3 (tested)
10B-A1B MoE    16 (tested)     ~0.25 (tested)
Large bag sizes (s ≥ 8): switch from uniform MCE loss weighting to power-law weighting (1/i per position). Motivated by mutual information between token pairs decaying as a power law with distance (fitted exponent k ≈ −1.25 on DCLM).
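A sketch of that weighting scheme; normalizing the weights to sum to 1, by analogy with the uniform 1/s scheme, is our assumption, not something the source specifies:

```python
# Power-law MCE weights for a bag of size s: position i in the bag gets
# weight proportional to 1/i, so nearer tokens contribute more to the
# loss. Normalization to unit mass is our assumption.
def power_law_weights(s):
    raw = [1.0 / i for i in range(1, s + 1)]
    total = sum(raw)
    return [w / total for w in raw]

w = power_law_weights(8)
assert abs(sum(w) - 1.0) < 1e-12
assert all(a > b for a, b in zip(w, w[1:]))  # nearer tokens weigh more
```

With uniform weighting every term would get 1/s instead; the 1/i scheme concentrates mass on the first positions of the bag, matching the power-law decay of token-pair mutual information.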

06 / Negative Results

What Doesn’t Work

The paper documents several variants that were tested and failed. Save yourself the compute.

  • Positional encodings before averaging — adding RoPE or sinusoidal encodings to tokens before the mean consistently hurt performance. Within-bag permutation invariance appears to be a feature, not a bug.
  • RoPE rescaling at the phase transition — accelerated early Phase 2 recovery but sometimes raised final loss. Leave RoPE unchanged across the boundary.
  • s independent heads — replacing the single MCE head with s separate heads predicting s positions gave no consistent gain, at higher parameter cost and implementation complexity.
  • Binary cross-entropy / hinge loss — both significantly underperformed the MCE formulation and even fell below the baseline.
  • Retaining the TST head in Phase 2 — not yet benchmarked but identified as future work; don't assume it helps.
Bottom line: the simplest version works best — mean embeddings in, mean CE loss out, a hard switch at the phase boundary, no extra parameters.

07 / Results

Key Results & When to Use TST

At equal wall-clock — same compute, better loss:

Scale          B200-hrs   TST Loss   Baseline Loss
3B dense       247        2.676      2.808
10B-A1B MoE    4,768      2.236      2.252 (@ 12,311 hrs)

At equal final loss — wall-clock saved:

Scale          TST (B200-hrs)   Baseline (B200-hrs)   Speedup
3B dense       247              443                   ~1.8×
10B-A1B MoE    4,768            12,311                ~2.5×
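The speedup column follows directly from the GPU-hour ratios:

```python
# Wall-clock speedup = baseline GPU-hours / TST GPU-hours (equal-loss view).
runs = {"3B dense": (247, 443), "10B-A1B MoE": (4768, 12311)}
for scale, (tst_hrs, base_hrs) in runs.items():
    print(f"{scale}: {base_hrs / tst_hrs:.2f}x")
# 3B dense: 1.79x
# 10B-A1B MoE: 2.58x
```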
Use TST when
✓ You are compute-bound
✓ You have ample data
✓ You want lower loss at the same FLOPs
✓ You need the same inference model
Avoid TST when
✕ Data is the bottleneck (TST uses s× more tokens in Phase 1)
✕ You compare at equal token consumption
✕ Under equal-data conditions, the baseline wins

Paper: arXiv 2605.06546  •  nousresearch.com/token-superposition

Key Takeaways

  • Nous Research's Token Superposition Training (TST) cuts LLM pre-training time by up to 2.5x at matched FLOPs — no architecture, tokenizer, or optimizer changes required.
  • Phase 1 averages contiguous token embeddings into bags and predicts the next bag via multi-hot cross-entropy; Phase 2 reverts to standard next-token prediction from the same checkpoint.
  • Validated at 270M, 600M, 3B dense, and 10B-A1B MoE — TST beats the baseline on loss and downstream evals (HellaSwag, ARC, MMLU) across all scales.
  • Optimal hyperparameters: bag size s ∈ [3–8] for smaller models, step ratio r ∈ [0.2, 0.4]; shared embeddings across both phases are essential — re-initializing them makes TST worse than the baseline.
  • Trade-off: TST consumes more raw data tokens per compute budget — best suited to compute-bound training; the output-only variant is the option for data-bound settings.
