
MiniMax Releases MiniMax M2: A Mini Open Model Built for Max Coding and Agentic Workflows at 8% Claude Sonnet Price and ~2x Faster

Can an open-source MoE really power agentic coding workflows at a fraction of flagship-model costs while sustaining long-horizon tool use across MCP, shell, browser, retrieval, and code? The MiniMax team has just released MiniMax-M2, a mixture-of-experts (MoE) model optimized for coding and agent workflows. The weights are published on Hugging Face under the MIT license, and the model is positioned for end-to-end tool use, multi-file editing, and long-horizon plans. It lists 229B total parameters with about 10B active per token, which keeps memory and latency in check across agent loops.

https://github.com/MiniMax-AI/MiniMax-M2

Architecture, and why activation size matters

MiniMax-M2 is a compact MoE that routes to about 10B active parameters per token. The smaller activation reduces memory pressure and tail latency in plan, act, and verify loops, and allows more concurrent runs in CI, browse, and retrieval chains. This is the efficiency budget that enables the speed and cost claims relative to dense models of comparable quality.
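To make the activation-size argument concrete, here is a rough back-of-envelope sketch (our arithmetic, not vendor numbers): decode compute per token scales with roughly twice the active parameter count, so a ~10B-active MoE does a small fraction of the per-token work of a dense model of the same total size, even though all 229B weights must still be resident in memory.

```python
# Back-of-envelope sketch (assumptions, not vendor numbers): per-token decode
# FLOPs for a transformer are roughly 2 * active_parameters, so a sparse MoE
# that activates ~10B of its 229B parameters does far less work per token
# than a hypothetical dense model of the same total size.

ACTIVE_PARAMS = 10e9    # ~10B parameters activated per token (MoE)
DENSE_PARAMS = 229e9    # hypothetical dense model at the same total size

flops_moe = 2 * ACTIVE_PARAMS    # ~2.0e10 FLOPs per generated token
flops_dense = 2 * DENSE_PARAMS   # ~4.6e11 FLOPs per generated token

print(f"MoE per-token FLOPs:   {flops_moe:.2e}")
print(f"Dense per-token FLOPs: {flops_dense:.2e}")
print(f"Ratio: ~{flops_dense / flops_moe:.0f}x less compute per token")
```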

MiniMax-M2 is an interleaved-thinking model. The research team wraps internal reasoning in <think>...</think> blocks and instructs users to keep these blocks in the conversation history across turns. Removing these segments harms quality in multi-step tasks and tool chains. This requirement is explicit on the model page on Hugging Face.
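In practice this means the client must echo the model's full reply, <think> segments included, back into the message list on every turn. A minimal sketch, assuming an OpenAI-compatible chat endpoint; the base URL here is illustrative, so check MiniMax's API docs for the real values:

```python
# Minimal sketch, assuming an OpenAI-compatible endpoint for MiniMax-M2.
# The key point from the model card: assistant replies contain <think>...</think>
# segments, and those segments must be passed back verbatim in later turns.
from openai import OpenAI

client = OpenAI(base_url="https://api.minimax.io/v1", api_key="YOUR_KEY")  # URL is illustrative

messages = [{"role": "user", "content": "Refactor utils.py to remove the global cache."}]

reply = client.chat.completions.create(model="MiniMax-M2", messages=messages)
assistant_text = reply.choices[0].message.content  # includes <think>...</think>

# Do NOT strip the <think> block before appending; the model card warns that
# removing it degrades multi-step and tool-use performance.
messages.append({"role": "assistant", "content": assistant_text})
messages.append({"role": "user", "content": "Now add unit tests for the change."})

followup = client.chat.completions.create(model="MiniMax-M2", messages=messages)
print(followup.choices[0].message.content)
```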

Benchmarks that focus on coding and agents

The MiniMax team reports a set of agent and code evaluations that are closer to developer workflows than static QA. On Terminal-Bench, the table shows 46.3. On Multi-SWE-Bench, it shows 36.2. On BrowseComp, it shows 44.0. SWE-Bench Verified is listed at 69.4 with the scaffold detail: OpenHands with 128k context and 100 steps.

https://github.com/MiniMax-AI/MiniMax-M2

MiniMax’s official announcement stresses 8% of Claude Sonnet pricing and near 2x speed, plus a free-access window. The same note provides the specific token prices and the trial deadline.

Comparison: M1 vs M2

| Aspect | MiniMax M1 | MiniMax M2 |
| --- | --- | --- |
| Total parameters | 456B total | 229B in model card metadata; model card text says 230B total |
| Active parameters per token | 45.9B active | 10B active |
| Core design | Hybrid Mixture of Experts with Lightning Attention | Sparse Mixture of Experts targeting coding and agent workflows |
| Thinking format | Thinking budget variants 40k and 80k in RL training; no think-tag protocol required | Interleaved thinking with <think>...</think> segments that must be preserved across turns |
| Benchmarks highlighted | AIME, LiveCodeBench, SWE-bench Verified, TAU-bench, long-context MRCR, MMLU-Pro | Terminal-Bench, Multi-SWE-Bench, SWE-bench Verified, BrowseComp, GAIA (text only), Artificial Analysis intelligence suite |
| Inference defaults | temperature 1.0, top-p 0.95 | model card shows temperature 1.0, top-p 0.95, top-k 40; launch page shows top-k 20 (see the sampling sketch after this table) |
| Serving guidance | vLLM recommended; Transformers path also documented | vLLM and SGLang recommended; tool-calling guide provided |
| Primary focus | Long-context reasoning, efficient scaling of test-time compute, CISPO reinforcement learning | Agent- and code-native workflows across shell, browser, retrieval, and code runners |
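A hedged sketch of the documented M2 sampling defaults with vLLM. The repo id MiniMaxAI/MiniMax-M2 matches the Hugging Face release, but the top-k discrepancy noted in the table (card says 40, launch page says 20) is worth verifying against the docs before serving:

```python
# Sketch of the documented sampling defaults with vLLM. The top_k value is
# taken from the model card; the launch page lists 20 instead, so confirm
# against the official docs before relying on either.
from vllm import LLM, SamplingParams

llm = LLM(model="MiniMaxAI/MiniMax-M2", trust_remote_code=True)

params = SamplingParams(
    temperature=1.0,  # model card default
    top_p=0.95,       # model card default
    top_k=40,         # card value; launch page lists 20
)

outputs = llm.generate(["Write a shell one-liner to count TODOs in a repo."], params)
print(outputs[0].outputs[0].text)
```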

Key Takeaways

  1. M2 ships as open weights on Hugging Face under MIT, with safetensors in F32, BF16, and FP8 F8_E4M3 (a loading sketch follows this list).
  2. The model is a compact MoE with 229B total parameters and ~10B active per token, which the model card ties to lower memory use and steadier tail latency in the plan, act, verify loops typical of agents.
  3. Outputs wrap internal reasoning in <think>...</think>, and the model card explicitly instructs retaining these segments in conversation history, warning that removal degrades multi-step and tool-use performance.
  4. Reported results cover Terminal-Bench, (Multi-)SWE-Bench, BrowseComp, and others, with scaffold notes for reproducibility, and day-0 serving is documented for SGLang and vLLM with concrete deploy guides.
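For completeness, a minimal load of the BF16 safetensors variant via Hugging Face transformers, assuming the repo id MiniMaxAI/MiniMax-M2; the FP8 F8_E4M3 variant would need compatible hardware and kernels, so BF16 is shown here:

```python
# Illustrative load of the open weights via Hugging Face transformers.
# A 229B-parameter model needs multiple GPUs; device_map="auto" (via the
# accelerate package) shards the weights across whatever is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "MiniMaxAI/MiniMax-M2"
tok = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # BF16 safetensors variant
    device_map="auto",           # shard across available GPUs
    trust_remote_code=True,
)

# The release ships a chat template; apply it rather than hand-rolling prompts.
prompt = tok.apply_chat_template(
    [{"role": "user", "content": "Plan a three-step fix for a flaky CI test."}],
    tokenize=False, add_generation_prompt=True,
)
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tok.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```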

Editorial Notes

MiniMax M2 lands with open weights under MIT, a mixture-of-experts design with 229B total parameters and about 10B activated per token, which targets agent loops and coding tasks with lower memory and steadier latency. It ships on Hugging Face in safetensors with FP32, BF16, and FP8 formats, and provides deployment notes plus a chat template. The API documents Anthropic-compatible endpoints and lists pricing with a limited free window for evaluation. vLLM and SGLang recipes are available for local serving and benchmarking. Overall, MiniMax M2 is a very solid open release.
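For readers who want to try the Anthropic-compatible path mentioned above, here is a short sketch using the anthropic Python SDK; the base URL and model name are assumptions to be verified against MiniMax's API doc:

```python
# Sketch of the Anthropic-compatible API path; base_url and model name are
# illustrative and should be checked against MiniMax's API documentation.
import anthropic

client = anthropic.Anthropic(
    base_url="https://api.minimax.io/anthropic",  # illustrative endpoint
    api_key="YOUR_MINIMAX_KEY",
)

msg = client.messages.create(
    model="MiniMax-M2",
    max_tokens=1024,
    messages=[{"role": "user", "content": "List the files you would touch to add OAuth."}],
)
print(msg.content[0].text)
```

An Anthropic-compatible surface means existing agent scaffolds built on that SDK can be pointed at M2 by swapping the base URL and key, which is presumably the intent behind offering it.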


Check out the API Doc, Weights and Repo.
