MiniMax-M2.7
Self-evolving agentic coding model for complex tool use, software delivery, and long-horizon productivity workflows
MiniMax-M2.7 is the high-efficiency choice for AI engineers and developer teams who need to run coding-heavy agents, long-horizon tool workflows, and cost-sensitive automation at scale.
Why we love it
- Excellent price-to-performance for coding agents
- Strong tool use and long-horizon execution
- Handles Office edits beyond pure code tasks
- Prompt caching lowers repeated workflow cost
Things to know
- Text-only model with no multimodal input
- Best results still depend on strong scaffolding
- Enterprise governance details remain limited
About
Executive Summary: MiniMax-M2.7 is an agentic large language model built for developers, AI engineers, and automation teams that need strong coding, tool use, and long-horizon task execution. Its core value is combining frontier-grade software engineering ability with unusually low token cost, making advanced agent workflows cheaper to deploy at scale.
On March 18, 2026, MiniMax positioned M2.7 as a self-evolving model that helped improve its own harness through more than 100 iterative optimization cycles, reportedly driving a 30% gain on an internal programming evaluation. On official benchmarks, it scores 56.22% on SWE-Pro, 55.6% on VIBE-Pro, 57.0% on Terminal Bench 2, and a 1495 Elo on GDPval-AA. It also maintains a 97% skill adherence rate across 40 complex skills, each exceeding 2,000 tokens. MiniMax-M2.7 offers a Paid Only plan, with paid tiers starting at $0.30 per 1M input tokens. It is less expensive than average for this category.
The automation angle is what makes it matter. M2.7 can construct agent harnesses, coordinate agent teams, search tools dynamically, and handle multi-round editing across code and office documents, which makes it useful for CI-style debugging, incident response, workflow generation, and high-fidelity document revision. For teams building AI operators rather than simple chatbots, this is closer to an execution model than a generic assistant.
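The harness-and-tool-search pattern described above can be sketched as a minimal dispatch loop. Everything below is hypothetical scaffolding for illustration, not MiniMax's actual harness API: the tool registry, tool names, and the `run_agent` routine are all invented, and a stub planner stands in for the model so the loop is runnable.

```python
# Minimal sketch of a tool-routing agent loop (hypothetical scaffolding,
# NOT MiniMax's harness API). In a real harness the model chooses the
# next tool; here a fixed stub planner stands in so the loop executes.

from typing import Callable, Optional

# Hypothetical tool registry: name -> callable, looked up dynamically
TOOLS: dict[str, Callable[[str], str]] = {
    "search_code": lambda q: f"matches for {q!r}",
    "run_tests": lambda q: "2 passed, 0 failed",
}

def stub_planner(task: str, step: int) -> Optional[tuple[str, str]]:
    """Stand-in for the model: returns (tool_name, argument), or None when done."""
    plan = [("search_code", task), ("run_tests", task)]
    return plan[step] if step < len(plan) else None

def run_agent(task: str, max_steps: int = 8) -> list[str]:
    """Route the task through tools until the planner stops or steps run out."""
    transcript: list[str] = []
    for step in range(max_steps):
        decision = stub_planner(task, step)
        if decision is None:
            break
        tool, arg = decision
        result = TOOLS[tool](arg)  # dynamic lookup by tool name
        transcript.append(f"{tool} -> {result}")
    return transcript

print(run_agent("fix flaky login test"))
```

The step cap mirrors what long-horizon harnesses do in practice: bound the loop so a stuck agent fails fast instead of burning tokens.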
Key Features
- ✓ Build agent harnesses to automate complex multi-step productivity tasks
- ✓ Generate and revise code across full project delivery and debugging workflows
- ✓ Search tools dynamically to reduce manual orchestration in agent pipelines
- ✓ Coordinate agent teams for longer task chains and structured execution
- ✓ Edit Excel, PowerPoint, and Word files through multi-round high-fidelity revisions
- ✓ Scale coding workflows with low token pricing and prompt caching support
Product Comparison
| Dimension | MiniMax-M2.7 | Claude Opus 4.6 |
|---|---|---|
| Core use case | Low-cost agentic coding and execution model | Premium general reasoning and coding assistant |
| Price-to-performance | $0.30 / $1.20 per 1M input / output tokens | Much higher premium pricing |
| Agent workflow fit | Strong with harnesses, skills, and tool loops | Strong, but more expensive to scale heavily |
| Modality limits | Text-only model | Broader multimodal capability |
| Best deployment style | Scaffolded coding agents and workflow automation | High-end assistant and mixed enterprise usage |
| ROI profile | Higher ROI for cost-sensitive automation at scale | Higher ROI when polish matters more than spend |
Frequently Asked Questions
How does MiniMax-M2.7 compare with Claude Opus 4.6?
The core difference is value versus polish. While Claude Opus 4.6 is stronger as a premium all-around model with multimodal support, MiniMax-M2.7 has a clear advantage in coding-agent cost efficiency at $0.30 per 1M input tokens and $1.20 per 1M output tokens.
What are the main criticisms of MiniMax-M2.7?
The main concern is deployment fit, not benchmark strength. Community discussion points out that M2.7 shines with scaffolds like OpenClaw or structured agent loops but is less compelling as a plain chat model. The workaround is to pair it with strong planning, skills, and routing.
Does MiniMax-M2.7 offer a free tier?
No official free API tier is publicly documented for M2.7. Official pay-as-you-go pricing starts at $0.30 per 1M input tokens and $1.20 per 1M output tokens, with a high-speed variant at $0.60 and $2.40.
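Those quoted rates make per-run cost easy to estimate. The sketch below uses the listed $0.30 / $1.20 rates; the 90% cached-input discount is purely an assumption for illustration, since no official prompt-caching rate is quoted here.

```python
# Rough per-run cost estimator for the quoted pay-as-you-go rates.
# The cached-input discount is an ASSUMPTION for illustration only;
# the listing states no official prompt-caching rate.

INPUT_PER_M = 0.30    # USD per 1M input tokens (quoted)
OUTPUT_PER_M = 1.20   # USD per 1M output tokens (quoted)
ASSUMED_CACHE_DISCOUNT = 0.90  # hypothetical 90% off cached input tokens

def run_cost(input_tokens: int, output_tokens: int,
             cached_fraction: float = 0.0) -> float:
    """Estimated USD cost of one agent run."""
    cached = input_tokens * cached_fraction
    fresh = input_tokens - cached
    input_cost = (fresh * INPUT_PER_M
                  + cached * INPUT_PER_M * (1 - ASSUMED_CACHE_DISCOUNT)) / 1_000_000
    output_cost = output_tokens * OUTPUT_PER_M / 1_000_000
    return input_cost + output_cost

# A long-horizon run: 400k input tokens, 30k output tokens, no caching
print(f"{run_cost(400_000, 30_000):.4f}")  # 0.1560
# Same run with 80% of the prompt served from cache (assumed discount)
print(f"{run_cost(400_000, 30_000, cached_fraction=0.8):.4f}")
```

Under these assumptions a 400k-token run costs about 16 cents uncached, which is the kind of arithmetic behind the "cheap long-horizon execution" pitch.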
How does MiniMax-M2.7 fit into existing agent stacks?
It fits best as an execution model inside agent frameworks. M2.7 works well with tool routing, scaffolded loops, prompt caching, and products like MiniMax Agent, OpenClaw, Ollama cloud, or custom coding pipelines that need cheap long-context execution.
Is MiniMax-M2.7 ready for enterprise use?
Partly, but security buyers should verify details first. Public materials emphasize capability, pricing, and agent execution more than enterprise governance. Teams handling regulated code or documents should run a policy review before broad rollout.
Can MiniMax-M2.7 edit Office documents?
Yes. Officially, M2.7 is improved for Excel, PowerPoint, and Word, supports multi-round high-fidelity editing, and keeps 97% skill adherence across 40 complex skills. That makes it viable for report revision, slide editing, and structured document workflows.