Gate News, March 25 — Cursor released the Composer 2 technical report, unveiling its complete training scheme for the first time. The base model, Kimi K2.5, uses a MoE architecture with 1.04 trillion total parameters and 32 billion activated parameters. Training proceeds in two stages: first, continued pretraining on code data to strengthen coding knowledge; then, large-scale reinforcement learning to improve end-to-end coding ability. The RL environment fully simulates real Cursor usage scenarios, including file editing, terminal operations, code search, and tool calls, so the model learns under conditions close to production. The report also details the construction of Cursor’s self-developed benchmark, CursorBench: tasks are collected from the engineering team’s actual coding sessions rather than written artificially. Kimi K2.5 scores 36.0 on this benchmark; after the two-stage training, Composer 2 reaches 61.3, a roughly 70% relative improvement. Cursor states that Composer 2’s inference cost is significantly lower than that of some frontier large-model APIs, achieving a Pareto-optimal trade-off between accuracy and cost.
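The reported ~70% figure is the relative gain of Composer 2 over its base model on CursorBench; the two scores below are taken from the report as quoted above, and the calculation is a straightforward sanity check, not part of the report itself:

```python
# Verify the reported relative improvement on CursorBench.
base_score = 36.0       # Kimi K2.5 on CursorBench (from the report)
composer2_score = 61.3  # Composer 2 after two-stage training (from the report)

relative_gain = (composer2_score - base_score) / base_score
print(f"Relative improvement: {relative_gain:.1%}")  # → 70.3%
```

So the "70% improvement" is a relative gain over the base score, not an absolute 70-point jump.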