Diffusion-native next-edit LLM for hosted edit prediction, code editing, and high-throughput classification by Inception Labs.
Mercury Edit (part of the Mercury-2 / Mercury Edit family) is a diffusion-native next-edit prediction model from Inception Labs. It generates edit suggestions, multi-line and multi-edit changes, and cursor-aware completions for code and text. Because it is diffusion-based, it generates tokens in parallel rather than one at a time, enabling very high throughput and low-latency edit prediction while offering improved controllability for multimodal tasks compared with autoregressive models.

Mercury is offered as a hosted API (the Mercury API, mercuryapi provider) that integrates with editors, CLIs, and agent pipelines. Common uses include inline code edits, autocomplete-style refactors, classification, and structured-output generation (e.g., SQL). A notable limitation of diffusion LLMs like Mercury-2 is limited support for iterative function-calling protocols (pause → tool call → continue), so they are best suited to single-shot structured output, classification, and edit-prediction workflows.
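The single-shot workflow described above can be sketched as a minimal request builder. This is an illustrative sketch, not official usage: the endpoint URL and model id below are assumptions (Mercury's hosted API is not specified in detail here), and the payload simply follows a common OpenAI-style chat-completions shape. Because iterative tool-call loops are not well supported, the entire file and the edit instruction are packed into one request.

```python
import json
import urllib.request

# Hypothetical endpoint -- check the provider's docs for the real URL.
API_URL = "https://api.example-inception-host.ai/v1/chat/completions"


def build_edit_request(model: str, file_text: str, instruction: str, api_key: str):
    """Build a single-shot edit-prediction request.

    Diffusion LLMs like Mercury work best single-shot, so the whole file
    and the edit instruction go in one user message rather than a
    pause -> tool call -> continue loop.
    """
    payload = {
        "model": model,  # hypothetical model id
        "messages": [
            {"role": "system", "content": "Return only the edited file."},
            {"role": "user", "content": f"{instruction}\n\n{file_text}"},
        ],
    }
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    return req, payload


req, payload = build_edit_request(
    model="mercury-edit",  # assumed name for illustration
    file_text="def add(a, b):\n    return a - b\n",
    instruction="Fix the bug in add().",
    api_key="YOUR_KEY",
)
```

Sending the request (e.g., via urllib.request.urlopen) would return one complete edited file in a single response, which fits the single-shot usage pattern noted above.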
