Mistral: Ministral 3B

mistralai/ministral-3b

Created Oct 17, 2024 · 131,072-token context
$0.04/M input tokens · $0.04/M output tokens

Ministral 3B is a 3B parameter model optimized for on-device and edge computing. It excels in knowledge, commonsense reasoning, and function-calling, outperforming larger models like Mistral 7B on most benchmarks. Supporting up to 128k context length, it’s ideal for orchestrating agentic workflows and specialist tasks with efficient inference.

Providers for Ministral 3B

OpenRouter routes requests to the best providers that are able to handle your prompt size and parameters, with fallbacks to maximize uptime.
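Requests to Ministral 3B go through OpenRouter's OpenAI-compatible chat completions endpoint. A minimal sketch, using only the Python standard library and assuming an API key in the hypothetical `OPENROUTER_API_KEY` environment variable:

```python
import json
import os
import urllib.request

# OpenRouter's OpenAI-compatible chat completions endpoint.
API_URL = "https://openrouter.ai/api/v1/chat/completions"


def build_request(prompt: str) -> dict:
    """Build the JSON payload targeting Ministral 3B."""
    return {
        "model": "mistralai/ministral-3b",
        "messages": [{"role": "user", "content": prompt}],
    }


def complete(prompt: str) -> str:
    """Send the request and return the model's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={
            # Assumes the key is exported as OPENROUTER_API_KEY.
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the endpoint is OpenAI-compatible, the same payload works with any OpenAI-style client by pointing its base URL at `https://openrouter.ai/api/v1`.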

Context: 131K tokens
Max Output: 131K tokens
Input: $0.04/M tokens
Output: $0.04/M tokens
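At these rates, the cost of a request is simple arithmetic over token counts. A small sketch (the function name is illustrative, not part of any OpenRouter API):

```python
INPUT_PRICE = 0.04   # USD per million input tokens
OUTPUT_PRICE = 0.04  # USD per million output tokens


def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed Ministral 3B rates."""
    return (input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE) / 1_000_000


# For example, a full 131,072-token context plus a 1,000-token reply
# costs request_cost(131_072, 1_000), i.e. roughly half a cent.
```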
