Stable Code 3B
Stability AI • Framework: Unknown

Stable Code 3B is a compact 3-billion-parameter large language model developed by Stability AI for code generation, completion, and reasoning tasks. Trained on over 1.3 trillion tokens from diverse programming and text datasets, it supports more than 18 programming languages. The model offers strong HumanEval performance and efficient inference, making it suitable for IDE integration, education, and developer tooling. Stability AI also provides an instruction-tuned version (Stable Code Instruct 3B) optimized for conversational code assistance.
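Below is a minimal completion sketch using the Hugging Face transformers library. The checkpoint ID `stabilityai/stable-code-3b` and the generation settings are assumptions based on common usage, not official instructions; check the model card for the current loading recipe.

```python
# Minimal sketch: greedy code completion with Stable Code 3B.
# Assumptions: the Hugging Face ID "stabilityai/stable-code-3b";
# older transformers releases may also need trust_remote_code=True.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stable-code-3b"  # assumed checkpoint ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "def fibonacci(n):\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```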
Key Features
- Code completion
- Fill-in-the-middle (FIM; see the sketch below)
- Multi-language support

Parameter Count
3B
Dataset Used
The Stack, GitHub public repos
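Since fill-in-the-middle is listed as a key feature, an infill prompt is typically assembled from sentinel tokens around the missing span. The sketch below assumes StarCoder-style `<fim_prefix>`/`<fim_suffix>`/`<fim_middle>` sentinels; verify the real token names via `tokenizer.special_tokens_map` before relying on them.

```python
# Hedged fill-in-the-middle (FIM) sketch for Stable Code 3B.
# Sentinel token names are an assumption (StarCoder convention);
# confirm them against the checkpoint's tokenizer before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stable-code-3b"  # assumed checkpoint ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prefix = "def is_even(n):\n    "
suffix = "\n    return result\n"
fim_prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

inputs = tokenizer(fim_prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
# Decode only the newly generated "middle" span.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```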
Related AI Models
CodeGen2.5 7B
Salesforce
CodeGen2.5 7B is an open-source, 7-billion-parameter large language model created by Salesforce Research for program synthesis, code generation, and infill tasks. It supports multiple programming languages, including Python, Java, and JavaScript, and is trained on over 1.4 trillion code and text tokens. The model introduces improvements in infill sampling, context understanding, and multilingual code generation efficiency. Compared to larger predecessors, CodeGen2.5 7B delivers comparable performance while being optimized for resource-constrained environments.
StarCoder2
BigCode
StarCoder2 is a large-scale open-source AI model developed by BigCode for code generation and comprehension tasks. Built with PyTorch and licensed under Apache 2.0, it supports multiple programming languages and is optimized for both code completion and generation. The model is designed to aid developers by automating code writing, improving productivity, and enabling advanced programming assistance.
DeepSeek-Coder
DeepSeek AI
DeepSeek‑Coder is a series of open-source code language models developed by DeepSeek AI using PyTorch. The models are trained from scratch on 2 trillion tokens (87% code, 13% natural language), come in sizes from 1.3B to 33B parameters, and support a 16K context window. The series excels at project‑level code completion and infilling, and supports dozens of programming languages. Among open-source models, it consistently leads benchmarks such as HumanEval, MultiPL‑E, and MBPP.