AI & ML interests
Large language models
Recent Activity
Papers
Learnable Multipliers: Freeing the Scale of Language Model Matrix Layers
Falcon-H1R: Pushing the Reasoning Frontiers with a Hybrid Model for Efficient Test-Time Scaling
Articles
A series of extremely small, yet powerful language models redefining capabilities at small scale
- Falcon-H1-Tiny 📝 • Space • Generate text using extremely small yet powerful language models
- Learnable Multipliers: Freeing the Scale of Language Model Matrix Layers • Paper • 2601.04890
- tiiuae/Falcon-H1-Tiny-90M-Instruct • Text Generation • 91.1M params
- tiiuae/Falcon-H1-Tiny-90M-Instruct-GGUF • 91.1M params
Falcon-H1 Family of Hybrid-Head Language Models (Transformer-SSM), including 0.5B, 1.5B, 1.5B-Deep, 3B, 7B, and 34B (pretrained & instruction-tuned).
- Falcon H1 Playground 🦅 • Space • Chat with Falcon-H1 language models
- Falcon-H1: A Family of Hybrid-Head Language Models Redefining Efficiency and Performance • Paper • 2507.22448
- Learnable Multipliers: Freeing the Scale of Language Model Matrix Layers • Paper • 2601.04890
- tiiuae/Falcon-H1-0.5B-Base • Text Generation • 0.5B params
A series of powerful, universal, and fine-tunable small language models
Falcon3 family of Open Foundation Models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.
- The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only • Paper • 2306.01116
- tiiuae/falcon-refinedweb • Dataset • 968M rows
- tiiuae/falcon-rw-1b • Text Generation
- tiiuae/falcon-rw-7b • Text Generation • 8B params
7B models built on top of Falcon3-7B
Arabic benchmark datasets (https://arxiv.org/pdf/2507.15850)
This collection features the FalconMamba 7B base model, the instruction-tuned version, their 4-bit and GGUF variants, and the demo.
- Falcon Mamba Playground 🐍 • Space • Generate chat responses using the falcon-mamba-7b model
- Falcon Mamba: The First Competitive Attention-free 7B Language Model • Paper • 2410.05355
- tiiuae/falcon-mamba-7b • Text Generation
- tiiuae/falcon-mamba-7b-instruct • Text Generation • 7B params
Leveraging Contextual Web Data for Fine-tuning Vision Language Models (https://arxiv.org/abs/2502.10250)