Pre-trained models from MiniPLM: Knowledge Distillation for Pre-Training Language Models
AI & ML interests
Training efficient language models (MiniLLM, MiniPLM)
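The organization name points to knowledge distillation (KD) for training small language models. As a minimal illustrative sketch only, not the exact MiniLLM or MiniPLM training objective (see the respective papers), a generic token-level distillation loss in PyTorch could look like this; the function and argument names are assumptions:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits: torch.Tensor,
            teacher_logits: torch.Tensor,
            temperature: float = 1.0) -> torch.Tensor:
    """Forward KL(teacher || student) over the vocabulary, averaged over tokens.

    Generic illustration of the distillation idea; the actual MiniLLM/MiniPLM
    objectives differ from this plain forward-KL loss.
    """
    t = temperature
    teacher_logp = F.log_softmax(teacher_logits / t, dim=-1)
    student_logp = F.log_softmax(student_logits / t, dim=-1)
    # Sum over the vocab dimension: p_teacher * (log p_teacher - log p_student)
    kl = (teacher_logp.exp() * (teacher_logp - student_logp)).sum(dim=-1)
    return kl.mean() * t ** 2
```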
Models (50 total, 10 shown below)
MiniLLM/MiniLLM-gpt2-340M • Text Generation
MiniLLM/SFT-gpt2-120M • Text Generation • 0.1B params
MiniLLM/SFT-gpt2-760M • Text Generation • 0.8B params
MiniLLM/MiniPLM-Qwen-500M • Text Generation • 0.5B params
MiniLLM/MiniPLM-llama3.1-212M • Text Generation • 0.2B params
MiniLLM/MiniPLM-Mamba-130M • Text Generation • 0.1B params
MiniLLM/MiniPLM-Qwen-1.2B • Text Generation • 1B params
MiniLLM/Ref-Pretrain-Qwen-104M • Text Generation • 0.1B params
MiniLLM/Pretrain-Qwen-1.2B • Text Generation • 1B params
MiniLLM/Pretrain-Qwen-500M • Text Generation • 0.5B params
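All of the checkpoints above are hosted on the Hugging Face Hub. As a minimal usage sketch, assuming the repositories follow the standard transformers causal-LM layout (some architectures may additionally require trust_remote_code=True):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "MiniLLM/MiniPLM-Qwen-500M"  # any model repo from the list above
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Illustrative prompt; generation settings are left at their defaults.
inputs = tokenizer("Knowledge distillation is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```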
Datasets (10)
MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5
MiniLLM/pile-tokenized
MiniLLM/roberta-corpus-processed
MiniLLM/openwebtext-processed
MiniLLM/dolly-processed
MiniLLM/sinst
MiniLLM/uinst
MiniLLM/self-inst
MiniLLM/Vicuna
MiniLLM/dolly
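The datasets can likewise be pulled from the Hub with the datasets library. A minimal sketch; the "train" split name is an assumption, so check each repository for its actual splits and file formats:

```python
from datasets import load_dataset

# Any dataset repo from the list above; the "train" split is assumed.
ds = load_dataset("MiniLLM/dolly-processed", split="train")
print(ds[0])
```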