Zero-Overhead Introspection for Adaptive Test-Time Compute Paper • 2512.01457 • Published Dec 1, 2025 • 1
New family of 1B models just dropped!
> LiquidAI/LFM2.5-1.2B-Base: 10T → 28T tokens
> LiquidAI/LFM2.5-1.2B-Instruct: new large-scale multi-stage RL
> LiquidAI/LFM2.5-1.2B-JP: our most polite model
> LiquidAI/LFM2.5-VL-1.6B: multi-image multilingual
> LiquidAI/LFM2.5-Audio-1.5B: 8x faster, no quality loss
Super proud of this release 🤗
NeuralArTS: Structuring Neural Architecture Search with Type Theory Paper • 2110.08710 • Published Oct 17, 2021
Towards One Shot Search Space Poisoning in Neural Architecture Search Paper • 2111.07138 • Published Nov 13, 2021
Consent in Crisis: The Rapid Decline of the AI Data Commons Paper • 2407.14933 • Published Jul 20, 2024 • 14
Bridging the Data Provenance Gap Across Text, Speech and Video Paper • 2412.17847 • Published Dec 19, 2024 • 10
Domain Adaptation of Llama3-70B-Instruct through Continual Pre-Training and Model Merging: A Comprehensive Evaluation Paper • 2406.14971 • Published Jun 21, 2024
Training-Free Tokenizer Transplantation via Orthogonal Matching Pursuit Paper • 2506.06607 • Published Jun 7, 2025 • 2
MultiBanana: A Challenging Benchmark for Multi-Reference Text-to-Image Generation Paper • 2511.22989 • Published Nov 28, 2025 • 16
EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models Paper • 2312.06281 • Published Dec 11, 2023 • 2
Democratizing Diplomacy: A Harness for Evaluating Any Large Language Model on Full-Press Diplomacy Paper • 2508.07485 • Published Aug 10, 2025 • 10
Antislop: A Comprehensive Framework for Identifying and Eliminating Repetitive Patterns in Language Models Paper • 2510.15061 • Published Oct 16, 2025 • 1
LiquidAI/LFM2-8B-A1B just dropped!
8.3B params with only 1.5B active/token 🚀
> Quality ≈ 3–4B dense, yet faster than Qwen3-1.7B
> MoE designed to run on phones/laptops (llama.cpp / vLLM)
> Pre-trained on 12T tokens → strong math/code/IF
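The "8.3B params, 1.5B active/token" figure comes from top-k expert routing: each token runs only a few of the experts, so the per-token compute tracks the active subset rather than the full model. A minimal sizing sketch, assuming a hypothetical split of shared vs. expert parameters and a hypothetical top-k setting (not LFM2-8B-A1B's actual architecture):

```python
# Hypothetical MoE sizing sketch. All numbers below are illustrative
# assumptions, not the real LFM2-8B-A1B configuration.

def active_params(shared: float, expert_total: float,
                  num_experts: int, top_k: int) -> float:
    """Parameters touched per token in a top-k routed MoE:
    shared layers always run; only top_k of num_experts experts run."""
    return shared + expert_total * top_k / num_experts

# Assumed split: 0.5B shared params + 7.8B spread across 32 experts,
# with top-4 routing per token.
total = 0.5e9 + 7.8e9
per_token = active_params(0.5e9, 7.8e9, num_experts=32, top_k=4)
print(f"total={total / 1e9:.1f}B  active/token={per_token / 1e9:.2f}B")
```

With these assumed numbers the arithmetic lands near the announced figures (8.3B total, ~1.5B active), which is why such a model can match a 3–4B dense model in quality while running at small-model speed on laptops and phones.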