Kiran N (Fourwheels2512)
Recent Activity: posted an update about 15 hours ago
Zero Forgetting in LLM Fine-Tuning — 4 Benchmarks, All Domains Retained
We tested sequential fine-tuning on Mistral-7B across 4 independent benchmarks (5, 4, 5, and 8 domains).
Standard LoRA forgets 38–49% of prior knowledge per domain. Our continual learning adapter: -0.17% drift.
The Salesforce 5-domain test showed positive backward transfer — the model got better at old domains as it learned new ones (retention BERTScore: 0.889 → 0.907).
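The drift and backward-transfer numbers above come down to comparing per-domain eval scores before and after later fine-tuning stages. A minimal sketch of such a retention-drift metric (the function name, sign convention, and sample values are assumptions for illustration, not the post's actual evaluation code):

```python
def retention_drift(before: list[float], after: list[float]) -> float:
    """Mean relative change (%) in per-domain eval scores after later
    fine-tuning stages. Negative = forgetting; positive = backward transfer."""
    changes = [(a - b) / b * 100 for b, a in zip(before, after)]
    return sum(changes) / len(changes)

# Hypothetical 5-domain retention BERTScores before/after sequential training,
# using the 0.889 -> 0.907 figures from the post as a stand-in
print(round(retention_drift([0.889] * 5, [0.907] * 5), 2))  # prints 2.02
```

Under this convention, standard LoRA's 38–49% forgetting would show up as a large negative drift, while the reported −0.17% is near-zero and the Salesforce result is positive.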
No replay buffers. No EWC. No knowledge distillation. Spectral norm locked at 1.0. Naive LoRA crashed at gradient norm 263. Ours: under 6.
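"Spectral norm locked at 1.0" can be read as rescaling each adapter update so its largest singular value never exceeds 1. A minimal NumPy sketch of that constraint on a LoRA-style low-rank update (the function and matrix shapes are illustrative assumptions, not the product's implementation):

```python
import numpy as np

def clamp_spectral_norm(delta_w: np.ndarray, max_norm: float = 1.0) -> np.ndarray:
    """Rescale an update matrix so its spectral norm (largest singular
    value) is at most max_norm; smaller updates pass through unchanged."""
    sigma = np.linalg.norm(delta_w, ord=2)  # ord=2 on a matrix = spectral norm
    if sigma > max_norm:
        delta_w = delta_w * (max_norm / sigma)
    return delta_w

# Hypothetical LoRA update: low-rank product B @ A with rank 8
rng = np.random.default_rng(0)
B = rng.standard_normal((64, 8))
A = rng.standard_normal((8, 64))
delta_w = clamp_spectral_norm(B @ A, max_norm=1.0)
print(np.linalg.norm(delta_w, ord=2))  # at most 1.0 (up to float error)
```

Bounding the update's spectral norm caps how much any single direction in the weight space can move per step, which is one plausible way to keep gradient norms from exploding the way the naive run did.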
Interactive benchmark dashboard: https://huggingface.co/spaces/Fourwheels2512/zero-forgetting-benchmarks
Live product (free tier): https://mhc-finetune-saas-zrtokzlkbnue9zsk7jfgad.streamlit.app
Patent pending. 7 technical reports. 196 automated tests. Solo founder, 6 months of R&D.