Daniel Fox (FlameF0X)
45 followers · 28 following
https://flamefox.gattodev.tech/
AI & ML interests
Pre-training text generators. (Brother, I'm 17.) Please don't try to contact me.
Recent Activity
Reacted to SeaWolf-AI's post with 🔥 (about 13 hours ago)
ALL Bench — Global AI Model Unified Leaderboard
https://huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard

If you've ever tried to compare GPT-5.2 and Claude Opus 4.6 side by side, you've probably hit the same wall: the official Hugging Face leaderboard only tracks open-source models, so the most widely used AI systems simply aren't there. ALL Bench fixes that by bringing closed-source models, open-weight models, and — uniquely — all four teams under South Korea's national sovereign AI program into a single leaderboard. Thirty-one frontier models, one consistent scoring scale.

Scoring works differently here too. Most leaderboards skip benchmarks a model hasn't submitted, which lets models game their ranking by withholding results. ALL Bench treats every missing entry as zero and divides by ten, so there's no advantage in hiding your weak spots. The ten core benchmarks span reasoning (GPQA Diamond, AIME 2025, HLE, ARC-AGI-2), coding (SWE-bench Verified, LiveCodeBench), and instruction-following (IFEval, BFCL).

The standout is FINAL Bench — the world's only benchmark measuring whether a model can catch and correct its own mistakes. It reached rank five in global dataset popularity on Hugging Face in February 2026 and has been covered by Seoul Shinmun, Asia Economy, IT Chosun, and Behind.

Nine interactive charts let you explore everything from composite score rankings and a full heatmap to an open-vs-closed scatter plot. Operational metrics like context window, output speed, and pricing are included alongside benchmark scores. All data is sourced from Artificial Analysis Intelligence Index v4.0, arXiv technical reports, Chatbot Arena ELO ratings, and the Korean Ministry of Science and ICT's official evaluation results. Updates monthly.
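The missing-entry-as-zero rule described in the post can be sketched as follows. This is a minimal illustration, not ALL Bench's actual implementation: the function name and the example scores are hypothetical, and only a subset of the ten benchmark names from the post is used.

```python
def composite_score(results: dict[str, float], benchmarks: list[str]) -> float:
    """Average over ALL listed benchmarks, counting any missing
    submission as a score of zero (hypothetical sketch of the
    ALL Bench scoring rule described in the post)."""
    return sum(results.get(b, 0.0) for b in benchmarks) / len(benchmarks)


# Withholding a weak result lowers the composite instead of hiding it:
core = ["GPQA Diamond", "AIME 2025"]  # illustrative two-benchmark subset
partial = composite_score({"GPQA Diamond": 80.0}, core)                     # 40.0
full = composite_score({"GPQA Diamond": 80.0, "AIME 2025": 20.0}, core)     # 50.0
print(partial, full)
```

Under a submit-only average, the partial entry would score 80.0 and beat the honest one; dividing by the full benchmark count removes that incentive.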
Upvoted a paper (about 15 hours ago)
Beyond Language Modeling: An Exploration of Multimodal Pretraining
Updated a model (1 day ago)
FlameF0X/Qwen2-0.2B-it
FlameF0X's datasets (17)
FlameF0X/coherence-RLAIF · Viewer · Updated 5 days ago · 49 · 7
FlameF0X/agentic-code · Viewer · Updated 15 days ago · 47.8k · 54
FlameF0X/arXiv-AI-ML · Viewer · Updated 17 days ago · 2.5k · 41
FlameF0X/TinyTask2-BM · Viewer · Updated 20 days ago · 28 · 18
FlameF0X/TinyTask-BM · Viewer · Updated 20 days ago · 1.48k · 13 · 1
FlameF0X/Claude-Corpus · Viewer · Updated 29 days ago · 7 · 28
FlameF0X/YAAR-data · Viewer · Updated Jan 27 · 120 · 8
FlameF0X/MedWiki · Viewer · Updated Jan 24 · 1.83k · 9
FlameF0X/test · Updated Dec 28, 2025 · 11
FlameF0X/Heuristic-Epistemic-Reasoning · Viewer · Updated Dec 26, 2025 · 50 · 17
FlameF0X/Lime · Preview · Updated Dec 13, 2025 · 53
FlameF0X/Safety_Alignment_Benchmark · Viewer · Updated Dec 6, 2025 · 155 · 11
FlameF0X/i3-chat · Viewer · Updated Dec 5, 2025 · 50 · 11
FlameF0X/chat-pretrain · Updated Dec 5, 2025 · 6
FlameF0X/Carpathia-Mix · Updated Nov 24, 2025 · 13
FlameF0X/Lang2Lang · Viewer · Updated Aug 10, 2025 · 92 · 9
FlameF0X/Mixture-of-Thoughts-2048T · Viewer · Updated Jun 29, 2025 · 234k · 35 · 3