LuYinMiao
1 follower · 2 following
AI & ML interests
None yet
Recent Activity
reacted to Juanxi's post with 👍, 😎, and 🤗 · about 2 hours ago

📢 Awesome Multimodal Modeling
We introduce Awesome Multimodal Modeling, a curated repository tracing the architectural evolution of multimodal intelligence, from foundational fusion to native omni-models.

🔹 Taxonomy & Evolution:
- Traditional Multimodal Learning – Foundational work on representation, fusion, and alignment.
- Multimodal LLMs (MLLMs) – Architectures connecting vision encoders to LLMs for understanding.
- Unified Multimodal Models (UMMs) – Models unifying Understanding + Generation via Diffusion, Autoregressive, or Hybrid paradigms.
- Native Multimodal Models (NMMs) – Models trained from scratch on all modalities; contrasts early vs. late fusion under scaling laws.

💡 Key Distinction: UMMs unify tasks via generation heads; NMMs enforce interleaving through joint pre-training.

🔗 Explore & Contribute: https://github.com/OpenEnvision-Lab/Awesome-Multimodal-Modeling
Organizations
Models (3)
LuYinMiao/qwen3_4b · Updated Jun 24, 2025
LuYinMiao/qwen · Updated May 1, 2025
LuYinMiao/sft · Updated Apr 21, 2025
Datasets (8)
LuYinMiao/math_eval · Updated Jan 28 · 5
LuYinMiao/medical_eval · Viewer · Updated Jan 28 · 11.6k · 8
LuYinMiao/math · Viewer · Updated Jan 28 · 67.6k · 8
LuYinMiao/medical · Viewer · Updated Jan 28 · 23.5k · 65
LuYinMiao/openR1-difficult · Updated Aug 24, 2025 · 9
LuYinMiao/openR1-random · Updated Aug 24, 2025 · 5
LuYinMiao/openR1-longest · Updated Aug 24, 2025 · 8
LuYinMiao/hehe · Updated Jun 10, 2025 · 4