TWIN
Datasets and models from the paper "Same or Not? Enhancing Visual Perception in Vision-Language Models"
glab-caltech/TWIN-Qwen2.5-VL-3B • Image-Text-to-Text • 4B • Updated about 1 month ago
glab-caltech/TWIN-InternVL3_5-1B • Image-Text-to-Text • 1B • Updated about 1 month ago
glab-caltech/FGVQA • Dataset • 12k rows • Updated about 1 month ago
glab-caltech/TWIN • Dataset • 562k rows • Updated about 1 month ago
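As a usage sketch only (not taken from the paper or the model cards): the snippet below assumes glab-caltech/TWIN-Qwen2.5-VL-3B keeps the stock Qwen2.5-VL checkpoint layout supported by transformers, and that the glab-caltech/TWIN dataset exposes a "train" split with "image" and "question" columns; those split and column names are assumptions, so check the dataset viewer before relying on them.

```python
# Minimal sketch: load the TWIN fine-tune of Qwen2.5-VL and one TWIN example.
# Assumes the standard Qwen2.5-VL checkpoint layout; split/column names are guesses.
from datasets import load_dataset
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "glab-caltech/TWIN-Qwen2.5-VL-3B"
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

twin = load_dataset("glab-caltech/TWIN", split="train")  # split name is an assumption
example = twin[0]
image = example["image"]        # column name is an assumption
question = example["question"]  # column name is an assumption

# Standard Qwen2.5-VL chat-template inference over one image/question pair.
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": question},
    ],
}]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
answer = processor.batch_decode(
    out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```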
VALOR
Models from the paper "No Labels, No Problem: Training Visual Reasoners with Multimodal Verifiers"
glab-caltech/VALOR-8B • 8B • Updated Dec 11, 2025
glab-caltech/VALOR-GroundingDINO • Object Detection • Updated Dec 11, 2025
Paper: No Labels, No Problem: Training Visual Reasoners with Multimodal Verifiers • arXiv 2512.08889 • Published Dec 9, 2025
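Again as a sketch rather than a confirmed recipe: if glab-caltech/VALOR-GroundingDINO follows the standard Grounding DINO format supported by transformers, it can be run as a text-conditioned object detector as below. The image path and the text prompt are placeholders for illustration.

```python
# Minimal sketch: run the VALOR Grounding DINO checkpoint as a zero-shot detector.
# Assumes the repo uses the standard Grounding DINO processor/model format.
import torch
from PIL import Image
from transformers import AutoProcessor, GroundingDinoForObjectDetection

model_id = "glab-caltech/VALOR-GroundingDINO"
processor = AutoProcessor.from_pretrained(model_id)
model = GroundingDinoForObjectDetection.from_pretrained(model_id)

image = Image.open("example.jpg")    # placeholder local image
text = "a red mug. a laptop."        # Grounding DINO expects lowercase phrases ending in "."

inputs = processor(images=image, text=text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into detections in pixel coordinates (height, width order).
results = processor.post_process_grounded_object_detection(
    outputs, inputs.input_ids, target_sizes=[image.size[::-1]]
)[0]
for score, box in zip(results["scores"], results["boxes"]):
    print(f"score={score:.2f}, box={box.tolist()}")
```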