How to use HumorR1/policy-e2b-grpo-thinking with PEFT:

from peft import PeftModel
from transformers import AutoModelForImageTextToText

# Load the vision-language base model, then attach the LoRA adapter on top.
base_model = AutoModelForImageTextToText.from_pretrained("Qwen/Qwen3-VL-2B-Thinking")
model = PeftModel.from_pretrained(base_model, "HumorR1/policy-e2b-grpo-thinking")

LoRA adapter on Qwen3-VL-2B-Thinking, trained via GRPO against the Bradley-Terry reward model HumorR1/rm-qwen25vl-3b-nodesc. Output format: {thinking}</think>\n\n<caption>X</caption>.
Trained on the New Yorker caption ranking dataset (yguooo/newyorker_caption_ranking). Part of a 2x2 ablation over training method (SFT, GRPO) and output format (no thinking, thinking) for humor caption generation. See HumorR1/rm-qwen25vl-3b-nodesc for the reward model used to train (and score) this policy.
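Given the output format above, a minimal parsing sketch (the helper name extract_caption is illustrative, not part of this repo):

import re

def extract_caption(text: str) -> str | None:
    """Pull the caption out of '{thinking}</think>\n\n<caption>X</caption>' output."""
    m = re.search(r"<caption>(.*?)</caption>", text, re.DOTALL)
    return m.group(1).strip() if m else None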
Backbone: Qwen/Qwen3-VL-2B-Thinking.
This repo is a LoRA adapter; load with peft.PeftModel.from_pretrained.
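If a standalone checkpoint is more convenient than base-plus-adapter, the LoRA weights can be folded into the base model with the standard PEFT API (the output directory below is a hypothetical choice):

# Merge the LoRA deltas into the base weights and drop the adapter wrappers.
merged = model.merge_and_unload()
merged.save_pretrained("qwen3vl-2b-humor-merged")  # hypothetical output path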
from PIL import Image
from transformers import AutoProcessor
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# Processor for the base model (chat template + image preprocessing).
processor = AutoProcessor.from_pretrained("Qwen/Qwen3-VL-2B-Thinking", trust_remote_code=True)

# LoRA-enabled vLLM engine on the base model; the adapter is passed per request.
llm = LLM(model="Qwen/Qwen3-VL-2B-Thinking", trust_remote_code=True, dtype="bfloat16",
          enable_lora=True, max_lora_rank=32, max_model_len=4096)
# Caption format: <caption>X</caption>; thinking variant prefixes <think>...</think>.
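A minimal end-to-end generation sketch under the setup above; the image path and adapter name (cartoon.png, humor_r1) are placeholders, and the adapter is downloaded first because vLLM's LoRARequest expects a local path:

from huggingface_hub import snapshot_download

# Fetch the adapter weights so LoRARequest gets a local directory.
adapter_path = snapshot_download("HumorR1/policy-e2b-grpo-thinking")

image = Image.open("cartoon.png")  # placeholder: any New Yorker-style cartoon
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Write a funny caption for this cartoon."},
]}]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    SamplingParams(temperature=0.7, max_tokens=512),
    lora_request=LoRARequest("humor_r1", 1, adapter_path),
)
print(outputs[0].outputs[0].text)  # {thinking}</think>\n\n<caption>X</caption>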
Reward model: HumorR1/rm-qwen25vl-3b-nodesc (held-out pairwise accuracy 0.6635).
Base model: Qwen/Qwen3-VL-2B-Thinking
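For reference, "pairwise accuracy" for a Bradley-Terry reward model is the fraction of held-out caption pairs where the model scores the human-preferred caption higher. A generic sketch of the metric (function names are illustrative, not from the reward-model repo):

import torch

def bt_win_prob(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry probability that the chosen caption beats the rejected one."""
    return torch.sigmoid(r_chosen - r_rejected)

def pairwise_accuracy(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> float:
    """Fraction of pairs where the reward model ranks the preferred caption higher."""
    return (r_chosen > r_rejected).float().mean().item()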