This is the official checkpoint for the OmniAtlas-Qwen2.5-3B model, introduced in the paper OmniGAIA: Towards Native Omni-Modal AI Agents. Download and use it for evaluation on the OmniGAIA Benchmark, or apply it to your own omni-modal agentic tasks.
OmniAtlas is trained in two stages; see the paper for details of the training pipeline.
Serve the model with vLLM via an OpenAI-compatible API:
vllm serve /path/to/your/OmniAtlas-Qwen2.5-3B \
    --served-model-name omniatlas-qwen2.5-3b \
    --port 8080 \
    --host 0.0.0.0 \
    --tensor-parallel-size 1 \
    --gpu-memory-utilization 0.9 \
    --trust-remote-code \
    --enable-auto-tool-choice \
    --tool-call-parser hermes \
    --uvicorn-log-level debug \
    --max-model-len 32768
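Before sending omni-modal requests, it can help to confirm the server is exposing the expected model name: the OpenAI-compatible endpoint `GET /v1/models` returns a JSON list of served models. A minimal offline sketch of parsing that response shape (the sample payload below is illustrative, not captured from a live server):

```python
import json

# Illustrative /v1/models payload in the OpenAI-compatible shape
# (not captured from a real server).
sample = json.dumps({
    "object": "list",
    "data": [
        {"id": "omniatlas-qwen2.5-3b", "object": "model", "owned_by": "vllm"},
    ],
})

def served_model_ids(payload: str) -> list[str]:
    """Extract model ids from a /v1/models JSON response body."""
    return [m["id"] for m in json.loads(payload)["data"]]

print(served_model_ids(sample))  # → ['omniatlas-qwen2.5-3b']
```

The id returned should match the `--served-model-name` passed to `vllm serve`; that is the value to use as `model=` in the client calls below.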
Then query the model with omni-modal inputs (image, audio, video) and tools:
import base64

from openai import OpenAI

# Point the client at the local vLLM server; vLLM ignores the API key value.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="empty")

def encode_base64(file_path):
    """Read a file and return its contents as a base64 string."""
    with open(file_path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

video_b64 = encode_base64("/path/to/video.mp4")
audio_b64 = encode_base64("/path/to/audio.wav")
image_b64 = encode_base64("/path/to/image.jpg")

# Tool schemas in the OpenAI function-calling format.
tools = [
    {
        "type": "function",
        "function": {
            "name": "web_search",
            "description": "Perform a Google web search and return the top results (title, URL, snippet).",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "The search query string."}
                },
                "required": ["query"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "page_browser",
            "description": "Fetch and extract the full text content from one or more webpages.",
            "parameters": {
                "type": "object",
                "properties": {
                    "urls": {
                        "type": "array",
                        "items": {"type": "string"},
                        "description": "A list of webpage URLs to fetch content from.",
                    }
                },
                "required": ["urls"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "code_executor",
            "description": "Execute Python code in a sandbox. Available: numpy, pandas, sympy, math, etc.",
            "parameters": {
                "type": "object",
                "properties": {
                    "code": {"type": "string", "description": "Python code to execute."}
                },
                "required": ["code"],
            },
        },
    },
]

# Omni-modal inputs are passed as base64 data URLs alongside the text prompt.
response = client.chat.completions.create(
    model="omniatlas-qwen2.5-3b",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "video_url", "video_url": {"url": f"data:video/mp4;base64,{video_b64}"}},
                {"type": "audio_url", "audio_url": {"url": f"data:audio/wav;base64,{audio_b64}"}},
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
                {"type": "text", "text": "Based on the video, audio, and image, answer the question: ..."},
            ],
        }
    ],
    tools=tools,
    tool_choice="auto",
)
print(response.choices[0].message.content)
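When the model decides to use a tool, the response carries `tool_calls` rather than (or alongside) final text: the caller executes each call locally and appends the result as a `"tool"` message before querying the model again. A minimal sketch of that dispatch step, with stub backends standing in for real search/browse/execute implementations (the handler names mirror the tool schemas above; everything else is illustrative):

```python
import json

# Stub backends in place of real web_search / page_browser / code_executor
# implementations (illustrative only).
HANDLERS = {
    "web_search": lambda query: f"results for: {query}",
    "page_browser": lambda urls: f"fetched {len(urls)} page(s)",
    "code_executor": lambda code: "executed",
}

def run_tool_calls(tool_calls):
    """Execute each tool call and return the 'tool' messages to append
    to the conversation before re-querying the model."""
    messages = []
    for call in tool_calls:
        # Arguments arrive as a JSON string in the OpenAI tool-call format.
        args = json.loads(call.function.arguments)
        result = HANDLERS[call.function.name](**args)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": str(result),
        })
    return messages
```

In a full agent loop you would append the assistant message plus these tool messages to `messages` and call `client.chat.completions.create` again, repeating until the model returns a final answer instead of further tool calls.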
For full evaluation and agent usage, see the OmniGAIA repository.
If you find OmniAtlas useful in your work, we kindly ask that you cite us:
@misc{li2026omnigaia,
  title={OmniGAIA: Towards Native Omni-Modal AI Agents},
  author={Xiaoxi Li and Wenxiang Jiao and Jiarui Jin and Shijian Wang and Guanting Dong and Jiajie Jin and Hao Wang and Yinuo Wang and Ji-Rong Wen and Yuan Lu and Zhicheng Dou},
  year={2026},
  archivePrefix={arXiv},
  eprint={2602.22897},
  url={https://arxiv.org/abs/2602.22897}
}
Base model: Qwen/Qwen2.5-Omni-3B