MINERVA for lmms-eval

This dataset package is prepared for the lmms-eval integration of MINERVA: it bundles the original MINERVA annotations together with the source videos in a single Lance table.

Contents

  • minerva.json: original annotation metadata.
  • data/train.lance: a Lance table with one row per video_id; the raw video bytes are stored inline in the video_blob column.
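
If you only need the annotation metadata, minerva.json can be fetched on its own. A minimal sketch using huggingface_hub; the repo id is taken from the Lance URI in the loading example below, and the exact JSON layout is not documented here:

import json
from huggingface_hub import hf_hub_download

# Download only the annotation file from the dataset repository.
path = hf_hub_download(
    repo_id="lmms-lab-eval/minerva",
    filename="minerva.json",
    repo_type="dataset",
)

with open(path, "r", encoding="utf-8") as f:
    annotations = json.load(f)

# The JSON structure is not documented here; inspect it before use.
print(type(annotations))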

Lance schema

  • video_id: YouTube video ID.
  • youtube_url: YouTube URL reconstructed from the ID.
  • video_ext: file extension of the locally downloaded video (e.g. mp4).
  • video_size_bytes: size of the raw video file in bytes.
  • video_blob: the raw video bytes, stored as a large binary column carrying the lance-encoding:blob=true field metadata so it can be read lazily through the blob API.
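
A quick way to sanity-check this schema and browse the metadata columns (a sketch; it skips video_blob so nothing heavy is downloaded):

import lance

ds = lance.dataset("hf://datasets/lmms-lab-eval/minerva/data/train.lance")

# The Arrow schema should list the five columns described above.
print(ds.schema)

# Scan only the lightweight metadata columns; video_blob is left untouched.
meta = ds.scanner(
    columns=["video_id", "youtube_url", "video_ext", "video_size_bytes"]
).to_table()
print(meta.num_rows, "rows")
print(meta.slice(0, 3).to_pylist())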

Loading examples

import lance

# Open the Lance table directly from the Hugging Face Hub.
ds = lance.dataset("hf://datasets/lmms-lab-eval/minerva/data/train.lance")

# Read lightweight metadata for the first row; the blob column is not scanned.
row = ds.scanner(columns=["video_id"], limit=1).to_table().to_pylist()[0]
print(row["video_id"])

# Lazily fetch the video bytes for row 0 via the blob API and write them to disk.
blob = ds.take_blobs("video_blob", ids=[0])[0]
with open("sample.mp4", "wb") as f:
    f.write(blob.read())
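
Building on the same calls, a hedged sketch that exports every video and uses the video_ext column to name the output files; it assumes video_ext holds a bare extension such as mp4 and that the ids passed to take_blobs correspond to row offsets, as in the example above:

import lance

ds = lance.dataset("hf://datasets/lmms-lab-eval/minerva/data/train.lance")
meta = ds.scanner(columns=["video_id", "video_ext"]).to_table().to_pylist()

for i, row in enumerate(meta):
    # Fetch one blob at a time to keep memory bounded.
    blob = ds.take_blobs("video_blob", ids=[i])[0]
    out_path = f"{row['video_id']}.{row['video_ext']}"  # assumes no leading dot in video_ext
    with open(out_path, "wb") as f:
        f.write(blob.read())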