Dutch Town Scene 1 (#zn7nyhyg)
SiRLab publishes LiDAR-first synthetic datasets with dense per-point semantics, synchronized RGB, and ego-motion, built to plug into modern perception pipelines.
- Best for: point-cloud semantic segmentation, filtering/analytics, fusion prototyping.
- Commercialization: you may commercialize models/outputs trained on the Full dataset (dataset redistribution is prohibited; see License below).
- Full access / quote / integration: see Commercial access & services.
Documentation status
This README is generated automatically and is provided “AS IS”; it may contain inaccuracies, errors, or incomplete details. For a validated, dataset-specific README, contact contact@sirlab.ai and include the datasetId.
Quick Preview
LiDAR viewer (Space)
Want the Full dataset (complete LiDAR + camera + ego-motion streams)?
See Commercial access & services to request Full access or a quote.
Dataset Description
Synthetic urban driving dataset with LiDAR, RGB camera, and ego-motion. Includes dense semantic annotations for point-cloud and image segmentation via a shared label registry.
Why this dataset is different
LiDAR first, not camera first: Every LiDAR return includes a semantic label_id aligned to a dataset-wide label registry. No manual labeling pipeline needed for point-cloud segmentation workflows.
Stable shapes for GPU batching: LiDAR frames are exported as fixed-size arrays (invalid returns encoded as xyz == [0,0,0]), which is convenient for CUDA pipelines and reproducible training runs.
Multi-modal alignment: LiDAR, camera, and ego-motion share timing anchors (unix_timestamp_ns) and per-frame world poses to support cross-sensor synchronization.
Full dataset adds production deliverables: The Full repo includes complete sensor streams and additional artifacts (e.g., frame indexes, intrinsics, and static ground truth) beyond the lightweight preview/ exports.
Preview Files are for inspection only. Do not use Preview Files for training, testing, validation, evaluation, benchmarking, or performance claims.
Technical Overview
- Total simulation steps: 150
- Sensors:
  - lidar_360_1 (LiDAR): 150 frames (Full dataset only), 10.00 Hz, dir=lidar/lidar_360_1
  - camera_front_1 (Camera): 150 frames (Full dataset only), 10.00 Hz, dir=camera/camera_front_1
  - ego_motion (EgoMotion): 150 frames (Full dataset only), 10.00 Hz, dir=ego_motion/ego_motion
- Total duration: 15.0 seconds
- Semantic classes: 53, dir=labels/*
- Static GT boxes: 218 (Full dataset only), dir=static/*
License
Use of this dataset is governed by the SiRLab Synthetic Dataset License Agreement.
Current license terms: SiRLab Dataset License
Access, Availability & Commercial Terms
Dataset identifier
This dataset is identified by datasetId: zn7nyhyg (see manifest.json). Use this identifier in all access, support, and procurement requests.
Availability modes
This dataset may be published in one of the following modes:
- Preview dataset (public): includes Preview Files under preview/, plus manifest.json and the label registry.
- Full dataset (access-controlled): includes the complete dataset contents for the same datasetId (as defined by the Manifest and license terms).
Preview Files are lightweight inspection artifacts and may use different formats than the Full dataset deliverables (e.g., viewer-optimized exports).
Hub repositories:
- Preview dataset repo: https://huggingface.co/datasets/sirlab-ai/driving-urban-dutch-fog__zn7nyhyg-preview
- Full dataset repo: https://huggingface.co/datasets/sirlab-ai/driving-urban-dutch-fog__zn7nyhyg
If you do not have full access enabled, only the Preview dataset may be visible/downloadable.
Full access enablement
Full access for a given datasetId is provisioned to authorized licensees under the SiRLab Synthetic Dataset License Agreement. To request enablement, see Commercial access & services.
Commercial access & services
The Full dataset is provided to authorized licensees under the SiRLab Synthetic Dataset License Agreement (delivery may be via Hugging Face access or another secure method).
Commercial terms (pricing, scope, and delivery) are handled via quote/order.
To request access or a quote, use https://www.sirlab.ai/contact (or email contact@sirlab.ai) and include the datasetId: zn7nyhyg.
We engage in one of the following ways (depending on your needs):
- Full dataset license (access to the Full dataset for the same datasetId)
- Custom dataset generation (you specify ODD/edge cases/sensor model; we deliver datasetId(s))
- Pipeline integration (bring our simulation + sensor + labeling pipeline into your environment)
For a quicker reply, include:
- use case (ADAS / robotics / mapping / etc.)
- sensors required (LiDAR-first, camera second, or both)
- target scenarios (e.g., fog, intersections, VRUs)
- timeline + deployment constraints (cloud/on-prem/security)
Dataset Intended Usage
This dataset may be used solely for internal research, development, training, testing, and evaluation of AI/ML systems (including generative AI), and for creating derived synthetic data and annotations, as permitted by the License Agreement. You may commercialize resulting models/outputs provided they do not include the Dataset (or any portion of it as a substitute) and you do not redistribute the Dataset. If present, Preview Files are for inspection/preview only and must not be used for training, testing, validation, evaluation, benchmarking, or performance claims.
Want proof it works on your stack?
We can run a private evaluation on the Full dataset and return metrics/plots without distributing the dataset. This is useful when procurement is still in progress or you want an apples-to-apples comparison under NDA.
Dataset Characterization
Data Collection Method
All data is generated via physics-based simulation with virtual sensors:
- Generated in a controlled 3D environment using physically-based sensor models.
- No real-world sensor recordings are used; signals come from the renderer and simulation stack.
Labeling Method
Labels are produced automatically from simulation ground truth:
- Semantic IDs, static 3D boxes, and dynamic object labels (e.g., pedestrians, vehicles) are derived from engine metadata (object classes, instance IDs, and full 6-DoF poses over time).
- No manual or AI annotation is involved.
Use of Generative Models
No generative models are used for sensor or label creation:
- This dataset does not rely on diffusion models, GANs, or other generative techniques to create sensor data or annotations.
- All content comes from deterministic or stochastic simulation components (rendering, physics, and procedural scene logic) inside the engine.
Preview Files
Preview files (if provided) are located under preview/ and are intended for inspection and presentation. They may not be representative of the full dataset.
- Not intended for model training or evaluation/benchmarking.
- If shared externally, label them as “Preview Files”.
Directory Layout
Preview dataset layout (public)
<dataset_root>/
├─ preview/
│ ├─ *.pclbin # point-cloud preview files (0..N)
│ ├─ *_rgb.webp # RGB preview images (0..N)
│ └─ *_seg.webp # segmentation preview images (0..N)
├─ labels/
│ ├─ labels.json
│ └─ labels_lut.png
├─ README.md
└─ manifest.json
Full dataset layout
<dataset_root>/
├─ labels/
│ ├─ labels.json
│ └─ labels_lut.png
├─ static/
│ ├─ static_gt_parquet_schema.json
│ ├─ static_gt_world.json
│ └─ static_gt_world.parquet
├─ lidar/
│ └─ lidar_360_1/
│ ├─ frame_schema.json
│ ├─ frames/
│ │ ├─ 00000000.npz
│ │ ├─ 00000001.npz
│ │ └─ ...
│ ├─ frames_index.parquet
│ └─ intrinsics.json
├─ camera/
│ └─ camera_front_1/
│ ├─ frame_schema.json
│ ├─ frames_index.parquet
│ ├─ intrinsics.json
│ ├─ rgb/
│ │ ├─ 00000000.png
│ │ ├─ 00000001.png
│ │ └─ ...
│ ├─ semantic_color/
│ │ ├─ 00000000.png
│ │ ├─ 00000001.png
│ │ └─ ...
│ └─ semantic_id/
│ ├─ 00000000.png
│ ├─ 00000001.png
│ └─ ...
├─ ego_motion/
│ └─ ego_motion/
│ ├─ frame_schema.json
│ ├─ frames/
│ │ ├─ 00000000.json
│ │ ├─ 00000001.json
│ │ └─ ...
│ ├─ frames_index.parquet
│ └─ sensor_info.json
├─ README.md
└─ manifest.json
Preview Assets
Overview
Both the Full and Preview repositories include a preview/ folder with lightweight inspection assets.
- In the Preview repo, preview/ is essentially what you get (plus manifest.json and labels).
- In the Full repo, preview/ is just a small visual subset alongside the full sensor streams.
Note: See Access, Availability & Commercial Terms for what’s included in Preview vs Full.
Included in preview/
- LiDAR samples: preview/*.pclbin (viewer-friendly exports)
- Camera samples: preview/*_rgb.webp, preview/*_seg.webp (RGB + segmentation previews)
Data Format & Conventions
- Directory layout: see Directory Layout
- Per-sensor indexing: each sensor typically provides a frames_index.parquet (or equivalent) that enumerates frames and metadata.
- Frame storage: frames are stored under per-sensor frames/ directories (format varies by sensor; see the sensor section).
- Labels: label mapping is under labels/ (e.g., labels.json) and is referenced via label_id.
- Static ground truth (Full dataset only): stored under static/ in world coordinates (see Static Groundtruth).
- Coordinate/time conventions: see Coordinate Frames & Time.
If details are missing or ambiguous, contact contact@sirlab.ai with the datasetId for a validated README.
Coordinate Frames & Time
Coordinate Frames
- World frame: right-handed (RH), meters.
- Sensor frame: per-sensor local frame (RH). Sensor data (e.g., LiDAR points) is stored in sensor frame, while the sensor pose is provided in world frame.
- Rotations: quaternions are stored as xyzw unless otherwise stated.
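For illustration, the following minimal Python sketch (assuming numpy and scipy are installed) applies a stored sensor pose to sensor-frame points using the xyzw quaternion convention above. The array names mirror the LiDAR frame fields documented later in this README.

```python
# Minimal sketch: transform sensor-frame points into the world frame using
# the per-frame sensor pose (position + xyzw quaternion).
import numpy as np
from scipy.spatial.transform import Rotation

def sensor_to_world(xyz, sensor_pos_world, sensor_rot_world_xyzw):
    """Rotate then translate sensor-frame points into world coordinates."""
    rot = Rotation.from_quat(sensor_rot_world_xyzw)  # scipy expects xyzw order
    return rot.apply(xyz) + np.asarray(sensor_pos_world)

# Dummy example: one point 10 m ahead of a sensor at (100, 50, 1.8), identity rotation.
points_world = sensor_to_world(
    xyz=np.array([[10.0, 0.0, 0.0]], dtype=np.float32),
    sensor_pos_world=[100.0, 50.0, 1.8],
    sensor_rot_world_xyzw=[0.0, 0.0, 0.0, 1.0],
)
print(points_world)  # -> [[110.  50.   1.8]]
```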
Time & Alignment
- frame_index is authoritative per sensor and comes from the sensor payload (e.g., payload_simstep).
- sim_step is the simulation tick at which data was recorded.
- unix_timestamp_ns is a synthetic wall-clock anchored at simulation start (sim_start_timestamp_ns) for cross-sensor synchronization.
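As a sketch of how the shared unix_timestamp_ns anchor can be used, the snippet below pairs LiDAR and camera frames by nearest timestamp with pandas. Paths and column names follow the Full dataset layout and the per-sensor index descriptions in this README; adjust them to your local copy.

```python
# Minimal sketch: align LiDAR and camera frames via their shared
# unix_timestamp_ns anchor using a nearest-timestamp join.
import pandas as pd

lidar_idx = pd.read_parquet("lidar/lidar_360_1/frames_index.parquet")
camera_idx = pd.read_parquet("camera/camera_front_1/frames_index.parquet")

aligned = pd.merge_asof(
    lidar_idx.sort_values("unix_timestamp_ns"),
    camera_idx.sort_values("unix_timestamp_ns"),
    on="unix_timestamp_ns",
    direction="nearest",
    suffixes=("_lidar", "_camera"),
)
# Each row now pairs a LiDAR frame with the camera frame closest in time.
print(aligned[["frame_index_lidar", "frame_index_camera", "unix_timestamp_ns"]].head())
```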
Sensors
LiDAR
Availability note: If you are viewing the Preview dataset, LiDAR data may be limited to preview/ samples. The complete LiDAR stream is available to authorized licensees in the Full dataset repo for the same datasetId.
Formats: Preview LiDAR uses preview/*.pclbin (viewer export). Full LiDAR uses lidar/.../*.npz (dataset deliverable).
This directory contains 3D point cloud frames recorded from simulated LiDAR sensor(s).
Key properties
- Dense per-point semantic labeling: every returned point stores a label_id aligned with the dataset label registry. Eliminates manual labeling pipelines for semantic segmentation / filtering / analytics.
- Fixed-size frames (GPU-friendly): frames keep a consistent point array size N matching the sensor model (no culling). Invalid returns are encoded as xyz == [0, 0, 0]. This is convenient for batching and CUDA/GPU processing (stable shapes).
- Direct label remap: label_id is a compact uint that must be decoded through the dataset-wide label registry (same system used by all modalities and groundtruth).
Directory layout
<dataset_root>/
lidar/
<lidar_sensor_name>/
intrinsics.json
frame_schema.json
frames_index.parquet
frames/
000000.npz
000001.npz
...
intrinsics.json
Sensor intrinsics and timing metadata.
- schema: lidar_intrinsics@1
- timing.sim_start_timestamp_ns: sim start anchor (used for synthetic wall-clock)
- timing.sim_framerate_hz: simulation tick rate
- timing.sensor_framerate_hz: sensor output rate
- intrinsics: LiDAR model parameters (FOV, channels, range, noise, etc.)
frame_schema.json
Machine-readable schema for the .npz frame content (arrays, dtypes, shapes, units, and frames).
Treat this file as the source-of-truth for consumers.
Per-frame .npz (frames/000123.npz)
Each .npz contains a single LiDAR frame:
xyz float32 [N, 3] meters, sensor frame
sensor_pos_world float32 [3] meters, world frame
sensor_rot_world float32 [4] quat xyzw, world frame
label_id uint8 [N] semantic label id per point
labels_hist int64 [K, 2] (label_id, count) histogram
label_id is a dataset-wide semantic identifier. Decode it using the dataset’s label registry
(see Groundtruth → Labels / Labeling System).
Notes:
- Coordinate frames: point coordinates are stored in the sensor frame; the sensor pose is stored in world space.
- Padding: because frames are fixed-size, unused points may be encoded as xyz == [0, 0, 0]. Consumers can mask these out if they need only valid returns.
- Current export: only XYZ + pose + labels are exported at the moment (no intensity / rings / returns).
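A minimal loading sketch for a single frame, assuming the array names listed above (frame_schema.json remains the source of truth); the frame path is illustrative.

```python
# Minimal sketch: load one LiDAR frame, drop padded (invalid) returns,
# and count points per semantic class.
import numpy as np

frame = np.load("lidar/lidar_360_1/frames/00000000.npz")  # illustrative path
xyz = frame["xyz"]            # float32 [N, 3], sensor frame, meters
label_id = frame["label_id"]  # uint8 [N], dataset-wide semantic ids

# Padding convention: invalid returns are encoded as xyz == [0, 0, 0].
valid = np.any(xyz != 0.0, axis=1)
ids, counts = np.unique(label_id[valid], return_counts=True)

print(f"{int(valid.sum())} valid returns out of {len(xyz)}")
print(dict(zip(ids.tolist(), counts.tolist())))  # compare against labels_hist
```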
frames_index.parquet
One row per recorded frame for discovery and cross-sensor alignment.
Recommended columns:
frame_index int64 # authoritative frame id from the LiDAR payload (payload_simstep)
sim_step int64 # simulation step at which the frame was recorded
sim_time_s float64 # sim_step / sim_framerate_hz
unix_timestamp_ns int64 # synthetic wall-clock anchored at sim start
frame_file string # run-root-relative path to the .npz (use the subfolder path)
Camera
Availability note: If you are viewing the Preview dataset, camera data may be limited to preview/ samples. The complete camera stream is available to authorized licensees in the Full dataset repo for the same datasetId.
This directory contains RGB images and semantic segmentation outputs recorded from the simulated camera sensor(s).
Directory layout
<dataset_root>/
└─ camera/
   └─ camera_front_1/
├─ intrinsics.json
├─ rgb/
│ ├─ 00000000.png
│ ├─ 00000001.png
│ └─ ...
├─ semantic_id/
│ ├─ 00000000.png
│ ├─ 00000001.png
│ └─ ...
├─ semantic_color/
│ ├─ 00000000.png
│ ├─ 00000001.png
│ └─ ...
├─ frame_schema.json
└─ frames_index.parquet
Per-frame files
- rgb/00000123.png: 8-bit sRGB RGB image.
- semantic_id/00000123.png: 8-bit grayscale image where each pixel stores a semantic label id (uint8). 0 means unlabeled. To visualize or decode classes, interpret ids using the dataset label registry.
- semantic_color/00000123.png: visualization derived from semantic_id using the label registry palette. Pixel ids are clamped to the palette size before lookup.
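As an example of consuming the semantic_id images, this sketch (assuming numpy and Pillow are installed) extracts a binary mask for a single class. The label id used is purely illustrative; look up real ids in the label registry under labels/.

```python
# Minimal sketch: read a semantic_id image and build a binary mask for one class.
import numpy as np
from PIL import Image

sem_id = np.array(Image.open("camera/camera_front_1/semantic_id/00000000.png"))
print("image shape:", sem_id.shape, "ids present:", np.unique(sem_id))

target_id = 46                     # illustrative id; 0 means unlabeled
mask = sem_id == target_id         # boolean HxW mask for that class
Image.fromarray((mask * 255).astype(np.uint8)).save("mask_00000000.png")
```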
Intrinsics (camera/.../intrinsics.json)
Contains camera intrinsics and timing metadata (projection parameters, distortion if applicable, image size, etc.). The sensor pose is stored in world space.
Frame index (camera/.../frames_index.parquet)
One row per frame.
Typical columns:
frame_index int64 # authoritative frame index from payload
sim_step int64 # simulation step at recording time
sim_time_s float64 # sim_step / sim_framerate_hz
unix_timestamp_ns int64 # synthetic wall-clock anchored at sim start (for cross-sensor sync)
sensor_pos_world_x float32
sensor_pos_world_y float32
sensor_pos_world_z float32
sensor_rot_world_x float32
sensor_rot_world_y float32
sensor_rot_world_z float32
sensor_rot_world_w float32
rgb_file string # path under camera/.../rgb/
semantic_id_file string # path under camera/.../semantic_id/
semantic_color_file string # path under camera/.../semantic_color/
Notes:
- Image data is stored as PNGs; camera pose is stored per-frame in world space in frames_index.parquet.
- Semantic ids are dataset-wide ids; decode them using the dataset label registry.
Ego motion
Availability note: If you are viewing the Preview dataset, ego-motion data may be limited to preview/ samples. The complete ego-motion stream is available to authorized licensees in the Full dataset repo for the same datasetId.
This directory contains per-frame ego-motion snapshots (pose + IMU-like signals) recorded from the simulated ego system.
Directory layout
<dataset_root>/
ego_motion/
<ego_motion_sensor_name>/
sensor_info.json
frame_schema.json
frames_index.parquet
frames/
00000000.json
00000001.json
...
sensor_info.json
Sensor-level metadata and conventions:
- Timing fields (simulation start anchor and rates)
- Coordinate system: RH
- Units: meters (position/linear), radians (angular)
- Semantics note: pose + IMU quantities are expressed in world frame
frame_schema.json
Machine-readable schema for the per-frame JSON files under frames/.
Per-frame JSON (frames/00000042.json)
Each file contains a single ego snapshot:
- frame_index (authoritative id from payload)
- sim_step, delta_seconds
- pose (world frame):
  - pos_world_m: [x, y, z]
  - rot_world_quat_xyzw: [qx, qy, qz, qw]
  - half_extents_m: half-size of the ego bounding box (same convention as static groundtruth boxes)
- imu_world (world frame):
  - lin_vel_m_s: linear velocity [vx, vy, vz]
  - lin_acc_m_s2: linear acceleration [ax, ay, az]
  - ang_vel_rad_s: angular velocity [wx, wy, wz]
  - ang_acc_rad_s2: angular acceleration [alphax, alphay, alphaz]
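A small sketch of assembling the ego trajectory from these snapshots, assuming the fields nest as listed above (e.g., pose.pos_world_m); frame_schema.json is authoritative.

```python
# Minimal sketch: collect the ego trajectory from per-frame JSON snapshots
# and report the total travelled distance.
import glob
import json
import numpy as np

positions = []
for path in sorted(glob.glob("ego_motion/ego_motion/frames/*.json")):
    with open(path) as f:
        snap = json.load(f)
    positions.append(snap["pose"]["pos_world_m"])  # assumed nesting: [x, y, z] world frame

trajectory = np.asarray(positions)               # shape [num_frames, 3]
step_lengths = np.linalg.norm(np.diff(trajectory, axis=0), axis=1)
print("travelled distance [m]:", step_lengths.sum())
```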
frames_index.parquet
One row per recorded ego-motion frame for discovery and cross-sensor alignment.
Typical columns:
frame_index int64 # authoritative frame id from payload (payload_simstep)
sim_step int64
sim_time_s float64
unix_timestamp_ns int64 # synthetic wall-clock anchored at sim start
delta_seconds float64
sensor_pos_world_* float64
sensor_rot_world_* float64 # quat xyzw
lin_vel_world_* float64
lin_acc_world_* float64
ang_vel_world_* float64
ang_acc_world_* float64
frame_file string # run-root-relative path to the JSON (use the subfolder path)
Groundtruth
Labels / Labeling System
All labeled outputs in this dataset (sensor data and groundtruth) encode semantic information using integer label IDs (e.g., label_id).
- Any object/point/pixel that carries a label stores a uint label ID.
- To interpret these IDs, use the dataset’s label registry (mapping id -> tag/text/color).
Label registry file: labels/labels.json
Optional lookup texture (if present): labels/labels_lut.png
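A minimal sketch of loading the registry and decoding an id. The JSON structure assumed below (a list of entries with id, name, and color fields) is only an illustration; inspect labels/labels.json in your copy for the authoritative layout.

```python
# Minimal sketch: build an id -> name lookup from the label registry.
# NOTE: the registry schema used here is an assumption for illustration only.
import json

with open("labels/labels.json") as f:
    registry = json.load(f)

id_to_name = {entry["id"]: entry["name"] for entry in registry}  # assumed fields
print(id_to_name.get(46, "unknown"))  # decode an example label_id
```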
Static Groundtruth
Availability: Static groundtruth is included in the Full dataset only.
It is not included in the public Preview dataset.
Static scene objects (buildings, poles, guardrails, etc.) are stored once per run, in world coordinates.
Files:
- static/static_gt_world.json
- static/static_gt_world.parquet
- Schema: static/static_gt_parquet_schema.json
Record fields (parquet)
The parquet schema (static/static_gt_parquet_schema.json) defines the following columns:
- label_id (int32): semantic label id
- instance (int32): object instance id
- center_m_x, center_m_y, center_m_z (float64): box center in meters (world frame)
- half_extents_m_x, half_extents_m_y, half_extents_m_z (float64): half-size in meters
- rotation_quat_x, rotation_quat_y, rotation_quat_z, rotation_quat_w (float64): box orientation as a quaternion
- sphere_radius_m (float64): optional bounding sphere radius (meters)
JSON example
[
{
"label_id": 46,
"instance": -1,
"center_m": [x, y, z], // meters, world frame
"half_extents_m": [hx, hy, hz], // meters
"rotation_quat": [qx, qy, qz, qw],
"sphere_radius_m": r // meters, optional
}
]
label_id is a dataset-wide semantic identifier. Decode it using the dataset’s label registry. Any field that stores a label ID (across sensors and groundtruth) should be interpreted through that same registry.
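As an example of consuming the parquet ground truth, the sketch below (assuming pandas, numpy, and scipy are installed) reconstructs the eight world-frame corners of each oriented box from the columns listed above.

```python
# Minimal sketch: load static ground-truth boxes and compute oriented-box corners.
from itertools import product

import numpy as np
import pandas as pd
from scipy.spatial.transform import Rotation

boxes = pd.read_parquet("static/static_gt_world.parquet")

def box_corners(row):
    """Return the [8, 3] world-frame corners of one oriented box."""
    center = np.array([row.center_m_x, row.center_m_y, row.center_m_z])
    half = np.array([row.half_extents_m_x, row.half_extents_m_y, row.half_extents_m_z])
    rot = Rotation.from_quat([row.rotation_quat_x, row.rotation_quat_y,
                              row.rotation_quat_z, row.rotation_quat_w])  # xyzw order
    signs = np.array(list(product([-1.0, 1.0], repeat=3)))  # 8 corner sign patterns
    return rot.apply(signs * half) + center

first = boxes.iloc[0]
print("label_id:", first.label_id)
print(box_corners(first))
```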
Dynamic Groundtruth (work in progress)
Availability: Dynamic groundtruth (when finalized) is distributed via the Full dataset.
It is not included in the public Preview dataset.
Dynamic groundtruth (e.g., moving actors, tracks, per-frame 3D boxes/poses) is currently under active development.
Planned outputs (subject to change):
- Per-frame dynamic object state (pose/velocity) and/or 3D oriented boxes
- Stable identifiers for linking objects across time
- Schema files alongside the produced artifacts
Once finalized, this section will document:
- File locations under dynamic/ (or equivalent)
- Join keys (e.g., frame_id / timestamps + instance or track id)
- Exact column names and coordinate conventions
Ethical Considerations
Although this dataset is synthetic, you should still consider:
- potential misuse for surveillance or privacy‑sensitive applications,
- limitations of simulated data when transferring to the real world,
- bias in scenario design (e.g., under‑representation of rare or critical events).