# Agents Learn Their Runtime -- Task Definitions

Paper: *Agents Learn Their Runtime: Interpreter Persistence as Training-Time Semantics*

This dataset contains 200 procedurally generated Opaque Knapsack task instances. These are the shared evaluation problems solved by all models in the paper's benchmark matrix.
## Key Terms

- **Persistent runtime:** the Python interpreter keeps all variables alive between agent steps. An agent can write `total_weight += w` and it persists to the next turn.
- **Stateless runtime:** the interpreter resets after every step. All variables are lost; the agent must reconstruct state from the conversation history each turn.
- **Easy (100 tasks):** 25--40 items (mean 34), budget covers ~82% of items, optimal solution uses ~4 items.
- **Hard (100 tasks):** 80--120 items (mean 102), budget covers ~78% of items, optimal solution uses ~12 items. Substantially more items to search through and a larger optimal set to assemble.
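The persistent vs. stateless distinction can be sketched with a toy step executor (an illustrative sketch, not the paper's actual harness): persistent mode reuses one namespace across steps, stateless mode starts each step from an empty one.

```python
# Toy sketch of the two runtime modes (hypothetical; not the paper's harness).
# Persistent: one namespace dict is reused, so variables survive across steps.
# Stateless: each step executes in a fresh namespace, so prior variables are gone.
def run_steps(steps, persistent):
    namespace = {}
    observed = []
    for code in steps:
        if not persistent:
            namespace = {}  # reset: all earlier variables are lost
        try:
            exec(code, namespace)
            observed.append(namespace.get("total_weight"))
        except NameError:
            observed.append(None)  # step referenced a variable that no longer exists
    return observed

steps = ["total_weight = 0", "total_weight += 5"]
print(run_steps(steps, persistent=True))   # [0, 5]
print(run_steps(steps, persistent=False))  # [0, None] -- the increment fails
```

In stateless mode the second step raises `NameError`, which is exactly why a stateless agent must carry running totals in its conversation history instead of in interpreter variables.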
## The Opaque Knapsack Task

A partially observable constrained optimization problem. An agent is given a set of items identified only by opaque IDs and must:

- Call `inspect(item_id)` to reveal an item's weight, value, and class (each call costs one unit of a limited inspection budget)
- Call `take_item(item_id)` to select items, maximizing total value without exceeding a weight capacity
- Respect class-validity constraints (only certain item classes are allowed)

Item properties are hidden behind random IDs, so the task is unsolvable by memorization. The agent must track running weight totals, budget usage, and candidate rankings across multiple steps.
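The tool interface described above can be mimicked with a minimal environment class. This is a hypothetical sketch for intuition; the paper's actual harness interface may differ.

```python
# Hypothetical minimal environment mirroring the described task API.
class OpaqueKnapsack:
    def __init__(self, items, capacity, budget, valid_classes):
        self._items = items            # {item_id: {"weight": .., "value": .., "class": ..}}
        self.capacity = capacity       # weight capacity
        self.budget = budget           # remaining inspect() calls
        self.valid_classes = set(valid_classes)
        self.taken = []

    def inspect(self, item_id):
        """Reveal an item's hidden properties; costs one unit of budget."""
        if self.budget <= 0:
            raise RuntimeError("inspection budget exhausted")
        self.budget -= 1
        return self._items[item_id]

    def take_item(self, item_id):
        """Select an item, enforcing class validity and the weight capacity."""
        item = self._items[item_id]
        if item["class"] not in self.valid_classes:
            raise ValueError(f"class {item['class']!r} is not allowed")
        new_weight = sum(self._items[i]["weight"] for i in self.taken) + item["weight"]
        if new_weight > self.capacity:
            raise ValueError("selection would exceed capacity")
        self.taken.append(item_id)

    def total_value(self):
        return sum(self._items[i]["value"] for i in self.taken)
```

A policy interacting with this sketch must spend budget via `inspect()` before it knows anything about an ID, which is what makes the task partially observable.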
## Related Datasets
| Dataset | What it contains |
|---|---|
| This dataset | Task definitions (the problems) |
| Training traces | 2,000 teacher solutions by Gemini 3 Flash (1K persistent + 1K stateless), used to fine-tune LoRA adapters |
| Benchmark traces | 1,200 inference traces from Qwen3-8B (base + 2 LoRA adapters) solving these exact tasks across 12 conditions |
## Structure

```
tasks/
├── easy/knapsack/
│   └── knapsack-0000000000.json ... knapsack-0000000099.json
└── hard/knapsack/
    └── knapsack-0000000000.json ... knapsack-0000000099.json
```
## File Schema

Each JSON file fully specifies a single task instance:

```json
{
  "task_id": "unique identifier",
  "family": "knapsack",
  "seed": 12345,
  "difficulty": {
    "n_items": 36,
    "capacity": 34,
    "budget_coverage": 0.58,
    "p_valid": 0.2,
    "optimal_set_size": 3,
    "max_item_dominance": 0.38
  },
  "public": { "capacity": 34, "budget": 21, "valid_classes": ["A", "C"] },
  "private": { "items": { "item_abc123": {"weight": 5, "value": 12, "class": "A"} } },
  "reference": { "optimal_value": 47, "optimal_items": ["item_abc123", "..."] },
  "nl": { "title": "...", "instructions": "...", "output_format": "..." }
}
```
| Field | Description |
|---|---|
| `public` | Parameters revealed to the agent (capacity, budget, valid classes) |
| `private` | Ground-truth item properties, hidden behind `inspect()` at runtime |
| `reference` | Optimal solution for scoring |
| `nl` | Natural-language prompt given to the agent |
| `difficulty` | Generation parameters controlling problem hardness |
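A loaded instance can be sanity-checked by testing the `reference` solution against the `public` constraints. The field names below follow the schema above; the consistency checks themselves are this sketch's assumptions, not part of the released tooling.

```python
import json

# Sketch: check that a task's reference solution is feasible under its public
# constraints (assumed invariants; field names taken from the schema above).
def check_task(task):
    items = task["private"]["items"]
    chosen = task["reference"]["optimal_items"]
    weight = sum(items[i]["weight"] for i in chosen)
    value = sum(items[i]["value"] for i in chosen)
    assert weight <= task["public"]["capacity"], "reference set exceeds capacity"
    assert all(items[i]["class"] in task["public"]["valid_classes"] for i in chosen), \
        "reference set uses a disallowed class"
    assert value == task["reference"]["optimal_value"], "optimal_value mismatch"

def check_file(path):
    with open(path) as f:
        check_task(json.load(f))
```

For example, `check_file("tasks/easy/knapsack/knapsack-0000000000.json")` should return silently if that instance is internally consistent.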
## Reproduction

Tasks are generated via `make tasks` in the source repo, which calls `pythonformer.cli` with config files from `pythonformer/task_configs/`.
## License

Apache License 2.0
## Citation

```bibtex
@article{may2026agents,
  title={Agents Learn Their Runtime: Interpreter Persistence as Training-Time Semantics},
  author={May, Victor and Salgarkar, Aaditya and Wang, Yishan and Misra, Diganta and Nguyen, Huu},
  journal={arXiv preprint arXiv:2603.01209},
  year={2026}
}
```