---
pretty_name: ArXiv Deep Learning Python Research Code
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: repo
    dtype: string
  - name: file
    dtype: string
  - name: code
    dtype: string
  - name: file_length
    dtype: int64
  - name: avg_line_length
    dtype: float64
  - name: max_line_length
    dtype: int64
  - name: extension_type
    dtype: string
  splits:
  - name: train
    num_bytes: 3590067176.125193
    num_examples: 391496
  download_size: 1490724325
  dataset_size: 3590067176.125193
language:
  - en
license: other
size_categories:
  - 100K<n<1M
tags:
  - code
  - deep-learning
  - arxiv
  - research
  - python
task_categories:
  - text-generation
---

# ArXiv Deep Learning Python Research Code

A curated corpus of Python source code files extracted from GitHub repositories referenced in ArXiv papers. It contains 391,496 files (1.49 GB download) filtered to those that reference common deep learning frameworks, and is designed for training and evaluating Code LLMs on research-grade code.

## Dataset Summary

| Statistic | Value |
|-----------|-------|
| Total files | 391,496 |
| Download size | 1.49 GB |
| Source repos | 34,099 |
| Time span | ArXiv inception through July 2023 |

## Dataset Structure

| Field | Type | Description |
|-------|------|-------------|
| `repo` | string | GitHub repository name |
| `file` | string | File path in the repository |
| `code` | string | File contents |
| `file_length` | int64 | Number of characters in the file |
| `avg_line_length` | float64 | Average line length |
| `max_line_length` | int64 | Maximum line length |
| `extension_type` | string | File extension |

## Usage

```python
from datasets import load_dataset

# full dataset
ds = load_dataset("AlgorithmicResearchGroup/arxiv_deep_learning_python_research_code", split="train")

# streaming
ds = load_dataset("AlgorithmicResearchGroup/arxiv_deep_learning_python_research_code", streaming=True, split="train")
for sample in ds:
    print(sample["repo"], sample["file"])
    break
```

## Data Collection

Links to 34,099 active GitHub repositories were extracted from [ArXiv](https://arxiv.org/) papers published from the archive's inception through July 21st, 2023; the cloned repositories total 773 GB compressed.

These repositories were filtered to files mentioning any of the following frameworks: `torch`, `jax`, `flax`, `stax`, `haiku`, `keras`, `fastai`, `xgboost`, `caffe`, `mxnet`. This yielded 1.4 million files, which were then filtered down to the final 391,496.
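The framework-mention filter can be sketched as a simple word-boundary match over the file contents (the exact matching logic used to build the dataset is not specified; this is an assumed approximation):

```python
import re

# Frameworks listed in the data collection description.
FRAMEWORKS = ["torch", "jax", "flax", "stax", "haiku",
              "keras", "fastai", "xgboost", "caffe", "mxnet"]

# Word boundaries so e.g. "stax" does not match inside "syntax-like" names.
PATTERN = re.compile(r"\b(" + "|".join(FRAMEWORKS) + r")\b")

def mentions_framework(code: str) -> bool:
    """Return True if the source text mentions any target framework."""
    return PATTERN.search(code) is not None
```

Note that a strict word-boundary match would miss compound names such as `pytorch`; the actual filtering criteria may have been looser.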

## Sensitive Information

The dataset may contain emails, IP addresses, and API/SSH keys that were previously published in public GitHub repositories.
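If this is a concern for your use case, files can be flagged before training. A minimal sketch with illustrative regexes (a dedicated secret-scanning tool is more reliable than hand-rolled patterns):

```python
import re

# Illustrative patterns only: real secret scanning should use a
# purpose-built tool; these catch common, obvious cases.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
SSH_KEY_RE = re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----")

def flag_sensitive(code: str) -> bool:
    """Return True if the text matches an email or private-key header."""
    return bool(EMAIL_RE.search(code) or SSH_KEY_RE.search(code))
```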

## Related Resources

- [ArXiv DL Instruct](https://huggingface.co/datasets/AlgorithmicResearchGroup/ArXivDLInstruct) - Instruction-tuning dataset derived from this code
- [Algorithmic Research Group - Open Source](https://algorithmicresearchgroup.com/opensource.html)

## Citation

```bibtex
@misc{arxiv_deep_learning_python_research_code,
    title={ArXiv Deep Learning Python Research Code},
    author={Matthew Kenney},
    year={2023},
    publisher={Hugging Face},
    url={https://huggingface.co/datasets/AlgorithmicResearchGroup/arxiv_deep_learning_python_research_code}
}
```