matthewkenney committed
Commit 244d646 · verified · 1 Parent(s): 235f854

Update dataset card with improved documentation

Files changed (1): README.md +58 -44

README.md CHANGED
@@ -1,4 +1,5 @@
---
configs:
- config_name: default
  data_files:
@@ -26,71 +27,84 @@ dataset_info:
num_examples: 391496
download_size: 1490724325
dataset_size: 3590067176.125193
---
- # Dataset Card for "AlgorithmicResearchGroup/arxiv_python_research_code"

- ## Dataset Description

- https://huggingface.co/datasets/AlgorithmicResearchGroup/arxiv_deep_learning_python_research_code

- ### Dataset Summary

- AlgorithmicResearchGroup/arxiv_deep_learning_python_research_code contains over 1.49B of source code files referenced strictly in ArXiv papers. The dataset serves as a curated dataset for Code LLMs.

- ### How to use it
```python
from datasets import load_dataset

- # full dataset (1.49GB of data)
- ds = load_dataset("ArtifactAI/arxiv_deep_learning_python_research_code", split="train")

- # dataset streaming (will only download the data as needed)
- ds = load_dataset("ArtifactAI/arxiv_deep_learning_python_research_code", streaming=True, split="train")
- for sample in iter(ds): print(sample["code"])
```

- ## Dataset Structure
- ### Data Instances
- Each data instance corresponds to one file. The content of the file is in the `code` feature, and other features (`repo`, `file`, etc.) provide some metadata.
- ### Data Fields
- - `repo` (string): code repository name.
- - `file` (string): file path in the repository.
- - `code` (string): code within the file.
- - `file_length`: (integer): number of characters in the file.
- - `avg_line_length`: (float): the average line-length of the file.
- - `max_line_length`: (integer): the maximum line-length of the file.
- - `extension_type`: (string): file extension.
-
- ### Data Splits

- The dataset has no splits and all data is loaded as train split by default.

- ## Dataset Creation

- ### Source Data
- #### Initial Data Collection and Normalization
- 34,099 active GitHub repository names were extracted from [ArXiv](https://arxiv.org/) papers from its inception through July 21st, 2023 totaling 773G of compressed github repositories.

- These repositories were then filtered, and the code from each file that mentions ["torch", "jax", "flax", "stax", "haiku", "keras", "fastai", "xgboost", "caffe", "mxnet"] was extracted into 1.4 million files.

- #### Who are the source language producers?

- The source (code) language producers are users of GitHub that created unique repository

- ### Personal and Sensitive Information
- The released dataset may contain sensitive information such as emails, IP addresses, and API/ssh keys that have previously been published to public repositories on GitHub.

- ## Additional Information
-
- ### Dataset Curators
- Matthew Kenney, AlgorithmicResearchGroup, matt@algorithmicresearchgroup.com
-
- ### Citation Information
- ```
@misc{arxiv_deep_learning_python_research_code,
- title={arxiv_deep_learning_python_research_code},
author={Matthew Kenney},
- year={2023}
}
- ```
---
+ pretty_name: ArXiv Deep Learning Python Research Code
configs:
- config_name: default
  data_files:

num_examples: 391496
download_size: 1490724325
dataset_size: 3590067176.125193
+ language:
+ - en
+ license: other
+ size_categories:
+ - 100K<n<1M
+ tags:
+ - code
+ - deep-learning
+ - arxiv
+ - research
+ - python
+ task_categories:
+ - text-generation
---
+ # ArXiv Deep Learning Python Research Code

+ A curated corpus of Python source code files extracted from GitHub repositories referenced in ArXiv papers. Contains 391,496 files (1.49 GB) filtered to deep learning frameworks, designed for training and evaluating Code LLMs on research-grade code.

+ ## Dataset Summary

+ | Statistic | Value |
+ |-----------|-------|
+ | Total files | 391,496 |
+ | Total size | 1.49 GB |
+ | Source repos | 34,099 |
+ | Time span | ArXiv inception through July 2023 |
+ ## Dataset Structure
+
+ | Field | Type | Description |
+ |-------|------|-------------|
+ | `repo` | string | GitHub repository name |
+ | `file` | string | File path in the repository |
+ | `code` | string | File contents |
+ | `file_length` | int64 | Number of characters in the file |
+ | `avg_line_length` | float64 | Average line length |
+ | `max_line_length` | int64 | Maximum line length |
+ | `extension_type` | string | File extension |
+
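As a rough illustration of how the per-file statistics relate to the `code` field, the numeric columns can be recomputed from the raw file contents. The exact derivation used by the pipeline is an assumption here (character count for `file_length`, newline-split lines for the line-length stats), not something the card specifies:

```python
def file_stats(code: str) -> dict:
    """Recompute per-file metadata from raw file contents.

    Assumes `file_length` counts characters and that line-length
    statistics are taken over newline-split lines.
    """
    lines = code.split("\n")
    lengths = [len(line) for line in lines]
    return {
        "file_length": len(code),
        "avg_line_length": sum(lengths) / len(lengths),
        "max_line_length": max(lengths),
    }

stats = file_stats("import torch\n\nmodel = torch.nn.Linear(4, 2)\n")
print(stats)  # {'file_length': 44, 'avg_line_length': 10.25, 'max_line_length': 29}
```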
+ ## Usage

```python
from datasets import load_dataset

+ # full dataset
+ ds = load_dataset("AlgorithmicResearchGroup/arxiv_deep_learning_python_research_code", split="train")

+ # streaming
+ ds = load_dataset("AlgorithmicResearchGroup/arxiv_deep_learning_python_research_code", streaming=True, split="train")
+ for sample in ds:
+     print(sample["repo"], sample["file"])
+     break
```
+ ## Data Collection

+ 34,099 active GitHub repository names were extracted from [ArXiv](https://arxiv.org/) papers from its inception through July 21st, 2023, totaling 773 GB of compressed GitHub repositories.

+ These repositories were filtered to files mentioning any of the following frameworks: `torch`, `jax`, `flax`, `stax`, `haiku`, `keras`, `fastai`, `xgboost`, `caffe`, `mxnet`, yielding 1.4 million files, which were further filtered to the final 391,496.
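The mention-based filter described above can be sketched as a simple substring check. This is a minimal illustration under assumptions (case-insensitive matching over whole file text); the original pipeline is not published in this card:

```python
# Target frameworks listed in the collection description.
FRAMEWORKS = ("torch", "jax", "flax", "stax", "haiku",
              "keras", "fastai", "xgboost", "caffe", "mxnet")

def mentions_framework(code: str) -> bool:
    """Return True if the file text mentions any target framework."""
    lowered = code.lower()
    return any(name in lowered for name in FRAMEWORKS)

mentions_framework("import torch\n")        # True
mentions_framework("print('hello world')")  # False
```

Note that a plain substring check is deliberately loose: it also matches comments, strings, and words that merely contain a framework name, which is consistent with "files mentioning" rather than "files importing".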
+ ## Sensitive Information

+ The dataset may contain emails, IP addresses, and API/SSH keys that were previously published in public GitHub repositories.
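Consumers who want to screen samples before training could run a simple pattern scan. The patterns below are illustrative assumptions only, not a complete secret detector; dedicated tools such as detect-secrets or gitleaks are far more thorough:

```python
import re

# Illustrative patterns only; real secret scanners use many more rules.
SECRET_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def find_secrets(code: str) -> list:
    """Return (kind, match) pairs for anything resembling a secret."""
    hits = []
    for kind, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(code):
            hits.append((kind, match))
    return hits

print(find_secrets("contact: alice@example.com\nhost = '10.0.0.1'"))
# [('email', 'alice@example.com'), ('ipv4', '10.0.0.1')]
```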
+ ## Related Resources

+ - [ArXiv DL Instruct](https://huggingface.co/datasets/AlgorithmicResearchGroup/ArXivDLInstruct) - Instruction-tuning dataset derived from this code
+ - [Algorithmic Research Group - Open Source](https://algorithmicresearchgroup.com/opensource.html)

+ ## Citation

+ ```bibtex
@misc{arxiv_deep_learning_python_research_code,
+ title={ArXiv Deep Learning Python Research Code},
author={Matthew Kenney},
+ year={2023},
+ publisher={Hugging Face},
+ url={https://huggingface.co/datasets/AlgorithmicResearchGroup/arxiv_deep_learning_python_research_code}
}
+ ```