Columns: number (int64) | title (string) | body (string) | state (OPEN/CLOSED) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s], nullable) | url (string) | author (string) | comments_count (int64) | labels (list of strings)
5,738
|
load_dataset("text","dataset.txt") loads the wrong dataset!
|
### Describe the bug
I am trying to load my own custom text dataset using the `load_dataset` function. My dataset is a collection of ordered text, along the lines of Shakespeare plays. However, after I load the dataset and inspect it, the dataset is a table with a bunch of latitude and longitude values! What in the world?
### Steps to reproduce the bug
my_dataset = load_dataset("text","TextFile.txt")
my_dataset
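For reference, the second positional argument of `load_dataset` is a configuration name rather than a data file; a minimal sketch of the presumably intended call, passing the file via `data_files`:
```python
# Sketch of the presumably intended usage: local text files are passed
# via data_files, not as the (configuration name) second positional argument.
from datasets import load_dataset

my_dataset = load_dataset("text", data_files="TextFile.txt")
print(my_dataset["train"][0])
```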
### Expected behavior
I expected the dataset to contain the actual data from the text document that I used.
### Environment info
Google Colab
|
CLOSED
| 2023-04-12T01:07:46
| 2023-04-19T12:08:27
| 2023-04-19T12:08:27
|
https://github.com/huggingface/datasets/issues/5738
|
Tylersuard
| 1
|
[] |
5,737
|
ClassLabel Error
|
### Describe the bug
I am still getting the error `__call__() takes 1 positional argument but 2 were given`, even after ensuring that the value being passed to the label object is a single value and that the `ClassLabel` object has been created with the correct number of label classes.
### Steps to reproduce the bug
from datasets import ClassLabel, Dataset
1. Create the ClassLabel object with 3 label values and their corresponding names
label_test = ClassLabel(num_classes=3, names=["label_1", "label_2", "label_3"])
2. Define a dictionary with text and label fields
data = {
'text': ['text_1', 'text_2', 'text_3'],
'label': [1, 2, 3],
}
3. Create a Hugging Face dataset from the dictionary
dataset = Dataset.from_dict(data)
print(dataset.features)
4. Map the label values to their corresponding label names using the label object
dataset = dataset.map(lambda example: {'text': example['text'], 'label': label_test(example['label'])})
5. Print the resulting dataset
print(dataset)
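For reference, calling a `ClassLabel` instance with a value is not how label conversion is usually done; a minimal sketch of two common approaches (assuming the intended integer labels are 0-indexed, i.e. 0..2, since `names` has three entries):
```python
# Sketch, not the reporter's code: two common ways to get a
# ClassLabel-typed column (assumes 0-indexed labels).
from datasets import ClassLabel, Dataset

label_test = ClassLabel(num_classes=3, names=["label_1", "label_2", "label_3"])
dataset = Dataset.from_dict({"text": ["text_1", "text_2", "text_3"], "label": [0, 1, 2]})

# Option 1: cast the integer column to the ClassLabel feature
dataset = dataset.cast_column("label", label_test)
print(dataset.features)  # 'label' is now ClassLabel(...)

# Option 2: map integer ids to their string names
with_names = dataset.map(lambda example: {"label_name": label_test.int2str(example["label"])})
print(with_names[0])
```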
### Expected behavior
I expect the label column to have type `ClassLabel` instead of `int`.
### Environment info
python 3.9
google colab
|
CLOSED
| 2023-04-11T17:14:13
| 2023-04-13T16:49:57
| 2023-04-13T16:49:57
|
https://github.com/huggingface/datasets/issues/5737
|
mrcaelumn
| 2
|
[] |
5,736
|
FORCE_REDOWNLOAD raises "Directory not empty" exception on second run
|
### Describe the bug
Running `load_dataset(..., download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)` twice raises a `Directory not empty` exception on the second run.
### Steps to reproduce the bug
I cannot test this on datasets v2.11.0 due to #5711, but this happens in v2.10.1.
1. Set up a script `my_dataset.py` to generate and load an offline dataset.
2. Load it with
```python
ds = datasets.load_dataset(path="/path/to/my_dataset.py",
                           name='toy',
                           data_dir="/path/to/my_dataset.py",
                           cache_dir=cache_dir,
                           download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD,
                           )
```
It loads fine
```
Dataset my_dataset downloaded and prepared to /path/to/cache/toy-..e05e/1.0.0/...5b4c. Subsequent calls will reuse this data.
```
3. Try to load it again with the same snippet and the splits are generated, but at the end of the loading process it raises the error
```
2023-04-11 12:10:19,965: DEBUG: open file: /path/to/cache/toy-..e05e/1.0.0/...5b4c.incomplete/dataset_info.json
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "/path/to/conda/environment/lib/python3.10/site-packages/datasets/load.py", line 1782, in load_dataset
builder_instance.download_and_prepare(
File "/path/to/conda/environment/lib/python3.10/site-packages/datasets/builder.py", line 852, in download_and_prepare
with incomplete_dir(self._output_dir) as tmp_output_dir:
File "/path/to/conda/environment/lib/python3.10/contextlib.py", line 142, in __exit__
next(self.gen)
File "/path/to/conda/environment/lib/python3.10/site-packages/datasets/builder.py", line 826, in incomplete_dir
shutil.rmtree(dirname)
File "/path/to/conda/environment/lib/python3.10/shutil.py", line 730, in rmtree
onerror(os.rmdir, path, sys.exc_info())
File "/path/to/conda/environment/lib/python3.10/shutil.py", line 728, in rmtree
os.rmdir(path)
OSError: [Errno 39] Directory not empty: '/path/to/cache/toy-..e05e/1.0.0/...5b4c'
```
### Expected behavior
Regenerate the dataset from scratch and reload it.
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-4.18.0-483.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.8
- PyArrow version: 11.0.0
- Pandas version: 1.5.2
|
OPEN
| 2023-04-11T11:29:15
| 2023-11-30T07:16:58
| null |
https://github.com/huggingface/datasets/issues/5736
|
rcasero
| 3
|
[] |
5,734
|
Remove temporary pin of fsspec
|
Once the root cause is found and fixed, remove the temporary pin introduced by:
- #5731
|
CLOSED
| 2023-04-11T09:04:17
| 2023-04-11T11:04:52
| 2023-04-11T11:04:52
|
https://github.com/huggingface/datasets/issues/5734
|
albertvillanova
| 0
|
[
"bug"
] |
5,732
|
Enwik8 should support the standard split
|
### Feature request
The HuggingFace Datasets library currently supports two BuilderConfigs for Enwik8. One config yields individual lines as examples, while the other config yields the entire dataset as a single example. Both support only a monolithic split: it is all grouped as "train".
The HuggingFace Datasets library should include a BuilderConfig for Enwik8 with train, validation, and test sets derived from the first 90 million bytes, next 5 million bytes, and last 5 million bytes, respectively. This Enwik8 split is standard practice in LM papers, as elaborated and motivated below.
### Motivation
Enwik8 is commonly split into 90M, 5M, 5M consecutive bytes. This is done in the Transformer-XL [codebase](https://github.com/kimiyoung/transformer-xl/blob/44781ed21dbaec88b280f74d9ae2877f52b492a5/getdata.sh#L34), and is additionally mentioned in the Sparse Transformers [paper](https://arxiv.org/abs/1904.10509) and the Compressive Transformers [paper](https://arxiv.org/abs/1911.05507). This split is pretty much universal among language modeling papers.
One may obtain the splits by manual wrangling, using the data yielded by the ```enwik8-raw``` BuilderConfig. However, this undermines the seamless functionality of the library: one must slice the single raw example, extract it into three tensors, and wrap each in a separate dataset.
This becomes even more of a nuisance if using the current Enwik8 HuggingFace dataset as a TfdsDataSource with [SeqIO](https://github.com/google/seqio), where a pipeline of preprocessors is typically included in a SeqIO Task definition, to be applied immediately after loading the data with TFDS.
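For illustration, a rough sketch of the manual wrangling described above (assuming the `enwik8-raw` config exposes the whole corpus as a single `text` example; character-level slicing is used here for brevity, whereas the canonical split is defined over raw bytes):
```python
# Sketch of the 90M/5M/5M split done by hand (assumptions noted above).
from datasets import load_dataset, Dataset

raw = load_dataset("enwik8", "enwik8-raw", split="train")[0]["text"]
train_part, valid_part, test_part = (
    raw[:90_000_000],
    raw[90_000_000:95_000_000],
    raw[95_000_000:],
)
splits = {
    "train": Dataset.from_dict({"text": [train_part]}),
    "validation": Dataset.from_dict({"text": [valid_part]}),
    "test": Dataset.from_dict({"text": [test_part]}),
}
```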
### Your contribution
Supporting this functionality in HuggingFace Datasets will only require an additional BuilderConfig for Enwik8 and a few additional lines of code. I will submit a PR.
|
CLOSED
| 2023-04-11T08:38:53
| 2023-04-11T09:28:17
| 2023-04-11T09:28:16
|
https://github.com/huggingface/datasets/issues/5732
|
lucaslingle
| 2
|
[
"enhancement"
] |
5,730
|
CI is broken: ValueError: Name (mock) already in the registry and clobber is False
|
CI is broken for `test_py310`.
See: https://github.com/huggingface/datasets/actions/runs/4665326892/jobs/8258580948
```
=========================== short test summary info ============================
ERROR tests/test_builder.py::test_builder_with_filesystem_download_and_prepare - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_builder.py::test_builder_with_filesystem_download_and_prepare_reload - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_dataset_dict.py::test_dummy_datasetdict_serialize_fs - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_file_utils.py::test_get_from_cache_fsspec - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_filesystem.py::test_is_remote_filesystem - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xexists[tmp_path/file.txt-True] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xexists[tmp_path/file_that_doesnt_exist.txt-False] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xexists[mock://top_level/second_level/date=2019-10-01/a.parquet-True] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xexists[mock://top_level/second_level/date=2019-10-01/file_that_doesnt_exist.parquet-False] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xlistdir[tmp_path-expected_paths0] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xlistdir[mock://-expected_paths1] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xlistdir[mock://top_level-expected_paths2] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xlistdir[mock://top_level/second_level/date=2019-10-01-expected_paths3] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xisdir[tmp_path-True] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xisdir[tmp_path/file.txt-False] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xisdir[mock://-True] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xisdir[mock://top_level-True] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xisdir[mock://dir_that_doesnt_exist-False] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xisfile[tmp_path/file.txt-True] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xisfile[tmp_path/file_that_doesnt_exist.txt-False] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xisfile[mock://-False] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xisfile[mock://top_level/second_level/date=2019-10-01/a.parquet-True] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xgetsize[tmp_path/file.txt-100] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xgetsize[mock://-0] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xgetsize[mock://top_level/second_level/date=2019-10-01/a.parquet-100] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xglob[tmp_path/*.txt-expected_paths0] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xglob[mock://*-expected_paths1] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xglob[mock://top_*-expected_paths2] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xglob[mock://top_level/second_level/date=2019-10-0[1-4]-expected_paths3] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xglob[mock://top_level/second_level/date=2019-10-0[1-4]/*-expected_paths4] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xwalk[tmp_path-expected_outputs0] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xwalk[mock://top_level/second_level-expected_outputs1] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_exists[tmp_path/file.txt-True] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_exists[tmp_path/file_that_doesnt_exist.txt-False] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_exists[mock://top_level/second_level/date=2019-10-01/a.parquet-True] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_exists[mock://top_level/second_level/date=2019-10-01/file_that_doesnt_exist.parquet-False] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[tmp_path-*.txt-expected_paths0] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[mock://-*-expected_paths1] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[mock://-top_*-expected_paths2] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[mock://top_level/second_level-date=2019-10-0[1-4]-expected_paths3] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[mock://top_level/second_level-date=2019-10-0[1-4]/*-expected_paths4] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[tmp_path-*.txt-expected_paths0] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://-date=2019-10-0[1-4]-expected_paths1] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://top_level-date=2019-10-0[1-4]-expected_paths2] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://-date=2019-10-0[1-4]/*-expected_paths3] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://top_level-date=2019-10-0[1-4]/*-expected_paths4] - ValueError: Name (mock) already in the registry and clobber is False
===== 2105 passed, 18 skipped, 38 warnings, 46 errors in 236.22s (0:03:56) =====
```
|
CLOSED
| 2023-04-11T08:29:46
| 2023-04-11T08:47:56
| 2023-04-11T08:47:56
|
https://github.com/huggingface/datasets/issues/5730
|
albertvillanova
| 0
|
[
"bug"
] |
5,728
|
The order of data split names is nondeterministic
|
After this CI error: https://github.com/huggingface/datasets/actions/runs/4639528358/jobs/8210492953?pr=5718
```
FAILED tests/test_data_files.py::test_get_data_files_patterns[data_file_per_split4] - AssertionError: assert ['random', 'train'] == ['train', 'random']
At index 0 diff: 'random' != 'train'
Full diff:
- ['train', 'random']
+ ['random', 'train']
```
I have checked locally and found out that the data split order is nondeterministic.
This is caused by the use of `set` for sharded splits.
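For illustration, a minimal sketch of why relying on `set` ordering is fragile (assumption: the nondeterminism comes from Python's per-process string hash randomization):
```python
# Run this in separate interpreter processes: the iteration order of a
# set of strings is not guaranteed across runs, while sorting is.
splits = {"train", "random"}
print(list(splits))    # order may differ between runs
print(sorted(splits))  # deterministic: ['random', 'train']
```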
|
CLOSED
| 2023-04-11T07:31:25
| 2023-04-26T15:05:13
| 2023-04-26T15:05:13
|
https://github.com/huggingface/datasets/issues/5728
|
albertvillanova
| 0
|
[
"bug"
] |
5,727
|
load_dataset fails with FileNotFound error on Windows
|
### Describe the bug
Although I can import and run the datasets library in a Colab environment, I cannot successfully load any data on my own machine (Windows 10) despite following the install steps:
(1) create conda environment
(2) activate environment
(3) install with: `conda install -c huggingface -c conda-forge datasets`
Then
```
from datasets import load_dataset
# this or any other example from the website fails with the FileNotFoundError
glue = load_dataset("glue", "ax")
```
**Below I have pasted the error omitting the full path**:
```
raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at C:\Users\...\glue\glue.py or any data file in the same directory. Couldn't find 'glue' on the Hugging Face Hub either: FileNotFoundError: [WinError 3] The system cannot find the path specified:
'C:\\Users\\...\\.cache\\huggingface'
```
### Steps to reproduce the bug
On Windows 10
(1) create a minimal conda environment (with just Python)
(2) activate the environment
(3) install datasets with: `conda install -c huggingface -c conda-forge datasets`
(4) import load_dataset and follow the example usage from any dataset card.
### Expected behavior
The expected behavior is to load the file into the Python session running on my machine without error.
### Environment info
```
# Name Version Build Channel
aiohttp 3.8.4 py311ha68e1ae_0 conda-forge
aiosignal 1.3.1 pyhd8ed1ab_0 conda-forge
arrow-cpp 11.0.0 h57928b3_13_cpu conda-forge
async-timeout 4.0.2 pyhd8ed1ab_0 conda-forge
attrs 22.2.0 pyh71513ae_0 conda-forge
aws-c-auth 0.6.26 h1262f0c_1 conda-forge
aws-c-cal 0.5.21 h7cda486_2 conda-forge
aws-c-common 0.8.14 hcfcfb64_0 conda-forge
aws-c-compression 0.2.16 h8a79959_5 conda-forge
aws-c-event-stream 0.2.20 h5f78564_4 conda-forge
aws-c-http 0.7.6 h2545be9_0 conda-forge
aws-c-io 0.13.19 h0d2781e_3 conda-forge
aws-c-mqtt 0.8.6 hd211e0c_12 conda-forge
aws-c-s3 0.2.7 h8113e7b_1 conda-forge
aws-c-sdkutils 0.1.8 h8a79959_0 conda-forge
aws-checksums 0.1.14 h8a79959_5 conda-forge
aws-crt-cpp 0.19.8 he6d3b81_12 conda-forge
aws-sdk-cpp 1.10.57 h64004b3_8 conda-forge
brotlipy 0.7.0 py311ha68e1ae_1005 conda-forge
bzip2 1.0.8 h8ffe710_4 conda-forge
c-ares 1.19.0 h2bbff1b_0
ca-certificates 2023.01.10 haa95532_0
certifi 2022.12.7 pyhd8ed1ab_0 conda-forge
cffi 1.15.1 py311h7d9ee11_3 conda-forge
charset-normalizer 2.1.1 pyhd8ed1ab_0 conda-forge
colorama 0.4.6 pyhd8ed1ab_0 conda-forge
cryptography 40.0.1 py311h28e9c30_0 conda-forge
dataclasses 0.8 pyhc8e2a94_3 conda-forge
datasets 2.11.0 py_0 huggingface
dill 0.3.6 pyhd8ed1ab_1 conda-forge
filelock 3.11.0 pyhd8ed1ab_0 conda-forge
frozenlist 1.3.3 py311ha68e1ae_0 conda-forge
fsspec 2023.4.0 pyh1a96a4e_0 conda-forge
gflags 2.2.2 ha925a31_1004 conda-forge
glog 0.6.0 h4797de2_0 conda-forge
huggingface_hub 0.13.4 py_0 huggingface
idna 3.4 pyhd8ed1ab_0 conda-forge
importlib-metadata 6.3.0 pyha770c72_0 conda-forge
importlib_metadata 6.3.0 hd8ed1ab_0 conda-forge
intel-openmp 2023.0.0 h57928b3_25922 conda-forge
krb5 1.20.1 heb0366b_0 conda-forge
libabseil 20230125.0 cxx17_h63175ca_1 conda-forge
libarrow 11.0.0 h04c43f8_13_cpu conda-forge
libblas 3.9.0 16_win64_mkl conda-forge
libbrotlicommon 1.0.9 hcfcfb64_8 conda-forge
libbrotlidec 1.0.9 hcfcfb64_8 conda-forge
libbrotlienc 1.0.9 hcfcfb64_8 conda-forge
libcblas 3.9.0 16_win64_mkl conda-forge
libcrc32c 1.1.2 h0e60522_0 conda-forge
libcurl 7.88.1 h68f0423_1 conda-forge
libexpat 2.5.0 h63175ca_1 conda-forge
libffi 3.4.2 h8ffe710_5 conda-forge
libgoogle-cloud 2.8.0 hf2ff781_1 conda-forge
libgrpc 1.52.1 h32da247_1 conda-forge
libhwloc 2.9.0 h51c2c0f_0 conda-forge
libiconv 1.17 h8ffe710_0 conda-forge
liblapack 3.9.0 16_win64_mkl conda-forge
libprotobuf 3.21.12 h12be248_0 conda-forge
libsqlite 3.40.0 hcfcfb64_0 conda-forge
libssh2 1.10.0 h9a1e1f7_3 conda-forge
libthrift 0.18.1 h9ce19ad_0 conda-forge
libutf8proc 2.8.0 h82a8f57_0 conda-forge
libxml2 2.10.3 hc3477c8_6 conda-forge
libzlib 1.2.13 hcfcfb64_4 conda-forge
lz4-c 1.9.4 hcfcfb64_0 conda-forge
mkl 2022.1.0 h6a75c08_874 conda-forge
multidict 6.0.4 py311ha68e1ae_0 conda-forge
multiprocess 0.70.14 py311ha68e1ae_3 conda-forge
numpy 1.24.2 py311h0b4df5a_0 conda-forge
openssl 3.1.0 hcfcfb64_0 conda-forge
orc 1.8.3 hada7b9e_0 conda-forge
packaging 23.0 pyhd8ed1ab_0 conda-forge
pandas 2.0.0 py311hf63dbb6_0 conda-forge
parquet-cpp 1.5.1 2 conda-forge
pip 23.0.1 pyhd8ed1ab_0 conda-forge
pthreads-win32 2.9.1 hfa6e2cd_3 conda-forge
pyarrow 11.0.0 py311h6a6099b_13_cpu conda-forge
pycparser 2.21 pyhd8ed1ab_0 conda-forge
pyopenssl 23.1.1 pyhd8ed1ab_0 conda-forge
pysocks 1.7.1 pyh0701188_6 conda-forge
python 3.11.3 h2628c8c_0_cpython conda-forge
python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge
python-tzdata 2023.3 pyhd8ed1ab_0 conda-forge
python-xxhash 3.2.0 py311ha68e1ae_0 conda-forge
python_abi 3.11 3_cp311 conda-forge
pytz 2023.3 pyhd8ed1ab_0 conda-forge
pyyaml 6.0 py311ha68e1ae_5 conda-forge
re2 2023.02.02 h63175ca_0 conda-forge
requests 2.28.2 pyhd8ed1ab_1 conda-forge
setuptools 67.6.1 pyhd8ed1ab_0 conda-forge
six 1.16.0 pyh6c4a22f_0 conda-forge
snappy 1.1.10 hfb803bf_0 conda-forge
tbb 2021.8.0 h91493d7_0 conda-forge
tk 8.6.12 h8ffe710_0 conda-forge
tqdm 4.65.0 pyhd8ed1ab_1 conda-forge
typing-extensions 4.5.0 hd8ed1ab_0 conda-forge
typing_extensions 4.5.0 pyha770c72_0 conda-forge
tzdata 2023c h71feb2d_0 conda-forge
ucrt 10.0.22621.0 h57928b3_0 conda-forge
urllib3 1.26.15 pyhd8ed1ab_0 conda-forge
vc 14.3 hb6edc58_10 conda-forge
vs2015_runtime 14.34.31931 h4c5c07a_10 conda-forge
wheel 0.40.0 pyhd8ed1ab_0 conda-forge
win_inet_pton 1.1.0 pyhd8ed1ab_6 conda-forge
xxhash 0.8.1 hcfcfb64_0 conda-forge
xz 5.2.10 h8cc25b3_1
yaml 0.2.5 h8ffe710_2 conda-forge
yarl 1.8.2 py311ha68e1ae_0 conda-forge
zipp 3.15.0 pyhd8ed1ab_0 conda-forge
zlib 1.2.13 hcfcfb64_4 conda-forge
zstd 1.5.4 hd43e919_0
```
|
CLOSED
| 2023-04-10T23:21:12
| 2023-07-21T14:08:20
| 2023-07-21T14:08:19
|
https://github.com/huggingface/datasets/issues/5727
|
joelkowalewski
| 4
|
[] |
5,726
|
Fallback JSON Dataset loading does not load all values when features specified manually
|
### Describe the bug
The fallback JSON dataset loader located here:
https://github.com/huggingface/datasets/blob/1c4ec00511868bd881e84a6f7e0333648d833b8e/src/datasets/packaged_modules/json/json.py#L130-L153
does not load the values of features correctly when features are specified manually and not all features have a value in the first entry of the dataset. I'm fairly sure this is not the expected behavior.
To fix this you'd have to change this line:
https://github.com/huggingface/datasets/blob/1c4ec00511868bd881e84a6f7e0333648d833b8e/src/datasets/packaged_modules/json/json.py#L140
to pass a schema to pyarrow that has the same structure as the `features` argument passed to the `load_dataset()` method.
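For illustration, a standalone sketch of the idea behind that fix (not the actual `datasets` source): building the Arrow table with an explicit schema keeps columns that are absent from the first record.
```python
# Sketch: with an explicit schema, keys missing from some records become
# nulls instead of being dropped by inference on the first record.
import pyarrow as pa

records = [
    {"instruction": "Do stuff", "output": "Answer stuff"},
    {"instruction": "Do stuff2", "input": "Additional Input2", "output": "Answer stuff2"},
]
schema = pa.schema(
    [("instruction", pa.string()), ("input", pa.string()), ("output", pa.string())]
)
table = pa.Table.from_pylist(records, schema=schema)
print(table.to_pydict())
# {'instruction': ['Do stuff', 'Do stuff2'],
#  'input': [None, 'Additional Input2'],
#  'output': ['Answer stuff', 'Answer stuff2']}
```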
### Steps to reproduce the bug
Consider a dataset JSON like this:
```
[
{
"instruction": "Do stuff",
"output": "Answer stuff"
},
{
"instruction": "Do stuff2",
"input": "Additional Input2",
"output": "Answer stuff2"
}
]
```
Using this code to load the dataset:
```
from datasets import load_dataset, Features, Value
features = {
"instruction": Value("string"),
"input": Value("string"),
"output": Value("string")
}
features = Features(features)
ds = load_dataset("json", data_files="./ds.json", features=features)
for row in ds["train"]:
print(row)
```
we get a dataset that looks like this:
| **Instruction** | **Input** | **Output** |
|-----------------|--------------------|-----------------|
| "Do stuff" | None | "Answer Stuff" |
| "Do stuff2" | None | "Answer Stuff2" |
### Expected behavior
The input column should contain values other than None for dataset entries that have the "input" attribute set:
| **Instruction** | **Input** | **Output** |
|-----------------|--------------------|-----------------|
| "Do stuff" | None | "Answer Stuff" |
| "Do stuff2" | "Additional Input2" | "Answer Stuff2" |
### Environment info
Python 3.10.10
Datasets 2.11.0
Windows 10
|
CLOSED
| 2023-04-10T15:22:14
| 2023-04-21T06:35:28
| 2023-04-21T06:35:28
|
https://github.com/huggingface/datasets/issues/5726
|
myluki2000
| 1
|
[] |
5,725
|
How to limit the number of examples in dataset, for testing?
|
### Describe the bug
I am using this command:
`data = load_dataset("json", data_files=data_path)`
However, I want to add a parameter to limit the number of loaded examples to 10 for development purposes, but I can't find such a parameter.
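For reference, two ways that presumably cover this, using split slicing and `Dataset.select`:
```python
# Sketch, reusing data_path from above: cap the number of examples at 10.
from datasets import load_dataset

# Option 1: slice the split at load time
small = load_dataset("json", data_files=data_path, split="train[:10]")

# Option 2: load everything, then keep only the first 10 rows
data = load_dataset("json", data_files=data_path)
small = data["train"].select(range(10))
```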
### Steps to reproduce the bug
In the description.
### Expected behavior
To be able to limit the number of examples
### Environment info
Nothing special
|
CLOSED
| 2023-04-10T08:41:43
| 2023-04-21T06:16:24
| 2023-04-21T06:16:24
|
https://github.com/huggingface/datasets/issues/5725
|
ndvbd
| 3
|
[] |
5,724
|
Error after shuffling streaming IterableDatasets with downloaded dataset
|
### Describe the bug
I downloaded the C4 dataset and used streaming IterableDatasets to read it. Everything went normally until I used `dataset = dataset.shuffle(seed=42, buffer_size=10_000)` to shuffle the dataset. The shuffled dataset throws the following error when used with `next(iter(dataset))`:
```
File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 937, in __iter__
for key, example in ex_iterable:
File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 627, in __iter__
for x in self.ex_iterable:
File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 138, in __iter__
yield from self.generate_examples_fn(**kwargs_with_shuffled_shards)
File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 763, in wrapper
for key, table in generate_tables_fn(**kwargs):
File "/data/miniconda3/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 101, in _generate_tables
batch = f.read(self.config.chunksize)
File "/data/miniconda3/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 372, in read_with_retries
out = read(*args, **kwargs)
File "/data/miniconda3/lib/python3.9/gzip.py", line 300, in read
return self._buffer.read(size)
File "/data/miniconda3/lib/python3.9/_compression.py", line 68, in readinto
data = self.read(len(byte_view))
File "/data/miniconda3/lib/python3.9/gzip.py", line 487, in read
if not self._read_gzip_header():
File "/data/miniconda3/lib/python3.9/gzip.py", line 435, in _read_gzip_header
raise BadGzipFile('Not a gzipped file (%r)' % magic)
gzip.BadGzipFile: Not a gzipped file (b've')
```
I found that there is no problem using the dataset this way without shuffling. Also, using `dataset = datasets.load_dataset('c4', 'en', split='train', streaming=True)`, which downloads the dataset on the fly instead of loading from local files, does not cause problems even after shuffling.
### Steps to reproduce the bug
1. Download C4 dataset from https://huggingface.co/datasets/allenai/c4
2.
```
import datasets
dataset = datasets.load_dataset('/path/to/your/data/dir', 'en', streaming=True, split='train')
dataset = dataset.shuffle(buffer_size=10_000, seed=42)
next(iter(dataset))
```
### Expected behavior
`next(iter(dataset))` should give me a sample from the dataset
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.4.32-1-tlinux4-0001-x86_64-with-glibc2.28
- Python version: 3.9.16
- Huggingface_hub version: 0.13.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
CLOSED
| 2023-04-09T16:58:44
| 2023-04-20T20:37:30
| 2023-04-20T20:37:30
|
https://github.com/huggingface/datasets/issues/5724
|
szxiangjn
| 1
|
[] |
5,722
|
Distributed Training Error on Customized Dataset
|
Hi guys, recently I tried to use `datasets` to train a dual encoder.
I built my own dataset following the nice [tutorial](https://huggingface.co/docs/datasets/v2.11.0/en/dataset_script).
Here is my code:
```python
class RetrivalDataset(datasets.GeneratorBasedBuilder):
    """CrossEncoder dataset."""

    BUILDER_CONFIGS = [RetrivalConfig(name="DuReader")]
    # DEFAULT_CONFIG_NAME = "DuReader"

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "question": datasets.Value("string"),
                    "documents": Sequence(datasets.Value("string")),
                }
            ),
            supervised_keys=None,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        train_file = self.config.data_dir + self.config.train_file
        valid_file = self.config.data_dir + self.config.valid_file

        logger.info(f"Training on {self.config.train_file}")
        logger.info(f"Evaluating on {self.config.valid_file}")

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"file_path": train_file}
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION, gen_kwargs={"file_path": valid_file}
            ),
        ]

    def _generate_examples(self, file_path):
        with jsonlines.open(file_path, "r") as f:
            for record in f:
                label = record["label"]
                question = record["question"]
                # dual encoder
                all_documents = record["all_documents"]
                positive_paragraph = all_documents.pop(label)
                all_documents = [positive_paragraph] + all_documents

                u_id = "{}_#_{}".format(
                    md5_hash(question + "".join(all_documents)),
                    "".join(random.sample(string.ascii_letters + string.digits, 7)),
                )
                item = {
                    "question": question,
                    "documents": all_documents,
                    "id": u_id,
                }
                yield u_id, item
```
It works well on a single GPU, but I got the following error when using DDP:
```python
Detected mismatch between collectives on ranks. Rank 1 is running collective: CollectiveFingerPrint(OpType=BARRIER), but Rank 0 is running collective: CollectiveFingerPrint(OpType=ALLGATHER_COALESCED)
```
Here is my training script on a two-A100 machine:
```bash
export TORCH_DISTRIBUTED_DEBUG=DETAIL
export TORCH_SHOW_CPP_STACKTRACES=1
export NCCL_DEBUG=INFO
export NCCL_DEBUG_SUBSYS=INIT,COLL,ENV
nohup torchrun --nproc_per_node 2 train.py experiments/de-big.json >logs/de-big.log 2>&1&
```
I am not sure whether the error below is related to my dataset code when using DDP. I also noticed PR #5369, but I don't know when and where I should use the function `split_dataset_by_node`.
@lhoestq I hope you can help me.
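For reference, a minimal sketch of how `split_dataset_by_node` is typically used under `torchrun` (assumptions: rank and world size come from the standard environment variables; the dataset path is hypothetical):
```python
# Sketch: shard the loaded dataset per DDP rank before building the DataLoader.
import os

import datasets
from datasets.distributed import split_dataset_by_node

ds = datasets.load_dataset("path/to/retrieval_dataset.py", split="train")  # hypothetical path
rank = int(os.environ["RANK"])
world_size = int(os.environ["WORLD_SIZE"])
ds = split_dataset_by_node(ds, rank=rank, world_size=world_size)
```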
|
CLOSED
| 2023-04-09T11:04:59
| 2023-07-24T14:50:46
| 2023-07-24T14:50:46
|
https://github.com/huggingface/datasets/issues/5722
|
wlhgtc
| 1
|
[] |
5,721
|
Calling datasets.load_dataset("text" ...) results in a wrong split.
|
### Describe the bug
When creating a text dataset, the training split should have the bulk of the examples by default. Currently, testing does.
### Steps to reproduce the bug
I have a folder with 18K text files in it. Each text file essentially consists of a document or article scraped from the web. Calling the following code:
```
folder_path = "/home/cyril/Downloads/llama_dataset"
data = datasets.load_dataset("text", data_dir=folder_path)
data.save_to_disk("/home/cyril/Downloads/data.hf")
data = datasets.load_from_disk("/home/cyril/Downloads/data.hf")
print(data)
```
Results in the following split:
```
DatasetDict({
train: Dataset({
features: ['text'],
num_rows: 2114
})
test: Dataset({
features: ['text'],
num_rows: 200882
})
validation: Dataset({
features: ['text'],
num_rows: 152
})
})
```
It seems to me that the train/test/validation splits are assigned in the wrong order, since the test split is far larger than the train split.
### Expected behavior
Train split should have the bulk of the training examples.
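For reference, a workaround sketch that pins every file to the train split, on the assumption that the automatic split inference is matching file names against patterns such as `test`/`valid`:
```python
# Sketch: pass an explicit data_files mapping so no files are assigned
# to test/validation by name-based split inference.
import glob

import datasets

folder_path = "/home/cyril/Downloads/llama_dataset"
files = glob.glob(f"{folder_path}/*")  # adjust the pattern to the actual file extension
data = datasets.load_dataset("text", data_files={"train": files})
print(data)
```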
### Environment info
datasets 2.11.0, python 3.10.6
|
OPEN
| 2023-04-08T23:55:12
| 2023-04-08T23:55:12
| null |
https://github.com/huggingface/datasets/issues/5721
|
cyrilzakka
| 0
|
[] |
5,720
|
Streaming IterableDatasets do not work with torch DataLoaders
|
### Describe the bug
When using streaming datasets set up with train/val split using `.skip()` and `.take()`, the following error occurs when iterating over a torch dataloader:
```
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 363, in __iter__
self._iterator = self._get_iterator()
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 314, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 927, in __init__
w.start()
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object '_generate_examples_from_tables_wrapper.<locals>.wrapper'
```
To reproduce, run the code
```
from datasets import load_dataset
from torch.utils.data import DataLoader
data = load_dataset(args.dataset_name, split="train", streaming=True)
train_len = 5000
val_len = 100
train, val = data.take(train_len), data.skip(train_len).take(val_len)
traindata = IterableClipDataset(data, context_length=args.max_len, tokenizer=tokenizer, image_key="url", text_key="text")
traindata = DataLoader(traindata, batch_size=args.batch_size, num_workers=args.num_workers, persistent_workers=True)
```
Where the class IterableClipDataset is a simple wrapper to cast the dataset to a torch iterabledataset, defined via
```
import torch
from torch.utils.data import Dataset, IterableDataset
from torchvision.transforms import Compose, Resize, ToTensor
from transformers import AutoTokenizer
import requests
from PIL import Image
class IterableClipDataset(IterableDataset):
def __init__(self, dataset, context_length: int, image_transform=None, tokenizer=None, image_key="image", text_key="text"):
self.dataset = dataset
self.context_length = context_length
self.image_transform = Compose([Resize((224, 224)), ToTensor()]) if image_transform is None else image_transform
self.tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") if tokenizer is None else tokenizer
self.image_key = image_key
self.text_key = text_key
def read_image(self, url: str):
try: # Try to read the image
image = Image.open(requests.get(url, stream=True).raw)
except:
image = Image.new("RGB", (224, 224), (0, 0, 0))
return image
def process_sample(self, image, text):
if isinstance(image, str):
image = self.read_image(image)
if self.image_transform is not None:
image = self.image_transform(image)
text = self.tokenizer.encode(
text, add_special_tokens=True, max_length=self.context_length, truncation=True, padding="max_length"
)
text = torch.tensor(text, dtype=torch.long)
return image, text
def __iter__(self):
for sample in self.dataset:
image, text = sample[self.image_key], sample[self.text_key]
yield self.process_sample(image, text)
```
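A plausible short-term workaround, assuming the failure comes from pickling the streaming dataset into spawned worker processes, is to keep data loading in the main process:
```python
# Sketch: num_workers=0 avoids pickling the streaming dataset into
# subprocesses (persistent_workers only applies when num_workers > 0).
traindata = DataLoader(traindata, batch_size=args.batch_size, num_workers=0)
```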
### Steps to reproduce the bug
Steps to reproduce
1. Install `datasets`, `torch`, and `PIL` (if you want to reproduce exactly)
2. Run the code above
### Expected behavior
Batched data is produced from the dataloader
### Environment info
```
datasets == 2.9.0
python == 3.9.12
torch == 1.11.0
```
|
OPEN
| 2023-04-08T18:45:48
| 2025-03-19T14:06:47
| null |
https://github.com/huggingface/datasets/issues/5720
|
jlehrer1
| 10
|
[] |
5,719
|
Array2D feature creates a list of list instead of a numpy array
|
### Describe the bug
I'm not sure whether this is expected behavior. When I create a 2D array using `Array2D`, the data has type `list` instead of numpy array. I don't think this should be the expected behavior, especially since I feed a numpy array as input to the data creation function. Why is it converting my array into a list?
Also, if I change the first dimension of the `Array2D` shape to `None`, it returns an array correctly.
### Steps to reproduce the bug
Run this code:
```py
from datasets import Dataset, Features, Array2D
import numpy as np
# you have to change the first dimension of the shape to None to make it return an array
features = Features(dict(seq=Array2D((2,2), 'float32')))
ds = Dataset.from_dict(dict(seq=[np.random.rand(2,2)]), features=features)
a = ds[0]['seq']
print(a)
print(type(a))
```
The following will be printed in stdout:
```
[[0.8127174377441406, 0.3760348856449127], [0.7510159611701965, 0.4322739541530609]]
<class 'list'>
```
### Expected behavior
Each indexed item should be a numpy array. Currently, `Array2D((2, 2), 'float32')` yields a list but `Array2D((None, 2), 'float32')` yields an array.
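For what it's worth, a workaround sketch using the formatting API (assuming the goal is to get numpy arrays back when indexing):
```python
# Sketch: ask the dataset to return numpy-formatted columns.
ds = ds.with_format("numpy")
a = ds[0]["seq"]
print(type(a))  # <class 'numpy.ndarray'>
```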
### Environment info
- `datasets` version: 2.11.0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.9.13
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 1.4.4
|
CLOSED
| 2023-04-07T21:04:08
| 2023-04-20T15:34:41
| 2023-04-20T15:34:41
|
https://github.com/huggingface/datasets/issues/5719
|
offchan42
| 4
|
[] |
5,717
|
Error when saving a dataset of images to disk
|
### Describe the bug
Hello!
I have an issue when I try to save my dataset of images to disk. The error I get is:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1442, in save_to_disk
for job_id, done, content in Dataset._save_to_disk_single(**kwargs):
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1473, in _save_to_disk_single
writer.write_table(pa_table)
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/arrow_writer.py", line 570, in write_table
pa_table = embed_table_storage(pa_table)
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 2268, in embed_table_storage
arrays = [
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 2269, in <listcomp>
embed_array_storage(table[name], feature) if require_storage_embed(feature) else table[name]
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 1817, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 1817, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 2142, in embed_array_storage
return feature.embed_storage(array)
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/features/image.py", line 269, in embed_storage
storage = pa.StructArray.from_arrays([bytes_array, path_array], ["bytes", "path"], mask=bytes_array.is_null())
File "pyarrow/array.pxi", line 2766, in pyarrow.lib.StructArray.from_arrays
File "pyarrow/array.pxi", line 2961, in pyarrow.lib.c_mask_inverted_from_obj
TypeError: Mask must be a pyarrow.Array of type boolean
```
My dataset is around 50K images; might this error be due to a bad image?
Thanks for the help.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("imagefolder", data_dir="/path/to/dataset")
dataset["train"].save_to_disk("./myds", num_shards=40)
```
### Expected behavior
Having my dataset properly saved to disk.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.10
- Huggingface_hub version: 0.13.3
- PyArrow version: 11.0.0
- Pandas version: 2.0.0
|
OPEN
| 2023-04-07T11:59:17
| 2025-07-13T08:27:47
| null |
https://github.com/huggingface/datasets/issues/5717
|
jplu
| 22
|
[] |
5,716
|
Handle empty audio
|
Some audio paths exist, but the files are empty, and an error is raised when reading them. How can I use the `filter` function to skip the empty audio files?
When an audio file is empty, resampling breaks:
`array, sampling_rate = sf.read(f)`
`array = librosa.resample(array, orig_sr=sampling_rate, target_sr=self.sampling_rate)`
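For reference, a sketch of one way to drop empty files before decoding (assuming the raw file paths are kept in a hypothetical `audio_path` column before casting to the `Audio` feature):
```python
# Sketch: filter on file size so decoding/resampling never sees
# zero-byte files ("audio_path" is a hypothetical column name).
import os

ds = ds.filter(lambda example: os.path.getsize(example["audio_path"]) > 0)
```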
|
CLOSED
| 2023-04-07T09:51:40
| 2023-09-27T17:47:08
| 2023-09-27T17:47:08
|
https://github.com/huggingface/datasets/issues/5716
|
zyb8543d
| 2
|
[] |
5,715
|
Return Numpy Array (fixed length) Mode, in __get_item__, Instead of List
|
### Feature request
There is an old, well-known, but easily forgotten problem when using multiprocessing with the PyTorch DataLoader:
RAM or shared-memory usage grows too high when we set `num_workers > 1` and the return type of the dataset or dataloader is a `list` or `dict`.
https://github.com/pytorch/pytorch/issues/13246
With Hugging Face datasets, unfortunately, the default return type is a list, so the problem appears very often if we do not configure anything to avoid it.
However, the issue can be avoided when the returned output has a fixed length.
Therefore, I request a mode that returns fixed-length outputs (e.g. numpy arrays) rather than lists.
The design could look like this when loading datasets:
```python
load_dataset(..., with_return_as_fixed_tensor=True)
```
### Motivation
The general solution for this issue is already in the comments: https://github.com/pytorch/pytorch/issues/13246#issuecomment-905703662
NumPy and Pandas do not seem to have this problem, even though both support string types.
(I'm not sure whether the Sequence feature of Hugging Face datasets can solve this problem as well.)
### Your contribution
I'll read it! Thanks.
|
CLOSED
| 2023-04-06T13:57:48
| 2023-04-20T17:16:26
| 2023-04-20T17:16:26
|
https://github.com/huggingface/datasets/issues/5715
|
jungbaepark
| 1
|
[
"enhancement"
] |
5,713
|
ArrowNotImplementedError when loading dataset from the hub
|
### Describe the bug
Hello,
I have created a dataset by using the image loader. Once the dataset is created I try to download it and I get the error:
```
Traceback (most recent call last):
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1860, in _prepare_split_single
for _, table in generator:
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 69, in _generate_tables
for batch_idx, record_batch in enumerate(
File "pyarrow/_parquet.pyx", line 1323, in iter_batches
File "pyarrow/error.pxi", line 121, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset
builder_instance.download_and_prepare(
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare
self._download_and_prepare(
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 986, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1748, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1893, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
Create the dataset and push it to the hub:
```python
from datasets import load_dataset
dataset = load_dataset("imagefolder", data_dir="/path/to/dataset")
dataset.push_to_hub("org/dataset-name", private=True, max_shard_size="1GB")
```
Then use it:
```python
from datasets import load_dataset
dataset = load_dataset("org/dataset-name")
```
### Expected behavior
To properly download and use the pushed dataset.
Something else to note: I specified shards of at most 1 GB, but in the end, for the train set, a single file of almost 7 GB is pushed.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.10
- Huggingface_hub version: 0.13.3
- PyArrow version: 11.0.0
- Pandas version: 2.0.0
|
CLOSED
| 2023-04-06T10:27:22
| 2023-04-06T13:06:22
| 2023-04-06T13:06:21
|
https://github.com/huggingface/datasets/issues/5713
|
jplu
| 2
|
[] |
5,712
|
load_dataset in v2.11.0 raises "ValueError: seek of closed file" in np.load()
|
### Describe the bug
Hi,
I have some `load_dataset()` code for a custom offline dataset that works with datasets v2.10.1.
```python
ds = datasets.load_dataset(path=dataset_dir,
name=configuration,
data_dir=dataset_dir,
cache_dir=cache_dir,
aux_dir=aux_dir,
# download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD,
num_proc=18)
```
When upgrading datasets to 2.11.0, it fails with error
```
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset
builder_instance.download_and_prepare(
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare
self._download_and_prepare(
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 1651, in _download_and_prepare
super()._download_and_prepare(
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 964, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 682, in _split_generators
self.some_function()
File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 1314, in some_function()
x_df = pd.DataFrame({'cell_type_descriptor': fp['x'].tolist()})
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/numpy/lib/npyio.py", line 248, in __getitem__
bytes = self.zip.open(key)
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 1530, in open
fheader = zef_file.read(sizeFileHeader)
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 744, in read
self._file.seek(self._pos)
ValueError: seek of closed file
```
### Steps to reproduce the bug
Sorry, I cannot share the data or code because they are not mine to share, but the point of failure is a call in `some_function()`
```python
with np.load(filename) as fp:
x_df = pd.DataFrame({'feature': fp['x'].tolist()})
```
I'll try to generate a short snippet that reproduces the error.
### Expected behavior
I would expect that `load_dataset` works on the custom datasets generation script for v2.11.0 the same way it works for 2.10.1, without making `np.load()` give a `ValueError: seek of closed file` error.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-4.18.0-483.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.8
- Huggingface_hub version: 0.12.0
- PyArrow version: 11.0.0
- Pandas version: 1.5.2
- numpy: 1.24.2
- This is an offline dataset that uses `datasets.config.HF_DATASETS_OFFLINE = True` in the generation script.
|
CLOSED
| 2023-04-05T16:47:10
| 2023-04-06T08:32:37
| 2023-04-05T17:17:44
|
https://github.com/huggingface/datasets/issues/5712
|
rcasero
| 2
|
[] |
5,711
|
load_dataset in v2.11.0 raises "ValueError: seek of closed file" in np.load()
|
### Describe the bug
Hi,
I have some `load_dataset()` code for a custom offline dataset that works with datasets v2.10.1.
```python
ds = datasets.load_dataset(path=dataset_dir,
name=configuration,
data_dir=dataset_dir,
cache_dir=cache_dir,
aux_dir=aux_dir,
# download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD,
num_proc=18)
```
When upgrading datasets to 2.11.0, it fails with error
```
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset
builder_instance.download_and_prepare(
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare
self._download_and_prepare(
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 1651, in _download_and_prepare
super()._download_and_prepare(
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 964, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 682, in _split_generators
self.some_function()
File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 1314, in some_function()
x_df = pd.DataFrame({'cell_type_descriptor': fp['x'].tolist()})
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/numpy/lib/npyio.py", line 248, in __getitem__
bytes = self.zip.open(key)
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 1530, in open
fheader = zef_file.read(sizeFileHeader)
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 744, in read
self._file.seek(self._pos)
ValueError: seek of closed file
```
### Steps to reproduce the bug
Sorry, I cannot share the data or code because they are not mine to share, but the point of failure is a call in `some_function()`
```python
with np.load(embedding_filename) as fp:
x_df = pd.DataFrame({'feature': fp['x'].tolist()})
```
I'll try to generate a short snippet that reproduces the error.
### Expected behavior
I would expect that `load_dataset` works on the custom datasets generation script for v2.11.0 the same way it works for 2.10.1, without making `np.load()` give a `ValueError: seek of closed file` error.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-4.18.0-483.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.8
- Huggingface_hub version: 0.12.0
- PyArrow version: 11.0.0
- Pandas version: 1.5.2
- numpy: 1.24.2
- This is an offline dataset that uses `datasets.config.HF_DATASETS_OFFLINE = True` in the generation script.
|
CLOSED
| 2023-04-05T16:46:49
| 2023-04-07T09:16:59
| 2023-04-07T09:16:59
|
https://github.com/huggingface/datasets/issues/5711
|
rcasero
| 2
|
[] |
5,710
|
OSError: Memory mapping file failed: Cannot allocate memory
|
### Describe the bug
Hello, I have a series of datasets, each of 5 GB, 600 datasets in total. Together this makes 3 TB.
When I try to load all 600 datasets into memory, I get the error message above.
Is this expected because I'm hitting the OS limit on memory mappings?
Thank you
```terminal
0_21/cache-e9c42499f65b1881.arrow
load_hf_datasets_from_disk: 82%|████████████████████████████████████████████████████████████████████████████████████████████████████▍ | 494/600 [07:26<01:35, 1.11it/s]
Traceback (most recent call last):
File "example_load_genkalm_dataset.py", line 35, in <module>
multi_ds.post_process(max_node_num=args.max_node_num,max_seq_length=args.max_seq_length,delay=args.delay)
File "/home/geng/GenKaLM/src/dataloader/dataset.py", line 142, in post_process
genkalm_dataset = GenKaLM_Dataset.from_hf_dataset(path_or_name=ds_path, max_seq_length=self.max_seq_length,
File "/home/geng/GenKaLM/src/dataloader/dataset.py", line 47, in from_hf_dataset
hf_ds = load_from_disk(path_or_name)
File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/load.py", line 1848, in load_from_disk
return Dataset.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options)
File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1549, in load_from_disk
arrow_table = concat_tables(
File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/table.py", line 1805, in concat_tables
tables = list(tables)
File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1550, in <genexpr>
table_cls.from_file(Path(dataset_path, data_file["filename"]).as_posix())
File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/table.py", line 1065, in from_file
table = _memory_mapped_arrow_table_from_file(filename)
File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/table.py", line 50, in _memory_mapped_arrow_table_from_file
memory_mapped_stream = pa.memory_map(filename)
File "pyarrow/io.pxi", line 950, in pyarrow.lib.memory_map
File "pyarrow/io.pxi", line 911, in pyarrow.lib.MemoryMappedFile._open
File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 115, in pyarrow.lib.check_status
OSError: Memory mapping file failed: Cannot allocate memory
```
### Steps to reproduce the bug
Sorry, I cannot provide reproducible code, as the data is stored on my server and is too large to share.
### Expected behavior
I expect the 3 TB of data to be fully memory-mapped.
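For context, a sketch of one thing worth checking, on the assumption that the failure comes from the per-process limit on memory mappings rather than from RAM itself (each Arrow shard is memory-mapped, so hundreds of datasets can add up to a very large number of mappings):
```python
# Sketch: inspect the Linux limit on memory-mapped regions per process.
with open("/proc/sys/vm/max_map_count") as f:
    print("vm.max_map_count:", f.read().strip())
# Raising it (as root) is typically done with:
#   sysctl -w vm.max_map_count=1048576
```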
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-4.15.0-204-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.6
- PyArrow version: 11.0.0
- Pandas version: 1.0.1
|
CLOSED
| 2023-04-05T14:11:26
| 2023-04-20T17:16:40
| 2023-04-20T17:16:40
|
https://github.com/huggingface/datasets/issues/5710
|
Saibo-creator
| 1
|
[] |
5,709
|
Manually dataset info made not taken into account
|
### Describe the bug
Hello,
I'm manually building an image dataset with the `from_dict` approach, and I build the features with the `cast_features` method. Once the dataset is created I push it to the Hub, and a default `dataset_infos.json` file seems to be added to the repo automatically at the same time. I then update it manually with all the missing info, but when I download the dataset the info is never updated.
Former `dataset_infos.json` file:
```
{"default": {
"description": "",
"citation": "",
"homepage": "",
"license": "",
"features": {
"image": {
"_type": "Image"
},
"labels": {
"names": [
"Fake",
"Real"
],
"_type": "ClassLabel"
}
},
"splits": {
"validation": {
"name": "validation",
"num_bytes": 901010094.0,
"num_examples": 3200,
"dataset_name": null
},
"train": {
"name": "train",
"num_bytes": 901010094.0,
"num_examples": 3200,
"dataset_name": null
}
},
"download_size": 1802008414,
"dataset_size": 1802020188.0,
"size_in_bytes": 3604028602.0
}}
```
After I update it manually it looks like:
```
{
"bstrai--deepfake-detection":{
"description":"",
"citation":"",
"homepage":"",
"license":"",
"features":{
"image":{
"decode":true,
"id":null,
"_type":"Image"
},
"labels":{
"num_classes":2,
"names":[
"Fake",
"Real"
],
"id":null,
"_type":"ClassLabel"
}
},
"supervised_keys":{
"input":"image",
"output":"labels"
},
"task_templates":[
{
"task":"image-classification",
"image_column":"image",
"label_column":"labels"
}
],
"config_name":null,
"splits":{
"validation":{
"name":"validation",
"num_bytes":36627822,
"num_examples":123,
"dataset_name":"deepfake-detection"
},
"train":{
"name":"train",
"num_bytes":901023694,
"num_examples":3200,
"dataset_name":"deepfake-detection"
}
},
"download_checksums":null,
"download_size":937562209,
"dataset_size":937651516,
"size_in_bytes":1875213725
}
}
```
Is there anything I should do to have the new info in `dataset_infos.json` taken into account? Or is that not possible yet?
Thanks!
### Steps to reproduce the bug
-
### Expected behavior
-
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.10
- Huggingface_hub version: 0.13.3
- PyArrow version: 11.0.0
- Pandas version: 2.0.0
|
CLOSED
| 2023-04-05T11:15:17
| 2023-04-06T08:52:20
| 2023-04-06T08:52:19
|
https://github.com/huggingface/datasets/issues/5709
|
jplu
| 2
|
[] |
5,708
|
Dataset sizes are in MiB instead of MB in dataset cards
|
As @severo reported in an internal discussion (https://github.com/huggingface/moon-landing/issues/5929):
Now we show the dataset size:
- from the dataset card (in the side column)
- from the datasets-server (in the viewer)
But, even if the size is the same, we see a mismatch because the viewer shows MB, while the info from the README generally shows MiB (even if it's written MB -> https://huggingface.co/datasets/blimp/blob/main/README.md?code=true#L1932)
<img width="664" alt="Capture d’écran 2023-04-04 à 10 16 01" src="https://user-images.githubusercontent.com/1676121/229730887-0bd8fa6e-9462-46c6-bd4e-4d2c5784cabb.png">
TODO: Values to be fixed in: `Size of downloaded dataset files:`, `Size of the generated dataset:` and `Total amount of disk used:`
- [x] Bulk edit on the Hub to fix this in all canonical datasets
- [x] Bulk PR on the Hub to fix ancient canonical datasets that were moved to organizations
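For reference, the mismatch is purely a unit-base difference (the byte count below is just an illustrative value):
```python
# 1 MB = 10**6 bytes (what the viewer shows), 1 MiB = 2**20 bytes (what the README values match)
size_in_bytes = 1_802_008_414  # illustrative value
print(f"{size_in_bytes / 10**6:.2f} MB")   # 1802.01 MB
print(f"{size_in_bytes / 2**20:.2f} MiB")  # 1718.53 MiB
```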
|
CLOSED
| 2023-04-05T06:36:03
| 2023-12-21T10:20:28
| 2023-12-21T10:20:27
|
https://github.com/huggingface/datasets/issues/5708
|
albertvillanova
| 12
|
[
"bug",
"dataset-viewer"
] |
5,706
|
Support categorical data types for Parquet
|
### Feature request
Huggingface datasets does not seem to support categorical / dictionary data types for Parquet as of now. There seems to be a `TODO` in the code for this feature but no implementation yet. Below you can find sample code to reproduce the error that is currently thrown when attempting to read a Parquet file with categorical columns:
```python
import pandas as pd
import pyarrow.parquet as pq
from datasets import load_dataset
# Create categorical sample DataFrame
df = pd.DataFrame({'type': ['foo', 'bar']}).astype('category')
df.to_parquet('data.parquet')
# Read back as pyarrow table
table = pq.read_table('data.parquet')
print(table.schema)
# type: dictionary<values=string, indices=int32, ordered=0>
# Load with huggingface datasets
load_dataset('parquet', data_files='data.parquet')
```
Error:
```
Traceback (most recent call last):
File ".venv/lib/python3.10/site-packages/datasets/builder.py", line 1875, in _prepare_split_single
writer.write_table(table)
File ".venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 566, in write_table
self._build_writer(inferred_schema=pa_table.schema)
File ".venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 379, in _build_writer
inferred_features = Features.from_arrow_schema(inferred_schema)
File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1622, in from_arrow_schema
obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}
File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1622, in <dictcomp>
obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}
File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1361, in generate_from_arrow_type
raise NotImplementedError # TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_table
NotImplementedError
```
### Motivation
Categorical data types, as offered by Pandas and implemented with the `DictionaryType` dtype in `pyarrow`, can significantly reduce dataset size and are a handy way to turn textual features into numerical representations and back. The lack of support in Hugging Face `datasets` greatly reduces compatibility with a common Pandas / Parquet feature.
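In the meantime, a possible workaround sketch (assuming it is acceptable to drop the dictionary encoding before loading):
```python
import pandas as pd
from datasets import load_dataset

# Workaround sketch: materialize categorical columns as plain strings before writing,
# so the Parquet file no longer contains dictionary-encoded columns.
df = pd.DataFrame({"type": ["foo", "bar"]}).astype("category")
df.astype({"type": "str"}).to_parquet("data_plain.parquet")
load_dataset("parquet", data_files="data_plain.parquet")
```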
### Your contribution
I could provide a PR. However, it would be nice to have an initial complexity estimate from one of the core developers first.
|
CLOSED
| 2023-04-04T09:45:35
| 2024-06-07T12:20:43
| 2024-06-07T12:20:43
|
https://github.com/huggingface/datasets/issues/5706
|
kklemon
| 17
|
[
"enhancement"
] |
5,705
|
Getting next item from IterableDataset took forever.
|
### Describe the bug
I have a large dataset, about 500GB. The format of the dataset is parquet.
I then load the dataset and try to get the first item
```python
def get_one_item():
dataset = load_dataset("path/to/datafiles", split="train", cache_dir=".", streaming=True)
dataset = dataset.filter(lambda example: example['text'].startswith('Ar'))
print(next(iter(dataset)))
```
However, this function never finishes. I waited ~10 minutes; the function was still running, so I killed the process. I'm now using `line_profiler` to profile how long it takes to return one item. I'll be patient and wait for as long as it needs.
I suspect the filter operation is the reason why it took so long. Can I get some possible reasons behind this?
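To narrow it down, here is what I plan to time separately (same assumed data files as above):
```python
from datasets import load_dataset

dataset = load_dataset("path/to/datafiles", split="train", cache_dir=".", streaming=True)
# First: time iteration alone, to measure the pure streaming / parquet-reading cost
print(next(iter(dataset)))
# Then: time the filtered version; in streaming mode, filter() has to scan examples one by one
# until the first match, so a rare predicate can take a long time
filtered = dataset.filter(lambda example: example["text"].startswith("Ar"))
print(next(iter(filtered)))
```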
### Steps to reproduce the bug
Unfortunately without my data files, there is no way to reproduce this bug.
### Expected behavior
With `IterableDataset`, I expect the first item to be returned instantly.
### Environment info
- datasets version: 2.11.0
- python: 3.7.12
|
CLOSED
| 2023-04-04T09:16:17
| 2023-04-05T23:35:41
| 2023-04-05T23:35:41
|
https://github.com/huggingface/datasets/issues/5705
|
HongtaoYang
| 2
|
[] |
5,702
|
Is it possible or how to define a `datasets.Sequence` that could potentially be either a dict, a str, or None?
|
### Feature request
Hello! Apologies if my question sounds naive:
I was wondering if it’s possible, or how one would go about defining a 'datasets.Sequence' element in datasets.Features that could potentially be either a dict, a str, or None?
Specifically, I’d like to define a feature for a list that contains 18 elements, each of which has been pre-defined as either a `dict or None` or `str or None` - as demonstrated in the slightly misaligned data provided below:
```json
[
[
{"text":"老妇人","idxes":[0,1,2]},null,{"text":"跪","idxes":[3]},null,null,null,null,{"text":"在那坑里","idxes":[4,5,6,7]},null,null,null,null,null,null,null,null,null,null],
[
{"text":"那些水","idxes":[13,14,15]},null,{"text":"舀","idxes":[11]},null,null,null,null,null,{"text":"在那坑里","idxes":[4,5,6,7]},null,{"text":"出","idxes":[12]},null,null,null,null,null,null,null],
[
{"text":"水","idxes":[38]},
null,
{"text":"舀","idxes":[40]},
"假", // note this is just a standalone string
null,null,null,{"text":"坑里","idxes":[35,36]},null,null,null,null,null,null,null,null,null,null]]
```
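One workaround sketch I have considered, assuming it is acceptable to JSON-serialize each element so the feature becomes a homogeneous sequence of strings:
```python
import json
from datasets import Dataset, Features, Sequence, Value

# Workaround sketch: each heterogeneous element (dict, str, or None) is serialized to a JSON
# string, so the column is a plain Sequence of strings; decode with json.loads on access.
raw = [{"text": "水", "idxes": [38]}, None, "假"]  # shortened example; normally 18 elements
features = Features({"outputs": Sequence(Value("string"))})
ds = Dataset.from_dict(
    {"outputs": [[json.dumps(x, ensure_ascii=False) for x in raw]]},
    features=features,
)
decoded = [json.loads(x) for x in ds[0]["outputs"]]
print(decoded)  # [{'text': '水', 'idxes': [38]}, None, '假']
```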
### Motivation
I'm currently working with a dataset of the following structure and I couldn't find a solution in the [documentation](https://huggingface.co/docs/datasets/v2.11.0/en/package_reference/main_classes#datasets.Features).
```json
{"qid":"3-train-1058","context":"桑桑害怕了。从玉米地里走到田埂上,他遥望着他家那幢草房子里的灯光,知道母亲没有让他回家的意思,很伤感,有点想哭。但没哭,转身朝阿恕家走去。","corefs":[[{"text":"桑桑","idxes":[0,1]},{"text":"他","idxes":[17]}]],"non_corefs":[],"outputs":[[{"text":"他","idxes":[17]},null,{"text":"走","idxes":[11]},null,null,null,null,null,{"text":"从玉米地里","idxes":[6,7,8,9,10]},{"text":"到田埂上","idxes":[12,13,14,15]},null,null,null,null,null,null,null,null],[{"text":"他","idxes":[17]},null,{"text":"走","idxes":[66]},null,null,null,null,null,null,null,{"text":"转身朝阿恕家去","idxes":[60,61,62,63,64,65,67]},null,null,null,null,null,null,null],[{"text":"灯光","idxes":[30,31]},null,null,null,null,null,null,{"text":"草房子里","idxes":[25,26,27,28]},null,null,null,null,null,null,null,null,null,null],[{"text":"他","idxes":[17]},{"text":"他家那幢草房子","idxes":[21,22,23,24,25,26,27]},null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"远"],[{"text":"他","idxes":[17]},{"text":"阿恕家","idxes":[63,64,65]},null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"变近"]]}
```
### Your contribution
I'm going to provide the dataset at https://huggingface.co/datasets/2030NLP/SpaCE2022 .
|
CLOSED
| 2023-04-04T03:20:43
| 2023-04-05T14:15:18
| 2023-04-05T14:15:17
|
https://github.com/huggingface/datasets/issues/5702
|
gitforziio
| 4
|
[
"enhancement"
] |
5,699
|
Issue when wanting to split in memory a cached dataset
|
### Describe the bug
**In the 'train_test_split' method of the Dataset class** (defined datasets/arrow_dataset.py), **if 'self.cache_files' is not empty**, then, **regarding the input parameters 'train_indices_cache_file_name' and 'test_indices_cache_file_name', if they are None**, we modify them to make them not None, to see if we can just provide back / work from cached data. But if we can't provide cached data, we move on with the call to the method, except those two values are not None anymore, which will conflict with the use of the 'keep_in_memory' parameter down the line.
Indeed, at some point we end up calling the 'select' method, **and if 'keep_in_memory' is True**, since the value of this method's parameter 'indices_cache_file_name' is now not None anymore, **an exception is raised, whose message is "Please use either 'keep_in_memory' or 'indices_cache_file_name' but not both.".**
Because of that, it's impossible to perform a train / test split of a cached dataset while requesting that the result not be cached. Which is inconvenient when one is just performing experiments, with no intention of caching the result.
Aside from this being inconvenient, **the code which leads up to that situation seems simply wrong** to me: the input variable should not be modified so as to change the user's intention just to perform a test, if that test can fail and respecting the user's intention is necessary to proceed in that case.
To fix this, I suggest to use other variables / other variable names, in order to host the value(s) needed to perform the test, so as not to change the originally input values needed by the rest of the method's code.
Also, **I don't see why an exception should be raised when the 'select' method is called with both 'keep_in_memory'=True and 'indices_cache_file_name'!=None**: should the use of 'keep_in_memory' not prevail anyway, specifying that the user does not want to perform caching, and so making irrelevant the value of 'indices_cache_file_name'? This is indeed what happens when we look further in the code, in the '\_select_with_indices_mapping' method: when 'keep_in_memory' is True, then the value of indices_cache_file_name does not matter, the data will be written to a stream buffer anyway.
Hence I suggest to remove the raising of exception in those circumstances. Notably, to remove the raising of it in the 'select', '\_select_with_indices_mapping', 'shuffle' and 'map' methods.
### Steps to reproduce the bug
```python
import datasets
def generate_examples():
for i in range(10):
yield {"id": i}
dataset_ = datasets.Dataset.from_generator(
generate_examples,
keep_in_memory=False,
)
dataset_.train_test_split(
test_size=3,
shuffle=False,
keep_in_memory=True,
train_indices_cache_file_name=None,
test_indices_cache_file_name=None,
)
```
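In the meantime, a workaround sketch that avoids the conflict (not the proposed fix; it assumes a simple tail split is acceptable instead of `train_test_split`):
```python
# Workaround sketch: compute the split indices manually and call select() with
# keep_in_memory=True, so train_test_split's cache-file probing is never triggered.
n = len(dataset_)
test_size = 3
train_split = dataset_.select(range(n - test_size), keep_in_memory=True)
test_split = dataset_.select(range(n - test_size, n), keep_in_memory=True)
```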
### Expected behavior
The result of the above code should be a DatasetDict instance.
Instead, we get the following exception stack:
```python
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[3], line 1
----> 1 dataset_.train_test_split(
2 test_size=3,
3 shuffle=False,
4 keep_in_memory=True,
5 train_indices_cache_file_name=None,
6 test_indices_cache_file_name=None,
7 )
File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:528, in transmit_format.<locals>.wrapper(*args, **kwargs)
521 self_format = {
522 "type": self._format_type,
523 "format_kwargs": self._format_kwargs,
524 "columns": self._format_columns,
525 "output_all_columns": self._output_all_columns,
526 }
527 # apply actual function
--> 528 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
529 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
530 # re-apply format to the output
File ~/Work/Developments/datasets/src/datasets/fingerprint.py:511, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
507 validate_fingerprint(kwargs[fingerprint_name])
509 # Call actual function
--> 511 out = func(dataset, *args, **kwargs)
513 # Update fingerprint of in-place transforms + update in-place history of transforms
515 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:4428, in Dataset.train_test_split(self, test_size, train_size, shuffle, stratify_by_column, seed, generator, keep_in_memory, load_from_cache_file, train_indices_cache_file_name, test_indices_cache_file_name, writer_batch_size, train_new_fingerprint, test_new_fingerprint)
4425 test_indices = permutation[:n_test]
4426 train_indices = permutation[n_test : (n_test + n_train)]
-> 4428 train_split = self.select(
4429 indices=train_indices,
4430 keep_in_memory=keep_in_memory,
4431 indices_cache_file_name=train_indices_cache_file_name,
4432 writer_batch_size=writer_batch_size,
4433 new_fingerprint=train_new_fingerprint,
4434 )
4435 test_split = self.select(
4436 indices=test_indices,
4437 keep_in_memory=keep_in_memory,
(...)
4440 new_fingerprint=test_new_fingerprint,
4441 )
4443 return DatasetDict({"train": train_split, "test": test_split})
File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:528, in transmit_format.<locals>.wrapper(*args, **kwargs)
521 self_format = {
522 "type": self._format_type,
523 "format_kwargs": self._format_kwargs,
524 "columns": self._format_columns,
525 "output_all_columns": self._output_all_columns,
526 }
527 # apply actual function
--> 528 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
529 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
530 # re-apply format to the output
File ~/Work/Developments/datasets/src/datasets/fingerprint.py:511, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
507 validate_fingerprint(kwargs[fingerprint_name])
509 # Call actual function
--> 511 out = func(dataset, *args, **kwargs)
513 # Update fingerprint of in-place transforms + update in-place history of transforms
515 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:3679, in Dataset.select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint)
3645 """Create a new dataset with rows selected following the list/array of indices.
3646
3647 Args:
(...)
3676 ```
3677 """
3678 if keep_in_memory and indices_cache_file_name is not None:
-> 3679 raise ValueError("Please use either `keep_in_memory` or `indices_cache_file_name` but not both.")
3681 if len(self.list_indexes()) > 0:
3682 raise DatasetTransformationNotAllowedError(
3683 "Using `.select` on a dataset with attached indexes is not allowed. You can first run `.drop_index() to remove your index and then re-add it."
3684 )
ValueError: Please use either `keep_in_memory` or `indices_cache_file_name` but not both.
```
### Environment info
- `datasets` version: 2.11.1.dev0
- Platform: Linux-5.4.236-1-MANJARO-x86_64-with-glibc2.2.5
- Python version: 3.8.12
- Huggingface_hub version: 0.13.3
- PyArrow version: 11.0.0
- Pandas version: 2.0.0
***
***
EDIT:
Now with a pull request to fix this [here](https://github.com/huggingface/datasets/pull/5700)
|
OPEN
| 2023-04-03T17:00:07
| 2024-05-15T13:12:18
| null |
https://github.com/huggingface/datasets/issues/5699
|
FrancoisNoyez
| 2
|
[] |
5,698
|
Add Qdrant as another search index
|
### Feature request
I'd suggest adding Qdrant (https://qdrant.tech) as another search index available, so users can directly build an index from a dataset. Currently, FAISS and ElasticSearch are only supported: https://huggingface.co/docs/datasets/faiss_es
### Motivation
ElasticSearch is a keyword-based search system, while FAISS is a vector search library. A vector database, such as Qdrant, is a different tool based on similarity (like FAISS) but is not limited to a single machine. This makes a vector database well-suited for bigger datasets and for collaboration, if several people want to access a particular dataset.
### Your contribution
I can provide a PR implementing that functionality on my own.
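As a rough illustration of what the user-facing side could look like today with `qdrant-client` directly (a sketch, not a proposed `datasets` API; it assumes a recent qdrant-client with local mode, and a `dataset` with "embedding"/"text" columns plus a `query_embedding` that already exist):
```python
from qdrant_client import QdrantClient
from qdrant_client.http import models

client = QdrantClient(":memory:")  # local in-memory instance, handy for testing
client.recreate_collection(
    collection_name="my_dataset",
    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
)
client.upsert(
    collection_name="my_dataset",
    points=[
        models.PointStruct(id=i, vector=row["embedding"], payload={"text": row["text"]})
        for i, row in enumerate(dataset)  # `dataset` is assumed to exist
    ],
)
hits = client.search(collection_name="my_dataset", query_vector=query_embedding, limit=5)  # `query_embedding` assumed
```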
|
OPEN
| 2023-04-03T14:25:19
| 2023-04-11T10:28:40
| null |
https://github.com/huggingface/datasets/issues/5698
|
kacperlukawski
| 1
|
[
"enhancement"
] |
5,696
|
Shuffle a sharded iterable dataset without seed can lead to duplicate data
|
As reported in https://github.com/huggingface/datasets/issues/5360
If `seed=None` in `.shuffle()`, shuffled datasets don't use the same shuffling seed across nodes.
Because of that, the list of shards is not shuffled the same way across nodes, and therefore some shards may be assigned to multiple nodes instead of exactly one.
This can happen only when you have a number of shards that is a factor of the number of nodes.
The current workaround is to always set a `seed` in `.shuffle()`
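For example (the repo name is hypothetical):
```python
from datasets import load_dataset

ds = load_dataset("my_org/my_sharded_dataset", split="train", streaming=True)
# Workaround: pass an explicit seed so every node shuffles the list of shards identically
ds = ds.shuffle(seed=42, buffer_size=10_000)
```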
|
CLOSED
| 2023-04-03T09:40:03
| 2023-04-04T14:58:18
| 2023-04-04T14:58:18
|
https://github.com/huggingface/datasets/issues/5696
|
lhoestq
| 0
|
[
"bug"
] |
5,695
|
Loading big dataset raises pyarrow.lib.ArrowNotImplementedError
|
### Describe the bug
Calling `datasets.load_dataset` to load the (publicly available) dataset `theodor1289/wit` fails with `pyarrow.lib.ArrowNotImplementedError`.
### Steps to reproduce the bug
Steps to reproduce this behavior:
1. `!pip install datasets`
2. `!huggingface-cli login`
3. This step will throw the error (it might take a while as the dataset has ~170GB):
```python
from datasets import load_dataset
dataset = load_dataset("theodor1289/wit", "train", use_auth_token=True)
```
Stack trace:
```
(torch-multimodal) bash-4.2$ python test.py
Downloading and preparing dataset None/None to /cluster/work/cotterell/tamariucai/HuggingfaceDatasets/theodor1289___parquet/theodor1289--wit-7a3e984414a86a0f/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...
Downloading data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 491.68it/s]
Extracting data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 16.93it/s]
Traceback (most recent call last):
File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py", line 1860, in _prepare_split_single
for _, table in generator:
File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 69, in _generate_tables
for batch_idx, record_batch in enumerate(
File "pyarrow/_parquet.pyx", line 1323, in iter_batches
File "pyarrow/error.pxi", line 121, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/cluster/work/cotterell/tamariucai/multimodal-mirror/examples/test.py", line 2, in <module>
dataset = load_dataset("theodor1289/wit", "train", use_auth_token=True)
File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset
builder_instance.download_and_prepare(
File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare
self._download_and_prepare(
File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py", line 986, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py", line 1748, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py", line 1893, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
The dataset is loaded in variable `dataset`.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.4
- Huggingface_hub version: 0.13.3
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
CLOSED
| 2023-04-02T14:42:44
| 2024-05-15T12:04:47
| 2023-04-10T08:04:04
|
https://github.com/huggingface/datasets/issues/5695
|
amariucaitheodor
| 7
|
[] |
5,694
|
Dataset configuration
|
Following discussions from https://github.com/huggingface/datasets/pull/5331
We could have something like `config.json` to define the configuration of a dataset.
```json
{
"data_dir": "data"
"data_files": {
"train": "train-[0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*"
}
}
```
we could also support a list for several configs with a 'config_name' field.
The alternative was to use YAML in the README.md.
I think it could also support a `dataset_type` field to specify which dataset builder class to use, and the other parameters would be the builder's parameters. Some parameters exist for all builders like `data_files` and `data_dir`, but some parameters are builder specific like `sep` for csv.
This format would be used in `push_to_hub` to be able to push multiple configs.
cc @huggingface/datasets
EDIT: actually we're going for the YAML approach in README.md
|
OPEN
| 2023-04-01T13:08:05
| 2023-04-04T14:54:37
| null |
https://github.com/huggingface/datasets/issues/5694
|
lhoestq
| 3
|
[
"generic discussion"
] |
5,692
|
pyarrow.lib.ArrowInvalid: Unable to merge: Field <field> has incompatible types
|
### Describe the bug
When loading the dataset [wikianc-en](https://huggingface.co/datasets/cyanic-selkie/wikianc-en) which I created using [this](https://github.com/cyanic-selkie/wikianc) code, I get the following error:
```
Traceback (most recent call last):
File "/home/sven/code/rector/answer-detection/train.py", line 106, in <module>
(dataset, weights) = get_dataset(args.dataset, tokenizer, labels, args.padding)
File "/home/sven/code/rector/answer-detection/dataset.py", line 106, in get_dataset
dataset = load_dataset("cyanic-selkie/wikianc-en")
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/load.py", line 1794, in load_dataset
ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/builder.py", line 1106, in as_dataset
datasets = map_nested(
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 443, in map_nested
mapped = [
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 444, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 346, in _single_map_nested
return function(data_struct)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/builder.py", line 1136, in _build_single_dataset
ds = self._as_dataset(
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/builder.py", line 1207, in _as_dataset
dataset_kwargs = ArrowReader(cache_dir, self.info).read(
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/arrow_reader.py", line 239, in read
return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/arrow_reader.py", line 260, in read_files
pa_table = self._read_files(files, in_memory=in_memory)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/arrow_reader.py", line 203, in _read_files
pa_table = concat_tables(pa_tables) if len(pa_tables) != 1 else pa_tables[0]
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1808, in concat_tables
return ConcatenationTable.from_tables(tables, axis=axis)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1514, in from_tables
return cls.from_blocks(blocks)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1427, in from_blocks
table = cls._concat_blocks(blocks, axis=0)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1373, in _concat_blocks
return pa.concat_tables(pa_tables, promote=True)
File "pyarrow/table.pxi", line 5224, in pyarrow.lib.concat_tables
File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Unable to merge: Field paragraph_anchors has incompatible types: list<: struct<start: uint32 not null, end: uint32 not null, qid: uint32, pageid: uint32, title: string not null> not null> vs list<item: struct<start: uint32, end: uint32, qid: uint32, pageid: uint32, title: string>>
```
This only happens when I load the `train` split, indicating that the size of the dataset is the deciding factor.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("cyanic-selkie/wikianc-en", split="train")
```
### Expected behavior
The dataset should load normally without any errors.
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-6.2.8-arch1-1-x86_64-with-glibc2.37
- Python version: 3.10.10
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
OPEN
| 2023-03-31T18:19:40
| 2024-01-14T07:24:21
| null |
https://github.com/huggingface/datasets/issues/5692
|
cyanic-selkie
| 6
|
[] |
5,690
|
raise AttributeError(f"No {package_name} attribute {name}") AttributeError: No huggingface_hub attribute hf_api
|
### Describe the bug
rta.sh
Traceback (most recent call last):
File "run.py", line 7, in <module>
import datasets
File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/__init__.py", line 37, in <module>
from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/builder.py", line 44, in <module>
from .data_files import DataFilesDict, _sanitize_patterns
File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/data_files.py", line 120, in <module>
dataset_info: huggingface_hub.hf_api.DatasetInfo,
File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/huggingface_hub/__init__.py", line 290, in __getattr__
raise AttributeError(f"No {package_name} attribute {name}")
AttributeError: No huggingface_hub attribute hf_api
### Reproduction
_No response_
### Logs
```shell
Traceback (most recent call last):
File "run.py", line 7, in <module>
import datasets
File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/__init__.py", line 37, in <module>
from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/builder.py", line 44, in <module>
from .data_files import DataFilesDict, _sanitize_patterns
File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/data_files.py", line 120, in <module>
dataset_info: huggingface_hub.hf_api.DatasetInfo,
File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/huggingface_hub/__init__.py", line 290, in __getattr__
raise AttributeError(f"No {package_name} attribute {name}")
AttributeError: No huggingface_hub attribute hf_api
```
### System info
```shell
- huggingface_hub version: 0.13.2
- Platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /home/appuser/.cache/huggingface/token
- Has saved token ?: False
- Configured git credential helpers:
- FastAI: N/A
- Tensorflow: N/A
- Torch: 1.7.1
- Jinja2: N/A
- Graphviz: N/A
- Pydot: N/A
- Pillow: 9.3.0
- hf_transfer: N/A
- ENDPOINT: https://huggingface.co
- HUGGINGFACE_HUB_CACHE: /home/appuser/.cache/huggingface/hub
- HUGGINGFACE_ASSETS_CACHE: /home/appuser/.cache/huggingface/assets
- HF_TOKEN_PATH: /home/appuser/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
```
|
CLOSED
| 2023-03-31T08:22:22
| 2023-07-21T14:21:57
| 2023-07-21T14:21:57
|
https://github.com/huggingface/datasets/issues/5690
|
wccccp
| 5
|
[
"bug"
] |
5,688
|
Wikipedia download_and_prepare for GCS
|
### Describe the bug
I am unable to download the wikipedia dataset onto GCS.
When I run the script provided the memory firstly gets eaten up, then it crashes.
I tried running this on a VM with 128GB RAM and all I got was a two empty files: _data_builder.lock_, _data.incomplete/beam-temp-wikipedia-train-1ab2039acf3611ed87a9893475de0093_
I have troubleshot this for two straight days now, but I am just unable to get the dataset into storage.
### Steps to reproduce the bug
Run this and insert a path:
```
import datasets
builder = datasets.load_dataset_builder(
"wikipedia", language="en", date="20230320", beam_runner="DirectRunner")
builder.download_and_prepare({path}, file_format="parquet")
```
This is where the problem of it eating RAM occurs.
I have also tried several versions of this, based on the docs:
```
import gcsfs
import datasets
storage_options = {"project": "tdt4310", "token": "cloud"}
fs = gcsfs.GCSFileSystem(**storage_options)
output_dir = "gcs://wikipediadata/"
builder = datasets.load_dataset_builder(
"wikipedia", date="20230320", language="en", beam_runner="DirectRunner")
builder.download_and_prepare(
output_dir, storage_options=storage_options, file_format="parquet")
```
The error message that is received here is:
> ValueError: Unable to get filesystem from specified path, please use the correct path or ensure the required dependency is installed, e.g., pip install apache-beam[gcp]. Path specified: gcs://wikipediadata/wikipedia-train [while running 'train/Save to parquet/Write/WriteImpl/InitializeWrite']
I have run `pip install apache-beam[gcp]`
### Expected behavior
The wikipedia data loaded into GCS
Everything worked when testing with a smaller demo dataset found somewhere in the docs
### Environment info
Newest published version of datasets. Python 3.9. Also tested with Python 3.7. 128GB RAM Google Cloud VM instance.
|
CLOSED
| 2023-03-30T23:43:22
| 2024-03-15T15:59:18
| 2024-03-15T15:59:18
|
https://github.com/huggingface/datasets/issues/5688
|
adrianfagerland
| 3
|
[] |
5,687
|
Document to compress data files before uploading
|
In our docs on [Share a dataset to the Hub](https://huggingface.co/docs/datasets/upload_dataset), we tell users to upload their data files directly, like CSV, JSON, JSON-Lines, text,... However, these extensions are not tracked by Git LFS by default, as they are not in the `.gitattributes` file. Therefore, if they are too large, Git will fail to commit/upload them.
I think for those file extensions (.csv, .json, .jsonl, .txt), we should instead recommend **compressing** the data files (using ZIP, for example) before uploading them to the Hub.
- Compressed files are tracked by Git LFS in our default `.gitattributes` file
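For example, the docs could show something like this before the upload step (a sketch using only the standard library; file names are hypothetical):
```python
import zipfile

# Sketch: compress raw CSV/JSON/text files into a .zip before uploading, since .zip is covered
# by the default .gitattributes Git LFS rules while .csv/.json/.txt are not.
with zipfile.ZipFile("train.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.write("train.csv")
```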
What do you think?
CC: @stevhliu
See related issue:
- https://huggingface.co/datasets/tcor0005/langchain-docs-400-chunksize/discussions/1
|
CLOSED
| 2023-03-30T06:41:07
| 2023-04-19T07:25:59
| 2023-04-19T07:25:59
|
https://github.com/huggingface/datasets/issues/5687
|
albertvillanova
| 3
|
[
"documentation"
] |
5,685
|
Broken Image render on the hub website
|
### Describe the bug
Hi :wave:
Not sure if this is the right place to ask, but I am trying to upload a huge number of datasets to the Hub (:partying_face:), and I am facing a little issue with the `image` type.

See this [dataset](https://huggingface.co/datasets/Francesco/cell-towers): basically, for some reason the first image has numerical bytes inside (not sure if that is okay), but the image render feature **doesn't work**.
So the dataset is stored in the following way
```python
builder.download_and_prepare(output_dir=str(output_dir))
ds = builder.as_dataset(split="train")
# [NOTE] no idea how to push it from the builder folder
ds.push_to_hub(repo_id=repo_id)
builder.as_dataset(split="validation").push_to_hub(repo_id=repo_id)
ds = builder.as_dataset(split="test")
ds.push_to_hub(repo_id=repo_id)
```
The builder is this class:
```python
class COCOLikeDatasetBuilder(datasets.GeneratorBasedBuilder):
VERSION = datasets.Version("1.0.0")
def _info(self):
features = datasets.Features(
{
"image_id": datasets.Value("int64"),
"image": datasets.Image(),
"width": datasets.Value("int32"),
"height": datasets.Value("int32"),
"objects": datasets.Sequence(
{
"id": datasets.Value("int64"),
"area": datasets.Value("int64"),
"bbox": datasets.Sequence(
datasets.Value("float32"), length=4
),
"category": datasets.ClassLabel(names=categories),
}
),
}
)
return datasets.DatasetInfo(
description=description,
features=features,
homepage=homepage,
license=license,
citation=citation,
)
def _split_generators(self, dl_manager):
archive = dl_manager.download(url)
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
gen_kwargs={
"annotation_file_path": "train/_annotations.coco.json",
"files": dl_manager.iter_archive(archive),
},
),
datasets.SplitGenerator(
name=datasets.Split.VALIDATION,
gen_kwargs={
"annotation_file_path": "test/_annotations.coco.json",
"files": dl_manager.iter_archive(archive),
},
),
datasets.SplitGenerator(
name=datasets.Split.TEST,
gen_kwargs={
"annotation_file_path": "valid/_annotations.coco.json",
"files": dl_manager.iter_archive(archive),
},
),
]
def _generate_examples(self, annotation_file_path, files):
def process_annot(annot, category_id_to_category):
return {
"id": annot["id"],
"area": annot["area"],
"bbox": annot["bbox"],
"category": category_id_to_category[annot["category_id"]],
}
image_id_to_image = {}
idx = 0
# This loop relies on the ordering of the files in the archive:
# Annotation files come first, then the images.
for path, f in files:
file_name = os.path.basename(path)
if annotation_file_path in path:
annotations = json.load(f)
category_id_to_category = {
category["id"]: category["name"]
for category in annotations["categories"]
}
print(category_id_to_category)
image_id_to_annotations = collections.defaultdict(list)
for annot in annotations["annotations"]:
image_id_to_annotations[annot["image_id"]].append(annot)
image_id_to_image = {
annot["file_name"]: annot for annot in annotations["images"]
}
elif file_name in image_id_to_image:
image = image_id_to_image[file_name]
objects = [
process_annot(annot, category_id_to_category)
for annot in image_id_to_annotations[image["id"]]
]
print(file_name)
yield idx, {
"image_id": image["id"],
"image": {"path": path, "bytes": f.read()},
"width": image["width"],
"height": image["height"],
"objects": objects,
}
idx += 1
```
Basically, I want to add to the Hub every dataset in COCO format that I come across.
Thanks
Fra
### Steps to reproduce the bug
In this case, you can just navigate on the [dataset](https://huggingface.co/datasets/Francesco/cell-towers)
### Expected behavior
I was expecting the image rendering feature to work
### Environment info
Not a lot to share, I am using `datasets` from a fresh venv
|
CLOSED
| 2023-03-29T15:25:30
| 2023-03-30T07:54:25
| 2023-03-30T07:54:25
|
https://github.com/huggingface/datasets/issues/5685
|
FrancescoSaverioZuppichini
| 3
|
[] |
5,682
|
ValueError when passing ignore_verifications
|
When passing `ignore_verifications=True` to `load_dataset`, we get a ValueError:
```
ValueError: 'none' is not a valid VerificationMode
```
|
CLOSED
| 2023-03-29T15:00:30
| 2023-03-29T17:28:58
| 2023-03-29T17:28:58
|
https://github.com/huggingface/datasets/issues/5682
|
albertvillanova
| 0
|
[
"bug"
] |
5,681
|
Add information about patterns search order to the doc about structuring repo
|
Following [this](https://github.com/huggingface/datasets/issues/5650) issue I think we should add a note about the order of patterns that is used to find splits, see [my comment](https://github.com/huggingface/datasets/issues/5650#issuecomment-1488412527). Also we should reference this page in pages about packaged loaders.
I have a déjà vu that it had already been discussed at some point, but I don't remember....
|
CLOSED
| 2023-03-29T11:44:49
| 2023-04-03T18:31:11
| 2023-04-03T18:31:11
|
https://github.com/huggingface/datasets/issues/5681
|
polinaeterna
| 2
|
[
"documentation"
] |
5,679
|
Allow load_dataset to take a working dir for intermediate data
|
### Feature request
As a user, I can set a working dir for intermediate data creation. The processed files will be moved to the cache dir, like
```
load_dataset(…, working_dir=”/temp/dir”, cache_dir=”/cloud_dir”).
```
### Motivation
This will help the use case of using `datasets` with cloud storage as the cache. It will help boost performance.
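For context, the closest workaround today is to prepare locally and then persist the result to cloud storage manually (a sketch; the dataset name and paths are hypothetical, and credentials may need to be passed via `storage_options`):
```python
from datasets import load_dataset

# Without the proposed working_dir argument: build on fast local scratch space first,
# then write the processed dataset to cloud storage via an fsspec path.
ds = load_dataset("my_dataset", cache_dir="/temp/dir")
ds.save_to_disk("s3://my-bucket/processed/my_dataset")
```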
### Your contribution
I can provide a PR to fix this if the proposal seems reasonable.
|
OPEN
| 2023-03-29T07:21:09
| 2023-04-12T22:30:25
| null |
https://github.com/huggingface/datasets/issues/5679
|
lu-wang-dl
| 4
|
[
"enhancement"
] |
5,678
|
Add support to create a Dataset from spark dataframe
|
### Feature request
Add a new API `Dataset.from_spark` to create a Dataset from Spark DataFrame.
### Motivation
Spark is a distributed computing framework that can handle large datasets. By supporting loading Spark DataFrames directly into Hugging Face Datasets, we can take advantage of Spark to process the data in parallel.
By providing a seamless integration between these two frameworks, we make it easier for data scientists and developers to work with both Spark and Hugging Face in the same workflow.
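For context, the current workaround only works when the data fits in the driver's memory (a sketch; `spark_df` is an existing PySpark DataFrame that is assumed here):
```python
from datasets import Dataset

# Current workaround sketch: collect the Spark DataFrame to pandas on the driver and build the
# Dataset from it, which loses Spark's parallelism and breaks down for very large data.
hf_dataset = Dataset.from_pandas(spark_df.toPandas())  # `spark_df` is assumed to already exist
```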
### Your contribution
We can discuss about the ideas and I can help preparing a PR for this feature.
|
CLOSED
| 2023-03-29T04:36:28
| 2024-08-27T14:43:19
| 2023-07-21T14:15:38
|
https://github.com/huggingface/datasets/issues/5678
|
lu-wang-dl
| 5
|
[
"enhancement"
] |
5,677
|
Dataset.map() crashes when any column contains more than 1000 empty dictionaries
|
### Describe the bug
`Dataset.map()` crashes any time any column contains more than `writer_batch_size` (default 1000) empty dictionaries, regardless of whether the column is being operated on. The error does not occur if the dictionaries are non-empty.
### Steps to reproduce the bug
Example:
```
import datasets
def add_one(example):
example["col2"] += 1
return example
n = 1001 # crashes
# n = 999 # works
ds = datasets.Dataset.from_dict({"col1": [{}] * n, "col2": [1] * n})
ds = ds.map(add_one, writer_batch_size=1000)
```
### Expected behavior
Above code should not crash
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-5.4.0-120-generic-x86_64-with-glibc2.10
- Python version: 3.8.15
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
|
CLOSED
| 2023-03-29T00:01:31
| 2023-07-07T14:01:14
| 2023-07-07T14:01:14
|
https://github.com/huggingface/datasets/issues/5677
|
mtoles
| 0
|
[] |
5,675
|
Filter datasets by language code
|
Hi! I use the language search field on https://huggingface.co/datasets
However, some of the datasets tagged by ISO language code are not accessible by this search form.
For example, [myv_ru_2022](https://huggingface.co/datasets/slone/myv_ru_2022) has the `myv` language tag but it is not included in the Languages search form.
I've also noticed the same problem with `mhr` (see https://huggingface.co/datasets/AigizK/mari-russian-parallel-corpora)
|
CLOSED
| 2023-03-27T09:42:28
| 2023-03-30T08:08:15
| 2023-03-30T08:08:15
|
https://github.com/huggingface/datasets/issues/5675
|
named-entity
| 4
|
[] |
5,674
|
Stored XSS
|
x
|
CLOSED
| 2023-03-26T20:55:58
| 2024-04-30T22:56:41
| 2023-03-27T21:01:55
|
https://github.com/huggingface/datasets/issues/5674
|
Fadavvi
| 1
|
[] |
5,672
|
Pushing dataset to hub crash
|
### Describe the bug
Uploading a dataset with `push_to_hub()` fails without error description.
### Steps to reproduce the bug
Hey there,
I've built an image dataset of 100k image + text pairs as described here: https://huggingface.co/docs/datasets/image_dataset#imagefolder
Now I'm trying to push it to the Hub but I'm running into issues. First, I tried doing it via git directly: I added all the files to Git LFS and pushed, but I got hit with an error saying Hugging Face only accepts up to 10k files in a folder.
So I'm now trying with the `push_to_hub()` func as follow:
```python
from datasets import load_dataset
import os
dataset = load_dataset("imagefolder", data_dir="./data", split="train")
dataset.push_to_hub("tzvc/organization-logos", token=os.environ.get('HF_TOKEN'))
```
But again, this produces an error:
```
Resolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 100212/100212 [00:00<00:00, 439108.61it/s]
Downloading and preparing dataset imagefolder/default to /home/contact_theochampion/.cache/huggingface/datasets/imagefolder/default-20567ffc703aa314/0.0.0/37fbb85cc714a338bea574ac6c7d0b5be5aff46c1862c1989b20e0771199e93f...
Downloading data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 100211/100211 [00:00<00:00, 149323.73it/s]
Downloading data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 15947.92it/s]
Extracting data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2245.34it/s]
Dataset imagefolder downloaded and prepared to /home/contact_theochampion/.cache/huggingface/datasets/imagefolder/default-20567ffc703aa314/0.0.0/37fbb85cc714a338bea574ac6c7d0b5be5aff46c1862c1989b20e0771199e93f. Subsequent calls will reuse this data.
Resuming upload of the dataset shards.
Pushing dataset shards to the dataset hub: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 14/14 [00:31<00:00, 2.24s/it]
Downloading metadata: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 118/118 [00:00<00:00, 225kB/s]
Traceback (most recent call last):
File "/home/contact_theochampion/organization-logos/push_to_hub.py", line 5, in <module>
dataset.push_to_hub("tzvc/organization-logos", token=os.environ.get('HF_TOKEN'))
File "/home/contact_theochampion/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 5245, in push_to_hub
repo_info = dataset_infos[next(iter(dataset_infos))]
StopIteration
```
What could be happening here ?
### Expected behavior
The dataset is pushed to the hub
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-5.10.0-21-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
CLOSED
| 2023-03-26T17:42:13
| 2023-03-30T08:11:05
| 2023-03-30T08:11:05
|
https://github.com/huggingface/datasets/issues/5672
|
tzvc
| 3
|
[] |
5,671
|
How to use `load_dataset('glue', 'cola')`
|
### Describe the bug
I'm new to using Hugging Face `datasets`, but I cannot use `load_dataset('glue', 'cola')`.
- I was stuck on the following problem:
```python
from datasets import load_dataset
cola_dataset = load_dataset('glue', 'cola')
---------------------------------------------------------------------------
InvalidVersion Traceback (most recent call last)
File <timed exec>:1
(Omit because of long error message)
File /usr/local/lib/python3.8/site-packages/packaging/version.py:197, in Version.__init__(self, version)
195 match = self._regex.search(version)
196 if not match:
--> 197 raise InvalidVersion(f"Invalid version: '{version}'")
199 # Store the parsed out pieces of the version
200 self._version = _Version(
201 epoch=int(match.group("epoch")) if match.group("epoch") else 0,
202 release=tuple(int(i) for i in match.group("release").split(".")),
(...)
208 local=_parse_local_version(match.group("local")),
209 )
InvalidVersion: Invalid version: '0.10.1,<0.11'
```
- You can check this full error message in my repository: [MLOps-Basics/week_0_project_setup/experimental_notebooks/data_exploration.ipynb](https://github.com/makinzm/MLOps-Basics/blob/eabab4b837880607d9968d3fa687c70177b2affd/week_0_project_setup/experimental_notebooks/data_exploration.ipynb)
### Steps to reproduce the bug
- This is my repository to reproduce: [MLOps-Basics/week_0_project_setup](https://github.com/makinzm/MLOps-Basics/tree/eabab4b837880607d9968d3fa687c70177b2affd/week_0_project_setup)
1. cd `/DockerImage` and command `docker build . -t week0`
2. cd `/` and command `docker-compose up`
3. Run `experimental_notebooks/data_exploration.ipynb`
----
Just to be sure, I wrote down Dockerfile and requirements.txt
- Dockerfile
```Dockerfile
FROM python:3.8
WORKDIR /root/working
RUN apt-get update && \
apt-get install -y python3-dev python3-pip python3-venv && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip3 install --no-cache-dir jupyter notebook && pip install --no-cache-dir -r requirements.txt
CMD ["bash"]
```
- requirements.txt
```txt
pytorch-lightning==1.2.10
datasets==1.6.2
transformers==4.5.1
scikit-learn==0.24.2
```
### Expected behavior
There is no bug to implement `load_dataset('glue', 'cola')`
### Environment info
I already wrote it.
|
CLOSED
| 2023-03-26T09:40:34
| 2023-03-28T07:43:44
| 2023-03-28T07:43:43
|
https://github.com/huggingface/datasets/issues/5671
|
makinzm
| 2
|
[] |
5,670
|
Unable to load multi class classification datasets
|
### Describe the bug
I've been playing around with the Hugging Face libraries, mostly with `datasets`, and wanted to download multi-class classification datasets to fine-tune BERT on this task ([link](https://huggingface.co/docs/transformers/training#train-with-pytorch-trainer)).
While loading the dataset, I'm getting the following error snippet.
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[44], line 3
1 from datasets import load_dataset
----> 3 imdb_dataset = load_dataset("yelp_review_full")
4 imdb_dataset
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/load.py:1719, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1716 ignore_verifications = ignore_verifications or save_infos
1718 # Create a dataset builder
-> 1719 builder_instance = load_dataset_builder(
1720 path=path,
1721 name=name,
1722 data_dir=data_dir,
1723 data_files=data_files,
1724 cache_dir=cache_dir,
1725 features=features,
1726 download_config=download_config,
1727 download_mode=download_mode,
1728 revision=revision,
1729 use_auth_token=use_auth_token,
1730 **config_kwargs,
1731 )
1733 # Return iterable dataset in case of streaming
1734 if streaming:
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/load.py:1523, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)
1520 raise ValueError(error_msg)
1522 # Instantiate the dataset builder
-> 1523 builder_instance: DatasetBuilder = builder_cls(
1524 cache_dir=cache_dir,
1525 config_name=config_name,
1526 data_dir=data_dir,
1527 data_files=data_files,
1528 hash=hash,
1529 features=features,
1530 use_auth_token=use_auth_token,
1531 **builder_kwargs,
1532 **config_kwargs,
1533 )
1535 return builder_instance
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:1292, in GeneratorBasedBuilder.__init__(self, writer_batch_size, *args, **kwargs)
1291 def __init__(self, *args, writer_batch_size=None, **kwargs):
-> 1292 super().__init__(*args, **kwargs)
1293 # Batch size used by the ArrowWriter
1294 # It defines the number of samples that are kept in memory before writing them
1295 # and also the length of the arrow chunks
1296 # None means that the ArrowWriter will use its default value
1297 self._writer_batch_size = writer_batch_size or self.DEFAULT_WRITER_BATCH_SIZE
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:312, in DatasetBuilder.__init__(self, cache_dir, config_name, hash, base_path, info, features, use_auth_token, repo_id, data_files, data_dir, name, **config_kwargs)
309 # prepare info: DatasetInfo are a standardized dataclass across all datasets
310 # Prefill datasetinfo
311 if info is None:
--> 312 info = self.get_exported_dataset_info()
313 info.update(self._info())
314 info.builder_name = self.name
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:412, in DatasetBuilder.get_exported_dataset_info(self)
400 def get_exported_dataset_info(self) -> DatasetInfo:
401 """Empty DatasetInfo if doesn't exist
402
403 Example:
(...)
410 ```
411 """
--> 412 return self.get_all_exported_dataset_infos().get(self.config.name, DatasetInfo())
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:398, in DatasetBuilder.get_all_exported_dataset_infos(cls)
385 @classmethod
386 def get_all_exported_dataset_infos(cls) -> DatasetInfosDict:
387 """Empty dict if doesn't exist
388
389 Example:
(...)
396 ```
397 """
--> 398 return DatasetInfosDict.from_directory(cls.get_imported_module_dir())
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/info.py:370, in DatasetInfosDict.from_directory(cls, dataset_infos_dir)
368 dataset_metadata = DatasetMetadata.from_readme(Path(dataset_infos_dir) / "README.md")
369 if "dataset_info" in dataset_metadata:
--> 370 return cls.from_metadata(dataset_metadata)
371 if os.path.exists(os.path.join(dataset_infos_dir, config.DATASETDICT_INFOS_FILENAME)):
372 # this is just to have backward compatibility with dataset_infos.json files
373 with open(os.path.join(dataset_infos_dir, config.DATASETDICT_INFOS_FILENAME), encoding="utf-8") as f:
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/info.py:396, in DatasetInfosDict.from_metadata(cls, dataset_metadata)
387 return cls(
388 {
389 dataset_info_yaml_dict.get("config_name", "default"): DatasetInfo._from_yaml_dict(
(...)
393 }
394 )
395 else:
--> 396 dataset_info = DatasetInfo._from_yaml_dict(dataset_metadata["dataset_info"])
397 dataset_info.config_name = dataset_metadata["dataset_info"].get("config_name", "default")
398 return cls({dataset_info.config_name: dataset_info})
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/info.py:332, in DatasetInfo._from_yaml_dict(cls, yaml_data)
330 yaml_data = copy.deepcopy(yaml_data)
331 if yaml_data.get("features") is not None:
--> 332 yaml_data["features"] = Features._from_yaml_list(yaml_data["features"])
333 if yaml_data.get("splits") is not None:
334 yaml_data["splits"] = SplitDict._from_yaml_list(yaml_data["splits"])
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1745, in Features._from_yaml_list(cls, yaml_data)
1742 else:
1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}")
-> 1745 return cls.from_dict(from_yaml_inner(yaml_data))
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1741, in Features._from_yaml_list.<locals>.from_yaml_inner(obj)
1739 elif isinstance(obj, list):
1740 names = [_feature.pop("name") for _feature in obj]
-> 1741 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)}
1742 else:
1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}")
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1741, in <dictcomp>(.0)
1739 elif isinstance(obj, list):
1740 names = [_feature.pop("name") for _feature in obj]
-> 1741 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)}
1742 else:
1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}")
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1736, in Features._from_yaml_list.<locals>.from_yaml_inner(obj)
1734 return {"_type": snakecase_to_camelcase(obj["dtype"])}
1735 else:
-> 1736 return from_yaml_inner(obj["dtype"])
1737 else:
1738 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]}
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1738, in Features._from_yaml_list.<locals>.from_yaml_inner(obj)
1736 return from_yaml_inner(obj["dtype"])
1737 else:
-> 1738 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]}
1739 elif isinstance(obj, list):
1740 names = [_feature.pop("name") for _feature in obj]
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1706, in Features._from_yaml_list.<locals>.unsimplify(feature)
1704 if isinstance(feature.get("class_label"), dict) and isinstance(feature["class_label"].get("names"), dict):
1705 label_ids = sorted(feature["class_label"]["names"])
-> 1706 if label_ids and label_ids != list(range(label_ids[-1] + 1)):
1707 raise ValueError(
1708 f"ClassLabel expected a value for all label ids [0:{label_ids[-1] + 1}] but some ids are missing."
1709 )
1710 feature["class_label"]["names"] = [feature["class_label"]["names"][label_id] for label_id in label_ids]
TypeError: can only concatenate str (not "int") to str
```
The same issue happens when I try to load the `go-emotions` multi-class classification dataset. Could somebody guide me on how to fix this issue?
### Steps to reproduce the bug
Run the following code snippet in a python script/ notebook cell:
```
from datasets import load_dataset
yelp_dataset = load_dataset("yelp_review_full")
yelp_dataset
```
### Expected behavior
The dataset should be loaded perfectly, which showing the train, test and unsupervised splits with the basic data statistics
### Environment info
- `datasets` version: 2.6.1
- Platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.31
- Python version: 3.10.9
- PyArrow version: 8.0.0
- Pandas version: 1.5.3
|
CLOSED
| 2023-03-25T18:06:15
| 2023-03-27T22:54:56
| 2023-03-27T22:54:56
|
https://github.com/huggingface/datasets/issues/5670
|
ysahil97
| 2
|
[] |
5,669
|
Almost identical datasets, huge performance difference
|
### Describe the bug
I am struggling to understand the (huge) performance difference between two datasets that are almost identical.
### Steps to reproduce the bug
# Fast (normal) dataset speed:
```python
import cv2
from datasets import load_dataset
from torch.utils.data import DataLoader
dataset = load_dataset("beans", split="train")
for x in DataLoader(dataset.with_format("torch"), batch_size=16, shuffle=True, num_workers=8):
pass
```
The above pass over the dataset takes about 1.5 seconds on my computer.
However, if I re-create (almost) the same dataset, the sweep takes a HUGE amount of time: 15 minutes. Steps to reproduce:
```python
def transform(example):
example["image2"] = cv2.imread(example["image_file_path"])
return example
dataset2 = dataset.map(transform, remove_columns=["image"])
for x in DataLoader(dataset2.with_format("torch"), batch_size=16, shuffle=True, num_workers=8):
pass
```
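For comparison, a lazier variant that defers decoding to access time instead of writing decoded arrays into the cache (a sketch, assuming on-the-fly decoding is acceptable):
```python
# Sketch: keep only the file path in the Arrow table and decode at access time with a
# transform, so map() never writes the large decoded uint8 image arrays to disk.
def decode(batch):
    batch["image2"] = [cv2.imread(p) for p in batch["image_file_path"]]
    return batch

dataset3 = dataset.remove_columns(["image"]).with_transform(decode)
for x in DataLoader(dataset3, batch_size=16, shuffle=True, num_workers=8):
    pass
```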
### Expected behavior
Same timings
### Environment info
python==3.10.9
datasets==2.10.1
|
OPEN
| 2023-03-23T18:20:20
| 2023-04-09T18:56:23
| null |
https://github.com/huggingface/datasets/issues/5669
|
eli-osherovich
| 7
|
[] |
5,666
|
Support tensorflow 2.12.0 in CI
|
Once we find out the root cause of:
- #5663
we should revert the temporary pin on tensorflow introduced by:
- #5664
|
CLOSED
| 2023-03-23T14:37:51
| 2023-03-23T16:14:54
| 2023-03-23T16:14:54
|
https://github.com/huggingface/datasets/issues/5666
|
albertvillanova
| 0
|
[
"enhancement"
] |
5,665
|
Feature request: IterableDataset.push_to_hub
|
### Feature request
It'd be great to have a lazy push to hub, similar to the lazy loading we have with `IterableDataset`.
Suppose you'd like to filter [LAION](https://huggingface.co/datasets/laion/laion400m) based on certain conditions, but as LAION doesn't fit into your disk, you'd like to leverage streaming:
```
from datasets import load_dataset
dataset = load_dataset("laion/laion400m", streaming=True, split="train")
```
Then you could filter the dataset based on certain conditions:
```
filtered_dataset = dataset.filter(lambda example: example['HEIGHT'] > 400)
```
In order to persist this dataset and push it back to the hub, one currently needs to first load the entire filtered dataset on disk and then push:
```
from datasets import Dataset
Dataset.from_generator(filtered_dataset.__iter__).push_to_hub(...)
```
It would be great if we could instead lazily push the data to the hub (basically stream the data to the hub), without being limited by our disk size:
```
filtered_dataset.push_to_hub("my-filtered-dataset")
```
### Motivation
This feature would be very useful for people that want to filter huge datasets without having to load the entire dataset or a filtered version thereof on their local disk.
### Your contribution
Happy to test out a PR :)
|
CLOSED
| 2023-03-23T09:53:04
| 2025-06-06T16:13:22
| 2025-06-06T16:12:36
|
https://github.com/huggingface/datasets/issues/5665
|
NielsRogge
| 13
|
[
"enhancement"
] |
5,663
|
CI is broken: ModuleNotFoundError: jax requires jaxlib to be installed
|
CI test_py310 is broken: see https://github.com/huggingface/datasets/actions/runs/4498945505/jobs/7916194236?pr=5662
```
FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_jax_in_memory - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_jax_on_disk - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_audio - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_device - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_image - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_jnp_array_kwargs - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/features/test_features.py::CastToPythonObjectsTest::test_cast_to_python_objects_jax - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
===== 8 failed, 2147 passed, 10 skipped, 37 warnings in 228.69s (0:03:48) ======
```
|
CLOSED
| 2023-03-23T09:39:43
| 2023-03-23T10:09:55
| 2023-03-23T10:09:55
|
https://github.com/huggingface/datasets/issues/5663
|
albertvillanova
| 0
|
[
"bug"
] |
5,661
|
CI is broken: Unnecessary `dict` comprehension
|
CI check_code_quality is broken:
```
src/datasets/arrow_dataset.py:3267:35: C416 [*] Unnecessary `dict` comprehension (rewrite using `dict()`)
Found 1 error.
```
|
CLOSED
| 2023-03-23T09:13:01
| 2023-03-23T09:37:51
| 2023-03-23T09:37:51
|
https://github.com/huggingface/datasets/issues/5661
|
albertvillanova
| 0
|
[
"bug"
] |
5,660
|
integration with imbalanced-learn
|
### Feature request
Wouldn't it be great if the various class balancing operations from imbalanced-learn were available as part of datasets?
### Motivation
I'm trying to use imbalanced-learn to balance a dataset, but it's not clear how to get the two to interoperate; some examples would be great. I've looked online and asked GPT-4, but so far I'm not making much progress.
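For example, this is the kind of round-trip through pandas I have in mind (a sketch under my own assumptions: a single `label` column and `RandomOverSampler`; nothing here is an existing integration):
```python
import pandas as pd
from datasets import Dataset
from imblearn.over_sampling import RandomOverSampler

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"], "label": [0, 0, 0, 1]})
df = ds.to_pandas()

# oversample the minority class, then go back to a Dataset
X_res, y_res = RandomOverSampler(random_state=0).fit_resample(df[["text"]], df["label"])
balanced_df = X_res.reset_index(drop=True).assign(label=y_res.reset_index(drop=True))
balanced = Dataset.from_pandas(balanced_df, preserve_index=False)
print(balanced["label"])
```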
### Your contribution
If I can get this working myself I can submit a PR with example code to go in the docs
|
CLOSED
| 2023-03-22T11:05:17
| 2023-07-06T18:10:15
| 2023-07-06T18:10:15
|
https://github.com/huggingface/datasets/issues/5660
|
tansaku
| 1
|
[
"enhancement",
"wontfix"
] |
5,659
|
[Audio] Soundfile/libsndfile requirements too stringent for decoding mp3 files
|
### Describe the bug
I'm encountering several issues trying to load mp3 audio files using `datasets` on a TPU v4.
The PR https://github.com/huggingface/datasets/pull/5573 updated the audio loading logic to rely solely on the `soundfile`/`libsndfile` libraries for loading audio samples, regardless of their file type.
The installation guide suggests that `libsndfile` is bundled in when `soundfile` is pip installed:
https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/docs/source/installation.md?plain=1#L70-L71
However, just pip installing `soundfile==0.12.1` throws an error that `libsndfile` is missing:
```
pip install soundfile==0.12.1
```
Then:
```python
>>> import soundfile
>>> soundfile.__libsndfile_version__
```
<details>
<summary> Traceback (most recent call last): </summary>
```
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 161, in <module>
import _soundfile_data # ImportError if this doesn't exist
ModuleNotFoundError: No module named '_soundfile_data'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 170, in <module>
raise OSError('sndfile library not found using ctypes.util.find_library')
OSError: sndfile library not found using ctypes.util.find_library
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 192, in <module>
_snd = _ffi.dlopen(_explicit_libname)
OSError: cannot load library 'libsndfile.so': libsndfile.so: cannot open shared object file: No such file or directory
```
</details>
Thus, I've followed the official instructions for installing the `soundfile` package from https://github.com/bastibe/python-soundfile#installation, which states that `libsndfile` needs to be installed separately as:
```
pip install --upgrade soundfile
sudo apt install libsndfile1
```
We can now import `soundfile`:
```python
>>> import soundfile
>>> soundfile.__version__
'0.12.1'
>>> soundfile.__libsndfile_version__
'1.0.28'
```
We see that we have `soundfile==0.12.1`, which matches the `datasets[audio]` package constraints:
https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/setup.py#L144-L147
But we have `libsndfile==1.0.28`, which is too low for decoding mp3 files:
https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/src/datasets/config.py#L136-L138
Updating/upgrading the `libsndfile` doesn't change this:
```
sudo apt-get update
sudo apt-get upgrade
```
Is there any other suggestion for how to get a compatible `libsndfile` version? Currently, the version bundled with Ubuntu `apt-get` is too low for decoding mp3 files.
Maybe we could add this under `setup.py` such that we install the correct `libsndfile` version when we do `pip install datasets[audio]`? IMO this would help circumvent such version issues.
### Steps to reproduce the bug
Environment described above. Loading mp3 files:
```python
from datasets import load_dataset
common_voice_es = load_dataset("common_voice", "es", split="validation", streaming=True)
print(next(iter(common_voice_es)))
```
```python
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[4], line 2
1 common_voice_es = load_dataset("common_voice", "es", split="validation", streaming=True)
----> 2 print(next(iter(common_voice_es)))
File ~/datasets/src/datasets/iterable_dataset.py:941, in IterableDataset.__iter__(self)
937 for key, example in ex_iterable:
938 if self.features:
939 # `IterableDataset` automatically fills missing columns with None.
940 # This is done with `_apply_feature_types_on_example`.
--> 941 yield _apply_feature_types_on_example(
942 example, self.features, token_per_repo_id=self._token_per_repo_id
943 )
944 else:
945 yield example
File ~/datasets/src/datasets/iterable_dataset.py:700, in _apply_feature_types_on_example(example, features, token_per_repo_id)
698 encoded_example = features.encode_example(example)
699 # Decode example for Audio feature, e.g.
--> 700 decoded_example = features.decode_example(encoded_example, token_per_repo_id=token_per_repo_id)
701 return decoded_example
File ~/datasets/src/datasets/features/features.py:1864, in Features.decode_example(self, example, token_per_repo_id)
1850 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1851 """Decode example with custom feature decoding.
1852
1853 Args:
(...)
1861 `dict[str, Any]`
1862 """
-> 1864 return {
1865 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1866 if self._column_requires_decoding[column_name]
1867 else value
1868 for column_name, (feature, value) in zip_dict(
1869 {key: value for key, value in self.items() if key in example}, example
1870 )
1871 }
File ~/datasets/src/datasets/features/features.py:1865, in <dictcomp>(.0)
1850 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1851 """Decode example with custom feature decoding.
1852
1853 Args:
(...)
1861 `dict[str, Any]`
1862 """
1864 return {
-> 1865 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1866 if self._column_requires_decoding[column_name]
1867 else value
1868 for column_name, (feature, value) in zip_dict(
1869 {key: value for key, value in self.items() if key in example}, example
1870 )
1871 }
File ~/datasets/src/datasets/features/features.py:1308, in decode_nested_example(schema, obj, token_per_repo_id)
1305 elif isinstance(schema, (Audio, Image)):
1306 # we pass the token to read and decode files from private repositories in streaming mode
1307 if obj is not None and schema.decode:
-> 1308 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
1309 return obj
File ~/datasets/src/datasets/features/audio.py:167, in Audio.decode_example(self, value, token_per_repo_id)
162 raise RuntimeError(
163 "Decoding 'opus' files requires system library 'libsndfile'>=1.0.31, "
164 'You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`. '
165 )
166 elif not config.IS_MP3_SUPPORTED and audio_format == "mp3":
--> 167 raise RuntimeError(
168 "Decoding 'mp3' files requires system library 'libsndfile'>=1.1.0, "
169 'You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`. '
170 )
172 if file is None:
173 token_per_repo_id = token_per_repo_id or {}
RuntimeError: Decoding 'mp3' files requires system library 'libsndfile'>=1.1.0, You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`.
```
### Expected behavior
Load mp3 files!
### Environment info
- `datasets` version: 2.10.2.dev0
- Platform: Linux-5.13.0-1023-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.13.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
- Soundfile version: 0.12.1
- Libsndfile version: 1.0.28
|
CLOSED
| 2023-03-22T10:07:33
| 2024-07-12T01:35:01
| 2023-04-07T08:51:28
|
https://github.com/huggingface/datasets/issues/5659
|
sanchit-gandhi
| 13
|
[] |
5,654
|
Offset overflow when executing Dataset.map
|
### Describe the bug
Hi, I'm trying to use the `.map` method to cache multiple random crops from each image to speed up data processing during training, as the image size is too big.
The map function executes all iterations, and then returns the following error:
```bash
Traceback (most recent call last):
File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3353, in _map_single
writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 582, in finalize
self.write_examples_on_file()
File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 446, in write_examples_on_file
self.write_batch(batch_examples=batch_examples)
File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 555, in write_batch
self.write_table(pa_table, writer_batch_size)
File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 567, in write_table
pa_table = pa_table.combine_chunks()
File "pyarrow/table.pxi", line 3315, in pyarrow.lib.Table.combine_chunks
File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays
```
Here is the minimal code (`/home/datasets/DIV2K_train_HR` is just a folder of images that can be replaced by any appropriate one):
### Steps to reproduce the bug
```python
from glob import glob
import torch
from datasets import Dataset, Image
from torchvision.transforms import PILToTensor, RandomCrop
file_paths = glob("/home/datasets/DIV2K_train_HR/*")
to_tensor = PILToTensor()
crop_transf = RandomCrop(size=256)
def prepare_data(example):
tensor = to_tensor(example["image"].convert("RGB"))
return {"hr": torch.stack([crop_transf(tensor) for _ in range(25)])}
train_data = Dataset.from_dict({"image": file_paths}).cast_column("image", Image())
train_data = train_data.map(
prepare_data,
cache_file_name="/home/datasets/DIV2K_train_HR_crops.tmp",
desc="Caching multiple random crops of image",
remove_columns="image",
)
print(train_data[0].keys(), train_data[0]["hr"].shape)
```
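A possible mitigation (a guess on my part, not a confirmed fix): lowering `writer_batch_size` so that fewer of these large rows are combined into a single Arrow write, e.g. (the value 10 is arbitrary):
```python
train_data = train_data.map(
    prepare_data,
    writer_batch_size=10,  # write fewer large rows per Arrow batch
    cache_file_name="/home/datasets/DIV2K_train_HR_crops.tmp",
    desc="Caching multiple random crops of image",
    remove_columns="image",
)
```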
### Expected behavior
Cached file is stored at `"/home/datasets/DIV2K_train_HR_crops.tmp"`, output is `dict_keys(['hr']) torch.Size([25, 3, 256, 256])`
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.10
- Python version: 3.8.16
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
- Pytorch version: 2.0.0+cu117
- torchvision version: 0.15.1+cu117
|
OPEN
| 2023-03-21T09:33:27
| 2023-03-21T10:32:07
| null |
https://github.com/huggingface/datasets/issues/5654
|
jan-pair
| 2
|
[] |
5,653
|
Doc: save_to_disk, `num_proc` will affect `num_shards`, but it's not documented
|
### Describe the bug
[`num_proc`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.DatasetDict.save_to_disk.num_proc) will affect `num_shards`, but it's not documented
### Steps to reproduce the bug
Nothing to reproduce
### Expected behavior
The [document of `num_shards`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.DatasetDict.save_to_disk.num_shards) explicitly says that it depends on `max_shard_size`; it should also mention `num_proc`.
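A small sketch of the behavior that should be documented (the directory name is illustrative):
```python
import os
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(1000))})
ds.save_to_disk("out_dir", num_proc=4)
# with no num_shards or max_shard_size given, the shard count ends up following num_proc
print(sorted(f for f in os.listdir("out_dir") if f.endswith(".arrow")))
```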
### Environment info
datasets main document
|
CLOSED
| 2023-03-21T05:25:35
| 2023-03-24T16:36:23
| 2023-03-24T16:36:23
|
https://github.com/huggingface/datasets/issues/5653
|
RmZeta2718
| 1
|
[
"documentation",
"good first issue"
] |
5,651
|
expanduser in save_to_disk
|
### Describe the bug
save_to_disk() does not expand `~`
1. `dataset = load_datasets("any dataset")`
2. `dataset.save_to_disk("~/data")`
3. a folder named "~" is created in the current folder
4. FileNotFoundError is raised, because the expanded path does not exist (`/home/<user>/data`)
related issue https://github.com/huggingface/transformers/issues/10628
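A minimal sketch of the workaround I am using for now (expanding the path manually before calling `save_to_disk`; the dataset name is just an example):
```python
import os
from datasets import load_dataset

dataset = load_dataset("rotten_tomatoes", split="train")  # any dataset works here
dataset.save_to_disk(os.path.expanduser("~/data"))  # expand "~" by hand
```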
### Steps to reproduce the bug
As described above.
### Expected behavior
expanduser correctly
### Environment info
- datasets 2.10.1
- python 3.10
|
CLOSED
| 2023-03-20T12:02:18
| 2023-10-27T14:04:37
| 2023-10-27T14:04:37
|
https://github.com/huggingface/datasets/issues/5651
|
RmZeta2718
| 5
|
[
"good first issue"
] |
5,650
|
load_dataset can't work correctly with my image data
|
I have about 20000 images in my folder, which are divided into 4 subfolders with class names.
When I use load_dataset("my_folder_name", split="train"), this function creates a dataset in which there are only 4 images; the remaining 19000 images were not added. I don't understand what the problem is. I tried converting the images and the like, but absolutely nothing worked.
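For reference, this is roughly how I load it (a sketch; `my_folder_name` is a placeholder for the local directory containing the 4 class subfolders):
```python
from datasets import load_dataset

# "my_folder_name" is a placeholder for the local directory with the class subfolders
dataset = load_dataset("imagefolder", data_dir="my_folder_name", split="train")
print(dataset)
```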
|
CLOSED
| 2023-03-18T13:59:13
| 2023-07-24T14:13:02
| 2023-07-24T14:13:01
|
https://github.com/huggingface/datasets/issues/5650
|
WiNE-iNEFF
| 21
|
[] |
5,649
|
The index column created with .to_sql() is dependent on the batch_size when writing
|
### Describe the bug
It seems like the "index" column is designed to be unique? The values are only unique per batch. The SQL index is not a unique index.
This can be a problem, for instance, when building a faiss index on a dataset and then trying to match up ids with a sql export.
### Steps to reproduce the bug
```
from datasets import Dataset
import sqlite3
db = sqlite3.connect(":memory:")
nice_numbers = Dataset.from_dict({"nice_number": range(101,106)})
nice_numbers.to_sql("nice1", db, batch_size=1)
nice_numbers.to_sql("nice2", db, batch_size=2)
print(db.execute("select * from nice1").fetchall()) # [(0, 101), (0, 102), (0, 103), (0, 104), (0, 105)]
print(db.execute("select * from nice2").fetchall()) # [(0, 101), (1, 102), (0, 103), (1, 104), (0, 105)]
```
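A sketch of the workaround I would otherwise use: adding an explicit, truly unique id column before exporting (`row_id` is just a name I picked):
```python
import sqlite3
from datasets import Dataset

db = sqlite3.connect(":memory:")
nice_numbers = Dataset.from_dict({"nice_number": list(range(101, 106))})
# attach a globally unique id that does not depend on the write batch size
with_ids = nice_numbers.add_column("row_id", list(range(len(nice_numbers))))
with_ids.to_sql("nice3", db, batch_size=2)
print(db.execute("select row_id, nice_number from nice3").fetchall())
```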
### Expected behavior
I expected the "index" column to be unique
### Environment info
```
% datasets-cli env
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.10.1
- Platform: macOS-13.2.1-arm64-arm-64bit
- Python version: 3.9.6
- PyArrow version: 7.0.0
- Pandas version: 1.5.2
zsh: segmentation fault datasets-cli env
```
|
CLOSED
| 2023-03-18T05:25:17
| 2023-06-17T07:01:57
| 2023-06-17T07:01:57
|
https://github.com/huggingface/datasets/issues/5649
|
lsb
| 2
|
[] |
5,648
|
flatten_indices doesn't work with pandas format
|
### Describe the bug
Hi,
I noticed that `flatten_indices` throws an error when the batch format is `pandas`. This is probably due to the fact that `flatten_indices` uses `map` internally, which doesn't accept DataFrames as the transformation function output.
### Steps to reproduce the bug
import numpy as np
import pandas as pd
import datasets

tabular_data = pd.DataFrame(np.random.randn(10, 10))
tabular_data = datasets.arrow_dataset.Dataset.from_pandas(tabular_data)
tabular_data.with_format("pandas").select([0, 1, 2, 3]).flatten_indices()
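A sketch of the workaround I am using for now (dropping the pandas format just for the flattening step and re-applying it afterwards):
```python
flattened = (
    tabular_data.with_format("pandas")
    .select([0, 1, 2, 3])
    .with_format(None)       # back to the default format so the internal map works
    .flatten_indices()
    .with_format("pandas")   # re-apply the pandas format afterwards
)
```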
### Expected behavior
No error thrown
### Environment info
- `datasets` version: 2.10.1
- Python version: 3.9.5
- PyArrow version: 11.0.0
- Pandas version: 1.4.1
|
OPEN
| 2023-03-17T12:44:25
| 2023-03-21T13:12:03
| null |
https://github.com/huggingface/datasets/issues/5648
|
alialamiidrissi
| 1
|
[
"bug"
] |
5,647
|
Make all print statements optional
|
### Feature request
Make all print statements optional to speed up development
### Motivation
I'm loading multiple tiny datasets, and all the print statements make the loading slower
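For what it's worth, this is what I currently do to quiet things down, assuming most of the messages go through the `datasets` logger and progress bars rather than bare `print` calls (the exact import location may differ across versions):
```python
from datasets.utils.logging import disable_progress_bar, set_verbosity_error

set_verbosity_error()   # hide info-level log messages
disable_progress_bar()  # hide the tqdm progress bars
```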
### Your contribution
I can help contribute
|
CLOSED
| 2023-03-16T20:30:07
| 2023-07-21T14:20:25
| 2023-07-21T14:20:24
|
https://github.com/huggingface/datasets/issues/5647
|
gagan3012
| 2
|
[
"enhancement"
] |
5,645
|
Datasets map and select(range()) is giving dill error
|
### Describe the bug
I'm using Huggingface Datasets library to load the dataset in google colab
When I do,
> data = train_dataset.select(range(10))
or
> train_datasets = train_dataset.map(
> process_data_to_model_inputs,
> batched=True,
> batch_size=batch_size,
> remove_columns=["article", "abstract"],
> )
I get the following error: `module 'dill._dill' has no attribute 'log'`
I've tried downgrading the dill version from latest to 0.2.8, but no luck.
Stack trace:
> ---------------------------------------------------------------------------
> ModuleNotFoundError Traceback (most recent call last)
> /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in _no_cache_fields(obj)
> 367 try:
> --> 368 import transformers as tr
> 369
>
> ModuleNotFoundError: No module named 'transformers'
>
> During handling of the above exception, another exception occurred:
>
> AttributeError Traceback (most recent call last)
> 17 frames
> <ipython-input-13-dd14813880a6> in <module>
> ----> 1 test = train_dataset.select(range(10))
>
> /usr/local/lib/python3.9/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
> 155 }
> 156 # apply actual function
> --> 157 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
> 158 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
> 159 # re-apply format to the output
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
> 155 if kwargs.get(fingerprint_name) is None:
> 156 kwargs_for_fingerprint["fingerprint_name"] = fingerprint_name
> --> 157 kwargs[fingerprint_name] = update_fingerprint(
> 158 self._fingerprint, transform, kwargs_for_fingerprint
> 159 )
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in update_fingerprint(fingerprint, transform, transform_args)
> 103 for key in sorted(transform_args):
> 104 hasher.update(key)
> --> 105 hasher.update(transform_args[key])
> 106 return hasher.hexdigest()
> 107
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in update(self, value)
> 55 def update(self, value):
> 56 self.m.update(f"=={type(value)}==".encode("utf8"))
> ---> 57 self.m.update(self.hash(value).encode("utf-8"))
> 58
> 59 def hexdigest(self):
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in hash(cls, value)
> 51 return cls.dispatch[type(value)](cls, value)
> 52 else:
> ---> 53 return cls.hash_default(value)
> 54
> 55 def update(self, value):
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in hash_default(cls, value)
> 44 @classmethod
> 45 def hash_default(cls, value):
> ---> 46 return cls.hash_bytes(dumps(value))
> 47
> 48 @classmethod
>
> /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in dumps(obj)
> 387 file = StringIO()
> 388 with _no_cache_fields(obj):
> --> 389 dump(obj, file)
> 390 return file.getvalue()
> 391
>
> /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in dump(obj, file)
> 359 def dump(obj, file):
> 360 """pickle an object to a file"""
> --> 361 Pickler(file, recurse=True).dump(obj)
> 362 return
> 363
>
> /usr/local/lib/python3.9/dist-packages/dill/_dill.py in dump(self, obj)
> 392 return
> 393
> --> 394 def load_session(filename='/tmp/session.pkl', main=None):
> 395 """update the __main__ module with the state from the session file"""
> 396 if main is None: main = _main_module
>
> /usr/lib/python3.9/pickle.py in dump(self, obj)
> 485 if self.proto >= 4:
> 486 self.framer.start_framing()
> --> 487 self.save(obj)
> 488 self.write(STOP)
> 489 self.framer.end_framing()
>
> /usr/local/lib/python3.9/dist-packages/dill/_dill.py in save(self, obj, save_persistent_id)
> 386 pickler._byref = False # disable pickling by name reference
> 387 pickler._recurse = False # disable pickling recursion for globals
> --> 388 pickler._session = True # is best indicator of when pickling a session
> 389 pickler.dump(main)
> 390 finally:
>
> /usr/lib/python3.9/pickle.py in save(self, obj, save_persistent_id)
> 558 f = self.dispatch.get(t)
> 559 if f is not None:
> --> 560 f(self, obj) # Call unbound method with explicit self
> 561 return
> 562
>
> /usr/local/lib/python3.9/dist-packages/dill/_dill.py in save_singleton(pickler, obj)
>
> /usr/lib/python3.9/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj)
> 689 write(NEWOBJ)
> 690 else:
> --> 691 save(func)
> 692 save(args)
> 693 write(REDUCE)
>
> /usr/local/lib/python3.9/dist-packages/dill/_dill.py in save(self, obj, save_persistent_id)
> 386 pickler._byref = False # disable pickling by name reference
> 387 pickler._recurse = False # disable pickling recursion for globals
> --> 388 pickler._session = True # is best indicator of when pickling a session
> 389 pickler.dump(main)
> 390 finally:
>
> /usr/lib/python3.9/pickle.py in save(self, obj, save_persistent_id)
> 558 f = self.dispatch.get(t)
> 559 if f is not None:
> --> 560 f(self, obj) # Call unbound method with explicit self
> 561 return
> 562
>
> /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in save_function(pickler, obj)
> 583 dill._dill.log.info("# F1")
> 584 else:
> --> 585 dill._dill.log.info("F2: %s" % obj)
> 586 name = getattr(obj, "__qualname__", getattr(obj, "__name__", None))
> 587 dill._dill.StockPickler.save_global(pickler, obj, name=name)
>
> AttributeError: module 'dill._dill' has no attribute 'log'
### Steps to reproduce the bug
After loading the dataset (e.g. https://huggingface.co/datasets/scientific_papers) in Google Colab
do either
> data = train_dataset.select(range(10))
or
> train_datasets = train_dataset.map(
> process_data_to_model_inputs,
> batched=True,
> batch_size=batch_size,
> remove_columns=["article", "abstract"],
> )
### Expected behavior
The `map` and `select` functions should work
### Environment info
dataset: https://huggingface.co/datasets/scientific_papers
dill = 0.3.6
python= 3.9.16
transformer = 4.2.0
|
CLOSED
| 2023-03-16T10:01:28
| 2023-03-17T04:24:51
| 2023-03-17T04:24:51
|
https://github.com/huggingface/datasets/issues/5645
|
Tanya-11
| 2
|
[] |
5,641
|
Features cannot be named "self"
|
### Describe the bug
Hi,
I noticed that we cannot create a Hugging Face dataset from a Pandas DataFrame with a column named `self`.
The error seems to be coming from arguments validation in the `Features.from_dict` function.
### Steps to reproduce the bug
```python
import datasets
import pandas as pd
dummy_pandas = pd.DataFrame([0,1,2,3], columns = ["self"])
datasets.arrow_dataset.Dataset.from_pandas(dummy_pandas)
```
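The workaround I am using in the meantime (a sketch; `self_` is just an arbitrary replacement name):
```python
import datasets
import pandas as pd

dummy_pandas = pd.DataFrame([0, 1, 2, 3], columns=["self"])
renamed = dummy_pandas.rename(columns={"self": "self_"})  # avoid the problematic name
dataset = datasets.arrow_dataset.Dataset.from_pandas(renamed)
```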
### Expected behavior
No error thrown
### Environment info
- `datasets` version: 2.8.0
- Python version: 3.9.5
- PyArrow version: 6.0.1
- Pandas version: 1.4.1
|
CLOSED
| 2023-03-15T17:16:40
| 2023-03-16T17:14:51
| 2023-03-16T17:14:51
|
https://github.com/huggingface/datasets/issues/5641
|
alialamiidrissi
| 0
|
[] |
5,639
|
Parquet file wrongly recognized as zip prevents loading a dataset
|
### Describe the bug
When trying to `load_dataset_builder` for `HuggingFaceGECLM/StackExchange_Mar2023`, extraction fails, because parquet file [devops-00000-of-00001-22fe902fd8702892.parquet](https://huggingface.co/datasets/HuggingFaceGECLM/StackExchange_Mar2023/resolve/1f8c9a2ab6f7d0f9ae904b8b922e4384592ae1a5/data/devops-00000-of-00001-22fe902fd8702892.parquet) is wrongly identified by python as being a zip not a parquet.
(Full thread on [Slack](https://huggingface.slack.com/archives/C02V51Q3800/p1678890880803599))
### Steps to reproduce the bug
```python
from datasets import load_dataset_builder
ds = load_dataset_builder("HuggingFaceGECLM/StackExchange_Mar2023")
```
### Expected behavior
Loading the file normally.
### Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.14.0-1058-oem-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
|
CLOSED
| 2023-03-15T15:20:45
| 2023-03-16T13:40:14
| 2023-03-16T13:40:14
|
https://github.com/huggingface/datasets/issues/5639
|
clefourrier
| 0
|
[] |
5,638
|
xPath to implement all operations for Path
|
### Feature request
The current xPath implementation is a great extension of Path for working with remote objects. However, some methods such as `mkdir` are not implemented correctly. They should instead rely on `fsspec` methods, instead of defaulting to `Path` methods, which only work locally.
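Roughly what I have in mind, as a sketch that goes through `fsspec` directly (this is not the actual xPath code, and the bucket URL is a placeholder):
```python
import fsspec

def remote_mkdir(url: str) -> None:
    # resolve the filesystem from the URL and create the directory remotely
    fs, path = fsspec.core.url_to_fs(url)
    fs.makedirs(path, exist_ok=True)

remote_mkdir("s3://my-bucket/some/prefix")
```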
### Motivation
I'm using xPath to interact with remote objects.
### Your contribution
I could try to make a PR. I'm a bit unfamiliar with chaining right now.
|
CLOSED
| 2023-03-15T13:47:11
| 2023-03-17T13:21:12
| 2023-03-17T13:21:12
|
https://github.com/huggingface/datasets/issues/5638
|
thomasw21
| 5
|
[
"enhancement"
] |
5,637
|
IterableDataset with_format does not support 'device' keyword for jax
|
### Describe the bug
As seen here: https://huggingface.co/docs/datasets/use_with_jax dataset.with_format() supports the keyword 'device', to put data on a specific device when loaded as jax. However, when called on an IterableDataset, I got the error `TypeError: with_format() got an unexpected keyword argument 'device'`
Looking over the code, it seems IterableDataset supports only pytorch and has no support for the jax device keyword?
https://github.com/huggingface/datasets/blob/fc5c84f36684343bff3e424cb0fd1ac5ecdd66da/src/datasets/iterable_dataset.py#L1029
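The workaround I am trying in the meantime, as a sketch (it assumes it is acceptable to move each example to the device by hand with `jax.device_put`, and `"x"` stands for whatever numeric feature the dataset exposes):
```python
import jax
import jax.numpy as jnp

device = jax.devices()[0]  # or any specific device

for example in dataset:  # the streaming IterableDataset from the steps below
    # convert the numeric field by hand and place it on the target device
    x = jax.device_put(jnp.asarray(example["x"]), device)
    break
```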
### Steps to reproduce the bug
1. Load an IterableDataset (tested in streaming mode)
2. Call with_format('jax',device=device)
### Expected behavior
I expect to call `with_format('jax', device=device)` as per [documentation](https://huggingface.co/docs/datasets/use_with_jax) without error
### Environment info
Tested with installing newest (dev) and also pip release (2.10.1).
- `datasets` version: 2.10.2.dev0
- Platform: Linux-5.15.89+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- Huggingface_hub version: 0.12.1
- PyArrow version: 11.0.0
- Pandas version: 1.3.5
|
OPEN
| 2023-03-15T11:04:12
| 2025-01-07T06:59:33
| null |
https://github.com/huggingface/datasets/issues/5637
|
Lime-Cakes
| 3
|
[] |
5,634
|
Not all progress bars are showing up when they should for downloading dataset
|
### Describe the bug
While downloading the rotten tomatoes dataset, not all progress bars are displayed properly. This might be related to [this ticket](https://github.com/huggingface/datasets/issues/5117), as it raised the same concern, but it's not clear if the fix solves this issue too.
ipywidgets
<img width="1243" alt="image" src="https://user-images.githubusercontent.com/110427462/224851138-13fee5b7-ab51-4883-b96f-1b9808782e3b.png">
tqdm
<img width="1251" alt="Screen Shot 2023-03-13 at 3 58 59 PM" src="https://user-images.githubusercontent.com/110427462/224851180-5feb7825-9250-4b1e-ad0c-f3172ac1eb78.png">
### Steps to reproduce the bug
1. Run this line
```
from datasets import load_dataset
rotten_tomatoes = load_dataset("rotten_tomatoes", split="train")
```
### Expected behavior
all progress bars for builder script, metadata, readme, training, validation, and test set
### Environment info
requirements.txt
```
aiofiles==22.1.0
aiohttp==3.8.4
aiosignal==1.3.1
aiosqlite==0.18.0
anyio==3.6.2
appnope==0.1.3
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
arrow==1.2.3
asttokens==2.2.1
async-generator==1.10
async-timeout==4.0.2
attrs==22.2.0
Babel==2.12.1
backcall==0.2.0
beautifulsoup4==4.11.2
bleach==6.0.0
brotlipy @ file:///Users/runner/miniforge3/conda-bld/brotlipy_1666764961872/work
certifi==2022.12.7
cffi @ file:///Users/runner/miniforge3/conda-bld/cffi_1671179414629/work
cfgv==3.3.1
charset-normalizer @ file:///home/conda/feedstock_root/build_artifacts/charset-normalizer_1661170624537/work
comm==0.1.2
conda==22.9.0
conda-package-handling @ file:///home/conda/feedstock_root/build_artifacts/conda-package-handling_1669907009957/work
conda_package_streaming @ file:///home/conda/feedstock_root/build_artifacts/conda-package-streaming_1669733752472/work
coverage==7.2.1
cryptography @ file:///Users/runner/miniforge3/conda-bld/cryptography_1669592251328/work
datasets==2.1.0
debugpy==1.6.6
decorator==5.1.1
defusedxml==0.7.1
dill==0.3.6
distlib==0.3.6
distro==1.4.0
entrypoints==0.4
exceptiongroup==1.1.0
executing==1.2.0
fastjsonschema==2.16.3
filelock==3.9.0
flaky==3.7.0
fqdn==1.5.1
frozenlist==1.3.3
fsspec==2023.3.0
huggingface-hub==0.10.1
identify==2.5.18
idna @ file:///home/conda/feedstock_root/build_artifacts/idna_1663625384323/work
iniconfig==2.0.0
ipykernel==6.12.1
ipyparallel==8.4.1
ipython==7.32.0
ipython-genutils==0.2.0
ipywidgets==8.0.4
isoduration==20.11.0
jedi==0.18.2
Jinja2==3.1.2
json5==0.9.11
jsonpointer==2.3
jsonschema==4.17.3
jupyter-events==0.6.3
jupyter-ydoc==0.2.2
jupyter_client==8.0.3
jupyter_core==5.2.0
jupyter_server==2.4.0
jupyter_server_fileid==0.8.0
jupyter_server_terminals==0.4.4
jupyter_server_ydoc==0.6.1
jupyterlab==3.6.1
jupyterlab-pygments==0.2.2
jupyterlab-widgets==3.0.5
jupyterlab_server==2.20.0
libmambapy @ file:///Users/runner/miniforge3/conda-bld/mamba-split_1671598370072/work/libmambapy
mamba @ file:///Users/runner/miniforge3/conda-bld/mamba-split_1671598370072/work/mamba
MarkupSafe==2.1.2
matplotlib-inline==0.1.6
mistune==2.0.5
multidict==6.0.4
multiprocess==0.70.14
nbclassic==0.5.3
nbclient==0.7.2
nbconvert==7.2.9
nbformat==5.7.3
nest-asyncio==1.5.6
nodeenv==1.7.0
notebook==6.5.3
notebook_shim==0.2.2
numpy==1.24.2
outcome==1.2.0
packaging==23.0
pandas==1.5.3
pandocfilters==1.5.0
parso==0.8.3
pexpect==4.8.0
pickleshare==0.7.5
platformdirs==3.0.0
plotly==5.13.1
pluggy==1.0.0
pre-commit==3.1.0
prometheus-client==0.16.0
prompt-toolkit==3.0.38
psutil==5.9.4
ptyprocess==0.7.0
pure-eval==0.2.2
pyarrow==11.0.0
pycosat @ file:///Users/runner/miniforge3/conda-bld/pycosat_1666836580084/work
pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1636257122734/work
Pygments==2.14.0
pyOpenSSL @ file:///home/conda/feedstock_root/build_artifacts/pyopenssl_1665350324128/work
pyrsistent==0.19.3
PySocks @ file:///home/conda/feedstock_root/build_artifacts/pysocks_1661604839144/work
pytest==7.2.1
pytest-asyncio==0.20.3
pytest-cov==4.0.0
pytest-timeout==2.1.0
python-dateutil==2.8.2
python-json-logger==2.0.7
pytz==2022.7.1
PyYAML==6.0
pyzmq==25.0.0
requests @ file:///home/conda/feedstock_root/build_artifacts/requests_1661872987712/work
responses==0.18.0
rfc3339-validator==0.1.4
rfc3986-validator==0.1.1
ruamel-yaml-conda @ file:///Users/runner/miniforge3/conda-bld/ruamel_yaml_1666819760545/work
Send2Trash==1.8.0
simplegeneric==0.8.1
six==1.16.0
sniffio==1.3.0
sortedcontainers==2.4.0
soupsieve==2.4
stack-data==0.6.2
tenacity==8.2.2
terminado==0.17.1
tinycss2==1.2.1
tomli==2.0.1
toolz @ file:///home/conda/feedstock_root/build_artifacts/toolz_1657485559105/work
tornado==6.2
tqdm==4.64.1
traitlets==5.8.1
trio==0.22.0
typing_extensions==4.5.0
uri-template==1.2.0
urllib3 @ file:///home/conda/feedstock_root/build_artifacts/urllib3_1669259737463/work
virtualenv==20.19.0
wcwidth==0.2.6
webcolors==1.12
webencodings==0.5.1
websocket-client==1.5.1
widgetsnbextension==4.0.5
xxhash==3.2.0
y-py==0.5.9
yarl==1.8.2
ypy-websocket==0.8.2
zstandard==0.19.0
```
|
CLOSED
| 2023-03-13T23:04:18
| 2023-10-11T16:30:16
| 2023-10-11T16:30:16
|
https://github.com/huggingface/datasets/issues/5634
|
garlandz-db
| 2
|
[] |
5,633
|
Cannot import datasets
|
### Describe the bug
Hi,
I cannot even import the library :( I installed it by running:
```
$ conda install datasets
```
Then I realized I should maybe use the huggingface channel, because I encountered the error below, so I ran:
```
$ conda remove datasets
$ conda install -c huggingface datasets
```
Please see 'steps to reproduce the bug' for the specific error, as steps to reproduce is just importing the library
### Steps to reproduce the bug
```
$ python3
Python 3.8.15 (default, Nov 24 2022, 15:19:38)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import datasets
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/datasets/__init__.py", line 33, in <module>
from .arrow_dataset import Dataset, concatenate_datasets
File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 59, in <module>
from .arrow_reader import ArrowReader
File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/datasets/arrow_reader.py", line 27, in <module>
import pyarrow.parquet as pq
File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/pyarrow/parquet/__init__.py", line 20, in <module>
from .core import *
File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/pyarrow/parquet/core.py", line 37, in <module>
from pyarrow._parquet import (ParquetReader, Statistics, # noqa
ImportError: cannot import name 'FileEncryptionProperties' from 'pyarrow._parquet' (/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/pyarrow/_parquet.cpython-38-x86_64-linux-gnu.so)
```
### Expected behavior
I would expect for the statement `import datasets` to cause no error
### Environment info
Output of `conda list`:
```
# packages in environment at /home/jack/.conda/envs/pbalawender_zpp:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
abseil-cpp 20210324.2 h2531618_0
advertools 0.13.2 pypi_0 pypi
aiofiles 0.8.0 pypi_0 pypi
aiohttp 3.8.3 py38h5eee18b_0
aiosignal 1.2.0 pyhd3eb1b0_0
aiosqlite 0.17.0 pypi_0 pypi
anyio 3.6.2 pypi_0 pypi
aquirdturtle-collapsible-headings 3.1.0 pypi_0 pypi
argon2-cffi 21.3.0 pypi_0 pypi
argon2-cffi-bindings 21.2.0 pypi_0 pypi
arrow 1.2.3 pypi_0 pypi
arrow-cpp 3.0.0 py38h6b21186_4
asttokens 2.2.0 pypi_0 pypi
async-timeout 4.0.2 py38h06a4308_0
attrs 22.1.0 py38h06a4308_0
automat 22.10.0 pypi_0 pypi
aws-c-common 0.4.57 he6710b0_1
aws-c-event-stream 0.1.6 h2531618_5
aws-checksums 0.1.9 he6710b0_0
aws-sdk-cpp 1.8.185 hce553d0_0
babel 2.11.0 pypi_0 pypi
backcall 0.2.0 pyhd3eb1b0_0
beautifulsoup4 4.11.1 pypi_0 pypi
blas 1.0 mkl
bleach 5.0.1 pypi_0 pypi
boost-cpp 1.73.0 h27cfd23_11
bottleneck 1.3.5 py38h7deecbd_0
brotli 1.0.9 h5eee18b_7
brotli-bin 1.0.9 h5eee18b_7
brotlipy 0.7.0 py38h27cfd23_1003
bzip2 1.0.8 h7b6447c_0
c-ares 1.18.1 h7f8727e_0
ca-certificates 2023.01.10 h06a4308_0
certifi 2022.9.24 pypi_0 pypi
cffi 1.15.1 py38h5eee18b_3
charset-normalizer 2.1.1 pypi_0 pypi
click 8.1.3 pypi_0 pypi
constantly 15.1.0 pypi_0 pypi
contourpy 1.0.6 pypi_0 pypi
cryptography 38.0.4 pypi_0 pypi
cssselect 1.2.0 pypi_0 pypi
cudatoolkit 10.1.243 h8cb64d8_10 conda-forge
cycler 0.11.0 pypi_0 pypi
dacite 1.6.0 pypi_0 pypi
dataclasses 0.8 pyh6d0b6a4_7
datasets 1.18.4 py_0 huggingface
datetime 4.7 pypi_0 pypi
debugpy 1.6.4 pypi_0 pypi
decorator 5.1.1 pyhd3eb1b0_0
defusedxml 0.7.1 pypi_0 pypi
dill 0.3.6 py38h06a4308_0
docker-pycreds 0.4.0 pypi_0 pypi
double-conversion 3.1.5 he6710b0_1
entrypoints 0.4 py38h06a4308_0
executing 0.8.3 pyhd3eb1b0_0
filelock 3.8.0 pypi_0 pypi
flake8 6.0.0 pypi_0 pypi
flask 2.1.3 py38h06a4308_0
flit-core 3.6.0 pyhd3eb1b0_0
fonttools 4.38.0 pypi_0 pypi
fqdn 1.5.1 pypi_0 pypi
freetype 2.12.1 h4a9f257_0
frozenlist 1.3.3 py38h5eee18b_0
fsspec 2022.11.0 py38h06a4308_0
gensim 4.2.0 pypi_0 pypi
gflags 2.2.2 he6710b0_0
giflib 5.2.1 h5eee18b_3
gitdb 4.0.10 pypi_0 pypi
gitpython 3.1.30 pypi_0 pypi
glog 0.5.0 h2531618_0
grpc-cpp 1.39.0 hae934f6_5
huggingface-hub 0.11.1 pypi_0 pypi
huggingface_hub 0.13.1 py_0 huggingface
hyperlink 21.0.0 pypi_0 pypi
icu 58.2 he6710b0_3
idna 3.4 py38h06a4308_0
importlib-metadata 5.1.0 pypi_0 pypi
importlib_metadata 4.11.3 hd3eb1b0_0
importlib_resources 5.2.0 pyhd3eb1b0_1
incremental 22.10.0 pypi_0 pypi
intel-openmp 2021.4.0 h06a4308_3561
ipykernel 6.17.1 pyh210e3f2_0 conda-forge
ipython 8.7.0 pypi_0 pypi
ipython-genutils 0.2.0 pypi_0 pypi
ipywidgets 8.0.2 pyhd8ed1ab_1 conda-forge
isoduration 20.11.0 pypi_0 pypi
itemadapter 0.7.0 pypi_0 pypi
itemloaders 1.0.6 pypi_0 pypi
itsdangerous 2.0.1 pyhd3eb1b0_0
jedi 0.18.2 pypi_0 pypi
jinja2 3.1.2 py38h06a4308_0
jmespath 1.0.1 pypi_0 pypi
joblib 1.2.0 pypi_0 pypi
jpeg 9b h024ee3a_2
json5 0.9.10 pypi_0 pypi
jsonpickle 3.0.0 pypi_0 pypi
jsonpointer 2.3 pypi_0 pypi
jsonschema 4.17.3 py38h06a4308_0
jupyter-core 5.1.0 pypi_0 pypi
jupyter-events 0.5.0 pypi_0 pypi
jupyter-server 1.23.3 pypi_0 pypi
jupyter-server-fileid 0.6.0 pypi_0 pypi
jupyter-server-ydoc 0.4.0 pypi_0 pypi
jupyter-ydoc 0.2.2 pypi_0 pypi
jupyter_client 7.4.9 py38h06a4308_0
jupyter_core 5.2.0 py38h06a4308_0
jupyterlab 3.6.0a4 pypi_0 pypi
jupyterlab-pygments 0.2.2 pypi_0 pypi
jupyterlab-server 2.16.3 pypi_0 pypi
jupyterlab_widgets 3.0.3 pyhd8ed1ab_0 conda-forge
kiwisolver 1.4.4 pypi_0 pypi
krb5 1.19.4 h568e23c_0
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.38 h1181459_1
libboost 1.73.0 h3ff78a5_11
libbrotlicommon 1.0.9 h5eee18b_7
libbrotlidec 1.0.9 h5eee18b_7
libbrotlienc 1.0.9 h5eee18b_7
libcurl 7.88.1 h91b91d3_0
libedit 3.1.20221030 h5eee18b_0
libev 4.33 h7f8727e_1
libevent 2.1.12 h8f2d780_0
libffi 3.4.2 h6a678d5_6
libgcc-ng 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libnghttp2 1.46.0 hce63b2e_0
libpng 1.6.39 h5eee18b_0
libprotobuf 3.17.2 h4ff587b_1
libsodium 1.0.18 h7b6447c_0
libssh2 1.10.0 h8f2d780_0
libstdcxx-ng 11.2.0 h1234567_1
libthrift 0.14.2 hcc01f38_0
libtiff 4.1.0 h2733197_1
libuv 1.44.2 h5eee18b_0
libwebp 1.2.0 h89dd481_0
lz4-c 1.9.4 h6a678d5_0
markupsafe 2.1.1 py38h7f8727e_0
matplotlib 3.6.2 pypi_0 pypi
matplotlib-inline 0.1.6 py38h06a4308_0
mccabe 0.7.0 pypi_0 pypi
mistune 2.0.4 pypi_0 pypi
mkl 2021.4.0 h06a4308_640
mkl-service 2.4.0 py38h7f8727e_0
mkl_fft 1.3.1 py38hd3c417c_0
mkl_random 1.2.2 py38h51133e4_0
morfeusz2 1.99.6 pypi_0 pypi
multidict 6.0.2 py38h5eee18b_0
multiprocess 0.70.14 py38h06a4308_0
nbclassic 0.4.8 pypi_0 pypi
nbclient 0.7.2 pypi_0 pypi
nbconvert 7.2.5 pypi_0 pypi
nbformat 5.7.0 py38h06a4308_0
ncurses 6.4 h6a678d5_0
nest-asyncio 1.5.6 py38h06a4308_0
ninja 1.10.2 h06a4308_5
ninja-base 1.10.2 hd09550d_5
notebook 6.5.2 pypi_0 pypi
notebook-shim 0.2.2 pypi_0 pypi
numexpr 2.8.4 py38he184ba9_0
numpy 1.23.5 py38h14f4228_0
numpy-base 1.23.5 py38h31eccc5_0
oauthlib 3.2.2 pypi_0 pypi
opencv-python 4.6.0.66 pypi_0 pypi
openssl 1.1.1t h7f8727e_0
orc 1.6.9 ha97a36c_3
packaging 22.0 py38h06a4308_0
pandas 1.5.2 pypi_0 pypi
pandocfilters 1.5.0 pypi_0 pypi
parsel 1.7.0 pypi_0 pypi
parso 0.8.3 pyhd3eb1b0_0
pathlib 1.0.1 pypi_0 pypi
pathtools 0.1.2 pypi_0 pypi
pexpect 4.8.0 pyhd3eb1b0_3
pickleshare 0.7.5 pyhd3eb1b0_1003
pillow 9.3.0 pypi_0 pypi
pip 22.2.2 py38h06a4308_0
pkgutil-resolve-name 1.3.10 py38h06a4308_0
platformdirs 2.5.4 pypi_0 pypi
prometheus-client 0.15.0 pypi_0 pypi
promise 2.3 pypi_0 pypi
prompt-toolkit 3.0.33 pypi_0 pypi
protego 0.2.1 pypi_0 pypi
protobuf 4.21.12 pypi_0 pypi
psutil 5.9.0 py38h5eee18b_0
ptyprocess 0.7.0 pyhd3eb1b0_2
pure_eval 0.2.2 pyhd3eb1b0_0
pyarrow 10.0.1 pypi_0 pypi
pyasn1 0.4.8 pypi_0 pypi
pyasn1-modules 0.2.8 pypi_0 pypi
pycodestyle 2.10.0 pypi_0 pypi
pycparser 2.21 pyhd3eb1b0_0
pydispatcher 2.0.6 pypi_0 pypi
pyflakes 3.0.1 pypi_0 pypi
pygments 2.11.2 pyhd3eb1b0_0
pyopenssl 22.1.0 pypi_0 pypi
pyrsistent 0.18.0 py38heee7806_0
pysocks 1.7.1 py38h06a4308_0
python 3.8.15 h7a1cb2a_2
python-dateutil 2.8.2 pyhd3eb1b0_0
python-dotenv 0.21.0 pypi_0 pypi
python-fastjsonschema 2.16.2 py38h06a4308_0
python-json-logger 2.0.4 pypi_0 pypi
python-xxhash 2.0.2 py38h5eee18b_1
pytorch 1.7.1 py3.8_cuda10.1.243_cudnn7.6.3_0 pytorch
pytz 2022.6 pypi_0 pypi
pyyaml 6.0 py38h5eee18b_1
pyzmq 23.2.0 py38h6a678d5_0
queuelib 1.6.2 pypi_0 pypi
re2 2022.04.01 h295c915_0
readline 8.2 h5eee18b_0
regex 2022.10.31 pypi_0 pypi
requests 2.28.1 py38h06a4308_0
requests-file 1.5.1 pypi_0 pypi
requests-oauthlib 1.3.1 pypi_0 pypi
rfc3339-validator 0.1.4 pypi_0 pypi
rfc3986-validator 0.1.1 pypi_0 pypi
scikit-learn 1.1.3 pypi_0 pypi
scipy 1.9.3 pypi_0 pypi
scrapy 2.7.1 pypi_0 pypi
seaborn 0.12.1 pypi_0 pypi
send2trash 1.8.0 pypi_0 pypi
sentry-sdk 1.12.1 pypi_0 pypi
service-identity 21.1.0 pypi_0 pypi
setproctitle 1.3.2 pypi_0 pypi
setuptools 65.6.3 pypi_0 pypi
shortuuid 1.0.11 pypi_0 pypi
six 1.16.0 pyhd3eb1b0_1
smart-open 6.2.0 pypi_0 pypi
smmap 5.0.0 pypi_0 pypi
snappy 1.1.9 h295c915_0
sniffio 1.3.0 pypi_0 pypi
soupsieve 2.3.2.post1 pypi_0 pypi
sqlite 3.40.1 h5082296_0
stack-data 0.6.2 pypi_0 pypi
stack_data 0.2.0 pyhd3eb1b0_0
terminado 0.17.0 pypi_0 pypi
threadpoolctl 3.1.0 pypi_0 pypi
tinycss2 1.2.1 pypi_0 pypi
tk 8.6.12 h1ccaba5_0
tldextract 3.4.0 pypi_0 pypi
tokenizers 0.13.2 pypi_0 pypi
tomli 2.0.1 pypi_0 pypi
torchvision 0.8.2 py38_cu101 pytorch
tornado 6.2 py38h5eee18b_0
tqdm 4.64.1 py38h06a4308_0
traitlets 5.6.0 pypi_0 pypi
transformers 4.25.1 pypi_0 pypi
tweepy 4.12.1 pypi_0 pypi
twisted 22.10.0 pypi_0 pypi
twython 3.9.1 pypi_0 pypi
typing-extensions 4.4.0 py38h06a4308_0
typing_extensions 4.4.0 py38h06a4308_0
uri-template 1.2.0 pypi_0 pypi
uriparser 0.9.3 he6710b0_1
urllib3 1.26.13 pypi_0 pypi
utf8proc 2.6.1 h27cfd23_0
w3lib 2.1.0 pypi_0 pypi
wandb 0.13.7 pypi_0 pypi
wcwidth 0.2.5 pyhd3eb1b0_0
webcolors 1.12 pypi_0 pypi
webencodings 0.5.1 pypi_0 pypi
websocket-client 1.4.2 pypi_0 pypi
werkzeug 2.2.2 py38h06a4308_0
wheel 0.38.4 py38h06a4308_0
widgetsnbextension 4.0.3 py38h06a4308_0
xxhash 0.8.0 h7f8727e_3
xz 5.2.10 h5eee18b_1
y-py 0.5.4 pypi_0 pypi
yaml 0.2.5 h7b6447c_0
yarl 1.8.1 py38h5eee18b_0
ypy-websocket 0.5.0 pypi_0 pypi
zeromq 4.3.4 h2531618_0
zipp 3.11.0 py38h06a4308_0
zlib 1.2.13 h5eee18b_0
zope-interface 5.5.2 pypi_0 pypi
zstd 1.4.9 haebb681_0
```
|
CLOSED
| 2023-03-13T13:14:44
| 2023-03-13T17:54:19
| 2023-03-13T17:54:19
|
https://github.com/huggingface/datasets/issues/5633
|
ruplet
| 1
|
[] |
5,632
|
Dataset cannot convert too large dictionary
|
### Describe the bug
Hello everyone!
I tried to build a new dataset with the command "dict_valid = datasets.Dataset.from_dict({'input_values': values_array})".
However, I have a very large dataset (~400 GB) and it seems that `datasets` cannot handle this.
Indeed, I can create the dataset up to a certain size of my dictionary, and then I get the error "OverflowError: Python int too large to convert to C long".
Do you know how to solve this problem?
Unfortunately I cannot give reproducible code because I cannot share such a large file, but you can find the code below (it's a test on only a part of the validation data, ~10 GB, but the problem already appears there).
Thank you!
### Steps to reproduce the bug
import datasets
import h5py
import numpy as np

SAVE_DIR = './data/'
features = h5py.File(SAVE_DIR + 'features.hdf5', 'r')
valid_data = features["validation"]["data/features"]
v_array_values = [np.float32(item[()]) for item in valid_data.values()]
for i in range(len(v_array_values)):
    v_array_values[i] = v_array_values[i].round(decimals=5)
dict_valid = datasets.Dataset.from_dict({'input_values': v_array_values})
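A sketch of the alternative I would try next, assuming a `datasets` version recent enough to have `Dataset.from_generator`, so that the full Python list never has to be materialized at once:
```python
import datasets
import h5py
import numpy as np

SAVE_DIR = './data/'

def gen():
    with h5py.File(SAVE_DIR + 'features.hdf5', 'r') as features:
        for item in features["validation"]["data/features"].values():
            yield {"input_values": np.float32(item[()]).round(decimals=5)}

dict_valid = datasets.Dataset.from_generator(gen)
```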
### Expected behavior
The code is expected to give me a Huggingface dataset.
### Environment info
python: 3.8.15
numpy: 1.22.3
datasets: 2.3.2
pyarrow: 8.0.0
|
OPEN
| 2023-03-13T10:14:40
| 2023-03-16T15:28:57
| null |
https://github.com/huggingface/datasets/issues/5632
|
MaraLac
| 1
|
[] |
5,631
|
Custom split names
|
### Feature request
Hi,
I participated in multiple NLP tasks where there are more than just train, test, and validation splits; there can be multiple validation sets or test sets. But it seems that currently only those three splits are supported. It would be nice to have support for more splits on the hub. (Currently I can have more splits when I am loading datasets from URLs, but not from the hub.)
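For context, this is roughly what already works for me when loading from local files or URLs, and what I would like to be able to mirror on the hub (the file and split names are placeholders):
```python
from datasets import load_dataset

data_files = {
    "train": "train.jsonl",
    "validation_matched": "validation_matched.jsonl",
    "validation_mismatched": "validation_mismatched.jsonl",
    "test": "test.jsonl",
}
dataset = load_dataset("json", data_files=data_files)
```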
### Motivation
Easier access to more splits
### Your contribution
No
|
CLOSED
| 2023-03-12T17:21:43
| 2023-03-24T14:13:00
| 2023-03-24T14:13:00
|
https://github.com/huggingface/datasets/issues/5631
|
ErfanMoosaviMonazzah
| 1
|
[
"enhancement"
] |
5,629
|
load_dataset gives "403" error when using Financial phrasebank
|
When I try to load this dataset, I receive the following error:
ConnectionError: Couldn't reach https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip (error 403)
Has this been seen before? Thanks. The website loads when I try to access it manually.
|
OPEN
| 2023-03-11T07:46:39
| 2023-03-13T18:27:26
| null |
https://github.com/huggingface/datasets/issues/5629
|
Jimchoo91
| 1
|
[] |
5,627
|
Unable to load AutoTrain-generated dataset from the hub
|
### Describe the bug
DatasetGenerationError: An error occurred while generating the dataset -> ValueError: Couldn't cast ... because column names don't match
```
ValueError: Couldn't cast
_data_files: list<item: struct<filename: string>>
child 0, item: struct<filename: string>
child 0, filename: string
_fingerprint: string
_format_columns: list<item: string>
child 0, item: string
_format_kwargs: struct<>
_format_type: null
_indexes: struct<>
_output_all_columns: bool
_split: null
to
{'citation': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'features': {'image': {'_type': Value(dtype='string', id=None)}, 'target': {'names': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), '_type': Value(dtype='string', id=None)}}, 'homepage': Value(dtype='string', id=None), 'license': Value(dtype='string', id=None), 'splits': {'train': {'name': Value(dtype='string', id=None), 'num_bytes': Value(dtype='int64', id=None), 'num_examples': Value(dtype='int64', id=None), 'dataset_name': Value(dtype='null', id=None)}}}
because column names don't match
```
### Steps to reproduce the bug
Steps to reproduce:
1. `pip install datasets==2.10.1`
2. Attempt to load (private dataset). Note that I'm authenticated via ` huggingface-cli login`
```
from datasets import load_dataset
# load dataset
dataset = "ijmiller2/autotrain-data-betterbin-vision-10000"
dataset = load_dataset(dataset)
```
Here's the full traceback:
```Downloading and preparing dataset json/ijmiller2--autotrain-data-betterbin-vision-10000 to /Users/ian/.cache/huggingface/datasets/ijmiller2___json/ijmiller2--autotrain-data-betterbin-vision-10000-2eae034a9ff8a1a9/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...
Downloading data files: 100%|███████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2383.80it/s]
Extracting data files: 100%|█████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 505.95it/s]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:1874, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1868 writer = writer_class(
1869 features=writer._features,
1870 path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
1871 storage_options=self._fs.storage_options,
1872 embed_local_files=embed_local_files,
1873 )
-> 1874 writer.write_table(table)
1875 num_examples_progress_update += len(table)
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/arrow_writer.py:568, in ArrowWriter.write_table(self, pa_table, writer_batch_size)
567 pa_table = pa_table.combine_chunks()
--> 568 pa_table = table_cast(pa_table, self._schema)
569 if self.embed_local_files:
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/table.py:2312, in table_cast(table, schema)
2311 if table.schema != schema:
-> 2312 return cast_table_to_schema(table, schema)
2313 elif table.schema.metadata != schema.metadata:
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/table.py:2270, in cast_table_to_schema(table, schema)
2269 if sorted(table.column_names) != sorted(features):
-> 2270 raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match")
2271 arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
ValueError: Couldn't cast
_data_files: list<item: struct<filename: string>>
child 0, item: struct<filename: string>
child 0, filename: string
_fingerprint: string
_format_columns: list<item: string>
child 0, item: string
_format_kwargs: struct<>
_format_type: null
_indexes: struct<>
_output_all_columns: bool
_split: null
to
{'citation': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'features': {'image': {'_type': Value(dtype='string', id=None)}, 'target': {'names': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), '_type': Value(dtype='string', id=None)}}, 'homepage': Value(dtype='string', id=None), 'license': Value(dtype='string', id=None), 'splits': {'train': {'name': Value(dtype='string', id=None), 'num_bytes': Value(dtype='int64', id=None), 'num_examples': Value(dtype='int64', id=None), 'dataset_name': Value(dtype='null', id=None)}}}
because column names don't match
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
Input In [8], in <cell line: 6>()
4 # load dataset
5 dataset = "ijmiller2/autotrain-data-betterbin-vision-10000"
----> 6 dataset = load_dataset(dataset)
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/load.py:1782, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs)
1779 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1781 # Download and prepare data
-> 1782 builder_instance.download_and_prepare(
1783 download_config=download_config,
1784 download_mode=download_mode,
1785 verification_mode=verification_mode,
1786 try_from_hf_gcs=try_from_hf_gcs,
1787 num_proc=num_proc,
1788 )
1790 # Build dataset for splits
1791 keep_in_memory = (
1792 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1793 )
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:872, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
870 if num_proc is not None:
871 prepare_split_kwargs["num_proc"] = num_proc
--> 872 self._download_and_prepare(
873 dl_manager=dl_manager,
874 verification_mode=verification_mode,
875 **prepare_split_kwargs,
876 **download_and_prepare_kwargs,
877 )
878 # Sync info
879 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:967, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
963 split_dict.add(split_generator.split_info)
965 try:
966 # Prepare split will record examples associated to the split
--> 967 self._prepare_split(split_generator, **prepare_split_kwargs)
968 except OSError as e:
969 raise OSError(
970 "Cannot find data file. "
971 + (self.manual_download_instructions or "")
972 + "\nOriginal error:\n"
973 + str(e)
974 ) from None
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:1749, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, num_proc, max_shard_size)
1747 job_id = 0
1748 with pbar:
-> 1749 for job_id, done, content in self._prepare_split_single(
1750 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1751 ):
1752 if done:
1753 result = content
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:1892, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1890 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1891 e = e.__context__
-> 1892 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1894 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
I'm ultimately trying to generate my own performance metrics on validation data (before putting an endpoint into production) and so was hoping to load all or at least the validation subset from the hub.
I'm expecting the `load_dataset()` function to work as shown in the documentation [here](https://huggingface.co/docs/datasets/loading#hugging-face-hub):
```python
dataset = load_dataset(
"lhoestq/custom_squad",
revision="main" # tag name, or branch name, or commit hash
)
```
### Environment info
- `datasets` version: 2.10.1
- Platform: macOS-13.2.1-arm64-arm-64bit
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
|
OPEN
| 2023-03-10T17:25:58
| 2023-03-11T15:44:42
| null |
https://github.com/huggingface/datasets/issues/5627
|
ijmiller2
| 2
|
[] |
5,625
|
Allow "jsonl" data type signifier
|
### Feature request
`load_dataset` currently does not accept `jsonl` as type but only `json`.
### Motivation
I was working with one of the `run_translation` scripts and used my own datasets (`.jsonl`) as train_dataset. But the default code did not work because
```
FileNotFoundError: Couldn't find a dataset script at jsonl\jsonl.py or any data file in the same directory. Couldn't find 'jsonl' on the Hugging Face Hub either: FileNotFoundError: Dataset 'jsonl' doesn't exist on the Hub. If the repo is private or gated, make sure to log in with `huggingface-cli login`.
```
The reason is that the script has these lines to extract the data type from the file extension. Therefore, the derived type is `jsonl`, which is not recognized by `datasets`, as the error above shows.
https://github.com/huggingface/transformers/blob/ade26bf9912f69e2110137443e4406d7dbe253e7/examples/pytorch/translation/run_translation.py#L342-L356
I suppose you could argue that this is the script's fault (in which case I'll do a PR over at `transformers`) but it makes sense to me to add `jsonl` as an alias to `json` in `datasets`.
### Your contribution
At the moment I cannot work on this. I think it can be as "easy" as having an alias for json, namely jsonl.
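For reference, a minimal workaround until such an alias exists is to pass the `.jsonl` files to the `json` builder explicitly (the file names here are placeholders):
```python
from datasets import load_dataset

# .jsonl files are newline-delimited JSON, so the packaged "json" builder
# already parses them; only the "jsonl" name is unrecognized.
dataset = load_dataset("json", data_files={"train": "train.jsonl", "validation": "valid.jsonl"})
```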
|
OPEN
| 2023-03-10T13:21:48
| 2023-03-11T10:35:39
| null |
https://github.com/huggingface/datasets/issues/5625
|
BramVanroy
| 2
|
[
"enhancement"
] |
5,624
|
glue datasets returning -1 for test split
|
### Describe the bug
Downloading any dataset from GLUE gives -1 as the class labels for the test split. Train and validation have regular 0/1 class labels. This is also visible in the dataset card online.
### Steps to reproduce the bug
```python
from datasets import load_dataset

dataset = load_dataset("glue", "sst2")
for d in dataset["test"]:
    # prints out -1
    print(d["label"])
```
### Expected behavior
Expected behavior should be 0/1 instead of -1.
### Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.17
- Python version: 3.8.16
- PyArrow version: 8.0.0
- Pandas version: 1.5.3
|
CLOSED
| 2023-03-09T14:47:18
| 2023-03-09T16:49:29
| 2023-03-09T16:49:29
|
https://github.com/huggingface/datasets/issues/5624
|
lithafnium
| 1
|
[] |
5,618
|
Unpin fsspec < 2023.3.0 once issue fixed
|
Unpin `fsspec` upper version once root cause of our CI break is fixed.
See:
- #5614
|
CLOSED
| 2023-03-07T08:41:51
| 2023-03-07T13:39:03
| 2023-03-07T13:39:03
|
https://github.com/huggingface/datasets/issues/5618
|
albertvillanova
| 0
|
[] |
5,616
|
CI is broken after fsspec-2023.3.0 release
|
As reported by @lhoestq, our CI is broken after `fsspec` 2023.3.0 release:
```
FAILED tests/test_filesystem.py::test_compression_filesystems[Bz2FileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt']
At index 0 diff: {'name': 'file.txt', 'size': 70, 'type': 'file', 'created': 1678175677.1887748, 'islink': False, 'mode': 33188, 'uid': 1001, 'gid': 123, 'mtime': 1678175677.1887748, 'ino': 286957, 'nlink': 1} != 'file.txt'
Full diff:
[
- 'file.txt',
+ {'created': 1678175677.1887748,
+ 'gid': 123,
+ 'ino': 286957,
+ 'islink': False,
+ 'mode': 33188,
+ 'mtime': 1678175677.1887748,
+ 'name': 'file.txt',
+ 'nlink': 1,
+ 'size': 70,
+ 'type': 'file',
+ 'uid': 1001},
]
```
Also:
```
FAILED tests/test_filesystem.py::test_compression_filesystems[GzipFileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt']
FAILED tests/test_filesystem.py::test_compression_filesystems[Lz4FileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt']
FAILED tests/test_filesystem.py::test_compression_filesystems[XzFileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt']
FAILED tests/test_filesystem.py::test_compression_filesystems[ZstdFileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt']
===== 5 failed, 2134 passed, 18 skipped, 38 warnings in 157.21s (0:02:37) ======
```
See:
- fsspec/filesystem_spec#1205
|
CLOSED
| 2023-03-07T08:06:39
| 2023-03-07T08:37:29
| 2023-03-07T08:37:29
|
https://github.com/huggingface/datasets/issues/5616
|
albertvillanova
| 0
|
[
"bug"
] |
5,615
|
IterableDataset.add_column is unable to accept another IterableDataset as a parameter.
|
### Describe the bug
`IterableDataset.add_column` raises an exception when passing another `IterableDataset` as a parameter.
The method seems to accept only eagerly evaluated values.
https://github.com/huggingface/datasets/blob/35b789e8f6826b6b5a6b48fcc2416c890a1f326a/src/datasets/iterable_dataset.py#L1388-L1391
I wrote the code below to make it work.
```py
def add_column(dataset: IterableDataset, name: str, add_dataset: IterableDataset, key: str) -> IterableDataset:
    iter_add_dataset = iter(add_dataset)

    def add_column_fn(example):
        if name in example:
            raise ValueError(f"Error when adding {name}: column {name} is already in the dataset.")
        return {name: next(iter_add_dataset)[key]}

    return dataset.map(add_column_fn)
```
Is there another way to do it? Or is this intended?
### Steps to reproduce the bug
The code below raises `NotImplementedError`:
```py
from datasets import IterableDataset
def gen(num):
    yield {f"col{num}": 1}
    yield {f"col{num}": 2}
    yield {f"col{num}": 3}

ids1 = IterableDataset.from_generator(gen, gen_kwargs={"num": 1})
ids2 = IterableDataset.from_generator(gen, gen_kwargs={"num": 2})

new_ids = ids1.add_column("new_col", ids1)
for row in new_ids:
    print(row)
```
### Expected behavior
`IterableDataset.add_column` should be able to take an `IterableDataset` and lazily evaluated values as a parameter, since `IterableDataset` is lazily evaluated.
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-3.10.0-1160.36.2.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.7
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
CLOSED
| 2023-03-07T01:52:00
| 2023-03-09T15:24:05
| 2023-03-09T15:23:54
|
https://github.com/huggingface/datasets/issues/5615
|
zsaladin
| 1
|
[
"wontfix"
] |
5,613
|
Version mismatch with multiprocess and dill on Python 3.10
|
### Describe the bug
Grabbing the latest version of `datasets` and `apache-beam` with `poetry` using Python 3.10 gives a crash at runtime. The crash is
```
File "/Users/adpauls/sc/git/DSI-transformers/data/NQ/create_NQ_train_vali.py", line 1, in <module>
import datasets
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/datasets/__init__.py", line 43, in <module>
from .arrow_dataset import Dataset
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 65, in <module>
from .arrow_reader import ArrowReader
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/datasets/arrow_reader.py", line 30, in <module>
from .download.download_config import DownloadConfig
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/datasets/download/__init__.py", line 9, in <module>
from .download_manager import DownloadManager, DownloadMode
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/datasets/download/download_manager.py", line 35, in <module>
from ..utils.py_utils import NestedDataStructure, map_nested, size_str
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 40, in <module>
import multiprocess.pool
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/multiprocess/pool.py", line 609, in <module>
class ThreadPool(Pool):
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/multiprocess/pool.py", line 611, in ThreadPool
from .dummy import Process
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/multiprocess/dummy/__init__.py", line 87, in <module>
class Condition(threading._Condition):
AttributeError: module 'threading' has no attribute '_Condition'. Did you mean: 'Condition'?
```
I think this is a bad interaction of versions from `dill`, `multiprocess`, `apache-beam`, and `threading` from the Python (3.10) standard lib. Upgrading `multiprocess` to a version that does not crash like this is not possible because `apache-beam` pins `dill` to an old version:
```
Because multiprocess (0.70.10) depends on dill (>=0.3.2)
and apache-beam (2.45.0) depends on dill (>=0.3.1.1,<0.3.2), multiprocess (0.70.10) is incompatible with apache-beam (2.45.0).
And because no versions of apache-beam match >2.45.0,<3.0.0, multiprocess (0.70.10) is incompatible with apache-beam (>=2.45.0,<3.0.0).
So, because yyy depends on both apache-beam (^2.45.0) and multiprocess (0.70.10), version solving failed.
```
Perhaps it is not right to file a bug here, but I'm not totally sure whose fault it is. And in any case, this is an immediate blocker to using `datasets` out of the box.
Possibly related to https://github.com/huggingface/datasets/issues/5232.
### Steps to reproduce the bug
Steps to reproduce:
1. Make a poetry project with this configuration
```
[tool.poetry]
name = "yyy"
version = "0.1.0"
description = ""
authors = ["Adam Pauls <adpauls@gmail.com>"]
readme = "README.md"
packages = [{ include = "xxx" }]
[tool.poetry.dependencies]
python = ">=3.10,<3.11"
datasets = "^2.10.1"
apache-beam = "^2.45.0"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```
2. `poetry install`.
3. `poetry run python -c "import datasets"`.
### Expected behavior
Script runs.
### Environment info
Python 3.10. Here are the versions installed by `poetry`:
```
• Installing frozenlist (1.3.3)
• Installing idna (3.4)
• Installing multidict (6.0.4)
• Installing aiosignal (1.3.1)
• Installing async-timeout (4.0.2)
• Installing attrs (22.2.0)
• Installing certifi (2022.12.7)
• Installing charset-normalizer (3.1.0)
• Installing six (1.16.0)
• Installing urllib3 (1.26.14)
• Installing yarl (1.8.2)
• Installing aiohttp (3.8.4)
• Installing dill (0.3.1.1)
• Installing docopt (0.6.2)
• Installing filelock (3.9.0)
• Installing numpy (1.22.4)
• Installing pyparsing (3.0.9)
• Installing protobuf (3.19.4)
• Installing packaging (23.0)
• Installing python-dateutil (2.8.2)
• Installing pytz (2022.7.1)
• Installing pyyaml (6.0)
• Installing requests (2.28.2)
• Installing tqdm (4.65.0)
• Installing typing-extensions (4.5.0)
• Installing cloudpickle (2.2.1)
• Installing crcmod (1.7)
• Installing fastavro (1.7.2)
• Installing fasteners (0.18)
• Installing fsspec (2023.3.0)
• Installing grpcio (1.51.3)
• Installing hdfs (2.7.0)
• Installing httplib2 (0.20.4)
• Installing huggingface-hub (0.12.1)
• Installing multiprocess (0.70.9)
• Installing objsize (0.6.1)
• Installing orjson (3.8.7)
• Installing pandas (1.5.3)
• Installing proto-plus (1.22.2)
• Installing pyarrow (9.0.0)
• Installing pydot (1.4.2)
• Installing pymongo (3.13.0)
• Installing regex (2022.10.31)
• Installing responses (0.18.0)
• Installing xxhash (3.2.0)
• Installing zstandard (0.20.0)
• Installing apache-beam (2.45.0)
• Installing datasets (2.10.1)
```
|
OPEN
| 2023-03-06T17:14:41
| 2024-04-05T20:13:52
| null |
https://github.com/huggingface/datasets/issues/5613
|
adampauls
| 6
|
[] |
5,612
|
Arrow map type in parquet files unsupported
|
### Describe the bug
When I try to load parquet files that were processed with Spark, I get the following issue:
`ValueError: Arrow type map<string, string ('warc_headers')> does not have a datasets dtype equivalent.`
Strangely, loading the dataset with `streaming=True` solves the issue.
### Steps to reproduce the bug
The dataset is private, but this can be reproduced with any dataset that has Arrow maps.
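A hedged reproduction sketch, using a tiny parquet file with an Arrow map column instead of the private dataset (the file name and values are made up):
```python
import pyarrow as pa
import pyarrow.parquet as pq
from datasets import load_dataset

# Write a parquet file with a map<string, string> column, similar to what
# Spark produces for the 'warc_headers' field.
table = pa.table({
    "warc_headers": pa.array(
        [[("content-type", "text/html")], [("content-length", "42")]],
        type=pa.map_(pa.string(), pa.string()),
    )
})
pq.write_table(table, "map_example.parquet")

# Streaming mode works
ds_stream = load_dataset("parquet", data_files="map_example.parquet", streaming=True)
print(next(iter(ds_stream["train"])))

# Non-streaming mode raises:
# ValueError: Arrow type map<string, string> does not have a datasets dtype equivalent.
ds = load_dataset("parquet", data_files="map_example.parquet")
```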
### Expected behavior
The dataset should load regardless of whether streaming is True or not.
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-5.15.0-1029-gcp-x86_64-with-glibc2.31
- Python version: 3.10.7
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
|
OPEN
| 2023-03-06T12:03:24
| 2024-03-15T18:56:12
| null |
https://github.com/huggingface/datasets/issues/5612
|
TevenLeScao
| 4
|
[] |
5,610
|
use datasets streaming mode in trainer ddp mode cause memory leak
|
### Describe the bug
Using datasets streaming mode with the Trainer in DDP mode causes a memory leak.
### Steps to reproduce the bug
```python
import os
import time
import datetime
import sys
import numpy as np
import random
import torch
from torch.utils.data import Dataset, DataLoader, random_split, RandomSampler, SequentialSampler, DistributedSampler, BatchSampler

torch.manual_seed(42)

from transformers import GPT2LMHeadModel, GPT2Tokenizer, GPT2Config, GPT2Model, DataCollatorForLanguageModeling, AutoModelForCausalLM
from transformers import AdamW, get_linear_schedule_with_warmup

hf_model_path = './Wenzhong-GPT2-110M'
tokenizer = GPT2Tokenizer.from_pretrained(hf_model_path)
tokenizer.add_special_tokens({'pad_token': '<|pad|>'})

from datasets import load_dataset

gpus = 8
max_len = 576
batch_size_node = 17
save_step = 5000
gradient_accumulation = 2
dataloader_num = 4
max_step = 351000*1000//batch_size_node//gradient_accumulation//gpus
#max_step = -1
print("total_step:%d"%(max_step))

import datasets
datasets.version

dataset = load_dataset("text", data_files="./gpt_data_v1/*", split='train', cache_dir='./dataset_cache', streaming=True)
print('load over')
shuffled_dataset = dataset.shuffle(seed=42)
print('shuffle over')

def dataset_tokener(example, max_lenth=max_len):
    example['text'] = list(map(lambda x: x.strip()+'<|endoftext|>', example['text']))
    return tokenizer(example['text'], truncation=True, max_length=max_lenth, padding="longest")

new_new_dataset = shuffled_dataset.map(dataset_tokener, batched=True, remove_columns=["text"])
print('map over')

configuration = GPT2Config.from_pretrained(hf_model_path, output_hidden_states=False)
model = AutoModelForCausalLM.from_pretrained(hf_model_path)
model.resize_token_embeddings(len(tokenizer))

seed_val = 42
random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)

from transformers import Trainer, TrainingArguments
import os

print("strat train")
training_args = TrainingArguments(
    output_dir="./test_trainer",
    num_train_epochs=1.0,
    report_to="none",
    do_train=True,
    dataloader_num_workers=dataloader_num,
    local_rank=int(os.environ.get('LOCAL_RANK', -1)),
    overwrite_output_dir=True,
    logging_strategy='steps',
    logging_first_step=True,
    logging_dir="./logs",
    log_on_each_node=False,
    per_device_train_batch_size=batch_size_node,
    warmup_ratio=0.03,
    save_steps=save_step,
    save_total_limit=5,
    gradient_accumulation_steps=gradient_accumulation,
    max_steps=max_step,
    disable_tqdm=False,
    data_seed=42
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=new_new_dataset,
    eval_dataset=None,
    tokenizer=tokenizer,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    #compute_metrics=compute_metrics if training_args.do_eval and not is_torch_tpu_available() else None,
    #preprocess_logits_for_metrics=preprocess_logits_for_metrics
    #if training_args.do_eval and not is_torch_tpu_available()
    #else None,
)

trainer.train(resume_from_checkpoint=True)
```
### Expected behavior
Use the training code above.
My dataset ./gpt_data_v1 has 1000 files; each file is 120 MB.
The start command is: python -m torch.distributed.launch --nproc_per_node=8 my_train.py
Here is the result:

Here is the memory usage monitor over 12 hours:

Every dataloader worker allocates over 24 GB of CPU memory.
According to the memory usage monitor over 12 hours, sometimes a small amount of memory is released, but the total memory usage keeps increasing.
I think datasets streaming mode should not use this much memory, so there is probably a memory leak somewhere.
### Environment info
pytorch 1.11.0
py 3.8
cuda 11.3
transformers 4.26.1
datasets 2.9.0
|
OPEN
| 2023-03-06T05:26:49
| 2024-03-07T01:11:32
| null |
https://github.com/huggingface/datasets/issues/5610
|
gromzhu
| 3
|
[] |
5,609
|
`load_from_disk` vs `load_dataset` performance.
|
### Describe the bug
I have downloaded `openwebtext` (~12GB) and filtered out a small amount of junk (it's still huge). Now, I would like to use this filtered version for future work. It seems I have two choices:
1. Use `load_dataset` each time, relying on the cache mechanism, and re-run my filtering.
2. `save_to_disk` and then use `load_from_disk` to load the filtered version.
The performance of these two approaches is wildly different:
* Using `load_dataset` takes about 20 seconds to load the dataset, and a few seconds to re-filter (thanks to the brilliant filter/map caching)
* Using `load_from_disk` takes 14 minutes! And the second time I tried, the session just crashed (on a machine with 32GB of RAM)
I don't know if you'd call this a bug, but it seems like there shouldn't need to be two methods to load from disk, or that they should not take such wildly different amounts of time, or that one should not crash. Or maybe that the docs could offer some guidance about when to pick which method and why two methods exist, or just how do most people do it?
Something I couldn't work out from reading the docs was this: can I modify a dataset from the hub, save it (locally) and use `load_dataset` to load it? This [post seemed to suggest that the answer is no](https://discuss.huggingface.co/t/save-and-load-datasets/9260).
### Steps to reproduce the bug
See above
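To make the comparison concrete, a hedged sketch of the two approaches (the path and the filter are placeholders):
```python
from datasets import load_dataset, load_from_disk

# Approach 1: load from the hub each time and re-run the filter;
# the map/filter cache makes the second run take only a few seconds.
ds = load_dataset("openwebtext", split="train")
ds = ds.filter(lambda x: len(x["text"]) > 100)

# Approach 2: persist the filtered version once, then reload it;
# this load_from_disk call is the slow (and sometimes crashing) path described above.
ds.save_to_disk("/path/to/openwebtext_filtered")
ds2 = load_from_disk("/path/to/openwebtext_filtered")
```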
### Expected behavior
Load times should be about the same.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
OPEN
| 2023-03-05T05:27:15
| 2023-07-13T18:48:05
| null |
https://github.com/huggingface/datasets/issues/5609
|
davidgilbertson
| 4
|
[] |
5,608
|
audiofolder only creates dataset of 13 rows (files) when the data folder it's reading from has 20,000 mp3 files.
|
### Describe the bug
x = load_dataset("audiofolder", data_dir="x")
When running this, x is a dataset of 13 rows (files) when it should be 20,000 rows (files), as the data_dir "x" has 20,000 mp3 files. Does anyone know what could possibly cause this (naming convention of the mp3 files, etc.)?
### Steps to reproduce the bug
x = load_dataset("audiofolder", data_dir="x")
### Expected behavior
x = load_dataset("audiofolder", data_dir="x") should create a dataset of 20,000 rows (files).
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.16
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
CLOSED
| 2023-03-05T00:14:45
| 2023-03-12T00:02:57
| 2023-03-12T00:02:57
|
https://github.com/huggingface/datasets/issues/5608
|
joseph-y-cho
| 2
|
[] |
5,606
|
Add `Dataset.to_list` to the API
|
Since there is `Dataset.from_list` in the API, we should also add `Dataset.to_list` to be consistent.
Regarding the implementation, we can re-use `Dataset.to_dict`'s code and replace the `to_pydict` calls with `to_pylist`.
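For illustration, a sketch of the expected behaviour and of what can be used in the meantime (not the proposed implementation itself):
```python
from datasets import Dataset

ds = Dataset.from_list([{"a": 1, "b": "x"}, {"a": 2, "b": "y"}])

# A Dataset.to_list() would be the inverse of Dataset.from_list, returning one
# dict per row. Until it exists, an equivalent result can be obtained with:
rows = [ds[i] for i in range(len(ds))]            # simple, row by row
rows = ds.to_pandas().to_dict(orient="records")   # via pandas
print(rows)  # [{'a': 1, 'b': 'x'}, {'a': 2, 'b': 'y'}]
```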
|
CLOSED
| 2023-03-03T16:17:10
| 2023-03-27T13:26:40
| 2023-03-27T13:26:40
|
https://github.com/huggingface/datasets/issues/5606
|
mariosasko
| 3
|
[
"enhancement",
"good first issue"
] |
5,604
|
Problems with downloading The Pile
|
### Describe the bug
The downloads in the screenshot seem to be interrupted after some time and the last download throws a "Read timed out" error.

Here are the downloaded files:

They should all be 14 GB like the ones here (https://the-eye.eu/public/AI/pile/train/).
Alternatively, can I somehow download the files by myself and use the datasets preparing script?
### Steps to reproduce the bug
```python
dataset = load_dataset('the_pile', split='train', cache_dir=r'F:\datasets')
```
### Expected behavior
The files should be downloaded correctly.
### Environment info
- `datasets` version: 2.10.1
- Platform: Windows-10-10.0.22623-SP0
- Python version: 3.10.5
- PyArrow version: 9.0.0
- Pandas version: 1.4.2
|
CLOSED
| 2023-03-03T09:52:08
| 2023-10-14T02:15:52
| 2023-03-24T12:44:25
|
https://github.com/huggingface/datasets/issues/5604
|
sentialx
| 7
|
[] |
5,601
|
Authorization error
|
### Describe the bug
Get `Authorization error` when try to push data into hugginface datasets hub.
### Steps to reproduce the bug
I did all steps in the [tutorial](https://huggingface.co/docs/datasets/share),
1. `huggingface-cli login` with WRITE token
2. `git lfs install`
3. `git clone https://huggingface.co/datasets/namespace/your_dataset_name`
4.
```
cp /somewhere/data/*.json .
git lfs track *.json
git add .gitattributes
git add *.json
git commit -m "add json files"
```
but when I execute `git push` I got the error:
```
Uploading LFS objects: 0% (0/1), 0 B | 0 B/s, done.
batch response: Authorization error.
error: failed to push some refs to 'https://huggingface.co/datasets/zeusfsx/ukrainian-news'
```
The data is ~100 GB in size. I have five JSON files - different parts.
### Expected behavior
All my data is pushed to the hub.
### Environment info
- `datasets` version: 2.10.1
- Platform: macOS-13.2.1-arm64-arm-64bit
- Python version: 3.10.10
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
CLOSED
| 2023-03-02T12:08:39
| 2023-03-14T16:55:35
| 2023-03-14T16:55:34
|
https://github.com/huggingface/datasets/issues/5601
|
OleksandrKorovii
| 2
|
[] |
5,600
|
Dataloader getitem not working for DreamboothDatasets
|
### Describe the bug
The Dataloader `__getitem__` is not working as before (see the example of [DreamboothDatasets](https://github.com/huggingface/peft/blob/main/examples/lora_dreambooth/train_dreambooth.py#L451C14-L529)).
Moving datasets back to 2.8.0 solved the issue.
### Steps to reproduce the bug
1. Use DreamBoothDataset to load some images.
2. An error occurs after loading, when trying to visualise the images.
### Expected behavior
I was expecting a numpy array of the image
### Environment info
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
|
CLOSED
| 2023-03-02T11:00:27
| 2023-03-13T17:59:35
| 2023-03-13T17:59:35
|
https://github.com/huggingface/datasets/issues/5600
|
salahiguiliz
| 1
|
[] |
5,597
|
in-place dataset update
|
### Motivation
In the case where I create an empty `Dataset` and keep appending new rows to it, I found that a new dataset is created at each call. This looks quite memory-consuming. I just wonder if there is a more efficient way to do this.
```python
from datasets import Dataset
ds = Dataset.from_list([])
ds.add_item({'a': [1, 2, 3], 'b': 4})
print(ds)
>>> Dataset({
>>> features: [],
>>> num_rows: 0
>>> })
ds = ds.add_item({'a': [1, 2, 3], 'b': 4})
print(ds)
>>> Dataset({
>>> features: ['a', 'b'],
>>> num_rows: 1
>>> })
```
### Feature request
Call for in-place dataset update functions that update the existing `Dataset` in place without creating a new copy. The interface should follow the same style as PyTorch, where the in-place version of a `function` is named `function_`. For example, the in-place version of `add_item`, i.e., `add_item_`, immediately updates the `Dataset`.
```python
from datasets import Dataset
ds = Dataset.from_list([])
ds.add_item({'a': [1, 2, 3], 'b': 4})
print(ds)
>>> Dataset({
>>> features: [],
>>> num_rows: 0
>>> })
ds.add_item_({'a': [1, 2, 3], 'b': 4})
print(ds)
>>> Dataset({
>>> features: ['a', 'b'],
>>> num_rows: 1
>>> })
```
### Related Functions
* `.map`
* `.filter`
* `.add_item`
|
CLOSED
| 2023-03-01T12:58:18
| 2023-03-02T13:30:41
| 2023-03-02T03:47:00
|
https://github.com/huggingface/datasets/issues/5597
|
speedcell4
| 3
|
[
"wontfix"
] |
5,596
|
[TypeError: Couldn't cast array of type] Can only load a subset of the dataset
|
### Describe the bug
I'm trying to load this [dataset](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues) which consists of jsonl files and I get the following error:
```
casted_values = _c(array.values, feature[0])
File "/opt/conda/lib/python3.7/site-packages/datasets/table.py", line 1839, in wrapper
return func(array, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/table.py", line 2132, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
struct<type: string, action: string, datetime: timestamp[s], author: string, title: string, description: string, comment_id: int64, comment: string, labels: list<item: string>>
to
{'type': Value(dtype='string', id=None), 'action': Value(dtype='string', id=None), 'datetime': Value(dtype='timestamp[s]', id=None), 'author': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'comment_id': Value(dtype='int64', id=None), 'comment': Value(dtype='string', id=None)}
```
But I can succesfully load a subset of the dataset, for example this works:
```python
ds = load_dataset('bigcode-data/the-stack-gh-issues', split="train", data_files=[f"data/data-{x}.jsonl" for x in range(10)])
```
and `ds.features` returns:
```
{'repo': Value(dtype='string', id=None),
'org': Value(dtype='string', id=None),
'issue_id': Value(dtype='int64', id=None),
'issue_number': Value(dtype='int64', id=None),
'pull_request': {'user_login': Value(dtype='string', id=None),
'repo': Value(dtype='string', id=None),
'number': Value(dtype='int64', id=None)},
'events': [{'type': Value(dtype='string', id=None),
'action': Value(dtype='string', id=None),
'datetime': Value(dtype='timestamp[s]', id=None),
'author': Value(dtype='string', id=None),
'title': Value(dtype='string', id=None),
'description': Value(dtype='string', id=None),
'comment_id': Value(dtype='int64', id=None),
'comment': Value(dtype='string', id=None)}]}
```
So I'm not sure if there's an issue with just some of the files. I'd be grateful for any suggestions to fix the issue.
Side note:
I saw this related [issue](https://github.com/huggingface/datasets/issues/3637) and tried to write a loading script to have `events` as a `Sequence` and not a `list` [here](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues/blob/main/loading.py) (the script was renamed). It worked with a subset locally, but not for the remote dataset: it can't find https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues/resolve/main/data.
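For context, a hedged sketch of the features declaration such a loading script might use, with the field names taken from the `ds.features` output above (this is not the actual script):
```python
import datasets

features = datasets.Features({
    "repo": datasets.Value("string"),
    "org": datasets.Value("string"),
    "issue_id": datasets.Value("int64"),
    "issue_number": datasets.Value("int64"),
    "pull_request": {
        "user_login": datasets.Value("string"),
        "repo": datasets.Value("string"),
        "number": datasets.Value("int64"),
    },
    # 'events' declared as a Sequence of structs (rather than a plain list),
    # including the 'labels' field that some shards apparently carry.
    "events": datasets.Sequence({
        "type": datasets.Value("string"),
        "action": datasets.Value("string"),
        "datetime": datasets.Value("timestamp[s]"),
        "author": datasets.Value("string"),
        "title": datasets.Value("string"),
        "description": datasets.Value("string"),
        "comment_id": datasets.Value("int64"),
        "comment": datasets.Value("string"),
        "labels": datasets.Sequence(datasets.Value("string")),
    }),
})
```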
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('bigcode-data/the-stack-gh-issues', split="train")
```
### Expected behavior
Load the entire dataset succesfully.
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-debian-10.13
- Python version: 3.7.12
- PyArrow version: 9.0.0
- Pandas version: 1.3.4
|
CLOSED
| 2023-03-01T12:53:08
| 2023-12-05T03:22:00
| 2023-03-02T11:12:11
|
https://github.com/huggingface/datasets/issues/5596
|
loubnabnl
| 5
|
[] |
5,594
|
Error while downloading the xtreme udpos dataset
|
### Describe the bug
Hi,
I am facing an error while downloading the xtreme udpos dataset using load_dataset. I have datasets 2.10.1 installed
```Downloading and preparing dataset xtreme/udpos.Arabic to /compute/tir-1-18/skhanuja/multilingual_ft/cache/data/xtreme/udpos.Arabic/1.0.0/29f5d57a48779f37ccb75cb8708d1095448aad0713b425bdc1ff9a4a128a56e4...
Downloading data: 16%|██████████████▏ | 56.9M/355M [03:11<16:43, 297kB/s]
Generating train split: 0%| | 0/6075 [00:00<?, ? examples/s]Traceback (most recent call last):
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 1608, in _prepare_split_single
for key, record in generator:
File "/home/skhanuja/.cache/huggingface/modules/datasets_modules/datasets/xtreme/29f5d57a48779f37ccb75cb8708d1095448aad0713b425bdc1ff9a4a128a56e4/xtreme.py", line 732, in _generate_examples
yield from UdposParser.generate_examples(config=self.config, filepath=filepath, **kwargs)
File "/home/skhanuja/.cache/huggingface/modules/datasets_modules/datasets/xtreme/29f5d57a48779f37ccb75cb8708d1095448aad0713b425bdc1ff9a4a128a56e4/xtreme.py", line 921, in generate_examples
for path, file in filepath:
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/download/download_manager.py", line 158, in __iter__
yield from self.generator(*self.args, **self.kwargs)
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/download/download_manager.py", line 211, in _iter_from_path
yield from cls._iter_tar(f)
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/download/download_manager.py", line 167, in _iter_tar
for tarinfo in stream:
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/tarfile.py", line 2475, in __iter__
tarinfo = self.next()
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/tarfile.py", line 2344, in next
raise ReadError("unexpected end of data")
tarfile.ReadError: unexpected end of data
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/skhanuja/Optimal-Resource-Allocation-for-Multilingual-Finetuning/src/train_al.py", line 855, in <module>
main()
File "/home/skhanuja/Optimal-Resource-Allocation-for-Multilingual-Finetuning/src/train_al.py", line 487, in main
train_dataset = load_dataset(dataset_name, source_language, split="train", cache_dir=args.cache_dir, download_mode="force_redownload")
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/load.py", line 1782, in load_dataset
builder_instance.download_and_prepare(
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 872, in download_and_prepare
self._download_and_prepare(
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 1649, in _download_and_prepare
super()._download_and_prepare(
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 967, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 1488, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 1644, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
```
train_dataset = load_dataset('xtreme', 'udpos.English', split="train", cache_dir=args.cache_dir, download_mode="force_redownload")
```
### Expected behavior
Download the udpos dataset
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-3.10.0-957.1.3.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.8
- PyArrow version: 10.0.1
- Pandas version: 1.5.2
|
CLOSED
| 2023-02-28T23:40:53
| 2023-11-04T20:45:56
| 2023-07-24T14:22:18
|
https://github.com/huggingface/datasets/issues/5594
|
simran-khanuja
| 21
|
[] |
5,586
|
.sort() is broken when used after .filter(), only in 2.10.0
|
### Describe the bug
Hi, thank you for your support!
It seems like the addition of multiple key sort (#5502) in 2.10.0 broke the `.sort()` method.
After filtering a dataset with `.filter()`, the `.sort()` seems to refer to the query_table index of the previous unfiltered dataset, resulting in an IndexError.
This only happens with the 2.10.0 release.
### Steps to reproduce the bug
```Python
from datasets import load_dataset
# dataset with length of 1104
ds = load_dataset('glue', 'ax')['test']
ds = ds.filter(lambda x: x['idx'] > 1100)
ds.sort('premise')
print('Done')
```
File "/home/dongkeun/datasets_test/test.py", line 5, in <module>
ds.sort('premise')
File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 528, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/fingerprint.py", line 511, in wrapper
out = func(dataset, *args, **kwargs)
File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3959, in sort
sort_table = query_table(
File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 588, in query_table
_check_valid_index_key(key, size)
File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 537, in _check_valid_index_key
_check_valid_index_key(max(key), size=size)
File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 531, in _check_valid_index_key
raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
IndexError: Invalid key: 1103 is out of bounds for size 3
### Expected behavior
It should sort the dataset and print "Done". Which it does on 2.9.0.
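A possible workaround in the meantime (an assumption on my part, not verified against 2.10.0) is to materialize the filtered indices before sorting:
```python
from datasets import load_dataset

ds = load_dataset('glue', 'ax')['test']
ds = ds.filter(lambda x: x['idx'] > 1100)
ds = ds.flatten_indices()  # drop the indices mapping left behind by .filter()
ds = ds.sort('premise')
print('Done')
```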
### Environment info
- `datasets` version: 2.10.0
- Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
CLOSED
| 2023-02-28T12:18:09
| 2023-02-28T18:17:26
| 2023-02-28T17:21:59
|
https://github.com/huggingface/datasets/issues/5586
|
MattYoon
| 1
|
[
"bug"
] |
5,585
|
Cache is not transportable
|
### Describe the bug
I would like to share cache between two machines (a Windows host machine and a WSL instance).
I run most my code in WSL. I have just run out of space in the virtual drive. Rather than expand the drive size, I plan to move to cache to the host Windows machine, thereby sharing the downloads.
I'm hoping that I can just copy/paste the cache files, but I notice that a lot of the file names start with the path name, e.g. `_home_davidg_.cache_huggingface_datasets_conll2003_default-451...98.lock` where `home/davidg` is where the cache is in WSL.
This seems to suggest that the cache is not portable/cannot be centralised or shared. Is this the case, or are the files that start with path names not integral to the caching mechanism? Because copying the cache files _seems_ to work, but I'm not filled with confidence that something isn't going to break.
A related issue: when trying to load a dataset that should come from cache (running in WSL, pointing to the cache on the Windows host), it seemed to work fine, but it still uses a WSL directory for `.cache\huggingface\modules\datasets_modules`. I see nothing in the docs about this, or about how to point it to a different place.
I have asked a related question on the forum: https://discuss.huggingface.co/t/is-datasets-cache-operating-system-agnostic/32656
### Steps to reproduce the bug
View the cache directory in WSL/Windows.
### Expected behavior
Cache can be shared between (virtual) machines and be transportable.
It would be nice to have a simple way to say "Dear Hugging Face packages, please put ALL your cache in `blah/de/blah`" and have all the Hugging Face packages respect that single location.
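For reference, a sketch of the knobs I'm aware of for relocating the cache (the paths are made up, and whether this makes the cache fully portable between machines is exactly the open question):
```python
import os

# Point the caches at a directory on the Windows host, mounted in WSL under /mnt/c.
# These must be set before importing datasets.
os.environ["HF_HOME"] = "/mnt/c/hf_cache"                      # umbrella location for HF caches
os.environ["HF_DATASETS_CACHE"] = "/mnt/c/hf_cache/datasets"   # datasets-specific override

from datasets import load_dataset

# cache_dir can also be passed per call
ds = load_dataset("conll2003", cache_dir="/mnt/c/hf_cache/datasets")
```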
### Environment info
```
- `datasets` version: 2.9.0
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
```
|
CLOSED
| 2023-02-28T00:53:06
| 2023-02-28T21:26:52
| 2023-02-28T21:26:52
|
https://github.com/huggingface/datasets/issues/5585
|
davidgilbertson
| 2
|
[] |
5,584
|
Unable to load coyo700M dataset
|
### Describe the bug
Seeing this error when downloading https://huggingface.co/datasets/kakaobrain/coyo-700m:
```ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.```
Full stack trace
```Downloading and preparing dataset parquet/kakaobrain--coyo-700m to /root/.cache/huggingface/datasets/kakaobrain___parquet/kakaobrain--coyo-700m-ae729692ae3e0073/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...
Downloading data files: 100%
1/1 [00:00<00:00, 63.35it/s]
Extracting data files: 100%
1/1 [00:00<00:00, 5.00it/s]
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
[/usr/local/lib/python3.8/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1859 _time = time.time()
-> 1860 for _, table in generator:
1861 if max_shard_size is not None and writer._num_bytes > max_shard_size:
9 frames
ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
[/usr/local/lib/python3.8/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1890 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1891 e = e.__context__
-> 1892 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1893
1894 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset```
### Steps to reproduce the bug
```
from datasets import load_dataset
hf_dataset = load_dataset("kakaobrain/coyo-700m")
```
### Expected behavior
The above commands should load the dataset successfully, or handle the exception and continue loading the remainder.
### Environment info
Google Colab (any).
|
CLOSED
| 2023-02-27T19:35:03
| 2023-02-28T07:27:59
| 2023-02-28T07:27:58
|
https://github.com/huggingface/datasets/issues/5584
|
manuaero
| 1
|
[] |
5,581
|
[DOC] Mistaken docs on set_format
|
### Describe the bug
https://huggingface.co/docs/datasets/v2.10.0/en/package_reference/main_classes#datasets.Dataset.set_format
<img width="700" alt="image" src="https://user-images.githubusercontent.com/36224762/221506973-ae2e3991-60a7-4d4e-99f8-965c6eb61e59.png">
While actually running it will result in:
<img width="1094" alt="image" src="https://user-images.githubusercontent.com/36224762/221507032-007dab82-8781-4319-b21a-e6e4d40d97b3.png">
### Steps to reproduce the bug
_
### Expected behavior
_
### Environment info
- `datasets` version: 2.10.0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
|
CLOSED
| 2023-02-27T08:03:09
| 2023-02-28T19:19:17
| 2023-02-28T19:19:17
|
https://github.com/huggingface/datasets/issues/5581
|
NightMachinery
| 1
|
[
"good first issue"
] |
5,577
|
Cannot load `the_pile_openwebtext2`
|
### Describe the bug
I hit the same bug mentioned in #3053, which was never fixed: several `reddit_scores` values are larger than what `int8`, or even `int16`, can hold. https://huggingface.co/datasets/the_pile_openwebtext2/blob/main/the_pile_openwebtext2.py#L62
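For illustration, the kind of one-line change in the loading script that would avoid the overflow (my assumption of a fix, not an official patch):
```python
import datasets

# Current declaration in the_pile_openwebtext2.py (line 62), which overflows:
#   "reddit_scores": datasets.Sequence(datasets.Value("int8")),
# A wider integer type would accommodate scores such as 528:
reddit_scores_feature = datasets.Sequence(datasets.Value("int32"))
```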
### Steps to reproduce the bug
```python3
from datasets import load_dataset
dataset = load_dataset("the_pile_openwebtext2")
```
### Expected behavior
The dataset should load as normal.
### Environment info
- `datasets` version: 2.10.0
- Platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
CLOSED
| 2023-02-24T13:01:48
| 2023-02-24T14:01:09
| 2023-02-24T14:01:09
|
https://github.com/huggingface/datasets/issues/5577
|
wjfwzzc
| 1
|
[] |
5,576
|
I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value("int8"))`, but the actual values can be well outside the max range for 8-bit integers.
|
I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value("int8"))`, but the actual values can be well outside the max range for 8-bit integers.
I worked around this by downloading the `the_pile_openwebtext2.py` and editing it to use local files and drop reddit scores as a column (not needed for my purposes).
_Originally posted by @tc-wolf in https://github.com/huggingface/datasets/issues/3053#issuecomment-1281392422_
|
CLOSED
| 2023-02-24T12:57:49
| 2023-02-24T12:58:31
| 2023-02-24T12:58:18
|
https://github.com/huggingface/datasets/issues/5576
|
wjfwzzc
| 1
|
[] |
5,575
|
Metadata for each column
|
### Feature request
Being able to put some metadata for each column as a string or any other type.
### Motivation
Let me motivate this with an example: say we are experimenting with embeddings produced by some image encoder network, and we want to iterate through a couple of preprocessing pipelines and see which one works better on our downstream task. As a workaround right now, I compute the hash of the preprocessing that the images went through and include it in the new column's name. It would be nice to attach some kind of metadata to each column in these scenarios, as sketched below.
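To make the current workaround concrete, a small sketch (the column names and the preprocessing config are invented):
```python
import hashlib
import json

from datasets import Dataset

preprocessing_cfg = {"resize": 224, "normalize": "imagenet", "center_crop": True}
cfg_hash = hashlib.sha1(json.dumps(preprocessing_cfg, sort_keys=True).encode()).hexdigest()[:8]

ds = Dataset.from_dict({"image_id": [0, 1, 2]})
# The metadata ends up encoded in the column name instead of being attached to the column:
ds = ds.add_column(f"embedding_{cfg_hash}", [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])
print(ds.column_names)  # ['image_id', 'embedding_<hash>']
```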
### Your contribution
Maybe we could map another relational-like database to hold the metadata?
|
OPEN
| 2023-02-24T10:53:44
| 2024-01-05T21:48:35
| null |
https://github.com/huggingface/datasets/issues/5575
|
parsa-ra
| 5
|
[
"enhancement"
] |
5,574
|
c4 dataset streaming fails with `FileNotFoundError`
|
### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("c4", "en", split="train", streaming=True)
next(iter(dataset))
```
causes a
```
FileNotFoundError: https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/en/c4-train.00000-of-01024.json.gz
```
I can download this file manually though e.g. by entering this URL in a browser.
There is an underlying HTTP 403 status code:
```
aiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://cdn-lfs.huggingface.co/datasets/allenai/c4/8ef8d75b0e045dec4aa5123a671b4564466b0707086a7ed1ba8721626dfffbc9?response-content-disposition=attachment%3B+filename*%3DUTF-8''c4-train.00000-of-01024.json.gz%3B+filename%3D%22c4-train.00000-of-01024.json.gz%22%3B&response-content-type=application/gzip&Expires=1677483770&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL2RhdGFzZXRzL2FsbGVuYWkvYzQvOGVmOGQ3NWIwZTA0NWRlYzRhYTUxMjNhNjcxYjQ1NjQ0NjZiMDcwNzA4NmE3ZWQxYmE4NzIxNjI2ZGZmZmJjOT9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPWFwcGxpY2F0aW9uJTJGZ3ppcCIsIkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTY3NzQ4Mzc3MH19fV19&Signature=yjL3UeY72cf2xpnvPvD68eAYOEe2qtaUJV55sB-jnPskBJEMwpMJcBZvg2~GqXZdM3O-GWV-Z3CI~d4u5VCb4YZ-HlmOjr3VBYkvox2EKiXnBIhjMecf2UVUPtxhTa9kBVlWjqu4qKzB9gKXZF2Cwpp5ctLzapEaT2nnqF84RAL-rsqMA3I~M8vWWfivQsbBK63hMfgZqqKMgdWM0iKMaItveDl0ufQ29azMFmsR7qd8V7sU2Z-F1fAeohS8HpN9OOnClW34yi~YJ2AbgZJJBXA~qsylfVA0Qp7Q~yX~q4P8JF1vmJ2BjkiSbGrj3bAXOGugpOVU5msI52DT88yMdA__&Key-Pair-Id=KVTP0A1DKRTAX')
```
### Expected behavior
This should retrieve the first example from the C4 validation set. This worked a few days ago but stopped working now.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
CLOSED
| 2023-02-24T07:57:32
| 2023-12-18T07:32:32
| 2023-02-27T04:03:38
|
https://github.com/huggingface/datasets/issues/5574
|
krasserm
| 12
|
[] |
5,572
|
Datasets 2.10.0 does not reuse the dataset cache
|
### Describe the bug
download_mode="reuse_dataset_if_exists" will always consider that a dataset doesn't exist.
Specifically, after losing the internet connection, trying to load a dataset a second time within ten seconds results in a connection error, with the following traceback:
```
File ~/jupyterlab/.direnv/python-3.9.6/lib/python3.9/site-packages/datasets/load.py:1174, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1165 except Exception as e: # noqa: catch any exception of hf_hub and consider that the dataset doesn't exist
1166 if isinstance(
1167 e,
1168 (
(...)
1172 ),
1173 ):
-> 1174 raise ConnectionError(f"Couldn't reach '{path}' on the Hub ({type(e).__name__})")
1175 elif "404" in str(e):
1176 msg = f"Dataset '{path}' doesn't exist on the Hub"
ConnectionError: Couldn't reach 'lsb/tenk' on the Hub (ConnectionError)
```
This has been around since at least v2.0.
### Steps to reproduce the bug
```
from datasets import load_dataset
import numpy as np
tenk = load_dataset("lsb/tenk") # ten thousand integers
print(np.average(tenk['train']['a'])) # prints 4999.5
### now disconnect your internet
tenk_too = load_dataset("lsb/tenk", download_mode="reuse_dataset_if_exists")
# Raises ConnectionError: Couldn't reach 'lsb/tenk' on the Hub (ConnectionError)
```
### Expected behavior
I expected that I would be able to reuse the dataset I just downloaded.
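For what it's worth, a sketch of the offline-mode switch I would have expected to cover this case (although `reuse_dataset_if_exists` should arguably not need it, which is the point of this report):
```python
import os

# Tell datasets not to reach the Hub at all and to rely on the local cache.
# Must be set before importing datasets.
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

tenk_too = load_dataset("lsb/tenk", download_mode="reuse_dataset_if_exists")
```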
### Environment info
- `datasets` version: 2.10.0
- Platform: macOS-13.1-arm64-arm-64bit
- Python version: 3.9.6
- PyArrow version: 7.0.0
- Pandas version: 1.5.2
|
CLOSED
| 2023-02-23T17:28:11
| 2023-02-23T18:03:55
| 2023-02-23T18:03:55
|
https://github.com/huggingface/datasets/issues/5572
|
lsb
| 0
|
[] |
5,571
|
load_dataset fails for JSON in windows
|
### Describe the bug
Steps:
1. Created a dataset in a Linux VM and created a small sample using dataset.to_json() method.
2. Downloaded the JSON file to my local Windows machine for working and saved in say - r"C:\Users\name\file.json"
3. I am reading the file in my local PyCharm - the location of python file is different than the location of the JSON.
4. When I read it using load_dataset("json", args.input_json), it throws an error from builder.py:
raise InvalidConfigName(
f"Bad characters from black list '{invalid_windows_characters}' found in '{self.name}'. "
f"They could create issues when creating a directory for this config on Windows filesystem."
6. When I bring the data to the current directory, it works fine.
### Steps to reproduce the bug
Steps:
1. Created a dataset in a Linux VM and created a small sample using dataset.to_json() method.
2. Downloaded the JSON file to my local Windows machine for working and saved in say - r"C:\Users\name\file.json"
3. I am reading the file in my local PyCharm - the location of python file is different than the location of the JSON.
4. When I read it using load_dataset("json", args.input_json), it throws an error from builder.py:
raise InvalidConfigName(
f"Bad characters from black list '{invalid_windows_characters}' found in '{self.name}'. "
f"They could create issues when creating a directory for this config on Windows filesystem."
6. When I bring the data to the current directory, it works fine.
### Expected behavior
Should be able to read from a path different than current directory in Windows machine.
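As a point of comparison, a hedged sketch of passing the path via `data_files`: the second positional argument of `load_dataset` is the configuration name, which seems to be where the Windows-character check is applied.
```python
from datasets import load_dataset

# Passing the file as data_files (rather than as the config name) avoids the
# invalid-config-name check on the absolute Windows path.
dataset = load_dataset("json", data_files=r"C:\Users\name\file.json")
```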
### Environment info
datasets version: 2.3.1
python version: 3.8
Windows OS
|
CLOSED
| 2023-02-23T16:50:11
| 2023-02-24T13:21:47
| 2023-02-24T13:21:47
|
https://github.com/huggingface/datasets/issues/5571
|
abinashsahu
| 2
|
[] |
5,570
|
load_dataset gives FileNotFoundError on imagenet-1k if license is not accepted on the hub
|
### Describe the bug
When calling ```load_dataset('imagenet-1k')```, a FileNotFoundError is raised if you are not logged in, or if you are logged in with huggingface-cli but have not accepted the licence on the hub. There is no error once the licence is accepted.
### Steps to reproduce the bug
```
from datasets import load_dataset
imagenet = load_dataset("imagenet-1k", split="train", streaming=True)
FileNotFoundError: Couldn't find a dataset script at /content/imagenet-1k/imagenet-1k.py or any data file in the same directory. Couldn't find 'imagenet-1k' on the Hugging Face Hub either: FileNotFoundError: Dataset 'imagenet-1k' doesn't exist on the Hub
```
tested on a colab notebook.
### Expected behavior
I would expect a specific error indicating that I have to login then accept the dataset licence.
I find this bug very relevant as this code is on a guide on the [Huggingface documentation for Datasets](https://huggingface.co/docs/datasets/about_mapstyle_vs_iterable)
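For reference, a sketch of the sequence that ends up working once the licence is accepted on the Hub (`use_auth_token` as in the 2.x API):
```python
from huggingface_hub import login
from datasets import load_dataset

login()  # paste a token from https://huggingface.co/settings/tokens
# After accepting the imagenet-1k licence on the Hub, this no longer raises FileNotFoundError:
imagenet = load_dataset("imagenet-1k", split="train", streaming=True, use_auth_token=True)
```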
### Environment info
google colab cpu-only instance
|
CLOSED
| 2023-02-23T16:44:32
| 2023-07-24T15:18:50
| 2023-07-24T15:18:50
|
https://github.com/huggingface/datasets/issues/5570
|
buoi
| 2
|
[] |