| Column | Type | Min | Max |
|---|---|---|---|
| number | int64 | 2 | 7.91k |
| title | string (length) | 1 | 290 |
| body | string (length) | 0 | 228k |
| state | string (2 classes) | | |
| created_at | timestamp[s] | 2020-04-14 18:18:51 | 2025-12-16 10:45:02 |
| updated_at | timestamp[s] | 2020-04-29 09:23:05 | 2025-12-16 19:34:46 |
| closed_at | timestamp[s] | 2020-04-29 09:23:05 | 2025-12-16 14:20:48 |
| url | string (length) | 48 | 51 |
| author | string (length) | 3 | 26 |
| comments_count | int64 | 0 | 70 |
| labels | list (length) | 0 | 4 |
2,957
MultiWOZ Dataset NonMatchingChecksumError
## Describe the bug The checksums for the downloaded MultiWOZ dataset and source MultiWOZ dataset aren't matching. ## Steps to reproduce the bug Both of the below dataset versions yield the checksum error: ```python from datasets import load_dataset dataset = load_dataset('multi_woz_v22', 'v2.2') dataset = load_dataset('multi_woz_v22', 'v2.2_active_only') ``` ## Expected results For the above calls to `load_dataset` to work. ## Actual results NonMatchingChecksumError. Traceback: > Traceback (most recent call last): File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3441, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-15-4e91280e112e>", line 1, in <module> dataset = load_dataset('multi_woz_v22', 'v2.2') File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/datasets/load.py", line 847, in load_dataset builder_instance.download_and_prepare( File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/datasets/builder.py", line 615, in download_and_prepare self._download_and_prepare( File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare verify_checksums( File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_001.json'] ## Environment info - `datasets` version: 1.11.0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.10 - PyArrow version: 5.0.0
CLOSED
2021-09-22T23:45:00
2022-03-15T16:07:02
2022-03-15T16:07:02
https://github.com/huggingface/datasets/issues/2957
bradyneal
1
[ "bug" ]
2,956
Cache problem in the `load_dataset` method for local compressed file(s)
## Describe the bug Cache problem in the `load_dataset` method: when modifying a compressed file in a local folder, `load_dataset` doesn't detect the change and loads the previous version. ## Steps to reproduce the bug To test it directly, I have prepared a [Google Colaboratory notebook](https://colab.research.google.com/drive/11Em_Amoc-aPGhSBIkSHU2AvEh24nVayy?usp=sharing) that shows this behavior. For this example, I have created a toy dataset at: https://huggingface.co/datasets/SaulLu/toy_struc_dataset This dataset is composed of two versions: - v1 on commit `a6beb46` which has a single example `{'id': 1, 'value': {'tag': 'a', 'value': 1}}` in file `train.jsonl.gz` - v2 on commit `e7935f4` (`main` head) which has a single example `{'attr': 1, 'id': 1, 'value': 'a'}` in file `train.jsonl.gz` With a terminal, we can start by getting the v1 version of the dataset ```bash git lfs install git clone https://huggingface.co/datasets/SaulLu/toy_struc_dataset cd toy_struc_dataset git checkout a6beb46 ``` Then we can load it with python and look at the content: ```python from datasets import load_dataset path = "/content/toy_struc_dataset" dataset = load_dataset(path, data_files={"train": "*.jsonl.gz"}) print(dataset["train"][0]) ``` Output ``` {'id': 1, 'value': {'tag': 'a', 'value': 1}} # This is the example in v1 ``` With a terminal, we can now switch to the v2 version of the dataset ```bash git checkout main ``` Then we can load it with python and look at the content: ```python from datasets import load_dataset path = "/content/toy_struc_dataset" dataset = load_dataset(path, data_files={"train": "*.jsonl.gz"}) print(dataset["train"][0]) ``` Output ``` {'id': 1, 'value': {'tag': 'a', 'value': 1}} # This is the example in v1 (not v2) ``` ## Expected results The last output should have been ``` {"id":1, "value": "a", "attr": 1} # This is the example in v2 ``` ## Ideas As discussed offline with Quentin, if the cache hash were sensitive to changes in a compressed file, we would probably not have the problem anymore. This situation leads me to suggest 2 other features: - to also have a `load_from_cache_file` argument in the `load_dataset` method - to reorganize the cache so that we can delete the caches related to a dataset (cf issue #ToBeFilledSoon) And thanks again for this great library :hugs: ## Environment info - `datasets` version: 1.12.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 3.0.0
OPEN
2021-09-22T13:34:32
2023-08-31T16:49:01
null
https://github.com/huggingface/datasets/issues/2956
SaulLu
1
[ "bug" ]
2,953
Trying to get in touch regarding a security issue
Hey there! I'd like to report a security issue but cannot find contact instructions on your repository. If not a hassle, might you kindly add a `SECURITY.md` file with an email, or another contact method? GitHub [recommends](https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository) this best practice to ensure security issues are responsibly disclosed, and it would serve as a simple instruction for security researchers in the future. Thank you for your consideration, and I look forward to hearing from you! (cc @huntr-helper)
CLOSED
2021-09-21T15:58:13
2021-10-21T15:16:43
2021-10-21T15:16:43
https://github.com/huggingface/datasets/issues/2953
JamieSlome
1
[]
2,945
Protect master branch
After an accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into the `datasets` master branch, all commits present in the feature branch were permanently added to the `datasets` master branch history, e.g.: - 00cc036fea7c7745cfe722360036ed306796a3f2 - 13ae8c98602bbad8197de3b9b425f4c78f582af1 - ... I propose to protect our master branch, so that we avoid accidentally making this kind of mistake in the future: - [x] For Pull Requests using GitHub, allow only squash merging, so that only a single commit per Pull Request is merged into the master branch - Currently, simple merge commits are already disabled - I propose to disable rebase merging as well - ~~Protect the master branch from direct pushes (to avoid accidentally pushing merge commits)~~ - ~~This protection would reject direct pushes to the master branch~~ - ~~If so, for each release (when we need to commit directly to the master branch), we should previously disable the protection and re-enable it again after the release~~ - [x] Protect the master branch only from direct pushing of **merge commits** - GitHub offers the possibility to protect the master branch only from merge commits (which are the ones that introduce all the commits from the feature branch into the master branch). - No need to disable/re-enable this protection on each release The purpose of this Issue is to open a discussion about this problem and to agree on a solution.
CLOSED
2021-09-20T06:47:01
2021-09-20T12:01:27
2021-09-20T12:00:16
https://github.com/huggingface/datasets/issues/2945
albertvillanova
2
[ "enhancement" ]
2,944
Add `remove_columns` to `IterableDataset`
**Is your feature request related to a problem? Please describe.** ```python from datasets import load_dataset dataset = load_dataset("c4", 'realnewslike', streaming=True, split='train') dataset = dataset.remove_columns('url') ``` ``` AttributeError: 'IterableDataset' object has no attribute 'remove_columns' ``` **Describe the solution you'd like** It would be nice to have `.remove_columns()` to match the `Dataset` API. **Describe alternatives you've considered** This can be done with a single call to `.map()`; I can try to help add this. 🤗
CLOSED
2021-09-20T04:01:00
2021-10-08T15:31:53
2021-10-08T15:31:53
https://github.com/huggingface/datasets/issues/2944
changjonathanc
1
[ "enhancement", "good first issue" ]
2,943
Backwards compatibility broken for cached datasets that use `.filter()`
## Describe the bug After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with `ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}` Related feature: https://github.com/huggingface/datasets/pull/2836 :question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :) ## Workaround Remove the cache for the given dataset, e.g. `rm -rf ~/.cache/huggingface/datasets/librispeech_asr`. ## Steps to reproduce the bug 1. Delete `~/.cache/huggingface/datasets/librispeech_asr` if it exists. 2. `pip install datasets==1.11.0` and run the following snippet: ```python from datasets import load_dataset ids = ["1272-141231-0000"] ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") ds = ds.filter(lambda x: x["id"] in ids) ``` 3. `pip install datasets==1.12.1` and re-run the code ## Expected results Same result as with the previous `datasets` version. ## Actual results ```bash Reusing dataset librispeech_asr (./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1) Loading cached processed dataset at ./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1/cache-cd1c29844fdbc87a.arrow Traceback (most recent call last): File "./repos/transformers/src/transformers/models/wav2vec2/try_dataset.py", line 5, in <module> ds = ds.filter(lambda x: x["id"] in ids) File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper out = func(self, *args, **kwargs) File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2169, in filter indices = self.map( File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1686, in map return self._map_single( File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper out = func(self, *args, **kwargs) File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in _map_single return Dataset.from_file(cache_file_name, info=info, split=self.split) File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 343, in from_file return cls( File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 282, in __init__ self.info.features = self.info.features.reorder_fields_as(inferred_features) File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1151, in reorder_fields_as return Features(recursive_reorder(self, other)) File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1140, in recursive_reorder raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position) ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)} Process finished with exit code 1 ``` ## Environment info - `datasets` version: 1.12.1 - Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17 - Python version: 3.8.10 - PyArrow version: 5.0.0
CLOSED
2021-09-19T16:16:37
2021-09-20T16:25:43
2021-09-20T16:25:42
https://github.com/huggingface/datasets/issues/2943
anton-l
6
[ "bug" ]
2,941
OSCAR unshuffled_original_ko: NonMatchingSplitsSizesError
## Describe the bug Cannot download OSCAR `unshuffled_original_ko` due to `NonMatchingSplitsSizesError`. ## Steps to reproduce the bug ```python >>> dataset = datasets.load_dataset('oscar', 'unshuffled_original_ko') NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=25292102197, num_examples=7345075, dataset_name='oscar'), 'recorded': SplitInfo(name='train', num_bytes=25284578514, num_examples=7344907, dataset_name='oscar')}] ``` ## Expected results Loading is successful. ## Actual results Loading throws above error. ## Environment info - `datasets` version: 1.12.1 - Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 5.0.0
OPEN
2021-09-18T10:39:13
2022-01-19T14:10:07
null
https://github.com/huggingface/datasets/issues/2941
ayaka14732
1
[ "bug", "dataset bug" ]
2,937
load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied
## Describe the bug Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset('wiki_bio') ``` ## Expected results It is expected that the dataset downloads without any errors. ## Actual results PermissionError, see trace below: ``` Using custom data configuration default Downloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\Users\username\.cache\huggingface\datasets\wiki_bio\default\1.1.0\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\load.py", line 1112, in load_dataset builder_instance.download_and_prepare( File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 644, in download_and_prepare self._save_info() File "C:\Users\username\.conda\envs\hf\lib\contextlib.py", line 120, in __exit__ next(self.gen) File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 598, in incomplete_dir os.rename(tmp_dir, dirname) PermissionError: [WinError 5] Access is denied: 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9.incomplete' -> 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9' ``` By commenting out the os.rename() [L604](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L604) and the shutil.rmtree() [L607](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L607) lines in my virtual environment, I was able to get the load process to complete, rename the directory manually, and then rerun `load_dataset('wiki_bio')` to get what I needed. It seems that os.rename() in the `incomplete_dir` context manager is the culprit. Here's another project, [Conan](https://github.com/conan-io/conan/issues/6560), with a similar os.rename() issue, if it helps debug this one. ## Environment info - `datasets` version: 1.12.1 - Platform: Windows-10-10.0.22449-SP0 - Python version: 3.8.12 - PyArrow version: 5.0.0
CLOSED
2021-09-17T16:52:10
2022-08-24T13:09:08
2022-08-24T13:09:08
https://github.com/huggingface/datasets/issues/2937
daqieq
4
[ "bug" ]
2,934
to_tf_dataset keeps a reference to the open data somewhere, causing issues on Windows
To reproduce: ```python import datasets as ds import weakref import gc d = ds.load_dataset("mnist", split="train") ref = weakref.ref(d._data.table) tfd = d.to_tf_dataset("image", batch_size=1, shuffle=False, label_cols="label") del tfd, d gc.collect() assert ref() is None, "Error: there is at least one reference left" ``` This causes issues because the table holds a reference to an open arrow file that should be closed. So on Windows it's not possible to delete or move the arrow file afterwards. Moreover, the CI test of the `to_tf_dataset` method isn't able to clean up the temporary arrow files because of this. cc @Rocketknight1
CLOSED
2021-09-17T15:26:53
2021-10-13T09:03:23
2021-10-13T09:03:23
https://github.com/huggingface/datasets/issues/2934
lhoestq
2
[ "bug" ]
2,932
Conda build fails
## Describe the bug Current `datasets` version in conda is 1.9 instead of 1.12. The build of the conda package fails.
CLOSED
2021-09-17T12:49:22
2021-09-21T15:31:10
2021-09-21T15:31:10
https://github.com/huggingface/datasets/issues/2932
albertvillanova
2
[ "bug" ]
2,930
Mutable columns argument breaks set_format
## Describe the bug If you pass a mutable list to the `columns` argument of `set_format` and then change the list afterwards, the returned columns also change (see the sketch after this record). ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("glue", "cola") column_list = ["idx", "label"] dataset.set_format("python", columns=column_list) column_list[1] = "foo" # Change the list after we call `set_format` dataset['train'][:4].keys() ``` ## Expected results ```python dict_keys(['idx', 'label']) ``` ## Actual results ```python dict_keys(['idx']) ```
CLOSED
2021-09-16T12:27:22
2021-09-16T13:50:53
2021-09-16T13:50:53
https://github.com/huggingface/datasets/issues/2930
Rocketknight1
1
[ "bug" ]
2,927
Datasets 1.12 dataset.filter TypeError: get_indices_from_mask_function() got an unexpected keyword argument
## Describe the bug Upgrading to 1.12 caused the `dataset.filter` call to fail with > get_indices_from_mask_function() got an unexpected keyword argument valid_rel_labels ## Steps to reproduce the bug ```python def filter_good_rows( ex: Dict, valid_rel_labels: Set[str], valid_ner_labels: Set[str], tokenizer: PreTrainedTokenizerFast, ) -> bool: """Get the good rows""" encoding = get_encoding_for_text(text=ex["text"], tokenizer=tokenizer) ex["encoding"] = encoding for relation in ex["relations"]: if not is_valid_relation(relation, valid_rel_labels): return False for span in ex["spans"]: if not is_valid_span(span, valid_ner_labels, encoding): return False return True def get_dataset(): loader_path = str(Path(__file__).parent / "prodigy_dataset_builder.py") ds = load_dataset( loader_path, name="prodigy-dataset", data_files=sorted(file_paths), cache_dir=cache_dir, )["train"] valid_ner_labels = set(vocab.ner_category) valid_relations = set(vocab.relation_types.keys()) ds = ds.filter( filter_good_rows, fn_kwargs=dict( valid_rel_labels=valid_relations, valid_ner_labels=valid_ner_labels, tokenizer=vocab.tokenizer, ), keep_in_memory=True, num_proc=num_proc, ) ``` `ds` is a `DatasetDict` produced by a jsonl dataset. This runs fine on 1.11 but fails on 1.12. **Stack Trace** ## Expected results I expect the 1.12 `datasets` filter to filter the dataset without raising, as it does on 1.11. ## Actual results ``` tf_ner_rel_lib/dataset.py:695: in load_prodigy_arrow_datasets_from_jsonl ds = ds.filter( ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:185: in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/fingerprint.py:398: in wrapper out = func(self, *args, **kwargs) ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:2169: in filter indices = self.map( ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:1686: in map return self._map_single( ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:185: in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/fingerprint.py:398: in wrapper out = func(self, *args, **kwargs) ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:2048: in _map_single batch = apply_function_on_filtered_inputs( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ inputs = {'_input_hash': [2108817714, 1477695082, -1021597032, 2130671338, -1260483858, -1203431639, ...], '_task_hash': [18070...ons', 'relations', 'relations', ...], 'answer': ['accept', 'accept', 'accept', 'accept', 'accept', 'accept', ...], ...} indices = [0, 1, 2, 3, 4, 5, ...], check_same_num_examples = False, offset = 0 def apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples=False, offset=0): """Utility to apply the function on a selection of columns.""" nonlocal update_data fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns] if offset == 0: effective_indices = indices else: effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset processed_inputs = ( > function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) ) E TypeError: get_indices_from_mask_function() got an unexpected keyword argument 'valid_rel_labels' ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:1939: TypeError ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.1 - Platform: Mac - Python version: 3.8.9 - PyArrow version: pyarrow==5.0.0
CLOSED
2021-09-16T01:14:02
2021-09-20T16:23:22
2021-09-20T16:23:21
https://github.com/huggingface/datasets/issues/2927
timothyjlaurent
2
[ "bug" ]
2,926
Error when downloading datasets to non-traditional cache directories
## Describe the bug When the cache directory is linked (soft link) to a directory on a NetApp device, the download fails. ## Steps to reproduce the bug ```bash ln -s /path/to/netapp/.cache ~/.cache ``` ```python load_dataset("imdb") ``` ## Expected results Successfully loading IMDB dataset ## Actual results ``` datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=33432835, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='test', num_bytes=659932, num_examples=503, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.1.2 - Platform: Ubuntu - Python version: 3.8 ## Extra notes Stranger yet, trying to debug the phenomenon, I found the range of results to vary a lot without clear direction: - With `cache_dir="/path/to/netapp/.cache"` the same thing happens. - However, when linking `~/netapp/` to `/path/to/netapp` *and* setting `cache_dir="~/netapp/.cache/huggingface/datasets"` - it does work - On the other hand, when linking `~/.cache` to `~/netapp/.cache` without using `cache_dir`, it doesn't work anymore. While I could only test it on a NetApp device, it might have to do with any other mounted FS. Thanks :)
OPEN
2021-09-15T19:59:46
2021-11-24T21:42:31
null
https://github.com/huggingface/datasets/issues/2926
dar-tau
1
[ "bug" ]
2,924
"File name too long" error for file locks
## Describe the bug Getting the following error when calling `load_dataset("gar1t/test")`: ``` OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock' ``` ## Steps to reproduce the bug Where the user cache dir (e.g. `~/.cache`) is on a file system that limits filenames to 255 chars (e.g. ext4): ```python from datasets import load_dataset load_dataset("gar1t/test") ``` ## Expected results Expect the function to return without an error. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<python_venv>/lib/python3.9/site-packages/datasets/load.py", line 1112, in load_dataset builder_instance.download_and_prepare( File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 644, in download_and_prepare self._save_info() File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 765, in _save_info with FileLock(lock_path): File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 323, in __enter__ self.acquire() File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 272, in acquire self._acquire() File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 403, in _acquire fd = os.open(self._lock_file, open_mode) OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock' ``` ## Environment info - `datasets` version: 1.12.1 - Platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow version: 5.0.0
CLOSED
2021-09-15T18:16:50
2023-12-08T13:39:51
2021-10-29T09:42:24
https://github.com/huggingface/datasets/issues/2924
gar1t
12
[ "bug" ]
2,923
Loading an autonlp dataset raises in normal mode but not in streaming mode
## Describe the bug The same dataset (from autonlp) raises an error in normal mode, but does not raise in streaming mode ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("severo/autonlp-data-sentiment_detection-3c8bcd36", split="train", streaming=False) ## raises an error load_dataset("severo/autonlp-data-sentiment_detection-3c8bcd36", split="train", streaming=True) ## does not raise an error ``` ## Expected results Both calls should raise the same error ## Actual results Call with `streaming=False`: ``` 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 5825.42it/s] Using custom data configuration autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b Downloading and preparing dataset json/autonlp-data-sentiment_detection-3c8bcd36 to /home/slesage/.cache/huggingface/datasets/json/autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b/0.0.0/d75ead8d5cfcbe67495df0f89bd262f0023257fbbbd94a730313295f3d756d50... 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 15923.71it/s] 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 3346.88it/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset builder_instance.download_and_prepare( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare self._download_and_prepare( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1187, in _prepare_split writer.write_table(table) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 418, in write_table pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 418, in <listcomp> pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema) File "pyarrow/table.pxi", line 1249, in pyarrow.lib.Table.__getitem__ File "pyarrow/table.pxi", line 1825, in pyarrow.lib.Table.column File "pyarrow/table.pxi", line 1800, in pyarrow.lib.Table._ensure_integer_index KeyError: 'Field "splits" does not exist in table schema' ``` Call with `streaming=True`: ``` 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 6000.43it/s] Using custom data configuration autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 46916.15it/s] 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 148734.18it/s] ``` ## Environment info - `datasets` version: 1.12.1.dev0 - Platform: Linux-5.11.0-1017-aws-x86_64-with-glibc2.29 - Python version: 3.8.11 - PyArrow version: 4.0.1
CLOSED
2021-09-15T17:44:38
2022-04-12T10:09:40
2022-04-12T10:09:39
https://github.com/huggingface/datasets/issues/2923
severo
1
[ "bug", "dataset-viewer" ]
2,921
Using a list of multi-dim numpy arrays raises an error "can only convert 1-dimensional array values"
This error has been introduced in https://github.com/huggingface/datasets/pull/2361 To reproduce: ```python import numpy as np from datasets import Dataset d = Dataset.from_dict({"a": [np.zeros((2, 2))]}) ``` raises ```python Traceback (most recent call last): File "playground/ttest.py", line 5, in <module> d = Dataset.from_dict({"a": [np.zeros((2, 2))]}).with_format("torch") File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_dataset.py", line 458, in from_dict pa_table = InMemoryTable.from_pydict(mapping=mapping) File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 365, in from_pydict return cls(pa.Table.from_pydict(*args, **kwargs)) File "pyarrow/table.pxi", line 1639, in pyarrow.lib.Table.from_pydict File "pyarrow/array.pxi", line 332, in pyarrow.lib.asarray File "pyarrow/array.pxi", line 223, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_writer.py", line 107, in __arrow_array__ out = pa.array(self.data, type=type) File "pyarrow/array.pxi", line 306, in pyarrow.lib.array File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values ```
CLOSED
2021-09-15T17:12:11
2021-09-15T17:21:45
2021-09-15T17:21:45
https://github.com/huggingface/datasets/issues/2921
lhoestq
0
[]
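Until the fix for #2921 above landed, a workaround was to hand pyarrow nested Python lists rather than a 2-D array (an assumption about usage, not the eventual fix itself):

```python
import numpy as np
from datasets import Dataset

# Nested lists sidestep the "can only convert 1-dimensional array values"
# error, since each level is a plain 1-D sequence from pyarrow's point of view.
d = Dataset.from_dict({"a": [np.zeros((2, 2)).tolist()]})
print(d[0])  # {'a': [[0.0, 0.0], [0.0, 0.0]]}
```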
2,919
Unwanted progress bars when accessing examples
When accessing examples from a dataset formatted for pytorch, progress bars appear: ```python In [1]: import datasets as ds In [2]: d = ds.Dataset.from_dict({"a": [0, 1, 2]}).with_format("torch") In [3]: d[0] 100%|████████████████████████████████| 1/1 [00:00<00:00, 3172.70it/s] Out[3]: {'a': tensor(0)} ``` This is because the pytorch formatter calls `map_nested`, which uses progress bars. cc @sgugger
CLOSED
2021-09-15T14:05:10
2021-09-15T17:21:49
2021-09-15T17:18:23
https://github.com/huggingface/datasets/issues/2919
lhoestq
1
[]
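Until the formatter stopped hitting the progress-bar path, the bars could be silenced globally. The helper's location has moved between `datasets` versions, so treat this call as an assumption about recent releases:

```python
import datasets

# Turn off the library's tqdm progress bars globally (recent datasets versions).
datasets.disable_progress_bar()
```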
2,918
`Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming
## Describe the bug Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`: ```python ClientPayloadError: 400, message='Can not decode content-encoding: gzip' ``` cc @lhoestq ## Steps to reproduce the bug ```python from datasets import load_dataset iter_dset = iter( load_dataset("scitldr", name="FullText", split="test", streaming=True) ) next(iter_dset) ``` ## Expected results Returns the first sample of the dataset ## Actual results Calling `__next__` crashes with the following Traceback: ```python ----> 1 next(dset_iter) ~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self) 339 340 def __iter__(self): --> 341 for key, example in self._iter(): 342 if self.features: 343 # we encode the example for ClassLabel feature types for example ~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in _iter(self) 336 else: 337 ex_iterable = self._ex_iterable --> 338 yield from ex_iterable 339 340 def __iter__(self): ~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self) 76 77 def __iter__(self): ---> 78 for key, example in self.generate_examples_fn(**self.kwargs): 79 yield key, example 80 ~\.cache\huggingface\modules\datasets_modules\datasets\scitldr\72d6e2195786c57e1d343066fb2cc4f93ea39c5e381e53e6ae7c44bbfd1f05ef\scitldr.py in _generate_examples(self, filepath, split) 162 163 with open(filepath, encoding="utf-8") as f: --> 164 for id_, row in enumerate(f): 165 data = json.loads(row) 166 if self.config.name == "AIC": ~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in read(self, length) 496 else: 497 length = min(self.size - self.loc, length) --> 498 return super().read(length) 499 500 async def async_fetch_all(self): ~\miniconda3\envs\datasets\lib\site-packages\fsspec\spec.py in read(self, length) 1481 # don't even bother calling fetch 1482 return b"" -> 1483 out = self.cache._fetch(self.loc, self.loc + length) 1484 self.loc += len(out) 1485 return out ~\miniconda3\envs\datasets\lib\site-packages\fsspec\caching.py in _fetch(self, start, end) 378 elif start < self.start: 379 if self.end - end > self.blocksize: --> 380 self.cache = self.fetcher(start, bend) 381 self.start = start 382 else: ~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in wrapper(*args, **kwargs) 86 def wrapper(*args, **kwargs): 87 self = obj or args[0] ---> 88 return sync(self.loop, func, *args, **kwargs) 89 90 return wrapper ~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in sync(loop, func, timeout, *args, **kwargs) 67 raise FSTimeoutError 68 if isinstance(result[0], BaseException): ---> 69 raise result[0] 70 return result[0] 71 ~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in _runner(event, coro, result, timeout) 23 coro = asyncio.wait_for(coro, timeout=timeout) 24 try: ---> 25 result[0] = await coro 26 except Exception as ex: 27 result[0] = ex ~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in async_fetch_range(self, start, end) 538 if r.status == 206: 539 # partial content, as expected --> 540 out = await r.read() 541 elif "Content-Length" in r.headers: 542 cl = int(r.headers["Content-Length"]) ~\miniconda3\envs\datasets\lib\site-packages\aiohttp\client_reqrep.py in read(self) 1030 if self._body is None: 1031 try: -> 1032 self._body = await self.content.read() 1033 for trace in self._traces: 1034 await trace.send_response_chunk_received( ~\miniconda3\envs\datasets\lib\site-packages\aiohttp\streams.py in read(self, n) 342 async def read(self, n: int = -1) -> bytes: 343 if self._exception is not None: --> 344 raise self._exception 345 346 # migration problem; with DataQueue you have to catch ClientPayloadError: 400, message='Can not decode content-encoding: gzip' ``` ## Environment info - `datasets` version: 1.12.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.8.5 - PyArrow version: 2.0.0 - aiohttp version: 3.7.4.post0
CLOSED
2021-09-15T13:06:07
2021-12-01T08:15:00
2021-12-01T08:15:00
https://github.com/huggingface/datasets/issues/2918
SBrandeis
3
[ "bug", "streaming" ]
2,917
windows download abnormal
## Describe the bug The script clearly exists (it is accessible from the browser), but downloading it fails on Windows. I then tried again on Linux and it downloads normally. Why? ## Steps to reproduce the bug Python 3.7 + Windows: ![image](https://user-images.githubusercontent.com/52347799/133436174-4303f847-55d5-434f-a749-08da3bb9b654.png) ## Expected results It can be downloaded normally. ## Actual results It can't. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: Windows - Python version: 3.7 - PyArrow version:
CLOSED
2021-09-15T12:45:35
2021-09-16T17:17:48
2021-09-16T17:17:48
https://github.com/huggingface/datasets/issues/2917
wei1826676931
3
[ "bug" ]
2,914
Having a dependency defining fsspec entrypoint raises an AttributeError when importing datasets
## Describe the bug In one of my projects, I defined a custom fsspec filesystem with an entrypoint. My guess is that by doing so, a variable named `spec` is created in the module `fsspec` (it is created by the for loop over the defined entrypoints, see the loop in question [here](https://github.com/intake/filesystem_spec/blob/0589358d8a029ed6b60d031018f52be2eb721291/fsspec/__init__.py#L55)). So `fsspec.spec`, which previously referred to the `spec` submodule, now refers to that `spec` variable. This makes the import of datasets fail, as it uses that `fsspec.spec`. ## Steps to reproduce the bug I could reproduce the bug with a dummy poetry project. Here is the pyproject.toml: ```toml [tool.poetry] name = "debug-datasets" version = "0.1.0" description = "" authors = ["Pierre Godard"] [tool.poetry.dependencies] python = "^3.8" datasets = "^1.11.0" [tool.poetry.dev-dependencies] [build-system] requires = ["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" [tool.poetry.plugins."fsspec.specs"] "file2" = "fsspec.implementations.local.LocalFileSystem" ``` The only other file is an empty `debug_datasets/__init__.py`. The overall structure of the project is as follows: ``` . ├── pyproject.toml └── debug_datasets └── __init__.py ``` Then, within the project folder, run: ``` poetry install poetry run python ``` And in the python interpreter, try to import `datasets`: ``` import datasets ``` ## Expected results The import should run successfully. ## Actual results Here is the trace of the error I get: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/godarpi/.cache/pypoetry/virtualenvs/debug-datasets-JuFzTKL--py3.8/lib/python3.8/site-packages/datasets/__init__.py", line 33, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/home/godarpi/.cache/pypoetry/virtualenvs/debug-datasets-JuFzTKL--py3.8/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 48, in <module> from .filesystems import extract_path_from_uri, is_remote_filesystem File "/home/godarpi/.cache/pypoetry/virtualenvs/debug-datasets-JuFzTKL--py3.8/lib/python3.8/site-packages/datasets/filesystems/__init__.py", line 30, in <module> def is_remote_filesystem(fs: fsspec.spec.AbstractFileSystem) -> bool: AttributeError: 'EntryPoint' object has no attribute 'AbstractFileSystem' ``` ## Suggested fix In `datasets/filesystems/__init__.py`, line 30, replace: ``` def is_remote_filesystem(fs: fsspec.spec.AbstractFileSystem) -> bool: ``` by: ``` def is_remote_filesystem(fs: fsspec.AbstractFileSystem) -> bool: ``` I will come up with a PR soon if this effectively solves the issue. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: WSL2 (Ubuntu 20.04.1 LTS) - Python version: 3.8.5 - PyArrow version: 5.0.0 - `fsspec` version: 2021.8.1
CLOSED
2021-09-15T07:54:06
2021-09-15T16:49:17
2021-09-15T16:49:16
https://github.com/huggingface/datasets/issues/2914
pierre-godard
1
[ "bug" ]
2,913
timit_asr dataset only includes one text phrase
## Describe the bug The dataset 'timit_asr' only includes one text phrase. It only includes the transcription "Would such an act of refusal be useful?" multiple times rather than different phrases. ## Steps to reproduce the bug Note: I am following the tutorial https://huggingface.co/blog/fine-tune-wav2vec2-english 1. Install the dataset and other packages ```python !pip install datasets>=1.5.0 !pip install transformers==4.4.0 !pip install soundfile !pip install jiwer ``` 2. Load the dataset ```python from datasets import load_dataset, load_metric timit = load_dataset("timit_asr") ``` 3. Remove columns that we don't want ```python timit = timit.remove_columns(["phonetic_detail", "word_detail", "dialect_region", "id", "sentence_type", "speaker_id"]) ``` 4. Write a short function to display some random samples of the dataset. ```python from datasets import ClassLabel import random import pandas as pd from IPython.display import display, HTML def show_random_elements(dataset, num_examples=10): assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset." picks = [] for _ in range(num_examples): pick = random.randint(0, len(dataset)-1) while pick in picks: pick = random.randint(0, len(dataset)-1) picks.append(pick) df = pd.DataFrame(dataset[picks]) display(HTML(df.to_html())) show_random_elements(timit["train"].remove_columns(["file"])) ``` ## Expected results 10 random different transcription phrases. ## Actual results 10 of the same transcription phrase "Would such an act of refusal be useful?" ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.4.1 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.5 - PyArrow version: not listed
CLOSED
2021-09-14T21:06:07
2021-09-15T08:05:19
2021-09-15T08:05:18
https://github.com/huggingface/datasets/issues/2913
margotwagner
2
[ "bug" ]
2,904
FORCE_REDOWNLOAD does not work
## Describe the bug For `GenerateMode`, the documentation says: `REUSE_DATASET_IF_EXISTS` (default) reuses both downloads and dataset; `REUSE_CACHE_IF_EXISTS` reuses downloads but prepares a fresh dataset; `FORCE_REDOWNLOAD` uses fresh downloads and a fresh dataset. However, the old dataset is loaded even when `FORCE_REDOWNLOAD` is chosen. ## Steps to reproduce the bug ```python import pandas as pd from datasets import load_dataset, GenerateMode pd.DataFrame(range(5), columns=['numbers']).to_csv('/tmp/test.tsv.gz', index=False) ee = load_dataset('csv', data_files=['/tmp/test.tsv.gz'], delimiter='\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD) print(ee) pd.DataFrame(range(10), columns=['numerals']).to_csv('/tmp/test.tsv.gz', index=False) ee = load_dataset('csv', data_files=['/tmp/test.tsv.gz'], delimiter='\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD) print(ee) ``` ## Expected results Dataset({ features: ['numbers'], num_rows: 5 }) Dataset({ features: ['numerals'], num_rows: 10 }) ## Actual results Dataset({ features: ['numbers'], num_rows: 5 }) Dataset({ features: ['numbers'], num_rows: 5 }) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: Linux-4.14.181-108.257.amzn1.x86_64-x86_64-with-glibc2.10 - Python version: 3.7.10 - PyArrow version: 3.0.0
OPEN
2021-09-14T09:45:26
2021-10-06T09:37:19
null
https://github.com/huggingface/datasets/issues/2904
anoopkatti
3
[ "bug" ]
2,902
Add WIT Dataset
## Adding a Dataset - **Name:** *WIT* - **Description:** *Wikipedia-based Image Text Dataset* - **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning ](https://arxiv.org/abs/2103.01913)* - **Data:** *https://github.com/google-research-datasets/wit* - **Motivation:** (excerpt from their Github README.md) > - The largest multimodal dataset (publicly available at the time of this writing) by the number of image-text examples. > - A massively multilingual dataset (first of its kind) with coverage for over 100+ languages. > - A collection of diverse set of concepts and real world entities. > - Brings forth challenging real-world test sets. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
CLOSED
2021-09-13T19:38:49
2024-10-02T15:37:48
2022-06-01T17:28:40
https://github.com/huggingface/datasets/issues/2902
nateraw
6
[ "dataset request" ]
2,901
Incompatibility with pytest
## Describe the bug pytest complains about xpathopen / path.open("w") ## Steps to reproduce the bug Create a test file, `test.py`: ```python import datasets as ds def load_dataset(): ds.load_dataset("counter", split="train", streaming=True) ``` And launch it with pytest: ```bash python -m pytest test.py ``` ## Expected results It should give something like: ``` collected 1 item test.py . [100%] ======= 1 passed in 3.15s ======= ``` ## Actual results ``` ============================================================================================================================= test session starts ============================================================================================================================== platform linux -- Python 3.8.11, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 rootdir: /home/slesage/hf/datasets-preview-backend, configfile: pyproject.toml plugins: anyio-3.3.1 collected 1 item tests/queries/test_rows.py . [100%]Traceback (most recent call last): File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pytest/__main__.py", line 5, in <module> raise SystemExit(pytest.console_main()) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 185, in console_main code = main() File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 162, in main ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__ return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec return self._inner_hookexec(hook_name, methods, kwargs, firstresult) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall return outcome.get_result() File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result raise ex[1].with_traceback(ex[2]) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall res = hook_impl.function(*args) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/main.py", line 316, in pytest_cmdline_main return wrap_session(config, _main) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/main.py", line 304, in wrap_session config.hook.pytest_sessionfinish( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__ return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec return self._inner_hookexec(hook_name, methods, kwargs, firstresult) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 55, in _multicall gen.send(outcome) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/terminal.py", line 803, in pytest_sessionfinish outcome.get_result() File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result raise ex[1].with_traceback(ex[2]) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall res = hook_impl.function(*args) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/cacheprovider.py", line 428, in pytest_sessionfinish config.cache.set("cache/nodeids", sorted(self.cached_nodeids)) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/cacheprovider.py", line 188, in set f = path.open("w") TypeError: xpathopen() takes 1 positional argument but 2 were given ``` ## Environment info - `datasets` version: 1.12.0 - Platform: Linux-5.11.0-1017-aws-x86_64-with-glibc2.29 - Python version: 3.8.11 - PyArrow version: 4.0.1
CLOSED
2021-09-13T19:12:17
2021-09-14T08:40:47
2021-09-14T08:40:47
https://github.com/huggingface/datasets/issues/2901
severo
1
[ "bug" ]
2,899
Dataset
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
CLOSED
2021-09-12T07:38:53
2021-09-12T16:12:15
2021-09-12T16:12:15
https://github.com/huggingface/datasets/issues/2899
rcacho172
0
[ "dataset request" ]
2,898
Hug emoji
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
CLOSED
2021-09-12T03:27:51
2021-09-12T16:13:13
2021-09-12T16:13:13
https://github.com/huggingface/datasets/issues/2898
Jackg-08
0
[ "dataset request" ]
2,892
Error when encoding a dataset with None objects with a Sequence feature
There is an error when encoding a dataset with None objects with a Sequence feature. To reproduce: ```python from datasets import Dataset, Features, Value, Sequence data = {"a": [[0], None]} features = Features({"a": Sequence(Value("int32"))}) dataset = Dataset.from_dict(data, features=features) ``` raises ```python --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-24-40add67f8751> in <module> 2 data = {"a": [[0], None]} 3 features = Features({"a": Sequence(Value("int32"))}) ----> 4 dataset = Dataset.from_dict(data, features=features) [...] ~/datasets/features.py in encode_nested_example(schema, obj) 888 if isinstance(obj, str): # don't interpret a string as a list 889 raise ValueError("Got a string but expected a list instead: '{}'".format(obj)) --> 890 return [encode_nested_example(schema.feature, o) for o in obj] 891 # Object with special encoding: 892 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks TypeError: 'NoneType' object is not iterable ``` Instead, it should run without error, as if the `features` were not passed
CLOSED
2021-09-10T14:11:43
2021-09-13T14:18:13
2021-09-13T14:17:42
https://github.com/huggingface/datasets/issues/2892
lhoestq
1
[ "bug" ]
2,890
0x290B112ED1280537B24Ee6C268a004994a16e6CE
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
CLOSED
2021-09-10T09:51:17
2021-09-10T11:45:29
2021-09-10T11:45:29
https://github.com/huggingface/datasets/issues/2890
rcacho172
0
[ "dataset request" ]
2,889
Coc
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
CLOSED
2021-09-10T07:32:07
2021-09-10T11:45:54
2021-09-10T11:45:54
https://github.com/huggingface/datasets/issues/2889
Bwiggity
0
[ "dataset request" ]
2,888
v1.11.1 release date
Hello, I need to use the latest features in one of my packages, but there has been no new `datasets` release for 2 months. When do you plan to publish the v1.11.1 release?
CLOSED
2021-09-09T21:53:15
2021-09-12T20:18:35
2021-09-12T16:15:39
https://github.com/huggingface/datasets/issues/2888
fcakyon
2
[ "question" ]
2,886
Hj
CLOSED
2021-09-09T18:58:52
2021-09-10T11:46:29
2021-09-10T11:46:29
https://github.com/huggingface/datasets/issues/2886
Noorasri
0
[]
2,885
Adding an Elastic Search index to a Dataset
## Describe the bug When trying to index documents from the squad dataset, the connection to ElasticSearch seems to break: Reusing dataset squad (/Users/andreasmotz/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453) 90%|████████████████████████████████████████████▉ | 9501/10570 [00:01<00:00, 6335.61docs/s] No error is thrown, but the indexing stops at around 90%. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset from elasticsearch import Elasticsearch es = Elasticsearch() squad = load_dataset('squad', split='validation') index_name = "corpus" es_config = { "settings": { "number_of_shards": 1, "analysis": {"analyzer": {"stop_standard": {"type": "standard", "stopwords": "_english_"}}}, }, "mappings": { "properties": { "idx" : {"type" : "keyword"}, "title" : {"type" : "keyword"}, "text": { "type": "text", "analyzer": "standard", "similarity": "BM25" }, } }, } class IndexBuilder: """ Elastic search indexing of a corpus """ def __init__( self, *args, #corpus : None, dataset : squad, index_name = str, query = str, config = dict, **kwargs, ): #instantiate HuggingFace dataset self.dataset = dataset #instantiate ElasticSearch config self.config = config self.es = Elasticsearch() self.index_name = index_name self.query = query def elastic_index(self): print(self.es.info) self.es.indices.delete(index=self.index_name, ignore=[400, 404]) search_index = self.dataset.add_elasticsearch_index(column='context', host='localhost', port='9200', es_index_name=self.index_name, es_index_config=self.config) return search_index def exact_match_method(self, index): scores, retrieved_examples = index.get_nearest_examples('context', query=self.query, k=1) return scores, retrieved_examples if __name__ == "__main__": print(type(squad)) Index = IndexBuilder(dataset=squad, index_name='corpus_index', query='Where was Chopin born?', config=es_config) search_index = Index.elastic_index() scores, examples = Index.exact_match_method(search_index) print(scores, examples) for name in squad.column_names: print(type(squad[name])) ``` ## Environment info We run the code in Poetry. This might be the issue, since the script runs successfully in our local environment. Poetry: - Python version: 3.8 - PyArrow: 4.0.1 - Elasticsearch: 7.13.4 - datasets: 1.10.2 Local: - Python version: 3.8 - PyArrow: 3.0.0 - Elasticsearch: 7.7.1 - datasets: 1.7.0
OPEN
2021-09-09T12:21:39
2021-10-20T18:57:11
null
https://github.com/huggingface/datasets/issues/2885
MotzWanted
3
[ "bug" ]
2,882
`load_dataset('docred')` results in a `NonMatchingChecksumError`
## Describe the bug
I get consistent `NonMatchingChecksumError: Checksums didn't match for dataset source files` errors when trying to execute `datasets.load_dataset('docred')`.

## Steps to reproduce the bug
It is quasi only this code:
```python
import datasets
data = datasets.load_dataset('docred')
```

## Expected results
The DocRED dataset should be loaded without any problems.

## Actual results
```
NonMatchingChecksumError                  Traceback (most recent call last)
<ipython-input-4-b1b83f25a16c> in <module>
----> 1 d = datasets.load_dataset('docred')

~/anaconda3/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)
    845
    846     # Download and prepare data
--> 847     builder_instance.download_and_prepare(
    848         download_config=download_config,
    849         download_mode=download_mode,

~/anaconda3/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
    613                 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
    614         if not downloaded_from_gcs:
--> 615             self._download_and_prepare(
    616                 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
    617             )

~/anaconda3/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
    673         # Checksums verification
    674         if verify_infos:
--> 675             verify_checksums(
    676                 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
    677             )

~/anaconda3/lib/python3.8/site-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
     38     if len(bad_urls) > 0:
     39         error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 40         raise NonMatchingChecksumError(error_msg + str(bad_urls))
     41     logger.info("All the checksums matched successfully" + for_verification_name)
     42

NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1fDmfUUo5G7gfaoqWWvK81u08m71TK2g7']
```

## Environment info
- `datasets` version: 1.11.0
- Platform: Linux-5.11.0-7633-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 5.0.0

This error also happened on my Windows-partition, after freshly installing python 3.9 and `datasets`.

## Remarks
- I have already called `rm -rf /home/<user>/.cache/huggingface`, i.e., I have tried clearing the cache.
- The problem does not exist for other datasets, i.e., it seems to be DocRED-specific.
CLOSED
2021-09-09T05:55:02
2021-09-13T11:24:30
2021-09-13T11:24:30
https://github.com/huggingface/datasets/issues/2882
tmpr
1
[ "bug" ]
2,879
In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?"
## Describe the bug
Using version 1.4.1 of `datasets`, TIMIT transcripts are all the same.

## Steps to reproduce the bug
I was following this tutorial - https://huggingface.co/blog/fine-tune-wav2vec2-english

But here's a distilled repro:
```python
!pip install datasets==1.4.1
from datasets import load_dataset

timit = load_dataset("timit_asr", cache_dir="./temp")
unique_transcripts = set(timit["train"]["text"])
print(unique_transcripts)
assert len(unique_transcripts) > 1
```

## Expected results
Expected the correct TIMIT data. Or an error saying that this version of `datasets` can't produce it.

## Actual results
Every train transcript was "Would such an act of refusal be useful?" Every test transcript was "The bungalow was pleasantly situated near the shore."

## Environment info
- `datasets` version: 1.4.1
- Platform: Darwin-18.7.0-x86_64-i386-64bit
- Python version: 3.7.9
- PyTorch version (GPU?): 1.9.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: tried both
- Using distributed or parallel set-up in script?: no
CLOSED
2021-09-07T18:53:45
2021-09-08T16:55:19
2021-09-08T09:12:28
https://github.com/huggingface/datasets/issues/2879
rcgale
3
[ "bug" ]
2,878
NotADirectoryError: [WinError 267] During load_from_disk
## Describe the bug
Trying to load a saved dataset or dataset directory from Amazon S3 on a Windows machine fails. Performing the same operation succeeds on a non-Windows environment (AWS Sagemaker).

## Steps to reproduce the bug
```python
# Followed https://huggingface.co/docs/datasets/filesystems.html#loading-a-processed-dataset-from-s3
from datasets import load_from_disk
from datasets.filesystems import S3FileSystem

s3_file = "output of save_to_disk"
s3_filesystem = S3FileSystem()
load_from_disk(s3_file, fs=s3_filesystem)
```

## Expected results
load_from_disk succeeds without error

## Actual results
Seems like it succeeds in pulling the file into a windows temp directory, as it exists in my system, but fails to process it.
```
Exception ignored in: <finalize object at 0x26409231ce0; dead>
Traceback (most recent call last):
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\weakref.py", line 566, in __call__
    return info.func(*info.args, **(info.kwargs or {}))
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 817, in _cleanup
    cls._rmtree(name)
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree
    _shutil.rmtree(name, onerror=onerror)
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree
    return _rmtree_unsafe(path, onerror)
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe
    _rmtree_unsafe(fullname, onerror)
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe
    _rmtree_unsafe(fullname, onerror)
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe
    _rmtree_unsafe(fullname, onerror)
  [Previous line repeated 2 more times]
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 618, in _rmtree_unsafe
    onerror(os.unlink, fullname, sys.exc_info())
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 805, in onerror
    cls._rmtree(path)
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree
    _shutil.rmtree(name, onerror=onerror)
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree
    return _rmtree_unsafe(path, onerror)
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 599, in _rmtree_unsafe
    onerror(os.scandir, path, sys.exc_info())
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 596, in _rmtree_unsafe
    with os.scandir(path) as scandir_it:
NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\Users\\grassycup\\AppData\\Local\\Temp\\tmp45f_qbma\\tests3bucket\\output\\test_output\\train\\dataset.arrow'
Exception ignored in: <finalize object at 0x264091c7880; dead>
Traceback (most recent call last):
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\weakref.py", line 566, in __call__
    return info.func(*info.args, **(info.kwargs or {}))
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 817, in _cleanup
    cls._rmtree(name)
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree
    _shutil.rmtree(name, onerror=onerror)
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree
    return _rmtree_unsafe(path, onerror)
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe
    _rmtree_unsafe(fullname, onerror)
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe
    _rmtree_unsafe(fullname, onerror)
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe
    _rmtree_unsafe(fullname, onerror)
  [Previous line repeated 2 more times]
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 618, in _rmtree_unsafe
    onerror(os.unlink, fullname, sys.exc_info())
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 805, in onerror
    cls._rmtree(path)
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree
    _shutil.rmtree(name, onerror=onerror)
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree
    return _rmtree_unsafe(path, onerror)
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 599, in _rmtree_unsafe
    onerror(os.scandir, path, sys.exc_info())
  File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 596, in _rmtree_unsafe
    with os.scandir(path) as scandir_it:
NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\Users\\grassycup\\AppData\\Local\\Temp\\tmp45f_qbma\\tests3bucket\\output\\test_output\\train\\dataset.arrow'
```

## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.8.11
- PyArrow version: 3.0.0
OPEN
2021-09-07T15:15:05
2021-09-07T15:15:05
null
https://github.com/huggingface/datasets/issues/2878
Grassycup
0
[ "bug" ]
2,877
Don't keep the dummy data folder or dataset_infos.json when resolving data files
When there's no dataset script, all the data files of a folder or a repository on the Hub are loaded as data files.

There are already a few exceptions:
- files starting with "." are ignored
- the dataset card "README.md" is ignored
- any file named "config.json" is ignored (currently it isn't used anywhere, but it could be used in the future to define splits or configs for example, but not 100% sure)

However any data files in a folder named "dummy" should be ignored as well, as they should only be used to test the dataset. Same for "dataset_infos.json", which should only be used to get the `dataset.info`.
CLOSED
2021-09-07T14:09:04
2021-09-29T09:05:38
2021-09-29T09:05:38
https://github.com/huggingface/datasets/issues/2877
lhoestq
2
[ "enhancement" ]
2,875
Add Congolese Swahili speech datasets
## Adding a Dataset
- **Name:** Congolese Swahili speech corpora
- **Data:** https://gamayun.translatorswb.org/data/

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).

Also related: https://mobile.twitter.com/OktemAlp/status/1435196393631764482
OPEN
2021-09-07T12:13:50
2021-09-07T12:13:50
null
https://github.com/huggingface/datasets/issues/2875
osanseviero
0
[ "dataset request", "speech" ]
2,871
datasets.config.PYARROW_VERSION has no attribute 'major'
In the test_dataset_common.py script, lines 288-289:
```
if datasets.config.PYARROW_VERSION.major < 3:
    packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
throw the error below, because `datasets.config.PYARROW_VERSION` itself returns the string '4.0.1'. I have tested this on both datasets.__version__=='1.11.0' and '1.9.0'. I am using Mac OS.
```
import datasets
datasets.config.PYARROW_VERSION.major
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
/var/folders/1f/0wqmlgp90qjd5mpj53fnjq440000gn/T/ipykernel_73361/2547517336.py in <module>
      1 import datasets
----> 2 datasets.config.PYARROW_VERSION.major

AttributeError: 'str' object has no attribute 'major'
```

## Environment info
- `datasets` version: 1.11.0
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyArrow version: 4.0.1
CLOSED
2021-09-06T21:06:57
2021-09-08T08:51:52
2021-09-08T08:51:52
https://github.com/huggingface/datasets/issues/2871
bwang482
5
[ "bug" ]
2,869
TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable

## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```

## Expected results
A clear and concise description of the expected results.

## Actual results
Specify the actual results or traceback.

## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform:
- Python version: 3.7
- PyArrow version:
CLOSED
2021-09-03T11:27:39
2025-02-19T09:57:34
2021-09-08T09:24:55
https://github.com/huggingface/datasets/issues/2869
Chenfei-Kang
17
[ "bug" ]
2,868
Add Common Objects in 3D (CO3D)
## Adding a Dataset
- **Name:** *Common Objects in 3D (CO3D)*
- **Description:** *See blog post [here](https://ai.facebook.com/blog/common-objects-in-3d-dataset-for-3d-reconstruction)*
- **Paper:** *[link to paper](https://arxiv.org/abs/2109.00512)*
- **Data:** *[link to data](https://ai.facebook.com/datasets/co3d-downloads/)*
- **Motivation:** *excerpt from the above blog post:*

> As the first data set of its kind, CO3D will aptly enable reconstruction of real-life 3D objects. Indeed, CO3D already provides training data to enable our NeRFormer to tackle the new-view synthesis (NVS) task. Here, photorealistic NVS is a major step on the path to fully immersive AR/VR effects, where objects can be virtually transported across different environments, which will allow connecting users by sharing or recollecting their experiences.
>
> Besides practical applications in AR/VR, we hope that the data set will become a standard testbed for the recent proliferation of methods (including NeRFormer, Implicit Differentiable Renderer, NeRF, and others) that reconstruct 3D scenes by means of an implicit shape model.

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
OPEN
2021-09-02T20:36:12
2024-01-17T12:03:59
null
https://github.com/huggingface/datasets/issues/2868
nateraw
0
[ "dataset request", "vision" ]
2,866
"counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug
The `counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.

## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
    for key, record in utils.tqdm(
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
    for obj in iterable:
  File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
    with derived_file.open(encoding="utf-8") as f:
  File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
    return io.open(self, mode, buffering, encoding, errors, newline,
  File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
    return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
    self._download_and_prepare(
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
    raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```

```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```

## Expected results
An exception should be raised in streaming mode.

## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.

## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
CLOSED
2021-09-02T13:10:53
2021-10-14T09:24:09
2021-10-14T09:24:09
https://github.com/huggingface/datasets/issues/2866
severo
11
[ "bug" ]
2,860
Cannot download TOTTO dataset
Error: Couldn't find file at https://storage.googleapis.com/totto/totto_data.zip

`datasets version: 1.11.0`

# How to reproduce:
```py
from datasets import load_dataset
dataset = load_dataset('totto')
```
CLOSED
2021-09-01T11:04:10
2021-09-02T06:47:40
2021-09-02T06:47:40
https://github.com/huggingface/datasets/issues/2860
mrm8488
1
[ "bug" ]
2,859
Loading allenai/c4 in streaming mode does too many HEAD requests
This does 60,000+ HEAD requests to get all the ETags of all the data files:
```python
from datasets import load_dataset
load_dataset("allenai/c4", streaming=True)
```
It makes loading the dataset completely impractical.

The ETags are used to compute the config id (it must depend on the data files being used). Instead of using the ETags, we could simply use the commit hash of the dataset repository on the hub, as well as the glob pattern used to resolve the files (here it's `*` by default, to load all the files of the repository).
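For reference, a minimal sketch of the proposed alternative (the helper name and the id format are made up for illustration; `HfApi.dataset_info` comes from `huggingface_hub`):
```python
from huggingface_hub import HfApi

def config_id_for_repo(repo_id: str, pattern: str = "*") -> str:
    # One API call to get the current revision of the repo,
    # instead of one HEAD request per data file
    sha = HfApi().dataset_info(repo_id).sha
    return f"{repo_id}@{sha}::{pattern}"

print(config_id_for_repo("allenai/c4"))
```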
CLOSED
2021-08-31T21:11:04
2021-10-12T07:35:52
2021-10-11T11:05:51
https://github.com/huggingface/datasets/issues/2859
lhoestq
2
[ "enhancement", "streaming" ]
2,850
Wound segmentation datasets
## Adding a Dataset
- **Name:** Wound segmentation datasets
- **Description:** annotated wound image dataset
- **Paper:** https://www.nature.com/articles/s41598-020-78799-w
- **Data:** https://github.com/uwm-bigdata/wound-segmentation
- **Motivation:** Interesting simple image dataset, useful for segmentation, with visibility due to http://www.miccai.org/special-interest-groups/challenges/ and https://fusc.grand-challenge.org/

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
OPEN
2021-08-30T10:44:32
2021-12-08T12:02:00
null
https://github.com/huggingface/datasets/issues/2850
osanseviero
0
[ "dataset request", "vision" ]
2,849
Add Open Catalyst Project Dataset
## Adding a Dataset
- **Name:** Open Catalyst 2020 (OC20) Dataset
- **Website:** https://opencatalystproject.org/
- **Data:** https://github.com/Open-Catalyst-Project/ocp/blob/master/DATASET.md

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
OPEN
2021-08-30T10:14:39
2021-08-30T10:14:39
null
https://github.com/huggingface/datasets/issues/2849
osanseviero
0
[ "dataset request" ]
2,846
Negative timezone
## Describe the bug
The load_dataset method does not accept a parquet file with a negative timezone offset, as it validates the timestamp type with the following regex:
```
"^(s|ms|us|ns),\s*tz=([a-zA-Z0-9/_+:]*)$"
```
So a valid timestamp type `timestamp[us, tz=-03:00]` returns an error when loading parquet files.

## Steps to reproduce the bug
```python
# Where the timestamp column has a tz of -03:00
datasets = load_dataset('parquet', data_files={'train': train_files, 'validation': validation_files,
                                               'test': test_files}, cache_dir="./cache_teste/")
```

## Expected results
The -03:00 is a valid tz, so the regex should accept it without raising an error.

## Actual results
As this regex rejects a valid tz, it raises the following error:
```python
raise ValueError(
    f"{datasets_dtype} is not a validly formatted string representation of a pyarrow timestamp."
    f"Examples include timestamp[us] or timestamp[us, tz=America/New_York]"
    f"See: https://arrow.apache.org/docs/python/generated/pyarrow.timestamp.html#pyarrow.timestamp"
)
```

## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform: Ubuntu 20.04
- Python version: 3.8
- PyArrow version: 5.0.0
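A sketch of the direction for a fix (the exact final pattern is an assumption): it should be enough to allow `-` inside the tz character class.
```python
import re

# The current pattern rejects fixed negative offsets; adding "\-" fixes it.
TIMESTAMP_TZ_RE = re.compile(r"^(s|ms|us|ns),\s*tz=([a-zA-Z0-9/_+\-:]*)$")

assert TIMESTAMP_TZ_RE.match("us, tz=America/New_York")
assert TIMESTAMP_TZ_RE.match("us, tz=-03:00")  # previously rejected
```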
CLOSED
2021-08-27T20:50:33
2021-09-10T11:51:07
2021-09-10T11:51:07
https://github.com/huggingface/datasets/issues/2846
jadermcs
1
[ "bug" ]
2,845
[feature request] adding easy to remember `datasets.cache_dataset()` + `datasets.is_dataset_cached()`
Often, there is a need to prepare a dataset but not use it immediately, e.g. think test suite setup, so it'd be really useful to be able to do:
```
if not datasets.is_dataset_cached(ds):
    datasets.cache_dataset(ds)
```
This can already be done with:
```
builder = load_dataset_builder(ds)
if not os.path.isdir(builder.cache_dir):
    builder.download_and_prepare()
```
but the current way is way less intuitive and much harder to remember than the proposed API, IMHO.

One more way is to do:
```
_ = load_dataset(ds)
```
but it wastes resources loading the dataset when it's not needed.

This has been discussed at https://huggingface.co/archives/C01229B19EX/p1630021912025800

Thank you!

@lhoestq
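A rough sketch of the proposed API as thin wrappers over the existing builder (the names `is_dataset_cached` / `cache_dataset` are the proposal, not an existing API):
```python
import os
from datasets import load_dataset_builder

def is_dataset_cached(name: str, *args, **kwargs) -> bool:
    # Reuses the builder's own notion of where the prepared dataset lives
    builder = load_dataset_builder(name, *args, **kwargs)
    return os.path.isdir(builder.cache_dir)

def cache_dataset(name: str, *args, **kwargs) -> None:
    # Downloads and prepares without materializing a Dataset object
    load_dataset_builder(name, *args, **kwargs).download_and_prepare()

if not is_dataset_cached("stas/openwebtext-10k"):
    cache_dataset("stas/openwebtext-10k")
```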
OPEN
2021-08-27T18:21:51
2021-08-27T18:24:05
null
https://github.com/huggingface/datasets/issues/2845
stas00
0
[ "enhancement" ]
2,842
always requiring the username in the dataset name when there is one
Me and now another person have been bitten by `datasets`'s non-strictness in requiring a dataset creator's username when it's due. Both of us started with `stas/openwebtext-10k`, somewhere along the lines lost `stas/` and continued using `openwebtext-10k`, and it all was good until we published the software and things broke, since there is no `openwebtext-10k`.

So this feature request is asking to tighten the checking and not allow dataset loading if it was downloaded with the user prefix, but then attempted to be used w/o it.

The same in code:
```
# first run
python -c "from datasets import load_dataset; load_dataset('stas/openwebtext-10k')"
# now run immediately
python -c "from datasets import load_dataset; load_dataset('openwebtext-10k')"
# the second command should fail, but it doesn't fail now.
```
Please let me know if I explained myself clearly.

Thank you!
CLOSED
2021-08-26T23:31:53
2021-10-22T09:43:35
2021-10-22T09:43:35
https://github.com/huggingface/datasets/issues/2842
stas00
6
[ "enhancement" ]
2,841
Adding GLUECoS Hinglish and Spanglish code-switching benchmark
## Adding a Dataset
- **Name:** GLUECoS
- **Description:** a Microsoft benchmark to evaluate code-switching for only two language pairs but a variety of tasks
- **Paper:** https://aclanthology.org/2020.acl-main.329/
- **Data:** https://github.com/microsoft/GLUECoS
- **Motivation:** We currently only have [one other](https://huggingface.co/datasets/lince) dataset for code-switching

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
OPEN
2021-08-26T17:47:39
2021-10-20T18:41:20
null
https://github.com/huggingface/datasets/issues/2841
yjernite
1
[ "dataset request" ]
2,840
How can I compute BLEU-4 score use `load_metric` ?
I have found the sacrebleu metric. But I do not know the difference between it and BLEU-4. If I want to compute a BLEU-4 score, what can I do?
CLOSED
2021-08-26T17:36:37
2021-08-27T08:13:24
2021-08-27T08:13:24
https://github.com/huggingface/datasets/issues/2840
Doragd
0
[]
2,839
OpenWebText: NonMatchingSplitsSizesError
## Describe the bug
When downloading `openwebtext`, I'm getting:
```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=39769494896, num_examples=8013769, dataset_name='openwebtext'), 'recorded': SplitInfo(name='train', num_bytes=39611023912, num_examples=7982430, dataset_name='openwebtext')}]
```
I suspect that the file we download from has changed, since the size doesn't look like it matches the documentation: `Downloading: 0%| | 0.00/12.9G [00:00<?, ?B/s]`. This suggests the total size is 12.9GB, whereas the documented one mentions `Size of downloaded dataset files: 12283.35 MB`.

## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("openwebtext", download_mode="force_redownload")
```

## Expected results
Loading is successful.

## Actual results
Loading throws the above error.

## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.10.2
- Platform: linux (Redhat version 8.1)
- Python version: 3.8
- PyArrow version: 4.0.1
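As a stopgap until the recorded split sizes are updated, skipping verification should work (a sketch using the existing `ignore_verifications` flag of `load_dataset`):
```python
from datasets import load_dataset

# Skips the num_examples/num_bytes checks that raise NonMatchingSplitsSizesError
ds = load_dataset("openwebtext", ignore_verifications=True)
```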
CLOSED
2021-08-26T13:50:26
2021-09-21T14:12:40
2021-09-21T14:09:43
https://github.com/huggingface/datasets/issues/2839
thomasw21
5
[ "bug" ]
2,837
prepare_module issue when loading from read-only fs
## Describe the bug
When we use prepare_module from a readonly file system, we create a FileLock using the `local_path`. This path is not necessarily writable.

`lock_path = local_path + ".lock"`

## Steps to reproduce the bug
Run `load_dataset` on a readonly python loader file.
```python
ds = load_dataset(
    python_loader,
    data_files={"train": train_path, "test": test_path}
)
```
where `python_loader` is a path to a file located in a readonly folder.

## Expected results
This should work I think?

## Actual results
```python
  return load_dataset(
  File "/usr/local/lib/python3.8/dist-packages/datasets/load.py", line 711, in load_dataset
    module_path, hash, resolved_file_path = prepare_module(
  File "/usr/local/lib/python3.8/dist-packages/datasets/load.py", line 465, in prepare_module
    with FileLock(lock_path):
  File "/usr/local/lib/python3.8/dist-packages/datasets/utils/filelock.py", line 314, in __enter__
    self.acquire()
  File "/usr/local/lib/python3.8/dist-packages/datasets/utils/filelock.py", line 263, in acquire
    self._acquire()
  File "/usr/local/lib/python3.8/dist-packages/datasets/utils/filelock.py", line 378, in _acquire
    fd = os.open(self._lock_file, open_mode)
OSError: [Errno 30] Read-only file system: 'YOUR_FILE.py.lock'
```

## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.7.0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.8
- PyArrow version: 3.0.0
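A sketch of one possible fix direction (the helper name and cache location are assumptions, not the actual patch): derive the lock path from a writable cache directory instead of appending ".lock" to the read-only script path.
```python
import os
from datasets.utils.filelock import FileLock  # the lock class used by prepare_module

def lock_for(local_path: str, cache_dir: str) -> FileLock:
    os.makedirs(cache_dir, exist_ok=True)
    # Flatten the script path into a file name inside a writable directory
    lock_name = local_path.replace(os.sep, "_") + ".lock"
    return FileLock(os.path.join(cache_dir, lock_name))
```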
CLOSED
2021-08-25T15:21:26
2021-10-05T17:58:22
2021-10-05T17:58:22
https://github.com/huggingface/datasets/issues/2837
Dref360
1
[ "bug" ]
2,833
IndexError when accessing first element of a Dataset if first RecordBatch is empty
The computation of the offsets of the underlying Table of a Dataset has some issues if the first RecordBatch is empty.

```python
from datasets import Dataset
import pyarrow as pa

pa_table = pa.Table.from_pydict({"a": [1]})
pa_table2 = pa.Table.from_pydict({"a": []}, schema=pa_table.schema)
ds_table = pa.concat_tables([pa_table2, pa_table])
dataset = Dataset(ds_table)

print([len(b) for b in dataset.data._batches])
# [0, 1]
print(dataset.data._offsets)
# [0 0 1] (should be [0, 1])

dataset[0]
```
raises
```python
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/datasets/table.py in _interpolation_search(arr, x)
     90         else:
     91             i, j = i, k
---> 92     raise IndexError(f"Invalid query '{x}' for size {arr[-1] if len(arr) else 'none'}.")
     93
     94
IndexError: Invalid query '0' for size 1.
```

This can be fixed by ignoring empty batches when computing `table._batches` and `table._offsets`.

cc @SaulLu
CLOSED
2021-08-24T16:49:20
2021-08-24T17:21:17
2021-08-24T17:21:17
https://github.com/huggingface/datasets/issues/2833
lhoestq
0
[]
2,832
Logging levels not taken into account
## Describe the bug
The `logging` module isn't working as intended relative to the levels to set.

## Steps to reproduce the bug
```python
from datasets import logging

logging.set_verbosity_debug()
logger = logging.get_logger()

logger.error("ERROR")
logger.warning("WARNING")
logger.info("INFO")
logger.debug("DEBUG")
```

## Expected results
I expect all logs to be output since I'm putting a `debug` level.

## Actual results
Only the first two logs are output.

## Environment info
- `datasets` version: 1.11.0
- Platform: Linux-5.13.9-arch1-1-x86_64-with-glibc2.33
- Python version: 3.9.6
- PyArrow version: 5.0.0

## To go further
This logging issue appears in `datasets` but not in `transformers`. It happens because there is no handler defined for the logger. When no handler is defined, the `logging` library will output a one-off error to stderr, using a `StderrHandler` with level `WARNING`.

`transformers` sets a default `StreamHandler` [here](https://github.com/huggingface/transformers/blob/5c6eca71a983bae2589eed01e5c04fcf88ba5690/src/transformers/utils/logging.py#L86).
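Until a default handler is added, a workaround sketch (the `datasets` logger is a standard `logging.Logger`, so attaching a handler manually should make all levels visible):
```python
import sys
import logging as pylogging
from datasets import logging

logging.set_verbosity_debug()
logger = logging.get_logger()
logger.addHandler(pylogging.StreamHandler(sys.stderr))  # explicit handler

logger.info("INFO")    # now emitted
logger.debug("DEBUG")  # now emitted
```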
CLOSED
2021-08-24T11:50:41
2023-07-12T17:19:30
2023-07-12T17:19:29
https://github.com/huggingface/datasets/issues/2832
LysandreJik
2
[ "bug" ]
2,831
ArrowInvalid when mapping dataset with missing values
## Describe the bug
I encountered an `ArrowInvalid` when mapping a dataset with missing values. Here are the files for a minimal example. The exception is only thrown when the first line in the csv has a missing value (if you move the last line to the top it isn't thrown).

[data_small.csv](https://github.com/huggingface/datasets/files/7037838/data_small.csv)
[data.csv](https://github.com/huggingface/datasets/files/7037842/data.csv)

## Steps to reproduce the bug
```python
from datasets import load_dataset

datasets = load_dataset("csv", data_files=['data_small.csv'])
datasets = datasets.map(lambda e: {'labels': e['match']}, remove_columns=['id'])
```

## Expected results
No error

## Actual results
```
  File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Invalid null value
```

## Environment info
- `datasets` version: 1.5.0
- Platform: Linux-5.11.0-25-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.7.1+cpu (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
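A possible workaround sketch while this is investigated: declare the schema explicitly so type inference doesn't depend on the (possibly null) first rows. The column names/types below are guesses based on the snippet above; only `id` and `match` are known from it.
```python
from datasets import load_dataset, Features, Value

features = Features({
    "id": Value("int64"),
    "match": Value("int64"),  # assumed type
    "text": Value("string"),  # hypothetical remaining column
})
datasets = load_dataset("csv", data_files=["data_small.csv"], features=features)
```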
OPEN
2021-08-24T08:50:42
2021-08-31T14:15:34
null
https://github.com/huggingface/datasets/issues/2831
uniquefine
1
[ "bug" ]
2,829
Optimize streaming from TAR archives
Hi ! As you know, TAR has some constraints for data streaming. While it is optimized for buffering, the files in the TAR archive **need to be streamed in order**. It means that we can't choose which file to stream from, and this notation is to be avoided for TAR archives:
```
tar://books_large_p1.txt::https://storage.googleapis.com/huggingface-nlp/datasets/bookcorpus/bookcorpus.tar.bz2
```
Instead, I suggest we implement `iter_archive` for the `StreamingDownloadManager`. The regular `DownloadManager` already has it.

Then we will have to update the json/txt/csv/etc. loaders to make them use `iter_archive` on TAR archives. That's also what Tensorflow Datasets is doing in this case. See this [dataset](https://github.com/tensorflow/datasets/blob/93895059c80a9e05805e8f32a2e310f66a23fc98/tensorflow_datasets/image_classification/flowers.py) for example.

Therefore instead of doing
```python
uncompressed = dl_manager.extract(tar_archive)
filename = "books_large_p1.txt"
with open(os.path.join(uncompressed, filename)) as f:
    for line in f:
        ...
```
we'll do
```python
for filename, f in dl_manager.iter_archive(tar_archive):
    for line in f:
        ...
```
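For illustration, a rough sketch of what a streaming `iter_archive` could do under the hood (an assumption, not the final implementation):
```python
import tarfile
import fsspec

def iter_archive(urlpath: str):
    # "r|*" opens the tar in pure streaming mode: members are yielded in order,
    # which matches the in-order constraint of TAR archives
    with fsspec.open(urlpath, "rb", compression="infer") as f:
        with tarfile.open(fileobj=f, mode="r|*") as tar:
            for member in tar:
                if member.isfile():
                    yield member.name, tar.extractfile(member)
```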
CLOSED
2021-08-23T16:56:40
2022-09-21T14:29:46
2022-09-21T14:08:39
https://github.com/huggingface/datasets/issues/2829
lhoestq
1
[ "enhancement", "streaming" ]
2,826
Add a Text Classification dataset: KanHope
## Adding a Dataset
- **Name:** *KanHope*
- **Description:** *A code-mixed English-Kannada dataset for Hope speech detection*
- **Paper:** *https://arxiv.org/abs/2108.04616* (I am the author of the paper)
- **Author:** *[AdeepH](https://github.com/adeepH)*
- **Data:** *https://github.com/adeepH/KanHope/tree/main/dataset*
- **Motivation:** *The dataset is amongst the very few resources available for code-mixed Dravidian languages*

- I tried following the steps as per the instructions. However, I could not resolve an error. Any help would be appreciated.
- The dataset card and the scripts for the dataset: *https://github.com/adeepH/datasets/tree/multilingual-hope-speech/datasets/mhs_eval*

```
Using custom data configuration default
Downloading and preparing dataset bn_hate_speech/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/bn_hate_speech/default/0.0.0/5f417ddc89777278abd29988f909f39495f0ec802090f7d8fa63b5bffb121762...
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-114-4a9cdb519e4c> in <module>()
      1 from datasets import load_dataset
      2
----> 3 data = load_dataset('/content/bn')

9 frames
/usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)
    850         ignore_verifications=ignore_verifications,
    851         try_from_hf_gcs=try_from_hf_gcs,
--> 852         use_auth_token=use_auth_token,
    853     )
    854

/usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
    614         if not downloaded_from_gcs:
    615             self._download_and_prepare(
--> 616                 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
    617             )
    618         # Sync info

/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
    691             try:
    692                 # Prepare split will record examples associated to the split
--> 693                 self._prepare_split(split_generator, **prepare_split_kwargs)
    694             except OSError as e:
    695                 raise OSError(

/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split(self, split_generator)
   1107             disable=bool(logging.get_verbosity() == logging.NOTSET),
   1108         ):
-> 1109             example = self.info.features.encode_example(record)
   1110             writer.write(example, key)
   1111         finally:

/usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_example(self, example)
   1015         """
   1016         example = cast_to_python_objects(example)
-> 1017         return encode_nested_example(self, example)
   1018
   1019     def encode_batch(self, batch):

/usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_nested_example(schema, obj)
    863     if isinstance(schema, dict):
    864         return {
--> 865             k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
    866         }
    867     elif isinstance(schema, (list, tuple)):

/usr/local/lib/python3.7/dist-packages/datasets/features.py in <dictcomp>(.0)
    863     if isinstance(schema, dict):
    864         return {
--> 865             k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
    866         }
    867     elif isinstance(schema, (list, tuple)):

/usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_nested_example(schema, obj)
    890     # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks
    891     elif isinstance(schema, (ClassLabel, TranslationVariableLanguages, Value, _ArrayXD)):
--> 892         return schema.encode_example(obj)
    893     # Other object should be directly convertible to a native Arrow type (like Translation and Translation)
    894     return obj

/usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_example(self, example_data)
    665         # If a string is given, convert to associated integer
    666         if isinstance(example_data, str):
--> 667             example_data = self.str2int(example_data)
    668
    669         # Allowing -1 to mean no label.

/usr/local/lib/python3.7/dist-packages/datasets/features.py in str2int(self, values)
    623             if value not in self._str2int:
    624                 value = str(value).strip()
--> 625             output.append(self._str2int[str(value)])
    626         else:
    627             # No names provided, try to integerize

KeyError: ' '
```
CLOSED
2021-08-23T12:21:58
2021-10-01T18:06:59
2021-10-01T18:06:59
https://github.com/huggingface/datasets/issues/2826
adeepH
1
[ "dataset request" ]
2,825
The datasets.map function does not load cached dataset after moving python script
## Describe the bug
The datasets.map function caches the processed data to a certain directory. When the map function is called another time with exactly the same parameters, the cached data is supposed to be reloaded instead of re-processed. However, it sometimes doesn't reuse the cached data. I use the same data processing in different tasks, and the datasets are processed again; the only difference is that I run them in different files.

## Steps to reproduce the bug
Just run the following code in different .py files.
```python
if __name__ == '__main__':
    from datasets import load_dataset
    from transformers import AutoTokenizer

    raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1")
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

    def tokenize_function(examples):
        return tokenizer(examples["text"], padding="max_length", truncation=True)

    tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
```

## Expected results
The map function should reload the data in the second or any later run.

## Actual results
The processing happens in each run.

## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.0
- Platform: linux
- Python version: 3.7.6
- PyArrow version: 3.0.0

This is the first time I report a bug. If there is any problem or confusing description, please let me know 😄.
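One workaround sketch in the meantime: pin the cache files explicitly so different scripts map to the same cache (`cache_file_names` is the existing `DatasetDict.map` argument; the paths are placeholders):
```python
import os

os.makedirs("map_cache", exist_ok=True)
tokenized_datasets = raw_datasets.map(
    tokenize_function,
    batched=True,
    # One pinned cache file per split, shared across scripts
    cache_file_names={split: f"map_cache/tokenized_{split}.arrow" for split in raw_datasets},
)
```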
CLOSED
2021-08-23T03:23:37
2024-07-29T11:25:50
2021-08-31T13:13:36
https://github.com/huggingface/datasets/issues/2825
hobbitlzy
6
[ "bug" ]
2,823
HF_DATASETS_CACHE variable in Windows
I can't seem to use a custom cache directory in Windows. I have tried:

set HF_DATASETS_CACHE = "C:\Datasets"
set HF_DATASETS_CACHE = "C:/Datasets"
set HF_DATASETS_CACHE = "C:\\Datasets"
set HF_DATASETS_CACHE = "r'C:\Datasets'"
set HF_DATASETS_CACHE = "\Datasets"
set HF_DATASETS_CACHE = "/Datasets"

In each instance I get the "[WinError 123] The filename, directory name, or volume label syntax is incorrect" error when attempting to load a dataset.
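For what it's worth, a workaround sketch that sidesteps cmd.exe quoting quirks altogether (with `set`, the spaces around `=` and the surrounding quotes become part of the variable name/value): set the variable from Python before `datasets` reads it.
```python
import os

# Must be set before importing datasets, which reads it at import time
os.environ["HF_DATASETS_CACHE"] = r"C:\Datasets"

from datasets import load_dataset
ds = load_dataset("squad")  # example dataset; cached under C:\Datasets
```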
CLOSED
2021-08-21T13:17:44
2021-08-21T13:20:11
2021-08-21T13:20:11
https://github.com/huggingface/datasets/issues/2823
rp2839
1
[]
2,821
Cannot load linnaeus dataset
## Describe the bug
The [linnaeus](https://huggingface.co/datasets/linnaeus) dataset cannot be loaded.

To reproduce:
```
from datasets import load_dataset

datasets = load_dataset("linnaeus")
```
This results in:
```
Downloading and preparing dataset linnaeus/linnaeus (download: 17.36 MiB, generated: 8.74 MiB, post-processed: Unknown size, total: 26.10 MiB) to /root/.cache/huggingface/datasets/linnaeus/linnaeus/1.0.0/2ff05dbc256108233262f596e09e322dbc3db067202de14286913607cd9cb704...
---------------------------------------------------------------------------
ConnectionError                           Traceback (most recent call last)
<ipython-input-4-7ef3a88f6276> in <module>()
      1 from datasets import load_dataset
      2
----> 3 datasets = load_dataset("linnaeus")

11 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
    603             raise FileNotFoundError("Couldn't find file at {}".format(url))
    604         _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
--> 605         raise ConnectionError("Couldn't reach {}".format(url))
    606
    607     # Try a second time

ConnectionError: Couldn't reach https://drive.google.com/u/0/uc?id=1OletxmPYNkz2ltOr9pyT0b0iBtUWxslh&export=download/
```
CLOSED
2021-08-20T12:15:15
2021-08-31T13:13:02
2021-08-31T13:12:09
https://github.com/huggingface/datasets/issues/2821
NielsRogge
1
[ "bug" ]
2,820
Downloading “reddit” dataset keeps timing out.
## Describe the bug
Every time I try to download the reddit dataset, it times out before finishing and I have to try again. There is some timeout error that I will post once it happens again.

## Steps to reproduce the bug
```python
from datasets import load_dataset

dataset = load_dataset("reddit", ignore_verifications=True, cache_dir="/Volumes/My Passport for Mac/og-chat-data")
```

## Expected results
I would expect the download to finish, or at least a parameter to extend the read timeout window.

## Actual results
Shown in the error message (to be added once it happens again).

## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform: macOS
- Python version: 3.9.6 (conda env)
- PyArrow version: N/A
CLOSED
2021-08-20T02:52:36
2021-09-08T14:52:02
2021-09-08T14:52:02
https://github.com/huggingface/datasets/issues/2820
smeyerhot
10
[ "bug" ]
2,818
Cannot load data from my local path
## Describe the bug
I just want to directly load data from my local path, but I hit a bug. I compared it with pandas to prove that my local path is valid.

Here is my code:
```python
# print my local path
print(config.train_path)
# read data and print data length
train = pd.read_csv(config.train_path)
print(len(train))
# load data with load_dataset
data = load_dataset('csv', data_files=config.train_path)
print(len(data))
```

## Steps to reproduce the bug
```
C:\Users\wie\Documents\项目\文本分类\data\train.csv
7613
Traceback (most recent call last):
  File "c:/Users/wie/Documents/项目/文本分类/lib/DataPrecess.py", line 17, in <module>
    data = load_dataset('csv', data_files=config.train_path)
  File "C:\Users\wie\Miniconda3\lib\site-packages\datasets\load.py", line 830, in load_dataset
    **config_kwargs,
  File "C:\Users\wie\Miniconda3\lib\site-packages\datasets\load.py", line 710, in load_dataset_builder
    **config_kwargs,
  File "C:\Users\wie\Miniconda3\lib\site-packages\datasets\builder.py", line 271, in __init__
    **config_kwargs,
  File "C:\Users\wie\Miniconda3\lib\site-packages\datasets\builder.py", line 386, in _create_builder_config
    config_kwargs, custom_features=custom_features, use_auth_token=self.use_auth_token
  File "C:\Users\wie\Miniconda3\lib\site-packages\datasets\builder.py", line 156, in create_config_id
    raise ValueError("Please provide a valid `data_files` in `DatasetBuilder`")
ValueError: Please provide a valid `data_files` in `DatasetBuilder`
```

## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform: win10
- Python version: 3.7.9
- PyArrow version: 5.0.0
CLOSED
2021-08-19T11:13:30
2023-07-25T17:42:15
2023-07-25T17:42:15
https://github.com/huggingface/datasets/issues/2818
yang-collect
1
[ "bug" ]
2,816
Add Mostly Basic Python Problems Dataset
## Adding a Dataset
- **Name:** Mostly Basic Python Problems Dataset
- **Description:** The benchmark consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry-level programmers, covering programming fundamentals, standard library functionality, and so on. Each problem consists of a task description, code solution and 3 automated test cases.
- **Paper:** *link to the dataset paper if available*
- **Data:** https://github.com/google-research/google-research/tree/master/mbpp
- **Motivation:** Simple, small dataset related to coding problems.

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
OPEN
2021-08-18T20:28:39
2021-09-10T08:04:20
null
https://github.com/huggingface/datasets/issues/2816
osanseviero
1
[ "dataset request" ]
2,813
Remove compression from xopen
We implemented support for streaming with 2 requirements:
- transparent use for the end user: they just need to pass the parameter `streaming=True`
- no additional work for the contributors: previous loading scripts should also work in streaming mode with no (or minor) changes; and new loading scripts should not involve additional code to support streaming

In order to fulfill these requirements, the streaming implementation patched some Python functions:
- the `open(urlpath)` function was patched with `fsspec.open(urlpath)`
- the `os.path.join(urlpath, *others)` function was patched in order to add to `urlpath` hops (`::`) and extractor protocols (`zip://`), which are required by `fsspec.open`

Recently, we implemented support for streaming all archive+compression formats: zip, tar, gz, bz2, lz4, xz, zst; tar.gz, tar.bz2,...

Under the hood, the implementation:
- passes an additional parameter `compression` to `fsspec.open`, so that it performs the decompression on the fly: `fsspec.open(urlpath, compression=...)`

Some concerns have been raised about passing the parameter `compression` to `fsspec.open`:
- https://github.com/huggingface/datasets/pull/2786#discussion_r689550254
- #2811

The main argument is that if `open` decompresses the file and afterwards we call `gzip.open` on it, that will raise an error in the `oscar` dataset:
```python
gzip.open(open(urlpath
```
While this is true:
- it is not natural/usual to call `open` inside `gzip.open` (never seen this before)
- indeed, this was recently (2 months ago) coded that way in `datasets` in order to allow streaming support (with the previous implementation of streaming)

In this particular case, there is a natural fix, proposed in #2811:
- Revert the `open` inside the `gzip.open` (change done 2 months ago): `gzip.open(open(urlpath` => `gzip.open(urlpath`
- Patch `gzip.open(urlpath` with `fsspec.open(urlpath, compression="gzip"`

Are there other issues apart from this one? Note that the issue only arises because of the `open` inside the `gzip.open`. There is no issue in the other cases where datasets loading scripts use just:
- `gzip.open`
- `open` (after having called dl_manager.download_and_extract)

TODO:
- [ ] Is this really an issue? Please enumerate the `datasets` loading scripts where this is problematic.
  - For the moment, there are only 3 datasets where we have an `open` inside a `gzip.open`:
    - oscar (since 23 June), mc4 (since 2 July) and c4 (since 2 July)
  - In the 3 datasets, the only reason to put an `open` inside a `gzip.open` was indeed to force supporting streaming
- [ ] If this is indeed an issue, which are the possible alternatives? Pros/cons?
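A sketch of the patched call proposed in #2811 (the wrapper name is made up for illustration; the point is just that `fsspec` can do the gzip decompression itself):
```python
import fsspec

def xgzip_open(urlpath, mode="rb", **kwargs):
    # Streaming-friendly equivalent of gzip.open(urlpath) that also works on remote URLs
    return fsspec.open(urlpath, mode=mode, compression="gzip", **kwargs).open()
```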
CLOSED
2021-08-18T09:35:59
2021-08-23T15:59:14
2021-08-23T15:59:14
https://github.com/huggingface/datasets/issues/2813
albertvillanova
1
[ "generic discussion" ]
2,812
arXiv Dataset verification problem
## Describe the bug
`dataset_infos.json` for `arxiv_dataset` contains a fixed number of training examples; however, the data (downloaded from an external source) is updated every week with additional examples. Therefore, loading the dataset without `ignore_verifications=True` results in a verification error.
OPEN
2021-08-17T18:01:48
2022-01-19T14:15:35
null
https://github.com/huggingface/datasets/issues/2812
eladsegal
0
[ "bug", "dataset bug" ]
2,808
Enable streaming for Wikipedia corpora
**Is your feature request related to a problem? Please describe.**
Several of the [Wikipedia corpora](https://huggingface.co/datasets?search=wiki) on the Hub involve quite large files that would be a good candidate for streaming. Currently it is not possible to stream these corpora:

```python
from datasets import load_dataset

# Throws ValueError: Builder wikipedia is not streamable.
wiki_dataset_streamed = load_dataset("wikipedia", "20200501.en", split="train", streaming=True)
```

Given that these corpora are derived from Wikipedia dumps in XML format which are then processed with Apache Beam, I am not sure whether streaming is possible in principle. The goal of this issue is to discuss whether this feature even makes sense :)

**Describe the solution you'd like**
It would be nice to be able to stream Wikipedia corpora from the Hub with something like

```python
from datasets import load_dataset

wiki_dataset_streamed = load_dataset("wikipedia", "20200501.en", split="train", streaming=True)
```
CLOSED
2021-08-16T15:59:12
2023-07-20T13:45:30
2023-07-20T13:45:30
https://github.com/huggingface/datasets/issues/2808
lewtun
1
[ "enhancement" ]
2,799
Loading JSON throws ArrowNotImplementedError
## Describe the bug
I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below).

Curiously, there is no problem loading the dataset with `pandas`, which suggests some incorrect type inference is being made on the `datasets` side. For example, the stack trace indicates that some URL fields are being parsed as timestamps.

You can find a Colab notebook which reproduces the error [here](https://colab.research.google.com/drive/1YUCM0j1vx5ZrouQbYSzal6RwB4-Aoh4o?usp=sharing).

**Edit:** If one repeatedly tries to load the dataset, it _eventually_ works, but I think it would still be good to understand why it fails in the first place :)

## Steps to reproduce the bug
```python
from datasets import load_dataset
from huggingface_hub import hf_hub_url
import pandas as pd

# returns https://huggingface.co/datasets/lewtun/github-issues-test/resolve/main/issues-datasets.jsonl
data_files = hf_hub_url(repo_id="lewtun/github-issues-test", filename="issues-datasets.jsonl", repo_type="dataset")
# throws ArrowNotImplementedError
dset = load_dataset("json", data_files=data_files, split="test")
# no problem with pandas ...
df = pd.read_json(data_files, orient="records", lines=True)
df.head()
```

## Expected results
I can load any line-separated JSON file, similar to `pandas`.

## Actual results
```
---------------------------------------------------------------------------
ArrowNotImplementedError                  Traceback (most recent call last)
<ipython-input-7-5b8e82b6c3a2> in <module>()
----> 1 dset = load_dataset("json", data_files=data_files, split="test")

9 frames
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()

ArrowNotImplementedError: JSON conversion to struct<url: timestamp[s], html_url: timestamp[s], labels_url: timestamp[s], id: int64, node_id: timestamp[s], number: int64, title: timestamp[s], description: timestamp[s], creator: struct<login: timestamp[s], id: int64, node_id: timestamp[s], avatar_url: timestamp[s], gravatar_id: timestamp[s], url: timestamp[s], html_url: timestamp[s], followers_url: timestamp[s], following_url: timestamp[s], gists_url: timestamp[s], starred_url: timestamp[s], subscriptions_url: timestamp[s], organizations_url: timestamp[s], repos_url: timestamp[s], events_url: timestamp[s], received_events_url: timestamp[s], type: timestamp[s], site_admin: bool>, open_issues: int64, closed_issues: int64, state: timestamp[s], created_at: timestamp[s], updated_at: timestamp[s], due_on: timestamp[s], closed_at: timestamp[s]> is not supported
```

## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.11
- PyArrow version: 3.0.0
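In the meantime, a workaround sketch that bypasses the flaky type inference by going through pandas explicitly (since `pd.read_json` already handles this file fine):
```python
import pandas as pd
from datasets import Dataset

df = pd.read_json(data_files, orient="records", lines=True)
dset = Dataset.from_pandas(df)  # column types come from the pandas dtypes
```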
CLOSED
2021-08-13T15:31:48
2022-01-10T18:59:32
2022-01-10T18:59:32
https://github.com/huggingface/datasets/issues/2799
lewtun
11
[ "bug" ]
2,797
Make creating/editing dataset cards easier, by editing on site and dumping info from test command.
**Is your feature request related to a problem? Please describe.**
Creating and editing dataset cards should be easy, but it is not:
- If someone else knows some information I don't (bias of the dataset, dataset curation, supported tasks, ...), they need to know that the description on hf.co comes from the README.md under github huggingface/datasets/datasets/<the dataset>, and be willing to make a PR to add or fix the information.
- Much information is also saved in `dataset_info.json` (citation, description), but it still needs to be written down in README.md again.
- A contributor needs to pip install and start a local server just for tagging the dataset's size. And the contributor may be creating the dataset on a lab's server, which can't open a browser.
- If anyone proposes a new tag, it doesn't show up in the list that another creator sees (a stackoverflow-like way may be ideal).
- The dataset card generator web app doesn't generate the necessary subsection `Contributions` for us.

**Describe the solution you'd like**
- Everyone (or at least the author/contributor) can edit the description, information and tags of the dataset on the hf.co website, just like wikipedia+stackoverflow.
- We can infer the actual data size, citation, data instances, ... from `dataset_info.json` and `dataset.arrow` via `datasets-cli test`.
OPEN
2021-08-13T11:54:49
2021-08-14T08:42:09
null
https://github.com/huggingface/datasets/issues/2797
richarddwang
0
[ "enhancement" ]
2,794
Warnings and documentation about pickling incorrect
## Describe the bug
I have a docs bug and a closely related docs enhancement suggestion!

### Bug
The warning and documentation say "either `dill` or `pickle`" for fingerprinting. But it seems that `dill`, which is installed by `datasets` by default, _must_ work, or else the fingerprinting fails.

Warning: https://github.com/huggingface/datasets/blob/450b9174765374111e5c6daab0ed294bc3d9b639/src/datasets/fingerprint.py#L262

Docs:
> For a transform to be hashable, it needs to be pickleable using dill or pickle.
> – [docs](https://huggingface.co/docs/datasets/processing.html#fingerprinting)

For my code, `pickle` works, but `dill` fails. The `dill` failure has already been reported in https://github.com/huggingface/datasets/issues/2643. However, the `dill` failure causes a hashing failure in the datasets library, without any backing off to `pickle`. This implies that it's not the case that either `dill` **or** `pickle` can work, but that `dill` must work if it is installed. I think this is more accurate wording, since it is installed and used by default:

https://github.com/huggingface/datasets/blob/c93525dc291346e54212567fa72d7d607befe937/setup.py#L83

... and the hashing will fail if it fails.

### Enhancement
I think it'd be very helpful to add to the documentation how to debug hashing failures. It took me a while to figure out how to diagnose this. There is a very nice two-liner by @lhoestq in https://github.com/huggingface/datasets/issues/2516#issuecomment-865173139:

```python
from datasets.fingerprint import Hasher
Hasher.hash(my_object)
```

I think adding this to the docs will help future users quickly debug any hashing troubles of their own :-)

## Steps to reproduce the bug
`dill` but not `pickle` hashing failure in https://github.com/huggingface/datasets/issues/2643

## Expected results
If either `dill` or `pickle` can successfully hash, the hashing will succeed.

## Actual results
If `dill` or `pickle` cannot hash, the hashing fails.

## Environment info
- `datasets` version: 1.9.0
- Platform: Linux-5.8.0-1038-gcp-x86_64-with-glibc2.31
- Python version: 3.9.6
- PyArrow version: 4.0.1
OPEN
2021-08-12T23:09:13
2021-08-12T23:09:31
null
https://github.com/huggingface/datasets/issues/2794
mbforbes
0
[ "bug" ]
2,788
How to sample every file in a list of files making up a split in a dataset when loading?
I am loading a dataset with multiple train, test, and validation files like this:
```
data_files_dict = {
    "train": [train_file1, train_file2],
    "test": [test_file1, test_file2],
    "val": [val_file1, val_file2]
}
dataset = datasets.load_dataset(
    "csv",
    data_files=data_files_dict,
    split=['train[:8]', 'test[:8]', 'val[:8]']
)
```
However, this only selects the first 8 rows from train_file1, test_file1, val_file1, since they are the first files in the lists.

I'm trying to formulate a split argument that can sample from each file specified in my list of files that make up each split. Is this type of splitting supported? If so, how can I do it?
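This isn't supported by the `split` slicing syntax directly as far as I know, but one workaround sketch is to load each file as its own dataset, slice it, and concatenate the slices:
```python
from datasets import load_dataset, concatenate_datasets

# train_file1 / train_file2 as defined above
parts = [
    load_dataset("csv", data_files={"train": f}, split="train[:8]")
    for f in [train_file1, train_file2]
]
train_sample = concatenate_datasets(parts)  # 8 rows from each file
```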
CLOSED
2021-08-11T17:43:21
2023-07-25T17:40:50
2023-07-25T17:40:50
https://github.com/huggingface/datasets/issues/2788
brijow
1
[]
2,787
ConnectionError: Couldn't reach https://raw.githubusercontent.com
Hello, I am trying to run run_glue.py and it gives me this error: Traceback (most recent call last): File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 546, in <module> main() File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 250, in main datasets = load_dataset("glue", data_args.task_name, cache_dir=model_args.cache_dir) File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\load.py", line 718, in load_dataset use_auth_token=use_auth_token, File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\load.py", line 320, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\utils\file_utils.py", line 291, in cached_path use_auth_token=download_config.use_auth_token, File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\utils\file_utils.py", line 623, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.7.0/datasets/glue/glue.py I am trying to run: python run_glue.py --model_name_or_path bert-base-cased --task_name mrpc --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir ./tmp/mrpc/ Is this something on my end? From what I can tell, this was fixed by @fullyz a few months ago. Thank you!
CLOSED
2021-08-11T16:19:01
2023-10-03T12:39:25
2021-08-18T15:09:18
https://github.com/huggingface/datasets/issues/2787
jinec
9
[ "bug" ]
2,781
Latest v2.0.0 release of sacrebleu has broken some metrics
## Describe the bug After `sacrebleu` v2.0.0 release (see changes here: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15), some of `datasets` metrics are broken: - Default tokenizer `sacrebleu.DEFAULT_TOKENIZER` no longer exists: - #2739 - #2778 - Bleu tokenizers are no longer accessible with `sacrebleu.TOKENIZERS`: - #2779 - `corpus_bleu` args have been renamed from `(sys_stream, ref_streams)` to `(hipotheses, references)`: - #2782
CLOSED
2021-08-10T09:59:41
2021-08-10T11:16:07
2021-08-10T11:16:07
https://github.com/huggingface/datasets/issues/2781
albertvillanova
0
[ "bug" ]
2,776
document `config.HF_DATASETS_OFFLINE` and precedence
https://github.com/huggingface/datasets/pull/1976 implemented `HF_DATASETS_OFFLINE`, but: 1. `config.HF_DATASETS_OFFLINE` is not documented 2. the precedence is not documented (env, config) I'm thinking it probably should be similar to what it says https://huggingface.co/docs/datasets/loading_datasets.html#from-the-huggingface-hub about `datasets.config.IN_MEMORY_MAX_SIZE`: Quote: > The default in 🤗 Datasets is to memory-map the dataset on disk unless you set datasets.config.IN_MEMORY_MAX_SIZE different from 0 bytes (default). In that case, the dataset will be copied in-memory if its size is smaller than datasets.config.IN_MEMORY_MAX_SIZE bytes, and memory-mapped otherwise. This behavior can be enabled by setting either the configuration option datasets.config.IN_MEMORY_MAX_SIZE (higher precedence) or the environment variable HF_DATASETS_IN_MEMORY_MAX_SIZE (lower precedence) to nonzero. Context: trying to use `config.HF_DATASETS_OFFLINE` here: https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/48 but are uncertain if it's safe, since it's not documented as a public API. Thank you! @lhoestq, @albertvillanova
OPEN
2021-08-09T21:23:17
2021-08-09T21:23:17
null
https://github.com/huggingface/datasets/issues/2776
stas00
0
[ "enhancement" ]
2,775
`generate_random_fingerprint()` deterministic with 🤗Transformers' `set_seed()`
## Describe the bug **Update:** I dug into this to try to reproduce the underlying issue, and I believe it's that `set_seed()` from the `transformers` library makes the "random" fingerprint identical each time. I believe this is still a bug, because `datasets` is used exactly this way in `transformers` after `set_seed()` has been called, and I think that using `set_seed()` is a standard procedure to aid reproducibility. I've added more details to reproduce this below. Hi there! I'm using my own local dataset and custom preprocessing function. My preprocessing function seems to be unpickle-able, perhaps because it is from a closure (will debug this separately). I get this warning, which is expected: https://github.com/huggingface/datasets/blob/450b9174765374111e5c6daab0ed294bc3d9b639/src/datasets/fingerprint.py#L260-L265 However, what's not expected is that the `datasets` actually _does_ seem to cache and reuse this dataset between runs! After that line, the next thing that's logged looks like: ```text Loading cached processed dataset at /home/xxx/.cache/huggingface/datasets/csv/default-xxx/0.0.0/xxx/cache-xxx.arrow ``` The path is exactly the same each run (e.g., last 26 runs). This becomes a problem because I'll pass in the `--max_eval_samples` flag to the HuggingFace example script I'm running off of ([run_swag.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/multiple-choice/run_swag.py)). The fact that the cached dataset is reused means this flag gets ignored. I'll try to load 100 examples, and it will load the full cached 1,000,000. I think that https://github.com/huggingface/datasets/blob/450b9174765374111e5c6daab0ed294bc3d9b639/src/datasets/fingerprint.py#L248 ... is actually consistent because randomness is being controlled in HuggingFace/Transformers for reproducibility. I've added a demo of this below. ## Steps to reproduce the bug ```python # Contents of print_fingerprint.py from transformers import set_seed from datasets.fingerprint import generate_random_fingerprint set_seed(42) print(generate_random_fingerprint()) ``` ```bash for i in {0..10}; do python print_fingerprint.py done 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d ``` ## Expected results After the "random hash" warning is emitted, a random hash is generated, and no outdated cached datasets are reused. ## Actual results After the "random hash" warning is emitted, an identical hash is generated each time, and an outdated cached dataset is reused each run. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.0 - Platform: Linux-5.8.0-1038-gcp-x86_64-with-glibc2.31 - Python version: 3.9.6 - PyArrow version: 4.0.1
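For anyone hitting this in the meantime, a possible workaround (a sketch, not a definitive fix) is to opt out of the cache for the affected `map` call, or to force a fingerprint of your own:

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train[:100]")

# either skip the cache lookup entirely...
ds1 = ds.map(lambda ex: {"n_chars": len(ex["text"])}, load_from_cache_file=False)

# ...or supply an explicit fingerprint (e.g. derived from the run arguments)
ds2 = ds.map(lambda ex: {"n_chars": len(ex["text"])}, new_fingerprint="run-max-100-samples")
```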
CLOSED
2021-08-09T19:28:51
2024-01-26T15:05:36
2024-01-26T15:05:35
https://github.com/huggingface/datasets/issues/2775
mbforbes
3
[ "bug" ]
2,773
Remove dataset_infos.json
**Is your feature request related to a problem? Please describe.** As discussed, there are infos in the `dataset_infos.json` which are redundant and we could have them only in the README file. Others could be migrated to the README, like: "dataset_size", "size_in_bytes", "download_size", "splits.split_name.[num_bytes, num_examples]",... However, there are others that do not seem too meaningful in the README, like the checksums. **Describe the solution you'd like** Open a discussion to decide what to do with the `dataset_infos.json` files: which information to be migrated and/or which information to be kept. cc: @julien-c @lhoestq
CLOSED
2021-08-09T07:43:19
2024-05-04T14:52:10
2024-05-04T14:52:10
https://github.com/huggingface/datasets/issues/2773
albertvillanova
1
[ "enhancement", "generic discussion" ]
2,772
Remove returned feature constraint
In the current version, the returned value of the map function has to be a list or ndarray. However, this makes it unsuitable for many tasks. In NLP, many features are sparse (e.g. verb words, noun chunks): if we assign different values to different words but only score the useful ones, the result is a large sparse matrix. When working at scale, saving such a matrix densely takes a lot of disk storage and makes it hard to read, so the usual method is to save it in sparse form. However, NumPy does not support sparse arrays, so I have to use PyTorch or SciPy to transform the matrix into a special sparse form, which cannot be converted into a list or ndarray. This violates the feature constraints of the map function. I do appreciate the convenience of the Datasets package, but I do not think the compulsory datatype constraint is necessary; in some cases, we simply cannot transform the output into a list or ndarray. Is there any way to fix this, or can I disable the compulsory datatype constraint?
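For reference, one way to live with the current constraint (a sketch) is to serialize each sparse matrix as plain lists in COO layout, which `map` accepts, and rebuild the sparse tensor when reading the example back:

```python
import torch

dense = torch.tensor([[0.0, 1.0], [0.0, 0.0]])
sp = dense.to_sparse().coalesce()

# lists are accepted by `map`, so store the COO components...
record = {
    "indices": sp.indices().tolist(),
    "values": sp.values().tolist(),
    "size": list(sp.size()),
}

# ...and reconstruct the sparse tensor on the fly when needed
restored = torch.sparse_coo_tensor(record["indices"], record["values"], record["size"])
```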
OPEN
2021-08-08T04:01:30
2021-08-08T08:48:01
null
https://github.com/huggingface/datasets/issues/2772
PosoSAgapo
0
[ "enhancement" ]
2,768
`ArrowInvalid: Added column's length must match table's length.` after using `select`
## Describe the bug I would like to add a column to a downsampled dataset. However I get an error message saying the length don't match with the length of the unsampled dataset indicated. I suspect that the dataset size is not updated when calling `select`. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("tweets_hate_speech_detection")['train'].select(range(128)) ds = ds.add_column('ones', [1]*128) ``` ## Expected results I would expect a new column named `ones` filled with `1`. When I check the length of `ds` it says `128`. Interestingly, it works when calling `ds = ds.map(lambda x: x)` before adding the column. ## Actual results Specify the actual results or traceback. ```python --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) /var/folders/l4/2905jygx4tx5jv8_kn03vxsw0000gn/T/ipykernel_6301/868709636.py in <module> 1 from datasets import load_dataset 2 ds = load_dataset("tweets_hate_speech_detection")['train'].select(range(128)) ----> 3 ds = ds.add_column('ones', [0]*128) ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 183 } 184 # apply actual function --> 185 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 186 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 187 # re-apply format to the output ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 395 # Call actual function 396 --> 397 out = func(self, *args, **kwargs) 398 399 # Update fingerprint of in-place transforms + update in-place history of transforms ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_column(self, name, column, new_fingerprint) 2965 column_table = InMemoryTable.from_pydict({name: column}) 2966 # Concatenate tables horizontally -> 2967 table = ConcatenationTable.from_tables([self._data, column_table], axis=1) 2968 # Update features 2969 info = self.info.copy() ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/table.py in from_tables(cls, tables, axis) 715 table_blocks = to_blocks(table) 716 blocks = _extend_blocks(blocks, table_blocks, axis=axis) --> 717 return cls.from_blocks(blocks) 718 719 @property ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/table.py in from_blocks(cls, blocks) 663 return cls(table, blocks) 664 else: --> 665 table = cls._concat_blocks_horizontally_and_vertically(blocks) 666 return cls(table, blocks) 667 ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/table.py in _concat_blocks_horizontally_and_vertically(cls, blocks) 623 if not tables: 624 continue --> 625 pa_table_horizontally_concatenated = cls._concat_blocks(tables, axis=1) 626 pa_tables_to_concat_vertically.append(pa_table_horizontally_concatenated) 627 return cls._concat_blocks(pa_tables_to_concat_vertically, axis=0) ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/table.py in _concat_blocks(blocks, axis) 612 else: 613 for name, col in zip(table.column_names, table.columns): --> 614 pa_table = pa_table.append_column(name, col) 615 return pa_table 616 else: ~/git/semantic-clustering/env/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.append_column() ~/git/semantic-clustering/env/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.add_column() 
~/git/semantic-clustering/env/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/git/semantic-clustering/env/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: Added column's length must match table's length. Expected length 31962 but got length 128 ``` ## Environment info - `datasets` version: 1.11.0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.5 - PyArrow version: 5.0.0
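Until this is fixed, a possible workaround (a sketch; presumably this is also why the no-op `map` call helps) is to materialize the selection before adding the column:

```python
from datasets import load_dataset

ds = load_dataset("tweets_hate_speech_detection")["train"].select(range(128))
ds = ds.flatten_indices()  # write out the 128 selected rows so the table length matches
ds = ds.add_column("ones", [1] * 128)
```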
CLOSED
2021-08-07T13:17:29
2021-08-09T11:26:43
2021-08-09T11:26:43
https://github.com/huggingface/datasets/issues/2768
lvwerra
2
[ "bug" ]
2,767
Equal operation to perform unbatch for huggingface datasets
Hi, I need an "unbatch" operation (like the one in TensorFlow) on a huggingface dataset. I could not find this operation; could you kindly direct me on how to do it? Here is the problem I am trying to solve: I am considering the "record" dataset in SuperGLUE and I need to replicate each entry of the dataset for each answer, to make it similar to what T5 originally did: https://github.com/google-research/text-to-text-transfer-transformer/blob/3c58859b8fe72c2dbca6a43bc775aa510ba7e706/t5/data/preprocessors.py#L925 For example, a typical example from ReCoRD might look like { 'passage': 'This is the passage.', 'query': 'A @placeholder is a bird.', 'entities': ['penguin', 'potato', 'pigeon'], 'answers': ['penguin', 'pigeon'], } and I need a processor which would turn this example into the following two examples: { 'inputs': 'record query: A @placeholder is a bird. entities: penguin, ' 'potato, pigeon passage: This is the passage.', 'targets': 'penguin', } and { 'inputs': 'record query: A @placeholder is a bird. entities: penguin, ' 'potato, pigeon passage: This is the passage.', 'targets': 'pigeon', } To do this, one needs unbatch, as each entry can map to multiple samples depending on the number of answers. I am not sure how to perform this operation with the huggingface datasets library and would greatly appreciate your help. @lhoestq Thank you very much.
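For what it's worth, here is a sketch of how this could be done with `datasets` (unverified; the column names are taken from the ReCoRD config of SuperGLUE), relying on the fact that a batched `map` may return more rows than it receives:

```python
from datasets import load_dataset

def explode(batch):
    out = {"inputs": [], "targets": []}
    for query, entities, passage, answers in zip(
        batch["query"], batch["entities"], batch["passage"], batch["answers"]
    ):
        inputs = (
            f"record query: {query} entities: {', '.join(entities)} "
            f"passage: {passage}"
        )
        for answer in answers:  # one output row per answer
            out["inputs"].append(inputs)
            out["targets"].append(answer)
    return out

ds = load_dataset("super_glue", "record", split="train")
ds = ds.map(explode, batched=True, remove_columns=ds.column_names)
```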
CLOSED
2021-08-06T19:45:52
2022-03-07T13:58:00
2022-03-07T13:58:00
https://github.com/huggingface/datasets/issues/2767
dorooddorood606
5
[ "bug" ]
2,765
BERTScore Error
## Describe the bug Computing BERTScore with the `bertscore` metric raises a `TypeError`. ## Steps to reproduce the bug ```python predictions = ["hello there", "general kenobi"] references = ["hello there", "general kenobi"] bert = load_metric('bertscore') bert.compute(predictions=predictions, references=references, lang='en') ``` # Bug `TypeError: get_hash() missing 1 required positional argument: 'use_fast_tokenizer'` ## Environment info - `datasets` version: - Platform: Colab - Python version: - PyArrow version:
CLOSED
2021-08-06T15:58:57
2021-08-09T11:16:25
2021-08-09T11:16:25
https://github.com/huggingface/datasets/issues/2765
gagan3012
1
[ "bug" ]
2,763
English wikipedia dataset is not clean
## Describe the bug Wikipedia english dumps contain many wikipedia paragraphs like "References", "Category:" and "See Also" that should not be used for training. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset w = load_dataset('wikipedia', '20200501.en') print(w['train'][0]['text']) ``` > 'Yangliuqing () is a market town in Xiqing District, in the western suburbs of Tianjin, People\'s Republic of China. Despite its relatively small size, it has been named since 2006 in the "famous historical and cultural market towns in China".\n\nIt is best known in China for creating nianhua or Yangliuqing nianhua. For more than 400 years, Yangliuqing has in effect specialised in the creation of these woodcuts for the New Year. wood block prints using vivid colourschemes to portray traditional scenes of children\'s games often interwoven with auspiciouse objects.\n\n, it had 27 residential communities () and 25 villages under its administration.\n\nShi Family Grand Courtyard\n\nShi Family Grand Courtyard (Tiānjīn Shí Jiā Dà Yuàn, 天津石家大院) is situated in Yangliuqing Town of Xiqing District, which is the former residence of wealthy merchant Shi Yuanshi - the 4th son of Shi Wancheng, one of the eight great masters in Tianjin. First built in 1875, it covers over 6,000 square meters, including large and small yards and over 200 folk houses, a theater and over 275 rooms that served as apartments and places of business and worship for this powerful family. Shifu Garden, which finished its expansion in October 2003, covers 1,200 square meters, incorporates the elegance of imperial garden and delicacy of south garden. Now the courtyard of Shi family covers about 10,000 square meters, which is called the first mansion in North China. Now it serves as the folk custom museum in Yangliuqing, which has a large collection of folk custom museum in Yanliuqing, which has a large collection of folk art pieces like Yanliuqing New Year pictures, brick sculpture.\n\nShi\'s ancestor came from Dong\'e County in Shandong Province, engaged in water transport of grain. As the wealth gradually accumulated, the Shi Family moved to Yangliuqing and bought large tracts of land and set up their residence. Shi Yuanshi came from the fourth generation of the family, who was a successful businessman and a good household manager, and the residence was thus enlarged for several times until it acquired the present scale. It is believed to be the first mansion in the west of Tianjin.\n\nThe residence is symmetric based on the axis formed by a passageway in the middle, on which there are four archways. On the east side of the courtyard, there are traditional single-story houses with rows of rooms around the four sides, which was once the living area for the Shi Family. The rooms on north side were the accountants\' office. On the west are the major constructions including the family hall for worshipping Buddha, theater and the south reception room. On both sides of the residence are side yard rooms for maids and servants.\n\nToday, the Shi mansion, located in the township of Yangliuqing to the west of central Tianjin, stands as a surprisingly well-preserved monument to China\'s pre-revolution mercantile spirit. It also serves as an on-location shoot for many of China\'s popular historical dramas. 
Many of the rooms feature period furniture, paintings and calligraphy, and the extensive Shifu Garden.\n\nPart of the complex has been turned into the Yangliuqing Museum, which includes displays focused on symbolic aspects of the courtyards\' construction, local folk art and customs, and traditional period furnishings and crafts.\n\n**See also \n\nList of township-level divisions of Tianjin\n\nReferences \n\n http://arts.cultural-china.com/en/65Arts4795.html\n\nCategory:Towns in Tianjin'** ## Expected results I expect no junk in the data. ## Actual results Specify the actual results or traceback. ## Environment info - `datasets` version: 1.10.2 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.5 - PyArrow version: 3.0.0
CLOSED
2021-08-05T14:37:24
2023-07-25T17:43:04
2023-07-25T17:43:04
https://github.com/huggingface/datasets/issues/2763
lucadiliello
1
[ "bug" ]
2,762
Add RVL-CDIP dataset
## Adding a Dataset - **Name:** RVL-CDIP - **Description:** The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels. - **Paper:** https://www.cs.cmu.edu/~aharley/icdar15/ - **Data:** https://www.cs.cmu.edu/~aharley/rvl-cdip/ - **Motivation:** I'm currently adding LayoutLMv2 and LayoutXLM to HuggingFace Transformers. LayoutLM (v1) already exists in the library. This dataset has a large value for document image classification (i.e. classifying scanned documents). LayoutLM models obtain SOTA on this dataset, so would be great to directly use it in notebooks. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
CLOSED
2021-08-05T09:57:05
2022-04-21T17:15:41
2022-04-21T17:15:41
https://github.com/huggingface/datasets/issues/2762
NielsRogge
3
[ "dataset request", "vision" ]
2,761
Error loading C4 realnewslike dataset
## Describe the bug Error loading C4 realnewslike dataset. Validation part mismatch ## Steps to reproduce the bug ```python raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir) ## Expected results success on data loading ## Actual results Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 15.3M/15.3M [00:00<00:00, 28.1MB/s]Traceback (most recent call last): File "run_mlm_tf.py", line 794, in <module> main() File "run_mlm_tf.py", line 425, in main raw_datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir) File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/load.py", line 843, in load_dataset builder_instance.download_and_prepare( File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/builder.py", line 608, in download_and_prepare self._download_and_prepare( File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/builder.py", line 698, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 74, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='validation', num_bytes=38165657946, num_examples=13799838, dataset_name='c4'), 'recorded': SplitInfo(name='validation', num_bytes=37875873, num_examples=13863, dataset_name='c4')}] ## Environment info - `datasets` version: 1.10.2 - Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 4.0.1
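As a stopgap (a sketch; this skips the integrity check rather than fixing the stale metadata), the split verification can be disabled:

```python
from datasets import load_dataset

raw_datasets = load_dataset("c4", "realnewslike", ignore_verifications=True)
```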
CLOSED
2021-08-05T08:16:58
2021-08-08T19:44:34
2021-08-08T19:44:34
https://github.com/huggingface/datasets/issues/2761
danshirron
4
[ "bug" ]
2,760
Add Nuswide dataset
## Adding a Dataset - **Name:** *NUSWIDE* - **Description:** *[A Real-World Web Image Dataset from National University of Singapore](https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/NUS-WIDE.html)* - **Paper:** *[here](https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/nuswide-civr2009.pdf)* - **Data:** *[here](https://github.com/wenting-zhao/nuswide)* - **Motivation:** *This dataset is a benchmark in the Text Retrieval task.* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
OPEN
2021-08-05T03:00:41
2021-12-08T12:06:23
null
https://github.com/huggingface/datasets/issues/2760
shivangibithel
0
[ "dataset request", "vision" ]
2,757
Unexpected type after `concatenate_datasets`
## Describe the bug I am trying to concatenate two `Dataset` using `concatenate_datasets` but it turns out that after concatenation the features are casted from `torch.Tensor` to `list`. It then leads to a weird tensors when trying to convert it to a `DataLoader`. However, if I use each `Dataset` separately everything behave as expected. ## Steps to reproduce the bug ```python >>> featurized_teacher Dataset({ features: ['t_labels', 't_input_ids', 't_token_type_ids', 't_attention_mask'], num_rows: 502 }) >>> for f in featurized_teacher.features: print(featurized_teacher[f].shape) torch.Size([502]) torch.Size([502, 300]) torch.Size([502, 300]) torch.Size([502, 300]) >>> featurized_student Dataset({ features: ['s_features', 's_labels'], num_rows: 502 }) >>> for f in featurized_student.features: print(featurized_student[f].shape) torch.Size([502, 64]) torch.Size([502]) ``` The shapes seem alright to me. Then the results after concatenation are as follow: ```python >>> concat_dataset = datasets.concatenate_datasets([featurized_student, featurized_teacher], axis=1) >>> type(concat_dataset["t_labels"]) <class 'list'> ``` One would expect to obtain the same type as the one before concatenation. Am I doing something wrong here? Any idea on how to fix this unexpected behavior? ## Environment info - `datasets` version: 1.9.0 - Platform: macOS-10.14.6-x86_64-i386-64bit - Python version: 3.9.5 - PyArrow version: 3.0.0
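A guess at a workaround (untested sketch; `featurized_student` and `featurized_teacher` are the datasets from the snippet above): the torch format may simply be dropped during concatenation, in which case re-applying it should restore tensors:

```python
import datasets

concat_dataset = datasets.concatenate_datasets(
    [featurized_student, featurized_teacher], axis=1
)
concat_dataset.set_format(type="torch")  # re-apply the torch format after concatenating
```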
CLOSED
2021-08-04T07:10:39
2021-08-04T16:01:24
2021-08-04T16:01:23
https://github.com/huggingface/datasets/issues/2757
JulesBelveze
2
[ "bug" ]
2,750
Second concatenation of datasets produces errors
Hi, I need to concatenate my dataset with others several times, and after I concatenate it for the second time, the features of the features (e.g. tag names) are collapsed. This hinders, for instance, the usage of a tokenize function with `data.map`. ``` from datasets import load_dataset, concatenate_datasets data = load_dataset('trec')['train'] concatenated = concatenate_datasets([data, data]) concatenated_2 = concatenate_datasets([concatenated, concatenated]) print('True features of features:', concatenated.features) print('\nProduced features of features:', concatenated_2.features) ``` outputs ``` True features of features: {'label-coarse': ClassLabel(num_classes=6, names=['DESC', 'ENTY', 'ABBR', 'HUM', 'NUM', 'LOC'], names_file=None, id=None), 'label-fine': ClassLabel(num_classes=47, names=['manner', 'cremat', 'animal', 'exp', 'ind', 'gr', 'title', 'def', 'date', 'reason', 'event', 'state', 'desc', 'count', 'other', 'letter', 'religion', 'food', 'country', 'color', 'termeq', 'city', 'body', 'dismed', 'mount', 'money', 'product', 'period', 'substance', 'sport', 'plant', 'techmeth', 'volsize', 'instru', 'abb', 'speed', 'word', 'lang', 'perc', 'code', 'dist', 'temp', 'symbol', 'ord', 'veh', 'weight', 'currency'], names_file=None, id=None), 'text': Value(dtype='string', id=None)} Produced features of features: {'label-coarse': Value(dtype='int64', id=None), 'label-fine': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None)} ``` I am using `datasets` v.1.11.0
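A possible interim fix (a sketch) is to cast the concatenated dataset back to the original features, which restores the `ClassLabel` metadata:

```python
from datasets import load_dataset, concatenate_datasets

data = load_dataset("trec")["train"]
concatenated = concatenate_datasets([data, data])
concatenated_2 = concatenate_datasets([concatenated, concatenated])
concatenated_2 = concatenated_2.cast(data.features)  # bring back the ClassLabel columns
```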
CLOSED
2021-08-03T10:47:04
2022-01-19T14:23:43
2022-01-19T14:19:05
https://github.com/huggingface/datasets/issues/2750
Aktsvigun
5
[ "bug" ]
2,749
Raise a proper exception when trying to stream a dataset that requires to manually download files
## Describe the bug At least for 'reclor', 'telugu_books', 'turkish_movie_sentiment', 'ubuntu_dialogs_corpus', 'wikihow', trying to `load_dataset` in streaming mode raises a `TypeError` without any detail about why it fails. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("reclor", streaming=True) ``` ## Expected results Ideally: raise a specific exception, something like `ManualDownloadError`. Or at least give the reason in the message, as when we load in normal mode: ```python from datasets import load_dataset dataset = load_dataset("reclor") ``` ``` AssertionError: The dataset reclor with config default requires manual data. Please follow the manual download instructions: to use ReClor you need to download it manually. Please go to its homepage (http://whyu.me/reclor/) fill the google form and you will receive a download link and a password to extract it.Please extract all files in one folder and use the path folder in datasets.load_dataset('reclor', data_dir='path/to/folder/folder_name') . Manual data can be loaded with `datasets.load_dataset(reclor, data_dir='<path/to/manual/data>') ``` ## Actual results ``` TypeError: expected str, bytes or os.PathLike object, not NoneType ``` ## Environment info - `datasets` version: 1.11.0 - Platform: macOS-11.5-x86_64-i386-64bit - Python version: 3.8.11 - PyArrow version: 4.0.1
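For concreteness, a sketch of what the proposed exception could look like (the class name and the check location are hypothetical):

```python
class ManualDownloadError(Exception):
    """Raised when a dataset requires manually downloaded files."""

def check_manual_data(dataset_name: str, manual_dir):
    # hypothetical helper showing where such a check could live
    if manual_dir is None:
        raise ManualDownloadError(
            f"The dataset {dataset_name} requires manual data. Please follow its "
            f"manual download instructions and pass data_dir=... to load_dataset."
        )
```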
CLOSED
2021-08-03T10:26:27
2021-08-09T08:53:35
2021-08-04T11:36:30
https://github.com/huggingface/datasets/issues/2749
severo
2
[ "bug" ]
2,746
Cannot load `few-nerd` dataset
## Describe the bug Cannot load `few-nerd` dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset('few-nerd', 'supervised') ``` ## Actual results Executing above code will give the following error: ``` Using the latest cached version of the module from /Users/Mehrad/.cache/huggingface/modules/datasets_modules/datasets/few-nerd/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53 (last modified on Wed Jun 2 11:34:25 2021) since it couldn't be found locally at /Users/Mehrad/Documents/GitHub/genienlp/few-nerd/few-nerd.py, or remotely (FileNotFoundError). Downloading and preparing dataset few_nerd/supervised (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /Users/Mehrad/.cache/huggingface/datasets/few_nerd/supervised/0.0.0/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53... Traceback (most recent call last): File "/Users/Mehrad/opt/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 693, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/Mehrad/opt/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 1107, in _prepare_split disable=bool(logging.get_verbosity() == logging.NOTSET), File "/Users/Mehrad/opt/anaconda3/lib/python3.7/site-packages/tqdm/std.py", line 1133, in __iter__ for obj in iterable: File "/Users/Mehrad/.cache/huggingface/modules/datasets_modules/datasets/few-nerd/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53/few-nerd.py", line 196, in _generate_examples with open(filepath, encoding="utf-8") as f: FileNotFoundError: [Errno 2] No such file or directory: '/Users/Mehrad/.cache/huggingface/datasets/downloads/supervised/train.json' ``` The bug is probably in identifying and downloading the dataset. If I download the json splits directly from [link](https://github.com/nbroad1881/few-nerd/tree/main/uncompressed) and put them under the downloads directory, they will be processed into arrow format correctly. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Python version: 3.8 - PyArrow version: 1.0.1
CLOSED
2021-08-02T22:18:57
2021-11-16T08:51:34
2021-08-03T19:45:43
https://github.com/huggingface/datasets/issues/2746
Mehrad0711
6
[ "bug" ]
2,743
Dataset JSON is incorrect
## Describe the bug The JSON file generated for https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/journalists_questions/journalists_questions.py is https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/journalists_questions/dataset_infos.json. The only config should be `plain_text`, but the first key in the JSON is `journalists_questions` (the dataset id) instead. ```json { "journalists_questions": { "description": "The journalists_questions corpus (version 1.0) is a collection of 10K human-written Arabic\ntweets manually labeled for question identification over Arabic tweets posted by journalists.\n", ... ``` ## Steps to reproduce the bug Look at the files. ## Expected results The first key should be `plain_text`: ```json { "plain_text": { "description": "The journalists_questions corpus (version 1.0) is a collection of 10K human-written Arabic\ntweets manually labeled for question identification over Arabic tweets posted by journalists.\n", ... ``` ## Actual results ```json { "journalists_questions": { "description": "The journalists_questions corpus (version 1.0) is a collection of 10K human-written Arabic\ntweets manually labeled for question identification over Arabic tweets posted by journalists.\n", ... ```
CLOSED
2021-08-02T13:01:26
2021-08-03T10:06:57
2021-08-03T09:25:33
https://github.com/huggingface/datasets/issues/2743
severo
2
[ "bug" ]
2,742
Improve detection of streamable file types
**Is your feature request related to a problem? Please describe.** ```python from datasets import load_dataset_builder from datasets.utils.streaming_download_manager import StreamingDownloadManager builder = load_dataset_builder("journalists_questions", name="plain_text") builder._split_generators(StreamingDownloadManager(base_path=builder.base_path)) ``` raises ``` NotImplementedError: Extraction protocol for file at https://drive.google.com/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U is not implemented yet ``` But the file at https://drive.google.com/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U is a text file and it can be streamed: ```bash curl --header "Range: bytes=0-100" -L https://drive.google.com/uc\?export\=download\&id\=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U 506938088174940160 yes 1 302221719412830209 yes 1 289761704907268096 yes 1 513820885032378369 yes % ``` Yet, it's wrongly categorized as a file type that cannot be streamed because the test is currently based on 1. the presence of a file extension at the end of the URL (here: no extension), and 2. the inclusion of this extension in a list of supported formats. **Describe the solution you'd like** In the case of an URL (instead of a local path), ask for the MIME type, and decide on that value? Note that it would not work in that case, because the value of `content_type` is `text/html; charset=UTF-8`. **Describe alternatives you've considered** Add a variable in the dataset script to set the data format by hand.
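For illustration, probing the MIME type could look like the sketch below; as noted above, it would not help for this particular Google Drive URL, whose reported type is misleading:

```python
import requests

url = "https://drive.google.com/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U"
resp = requests.head(url, allow_redirects=True)
print(resp.headers.get("Content-Type"))  # 'text/html; charset=UTF-8' for this URL
```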
CLOSED
2021-08-02T12:55:09
2021-11-12T17:18:10
2021-11-12T17:18:10
https://github.com/huggingface/datasets/issues/2742
severo
1
[ "enhancement", "dataset-viewer" ]
2,741
Add Hypersim dataset
## Adding a Dataset - **Name:** Hypersim - **Description:** photorealistic synthetic dataset for holistic indoor scene understanding - **Paper:** *link to the dataset paper if available* - **Data:** https://github.com/apple/ml-hypersim Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
OPEN
2021-08-02T10:06:50
2021-12-08T12:06:51
null
https://github.com/huggingface/datasets/issues/2741
osanseviero
0
[ "dataset request", "vision" ]
2,737
SacreBLEU update
With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken and raises: AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER' This happens because the new version of sacrebleu no longer defines `DEFAULT_TOKENIZER`, yet sacrebleu.py still tries to import it. For now this can be worked around by pinning `sacrebleu==1.5.0`. ## Steps to reproduce the bug ```python sacrebleu = datasets.load_metric('sacrebleu') predictions = ["It is a guide to action which ensures that the military always obeys the commands of the party"] references = ["It is a guide to action that ensures that the military will forever heed Party commands"] results = sacrebleu.compute(predictions=predictions, references=references) print(results) ``` ## Environment info - `datasets` version: 1.11.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: Python 3.8.0 - PyArrow version: 5.0.0
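Besides pinning, a temporary shim is conceivable (a sketch; it assumes "13a" was the old default tokenizer name):

```python
import sacrebleu

# restore the attribute removed in sacrebleu 2.0.0
# (assumption: "13a" was the previous default tokenizer)
if not hasattr(sacrebleu, "DEFAULT_TOKENIZER"):
    sacrebleu.DEFAULT_TOKENIZER = "13a"
```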
CLOSED
2021-07-30T23:53:08
2021-09-22T10:47:41
2021-08-03T04:23:37
https://github.com/huggingface/datasets/issues/2737
devrimcavusoglu
5
[ "bug" ]
2,736
Add Microsoft Building Footprints dataset
## Adding a Dataset - **Name:** Microsoft Building Footprints - **Description:** With the goal to increase the coverage of building footprint data available as open data for OpenStreetMap and humanitarian efforts, we have released millions of building footprints as open data available to download free of charge. - **Paper:** *link to the dataset paper if available* - **Data:** https://www.microsoft.com/en-us/maps/building-footprints - **Motivation:** this can be a useful dataset for researchers working on climate change adaptation, urban studies, geography, etc. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Reported by: @sashavor
OPEN
2021-07-30T16:17:08
2021-12-08T12:09:03
null
https://github.com/huggingface/datasets/issues/2736
albertvillanova
1
[ "dataset request", "vision" ]
2,735
Add Open Buildings dataset
## Adding a Dataset - **Name:** Open Buildings - **Description:** A dataset of building footprints to support social good applications. Building footprints are useful for a range of important applications, from population estimation, urban planning and humanitarian response, to environmental and climate science. This large-scale open dataset contains the outlines of buildings derived from high-resolution satellite imagery in order to support these types of uses. The project being based in Ghana, the current focus is on the continent of Africa. See: "Mapping Africa's Buildings with Satellite Imagery" https://ai.googleblog.com/2021/07/mapping-africas-buildings-with.html - **Paper:** https://arxiv.org/abs/2107.12283 - **Data:** https://sites.research.google/open-buildings/ - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Reported by: @osanseviero
OPEN
2021-07-30T16:08:39
2021-07-31T05:01:25
null
https://github.com/huggingface/datasets/issues/2735
albertvillanova
0
[ "dataset request" ]
2,730
Update CommonVoice with new release
## Adding a Dataset - **Name:** CommonVoice mid-2021 release - **Description:** more data in CommonVoice: Languages that have increased the most by percentage are Thai (almost 20x growth, from 12 hours to 250 hours), Luganda (almost 9x growth, from 8 to 80), Esperanto (7x growth, from 100 to 840), and Tamil (almost 8x, from 24 to 220). - **Paper:** https://discourse.mozilla.org/t/common-voice-2021-mid-year-dataset-release/83812 - **Data:** https://commonvoice.mozilla.org/en/datasets - **Motivation:** More data and more varied. I think we just need to add configs in the existing dataset script. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
OPEN
2021-07-29T15:59:59
2021-08-07T16:19:19
null
https://github.com/huggingface/datasets/issues/2730
yjernite
3
[ "dataset request" ]
2,728
Concurrent use of same dataset (already downloaded)
## Describe the bug Launching several jobs at the same time that load the same dataset triggers some errors (see last comments). ## Steps to reproduce the bug export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets for MODEL in "bert-base-uncased" "roberta-base" "distilbert-base-cased"; do # "bert-base-uncased" "bert-large-cased" "roberta-large" "albert-base-v1" "albert-large-v1"; do for TASK_NAME in "mrpc" "rte" 'imdb' "paws" "mnli"; do export OUTPUT_DIR=${MODEL}_${TASK_NAME} sbatch --job-name=${OUTPUT_DIR} \ --gres=gpu:1 \ --no-requeue \ --cpus-per-task=10 \ --hint=nomultithread \ --time=1:00:00 \ --output=jobinfo/${OUTPUT_DIR}_%j.out \ --error=jobinfo/${OUTPUT_DIR}_%j.err \ --qos=qos_gpu-t4 \ --wrap="module purge; module load pytorch-gpu/py3/1.7.0 ; export HF_DATASETS_OFFLINE=1; export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets; python compute_measures.py --seed=$SEED --saving_path=results --batch_size=$BATCH_SIZE --task_name=$TASK_NAME --model_name=/gpfswork/rech/toto/transformers_models/$MODEL" done done ```python # Sample code to reproduce the bug dataset_train = load_dataset('imdb', split='train', download_mode="reuse_cache_if_exists") dataset_train = dataset_train.map(lambda e: tokenizer(e['text'], truncation=True, padding='max_length'), batched=True).select(list(range(args.filter))) dataset_val = load_dataset('imdb', split='train', download_mode="reuse_cache_if_exists") dataset_val = dataset_val.map(lambda e: tokenizer(e['text'], truncation=True, padding='max_length'), batched=True).select(list(range(args.filter, args.filter + 5000))) dataset_test = load_dataset('imdb', split='test', download_mode="reuse_cache_if_exists") dataset_test = dataset_test.map(lambda e: tokenizer(e['text'], truncation=True, padding='max_length'), batched=True) ``` ## Expected results I believe I am doing something wrong with the objects. ## Actual results Traceback (most recent call last): File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/builder.py", line 652, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/builder.py", line 983, in _prepare_split check_duplicates=True, File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/arrow_writer.py", line 192, in __init__ self.stream = pa.OSFile(self._path, "wb") File "pyarrow/io.pxi", line 829, in pyarrow.lib.OSFile.__cinit__ File "pyarrow/io.pxi", line 844, in pyarrow.lib.OSFile._open_writable File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 97, in pyarrow.lib.check_status FileNotFoundError: [Errno 2] Failed to open local file '/gpfswork/rech/tts/unm25jp/datasets/paws/labeled_final/1.1.0/09d8fae989bb569009a8f5b879ccf2924d3e5cd55bfe2e89e6dab1c0b50ecd34.incomplete/paws-test.arrow'. 
Detail: [errno 2] No such file or directory During handling of the above exception, another exception occurred: Traceback (most recent call last): File "compute_measures.py", line 181, in <module> train_loader, val_loader, test_loader = get_dataloader(args) File "/gpfsdswork/projects/rech/toto/intRAOcular/dataset_utils.py", line 69, in get_dataloader dataset_train = load_dataset('paws', "labeled_final", split='train', download_mode="reuse_cache_if_exists") File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/load.py", line 748, in load_dataset use_auth_token=use_auth_token, File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/builder.py", line 658, in _download_and_prepare + str(e) OSError: Cannot find data file. Original error: [Errno 2] Failed to open local file '/gpfswork/rech/toto/datasets/paws/labeled_final/1.1.0/09d8fae989bb569009a8f5b879ccf2924d3e5cd55bfe2e89e6dab1c0b50ecd34.incomplete/paws-test.arrow'. Detail: [errno 2] No such file or directory ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: datasets==1.8.0 - Platform: linux (jeanzay) - Python version: pyarrow==2.0.0 - PyArrow version: 3.7.8
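One mitigation sketch (untested; the lock path is a placeholder on the shared filesystem) is to serialize the first, cache-writing access across jobs with a file lock, or simply to pre-download each dataset once before submitting the job array:

```python
from filelock import FileLock
from datasets import load_dataset

# placeholder lock path; one lock per dataset on the shared filesystem
with FileLock("/gpfswork/rech/toto/datasets/imdb.lock"):
    dataset_train = load_dataset("imdb", split="train")
```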
OPEN
2021-07-29T14:18:38
2021-08-02T07:25:57
null
https://github.com/huggingface/datasets/issues/2728
PierreColombo
4
[ "bug" ]
2,727
Error in loading the Arabic Billion Words Corpus
## Describe the bug I get `IndexError: list index out of range` when trying to load the `Techreen` and `Almustaqbal` configs of the dataset. ## Steps to reproduce the bug ```python load_dataset("arabic_billion_words", "Techreen") load_dataset("arabic_billion_words", "Almustaqbal") ``` ## Expected results The datasets load successfully. ## Actual results ```python _extract_tags(self, sample, tag) 139 if len(out) > 0: 140 break --> 141 return out[0] 142 143 def _clean_text(self, text): IndexError: list index out of range ``` ## Environment info - `datasets` version: 1.10.2 - Platform: Ubuntu 18.04.5 LTS - Python version: 3.7.11 - PyArrow version: 3.0.0
CLOSED
2021-07-29T12:53:09
2021-07-30T13:03:55
2021-07-30T13:03:55
https://github.com/huggingface/datasets/issues/2727
M-Salti
2
[ "bug" ]
2,724
404 Error when loading remote data files from private repo
## Describe the bug When loading remote data files from a private repo, a 404 error is raised. ## Steps to reproduce the bug ```python url = hf_hub_url("lewtun/asr-preds-test", "preds.jsonl", repo_type="dataset") dset = load_dataset("json", data_files=url, use_auth_token=True) # HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/datasets/lewtun/asr-preds-test/resolve/main/preds.jsonl ``` ## Expected results Load dataset. ## Actual results 404 Error.
CLOSED
2021-07-28T14:24:23
2021-07-29T04:58:49
2021-07-28T16:38:01
https://github.com/huggingface/datasets/issues/2724
albertvillanova
3
[ "bug" ]
2,722
Missing cache file
Strangely, the cache file is missing after I restart my program. `glue_dataset = datasets.load_dataset('glue', 'sst2')` `FileNotFoundError: [Errno 2] No such file or directory: /Users/chris/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96d6053ad/dataset_info.json'`
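If the cache directory really was wiped or corrupted, forcing a re-download may help (a sketch, not a root-cause fix):

```python
import datasets

glue_dataset = datasets.load_dataset("glue", "sst2", download_mode="force_redownload")
```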
CLOSED
2021-07-28T03:52:07
2022-03-21T08:27:51
2022-03-21T08:27:51
https://github.com/huggingface/datasets/issues/2722
PosoSAgapo
2
[ "bug" ]
2,719
Use ETag in streaming mode to detect resource updates
**Is your feature request related to a problem? Please describe.** I want to cache data I generate from processing a dataset I've loaded in streaming mode, but I've currently no way to know if the remote data has been updated or not, thus I don't know when to invalidate my cache. **Describe the solution you'd like** Take the ETag of the data files into account and provide it (directly or through a hash) to give a signal that I can invalidate my cache. **Describe alternatives you've considered** None
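In the meantime, a do-it-yourself sketch is to fetch the ETag with a HEAD request and key the cache on it:

```python
import requests

def resource_etag(url: str) -> str:
    # an empty string means the server exposed no ETag
    resp = requests.head(url, allow_redirects=True)
    resp.raise_for_status()
    return resp.headers.get("ETag", "")
```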
OPEN
2021-07-27T14:17:09
2021-10-22T09:36:08
null
https://github.com/huggingface/datasets/issues/2719
severo
0
[ "enhancement", "dataset-viewer" ]
2,716
Calling shuffle on IterableDataset will disable batching in case any functions were mapped
When using a dataset in streaming mode, if one applies the `shuffle` method on a dataset to which a `map` with `batched=True` was applied, then the batching operation will not happen; instead `batched` is silently set to `False`. I did an RCA on the datasets codebase: the problem stems from [this line of code](https://github.com/huggingface/datasets/blob/d25a0bf94d9f9a9aa6cabdf5b450b9c327d19729/src/datasets/iterable_dataset.py#L197), which reads `self.ex_iterable.shuffle_data_sources(seed), function=self.function, batch_size=self.batch_size`. As one can see, it is missing the `batched` argument, which means that the iterator falls back to the default constructor value, which in this case is `False`. To remedy the problem we can change this line to `self.ex_iterable.shuffle_data_sources(seed), function=self.function, batched=self.batched, batch_size=self.batch_size`
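A minimal repro sketch (the dataset choice is arbitrary; any streaming dataset with a batched mapped function should do):

```python
from datasets import load_dataset

ds = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)
ds = ds.map(lambda batch: batch, batched=True, batch_size=8)
ds = ds.shuffle(seed=42)  # the mapped function is now silently re-applied with batched=False
next(iter(ds))
```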
CLOSED
2021-07-26T13:24:59
2021-07-26T18:04:43
2021-07-26T18:04:43
https://github.com/huggingface/datasets/issues/2716
amankhandelia
3
[ "bug" ]