| Column | Type | Range / stats |
| --- | --- | --- |
| number | int64 | 2 to 7.91k |
| title | string | length 1 to 290 |
| body | string | length 0 to 228k |
| state | string | 2 classes |
| created_at | timestamp[s] | 2020-04-14 18:18:51 to 2025-12-16 10:45:02 |
| updated_at | timestamp[s] | 2020-04-29 09:23:05 to 2025-12-16 19:34:46 |
| closed_at | timestamp[s] | 2020-04-29 09:23:05 to 2025-12-16 14:20:48 |
| url | string | length 48 to 51 |
| author | string | length 3 to 26 |
| comments_count | int64 | 0 to 70 |
| labels | list | length 0 to 4 |
3,902
Can't import datasets: partially initialized module 'fsspec' has no attribute 'utils'
## Describe the bug Unable to import datasets ## Steps to reproduce the bug ```python from datasets import Dataset, DatasetDict ``` ## Expected results The import works without errors ## Actual results ``` AttributeError Traceback (most recent call last) <ipython-input-37-c8cfcbe62127> in <module> 11 # from tqdm import tqdm 12 # import torch ---> 13 from datasets import Dataset 14 # from transformers import Trainer, TrainingArguments, AutoModel, AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling 15 # from sentence_transformers import SentenceTransformer ~/.local/lib/python3.8/site-packages/datasets/__init__.py in <module> 31 ) 32 ---> 33 from .arrow_dataset import Dataset, concatenate_datasets 34 from .arrow_reader import ArrowReader, ReadInstruction 35 from .arrow_writer import ArrowWriter ~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in <module> 46 ) 47 ---> 48 import fsspec 49 import numpy as np 50 import pandas as pd ~/.local/lib/python3.8/site-packages/fsspec/__init__.py in <module> 10 from . import _version, caching 11 from .callbacks import Callback ---> 12 from .core import get_fs_token_paths, open, open_files, open_local 13 from .exceptions import FSTimeoutError 14 from .mapping import FSMap, get_mapper ~/.local/lib/python3.8/site-packages/fsspec/core.py in <module> 16 caches, 17 ) ---> 18 from .compression import compr 19 from .registry import filesystem, get_filesystem_class 20 from .utils import ( ~/.local/lib/python3.8/site-packages/fsspec/compression.py in <module> 68 69 ---> 70 register_compression("zip", unzip, "zip") 71 register_compression("bz2", BZ2File, "bz2") 72 ~/.local/lib/python3.8/site-packages/fsspec/compression.py in register_compression(name, callback, extensions, force) 44 45 for ext in extensions: ---> 46 if ext in fsspec.utils.compressions and not force: 47 raise ValueError( 48 "Duplicate compression file extension: %s (%s)" % (ext, name) AttributeError: partially initialized module 'fsspec' has no attribute 'utils' (most likely due to a circular import) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.4 - Platform: Jupyter notebook - Python version: 3.8.10 - PyArrow version: 7.0.0
CLOSED
2022-03-12T21:22:03
2023-02-09T14:53:49
2022-03-22T07:10:41
https://github.com/huggingface/datasets/issues/3902
arunasank
5
[ "bug" ]
3,901
Dataset viewer issue for IndicParaphrase- the preview doesn't show
## Dataset viewer issue for '*IndicParaphrase*' **Link:** *[IndicParaphrase](https://huggingface.co/datasets/ai4bharat/IndicParaphrase/viewer/hi/validation)* *The preview of the dataset doesn't come up. The error on the console is: Status code: 400 Exception: FileNotFoundError Message: [Errno 2] No such file or directory: '/home/hf/datasets-preview-backend/hi_IndicParaphrase_v1.0.tar'* Am I the one who added this dataset ? Yes
CLOSED
2022-03-12T16:56:05
2022-04-12T12:10:50
2022-04-12T12:10:49
https://github.com/huggingface/datasets/issues/3901
ratishsp
1
[ "dataset-viewer" ]
3,896
Missing google file for `multi_news` dataset
## Dataset viewer issue for '*multi_news*' **Link:** https://huggingface.co/datasets/multi_news ``` Server error Status code: 400 Exception: FileNotFoundError Message: https://drive.google.com/uc?export=download&id=1vRY2wM6rlOZrf9exGTm5pXj5ExlVwJ0C/multi-news-original/train.src ``` Am I the one who added this dataset ? No
CLOSED
2022-03-11T16:38:10
2022-03-15T12:30:23
2022-03-15T12:30:23
https://github.com/huggingface/datasets/issues/3896
severo
5
[ "dataset-viewer" ]
3,889
Cannot load beans dataset (Couldn't reach the dataset)
## Describe the bug The beans dataset cannot be downloaded. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset('beans') ``` ## Expected results The dataset is downloaded with no issue. ## Actual results ``` ConnectionError: Couldn't reach https://storage.googleapis.com/ibeans/train.zip (error 403) ``` [It looks like the billing of this project has been disabled because it is associated with a delinquent account.](https://storage.googleapis.com/ibeans/train.zip) ## Environment info Google Colab
CLOSED
2022-03-10T16:34:08
2022-03-15T15:26:47
2022-03-15T15:26:47
https://github.com/huggingface/datasets/issues/3889
ivsanro1
1
[ "dataset bug" ]
3,888
IterableDataset columns and feature types
Right now, an IterableDataset (e.g. when streaming a dataset) isn't required to know the list of columns it contains, nor their types: `my_iterable_dataset.features` may be `None`. However, it's often useful to know the column names and types. This helps you know what's inside your dataset without having to manually check a few examples, and it's useful for preparing a processing pipeline or training models. Here are a few cases that lead to `features` being `None`: 1. when loading a dataset with `load_dataset` on CSV, JSON Lines, etc. files: type inference is only done when iterating over the dataset 2. when calling `map`, because we don't know in advance what the output of the user's function passed to `map` will be 3. when calling `rename_columns`, `remove_columns`, etc., because they rely on `map` Things we can consider, for each point above: 1.a infer the types automatically from the first samples of the dataset using prefetching, when the dataset builder doesn't provide the `features` 2.a allow the user to specify the `features` as an argument to `map` (this would be consistent with the non-streaming API) 2.b prefetch the first output value to infer the type 3.a don't rely on `map` directly and instead reuse the previous `features`, renaming/removing the corresponding ones The thing is that prefetching can take a few seconds, while the operations above are instantaneous since no data is downloaded. Therefore I'm not sure whether this solution is worth it. Maybe prefetching could also be done when explicitly requested by the user. cc @mariosasko @albertvillanova
OPEN
2022-03-10T16:19:12
2022-11-29T11:39:24
null
https://github.com/huggingface/datasets/issues/3888
lhoestq
8
[ "generic discussion", "streaming" ]
3,883
The metric Meteor doesn't work for nltk ==3.6.4
## Describe the bug Using the metric Meteor with nltk == 3.6.4 gives a TypeError: TypeError: descriptor 'lower' for 'str' objects doesn't apply to a 'list' object ## Steps to reproduce the bug ```python import datasets metric = datasets.load_metric("meteor") predictions = ["hello world"] references = ["hello world"] metric.compute(predictions=predictions, references=references) ``` ## Expected results No error, just a meteor score. ## Actual results TypeError: descriptor 'lower' for 'str' objects doesn't apply to a 'list' object I think this TypeError exists because input sentences are tokenized into lists of tokens and the str.lower() is applied to this list of tokens. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: linux - Python version: 3.8.12 - PyArrow version: 7.0.0
CLOSED
2022-03-10T02:28:27
2022-03-10T09:03:39
2022-03-10T09:03:39
https://github.com/huggingface/datasets/issues/3883
zhaowei-wang-nlp
1
[ "bug" ]
3,881
How to use Image folder
Ran this code ``` load_dataset("imagefolder", data_dir="./my-dataset") ``` `https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) /tmp/ipykernel_33/1648737256.py in <module> ----> 1 load_dataset("imagefolder", data_dir="./my-dataset") /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs) 1684 revision=revision, 1685 use_auth_token=use_auth_token, -> 1686 **config_kwargs, 1687 ) 1688 /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs) 1511 download_config.use_auth_token = use_auth_token 1512 dataset_module = dataset_module_factory( -> 1513 path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files 1514 ) 1515 /opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs) 1200 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. " 1201 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}" -> 1202 ) from None 1203 raise e1 from None 1204 else: FileNotFoundError: Couldn't find a dataset script at /kaggle/working/imagefolder/imagefolder.py or any data file in the same directory. Couldn't find 'imagefolder' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py ```
CLOSED
2022-03-09T21:18:52
2022-03-11T08:45:52
2022-03-11T08:45:52
https://github.com/huggingface/datasets/issues/3881
rozeappletree
8
[ "question" ]
3,877
Align metadata to DCAT/DCAT-AP
**Is your feature request related to a problem? Please describe.** Align the dataset metadata to DCAT to describe datasets. **Describe the solution you'd like** Reuse terms and structure from DCAT in the metadata file, ideally generating a DCAT-compliant JSON-LD file. **Describe alternatives you've considered** **Additional context** DCAT is a W3C standard, extended in Europe with DCAT-AP; for example, data.europa.eu publishes dataset metadata in DCAT-AP.
OPEN
2022-03-09T16:12:25
2022-03-09T16:33:42
null
https://github.com/huggingface/datasets/issues/3877
EmidioStani
0
[ "enhancement" ]
3,872
HTTP error 504 Server Error: Gateway Time-out
I am trying to push a large dataset (450,000+ records) with the help of `push_to_hub()`. While pushing, it gives an error like this. ``` Traceback (most recent call last): File "data_split_speech.py", line 159, in <module> data_new_2.push_to_hub("user-name/dataset-name",private=True) File "/opt/conda/lib/python3.8/site-packages/datasets/dataset_dict.py", line 951, in push_to_hub repo_id, split, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub( File "/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3556, in _push_parquet_shards_to_hub api.upload_file( File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1017, in upload_file raise err File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1008, in upload_file r.raise_for_status() File "/opt/conda/lib/python3.8/site-packages/requests/models.py", line 953, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/datasets/user-name/dataset-name/upload/main/data/train2-00041-of-00064.parquet ``` Can anyone help me resolve this issue?
CLOSED
2022-03-09T12:03:37
2022-03-15T16:19:50
2022-03-15T16:19:50
https://github.com/huggingface/datasets/issues/3872
illiyas-sha
6
[]
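One mitigation that is sometimes suggested for upload timeouts like the one in issue 3872 is pushing smaller parquet shards, so that no single HTTP request stays open as long. The argument name depends on the `datasets` release (`shard_size` in older versions, `max_shard_size` in newer ones), so treat this as a sketch rather than a guaranteed fix; the file and repository names are placeholders:

```python
from datasets import load_dataset

data = load_dataset("csv", data_files="my_large_file.csv")

# Newer releases accept a human-readable shard size:
data.push_to_hub("user-name/dataset-name", private=True, max_shard_size="200MB")

# Older releases exposed a byte count instead (roughly equivalent):
# data.push_to_hub("user-name/dataset-name", private=True, shard_size=200 * 1024 * 1024)
```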
3,869
Making the Hub the place for datasets in Portuguese
Let's make Hugging Face Datasets the central hub for datasets in Portuguese :) **Motivation**. Datasets are currently quite scattered and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the Portuguese speaking community. What are some datasets in Portuguese worth integrating into the Hugging Face hub? Special thanks to @augusnunes for his collaboration on identifying the first ones: - [NILC - USP](http://www.nilc.icmc.usp.br/nilc/index.php/tools-and-resources). Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). cc @osanseviero
OPEN
2022-03-09T03:06:18
2022-03-09T09:04:09
null
https://github.com/huggingface/datasets/issues/3869
omarespejel
1
[ "dataset request" ]
3,861
big_patent cased version
Hi! I am interested in working with the big_patent dataset. In TensorFlow, there are a number of versions of the dataset: - 1.0.0 : lower-cased tokenized words - 2.0.0 : Update to use cased raw strings - 2.1.2 (default): Fix update to cased raw strings. The version in the huggingface `datasets` library is 1.0.0. I would be very interested in using the 2.1.2 cased version (used more recently, for example in the Pegasus paper), but it does not seem to be supported (I tried using the `revision` parameter in `load_dataset`). Is there already a way to load it, or would it be possible to add that version?
CLOSED
2022-03-08T14:08:55
2023-04-21T14:32:03
2023-04-21T14:32:03
https://github.com/huggingface/datasets/issues/3861
slvcsl
2
[ "dataset request" ]
3,859
Unable to dowload big_patent (FileNotFoundError)
## Describe the bug I am trying to download some splits of the big_patent dataset, using the following code: `ds = load_dataset("big_patent", "g", split="validation", download_mode="force_redownload") ` However, this leads to a FileNotFoundError. FileNotFoundError Traceback (most recent call last) [<ipython-input-3-8d8a745706a9>](https://localhost:8080/#) in <module>() 1 from datasets import load_dataset ----> 2 ds = load_dataset("big_patent", "g", split="validation", download_mode="force_redownload") 8 frames [/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs) 1705 ignore_verifications=ignore_verifications, 1706 try_from_hf_gcs=try_from_hf_gcs, -> 1707 use_auth_token=use_auth_token, 1708 ) 1709 [/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 593 if not downloaded_from_gcs: 594 self._download_and_prepare( --> 595 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 596 ) 597 # Sync info [/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 659 split_dict = SplitDict(dataset_name=self.name) 660 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 661 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 662 663 # Checksums verification [/root/.cache/huggingface/modules/datasets_modules/datasets/big_patent/bdefa7c0b39fba8bba1c6331b70b738e30d63c8ad4567f983ce315a5fef6131c/big_patent.py](https://localhost:8080/#) in _split_generators(self, dl_manager) 123 split_types = ["train", "val", "test"] 124 extract_paths = dl_manager.extract( --> 125 {k: os.path.join(dl_path, "bigPatentData", k + ".tar.gz") for k in split_types} 126 ) 127 extract_paths = {k: os.path.join(extract_paths[k], k) for k in split_types} [/usr/local/lib/python3.7/dist-packages/datasets/utils/download_manager.py](https://localhost:8080/#) in extract(self, path_or_paths, num_proc) 282 download_config.extract_compressed_file = True 283 extracted_paths = map_nested( --> 284 partial(cached_path, download_config=download_config), path_or_paths, num_proc=num_proc, disable_tqdm=False 285 ) 286 path_or_paths = NestedDataStructure(path_or_paths) [/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm) 260 mapped = [ 261 _single_map_nested((function, obj, types, None, True)) --> 262 for obj in utils.tqdm(iterable, disable=disable_tqdm) 263 ] 264 else: [/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in <listcomp>(.0) 260 mapped = [ 261 _single_map_nested((function, obj, types, None, True)) --> 262 for obj in utils.tqdm(iterable, disable=disable_tqdm) 263 ] 264 else: [/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in _single_map_nested(args) 194 # Singleton first to spare some computation 195 if not isinstance(data_struct, dict) and not 
isinstance(data_struct, types): --> 196 return function(data_struct) 197 198 # Reduce logging to keep things readable in multiprocessing with tqdm [/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py](https://localhost:8080/#) in cached_path(url_or_filename, download_config, **download_kwargs) 314 elif is_local_path(url_or_filename): 315 # File, but it doesn't exist. --> 316 raise FileNotFoundError(f"Local file {url_or_filename} doesn't exist") 317 else: 318 # Something unknown FileNotFoundError: Local file /root/.cache/huggingface/datasets/downloads/extracted/ad068abb3e11f9f2f5440b62e37eb2b03ee515df9de1637c55cd1793b68668b2/bigPatentData/train.tar.gz doesn't exist I have tried this in a number of machines, including on Colab, so I think this is not environment dependent. How do I load the bigPatent dataset?
CLOSED
2022-03-08T11:47:12
2022-03-08T13:04:09
2022-03-08T13:04:04
https://github.com/huggingface/datasets/issues/3859
slvcsl
1
[ "bug", "duplicate" ]
3,857
Order of dataset changes due to glob.glob.
## Describe the bug After discussion with @lhoestq, I just want to mention here that `glob.glob(...)` should always be used in combination with `sorted(...)` to make sure the list of files returned by `glob.glob(...)` doesn't change depending on the OS. There are currently multiple datasets that use `glob.glob()` without making use of `sorted(...)`, and even the streaming download manager does (if I'm not mistaken): https://github.com/huggingface/datasets/blob/c14bfeb4af89da14f870de5ddaa584b08aa08eeb/src/datasets/utils/streaming_download_manager.py#L483
OPEN
2022-03-08T11:10:30
2022-03-14T11:08:22
null
https://github.com/huggingface/datasets/issues/3857
patrickvonplaten
1
[ "generic discussion" ]
3,855
Bad error message when loading private dataset
## Describe the bug A pretty common behavior of an interaction between the Hub and datasets is the following. An organization adds a dataset in private mode and wants to load it afterward. ```python from datasets import load_dataset ds = load_dataset("NewT5/dummy_data", "dummy") ``` This command then fails with: ```bash FileNotFoundError: Couldn't find a dataset script at /home/patrick/NewT5/dummy_data/dummy_data.py or any data file in the same directory. Couldn't find 'NewT5/dummy_data' on the Hugging Face Hub either: FileNotFoundError: Dataset 'NewT5/dummy_data' doesn't exist on the Hub ``` **even though** the user has access to the website `NewT5/dummy_data` since she/he is part of the org. We need to improve the error message here similar to how @sgugger, @LysandreJik and @julien-c have done it for transformers IMO. ## Steps to reproduce the bug E.g. execute the following code to see the different error messages between `transformers` and `datasets`. 1. Transformers ```python from transformers import BertModel BertModel.from_pretrained("NewT5/dummy_model") ``` The error message is clearer here - it gives: ``` OSError: patrickvonplaten/gpt2-xl is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`. ``` Let's maybe do the same for datasets? The PR was introduced to `transformers` here: https://github.com/huggingface/transformers/pull/15261 ## Expected results Better error message ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.4.dev0 - Platform: Linux-5.15.15-76051515-generic-x86_64-with-glibc2.34 - Python version: 3.9.7 - PyArrow version: 6.0.1
CLOSED
2022-03-08T09:55:17
2022-07-11T15:06:40
2022-07-11T15:06:40
https://github.com/huggingface/datasets/issues/3855
patrickvonplaten
2
[ "bug" ]
3,854
load only England English dataset from common voice english dataset
training_data = load_dataset("common_voice", "en",split='train[:250]+validation[:250]') testing_data = load_dataset("common_voice", "en", split="test[:200]") I'm trying to load only 8% of the English common voice data with accent == "England English." Can somebody assist me with this? **Typical Voice Accent Proportions:** - 24% United States English - 8% England English - 5% India and South Asia (India, Pakistan, Sri Lanka) - 3% Australian English - 3% Canadian English - 2% Scottish English - 1% Irish English - 1% Southern African (South Africa, Zimbabwe, Namibia) - 1% New Zealand English Can we replicate this for Age as well? **Age proportions of the common voice:-** - 24% 19 - 29 - 14% 30 - 39 - 10% 40 - 49 - 6% < 19 - 4% 50 - 59 - 4% 60 - 69 - 1% 70 – 79
CLOSED
2022-03-08T09:40:52
2024-03-23T12:40:58
2022-03-09T08:13:33
https://github.com/huggingface/datasets/issues/3854
amanjaiswal777
2
[ "question" ]
3,851
Load audio dataset error
## Load audio dataset error Hi, when I load audio dataset following https://huggingface.co/docs/datasets/audio_process and https://github.com/huggingface/datasets/tree/master/datasets/superb, ``` from datasets import load_dataset, load_metric, Audio raw_datasets = load_dataset("superb", "ks", split="train") print(raw_datasets[0]["audio"]) ``` following errors occur ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-169-3f8253239fa0> in <module> ----> 1 raw_datasets[0]["audio"] /usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in __getitem__(self, key) 1924 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).""" 1925 return self._getitem( -> 1926 key, 1927 ) 1928 /usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in _getitem(self, key, decoded, **kwargs) 1909 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) 1910 formatted_output = format_table( -> 1911 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns 1912 ) 1913 return formatted_output /usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns) 530 python_formatter = PythonFormatter(features=None) 531 if format_columns is None: --> 532 return formatter(pa_table, query_type=query_type) 533 elif query_type == "column": 534 if key in format_columns: /usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type) 279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]: 280 if query_type == "row": --> 281 return self.format_row(pa_table) 282 elif query_type == "column": 283 return self.format_column(pa_table) /usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_row(self, pa_table) 310 row = self.python_arrow_extractor().extract_row(pa_table) 311 if self.decoded: --> 312 row = self.python_features_decoder.decode_row(row) 313 return row 314 /usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in decode_row(self, row) 219 220 def decode_row(self, row: dict) -> dict: --> 221 return self.features.decode_example(row) if self.features else row 222 223 def decode_column(self, column: list, column_name: str) -> list: /usr/lib/python3.6/site-packages/datasets/features/features.py in decode_example(self, example) 1320 else value 1321 for column_name, (feature, value) in utils.zip_dict( -> 1322 {key: value for key, value in self.items() if key in example}, example 1323 ) 1324 } /usr/lib/python3.6/site-packages/datasets/features/features.py in <dictcomp>(.0) 1319 if self._column_requires_decoding[column_name] 1320 else value -> 1321 for column_name, (feature, value) in utils.zip_dict( 1322 {key: value for key, value in self.items() if key in example}, example 1323 ) /usr/lib/python3.6/site-packages/datasets/features/features.py in decode_nested_example(schema, obj) 1053 # Object with special decoding: 1054 elif isinstance(schema, (Audio, Image)): -> 1055 return schema.decode_example(obj) if obj is not None else None 1056 return obj 1057 /usr/lib/python3.6/site-packages/datasets/features/audio.py in decode_example(self, value) 100 array, sampling_rate = self._decode_non_mp3_file_like(file) 101 else: --> 102 array, sampling_rate = self._decode_non_mp3_path_like(path) 103 return 
{"path": path, "array": array, "sampling_rate": sampling_rate} 104 /usr/lib/python3.6/site-packages/datasets/features/audio.py in _decode_non_mp3_path_like(self, path) 143 144 with xopen(path, "rb") as f: --> 145 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono) 146 return array, sampling_rate 147 /usr/lib/python3.6/site-packages/librosa/core/audio.py in load(path, sr, mono, offset, duration, dtype, res_type) 110 111 y = [] --> 112 with audioread.audio_open(os.path.realpath(path)) as input_file: 113 sr_native = input_file.samplerate 114 n_channels = input_file.channels /usr/lib/python3.6/posixpath.py in realpath(filename) 392 """Return the canonical path of the specified filename, eliminating any 393 symbolic links encountered in the path.""" --> 394 filename = os.fspath(filename) 395 path, ok = _joinrealpath(filename[:0], filename, {}) 396 return abspath(path) TypeError: expected str, bytes or os.PathLike object, not _io.BufferedReader ``` ## Expected results ``` >>> raw_datasets[0]["audio"] {'array': array([-0.0005188 , -0.00109863, 0.00030518, ..., 0.01730347, 0.01623535, 0.01724243]), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/bb3a06b491a64aff422f307cd8116820b4f61d6f32fcadcfc554617e84383cb7/bed/026290a7_nohash_0.wav', 'sampling_rate': 16000} ```
CLOSED
2022-03-08T02:16:04
2022-09-27T12:13:55
2022-03-08T11:20:06
https://github.com/huggingface/datasets/issues/3851
lemoner20
8
[ "bug" ]
3,848
NonMatchingChecksumError when checksum is None
I ran into the following error when adding a new dataset: ```bash expected_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': None, 'num_bytes': 40662}} recorded_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': 'efb4cbd3aa4a87bfaffc310ae951981cc0a36c6c71c6425dd74e5b55f2f325c9', 'num_bytes': 40662}} verification_name = 'dataset source files' def verify_checksums(expected_checksums: Optional[dict], recorded_checksums: dict, verification_name=None): if expected_checksums is None: logger.info("Unable to verify checksums.") return if len(set(expected_checksums) - set(recorded_checksums)) > 0: raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums))) if len(set(recorded_checksums) - set(expected_checksums)) > 0: raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums))) bad_urls = [url for url in expected_checksums if expected_checksums[url] != recorded_checksums[url]] for_verification_name = " for " + verification_name if verification_name is not None else "" if len(bad_urls) > 0: error_msg = "Checksums didn't match" + for_verification_name + ":\n" > raise NonMatchingChecksumError(error_msg + str(bad_urls)) E datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: E ['https://adversarialglue.github.io/dataset/dev.zip'] src/datasets/utils/info_utils.py:40: NonMatchingChecksumError ``` ## Expected results The dataset downloads correctly, and there is no error. ## Actual results Datasets library is looking for a checksum of None, and it gets a non-None checksum, and throws an error. This is clearly a bug.
CLOSED
2022-03-08T00:24:12
2022-03-15T14:37:26
2022-03-15T12:28:23
https://github.com/huggingface/datasets/issues/3848
jxmorris12
7
[ "bug" ]
3,847
Datasets' cache not re-used
## Describe the bug For most tokenizers I have tested (e.g. the RoBERTa tokenizer), the data preprocessing caches are not fully reused in the first few runs, although their `.arrow` cache files are in the cache directory. ## Steps to reproduce the bug Here is a reproducer. The GPT2 tokenizer works perfectly with caching, but not the RoBERTa tokenizer in this example. ```python from datasets import load_dataset from transformers import AutoTokenizer raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1") # tokenizer = AutoTokenizer.from_pretrained("gpt2") tokenizer = AutoTokenizer.from_pretrained("roberta-base") text_column_name = "text" column_names = raw_datasets["train"].column_names def tokenize_function(examples): return tokenizer(examples[text_column_name], return_special_tokens_mask=True) tokenized_datasets = raw_datasets.map( tokenize_function, batched=True, remove_columns=column_names, load_from_cache_file=True, desc="Running tokenizer on every text in dataset", ) ``` ## Expected results No tokenization should be required after the 1st run. Everything should be loaded from the cache. ## Actual results Tokenization for some subsets is repeated on the 2nd and 3rd runs. Starting from the 4th run, everything is loaded from the cache. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Ubuntu 18.04.6 LTS - Python version: 3.6.9 - PyArrow version: 6.0.1
OPEN
2022-03-07T19:55:15
2025-05-19T11:58:55
null
https://github.com/huggingface/datasets/issues/3847
gejinchen
28
[ "bug" ]
3,841
Pyright reportPrivateImportUsage when `from datasets import load_dataset`
## Describe the bug Pyright complains that the module member is not exported. ## Steps to reproduce the bug Use an editor/IDE with the Pyright language server with its default configuration: ```python from datasets import load_dataset ``` ## Expected results No complaint from Pyright ## Actual results Pyright complains as below: ``` `load_dataset` is not exported from module "datasets" Import from "datasets.load" instead [reportPrivateImportUsage] ``` Importing from `datasets.load` does indeed solve the problem, but I believe importing directly from the top-level `datasets` is the intended usage per the documentation. ## Environment info - `datasets` version: 1.18.3 - Platform: macOS-12.2.1-arm64-arm-64bit - Python version: 3.9.10 - PyArrow version: 7.0.0
CLOSED
2022-03-07T10:24:04
2023-02-18T19:14:03
2023-02-13T13:48:41
https://github.com/huggingface/datasets/issues/3841
lkhphuc
6
[ "bug" ]
3,839
CI is broken for Windows
## Describe the bug See: https://app.circleci.com/pipelines/github/huggingface/datasets/10292/workflows/83de4a55-bff7-43ec-96f7-0c335af5c050/jobs/63355 ``` ___________________ test_datasetdict_from_text_split[test] ____________________ [gw0] win32 -- Python 3.7.11 C:\tools\miniconda3\envs\py37\python.exe split = 'test' text_path = 'C:\\Users\\circleci\\AppData\\Local\\Temp\\pytest-of-circleci\\pytest-0\\popen-gw0\\data6\\dataset.txt' tmp_path = WindowsPath('C:/Users/circleci/AppData/Local/Temp/pytest-of-circleci/pytest-0/popen-gw0/test_datasetdict_from_text_spl7') @pytest.mark.parametrize("split", [None, NamedSplit("train"), "train", "test"]) def test_datasetdict_from_text_split(split, text_path, tmp_path): if split: path = {split: text_path} else: split = "train" path = {"train": text_path, "test": text_path} cache_dir = tmp_path / "cache" expected_features = {"text": "string"} > dataset = TextDatasetReader(path, cache_dir=cache_dir).read() tests\io\test_text.py:118: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\io\text.py:43: in read use_auth_token=use_auth_token, C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\builder.py:588: in download_and_prepare self._download_prepared_from_hf_gcs(dl_manager.download_config) C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\builder.py:630: in _download_prepared_from_hf_gcs reader.download_from_hf_gcs(download_config, relative_data_dir) C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\arrow_reader.py:260: in download_from_hf_gcs downloaded_dataset_info = cached_path(remote_dataset_info.replace(os.sep, "/")) C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:301: in cached_path download_desc=download_config.download_desc, C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:560: in get_from_cache headers=headers, C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:476: in http_head max_retries=max_retries, C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:397: in _request_with_retry response = requests.request(method=method.upper(), url=url, timeout=timeout, **params) C:\tools\miniconda3\envs\py37\lib\site-packages\requests\api.py:61: in request return session.request(method=method, url=url, **kwargs) C:\tools\miniconda3\envs\py37\lib\site-packages\requests\sessions.py:529: in request resp = self.send(prep, **send_kwargs) C:\tools\miniconda3\envs\py37\lib\site-packages\requests\sessions.py:645: in send r = adapter.send(request, **kwargs) C:\tools\miniconda3\envs\py37\lib\site-packages\responses\__init__.py:840: in unbound_on_send return self._on_request(adapter, request, *a, **kwargs) C:\tools\miniconda3\envs\py37\lib\site-packages\responses\__init__.py:780: in _on_request match, match_failed_reasons = self._find_match(request) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <responses.RequestsMock object at 0x000002048AD70588> request = <PreparedRequest [HEAD]> def _find_first_match(self, request): match_failed_reasons = [] > for i, match in enumerate(self._matches): E AttributeError: 'RequestsMock' object has no attribute '_matches' C:\tools\miniconda3\envs\py37\lib\site-packages\moto\core\models.py:289: AttributeError ```
CLOSED
2022-03-07T10:06:42
2022-05-20T14:13:43
2022-03-07T10:07:24
https://github.com/huggingface/datasets/issues/3839
albertvillanova
0
[ "bug" ]
3,838
Add a data type for labeled images (image segmentation)
It might be a mix of Image and ClassLabel, and the color palette might be generated automatically. --- ### Example every pixel in the images of the annotation column (in https://huggingface.co/datasets/scene_parse_150) has a value that gives its class, and the dataset itself is associated with a color palette (eg https://github.com/open-mmlab/mmsegmentation/blob/98a353b674c6052d319e7de4e5bcd65d670fcf84/mmseg/datasets/ade.py#L47) that maps every class with a color. So we might want to render the image as a colored image instead of a black and white one. <img width="785" alt="156741519-fbae6844-2606-4c28-837e-279d83d00865" src="https://user-images.githubusercontent.com/1676121/157005263-7058c584-2b70-465a-ad94-8a982f726cf4.png"> See https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/core/features/labeled_image.py for reference in Tensorflow
OPEN
2022-03-07T09:38:15
2025-11-28T10:58:23
null
https://github.com/huggingface/datasets/issues/3838
severo
1
[ "enhancement" ]
3,835
The link given on the gigaword does not work
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
CLOSED
2022-03-07T07:56:42
2022-03-15T12:30:23
2022-03-15T12:30:23
https://github.com/huggingface/datasets/issues/3835
martin6336
0
[ "bug" ]
3,832
Making Hugging Face the place to go for Graph NNs datasets
Let's make Hugging Face Datasets the central hub for GNN datasets :) **Motivation**. Datasets are currently quite scattered and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the GNN field. What are some datasets worth integrating into the Hugging Face hub? Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Special thanks to @napoles-uach for his collaboration on identifying the first ones: - [ ] [SNAP-Stanford OGB Datasets](https://github.com/snap-stanford/ogb). - [ ] [SNAP-Stanford Pretrained GNNs Chemistry and Biology Datasets](https://github.com/snap-stanford/pretrain-gnns). - [ ] [TUDatasets](https://chrsmrrs.github.io/datasets/) (A collection of benchmark datasets for graph classification and regression) cc @osanseviero
OPEN
2022-03-06T03:02:58
2022-03-14T07:45:38
null
https://github.com/huggingface/datasets/issues/3832
omarespejel
4
[ "dataset request", "graph" ]
3,831
when using to_tf_dataset with shuffle is true, not all completed batches are made
## Describe the bug When converting a dataset to a TF dataset using `to_tf_dataset` with `shuffle=True`, the remainder is not returned as a final batch. ## Steps to reproduce the bug Sample code is here: https://colab.research.google.com/drive/1_oRXWsR38ElO1EYF9ayFoCU7Ou1AAej4?usp=sharing ## Expected results Regardless of whether shuffle is true or not, a 67-row dataset should yield 5 batches when the batch size is 16. ## Actual results 4 batches ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 6.0.1
CLOSED
2022-03-06T02:43:50
2022-03-08T15:18:56
2022-03-08T15:18:56
https://github.com/huggingface/datasets/issues/3831
greenned
4
[ "bug" ]
3,830
Got error when load cnn_dailymail dataset
When using the datasets.load_dataset method to load the cnn_dailymail dataset, I got the errors below: - Windows: FileNotFoundError: [WinError 3] 系统找不到指定的路径。 (The system cannot find the path specified.): 'D:\\SourceCode\\DataScience\\HuggingFace\\Data\\downloads\\1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b\\cnn\\stories' - Google Colab: NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' The code used to load the dataset: Windows: ``` from datasets import load_dataset dataset = load_dataset("cnn_dailymail", "3.0.0", cache_dir="D:\\SourceCode\\DataScience\\HuggingFace\\Data") ``` Google Colab: ``` import datasets train_data = datasets.load_dataset("cnn_dailymail", "3.0.0", split="train") ```
CLOSED
2022-03-05T01:43:12
2022-03-07T06:53:41
2022-03-07T06:53:41
https://github.com/huggingface/datasets/issues/3830
wgong0510
2
[ "duplicate" ]
3,829
[📄 Docs] Create a `datasets` performance guide.
## Brief Overview Downloading, saving, and preprocessing large datasets from the `datasets` library can often result in [performance bottlenecks](https://github.com/huggingface/datasets/issues/3735). These performance snags can be challenging to identify and to debug, especially for users who are less experienced with building deep learning experiments. ## Feature Request Could we create a performance guide for using `datasets`, similar to: * [Better performance with the `tf.data` API](https://www.tensorflow.org/guide/data_performance) * [Analyze `tf.data` performance with the TF Profiler](https://www.tensorflow.org/guide/data_performance_analysis) This performance guide should detail practical options for improving performance with `datasets`, and enumerate any common best practices. It should also show how to use tools like the PyTorch Profiler or the TF Profiler to identify any performance bottlenecks (example below). ![image](https://user-images.githubusercontent.com/3712347/156859152-a3cb9565-3ec6-4d39-8e77-56d0a75a4954.png) ## Related Issues * [wiki_dpr pre-processing performance #1670](https://github.com/huggingface/datasets/issues/1670) * [Adjusting chunk size for streaming datasets #3499](https://github.com/huggingface/datasets/issues/3499) * [how large datasets are handled under the hood #1004](https://github.com/huggingface/datasets/issues/1004) * [using map on loaded Tokenizer 10x - 100x slower than default Tokenizer? #1830](https://github.com/huggingface/datasets/issues/1830) * [Best way to batch a large dataset? #315](https://github.com/huggingface/datasets/issues/315) * [Saving processed dataset running infinitely #1911](https://github.com/huggingface/datasets/issues/1911)
OPEN
2022-03-05T00:28:06
2022-03-10T16:24:27
null
https://github.com/huggingface/datasets/issues/3829
dynamicwebpaige
1
[ "enhancement" ]
3,828
The Pile's _FEATURE spec seems to be incorrect
## Describe the bug If you look at https://huggingface.co/datasets/the_pile/blob/main/the_pile.py: For "all" * the pile_set_name is never set for data * there's actually an id field inside of "meta" For subcorpora pubmed_central and hacker_news: * the meta is specified to be a string, but it's actually a dict with an id field inside. ## Steps to reproduce the bug ## Expected results Feature spec should match the data I'd think? ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: - Python version: - PyArrow version:
CLOSED
2022-03-04T21:25:32
2022-03-08T09:30:49
2022-03-08T09:30:48
https://github.com/huggingface/datasets/issues/3828
dlwh
1
[ "bug" ]
3,823
500 internal server error when trying to open a dataset composed of Zarr stores
## Describe the bug The dataset [openclimatefix/mrms](https://huggingface.co/datasets/openclimatefix/mrms) gives a 500 server error when trying to open it on the website, or through code. The dataset doesn't have a loading script yet, and I did push two [xarray](https://docs.xarray.dev/en/stable/) Zarr stores of data there recentlyish. The Zarr stores are composed of lots of small files, which I am guessing is probably the problem, as we have another [OCF dataset](https://huggingface.co/datasets/openclimatefix/eumetsat_uk_hrv) using xarray and Zarr, but with the Zarr stored on GCP public datasets instead of directly in HF datasets, and that one opens fine. In general, we were hoping to use HF datasets to release some more public geospatial datasets as benchmarks, which are commonly stored as Zarr stores as they can be compressed well and deal with the multi-dimensional data and coordinates fairly easily compared to other formats, but with this error, I'm assuming we should try a different format? For context, we are trying to have complete public model+data reimplementations of some SOTA weather and solar nowcasting models, like [MetNet, MetNet-2,](https://github.com/openclimatefix/metnet) [DGMR](https://github.com/openclimatefix/skillful_nowcasting), and [others](https://github.com/openclimatefix/graph_weather), which all have large, complex datasets. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("openclimatefix/mrms") ``` ## Expected results The dataset should be downloaded or open up ## Actual results A 500 internal server error ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-5.15.25-1-MANJARO-x86_64-with-glibc2.35 - Python version: 3.9.10 - PyArrow version: 7.0.0
CLOSED
2022-03-04T10:37:14
2022-03-08T09:47:39
2022-03-08T09:47:39
https://github.com/huggingface/datasets/issues/3823
jacobbieker
4
[ "bug" ]
3,822
Add Biwi Kinect Head Pose Database
## Adding a Dataset - **Name:** Biwi Kinect Head Pose Database - **Description:** Over 15K images of 20 people recorded with a Kinect while turning their heads around freely. For each frame, depth and RGB images are provided, together with ground truth in the form of the 3D location of the head and its rotation angles. - **Data:** [*link to the Github repository or current dataset location*](https://icu.ee.ethz.ch/research/datsets.html) - **Motivation:** Useful pose estimation dataset Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
CLOSED
2022-03-04T08:48:39
2025-04-07T13:04:25
2022-06-01T13:00:47
https://github.com/huggingface/datasets/issues/3822
osanseviero
10
[ "dataset request", "vision" ]
3,820
`pubmed_qa` checksum mismatch
## Describe the bug Loading [`pubmed_qa`](https://huggingface.co/datasets/pubmed_qa) results in a mismatched checksum error. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug import datasets try: datasets.load_dataset("pubmed_qa", "pqa_labeled") except Exception as e: print(e) try: datasets.load_dataset("pubmed_qa", "pqa_unlabeled") except Exception as e: print(e) try: datasets.load_dataset("pubmed_qa", "pqa_artificial") except Exception as e: print(e) ``` ## Expected results Successful download. ## Actual results Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 665, in _download_and_prepare verify_checksums( File "/usr/local/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=1RsGLINVce-0GsDkCLDuLZmoLuzfmoCuQ', 'https://drive.google.com/uc?export=download&id=15v1x6aQDlZymaHGP7cZJZZYFfeJt2NdS'] ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: macOS - Python version: 3.8.1 - PyArrow version: 3.0.0
CLOSED
2022-03-04T00:28:08
2022-03-04T09:42:32
2022-03-04T09:42:32
https://github.com/huggingface/datasets/issues/3820
jon-tow
1
[ "bug", "duplicate" ]
3,818
Support for "sources" parameter in the add() and add_batch() methods in datasets.metric - SARI
**Is your feature request related to a problem? Please describe.** The methods `add_batch` and `add` from the `Metric` [class](https://github.com/huggingface/datasets/blob/1675ad6a958435b675a849eafa8a7f10fe0f43bc/src/datasets/metric.py) does not work with [SARI](https://github.com/huggingface/datasets/blob/master/metrics/sari/sari.py) metric. This metric not only relies on the predictions and references, but also in the input. For example, when the `add_batch` method is used, then the `compute()` method fails: ``` metric = load_metric("sari") metric.add_batch( predictions=["About 95 you now get in ."], references=[["About 95 species are currently known .","About 95 species are now accepted .","95 species are now accepted ."]]) metric.compute() > TypeError: _compute() missing 1 required positional argument: 'sources' ``` Therefore, the `compute() `method can only be used standalone: ``` metric = load_metric("sari") result = metric.compute( sources=["About 95 species are currently accepted ."], predictions=["About 95 you now get in ."], references=[["About 95 species are currently known .","About 95 species are now accepted .","95 species are now accepted ."]]) > {'sari': 26.953601953601954} ``` **Describe the solution you'd like** Support for an additional parameter `sources` in the `add_batch` and `add` of the `Metric` class. ``` add_batch(*, sources=None, predictions=None, references=None, **kwargs) add(*, sources=None, predictions=None, references=None, **kwargs) compute() ``` **Describe alternatives you've considered** I've tried to override the `add_batch` and `add`, however, these are highly dependent to the `Metric` class. We could also write a simple function that compute the scores of a sentences list, but then we lose the functionality from the original [add](https://huggingface.co/docs/datasets/_modules/datasets/metric.html#Metric.add) and [add_batch method](https://huggingface.co/docs/datasets/_modules/datasets/metric.html#Metric.add_batch). **Additional context** These methods are used in the transformers [pytorch examples](https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization_no_trainer.py).
CLOSED
2022-03-03T18:57:54
2022-03-04T18:04:21
2022-03-04T18:04:21
https://github.com/huggingface/datasets/issues/3818
lmvasque
3
[ "enhancement" ]
3,813
Add MetaShift dataset
## Adding a Dataset - **Name:** MetaShift - **Description:** collection of 12,868 sets of natural images across 410 classes- - **Paper:** https://arxiv.org/abs/2202.06523v1 - **Data:** https://github.com/weixin-liang/metashift Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
CLOSED
2022-03-03T14:26:45
2022-04-10T13:39:59
2022-04-10T13:39:59
https://github.com/huggingface/datasets/issues/3813
osanseviero
7
[ "dataset request", "vision" ]
3,809
Checksums didn't match for datasets on Google Drive
## Describe the bug Datasets hosted on Google Drive do not seem to work right now. Loading them fails with a checksum error. ## Steps to reproduce the bug ```python from datasets import load_dataset for dataset in ["head_qa", "yelp_review_full"]: try: load_dataset(dataset) except Exception as exception: print("Error", dataset, exception) ``` Here is a [colab](https://colab.research.google.com/drive/1wOtHBmL8I65NmUYakzPV5zhVCtHhi7uQ#scrollTo=cDzdCLlk-Bo4). ## Expected results The datasets should be loaded. ## Actual results ``` Downloading and preparing dataset head_qa/es (download: 75.69 MiB, generated: 2.86 MiB, post-processed: Unknown size, total: 78.55 MiB) to /root/.cache/huggingface/datasets/head_qa/es/1.1.0/583ab408e8baf54aab378c93715fadc4d8aa51b393e27c3484a877e2ac0278e9... Error head_qa Checksums didn't match for dataset source files: ['https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t'] Downloading and preparing dataset yelp_review_full/yelp_review_full (download: 187.06 MiB, generated: 496.94 MiB, post-processed: Unknown size, total: 684.00 MiB) to /root/.cache/huggingface/datasets/yelp_review_full/yelp_review_full/1.0.0/13c31a618ba62568ec8572a222a283dfc29a6517776a3ac5945fb508877dde43... Error yelp_review_full Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbZlU4dXhHTFhZQU0'] ``` ## Environment info - `datasets` version: 1.18.3 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 6.0.1
CLOSED
2022-03-03T09:01:10
2022-03-03T09:24:58
2022-03-03T09:24:05
https://github.com/huggingface/datasets/issues/3809
muelletm
1
[ "bug", "duplicate" ]
3,808
Pre-Processing Cache Fails when using a Factory pattern
## Describe the bug If you utilize a pre-processing function which is created using a factory pattern, the function hash changes on each run (even if the function is identical) and therefore the data will be reproduced each time. ## Steps to reproduce the bug ```python def preprocess_function_factory(augmentation=None): def preprocess_function(examples): # Tokenize the texts if augmentation: conversions1 = [ augmentation(example) for example in examples[sentence1_key] ] if sentence2_key is None: args = (conversions1,) else: conversions2 = [ augmentation(example) for example in examples[sentence2_key] ] args = (conversions1, conversions2) else: args = ( (examples[sentence1_key],) if sentence2_key is None else (examples[sentence1_key], examples[sentence2_key]) ) result = tokenizer( *args, padding=padding, max_length=max_seq_length, truncation=True ) # Map labels to IDs (not necessary for GLUE tasks) if label_to_id is not None and "label" in examples: result["label"] = [ (label_to_id[l] if l != -1 else -1) for l in examples["label"] ] return result return preprocess_function capitalize = lambda x: x.capitalize() preprocess_function = preprocess_function_factory(augmentation=capitalize) print(hash(preprocess_function)) # This will change on each run raw_datasets = raw_datasets.map( preprocess_function, batched=True, load_from_cache_file=True, desc="Running transformation and tokenizer on dataset", ) ``` ## Expected results Running the code twice will cause the cache to be re-used. ## Actual results Running the code twice causes the whole dataset to be re-processed
CLOSED
2022-03-02T20:18:43
2022-03-10T23:01:47
2022-03-10T23:01:47
https://github.com/huggingface/datasets/issues/3808
Helw150
3
[ "bug" ]
3,807
NonMatchingChecksumError in xcopa dataset
## Describe the bug Loading the xcopa dataset doesn't work, it fails due to a mismatch in the checksum. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("xcopa", "it") ``` ## Expected results The dataset should be loaded correctly. ## Actual results Fails with: ```python in verify_checksums(expected_checksums, recorded_checksums, verification_name) 38 if len(bad_urls) > 0: 39 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 41 logger.info("All the checksums matched successfully" + for_verification_name) 42 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://github.com/cambridgeltl/xcopa/archive/master.zip'] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3, and 1.18.4.dev0 - Platform: - Python version: 3.8 - PyArrow version:
CLOSED
2022-03-02T18:10:19
2022-05-20T06:00:42
2022-03-03T17:40:31
https://github.com/huggingface/datasets/issues/3807
afcruzs-ms
6
[ "bug" ]
3,804
Text builder with custom separator line boundaries
**Is your feature request related to a problem? Please describe.** The current [Text](https://github.com/huggingface/datasets/blob/207be676bffe9d164740a41a883af6125edef135/src/datasets/packaged_modules/text/text.py#L23) builder implementation splits texts with `splitlines()` which splits the text on several line boundaries. Not all of them are always wanted. **Describe the solution you'd like** ```python if self.config.sample_by == "line": batch_idx = 0 while True: batch = f.read(self.config.chunksize) if not batch: break batch += f.readline() # finish current line if self.config.custom_newline is None: batch = batch.splitlines(keepends=self.config.keep_linebreaks) else: batch = batch.split(self.config.custom_newline)[:-1] pa_table = pa.Table.from_arrays([pa.array(batch)], schema=schema) # Uncomment for debugging (will print the Arrow table size and elements) # logger.warning(f"pa_table: {pa_table} num rows: {pa_table.num_rows}") # logger.warning('\n'.join(str(pa_table.slice(i, 1).to_pydict()) for i in range(pa_table.num_rows))) yield (file_idx, batch_idx), pa_table batch_idx += 1 ``` **A clear and concise description of what you want to happen.** Creating the dataset rows with a subset of the `splitlines()` line boundaries.
OPEN
2022-03-02T14:50:16
2022-03-16T15:53:59
null
https://github.com/huggingface/datasets/issues/3804
cronoik
6
[ "enhancement" ]
3,795
can not flatten natural_questions dataset
## Describe the bug after downloading the natural_questions dataset, can not flatten the dataset considering there are `long answer` and `short answer` in `annotations`. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('natural_questions',cache_dir = 'data/dataset_cache_dir') dataset['train'].flatten() ``` ## Expected results a dataset with `long_answer` as features ## Actual results Traceback (most recent call last): File "temp.py", line 5, in <module> dataset['train'].flatten() File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/fingerprint.py", line 413, in wrapper out = func(self, *args, **kwargs) File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1296, in flatten dataset._data = update_metadata_with_features(dataset._data, dataset.features) File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 536, in update_metadata_with_features features = Features({col_name: features[col_name] for col_name in table.column_names}) File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 536, in <dictcomp> features = Features({col_name: features[col_name] for col_name in table.column_names}) KeyError: 'annotations.long_answer' ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.13 - Platform: MBP - Python version: 3.8 - PyArrow version: 6.0.1
CLOSED
2022-02-27T13:57:40
2022-03-21T14:36:12
2022-03-21T14:36:12
https://github.com/huggingface/datasets/issues/3795
Hannibal046
2
[ "bug" ]
3,792
Checksums didn't match for dataset source
## Dataset viewer issue for 'wiki_lingua*' **Link:** *link to the dataset viewer page* `data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]") ` *short description of the issue* ``` [NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=11wMGqNVSwwk6zUnDaJEgm3qT71kAHeff']]() ``` Am I the one who added this dataset ? No
CLOSED
2022-02-25T19:55:09
2024-03-13T12:25:08
2022-02-28T08:44:18
https://github.com/huggingface/datasets/issues/3792
rafikg
26
[ "dataset-viewer" ]
3,788
Only-data dataset loaded unexpectedly as validation split
## Describe the bug As reported by @thomasw21 and @lhoestq, a dataset containing only a data file whose name matches the pattern `*dev*` will be returned as VALIDATION split, even if this is not the desired behavior, e.g. a file named `datosdevision.jsonl.gz`.
OPEN
2022-02-25T12:11:39
2022-02-28T11:22:22
null
https://github.com/huggingface/datasets/issues/3788
albertvillanova
7
[ "bug" ]
3,786
Bug downloading Virus scan warning page from Google Drive URLs
## Describe the bug Recently, some issues were reported with URLs from Google Drive, where we were downloading the Virus scan warning page instead of the data file itself. See: - #3758 - #3773 - #3784
CLOSED
2022-02-25T09:32:23
2022-03-03T09:25:59
2022-02-25T11:56:35
https://github.com/huggingface/datasets/issues/3786
albertvillanova
1
[ "bug" ]
3,784
Unable to Download CNN-Dailymail Dataset
## Describe the bug I am unable to download the CNN-Dailymail dataset. Upon closer investigation, I realised why this was happening: - The dataset sits in Google Drive, and both the CNN and DM datasets are large. - Google is unable to scan the folder for viruses, **so the link which would originally download the dataset, now downloads the source code of this web page:** ![image](https://user-images.githubusercontent.com/58678541/155658435-c2f497d7-7601-4332-94b1-18a62dd96422.png) - **This leads to the following error**: ```python NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` ## Steps to reproduce the bug ```python import datasets dataset = datasets.load_dataset("cnn_dailymail", "3.0.0", split="train") ``` ## Expected results That the dataset is downloaded and processed just like other datasets. ## Actual results Hit with this error: ```python NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 6.0.1
CLOSED
2022-02-25T05:24:47
2022-03-03T14:05:17
2022-03-03T14:05:17
https://github.com/huggingface/datasets/issues/3784
AngadSethi
4
[ "bug" ]
3,778
Not be able to download dataset - "Newsroom"
Hello, I tried to download the **newsroom** dataset but it didn't work out for me. It told me to **download it manually**! The manual download link didn't work either; it shows some ad or something! If anybody has solved this issue, please help me out, or if somebody has this dataset, please share your Google Drive link; it would be a great help! Thanks Darshan Tank
CLOSED
2022-02-23T10:15:50
2022-02-23T17:05:04
2022-02-23T13:26:40
https://github.com/huggingface/datasets/issues/3778
Darshan2104
2
[ "dataset bug" ]
3,776
Allow download only some files from the Wikipedia dataset
**Is your feature request related to a problem? Please describe.** The Wikipedia dataset can be really big. This is a problem if you want to use it locally on a laptop with the Apache Beam `DirectRunner`, even if your laptop has a considerable amount of memory (e.g. 32gb). **Describe the solution you'd like** I would like to use the `data_files` argument in the `load_dataset` function to define which file in the wikipedia dataset I would like to download. Thus, I can work with the dataset on a smaller machine using the Apache Beam `DirectRunner`. **Describe alternatives you've considered** I've tried to use the `simple` Wikipedia dataset, but it's in English and I would like to use Portuguese texts in my model.
OPEN
2022-02-22T13:46:41
2022-02-22T14:50:02
null
https://github.com/huggingface/datasets/issues/3776
jvanz
1
[ "enhancement" ]
3,773
Checksum mismatch for the reddit_tifu dataset
## Describe the bug A checksum mismatch error occurs when downloading the reddit_tifu data (both long & short). ## Steps to reproduce the bug reddit_tifu_dataset = load_dataset('reddit_tifu', 'long') ## Expected results The expected result is for the dataset to be downloaded and cached locally. ## Actual results File "/.../lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=1ffWfITKFMJeqjT8loC8aiCLRNJpc_XnF'] ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow version: 7.0.0
CLOSED
2022-02-22T10:57:07
2022-02-25T19:27:49
2022-02-22T12:38:44
https://github.com/huggingface/datasets/issues/3773
anna-kay
4
[ "bug" ]
3,770
DuplicatedKeysError on msr_sqa dataset
### Describe the bug Failure to generate dataset msr_sqa because of duplicate keys. ### Steps to reproduce the bug ``` from datasets import load_dataset load_dataset("msr_sqa") ``` ### Expected results The examples keys should be unique. **Actual results** ``` >>> load_dataset("msr_sqa") Downloading: 6.72k/? [00:00<00:00, 148kB/s] Downloading: 2.93k/? [00:00<00:00, 53.8kB/s] Using custom data configuration default Downloading and preparing dataset msr_sqa/default (download: 4.57 MiB, generated: 26.25 MiB, post-processed: Unknown size, total: 30.83 MiB) to /root/.cache/huggingface/datasets/msr_sqa/default/0.0.0/70b2a497bd3cc8fc960a3557d2bad1eac5edde824505e15c9c8ebe4c260fd4d1... Downloading: 100% 4.80M/4.80M [00:00<00:00, 7.49MB/s] --------------------------------------------------------------------------- DuplicatedKeysError Traceback (most recent call last) [/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split(self, split_generator) 1080 example = self.info.features.encode_example(record) -> 1081 writer.write(example, key) 1082 finally: 8 frames DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: nt-639 Keys should be unique and deterministic in nature During handling of the above exception, another exception occurred: DuplicatedKeysError Traceback (most recent call last) [/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in check_duplicate_keys(self) 449 for hash, key in self.hkey_record: 450 if hash in tmp_record: --> 451 raise DuplicatedKeysError(key) 452 else: 453 tmp_record.add(hash) DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: nt-639 Keys should be unique and deterministic in nature ``` ### Environment info datasets version: 1.18.3 Platform: Google colab notebook Python version: 3.7 PyArrow version: 6.0.1
CLOSED
2022-02-22T00:43:33
2022-02-22T08:12:39
2022-02-22T08:12:39
https://github.com/huggingface/datasets/issues/3770
kolk
1
[]
3,769
`dataset = dataset.map()` causes faiss index lost
## Describe the bug Assigning the resulting dataset back to the original dataset causes the loss of the faiss index ## Steps to reproduce the bug `my_dataset` is a regular loaded dataset. It's part of a custom dataset structure ```python self.dataset.add_faiss_index('embeddings') self.dataset.list_indexes() # ['embeddings'] dataset2 = my_dataset.map( lambda x: self._get_nearest_examples_batch(x['text']), batch=True ) # the unexpected result: dataset2.list_indexes() # [] self.dataset.list_indexes() # ['embeddings'] ``` in case something is wrong with my `_get_nearest_examples_batch()`, it looks like this ```python def _get_nearest_examples_batch(self, examples, k=5): queries = embed(examples) scores_batch, retrievals_batch = self.dataset.get_nearest_examples_batch(self.faiss_column, queries, k) return { 'neighbors': [batch['text'] for batch in retrievals_batch], 'scores': scores_batch } ``` ## Expected results `map` shouldn't drop the indexes; in other words, indexes should be carried over to the generated dataset ## Actual results map drops the indexes ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Ubuntu 20.04.3 LTS - Python version: 3.8.12 - PyArrow version: 7.0.0
OPEN
2022-02-21T21:59:23
2022-06-27T14:56:29
null
https://github.com/huggingface/datasets/issues/3769
Oaklight
3
[ "bug" ]
3,764
!
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
CLOSED
2022-02-20T19:05:43
2022-02-21T08:55:58
2022-02-21T08:55:58
https://github.com/huggingface/datasets/issues/3764
LesiaFedorenko
0
[ "dataset-viewer" ]
3,763
It's not possible download `20200501.pt` dataset
## Describe the bug The dataset `20200501.pt` is broken. The available datasets: https://dumps.wikimedia.org/ptwiki/ ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("wikipedia", "20200501.pt", beam_runner='DirectRunner') ``` ## Expected results I expect to download the dataset locally. ## Actual results ``` >>> from datasets import load_dataset >>> dataset = load_dataset("wikipedia", "20200501.pt", beam_runner='DirectRunner') Downloading and preparing dataset wikipedia/20200501.pt to /home/jvanz/.cache/huggingface/datasets/wikipedia/20200501.pt/1.0.0/009f923d9b6dd00c00c8cdc7f408f2b47f45dd4f5fb7982a21f9448f4afbe475... /home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/apache_beam/__init__.py:79: UserWarning: This version of Apache Beam has not been sufficiently tested on Python 3.9. You may encounter bugs or missing features. warnings.warn( 0%| | 0/1 [00:00<?, ?it/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset builder_instance.download_and_prepare( File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare self._download_and_prepare( File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 1245, in _download_and_prepare super()._download_and_prepare( File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 661, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/jvanz/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/009f923d9b6dd00c00c8cdc7f408f2b47f45dd4f5fb7982a21f9448f4afbe475/wikipedia.py", line 420, in _split_generators downloaded_files = dl_manager.download_and_extract({"info": info_url}) File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 307, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 195, in download downloaded_path_or_paths = map_nested( File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 260, in map_nested mapped = [ File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 261, in <listcomp> _single_map_nested((function, obj, types, None, True)) File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 196, in _single_map_nested return function(data_struct) File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 216, in _download return cached_path(url_or_filename, download_config=download_config) File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 298, in cached_path output_path = get_from_cache( File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 612, in get_from_cache raise FileNotFoundError(f"Couldn't find file at {url}") FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/ptwiki/20200501/dumpstatus.json ``` ## Environment info ``` - `datasets` version: 1.18.3 - Platform: Linux-5.3.18-150300.59.49-default-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow version: 6.0.1 ```
CLOSED
2022-02-20T18:34:58
2022-02-21T12:06:12
2022-02-21T09:25:06
https://github.com/huggingface/datasets/issues/3763
jvanz
2
[ "bug" ]
3,762
`Dataset.class_encode` should support custom class names
I can make a PR, just wanted approval before starting. **Is your feature request related to a problem? Please describe.** It is often the case that classes are not in alphabetical order. The current `class_encode_column` sorts the classes before indexing. https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L1235 **Describe the solution you'd like** I would like to add an **optional** parameter `class_names` to `class_encode_column` that would be used for the mapping instead of sorting the unique values. **Describe alternatives you've considered** One can use map instead. I find it harder to read. ```python CLASS_NAMES = ['apple', 'orange', 'potato'] ds = ds.map(lambda item: CLASS_NAMES.index(item[label_column])) # Proposition ds = ds.class_encode_column(label_column, CLASS_NAMES) ``` **Additional context** I can make the PR if this feature is accepted.
CLOSED
2022-02-19T21:21:45
2022-02-21T12:16:35
2022-02-21T12:16:35
https://github.com/huggingface/datasets/issues/3762
Dref360
3
[ "enhancement" ]
3,761
Know your data for HF hub
**Is your feature request related to a problem? Please describe.** It would be great to be able to understand datasets, with the goal of improving data quality and helping mitigate fairness and bias issues. **Describe the solution you'd like** Something like https://knowyourdata.withgoogle.com/ for the HF hub
CLOSED
2022-02-19T19:48:47
2022-02-21T14:15:23
2022-02-21T14:15:23
https://github.com/huggingface/datasets/issues/3761
Muhtasham
1
[ "enhancement" ]
3,760
Unable to view the Gradio flagged call back dataset
## Dataset viewer issue for '*savtadepth-flags*' **Link:** *[savtadepth-flags](https://huggingface.co/datasets/kingabzpro/savtadepth-flags)* *With Gradio 2.8.1 the dataset viewer stopped working. I tried to add values manually but it's not working. The dataset is also not showing the link with the app https://huggingface.co/spaces/kingabzpro/savtadepth.* Am I the one who added this dataset ? Yes
CLOSED
2022-02-19T17:45:08
2022-03-22T07:12:11
2022-03-22T07:12:11
https://github.com/huggingface/datasets/issues/3760
kingabzpro
5
[ "dataset-viewer" ]
3,758
head_qa file missing
## Describe the bug A file for the `head_qa` dataset is missing (https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t/HEAD_EN/train_HEAD_EN.json) ## Steps to reproduce the bug ```python >>> from datasets import load_dataset >>> load_dataset("head_qa", name="en") ``` ## Expected results The dataset should be loaded ## Actual results ``` Downloading and preparing dataset head_qa/en (download: 75.69 MiB, generated: 2.69 MiB, post-processed: Unknown size, total: 78.38 MiB) to /home/slesage/.cache/huggingface/datasets/head_qa/en/1.1.0/583ab408e8baf54aab378c93715fadc4d8aa51b393e27c3484a877e2ac0278e9... Downloading data: 2.21kB [00:00, 2.05MB/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/load.py", line 1729, in load_dataset builder_instance.download_and_prepare( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare self._download_and_prepare( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py", line 665, in _download_and_prepare verify_checksums( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t'] ``` ## Environment info - `datasets` version: 1.18.4.dev0 - Platform: Linux-5.11.0-1028-aws-x86_64-with-glibc2.31 - Python version: 3.9.6 - PyArrow version: 6.0.1
CLOSED
2022-02-18T16:32:43
2022-02-28T14:29:18
2022-02-21T14:39:19
https://github.com/huggingface/datasets/issues/3758
severo
2
[ "bug" ]
3,756
Images get decoded when using `map()` with `input_columns` argument on a dataset
## Describe the bug The `datasets.features.Image` feature class decodes image data by default. Expectedly, when indexing a dataset or using the `map()` method, images are returned as PIL Image instances. However, when calling `map()` and setting a specific data column with the `input_columns` argument, the image data is passed as raw byte representation to the mapping function. ## Steps to reproduce the bug ```python from datasets import load_dataset from torchvision import transforms from PIL.Image import Image dataset = load_dataset('mnist', split='train') def transform_all_columns(example): # example['image'] is encoded as PIL Image assert isinstance(example['image'], Image) return example def transform_image_column(image): # image is decoded here and represented as raw bytes assert isinstance(image, Image) return image # single-sample dataset for debugging purposes dev = dataset.select([0]) dev.map(transform_all_columns) dev.map(transform_image_column, input_columns='image') ``` ## Expected results Image data should be passed in decoded form, i.e. as PIL Image objects to the mapping function unless the `decode` attribute on the image feature is set to `False`. ## Actual results The mapping function receives images as raw byte data. ## Environment info - `datasets` version: 1.18.3 - Platform: Linux-5.11.0-49-generic-x86_64-with-glibc2.32 - Python version: 3.8.0b4 - PyArrow version: 7.0.0
CLOSED
2022-02-18T15:35:38
2022-12-13T16:59:06
2022-12-13T16:59:06
https://github.com/huggingface/datasets/issues/3756
kklemon
2
[ "bug" ]
3,755
Cannot preview dataset
## Dataset viewer issue for '*rubrix/news*' **Link:https://huggingface.co/datasets/rubrix/news** *link to the dataset viewer page* Cannot see the dataset preview: ``` Status code: 400 Exception: Status400Error Message: Not found. Cache is waiting to be refreshed. ``` Am I the one who added this dataset ? No
CLOSED
2022-02-18T13:06:45
2022-02-19T14:30:28
2022-02-18T15:41:33
https://github.com/huggingface/datasets/issues/3755
frascuchon
3
[ "dataset-viewer" ]
3,754
Overflowing indices in `select`
## Describe the bug The `Dataset.select` function seems to accept indices that are larger than the dataset size and seems to effectively use `index % len(ds)`. ## Steps to reproduce the bug ```python from datasets import Dataset ds = Dataset.from_dict({"test": [1,2,3]}) ds = ds.select(range(5)) print(ds) print() print(ds["test"]) ``` Result: ```python Dataset({ features: ['test'], num_rows: 5 }) [1, 2, 3, 1, 2] ``` This behaviour is not documented and can lead to unexpected behaviour when, for example, taking a sample larger than the dataset and thus creating a lot of duplicates. ## Expected results I think this should throw an error or at least a very big warning: ```python IndexError: Invalid key: 5 is out of bounds for size 3 ``` ## Environment info - `datasets` version: 1.18.3 - Platform: macOS-12.0.1-x86_64-i386-64bit - Python version: 3.9.10 - PyArrow version: 7.0.0
CLOSED
2022-02-18T11:30:52
2022-02-18T11:38:23
2022-02-18T11:38:23
https://github.com/huggingface/datasets/issues/3754
lvwerra
2
[ "bug" ]
3,753
Expanding streaming capabilities
Some ideas for a few features that could be useful when working with large datasets in streaming mode. ## `filter` for `IterableDataset` Adding filtering to streaming datasets would be useful in several scenarios: - filter a dataset with many languages for a subset of languages - filter a dataset for specific licenses - other custom logic to get a subset The only way to achieve this at the moment is I think through writing a custom loading script and implementing filters there. ## `IterableDataset` to `Dataset` conversion In combination with the above filter a functionality to "play" the whole stream would be useful. The motivation is that often one might filter the dataset to get a manageable size for experimentation. In that case streaming mode is no longer necessary as the filtered dataset is small enough and it would be useful to be able to play through the whole stream to create a normal `Dataset` with all its benefits. ```python ds = load_dataset("some_large_dataset", streaming=True) ds_filter = ds.filter(lambda x: x["lang"]="fr") ds_filter = ds_filter.stream() # here the `IterableDataset` is converted to a `Dataset` ``` Naturally, this could be expanded with `stream(n=1000)` which creates a `Dataset` with the first `n` elements similar to `take`. ## Stream to the Hub While streaming allows to use a dataset as is without saving the whole dataset on the local machine it is currently not possible to process a dataset and add it to the hub. The only way to do this is by downloading the full dataset and saving the processed dataset again before pushing them to the hub. The API could looks something like: ```python ds = load_dataset("some_large_dataset", streaming=True) ds_filter = ds.filter(some_filter_func) ds_processed = ds_filter.map(some_processing_func) ds_processed.push_to_hub("new_better_dataset", batch_size=100_000) ``` Under the hood this could be done by processing and aggregating `batch_size` elements and then pushing that batch as a single file to the hub. With this functionality one could process and create TB scale datasets while only requiring size of `batch_size` local disk space. cc @lhoestq @albertvillanova
OPEN
2022-02-18T10:45:41
2025-03-19T14:50:14
null
https://github.com/huggingface/datasets/issues/3753
lvwerra
8
[ "enhancement" ]
3,750
`NonMatchingSplitsSizesError` for cats_vs_dogs dataset
## Describe the bug Cannot download cats_vs_dogs dataset due to `NonMatchingSplitsSizesError`. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("cats_vs_dogs") ``` ## Expected results Loading is successful. ## Actual results ``` NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=7503250, num_examples=23422, dataset_name='cats_vs_dogs'), 'recorded': SplitInfo(name='train', num_bytes=7262410, num_examples=23410, dataset_name='cats_vs_dogs')}] ``` ## Environment info Reproduced on a fresh [Colab notebook](https://colab.research.google.com/drive/13GTvrSJbBGvL2ybDdXCBZwATd6FOkMub?usp=sharing). ## Additional Context Originally reported in https://github.com/huggingface/transformers/issues/15698. cc @mariosasko
CLOSED
2022-02-18T05:46:39
2022-02-18T14:56:11
2022-02-18T14:56:11
https://github.com/huggingface/datasets/issues/3750
jaketae
1
[ "bug" ]
3,747
Passing invalid subset should throw an error
## Describe the bug Only some datasets have a subset (as in `load_dataset(name, subset)`). If you pass an invalid subset, an error should be thrown. ## Steps to reproduce the bug ```python import datasets datasets.load_dataset('rotten_tomatoes', 'asdfasdfa') ``` ## Expected results This should break, since `'asdfasdfa'` isn't a subset of the `rotten_tomatoes` dataset. ## Actual results This API call silently succeeds.
OPEN
2022-02-17T18:16:11
2022-02-17T18:16:11
null
https://github.com/huggingface/datasets/issues/3747
jxmorris12
0
[ "bug" ]
3,744
Better shards shuffling in streaming mode
Sometimes a dataset script has a `_split_generators` that returns several files as well as the corresponding metadata of each file. It often happens that they end up in two separate lists in the `gen_kwargs`: ```python gen_kwargs = { "files": [os.path.join(data_dir, filename) for filename in all_files], "metadata_files": [all_metadata[filename] for filename in all_files], } ``` It happened for Multilingual Spoken Words, for example, in #3666. However, currently **the two lists are shuffled independently** when shuffling the shards in streaming mode. This leads to `_generate_examples` not having the right metadata for each file. To prevent such a big but silent issue, I suggest that we always shuffle lists of the same length in the exact same way. cc @polinaeterna
CLOSED
2022-02-17T15:07:21
2022-02-23T15:00:58
2022-02-23T15:00:58
https://github.com/huggingface/datasets/issues/3744
lhoestq
0
[ "enhancement", "streaming" ]
3,739
Pubmed dataset does not work in streaming mode
## Describe the bug Trying to use the `pubmed` dataset with `streaming=True` fails. ## Steps to reproduce the bug ```python import datasets pubmed_train = datasets.load_dataset('pubmed', split='train', streaming=True) print (next(iter(pubmed_train))) ``` ## Expected results I would expect to see the first training sample from the pubmed dataset. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/abhinav/Documents/mosaicml/mosaicml_venv/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 367, in __iter__ for key, example in self._iter(): File "/Users/abhinav/Documents/mosaicml/mosaicml_venv/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 364, in _iter yield from ex_iterable File "/Users/abhinav/Documents/mosaicml/mosaicml_venv/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 79, in __iter__ for key, example in self.generate_examples_fn(**self.kwargs): File "/Users/abhinav/.cache/huggingface/modules/datasets_modules/datasets/pubmed/9715addf10c42a7877a2149ae0c5f2fddabefc775cd1bd9b03ac3f012b86ce46/pubmed.py", line 373, in _generate_examples tree = etree.parse(filename) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/xml/etree/ElementTree.py", line 1202, in parse tree.parse(source, parser) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/xml/etree/ElementTree.py", line 584, in parse source = open(source, "rb") FileNotFoundError: [Errno 2] No such file or directory: 'gzip://pubmed21n0001.xml::ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n0001.xml.gz' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.2 - Platform: macOS-11.4-x86_64-i386-64bit - Python version: 3.8.2 - PyArrow version: 6.0.0 ## Comments The error looks like an issue with `open` vs. `xopen` inside the `xml` package. It looks like it's trying to open the remote source URL, which has been edited with prefix `gzip://...`. Maybe there can be an explicit `xopen` before passing the raw data to `etree`, something like: ```python # Before tree = etree.parse(filename) root = tree.getroot() # After with xopen(filename) as f: data_str = f.read() root = etree.fromstring(data_str) ```
CLOSED
2022-02-16T17:13:37
2022-02-18T14:42:13
2022-02-18T14:42:13
https://github.com/huggingface/datasets/issues/3739
abhi-mosaic
1
[ "bug" ]
3,738
For data-only datasets, streaming and non-streaming don't behave the same
See https://huggingface.co/datasets/huggingface/transformers-metadata: it only contains two JSON files. In streaming mode, the files are concatenated, and thus the rows might be dictionaries with different keys: ```python import datasets as ds iterable_dataset = ds.load_dataset("huggingface/transformers-metadata", split="train", streaming=True); rows = list(iterable_dataset.take(100)) rows[0] # {'model_type': 'albert', 'pytorch': True, 'tensorflow': True, 'flax': True, 'processor': 'AutoTokenizer'} rows[99] # {'model_class': 'BartModel', 'pipeline_tag': 'feature-extraction', 'auto_class': 'AutoModel'} ``` In normal mode, an exception is thrown: ```python import datasets as ds dataset = ds.load_dataset("huggingface/transformers-metadata", split="train"); ``` ``` ValueError: Couldn't cast model_class: string pipeline_tag: string auto_class: string to {'model_type': Value(dtype='string', id=None), 'pytorch': Value(dtype='bool', id=None), 'tensorflow': Value(dtype='bool', id=None), 'flax': Value(dtype='bool', id=None), 'processor': Value(dtype='string', id=None)} because column names don't match ```
OPEN
2022-02-16T15:20:57
2022-02-21T14:24:55
null
https://github.com/huggingface/datasets/issues/3738
severo
9
[ "bug" ]
3,735
Performance of `datasets` at scale
# Performance of `datasets` at 1TB scale ## What is this? During the processing of a large dataset I monitored the performance of the `datasets` library to see if there are any bottlenecks. The insights of this analysis could guide the decision making to improve the performance of the library. ## Dataset The dataset is a 1.1TB extract from GitHub with 120M code files and is stored as 5000 `.json.gz` files. The goal of the preprocessing is to remove duplicates and filter files based on their stats. While the calculating of the hashes for deduplication and stats for filtering can be parallelized the filtering itself is run with a single process. After processing the files are pushed to the hub. ## Machine The experiment was run on a `m1` machine on GCP with 96 CPU cores and 1.3TB RAM. ## Performance breakdown - Loading the data **3.5h** (_30sec_ from cache) - **1h57min** single core loading (not sure what is going on here, corresponds to second progress bar) - **1h10min** multi core json reading - **20min** remaining time before and after the two main processes mentioned above - Process the data **2h** (_20min_ from cache) - **20min** Getting reading for processing - **40min** Hashing and files stats (96 workers) - **58min** Deduplication filtering (single worker) - Save parquet files **5h** - Saving 1000 parquet files (16 workers) - Push to hub **37min** - **34min** git add - **3min** git push (several hours with `Repository.git_push()`) ## Conclusion It appears that loading and saving the data is the main bottleneck at that scale (**8.5h**) whereas processing (**2h**) and pushing the data to the hub (**0.5h**) is relatively fast. To optimize the performance at this scale it would make sense to consider such an end-to-end example and target the bottlenecks which seem to be loading from and saving to disk. The processing itself seems to run relatively fast. ## Notes - map operation on a 1TB dataset with 96 workers requires >1TB RAM - map operation does not maintain 100% CPU utilization with 96 workers - sometimes when the script crashes all the data files have a corresponding `*.lock` file in the data folder (or multiple e.g. `*.lock.lock` when it happened a several times). This causes the cache **not** to be triggered (which is significant at that scale) - i guess because there are new data files - parallelizing `to_parquet` decreased the saving time from 17h to 5h, however adding more workers at this point had almost no effect. not sure if this is: a) a bug in my parallelization logic, b) i/o limit to load data form disk to memory or c) i/o limit to write from memory to disk. - Using `Repository.git_push()` was much slower than using command line `git-lfs` - 10-20MB/s vs. 300MB/s! The `Dataset.push_to_hub()` function is even slower as it only uploads one file at a time with only a few MB/s, whereas `Repository.git_push()` pushes files in parallel (each at a similar speed). cc @lhoestq @julien-c @LysandreJik @SBrandeis
OPEN
2022-02-16T14:23:32
2024-06-27T01:17:48
null
https://github.com/huggingface/datasets/issues/3735
lvwerra
6
[]
3,733
Bugs in NewsQA dataset
## Describe the bug NewsQA dataset has the following bugs: - the field `validated_answers` is an exact copy of the field `answers` but with the addition of `'count': [0]` to each dict - the field `badQuestion` does not appear in `answers` nor `validated_answers` ## Steps to reproduce the bug By inspecting the dataset script we can see that: - the parsing of `validated_answers` is a copy-paste of the one for `answers` - the `badQuestion` field is ignored in the parsing of both `answers` and `validated_answers`
CLOSED
2022-02-16T13:17:37
2022-02-17T07:54:25
2022-02-17T07:54:25
https://github.com/huggingface/datasets/issues/3733
albertvillanova
0
[ "bug" ]
3,730
Checksum Error when loading multi-news dataset
## Describe the bug When using the load_dataset function from the datasets module to load the Multi-News dataset, it does not load the dataset but throws a checksum error instead. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("multi_news") ``` ## Expected results Should download and load the Multi-News dataset. ## Actual results Throws the following error and cannot load the data successfully: ``` NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=1vRY2wM6rlOZrf9exGTm5pXj5ExlVwJ0C'] ``` Could this issue please be looked at? Thanks!
CLOSED
2022-02-16T05:11:08
2022-02-16T20:05:06
2022-02-16T08:48:46
https://github.com/huggingface/datasets/issues/3730
byw2
1
[ "bug" ]
3,729
Wrong number of examples when loading a text dataset
## Describe the bug When I use load_dataset to read a txt file, I find that the number of samples is incorrect ## Steps to reproduce the bug ``` fr = open('train.txt','r',encoding='utf-8').readlines() print(len(fr)) # 1199637 datasets = load_dataset('text', data_files={'train': ['train.txt']}, streaming=False) print(len(datasets['train'])) # 1199649 ``` I also used a command line operation to verify it ``` $ wc -l train.txt 1199637 train.txt ``` ## Expected results The number of examples should match the number of lines in the file (1199637); please fix that issue ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.3 - Platform:windows&linux - Python version:3.7 - PyArrow version:6.0.1
CLOSED
2022-02-16T01:13:31
2022-03-15T16:16:09
2022-03-15T16:16:09
https://github.com/huggingface/datasets/issues/3729
kg-nlp
2
[ "bug" ]
3,728
VoxPopuli
## Adding a Dataset - **Name:** VoxPopuli - **Description:** A Large-Scale Multilingual Speech Corpus - **Paper:** https://arxiv.org/pdf/2101.00390.pdf - **Data:** https://github.com/facebookresearch/voxpopuli - **Motivation:** one of the largest (if not the largest) multilingual speech corpus: 400K hours of multilingual unlabeled speech + 17k hours of labeled speech Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). 👀 @kahne @Molugan
CLOSED
2022-02-15T23:04:55
2022-02-16T18:49:12
2022-02-16T18:49:12
https://github.com/huggingface/datasets/issues/3728
VictorSanh
1
[ "dataset request" ]
3,724
Bug while streaming CSV dataset with pandas 1.4
## Describe the bug If we upgrade to pandas `1.4`, the patching of the pandas module is no longer working ``` AttributeError: '_PatchedModuleObj' object has no attribute '__version__' ``` ## Steps to reproduce the bug ``` pip install pandas==1.4 ``` ```python from datasets import load_dataset ds = load_dataset("lvwerra/red-wine", split="train", streaming=True) item = next(iter(ds)) item ```
CLOSED
2022-02-15T15:16:19
2022-02-15T16:55:44
2022-02-15T16:55:44
https://github.com/huggingface/datasets/issues/3724
albertvillanova
0
[ "bug" ]
3,720
Builder Configuration Update Required on Common Voice Dataset
Missing language in Common Voice dataset **Link:** https://huggingface.co/datasets/common_voice I tried to call the Urdu dataset using `load_dataset("common_voice", "ur", split="train+validation")` but couldn't, because the builder configuration was not found. I checked the source file here for the supported languages: https://github.com/huggingface/datasets/blob/master/datasets/common_voice/common_voice.py and Urdu isn't included there. I assume a quick update will fix the issue, as Urdu speech is now available in the Common Voice dataset. Am I the one who added this dataset? No
CLOSED
2022-02-14T16:21:41
2024-04-28T18:03:08
2024-04-28T18:03:08
https://github.com/huggingface/datasets/issues/3720
aasem
7
[ "bug" ]
3,717
wrong condition in `Features ClassLabel encode_example`
## Describe the bug The `encode_example` function in *features.py* seems to have a wrong condition. ```python if not -1 <= example_data < self.num_classes: raise ValueError(f"Class label {example_data:d} greater than configured num_classes {self.num_classes}") ``` ## Expected results The `not -1` condition changes the result of the condition. For instance, if `example_data` equals 4 and `self.num_classes` equals 4 too, `example_data < self.num_classes` will give `False` as expected. But if I add the `not -1` condition, `not -1 <= example_data < self.num_classes` will give `True` and raise an exception. ## Environment info - `datasets` version: 1.18.3 - Python version: 3.8.10 - PyArrow version: 7.00
CLOSED
2022-02-14T11:44:35
2022-02-14T15:09:36
2022-02-14T15:07:43
https://github.com/huggingface/datasets/issues/3717
Tudyx
1
[ "bug" ]
3,716
`FaissIndex` to support multiple GPU and `custom_index`
**Is your feature request related to a problem? Please describe.** Currently, because `device` is of the type `int | None`, to leverage `faiss-gpu`'s multi-gpu support, you need to create a `custom_index`. However, if using a `custom_index` created by e.g. `faiss.index_cpu_to_all_gpus`, then `FaissIndex.save` does not work properly because it checks the device id (which is an int, so no multiple GPUs). **Describe the solution you'd like** I would like `FaissIndex` to support multiple GPUs, by passing in a list to `add_faiss_index`. **Describe alternatives you've considered** Alternatively, I would like it to at least provide a warning because it wasn't the behavior that I expected. **Additional context** Relevant source code here: https://github.com/huggingface/datasets/blob/6ed6ac9448311930557810383d2cfd4fe6aae269/src/datasets/search.py#L340-L349 Device management needs changing to support multiple GPUs, probably by `isinstance` calls. I can provide a PR if you like :) Thanks for reading!
CLOSED
2022-02-14T06:21:43
2022-03-07T16:28:56
2022-03-07T16:28:56
https://github.com/huggingface/datasets/issues/3716
rentruewang
2
[ "enhancement" ]
3,714
tatoeba_mt: File not found error and key error
## Dataset viewer issue for 'tatoeba_mt' **Link:** https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt My data loader script does not seem to work. The files are part of the local repository but cannot be found. An example where it should work is the subset for "afr-eng". Another problem is that I do not have validation data for all subsets and I don't know how to properly check whether validation exists in the configuration before I try to download it. An example is the subset for "afr-deu". Am I the one who added this dataset ? Yes
CLOSED
2022-02-13T16:35:45
2022-02-13T20:44:04
2022-02-13T20:44:04
https://github.com/huggingface/datasets/issues/3714
jorgtied
1
[ "dataset-viewer" ]
3,708
Loading JSON gets stuck with many workers/threads
## Describe the bug Loading a JSON dataset with `load_dataset` can get stuck when running on a machine with many CPUs. This is especially an issue when loading a large dataset on a large machine. ## Steps to reproduce the bug I originally created the following script to reproduce the issue: ```python from datasets import load_dataset from multiprocessing import Process from tqdm import tqdm import datasets from transformers import set_seed def run_tasks_in_parallel(tasks, ds_list): for _ in tqdm(range(1000)): print('new batch') running_tasks = [Process(target=task, args=(ds, i)) for i, (task, ds) in enumerate(zip(tasks, ds_list))] for running_task in running_tasks: running_task.start() for running_task in running_tasks: running_task.join() def get_dataset(): dataset_name = 'transformersbook/codeparrot' ds = load_dataset(dataset_name+'-train', split="train", streaming=True) ds = ds.shuffle(buffer_size=1000, seed=1) return iter(ds) def get_next_element(ds, process_id, N=10000): for _ in range(N): _ = next(ds)['content'] print(f'process {process_id} done') return set_seed(1) datasets.utils.logging.set_verbosity_debug() n_processes = 8 tasks = [get_next_element for _ in range(n_processes)] args = [get_dataset() for _ in range(n_processes)] run_tasks_in_parallel(tasks, args) ``` Today I noticed that it can happen when running it on a single process on a machine with many cores without streaming. So just `load_dataset("transformersbook/codeparrot-train")` alone might cause the issue after waiting long enough or trying many times. It's a slightly random process which makes it especially hard to track down. When I encountered it today it had already processed 17GB of data (the size of the cache folder when it got stuck) before getting stuck. Here's my current understanding of the error. As far as I can tell it happens in the following block: https://github.com/huggingface/datasets/blob/be701e9e89ab38022612c7263edc015bc7feaff9/src/datasets/packaged_modules/json/json.py#L119-L139 When the try on line 121 fails and the `block_size` is increased it can happen that it can't read the JSON again and gets stuck indefinitely. A hint that points in that direction is that increasing the `chunksize` argument decreases the chance of getting stuck and vice versa. Maybe it is an issue with a lock on the file that is not properly released. ## Expected results Read a JSON before the end of the universe. ## Actual results Read a JSON not before the end of the universe. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-glibc2.28 - Python version: 3.9.10 - PyArrow version: 7.0.0 @lhoestq we dicsussed this a while ago. @albertvillanova we discussed this today :)
OPEN
2022-02-11T18:50:48
2023-06-16T11:24:12
null
https://github.com/huggingface/datasets/issues/3708
lvwerra
8
[ "bug" ]
3,707
`.select`: unexpected behavior with `indices`
## Describe the bug The `.select` method will not throw when sending `indices` bigger than the dataset length; `indices` will be wrapped instead. This behavior is not documented anywhere, and is not intuitive. ## Steps to reproduce the bug ```python from datasets import Dataset ds = Dataset.from_dict({"text": ["d", "e", "f"], "label": [4, 5, 6]}) res1 = ds.select([1, 2, 3])['text'] res2 = ds.select([1000])['text'] ``` ## Expected results Both results should throw an `Error`. ## Actual results `res1` will give `['e', 'f', 'd']` `res2` will give `['e']` ## Environment info Bug found from this environment: - `datasets` version: 1.16.1 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.7 - PyArrow version: 6.0.1 It was also replicated on `master`.
CLOSED
2022-02-11T15:20:01
2022-02-14T19:19:21
2022-02-14T19:19:21
https://github.com/huggingface/datasets/issues/3707
gabegma
2
[ "bug" ]
3,706
Unable to load dataset 'big_patent'
## Describe the bug Unable to load the "big_patent" dataset ## Steps to reproduce the bug ```python load_dataset('big_patent', 'd', 'validation') ``` ## Expected results Download big_patents' validation split from the 'd' subset ## Getting an error saying: {FileNotFoundError}Local file ..\huggingface\datasets\downloads\6159313604f4f2c01e7d1cac52139343b6c07f73f6de348d09be6213478455c5\bigPatentData\train.tar.gz doesn't exist ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version:1.18.3 - Platform: Windows - Python version:3.8 - PyArrow version:7.0.0
CLOSED
2022-02-11T09:48:34
2022-02-14T15:26:03
2022-02-14T15:26:03
https://github.com/huggingface/datasets/issues/3706
ankitk2109
5
[ "bug" ]
3,704
OSCAR-2109 datasets are misaligned and truncated
## Describe the bug The `oscar-corpus/OSCAR-2109` data appears to be misaligned and truncated by the dataset builder for subsets that contain more than one part and for cases where the texts contain non-unix newlines. ## Steps to reproduce the bug A few examples, although I'm not sure how deterministic the particular (mis)alignment is in various configurations: ```python from datasets import load_dataset dataset = load_dataset("oscar-corpus/OSCAR-2109", "deduplicated_fi", split="train", use_auth_token=True) entry = dataset[0] # entry["text"] is from fi_part_3.txt.gz # entry["meta"] is from fi_meta_part_2.jsonl.gz dataset = load_dataset("oscar-corpus/OSCAR-2109", "deduplicated_no", split="train", use_auth_token=True) entry = dataset[900000] # entry["text"] is from no_part_3.txt.gz and contains a blank line # entry["meta"] is from no_meta_part_1.jsonl.gz dataset = load_dataset("oscar-corpus/OSCAR-2109", "deduplicated_mk", split="train", streaming=True, use_auth_token=True) # 9088 texts in the dataset are empty ``` For `deduplicated_fi`, all exported raw texts from the dataset are 17GB rather than 20GB as reported in the data splits overview table. The token count with `wc -w` for the raw texts is 2,067,556,874 rather than the expected 2,357,264,196 from the data splits table. For `deduplicated_no` all exported raw texts contain 624,040,887 rather than the expected 776,354,517 tokens. For `deduplicated_mk` it is 122,236,936 rather than 134,544,934 tokens. I'm not expecting the `wc -w` counts to line up exactly with the data splits table, but for comparison the `wc -w` count for `deduplicated_mk` on the raw texts is 134,545,424. ## Issues * The meta / text files are not paired correctly when loading, so the extracted texts do not have the right offsets, the metadata is not associated with the correct text, and the text files may not be processed to the end or may be processed beyond the end (empty texts). * The line count offset is not reset per file so the texts aren't aligned to the right offsets in any parts beyond the first part, leading to truncation when in effect blank lines are not skipped. * Non-unix newline characters are treated as newlines when reading the text files while the metadata only counts unix newlines for its line offsets, leading to further misalignments between the metadata and the extracted texts, and which also results in truncation. ## Expected results All texts from the OSCAR release are extracted according to the metadata and aligned with the correct metadata. 
## Fixes Not necessarily the exact fixes/checks you may want to use (I didn't test all languages or do any cross-platform testing, I'm not sure all the details are compatible with streaming), however to highlight the issues: ```diff diff --git a/OSCAR-2109.py b/OSCAR-2109.py index bbac1076..5eee8de7 100644 --- a/OSCAR-2109.py +++ b/OSCAR-2109.py @@ -20,6 +20,7 @@ import collections import gzip import json +import os import datasets @@ -387,9 +388,20 @@ class Oscar2109(datasets.GeneratorBasedBuilder): with open(checksum_file, encoding="utf-8") as f: data_filenames = [line.split()[1] for line in f if line] data_urls = [self.config.base_data_path + data_filename for data_filename in data_filenames] - text_files = dl_manager.download([url for url in data_urls if url.endswith(".txt.gz")]) - metadata_files = dl_manager.download([url for url in data_urls if url.endswith(".jsonl.gz")]) + # sort filenames so corresponding parts are aligned + text_files = sorted(dl_manager.download([url for url in data_urls if url.endswith(".txt.gz")])) + metadata_files = sorted(dl_manager.download([url for url in data_urls if url.endswith(".jsonl.gz")])) + assert len(text_files) == len(metadata_files) metadata_and_text_files = list(zip(metadata_files, text_files)) + for meta_path, text_path in metadata_and_text_files: + # check that meta/text part numbers are the same + if "part" in os.path.basename(text_path): + assert ( + os.path.basename(text_path).replace(".txt.gz", "").split("_")[-1] + == os.path.basename(meta_path).replace(".jsonl.gz", "").split("_")[-1] + ) + else: + assert len(metadata_and_text_files) == 1 return [ datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"metadata_and_text_files": metadata_and_text_files}), ] @@ -397,10 +409,14 @@ class Oscar2109(datasets.GeneratorBasedBuilder): def _generate_examples(self, metadata_and_text_files): """This function returns the examples in the raw (text) form by iterating on all the files.""" id_ = 0 - offset = 0 for meta_path, text_path in metadata_and_text_files: + # line offsets are per text file + offset = 0 logger.info("generating examples from = %s", text_path) - with gzip.open(open(text_path, "rb"), "rt", encoding="utf-8") as text_f: + # some texts contain non-Unix newlines that should not be + # interpreted as line breaks for the line counts in the metadata + # with readline() + with gzip.open(open(text_path, "rb"), "rt", encoding="utf-8", newline="\n") as text_f: with gzip.open(open(meta_path, "rb"), "rt", encoding="utf-8") as meta_f: for line in meta_f: # read meta @@ -411,7 +427,12 @@ class Oscar2109(datasets.GeneratorBasedBuilder): offset += 1 text_f.readline() # read text - text = "".join([text_f.readline() for _ in range(meta["nb_sentences"])]).rstrip() + text_lines = [text_f.readline() for _ in range(meta["nb_sentences"])] + # all lines contain text (no blank lines or EOF) + assert all(text_lines) + assert "\n" not in text_lines offset += meta["nb_sentences"] + # only strip the trailing newline + text = "".join(text_lines).rstrip("\n") yield id_, {"id": id_, "text": text, "meta": meta} id_ += 1 ``` I've tested this with a number of smaller deduplicated languages with 1-20 parts and the resulting datasets looked correct in terms of word count and size when compared to the data splits table and raw texts, and the text/metadata alignments were correct in all my spot checks. However, there are many many languages I didn't test and I'm not sure that there aren't any texts containing blank lines in the corpus, for instance. 
For the cases I tested, the assertions related to blank lines and EOF made it easier to verify that the text and metadata were aligned as intended, since there would be little chance of spurious alignments of variable-length texts across so much data.
CLOSED
2022-02-11T08:14:59
2022-03-17T18:01:04
2022-03-16T16:21:28
https://github.com/huggingface/datasets/issues/3704
adrianeboyd
10
[ "bug" ]
3,703
ImportError: To be able to use this metric, you need to install the following dependencies['seqeval'] using 'pip install seqeval' for instance'
Hi: I want to use the seqeval metric, but when load_metric('seqeval') is called directly, it reports that the network connection fails. So I downloaded seqeval.py to load it locally. Loading code: metric = load_metric(path='mymetric/seqeval/seqeval.py') But it reports: Traceback (most recent call last): File "/home/ubuntu/Python3.6_project/zyf_project/transformers/examples/pytorch/token-classification/run_ner.py", line 604, in <module> main() File "/home/ubuntu/Python3.6_project/zyf_project/transformers/examples/pytorch/token-classification/run_ner.py", line 481, in main metric = load_metric(path='mymetric/seqeval/seqeval.py') File "/home/ubuntu/Python3.6_project/zyf_project/transformers_venv_0209/lib/python3.7/site-packages/datasets/load.py", line 610, in load_metric dataset=False, File "/home/ubuntu/Python3.6_project/zyf_project/transformers_venv_0209/lib/python3.7/site-packages/datasets/load.py", line 450, in prepare_module f"To be able to use this {module_type}, you need to install the following dependencies" ImportError: To be able to use this metric, you need to install the following dependencies['seqeval'] using 'pip install seqeval' for instance' **What should I do? Please help me, thank you**
CLOSED
2022-02-11T06:38:42
2023-07-11T09:31:59
2023-07-11T09:31:59
https://github.com/huggingface/datasets/issues/3703
zhangyifei1
9
[]
3,700
Unable to load a dataset
## Describe the bug Unable to load a dataset from Huggingface that I have just saved. ## Steps to reproduce the bug On Google colab `! pip install datasets ` `from datasets import load_dataset` `my_path = "wiki_dataset"` `dataset = load_dataset('wikipedia', "20200501.fr")` `dataset.save_to_disk(my_path)` `dataset = load_dataset(my_path)` ## Expected results Loading the dataset ## Actual results ValueError: Couldn't cast _data_files: list<item: struct<filename: string>> child 0, item: struct<filename: string> child 0, filename: string _fingerprint: string _format_columns: null _format_kwargs: struct<> _format_type: null _indexes: struct<> _output_all_columns: bool _split: string to {'builder_name': Value(dtype='string', id=None), 'citation': Value(dtype='string', id=None), 'config_name': Value(dtype='string', id=None), 'dataset_size': Value(dtype='int64', id=None), 'description': Value(dtype='string', id=None), 'download_checksums': {}, 'download_size': Value(dtype='int64', id=None), 'features': {'title': {'dtype': Value(dtype='string', id=None), 'id': Value(dtype='null', id=None), '_type': Value(dtype='string', id=None)}, 'text': {'dtype': Value(dtype='string', id=None), 'id': Value(dtype='null', id=None), '_type': Value(dtype='string', id=None)}}, 'homepage': Value(dtype='string', id=None), 'license': Value(dtype='string', id=None), 'post_processed': Value(dtype='null', id=None), 'post_processing_size': Value(dtype='null', id=None), 'size_in_bytes': Value(dtype='int64', id=None), 'splits': {'train': {'name': Value(dtype='string', id=None), 'num_bytes': Value(dtype='int64', id=None), 'num_examples': Value(dtype='int64', id=None), 'dataset_name': Value(dtype='string', id=None)}}, 'supervised_keys': Value(dtype='null', id=None), 'task_templates': Value(dtype='null', id=None), 'version': {'version_str': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'major': Value(dtype='int64', id=None), 'minor': Value(dtype='int64', id=None), 'patch': Value(dtype='int64', id=None)}} because column names don't match ## Environment info - `datasets` version: 1.18.3 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 6.0.1
CLOSED
2022-02-10T15:05:53
2024-07-04T08:39:23
2022-02-11T22:56:39
https://github.com/huggingface/datasets/issues/3700
PaulchauvinAI
3
[ "bug" ]
3,688
Pyarrow version error
## Describe the bug I installed datasets (versions 1.17.0, 1.18.0, 1.18.3) but I'm currently not able to import it because of pyarrow. When I try to import it, I get the following error: `To use datasets, the module pyarrow>=3.0.0 is required, and the current version of pyarrow doesn't match this condition`. I tried with all versions of pyarrow except `4.0.0` but still get the same error. ## Steps to reproduce the bug ```python import datasets ``` ## Expected results A clear and concise description of the expected results. ## Actual results AttributeError Traceback (most recent call last) <ipython-input-19-652e886d387f> in <module> ----> 1 import datasets ~\AppData\Local\Continuum\anaconda3\lib\site-packages\datasets\__init__.py in <module> 26 27 ---> 28 if _version.parse(pyarrow.__version__).major < 3: 29 raise ImportWarning( 30 "To use `datasets`, the module `pyarrow>=3.0.0` is required, and the current version of `pyarrow` doesn't match this condition.\n" AttributeError: 'Version' object has no attribute 'major' ## Environment info Traceback (most recent call last): File "c:\users\alex\appdata\local\continuum\anaconda3\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "c:\users\alex\appdata\local\continuum\anaconda3\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\Users\Alex\AppData\Local\Continuum\anaconda3\Scripts\datasets-cli.exe\__main__.py", line 5, in <module> File "c:\users\alex\appdata\local\continuum\anaconda3\lib\site-packages\datasets\__init__.py", line 28, in <module> if _version.parse(pyarrow.__version__).major < 3: AttributeError: 'Version' object has no attribute 'major' - `datasets` version: - Platform: Linux (Ubuntu) and Windows: conda on both - Python version: 3.7 - PyArrow version: 7.0.0
CLOSED
2022-02-08T12:53:59
2022-02-09T06:35:33
2022-02-09T06:35:32
https://github.com/huggingface/datasets/issues/3688
Zaker237
3
[ "bug" ]
3,687
Can't get the text data when calling to_tf_dataset
I am working with the SST2 dataset, and am using TensorFlow 2.5.

I'd like to convert it to a `tf.data.Dataset` by calling the `to_tf_dataset` method. The following snippet is what I am using to achieve this:

```
from datasets import load_dataset
from transformers import DefaultDataCollator

data_collator = DefaultDataCollator(return_tensors="tf")
dataset = load_dataset("sst")
train_dataset = dataset["train"].to_tf_dataset(columns=['sentence'], label_cols="label", shuffle=True, batch_size=8, collate_fn=data_collator)
```

However, this only gets me the labels; the text -- the most important part -- is missing:

```
for s in train_dataset.take(1):
    print(s)  # prints something like: ({}, <tf.Tensor: shape=(8,), ...>)
```

As you can see, it only returns the label part, not the data, as indicated by the empty dictionary, `{}`. So far, I've played with various settings of the method arguments, but to no avail; I do not want to perform any text processing at this time.

On my quest to achieve what I want (a `tf.data.Dataset`), I've consulted these resources:

[https://www.philschmid.de/huggingface-transformers-keras-tf](https://www.philschmid.de/huggingface-transformers-keras-tf)

[https://huggingface.co/docs/datasets/use_dataset.html?highlight=tensorflow](https://huggingface.co/docs/datasets/use_dataset.html?highlight=tensorflow)

I was surprised not to find more extensive examples on how to transform a Hugging Face dataset into one compatible with TensorFlow.

If you could point me to where I am going wrong, please do so. Thanks in advance for your support.

---

Edit: In the [docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.to_tf_dataset), I found the following description:

_In general, only columns that the model can use as input should be included here (numeric data only)._

Does this imply that no textual, i.e., `string` data can be loaded?
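A minimal sketch of the usual pattern when the end goal is model training, assuming a tokenizer checkpoint of your choice (the one below is only a placeholder): tokenize the string column with `map` first, then pass the resulting numeric columns to `to_tf_dataset`.

```python
from datasets import load_dataset
from transformers import AutoTokenizer, DefaultDataCollator

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")  # placeholder checkpoint
dataset = load_dataset("sst")

# Turn the string column into numeric features before building the tf.data.Dataset
dataset = dataset.map(lambda batch: tokenizer(batch["sentence"], truncation=True), batched=True)

data_collator = DefaultDataCollator(return_tensors="tf")
train_dataset = dataset["train"].to_tf_dataset(
    columns=["input_ids", "attention_mask"],
    label_cols="label",
    shuffle=True,
    batch_size=8,
    collate_fn=data_collator,
)
```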
CLOSED
2022-02-08T11:52:10
2023-01-19T14:55:18
2023-01-19T14:55:18
https://github.com/huggingface/datasets/issues/3687
phrasenmaeher
6
[]
3,686
`Translation` features cannot be `flatten`ed
## Describe the bug

[`Dataset.flatten`](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L1265) fails for columns with the [`Translation`](https://github.com/huggingface/datasets/blob/3edbeb0ec6519b79f1119adc251a1a6b379a2c12/src/datasets/features/translation.py#L8) feature.

## Steps to reproduce the bug

```python
from datasets import load_dataset

dataset = load_dataset("europa_ecdc_tm", "en2fr", split="train[:10]")
print(dataset.features)
# {'translation': Translation(languages=['en', 'fr'], id=None)}
print(dataset[0])
# {'translation': {'en': 'Vaccination against hepatitis C is not yet available.', 'fr': 'Aucune vaccination contre l’hépatite C n’est encore disponible.'}}

dataset.flatten()
```

## Expected results

`dataset.flatten` should flatten the `Translation` column as if it were a dict of `Value("string")`

```python
dataset[0]
# {'translation.en': 'Vaccination against hepatitis C is not yet available.', 'translation.fr': 'Aucune vaccination contre l’hépatite C n’est encore disponible.' }
dataset.features
# {'translation.en': Value("string"), 'translation.fr': Value("string")}
```

## Actual results

```python
In [31]: dset.flatten()
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-31-bb88eb5276ee> in <module>
----> 1 dset.flatten()

[...]\site-packages\datasets\fingerprint.py in wrapper(*args, **kwargs)
    411             # Call actual function
    412
--> 413             out = func(self, *args, **kwargs)
    414
    415             # Update fingerprint of in-place transforms + update in-place history of transforms

[...]\site-packages\datasets\arrow_dataset.py in flatten(self, new_fingerprint, max_depth)
   1294                 break
   1295         dataset.info.features = self.features.flatten(max_depth=max_depth)
-> 1296         dataset._data = update_metadata_with_features(dataset._data, dataset.features)
   1297         logger.info(f'Flattened dataset from depth {depth} to depth {1 if depth + 1 < max_depth else "unknown"}.')
   1298         dataset._fingerprint = new_fingerprint

[...]\site-packages\datasets\arrow_dataset.py in update_metadata_with_features(table, features)
    534 def update_metadata_with_features(table: Table, features: Features):
    535     """To be used in dataset transforms that modify the features of the dataset, in order to update the features stored in the metadata of its schema."""
--> 536     features = Features({col_name: features[col_name] for col_name in table.column_names})
    537     if table.schema.metadata is None or b"huggingface" not in table.schema.metadata:
    538         pa_metadata = ArrowWriter._build_metadata(DatasetInfo(features=features))

[...]\site-packages\datasets\arrow_dataset.py in <dictcomp>(.0)
    534 def update_metadata_with_features(table: Table, features: Features):
    535     """To be used in dataset transforms that modify the features of the dataset, in order to update the features stored in the metadata of its schema."""
--> 536     features = Features({col_name: features[col_name] for col_name in table.column_names})
    537     if table.schema.metadata is None or b"huggingface" not in table.schema.metadata:
    538         pa_metadata = ArrowWriter._build_metadata(DatasetInfo(features=features))

KeyError: 'translation.en'
```

## Environment info

- `datasets` version: 1.18.3
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.10
- PyArrow version: 3.0.0
CLOSED
2022-02-08T11:33:48
2022-03-18T17:28:13
2022-03-18T17:28:13
https://github.com/huggingface/datasets/issues/3686
SBrandeis
1
[ "bug" ]
3,679
Download datasets from a private hub
In the context of a private hub deployment, customers would like to use load_dataset() to load datasets from their hub, not from the public hub. This doesn't seem to be configurable at the moment and it would be nice to add this feature. The obvious workaround is to clone the repo first and then load it from local storage, but this adds an extra step. It'd be great to have the same experience regardless of where the hub is hosted. The same issue exists with the transformers library and the CLI. I'm going to create issues there as well, and I'll reference them below.
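A rough sketch of the clone-then-load workaround mentioned above, with placeholder names, assuming the dataset repo can be read from a local clone (plain data files or a loading script):

```python
from datasets import load_dataset

# Hypothetical workaround: clone the dataset repo from the private hub first, e.g.
#   git clone https://<private-hub-host>/datasets/<org>/<dataset-name>
# then point load_dataset at the local clone instead of the hub:
dataset = load_dataset("./<dataset-name>")
```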
CLOSED
2022-02-04T10:49:06
2022-02-22T11:08:07
2022-02-22T11:08:07
https://github.com/huggingface/datasets/issues/3679
juliensimon
3
[ "enhancement", "private-hub" ]
3,677
Discovery cannot be streamed anymore
## Describe the bug

A clear and concise description of what the bug is.

## Steps to reproduce the bug

```python
from datasets import load_dataset

iterable_dataset = load_dataset("discovery", name="discovery", split="train", streaming=True)
list(iterable_dataset.take(1))
```

## Expected results

The first row of the train split.

## Actual results

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 365, in __iter__
    for key, example in self._iter():
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 362, in _iter
    yield from ex_iterable
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 272, in __iter__
    yield from islice(self.ex_iterable, self.n)
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 79, in __iter__
    yield from self.generate_examples_fn(**self.kwargs)
  File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/discovery/542fab7a9ddc1d9726160355f7baa06a1ccc44c40bc8e12c09e9bc743aca43a2/discovery.py", line 333, in _generate_examples
    with open(data_file, encoding="utf8") as f:
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/streaming.py", line 64, in wrapper
    return function(*args, use_auth_token=use_auth_token, **kwargs)
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py", line 369, in xopen
    file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open()
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/core.py", line 456, in open
    return open_files(
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/core.py", line 288, in open_files
    fs, fs_token, paths = get_fs_token_paths(
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/core.py", line 611, in get_fs_token_paths
    fs = filesystem(protocol, **inkwargs)
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/registry.py", line 253, in filesystem
    return cls(**storage_options)
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py", line 68, in __call__
    obj = super().__call__(*args, **kwargs)
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/zip.py", line 57, in __init__
    self.zip = zipfile.ZipFile(self.fo)
  File "/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/zipfile.py", line 1257, in __init__
    self._RealGetContents()
  File "/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/zipfile.py", line 1320, in _RealGetContents
    endrec = _EndRecData(fp)
  File "/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/zipfile.py", line 263, in _EndRecData
    fpin.seek(0, 2)
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py", line 676, in seek
    raise ValueError("Cannot seek streaming HTTP file")
ValueError: Cannot seek streaming HTTP file
```

## Environment info

- `datasets` version: 1.18.3
- Platform: Linux-5.11.0-1027-aws-x86_64-with-glibc2.31
- Python version: 3.9.6
- PyArrow version: 6.0.1
CLOSED
2022-02-03T15:02:03
2022-02-10T16:51:24
2022-02-10T16:51:24
https://github.com/huggingface/datasets/issues/3677
severo
2
[ "bug" ]
3,676
`None` replaced by `[]` after first batch in map
Sometimes `None` can be replaced by `[]` when running map:

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": range(4)})
ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"])
print(ds.to_pandas())
#              b
# 0  [None, [0]]
# 1    [[], [0]]
# 2    [[], [0]]
# 3    [[], [0]]
```

This issue has been experienced when running the `run_qa.py` example from `transformers` (see issue https://github.com/huggingface/transformers/issues/15401).

This may be due to a bug when casting `None` in nested lists. Casting only happens after the first batch, since the first batch is used to infer the feature types.

cc @sgugger
CLOSED
2022-02-03T13:36:48
2022-10-28T13:13:20
2022-10-28T13:13:20
https://github.com/huggingface/datasets/issues/3676
lhoestq
8
[]
3,675
Add CodeContests dataset
## Adding a Dataset

- **Name:** CodeContests
- **Description:** CodeContests is a competitive programming dataset for machine-learning.
- **Paper:**
- **Data:** https://github.com/deepmind/code_contests
- **Motivation:** This dataset was used when training [AlphaCode](https://deepmind.com/blog/article/Competitive-programming-with-AlphaCode).

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
CLOSED
2022-02-03T13:20:00
2022-07-20T11:07:05
2022-07-20T11:07:05
https://github.com/huggingface/datasets/issues/3675
mariosasko
2
[ "dataset request" ]
3,673
`load_dataset("snli")` is different from dataset viewer
## Describe the bug

The dataset that is downloaded from the Hub via `load_dataset("snli")` is different from what is available in the dataset viewer. In the viewer the labels are not encoded (i.e., "neutral", "entailment", "contradiction"), while the downloaded dataset shows the encoded labels (i.e., 0, 1, 2).

Is this expected?

## Environment info

<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->

- `datasets` version:
- Platform: Ubuntu 20.4
- Python version: 3.7
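For what it's worth, the integer labels can be mapped back to their string names through the dataset's `ClassLabel` feature. A small sketch (check `ds.features` for the exact label order):

```python
from datasets import load_dataset

ds = load_dataset("snli", split="validation")
label_feature = ds.features["label"]   # a ClassLabel feature
print(label_feature.names)             # e.g. ['entailment', 'neutral', 'contradiction']
# Decode an integer label back to its string name (snli also uses -1 for unlabeled examples)
print(label_feature.int2str(0))
```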
CLOSED
2022-02-03T12:10:43
2022-02-16T11:22:31
2022-02-11T17:01:21
https://github.com/huggingface/datasets/issues/3673
pietrolesci
11
[ "bug", "dataset-viewer" ]
3,671
Give an estimate of the dataset size in DatasetInfo
**Is your feature request related to a problem? Please describe.**

Currently, only some of the datasets provide `dataset_size`, `download_size`, `size_in_bytes` (and `num_bytes` and `num_examples` inside `splits`). I would like to get this information, or an estimate of it, for all the datasets.

**Describe the solution you'd like**

- get access to the git information for the dataset files hosted on the hub
- look at the [`Content-Length`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Length) header for the files served by HTTP
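A rough sketch of the second idea, assuming the files are reachable at the usual `resolve/` URLs on the Hub (the repo and file name below are placeholders):

```python
import requests

# Hypothetical example: estimate the size of one data file from the HTTP headers
url = "https://huggingface.co/datasets/some_user/some_dataset/resolve/main/data/train.csv"
response = requests.head(url, allow_redirects=True)
size_in_bytes = int(response.headers.get("Content-Length", 0))
print(f"~{size_in_bytes / 1024 ** 2:.1f} MiB")
```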
OPEN
2022-02-03T09:47:10
2022-02-03T09:47:10
null
https://github.com/huggingface/datasets/issues/3671
severo
0
[ "enhancement" ]
3,668
Couldn't cast array of type string error with cast_column
## Describe the bug

On OVH cloud, during the Hugging Face Robust Speech Recognition event, on an AI training notebook instance using JupyterLab and running a Jupyter notebook: when using the `dataset.cast_column("audio", Audio(sampling_rate=16_000))` method I get the error below.

![image](https://user-images.githubusercontent.com/25264037/152214027-9c42a71a-dd24-463c-a346-57e0287e5a8f.png)

This was working with datasets version 1.17.1.dev0, but version 1.18.3 now produces the error above.

## Steps to reproduce the bug

Load the dataset:

![image](https://user-images.githubusercontent.com/25264037/152216145-159553b6-cddc-4f0b-8607-7e76b600e22a.png)

Remove columns:

![image](https://user-images.githubusercontent.com/25264037/152214707-7c7e89d1-87d8-4b4f-8cfc-5d7223d35644.png)

Run my fix_path function. This also creates the audio column that refers to the absolute file path of the audio:

![image](https://user-images.githubusercontent.com/25264037/152214773-51f71ccf-d31b-4449-b63a-1af56436e49f.png)

Then I concatenate a few other datasets and finally try the cast_column method:

![image](https://user-images.githubusercontent.com/25264037/152215032-f341ec86-9d6d-48c9-943b-e2efe37a4d98.png)

but get the error:

![image](https://user-images.githubusercontent.com/25264037/152215073-b85bd057-98e8-413c-9b05-51e9805f2c24.png)

## Expected results

A clear and concise description of the expected results.

## Actual results

Specify the actual results or traceback.

## Environment info

<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->

- `datasets` version: 1.18.3
- Platform: OVH Cloud, AI Training section, container for the Hugging Face Robust Speech Recognition event image (baaastijn/ovh_huggingface)
![image](https://user-images.githubusercontent.com/25264037/152215161-b4ff7bfb-2736-4afb-9223-761a3338d23c.png)
- Python version: 3.8.8
- PyArrow version:
![image](https://user-images.githubusercontent.com/25264037/152215936-4d365760-557e-456b-b5eb-ad1d15cf5073.png)
CLOSED
2022-02-02T18:33:29
2022-07-19T13:36:24
2022-07-19T13:36:24
https://github.com/huggingface/datasets/issues/3668
R4ZZ3
5
[ "bug" ]
3,663
[Audio] Path of Common Voice cannot be used for audio loading anymore
## Describe the bug

## Steps to reproduce the bug

```python
from datasets import load_dataset
from torchaudio import load

ds = load_dataset("common_voice", "ab", split="train")

# both of the following commands fail at the moment
load(ds[0]["audio"]["path"])
load(ds[0]["path"])
```

## Expected results

The path should be the complete absolute path to the downloaded audio file, not some relative path.

## Actual results

```bash
~/hugging_face/venv_3.9/lib/python3.9/site-packages/torchaudio/backend/sox_io_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format)
    150             filepath, frame_offset, num_frames, normalize, channels_first, format)
    151     filepath = os.fspath(filepath)
--> 152     return torch.ops.torchaudio.sox_io_load_audio_file(
    153         filepath, frame_offset, num_frames, normalize, channels_first, format)
    154

RuntimeError: Error loading audio file: failed to open file cv-corpus-6.1-2020-12-11/ab/clips/common_voice_ab_19904194.mp3
```

## Environment info

<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->

- `datasets` version: 1.18.3.dev0
- Platform: Linux-5.4.0-96-generic-x86_64-with-glibc2.27
- Python version: 3.9.1
- PyArrow version: 3.0.0
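For what it's worth, a small sketch of reading the decoded audio that the `Audio` feature already provides (its `array` and `sampling_rate` keys), which sidesteps loading from the path; this assumes the column is decoded on access as in recent `datasets` versions:

```python
from datasets import load_dataset

ds = load_dataset("common_voice", "ab", split="train")

sample = ds[0]["audio"]
waveform = sample["array"]            # decoded audio as a numpy array
sampling_rate = sample["sampling_rate"]
print(waveform.shape, sampling_rate)
```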
CLOSED
2022-02-01T18:40:10
2022-09-21T15:03:09
2022-09-21T14:56:22
https://github.com/huggingface/datasets/issues/3663
patrickvonplaten
19
[ "bug" ]
3,662
[Audio] MP3 resampling is incorrect when dataset's audio files have different sampling rates
The Audio feature resampler for MP3 gets stuck with the first original frequency it meets, which leads to subsequent decoding being incorrect.

Here is some code to reproduce the issue.

Let's first consider two audio files with different sampling rates, 32000 and 16000:

```python
# first download a mp3 file with sampling_rate=32000
!wget https://file-examples-com.github.io/uploads/2017/11/file_example_MP3_700KB.mp3

import torchaudio

audio_path = "file_example_MP3_700KB.mp3"
audio_path2 = audio_path.replace(".mp3", "_resampled.mp3")
resample = torchaudio.transforms.Resample(32000, 16000)
# create a new file with sampling_rate=16000
torchaudio.save(audio_path2, resample(torchaudio.load(audio_path)[0]), 16000)
```

Then we can see an issue here when decoding:

```python
from datasets import Dataset, Audio

dataset = Dataset.from_dict({"audio": [audio_path, audio_path2]}).cast_column("audio", Audio(48000))
dataset[0]  # decode the first audio file sets the resampler orig_freq to 32000
print(dataset.features["audio"]._resampler.orig_freq)
# 32000
print(dataset[0]["audio"]["array"].shape)  # here decoding is fine
# (1308096,)

dataset = Dataset.from_dict({"audio": [audio_path, audio_path2]}).cast_column("audio", Audio(48000))
dataset[1]  # decode the second audio file sets the resampler orig_freq to 16000
print(dataset.features["audio"]._resampler.orig_freq)
# 16000
print(dataset[0]["audio"]["array"].shape)  # here decoding uses orig_freq=16000 instead of 32000
# (2616192,)
```

The value of `orig_freq` doesn't change no matter what file needs to be decoded.

cc @patrickvonplaten @anton-l @cahya-wirawan @albertvillanova

The issue seems to be here in `Audio.decode_mp3`:
https://github.com/huggingface/datasets/blob/4c417d52def6e20359ca16c6723e0a2855e5c3fd/src/datasets/features/audio.py#L176-L180
CLOSED
2022-02-01T17:55:04
2022-02-02T10:52:25
2022-02-02T10:52:25
https://github.com/huggingface/datasets/issues/3662
lhoestq
6
[]
3,659
push_to_hub but preview not working
## Dataset viewer issue for '*happifyhealth/twitter_pnn*'

**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/happifyhealth/twitter_pnn)*

I used

```
dataset.push_to_hub("happifyhealth/twitter_pnn")
```

but the preview is not working.

Am I the one who added this dataset ? Yes
CLOSED
2022-02-01T16:23:57
2022-02-09T08:00:37
2022-02-09T08:00:37
https://github.com/huggingface/datasets/issues/3659
thomas-happify
1
[ "dataset-viewer" ]
3,658
Dataset viewer issue for *P3*
## Dataset viewer issue for '*P3*'

**Link: https://huggingface.co/datasets/bigscience/P3**

```
Status code: 400
Exception: SplitsNotFoundError
Message: The split names could not be parsed from the dataset config.
```

Am I the one who added this dataset ? No
CLOSED
2022-02-01T15:57:56
2023-09-25T12:16:21
2023-09-25T12:16:21
https://github.com/huggingface/datasets/issues/3658
jeffistyping
4
[]
3,656
checksum error subjqa dataset
## Describe the bug

I get a checksum error when loading the `subjqa` dataset (used in the transformers book).

## Steps to reproduce the bug

```python
from datasets import load_dataset

subjqa = load_dataset("subjqa", "electronics")
```

## Expected results

Loading the dataset

## Actual results

```
---------------------------------------------------------------------------
NonMatchingChecksumError                  Traceback (most recent call last)
<ipython-input-2-d2857d460155> in <module>()
      2 from datasets import load_dataset
      3
----> 4 subjqa = load_dataset("subjqa","electronics")

3 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
     38     if len(bad_urls) > 0:
     39         error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 40         raise NonMatchingChecksumError(error_msg + str(bad_urls))
     41     logger.info("All the checksums matched successfully" + for_verification_name)
     42

NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/lewtun/SubjQA/archive/refs/heads/master.zip']
```

## Environment info

Google colab

- `datasets` version: 1.18.2
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 3.0.0
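A possible workaround while the recorded checksums are stale, assuming a `datasets` release where this keyword argument is available, is to skip the verification:

```python
from datasets import load_dataset

# Skip checksum verification for the GitHub zip whose recorded checksum is out of date
subjqa = load_dataset("subjqa", "electronics", ignore_verifications=True)
```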
CLOSED
2022-02-01T10:53:33
2022-02-10T10:56:59
2022-02-10T10:56:38
https://github.com/huggingface/datasets/issues/3656
RensDimmendaal
2
[ "bug" ]
3,655
Pubmed dataset not reachable
## Describe the bug

Trying to use the `pubmed` dataset fails to reach / download the source files.

## Steps to reproduce the bug

```python
pubmed_train = datasets.load_dataset('pubmed', split='train')
```

## Expected results

Should begin downloading the pubmed dataset.

## Actual results

```
ConnectionError: Couldn't reach ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n0865.xml.gz (InvalidSchema("No connection adapters were found for 'ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n0865.xml.gz'"))
```

## Environment info

- `datasets` version: 1.18.2
- Platform: macOS-11.4-x86_64-i386-64bit
- Python version: 3.8.2
- PyArrow version: 6.0.0
CLOSED
2022-01-31T18:45:47
2022-12-19T19:18:10
2022-02-14T14:15:41
https://github.com/huggingface/datasets/issues/3655
abhi-mosaic
6
[ "bug" ]
3,653
`to_json` in multiprocessing fashion sometimes deadlock
## Describe the bug

`to_json` in multiprocessing fashion sometimes deadlocks instead of raising exceptions. A temporary solution is to notice that it deadlocks, and then reduce the number of processes or the batch size in order to reduce the memory footprint.

As @lhoestq pointed out, this might be related to https://bugs.python.org/issue22393#msg315684 where `multiprocessing` fails to raise the OOM exception. One suggested alternative is to use `concurrent.futures` instead.

## Steps to reproduce the bug

## Expected results

Script fails when one worker hits OOM, and raises an appropriate error.

## Actual results

Deadlock

## Environment info

<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->

- `datasets` version: 1.8.1
- Platform: Linux
- Python version: 3.8
- PyArrow version: 6.0.1
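A minimal sketch of that alternative with a hypothetical worker function (not the actual `to_json` internals), showing how `concurrent.futures` surfaces a dead worker instead of hanging:

```python
from concurrent.futures import ProcessPoolExecutor

def export_shard(shard_id):
    # placeholder for the real per-shard JSON export logic
    return f"shard-{shard_id}.json"

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(export_shard, i) for i in range(16)]
        for future in futures:
            # result() re-raises worker exceptions; if a worker dies (e.g. it is
            # OOM-killed), a BrokenProcessPool error is raised instead of a deadlock
            print(future.result())
```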
OPEN
2022-01-31T09:35:07
2022-01-31T09:35:07
null
https://github.com/huggingface/datasets/issues/3653
thomasw21
0
[ "bug" ]
3,649
Add IGLUE dataset
## Adding a Dataset

- **Name:** IGLUE
- **Description:** IGLUE brings together 4 vision-and-language tasks across 20 languages (Twitter [thread](https://twitter.com/ebugliarello/status/1487045497583976455?s=20&t=SB4LZGDhhkUW83ugcX_m5w))
- **Paper:** https://arxiv.org/abs/2201.11732
- **Data:** https://github.com/e-bug/iglue
- **Motivation:** This dataset would provide a nice example of combining the text and image features of `datasets` together for multimodal applications.

Note: the data / code are not yet visible on the GitHub repo, so I've pinged the authors for more information.

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
OPEN
2022-01-28T14:59:41
2022-01-28T15:02:35
null
https://github.com/huggingface/datasets/issues/3649
lewtun
0
[ "dataset request", "multimodal" ]
3,645
Streaming datasets based on dl_manager.iter_archive/iter_files are not reset correctly
Hi ! When iterating over a streaming dataset once, it's not reset correctly because of some issues with `dl_manager.iter_archive` and `dl_manager.iter_files`. Indeed they are generator functions (so the iterator that is returned can be exhausted). They should be iterables instead, and be reset if we do a for loop again:

```python
from datasets import load_dataset

d = load_dataset("common_voice", "ab", split="test", streaming=True)

i = 0
for i, _ in enumerate(d):
    pass
print(i)  # 8

# let's do it again
i = 0
for i, _ in enumerate(d):
    pass
print(i)  # 0
```
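A small sketch of the kind of fix described above (plain Python, not the actual `datasets` internals): wrap the generator function in an object whose `__iter__` re-creates the generator, so a second `for` loop starts from the beginning instead of hitting an exhausted iterator.

```python
class ReiterableArchive:
    """Re-iterable wrapper around a generator function."""

    def __init__(self, generator_fn, *args, **kwargs):
        self.generator_fn = generator_fn
        self.args = args
        self.kwargs = kwargs

    def __iter__(self):
        # a fresh generator is created on every iteration, so the object is "reset"
        yield from self.generator_fn(*self.args, **self.kwargs)


def iter_numbers(n):
    yield from range(n)

numbers = ReiterableArchive(iter_numbers, 3)
print(list(numbers))  # [0, 1, 2]
print(list(numbers))  # [0, 1, 2] again, not []
```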
CLOSED
2022-01-27T17:17:41
2022-01-28T16:34:28
2022-01-28T16:34:28
https://github.com/huggingface/datasets/issues/3645
lhoestq
0
[]
3,644
Add a GROUP BY operator
**Is your feature request related to a problem? Please describe.**

Using batch mapping, we can easily split examples. However, we lack an appropriate option for merging them back together by some key. Consider this example:

```python
# features:
# {
#     "example_id": datasets.Value("int32"),
#     "text": datasets.Value("string")
# }

ds = datasets.Dataset()

def split(examples):
    sentences = [text.split(".") for text in examples["text"]]
    return {
        "example_id": [
            example_id
            for example_id, sents in zip(examples["example_id"], sentences)
            for _ in sents
        ],
        "sentence": [sent for sents in sentences for sent in sents],
        "sentence_id": [i for sents in sentences for i in range(len(sents))],
    }

split_ds = ds.map(split, batched=True)

def process(examples):
    outputs = some_neural_network_that_works_on_sentences(examples["sentence"])
    return {"outputs": outputs}

split_ds = split_ds.map(process, batched=True)
```

I have a dataset consisting of texts that I would like to process sentence by sentence in a batched way. Afterwards, I would like to put it back together as it was, merging the outputs together.

**Describe the solution you'd like**

Ideally, it would look something like this:

```python
def join(examples):
    order = np.argsort(examples["sentence_id"])
    text = ".".join(examples["text"][i] for i in order)
    outputs = [examples["outputs"][i] for i in order]
    return {"text": text, "outputs": outputs}

ds = split_ds.group_by("example_id", join)
```

**Describe alternatives you've considered**

Right now, we can do this:

```python
def merge(example):
    meeting_id = example["example_id"]
    parts = split_ds.filter(lambda x: x["example_id"] == meeting_id).sort("segment_no")
    return {"outputs": list(parts["outputs"])}

ds = ds.map(merge)
```

Of course, we could process the dataset like this:

```python
def process(example):
    outputs = some_neural_network_that_works_on_sentences(example["text"].split("."))
    return {"outputs": outputs}

ds = ds.map(process, batched=True)
```

However, that does not allow using an arbitrary batch size and may lead to very inefficient use of resources if the batch size is much larger than the number of sentences in one example.

I would very much appreciate some kind of group by operator to merge examples based on the value of one column.
OPEN
2022-01-27T16:57:54
2025-01-28T11:39:48
null
https://github.com/huggingface/datasets/issues/3644
felix-schneider
14
[ "enhancement" ]
3,640
Issues with custom dataset in Wav2Vec2
We are training Wav2Vec2 using the run_speech_recognition_ctc_bnb.py script. This is working fine with Common Voice; however, using our custom dataset and data loader at [NbAiLab/NPSC](https://huggingface.co/datasets/NbAiLab/NPSC) it crashes after roughly 1 epoch with the following stack trace:

![image](https://user-images.githubusercontent.com/9079808/151355893-6d5887cc-ca19-4b12-948a-124eb6dac372.png)

We are able to work around the issue, for instance by adding this check at line 222 in transformers/models/wav2vec2/modeling_wav2vec2.py:

```python
if input_length - (mask_length - 1) < num_masked_span:
    num_masked_span = input_length - (mask_length - 1)
```

Interestingly, these are the variable values before the adjustment:

```
input_length=10
mask_length=10
num_masked_span=2
```

After adjusting num_masked_span to 1, the training script runs. The issue is also fixed by setting “replace=True” in the same function.

Do you have any idea what is causing this, and how to fix this error permanently? If you do not think this is a Datasets issue, feel free to move it.
CLOSED
2022-01-27T12:09:05
2022-01-27T12:29:48
2022-01-27T12:29:48
https://github.com/huggingface/datasets/issues/3640
peregilk
1
[ "bug" ]
3,639
same value of precision, recall, f1 score at each epoch for classification task.
**1st Epoch:**

1/27/2022 09:30:48 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/f1/default/default_experiment-1-0.arrow.59it/s]
01/27/2022 09:30:48 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/precision/default/default_experiment-1-0.arrow
01/27/2022 09:30:49 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/recall/default/default_experiment-1-0.arrow

PRECISION: {'precision': 0.7612903225806451}
RECALL: {'recall': 0.7612903225806451}
F1: {'f1': 0.7612903225806451}

{'eval_loss': 1.4658324718475342, 'eval_accuracy': 0.7612903118133545, 'eval_runtime': 30.0054, 'eval_samples_per_second': 46.492, 'eval_steps_per_second': 46.492, 'epoch': 3.0}

**4th Epoch:**

1/27/2022 09:56:55 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/f1/default/default_experiment-1-0.arrow.92it/s]
01/27/2022 09:56:56 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/precision/default/default_experiment-1-0.arrow
01/27/2022 09:56:56 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/recall/default/default_experiment-1-0.arrow

PRECISION: {'precision': 0.7698924731182796}
RECALL: {'recall': 0.7698924731182796}
F1: {'f1': 0.7698924731182796}

## Environment info

!git clone https://github.com/huggingface/transformers
%cd transformers
!pip install .
!pip install -r /content/transformers/examples/pytorch/token-classification/requirements.txt
!pip install datasets
CLOSED
2022-01-27T10:14:16
2022-02-24T09:02:18
2022-02-24T09:02:17
https://github.com/huggingface/datasets/issues/3639
Dhanachandra
1
[ "bug" ]
3,638
AutoTokenizer hash value changes after datasets.map
## Describe the bug

The AutoTokenizer hash value changes after `datasets.map`.

## Steps to reproduce the bug

1. Delete the Hugging Face datasets cache
2. Run the following code:

```python
from transformers import AutoTokenizer, BertTokenizer
from datasets import load_dataset
from datasets.fingerprint import Hasher

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')

def tokenize_function(example):
    return tokenizer(example["sentence1"], example["sentence2"], truncation=True)

raw_datasets = load_dataset("glue", "mrpc")
print(Hasher.hash(tokenize_function))
print(Hasher.hash(tokenizer))

tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
print(Hasher.hash(tokenize_function))
print(Hasher.hash(tokenizer))
```

which prints

```
Reusing dataset glue (/home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)
100%|██████████████████████████████████████████| 3/3 [00:00<00:00, 1112.35it/s]
f4976bb4694ebc51
3fca35a1fd4a1251
100%|██████████████████████████████████████████| 4/4 [00:00<00:00, 6.96ba/s]
100%|██████████████████████████████████████████| 1/1 [00:00<00:00, 15.25ba/s]
100%|██████████████████████████████████████████| 2/2 [00:00<00:00, 5.81ba/s]
d32837619b7d7d01
5fd925c82edd62b6
```

3. Run `raw_datasets.map(tokenize_function, batched=True)` again and see that some datasets are not using the cache.

## Expected results

`AutoTokenizer` should work like a specific tokenizer class (the hash value doesn't change after `map`):

```python
from transformers import AutoTokenizer, BertTokenizer
from datasets import load_dataset
from datasets.fingerprint import Hasher

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

def tokenize_function(example):
    return tokenizer(example["sentence1"], example["sentence2"], truncation=True)

raw_datasets = load_dataset("glue", "mrpc")
print(Hasher.hash(tokenize_function))
print(Hasher.hash(tokenizer))

tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
print(Hasher.hash(tokenize_function))
print(Hasher.hash(tokenizer))
```

```
Reusing dataset glue (/home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)
100%|██████████████████████████████████████████| 3/3 [00:00<00:00, 1091.22it/s]
46d4b31f54153fc7
5b8771afd8d43888
Loading cached processed dataset at /home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-6b07ff82ae9d5c51.arrow
Loading cached processed dataset at /home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-af738a6d84f3864b.arrow
Loading cached processed dataset at /home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-531d2a603ba713c1.arrow
46d4b31f54153fc7
5b8771afd8d43888
```

## Environment info

- `datasets` version: 1.18.0
- Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.27
- Python version: 3.9.7
- PyArrow version: 6.0.1
OPEN
2022-01-27T03:19:03
2024-03-11T13:56:15
null
https://github.com/huggingface/datasets/issues/3638
tshu-w
12
[ "bug" ]