| number (int64, 2–7.91k) | title (string, 1–290 chars) | body (string, 0–228k chars) | state (2 classes) | created_at (timestamp[s], 2020-04-14 18:18:51 – 2025-12-16 10:45:02) | updated_at (timestamp[s], 2020-04-29 09:23:05 – 2025-12-16 19:34:46) | closed_at (timestamp[s], 2020-04-29 09:23:05 – 2025-12-16 14:20:48, nullable) | url (string, 48–51 chars) | author (string, 3–26 chars, nullable) | comments_count (int64, 0–70) | labels (list, 0–4 items) |
|---|---|---|---|---|---|---|---|---|---|---|
864
|
Unable to download cnn_dailymail dataset
|
### Script to reproduce the error
```
from datasets import load_dataset
train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%')
valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]")
```
### Error
```
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-8-47c39c228935> in <module>()
1 from datasets import load_dataset
2
----> 3 train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%')
4 valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]")
5 frames
/usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
609 download_config=download_config,
610 download_mode=download_mode,
--> 611 ignore_verifications=ignore_verifications,
612 )
613
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
469 if not downloaded_from_gcs:
470 self._download_and_prepare(
--> 471 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
472 )
473 # Sync info
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
524 split_dict = SplitDict(dataset_name=self.name)
525 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 526 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
527
528 # Checksums verification
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager)
252 def _split_generators(self, dl_manager):
253 dl_paths = dl_manager.download_and_extract(_DL_URLS)
--> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN)
255 # Generate shared vocabulary
256
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split)
153 else:
154 logging.fatal("Unsupported split: %s", split)
--> 155 cnn = _find_files(dl_paths, "cnn", urls)
156 dm = _find_files(dl_paths, "dm", urls)
157 return cnn + dm
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)
132 else:
133 logging.fatal("Unsupported publisher: %s", publisher)
--> 134 files = sorted(os.listdir(top_dir))
135
136 ret_files = []
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
```
Thanks for any suggestions.
|
CLOSED
| 2020-11-18T04:38:02
| 2020-11-20T05:22:11
| 2020-11-20T05:22:10
|
https://github.com/huggingface/datasets/issues/864
|
rohitashwa1907
| 6
|
[
"dataset bug"
] |
861
|
Possible Bug: Small training/dataset file creates gigantic output
|
Hey guys,
I was trying to create a new BERT model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file, but I can't even finish the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB during processing. My system ran out of space and crashed prematurely.
I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug?
I've used the following CMD:
`python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug`
|
CLOSED
| 2020-11-17T13:48:59
| 2021-03-30T14:04:04
| 2021-03-22T12:04:55
|
https://github.com/huggingface/datasets/issues/861
|
NebelAI
| 7
|
[
"enhancement",
"question"
] |
860
|
wmt16 cs-en does not download
|
Hi,
I am trying the wmt16 cs-en pair; this is perhaps similar to the ro-en issue. Thanks for the help.
split="train", n_obs=data_args.n_train) for task in data_args.task}
File "finetune_t5_trainer.py", line 109, in <dictcomp>
split="train", n_obs=data_args.n_train) for task in data_args.task}
File "/home/rabeeh/internship/seq2seq/tasks/tasks.py", line 82, in get_dataset
dataset = load_dataset("wmt16", self.pair, split=split)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/rabeeh/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators
downloaded_files = dl_manager.download_and_extract(urls_to_download)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download
num_proc=download_config.num_proc,
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested
return function(data_struct)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach http://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz
|
CLOSED
| 2020-11-17T13:45:35
| 2022-10-05T12:27:00
| 2022-10-05T12:26:59
|
https://github.com/huggingface/datasets/issues/860
|
rabeehk
| 1
|
[
"dataset bug"
] |
854
|
wmt16 does not download
|
Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators
downloaded_files = dl_manager.download_and_extract(urls_to_download)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download
num_proc=download_config.num_proc,
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested
return function(data_struct)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz
|
CLOSED
| 2020-11-16T09:31:51
| 2022-10-05T12:27:42
| 2022-10-05T12:27:42
|
https://github.com/huggingface/datasets/issues/854
|
rabeehk
| 12
|
[
"dataset bug"
] |
853
|
concatenate_datasets support axis=0 or 1 ?
|
I want to achieve the following result

|
CLOSED
| 2020-11-16T02:46:23
| 2021-04-19T16:07:18
| 2021-04-19T16:07:18
|
https://github.com/huggingface/datasets/issues/853
|
renqingcolin
| 10
|
[
"enhancement",
"help wanted",
"question"
] |
852
|
wmt cannot be downloaded
|
Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators
downloaded_files = dl_manager.download_and_extract(urls_to_download)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download
num_proc=download_config.num_proc,
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested
return function(data_struct)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz
|
CLOSED
| 2020-11-16T01:04:41
| 2020-11-16T09:31:58
| 2020-11-16T09:31:58
|
https://github.com/huggingface/datasets/issues/852
|
rabeehk
| 0
|
[
"dataset request"
] |
849
|
Load amazon dataset
|
Hi,
I was going through the amazon_us_reviews dataset and found that the example API usage given on the website is different from the API usage needed when actually loading the dataset.
E.g., the API usage shown on the [website](https://huggingface.co/datasets/amazon_us_reviews):
```
from datasets import load_dataset
dataset = load_dataset("amazon_us_reviews")
```
How it worked when I tried it (the generated error does point me in the right direction, though):
```
from datasets import load_dataset
dataset = load_dataset("amazon_us_reviews", 'Books_v1_00')
```
Also, there is an issue with formatting, as the bullet list in the description is not shown with new lines. Can I work on it?
|
CLOSED
| 2020-11-13T08:34:24
| 2020-11-17T07:22:59
| 2020-11-17T07:22:59
|
https://github.com/huggingface/datasets/issues/849
|
bhavitvyamalik
| 1
|
[] |
848
|
Error when concatenate_datasets
|
Hello, when I concatenate two datasets loaded from disk, I encounter a problem:
```
test_dataset = load_from_disk('data/test_dataset')
trn_dataset = load_from_disk('data/train_dataset')
train_dataset = concatenate_datasets([trn_dataset, test_dataset])
```
And it reported the ValueError below:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-38-74fa525512ca> in <module>
----> 1 train_dataset = concatenate_datasets([trn_dataset, test_dataset])
/opt/miniconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py in concatenate_datasets(dsets, info, split)
2547 "However datasets' indices {} come from memory and datasets' indices {} come from disk.".format(
2548 [i for i in range(len(dsets)) if indices_mappings_in_memory[i]],
-> 2549 [i for i in range(len(dsets)) if not indices_mappings_in_memory[i]],
2550 )
2551 )
ValueError: Datasets' indices should ALL come from memory, or should ALL come from disk.
However datasets' indices [1] come from memory and datasets' indices [0] come from disk.
```
But it's curious, because both of my datasets were loaded from disk, so I checked the source code in `arrow_dataset.py` around the error:
```
trn_dataset._data_files
# output
[{'filename': 'data/train_dataset/csv-train.arrow', 'skip': 0, 'take': 593264}]
test_dataset._data_files
# output
[{'filename': 'data/test_dataset/csv-test.arrow', 'skip': 0, 'take': 424383}]
print([not dset._data_files for dset in [trn_dataset, test_dataset]])
# [False, False]
# And I tested the same code as in arrow_dataset.py, but no error was raised
dsets = [trn_dataset, test_dataset]
dsets_in_memory = [not dset._data_files for dset in dsets]
if any(dset_in_memory != dsets_in_memory[0] for dset_in_memory in dsets_in_memory):
raise ValueError(
"Datasets should ALL come from memory, or should ALL come from disk.\n"
"However datasets {} come from memory and datasets {} come from disk.".format(
[i for i in range(len(dsets)) if dsets_in_memory[i]],
[i for i in range(len(dsets)) if not dsets_in_memory[i]],
)
)
```
Any suggestions would be greatly appreciated!
Thanks!
|
CLOSED
| 2020-11-13T07:56:02
| 2020-11-13T17:40:59
| 2020-11-13T15:55:10
|
https://github.com/huggingface/datasets/issues/848
|
shexuan
| 4
|
[] |
847
|
multiprocessing in dataset map "can only test a child process"
|
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook.
```
def tokenizer_fn(example):
return tokenizer.batch_encode_plus(example['text'])
ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text'])
```
```
---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/multiprocess/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 156, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/fingerprint.py", line 163, in wrapper
out = func(self, *args, **kwargs)
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1510, in _map_single
for i in pbar:
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 228, in __iter__
for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1186, in __iter__
self.close()
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 251, in close
super(tqdm_notebook, self).close(*args, **kwargs)
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1291, in close
fp_write('')
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1288, in fp_write
self.fp.write(_unicode(s))
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py", line 91, in new_write
cb(name, data)
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/wandb_run.py", line 598, in _console_callback
self._backend.interface.publish_output(name, data)
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 146, in publish_output
self._publish_output(o)
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 151, in _publish_output
self._publish(rec)
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 431, in _publish
if self._process and not self._process.is_alive():
File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive
assert self._parent_pid == os.getpid(), 'can only test a child process'
AssertionError: can only test a child process
"""
```
|
CLOSED
| 2020-11-13T06:01:04
| 2022-10-05T12:22:51
| 2022-10-05T12:22:51
|
https://github.com/huggingface/datasets/issues/847
|
timothyjlaurent
| 8
|
[] |
846
|
Add HoVer multi-hop fact verification dataset
|
## Adding a Dataset
- **Name:** HoVer
- **Description:** https://twitter.com/YichenJiang9/status/1326954363806429186 contains 20K claim verification examples
- **Paper:** https://arxiv.org/abs/2011.03088
- **Data:** https://hover-nlp.github.io/
- **Motivation:** There are still few multi-hop information extraction benchmarks (HotpotQA, which this dataset was based on, notwithstanding)
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
|
CLOSED
| 2020-11-12T19:55:46
| 2020-12-10T21:47:33
| 2020-12-10T21:47:33
|
https://github.com/huggingface/datasets/issues/846
|
yjernite
| 3
|
[
"dataset request"
] |
843
|
use_custom_baseline still produces errors for bertscore
|
```
metric = load_metric('bertscore')
a1 = "random sentences"
b1 = "random sentences"
metric.compute(predictions = [a1], references = [b1], lang = 'en')
```
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute
output = self._compute(predictions=predictions, references=references, **kwargs)
File "/home/stephen_chan/.cache/huggingface/modules/datasets_modules/metrics/bertscore/361e597a01a41d6cf95d94bbfb01dea16261687abc0c6c74cc9930f80488f363/bertscore.py", line 108, in _compute
hashcode = bert_score.utils.get_hash(model_type, num_layers, idf, rescale_with_baseline)
TypeError: get_hash() missing 1 required positional argument: 'use_custom_baseline'
```
Adding 'use_custom_baseline = False' as an argument produces this error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute
output = self._compute(predictions=predictions, references=references, **kwargs)
TypeError: _compute() got an unexpected keyword argument 'use_custom_baseline'
```
This is on Ubuntu 18.04, Python 3.6.9, datasets version 1.1.2
|
CLOSED
| 2020-11-12T11:44:32
| 2024-05-28T16:30:17
| 2021-02-09T14:21:48
|
https://github.com/huggingface/datasets/issues/843
|
penatbater
| 5
|
[
"metric bug"
] |
842
|
How to enable `.map()` pre-processing pipelines to support multi-node parallelism?
|
Hi,
Currently, multiprocessing can be enabled for the `.map()` stages on a single node. However, in the case of multi-node training (since more than one node would be available), I'm wondering if it's possible to extend the parallel processing across nodes, instead of only one node running `.map()` while the other nodes wait for it to finish?
Thanks!
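As a rough sketch (not a built-in feature at the time of this issue), one common workaround is to have each node preprocess only its own shard with `Dataset.shard`; the rank/world-size plumbing below is assumed to come from your launcher's environment variables:
```python
import os
from datasets import load_dataset

# RANK / WORLD_SIZE are assumed to be set by the launcher (e.g. torchrun-style env vars).
rank = int(os.environ.get("RANK", 0))
world_size = int(os.environ.get("WORLD_SIZE", 1))

dataset = load_dataset("imdb", split="train")

# Each node maps only its own contiguous shard of the data.
shard = dataset.shard(num_shards=world_size, index=rank, contiguous=True)
shard = shard.map(lambda batch: {"n_chars": [len(t) for t in batch["text"]]},
                  batched=True, num_proc=4)

# Afterwards the per-node shards can be saved and concatenated on one node,
# e.g. shard.save_to_disk(f"processed_shard_{rank}").
```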
|
OPEN
| 2020-11-12T02:04:38
| 2025-03-26T09:10:22
| null |
https://github.com/huggingface/datasets/issues/842
|
shangw-nvidia
| 5
|
[] |
841
|
Can not reuse datasets already downloaded
|
Hello,
I need to connect to a front-end node (with an HTTP proxy, no GPU) before connecting to a GPU node (which has no HTTP proxy, so I cannot use wget and the like).
I successfully downloaded and reused the wikipedia dataset on the front-end node.
When I connect to the GPU node, I am supposed to reuse the downloaded dataset from the cache, but it fails and ends with a timeout error.
On frontal node:
```
>>> from datasets import load_dataset
>>> dataset = load_dataset('wikipedia', '20200501.en')
Reusing dataset wikipedia (/linkhome/rech/genini01/uua34ms/.cache/huggingface/datasets/wikipedia/20200501.en/1.0.0/f92599dfccab29832c442b82870fa8f6983e5b4ebbf5e6e2dcbe894e325339cd)
/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
```
On gpu node:
```
>>> from datasets import load_dataset
>>> dataset = load_dataset('wikipedia', '20200501.en')
Traceback (most recent call last):
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connection.py", line 160, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/util/connection.py", line 84, in create_connection
raise err
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/util/connection.py", line 74, in create_connection
sock.connect(sa)
TimeoutError: [Errno 110] Connection timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 677, in urlopen
chunked=chunked,
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 381, in _make_request
self._validate_conn(conn)
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 978, in _validate_conn
conn.connect()
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connection.py", line 309, in connect
conn = self._new_conn()
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connection.py", line 172, in _new_conn
self, "Failed to establish a new connection: %s" % e
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x14b7b73e4908>: Failed to establish a new connection: [Errno 110] Connection timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 727, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/util/retry.py", line 446, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x14b7b73e4908>: Failed to establish a new connection: [Errno 110] Connection timed out',))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/datasets/load.py", line 590, in load_dataset
path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/datasets/load.py", line 264, in prepare_module
head_hf_s3(path, filename=name, dataset=dataset)
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 200, in head_hf_s3
return requests.head(hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset))
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/api.py", line 104, in head
return request('head', url, **kwargs)
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x14b7b73e4908>: Failed to establish a new connection: [Errno 110] Connection timed out',))
```
Any advice? Thanks!
|
CLOSED
| 2020-11-11T12:42:15
| 2020-11-11T18:17:16
| 2020-11-11T18:17:16
|
https://github.com/huggingface/datasets/issues/841
|
jc-hou
| 2
|
[] |
839
|
XSum dataset missing spaces between sentences
|
I noticed that the XSum dataset has no space between sentences. This could lead to worse results for anyone training or testing on it. Here's an example (0th entry in the test set):
`The London trio are up for best UK act and best album, as well as getting two nominations in the best song category."We got told like this morning 'Oh I think you're nominated'", said Dappy."And I was like 'Oh yeah, which one?' And now we've got nominated for four awards. I mean, wow!"Bandmate Fazer added: "We thought it's best of us to come down and mingle with everyone and say hello to the cameras. And now we find we've got four nominations."The band have two shots at the best song prize, getting the nod for their Tynchy Stryder collaboration Number One, and single Strong Again.Their album Uncle B will also go up against records by the likes of Beyonce and Kanye West.N-Dubz picked up the best newcomer Mobo in 2007, but female member Tulisa said they wouldn't be too disappointed if they didn't win this time around."At the end of the day we're grateful to be where we are in our careers."If it don't happen then it don't happen - live to fight another day and keep on making albums and hits for the fans."Dappy also revealed they could be performing live several times on the night.The group will be doing Number One and also a possible rendition of the War Child single, I Got Soul.The charity song is a re-working of The Killers' All These Things That I've Done and is set to feature artists like Chipmunk, Ironik and Pixie Lott.This year's Mobos will be held outside of London for the first time, in Glasgow on 30 September.N-Dubz said they were looking forward to performing for their Scottish fans and boasted about their recent shows north of the border."We just done Edinburgh the other day," said Dappy."We smashed up an N-Dubz show over there. We done Aberdeen about three or four months ago - we smashed up that show over there! Everywhere we go we smash it up!"`
|
OPEN
| 2020-11-11T00:34:43
| 2020-11-11T00:34:43
| null |
https://github.com/huggingface/datasets/issues/839
|
loganlebanoff
| 0
|
[] |
836
|
load_dataset with 'csv' is not working, while the same file loads with 'text' mode or with pandas
|
Hi all,
I am trying to load a custom dataset, starting with a single file to make sure the file loads correctly:
`dataset = load_dataset('csv', data_files=files)`
When I run it I get:
Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to cache/huggingface/datasets/csv/default-35575a1051604c88/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4...
and then this error:
6a4ac4/csv.py in _generate_tables(self, files)
78 def _generate_tables(self, files):
79 for i, file in enumerate(files):
---> 80 pa_table = pac.read_csv(
81 file,
82 read_options=self.config.pa_read_options,
~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv()
~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
**ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)**
The size of the file is 3.5 GB. When I try smaller files I do not have an issue. When I load it with the 'text' parser I can see all the data, but that is not what I need.
There is no issue reading the file with pandas. Any idea what could be the problem?
When I run a different CSV I do not get this line:
(download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size)
Any ideas?
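A minimal sketch of a workaround (my suggestion, not an official fix): read the CSV with pandas in chunks and build the dataset from the resulting DataFrames, bypassing the pyarrow CSV reader entirely; `"my_data.csv"` is a placeholder path.
```python
import pandas as pd
from datasets import Dataset, concatenate_datasets

# "my_data.csv" is a placeholder for the actual 3.5 GB file.
parts = [
    Dataset.from_pandas(chunk, preserve_index=False)
    for chunk in pd.read_csv("my_data.csv", chunksize=1_000_000)
]
dataset = concatenate_datasets(parts)
print(dataset)
```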
|
CLOSED
| 2020-11-10T19:35:40
| 2021-11-24T16:59:19
| 2020-11-19T17:35:38
|
https://github.com/huggingface/datasets/issues/836
|
randubin
| 8
|
[
"dataset bug"
] |
835
|
Wikipedia postprocessing
|
Hi, thanks for this library!
Running this code:
```py
import datasets
wikipedia = datasets.load_dataset("wikipedia", "20200501.de")
print(wikipedia['train']['text'][0])
```
I get:
```
mini|Ricardo Flores Magón
mini|Mexikanische Revolutionäre, Magón in der Mitte anführend, gegen die Diktatur von Porfirio Diaz, Ausschnitt des Gemälde „Tierra y Libertad“ von Idelfonso Carrara (?) von 1930.
Ricardo Flores Magón (* 16. September 1874 in San Antonio Eloxochitlán im mexikanischen Bundesstaat Oaxaca; † 22. November 1922 im Bundesgefängnis Leavenworth im US-amerikanischen Bundesstaat Kansas) war als Journalist, Gewerkschafter und Literat ein führender anarchistischer Theoretiker und Aktivist, der die revolutionäre mexikanische Bewegung radikal beeinflusste. Magón war Gründer der Partido Liberal Mexicano und Mitglied der Industrial Workers of the World.
Politische Biografie
Journalistisch und politisch kämpfte er und sein Bruder sehr kompromisslos gegen die Diktatur Porfirio Diaz. Philosophisch und politisch orientiert an radikal anarchistischen Idealen und den Erfahrungen seiner indigenen Vorfahren bei der gemeinschaftlichen Bewirtschaftung des Gemeindelandes, machte er die Forderung „Land und Freiheit“ (Tierra y Libertad) populär. Besonders Francisco Villa und Emiliano Zapata griffen die Forderung Land und Freiheit auf. Seine Philosophie hatte großen Einfluss auf die Landarbeiter. 1904 floh er in die USA und gründete 1906 die Partido Liberal Mexicano. Im Exil lernte er u. a. Emma Goldman kennen. Er verbrachte die meiste Zeit seines Lebens in Gefängnissen und im Exil und wurde 1918 in den USA wegen „Behinderung der Kriegsanstrengungen“ zu zwanzig Jahren Gefängnis verurteilt. Zu seinem Tod gibt es drei verschiedene Theorien. Offiziell starb er an Herzversagen. Librado Rivera, der die Leiche mit eigenen Augen gesehen hat, geht davon aus, dass Magón von einem Mitgefangenen erdrosselt wurde. Die staatstreue Gewerkschaftszeitung CROM veröffentlichte 1923 einen Beitrag, nachdem Magón von einem Gefängniswärter erschlagen wurde.
mini|Die Brüder Ricardo (links) und Enrique Flores Magón (rechts) vor dem Los Angeles County Jail, 1917
[...]
```
so some markup like `mini|` is still left. Should I run another parser on this text before feeding it to an ML model, or is this a known imperfection of parsing Wiki markup?
Apologies if this has been asked before.
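A minimal post-processing sketch (my own assumption about a reasonable cleanup, not an official step) that strips leftover thumbnail lines such as `mini|...` before feeding the text to a model:
```python
import re

def strip_thumbnail_lines(text: str) -> str:
    # Drop lines that still start with wiki thumbnail markup like "mini|...", "miniatur|..." or "thumb|..."
    kept = [line for line in text.split("\n")
            if not re.match(r"^\s*(mini|miniatur|thumb)\|", line)]
    return "\n".join(kept)

sample = "mini|Ricardo Flores Magón\nRicardo Flores Magón (* 16. September 1874 ...)"
print(strip_thumbnail_lines(sample))  # only the prose line survives
```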
|
CLOSED
| 2020-11-10T17:26:38
| 2020-11-10T18:23:20
| 2020-11-10T17:49:21
|
https://github.com/huggingface/datasets/issues/835
|
bminixhofer
| 3
|
[] |
834
|
[GEM] add WikiLingua cross-lingual abstractive summarization dataset
|
## Adding a Dataset
- **Name:** WikiLingua
- **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images that are used to describe each how-to step in an article.
- **Paper:** https://arxiv.org/pdf/2010.03093.pdf
- **Data:** https://github.com/esdurmus/Wikilingua
- **Motivation:** Included in the GEM shared task. Multilingual.
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
|
CLOSED
| 2020-11-10T17:00:43
| 2021-04-15T12:04:09
| 2021-04-15T12:01:38
|
https://github.com/huggingface/datasets/issues/834
|
yjernite
| 2
|
[
"dataset request"
] |
833
|
[GEM] add ASSET text simplification dataset
|
## Adding a Dataset
- **Name:** ASSET
- **Description:** ASSET is a crowdsourced multi-reference corpus for assessing sentence simplification in English where each simplification was produced by executing several rewriting transformations.
- **Paper:** https://www.aclweb.org/anthology/2020.acl-main.424.pdf
- **Data:** https://github.com/facebookresearch/asset
- **Motivation:** Included in the GEM shared task
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
|
CLOSED
| 2020-11-10T16:56:30
| 2020-12-03T13:38:15
| 2020-12-03T13:38:15
|
https://github.com/huggingface/datasets/issues/833
|
yjernite
| 0
|
[
"dataset request"
] |
832
|
[GEM] add WikiAuto text simplification dataset
|
## Adding a Dataset
- **Name:** WikiAuto
- **Description:** Sentences in English Wikipedia and their corresponding sentences in Simple English Wikipedia that are written with simpler grammar and word choices. A lot of lexical and syntactic paraphrasing.
- **Paper:** https://www.aclweb.org/anthology/2020.acl-main.709.pdf
- **Data:** https://github.com/chaojiang06/wiki-auto
- **Motivation:** Included in the GEM shared task
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
|
CLOSED
| 2020-11-10T16:53:23
| 2020-12-03T13:38:08
| 2020-12-03T13:38:08
|
https://github.com/huggingface/datasets/issues/832
|
yjernite
| 0
|
[
"dataset request"
] |
831
|
[GEM] Add WebNLG dataset
|
## Adding a Dataset
- **Name:** WebNLG
- **Description:** WebNLG consists of Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation of these triples (16,095 data inputs and 42,873 data-text pairs). The data is available in English and Russian
- **Paper:** https://www.aclweb.org/anthology/P17-1017.pdf
- **Data:** https://webnlg-challenge.loria.fr/download/
- **Motivation:** Included in the GEM shared task, multilingual
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
|
CLOSED
| 2020-11-10T16:46:48
| 2020-12-03T13:38:01
| 2020-12-03T13:38:01
|
https://github.com/huggingface/datasets/issues/831
|
yjernite
| 0
|
[
"dataset request"
] |
830
|
[GEM] add ToTTo Table-to-text dataset
|
## Adding a Dataset
- **Name:** ToTTo
- **Description:** ToTTo is an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description.
- **Paper:** https://arxiv.org/abs/2004.14373
- **Data:** https://github.com/google-research-datasets/totto
- **Motivation:** Included in the GEM shared task
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
|
CLOSED
| 2020-11-10T16:38:34
| 2020-12-10T13:06:02
| 2020-12-10T13:06:01
|
https://github.com/huggingface/datasets/issues/830
|
yjernite
| 1
|
[
"dataset request"
] |
829
|
[GEM] add Schema-Guided Dialogue
|
## Adding a Dataset
- **Name:** The Schema-Guided Dialogue Dataset
- **Description:** The Schema-Guided Dialogue (SGD) dataset consists of over 20k annotated multi-domain, task-oriented conversations between a human and a virtual assistant. These conversations involve interactions with services and APIs spanning 20 domains, ranging from banks and events to media, calendar, travel, and weather.
- **Paper:** https://arxiv.org/pdf/2002.01359.pdf https://arxiv.org/pdf/2004.15006.pdf
- **Data:** https://github.com/google-research-datasets/dstc8-schema-guided-dialogue
- **Motivation:** Included in the GEM shared task
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
|
CLOSED
| 2020-11-10T16:33:44
| 2020-12-03T13:37:50
| 2020-12-03T13:37:50
|
https://github.com/huggingface/datasets/issues/829
|
yjernite
| 0
|
[
"dataset request"
] |
827
|
[GEM] MultiWOZ dialogue dataset
|
## Adding a Dataset
- **Name:** MultiWOZ (Multi-Domain Wizard-of-Oz)
- **Description:** 10k annotated human-human dialogues. Each dialogue consists of a goal, multiple user and system utterances as well as a belief state. Only system utterances are annotated with dialogue acts – there are no annotations from the user side.
- **Paper:** https://arxiv.org/pdf/2007.12720.pdf
- **Data:** https://github.com/budzianowski/multiwoz
- **Motivation:** Will likely be part of the GEM shared task
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
|
CLOSED
| 2020-11-10T14:57:50
| 2022-10-05T12:31:13
| 2022-10-05T12:31:13
|
https://github.com/huggingface/datasets/issues/827
|
yjernite
| 2
|
[
"dataset request"
] |
826
|
[GEM] Add E2E dataset
|
## Adding a Dataset
- **Name:** E2E NLG dataset (for End-to-end natural language generation)
- **Description:** a dataset for training end-to-end, data-driven natural language generation systems in the restaurant domain; the dataset consists of 5,751 dialogue-act Meaning Representations (structured data) and 8.1 reference free-text utterances per dialogue-act on average
- **Paper:** https://arxiv.org/pdf/1706.09254.pdf https://arxiv.org/abs/1901.07931
- **Data:** http://www.macs.hw.ac.uk/InteractionLab/E2E/#data
- **Motivation:** This dataset will likely be included in the GEM shared task
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
|
CLOSED
| 2020-11-10T14:50:40
| 2020-12-03T13:37:57
| 2020-12-03T13:37:57
|
https://github.com/huggingface/datasets/issues/826
|
yjernite
| 0
|
[
"dataset request"
] |
824
|
Discussion using datasets in offline mode
|
`datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I create this ticket to discuss a bit and gather what you have in mind or other propositions.
Here are some points to open discussion:
- If you want to prepare your code/datasets on your machine (with an internet connection) but run it on another, offline machine (without an internet connection), it won't work as is, even if you have all files locally on that machine.
- AFAIK, you can make it work if you manually put the python files (csv.py for example) on this offline machine and change your code to `datasets.load_dataset("MY_PATH/csv.py", ...)`. But it would be much better if you could run the same code without modification when the files are available locally (see the sketch below).
- I've also been considering the requirement of downloading Python code and executing it on your machine to use datasets. This can be an issue in a professional context. Downloading a CSV/H5 file is acceptable; downloading an executable script can open many security issues. We certainly need a mechanism to at least "freeze" the dataset code you retrieved once, so that you can review it if you want and then be sure you use this one everywhere and not a version downloaded from the internet.
WDYT? (thanks)
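For reference, a sketch of the two workarounds discussed above; the `HF_DATASETS_OFFLINE` environment variable only exists in later `datasets` releases, so treat that part as an assumption for the version current at the time of this issue.
```python
import os

# Newer `datasets` releases read this flag and skip all network calls
# (set it before importing the library, or export it in the shell).
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

# Alternatively, point load_dataset at a frozen local copy of the loading script;
# "MY_PATH/csv.py" and "my_file.csv" are placeholders.
dataset = load_dataset("MY_PATH/csv.py", data_files="my_file.csv")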
|
CLOSED
| 2020-11-10T13:10:51
| 2023-10-26T09:26:26
| 2022-02-15T10:32:36
|
https://github.com/huggingface/datasets/issues/824
|
mandubian
| 11
|
[
"enhancement",
"generic discussion"
] |
823
|
how processing in batch works in datasets
|
Hi,
I need to process my datasets in batches before they are passed to the dataloader. Here is my code:
```
class AbstractTask(ABC):
    task_name: str = NotImplemented
    preprocessor: Callable = NotImplemented
    split_to_data_split: Mapping[str, str] = NotImplemented
    tokenizer: Callable = NotImplemented
    max_source_length: str = NotImplemented
    max_target_length: str = NotImplemented
    # TODO: should not be a task item, but cannot see other ways.
    tpu_num_cores: int = None

    # The arguments set are for all tasks and needs to be kept common.
    def __init__(self, config):
        self.max_source_length = config['max_source_length']
        self.max_target_length = config['max_target_length']
        self.tokenizer = config['tokenizer']
        self.tpu_num_cores = config['tpu_num_cores']

    def _encode(self, batch) -> Dict[str, torch.Tensor]:
        batch_encoding = self.tokenizer.prepare_seq2seq_batch(
            [x["src_texts"] for x in batch],
            tgt_texts=[x["tgt_texts"] for x in batch],
            max_length=self.max_source_length,
            max_target_length=self.max_target_length,
            padding="max_length" if self.tpu_num_cores is not None else "longest",  # TPU hack
            return_tensors="pt"
        )
        return batch_encoding.data

    def data_split(self, split):
        return self.split_to_data_split[split]

    def get_dataset(self, split, n_obs=None):
        split = self.data_split(split)
        if n_obs is not None:
            split = split+"[:{}]".format(n_obs)
        dataset = load_dataset(self.task_name, split=split)
        dataset = dataset.map(self.preprocessor, remove_columns=dataset.column_names)
        dataset = dataset.map(lambda batch: self._encode(batch), batched=True)
        dataset.set_format(type="torch", columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'])
        return dataset
```
I call it like:
`AutoTask.get(task, train_dataset_config).get_dataset(split="train", n_obs=data_args.n_train)`
This gives the following error, I think because the data inside `dataset = dataset.map(lambda batch: self._encode(batch), batched=True)` is not processed as a batch. Could you tell me how I can process the dataset in batches inside my function? Thanks.
File "finetune_multitask_trainer.py", line 192, in main
if training_args.do_train else None
File "finetune_multitask_trainer.py", line 191, in <dictcomp>
split="train", n_obs=data_args.n_train) for task in data_args.task}
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in get_dataset
dataset = dataset.map(lambda batch: self._encode(batch), batched=True)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1236, in map
update_data = does_function_return_dict(test_inputs, test_indices)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1207, in does_function_return_dict
function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in <lambda>
dataset = dataset.map(lambda batch: self._encode(batch), batched=True)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in _encode
[x["src_texts"] for x in batch],
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in <listcomp>
[x["src_texts"] for x in batch],
TypeError: string indices must be integers
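For reference, with `batched=True` the function passed to `map` receives a dict mapping column names to lists of values (not a list of row dicts), so `for x in batch` yields column names (strings), which is exactly what produces the `string indices must be integers` error. A rough sketch of an encode function written for that convention (the model name and lengths are placeholders; column names follow the snippet above):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")  # placeholder model

def encode_batch(batch, max_source_length=128, max_target_length=128):
    # `batch` is a dict of columns, e.g. {"src_texts": [...], "tgt_texts": [...]}
    model_inputs = tokenizer(batch["src_texts"], max_length=max_source_length,
                             truncation=True, padding="max_length")
    labels = tokenizer(batch["tgt_texts"], max_length=max_target_length,
                       truncation=True, padding="max_length")
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

# dataset = dataset.map(encode_batch, batched=True)
```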
|
CLOSED
| 2020-11-10T11:11:17
| 2020-11-10T13:11:10
| 2020-11-10T13:11:09
|
https://github.com/huggingface/datasets/issues/823
|
rabeehkarimimahabadi
| 3
|
[
"dataset request"
] |
822
|
datasets freezes
|
Hi, I want to load these two datasets and convert them to torch format, but the code freezes for me. Could you have a look please? Thanks.
```
dataset1 = load_dataset("squad", split="train[:10]")
dataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question'])
dataset2 = load_dataset("imdb", split="train[:10]")
dataset2 = dataset2.set_format(type="torch", columns=["text", "label"])
print(len(dataset1))
```
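One detail worth noting (my observation, not part of the original report): in the `datasets` versions of that era, `set_format` modifies the dataset in place and returns `None`, so reassigning its result leaves `dataset1`/`dataset2` set to `None`. A minimal sketch without the reassignment:
```python
from datasets import load_dataset

dataset1 = load_dataset("squad", split="train[:10]")
dataset1.set_format(type="torch", columns=["context", "answers", "question"])

dataset2 = load_dataset("imdb", split="train[:10]")
dataset2.set_format(type="torch", columns=["text", "label"])

print(len(dataset1))  # 10
```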
|
CLOSED
| 2020-11-10T05:10:19
| 2023-07-20T16:08:14
| 2023-07-20T16:08:13
|
https://github.com/huggingface/datasets/issues/822
|
rabeehkarimimahabadi
| 2
|
[
"dataset bug"
] |
821
|
`kor_nli` dataset isn't being loaded properly
|
There are two issues with the `kor_nli` dataset:
1. csv.DictReader fails to split features by tab
- There should not be `None` values in the label feature, but there are.
```python
kor_nli_train['train'].unique('gold_label')
# ['neutral', 'entailment', 'contradiction', None]
```
- I found the reason why there are `None` values in the label feature with the following code
```python
from datasets import load_dataset
kor_nli_train = load_dataset('kor_nli', 'multi_nli')
for idx, example in enumerate(kor_nli_train['train']):
if example['gold_label'] is None:
print(idx, example)
break
# 16835 {'gold_label': None, 'sentence1': '그는 전쟁 전에 가벼운 벅스킨 암말을 가지고 달리기 위해 우유처럼 하얀 스터드를 넣었다.\t전쟁 전에 다인종 여성들과 함께 있는 백인 남자가 있었다.\tentailment\n슬림은 재빨리 옷을 입었고, 순간적으로 미지근한 물을 뿌릴 수 있는 아침 세탁물을 기꺼이 가두었다.\t슬림은 직장에 늦었다.\tneutral\n뉴욕에서 그 식사를 해봤는데, 거기서 소고기의 멋진 소고기 부분을 요리하고 바베큐로 만든 널빤지 같은 걸 가져왔는데, 정말 대단해.\t그들이 거기서 요리하는 쇠고기는 역겹다. 거기서 절대 먹지 마라.\tcontradiction\n판매원의 죽음에서 브라이언 데네히... 크리스 켈리\t크리스 켈리는 세일즈맨의 죽음을 언급하지 않는다.\tcontradiction\n그러는 동안 요리사는 그냥 화가 났어.\t스튜가 끓는 동안 요리사는 화가 났다.\tneutral\n마지막 로마의 맹공격 전날 밤, 900명 이상의 유대인 수비수들이 로마인들에게 그들을 사로잡는 승리를 주기 보다는 대량 자살을 저질렀다.\t로마인들이 그들의 포획에 승리하도록 내버려두기 보다는 900명의 유대인 수비수들이 자살했다.\tentailment\n앞으로 발사하라.\t발사.\tneutral\n그리고 당신은 우리 땅이 에이커에 있다는 것을 알고 있다. 우리 사람들은 어떤 것이 얼마나 많은지 이해하지 못할 것이다.\t모든 사람들은 우리의 측정 시스템이 어떻게 작동하는지 알고 이해합니다.\tcontradiction\n주미게스\tJumiyges는 도시의 이름이다.\tneutral\n사람은 자기 민족을 돌봐야 한다...\t사람은 조국에 공감해야 한다.\tentailment\n또한 PDD 63은 정부와 업계가 컴퓨터 기반 공격에 대해 경고하고 방어할 준비를 더 잘할 수 있도록 시스템 취약성, 위협, 침입 및 이상에 대한 정보를 공유하는 메커니즘을 수립하는 것이 중요하다는 것을 인식했습니다.\t정보 전송 프로토콜을 만드는 것은 중요하다.\tentailment\n카페 링 피아자 델라 레퓌블리카 바로 남쪽에는 피렌체가 알려진 짚 제품 때문에 한때 스트로 마켓이라고 불렸던 16세기 로지아인 메르카토 누오보(Mercato Nuovo)가 있다.\t피아자 델라 레퓌블리카에는 카페가 많이 있다.\tentailment\n우리가 여기 있는 한 트린판이 뭘 주웠는지 살펴봐야겠어\t우리는 트린판이 무엇을 주웠는지 보는 데 시간을 낭비하지 않을 것이다.\tcontradiction\n그러나 켈트족의 문화적 기반을 가진 아일랜드 교회는 유럽의 신흥 기독교 세계와는 다르게 발전했고 결국 로마와 중앙집권적 행정으로 대체되었다.\t아일랜드 교회에는 켈트족의 기지가 있었다.\tentailment\n글쎄, 넌 선택의 여지가 없어\t글쎄, 너에겐 많은 선택권이 있어.\tcontradiction\n사실, 공식적인 보장은 없다.\t내가 산 물건에 대한 보증이 없었다.\tneutral\n덜 활기차긴 하지만, 안시와 르 부르젯의 사랑스러운 호수에서도 삶은 똑같이 상쾌하다.\t안시와 르 부르겟에서는 호수에서의 활동이 서두르고 바쁜 분위기를 연출한다.\tcontradiction\n그의 여행 소식이 이미 퍼졌다면 공격 소식도 퍼졌을 테지만 마을에서는 전혀 공황의 기미가 보이지 않았다.\t그는 왜 마을이 당황하지 않았는지 알 수 없었다.\tneutral\n과거에는 죽음의 위협이 토지의 판매를 막는 데 거의 도움이 되지 않았다.\t토지 판매는 어떠한 위협도 교환하지 않고 이루어진다.\tcontradiction\n어느 시점에 이르러 나는 지금 다가오는 새로운 것들과 나오는 많은 새로운 것들이 내가 늙어가고 있다고 말하는 시대로 접어들고 있다.\t나는 여전히 내가 보는 모든 새로운 것을 사랑한다.\tcontradiction\n뉴스위크는 물리학자들이 경기장 행사에서 고속도로의 자동차 교통과 보행자 교통을 개선하기 위해 새떼의 움직임을 연구하고 있다고 말한다.\t고속도로의 자동차 교통 흐름을 개선하는 것은 물리학자들이 새떼를 연구하는 이유 중 하나이다.\tentailment\n얼마나 다른가? 그는 잠시 말을 멈추었다가 말을 이었다.\t그는 그 소녀가 어디에 있는지 알고 싶었다.\tentailment\n글쎄, 그에게 너무 많은 것을 주지마.\t그는 훨씬 더 많은 것을 요구할 것이다.\tneutral\n아무리 그의 창작물이 완벽해 보인다고 해도, 그들을 믿는 것은 아마도 좋은 생각이 아닐 것이다.\'\t도자기를 잘 만든다고 해서 누군가를 믿는 것은 아마 좋지 않을 것이다.\tneutral\n버스틀링 그란 비아(Bustling Gran Via)는 호텔, 상점, 극장, 나이트클럽, 카페 등이 어우러져 산책과 창가를 볼 수 있다.\tGran Via는 호텔, 상점, 극장, 나이트클럽, 카페의 번화한 조합이다.\tentailment\n정부 인쇄소\t그 사무실은 워싱턴에 위치해 있다.\tneutral\n실제 문화 전쟁이 어디 있는지 알고 싶다면 학원을 잊어버리고 실리콘 밸리와 레드몬드를 생각해 보라.\t실제 문화 전쟁은 레드몬드에서 일어난다.\tentailment\n그리고 페니실린을 주지 않기 위해 침대 위에 올려놨어\t그녀의 방에는 페니실린이 없다는 징후가 전혀 없었다.\tcontradiction\nL.A.의 야외 시장을 활보하는 것은 맛있고 저렴한 그루브를 잡고, 끝이 없는 햇빛을 즐기고, 신선한 농산물, 꽃, 향, 그리고 가젯 갈로어를 구입하면서 현지인들과 어울릴 수 있는 훌륭한 방법이다.\tLA의 야외 시장을 돌아다니는 것은 시간 낭비다.\tcontradiction\n안나는 밖으로 나와 안도의 한숨을 내쉬었다. 단 한 번, 그리고 마리후아쉬 맛의 술로 끝내자는 결심이 뒤섞여 있었다.\t안나는 안심하고 마리후아쉬 맛의 술을 다 마시기로 결심했다.\tentailment\n5 월에 Vajpayee는 핵 실험의 성공적인 완료를 발표했는데, 인도인들은 주권의 표시로 선전했지만 이웃 국가와 서구와의 인도 관계를 복잡하게 만들 수 있습니다.\t인도는 성공적인 핵실험을 한 적이 없다.\tcontradiction\n플라노 원에서 보통 얼마나 많은 것을 가지고 있는가?\t저 사람들 중에 플라노 원에 가본 사람 있어?\tcontradiction\n그것의 전체적인 형태의 우아함은 운하 건너편에서 가장 잘 볼 수 있다. 
왜냐하면, 로마에 있는 성 베드로처럼, 돔은 길쭉한 본당 뒤로 더 가까운 곳에 사라지기 때문이다.\t성 베드로의 길쭉한 본당은 돔을 가린다.\tentailment\n당신은 수틴이 살에 강박적인 기쁨을 가지고 누드를 그릴 것이라고 생각하겠지만, 아니오; 그는 그의 모든 경력에서 단 한 점만을 그렸고, 그것은 사소한 그림이다.\t그는 그것이 그를 불편하게 만들었기 때문에 하나만 그렸다.\tneutral\n이 인상적인 풍경은 원래 나포 레온이 루브르 박물관의 침실에서 볼 수 있도록 계획되었는데, 그 당시 궁전이었습니다.\t나폴레옹은 그의 모든 궁전에 있는 그의 침실에서 보는 경치에 많은 관심을 가졌다.\tneutral\n그는 우리에게 문 열쇠를 건네주고는 급히 떠났다.\t그는 긴장해서 우리에게 열쇠를 빨리 주었다.\tneutral\n위원회는 또한 최종 규칙을 OMB에 제출했다.\t위원회는 또한 이 규칙을 다른 그룹에 제출했지만 최종 규칙은 OMB가 평가하기 위한 것이 었습니다.\tneutral\n정원가게에 가보면 올리비아의 복제 화합물 같은 유쾌한 이름을 가진 제품들을 찾을 수 있을 겁니다.이 제품이 뿌리를 내리도록 돕기 위해 촬영의 절단된 끝에 덩크슛을 하는 호르몬의 혼합물이죠.\t정원 가꾸기 가게의 제품들은 종종 그들의 목적을 설명하기 위해 기술적으로나 과학적으로 파생된 이름(올리비아의 복제 화합물처럼)을 부여받는다.\tneutral\n스타는 스틸 자신이나 왜 그녀의 이야기를 바꾸었는지에 훨씬 더 관심이 있을 것이다.\t스틸의 이야기는 조금도 변하지 않았다.\tcontradiction\n남편과의 마지막 대결로 맥티어는 노라의 변신을 너무나 능숙하게 예고해 왔기 때문에, 그녀에게는 당황스러울 정도로 갑작스러운 것처럼 보이지만, 우리에게는 감정적으로 불가피해 보인다.\t노라의 변신은 분명하고 필연적이었다.\tcontradiction\n이집트 최남단 도시인 아스완은 오랜 역사를 통해 중요한 역할을 해왔다.\t아스완은 이집트 국경 바로 위에 위치해 있습니다.\tneutral\n그러나 훨씬 더 우아한 건축적 터치는 신성한 춤인 Bharatanatyam에서 수행된 108 가지 기본 포즈를 시바 패널에서 볼 수 있습니다.\t패널에 대한 시바의 묘사는 일반적인 모티브다.\tneutral\n호화롭게 심어진 계단식 정원은 이탈리아 형식의 가장 훌륭한 앙상블 중 하나입니다.\t아름다운 정원과 희귀한 꽃꽂이 모두 이탈리아의 형식적인 스타일을 보여준다.\tneutral\n음, 그랬으면 좋았을 텐데\t나는 그것을 다르게 할 기회를 몹시 갈망한다.\tentailment\n폐허가 된 성의 기슭에 자리잡고 있는 예쁜 중세 도시 케이서스버그는 노벨 평화상 수상자 알버트 슈바이처(1875년)의 출생지로 널리 알려져 있다.\t알버트 슈바이처는 둘 다 케이서스버그 마을에 있었다.\tentailment\n고감도는 문제가 있는 대부분의 환자들이 발견될 것을 보장한다.\t장비 민감도는 문제 탐지와 관련이 없습니다.\tcontradiction\n오늘은 확실히 반바지 같은 날이었어\t오늘 사무실에 있는 모든 사람들은 반바지를 입었다.\tneutral\n못생긴 턱시도를 입고.\t그것은 분홍색과 주황색입니다.\tneutral\n이주 노동 수용소 오 마이 갓 그들은 판지 상자에 산다.\t노동 수용소에는 판지 상자에 사는 이주 노동자들의 사진이 있다.\tneutral\n그래, 그가 전 세계를 여행한 후에 그런 거야\t그것은 사람들의 세계 여행을 따른다.\tentailment\n건너편에 크고 큰 참나무 몇 그루가 있다.\t우리는 여기 오크나 어떤 종류의 미국 나무도 없다.\tcontradiction\nFort-de-France에서 출발하는 자동차나 여객선으로, 당신은 안세 ? 바다 포도가 그늘을 제공하는 쾌적한 갈색 모래 해변과 피크닉 테이블, 어린이 미끄럼틀, 식당이 있는 안느에 도착할 수 있다.\t프랑스 요새에서 자동차나 페리를 타고 안세로 갈 수 있다.\tentailment\n그리고 그것은 앨라배마주가 예상했던 대로 예산에서 50만 달러를 삭감하지 않을 것이라는 것을 의미한다.\t앨라배마 주는 예산 삭감을 하지 않았다. 왜냐하면 그렇게 하는 것에 대한 초기 정당성이 정밀 조사에 맞서지 않았기 때문이다.\tneutral\n알았어 먼저 어 .. 어 .. 노인이나 가족을 요양원에 보내는 것에 대해 어떻게 생각하니?\t가족을 요양원에 보내서 사는 것에 대해 어떻게 생각하는지 알 필요가 없다.\tcontradiction\n나머지는 너에게 달렸어.\t나머지는 너에게 달렸지만 시간이 많지 않다.\tneutral\n음-흠, 3월에 햇볕에 타는 것에 대해 걱정하면 안 된다는 것을 알고 있는 3월이야.\t3월은 그렇게 덥지 않다.\tneutral\n그리고 어, 그런 작은 것들로 다시 시작해봐. 아직 훨씬 싸. 어, 그 특별한 모델 차는 150달러야.\t그 모형차는 4천 달러가 든다.\tcontradiction\n내일 돌아가야 한다면, 칼이 말했다.\t돌아갈 수 없어. 오늘은 안 돼. 내일은 안 돼. 절대 안 돼." 칼이 말했다.', 'sentence2': 'contradiction'}
```
2. (Optional) It would be preferable to change the names of the features for compatibility with `run_glue.py` in 🤗 Transformers
- The `kor_nli` dataset has the same data structure as multi_nli and xnli
- Changing the names of the features and the feature type of 'gold_label' to ClassLabel might be helpful
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
features=datasets.Features(
{
"premise": datasets.Value("string"),
"hypothesis": datasets.Value("string"),
"label": datasets.features.ClassLabel(names=["entailment", "neutral", "contradiction"]),
}
),
```
If you don't mind, I would like to fix this.
Thanks!
|
CLOSED
| 2020-11-10T02:04:12
| 2020-11-16T13:59:12
| 2020-11-16T13:59:12
|
https://github.com/huggingface/datasets/issues/821
|
sackoh
| 0
|
[] |
817
|
Add MRQA dataset
|
## Adding a Dataset
- **Name:** MRQA
- **Description:** Collection of different (subsets of) QA datasets all converted to the same format to evaluate out-of-domain generalization (the datasets come from different domains, distributions, etc.). Some datasets are used for training and others are used for evaluation. This dataset was collected as part of MRQA 2019's shared task
- **Paper:** https://arxiv.org/abs/1910.09753
- **Data:** https://github.com/mrqa/MRQA-Shared-Task-2019
- **Motivation:** Out-of-domain generalization is becoming (has become) a de facto evaluation for NLU systems
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
|
CLOSED
| 2020-11-09T15:52:19
| 2020-12-04T15:44:42
| 2020-12-04T15:44:41
|
https://github.com/huggingface/datasets/issues/817
|
VictorSanh
| 1
|
[
"dataset request"
] |
816
|
[Caching] Dill globalvars() output order is not deterministic and can cause cache issues.
|
Dill uses `dill.detect.globalvars` to get the globals used by a function in a recursive dump. `globalvars` returns a dictionary of all the globals that a dumped function needs. However the order of the keys in this dict is not deterministic and can cause caching issues.
To fix that, one could register an implementation of dill's `save_function` in the `datasets` pickler that sorts the keys of the globals dict before dumping a function.
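For illustration, a minimal sketch of the idea (this is not the actual `datasets` pickler code; the function and globals below are made up): sorting the items of the globals dict before dumping yields deterministic bytes to hash.
```python
import dill
from dill.detect import globalvars

GLOBAL_A = 1
GLOBAL_B = 2

def fn(x):
    # fn depends on two module-level globals
    return x + GLOBAL_A + GLOBAL_B

# globalvars returns a plain dict whose key order is not guaranteed to be stable
unordered = globalvars(fn)

# Sorting the items first gives a deterministic representation to hash for caching
deterministic_bytes = dill.dumps(sorted(unordered.items()))
print(deterministic_bytes[:32])
```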
|
CLOSED
| 2020-11-09T15:01:20
| 2020-11-11T15:20:50
| 2020-11-11T15:20:50
|
https://github.com/huggingface/datasets/issues/816
|
lhoestq
| 1
|
[] |
815
|
Is dataset iterative or not?
|
Hi
I want to use your library for large-scale training, but I am not sure whether this is implemented as iterative datasets or not.
Could you provide me with an example of how I can use datasets as iterative datasets?
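For context, a minimal sketch of batched, lazy iteration (wikitext is used here only as a stand-in corpus): loaded datasets are backed by memory-mapped Arrow files, so slices are read from disk instead of holding everything in RAM.
```python
from datasets import load_dataset

dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

# Rows are read lazily from the on-disk Arrow file, batch by batch
for start in range(0, len(dataset), 1000):
    batch = dataset[start : start + 1000]  # dict of columns for this slice
    texts = batch["text"]
    # ... feed `texts` to the training loop ...
```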
thanks
|
CLOSED
| 2020-11-09T09:11:48
| 2020-11-10T10:50:03
| 2020-11-10T10:50:03
|
https://github.com/huggingface/datasets/issues/815
|
rabeehkarimimahabadi
| 8
|
[
"dataset request"
] |
814
|
Joining multiple datasets
|
Hi
I have multiple iterative datasets from your library with different sizes, and I want to join them in a way that each dataset is sampled equally often, so smaller datasets are sampled more and larger ones less. Could you tell me how to implement this in PyTorch? Thanks
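A minimal sketch of one way to do this in PyTorch (assumptions: each underlying dataset supports `len()` and integer indexing, and the wrapper class below is made up for illustration):
```python
import random
from torch.utils.data import IterableDataset

class EqualSamplingDataset(IterableDataset):
    """At each step, pick one dataset uniformly at random, then a random row
    from it, so smaller datasets are effectively oversampled."""

    def __init__(self, datasets_list, num_steps):
        self.datasets_list = datasets_list
        self.num_steps = num_steps

    def __iter__(self):
        for _ in range(self.num_steps):
            ds = random.choice(self.datasets_list)  # every dataset equally likely
            yield ds[random.randrange(len(ds))]
```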
|
CLOSED
| 2020-11-08T16:19:30
| 2020-11-08T19:38:48
| 2020-11-08T19:38:48
|
https://github.com/huggingface/datasets/issues/814
|
rabeehkarimimahabadi
| 1
|
[
"dataset request"
] |
813
|
How to implement DistributedSampler with datasets
|
Hi,
I am using your datasets to define my dataloaders, and I am training finetune_trainer.py from the huggingface repo on them.
I need a DistributedSampler to be able to train the models on TPUs and distribute the load across the TPU cores. Could you tell me how I can implement the distributed sampler when using datasets, given that the datasets are iterative? To give you more context, I have multiple datasets and I need to write a sampler for this case. Thanks.
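A minimal sketch of one possible setup (assumptions: `glue/mrpc` as a stand-in dataset, and hard-coded `num_replicas`/`rank` that would normally come from the TPU launcher): since a loaded `Dataset` supports `len()` and integer indexing, the standard `DistributedSampler` can be used with it directly.
```python
from datasets import load_dataset
from torch.utils.data import DataLoader, DistributedSampler

dataset = load_dataset("glue", "mrpc", split="train")
dataset.set_format(type="torch", columns=["label"])

# num_replicas / rank are placeholders; in practice they come from the launcher
sampler = DistributedSampler(dataset, num_replicas=8, rank=0, shuffle=True)
loader = DataLoader(dataset, batch_size=16, sampler=sampler)

for batch in loader:
    pass  # each replica iterates over its own shard of the data
```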
|
CLOSED
| 2020-11-08T15:27:11
| 2022-10-05T12:54:23
| 2022-10-05T12:54:23
|
https://github.com/huggingface/datasets/issues/813
|
rabeehkarimimahabadi
| 4
|
[
"dataset request"
] |
812
|
Too much logging
|
I'm doing this in the beginning of my script:
from datasets.utils import logging as datasets_logging
datasets_logging.set_verbosity_warning()
but I'm still getting these logs:
[2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock
[2020-11-07 15:45:41,909][filelock][INFO] - Lock 139958278886176 released on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock
using datasets version = 1.1.2
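A possible workaround (assumption: the messages come from the third-party `filelock` library's own logger, as the `[filelock]` tag in the lines above suggests) is to raise that logger's level as well:
```python
import logging

logging.getLogger("filelock").setLevel(logging.WARNING)
```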
|
CLOSED
| 2020-11-07T23:56:30
| 2021-01-26T14:31:34
| 2020-11-16T17:06:42
|
https://github.com/huggingface/datasets/issues/812
|
dspoka
| 7
|
[] |
811
|
nlp viewer error
|
Hello,
when I select amazon_us_reviews in the nlp viewer, it shows an error.
https://huggingface.co/nlp/viewer/?dataset=amazon_us_reviews

|
CLOSED
| 2020-11-07T17:08:58
| 2022-02-15T10:51:44
| 2022-02-14T15:24:20
|
https://github.com/huggingface/datasets/issues/811
|
jc-hou
| 3
|
[
"nlp-viewer"
] |
809
|
Add Google Taskmaster dataset
|
## Adding a Dataset
- **Name:** Taskmaster
- **Description:** A large dataset of task-oriented dialogue with annotated goals (55K dialogues covering entertainment and travel reservations)
- **Paper:** https://arxiv.org/abs/1909.05358
- **Data:** https://github.com/google-research-datasets/Taskmaster
- **Motivation:** One of few annotated datasets of this size for goal-oriented dialogue
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
|
CLOSED
| 2020-11-06T15:10:41
| 2021-04-20T13:09:26
| 2021-04-20T13:09:26
|
https://github.com/huggingface/datasets/issues/809
|
yjernite
| 2
|
[
"dataset request"
] |
807
|
load_dataset for LOCAL CSV files report CONNECTION ERROR
|
## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
import datasets
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).reshape(300,4))
df.to_csv('test.csv', header=False, index=False)
print('datasets version: ', datasets.__version__)
print('pytorch version: ', torch.__version__)
print('transformers version: ', transformers.__version__)
# output:
datasets version: 1.1.2
pytorch version: 1.5.0
transformers version: 3.2.0
```
when I load data through `dataset`:
```
dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False)
```
Error infos:
```
ConnectionError Traceback (most recent call last)
<ipython-input-17-bbdadb9a0c78> in <module>
----> 1 dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False)
~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
588 # Download/copy dataset processing script
589 module_path, hash = prepare_module(
--> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
591 )
592
~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs)
266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version)
267 try:
--> 268 local_path = cached_path(file_path, download_config=download_config)
269 except FileNotFoundError:
270 if script_version is not None:
~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
306 user_agent=download_config.user_agent,
307 local_files_only=download_config.local_files_only,
--> 308 use_etag=download_config.use_etag,
309 )
310 elif os.path.exists(url_or_filename):
~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag)
473 elif response is not None and response.status_code == 404:
474 raise FileNotFoundError("Couldn't find file at {}".format(url))
--> 475 raise ConnectionError("Couldn't reach {}".format(url))
476
477 # Try a second time
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py
```
And I try to connect to the site with requests:
```
import requests
requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py")
```
Similarly Error occurs:
```
---------------------------------------------------------------------------
ConnectionRefusedError Traceback (most recent call last)
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self)
159 conn = connection.create_connection(
--> 160 (self._dns_host, self.port), self.timeout, **extra_kw
161 )
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options)
83 if err is not None:
---> 84 raise err
85
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options)
73 sock.bind(source_address)
---> 74 sock.connect(sa)
75 return sock
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
NewConnectionError Traceback (most recent call last)
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
676 headers=headers,
--> 677 chunked=chunked,
678 )
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
380 try:
--> 381 self._validate_conn(conn)
382 except (SocketTimeout, BaseSSLError) as e:
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn)
975 if not getattr(conn, "sock", None): # AppEngine might not have `.sock`
--> 976 conn.connect()
977
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in connect(self)
307 # Add certificate verification
--> 308 conn = self._new_conn()
309 hostname = self.host
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self)
171 raise NewConnectionError(
--> 172 self, "Failed to establish a new connection: %s" % e
173 )
NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
MaxRetryError Traceback (most recent call last)
~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
448 retries=self.max_retries,
--> 449 timeout=timeout
450 )
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
724 retries = retries.increment(
--> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
726 )
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace)
438 if new_retry.is_exhausted():
--> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause))
440
MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',))
During handling of the above exception, another exception occurred:
ConnectionError Traceback (most recent call last)
<ipython-input-20-18cc3eb4a049> in <module>
1 import requests
2
----> 3 requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py")
~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in head(url, **kwargs)
102
103 kwargs.setdefault('allow_redirects', False)
--> 104 return request('head', url, **kwargs)
105
106
~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in request(method, url, **kwargs)
59 # cases, and look like a memory leak in others.
60 with sessions.Session() as session:
---> 61 return session.request(method=method, url=url, **kwargs)
62
63
~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
528 }
529 send_kwargs.update(settings)
--> 530 resp = self.send(prep, **send_kwargs)
531
532 return resp
~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in send(self, request, **kwargs)
641
642 # Send the request
--> 643 r = adapter.send(request, **kwargs)
644
645 # Total elapsed time of the request (approximately)
~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
514 raise SSLError(e, request=request)
515
--> 516 raise ConnectionError(e, request=request)
517
518 except ClosedPoolError as e:
ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',))
```
|
CLOSED
| 2020-11-06T06:33:04
| 2021-01-11T01:30:27
| 2020-11-14T05:30:34
|
https://github.com/huggingface/datasets/issues/807
|
shexuan
| 11
|
[] |
806
|
Quail dataset urls are out of date
|
<h3>Code</h3>
```
from datasets import load_dataset
quail = load_dataset('quail')
```
<h3>Error</h3>
```
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml
```
As per [quail v1.3 commit](https://github.com/text-machine-lab/quail/commit/506501cfa34d9ec6c042d31026ba6fea6bcec8ff) it looks like the location and suggested ordering has changed. In [https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58](https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58) the quail v1.2 datasets are being pointed to, which don't exist anymore.
|
CLOSED
| 2020-11-05T19:40:19
| 2020-11-10T14:02:51
| 2020-11-10T14:02:51
|
https://github.com/huggingface/datasets/issues/806
|
ngdodd
| 3
|
[] |
805
|
On loading a metric from datasets, I get the following error
|
`from datasets import load_metric`
`metric = load_metric('bleurt')`
Traceback:
210 class _ArrayXDExtensionType(pa.PyExtensionType):
211
212 ndims: int = None
AttributeError: module 'pyarrow' has no attribute 'PyExtensionType'
Any help will be appreciated. Thank you.
|
CLOSED
| 2020-11-05T15:14:38
| 2022-02-14T15:32:59
| 2022-02-14T15:32:59
|
https://github.com/huggingface/datasets/issues/805
|
laibamehnaz
| 1
|
[] |
804
|
Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa')
|
# The issue
It's all in the title, it appears to be fine on the train and validation sets.
Is there some kind of mapping to do like for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md) ?
# How to reproduce
```py
from datasets import load_dataset
kilt_tasks = load_dataset("kilt_tasks")
trivia_qa = load_dataset('trivia_qa', 'unfiltered.nocontext')
# both in "kilt_tasks"
In [18]: any([output['answer'] for output in kilt_tasks['test_triviaqa']['output']])
Out[18]: False
# and "trivia_qa"
In [13]: all([answer['value'] == '<unk>' for answer in trivia_qa['test']['answer']])
Out[13]: True
# appears to be fine on the train and validation sets.
In [14]: all([answer['value'] == '<unk>' for answer in trivia_qa['train']['answer']])
Out[14]: False
In [15]: all([answer['value'] == '<unk>' for answer in trivia_qa['validation']['answer']])
Out[15]: False
In [16]: any([output['answer'] for output in kilt_tasks['train_triviaqa']['output']])
Out[16]: True
In [17]: any([output['answer'] for output in kilt_tasks['validation_triviaqa']['output']])
Out[17]: True
```
|
CLOSED
| 2020-11-05T11:38:01
| 2020-11-09T14:14:59
| 2020-11-09T14:14:58
|
https://github.com/huggingface/datasets/issues/804
|
PaulLerner
| 3
|
[] |
801
|
How to join two datasets?
|
Hi,
I'm wondering if it's possible to join two (preprocessed) datasets with the same number of rows but different labels?
I'm currently trying to create paired sentences for BERT from `wikipedia/'20200501.en`, and I couldn't figure out a way to create a paired sentence using `.map()` where the second sentence is **not** the next sentence of the first one (i.e., it comes from a different article).
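A minimal sketch of one way to build such pairs (wikitext as a stand-in corpus; the `sentence_b` column name is made up): draw the second sentence from a random row, so it almost never comes from the same article.
```python
import random
from datasets import load_dataset

# Stand-in corpus; the same idea applies to wikipedia/'20200501.en'
corpus = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

def pair_with_random_sentence(example):
    j = random.randrange(len(corpus))  # row from (almost certainly) another article
    return {"sentence_b": corpus[j]["text"]}

paired = corpus.map(pair_with_random_sentence)
```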
Thanks!
|
CLOSED
| 2020-11-04T03:53:11
| 2020-12-23T14:02:58
| 2020-12-23T14:02:58
|
https://github.com/huggingface/datasets/issues/801
|
shangw-nvidia
| 3
|
[] |
798
|
Cannot load TREC dataset: ConnectionError
|
## Problem
I cannot load "trec" dataset, it results with ConnectionError as shown below. I've tried on both Google Colab and locally.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True)` raises `requests.exceptions.TooManyRedirects: Exceeded 30 redirects.`
* Opening `http://cogcomp.org/Data/QA/QC/train_5500.label` in a browser works, but opens a different address
* Increasing max_redirects to 100 doesn't help
Also, while debugging I've seen that requesting 'https://storage.googleapis.com/huggingface-nlp/cache/datasets/trec/default/1.1.0/dataset_info.json' returns <Response [404]> before, but it doesn't raise any errors. Not sure if that's relevant.
* datasets.__version__ == '1.1.2'
* requests.__version__ == '2.24.0'
## Error trace
```
>>> import datasets
>>> datasets.__version__
'1.1.2'
>>> dataset = load_dataset("trec", split="train")
Using custom data configuration default
Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /home/przemyslaw/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/przemyslaw/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators
dl_files = dl_manager.download_and_extract(_URLs)
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download
num_proc=download_config.num_proc,
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested
return function(data_struct)
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label
```
I would appreciate some suggestions here.
|
CLOSED
| 2020-11-03T17:45:22
| 2022-02-14T15:34:22
| 2022-02-14T15:34:22
|
https://github.com/huggingface/datasets/issues/798
|
kaletap
| 9
|
[
"dataset bug"
] |
797
|
Token classification labels are strings and we don't have the list of labels
|
Not sure if this is an issue we want to fix or not, putting it here so it's not forgotten. Right now, in token classification datasets, the labels for NER, POS and the likes are typed as `Sequence` of `strings`, which is wrong in my opinion. These should be `Sequence` of `ClassLabel` or some types that gives easy access to the underlying labels.
The main problem for preprocessing those datasets is that the list of possible labels is not stored inside the `Dataset` object which makes converting the labels to IDs quite difficult (you either have to know the list of labels in advance or run a full pass through the dataset to get the list of labels, the `unique` method being useless with the type `Sequence[str]`).
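A small sketch of the workaround this currently implies (the toy dataset below is made up): because no label list is stored, it has to be rebuilt with a full pass before the string tags can be converted to ids.
```python
from datasets import Dataset

# Toy stand-in for a token classification dataset whose tags are plain strings
ds = Dataset.from_dict({
    "tokens": [["EU", "rejects", "German", "call"], ["Peter", "Blackburn"]],
    "ner_tags": [["B-ORG", "O", "B-MISC", "O"], ["B-PER", "I-PER"]],
})

# Rebuild the label list by scanning the whole column...
labels = sorted({tag for tags in ds["ner_tags"] for tag in tags})
label2id = {label: i for i, label in enumerate(labels)}

# ...then map the string tags to ids by hand
ds = ds.map(lambda ex: {"ner_ids": [label2id[t] for t in ex["ner_tags"]]})
print(ds[0]["ner_ids"])
```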
|
CLOSED
| 2020-11-03T15:33:30
| 2022-02-14T15:41:54
| 2022-02-14T15:41:53
|
https://github.com/huggingface/datasets/issues/797
|
sgugger
| 4
|
[
"enhancement",
"Dataset discussion"
] |
795
|
Descriptions of raw and processed versions of wikitext are inverted
|
Nothing of importance, but it looks like the descriptions of wikitext-n-v1 and wikitext-n-raw-v1 are inverted for both n=2 and n=103. I just verified by loading them and the `<unk>` tokens are present in the non-raw versions, which confirms that it's a mere inversion of the descriptions and not of the datasets themselves.
Also it would be nice if those descriptions appeared in the dataset explorer.
https://github.com/huggingface/datasets/blob/87bd0864845ea0a1dd7167918dc5f341bf807bd3/datasets/wikitext/wikitext.py#L52
|
CLOSED
| 2020-11-03T10:24:51
| 2022-02-14T15:46:21
| 2022-02-14T15:46:21
|
https://github.com/huggingface/datasets/issues/795
|
fraboniface
| 2
|
[
"dataset bug"
] |
794
|
self.options cannot be converted to a Python object for pickling
|
Hi,
Currently I am trying to load csv file with customized read_options. And the latest master seems broken if we pass the ReadOptions object.
Here is a code snippet
```python
from datasets import load_dataset
from pyarrow.csv import ReadOptions
load_dataset("csv", data_files=["out.csv"], read_options=ReadOptions(block_size=16*1024*1024))
```
The error is `self.options cannot be converted to a Python object for pickling`.
Would you mind taking a look? Thanks!
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-28-ab83fec2ded4> in <module>
----> 1 load_dataset("csv", data_files=["out.csv"], read_options=ReadOptions(block_size=16*1024*1024))
/tmp/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
602 hash=hash,
603 features=features,
--> 604 **config_kwargs,
605 )
606
/tmp/datasets/src/datasets/builder.py in __init__(self, cache_dir, name, hash, features, **config_kwargs)
162 name,
163 custom_features=features,
--> 164 **config_kwargs,
165 )
166
/tmp/datasets/src/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs)
281 )
282 else:
--> 283 suffix = Hasher.hash(config_kwargs_to_add_to_suffix)
284
285 if builder_config.data_files is not None:
/tmp/datasets/src/datasets/fingerprint.py in hash(cls, value)
51 return cls.dispatch[type(value)](cls, value)
52 else:
---> 53 return cls.hash_default(value)
54
55 def update(self, value):
/tmp/datasets/src/datasets/fingerprint.py in hash_default(cls, value)
44 @classmethod
45 def hash_default(cls, value):
---> 46 return cls.hash_bytes(dumps(value))
47
48 @classmethod
/tmp/datasets/src/datasets/utils/py_utils.py in dumps(obj)
365 file = StringIO()
366 with _no_cache_fields(obj):
--> 367 dump(obj, file)
368 return file.getvalue()
369
/tmp/datasets/src/datasets/utils/py_utils.py in dump(obj, file)
337 def dump(obj, file):
338 """pickle an object to a file"""
--> 339 Pickler(file, recurse=True).dump(obj)
340 return
341
~/.local/lib/python3.6/site-packages/dill/_dill.py in dump(self, obj)
444 raise PicklingError(msg)
445 else:
--> 446 StockPickler.dump(self, obj)
447 stack.clear() # clear record of 'recursion-sensitive' pickled objects
448 return
/usr/lib/python3.6/pickle.py in dump(self, obj)
407 if self.proto >= 4:
408 self.framer.start_framing()
--> 409 self.save(obj)
410 self.write(STOP)
411 self.framer.end_framing()
/usr/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)
474 f = self.dispatch.get(t)
475 if f is not None:
--> 476 f(self, obj) # Call unbound method with explicit self
477 return
478
~/.local/lib/python3.6/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
931 # we only care about session the first pass thru
932 pickler._session = False
--> 933 StockPickler.save_dict(pickler, obj)
934 log.info("# D2")
935 return
/usr/lib/python3.6/pickle.py in save_dict(self, obj)
819
820 self.memoize(obj)
--> 821 self._batch_setitems(obj.items())
822
823 dispatch[dict] = save_dict
/usr/lib/python3.6/pickle.py in _batch_setitems(self, items)
850 k, v = tmp[0]
851 save(k)
--> 852 save(v)
853 write(SETITEM)
854 # else tmp is empty, and we're done
/usr/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)
494 reduce = getattr(obj, "__reduce_ex__", None)
495 if reduce is not None:
--> 496 rv = reduce(self.proto)
497 else:
498 reduce = getattr(obj, "__reduce__", None)
~/.local/lib/python3.6/site-packages/pyarrow/_csv.cpython-36m-x86_64-linux-gnu.so in pyarrow._csv.ReadOptions.__reduce_cython__()
TypeError: self.options cannot be converted to a Python object for pickling
```
|
CLOSED
| 2020-11-03T09:27:34
| 2020-11-19T17:35:38
| 2020-11-19T17:35:38
|
https://github.com/huggingface/datasets/issues/794
|
hzqjyyx
| 1
|
[
"bug"
] |
792
|
KILT dataset: empty string in triviaqa input field
|
# What happened
Both train and test splits of the triviaqa dataset (part of the KILT benchmark) seem to have empty string in their input field (unlike the natural questions dataset, part of the same benchmark)
# Versions
KILT version is `1.0.0`
`datasets` version is `1.1.2`
[more here](https://gist.github.com/PaulLerner/3768c8d25f723edbac20d99b6a4056c1)
# How to reproduce
```py
In [1]: from datasets import load_dataset
In [4]: dataset = load_dataset("kilt_tasks")
# everything works fine, removed output for a better readibility
Dataset kilt_tasks downloaded and prepared to /people/lerner/.cache/huggingface/datasets/kilt_tasks/all_tasks/1.0.0/821c4295a2c35db2847585918d9c47d7f028f1a26b78825d8e77cd3aeb2621a1. Subsequent calls will reuse this data.
# empty string in triviaqa input field
In [36]: dataset['train_triviaqa'][0]
Out[36]:
{'id': 'dpql_5197',
'input': '',
'meta': {'left_context': '',
'mention': '',
'obj_surface': {'text': []},
'partial_evidence': {'end_paragraph_id': [],
'meta': [],
'section': [],
'start_paragraph_id': [],
'title': [],
'wikipedia_id': []},
'right_context': '',
'sub_surface': {'text': []},
'subj_aliases': {'text': []},
'template_questions': {'text': []}},
'output': {'answer': ['five £', '5 £', '£5', 'five £'],
'meta': [],
'provenance': [{'bleu_score': [1.0],
'end_character': [248],
'end_paragraph_id': [30],
'meta': [],
'section': ['Section::::Question of legal tender.\n'],
'start_character': [246],
'start_paragraph_id': [30],
'title': ['Banknotes of the pound sterling'],
'wikipedia_id': ['270680']}]}}
In [35]: dataset['train_triviaqa']['input'][:10]
Out[35]: ['', '', '', '', '', '', '', '', '', '']
# same with test set
In [37]: dataset['test_triviaqa']['input'][:10]
Out[37]: ['', '', '', '', '', '', '', '', '', '']
# works fine with natural questions
In [34]: dataset['train_nq']['input'][:10]
Out[34]:
['how i.met your mother who is the mother',
'who had the most wins in the nfl',
'who played mantis guardians of the galaxy 2',
'what channel is the premier league on in france',
"god's not dead a light in the darkness release date",
'who is the current president of un general assembly',
'when do the eclipse supposed to take place',
'what is the name of the sea surrounding dubai',
'who holds the nba record for most points in a career',
'when did the new maze runner movie come out']
```
Stay safe :)
|
CLOSED
| 2020-11-02T17:33:54
| 2020-11-05T10:34:59
| 2020-11-05T10:34:59
|
https://github.com/huggingface/datasets/issues/792
|
PaulLerner
| 1
|
[] |
790
|
Error running pip install -e ".[dev]" on MacOS 10.13.6: faiss/python does not exist
|
I was following along with https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset when I ran into this error.
```sh
git clone https://github.com/huggingface/datasets
cd datasets
virtualenv venv -p python3 --system-site-packages
source venv/bin/activate
pip install -e ".[dev]"
```


Python 3.7.7
|
CLOSED
| 2020-11-02T12:36:35
| 2020-11-10T14:05:02
| 2020-11-10T14:05:02
|
https://github.com/huggingface/datasets/issues/790
|
shawwn
| 2
|
[] |
788
|
failed to reuse cache
|
I wrapped `load_dataset` in a method of a class and cached the data in a directory. But when I import the class and use the method, the data still has to be downloaded again. The information logged to the terminal (Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to ******) shows that the path points to the right cache directory, but the files still have to be downloaded again.
|
CLOSED
| 2020-11-02T02:42:36
| 2020-11-02T12:26:15
| 2020-11-02T12:26:15
|
https://github.com/huggingface/datasets/issues/788
|
WangHexie
| 0
|
[] |
786
|
feat(dataset): multiprocessing _generate_examples
|
Forking this out of #741; this issue is only about multiprocessing.
I'd love it if there was a dataset configuration parameter `workers`, where when it is `1` it behaves as it does right now, and when it is `>1`, `_generate_examples` could also get the `pool` and return an iterable using the pool.
In my use case, I would instead of:
```python
for datum in data:
yield self.load_datum(datum)
```
do:
```python
return pool.map(self.load_datum, data)
```
As the dataset in question, as an example, has **only** 7000 rows, and takes 10 seconds to load each row on average, it takes almost 20 hours to load the entire dataset.
If this was a larger dataset (and many such datasets exist), it would take multiple days to complete.
Using multiprocessing with, for example, 40 cores could speed it up dramatically; this dataset would then hopefully load fully in under an hour.
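In the meantime, a rough workaround sketch (assuming `load_datum` and its inputs are picklable) that keeps the generator interface while fanning the per-row work out to a pool:
```python
from multiprocessing import Pool

def _generate_examples(self, data):
    # Stream results back as they complete instead of materializing them all
    with Pool(processes=8) as pool:
        for idx, example in enumerate(pool.imap(self.load_datum, data)):
            yield idx, example
```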
|
CLOSED
| 2020-10-31T16:52:16
| 2023-01-16T10:59:13
| 2023-01-16T10:59:13
|
https://github.com/huggingface/datasets/issues/786
|
AmitMY
| 2
|
[] |
784
|
Issue with downloading Wikipedia data for low resource language
|
Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet
```
jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner')
su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runner='DirectRunner')
```
And I get the following error for these two languages:
Javanese
```
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json
```
Sundanese
```
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json
```
I found from https://github.com/huggingface/datasets/issues/577#issuecomment-688435085 that for small languages, they are directly downloaded and parsed from the Wikipedia dump site, but both of `https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json` and `https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json` are no longer valid.
Any suggestions on how to handle this issue? Thanks!
|
CLOSED
| 2020-10-31T11:40:00
| 2022-02-09T17:50:16
| 2020-11-25T15:42:13
|
https://github.com/huggingface/datasets/issues/784
|
SamuelCahyawijaya
| 5
|
[] |
778
|
Unexpected behavior when loading cached csv file?
|
I read a csv file from disk and forgot to specify the right delimiter. When I read the csv file again, specifying the right delimiter, it had no effect since it was using the cached dataset. I am not sure if this is unwanted behavior since I can always specify `download_mode="force_redownload"` (see the snippet after the example below). But I think it would be nice if the information about which `delimiter` or which `column_names` were used would influence the identifier of the cached dataset.
Small snippet to reproduce the behavior:
```python
import datasets
with open("dummy_data.csv", "w") as file:
file.write("test,this;text\n")
print(datasets.load_dataset("csv", data_files="dummy_data.csv", split="train").column_names)
# ["test", "this;text"]
print(datasets.load_dataset("csv", data_files="dummy_data.csv", split="train", delimiter=";").column_names)
# still ["test", "this;text"]
```
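For reference, the workaround mentioned above would look like this (a minimal sketch reusing the file from the snippet):
```python
import datasets

ds = datasets.load_dataset(
    "csv",
    data_files="dummy_data.csv",
    split="train",
    delimiter=";",
    download_mode="force_redownload",  # bypass the dataset cached with the wrong delimiter
)
print(ds.column_names)  # columns now reflect the ";" delimiter
```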
By the way, thanks a lot for this amazing library! :)
|
CLOSED
| 2020-10-29T16:06:10
| 2020-10-29T21:21:27
| 2020-10-29T21:21:27
|
https://github.com/huggingface/datasets/issues/778
|
dcfidalgo
| 2
|
[] |
773
|
Adding CC-100: Monolingual Datasets from Web Crawl Data
|
## Adding a Dataset
- **Name:** CC-100: Monolingual Datasets from Web Crawl Data
- **Description:** https://twitter.com/alex_conneau/status/1321507120848625665
- **Paper:** https://arxiv.org/abs/1911.02116
- **Data:** http://data.statmt.org/cc-100/
- **Motivation:** A large scale multi-lingual language modeling dataset. Text is de-duplicated and filtered by how "Wikipedia-like" it is, hopefully helping avoid some of the worst parts of the common crawl.
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
|
CLOSED
| 2020-10-28T18:20:41
| 2022-01-26T13:22:54
| 2020-12-14T10:20:07
|
https://github.com/huggingface/datasets/issues/773
|
yjernite
| 4
|
[
"dataset request"
] |
771
|
Using `Dataset.map` with `n_proc>1` print multiple progress bars
|
When using `Dataset.map` with `n_proc > 1`, only one of the processes should print a progress bar (to make the output readable). Right now, `n_proc` progress bars are printed.
|
CLOSED
| 2020-10-28T14:13:27
| 2023-02-13T20:16:39
| 2023-02-13T20:16:39
|
https://github.com/huggingface/datasets/issues/771
|
sgugger
| 3
|
[] |
769
|
How to choose proper download_mode in function load_dataset?
|
Hi, I am a beginner to datasets and I try to use datasets to load my csv file.
my csv file looks like this
```
text,label
"Effective but too-tepid biopic",3
"If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",4
"Emerges as something rare , an issue movie that 's so honest and keenly observed that it does n't feel like one .",5
```
First I try to use this command to load my csv file .
``` python
dataset=load_dataset('csv', data_files=['sst_test.csv'])
```
It seems good, but when I try to overwrite the convert_options to convert the 'label' column from int64 to float32 like this:
``` python
import pyarrow as pa
from pyarrow import csv
read_options = csv.ReadOptions(block_size=1024*1024)
parse_options = csv.ParseOptions()
convert_options = csv.ConvertOptions(column_types={'text': pa.string(), 'label': pa.float32()})
dataset = load_dataset('csv', data_files=['sst_test.csv'], read_options=read_options,
parse_options=parse_options, convert_options=convert_options)
```
It keeps the same:
```shell
Dataset(features: {'text': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None)}, num_rows: 2210)
```
I think this issue is caused by the parameter "download_mode" defaulting to REUSE_DATASET_IF_EXISTS, because after I delete the cache_dir it works correctly.
Is it a bug? How to choose proper download_mode to avoid this issue?
|
CLOSED
| 2020-10-28T09:16:19
| 2022-02-22T12:22:52
| 2022-02-22T12:22:52
|
https://github.com/huggingface/datasets/issues/769
|
jzq2000
| 5
|
[] |
768
|
Add a `lazy_map` method to `Dataset` and `DatasetDict`
|
The library is great, but it would be even more awesome with a `lazy_map` method implemented on `Dataset` and `DatasetDict`. This would apply a function to a given item, but only when the item is requested (a rough workaround sketch follows the use cases below). Two use cases:
1. load image on the fly
2. apply a random function and get different outputs at each epoch (like data augmentation or randomly masking a part of a sentence for BERT-like objectives).
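In the meantime, a rough workaround sketch (the wrapper class is made up; it assumes the consumer is a PyTorch `DataLoader` or similar): apply the function at item-access time, so use case 2 gets fresh randomness on every access.
```python
from torch.utils.data import Dataset as TorchDataset

class LazyMapDataset(TorchDataset):
    """Apply `fn` only when an item is requested, so random transforms
    (augmentation, random masking) are re-drawn on every access."""

    def __init__(self, hf_dataset, fn):
        self.hf_dataset = hf_dataset
        self.fn = fn

    def __len__(self):
        return len(self.hf_dataset)

    def __getitem__(self, idx):
        return self.fn(self.hf_dataset[idx])
```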
|
OPEN
| 2020-10-27T22:33:03
| 2020-10-28T08:58:13
| null |
https://github.com/huggingface/datasets/issues/768
|
sgugger
| 1
|
[
"enhancement"
] |
767
|
Add option for named splits when using ds.train_test_split
|
### Feature Request 🚀
Can we add a way to name your splits when using the `.train_test_split` function?
In almost every use case I've come across, I have a `train` and a `test` split in my `DatasetDict`, and I want to create a `validation` split. Therefore, its kinda useless to get a `test` split back from `train_test_split`, as it'll just overwrite my real `test` split that I intended to keep.
### Workaround
this is my hack for dealin with this, for now :slightly_smiling_face:
```python
from datasets import load_dataset
ds = load_dataset('imdb')
ds['train'], ds['validation'] = ds['train'].train_test_split(.1).values()
```
|
OPEN
| 2020-10-27T19:59:44
| 2020-11-10T14:05:21
| null |
https://github.com/huggingface/datasets/issues/767
|
nateraw
| 1
|
[
"enhancement"
] |
766
|
[GEM] add DART data-to-text generation dataset
|
## Adding a Dataset
- **Name:** DART
- **Description:** DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set.
- **Paper:** https://arxiv.org/abs/2007.02871v1
- **Data:** https://github.com/Yale-LILY/dart
- **Motivation:** the dataset will likely be included in the GEM benchmark
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
|
CLOSED
| 2020-10-27T17:34:04
| 2020-12-03T13:37:18
| 2020-12-03T13:37:18
|
https://github.com/huggingface/datasets/issues/766
|
yjernite
| 2
|
[
"dataset request"
] |
765
|
[GEM] Add DART data-to-text generation dataset
|
## Adding a Dataset
- **Name:** DART
- **Description:** DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set.
- **Paper:** https://arxiv.org/abs/2007.02871v1
- **Data:** https://github.com/Yale-LILY/dart
- **Motivation:** It will likely be included in the GEM generation evaluation benchmark
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
|
CLOSED
| 2020-10-27T17:32:23
| 2020-10-27T17:34:21
| 2020-10-27T17:34:21
|
https://github.com/huggingface/datasets/issues/765
|
yjernite
| 0
|
[
"dataset request"
] |
762
|
[GEM] Add Czech Restaurant data-to-text generation dataset
|
- Paper: https://www.aclweb.org/anthology/W19-8670.pdf
- Data: https://github.com/UFAL-DSG/cs_restaurant_dataset
- The dataset will likely be part of the GEM benchmark
|
CLOSED
| 2020-10-27T16:00:47
| 2020-12-03T13:37:44
| 2020-12-03T13:37:44
|
https://github.com/huggingface/datasets/issues/762
|
yjernite
| 0
|
[
"dataset request"
] |
761
|
Downloaded datasets are not usable offline
|
I've been trying to use the IMDB dataset offline, but after downloading it and turning off the internet it still raises an error from the `requests` library trying to reach the online dataset.
Is this the intended behavior?
(Sorry, I wrote the first version of this issue while still on nlp 0.3.0).
|
CLOSED
| 2020-10-26T20:54:46
| 2022-02-15T10:32:28
| 2022-02-15T10:32:28
|
https://github.com/huggingface/datasets/issues/761
|
ghazi-f
| 2
|
[] |
760
|
Add meta-data to the HANS dataset
|
The current version of the [HANS dataset](https://github.com/huggingface/datasets/blob/master/datasets/hans/hans.py) is missing the additional information provided for each example, including the sentence parses, heuristic and subcase.
|
CLOSED
| 2020-10-26T14:56:53
| 2020-12-03T13:38:34
| 2020-12-03T13:38:34
|
https://github.com/huggingface/datasets/issues/760
|
yjernite
| 0
|
[
"good first issue",
"dataset bug"
] |
759
|
(Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
|
Hey, I want to load the cnn-dailymail dataset for fine-tuning.
I wrote the code like this:
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
And I got the following errors.
Traceback (most recent call last):
File "test.py", line 7, in
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="test")
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py", line 589, in load_dataset
module_path, hash = prepare_module(
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py", line 268, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py", line 300, in cached_path
output_path = get_from_cache(
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py", line 475, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
How can I fix this ?
|
CLOSED
| 2020-10-25T15:34:57
| 2023-09-13T23:56:51
| 2021-08-04T18:10:09
|
https://github.com/huggingface/datasets/issues/759
|
AI678
| 19
|
[] |
758
|
Process 0 very slow when using num_procs with map to tokenizer
|
<img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
truncation=True, max_length=args.block_size), num_proc=8)
dataset.set_format(type='torch', columns=['input_ids'])
dataset.save_to_disk(file_path+'.arrow')
```
|
CLOSED
| 2020-10-24T02:40:20
| 2020-10-28T03:59:46
| 2020-10-28T03:59:45
|
https://github.com/huggingface/datasets/issues/758
|
ksjae
| 6
|
[] |
757
|
CUDA out of memory
|
In your dataset, CUDA runs out of memory as soon as the trainer begins;
however, without changing any other element/parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
|
CLOSED
| 2020-10-23T13:57:00
| 2020-12-23T14:06:29
| 2020-12-23T14:06:29
|
https://github.com/huggingface/datasets/issues/757
|
li1117heex
| 8
|
[] |
752
|
Clicking on a metric in the search page points to datasets page giving "Missing dataset" warning
|
Hi! Sorry if this isn't the right place to talk about the website, I just didn't know exactly where to write this.
Searching a metric in https://huggingface.co/metrics gives the right results but clicking on a metric (E.g ROUGE) points to https://huggingface.co/datasets/rouge. Clicking on a metric without searching points to the right page.
Thanks for all the great work!
|
CLOSED
| 2020-10-21T22:56:23
| 2020-10-22T16:19:42
| 2020-10-22T16:19:42
|
https://github.com/huggingface/datasets/issues/752
|
ogabrielluiz
| 2
|
[] |
751
|
Error loading ms_marco v2.1 using load_dataset()
|
Code:
`dataset = load_dataset('ms_marco', 'v2.1')`
Error:
```
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
<ipython-input-16-34378c057212> in <module>()
9
10 # Downloading and loading a dataset
---> 11 dataset = load_dataset('ms_marco', 'v2.1')
10 frames
/usr/lib/python3.6/json/decoder.py in raw_decode(self, s, idx)
353 """
354 try:
--> 355 obj, end = self.scan_once(s, idx)
356 except StopIteration as err:
357 raise JSONDecodeError("Expecting value", s, err.value) from None
JSONDecodeError: Unterminated string starting at: line 1 column 388988661 (char 388988660)
```
|
CLOSED
| 2020-10-21T19:54:43
| 2020-11-05T01:31:57
| 2020-11-05T01:31:57
|
https://github.com/huggingface/datasets/issues/751
|
JainSahit
| 3
|
[] |
750
|
load_dataset doesn't include `features` in its hash
|
It looks like the function `load_dataset` does not include what's passed in the `features` argument when creating a hash for a given dataset. As a result, if a user includes new features from an already downloaded dataset, those are ignored.
Example: some models on the hub have a different ordering for the labels than what `datasets` uses for MNLI so I'd like to do something along the lines of:
```
dataset = load_dataset("glue", "mnli")
features = dataset["train"].features
features["label"] = ClassLabel(names = ['entailment', 'contradiction', 'neutral']) # new label order
dataset = load_dataset("glue", "mnli", features=features)
```
|
CLOSED
| 2020-10-21T15:16:41
| 2020-10-29T09:36:01
| 2020-10-29T09:36:01
|
https://github.com/huggingface/datasets/issues/750
|
sgugger
| 0
|
[] |
749
|
[XGLUE] Adding new dataset
|
XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance
|
CLOSED
| 2020-10-21T10:51:36
| 2022-09-30T11:35:30
| 2021-01-06T10:02:55
|
https://github.com/huggingface/datasets/issues/749
|
patrickvonplaten
| 15
|
[
"dataset request"
] |
744
|
Dataset Explorer Doesn't Work for squad_es and squad_it
|
https://huggingface.co/nlp/viewer/?dataset=squad_es
https://huggingface.co/nlp/viewer/?dataset=squad_it
Both pages show "OSError: [Errno 28] No space left on device".
|
CLOSED
| 2020-10-19T19:34:12
| 2020-10-26T16:36:17
| 2020-10-26T16:36:17
|
https://github.com/huggingface/datasets/issues/744
|
gaotongxiao
| 1
|
[
"nlp-viewer"
] |
743
|
load_dataset for CSV files not working
|
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInvalid: CSV parse error: Expected 2 columns, got 1
`
I should mention that when I tried to read data from `https://github.com/lhoestq/transformers/tree/custom-dataset-in-rag-retriever/examples/rag/test_data/my_knowledge_dataset.csv` it worked without a problem. I've read that there might be some problems with the `\r` character, so I've removed them from the custom dataset, but the problem still remains.
I've added a colab reproducing the bug, but unfortunately I cannot provide the dataset.
https://colab.research.google.com/drive/1Qzu7sC-frZVeniiWOwzoCe_UHZsrlxu8?usp=sharing
Are there any workarounds for it?
Thank you
|
OPEN
| 2020-10-19T14:53:51
| 2025-04-24T06:35:25
| null |
https://github.com/huggingface/datasets/issues/743
|
iliemihai
| 23
|
[] |
741
|
Creating dataset consumes too much memory
|
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examples. """
filepath = os.path.join(base_path, "annotations", "manual", "PHOENIX-2014-T." + split + ".corpus.csv")
images_path = os.path.join(base_path, "features", "fullFrame-210x260px", split)
with open(filepath, "r", encoding="utf-8") as f:
data = csv.DictReader(f, delimiter="|", quoting=csv.QUOTE_NONE)
for row in data:
frames_path = os.path.join(images_path, row["video"])[:-7]
np_frames = []
for frame_name in os.listdir(frames_path):
frame_path = os.path.join(frames_path, frame_name)
im = Image.open(frame_path)
np_frames.append(np.asarray(im))
im.close()
yield row["name"], {"video": np_frames}
```
The dataset creation process goes out of memory on a machine with 500GB RAM.
I was under the impression that the "generator" here is exactly for that, to avoid memory constraints.
However, even if you want the entire dataset in memory, it would be in the worst case
`260x210x3 x 400 max length x 7000 samples` in bytes (uint8) = 458.64 gigabytes
So I'm not sure why it's taking more than 500GB.
And the dataset creation fails after 170 examples on a machine with 120gb RAM, and after 672 examples on a machine with 500GB RAM.
---
## Info that might help:
Iterating over examples is extremely slow.

If I perform this iteration in my own, custom loop (Without saving to file), it runs at 8-9 examples/sec
And you can see at this state it is using 94% of the memory:

And it is only using one CPU core, which is probably why it's so slow:

|
CLOSED
| 2020-10-18T06:07:06
| 2022-02-15T17:03:10
| 2022-02-15T17:03:10
|
https://github.com/huggingface/datasets/issues/741
|
AmitMY
| 20
|
[] |
737
|
Trec Dataset Connection Error
|
**Datasets Version:**
1.1.2
**Python Version:**
3.6/3.7
**Code:**
```python
from datasets import load_dataset
load_dataset("trec")
```
**Expected behavior:**
Download Trec dataset and load Dataset object
**Current Behavior:**
Get a connection error saying it couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label (but the link doesn't seem broken)
<details>
<summary>Error Logs</summary>
Using custom data configuration default
Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /root/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7...
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
<ipython-input-8-66bf1242096e> in <module>()
----> 1 load_dataset("trec")
10 frames
/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag)
473 elif response is not None and response.status_code == 404:
474 raise FileNotFoundError("Couldn't find file at {}".format(url))
--> 475 raise ConnectionError("Couldn't reach {}".format(url))
476
477 # Try a second time
ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label
</details>
|
CLOSED
| 2020-10-15T15:57:53
| 2020-10-19T08:54:36
| 2020-10-19T08:54:36
|
https://github.com/huggingface/datasets/issues/737
|
aychang95
| 1
|
[] |
735
|
Throw error when an unexpected key is used in data_files
|
I have found that only "train", "validation" and "test" are valid keys in the `data_files` argument. When you use any other ones, those attached files are silently ignored - leading to unexpected behaviour for the users.
So the following, unintuitively, returns only one key (namely `train`).
```python
datasets = load_dataset("text", data_files={"train": train_f, "valid": valid_f})
print(datasets.keys())
# dict_keys(['train'])
```
whereas using `validation` instead, does return the expected result:
```python
datasets = load_dataset("text", data_files={"train": train_f, "validation": valid_f})
print(datasets.keys())
# dict_keys(['train', 'validation'])
```
I would like to see more freedom in which keys one can use, but if that is not possible at least an error should be thrown when using an unexpected key.
|
CLOSED
| 2020-10-15T10:55:27
| 2020-10-30T13:23:52
| 2020-10-30T13:23:52
|
https://github.com/huggingface/datasets/issues/735
|
BramVanroy
| 1
|
[] |
730
|
Possible caching bug
|
The following code with `test1.txt` containing just "🤗🤗🤗":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
```
produces this output:
```
Downloading and preparing dataset text/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155...
Dataset text downloaded and prepared to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. Subsequent calls will reuse this data.
{'text': 'ð\x9f¤\x97ð\x9f¤\x97ð\x9f¤\x97'}
Using custom data configuration default
Reusing dataset text (/home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155)
{'text': 'ð\x9f¤\x97ð\x9f¤\x97ð\x9f¤\x97'}
```
Just changing the order (and deleting the temp files):
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
```
produces this:
```
Using custom data configuration default
Downloading and preparing dataset text/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155...
Dataset text downloaded and prepared to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. Subsequent calls will reuse this data.
{'text': '🤗🤗🤗'}
Using custom data configuration default
Reusing dataset text (/home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155)
{'text': '🤗🤗🤗'}
```
Is it intended that the cache path does not depend on the config entries?
tested with datasets==1.1.2 and python==3.8.5
|
CLOSED
| 2020-10-14T02:02:34
| 2022-11-22T01:45:54
| 2020-10-29T09:36:01
|
https://github.com/huggingface/datasets/issues/730
|
ArneBinder
| 7
|
[
"bug"
] |
729
|
Better error message when one forgets to call `add_batch` before `compute`
|
When using metrics, if for some reason a user forgets to call `add_batch` to a metric before `compute` (with no arguments), the error message is a bit cryptic and could probably be made clearer.
## Reproducer
```python
import datasets
import torch
from datasets import Metric
class GatherMetric(Metric):
def _info(self):
return datasets.MetricInfo(
description="description",
citation="citation",
inputs_description="kwargs",
features=datasets.Features({
'predictions': datasets.Value('int64'),
'references': datasets.Value('int64'),
}),
codebase_urls=[],
reference_urls=[],
format='numpy'
)
def _compute(self, predictions, references):
return {"predictions": predictions, "labels": references}
metric = GatherMetric(cache_dir="test-metric")
inputs = torch.randint(0, 2, (1024,))
targets = torch.randint(0, 2, (1024,))
batch_size = 8
for i in range(0, 1024, batch_size):
pass # User forgets to call `add_batch`
result = metric.compute()
```
## Stack trace:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-13-267729d187fa> in <module>
3 pass
4 # metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
----> 5 result = metric.compute()
~/git/datasets/src/datasets/metric.py in compute(self, *args, **kwargs)
380 if predictions is not None:
381 self.add_batch(predictions=predictions, references=references)
--> 382 self._finalize()
383
384 self.cache_file_name = None
~/git/datasets/src/datasets/metric.py in _finalize(self)
343 elif self.process_id == 0:
344 # Let's acquire a lock on each node files to be sure they are finished writing
--> 345 file_paths, filelocks = self._get_all_cache_files()
346
347 # Read the predictions and references
~/git/datasets/src/datasets/metric.py in _get_all_cache_files(self)
280 filelocks = []
281 for process_id, file_path in enumerate(file_paths):
--> 282 filelock = FileLock(file_path + ".lock")
283 try:
284 filelock.acquire(timeout=self.timeout)
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
```
|
CLOSED
| 2020-10-12T17:59:22
| 2020-10-29T15:18:24
| 2020-10-29T15:18:24
|
https://github.com/huggingface/datasets/issues/729
|
sgugger
| 0
|
[] |
728
|
Passing `cache_dir` to a metric does not work
|
When passing `cache_dir` to a custom metric, the folder is concatenated to itself at some point and this results in a FileNotFoundError:
## Reproducer
```python
import datasets
import torch
from datasets import Metric

class GatherMetric(Metric):
    def _info(self):
        return datasets.MetricInfo(
            description="description",
            citation="citation",
            inputs_description="kwargs",
            features=datasets.Features({
                'predictions': datasets.Value('int64'),
                'references': datasets.Value('int64'),
            }),
            codebase_urls=[],
            reference_urls=[],
            format='numpy'
        )

    def _compute(self, predictions, references):
        return {"predictions": predictions, "labels": references}

metric = GatherMetric(cache_dir="test-metric")
inputs = torch.randint(0, 2, (1024,))
targets = torch.randint(0, 2, (1024,))
batch_size = 8
for i in range(0, 1024, batch_size):
    metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
result = metric.compute()
```
## Stack trace:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
~/git/datasets/src/datasets/metric.py in _finalize(self)
349 reader = ArrowReader(path=self.data_dir, info=DatasetInfo(features=self.features))
--> 350 self.data = Dataset(**reader.read_files([{"filename": f} for f in file_paths]))
351 except FileNotFoundError:
~/git/datasets/src/datasets/arrow_reader.py in read_files(self, files, original_instructions)
227 # Prepend path to filename
--> 228 pa_table = self._read_files(files)
229 files = copy.deepcopy(files)
~/git/datasets/src/datasets/arrow_reader.py in _read_files(self, files)
166 for f_dict in files:
--> 167 pa_table: pa.Table = self._get_dataset_from_filename(f_dict)
168 pa_tables.append(pa_table)
~/git/datasets/src/datasets/arrow_reader.py in _get_dataset_from_filename(self, filename_skip_take)
291 )
--> 292 mmap = pa.memory_map(filename)
293 f = pa.ipc.open_stream(mmap)
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.memory_map()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.MemoryMappedFile._open()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
FileNotFoundError: [Errno 2] Failed to open local file 'test-metric/gather_metric/default/test-metric/gather_metric/default/default_experiment-1-0.arrow'. Detail: [errno 2] No such file or directory
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-17-e42d43cc981f> in <module>
2 for i in range(0, 1024, batch_size):
3 metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
----> 4 result = metric.compute()
~/git/datasets/src/datasets/metric.py in compute(self, *args, **kwargs)
380 if predictions is not None:
381 self.add_batch(predictions=predictions, references=references)
--> 382 self._finalize()
383
384 self.cache_file_name = None
~/git/datasets/src/datasets/metric.py in _finalize(self)
351 except FileNotFoundError:
352 raise ValueError(
--> 353 "Error in finalize: another metric instance is already using the local cache file. "
354 "Please specify an experiment_id to avoid colision between distributed metric instances."
355 )
ValueError: Error in finalize: another metric instance is already using the local cache file. Please specify an experiment_id to avoid colision between distributed metric instances.
```
The code works when we remove the `cache_dir=...` from the metric.
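For what it's worth, a small self-contained illustration of the doubled path in the traceback: the metric's data directory is already rooted at `cache_dir`, and the reader then prepends that same base path again. The path components below are taken from the error above; the join logic is only a sketch of the symptom, not the actual `datasets` code.
```python
import os

cache_dir = "test-metric"
data_dir = os.path.join(cache_dir, "gather_metric", "default")
file_name = "default_experiment-1-0.arrow"

already_prefixed = os.path.join(data_dir, file_name)  # path the metric records for its cache file
doubled = os.path.join(data_dir, already_prefixed)    # base path prepended a second time by the reader
print(doubled)
# test-metric/gather_metric/default/test-metric/gather_metric/default/default_experiment-1-0.arrow
```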
|
CLOSED
| 2020-10-12T17:55:14
| 2020-10-29T09:34:42
| 2020-10-29T09:34:42
|
https://github.com/huggingface/datasets/issues/728
|
sgugger
| 0
|
[] |
727
|
Parallel downloads progress bar flickers
|
When there are parallel downloads using the download manager, the tqdm progress bar flickers since all the progress bars are on the same line.
To fix that, we could simply specify `position=i` (for i = 0 to n, where n is the number of files to download) when instantiating each tqdm progress bar, as in the sketch below.
Another way would be to have one "master" progress bar that tracks the number of finished downloads, and then one progress bar per process that shows its current download.
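A minimal standalone sketch of the first option (file names and sizes are made up; only the `position` argument matters here):
```python
import time
from tqdm import tqdm

files = ["file_a.zip", "file_b.zip", "file_c.zip"]  # placeholder download names
# position=i pins each bar to its own terminal line, so concurrent updates stop colliding
bars = [tqdm(total=100, desc=name, position=i, leave=True) for i, name in enumerate(files)]
for _ in range(100):
    for bar in bars:
        bar.update(1)
    time.sleep(0.01)
for bar in bars:
    bar.close()
```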
|
OPEN
| 2020-10-12T13:36:05
| 2020-10-12T13:36:05
| null |
https://github.com/huggingface/datasets/issues/727
|
lhoestq
| 0
|
[] |
726
|
"Checksums didn't match for dataset source files" error while loading openwebtext dataset
|
Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/openwebtext/plain_text/1.0.0/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 536, in _download_and_prepare
self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://zenodo.org/record/3834942/files/openwebtext.tar.xz']
```
I think this problem is caused because the released dataset has changed. Or I should download the dataset manually?
Sorry for releasing the unfinished issue by mistake.
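Until the recorded checksums are updated, a possible workaround (a sketch, with the caveat that it also disables the protection against a silently changed source file) is to skip verification:
```python
from datasets import load_dataset

# Skips checksum/size verification for this load; use only if you accept that the
# downloaded file may differ from the one originally recorded in the dataset infos.
dataset = load_dataset("openwebtext", ignore_verifications=True)
```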
|
CLOSED
| 2020-10-12T11:45:10
| 2022-02-17T17:53:54
| 2022-02-15T10:38:57
|
https://github.com/huggingface/datasets/issues/726
|
SparkJiao
| 8
|
[] |
724
|
need to redirect /nlp to /datasets and remove outdated info
|
It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
should probably redirect to: https://huggingface.co/datasets/wikihow
Also, for some reason the new information is slightly borked: the old page was nicely formatted and had the links marked up, whereas the new one is just a jumble of text in one chunk with no markup for links (i.e. not clickable).
|
CLOSED
| 2020-10-11T23:12:12
| 2020-10-14T17:00:12
| 2020-10-14T17:00:12
|
https://github.com/huggingface/datasets/issues/724
|
stas00
| 4
|
[] |
723
|
Adding pseudo-labels to datasets
|
I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generations on an existing dataset, what is the right way to structure this contribution?
I read https://huggingface.co/docs/datasets/add_dataset.html, but it doesn't really cover this type of contribution.
I could, for example, make a new directory, `xsum_bart_pseudolabels` for each set of pseudolabels or add some sort of parametrization to `xsum.py`: https://github.com/huggingface/datasets/blob/5f4c6e830f603830117877b8990a0e65a2386aa6/datasets/xsum/xsum.py
What do you think @lhoestq ?
|
CLOSED
| 2020-10-11T21:05:45
| 2021-08-03T05:11:51
| 2021-08-03T05:11:51
|
https://github.com/huggingface/datasets/issues/723
|
sshleifer
| 8
|
[] |
721
|
feat(dl_manager): add support for ftp downloads
|
I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.download_and_extract(_URL)
```
I get an error:
> ValueError: unable to parse ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz as a URL or as a local path
I checked, and indeed you don't consider `ftp` as a remote file.
https://github.com/huggingface/datasets/blob/4c2af707a6955cf4b45f83ac67990395327c5725/src/datasets/utils/file_utils.py#L188
Adding `ftp` to that list does not immediately solve the issue, so there probably needs to be some extra work.
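For context, a hedged sketch of the two pieces involved: treating `ftp` as a remote scheme (the real helper in `file_utils.py` checks a fixed list of schemes; the list below is abbreviated and illustrative) and actually fetching an FTP URL, which the standard library can already do:
```python
import shutil
import urllib.request
from urllib.parse import urlparse

def is_remote_url(url_or_filename: str) -> bool:
    # Sketch only: the proposed change is to accept "ftp" in addition to the HTTP schemes.
    return urlparse(url_or_filename).scheme in ("http", "https", "ftp")

def ftp_get(url: str, dst_path: str) -> None:
    # urllib handles ftp:// URLs out of the box; the dl_manager would still need to add
    # its caching, checksum and retry logic on top of this.
    with urllib.request.urlopen(url) as response, open(dst_path, "wb") as f:
        shutil.copyfileobj(response, f)
```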
|
CLOSED
| 2020-10-10T15:50:20
| 2022-02-15T10:44:44
| 2022-02-15T10:44:43
|
https://github.com/huggingface/datasets/issues/721
|
AmitMY
| 11
|
[] |
720
|
OSError: Cannot find data file when not using the dummy dataset in RAG
|
## Environment info
transformers version: 3.3.1
Platform: Linux-4.19
Python version: 3.7.7
PyTorch version (GPU?): 1.6.0
Tensorflow version (GPU?): No
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: No
## To reproduce
Steps to reproduce the behaviour:
```
import os
os.environ['HF_DATASETS_CACHE'] = '/workspace/notebooks/POCs/cache'
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration
tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False)
```
Please note that I'm using the whole dataset: **use_dummy_dataset=False**
After around 4 hours (downloading and some other things) this is returned:
```
Downloading and preparing dataset wiki_dpr/psgs_w100.nq.exact (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /workspace/notebooks/POCs/cache/wiki_dpr/psgs_w100.nq.exact/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2...
---------------------------------------------------------------------------
UnpicklingError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)
459 try:
--> 460 return pickle.load(fid, **pickle_kwargs)
461 except Exception:
UnpicklingError: pickle data was truncated
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
552 # Prepare split will record examples associated to the split
--> 553 self._prepare_split(split_generator, **prepare_split_kwargs)
554 except OSError:
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _prepare_split(self, split_generator)
840 for key, record in utils.tqdm(
--> 841 generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose
842 ):
/opt/conda/lib/python3.7/site-packages/tqdm/notebook.py in __iter__(self, *args, **kwargs)
217 try:
--> 218 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):
219 # return super(tqdm...) will not catch exception
/opt/conda/lib/python3.7/site-packages/tqdm/std.py in __iter__(self)
1128 try:
-> 1129 for obj in iterable:
1130 yield obj
~/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2/wiki_dpr.py in _generate_examples(self, data_file, vectors_files)
131 break
--> 132 vecs = np.load(open(vectors_files.pop(0), "rb"), allow_pickle=True)
133 vec_idx = 0
/opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)
462 raise IOError(
--> 463 "Failed to interpret file %s as a pickle" % repr(file))
464 finally:
OSError: Failed to interpret file <_io.BufferedReader name='/workspace/notebooks/POCs/cache/downloads/f34d5f091294259b4ca90e813631e69a6ded660d71b6cbedf89ddba50df94448'> as a pickle
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-10-f28df370ac47> in <module>
1 # ln -s /workspace/notebooks/POCs/cache /root/.cache/huggingface/datasets
----> 2 retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False)
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs)
307 generator_tokenizer = rag_tokenizer.generator
308 return cls(
--> 309 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer
310 )
311
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer)
298 self.config = config
299 if self._init_retrieval:
--> 300 self.init_retrieval()
301
302 @classmethod
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_retrieval(self)
324
325 logger.info("initializing retrieval")
--> 326 self.index.init_index()
327
328 def postprocess_docs(self, docs, input_strings, prefix, n_docs, return_tensors=None):
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_index(self)
238 split=self.dataset_split,
239 index_name=self.index_name,
--> 240 dummy=self.use_dummy_dataset,
241 )
242 self.dataset.set_format("numpy", columns=["embeddings"], output_all_columns=True)
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
609 download_config=download_config,
610 download_mode=download_mode,
--> 611 ignore_verifications=ignore_verifications,
612 )
613
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
474 if not downloaded_from_gcs:
475 self._download_and_prepare(
--> 476 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
477 )
478 # Sync info
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
553 self._prepare_split(split_generator, **prepare_split_kwargs)
554 except OSError:
--> 555 raise OSError("Cannot find data file. " + (self.manual_download_instructions or ""))
556
557 if verify_infos:
OSError: Cannot find data file.
```
Thanks
|
CLOSED
| 2020-10-07T14:27:13
| 2020-12-23T14:04:31
| 2020-12-23T14:04:31
|
https://github.com/huggingface/datasets/issues/720
|
josemlopez
| 3
|
[] |
712
|
Error in the notebooks/Overview.ipynb notebook
|
Hi,
I got the following error in **cell number 3** while exploring the **Overview.ipynb** notebook in google colab. I used the [link ](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) provided in the main README file to open it in colab.
```python
# You can access various attributes of the datasets before downloading them
squad_dataset = list_datasets()[datasets.index('squad')]
pprint(squad_dataset.__dict__) # It's a simple python dataclass
```
Error message
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-8dc805c4949c> in <module>()
2 squad_dataset = list_datasets()[datasets.index('squad')]
3
----> 4 pprint(squad_dataset.__dict__) # It's a simple python dataclass
AttributeError: 'str' object has no attribute '__dict__'
```
The object `squad_dataset` is a `str`, not a `dataclass`.
|
CLOSED
| 2020-10-04T05:58:31
| 2020-10-05T16:25:40
| 2020-10-05T16:25:40
|
https://github.com/huggingface/datasets/issues/712
|
subhrm
| 2
|
[] |
709
|
How to use similarity settings other then "BM25" in Elasticsearch index ?
|
**QUESTION : How should we use other similarity algorithms supported by Elasticsearch other than "BM25" ?**
**ES Reference**
https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-similarity.html
**HF doc reference:**
https://huggingface.co/docs/datasets/faiss_and_ea.html
**context :**
========
I used the latest Elasticsearch server version 7.9.2
When I set DFR, which is one of the other similarity algorithms supported by Elasticsearch, in the mapping, I get an error.
For example, here is the DFR setting I tried first in the mappings:
`"mappings": {"properties": {"text": {"type": "text", "analyzer": "standard", "similarity": "DFR"}}},`
I get the following error
RequestError: RequestError(400, 'mapper_parsing_exception', 'Unknown Similarity type [DFR] for field [text]')
As another option, I tried declaring a custom similarity named "my_similarity" inside the settings and then assigning it in the mappings, as below:
```python
es_config = {
    "settings": {
        "number_of_shards": 1,
        "similarity": {
            "my_similarity": {
                "type": "DFR",
                "basic_model": "g",
                "after_effect": "l",
                "normalization": "h2",
                "normalization.h2.c": "3.0"
            }
        },
        "analysis": {"analyzer": {"stop_standard": {"type": "standard", "stopwords": "_english_"}}},
    },
    "mappings": {"properties": {"text": {"type": "text", "analyzer": "standard", "similarity": "my_similarity"}}},
}
```
For this, I got the following error:
RequestError: RequestError(400, 'illegal_argument_exception', 'unknown setting [index.similarity] please check that any required plugins are installed, or check the breaking changes documentation for removed settings')
|
CLOSED
| 2020-10-03T11:18:49
| 2022-10-04T17:19:37
| 2022-10-04T17:19:37
|
https://github.com/huggingface/datasets/issues/709
|
nsankar
| 1
|
[] |
708
|
Datasets performance slow? - 6.4x slower than in memory dataset
|
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower.
For example, with the `yelp_polarity` dataset (560000 datapoints, or 17500 batches of 32), it was taking me 3:31 just to process the data and get it on the GPU (no model involved), whereas the equivalent in-memory dataset would finish in just 0:33.
Is this expected? Given that one of the goals of this project is also to accelerate dataset processing, this seems a bit slower than I would expect. I understand the advantages of being able to work on datasets that exceed memory, and that's very exciting to me, but thought I'd open this issue to discuss.
For reference I'm running an AMD Ryzen Threadripper 1900X 8-Core Processor CPU, with 128 GB of RAM and an NVMe SSD Samsung 960 EVO. I'm running with an RTX Titan 24GB GPU.
I can see with `iotop` that the dataset gets quickly loaded into the system read buffers, and thus doesn't incur any additional IO reads. Thus in theory, all the data *should* be in RAM, but in my benchmark code below it's still 6.4 times slower.
What am I doing wrong? And is there a way to force the datasets to completely load into memory instead of being memory mapped in cases where you want maximum performance?
At 3:31 for 17500 batches, that's 12ms per batch. Does this 12ms just become insignificant as a proportion of forward and backward passes in practice, and thus it's not worth worrying about this in practice?
In any case, here's my code `benchmark.py`. If you run it with an argument of `memory` it will copy the data into memory before executing the same test.
``` py
import sys
from datasets import load_dataset
from transformers import DataCollatorWithPadding, BertTokenizerFast
from torch.utils.data import DataLoader
from tqdm import tqdm

if __name__ == '__main__':
    tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
    collate_fn = DataCollatorWithPadding(tokenizer, padding=True)
    ds = load_dataset('yelp_polarity')

    def do_tokenize(x):
        return tokenizer(x['text'], truncation=True)

    ds = ds.map(do_tokenize, batched=True)
    ds.set_format('torch', ['input_ids', 'token_type_ids', 'attention_mask'])

    if len(sys.argv) == 2 and sys.argv[1] == 'memory':
        # copy to memory - probably a faster way to do this - but demonstrates the point
        # approximately 530 batches per second - 17500 batches in 0:33
        print('using memory')
        _ds = [data for data in tqdm(ds['train'])]
    else:
        # approximately 83 batches per second - 17500 batches in 3:31
        print('using datasets')
        _ds = ds['train']

    dl = DataLoader(_ds, shuffle=True, collate_fn=collate_fn, batch_size=32, num_workers=4)
    for data in tqdm(dl):
        for k, v in data.items():
            data[k] = v.to('cuda')
```
For reference, my conda environment is [here](https://gist.github.com/05b6101518ff70ed42a858b302a0405d)
Once again, I'm very excited about this library, and how easy it is to load datasets, and to do so without worrying about system memory constraints.
Thanks for all your great work.
|
CLOSED
| 2020-10-03T06:44:07
| 2021-02-12T14:13:28
| 2021-02-12T14:13:28
|
https://github.com/huggingface/datasets/issues/708
|
eugeneware
| 10
|
[] |
707
|
Requirements should specify pyarrow<1
|
I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error,
```
module 'pyarrow' has no attribute 'PyExtensionType'
```
I traced it back to datasets having installed PyArrow 1.0.1, but there's no pin in the setup file.
https://github.com/huggingface/datasets/blob/e86a2a8f869b91654e782c9133d810bb82783200/setup.py#L68
Downgrading by installing `pip install "pyarrow<1"` resolved the issue.
|
CLOSED
| 2020-10-02T23:39:39
| 2020-12-04T08:22:39
| 2020-10-04T20:50:28
|
https://github.com/huggingface/datasets/issues/707
|
mathcass
| 7
|
[] |
705
|
TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'
|
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1 (installed from master)
- `datasets` version: 1.0.2 (installed as a dependency from transformers)
- Platform: Linux-4.15.0-118-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.9
I'm testing my own text classification dataset using [this example](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow) from transformers. The dataset is split into train / dev / test, and is in csv format, containing just a text and a label column, with comma as separator. Here's a sample:
```
text,label
"Registra-se a presença do acadêmico <name> . <REL_SEP> Ao me deparar com a descrição de dois autores no polo ativo da ação junto ao PJe , margem esquerda foi informado pela procuradora do reclamante que se trata de uma reclamação trabalhista individual . <REL_SEP> Diante disso , face a ausência injustificada do autor <name> , determina-se o ARQUIVAMENTO do presente processo , com relação a este , nos termos do [[ art . 844 da CLT ]] . <REL_SEP> CUSTAS AUTOR - DISPENSADO <REL_SEP> Custas pelo autor no importe de R $326,82 , calculadas sobre R $16.341,03 , dispensadas na forma da lei , em virtude da concessão dos benefícios da Justiça Gratuita , ora deferida . <REL_SEP> Cientes os presentes . <REL_SEP> Audiência encerrada às 8h42min . <REL_SEP> <name> <REL_SEP> Juíza do Trabalho <REL_SEP> Ata redigida por << <name> >> , Secretário de Audiência .",NO_RELATION
```
However, @Santosh-Gupta reported in #7351 that he had the exact same problem using the ChemProt dataset. His colab notebook is referenced in the following section.
## To reproduce
Steps to reproduce the behavior:
1. Created a new conda environment using `conda create -n transformers python=3.7`
2. Cloned transformers master, `cd` into it and installed using `pip install --editable . -r examples/requirements.txt`
3. Installed tensorflow with `pip install tensorflow`
4. Ran `run_tf_text_classification.py` with the following parameters:
```
--train_file <DATASET_PATH>/train.csv \
--dev_file <DATASET_PATH>/dev.csv \
--test_file <DATASET_PATH>/test.csv \
--label_column_id 1 \
--model_name_or_path neuralmind/bert-base-portuguese-cased \
--output_dir <OUTPUT_PATH> \
--num_train_epochs 4 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--do_train \
--do_eval \
--do_predict \
--logging_steps 1000 \
--evaluate_during_training \
--save_steps 1000 \
--overwrite_output_dir \
--overwrite_cache
```
I have also copied [@Santosh-Gupta 's colab notebook](https://colab.research.google.com/drive/11APei6GjphCZbH5wD9yVlfGvpIkh8pwr?usp=sharing) as a reference.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Here is the stack trace:
```
2020-10-02 07:33:41.622011: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
/media/discoD/repositorios/transformers_pedro/src/transformers/training_args.py:333: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options)
FutureWarning,
2020-10-02 07:33:43.471648: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-10-02 07:33:43.471791: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.472664: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1
coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s
2020-10-02 07:33:43.472684: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-10-02 07:33:43.472765: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-10-02 07:33:43.472809: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-10-02 07:33:43.472848: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-10-02 07:33:43.474209: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-10-02 07:33:43.474276: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-10-02 07:33:43.561219: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-10-02 07:33:43.561397: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.562345: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.563219: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-10-02 07:33:43.563595: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-10-02 07:33:43.570091: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3591830000 Hz
2020-10-02 07:33:43.570494: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x560842432400 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-10-02 07:33:43.570511: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-10-02 07:33:43.570702: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.571599: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1
coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s
2020-10-02 07:33:43.571633: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-10-02 07:33:43.571645: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-10-02 07:33:43.571654: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-10-02 07:33:43.571664: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-10-02 07:33:43.571691: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-10-02 07:33:43.571704: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-10-02 07:33:43.571718: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-10-02 07:33:43.571770: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.572641: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.573475: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-10-02 07:33:47.139227: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-10-02 07:33:47.139265: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0
2020-10-02 07:33:47.139272: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N
2020-10-02 07:33:47.140323: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:47.141248: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:47.142085: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:47.142854: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5371 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
2020-10-02 07:33:47.146317: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5608b95dc5c0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-10-02 07:33:47.146336: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1070, Compute Capability 6.1
10/02/2020 07:33:47 - INFO - __main__ - n_replicas: 1, distributed training: False, 16-bits training: False
10/02/2020 07:33:47 - INFO - __main__ - Training/evaluation parameters TFTrainingArguments(output_dir='/media/discoD/models/datalawyer/pedidos/transformers_tf', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=True, evaluate_during_training=True, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=4, per_device_eval_batch_size=4, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=4.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Oct02_07-33-43_user-XPS-8700', logging_first_step=False, logging_steps=1000, save_steps=1000, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, dataloader_num_workers=0, past_index=-1, run_name='/media/discoD/models/datalawyer/pedidos/transformers_tf', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=False, tpu_name=None, xla=False)
10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 acquired on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock
10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 released on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock
Using custom data configuration default
Traceback (most recent call last):
File "run_tf_text_classification.py", line 283, in <module>
main()
File "run_tf_text_classification.py", line 222, in main
max_seq_length=data_args.max_seq_length,
File "run_tf_text_classification.py", line 43, in get_tfds
ds = datasets.load_dataset("csv", data_files=files)
File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/load.py", line 604, in load_dataset
**config_kwargs,
File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 158, in __init__
**config_kwargs,
File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 269, in _create_builder_config
for key in sorted(data_files.keys()):
TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'
```
## Expected behavior
Should be able to run the text-classification example as described in [https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow)
Originally opened this issue at transformers' repository: [https://github.com/huggingface/transformers/issues/7535](https://github.com/huggingface/transformers/issues/7535). @jplu instructed me to open here, since according to [this](https://github.com/huggingface/transformers/issues/7535#issuecomment-702778885) evidence, the problem is from datasets.
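As a hedged workaround sketch while the `NamedSplit` sorting is fixed: when calling `load_dataset` yourself, passing plain string split names as the `data_files` keys avoids the comparison entirely (the transformers script builds its own `data_files` dict internally, so it would need the same change there; file paths below are placeholders):
```python
import datasets

data_files = {
    "train": "train.csv",
    "validation": "dev.csv",
    "test": "test.csv",
}
# With string keys, sorted(data_files.keys()) compares strings, not NamedSplit objects.
ds = datasets.load_dataset("csv", data_files=data_files)
```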
Thanks!
|
CLOSED
| 2020-10-02T15:27:55
| 2020-10-05T08:14:59
| 2020-10-05T08:14:59
|
https://github.com/huggingface/datasets/issues/705
|
pvcastro
| 2
|
[] |
699
|
XNLI dataset is not loading
|
`dataset = datasets.load_dataset(path='xnli')`
showing below error
```
/opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
36 if len(bad_urls) > 0:
37 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 38 raise NonMatchingChecksumError(error_msg + str(bad_urls))
39 logger.info("All the checksums matched successfully" + for_verification_name)
40
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip']
```
I think the URL has now changed to "https://cims.nyu.edu/~sbowman/xnli/XNLI-MT-1.0.zip"
|
CLOSED
| 2020-10-02T06:53:16
| 2020-10-03T17:45:52
| 2020-10-03T17:43:37
|
https://github.com/huggingface/datasets/issues/699
|
imadarsh1001
| 3
|
[] |
691
|
Add UI filter to filter datasets based on task
|
This is great work, so huge shoutout to contributors and huggingface.
The [/nlp/viewer](https://huggingface.co/nlp/viewer/) is great and the [/datasets](https://huggingface.co/datasets) page is great. I was wondering if in both or either places we can have a filter that selects if a dataset is good for the following tasks (non exhaustive list)
- Classification
- Multi label
- Multi class
- Q&A
- Summarization
- Translation
I believe this feature might have some value, for folks trying to find datasets for a particular task, and then testing their model capabilities.
Thank you :)
|
CLOSED
| 2020-10-01T00:56:18
| 2022-02-15T10:46:50
| 2022-02-15T10:46:50
|
https://github.com/huggingface/datasets/issues/691
|
praateekmahajan
| 1
|
[
"enhancement"
] |
690
|
XNLI dataset: NonMatchingChecksumError
|
Hi,
I tried to download the "xnli" dataset in colab using
`xnli = load_dataset(path='xnli')`
but got a 'NonMatchingChecksumError':
```
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-27-a87bedc82eeb> in <module>()
----> 1 xnli = load_dataset(path='xnli')
3 frames
/usr/local/lib/python3.6/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
37 if len(bad_urls) > 0:
38 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls))
40 logger.info("All the checksums matched successfully" + for_verification_name)
41
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip']
```
The same code worked well several days ago in colab but stopped working now. Thanks!
|
CLOSED
| 2020-09-30T17:50:03
| 2020-10-01T17:15:08
| 2020-10-01T14:01:14
|
https://github.com/huggingface/datasets/issues/690
|
xiey1
| 5
|
[] |
687
|
`ArrowInvalid` occurs while running `Dataset.map()` function
|
It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error.
Code:
```python
# train_ds = Dataset(features: {
#     'title': Value(dtype='string', id=None),
#     'score': Value(dtype='float64', id=None)
# }, num_rows: 99999)

# Import paths depend on the transformers version; in 3.x both classes live in this module.
from transformers.tokenization_bert_japanese import BertJapaneseTokenizer, MecabTokenizer

# suggested in #665
class PicklableTokenizer(BertJapaneseTokenizer):

    def __getstate__(self):
        state = dict(self.__dict__)
        state['do_lower_case'] = self.word_tokenizer.do_lower_case
        state['never_split'] = self.word_tokenizer.never_split
        del state['word_tokenizer']
        return state

    def __setstate__(self, state):  # was `__setstate(self)`: dunder name and `state` argument were missing
        do_lower_case = state.pop('do_lower_case')
        never_split = state.pop('never_split')
        self.__dict__ = state
        self.word_tokenizer = MecabTokenizer(
            do_lower_case=do_lower_case, never_split=never_split
        )

t = PicklableTokenizer.from_pretrained('bert-base-japanese-whole-word-masking')

encoded = train_ds.map(
    lambda examples: {'tokens': t.encode(examples['title'], max_length=1000)}, batched=True, batch_size=1000
)
```
Error Message:
```
99% 99/100 [00:22<00:00, 39.07ba/s]
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
<timed exec> in <module>
/usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1242 fn_kwargs=fn_kwargs,
1243 new_fingerprint=new_fingerprint,
-> 1244 update_data=update_data,
1245 )
1246 else:
/usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
151 "output_all_columns": self._output_all_columns,
152 }
--> 153 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
154 if new_format["columns"] is not None:
155 new_format["columns"] = list(set(new_format["columns"]) & set(out.column_names))
/usr/local/lib/python3.6/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
161 # Call actual function
162
--> 163 out = func(self, *args, **kwargs)
164
165 # Update fingerprint of in-place transforms + update in-place history of transforms
/usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, update_data)
1496 if update_data:
1497 batch = cast_to_python_objects(batch)
-> 1498 writer.write_batch(batch)
1499 if update_data:
1500 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
/usr/local/lib/python3.6/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
271 typed_sequence = TypedSequence(batch_examples[col], type=col_type, try_type=col_try_type)
272 typed_sequence_examples[col] = typed_sequence
--> 273 pa_table = pa.Table.from_pydict(typed_sequence_examples)
274 self.write_table(pa_table)
275
/usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict()
/usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_arrays()
/usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.validate()
/usr/local/lib/python3.6/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: Column 4 named tokens expected length 999 but got length 1000
```
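A hedged guess at the immediate cause: the function passed to `map` must return exactly one `tokens` entry per input row, and calling `tokenizer.encode` on the whole list of titles returns a single flat sequence instead (truncated to `max_length`, which only coincidentally matches the full batch size and then breaks on the final, shorter batch). A per-example sketch, reusing `t` and `train_ds` from the reproducer above:
```python
# One encoded list per title, so the returned column has the same length as the batch.
encoded = train_ds.map(
    lambda examples: {"tokens": [t.encode(title, max_length=1000) for title in examples["title"]]},
    batched=True,
    batch_size=1000,
)
```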
|
CLOSED
| 2020-09-30T06:16:50
| 2020-09-30T09:53:03
| 2020-09-30T09:53:03
|
https://github.com/huggingface/datasets/issues/687
|
peinan
| 2
|
[] |
686
|
Dataset browser url is still https://huggingface.co/nlp/viewer/
|
Might be worth updating to https://huggingface.co/datasets/viewer/
|
CLOSED
| 2020-09-29T19:21:52
| 2021-01-08T18:29:26
| 2021-01-08T18:29:26
|
https://github.com/huggingface/datasets/issues/686
|
jarednielsen
| 2
|
[] |
678
|
The download instructions for c4 datasets are not contained in the error message
|
The manual download instructions are not clear
```
The dataset c4 with config en requires manual data.
Please follow the manual download instructions: <bound method C4.manual_download_instructions of <datasets_modules.datasets.c4.830b0c218bd41fed439812c8dd19dbd4767d2a3faa385eb695cf8666c982b1b3.c4.C4 object at 0x7ff8c5969760>>.
Manual data can be loaded with `datasets.load_dataset(c4, data_dir='<path/to/manual/data>')
```
Either `@property` could be added to `C4.manual_download_instructions` (i.e. make it a real property), or the `manual_download_instructions` method needs to be called, I think.
Let me know if you want a PR for this, but I'm not sure which possible fix is the correct one.
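For illustration, a small self-contained example (class names are made up) of why the message currently prints `<bound method ...>`: interpolating a method object into a string shows its repr, while a property, or an explicit call, yields the actual text:
```python
class WithMethod:
    def manual_download_instructions(self):
        return "download the data manually"

class WithProperty:
    @property
    def manual_download_instructions(self):
        return "download the data manually"

print(f"{WithMethod().manual_download_instructions}")    # <bound method WithMethod.manual_download_instructions of ...>
print(f"{WithProperty().manual_download_instructions}")  # download the data manually
```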
|
CLOSED
| 2020-09-28T08:30:54
| 2020-09-28T10:26:09
| 2020-09-28T10:26:09
|
https://github.com/huggingface/datasets/issues/678
|
Narsil
| 2
|
[] |
676
|
train_test_split returns empty dataset item
|
I try to split my dataset with `train_test_split`, but after that the items in the `train` and `test` `Dataset` are empty.
The code:
```
yelp_data = datasets.load_from_disk('/home/ssd4/huanglianzhe/test_yelp')
print(yelp_data[0])
yelp_data = yelp_data.train_test_split(test_size=0.1)
print(yelp_data)
print(yelp_data['test'])
print(yelp_data['test'][0])
```
The outputs:
```
{'stars': 2.0, 'text': 'xxxx'}
Loading cached split indices for dataset at /home/ssd4/huanglianzhe/test_yelp/cache-f9b22d8b9d5a7346.arrow and /home/ssd4/huanglianzhe/test_yelp/cache-4aa26fa4005059d1.arrow
DatasetDict({'train': Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 7219009), 'test': Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 802113)})
Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 802113)
{} # yelp_data['test'][0] is empty
```
|
CLOSED
| 2020-09-28T07:19:33
| 2020-10-07T13:46:33
| 2020-10-07T13:38:06
|
https://github.com/huggingface/datasets/issues/676
|
mojave-pku
| 4
|
[] |
675
|
Add custom dataset to NLP?
|
Is it possible to add a custom dataset such as a .csv to the NLP library?
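A short sketch of one way to do it with the generic `csv` loader (the file name here is a placeholder):
```python
from datasets import load_dataset

dataset = load_dataset("csv", data_files="my_dataset.csv")
print(dataset["train"][0])  # local CSVs are loaded into a "train" split by default
```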
Thanks.
|
CLOSED
| 2020-09-27T21:22:50
| 2020-10-20T09:08:49
| 2020-10-20T09:08:49
|
https://github.com/huggingface/datasets/issues/675
|
timpal0l
| 2
|
[] |
674
|
load_dataset() won't download in Windows
|
I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefinitely without downloading anything. I've waited upwards of 18 hours to download the 'multi-news' dataset (which isn't very big), and still nothing. I've tried running it through different IDE's and the command line, but it had the same behavior. I've also tried it with all virus and malware protection turned off. I've made sure python and all IDE's are exceptions to the firewall and all the requisite permissions are enabled.
Additionally, I checked to see if other packages could download content such as an nltk corpus, and they could. I've also run the same script using Ubuntu and it downloaded fine (and quickly). When I copied the downloaded datasets from my Ubuntu drive to my Windows .cache folder it worked fine by reusing the already-downloaded dataset, but it's cumbersome to do that for every dataset I want to try in my Windows environment.
Could this be a bug, or is there something I'm doing wrong or not thinking of?
Thanks.
|
CLOSED
| 2020-09-27T03:56:25
| 2020-10-05T08:28:18
| 2020-10-05T08:28:18
|
https://github.com/huggingface/datasets/issues/674
|
ThisDavehead
| 3
|
[] |
673
|
blog_authorship_corpus crashed
|
This is just to report that when I pick blog_authorship_corpus in
https://huggingface.co/nlp/viewer/?dataset=blog_authorship_corpus
I get this:

|
CLOSED
| 2020-09-26T20:15:28
| 2022-02-15T10:47:58
| 2022-02-15T10:47:58
|
https://github.com/huggingface/datasets/issues/673
|
Moshiii
| 1
|
[
"nlp-viewer"
] |
672
|
Questions about XSUM
|
Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, num_rows: 204017)
>>> data['test']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, num_rows: 11333)
```
The first issue is, the instance counts don’t match what I see on [the dataset's website](https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset#what-builds-the-xsum-dataset) (11,333 vs 11,334 for test set; 204,017 vs 204,045 for training set)
```
… training (90%, 204,045), validation (5%, 11,332), and test (5%, 11,334) set.
```
Any thoughts why? Perhaps @mariamabarham could help here, since she recently had a PR on this dataset https://github.com/huggingface/datasets/pull/289 (reviewed by @patrickvonplaten)
Another issue is that the instances don't seem to have IDs. The original dataset provides IDs for the instances: https://github.com/EdinburghNLP/XSum/blob/master/XSum-Dataset/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json but to be able to use them, the dataset sizes need to match.
CC @jbragg
|
CLOSED
| 2020-09-26T17:16:24
| 2022-10-04T17:30:17
| 2022-10-04T17:30:17
|
https://github.com/huggingface/datasets/issues/672
|
danyaljj
| 14
|
[] |
671
|
[BUG] No such file or directory
|
This happens when both
1. Huggingface datasets cache dir does not exist
2. Try to load a local dataset script
builder.py throws an error when trying to create a filelock in a directory (cache/datasets) that does not exist
https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L177
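A minimal sketch of the missing step, using the standalone `filelock` package for illustration (the library bundles its own `FileLock`; the cache path below is illustrative):
```python
import os
from filelock import FileLock

cache_dir = os.path.expanduser("~/.cache/huggingface/datasets")  # illustrative path
os.makedirs(cache_dir, exist_ok=True)  # the step that is currently missing before the lock is created
with FileLock(os.path.join(cache_dir, "my_local_script.lock")):
    pass  # safe to proceed; the lock file's parent directory now exists
```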
Tested on v1.0.2
@lhoestq
|
CLOSED
| 2020-09-25T16:38:54
| 2020-09-28T14:42:42
| 2020-09-28T14:42:42
|
https://github.com/huggingface/datasets/issues/671
|
jbragg
| 0
|
[] |
669
|
How to skip a example when running dataset.map
|
In my processing function, I process examples and detect some invalid ones that I do not want added to the train dataset. However, I did not find how to skip these invalid examples when doing `dataset.map`.
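A common pattern for this (a sketch with a made-up validity check) is to drop the invalid rows with `Dataset.filter` first, since `map` must return one output per input:
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["good example", "", "another good one"]})

def is_valid(example):
    # placeholder check; replace with your own invalid-example detection
    return len(example["text"]) > 0

ds = ds.filter(is_valid)                              # removes the invalid rows
ds = ds.map(lambda ex: {"text": ex["text"].lower()})  # then process the remaining examples
```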
|
CLOSED
| 2020-09-25T11:17:53
| 2022-06-17T21:45:03
| 2020-10-05T16:28:13
|
https://github.com/huggingface/datasets/issues/669
|
xixiaoyao
| 3
|
[] |