number (int64) | title (string) | body (string) | state (string) | created_at (timestamp) | updated_at (timestamp) | closed_at (timestamp) | url (string) | author (string) | comments_count (int64) | labels (list) |
|---|---|---|---|---|---|---|---|---|---|---|
2,462 | Merge DatasetDict and Dataset | As discussed in #2424 and #2437 (please see there for detailed conversation):
- It would be desirable to improve the UX with respect to the confusion between DatasetDict and Dataset.
- The difference between Dataset and DatasetDict is an additional abstraction complexity that confuses "typical" end users.
- A user expects a "Dataset" (whether it contains multiple splits or a single one), and it could be interesting to try to simplify the user-facing API as much as possible to hide this complexity from the end user.
Here is a proposal for discussion, to be refined (and potentially abandoned if it's not good enough):
- let's consider that a DatasetDict is also a Dataset with the various splits concatenated one after the other
- let's disallow the use of integers in split names (probably not a very big breaking change)
- when you index with integers you access the examples progressively, moving on to the next split after the previous one is exhausted (in a deterministic order)
- when you index with strings/split name you have the same behavior as now (full backward compat)
- let's then also have all the methods of a Dataset on the DatasetDict
The end goal would be to merge both Dataset and DatasetDict object in a single object that would be (pretty much totally) backward compatible with both.
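To make the proposed indexing behavior concrete, here is a purely hypothetical sketch (this is the proposed API, not the current one):
```python
# hypothetical merged object with splits "train" (800 examples) and "test" (200 examples)
dataset[0]        # first example of the "train" split
dataset[799]      # last example of the "train" split
dataset[800]      # first example of the "test" split (splits virtually concatenated, deterministic order)
dataset["test"]   # the "test" split itself, same behavior as today (backward compatible)
len(dataset)      # 1000
```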
There are a few things that we could discuss if we want to merge Dataset and DatasetDict:
1. what happens if you index by a string? Does it return the column or the split? We could disallow conflicts between column names and split names to avoid ambiguities. It can be surprising to be able to get a column or a split using the same indexing feature:
```
from datasets import load_dataset
dataset = load_dataset(...)
dataset["train"]
dataset["input_ids"]
```
2. what happens when you iterate over the object? I guess it should iterate over the examples as a Dataset object, but a DatasetDict used to iterate over the splits as they are the dictionary keys. This is a breaking change that we can discuss.
Moreover regarding your points:
- integers are not allowed as split names already
- it's definitely doable to have all the methods. Maybe some of them, like train_test_split (currently only available for Dataset), can be tweaked to work for a split dataset
cc: @thomwolf @lhoestq | OPEN | 2021-06-08T19:22:04 | 2023-08-16T09:34:34 | null | https://github.com/huggingface/datasets/issues/2462 | albertvillanova | 2 | [
"enhancement",
"generic discussion"
] |
2,459 | `Proto_qa` hosting seems to be broken | ## Describe the bug
The hosting (on Github) of the `proto_qa` dataset seems broken. I haven't investigated more yet, just flagging it for now.
@zaidalyafeai if you want to dive into it, I think it's just a matter of changing the links in `proto_qa.py`
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("proto_qa")
```
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/load.py", line 751, in load_dataset
use_auth_token=use_auth_token,
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 630, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/hf/.cache/huggingface/modules/datasets_modules/datasets/proto_qa/445346efaad5c5f200ecda4aa7f0fb50ff1b55edde3003be424a2112c3e8102e/proto_qa.py", line 131, in _split_generators
train_fpath = dl_manager.download(_URLs[self.config.name]["train"])
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 199, in download
num_proc=download_config.num_proc,
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 195, in map_nested
return function(data_struct)
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 218, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 291, in cached_path
use_auth_token=download_config.use_auth_token,
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/iesl/protoqa-data/master/data/train/protoqa_train.jsonl
``` | CLOSED | 2021-06-08T16:16:32 | 2021-06-10T08:31:09 | 2021-06-10T08:31:09 | https://github.com/huggingface/datasets/issues/2459 | VictorSanh | 1 | [
"bug"
] |
2,458 | Revert default in-memory for small datasets | Users are reporting issues and confusion about setting default in-memory to True for small datasets.
We see 2 clear use cases of Datasets:
- the "canonical" way, where you can work with very large datasets, as they are memory-mapped and cached (after every transformation)
- some edge cases (speed benchmarks, interactive/exploratory analysis,...), where default in-memory can explicitly be enabled, and no caching will be done
After discussing with @lhoestq we have agreed to:
- revert this feature (implemented in #2182)
- explain in the docs how to optimize speed/performance by setting default in-memory
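For reference, a minimal sketch of what the explicit opt-in looks like after the revert (the `keep_in_memory` argument is assumed to remain available):
```python
from datasets import load_dataset

ds_default = load_dataset("sst", split="train")                          # memory-mapped and cached on disk (default)
ds_in_memory = load_dataset("sst", split="train", keep_in_memory=True)   # explicitly loaded in RAM, no cache files
```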
cc: @stas00 https://github.com/huggingface/datasets/pull/2409#issuecomment-856210552 | CLOSED | 2021-06-08T15:51:41 | 2021-06-08T18:57:11 | 2021-06-08T17:55:43 | https://github.com/huggingface/datasets/issues/2458 | albertvillanova | 1 | [
"enhancement"
] |
2,452 | MRPC test set differences between torch and tensorflow datasets | ## Describe the bug
When using `load_dataset("glue", "mrpc")` to load the MRPC dataset, the test set includes the labels. When using `tensorflow_datasets.load('glue/{}'.format('mrpc'))` to load the dataset the test set does not contain the labels. There should be consistency between torch and tensorflow ways of importing the GLUE datasets.
## Steps to reproduce the bug
Minimal working code
```python
from datasets import load_dataset
import tensorflow as tf
import tensorflow_datasets
# torch
dataset = load_dataset("glue", "mrpc")
# tf
data = tensorflow_datasets.load('glue/{}'.format('mrpc'))
data = list(data['test'].as_numpy_iterator())
for i in range(40,50):
tf_sentence1 = data[i]['sentence1'].decode("utf-8")
tf_sentence2 = data[i]['sentence2'].decode("utf-8")
tf_label = data[i]['label']
index = data[i]['idx']
print('Index {}'.format(index))
torch_sentence1 = dataset['test']['sentence1'][index]
torch_sentence2 = dataset['test']['sentence2'][index]
torch_label = dataset['test']['label'][index]
print('Tensorflow: \n\tSentence1 {}\n\tSentence2 {}\n\tLabel {}'.format(tf_sentence1, tf_sentence2, tf_label))
print('Torch: \n\tSentence1 {}\n\tSentence2 {}\n\tLabel {}'.format(torch_sentence1, torch_sentence2, torch_label))
```
Sample output
```
Index 954
Tensorflow:
Sentence1 Sabri Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate yesterday on charges of violating U.S. arms-control laws .
Sentence2 The elder Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate Wednesday on charges of violating U.S. arms control laws .
Label -1
Torch:
Sentence1 Sabri Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate yesterday on charges of violating U.S. arms-control laws .
Sentence2 The elder Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate Wednesday on charges of violating U.S. arms control laws .
Label 1
Index 711
Tensorflow:
Sentence1 Others keep records sealed for as little as five years or as much as 30 .
Sentence2 Some states make them available immediately ; others keep them sealed for as much as 30 years .
Label -1
Torch:
Sentence1 Others keep records sealed for as little as five years or as much as 30 .
Sentence2 Some states make them available immediately ; others keep them sealed for as much as 30 years .
Label 0
```
## Expected results
I would expect the datasets to be independent of whether I am working with torch or tensorflow.
## Actual results
Test set labels are provided by `datasets.load_dataset()` for MRPC. However, MRPC is the only task where the test set labels are not -1.
## Environment info
- `datasets` version: 1.7.0
- Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyArrow version: 3.0.0
| CLOSED | 2021-06-07T14:20:26 | 2021-06-07T14:34:32 | 2021-06-07T14:34:32 | https://github.com/huggingface/datasets/issues/2452 | FredericOdermatt | 1 | [
"bug"
] |
2,450 | BLUE file not found | Hi, I'm having the following issue when I try to load the `blue` metric.
```shell
import datasets
metric = datasets.load_metric('blue')
Traceback (most recent call last):
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 320, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 291, in cached_path
use_auth_token=download_config.use_auth_token,
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.7.0/metrics/blue/blue.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 332, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 291, in cached_path
use_auth_token=download_config.use_auth_token,
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/metrics/blue/blue.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 605, in load_metric
dataset=False,
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 343, in prepare_module
combined_path, github_file_path
FileNotFoundError: Couldn't find file locally at blue/blue.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.7.0/metrics/blue/blue.py.
The file is also not present on the master branch on github.
```
Here is dataset installed version info
```shell
pip freeze | grep datasets
datasets==1.7.0
```
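For what it's worth, the metric identifier in the library appears to be spelled `bleu`, not `blue`, so the intended call would presumably be:
```python
import datasets

metric = datasets.load_metric('bleu')  # note the spelling: "bleu", not "blue"
```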
| CLOSED | 2021-06-06T17:01:54 | 2021-06-07T10:46:15 | 2021-06-07T10:46:15 | https://github.com/huggingface/datasets/issues/2450 | mirfan899 | 2 | [] |
2,447 | dataset adversarial_qa has no answers in the "test" set | ## Describe the bug
When loading the adversarial_qa dataset the 'test' portion has no answers. Only the 'train' and 'validation' portions do. This occurs with all four of the configs ('adversarialQA', 'dbidaf', 'dbert', 'droberta')
## Steps to reproduce the bug
```
from datasets import load_dataset
examples = load_dataset('adversarial_qa', 'adversarialQA', script_version="master")['test']
print('Loaded {:,} examples'.format(len(examples)))
has_answers = 0
for e in examples:
if e['answers']['text']:
has_answers += 1
print('{:,} have answers'.format(has_answers))
>>> Loaded 3,000 examples
>>> 0 have answers
examples = load_dataset('adversarial_qa', 'adversarialQA', script_version="master")['validation']
<...code above...>
>>> Loaded 3,000 examples
>>> 3,000 have answers
```
## Expected results
If 'test' is a valid dataset, it should have answers. Also note that all of the 'train' and 'validation' examples have answers; there are no "no answer" questions in this set (not sure if this is correct or not).
## Environment info
- `datasets` version: 1.7.0
- Platform: Linux-5.8.0-53-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyArrow version: 1.0.0
| CLOSED | 2021-06-05T14:57:38 | 2021-06-07T11:13:07 | 2021-06-07T11:13:07 | https://github.com/huggingface/datasets/issues/2447 | bjascob | 2 | [
"bug"
] |
2,446 | `yelp_polarity` is broken | (screenshot of the `yelp_polarity` loading error)
| CLOSED | 2021-06-04T15:44:29 | 2021-06-04T18:56:47 | 2021-06-04T18:56:47 | https://github.com/huggingface/datasets/issues/2446 | JetRunner | 2 | [] |
2,444 | Sentence Boundaries missing in Dataset: xtreme / udpos | I was browsing through annotation guidelines, as suggested by the datasets introduction.
The guidelines say "There must be exactly one blank line after every sentence, including the last sentence in the file. Empty sentences are not allowed." in the [Sentence Boundaries and Comments section](https://universaldependencies.org/format.html#sentence-boundaries-and-comments).
But the sentence boundaries do not seem to be well represented by the huggingface datasets features. I found out that multiple sentences are concatenated together as a 1D array, without any delimiter.
PAN-X, which is another token classification subset from xtreme, does represent the sentence boundary using a 2D array.
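For instance, the difference shows up when loading the two subsets side by side (a minimal sketch; the config names are assumed from the xtreme loading script):
```python
from datasets import load_dataset

panx = load_dataset("xtreme", "PAN-X.en", split="train")
udpos = load_dataset("xtreme", "udpos.English", split="train")

print(panx[0]["tokens"])   # tokens of a single sentence (sentence boundaries preserved)
print(udpos[0]["tokens"])  # tokens of many sentences flattened into one list (boundaries lost)
```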
You may compare PAN-X.en and udpos.English in the explorer:
https://huggingface.co/datasets/viewer/?dataset=xtreme | CLOSED | 2021-06-04T09:10:26 | 2021-06-18T11:53:43 | 2021-06-18T11:53:43 | https://github.com/huggingface/datasets/issues/2444 | cosmeowpawlitan | 2 | [
"bug"
] |
2,443 | Some tests hang on Windows | Currently, several tests hang on Windows if the max path limit of 260 characters is not disabled. This happens due to the changes introduced by #2223 that cause an infinite loop in `WindowsFileLock` described in #2220. This can be very tricky to debug, so I think now is a good time to address these issues/PRs. IMO throwing an error is too harsh, but maybe we can emit a warning in the top-level `__init__.py` on startup if long paths are not enabled.
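A rough sketch of such a startup warning (an illustration, not the library's actual code) could look like this:
```python
import sys
import warnings

def long_paths_enabled() -> bool:
    """Check the Windows registry flag that lifts the 260-character MAX_PATH limit."""
    if sys.platform != "win32":
        return True
    import winreg  # standard library, Windows only
    try:
        key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, r"SYSTEM\CurrentControlSet\Control\FileSystem")
        value, _ = winreg.QueryValueEx(key, "LongPathsEnabled")
        return bool(value)
    except OSError:
        return False

if not long_paths_enabled():
    warnings.warn("Windows long paths are not enabled; cache paths longer than 260 characters may hang or fail.")
```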
| CLOSED | 2021-06-03T00:27:30 | 2021-06-28T08:47:39 | 2021-06-28T08:47:39 | https://github.com/huggingface/datasets/issues/2443 | mariosasko | 3 | [
"bug"
] |
2,441 | DuplicatedKeysError on personal dataset | ## Describe the bug
Ever since today, I have been getting a DuplicatedKeysError while trying to load my dataset from my own script.
Error returned when running this line: `dataset = load_dataset('/content/drive/MyDrive/Thesis/Datasets/book_preprocessing/goodreads_maharjan_trimmed_and_nered/goodreadsnered.py')`
Note that my script was working fine with earlier versions of the Datasets library. Cannot say with 100% certainty if I have been doing something wrong with my dataset script this whole time or if this is simply a bug with the new version of datasets.
## Steps to reproduce the bug
I cannot provide code to reproduce the error as I am working with my own dataset. I can however provide my script if requested.
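For context, here is a generic sketch (not the script in question) of how a loading script's `_generate_examples` is expected to yield unique, deterministic keys — yielding the same key (e.g. `0`) for every row is what triggers `DuplicatedKeysError`:
```python
def _generate_examples(self, filepath):
    # each yielded key must be unique and deterministic, e.g. a running index or a stable id from the data
    with open(filepath, encoding="utf-8") as f:
        for idx, line in enumerate(f):
            yield idx, {"text": line.strip()}
```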
## Expected results
For my data to be loaded.
## Actual results
**DuplicatedKeysError** exception is raised
```
Downloading and preparing dataset good_reads_practice_dataset/main_domain (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/good_reads_practice_dataset/main_domain/1.1.0/64ff7c3fee2693afdddea75002eb6887d4fedc3d812ae3622128c8504ab21655...
---------------------------------------------------------------------------
DuplicatedKeysError Traceback (most recent call last)
<ipython-input-6-c342ea0dae9d> in <module>()
----> 1 dataset = load_dataset('/content/drive/MyDrive/Thesis/Datasets/book_preprocessing/goodreads_maharjan_trimmed_and_nered/goodreadsnered.py')
5 frames
/usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, **config_kwargs)
749 try_from_hf_gcs=try_from_hf_gcs,
750 base_path=base_path,
--> 751 use_auth_token=use_auth_token,
752 )
753
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
573 if not downloaded_from_gcs:
574 self._download_and_prepare(
--> 575 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
576 )
577 # Sync info
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
650 try:
651 # Prepare split will record examples associated to the split
--> 652 self._prepare_split(split_generator, **prepare_split_kwargs)
653 except OSError as e:
654 raise OSError(
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split(self, split_generator)
990 writer.write(example, key)
991 finally:
--> 992 num_examples, num_bytes = writer.finalize()
993
994 split_generator.split_info.num_examples = num_examples
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in finalize(self, close_stream)
407 # In case current_examples < writer_batch_size, but user uses finalize()
408 if self._check_duplicates:
--> 409 self.check_duplicate_keys()
410 # Re-intializing to empty list for next batch
411 self.hkey_record = []
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in check_duplicate_keys(self)
347 for hash, key in self.hkey_record:
348 if hash in tmp_record:
--> 349 raise DuplicatedKeysError(key)
350 else:
351 tmp_record.add(hash)
DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 0
Keys should be unique and deterministic in nature
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.7.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.9
- PyArrow version: 3.0.0
| CLOSED | 2021-06-01T17:59:41 | 2021-06-04T23:50:03 | 2021-06-04T23:50:03 | https://github.com/huggingface/datasets/issues/2441 | lucaguarro | 2 | [
"bug"
] |
2,440 | Remove `extended` field from dataset tagger | ## Describe the bug
While working on #2435 I used the [dataset tagger](https://huggingface.co/datasets/tagging/) to generate the missing tags for the YAML metadata of each README.md file. However, it seems that our CI raises an error when the `extended` field is included:
```
dataset_name = 'arcd'
@pytest.mark.parametrize("dataset_name", get_changed_datasets(repo_path))
def test_changed_dataset_card(dataset_name):
card_path = repo_path / "datasets" / dataset_name / "README.md"
assert card_path.exists()
error_messages = []
try:
ReadMe.from_readme(card_path)
except Exception as readme_error:
error_messages.append(f"The following issues have been found in the dataset cards:\nREADME:\n{readme_error}")
try:
DatasetMetadata.from_readme(card_path)
except Exception as metadata_error:
error_messages.append(
f"The following issues have been found in the dataset cards:\nYAML tags:\n{metadata_error}"
)
if error_messages:
> raise ValueError("\n".join(error_messages))
E ValueError: The following issues have been found in the dataset cards:
E YAML tags:
E __init__() got an unexpected keyword argument 'extended'
tests/test_dataset_cards.py:70: ValueError
```
Consider either removing this tag from the tagger or including it as part of the validation step in the CI.
cc @yjernite | CLOSED | 2021-06-01T17:18:42 | 2021-06-09T09:06:31 | 2021-06-09T09:06:30 | https://github.com/huggingface/datasets/issues/2440 | lewtun | 4 | [
"bug"
] |
2,434 | Extend QuestionAnsweringExtractive template to handle nested columns | Currently the `QuestionAnsweringExtractive` task template and `prepare_for_task` only support "flat" features. We should extend the functionality to cover QA datasets like:
* `iapp_wiki_qa_squad`
* `parsinlu_reading_comprehension`
where the nested features differ from those of `squad` and trigger an `ArrowNotImplementedError`:
```
---------------------------------------------------------------------------
ArrowNotImplementedError Traceback (most recent call last)
<ipython-input-12-50e5b8f69c20> in <module>
----> 1 ds.prepare_for_task("question-answering-extractive")[0]
~/git/datasets/src/datasets/arrow_dataset.py in prepare_for_task(self, task)
1436 # We found a template so now flush `DatasetInfo` to skip the template update in `DatasetInfo.__post_init__`
1437 dataset.info.task_templates = None
-> 1438 dataset = dataset.cast(features=template.features)
1439 return dataset
1440
~/git/datasets/src/datasets/arrow_dataset.py in cast(self, features, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, num_proc)
977 format = self.format
978 dataset = self.with_format("arrow")
--> 979 dataset = dataset.map(
980 lambda t: t.cast(schema),
981 batched=True,
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1600
1601 if num_proc is None or num_proc == 1:
-> 1602 return self._map_single(
1603 function=function,
1604 with_indices=with_indices,
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
176 }
177 # apply actual function
--> 178 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
179 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
180 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
395 # Call actual function
396
--> 397 out = func(self, *args, **kwargs)
398
399 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, desc)
1940 ) # Something simpler?
1941 try:
-> 1942 batch = apply_function_on_filtered_inputs(
1943 batch,
1944 indices,
~/git/datasets/src/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
1836 effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset
1837 processed_inputs = (
-> 1838 function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
1839 )
1840 if update_data is None:
~/git/datasets/src/datasets/arrow_dataset.py in <lambda>(t)
978 dataset = self.with_format("arrow")
979 dataset = dataset.map(
--> 980 lambda t: t.cast(schema),
981 batched=True,
982 batch_size=batch_size,
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.cast()
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.ChunkedArray.cast()
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/compute.py in cast(arr, target_type, safe)
241 else:
242 options = CastOptions.unsafe(target_type)
--> 243 return call_function("cast", [arr], options)
244
245
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/_compute.pyx in pyarrow._compute.call_function()
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/_compute.pyx in pyarrow._compute.Function.call()
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowNotImplementedError: Unsupported cast from struct<answer_end: list<item: int32>, answer_start: list<item: int32>, text: list<item: string>> to struct using function cast_struct
``` | CLOSED | 2021-05-31T14:06:51 | 2022-10-05T17:06:28 | 2022-10-05T17:06:28 | https://github.com/huggingface/datasets/issues/2434 | lewtun | 2 | [
"enhancement"
] |
2,431 | DuplicatedKeysError when trying to load adversarial_qa | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
from datasets import load_dataset

dataset = load_dataset('adversarial_qa', 'adversarialQA')
```
## Expected results
The dataset should be loaded into memory
## Actual results
>DuplicatedKeysError: FAILURE TO GENERATE DATASET !
>Found duplicate Key: 4d3cb5677211ee32895ca9c66dad04d7152254d4
>Keys should be unique and deterministic in nature
>
>
>During handling of the above exception, another exception occurred:
>
>DuplicatedKeysError Traceback (most recent call last)
>
>/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in check_duplicate_keys(self)
> 347 for hash, key in self.hkey_record:
> 348 if hash in tmp_record:
>--> 349 raise DuplicatedKeysError(key)
> 350 else:
> 351 tmp_record.add(hash)
>
>DuplicatedKeysError: FAILURE TO GENERATE DATASET !
>Found duplicate Key: 4d3cb5677211ee32895ca9c66dad04d7152254d4
>Keys should be unique and deterministic in nature
## Environment info
- `datasets` version: 1.7.0
- Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyArrow version: 3.0.0
| CLOSED | 2021-05-31T12:11:19 | 2021-06-01T08:54:03 | 2021-06-01T08:52:11 | https://github.com/huggingface/datasets/issues/2431 | hanss0n | 1 | [
"bug"
] |
2,426 | Saving Graph/Structured Data in Datasets | Thanks for this amazing library! My question is: I have structured data that is organized as a graph. For example, a dataset with users' friendship relations and users' articles. When I try to save a python dict in the dataset, an error occurred: "did not recognize Python value type when inferring an Arrow data type".
Although I know that storing a python dict in pyarrow datasets is not the best practice, I have no idea how to save structured data in Datasets.
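One possible direction (a minimal sketch with made-up toy data) is to encode the graph as adjacency lists, which Arrow can store as nested sequences:
```python
from datasets import Dataset

data = {
    "user_id": [0, 1, 2],
    "friend_ids": [[1, 2], [0], [0]],      # friendship edges as lists of user ids
    "article_ids": [[10, 11], [12], []],   # articles written by each user
}
graph_ds = Dataset.from_dict(data)
print(graph_ds[0])  # {'user_id': 0, 'friend_ids': [1, 2], 'article_ids': [10, 11]}
```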
Thank you very much for your help. | CLOSED | 2021-05-29T13:35:21 | 2021-06-02T01:21:03 | 2021-06-02T01:21:03 | https://github.com/huggingface/datasets/issues/2426 | gsh199449 | 6 | [
"enhancement"
] |
2,424 | load_from_disk and save_to_disk are not compatible with each other | ## Describe the bug
load_from_disk and save_to_disk are not compatible. When I use save_to_disk to save a dataset to disk it works perfectly, but given the same directory load_from_disk throws an error that it can't find state.json. It looks like load_from_disk only works on one split.
## Steps to reproduce the bug
```python
from datasets import load_dataset, Dataset
dataset = load_dataset("art")
dataset.save_to_disk("mydir")
d = Dataset.load_from_disk("mydir")
```
## Expected results
It is expected that these two functions be the reverse of each other without more manipulation
## Actual results
FileNotFoundError: [Errno 2] No such file or directory: 'mydir/art/state.json'
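A possible workaround (a sketch, assuming the directory was written by `DatasetDict.save_to_disk`) is to use the module-level `load_from_disk`, which handles both a single `Dataset` directory and a `DatasetDict` directory:
```python
import datasets

dataset = datasets.load_from_disk("mydir")  # returns a DatasetDict here, with one sub-folder per split
train = dataset["train"]                    # individual splits are then accessed by name
```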
## Environment info
- `datasets` version: 1.6.2
- Platform: Linux-5.4.0-73-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
| CLOSED | 2021-05-28T23:07:10 | 2021-06-08T19:22:32 | 2021-06-08T19:22:32 | https://github.com/huggingface/datasets/issues/2424 | roholazandie | 6 | [] |
2,415 | Cached dataset not loaded | ## Describe the bug
I have a large dataset (common_voice, english) where I use several map and filter functions.
Sometimes my cached datasets after specific functions are not loaded.
I always use the same arguments, same functions, no seed…
## Steps to reproduce the bug
```python
def filter_by_duration(batch):
return (
batch["duration"] <= 10
and batch["duration"] >= 1
and len(batch["target_text"]) > 5
)
def prepare_dataset(batch):
batch["input_values"] = processor(
batch["speech"], sampling_rate=batch["sampling_rate"][0]
).input_values
with processor.as_target_processor():
batch["labels"] = processor(batch["target_text"]).input_ids
return batch
train_dataset = train_dataset.filter(
filter_by_duration,
remove_columns=["duration"],
num_proc=data_args.preprocessing_num_workers,
)
# PROBLEM HERE -> below function is reexecuted and cache is not loaded
train_dataset = train_dataset.map(
prepare_dataset,
remove_columns=train_dataset.column_names,
batch_size=training_args.per_device_train_batch_size,
batched=True,
num_proc=data_args.preprocessing_num_workers,
)
# Later in script
set_caching_enabled(False)
# apply map on trained model to eval/test sets
```
## Expected results
The cached dataset should always be reloaded.
## Actual results
The function is reexecuted.
I have access to cached files `cache-xxxxx.arrow`.
Is there a way I can somehow load manually 2 versions and see how the hash was created for debug purposes (to know if it's an issue with dataset or function)?
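One way to dig into this (a rough sketch relying on the library's internal fingerprinting helper and reusing the functions defined above) is to print the hashes across runs; if a hash changes between runs, that function is what invalidates the cache:
```python
from datasets.fingerprint import Hasher

print(Hasher.hash(filter_by_duration))
print(Hasher.hash(prepare_dataset))
```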
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.6.2
- Platform: Linux-5.8.0-45-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No | CLOSED | 2021-05-27T15:40:06 | 2021-06-02T13:15:47 | 2021-06-02T13:15:47 | https://github.com/huggingface/datasets/issues/2415 | borisdayma | 5 | [
"bug"
] |
2,413 | AttributeError: 'DatasetInfo' object has no attribute 'task_templates' | ## Describe the bug
Hello,
I'm trying to add a dataset and contribute, but the test keeps failing with the CLI below.
` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<my_dataset>`
## Steps to reproduce the bug
It seems like a bug, since I see the error with an existing dataset, not the dataset I'm trying to add.
` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<any_dataset>`
## Expected results
All test passed
## Actual results
```
# check that dataset is not empty
self.parent.assertListEqual(sorted(dataset_builder.info.splits.keys()), sorted(dataset))
for split in dataset_builder.info.splits.keys():
# check that loaded datset is not empty
self.parent.assertTrue(len(dataset[split]) > 0)
# check that we can cast features for each task template
> task_templates = dataset_builder.info.task_templates
E AttributeError: 'DatasetInfo' object has no attribute 'task_templates'
tests/test_dataset_common.py:175: AttributeError
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.6.2
- Platform: Darwin-20.4.0-x86_64-i386-64bit
- Python version: 3.7.7
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| CLOSED | 2021-05-27T13:44:28 | 2021-06-01T01:05:47 | 2021-06-01T01:05:47 | https://github.com/huggingface/datasets/issues/2413 | jungwhank | 1 | [
"bug"
] |
2,412 | Docstring mistake: dataset vs. metric | This:
https://github.com/huggingface/datasets/blob/d95b95f8cf3cb0cff5f77a675139b584dcfcf719/src/datasets/load.py#L582
Should rather be something like:
`a metric identifier on HuggingFace AWS bucket (list all available metrics and ids with ``datasets.list_metrics()``)`
I can provide a PR later...
2,407 | .map() function got an unexpected keyword argument 'cache_file_name' | ## Describe the bug
I'm trying to save the result of datasets.map() to a specific file, so that I can easily share it among multiple computers without reprocessing the dataset. However, when I try to pass an argument 'cache_file_name' to the .map() function, it throws an error that ".map() function got an unexpected keyword argument 'cache_file_name'".
I believe I'm using the latest datasets 1.6.2. It also seems like the documentation and the actual code indicate there is a 'cache_file_name' argument for the .map() function.
Here is the code I use
## Steps to reproduce the bug
```python
from datasets import load_from_disk

datasets = load_from_disk(dataset_path=my_path)
[...]
def tokenize_function(examples):
return tokenizer(examples[text_column_name])
logger.info("Mapping dataset to tokenized dataset.")
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
num_proc=preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=True,
cache_file_name="my_tokenized_file"
)
```
## Actual results
tokenized_datasets = datasets.map(
TypeError: map() got an unexpected keyword argument 'cache_file_name'
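A possible explanation (sketch): if `datasets` here is a `DatasetDict` (one dataset per split), its `map` method takes a dict of per-split file names via `cache_file_names` rather than a single `cache_file_name` (parameter name assumed from the `DatasetDict` API):
```python
tokenized_datasets = datasets.map(
    tokenize_function,
    batched=True,
    num_proc=preprocessing_num_workers,
    remove_columns=column_names,
    load_from_cache_file=True,
    cache_file_names={split: f"my_tokenized_file_{split}.arrow" for split in datasets},
)
```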
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:1.6.2
- Platform:Linux-4.18.0-193.28.1.el8_2.x86_64-x86_64-with-glibc2.10
- Python version:3.8.5
- PyArrow version:3.0.0
| CLOSED | 2021-05-27T01:54:26 | 2021-05-27T13:46:40 | 2021-05-27T13:46:40 | https://github.com/huggingface/datasets/issues/2407 | cindyxinyiwang | 3 | [
"bug"
] |
2,406 | Add guide on using task templates to documentation | Once we have a stable API on the text classification and question answering task templates, add a guide on how to use them in the documentation.
| CLOSED | 2021-05-26T16:28:26 | 2022-10-05T17:07:00 | 2022-10-05T17:07:00 | https://github.com/huggingface/datasets/issues/2406 | lewtun | 0 | [
"enhancement"
] |
2,402 | PermissionError on Windows when using temp dir for caching | Currently, the following code raises a PermissionError on master if working on Windows:
```python
# run as a script or call exit() in REPL to initiate the temp dir cleanup
from datasets import *
d = load_dataset("sst", split="train", keep_in_memory=False)
set_caching_enabled(False)
d.map(lambda ex: ex)
```
Error stack trace:
```
Traceback (most recent call last):
File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\weakref.py", line 624, in _exitfunc
f()
File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\weakref.py", line 548, in __call__
return info.func(*info.args, **(info.kwargs or {}))
File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\tempfile.py", line 799, in _cleanup
_shutil.rmtree(name)
File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\shutil.py", line 500, in rmtree
return _rmtree_unsafe(path, onerror)
File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\shutil.py", line 395, in _rmtree_unsafe
onerror(os.unlink, fullname, sys.exc_info())
File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\shutil.py", line 393, in _rmtree_unsafe
os.unlink(fullname)
PermissionError: [WinError 5] Access is denied: 'C:\\Users\\Mario\\AppData\\Local\\Temp\\tmp20epyhmq\\cache-87a87ffb5a956e68.arrow'
``` | CLOSED | 2021-05-24T21:22:59 | 2021-05-26T16:39:29 | 2021-05-26T16:39:29 | https://github.com/huggingface/datasets/issues/2402 | mariosasko | 0 | [
"bug"
] |
2,401 | load_dataset('natural_questions') fails with "ValueError: External features info don't match the dataset" | ## Describe the bug
load_dataset('natural_questions') throws ValueError
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset('natural_questions', split='validation[:10]')
```
## Expected results
Call to load_dataset returns data.
## Actual results
```
Using custom data configuration default
Reusing dataset natural_questions (/mnt/d/huggingface/datasets/natural_questions/default/0.0.2/19bc04755018a3ad02ee74f7045cde4ba9b4162cb64450a87030ab786b123b76)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-2-d55ab8a8cc1c> in <module>
----> 1 datasets = load_dataset('natural_questions', split='validation[:10]', cache_dir='/mnt/d/huggingface/datasets')
~/miniconda3/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)
756 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
757 )
--> 758 ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
759 if save_infos:
760 builder_instance._save_infos()
~/miniconda3/lib/python3.8/site-packages/datasets/builder.py in as_dataset(self, split, run_post_process, ignore_verifications, in_memory)
735
736 # Create a dataset for each of the given splits
--> 737 datasets = utils.map_nested(
738 partial(
739 self._build_single_dataset,
~/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types)
193 # Singleton
194 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 195 return function(data_struct)
196
197 disable_tqdm = bool(logger.getEffectiveLevel() > INFO)
~/miniconda3/lib/python3.8/site-packages/datasets/builder.py in _build_single_dataset(self, split, run_post_process, ignore_verifications, in_memory)
762
763 # Build base dataset
--> 764 ds = self._as_dataset(
765 split=split,
766 in_memory=in_memory,
~/miniconda3/lib/python3.8/site-packages/datasets/builder.py in _as_dataset(self, split, in_memory)
838 in_memory=in_memory,
839 )
--> 840 return Dataset(**dataset_kwargs)
841
842 def _post_process(self, dataset: Dataset, resources_paths: Dict[str, str]) -> Optional[Dataset]:
~/miniconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py in __init__(self, arrow_table, info, split, indices_table, fingerprint)
271 assert self._fingerprint is not None, "Fingerprint can't be None in a Dataset object"
272 if self.info.features.type != inferred_features.type:
--> 273 raise ValueError(
274 "External features info don't match the dataset:\nGot\n{}\nwith type\n{}\n\nbut expected something like\n{}\nwith type\n{}".format(
275 self.info.features, self.info.features.type, inferred_features, inferred_features.type
ValueError: External features info don't match the dataset:
Got
{'id': Value(dtype='string', id=None), 'document': {'title': Value(dtype='string', id=None), 'url': Value(dtype='string', id=None), 'html': Value(dtype='string', id=None), 'tokens': Sequence(feature={'token': Value(dtype='string', id=None), 'is_html': Value(dtype='bool', id=None)}, length=-1, id=None)}, 'question': {'text': Value(dtype='string', id=None), 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'annotations': Sequence(feature={'id': Value(dtype='string', id=None), 'long_answer': {'start_token': Value(dtype='int64', id=None), 'end_token': Value(dtype='int64', id=None), 'start_byte': Value(dtype='int64', id=None), 'end_byte': Value(dtype='int64', id=None)}, 'short_answers': Sequence(feature={'start_token': Value(dtype='int64', id=None), 'end_token': Value(dtype='int64', id=None), 'start_byte': Value(dtype='int64', id=None), 'end_byte': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None)}, length=-1, id=None), 'yes_no_answer': ClassLabel(num_classes=2, names=['NO', 'YES'], names_file=None, id=None)}, length=-1, id=None)}
with type
struct<annotations: struct<id: list<item: string>, long_answer: list<item: struct<start_token: int64, end_token: int64, start_byte: int64, end_byte: int64>>, short_answers: list<item: struct<end_byte: list<item: int64>, end_token: list<item: int64>, start_byte: list<item: int64>, start_token: list<item: int64>, text: list<item: string>>>, yes_no_answer: list<item: int64>>, document: struct<title: string, url: string, html: string, tokens: struct<is_html: list<item: bool>, token: list<item: string>>>, id: string, question: struct<text: string, tokens: list<item: string>>>
but expected something like
{'id': Value(dtype='string', id=None), 'document': {'html': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'tokens': {'is_html': Sequence(feature=Value(dtype='bool', id=None), length=-1, id=None), 'token': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'url': Value(dtype='string', id=None)}, 'question': {'text': Value(dtype='string', id=None), 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'annotations': {'id': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'long_answer': [{'end_byte': Value(dtype='int64', id=None), 'end_token': Value(dtype='int64', id=None), 'start_byte': Value(dtype='int64', id=None), 'start_token': Value(dtype='int64', id=None)}], 'short_answers': [{'end_byte': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'end_token': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'start_byte': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'start_token': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'text': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}], 'yes_no_answer': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)}}
with type
struct<annotations: struct<id: list<item: string>, long_answer: list<item: struct<end_byte: int64, end_token: int64, start_byte: int64, start_token: int64>>, short_answers: list<item: struct<end_byte: list<item: int64>, end_token: list<item: int64>, start_byte: list<item: int64>, start_token: list<item: int64>, text: list<item: string>>>, yes_no_answer: list<item: int64>>, document: struct<html: string, title: string, tokens: struct<is_html: list<item: bool>, token: list<item: string>>, url: string>, id: string, question: struct<text: string, tokens: list<item: string>>>
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.6.2
- Platform: Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| CLOSED | 2021-05-24T18:38:53 | 2021-06-09T09:07:25 | 2021-06-09T09:07:25 | https://github.com/huggingface/datasets/issues/2401 | jonrbates | 4 | [
"bug"
] |
2,400 | Concatenate several datasets with removed columns is not working. | ## Describe the bug
You can't concatenate datasets when you have removed columns beforehand.
## Steps to reproduce the bug
```python
from datasets import load_dataset, concatenate_datasets
wikiann= load_dataset("wikiann","en")
wikiann["train"] = wikiann["train"].remove_columns(["langs","spans"])
wikiann["test"] = wikiann["test"].remove_columns(["langs","spans"])
assert wikiann["train"].features.type == wikiann["test"].features.type
concate = concatenate_datasets([wikiann["train"],wikiann["test"]])
```
## Expected results
Merged dataset
## Actual results
```python
ValueError: External features info don't match the dataset:
Got
{'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'ner_tags': Sequence(feature=ClassLabel(num_classes=7, names=['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC'], names_file=None, id=None), length=-1, id=None), 'langs': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'spans': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
with type
struct<langs: list<item: string>, ner_tags: list<item: int64>, spans: list<item: string>, tokens: list<item: string>>
but expected something like
{'ner_tags': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
with type
struct<ner_tags: list<item: int64>, tokens: list<item: string>>
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: ~1.6.2~ 1.5.0
- Platform: macos
- Python version: 3.8.5
- PyArrow version: 3.0.0
| CLOSED | 2021-05-24T17:40:15 | 2021-05-25T05:52:01 | 2021-05-25T05:51:59 | https://github.com/huggingface/datasets/issues/2400 | philschmid | 2 | [
"bug"
] |
2,398 | News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs | I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi.
```
from itertools import chain
from datasets import load_dataset

train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]')
val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]')
# filtering out examples that are not ar-en translations but ar-hi
val_ds = val_ds.filter(lambda example, indice: indice not in chain(range(1312,1327) ,range(1384,1399), range(1030,1042)), with_indices=True)
```
* I'm fairly new to using datasets so I might be doing something wrong | CLOSED | 2021-05-24T10:03:34 | 2022-10-05T17:13:49 | 2022-10-05T17:13:49 | https://github.com/huggingface/datasets/issues/2398 | anassalamah | 1 | [
"bug"
] |
2,396 | strange datasets from OSCAR corpus | (screenshots of the dataset viewer showing the training instances of the Yue Chinese subset)
From the [official site ](https://oscar-corpus.com/), the Yue Chinese dataset should have 2.2KB data.
7 training instances is obviously not the right number.
As I can read Yue Chinese, I can tell the last instance is definitely not something that would appear on Common Crawl.
And even if you don't read Yue Chinese, you can tell the first six instances are problematic.
(It is embarrassing, as the 7 training instances look exactly like something from a pornographic novel or flirting messages in a chat of a dating app.)
It might not be the problem of the huggingface/datasets implementation, because when I tried to download the dataset from the official site, I found out that the zip file is corrupted.
I will try to inform the host of OSCAR corpus later.
Anyway, a remake of this dataset in huggingface/datasets is needed, perhaps after the host of the dataset fixes the issue.
> Hi @jerryIsHere , sorry for the late response! Sadly this is normal, the problem comes from fasttext's classifier which we used to create the original corpus. In general the classifier is not really capable of properly recognizing Yue Chinese so the file ends up being just noise from Common Crawl. Some of these problems with OSCAR were already discussed [here](https://arxiv.org/pdf/2103.12028.pdf) but we are working on explicitly documenting the problems by language on our website. In fact, could you please open an issue on [our repo](https://github.com/oscar-corpus/oscar-website/issues) as well so that we can track it?
Thanks a lot, the new post is here:
https://github.com/oscar-corpus/oscar-website/issues/11 | OPEN | 2021-05-23T13:06:02 | 2021-06-17T13:54:37 | null | https://github.com/huggingface/datasets/issues/2396 | cosmeowpawlitan | 2 | [
"bug"
] |
2,391 | Missing original answers in kilt-TriviaQA | I previously opened an issue at https://github.com/facebookresearch/KILT/issues/42 but from the answer of @fabiopetroni it seems that the problem comes from HF-datasets
## Describe the bug
The `answer` field in kilt-TriviaQA, e.g. `kilt_tasks['train_triviaqa'][0]['output']['answer']` contains a list of alternative answer which are accepted for the question.
However it'd be nice to know the original answer to the question (the only fields in `output` are `'answer', 'meta', 'provenance'`)
## How to fix
It can be fixed by retrieving the original answer from the original TriviaQA (e.g. `trivia_qa['train'][0]['answer']['value']`), perhaps at the same place as here where one retrieves the questions https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md#loading-the-kilt-knowledge-source-and-task-data
cc @yjernite who previously answered to an issue about KILT and TriviaQA :)
| CLOSED | 2021-05-21T14:57:07 | 2021-06-14T17:29:11 | 2021-06-14T17:29:11 | https://github.com/huggingface/datasets/issues/2391 | PaulLerner | 2 | [
"bug"
] |
2,388 | Incorrect URLs for some datasets | ## Describe the bug
It seems that the URLs for the following datasets are invalid:
- [ ] `bn_hate_speech` has been renamed: https://github.com/rezacsedu/Bengali-Hate-Speech-Dataset/commit/c67ecfc4184911e12814f6b36901f9828df8a63a
- [ ] `covid_tweets_japanese` has been renamed: http://www.db.info.gifu-u.ac.jp/covid-19-twitter-dataset/
As a result we can no longer load these datasets using `load_dataset`. The simple fix is to rename the URL in the dataset script - will do this asap.
## Steps to reproduce the bug
```python
from datasets import load_dataset
# pick one of the datasets from the list above
ds = load_dataset("bn_hate_speech")
```
## Expected results
Dataset loads without error.
## Actual results
```
Downloading: 3.36kB [00:00, 1.07MB/s]
Downloading: 2.03kB [00:00, 678kB/s]
Using custom data configuration default
Downloading and preparing dataset bn_hate_speech/default (download: 951.48 KiB, generated: 949.84 KiB, post-processed: Unknown size, total: 1.86 MiB) to /Users/lewtun/.cache/huggingface/datasets/bn_hate_speech/default/0.0.0/a2dc726e511a2177523301bcad196af05d4d8a2cff30d2769ba8aacc1f5fdb5c...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/load.py", line 744, in load_dataset
builder_instance.download_and_prepare(
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/builder.py", line 574, in download_and_prepare
self._download_and_prepare(
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/builder.py", line 630, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/Users/lewtun/.cache/huggingface/modules/datasets_modules/datasets/bn_hate_speech/a2dc726e511a2177523301bcad196af05d4d8a2cff30d2769ba8aacc1f5fdb5c/bn_hate_speech.py", line 76, in _split_generators
train_path = dl_manager.download_and_extract(_URL)
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 287, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 195, in download
downloaded_path_or_paths = map_nested(
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 195, in map_nested
return function(data_struct)
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 218, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 281, in cached_path
output_path = get_from_cache(
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/rezacsedu/Bengali-Hate-Speech-Dataset/main/Bengali_%20Hate_Speech_Dataset_Subset.csv
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.6.2.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.8
- PyArrow version: 3.0.0
| CLOSED | 2021-05-21T07:22:35 | 2021-06-04T17:39:45 | 2021-06-04T17:39:45 | https://github.com/huggingface/datasets/issues/2388 | lewtun | 0 | [
"bug"
] |
2,387 | datasets 1.6 ignores cache | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b/cache-c6aefe81ca4e5152.arrow'}], 'validation': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b/cache-97cf4c813e6469c6.arrow'}]}`
>
> while the same command with the latest version of datasets (actually starting at `1.6.0`) gives:
> > `{'train': [], 'validation': []}`
>
I also confirm that downgrading to `datasets==1.5.0` makes things fast again - i.e. cache is used.
to reproduce:
```
USE_TF=0 python examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path gpt2 \
--dataset_name "stas/openwebtext-10k" \
--output_dir output_dir \
--overwrite_output_dir \
--do_train \
--do_eval \
--max_train_samples 1000 \
--max_eval_samples 200 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--num_train_epochs 1 \
--warmup_steps 8 \
--block_size 64 \
--fp16 \
--report_to none
```
The first time, the startup is slow and shows some 5 tqdm bars. It shouldn't do that on subsequent runs, but with `datasets>1.5.0` it rebuilds on every run.
@lhoestq
| CLOSED | 2021-05-21T00:12:58 | 2021-05-26T16:07:54 | 2021-05-26T16:07:54 | https://github.com/huggingface/datasets/issues/2387 | stas00 | 13 | [
"bug"
] |
2,386 | Accessing Arrow dataset cache_files | ## Describe the bug
In datasets 1.5.0 the following code snippet would have printed the cache_files:
```
from datasets import load_dataset

train_data = load_dataset('conll2003', split='train', cache_dir='data')
print(train_data.cache_files[0]['filename'])
```
However, in the newest release (1.6.1), it prints an empty list.
I also tried loading the dataset with the `keep_in_memory=True` argument, but `cache_files` is still empty.
Was wondering if this is a bug or I need to pass additional arguments so I can access the cache_files.
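A possible workaround sketch, assuming the 1.6 behavior of loading small datasets in memory by default is the cause — forcing memory-mapping should expose the Arrow cache files again:
```python
from datasets import load_dataset

train_data = load_dataset('conll2003', split='train', cache_dir='data', keep_in_memory=False)
print(train_data.cache_files)
```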
| CLOSED | 2021-05-20T23:57:43 | 2021-05-21T19:18:03 | 2021-05-21T19:18:03 | https://github.com/huggingface/datasets/issues/2386 | Mehrad0711 | 1 | [
"bug"
] |
2,382 | DuplicatedKeysError: FAILURE TO GENERATE DATASET ! load_dataset('head_qa', 'en') | Hello everyone,
I am trying to use the head_qa dataset from https://huggingface.co/datasets/viewer/?dataset=head_qa&config=en
```
!pip install datasets
from datasets import load_dataset
dataset = load_dataset(
'head_qa', 'en')
```
When I run the above load_dataset(...) call, it throws the following:
```
DuplicatedKeysError Traceback (most recent call last)
<ipython-input-6-ea87002d32f0> in <module>()
2 from datasets import load_dataset
3 dataset = load_dataset(
----> 4 'head_qa', 'en')
5 frames
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in check_duplicate_keys(self)
347 for hash, key in self.hkey_record:
348 if hash in tmp_record:
--> 349 raise DuplicatedKeysError(key)
350 else:
351 tmp_record.add(hash)
DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 1
Keys should be unique and deterministic in nature
```
How can I fix the error? Thanks
| CLOSED | 2021-05-19T15:49:48 | 2021-05-30T13:26:16 | 2021-05-30T13:26:16 | https://github.com/huggingface/datasets/issues/2382 | helloworld123-lab | 0 | [] |
2,378 | Add missing dataset_infos.json files | Some of the datasets in `datasets` are missing a `dataset_infos.json` file, e.g.
```
[PosixPath('datasets/chr_en/chr_en.py'), PosixPath('datasets/chr_en/README.md')]
[PosixPath('datasets/telugu_books/README.md'), PosixPath('datasets/telugu_books/telugu_books.py')]
[PosixPath('datasets/reclor/README.md'), PosixPath('datasets/reclor/reclor.py')]
[PosixPath('datasets/json/README.md')]
[PosixPath('datasets/csv/README.md')]
[PosixPath('datasets/wikihow/wikihow.py'), PosixPath('datasets/wikihow/README.md')]
[PosixPath('datasets/c4/c4.py'), PosixPath('datasets/c4/README.md')]
[PosixPath('datasets/text/README.md')]
[PosixPath('datasets/lm1b/README.md'), PosixPath('datasets/lm1b/lm1b.py')]
[PosixPath('datasets/pandas/README.md')]
```
For `json`, `text`, `csv`, and `pandas` this is expected, but not for the others, which should be fixed.
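For reference, a sketch of one way to regenerate a missing file, assuming the standard contributor workflow applies here:
```bash
datasets-cli test datasets/chr_en --save_infos --all_configs
```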
| OPEN | 2021-05-19T08:11:12 | 2021-05-19T08:11:12 | null | https://github.com/huggingface/datasets/issues/2378 | lewtun | 0 | [
"enhancement"
] |
2,377 | ArrowDataset.save_to_disk produces files that cannot be read using pyarrow.feather | ## Describe the bug
## Steps to reproduce the bug
```python
from datasets import load_dataset
from pyarrow import feather
dataset = load_dataset('imdb', split='train')
dataset.save_to_disk('dataset_dir')
table = feather.read_table('dataset_dir/dataset.arrow')
```
## Expected results
I expect that the saved dataset can be read by the official Apache Arrow methods.
## Actual results
```
File "/usr/local/lib/python3.7/site-packages/pyarrow/feather.py", line 236, in read_table
reader.open(source, use_memory_map=memory_map)
File "pyarrow/feather.pxi", line 67, in pyarrow.lib.FeatherReader.open
File "pyarrow/error.pxi", line 123, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Not a Feather V1 or Arrow IPC file
```
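For what it's worth, a sketch that may work around this: `save_to_disk` appears to write the Arrow *streaming* format rather than the Feather/IPC file format, so the IPC stream reader can open the file.
```python
import pyarrow as pa

# Read the Arrow streaming format that save_to_disk produces (sketch only).
with pa.memory_map("dataset_dir/dataset.arrow") as source:
    table = pa.ipc.open_stream(source).read_all()
```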
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: datasets-1.6.2
- Platform: Linux
- Python version: 3.7
- PyArrow version: 0.17.1, also 2.0.0
| OPEN | 2021-05-19T02:04:37 | 2024-01-18T08:06:15 | null | https://github.com/huggingface/datasets/issues/2377 | Ark-kun | 4 | [
"bug"
] |
2,373 | Loading dataset from local path | I'm trying to load a local dataset with the code below
```
ds = datasets.load_dataset('my_script.py',
data_files='corpus.txt',
data_dir='/data/dir',
cache_dir='.')
```
But internally a `BuilderConfig` is created, which calls `getmtime` on the `data_files` string without taking `data_dir` into account. Is this a bug, or am I not using `load_dataset` correctly?
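A minimal workaround sketch, assuming the script accepts full paths (illustrative only):
```python
import os
from datasets import load_dataset

# Join data_dir into data_files yourself instead of relying on data_dir.
ds = datasets.load_dataset('my_script.py',
                           data_files=os.path.join('/data/dir', 'corpus.txt'),
                           cache_dir='.')
```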
The relevant `BuilderConfig` creation is here: https://github.com/huggingface/datasets/blob/bc61954083f74e6460688202e9f77dde2475319c/src/datasets/builder.py#L153 | CLOSED | 2021-05-18T15:20:50 | 2021-05-18T15:36:36 | 2021-05-18T15:36:35 | https://github.com/huggingface/datasets/issues/2373 | kolakows | 1 | [] |
2,371 | Align question answering tasks with sub-domains | As pointed out by @thomwolf in #2255 we should consider breaking with the pipeline taxonomy of `transformers` to account for the various types of question-answering domains:
> `question-answering` exists in two forms: abstractive and extractive question answering.
>
> we can keep a generic `question-answering` but then it will probably mean diferrent schema of input/output for both (abstractive will have text for both while extractive can use spans indication as well as text).
>
> Or we can also propose to use `abstractive-question-answering` and `extractive-question-answering` for instance.
> Maybe we could have `question-answering-abstractive` and `question-answering-extractive`, for instance, if somehow we can use a prefix for completion or search in the future (detail).
> Actually I see that people organize more in terms of general tasks and sub-tasks, for instance on Papers with Code: https://paperswithcode.com/area/natural-language-processing and on nlpprogress: https://github.com/sebastianruder/NLP-progress/blob/master/english/question_answering.md#squad
>
> Probably the best is to align with one of these in terms of denomination, PaperWithCode is probably the most active and maintained and we work with them as well.
> Maybe you want to check with a few QA datasets that this schema makes sense. Typically NaturalQuestions and TriviaQA can be good second datasets to compare to, to be sure of the generality of the schema.
>
> A good recent list of QA datasets to compare the schemas among, is for instance in the UnitedQA paper: https://arxiv.org/abs/2101.00178
Investigate which grouping of QA is best suited for `datasets` and adapt / extend the QA task template accordingly. | CLOSED | 2021-05-18T09:47:59 | 2023-07-25T16:52:05 | 2023-07-25T16:52:04 | https://github.com/huggingface/datasets/issues/2371 | lewtun | 1 | [
"enhancement"
] |
2,366 | Json loader fails if user-specified features don't match the json data fields order | If you do
```python
dataset = load_dataset("json", data_files=data_files, features=features)
```
Then, depending on the order of the fields in the JSON data, it fails:
```python
[...]
~/Desktop/hf/datasets/src/datasets/packaged_modules/json/json.py in _generate_tables(self, files)
94 if self.config.schema:
95 # Cast allows str <-> int/float, while parse_option explicit_schema does NOT
---> 96 pa_table = pa_table.cast(self.config.schema)
97 yield i, pa_table
[...]
ValueError: Target schema's field names are not matching the table's field names: ['tokens', 'ner_tags'], ['ner_tags', 'tokens']
```
This is because one must first re-order the columns of the table to match the `self.config.schema` before calling cast.
One way to fix the `cast` would be to replace it with:
```python
# reorder the arrays if necessary + cast to schema
# we can't simply use .cast here because we may need to change the order of the columns
pa_table = pa.Table.from_arrays([pa_table[name] for name in schema.names], schema=schema)
``` | CLOSED | 2021-05-17T10:26:08 | 2021-06-16T10:47:49 | 2021-06-16T10:47:49 | https://github.com/huggingface/datasets/issues/2366 | lhoestq | 0 | [
"bug"
] |
2,365 | Missing ClassLabel encoding in Json loader | Currently if you want to load a json dataset this way
```python
dataset = load_dataset("json", data_files=data_files, features=features)
```
Then if your features has ClassLabel types and if your json data needs class label encoding (i.e. if the labels in the json files are strings and not integers), then it would fail:
```python
[...]
~/Desktop/hf/datasets/src/datasets/packaged_modules/json/json.py in _generate_tables(self, files)
94 if self.config.schema:
95 # Cast allows str <-> int/float, while parse_option explicit_schema does NOT
---> 96 pa_table = pa_table.cast(self.config.schema)
97 yield i, pa_table
[...]
ArrowInvalid: Failed to parse string: 'O' as a scalar of type int64
```
This is because it just tries to cast the string data to integers, without applying the mapping str->int first
The current workaround is to do instead
```python
dataset = load_dataset("json", data_files=data_files)
dataset = dataset.map(features.encode_example, features=features)
``` | CLOSED | 2021-05-17T10:19:10 | 2021-06-28T15:05:34 | 2021-06-28T15:05:34 | https://github.com/huggingface/datasets/issues/2365 | lhoestq | 0 | [
"bug"
] |
2,360 | Automatically detect datasets with compatible task schemas | See description of #2255 for details.
| OPEN | 2021-05-14T14:23:40 | 2021-05-14T14:23:40 | null | https://github.com/huggingface/datasets/issues/2360 | lewtun | 0 | [
"enhancement"
] |
2,359 | Allow model labels to be passed during task preparation | Models have a config with `label2id`, and we have the same for datasets with the `ClassLabel` feature type. At some point either the model or the dataset must sync with the other. It would be great to do that on the dataset side.
For example for sentiment classification on amazon reviews with you could have these labels:
- "1 star", "2 stars", "3 stars", "4 stars", "5 stars"
- "1", "2", "3", "4", "5"
Some models may use the first set, while other models use the second set.
Here in the `TextClassification` class, the user can only specify one set of labels, while many models could actually be compatible but have different sets of labels. Should we allow users to pass a list of compatible label sets?
Then in terms of API, users could use `dataset.prepare_for_task("text-classification", labels=model.labels)` or something like that.
The label set could also be the same but not in the same order. For NLI, for example, some models use `["neutral", "entailment", "contradiction"]` and some others use `["neutral", "contradiction", "entailment"]`, so we should take care of updating the order of the labels in the dataset to match the label order of the model.
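A quick sketch of what that re-encoding could look like (illustrative only, not a proposed API):
```python
from datasets import ClassLabel

old = ClassLabel(names=["neutral", "entailment", "contradiction"])  # dataset order
new = ClassLabel(names=["neutral", "contradiction", "entailment"])  # model order

def realign(example):
    # old int id -> label string -> new int id
    return {"label": new.str2int(old.int2str(example["label"]))}

# dataset = dataset.map(realign)
```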
Let me know what you think ! This can be done in a future PR
_Originally posted by @lhoestq in https://github.com/huggingface/datasets/pull/2255#discussion_r632412792_ | CLOSED | 2021-05-14T13:58:28 | 2022-10-05T17:37:22 | 2022-10-05T17:37:22 | https://github.com/huggingface/datasets/issues/2359 | lewtun | 1 | [] |
2,354 | Document DatasetInfo attributes | **Is your feature request related to a problem? Please describe.**
As noted in PR #2255, the attributes of `DatasetInfo` are not documented in the [docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=datasetinfo#datasetinfo). It would be nice to do so :)
| CLOSED | 2021-05-12T20:01:29 | 2021-05-22T09:26:14 | 2021-05-22T09:26:14 | https://github.com/huggingface/datasets/issues/2354 | lewtun | 0 | [
"enhancement"
] |
2,350 | `FaissIndex.save` throws error on GPU | ## Describe the bug
After training an index with a factory string `OPQ16_128,IVF512,PQ32` on GPU, `.save_faiss_index` throws this error.
```
File "index_wikipedia.py", line 119, in <module>
data["train"].save_faiss_index("text_emb", index_save_path)
File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/datasets/search.py", line 470, in save_faiss_index
index.save(file)
File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/datasets/search.py", line 334, in save
faiss.write_index(index, str(file))
File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/faiss/swigfaiss_avx2.py", line 5654, in write_index
return _swigfaiss.write_index(*args)
RuntimeError: Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /root/miniconda3/conda-bld/faiss-pkg_1613235005464/work/faiss/impl/index_write.cpp:453: don't know how to serialize this type of index
```
## Steps to reproduce the bug
Any dataset will do, I just selected a familiar one.
```python
import numpy as np
import datasets
INDEX_STR = "OPQ16_128,IVF512,PQ32"
INDEX_SAVE_PATH = "will_not_save.faiss"
data = datasets.load_dataset("Fraser/news-category-dataset", split=f"train[:10000]")
def encode(item):
return {"text_emb": np.random.randn(768).astype(np.float32)}
data = data.map(encode)
data.add_faiss_index(column="text_emb", string_factory=INDEX_STR, train_size=10_000, device=0)
data.save_faiss_index("text_emb", INDEX_SAVE_PATH)
```
## Expected results
Saving the index
## Actual results
Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) ... don't know how to serialize this type of index
## Environment info
- `datasets` version: 1.6.2
- Platform: Linux-4.15.0-142-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
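For reference, a sketch of a likely workaround at the faiss level (assuming `gpu_index` is the trained index above): faiss cannot serialize GPU indexes directly, so move the index to CPU before writing it.
```python
import faiss

cpu_index = faiss.index_gpu_to_cpu(gpu_index)  # copy the index back to host memory
faiss.write_index(cpu_index, "will_save.faiss")
```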
I will be proposing a fix in a couple of minutes | CLOSED | 2021-05-12T03:41:56 | 2021-05-17T13:41:41 | 2021-05-17T13:41:41 | https://github.com/huggingface/datasets/issues/2350 | Guitaricet | 1 | [
"bug"
] |
2,347 | Add an API to access the language and pretty name of a dataset | It would be super nice to have an API to get some metadata of the dataset from the name and args passed to `load_dataset`. This way we could programmatically infer the language and the name of a dataset when creating model cards automatically in the Transformers examples scripts. | CLOSED | 2021-05-11T14:10:08 | 2022-10-05T17:16:54 | 2022-10-05T17:16:53 | https://github.com/huggingface/datasets/issues/2347 | sgugger | 6 | [
"enhancement"
] |
2,345 | [Question] How to move and reuse preprocessed dataset? | Hi, I am training a GPT-2 from scratch using `run_clm.py`.
I want to move and reuse the preprocessed dataset (it takes 2 hours to preprocess).
I tried to:
- copy `path_to_cache_dir/datasets` to `new_cache_dir/datasets`
- set `export HF_DATASETS_CACHE="new_cache_dir/"`
but the program still re-preprocesses the whole dataset without loading the cache.
I also tried `torch.save(lm_datasets, fw)`, but the saved file is only 14M.
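For reference, one route that might work (a sketch, not verified here): persist the processed dataset explicitly with datasets' own `save_to_disk`/`load_from_disk` instead of copying the cache directory.
```python
from datasets import load_from_disk

# On the source machine:
#   lm_datasets.save_to_disk("processed_dataset")
# On the target machine:
lm_datasets = load_from_disk("processed_dataset")
```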
What is the proper way to do this? | CLOSED | 2021-05-11T09:09:17 | 2021-06-11T04:39:11 | 2021-06-11T04:39:11 | https://github.com/huggingface/datasets/issues/2345 | AtmaHou | 4 | [] |
2,344 | Is there a way to join multiple datasets in one? | **Is your feature request related to a problem? Please describe.**
I need to join two datasets: one that is on the Hub and another I've created from my own files. Is there an easy way to join these two?
**Describe the solution you'd like**
I'd like to join them with a merge or join method, just like pandas DataFrames.
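For what it's worth, a sketch of one existing building block: `concatenate_datasets` stacks datasets that share the same columns, which is closer to a pandas concat than to a merge/join.
```python
from datasets import Dataset, concatenate_datasets

ds_a = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})
ds_b = Dataset.from_dict({"text": ["c"], "label": [0]})
combined = concatenate_datasets([ds_a, ds_b])
print(len(combined))  # 3
```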
**Additional context**
If you want to extend an existing dataset with more data, for example for training a language model, you need that functionality. I've not found it in the documentation. | OPEN | 2021-05-10T23:16:10 | 2022-10-05T17:27:05 | null | https://github.com/huggingface/datasets/issues/2344 | avacaondata | 2 | [
"enhancement"
] |
2,343 | Columns are removed before or after map function applied? | ## Describe the bug
According to the documentation, when applying the map function the [remove_columns](https://huggingface.co/docs/datasets/processing.html#removing-columns) will be removed after they are passed to the function, but in the [source code](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map) it is documented that they are removed before the function is applied. I think the source code doc is more accurate, right?
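One quick empirical check, as a sketch: if this prints `True`, the column is still visible inside the mapped function, i.e. it is removed from the output rather than before the call.
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2], "b": [3, 4]})

def check(example):
    print("b" in example)
    return {}

ds.map(check, remove_columns=["b"])
```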
| OPEN | 2021-05-10T02:36:20 | 2022-10-24T11:31:55 | null | https://github.com/huggingface/datasets/issues/2343 | taghizad3h | 1 | [
"bug"
] |
2,337 | NonMatchingChecksumError for web_of_science dataset | NonMatchingChecksumError when trying to download the web_of_science dataset.
>NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://data.mendeley.com/datasets/9rw3vkcfy4/6/files/c9ea673d-5542-44c0-ab7b-f1311f7d61df/WebOfScience.zip?dl=1']
Setting `ignore_verifications=True` results in an OSError.
>OSError: Cannot find data file.
Original error:
[Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/37ab2c42f50d553c1d0ea432baca3e9e11fedea4aeec63a81e6b7e25dd10d4e7/WOS5736/X.txt'
```python
dataset = load_dataset('web_of_science', 'WOS5736')
```
There are 3 data instances and they all don't work. 'WOS5736', 'WOS11967', 'WOS46985'
datasets 1.6.2
python 3.7.10
Ubuntu 18.04.5 LTS | CLOSED | 2021-05-09T02:02:02 | 2021-05-10T13:35:53 | 2021-05-10T13:35:53 | https://github.com/huggingface/datasets/issues/2337 | nbroad1881 | 1 | [
"bug"
] |
2,335 | Index error in Dataset.map | The following code, if executed on master, raises an IndexError (due to overflow):
```python
>>> from datasets import *
>>> d = load_dataset("bookcorpus", split="train")
Reusing dataset bookcorpus (C:\Users\Mario\.cache\huggingface\datasets\bookcorpus\plain_text\1.0.0\44662c4a114441c35200992bea923b170e6f13f2f0beb7c14e43759cec498700)
2021-05-08 21:23:46.859818: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll
>>> d.map(lambda ex: ex)
0%|▎ | 289430/74004228 [00:13<58:41, 20935.33ex/s]c:\users\mario\desktop\projects\datasets-1\src\datasets\table.py:84: RuntimeWarning: overflow encountered in int_scalars
k = i + ((j - i) * (x - arr[i]) // (arr[j] - arr[i]))
0%|▎ | 290162/74004228 [00:13<59:11, 20757.23ex/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1498, in map
new_fingerprint=new_fingerprint,
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 174, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\fingerprint.py", line 340, in wrapper
out = func(self, *args, **kwargs)
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1799, in _map_single
for i, example in enumerate(pbar):
File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\site-packages\tqdm\std.py", line 1133, in __iter__
for obj in iterable:
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1145, in __iter__
format_kwargs=format_kwargs,
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1337, in _getitem
pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\formatting\formatting.py", line 368, in query_table
pa_subtable = _query_table(table, key)
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\formatting\formatting.py", line 79, in _query_table
return table.fast_slice(key % table.num_rows, 1)
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\table.py", line 128, in fast_slice
i = _interpolation_search(self._offsets, offset)
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\table.py", line 91, in _interpolation_search
raise IndexError(f"Invalid query '{x}' for size {arr[-1] if len(arr) else 'none'}.")
IndexError: Invalid query '290162' for size 74004228.
```
Tested on Windows, can run on Linux if needed.
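For reference, a minimal sketch of the overflow mechanism using the sizes from the traceback; promoting the int32 scalars to Python ints (or int64) avoids it:
```python
import numpy as np

a, b = np.int32(74004228), np.int32(290162)
print(a * b)            # wraps around (the "overflow encountered in int_scalars" warning)
print(int(a) * int(b))  # the exact product
```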
EDIT:
It seems like for this to happen, the default NumPy dtype has to be np.int32. | CLOSED | 2021-05-08T20:44:57 | 2021-05-10T13:26:12 | 2021-05-10T13:26:12 | https://github.com/huggingface/datasets/issues/2335 | mariosasko | 0 | [
"bug"
] |
2,331 | Add Topical-Chat | ## Adding a Dataset
- **Name:** Topical-Chat
- **Description:** a knowledge-grounded human-human conversation dataset where the underlying knowledge spans 8 broad topics and conversation partners don’t have explicitly defined roles
- **Paper:** https://www.isca-speech.org/archive/Interspeech_2019/pdfs/3079.pdf
- **Data:** https://github.com/alexa/Topical-Chat
- **Motivation:** Good quality, knowledge-grounded dataset that spans a broad range of topics
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| OPEN | 2021-05-07T13:43:59 | 2021-05-07T13:43:59 | null | https://github.com/huggingface/datasets/issues/2331 | ktangri | 0 | [
"dataset request"
] |
2,330 | Allow passing `desc` to `tqdm` in `Dataset.map()` | It's normal to have many `map()` calls, and some of them can take a few minutes,
it would be nice to have a description on the progress bar.
Alternative solution:
Print the description before/after the `map()` call. | CLOSED | 2021-05-07T05:52:54 | 2021-05-26T14:59:21 | 2021-05-26T14:59:21 | https://github.com/huggingface/datasets/issues/2330 | changjonathanc | 2 | [
"enhancement",
"good first issue"
] |
2,327 | A syntax error in example | 
Sorry to report this with an image; I can't find the template source code for this snippet.
"bug"
] |
2,323 | load_dataset("timit_asr") gives back duplicates of just one sample text | ## Describe the bug
When you look up the key `["train"]` and then `['text']`, you get back a list with just one sentence duplicated 4620 times, namely "Would such an act of refusal be useful?". Similarly, when you look up `['test']` and then `['text']`, the list is the one sentence "The bungalow was pleasantly situated near the shore." repeated 1680 times.
I tried to work around the issue by downgrading to datasets version 1.3.0, inspired by [this post](https://www.gitmemory.com/issue/huggingface/datasets/2052/798904836) and removing the entire huggingface directory from ~/.cache, but I still get the same issue.
## Steps to reproduce the bug
```python
from datasets import load_dataset
timit = load_dataset("timit_asr")
print(timit['train']['text'])
print(timit['test']['text'])
```
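A quick sanity check, as a sketch: a healthy copy should contain thousands of distinct transcripts rather than one.
```python
from datasets import load_dataset

timit = load_dataset("timit_asr")
print(len(set(timit["train"]["text"])))  # 1 here, instead of thousands
```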
## Expected Result
Rows of diverse text, like how it is shown in the [wav2vec2.0 tutorial](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb)
<img width="485" alt="Screen Shot 2021-05-05 at 9 09 57 AM" src="https://user-images.githubusercontent.com/33647474/117146094-d9b77f00-ad81-11eb-8306-f281850c127a.png">
## Actual results
Rows of repeated text.
<img width="319" alt="Screen Shot 2021-05-05 at 9 11 53 AM" src="https://user-images.githubusercontent.com/33647474/117146231-f8b61100-ad81-11eb-834a-fc10410b0c9c.png">
## Versions
- Datasets: 1.3.0
- Python: 3.9.1
- Platform: macOS-11.2.1-x86_64-i386-64bit}
| CLOSED | 2021-05-05T13:14:48 | 2021-05-07T10:32:30 | 2021-05-07T10:32:30 | https://github.com/huggingface/datasets/issues/2323 | ekeleshian | 3 | [
"bug"
] |
2,322 | Calls to map are not cached. | ## Describe the bug
Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed?
## Steps to reproduce the bug
```python
import datasets
datasets.set_caching_enabled(True)
sst = datasets.load_dataset("sst")
def foo(samples, i):
print("executed", i[:10])
return samples
# first call
x = sst.map(foo, batched=True, with_indices=True, num_proc=2)
print('\n'*3, "#" * 30, '\n'*3)
# second call
y = sst.map(foo, batched=True, with_indices=True, num_proc=2)
# print version
import sys
import platform
print(f"""
- Datasets: {datasets.__version__}
- Python: {sys.version}
- Platform: {platform.platform()}
""")
```
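One diagnostic sketch worth trying (an assumption on my part: a cache miss could come from the function's fingerprint changing between runs): print the hash that `datasets` computes for the mapped function.
```python
from datasets.fingerprint import Hasher

print(Hasher.hash(foo))  # if this differs across runs, the cache lookup misses
```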
## Actual results
This code prints the following output for me:
```bash
No config specified, defaulting to: sst/default
Reusing dataset sst (/home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/b8a7889ef01c5d3ae8c379b84cc4080f8aad3ac2bc538701cbe0ac6416fb76ff)
#0: 0%| | 0/5 [00:00<?, ?ba/s]
#1: 0%| | 0/5 [00:00<?, ?ba/s]
executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
executed [4272, 4273, 4274, 4275, 4276, 4277, 4278, 4279, 4280, 4281]
executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]
executed [5272, 5273, 5274, 5275, 5276, 5277, 5278, 5279, 5280, 5281]
executed [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009]
executed [6272, 6273, 6274, 6275, 6276, 6277, 6278, 6279, 6280, 6281]
executed [3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008, 3009]
executed [7272, 7273, 7274, 7275, 7276, 7277, 7278, 7279, 7280, 7281]
executed [4000, 4001, 4002, 4003, 4004, 4005, 4006, 4007, 4008, 4009]
#0: 100%|██████████| 5/5 [00:00<00:00, 59.85ba/s]
executed [8272, 8273, 8274, 8275, 8276, 8277, 8278, 8279, 8280, 8281]
#1: 100%|██████████| 5/5 [00:00<00:00, 60.85ba/s]
#0: 0%| | 0/1 [00:00<?, ?ba/s]
#1: 0%| | 0/1 [00:00<?, ?ba/s]executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
#0: 100%|██████████| 1/1 [00:00<00:00, 69.32ba/s]
executed [551, 552, 553, 554, 555, 556, 557, 558, 559, 560]
#1: 100%|██████████| 1/1 [00:00<00:00, 70.93ba/s]
#0: 0%| | 0/2 [00:00<?, ?ba/s]
#1: 0%| | 0/2 [00:00<?, ?ba/s]executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]
#0: 100%|██████████| 2/2 [00:00<00:00, 63.25ba/s]
executed [1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112, 1113, 1114]
executed [2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114]
#1: 100%|██████████| 2/2 [00:00<00:00, 57.69ba/s]
##############################
#0: 0%| | 0/5 [00:00<?, ?ba/s]
#1: 0%| | 0/5 [00:00<?, ?ba/s]
executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
executed [4272, 4273, 4274, 4275, 4276, 4277, 4278, 4279, 4280, 4281]
executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]
executed [5272, 5273, 5274, 5275, 5276, 5277, 5278, 5279, 5280, 5281]
executed [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009]
executed [6272, 6273, 6274, 6275, 6276, 6277, 6278, 6279, 6280, 6281]
executed [3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008, 3009]
executed [4000, 4001, 4002, 4003, 4004, 4005, 4006, 4007, 4008, 4009]
#0: 100%|██████████| 5/5 [00:00<00:00, 58.10ba/s]
executed [7272, 7273, 7274, 7275, 7276, 7277, 7278, 7279, 7280, 7281]
executed [8272, 8273, 8274, 8275, 8276, 8277, 8278, 8279, 8280, 8281]
#1: 100%|██████████| 5/5 [00:00<00:00, 57.19ba/s]
#0: 0%| | 0/1 [00:00<?, ?ba/s]
#1: 0%| | 0/1 [00:00<?, ?ba/s]
executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
#0: 100%|██████████| 1/1 [00:00<00:00, 60.10ba/s]
executed [551, 552, 553, 554, 555, 556, 557, 558, 559, 560]
#1: 100%|██████████| 1/1 [00:00<00:00, 53.82ba/s]
#0: 0%| | 0/2 [00:00<?, ?ba/s]
#1: 0%| | 0/2 [00:00<?, ?ba/s]
executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]
executed [1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112, 1113, 1114]
#0: 100%|██████████| 2/2 [00:00<00:00, 72.76ba/s]
executed [2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114]
#1: 100%|██████████| 2/2 [00:00<00:00, 71.55ba/s]
- Datasets: 1.6.1
- Python: 3.8.3 (default, May 19 2020, 18:47:26)
[GCC 7.3.0]
- Platform: Linux-5.4.0-72-generic-x86_64-with-glibc2.10
```
## Expected results
Caching should work.
| CLOSED | 2021-05-05T12:11:27 | 2021-06-08T19:10:02 | 2021-06-08T19:08:21 | https://github.com/huggingface/datasets/issues/2322 | villmow | 6 | [
"bug"
] |
2,319 | UnicodeDecodeError for OSCAR (Afrikaans) | ## Describe the bug
When loading the [OSCAR dataset](https://huggingface.co/datasets/oscar) (specifically `unshuffled_deduplicated_af`), I encounter a `UnicodeDecodeError`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("oscar", "unshuffled_deduplicated_af")
```
## Expected results
Anything but an error, really.
## Actual results
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("oscar", "unshuffled_deduplicated_af")
Downloading: 14.7kB [00:00, 4.91MB/s]
Downloading: 3.07MB [00:00, 32.6MB/s]
Downloading and preparing dataset oscar/unshuffled_deduplicated_af (download: 62.93 MiB, generated: 163.38 MiB, post-processed: Unknown size, total: 226.32 MiB) to C:\Users\sgraaf\.cache\huggingface\datasets\oscar\unshuffled_deduplicated_af\1.0.0\bd4f96df5b4512007ef9fd17bbc1ecde459fa53d2fc0049cf99392ba2efcc464...
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81.0/81.0 [00:00<00:00, 40.5kB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 66.0M/66.0M [00:18<00:00, 3.50MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\load.py", line 745, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 574, in download_and_prepare
self._download_and_prepare(
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 652, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 979, in _prepare_split
for key, record in utils.tqdm(
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\tqdm\std.py", line 1133, in __iter__
for obj in iterable:
File "C:\Users\sgraaf\.cache\huggingface\modules\datasets_modules\datasets\oscar\bd4f96df5b4512007ef9fd17bbc1ecde459fa53d2fc0049cf99392ba2efcc464\oscar.py", line 359, in _generate_examples
for line in f:
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 7454: character maps to <undefined>
```
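A likely fix sketch, based on the traceback (the file is being decoded with cp1252, the Windows default codec): the loading script should pin the encoding when opening the file. `filepath` here is a placeholder.
```python
with open(filepath, encoding="utf-8") as f:
    for line in f:
        ...
```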
## Versions
- Datasets: 1.6.2
- Python: 3.9.4 (tags/v3.9.4:1f2e308, Apr 6 2021, 13:40:21) [MSC v.1928 64 bit (AMD64)]
- Platform: Windows-10-10.0.19041-SP0 | CLOSED | 2021-05-05T09:22:52 | 2021-05-05T10:57:31 | 2021-05-05T10:50:55 | https://github.com/huggingface/datasets/issues/2319 | sgraaf | 3 | [
"bug"
] |
2,318 | [api request] API to obtain "dataset_module" dynamic path? | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
This is an awesome library.
It seems like the dynamic module path in this library has broken some of hyperparameter tuning functionality: https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/34
This is because Ray will spawn new processes, and each process will load modules by path. However, we need to explicitly inform Ray to load the right modules, or else it will error upon import.
I'd like an API to obtain the dynamic paths. This will allow us to support this functionality in this awesome library while being future proof.
**Describe the solution you'd like**
`datasets.get_dynamic_paths -> List[str]` will be sufficient for my use case.
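A hypothetical usage sketch of the proposed API (`get_dynamic_paths` does not exist; the name is the one proposed above):
```python
import sys
import datasets

# Forward the dynamic module directories to a spawned worker process.
for path in datasets.get_dynamic_paths():
    sys.path.insert(0, path)
```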
By offering this API, we will be able to address the following issues (by patching the ray integration sufficiently):
https://github.com/huggingface/blog/issues/106
https://github.com/huggingface/transformers/issues/11565
https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/34
https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/35
| CLOSED | 2021-05-05T08:40:48 | 2021-05-06T08:45:45 | 2021-05-06T07:57:54 | https://github.com/huggingface/datasets/issues/2318 | richardliaw | 5 | [
"enhancement"
] |
2,316 | Incorrect version specification for pyarrow | ## Describe the bug
The pyarrow dependency is incorrectly specified in the setup.py file, on [this line](https://github.com/huggingface/datasets/blob/3a3e5a4da20bfcd75f8b6a6869b240af8feccc12/setup.py#L77).
Also as a snippet:
```python
"pyarrow>=1.0.0<4.0.0",
```
## Steps to reproduce the bug
```bash
pip install "pyarrow>=1.0.0<4.0.0"
```
## Expected results
It is expected to get a pyarrow version between 1.0.0 (inclusive) and 4.0.0 (exclusive).
## Actual results
pip ignores the specified versions since there is a missing comma between the lower and upper limits. Therefore, pip installs the latest pyarrow version from PYPI, which is 4.0.0.
This is especially problematic since "conda env export" fails due to incorrect version specification. Here is the conda error as well:
```bash
conda env export
InvalidVersionSpec: Invalid version '1.0.0<4.0.0': invalid character(s)
```
## Fix suggestion
Put a comma between the version limits which means replacing the line in setup.py file with the following:
```python
"pyarrow>=1.0.0,<4.0.0",
```
## Versions
Paste the output of the following code:
```python
- Datasets: 1.6.2
- Python: 3.7.10 (default, Feb 26 2021, 18:47:35)
[GCC 7.3.0]
- Platform: Linux-5.4.0-42-generic-x86_64-with-debian-buster-sid
```
| CLOSED | 2021-05-04T19:15:11 | 2021-05-05T10:10:03 | 2021-05-05T10:10:03 | https://github.com/huggingface/datasets/issues/2316 | cemilcengiz | 1 | [
"bug"
] |
2,301 | Unable to setup dev env on Windows | Hi
I tried installing the `".[dev]"` version on Windows 10 after cloning.
Here is the error I'm facing:
```bat
(env) C:\testing\datasets>pip install -e ".[dev]"
Obtaining file:///C:/testing/datasets
Requirement already satisfied: numpy>=1.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.19.5)
Collecting pyarrow>=0.17.1
Using cached pyarrow-4.0.0-cp37-cp37m-win_amd64.whl (13.3 MB)
Requirement already satisfied: dill in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.3.1.1)
Collecting pandas
Using cached pandas-1.2.4-cp37-cp37m-win_amd64.whl (9.1 MB)
Requirement already satisfied: requests>=2.19.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (2.25.1)
Requirement already satisfied: tqdm<4.50.0,>=4.27 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (4.49.0)
Requirement already satisfied: xxhash in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (2.0.2)
Collecting multiprocess
Using cached multiprocess-0.70.11.1-py37-none-any.whl (108 kB)
Requirement already satisfied: fsspec in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (2021.4.0)
Collecting huggingface_hub<0.1.0
Using cached huggingface_hub-0.0.8-py3-none-any.whl (34 kB)
Requirement already satisfied: importlib_metadata in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (4.0.1)
Requirement already satisfied: absl-py in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.12.0)
Requirement already satisfied: pytest in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (6.2.3)
Collecting pytest-xdist
Using cached pytest_xdist-2.2.1-py3-none-any.whl (37 kB)
Collecting apache-beam>=2.24.0
Using cached apache_beam-2.29.0-cp37-cp37m-win_amd64.whl (3.7 MB)
Collecting elasticsearch
Using cached elasticsearch-7.12.1-py2.py3-none-any.whl (339 kB)
Requirement already satisfied: boto3==1.16.43 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.16.43)
Requirement already satisfied: botocore==1.19.43 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.19.43)
Collecting moto[s3]==1.3.16
Using cached moto-1.3.16-py2.py3-none-any.whl (879 kB)
Collecting rarfile>=4.0
Using cached rarfile-4.0-py3-none-any.whl (28 kB)
Collecting tensorflow>=2.3
Using cached tensorflow-2.4.1-cp37-cp37m-win_amd64.whl (370.7 MB)
Requirement already satisfied: torch in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.8.1)
Requirement already satisfied: transformers in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (4.5.1)
Collecting bs4
Using cached bs4-0.0.1-py3-none-any.whl
Collecting conllu
Using cached conllu-4.4-py2.py3-none-any.whl (15 kB)
Collecting langdetect
Using cached langdetect-1.0.8-py3-none-any.whl
Collecting lxml
Using cached lxml-4.6.3-cp37-cp37m-win_amd64.whl (3.5 MB)
Collecting mwparserfromhell
Using cached mwparserfromhell-0.6-cp37-cp37m-win_amd64.whl (101 kB)
Collecting nltk
Using cached nltk-3.6.2-py3-none-any.whl (1.5 MB)
Collecting openpyxl
Using cached openpyxl-3.0.7-py2.py3-none-any.whl (243 kB)
Collecting py7zr
Using cached py7zr-0.15.2-py3-none-any.whl (66 kB)
Collecting tldextract
Using cached tldextract-3.1.0-py2.py3-none-any.whl (87 kB)
Collecting zstandard
Using cached zstandard-0.15.2-cp37-cp37m-win_amd64.whl (582 kB)
Collecting bert_score>=0.3.6
Using cached bert_score-0.3.9-py3-none-any.whl (59 kB)
Collecting rouge_score
Using cached rouge_score-0.0.4-py2.py3-none-any.whl (22 kB)
Collecting sacrebleu
Using cached sacrebleu-1.5.1-py3-none-any.whl (54 kB)
Requirement already satisfied: scipy in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.6.3)
Collecting seqeval
Using cached seqeval-1.2.2-py3-none-any.whl
Collecting sklearn
Using cached sklearn-0.0-py2.py3-none-any.whl
Collecting jiwer
Using cached jiwer-2.2.0-py3-none-any.whl (13 kB)
Requirement already satisfied: toml>=0.10.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.10.2)
Requirement already satisfied: requests_file>=1.5.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.5.1)
Requirement already satisfied: texttable>=1.6.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.6.3)
Requirement already satisfied: s3fs>=0.4.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.4.2)
Requirement already satisfied: Werkzeug>=1.0.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.0.1)
Collecting black
Using cached black-21.4b2-py3-none-any.whl (130 kB)
Collecting isort
Using cached isort-5.8.0-py3-none-any.whl (103 kB)
Collecting flake8==3.7.9
Using cached flake8-3.7.9-py2.py3-none-any.whl (69 kB)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from boto3==1.16.43->datasets==1.5.0.dev0) (0.10.0)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from boto3==1.16.43->datasets==1.5.0.dev0) (0.3.7)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from botocore==1.19.43->datasets==1.5.0.dev0) (1.26.4)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from botocore==1.19.43->datasets==1.5.0.dev0) (2.8.1)
Collecting entrypoints<0.4.0,>=0.3.0
Using cached entrypoints-0.3-py2.py3-none-any.whl (11 kB)
Collecting pyflakes<2.2.0,>=2.1.0
Using cached pyflakes-2.1.1-py2.py3-none-any.whl (59 kB)
Collecting pycodestyle<2.6.0,>=2.5.0
Using cached pycodestyle-2.5.0-py2.py3-none-any.whl (51 kB)
Collecting mccabe<0.7.0,>=0.6.0
Using cached mccabe-0.6.1-py2.py3-none-any.whl (8.6 kB)
Requirement already satisfied: jsondiff>=1.1.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.3.0)
Requirement already satisfied: pytz in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2021.1)
Requirement already satisfied: mock in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (4.0.3)
Requirement already satisfied: MarkupSafe<2.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.1.1)
Requirement already satisfied: python-jose[cryptography]<4.0.0,>=3.1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.2.0)
Requirement already satisfied: aws-xray-sdk!=0.96,>=0.93 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.8.0)
Requirement already satisfied: cryptography>=2.3.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.4.7)
Requirement already satisfied: more-itertools in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (8.7.0)
Requirement already satisfied: PyYAML>=5.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (5.4.1)
Requirement already satisfied: boto>=2.36.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.49.0)
Requirement already satisfied: idna<3,>=2.5 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.10)
Requirement already satisfied: sshpubkeys>=3.1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.3.1)
Requirement already satisfied: responses>=0.9.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.13.3)
Requirement already satisfied: xmltodict in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.12.0)
Requirement already satisfied: setuptools in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (52.0.0.post20210125)
Requirement already satisfied: Jinja2>=2.10.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.11.3)
Requirement already satisfied: zipp in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.4.1)
Requirement already satisfied: six>1.9 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.15.0)
Requirement already satisfied: ecdsa<0.15 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.14.1)
Requirement already satisfied: docker>=2.5.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (5.0.0)
Requirement already satisfied: cfn-lint>=0.4.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.49.0)
Requirement already satisfied: grpcio<2,>=1.29.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (1.32.0)
Collecting hdfs<3.0.0,>=2.1.0
Using cached hdfs-2.6.0-py3-none-any.whl (33 kB)
Collecting pyarrow>=0.17.1
Using cached pyarrow-3.0.0-cp37-cp37m-win_amd64.whl (12.6 MB)
Collecting fastavro<2,>=0.21.4
Using cached fastavro-1.4.0-cp37-cp37m-win_amd64.whl (394 kB)
Requirement already satisfied: httplib2<0.18.0,>=0.8 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.17.4)
Collecting pymongo<4.0.0,>=3.8.0
Using cached pymongo-3.11.3-cp37-cp37m-win_amd64.whl (382 kB)
Collecting crcmod<2.0,>=1.7
Using cached crcmod-1.7-py3-none-any.whl
Collecting avro-python3!=1.9.2,<1.10.0,>=1.8.1
Using cached avro_python3-1.9.2.1-py3-none-any.whl
Requirement already satisfied: typing-extensions<3.8.0,>=3.7.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (3.7.4.3)
Requirement already satisfied: future<1.0.0,>=0.18.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.18.2)
Collecting oauth2client<5,>=2.0.1
Using cached oauth2client-4.1.3-py2.py3-none-any.whl (98 kB)
Collecting pydot<2,>=1.2.0
Using cached pydot-1.4.2-py2.py3-none-any.whl (21 kB)
Requirement already satisfied: protobuf<4,>=3.12.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (3.15.8)
Requirement already satisfied: wrapt in c:\programdata\anaconda3\envs\env\lib\site-packages (from aws-xray-sdk!=0.96,>=0.93->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.12.1)
Collecting matplotlib
Using cached matplotlib-3.4.1-cp37-cp37m-win_amd64.whl (7.1 MB)
Requirement already satisfied: junit-xml~=1.9 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.9)
Requirement already satisfied: jsonpatch in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.32)
Requirement already satisfied: jsonschema~=3.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.2.0)
Requirement already satisfied: networkx~=2.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.5.1)
Requirement already satisfied: aws-sam-translator>=1.35.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.35.0)
Requirement already satisfied: cffi>=1.12 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cryptography>=2.3.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.14.5)
Requirement already satisfied: pycparser in c:\programdata\anaconda3\envs\env\lib\site-packages (from cffi>=1.12->cryptography>=2.3.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.20)
Requirement already satisfied: pywin32==227 in c:\programdata\anaconda3\envs\env\lib\site-packages (from docker>=2.5.1->moto[s3]==1.3.16->datasets==1.5.0.dev0) (227)
Requirement already satisfied: websocket-client>=0.32.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from docker>=2.5.1->moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.58.0)
Requirement already satisfied: docopt in c:\programdata\anaconda3\envs\env\lib\site-packages (from hdfs<3.0.0,>=2.1.0->apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.6.2)
Requirement already satisfied: filelock in c:\programdata\anaconda3\envs\env\lib\site-packages (from huggingface_hub<0.1.0->datasets==1.5.0.dev0) (3.0.12)
Requirement already satisfied: pyrsistent>=0.14.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from jsonschema~=3.0->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.17.3)
Requirement already satisfied: attrs>=17.4.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from jsonschema~=3.0->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (20.3.0)
Requirement already satisfied: decorator<5,>=4.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from networkx~=2.4->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (4.4.2)
Requirement already satisfied: rsa>=3.1.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from oauth2client<5,>=2.0.1->apache-beam>=2.24.0->datasets==1.5.0.dev0) (4.7.2)
Requirement already satisfied: pyasn1-modules>=0.0.5 in c:\programdata\anaconda3\envs\env\lib\site-packages (from oauth2client<5,>=2.0.1->apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.2.8)
Requirement already satisfied: pyasn1>=0.1.7 in c:\programdata\anaconda3\envs\env\lib\site-packages (from oauth2client<5,>=2.0.1->apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.4.8)
Requirement already satisfied: pyparsing>=2.1.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pydot<2,>=1.2.0->apache-beam>=2.24.0->datasets==1.5.0.dev0) (2.4.7)
Requirement already satisfied: certifi>=2017.4.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from requests>=2.19.0->datasets==1.5.0.dev0) (2020.12.5)
Requirement already satisfied: chardet<5,>=3.0.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from requests>=2.19.0->datasets==1.5.0.dev0) (4.0.0)
Collecting keras-preprocessing~=1.1.2
Using cached Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB)
Requirement already satisfied: termcolor~=1.1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (1.1.0)
Requirement already satisfied: tensorboard~=2.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (2.5.0)
Requirement already satisfied: wheel~=0.35 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (0.36.2)
Collecting opt-einsum~=3.3.0
Using cached opt_einsum-3.3.0-py3-none-any.whl (65 kB)
Collecting gast==0.3.3
Using cached gast-0.3.3-py2.py3-none-any.whl (9.7 kB)
Collecting google-pasta~=0.2
Using cached google_pasta-0.2.0-py3-none-any.whl (57 kB)
Requirement already satisfied: tensorflow-estimator<2.5.0,>=2.4.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (2.4.0)
Collecting astunparse~=1.6.3
Using cached astunparse-1.6.3-py2.py3-none-any.whl (12 kB)
Collecting flatbuffers~=1.12.0
Using cached flatbuffers-1.12-py2.py3-none-any.whl (15 kB)
Collecting h5py~=2.10.0
Using cached h5py-2.10.0-cp37-cp37m-win_amd64.whl (2.5 MB)
Requirement already satisfied: markdown>=2.6.8 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (3.3.4)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (1.8.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (0.4.4)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (0.6.0)
Requirement already satisfied: google-auth<2,>=1.6.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (1.30.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from google-auth<2,>=1.6.3->tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (4.2.2)
Requirement already satisfied: requests-oauthlib>=0.7.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (1.3.0)
Requirement already satisfied: oauthlib>=3.0.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (3.1.0)
Requirement already satisfied: regex!=2019.12.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (2021.4.4)
Requirement already satisfied: tokenizers<0.11,>=0.10.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (0.10.2)
Requirement already satisfied: sacremoses in c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (0.0.45)
Requirement already satisfied: packaging in c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (20.9)
Collecting pathspec<1,>=0.8.1
Using cached pathspec-0.8.1-py2.py3-none-any.whl (28 kB)
Requirement already satisfied: click>=7.1.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from black->datasets==1.5.0.dev0) (7.1.2)
Collecting appdirs
Using cached appdirs-1.4.4-py2.py3-none-any.whl (9.6 kB)
Collecting mypy-extensions>=0.4.3
Using cached mypy_extensions-0.4.3-py2.py3-none-any.whl (4.5 kB)
Requirement already satisfied: typed-ast>=1.4.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from black->datasets==1.5.0.dev0) (1.4.3)
Collecting beautifulsoup4
Using cached beautifulsoup4-4.9.3-py3-none-any.whl (115 kB)
Requirement already satisfied: soupsieve>1.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from beautifulsoup4->bs4->datasets==1.5.0.dev0) (2.2.1)
Collecting python-Levenshtein
Using cached python-Levenshtein-0.12.2.tar.gz (50 kB)
Requirement already satisfied: jsonpointer>=1.9 in c:\programdata\anaconda3\envs\env\lib\site-packages (from jsonpatch->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.1)
Requirement already satisfied: pillow>=6.2.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from matplotlib->bert_score>=0.3.6->datasets==1.5.0.dev0) (8.2.0)
Requirement already satisfied: cycler>=0.10 in c:\programdata\anaconda3\envs\env\lib\site-packages (from matplotlib->bert_score>=0.3.6->datasets==1.5.0.dev0) (0.10.0)
Requirement already satisfied: kiwisolver>=1.0.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from matplotlib->bert_score>=0.3.6->datasets==1.5.0.dev0) (1.3.1)
Collecting multiprocess
Using cached multiprocess-0.70.11-py3-none-any.whl (98 kB)
Using cached multiprocess-0.70.10.zip (2.4 MB)
Using cached multiprocess-0.70.9-py3-none-any.whl
Requirement already satisfied: joblib in c:\programdata\anaconda3\envs\env\lib\site-packages (from nltk->datasets==1.5.0.dev0) (1.0.1)
Collecting et-xmlfile
Using cached et_xmlfile-1.1.0-py3-none-any.whl (4.7 kB)
Requirement already satisfied: pyzstd<0.15.0,>=0.14.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from py7zr->datasets==1.5.0.dev0) (0.14.4)
Collecting pyppmd<0.13.0,>=0.12.1
Using cached pyppmd-0.12.1-cp37-cp37m-win_amd64.whl (32 kB)
Collecting pycryptodome>=3.6.6
Using cached pycryptodome-3.10.1-cp35-abi3-win_amd64.whl (1.6 MB)
Collecting bcj-cffi<0.6.0,>=0.5.1
Using cached bcj_cffi-0.5.1-cp37-cp37m-win_amd64.whl (21 kB)
Collecting multivolumefile<0.3.0,>=0.2.0
Using cached multivolumefile-0.2.3-py3-none-any.whl (17 kB)
Requirement already satisfied: iniconfig in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (1.1.1)
Requirement already satisfied: py>=1.8.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (1.10.0)
Requirement already satisfied: pluggy<1.0.0a1,>=0.12 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (0.13.1)
Requirement already satisfied: atomicwrites>=1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (1.4.0)
Requirement already satisfied: colorama in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (0.4.4)
Collecting pytest-forked
Using cached pytest_forked-1.3.0-py2.py3-none-any.whl (4.7 kB)
Collecting execnet>=1.1
Using cached execnet-1.8.0-py2.py3-none-any.whl (39 kB)
Requirement already satisfied: apipkg>=1.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from execnet>=1.1->pytest-xdist->datasets==1.5.0.dev0) (1.5)
Collecting portalocker==2.0.0
Using cached portalocker-2.0.0-py2.py3-none-any.whl (11 kB)
Requirement already satisfied: scikit-learn>=0.21.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from seqeval->datasets==1.5.0.dev0) (0.24.2)
Requirement already satisfied: threadpoolctl>=2.0.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from scikit-learn>=0.21.3->seqeval->datasets==1.5.0.dev0) (2.1.0)
Building wheels for collected packages: python-Levenshtein
Building wheel for python-Levenshtein (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: 'C:\ProgramData\Anaconda3\envs\env\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"'; __file__='"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\VKC~1\AppData\Local\Temp\pip-wheel-8jh7fm18'
cwd: C:\Users\VKC~1\AppData\Local\Temp\pip-install-ynt_dbm4\python-levenshtein_c02e7e6f9def4629a475349654670ae9\
Complete output (27 lines):
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
creating build\lib.win-amd64-3.7\Levenshtein
copying Levenshtein\StringMatcher.py -> build\lib.win-amd64-3.7\Levenshtein
copying Levenshtein\__init__.py -> build\lib.win-amd64-3.7\Levenshtein
running egg_info
writing python_Levenshtein.egg-info\PKG-INFO
writing dependency_links to python_Levenshtein.egg-info\dependency_links.txt
writing entry points to python_Levenshtein.egg-info\entry_points.txt
writing namespace_packages to python_Levenshtein.egg-info\namespace_packages.txt
writing requirements to python_Levenshtein.egg-info\requires.txt
writing top-level names to python_Levenshtein.egg-info\top_level.txt
reading manifest file 'python_Levenshtein.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no previously-included files matching '*pyc' found anywhere in distribution
warning: no previously-included files matching '*so' found anywhere in distribution
warning: no previously-included files matching '.project' found anywhere in distribution
warning: no previously-included files matching '.pydevproject' found anywhere in distribution
writing manifest file 'python_Levenshtein.egg-info\SOURCES.txt'
copying Levenshtein\_levenshtein.c -> build\lib.win-amd64-3.7\Levenshtein
copying Levenshtein\_levenshtein.h -> build\lib.win-amd64-3.7\Levenshtein
running build_ext
building 'Levenshtein._levenshtein' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
----------------------------------------
ERROR: Failed building wheel for python-Levenshtein
Running setup.py clean for python-Levenshtein
Failed to build python-Levenshtein
Installing collected packages: python-Levenshtein, pytest-forked, pyppmd, pymongo, pyflakes, pydot, pycryptodome, pycodestyle, pyarrow, portalocker, pathspec, pandas, opt-einsum, oauth2client, nltk, mypy-extensions, multivolumefile, multiprocess, moto, mccabe, matplotlib, keras-preprocessing, huggingface-hub, hdfs, h5py, google-pasta, gast, flatbuffers, fastavro, execnet, et-xmlfile, entrypoints, crcmod, beautifulsoup4, bcj-cffi, avro-python3, astunparse, appdirs, zstandard, tldextract, tensorflow, sklearn, seqeval, sacrebleu, rouge-score, rarfile, pytest-xdist, py7zr, openpyxl, mwparserfromhell, lxml, langdetect, jiwer, isort, flake8, elasticsearch, datasets, conllu, bs4, black, bert-score, apache-beam
Running setup.py install for python-Levenshtein ... error
ERROR: Command errored out with exit status 1:
command: 'C:\ProgramData\Anaconda3\envs\env\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"'; __file__='"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\VKC~1\AppData\Local\Temp\pip-record-v7l7zitb\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\ProgramData\Anaconda3\envs\env\Include\python-Levenshtein'
cwd: C:\Users\VKC~1\AppData\Local\Temp\pip-install-ynt_dbm4\python-levenshtein_c02e7e6f9def4629a475349654670ae9\
Complete output (27 lines):
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
creating build\lib.win-amd64-3.7\Levenshtein
copying Levenshtein\StringMatcher.py -> build\lib.win-amd64-3.7\Levenshtein
copying Levenshtein\__init__.py -> build\lib.win-amd64-3.7\Levenshtein
running egg_info
writing python_Levenshtein.egg-info\PKG-INFO
writing dependency_links to python_Levenshtein.egg-info\dependency_links.txt
writing entry points to python_Levenshtein.egg-info\entry_points.txt
writing namespace_packages to python_Levenshtein.egg-info\namespace_packages.txt
writing requirements to python_Levenshtein.egg-info\requires.txt
writing top-level names to python_Levenshtein.egg-info\top_level.txt
reading manifest file 'python_Levenshtein.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no previously-included files matching '*pyc' found anywhere in distribution
warning: no previously-included files matching '*so' found anywhere in distribution
warning: no previously-included files matching '.project' found anywhere in distribution
warning: no previously-included files matching '.pydevproject' found anywhere in distribution
writing manifest file 'python_Levenshtein.egg-info\SOURCES.txt'
copying Levenshtein\_levenshtein.c -> build\lib.win-amd64-3.7\Levenshtein
copying Levenshtein\_levenshtein.h -> build\lib.win-amd64-3.7\Levenshtein
running build_ext
building 'Levenshtein._levenshtein' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\ProgramData\Anaconda3\envs\env\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"'; __file__='"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\VKC~1\AppData\Local\Temp\pip-record-v7l7zitb\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\ProgramData\Anaconda3\envs\env\Include\python-Levenshtein' Check the logs for full command output.
```
Here are conda and python versions:
```bat
(env) C:\testing\datasets>conda --version
conda 4.9.2
(env) C:\testing\datasets>python --version
Python 3.7.10
```
Please help me out. Thanks. | CLOSED | 2021-05-02T13:20:42 | 2021-05-03T15:18:01 | 2021-05-03T15:17:34 | https://github.com/huggingface/datasets/issues/2301 | gchhablani | 2 | [] |
2,300 | Add VoxPopuli | ## Adding a Dataset
- **Name:** Voxpopuli
- **Description:** VoxPopuli is raw data is collected from 2009-2020 European Parliament event recordings
- **Paper:** https://arxiv.org/abs/2101.00390
- **Data:** https://github.com/facebookresearch/voxpopuli
- **Motivation:** biggest unlabeled speech dataset
**Note**: Since the dataset is so huge, we should only add the config `10k` in the beginning.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| CLOSED | 2021-05-02T12:17:40 | 2023-02-28T17:43:52 | 2023-02-28T17:43:51 | https://github.com/huggingface/datasets/issues/2300 | patrickvonplaten | 4 | [
"dataset request",
"speech"
] |
2,299 | My iPhone | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | CLOSED | 2021-05-02T11:11:11 | 2021-07-23T09:24:16 | 2021-05-03T08:17:38 | https://github.com/huggingface/datasets/issues/2299 | Jasonbuchanan1983 | 0 | [] |
2,296 | 1 | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | CLOSED | 2021-04-30T17:53:49 | 2021-05-03T08:17:31 | 2021-05-03T08:17:31 | https://github.com/huggingface/datasets/issues/2296 | zinnyi | 0 | [
"dataset request"
] |
2,294 | Slow #0 when using map to tokenize. | Hi, _datasets_ is really amazing! I am following [run_mlm_no_trainer.py](url) to pre-train BERT, and it uses `tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=not args.overwrite_cache,
)` to tokenize by multiprocessing. However, I have found that when `num_proc`>1,the process _#0_ is much slower than others.
It looks like this:

It takes more than 12 hours for #0, while others just about half an hour. Could anyone tell me it is normal or not, and is there any methods to speed up it?
| OPEN | 2021-04-30T08:00:33 | 2021-05-04T11:00:11 | null | https://github.com/huggingface/datasets/issues/2294 | VerdureChen | 3 | [] |
2,288 | Load_dataset for local CSV files | The method load_dataset fails to correctly load a dataset from csv.
Moreover, I am working on a token-classification task ( POS tagging) , where each row in my CSV contains two columns each of them having a list of strings.
row example:
```tokens | labels
['I' , 'am', 'John'] | ['PRON', 'AUX', 'PROPN' ]
```
The method, loads each list as a string: (i.g "['I' , 'am', 'John']").
To solve this issue, I copied the Datasets.Features, created Sequence types ( instead of Value) and tried to cast the features type
```
new_features['tokens'] = Sequence(feature=Value(dtype='string', id=None))
new_features['labels'] = Sequence(feature=ClassLabel(num_classes=len(tag2idx), names=list(unique_tags)))
dataset = dataset.cast(new_features)
```
but I got the following error
```
ArrowNotImplementedError: Unsupported cast from string to list using function cast_list
```
Moreover, I tried to set feature parameter in load_dataset method, to my new_features, but this fails as well.
How can this be solved ? | CLOSED | 2021-04-29T15:01:10 | 2021-06-15T13:49:26 | 2021-06-15T13:49:26 | https://github.com/huggingface/datasets/issues/2288 | sstojanoska | 3 | [
"bug"
] |
2,285 | Help understanding how to build a dataset for language modeling as with the old TextDataset | Hello,
I am trying to load a custom dataset that I will then use for language modeling. The dataset consists of a text file that has a whole document in each line, meaning that each line overpasses the normal 512 tokens limit of most tokenizers.
I would like to understand what is the process to build a text dataset that tokenizes each line, having previously split the documents in the dataset into lines of a "tokenizable" size, as the old TextDataset class would do, where you only had to do the following, and a tokenized dataset without text loss would be available to pass to a DataCollator:
```
model_checkpoint = 'distilbert-base-uncased'
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
from transformers import TextDataset
dataset = TextDataset(
tokenizer=tokenizer,
file_path="path/to/text_file.txt",
block_size=512,
)
```
For now, what I have is the following, which, of course, throws an error because each line is longer than the maximum block size in the tokenizer:
```
import datasets
dataset = datasets.load_dataset('path/to/text_file.txt')
model_checkpoint = 'distilbert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
def tokenize_function(examples):
return tokenizer(examples["text"])
tokenized_datasets = dataset.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"])
tokenized_datasets
```
So what would be the "standard" way of creating a dataset in the way it was done before?
Thank you very much for the help :)) | CLOSED | 2021-04-29T13:16:45 | 2021-05-19T07:22:45 | 2021-05-19T07:22:39 | https://github.com/huggingface/datasets/issues/2285 | danieldiezmallo | 2 | [] |
2,279 | Compatibility with Ubuntu 18 and GLIBC 2.27? | ## Describe the bug
For use on Ubuntu systems, it seems that datasets requires GLIBC 2.29. However, Ubuntu 18 runs with GLIBC 2.27 and it seems [non-trivial to upgrade GLIBC to 2.29 for Ubuntu 18 users](https://www.digitalocean.com/community/questions/how-install-glibc-2-29-or-higher-in-ubuntu-18-04).
I'm not sure if there is anything that can be done about this, but I'd like to confirm that using huggingface/datasets requires either an upgrade to Ubuntu 19/20 or a hand-rolled install of a higher version of GLIBC.
## Steps to reproduce the bug
1. clone the transformers repo
2. move to examples/pytorch/language-modeling
3. run example command:
```python run_clm.py --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir /tmp/test-clm```
## Expected results
As described in the transformers repo.
## Actual results
```Traceback (most recent call last):
File "run_clm.py", line 34, in <module>
from transformers import (
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/__init__.py", line 2487, in __getattr__
return super().__getattr__(name)
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/file_utils.py", line 1699, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/__init__.py", line 2481, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/__init__.py", line 19, in <module>
from . import (
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/layoutlm/__init__.py", line 23, in <module>
from .tokenization_layoutlm import LayoutLMTokenizer
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/layoutlm/tokenization_layoutlm.py", line 19, in <module>
from ..bert.tokenization_bert import BertTokenizer
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/bert/tokenization_bert.py", line 23, in <module>
from ...tokenization_utils import PreTrainedTokenizer, _is_control, _is_punctuation, _is_whitespace
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 26, in <module>
from .tokenization_utils_base import (
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 68, in <module>
from tokenizers import AddedToken
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/tokenizers/__init__.py", line 79, in <module>
from .tokenizers import (
ImportError: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/tokenizers/tokenizers.cpython-37m-x86_64-linux-gnu.so)
```
## Versions
Paste the output of the following code:
```
- Datasets: 1.6.1
- Python: 3.7.10 (default, Feb 26 2021, 18:47:35)
[GCC 7.3.0]
- Platform: Linux-4.15.0-128-generic-x86_64-with-debian-buster-sid
```
| CLOSED | 2021-04-28T22:08:07 | 2021-04-29T07:42:42 | 2021-04-29T07:42:42 | https://github.com/huggingface/datasets/issues/2279 | tginart | 2 | [
"bug"
] |
2,278 | Loss result inGptNeoForCasual | Is there any way you give the " loss" and "logits" results in the gpt neo api? | CLOSED | 2021-04-28T15:39:52 | 2021-05-06T16:14:23 | 2021-05-06T16:14:23 | https://github.com/huggingface/datasets/issues/2278 | Yossillamm | 1 | [
"enhancement"
] |
2,276 | concatenate_datasets loads all the data into memory | ## Describe the bug
When I try to concatenate 2 datasets (10GB each) , the entire data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or concatenating it again.

## Steps to reproduce the bug
```python
from datasets import concatenate_datasets, load_from_disk
test_sampled_pro = load_from_disk("test_sampled_pro")
val_sampled_pro = load_from_disk("val_sampled_pro")
big_set = concatenate_datasets([test_sampled_pro, val_sampled_pro])
# Loaded to memory
big_set.save_to_disk("big_set")
# Loaded to memory
big_set = concatenate_datasets([big_set, val_sampled_pro])
```
## Expected results
The data should be loaded into memory in batches and then saved directly to disk.
## Actual results
The entire data set is loaded into the memory and then saved to the hard disk.
## Versions
Paste the output of the following code:
```python
- Datasets: 1.6.1
- Python: 3.8.8 (default, Apr 13 2021, 19:58:26)
[GCC 7.3.0]
- Platform: Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.10
```
| CLOSED | 2021-04-28T14:27:21 | 2021-05-03T08:41:55 | 2021-05-03T08:41:55 | https://github.com/huggingface/datasets/issues/2276 | chbensch | 7 | [
"bug"
] |
2,275 | SNLI dataset has labels of -1 | There are a number of rows with a label of -1 in the SNLI dataset. The dataset descriptions [here](https://nlp.stanford.edu/projects/snli/) and [here](https://github.com/huggingface/datasets/tree/master/datasets/snli) don't list -1 as a label possibility, and neither does the dataset viewer. As examples, see index 107 or 124 of the test set.
It isn't clear what these labels mean. I found a [line of code](https://github.com/huggingface/datasets/blob/80e59ef178d3bb2090d091bc32315c655eb0633d/datasets/snli/snli.py#L94) that seems to put them in but it seems still unclear why they are there. The current workaround is to just drop the rows from any model being trained.
Perhaps the documentation should be updated. | CLOSED | 2021-04-28T00:32:25 | 2021-05-17T13:34:18 | 2021-05-17T13:34:18 | https://github.com/huggingface/datasets/issues/2275 | puzzler10 | 1 | [] |
2,272 | Bug in Dataset.class_encode_column | ## Describe the bug
All the rest of the columns except the one passed to `Dataset.class_encode_column` are discarded.
## Expected results
All the original columns should be kept.
This needs regression tests.
| CLOSED | 2021-04-27T16:13:18 | 2021-04-30T12:54:27 | 2021-04-30T12:54:27 | https://github.com/huggingface/datasets/issues/2272 | albertvillanova | 1 | [
"bug"
] |
2,271 | Synchronize table metadata with features | **Is your feature request related to a problem? Please describe.**
As pointed out in this [comment](https://github.com/huggingface/datasets/pull/2145#discussion_r621326767):
> Metadata stored in the schema is just a redundant information regarding the feature types.
It is used when calling Dataset.from_file to know which feature types to use.
These metadata are stored in the schema of the pyarrow table by using `update_metadata_with_features`.
However this something that's almost never tested properly.
**Describe the solution you'd like**
We should find a way to always make sure that the metadata (in `self.data.schema.metadata`) are synced with the actual feature types (in `self.info.features`). | CLOSED | 2021-04-27T15:55:13 | 2022-06-01T17:13:21 | 2022-06-01T17:13:21 | https://github.com/huggingface/datasets/issues/2271 | albertvillanova | 1 | [
"enhancement"
] |
2,267 | DatasetDict save load Failing test in 1.6 not in 1.5 | ## Describe the bug
We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema.
Downgrading to `>1.6` -- fixes the problem.
## Steps to reproduce the bug
```python
### Load a dataset dict from jsonl
path = '/test/foo'
ds_dict.save_to_disk(path)
ds_from_disk = DatasetDict.load_from_disk(path). ## <-- this is where I see the error on 1.6
```
## Expected results
Upgrading to 1.6 shouldn't break that test. We should be able to serialize to and from disk.
## Actual results
```
# Infer features if None
inferred_features = Features.from_arrow_schema(arrow_table.schema)
if self.info.features is None:
self.info.features = inferred_features
# Infer fingerprint if None
if self._fingerprint is None:
self._fingerprint = generate_fingerprint(self)
# Sanity checks
assert self.features is not None, "Features can't be None in a Dataset object"
assert self._fingerprint is not None, "Fingerprint can't be None in a Dataset object"
if self.info.features.type != inferred_features.type:
> raise ValueError(
"External features info don't match the dataset:\nGot\n{}\nwith type\n{}\n\nbut expected something like\n{}\nwith type\n{}".format(
self.info.features, self.info.features.type, inferred_features, inferred_features.type
)
)
E ValueError: External features info don't match the dataset:
E Got
E {'_input_hash': Value(dtype='int64', id=None), '_task_hash': Value(dtype='int64', id=None), '_view_id': Value(dtype='string', id=None), 'answer': Value(dtype='string', id=None), 'encoding__ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'encoding__offsets': Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), length=-1, id=None), 'encoding__overflowing': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None), 'encoding__tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'encoding__words': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_labels': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'relations': [{'child': Value(dtype='int64', id=None), 'child_span': {'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None)}, 'color': Value(dtype='string', id=None), 'head': Value(dtype='int64', id=None), 'head_span': {'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None)}, 'label': Value(dtype='string', id=None)}], 'spans': [{'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None), 'token_end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'type': Value(dtype='string', id=None)}], 'text': Value(dtype='string', id=None), 'tokens': [{'disabled': Value(dtype='bool', id=None), 'end': Value(dtype='int64', id=None), 'id': Value(dtype='int64', id=None), 'start': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None), 'ws': Value(dtype='bool', id=None)}]}
E with type
E struct<_input_hash: int64, _task_hash: int64, _view_id: string, answer: string, encoding__ids: list<item: int64>, encoding__offsets: list<item: list<item: int64>>, encoding__overflowing: list<item: null>, encoding__tokens: list<item: string>, encoding__words: list<item: int64>, ner_ids: list<item: int64>, ner_labels: list<item: string>, relations: list<item: struct<child: int64, child_span: struct<end: int64, label: string, start: int64, token_end: int64, token_start: int64>, color: string, head: int64, head_span: struct<end: int64, label: string, start: int64, token_end: int64, token_start: int64>, label: string>>, spans: list<item: struct<end: int64, label: string, start: int64, text: string, token_end: int64, token_start: int64, type: string>>, text: string, tokens: list<item: struct<disabled: bool, end: int64, id: int64, start: int64, text: string, ws: bool>>>
E
E but expected something like
E {'_input_hash': Value(dtype='int64', id=None), '_task_hash': Value(dtype='int64', id=None), '_view_id': Value(dtype='string', id=None), 'answer': Value(dtype='string', id=None), 'encoding__ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'encoding__offsets': Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), length=-1, id=None), 'encoding__overflowing': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None), 'encoding__tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'encoding__words': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_labels': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'relations': [{'head': Value(dtype='int64', id=None), 'child': Value(dtype='int64', id=None), 'head_span': {'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None)}, 'child_span': {'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None)}, 'color': Value(dtype='string', id=None), 'label': Value(dtype='string', id=None)}], 'spans': [{'text': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'type': Value(dtype='string', id=None), 'label': Value(dtype='string', id=None)}], 'text': Value(dtype='string', id=None), 'tokens': [{'text': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'id': Value(dtype='int64', id=None), 'ws': Value(dtype='bool', id=None), 'disabled': Value(dtype='bool', id=None)}]}
E with type
E struct<_input_hash: int64, _task_hash: int64, _view_id: string, answer: string, encoding__ids: list<item: int64>, encoding__offsets: list<item: list<item: int64>>, encoding__overflowing: list<item: null>, encoding__tokens: list<item: string>, encoding__words: list<item: int64>, ner_ids: list<item: int64>, ner_labels: list<item: string>, relations: list<item: struct<head: int64, child: int64, head_span: struct<start: int64, end: int64, token_start: int64, token_end: int64, label: string>, child_span: struct<start: int64, end: int64, token_start: int64, token_end: int64, label: string>, color: string, label: string>>, spans: list<item: struct<text: string, start: int64, token_start: int64, token_end: int64, end: int64, type: string, label: string>>, text: string, tokens: list<item: struct<text: string, start: int64, end: int64, id: int64, ws: bool, disabled: bool>>>
../../../../../.virtualenvs/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:274: ValueError
```
## Versions
- Datasets: 1.6.1
- Python: 3.8.5 (default, Jan 26 2021, 10:01:04)
[Clang 12.0.0 (clang-1200.0.32.2)]
- Platform: macOS-10.15.7-x86_64-i386-64bit
```
| OPEN | 2021-04-27T00:03:25 | 2021-05-28T15:27:34 | null | https://github.com/huggingface/datasets/issues/2267 | timothyjlaurent | 6 | [
"bug"
] |
2,262 | NewsPH NLI dataset script fails to access test data. | In Newsph-NLI Dataset (#1192), it fails to access test data.
According to the script below, the download manager will download the train data when trying to download the test data.
https://github.com/huggingface/datasets/blob/2a2dd6316af2cc7fdf24e4779312e8ee0c7ed98b/datasets/newsph_nli/newsph_nli.py#L71
If you download it according to the script above, you can see that train and test receive the same data as shown below.
```python
>>> from datasets import load_dataset
>>> newsph_nli = load_dataset(path="./datasets/newsph_nli.py")
>>> newsph_nli
DatasetDict({
train: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 420000
})
test: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 420000
})
validation: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 90000
})
})
>>> newsph_nli["train"][0]
{'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).',
'label': 1,
'premise': '"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa," ayon kay Robredo sa inilabas nitong statement.'}
>>> newsph_nli["test"][0]
{'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).',
'label': 1,
'premise': '"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa," ayon kay Robredo sa inilabas nitong statement.'}
```
In local, I modified the code of the source as below and got the correct result.
```python
71 test_path = os.path.join(download_path, "test.csv")
```
```python
>>> from datasets import load_dataset
>>> newsph_nli = load_dataset(path="./datasets/newsph_nli.py")
>>> newsph_nli
DatasetDict({
train: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 420000
})
test: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 9000
})
validation: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 90000
})
})
>>> newsph_nli["train"][0]
{'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).',
'label': 1,
'premise': '"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa," ayon kay Robredo sa inilabas nitong statement.'}
>>> newsph_nli["test"][0]
{'hypothesis': '-- JAI (@JaiPaller) September 13, 2019',
'label': 1,
'premise': 'Pinag-iingat ng Konsulado ng Pilipinas sa Dubai ang publiko, partikular ang mga donor, laban sa mga scam na gumagamit ng mga charitable organization.'}
```
I don't have experience with open source pull requests, so I suggest that you reflect them in the source.
Thank you for reading :) | CLOSED | 2021-04-26T06:44:41 | 2021-04-29T09:32:03 | 2021-04-29T09:30:20 | https://github.com/huggingface/datasets/issues/2262 | jinmang2 | 1 | [
"dataset bug"
] |
2,256 | Running `datase.map` with `num_proc > 1` uses a lot of memory | ## Describe the bug
Running `datase.map` with `num_proc > 1` leads to a tremendous memory usage that requires swapping on disk and it becomes very slow.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dstc8_datset = load_dataset("roskoN/dstc8-reddit-corpus", keep_in_memory=False)
def _prepare_sample(batch):
return {"input_ids": list(), "attention_mask": list()}
for split_name, dataset_split in list(dstc8_datset.items()):
print(f"Processing {split_name}")
encoded_dataset_split = dataset_split.map(
function=_prepare_sample,
batched=True,
num_proc=4,
remove_columns=dataset_split.column_names,
batch_size=10,
writer_batch_size=10,
keep_in_memory=False,
)
print(encoded_dataset_split)
path = f"./data/encoded_{split_name}"
encoded_dataset_split.save_to_disk(path)
```
## Expected results
Memory usage should stay within reasonable boundaries.
## Actual results
This is htop-output from running the provided script.

## Versions
```
- Datasets: 1.6.0
- Python: 3.8.8 (default, Apr 13 2021, 19:58:26)
[GCC 7.3.0]
- Platform: Linux-4.19.128-microsoft-standard-x86_64-with-glibc2.10
```
Running on WSL2
| CLOSED | 2021-04-24T09:56:20 | 2021-04-26T17:12:15 | 2021-04-26T17:12:15 | https://github.com/huggingface/datasets/issues/2256 | roskoN | 2 | [
"bug"
] |
2,252 | Slow dataloading with big datasets issue persists | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total time (s) | Percentage % |
------------------------------------------------------------------------------------------------------------------------------------
Total | - |_ | 517.96 | 100 % |
------------------------------------------------------------------------------------------------------------------------------------
model_backward | 0.26144 |100 | 26.144 | 5.0475 |
model_forward | 0.11123 |100 | 11.123 | 2.1474 |
get_train_batch | 0.097121 |100 | 9.7121 | 1.8751 |
```
3) Running with 600GB, datasets==1.6.0
```
Action | Mean duration (s) |Num calls | Total time (s) | Percentage % |
------------------------------------------------------------------------------------------------------------------------------------
Total | - |_ | 4563.2 | 100 % |
------------------------------------------------------------------------------------------------------------------------------------
get_train_batch | 5.1279 |100 | 512.79 | 11.237 |
model_backward | 4.8394 |100 | 483.94 | 10.605 |
model_forward | 0.12162 |100 | 12.162 | 0.26652 |
```
I see that `get_train_batch` lags when data is large. Could this be related to different issues?
I would be happy to provide necessary information to investigate. | CLOSED | 2021-04-23T08:18:20 | 2024-01-26T15:10:28 | 2024-01-26T15:10:28 | https://github.com/huggingface/datasets/issues/2252 | hwijeen | 70 | [] |
2,251 | while running run_qa.py, ran into a value error | command:
python3 run_qa.py --model_name_or_path hyunwoongko/kobart --dataset_name squad_kor_v2 --do_train --do_eval --per_device_train_batch_size 8 --learning_rate 3e-5 --num_train_epochs 3 --max_seq_length 512 --doc_stride 128 --output_dir /tmp/debug_squad/
error:
ValueError: External features info don't match the dataset:
Got
{'id': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'context': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'answer': {'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None), 'html_answer_start': Value(dtype='int32', id=None)}, 'url': Value(dtype='string', id=None), 'raw_html': Value(dtype='string', id=None)}
with type
struct<answer: struct<text: string, answer_start: int32, html_answer_start: int32>, context: string, id: string, question: string, raw_html: string, title: string, url: string>
but expected something like
{'answer': {'answer_start': Value(dtype='int32', id=None), 'html_answer_start': Value(dtype='int32', id=None), 'text': Value(dtype='string', id=None)}, 'context': Value(dtype='string', id=None), 'id': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'raw_html': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'url': Value(dtype='string', id=None)}
with type
struct<answer: struct<answer_start: int32, html_answer_start: int32, text: string>, context: string, id: string, question: string, raw_html: string, title: string, url: string>
I didn't encounter this error 4 hours ago. any solutions for this kind of issue?
looks like gained dataset format refers to 'Data Fields', while expected refers to 'Data Instances'. | OPEN | 2021-04-23T07:51:03 | 2021-04-23T07:51:03 | null | https://github.com/huggingface/datasets/issues/2251 | nlee0212 | 0 | [] |
2,250 | some issue in loading local txt file as Dataset for run_mlm.py | 
first of all, I tried to load 3 .txt files as a dataset (sure that the directory and permission is OK.), I face with the below error.
> FileNotFoundError: [Errno 2] No such file or directory: 'c'
by removing one of the training .txt files It's fixed and although if I put all file as training it's ok


after this, my question is how could I use this defined Dataset for run_mlm.py for from scratch pretraining.
by using --train_file path_to_train_file just can use one .txt , .csv or, .json file. I tried to set my defined Dataset as --dataset_name but the below issue occurs.
> Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 336, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py", line 291, in cached_path
use_auth_token=download_config.use_auth_token,
File "/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py", line 621, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/dataset/dataset.py
> During handling of the above exception, another exception occurred:
> Traceback (most recent call last):
File "run_mlm.py", line 486, in <module>
main()
File "run_mlm.py", line 242, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir)
File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 719, in load_dataset
use_auth_token=use_auth_token,
File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 347, in prepare_module
combined_path, github_file_path
FileNotFoundError: Couldn't find file locally at dataset/dataset.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.6.0/datasets/dataset/dataset.py.
The file is also not present on the master branch on github.
| CLOSED | 2021-04-22T19:39:13 | 2022-03-30T08:29:47 | 2022-03-30T08:29:47 | https://github.com/huggingface/datasets/issues/2250 | alighofrani95 | 2 | [] |
2,243 | Map is slow and processes batches one after another | ## Describe the bug
I have a somewhat unclear bug to me, where I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 million samples) this problem occurs. Thats why I can't give exact steps to reproduce, I'm sorry.
I process a large dataset in a two step process. I first call map on a dataset I load from disk and create a new dataset from it. This works like expected and `map` uses all workers I started it with. Then I process the dataset created by the first step, again with `map`, which is really slow and starting only one or two process at a time. Number of processes is the same for both steps.
pseudo code:
```python
ds = datasets.load_from_disk("path")
new_dataset = ds.map(work, batched=True, ...) # fast uses all processes
final_dataset = new_dataset.map(work2, batched=True, ...) # slow starts one process after another
```
## Expected results
Second stage should be as fast as the first stage.
## Versions
Paste the output of the following code:
- Datasets: 1.5.0
- Python: 3.8.8 (default, Feb 24 2021, 21:46:12)
- Platform: Linux-5.4.0-60-generic-x86_64-with-glibc2.10
Do you guys have any idea? Thanks a lot! | CLOSED | 2021-04-20T14:58:20 | 2021-05-03T17:54:33 | 2021-05-03T17:54:32 | https://github.com/huggingface/datasets/issues/2243 | villmow | 5 | [
"bug"
] |
2,242 | Link to datasets viwer on Quick Tour page returns "502 Bad Gateway" | Link to datasets viwer (https://huggingface.co/datasets/viewer/) on Quick Tour page (https://huggingface.co/docs/datasets/quicktour.html) returns "502 Bad Gateway"
The same error with https://huggingface.co/datasets/viewer/?dataset=glue&config=mrpc | CLOSED | 2021-04-20T14:19:51 | 2021-04-20T15:02:45 | 2021-04-20T15:02:45 | https://github.com/huggingface/datasets/issues/2242 | martavillegas | 1 | [
"bug"
] |
2,239 | Error loading wikihow dataset | ## Describe the bug
When attempting to load wikihow into a dataset with
```python
from datasets import load_dataset
dataset = load_dataset('wikihow', data_dir='./wikihow')
```
I get the message:
```
AttributeError: 'BuilderConfig' object has no attribute 'filename'
```
at the end of a [full stack trace](https://gist.github.com/odellus/602c3b2de52f541d353b1022f320ffc2).
## Steps to reproduce the bug
I have followed the instructions for creating a wikihow dataset. The [wikihow dataset site](https://huggingface.co/datasets/wikihow) says to use
```python
from datasets import load_dataset
dataset = load_dataset('wikihow')
```
to load the dataset. I do so and I get the message
```
AssertionError: The dataset wikihow with config all requires manual data.
Please follow the manual download instructions: You need to manually download two wikihow files. An overview of which files to download can be seen at https://github.com/mahnazkoupaee/WikiHow-Dataset.
You need to download the following two files manually:
1) https://ucsb.app.box.com/s/ap23l8gafpezf4tq3wapr6u8241zz358 and save the file under <path/to/folder>/wikihowAll.csv
2) https://ucsb.app.box.com/s/7yq601ijl1lzvlfu4rjdbbxforzd2oag and save the file under <path/to/folder>/wikihowSep.csv
The <path/to/folder> can e.g. be "~/manual_wikihow_data".
Wikihow can then be loaded using the following command `datasets.load_dataset("wikihow", data_dir="<path/to/folder>")`.
.
Manual data can be loaded with `datasets.load_dataset(wikihow, data_dir='<path/to/manual/data>')
```
So I create a directory `./wikihow` and download `wikihowAll.csv` and `wikihowSep.csv` into the new directory.
Then I run
```python
from datasets import load_dataset
dataset = load_dataset('wikihow', data_dir='./wikihow')
```
that's when I get the [stack trace](https://gist.github.com/odellus/602c3b2de52f541d353b1022f320ffc2)
## Expected results
I expected it to load the downloaded files into a dataset.
## Actual results
```python
Using custom data configuration default-data_dir=.%2Fwikihow
Downloading and preparing dataset wikihow/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/azureuser/.cache/huggingface/datasets/wikihow/default-data_dir=.%2Fwikihow/0.0.0/58f42f8f0e4d459811a0f69aaab35870093830ccd58006769e7e1eb3e0e686c2... ---------------------------------------------------------------------------
AttributeError
Traceback (most recent call last)
<ipython-input-9-5e4d40142f30> in <module>
----> 1 dataset = load_dataset('wikihow',data_dir='./wikihow')
~/.local/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)
745 try_from_hf_gcs=try_from_hf_gcs,
746 base_path=base_path,-->
747 use_auth_token=use_auth_token,
748 )
749
~/.local/lib/python3.6/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
577 if not downloaded_from_gcs:
578 self._download_and_prepare( -->
579 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
580 )
581 # Sync info
~/.local/lib/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
632 split_dict = SplitDict(dataset_name=self.name)
633 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) -->
634 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
635
636 # Checksums verification
~/.cache/huggingface/modules/datasets_modules/datasets/wikihow/58f42f8f0e4d459811a0f69aaab35870093830ccd58006769e7e1eb3e0e686c2/wikihow.py in _split_generators(self, dl_manager)
132
133 path_to_manual_file = os.path.join(
--> 134 os.path.abspath(os.path.expanduser(dl_manager.manual_dir)), self.config.filename
135 )
136
AttributeError: 'BuilderConfig' object has no attribute 'filename'
```
## Versions
Paste the output of the following code:
```python
import datasets
import sys
import platform
print(f"""
- Datasets: {datasets.__version__}
- Python: {sys.version}
- Platform: {platform.platform()}
""")
```
```
- Datasets: 1.5.0
- Python: 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
- Platform: Linux-5.4.0-1046-azure-x86_64-with-Ubuntu-18.04-bionic
``` | CLOSED | 2021-04-19T21:02:31 | 2021-04-20T16:33:11 | 2021-04-20T16:33:11 | https://github.com/huggingface/datasets/issues/2239 | odellus | 4 | [
"bug"
] |
2,237 | Update Dataset.dataset_size after transformed with map | After loading a dataset, if we transform it by using `.map` its `dataset_size` attirbute is not updated. | OPEN | 2021-04-19T15:19:38 | 2021-04-20T14:22:05 | null | https://github.com/huggingface/datasets/issues/2237 | albertvillanova | 1 | [
"enhancement"
] |
2,236 | Request to add StrategyQA dataset | ## Request to add StrategyQA dataset
- **Name:** StrategyQA
- **Description:** open-domain QA [(project page)](https://allenai.org/data/strategyqa)
- **Paper:** [url](https://arxiv.org/pdf/2101.02235.pdf)
- **Data:** [here](https://allenai.org/data/strategyqa)
- **Motivation:** uniquely-formulated dataset that also includes a question-decomposition breakdown and associated Wikipedia annotations for each step. Good for multi-hop reasoning modeling.
| OPEN | 2021-04-19T14:46:26 | 2021-04-19T14:46:26 | null | https://github.com/huggingface/datasets/issues/2236 | sarahwie | 0 | [
"dataset request"
] |
2,230 | Keys yielded while generating dataset are not being checked | The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for any of these, as evident from `xnli' dataset generation:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196
Even after having a tuple as key, the dataset is generated without any warning.
Also, as tested in the case of `anli` dataset (I tweeked the dataset script to use `1` as a key for every example):
```
>>> import datasets
>>> nik = datasets.load_dataset('anli')
Downloading and preparing dataset anli/plain_text (download: 17.76 MiB, generated: 73.55 MiB, post-processed: Unknown size, total: 91.31 MiB) to C:\Users\nikhil\.cache\huggingface\datasets\anli\plain_text\0.1.0\43fa2c99c10bf8478f1fa0860f7b122c6b277c4c41306255b7641257cf4e3299...
0 examples [00:00, ? examples/s]1 {'uid': '0fd0abfb-659e-4453-b196-c3a64d2d8267', 'premise': 'The Parma trolleybus system (Italian: "Rete filoviaria di Parma" ) forms part of the public transport network of the city and "comune" of Parma, in the region of Emilia-Romagna, northern Italy. In operation since 1953, the system presently comprises four urban routes.', 'hypothesis': 'The trolleybus system has over 2 urban routes', 'label': 'entailment', 'reason': ''}
2021-04-16 12:38:14.483968: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
1 examples [00:01, 1.87s/ examples]1 {'uid': '7ed72ff4-40b7-4f8a-b1b9-6c612aa62c84', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': "Sharron Macready was a popular character through the 1980's.", 'label': 'neutral', 'reason': ''}
1 {'uid': '5d2930a3-62ac-485d-94d7-4e36cbbcd7b5', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': "Bastedo didn't keep any pets because of her views on animal rights.", 'label': 'neutral', 'reason': ''}
1 {'uid': '324db753-ddc9-4a85-a825-f09e2e5aebdd', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': 'Alexandra Bastedo was named by her mother.', 'label': 'neutral', 'reason': ''}
1 {'uid': '4874f429-da0e-406a-90c7-22240ff3ddf8', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': 'Bastedo cared for all the animals that inhabit the earth.', 'label': 'neutral', 'reason': ''}
```
Here also, the dataset was generated successfuly even hough it had same keys without any warning.
The reason appears to stem from here:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/src/datasets/builder.py#L988
Here, although it has access to every key, but it is not being checked and the example is written directly:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/src/datasets/builder.py#L992
I would like to take this issue if you allow me. Thank You! | CLOSED | 2021-04-16T13:29:47 | 2021-05-10T17:31:21 | 2021-05-10T17:31:21 | https://github.com/huggingface/datasets/issues/2230 | NikhilBartwal | 9 | [
"enhancement"
] |
2,229 | `xnli` dataset creating a tuple key while yielding instead of `str` or `int` | When using `ds = datasets.load_dataset('xnli', 'ar')`, the dataset generation script uses the following section of code in the egging, which yields a tuple key instead of the specified `str` or `int` key:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196
Since, community datasets in Tensorflow Datasets also use HF datasets, this causes a Tuple key error while loading HF's `xnli` dataset.
I'm up for sending a fix for this, I think we can simply use `file_idx + "_" + row_idx` as a unique key instead of a tuple. | CLOSED | 2021-04-16T13:21:53 | 2021-04-19T08:56:42 | 2021-04-19T08:56:42 | https://github.com/huggingface/datasets/issues/2229 | NikhilBartwal | 2 | [] |
2,226 | Batched map fails when removing all columns | Hi @lhoestq ,
I'm hijacking this issue, because I'm currently trying to do the approach you recommend:
> Currently the optimal setup for single-column computations is probably to do something like
>
> ```python
> result = dataset.map(f, input_columns="my_col", remove_columns=dataset.column_names)
> ```
Here is my code: (see edit, in which I added a simplified version
```
This is the error:
```bash
pyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 8964 but got length 1000
```
I wonder why this error occurs, when I delete every column? Can you give me a hint?
### Edit:
I preprocessed my dataset before (using map with the features argument) and saved it to disk. May this be part of the error? I can iterate over the
complete dataset and print every sample before calling map. There seems to be no other problem with the dataset.
I tried to simplify the code that crashes:
```python
# works
log.debug(dataset.column_names)
log.debug(dataset)
for i, sample in enumerate(dataset):
log.debug(i, sample)
# crashes
counted_dataset = dataset.map(
lambda x: {"a": list(range(20))},
input_columns=column,
remove_columns=dataset.column_names,
load_from_cache_file=False,
num_proc=num_workers,
batched=True,
)
```
```
pyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 20 but got length 1000
```
Edit2:
May this be a problem with a schema I set when preprocessing the dataset before? I tried to add the `features` argument to the function and then I get a new error:
```python
# crashes
counted_dataset = dataset.map(
lambda x: {"a": list(range(20))},
input_columns=column,
remove_columns=dataset.column_names,
load_from_cache_file=False,
num_proc=num_workers,
batched=True,
features=datasets.Features(
{
"a": datasets.Sequence(datasets.Value("int32"))
}
)
)
```
```
File "env/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1704, in _map_single
writer.write_batch(batch)
File "env/lib/python3.8/site-packages/datasets/arrow_writer.py", line 312, in write_batch
col_type = schema.field(col).type if schema is not None else None
File "pyarrow/types.pxi", line 1341, in pyarrow.lib.Schema.field
KeyError: 'Column tokens does not exist in schema'
```
_Originally posted by @villmow in https://github.com/huggingface/datasets/issues/2193#issuecomment-820230874_ | CLOSED | 2021-04-16T11:17:01 | 2022-10-05T17:32:15 | 2022-10-05T17:32:15 | https://github.com/huggingface/datasets/issues/2226 | villmow | 3 | [
"bug"
] |
2,224 | Raise error if Windows max path length is not disabled | On startup, raise an error if Windows max path length is not disabled; ask the user to disable it.
Linked to discussion in #2220. | OPEN | 2021-04-14T14:57:20 | 2021-04-14T14:59:13 | null | https://github.com/huggingface/datasets/issues/2224 | albertvillanova | 0 | [] |
2,218 | Duplicates in the LAMA dataset | I observed duplicates in the LAMA probing dataset, see a minimal code below.
```
>>> import datasets
>>> dataset = datasets.load_dataset('lama')
No config specified, defaulting to: lama/trex
Reusing dataset lama (/home/anam/.cache/huggingface/datasets/lama/trex/1.1.0/97deffae13eca0a18e77dfb3960bb31741e973586f5c1fe1ec0d6b5eece7bddc)
>>> train_dataset = dataset['train']
>>> train_dataset[0]
{'description': 'language or languages a person has learned from early childhood', 'label': 'native language', 'masked_sentence': 'Louis Jules Trochu ([lwi ʒyl tʁɔʃy]; 12 March 1815 – 7 October 1896) was a [MASK] military leader and politician.', 'obj_label': 'French', 'obj_surface': 'French', 'obj_uri': 'Q150', 'predicate_id': 'P103', 'sub_label': 'Louis Jules Trochu', 'sub_surface': 'Louis Jules Trochu', 'sub_uri': 'Q441235', 'template': 'The native language of [X] is [Y] .', 'template_negated': '[X] is not owned by [Y] .', 'type': 'N-1', 'uuid': '40b2ed1c-0961-482e-844e-32596b6117c8'}
>>> train_dataset[1]
{'description': 'language or languages a person has learned from early childhood', 'label': 'native language', 'masked_sentence': 'Louis Jules Trochu ([lwi ʒyl tʁɔʃy]; 12 March 1815 – 7 October 1896) was a [MASK] military leader and politician.', 'obj_label': 'French', 'obj_surface': 'French', 'obj_uri': 'Q150', 'predicate_id': 'P103', 'sub_label': 'Louis Jules Trochu', 'sub_surface': 'Louis Jules Trochu', 'sub_uri': 'Q441235', 'template': 'The native language of [X] is [Y] .', 'template_negated': '[X] is not owned by [Y] .', 'type': 'N-1', 'uuid': '40b2ed1c-0961-482e-844e-32596b6117c8'}
```
I checked the original data available at https://dl.fbaipublicfiles.com/LAMA/data.zip. This particular duplicated comes from:
```
{"uuid": "40b2ed1c-0961-482e-844e-32596b6117c8", "obj_uri": "Q150", "obj_label": "French", "sub_uri": "Q441235", "sub_label": "Louis Jules Trochu", "predicate_id": "P103", "evidences": [{"sub_surface": "Louis Jules Trochu", "obj_surface": "French", "masked_sentence": "Louis Jules Trochu ([lwi \u0292yl t\u0281\u0254\u0283y]; 12 March 1815 \u2013 7 October 1896) was a [MASK] military leader and politician."}, {"sub_surface": "Louis Jules Trochu", "obj_surface": "French", "masked_sentence": "Louis Jules Trochu ([lwi \u0292yl t\u0281\u0254\u0283y]; 12 March 1815 \u2013 7 October 1896) was a [MASK] military leader and politician."}]}
```
What is the best way to deal with these duplicates if I want to use `datasets` to probe with LAMA? | OPEN | 2021-04-13T18:59:49 | 2021-04-14T21:42:27 | null | https://github.com/huggingface/datasets/issues/2218 | amarasovic | 3 | [] |
2,214 | load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings' | I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.
```python
>>> from datasets import load_metric
>>> metric = load_metric("glue", "sst2")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/ext3/miniconda3/lib/python3.8/site-packages/datasets-1.2.1-py3.8.egg/datasets/load.py", line 502, in load_metric
File "/ext3/miniconda3/lib/python3.8/site-packages/datasets-1.2.1-py3.8.egg/datasets/load.py", line 66, in import_main_class
File "/ext3/miniconda3/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/ns4008/.cache/huggingface/modules/datasets_modules/metrics/glue/e4606ab9804a36bcd5a9cebb2cb65bb14b6ac78ee9e6d5981fa679a495dd55de/glue.py", line 105, in <module>
@datasets.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
AttributeError: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings'
``` | CLOSED | 2021-04-12T20:26:01 | 2021-04-23T15:20:02 | 2021-04-23T15:20:02 | https://github.com/huggingface/datasets/issues/2214 | nsaphra | 4 | [
"bug"
] |
2,212 | Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset | I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data configuration default
Downloading and preparing dataset fquad/default (download: 3.14 MiB, generated: 6.62 MiB, post-processed: Unknown size, total: 9.76 MiB) to /root/.cache/huggingface/datasets/fquad/default/0.1.0/778dc2c85813d05ddd0c17087294d5f8f24820752340958070876b677af9f061...
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
<ipython-input-48-a2721797e23b> in <module>()
----> 1 fquad = load_dataset("fquad")
11 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
614 raise FileNotFoundError("Couldn't find file at {}".format(url))
615 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
--> 616 raise ConnectionError("Couldn't reach {}".format(url))
617
618 # Try a second time
ConnectionError: Couldn't reach https://storage.googleapis.com/illuin/fquad/train.json.zip
```
Does anyone know why that is and how to fix it? | CLOSED | 2021-04-12T13:49:56 | 2023-10-03T16:09:19 | 2023-10-03T16:09:18 | https://github.com/huggingface/datasets/issues/2212 | hanss0n | 5 | [] |
2,211 | Getting checksum error when trying to load lc_quad dataset | I'm having issues loading the [lc_quad](https://huggingface.co/datasets/fquad) dataset by running:
```Python
lc_quad = load_dataset("lc_quad")
```
which is giving me the following error:
```
Using custom data configuration default
Downloading and preparing dataset lc_quad/default (download: 3.69 MiB, generated: 19.77 MiB, post-processed: Unknown size, total: 23.46 MiB) to /root/.cache/huggingface/datasets/lc_quad/default/2.0.0/5a98fe174603f5dec6df07edf1c2b4d2317210d2ad61f5a393839bca4d64e5a7...
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-42-404ace83f73c> in <module>()
----> 1 lc_quad = load_dataset("lc_quad")
3 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
37 if len(bad_urls) > 0:
38 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls))
40 logger.info("All the checksums matched successfully" + for_verification_name)
41
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/AskNowQA/LC-QuAD2.0/archive/master.zip']
```
Does anyone know why this could be and how I can fix it? | CLOSED | 2021-04-12T13:38:58 | 2021-04-14T13:42:25 | 2021-04-14T13:42:25 | https://github.com/huggingface/datasets/issues/2211 | hanss0n | 2 | [] |
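Two things that are often worth trying for checksum mismatches (hedged suggestions, not a confirmed fix for `lc_quad`): force a fresh download in case a partially downloaded archive was cached, or skip verification if the upstream archive has legitimately changed:
```python
from datasets import load_dataset

# re-download instead of reusing possibly corrupted cached files
lc_quad = load_dataset("lc_quad", download_mode="force_redownload")

# or, if the source archive was updated and the recorded checksum is simply stale
lc_quad = load_dataset("lc_quad", ignore_verifications=True)
```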
2,210 | dataloading slow when using HUGE dataset | Hi,
When I use datasets with 600GB of data, dataloading becomes significantly slower.
I am experimenting with two datasets: one is about 60GB and the other 600GB.
Simply speaking, my code uses the `datasets.set_format("torch")` function and lets pytorch-lightning handle DDP training.
When looking at the pytorch-lightning profiler output of the two runs, I see that fetching a batch (`get_train_batch`) consumes an unreasonable amount of time when the data is large. What could be the cause?
* 60GB data
```
Action | Mean duration (s) |Num calls | Total time (s) | Percentage % |
------------------------------------------------------------------------------------------------------------------------------------
Total | - |_ | 200.33 | 100 % |
------------------------------------------------------------------------------------------------------------------------------------
run_training_epoch | 71.994 |1 | 71.994 | 35.937 |
run_training_batch | 0.64373 |100 | 64.373 | 32.133 |
optimizer_step_and_closure_0 | 0.64322 |100 | 64.322 | 32.108 |
training_step_and_backward | 0.61004 |100 | 61.004 | 30.452 |
model_backward | 0.37552 |100 | 37.552 | 18.745 |
model_forward | 0.22813 |100 | 22.813 | 11.387 |
training_step | 0.22759 |100 | 22.759 | 11.361 |
get_train_batch | 0.066385 |100 | 6.6385 | 3.3138 |
```
* 600GB data
```
Action | Mean duration (s) |Num calls | Total time (s) | Percentage % |
------------------------------------------------------------------------------------------------------------------------------------
Total | - |_ | 3285.6 | 100 % |
------------------------------------------------------------------------------------------------------------------------------------
run_training_epoch | 1397.9 |1 | 1397.9 | 42.546 |
run_training_batch | 7.2596 |100 | 725.96 | 22.095 |
optimizer_step_and_closure_0 | 7.2589 |100 | 725.89 | 22.093 |
training_step_and_backward | 7.223 |100 | 722.3 | 21.984 |
model_backward | 6.9662 |100 | 696.62 | 21.202 |
get_train_batch | 6.322 |100 | 632.2 | 19.241 |
model_forward | 0.24902 |100 | 24.902 | 0.75789 |
training_step | 0.2485 |100 | 24.85 | 0.75633 |
```
| CLOSED | 2021-04-12T08:33:02 | 2021-04-13T02:03:05 | 2021-04-13T02:03:05 | https://github.com/huggingface/datasets/issues/2210 | hwijeen | 2 | [] |
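To narrow down where the time goes, one option is to time batch fetching outside of pytorch-lightning with a plain `DataLoader`. This is a rough sketch; the dataset path and column names are placeholders:
```python
import time

from datasets import load_from_disk
from torch.utils.data import DataLoader

ds = load_from_disk("/path/to/my_600gb_dataset")  # hypothetical path to a single split
ds.set_format("torch", columns=["input_ids", "attention_mask"])  # hypothetical columns

loader = DataLoader(ds, batch_size=32, num_workers=4)
start = time.time()
for i, _batch in enumerate(loader):
    if i == 100:
        break
print(f"average time per batch: {(time.time() - start) / 100:.4f}s")
```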
2,207 | making labels consistent across the datasets | Hi
For accessing the labels one can type
```
>>> a.features['label']
ClassLabel(num_classes=3, names=['entailment', 'neutral', 'contradiction'], names_file=None, id=None)
```
However, the labels are sometimes not consistent with the actual labels. For instance, in the case of XNLI the actual labels are 0, 1, 2, but if one tries to access them as above they come out as entailment, neutral, contradiction.
It would be great to have the labels consistent.
thanks
| CLOSED | 2021-04-11T10:03:56 | 2022-06-01T16:23:08 | 2022-06-01T16:21:10 | https://github.com/huggingface/datasets/issues/2207 | dorost1234 | 2 | [] |
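For reference, the `ClassLabel` feature provides converters between the stored integers and the label names; a minimal sketch (the XNLI config choice is only for illustration):
```python
from datasets import load_dataset

xnli = load_dataset("xnli", "en", split="validation")
label_feature = xnli.features["label"]

first_label = xnli[0]["label"]                 # stored as an integer, e.g. 1
print(label_feature.int2str(first_label))      # e.g. "neutral"
print(label_feature.str2int("entailment"))     # 0
```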
2,206 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer | I added five more special tokens to the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I get the error shown below:
```
Traceback (most recent call last):
  File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_single
    writer.write(example)
  File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 296, in write
    self.write_on_file()
  File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 270, in write_on_file
    pa_array = pa.array(typed_sequence)
  File "pyarrow/array.pxi", line 222, in pyarrow.lib.array
  File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
  File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 108, in __arrow_array__
    out = out.cast(pa.list_(self.optimized_int_type))
  File "pyarrow/array.pxi", line 810, in pyarrow.lib.Array.cast
  File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/pyarrow/compute.py", line 281, in cast
    return call_function("cast", [arr], options)
  File "pyarrow/_compute.pyx", line 465, in pyarrow._compute.call_function
  File "pyarrow/_compute.pyx", line 294, in pyarrow._compute.Function.call
  File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Integer value 50259 not in range: -128 to 127
```
Do you have any idea about it? | CLOSED | 2021-04-11T08:40:09 | 2021-11-10T12:18:30 | 2021-11-10T12:04:28 | https://github.com/huggingface/datasets/issues/2206 | yana-xuyan | 7 | [
"bug"
] |
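For context, the failing cast in the traceback is to an 8-bit integer range (`-128 to 127`), while `50259` is one of the newly added token ids. A sketch of how such ids appear (the added tokens below are made-up placeholders, not the ones from the issue):
```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.add_special_tokens({"additional_special_tokens": ["<speaker1>", "<speaker2>"]})  # hypothetical tokens

print(len(tokenizer))                                 # > 50257, so ids such as 50258 and 50259 now exist
print(tokenizer.convert_tokens_to_ids("<speaker2>"))  # one of the ids the writer fails to cast
```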
2,200 | _prepare_split will overwrite DatasetBuilder.info.features | Hi, here is my issue:
I initialized a Csv dataset builder with specific features:
```
def get_dataset_features(data_args):
features = {}
if data_args.text_features:
features.update({text_feature: hf_features.Value("string") for text_feature in data_args.text_features.strip().split(",")})
if data_args.num_features:
features.update({text_feature: hf_features.Value("float32") for text_feature in data_args.num_features.strip().split(",")})
if data_args.label_classes:
features["label"] = hf_features.ClassLabel(names=data_args.label_classes.strip().split(","))
else:
features["label"] = hf_features.Value("float32")
return hf_features.Features(features)
datasets = load_dataset(extension,
data_files=data_files,
sep=data_args.delimiter,
header=data_args.header,
column_names=data_args.column_names.split(",") if data_args.column_names else None,
features=get_dataset_features(data_args=data_args))
```
The `features` are printed out as below before `builder_instance.as_dataset` is called:
```
{'label': ClassLabel(num_classes=2, names=['unacceptable', 'acceptable'], names_file=None, id=None), 'notated': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'src_code': Value(dtype='string', id=None)}
```
But after `builder_instance.as_dataset` is called for the Csv dataset builder, the `features` are changed to:
```
{'label': Value(dtype='int64', id=None), 'notated': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'src_code': Value(dtype='string', id=None)}
```
After digging into the code, I realized that in `ArrowBasedBuilder._prepare_split`, the DatasetBuilder's info features are overwritten by the `ArrowWriter`'s `_features`.
But `ArrowWriter` is initialized without passing `features`.
So my concern is:
Must this overwrite happen, or should there be an option to pass `features` to the `_prepare_split` function? | CLOSED | 2021-04-09T11:47:13 | 2021-06-04T10:37:35 | 2021-06-04T10:37:35 | https://github.com/huggingface/datasets/issues/2200 | Gforky | 2 | [] |
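A possible workaround sketch (an assumption, not the resolution of this issue): re-cast the loaded split back to the intended features after `load_dataset`, so the `label` column regains its `ClassLabel` type:
```python
from datasets import ClassLabel, Features, Value, load_dataset

features = Features({
    "label": ClassLabel(names=["unacceptable", "acceptable"]),
    "sentence": Value("string"),
})
ds = load_dataset("csv", data_files={"train": "train.csv"})  # hypothetical file with exactly these two columns
ds["train"] = ds["train"].cast(features)  # works if the csv stores the labels as 0/1 integers
```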
2,196 | `load_dataset` caches two arrow files? | Hi,
I am using datasets to load a large json file of 587G.
I checked the cache folder and found that two arrow files were created:
* `cache-ed205e500a7dc44c.arrow` - 355G
* `json-train.arrow` - 582G
Why is the first file created?
If I delete it, would I still be able to `load_from_disk`? | CLOSED | 2021-04-09T03:49:19 | 2021-04-12T05:25:29 | 2021-04-12T05:25:29 | https://github.com/huggingface/datasets/issues/2196 | hwijeen | 3 | [
"question"
] |
2,195 | KeyError: '_indices_files' in `arrow_dataset.py` | After pulling the latest master, I'm getting a crash when `load_from_disk` tries to load my local dataset.
Trace:
```
Traceback (most recent call last):
File "load_data.py", line 11, in <module>
dataset = load_from_disk(SRC)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/load.py", line 784, in load_from_disk
return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py", line 692, in load_from_disk
dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 634, in load_from_disk
if state["_indices_files"]:
KeyError: '_indices_files'
```
I believe this is the line causing the error since there may not be a `_indices_files` key in the older versions:
https://github.com/huggingface/datasets/blob/b70141e3c5149430951773aaa0155555c5fb3e76/src/datasets/arrow_dataset.py#L634
May I suggest using `state.get()` instead of directly indexing the dictionary?
@lhoestq | CLOSED | 2021-04-09T01:37:12 | 2021-04-09T09:55:09 | 2021-04-09T09:54:39 | https://github.com/huggingface/datasets/issues/2195 | samsontmr | 2 | [
"bug"
] |
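A minimal sketch of the suggested change (hypothetical, not the actual patch): older serialized states may simply lack the `_indices_files` key, so `dict.get()` avoids the `KeyError`:
```python
# simulate a state dict saved by an older version of `datasets`
state = {"_data_files": [{"filename": "dataset.arrow"}]}

indices_files = state.get("_indices_files")  # returns None instead of raising KeyError
if indices_files:
    print("loading indices from", indices_files)
else:
    print("no indices files recorded; loading the dataset without an indices mapping")
```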
2,194 | py3.7: TypeError: can't pickle _LazyModule objects | While this works fine with py3.8, under py3.7, with a totally new conda env and transformers install:
```
git clone https://github.com/huggingface/transformers
cd transformers
pip install -e .[testing]
export BS=1; rm -rf /tmp/test-clm; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python \
examples/language-modeling/run_clm.py --model_name_or_path distilgpt2 --dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 --do_train --max_train_samples 1 \
--per_device_train_batch_size $BS --output_dir /tmp/test-clm --block_size 128 --logging_steps 1 \
--fp16
```
```
Traceback (most recent call last):
File "examples/language-modeling/run_clm.py", line 453, in <module>
main()
File "examples/language-modeling/run_clm.py", line 336, in main
load_from_cache_file=not data_args.overwrite_cache,
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in map
for k, dataset in self.items()
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in <dictcomp>
for k, dataset in self.items()
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1259, in map
update_data=update_data,
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 157, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 158, in wrapper
self._fingerprint, transform, kwargs_for_fingerprint
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 105, in update_fingerprint
hasher.update(transform_args[key])
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 57, in update
self.m.update(self.hash(value).encode("utf-8"))
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 53, in hash
return cls.hash_default(value)
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 46, in hash_default
return cls.hash_bytes(dumps(value))
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 389, in dumps
dump(obj, file)
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 361, in dump
Pickler(file, recurse=True).dump(obj)
File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump
StockPickler.dump(self, obj)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 437, in dump
self.save(obj)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 556, in save_function
obj=obj,
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 859, in save_dict
self._batch_setitems(obj.items())
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 885, in _batch_setitems
save(v)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 524, in save
rv = reduce(self.proto)
TypeError: can't pickle _LazyModule objects
```
```
$ python --version
Python 3.7.4
$ python -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 1.8.0.dev20210110+cu110
Is debug build: False
CUDA used to build PyTorch: 11.0
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
```
Thanks. | CLOSED | 2021-04-08T21:02:48 | 2021-04-09T16:56:50 | 2021-04-09T01:52:57 | https://github.com/huggingface/datasets/issues/2194 | stas00 | 1 | [] |
2,193 | Filtering/mapping on one column is very slow | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_columns=['num_tokens']`, it seems that the entirety of each row is loaded into memory, which makes the operation take much longer than it should. Indeed, `filter` currently just calls `map`, and I found that in `_map_single` on lines 1690-1704 of `arrow_dataset.py`, the method is just grabbing slices of _all the rows_ of the dataset and then passing only the specified columns to the map function. It seems that, when the user passes a value for `input_columns`, the `map` function should create a temporary pyarrow table by selecting just those columns, and then get slices from that table. Or something like that— I'm not very familiar with the pyarrow API.
I know that in the meantime I can sort of get around this by simply only returning the rows that match my filter criterion from the tokenizing function I pass to `map()`, but I actually _also_ want to map on just the `num_tokens` column in order to compute batches with a roughly uniform number of tokens per batch. I would also ideally like to be able to change my minimum and maximum article lengths without having to re-tokenize the entire dataset.
PS: This is definitely not a "dataset request." I'm realizing that I don't actually know how to remove labels from my own issues on other people's repos, if that is even possible. | CLOSED | 2021-04-08T18:16:14 | 2021-04-26T16:13:59 | 2021-04-26T16:13:59 | https://github.com/huggingface/datasets/issues/2193 | norabelrose | 12 | [
"question"
] |
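For concreteness, a sketch of the pattern being described, with a small slice and whitespace tokenization standing in for the real setup (the Wikipedia config name is only an example):
```python
from datasets import load_dataset

wiki = load_dataset("wikipedia", "20200501.en", split="train[:1%]")

def add_num_tokens(batch):
    batch["num_tokens"] = [len(text.split()) for text in batch["text"]]
    return batch

wiki = wiki.map(add_num_tokens, batched=True)

# `input_columns` hands only `num_tokens` to the predicate, but as described above
# the current implementation still slices full rows under the hood.
filtered = wiki.filter(lambda num_tokens: 64 <= num_tokens <= 1024, input_columns=["num_tokens"])
```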
2,190 | News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs | I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi.
```
from itertools import chain

from datasets import load_dataset

train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]')
val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]')

# filtering out examples that are not ar-en translations but ar-hi
val_ds = val_ds.filter(lambda example, indice: indice not in chain(range(1312, 1327), range(1384, 1399), range(1030, 1042)), with_indices=True)
```
* I'm fairly new to using datasets so I might be doing something wrong | CLOSED | 2021-04-08T07:53:43 | 2021-05-24T10:03:55 | 2021-05-24T10:03:55 | https://github.com/huggingface/datasets/issues/2190 | anassalamah | 2 | [] |
2,189 | save_to_disk doesn't work when we use concatenate_datasets function before creating the final dataset_object. | Even though only two of the twenty shards are concatenated in the example below, `save_to_disk` saves the entire original dataset.
@lhoestq
You can check by going through the following example,
```
from datasets import load_from_disk,concatenate_datasets
loaded_data=load_from_disk('/home/gsir059/HNSW-ori/my_knowledge_dataset')
n=20
kb_list=[loaded_data.shard(n, i, contiguous=True) for i in range(n)]
final_dataset=concatenate_datasets([kb_list[1],kb_list[2]])
final_dataset.save_to_disk('/home/gsir059/haha/k.arrow')
``` | CLOSED | 2021-04-08T04:42:53 | 2022-06-01T16:32:15 | 2022-06-01T16:32:15 | https://github.com/huggingface/datasets/issues/2189 | shamanez | 1 | [] |
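A possible workaround sketch, continuing from the snippet above (an assumption, not a confirmed resolution): materialize the indices mapping before saving so that only the selected shards are written out:
```python
# flatten_indices() rewrites the table to contain only the rows selected by the shards,
# so save_to_disk no longer serializes the full underlying dataset.
final_dataset = final_dataset.flatten_indices()
final_dataset.save_to_disk('/home/gsir059/haha/k.arrow')
```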
2,188 | Duplicate data in Timit dataset | I ran a simple piece of code to list all the texts in the Timit dataset, and the texts were all the same.
Is this dataset corrupted?
**Code:**
timit = load_dataset("timit_asr")
print(*timit['train']['text'], sep='\n')
**Result:**
Would such an act of refusal be useful?
Would such an act of refusal be useful?
Would such an act of refusal be useful?
Would such an act of refusal be useful?
...
...
Would such an act of refusal be useful? | CLOSED | 2021-04-08T04:21:54 | 2021-04-08T12:13:19 | 2021-04-08T12:13:19 | https://github.com/huggingface/datasets/issues/2188 | thanh-p | 2 | [] |
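A quick way to check whether the transcripts are genuinely duplicated rather than just displayed oddly (a sketch using the same loading call as in the issue):
```python
from datasets import load_dataset

timit = load_dataset("timit_asr", split="train")
texts = timit["text"]
print(len(texts), "rows,", len(set(texts)), "unique transcripts")
```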
2,187 | Question (potential issue?) related to datasets caching | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.builder - Using custom data configuration default-888a87931cbc5877
04/07/2021 18:34:42 - WARNING - datasets.builder - Reusing dataset csv (xxxx/cache-transformers/datasets/csv/default-888a87931cbc5877/0.0.0/965b6429be0fc05f975b608ce64e1fa941cc8fb4f30629b523d2390f3c0e1a93
```
Can you please let me know what this reusing dataset csv means? I wouldn't expect any reusing with the datasets caching disabled. Thank you! | OPEN | 2021-04-08T00:16:28 | 2023-01-03T18:30:38 | null | https://github.com/huggingface/datasets/issues/2187 | ioana-blue | 15 | [
"question"
] |
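If the intent is to also avoid reusing the prepared csv files, and not only the `map`/`filter` caches that `set_caching_enabled(False)` covers, one option to try is forcing a re-preparation. A sketch with a placeholder file name:
```python
from datasets import load_dataset

dataset = load_dataset(
    "csv",
    data_files={"train": "train.csv"},  # hypothetical file
    download_mode="force_redownload",   # re-prepare instead of reusing the cached arrow files
)
```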
2,185 | .map() and distributed training | Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_path=my_path)
[...]
def tokenize_function(examples):
return tokenizer(examples[text_column_name])
logger.info("Mapping dataset to tokenized dataset.")
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
num_proc=preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=True,
)
```
I am using 31 workers (`preprocessing_num_workers=31`) and thus it creates 31 `cache*.arrow` files in `my_path/train` (there is only a train split).
When I relaunch the script, the tokenization map is skipped in favor of loading the 31 previously cached files, and that's perfect.
Everything so far was done by launching a **single process script**.
I now launch the same training script in **distributed mode** (`python -m torch.distributed.launch --nproc_per_node 2`). However, once it reaches the map call, it re-does the tokenization... instead of loading the 31 cached files.
I tried adding the `cache_file_name` argument: `cache_file_name={"train": my_path/one_of_the_arrow_file}`, but I can't give the 31 cached files, so it probably isn't the right way to do it.
**My question: what is the best way to load cached files if they were pre-processed and dumped in multiple arrow files?** It seems automatically handled for single processes but fails on distributed training.
- I am following the same structure as the examples of transformers (more specifically [run_clm.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py) in my case)
- I am using 1.5.0 version of datasets if that matters. | CLOSED | 2021-04-07T18:22:14 | 2021-10-23T07:11:15 | 2021-04-09T15:38:31 | https://github.com/huggingface/datasets/issues/2185 | VictorSanh | 8 | [] |
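One pattern that is often used in this situation, sketched below by reusing the variables from the snippet above (an assumption about a workable approach, not necessarily the fix adopted here): let rank 0 run the preprocessing first so the cache files exist, then let the other ranks hit the cache after a barrier:
```python
import torch.distributed as dist

is_distributed = dist.is_available() and dist.is_initialized()
if is_distributed and dist.get_rank() != 0:
    dist.barrier()  # non-zero ranks wait here while rank 0 tokenizes and writes the cache

tokenized_datasets = datasets.map(
    tokenize_function,
    batched=True,
    num_proc=preprocessing_num_workers,
    remove_columns=column_names,
    load_from_cache_file=True,
)

if is_distributed and dist.get_rank() == 0:
    dist.barrier()  # rank 0 releases the other ranks, which now load the cached arrow files
```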
2,181 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries) | Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and I am now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 531, in incomplete_dir
yield tmp_dir
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 573, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 650, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 1027, in _prepare_split
for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/tqdm/std.py", line 1133, in __iter__
for obj in iterable:
File "/app/.cache/huggingface/modules/datasets_modules/datasets/json/9498524fd296a6cca99c66d6c5be507d1c0991f5a814e535b507f4a66096a641/json.py", line 83, in _generate_tables
parse_options=self.config.pa_parse_options,
File "pyarrow/_json.pyx", line 247, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)
```
When using only a small portion of the sample file, say the first 100 lines, it works perfectly well.
I see that it is the error from pyarrow, but could you give me a hint or possible solutions?
#369 describes the same error and #372 claims to have fixed the issue, but I have no clue why I am still getting this one. Thanks in advance! | CLOSED | 2021-04-07T10:26:46 | 2021-04-12T07:15:55 | 2021-04-12T07:15:55 | https://github.com/huggingface/datasets/issues/2181 | hwijeen | 9 | [] |
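Regarding the "try to increase block size" hint in the error, a sketch of what that looks like with pyarrow's JSON reader directly (the block size and file path are illustrative); depending on the `datasets` version, the packaged `json` loader may expose a similar read-options knob:
```python
import pyarrow.json as paj

read_options = paj.ReadOptions(block_size=1 << 28)  # 256 MiB blocks instead of the default
table = paj.read_json("sample.json", read_options=read_options)  # hypothetical path
print(table.num_rows)
```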