| number (int64, 2 to 7.91k) | title (string, 1 to 290 chars) | body (string, 0 to 228k chars) | state (2 classes) | created_at (timestamp[s], 2020-04-14 to 2025-12-16) | updated_at (timestamp[s], 2020-04-29 to 2025-12-16) | closed_at (timestamp[s], 2020-04-29 to 2025-12-16) | url (string, 48 to 51 chars) | author (string, 3 to 26 chars) | comments_count (int64, 0 to 70) | labels (list, 0 to 4 items) |
|---|---|---|---|---|---|---|---|---|---|---|
3,172
|
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1`
|
## Describe the bug
I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`). Traceback included below.
The exception is raised only when the code runs within a specific context. Despite ~10h spent investigating this issue, I have failed to isolate the bug, so let me describe my setup.
In my project, `Dataset` is wrapped into a `LightningDataModule` and the data is preprocessed when calling `LightningDataModule.setup()`. Calling `.setup()` in an isolated script works fine (even when wrapped with `hydra.main()`). However, when calling `.setup()` within the experiment script (which depends on `pytorch_lightning`), the script crashes with `SystemError 15`.
I could avoid this error by modifying `Dataset.__del__()` (see below), but I believe this only moves the problem somewhere else. I am completely stuck with this issue; any hint would be welcome.
```python
class Dataset:
    ...
    def __del__(self):
        if hasattr(self, "_data"):
            _ = self._data  # <- ugly trick that allows avoiding the issue.
            del self._data
        if hasattr(self, "_indices"):
            del self._indices
```
## Steps to reproduce the bug
```python
# Unfortunately I couldn't isolate the bug.
```
## Expected results
Calling `Dataset.map()` without throwing an exception. Or at least raising a more detailed exception/traceback.
## Actual results
```
Exception ignored in: <function Dataset.__del__ at 0x7f7cec179160>███████████████████████████████████████████████████| 5/5 [00:05<00:00, 1.17ba/s]
Traceback (most recent call last):
File ".../python3.8/site-packages/datasets/arrow_dataset.py", line 906, in __del__
del self._data
File ".../python3.8/site-packages/ray/worker.py", line 1033, in sigterm_handler
sys.exit(signum)
SystemExit: 15
```
## Environment info
Tested on 2 environments:
**Environment 1.**
- `datasets` version: 1.14.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.8
- PyArrow version: 6.0.0
**Environment 2.**
- `datasets` version: 1.14.0
- Platform: Linux-4.18.0-305.19.1.el8_4.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.7
- PyArrow version: 6.0.0
|
CLOSED
| 2021-10-28T10:29:00
| 2024-04-02T18:13:21
| 2021-11-03T11:26:10
|
https://github.com/huggingface/datasets/issues/3172
|
vlievin
| 12
|
[
"bug"
] |
3,171
|
Raise exceptions instead of using assertions for control flow
|
Motivated by https://github.com/huggingface/transformers/issues/12789 in Transformers, one welcome change would be replacing assertions with proper exceptions. The only assertions we should keep are those used as sanity checks.
Currently, there is a total of 87 files with `assert` statements (located under `datasets` and `src/datasets`), so when working on this, to keep the PR size manageable, modify at most 4-5 files before submitting a PR.
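A minimal sketch of the kind of change being asked for; the function and message below are made up for illustration and are not taken from the codebase:
```python
# Before: an assertion used for control flow. It vanishes under `python -O`
# and only ever raises a bare AssertionError with no useful type.
def check_columns(columns):
    assert isinstance(columns, list), "columns must be a list"

# After: an explicit exception with a specific type and message.
def check_columns(columns):
    if not isinstance(columns, list):
        raise TypeError(f"columns must be a list, got {type(columns)}")
```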
|
CLOSED
| 2021-10-27T18:26:52
| 2021-12-23T16:40:37
| 2021-12-23T16:40:37
|
https://github.com/huggingface/datasets/issues/3171
|
mariosasko
| 4
|
[
"good first issue"
] |
3,168
|
OpenSLR/83 is empty
|
## Describe the bug
As the summary says, openslr / SLR83 / train is empty.
The dataset returned after loading indicates there are **zero** rows. The correct number should be **17877**.
## Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('openslr', 'SLR83')
```
## Expected results
```
DatasetDict({
train: Dataset({
features: ['path', 'audio', 'sentence'],
num_rows: 17877
})
})
```
## Actual results
```
DatasetDict({
train: Dataset({
features: ['path', 'audio', 'sentence'],
num_rows: 0
})
})
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.14.1.dev0 (master HEAD)
- Platform: Ubuntu 20.04
- Python version: 3.7.10
- PyArrow version: 3.0.0
|
CLOSED
| 2021-10-26T19:42:21
| 2021-10-29T10:04:09
| 2021-10-29T10:04:09
|
https://github.com/huggingface/datasets/issues/3168
|
tyrius02
| 3
|
[
"bug"
] |
3,167
|
bookcorpusopen no longer works
|
## Describe the bug
When using the latest version of datasets (1.14.0), I cannot use the `bookcorpusopen` dataset. The process always blocks around `9924 examples [00:06, 1439.61 examples/s]` when preparing the dataset. I also noticed that after half an hour the process is automatically killed because of the RAM usage (the machine has 1TB of RAM...).
This did not happen with 1.4.1.
I also tried `rm -rf ~/.cache/huggingface`, but it did not help.
Changing the Python version between 3.7, 3.8 and 3.9 did not help either.
## Steps to reproduce the bug
```python
import datasets
d = datasets.load_dataset('bookcorpusopen')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.14.0
- Platform: Linux-5.4.0-1054-aws-x86_64-with-glibc2.27
- Python version: 3.9.7
- PyArrow version: 4.0.1
|
CLOSED
| 2021-10-26T16:06:15
| 2021-11-17T15:53:46
| 2021-11-17T15:53:46
|
https://github.com/huggingface/datasets/issues/3167
|
lucadiliello
| 3
|
[
"bug"
] |
3,165
|
Deprecate prepare_module
|
In version 1.13, `prepare_module` was deprecated.
Add a deprecation warning and remove its usage from the whole library.
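For reference, a sketch of the usual deprecation pattern, assuming `prepare_module` keeps its signature while warning callers (the warning category and message are illustrative):
```python
import warnings

def prepare_module(path, *args, **kwargs):
    # Tell callers the function is deprecated before delegating to the old logic.
    warnings.warn(
        "prepare_module is deprecated and will be removed in a future version.",
        FutureWarning,
        stacklevel=2,
    )
    ...  # existing implementation
```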
|
CLOSED
| 2021-10-26T15:27:15
| 2021-11-05T09:27:36
| 2021-11-05T09:27:36
|
https://github.com/huggingface/datasets/issues/3165
|
albertvillanova
| 0
|
[] |
3,164
|
Add raw data files to the Hub with GitHub LFS for canonical dataset
|
I'm interested in sharing the CaseHOLD dataset (https://arxiv.org/abs/2104.08671) as a canonical dataset on the HuggingFace Hub and would like to add the raw data files to the Hub with GitHub LFS, since it seems like a more sustainable long term storage solution, compared to other storage solutions available to my team. From what I can tell, this option is not immediately supported if one follows the sharing steps detailed here: [https://huggingface.co/docs/datasets/share_dataset.html#sharing-a-canonical-dataset](https://huggingface.co/docs/datasets/share_dataset.html#sharing-a-canonical-dataset), since GitHub LFS is not supported for public forks. Is there a way to request this? Thanks!
|
CLOSED
| 2021-10-25T23:28:21
| 2021-10-30T19:54:51
| 2021-10-30T19:54:51
|
https://github.com/huggingface/datasets/issues/3164
|
zlucia
| 3
|
[] |
3,162
|
`datasets-cli test` should work with datasets without scripts
|
It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not).
I wasn't able to run the script for a private test dataset that I had created on the hub (https://huggingface.co/datasets/huggingface/DataMeasurementsTest/tree/main) -- although @lhoestq came to save the day!
|
OPEN
| 2021-10-25T18:52:30
| 2021-11-25T16:04:29
| null |
https://github.com/huggingface/datasets/issues/3162
|
sashavor
| 5
|
[
"enhancement"
] |
3,155
|
Illegal instruction (core dumped) at datasets import
|
## Describe the bug
I install datasets using conda and when I import datasets I get: "Illegal instruction (core dumped)"
## Steps to reproduce the bug
```
conda create --prefix path/to/env
conda activate path/to/env
conda install -c huggingface -c conda-forge datasets
# exits with output "Illegal instruction (core dumped)"
python -m datasets
```
## Environment info
When I run "datasets-cli env", I also get "Illegal instruction (core dumped)"
If I run the following commands:
```
conda create --prefix path/to/another/new/env
conda activate path/to/another/new/env
conda install -c huggingface transformers
transformers-cli env
```
Then I get:
- `transformers` version: 4.11.3
- Platform: Linux-5.4.0-67-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
Let me know what additional information you need in order to debug this issue. Thanks in advance!
|
CLOSED
| 2021-10-24T17:21:36
| 2021-11-18T19:07:04
| 2021-11-18T19:07:03
|
https://github.com/huggingface/datasets/issues/3155
|
hacobe
| 1
|
[
"bug"
] |
3,154
|
Sacrebleu unexpected behaviour/requirement for data format
|
## Describe the bug
When comparing with the original `sacrebleu` implementation, the `datasets` implementation does some strange things that I do not quite understand. This issue was triggered when I was trying to implement TER and found the datasets implementation of BLEU [here](https://github.com/huggingface/datasets/pull/3153).
In the below snippet, the original sacrebleu snippet works just fine whereas the datasets implementation throws an error.
## Steps to reproduce the bug
```python
import sacrebleu
import datasets
refs = [
['The dog bit the man.', 'It was not unexpected.', 'The man bit him first.'],
['The dog had bit the man.', 'No one was surprised.', 'The man had bitten the dog.'],
]
hyps = ['The dog bit the man.', "It wasn't surprising.", 'The man had just bitten him.']
expected_bleu = 48.530827
ds_bleu = datasets.load_metric("sacrebleu")
bleu_score_sb = sacrebleu.corpus_bleu(hyps, refs).score
print(bleu_score_sb, expected_bleu)
# works: 48.5308...
bleu_score_ds = ds_bleu.compute(predictions=hyps, references=refs)["score"]
print(bleu_score_ds, expected_bleu)
# ValueError: Predictions and/or references don't match the expected format.
```
This seems to be related to how datasets forces the features format here:
https://github.com/huggingface/datasets/blob/87c71b9c29a40958973004910f97e4892559dfed/metrics/sacrebleu/sacrebleu.py#L94-L99
and then manipulates the references during the compute stage here
https://github.com/huggingface/datasets/blob/87c71b9c29a40958973004910f97e4892559dfed/metrics/sacrebleu/sacrebleu.py#L119-L122
I do not quite understand why that is required since sacrebleu handles argument parsing quite well [by itself](https://github.com/mjpost/sacrebleu/blob/2787185dd0f8d224c72ee5a831d163c2ac711a47/sacrebleu/metrics/base.py#L229).
## Actual results
Traceback (most recent call last):
File "C:\Users\bramv\AppData\Roaming\JetBrains\PyCharm2020.3\scratches\scratch_23.py", line 23, in <module>
bleu_score_ds = ds_bleu.compute(predictions=hyps, references=refs)["score"]
File "C:\dev\python\datasets\src\datasets\metric.py", line 392, in compute
self.add_batch(predictions=predictions, references=references)
File "C:\dev\python\datasets\src\datasets\metric.py", line 439, in add_batch
raise ValueError(
ValueError: Predictions and/or references don't match the expected format.
Expected format: {'predictions': Value(dtype='string', id='sequence'), 'references': Sequence(feature=Value(dtype='string', id='sequence'), length=-1, id='references')},
Input predictions: ['The dog bit the man.', "It wasn't surprising.", 'The man had just bitten him.'],
Input references: [['The dog bit the man.', 'It was not unexpected.', 'The man bit him first.'], ['The dog had bit the man.', 'No one was surprised.', 'The man had bitten the dog.']]
## Environment info
- `datasets` version: 1.14.1.dev0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.2
- PyArrow version: 4.0.1
|
CLOSED
| 2021-10-24T08:55:33
| 2021-10-31T09:08:32
| 2021-10-31T09:08:31
|
https://github.com/huggingface/datasets/issues/3154
|
BramVanroy
| 2
|
[
"bug"
] |
3,150
|
Faiss _is_ available on Windows
|
In the setup file, I find the following:
https://github.com/huggingface/datasets/blob/87c71b9c29a40958973004910f97e4892559dfed/setup.py#L171
However, FAISS does install perfectly fine on Windows on my system. You can also confirm this on the [PyPi page](https://pypi.org/project/faiss-cpu/#files), where Windows wheels are available. Maybe this was true for older versions? For current versions, this can be removed I think.
(This isn't really a bug but didn't know how else to tag.)
If you agree I can do a quick PR and remove that line.
|
CLOSED
| 2021-10-22T18:07:16
| 2021-11-02T10:06:03
| 2021-11-02T10:06:03
|
https://github.com/huggingface/datasets/issues/3150
|
BramVanroy
| 1
|
[] |
3,148
|
Streaming with num_workers != 0
|
## Describe the bug
When using dataset streaming with the PyTorch DataLoader, setting num_workers to anything other than 0 causes the code to freeze forever before yielding the first batch.
The code owner is likely @lhoestq
## Steps to reproduce the bug
For your convenience, we've prepped a colab notebook that reproduces the bug
https://colab.research.google.com/drive/1Mgl0oTZSNIE3UeGl_oX9wPCOIxRg19h1?usp=sharing
```python
!pip install datasets==1.14.0
should_freeze_forever = True
# ^-- set this to True in order to freeze forever, set to False in order to work normally
import torch
from datasets import load_dataset
data = load_dataset("oscar", "unshuffled_deduplicated_bn", split="train", streaming=True)
data = data.map(lambda x: {"text": x["text"], "orig": f"oscar[{x['id']}]"}, batched=True)
data = data.shuffle(100, seed=1337)
data = data.with_format("torch")
loader = torch.utils.data.DataLoader(data, batch_size=2, num_workers=2 if should_freeze_forever else 0)
# v-- the code should freeze forever at this line
for i, row in enumerate(loader):
print(row)
if i > 10: break
print("DONE!")
```
## Expected results
The code should not freeze forever with num_workers=2
## Actual results
The code freezes forever with num_workers=2
## Environment info
- `datasets` version: 1.14.0 (also found in previous versions)
- Platform: google colab (also locally)
- Python version: 3.7, (also 3.8)
- PyArrow version: 3.0.0
|
CLOSED
| 2021-10-22T15:07:17
| 2022-07-04T12:14:58
| 2022-07-04T12:14:58
|
https://github.com/huggingface/datasets/issues/3148
|
justheuristic
| 4
|
[
"bug"
] |
3,146
|
CLI test command throws NonMatchingSplitsSizesError when saving infos
|
When trying to generate a dataset's JSON metadata, a `NonMatchingSplitsSizesError` is thrown:
```
$ datasets-cli test datasets/arabic_billion_words --save_infos --all_configs
Testing builder 'Alittihad' (1/10)
Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: Unknown size, post-processed: Unknown size, total: 332.13 MiB) to .cache\arabic_billion_words\Alittihad\1.1.0\8175ff1c9714c6d5d15b1141b6042e5edf048276bb81a9c14e35e149a7a62ae4...
Traceback (most recent call last):
File "path\huggingface\datasets\.venv\Scripts\datasets-cli-script.py", line 33, in <module>
sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')())
File "path\huggingface\datasets\src\datasets\commands\datasets_cli.py", line 33, in main
service.run()
File "path\huggingface\datasets\src\datasets\commands\test.py", line 144, in run
builder.download_and_prepare(
File "path\huggingface\datasets\src\datasets\builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "path\huggingface\datasets\src\datasets\builder.py", line 709, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "path\huggingface\datasets\src\datasets\utils\info_utils.py", line 74, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='arabic_billion_words'), 'recorded': SplitInfo(name='train', num_bytes=1601790302, num_examples=349342, dataset_name='arabic_billion_words')}]
```
This is because a previous run generated a wrong `dataset_info.json`.
This error can be avoided by passing `--ignore_verifications`, but I think this should be assumed when passing `--save_infos`.
|
CLOSED
| 2021-10-22T13:50:53
| 2021-10-27T08:01:49
| 2021-10-27T08:01:49
|
https://github.com/huggingface/datasets/issues/3146
|
albertvillanova
| 0
|
[
"bug"
] |
3,145
|
[when Image type will exist] provide a way to get the data as binary + filename
|
**Is your feature request related to a problem? Please describe.**
When a dataset cell contains a value of type Image (be it from a remote URL, an Array2D/3D, or any other way to represent images), I want to be able to write the image to the disk, with the correct filename, and optionally to know its mimetype, in order to serve it on the web.
Note: this issue would apply exactly the same for the `Audio` type.
**Describe the solution you'd like**
If a "cell" has the type `Image`, provide a way to get the binary content of the file, and the filename, eg as:
```python
filename: str
data: bytes
```
**Describe alternatives you've considered**
A way to write the cell to the disk (passing a local directory), and then return the pathname, filename, and mimetype.
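A minimal, self-contained sketch of the alternative described above; the helper name is hypothetical and it only assumes the cell's raw bytes and filename are available:
```python
import mimetypes
from pathlib import Path

def write_cell(data: bytes, filename: str, out_dir: str = ".") -> dict:
    """Write the binary content of an Image/Audio cell to disk and
    return its path, filename and guessed mimetype."""
    path = Path(out_dir) / filename
    path.write_bytes(data)
    mimetype, _ = mimetypes.guess_type(filename)
    return {"path": str(path), "filename": filename, "mimetype": mimetype}
```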
|
CLOSED
| 2021-10-22T13:23:49
| 2021-12-22T11:05:37
| 2021-12-22T11:05:36
|
https://github.com/huggingface/datasets/issues/3145
|
severo
| 4
|
[
"enhancement",
"dataset-viewer"
] |
3,144
|
Infer the features if missing
|
**Is your feature request related to a problem? Please describe.**
Some datasets, in particular community datasets, have no info file, thus no features.
**Describe the solution you'd like**
If a dataset has no features, the first loaded data (5-10 rows) could be used to infer the type.
Related: `datasets` would provide a way to load the data, and get the rows AND the features as the result.
**Describe alternatives you've considered**
The HF Hub could also provide some UI to help dataset maintainers make the types of their rows explicit, or automatically infer them as an initial proposal.
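For illustration, a rough sketch of how features could be inferred from a handful of materialized rows by letting Arrow type them (the helper name is hypothetical and it assumes at least one row with consistent keys):
```python
from itertools import islice
from datasets import Dataset

def infer_features_from_rows(rows_iterable, n=10):
    """Build a small in-memory Dataset from the first n rows and
    return the features Arrow inferred for them."""
    rows = list(islice(rows_iterable, n))
    columns = {key: [row[key] for row in rows] for key in rows[0]}
    return Dataset.from_dict(columns).features
```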
|
CLOSED
| 2021-10-22T13:17:33
| 2022-09-08T08:23:10
| 2022-09-08T08:23:10
|
https://github.com/huggingface/datasets/issues/3144
|
severo
| 1
|
[
"enhancement",
"dataset-viewer"
] |
3,143
|
Provide a way to check if the features (in info) match with the data of a split
|
**Is your feature request related to a problem? Please describe.**
I understand that currently the loaded data does not always have the type described in the info features.
**Describe the solution you'd like**
Provide a way to check if the rows have the type described by info features
**Describe alternatives you've considered**
Always check it, and raise an error when loading the data if their type doesn't match the features.
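One possible check, sketched under the assumption that casting to the declared features surfaces any mismatch (the helper name is hypothetical):
```python
from datasets import Dataset, Features

def features_match(dataset: Dataset, features: Features) -> bool:
    """Return True if the rows can be cast to the declared features."""
    try:
        dataset.cast(features)  # raises if a column cannot take the declared type
        return True
    except Exception:
        return False
```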
|
OPEN
| 2021-10-22T13:13:36
| 2021-10-22T13:17:56
| null |
https://github.com/huggingface/datasets/issues/3143
|
severo
| 1
|
[
"enhancement",
"dataset-viewer"
] |
3,142
|
Provide a way to write a streamed dataset to the disk
|
**Is your feature request related to a problem? Please describe.**
The streaming mode allows getting the first 100 rows of a dataset very quickly. But it does not cache the answer, so a later call to get the same 100 rows will send a request to the server again and again.
**Describe the solution you'd like**
Provide a way to write the streamed rows of a dataset on the disk, and to load from it later.
**Describe alternatives you've considered**
Provide a third mode: `lazy`, which would use the local cache for the data that have already been fetched previously, and use streaming to get the rest of the requested data.
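As a stop-gap, the first rows of a streamed dataset can already be materialized and saved by hand; a rough sketch, assuming the rows fit in memory (the dataset name is just an example):
```python
from itertools import islice
from datasets import Dataset, load_dataset

streamed = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)
rows = list(islice(streamed, 100))  # fetch the first 100 rows once
columns = {key: [row[key] for row in rows] for key in rows[0]}
Dataset.from_dict(columns).save_to_disk("oscar_head")  # reusable local copy
```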
|
OPEN
| 2021-10-22T13:09:53
| 2024-01-12T07:26:43
| null |
https://github.com/huggingface/datasets/issues/3142
|
severo
| 2
|
[
"enhancement",
"dataset-viewer"
] |
3,139
|
Fix file/directory deletion on Windows
|
Currently, on Windows, some attempts to delete a dataset file/directory will fail with a `PermissionError`.
Examples:
- download a dataset, then force redownload it in the same session while keeping a reference to the downloaded dataset
```python
from datasets import load_dataset
dset = load_dataset("sst", split="train")
dset = load_dataset("sst", split="train", download_mode="force_redownload")
```
- try to clean up the cache files while keeping a reference to those files (via the mapped dataset):
```python
from datasets import load_dataset
dset = load_dataset("sst", split="train")
dset_mapped = dset.map(lambda _: {"dummy_col": 1})
dset.cleanup_cache_files()
```
We should fix those.
|
OPEN
| 2021-10-22T12:22:08
| 2021-10-22T12:22:08
| null |
https://github.com/huggingface/datasets/issues/3139
|
mariosasko
| 0
|
[
"bug"
] |
3,138
|
More fine-grained taxonomy of error types
|
**Is your feature request related to a problem? Please describe.**
Exceptions like `FileNotFoundError` can be raised by different parts of the code, and it's hard to detect which part raised it.
**Describe the solution you'd like**
Give a specific exception type for every group of similar errors
**Describe alternatives you've considered**
Rely on the error message, using regex
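A sketch of the kind of hierarchy this suggests; the names are invented here purely for illustration:
```python
class DatasetsError(Exception):
    """Base class for all errors raised by the library."""

class DatasetNotFoundError(DatasetsError, FileNotFoundError):
    """The dataset script or data files could not be located."""

class SplitNotFoundError(DatasetsError, ValueError):
    """The requested split does not exist for this dataset."""

# Callers could then catch one specific failure mode instead of
# matching on error messages with regexes.
```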
|
OPEN
| 2021-10-22T09:35:29
| 2022-09-20T13:04:42
| null |
https://github.com/huggingface/datasets/issues/3138
|
severo
| 1
|
[
"enhancement",
"dataset-viewer"
] |
3,135
|
Make inspect.get_dataset_config_names always return a non-empty list of configs
|
**Is your feature request related to a problem? Please describe.**
Currently, some datasets have a configuration, while others don't. It would be simpler for the user to always have configuration names to refer to
**Describe the solution you'd like**
In that sense inspect.get_dataset_config_names should always return at least one configuration name, be it `default` or `Check___region_1` (for community datasets like `Check/region_1`).
https://github.com/huggingface/datasets/blob/c5747a5e1dde2670b7f2ca6e79e2ffd99dff85af/src/datasets/inspect.py#L161
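Until then, a thin wrapper can enforce the non-empty behavior on the caller's side; a sketch, with the `"default"` fallback mirroring the proposal above:
```python
from datasets.inspect import get_dataset_config_names

def config_names_or_default(path: str) -> list:
    """Always return at least one configuration name."""
    names = get_dataset_config_names(path)
    return names if names else ["default"]
```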
|
CLOSED
| 2021-10-22T08:02:50
| 2021-10-28T05:44:49
| 2021-10-28T05:44:49
|
https://github.com/huggingface/datasets/issues/3135
|
severo
| 2
|
[
"enhancement",
"dataset-viewer"
] |
3,134
|
Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py
|
datasets version: 1.12.1
`metric = datasets.load_metric('rouge')`
The error:
> ConnectionError Traceback (most recent call last)
> <ipython-input-3-dd10a0c5212f> in <module>
> ----> 1 metric = datasets.load_metric('rouge')
>
> /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, script_version, **metric_init_kwargs)
> 613 download_config=download_config,
> 614 download_mode=download_mode,
> --> 615 dataset=False,
> 616 )
> 617 metric_cls = import_main_class(module_path, dataset=False)
>
> /usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, **download_kwargs)
> 328 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version)
> 329 try:
> --> 330 local_path = cached_path(file_path, download_config=download_config)
> 331 except FileNotFoundError:
> 332 if script_version is not None:
>
> /usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
> 296 use_etag=download_config.use_etag,
> 297 max_retries=download_config.max_retries,
> --> 298 use_auth_token=download_config.use_auth_token,
> 299 )
> 300 elif os.path.exists(url_or_filename):
>
> /usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
> 603 raise FileNotFoundError("Couldn't find file at {}".format(url))
> 604 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
> --> 605 raise ConnectionError("Couldn't reach {}".format(url))
> 606
> 607 # Try a second time
>
> ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py
Is there any remedy to solve the connection issue?
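One workaround, sketched from the `script_version` parameter visible in the traceback above, is to point the loader at a script revision that is still available on GitHub; whether this helps depends on the network issue itself:
```python
import datasets

# The script URL is built from `script_version` (see prepare_module in the traceback);
# forcing "master" avoids requesting the missing 1.11.0 path.
metric = datasets.load_metric("rouge", script_version="master")
```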
|
CLOSED
| 2021-10-22T07:07:52
| 2023-09-14T01:19:45
| 2022-01-19T14:02:31
|
https://github.com/huggingface/datasets/issues/3134
|
yanan1116
| 4
|
[
"bug"
] |
3,132
|
Support Audio feature in streaming mode
|
Currently, Audio feature is only supported for non-streaming datasets.
Due to the large size of many speech datasets, we should also support Audio feature in streaming mode.
|
CLOSED
| 2021-10-21T13:32:18
| 2021-11-12T14:13:04
| 2021-11-12T14:13:04
|
https://github.com/huggingface/datasets/issues/3132
|
albertvillanova
| 0
|
[
"enhancement"
] |
3,131
|
Add ADE20k
|
## Adding a Dataset
- **Name:** ADE20k (it's actually called the MIT Scene Parsing Benchmark; it's a subset of ADE20k, but a lot of authors still call it ADE20k)
- **Description:** A semantic segmentation dataset, consisting of 150 classes.
- **Paper:** http://people.csail.mit.edu/bzhou/publication/scene-parse-camera-ready.pdf
- **Data:** http://sceneparsing.csail.mit.edu/
- **Motivation:** I am currently adding Transformer-based semantic segmentation models that achieve SOTA on this dataset. It would be great to directly access this dataset using HuggingFace Datasets, in order to make example scripts in HuggingFace Transformers.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
CLOSED
| 2021-10-21T10:13:09
| 2023-01-27T14:40:20
| 2023-01-27T14:40:20
|
https://github.com/huggingface/datasets/issues/3131
|
NielsRogge
| 1
|
[
"dataset request",
"vision"
] |
3,128
|
Support Audio feature for TAR archives in sequential access
|
Currently, Audio feature accesses each audio file by their file path.
However, streamed TAR archive files do not allow random access to their archived files.
Therefore, we should enhance the Audio feature to support TAR archived files in sequential access.
|
CLOSED
| 2021-10-21T08:23:01
| 2021-11-17T17:42:07
| 2021-11-17T17:42:07
|
https://github.com/huggingface/datasets/issues/3128
|
albertvillanova
| 0
|
[
"enhancement"
] |
3,127
|
datasets-cli: conversion of a tfds dataset to a huggingface one.
|
### Discussed in https://github.com/huggingface/datasets/discussions/3079
Originally posted by **vitalyshalumov** on October 14, 2021:
I'm trying to convert a tfds dataset to a huggingface one.
I've tried:
1. datasets-cli convert --tfds_path ~/tensorflow_datasets/mnist/3.0.1/ --datasets_directory ~/.cache/huggingface/datasets/mnist/3.0.1/
2. datasets-cli convert --tfds_path ~/tensorflow_datasets/mnist/3.0.1/ --datasets_directory ~/.cache/huggingface/datasets/
and other permutations.
The script appears to be running and finishing without an error but when looking in the huggingface/datasets/ folder nothing is created.
|
OPEN
| 2021-10-21T06:14:27
| 2021-10-27T11:36:05
| null |
https://github.com/huggingface/datasets/issues/3127
|
vitalyshalumov
| 1
|
[] |
3,126
|
"arabic_billion_words" dataset does not create the full dataset
|
## Describe the bug
When running:
`raw_dataset = load_dataset('arabic_billion_words','Alittihad')`
the correct dataset file is pulled from the URL.
But, the generated dataset includes just a small portion of the data included in the file.
This is true for all other portions of the "arabic_billion_words" dataset ('Almasryalyoum',.....)
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
raw_dataset = load_dataset('arabic_billion_words','Alittihad')
#The screen message
Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: 20.62 MiB, post-processed: Unknown size, total: 352.74 MiB)
```
## Expected results
over 100K sentences
## Actual results
only 11K sentences
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.14.0
- Platform: Linux-5.8.0-63-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 4.0.1
|
CLOSED
| 2021-10-21T06:02:38
| 2021-10-22T13:28:40
| 2021-10-22T13:28:40
|
https://github.com/huggingface/datasets/issues/3126
|
vitalyshalumov
| 1
|
[
"bug"
] |
3,123
|
Segmentation fault when loading datasets from file
|
## Describe the bug
Custom dataset loading sometimes segfaults and kills the process if chunks contain a variety of features.
## Steps to reproduce the bug
Download an example file:
```
wget https://gist.githubusercontent.com/TevenLeScao/11e2184394b3fa47d693de2550942c6b/raw/4232704d08fbfcaf93e5b51def9e5051507651ad/tiny_kelm.jsonl
```
Then in Python:
```
import datasets
tiny_kelm = datasets.load_dataset("json", data_files="tiny_kelm.jsonl", chunksize=100000)
```
## Expected results
a `tiny_kelm` functional dataset
## Actual results
⚠️ `Segmentation fault (core dumped)` ⚠️
## Environment info
- `datasets` version: 1.14.0
- Platform: Linux-5.11.0-38-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 5.0.0
|
CLOSED
| 2021-10-20T20:16:11
| 2021-11-02T14:57:07
| 2021-11-02T14:57:07
|
https://github.com/huggingface/datasets/issues/3123
|
TevenLeScao
| 2
|
[
"bug"
] |
3,122
|
OSError with a custom dataset loading script
|
## Describe the bug
I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost identical and they have the same directory structure, yet I am only getting an error with janes_tag.
## Steps to reproduce the bug
```python
dataset = datasets.load_dataset('classla/janes_tag', split='validation')
```
## Expected results
Dataset correctly loaded.
## Actual results
Traceback (most recent call last):
File "C:/mypath/test.py", line 91, in <module>
load_and_print('janes_tag')
File "C:/mypath/test.py", line 32, in load_and_print
dataset = datasets.load_dataset('classla/{}'.format(ds_name), split='validation')
File "C:\mypath\venv\lib\site-packages\datasets\load.py", line 1632, in load_dataset
use_auth_token=use_auth_token,
File "C:\mypath\venv\lib\site-packages\datasets\builder.py", line 608, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "C:\mypath\venv\lib\site-packages\datasets\builder.py", line 704, in _download_and_prepare
) from None
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: 'C:\\mypath\\.cache\\huggingface\\datasets\\downloads\\2c9996e44bdc5af9c89bffb9e6d7a3e42fdb2f56bacab45de13b20f3032ea7ca\\data\\train_all.conllup'
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.14.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.5
- PyArrow version: 3.0.0
|
CLOSED
| 2021-10-20T20:08:39
| 2021-11-23T09:55:38
| 2021-11-23T09:55:38
|
https://github.com/huggingface/datasets/issues/3122
|
suzanab
| 8
|
[
"bug"
] |
3,119
|
Add OpenSLR 83 - Crowdsourced high-quality UK and Ireland English Dialect speech
|
## Adding a Dataset
- **Name:** *openslr*
- **Description:** *Data set which contains male and female recordings of English from various dialects of the UK and Ireland.*
- **Paper:** *https://www.openslr.org/resources/83/about.html*
- **Data:** *Eleven separate data files can be found via https://www.openslr.org/resources/83/*
- **Motivation:** *Increase english ASR data with UK and Irish dialects*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
The *openslr* dataset already exists; this will add an additional subset, *SLR83*.
|
CLOSED
| 2021-10-20T12:05:07
| 2021-10-22T19:00:52
| 2021-10-22T08:30:22
|
https://github.com/huggingface/datasets/issues/3119
|
tyrius02
| 1
|
[
"dataset request"
] |
3,117
|
CI error at each release commit
|
After 1.12.0, there is a recurrent CI error at each release commit: https://app.circleci.com/pipelines/github/huggingface/datasets/8289/workflows/665d954d-e409-4602-8202-e678594d2946/jobs/51110
```
____________________ LoadTest.test_load_dataset_canonical _____________________
[gw0] win32 -- Python 3.6.8 C:\tools\miniconda3\python.exe
self = <tests.test_load.LoadTest testMethod=test_load_dataset_canonical>
def test_load_dataset_canonical(self):
scripts_version = os.getenv("HF_SCRIPTS_VERSION", SCRIPTS_VERSION)
with self.assertRaises(FileNotFoundError) as context:
datasets.load_dataset("_dummy")
self.assertIn(
f"https://raw.githubusercontent.com/huggingface/datasets/{scripts_version}/datasets/_dummy/_dummy.py",
> str(context.exception),
)
E AssertionError: 'https://raw.githubusercontent.com/huggingface/datasets/1.14.0/datasets/_dummy/_dummy.py' not found in "Couldn't find a dataset script at C:\\Users\\circleci\\datasets\\_dummy\\_dummy.py or any data file in the same directory. Couldn't find '_dummy' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/_dummy/_dummy.py"
tests\test_load.py:358: AssertionError
```
|
CLOSED
| 2021-10-20T11:42:53
| 2021-10-20T13:02:35
| 2021-10-20T13:02:35
|
https://github.com/huggingface/datasets/issues/3117
|
albertvillanova
| 0
|
[
"bug"
] |
3,114
|
load_from_disk in DatasetsDict/Dataset not working with PyArrowHDFS wrapper implementing fsspec.spec.AbstractFileSystem
|
## Describe the bug
Passing a PyArrowHDFS implementation of fsspec.spec.AbstractFileSystem as the `fs` param required by the `load_from_disk` methods in `DatasetDict` (in dataset_dict.py) and `Dataset` (in arrow_dataset.py) results in an error when calling the download method of the `fs` parameter.
## Steps to reproduce the bug
The documentation for the `fs` parameter states:
```
fs (:class:`~filesystems.S3FileSystem` or ``fsspec.spec.AbstractFileSystem``, optional, default ``None``):
Instance of the remote filesystem used to download the files from.
```
`PyArrowHDFS` from [fsspec](https://filesystem-spec.readthedocs.io/en/latest/_modules/fsspec/implementations/hdfs.html) implements `fsspec.spec.AbstractFileSystem`. However, when using it as shown below, I get an error.
```python
from fsspec.implementations.hdfs import PyArrowHDFS
...
transformed_corpus_path = "/user/my_user/clickbait/transformed_ds/"
fs = PyArrowHDFS(host, port, user, kerb_ticket=kerb_ticket)
dss = DatasetDict.load_from_disk(transformed_corpus_path, fs, True)
```
## Expected results
Prior to loading from disk, I had successfully stored the data and meta-information of a DatasetDict in HDFS by doing:
```python
transformed_corpus_path = "/user/my_user/clickbait/transformed_ds/"
fs = PyArrowHDFS(host, port, user, kerb_ticket=kerb_ticket)
my_datasets.save_to_disk(transformed_corpus_path, fs=fs)
```
As I have 3 datasets in the DatasetDict named `my_datasets`, the previous Python code creates the following contents in HDFS:
```sh
$ hadoop fs -ls "/user/my_user/clickbait/transformed_ds/"
Found 4 items
-rw------- 3 my_user users 43 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/dataset_dict.json
drwx------ - my_user users 0 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/test
drwx------ - my_user users 0 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/train
drwx------ - my_user users 0 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/validation
```
I would expect `dss` to contain the Arrow-backed datasets I previously saved in HDFS with the `save_to_disk` method on the `DatasetDict` object, when invoking `DatasetDict.load_from_disk(...)` as described above.
## Actual results
However, when trying to recover the saved datasets, I get this error:
```
...
File "/home/fperez/dev/neuromancer/neuromancer/corpus.py", line 186, in load_transformed_corpus_from_disk
dss = DatasetDict.load_from_disk(transformed_corpus_path, fs, True)
File "/home/fperez/anaconda3/envs/neuromancer/lib/python3.9/site-packages/datasets/dataset_dict.py", line 748, in load_from_disk
dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory)
File "/home/fperez/anaconda3/envs/neuromancer/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1048, in load_from_disk
fs.download(src_dataset_path, dataset_path.as_posix(), recursive=True)
File "pyarrow/_hdfsio.pyx", line 438, in pyarrow._hdfsio.HadoopFileSystem.download
TypeError: download() got an unexpected keyword argument 'recursive'
```
Examining the [signature of the download method in pyarrow 5.0.0](https://github.com/apache/arrow/blob/54d2bd89c99df72fa091b025452f85dd5d88e3cf/python/pyarrow/_hdfsio.pyx#L438), we can see that there is no `recursive` parameter:
```python
def download(self, path, stream, buffer_size=None):
with self.open(path, 'rb') as f:
f.download(stream, buffer_size=buffer_size)
```
## Environment info
- `datasets` version: 1.13.3
- Platform: Linux-3.10.0-1160.15.2.el7.x86_64-x86_64-with-glibc2.33
- Python version: 3.9.7
- PyArrow version: 5.0.0
|
CLOSED
| 2021-10-19T20:01:45
| 2022-02-14T14:00:28
| 2022-02-14T14:00:28
|
https://github.com/huggingface/datasets/issues/3114
|
francisco-perez-sorrosal
| 2
|
[
"bug"
] |
3,113
|
Loading Data from HDF files
|
**Is your feature request related to a problem? Please describe.**
More often than not I come across big HDF datasets, and currently there is no straightforward way to feed them into a dataset.
**Describe the solution you'd like**
I would love to see a `from_h5` method that takes an interface implemented by the user describing how items are extracted from the dataset (in case of multiple datasets containing elements like arrays, metadata, etc.).
**Describe alternatives you've considered**
Currently I manually load HDF files using `h5py` and implement the PyTorch dataset interface. For small h5 files I load them into a pandas dataframe and use the `from_pandas` function in the `datasets` package to load them, but for big datasets this is not feasible.
**Additional context**
HDF files are widespread throughout different domains and are one of the go-to formats for many researchers/scientists/engineers who work with numerical data. Given that `datasets`' use cases have outgrown NLP, it would make a lot of sense to focus on things like supporting HDF files.
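A minimal sketch of the manual route described above, for files that fit in memory; the file name and HDF5 dataset keys are placeholders:
```python
import h5py
import pandas as pd
from datasets import Dataset

with h5py.File("example.h5", "r") as f:  # placeholder filename
    frame = pd.DataFrame({
        "value": f["values"][:],   # 1-D HDF5 datasets become columns
        "label": f["labels"][:],
    })

ds = Dataset.from_pandas(frame)
```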
|
CLOSED
| 2021-10-19T19:26:46
| 2025-08-19T13:28:54
| 2025-08-19T13:28:54
|
https://github.com/huggingface/datasets/issues/3113
|
FeryET
| 9
|
[
"enhancement",
"good second issue"
] |
3,112
|
OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB
|
## Describe the bug
Despite having batches way under 2 GB when running `datasets.map()`, after correctly processing the data of the first batch without fuss, and irrespective of writer_batch_size (say 2, 4, 8, 16, 32, 64 and 128 in my case), it returns the following error:
> OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB
Note that I always run with `batch_size=writer_batch_size`:
## Steps to reproduce the bug
```python
datasets.map(lambda example : {"column_name" : function(arguments)}, batched=False, remove_columns = datasets.column_names, batch_size=batch_size, writer_batch_size=batch_size, disable_nullable=True, num_proc=None, desc="blablabla")
```
## Introspecting CUDA memory during bug
I placed the following statement within `function(arguments)` to introspect memory usage; it reports merely a little over a quarter of 2 GB:
`print(torch.cuda.memory_summary(device=device, abbreviated=False))`
> |===========================================================================|
| PyTorch CUDA memory summary, device ID 0 |
|---------------------------------------------------------------------------|
| CUDA OOMs: 0 | cudaMalloc retries: 0 |
|===========================================================================|
| Metric | Cur Usage | Peak Usage | Tot Alloc | Tot Freed |
|---------------------------------------------------------------------------|
| Allocated memory | 541418 KB | 545725 KB | 555695 KB | 14276 KB |
| from large pool | 540672 KB | 544431 KB | 544431 KB | 3759 KB |
| from small pool | 746 KB | 1714 KB | 11264 KB | 10517 KB |
|---------------------------------------------------------------------------|
| Active memory | 541418 KB | 545725 KB | 555695 KB | 14276 KB |
| from large pool | 540672 KB | 544431 KB | 544431 KB | 3759 KB |
| from small pool | 746 KB | 1714 KB | 11264 KB | 10517 KB |
|---------------------------------------------------------------------------|
| GPU reserved memory | 598016 KB | 598016 KB | 598016 KB | 0 B |
| from large pool | 595968 KB | 595968 KB | 595968 KB | 0 B |
| from small pool | 2048 KB | 2048 KB | 2048 KB | 0 B |
|---------------------------------------------------------------------------|
| Non-releasable memory | 36117 KB | 52292 KB | 274275 KB | 238158 KB |
| from large pool | 34816 KB | 51537 KB | 261713 KB | 226897 KB |
| from small pool | 1301 KB | 2045 KB | 12562 KB | 11261 KB |
|---------------------------------------------------------------------------|
| Allocations | 198 | 224 | 478 | 280 |
| from large pool | 74 | 75 | 75 | 1 |
| from small pool | 124 | 150 | 403 | 279 |
|---------------------------------------------------------------------------|
| Active allocs | 198 | 224 | 478 | 280 |
| from large pool | 74 | 75 | 75 | 1 |
| from small pool | 124 | 150 | 403 | 279 |
|---------------------------------------------------------------------------|
| GPU reserved segments | 21 | 21 | 21 | 0 |
| from large pool | 20 | 20 | 20 | 0 |
| from small pool | 1 | 1 | 1 | 0 |
|---------------------------------------------------------------------------|
| Non-releasable allocs | 18 | 23 | 166 | 148 |
| from large pool | 17 | 18 | 19 | 2 |
| from small pool | 1 | 6 | 147 | 146 |
|===========================================================================|
## Expected results
Efficiently process the datasets and write it down to disk.
## Actual results
--------------------------------------------------------------------------
OverflowError Traceback (most recent call last)
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2390 else:
-> 2391 writer.write(example)
2392 else:
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_writer.py in write(self, example, key, writer_batch_size)
367
--> 368 self.write_examples_on_file()
369
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_writer.py in write_examples_on_file(self)
316 if not isinstance(pa_array[0], pa.lib.FloatScalar):
--> 317 raise OverflowError(
318 "There was an overflow in the {}. Try to reduce writer_batch_size to have batches smaller than 2GB".format(
OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB
During handling of the above exception, another exception occurred:
OverflowError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_16268/2456940807.py in <module>
3 #tracker = OfflineEmissionsTracker(country_iso_code="FRA", project_name='xxx'+time_stamp,output_dir='./codecarbon')
4 #tracker.start()
----> 5 process_datasets(source_datasets_paths, dataset_dir, LM_tokenizer, LMhead_model, datasets_selection=['wikipedia'], from_scratch=True,
6 clean_sentences=False, negative_sampling=False, translate=False, tokenize=False, generate_embeddings=True, concatenate_embeddings=False,
7 max_sample=10000, padding='do_not_pad', truncation=True, cpu_batch_size=1000, gpu_batch_size=2, cpu_writer_batch_size=1000, gpu_writer_batch_size=2, disable_nullable=True, num_proc=None) #
~\xxx\xxx.py in process_datasets(source_datasets_paths, dataset_dir, LM_tokenizer, LMhead_model, datasets_selection, from_scratch, clean_sentences, translate, negative_sampling, tokenize, generate_embeddings, concatenate_embeddings, max_sample, padding, truncation, cpu_batch_size, gpu_batch_size, cpu_writer_batch_size, gpu_writer_batch_size, disable_nullable, num_proc)
481 for column in tqdm(dataset.column_names, desc=f'Processing column', leave=False):
482 if "xxx_" in column:
--> 483 dataset = dataset.map(lambda example :
484 {"embeddings_"+str(column).replace("translated_",""):function(input_ids=example[column],
485 token_type_ids=example[column.replace("input_ids","token_type_ids")],
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2034
2035 if num_proc is None or num_proc == 1:
-> 2036 return self._map_single(
2037 function=function,
2038 with_indices=with_indices,
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_dataset.py in wrapper(*args, **kwargs)
501 self: "Dataset" = kwargs.pop("self")
502 # apply actual function
--> 503 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
504 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
505 for dataset in datasets:
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_dataset.py in wrapper(*args, **kwargs)
468 }
469 # apply actual function
--> 470 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
471 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
472 # re-apply format to the output
~\anaconda3\envs\xxx\lib\site-packages\datasets\fingerprint.py in wrapper(*args, **kwargs)
404 # Call actual function
405
--> 406 out = func(self, *args, **kwargs)
407
408 # Update fingerprint of in-place transforms + update in-place history of transforms
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2425 if update_data:
2426 if writer is not None:
-> 2427 writer.finalize()
2428 if tmp_file is not None:
2429 tmp_file.close()
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_writer.py in finalize(self, close_stream)
440 # Re-intializing to empty list for next batch
441 self.hkey_record = []
--> 442 self.write_examples_on_file()
443 if self.pa_writer is None:
444 if self._schema is not None:
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_writer.py in write_examples_on_file(self)
315 # This check fails with FloatArrays with nans, which is not what we want, so account for that:
316 if not isinstance(pa_array[0], pa.lib.FloatScalar):
--> 317 raise OverflowError(
318 "There was an overflow in the {}. Try to reduce writer_batch_size to have batches smaller than 2GB".format(
319 type(pa_array)
OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.13.3
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.8.11
- PyArrow version: 3.0.0
## Next steps
Testing on Linux.
@albertvillanova
|
OPEN
| 2021-10-19T18:21:41
| 2021-10-19T18:52:29
| null |
https://github.com/huggingface/datasets/issues/3112
|
BenoitDalFerro
| 4
|
[
"bug"
] |
3,111
|
concatenate_datasets removes ClassLabel typing.
|
## Describe the bug
When concatenating two datasets, we lose typing of ClassLabel columns.
I can work on this if this is a legitimate bug.
## Steps to reproduce the bug
```python
import datasets
from datasets import Dataset, ClassLabel, Value, concatenate_datasets
DS_LEN = 100
my_dataset = Dataset.from_dict(
{
"sentence": [f"{chr(i % 10)}" for i in range(DS_LEN)],
"label": [i % 2 for i in range(DS_LEN)]
}
)
my_predictions = Dataset.from_dict(
{
"pred": [(i + 1) % 2 for i in range(DS_LEN)]
}
)
my_dataset = my_dataset.cast(datasets.Features({"sentence": Value("string"), "label": ClassLabel(2, names=["POS", "NEG"])}))
print("Original")
print(my_dataset)
print(my_dataset.features)
concat_ds = concatenate_datasets([my_dataset, my_predictions], axis=1)
print("Concatenated")
print(concat_ds)
print(concat_ds.features)
```
## Expected results
The features of `concat_ds` should contain ClassLabel.
## Actual results
On master, I get:
```
{'sentence': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None), 'pred': Value(dtype='int64', id=None)}
```
## Environment info
- `datasets` version: 1.14.1.dev0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.11
- PyArrow version: 4.0.1
|
CLOSED
| 2021-10-19T18:05:31
| 2021-10-21T14:50:21
| 2021-10-21T14:50:21
|
https://github.com/huggingface/datasets/issues/3111
|
Dref360
| 1
|
[
"bug"
] |
3,105
|
download_mode=`force_redownload` does not work on removed datasets
|
## Describe the bug
If a cached dataset is removed from the library, I don't see how to delete it programmatically. I thought that using `force_redownload` would try to refresh the cache, then raise an exception, but it reuses the cache instead.
## Steps to reproduce the bug
_requires to already have `wit` in the cache_: see https://github.com/huggingface/datasets/pull/2981
```python
import datasets as ds
dataset = ds.load_dataset("wit", split="train", download_mode='force_redownload')
```
## Expected results
It should raise an exception, since the dataset does not exist anymore.
## Actual results
It uses the cached result
```
Using the latest cached version of the module from /home/slesage/.cache/huggingface/modules/datasets_modules/datasets/wit/107afbffd48e058b19101bddc47fbee25fa68eb6d50a733e262875f1285a5171 (last modified on Wed Sep 29 08:21:10 2021) since it couldn't be found locally at wit, or remotely on the Hugging Face Hub.
```
## Environment info
- `datasets` version: 1.13.4.dev0
- Platform: Linux-5.11.0-1019-aws-x86_64-with-glibc2.31
- Python version: 3.9.6
- PyArrow version: 4.0.1
|
OPEN
| 2021-10-18T13:12:38
| 2021-10-22T09:36:10
| null |
https://github.com/huggingface/datasets/issues/3105
|
severo
| 0
|
[
"bug",
"dataset-viewer"
] |
3,104
|
Missing Zenodo 1.13.3 release
|
After `datasets` 1.13.3 release, this does not appear in Zenodo releases: https://zenodo.org/record/5570305
TODO:
- [x] Contact Zenodo support
- [x] Check it is fixed
|
CLOSED
| 2021-10-18T12:57:18
| 2021-10-22T13:22:25
| 2021-10-22T13:22:24
|
https://github.com/huggingface/datasets/issues/3104
|
albertvillanova
| 1
|
[
"bug"
] |
3,102
|
Unsuitable project description in PyPI
|
Currently, `datasets` project description appearing in PyPI shows the release instructions addressed to core maintainers: https://pypi.org/project/datasets/1.13.3/
|
CLOSED
| 2021-10-18T12:45:00
| 2021-10-18T12:59:56
| 2021-10-18T12:59:56
|
https://github.com/huggingface/datasets/issues/3102
|
albertvillanova
| 0
|
[] |
3,099
|
AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo'
|
## Describe the bug
After installing with `pip install datasets` or with `conda install -c huggingface -c conda-forge datasets`, I cannot use `datasets`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("sst", "default")
```
## Actual results
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-fbe7981e6e21> in <module>
1 import torch
2 import transformers
----> 3 from datasets import load_dataset
4
5 dataset = load_dataset("sst", "default")
~/miniforge3/envs/actor/lib/python3.8/site-packages/datasets/__init__.py in <module>
35 from .arrow_reader import ArrowReader, ReadInstruction
36 from .arrow_writer import ArrowWriter
---> 37 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
38 from .combine import interleave_datasets
39 from .dataset_dict import DatasetDict, IterableDatasetDict
~/miniforge3/envs/actor/lib/python3.8/site-packages/datasets/builder.py in <module>
42 )
43 from .arrow_writer import ArrowWriter, BeamWriter
---> 44 from .data_files import DataFilesDict, _sanitize_patterns
45 from .dataset_dict import DatasetDict, IterableDatasetDict
46 from .fingerprint import Hasher
~/miniforge3/envs/actor/lib/python3.8/site-packages/datasets/data_files.py in <module>
118
119 def _exec_patterns_in_dataset_repository(
--> 120 dataset_info: huggingface_hub.hf_api.DatasetInfo,
121 patterns: List[str],
122 allowed_extensions: Optional[list] = None,
AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo'
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.13.3
- Platform: macOS-11.3.1-arm64-arm-64bit
- Python version: 3.8.10
- PyArrow version: 5.0.0
|
CLOSED
| 2021-10-17T14:17:47
| 2021-11-09T16:42:29
| 2021-11-09T16:42:28
|
https://github.com/huggingface/datasets/issues/3099
|
JTWang2000
| 6
|
[
"bug"
] |
3,097
|
`ModuleNotFoundError: No module named 'fsspec.exceptions'`
|
## Describe the bug
I keep running into an fsspec `ModuleNotFoundError`.
## Steps to reproduce the bug
```python
>>> from datasets import get_dataset_infos
2021-10-15 15:25:37.863206: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-10-15 15:25:37.863252: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/__init__.py", line 37, in <module>
from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 56, in <module>
from .utils.streaming_download_manager import StreamingDownloadManager
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/streaming_download_manager.py", line 11, in <module>
from fsspec.exceptions import FSTimeoutError
ModuleNotFoundError: No module named 'fsspec.exceptions'
```
Yet, I do have `fsspec`:
```bash
hf@victor-scale:~/dev/promptsource$ pip show fsspec
Name: fsspec
Version: 2021.5.0
Summary: File-system specification
Home-page: http://github.com/intake/filesystem_spec
Author: None
Author-email: None
License: BSD
Location: /home/hf/dev/promptsource/.venv/lib/python3.7/site-packages
Requires:
Required-by: datasets
```
With the same version of fsspec and `datasets==1.9.0`, I don't see this problem....
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
I can't even run `datasets-cli env`, actually, but here's my env:
- `datasets` version: 1.13.3
- Platform: Ubuntu 18.04
- Python version: 3.7.10
- PyArrow version: 3.0.0
|
CLOSED
| 2021-10-15T19:34:38
| 2021-10-18T07:51:54
| 2021-10-18T07:51:54
|
https://github.com/huggingface/datasets/issues/3097
|
VictorSanh
| 1
|
[
"bug"
] |
3,095
|
`cast_column` makes audio decoding fail
|
## Describe the bug
After changing the sampling rate, automatic decoding fails.
## Steps to reproduce the bug
```python
from datasets import load_dataset
import datasets
ds = load_dataset("common_voice", "ab", split="train")
ds = ds.cast_column("audio", datasets.features.Audio(sampling_rate=16_000))
print(ds[0]["audio"]) # <- this fails currently
```
yields:
```
TypeError: forward() takes 2 positional arguments but 4 were given
```
## Expected results
no failure
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 1.13.2 (master)
- Platform: Linux-5.11.0-1019-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 5.0.0
|
CLOSED
| 2021-10-15T13:36:58
| 2023-04-07T09:43:20
| 2021-10-15T15:38:30
|
https://github.com/huggingface/datasets/issues/3095
|
patrickvonplaten
| 2
|
[
"bug"
] |
3,094
|
Support loading a dataset from SQLite files
|
As requested by @julien-c, we could eventually support loading a dataset from SQLite files, like it is the case for JSON/CSV files.
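Until this is supported natively, a minimal sketch of a workaround via pandas (file and table names below are placeholders):
```python
import sqlite3

import pandas as pd
from datasets import Dataset

# Read the SQLite table into a DataFrame, then build a Dataset from it.
con = sqlite3.connect("my_data.db")
df = pd.read_sql_query("SELECT * FROM my_table", con)
con.close()

ds = Dataset.from_pandas(df)
```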
|
CLOSED
| 2021-10-15T10:58:41
| 2022-10-03T16:32:29
| 2022-10-03T16:32:29
|
https://github.com/huggingface/datasets/issues/3094
|
albertvillanova
| 2
|
[
"enhancement",
"good second issue"
] |
3,093
|
Error loading json dataset with multiple splits if keys in nested dicts have a different order
|
## Describe the bug
Loading a json dataset with multiple splits that have nested dicts with keys in different order results in the error below.
If the keys in the nested dicts always have the same order, or if you only load a single split whose nested dicts don't have the same order, everything works fine.
## Steps to reproduce the bug
Create two json files:
train.json
```
{"a": {"c": 8, "b": 5}}
{"a": {"b": 7, "c": 6}}
```
test.json
```
{"a": {"b": 1, "c": 2}}
{"a": {"b": 3, "c": 4}}
```
```python
from datasets import load_dataset
# Loading the files individually works (even though the keys in train.json don't have the same order)
load_dataset('json', data_files={"test": "test.json"})
load_dataset('json', data_files={"train": "train.json"})
# Loading both splits fails
load_dataset('json', data_files={"train": "train.json", "test": "test.json"})
```
## Expected results
Loading both splits should not give an error, whether the nested dicts have the same order or not.
## Actual results
```
>>> load_dataset('json', data_files={"train": "train.json", "test": "test.json"})
Using custom data configuration default-f1bc76fd07398c4c
Downloading and preparing dataset json/default to /home/dthulke/.cache/huggingface/datasets/json/default-f1bc76fd07398c4c/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426...
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2/2 [00:00<00:00, 8839.42it/s]
100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2/2 [00:00<00:00, 477.82it/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/datasets/load.py", line 1632, in load_dataset
use_auth_token=use_auth_token,
File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/datasets/builder.py", line 608, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/datasets/builder.py", line 697, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/datasets/builder.py", line 1159, in _prepare_split
writer.write_table(table)
File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/datasets/arrow_writer.py", line 428, in write_table
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "pyarrow/table.pxi", line 1596, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 592, in pyarrow.lib._sanitize_arrays
File "pyarrow/array.pxi", line 329, in pyarrow.lib.asarray
File "pyarrow/table.pxi", line 277, in pyarrow.lib.ChunkedArray.cast
File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/pyarrow/compute.py", line 297, in cast
return call_function("cast", [arr], options)
File "pyarrow/_compute.pyx", line 527, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 337, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 120, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct
```
## Environment info
- `datasets` version: 1.13.2
- Platform: Linux-4.15.0-147-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyArrow version: 5.0.0
|
CLOSED
| 2021-10-15T09:33:25
| 2022-04-10T14:06:29
| 2022-04-10T14:06:29
|
https://github.com/huggingface/datasets/issues/3093
|
dthulke
| 2
|
[
"bug"
] |
3,091
|
`blog_authorship_corpus` is broken
|
## Describe the bug
The dataset `blog_authorship_corpus` is broken.
By bypassing the checksum checks, the loading does not return any error but the resulting dataset is empty.
I suspect it is because the data download url is broken (http://www.cs.biu.ac.il/~koppel/blogs/blogs.zip).
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("blog_authorship_corpus", split="train", download_mode='force_redownload')
```
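For reference, the checksum bypass mentioned above was along these lines (a sketch; it "succeeds" but the dataset comes back empty):
```python
from datasets import load_dataset

# Skips checksum/size verification; with the broken source URL this loads
# without error but yields an empty dataset.
ds = load_dataset(
    "blog_authorship_corpus",
    split="train",
    download_mode="force_redownload",
    ignore_verifications=True,
)
print(len(ds))  # 0
```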
## Expected results
No error.
## Actual results
```
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
/tmp/ipykernel_5237/1729238701.py in <module>
2 ds = load_dataset(
3 "blog_authorship_corpus", split="train",
----> 4 download_mode='force_redownload'
5 )
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)
1115 ignore_verifications=ignore_verifications,
1116 try_from_hf_gcs=try_from_hf_gcs,
-> 1117 use_auth_token=use_auth_token,
1118 )
1119
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
635 if not downloaded_from_gcs:
636 self._download_and_prepare(
--> 637 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
638 )
639 # Sync info
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
707 if verify_infos:
708 verify_checksums(
--> 709 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
710 )
711
/opt/conda/lib/python3.7/site-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
38 if len(bad_urls) > 0:
39 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))
41 logger.info("All the checksums matched successfully" + for_verification_name)
42
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['http://www.cs.biu.ac.il/~koppel/blogs/blogs.zip']
```
## Environment info
- `datasets` version: 1.13.2
- Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11
- Python version: 3.7.10
- PyArrow version: 5.0.0
|
CLOSED
| 2021-10-15T09:20:40
| 2021-10-19T13:06:10
| 2021-10-19T12:50:39
|
https://github.com/huggingface/datasets/issues/3091
|
fdtomasi
| 3
|
[
"bug"
] |
3,089
|
JNLPBA Dataset
|
## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
## Expected results
The dataset loading script for this dataset is incorrect. This is a biomedical dataset used for named entity recognition. The entities in the [script](https://github.com/huggingface/datasets/blob/master/datasets/jnlpba/jnlpba.py#L81-L83) are: O, B, and I. The correct entities from the original data file are:
['O',
'B-DNA',
'I-DNA',
'B-RNA',
'I-RNA',
'B-cell_line',
'I-cell_line',
'B-cell_type',
'I-cell_type',
'B-protein',
'I-protein']
## Actual results
The dataset loader script needs to include the following NER names:
['O',
'B-DNA',
'I-DNA',
'B-RNA',
'I-RNA',
'B-cell_line',
'I-cell_line',
'B-cell_type',
'I-cell_type',
'B-protein',
'I-protein']
And the [data](https://github.com/huggingface/datasets/blob/master/datasets/jnlpba/jnlpba.py#L46) that is being pulled has been modified from the original dataset and does not include the original NER tags.
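For illustration, a minimal sketch (not the actual loader code) of a tag feature covering the full IOB2 set listed above:
```python
import datasets

# Sketch only: how the loading script's NER feature could declare all tags.
ner_tags_feature = datasets.Sequence(
    datasets.features.ClassLabel(
        names=[
            "O",
            "B-DNA", "I-DNA",
            "B-RNA", "I-RNA",
            "B-cell_line", "I-cell_line",
            "B-cell_type", "I-cell_type",
            "B-protein", "I-protein",
        ]
    )
)
```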
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
|
CLOSED
| 2021-10-15T01:16:02
| 2021-10-22T08:23:57
| 2021-10-22T08:23:57
|
https://github.com/huggingface/datasets/issues/3089
|
sciarrilli
| 2
|
[
"bug"
] |
3,087
|
Removing the label column in a text classification dataset yields errors
|
## Describe the bug
This looks like #3059 but it's not linked to the cache this time. Removing the `label` column from a text classification dataset and then performing any processing will result in an error.
To reproduce:
```py
from datasets import load_dataset
from transformers import AutoTokenizer
raw_datasets = load_dataset("imdb")
raw_datasets = raw_datasets.remove_columns("label")
model_checkpoint = "distilbert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
context_length = 128
def tokenize_pad_and_truncate(texts):
return tokenizer(texts["text"], truncation=True, padding="max_length", max_length=context_length)
tokenized_datasets = raw_datasets.map(tokenize_pad_and_truncate, batched=True)
```
Traceback:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-1-ba61bb32f786> in <module>
12 return tokenizer(texts["text"], truncation=True, padding="max_length", max_length=context_length)
13
---> 14 tokenized_datasets = raw_datasets.map(tokenize_pad_and_truncate, batched=True)
~/git/datasets/src/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, desc)
500 desc=desc,
501 )
--> 502 for k, dataset in self.items()
503 }
504 )
~/git/datasets/src/datasets/dataset_dict.py in <dictcomp>(.0)
500 desc=desc,
501 )
--> 502 for k, dataset in self.items()
503 }
504 )
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2051 new_fingerprint=new_fingerprint,
2052 disable_tqdm=disable_tqdm,
-> 2053 desc=desc,
2054 )
2055 else:
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
501 self: "Dataset" = kwargs.pop("self")
502 # apply actual function
--> 503 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
504 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
505 for dataset in datasets:
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
468 }
469 # apply actual function
--> 470 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
471 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
472 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
404 # Call actual function
405
--> 406 out = func(self, *args, **kwargs)
407
408 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2243 if os.path.exists(cache_file_name) and load_from_cache_file:
2244 logger.warning("Loading cached processed dataset at %s", cache_file_name)
-> 2245 info = self.info.copy()
2246 info.features = features
2247 info.task_templates = None
~/git/datasets/src/datasets/info.py in copy(self)
278
279 def copy(self) -> "DatasetInfo":
--> 280 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
281
282
~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes)
~/git/datasets/src/datasets/info.py in __post_init__(self)
177 for idx, template in enumerate(self.task_templates):
178 if isinstance(template, TextClassification):
--> 179 labels = self.features[template.label_column].names
180 self.task_templates[idx] = TextClassification(
181 text_column=template.text_column, label_column=template.label_column, labels=labels
KeyError: 'label'
```
|
CLOSED
| 2021-10-14T20:12:50
| 2021-10-15T10:11:04
| 2021-10-15T10:11:04
|
https://github.com/huggingface/datasets/issues/3087
|
sgugger
| 0
|
[
"bug"
] |
3,084
|
VisibleDeprecationWarning when using `set_format("numpy")`
|
Code to reproduce:
```
from datasets import load_dataset
dataset = load_dataset("glue", "mnli")
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('distilbert-base-cased')
def tokenize_function(dataset):
return tokenizer(dataset['premise'])
tokenized_datasets = dataset.map(tokenize_function, batched=True, remove_columns=dataset['train'].features)
tokenized_datasets.set_format("numpy")
tokenized_datasets['train'][5:8]
```
Outputs:
```
python3.9/site-packages/datasets/formatting/formatting.py:167: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
return np.array(array, copy=False, **self.np_array_kwargs)
```
|
CLOSED
| 2021-10-14T13:53:01
| 2021-10-22T16:04:14
| 2021-10-22T16:04:14
|
https://github.com/huggingface/datasets/issues/3084
|
Rocketknight1
| 1
|
[
"bug"
] |
3,083
|
Datasets with Audio feature raise error when loaded from cache due to _resampler parameter
|
## Describe the bug
As reported by @patrickvonplaten, when loaded from the cache, datasets containing the Audio feature raise TypeError.
## Steps to reproduce the bug
```python
from datasets import load_dataset
# load first time works
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean")
# load from cache breaks
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean")
```
## Actual results
```
TypeError: __init__() got an unexpected keyword argument '_resampler'
```
|
CLOSED
| 2021-10-14T13:23:53
| 2021-10-14T15:13:40
| 2021-10-14T15:13:40
|
https://github.com/huggingface/datasets/issues/3083
|
albertvillanova
| 0
|
[
"bug"
] |
3,080
|
Error related to timeout keyword argument
|
## Describe the bug
As reported by @patrickvonplaten, a TypeError is raised when trying to load a dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean")
```
## Actual results
```
TypeError: dataset_info() got an unexpected keyword argument 'timeout'
```
|
CLOSED
| 2021-10-14T13:10:58
| 2021-10-14T14:39:51
| 2021-10-14T14:39:51
|
https://github.com/huggingface/datasets/issues/3080
|
albertvillanova
| 0
|
[
"bug"
] |
3,076
|
Error when loading a metric
|
## Describe the bug
As reported by @sgugger, after the last release, an exception is thrown when loading a metric.
## Steps to reproduce the bug
```python
from datasets import load_metric
metric = load_metric("squad_v2")
```
## Actual results
```
FileNotFoundError Traceback (most recent call last)
<ipython-input-1-e612a8cab787> in <module>
1 from datasets import load_metric
----> 2 metric = load_metric("squad_v2")
d:\projects\huggingface\datasets\src\datasets\load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, revision, script_version, **metric_init_kwargs)
1336 )
1337 revision = script_version
-> 1338 metric_module = metric_module_factory(
1339 path, revision=revision, download_config=download_config, download_mode=download_mode
1340 ).module_path
d:\projects\huggingface\datasets\src\datasets\load.py in metric_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, **download_kwargs)
1237 if not isinstance(e1, FileNotFoundError):
1238 raise e1 from None
-> 1239 raise FileNotFoundError(
1240 f"Couldn't find a metric script at {relative_to_absolute_path(combined_path)}. "
1241 f"Metric '{path}' doesn't exist on the Hugging Face Hub either."
FileNotFoundError: Couldn't find a metric script at D:\projects\huggingface\datasets\squad_v2\squad_v2.py. Metric 'squad_v2' doesn't exist on the Hugging Face Hub either.
```
|
CLOSED
| 2021-10-14T08:29:27
| 2021-10-14T09:14:55
| 2021-10-14T09:14:55
|
https://github.com/huggingface/datasets/issues/3076
|
albertvillanova
| 0
|
[
"bug"
] |
3,073
|
Import error installing with ppc64le
|
## Describe the bug
Installing the datasets library on a computer running ppc64le seems to cause an issue when importing it.
```
python
Python 3.6.13 | packaged by conda-forge | (default, Sep 23 2021, 07:37:44)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import datasets
Illegal instruction (core dumped)
```
Error when importing
`Illegal instruction (core dumped)`
## Steps to reproduce the bug
I get this error when installing the library using conda. I believe I can't install with pip because pyarrow only provides ppc64le builds on conda-forge.
```
conda create --name transformers_py36_v2 python=3.6
conda activate transformers_py36_v2
conda install datasets
```
## Tracebacks
conda create --name transformers_py36_v2 python=3.6
```
Collecting package metadata (current_repodata.json): done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 4.9.2
latest version: 4.10.3
Please update conda by running
$ conda update -n base -c defaults conda
## Package Plan ##
environment location: /p/home/gerryc/.conda/envs/transformers_py36_v2
added / updated specs:
- python=3.6
The following NEW packages will be INSTALLED:
_libgcc_mutex conda-forge/linux-ppc64le::_libgcc_mutex-0.1-conda_forge
_openmp_mutex conda-forge/linux-ppc64le::_openmp_mutex-4.5-1_gnu
ca-certificates conda-forge/linux-ppc64le::ca-certificates-2021.10.8-h1084571_0
certifi pkgs/main/linux-ppc64le::certifi-2020.12.5-py36h6ffa863_0
ld_impl_linux-ppc~ conda-forge/linux-ppc64le::ld_impl_linux-ppc64le-2.36.1-ha35d02b_2
libffi conda-forge/linux-ppc64le::libffi-3.4.2-h3b9df90_4
libgcc-ng conda-forge/linux-ppc64le::libgcc-ng-11.2.0-h7698a5e_11
libgomp conda-forge/linux-ppc64le::libgomp-11.2.0-h7698a5e_11
libstdcxx-ng conda-forge/linux-ppc64le::libstdcxx-ng-11.2.0-habdf983_11
libzlib conda-forge/linux-ppc64le::libzlib-1.2.11-h339bb43_1013
ncurses conda-forge/linux-ppc64le::ncurses-6.2-hea85c5d_4
openssl conda-forge/linux-ppc64le::openssl-1.1.1l-h4e0d66e_0
pip conda-forge/noarch::pip-21.3-pyhd8ed1ab_0
python conda-forge/linux-ppc64le::python-3.6.13-h57873ef_2_cpython
readline conda-forge/linux-ppc64le::readline-8.1-h5c45dff_0
setuptools pkgs/main/linux-ppc64le::setuptools-58.0.4-py36h6ffa863_0
sqlite conda-forge/linux-ppc64le::sqlite-3.36.0-h4e2196e_2
tk conda-forge/linux-ppc64le::tk-8.6.11-h41c6715_1
wheel conda-forge/noarch::wheel-0.37.0-pyhd8ed1ab_1
xz conda-forge/linux-ppc64le::xz-5.2.5-h6eb9509_1
zlib conda-forge/linux-ppc64le::zlib-1.2.11-h339bb43_1013
Proceed ([y]/n)? y
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
#
# To activate this environment, use
#
# $ conda activate transformers_py36_v2
#
# To deactivate an active environment, use
#
# $ conda deactivate
```
conda activate transformers_py36_v2
conda install datasets
```
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 4.9.2
latest version: 4.10.3
Please update conda by running
$ conda update -n base -c defaults conda
## Package Plan ##
environment location: /p/home/gerryc/.conda/envs/transformers_py36_v2
added / updated specs:
- datasets
The following NEW packages will be INSTALLED:
abseil-cpp conda-forge/linux-ppc64le::abseil-cpp-20210324.2-h3b9df90_0
aiohttp conda-forge/linux-ppc64le::aiohttp-3.7.4.post0-py36hc33305d_0
arrow-cpp conda-forge/linux-ppc64le::arrow-cpp-5.0.0-py36hf9cf308_8_cpu
async-timeout conda-forge/noarch::async-timeout-3.0.1-py_1000
attrs conda-forge/noarch::attrs-21.2.0-pyhd8ed1ab_0
aws-c-cal conda-forge/linux-ppc64le::aws-c-cal-0.5.11-hb3fac3d_0
aws-c-common conda-forge/linux-ppc64le::aws-c-common-0.6.2-h4e0d66e_0
aws-c-event-stream conda-forge/linux-ppc64le::aws-c-event-stream-0.2.7-h76da5f2_13
aws-c-io conda-forge/linux-ppc64le::aws-c-io-0.10.5-hf6a6c7c_0
aws-checksums conda-forge/linux-ppc64le::aws-checksums-0.1.11-hfe76d68_7
aws-sdk-cpp conda-forge/linux-ppc64le::aws-sdk-cpp-1.8.186-h90855e8_3
brotlipy conda-forge/linux-ppc64le::brotlipy-0.7.0-py36hc33305d_1001
bzip2 conda-forge/linux-ppc64le::bzip2-1.0.8-h4e0d66e_4
c-ares conda-forge/linux-ppc64le::c-ares-1.17.2-h4e0d66e_0
cffi conda-forge/linux-ppc64le::cffi-1.14.6-py36h021ab3c_1
chardet conda-forge/linux-ppc64le::chardet-4.0.0-py36h270354c_1
colorama conda-forge/noarch::colorama-0.4.4-pyh9f0ad1d_0
cryptography conda-forge/linux-ppc64le::cryptography-3.4.7-py36hc71b123_0
dataclasses conda-forge/noarch::dataclasses-0.8-pyh787bdff_2
datasets conda-forge/noarch::datasets-1.12.1-pyhd8ed1ab_1
dill conda-forge/noarch::dill-0.3.4-pyhd8ed1ab_0
filelock conda-forge/noarch::filelock-3.3.0-pyhd8ed1ab_0
fsspec conda-forge/noarch::fsspec-2021.10.0-pyhd8ed1ab_0
gflags conda-forge/linux-ppc64le::gflags-2.2.2-hb209c28_1004
glog conda-forge/linux-ppc64le::glog-0.5.0-h4040248_0
grpc-cpp conda-forge/linux-ppc64le::grpc-cpp-1.40.0-h2bf711c_2
huggingface_hub conda-forge/noarch::huggingface_hub-0.0.19-pyhd8ed1ab_0
idna conda-forge/noarch::idna-2.10-pyh9f0ad1d_0
idna_ssl conda-forge/noarch::idna_ssl-1.0.0-0
importlib-metadata conda-forge/linux-ppc64le::importlib-metadata-4.8.1-py36h270354c_0
importlib_metadata conda-forge/noarch::importlib_metadata-4.8.1-hd8ed1ab_0
krb5 conda-forge/linux-ppc64le::krb5-1.19.2-haf43566_2
libblas conda-forge/linux-ppc64le::libblas-3.9.0-11_linuxppc64le_openblas
libbrotlicommon conda-forge/linux-ppc64le::libbrotlicommon-1.0.9-h4e0d66e_5
libbrotlidec conda-forge/linux-ppc64le::libbrotlidec-1.0.9-h4e0d66e_5
libbrotlienc conda-forge/linux-ppc64le::libbrotlienc-1.0.9-h4e0d66e_5
libcblas conda-forge/linux-ppc64le::libcblas-3.9.0-11_linuxppc64le_openblas
libcurl conda-forge/linux-ppc64le::libcurl-7.79.1-he415e40_1
libedit conda-forge/linux-ppc64le::libedit-3.1.20191231-h41a240f_2
libev conda-forge/linux-ppc64le::libev-4.33-h6eb9509_1
libevent conda-forge/linux-ppc64le::libevent-2.1.10-h97db324_4
libgfortran-ng conda-forge/linux-ppc64le::libgfortran-ng-11.2.0-hfdc3801_11
libgfortran5 conda-forge/linux-ppc64le::libgfortran5-11.2.0-he58fbb4_11
liblapack conda-forge/linux-ppc64le::liblapack-3.9.0-11_linuxppc64le_openblas
libnghttp2 conda-forge/linux-ppc64le::libnghttp2-1.43.0-h42039ad_1
libopenblas conda-forge/linux-ppc64le::libopenblas-0.3.17-pthreads_h486567c_1
libprotobuf conda-forge/linux-ppc64le::libprotobuf-3.18.1-h690f14c_0
libssh2 conda-forge/linux-ppc64le::libssh2-1.10.0-ha5a9321_2
libthrift conda-forge/linux-ppc64le::libthrift-0.15.0-h54f692e_1
libutf8proc conda-forge/linux-ppc64le::libutf8proc-2.6.1-h4e0d66e_0
lz4-c conda-forge/linux-ppc64le::lz4-c-1.9.3-h3b9df90_1
multidict conda-forge/linux-ppc64le::multidict-5.2.0-py36hc33305d_0
multiprocess conda-forge/linux-ppc64le::multiprocess-0.70.12.2-py36hc33305d_0
numpy conda-forge/linux-ppc64le::numpy-1.19.5-py36h86665d4_1
orc conda-forge/linux-ppc64le::orc-1.7.0-hae6b4bd_0
packaging conda-forge/noarch::packaging-21.0-pyhd8ed1ab_0
pandas conda-forge/linux-ppc64le::pandas-1.1.5-py36hab1a6e6_0
parquet-cpp conda-forge/noarch::parquet-cpp-1.5.1-2
pyarrow conda-forge/linux-ppc64le::pyarrow-5.0.0-py36h7a46c7e_8_cpu
pycparser conda-forge/noarch::pycparser-2.20-pyh9f0ad1d_2
pyopenssl conda-forge/noarch::pyopenssl-21.0.0-pyhd8ed1ab_0
pyparsing conda-forge/noarch::pyparsing-2.4.7-pyh9f0ad1d_0
pysocks conda-forge/linux-ppc64le::pysocks-1.7.1-py36h270354c_3
python-dateutil conda-forge/noarch::python-dateutil-2.8.2-pyhd8ed1ab_0
python-xxhash conda-forge/linux-ppc64le::python-xxhash-2.0.2-py36hc33305d_0
python_abi conda-forge/linux-ppc64le::python_abi-3.6-2_cp36m
pytz conda-forge/noarch::pytz-2021.3-pyhd8ed1ab_0
pyyaml conda-forge/linux-ppc64le::pyyaml-5.4.1-py36hc33305d_1
re2 conda-forge/linux-ppc64le::re2-2021.09.01-h3b9df90_0
requests conda-forge/noarch::requests-2.25.1-pyhd3deb0d_0
s2n conda-forge/linux-ppc64le::s2n-1.0.10-h97db324_0
six conda-forge/noarch::six-1.16.0-pyh6c4a22f_0
snappy conda-forge/linux-ppc64le::snappy-1.1.8-hb209c28_3
tqdm conda-forge/noarch::tqdm-4.62.3-pyhd8ed1ab_0
typing-extensions conda-forge/noarch::typing-extensions-3.10.0.2-hd8ed1ab_0
typing_extensions conda-forge/noarch::typing_extensions-3.10.0.2-pyha770c72_0
urllib3 conda-forge/noarch::urllib3-1.26.7-pyhd8ed1ab_0
xxhash conda-forge/linux-ppc64le::xxhash-0.8.0-h4e0d66e_3
yaml conda-forge/linux-ppc64le::yaml-0.2.5-h6eb9509_0
yarl conda-forge/linux-ppc64le::yarl-1.6.3-py36hc33305d_2
zipp conda-forge/noarch::zipp-3.6.0-pyhd8ed1ab_0
zstd conda-forge/linux-ppc64le::zstd-1.5.0-h65c4b1a_0
The following packages will be UPDATED:
certifi pkgs/main::certifi-2020.12.5-py36h6ff~ --> conda-forge::certifi-2021.5.30-py36h270354c_0
Proceed ([y]/n)? y
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Red Hat Enterprise Linux 8.2 (Ootpa)
- Python version: 3.6
- PyArrow version: pyarrow - 5.0.0 - py36h7a46c7e_8_cpu - conda-forge
Any help would be appreciated! I've been struggling to install datasets on this machine.
|
CLOSED
| 2021-10-13T21:37:23
| 2021-10-14T16:35:46
| 2021-10-14T16:33:28
|
https://github.com/huggingface/datasets/issues/3073
|
gcervantes8
| 1
|
[
"bug"
] |
3,071
|
Custom plain text dataset, plain json dataset and plain csv dataset are removed from datasets template folder
|
## Adding a Dataset
- **Name:** text, json, csv
- **Description:** I am developing a customized dataset loading script. The problem is mainly that my custom dataset is separated into many files, and the only dataset loading template I could find that handles my circumstance is [https://github.com/huggingface/datasets/blob/1.2.1/datasets/json/json.py](https://github.com/huggingface/datasets/blob/1.2.1/datasets/json/json.py). I'm afraid these templates are too old to use. Could you re-add these three templates to the current master branch?
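As a side note, a hedged sketch of what the packaged loaders already accept for multi-file datasets (file names are placeholders), in case it covers this use case:
```python
from datasets import load_dataset

# The packaged "json"/"csv"/"text" loaders take several files per split.
dataset = load_dataset(
    "json",
    data_files={"train": ["part1.json", "part2.json"], "test": ["test.json"]},
)
```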
|
CLOSED
| 2021-10-13T07:32:10
| 2021-10-13T08:27:04
| 2021-10-13T08:27:03
|
https://github.com/huggingface/datasets/issues/3071
|
zixiliuUSC
| 1
|
[
"dataset request"
] |
3,069
|
CI fails on Windows with FileNotFoundError when setting up s3_base fixture
|
## Describe the bug
After commit 9353fc863d0c99ab0427f83cc5a4f04fcf52f1df, the CI fails on Windows with FileNotFoundError when setting up the s3_base fixture. See: https://app.circleci.com/pipelines/github/huggingface/datasets/8151/workflows/5db8d154-badd-4d3d-b202-ca7a318997a2/jobs/50321
Error summary:
```
ERROR tests/test_arrow_dataset.py::test_dummy_dataset_serialize_s3 - FileNotF...
ERROR tests/test_dataset_dict.py::test_dummy_dataset_serialize_s3 - FileNotFo...
```
Stack trace:
```
______________ ERROR at setup of test_dummy_dataset_serialize_s3 ______________
[gw0] win32 -- Python 3.6.8 C:\tools\miniconda3\python.exe
@pytest.fixture()
def s3_base():
# writable local S3 system
import shlex
import subprocess
# Mocked AWS Credentials for moto.
old_environ = os.environ.copy()
os.environ.update(S3_FAKE_ENV_VARS)
> proc = subprocess.Popen(shlex.split("moto_server s3 -p %s" % s3_port))
tests\s3_fixtures.py:32:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\tools\miniconda3\lib\subprocess.py:729: in __init__
restore_signals, start_new_session)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <subprocess.Popen object at 0x0000012BB8A4B908>
args = 'moto_server s3 -p 5555', executable = None, preexec_fn = None
close_fds = True, pass_fds = (), cwd = None, env = None
startupinfo = <subprocess.STARTUPINFO object at 0x0000012BB8177630>
creationflags = 0, shell = False, p2cread = -1, p2cwrite = -1, c2pread = -1
c2pwrite = -1, errread = -1, errwrite = -1, unused_restore_signals = True
unused_start_new_session = False
def _execute_child(self, args, executable, preexec_fn, close_fds,
pass_fds, cwd, env,
startupinfo, creationflags, shell,
p2cread, p2cwrite,
c2pread, c2pwrite,
errread, errwrite,
unused_restore_signals, unused_start_new_session):
"""Execute program (MS Windows version)"""
assert not pass_fds, "pass_fds not supported on Windows."
if not isinstance(args, str):
args = list2cmdline(args)
# Process startup details
if startupinfo is None:
startupinfo = STARTUPINFO()
if -1 not in (p2cread, c2pwrite, errwrite):
startupinfo.dwFlags |= _winapi.STARTF_USESTDHANDLES
startupinfo.hStdInput = p2cread
startupinfo.hStdOutput = c2pwrite
startupinfo.hStdError = errwrite
if shell:
startupinfo.dwFlags |= _winapi.STARTF_USESHOWWINDOW
startupinfo.wShowWindow = _winapi.SW_HIDE
comspec = os.environ.get("COMSPEC", "cmd.exe")
args = '{} /c "{}"'.format (comspec, args)
# Start the process
try:
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
# no special security
None, None,
int(not close_fds),
creationflags,
env,
os.fspath(cwd) if cwd is not None else None,
> startupinfo)
E FileNotFoundError: [WinError 2] The system cannot find the file specified
C:\tools\miniconda3\lib\subprocess.py:1017: FileNotFoundError
```
|
CLOSED
| 2021-10-13T05:52:26
| 2021-10-13T08:05:49
| 2021-10-13T06:49:48
|
https://github.com/huggingface/datasets/issues/3069
|
albertvillanova
| 0
|
[
"bug"
] |
3,064
|
Make `interleave_datasets` more robust
|
**Is your feature request related to a problem? Please describe.**
Right now there are a few hiccups when using `interleave_datasets`. The interleaved dataset iterates until the smallest dataset exhausts its iterator, so larger datasets may not complete a full epoch of iteration.
It creates new problems for epoch calculation, since there is no way to track how many epochs each dataset in `interleave_datasets` has completed.
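To illustrate the current behaviour, a minimal sketch:
```python
from datasets import Dataset, interleave_datasets

small = Dataset.from_dict({"x": [0, 1, 2]})
large = Dataset.from_dict({"x": [10, 11, 12, 13, 14]})

# Interleaving stops as soon as the smallest dataset is exhausted,
# so the tail of `large` (13, 14) is never seen in this pass.
mixed = interleave_datasets([small, large])
print(mixed["x"])  # [0, 10, 1, 11, 2, 12]
```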
**Describe the solution you'd like**
For the `interleave_datasets` module,
- [ ] Add a boolean argument `--stop-iter` to `interleave_datasets` that lets the dataset either iterate indefinitely or not. That is, it should not raise a `StopIteration` exception when `--stop-iter=False`.
- [ ] An internal list variable `iter_cnt` that tracks how many times (in steps/epochs) each dataset has been iterated at a given point.
- [ ] Add an argument `--max-iter` (list type) that specifies the maximum number of times each dataset may iterate. After one dataset completes its `--max-iter`, the other datasets should continue sampling, and only when all datasets have finished their respective `--max-iter` should `StopIteration` be raised.
Note: I'm new to the `datasets` API. Maybe these features already exist in the library.
Since multitask training is one of the latest trends, I believe this feature would make the `datasets` API more popular.
@lhoestq
|
OPEN
| 2021-10-12T14:34:53
| 2022-07-30T08:47:26
| null |
https://github.com/huggingface/datasets/issues/3064
|
sbmaruf
| 3
|
[
"enhancement"
] |
3,063
|
Windows CI is unable to test streaming properly because of SSL issues
|
In https://github.com/huggingface/datasets/pull/3041 the windows tests were skipped because of SSL issues with moon-staging.huggingface.co:443
The issue appears only on Windows with asyncio. On Linux it works. With requests it works as well. And with the production environment huggingface.co it also works.
To reproduce on Windows:
```python
import fsspec
# use any URL to a file in a dataset repo
url = "https://moon-staging.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/my-dataset-16242824690709/resolve/main/.gitattributes"
fsspec.open(url).open()
```
raises
```python
FileNotFoundError: https://moon-staging.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/my-dataset-16242824690709/resolve/main/.gitattributes
```
because of
```python
aiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host moon-staging.huggingface.co:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1131)')]
```
|
CLOSED
| 2021-10-12T09:33:40
| 2022-08-24T14:59:29
| 2022-08-24T14:59:29
|
https://github.com/huggingface/datasets/issues/3063
|
lhoestq
| 2
|
[
"streaming"
] |
3,061
|
Feature request: add leave=True to dataset.map to enable tqdm nested bars (and whilst we're at it, couldn't we get a way to directly access tqdm underneath?)
|
**A clear and concise description of what you want to happen.**
It would be so nice to be able to nest HuggingFace `Datasets.map()` progress bars in the grander scheme of things, and whilst we're at it, why not other functions too.
**Describe alternatives you've considered**
By the way, is there not a way to directly interact with the underlying tqdm module? Something **kwargs-ish?
**Additional context**
Furthering tqdm integration #2374 and huggingface/transformers#11797, solved by huggingface/transformers#12226, which provided the tqdm description via `desc=`.
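For context, a minimal sketch of what is possible today: only `desc=` is exposed, so the inner bar created by `map` cannot be nested under an outer bar or kept with `leave=True`:
```python
from datasets import Dataset
from tqdm.auto import tqdm

ds = Dataset.from_dict({"text": ["a", "b", "c"]})

for epoch in tqdm(range(3), desc="epochs"):
    # `map` accepts a description, but no other tqdm kwargs.
    ds = ds.map(lambda example: example, desc=f"epoch {epoch} map")
```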
@sgugger @bhavitvyamalik
|
OPEN
| 2021-10-11T20:49:49
| 2021-10-22T09:34:10
| null |
https://github.com/huggingface/datasets/issues/3061
|
BenoitDalFerro
| 2
|
[
"enhancement"
] |
3,060
|
load_dataset('openwebtext') yields "Compressed file ended before the end-of-stream marker was reached"
|
## Describe the bug
When I try `load_dataset('openwebtext')`, I receive a "EOFError: Compressed file ended before the end-of-stream marker was reached" error.
## Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset('openwebtext')
```
## Expected results
I expect the `dataset` variable to be properly constructed.
## Actual results
```
File "/home/rschaef/CoCoSci-Language-Distillation/distillation_v2/ratchet_learning/tasks/base.py", line 37, in create_dataset
dataset_str,
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/load.py", line 1117, in load_dataset
use_auth_token=use_auth_token,
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/builder.py", line 637, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/builder.py", line 704, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/rschaef/.cache/huggingface/modules/datasets_modules/datasets/openwebtext/85b3ae7051d2d72e7c5fdf6dfb462603aaa26e9ed506202bf3a24d261c6c40a1/openwebtext.py", line 61, in _split_generators
dl_dir = dl_manager.download_and_extract(_URL)
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 261, in extract
partial(cached_path, download_config=download_config), path_or_paths, num_proc=num_proc, disable_tqdm=False
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 197, in map_nested
return function(data_struct)
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 316, in cached_path
output_path, force_extract=download_config.force_extract
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/extract.py", line 40, in extract
self.extractor.extract(input_path, output_path, extractor=extractor)
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/extract.py", line 179, in extract
return extractor.extract(input_path, output_path)
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/extract.py", line 53, in extract
tar_file.extractall(output_path)
File "/usr/lib/python3.6/tarfile.py", line 2010, in extractall
numeric_owner=numeric_owner)
File "/usr/lib/python3.6/tarfile.py", line 2052, in extract
numeric_owner=numeric_owner)
File "/usr/lib/python3.6/tarfile.py", line 2122, in _extract_member
self.makefile(tarinfo, targetpath)
File "/usr/lib/python3.6/tarfile.py", line 2171, in makefile
copyfileobj(source, target, tarinfo.size, ReadError, bufsize)
File "/usr/lib/python3.6/tarfile.py", line 249, in copyfileobj
buf = src.read(bufsize)
File "/usr/lib/python3.6/lzma.py", line 200, in read
return self._buffer.read(size)
File "/usr/lib/python3.6/_compression.py", line 68, in readinto
data = self.read(len(byte_view))
File "/usr/lib/python3.6/_compression.py", line 99, in read
raise EOFError("Compressed file ended before the "
python-BaseException
EOFError: Compressed file ended before the end-of-stream marker was reached
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-4.4.0-173-generic-x86_64-with-Ubuntu-16.04-xenial
- Python version: 3.6.10
- PyArrow version: 5.0.0
|
CLOSED
| 2021-10-11T17:05:27
| 2021-10-28T05:52:21
| 2021-10-28T05:52:21
|
https://github.com/huggingface/datasets/issues/3060
|
RylanSchaeffer
| 2
|
[
"bug"
] |
3,058
|
Datasets wikipedia and bookcorpusopen cannot be fetched from the dataloader.
|
## Describe the bug
I have used previous versions of `transformers` and `datasets`, and the `wikipedia` dataset could be used successfully. Recently, I upgraded them to the newest versions and found that errors are raised. I also tried other datasets: `wikitext` works, while `bookcorpusopen` raises the same errors as `wikipedia`.
## Steps to reproduce the bug
Run `run_mlm_no_trainer.py` with the script given at this [link](https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling), changing the dataset from wikitext to wikipedia or bookcorpusopen. BTW, the transformers library is version 4.11.3.
## Expected results
The data batches are fetched from the data loader and training proceeds.
## Actual results
An error occurs the first time a data batch is fetched.
```
Traceback (most recent call last):
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 705, in convert_to_tensors
tensor = as_tensor(value)
ValueError: too many dimensions 'str'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "src/original_run_mlm_no_trainer.py", line 528, in <module>
main()
File "src/original_run_mlm_no_trainer.py", line 488, in main
for step, batch in enumerate(train_dataloader):
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/accelerate/data_loader.py", line 303, in __iter__
for batch in super().__iter__():
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
data = self._next_data()
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 557, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/transformers/data/data_collator.py", line 41, in __call__
return self.torch_call(features)
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/transformers/data/data_collator.py", line 671, in torch_call
batch = self.tokenizer.pad(examples, return_tensors="pt", pad_to_multiple_of=self.pad_to_multiple_of)
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2774, in pad
return BatchEncoding(batch_outputs, tensor_type=return_tensors)
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 210, in __init__
self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 722, in convert_to_tensors
"Unable to create tensor, you should probably activate truncation and/or padding "
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Linux-5.8.0-59-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.6
- PyArrow version: 5.0.0
|
CLOSED
| 2021-10-11T11:54:59
| 2022-01-19T14:03:49
| 2022-01-19T14:03:49
|
https://github.com/huggingface/datasets/issues/3058
|
hobbitlzy
| 2
|
[
"bug"
] |
3,057
|
Error in per class precision computation
|
## Describe the bug
When trying to get the per-class precision values by providing `average=None`, the following error is thrown: `ValueError: can only convert an array of size 1 to a Python scalar`
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
precision_metric = load_metric("precision")
predictions = [0, 2, 1, 0, 0, 1]
references = [0, 1, 2, 0, 1, 2]
results = precision_metric.compute(predictions=predictions, references=references, average=None)
```
## Expected results
` {'precision': array([0.66666667, 0. , 0. ])}`
as per https://github.com/huggingface/datasets/blob/master/metrics/precision/precision.py
## Actual results
```
output = self._compute(predictions=predictions, references=references, **kwargs)
File "~/.cache/huggingface/modules/datasets_modules/metrics/precision/94709a71c6fe37171ef49d3466fec24dee9a79846c9f176dff66a649e9811690/precision.py", line 110, in _compute
sample_weight=sample_weight,
ValueError: can only convert an array of size 1 to a Python scalar
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: linux
- Python version: 3.6.9
- PyArrow version: 5.0.0
|
CLOSED
| 2021-10-11T10:05:19
| 2021-10-11T10:17:44
| 2021-10-11T10:16:16
|
https://github.com/huggingface/datasets/issues/3057
|
tidhamecha2
| 1
|
[
"bug"
] |
3,055
|
CI test suite fails after meteor metric update
|
## Describe the bug
CI test suite fails: https://app.circleci.com/pipelines/github/huggingface/datasets/8110/workflows/f059ba43-9154-4632-bebb-82318447ddc9/jobs/50010
Stack trace:
```
___________________ LocalMetricTest.test_load_metric_meteor ____________________
[gw1] linux -- Python 3.6.15 /home/circleci/.pyenv/versions/3.6.15/bin/python3.6
self = <tests.test_metric_common.LocalMetricTest testMethod=test_load_metric_meteor>
metric_name = 'meteor'
def test_load_metric(self, metric_name):
doctest.ELLIPSIS_MARKER = "[...]"
metric_module = importlib.import_module(datasets.load.prepare_module(os.path.join("metrics", metric_name))[0])
metric = datasets.load.import_main_class(metric_module.__name__, dataset=False)
# check parameters
parameters = inspect.signature(metric._compute).parameters
self.assertTrue("predictions" in parameters)
self.assertTrue("references" in parameters)
self.assertTrue(all([p.kind != p.VAR_KEYWORD for p in parameters.values()])) # no **kwargs
# run doctest
with self.patch_intensive_calls(metric_name, metric_module.__name__):
with self.use_local_metrics():
> results = doctest.testmod(metric_module, verbose=True, raise_on_error=True)
tests/test_metric_common.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1951: in testmod
runner.run(test)
../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1839: in run
r = DocTestRunner.run(self, test, compileflags, out, False)
../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1476: in run
return self.__run(test, compileflags, out)
../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1382: in __run
exception)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <doctest.DebugRunner object at 0x7f4c26bd3da0>
out = <built-in method write of _io.TextIOWrapper object at 0x7f51a21852d0>
test = <DocTest datasets_modules.datasets.meteor.6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7.meteor.Mete...ets_modules/datasets/meteor/6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7/meteor.py:87 (5 examples)>
example = <doctest.Example object at 0x7f4c26bd3eb8>
exc_info = (<class 'TypeError'>, TypeError('"hypothesis" expects pre-tokenized hypothesis (Iterable[str]): It is a guide to action which ensures that the military always obeys the commands of the party',), <traceback object at 0x7f4cd01afec8>)
def report_unexpected_exception(self, out, test, example, exc_info):
> raise UnexpectedException(test, example, exc_info)
E doctest.UnexpectedException: <DocTest datasets_modules.datasets.meteor.6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7.meteor.Meteor from /tmp/pytest-of-circleci/pytest-0/popen-gw1/cache/modules/datasets_modules/datasets/meteor/6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7/meteor.py:87 (5 examples)>
../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1845: UnexpectedException
```
|
CLOSED
| 2021-10-11T06:37:12
| 2021-10-11T07:30:31
| 2021-10-11T07:30:31
|
https://github.com/huggingface/datasets/issues/3055
|
albertvillanova
| 0
|
[
"bug"
] |
3,053
|
load_dataset('the_pile_openwebtext2') produces ArrowInvalid, value too large to fit in C integer type
|
## Describe the bug
When loading `the_pile_openwebtext2`, we get the error `pyarrow.lib.ArrowInvalid: Value 2111 too large to fit in C integer type`
## Steps to reproduce the bug
```python
import datasets
ds = datasets.load_dataset('the_pile_openwebtext2')
```
## Expected results
Should download the dataset, convert it to an arrow file, and return a working Dataset object.
## Actual results
The download works, but conversion to the arrow file fails as follows:
```
>>> ds = datasets.load_dataset('the_pile_openwebtext2')
Downloading and preparing dataset openwebtext2/plain_text (download: 27.33 GiB, generated: 63.86 GiB
, post-processed: Unknown size, total: 91.19 GiB) to /home/davidbau/.cache/huggingface/datasets/open
webtext2/plain_text/1.0.0/c48ec73ba3483bac673463f48f67e9a4fd8cb49a9d6ec4fb957f0b424b97cf25...
Traceback (most recent call last):
File "/home/davidbau/.conda/envs/tenv/lib/python3.9/site-packages/datasets/builder.py", line 1133,
in _prepare_split
writer.write(example, key)
File "/home/davidbau/.conda/envs/tenv/lib/python3.9/site-packages/datasets/arrow_writer.py", line
366, in write
self.write_examples_on_file()
File "/home/davidbau/.conda/envs/tenv/lib/python3.9/site-packages/datasets/arrow_writer.py", line
311, in write_examples_on_file
pa_array = pa.array(typed_sequence)
File "pyarrow/array.pxi", line 222, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home/davidbau/.conda/envs/tenv/lib/python3.9/site-packages/datasets/arrow_writer.py", line
115, in __arrow_array__
out = pa.array(cast_to_python_objects(self.data, only_1d_for_numpy=True), type=type)
File "pyarrow/array.pxi", line 305, in pyarrow.lib.array
File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Value 2111 too large to fit in C integer type
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Ubuntu 20.04
- Python version: python 3.9
- PyArrow version: 3.0.0
|
CLOSED
| 2021-10-10T19:55:21
| 2023-02-24T14:02:20
| 2023-02-24T14:02:20
|
https://github.com/huggingface/datasets/issues/3053
|
davidbau
| 5
|
[
"bug"
] |
3,052
|
load_dataset cannot download the data and hangs on forever if cache dir specified
|
## Describe the bug
After updating datasets, code that had run just fine for ages began to fail. Specifying _datasets.load_dataset_'s optional _cache_dir_ argument on a Windows 10 machine results in the data download hanging forever. The same call without cache_dir works just fine. Surprisingly, the exact same code runs perfectly fine on a Linux docker instance running in the cloud.
Unfortunately, I also updated Windows at the same time, and I can't remember which version of datasets was running in my conda environment prior to the update; otherwise I would have tried both to check this out. :(
## Steps to reproduce the bug
```python
from datasets import load_dataset

cache_dir = 'c:/data/datasets'
dataset = load_dataset('wikipedia', '20200501.en', split='train', cache_dir=cache_dir)
```
Note that the exact same code without specifying the _cache_dir_ argument works perfectly fine:
```python
cache_dir = 'c:/data/datasets'
dataset = load_dataset('wikipedia', '20200501.en', split='train')
```
## Expected results
Downloads the dataset and cache is handled in the _cache_dir_ directory
## Actual results
Data download keeps hanging on forever, **NO TRACEBACK**!
## Environment info
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.8.11
- PyArrow version: 3.0.0
|
CLOSED
| 2021-10-10T10:31:36
| 2021-10-11T10:57:09
| 2021-10-11T10:56:36
|
https://github.com/huggingface/datasets/issues/3052
|
BenoitDalFerro
| 1
|
[
"bug"
] |
3,051
|
Non-Matching Checksum Error with crd3 dataset
|
## Describe the bug
When I try loading the crd3 dataset (https://huggingface.co/datasets/crd3), an error is thrown.
## Steps to reproduce the bug
```python
dataset = load_dataset('crd3', split='train')
```
## Expected results
I expect no error to be thrown.
## Actual results
A non-matching checksum error is thrown.
```
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/RevanthRameshkumar/CRD3/archive/master.zip']
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-4.4.0-173-generic-x86_64-with-Ubuntu-16.04-xenial
- Python version: 3.6.10
- PyArrow version: 5.0.0
|
CLOSED
| 2021-10-10T01:32:43
| 2022-03-15T15:54:26
| 2022-03-15T15:54:26
|
https://github.com/huggingface/datasets/issues/3051
|
RylanSchaeffer
| 2
|
[
"bug"
] |
3,049
|
TimeoutError during streaming
|
## Describe the bug
I got a TimeoutError after streaming for about 10h.
## Steps to reproduce the bug
The code is very long, but we could test streaming data indefinitely, though the error may take a while to appear.
## Expected results
This error was not expected by the code, which handles only `ClientError` but not `TimeoutError`.
See [this line](https://github.com/huggingface/datasets/blob/2814fbd0e18150be409f10804670e98d9ecb87d4/src/datasets/utils/streaming_download_manager.py#L129).
Based on the traceback, it looks like the `TimeoutError` was not captured.
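A hedged sketch of what a broader retry guard could look like (names and structure are mine, not the actual `datasets` code):
```python
import asyncio

import aiohttp
from fsspec.exceptions import FSTimeoutError

# Retry on timeouts as well as client errors when reading a streamed file.
RETRIABLE_ERRORS = (aiohttp.ClientError, asyncio.TimeoutError, FSTimeoutError)

def read_with_retries(read, *args, max_retries=3, **kwargs):
    for attempt in range(1, max_retries + 1):
        try:
            return read(*args, **kwargs)
        except RETRIABLE_ERRORS as err:
            if attempt == max_retries:
                raise
            print(f"Got {type(err).__name__}, retrying ({attempt}/{max_retries})")
```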
## Actual results
```
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/asyn.py", line 25, in _runner
result[0] = await coro
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/implementations/http.py", line 614, in async_fetch_range
out = await r.read()
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/client_reqrep.py", line 1032, in read
self._body = await self.content.read()
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/streams.py", line 370, in read
block = await self.readany()
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/streams.py", line 392, in readany
await self._wait("readany")
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/streams.py", line 306, in _wait
await waiter
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/helpers.py", line 656, in __exit__
raise asyncio.TimeoutError from None
asyncio.exceptions.TimeoutError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py", line 1027, in <module>
main()
File "/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py", line 991, in main
for batch in tqdm(
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/tqdm/std.py", line 1180, in __iter__
for obj in iterable:
File "/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py", line 376, in data_loader_streaming
for item in dataset:
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 341, in __iter__
for key, example in self._iter():
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 338, in _iter
yield from ex_iterable
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 179, in __iter__
key_examples_list = [(key, example)] + [
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 179, in <listcomp>
key_examples_list = [(key, example)] + [
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 176, in __iter__
for key, example in iterator:
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 225, in __iter__
for x in self.ex_iterable:
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 99, in __iter__
for key, example in self.generate_examples_fn(**kwargs_with_shuffled_shards):
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 287, in wrapper
for key, table in generate_tables_fn(**kwargs):
File "/home/koush/datasets/src/datasets/packaged_modules/json/json.py", line 107, in _generate_tables
batch = f.read(self.config.chunksize)
File "/home/koush/datasets/src/datasets/utils/streaming_download_manager.py", line 126, in read_with_retries
out = read(*args, **kwargs)
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/implementations/http.py", line 572, in read
return super().read(length)
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/spec.py", line 1533, in read
out = self.cache._fetch(self.loc, self.loc + length)
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/caching.py", line 390, in _fetch
self.cache = self.fetcher(start, bend)
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/asyn.py", line 91, in wrapper
return sync(self.loop, func, *args, **kwargs)
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/asyn.py", line 69, in sync
raise FSTimeoutError from return_result
fsspec.exceptions.FSTimeoutError
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.2.dev0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 5.0.0
|
CLOSED
| 2021-10-09T18:06:51
| 2021-10-11T09:35:38
| 2021-10-11T09:35:38
|
https://github.com/huggingface/datasets/issues/3049
|
borisdayma
| 0
|
[
"bug"
] |
3,048
|
Identify which shard data belongs to
|
**Is your feature request related to a problem? Please describe.**
I'm training on a large dataset made of multiple sub-datasets.
During training I can observe some jumps in loss which may correspond to different shards.

My suspicion is that either:
* some of the sub-datasets are harder for the model than others
* some of the sub-datasets are not formatted properly
I'd like to identify which shards correspond to those jumps.
**Describe the solution you'd like**
It would be nice to have a key associated with each data sample or data batch containing details on where the data comes from (shard idx + item idx within the shard).
This should be supported both in local and streaming mode.
**Describe alternatives you've considered**
A fix would be for me to add the details (shard id, sample id) myself as part of each data sample.
The inconvenience is that it requires users to process/reupload every dataset when they need this feature.
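A minimal sketch of that alternative, assuming the sub-datasets live in separate files (file names below are hypothetical):
```python
from datasets import concatenate_datasets, load_dataset

shard_files = ["shard-000.jsonl", "shard-001.jsonl"]  # hypothetical shard files
tagged = []
for shard_idx, path in enumerate(shard_files):
    ds = load_dataset("json", data_files=path, split="train")
    # tag every sample with its shard index and its position within the shard
    ds = ds.map(lambda ex, i, s=shard_idx: {"shard_idx": s, "item_idx": i}, with_indices=True)
    tagged.append(ds)
dataset = concatenate_datasets(tagged)
```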
|
OPEN
| 2021-10-09T17:46:35
| 2021-10-09T20:24:17
| null |
https://github.com/huggingface/datasets/issues/3048
|
borisdayma
| 1
|
[
"enhancement"
] |
3,047
|
Loading from cache a dataset for LM built from a text classification dataset sometimes errors
|
## Describe the bug
Yes, I know, that description sucks. The problem arises in the course, when we build a masked language modeling dataset using the IMDB dataset. To reproduce (or try to, since it's a bit fickle):
Create a dataset for masked language modeling from the IMDB dataset.
```python
from datasets import load_dataset
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
imdb_dataset = load_dataset("imdb", split="train")
def tokenize_function(examples):
return tokenizer(examples["text"])
tokenized_dataset = imdb_dataset.map(
tokenize_function, batched=True, remove_columns=["text", "label"]
)
chunk_size = 128
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
# Compute length of concatenated texts
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the last chunk if it's smaller than chunk_size
total_length = (total_length // chunk_size) * chunk_size
# Split by chunks of max_len.
result = {
k: [t[i : i + chunk_size] for i in range(0, total_length, chunk_size)]
for k, t in concatenated_examples.items()
}
# Create a new labels column
result["labels"] = result["input_ids"].copy()
return result
lm_dataset = tokenized_dataset.map(group_texts, batched=True)
```
Until now, all is well. The problem comes when you re-execute that code, more specifically:
```python
tokenized_dataset = imdb_dataset.map(
tokenize_function, batched=True, remove_columns=["text", "label"]
)
lm_dataset = tokenized_dataset.map(group_texts, batched=True)
```
Try several times if the bug doesn't appear instantly, or run each line one at a time, ideally in a notebook/Colab, and at some point you should get:
```python
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-40-357a56ee3d53> in <module>
----> 1 lm_dataset = tokenized_dataset.map(group_texts, batched=True)
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1947 new_fingerprint=new_fingerprint,
1948 disable_tqdm=disable_tqdm,
-> 1949 desc=desc,
1950 )
1951 else:
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
424 }
425 # apply actual function
--> 426 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
427 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
428 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
404 # Call actual function
405
--> 406 out = func(self, *args, **kwargs)
407
408 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2138 if os.path.exists(cache_file_name) and load_from_cache_file:
2139 logger.warning("Loading cached processed dataset at %s", cache_file_name)
-> 2140 info = self.info.copy()
2141 info.features = features
2142 return Dataset.from_file(cache_file_name, info=info, split=self.split)
~/git/datasets/src/datasets/info.py in copy(self)
278
279 def copy(self) -> "DatasetInfo":
--> 280 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
281
282
~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes)
~/git/datasets/src/datasets/info.py in __post_init__(self)
177 for idx, template in enumerate(self.task_templates):
178 if isinstance(template, TextClassification):
--> 179 labels = self.features[template.label_column].names
180 self.task_templates[idx] = TextClassification(
181 text_column=template.text_column, label_column=template.label_column, labels=labels
KeyError: 'label'
```
It seems that when loading the cache, the dataset tries to access some kind of text classification template (which I imagine comes from the original dataset) and to look at a key that has since been removed.
|
CLOSED
| 2021-10-08T18:23:11
| 2021-11-03T17:13:08
| 2021-11-03T17:13:08
|
https://github.com/huggingface/datasets/issues/3047
|
sgugger
| 1
|
[
"bug"
] |
3,044
|
Inconsistent caching behaviour when using `Dataset.map()` with a `new_fingerprint` and `num_proc>1`
|
## Describe the bug
Caching does not work when using `Dataset.map()` with:
1. a function that cannot be deterministically fingerprinted
2. `num_proc>1`
3. using a custom fingerprint set with the argument `new_fingerprint`.
This means that the dataset will be mapped with the function for each and every call, which does not happen if `num_proc==1`. In that case (`num_proc==1`) subsequent calls will load the transformed dataset from the cache, which is the expected behaviour. The example can easily be translated into a unit test.
I have a fix and will submit a pull request asap.
## Steps to reproduce the bug
```python
import hashlib
import json
import os
from typing import Dict, Any
import numpy as np
from datasets import load_dataset, Dataset
Batch = Dict[str, Any]
filename = 'example.json'
class Transformation():
"""A transformation with a random state that cannot be fingerprinted"""
def __init__(self):
self.state = np.random.random()
def __call__(self, batch: Batch) -> Batch:
batch['x'] = [np.random.random() for _ in batch['x']]
return batch
def generate_dataset():
"""generate a simple dataset"""
rgn = np.random.RandomState(24)
data = {
'data': [{'x': float(y), 'y': -float(y)} for y in
rgn.random(size=(1000,))]}
if not os.path.exists(filename):
with open(filename, 'w') as f:
f.write(json.dumps(data))
return filename
def process_dataset_with_cache(num_proc=1, remove_cache=False,
cache_expected_to_exist=False):
# load the generated dataset
dset: Dataset = next(
iter(load_dataset('json', data_files=filename, field='data').values()))
new_fingerprint = hashlib.md5("static-id".encode("utf8")).hexdigest()
# get the expected cached path
cache_path = dset._get_cache_file_path(new_fingerprint)
if remove_cache and os.path.exists(cache_path):
os.remove(cache_path)
# check that the cache exists, and print a statement
# if was actually expected to exist
cache_exist = os.path.exists(cache_path)
print(f"> cache file exists={cache_exist}")
if cache_expected_to_exist and not cache_exist:
print("=== Cache does not exist! ====")
# apply the transformation with the new fingerprint
dset = dset.map(
Transformation(),
batched=True,
num_proc=num_proc,
new_fingerprint=new_fingerprint,
desc="mapping dataset with transformation")
generate_dataset()
for num_proc in [1, 2]:
print(f"# num_proc={num_proc}, first pass")
# first pass to generate the cache (always create a new cache here)
process_dataset_with_cache(remove_cache=True,
num_proc=num_proc,
cache_expected_to_exist=False)
print(f"# num_proc={num_proc}, second pass")
# second pass, expects the cache to exist
process_dataset_with_cache(remove_cache=False,
num_proc=num_proc,
cache_expected_to_exist=True)
os.remove(filename)
```
## Expected results
In the above python example, with `num_proc=2`, the **cache file should exist in the second call** of `process_dataset_with_cache` ("=== Cache does not exist! ====" should not be printed).
When the cache is successfully created, `map()` is called only one time.
## Actual results
In the above python example, with `num_proc=2`, the **cache does not exist in the second call** of `process_dataset_with_cache` (this results in printing "=== Cache does not exist! ====").
Because the cache doesn't exist, the `map()` method is executed a second time and the dataset is not loaded from the cache.
## Environment info
- `datasets` version: 1.12.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.8
- PyArrow version: 5.0.0
|
OPEN
| 2021-10-08T09:07:10
| 2025-03-04T07:16:00
| null |
https://github.com/huggingface/datasets/issues/3044
|
vlievin
| 4
|
[
"bug"
] |
3,043
|
Add PASS dataset
|
## Adding a Dataset
- **Name:** PASS
- **Description:** An ImageNet replacement for self-supervised pretraining without humans
- **Data:** https://www.robots.ox.ac.uk/~vgg/research/pass/ https://github.com/yukimasano/PASS
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
CLOSED
| 2021-10-07T16:43:43
| 2022-01-20T16:50:47
| 2022-01-20T16:50:47
|
https://github.com/huggingface/datasets/issues/3043
|
osanseviero
| 0
|
[
"dataset request",
"vision"
] |
3,040
|
[save_to_disk] Using `select()` followed by `save_to_disk` saves complete dataset making it hard to create dummy dataset
|
## Describe the bug
When keeping only a dummy-sized subset of a dataset (say the first 100 samples) and then saving it to disk in order to upload it to the hub for easy demo/use, not just the small subset is saved but the whole dataset together with an indices file. The problem with this is that the saved dataset is still very big.
## Steps to reproduce the bug
E.g. run the following:
```python
from datasets import load_dataset
nlp = load_dataset("glue", "mnli", split="train")
nlp.save_to_disk("full")
nlp = nlp.select(range(100))
nlp.save_to_disk("dummy")
```
Now one can see that both `"dummy"` and `"full"` have the same size. This shouldn't be the case IMO.
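A possible workaround sketch, assuming `flatten_indices` materializes the selection into a new table before writing:
```python
nlp = nlp.flatten_indices()  # materialize the 100 selected rows into a new table
nlp.save_to_disk("dummy")
```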
## Expected results
IMO `"dummy"` should be much smaller so that one can easily play around with the dataset on the hub.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.2.dev0
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 5.0.0
|
CLOSED
| 2021-10-06T17:08:47
| 2021-11-02T15:41:08
| 2021-11-02T15:41:08
|
https://github.com/huggingface/datasets/issues/3040
|
patrickvonplaten
| 5
|
[
"bug"
] |
3,036
|
Protect master branch to force contributions via Pull Requests
|
In order to have a clearer Git history in the master branch, I propose to protect it so that all contributions must be done through a Pull Request and no direct commits to master are allowed.
- The Pull Request allows to give context, discuss any potential issues and improve the quality of the contribution
- The Pull Request will eventually be squashed and merged into master with a single commit that links to the Pull Request page (with all the context/discussions)
Note that we already implemented a protection in the master branch to avoid *merge* commits and ensure a linear history. This proposal goes one step further by avoiding all kinds of direct commits and forcing contributions **only** through Pull Requests.
Please note that we can temporarily deactivate this protection if we need to make a direct commit, e.g. at each new version release.
The only way GitHub allows this kind of protection is by requiring a minimal number (at least one) of approvals of the Pull Request. The inconvenience is that the PR creator cannot approve their own PR: another person must approve it before it can be merged into master. To circumvent this, we could temporarily disable this protection in the master branch when an urgent commit is needed (e.g. for a hotfix) and there is no other person available at that time to approve the PR.
|
CLOSED
| 2021-10-06T07:34:17
| 2021-10-07T06:51:47
| 2021-10-07T06:49:52
|
https://github.com/huggingface/datasets/issues/3036
|
albertvillanova
| 3
|
[
"enhancement"
] |
3,035
|
`load_dataset` does not work with uploaded arrow file
|
## Describe the bug
I've preprocessed and uploaded a dataset here: https://huggingface.co/datasets/ami-wav2vec2/ami_headset_single_preprocessed . The dataset is in `.arrow` format.
The dataset can correctly be loaded when doing:
```bash
git lfs install
git clone https://huggingface.co/datasets/ami-wav2vec2/ami_headset_single_preprocessed
```
followed by
```python
from datasets import load_from_disk
ds = load_from_disk("./ami_headset_single_preprocessed")
```
However when I try to directly download the dataset as follows:
```python
from datasets import load_dataset
ds = load_dataset("ami-wav2vec2/ami_headset_single_preprocessed")
```
the following error occurs:
```bash
/usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)
1115 ignore_verifications=ignore_verifications,
1116 try_from_hf_gcs=try_from_hf_gcs,
-> 1117 use_auth_token=use_auth_token,
1118 )
1119
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
635 if not downloaded_from_gcs:
636 self._download_and_prepare(
--> 637 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
638 )
639 # Sync info
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
724 try:
725 # Prepare split will record examples associated to the split
--> 726 self._prepare_split(split_generator, **prepare_split_kwargs)
727 except OSError as e:
728 raise OSError(
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split(self, split_generator)
1186 generator, unit=" tables", leave=False, disable=bool(logging.get_verbosity() == logging.NOTSET)
1187 ):
-> 1188 writer.write_table(table)
1189 num_examples, num_bytes = writer.finalize()
1190
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in write_table(self, pa_table, writer_batch_size)
424 # reorder the arrays if necessary + cast to self._schema
425 # we can't simply use .cast here because we may need to change the order of the columns
--> 426 pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
427 batches: List[pa.RecordBatch] = pa_table.to_batches(max_chunksize=writer_batch_size)
428 self._num_bytes += sum(batch.nbytes for batch in batches)
/usr/local/lib/python3.7/dist-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_arrays()
/usr/local/lib/python3.7/dist-packages/pyarrow/table.pxi in pyarrow.lib._sanitize_arrays()
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.asarray()
/usr/local/lib/python3.7/dist-packages/pyarrow/table.pxi in pyarrow.lib.ChunkedArray.cast()
/usr/local/lib/python3.7/dist-packages/pyarrow/compute.py in cast(arr, target_type, safe)
279 else:
280 options = CastOptions.unsafe(target_type)
--> 281 return call_function("cast", [arr], options)
282
283
/usr/local/lib/python3.7/dist-packages/pyarrow/_compute.pyx in pyarrow._compute.call_function()
/usr/local/lib/python3.7/dist-packages/pyarrow/_compute.pyx in pyarrow._compute.Function.call()
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowNotImplementedError: Unsupported cast from struct<train: struct<name: string, num_bytes: int64, num_examples: int64, dataset_name: string>, validation: struct<name: string, num_bytes: int64, num_examples: int64, dataset_name: string>, test: struct<name: string, num_bytes: int64, num_examples: int64, dataset_name: string>> to list using function cast_list
```
## Expected results
The dataset should be correctly loaded with `load_dataset` IMO.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.2.dev0
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 5.0.0
|
OPEN
| 2021-10-05T20:15:10
| 2021-10-06T17:01:37
| null |
https://github.com/huggingface/datasets/issues/3035
|
patrickvonplaten
| 2
|
[
"enhancement"
] |
3,034
|
Errors loading dataset using fs = a gcsfs.GCSFileSystem
|
## Describe the bug
Cannot load dataset using a `gcsfs.GCSFileSystem`. I'm not sure if this should be a bug in `gcsfs` or here...
Basically what seems to be happening is that since datasets saves datasets as folders and folders aren't "real objects" in gcs, gcsfs raises a 404 error. There are workarounds if you use gcsfs directly to download the file, but as is I can't get `load_from_disk` to work.
## Steps to reproduce the bug
```python
from datasets import load_dataset
# load some dataset
dataset = load_dataset("squad", split="train")
# save it to gcs
import gcsfs
fs = gcsfs.GCSFileSystem(project="my-gs-project")
dataset.save_to_disk("gs://my-bucket/squad", fs=fs)
# try to load it from gcs
from datasets import load_from_disk
dataset2 = load_from_disk("my-bucket/squad", fs=fs)
```
## Expected results
`dataset2` would be a copy of `dataset` but loaded from my bucket.
## Actual results
Long traceback but essentially it's a 404 error from gcsfs saying the object `my-bucket/squad` doesn't exist when this is called:
https://github.com/huggingface/datasets/blob/9c81b7d2e6d9feae69a084a3abda265a4ca07fb5/src/datasets/arrow_dataset.py#L977
This is because there is no actual object called `my-bucket/squad`, there are objects called `my-bucket/squad/dataset.arrow`, etc.
Note that *this* works fine, since it's explicitly saying "download all the objects with this prefix":
```python
fs.download(src_dataset_path + "/*", dataset_path.as_posix(), recursive=True)
```
For example, I can do a workaround this way:
```python
import tempfile
with tempfile.TemporaryDirectory() as temppath:
fs.download("gs://my-bucket/squad/*", temppath)
dataset2 = load_from_disk(temppath)
```
It's unclear to me if it's `gcsfs`'s responsibility to say "hey, that's a folder, not a file, I should try to get the objects inside of it, not the object itself", or if that's `datasets`'s responsibility... I'm leaning towards the latter, since you never load a dataset from a single file using this function/method, only from a dataset folder?
Another minor thing that maybe should be rolled into this bug...
https://github.com/huggingface/datasets/blob/9c81b7d2e6d9feae69a084a3abda265a4ca07fb5/src/datasets/arrow_dataset.py#L968
These fail if you pass in a `gs://` path, e.g.
```python
dataset2 = load_from_disk("gs://my-bucket/squad", fs=fs)
```
Because at this point, `dataset_info_path` is `gs:/my-bucket/squad/dataset_info.json`, gcsfs throws a:
```
Invalid bucket name: 'gs:'
```
error
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: macOS Big Sur 11.6
- Python version: 3.7.12
- PyArrow version: 5.0.0
|
OPEN
| 2021-10-05T20:07:08
| 2021-10-05T20:26:39
| null |
https://github.com/huggingface/datasets/issues/3034
|
dconatha
| 0
|
[
"bug"
] |
3,032
|
Error when loading private dataset with "data_files" arg
|
## Describe the bug
Private datasets with no loading script can't be loaded using `data_files` parameter.
## Steps to reproduce the bug
```python
from datasets import load_dataset
data_files = {"train": "**/train/*/*.jsonl", "valid": "**/valid/*/*.jsonl"}
dataset = load_dataset('dalle-mini/encoded', data_files=data_files, use_auth_token=True, streaming=True)
```
Same error happens in non-streaming mode.
## Expected results
Files should be loaded (whether in streaming or not).
## Actual results
Error:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, return_associated_base_path, data_files, **download_kwargs)
539 try:
--> 540 local_path = cached_path(file_path, download_config=download_config)
541 except FileNotFoundError:
8 frames
FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/dalle-mini/encoded/resolve/main/encoded.py
During handling of the above exception, another exception occurred:
HTTPError Traceback (most recent call last)
HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/datasets/dalle-mini/encoded?full=true
During handling of the above exception, another exception occurred:
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, return_associated_base_path, data_files, **download_kwargs)
547 except Exception:
548 raise FileNotFoundError(
--> 549 f"Couldn't find a directory or a {resource_type} named '{path}'. "
550 f"It doesn't exist locally at {expected_dir_for_combined_path_abs} or remotely on {hf_api.endpoint}/datasets"
551 )
FileNotFoundError: Couldn't find a directory or a dataset named 'dalle-mini/encoded'. It doesn't exist locally at /content/dalle-mini/encoded or remotely on https://huggingface.co/datasets
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 3.0.0
@lhoestq
|
CLOSED
| 2021-10-05T15:46:27
| 2021-10-12T15:26:22
| 2021-10-12T15:25:46
|
https://github.com/huggingface/datasets/issues/3032
|
borisdayma
| 1
|
[
"bug"
] |
3,027
|
Resolve data_files by split name
|
This issue is about discussing the default behavior when someone loads a dataset that consists in data files. For example:
```python
load_dataset("lhoestq/demo1")
```
should return two splits "train" and "test" since the dataset repository is like
```
data/
βββ train.csv
βββ test.csv
```
Currently it returns only one split, "train", which contains the data of both files.
I started playing with this idea on this branch btw: `resolve-data_files-by-split-name`
Basically the idea is that if you named your data files after split names, then the default pattern is
```python
{
"train": ["*train*"],
"test": ["*test*"],
"validation": ["*dev*", "valid"],
}
```
otherwise it's
```python
{
"train": ["*"]
}
```
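For comparison, a sketch of how the splits can already be selected explicitly, assuming the repository layout shown above:
```python
from datasets import load_dataset

dataset = load_dataset(
    "lhoestq/demo1",
    data_files={"train": "data/train.csv", "test": "data/test.csv"},
)
```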
Let me know what you think !
cc @albertvillanova @LysandreJik @vblagoje
|
CLOSED
| 2021-10-05T10:24:36
| 2021-11-05T17:49:58
| 2021-11-05T17:49:57
|
https://github.com/huggingface/datasets/issues/3027
|
lhoestq
| 3
|
[] |
3,024
|
Windows test suite fails
|
## Describe the bug
There is an error during installation of tests dependencies for Windows: https://app.circleci.com/pipelines/github/huggingface/datasets/7981/workflows/9b6a0114-2b8e-4069-94e5-e844dbbdba4e/jobs/49206
```
ERROR: Cannot uninstall 'ruamel-yaml'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
```
|
CLOSED
| 2021-10-05T08:46:46
| 2021-10-05T09:58:27
| 2021-10-05T09:58:27
|
https://github.com/huggingface/datasets/issues/3024
|
albertvillanova
| 0
|
[
"bug"
] |
3,018
|
Support multiple zipped CSV data files
|
As requested by @lewtun, support loading multiple zipped CSV data files.
```python
from datasets import load_dataset
url = "https://domain.org/filename.zip"
data_files = {"train": "train_filename.csv", "test": "test_filename.csv"}
dataset = load_dataset("csv", data_dir=url, data_files=data_files)
```
|
OPEN
| 2021-10-04T15:16:59
| 2021-10-05T14:32:57
| null |
https://github.com/huggingface/datasets/issues/3018
|
albertvillanova
| 3
|
[
"enhancement"
] |
3,013
|
Improve `get_dataset_infos`?
|
Using the dedicated function `get_dataset_infos` on a dataset that has no dataset_infos.json file returns an empty info:
```
>>> from datasets import get_dataset_infos
>>> get_dataset_infos('wit')
{}
```
While it's totally possible to get it (regenerate it) with:
```
>>> from datasets import load_dataset_builder
>>> builder = load_dataset_builder('wit')
>>> builder.info
DatasetInfo(description='Wikipedia-based Image Text (WIT) Dataset is a large multimodal multilingual dataset. WIT is composed of a curated set\n of 37.6 million entity rich image-text examples with 11.5 million unique images across 108 Wikipedia languages. Its\n size enables WIT to be used as a pretraining dataset for multimodal machine learning models.\n', citation='@article{srinivasan2021wit,\n title={WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning},\n author={Srinivasan, Krishna and Raman, Karthik and Chen, Jiecao and Bendersky, Michael and Najork, Marc},\n journal={arXiv preprint arXiv:2103.01913},\n year={2021}\n}\n', homepage='https://github.com/google-research-datasets/wit', license='', features={'b64_bytes': Value(dtype='string', id=None), 'embedding': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None), 'image_url': Value(dtype='string', id=None), 'metadata_url': Value(dtype='string', id=None), 'original_height': Value(dtype='int32', id=None), 'original_width': Value(dtype='int32', id=None), 'mime_type': Value(dtype='string', id=None), 'caption_attribution_description': Value(dtype='string', id=None), 'wit_features': Sequence(feature={'language': Value(dtype='string', id=None), 'page_url': Value(dtype='string', id=None), 'attribution_passes_lang_id': Value(dtype='string', id=None), 'caption_alt_text_description': Value(dtype='string', id=None), 'caption_reference_description': Value(dtype='string', id=None), 'caption_title_and_reference_description': Value(dtype='string', id=None), 'context_page_description': Value(dtype='string', id=None), 'context_section_description': Value(dtype='string', id=None), 'hierarchical_section_title': Value(dtype='string', id=None), 'is_main_image': Value(dtype='string', id=None), 'page_changed_recently': Value(dtype='string', id=None), 'page_title': Value(dtype='string', id=None), 'section_title': Value(dtype='string', id=None)}, length=-1, id=None)}, post_processed=None, supervised_keys=None, task_templates=None, builder_name='wit', config_name='default', version=0.0.0, splits=None, download_checksums=None, download_size=None, post_processing_size=None, dataset_size=None, size_in_bytes=None)
```
Should we test if info is empty, and in that case regenerate it? Or always generate it?
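A sketch of the first option (regenerate only when nothing is returned):
```python
from datasets import get_dataset_infos, load_dataset_builder

infos = get_dataset_infos("wit")
if not infos:  # no dataset infos available on the hub
    builder = load_dataset_builder("wit")
    infos = {builder.info.config_name: builder.info}
```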
|
CLOSED
| 2021-10-04T09:47:04
| 2022-02-21T15:57:10
| 2022-02-21T15:57:10
|
https://github.com/huggingface/datasets/issues/3013
|
severo
| 1
|
[
"question",
"dataset-viewer"
] |
3,011
|
load_dataset_builder should error if "name" does not exist?
|
```
import datasets as ds
builder = ds.load_dataset_builder('sent_comp', name="doesnotexist")
builder.info.config_name
```
returns
```
'doesnotexist'
```
Shouldn't it raise an error instead?
For this dataset, the only valid values for `name` should be `"default"` or `None` (i.e. the argument not passed).
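In the meantime, a sketch of how the config name could be validated up front, assuming `get_dataset_config_names` is available in the installed version:
```python
import datasets as ds

valid_configs = ds.get_dataset_config_names("sent_comp")
if "doesnotexist" not in valid_configs:
    raise ValueError(f"Unknown config 'doesnotexist', expected one of {valid_configs}")
```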
|
OPEN
| 2021-10-04T09:20:46
| 2022-09-20T13:05:07
| null |
https://github.com/huggingface/datasets/issues/3011
|
severo
| 1
|
[
"bug",
"dataset-viewer"
] |
3,010
|
Chain filtering is leaking
|
## Describe the bug
As there's no support for lists within dataset fields, I convert my lists to json-string format. However, the described bug occurs even when the data format is 'string'.
These samples show that filtering behavior diverges from what's expected when chaining filters.
In sample 2, the second filter leads to data that should have been removed by the first filter "leaking" into the results.
## Steps to reproduce the bug
Sample 1:
```python
import datasets
import json
items = [[1, 2], [3], [4]]
jsoned_items = map(json.dumps, [[1, 2], [3], [4]])
ds = datasets.Dataset.from_dict({'a': jsoned_items})
print(list(ds))
# > Prints: [{'a': '[1, 2]'}, {'a': '[3]'}, {'a': '[4]'}] as expected
filtered = ds
# get all lists that are shorter than 2
filtered = filtered.filter(lambda x: len(json.loads(x['a'])) < 2, load_from_cache_file=False)
print(list(filtered))
# > Prints: [{'a': '[3]'}, {'a': '[4]'}] as expected
# get all lists, which have a value bigger than 3 on its zero index
filtered = filtered.filter(lambda x: json.loads(x['a'])[0] > 3, load_from_cache_file=False)
print(list(filtered))
# > Should be: [{'a': [4]}]
# > Prints: [{'a': [3]}]
```
Sample 2:
```python
import datasets
import json
items = [[1, 2], [3], [4]]
jsoned_items = map(json.dumps, [[1, 2], [3], [4]])
ds = datasets.Dataset.from_dict({'a': jsoned_items})
print(list(ds))
# > Prints: [{'a': '[1, 2]'}, {'a': '[3]'}, {'a': '[4]'}]
filtered = ds
# get all lists, which have a value bigger than 3 on its zero index
filtered = filtered.filter(lambda x: json.loads(x['a'])[0] > 3, load_from_cache_file=False)
print(list(filtered))
# > Prints: [{'a': '[4]'}] as expected
# get all lists that are shorter than 2
filtered = filtered.filter(lambda x: len(json.loads(x['a'])) < 2, load_from_cache_file=False)
print(list(filtered))
# > Prints: [{'a': '[1, 2]'}]
# > Should be: [{'a': '[4]'}] (remain intact)
```
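A possible workaround sketch, continuing from the variables defined in sample 2: materialize the indices of the first filter before applying the second one.
```python
filtered = ds.filter(lambda x: json.loads(x['a'])[0] > 3, load_from_cache_file=False)
filtered = filtered.flatten_indices()  # rewrite the table so the selection is materialized
filtered = filtered.filter(lambda x: len(json.loads(x['a'])) < 2, load_from_cache_file=False)
```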
## Expected results
Expected and actual results are attached to the code snippets.
## Actual results
Expected and actual results are attached to the code snippets.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.9.7
- PyArrow version: 5.0.0
|
CLOSED
| 2021-10-04T09:04:55
| 2022-06-01T17:36:44
| 2022-06-01T17:36:44
|
https://github.com/huggingface/datasets/issues/3010
|
DrMatters
| 4
|
[
"bug"
] |
3,005
|
DatasetDict.filter and Dataset.filter crashes with any "fn_kwargs" argument
|
## Describe the bug
The ".filter" method of DatasetDict or Dataset objects fails when passing any "fn_kwargs" argument
## Steps to reproduce the bug
```python
import datasets
example_dataset = datasets.Dataset.from_dict({"a": [1, 2, 3, 4]})
def filter_value(example, value):
return example['a'] == value
filtered = example_dataset.filter(filter_value, fn_kwargs={'value': 3})
```
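Until `fn_kwargs` is handled, a workaround sketch is to bind the extra argument yourself:
```python
from functools import partial

filtered = example_dataset.filter(partial(filter_value, value=3))
```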
## Expected results
`filtered` is a dataset containing {"a": [3]}
## Actual results
> Traceback (most recent call last):
> File "C:\Users\qsemi\Documents\git\nlp_experiments\gpt_celebrity\src\test_faulty_filter.py", line 8, in <module>
> filtered = example_dataset.filter(filter_value, fn_kwargs={'value': 3})
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 185, in wrapper
> out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\fingerprint.py", line 398, in wrapper
> out = func(self, *args, **kwargs)
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 2169, in filter
> indices = self.map(
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 1686, in map
> return self._map_single(
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 185, in wrapper
> out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\fingerprint.py", line 398, in wrapper
> out = func(self, *args, **kwargs)
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 2048, in _map_single
> batch = apply_function_on_filtered_inputs(
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 1939, in apply_function_on_filtered_inputs
> function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
> TypeError: get_indices_from_mask_function() got an unexpected keyword argument 'value'
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.9.7
- PyArrow version: 5.0.0
|
CLOSED
| 2021-10-04T00:49:29
| 2021-10-11T10:18:01
| 2021-10-04T08:46:13
|
https://github.com/huggingface/datasets/issues/3005
|
DrMatters
| 2
|
[
"bug"
] |
2,998
|
cannot shuffle dataset loaded from disk
|
## Describe the bug
dataset loaded from disk cannot be shuffled.
## Steps to reproduce the bug
```
my_dataset = load_from_disk('s3://my_file/validate', fs=s3)
sample = my_dataset.select(range(100)).shuffle(seed=1234)
```
## Actual results
```
sample = my_dataset .select(range(100)).shuffle(seed=1234)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2494, in shuffle
new_fingerprint=new_fingerprint,
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2303, in select
tmp_file = tempfile.NamedTemporaryFile("wb", dir=os.path.dirname(indices_cache_file_name), delete=False)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/tempfile.py", line 547, in NamedTemporaryFile
(fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/tempfile.py", line 258, in _mkstemp_inner
fd = _os.open(file, flags, 0o600)
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpnnu5uhnx/my_file/validate/tmpy76d70g4'
```
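A possible workaround sketch (untested against S3): keep the indices in memory so that no temporary cache file is written next to the remote dataset path.
```python
sample = my_dataset.select(range(100), keep_in_memory=True).shuffle(seed=1234, keep_in_memory=True)
```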
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Python version: 3.7
- PyArrow version: 5.0.0
|
OPEN
| 2021-10-01T13:49:52
| 2021-10-01T13:49:52
| null |
https://github.com/huggingface/datasets/issues/2998
|
pya25
| 0
|
[
"bug"
] |
2,997
|
Dataset has incorrect labels
|
The dataset https://huggingface.co/datasets/turkish_product_reviews has incorrect labels - all reviews are labelled with "1" (positive sentiment). None of the reviews is labelled with "0". See screenshot attached:

|
CLOSED
| 2021-10-01T12:09:06
| 2021-10-01T15:32:00
| 2021-10-01T13:54:34
|
https://github.com/huggingface/datasets/issues/2997
|
heiko-hotz
| 3
|
[] |
2,993
|
Can't download `trivia_qa/unfiltered`
|
## Describe the bug
For some reason, I can't download `trivia_qa/unfiltered`. A file seems to be missing... I am able to see it fine through the viewer though...
## Steps to reproduce the bug
```python
>>> from datasets import load_dataset
>>> load_dataset("trivia_qa", "unfiltered")
Downloading and preparing dataset trivia_qa/unfiltered (download: 3.07 GiB, generated: 27.23 GiB, post-processed: Unknown size, total: 30.30 GiB) to /gpfsscratch/rech/six/commun/datasets/trivia_qa/unfiltered/1.1.0/9977a5d6f72acfd92f587de052403e8138b43bb0d1ce595016c3baf7e14deba6...
Traceback (most recent call last):
File "/gpfswork/rech/six/commun/modules/datasets_modules/datasets/trivia_qa/9977a5d6f72acfd92f587de052403e8138b43bb0d1ce595016c3baf7e14deba6/trivia_qa.py", line 251, in _add_context
with open(os.path.join(file_dir, fname), encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/gpfsscratch/rech/six/commun/datasets/downloads/extracted/9fcb7eddc6afd46fd074af3c5128931dfe4b548f933c925a23847faf4c1995ad/evidence/wikipedia/Peanuts.txt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/load.py", line 852, in load_dataset
use_auth_token=use_auth_token,
File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py", line 616, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py", line 693, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py", line 1107, in _prepare_split
disable=bool(logging.get_verbosity() == logging.NOTSET),
File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/tqdm/std.py", line 1133, in __iter__
for obj in iterable:
File "/gpfswork/rech/six/commun/modules/datasets_modules/datasets/trivia_qa/9977a5d6f72acfd92f587de052403e8138b43bb0d1ce595016c3baf7e14deba6/trivia_qa.py", line 303, in _generate_examples
example = parse_example(article)
File "/gpfswork/rech/six/commun/modules/datasets_modules/datasets/trivia_qa/9977a5d6f72acfd92f587de052403e8138b43bb0d1ce595016c3baf7e14deba6/trivia_qa.py", line 274, in parse_example
_add_context(article.get("EntityPages", []), "WikiContext", wiki_dir),
File "/gpfswork/rech/six/commun/modules/datasets_modules/datasets/trivia_qa/9977a5d6f72acfd92f587de052403e8138b43bb0d1ce595016c3baf7e14deba6/trivia_qa.py", line 253, in _add_context
except (IOError, datasets.Value("errors").NotFoundError):
File "<string>", line 5, in __init__
File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/features.py", line 265, in __post_init__
self.pa_type = string_to_arrow(self.dtype)
File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/features.py", line 134, in string_to_arrow
f"Neither {datasets_dtype} nor {datasets_dtype + '_'} seems to be a pyarrow data type. "
ValueError: Neither errors nor errors_ seems to be a pyarrow data type. Please make sure to use a correct data type, see: https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions
```
## Expected results
I am able to load another subset (`rc`), but unable to load this one.
I am not sure why the try/except doesn't catch it...
https://github.com/huggingface/datasets/blob/9675a5a1e7b99a86f9c250f6ea5fa5d1e6d5cc7d/datasets/trivia_qa/trivia_qa.py#L253
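The `except` clause itself seems to be the problem: `datasets.Value("errors")` is a feature type, not an exception, so evaluating it raises before anything can be caught. A hedged sketch of what the clause presumably intended (variable names assumed from the surrounding script):
```python
try:
    with open(os.path.join(file_dir, fname), encoding="utf-8") as f:
        text = f.read()
except OSError:  # IOError/FileNotFoundError for missing evidence files
    text = ""
```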
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform: Linux-4.18.0-147.51.2.el8_1.x86_64-x86_64-with-redhat-8.1-Ootpa
- Python version: 3.7.10
- PyArrow version: 3.0.0
|
CLOSED
| 2021-09-30T23:00:18
| 2021-10-01T19:07:23
| 2021-10-01T19:07:22
|
https://github.com/huggingface/datasets/issues/2993
|
VictorSanh
| 3
|
[
"bug"
] |
2,991
|
add documentation for the `Unix style pattern` matching feature that can be leveraged for `data_files` in `load_dataset`
|
Unless I'm mistaken, it seems that in the new documentation it is no longer mentioned that you can use Unix style pattern matching in the `data_files` argument of the `load_dataset` method.
This feature was mentioned [here](https://huggingface.co/docs/datasets/loading_datasets.html#from-a-community-dataset-on-the-hugging-face-hub) in the previous documentation.
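For reference, the feature being discussed looks roughly like this (file patterns are illustrative):
```python
from datasets import load_dataset

dataset = load_dataset(
    "json",
    data_files={"train": "data/train-*.jsonl", "validation": "data/valid-*.jsonl"},
)
```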
I'd love to hear your opinion @lhoestq , @albertvillanova and @stevhliu
|
OPEN
| 2021-09-30T13:22:01
| 2021-09-30T13:22:01
| null |
https://github.com/huggingface/datasets/issues/2991
|
SaulLu
| 0
|
[
"enhancement"
] |
2,988
|
IndexError: Invalid key: 14 is out of bounds for size 0
|
## Describe the bug
Hi. I am trying to implement the stochastic weight averaging (SWA) optimizer with the Transformers library, as described here: https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/ . For this I am using the run_clm.py script, which works fine before adding the SWA optimizer. The moment I modify the model with `swa_model = AveragedModel(model)` in this script, I get the error below. Since I am NOT touching the dataloader part, I am confused why this is occurring. I would very much appreciate your opinion on this, @lhoestq.
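A minimal sketch of the modification being described (model name hypothetical):
```python
from torch.optim.swa_utils import AveragedModel
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
swa_model = AveragedModel(model)  # the single change that triggers the error below
```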
## Steps to reproduce the bug
```
Traceback (most recent call last):
File "run_clm.py", line 723, in <module>
main()
File "run_clm.py", line 669, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/transformers/trainer.py", line 1258, in train
for step, inputs in enumerate(epoch_iterator):
File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
data = self._next_data()
File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1530, in __getitem__
format_kwargs=self._format_kwargs,
File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1517, in _getitem
pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets/formatting/formatting.py", line 368, in query_table
_check_valid_index_key(key, size)
File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets/formatting/formatting.py", line 311, in _check_valid_index_key
raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
IndexError: Invalid key: 14 is out of bounds for size 0
```
## Expected results
not getting the index error
## Actual results
Please see the above
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: datasets 1.12.1
- Platform: linux
- Python version: 3.7.11
- PyArrow version: 5.0.0
|
CLOSED
| 2021-09-29T16:04:24
| 2022-04-10T14:49:49
| 2022-04-10T14:49:49
|
https://github.com/huggingface/datasets/issues/2988
|
dorost1234
| 13
|
[
"bug"
] |
2,987
|
ArrowInvalid: Can only convert 1-dimensional array values
|
## Describe the bug
For the ViT and LayoutLMv2 demo notebooks in my [Transformers-Tutorials repo](https://github.com/NielsRogge/Transformers-Tutorials), people reported an ArrowInvalid issue after applying the following function to a Dataset:
```
def preprocess_data(examples):
images = [Image.open(path).convert("RGB") for path in examples['image_path']]
words = examples['words']
boxes = examples['bboxes']
word_labels = examples['ner_tags']
encoded_inputs = processor(images, words, boxes=boxes, word_labels=word_labels,
padding="max_length", truncation=True)
return encoded_inputs
```
```
Full trace:
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
<ipython-input-8-0fc3efc6f0c2> in <module>()
27
28 train_dataset = datasets['train'].map(preprocess_data, batched=True, remove_columns=datasets['train'].column_names,
---> 29 features=features)
30 test_dataset = datasets['test'].map(preprocess_data, batched=True, remove_columns=datasets['test'].column_names,
31 features=features)
13 frames
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1701 new_fingerprint=new_fingerprint,
1702 disable_tqdm=disable_tqdm,
-> 1703 desc=desc,
1704 )
1705 else:
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
183 }
184 # apply actual function
--> 185 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
186 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
187 # re-apply format to the output
/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
396 # Call actual function
397
--> 398 out = func(self, *args, **kwargs)
399
400 # Update fingerprint of in-place transforms + update in-place history of transforms
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2063 writer.write_table(batch)
2064 else:
-> 2065 writer.write_batch(batch)
2066 if update_data and writer is not None:
2067 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
409 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col)
410 typed_sequence_examples[col] = typed_sequence
--> 411 pa_table = pa.Table.from_pydict(typed_sequence_examples)
412 self.write_table(pa_table, writer_batch_size)
413
/usr/local/lib/python3.7/dist-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict()
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.asarray()
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol()
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in __arrow_array__(self, type)
106 storage = numpy_to_pyarrow_listarray(self.data, type=type.value_type)
107 else:
--> 108 storage = pa.array(self.data, type.storage_dtype)
109 out = pa.ExtensionArray.from_storage(type, storage)
110 elif isinstance(self.data, np.ndarray):
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: Can only convert 1-dimensional array values
```
It can be fixed by adding the following line:
```diff
def preprocess_data(examples):
images = [Image.open(path).convert("RGB") for path in examples['image_path']]
words = examples['words']
boxes = examples['bboxes']
word_labels = examples['ner_tags']
encoded_inputs = processor(images, words, boxes=boxes, word_labels=word_labels,
padding="max_length", truncation=True)
+ encoded_inputs["image"] = np.array(encoded_inputs["image"])
return encoded_inputs
```
However, it would be great if this could be fixed within Datasets itself.
|
CLOSED
| 2021-09-29T14:18:52
| 2021-10-01T13:57:45
| 2021-10-01T13:57:45
|
https://github.com/huggingface/datasets/issues/2987
|
NielsRogge
| 1
|
[
"bug"
] |
2,984
|
Exceeded maximum rows when reading large files
|
## Describe the bug
When using `load_dataset` with JSON files, if the files are too large, there will be an "Exceeded maximum rows" error.
## Steps to reproduce the bug
```python
dataset = load_dataset('json', data_files=data_files) # data files have 3M rows in a single file
```
## Expected results
No error
## Actual results
```
~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py in _generate_tables(self, files)
134 with open(file, encoding="utf-8") as f:
--> 135 dataset = json.load(f)
136 except json.JSONDecodeError:
~/anaconda3/envs/python/lib/python3.9/json/__init__.py in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
292 """
--> 293 return loads(fp.read(),
294 cls=cls, object_hook=object_hook,
~/anaconda3/envs/python/lib/python3.9/json/__init__.py in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
345 parse_constant is None and object_pairs_hook is None and not kw):
--> 346 return _default_decoder.decode(s)
347 if cls is None:
~/anaconda3/envs/python/lib/python3.9/json/decoder.py in decode(self, s, _w)
339 if end != len(s):
--> 340 raise JSONDecodeError("Extra data", s, end)
341 return obj
JSONDecodeError: Extra data: line 2 column 1 (char 20321)
During handling of the above exception, another exception occurred:
ArrowInvalid Traceback (most recent call last)
<ipython-input-20-ab3718a6482f> in <module>
----> 1 dataset = load_dataset('json', data_files=data_files)
~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)
841
842 # Download and prepare data
--> 843 builder_instance.download_and_prepare(
844 download_config=download_config,
845 download_mode=download_mode,
~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
606 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
607 if not downloaded_from_gcs:
--> 608 self._download_and_prepare(
609 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
610 )
~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
684 try:
685 # Prepare split will record examples associated to the split
--> 686 self._prepare_split(split_generator, **prepare_split_kwargs)
687 except OSError as e:
688 raise OSError(
~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/builder.py in _prepare_split(self, split_generator)
1153 generator = self._generate_tables(**split_generator.gen_kwargs)
1154 with ArrowWriter(features=self.info.features, path=fpath) as writer:
-> 1155 for key, table in utils.tqdm(
1156 generator, unit=" tables", leave=False, disable=bool(logging.get_verbosity() == logging.NOTSET)
1157 ):
~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py in _generate_tables(self, files)
135 dataset = json.load(f)
136 except json.JSONDecodeError:
--> 137 raise e
138 raise ValueError(
139 f"Not able to read records in the JSON file at {file}. "
~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py in _generate_tables(self, files)
114 while True:
115 try:
--> 116 pa_table = paj.read_json(
117 BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)
118 )
~/anaconda3/envs/python/lib/python3.9/site-packages/pyarrow/_json.pyx in pyarrow._json.read_json()
~/anaconda3/envs/python/lib/python3.9/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/anaconda3/envs/python/lib/python3.9/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: Exceeded maximum rows
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Linux
- Python version: 3.9
- PyArrow version: 4.0.1
|
CLOSED
| 2021-09-29T04:49:22
| 2021-10-12T06:05:42
| 2021-10-12T06:05:42
|
https://github.com/huggingface/datasets/issues/2984
|
zijwang
| 1
|
[
"bug"
] |
2,980
|
OpenSLR 25: ASR data for Amharic, Swahili and Wolof
|
## Adding a Dataset
- **Name:** *SLR25*
- **Description:** *Subset 25 from OpenSLR. Other subsets have been added to https://huggingface.co/datasets/openslr; subset 25 covers Amharic, Swahili and Wolof data.*
- **Paper:** *https://www.openslr.org/25/ has citations for each of the three subsets.*
- **Data:** *Currently the three links to the .tar.bz2 files can be found at https://www.openslr.org/25/*
- **Motivation:** *Increase ASR data for underrepresented African languages. Also, other subsets of OpenSLR speech recognition have been uploaded, so this would be easy.*
https://github.com/huggingface/datasets/blob/master/datasets/openslr/openslr.py has already been created for various other OpenSLR subsets, so this should be relatively straightforward to do.
|
OPEN
| 2021-09-28T15:04:36
| 2021-09-29T17:25:14
| null |
https://github.com/huggingface/datasets/issues/2980
|
cdleong
| 3
|
[
"dataset request"
] |
2,979
|
ValueError when computing f1 metric with average None
|
## Describe the bug
When I try to compute the f1 score for each class in a multiclass classification problem, I get a ValueError. The same happens with recall and precision. I traced the error to the `.item()` in these scripts, which is probably there for the other averages. E.g. from f1.py:
```python
return {
"f1": f1_score(
references,
predictions,
labels=labels,
pos_label=pos_label,
average=average,
sample_weight=sample_weight,
).item(),
}
```
Since the result is an array with more than one item, the `.item()` call throws the error. I didn't submit a PR because the `.item()` might be needed for the other averages and I'm not very familiar with the library.
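A hedged sketch of the kind of change being suggested (not the actual patch): only convert to a Python scalar when the result actually is one.
```python
import numpy as np
from sklearn.metrics import f1_score

def compute_f1(predictions, references, average="binary", **kwargs):
    score = f1_score(references, predictions, average=average, **kwargs)
    # ndarray when average=None, scalar otherwise
    return {"f1": score.tolist() if isinstance(score, np.ndarray) else float(score)}
```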
## Steps to reproduce the bug
```python
from datasets import load_metric
metric = load_metric("f1")
metric.add_batch(predictions=[2,34,1,34,1,2,3], references=[23,52,1,3,523,5,8])
metric.compute(average=None)
```
## Expected results
`array([0.66666667, 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. ])`
## Actual results
ValueError: can only convert an array of size 1 to a Python scalar
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.5
- PyArrow version: 5.0.0
|
CLOSED
| 2021-09-28T11:34:53
| 2021-10-01T14:17:38
| 2021-10-01T14:17:38
|
https://github.com/huggingface/datasets/issues/2979
|
asofiaoliveira
| 1
|
[
"bug"
] |
2,978
|
Run CI tests against non-production server
|
Currently, the CI test suite performs requests to the HF production server.
As discussed with @elishowk, we should refactor our tests to use the HF staging server instead, like `huggingface_hub` and `transformers`.
|
OPEN
| 2021-09-28T09:41:26
| 2021-09-28T15:23:50
| null |
https://github.com/huggingface/datasets/issues/2978
|
albertvillanova
| 2
|
[] |
2,977
|
Impossible to load compressed csv
|
## Describe the bug
It is not possible to load from a compressed csv anymore.
## Steps to reproduce the bug
```python
load_dataset('csv', data_files=['/path/to/csv.bz2'])
```
## Problem and possible solution
This used to work, but the commit that broke it is [this one](https://github.com/huggingface/datasets/commit/ad489d4597381fc2d12c77841642cbeaecf7a2e0#diff-6f60f8d0552b75be8b3bfd09994480fd60dcd4e7eb08d02f721218c3acdd2782).
`pandas` usually gets the compression information from the filename itself (which was previously passed directly). Now that it gets a file descriptor, it might be good to auto-infer the compression or let the user pass the `compression` kwarg to `load_dataset` (or maybe warn the user if the file ends with a commonly known compression scheme?).
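For illustration, a minimal sketch of the behaviour suggested above (not the library's actual fix): infer the compression from the original filename and pass it explicitly along with the opened file object.
```python
import pandas as pd

path = "/path/to/csv.bz2"  # hypothetical path
# Infer the compression from the extension, since the file object itself
# carries no filename information.
compression = "bz2" if path.endswith(".bz2") else "infer"
with open(path, "rb") as f:
    df = pd.read_csv(f, compression=compression)
```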
## Environment info
- `datasets` version: 1.10.0 (and over)
- Platform: Linux-5.8.0-45-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 3.0.0
|
CLOSED
| 2021-09-28T07:18:54
| 2021-10-01T15:53:16
| 2021-10-01T15:53:15
|
https://github.com/huggingface/datasets/issues/2977
|
Valahaar
| 1
|
[
"bug"
] |
2,976
|
Can't load dataset
|
I'm trying to load a wikitext dataset
```
from datasets import load_dataset
raw_datasets = load_dataset("wikitext")
```
ValueError: Config name is missing.
Please pick one among the available configs: ['wikitext-103-raw-v1', 'wikitext-2-raw-v1', 'wikitext-103-v1', 'wikitext-2-v1']
Example of usage:
`load_dataset('wikitext', 'wikitext-103-raw-v1')`.
If I try
```
from datasets import load_dataset
raw_datasets = load_dataset("wikitext-2-v1")
```
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.12.1/datasets/wikitext-2-v1/wikitext-2-v1.py
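For reference, the config name goes as a second argument to `load_dataset` rather than being folded into the dataset name, e.g.:
```python
from datasets import load_dataset

# "wikitext" is the dataset name, "wikitext-2-v1" is the config name
raw_datasets = load_dataset("wikitext", "wikitext-2-v1")
```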
#### Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic (colab)
- Python version: 3.7.12
- PyArrow version: 3.0.0
|
CLOSED
| 2021-09-27T21:38:14
| 2024-04-08T03:27:29
| 2021-09-28T06:53:01
|
https://github.com/huggingface/datasets/issues/2976
|
mskovalova
| 4
|
[
"bug"
] |
2,972
|
OSError: Not enough disk space.
|
## Describe the bug
I'm trying to download the `natural_questions` dataset from the Internet, and I've specified a cache_dir that is located on a mounted disk with enough disk space. However, even though the space is sufficient, the disk space checking function still reports that the root `/` disk does not have enough space.
The file system structure is shown below. The root `/` has `115G` of disk space available, and `sda1` is mounted at `/mnt`, which has `1.2T` available:
```
/
/mnt/sda1/path/to/args.dataset_cache_dir
```
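For illustration, a minimal sketch of the kind of check that seems intended here (hypothetical path; not the library's actual code): measure the free space on the filesystem that actually holds the resolved cache directory rather than on the root filesystem.
```python
import os
import shutil

cache_dir = os.path.realpath("/mnt/sda1/path/to/dataset_cache")  # hypothetical
needed_bytes = 135 * 1024**3  # roughly the 134.92 GiB from the error message

# shutil.disk_usage reports the space of the filesystem backing `cache_dir`,
# i.e. the 1.2T mount rather than the 115G root partition.
free_bytes = shutil.disk_usage(cache_dir).free
print(free_bytes >= needed_bytes)
```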
## Steps to reproduce the bug
```python
dataset_config = DownloadConfig(
cache_dir=os.path.abspath(args.dataset_cache_dir),
resume_download=True,
)
dataset = load_dataset("natural_questions", download_config=dataset_config)
```
## Expected results
Can download the dataset without an error.
## Actual results
The following error raised:
```
OSError: Not enough disk space. Needed: 134.92 GiB (download: 41.97 GiB, generated: 92.95 GiB, post-processed: Unknown size)
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.9.0
- Platform: Ubuntu 18.04
- Python version: 3.8.10
- PyArrow version:
|
CLOSED
| 2021-09-27T07:41:22
| 2024-12-04T02:56:19
| 2021-09-28T06:43:15
|
https://github.com/huggingface/datasets/issues/2972
|
qqaatw
| 6
|
[
"bug"
] |
2,971
|
masakhaner dataset load problem
|
## Describe the bug
Masakhaner dataset is not loading
## Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("masakhaner",'amh')
```
## Expected results
Expected the return of a dataset
## Actual results
```
NonMatchingSplitsSizesError Traceback (most recent call last)
<ipython-input-3-a6abc1161d4c> in <module>()
1 from datasets import load_dataset
2
----> 3 dataset = load_dataset("masakhaner",'amh')
3 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py
in verify_splits(expected_splits, recorded_splits)
72 ]
73 if len(bad_splits) > 0:
---> 74 raise NonMatchingSplitsSizesError(str(bad_splits))
75 logger.info("All the splits matched successfully.")
76
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=639927, num_examples=1751, dataset_name='masakhaner'), 'recorded': SplitInfo(name='train', num_bytes=639911, num_examples=1750, dataset_name='masakhaner')}, {'expected': SplitInfo(name='validation', num_bytes=92768, num_examples=251, dataset_name='masakhaner'), 'recorded': SplitInfo(name='validation', num_bytes=92753, num_examples=250, dataset_name='masakhaner')}, {'expected': SplitInfo(name='test', num_bytes=184286, num_examples=501, dataset_name='masakhaner'), 'recorded': SplitInfo(name='test', num_bytes=184271, num_examples=500, dataset_name='masakhaner')}]
```
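A possible workaround, assuming the upstream data simply changed size since the dataset script's metadata was recorded, is to skip the split-size verification:
```python
from datasets import load_dataset

# ignore_verifications skips the checksum/split-size checks that raise
# NonMatchingSplitsSizesError (the underlying size mismatch still needs a
# metadata update upstream).
dataset = load_dataset("masakhaner", "amh", ignore_verifications=True)
```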
## Environment info
Google Colab
|
CLOSED
| 2021-09-27T04:59:07
| 2021-09-27T12:59:59
| 2021-09-27T12:59:59
|
https://github.com/huggingface/datasets/issues/2971
|
huu4ontocord
| 1
|
[
"bug"
] |
2,970
|
Magnet's
|
## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
CLOSED
| 2021-09-26T09:50:29
| 2021-09-26T10:38:59
| 2021-09-26T10:38:59
|
https://github.com/huggingface/datasets/issues/2970
|
rcacho172
| 0
|
[
"dataset request"
] |
2,969
|
medical-dialog error
|
## Describe the bug
When I attempt to download the Hugging Face dataset medical_dialog, it errors out midway through.
## Steps to reproduce the bug
```python
raw_datasets = load_dataset("medical_dialog", "en", split="train", download_mode="force_redownload", data_dir="./Medical-Dialogue-Dataset-English")
```
## Expected results
No error
## Actual results
```
3 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_splits(expected_splits, recorded_splits)
72 ]
73 if len(bad_splits) > 0:
---> 74 raise NonMatchingSplitsSizesError(str(bad_splits))
75 logger.info("All the splits matched successfully.")
76
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='medical_dialog'), 'recorded': SplitInfo(name='train', num_bytes=295097913, num_examples=229674, dataset_name='medical_dialog')}]
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.21.1
- Platform: colab
- Python version: colab 3.7
- PyArrow version: N/A
|
CLOSED
| 2021-09-25T23:08:44
| 2024-01-08T09:55:12
| 2021-10-11T07:46:42
|
https://github.com/huggingface/datasets/issues/2969
|
smeyerhot
| 3
|
[
"bug"
] |
2,968
|
`DatasetDict` cannot be exported to parquet if the splits have different features
|
## Describe the bug
I'm trying to use parquet as a means of serialization for both `Dataset` and `DatasetDict` objects. Using `to_parquet` alongside `from_parquet` or `load_dataset` for a `Dataset` works perfectly.
For `DatasetDict`, I use `to_parquet` on each split to save the parquet files in individual folders representing individual splits. This works too, as long as the splits have identical features. If a split has different features from its neighboring splits, then loading the dataset will fail: a single schema is used to load both splits, resulting in a failure to load the second parquet file.
## Steps to reproduce the bug
The following works as expected:
```python
from datasets import load_dataset
ds = load_dataset("lhoestq/custom_squad")
ds['train'].to_parquet("./ds/train/split.parquet")
ds['validation'].to_parquet("./ds/validation/split.parquet")
brand_new_dataset = load_dataset("ds")
```
Modifying a single split to add a new feature ends up in a crash:
```python
from datasets import load_dataset
ds = load_dataset("lhoestq/custom_squad")
def identical_answers(e):
e['identical_answers'] = len(set(e['answers']['text'])) == 1
return e
ds['validation'] = ds['validation'].map(identical_answers)
ds['train'].to_parquet("./ds/train/split.parquet")
ds['validation'].to_parquet("./ds/validation/split.parquet")
brand_new_dataset = load_dataset("ds")
```
```
File "/home/lysandre/.config/JetBrains/PyCharm2021.2/scratches/datasets/upload_dataset.py", line 26, in <module>
brand_new_dataset = load_dataset("ds")
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/load.py", line 1151, in load_dataset
builder_instance.download_and_prepare(
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 642, in download_and_prepare
self._download_and_prepare(
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 732, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 1194, in _prepare_split
writer.write_table(table)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_writer.py", line 428, in write_table
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_writer.py", line 428, in <listcomp>
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "pyarrow/table.pxi", line 1257, in pyarrow.lib.Table.__getitem__
File "pyarrow/table.pxi", line 1833, in pyarrow.lib.Table.column
File "pyarrow/table.pxi", line 1808, in pyarrow.lib.Table._ensure_integer_index
KeyError: 'Field "identical_answers" does not exist in table schema'
```
It does work, however, to use the `save_to_disk` and `load_from_disk` methods:
```py
from datasets import load_from_disk
ds = load_dataset("lhoestq/custom_squad")
def identical_answers(e):
e['identical_answers'] = len(set(e['answers']['text'])) == 1
return e
ds['validation'] = ds['validation'].map(identical_answers)
ds.save_to_disk("local_path")
brand_new_dataset = load_from_disk("local_path")
```
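Another workaround that seems plausible here (a sketch, not verified against this exact setup) is to align the features across splits before exporting, so that both parquet files share a single schema:
```python
# Apply the same transformation to the train split so both splits end up with
# the `identical_answers` column (and therefore identical features).
ds['train'] = ds['train'].map(identical_answers)
ds['train'].to_parquet("./ds/train/split.parquet")
ds['validation'].to_parquet("./ds/validation/split.parquet")
brand_new_dataset = load_dataset("ds")
```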
## Expected results
The saving works correctly, but the loading fails. I would expect either an error when saving or an error-free instantiation of the dataset from the parquet files.
If it's helpful, I've traced a possible patch to the `write_table` method here:
https://github.com/huggingface/datasets/blob/26ff41aa3a642e46489db9e95be1e9a8c4e64bea/src/datasets/arrow_writer.py#L424-L425
The writer is built only if the parquet writer is `None`, but I expect we would want to build a new writer as the table schema has changed. Furthermore, it relies on having the property `update_features` set to `True` in order to update the features:
https://github.com/huggingface/datasets/blob/26ff41aa3a642e46489db9e95be1e9a8c4e64bea/src/datasets/arrow_writer.py#L254-L255
but the `ArrowWriter` is instantiated without that option in the `_prepare_split` method of the `ArrowBasedBuilder`:
https://github.com/huggingface/datasets/blob/26ff41aa3a642e46489db9e95be1e9a8c4e64bea/src/datasets/builder.py#L1190
Updating these two parts to recreate a schema on each split results in an error that is, unfortunately, out of my expertise:
```
File "/home/lysandre/.config/JetBrains/PyCharm2021.2/scratches/datasets/upload_dataset.py", line 27, in <module>
brand_new_dataset = load_dataset("ds")
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/load.py", line 1163, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 819, in as_dataset
datasets = utils.map_nested(
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py", line 207, in map_nested
mapped = [
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py", line 208, in <listcomp>
_single_map_nested((function, obj, types, None, True))
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py", line 143, in _single_map_nested
return function(data_struct)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 850, in _build_single_dataset
ds = self._as_dataset(
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 920, in _as_dataset
dataset_kwargs = ArrowReader(self._cache_dir, self.info).read(
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 217, in read
return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 238, in read_files
pa_table = self._read_files(files, in_memory=in_memory)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 173, in _read_files
pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 308, in _get_table_from_filename
table = ArrowReader.read_table(filename, in_memory=in_memory)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 327, in read_table
return table_cls.from_file(filename)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/table.py", line 458, in from_file
table = _memory_mapped_arrow_table_from_file(filename)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/table.py", line 45, in _memory_mapped_arrow_table_from_file
pa_table = opened_stream.read_all()
File "pyarrow/ipc.pxi", line 563, in pyarrow.lib.RecordBatchReader.read_all
File "pyarrow/error.pxi", line 114, in pyarrow.lib.check_status
OSError: Header-type of flatbuffer-encoded Message is not RecordBatch.
```
## Environment info
- `datasets` version: 1.12.2.dev0
- Platform: Linux-5.14.7-arch1-1-x86_64-with-glibc2.33
- Python version: 3.9.7
- PyArrow version: 5.0.0
|
CLOSED
| 2021-09-25T22:18:39
| 2021-10-07T22:47:42
| 2021-10-07T22:47:26
|
https://github.com/huggingface/datasets/issues/2968
|
LysandreJik
| 9
|
[
"bug"
] |
2,967
|
Adding vision-and-language datasets (e.g., VQA, VCR) to Datasets
|
**Is your feature request related to a problem? Please describe.**
Would you like to add any vision-and-language datasets (e.g., VQA, VCR) to Huggingface Datasets?
**Describe the solution you'd like**
N/A
**Describe alternatives you've considered**
N/A
**Additional context**
This is Da Yin at UCLA. Recently, we have published an EMNLP 2021 paper about geo-diverse visual commonsense reasoning (https://arxiv.org/abs/2109.06860). We propose a new dataset called GD-VCR, a vision-and-language dataset to evaluate how well V&L models perform on scenarios involving geo-location-specific commonsense. We hope to have our V&L dataset incorporated into Hugging Face to further promote our project, but I haven't seen many V&L datasets in the current package. Is it possible to add V&L datasets, and if so, how should we prepare for the loading? Thank you very much!
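For illustration, a minimal sketch of what a loading script's schema for an image-plus-text dataset could look like (a hypothetical schema; at the time of writing, images are typically referenced by file path or URL):
```python
from datasets import ClassLabel, Features, Sequence, Value

features = Features({
    "image_path": Value("string"),               # path or URL to the image
    "question": Value("string"),
    "answer_choices": Sequence(Value("string")),
    "label": ClassLabel(names=["A", "B", "C", "D"]),
})
```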
|
CLOSED
| 2021-09-25T20:58:15
| 2021-10-03T20:34:22
| 2021-10-03T20:34:22
|
https://github.com/huggingface/datasets/issues/2967
|
WadeYin9712
| 0
|
[
"enhancement"
] |
2,965
|
Invalid download URL of WMT17 `zh-en` data
|
## Describe the bug
Partial data (wmt17 zh-en) cannot be downloaded due to an invalid URL.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('wmt17','zh-en')
```
## Actual results
ConnectionError: Couldn't reach ftp://cwmt-wmt:cwmt-wmt@datasets.nju.edu.cn/parallel/casia2015.zip
|
CLOSED
| 2021-09-25T13:17:32
| 2022-08-31T06:47:11
| 2022-08-31T06:47:10
|
https://github.com/huggingface/datasets/issues/2965
|
Ririkoo
| 1
|
[
"bug",
"dataset bug"
] |
2,964
|
Error when calculating Matthews Correlation Coefficient loaded with `load_metric`
|
## Describe the bug
After loading the metric named "[Matthews Correlation Coefficient](https://huggingface.co/metrics/matthews_correlation)" from `π€datasets`, the `.compute` method fails with the following exception `AttributeError: 'float' object has no attribute 'item'` (complete stack trace can be provided if required).
## Steps to reproduce the bug
```python
import torch
predictions = torch.ones((10,))
references = torch.zeros((10,))
from datasets import load_metric
METRIC = load_metric("matthews_correlation")
result = METRIC.compute(predictions=predictions, references=references)
```
## Expected results
We should expect a Python `dict` as follows:
```
{
"matthews_correlation": float()
}
```
as defined in https://github.com/huggingface/datasets/blob/master/metrics/matthews_correlation/matthews_correlation.py, so the fix implies removing `.item()`, since the value returned by the `scikit-learn` function is not a `torch.Tensor` but a `float`, which means the `.item()` call fails.
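For illustration, a minimal sketch of the fix described above (assuming `matthews_corrcoef` returns a plain Python float, as the traceback indicates):
```python
from sklearn.metrics import matthews_corrcoef

def _compute(predictions, references, sample_weight=None):
    # matthews_corrcoef already returns a float, so no .item() call is needed;
    # float() is kept only as a defensive cast.
    return {
        "matthews_correlation": float(
            matthews_corrcoef(references, predictions, sample_weight=sample_weight)
        ),
    }
```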
## Actual results
```
Traceback (most recent call last):
File "/home/alvaro.bartolome/XXX/xxx/cli.py", line 59, in main
app()
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/typer/main.py", line 214, in __call__
return get_command(self)(*args, **kwargs)
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1137, in __call__
return self.main(*args, **kwargs)
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1062, in main
rv = self.invoke(ctx)
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1668, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 763, in invoke
return __callback(*args, **kwargs)
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/typer/main.py", line 500, in wrapper
return callback(**use_params) # type: ignore
File "/home/alvaro.bartolome/XXX/xxx/cli.py", line 43, in train
metrics = trainer.evaluate()
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/transformers/trainer.py", line 2051, in evaluate
output = eval_loop(
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/transformers/trainer.py", line 2292, in evaluation_loop
metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
File "/home/alvaro.bartolome/XXX/xxx/metrics.py", line 20, in compute_metrics
res = METRIC.compute(predictions=predictions, references=eval_preds.label_ids)
File "/home/alvaro.bartolome/miniconda3/envs/lang/lib/python3.9/site-packages/datasets/metric.py", line 402, in compute
output = self._compute(predictions=predictions, references=references, **kwargs)
File "/home/alvaro.bartolome/.cache/huggingface/modules/datasets_modules/metrics/matthews_correlation/0275f1e9a4d318e3ea8cdd87547ee0d58d894966616052e3d18444ac8ddd2357/matthews_correlation.py", line 88, in _compute
"matthews_correlation": matthews_corrcoef(references, predictions, sample_weight=sample_weight).item(),
AttributeError: 'float' object has no attribute 'item'
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-4.15.0-1113-azure-x86_64-with-glibc2.23
- Python version: 3.9.7
- PyArrow version: 5.0.0
|
CLOSED
| 2021-09-24T15:55:21
| 2024-02-16T10:14:35
| 2021-09-25T08:06:07
|
https://github.com/huggingface/datasets/issues/2964
|
alvarobartt
| 1
|
[
"bug"
] |
2,963
|
raise TypeError( TypeError: Provided `function` which is applied to all elements of table returns a variable of type <class 'list'>. Make sure provided `function` returns a variable of type `dict` to update the dataset or `None` if you are only interested in side effects.
|
## Describe the bug
I am trying to use `Dataset` to load my file so that I can use a BERT embeddings model, but when I finish loading with `Dataset` and pass it to the tokenizer using the `map` function, I get the following error:
TypeError: Provided `function` which is applied to all elements of table returns a variable of type <class 'list'>. Make sure provided `function` returns a variable of type `dict` to update the dataset or `None` if you are only interested in side effects.
I was able to load my file using `Dataset` before, but since this morning I keep getting this error.
## Steps to reproduce the bug
```python
# Xtrain, ytrain, filename, len_labels = read_file_2(fic)
# Xtrain, lge_size = get_flaubert_layer(Xtrain, path_to_model_lge)
data_preprocessed = make_new_traindata(Xtrain)
my_dict = {"verbatim": data_preprocessed[1], "label": ytrain} # lemme avec conjonction
dataset = Dataset.from_dict(my_dict)
```
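For illustration, a minimal sketch of a `map` call whose function returns a `dict` as required (hypothetical tokenizer and column names):
```python
from datasets import Dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")  # hypothetical choice
dataset = Dataset.from_dict({"verbatim": ["un exemple", "un autre exemple"], "label": [0, 1]})

def tokenize(batch):
    # Returns a dict-like BatchEncoding (column name -> values), which is what
    # Dataset.map expects; returning a plain list triggers the TypeError above.
    return tokenizer(batch["verbatim"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)
```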
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
|
CLOSED
| 2021-09-24T15:35:11
| 2021-09-24T15:38:24
| 2021-09-24T15:38:24
|
https://github.com/huggingface/datasets/issues/2963
|
keloemma
| 0
|
[
"bug"
] |
2,962
|
Enable splits during streaming the dataset
|
## Describe the Problem
I'd like to stream only a specific percentage or part of a dataset.
I want to be able to use split expressions when streaming a dataset as well.
## Solution
Enabling splits when `streaming = True` as well.
`e.g. dataset = load_dataset('dataset', split='train[:100]', streaming = True)`
## Alternatives
Below is the current alternative for doing it.
`dataset = load_dataset("dataset", split='train', streaming = True).take(100)`
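For illustration, a short sketch of how `skip`/`take` can emulate simple contiguous slices of a streamed split today (the dataset name is a placeholder):
```python
from datasets import load_dataset

stream = load_dataset("dataset", split="train", streaming=True)
first_100 = stream.take(100)   # roughly the equivalent of train[:100]
the_rest = stream.skip(100)    # roughly the equivalent of train[100:]
```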
|
OPEN
| 2021-09-24T15:01:29
| 2025-07-17T04:53:20
| null |
https://github.com/huggingface/datasets/issues/2962
|
merveenoyan
| 1
|
[
"enhancement"
] |