| number | title | body | state | created_at | updated_at | closed_at | url | author | comments_count | labels |
|---|---|---|---|---|---|---|---|---|---|---|
6,106
|
load local json_file as dataset
|
### Describe the bug
I tried to load a local JSON file as a dataset, but parsing failed because some columns are 'float' type.
### Steps to reproduce the bug
1. Load a JSON file in which certain columns are 'float' type, for example `data = load_dataset("json", data_files=JSON_PATH)`.
2. The following error is then triggered: `ArrowInvalid: Could not convert '-0.2253' with type str: tried to convert to double`
### Expected behavior
It should allow some columns to be 'float' type, or at least convert those columns to str type.
I tried to avoid the error by naively converting the float items to str:
```python
# `dataset` is the list of raw JSON records and `keys` its column names;
# if a column's type is not str, convert it to str
mapping = {}
for col in keys:
    if isinstance(dataset[0][col], str):
        mapping[col] = [row.get(col) for row in dataset]
    else:
        mapping[col] = [str(row.get(col)) for row in dataset]
```
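A possible alternative to the manual conversion above could be to declare the problematic columns as strings when loading; a minimal, untested sketch, with a hypothetical path and column names:
```python
from datasets import Features, Value, load_dataset

JSON_PATH = "data.json"  # placeholder path

# Declaring the columns up front may stop Arrow from trying to infer a double
# type for values like '-0.2253'; "text" and "score" are example column names.
features = Features({"text": Value("string"), "score": Value("string")})
data = load_dataset("json", data_files=JSON_PATH, features=features)
```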
### Environment info
- `datasets` version: 2.14.2
- Platform: Linux-5.4.0-52-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
|
CLOSED
| 2023-07-31T12:53:49
| 2023-08-18T01:46:35
| 2023-08-18T01:46:35
|
https://github.com/huggingface/datasets/issues/6106
|
CiaoHe
| 2
|
[] |
6,104
|
HF Datasets data access is extremely slow even when in memory
|
### Describe the bug
Doing a simple `some_dataset[:10]` can take more than a minute.
Profiling it:
<img width="1280" alt="image" src="https://github.com/huggingface/datasets/assets/36224762/e641fb95-ff02-4072-9016-5416a65f75ab">
`some_dataset` is completely in memory with no disk cache.
This is proving fatal to my usage of HF Datasets. Is there a way I can forgo the arrow format and store the dataset as PyTorch tensors so that `_tensorize` is not needed? And is `_consolidate` supposed to take this long?
It's faster to produce the dataset from scratch than to access it from HF Datasets!
### Steps to reproduce the bug
I have uploaded the dataset that causes this problem [here](https://huggingface.co/datasets/NightMachinery/hf_datasets_bug1).
```python
#!/usr/bin/env python3
import sys
import time
import torch
from datasets import load_dataset
def main(dataset_name):
    # Start the timer
    start_time = time.time()

    # Load the dataset from Hugging Face Hub
    dataset = load_dataset(dataset_name)

    # Set the dataset format as torch
    dataset.set_format(type="torch")

    # Perform an identity map
    dataset = dataset.map(lambda example: example, batched=True, batch_size=20)

    # End the timer
    end_time = time.time()

    # Print the time taken
    print(f"Time taken: {end_time - start_time:.2f} seconds")

if __name__ == "__main__":
    dataset_name = "NightMachinery/hf_datasets_bug1"
    print(f"dataset_name: {dataset_name}")
    main(dataset_name)
```
### Expected behavior
_
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
|
OPEN
| 2023-07-31T11:12:19
| 2023-08-01T11:22:43
| null |
https://github.com/huggingface/datasets/issues/6104
|
NightMachinery
| 1
|
[] |
6,100
|
TypeError when loading from GCP bucket
|
### Describe the bug
Loading a dataset from a GCP bucket raises a type error. This bug was introduced recently (either in 2.14 or 2.14.1), and appeared during a migration from 2.13.1.
### Steps to reproduce the bug
Load any file from a GCP bucket:
```python
import datasets
datasets.load_dataset("json", data_files=["gs://..."])
```
The following exception is raised:
```python
Traceback (most recent call last):
...
packages/datasets/data_files.py", line 335, in resolve_pattern
protocol_prefix = fs.protocol + "://" if fs.protocol != "file" else ""
TypeError: can only concatenate tuple (not "str") to tuple
```
With a `GoogleFileSystem`, the attribute `fs.protocol` is a tuple `('gs', 'gcs')` and hence cannot be concatenated with a string.
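A minimal sketch of how that line could account for tuple protocols (an illustrative fix only, not necessarily the patch that was merged):
```python
def get_protocol_prefix(fs) -> str:
    # fs.protocol can be a string (e.g. "file") or a tuple (e.g. ("gs", "gcs")),
    # so normalize it to a single string before building the prefix.
    protocol = fs.protocol if isinstance(fs.protocol, str) else fs.protocol[0]
    return protocol + "://" if protocol != "file" else ""
```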
### Expected behavior
The file should be loaded without exception.
### Environment info
- `datasets` version: 2.14.1
- Platform: macOS-13.2.1-x86_64-i386-64bit
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
|
CLOSED
| 2023-07-30T23:03:00
| 2023-08-03T10:00:48
| 2023-08-01T10:38:55
|
https://github.com/huggingface/datasets/issues/6100
|
bilelomrani1
| 2
|
[] |
6,099
|
How do I get "amazon_us_reviews"
|
### Feature request
I have been trying to load 'amazon_us_reviews' but have been unable to do so.
`amazon_us_reviews = load_dataset('amazon_us_reviews')`
`print(amazon_us_reviews)`
> [ValueError: Config name is missing.
Please pick one among the available configs: ['Wireless_v1_00', 'Watches_v1_00', 'Video_Games_v1_00', 'Video_DVD_v1_00', 'Video_v1_00', 'Toys_v1_00', 'Tools_v1_00', 'Sports_v1_00', 'Software_v1_00', 'Shoes_v1_00', 'Pet_Products_v1_00', 'Personal_Care_Appliances_v1_00', 'PC_v1_00', 'Outdoors_v1_00', 'Office_Products_v1_00', 'Musical_Instruments_v1_00', 'Music_v1_00', 'Mobile_Electronics_v1_00', 'Mobile_Apps_v1_00', 'Major_Appliances_v1_00', 'Luggage_v1_00', 'Lawn_and_Garden_v1_00', 'Kitchen_v1_00', 'Jewelry_v1_00', 'Home_Improvement_v1_00', 'Home_Entertainment_v1_00', 'Home_v1_00', 'Health_Personal_Care_v1_00', 'Grocery_v1_00', 'Gift_Card_v1_00', 'Furniture_v1_00', 'Electronics_v1_00', 'Digital_Video_Games_v1_00', 'Digital_Video_Download_v1_00', 'Digital_Software_v1_00', 'Digital_Music_Purchase_v1_00', 'Digital_Ebook_Purchase_v1_00', 'Camera_v1_00', 'Books_v1_00', 'Beauty_v1_00', 'Baby_v1_00', 'Automotive_v1_00', 'Apparel_v1_00', 'Digital_Ebook_Purchase_v1_01', 'Books_v1_01', 'Books_v1_02']
Example of usage:
`load_dataset('amazon_us_reviews', 'Wireless_v1_00')`]
__________________________________________________________________________
`amazon_us_reviews = load_dataset('amazon_us_reviews', 'Watches_v1_00')
print(amazon_us_reviews)`
**ERROR**
`Generating` train split: 0%
0/960872 [00:00<?, ? examples/s]
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)
1692 )
-> 1693 example = self.info.features.encode_example(record) if self.info.features is not None else record
1694 writer.write(example, key)
11 frames
KeyError: 'marketplace'
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)
1710 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1711 e = e.__context__
-> 1712 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1713
1714 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
### Motivation
The dataset I'm using
https://huggingface.co/datasets/amazon_us_reviews
### Your contribution
What is the best way to load this data?
|
CLOSED
| 2023-07-30T11:02:17
| 2023-08-21T05:08:08
| 2023-08-10T05:02:35
|
https://github.com/huggingface/datasets/issues/6099
|
IqraBaluch
| 10
|
[
"enhancement"
] |
6,097
|
Dataset.get_nearest_examples does not return all feature values for the k most similar datapoints - side effect of Dataset.set_format
|
### Describe the bug
Hi team!
I observe that there seems to be a side effect of `Dataset.set_format`: after setting a format and creating a FAISS index, the method `get_nearest_examples` from the `Dataset` class, fails to retrieve anything else but the embeddings themselves - not super useful. This is not the case if not using the `set_format` method: you can also retrieve any other feature value, such as an index/id/etc.
Are you able to reproduce what I observe?
### Steps to reproduce the bug
```python
from datasets import Dataset
import numpy as np
foo = {'vectors': np.random.random((100,1024)), 'ids': [str(u) for u in range(100)]}
foo = Dataset.from_dict(foo)
foo.set_format('numpy', ['vectors'])
foo.add_faiss_index('vectors')
new_vector = np.random.random(1024)
scores, res = foo.get_nearest_examples('vectors', new_vector, k=3)
```
This will return only the `vectors` feature for the most similar datapoints to `new_vector`; in particular, it will not return the `ids` feature:
```
{'vectors': array([[random values ...]])}
```
### Expected behavior
The expected behavior happens when the `set_format` method is not called:
```python
from datasets import Dataset
import numpy as np
foo = {'vectors': np.random.random((100,1024)), 'ids': [str(u) for u in range(100)]}
foo = Dataset.from_dict(foo)
# foo.set_format('numpy', ['vectors'])
foo.add_faiss_index('vectors')
new_vector = np.random.random(1024)
scores, res = foo.get_nearest_examples('vectors', new_vector, k=3)
```
This *will* return the `ids` of the similar vectors, although unfortunately as a list of lists in lieu of an array (for caching reasons, I believe, as I read elsewhere):
```
{'vectors': [[random values on multiple lines...]], 'ids': ['x', 'y', 'z']}
```
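A possible workaround (an untested sketch, applied to the `foo` dataset from the snippet above) is to keep the numpy format restricted to the vectors column while still returning the other columns:
```python
# output_all_columns=True keeps the non-formatted columns (here "ids") in the output
foo.set_format('numpy', ['vectors'], output_all_columns=True)
scores, res = foo.get_nearest_examples('vectors', new_vector, k=3)
```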
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.4.0-155-generic-x86_64-with-glibc2.31
- Python version: 3.10.6
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
CLOSED
| 2023-07-28T20:31:59
| 2023-07-28T20:49:58
| 2023-07-28T20:49:58
|
https://github.com/huggingface/datasets/issues/6097
|
aschoenauer-sebag
| 1
|
[] |
6,090
|
FilesIterable skips all the files after a hidden file
|
### Describe the bug
When initializing `FilesIterable` with a list of file paths using `FilesIterable.from_paths`, it will discard all the files after a hidden file.
The problem is in [this line](https://github.com/huggingface/datasets/blob/88896a7b28610ace95e444b94f9a4bc332cc1ee3/src/datasets/download/download_manager.py#L233C26-L233C26) where `return` should be replaced by `continue`.
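For clarity, a simplified sketch of the difference (this is not the exact upstream code, just an illustration of why `return` drops the remaining files while `continue` only skips the hidden one):
```python
import os

def iter_visible_files(paths):
    # Simplified stand-in for the FilesIterable iteration logic.
    for filepath in paths:
        if os.path.basename(filepath).startswith((".", "__")):
            continue  # a `return` here would silently drop every following file
        yield filepath

print(list(iter_visible_files(["a.txt", ".hidden", "b.txt"])))  # ['a.txt', 'b.txt']
```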
### Steps to reproduce the bug
https://colab.research.google.com/drive/1SQlxs4y_LSo1Q89KnFoYDSyyKEISun_J#scrollTo=93K4_blkW-8-
### Expected behavior
The script should print all the files except the hidden one.
### Environment info
- `datasets` version: 2.14.1
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.16.4
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
|
CLOSED
| 2023-07-28T07:25:57
| 2023-07-28T10:51:14
| 2023-07-28T10:50:11
|
https://github.com/huggingface/datasets/issues/6090
|
dkrivosic
| 1
|
[] |
6,089
|
AssertionError: daemonic processes are not allowed to have children
|
### Describe the bug
When I call load_dataset with num_proc > 0 in a daemon process, I get an error:
```python
File "/Users/codingl2k1/Work/datasets/src/datasets/download/download_manager.py", line 564, in download_and_extract
return self.extract(self.download(url_or_urls))
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/download/download_manager.py", line 427, in download
downloaded_path_or_paths = map_nested(
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/utils/py_utils.py", line 468, in map_nested
mapped = parallel_map(function, iterable, num_proc, types, disable_tqdm, desc, _single_map_nested)
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/utils/experimental.py", line 40, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/parallel/parallel.py", line 34, in parallel_map
return _map_with_multiprocessing_pool(
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/parallel/parallel.py", line 64, in _map_with_multiprocessing_pool
with Pool(num_proc, initargs=initargs, initializer=initializer) as pool:
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/context.py", line 119, in Pool
return Pool(processes, initializer, initargs, maxtasksperchild,
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/pool.py", line 215, in __init__
self._repopulate_pool()
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/pool.py", line 306, in _repopulate_pool
return self._repopulate_pool_static(self._ctx, self.Process,
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/pool.py", line 329, in _repopulate_pool_static
w.start()
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/process.py", line 118, in start
assert not _current_process._config.get('daemon'), ^^^^^^^^^^^^^^^^^
AssertionError: daemonic processes are not allowed to have children
```
The download is I/O-intensive, so perhaps datasets could replace the multiprocessing pool with a multithreading pool when running in a daemon process.
### Steps to reproduce the bug
1. Start a daemon process.
2. Run load_dataset with num_proc > 0 (a minimal sketch follows).
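A minimal reproduction sketch (the file path is hypothetical; any `load_dataset` call with `num_proc > 0` inside a daemon process should do):
```python
import multiprocessing

from datasets import load_dataset

def worker():
    # num_proc > 0 makes `datasets` create a multiprocessing pool,
    # which is not allowed from within a daemon process.
    load_dataset("json", data_files="data.json", num_proc=2)

if __name__ == "__main__":
    p = multiprocessing.Process(target=worker, daemon=True)
    p.start()
    p.join()
```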
### Expected behavior
No error.
### Environment info
Python 3.11.4
datasets latest master
|
OPEN
| 2023-07-28T06:04:00
| 2023-07-31T02:34:02
| null |
https://github.com/huggingface/datasets/issues/6089
|
codingl2k1
| 2
|
[] |
6,088
|
Loading local data files initiates web requests
|
As documented in the [official docs](https://huggingface.co/docs/datasets/v2.14.0/en/package_reference/loading_methods#datasets.load_dataset.example-2), I tried to load datasets from local files by
```python
# Load a JSON file
from datasets import load_dataset
ds = load_dataset('json', data_files='path/to/local/my_dataset.json')
```
But this failed on a web request because I'm executing the script on a machine without Internet access. The stack trace shows:
```
in PackagedDatasetModuleFactory.__init__(self, name, data_dir, data_files, download_config, download_mode)
940 self.download_config = download_config
941 self.download_mode = download_mode
--> 942 increase_load_count(name, resource_type="dataset")
```
I've read from the source code that this can be fixed by setting an environment variable to run in offline mode (see the sketch below). I'm just wondering: is it expected behaviour that even loading a LOCAL JSON file requires Internet access by default? And what's the point of calling `increase_load_count` on some server when loading just LOCAL data files?
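For reference, a sketch of the offline-mode workaround mentioned above, using the `HF_DATASETS_OFFLINE` environment variable (set before `datasets` is imported):
```python
import os

# Force offline mode so loading local files does not trigger any web request.
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

ds = load_dataset("json", data_files="path/to/local/my_dataset.json")
```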
|
CLOSED
| 2023-07-28T04:06:26
| 2023-07-28T05:02:22
| 2023-07-28T05:02:22
|
https://github.com/huggingface/datasets/issues/6088
|
lytning98
| 0
|
[] |
6,087
|
fsspec dependency is set too low
|
### Describe the bug
fsspec.callbacks.TqdmCallback (used in https://github.com/huggingface/datasets/blob/73bed12ecda17d1573fd3bf73ed5db24d3622f86/src/datasets/utils/file_utils.py#L338) was first released in fsspec [2022.3.0](https://github.com/fsspec/filesystem_spec/releases/tag/2022.3.0) (commit where it was added: https://github.com/fsspec/filesystem_spec/commit/9577c8a482eb0a69092913b81580942a68d66a76#diff-906155c7e926a9ff58b9f23369bb513b09b445f5b0f41fa2a84015d0b471c68cR180), however the dependency is set to 2021.11.1 in https://github.com/huggingface/datasets/blob/main/setup.py#L129
### Steps to reproduce the bug
1. Install fsspec==2021.11.1
2. Install latest datasets==2.14.1
3. Import datasets, import fails due to lack of `fsspec.callbacks.TqdmCallback`
### Expected behavior
No import issue
### Environment info
N/A
|
CLOSED
| 2023-07-27T20:08:22
| 2023-07-28T10:07:56
| 2023-07-28T10:07:03
|
https://github.com/huggingface/datasets/issues/6087
|
iXce
| 1
|
[] |
6,086
|
Support `fsspec` in `Dataset.to_<format>` methods
|
Supporting this should be fairly easy.
Requested on the forum [here](https://discuss.huggingface.co/t/how-can-i-convert-a-loaded-dataset-in-to-a-parquet-file-and-save-it-to-the-s3/48353).
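A sketch of the requested usage (hypothetical bucket and credentials; at the time of filing, the `to_<format>` methods did not accept fsspec paths):
```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")

# Desired behaviour: write directly to a remote filesystem via fsspec.
ds.to_parquet(
    "s3://my-bucket/rotten_tomatoes.parquet",          # hypothetical bucket
    storage_options={"key": "...", "secret": "..."},   # hypothetical credentials
)
```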
|
CLOSED
| 2023-07-27T19:08:37
| 2024-03-07T07:22:43
| 2024-03-07T07:22:42
|
https://github.com/huggingface/datasets/issues/6086
|
mariosasko
| 5
|
[
"enhancement"
] |
6,084
|
Changing pixel values of images in the Winoground dataset
|
Hi, as I followed the instructions, with the latest "datasets" version:
`from datasets import load_dataset`
`examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>)`
I got slightly different datasets in Colab and in my HPC environment. Specifically, the pixel values of the images are slightly different.
I thought it was due to a package version difference, but this morning I found that my Winoground dataset in Colab had become the same as the one in my HPC environment. The dataset in Colab used to produce the correct result, but now that is gone as well.
Can you help me with this? What causes the datasets to have the wrong pixel values?
|
OPEN
| 2023-07-27T17:55:35
| 2023-07-27T17:55:35
| null |
https://github.com/huggingface/datasets/issues/6084
|
ZitengWangNYU
| 0
|
[] |
6,079
|
Iterating over DataLoader based on HF datasets is stuck forever
|
### Describe the bug
I am using an Amazon SageMaker notebook (Amazon Linux 2) with a Python 3.10 based conda environment.
I have a dataset in Parquet format locally. When I try to iterate over it, the loader is stuck forever. Note that the same code works seamlessly in a Python 3.6 based conda environment. What should be my next steps here?
### Steps to reproduce the bug
```python
import time

from datasets import load_dataset
from torch.utils.data import DataLoader

# `tr_data_path` and `streaming_data_collate_fn` are defined elsewhere in the original script (not shown here)
train_dataset = load_dataset(
    "parquet", data_files = {'train': tr_data_path + '*.parquet'},
    split = 'train',
    collate_fn = streaming_data_collate_fn,
    streaming = True
).with_format('torch')

train_dataloader = DataLoader(train_dataset, batch_size = 2, num_workers = 0)

t = time.time()
iter_ = 0
for batch in train_dataloader:
    iter_ += 1
    if iter_ == 1000:
        break
print (time.time() - t)
```
### Expected behavior
The snippet should work normally and load the next batch of data.
### Environment info
datasets: '2.14.0'
pyarrow: '12.0.0'
torch: '2.0.0'
Python: 3.10.10 | packaged by conda-forge | (main, Mar 24 2023, 20:08:06) [GCC 11.3.0]
!uname -r
5.10.178-162.673.amzn2.x86_64
|
CLOSED
| 2023-07-26T14:52:37
| 2024-02-07T17:46:52
| 2023-07-30T14:09:06
|
https://github.com/huggingface/datasets/issues/6079
|
arindamsarkar93
| 15
|
[] |
6,078
|
resume_download with streaming=True
|
### Describe the bug
I used:
```
dataset = load_dataset(
"oscar-corpus/OSCAR-2201",
token=True,
language="fr",
streaming=True,
split="train"
)
```
Unfortunately, the server had a problem during the training process, and I saved the step my training stopped at.
But how can I resume the download from step 1_000_000 without re-streaming the first 1 million docs of the dataset?
`download_config=DownloadConfig(resume_download=True)` seems not to work with streaming=True.
### Steps to reproduce the bug
```
from datasets import load_dataset, DownloadConfig
dataset = load_dataset(
"oscar-corpus/OSCAR-2201",
token=True,
language="fr",
streaming=True, # optional
split="train",
download_config=DownloadConfig(resume_download=True)
)
# interupt the run and try to relaunch it => this restart from scratch
```
### Expected behavior
I would expect a parameter to start streaming from a given index in the dataset.
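A partial workaround sketch: `IterableDataset.skip` resumes iteration at a given index, but it still streams over (and discards) the skipped examples, so it does not avoid re-downloading them:
```python
from datasets import load_dataset

dataset = load_dataset(
    "oscar-corpus/OSCAR-2201",
    token=True,
    language="fr",
    streaming=True,
    split="train",
)
resumed = dataset.skip(1_000_000)  # continues from example 1,000,000 onwards
```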
### Environment info
- `datasets` version: 2.14.0
- Platform: Linux-5.19.0-45-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.0
|
CLOSED
| 2023-07-26T14:08:22
| 2023-07-28T11:05:03
| 2023-07-28T11:05:03
|
https://github.com/huggingface/datasets/issues/6078
|
NicolasMICAUX
| 3
|
[] |
6,077
|
Mapping gets stuck at 99%
|
### Describe the bug
Hi !
I'm currently working with a large (~150GB) unnormalized dataset at work.
The dataset is available on a read-only filesystem internally, and I use a [loading script](https://huggingface.co/docs/datasets/dataset_script) to retrieve it.
I want to normalize the features of the dataset, meaning I need to compute the mean and standard deviation metric for each feature of the entire dataset. I cannot load the entire dataset to RAM as it is too big, so following [this discussion on the huggingface discourse](https://discuss.huggingface.co/t/copy-columns-in-a-dataset-and-compute-statistics-for-a-column/22157) I am using a [map operation](https://huggingface.co/docs/datasets/v2.14.0/en/package_reference/main_classes#datasets.Dataset.map) to first compute the metrics and a second map operation to apply them on the dataset.
The problem lies in the second mapping, as it gets stuck at ~99%. By checking what the process does (using `htop` and `strace`) it seems to be doing a lot of I/O operations, and I'm not sure why.
Obviously, I could always normalize the dataset externally and then load it using a loading script. However, since the internal dataset is updated fairly frequently, using the library to perform normalization automatically would make it much easier for me.
### Steps to reproduce the bug
I'm able to reproduce the problem using the following scripts:
```python
# random_data.py
import datasets
import torch

_VERSION = "1.0.0"

class RandomDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            version=_VERSION,
            supervised_keys=None,
            features=datasets.Features(
                {
                    "positions": datasets.Array2D(
                        shape=(30000, 3),
                        dtype="float32",
                    ),
                    "normals": datasets.Array2D(
                        shape=(30000, 3),
                        dtype="float32",
                    ),
                    "features": datasets.Array2D(
                        shape=(30000, 6),
                        dtype="float32",
                    ),
                    "scalars": datasets.Sequence(
                        feature=datasets.Value("float32"),
                        length=20,
                    ),
                },
            ),
        )

    def _split_generators(self, dl_manager):
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,  # type: ignore
                gen_kwargs={"nb_samples": 1000},
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,  # type: ignore
                gen_kwargs={"nb_samples": 100},
            ),
        ]

    def _generate_examples(self, nb_samples: int):
        for idx in range(nb_samples):
            yield idx, {
                "positions": torch.randn(30000, 3),
                "normals": torch.randn(30000, 3),
                "features": torch.randn(30000, 6),
                "scalars": torch.randn(20),
            }
```
```python
# main.py
import datasets
import torch

def apply_mean_std(
    dataset: datasets.Dataset,
    means: dict[str, torch.Tensor],
    stds: dict[str, torch.Tensor],
) -> dict[str, torch.Tensor]:
    """Normalize the dataset using the mean and standard deviation of each feature.

    Args:
        dataset (`Dataset`): A huggingface dataset.
        mean (`dict[str, Tensor]`): A dictionary containing the mean of each feature.
        std (`dict[str, Tensor]`): A dictionary containing the standard deviation of each feature.

    Returns:
        dict: A dictionary containing the normalized dataset.
    """
    result = {}
    for key in means.keys():
        # extract data from dataset
        data: torch.Tensor = dataset[key]  # type: ignore
        # extract mean and std from dict
        mean = means[key]  # type: ignore
        std = stds[key]  # type: ignore
        # normalize data
        normalized_data = (data - mean) / std
        result[key] = normalized_data
    return result

# get dataset
ds = datasets.load_dataset(
    path="random_data.py",
    split="train",
).with_format("torch")

# compute mean (along last axis)
means = {key: torch.zeros(ds[key][0].shape[-1]) for key in ds.column_names}
means_sq = {key: torch.zeros(ds[key][0].shape[-1]) for key in ds.column_names}

for batch in ds.iter(batch_size=8):
    for key in ds.column_names:
        data = batch[key]
        batch_size = data.shape[0]
        data = data.reshape(-1, data.shape[-1])
        means[key] += data.mean(dim=0) / len(ds) * batch_size
        means_sq[key] += (data**2).mean(dim=0) / len(ds) * batch_size

# compute std (along last axis)
stds = {key: torch.sqrt(means_sq[key] - means[key] ** 2) for key in ds.column_names}

# normalize each feature of the dataset
ds_normalized = ds.map(
    desc="Applying mean/std",  # type: ignore
    function=apply_mean_std,
    batched=False,
    fn_kwargs={
        "means": means,
        "stds": stds,
    },
)
```
### Expected behavior
Using the previous scripts, the `ds_normalized` mapping completes in ~5 minutes, but any subsequent use of `ds_normalized` is really really slow, for example reapplying `apply_mean_std` to `ds_normalized` takes forever. This is very strange, I'm sure I must be missing something, but I would still expect this to be faster.
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-3.10.0-1160.66.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.2
|
OPEN
| 2023-07-26T14:00:40
| 2024-07-22T12:28:06
| null |
https://github.com/huggingface/datasets/issues/6077
|
Laurent2916
| 6
|
[] |
6,075
|
Error loading music files using `load_dataset`
|
### Describe the bug
I tried to load a music file using `datasets.load_dataset()` from the repository - https://huggingface.co/datasets/susnato/pop2piano_real_music_test
I got the following error -
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2803, in __getitem__
return self._getitem(key)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2788, in _getitem
formatted_output = format_table(
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 629, in format_table
return formatter(pa_table, query_type=query_type)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 398, in __call__
return self.format_column(pa_table)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 442, in format_column
column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 218, in decode_column
return self.features.decode_column(column, column_name) if self.features else column
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1924, in decode_column
[decode_nested_example(self[column_name], value) if value is not None else None for value in column]
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1924, in <listcomp>
[decode_nested_example(self[column_name], value) if value is not None else None for value in column]
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1325, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/audio.py", line 184, in decode_example
array, sampling_rate = sf.read(f)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 372, in read
with SoundFile(file, 'r', samplerate, channels,
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 740, in __init__
self._file = self._open(file, mode_int, closefd)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 1264, in _open
_error_check(_snd.sf_error(file_ptr),
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 1455, in _error_check
raise RuntimeError(prefix + _ffi.string(err_str).decode('utf-8', 'replace'))
RuntimeError: Error opening <_io.BufferedReader name='/home/susnato/.cache/huggingface/datasets/downloads/d2b09cb974b967b13f91553297c40c0f02f3c0d4c8356350743598ff48d6f29e'>: Format not recognised.
```
### Steps to reproduce the bug
Code to reproduce the error -
```python
from datasets import load_dataset
ds = load_dataset("susnato/pop2piano_real_music_test", split="test")
print(ds[0])
```
### Expected behavior
I should be able to read the music file without any error.
### Environment info
- `datasets` version: 2.14.0
- Platform: Linux-5.19.0-50-generic-x86_64-with-glibc2.35
- Python version: 3.9.16
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
CLOSED
| 2023-07-26T12:44:05
| 2023-07-26T13:08:08
| 2023-07-26T13:08:08
|
https://github.com/huggingface/datasets/issues/6075
|
susnato
| 2
|
[] |
6,073
|
version 2.3.2: load_dataset() data_files can't include .xxxx in path
|
### Describe the bug
First, I cd into the workdir.
Then, I just use `load_dataset("json", data_files={"train": "/a/b/c/.d/train/train.json", "test": "/a/b/c/.d/train/test.json"})`,
but it fails with
<FileNotFoundError: Unable to find
'/a/b/c/.d/train/train.jsonl' at
/a/b/c/.d/>
When I debug, the same code works fine in version 2.1.2,
so there may be a bug in the path join.
Here is the whole bug report:
```
/x/datasets/loa │
│ d.py:1656 in load_dataset │
│ │
│ 1653 │ ignore_verifications = ignore_verifications or save_infos │
│ 1654 │ │
│ 1655 │ # Create a dataset builder │
│ ❱ 1656 │ builder_instance = load_dataset_builder( │
│ 1657 │ │ path=path, │
│ 1658 │ │ name=name, │
│ 1659 │ │ data_dir=data_dir, │
│ │
│ x/datasets/loa │
│ d.py:1439 in load_dataset_builder │
│ │
│ 1436 │ if use_auth_token is not None: │
│ 1437 │ │ download_config = download_config.copy() if download_config e │
│ 1438 │ │ download_config.use_auth_token = use_auth_token │
│ ❱ 1439 │ dataset_module = dataset_module_factory( │
│ 1440 │ │ path, │
│ 1441 │ │ revision=revision, │
│ 1442 │ │ download_config=download_config, │
│ │
│ x/datasets/loa │
│ d.py:1097 in dataset_module_factory │
│ │
│ 1094 │ │
│ 1095 │ # Try packaged │
│ 1096 │ if path in _PACKAGED_DATASETS_MODULES: │
│ ❱ 1097 │ │ return PackagedDatasetModuleFactory( │
│ 1098 │ │ │ path, │
│ 1099 │ │ │ data_dir=data_dir, │
│ 1100 │ │ │ data_files=data_files, │
│ │
│x/datasets/loa │
│ d.py:743 in get_module │
│ │
│ 740 │ │ │ if self.data_dir is not None │
│ 741 │ │ │ else get_patterns_locally(str(Path().resolve())) │
│ 742 │ │ ) │
│ ❱ 743 │ │ data_files = DataFilesDict.from_local_or_remote( │
│ 744 │ │ │ patterns, │
│ 745 │ │ │ use_auth_token=self.download_config.use_auth_token, │
│ 746 │ │ │ base_path=str(Path(self.data_dir).resolve()) if self.data │
│ │
│ x/datasets/dat │
│ a_files.py:590 in from_local_or_remote │
│ │
│ 587 │ │ out = cls() │
│ 588 │ │ for key, patterns_for_key in patterns.items(): │
│ 589 │ │ │ out[key] = ( │
│ ❱ 590 │ │ │ │ DataFilesList.from_local_or_remote( │
│ 591 │ │ │ │ │ patterns_for_key, │
│ 592 │ │ │ │ │ base_path=base_path, │
│ 593 │ │ │ │ │ allowed_extensions=allowed_extensions, │
│ │
│ /x/datasets/dat │
│ a_files.py:558 in from_local_or_remote │
│ │
│ 555 │ │ use_auth_token: Optional[Union[bool, str]] = None, │
│ 556 │ ) -> "DataFilesList": │
│ 557 │ │ base_path = base_path if base_path is not None else str(Path() │
│ ❱ 558 │ │ data_files = resolve_patterns_locally_or_by_urls(base_path, pa │
│ 559 │ │ origin_metadata = _get_origin_metadata_locally_or_by_urls(data │
│ 560 │ │ return cls(data_files, origin_metadata) │
│ 561 │
│ │
│ /x/datasets/dat │
│ a_files.py:195 in resolve_patterns_locally_or_by_urls │
│ │
│ 192 │ │ if is_remote_url(pattern): │
│ 193 │ │ │ data_files.append(Url(pattern)) │
│ 194 │ │ else: │
│ ❱ 195 │ │ │ for path in _resolve_single_pattern_locally(base_path, pat │
│ 196 │ │ │ │ data_files.append(path) │
│ 197 │ │
│ 198 │ if not data_files: │
│ │
│ /x/datasets/dat │
│ a_files.py:145 in _resolve_single_pattern_locally │
│ │
│ 142 │ │ error_msg = f"Unable to find '{pattern}' at {Path(base_path).r │
│ 143 │ │ if allowed_extensions is not None: │
│ 144 │ │ │ error_msg += f" with any supported extension {list(allowed │
│ ❱ 145 │ │ raise FileNotFoundError(error_msg) │
│ 146 │ return sorted(out) │
│ 147
```
### Steps to reproduce the bug
1. Use version 2.3.2.
2. In a shell, cd into the workdir (cd /a/b/c/.d/).
3. Call `load_dataset("json", data_files={"train": "/a/b/c/.d/train/train.json", "test": "/a/b/c/.d/train/test.json"})`.
### Expected behavior
The data files should be resolved and loaded correctly; please fix the path resolution.
### Environment info
2.3.2
|
CLOSED
| 2023-07-26T11:09:31
| 2023-08-29T15:53:59
| 2023-08-29T15:53:59
|
https://github.com/huggingface/datasets/issues/6073
|
BUAAChuanWang
| 1
|
[] |
6,071
|
storage_options provided to load_dataset not fully piping through since datasets 2.14.0
|
### Describe the bug
Since the latest release of `datasets` (`2.14.0`), custom filesystem `storage_options` passed to `load_dataset()` do not seem to propagate through all the way - leading to problems if loading data files that need those options to be set.
I think this is because of the new `_prepare_path_and_storage_options()` (https://github.com/huggingface/datasets/pull/6028), which returns the right `storage_options` to use given a path and a `DownloadConfig` - but which might not be taking into account the extra `storage_options` explicitly provided e.g. through `load_dataset()`
### Steps to reproduce the bug
```python
import fsspec
import pandas as pd
import datasets
# Generate mock parquet file
data_files = "demo.parquet"
pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}).to_parquet(data_files)
_storage_options = {"x": 1, "y": 2}
fs = fsspec.filesystem("file", **_storage_options)
dataset = datasets.load_dataset(
"parquet",
data_files=data_files,
storage_options=fs.storage_options
)
```
Looking at the `storage_options` resolved here:
https://github.com/huggingface/datasets/blob/b0177910b32712f28d147879395e511207e39958/src/datasets/data_files.py#L331
they end up being `{}`, instead of propagating through the `storage_options` that were provided to `load_dataset` (`fs.storage_options`). As these then get used for the filesystem operation a few lines below
https://github.com/huggingface/datasets/blob/b0177910b32712f28d147879395e511207e39958/src/datasets/data_files.py#L339
the call will fail if the user-provided `storage_options` were needed.
---
A temporary workaround that seemed to work locally to bypass the problem was to bundle a duplicate of the `storage_options` into the `download_config`, so that they make their way all the way to `_prepare_path_and_storage_options()` and get extracted correctly:
```python
dataset = datasets.load_dataset(
"parquet",
data_files=data_files,
storage_options=fs.storage_options,
download_config=datasets.DownloadConfig(storage_options={fs.protocol: fs.storage_options}),
)
```
### Expected behavior
`storage_options` provided to `load_dataset` take effect in all backend filesystem operations.
### Environment info
datasets==2.14.0
|
CLOSED
| 2023-07-26T09:37:20
| 2023-07-27T12:42:58
| 2023-07-27T12:42:58
|
https://github.com/huggingface/datasets/issues/6071
|
exs-avianello
| 2
|
[] |
6,069
|
KeyError: dataset has no key "image"
|
### Describe the bug
I've loaded a local image dataset with:
`ds = load_dataset("imagefolder", data_dir=path-to-data)`
and defined a transform to process the data, following the Datasets docs.
However, I get a `KeyError`, indicating there's no "image" key in my dataset. When I printed out the example_batch sent to the transformation function, it showed that only the labels are being sent to the function.
For some reason, the images are not in the example batches.
### Steps to reproduce the bug
I'm using the latest stable version of datasets
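A minimal sketch of the setup being described, since the original script was not included (the data directory and transform are hypothetical):
```python
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="path/to/data")  # hypothetical path

def transforms(example_batch):
    # According to the report, example_batch only contains "label" here,
    # even though an "image" column is expected -> KeyError: 'image'.
    example_batch["pixel_values"] = [img.convert("RGB") for img in example_batch["image"]]
    return example_batch

ds = ds.with_transform(transforms)
```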
### Expected behavior
I expect the example_batches to contain both images and labels
### Environment info
I'm using the latest stable version of datasets
|
CLOSED
| 2023-07-25T17:45:50
| 2024-09-06T08:16:16
| 2023-07-27T12:42:17
|
https://github.com/huggingface/datasets/issues/6069
|
etetteh
| 7
|
[] |
6,066
|
AttributeError: '_tqdm_cls' object has no attribute '_lock'
|
### Describe the bug
```python
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/load.py", line 1034, in get_module
data_files = DataFilesDict.from_patterns(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/data_files.py", line 671, in from_patterns
DataFilesList.from_patterns(
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/data_files.py", line 586, in from_patterns
origin_metadata = _get_origin_metadata(data_files, download_config=download_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/data_files.py", line 502, in _get_origin_metadata
return thread_map(
^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/tqdm/contrib/concurrent.py", line 70, in thread_map
return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/tqdm/contrib/concurrent.py", line 48, in _executor_map
with ensure_lock(tqdm_class, lock_name=lock_name) as lk:
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/contextlib.py", line 144, in __exit__
next(self.gen)
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/tqdm/contrib/concurrent.py", line 25, in ensure_lock
del tqdm_class._lock
^^^^^^^^^^^^^^^^
AttributeError: '_tqdm_cls' object has no attribute '_lock'
```
### Steps to reproduce the bug
Happens occasionally.
### Expected behavior
I added a print in tqdm's `ensure_lock()` and got `ensure_lock <datasets.utils.logging._tqdm_cls object at 0x16dddead0>` printed.
According to the code in https://github.com/tqdm/tqdm/blob/master/tqdm/contrib/concurrent.py#L24
```python
@contextmanager
def ensure_lock(tqdm_class, lock_name=""):
    """get (create if necessary) and then restore `tqdm_class`'s lock"""
    print("ensure_lock", tqdm_class, lock_name)
    old_lock = getattr(tqdm_class, '_lock', None)  # don't create a new lock
    lock = old_lock or tqdm_class.get_lock()  # maybe create a new lock
    lock = getattr(lock, lock_name, lock)  # maybe subtype
    tqdm_class.set_lock(lock)
    yield lock
    if old_lock is None:
        del tqdm_class._lock  # <-- It tries to del the `_lock` attribute from tqdm_class.
    else:
        tqdm_class.set_lock(old_lock)
```
But, huggingface datasets `datasets.utils.logging._tqdm_cls` does not have the field `_lock`: https://github.com/huggingface/datasets/blob/main/src/datasets/utils/logging.py#L205
```python
class _tqdm_cls:
    def __call__(self, *args, disable=False, **kwargs):
        if _tqdm_active and not disable:
            return tqdm_lib.tqdm(*args, **kwargs)
        else:
            return EmptyTqdm(*args, **kwargs)

    def set_lock(self, *args, **kwargs):
        self._lock = None
        if _tqdm_active:
            return tqdm_lib.tqdm.set_lock(*args, **kwargs)

    def get_lock(self):
        if _tqdm_active:
            return tqdm_lib.tqdm.get_lock()
```
### Environment info
Python 3.11.4
tqdm '4.65.0'
datasets master
|
CLOSED
| 2023-07-25T07:24:36
| 2023-07-26T10:56:25
| 2023-07-26T10:56:24
|
https://github.com/huggingface/datasets/issues/6066
|
codingl2k1
| 7
|
[] |
6,060
|
Dataset.map() executes twice when in PyTorch DDP mode
|
### Describe the bug
I use `torchrun --standalone --nproc_per_node=2 train.py` to start training, and wrote the code following the [docs](https://huggingface.co/docs/datasets/process#distributed-usage). The trick of using `torch.distributed.barrier()` so that only the main process executes the map doesn't always work. When I am training a model, it maps twice. When I run a test for the dataset and dataloader (just printing the batches), it works. The dataset-loading code is the same in both cases.
On another server with 30 CPU cores, using 2 GPUs, it doesn't work either.
I have tried using `rank` and `local_rank` to check, but neither helped.
### Steps to reproduce the bug
use `torchrun --standalone --nproc_per_node=2 train.py` or `torchrun --standalone train.py` to run
This is my code:
```python
if args.distributed and world_size > 1:
    if args.local_rank > 0:
        print(f"Rank {args.rank}: Gpu {args.gpu} waiting for main process to perform the mapping", force=True)
        torch.distributed.barrier()
    print("Mapping dataset")
    dataset = dataset.map(lambda x: cut_reorder_keys(x, num_stations_list=args.num_stations_list, is_pad=True, is_train=True), num_proc=8, desc="cut_reorder_keys")
    dataset = dataset.map(lambda x: random_shift(x, shift_range=(-160, 0), feature_scale=16), num_proc=8, desc="random_shift")
    dataset_test = dataset_test.map(lambda x: cut_reorder_keys(x, num_stations_list=args.num_stations_list, is_pad=True, is_train=False), num_proc=8, desc="cut_reorder_keys")
    if args.local_rank == 0:
        print("Mapping finished, loading results from main process")
        torch.distributed.barrier()
```
### Expected behavior
Only the main process will execute `map`, while the sub process will load cache from disk.
### Environment info
server with 64 CPU cores (AMD Ryzen Threadripper PRO 5995WX 64-Cores) and 2 RTX 4090
- `python==3.9.16`
- `datasets==2.13.1`
- `torch==2.0.1+cu117`
- `22.04.1-Ubuntu`
server with 30 CPU cores (Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz) and 2 RTX 4090
- `python==3.9.0`
- `datasets==2.13.1`
- `torch==2.0.1+cu117`
- `Ubuntu 20.04`
|
CLOSED
| 2023-07-22T05:06:43
| 2024-01-22T18:35:12
| 2024-01-22T18:35:12
|
https://github.com/huggingface/datasets/issues/6060
|
wanghaoyucn
| 4
|
[] |
6,059
|
Provide ability to load label mappings from file
|
### Feature request
My task is classification of a dataset containing a large label set that includes a hierarchy. Even ignoring the hierarchy, I'm not able to find an example using `datasets` where the label names aren't hard-coded. This works fine for classification with a handful of labels, but ideally there would be a way of loading the name/id mappings required for `datasets.features.ClassLabel` from a file.
It is possible to pass a file to ClassLabel but I cannot see an easy way of using this with `GeneratorBasedBuilder` since `self._info` is called before the `dl_manager` is constructed so even if my dataset contains say `label_mappings.json` there's no way of loading it in order to construct the `datasets.DatasetInfo`
I can see other uses to accessing the `download_manager` from `self._info` - i.e. if the files contain a schema (i.e. `arrow` or `parquet` files) the `datasets.DatasetInfo` could be inferred.
The workaround that was suggested in the forum is to generate a `.py` file from the `label_mappings.json` and import it.
```python
import csv

import datasets
from datasets.tasks import TextClassification

# _DESCRIPTION, _TRAIN_DOWNLOAD_URL and _TEST_DOWNLOAD_URL are defined elsewhere

class TestDatasetBuilder(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "text": datasets.Value("string"),
                    "label": datasets.features.ClassLabel(names=["label_1", "label_2"]),
                }
            ),
            task_templates=[TextClassification(text_column="text", label_column="label")],
        )

    def _split_generators(self, dl_manager):
        train_path = dl_manager.download_and_extract(_TRAIN_DOWNLOAD_URL)
        test_path = dl_manager.download_and_extract(_TEST_DOWNLOAD_URL)
        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": train_path}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": test_path}),
        ]

    def _generate_examples(self, filepath):
        """Generate AG News examples."""
        with open(filepath, encoding="utf-8") as csv_file:
            csv_reader = csv.DictReader(csv_file)
            for id_, row in enumerate(csv_reader):
                yield id_, row
```
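A minimal sketch of a variant of that workaround which reads the JSON directly at import time instead of generating a `.py` module (the file name and layout are assumptions):
```python
import json
from pathlib import Path

import datasets

# Loaded at import time, so the names are available in _info()
# before any download manager exists.
_LABELS = json.loads((Path(__file__).parent / "label_mappings.json").read_text())

_FEATURES = datasets.Features(
    {
        "text": datasets.Value("string"),
        "label": datasets.features.ClassLabel(names=_LABELS),
    }
)
```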
### Motivation
Allow `datasets.DatasetInfo` to be generated based on the contents of the dataset.
### Your contribution
I'm willing to work on a PR with guidance.
|
OPEN
| 2023-07-22T02:04:19
| 2024-04-16T08:07:55
| null |
https://github.com/huggingface/datasets/issues/6059
|
david-waterworth
| 3
|
[
"enhancement"
] |
6,058
|
laion-coco download error
|
### Describe the bug
The full trace:
```
/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/load.py:1744: FutureWarning: 'ignore_verifications' was deprecated in favor of 'verification_mode' in version 2.9.1 and will be removed in 3.0.0.
You can remove this warning by passing 'verification_mode=no_checks' instead.
  warnings.warn(
Downloading and preparing dataset parquet/laion--laion-coco to /home/bian/.cache/huggingface/datasets/laion___parquet/laion--laion-coco-cb4205d7f1863066/0.0.0/bcacc8bdaa0614a5d73d0344c813275e590940c6ea8bc569da462847103a1afd...
Downloading data: 100%|█| 1.89G/1.89G [04:57<00:00,
Downloading data files: 100%|█| 1/1 [04:59<00:00, 2
Extracting data files: 100%|█| 1/1 [00:00<00:00, 13
Generating train split: 0 examples [00:00, ? examples/s]<_io.BufferedReader name='/home/bian/.cache/huggingface/datasets/downloads/26d7a016d25bbd9443115cfa3092136e8eb2f1f5bcd41540cb9234572927f04c'>
Traceback (most recent call last):
File "/home/bian/data/ZOC/download_laion_coco.py", line 4, in <module>
dataset = load_dataset("laion/laion-coco", ignore_verifications=True)
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset
builder_instance.download_and_prepare(
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare
self._download_and_prepare(
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 986, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 1748, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 1842, in _prepare_split_single
generator = self._generate_tables(**gen_kwargs)
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 67, in
_generate_tables
parquet_file = pq.ParquetFile(f)
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/pyarrow/parquet/core.py", line 323, in __init__
self.reader.open(
File "pyarrow/_parquet.pyx", line 1227, in pyarrow._parquet.ParquetReader.open
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file
.
```
I have carefully followed the instructions in #5264 but still get the same error.
Other helpful information:
```
ds = load_dataset("parquet", data_files=
...: "https://huggingface.co/datasets/laion/l
...: aion-coco/resolve/d22869de3ccd39dfec1507
...: f7ded32e4a518dad24/part-00000-2256f782-1
...: 26f-4dc6-b9c6-e6757637749d-c000.snappy.p
...: arquet")
Found cached dataset parquet (/home/bian/.cache/huggingface/datasets/parquet/default-a02eea00aeb08b0e/0.0.0/bb8ccf89d9ee38581ff5e51506d721a9b37f14df8090dc9b2d8fb4a40957833f)
100%|██████████████| 1/1 [00:00<00:00, 4.55it/s]
```
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("laion/laion-coco", ignore_verifications=True/False)
```
### Expected behavior
Properly load Laion-coco dataset
### Environment info
datasets==2.11.0 torch==1.12.1 python 3.10
|
CLOSED
| 2023-07-21T04:24:15
| 2023-07-22T01:42:06
| 2023-07-22T01:42:06
|
https://github.com/huggingface/datasets/issues/6058
|
yangyijune
| 1
|
[] |
6,057
|
Why is the speed difference of gen example so big?
|
```python
def _generate_examples(self, metadata_path, images_dir, conditioning_images_dir):
    with open(metadata_path, 'r') as file:
        metadata = json.load(file)

    for idx, item in enumerate(metadata):
        image_path = item.get('image_path')
        text_content = item.get('text_content')
        image_data = open(image_path, "rb").read()

        yield idx, {
            "text": text_content,
            "image": {
                "path": image_path,
                "bytes": image_data,
            },
            "conditioning_image": {
                "path": image_path,
                "bytes": image_data,
            },
        }
```
Hello,
I use the above function to process my local dataset, but I am very surprised that the speed at which examples are generated varies so much. When I start a training task, **sometimes it is 1000 examples/s, sometimes only 10 examples/s.** (progress bar screenshot omitted)
I'm not saying that the speed changes all the time within one run; I mean the reading speed differs between different training runs, which forces me to restart training over and over until the example-generation speed is normal.
|
CLOSED
| 2023-07-21T03:34:49
| 2023-10-04T18:06:16
| 2023-10-04T18:06:15
|
https://github.com/huggingface/datasets/issues/6057
|
pixeli99
| 1
|
[] |
6,055
|
Fix host URL in The Pile datasets
|
### Describe the bug
In #3627 and #5543, you tried to fix the host URL in The Pile datasets. But both URLs are not working now:
`HTTPError: 404 Client Error: Not Found for URL: https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst`
And
`ConnectTimeout: HTTPSConnectionPool(host='mystic.the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(, 'Connection to mystic.the-eye.eu timed out. (connect timeout=10.0)'))`
### Steps to reproduce the bug
```
from datasets import load_dataset
# This takes a few minutes to run, so go grab a tea or coffee while you wait :)
data_files = "https://mystic.the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"
pubmed_dataset = load_dataset("json", data_files=data_files, split="train")
pubmed_dataset
```
Result:
`ConnectTimeout: HTTPSConnectionPool(host='mystic.the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(, 'Connection to mystic.the-eye.eu timed out. (connect timeout=10.0)'))`
And
```
from datasets import load_dataset
# This takes a few minutes to run, so go grab a tea or coffee while you wait :)
data_files = "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"
pubmed_dataset = load_dataset("json", data_files=data_files, split="train")
pubmed_dataset
```
Result:
`HTTPError: 404 Client Error: Not Found for URL: https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst`
### Expected behavior
Downloading as normal.
### Environment info
Environment info
`datasets` version: 2.9.0
Platform: Windows
Python version: 3.9.13
|
OPEN
| 2023-07-20T09:08:52
| 2023-07-20T09:09:37
| null |
https://github.com/huggingface/datasets/issues/6055
|
nickovchinnikov
| 0
|
[] |
6,054
|
Multi-processed `Dataset.map` slows down a lot when `import torch`
|
### Describe the bug
When using `Dataset.map` with `num_proc > 1`, the speed slows down a lot if I add `import torch` to the start of the script, even though I don't use it.
I'm not sure if it's `torch` only or if any other "large" package will also cause the same result.
BTW, `import lightning` also slows it down.
Below are the progress bars of `Dataset.map`; the only difference between them is with or without `import torch`, but the speed varies by 6-7 times.
- without `import torch`: (progress bar screenshot omitted)
- with `import torch`: (progress bar screenshot omitted)
### Steps to reproduce the bug
Below is the code I used, but I don't think the dataset and the mapping function have much to do with the phenomenon.
```python3
from datasets import load_from_disk, disable_caching
from transformers import AutoTokenizer

# import torch
# import lightning

def rearrange_datapoints(
    batch,
    tokenizer,
    sequence_length,
):
    datapoints = []
    input_ids = []

    for x in batch['input_ids']:
        input_ids += x

    while len(input_ids) >= sequence_length:
        datapoint = input_ids[:sequence_length]
        datapoints.append(datapoint)
        input_ids[:sequence_length] = []

    if input_ids:
        paddings = [-1] * (sequence_length - len(input_ids))
        datapoint = paddings + input_ids if tokenizer.padding_side == 'left' else input_ids + paddings
        datapoints.append(datapoint)

    batch['input_ids'] = datapoints
    return batch

if __name__ == '__main__':
    disable_caching()

    tokenizer = AutoTokenizer.from_pretrained('...', use_fast=False)
    dataset = load_from_disk('...')

    dataset = dataset.map(
        rearrange_datapoints,
        fn_kwargs=dict(
            tokenizer=tokenizer,
            sequence_length=2048,
        ),
        batched=True,
        num_proc=8,
    )
```
### Expected behavior
The multi-processed `Dataset.map` function speed between with and without `import torch` should be the same.
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
|
CLOSED
| 2023-07-20T06:36:14
| 2023-07-21T15:19:37
| 2023-07-21T15:19:37
|
https://github.com/huggingface/datasets/issues/6054
|
ShinoharaHare
| 1
|
[
"duplicate"
] |
6,053
|
Change package name from "datasets" to something less generic
|
### Feature request
I'm repeatedly finding myself in situations where I want to have a package called `datasets.py` or `evaluate.py` in my code and can't because those names are being taken up by Huggingface packages. While I can understand how (even from the user's perspective) it's aesthetically pleasing to have nice terse library names, ultimately a library hogging simple names like this is something I find short-sighted, impractical and at my most irritable, frankly rude.
My preference would be a pattern like what you get with all the other big libraries like numpy or pandas:
```
import huggingface as hf
# hf.transformers, hf.datasets, hf.evaluate
```
or things like
```
import huggingface.transformers as tf
# tf.load_model(), etc
```
If this isn't possible for some technical reason, at least just call the packages something like `hf_transformers` and so on.
I realize this is a very big change that's probably been discussed internally already, but I'm making this issue and sister issues on each huggingface project just to start the conversation and begin tracking community feeling on the matter, since I suspect I'm not the only one who feels like this.
Sorry if this has been requested already on this issue tracker, I couldn't find anything looking for terms like "package name".
Sister issues:
- [transformers](https://github.com/huggingface/transformers/issues/24934)
- **datasets**
- [evaluate](https://github.com/huggingface/evaluate/issues/476)
### Motivation
Not taking up package names the user is likely to want to use.
### Your contribution
No - more a matter of internal discussion among core library authors.
|
CLOSED
| 2023-07-19T19:53:28
| 2024-11-20T21:22:36
| 2023-10-03T16:04:09
|
https://github.com/huggingface/datasets/issues/6053
|
jack-jjm
| 2
|
[
"enhancement"
] |
6,051
|
Skipping shard in the remote repo and resume upload
|
### Describe the bug
For some reason when I try to resume the upload of my dataset, it is very slow to reach the index of the shard from which to resume the uploading.
From my understanding, the problem is in this part of the code:
arrow_dataset.py
```python
for index, shard in logging.tqdm(
    enumerate(itertools.chain([first_shard], shards_iter)),
    desc="Pushing dataset shards to the dataset hub",
    total=num_shards,
    disable=not logging.is_progress_bar_enabled(),
):
    shard_path_in_repo = path_in_repo(index, shard)
    # Upload a shard only if it doesn't already exist in the repository
    if shard_path_in_repo not in data_files:
```
In particular, iterating the generator is slow during the call:
```python
self._select_contiguous(start, length, new_fingerprint=new_fingerprint)
```
I wonder if it is possible to avoid calling this function for shards that are already uploaded and just start from the correct shard index.
### Steps to reproduce the bug
1. Start the upload
```python
dataset = load_dataset("imagefolder", data_dir=DATA_DIR, split="train", drop_labels=True)
dataset.push_to_hub("repo/name")
```
2. Stop and restart the upload after hundreds of shards
### Expected behavior
Skip the uploaded shards faster.
### Environment info
- `datasets` version: 2.5.1
- Platform: Linux-4.18.0-193.el8.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- PyArrow version: 12.0.1
- Pandas version: 2.0.2
|
CLOSED
| 2023-07-19T09:25:26
| 2023-07-20T18:16:01
| 2023-07-20T18:16:00
|
https://github.com/huggingface/datasets/issues/6051
|
rs9000
| 2
|
[] |
6,048
|
When I use datasets.load_dataset, I encounter an HTTP connection error!
|
### Describe the bug
`common_voice_test = load_dataset("audiofolder", data_dir="./dataset/",cache_dir="./cache",split=datasets.Split.TEST)`
when i run the code above, i got the error as below:
--------------------------------------------
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.3.2/datasets/audiofolder/audiofolder.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.3.2/datasets/audiofolder/audiofolder.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f299ed082e0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))")))
--------------------------------------------------
All my data is on the local machine, so why does it need to connect to the internet? How can I fix this? My machine cannot connect to the internet.
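For reference, a hedged sketch of forcing offline mode, assuming a `datasets` release recent enough to bundle the `audiofolder` builder locally (2.3.2 does not, which is why the script is fetched from GitHub):
```python
import os
os.environ["HF_DATASETS_OFFLINE"] = "1"  # must be set before importing datasets

import datasets
from datasets import load_dataset

common_voice_test = load_dataset(
    "audiofolder",
    data_dir="./dataset/",
    cache_dir="./cache",
    split=datasets.Split.TEST,
)
```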
### Steps to reproduce the bug
1
### Expected behavior
No error when I use the `load_dataset` function.
### Environment info
python=3.8.15
|
CLOSED
| 2023-07-18T10:16:34
| 2023-07-18T16:18:39
| 2023-07-18T16:18:39
|
https://github.com/huggingface/datasets/issues/6048
|
yangy1992
| 1
|
[] |
6,046
|
Support proxy and user-agent in fsspec calls
|
Since we switched to the new HfFileSystem, we no longer apply the user's proxy and user-agent.
Using the HTTP_PROXY and HTTPS_PROXY environment variables works though since we use aiohttp to call the HF Hub.
This can be implemented in `_prepare_single_hop_path_and_storage_options`.
Though ideally `HfFileSystem` itself could support passing at least the proxies.
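Until then, a minimal sketch of the environment-variable workaround mentioned above (the proxy address is a placeholder):
```python
import os

# set before anything reaches the HF Hub so the aiohttp-based requests pick them up
os.environ["HTTP_PROXY"] = "http://proxy.example.com:3128"
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:3128"

from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")
```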
|
OPEN
| 2023-07-17T16:39:26
| 2025-06-26T18:26:27
| null |
https://github.com/huggingface/datasets/issues/6046
|
lhoestq
| 10
|
[
"enhancement",
"good second issue"
] |
6,043
|
Compression kwargs have no effect when saving datasets as csv
|
### Describe the bug
Attempting to save a dataset as a compressed csv file, the compression kwargs provided to `.to_csv()` that get piped to pandas' `pandas.DataFrame.to_csv` do not have any effect - resulting in the dataset not getting compressed.
A warning is raised if explicitly providing a `compression` kwarg, but no warnings are raised if relying on the defaults. This can lead to datasets secretly not getting compressed for users expecting the behaviour to match pandas' `.to_csv()`, where the compression format is automatically inferred from the destination path suffix.
### Steps to reproduce the bug
```python
# dataset is not compressed (but at least a warning is emitted)
import datasets
dataset = datasets.load_dataset("rotten_tomatoes", split="train")
dataset.to_csv("uncompressed.csv")
print(os.path.getsize("uncompressed.csv")) # 1008607
dataset.to_csv("compressed.csv.gz", compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1})
print(os.path.getsize("compressed.csv.gz")) # 1008607
```
```shell
>>>
RuntimeWarning: compression has no effect when passing a non-binary object as input.
csv_str = batch.to_pandas().to_csv(
```
```python
# dataset is not compressed and no warnings are emitted
dataset.to_csv("compressed.csv.gz")
print(os.path.getsize("compressed.csv.gz")) # 1008607
# compare with
dataset.to_pandas().to_csv("pandas.csv.gz")
print(os.path.getsize("pandas.csv.gz")) # 418561
```
---
I think that this is because behind the scenes `pandas.DataFrame.to_csv` is always called with a buf-like `path_or_buf`, but users that are providing a path-like to `datasets.Dataset.to_csv` are likely not to expect / know that - leading to a mismatch in their understanding of the expected behaviour of the `compression` kwarg.
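As an interim workaround, one could compress the file handle directly instead of relying on the `compression` kwarg. This is only a sketch, reusing `dataset` from the snippet above and assuming `Dataset.to_csv` writes encoded bytes when given a binary file object, as its `BinaryIO` type hint suggests:
```python
import gzip

# hand datasets an already-compressed binary file object
with gzip.open("compressed.csv.gz", "wb") as f:
    dataset.to_csv(f)
```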
### Expected behavior
The dataset to be saved as a compressed csv file when providing a `compression` kwarg, or when relying on the default `compression='infer'`
### Environment info
`datasets == 2.13.1`
|
OPEN
| 2023-07-17T13:19:21
| 2023-07-22T17:34:18
| null |
https://github.com/huggingface/datasets/issues/6043
|
exs-avianello
| 3
|
[] |
6,039
|
Loading column subset from parquet file produces error since version 2.13
|
### Describe the bug
`load_dataset` allows loading a subset of columns from a parquet file with the `columns` argument. Since version 2.13, this produces the following error:
```
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/datasets/builder.py", line 1879, in _prepare_split_single
for _, table in generator:
File "/usr/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 68, in _generate_tables
raise ValueError(
ValueError: Tried to load parquet data with columns '['sepal_length']' with mismatching features '{'sepal_length': Value(dtype='float64', id=None), 'sepal_width': Value(dtype='float64', id=None), 'petal_length': Value(dtype='float64', id=None), 'petal_width': Value(dtype='float64', id=None), 'species': Value(dtype='string', id=None)}'
```
This seems to occur because `datasets` is checking whether the columns in the schema exactly match the provided list of columns, instead of whether they are a subset.
### Steps to reproduce the bug
```python
# Prepare some sample data
import pandas as pd
iris = pd.read_csv('https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv')
iris.to_parquet('iris.parquet')
# ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
print(iris.columns)
# Load data with datasets
from datasets import load_dataset
# Load full parquet file
dataset = load_dataset('parquet', data_files='iris.parquet')
# Load column subset; throws error for datasets>=2.13
dataset = load_dataset('parquet', data_files='iris.parquet', columns=['sepal_length'])
```
### Expected behavior
No error should be thrown and the given column subset should be loaded.
### Environment info
- `datasets` version: 2.13.0
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
- Python version: 3.10.9
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 1.5.3
|
CLOSED
| 2023-07-16T09:13:07
| 2023-07-24T14:35:04
| 2023-07-24T14:35:04
|
https://github.com/huggingface/datasets/issues/6039
|
kklemon
| 0
|
[] |
6,038
|
File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 992, in _download_and_prepare if str(split_generator.split_info.name).lower() == "all": AttributeError: 'str' object has no attribute 'split_info'. Did you mean: 'splitlines'?
|
Hi, I use the code below to load a local file:
```
def _split_generators(self, dl_manager):
# TODO: This method is tasked with downloading/extracting the data and defining the splits depending on the configuration
# If several configurations are possible (listed in BUILDER_CONFIGS), the configuration selected by the user is in self.config.name
# dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLS
# It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files.
# By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
# urls = _URLS[self.config.name]
data_dir = dl_manager.download_and_extract(_URLs)
print(data_dir)
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
# These kwargs will be passed to _generate_examples
gen_kwargs={
"filepath": os.path.join(data_dir["train"]),
"split": "train",
},
),
datasets.SplitGenerator(
name=datasets.Split.VALIDATION,
# These kwargs will be passed to _generate_examples
gen_kwargs={
"filepath": os.path.join(data_dir["dev"]),
"split": "dev",
},
),
]
```
and error occured
```
Traceback (most recent call last):
File "/home/zhizhou/data1/zhanghao/huggingface/FineTuning_Transformer/load_local_dataset.py", line 2, in <module>
dataset = load_dataset("./QA_script.py",data_files='/home/zhizhou/.cache/huggingface/datasets/conversatiom_corps/part_file.json')
File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/load.py", line 1809, in load_dataset
builder_instance.download_and_prepare(
File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 909, in download_and_prepare
self._download_and_prepare(
File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 1670, in _download_and_prepare
super()._download_and_prepare(
File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 992, in _download_and_prepare
if str(split_generator.split_info.name).lower() == "all":
AttributeError: 'str' object has no attribute 'split_info'. Did you mean: 'splitlines'?
```
Could you help me?
|
CLOSED
| 2023-07-15T07:58:08
| 2023-07-24T11:54:15
| 2023-07-24T11:54:15
|
https://github.com/huggingface/datasets/issues/6038
|
BaiMeiyingxue
| 1
|
[] |
6,037
|
Documentation links to examples are broken
|
### Describe the bug
The links at the bottom of [add_dataset](https://huggingface.co/docs/datasets/v1.2.1/add_dataset.html) to examples of specific datasets are all broken, for example
- text classification: [ag_news](https://github.com/huggingface/datasets/blob/master/datasets/ag_news/ag_news.py) (original data are in csv files)
### Steps to reproduce the bug
Click on links to examples from latest documentation
### Expected behavior
Links should be up to date - it might be more stable to link to https://huggingface.co/datasets/ag_news/blob/main/ag_news.py
### Environment info
dataset v1.2.1
|
CLOSED
| 2023-07-15T04:54:50
| 2023-07-17T22:35:14
| 2023-07-17T15:10:32
|
https://github.com/huggingface/datasets/issues/6037
|
david-waterworth
| 2
|
[] |
6,034
|
load_dataset hangs on WSL
|
### Describe the bug
load_dataset simply hangs. It happens once every ~5 times, and interestingly it hangs for a multiple of 5 minutes (5/10/15 minutes). Using the profiler in PyCharm shows that it spends the time at <method 'connect' of '_socket.socket' objects>. However, a local cache is available, so I am not sure why a socket connection is needed. ([profiler result](https://ibb.co/0Btbbp8))
It only happens on WSL for me. It works for native Windows and my MacBook. (cache quickly recognized and loaded within a second).
### Steps to reproduce the bug
I am using Ubuntu 22.04.2 LTS (GNU/Linux 5.15.90.1-microsoft-standard-WSL2 x86_64)
Python 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0] on linux
>>> import datasets
>>> datasets.load_dataset('ai2_arc', 'ARC-Challenge') # hangs for 5/10/15 minutes
### Expected behavior
cache quickly recognized and loaded within a second
### Environment info
Please let me know if I should provide more environment information.
|
CLOSED
| 2023-07-14T09:03:10
| 2023-07-14T14:48:29
| 2023-07-14T14:48:29
|
https://github.com/huggingface/datasets/issues/6034
|
Andy-Zhou2
| 3
|
[] |
6,033
|
`map` function doesn't fully utilize `input_columns`.
|
### Describe the bug
I wanted to select only some columns of data.
And I thought that's why the argument `input_columns` exists.
What I expected is this:
If there are ["a", "b", "c", "d"] columns and I set `input_columns=["a", "d"]`, the resulting data will have only the ["a", "d"] columns.
But it doesn't select columns.
It preserves existing columns.
The main cause is the `update` call on the dictionary `transformed_batch`.
https://github.com/huggingface/datasets/blob/682d21e94ab1e64c11b583de39dc4c93f0101c5a/src/datasets/iterable_dataset.py#L687-L691
`transformed_batch` gets all the columns by `transformed_batch = dict(batch)`.
Even though `function_args` selects `input_columns`, `update` preserves the columns other than `input_columns`.
I think it should take a new dictionary with columns in `input_columns` like this:
```
# transformed_batch = dict(batch)
# transformed_batch.update(self.function(*function_args, **self.fn_kwargs)
# This is what I think correct.
transformed_batch = self.function(*function_args, **self.fn_kwargs)
```
Let me know how to use `input_columns`.
### Steps to reproduce the bug
Described all above.
### Expected behavior
Described all above.
### Environment info
datasets: 2.12
python: 3.8
|
CLOSED
| 2023-07-14T08:49:28
| 2023-07-14T09:16:04
| 2023-07-14T09:16:04
|
https://github.com/huggingface/datasets/issues/6033
|
kwonmha
| 0
|
[] |
6,032
|
DownloadConfig.proxies not work when load_dataset_builder calling HfApi.dataset_info
|
### Describe the bug
```python
download_config = DownloadConfig(proxies={'https': '<my proxy>'})
builder = load_dataset_builder(..., download_config=download_config)
```
But, when getting the dataset_info from HfApi, the http requests not using the proxies.
### Steps to reproduce the bug
1. Setup proxies in DownloadConfig.
2. Call `load_dataset_build` with download_config.
3. Inspect the call stack in HfApi.dataset_info.

### Expected behavior
DownloadConfig.proxies works for getting dataset_info.
### Environment info
https://github.com/huggingface/datasets/commit/406b2212263c0d33f267e35b917f410ff6b3bc00
Python 3.11.4
|
OPEN
| 2023-07-14T07:22:55
| 2023-09-11T13:50:41
| null |
https://github.com/huggingface/datasets/issues/6032
|
codingl2k1
| 5
|
[] |
6,031
|
Argument type for map function changes when using `input_columns` for `IterableDataset`
|
### Describe the bug
I wrote a `tokenize(examples)` function to pass to the `map` function of an `IterableDataset`.
It processes a dictionary-type `examples` parameter.
It is used in `train_dataset = train_dataset.map(tokenize, batched=True)`.
No error is raised.
Then I found some unnecessary keys and values in `examples`, so I added the `input_columns` argument to the `map` function to select keys and values.
It gives me an error saying
```
TypeError: tokenize() takes 1 positional argument but 3 were given.
```
The code below matters.
https://github.com/huggingface/datasets/blob/406b2212263c0d33f267e35b917f410ff6b3bc00/src/datasets/iterable_dataset.py#L687
For example, `inputs = {"a":1, "b":2, "c":3}`.
If `self.input_columns` is `None`,
`inputs` is a dictionary type variable and `function_args` becomes a `list` of a single `dict` variable.
`function_args` becomes `[{"a":1, "b":2, "c":3}]`
Otherwise, let's say `self.input_columns = ["a", "c"]`.
`[inputs[col] for col in self.input_columns]` results in `[1, 3]`.
I think it should be `[{"a":1, "c":3}]`.
I want to ask if the resulting format is intended.
Maybe I can modify `tokenize()` to have 2 parameters in this case instead of having 1 dictionary.
But this is confusing to me.
Or it should be fixed as `[{col:inputs[col] for col in self.input_columns}]`
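For reference, a small sketch of the 2-parameter option mentioned above (`ds` and its column names are hypothetical): with `input_columns=["a", "c"]`, the mapped function currently receives one positional argument per selected column rather than a single dict.
```python
def combine(a, c):
    # a and c hold the (batched) values of columns "a" and "c"
    return {"combined": [f"{x}-{y}" for x, y in zip(a, c)]}

ds = ds.map(combine, input_columns=["a", "c"], batched=True)
```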
### Steps to reproduce the bug
Run `map` function of `IterableDataset` with `input_columns` argument.
### Expected behavior
It looks better for `function_args` to keep the same format.
I think it should be `[{"a":1, "c":3}]`.
### Environment info
dataset version: 2.12
python: 3.8
|
CLOSED
| 2023-07-14T05:11:14
| 2023-07-14T14:44:15
| 2023-07-14T14:44:15
|
https://github.com/huggingface/datasets/issues/6031
|
kwonmha
| 1
|
[] |
6,025
|
Using a dataset for a use other than it was intended for.
|
### Describe the bug
Hi, I want to use the rotten tomatoes dataset for a task other than classification, but when I interleave the dataset, it throws ```'ValueError: Column label is not present in features.'```. It seems that the label_col must be present in the dataset for some reason?
Here is the full stacktrace
```
File "/home/suryahari/Vornoi/tryage-handoff-other-datasets.py", line 276, in create_dataloaders
dataset = interleave_datasets(dsfold, stopping_strategy="all_exhausted")
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py", line 134, in interleave_datasets
return _interleave_iterable_datasets(
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1833, in _interleave_iterable_datasets
info = DatasetInfo.from_merge([d.info for d in datasets])
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 275, in from_merge
dataset_infos = [dset_info.copy() for dset_info in dataset_infos if dset_info is not None]
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 275, in <listcomp>
dataset_infos = [dset_info.copy() for dset_info in dataset_infos if dset_info is not None]
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 378, in copy
return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
File "<string>", line 20, in __init__
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 208, in __post_init__
self.task_templates = [
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 209, in <listcomp>
template.align_with_features(self.features) for template in (self.task_templates)
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/tasks/text_classification.py", line 20, in align_with_features
raise ValueError(f"Column {self.label_column} is not present in features.")
ValueError: Column label is not present in features.
```
### Steps to reproduce the bug
Delete the column `labels` from the `rotten_tomatoes` dataset. Try to interleave it with other datasets.
### Expected behavior
Should let me use the dataset with just the `text` field
### Environment info
latest datasets library? I don't think this was an issue in earlier versions.
|
CLOSED
| 2023-07-12T22:33:17
| 2023-07-13T13:57:36
| 2023-07-13T13:57:36
|
https://github.com/huggingface/datasets/issues/6025
|
surya-narayanan
| 1
|
[] |
6,022
|
Batch map raises TypeError: '>=' not supported between instances of 'NoneType' and 'int'
|
### Describe the bug
When mapping some datasets with `batched=True`, datasets may raise an exception:
```python
Traceback (most recent call last):
File "/Users/codingl2k1/Work/datasets/venv/lib/python3.11/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/utils/py_utils.py", line 1328, in _write_generator_to_queue
for i, result in enumerate(func(**kwargs)):
File "/Users/codingl2k1/Work/datasets/src/datasets/arrow_dataset.py", line 3483, in _map_single
writer.write_batch(batch)
File "/Users/codingl2k1/Work/datasets/src/datasets/arrow_writer.py", line 549, in write_batch
array = cast_array_to_feature(col_values, col_type) if col_type is not None else col_values
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/table.py", line 1831, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/table.py", line 1831, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/table.py", line 2063, in cast_array_to_feature
return feature.cast_storage(array)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/features/features.py", line 1098, in cast_storage
if min_max["max"] >= self.num_classes:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: '>=' not supported between instances of 'NoneType' and 'int'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/codingl2k1/Work/datasets/t1.py", line 33, in <module>
ds = ds.map(transforms, num_proc=14, batched=True, batch_size=5)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/dataset_dict.py", line 850, in map
{
File "/Users/codingl2k1/Work/datasets/src/datasets/dataset_dict.py", line 851, in <dictcomp>
k: dataset.map(
^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/arrow_dataset.py", line 577, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/arrow_dataset.py", line 542, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/arrow_dataset.py", line 3179, in map
for rank, done, content in iflatmap_unordered(
File "/Users/codingl2k1/Work/datasets/src/datasets/utils/py_utils.py", line 1368, in iflatmap_unordered
[async_result.get(timeout=0.05) for async_result in async_results]
File "/Users/codingl2k1/Work/datasets/src/datasets/utils/py_utils.py", line 1368, in <listcomp>
[async_result.get(timeout=0.05) for async_result in async_results]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/venv/lib/python3.11/site-packages/multiprocess/pool.py", line 774, in get
raise self._value
TypeError: '>=' not supported between instances of 'NoneType' and 'int'
```
### Steps to reproduce the bug
1. Checkout the latest main of datasets.
2. Run the code:
```python
from datasets import load_dataset
def transforms(examples):
# examples["pixel_values"] = [image.convert("RGB").resize((100, 100)) for image in examples["image"]]
return examples
ds = load_dataset("scene_parse_150")
ds = ds.map(transforms, num_proc=14, batched=True, batch_size=5)
print(ds)
```
### Expected behavior
map without exception.
### Environment info
Datasets: https://github.com/huggingface/datasets/commit/b8067c0262073891180869f700ebef5ac3dc5cce
Python: 3.11.4
System: Macos
|
CLOSED
| 2023-07-12T03:20:17
| 2023-07-12T16:18:06
| 2023-07-12T16:18:05
|
https://github.com/huggingface/datasets/issues/6022
|
codingl2k1
| 1
|
[] |
6,020
|
Inconsistent "The features can't be aligned" error when combining map, multiprocessing, and variable length outputs
|
### Describe the bug
I'm using a dataset with map and multiprocessing to run a function that returns a variable-length list of outputs. This output list may be empty. Normally this is handled fine, but there is an edge case that crops up when using multiprocessing. In some cases, an empty list result ends up in a dataset shard consisting of a single item. This results in a `The features can't be aligned` error that is difficult to debug because it depends on the number of processes/shards used.
I've reproduced a minimal example below. My current workaround is to fill empty results with a dummy value that I filter out afterwards, but this was a weird error that took a while to track down.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_list([{'idx':i} for i in range(60)])
def test_func(row, idx):
if idx==58:
return {'output': []}
else:
return {'output' : [{'test':1}, {'test':2}]}
# this works fine
test1 = dataset.map(lambda row, idx: test_func(row, idx), with_indices=True, num_proc=4)
# this fails
test2 = dataset.map(lambda row, idx: test_func(row, idx), with_indices=True, num_proc=32)
>ValueError: The features can't be aligned because the key output of features {'idx': Value(dtype='int64', id=None), 'output': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None)} has unexpected type - Sequence(feature=Value(dtype='null', id=None), length=-1, id=None) (expected either [{'test': Value(dtype='int64', id=None)}] or Value("null").
```
The error occurs during the check
```python
_check_if_features_can_be_aligned([dset.features for dset in dsets])
```
When the multiprocessing splitting lines up just right with the empty return value, one of the `dset` in `dsets` will have a single item with an empty list value, causing the error.
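A rough sketch of the dummy-value workaround mentioned in the description, reusing `dataset` from the snippet above (the sentinel `-1` is arbitrary):
```python
def test_func_safe(row, idx):
    if idx == 58:
        return {'output': [{'test': -1}]}  # sentinel instead of an empty list
    else:
        return {'output': [{'test': 1}, {'test': 2}]}

test2 = dataset.map(lambda row, idx: test_func_safe(row, idx), with_indices=True, num_proc=32)
test2 = test2.filter(lambda row: row['output'] != [{'test': -1}])  # drop the sentinel rows
```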
### Expected behavior
Expected behavior is the result would be the same regardless of the `num_proc` value used.
### Environment info
Datasets version 2.11.0
Python 3.9.16
|
OPEN
| 2023-07-11T20:40:38
| 2024-10-27T06:30:13
| null |
https://github.com/huggingface/datasets/issues/6020
|
kheyer
| 4
|
[] |
6,017
|
Switch to huggingface_hub's HfFileSystem
|
Instead of the current `datasets.filesystems.hffilesystem.HfFileSystem`, which can be slow in some cases.
related to https://github.com/huggingface/datasets/issues/5846 and https://github.com/huggingface/datasets/pull/5919
|
CLOSED
| 2023-07-11T16:24:40
| 2023-07-17T17:01:01
| 2023-07-17T17:01:01
|
https://github.com/huggingface/datasets/issues/6017
|
lhoestq
| 0
|
[
"enhancement"
] |
6,014
|
Request to Share/Update Dataset Viewer Code
|
Overview:
The repository (huggingface/datasets-viewer) was recently archived and when I tried to run the code, there was the error message "AttributeError: module 'datasets.load' has no attribute 'prepare_module'". I could not resolve the issue myself due to lack of documentation of that attribute.
Request:
I kindly request the sharing of the code responsible for the dataset preview functionality or help with resolving the error. The dataset viewer on the Hugging Face website is incredibly useful since it is compatible with different types of inputs. It allows users to find datasets that meet their needs more efficiently. If needed, I am willing to contribute to the project by testing, documenting, and providing feedback on the dataset viewer code.
Thank you for considering this request, and I look forward to your response.
|
CLOSED
| 2023-07-11T06:36:09
| 2024-07-20T07:29:08
| 2023-09-25T12:01:17
|
https://github.com/huggingface/datasets/issues/6014
|
lilyorlilypad
| 10
|
[
"duplicate"
] |
6,013
|
[FR] `map` should reuse unchanged columns from the previous dataset to avoid disk usage
|
### Feature request
Currently adding a new column with `map` will cause all the data in the dataset to be duplicated and stored/cached on the disk again. It should reuse unchanged columns.
### Motivation
This allows having datasets with different columns but sharing some basic columns. Currently, these datasets would become too expensive to store, and one would need some kind of on-the-fly join, which also doesn't seem to be implemented.
### Your contribution
_
|
OPEN
| 2023-07-10T06:42:20
| 2025-06-19T06:30:38
| null |
https://github.com/huggingface/datasets/issues/6013
|
NightMachinery
| 2
|
[
"enhancement",
"good second issue"
] |
6,012
|
[FR] Transform Chaining, Lazy Mapping
|
### Feature request
Currently using a `map` call processes and duplicates the whole dataset, which takes both time and disk space.
The solution is to allow lazy mapping, which is essentially a saved chain of transforms that are applied on the fly whenever a slice of the dataset is requested.
The API should look like `map`, as `set_transform` changes the current dataset while `map` returns another dataset.
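For comparison, a minimal sketch of the closest existing option, `with_transform`, which applies a function lazily on access but, unlike the requested feature, replaces the single format transform rather than chaining several of them (the toy column is hypothetical):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["foo", "a longer example"]})
ds = ds.with_transform(lambda batch: {"text_len": [len(t) for t in batch["text"]]})
print(ds[0])  # the transform runs here, on access; nothing extra is written to disk
```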
### Motivation
Lazy processing allows lower disk usage and faster experimentation.
### Your contribution
_
|
OPEN
| 2023-07-09T21:40:21
| 2025-01-20T14:06:28
| null |
https://github.com/huggingface/datasets/issues/6012
|
NightMachinery
| 9
|
[
"enhancement"
] |
6,011
|
Documentation: wiki_dpr Dataset has no metric_type for Faiss Index
|
### Describe the bug
After loading `wiki_dpr` using:
```py
ds = load_dataset(path='wiki_dpr', name='psgs_w100.multiset.compressed', split='train')
print(ds.get_index("embeddings").metric_type) # prints nothing because the value is None
```
the index does not have a defined `metric_type`. This is an issue because I do not know how the `scores` are being computed for `get_nearest_examples()`.
### Steps to reproduce the bug
System: Python 3.9.16, Transformers 4.30.2, WSL
After loading `wiki_dpr` using:
```py
ds = load_dataset(path='wiki_dpr', name='psgs_w100.multiset.compressed', split='train')
print(ds.get_index("embeddings").metric_type) # prints nothing because the value is None
```
the index does not have a defined `metric_type`. This is an issue because I do not know how the `scores` are being computed for `get_nearest_examples()`.
```py
from transformers import DPRQuestionEncoder, DPRContextEncoder, DPRQuestionEncoderTokenizer, DPRContextEncoderTokenizer
tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-multiset-base")
encoder = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-multiset-base")
def encode_question(query, tokenizer=tokenizer, encoder=encoder):
inputs = tokenizer(query, return_tensors='pt')
question_embedding = encoder(**inputs)[0].detach().numpy()
return question_embedding
def get_knn(query, k=5, tokenizer=tokenizer, encoder=encoder, verbose=False):
enc_question = encode_question(query, tokenizer, encoder)
topk_results = ds.get_nearest_examples(index_name='embeddings',
query=enc_question,
k=k)
a = torch.tensor(enc_question[0]).reshape(768)
b = torch.tensor(topk_results.examples['embeddings'][0])
print(a.shape, b.shape)
print(torch.dot(a, b))
print((a-b).pow(2).sum())
return topk_results
```
The [FAISS documentation](https://github.com/facebookresearch/faiss/wiki/MetricType-and-distances) suggests the metric is usually L2 distance (without the square root) or the inner product. I compute both for the sample query:
```py
query = """ it catapulted into popular culture along with a line of action figures and other toys by Bandai.[2] By 2001, the media franchise had generated over $6 billion in toy sales.
Despite initial criticism that its action violence targeted child audiences, the franchise has been commercially successful."""
get_knn(query,k=5)
```
Here, I get dot product of 80.6020 and L2 distance of 77.6616 and
```py
NearestExamplesResults(scores=array([76.20431 , 75.312416, 74.945404, 74.866394, 74.68506 ],
dtype=float32), examples={'id': ['3081096', '2004811', '8908258', '9594124', '286575'], 'text': ['actors, resulting in the "Power Rangers" franchise which has continued since then into sequel TV series (with "Power Rangers Beast Morphers" set to premiere in 2019), comic books, video games, and three feature films, with a further cinematic universe planned. Following from the success of "Power Rangers", Saban acquired the rights to more of Toei\'s library, creating "VR Troopers" and "Big Bad Beetleborgs" from several Metal Hero Series shows and "Masked Rider" from Kamen Rider Series footage. DIC Entertainment joined this boom by acquiring the rights to "Gridman the Hyper Agent" and turning it into "Superhuman Samurai Syber-Squad". In 2002,',
```
Using `k=1` indicates that the higher the output score, the better the match, so the metric should not be L2 distance. However, my manually computed inner product (80.6) has a discrepancy with the reported score (76.2). Perhaps this has to do with me using the `compressed` embeddings?
### Expected behavior
```py
ds = load_dataset(path='wiki_dpr', name='psgs_w100.multiset.compressed', split='train')
print(ds.get_index("embeddings").metric_type) # METRIC_INNER_PRODUCT
```
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-4.18.0-477.13.1.el8_8.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
|
CLOSED
| 2023-07-09T08:30:19
| 2023-07-11T03:02:36
| 2023-07-11T03:02:36
|
https://github.com/huggingface/datasets/issues/6011
|
YichiRockyZhang
| 2
|
[] |
6,010
|
Improve `Dataset`'s string representation
|
Currently, `Dataset.__repr__` outputs a dataset's column names and the number of rows. We could improve it by printing its features and the first few rows.
We should also implement `_repr_html_` to have a rich HTML representation in notebooks/Streamlit.
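A rough sketch, not a design proposal, of what such a representation could include:
```python
def dataset_repr(ds, num_preview_rows=3):
    """Hypothetical helper: summarize the features and show the first few rows."""
    lines = [f"Dataset(num_rows={ds.num_rows})"]
    lines += [f"  {name}: {feature}" for name, feature in ds.features.items()]
    lines += [f"  row {i}: {ds[i]}" for i in range(min(num_preview_rows, ds.num_rows))]
    return "\n".join(lines)
```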
|
OPEN
| 2023-07-07T16:38:03
| 2023-09-01T03:45:07
| null |
https://github.com/huggingface/datasets/issues/6010
|
mariosasko
| 3
|
[
"enhancement"
] |
6,008
|
Dataset.from_generator consistently freezes at ~1000 rows
|
### Describe the bug
Whenever I try to create a dataset which contains images using `Dataset.from_generator`, it freezes around 996 rows. I suppose it has something to do with memory consumption, but there's more memory available.
Somehow it worked a few times, but mostly this makes the datasets library much more cumbersome to work with, because generators are the easiest way to turn an existing dataset into a Hugging Face dataset.
I've let it run in the frozen state for way longer than it can possibly take to load the actual dataset.
Let me know if you have ideas how to resolve it!
### Steps to reproduce the bug
```python
from datasets import Dataset
import numpy as np
def gen():
for row in range(10000):
yield {"i": np.random.rand(512, 512, 3)}
Dataset.from_generator(gen)
# -> 90% of the time gets stuck around 1000 rows
```
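One hedged guess: the Arrow writer flushes in batches (1000 examples by default, if I recall correctly), so a freeze around row 996 may simply be the first multi-gigabyte flush of these 512x512x3 float64 arrays. Assuming `Dataset.from_generator` forwards `writer_batch_size` to the underlying builder, a smaller value would make each flush cheaper and help narrow this down:
```python
# reuses gen() from the snippet above; the kwarg forwarding is an assumption
Dataset.from_generator(gen, writer_batch_size=100)
```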
### Expected behavior
Should continue and go through all the examples yielded by the generator, or at least throw an error or somehow communicate what's going on.
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 12.0.1
- Pandas version: 1.5.1
|
CLOSED
| 2023-07-05T16:06:48
| 2023-07-10T13:46:39
| 2023-07-10T13:46:39
|
https://github.com/huggingface/datasets/issues/6008
|
andreemic
| 3
|
[] |
6,007
|
Get an error "OverflowError: Python int too large to convert to C long" when loading a large dataset
|
### Describe the bug
When loading a large dataset with the following code
```python
from datasets import load_dataset
dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train')
```
We encountered the error: "OverflowError: Python int too large to convert to C long"
The error looks something like:
```
OverflowError: Python int too large to convert to C long
During handling of the above exception, another exception occurred:
OverflowError Traceback (most recent call last)
<ipython-input-7-0ed8700e662d> in <module>
----> 1 dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train', cache_dir='/sfs/MNBVC/.cache/')
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1749 ignore_verifications=ignore_verifications,
1750 try_from_hf_gcs=try_from_hf_gcs,
-> 1751 use_auth_token=use_auth_token,
1752 )
1753
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
703 if not downloaded_from_gcs:
704 self._download_and_prepare(
--> 705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
706 )
707 # Sync info
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)
1225
1226 def _download_and_prepare(self, dl_manager, verify_infos):
-> 1227 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
1228
1229 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable:
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
791 try:
792 # Prepare split will record examples associated to the split
--> 793 self._prepare_split(split_generator, **prepare_split_kwargs)
794 except OSError as e:
795 raise OSError(
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys)
1219 writer.write(example, key)
1220 finally:
-> 1221 num_examples, num_bytes = writer.finalize()
1222
1223 split_generator.split_info.num_examples = num_examples
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/arrow_writer.py in finalize(self, close_stream)
536 # Re-intializing to empty list for next batch
537 self.hkey_record = []
--> 538 self.write_examples_on_file()
539 if self.pa_writer is None:
540 if self.schema:
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/arrow_writer.py in write_examples_on_file(self)
407 # Since current_examples contains (example, key) tuples
408 batch_examples[col] = [row[0][col] for row in self.current_examples]
--> 409 self.write_batch(batch_examples=batch_examples)
410 self.current_examples = []
411
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
506 col_try_type = try_features[col] if try_features is not None and col in try_features else None
507 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col)
--> 508 arrays.append(pa.array(typed_sequence))
509 inferred_features[col] = typed_sequence.get_inferred_type()
510 schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema
/sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/array.pxi in pyarrow.lib.array()
/sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol()
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/arrow_writer.py in __arrow_array__(self, type)
180 else:
181 trying_cast_to_python_objects = True
--> 182 out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))
183 # use smaller integer precisions if possible
184 if self.trying_int_optimization:
/sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/array.pxi in pyarrow.lib.array()
/sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()
/sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
OverflowError: Python int too large to convert to C long
```
However, that dataset can be loaded in a streaming manner:
```python
from datasets import load_dataset
dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train', streaming=True)
for i in dataset:
    pass  # it works well
```
Another issue is reported in our dataset hub:
https://huggingface.co/datasets/liwu/MNBVC/discussions/2
### Steps to reproduce the bug
from datasets import load_dataset
dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train')
### Expected behavior
the dataset can be safely loaded
### Environment info
- `datasets` version: 2.4.0
- Platform: Linux-3.10.0-1160.an7.x86_64-x86_64-with-centos-7.9
- Python version: 3.6.8
- PyArrow version: 6.0.1
- Pandas version: 1.1.5
|
OPEN
| 2023-07-05T15:16:50
| 2024-02-07T22:22:35
| null |
https://github.com/huggingface/datasets/issues/6007
|
silverriver
| 8
|
[
"arrow"
] |
6,006
|
NotADirectoryError when loading gigaword
|
### Describe the bug
Got a `NotADirectoryError` when loading the gigaword dataset.
### Steps to reproduce the bug
When running
```
import datasets
datasets.load_dataset('gigaword')
```
Got the following exception:
```bash
Traceback (most recent call last): [0/1862]
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1629, in _prepare_split_single
for key, record in generator:
File "/home/x/.cache/huggingface/modules/datasets_modules/datasets/gigaword/ea83a8b819190acac5f2dae011fad51dccf269a0604ec5dd24795b
64efb424b6/gigaword.py", line 115, in _generate_examples
with open(src_path, encoding="utf-8") as f_d, open(tgt_path, encoding="utf-8") as f_s:
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/streaming.py", line 71, in wrapper
return function(*args, use_auth_token=use_auth_token, **kwargs)
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/download/streaming_download_manager.py", line 493, in xope
n
return open(main_hop, mode, *args, **kwargs)
NotADirectoryError: [Errno 20] Not a directory: '/home/x/.cache/huggingface/datasets/downloads/6da52431bb5124d90cf51a0187d2dbee9046e
89780c4be7599794a4f559048ec/org_data/train.src.txt'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "gigaword.py", line 38, in <module>
main()
File "gigaword.py", line 35, in main
train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/")
File "/home/x/MICL/preprocess/fewshot_gym_dataset.py", line 199, in generate_k_shot_data
dataset = self.load_dataset()
File "gigaword.py", line 29, in load_dataset
return datasets.load_dataset('gigaword')
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/load.py", line 1809, in load_dataset
builder_instance.download_and_prepare(
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 909, in download_and_prepare
self._download_and_prepare(
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1670, in _download_and_prepare
super()._download_and_prepare(
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1004, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1508, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1665, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
Download and process the dataset successfully
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-5.0.0-1032-azure-x86_64-with-glibc2.10
- Python version: 3.8.0
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
|
CLOSED
| 2023-07-05T06:23:41
| 2023-07-05T06:31:02
| 2023-07-05T06:31:01
|
https://github.com/huggingface/datasets/issues/6006
|
xipq
| 1
|
[] |
6,003
|
interleave_datasets & DataCollatorForLanguageModeling having a conflict ?
|
### Describe the bug
Hi everyone :)
I have two local & custom datasets (1 "sentence" per line) which I split 95/5 by lines for pre-training a Bert model. I use a modified version of `run_mlm.py` in order to be able to make use of `interleave_datasets`:
- `tokenize()` runs fine
- `group_text()` runs fine
Every time, on step 19, I get
```pytb
File "env/lib/python3.9/site-packages/transformers/data/data_collator.py", line 779, in torch_mask_tokens
inputs[indices_random] = random_words[indices_random]
RuntimeError: Index put requires the source and destination dtypes match, got Float for the destination and Long for the source.
```
I tried:
- training without interleave on dataset 1, it runs
- training without interleave on dataset 2, it runs
- training without `.to_iterable_dataset()`, it hangs then crashes
- training without `group_texts()` and with padding to max_length seemed to fix the issue, but who knows if this was just an issue that would have come much later in terms of steps.
I might have coded something wrong, but I can't tell what.
### Steps to reproduce the bug
I have this function:
```py
def build_dataset(path: str, percent: str):
dataset = load_dataset(
"text",
data_files={"train": [path]},
split=f"train[{percent}]"
)
dataset = dataset.map(
lambda examples: tokenize(examples["text"]),
batched=True,
num_proc=num_proc,
)
dataset = dataset.map(
group_texts,
batched=True,
num_proc=num_proc,
desc=f"Grouping texts in chunks of {tokenizer.max_seq_length}",
remove_columns=["text"]
)
print(len(dataset))
return dataset.to_iterable_dataset()
```
I hardcoded `group_texts`:
```py
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, and if the total_length < max_seq_length we exclude this batch and return an empty dict.
# We could add padding if the model supported it instead of this drop, you can customize this part to your needs.
total_length = (total_length // 512) * 512
# Split by chunks of max_len.
result = {
k: [t[i: i + 512] for i in range(0, total_length, 512)]
for k, t in concatenated_examples.items()
}
# result = {k: [el for el in elements if el] for k, elements in result.items()}
return result
```
And then I build datasets using the following code:
```py
train1 = build_dataset("d1.txt", ":95%")
train2 = build_dataset("d2.txt", ":95%")
dev1 = build_dataset("d1.txt", "95%:")
dev2 = build_dataset("d2.txt", "95%:")
```
and finally I run
```py
train_dataset = interleave_datasets(
[train1, train2],
probabilities=[0.8, 0.2],
seed=42
)
eval_dataset = interleave_datasets(
[dev1, dev2],
probabilities=[0.8, 0.2],
seed=42
)
```
Then I run the training part which remains mostly untouched:
> CUDA_VISIBLE_DEVICES=1 python custom_dataset.py --model_type bert --per_device_train_batch_size 32 --do_train --output_dir /var/mlm/training-bert/model --max_seq_length 512 --save_steps 10000 --save_total_limit 3 --auto_find_batch_size --logging_dir ./logs-bert --learning_rate 0.0001 --do_train --num_train_epochs 25 --warmup_steps 10000 --max_step 45000 --fp16
### Expected behavior
The model should then train normally, but fails every time at the same step (19).
Printing the variables at `inputs[indices_random] = random_words[indices_random]` shows a magnificent empty tensor (, 32) [if I remember correctly].
### Environment info
transformers[torch] 4.30.2
Ubuntu
A100 0 CUDA 12
Driver Version: 525.116.04
|
OPEN
| 2023-07-03T17:15:31
| 2023-07-03T17:15:31
| null |
https://github.com/huggingface/datasets/issues/6003
|
PonteIneptique
| 0
|
[] |
5,999
|
Getting a 409 error while loading xglue dataset
|
### Describe the bug
Unable to load xglue dataset
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.load_dataset("xglue", "ntg")
```
> ConnectionError: Couldn't reach https://xglue.blob.core.windows.net/xglue/xglue_full_dataset.tar.gz (error 409)
### Expected behavior
Expected the dataset to load
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
|
CLOSED
| 2023-06-30T04:13:54
| 2023-06-30T05:57:23
| 2023-06-30T05:57:22
|
https://github.com/huggingface/datasets/issues/5999
|
Praful932
| 1
|
[] |
5,998
|
The current implementation has a potential bug in the sort method
|
### Describe the bug
In the sort method, here's a piece of code:
```python
# column_names: Union[str, Sequence_[str]]
# Check proper format of and for duplicates in column_names
if not isinstance(column_names, list):
column_names = [column_names]
```
I get an error when I pass in a tuple: even though the `column_names` type annotation implies that a tuple can be passed, it raises an error, as in the example below.
```python
from datasets import load_dataset
dataset = load_dataset('glue', 'ax')['test']
dataset.sort(column_names=('premise', 'hypothesis'))
# Raise ValueError: Column '('premise', 'hypothesis')' not found in the dataset.
```
Of course, after I modified the tuple into a list, everything worked fine.
Change the code to the following and there will be no problem:
```python
# Check proper format of and for duplicates in column_names
if not isinstance(column_names, list):
if isinstance(column_names, str):
column_names = [column_names]
else:
column_names = list(column_names)
```
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('glue', 'ax')['test']
dataset.sort(column_names=('premise', 'hypothesis'))
# Raise ValueError: Column '('premise', 'hypothesis')' not found in the dataset.
```
### Expected behavior
Passing tuple into column_names should be equivalent to passing list
### Environment info
- `datasets` version: 2.13.0
- Platform: macOS-13.1-arm64-arm-64bit
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.2
|
CLOSED
| 2023-06-30T03:16:57
| 2023-06-30T14:21:03
| 2023-06-30T14:11:25
|
https://github.com/huggingface/datasets/issues/5998
|
wangyuxinwhy
| 1
|
[] |
5,997
|
extend the map function so it can wrap around long text that does not fit in the context window
|
### Feature request
I understand `dataset` provides a [`map`](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L2849) function. This function in turn takes in a callable that is used to tokenize the text on which a model is trained. Frequently this text will not fit within a model's context window. In this case it would be useful to wrap around the text into multiple rows with each row fitting the model's context window. I tried to do it using this code as an example, which in turn I have borrowed from [here](https://stackoverflow.com/a/76343993/147530):
```
data = data.map(lambda samples: tokenizer(samples["text"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True)
```
but running the code gives me this error:
```
File "/llm/fine-tune.py", line 117, in <module>
data = data.map(lambda samples: tokenizer(samples["text"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True)
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 580, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 545, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3087, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3480, in _map_single
writer.write_batch(batch)
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_writer.py", line 556, in write_batch
pa_table = pa.Table.from_arrays(arrays, schema=schema)
File "pyarrow/table.pxi", line 3798, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 2962, in pyarrow.lib.Table.validate
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Column 1 named input_ids expected length 394 but got length 447
```
The lambda function I have provided is correctly chopping up long text so it wraps around (and because of this 394 samples become 447 after wrap around) but the dataset `map` function does not like it.
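For what it's worth, the length mismatch comes from the columns that are left untouched: with `return_overflowing_tokens=True` the tokenizer returns more rows (447) than the input batch (394), so the original `text` column no longer lines up. Dropping the original columns in the same `map` call is the usual fix; a sketch, reusing `data` and `tokenizer` from above and assuming `text` is the only original column:
```python
data = data.map(
    lambda samples: tokenizer(
        samples["text"],
        max_length=tokenizer.model_max_length,
        truncation=True,
        stride=4,
        return_overflowing_tokens=True,
    ),
    batched=True,
    remove_columns=["text"],  # drop the original columns so row counts are free to differ
)
```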
### Motivation
please see above
### Your contribution
I'm afraid I don't have much knowledge to help
|
OPEN
| 2023-06-29T22:15:21
| 2023-07-03T17:58:52
| null |
https://github.com/huggingface/datasets/issues/5997
|
siddhsql
| 2
|
[
"enhancement"
] |
5,993
|
ValueError: Table schema does not match schema used to create file
|
### Describe the bug
Saving a dataset as parquet fails with a `ValueError: Table schema does not match schema used to create file` if the dataset was obtained out of a `.select_columns()` call with columns selected out of order.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict(
{
"x1": [1, 2, 3],
"x2": [10, 11, 12],
}
)
ds = dataset.select_columns(["x2", "x1"])
ds.to_parquet("demo.parquet")
```
```shell
>>>
ValueError: Table schema does not match schema used to create file:
table:
x2: int64
x1: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x2": {"dtype": "int64", "_type": "V' + 53 vs.
file:
x1: int64
x2: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x1": {"dtype": "int64", "_type": "V' + 53
```
---
I think this is because after the `.select_columns()` call with out of order columns, the output dataset features' schema ends up being out of sync with the schema of the arrow table backing it.
```python
ds.features.arrow_schema
>>>
x1: int64
x2: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x1": {"dtype": "int64", "_type": "V' + 53
ds.data.schema
>>>
x2: int64
x1: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x2": {"dtype": "int64", "_type": "V' + 53
```
So when we call `.to_parquet()`, the behind-the-scenes call to `datasets.io.parquet.ParquetDatasetWriter(...).write()`, which initialises the backend `pyarrow.parquet.ParquetWriter` with `schema = self.dataset.features.arrow_schema`, trips up `pyarrow` on write when [it checks](https://github.com/apache/arrow/blob/11b140a734a516e436adaddaeb35d23f30dcce44/python/pyarrow/parquet/core.py#L1086-L1090) that the `ParquetWriter` schema matches the schema of the table being written 🙌
https://github.com/huggingface/datasets/blob/6ed837325cb539a5deb99129e5ad181d0269e050/src/datasets/io/parquet.py#L139-L141
### Expected behavior
The dataset gets successfully saved as parquet.
In the same way as it does when saving it as csv:
```python
import datasets
dataset = datasets.Dataset.from_dict(
{
"x1": [1, 2, 3],
"x2": [10, 11, 12],
}
)
ds = dataset.select_columns(["x2", "x1"])
ds.to_csv("demo.csv")
```
### Environment info
`python==3.11`
`datasets==2.13.1`
|
CLOSED
| 2023-06-27T10:54:07
| 2023-06-27T15:36:42
| 2023-06-27T15:32:44
|
https://github.com/huggingface/datasets/issues/5993
|
exs-avianello
| 2
|
[] |
5,991
|
`map` with any joblib backend
|
We recently enabled the (experimental) parallel backend switch for data download and extraction but not for `map` yet.
Right now we're using our `iflatmap_unordered` implementation for multiprocessing that uses a shared Queue to gather progress updates from the subprocesses and show a progress bar in the main process.
If we had a Queue implementation that works on any joblib backend by leveraging the filesystem that is shared among workers, we could have `iflatmap_unordered` for joblib and therefore a `map` with any joblib backend with a progress bar!
Note that the Queue doesn't need to be that optimized though since we can choose a small frequency for progress updates (like 1 update per second).
|
OPEN
| 2023-06-26T10:33:42
| 2025-09-04T10:43:06
| null |
https://github.com/huggingface/datasets/issues/5991
|
lhoestq
| 2
|
[
"enhancement"
] |
5,989
|
Set a rule on the config and split names
|
> should we actually allow characters like spaces? maybe it's better to add validation for whitespace symbols and directly in datasets and raise
https://github.com/huggingface/datasets-server/issues/853
|
OPEN
| 2023-06-26T07:34:14
| 2023-07-19T14:22:54
| null |
https://github.com/huggingface/datasets/issues/5989
|
severo
| 3
|
[] |
5,988
|
ConnectionError: Couldn't reach dataset_infos.json
|
### Describe the bug
I'm trying to load codeparrot/codeparrot-clean-train, but get the following error:
ConnectionError: Couldn't reach https://huggingface.co/datasets/codeparrot/codeparrot-clean-train/resolve/main/dataset_infos.json (ConnectionError(ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))))
### Steps to reproduce the bug
train_data = load_dataset('codeparrot/codeparrot-clean-train', split='train')
### Expected behavior
download the dataset
### Environment info
centos7
|
CLOSED
| 2023-06-25T12:39:31
| 2023-07-07T13:20:57
| 2023-07-07T13:20:57
|
https://github.com/huggingface/datasets/issues/5988
|
yulingao
| 1
|
[] |
5,987
|
Why max_shard_size is not supported in load_dataset and passed to download_and_prepare
|
### Describe the bug
https://github.com/huggingface/datasets/blob/a8a797cc92e860c8d0df71e0aa826f4d2690713e/src/datasets/load.py#L1809
What I can do is bypass `load_dataset` and use `load_dataset_builder` + `download_and_prepare` instead.
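A short sketch of that workaround (the repository name is a placeholder, and it assumes `download_and_prepare` accepts `max_shard_size` as its signature suggests):
```python
from datasets import load_dataset_builder

builder = load_dataset_builder("user/some_dataset")
builder.download_and_prepare(max_shard_size="500MB")
ds = builder.as_dataset(split="train")
```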
### Steps to reproduce the bug
https://github.com/huggingface/datasets/blob/a8a797cc92e860c8d0df71e0aa826f4d2690713e/src/datasets/load.py#L1809
### Expected behavior
Users can define the max shard size.
### Environment info
datasets==2.13.1
|
CLOSED
| 2023-06-25T04:19:13
| 2023-06-29T16:06:08
| 2023-06-29T16:06:08
|
https://github.com/huggingface/datasets/issues/5987
|
npuichigo
| 5
|
[] |
5,985
|
Cannot reuse tokenizer object for dataset map
|
### Describe the bug
Related to https://github.com/huggingface/transformers/issues/24441. Not sure if this is a tokenizer issue or caching issue, so filing in both.
Passing the tokenizer to the dataset map function causes the tokenizer to be fingerprinted weirdly. After calling the tokenizer with arguments like padding and truncation, the tokenizer object changes internally, even though the hash remains the same.
But dumps is able to detect that internal change which causes the tokenizer object's fingerprint to change.
### Steps to reproduce the bug
```python
from transformers import AutoTokenizer
from datasets.utils.py_utils import dumps # Huggingface datasets
t = AutoTokenizer.from_pretrained('bert-base-uncased')
t.save_pretrained("tok1")
th1 = hash(dumps(t))
text = "This is an example text"
ttext = t(text, max_length=512, padding="max_length", truncation=True)
t.save_pretrained("tok2")
th2 = hash(dumps(t))
assert th1 == th2 # Assertion Error
```
But if you use just the hash of the object without dumps, the hashes don't change
```python
from transformers import AutoTokenizer
from datasets.utils.py_utils import dumps # Huggingface datasets
t = AutoTokenizer.from_pretrained('bert-base-uncased')
th1 = hash(t) # Just hash no dumps
text = "This is an example text"
ttext = t(text, max_length=512, padding="max_length", truncation=True)
th2 = hash(t) # Just hash no dumps
assert th1 == th2 # This is OK
```
This causes situations such as the following
1. Create a text file like this `yes "This is an example text" | head -n 10000 > lines.txt`
```python
from transformers import AutoTokenizer
import datasets
class TokenizeMapper(object):
"""Mapper for tokenizer.
This is needed because the caching mechanism of HuggingFace does not work on
lambdas. Each time a new lambda will be created by a new process which will
lead to a different hash.
This way we can have a universal mapper object in init and reuse it with the same
hash for each process.
"""
def __init__(self, tokenizer):
"""Initialize the tokenizer."""
self.tokenizer = tokenizer
def __call__(self, examples, **kwargs):
"""Run the mapper."""
texts = examples["text"]
tt = self.tokenizer(texts, max_length=256, padding="max_length", truncation=True)
batch_outputs = {
"input_ids": tt.input_ids,
"attention_mask": tt.attention_mask,
}
return batch_outputs
t = AutoTokenizer.from_pretrained('bert-base-uncased')
mapper = TokenizeMapper(t)
ds = datasets.load_dataset("text", data_files="lines.txt")
mds1 = ds.map(
mapper,
batched=False,
remove_columns=["text"],
).with_format("torch")
mds2 = ds.map(
mapper,
batched=False,
remove_columns=["text"],
).with_format("torch")
```
The second call to map should reuse the cached processed dataset from mds1, but instead it redoes the tokenization because of the behavior of dumps.
### Expected behavior
We should be able to initialize a tokenizer. And reusing it should let us reuse the same map computation for the same dataset.
The second call to map should reuse the cached processed dataset from mds1, but instead it redoes the tokenization because of the behavior of dumps.
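In the meantime, a workaround sketch that seems to help in my case (I'm not sure it is robust): continuing the snippet above, call the tokenizer once with the same padding/truncation arguments before the first `map`, so its internal state is already mutated before the first fingerprint is computed:
```python
t = AutoTokenizer.from_pretrained("bert-base-uncased")
# warm-up call: mutates the tokenizer's internal state once, so later calls inside
# map() no longer change its pickled representation (and thus its fingerprint)
t("warm up", max_length=256, padding="max_length", truncation=True)
mapper = TokenizeMapper(t)
```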
### Environment info
- `datasets` version: 2.13.0
- Platform: Linux-6.1.31_1-x86_64-with-glibc2.36
- Python version: 3.9.16
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.2
|
CLOSED
| 2023-06-23T14:45:31
| 2023-07-21T14:09:14
| 2023-07-21T14:09:14
|
https://github.com/huggingface/datasets/issues/5985
|
vikigenius
| 2
|
[
"duplicate"
] |
5,984
|
AutoSharding IterableDataset's when num_workers > 1
|
### Feature request
Minimal Example
```
import torch
from datasets import IterableDataset
d = IterableDataset.from_file(<file_name>)
dl = torch.utils.data.dataloader.DataLoader(d,num_workers=3)
for sample in dl:
print(sample)
```
Warning:
Too many dataloader workers: 2 (max is dataset.n_shards=1). Stopping 1 dataloader workers.
To parallelize data loading, we give each process some shards (or data sources) to process. Therefore it's unnecessary to have a number of workers greater than dataset.n_shards=1. To enable more parallelism, please split the dataset in more files than 1.
Expected Behavior:
The dataset is sharded so that each CPU/worker uses a contiguous subset (contiguous, so that checkpoint loading/saving keeps working)
### Motivation
I have a lot of unused CPUs and would like to be able to shard iterable datasets with PyTorch's DataLoader when num_workers > 1. This is for a very large single file. I am aware that we can use `split_dataset_by_node` to ensure that each node (for distributed training) gets different shards, but we should extend it so that the same also happens across multiple dataloader workers.
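To make the request concrete, here is a rough manual sketch of the kind of worker-level sharding I mean (just a workaround wrapper, not the built-in behavior I'm asking for; it is round-robin rather than contiguous, and every worker still reads the full underlying stream, which is exactly the inefficiency to avoid):
```python
import itertools
from torch.utils.data import DataLoader, IterableDataset as TorchIterableDataset, get_worker_info

class WorkerSharded(TorchIterableDataset):
    """Each DataLoader worker keeps every num_workers-th example of the wrapped iterable."""
    def __init__(self, hf_iterable):
        self.hf_iterable = hf_iterable

    def __iter__(self):
        info = get_worker_info()
        if info is None:  # single-process data loading
            yield from self.hf_iterable
        else:
            # worker i keeps examples i, i + num_workers, i + 2 * num_workers, ...
            yield from itertools.islice(iter(self.hf_iterable), info.id, None, info.num_workers)

# dl = DataLoader(WorkerSharded(d), num_workers=3)
```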
### Your contribution
If someone points me to what needs to change, I can create a PR.
|
OPEN
| 2023-06-23T14:34:20
| 2024-03-22T15:01:14
| null |
https://github.com/huggingface/datasets/issues/5984
|
mathephysicist
| 8
|
[
"enhancement"
] |
5,982
|
404 on Datasets Documentation Page
|
### Describe the bug
Getting a 404 from the Hugging Face Datasets docs page:
https://huggingface.co/docs/datasets/index
### Steps to reproduce the bug
1. Go to URL https://huggingface.co/docs/datasets/index
2. Notice 404 not found
### Expected behavior
URL should either show docs or redirect to new location
### Environment info
huggingface.co
|
CLOSED
| 2023-06-22T20:14:57
| 2023-06-26T15:45:03
| 2023-06-26T15:45:03
|
https://github.com/huggingface/datasets/issues/5982
|
kmulka-bloomberg
| 2
|
[] |
5,981
|
Only two cores are getting used in sagemaker with pytorch 3.10 kernel
|
### Describe the bug
When using the newer pytorch 3.10 kernel, only 2 cores are being used by huggingface filter and map functions. The Pytorch 3.9 kernel would use as many cores as specified in the num_proc field.
We have solved this in our own code by placing the following snippet in the code that is called inside subprocesses:
```os.sched_setaffinity(0, {i for i in range(1000)})```
The problem, as near as we can tell, is that once upon a time, cpu affinity was set using a bitmask ("0xfffff" and the like), and affinity recently changed to using a list of processors rather than the mask. As such, only processors 1 and 17 are shown to be working in htop.

When running functions via `map`, the above resetting of affinity works to spread across the cores. When using `filter`, however, only two cores are active.
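For reference, a minimal sketch of where we place that affinity reset when calling `map` (the function and column names below are made up):
```python
import os

def process_batch(batch):
    # reset the inherited affinity mask so this subprocess may run on any core
    os.sched_setaffinity(0, set(range(os.cpu_count())))
    batch["n_chars"] = [len(t) for t in batch["text"]]
    return batch

ds = ds.map(process_batch, batched=True, num_proc=16)
```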
### Steps to reproduce the bug
Repro steps:
1. Create an aws sagemaker instance
2. use the pytorch 3_10 kernel
3. Load a dataset
4. run a filter operation
5. watch as only 2 cores are used when num_proc > 2
6. run a map operation
7. watch as only 2 cores are used when num_proc > 2
8. run a map operation with processor affinity reset inside the function called via map
9. Watch as all cores run
### Expected behavior
All specified cores are used via the num_proc argument.
### Environment info
AWS sagemaker with the following init script run in the terminal after instance creation:
conda init bash
bash
conda activate pytorch_p310
pip install Wand PyPDF pytesseract datasets seqeval pdfplumber transformers pymupdf sentencepiece timm donut-python accelerate optimum xgboost
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
sudo yum -y install htop
sudo yum -y update
sudo yum -y install wget libstdc++ autoconf automake libtool autoconf-archive pkg-config gcc gcc-c++ make libjpeg-devel libpng-devel libtiff-devel zlib-devel
|
CLOSED
| 2023-06-22T19:57:31
| 2023-10-30T06:17:40
| 2023-07-24T11:54:52
|
https://github.com/huggingface/datasets/issues/5981
|
mmr-crexi
| 4
|
[] |
5,980
|
Viewing dataset card returns “502 Bad Gateway”
|
The url is: https://huggingface.co/datasets/Confirm-Labs/pile_ngrams_trigrams
I am able to successfully view the “Files and versions” tab: [Confirm-Labs/pile_ngrams_trigrams at main](https://huggingface.co/datasets/Confirm-Labs/pile_ngrams_trigrams/tree/main)
Any help would be appreciated! Thanks! I hope this is the right place to report an issue like this.
|
CLOSED
| 2023-06-22T19:14:48
| 2023-06-27T08:38:19
| 2023-06-26T14:42:45
|
https://github.com/huggingface/datasets/issues/5980
|
tbenthompson
| 3
|
[] |
5,975
|
Streaming Dataset behind Proxy - FileNotFoundError
|
### Describe the bug
When trying to stream a dataset I get the following error after a few minutes of waiting.
```
FileNotFoundError: https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json
If the repo is private or gated, make sure to log in with `huggingface-cli login`.
```
I have already set the proxy environment variables. Downloading a dataset without streaming works as expected.
Still I suspect that this is connected to being behind a proxy.
Is there a way to set the proxy for streaming datasets? Possibly a keyword argument that gets passed to fsspec?
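One thing I plan to try (I'm not sure it covers every request made while streaming) is passing `storage_options`, which gets forwarded to fsspec's HTTPFileSystem; `trust_env=True` should make the underlying aiohttp session honour the proxy environment variables:
```python
import os
from datasets import load_dataset

os.environ['http_proxy'] = "http://example.com:xxxx"
os.environ['https_proxy'] = "http://example.com:xxxx"

ds = load_dataset(
    "facebook/voxpopuli",
    name="de",
    streaming=True,
    storage_options={"client_kwargs": {"trust_env": True}},
)
```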
### Steps to reproduce the bug
This is the code i use.
```
import os
os.environ['http_proxy'] = "http://example.com:xxxx"
os.environ['https_proxy'] = "http://example.com:xxxx"
from datasets import load_dataset
ds = load_dataset("facebook/voxpopuli", name="de", streaming=True)
```
### Expected behavior
I would expect the streaming functionality to use the set proxy settings.
### Environment info
- `datasets` version: 2.13.0
- Platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.35
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 2.0.2
|
CLOSED
| 2023-06-21T19:10:02
| 2023-06-30T05:55:39
| 2023-06-30T05:55:38
|
https://github.com/huggingface/datasets/issues/5975
|
Veluchs
| 9
|
[] |
5,971
|
Docs: make "repository structure" easier to find
|
The page https://huggingface.co/docs/datasets/repository_structure explains how to create a simple repository structure without a dataset script.
It's the simplest way to create a dataset and should be easier to find, particularly on the docs' first pages.
|
OPEN
| 2023-06-21T08:26:44
| 2023-07-05T06:51:38
| null |
https://github.com/huggingface/datasets/issues/5971
|
severo
| 5
|
[
"documentation"
] |
5,970
|
description disappearing from Info when Uploading a Dataset Created with `from_dict`
|
### Describe the bug
When uploading a dataset created locally using `from_dict` with a specified `description` field, the description appears before upload but is missing after upload and re-download.
### Steps to reproduce the bug
I think the most relevant pattern in the code might be the following lines:
```
description_json_str = json.dumps(
{
"dataset_id": dataset.spec.dataset_id,
"env_name": dataset.spec.env_spec.id,
"action_space": serialize_space(dataset.spec.action_space),
"observation_space": serialize_space(dataset.spec.observation_space),
}
)
hugging_face_dataset = Dataset.from_dict(
episodes_dict, info=DatasetInfo(description=description_json_str)
)
```
Which comes from this function https://github.com/balisujohn/minarai/blob/8e023727f0a8488c4451651d9f7a79b981412c40/minari/integrations/hugging_face.py#L39
To replicate,
clone this branch of my Minari fork https://github.com/balisujohn/minarai/tree/dev-huggingface then run
```
python3.8 -m venv env
source env/bin/activate
python3 -m pip install -e .
python3 -m pip install pytest
```
Then change the Hugging Face repo path in the test called `test_hugging_face_push_and_pull_dataset` in `tests/integrations/test_hugging_face.py` to one you have permissions to write to.
Then run:
```
pytest tests/integrations/test_hugging_face.py::test_hugging_face_push_and_pull_dataset
```
### Expected behavior
DATASET INFO BEFORE UPLOADING
DatasetInfo(description='{"dataset_id": "dummy-combo-test-v0", "env_name": "DummyComboEnv-v0", "action_space": "{\\"type\\": \\"Tuple\\", \\"subspaces\\": [{\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [2.0], \\"high\\": [3.0]}, {\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [4.0], \\"high\\": [5.0]}]}", "observation_space": "{\\"type\\": \\"Tuple\\", \\"subspaces\\": [{\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [2.0], \\"high\\": [3.0]}, {\\"type\\": \\"Tuple\\", \\"subspaces\\": [{\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [2.0], \\"high\\": [3.0]}, {\\"type\\": \\"Dict\\", \\"subspaces\\": {\\"component_1\\": {\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [-1.0], \\"high\\": [1.0]}, \\"component_2\\": {\\"type\\": \\"Dict\\", \\"subspaces\\": {\\"subcomponent_1\\": {\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [2.0], \\"high\\": [3.0]}, \\"subcomponent_2\\": {\\"type\\": \\"Tuple\\", \\"subspaces\\": [{\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [4.0], \\"high\\": [5.0]}, {\\"type\\": \\"Discrete\\", \\"dtype\\": \\"int64\\", \\"start\\": 0, \\"n\\": 10}]}}}}}]}]}"}', citation='', homepage='', license='', features={'observations': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': {'component_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'component_2': {'subcomponent_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'subcomponent_2': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': Value(dtype='int64', id=None)}}}}}, 'actions': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None)}, 'rewards': Value(dtype='int64', id=None), 'truncations': Value(dtype='bool', id=None), 'terminations': Value(dtype='bool', id=None), 'episode_ids': Value(dtype='int64', id=None)}, post_processed=None, supervised_keys=None, task_templates=None, builder_name=None, config_name=None, version=None, splits=None, download_checksums=None, download_size=None, post_processing_size=None, dataset_size=None, size_in_bytes=None)
...
DATASET INFO AFTER UPLOADING AND DOWNLOADING
DatasetInfo(description='', citation='', homepage='', license='', features={'observations': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': {'component_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'component_2': {'subcomponent_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'subcomponent_2': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': Value(dtype='int64', id=None)}}}}}, 'actions': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None)}, 'rewards': Value(dtype='int64', id=None), 'truncations': Value(dtype='bool', id=None), 'terminations': Value(dtype='bool', id=None), 'episode_ids': Value(dtype='int64', id=None)}, post_processed=None, supervised_keys=None, task_templates=None, builder_name=None, config_name=None, version=None, splits={'train': SplitInfo(name='train', num_bytes=4846, num_examples=60, shard_lengths=None, dataset_name='parquet')}, download_checksums={'https://huggingface.co/datasets/balisujohn/minari_test/resolve/8217b614ff9ba5edc1a30c7df430e92a46f65363/data/train-00000-of-00001-7c5900b93b35745e.parquet': {'num_bytes': 9052, 'checksum': None}}, download_size=9052, post_processing_size=None, dataset_size=4846, size_in_bytes=13898)
...
### Environment info
- `datasets` version: 2.13.0
- Platform: Linux-5.15.0-75-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.2
|
OPEN
| 2023-06-20T19:18:26
| 2023-06-22T14:23:56
| null |
https://github.com/huggingface/datasets/issues/5970
|
balisujohn
| 2
|
[] |
5,968
|
Common Voice datasets still need `use_auth_token=True`
|
### Describe the bug
We don't need to pass `use_auth_token=True` anymore to download gated datasets or models, so the following should work if correctly logged in.
```py
from datasets import load_dataset
load_dataset("mozilla-foundation/common_voice_6_1", "tr", split="train+validation")
```
However it throws an error - probably because something weird is hardcoded into the dataset loading script.
### Steps to reproduce the bug
1.)
```
huggingface-cli login
```
2.) Make sure that you have accepted the license here:
https://huggingface.co/datasets/mozilla-foundation/common_voice_6_1
3.) Run:
```py
from datasets import load_dataset
load_dataset("mozilla-foundation/common_voice_6_1", "tr", split="train+validation")
```
4.) You'll get:
```
File ~/hf/lib/python3.10/site-packages/datasets/builder.py:963, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
961 split_dict = SplitDict(dataset_name=self.name)
962 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 963 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
965 # Checksums verification
966 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:
File ~/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_6_1/f4d7854c466f5bd4908988dbd39044ec4fc634d89e0515ab0c51715c0127ffe3/common_voice_6_1.py:150, in CommonVoice._split_generators(self, dl_manager)
148 hf_auth_token = dl_manager.download_config.use_auth_token
149 if hf_auth_token is None:
--> 150 raise ConnectionError(
151 "Please set use_auth_token=True or use_auth_token='<TOKEN>' to download this dataset"
152 )
154 bundle_url_template = STATS["bundleURLTemplate"]
155 bundle_version = bundle_url_template.split("/")[0]
ConnectionError: Please set use_auth_token=True or use_auth_token='<TOKEN>' to download this dataset
```
### Expected behavior
One should not have to pass `use_auth_token=True`. Also see discussion here: https://github.com/huggingface/blog/pull/1243#discussion_r1235131150
### Environment info
```
- `datasets` version: 2.13.0
- Platform: Linux-6.2.0-76060200-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.16.0.dev0
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
```
|
CLOSED
| 2023-06-20T11:58:37
| 2023-07-29T16:08:59
| 2023-07-29T16:08:58
|
https://github.com/huggingface/datasets/issues/5968
|
patrickvonplaten
| 4
|
[] |
5,967
|
Config name / split name lost after map with multiproc
|
### Describe the bug
Calling the `.map` method on a dataset loses its config name / split name, but only when run with multiprocessing
### Steps to reproduce the bug
```python
from datasets import Audio, load_dataset
from transformers import AutoFeatureExtractor
import numpy as np
# load dummy dataset
libri = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean")
# make train / test splits
libri = libri["validation"].train_test_split(seed=42, shuffle=True, test_size=0.1)
# example feature extractor
model_id = "ntu-spml/distilhubert"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id, do_normalize=True, return_attention_mask=True)
sampling_rate = feature_extractor.sampling_rate
libri = libri.cast_column("audio", Audio(sampling_rate=sampling_rate))
max_duration = 30.0
def preprocess_function(examples):
audio_arrays = [x["array"] for x in examples["audio"]]
inputs = feature_extractor(
audio_arrays,
sampling_rate=feature_extractor.sampling_rate,
max_length=int(feature_extractor.sampling_rate * max_duration),
truncation=True,
return_attention_mask=True,
)
return inputs
# single proc map
libri_encoded = libri.map(
preprocess_function, remove_columns=["audio", "file"], batched=True, num_proc=1
)
print(10 * "=" ,"Single processing", 10 * "=")
print("Config name before: ", libri["train"].config_name, " Split name before: ", libri["train"].split)
print("Config name after: ", libri_encoded["train"].config_name, " Split name after: ", libri_encoded["train"].split)
# multi proc map
libri_encoded = libri.map(
preprocess_function, remove_columns=["audio", "file"], batched=True, num_proc=2
)
print(10 * "=" ,"Multi processing", 10 * "=")
print("Config name before: ", libri["train"].config_name, " Split name before: ", libri["train"].split)
print("Config name after: ", libri_encoded["train"].config_name, " Split name after: ", libri_encoded["train"].split)
```
**Print Output:**
```
========== Single processing ==========
Config name before: clean Split name before: validation
Config name after: clean Split name after: validation
========== Multi processing ==========
Config name before: clean Split name before: validation
Config name after: None Split name after: None
```
=> we can see that the config/split names are lost in the multiprocessing setting
### Expected behavior
Should retain both config / split names in the multiproc setting
### Environment info
- `datasets` version: 2.13.1.dev0
- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.2
|
OPEN
| 2023-06-19T17:27:36
| 2023-06-28T08:55:25
| null |
https://github.com/huggingface/datasets/issues/5967
|
sanchit-gandhi
| 2
|
[] |
5,965
|
"Couldn't cast array of type" in complex datasets
|
### Describe the bug
When doing a map over a dataset with complex types, sometimes `datasets` is unable to infer a valid schema for the data returned by the mapped function. This often comes from conflicting types, like when both empty lists and filled lists are competing for the same field value.
This is prone to happen in batch mapping, when the mapper returns a sequence of null/empty values and other batches are non-null. A workaround is to manually cast the new batch to a pyarrow table (like implemented in this [workaround](https://github.com/piercefreeman/lassen/pull/3)) but it feels like this ideally should be solved at the core library level.
Note that the reproduction case only throws this error if the first datapoint has the empty list. If it is processed later, datasets already detects its representation as list-type and therefore allows the empty list to be provided.
### Steps to reproduce the bug
A trivial reproduction case:
```python
from typing import Iterator, Any
import pandas as pd
import pytest
from datasets import Dataset
def batch_to_examples(batch: dict[str, list[Any]]) -> Iterator[dict[str, Any]]:
    # every column has the same number of values, so take the length from any of them
    lengths = [len(values) for values in batch.values()]
    for i in range(lengths[0] if lengths else 0):
        yield {feature: values[i] for feature, values in batch.items()}
def examples_to_batch(examples) -> dict[str, list[Any]]:
    batch = {}
    for example in examples:
        for feature, value in example.items():
            if feature not in batch:
                batch[feature] = []
            batch[feature].append(value)
    return batch
def batch_process(examples, explicit_schema: bool = False):
    new_examples = []
    for example in batch_to_examples(examples):
        new_examples.append(dict(texts=example["raw_text"].split()))
    return examples_to_batch(new_examples)
df = pd.DataFrame(
[
{"raw_text": ""},
{"raw_text": "This is a test"},
{"raw_text": "This is another test"},
]
)
dataset = Dataset.from_pandas(df)
# datasets won't be able to typehint a dataset that starts with an empty example.
with pytest.raises(TypeError, match="Couldn't cast array of type"):
dataset = dataset.map(
batch_process,
batched=True,
batch_size=1,
num_proc=1,
remove_columns=dataset.column_names,
)
```
This results in crashes like:
```bash
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 1819, in wrapper
return func(array, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 2109, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 1819, in wrapper
return func(array, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 1998, in array_cast
raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}")
TypeError: Couldn't cast array of type string to null
```
### Expected behavior
The code should successfully map and create a new dataset without error.
### Environment info
Mac OSX, Linux
|
CLOSED
| 2023-06-19T14:16:14
| 2023-07-26T15:13:53
| 2023-07-26T15:13:53
|
https://github.com/huggingface/datasets/issues/5965
|
piercefreeman
| 4
|
[] |
5,963
|
Got an error _pickle.PicklingError use Dataset.from_spark.
|
python 3.9.2
Got a `_pickle.PicklingError` when using `Dataset.from_spark`.
The dataset import loads data from a Spark dataframe on a multi-node Spark cluster:
```python
df = spark.read.parquet(args.input_data).repartition(50)
ds = Dataset.from_spark(df, keep_in_memory=True,
                        cache_dir="/pnc-data/data/nuplan/t5_spark/cache_data")
ds.save_to_disk(args.output_data)
```
Error:
```
_pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.
23/06/16 21:17:20 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)
```
_Originally posted by @yanzia12138 in https://github.com/huggingface/datasets/issues/5701#issuecomment-1594674306_
Traceback (most recent call last):
File "/home/work/main.py", line 100, in <module>
run(args)
File "/home/work/main.py", line 80, in run
ds = Dataset.from_spark(df1, keep_in_memory=True,
File "/home/work/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1281, in from_spark
return SparkDatasetReader(
File "/home/work/.local/lib/python3.9/site-packages/datasets/io/spark.py", line 53, in read
self.builder.download_and_prepare(
File "/home/work/.local/lib/python3.9/site-packages/datasets/builder.py", line 909, in download_and_prepare
self._download_and_prepare(
File "/home/work/.local/lib/python3.9/site-packages/datasets/builder.py", line 1004, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/work/.local/lib/python3.9/site-packages/datasets/packaged_modules/spark/spark.py", line 254, in _prepare_split
self._validate_cache_dir()
File "/home/work/.local/lib/python3.9/site-packages/datasets/packaged_modules/spark/spark.py", line 122, in _validate_cache_dir
self._spark.sparkContext.parallelize(range(1), 1).mapPartitions(create_cache_and_write_probe).collect()
File "/home/work/.local/lib/python3.9/site-packages/pyspark/rdd.py", line 950, in collect
sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
File "/home/work/.local/lib/python3.9/site-packages/pyspark/rdd.py", line 2951, in _jrdd
wrapped_func = _wrap_function(self.ctx, self.func, self._prev_jrdd_deserializer,
File "/home/work/.local/lib/python3.9/site-packages/pyspark/rdd.py", line 2830, in _wrap_function
pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command)
File "/home/work/.local/lib/python3.9/site-packages/pyspark/rdd.py", line 2816, in _prepare_for_python_RDD
pickled_command = ser.dumps(command)
File "/home/work/.local/lib/python3.9/site-packages/pyspark/serializers.py", line 447, in dumps
raise pickle.PicklingError(msg)
_pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.
23/06/19 13:51:21 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)
|
CLOSED
| 2023-06-19T05:30:35
| 2023-07-24T11:55:46
| 2023-07-24T11:55:46
|
https://github.com/huggingface/datasets/issues/5963
|
yanzia12138
| 5
|
[] |
5,962
|
Issue with train_test_split maintaining the same underlying PyArrow Table
|
### Describe the bug
I've been using the train_test_split method in the datasets module to split my HuggingFace Dataset into separate training, validation, and testing subsets. However, I've noticed an issue where the split datasets appear to maintain the same underlying PyArrow Table.
### Steps to reproduce the bug
1. Load any dataset ```dataset = load_dataset("lhoestq/demo1")```
2. Try the next code:
```python
from datasets import Dataset, DatasetDict
train_size = 0.6
split_train = dataset["train"].train_test_split(
train_size=train_size,
)
separate_dataset_dict = DatasetDict({
"train": split_train["train"],
"test": split_train["test"],
})
```
3. The next code ```print(separate_dataset_dict)``` when printing the dataset it gives the indication that they have 3 and 2 rows respectively.
4. But the next code:
```python
print(len(separate_dataset_dict["train"].data['id']))
print(len(separate_dataset_dict["test"].data['id']))
```
Indicates that both tables still have 5 rows.
### Expected behavior
However, I've noticed that train_test_split["train"].data, test_val_split["train"].data, and test_val_split["test"].data are identical, suggesting that they all point to the same underlying PyArrow Table. This means that the split datasets are not independent, as I expected.
I believe this is a bug in the train_test_split implementation, as I would expect this function to return datasets with separate underlying PyArrow Tables. Could you please help me understand if this is expected behavior, or if there's a workaround to create truly independent split datasets?
I would appreciate any assistance with this issue. Thank you.
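For what it's worth, the only workaround I have found so far (I'm not sure it is the intended approach) is to materialize the indices mapping with `flatten_indices`, which writes each split into its own Arrow table:
```python
split_train = dataset["train"].train_test_split(train_size=0.6)
independent_train = split_train["train"].flatten_indices()
independent_test = split_train["test"].flatten_indices()
print(len(independent_train.data["id"]))  # 3
print(len(independent_test.data["id"]))   # 2
```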
### Environment info
I tried in Colab:
- `datasets` version: 2.13.0
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
and my PC:
- `datasets` version: 2.13.0
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
|
OPEN
| 2023-06-17T02:19:58
| 2023-06-17T02:19:58
| null |
https://github.com/huggingface/datasets/issues/5962
|
Oziel14
| 0
|
[] |
5,961
|
IterableDataset: split by node and map may preprocess samples that will be skipped anyway
|
There are two ways an iterable dataset can be split by node:
1. if the number of shards is a multiple of the number of GPUs (i.e. `n_shards % world_size == 0`): in that case the shards are evenly distributed per GPU
2. otherwise, each GPU iterates over the data and at the end keeps 1 sample out of n(GPUs), skipping the others.
In case 2. it's therefore possible to have the same examples passed to `prepare_dataset` for each GPU.
This doesn't sound optimized though, because it runs the preprocessing on samples that won't be used in the end.
Could you open a new issue so that we can discuss about this and find a solution ?
_Originally posted by @lhoestq in https://github.com/huggingface/datasets/issues/5360#issuecomment-1592729051_
|
OPEN
| 2023-06-15T10:29:10
| 2023-09-01T10:35:11
| null |
https://github.com/huggingface/datasets/issues/5961
|
johnchienbronci
| 9
|
[] |
5,959
|
read metric glue.py from local file
|
### Describe the bug
Currently, the server is offline. I am using the GLUE metric from the local file downloaded from the hub.
I downloaded / cached the datasets using `load_dataset('glue','sst2', cache_dir='/xxx')`, and then in offline mode I use `load_dataset('xxx/glue.py','sst2', cache_dir='/xxx')`. I can successfully reuse the cached datasets.
My problem is about `load_metric`.
When I run `load_metric('xxx/glue_metric.py','sst2', cache_dir='/xxx')`, it returns
` File "xx/lib64/python3.9/site-packages/datasets/utils/deprecation_utils.py", line 46, in wrapper
return deprecated_function(*args, **kwargs)
File "xx//lib64/python3.9/site-packages/datasets/load.py", line 1392, in load_metric
metric = metric_cls(
TypeError: 'NoneType' object is not callable`
Thanks in advance for help!
### Steps to reproduce the bug
N/A
### Expected behavior
N/A
### Environment info
`datasets == 2.12.0`
|
CLOSED
| 2023-06-14T17:59:35
| 2023-06-14T18:04:16
| 2023-06-14T18:04:16
|
https://github.com/huggingface/datasets/issues/5959
|
JiazhaoLi
| 1
|
[] |
5,955
|
Strange bug in loading local JSON files, using load_dataset
|
### Describe the bug
I am using `load_dataset` to load a JSON file, but I found a strange bug: an error is reported when the length of the JSON file exceeds 160000 items (the exact number is uncertain). I have checked the data with the following code and there are no issues, so I cannot determine the true reason for this error.
The data is a list of dictionaries, as follows:
[
{'input': 'someting...', 'target': 'someting...', 'type': 'someting...', 'history': ['someting...', ...]},
...
]
### Steps to reproduce the bug
```
import json
from datasets import load_dataset
path = "target.json"
temp_path = "temp.json"
with open(path, "r") as f:
data = json.load(f)
print(f"\n-------the JSON file length is: {len(data)}-------\n")
with open(temp_path, "w") as f:
json.dump(data[:160000], f)
dataset = load_dataset("json", data_files=temp_path)
print("\n-------This works when the JSON file length is 160000-------\n")
with open(temp_path, "w") as f:
json.dump(data[160000:], f)
dataset = load_dataset("json", data_files=temp_path)
print("\n-------This works and eliminates data issues-------\n")
with open(temp_path, "w") as f:
json.dump(data[:170000], f)
dataset = load_dataset("json", data_files=temp_path)
```
### Expected behavior
```
-------the JSON file length is: 173049-------
Downloading and preparing dataset json/default to /root/.cache/huggingface/datasets/json/default-acf3c7f418c5f4b4/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4...
Downloading data files: 100%|███████████████████| 1/1 [00:00<00:00, 3328.81it/s]
Extracting data files: 100%|█████████████████████| 1/1 [00:00<00:00, 639.47it/s]
Dataset json downloaded and prepared to /root/.cache/huggingface/datasets/json/default-acf3c7f418c5f4b4/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4. Subsequent calls will reuse this data.
100%|████████████████████████████████████████████| 1/1 [00:00<00:00, 265.85it/s]
-------This works when the JSON file length is 160000-------
Downloading and preparing dataset json/default to /root/.cache/huggingface/datasets/json/default-a42f04b263ceea6a/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4...
Downloading data files: 100%|███████████████████| 1/1 [00:00<00:00, 2038.05it/s]
Extracting data files: 100%|█████████████████████| 1/1 [00:00<00:00, 794.83it/s]
Dataset json downloaded and prepared to /root/.cache/huggingface/datasets/json/default-a42f04b263ceea6a/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4. Subsequent calls will reuse this data.
100%|████████████████████████████████████████████| 1/1 [00:00<00:00, 681.00it/s]
-------This works and eliminates data issues-------
Downloading and preparing dataset json/default to /root/.cache/huggingface/datasets/json/default-63f391c89599c7b0/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4...
Downloading data files: 100%|███████████████████| 1/1 [00:00<00:00, 3682.44it/s]
Extracting data files: 100%|█████████████████████| 1/1 [00:00<00:00, 788.70it/s]
Generating train split: 0 examples [00:00, ? examples/s]Failed to read file '/home/lakala/hjc/code/pycode/glm/temp.json' with error <class 'pyarrow.lib.ArrowInvalid'>: cannot mix list and non-list, non-null values
Traceback (most recent call last):
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
for _, table in generator:
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/packaged_modules/json/json.py", line 146, in _generate_tables
raise ValueError(f"Not able to read records in the JSON file at {file}.") from None
ValueError: Not able to read records in the JSON file at /home/lakala/hjc/code/pycode/glm/temp.json.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/lakala/hjc/code/pycode/glm/test.py", line 22, in <module>
dataset = load_dataset("json", data_files=temp_path)
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/load.py", line 1797, in load_dataset
builder_instance.download_and_prepare(
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/builder.py", line 890, in download_and_prepare
self._download_and_prepare(
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/builder.py", line 985, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/builder.py", line 1746, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/builder.py", line 1891, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Environment info
```
Ubuntu==22.04
python==3.8
pytorch-transformers==1.2.0
transformers== 4.27.1
datasets==2.12.0
numpy==1.24.3
pandas==1.5.3
```
|
CLOSED
| 2023-06-14T12:46:00
| 2023-06-21T14:42:15
| 2023-06-21T14:42:15
|
https://github.com/huggingface/datasets/issues/5955
|
Night-Quiet
| 4
|
[] |
5,953
|
Bad error message when trying to download gated dataset
|
### Describe the bug
When I attempt to download a model from the Hub that is gated without being logged in, I get a nice error message. E.g.:
```sh
Repository Not Found for url: https://huggingface.co/api/models/DeepFloyd/IF-I-XL-v1.0.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password..
Will try to load from local cache.
```
If I do the same for a gated dataset on the Hub, I'm not given a nice error message IMO:
```sh
File ~/hf/lib/python3.10/site-packages/fsspec/implementations/http.py:430, in HTTPFileSystem._info(self, url, **kwargs)
427 except Exception as exc:
428 if policy == "get":
429 # If get failed, then raise a FileNotFoundError
--> 430 raise FileNotFoundError(url) from exc
431 logger.debug(str(exc))
433 return {"name": url, "size": None, **info, "type": "file"}
FileNotFoundError: https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0/resolve/main/n_shards.json
```
### Steps to reproduce the bug
```
huggingface-cli logout
```
and then:
```py
from datasets import load_dataset, Audio
# English
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
en_sample = next(iter(stream_data))["audio"]["array"]
# Swahili
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "sw", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
sw_sample = next(iter(stream_data))["audio"]["array"]
```
### Expected behavior
Better error message
### Environment info
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.12.0
- Platform: Linux-6.2.0-76060200-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.16.0.dev0
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
CLOSED
| 2023-06-14T10:03:39
| 2023-06-14T16:36:51
| 2023-06-14T12:26:32
|
https://github.com/huggingface/datasets/issues/5953
|
patrickvonplaten
| 8
|
[] |
5,951
|
What is the Right way to use discofuse dataset??
|
[Click here for Dataset link](https://huggingface.co/datasets/discofuse/viewer/discofuse-wikipedia/train?row=6)
**Below is my understanding of how to use it. Is it correct? :question: :question:**
The **columns/features from `DiscoFuse dataset`** that will be the **input to the `encoder` and `decoder`** are:
1. **coherent_first_sentence**
2. **coherent_second_sentence**
3. **incoherent_first_sentence**
4. **incoherent_second_sentence**
The **`encoder` will take these four columns as input and encode them into a sequence of hidden states. The `decoder` will then take these hidden states as input and decode them into a new sentence that fuses the two original sentences together.**
The **discourse type, connective_string, has_coref_type_pronoun, and has_coref_type_nominal columns will not be used as input to the encoder or decoder.** These columns are used to provide additional information about the dataset, but they are not necessary for the task of sentence fusion.
Please correct me if I am wrong; otherwise, if this understanding is right, how shall I implement this task practically?
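To make the question concrete, here is a minimal sketch of the preprocessing I have in mind, assuming the task maps the incoherent pair to the coherent (fused) text; please correct it if the columns should be used differently:
```python
from datasets import load_dataset

ds = load_dataset("discofuse", "discofuse-wikipedia", split="train")

def build_example(row):
    # encoder input: the two incoherent sentences; decoder target: the fused / coherent text
    source = (row["incoherent_first_sentence"] + " " + row["incoherent_second_sentence"]).strip()
    target = (row["coherent_first_sentence"] + " " + row["coherent_second_sentence"]).strip()
    return {"source": source, "target": target}

ds = ds.map(build_example)
```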
|
CLOSED
| 2023-06-14T08:38:39
| 2023-06-14T13:25:06
| 2023-06-14T12:10:16
|
https://github.com/huggingface/datasets/issues/5951
|
akesh1235
| 2
|
[] |
5,950
|
Support for data with instance-wise dictionary as features
|
### Feature request
I notice that when loading data instances whose feature type is a Python dictionary, the dictionary keys are broadcast so that every instance has the same set of keys. Please see an example in the Motivation section.
Is it possible to avoid this behavior, i.e., load dictionary features as they are and not broadcast the keys among instances? Please note that these dictionaries would have to be processed dynamically at each training iteration into strings (and tokenized).
### Motivation
I am trying to load a dataset from a json file. Each instance of the dataset has a feature that is a dictionary but its keys depend on the instance. Every two instances may have different keys. For example, imagine a dataset that contains a set of math expressions from a bunch of mutually redundant expressions:
```
{
"index": 0,
"feature": {
"2 * x + y >= 3": ["2 * x + y >= 3", "4 * x + 2 * y >= 6"],
...
}
},
...
{
"index": 9999,
"feature": {
"x >= 6": ["x >= 6", "x >= 0", "x >= -1"],
...
}
},
...
```
When directly loading the dataset using `data = load_dataset("json", data_files=file_paths, split='train')`, each instance would have all the keys from other instances and None as values. That is, instance of index 0 becomes:
```
{
"index": 0,
"feature": {
"2 * x + y >= 3": ["2 * x + y >= 3", "4 * x + 2 * y >= 6"],
...
"x >= 6": None, # keys from other instances
...
}
},
```
This is not desirable. Moreover, an error is raised if I attempt to combine two such datasets using `data = concatenate_datasets(multi_datasets)`, perhaps because their dictionary features contain different keys.
A solution I can think of is to store the dictionary features as a long string, and evaluate it later. Please kindly suggest any other solution using existing methods of datasets.
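As a sketch of that string-based workaround (file names below are placeholders): serialize the dictionary column to a JSON string before loading, and decode it on the fly later:
```python
import json
from datasets import load_dataset

# pre-process: stringify the instance-specific dict so Arrow only sees a string column
with open("data.json") as f:
    records = json.load(f)
for r in records:
    r["feature"] = json.dumps(r["feature"])
with open("data_str.json", "w") as f:
    json.dump(records, f)

data = load_dataset("json", data_files="data_str.json", split="train")

# decode lazily, e.g. inside the training collate / transform step
def decode(example):
    example["feature"] = json.loads(example["feature"])
    return example
```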
### Your contribution
N/A
|
OPEN
| 2023-06-13T15:49:00
| 2025-04-07T13:20:37
| null |
https://github.com/huggingface/datasets/issues/5950
|
richardwth
| 11
|
[
"enhancement"
] |
5,947
|
Return the audio filename when decoding fails due to corrupt files
|
### Feature request
Return the audio filename when audio decoding fails. Although there are currently some checks for the mp3 and opus formats against the library version, there are still cases where audio decoding can fail, e.g. a corrupt file.
### Motivation
When you try to load an audio dataset and the decoding fails, you can't know which file is corrupt:
```
raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
soundfile.LibsndfileError: Error opening <_io.BytesIO object at 0x7f5ab7e38290>: Format not recognised.
```
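In the meantime, a rough sketch of what I do to find the corrupt file, by disabling decoding and decoding each file myself (the data directory is a placeholder):
```python
import io
import soundfile as sf
from datasets import load_dataset, Audio

ds = load_dataset("audiofolder", data_dir="path/to/audio", split="train")
ds = ds.cast_column("audio", Audio(decode=False))  # keep access to path / bytes

for example in ds:
    audio = example["audio"]
    try:
        sf.read(audio["path"] if audio["path"] else io.BytesIO(audio["bytes"]))
    except Exception as err:  # includes soundfile.LibsndfileError
        print(f"Decoding failed for {audio['path']}: {err}")
```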
### Your contribution
Make a PR to add exception handling for `LibsndfileError` that returns the audio filename or path when soundfile decoding fails.
|
OPEN
| 2023-06-13T08:44:09
| 2023-06-14T12:45:01
| null |
https://github.com/huggingface/datasets/issues/5947
|
wetdog
| 2
|
[
"enhancement"
] |
5,946
|
IndexError Not Solving -> IndexError: Invalid key: ?? is out of bounds for size 0 or ??
|
### Describe the bug
in <cell line: 1>:1 │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1537 in train │
│ │
│ 1534 │ │ inner_training_loop = find_executable_batch_size( │
│ 1535 │ │ │ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size │
│ 1536 │ │ ) │
│ ❱ 1537 │ │ return inner_training_loop( │
│ 1538 │ │ │ args=args, │
│ 1539 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │
│ 1540 │ │ │ trial=trial, │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1789 in _inner_training_loop │
│ │
│ 1786 │ │ │ │ rng_to_sync = True │
│ 1787 │ │ │ │
│ 1788 │ │ │ step = -1 │
│ ❱ 1789 │ │ │ for step, inputs in enumerate(epoch_iterator): │
│ 1790 │ │ │ │ total_batched_samples += 1 │
│ 1791 │ │ │ │ if rng_to_sync: │
│ 1792 │ │ │ │ │ self._load_rng_state(resume_from_checkpoint) │
│ │
│ /usr/local/lib/python3.10/dist-packages/accelerate/data_loader.py:377 in __iter__ │
│ │
│ 374 │ │ dataloader_iter = super().__iter__() │
│ 375 │ │ # We iterate one batch ahead to check when we are at the end │
│ 376 │ │ try: │
│ ❱ 377 │ │ │ current_batch = next(dataloader_iter) │
│ 378 │ │ except StopIteration: │
│ 379 │ │ │ yield │
│ 380 │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:633 in __next__ │
│ │
│ 630 │ │ │ if self._sampler_iter is None: │
│ 631 │ │ │ │ # TODO(https://github.com/pytorch/pytorch/issues/76750) │
│ 632 │ │ │ │ self._reset() # type: ignore[call-arg] │
│ ❱ 633 │ │ │ data = self._next_data() │
│ 634 │ │ │ self._num_yielded += 1 │
│ 635 │ │ │ if self._dataset_kind == _DatasetKind.Iterable and \ │
│ 636 │ │ │ │ │ self._IterableDataset_len_called is not None and \ │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:677 in _next_data │
│ │
│ 674 │ │
│ 675 │ def _next_data(self): │
│ 676 │ │ index = self._next_index() # may raise StopIteration │
│ ❱ 677 │ │ data = self._dataset_fetcher.fetch(index) # may raise StopIteration │
│ 678 │ │ if self._pin_memory: │
│ 679 │ │ │ data = _utils.pin_memory.pin_memory(data, self._pin_memory_device) │
│ 680 │ │ return data │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py:49 in fetch │
│ │
│ 46 │ def fetch(self, possibly_batched_index): │
│ 47 │ │ if self.auto_collation: │
│ 48 │ │ │ if hasattr(self.dataset, "__getitems__") and self.dataset.__getitems__: │
│ ❱ 49 │ │ │ │ data = self.dataset.__getitems__(possibly_batched_index) │
│ 50 │ │ │ else: │
│ 51 │ │ │ │ data = [self.dataset[idx] for idx in possibly_batched_index] │
│ 52 │ │ else: │
│ │
│ /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:2782 in __getitems__ │
│ │
│ 2779 │ │
│ 2780 │ def __getitems__(self, keys: List) -> List: │
│ 2781 │ │ """Can be used to get a batch using a list of integers indices.""" │
│ ❱ 2782 │ │ batch = self.__getitem__(keys) │
│ 2783 │ │ n_examples = len(batch[next(iter(batch))]) │
│ 2784 │ │ return [{col: array[i] for col, array in batch.items()} for i in range(n_example │
│ 2785 │
│ │
│ /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:2778 in __getitem__ │
│ │
│ 2775 │ │
│ 2776 │ def __getitem__(self, key): # noqa: F811 │
│ 2777 │ │ """Can be used to index columns (by string names) or rows (by integer index or i │
│ ❱ 2778 │ │ return self._getitem(key) │
│ 2779 │ │
│ 2780 │ def __getitems__(self, keys: List) -> List: │
│ 2781 │ │ """Can be used to get a batch using a list of integers indices.""" │
│ │
│ /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:2762 in _getitem │
│ │
│ 2759 │ │ format_kwargs = kwargs["format_kwargs"] if "format_kwargs" in kwargs else self._ │
│ 2760 │ │ format_kwargs = format_kwargs if format_kwargs is not None else {} │
│ 2761 │ │ formatter = get_formatter(format_type, features=self._info.features, **format_kw │
│ ❱ 2762 │ │ pa_subtable = query_table(self._data, key, indices=self._indices if self._indice │
│ 2763 │ │ formatted_output = format_table( │
│ 2764 │ │ │ pa_subtable, key, formatter=formatter, format_columns=format_columns, output │
│ 2765 │ │ ) │
│ │
│ /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:578 in query_table │
│ │
│ 575 │ │ _check_valid_column_key(key, table.column_names) │
│ 576 │ else: │
│ 577 │ │ size = indices.num_rows if indices is not None else table.num_rows │
│ ❱ 578 │ │ _check_valid_index_key(key, size) │
│ 579 │ # Query the main table │
│ 580 │ if indices is None: │
│ 581 │ │ pa_subtable = _query_table(table, key) │
│ │
│ /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:531 in │
│ _check_valid_index_key │
│ │
│ 528 │ │ │ _check_valid_index_key(min(key), size=size) │
│ 529 │ elif isinstance(key, Iterable): │
│ 530 │ │ if len(key) > 0: │
│ ❱ 531 │ │ │ _check_valid_index_key(int(max(key)), size=size) │
│ 532 │ │ │ _check_valid_index_key(int(min(key)), size=size) │
│ 533 │ else: │
│ 534 │ │ _raise_bad_key_type(key) │
│ │
│ /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:521 in │
│ _check_valid_index_key │
│ │
│ 518 def _check_valid_index_key(key: Union[int, slice, range, Iterable], size: int) -> None: │
│ 519 │ if isinstance(key, int): │
│ 520 │ │ if (key < 0 and key + size < 0) or (key >= size): │
│ ❱ 521 │ │ │ raise IndexError(f"Invalid key: {key} is out of bounds for size {size}") │
│ 522 │ │ return │
│ 523 │ elif isinstance(key, slice): │
│ 524 │ │ pass
### Steps to reproduce the bug
```python
import json
import os
from pprint import pprint
import bitsandbytes as bnb
import pandas as pd
import torch
import torch.nn as nn
import transformers
from datasets import Dataset,load_dataset
from peft import (
LoraConfig,
PeftConfig,
PeftModel,
get_peft_model,
prepare_model_for_kbit_training
)
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
)
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
def print_trainable_parameters(model):
"""
Prints the number of trainable parameters in the model.
"""
trainable_params = 0
all_param = 0
for _, param in model.named_parameters():
all_param += param.numel()
if param.requires_grad:
trainable_params += param.numel()
print(
f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}"
)
MODEL_NAME = "tiiuae/falcon-7b"
bnb_config = BitsAndBytesConfig(
load_in_4bit = True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME,
device_map = "auto",
trust_remote_code = True,
quantization_config = bnb_config
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model)
config = LoraConfig(
r = 16,
lora_alpha = 32,
target_modules = ["query_key_value"],
lora_dropout = 0.05,
bias = "none",
task_type = "CASUAL_LM"
)
model = get_peft_model(model,config)
print_trainable_parameters(model)
def generate_prompt(data_point):
return f"""
<human>: {data_point["question"]}
<assistant>: {data_point["answer"]}
""".strip()
def generate_and_tokenize_prompt(data_point):
full_prompt = generate_prompt(data_point)
tokenized_full_prompt = tokenizer(full_prompt, padding = True, truncation = True,return_tensors = None)
return dict({
"input_ids" : tokenized_full_prompt["input_ids"],
"attention_mask" : tokenized_full_prompt["attention_mask"]
})
data = data["train"].shuffle().map(generate_and_tokenize_prompt, batched = False)
OUTPUT_DIR = "experiments"
trainings_args = transformers.TrainingArguments(
per_device_train_batch_size = 1,
gradient_accumulation_steps = 4,
num_train_epochs = 1,
learning_rate = 2e-4,
fp16 = True,
save_total_limit = 3,
logging_steps = 1,
output_dir = OUTPUT_DIR,
max_steps = 80,
optim = "paged_adamw_8bit",
lr_scheduler_type = "cosine",
warmup_ratio = 0.05,
#remove_unused_columns=True
)
trainer = transformers.Trainer(
model = model,
train_dataset = data,
args = trainings_args,
data_collator = transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
model.config.use_cache = False
trainer.train()
```
IndexError: Invalid key: 32 is out of bounds for size 0
The dataset format is like:
[{"question": "How can I create an account?", "answer": "To create an account, click on the 'Sign Up' button on the top right corner of our website and follow the instructions to complete the registration process."}, .... ]
### Expected behavior
-
### Environment info
!pip install -q pip
!pip install -q bitsandbytes==0.39.0
!pip install -q torch==2.0.1
!pip install -q git+https://github.com/huggingface/transformers.git
!pip install -q git+https://github.com/huggingface/peft.git
!pip install -q git+https://github.com/huggingface/accelerate.git
!pip install -q datasets
!pip install -q loralib==0.1.1
!pip install -q einops==0.6.1
import json
import os
from pprint import pprint
import bitsandbytes as bnb
import pandas as pd
import torch
import torch.nn as nn
import transformers
from datasets import Dataset,load_dataset
from peft import (
LoraConfig,
PeftConfig,
PeftModel,
get_peft_model,
prepare_model_for_kbit_training
)
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
)
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
|
OPEN
| 2023-06-13T07:34:15
| 2023-07-14T12:04:48
| null |
https://github.com/huggingface/datasets/issues/5946
|
syngokhan
| 6
|
[] |
5,945
|
Failing to upload dataset to the hub
|
### Describe the bug
Trying to upload a dataset of hundreds of thousands of audio samples (the total volume is not very large, 60 GB) to the hub with push_to_hub doesn't work.
From time to time one piece of the data (a parquet shard) gets pushed, and then I get RemoteDisconnected even though my internet is stable.
Please help.
I've been trying to upload the dataset for almost a week.
Thanks
### Steps to reproduce the bug
not relevant
### Expected behavior
Be able to upload the dataset
### Environment info
python: 3.9
|
CLOSED
| 2023-06-13T05:46:46
| 2023-07-24T11:56:40
| 2023-07-24T11:56:40
|
https://github.com/huggingface/datasets/issues/5945
|
Ar770
| 3
|
[] |
5,941
|
Load Data Sets Too Slow In Train Seq2seq Model
|
### Describe the bug
step 'Generating train split' in load_dataset is too slow:

### Steps to reproduce the bug
Data: own data, 16 kHz 16-bit mono wav
Official script: [run_speech_recognition_seq2seq.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py)
Added code:
```python
if data_args.data_path is not None:
    print(data_args.data_path)
    raw_datasets = load_dataset("audiofolder", data_dir=data_args.data_path, cache_dir=model_args.cache_dir)
    raw_datasets = raw_datasets.cast_column("audio", Audio(sampling_rate=16000))
    raw_datasets = raw_datasets["train"].train_test_split(test_size=0.005, shuffle=True)
```
(change cache_dir to another path, e.g. /DATA/cache)
### Expected behavior
Load data fast, at least 1000+ examples/s:
`Generating train split: 387875 examples [32:24:45, 1154.83 examples/s]`
### Environment info
- `transformers` version: 4.28.0.dev0
- Platform: Linux-5.4.0-149-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.16
- Huggingface_hub version: 0.13.2
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
|
CLOSED
| 2023-06-12T03:58:43
| 2023-08-15T02:52:22
| 2023-08-15T02:52:22
|
https://github.com/huggingface/datasets/issues/5941
|
xyx361100238
| 10
|
[] |
5,990
|
Pushing a large dataset on the hub consistently hangs
|
### Describe the bug
Once I have locally built a large dataset that I want to push to hub, I use the recommended approach of .push_to_hub to get the dataset on the hub, and after pushing a few shards, it consistently hangs. This has happened over 40 times over the past week, and despite my best efforts to try and catch this happening and kill a process and restart, it seems to be extremely time wasting -- so I came to you to report this and to seek help.
I already tried installing hf_transfer, but it doesn't support Byte file uploads so I uninstalled it.
### Reproduction
```python
import multiprocessing as mp
import pathlib
from math import ceil
import datasets
import numpy as np
from tqdm.auto import tqdm
from tali.data.data import select_subtitles_between_timestamps
from tali.utils import load_json
tali_dataset_dir = "/data/"
if __name__ == "__main__":
full_dataset = datasets.load_dataset(
"Antreas/TALI", num_proc=mp.cpu_count(), cache_dir=tali_dataset_dir
)
def data_generator(set_name, percentage: float = 1.0):
dataset = full_dataset[set_name]
for item in tqdm(dataset):
video_list = item["youtube_content_video"]
video_list = np.random.choice(
video_list, int(ceil(len(video_list) * percentage))
)
if len(video_list) == 0:
continue
captions = item["youtube_subtitle_text"]
captions = select_subtitles_between_timestamps(
subtitle_dict=load_json(
captions.replace(
"/data/",
tali_dataset_dir,
)
),
starting_timestamp=0,
ending_timestamp=100000000,
)
for video_path in video_list:
temp_path = video_path.replace("/data/", tali_dataset_dir)
video_path_actual: pathlib.Path = pathlib.Path(temp_path)
if video_path_actual.exists():
item["youtube_content_video"] = open(video_path_actual, "rb").read()
item["youtube_subtitle_text"] = captions
yield item
train_generator = lambda: data_generator("train", percentage=0.1)
val_generator = lambda: data_generator("val")
test_generator = lambda: data_generator("test")
train_data = datasets.Dataset.from_generator(
train_generator,
num_proc=mp.cpu_count(),
writer_batch_size=5000,
cache_dir=tali_dataset_dir,
)
val_data = datasets.Dataset.from_generator(
val_generator,
writer_batch_size=5000,
num_proc=mp.cpu_count(),
cache_dir=tali_dataset_dir,
)
test_data = datasets.Dataset.from_generator(
test_generator,
writer_batch_size=5000,
num_proc=mp.cpu_count(),
cache_dir=tali_dataset_dir,
)
dataset = datasets.DatasetDict(
{
"train": train_data,
"val": val_data,
"test": test_data,
}
)
succesful_competion = False
while not succesful_competion:
try:
dataset.push_to_hub(repo_id="Antreas/TALI-small", max_shard_size="5GB")
succesful_competion = True
except Exception as e:
print(e)
```
### Logs
```shell
Pushing dataset shards to the dataset hub: 33%|██████████████████████████████████████▎ | 7/21 [24:33<49:06, 210.45s/it]
Error while uploading 'data/val-00007-of-00021-6b216a984af1a4c8.parquet' to the Hub.
Pushing split train to the Hub.
Resuming upload of the dataset shards.
Pushing dataset shards to the dataset hub: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 46/46 [42:10<00:00, 55.01s/it]
Pushing split val to the Hub.
Resuming upload of the dataset shards.
Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 1.55ba/s]
Upload 1 LFS files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:23<00:00, 23.51s/it]
Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.39ba/s]
Upload 1 LFS files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:30<00:00, 30.19s/it]
Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.28ba/s]
Upload 1 LFS files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:24<00:00, 24.08s/it]
Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.42ba/s]
Upload 1 LFS files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:23<00:00, 23.97s/it]
Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.49ba/s]
Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.54ba/s^
Upload 1 LFS files: 0%| | 0/1 [04:42<?, ?it/s]
Pushing dataset shards to the dataset hub: 52%|████████████████████████████████████████████████████████████▏ | 11/21 [17:23<15:48, 94.82s/it]
That's where it got stuck
```
### System info
```shell
- huggingface_hub version: 0.15.1
- Platform: Linux-5.4.0-147-generic-x86_64-with-glibc2.35
- Python version: 3.10.11
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /root/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: Antreas
- Configured git credential helpers: store
- FastAI: N/A
- Tensorflow: N/A
- Torch: 2.1.0.dev20230606+cu121
- Jinja2: 3.1.2
- Graphviz: N/A
- Pydot: N/A
- Pillow: 9.5.0
- hf_transfer: N/A
- gradio: N/A
- numpy: 1.24.3
- ENDPOINT: https://huggingface.co
- HUGGINGFACE_HUB_CACHE: /root/.cache/huggingface/hub
- HUGGINGFACE_ASSETS_CACHE: /root/.cache/huggingface/assets
- HF_TOKEN_PATH: /root/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
```
|
OPEN
| 2023-06-10T14:46:47
| 2025-02-15T09:29:10
| null |
https://github.com/huggingface/datasets/issues/5990
|
AntreasAntoniou
| 46
|
[
"bug"
] |
5,939
|
.
|
CLOSED
| 2023-06-09T14:01:34
| 2023-06-12T12:19:34
| 2023-06-12T12:19:19
|
https://github.com/huggingface/datasets/issues/5939
|
flckv
| 0
|
[] |
|
5,936
|
Sequence of array not supported for most dtype
|
### Describe the bug
Creating a dataset composed of sequences of arrays fails for most dtypes (see the code below).
### Steps to reproduce the bug
```python
from datasets import Sequence, Array2D, Features, Dataset
import numpy as np
for dtype in [
"bool", # ok
"int8", # failed
"int16", # failed
"int32", # failed
"int64", # ok
"uint8", # failed
"uint16", # failed
"uint32", # failed
"uint64", # failed
"float16", # failed
"float32", # failed
"float64", # ok
]:
features = Features({"foo": Sequence(Array2D(dtype=dtype, shape=(2, 2)))})
sequence = [
[[1.0, 2.0], [3.0, 4.0]],
[[5.0, 6.0], [7.0, 8.0]],
]
array = np.array(sequence, dtype=dtype)
try:
dataset = Dataset.from_dict({"foo": [array]}, features=features)
except Exception as e:
print(f"Failed for dtype={dtype}")
```
Traceback for `dtype="int8"`:
```
Traceback (most recent call last):
File "/home/qgallouedec/datasets/a.py", line 29, in <module>
raise e
File "/home/qgallouedec/datasets/a.py", line 26, in <module>
dataset = Dataset.from_dict({"foo": [array]}, features=features)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 899, in from_dict
pa_table = InMemoryTable.from_pydict(mapping=mapping)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 799, in from_pydict
return cls(pa.Table.from_pydict(*args, **kwargs))
File "pyarrow/table.pxi", line 3725, in pyarrow.lib.Table.from_pydict
File "pyarrow/table.pxi", line 5254, in pyarrow.lib._from_pydict
File "pyarrow/array.pxi", line 350, in pyarrow.lib.asarray
File "pyarrow/array.pxi", line 236, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/arrow_writer.py", line 204, in __arrow_array__
out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 1833, in wrapper
return func(array, *args, **kwargs)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 2091, in cast_array_to_feature
casted_values = _c(array.values, feature.feature)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 1833, in wrapper
return func(array, *args, **kwargs)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 2139, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 1833, in wrapper
return func(array, *args, **kwargs)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 1967, in array_cast
return pa_type.wrap_array(array)
File "pyarrow/types.pxi", line 879, in pyarrow.lib.BaseExtensionType.wrap_array
TypeError: Incompatible storage type for extension<arrow.py_extension_type<Array2DExtensionType>>: expected list<item: list<item: int8>>, got list<item: list<item: int64>>
```
### Expected behavior
Not to fail.
### Environment info
- Python 3.10.6
- datasets: master branch
- Numpy: 1.23.4
|
CLOSED
| 2023-06-08T18:18:07
| 2023-06-14T15:03:34
| 2023-06-14T15:03:34
|
https://github.com/huggingface/datasets/issues/5936
|
qgallouedec
| 4
|
[] |
5,931
|
`datasets.map` not reusing cached copy by default
|
### Describe the bug
When I load the dataset from a local directory, its cached copy is picked up after the first time. However, for the `map` operation, the processing is applied again and the cached copy is not picked up. Is there any way to pick up the cached copy instead of processing it again? The only solution I could think of was to use `save_to_disk` after my last transform and then use that in my DataLoader pipeline. Are there any other solutions for this?
One more thing: my dataset occupies 6GB of storage after I use `map`. Is there any way I can reduce that memory usage?
### Steps to reproduce the bug
```
# make sure that dataset decodes audio with correct sampling rate
dataset_sampling_rate = next(iter(self.raw_datasets.values())).features["audio"].sampling_rate
if dataset_sampling_rate != self.feature_extractor.sampling_rate:
self.raw_datasets = self.raw_datasets.cast_column(
"audio", datasets.features.Audio(sampling_rate=self.feature_extractor.sampling_rate)
)
vectorized_datasets = self.raw_datasets.map(
self.prepare_dataset,
remove_columns=next(iter(self.raw_datasets.values())).column_names,
num_proc=self.num_workers,
desc="preprocess datasets",
)
# filter data that is longer than max_input_length
self.vectorized_datasets = vectorized_datasets.filter(
self.is_audio_in_length_range,
num_proc=self.num_workers,
input_columns=["input_length"],
)
def prepare_dataset(self, batch):
# load audio
sample = batch["audio"]
inputs = self.feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"])
batch["input_values"] = inputs.input_values[0]
batch["input_length"] = len(batch["input_values"])
batch["labels"] = self.tokenizer(batch["target_text"]).input_ids
return batch
```
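One approach that can make `map` reuse its cache (a hedged sketch, assuming the cache misses come from a non-deterministic fingerprint of the bound method) is to pin the cache files explicitly via `cache_file_names`; all paths and the preprocessing function below are illustrative:
```python
from datasets import load_dataset

raw_datasets = load_dataset("audiofolder", data_dir="path/to/data")  # illustrative source

def prepare_dataset(batch):
    batch["input_length"] = len(batch["audio"]["array"])
    return batch

# with pinned cache files, `map` can reload them on the next run even if the
# automatically computed fingerprint of `prepare_dataset` changes
vectorized = raw_datasets.map(
    prepare_dataset,
    cache_file_names={split: f"./cache/vectorized_{split}.arrow" for split in raw_datasets},
)
```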
### Expected behavior
`map` to use cached copy and if possible an alternative technique to reduce memory usage after using `map`
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.2
|
CLOSED
| 2023-06-07T09:03:33
| 2023-06-21T16:15:40
| 2023-06-21T16:15:40
|
https://github.com/huggingface/datasets/issues/5931
|
bhavitvyamalik
| 1
|
[] |
5,930
|
loading private custom dataset script - authentication error
|
### Describe the bug
Training a model with my custom dataset, stored on Hugging Face and loaded with a loading script, requires authentication, but I am not sure how to provide it.
I am logged in in the terminal and in the browser. I receive this error:
/python3.8/site-packages/datasets/utils/file_utils.py", line 566, in get_from_cache
raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})")
ConnectionError: Couldn't reach https://huggingface.co/datasets/fkov/s/blob/main/data/s/train/labels (ConnectionError('Unauthorized for URL https://huggingface.co/datasets/fkov/s/blob/main/data/s/train/labels. Please use the parameter **`use_auth_token=True`** after logging in with **`huggingface-cli login`**'))
When I added `use_auth_token=True` and logged in via the terminal, I received the same error in a different format:
raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})")
ConnectionError: Couldn't reach https://huggingface.co/datasets/fkov/s/blob/main/data/s/train/labels (`error 401`)
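A quick sanity check that can help narrow this down (a hedged sketch, not from the original report; the dataset name and config are taken from the error above) is to confirm the stored token is visible to the process and then pass it explicitly:
```python
from huggingface_hub import HfApi
from datasets import load_dataset

print(HfApi().whoami())  # should print the account details if `huggingface-cli login` worked

ds = load_dataset("fkov/s", "s", use_auth_token=True)  # private dataset + config from the error message
```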
### Steps to reproduce the bug
1. cloned transformers library locally:
https://huggingface.co/docs/transformers/v4.15.0/examples :
> git clone https://github.com/huggingface/transformers
> cd transformers
> pip install .
> cd /transformers/examples/pytorch/audio-classification
> pip install -r requirements.txt
2. created **loading script**
> https://huggingface.co/docs/datasets/dataset_script added next to dataset:
3. uploaded **private custom dataset** with loading script to HuggingFace
> https://huggingface.co/docs/datasets/dataset_script
4. added dataset loading script to **local directory** in the above cloned transformers library:
> cd /transformers/examples/pytorch/audio-classification
5. logged in to HuggingFace on local terminal with :
> **huggingface-cli login**
6. run the model with the custom dataset stored on HuggingFace with code: https://github.com/huggingface/transformers/blob/main/examples/pytorch/audio-classification/README.md
cd /transformers/examples/pytorch/audio-classification
> python run_audio_classification.py \
> --model_name_or_path facebook/wav2vec2-base \
> --output_dir l/users/flck/outputs/wav2vec2-base-s \
> --overwrite_output_dir \
> --dataset_name s \
> --dataset_config_name s \
> --remove_unused_columns False \
> --do_train \
> --do_eval \
> --fp16 \
> --learning_rate 3e-5 \
> --max_length_seconds 1 \
> --attention_mask False \
> --warmup_ratio 0.1 \
> --num_train_epochs 5 \
> --per_device_train_batch_size 32 \
> --gradient_accumulation_steps 4 \
> --per_device_eval_batch_size 32 \
> --dataloader_num_workers 4 \
> --logging_strategy steps \
> --logging_steps 10 \
> --evaluation_strategy epoch \
> --save_strategy epoch \
> --load_best_model_at_end True \
> --metric_for_best_model accuracy \
> --save_total_limit 3 \
> --seed 0 \
> --push_to_hub \
> **--use_auth_token=True**
### Expected behavior
Be able to train a model the https://github.com/huggingface/transformers/blob/main/examples/pytorch/audio-classification/ run_audio_classification.py with private custom dataset stored on HuggingFace.
### Environment info
- datasets version: 2.12.0
- `transformers` version: 4.30.0.dev0
- Platform: Linux-5.4.204-ql-generic-12.0-19-x86_64-with-glibc2.17
- Python version: 3.8.12
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[conda] numpy 1.24.3 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchaudio 2.0.2 pypi_0 pypi
|
CLOSED
| 2023-06-07T06:58:23
| 2023-06-15T14:49:21
| 2023-06-15T14:49:20
|
https://github.com/huggingface/datasets/issues/5930
|
flckv
| 1
|
[] |
5,929
|
Importing PyTorch reduces multiprocessing performance for map
|
### Describe the bug
I noticed that the performance of my dataset preprocessing with `map(...,num_proc=32)` decreases when PyTorch is imported.
### Steps to reproduce the bug
I created two example scripts to reproduce this behavior:
```
import datasets
datasets.disable_caching()
from datasets import Dataset
import time
PROC=32
if __name__ == "__main__":
dataset = [True] * 10000000
dataset = Dataset.from_dict({'train': dataset})
start = time.time()
dataset.map(lambda x: x, num_proc=PROC)
end = time.time()
print(end - start)
```
Takes around 4 seconds on my machine.
While the same code, but with an `import torch`:
```
import datasets
datasets.disable_caching()
from datasets import Dataset
import time
import torch
PROC=32
if __name__ == "__main__":
dataset = [True] * 10000000
dataset = Dataset.from_dict({'train': dataset})
start = time.time()
dataset.map(lambda x: x, num_proc=PROC)
end = time.time()
print(end - start)
```
takes around 22 seconds.
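One mitigation that is sometimes suggested for this kind of slowdown (an assumption, not verified on the reporter's machine) is to cap the intra-op thread count that the `torch` import configures before the worker processes are forked:
```python
import time

import datasets
import torch
from datasets import Dataset

datasets.disable_caching()
torch.set_num_threads(1)  # assumed mitigation: avoid oversubscribing CPU threads across 32 forked workers

PROC = 32

if __name__ == "__main__":
    dataset = Dataset.from_dict({"train": [True] * 10_000_000})
    start = time.time()
    dataset.map(lambda x: x, num_proc=PROC)
    print(time.time() - start)
```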
### Expected behavior
I would expect the import of torch not to have such a significant effect on the performance of `map` with multiprocessing.
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
- Python version: 3.11.3
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.2
- torch: 2.0.1
|
CLOSED
| 2023-06-06T19:42:25
| 2023-06-16T13:09:12
| 2023-06-16T13:09:12
|
https://github.com/huggingface/datasets/issues/5929
|
Maxscha
| 2
|
[] |
5,927
|
`IndexError` when indexing `Sequence` of `Array2D` with `None` values
|
### Describe the bug
Having `None` values in a `Sequence` of `ArrayND` fails.
### Steps to reproduce the bug
```python
from datasets import Array2D, Dataset, Features, Sequence
data = [
[
[[0]],
None,
None,
]
]
feature = Sequence(Array2D((1, 1), dtype="int64"))
dataset = Dataset.from_dict({"a": data}, features=Features({"a": feature}))
dataset[0] # error raised only when indexing
```
```
Traceback (most recent call last):
File "/Users/quentingallouedec/gia/c.py", line 13, in <module>
dataset[0] # error raised only when indexing
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2658, in __getitem__
return self._getitem(key)
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2643, in _getitem
formatted_output = format_table(
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 634, in format_table
return formatter(pa_table, query_type=query_type)
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 406, in __call__
return self.format_row(pa_table)
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 441, in format_row
row = self.python_arrow_extractor().extract_row(pa_table)
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 144, in extract_row
return _unnest(pa_table.to_pydict())
File "pyarrow/table.pxi", line 4146, in pyarrow.lib.Table.to_pydict
File "pyarrow/table.pxi", line 1312, in pyarrow.lib.ChunkedArray.to_pylist
File "pyarrow/array.pxi", line 1521, in pyarrow.lib.Array.to_pylist
File "pyarrow/scalar.pxi", line 675, in pyarrow.lib.ListScalar.as_py
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/features/features.py", line 760, in to_pylist
return self.to_numpy(zero_copy_only=zero_copy_only).tolist()
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/features/features.py", line 725, in to_numpy
numpy_arr = np.insert(numpy_arr.astype(np.float64), null_indices, np.nan, axis=0)
File "<__array_function__ internals>", line 200, in insert
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/numpy/lib/function_base.py", line 5426, in insert
old_mask[indices] = False
IndexError: index 3 is out of bounds for axis 0 with size 3
```
AFAIK, the problem only occurs when you use a `Sequence` of `ArrayND`.
I strongly suspect that the problem comes from this line, or that `np.insert` is misused:
https://github.com/huggingface/datasets/blob/02ee418831aba68d0be93227bce8b3f42ef8980f/src/datasets/features/features.py#L729
To put it simply, you want something that does this:
```python
import numpy as np
numpy_arr = np.zeros((1, 1, 1))
null_indices = np.array([1, 2])
np.insert(numpy_arr, null_indices, np.nan, axis=0)
# raise an error, instead of outputting
# array([[[ 0.]],
# [[nan]],
# [[nan]]])
```
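For reference, a small sketch of how the insertion indices could be shifted so that `np.insert` accepts them (an assumption about the intended fix, not code taken from the library):
```python
import numpy as np

numpy_arr = np.zeros((1, 1, 1))
null_indices = np.array([1, 2])

# np.insert interprets multiple indices relative to the *original* array, so indices
# computed against the final (post-insertion) array have to be shifted back first
shifted = null_indices - np.arange(len(null_indices))
print(np.insert(numpy_arr.astype(np.float64), shifted, np.nan, axis=0))
# [[[ 0.]]
#  [[nan]]
#  [[nan]]]
```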
### Expected behavior
The previous code should not raise an error.
### Environment info
- Python 3.10.11
- datasets 2.10.0
- pyarrow 12.0.0
|
CLOSED
| 2023-06-06T14:36:22
| 2023-06-13T12:39:39
| 2023-06-09T13:23:50
|
https://github.com/huggingface/datasets/issues/5927
|
qgallouedec
| 2
|
[] |
5,926
|
Uncaught exception when generating the splits from a dataset that miss data
|
### Describe the bug
Dataset https://huggingface.co/datasets/blog_authorship_corpus has an issue with its hosting platform, since https://drive.google.com/u/0/uc?id=1cGy4RNDV87ZHEXbiozABr9gsSrZpPaPz&export=download returns a 404 error.
But when trying to generate the split names, we get an exception which is not correctly caught.
Seen originally in https://github.com/huggingface/datasets-server/blob/adbdcd6710ffed4e2eb2e4cd905b5e0dff530a15/services/worker/src/worker/job_runners/config/parquet_and_info.py#L435
### Steps to reproduce the bug
```python
>>> from datasets import StreamingDownloadManager, load_dataset_builder
>>> builder = load_dataset_builder(path="blog_authorship_corpus")
Downloading builder script: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5.60k/5.60k [00:00<00:00, 23.1MB/s]
Downloading metadata: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.81k/2.81k [00:00<00:00, 14.7MB/s]
Downloading readme: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7.30k/7.30k [00:00<00:00, 30.8MB/s]
>>> dl_manager = StreamingDownloadManager(base_path=builder.base_path)
>>> builder._split_generators(dl_manager)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/blog_authorship_corpus/6f5d78241afd8313111956f877a57db7a0e9fc6718255dc85df0928197feb683/blog_authorship_corpus.py", line 79, in _split_generators
data = dl_manager.download_and_extract(_DATA_URL)
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 1087, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 1039, in extract
urlpaths = map_nested(self._extract, url_or_urls, map_tuple=True)
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 435, in map_nested
return function(data_struct)
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 1044, in _extract
protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 433, in _get_extraction_protocol
with fsspec.open(urlpath, **kwargs) as f:
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/fsspec/core.py", line 439, in open
return open_files(
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/fsspec/core.py", line 194, in __getitem__
out = super().__getitem__(item)
IndexError: list index out of range
```
### Expected behavior
We should have an Exception raised by the datasets library.
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.19.0-1026-aws-x86_64-with-glibc2.35
- Python version: 3.9.15
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 2.0.2
|
OPEN
| 2023-06-06T13:51:01
| 2023-06-07T07:53:16
| null |
https://github.com/huggingface/datasets/issues/5926
|
severo
| 1
|
[] |
5,925
|
Breaking API change in datasets.list_datasets caused by change in HfApi.list_datasets
|
### Describe the bug
Hi all,
after an update of the `datasets` library, we observed crashes in our code. We relied on `datasets.list_datasets` returning a `list`. Now, after the API of `HfApi.list_datasets` was changed so that it returns an `Iterable` instead of a `list`, `datasets.list_datasets` sometimes returns a `list` and sometimes an `Iterable`.
It would be helpful to indicate that by the return type of the `datasets.list_datasets` function.
Thanks,
Martin
### Steps to reproduce the bug
Here, the code crashed after we updated the `datasets` library:
```python
# list_datasets no longer returns a list, which leads to an error when one tries to slice it
for dataset in datasets.list_datasets(with_details=True)[:limit]:
...
```
### Expected behavior
It would be helpful to indicate that by the return type of the `datasets.list_datasets` function.
### Environment info
Ubuntu 22.04
datasets 2.12.0
|
CLOSED
| 2023-06-05T14:46:04
| 2023-06-19T17:22:43
| 2023-06-19T17:22:43
|
https://github.com/huggingface/datasets/issues/5925
|
mtkinit
| 0
|
[] |
5,923
|
Cannot import datasets - ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility
|
### Describe the bug
When trying to import datasets, I get a pyarrow ValueError:
Traceback (most recent call last):
File "/Users/edward/test/test.py", line 1, in <module>
import datasets
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/__init__.py", line 43, in <module>
from .arrow_dataset import Dataset
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 65, in <module>
from .arrow_reader import ArrowReader
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/arrow_reader.py", line 28, in <module>
import pyarrow.parquet as pq
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/parquet/__init__.py", line 20, in <module>
from .core import *
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 45, in <module>
from pyarrow.fs import (LocalFileSystem, FileSystem, FileType,
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/fs.py", line 49, in <module>
from pyarrow._gcsfs import GcsFileSystem # noqa
File "pyarrow/_gcsfs.pyx", line 1, in init pyarrow._gcsfs
ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject
### Steps to reproduce the bug
`import datasets`
### Expected behavior
Successful import
### Environment info
Conda environment, MacOS
python 3.9.12
datasets 2.12.0
|
CLOSED
| 2023-06-02T04:16:32
| 2024-06-27T10:07:49
| 2024-02-25T16:38:03
|
https://github.com/huggingface/datasets/issues/5923
|
ehuangc
| 25
|
[] |
5,922
|
Length of table does not accurately reflect the split
|
### Describe the bug
I load a Huggingface Dataset and do `train_test_split`. I'm expecting the underlying table for the dataset to also be split, but it's not.
### Steps to reproduce the bug

### Expected behavior
The expected behavior is that `len(hf_dataset["train"].data)` matches the length of the train split, and is not the entire unsplit dataset.
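For context, a short sketch of the behaviour involved (a hedged illustration: `train_test_split` keeps the full table plus an indices mapping, and `flatten_indices()` materialises the selection):
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(100))}).train_test_split(test_size=0.2)
print(len(ds["train"]), ds["train"].data.num_rows)  # 80 vs 100: .data still holds the unsplit table
flat = ds["train"].flatten_indices()                # rewrite the Arrow table to the selected rows only
print(len(flat), flat.data.num_rows)                # 80 vs 80
```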
### Environment info
datasets 2.10.1
python 3.10.11
|
CLOSED
| 2023-06-01T18:56:26
| 2023-06-02T16:13:31
| 2023-06-02T16:13:31
|
https://github.com/huggingface/datasets/issues/5922
|
amogkam
| 2
|
[
"wontfix"
] |
5,918
|
File not found for audio dataset
|
### Describe the bug
After loading an audio dataset and looking at a sample entry, the `path` element, which is supposed to be the path to the audio file, points to a file that doesn't actually exist.
### Steps to reproduce the bug
Run bug.py:
```py
import os.path
from datasets import load_dataset
def run() -> None:
cv13 = load_dataset(
"mozilla-foundation/common_voice_13_0",
"hi",
split="train",
)
print(cv13[0])
audio_file = cv13[0]["path"]
if not os.path.exists(audio_file):
raise ValueError(f'File {audio_file} does not exist.')
if __name__ == "__main__":
run()
```
The result (on my machine):
```json
{'client_id': '0f018a99663f33afbb7d38aee281fb1afcfd07f9e7acd00383f604e1e17c38d6ed8adf1bd2ccbf927a52c5adefb8ac4b158ce27a7c2ed9581e71202eb302dfb3', 'path': 'C:\\Users\\rober\\.cache\\huggingface\\datasets\\downloads\\extracted\\8d1479bc09b4609bc2675bd02d6869a4d5e09f7e6616f540bd55eacef46c6e2b\\common_voice_hi_26008353.mp3', 'audio': {'path': 'C:\\Users\\rober\\.cache\\huggingface\\datasets\\downloads\\extracted\\8d1479bc09b4609bc2675bd02d6869a4d5e09f7e6616f540bd55eacef46c6e2b\\common_voice_hi_26008353.mp3', 'array': array([ 6.46234854e-26, -1.35709319e-25, -8.07793567e-26, ...,
1.06425944e-07, 4.46417090e-08, 2.61451660e-09]), 'sampling_rate': 48000}, 'sentence': 'हमने उसका जन्मदिन मनाया।', 'up_votes': 2, 'down_votes': 0, 'age': '', 'gender': '', 'accent': '', 'locale': 'hi', 'segment': '', 'variant': ''}
```
```txt
Traceback (most recent call last):
File "F:\eo-reco\bug.py", line 18, in <module>
run()
File "F:\eo-reco\bug.py", line 15, in run
raise ValueError(f'File {audio_file} does not exist.')
ValueError: File C:\Users\rober\.cache\huggingface\datasets\downloads\extracted\8d1479bc09b4609bc2675bd02d6869a4d5e09f7e6616f540bd55eacef46c6e2b\common_voice_hi_26008353.mp3 does not exist.
```
### Expected behavior
The `path` element points to the correct file, which happens to be:
```
C:\Users\rober\.cache\huggingface\datasets\downloads\extracted\8d1479bc09b4609bc2675bd02d6869a4d5e09f7e6616f540bd55eacef46c6e2b\hi_train_0\common_voice_hi_26008353.mp3
```
That is, there's an extra directory `hi_train_0` that is not in the `path` element.
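A possible workaround (a hedged sketch, not from the original report) is to rely on the decoded `audio` column, which resolves the file itself, rather than on the raw `path` string:
```python
sample = cv13[0]["audio"]           # the Audio feature decodes the file for you
waveform = sample["array"]          # decoded samples as a numpy array
sampling_rate = sample["sampling_rate"]
```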
### Environment info
- `datasets` version: 2.12.0
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.11.3
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
-
|
OPEN
| 2023-06-01T02:15:29
| 2023-06-11T06:02:25
| null |
https://github.com/huggingface/datasets/issues/5918
|
RobertBaruch
| 1
|
[] |
5,914
|
array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size in Datasets
|
### Describe the bug
When using the `filter` or `map` function to preprocess a dataset, a ValueError is encountered with the error message "array is too big; arr.size * arr.dtype.itemsize is larger than the maximum possible size."
Detailed error message:
Traceback (most recent call last):
File "data_processing.py", line 26, in <module>
processed_dataset[split] = samromur_children[split].map(prepare_dataset, cache_file_name=cache_dict[split],writer_batch_size = 50)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2405, in map
desc=desc,
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 524, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/fingerprint.py", line 480, in wrapper
out = func(self, *args, **kwargs)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2756, in _map_single
example = apply_function_on_filtered_inputs(example, i, offset=offset)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2655, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2347, in decorated
result = f(decorated_item, *args, **kwargs)
File "data_processing.py", line 11, in prepare_dataset
audio = batch["audio"]
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 123, in __getitem__
value = decode_nested_example(self.features[key], value) if value is not None else None
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/features/features.py", line 1260, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/features/audio.py", line 156, in decode_example
array, sampling_rate = self._decode_non_mp3_path_like(path, token_per_repo_id=token_per_repo_id)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/features/audio.py", line 257, in _decode_non_mp3_path_like
array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/librosa/core/audio.py", line 176, in load
y, sr_native = __soundfile_load(path, offset, duration, dtype)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/librosa/core/audio.py", line 222, in __soundfile_load
y = sf_desc.read(frames=frame_duration, dtype=dtype, always_2d=False).T
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/soundfile.py", line 891, in read
out = self._create_empty_array(frames, always_2d, dtype)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/soundfile.py", line 1323, in _create_empty_array
return np.empty(shape, dtype, order='C')
ValueError: array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size.
### Steps to reproduce the bug
```python
from datasets import load_dataset, DatasetDict
from transformers import WhisperFeatureExtractor
from transformers import WhisperTokenizer
samromur_children= load_dataset("language-and-voice-lab/samromur_children")
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small", language="icelandic", task="transcribe")
def prepare_dataset(batch):
# load and resample audio data from 48 to 16kHz
audio = batch["audio"]
# compute log-Mel input features from input audio array
batch["input_features"] = feature_extractor(audio["array"], sampling_rate=16000).input_features[0]
# encode target text to label ids
batch["labels"] = tokenizer(batch["normalized_text"]).input_ids
return batch
cache_dict = {"train": "./cache/audio_train.cache", \
"validation": "./cache/audio_validation.cache", \
"test": "./cache/audio_test.cache"}
filter_cache_dict = {"train": "./cache/filter_train.arrow", \
"validation": "./cache/filter_validation.arrow", \
"test": "./cache/filter_test.arrow"}
print("before filtering")
print(samromur_children)
#filter the dataset to only include examples with more than 2 seconds of audio
samromur_children = samromur_children.filter(lambda example: example["audio"]["array"].shape[0] > 16000*2, cache_file_names=filter_cache_dict)
print("after filtering")
print(samromur_children)
processed_dataset = DatasetDict()
# processed_dataset = samromur_children.map(prepare_dataset, cache_file_names=cache_dict, num_proc=10,)
for split in ["train", "validation", "test"]:
processed_dataset[split] = samromur_children[split].map(prepare_dataset, cache_file_name=cache_dict[split])
```
### Expected behavior
The dataset is successfully processed and ready to train the model.
### Environment info
Python version: 3.7.13
datasets package version: 2.4.0
librosa package version: 0.10.0.post2
|
OPEN
| 2023-05-30T04:25:00
| 2024-10-27T04:09:18
| null |
https://github.com/huggingface/datasets/issues/5914
|
ravenouse
| 2
|
[] |
5,913
|
I tried to load a custom dataset using the following statement: dataset = load_dataset('json', data_files=data_files). The dataset contains 50 million text-image pairs, but an error occurred.
|
### Describe the bug
File "/home/kas/.conda/envs/diffusers/lib/python3.7/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
Downloading and preparing dataset json/default to /home/kas/diffusers/examples/dreambooth/cache_data/datasets/json/default-acf423d8c6ef99d0/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4...
Downloading data files: 0%| | 0/1 [00:00<?, ?it/s] Downloading data files: 100%|██████████| 1/1 [00:00<00:00, 84.35it/s]
Extracting data files: 0%| | 0/1 [00:00<?, ?it/s] for _, table in generator:
File "/home/kas/.conda/envs/diffusers/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 114, in _generate_tables
io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)
File "pyarrow/_json.pyx", line 258, in pyarrow._json.read_json
Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 27.72it/s]
Generating train split: 0 examples [00:00, ? examples/s] File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 125, in pyarrow.lib.check_status
pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 2390448764
### Steps to reproduce the bug
1. `data_files = ["1.json", "2.json", "3.json"]`
2. `dataset = load_dataset('json', data_files=data_files)`
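One possible workaround (a hedged sketch that only applies if each file is a single huge JSON array rather than JSON Lines) is to convert the files to JSON Lines so the builder can read them in small blocks instead of one multi-GB Arrow array:
```python
import json

# convert one big JSON array file into JSON Lines (still needs RAM for one file, but only once)
with open("1.json") as src, open("1.jsonl", "w") as dst:
    for record in json.load(src):
        dst.write(json.dumps(record, ensure_ascii=False) + "\n")

# then: dataset = load_dataset("json", data_files=["1.jsonl", "2.jsonl", "3.jsonl"])
```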
### Expected behavior
Read the dataset normally.
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-4.15.0-29-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 1.3.5
|
CLOSED
| 2023-05-30T02:55:26
| 2023-07-24T12:00:38
| 2023-07-24T12:00:38
|
https://github.com/huggingface/datasets/issues/5913
|
cjt222
| 2
|
[] |
5,912
|
Missing elements in `map` a batched dataset
|
### Describe the bug
As outlined [here](https://discuss.huggingface.co/t/length-error-using-map-with-datasets/40969/3?u=sachin), the following collate function drops 5 out of possible 6 elements in the batch (it is 6 because out of the eight, two are bad links in laion). A reproducible [kaggle kernel ](https://www.kaggle.com/sachin/laion-hf-dataset/edit) can be found here.
The weirdest part is that when inspecting the sizes of the tensors as shown below, both `tokenized_captions["input_ids"]` and `image_features` have the correct shapes, yet the output only has one element (with the batch dimension squeezed out).
```python
class CollateFn:
def get_image(self, url):
try:
response = requests.get(url)
return Image.open(io.BytesIO(response.content)).convert("RGB")
except PIL.UnidentifiedImageError:
logger.info(f"Reading error: Could not transform f{url}")
return None
except requests.exceptions.ConnectionError:
logger.info(f"Connection error: Could not transform f{url}")
return None
def __call__(self, batch):
images = [self.get_image(url) for url in batch["url"]]
captions = [caption for caption, image in zip(batch["caption"], images) if image is not None]
images = [image for image in images if image is not None]
tokenized_captions = tokenizer(
captions,
padding="max_length",
truncation=True,
max_length=tokenizer.model_max_length,
return_tensors="pt",
)
image_features = torch.stack([torch.Tensor(feature_extractor(image)["pixel_values"][0]) for image in images])
# import pdb; pdb.set_trace()
return {"input_ids": tokenized_captions["input_ids"], "images": image_features}
collate_fn = CollateFn()
laion_ds = datasets.load_dataset("laion/laion400m", split="train", streaming=True)
laion_ds_batched = laion_ds.map(collate_fn, batched=True, batch_size=8, remove_columns=next(iter(laion_ds)).keys())
```
### Steps to reproduce the bug
A reproducible [kaggle kernel ](https://www.kaggle.com/sachin/laion-hf-dataset/edit) can be found here.
### Expected behavior
Would expect `next(iter(laion_ds_batched))` to produce two tensors of shapes `(batch_size, 77)` and `(batch_size, *image_shape)`.
### Environment info
datasets==2.12.0
python==3.10
|
CLOSED
| 2023-05-29T08:09:19
| 2023-07-26T15:48:15
| 2023-07-26T15:48:15
|
https://github.com/huggingface/datasets/issues/5912
|
sachinruk
| 1
|
[] |
5,910
|
Cannot use both set_format and set_transform
|
### Describe the bug
I need to process some data using the set_transform method but I also need the data to be formatted for pytorch before processing it.
I don't see anywhere in the documentation something that says that both methods cannot be used at the same time.
### Steps to reproduce the bug
```
from datasets import load_dataset
ds = load_dataset("mnist", split="train")
ds.set_format(type="torch")
def transform(entry):
return entry["image"].double()
ds.set_transform(transform)
print(ds[0])
```
### Expected behavior
It should print the PyTorch tensor image as a double, but it errors because `entry` in the transform function doesn't receive a PyTorch tensor to begin with; it receives a PIL Image, so `entry["image"].double()` errors because the image isn't a PyTorch tensor.
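A workaround sketch (an assumption, using torchvision for the conversion rather than any `datasets` helper) is to do the tensor conversion inside the transform itself, since `set_transform` replaces the formatting step:
```python
from datasets import load_dataset
from torchvision.transforms.functional import pil_to_tensor

ds = load_dataset("mnist", split="train")

def transform(batch):
    # set_transform receives un-formatted (PIL) images, so convert to tensors here
    batch["image"] = [pil_to_tensor(img).double() for img in batch["image"]]
    return batch

ds.set_transform(transform)
print(ds[0]["image"].dtype)  # torch.float64
```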
### Environment info
Latest versions.
### Note:
It would be at least handy to have access to a function that can do the dataset.set_format in the set_transform function.
Something like:
```
from datasets import load_dataset, do_format
ds = load_dataset("mnist", split="train")
def transform(entry):
entry = do_format(entry, type="torch")
return entry["image"].double()
ds.set_transform(transform)
print(ds[0])
```
|
CLOSED
| 2023-05-27T19:22:23
| 2023-07-09T21:40:54
| 2023-06-16T14:41:24
|
https://github.com/huggingface/datasets/issues/5910
|
ybouane
| 5
|
[] |
5,908
|
Unbearably slow sorting on big mapped datasets
|
### Describe the bug
For me, with ~40k lines, sorting took 3.5 seconds on a flattened dataset (including the flatten operation) and 22.7 seconds on a mapped dataset (right after sharding), which is about a 5x slowdown. Moreover, it seems to slow down exponentially with bigger datasets (I wasn't able to sort 700k lines at all, whereas with flattening it takes about a minute).
### Steps to reproduce the bug
```Python
from datasets import load_dataset
import time
dataset = load_dataset("xnli", "en", split="train")
dataset = dataset.shard(10, 0)
print(len(dataset))
t = time.time()
# dataset = dataset.flatten_indices() # uncomment this line and it's fast
dataset = dataset.sort("label", reverse=True, load_from_cache_file=False)
print(f"finished in {time.time() - t:.4f} seconds")
```
### Expected behavior
Expect sorting to take the same or less time than flattening and then sorting.
### Environment info
- `datasets` version: 2.12.1.dev0 (same with 2.12.0 too)
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.10.10
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
|
OPEN
| 2023-05-27T11:08:32
| 2023-06-13T17:45:10
| null |
https://github.com/huggingface/datasets/issues/5908
|
maximxlss
| 6
|
[] |
5,906
|
Could you unpin responses version?
|
### Describe the bug
Could you unpin [this](https://github.com/huggingface/datasets/blob/main/setup.py#L139) or move it to the test requirements? This is a testing library and we use it for our tests as well. We do not want to use a very outdated version.
### Steps to reproduce the bug
I could not install this library due to a dependency conflict.
### Expected behavior
can install datasets
### Environment info
linux 64
|
CLOSED
| 2023-05-26T20:02:14
| 2023-05-30T17:53:31
| 2023-05-30T17:53:31
|
https://github.com/huggingface/datasets/issues/5906
|
kenimou
| 0
|
[] |
5,905
|
Offer an alternative to Iterable Dataset that allows lazy loading and processing while skipping batches efficiently
|
### Feature request
I would like a way to resume training from a checkpoint without waiting for a very long time when using an iterable dataset.
### Motivation
I am training models on the speech-recognition task. I have very large datasets that I can't comfortably store on a disk and also quite computationally intensive audio processing to do. As a result I want to load data from my remote when it is needed and perform all processing on the fly.
I am currently using the iterable dataset feature of _datasets_. It does everything I need with one exception. My issue is that when resuming training at a step n, we have to download all the data and perform the processing of steps < n, just to get the iterable at the right step. In my case it takes almost as long as training for the same steps, which makes resuming training from a checkpoint useless in practice.
I understand that the nature of iterators makes it probably nearly impossible to quickly resume training.
I thought about a possible solution nonetheless :
I could in fact index my large dataset and make it a mapped dataset. Then I could use set_transform to perform the processing on the fly. Finally, if I'm not mistaken, the _accelerate_ package allows to [skip steps efficiently](https://github.com/huggingface/accelerate/blob/a73898027a211c3f6dc4460351b0ec246aa824aa/src/accelerate/data_loader.py#L827) for a mapped dataset.
Is it possible to lazily load samples of a mapped dataset ? I'm used to [dataset scripts](https://huggingface.co/docs/datasets/dataset_script), maybe something can be done there.
If not, I could do it using a plain _PyTorch_ dataset. Then I would need to convert it to a _datasets_ dataset to get all the features of _datasets_. Is that possible?
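To make the idea concrete, a rough sketch of the "index first, process lazily" approach described above (all names and the audio loader are illustrative assumptions, not an existing _datasets_ API):
```python
from datasets import Dataset
import torchaudio  # assumed audio backend for the example

# the mapped dataset only stores lightweight references, so it stays small on disk
ds = Dataset.from_dict({
    "audio_path": ["clip_0.wav", "clip_1.wav"],  # hypothetical paths
    "text": ["hello", "world"],
})

def lazy_transform(batch):
    # heavy I/O and feature extraction happen only when rows are actually requested
    batch["waveform"] = [torchaudio.load(path)[0] for path in batch["audio_path"]]
    return batch

ds.set_transform(lazy_transform)

# random access stays cheap, so a dataloader can skip to step n without re-processing steps < n
sample = ds[1]
```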
### Your contribution
I could provide a PR to allow lazy loading of mapped dataset or the conversion of a mapped _Pytorch_ dataset into a _Datasets_ dataset if you think it is an useful new feature.
|
OPEN
| 2023-05-26T12:33:02
| 2023-06-15T13:34:18
| null |
https://github.com/huggingface/datasets/issues/5905
|
bruno-hays
| 1
|
[
"enhancement"
] |