| column | type | range |
| --- | --- | --- |
| number | int64 | 2 to 7.91k |
| title | string (length) | 1 to 290 |
| body | string (length) | 0 to 228k |
| state | string | 2 classes |
| created_at | timestamp[s] | 2020-04-14 18:18:51 to 2025-12-16 10:45:02 |
| updated_at | timestamp[s] | 2020-04-29 09:23:05 to 2025-12-16 19:34:46 |
| closed_at | timestamp[s] | 2020-04-29 09:23:05 to 2025-12-16 14:20:48 |
| url | string (length) | 48 to 51 |
| author | string (length) | 3 to 26 |
| comments_count | int64 | 0 to 70 |
| labels | list (length) | 0 to 4 |
2,179
Load small datasets in-memory instead of using memory map
Currently all datasets are loaded using memory mapping by default in `load_dataset`. However this might not be necessary for small datasets. If a dataset is small enough, then it can be loaded in-memory and:
- its memory footprint would be small, so it's ok
- in-memory computations/queries would be faster
- the caching on-disk would be disabled, making computations even faster (not I/O bound by the disk)
- but running the same computation a second time would recompute everything, since there would be no cached results on-disk. This is probably fine since computations would be fast anyway, and users should be able to provide a cache filename if needed.

Therefore, maybe the default behavior of `load_dataset` should be to load small datasets in-memory and big datasets using memory mapping.
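A minimal sketch of what the opt-in could look like from the user side, assuming a `keep_in_memory` flag on `load_dataset` (treated here as a hypothetical argument, not necessarily the final API):

```python
from datasets import load_dataset

# Small dataset: ask for an in-memory copy instead of a memory-mapped Arrow file (hypothetical flag).
small = load_dataset("squad", split="validation", keep_in_memory=True)

# Large dataset: keep the default memory-mapped behavior.
large = load_dataset("wikipedia", "20200501.en", split="train")
```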
CLOSED
2021-04-07T09:58:16
2021-04-20T10:04:04
2021-04-20T10:04:03
https://github.com/huggingface/datasets/issues/2179
lhoestq
0
[ "enhancement", "generic discussion" ]
2,176
Converting a Value to a ClassLabel
Hi! In the docs for `cast`, it's noted that `For non-trivial conversion, e.g. string <-> ClassLabel you should use map() to update the Dataset.` Would it be possible to have an example that demonstrates such a string <-> ClassLabel conversion using `map`? Thanks!
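Perhaps something along these lines could serve as the example, sketched with `map` plus `cast` (the file name, column names, and label names below are made up for illustration):

```python
from datasets import ClassLabel, load_dataset

ds = load_dataset("csv", data_files="reviews.csv", split="train")  # has a string column "label"
labels = ClassLabel(names=["negative", "positive"])

# string -> ClassLabel: map the strings to integer ids, then cast the column's feature type
ds = ds.map(lambda ex: {"label": labels.str2int(ex["label"])})
new_features = ds.features.copy()
new_features["label"] = labels
ds = ds.cast(new_features)

# ClassLabel -> string: the reverse direction with int2str
ds = ds.map(lambda ex: {"label_text": labels.int2str(ex["label"])})
```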
CLOSED
2021-04-06T22:54:16
2022-06-01T16:31:49
2022-06-01T16:31:49
https://github.com/huggingface/datasets/issues/2176
nelson-liu
2
[ "enhancement" ]
2,175
dataset.search_batch() function outputs all -1 indices sometime.
I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**. During the retrieval phase, exactly in [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_rag.py#L231), an error occurs when all retrieved indices are -1. Please refer to the screenshot of a PID worker. ![image](https://user-images.githubusercontent.com/16892570/113782387-37a67600-9786-11eb-9c29-acad661a9648.png) Here, my retrieval batch size is 2 and n_docs is 5. I can work around this at the `np.stack` call, but I want to ask why we get an output index of -1. Do you have any idea :) ? Is this a problem of the index, where faiss can't find any similar vector? Is there documentation on the output index being -1? @lhoestq
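For reference, a small self-contained sketch of how -1 indices can appear with an IVF index: faiss pads the result with -1 when it finds fewer than `k` neighbors, for example when too few inverted lists are probed or the index is under-populated. The sizes below are illustrative, not taken from the issue:

```python
import faiss
import numpy as np

d = 768
index = faiss.index_factory(d, "IVF256,Flat")

xb = np.random.rand(1000, d).astype("float32")
index.train(xb)
index.add(xb)

index.nprobe = 1  # probe very few lists, so some queries may return fewer than k hits
D, I = index.search(np.random.rand(2, d).astype("float32"), 5)
print(I)  # missing neighbors show up as -1 in the index matrix
```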
CLOSED
2021-04-06T21:50:49
2021-04-16T12:21:16
2021-04-16T12:21:15
https://github.com/huggingface/datasets/issues/2175
shamanez
6
[]
2,170
Wikipedia historic dumps are deleted but hf/datasets hardcodes dump date
Wikimedia does not keep all historical dumps. For example, as of today https://dumps.wikimedia.org/kowiki/ only provides ``` 20201220/ 02-Feb-2021 01:36 - 20210101/ 21-Feb-2021 01:26 - 20210120/ 02-Mar-2021 01:25 - 20210201/ 21-Mar-2021 01:26 - 20210220/ 02-Apr-2021 01:26 - 20210301/ 03-Mar-2021 08:10 - 20210320/ 21-Mar-2021 18:13 - 20210401/ 03-Apr-2021 10:08 - latest/ 03-Apr-2021 10:08 - ``` However, the wikipedia dataset provided in the library, only supports the following configs, none of which are applicable anymore when disregarding the cached datasets: ``` ValueError: BuilderConfig 20210401.ko not found. Available: ['20200501.aa', '20200501.ab', '20200501.ace', '20200501.ady', '20200501.af', '20200501.ak', '20200501.als', '20200501.am', '20200501.an', '20200501.ang', '20200501.ar', '20200501.arc', '20200501.arz', '20200501.as', '20200501.ast', '20200501.atj', '20200501.av', '20200501.ay', '20200501.az', '20200501.azb', '20200501.ba', '20200501.bar', '20200501.bat-smg', '20200501.bcl', '20200501.be', '20200501.be-x-old', '20200501.bg', '20200501.bh', '20200501.bi', '20200501.bjn', '20200501.bm', '20200501.bn', '20200501.bo', '20200501.bpy', '20200501.br', '20200501.bs', '20200501.bug', '20200501.bxr', '20200501.ca', '20200501.cbk-zam', '20200501.cdo', '20200501.ce', '20200501.ceb', '20200501.ch', '20200501.cho', '20200501.chr', '20200501.chy', '20200501.ckb', '20200501.co', '20200501.cr', '20200501.crh', '20200501.cs', '20200501.csb', '20200501.cu', '20200501.cv', '20200501.cy', '20200501.da', '20200501.de', '20200501.din', '20200501.diq', '20200501.dsb', '20200501.dty', '20200501.dv', '20200501.dz', '20200501.ee', '20200501.el', '20200501.eml', '20200501.en', '20200501.eo', '20200501.es', '20200501.et', '20200501.eu', '20200501.ext', '20200501.fa', '20200501.ff', '20200501.fi', '20200501.fiu-vro', '20200501.fj', '20200501.fo', '20200501.fr', '20200501.frp', '20200501.frr', '20200501.fur', '20200501.fy', '20200501.ga', '20200501.gag', '20200501.gan', '20200501.gd', '20200501.gl', '20200501.glk', '20200501.gn', '20200501.gom', '20200501.gor', '20200501.got', '20200501.gu', '20200501.gv', '20200501.ha', '20200501.hak', '20200501.haw', '20200501.he', '20200501.hi', '20200501.hif', '20200501.ho', '20200501.hr', '20200501.hsb', '20200501.ht', '20200501.hu', '20200501.hy', '20200501.ia', '20200501.id', '20200501.ie', '20200501.ig', '20200501.ii', '20200501.ik', '20200501.ilo', '20200501.inh', '20200501.io', '20200501.is', '20200501.it', '20200501.iu', '20200501.ja', '20200501.jam', '20200501.jbo', '20200501.jv', '20200501.ka', '20200501.kaa', '20200501.kab', '20200501.kbd', '20200501.kbp', '20200501.kg', '20200501.ki', '20200501.kj', '20200501.kk', '20200501.kl', '20200501.km', '20200501.kn', '20200501.ko', '20200501.koi', '20200501.krc', '20200501.ks', '20200501.ksh', '20200501.ku', '20200501.kv', '20200501.kw', '20200501.ky', '20200501.la', '20200501.lad', '20200501.lb', '20200501.lbe', '20200501.lez', '20200501.lfn', '20200501.lg', '20200501.li', '20200501.lij', '20200501.lmo', '20200501.ln', '20200501.lo', '20200501.lrc', '20200501.lt', '20200501.ltg', '20200501.lv', '20200501.mai', '20200501.map-bms', '20200501.mdf', '20200501.mg', '20200501.mh', '20200501.mhr', '20200501.mi', '20200501.min', '20200501.mk', '20200501.ml', '20200501.mn', '20200501.mr', '20200501.mrj', '20200501.ms', '20200501.mt', '20200501.mus', '20200501.mwl', '20200501.my', '20200501.myv', '20200501.mzn', '20200501.na', '20200501.nah', '20200501.nap', '20200501.nds', '20200501.nds-nl', '20200501.ne', '20200501.new', 
'20200501.ng', '20200501.nl', '20200501.nn', '20200501.no', '20200501.nov', '20200501.nrm', '20200501.nso', '20200501.nv', '20200501.ny', '20200501.oc', '20200501.olo', '20200501.om', '20200501.or', '20200501.os', '20200501.pa', '20200501.pag', '20200501.pam', '20200501.pap', '20200501.pcd', '20200501.pdc', '20200501.pfl', '20200501.pi', '20200501.pih', '20200501.pl', '20200501.pms', '20200501.pnb', '20200501.pnt', '20200501.ps', '20200501.pt', '20200501.qu', '20200501.rm', '20200501.rmy', '20200501.rn', '20200501.ro', '20200501.roa-rup', '20200501.roa-tara', '20200501.ru', '20200501.rue', '20200501.rw', '20200501.sa', '20200501.sah', '20200501.sat', '20200501.sc', '20200501.scn', '20200501.sco', '20200501.sd', '20200501.se', '20200501.sg', '20200501.sh', '20200501.si', '20200501.simple', '20200501.sk', '20200501.sl', '20200501.sm', '20200501.sn', '20200501.so', '20200501.sq', '20200501.sr', '20200501.srn', '20200501.ss', '20200501.st', '20200501.stq', '20200501.su', '20200501.sv', '20200501.sw', '20200501.szl', '20200501.ta', '20200501.tcy', '20200501.te', '20200501.tet', '20200501.tg', '20200501.th', '20200501.ti', '20200501.tk', '20200501.tl', '20200501.tn', '20200501.to', '20200501.tpi', '20200501.tr', '20200501.ts', '20200501.tt', '20200501.tum', '20200501.tw', '20200501.ty', '20200501.tyv', '20200501.udm', '20200501.ug', '20200501.uk', '20200501.ur', '20200501.uz', '20200501.ve', '20200501.vec', '20200501.vep', '20200501.vi', '20200501.vls', '20200501.vo', '20200501.wa', '20200501.war', '20200501.wo', '20200501.wuu', '20200501.xal', '20200501.xh', '20200501.xmf', '20200501.yi', '20200501.yo', '20200501.za', '20200501.zea', '20200501.zh', '20200501.zh-classical', '20200501.zh-min-nan', '20200501.zh-yue', '20200501.zu'] ``` The cached datasets: ``` % aws s3 --no-sign-request --endpoint-url https://storage.googleapis.com ls s3://huggingface-nlp/cache/datasets/wikipedia/ PRE 20200501.de/ PRE 20200501.en/ PRE 20200501.fr/ PRE 20200501.frr/ PRE 20200501.it/ PRE 20200501.simple/ ```
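A possible workaround sketch, assuming the wikipedia script can be driven with explicit `language`/`date` arguments and an Apache Beam runner to process a dump that is not pre-built (the exact argument names are an assumption on my side):

```python
from datasets import load_dataset

# Process the 20210401 Korean dump on the fly instead of relying on the hardcoded
# 20200501 configs; DirectRunner runs the Beam pipeline locally.
wiki_ko = load_dataset(
    "wikipedia",
    language="ko",
    date="20210401",
    beam_runner="DirectRunner",
)
```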
OPEN
2021-04-06T03:13:18
2021-06-16T01:10:50
null
https://github.com/huggingface/datasets/issues/2170
leezu
1
[]
2,167
Split type not preserved when reloading the dataset
A minimal reproducible example:

```python
>>> from datasets import load_dataset, Dataset
>>> dset = load_dataset("sst", split="train")
>>> dset.save_to_disk("sst")
>>> type(dset.split)
<class 'datasets.splits.NamedSplit'>
>>> dset = Dataset.load_from_disk("sst")
>>> type(dset.split) # NamedSplit expected
<class 'str'>
```

It seems like this bug was introduced in #2025.
CLOSED
2021-04-04T19:29:54
2021-04-19T09:08:55
2021-04-19T09:08:55
https://github.com/huggingface/datasets/issues/2167
mariosasko
0
[]
2,166
Regarding Test Sets for the GEM datasets
@yjernite Hi, are the test sets for the GEM datasets scheduled to be [added soon](https://gem-benchmark.com/shared_task)? e.g.

```python
from datasets import load_dataset

DATASET_NAME = "common_gen"
data = load_dataset("gem", DATASET_NAME)
```

The test set doesn't have the target or references.

```
data['test'][0]
{'concept_set_id': 0, 'concepts': ['drill', 'field', 'run', 'team'], 'gem_id': 'common_gen-test-0', 'gem_parent_id': 'common_gen-test-0', 'references': [], 'target': ''}
```
CLOSED
2021-04-04T02:02:45
2021-04-06T08:13:12
2021-04-06T08:13:12
https://github.com/huggingface/datasets/issues/2166
vyraun
2
[ "Dataset discussion" ]
2,165
How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset
Hi, I'm trying to pretrain a DeepSpeed model using the HF arxiv dataset like:

```python
train_ds = nlp.load_dataset('scientific_papers', 'arxiv')
train_ds.set_format(
    type="torch",
    columns=["input_ids", "attention_mask", "global_attention_mask", "labels"],
)
engine, _, _, _ = deepspeed.initialize(
    args=args,
    model=model,
    model_parameters=[p for p in model.parameters() if p.requires_grad],
    training_data=train_ds)
```

but `deepspeed.initialize` accepts a `torch.utils.data.Dataset` only. How can I convert an HF-style dataset to a torch-style dataset?
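One possible direction, sketched here as a hypothetical thin wrapper: with `set_format("torch")`, a `datasets.Dataset` already implements `__len__` and `__getitem__`, so the wrapper mostly exists to satisfy strict `isinstance` checks in libraries like DeepSpeed.

```python
import torch

class TorchDatasetWrapper(torch.utils.data.Dataset):
    """Expose a datasets.Dataset through the torch Dataset interface."""

    def __init__(self, hf_dataset):
        self.hf_dataset = hf_dataset

    def __len__(self):
        return len(self.hf_dataset)

    def __getitem__(self, idx):
        return self.hf_dataset[idx]

# `train_ds` refers to the formatted dataset from the snippet above:
# engine, _, _, _ = deepspeed.initialize(..., training_data=TorchDatasetWrapper(train_ds))
```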
CLOSED
2021-04-04T01:01:48
2021-08-24T15:55:35
2021-04-07T15:06:04
https://github.com/huggingface/datasets/issues/2165
y-rokutan
7
[]
2,162
visualization for cc100 is broken
Hi, the visualization through the dataset viewer for cc100 is broken: https://huggingface.co/datasets/viewer/ Thanks a lot
CLOSED
2021-04-02T10:11:13
2022-10-05T13:20:24
2022-10-05T13:20:24
https://github.com/huggingface/datasets/issues/2162
dorost1234
3
[ "nlp-viewer" ]
2,161
any possibility to download part of large datasets only?
Hi, some of the datasets I need, like cc100, are very large, so I wonder if I can download just the first X samples of the shuffled/unshuffled data without first downloading the whole dataset and then sampling? Thanks
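Two hedged sketches of what a partial load can look like (assuming the cc100 script accepts a `lang` argument; the streaming variant additionally assumes a release where `streaming=True` is available):

```python
from datasets import load_dataset

# Split slicing keeps only the first 10k examples, but the full archive is still downloaded first:
subset = load_dataset("cc100", lang="en", split="train[:10000]")

# Streaming iterates over the remote data and stops early, avoiding the full download:
stream = load_dataset("cc100", lang="en", split="train", streaming=True)
first_10k = list(stream.take(10_000))
```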
CLOSED
2021-04-02T10:06:46
2022-10-05T13:26:51
2022-10-05T13:26:51
https://github.com/huggingface/datasets/issues/2161
dorost1234
6
[]
2,160
data_args.preprocessing_num_workers almost freezes
Hi @lhoestq, I am running this code from huggingface transformers https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py to speed up tokenization. Since I am running on multiple datasets, I am using `data_args.preprocessing_num_workers = 4` with the opus100 corpus, but tokenization progresses up to a point, then almost freezes for some time, then resumes; overall it takes more time than the normal case. I would appreciate your advice on how to use this option properly to speed things up. Thanks
CLOSED
2021-04-02T07:56:13
2021-04-02T10:14:32
2021-04-02T10:14:31
https://github.com/huggingface/datasets/issues/2160
dorost1234
2
[]
2,159
adding ccnet dataset
## Adding a Dataset
- **Name:** ccnet
- **Description:** Common Crawl
- **Paper:** https://arxiv.org/abs/1911.00359
- **Data:** https://github.com/facebookresearch/cc_net
- **Motivation:** this is one of the most comprehensive clean monolingual datasets across a variety of languages. Quite important for cross-lingual research.

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Thanks
CLOSED
2021-04-01T23:28:36
2021-04-02T10:05:19
2021-04-02T10:05:19
https://github.com/huggingface/datasets/issues/2159
dorost1234
1
[ "dataset request" ]
2,158
viewer "fake_news_english" error
When I visit the [Huggingface - viewer](https://huggingface.co/datasets/viewer/) web site, under the dataset "fake_news_english" I get this error (along with the error traceback):

> ImportError: To be able to use this dataset, you need to install the following dependencies['openpyxl'] using 'pip install # noqa: requires this pandas optional dependency for reading xlsx files' for instance'
CLOSED
2021-04-01T14:13:20
2022-10-05T13:22:02
2022-10-05T13:22:02
https://github.com/huggingface/datasets/issues/2158
emanuelevivoli
2
[ "nlp-viewer" ]
2,153
load_dataset ignoring features
First of all, I'm sorry if it is a repeated issue or the changes are already in master, I searched and I didn't find anything. I'm using datasets 1.5.0

![image](https://user-images.githubusercontent.com/37592763/113114369-8f376580-920b-11eb-900d-94365b59f04b.png)

As you can see, when I load the dataset, the ClassLabels are ignored; I have to cast the dataset in order to make it work. Code to reproduce:

```python
import datasets

data_location = "/data/prueba_multiclase"
features = datasets.Features(
    {"texto": datasets.Value("string"), "label": datasets.features.ClassLabel(names=["false", "true"])}
)
dataset = datasets.load_dataset(
    "csv", data_files=data_location, delimiter="\t", features=features
)
```

Dataset I used: [prueba_multiclase.zip](https://github.com/huggingface/datasets/files/6235022/prueba_multiclase.zip) (it has to be unzipped)

Thank you! ❤️
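For reference, a sketch of the casting workaround mentioned above, reusing the `features` and `dataset` names from the snippet (whether `cast` should be needed at all is exactly what this issue is about):

```python
# Re-encode the loaded split with the intended Features so "label" becomes a ClassLabel.
dataset["train"] = dataset["train"].cast(features)
print(dataset["train"].features["label"])  # expected: ClassLabel(names=['false', 'true'])
```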
CLOSED
2021-03-31T08:30:09
2022-10-05T13:29:12
2022-10-05T13:29:12
https://github.com/huggingface/datasets/issues/2153
GuillemGSubies
3
[ "bug" ]
2,149
Telugu subset missing for xtreme tatoeba dataset
```python
from nlp import load_dataset
train_dataset = load_dataset('xtreme', 'tatoeba.tel')['validation']
```

```
ValueError: BuilderConfig tatoeba.tel not found.
```

but language tel is actually included in xtreme: https://github.com/google-research/xtreme/blob/master/utils_preprocess.py

```python
def tatoeba_preprocess(args):
    lang3_dict = {
        'afr':'af', 'ara':'ar', 'bul':'bg', 'ben':'bn',
        'deu':'de', 'ell':'el', 'spa':'es', 'est':'et',
        'eus':'eu', 'pes':'fa', 'fin':'fi', 'fra':'fr',
        'heb':'he', 'hin':'hi', 'hun':'hu', 'ind':'id',
        'ita':'it', 'jpn':'ja', 'jav':'jv', 'kat':'ka',
        'kaz':'kk', 'kor':'ko', 'mal':'ml', 'mar':'mr',
        'nld':'nl', 'por':'pt', 'rus':'ru', 'swh':'sw',
        'tam':'ta', 'tel':'te',   # <---- here
        'tha':'th', 'tgl':'tl',
        'tur':'tr', 'urd':'ur', 'vie':'vi', 'cmn':'zh',
        'eng':'en',
    }
```
CLOSED
2021-03-30T15:26:34
2022-10-05T13:28:30
2022-10-05T13:28:30
https://github.com/huggingface/datasets/issues/2149
cosmeowpawlitan
2
[]
2,148
Add configurable options to `seqeval` metric
Right now `load_metric("seqeval")` only works in the default mode of evaluation (equivalent to conll evaluation). However, seqeval library [supports](https://github.com/chakki-works/seqeval#support-features) different evaluation schemes (IOB1, IOB2, etc.), which can be plugged in just by supporting additional kwargs in `Seqeval._compute` https://github.com/huggingface/datasets/blob/85cf7ff920c90ca2e12bedca12b36d2a043c3da2/metrics/seqeval/seqeval.py#L109 Things that would be relevant are, for example, supporting `mode="strict", scheme=IOB2` to count only full entity match as a true positive and omit partial matches. The only problem I see is that the spirit of `metrics` seems to not require additional imports from user. `seqeval` only supports schemes as objects, without any string aliases. It can be solved naively with mapping like `{"IOB2": seqeval.scheme.IOB2}`. Or just left as is and require user to explicitly import scheme from `seqeval` if he wants to configure it past the default implementation. If that makes sense, I am happy to implement the change.
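A sketch of how the proposed interface could look from the user side (the extra kwargs are hypothetical, this is the change being requested, not the current behavior):

```python
from datasets import load_metric

metric = load_metric("seqeval")
results = metric.compute(
    predictions=[["B-PER", "I-PER", "O"]],
    references=[["B-PER", "I-PER", "O"]],
    mode="strict",    # proposed: forwarded to seqeval
    scheme="IOB2",    # proposed: resolved to seqeval.scheme.IOB2 via a string alias
)
```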
CLOSED
2021-03-30T15:04:06
2021-04-15T13:49:46
2021-04-15T13:49:46
https://github.com/huggingface/datasets/issues/2148
marrodion
1
[]
2,146
Dataset file size on disk is very large with 3D Array
Hi, I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D arrays with dtype=uint8. The actual size on disk is surprisingly large: it takes 520 MB. Here is some info from `dataset_info.json`:

```json
{
  "description": "",
  "citation": "",
  "homepage": "",
  "license": "",
  "features": {
    "image": {
      "shape": [224, 224, 3],
      "dtype": "uint8",
      "id": null,
      "_type": "Array3D"
    }
  },
  "post_processed": null,
  "supervised_keys": null,
  "builder_name": "shot_type_image_dataset",
  "config_name": "default",
  "version": {
    "version_str": "0.0.0",
    "description": null,
    "major": 0,
    "minor": 0,
    "patch": 0
  },
  "splits": {
    "train": {
      "name": "train",
      "num_bytes": 520803408,
      "num_examples": 1479,
      "dataset_name": "shot_type_image_dataset"
    }
  },
  "download_checksums": {
    "": {
      "num_bytes": 16940447118,
      "checksum": "5854035705efe08b0ed8f3cf3da7b4d29cba9055c2d2d702c79785350d72ee03"
    }
  },
  "download_size": 16940447118,
  "post_processing_size": null,
  "dataset_size": 520803408,
  "size_in_bytes": 17461250526
}
```

I have created the same dataset with tensorflow_datasets and it takes only 125 MB on disk. I am wondering, is this normal behavior? I understand `Datasets` uses Arrow for serialization whereas TF uses TFRecords. This might be a problem for large datasets. Thanks for your help.
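As a quick sanity check, a back-of-the-envelope computation of the raw payload size, assuming one byte per uint8 element (my own estimate, not from the issue):

```python
num_examples = 1479
bytes_per_image = 224 * 224 * 3            # uint8 -> 1 byte per element
raw_size = num_examples * bytes_per_image
print(raw_size / 1e6)                      # ~222.6 MB of raw pixels vs. ~520 MB reported on disk
```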
OPEN
2021-03-30T14:46:09
2021-04-16T13:07:02
null
https://github.com/huggingface/datasets/issues/2146
jblemoine
6
[]
2,144
Loading wikipedia 20200501.en throws pyarrow related error
**Problem description** I am getting the following error when trying to load wikipedia/20200501.en dataset. **Error log** Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931... Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 14.6k/14.6k [00:00<00:00, 5.41MB/s] Downloading: 59%|███████████████████████████████████████████████████████████████████████████████████████▊ | 10.7G/18.3G [11:30<08:08, 15.5MB/s] Dataset wikipedia downloaded and prepared to /usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931. Subsequent calls will reuse this data. Traceback (most recent call last): File "load_wiki.py", line 2, in <module> ds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache') File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 751, in load_dataset ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory) File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 746, in as_dataset map_tuple=True, File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 204, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 204, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 142, in _single_map_nested return function(data_struct) File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 763, in _build_single_dataset in_memory=in_memory, File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 835, in _as_dataset in_memory=in_memory, File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 215, in read return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory) File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 236, in read_files pa_table = self._read_files(files, in_memory=in_memory) File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 171, in _read_files pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory) File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 302, in _get_dataset_from_filename pa_table = ArrowReader.read_table(filename, in_memory=in_memory) File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 324, in read_table pa_table = f.read_all() File "pyarrow/ipc.pxi", line 544, in pyarrow.lib.RecordBatchReader.read_all File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status OSError: Expected to be able to read 9176784 bytes for message body, got 4918712 **Detailed version info** datasets==1.5.0 - dataclasses [required: Any, installed: 0.8] - dill [required: Any, installed: 0.3.3] - fsspec [required: Any, installed: 0.8.7] - importlib-metadata [required: Any, installed: 1.7.0] - zipp [required: >=0.5, installed: 3.1.0] - huggingface-hub [required: 
<0.1.0, installed: 0.0.7] - filelock [required: Any, installed: 3.0.12] - importlib-metadata [required: Any, installed: 1.7.0] - zipp [required: >=0.5, installed: 3.1.0] - requests [required: Any, installed: 2.24.0] - certifi [required: >=2017.4.17, installed: 2020.6.20] - chardet [required: >=3.0.2,<4, installed: 3.0.4] - idna [required: >=2.5,<3, installed: 2.6] - urllib3 [required: >=1.21.1,<1.26,!=1.25.1,!=1.25.0, installed: 1.25.10] - tqdm [required: Any, installed: 4.49.0] - importlib-metadata [required: Any, installed: 1.7.0] - zipp [required: >=0.5, installed: 3.1.0] - multiprocess [required: Any, installed: 0.70.11.1] - dill [required: >=0.3.3, installed: 0.3.3] - numpy [required: >=1.17, installed: 1.17.0] - pandas [required: Any, installed: 1.1.5] - numpy [required: >=1.15.4, installed: 1.17.0] - python-dateutil [required: >=2.7.3, installed: 2.8.0] - six [required: >=1.5, installed: 1.15.0] - pytz [required: >=2017.2, installed: 2020.1] - pyarrow [required: >=0.17.1, installed: 3.0.0] - numpy [required: >=1.16.6, installed: 1.17.0] - requests [required: >=2.19.0, installed: 2.24.0] - certifi [required: >=2017.4.17, installed: 2020.6.20] - chardet [required: >=3.0.2,<4, installed: 3.0.4] - idna [required: >=2.5,<3, installed: 2.6] - urllib3 [required: >=1.21.1,<1.26,!=1.25.1,!=1.25.0, installed: 1.25.10] - tqdm [required: >=4.27,<4.50.0, installed: 4.49.0] - xxhash [required: Any, installed: 2.0.0]
OPEN
2021-03-30T10:38:31
2021-04-01T09:21:17
null
https://github.com/huggingface/datasets/issues/2144
TomPyonsuke
6
[]
2,139
TypeError when using save_to_disk in a dataset loaded with ReadInstruction split
Hi, Loading a dataset with `load_dataset` using a split defined via `ReadInstruction` and then saving it to disk results in the following error: `TypeError: Object of type ReadInstruction is not JSON serializable`. Here is the minimal reproducible example:

```python
from datasets import load_dataset
from datasets import ReadInstruction

data_1 = load_dataset(
    "wikiann",
    "en",
    split="validation",
)
data_1.save_to_disk("temporary_path_1")

print("Save with regular split works.")

data_2 = load_dataset(
    "wikiann",
    "en",
    split=ReadInstruction("validation", to=50, unit="%"),
)
data_2.save_to_disk("temporary_path_2")
```

and the corresponding output:

```
Reusing dataset wikiann (/xxxxx/.cache/huggingface/datasets/wikiann/en/1.1.0/0b11a6fb31eea02f38ca17610657bfba3206100685283014daceb8da291c3be9)
Save with regular split works.
Reusing dataset wikiann (/xxxxx/.cache/huggingface/datasets/wikiann/en/1.1.0/0b11a6fb31eea02f38ca17610657bfba3206100685283014daceb8da291c3be9)
Traceback (most recent call last):
  File "bug.py", line 20, in <module>
    data_2.save_to_disk("temporary_path_2")
  File "/xxxxx/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 645, in save_to_disk
    json.dump(state, state_file, indent=2, sort_keys=True)
  File "/usr/lib/python3.7/json/__init__.py", line 179, in dump
    for chunk in iterable:
  File "/usr/lib/python3.7/json/encoder.py", line 431, in _iterencode
    yield from _iterencode_dict(o, _current_indent_level)
  File "/usr/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict
    yield from chunks
  File "/usr/lib/python3.7/json/encoder.py", line 438, in _iterencode
    o = _default(o)
  File "/usr/lib/python3.7/json/encoder.py", line 179, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type ReadInstruction is not JSON serializable
```

Let me know if there is some misuse from my end. Thanks in advance.
CLOSED
2021-03-29T18:23:54
2021-03-30T09:12:53
2021-03-30T09:12:53
https://github.com/huggingface/datasets/issues/2139
PedroMLF
2
[]
2,135
en language data from MLQA dataset is missing
Hi, I need the mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. Could you have a look, please? @lhoestq Thank you for your help fixing this issue.
CLOSED
2021-03-29T10:47:50
2021-03-30T10:20:23
2021-03-30T10:20:23
https://github.com/huggingface/datasets/issues/2135
rabeehk
3
[]
2,134
Saving large in-memory datasets with save_to_disk crashes because of pickling
Using Datasets 1.5.0 on Python 3.7.

Recently I've been working on medium to large size datasets (pretokenized raw text sizes from a few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively faster when done in memory, and I have the ability to requisition a lot of RAM, so I decided to do these steps completely outside the datasets library. So my workflow is to do several .map() calls on the datasets object, then, for the operation which is faster in memory, extract the necessary columns from the dataset and drop it whole, do the transformation in memory, and then create a fresh Dataset object using .from_dict() or another method.

When I then try to call save_to_disk(path) on the dataset, it crashes because of pickling, which appears to be because of an old pickle protocol that doesn't support large objects (over 4 GiB).

```
Traceback (most recent call last):
  File "./tokenize_and_chunkify_in_memory.py", line 80, in <module>
    main()
  File "./tokenize_and_chunkify_in_memory.py", line 75, in main
    tokenize_and_chunkify(config)
  File "./tokenize_and_chunkify_in_memory.py", line 60, in tokenize_and_chunkify
    contexts_dataset.save_to_disk(chunked_path)
  File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 457, in save_to_disk
    self = pickle.loads(pickle.dumps(self))
OverflowError: cannot serialize a bytes object larger than 4 GiB
```

From what I've seen this issue may already be fixed, as the line `self = pickle.loads(pickle.dumps(self))` does not appear to be present in the current state of the repository. To save these datasets to disk, I've resorted to calling .map() over them with `function=None` and specifying the .arrow cache file, and then creating a new dataset using the .from_file() method, which I can then safely save to disk.

An additional issue when working with these large in-memory datasets concerns multiprocessing and is again to do with pickling. I've tried to speed up the mapping with function=None by specifying num_proc to the available cpu count, and I again get issues with transferring the dataset, with the following traceback. I am not sure if I should open a separate issue for that.
``` Traceback (most recent call last): File "./tokenize_and_chunkify_in_memory.py", line 94, in <module> main() File "./tokenize_and_chunkify_in_memory.py", line 89, in main tokenize_and_chunkify(config) File "./tokenize_and_chunkify_in_memory.py", line 67, in tokenize_and_chunkify contexts_dataset.map(function=None, cache_file_name=str(output_dir_path / "tmp.arrow"), writer_batch_size=50000, num_proc=config.threads) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in map transformed_shards = [r.get() for r in results] File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in <listcomp> transformed_shards = [r.get() for r in results] File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 657, in get raise self._value File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 431, in _handle_tasks put(task) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/connection.py", line 209, in send self._send_bytes(_ForkingPickler.dumps(obj)) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/reduction.py", line 54, in dumps cls(buf, protocol, *args, **kwds).dump(obj) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump StockPickler.dump(self, obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 437, in dump self.save(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict StockPickler.save_dict(pickler, obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, in save_dict self._batch_setitems(obj.items()) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems save(v) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 662, in save_reduce save(state) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict StockPickler.save_dict(pickler, obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, in save_dict self._batch_setitems(obj.items()) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems save(v) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 
504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends save(x) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends save(tmp[0]) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends save(tmp[0]) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends save(tmp[0]) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in 
save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends save(x) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 732, in save_bytes self._write_large_bytes(BINBYTES + pack("<I", n), obj) struct.error: 'I' format requires 0 <= number <= 4294967295Traceback (most recent call last): File "./tokenize_and_chunkify_in_memory.py", line 94, in <module> main() File "./tokenize_and_chunkify_in_memory.py", line 89, in main tokenize_and_chunkify(config) File "./tokenize_and_chunkify_in_memory.py", line 67, in tokenize_and_chunkify contexts_dataset.map(function=None, cache_file_name=str(output_dir_path / "tmp.arrow"), writer_batch_size=50000, num_proc=config.threads) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in map transformed_shards = [r.get() for r in results] File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in <listcomp> transformed_shards = [r.get() for r in results] File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 657, in get raise self._value File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 431, in _handle_tasks put(task) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/connection.py", line 209, in send self._send_bytes(_ForkingPickler.dumps(obj)) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/reduction.py", line 54, in dumps cls(buf, protocol, *args, **kwds).dump(obj) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump StockPickler.dump(self, obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 437, in dump self.save(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict StockPickler.save_dict(pickler, obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, 
in save_dict self._batch_setitems(obj.items()) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems save(v) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 662, in save_reduce save(state) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict StockPickler.save_dict(pickler, obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, in save_dict self._batch_setitems(obj.items()) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems save(v) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends save(x) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends save(tmp[0]) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File 
"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends save(tmp[0]) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends save(tmp[0]) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends save(x) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 732, in save_bytes self._write_large_bytes(BINBYTES + pack("<I", n), obj) struct.error: 'I' format requires 0 <= number <= 4294967295 ```
CLOSED
2021-03-29T10:43:15
2021-05-03T17:59:21
2021-05-03T17:59:21
https://github.com/huggingface/datasets/issues/2134
prokopCerny
6
[ "bug" ]
2,133
bug in mlqa dataset
Hi, looking into the MLQA dataset for language "ar":

```json
"question": [
    "\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u0631?",
    "\u0643\u0645 \u0645\u0631\u0629 \u064a\u062a\u0645 \u0646\u0634\u0631\u0647\u0627 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?",
    "\u0645\u0627 \u0647\u064a \u0627\u0644\u0648\u0631\u0642\u0629 \u0627\u0644\u064a\u0648\u0645\u064a\u0629 \u0644\u0644\u0637\u0644\u0627\u0628 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?",
    "\u0643\u0645 \u0639\u062f\u062f \u0627\u0644\u0627\u0648\u0631\u0627\u0642 \u0627\u0644\u0627\u062e\u0628\u0627\u0631\u064a\u0629 \u0644\u0644\u0637\u0644\u0627\u0628 \u0627\u0644\u062a\u064a \u0648\u062c\u062f\u062a \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?",
    "\u0641\u064a \u0627\u064a \u0633\u0646\u0629 \u0628\u062f\u0627\u062a \u0648\u0631\u0642\u0629 \u0627\u0644\u0637\u0627\u0644\u0628 \u0627\u0644\u062d\u0633 \u0627\u0644\u0633\u0644\u064a\u0645 \u0628\u0627\u0644\u0646\u0634\u0631 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?"
]
```

The questions are in the wrong format and not readable, could you please have a look? Thanks @lhoestq
CLOSED
2021-03-29T09:03:09
2021-03-30T17:40:57
2021-03-30T17:40:57
https://github.com/huggingface/datasets/issues/2133
dorost1234
3
[]
2,132
TydiQA dataset is mixed and is not split per language
Hi @lhoestq, currently TydiQA is mixed and users can only access the whole training set of all languages: https://www.tensorflow.org/datasets/catalog/tydi_qa To use this dataset, one needs to train/evaluate on each language separately, and having them mixed makes the dataset hard to use. It would be much more convenient for users to have it split per language, and I would appreciate your help on this. Meanwhile, until it is hopefully split per language, I would greatly appreciate advice on how I can preprocess and get the data per language. Thanks a lot
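A rough per-language workaround sketch, under the assumption that each example id is prefixed with its language name (as in the original TyDi QA release); this is an assumption about the schema, so check the actual fields first:

```python
from datasets import load_dataset

tydiqa = load_dataset("tydiqa", "secondary_task", split="train")

# Assumed id format: "<language>--<hash>", e.g. "finnish--123...".
finnish_only = tydiqa.filter(lambda example: example["id"].startswith("finnish"))
```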
OPEN
2021-03-29T08:56:21
2021-04-04T09:57:15
null
https://github.com/huggingface/datasets/issues/2132
dorost1234
3
[]
2,131
When training with Multi-Node Multi-GPU the worker 2 has TypeError: 'NoneType' object
version: 1.5.0

I met a very strange error. I am training a large-scale language model and need to train on 2 machines (workers), and sometimes I get this error: `TypeError: 'NoneType' object is not iterable`.

This is the traceback:

```
Traceback (most recent call last):
  File "run_gpt.py", line 316, in <module>
    main()
  File "run_gpt.py", line 222, in main
    delimiter="\t", column_names=["input_ids", "attention_mask", "chinese_ref"])
  File "/data/miniconda3/lib/python3.7/site-packages/datasets/load.py", line 747, in load_dataset
    use_auth_token=use_auth_token,
  File "/data/miniconda3/lib/python3.7/site-packages/datasets/builder.py", line 513, in download_and_prepare
    self.download_post_processing_resources(dl_manager)
  File "/data/miniconda3/lib/python3.7/site-packages/datasets/builder.py", line 673, in download_post_processing_resources
    for split in self.info.splits:
TypeError: 'NoneType' object is not iterable
WARNING:datasets.builder:Reusing dataset csv (/usr/local/app/.cache/huggingface/datasets/csv/default-1c257ebd48e225e7/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2)
Traceback (most recent call last):
  File "/data/miniconda3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/data/miniconda3/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/data/miniconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 340, in <module>
    main()
  File "/data/miniconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 326, in main
    sigkill_handler(signal.SIGTERM, None)  # not coming back
  File "/data/miniconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
    raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
```

Worker 1 loads the dataset well, however on worker 2 I get this error. I hit this error from time to time; sometimes it just goes well.
CLOSED
2021-03-29T08:45:58
2021-04-10T11:08:55
2021-04-10T11:08:55
https://github.com/huggingface/datasets/issues/2131
andy-yangz
3
[ "bug" ]
2,130
wikiann dataset is missing columns
Hi, the wikiann dataset needs to have a "spans" column, which is necessary to be able to use this dataset, but this column is missing from the huggingface datasets version. Could you please have a look? Thank you @lhoestq
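In the meantime, a sketch of deriving such a column on the user side with `map`, assuming the usual `tokens`/`ner_tags` columns and a span format like `"PER: John Smith"` (the exact target format is an assumption here):

```python
from datasets import load_dataset

wikiann = load_dataset("wikiann", "en", split="validation")
tag_names = wikiann.features["ner_tags"].feature.names  # O, B-PER, I-PER, B-ORG, ...

def add_spans(example):
    # Group consecutive B-X / I-X tokens into "X: surface text" spans.
    spans, label, tokens = [], None, []
    for token, tag_id in zip(example["tokens"], example["ner_tags"]):
        tag = tag_names[tag_id]
        if tag.startswith("B-"):
            if label:
                spans.append(f"{label}: {' '.join(tokens)}")
            label, tokens = tag[2:], [token]
        elif tag.startswith("I-") and label:
            tokens.append(token)
        else:
            if label:
                spans.append(f"{label}: {' '.join(tokens)}")
            label, tokens = None, []
    if label:
        spans.append(f"{label}: {' '.join(tokens)}")
    return {"spans": spans}

wikiann = wikiann.map(add_spans)
```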
CLOSED
2021-03-29T08:23:00
2021-08-27T14:44:18
2021-08-27T14:44:18
https://github.com/huggingface/datasets/issues/2130
dorost1234
5
[ "good first issue" ]
2,129
How to train BERT model with next sentence prediction?
Hello. I'm trying to pretrain the BERT model with next sentence prediction. Is there any function that supports next sentence prediction, like `TextDatasetForNextSentencePrediction` in `huggingface/transformers`?
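As far as I know `datasets` doesn't ship an NSP helper, but pairs can be built by hand; a rough sketch (the label convention follows `transformers`, where 0 means sentence B really follows sentence A):

```python
import random
from datasets import Dataset, load_dataset

raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
sentences = [t for t in raw["text"] if t.strip()]

def make_nsp_examples(num_examples=10_000):
    examples = {"sentence_a": [], "sentence_b": [], "next_sentence_label": []}
    for _ in range(num_examples):
        i = random.randrange(len(sentences) - 1)
        if random.random() < 0.5:
            b, label = sentences[i + 1], 0          # true next sentence
        else:
            b, label = random.choice(sentences), 1  # random sentence
        examples["sentence_a"].append(sentences[i])
        examples["sentence_b"].append(b)
        examples["next_sentence_label"].append(label)
    return examples

nsp_dataset = Dataset.from_dict(make_nsp_examples())
```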
CLOSED
2021-03-29T06:48:03
2021-04-01T04:58:40
2021-04-01T04:58:40
https://github.com/huggingface/datasets/issues/2129
jnishi
4
[]
2,128
Dialogue action slot name and value are reversed in MultiWoZ 2.2
Hi @yjernite, thank you for adding MultiWoZ 2.2 to the huggingface datasets platform. It is very useful! I spotted an error: the order of dialogue action slot names and values is reversed. https://github.com/huggingface/datasets/blob/649b2c469779bc4221e1b6969aa2496d63eb5953/datasets/multi_woz_v22/multi_woz_v22.py#L251-L262
CLOSED
2021-03-29T06:34:02
2021-03-31T12:48:01
2021-03-31T12:48:01
https://github.com/huggingface/datasets/issues/2128
adamlin120
1
[ "dataset bug" ]
2,125
Is dataset timit_asr broken?
Using the `timit_asr` dataset, I saw all records are the same.

```python
from datasets import load_dataset, load_metric

timit = load_dataset("timit_asr")

from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML

def show_random_elements(dataset, num_examples=10):
    assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
    picks = []
    for _ in range(num_examples):
        pick = random.randint(0, len(dataset)-1)
        while pick in picks:
            pick = random.randint(0, len(dataset)-1)
        picks.append(pick)
    df = pd.DataFrame(dataset[picks])
    display(HTML(df.to_html()))

show_random_elements(timit['train'].remove_columns(["file", "phonetic_detail", "word_detail", "dialect_region", "id", "sentence_type", "speaker_id"]), num_examples=20)
```

`output`

<img width="312" alt="Screen Shot 2021-03-28 at 17 29 04" src="https://user-images.githubusercontent.com/42398050/112746646-21acee80-8feb-11eb-84f3-dbb5d4269724.png">

I double-checked it [here](https://huggingface.co/datasets/viewer/), and met the same problem.

<img width="1374" alt="Screen Shot 2021-03-28 at 17 32 07" src="https://user-images.githubusercontent.com/42398050/112746698-9bdd7300-8feb-11eb-97ed-5babead385f4.png">
CLOSED
2021-03-28T08:30:18
2021-03-28T12:29:25
2021-03-28T12:29:25
https://github.com/huggingface/datasets/issues/2125
kosuke-kitahara
2
[]
2,124
Adding ScaNN library to do MIPS?
@lhoestq Hi, I am thinking of adding this new Google library to do MIPS, similar to **add_faiss_index**. As the paper suggests, it is really fast when it comes to retrieving the nearest neighbors. https://github.com/google-research/google-research/tree/master/scann ![image](https://user-images.githubusercontent.com/16892570/112738294-78ec9800-8fc6-11eb-9a5f-3d7ee5818e76.png)
OPEN
2021-03-28T00:07:00
2021-03-29T13:23:43
null
https://github.com/huggingface/datasets/issues/2124
shamanez
1
[]
2,123
Problem downloading GEM wiki_auto_asset_turk dataset
@yjernite

### Summary
I am currently working on the GEM datasets and do not manage to download the wiki_auto_asset_turk data, whereas all other datasets download well with the same code.

### Steps to reproduce
Code snippet:

```python
from datasets import load_dataset
#dataset = load_dataset('gem', 'web_nlg_en')
dataset = load_dataset('gem', 'wiki_auto_asset_turk')
```

**Expected behavior:** I expect the dataset to start downloading (download bar appears and progresses toward 100%)

**Actual behavior:** Instead of seeing the download bar appearing, nothing happens; the following appears in the console as expected, but nothing more:

```
Downloading: 36.6kB [00:00, 37.2MB/s]
Downloading: 41.7kB [00:00, ?B/s]
Downloading and preparing dataset gem/wiki_auto_asset_turk (download: 121.37 MiB, generated: 145.69 MiB, post-processed: Unknown size, total: 267.07 MiB) to C:\Users\sfmil\.cache\huggingface\datasets\gem\wiki_auto_asset_turk\1.0.0\f252756d7f1b8f019aac71a1623b2950acfe10d25d956668ac4eae4e93c58b8d...
```

### Is this a regression?
No, it was the first time I was trying to download this dataset (same for the other ones).

### Debug info
- Python version: Python 3.8.2
- OS version: Windows 10 Family
CLOSED
2021-03-27T18:41:28
2021-05-12T16:15:18
2021-05-12T16:15:17
https://github.com/huggingface/datasets/issues/2123
mille-s
5
[]
2,120
dataset viewer does not work anymore
Hi, I normally use this link to see all datasets and how I can load them: https://huggingface.co/datasets/viewer/ Now I am getting "502 Bad Gateway nginx/1.18.0 (Ubuntu)". Could you bring this webpage back? It was very helpful. @lhoestq Thanks for your help
CLOSED
2021-03-26T13:22:13
2021-03-26T15:52:22
2021-03-26T15:52:22
https://github.com/huggingface/datasets/issues/2120
dorost1234
2
[ "nlp-viewer" ]
2,117
load_metric from local "glue.py" meet error 'NoneType' object is not callable
actual_task = "mnli" if task == "mnli-mm" else task dataset = load_dataset(path='/home/glue.py', name=actual_task) metric = load_metric(path='/home/glue.py', name=actual_task) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-8-7ab77a465d81> in <module> 1 actual_task = "mnli" if task == "mnli-mm" else task 2 dataset = load_dataset(path='/home/jcli/glue.py', name=actual_task) ----> 3 metric = load_metric(path='/home/jcli/glue.py', name=actual_task) ~/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, script_version, **metric_init_kwargs) 508 keep_in_memory=keep_in_memory, 509 experiment_id=experiment_id, --> 510 **metric_init_kwargs, 511 ) 512 TypeError: 'NoneType' object is not callable Please help
CLOSED
2021-03-26T02:35:22
2021-08-25T21:44:05
2021-03-26T02:40:26
https://github.com/huggingface/datasets/issues/2117
Frankie123421
3
[]
2,116
Creating custom dataset results in error while calling the map() function
Calling `map()` of the `datasets` library results in an error while defining a custom dataset. Reproducible example:

```python
import datasets

class MyDataset(datasets.Dataset):

    def __init__(self, sentences):
        "Initialization"
        self.samples = sentences

    def __len__(self):
        "Denotes the total number of samples"
        return len(self.samples)

    def __getitem__(self, index):
        "Generates one sample of data"
        # Select sample
        # Load data and get label
        samples = self.samples[index]
        return samples

def preprocess_function_train(examples):
    inputs = examples
    labels = [example + tokenizer.eos_token for example in examples]
    inputs = tokenizer(inputs, max_length=30, padding=True, truncation=True)
    labels = tokenizer(labels, max_length=30, padding=True, truncation=True)
    model_inputs = inputs
    model_inputs["labels"] = labels["input_ids"]
    print("about to return")
    return model_inputs

## train["sentence"] is a dataframe column
train_dataset = MyDataset(train['sentence'].values.tolist())
train_dataset = train_dataset.map(
    preprocess_function,
    batched=True,
    batch_size=32
)
```

Stack trace of error:

```
Traceback (most recent call last):
  File "dir/train_generate.py", line 362, in <module>
    main()
  File "dir/train_generate.py", line 245, in main
    train_dataset = train_dataset.map(
  File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1244, in map
    return self._map_single(
  File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 149, in wrapper
    unformatted_columns = set(self.column_names) - set(self._format_columns or [])
  File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 526, in column_names
    return self._data.column_names
AttributeError: 'MyDataset' object has no attribute '_data'
```
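A hedged sketch of an alternative that avoids subclassing `datasets.Dataset` (subclassing skips the Arrow-backed initialization that `map()` relies on, hence the missing `_data`): build the dataset with `Dataset.from_dict` instead. Variable names reuse the snippet above:

```python
from datasets import Dataset

# Build a real Arrow-backed dataset instead of subclassing datasets.Dataset.
train_dataset = Dataset.from_dict({"sentence": train["sentence"].values.tolist()})

train_dataset = train_dataset.map(
    lambda batch: preprocess_function_train(batch["sentence"]),
    batched=True,
    batch_size=32,
)
```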
CLOSED
2021-03-26T00:37:46
2021-03-31T14:30:32
2021-03-31T14:30:32
https://github.com/huggingface/datasets/issues/2116
GeetDsa
1
[]
2,115
The datasets.map() implementation modifies the datatype of os.environ object
In our testing, we noticed that the datasets.map() implementation is modifying the datatype of python os.environ object from '_Environ' to 'dict'. This causes following function calls to fail as follows: ` x = os.environ.get("TEST_ENV_VARIABLE_AFTER_dataset_map", default=None) TypeError: get() takes no keyword arguments ` It looks like the following line in datasets.map implementation introduced this functionality. https://github.com/huggingface/datasets/blob/0cb1ac06acb0df44a1cf4128d03a01865faa2504/src/datasets/arrow_dataset.py#L1421 Here is the test script to reproduce this error. ``` from datasets import load_dataset from transformers import AutoTokenizer import os def test_train(): model_checkpoint = "distilgpt2" datasets = load_dataset('wikitext', 'wikitext-2-raw-v1') tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True) tokenizer.pad_token = tokenizer.eos_token def tokenize_function(examples): y = tokenizer(examples['text'], truncation=True, max_length=64) return y x = os.environ.get("TEST_ENV_VARIABLE_BEFORE_dataset_map", default=None) print(f"Testing environment variable: TEST_ENV_VARIABLE_BEFORE_dataset_map {x}") print(f"Data type of os.environ before datasets.map = {os.environ.__class__.__name__}") datasets.map(tokenize_function, batched=True, num_proc=2, remove_columns=["text"]) print(f"Data type of os.environ after datasets.map = {os.environ.__class__.__name__}") x = os.environ.get("TEST_ENV_VARIABLE_AFTER_dataset_map", default=None) print(f"Testing environment variable: TEST_ENV_VARIABLE_AFTER_dataset_map {x}") if __name__ == "__main__": test_train() ```
CLOSED
2021-03-25T20:29:19
2021-03-26T15:13:52
2021-03-26T15:13:52
https://github.com/huggingface/datasets/issues/2115
leleamol
0
[]
2,108
Is there a way to use a GPU only when training an Index in the process of add_faiss_index?
Motivation - Some FAISS indexes like IVF include a training step that clusters the dataset into a given number of partitions. It would be nice if we could use a GPU to do the training step and convert the index back to CPU, as mentioned in [this faiss example](https://gist.github.com/mdouze/46d6bbbaabca0b9778fca37ed2bcccf6) (a manual version of this is sketched below).
OPEN
2021-03-24T21:32:16
2021-03-25T06:31:43
null
https://github.com/huggingface/datasets/issues/2108
shamanez
0
[ "question" ]
2,106
WMT19 Dataset for Kazakh-English is not formatted correctly
In addition to the bug of languages being switched from Issue @415, there are incorrect translations in the dataset because the English-Kazakh translations have a one off formatting error. The News Commentary v14 parallel data set for kk-en from http://www.statmt.org/wmt19/translation-task.html has a bug here: > Line 94. The Swiss National Bank, for its part, has been battling with the deflationary effects of the franc’s dramatic appreciation over the past few years. Швейцарияның Ұлттық банкі өз тарапынан, соңғы бірнеше жыл ішінде франк құнының қатты өсуінің дефляциялық әсерімен күресіп келеді. > > Line 95. Дефляциялық күштер 2008 жылы терең және ұзаққа созылған жаһандық дағдарысқа байланысты орын алған ірі экономикалық және қаржылық орын алмасулардың арқасында босатылды. Жеке қарыз қаражаты үлесінің қысқаруы орталық банктің рефляцияға жұмсалған күш-жігеріне тұрақты соққан қарсы желдей болды. > > Line 96. The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008. Private deleveraging became a steady headwind to central bank efforts to reflate. 2009 жылы, алдыңғы қатарлы экономикалардың шамамен үштен бірі бағаның төмендеуін көрсетті, бұл соғыстан кейінгі жоғары деңгей болды. As you can see, line 95 has only the Kazakh translation which should be part of line 96. This causes all of the following English-Kazakh translation pairs to be one off rendering ALL of those translations incorrect. This issue was not fixed when the dataset was imported to Huggingface. By running this code ``` import datasets from datasets import load_dataset dataset = load_dataset('wmt19', 'kk-en') for key in dataset['train']['translation']: if 'The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008.' in key['kk']: print(key['en']) print(key['kk']) break ``` we get: > 2009 жылы, алдыңғы қатарлы экономикалардың шамамен үштен бірі бағаның төмендеуін көрсетті, бұл соғыстан кейінгі жоғары деңгей болды. > The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008. Private deleveraging became a steady headwind to central bank efforts to reflate. which shows that the issue still persists in the Huggingface dataset. The Kazakh sentence matches up to the next English sentence in the dataset instead of the current one. Please let me know if there's you have any ideas to fix this one-off error from the dataset or if this can be fixed by Huggingface.
OPEN
2021-03-23T20:14:47
2021-03-25T21:36:20
null
https://github.com/huggingface/datasets/issues/2106
trina731
1
[ "dataset bug" ]
2,105
Request to remove S2ORC dataset
Hi! I was wondering if it's possible to remove [S2ORC](https://huggingface.co/datasets/s2orc) from hosting on Huggingface's platform? Unfortunately, there are some legal considerations about how we make this data available. Happy to add back to Huggingface's platform once we work out those hurdles! Thanks!
OPEN
2021-03-23T19:43:06
2021-08-04T19:18:02
null
https://github.com/huggingface/datasets/issues/2105
kyleclo
3
[]
2,104
Trouble loading wiki_movies
Hello, I am trying to load_dataset("wiki_movies") and it gives me this error - `FileNotFoundError: Couldn't find file locally at wiki_movies/wiki_movies.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/wiki_movies/wiki_movies.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/wiki_movies/wiki_movies.py` Trying to do `python run_mlm.py \ --model_name_or_path roberta-base \ --dataset_name wiki_movies \` also gives the same error. Is this something on my end? From what I can tell, this dataset was re-added by @lhoestq a few months ago. Thank you!
CLOSED
2021-03-23T18:59:54
2022-03-30T08:22:58
2022-03-30T08:22:58
https://github.com/huggingface/datasets/issues/2104
adityaarunsinghal
2
[]
2,103
citation, homepage, and license fields of `dataset_info.json` are duplicated many times
This happens after a `map` operation when `num_proc` is set to `>1`. I tested this by cleaning up the json before running the `map` op on the dataset so it's unlikely it's coming from an earlier concatenation. Example result: ``` "citation": "@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n ``` @lhoestq and I believe this is happening due to the fields being concatenated `num_proc` times.
CLOSED
2021-03-23T17:18:09
2021-04-06T14:39:59
2021-04-06T14:39:59
https://github.com/huggingface/datasets/issues/2103
samsontmr
1
[ "enhancement", "good first issue" ]
2,099
load_from_disk takes a long time to load local dataset
I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helping (the total size seems to be smaller though). Does anyone know what could be the issue? Or does the casting of that column to `int8` need to happen in the function that writes the arrow table instead of in the `map` where I create the list of integers? Tagging @lhoestq since you seem to be working on these issues and PRs :)
CLOSED
2021-03-23T09:28:37
2021-03-23T17:12:16
2021-03-23T17:12:16
https://github.com/huggingface/datasets/issues/2099
samsontmr
8
[]
2,098
SQuAD version
Hi~ I want to train on the SQuAD dataset. Which version of SQuAD is it, 1.1 or 1.0? I'm new to QA and couldn't find any description of this (see the sketch below).
CLOSED
2021-03-23T07:47:54
2021-03-26T09:48:54
2021-03-26T09:48:54
https://github.com/huggingface/datasets/issues/2098
h-peng17
2
[]
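For context on the question above: the `squad` dataset on the Hub corresponds to SQuAD v1.1, while SQuAD 2.0 is published separately as `squad_v2`. A quick way to check for yourself is to inspect the loaded dataset's info, as in this small sketch (the info text usually states the version):

```python
from datasets import load_dataset

squad = load_dataset("squad")        # SQuAD v1.1 (every question has an answer)
squad_v2 = load_dataset("squad_v2")  # SQuAD 2.0 (includes unanswerable questions)

# The description/citation carried in the dataset info usually names the version.
print(squad["train"].info.description[:200])
print(squad["train"].info.citation[:200])
```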
2,096
CoNLL 2003 dataset not including German
Hello, thanks for all the work on developing and maintaining this amazing platform, which I am enjoying working with! I was wondering if there is a reason why the German CoNLL 2003 dataset is not included in the [repository](https://github.com/huggingface/datasets/tree/master/datasets/conll2003), since a copy of it could be found in some places on the internet such as GitHub? I could help adding the German data to the hub, unless there are some copyright issues that I am unaware of... This is considering that many work use the union of CoNLL 2002 and 2003 datasets for comparing cross-lingual NER transfer performance in `en`, `de`, `es`, and `nl`. E.g., [XLM-R](https://www.aclweb.org/anthology/2020.acl-main.747.pdf). ## Adding a Dataset - **Name:** CoNLL 2003 German - **Paper:** https://www.aclweb.org/anthology/W03-0419/ - **Data:** https://github.com/huggingface/datasets/tree/master/datasets/conll2003
CLOSED
2021-03-22T19:23:56
2023-07-25T16:49:07
2023-07-25T16:49:07
https://github.com/huggingface/datasets/issues/2096
rxian
2
[ "dataset request" ]
2,092
How to disable making arrow tables in load_dataset ?
Is there a way to disable the construction of Arrow tables, or to build them on the fly as the dataset is being used? (A possible alternative is sketched below.)
CLOSED
2021-03-21T04:50:07
2022-06-01T16:49:52
2022-06-01T16:49:52
https://github.com/huggingface/datasets/issues/2092
Jeevesh8
7
[]
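Related to the question above: in more recent versions of `datasets`, streaming mode avoids materializing an Arrow table on disk altogether and yields examples on the fly. A minimal sketch (the dataset name is just an example):

```python
from datasets import load_dataset

# streaming=True returns an IterableDataset: no Arrow file is written,
# examples are downloaded and parsed lazily as you iterate.
dataset = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)

for i, example in enumerate(dataset):
    print(example["text"][:80])
    if i == 2:
        break
```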
2,089
Add documentation for dataset README.md files
Hi, the dataset README files have special headers. Somehow documentation of the allowed values and tags is missing. Could you add that? Just to give some concrete questions that should be answered imo: - which values can be passed to multilinguality? - what should be passed to language_creators? - which values should licenses have? What do I say when it is a custom license? Should I add a link? - how should I choose size_categories? What are valid ranges? - what are valid task_categories? (An illustrative header is sketched below.) Thanks Philip
CLOSED
2021-03-20T11:44:38
2023-07-25T16:45:38
2023-07-25T16:45:37
https://github.com/huggingface/datasets/issues/2089
PhilipMay
8
[]
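To make the questions in the issue above concrete: dataset README files start with a YAML block of tags. The allowed values are defined by the library's tagging resources, so the snippet below is only an illustrative sketch of the shape such a header takes at the time of the issue; the field names and values are examples, not an authoritative list.

```yaml
---
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- en
licenses:
- cc-by-4.0            # or e.g. "other-my-custom-license" for a custom license
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
```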
2,084
CUAD - Contract Understanding Atticus Dataset
## Adding a Dataset - **Name:** CUAD - Contract Understanding Atticus Dataset - **Description:** As one of the only large, specialized NLP benchmarks annotated by experts, CUAD can serve as a challenging research benchmark for the broader NLP community. - **Paper:** https://arxiv.org/abs/2103.06268 - **Data:** https://github.com/TheAtticusProject/cuad/ - **Motivation:** good domain specific datasets are valuable Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
CLOSED
2021-03-19T09:27:43
2021-04-16T08:50:44
2021-04-16T08:50:44
https://github.com/huggingface/datasets/issues/2084
theo-m
1
[ "dataset request" ]
2,083
`concatenate_datasets` throws error when changing the order of datasets to concatenate
Hey, I played around with the `concatenate_datasets(...)` function: https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate_datasets#datasets.concatenate_datasets and noticed that when the order in which the datasets are concatenated changes, an error is thrown where it should not be, IMO. Here is a Google Colab to reproduce the error: https://colab.research.google.com/drive/17VTFU4KQ735-waWZJjeOHS6yDTfV5ekK?usp=sharing
CLOSED
2021-03-19T08:29:48
2021-04-09T09:25:33
2021-04-09T09:25:33
https://github.com/huggingface/datasets/issues/2083
patrickvonplaten
1
[]
2,080
Multidimensional arrays in a Dataset
Hi, I'm trying to put together a `datasets.Dataset` to be used with LayoutLM which is available in `transformers`. This model requires as input the bounding boxes of each of the token of a sequence. This is when I realized that `Dataset` does not support multi-dimensional arrays as a value for a column in a row. The following code results in conversion error in pyarrow (`pyarrow.lib.ArrowInvalid: ('Can only convert 1-dimensional array values', 'Conversion failed for column bbox with type object')`) ``` from datasets import Dataset import pandas as pd import numpy as np dataset = pd.DataFrame({ 'bbox': [ np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]), np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]), np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]), np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]) ], 'input_ids': [1, 2, 3, 4] }) dataset = Dataset.from_pandas(dataset) ``` Since I wanted to use pytorch for the downstream training task, I also tried a few ways to directly put in a column of 2-D pytorch tensor in a formatted dataset, but I can only have a list of 1-D tensors, or a list of arrays, or a list of lists. ``` import torch from datasets import Dataset import pandas as pd dataset = pd.DataFrame({ 'bbox': [ [[1,2,3,4],[1,2,3,4],[1,2,3,4]], [[1,2,3,4],[1,2,3,4],[1,2,3,4]], [[1,2,3,4],[1,2,3,4],[1,2,3,4]], [[1,2,3,4],[1,2,3,4],[1,2,3,4]] ], 'input_ids': [1, 2, 3, 4] }) dataset = Dataset.from_pandas(dataset) def test(examples): return {'bbbox': torch.Tensor(examples['bbox'])} dataset = dataset.map(test) print(dataset[0]['bbox']) print(dataset[0]['bbbox']) dataset.set_format(type='torch', columns=['input_ids', 'bbox'], output_all_columns=True) print(dataset[0]['bbox']) print(dataset[0]['bbbox']) def test2(examples): return {'bbbox': torch.stack(examples['bbox'])} dataset = dataset.map(test2) print(dataset[0]['bbox']) print(dataset[0]['bbbox']) ``` Is is possible to support n-D arrays/tensors in datasets? It seems that it can also be useful for this [feature request](https://github.com/huggingface/datasets/issues/263).
CLOSED
2021-03-18T16:29:14
2021-03-25T12:46:53
2021-03-25T12:46:53
https://github.com/huggingface/datasets/issues/2080
vermouthmjl
2
[]
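On the multi-dimensional array question above: `datasets` exposes fixed-shape array feature types (`Array2D`, `Array3D`, ...) that can hold such values when the shape is declared up front. A minimal sketch for the bounding-box case, assuming every example has the same number of boxes:

```python
from datasets import Dataset, Features, Array2D, Value

features = Features({
    "bbox": Array2D(shape=(3, 4), dtype="int64"),  # 3 boxes of 4 coordinates per example
    "input_ids": Value("int64"),
})

data = {
    "bbox": [[[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]] * 4,
    "input_ids": [1, 2, 3, 4],
}

dataset = Dataset.from_dict(data, features=features)
dataset.set_format(type="torch", columns=["bbox", "input_ids"])
print(dataset[0]["bbox"].shape)  # expected: torch.Size([3, 4])
```

For a variable number of boxes per example, a nested `Sequence` feature (lists of lists) is the usual alternative.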
2,078
MemoryError when computing WER metric
Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Traceback (most recent call last): File "/home/diego/IpGlobal/wav2vec/test_wav2vec.py", line 51, in <module> print(wer.compute(predictions=result["predicted"], references=result["target"])) File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/datasets/metric.py", line 403, in compute output = self._compute(predictions=predictions, references=references, **kwargs) File "/home/diego/.cache/huggingface/modules/datasets_modules/metrics/wer/73b2d32b723b7fb8f204d785c00980ae4d937f12a65466f8fdf78706e2951281/wer.py", line 94, in _compute return wer(references, predictions) File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 81, in wer truth, hypothesis, truth_transform, hypothesis_transform, **kwargs File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 192, in compute_measures H, S, D, I = _get_operation_counts(truth, hypothesis) File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 273, in _get_operation_counts editops = Levenshtein.editops(source_string, destination_string) MemoryError` My system has more than 10GB of available RAM. Looking at the code, I think that it could be related to the way jiwer does the calculation, as it is pasting all the sentences in a single string before calling Levenshtein editops function.
CLOSED
2021-03-18T11:30:05
2021-05-01T08:31:49
2021-04-06T07:20:43
https://github.com/huggingface/datasets/issues/2078
diego-fustes
11
[ "metric bug" ]
2,076
Issue: Dataset download error
The download link in `iwslt2017.py` file does not seem to work anymore. For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` Would be nice if we could modify it script and use the new downloadable link?
OPEN
2021-03-18T06:36:06
2021-03-22T11:52:31
null
https://github.com/huggingface/datasets/issues/2076
XuhuiZhou
7
[ "dataset bug" ]
2,075
ConnectionError: Couldn't reach common_voice.py
When I run: from datasets import load_dataset, load_metric common_voice_train = load_dataset("common_voice", "zh-CN", split="train+validation") common_voice_test = load_dataset("common_voice", "zh-CN", split="test") Got: ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/master/datasets/common_voice/common_voice.py Version: 1.4.1 Thanks! @lhoestq @LysandreJik @thomwolf
CLOSED
2021-03-18T01:19:06
2021-03-20T10:29:41
2021-03-20T10:29:41
https://github.com/huggingface/datasets/issues/2075
LifaSun
2
[]
2,071
Multiprocessing is slower than single process
```python # benchmark_filter.py import logging import sys import time from datasets import load_dataset, set_caching_enabled if __name__ == "__main__": set_caching_enabled(False) logging.basicConfig(level=logging.DEBUG) bc = load_dataset("bookcorpus") now = time.time() try: bc["train"].filter(lambda x: len(x["text"]) < 64, num_proc=int(sys.argv[1])) except Exception as e: print(f"cancelled: {e}") elapsed = time.time() - now print(elapsed) ``` Running `python benchmark_filter.py 1` (20min+) is faster than `python benchmark_filter.py 2` (2hrs+)
CLOSED
2021-03-17T16:08:58
2021-03-18T09:10:23
2021-03-18T09:10:23
https://github.com/huggingface/datasets/issues/2071
theo-m
1
[ "bug" ]
2,070
ArrowInvalid issue for squad v2 dataset
Hello, I am using the huggingface official question answering example notebook (https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb). In the prepare_validation_features function, I made some modifications to tokenize a new set of quesions with the original contexts and save them in three different list called candidate_input_dis, candidate_attetion_mask and candidate_token_type_ids. When I try to run the next cell for dataset.map, I got the following error: `ArrowInvalid: Column 1 named candidate_attention_mask expected length 1180 but got length 1178` My code is as follows: ``` def generate_candidate_questions(examples): val_questions = examples["question"] candididate_questions = random.sample(datasets["train"]["question"], len(val_questions)) candididate_questions = [x[:max_length] for x in candididate_questions] return candididate_questions def prepare_validation_features(examples, use_mixing=False): pad_on_right = tokenizer.padding_side == "right" tokenized_examples = tokenizer( examples["question" if pad_on_right else "context"], examples["context" if pad_on_right else "question"], truncation="only_second" if pad_on_right else "only_first", max_length=max_length, stride=doc_stride, return_overflowing_tokens=True, return_offsets_mapping=True, padding="max_length", ) if use_mixing: candidate_questions = generate_candidate_questions(examples) tokenized_candidates = tokenizer( candidate_questions if pad_on_right else examples["context"], examples["context"] if pad_on_right else candidate_questions, truncation="only_second" if pad_on_right else "only_first", max_length=max_length, stride=doc_stride, return_overflowing_tokens=True, return_offsets_mapping=True, padding="max_length", ) sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping") tokenized_examples["example_id"] = [] if use_mixing: tokenized_examples["candidate_input_ids"] = tokenized_candidates["input_ids"] tokenized_examples["candidate_attention_mask"] = tokenized_candidates["attention_mask"] tokenized_examples["candidate_token_type_ids"] = tokenized_candidates["token_type_ids"] for i in range(len(tokenized_examples["input_ids"])): sequence_ids = tokenized_examples.sequence_ids(i) context_index = 1 if pad_on_right else 0 sample_index = sample_mapping[i] tokenized_examples["example_id"].append(examples["id"][sample_index]) tokenized_examples["offset_mapping"][i] = [ (o if sequence_ids[k] == context_index else None) for k, o in enumerate(tokenized_examples["offset_mapping"][i]) ] return tokenized_examples validation_features = datasets["validation"].map( lambda xs: prepare_validation_features(xs, True), batched=True, remove_columns=datasets["validation"].column_names ) ``` I guess this might happen because of the batched=True. I see similar issues in this repo related to arrow table length mismatch error, but in their cases, the numbers vary a lot. In my case, this error always happens when the expected length and unexpected length are very close. Thanks for the help!
CLOSED
2021-03-17T13:51:49
2021-08-04T17:57:16
2021-08-04T17:57:16
https://github.com/huggingface/datasets/issues/2070
MichaelYxWang
1
[]
2,068
PyTorch not available error on SageMaker GPU docker though it is installed
I get en error when running data loading using SageMaker SDK ``` File "main.py", line 34, in <module> run_training() File "main.py", line 25, in run_training dm.setup('fit') File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn return fn(*args, **kwargs) File "/opt/ml/code/data_module.py", line 103, in setup self.dataset[split].set_format(type="torch", columns=self.columns) File "/opt/conda/lib/python3.6/site-packages/datasets/fingerprint.py", line 337, in wrapper out = func(self, *args, **kwargs) File "/opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 995, in set_format _ = get_formatter(type, **format_kwargs) File "/opt/conda/lib/python3.6/site-packages/datasets/formatting/__init__.py", line 114, in get_formatter raise _FORMAT_TYPES_ALIASES_UNAVAILABLE[format_type] ValueError: PyTorch needs to be installed to be able to return PyTorch tensors. ``` when trying to execute dataset loading using this notebook https://github.com/PyTorchLightning/pytorch-lightning/blob/master/notebooks/04-transformers-text-classification.ipynb, specifically lines ``` self.columns = [c for c in self.dataset[split].column_names if c in self.loader_columns] self.dataset[split].set_format(type="torch", columns=self.columns) ``` The SageMaker docker image used is 763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-training:1.4.0-gpu-py3 . By running container interactively I have checked that torch loading completes successfully by executing `https://github.com/huggingface/datasets/blob/master/src/datasets/config.py#L39`. Also as a first line in the data loading module I have ``` import os os.environ["USE_TF"] = "0" os.environ["USE_TORCH"] = "1" ```` But unfortunately the error stills persists. Any suggestions would be appreciated as I am stack. Many Thanks!
CLOSED
2021-03-17T10:04:27
2021-06-14T04:47:30
2021-06-14T04:47:30
https://github.com/huggingface/datasets/issues/2068
sivakhno
7
[]
2,067
Multiprocessing windows error
As described here: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop, for example at the map_to_array part. An error occurs because the cache file already exists and Windows throws an error. After this, the log gets stuck in a loop.
CLOSED
2021-03-17T09:12:28
2021-08-04T17:59:08
2021-08-04T17:59:08
https://github.com/huggingface/datasets/issues/2067
flozi00
10
[]
2,065
Only user permission of saved cache files, not group
Hello, It seems that when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue, as we have to continually reset the permissions of the files. Do you know any way around this, or a way to correctly set the permissions? (Two possible workarounds are sketched below.)
CLOSED
2021-03-17T00:20:22
2023-03-31T12:17:06
2021-05-10T06:45:29
https://github.com/huggingface/datasets/issues/2065
lorr1
26
[ "enhancement", "good first issue" ]
2,061
Cannot load udpos subsets from xtreme dataset using load_dataset()
Hello, I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset altogether (such as XNLI) has no issue. I have also tried on Colab and faced the same error. Reprex is: `from datasets import load_dataset ` `dataset = load_dataset('xtreme', 'udpos.English')` The error is: `KeyError: '_'` The full traceback is: KeyError Traceback (most recent call last) <ipython-input-5-7181359ea09d> in <module> 1 from datasets import load_dataset ----> 2 dataset = load_dataset('xtreme', 'udpos.English') ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs) 738 739 # Download and prepare data --> 740 builder_instance.download_and_prepare( 741 download_config=download_config, 742 download_mode=download_mode, ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 576 logger.warning("HF google storage unreachable. Downloading and preparing it from source") 577 if not downloaded_from_gcs: --> 578 self._download_and_prepare( 579 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 580 ) ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 654 try: 655 # Prepare split will record examples associated to the split --> 656 self._prepare_split(split_generator, **prepare_split_kwargs) 657 except OSError as e: 658 raise OSError( ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in _prepare_split(self, split_generator) 977 generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose 978 ): --> 979 example = self.info.features.encode_example(record) 980 writer.write(example) 981 finally: ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_example(self, example) 946 def encode_example(self, example): 947 example = cast_to_python_objects(example) --> 948 return encode_nested_example(self, example) 949 950 def encode_batch(self, batch): ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_nested_example(schema, obj) 840 # Nested structures: we allow dict, list/tuples, sequences 841 if isinstance(schema, dict): --> 842 return { 843 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) 844 } ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in <dictcomp>(.0) 841 if isinstance(schema, dict): 842 return { --> 843 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) 844 } 845 elif isinstance(schema, (list, tuple)): ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_nested_example(schema, obj) 868 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks 869 elif isinstance(schema, (ClassLabel, TranslationVariableLanguages, Value, _ArrayXD)): --> 870 return schema.encode_example(obj) 871 # Other object should be directly convertible to a native Arrow type (like Translation and Translation) 872 return obj 
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_example(self, example_data) 647 # If a string is given, convert to associated integer 648 if isinstance(example_data, str): --> 649 example_data = self.str2int(example_data) 650 651 # Allowing -1 to mean no label. ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in str2int(self, values) 605 if value not in self._str2int: 606 value = value.strip() --> 607 output.append(self._str2int[str(value)]) 608 else: 609 # No names provided, try to integerize KeyError: '_'
CLOSED
2021-03-16T09:32:13
2021-06-18T11:54:11
2021-06-18T11:54:10
https://github.com/huggingface/datasets/issues/2061
adzcodez
6
[ "good first issue" ]
2,059
Error while following docs to load the `ted_talks_iwslt` dataset
I am currently trying to load the `ted_talks_iwslt` dataset into google colab. The [docs](https://huggingface.co/datasets/ted_talks_iwslt) mention the following way of doing so. ```python dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014") ``` Executing it results in the error attached below. ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-6-7dcc67154ef9> in <module>() ----> 1 dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014") 4 frames /usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs) 730 hash=hash, 731 features=features, --> 732 **config_kwargs, 733 ) 734 /usr/local/lib/python3.7/dist-packages/datasets/builder.py in __init__(self, writer_batch_size, *args, **kwargs) 927 928 def __init__(self, *args, writer_batch_size=None, **kwargs): --> 929 super(GeneratorBasedBuilder, self).__init__(*args, **kwargs) 930 # Batch size used by the ArrowWriter 931 # It defines the number of samples that are kept in memory before writing them /usr/local/lib/python3.7/dist-packages/datasets/builder.py in __init__(self, cache_dir, name, hash, features, **config_kwargs) 241 name, 242 custom_features=features, --> 243 **config_kwargs, 244 ) 245 /usr/local/lib/python3.7/dist-packages/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs) 337 if "version" not in config_kwargs and hasattr(self, "VERSION") and self.VERSION: 338 config_kwargs["version"] = self.VERSION --> 339 builder_config = self.BUILDER_CONFIG_CLASS(**config_kwargs) 340 341 # otherwise use the config_kwargs to overwrite the attributes /root/.cache/huggingface/modules/datasets_modules/datasets/ted_talks_iwslt/024d06b1376b361e59245c5878ab8acf9a7576d765f2d0077f61751158e60914/ted_talks_iwslt.py in __init__(self, language_pair, year, **kwargs) 219 description=description, 220 version=datasets.Version("1.1.0", ""), --> 221 **kwargs, 222 ) 223 TypeError: __init__() got multiple values for keyword argument 'version' ``` How to resolve this? PS: Thanks a lot @huggingface team for creating this great library!
CLOSED
2021-03-16T09:12:19
2021-03-16T18:00:31
2021-03-16T18:00:07
https://github.com/huggingface/datasets/issues/2059
ekdnam
2
[ "dataset bug" ]
2,058
Is it possible to convert a `tfds` to HuggingFace `dataset`?
I was having some weird bugs with the `C4` dataset version of HuggingFace, so I decided to try to download `C4` from `tfds`. I would like to know if it is possible to convert a tfds dataset to HuggingFace dataset format :) (a rough conversion is sketched below). I can also open a new issue reporting the bug I'm receiving with `datasets.load_dataset('c4','en')` in the future if you think that it would be useful. Thanks!
CLOSED
2021-03-15T20:18:47
2023-07-25T16:47:40
2023-07-25T16:47:40
https://github.com/huggingface/datasets/issues/2058
abarbosa94
1
[]
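On the tfds-conversion question above: there was no built-in converter at the time, but a small split can be round-tripped through plain python objects. A rough sketch, assuming `tensorflow_datasets` is installed and the split fits in memory; the dataset name and field names are just an example:

```python
import tensorflow_datasets as tfds
from datasets import Dataset

# Load a tfds split and materialize it as python objects.
tf_ds = tfds.load("ag_news_subset", split="test")
records = [
    {"text": ex["title"].decode("utf-8"), "label": int(ex["label"])}
    for ex in tfds.as_numpy(tf_ds)
]

# Build a Hugging Face Dataset from the collected columns.
hf_ds = Dataset.from_dict({
    "text": [r["text"] for r in records],
    "label": [r["label"] for r in records],
})
print(hf_ds)
```

For something the size of C4 this in-memory approach obviously doesn't scale; it only shows the shape of the conversion.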
2,056
issue with opus100/en-fr dataset
Hi I am running run_mlm.py code of huggingface repo with opus100/fr-en pair, I am getting this error, note that this error occurs for only this pairs and not the other pairs. Any idea why this is occurring? and how I can solve this? Thanks a lot @lhoestq for your help in advance. ` thread '<unnamed>' panicked at 'index out of bounds: the len is 617 but the index is 617', /__w/tokenizers/tokenizers/tokenizers/src/tokenizer/normalizer.rs:382:21 note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace 63%|██████████████████████████████████████████████████████████▊ | 626/1000 [00:27<00:16, 22.69ba/s] Traceback (most recent call last): File "run_mlm.py", line 550, in <module> main() File "run_mlm.py", line 412, in main in zip(data_args.dataset_name, data_args.dataset_config_name)] File "run_mlm.py", line 411, in <listcomp> logger) for dataset_name, dataset_config_name\ File "/user/dara/dev/codes/seq2seq/data/tokenize_datasets.py", line 96, in get_tokenized_dataset load_from_cache_file=not data_args.overwrite_cache, File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/dataset_dict.py", line 448, in map for k, dataset in self.items() File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/dataset_dict.py", line 448, in <dictcomp> for k, dataset in self.items() File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1309, in map update_data=update_data, File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 204, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/fingerprint.py", line 337, in wrapper out = func(self, *args, **kwargs) File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1574, in _map_single batch, indices, check_same_num_examples=len(self.list_indexes()) > 0, offset=offset File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1490, in apply_function_on_filtered_inputs function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) File "/user/dara/dev/codes/seq2seq/data/tokenize_datasets.py", line 89, in tokenize_function return tokenizer(examples[text_column_name], return_special_tokens_mask=True) File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2347, in __call__ **kwargs, File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2532, in batch_encode_plus **kwargs, File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 384, in _batch_encode_plus is_pretokenized=is_split_into_words, pyo3_runtime.PanicException: index out of bounds: the len is 617 but the index is 617 `
CLOSED
2021-03-15T11:32:42
2021-03-16T15:49:00
2021-03-16T15:48:59
https://github.com/huggingface/datasets/issues/2056
dorost1234
3
[]
2,055
is there a way to override a dataset object saved with save_to_disk?
At the moment, when I use save_to_disk, it uses an arbitrary name for the arrow file. Is there a way to override such an object?
CLOSED
2021-03-15T10:50:53
2021-03-22T04:06:17
2021-03-22T04:06:17
https://github.com/huggingface/datasets/issues/2055
shamanez
4
[]
2,054
Could not find file for ZEST dataset
I am trying to use zest dataset from Allen AI using below code in colab, ``` !pip install -q datasets from datasets import load_dataset dataset = load_dataset("zest") ``` I am getting the following error, ``` Using custom data configuration default Downloading and preparing dataset zest/default (download: 5.53 MiB, generated: 19.96 MiB, post-processed: Unknown size, total: 25.48 MiB) to /root/.cache/huggingface/datasets/zest/default/0.0.0/1f7a230fbfc964d979bbca0f0130fbab3259fce547ee758ad8aa4f9c9bec6cca... --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) <ipython-input-6-18dbbc1a4b8a> in <module>() 1 from datasets import load_dataset 2 ----> 3 dataset = load_dataset("zest") 9 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token) 612 ) 613 elif response is not None and response.status_code == 404: --> 614 raise FileNotFoundError("Couldn't find file at {}".format(url)) 615 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}") 616 raise ConnectionError("Couldn't reach {}".format(url)) FileNotFoundError: Couldn't find file at https://ai2-datasets.s3-us-west-2.amazonaws.com/zest/zest.zip ```
CLOSED
2021-03-15T09:11:58
2021-05-03T09:30:24
2021-05-03T09:30:24
https://github.com/huggingface/datasets/issues/2054
bhadreshpsavani
4
[ "dataset bug" ]
2,052
Timit_asr dataset repeats examples
Summary When loading timit_asr dataset on datasets 1.4+, every row in the dataset is the same Steps to reproduce As an example, on this code there is the text from the training part: Code snippet: ``` from datasets import load_dataset, load_metric timit = load_dataset("timit_asr") timit['train']['text'] #['Would such an act of refusal be useful?', # 'Would such an act of refusal be useful?', # 'Would such an act of refusal be useful?', # 'Would such an act of refusal be useful?', # 'Would such an act of refusal be useful?', # 'Would such an act of refusal be useful?', ``` The same behavior happens for other columns Expected behavior: Different info on the actual timit_asr dataset Actual behavior: When loading timit_asr dataset on datasets 1.4+, every row in the dataset is the same. I've checked datasets 1.3 and the rows are different Debug info Streamlit version: (get it with $ streamlit version) Python version: Python 3.6.12 Using Conda? PipEnv? PyEnv? Pex? Using pip OS version: Centos-release-7-9.2009.1.el7.centos.x86_64 Additional information You can check the same behavior on https://huggingface.co/datasets/viewer/?dataset=timit_asr
CLOSED
2021-03-14T11:43:43
2021-03-15T10:37:16
2021-03-15T10:37:16
https://github.com/huggingface/datasets/issues/2052
fermaat
2
[]
2,050
Build custom dataset to fine-tune Wav2Vec2
Thank you for your recent tutorial on how to fine-tune Wav2Vec2 on a custom dataset. The example you gave here (https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) was on the CommonVoice dataset. However, what if I want to load my own dataset? I have a manifest (transcripts and their audio files) in a JSON file (a loading sketch follows below).
CLOSED
2021-03-13T22:01:10
2021-03-15T09:27:28
2021-03-15T09:27:28
https://github.com/huggingface/datasets/issues/2050
Omarnabk
3
[ "dataset request" ]
2,048
github is not always available - probably need a back up
Yesterday morning github wasn't working: ``` :/tmp$ wget https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py--2021-03-12 18:35:59-- https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.111.133, 185.199.109.133, ... Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected. HTTP request sent, awaiting response... 500 Internal Server Error 2021-03-12 18:36:11 ERROR 500: Internal Server Error. ``` Suggestion: have a failover system and replicate the data on another system and reach there if gh isn't reachable? perhaps gh can be a master and the replicate a slave - so there is only one true source.
CLOSED
2021-03-13T18:03:32
2022-04-01T15:27:10
2022-04-01T15:27:10
https://github.com/huggingface/datasets/issues/2048
stas00
0
[]
2,046
add_faiss_index gets very slow when doing it iteratively
As the below code suggests, I want to run add_faisis_index in every nth interaction from the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run an as a separate process similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this takes usually 5hrs. Is this normal? Any way to make this process faster? @lhoestq ``` def training_step(self, batch, batch_idx) -> Dict: if (not batch_idx==0) and (batch_idx%5==0): print("******************************************************") ctx_encoder=self.trainer.model.module.module.model.rag.ctx_encoder model_copy =type(ctx_encoder)(self.config_dpr) # get a new instance #this will be load in the CPU model_copy.load_state_dict(ctx_encoder.state_dict()) # copy weights and stuff list_of_gpus = ['cuda:2','cuda:3'] c_dir='/custom/cache/dir' kb_dataset = load_dataset("csv", data_files=[self.custom_config.csv_path], split="train", delimiter="\t", column_names=["title", "text"],cache_dir=c_dir) print(kb_dataset) n=len(list_of_gpus) #nunber of dedicated GPUs kb_list=[kb_dataset.shard(n, i, contiguous=True) for i in range(n)] #kb_dataset.save_to_disk('/hpc/gsir059/MY-Test/RAY/transformers/examples/research_projects/rag/haha-dir') print(self.trainer.global_rank) dataset_shards = self.re_encode_kb(model_copy.to(device=list_of_gpus[self.trainer.global_rank]),kb_list[self.trainer.global_rank]) output = [None for _ in list_of_gpus] #self.trainer.accelerator_connector.accelerator.barrier("embedding_process") dist.all_gather_object(output, dataset_shards) #This creation and re-initlaization of the new index if (self.trainer.global_rank==0): #saving will be done in the main process combined_dataset = concatenate_datasets(output) passages_path =self.config.passages_path logger.info("saving the dataset with ") #combined_dataset.save_to_disk('/hpc/gsir059/MY-Test/RAY/transformers/examples/research_projects/rag/MY-Passage') combined_dataset.save_to_disk(passages_path) logger.info("Add faiss index to the dataset that consist of embeddings") embedding_dataset=combined_dataset index = faiss.IndexHNSWFlat(768, 128, faiss.METRIC_INNER_PRODUCT) embedding_dataset.add_faiss_index("embeddings", custom_index=index) embedding_dataset.get_index("embeddings").save(self.config.index_path)
CLOSED
2021-03-12T20:27:18
2021-03-24T22:29:11
2021-03-24T22:29:11
https://github.com/huggingface/datasets/issues/2046
shamanez
11
[]
2,040
ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk
Hi there, I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects): ```python concatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']]) ``` Yielding the following error: ```python ValueError: Datasets' indices should ALL come from memory, or should ALL come from disk. However datasets' indices [1] come from memory and datasets' indices [0] come from disk. ``` Been trying to solve this for quite some time now. Both `DataDict` have been created by reading in a `csv` via `load_dataset` and subsequently processed using the various `datasets` methods (i.e. filter, map, remove col, rename col). Can't figure out tho... `load_from_disk(PATH_DATA_CLS_A)['train']` yields: ```python Dataset({ features: ['labels', 'text'], num_rows: 785 }) ``` `load_from_disk(PATH_DATA_CLS_B)['train']` yields: ```python Dataset({ features: ['labels', 'text'], num_rows: 3341 }) ```
CLOSED
2021-03-12T14:27:00
2021-08-04T18:00:43
2021-08-04T18:00:43
https://github.com/huggingface/datasets/issues/2040
simonschoe
4
[]
2,038
outdated dataset_infos.json might fail verifications
The [doc2dial/dataset_infos.json](https://github.com/huggingface/datasets/blob/master/datasets/doc2dial/dataset_infos.json) is outdated. It makes the data loader fail when verifying the download checksums, etc. Could you please update this file or point me to how to update it? (Workarounds are sketched below.) Thank you.
CLOSED
2021-03-12T11:41:54
2021-03-16T16:27:40
2021-03-16T16:27:40
https://github.com/huggingface/datasets/issues/2038
songfeng
2
[]
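Two common ways around the stale-checksum problem above: skip the verification when loading, or regenerate `dataset_infos.json` from a local checkout. A sketch of the first; the `ignore_verifications` flag appears in the `load_dataset` signature of this era (it was later renamed), and the CLI command in the comment follows the contributor docs of the time, so exact flags may differ between versions:

```python
from datasets import load_dataset

# Skip checksum/size verification against the (outdated) dataset_infos.json.
dataset = load_dataset("doc2dial", ignore_verifications=True)

# Regenerating the metadata from a checkout of the datasets repo is roughly:
#   datasets-cli test datasets/doc2dial --save_infos --all_configs
```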
2,036
Cannot load wikitext
when I execute these codes ``` >>> from datasets import load_dataset >>> test_dataset = load_dataset("wikitext") ``` I got an error,any help? ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 487, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/wikitext/wikitext.py ```
CLOSED
2021-03-12T09:09:39
2021-03-15T08:45:02
2021-03-15T08:44:44
https://github.com/huggingface/datasets/issues/2036
Gpwner
1
[]
2,035
wiki40b/wikipedia for almost all languages cannot be downloaded
Hi I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error. I really need majority of languages in this dataset to be able to train my models for a deadline and your great scalable super well-written library is my only hope to train the models at scale while being low on resources. thank you very much. ``` (fast) dara@vgne046:/user/dara/dev/codes/seq2seq$ python test_data.py Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to temp/dara/cache_home_2/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f... Traceback (most recent call last): File "test_data.py", line 3, in <module> dataset = load_dataset("wiki40b", "cs") File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/load.py", line 746, in load_dataset use_auth_token=use_auth_token, File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/builder.py", line 579, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/builder.py", line 1105, in _download_and_prepare import apache_beam as beam File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/__init__.py", line 96, in <module> from apache_beam import io File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/io/__init__.py", line 23, in <module> from apache_beam.io.avroio import * File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/io/avroio.py", line 55, in <module> import avro File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 668, in _load_unlocked File "<frozen importlib._bootstrap>", line 638, in _load_backward_compatible File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/__init__.py", line 34, in <module> File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/__init__.py", line 30, in LoadResource NotADirectoryError: [Errno 20] Not a directory: '/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/VERSION.txt' ```
CLOSED
2021-03-11T19:54:54
2024-03-15T16:09:49
2024-03-15T16:09:48
https://github.com/huggingface/datasets/issues/2035
dorost1234
11
[]
2,032
Use Arrow filtering instead of writing a new arrow file for Dataset.filter
Currently the filter method reads the dataset batch by batch to write a new, filtered arrow file on disk, so all the reading + writing can take some time. Using a mask directly on the arrow table doesn't do any read or write operation, therefore it's significantly quicker. I think there are two cases: - if the dataset doesn't have an indices mapping, then one can simply use the arrow filtering on the main arrow table `dataset._data.filter(...)` - if the dataset has an indices mapping, then the mask should be applied on the indices mapping table `dataset._indices.filter(...)` The indices mapping is used to map between the idx at `dataset[idx]` in `__getitem__` and the idx in the actual arrow table. The new filter method should therefore be faster, and allow users to pass either a filtering function (that returns a boolean given an example) or directly a mask (the masking mechanism is sketched below). Feel free to discuss this idea in this thread :) One additional note: the refactor at #2025 would make all the pickle-related stuff work directly with the arrow filtering, so that we only need to change the Dataset.filter method without having to deal with pickle. cc @theo-m @gchhablani related issues: #1796 #1949
CLOSED
2021-03-11T15:18:50
2024-01-19T13:26:32
2024-01-19T13:26:32
https://github.com/huggingface/datasets/issues/2032
lhoestq
1
[ "enhancement" ]
2,031
wikipedia.py generator that extracts XML doesn't release memory
I tried downloading the Japanese Wikipedia, but it always failed, probably because it ran out of memory. I found that the generator function that extracts XML data in wikipedia.py doesn't release memory in the loop. https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L464-L502 `root.clear()` is intended to clear memory, but it doesn't. https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L490 https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L494 I replaced these calls with `elem.clear()`, and then it seems to work correctly (see the sketch below). Here is the notebook to reproduce it: https://gist.github.com/miyamonz/dc06117302b6e85fa51cbf46dde6bb51#file-xtract_content-ipynb
CLOSED
2021-03-11T12:51:24
2021-03-22T08:33:52
2021-03-22T08:33:52
https://github.com/huggingface/datasets/issues/2031
miyamonz
2
[]
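A small self-contained sketch of the pattern described above: when streaming a large XML dump with `ElementTree.iterparse`, clearing the just-processed element (rather than only calling `clear()` on the root) keeps memory flat. This is a generic illustration, not the wikipedia.py code itself, and the file name is a placeholder:

```python
import xml.etree.ElementTree as etree

def iter_pages(xml_path):
    """Yield <page> elements one by one without keeping the whole tree in memory."""
    context = etree.iterparse(xml_path, events=("end",))
    for _, elem in context:
        if elem.tag.endswith("page"):
            yield elem
            # Free the element we just handled; without this, every parsed
            # <page> stays referenced and memory keeps growing.
            elem.clear()

# usage:
# for page in iter_pages("jawiki-latest-pages-articles.xml"):
#     ...
```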
2,029
Loading a faiss index KeyError
I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation. The basic steps are: 1. Create a dataset (dataset1) 2. Create an embeddings column using DPR 3. Add a faiss index to the dataset 4. Save faiss index to a file 5. Create a new dataset (dataset2) with the same text and label information as dataset1 6. Try to load the faiss index from file to dataset2 7. Get `KeyError: "Column embeddings not in the dataset"` I've made a colab notebook that should show exactly what I did. Please switch to GPU runtime; I didn't check on CPU. https://colab.research.google.com/drive/1X0S9ZuZ8k0ybcoei4w7so6dS_WrABmIx?usp=sharing Ubuntu Version VERSION="18.04.5 LTS (Bionic Beaver)" datasets==1.4.1 faiss==1.5.3 faiss-gpu==1.7.0 torch==1.8.0+cu101 transformers==4.3.3 NVIDIA-SMI 460.56 Driver Version: 460.32.03 CUDA Version: 11.2 Tesla K80 I was basically following the steps here: https://huggingface.co/docs/datasets/faiss_and_ea.html#adding-a-faiss-index I included the exact code from the documentation at the end of the notebook to show that they don't work either.
CLOSED
2021-03-11T12:16:13
2021-03-12T00:21:09
2021-03-12T00:21:09
https://github.com/huggingface/datasets/issues/2029
nbroad1881
4
[ "documentation" ]
2,026
KeyError on using map after renaming a column
Hi, I'm trying to use `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying `prepare_train_features` function. Here is what I try: ```python transform = Compose([ToPILImage(),ToTensor(),Normalize([0.0,0.0,0.0],[1.0,1.0,1.0])]) def prepare_features(examples): images = [] labels = [] print(examples) for example_idx, example in enumerate(examples["image"]): if transform is not None: images.append(transform(examples["image"][example_idx].permute(2,0,1))) else: images.append(examples["image"][example_idx].permute(2,0,1)) labels.append(examples["label"][example_idx]) output = {"label":labels, "image":images} return output raw_dataset = load_dataset('cifar10') raw_dataset.set_format('torch',columns=['img','label']) raw_dataset = raw_dataset.rename_column('img','image') features = datasets.Features({ "image": datasets.Array3D(shape=(3,32,32),dtype="float32"), "label": datasets.features.ClassLabel(names=[ "airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck", ]), }) train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000) ``` The error: ```python --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-54-bf29672c53ee> in <module>() 14 ]), 15 }) ---> 16 train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000) 2 frames /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint) 1287 test_inputs = self[:2] if batched else self[0] 1288 test_indices = [0, 1] if batched else 0 -> 1289 update_data = does_function_return_dict(test_inputs, test_indices) 1290 logger.info("Testing finished, running the mapping function on the dataset") 1291 /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in does_function_return_dict(inputs, indices) 1258 fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns] 1259 processed_inputs = ( -> 1260 function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) 1261 ) 1262 does_return_dict = isinstance(processed_inputs, Mapping) <ipython-input-52-b4dccbafb70d> in prepare_features(examples) 3 labels = [] 4 print(examples) ----> 5 for example_idx, example in enumerate(examples["image"]): 6 if transform is not None: 7 images.append(transform(examples["image"][example_idx].permute(2,0,1))) KeyError: 'image' ``` The print statement inside returns this: ```python {'label': tensor([6, 9])} ``` Apparently, both `img` and `image` do not exist after renaming. Note that this code works fine with `img` everywhere. Notebook: https://colab.research.google.com/drive/1SzESAlz3BnVYrgQeJ838vbMp1OsukiA2?usp=sharing
CLOSED
2021-03-10T18:54:17
2021-03-11T14:39:34
2021-03-11T14:38:40
https://github.com/huggingface/datasets/issues/2026
gchhablani
3
[]
2,022
ValueError when rename_column on splitted dataset
Hi there, I am loading `.tsv` file via `load_dataset` and subsequently split the rows into training and test set via the `ReadInstruction` API like so: ```python split = { 'train': ReadInstruction('train', to=90, unit='%'), 'test': ReadInstruction('train', from_=-10, unit='%') } dataset = load_dataset( path='csv', # use 'text' loading script to load from local txt-files delimiter='\t', # xxx data_files=text_files, # list of paths to local text files split=split, # xxx ) dataset ``` Part of output: ```python DatasetDict({ train: Dataset({ features: ['sentence', 'sentiment'], num_rows: 900 }) test: Dataset({ features: ['sentence', 'sentiment'], num_rows: 100 }) }) ``` Afterwards I'd like to rename the 'sentence' column to 'text' in order to be compatible with my modelin pipeline. If I run the following code I experience a `ValueError` however: ```python dataset['train'].rename_column('sentence', 'text') ``` ```python /usr/local/lib/python3.7/dist-packages/datasets/splits.py in __init__(self, name) 353 for split_name in split_names_from_instruction: 354 if not re.match(_split_re, split_name): --> 355 raise ValueError(f"Split name should match '{_split_re}'' but got '{split_name}'.") 356 357 def __str__(self): ValueError: Split name should match '^\w+(\.\w+)*$'' but got 'ReadInstruction('. ``` In particular, these behavior does not arise if I use the deprecated `rename_column_` method. Any idea what causes the error? Would assume something in the way I defined the split. Thanks in advance! :)
CLOSED
2021-03-10T09:40:38
2025-02-05T13:36:07
2021-03-16T14:05:05
https://github.com/huggingface/datasets/issues/2022
simonschoe
2
[]
2,021
Interactively doing save_to_disk and load_from_disk corrupts the datasets object?
The dataset_info.json file saved after using save_to_disk gets corrupted as follows. ![image](https://user-images.githubusercontent.com/16892570/110568474-ed969880-81b7-11eb-832f-2e5129656016.png) Is there a way to disable the cache that will save to /tmp/huggingface/datasets? I have a feeling there is a serious issue with caching.
CLOSED
2021-03-10T02:48:34
2021-03-13T10:07:41
2021-03-13T10:07:41
https://github.com/huggingface/datasets/issues/2021
shamanez
1
[]
2,012
No upstream branch
Feels like the documentation on adding a new dataset is outdated? https://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L49-L54 There is no upstream branch on remote.
CLOSED
2021-03-09T09:48:55
2021-03-09T11:33:31
2021-03-09T11:33:31
https://github.com/huggingface/datasets/issues/2012
theo-m
2
[ "documentation" ]
2,010
Local testing fails
I'm following the CI setup as described in https://github.com/huggingface/datasets/blob/8eee4fa9e133fe873a7993ba746d32ca2b687551/.circleci/config.yml#L16-L19 in a new conda environment, at commit https://github.com/huggingface/datasets/commit/4de6dbf84e93dad97e1000120d6628c88954e5d4 and getting ``` FAILED tests/test_caching.py::RecurseDumpTest::test_dump_ipython_function - TypeError: an integer is required (got type bytes) 1 failed, 2321 passed, 5109 skipped, 10 warnings in 124.32s (0:02:04) ``` Seems like a discrepancy with CI, perhaps a lib version that's not controlled? Tried with `pyarrow=={1.0.0,0.17.1,2.0.0}`
CLOSED
2021-03-09T09:01:38
2021-03-09T14:06:03
2021-03-09T14:06:03
https://github.com/huggingface/datasets/issues/2010
theo-m
3
[ "bug" ]
2,009
Ambiguous documentation
https://github.com/huggingface/datasets/blob/2ac9a0d24a091989f869af55f9f6411b37ff5188/templates/new_dataset_script.py#L156-L158 Looking at the template, I find this documentation line to be confusing: the method parameters don't include the `gen_kwargs`, so I'm unclear where they're coming from. Happy to push a PR with a clearer statement when I understand the meaning.
CLOSED
2021-03-09T08:42:11
2021-03-12T15:01:34
2021-03-12T15:01:34
https://github.com/huggingface/datasets/issues/2009
theo-m
2
[ "documentation" ]
2,007
How to not load huggingface datasets into memory
Hi, I am running this example from transformers library version 4.3.3: (Here is the full documentation https://github.com/huggingface/transformers/issues/8771 but the running command should work out of the box) USE_TF=0 deepspeed run_seq2seq.py --model_name_or_path google/mt5-base --dataset_name wmt16 --dataset_config_name ro-en --source_prefix "translate English to Romanian: " --task translation_en_to_ro --output_dir /test/test_large --do_train --do_eval --predict_with_generate --max_train_samples 500 --max_val_samples 500 --max_source_length 128 --max_target_length 128 --sortish_sampler --per_device_train_batch_size 8 --val_max_target_length 128 --deepspeed ds_config.json --num_train_epochs 1 --eval_steps 25000 --warmup_steps 500 --overwrite_output_dir (Here please find the script: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py) If I do not pass max_train_samples in the above command (so the full dataset is loaded), I get a memory issue on a GPU with 24 GB of memory. I need to train a large-scale mt5 model on large-scale wikipedia datasets (multiple of them concatenated, or other datasets in multiple languages like OPUS). Could you help me avoid loading the full data into memory, so that the scripts do not depend on the data size? In the above example, I was hoping the script could work without relying on the dataset size, so I can still train the model without subsampling the training set. Thank you so much @lhoestq for your great help in advance
CLOSED
2021-03-08T12:35:26
2021-08-04T18:02:25
2021-08-04T18:02:25
https://github.com/huggingface/datasets/issues/2007
dorost1234
2
[]
2,005
Setting to torch format not working with torchvision and MNIST
Hi, I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object. A snippet of what I am trying to do: ```python def prepare_features(examples): images = [] labels = [] for example_idx, example in enumerate(examples["image"]): if transform is not None: images.append(transform( np.array(examples["image"][example_idx], dtype=np.uint8) )) else: images.append(torch.tensor(np.array(examples["image"][example_idx], dtype=np.uint8))) labels.append(torch.tensor(examples["label"][example_idx])) output = {"label":labels, "image":images} return output raw_dataset = load_dataset('mnist') train_dataset = raw_dataset.map(prepare_features, batched=True, batch_size=10000) train_dataset.set_format("torch",columns=["image","label"]) ``` After this, I check the type of the following: ```python print(type(train_dataset["train"]["label"])) print(type(train_dataset["train"]["image"][0])) ``` This leads to the following output: ```python <class 'torch.Tensor'> <class 'list'> ``` I use `torch.utils.data.DataLoader` for batches; the type of `batch["train"]["image"]` is also `<class 'list'>`. I don't understand why only the `label` is converted to a torch tensor; why does the image not get converted? How can I fix this issue? Thanks, Gunjan EDIT: I just checked the shapes and the types: `batch["image"]` is actually a list of lists of tensors. Shape is (1,28,2,28), where `batch_size` is 2. I don't understand why this is happening. Ideally it should be a tensor of shape (2,1,28,28). EDIT 2: Inside `prepare_train_features`, the shape of `images[0]` is `torch.Size([1,28,28])`, so the conversion is working. However, the output of the `map` is a list of lists of lists of lists.
CLOSED
2021-03-08T07:38:11
2021-03-09T17:58:13
2021-03-09T17:58:13
https://github.com/huggingface/datasets/issues/2005
gchhablani
9
[]
2,003
Messages are being printed to the `stdout`
In this code segment, we can see that some messages are being printed to `stdout`. https://github.com/huggingface/datasets/blob/7e60bb509b595e8edc60a87f32b2bacfc065d607/src/datasets/builder.py#L545-L554 According to the comment, it is done intentionally, but I don't really understand why we don't log it at a higher level or print it directly to `stderr`. In my opinion, this kind of message should never be printed to stdout. At the very least, some configuration option or flag should be provided to explicitly prevent the package from contaminating stdout.
CLOSED
2021-03-07T22:09:34
2023-07-25T16:35:21
2023-07-25T16:35:21
https://github.com/huggingface/datasets/issues/2003
mahnerak
3
[]
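To make the suggestion in the issue above concrete, either of these would keep stdout clean while still surfacing the message; this is a sketch of the proposed alternatives, not the library's current behaviour:

```python
import logging
import sys

logger = logging.getLogger(__name__)

# instead of: print("Reusing dataset my_dataset (/path/to/cache)")
logger.info("Reusing dataset %s (%s)", "my_dataset", "/path/to/cache")  # respects log levels and handlers
print("Reusing dataset my_dataset (/path/to/cache)", file=sys.stderr)   # or at least avoid stdout
```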
2,001
Empty evidence document ("provenance") in KILT ELI5 dataset
In the original KILT benchmark(https://github.com/facebookresearch/KILT), all samples has its evidence document (i.e. wikipedia page id) for prediction. For example, a sample in ELI5 dataset has the format including provenance (=evidence document) like this `{"id": "1kiwfx", "input": "In Trading Places (1983, Akroyd/Murphy) how does the scheme at the end of the movie work? Why would buying a lot of OJ at a high price ruin the Duke Brothers?", "output": [{"answer": "I feel so old. People have been askinbg what happened at the end of this movie for what must be the last 15 years of my life. It never stops. Every year/month/fortnight, I see someone asking what happened, and someone explaining. Andf it will keep on happening, until I am 90yrs old, in a home, with nothing but the Internet and my bladder to keep me going. And there it will be: \"what happens at the end of Trading Places?\""}, {"provenance": [{"wikipedia_id": "242855", "title": "Futures contract", "section": "Section::::Abstract.", "start_paragraph_id": 1, "start_character": 14, "end_paragraph_id": 1, "end_character": 612, "bleu_score": 0.9232808519770748}]}], "meta": {"partial_evidence": [{"wikipedia_id": "520990", "title": "Trading Places", "section": "Section::::Plot.\n", "start_paragraph_id": 7, "end_paragraph_id": 7, "meta": {"evidence_span": ["On television, they learn that Clarence Beeks is transporting a secret USDA report on orange crop forecasts.", "On television, they learn that Clarence Beeks is transporting a secret USDA report on orange crop forecasts. Winthorpe and Valentine recall large payments made to Beeks by the Dukes and realize that the Dukes plan to obtain the report to corner the market on frozen orange juice.", "Winthorpe and Valentine recall large payments made to Beeks by the Dukes and realize that the Dukes plan to obtain the report to corner the market on frozen orange juice."]}}]}}` However, KILT ELI5 dataset from huggingface datasets library only contain empty list of provenance. `{'id': '1oy5tc', 'input': 'in football whats the point of wasting the first two plays with a rush - up the middle - not regular rush plays i get those', 'meta': {'left_context': '', 'mention': '', 'obj_surface': [], 'partial_evidence': [], 'right_context': '', 'sub_surface': [], 'subj_aliases': [], 'template_questions': []}, 'output': [{'answer': 'In most cases the O-Line is supposed to make a hole for the running back to go through. If you run too many plays to the outside/throws the defense will catch on.\n\nAlso, 2 5 yard plays gets you a new set of downs.', 'meta': {'score': 2}, 'provenance': []}, {'answer': "I you don't like those type of plays, watch CFL. We only get 3 downs so you can't afford to waste one. Lots more passing.", 'meta': {'score': 2}, 'provenance': []}]} ` should i perform other procedure to obtain evidence documents?
CLOSED
2021-03-07T15:41:35
2022-12-19T19:25:14
2021-03-17T05:51:01
https://github.com/huggingface/datasets/issues/2001
donggyukimc
1
[]
2,000
Windows Permission Error (most recent version of datasets)
Hi everyone, Can anyone help me with why the dataset loading script below raises a Windows Permission Error? I stuck quite closely to https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py , only I want to load the data from three local three-column tsv-files (id\ttokens\tpos_tags\n). I am using the most recent version of datasets. Thank you in advance! Luisa My script: ``` import datasets import csv logger = datasets.logging.get_logger(__name__) class SampleConfig(datasets.BuilderConfig): def __init__(self, **kwargs): super(SampleConfig, self).__init__(**kwargs) class Sample(datasets.GeneratorBasedBuilder): BUILDER_CONFIGS = [ SampleConfig(name="conll2003", version=datasets.Version("1.0.0"), description="Conll2003 dataset"), ] def _info(self): return datasets.DatasetInfo( description="Dataset with words and their POS-Tags", features=datasets.Features( { "id": datasets.Value("string"), "tokens": datasets.Sequence(datasets.Value("string")), "pos_tags": datasets.Sequence( datasets.features.ClassLabel( names=[ "''", ",", "-LRB-", "-RRB-", ".", ":", "CC", "CD", "DT", "EX", "FW", "HYPH", "IN", "JJ", "JJR", "JJS", "MD", "NN", "NNP", "NNPS", "NNS", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "WDT", "WP", "WRB", "``" ] ) ), } ), supervised_keys=None, homepage="https://catalog.ldc.upenn.edu/LDC2011T03", citation="Weischedel, Ralph, et al. OntoNotes Release 4.0 LDC2011T03. Web Download. Philadelphia: Linguistic Data Consortium, 2011.", ) def _split_generators(self, dl_manager): loaded_files = dl_manager.download_and_extract(self.config.data_files) return [ datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": loaded_files["train"]}), datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": loaded_files["test"]}), datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": loaded_files["val"]}) ] def _generate_examples(self, filepath): logger.info("generating examples from = %s", filepath) with open(filepath, encoding="cp1252") as f: data = csv.reader(f, delimiter="\t") ids = list() tokens = list() pos_tags = list() for id_, line in enumerate(data): #print(line) if len(line) == 1: if tokens: yield id_, {"id": ids, "tokens": tokens, "pos_tags": pos_tags} ids = list() tokens = list() pos_tags = list() else: ids.append(line[0]) tokens.append(line[1]) pos_tags.append(line[2]) # last example yield id_, {"id": ids, "tokens": tokens, "pos_tags": pos_tags} def main(): dataset = datasets.load_dataset( "data_loading.py", data_files={ "train": "train.tsv", "test": "test.tsv", "val": "val.tsv" } ) #print(dataset) if __name__=="__main__": main() ```
CLOSED
2021-03-07T11:55:28
2021-03-09T12:42:57
2021-03-09T12:42:57
https://github.com/huggingface/datasets/issues/2000
itsLuisa
5
[]
1,997
from datasets import MoleculeDataset, GEOMDataset
I got the ImportError: cannot import name 'MoleculeDataset' from 'datasets'. Has anyone run into a similar issue? Thanks!
CLOSED
2021-03-06T15:50:19
2021-03-06T16:13:26
2021-03-06T16:13:26
https://github.com/huggingface/datasets/issues/1997
futianfan
0
[ "dataset request" ]
1,996
Error when exploring `arabic_speech_corpus`
Navigate to https://huggingface.co/datasets/viewer/?dataset=arabic_speech_corpus Error: ``` ImportError: To be able to use this dataset, you need to install the following dependencies['soundfile'] using 'pip install soundfile' for instance' Traceback: File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/script_runner.py", line 332, in _run_script exec(code, module.__dict__) File "/home/sasha/nlp-viewer/run.py", line 233, in <module> configs = get_confs(option) File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/caching.py", line 604, in wrapped_func return get_or_create_cached_value() File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/caching.py", line 588, in get_or_create_cached_value return_value = func(*args, **kwargs) File "/home/sasha/nlp-viewer/run.py", line 145, in get_confs module_path = nlp.load.prepare_module(path, dataset=True File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/datasets/load.py", line 342, in prepare_module f"To be able to use this {module_type}, you need to install the following dependencies" ```
CLOSED
2021-03-06T05:55:20
2022-10-05T13:24:26
2022-10-05T13:24:26
https://github.com/huggingface/datasets/issues/1996
elgeish
3
[ "bug", "nlp-viewer", "speech" ]
1,994
not being able to get wikipedia es language
Hi I am trying to run a code with wikipedia of config 20200501.es, getting: Traceback (most recent call last): File "run_mlm_t5.py", line 608, in <module> main() File "run_mlm_t5.py", line 359, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name) File "/dara/libs/anaconda3/envs/success432/lib/python3.7/site-packages/datasets-1.2.1-py3.7.egg/datasets/load.py", line 612, in load_dataset ignore_verifications=ignore_verifications, File "/dara/libs/anaconda3/envs/success432/lib/python3.7/site-packages/datasets-1.2.1-py3.7.egg/datasets/builder.py", line 527, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/dara/libs/anaconda3/envs/success432/lib/python3.7/site-packages/datasets-1.2.1-py3.7.egg/datasets/builder.py", line 1050, in _download_and_prepare "\n\t`{}`".format(usage_example) datasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/ If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). Example of usage: `load_dataset('wikipedia', '20200501.es', beam_runner='DirectRunner')` thanks @lhoestq for any suggestion/help
OPEN
2021-03-05T08:31:48
2021-03-11T20:46:21
null
https://github.com/huggingface/datasets/issues/1994
dorost1234
8
[]
1,993
How to load a dataset with load_from_disk and save it again after doing transformations without changing the original?
I am using the latest datasets library. In my work, I first use **load_from_disk** to load a dataset that contains 3.8 GB of data. Then during my training process, I update that dataset object, add new elements, and save it in a different place. When I save the dataset with **save_to_disk**, the original dataset which is already on disk also gets updated. I do not want to update it. How can I prevent this?
CLOSED
2021-03-05T05:25:50
2021-03-22T04:05:50
2021-03-22T04:05:50
https://github.com/huggingface/datasets/issues/1993
shamanez
7
[]
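For reference, a minimal sketch of the workflow described in the issue above (the paths and the transformation are placeholders); the question is why saving the transformed copy to a new directory also ends up touching the original one:

```python
from datasets import load_from_disk

dataset = load_from_disk("/path/to/original_dataset")  # the 3.8 GB dataset, memory-mapped

# any transformation that returns a new Dataset object
updated = dataset.map(lambda example: {"text": example["text"] + " [updated]"})

# save the transformed copy somewhere else; the original directory should stay untouched
updated.save_to_disk("/path/to/updated_dataset")
```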
1,992
`datasets.map` multi processing much slower than single processing
Hi, thank you for the great library. I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G. My data preparation is roughly two steps: `load_dataset`, which splits corpora into a table of sentences, and `map`, which converts a sentence into a list of integers using a tokenizer. I noticed that the `map` function with `num_proc=mp.cpu_count() //2` takes more than 20 hours to finish the job, whereas `num_proc=1` gets the job done in about 5 hours. The machine I used has 40 cores, with 126G of RAM. There were no other jobs when the `map` function was running. What could be the reason? I would be happy to provide any information necessary to spot the reason. p.s. I was experiencing the imbalance issue mentioned [here](https://github.com/huggingface/datasets/issues/610#issuecomment-705177036) when I was using multiprocessing. p.s.2 When I run `map` with `num_proc=1`, I see one tqdm bar but all the cores are working. When `num_proc=20`, only 20 cores work. ![Screen Shot 2021-03-05 at 11 04 59](https://user-images.githubusercontent.com/29157715/110056895-ef6cf000-7da2-11eb-8307-6698e9fb1ad4.png)
OPEN
2021-03-05T02:10:02
2024-06-08T20:18:03
null
https://github.com/huggingface/datasets/issues/1992
hwijeen
14
[ "bug" ]
1,990
OSError: Memory mapping file failed: Cannot allocate memory
Hi, I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py ``` python run_mlm.py --model_name_or_path bert-base-multilingual-cased --dataset_name wikipedia --dataset_config_name 20200501.en --do_train --do_eval --output_dir /dara/test --max_seq_length 128 ``` I am using transformer version: 4.3.2 But I got memory erorr using this dataset, is there a way I could save on memory with dataset library with wikipedia dataset? Specially I need to train a model with multiple of wikipedia datasets concatenated. thank you very much @lhoestq for your help and suggestions: ``` File "run_mlm.py", line 441, in <module> main() File "run_mlm.py", line 233, in main split=f"train[{data_args.validation_split_percentage}%:]", File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/load.py", line 750, in load_dataset ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 740, in as_dataset map_tuple=True, File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/utils/py_utils.py", line 225, in map_nested return function(data_struct) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 757, in _build_single_dataset in_memory=in_memory, File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 829, in _as_dataset in_memory=in_memory, File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 215, in read return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 236, in read_files pa_table = self._read_files(files, in_memory=in_memory) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 171, in _read_files pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 302, in _get_dataset_from_filename pa_table = ArrowReader.read_table(filename, in_memory=in_memory) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 322, in read_table stream = stream_from(filename) File "pyarrow/io.pxi", line 782, in pyarrow.lib.memory_map File "pyarrow/io.pxi", line 743, in pyarrow.lib.MemoryMappedFile._open File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status OSError: Memory mapping file failed: Cannot allocate memory ```
CLOSED
2021-03-04T18:21:58
2021-08-04T18:04:25
2021-08-04T18:04:25
https://github.com/huggingface/datasets/issues/1990
dorost1234
6
[]
1,989
Question/problem with dataset labels
Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon". This is the trace I get: ``` File "../../../models/tr-4.3.2/run_puppets.py", line 523, in <module> main() File "../../../models/tr-4.3.2/run_puppets.py", line 249, in main datasets = load_dataset("csv", data_files=data_files) File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/load.py", line 740, in load_dataset builder_instance.download_and_prepare( File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py", line 572, in download_and_prepare self._download_and_prepare( File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py", line 650, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py", line 1028, in _prepare_split writer.write_table(table) File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/arrow_writer.py", line 292, in write_table pa_table = pa_table.cast(self._schema) File "pyarrow/table.pxi", line 1311, in pyarrow.lib.Table.cast File "pyarrow/table.pxi", line 265, in pyarrow.lib.ChunkedArray.cast File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/pyarrow/compute.py", line 87, in cast return call_function("cast", [arr], options) File "pyarrow/_compute.pyx", line 298, in pyarrow._compute.call_function File "pyarrow/_compute.pyx", line 192, in pyarrow._compute.Function.call File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Failed to parse string: not nurse ``` Any ideas how to fix this? For now, I'll probably make them numeric.
CLOSED
2021-03-04T17:06:53
2023-07-24T14:39:33
2023-07-24T14:39:33
https://github.com/huggingface/datasets/issues/1989
ioana-blue
10
[]
1,988
Readme.md is misleading about kinds of datasets?
Hi! In the README.md, you say: "efficient data pre-processing: simple, fast and reproducible data pre-processing for the above public datasets as well as your own local datasets in CSV/JSON/text. " But here: https://github.com/huggingface/datasets/blob/master/templates/new_dataset_script.py#L82-L117 you mention other kinds of datasets, with images and so on. I'm confused. Is it possible to use it to store, say, imagenet locally?
CLOSED
2021-03-04T17:04:20
2021-08-04T18:05:23
2021-08-04T18:05:23
https://github.com/huggingface/datasets/issues/1988
surak
1
[]
1,987
wmt15 is broken
While testing the hotfix, I tried a random other wmt release and found wmt15 to be broken: ``` python -c 'from datasets import load_dataset; load_dataset("wmt15", "de-en")' Downloading: 2.91kB [00:00, 818kB/s] Downloading: 3.02kB [00:00, 897kB/s] Downloading: 41.1kB [00:00, 19.1MB/s] Downloading and preparing dataset wmt15/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt15/de-en/1.0.0/39ad5f9262a0910a8ad7028ad432731ad23fdf91f2cebbbf2ba4776b9859e87f... Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/load.py", line 740, in load_dataset builder_instance.download_and_prepare( File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/builder.py", line 578, in download_and_prepare self._download_and_prepare( File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/builder.py", line 634, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt15/39ad5f9262a0910a8ad7028ad432731ad23fdf91f2cebbbf2ba4776b9859e87f/wmt_utils.py", line 757, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 283, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 191, in download downloaded_path_or_paths = map_nested( File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 203, in map_nested mapped = [ File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 204, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 160, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 160, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 142, in _single_map_nested return function(data_struct) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 214, in _download return cached_path(url_or_filename, download_config=download_config) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 274, in cached_path output_path = get_from_cache( File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 614, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/wmt/wmt15/resolve/main/training-parallel-nc-v10.tgz ```
CLOSED
2021-03-04T16:46:25
2022-10-05T13:12:26
2022-10-05T13:12:26
https://github.com/huggingface/datasets/issues/1987
stas00
1
[]
1,986
wmt datasets fail to load
~\.cache\huggingface\modules\datasets_modules\datasets\wmt14\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\wmt_utils.py in _split_generators(self, dl_manager) 758 # Extract manually downloaded files. 759 manual_files = dl_manager.extract(manual_paths_dict) --> 760 extraction_map = dict(downloaded_files, **manual_files) 761 762 for language in self.config.language_pair: TypeError: type object argument after ** must be a mapping, not list
CLOSED
2021-03-04T14:18:55
2021-03-04T14:31:07
2021-03-04T14:31:07
https://github.com/huggingface/datasets/issues/1986
sabania
1
[]
1,984
Add tests for WMT datasets
As requested in #1981, we need tests for WMT datasets, using dummy data.
CLOSED
2021-03-04T06:46:42
2022-11-04T14:19:16
2022-11-04T14:19:16
https://github.com/huggingface/datasets/issues/1984
albertvillanova
1
[]
1,983
The size of CoNLL-2003 is not consistent with the official release.
Thanks for sharing the dataset! But when I use conll-2003, I have some questions. The statistics of conll-2003 in this repo are: \#train 14041 \#dev 3250 \#test 3453 While the official statistics are: \#train 14987 \#dev 3466 \#test 3684 Looking forward to your reply~
CLOSED
2021-03-04T04:41:34
2022-10-05T13:13:26
2022-10-05T13:13:26
https://github.com/huggingface/datasets/issues/1983
h-peng17
4
[]
1,981
wmt datasets fail to load
on master: ``` python -c 'from datasets import load_dataset; load_dataset("wmt14", "de-en")' Downloading and preparing dataset wmt14/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt14/de-en/1.0.0/43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e... Traceback (most recent call last): File "<string>", line 1, in <module> File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 740, in load_dataset builder_instance.download_and_prepare( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 578, in download_and_prepare self._download_and_prepare( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 634, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt14/43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e/wmt_utils.py", line 760, in _split_generators extraction_map = dict(downloaded_files, **manual_files) ``` it worked fine recently. same problem if I try wmt16. git bisect points to this commit from Feb 25 as the culprit https://github.com/huggingface/datasets/commit/792f1d9bb1c5361908f73e2ef7f0181b2be409fa @albertvillanova
CLOSED
2021-03-03T19:21:39
2021-03-04T14:16:47
2021-03-03T22:48:36
https://github.com/huggingface/datasets/issues/1981
stas00
6
[]
1,977
ModuleNotFoundError: No module named 'apache_beam' for wikipedia datasets
Hi, I am trying to run the run_mlm.py code [1] of huggingface with the following "wikipedia" / "20200501.aa" dataset: `python run_mlm.py --model_name_or_path bert-base-multilingual-cased --dataset_name wikipedia --dataset_config_name 20200501.aa --do_train --do_eval --output_dir /tmp/test-mlm --max_seq_length 256 ` I am getting this error, but as per the documentation, the huggingface datasets library provides a processed version of this dataset and users can load it without setting up apache-beam. Could you please help me load this dataset? Do you think I can run run_mlm.py with this dataset, or is there any way I could subsample it and train the model? I would greatly appreciate it if the processed version of all languages for this dataset were provided, which would allow users to use them without setting up apache-beam. Thanks, I really appreciate your help. @lhoestq thanks. [1] https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py The error I get: ``` >>> import datasets >>> datasets.load_dataset("wikipedia", "20200501.aa") Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /dara/temp/cache_home_2/datasets/wikipedia/20200501.aa/1.0.0/4021357e28509391eab2f8300d9b689e7e8f3a877ebb3d354b01577d497ebc63... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/dara/temp/libs/anaconda3/envs/codes/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/load.py", line 746, in load_dataset use_auth_token=use_auth_token, File "/dara/temp/libs/anaconda3/envs/codes/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 573, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/dara/temp/libs/anaconda3/envs/codes/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 1099, in _download_and_prepare import apache_beam as beam ModuleNotFoundError: No module named 'apache_beam' ```
OPEN
2021-03-02T19:21:28
2021-03-03T10:17:40
null
https://github.com/huggingface/datasets/issues/1977
dorost1234
2
[]
1,973
Question: what gets stored in the datasets cache and why is it so huge?
I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G and it seems really large. What is stored in there and why is it so large? I don't think I noticed this problem before; it seems to be related to the new version of the datasets library. Any insight? Thank you!
CLOSED
2021-03-02T14:35:53
2021-03-30T14:03:59
2021-03-16T09:44:00
https://github.com/huggingface/datasets/issues/1973
ioana-blue
8
[]
1,972
'Dataset' object has no attribute 'rename_column'
'Dataset' object has no attribute 'rename_column'
CLOSED
2021-03-02T08:01:49
2022-06-01T16:08:47
2022-06-01T16:08:47
https://github.com/huggingface/datasets/issues/1972
farooqzaman1
1
[]
1,965
Can we parallelize the add_faiss_index process over dataset shards?
I am thinking of making the **add_faiss_index** process faster. What if we run the add_faiss_index process on separate dataset shards and then combine them (with dataset concatenation) before saving the faiss index file? I feel that, theoretically, this could reduce retrieval accuracy since it affects the indexing process. @lhoestq
CLOSED
2021-03-01T12:47:34
2021-03-04T19:40:56
2021-03-04T19:40:42
https://github.com/huggingface/datasets/issues/1965
shamanez
3
[]
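To make the proposal above concrete, here is a rough sketch of the per-shard variant being discussed (the path, column name and embedding size are placeholders). Note it builds several shard-local indexes and merges results at query time rather than one global index, which is exactly the accuracy concern raised in the issue:

```python
import numpy as np
from datasets import load_from_disk

dataset = load_from_disk("/path/to/dataset_with_embeddings")  # placeholder path

num_shards = 4
shards = [dataset.shard(num_shards=num_shards, index=i, contiguous=True) for i in range(num_shards)]

# index each shard independently; these calls are the part that could run in parallel
for shard in shards:
    shard.add_faiss_index(column="embeddings")

# at query time, search every shard and keep the best-scoring hits overall
query = np.random.rand(768).astype("float32")  # placeholder query embedding
per_shard_results = [shard.get_nearest_examples("embeddings", query, k=5) for shard in shards]
```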
1,964
Datasets.py function load_dataset does not match squad dataset
### 1 When I try to train lxmert and follow the command in the README that uses --dataset_name: ```shell python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir /home2/zhenggo1/checkpoint/lxmert_squad ``` the bug is: ``` Downloading and preparing dataset squad/plain_text (download: 33.51 MiB, generated: 85.75 MiB, post-processed: Unknown size, total: 119.27 MiB) to /home2/zhenggo1/.cache/huggingface/datasets/squad/plain_text/1.0.0/4c81550d83a2ac7c7ce23783bd8ff36642800e6633c1f18417fb58c3ff50cdd7... Traceback (most recent call last): File "examples/question-answering/run_qa.py", line 501, in <module> main() File "examples/question-answering/run_qa.py", line 217, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name) File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/load.py", line 746, in load_dataset use_auth_token=use_auth_token, File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/builder.py", line 573, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/builder.py", line 633, in _download_and_prepare self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files" File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json'] ``` I tried to find the [checksum link](https://github.com/huggingface/datasets/blob/master/datasets/squad/dataset_infos.json). Is the problem that plain_text does not have a checksum? ### 2 When I try to train lxmert and use a local dataset: ``` python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --train_file $SQUAD_DIR/train-v1.1.json --validation_file $SQUAD_DIR/dev-v1.1.json --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir /home2/zhenggo1/checkpoint/lxmert_squad ``` the bug is: ``` ['title', 'paragraphs'] Traceback (most recent call last): File "examples/question-answering/run_qa.py", line 501, in <module> main() File "examples/question-answering/run_qa.py", line 273, in main answer_column_name = "answers" if "answers" in column_names else column_names[2] IndexError: list index out of range ``` I printed the answer_column_name and found that the local squad dataset needs to be preprocessed by the datasets package so that the code below can work: ``` if training_args.do_train: column_names = datasets["train"].column_names else: column_names = datasets["validation"].column_names print(datasets["train"].column_names) question_column_name = "question" if "question" in column_names else column_names[0] context_column_name = "context" if "context" in column_names else column_names[1] answer_column_name = "answers" if "answers" in column_names else column_names[2] ``` ## Please tell me how to fix the bug, thanks a lot!
CLOSED
2021-03-01T08:41:31
2022-10-05T13:09:47
2022-10-05T13:09:47
https://github.com/huggingface/datasets/issues/1964
LeopoldACC
6
[]