LangCell: Language-Cell Pre-training for Cell Identity Understanding
Dataset fields (from the Arrow schema): `input_ids` (list of int16), `soma_joinid` (int64), `dataset_id`, `assay`, `assay_ontology_term_id`, `cell_type`, `cell_type_ontology_term_id`, `development_stage`, `development_stage_ontology_term_id`, `disease`, `disease_ontology_term_id`, `donor_id`, `is_primary_data` (bool), `self_reported_ethnicity`, `self_reported_ethnicity_ontology_term_id`, `sex`, `sex_ontology_term_id`, `suspension_type`, `tissue`, `tissue_ontology_term_id`, `tissue_general`, `tissue_general_ontology_term_id`, `fulltext`, `length` (int64). All fields without an annotated type are strings.
scLibrary is the pre-training dataset used by the LangCell model.
You can download `sclibrary.dataset` from this repository with git-lfs, then load it with the following code:
```python
from datasets import load_from_disk

sclibrary = load_from_disk("/path/to/sclibrary.dataset")
```
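Once loaded, each record pairs a ranked gene-token sequence (`input_ids`) with cell-identity metadata such as `cell_type`, `tissue`, and `disease`. The sketch below shows the kind of per-field tallying you might do after `load_from_disk`; the field names match the dataset's columns, but the records themselves are invented mock stand-ins, since the real data must be downloaded first.

```python
from collections import Counter

# Mock stand-ins for scLibrary records; real records come from
# load_from_disk and carry the same field names (values here are invented).
records = [
    {"input_ids": [5, 2, 9], "cell_type": "T cell", "tissue": "blood"},
    {"input_ids": [7, 1], "cell_type": "B cell", "tissue": "blood"},
    {"input_ids": [3, 8, 4], "cell_type": "T cell", "tissue": "lung"},
]

# Tally how many cells of each annotated type are present.
counts = Counter(rec["cell_type"] for rec in records)
print(counts["T cell"])  # 2
print(counts["B cell"])  # 1
```

The same pattern works on the real dataset, since a loaded `Dataset` is iterable over per-cell dictionaries.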
Model GitHub: https://github.com/PharMolix/LangCell
Paper: https://arxiv.org/abs/2405.06708
If you find scLibrary helpful to your research, please consider giving our GitHub repository a 🌟star and 📎citing the following article. Thank you for your support!
```bibtex
@misc{zhao2024langcell,
  title={LangCell: Language-Cell Pre-training for Cell Identity Understanding},
  author={Suyuan Zhao and Jiahuan Zhang and Yizhen Luo and Yushuai Wu and Zaiqing Nie},
  year={2024},
  eprint={2405.06708},
  archivePrefix={arXiv},
  primaryClass={q-bio.GN}
}
```