---
dataset_info:
  features:
  - name: sentence
    dtype: string
  - name: unfairness_level
    dtype: string
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 859666
    num_examples: 5378
  - name: validation
    num_bytes: 73545
    num_examples: 415
  - name: test
    num_bytes: 175734
    num_examples: 1038
  download_size: 547326
  dataset_size: 1108945
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
license: mit
language:
- en
pretty_name: TOS_Dataset
---

# TOS_Dataset

This dataset contains clauses from Terms of Service (ToS) documents, each annotated with a fairness level: `clearly_fair`, `potentially_unfair`, or `clearly_unfair`.

## Dataset Summary

The dataset comprises clauses extracted from various ToS documents. Each clause is annotated with a fairness level indicating whether it is clearly fair, potentially unfair, or clearly unfair.

## Supported Tasks

This dataset supports multi-class text classification: predicting the fairness level of a clause from a ToS document.
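
Framed as a three-class problem, the string labels can be mapped to integer class ids for model training. A minimal sketch (the mapping, its ordering, and the helper below are illustrative choices, not part of the dataset):

```python
# Map the three fairness labels to integer class ids.
# The ordering here is a convention chosen for this sketch.
LABEL2ID = {"clearly_fair": 0, "potentially_unfair": 1, "clearly_unfair": 2}
ID2LABEL = {v: k for k, v in LABEL2ID.items()}

def encode_label(example):
    """Add an integer `label` field derived from `unfairness_level`."""
    example["label"] = LABEL2ID[example["unfairness_level"]]
    return example

# A hypothetical row following the dataset's schema (not a real entry).
row = {"sentence": "We may modify these terms at any time.",
       "unfairness_level": "potentially_unfair"}
print(encode_label(row)["label"])  # 1
```

With the `datasets` library, `dataset.map(encode_label)` applies the same transformation to every split.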

## Languages

The dataset is in English.

## Dataset Structure

The dataset is split into three sets: train, validation, and test.

### Data Fields

- `sentence`: The clause text from a ToS document.
- `unfairness_level`: The fairness label assigned to the clause. Possible values are `clearly_fair`, `potentially_unfair`, and `clearly_unfair`.
- `__index_level_0__`: A residual integer index from dataset construction; it carries no annotation information.

### Data Splits

| Split      | Examples |
|------------|---------:|
| Train      | 5,378    |
| Validation | 415      |
| Test       | 1,038    |
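
For class-imbalance checks, the label distribution of a split can be tallied with a `Counter`. A small sketch over hypothetical rows following the dataset's schema (the sentences below are illustrative, not real entries):

```python
from collections import Counter

# Hypothetical rows in the dataset's schema; not actual dataset entries.
rows = [
    {"sentence": "You may cancel your subscription at any time.",
     "unfairness_level": "clearly_fair"},
    {"sentence": "We may change these terms without notice.",
     "unfairness_level": "potentially_unfair"},
    {"sentence": "You waive any right to participate in a class action.",
     "unfairness_level": "clearly_unfair"},
    {"sentence": "We may terminate your account at our sole discretion.",
     "unfairness_level": "potentially_unfair"},
]

# Tally how many clauses fall under each fairness label.
distribution = Counter(r["unfairness_level"] for r in rows)
print(distribution.most_common(1))  # [('potentially_unfair', 2)]
```

With the real dataset loaded, the same one-liner works per split, e.g. `Counter(dataset["train"]["unfairness_level"])`.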

## Usage

To load the dataset:

```python
from datasets import load_dataset

dataset = load_dataset("CodeHima/TOS_Dataset")
```

## Example

```python
from datasets import load_dataset

dataset = load_dataset("CodeHima/TOS_Dataset")

for split in ['train', 'validation', 'test']:
    print(f"Example from {split} split:")
    print(dataset[split][0])
```

## License

This dataset is licensed under the MIT License. See the LICENSE file for details.

## Citation

If you use this dataset in your research, please cite it as follows:

```bibtex
@dataset{CodeHima_TOS_Dataset,
  author    = {CodeHima},
  title     = {TOS_Dataset},
  year      = {2024},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/CodeHima/TOS_Dataset}
}
```