# InternLM2-SFT-SCDPO
This model is a fine-tuned version of InternLM2-20B, trained with supervised fine-tuning (SFT) followed by Step-Controlled DPO (SCDPO).
It achieves the following results on the evaluation set:
- Loss: 0.2572
- Rewards/chosen: 0.7366
- Rewards/rejected: -2.9817
- Rewards/accuracies: 0.8929
- Rewards/margins: 3.7183
- Logps/rejected: -155.1884
- Logps/chosen: -92.5904
- Logits/rejected: -2.3032
- Logits/chosen: -2.4880
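The reward metrics above follow the standard DPO formulation, in which the implicit reward is the β-scaled log-probability ratio between the policy and the reference model. A minimal sketch of how such metrics are typically computed (the β value and tensor names are assumptions, not taken from this training run):

```python
import torch
import torch.nn.functional as F

def dpo_metrics(policy_chosen_logps, policy_rejected_logps,
                reference_chosen_logps, reference_rejected_logps, beta=0.1):
    """Compute the DPO loss and reward metrics of the kind reported above.

    Each argument is a 1-D tensor of summed per-sequence token log-probs.
    """
    # Implicit rewards: beta-scaled log-prob ratio against the reference model
    chosen_rewards = beta * (policy_chosen_logps - reference_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - reference_rejected_logps)

    margins = chosen_rewards - rejected_rewards
    # rewards/accuracies: fraction of pairs where the chosen response wins
    accuracies = (chosen_rewards > rejected_rewards).float().mean()
    # DPO loss: negative log-sigmoid of the reward margin
    loss = -F.logsigmoid(margins).mean()

    return {
        "loss": loss,
        "rewards/chosen": chosen_rewards.mean(),
        "rewards/rejected": rejected_rewards.mean(),
        "rewards/accuracies": accuracies,
        "rewards/margins": margins.mean(),
    }
```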
## Model description
This is a model fine-tuned for mathematical problem-solving.
## Intended uses & limitations
The model is intended for solving math problems.
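A minimal inference sketch, assuming the checkpoint is loaded from a local path or Hub repository (the `model_id` below is a placeholder; InternLM2-based checkpoints require `trust_remote_code=True`):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/InternLM2-SFT-SCDPO"  # placeholder: substitute the actual repo id or path
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = ("Natalia sold clips to 48 of her friends in April, and then she sold "
          "half as many clips in May. How many clips did Natalia sell altogether?")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```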
## Training and evaluation data

Evaluation accuracy (%) on five math benchmarks:

| Model | gsm8k | math | ape | cmath | mgsm_zh |
|:--|--:|--:|--:|--:|--:|
| InternLM2-SFT | 86.4 | 55.8 | 77.1 | 88.4 | 74.8 |
| InternLM2-SFT-DPO | 87.0 | 57.6 | 78.7 | 89.9 | 76.0 |
| InternLM2-SFT-DPO (data-equal) | 88.2 | 57.5 | 78.8 | 89.3 | 76.0 |
| InternLM2-SFT-SCDPO | 88.5 | 58.1 | 79.3 | 90.3 | 80.4 |
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 1.5e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
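A sketch of how these settings map onto `transformers.TrainingArguments` (the output directory and mixed-precision flag are assumptions; the 16-way data parallelism comes from the launcher, not from these arguments):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="internlm2-sft-scdpo",  # assumption: output path is not given in the card
    learning_rate=1.5e-7,
    per_device_train_batch_size=2,     # x 16 GPUs = total train batch size 32
    per_device_eval_batch_size=4,      # x 16 GPUs = total eval batch size 64
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=2,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    bf16=True,                         # assumption: precision is not stated in the card
)
```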
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2