---
language:
- sv
license: apache-2.0
library_name: transformers
tags:
- unsloth
datasets:
- neph1/bellman-7b-finetune
- neph1/codefeedback-swedish
base_model:
- mistralai/Mistral-Nemo-Instruct-2407
---

# Model Card for Bellman

This version of Bellman is finetuned from Mistral-Nemo-Instruct-2407.
It's a rank-128 QLoRA adapter trained for about one epoch.
It's finetuned for question answering, based on a dataset created from
Swedish Wikipedia, with many Sweden-centric questions.
New since previous versions are questions from a translated CodeFeedback dataset, as well as a number of stories.

Consider this a work in progress while I adjust the training for this new model size. I will publish a few updates to the model.

Quality seems to depend on the initial seed: sometimes the output is amazing, sometimes quite bad. If you're having trouble getting good results, try lowering the temperature.


For GGUFs, see: <br> https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-bellman-12b-GGUF and <br>
https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-bellman-12b-i1-GGUF

![image/png](https://cdn-uploads.huggingface.co/production/uploads/653cd3049107029eb004f968/IDGX3d9lGe6yx-yHjsrav.png)

[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/T6T3S8VXY)

## Model Details

Rank: 128

Trained with Unsloth on a 3090

Differences from base model:

The base model is already quite good at Swedish, but my 'vibe check' says this finetune is slightly improved, with fewer awkward wordings.

Bellman is trained on fairly short answers and tends to be less verbose.

Output example (Mistral-Nemo-Instruct-bellman-12b.i1-Q4_K_M.gguf):

   User: "Hej!"

   Bellman: "Hej! Vad kan jag hjälpa dig med?"

   User: "Berätta lite om Sverige"

   Bellman: "Sverige är ett land i norra Europa med en rik historia. Det består av 21 län och har en yta på drygt 450,000 kvadratkilometer. Stockholm är Sveriges huvudstad."

   User: "När grundades det?"

   Bellman: "Det är inte exakt känt när Sverige grundades. Det var först under 1300-talet att Sverige började ge sig uttryck i internationella affärer."

   User: "Vad hände 1523?"

   Bellman: "År 1523 blev Gustav Vasa kung över Sverige och inledde reformationen i landet. Han var också den förste monarken att bära titeln "Sveriges Konung"."

   User: "Hur länge levde han?"

   Bellman: "Gustav Vasa levde från år 1496 till 1560."
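The conversation above is held in the Mistral instruct chat format inherited from the base model. As a rough illustration, a multi-turn prompt can be assembled like this; note that the exact spacing and BOS/EOS handling shown here are assumptions on my part, and in practice you should rely on `tokenizer.apply_chat_template` from the base model's tokenizer:

```python
def build_mistral_prompt(history, user_message):
    """Assemble a multi-turn [INST] ... [/INST] prompt string.

    history: list of (user, assistant) string pairs from earlier turns.
    NOTE: spacing and <s>/</s> placement are an approximation; the
    authoritative template ships with the base model's tokenizer.
    """
    prompt = "<s>"
    for user, assistant in history:
        prompt += f"[INST] {user} [/INST] {assistant}</s>"
    prompt += f"[INST] {user_message} [/INST]"
    return prompt


prompt = build_mistral_prompt(
    [("Hej!", "Hej! Vad kan jag hjälpa dig med?")],
    "Berätta lite om Sverige",
)
```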

### Training Parameters
```python
per_device_train_batch_size = 2,
gradient_accumulation_steps = 64,
num_train_epochs = 3,
warmup_steps = 5,
learning_rate = 1e-4,
logging_steps = 15,
optim = "adamw_8bit",
weight_decay = 0.01,
lr_scheduler_type = "linear",
seed = 3407,
per_device_eval_batch_size = 2,
evaluation_strategy = "steps",
eval_accumulation_steps = 64,
eval_steps = 15,
eval_delay = 0,
save_strategy = "steps",
save_steps = 50,
```
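One consequence of these settings worth spelling out: with a per-device batch size of 2 and 64 gradient-accumulation steps on a single 3090, each optimizer step sees 2 × 64 = 128 training examples. A quick sketch:

```python
# Effective batch size implied by the settings above.
per_device_train_batch_size = 2
gradient_accumulation_steps = 64
num_gpus = 1  # trained with Unsloth on a single 3090

effective_batch_size = (
    per_device_train_batch_size * gradient_accumulation_steps * num_gpus
)
print(effective_batch_size)  # 128
```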
  
### Model Description


- **Developed by:** Me
- **Funded by:** Me
- **Model type:** Instruct
- **Language(s) (NLP):** Swedish
- **License:** Apache 2.0
- **Finetuned from model:** Mistral-Nemo-Instruct-2407

## Model Card Contact

rickard@mindemia.com