# 🍷 Llama-3.2-3B-Instruct-Alpaca

This is a fine-tune of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).

It was trained on the [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) dataset using [Unsloth](https://github.com/unslothai/unsloth).

This was my first fine-tune, and it didn't turn out the best, but it is usable for small applications.
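For context, a fine-tune like this is typically produced by loading the base model with Unsloth's `FastLanguageModel`, attaching LoRA adapters, and running TRL's `SFTTrainer` over the Alpaca rows. The sketch below is not the exact training script used here; the prompt template, LoRA rank, sequence length, and every hyperparameter are illustrative assumptions.

```python
# Hypothetical training sketch -- not the exact script or hyperparameters used
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the base model; 4-bit loading is Unsloth's usual memory-saving default
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.2-3B-Instruct",
    max_seq_length=2048,  # assumed
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # assumed LoRA rank
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# Flatten each instruction/input/output row into a single training string
alpaca_prompt = "### Instruction:\n{}\n\n### Input:\n{}\n\n### Response:\n{}"

def format_rows(examples):
    texts = [
        alpaca_prompt.format(ins, inp, out) + tokenizer.eos_token
        for ins, inp, out in zip(
            examples["instruction"], examples["input"], examples["output"]
        )
    ]
    return {"text": texts}

dataset = load_dataset("yahma/alpaca-cleaned", split="train").map(
    format_rows, batched=True
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,  # assumed
        gradient_accumulation_steps=4,  # assumed
        num_train_epochs=1,             # assumed
        learning_rate=2e-4,             # assumed
        output_dir="outputs",
    ),
)
trainer.train()
```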

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "itsnebulalol/Llama-3.2-3B-Instruct-Alpaca"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the chat messages with the model's built-in chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in half precision, spread across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
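On recent `transformers` releases, the text-generation pipeline also accepts the chat messages list directly, which skips the manual `apply_chat_template` step. A minimal variant, assuming a version with built-in chat support:

```python
# Requires a transformers version whose pipeline handles chat messages natively
outputs = pipeline(messages, max_new_tokens=256, do_sample=True, temperature=0.7)
# With chat input, generated_text holds the whole conversation; the last entry is the reply
print(outputs[0]["generated_text"][-1]["content"])
```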

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
