Instructions for using bitext/OpenELM-450M_Retail with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use bitext/OpenELM-450M_Retail with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="bitext/OpenELM-450M_Retail", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("bitext/OpenELM-450M_Retail", trust_remote_code=True, dtype="auto")
```
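If you load the model directly, you also need a tokenizer to run generation yourself. Below is a minimal sketch of that path, assuming the repository ships a compatible tokenizer and chat template; if it does not, the pipeline snippet above is the simpler route. The example prompt is illustrative only.

```python
# Minimal generation sketch (assumes the repo provides a tokenizer and chat template).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bitext/OpenELM-450M_Retail"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, dtype="auto")

messages = [{"role": "user", "content": "Do you have this jacket in size M?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=128)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```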
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use bitext/OpenELM-450M_Retail with vLLM:
Install from pip and serve the model:
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "bitext/OpenELM-450M_Retail"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bitext/OpenELM-450M_Retail",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
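Because the server exposes an OpenAI-compatible API, it can also be called from Python with the openai client. A minimal sketch, assuming the server from the previous step is running on localhost:8000:

```python
# Call the local vLLM server through its OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # no real key needed locally

response = client.chat.completions.create(
    model="bitext/OpenELM-450M_Retail",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```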
Use Docker

```sh
docker model run hf.co/bitext/OpenELM-450M_Retail
```
- SGLang
How to use bitext/OpenELM-450M_Retail with SGLang:
Install from pip and serve the model:
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "bitext/OpenELM-450M_Retail" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bitext/OpenELM-450M_Retail",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
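The same endpoint can be called from Python as well. A minimal sketch with the requests library, assuming the server above is listening on localhost:30000:

```python
# POST to the SGLang server's OpenAI-compatible chat completions endpoint.
import requests

payload = {
    "model": "bitext/OpenELM-450M_Retail",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}
response = requests.post("http://localhost:30000/v1/chat/completions", json=payload)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```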
Use Docker images

```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "bitext/OpenELM-450M_Retail" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bitext/OpenELM-450M_Retail",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
- Docker Model Runner
How to use bitext/OpenELM-450M_Retail with Docker Model Runner:
```sh
docker model run hf.co/bitext/OpenELM-450M_Retail
```
Custom handler for Hugging Face Inference Endpoints:

```python
from typing import Any, Dict, List

import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

# bfloat16 on Ampere-class GPUs (compute capability 8.x), float16 otherwise.
dtype = torch.bfloat16 if torch.cuda.get_device_capability()[0] == 8 else torch.float16


class EndpointHandler:
    def __init__(self, path=""):
        tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
        model = AutoModelForCausalLM.from_pretrained(
            path,
            return_dict=True,
            device_map="auto",
            torch_dtype=dtype,
            trust_remote_code=True,
        )

        # With do_sample left at its default (False), temperature=0 amounts to greedy decoding.
        generation_config = model.generation_config
        generation_config.max_new_tokens = 2000
        generation_config.temperature = 0
        generation_config.num_return_sequences = 1
        generation_config.pad_token_id = tokenizer.eos_token_id
        generation_config.eos_token_id = tokenizer.eos_token_id
        self.generation_config = generation_config

        self.pipeline = transformers.pipeline(
            "text-generation", model=model, tokenizer=tokenizer
        )

    def __call__(self, data: Dict[str, Any]) -> Dict[str, Any]:
        # Inference Endpoints send a JSON payload with the prompt under "inputs".
        prompt = data.pop("inputs", data)
        result = self.pipeline(prompt, generation_config=self.generation_config)
        return result
```
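For a quick local check outside of a deployed endpoint, the handler can be instantiated and called directly. A minimal sketch, using the model's Hub id as the path and a made-up retail prompt:

```python
# Hypothetical local smoke test of the handler defined above.
handler = EndpointHandler(path="bitext/OpenELM-450M_Retail")

output = handler({"inputs": "I want to check the status of my order."})
print(output[0]["generated_text"])
```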