How to use YaYaB/sd-onepiece-diffusers4 with Diffusers:
```bash
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# Load the fine-tuned pipeline; switch "cuda" to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "YaYaB/sd-onepiece-diffusers4", torch_dtype=torch.float16
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
This diffusion model was trained with the 🤗 Diffusers library
on the YaYaB/onepiece-blip-captions dataset.
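A minimal sketch of running the pipeline end to end, with a fixed seed for reproducibility (the prompt, seed, step count, guidance scale, and output filename below are illustrative choices, not values from this card):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "YaYaB/sd-onepiece-diffusers4", torch_dtype=torch.float16
).to("cuda")

# Fix the RNG seed so the same prompt reproduces the same image
generator = torch.Generator(device="cuda").manual_seed(42)

image = pipe(
    "a man with a straw hat standing on the deck of a ship",
    num_inference_steps=50,  # denoising steps; more steps = slower, finer detail
    guidance_scale=7.5,      # classifier-free guidance strength
    generator=generator,
).images[0]
image.save("onepiece_sample.png")
```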
[TODO: provide examples of latent issues and potential remediations]
The model was fine-tuned on YaYaB/onepiece-blip-captions, a dataset of One Piece artwork paired with captions generated by the BLIP captioning model.
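The dataset can be inspected with the 🤗 Datasets library; the column names below ("image", "text") follow the usual layout of BLIP-caption datasets and are an assumption, not something confirmed by this card:

```python
from datasets import load_dataset

ds = load_dataset("YaYaB/onepiece-blip-captions", split="train")
print(ds)             # features and number of rows
print(ds[0]["text"])  # caption of the first example (assumed column name)
ds[0]["image"].save("example.png")  # PIL image (assumed column name)
```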
The following hyperparameters were used during training:
📈 TensorBoard logs
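For orientation, fine-tunes like this one are typically produced with the 🤗 Diffusers text-to-image example script (examples/text_to_image/train_text_to_image.py). The invocation below is a sketch: the base model, paths, and every numeric value in it are illustrative assumptions, not this card's actual settings:

```bash
# Illustrative fine-tuning run; substitute the real base model and values
accelerate launch train_text_to_image.py \
  --pretrained_model_name_or_path="CompVis/stable-diffusion-v1-4" \
  --dataset_name="YaYaB/onepiece-blip-captions" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --learning_rate=1e-05 \
  --max_train_steps=15000 \
  --mixed_precision="fp16" \
  --output_dir="sd-onepiece-diffusers4"
```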