Instructions to use pruna-test/test-load-tiny-stable-diffusion-pipe-smashed-pro with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Diffusers
How to use pruna-test/test-load-tiny-stable-diffusion-pipe-smashed-pro with Diffusers:
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained("pruna-test/test-load-tiny-stable-diffusion-pipe-smashed-pro", dtype=torch.bfloat16, device_map="cuda")
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
- Pruna AI
How to use pruna-test/test-load-tiny-stable-diffusion-pipe-smashed-pro with Pruna AI:
pip install -U diffusers transformers accelerate

from pruna import PrunaModel
import torch

# switch to "mps" for Apple devices
pipe = PrunaModel.from_pretrained("pruna-test/test-load-tiny-stable-diffusion-pipe-smashed-pro", dtype=torch.bfloat16, device_map="cuda")
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Model Card for PrunaAI/test-load-tiny-stable-diffusion-pipe-smashed-pro
This model was created using the pruna library. Pruna is a model optimization framework built for developers, enabling you to deliver more efficient models with minimal implementation overhead.
Usage
First things first, you need to install the pruna library:
pip install pruna
You can use the diffusers library to load the model, but this might not include all optimizations by default.
To ensure that all optimizations are applied, use the pruna library to load the model using the following code:
from pruna import PrunaModel
loaded_model = PrunaModel.from_hub(
"PrunaAI/test-load-tiny-stable-diffusion-pipe-smashed-pro"
)
After loading the model, you can use the inference methods of the original model. Take a look at the documentation for more usage information.
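For example, since the underlying model is a diffusers pipeline, the loaded model can be called like the original pipeline. A minimal sketch, assuming the wrapper forwards calls to the pipeline (the prompt is reused from the snippets above, and the output filename is arbitrary):

from pruna import PrunaModel

loaded_model = PrunaModel.from_hub(
    "PrunaAI/test-load-tiny-stable-diffusion-pipe-smashed-pro"
)

# Run inference through the original pipeline's interface.
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = loaded_model(prompt).images[0]
image.save("astronaut.png")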
Smash Configuration
The compression configuration of the model is stored in the smash_config.json file, which describes the optimization methods that were applied to the model.
{
"batcher": null,
"cacher": null,
"compiler": null,
"distiller": null,
"enhancer": null,
"factorizer": null,
"pruner": null,
"quantizer": null,
"recoverer": null,
"batch_size": 1,
"device": "cpu",
"save_fns": [],
"load_fns": [
"diffusers"
],
"reapply_after_load": {
"factorizer": null,
"pruner": null,
"quantizer": null,
"distiller": null,
"cacher": null,
"recoverer": null,
"compiler": null,
"batcher": null,
"enhancer": null
}
}
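To inspect this configuration programmatically, you can download smash_config.json from the Hub and parse it. A minimal sketch using huggingface_hub and the standard library (the filename comes from this card; the repo id is the one used above):

import json
from huggingface_hub import hf_hub_download

# Fetch smash_config.json from the model repository on the Hub.
config_path = hf_hub_download(
    repo_id="PrunaAI/test-load-tiny-stable-diffusion-pipe-smashed-pro",
    filename="smash_config.json",
)

with open(config_path) as f:
    smash_config = json.load(f)

# Each slot names a compression method; null (None) means it was not applied.
for key in ("batcher", "cacher", "compiler", "distiller", "enhancer",
            "factorizer", "pruner", "quantizer", "recoverer"):
    print(f"{key}: {smash_config[key]}")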
Join the Pruna AI community!