hinairo committed
Commit b904e25 · verified · 1 Parent(s): 9ce2230

Revert README.md to pre-March-3 version (undo broken template changes)

Files changed (1)
  1. README.md +60 -409
README.md CHANGED
@@ -1,74 +1,45 @@
  ---
  base_model:
  - black-forest-labs/FLUX.1-dev
  pipeline_tag: text-to-image
- language:
- - en
  ---

- # Elastic model: FLUX.1-dev
-
- ## Overview
-
- ----
-
- ElasticModels are the models produced by TheStage AI ANNA: Automated Neural Networks Accelerator. ANNA lets you control model size, latency and quality with a simple slider movement, routing different compression algorithms to different layers. For each model, we have produced a series of optimized models:
-
- - **XL**: Mathematically equivalent neural network, optimized with our DNN compiler.
- - **L**: Near-lossless model, with less than 1% degradation on corresponding benchmarks.
- - **M**: Faster model, with accuracy degradation less than 1.5%.
- - **S**: The fastest model, with accuracy degradation less than 2%.
-
- Models can be accessed via the TheStage AI Python SDK (ElasticModels) or deployed as Docker containers with REST API endpoints (see the Deploy section).
-
- ## Installation
-
- ---
-
- ### System Requirements
-
- | **Property** | **Value** |
- | --- | --- |
- | **GPU** | L40s, RTX 5090, H100, B200 |
- | **Python Version** | 3.10-3.12 |
- | **CPU** | Intel/AMD x86_64 |
- | **CUDA Version** | 12.8+ |
-
- ### TheStage AI Access token setup
-
- Install the TheStage AI CLI and set up your API token:
-
- ```bash
- pip install thestage
- thestage config set --api-token <YOUR_ACCESS_TOKEN>
- ```
-
- ### ElasticModels installation
-
- Install the TheStage Elastic Models package:
-
- ```bash
- pip install 'thestage-elastic-models[nvidia]' \
-     --extra-index-url https://thestage.jfrog.io/artifactory/api/pypi/pypi-thestage-ai-production/simple
- ```
-
- If you want to run on the Nvidia Blackwell architecture, install the package as follows:
-
- ```bash
- pip install 'thestage-elastic-models[blackwell]' \
-     --extra-index-url https://thestage.jfrog.io/artifactory/api/pypi/pypi-thestage-ai-production/simple
- pip install -U --pre torch \
-     --index-url https://download.pytorch.org/whl/nightly/cu128
- pip install -U --pre torchvision \
-     --index-url https://download.pytorch.org/whl/nightly/cu128
- ```
-
- ## Usage example
-
- ----
-
- Elastic Models provides the same interface as HuggingFace Diffusers. Here is an example of how to use the FLUX.1-dev model:

  ```python
  import torch
@@ -82,8 +53,6 @@ pipeline = FluxPipeline.from_pretrained(
  mode_name,
  torch_dtype=torch.bfloat16,
  token=hf_token,
- # 'original' for original model
- # 'S', 'M', 'L', 'XL' for accelerated models
  mode='S'
  )
  pipeline.to(device)
@@ -95,387 +64,69 @@ for prompt, output_image in zip(prompts, output.images):
  output_image.save((prompt.replace(' ', '_') + '.png'))
  ```

- ## Quality Benchmarks
-
- ------------
-
- We have used the PartiPrompts and DrawBench datasets to evaluate the quality of images generated by the different sizes of the FLUX.1-dev model (S, M, L, XL) compared to the original model. The evaluation metrics include ARNIQA, CLIP IQA, PSNR, SSIM, and VQA Faithfulness.
-
- ![Quality Benchmarking](https://cdn.thestage.ai/production/cms_file_upload/1767818654-9b8b5892-54bb-4987-bcf9-db2b72af2bec/flux-dev-quality.png)
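-
- A minimal sketch of such an evaluation loop is shown below. The dataset id `nateraw/parti-prompts` and its `Prompt` column are illustrative assumptions, not the exact harness behind these numbers; the quality metrics are computed afterwards over the saved images.
-
- ```python
- import torch
- from datasets import load_dataset
- from elastic_models.diffusers import FluxPipeline
-
- # Assumed HF dataset id/column for PartiPrompts (not the official harness).
- prompts = load_dataset("nateraw/parti-prompts", split="train")["Prompt"]
-
- hf_token = ''
- pipeline = FluxPipeline.from_pretrained(
-     'black-forest-labs/FLUX.1-dev',
-     torch_dtype=torch.bfloat16,
-     token=hf_token,
-     mode='S'  # accelerated model under evaluation
- )
- pipeline.to(torch.device("cuda"))
-
- # One image per prompt; metrics such as ARNIQA and CLIP IQA are then
- # computed over these outputs.
- for prompt in prompts[:8]:
-     image = pipeline(prompt=prompt, height=1024, width=1024).images[0]
-     image.save(prompt[:40].replace(' ', '_') + '.png')
- ```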
-
- ### Quality Benchmark Results
-
- | **Metric/Model Size** | **S** | **M** | **L** | **XL** | **Original** |
- | --- | --- | --- | --- | --- | --- |
- | **ARNIQA (PartiPrompts)** | 64.1 | 63.2 | 61.9 | 66.8 | 66.9 |
- | **ARNIQA (DrawBench)** | 64.3 | 63.5 | 63.6 | 68.2 | 68.5 |
- | **CLIP IQA (PartiPrompts)** | 85.5 | 86.4 | 83.8 | 88.3 | 87.9 |
- | **CLIP IQA (DrawBench)** | 86.4 | 86.5 | 84.5 | 89.5 | 90.0 |
- | **VQA Faithfulness (PartiPrompts)** | 87.5 | 85.5 | 85.5 | 85.5 | 88.6 |
- | **VQA Faithfulness (DrawBench)** | 69.3 | 64.7 | 64.8 | 67.8 | 65.2 |
- | **PSNR (PartiPrompts)** | 30.22 | 30.24 | 30.38 | N/A | N/A |
- | **SSIM (PartiPrompts)** | 0.72 | 0.72 | 0.76 | 1.0 | 1.0 |
-
-
- ## Datasets
-
- -------
-
- - **PartiPrompts**: A benchmark dataset created by Google Research, containing 1,632 diverse and challenging prompts that test various aspects of text-to-image generation models. It includes categories such as abstract concepts, complex compositions, properties and attributes, counting and numbers, text rendering, artistic styles, and fine-grained details.
-
- - **DrawBench**: A comprehensive benchmark dataset developed by Google Research, containing 200 carefully curated prompts designed to test specific capabilities and challenge areas of diffusion models. It includes categories such as colors, counting, conflicting requirements, DALL-E inspired prompts, detailed descriptions, misspellings, positional relationships, rare words, Reddit user prompts, and text generation.
-
- ## Metrics
-
- ----------
-
- - **ARNIQA**: No-reference image quality assessment metric that predicts perceptual quality without reference images.
- - **CLIP IQA**: No-reference image quality metric that uses contrastive learning to assess image quality without references.
- - **VQA Faithfulness**: Metric measuring how accurately generated images represent the text prompts.
- - **PSNR**: Peak Signal-to-Noise Ratio, measuring similarity between images generated by an accelerated model and the original model. A worked sketch follows this list.
- - **SSIM**: Structural Similarity Index, measuring perceptual similarity between images generated by an accelerated model and the original model.
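-
- As a minimal sketch (assuming paired outputs generated from the same prompts and seeds, as float tensors in [0, 1]), PSNR and SSIM can be computed with `torchmetrics`:
-
- ```python
- import torch
- from torchmetrics.image import PeakSignalNoiseRatio, StructuralSimilarityIndexMeasure
-
- # Hypothetical stand-ins for images produced by the accelerated (mode='S')
- # and original pipelines, shaped (N, C, H, W).
- accelerated_images = torch.rand(1, 3, 1024, 1024)
- original_images = torch.rand(1, 3, 1024, 1024)
-
- psnr = PeakSignalNoiseRatio(data_range=1.0)
- ssim = StructuralSimilarityIndexMeasure(data_range=1.0)
-
- print(f"PSNR: {psnr(accelerated_images, original_images):.2f} dB")
- print(f"SSIM: {ssim(accelerated_images, original_images):.4f}")
- ```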
-
- ## Latency Benchmarks
-
- -----
-
- We have measured the latency of the different sizes of the FLUX.1-dev model (S, M, L, XL, original) on various GPUs. The measurements were taken for generating images of size 1024x1024 pixels.
-
- ![Latency Benchmarking](https://cdn.thestage.ai/production/cms_file_upload/1767818701-7eef4234-acf0-4f2f-8577-ac6c2527cbaa/flux-dev-performance.png)
-
- ### Latency Benchmark Results
-
- Latency (in seconds) for generating a 1024x1024 image using different model sizes on various hardware setups.
-
- | **GPU/Model Size** | **S** | **M** | **L** | **XL** | **Original** |
- | --- | --- | --- | --- | --- | --- |
- | **H100** | 2.88 | 3.06 | 3.25 | 4.18 | 6.46 |
- | **L40s** | 9.22 | 10.07 | 10.67 | 14.39 | 16 |
- | **B200** | 1.89 | 2.04 | 2.12 | 2.23 | 4.4 |
- | **GeForce RTX 5090** | 5.53 | N/A | N/A | N/A | N/A |
-
- ## Benchmarking Methodology
-
- ----
-
- The benchmarking was performed on a single GPU with a batch size of 1. Each model was run for 10 iterations, and the average latency was calculated.
-
- > **Algorithm summary:**
- > 1. Load the FLUX.1-dev model with the specified size (S, M, L, XL, original).
- > 2. Move the model to the GPU.
- > 3. Prepare a sample prompt for image generation.
- > 4. Run the model for a number of iterations (e.g., 10) and measure the time taken for each iteration. On each iteration:
- >    - Synchronize the GPU to flush any previous operations.
- >    - Record the start time.
- >    - Generate the image using the model.
- >    - Synchronize the GPU again.
- >    - Record the end time and calculate the latency for that iteration.
- > 5. Calculate the average latency over all iterations.
-
- ## Reproduce benchmarking
-
- ----
-
- ```python
- import time
-
- import torch
- from elastic_models.diffusers import FluxPipeline
-
- mode_name = 'black-forest-labs/FLUX.1-dev'
- hf_token = ''
- device = torch.device("cuda")
-
- pipeline = FluxPipeline.from_pretrained(
-     mode_name,
-     torch_dtype=torch.bfloat16,
-     token=hf_token,
-     # 'original' for original model
-     # 'S', 'M', 'L', 'XL' for accelerated models
-     mode='S'
- )
- pipeline.to(device)
-
- prompt = ["Kitten eating a banana"]
- generate_kwargs = {
-     "height": 1024,
-     "width": 1024,
-     "num_inference_steps": 4,
-     "guidance_scale": 0.0
- }
-
- def evaluate_pipeline():
-     torch.cuda.synchronize()
-     start_time = time.time()
-     output = pipeline(
-         prompt=prompt,
-         **generate_kwargs
-     )
-     torch.cuda.synchronize()
-     end_time = time.time()
-
-     return end_time - start_time
-
- # Warm-up
- for _ in range(5):
-     evaluate_pipeline()
-
- # Benchmarking
- num_runs = 10
- total_time = 0.0
-
- for _ in range(num_runs):
-     latency = evaluate_pipeline()
-     total_time += latency
-
- average_latency = total_time / num_runs
- print(f"Average Latency over {num_runs} runs: {average_latency} seconds")
- ```
-
- ## Serving with Docker Image
-
- ------------
-
- For serving with Nvidia GPUs, we provide ready-to-go Docker containers with OpenAI-compatible API endpoints.
- Using our containers, you can set up an inference endpoint on any cloud/serverless provider, as well as on on-premise servers.
- You can also use this container to run inference through the TheStage AI platform.
-
- ### Prebuilt image from ECR
-
- | **GPU** | **Docker image name** |
- | --- | --- |
- | H100, L40s | `public.ecr.aws/i3f7g5s7/thestage/elastic-models:0.1.2-diffusers-nvidia-24.09b` |
- | B200, RTX 5090 | `public.ecr.aws/i3f7g5s7/thestage/elastic-models:0.1.2-diffusers-blackwell-24.09b` |
-
- Pull the Docker image for your Nvidia GPU and start the inference container:
-
- ```bash
- docker pull <IMAGE_NAME>
- ```
- ```bash
- docker run --rm -ti \
-     --name serving_thestage_model \
-     -p 8000:80 \
-     -e AUTH_TOKEN=<AUTH_TOKEN> \
-     -e MODEL_REPO=black-forest-labs/FLUX.1-dev \
-     -e MODEL_SIZE=<MODEL_SIZE> \
-     -e MODEL_BATCH=<MAX_BATCH_SIZE> \
-     -e HUGGINGFACE_ACCESS_TOKEN=<HUGGINGFACE_ACCESS_TOKEN> \
-     -e THESTAGE_AUTH_TOKEN=<THESTAGE_ACCESS_TOKEN> \
-     -v /mnt/hf_cache:/root/.cache/huggingface \
-     <IMAGE_NAME>
- ```
-
- | **Parameter** | **Description** |
- | --- | --- |
- | `<MODEL_SIZE>` | Available: S, M, L, XL. |
- | `<MAX_BATCH_SIZE>` | Maximum batch size to process in parallel. |
- | `<HUGGINGFACE_ACCESS_TOKEN>` | Hugging Face access token. |
- | `<THESTAGE_ACCESS_TOKEN>` | TheStage token generated on the platform (Profile -> Access tokens). |
- | `<AUTH_TOKEN>` | Token for endpoint authentication. You can set it to any random string; it must match the value used by the client. |
- | `<IMAGE_NAME>` | The image name you pulled. |
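-
- For example, to serve the S model with batch size 1 on an H100 (the values here are illustrative; token placeholders are left as-is):
-
- ```bash
- docker run --rm -ti \
-     --name serving_thestage_model \
-     -p 8000:80 \
-     -e AUTH_TOKEN=my-secret-token \
-     -e MODEL_REPO=black-forest-labs/FLUX.1-dev \
-     -e MODEL_SIZE=S \
-     -e MODEL_BATCH=1 \
-     -e HUGGINGFACE_ACCESS_TOKEN=<HUGGINGFACE_ACCESS_TOKEN> \
-     -e THESTAGE_AUTH_TOKEN=<THESTAGE_ACCESS_TOKEN> \
-     -v /mnt/hf_cache:/root/.cache/huggingface \
-     public.ecr.aws/i3f7g5s7/thestage/elastic-models:0.1.2-diffusers-nvidia-24.09b
- ```
-
- The endpoint then expects the header `X-Model-Name: flux-1-dev-S-bs1` (see Invocation below).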
-
- ## Invocation
-
- ------
-
- You can invoke the endpoint using cURL as follows:
-
- ```bash
- curl -X POST http://127.0.0.1:8000/v1/images/generations \
-   -H "Authorization: Bearer <AUTH_TOKEN>" \
-   -H "Content-Type: application/json" \
-   -H "X-Model-Name: flux-1-dev-<MODEL_SIZE>-bs<MAX_BATCH_SIZE>" \
-   -d '{
-     "prompt": "Cat eating banana",
-     "seed": 12,
-     "aspect_ratio": "1:1",
-     "guidance_scale": 6.5,
-     "num_inference_steps": 4
-   }' \
-   --output cat.webp -D -
- ```
-
- Or using Python requests:
-
- ```python
- import requests
- import json
-
- url = "http://127.0.0.1:8000/v1/images/generations"
- payload = json.dumps({
-     "prompt": "sunset",
-     "seed": 12,
-     "aspect_ratio": "1:1",
-     "guidance_scale": 6.5,
-     "num_inference_steps": 4
- })
- headers = {
-     'Authorization': 'Bearer <AUTH_TOKEN>',
-     'Content-Type': 'application/json',
-     'X-Model-Name': 'flux-1-dev-<MODEL_SIZE>-bs<MAX_BATCH_SIZE>'
- }
- response = requests.request("POST", url, headers=headers, data=payload)
- with open("sunset.webp", "wb") as f:
-     f.write(response.content)
- ```
-
- Or using the OpenAI Python client:
-
- ```python
- from openai import OpenAI
-
- BASE_URL = "http://<your_ip>/v1"
- API_KEY = ""
- MODEL = "flux-1-dev-<MODEL_SIZE>-bs<MAX_BATCH_SIZE>"
-
- client = OpenAI(
-     api_key=API_KEY,
-     base_url=BASE_URL,
-     default_headers={"X-Model-Name": MODEL}
- )
-
- response = client.with_raw_response.images.generate(
-     model=MODEL,
-     prompt="Cat eating banana",
-     n=1,
-     extra_body={
-         "seed": 111,
-         "aspect_ratio": "1:1",
-         "guidance_scale": 3.5,
-         "num_inference_steps": 4
-     },
- )
-
- with open("thestage_image.webp", "wb") as f:
-     f.write(response.content)
  ```
-
- ## Endpoint Parameters
-
- -------------
-
- ### Method
-
- > **POST** `/v1/images/generations`
-
- ### Header Parameters
-
- > `Authorization`: `string`
- >
- > Bearer token for authentication. Should match the `AUTH_TOKEN` set during container startup.
-
- > `Content-Type`: `string`
- >
- > Must be set to `application/json`.
-
- > `X-Model-Name`: `string`
- >
- > Specifies the model to use for generation. Format: `flux-1-dev-<size>-bs<batch_size>`, where `<size>` is one of `S`, `M`, `L`, `XL`, `original` and `<batch_size>` is the maximum batch size configured during container startup.
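- >
- > For example, a container started with `MODEL_SIZE=XL` and `MODEL_BATCH=4` would be addressed as `X-Model-Name: flux-1-dev-XL-bs4`.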
-
- ### Input Body
-
- > `prompt`: `string`
- >
- > The text prompt to generate an image for.
-
- > `seed`: `int32`
- >
- > Random seed for generation.
-
- > `num_inference_steps`: `int32`
- >
- > Number of diffusion steps to use for generation. Higher values yield better quality but take longer. Default is 28.
-
- > `aspect_ratio`: `string`
- >
- > Aspect ratio of the generated image. Supported values (with the resulting width and height in pixels):
- > ```
- > "1:1": (1024, 1024),
- > "16:9": (1280, 736),
- > "21:9": (1280, 544),
- > "3:2": (1248, 832),
- > "2:3": (832, 1248),
- > "4:3": (1184, 896),
- > "3:4": (896, 1184),
- > "5:4": (1152, 928),
- > "4:5": (928, 1152),
- > "9:16": (736, 1280),
- > "9:21": (544, 1280)
- > ```
-
- > `guidance_scale`: `float32`
- >
- > Guidance scale for classifier-free guidance. Higher values increase adherence to the prompt.
- ## Deploy on Modal
-
- -----------------------
-
- For more details, please see the [Modal deployment](https://docs.thestage.ai/tutorials/source/modal_thestage.html) tutorial.
-
- ### Clone modal serving code

  ```shell
- git clone https://github.com/TheStageAI/ElasticModels.git
- cd ElasticModels/examples/modal
  ```

- ### Configuration of environment variables
-
- Set your environment variables in `modal_serving.py`:
-
- ```python
- # modal_serving.py
-
- ENVS = {
-     "MODEL_REPO": "black-forest-labs/FLUX.1-dev",
-     "MODEL_BATCH": "4",
-     "THESTAGE_AUTH_TOKEN": "",
-     "HUGGINGFACE_ACCESS_TOKEN": "",
-     "PORT": "80",
-     "PORT_HEALTH": "80",
-     "HF_HOME": "/cache/huggingface",
- }
- ```
 
- ### Configuration of GPUs
-
- Set your desired GPU type and autoscaling variables in `modal_serving.py`:
-
- ```python
- # modal_serving.py
-
- @app.function(
-     image=image,
-     gpu="B200",
-     min_containers=8,
-     max_containers=8,
-     timeout=10000,
-     ephemeral_disk=600 * 1024,
-     volumes={"/opt/project/.cache": HF_CACHE},
-     startup_timeout=60 * 20
- )
- @modal.web_server(
-     80,
-     label="black-forest-labs/FLUX.1-dev-test",
-     startup_timeout=60 * 20
- )
- def serve():
-     pass
- ```
-
- ### Run serving
-
- ```shell
- modal serve modal_serving.py
- ```

  ## Links

  * __Platform__: [app.thestage.ai](https://app.thestage.ai)
  * __Subscribe for updates__: [TheStageAI X](https://x.com/TheStageAI)
- * __Contact email__: contact@thestage.ai
 
  ---
+ license: apache-2.0
  base_model:
  - black-forest-labs/FLUX.1-dev
+ base_model_relation: quantized
  pipeline_tag: text-to-image
  ---

+ # Elastic model: Fastest self-serving models. FLUX.1-dev.
+
+ Elastic models are the models produced by TheStage AI ANNA: Automated Neural Networks Accelerator. ANNA allows you to control model size, latency and quality with a simple slider movement. For each model, ANNA produces a series of optimized models:
+
+ * __XL__: Mathematically equivalent neural network, optimized with our DNN compiler.
+ * __L__: Near-lossless model, with less than 1% degradation on corresponding benchmarks.
+ * __M__: Faster model, with accuracy degradation less than 1.5%.
+ * __S__: The fastest model, with accuracy degradation less than 2%.
+
+ __Goals of Elastic Models:__
+
+ * Provide the fastest models and service for self-hosting.
+ * Provide flexibility in cost-vs-quality selection for inference.
+ * Provide clear quality and latency benchmarks.
+ * Provide the interface of the HF libraries transformers and diffusers, swapped in with a single line of code.
+ * Provide models supported on a wide range of hardware, which are pre-compiled and require no JIT.
+
+ > It's important to note that the specific quality degradation can vary from model to model. For instance, an S model can also have as little as 0.5% degradation.
 
+ -----
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/67991798ae62bd1f17cc22ed/2FXY0tqSGqZq76j5Tz4Vi.png)
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6799fc8e150f5a4014b030ca/CuuzzA_csoRnzbaZq1U1x.png)
 
+ ## Inference
+
+ Currently, our demo model only supports 1024x1024, 768x768 and 512x512 outputs without batching (for B200, only 1024x1024). This will be updated in the near future.
+ To infer our models, you just need to replace the `diffusers` import with `elastic_models.diffusers`:

  ```python
  import torch

  mode_name,
  torch_dtype=torch.bfloat16,
  token=hf_token,
  mode='S'
  )
  pipeline.to(device)

  output_image.save((prompt.replace(' ', '_') + '.png'))
  ```

+ ### Installation
+
+ __System requirements:__
+ * GPUs: H100, L40s, B200
+ * CPU: AMD, Intel
+ * Python: 3.10-3.12
+
+ To work with our models, just run these lines in your terminal:
+
+ ```shell
+ pip install thestage
+ pip install 'thestage-elastic-models[nvidia]' --extra-index-url https://thestage.jfrog.io/artifactory/api/pypi/pypi-thestage-ai-production/simple
+
+ # or for Blackwell support
+ pip install 'thestage-elastic-models[blackwell]' --extra-index-url https://thestage.jfrog.io/artifactory/api/pypi/pypi-thestage-ai-production/simple
+ pip install -U --pre torch --index-url https://download.pytorch.org/whl/nightly/cu128
+ pip install -U --pre torchvision --index-url https://download.pytorch.org/whl/nightly/cu128
+
+ pip install flash_attn==2.7.3 --no-build-isolation
+ pip uninstall apex
  ```

+ Then go to [app.thestage.ai](https://app.thestage.ai), log in, and generate an API token from your profile page. Set up the API token as follows:

  ```shell
+ thestage config set --api-token <YOUR_API_TOKEN>
  ```
 
+ Congrats, now you can use accelerated models!
+
+ ----
+
+ ## Benchmarks
+
+ Benchmarking is one of the most important procedures during model acceleration. We aim to provide clear performance metrics for models using our algorithms.
+
+ ### Quality benchmarks
+
+ For quality evaluation, we used PSNR, SSIM, and CLIP score. PSNR and SSIM were computed against the outputs of the original model.
+
+ | Metric/Model | S | M | L | XL | Original |
+ |--------------|---|---|---|----|----------|
+ | PSNR | 30.22 | 30.24 | 30.38 | inf | inf |
+ | SSIM | 0.72 | 0.72 | 0.76 | 1.0 | 1.0 |
+ | CLIP | 12.49 | 12.51 | 12.69 | 12.41 | 12.41 |
+
+ ### Latency benchmarks
+
+ Time in seconds to generate one 1024x1024 image:
+
+ | GPU/Model | S | M | L | XL | Original |
+ |-----------|---|---|---|----|----------|
+ | H100 | 2.71 | 3.0 | 3.18 | 4.17 | 6.46 |
+ | L40s | 8.5 | 9.29 | 9.29 | 13.2 | 16 |
+ | B200 | 1.89 | 2.04 | 2.12 | 2.23 | 4.4 |
+ | GeForce RTX 5090 | 5.53 | - | - | - | - |

  ## Links

  * __Platform__: [app.thestage.ai](https://app.thestage.ai)
+ <!-- * __Elastic models Github__: [app.thestage.ai](app.thestage.ai) -->
  * __Subscribe for updates__: [TheStageAI X](https://x.com/TheStageAI)
+ * __Contact email__: contact@thestage.ai