first commit

parent 77e147cbef
commit 9d904e7d16

README.md | 106
@@ -1,3 +1,105 @@
# flux.1-lite-8B
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/LICENSE.md
base_model:
- black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
library_name: diffusers
tags:
- flux
- text-to-image
---

![Flux.1 Lite](sample_images/flux1-lite-8B_sample.png)

# Flux.1 Lite

We are thrilled to announce the release of Flux.1 Lite, an 8B parameter transformer model distilled from the FLUX.1-dev model. This version uses 7 GB less RAM and runs 23% faster while maintaining the same precision (bfloat16) as the original model.
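
The size figures are easy to sanity-check. The snippet below is a minimal sketch (not part of the original card): it loads only the transformer from this repo, assuming the standard diffusers layout with a `transformer/` subfolder as declared in `model_index.json`, and prints the parameter count and an approximate bfloat16 footprint.

```python
import torch
from diffusers import FluxTransformer2DModel

# Assumes the repo follows the standard diffusers layout with a
# `transformer/` subfolder, as listed in model_index.json.
transformer = FluxTransformer2DModel.from_pretrained(
    "Freepik/flux.1-lite-8B", subfolder="transformer", torch_dtype=torch.bfloat16
)

n_params = sum(p.numel() for p in transformer.parameters())
# Two bytes per parameter in bfloat16.
print(f"{n_params / 1e9:.2f}B parameters, ~{n_params * 2 / 1024**3:.1f} GB in bfloat16")
```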

`🔥 UPDATE 🔥`: We have released a new version of Flux.1 Lite 8B. This version is trained with a new dataset and achieves better results than the previous alpha version. The main changes include:

- Distillation for a broader range of guidance values (2.0-5.0)
- Distillation for a broader range of step counts (20-32)
- A more diverse dataset with longer prompts

![Flux.1 Lite vs FLUX.1-dev](sample_images/models_comparison.png)

## Text-to-Image

Flux.1 Lite is ready to unleash your creativity! For the best results, we strongly **recommend using a `guidance_scale` between 2.0 and 5.0 and setting `n_steps` between 20 and 32**.

```python
import torch
from diffusers import FluxPipeline

torch_dtype = torch.bfloat16
device = "cuda"

# Load the pipe
model_id = "Freepik/flux.1-lite-8B"
pipe = FluxPipeline.from_pretrained(
    model_id, torch_dtype=torch_dtype
).to(device)

# Inference
prompt = "A close-up image of a green alien with fluorescent skin in the middle of a dark purple forest"

guidance_scale = 3.5
n_steps = 28
seed = 11

with torch.inference_mode():
    image = pipe(
        prompt=prompt,
        generator=torch.Generator(device="cpu").manual_seed(seed),
        num_inference_steps=n_steps,
        guidance_scale=guidance_scale,
        height=1024,
        width=1024,
    ).images[0]
image.save("output.png")
```

## Motivation

Inspired by [Ostris's](https://ostris.com/2024/09/07/skipping-flux-1-dev-blocks/) findings, we analyzed the mean squared error (MSE) between the input and output of each block to quantify their contribution to the final result, revealing significant variability.

![Flux.1 Lite generated image](sample_images/generated_img.png)
![MSE MMDIT](sample_images/mse_mmdit_img.png)
![MSE DIT](sample_images/mse_dit_img.png)
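
A per-block analysis of this kind can be approximated with forward hooks. The sketch below is illustrative rather than the exact measurement code used for the plots: it assumes diffusers' `FluxTransformer2DModel` exposes the MMDiT blocks as `transformer_blocks` and the DiT blocks as `single_transformer_blocks` (true in recent diffusers releases), that the blocks receive the image token stream as a `hidden_states` keyword argument, and that PyTorch ≥ 2.0 is available for `register_forward_hook(..., with_kwargs=True)`.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

mse_per_block = {}  # block name -> list of per-call MSE values

def make_hook(name):
    def hook(module, args, kwargs, output):
        # diffusers calls the blocks with keyword arguments, so the image
        # token stream arrives as `hidden_states`.
        hidden_in = kwargs.get("hidden_states", args[0] if args else None)
        # MMDiT blocks return (encoder_hidden_states, hidden_states);
        # single DiT blocks return hidden_states directly.
        hidden_out = output[1] if isinstance(output, tuple) else output
        mse = torch.mean((hidden_out.float() - hidden_in.float()) ** 2).item()
        mse_per_block.setdefault(name, []).append(mse)
    return hook

handles = [
    block.register_forward_hook(make_hook(f"mmdit_{i}"), with_kwargs=True)
    for i, block in enumerate(pipe.transformer.transformer_blocks)
] + [
    block.register_forward_hook(make_hook(f"dit_{i}"), with_kwargs=True)
    for i, block in enumerate(pipe.transformer.single_transformer_blocks)
]

with torch.inference_mode():
    pipe("A green alien in a dark purple forest",
         num_inference_steps=28, guidance_scale=3.5, height=1024, width=1024)

for h in handles:
    h.remove()

for name, values in sorted(mse_per_block.items()):
    print(f"{name}: mean MSE {sum(values) / len(values):.5f}")
```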

As Ostris pointed out, not all blocks contribute equally. While skipping just one of the early MMDiT or late DiT blocks can significantly impact model performance, skipping any single block in between does not have a significant impact on the final image quality.

![Skip one MMDIT block](sample_images/skip_one_MMDIT_block.png)
![Skip one DIT block](sample_images/skip_one_DIT_block.png)
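
A block-skipping experiment like the one shown above can be sketched in a few lines. Again, this is an illustration rather than the original experiment code: it assumes the same `transformer_blocks` / `single_transformer_blocks` attributes and simply deletes one entry from the `nn.ModuleList` before generating, since the model's forward pass just iterates over the remaining blocks.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Remove a single MMDiT block (index chosen arbitrarily for illustration).
# The forward pass iterates over the ModuleList, so the remaining blocks
# are applied in order without the deleted one.
skip_index = 10
del pipe.transformer.transformer_blocks[skip_index]
# To skip a late DiT block instead:
# del pipe.transformer.single_transformer_blocks[skip_index]

image = pipe(
    "A close-up image of a green alien with fluorescent skin in the middle of a dark purple forest",
    generator=torch.Generator(device="cpu").manual_seed(11),
    num_inference_steps=28,
    guidance_scale=3.5,
    height=1024,
    width=1024,
).images[0]
image.save("skip_one_mmdit_block.png")
```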

## ComfyUI

We've also crafted a ComfyUI workflow to make using Flux.1 Lite even more seamless! Find it in `comfy/flux.1-lite_workflow.json`.

![ComfyUI workflow](comfy/flux.1-lite_workflow.png)

The safetensors checkpoint is available here: [flux.1-lite-8B.safetensors](flux.1-lite-8B.safetensors)

## Try it out at Freepik!

Our [AI generator](https://www.freepik.com/pikaso/ai-image-generator) is now powered by Flux.1 Lite!

## 🔥 News 🔥

* Dec 30, 2024. The newly trained Flux.1 Lite 8B model is publicly available on the [HuggingFace Repo](https://huggingface.co/Freepik/flux.1-lite-8B).
* Oct 23, 2024. The alpha 8B checkpoint is publicly available on the [HuggingFace Repo](https://huggingface.co/Freepik/flux.1-lite-8B-alpha).

## Citation

If you find our work helpful, please cite it!

```bibtex
@article{flux1-lite,
  title={Flux.1 Lite: Distilling Flux1.dev for Efficient Text-to-Image Generation},
  author={Daniel Verdú and Javier Martín},
  email={dverdu@freepik.com, javier.martin@freepik.com},
  year={2024},
}
```

flux.1-lite-8B
@@ -0,0 +1,655 @@
{
  "last_node_id": 44,
  "last_link_id": 66,
  "nodes": [
    {"id": 16, "type": "KSamplerSelect", "pos": [390, 330], "size": [210, 58], "flags": {}, "order": 0, "mode": 0,
     "inputs": [], "outputs": [{"name": "SAMPLER", "type": "SAMPLER", "links": [19], "shape": 3}],
     "properties": {"Node name for S&R": "KSamplerSelect"}, "widgets_values": ["euler"], "color": "#432", "bgcolor": "#653"},
    {"id": 22, "type": "BasicGuider", "pos": [390, 230], "size": [210.91961669921875, 46], "flags": {}, "order": 9, "mode": 0,
     "inputs": [{"name": "model", "type": "MODEL", "link": 39, "slot_index": 0},
                {"name": "conditioning", "type": "CONDITIONING", "link": 55, "slot_index": 1}],
     "outputs": [{"name": "GUIDER", "type": "GUIDER", "links": [30], "slot_index": 0, "shape": 3}],
     "properties": {"Node name for S&R": "BasicGuider"}, "widgets_values": []},
    {"id": 5, "type": "EmptyLatentImage", "pos": [390, 710], "size": [211.3397979736328, 106], "flags": {}, "order": 1, "mode": 0,
     "inputs": [], "outputs": [{"name": "LATENT", "type": "LATENT", "links": [62], "slot_index": 0}],
     "properties": {"Node name for S&R": "EmptyLatentImage"}, "widgets_values": [1024, 1024, 1], "color": "#323", "bgcolor": "#535"},
    {"id": 13, "type": "SamplerCustomAdvanced", "pos": [701, 225], "size": [320, 110], "flags": {}, "order": 10, "mode": 0,
     "inputs": [{"name": "noise", "type": "NOISE", "link": 56, "slot_index": 0},
                {"name": "guider", "type": "GUIDER", "link": 30, "slot_index": 1},
                {"name": "sampler", "type": "SAMPLER", "link": 19, "slot_index": 2},
                {"name": "sigmas", "type": "SIGMAS", "link": 20, "slot_index": 3},
                {"name": "latent_image", "type": "LATENT", "link": 62, "slot_index": 4}],
     "outputs": [{"name": "output", "type": "LATENT", "links": [64], "slot_index": 0, "shape": 3},
                 {"name": "denoised_output", "type": "LATENT", "links": [], "slot_index": 1, "shape": 3}],
     "properties": {"Node name for S&R": "SamplerCustomAdvanced"}, "widgets_values": []},
    {"id": 8, "type": "VAEDecode", "pos": [1108, 230], "size": [320, 50], "flags": {}, "order": 11, "mode": 0,
     "inputs": [{"name": "samples", "type": "LATENT", "link": 64}, {"name": "vae", "type": "VAE", "link": 12}],
     "outputs": [{"name": "IMAGE", "type": "IMAGE", "links": [66], "slot_index": 0}],
     "properties": {"Node name for S&R": "VAEDecode"}, "widgets_values": []},
    {"id": 12, "type": "UNETLoader", "pos": [-496, 224], "size": [317.64862060546875, 82.45046997070312], "flags": {}, "order": 2, "mode": 0,
     "inputs": [], "outputs": [{"name": "MODEL", "type": "MODEL", "links": [38, 39], "slot_index": 0, "shape": 3}],
     "properties": {"Node name for S&R": "UNETLoader"}, "widgets_values": ["flux.1-lite-8B.safetensors", "default"]},
    {"id": 10, "type": "VAELoader", "pos": [699, 389], "size": [320, 60], "flags": {"collapsed": false}, "order": 3, "mode": 0,
     "inputs": [], "outputs": [{"name": "VAE", "type": "VAE", "links": [12], "slot_index": 0, "shape": 3}],
     "properties": {"Node name for S&R": "VAELoader"}, "widgets_values": ["sd3ae.safetensors"]},
    {"id": 29, "type": "CLIPTextEncodeFlux", "pos": [-119, 230], "size": [448.8125, 455.3394775390625], "flags": {}, "order": 8, "mode": 0,
     "inputs": [{"name": "clip", "type": "CLIP", "link": 43},
                {"name": "guidance", "type": "FLOAT", "link": 54, "slot_index": 2, "widget": {"name": "guidance"}}],
     "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [55], "slot_index": 0, "shape": 3}],
     "properties": {"Node name for S&R": "CLIPTextEncodeFlux"},
     "widgets_values": [
       "A scene inspired by 2000 comedy animation, a glowing green alien whose fluorescent skin emits light, standing in a dark purple forest. The alien is holding a large sign that reads \"FLUX LITE 8B\" in bold letters. The forest around is shadowy, with tall, eerie trees and mist rolling in. The alien radiates a soft, supernatural glow, illuminating the surroundings, creating a stark contrast between light and darkness. Style of an old comic, with flat colors, halftone shading, and a slightly weathered, vintage texture.\n\n",
       "A scene inspired by 2000 comedy animation, a glowing green alien whose fluorescent skin emits light, standing in a dark purple forest. The alien is holding a large sign that reads \"FLUX LITE 8B\" in bold letters. The forest around is shadowy, with tall, eerie trees and mist rolling in. The alien radiates a soft, supernatural glow, illuminating the surroundings, creating a stark contrast between light and darkness. Style of an old comic, with flat colors, halftone shading, and a slightly weathered, vintage texture.\n\n",
       3.5
     ],
     "color": "#232", "bgcolor": "#353"},
    {"id": 11, "type": "DualCLIPLoader", "pos": [-497, 351], "size": [320, 110], "flags": {}, "order": 4, "mode": 0,
     "inputs": [], "outputs": [{"name": "CLIP", "type": "CLIP", "links": [43], "slot_index": 0, "shape": 3}],
     "properties": {"Node name for S&R": "DualCLIPLoader"}, "widgets_values": ["clip_l.safetensors", "t5xxl_fp16.safetensors", "flux"]},
    {"id": 17, "type": "BasicScheduler", "pos": [390, 430], "size": [210, 106], "flags": {}, "order": 7, "mode": 0,
     "inputs": [{"name": "model", "type": "MODEL", "link": 38, "slot_index": 0}],
     "outputs": [{"name": "SIGMAS", "type": "SIGMAS", "links": [20], "shape": 3}],
     "properties": {"Node name for S&R": "BasicScheduler"}, "widgets_values": ["simple", 28, 1], "color": "#432", "bgcolor": "#653"},
    {"id": 25, "type": "RandomNoise", "pos": [390, 580], "size": [210.0858612060547, 82], "flags": {}, "order": 5, "mode": 0,
     "inputs": [], "outputs": [{"name": "NOISE", "type": "NOISE", "links": [56], "shape": 3}],
     "properties": {"Node name for S&R": "RandomNoise"}, "widgets_values": [8, "increment"], "color": "#432", "bgcolor": "#653"},
    {"id": 44, "type": "SaveImage", "pos": [1101.447998046875, 390.652587890625], "size": [470.7904968261719, 502.0829162597656], "flags": {}, "order": 12, "mode": 0,
     "inputs": [{"name": "images", "type": "IMAGE", "link": 66}], "outputs": [],
     "properties": {}, "widgets_values": ["Flux-Lite-8B/1"]},
    {"id": 38, "type": "PrimitiveNode", "pos": [-387, 525], "size": [210, 82], "flags": {}, "order": 6, "mode": 0,
     "inputs": [], "outputs": [{"name": "FLOAT", "type": "FLOAT", "links": [54], "slot_index": 0, "widget": {"name": "guidance"}}],
     "title": "Guidance (CFG)", "properties": {"Run widget replace on values": false},
     "widgets_values": [3.5, "fixed"], "color": "#233", "bgcolor": "#355"}
  ],
  "links": [
    [12, 10, 0, 8, 1, "VAE"],
    [19, 16, 0, 13, 2, "SAMPLER"],
    [20, 17, 0, 13, 3, "SIGMAS"],
    [30, 22, 0, 13, 1, "GUIDER"],
    [38, 12, 0, 17, 0, "MODEL"],
    [39, 12, 0, 22, 0, "MODEL"],
    [43, 11, 0, 29, 0, "CLIP"],
    [54, 38, 0, 29, 1, "FLOAT"],
    [55, 29, 0, 22, 1, "CONDITIONING"],
    [56, 25, 0, 13, 0, "NOISE"],
    [62, 5, 0, 13, 4, "LATENT"],
    [64, 13, 0, 8, 0, "LATENT"],
    [66, 8, 0, 44, 0, "IMAGE"]
  ],
  "groups": [],
  "config": {},
  "extra": {
    "ds": {"scale": 1.2100000000000006, "offset": [601.8310927090779, 99.46984247019009]},
    "workspace_info": {"id": "kzetqdSCzE7d9gP4zHck1", "saveLock": false, "cloudID": null, "coverMediaPath": null}
  },
  "version": 0.4
}
Binary file not shown.
@@ -0,0 +1,33 @@
{
  "_class_name": "FluxPipeline",
  "_diffusers_version": "0.32.0.dev0",
  "_name_or_path": "black-forest-labs/FLUX.1-dev",
  "scheduler": ["diffusers", "FlowMatchEulerDiscreteScheduler"],
  "text_encoder": ["transformers", "CLIPTextModel"],
  "text_encoder_2": ["transformers", "T5EncoderModel"],
  "tokenizer": ["transformers", "CLIPTokenizer"],
  "tokenizer_2": ["transformers", "T5TokenizerFast"],
  "transformer": ["diffusers", "FluxTransformer2DModel"],
  "vae": ["diffusers", "AutoencoderKL"]
}
Binary file not shown.
Binary file not shown.
Binary file not shown.
After Width: | Height: | Size: 58 KiB
Binary file not shown.
After Width: | Height: | Size: 69 KiB
Binary file not shown.
Binary file not shown.
@@ -0,0 +1,16 @@
{
  "_class_name": "FlowMatchEulerDiscreteScheduler",
  "_diffusers_version": "0.32.0.dev0",
  "base_image_seq_len": 256,
  "base_shift": 0.5,
  "invert_sigmas": false,
  "max_image_seq_len": 4096,
  "max_shift": 1.15,
  "num_train_timesteps": 1000,
  "shift": 3.0,
  "shift_terminal": null,
  "use_beta_sigmas": false,
  "use_dynamic_shifting": true,
  "use_exponential_sigmas": false,
  "use_karras_sigmas": false
}
@@ -0,0 +1,25 @@
{
  "_name_or_path": "/mnt/localnvme/huggingface_cache/hub/models--black-forest-labs--FLUX.1-dev/snapshots/0ef5fff789c832c5c7f4e127f94c8b54bbcced44/text_encoder",
  "architectures": ["CLIPTextModel"],
  "attention_dropout": 0.0,
  "bos_token_id": 0,
  "dropout": 0.0,
  "eos_token_id": 2,
  "hidden_act": "quick_gelu",
  "hidden_size": 768,
  "initializer_factor": 1.0,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 77,
  "model_type": "clip_text_model",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "projection_dim": 768,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.47.1",
  "vocab_size": 49408
}
@@ -0,0 +1,32 @@
{
  "_name_or_path": "/mnt/localnvme/huggingface_cache/hub/models--black-forest-labs--FLUX.1-dev/snapshots/0ef5fff789c832c5c7f4e127f94c8b54bbcced44/text_encoder_2",
  "architectures": ["T5EncoderModel"],
  "classifier_dropout": 0.0,
  "d_ff": 10240,
  "d_kv": 64,
  "d_model": 4096,
  "decoder_start_token_id": 0,
  "dense_act_fn": "gelu_new",
  "dropout_rate": 0.1,
  "eos_token_id": 1,
  "feed_forward_proj": "gated-gelu",
  "initializer_factor": 1.0,
  "is_encoder_decoder": true,
  "is_gated_act": true,
  "layer_norm_epsilon": 1e-06,
  "model_type": "t5",
  "num_decoder_layers": 24,
  "num_heads": 64,
  "num_layers": 24,
  "output_past": true,
  "pad_token_id": 0,
  "relative_attention_max_distance": 128,
  "relative_attention_num_buckets": 32,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.47.1",
  "use_cache": true,
  "vocab_size": 32128
}
Binary file not shown.
@@ -0,0 +1,226 @@
{
  "metadata": {
    "total_size": 9524621312
  },
  "weight_map": {
    "encoder.block.0.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
    "encoder.block.0.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
    "encoder.block.0.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
    "encoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight": "model-00001-of-00002.safetensors",
    "encoder.block.0.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
    "encoder.block.0.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
    "encoder.block.0.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
    "encoder.block.0.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
    "encoder.block.0.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
    "encoder.block.0.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
    "encoder.block.1.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
    "encoder.block.1.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
    "encoder.block.1.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
    "encoder.block.1.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
    "encoder.block.1.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
    "encoder.block.1.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
    "encoder.block.1.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
    "encoder.block.1.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
    "encoder.block.1.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
    "encoder.block.10.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
    "encoder.block.10.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
    "encoder.block.10.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
    "encoder.block.10.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
    "encoder.block.10.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
    "encoder.block.10.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
    "encoder.block.10.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
    "encoder.block.10.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
    "encoder.block.10.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
    "encoder.block.11.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
    "encoder.block.11.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
    "encoder.block.11.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
    "encoder.block.11.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
    "encoder.block.11.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
    "encoder.block.11.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
    "encoder.block.11.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
    "encoder.block.11.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
    "encoder.block.11.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
    "encoder.block.12.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
    "encoder.block.12.layer.0.SelfAttention.o.weight": "model-00002-of-00002.safetensors",
    "encoder.block.12.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
    "encoder.block.12.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
    "encoder.block.12.layer.0.layer_norm.weight": "model-00002-of-00002.safetensors",
    "encoder.block.12.layer.1.DenseReluDense.wi_0.weight": "model-00002-of-00002.safetensors",
    "encoder.block.12.layer.1.DenseReluDense.wi_1.weight": "model-00002-of-00002.safetensors",
    "encoder.block.12.layer.1.DenseReluDense.wo.weight": "model-00002-of-00002.safetensors",
    "encoder.block.12.layer.1.layer_norm.weight": "model-00002-of-00002.safetensors",
    "encoder.block.13.layer.0.SelfAttention.k.weight": "model-00002-of-00002.safetensors",
    "encoder.block.13.layer.0.SelfAttention.o.weight": "model-00002-of-00002.safetensors",
    "encoder.block.13.layer.0.SelfAttention.q.weight": "model-00002-of-00002.safetensors",
    "encoder.block.13.layer.0.SelfAttention.v.weight": "model-00002-of-00002.safetensors",
    "encoder.block.13.layer.0.layer_norm.weight": "model-00002-of-00002.safetensors",
    "encoder.block.13.layer.1.DenseReluDense.wi_0.weight": "model-00002-of-00002.safetensors",
    "encoder.block.13.layer.1.DenseReluDense.wi_1.weight": "model-00002-of-00002.safetensors",
    "encoder.block.13.layer.1.DenseReluDense.wo.weight": "model-00002-of-00002.safetensors",
    "encoder.block.13.layer.1.layer_norm.weight": "model-00002-of-00002.safetensors",
    "encoder.block.14.layer.0.SelfAttention.k.weight": "model-00002-of-00002.safetensors",
    "encoder.block.14.layer.0.SelfAttention.o.weight": "model-00002-of-00002.safetensors",
    "encoder.block.14.layer.0.SelfAttention.q.weight": "model-00002-of-00002.safetensors",
    "encoder.block.14.layer.0.SelfAttention.v.weight": "model-00002-of-00002.safetensors",
    "encoder.block.14.layer.0.layer_norm.weight": "model-00002-of-00002.safetensors",
    "encoder.block.14.layer.1.DenseReluDense.wi_0.weight": "model-00002-of-00002.safetensors",
    "encoder.block.14.layer.1.DenseReluDense.wi_1.weight": "model-00002-of-00002.safetensors",
    "encoder.block.14.layer.1.DenseReluDense.wo.weight": "model-00002-of-00002.safetensors",
    "encoder.block.14.layer.1.layer_norm.weight": "model-00002-of-00002.safetensors",
    "encoder.block.15.layer.0.SelfAttention.k.weight": "model-00002-of-00002.safetensors",
    "encoder.block.15.layer.0.SelfAttention.o.weight": "model-00002-of-00002.safetensors",
    "encoder.block.15.layer.0.SelfAttention.q.weight": "model-00002-of-00002.safetensors",
    "encoder.block.15.layer.0.SelfAttention.v.weight": "model-00002-of-00002.safetensors",
    "encoder.block.15.layer.0.layer_norm.weight": "model-00002-of-00002.safetensors",
    "encoder.block.15.layer.1.DenseReluDense.wi_0.weight": "model-00002-of-00002.safetensors",
    "encoder.block.15.layer.1.DenseReluDense.wi_1.weight": "model-00002-of-00002.safetensors",
    "encoder.block.15.layer.1.DenseReluDense.wo.weight": "model-00002-of-00002.safetensors",
    "encoder.block.15.layer.1.layer_norm.weight": "model-00002-of-00002.safetensors",
    "encoder.block.16.layer.0.SelfAttention.k.weight": "model-00002-of-00002.safetensors",
    "encoder.block.16.layer.0.SelfAttention.o.weight": "model-00002-of-00002.safetensors",
    "encoder.block.16.layer.0.SelfAttention.q.weight": "model-00002-of-00002.safetensors",
    "encoder.block.16.layer.0.SelfAttention.v.weight": "model-00002-of-00002.safetensors",
    "encoder.block.16.layer.0.layer_norm.weight": "model-00002-of-00002.safetensors",
    "encoder.block.16.layer.1.DenseReluDense.wi_0.weight": "model-00002-of-00002.safetensors",
    "encoder.block.16.layer.1.DenseReluDense.wi_1.weight": "model-00002-of-00002.safetensors",
    "encoder.block.16.layer.1.DenseReluDense.wo.weight": "model-00002-of-00002.safetensors",
    "encoder.block.16.layer.1.layer_norm.weight": "model-00002-of-00002.safetensors",
    "encoder.block.17.layer.0.SelfAttention.k.weight": "model-00002-of-00002.safetensors",
    "encoder.block.17.layer.0.SelfAttention.o.weight": "model-00002-of-00002.safetensors",
    "encoder.block.17.layer.0.SelfAttention.q.weight": "model-00002-of-00002.safetensors",
    "encoder.block.17.layer.0.SelfAttention.v.weight": "model-00002-of-00002.safetensors",
    "encoder.block.17.layer.0.layer_norm.weight": "model-00002-of-00002.safetensors",
    "encoder.block.17.layer.1.DenseReluDense.wi_0.weight": "model-00002-of-00002.safetensors",
    "encoder.block.17.layer.1.DenseReluDense.wi_1.weight": "model-00002-of-00002.safetensors",
    "encoder.block.17.layer.1.DenseReluDense.wo.weight": "model-00002-of-00002.safetensors",
    "encoder.block.17.layer.1.layer_norm.weight": "model-00002-of-00002.safetensors",
    "encoder.block.18.layer.0.SelfAttention.k.weight": "model-00002-of-00002.safetensors",
    "encoder.block.18.layer.0.SelfAttention.o.weight": "model-00002-of-00002.safetensors",
    "encoder.block.18.layer.0.SelfAttention.q.weight": "model-00002-of-00002.safetensors",
    "encoder.block.18.layer.0.SelfAttention.v.weight": "model-00002-of-00002.safetensors",
    "encoder.block.18.layer.0.layer_norm.weight": "model-00002-of-00002.safetensors",
    "encoder.block.18.layer.1.DenseReluDense.wi_0.weight": "model-00002-of-00002.safetensors",
    "encoder.block.18.layer.1.DenseReluDense.wi_1.weight": "model-00002-of-00002.safetensors",
    "encoder.block.18.layer.1.DenseReluDense.wo.weight": "model-00002-of-00002.safetensors",
    "encoder.block.18.layer.1.layer_norm.weight": "model-00002-of-00002.safetensors",
    "encoder.block.19.layer.0.SelfAttention.k.weight": "model-00002-of-00002.safetensors",
    "encoder.block.19.layer.0.SelfAttention.o.weight": "model-00002-of-00002.safetensors",
    "encoder.block.19.layer.0.SelfAttention.q.weight": "model-00002-of-00002.safetensors",
    "encoder.block.19.layer.0.SelfAttention.v.weight": "model-00002-of-00002.safetensors",
    "encoder.block.19.layer.0.layer_norm.weight": "model-00002-of-00002.safetensors",
    "encoder.block.19.layer.1.DenseReluDense.wi_0.weight": "model-00002-of-00002.safetensors",
    "encoder.block.19.layer.1.DenseReluDense.wi_1.weight": "model-00002-of-00002.safetensors",
    "encoder.block.19.layer.1.DenseReluDense.wo.weight": "model-00002-of-00002.safetensors",
    "encoder.block.19.layer.1.layer_norm.weight": "model-00002-of-00002.safetensors",
    "encoder.block.2.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
    "encoder.block.2.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
    "encoder.block.2.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
    "encoder.block.2.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
    "encoder.block.2.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
    "encoder.block.2.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
    "encoder.block.2.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
    "encoder.block.2.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
    "encoder.block.2.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
    "encoder.block.20.layer.0.SelfAttention.k.weight": "model-00002-of-00002.safetensors",
    "encoder.block.20.layer.0.SelfAttention.o.weight": "model-00002-of-00002.safetensors",
    "encoder.block.20.layer.0.SelfAttention.q.weight": "model-00002-of-00002.safetensors",
    "encoder.block.20.layer.0.SelfAttention.v.weight": "model-00002-of-00002.safetensors",
    "encoder.block.20.layer.0.layer_norm.weight": "model-00002-of-00002.safetensors",
    "encoder.block.20.layer.1.DenseReluDense.wi_0.weight": "model-00002-of-00002.safetensors",
    "encoder.block.20.layer.1.DenseReluDense.wi_1.weight": "model-00002-of-00002.safetensors",
    "encoder.block.20.layer.1.DenseReluDense.wo.weight": "model-00002-of-00002.safetensors",
    "encoder.block.20.layer.1.layer_norm.weight": "model-00002-of-00002.safetensors",
    "encoder.block.21.layer.0.SelfAttention.k.weight": "model-00002-of-00002.safetensors",
    "encoder.block.21.layer.0.SelfAttention.o.weight": "model-00002-of-00002.safetensors",
    "encoder.block.21.layer.0.SelfAttention.q.weight": "model-00002-of-00002.safetensors",
    "encoder.block.21.layer.0.SelfAttention.v.weight": "model-00002-of-00002.safetensors",
    "encoder.block.21.layer.0.layer_norm.weight": "model-00002-of-00002.safetensors",
    "encoder.block.21.layer.1.DenseReluDense.wi_0.weight": "model-00002-of-00002.safetensors",
    "encoder.block.21.layer.1.DenseReluDense.wi_1.weight": "model-00002-of-00002.safetensors",
    "encoder.block.21.layer.1.DenseReluDense.wo.weight": "model-00002-of-00002.safetensors",
    "encoder.block.21.layer.1.layer_norm.weight": "model-00002-of-00002.safetensors",
    "encoder.block.22.layer.0.SelfAttention.k.weight": "model-00002-of-00002.safetensors",
    "encoder.block.22.layer.0.SelfAttention.o.weight": "model-00002-of-00002.safetensors",
    "encoder.block.22.layer.0.SelfAttention.q.weight": "model-00002-of-00002.safetensors",
    "encoder.block.22.layer.0.SelfAttention.v.weight": "model-00002-of-00002.safetensors",
    "encoder.block.22.layer.0.layer_norm.weight": "model-00002-of-00002.safetensors",
    "encoder.block.22.layer.1.DenseReluDense.wi_0.weight": "model-00002-of-00002.safetensors",
    "encoder.block.22.layer.1.DenseReluDense.wi_1.weight": "model-00002-of-00002.safetensors",
    "encoder.block.22.layer.1.DenseReluDense.wo.weight": "model-00002-of-00002.safetensors",
    "encoder.block.22.layer.1.layer_norm.weight": "model-00002-of-00002.safetensors",
    "encoder.block.23.layer.0.SelfAttention.k.weight": "model-00002-of-00002.safetensors",
    "encoder.block.23.layer.0.SelfAttention.o.weight": "model-00002-of-00002.safetensors",
    "encoder.block.23.layer.0.SelfAttention.q.weight": "model-00002-of-00002.safetensors",
    "encoder.block.23.layer.0.SelfAttention.v.weight": "model-00002-of-00002.safetensors",
    "encoder.block.23.layer.0.layer_norm.weight": "model-00002-of-00002.safetensors",
    "encoder.block.23.layer.1.DenseReluDense.wi_0.weight": "model-00002-of-00002.safetensors",
    "encoder.block.23.layer.1.DenseReluDense.wi_1.weight": "model-00002-of-00002.safetensors",
    "encoder.block.23.layer.1.DenseReluDense.wo.weight": "model-00002-of-00002.safetensors",
    "encoder.block.23.layer.1.layer_norm.weight": "model-00002-of-00002.safetensors",
    "encoder.block.3.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
    "encoder.block.3.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
    "encoder.block.3.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
    "encoder.block.3.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
    "encoder.block.3.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
    "encoder.block.3.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
    "encoder.block.3.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
    "encoder.block.3.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
    "encoder.block.3.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
    "encoder.block.4.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
    "encoder.block.4.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
    "encoder.block.4.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
    "encoder.block.4.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
    "encoder.block.4.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
    "encoder.block.4.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
    "encoder.block.4.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
    "encoder.block.4.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
    "encoder.block.4.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
    "encoder.block.5.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
    "encoder.block.5.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
    "encoder.block.5.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
    "encoder.block.5.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
    "encoder.block.5.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
    "encoder.block.5.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
    "encoder.block.5.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
    "encoder.block.5.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
    "encoder.block.5.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
    "encoder.block.6.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
    "encoder.block.6.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
    "encoder.block.6.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
    "encoder.block.6.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
    "encoder.block.6.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
    "encoder.block.6.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
    "encoder.block.6.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
    "encoder.block.6.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
    "encoder.block.6.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
    "encoder.block.7.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
    "encoder.block.7.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
    "encoder.block.7.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
    "encoder.block.7.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
    "encoder.block.7.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
    "encoder.block.7.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
    "encoder.block.7.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
    "encoder.block.7.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
    "encoder.block.7.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
    "encoder.block.8.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
    "encoder.block.8.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
    "encoder.block.8.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
    "encoder.block.8.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
    "encoder.block.8.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
    "encoder.block.8.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
    "encoder.block.8.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
    "encoder.block.8.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
    "encoder.block.8.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
    "encoder.block.9.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
    "encoder.block.9.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
    "encoder.block.9.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
    "encoder.block.9.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
    "encoder.block.9.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
    "encoder.block.9.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
    "encoder.block.9.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
    "encoder.block.9.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
    "encoder.block.9.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
    "encoder.final_layer_norm.weight": "model-00002-of-00002.safetensors",
    "shared.weight": "model-00001-of-00002.safetensors"
  }
}
File diff suppressed because it is too large
@@ -0,0 +1,30 @@
{
  "bos_token": {"content": "<|startoftext|>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false},
  "eos_token": {"content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
  "pad_token": {"content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
  "unk_token": {"content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false}
}
@@ -0,0 +1,31 @@
{
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "49406": {"content": "<|startoftext|>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false, "special": true},
    "49407": {"content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}
  },
  "bos_token": "<|startoftext|>",
  "clean_up_tokenization_spaces": true,
  "do_lower_case": true,
  "eos_token": "<|endoftext|>",
  "errors": "replace",
  "extra_special_tokens": {},
  "model_max_length": 77,
  "pad_token": "<|endoftext|>",
  "tokenizer_class": "CLIPTokenizer",
  "unk_token": "<|endoftext|>"
}
File diff suppressed because it is too large
@@ -0,0 +1,125 @@
{
  "additional_special_tokens": [
    "<extra_id_0>", "<extra_id_1>", "<extra_id_2>", "<extra_id_3>", "<extra_id_4>", "<extra_id_5>", "<extra_id_6>", "<extra_id_7>", "<extra_id_8>", "<extra_id_9>",
    "<extra_id_10>", "<extra_id_11>", "<extra_id_12>", "<extra_id_13>", "<extra_id_14>", "<extra_id_15>", "<extra_id_16>", "<extra_id_17>", "<extra_id_18>", "<extra_id_19>",
    "<extra_id_20>", "<extra_id_21>", "<extra_id_22>", "<extra_id_23>", "<extra_id_24>", "<extra_id_25>", "<extra_id_26>", "<extra_id_27>", "<extra_id_28>", "<extra_id_29>",
    "<extra_id_30>", "<extra_id_31>", "<extra_id_32>", "<extra_id_33>", "<extra_id_34>", "<extra_id_35>", "<extra_id_36>", "<extra_id_37>", "<extra_id_38>", "<extra_id_39>",
    "<extra_id_40>", "<extra_id_41>", "<extra_id_42>", "<extra_id_43>", "<extra_id_44>", "<extra_id_45>", "<extra_id_46>", "<extra_id_47>", "<extra_id_48>", "<extra_id_49>",
    "<extra_id_50>", "<extra_id_51>", "<extra_id_52>", "<extra_id_53>", "<extra_id_54>", "<extra_id_55>", "<extra_id_56>", "<extra_id_57>", "<extra_id_58>", "<extra_id_59>",
    "<extra_id_60>", "<extra_id_61>", "<extra_id_62>", "<extra_id_63>", "<extra_id_64>", "<extra_id_65>", "<extra_id_66>", "<extra_id_67>", "<extra_id_68>", "<extra_id_69>",
    "<extra_id_70>", "<extra_id_71>", "<extra_id_72>", "<extra_id_73>", "<extra_id_74>", "<extra_id_75>", "<extra_id_76>", "<extra_id_77>", "<extra_id_78>", "<extra_id_79>",
    "<extra_id_80>", "<extra_id_81>", "<extra_id_82>", "<extra_id_83>", "<extra_id_84>", "<extra_id_85>", "<extra_id_86>", "<extra_id_87>", "<extra_id_88>", "<extra_id_89>",
    "<extra_id_90>", "<extra_id_91>", "<extra_id_92>", "<extra_id_93>", "<extra_id_94>", "<extra_id_95>", "<extra_id_96>", "<extra_id_97>", "<extra_id_98>", "<extra_id_99>"
  ],
  "eos_token": {"content": "</s>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
  "pad_token": {"content": "<pad>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
  "unk_token": {"content": "<unk>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false}
}
Binary file not shown.
File diff suppressed because one or more lines are too long
|
@ -0,0 +1,941 @@
|
||||||
|
{
|
||||||
|
"add_prefix_space": true,
|
||||||
|
"added_tokens_decoder": {
|
||||||
|
"0": {
|
||||||
|
"content": "<pad>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"1": {
|
||||||
|
"content": "</s>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"2": {
|
||||||
|
"content": "<unk>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"32000": {
|
||||||
|
"content": "<extra_id_99>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"32001": {
|
||||||
|
"content": "<extra_id_98>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"32002": {
|
||||||
|
"content": "<extra_id_97>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"32003": {
|
||||||
|
"content": "<extra_id_96>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"32004": {
|
||||||
|
"content": "<extra_id_95>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"32005": {
|
||||||
|
"content": "<extra_id_94>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"32006": {
|
||||||
|
"content": "<extra_id_93>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"32007": {
|
||||||
|
"content": "<extra_id_92>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"32008": {
|
||||||
|
"content": "<extra_id_91>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
|
||||||
|
"32009": {
|
||||||
|
"content": "<extra_id_90>",
|
||||||
|
"lstrip": false,
|
||||||
|
"normalized": false,
|
||||||
|
"rstrip": false,
|
||||||
|
"single_word": false,
|
||||||
|
"special": true
|
||||||
|
},
"32010": { "content": "<extra_id_89>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32011": { "content": "<extra_id_88>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32012": { "content": "<extra_id_87>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32013": { "content": "<extra_id_86>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32014": { "content": "<extra_id_85>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32015": { "content": "<extra_id_84>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32016": { "content": "<extra_id_83>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32017": { "content": "<extra_id_82>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32018": { "content": "<extra_id_81>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32019": { "content": "<extra_id_80>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32020": { "content": "<extra_id_79>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32021": { "content": "<extra_id_78>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32022": { "content": "<extra_id_77>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32023": { "content": "<extra_id_76>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32024": { "content": "<extra_id_75>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32025": { "content": "<extra_id_74>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32026": { "content": "<extra_id_73>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32027": { "content": "<extra_id_72>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32028": { "content": "<extra_id_71>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32029": { "content": "<extra_id_70>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32030": { "content": "<extra_id_69>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32031": { "content": "<extra_id_68>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32032": { "content": "<extra_id_67>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32033": { "content": "<extra_id_66>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32034": { "content": "<extra_id_65>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32035": { "content": "<extra_id_64>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32036": { "content": "<extra_id_63>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32037": { "content": "<extra_id_62>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32038": { "content": "<extra_id_61>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32039": { "content": "<extra_id_60>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32040": { "content": "<extra_id_59>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32041": { "content": "<extra_id_58>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32042": { "content": "<extra_id_57>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32043": { "content": "<extra_id_56>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32044": { "content": "<extra_id_55>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32045": { "content": "<extra_id_54>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32046": { "content": "<extra_id_53>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32047": { "content": "<extra_id_52>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32048": { "content": "<extra_id_51>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32049": { "content": "<extra_id_50>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32050": { "content": "<extra_id_49>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32051": { "content": "<extra_id_48>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32052": { "content": "<extra_id_47>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32053": { "content": "<extra_id_46>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32054": { "content": "<extra_id_45>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32055": { "content": "<extra_id_44>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32056": { "content": "<extra_id_43>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32057": { "content": "<extra_id_42>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32058": { "content": "<extra_id_41>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32059": { "content": "<extra_id_40>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32060": { "content": "<extra_id_39>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32061": { "content": "<extra_id_38>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32062": { "content": "<extra_id_37>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32063": { "content": "<extra_id_36>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32064": { "content": "<extra_id_35>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32065": { "content": "<extra_id_34>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32066": { "content": "<extra_id_33>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32067": { "content": "<extra_id_32>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32068": { "content": "<extra_id_31>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32069": { "content": "<extra_id_30>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32070": { "content": "<extra_id_29>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32071": { "content": "<extra_id_28>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32072": { "content": "<extra_id_27>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32073": { "content": "<extra_id_26>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32074": { "content": "<extra_id_25>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32075": { "content": "<extra_id_24>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32076": { "content": "<extra_id_23>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32077": { "content": "<extra_id_22>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32078": { "content": "<extra_id_21>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32079": { "content": "<extra_id_20>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32080": { "content": "<extra_id_19>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32081": { "content": "<extra_id_18>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32082": { "content": "<extra_id_17>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32083": { "content": "<extra_id_16>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32084": { "content": "<extra_id_15>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32085": { "content": "<extra_id_14>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32086": { "content": "<extra_id_13>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32087": { "content": "<extra_id_12>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32088": { "content": "<extra_id_11>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32089": { "content": "<extra_id_10>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32090": { "content": "<extra_id_9>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32091": { "content": "<extra_id_8>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32092": { "content": "<extra_id_7>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32093": { "content": "<extra_id_6>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32094": { "content": "<extra_id_5>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32095": { "content": "<extra_id_4>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32096": { "content": "<extra_id_3>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32097": { "content": "<extra_id_2>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32098": { "content": "<extra_id_1>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true },
"32099": { "content": "<extra_id_0>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true }
},
"additional_special_tokens": [
"<extra_id_0>", "<extra_id_1>", "<extra_id_2>", "<extra_id_3>", "<extra_id_4>", "<extra_id_5>", "<extra_id_6>", "<extra_id_7>", "<extra_id_8>", "<extra_id_9>",
"<extra_id_10>", "<extra_id_11>", "<extra_id_12>", "<extra_id_13>", "<extra_id_14>", "<extra_id_15>", "<extra_id_16>", "<extra_id_17>", "<extra_id_18>", "<extra_id_19>",
"<extra_id_20>", "<extra_id_21>", "<extra_id_22>", "<extra_id_23>", "<extra_id_24>", "<extra_id_25>", "<extra_id_26>", "<extra_id_27>", "<extra_id_28>", "<extra_id_29>",
"<extra_id_30>", "<extra_id_31>", "<extra_id_32>", "<extra_id_33>", "<extra_id_34>", "<extra_id_35>", "<extra_id_36>", "<extra_id_37>", "<extra_id_38>", "<extra_id_39>",
"<extra_id_40>", "<extra_id_41>", "<extra_id_42>", "<extra_id_43>", "<extra_id_44>", "<extra_id_45>", "<extra_id_46>", "<extra_id_47>", "<extra_id_48>", "<extra_id_49>",
"<extra_id_50>", "<extra_id_51>", "<extra_id_52>", "<extra_id_53>", "<extra_id_54>", "<extra_id_55>", "<extra_id_56>", "<extra_id_57>", "<extra_id_58>", "<extra_id_59>",
"<extra_id_60>", "<extra_id_61>", "<extra_id_62>", "<extra_id_63>", "<extra_id_64>", "<extra_id_65>", "<extra_id_66>", "<extra_id_67>", "<extra_id_68>", "<extra_id_69>",
"<extra_id_70>", "<extra_id_71>", "<extra_id_72>", "<extra_id_73>", "<extra_id_74>", "<extra_id_75>", "<extra_id_76>", "<extra_id_77>", "<extra_id_78>", "<extra_id_79>",
"<extra_id_80>", "<extra_id_81>", "<extra_id_82>", "<extra_id_83>", "<extra_id_84>", "<extra_id_85>", "<extra_id_86>", "<extra_id_87>", "<extra_id_88>", "<extra_id_89>",
"<extra_id_90>", "<extra_id_91>", "<extra_id_92>", "<extra_id_93>", "<extra_id_94>", "<extra_id_95>", "<extra_id_96>", "<extra_id_97>", "<extra_id_98>", "<extra_id_99>"
],
"clean_up_tokenization_spaces": true,
"eos_token": "</s>",
"extra_ids": 100,
"extra_special_tokens": {},
"legacy": true,
"model_max_length": 512,
"pad_token": "<pad>",
"sp_model_kwargs": {},
"tokenizer_class": "T5Tokenizer",
"unk_token": "<unk>"
}
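
The tokenizer configuration above is the standard T5 sentinel-token setup (100 `<extra_id_*>` tokens, `model_max_length` of 512) used as the second text-encoder tokenizer in FLUX pipelines. A minimal sketch of loading and inspecting it, assuming it ships under the conventional `tokenizer_2/` subfolder of the Freepik/flux.1-lite-8B repo (the subfolder name is an assumption here):

```python
from transformers import T5Tokenizer

# Assumed subfolder name; FLUX-style repos conventionally keep the T5 tokenizer in tokenizer_2/.
tokenizer = T5Tokenizer.from_pretrained("Freepik/flux.1-lite-8B", subfolder="tokenizer_2")

print(tokenizer.model_max_length)                       # 512, per "model_max_length" above
print(tokenizer.convert_tokens_to_ids("<extra_id_0>"))  # 32099, per the added_tokens_decoder map
```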
@@ -0,0 +1 @@
{"patch_size": 1, "in_channels": 64, "out_channels": null, "num_layers": 8, "num_single_layers": 38, "attention_head_dim": 128, "num_attention_heads": 24, "joint_attention_dim": 4096, "pooled_projection_dim": 768, "guidance_embeds": true, "axes_dims_rope": [16, 56, 56], "_use_default_values": ["axes_dims_rope", "out_channels"], "_class_name": "FluxTransformer2DModel", "_diffusers_version": "0.32.0.dev0", "_name_or_path": "/mnt/localnvme/huggingface_cache/hub/models--black-forest-labs--FLUX.1-dev/snapshots/0ef5fff789c832c5c7f4e127f94c8b54bbcced44/transformer"}
@@ -0,0 +1,815 @@
{
"metadata": {
"total_size": 16326528128
},
"weight_map": {
"context_embedder.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"context_embedder.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"norm_out.linear.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"norm_out.linear.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"proj_out.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"proj_out.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.0.attn.norm_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.0.attn.norm_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.0.attn.to_k.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.0.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.0.attn.to_q.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.0.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.0.attn.to_v.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.0.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.0.norm.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.0.norm.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.0.proj_mlp.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.0.proj_mlp.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.0.proj_out.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.0.proj_out.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.1.attn.norm_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.1.attn.norm_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.1.attn.to_k.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.1.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.1.attn.to_q.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.1.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.1.attn.to_v.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.1.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.1.norm.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.1.norm.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.1.proj_mlp.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.1.proj_mlp.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.1.proj_out.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.1.proj_out.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.10.attn.norm_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.10.attn.norm_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.10.attn.to_k.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.10.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.10.attn.to_q.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.10.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.10.attn.to_v.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.10.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.10.norm.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.10.norm.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.10.proj_mlp.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.10.proj_mlp.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.10.proj_out.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.10.proj_out.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.11.attn.norm_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.11.attn.norm_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.11.attn.to_k.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.11.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.11.attn.to_q.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.11.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.11.attn.to_v.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.11.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.11.norm.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.11.norm.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.11.proj_mlp.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.11.proj_mlp.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.11.proj_out.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.11.proj_out.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.12.attn.norm_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.12.attn.norm_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.12.attn.to_k.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.12.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.12.attn.to_q.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.12.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.12.attn.to_v.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.12.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.12.norm.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.12.norm.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.12.proj_mlp.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.12.proj_mlp.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.12.proj_out.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.12.proj_out.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.13.attn.norm_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.13.attn.norm_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.13.attn.to_k.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.13.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.13.attn.to_q.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.13.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.13.attn.to_v.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.13.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.13.norm.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.13.norm.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.13.proj_mlp.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.13.proj_mlp.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.13.proj_out.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.13.proj_out.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.14.attn.norm_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.14.attn.norm_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.14.attn.to_k.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.14.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.14.attn.to_q.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.14.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.14.attn.to_v.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.14.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.14.norm.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.14.norm.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.14.proj_mlp.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.14.proj_mlp.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.14.proj_out.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.14.proj_out.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.15.attn.norm_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.15.attn.norm_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.15.attn.to_k.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.15.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.15.attn.to_q.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.15.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.15.attn.to_v.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.15.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.15.norm.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.15.norm.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.15.proj_mlp.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.15.proj_mlp.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.15.proj_out.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.15.proj_out.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.16.attn.norm_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.16.attn.norm_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.16.attn.to_k.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.16.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.16.attn.to_q.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.16.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.16.attn.to_v.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.16.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.16.norm.linear.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.16.norm.linear.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.16.proj_mlp.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.16.proj_mlp.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.16.proj_out.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.16.proj_out.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.17.attn.norm_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.17.attn.norm_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.17.attn.to_k.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.17.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.17.attn.to_q.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.17.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.17.attn.to_v.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.17.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.17.norm.linear.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.17.norm.linear.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.17.proj_mlp.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.17.proj_mlp.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.17.proj_out.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.17.proj_out.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.18.attn.norm_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.18.attn.norm_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.18.attn.to_k.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.18.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.18.attn.to_q.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.18.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.18.attn.to_v.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.18.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.18.norm.linear.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.18.norm.linear.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.18.proj_mlp.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.18.proj_mlp.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.18.proj_out.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.18.proj_out.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.19.attn.norm_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.19.attn.norm_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.19.attn.to_k.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.19.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.19.attn.to_q.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.19.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.19.attn.to_v.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.19.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.19.norm.linear.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.19.norm.linear.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.19.proj_mlp.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.19.proj_mlp.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.19.proj_out.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.19.proj_out.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.2.attn.norm_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.2.attn.norm_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.2.attn.to_k.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.2.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.2.attn.to_q.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.2.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.2.attn.to_v.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.2.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.2.norm.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.2.norm.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.2.proj_mlp.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.2.proj_mlp.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.2.proj_out.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.2.proj_out.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.20.attn.norm_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.20.attn.norm_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.20.attn.to_k.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.20.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.20.attn.to_q.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.20.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.20.attn.to_v.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.20.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.20.norm.linear.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.20.norm.linear.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.20.proj_mlp.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.20.proj_mlp.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.20.proj_out.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.20.proj_out.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.21.attn.norm_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.21.attn.norm_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.21.attn.to_k.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.21.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.21.attn.to_q.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.21.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.21.attn.to_v.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.21.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.21.norm.linear.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.21.norm.linear.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.21.proj_mlp.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.21.proj_mlp.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.21.proj_out.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.21.proj_out.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.22.attn.norm_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.22.attn.norm_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.22.attn.to_k.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.22.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.22.attn.to_q.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.22.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.22.attn.to_v.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.22.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.22.norm.linear.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.22.norm.linear.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.22.proj_mlp.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.22.proj_mlp.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.22.proj_out.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.22.proj_out.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.23.attn.norm_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.23.attn.norm_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.23.attn.to_k.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.23.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.23.attn.to_q.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.23.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.23.attn.to_v.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.23.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.23.norm.linear.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.23.norm.linear.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.23.proj_mlp.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.23.proj_mlp.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.23.proj_out.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.23.proj_out.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.24.attn.norm_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.24.attn.norm_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.24.attn.to_k.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.24.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.24.attn.to_q.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.24.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.24.attn.to_v.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.24.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.24.norm.linear.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.24.norm.linear.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.24.proj_mlp.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.24.proj_mlp.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.24.proj_out.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.24.proj_out.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.25.attn.norm_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.25.attn.norm_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.25.attn.to_k.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.25.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.25.attn.to_q.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.25.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.25.attn.to_v.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.25.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.25.norm.linear.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.25.norm.linear.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.25.proj_mlp.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.25.proj_mlp.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.25.proj_out.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.25.proj_out.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.26.attn.norm_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.26.attn.norm_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.26.attn.to_k.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.26.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.26.attn.to_q.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.26.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.26.attn.to_v.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.26.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.26.norm.linear.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.26.norm.linear.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.26.proj_mlp.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.26.proj_mlp.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.26.proj_out.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.26.proj_out.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.27.attn.norm_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.27.attn.norm_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.27.attn.to_k.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.27.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.27.attn.to_q.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.27.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.27.attn.to_v.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.27.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.27.norm.linear.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.27.norm.linear.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.27.proj_mlp.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.27.proj_mlp.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.27.proj_out.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.27.proj_out.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.28.attn.norm_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.28.attn.norm_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.28.attn.to_k.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.28.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.28.attn.to_q.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.28.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.28.attn.to_v.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.28.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.28.norm.linear.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.28.norm.linear.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.28.proj_mlp.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.28.proj_mlp.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.28.proj_out.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.28.proj_out.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.29.attn.norm_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.29.attn.norm_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.29.attn.to_k.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.29.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.29.attn.to_q.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.29.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.29.attn.to_v.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.29.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
|
||||||
|
"single_transformer_blocks.29.norm.linear.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.29.norm.linear.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.29.proj_mlp.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.29.proj_mlp.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.29.proj_out.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.29.proj_out.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.3.attn.norm_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.3.attn.norm_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.3.attn.to_k.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.3.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.3.attn.to_q.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.3.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.3.attn.to_v.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.3.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.3.norm.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.3.norm.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.3.proj_mlp.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.3.proj_mlp.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.3.proj_out.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.3.proj_out.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.30.attn.norm_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.30.attn.norm_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.30.attn.to_k.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.30.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.30.attn.to_q.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.30.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.30.attn.to_v.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.30.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.30.norm.linear.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.30.norm.linear.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.30.proj_mlp.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.30.proj_mlp.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.30.proj_out.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.30.proj_out.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.31.attn.norm_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.31.attn.norm_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.31.attn.to_k.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.31.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.31.attn.to_q.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.31.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.31.attn.to_v.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.31.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.31.norm.linear.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.31.norm.linear.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.31.proj_mlp.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.31.proj_mlp.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.31.proj_out.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.31.proj_out.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.32.attn.norm_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.32.attn.norm_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.32.attn.to_k.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.32.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.32.attn.to_q.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.32.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.32.attn.to_v.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.32.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.32.norm.linear.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.32.norm.linear.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.32.proj_mlp.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.32.proj_mlp.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.32.proj_out.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.32.proj_out.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.33.attn.norm_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.33.attn.norm_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.33.attn.to_k.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.33.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.33.attn.to_q.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.33.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.33.attn.to_v.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.33.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.33.norm.linear.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.33.norm.linear.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.33.proj_mlp.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.33.proj_mlp.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.33.proj_out.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.33.proj_out.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.34.attn.norm_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.34.attn.norm_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.34.attn.to_k.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.34.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.34.attn.to_q.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.34.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.34.attn.to_v.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.34.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.34.norm.linear.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.34.norm.linear.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.34.proj_mlp.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.34.proj_mlp.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.34.proj_out.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.34.proj_out.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.35.attn.norm_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.35.attn.norm_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.35.attn.to_k.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.35.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.35.attn.to_q.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.35.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.35.attn.to_v.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.35.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.35.norm.linear.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.35.norm.linear.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.35.proj_mlp.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.35.proj_mlp.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.35.proj_out.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.35.proj_out.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.36.attn.norm_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.36.attn.norm_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.36.attn.to_k.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.36.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.36.attn.to_q.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.36.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.36.attn.to_v.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.36.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.36.norm.linear.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.36.norm.linear.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.36.proj_mlp.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.36.proj_mlp.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.36.proj_out.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.36.proj_out.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.37.attn.norm_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.37.attn.norm_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.37.attn.to_k.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.37.attn.to_k.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.37.attn.to_q.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.37.attn.to_q.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.37.attn.to_v.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.37.attn.to_v.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.37.norm.linear.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.37.norm.linear.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.37.proj_mlp.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.37.proj_mlp.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.37.proj_out.bias": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.37.proj_out.weight": "diffusion_pytorch_model-00002-of-00002.safetensors",
"single_transformer_blocks.4.attn.norm_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.4.attn.norm_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.4.attn.to_k.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.4.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.4.attn.to_q.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.4.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.4.attn.to_v.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.4.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.4.norm.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.4.norm.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.4.proj_mlp.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.4.proj_mlp.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.4.proj_out.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.4.proj_out.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.5.attn.norm_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.5.attn.norm_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.5.attn.to_k.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.5.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.5.attn.to_q.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.5.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.5.attn.to_v.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.5.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.5.norm.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.5.norm.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.5.proj_mlp.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.5.proj_mlp.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.5.proj_out.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.5.proj_out.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.6.attn.norm_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.6.attn.norm_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.6.attn.to_k.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.6.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.6.attn.to_q.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.6.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.6.attn.to_v.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.6.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.6.norm.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.6.norm.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.6.proj_mlp.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.6.proj_mlp.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.6.proj_out.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.6.proj_out.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.7.attn.norm_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.7.attn.norm_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.7.attn.to_k.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.7.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.7.attn.to_q.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.7.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.7.attn.to_v.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.7.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.7.norm.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.7.norm.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.7.proj_mlp.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.7.proj_mlp.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.7.proj_out.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.7.proj_out.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.8.attn.norm_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.8.attn.norm_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.8.attn.to_k.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.8.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.8.attn.to_q.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.8.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.8.attn.to_v.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.8.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.8.norm.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.8.norm.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.8.proj_mlp.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.8.proj_mlp.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.8.proj_out.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.8.proj_out.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.9.attn.norm_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.9.attn.norm_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.9.attn.to_k.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.9.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.9.attn.to_q.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.9.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.9.attn.to_v.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.9.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.9.norm.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.9.norm.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.9.proj_mlp.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.9.proj_mlp.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.9.proj_out.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"single_transformer_blocks.9.proj_out.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"time_text_embed.guidance_embedder.linear_1.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"time_text_embed.guidance_embedder.linear_1.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"time_text_embed.guidance_embedder.linear_2.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"time_text_embed.guidance_embedder.linear_2.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"time_text_embed.text_embedder.linear_1.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"time_text_embed.text_embedder.linear_1.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"time_text_embed.text_embedder.linear_2.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"time_text_embed.text_embedder.linear_2.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"time_text_embed.timestep_embedder.linear_1.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"time_text_embed.timestep_embedder.linear_1.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"time_text_embed.timestep_embedder.linear_2.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"time_text_embed.timestep_embedder.linear_2.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.attn.add_k_proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.attn.add_k_proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.attn.add_q_proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.attn.add_q_proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.attn.add_v_proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.attn.add_v_proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.attn.norm_added_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.attn.norm_added_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.attn.norm_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.attn.norm_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.attn.to_add_out.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.attn.to_add_out.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.attn.to_k.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.attn.to_out.0.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.attn.to_out.0.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.attn.to_q.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.attn.to_v.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.ff.net.0.proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.ff.net.0.proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.ff.net.2.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.ff.net.2.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.ff_context.net.0.proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.ff_context.net.0.proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.ff_context.net.2.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.ff_context.net.2.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.norm1.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.norm1.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.norm1_context.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.0.norm1_context.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.attn.add_k_proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.attn.add_k_proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.attn.add_q_proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.attn.add_q_proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.attn.add_v_proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.attn.add_v_proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.attn.norm_added_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.attn.norm_added_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.attn.norm_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.attn.norm_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.attn.to_add_out.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.attn.to_add_out.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.attn.to_k.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.attn.to_out.0.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.attn.to_out.0.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.attn.to_q.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.attn.to_v.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.ff.net.0.proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.ff.net.0.proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.ff.net.2.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.ff.net.2.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.ff_context.net.0.proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.ff_context.net.0.proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.ff_context.net.2.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.ff_context.net.2.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.norm1.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.norm1.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.norm1_context.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.1.norm1_context.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.attn.add_k_proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.attn.add_k_proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.attn.add_q_proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.attn.add_q_proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.attn.add_v_proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.attn.add_v_proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.attn.norm_added_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.attn.norm_added_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.attn.norm_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.attn.norm_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.attn.to_add_out.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.attn.to_add_out.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.attn.to_k.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.attn.to_out.0.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.attn.to_out.0.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.attn.to_q.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.attn.to_v.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.ff.net.0.proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.ff.net.0.proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.ff.net.2.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.ff.net.2.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.ff_context.net.0.proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.ff_context.net.0.proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.ff_context.net.2.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.ff_context.net.2.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.norm1.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.norm1.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.norm1_context.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.2.norm1_context.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.attn.add_k_proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.attn.add_k_proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.attn.add_q_proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.attn.add_q_proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.attn.add_v_proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.attn.add_v_proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.attn.norm_added_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.attn.norm_added_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.attn.norm_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.attn.norm_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.attn.to_add_out.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.attn.to_add_out.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.attn.to_k.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.attn.to_out.0.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.attn.to_out.0.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.attn.to_q.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.attn.to_v.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.ff.net.0.proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.ff.net.0.proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.ff.net.2.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.ff.net.2.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.ff_context.net.0.proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.ff_context.net.0.proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.ff_context.net.2.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.ff_context.net.2.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.norm1.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.norm1.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.norm1_context.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.3.norm1_context.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.attn.add_k_proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.attn.add_k_proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.attn.add_q_proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.attn.add_q_proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.attn.add_v_proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.attn.add_v_proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.attn.norm_added_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.attn.norm_added_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.attn.norm_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.attn.norm_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.attn.to_add_out.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.attn.to_add_out.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.attn.to_k.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.attn.to_out.0.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.attn.to_out.0.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.attn.to_q.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.attn.to_v.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.ff.net.0.proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.ff.net.0.proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.ff.net.2.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.ff.net.2.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.ff_context.net.0.proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.ff_context.net.0.proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.ff_context.net.2.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.ff_context.net.2.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.norm1.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.norm1.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.norm1_context.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.4.norm1_context.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.attn.add_k_proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.attn.add_k_proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.attn.add_q_proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.attn.add_q_proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.attn.add_v_proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.attn.add_v_proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.attn.norm_added_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.attn.norm_added_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.attn.norm_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.attn.norm_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.attn.to_add_out.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.attn.to_add_out.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.attn.to_k.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.attn.to_out.0.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.attn.to_out.0.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.attn.to_q.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.attn.to_v.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.ff.net.0.proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.ff.net.0.proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.ff.net.2.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.ff.net.2.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.ff_context.net.0.proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.ff_context.net.0.proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.ff_context.net.2.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.ff_context.net.2.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.norm1.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.norm1.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.norm1_context.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.5.norm1_context.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.attn.add_k_proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.attn.add_k_proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.attn.add_q_proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.attn.add_q_proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.attn.add_v_proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.attn.add_v_proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.attn.norm_added_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.attn.norm_added_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.attn.norm_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.attn.norm_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.attn.to_add_out.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.attn.to_add_out.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.attn.to_k.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.attn.to_out.0.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.attn.to_out.0.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.attn.to_q.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.attn.to_v.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.ff.net.0.proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.ff.net.0.proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.ff.net.2.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.ff.net.2.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.ff_context.net.0.proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.ff_context.net.0.proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.ff_context.net.2.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.ff_context.net.2.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.norm1.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.norm1.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.norm1_context.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.6.norm1_context.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.attn.add_k_proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.attn.add_k_proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.attn.add_q_proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.attn.add_q_proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.attn.add_v_proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.attn.add_v_proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.attn.norm_added_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.attn.norm_added_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.attn.norm_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.attn.norm_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.attn.to_add_out.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.attn.to_add_out.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.attn.to_k.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.attn.to_out.0.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.attn.to_out.0.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.attn.to_q.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.attn.to_v.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.attn.to_v.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.ff.net.0.proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.ff.net.0.proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.ff.net.2.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.ff.net.2.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.ff_context.net.0.proj.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.ff_context.net.0.proj.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.ff_context.net.2.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.ff_context.net.2.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.norm1.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.norm1.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.norm1_context.linear.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"transformer_blocks.7.norm1_context.linear.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
"x_embedder.bias": "diffusion_pytorch_model-00001-of-00002.safetensors",
"x_embedder.weight": "diffusion_pytorch_model-00001-of-00002.safetensors"
}
}
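The entries above close out the sharded-checkpoint index for the transformer: every parameter name is mapped to the `.safetensors` shard that stores it, so a loader only has to open the shards it actually needs. As a minimal sketch of what this mapping encodes (the index filename, the local `transformer/` directory, and the parameter picked here are illustrative assumptions, not taken from this diff), a single tensor can be resolved through the index like this:

```python
import json
from safetensors import safe_open

# Illustrative only: the index filename and local directory layout are assumptions.
index_path = "transformer/diffusion_pytorch_model.safetensors.index.json"
with open(index_path) as f:
    weight_map = json.load(f)["weight_map"]

name = "transformer_blocks.0.attn.to_q.weight"
shard_file = weight_map[name]  # e.g. "diffusion_pytorch_model-00001-of-00002.safetensors"

# Open only the shard that holds this parameter and read the tensor.
with safe_open(f"transformer/{shard_file}", framework="pt", device="cpu") as shard:
    tensor = shard.get_tensor(name)
print(name, tuple(tensor.shape))
```

In normal use `FluxPipeline.from_pretrained` resolves this index automatically; the sketch only spells out what the weight map describes.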
@ -0,0 +1,38 @@
{
"_class_name": "AutoencoderKL",
"_diffusers_version": "0.32.0.dev0",
"_name_or_path": "/mnt/localnvme/huggingface_cache/hub/models--black-forest-labs--FLUX.1-dev/snapshots/0ef5fff789c832c5c7f4e127f94c8b54bbcced44/vae",
"act_fn": "silu",
"block_out_channels": [
128,
256,
512,
512
],
"down_block_types": [
"DownEncoderBlock2D",
"DownEncoderBlock2D",
"DownEncoderBlock2D",
"DownEncoderBlock2D"
],
"force_upcast": true,
"in_channels": 3,
"latent_channels": 16,
"latents_mean": null,
"latents_std": null,
"layers_per_block": 2,
"mid_block_add_attention": true,
"norm_num_groups": 32,
"out_channels": 3,
"sample_size": 1024,
"scaling_factor": 0.3611,
"shift_factor": 0.1159,
"up_block_types": [
"UpDecoderBlock2D",
"UpDecoderBlock2D",
"UpDecoderBlock2D",
"UpDecoderBlock2D"
],
"use_post_quant_conv": false,
"use_quant_conv": false
}
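The file added next in this diff is the configuration of the FLUX autoencoder (`AutoencoderKL`): a 16-channel latent space, four down/up blocks, and the `scaling_factor`/`shift_factor` pair used to normalize latents. A minimal sketch of loading it on its own through diffusers follows; the repo id and the `vae` subfolder are assumptions based on the `_name_or_path` above and the usual diffusers pipeline layout, not statements from this config.

```python
import torch
from diffusers import AutoencoderKL

# Sketch only: repo id and subfolder are assumed, not stated in this config file.
vae = AutoencoderKL.from_pretrained(
    "Freepik/flux.1-lite-8B", subfolder="vae", torch_dtype=torch.bfloat16
).to("cuda")

# The config's scaling_factor / shift_factor normalize the 16-channel latents,
# e.g. when decoding: vae.decode(latents / scaling_factor + shift_factor).sample
print(vae.config.latent_channels, vae.config.scaling_factor, vae.config.shift_factor)
```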