
first commit

ailab 2024-06-07 08:11:47 +00:00
commit 936fbfe81d
23 changed files with 156041 additions and 0 deletions

37
.gitattributes vendored Normal file

@ -0,0 +1,37 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
SimSun.ttf filter=lfs diff=lfs merge=lfs -text
assets/apple.jpeg filter=lfs diff=lfs merge=lfs -text

727
README.md Normal file

@ -0,0 +1,727 @@
---
language:
- zh
- en
tags:
- qwen
pipeline_tag: text-generation
inference: false
---
# Qwen-VL-Chat
<br>
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/logo_vl.jpg" width="400"/>
</p>
<br>
<p align="center">
Qwen-VL
<a href="https://huggingface.co/Qwen/Qwen-VL">🤗</a>
<a href="https://modelscope.cn/models/qwen/Qwen-VL/summary">🤖</a>&nbsp
Qwen-VL-Chat
<a href="https://huggingface.co/Qwen/Qwen-VL-Chat">🤗</a>
<a href="https://modelscope.cn/models/qwen/Qwen-VL-Chat/summary">🤖</a>&nbsp
(Int4:
<a href="https://huggingface.co/Qwen/Qwen-VL-Chat-Int4">🤗</a>
<a href="https://modelscope.cn/models/qwen/Qwen-VL-Chat-Int4/summary">🤖</a>&nbsp)
Qwen-VL-Plus
<a href="https://huggingface.co/spaces/Qwen/Qwen-VL-Plus">🤗</a>
<a href="https://modelscope.cn/studios/qwen/Qwen-VL-Chat-Demo/summary">🤖</a>&nbsp
Qwen-VL-Max
<a href="https://huggingface.co/spaces/Qwen/Qwen-VL-Max">🤗</a>
<a href="https://modelscope.cn/studios/qwen/Qwen-VL-Max/summary">🤖</a>&nbsp
<br>
<a href="https://tongyi.aliyun.com/qianwen">Web</a>&nbsp&nbsp | &nbsp&nbsp
<a href="https://help.aliyun.com/zh/dashscope/developer-reference/vl-plus-quick-start">API</a>&nbsp&nbsp | &nbsp&nbsp
<a href="assets/wechat.png">WeChat</a>&nbsp&nbsp | &nbsp&nbsp
<a href="https://discord.gg/z3GAxXZ9Ce">Discord</a>&nbsp&nbsp | &nbsp&nbsp
<a href="https://arxiv.org/abs/2308.12966">Paper</a>&nbsp&nbsp | &nbsp&nbsp
<a href="TUTORIAL.md">Tutorial</a>
</p>
<br>
**Qwen-VL** 是阿里云研发的大规模视觉语言模型（Large Vision Language Model, LVLM）。Qwen-VL 可以以图像、文本、检测框作为输入，并以文本和检测框作为输出。Qwen-VL 系列模型性能强大，具备多语言对话、多图交错对话等能力，并支持中文开放域定位和细粒度图像识别与理解。
**Qwen-VL** (Qwen Large Vision Language Model) is the visual multimodal version of the large model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-VL accepts images, text, and bounding boxes as inputs, and outputs text and bounding boxes. Qwen-VL supports multilingual dialogue, interleaved multi-image dialogue, Chinese open-domain grounding, and fine-grained image recognition and understanding.
目前我们提供了Qwen-VL和Qwen-VL-Chat两个模型，分别为预训练模型和Chat模型。如果想了解更多关于模型的信息，请点击[链接](https://github.com/QwenLM/Qwen-VL/blob/master/visual_memo.md)查看我们的技术备忘录。本仓库为Qwen-VL-Chat仓库。
We release Qwen-VL and Qwen-VL-Chat, which are the pretrained model and the Chat model respectively. For more details about Qwen-VL, please refer to our [technical memo](https://github.com/QwenLM/Qwen-VL/blob/master/visual_memo.md). This repo is the one for Qwen-VL-Chat.
<br>
## 安装要求 (Requirements)
* python 3.8及以上版本
* pytorch 1.12及以上版本，推荐2.0及以上版本
* 建议使用CUDA 11.4及以上（GPU用户需考虑此选项）
* python 3.8 and above
* pytorch 1.12 and above, 2.0 and above are recommended
* CUDA 11.4 and above are recommended (this is for GPU users)
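A quick way to confirm that a local environment meets these requirements is sketched below (a minimal check assuming `torch` is already installed; adjust as needed):

```python
# Minimal environment check for the requirements listed above.
import sys
import torch

assert sys.version_info >= (3, 8), "Python 3.8 or above is required"
print("torch:", torch.__version__)               # 1.12+ required, 2.0+ recommended
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA runtime:", torch.version.cuda)   # 11.4+ recommended for GPU users
```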
<br>
## 快速开始 (Quickstart)
我们提供简单的示例来说明如何利用 🤗 Transformers 快速使用Qwen-VL-Chat。
在开始前,请确保你已经配置好环境并安装好相关的代码包。最重要的是,确保你满足上述要求,然后安装相关的依赖库。
Below, we provide simple examples to show how to use Qwen-VL-Chat with 🤗 Transformers.
Before running the code, make sure you have set up the environment and installed the required packages. Make sure you meet the above requirements, and then install the dependent libraries.
```bash
pip install -r requirements.txt
```
接下来你可以开始使用Transformers来使用我们的模型。关于视觉模块的更多用法，请参考[教程](TUTORIAL.md)。
Now you can start with Transformers. For more usage of the vision encoder, please refer to the [tutorial](TUTORIAL_zh.md).
#### 🤗 Transformers
To use Qwen-VL-Chat for inference, all you need to do is input a few lines of code as demonstrated below. However, **please make sure that you are using the latest code.**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
import torch
torch.manual_seed(1234)
# Note: The default behavior now has injection attack prevention off.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
# use bf16
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", device_map="auto", trust_remote_code=True, bf16=True).eval()
# use fp16
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", device_map="auto", trust_remote_code=True, fp16=True).eval()
# use cpu only
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", device_map="cpu", trust_remote_code=True).eval()
# use cuda device
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", device_map="cuda", trust_remote_code=True).eval()
# Specify hyperparameters for generation (No need to do this if you are using transformers>=4.32.0)
# model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
# 1st dialogue turn
query = tokenizer.from_list_format([
{'image': 'https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg'},
{'text': '这是什么'},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
# 图中是一名年轻女子在沙滩上和她的狗玩耍,狗的品种可能是拉布拉多。她们坐在沙滩上,狗的前腿抬起来,似乎在和人类击掌。两人之间充满了信任和爱。
# 2nd dialogue turn
response, history = model.chat(tokenizer, '输出"击掌"的检测框', history=history)
print(response)
# <ref>击掌</ref><box>(517,508),(589,611)</box>
image = tokenizer.draw_bbox_on_latest_picture(response, history)
if image:
    image.save('1.jpg')
else:
    print("no box")
```
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo_highfive.jpg" width="500"/>
</p>
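The `from_list_format` interface used above also accepts several images and text segments in a single query, which is one way to drive the interleaved multi-image dialogue mentioned earlier. A minimal sketch reusing the `tokenizer` and `model` from the quickstart (the second image path and the question are placeholders):

```python
# Sketch: interleave two images and a text prompt in one query.
query = tokenizer.from_list_format([
    {'image': 'https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg'},
    {'image': 'path/to/your_second_image.jpg'},   # placeholder, replace with a real image
    {'text': 'What do these two images have in common?'},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
```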
<br>
## 量化 (Quantization)
### 用法 (Usage)
当前我们提供了基于[AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ)的量化方案，并提供了Qwen-VL-Chat的Int4量化版本Qwen-VL-Chat-Int4（[点击此处](https://huggingface.co/Qwen/Qwen-VL-Chat-Int4)）。该模型在效果评测上几乎无损，并在显存占用和推理速度上具有明显优势。
下文说明如何使用该量化模型。开始之前，请确保你满足要求（如torch 2.0及以上、transformers 4.32.0及以上，等）并安装所需的代码库：
We provide a new solution based on [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ), and release an Int4 quantized model for Qwen-VL-Chat, Qwen-VL-Chat-Int4 ([Click here](https://huggingface.co/Qwen/Qwen-VL-Chat-Int4)), which achieves nearly lossless model quality while reducing memory cost and improving inference speed.
Here we demonstrate how to use our provided quantized models for inference. Before you start, make sure you meet the requirements (e.g., torch 2.0 and above, transformers 4.32.0 and above, etc.) and install the required packages:
```bash
pip install optimum
git clone https://github.com/JustinLin610/AutoGPTQ.git && cd AutoGPTQ
pip install -v .
```
如遇到安装 `auto-gptq` 的问题,建议您前往官方[repo](https://github.com/PanQiWei/AutoGPTQ) 寻找合适的wheel。
随后你便可以按照上述用法,轻松调用量化模型:
If you meet problems installing `auto-gptq`, we advise you to check out the official [repo](https://github.com/PanQiWei/AutoGPTQ) to find a wheel.
Then you can load the quantized model easily and run inference just as usual:
```python
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen-VL-Chat-Int4",
device_map="auto",
trust_remote_code=True
).eval()
# Either a local path or a URL between <img></img> tags.
image_path = 'https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg'
response, history = model.chat(tokenizer, query=f'<img>{image_path}</img>这是什么', history=None)
print(response)
```
### 效果评测 (Performance)
我们列出不同精度下模型在评测基准 **[TouchStone](https://github.com/OFA-Sys/TouchStone)** 上的表现,并发现量化模型并没有显著性能损失。结果如下所示:
We illustrate the model performance of both BF16 and Int4 models on the benchmark **[TouchStone](https://github.com/OFA-Sys/TouchStone)**, and we find that the quantized model does not suffer from significant performance degradation. Results are shown below:
| Quantization | ZH. | EN |
| ------------ | :--------: | :-----------: |
| BF16 | 401.2 | 645.2 |
| Int4 | 386.6 | 651.4 |
### 推理速度 (Inference Speed)
我们测算了在输入一张图片（即258个token）的条件下，BF16和Int4的模型生成1792 (2048-258) 和 7934 (8192-258) 个token的平均速度。
We measured the average inference speed (tokens/s) of generating 1792 (2048-258) and 7934 (8192-258) tokens with the context of an image (which takes 258 tokens) under BF16 precision and Int4 quantization, respectively.
| Quantization | Speed (2048 tokens) | Speed (8192 tokens) |
| ------------ | :-----------------: | :-----------------: |
| BF16 | 28.87 | 24.32 |
| Int4 | 37.79 | 34.34 |
推理速度测算是在单卡 A100-SXM4-80G GPU 上运行，使用PyTorch 2.0.1及CUDA 11.4。
The profiling runs on a single A100-SXM4-80G GPU with PyTorch 2.0.1 and CUDA 11.4.
### GPU显存占用 (GPU Memory Usage)
我们还测算了在一张图片输入的条件下，BF16和Int4模型生成1792 (2048-258) 和 7934 (8192-258) 个token所需显存。结果如下所示：
We also profile the peak GPU memory usage for encoding 1792 (2048-258) tokens (including an image) as context (and generating single token) and generating 7934 (8192-258) tokens (with an image as context) under BF16 or Int4 quantization level, respectively. The results are shown below.
| Quantization | Peak Usage for Encoding 2048 Tokens | Peak Usage for Generating 8192 Tokens |
| ------------ | :---------------------------------: | :-----------------------------------: |
| BF16 | 22.60GB | 28.01GB |
| Int4 | 11.82GB | 17.23GB |
上述速度和显存测算使用[此脚本](https://qianwen-res.oss-cn-beijing.aliyuncs.com/profile_mm.py)完成。
The above speed and memory profiling are conducted using [this script](https://qianwen-res.oss-cn-beijing.aliyuncs.com/profile_mm.py).
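The linked script is the reference for the numbers above; the sketch below only illustrates the general shape of such a measurement and is not the official profiling code (a GPU is assumed, and the prompt and token budget are illustrative):

```python
# Rough sketch: measure generation speed (tokens/s) and peak GPU memory.
import time
import torch

prompt = tokenizer.from_list_format([
    {'image': 'https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg'},
    {'text': 'Describe the image in detail.'},
])
input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(model.device)

torch.cuda.reset_peak_memory_stats()
start = time.time()
out = model.generate(input_ids, max_new_tokens=1792, min_new_tokens=1792, do_sample=False)
elapsed = time.time() - start

new_tokens = out.shape[1] - input_ids.shape[1]
print(f"{new_tokens / elapsed:.2f} tokens/s")
print(f"peak GPU memory: {torch.cuda.max_memory_allocated() / 1024 ** 3:.2f} GB")
```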
<br>
## 评测 (Evaluation)
我们从两个角度评测了两个模型的能力:
1. 在**英文标准 Benchmark** 上评测模型的基础任务能力。目前评测了四大类多模态任务:
- Zero-shot Caption: 评测模型在未见过数据集上的零样本图片描述能力;
- General VQA: 评测模型的通用问答能力,例如判断题、颜色、个数、类目等问答能力;
- Text-based VQA：评测模型对于图片中文字相关的识别/问答能力，例如文档问答、图表问答、文字问答等；
- Referring Expression Comprehension：评测模型给定物体描述画检测框的能力；
2. **试金石 (TouchStone)**：为了评测模型整体的图文对话能力和人类对齐水平。我们为此构建了一个基于 GPT4 打分来评测 LVLM 模型的 Benchmark：TouchStone。在 TouchStone-v0.1 中：
- 评测基准总计涵盖 300+张图片、800+道题目、27个类别。包括基础属性问答、人物地标问答、影视作品问答、视觉推理、反事实推理、诗歌创作、故事写作、商品比较、图片解题等**尽可能广泛的类别**。
- 为了弥补目前 GPT4 无法直接读取图片的缺陷,我们给所有的带评测图片提供了**人工标注的充分详细描述**,并且将图片的详细描述、问题和模型的输出结果一起交给 GPT4 打分。
- 评测同时包含英文版本和中文版本。
评测结果如下:
We evaluated the model's ability from two perspectives:
1. **Standard Benchmarks**: We evaluate the model's basic task capabilities on four major categories of multimodal tasks:
- Zero-shot Caption: Evaluate the model's zero-shot image captioning ability on unseen datasets;
- General VQA: Evaluate the model's general question-answering ability on pictures, such as judgment, color, number, and category questions;
- Text-based VQA: Evaluate the model's ability to recognize text in pictures, such as document QA, chart QA, etc;
- Referring Expression Comprehension: Evaluate the ability to localize a target object in an image described by a referring expression.
2. **TouchStone**: To evaluate the overall text-image dialogue capability and alignment level with humans, we have constructed a benchmark called TouchStone, which is based on scoring with GPT4 to evaluate the LVLM model.
- The TouchStone benchmark covers a total of 300+ images, 800+ questions, and 27 categories, such as attribute-based Q&A, celebrity recognition, writing poetry, summarizing multiple images, product comparison, math problem solving, etc.;
- In order to break the current limitation of GPT4 in terms of direct image input, TouchStone provides fine-grained image annotations by human labeling. These detailed annotations, along with the questions and the model's output, are then presented to GPT4 for scoring.
- The benchmark includes both English and Chinese versions.
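As a purely hypothetical illustration of this scoring protocol, the sketch below assembles the three pieces into a single prompt for the judge model; the function name, fields, and scoring instruction are invented for the example and are not the official TouchStone prompt:

```python
# Hypothetical sketch: combine the human image annotation, the question, and the
# model's answer into one scoring request for a GPT-4-style judge.
def build_scoring_prompt(image_annotation: str, question: str, model_answer: str) -> str:
    return (
        "You are grading a vision-language model.\n"
        f"Human description of the image:\n{image_annotation}\n\n"
        f"Question: {question}\n"
        f"Model answer: {model_answer}\n\n"
        "Rate the answer's helpfulness and accuracy on a 0-10 scale."
    )
```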
The results of the evaluation are as follows:
Qwen-VL outperforms current SOTA generalist models on multiple VL tasks and has a more comprehensive coverage in terms of capability range.
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/radar.png" width="600"/>
</p>
### 零样本图像描述 & 通用视觉问答 (Zero-shot Captioning & General VQA)
<table>
<thead>
<tr>
<th rowspan="2">Model type</th>
<th rowspan="2">Model</th>
<th colspan="2">Zero-shot Captioning</th>
<th colspan="5">General VQA</th>
</tr>
<tr>
<th>NoCaps</th>
<th>Flickr30K</th>
<th>VQAv2<sup>dev</sup></th>
<th>OK-VQA</th>
<th>GQA</th>
<th>SciQA-Img<br>(0-shot)</th>
<th>VizWiz<br>(0-shot)</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td rowspan="10">Generalist<br>Models</td>
<td>Flamingo-9B</td>
<td>-</td>
<td>61.5</td>
<td>51.8</td>
<td>44.7</td>
<td>-</td>
<td>-</td>
<td>28.8</td>
</tr>
<tr>
<td>Flamingo-80B</td>
<td>-</td>
<td>67.2</td>
<td>56.3</td>
<td>50.6</td>
<td>-</td>
<td>-</td>
<td>31.6</td>
</tr>
<tr>
<td>Unified-IO-XL</td>
<td>100.0</td>
<td>-</td>
<td>77.9</td>
<td>54.0</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>Kosmos-1</td>
<td>-</td>
<td>67.1</td>
<td>51.0</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>29.2</td>
</tr>
<tr>
<td>Kosmos-2</td>
<td>-</td>
<td>66.7</td>
<td>45.6</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>BLIP-2 (Vicuna-13B)</td>
<td>103.9</td>
<td>71.6</td>
<td>65.0</td>
<td>45.9</td>
<td>32.3</td>
<td>61.0</td>
<td>19.6</td>
</tr>
<tr>
<td>InstructBLIP (Vicuna-13B)</td>
<td><strong>121.9</strong></td>
<td>82.8</td>
<td>-</td>
<td>-</td>
<td>49.5</td>
<td>63.1</td>
<td>33.4</td>
</tr>
<tr>
<td>Shikra (Vicuna-13B)</td>
<td>-</td>
<td>73.9</td>
<td>77.36</td>
<td>47.16</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td><strong>Qwen-VL (Qwen-7B)</strong></td>
<td>121.4</td>
<td><b>85.8</b></td>
<td><b>78.8</b></td>
<td><b>58.6</b></td>
<td><b>59.3</b></td>
<td>67.1</td>
<td>35.2</td>
</tr>
<!-- <tr>
<td>Qwen-VL (4-shot)</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>63.6</td>
<td>-</td>
<td>-</td>
<td>39.1</td>
</tr> -->
<tr>
<td>Qwen-VL-Chat</td>
<td>120.2</td>
<td>81.0</td>
<td>78.2</td>
<td>56.6</td>
<td>57.5</td>
<td><b>68.2</b></td>
<td><b>38.9</b></td>
</tr>
<!-- <tr>
<td>Qwen-VL-Chat (4-shot)</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>60.6</td>
<td>-</td>
<td>-</td>
<td>44.45</td>
</tr> -->
<tr>
<td>Previous SOTA<br>(Per Task Fine-tuning)</td>
<td>-</td>
<td>127.0<br>(PALI-17B)</td>
<td>84.5<br>(InstructBLIP<br>-FlanT5-XL)</td>
<td>86.1<br>(PALI-X<br>-55B)</td>
<td>66.1<br>(PALI-X<br>-55B)</td>
<td>72.1<br>(CFR)</td>
<td>92.53<br>(LLaVa+<br>GPT-4)</td>
<td>70.9<br>(PALI-X<br>-55B)</td>
</tr>
</tbody>
</table>
- 在 Zero-shot Caption 中，Qwen-VL 在 Flickr30K 数据集上取得了 **SOTA** 的结果，并在 Nocaps 数据集上取得了和 InstructBlip 可竞争的结果。
- 在 General VQA 中，Qwen-VL 取得了 LVLM 模型同等量级和设定下 **SOTA** 的结果。
- For zero-shot image captioning, Qwen-VL achieves the **SOTA** on Flickr30K and competitive results on Nocaps with InstructBlip.
- For general VQA, Qwen-VL achieves the **SOTA** under the same generalist LVLM scale settings.
### 文本导向的视觉问答 (Text-oriented VQA)
<table>
<thead>
<tr>
<th>Model type</th>
<th>Model</th>
<th>TextVQA</th>
<th>DocVQA</th>
<th>ChartQA</th>
<th>AI2D</th>
<th>OCR-VQA</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td rowspan="5">Generalist Models</td>
<td>BLIP-2 (Vicuna-13B)</td>
<td>42.4</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>InstructBLIP (Vicuna-13B)</td>
<td>50.7</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>mPLUG-DocOwl (LLaMA-7B)</td>
<td>52.6</td>
<td>62.2</td>
<td>57.4</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>Pic2Struct-Large (1.3B)</td>
<td>-</td>
<td><b>76.6</b></td>
<td>58.6</td>
<td>42.1</td>
<td>71.3</td>
</tr>
<tr>
<td>Qwen-VL (Qwen-7B)</td>
<td><b>63.8</b></td>
<td>65.1</td>
<td><b>65.7</b></td>
<td><b>62.3</b></td>
<td><b>75.7</b></td>
</tr>
<tr>
<td>Specialist SOTAs<br>(Specialist/Finetuned)</td>
<td>PALI-X-55B (Single-task FT)<br>(Without OCR Pipeline)</td>
<td>71.44</td>
<td>80.0</td>
<td>70.0</td>
<td>81.2</td>
<td>75.0</td>
</tr>
</tbody>
</table>
- 在文字相关的识别/问答评测上,取得了当前规模下通用 LVLM 达到的最好结果。
- 分辨率对上述某几个评测非常重要,大部分 224 分辨率的开源 LVLM 模型无法完成以上评测或只能通过切图的方式解决。Qwen-VL 将分辨率提升到 448可以直接以端到端的方式进行以上评测。Qwen-VL 在很多任务上甚至超过了 1024 分辨率的 Pic2Struct-Large 模型。
- In text-related recognition/QA evaluation, Qwen-VL achieves the SOTA under the generalist LVLM scale settings.
- Resolution is important for several above evaluations. While most open-source LVLM models with 224 resolution are incapable of these evaluations or can only solve these by cutting images, Qwen-VL scales the resolution to 448 so that it can be evaluated end-to-end. Qwen-VL even outperforms Pic2Struct-Large models of 1024 resolution on some tasks.
### 细粒度视觉定位 (Referring Expression Comprehension)
<table>
<thead>
<tr>
<th rowspan="2">Model type</th>
<th rowspan="2">Model</th>
<th colspan="3">RefCOCO</th>
<th colspan="3">RefCOCO+</th>
<th colspan="2">RefCOCOg</th>
<th>GRIT</th>
</tr>
<tr>
<th>val</th>
<th>test-A</th>
<th>test-B</th>
<th>val</th>
<th>test-A</th>
<th>test-B</th>
<th>val-u</th>
<th>test-u</th>
<th>refexp</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td rowspan="8">Generalist Models</td>
<td>GPV-2</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>51.50</td>
</tr>
<tr>
<td>OFA-L*</td>
<td>79.96</td>
<td>83.67</td>
<td>76.39</td>
<td>68.29</td>
<td>76.00</td>
<td>61.75</td>
<td>67.57</td>
<td>67.58</td>
<td>61.70</td>
</tr>
<tr>
<td>Unified-IO</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td><b>78.61</b></td>
</tr>
<tr>
<td>VisionLLM-H</td>
<td>-</td>
<td>86.70</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>Shikra-7B</td>
<td>87.01</td>
<td>90.61</td>
<td>80.24 </td>
<td>81.60</td>
<td>87.36</td>
<td>72.12</td>
<td>82.27</td>
<td>82.19</td>
<td>69.34</td>
</tr>
<tr>
<td>Shikra-13B</td>
<td>87.83 </td>
<td>91.11</td>
<td>81.81</td>
<td>82.89</td>
<td>87.79</td>
<td>74.41</td>
<td>82.64</td>
<td>83.16</td>
<td>69.03</td>
</tr>
<tr>
<td>Qwen-VL-7B</td>
<td><b>89.36</b></td>
<td>92.26</td>
<td><b>85.34</b></td>
<td><b>83.12</b></td>
<td>88.25</td>
<td><b>77.21</b></td>
<td>85.58</td>
<td>85.48</td>
<td>78.22</td>
</tr>
<tr>
<td>Qwen-VL-7B-Chat</td>
<td>88.55</td>
<td><b>92.27</b></td>
<td>84.51</td>
<td>82.82</td>
<td><b>88.59</b></td>
<td>76.79</td>
<td><b>85.96</b></td>
<td><b>86.32</b></td>
<td>-</td>
</tr>
<tr>
<td rowspan="3">Specialist SOTAs<br>(Specialist/Finetuned)</td>
<td>G-DINO-L</td>
<td>90.56&nbsp;&nbsp;</td>
<td>93.19</td>
<td>88.24</td>
<td>82.75</td>
<td>88.95</td>
<td>75.92</td>
<td>86.13</td>
<td>87.02</td>
<td>-</td>
</tr>
<tr>
<td>UNINEXT-H</td>
<td>92.64 </td>
<td>94.33</td>
<td>91.46</td>
<td>85.24</td>
<td>89.63</td>
<td>79.79</td>
<td>88.73</td>
<td>89.37</td>
<td>-</td>
</tr>
<tr>
<td>ONE-PEACE</td>
<td>92.58 </td>
<td>94.18</td>
<td>89.26</td>
<td>88.77</td>
<td>92.21</td>
<td>83.23</td>
<td>89.22</td>
<td>89.27</td>
<td>-</td>
</tr>
</tbody>
</table>
- 在定位任务上，Qwen-VL 全面超过 Shikra-13B，取得了目前 Generalist LVLM 模型在 Refcoco 上的 **SOTA**。
- Qwen-VL 并没有在任何中文定位数据上训练过，但通过中文 Caption 数据和英文 Grounding 数据的训练，可以 Zero-shot 泛化出中文 Grounding 能力。
我们提供了以上**所有**评测脚本以供复现我们的实验结果。请阅读 [eval/EVALUATION.md](eval/EVALUATION.md) 了解更多信息。
- Qwen-VL achieves the **SOTA** in all above referring expression comprehension benchmarks.
- Qwen-VL has not been trained on any Chinese grounding data, but it can still generalize to Chinese grounding tasks in a zero-shot way by training on Chinese caption data and English grounding data.
We provide all of the above evaluation scripts for reproducing our experimental results. Please read [eval/EVALUATION.md](eval/EVALUATION.md) for more information.
### 闲聊能力测评 (Chat Evaluation)
TouchStone 是一个基于 GPT4 打分来评测 LVLM 模型的图文对话能力和人类对齐水平的基准。它涵盖了 300+张图片、800+道题目、27个类别包括基础属性、人物地标、视觉推理、诗歌创作、故事写作、商品比较、图片解题等**尽可能广泛的类别**。关于 TouchStone 的详细介绍,请参考[touchstone/README_CN.md](touchstone/README_CN.md)了解更多信息。
TouchStone is a benchmark based on scoring with GPT4 to evaluate the abilities of the LVLM model on text-image dialogue and alignment levels with humans. It covers a total of 300+ images, 800+ questions, and 27 categories, such as attribute-based Q&A, celebrity recognition, writing poetry, summarizing multiple images, product comparison, math problem solving, etc. Please read [touchstone/README.md](touchstone/README.md) for more information.
#### 英语 (English)
| Model | Score |
|---------------|-------|
| PandaGPT | 488.5 |
| MiniGPT4 | 531.7 |
| InstructBLIP | 552.4 |
| LLaMA-AdapterV2 | 590.1 |
| mPLUG-Owl | 605.4 |
| LLaVA | 602.7 |
| Qwen-VL-Chat | 645.2 |
#### 中文 (Chinese)
| Model | Score |
|---------------|-------|
| VisualGLM | 247.1 |
| Qwen-VL-Chat | 401.2 |
Qwen-VL-Chat 模型在中英文的对齐评测中均取得当前 LVLM 模型下的最好结果。
Qwen-VL-Chat has achieved the best results among current LVLMs in both the Chinese and English alignment evaluations.
<br>
## 常见问题 (FAQ)
如遇到问题，敬请查阅 [FAQ](https://github.com/QwenLM/Qwen-VL/blob/master/FAQ_zh.md)以及issue区，如仍无法解决再提交issue。
If you meet problems, please refer to the [FAQ](https://github.com/QwenLM/Qwen-VL/blob/master/FAQ.md) and the existing issues to search for a solution before you open a new issue.
<br>
## 使用协议 (License Agreement)
研究人员与开发者可使用Qwen-VL和Qwen-VL-Chat或进行二次开发。我们同样允许商业使用，具体细节请查看[LICENSE](https://github.com/QwenLM/Qwen-VL/blob/master/LICENSE)。如需商用，请填写[问卷](https://dashscope.console.aliyun.com/openModelApply/qianwen)申请。
Researchers and developers are free to use the code and model weights of both Qwen-VL and Qwen-VL-Chat. We also allow their commercial use. Check our license at [LICENSE](LICENSE) for more details.
<br>
## 引用 (Citation)
如果你觉得我们的论文和代码对你的研究有帮助，请考虑给我们一个 star :star: 和引用 :pencil: :)
If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil: :)
```BibTeX
@article{Qwen-VL,
title={Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities},
author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
journal={arXiv preprint arXiv:2308.12966},
year={2023}
}
```
<br>
## 联系我们 (Contact Us)
如果你想给我们的研发团队和产品团队留言请通过邮件qianwen_opensource@alibabacloud.com联系我们。
If you are interested to leave a message to either our research team or product team, feel free to send an email to qianwen_opensource@alibabacloud.com.

BIN
SimSun.ttf (Stored with Git LFS) Normal file

Binary file not shown.

49
config.json Normal file

@ -0,0 +1,49 @@
{
"_name_or_path": "./",
"architectures": [
"QWenLMHeadModel"
],
"attn_dropout_prob": 0.0,
"auto_map": {
"AutoConfig": "configuration_qwen.QWenConfig",
"AutoModelForCausalLM": "modeling_qwen.QWenLMHeadModel"
},
"bf16": false,
"emb_dropout_prob": 0.0,
"fp16": false,
"fp32": false,
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 22016,
"kv_channels": 128,
"layer_norm_epsilon": 1e-06,
"max_position_embeddings": 8192,
"model_type": "qwen",
"no_bias": true,
"num_attention_heads": 32,
"num_hidden_layers": 32,
"onnx_safe": null,
"rotary_emb_base": 10000,
"rotary_pct": 1.0,
"scale_attn_weights": true,
"seq_length": 2048,
"tie_word_embeddings": false,
"tokenizer_type": "QWenTokenizer",
"torch_dtype": "bfloat16",
"transformers_version": "4.31.0",
"use_cache": true,
"use_dynamic_ntk": true,
"use_flash_attn": false,
"use_logn_attn": true,
"visual": {
"heads": 16,
"image_size": 448,
"image_start_id": 151857,
"layers": 48,
"mlp_ratio": 4.9231,
"output_dim": 4096,
"patch_size": 14,
"width": 1664
},
"vocab_size": 151936
}
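For reference, this is the configuration that `AutoConfig` resolves through the `auto_map` entry above when `trust_remote_code=True` is passed; a minimal sketch of inspecting it:

```python
from transformers import AutoConfig

# Routed to configuration_qwen.QWenConfig via the auto_map above.
config = AutoConfig.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
print(type(config).__name__)                                    # QWenConfig
print(config.hidden_size, config.num_hidden_layers, config.seq_length)
```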

65
configuration_qwen.py Normal file

@ -0,0 +1,65 @@
# Copyright (c) Alibaba Cloud.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

from transformers import PretrainedConfig


class QWenConfig(PretrainedConfig):
    model_type = "qwen"
    keys_to_ignore_at_inference = ["past_key_values"]

    def __init__(
        self,
        vocab_size=151936,
        hidden_size=4096,
        num_hidden_layers=32,
        num_attention_heads=32,
        emb_dropout_prob=0.0,
        attn_dropout_prob=0.0,
        layer_norm_epsilon=1e-6,
        initializer_range=0.02,
        max_position_embeddings=8192,
        scale_attn_weights=True,
        use_cache=True,
        bf16=False,
        fp16=False,
        fp32=False,
        kv_channels=128,
        rotary_pct=1.0,
        rotary_emb_base=10000,
        use_dynamic_ntk=True,
        use_logn_attn=True,
        use_flash_attn="auto",
        intermediate_size=22016,
        no_bias=True,
        tie_word_embeddings=False,
        **kwargs,
    ):
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size
        self.intermediate_size = intermediate_size
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads
        self.emb_dropout_prob = emb_dropout_prob
        self.attn_dropout_prob = attn_dropout_prob
        self.layer_norm_epsilon = layer_norm_epsilon
        self.initializer_range = initializer_range
        self.scale_attn_weights = scale_attn_weights
        self.use_cache = use_cache
        self.max_position_embeddings = max_position_embeddings
        self.bf16 = bf16
        self.fp16 = fp16
        self.fp32 = fp32
        self.kv_channels = kv_channels
        self.rotary_pct = rotary_pct
        self.rotary_emb_base = rotary_emb_base
        self.use_dynamic_ntk = use_dynamic_ntk
        self.use_logn_attn = use_logn_attn
        self.use_flash_attn = use_flash_attn
        self.no_bias = no_bias
        super().__init__(
            tie_word_embeddings=tie_word_embeddings,
            **kwargs
        )
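As a small illustration, `QWenConfig` can also be instantiated directly; the reduced sizes below are arbitrary and only demonstrate that unspecified fields keep the defaults above:

```python
# Sketch: build a small QWenConfig by overriding a few defaults (sizes are illustrative).
tiny_cfg = QWenConfig(hidden_size=512, num_hidden_layers=4, num_attention_heads=8)
print(tiny_cfg.hidden_size, tiny_cfg.intermediate_size)  # 512, 22016 (default kept)
```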

11
generation_config.json Normal file

@ -0,0 +1,11 @@
{
"chat_format": "chatml",
"do_sample": true,
"eos_token_id": 151643,
"max_new_tokens": 512,
"max_window_size": 6144,
"pad_token_id": 151643,
"top_k": 0,
"top_p": 0.3,
"transformers_version": "4.31.0"
}
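These are the defaults picked up by `GenerationConfig.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)` as shown in the README above; a minimal sketch of overriding a few of them (the override values are illustrative):

```python
from transformers.generation import GenerationConfig

# Load the checkpoint's defaults and override selected fields before chatting.
gen_config = GenerationConfig.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
gen_config.top_p = 0.5            # illustrative override
gen_config.max_new_tokens = 256   # illustrative override
model.generation_config = gen_config
```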

1162
modeling_qwen.py Normal file

File diff suppressed because it is too large.

BIN
pytorch_model-00001-of-00010.bin (Stored with Git LFS) Normal file

Binary file not shown.

BIN
pytorch_model-00002-of-00010.bin (Stored with Git LFS) Normal file

Binary file not shown.

BIN
pytorch_model-00003-of-00010.bin (Stored with Git LFS) Normal file

Binary file not shown.

BIN
pytorch_model-00004-of-00010.bin (Stored with Git LFS) Normal file

Binary file not shown.

BIN
pytorch_model-00005-of-00010.bin (Stored with Git LFS) Normal file

Binary file not shown.

BIN
pytorch_model-00006-of-00010.bin (Stored with Git LFS) Normal file

Binary file not shown.

BIN
pytorch_model-00007-of-00010.bin (Stored with Git LFS) Normal file

Binary file not shown.

BIN
pytorch_model-00008-of-00010.bin (Stored with Git LFS) Normal file

Binary file not shown.

BIN
pytorch_model-00009-of-00010.bin (Stored with Git LFS) Normal file

Binary file not shown.

BIN
pytorch_model-00010-of-00010.bin (Stored with Git LFS) Normal file

Binary file not shown.

pytorch_model.bin.index.json Normal file

@ -0,0 +1,860 @@
{
"metadata": {
"total_size": 19313870336
},
"weight_map": {
"lm_head.weight": "pytorch_model-00010-of-00010.bin",
"transformer.h.0.attn.c_attn.bias": "pytorch_model-00001-of-00010.bin",
"transformer.h.0.attn.c_attn.weight": "pytorch_model-00001-of-00010.bin",
"transformer.h.0.attn.c_proj.weight": "pytorch_model-00001-of-00010.bin",
"transformer.h.0.ln_1.weight": "pytorch_model-00001-of-00010.bin",
"transformer.h.0.ln_2.weight": "pytorch_model-00001-of-00010.bin",
"transformer.h.0.mlp.c_proj.weight": "pytorch_model-00001-of-00010.bin",
"transformer.h.0.mlp.w1.weight": "pytorch_model-00001-of-00010.bin",
"transformer.h.0.mlp.w2.weight": "pytorch_model-00001-of-00010.bin",
"transformer.h.1.attn.c_attn.bias": "pytorch_model-00001-of-00010.bin",
"transformer.h.1.attn.c_attn.weight": "pytorch_model-00001-of-00010.bin",
"transformer.h.1.attn.c_proj.weight": "pytorch_model-00001-of-00010.bin",
"transformer.h.1.ln_1.weight": "pytorch_model-00001-of-00010.bin",
"transformer.h.1.ln_2.weight": "pytorch_model-00001-of-00010.bin",
"transformer.h.1.mlp.c_proj.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.1.mlp.w1.weight": "pytorch_model-00001-of-00010.bin",
"transformer.h.1.mlp.w2.weight": "pytorch_model-00001-of-00010.bin",
"transformer.h.10.attn.c_attn.bias": "pytorch_model-00003-of-00010.bin",
"transformer.h.10.attn.c_attn.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.10.attn.c_proj.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.10.ln_1.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.10.ln_2.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.10.mlp.c_proj.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.10.mlp.w1.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.10.mlp.w2.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.11.attn.c_attn.bias": "pytorch_model-00003-of-00010.bin",
"transformer.h.11.attn.c_attn.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.11.attn.c_proj.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.11.ln_1.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.11.ln_2.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.11.mlp.c_proj.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.11.mlp.w1.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.11.mlp.w2.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.12.attn.c_attn.bias": "pytorch_model-00004-of-00010.bin",
"transformer.h.12.attn.c_attn.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.12.attn.c_proj.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.12.ln_1.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.12.ln_2.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.12.mlp.c_proj.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.12.mlp.w1.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.12.mlp.w2.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.13.attn.c_attn.bias": "pytorch_model-00004-of-00010.bin",
"transformer.h.13.attn.c_attn.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.13.attn.c_proj.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.13.ln_1.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.13.ln_2.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.13.mlp.c_proj.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.13.mlp.w1.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.13.mlp.w2.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.14.attn.c_attn.bias": "pytorch_model-00004-of-00010.bin",
"transformer.h.14.attn.c_attn.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.14.attn.c_proj.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.14.ln_1.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.14.ln_2.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.14.mlp.c_proj.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.14.mlp.w1.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.14.mlp.w2.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.15.attn.c_attn.bias": "pytorch_model-00004-of-00010.bin",
"transformer.h.15.attn.c_attn.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.15.attn.c_proj.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.15.ln_1.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.15.ln_2.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.15.mlp.c_proj.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.15.mlp.w1.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.15.mlp.w2.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.16.attn.c_attn.bias": "pytorch_model-00004-of-00010.bin",
"transformer.h.16.attn.c_attn.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.16.attn.c_proj.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.16.ln_1.weight": "pytorch_model-00004-of-00010.bin",
"transformer.h.16.ln_2.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.16.mlp.c_proj.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.16.mlp.w1.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.16.mlp.w2.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.17.attn.c_attn.bias": "pytorch_model-00005-of-00010.bin",
"transformer.h.17.attn.c_attn.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.17.attn.c_proj.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.17.ln_1.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.17.ln_2.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.17.mlp.c_proj.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.17.mlp.w1.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.17.mlp.w2.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.18.attn.c_attn.bias": "pytorch_model-00005-of-00010.bin",
"transformer.h.18.attn.c_attn.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.18.attn.c_proj.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.18.ln_1.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.18.ln_2.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.18.mlp.c_proj.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.18.mlp.w1.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.18.mlp.w2.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.19.attn.c_attn.bias": "pytorch_model-00005-of-00010.bin",
"transformer.h.19.attn.c_attn.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.19.attn.c_proj.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.19.ln_1.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.19.ln_2.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.19.mlp.c_proj.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.19.mlp.w1.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.19.mlp.w2.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.2.attn.c_attn.bias": "pytorch_model-00002-of-00010.bin",
"transformer.h.2.attn.c_attn.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.2.attn.c_proj.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.2.ln_1.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.2.ln_2.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.2.mlp.c_proj.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.2.mlp.w1.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.2.mlp.w2.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.20.attn.c_attn.bias": "pytorch_model-00005-of-00010.bin",
"transformer.h.20.attn.c_attn.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.20.attn.c_proj.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.20.ln_1.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.20.ln_2.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.20.mlp.c_proj.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.20.mlp.w1.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.20.mlp.w2.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.21.attn.c_attn.bias": "pytorch_model-00006-of-00010.bin",
"transformer.h.21.attn.c_attn.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.21.attn.c_proj.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.21.ln_1.weight": "pytorch_model-00005-of-00010.bin",
"transformer.h.21.ln_2.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.21.mlp.c_proj.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.21.mlp.w1.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.21.mlp.w2.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.22.attn.c_attn.bias": "pytorch_model-00006-of-00010.bin",
"transformer.h.22.attn.c_attn.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.22.attn.c_proj.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.22.ln_1.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.22.ln_2.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.22.mlp.c_proj.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.22.mlp.w1.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.22.mlp.w2.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.23.attn.c_attn.bias": "pytorch_model-00006-of-00010.bin",
"transformer.h.23.attn.c_attn.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.23.attn.c_proj.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.23.ln_1.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.23.ln_2.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.23.mlp.c_proj.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.23.mlp.w1.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.23.mlp.w2.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.24.attn.c_attn.bias": "pytorch_model-00006-of-00010.bin",
"transformer.h.24.attn.c_attn.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.24.attn.c_proj.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.24.ln_1.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.24.ln_2.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.24.mlp.c_proj.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.24.mlp.w1.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.24.mlp.w2.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.25.attn.c_attn.bias": "pytorch_model-00006-of-00010.bin",
"transformer.h.25.attn.c_attn.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.25.attn.c_proj.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.25.ln_1.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.25.ln_2.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.25.mlp.c_proj.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.25.mlp.w1.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.25.mlp.w2.weight": "pytorch_model-00006-of-00010.bin",
"transformer.h.26.attn.c_attn.bias": "pytorch_model-00007-of-00010.bin",
"transformer.h.26.attn.c_attn.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.26.attn.c_proj.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.26.ln_1.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.26.ln_2.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.26.mlp.c_proj.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.26.mlp.w1.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.26.mlp.w2.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.27.attn.c_attn.bias": "pytorch_model-00007-of-00010.bin",
"transformer.h.27.attn.c_attn.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.27.attn.c_proj.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.27.ln_1.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.27.ln_2.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.27.mlp.c_proj.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.27.mlp.w1.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.27.mlp.w2.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.28.attn.c_attn.bias": "pytorch_model-00007-of-00010.bin",
"transformer.h.28.attn.c_attn.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.28.attn.c_proj.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.28.ln_1.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.28.ln_2.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.28.mlp.c_proj.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.28.mlp.w1.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.28.mlp.w2.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.29.attn.c_attn.bias": "pytorch_model-00007-of-00010.bin",
"transformer.h.29.attn.c_attn.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.29.attn.c_proj.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.29.ln_1.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.29.ln_2.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.29.mlp.c_proj.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.29.mlp.w1.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.29.mlp.w2.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.3.attn.c_attn.bias": "pytorch_model-00002-of-00010.bin",
"transformer.h.3.attn.c_attn.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.3.attn.c_proj.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.3.ln_1.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.3.ln_2.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.3.mlp.c_proj.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.3.mlp.w1.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.3.mlp.w2.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.30.attn.c_attn.bias": "pytorch_model-00007-of-00010.bin",
"transformer.h.30.attn.c_attn.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.30.attn.c_proj.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.30.ln_1.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.30.ln_2.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.30.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.h.30.mlp.w1.weight": "pytorch_model-00007-of-00010.bin",
"transformer.h.30.mlp.w2.weight": "pytorch_model-00008-of-00010.bin",
"transformer.h.31.attn.c_attn.bias": "pytorch_model-00008-of-00010.bin",
"transformer.h.31.attn.c_attn.weight": "pytorch_model-00008-of-00010.bin",
"transformer.h.31.attn.c_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.h.31.ln_1.weight": "pytorch_model-00008-of-00010.bin",
"transformer.h.31.ln_2.weight": "pytorch_model-00008-of-00010.bin",
"transformer.h.31.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.h.31.mlp.w1.weight": "pytorch_model-00008-of-00010.bin",
"transformer.h.31.mlp.w2.weight": "pytorch_model-00008-of-00010.bin",
"transformer.h.4.attn.c_attn.bias": "pytorch_model-00002-of-00010.bin",
"transformer.h.4.attn.c_attn.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.4.attn.c_proj.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.4.ln_1.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.4.ln_2.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.4.mlp.c_proj.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.4.mlp.w1.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.4.mlp.w2.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.5.attn.c_attn.bias": "pytorch_model-00002-of-00010.bin",
"transformer.h.5.attn.c_attn.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.5.attn.c_proj.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.5.ln_1.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.5.ln_2.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.5.mlp.c_proj.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.5.mlp.w1.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.5.mlp.w2.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.6.attn.c_attn.bias": "pytorch_model-00002-of-00010.bin",
"transformer.h.6.attn.c_attn.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.6.attn.c_proj.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.6.ln_1.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.6.ln_2.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.6.mlp.c_proj.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.6.mlp.w1.weight": "pytorch_model-00002-of-00010.bin",
"transformer.h.6.mlp.w2.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.7.attn.c_attn.bias": "pytorch_model-00003-of-00010.bin",
"transformer.h.7.attn.c_attn.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.7.attn.c_proj.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.7.ln_1.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.7.ln_2.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.7.mlp.c_proj.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.7.mlp.w1.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.7.mlp.w2.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.8.attn.c_attn.bias": "pytorch_model-00003-of-00010.bin",
"transformer.h.8.attn.c_attn.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.8.attn.c_proj.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.8.ln_1.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.8.ln_2.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.8.mlp.c_proj.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.8.mlp.w1.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.8.mlp.w2.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.9.attn.c_attn.bias": "pytorch_model-00003-of-00010.bin",
"transformer.h.9.attn.c_attn.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.9.attn.c_proj.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.9.ln_1.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.9.ln_2.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.9.mlp.c_proj.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.9.mlp.w1.weight": "pytorch_model-00003-of-00010.bin",
"transformer.h.9.mlp.w2.weight": "pytorch_model-00003-of-00010.bin",
"transformer.ln_f.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.attn_pool.attn.in_proj_bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.attn_pool.attn.in_proj_weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.attn_pool.attn.out_proj.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.attn_pool.attn.out_proj.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.attn_pool.kv_proj.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.attn_pool.ln_kv.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.attn_pool.ln_kv.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.attn_pool.ln_q.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.attn_pool.ln_q.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.attn_pool.pos_embed": "pytorch_model-00010-of-00010.bin",
"transformer.visual.attn_pool.query": "pytorch_model-00010-of-00010.bin",
"transformer.visual.conv1.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.ln_post.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.ln_post.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.ln_pre.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.ln_pre.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.positional_embedding": "pytorch_model-00008-of-00010.bin",
"transformer.visual.proj": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.0.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.0.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.0.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.0.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.0.ln_1.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.0.ln_1.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.0.ln_2.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.0.ln_2.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.0.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.0.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.0.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.0.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.1.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.1.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.1.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.1.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.1.ln_1.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.1.ln_1.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.1.ln_2.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.1.ln_2.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.1.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.1.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.1.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.1.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.10.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.10.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.10.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.10.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.10.ln_1.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.10.ln_1.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.10.ln_2.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.10.ln_2.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.10.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.10.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.10.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.10.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.11.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.11.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.11.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.11.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.11.ln_1.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.11.ln_1.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.11.ln_2.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.11.ln_2.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.11.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.11.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.11.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.11.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.12.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.12.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.12.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.12.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.12.ln_1.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.12.ln_1.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.12.ln_2.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.12.ln_2.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.12.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.12.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.12.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.12.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.13.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.13.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.13.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.13.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.13.ln_1.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.13.ln_1.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.13.ln_2.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.13.ln_2.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.13.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.13.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.13.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.13.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.14.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.14.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.14.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.14.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.14.ln_1.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.14.ln_1.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.14.ln_2.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.14.ln_2.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.14.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.14.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.14.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.14.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.15.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.15.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.15.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.15.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.15.ln_1.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.15.ln_1.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.15.ln_2.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.15.ln_2.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.15.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.15.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.15.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.15.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.16.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.16.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.16.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.16.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.16.ln_1.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.16.ln_1.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.16.ln_2.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.16.ln_2.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.16.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.16.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.16.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.16.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.17.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.17.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.17.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.17.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.17.ln_1.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.17.ln_1.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.17.ln_2.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.17.ln_2.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.17.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.17.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.17.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.17.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.18.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.18.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.18.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.18.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.18.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.18.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.18.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.18.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.18.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.18.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.18.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.18.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.19.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.19.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.19.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.19.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.19.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.19.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.19.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.19.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.19.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.19.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.19.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.19.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.2.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.2.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.2.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.2.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.2.ln_1.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.2.ln_1.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.2.ln_2.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.2.ln_2.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.2.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.2.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.2.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.2.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.20.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.20.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.20.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.20.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.20.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.20.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.20.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.20.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.20.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.20.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.20.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.20.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.21.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.21.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.21.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.21.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.21.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.21.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.21.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.21.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.21.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.21.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.21.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.21.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.22.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.22.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.22.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.22.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.22.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.22.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.22.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.22.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.22.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.22.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.22.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.22.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.23.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.23.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.23.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.23.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.23.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.23.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.23.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.23.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.23.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.23.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.23.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.23.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.24.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.24.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.24.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.24.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.24.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.24.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.24.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.24.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.24.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.24.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.24.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.24.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.25.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.25.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.25.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.25.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.25.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.25.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.25.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.25.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.25.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.25.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.25.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.25.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.26.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.26.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.26.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.26.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.26.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.26.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.26.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.26.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.26.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.26.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.26.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.26.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.27.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.27.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.27.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.27.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.27.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.27.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.27.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.27.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.27.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.27.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.27.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.27.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.28.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.28.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.28.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.28.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.28.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.28.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.28.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.28.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.28.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.28.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.28.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.28.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.29.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.29.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.29.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.29.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.29.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.29.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.29.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.29.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.29.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.29.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.29.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.29.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.3.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.3.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.3.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.3.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.3.ln_1.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.3.ln_1.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.3.ln_2.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.3.ln_2.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.3.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.3.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.3.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.3.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.30.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.30.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.30.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.30.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.30.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.30.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.30.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.30.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.30.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.30.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.30.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.30.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.31.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.31.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.31.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.31.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.31.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.31.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.31.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.31.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.31.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.31.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.31.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.31.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.32.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.32.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.32.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.32.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.32.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.32.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.32.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.32.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.32.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.32.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.32.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.32.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.33.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.33.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.33.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.33.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.33.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.33.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.33.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.33.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.33.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.33.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.33.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.33.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.34.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.34.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.34.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.34.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.34.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.34.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.34.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.34.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.34.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.34.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.34.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.34.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.35.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.35.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.35.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.35.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.35.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.35.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.35.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.35.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.35.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.35.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.35.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.35.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.36.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.36.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.36.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.36.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.36.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.36.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.36.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.36.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.36.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.36.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.36.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.36.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.37.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.37.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.37.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.37.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.37.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.37.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.37.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.37.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.37.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.37.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.37.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.37.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.38.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.38.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.38.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.38.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.38.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.38.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.38.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.38.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.38.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.38.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.38.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.38.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.39.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.39.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.39.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.39.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.39.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.39.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.39.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.39.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.39.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.39.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.39.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.39.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.4.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.4.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.4.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.4.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.4.ln_1.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.4.ln_1.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.4.ln_2.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.4.ln_2.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.4.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.4.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.4.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.4.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.40.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.40.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.40.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.40.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.40.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.40.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.40.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.40.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.40.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.40.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.40.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.40.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.41.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.41.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.41.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.41.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.41.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.41.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.41.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.41.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.41.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.41.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.41.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.41.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.42.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.42.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.42.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.42.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.42.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.42.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.42.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.42.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.42.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.42.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.42.mlp.c_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.42.mlp.c_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.43.attn.in_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.43.attn.in_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.43.attn.out_proj.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.43.attn.out_proj.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.43.ln_1.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.43.ln_1.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.43.ln_2.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.43.ln_2.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.43.mlp.c_fc.bias": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.43.mlp.c_fc.weight": "pytorch_model-00009-of-00010.bin",
"transformer.visual.transformer.resblocks.43.mlp.c_proj.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.43.mlp.c_proj.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.44.attn.in_proj.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.44.attn.in_proj.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.44.attn.out_proj.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.44.attn.out_proj.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.44.ln_1.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.44.ln_1.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.44.ln_2.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.44.ln_2.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.44.mlp.c_fc.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.44.mlp.c_fc.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.44.mlp.c_proj.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.44.mlp.c_proj.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.45.attn.in_proj.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.45.attn.in_proj.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.45.attn.out_proj.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.45.attn.out_proj.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.45.ln_1.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.45.ln_1.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.45.ln_2.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.45.ln_2.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.45.mlp.c_fc.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.45.mlp.c_fc.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.45.mlp.c_proj.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.45.mlp.c_proj.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.46.attn.in_proj.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.46.attn.in_proj.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.46.attn.out_proj.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.46.attn.out_proj.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.46.ln_1.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.46.ln_1.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.46.ln_2.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.46.ln_2.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.46.mlp.c_fc.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.46.mlp.c_fc.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.46.mlp.c_proj.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.46.mlp.c_proj.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.47.attn.in_proj.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.47.attn.in_proj.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.47.attn.out_proj.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.47.attn.out_proj.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.47.ln_1.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.47.ln_1.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.47.ln_2.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.47.ln_2.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.47.mlp.c_fc.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.47.mlp.c_fc.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.47.mlp.c_proj.bias": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.47.mlp.c_proj.weight": "pytorch_model-00010-of-00010.bin",
"transformer.visual.transformer.resblocks.5.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.5.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.5.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.5.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.5.ln_1.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.5.ln_1.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.5.ln_2.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.5.ln_2.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.5.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.5.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.5.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.5.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.6.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.6.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.6.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.6.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.6.ln_1.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.6.ln_1.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.6.ln_2.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.6.ln_2.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.6.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.6.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.6.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.6.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.7.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.7.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.7.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.7.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.7.ln_1.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.7.ln_1.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.7.ln_2.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.7.ln_2.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.7.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.7.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.7.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.7.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.8.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.8.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.8.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.8.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.8.ln_1.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.8.ln_1.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.8.ln_2.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.8.ln_2.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.8.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.8.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.8.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.8.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.9.attn.in_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.9.attn.in_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.9.attn.out_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.9.attn.out_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.9.ln_1.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.9.ln_1.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.9.ln_2.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.9.ln_2.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.9.mlp.c_fc.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.9.mlp.c_fc.weight": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.9.mlp.c_proj.bias": "pytorch_model-00008-of-00010.bin",
"transformer.visual.transformer.resblocks.9.mlp.c_proj.weight": "pytorch_model-00008-of-00010.bin",
"transformer.wte.weight": "pytorch_model-00001-of-00010.bin"
}
}

151643
qwen.tiktoken Normal file

File diff suppressed because it is too large

420
qwen_generation_utils.py Normal file
View File

@@ -0,0 +1,420 @@
# Copyright (c) Alibaba Cloud.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
"""Generation support."""
from typing import Tuple, List, Union, Iterable
import numpy as np
import torch
import torch.nn.functional as F
from transformers import PreTrainedTokenizer
from transformers import logging
from transformers.generation import LogitsProcessor
logger = logging.get_logger(__name__)
# Types.
HistoryType = List[Tuple[str, str]]
TokensType = List[int]
BatchTokensType = List[List[int]]
def pad_batch(batch: BatchTokensType, pad_id: int, seq_length: int) -> BatchTokensType:
for tokens in batch:
context_length = len(tokens)
if context_length < seq_length:
tokens.extend([pad_id] * (seq_length - context_length))
return batch
def get_ltor_masks_and_position_ids(
data,
eod_token,
reset_position_ids,
reset_attention_mask,
eod_mask_loss,
):
"""Build masks and position id for left to right model."""
# Extract batch size and sequence length.
micro_batch_size, seq_length = data.size()
# Attention mask (lower triangular).
if reset_attention_mask:
att_mask_batch = micro_batch_size
else:
att_mask_batch = 1
attention_mask = torch.tril(
torch.ones((att_mask_batch, seq_length, seq_length), device=data.device)
).view(att_mask_batch, 1, seq_length, seq_length)
# Loss mask.
loss_mask = torch.ones(data.size(), dtype=torch.float, device=data.device)
if eod_mask_loss:
loss_mask[data == eod_token] = 0.0
# Position ids.
position_ids = torch.arange(seq_length, dtype=torch.long, device=data.device)
position_ids = position_ids.unsqueeze(0).expand_as(data)
# We need to clone as the ids will be modified based on batch index.
if reset_position_ids:
position_ids = position_ids.clone()
if reset_position_ids or reset_attention_mask:
# Loop through the batches:
for b in range(micro_batch_size):
# Find indices where EOD token is.
eod_index = position_ids[b, data[b] == eod_token]
# Detach indices from positions if going to modify positions.
if reset_position_ids:
eod_index = eod_index.clone()
# Loop through EOD indices:
prev_index = 0
for j in range(eod_index.size()[0]):
i = eod_index[j]
# Mask attention loss.
if reset_attention_mask:
attention_mask[b, 0, (i + 1) :, : (i + 1)] = 0
# Reset positions.
if reset_position_ids:
position_ids[b, (i + 1) :] -= i + 1 - prev_index
prev_index = i + 1
# Convert attention mask to binary:
attention_mask = attention_mask < 0.5
return attention_mask, loss_mask, position_ids
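# Example (minimal sketch): for a toy batch with no EOD handling, the returned mask is a
# boolean "ignore" mask that is True strictly above the diagonal, and positions simply count up.
#   data = torch.tensor([[11, 22, 33, 44]])
#   mask, loss_mask, pos = get_ltor_masks_and_position_ids(
#       data, eod_token=0, reset_position_ids=False,
#       reset_attention_mask=False, eod_mask_loss=False)
#   # mask.shape == (1, 1, 4, 4); pos.tolist() == [[0, 1, 2, 3]]; loss_mask is all ones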
def get_batch(context_tokens: torch.LongTensor, eod_id: int):
"""Generate batch from context tokens."""
# Ensure the tokens are contiguous; they stay on whatever device the caller provided.
tokens = context_tokens.contiguous().to(context_tokens.device)
# Get the attention mask and position ids.
attention_mask, _, position_ids = get_ltor_masks_and_position_ids(
tokens,
eod_id,
reset_position_ids=False,
reset_attention_mask=False,
eod_mask_loss=False,
)
return tokens, attention_mask, position_ids
def get_stop_words_ids(chat_format, tokenizer):
if chat_format == "raw":
stop_words_ids = [tokenizer.encode("Human:"), [tokenizer.eod_id]]
elif chat_format == "chatml":
stop_words_ids = [[tokenizer.im_end_id], [tokenizer.im_start_id]]
else:
raise NotImplementedError(f"Unknown chat format {chat_format!r}")
return stop_words_ids
def make_context(
tokenizer: PreTrainedTokenizer,
query: str,
history: List[Tuple[str, str]] = None,
system: str = "",
max_window_size: int = 6144,
chat_format: str = "chatml",
):
if history is None:
history = []
if chat_format == "chatml":
im_start, im_end = "<|im_start|>", "<|im_end|>"
im_start_tokens = [tokenizer.im_start_id]
im_end_tokens = [tokenizer.im_end_id]
nl_tokens = tokenizer.encode("\n")
def _tokenize_str(role, content):
return f"{role}\n{content}", tokenizer.encode(
role, allowed_special=set(tokenizer.IMAGE_ST)
) + nl_tokens + tokenizer.encode(content, allowed_special=set(tokenizer.IMAGE_ST))
system_text, system_tokens_part = _tokenize_str("system", system)
system_tokens = im_start_tokens + system_tokens_part + im_end_tokens
raw_text = ""
context_tokens = []
for turn_query, turn_response in reversed(history):
query_text, query_tokens_part = _tokenize_str("user", turn_query)
query_tokens = im_start_tokens + query_tokens_part + im_end_tokens
if turn_response is not None:
response_text, response_tokens_part = _tokenize_str(
"assistant", turn_response
)
response_tokens = im_start_tokens + response_tokens_part + im_end_tokens
next_context_tokens = nl_tokens + query_tokens + nl_tokens + response_tokens
prev_chat = (
f"\n{im_start}{query_text}{im_end}\n{im_start}{response_text}{im_end}"
)
else:
next_context_tokens = nl_tokens + query_tokens + nl_tokens
prev_chat = f"\n{im_start}{query_text}{im_end}\n"
current_context_size = (
len(system_tokens) + len(next_context_tokens) + len(context_tokens)
)
if current_context_size < max_window_size:
context_tokens = next_context_tokens + context_tokens
raw_text = prev_chat + raw_text
else:
break
context_tokens = system_tokens + context_tokens
raw_text = f"{im_start}{system_text}{im_end}" + raw_text
context_tokens += (
nl_tokens
+ im_start_tokens
+ _tokenize_str("user", query)[1]
+ im_end_tokens
+ nl_tokens
+ im_start_tokens
+ tokenizer.encode("assistant")
+ nl_tokens
)
raw_text += f"\n{im_start}user\n{query}{im_end}\n{im_start}assistant\n"
elif chat_format == "raw":
raw_text = query
context_tokens = tokenizer.encode(raw_text)
else:
raise NotImplementedError(f"Unknown chat format {chat_format!r}")
return raw_text, context_tokens
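# Example (minimal sketch of the ChatML prompt make_context builds, assuming a loaded Qwen tokenizer):
#   raw_text, context_tokens = make_context(tokenizer, "Hi", system="You are a helpful assistant.")
#   # raw_text ==
#   # <|im_start|>system
#   # You are a helpful assistant.<|im_end|>
#   # <|im_start|>user
#   # Hi<|im_end|>
#   # <|im_start|>assistant
#   # (the model is expected to continue from here)
#   # context_tokens holds the corresponding token ids; older history turns are dropped
#   # once max_window_size would be exceeded.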
def _decode_default(
tokens: List[int],
*,
stop_words: List[str],
eod_words: List[str],
tokenizer: PreTrainedTokenizer,
raw_text_len: int,
verbose: bool = False,
return_end_reason: bool = False,
errors: str='replace',
):
trim_decode_tokens = tokenizer.decode(tokens, errors=errors)[raw_text_len:]
if verbose:
print("\nRaw Generate: ", trim_decode_tokens)
end_reason = f"Gen length {len(tokens)}"
for stop_word in stop_words:
trim_decode_tokens = trim_decode_tokens.replace(stop_word, "").strip()
for eod_word in eod_words:
if eod_word in trim_decode_tokens:
end_reason = f"Gen {eod_word!r}"
trim_decode_tokens = trim_decode_tokens.split(eod_word)[0]
trim_decode_tokens = trim_decode_tokens.strip()
if verbose:
print("\nEnd Reason:", end_reason)
print("\nGenerate: ", trim_decode_tokens)
if return_end_reason:
return trim_decode_tokens, end_reason
else:
return trim_decode_tokens
def _decode_chatml(
tokens: List[int],
*,
stop_words: List[str],
eod_token_ids: List[int],
tokenizer: PreTrainedTokenizer,
raw_text_len: int,
context_length: int,
verbose: bool = False,
return_end_reason: bool = False,
errors: str='replace'
):
end_reason = f"Gen length {len(tokens)}"
eod_token_idx = context_length
for eod_token_idx in range(context_length, len(tokens)):
if tokens[eod_token_idx] in eod_token_ids:
end_reason = f"Gen {tokenizer.decode([tokens[eod_token_idx]])!r}"
break
trim_decode_tokens = tokenizer.decode(tokens[:eod_token_idx], errors=errors)[raw_text_len:]
if verbose:
print("\nRaw Generate w/o EOD:", tokenizer.decode(tokens, errors=errors)[raw_text_len:])
print("\nRaw Generate:", trim_decode_tokens)
print("\nEnd Reason:", end_reason)
for stop_word in stop_words:
trim_decode_tokens = trim_decode_tokens.replace(stop_word, "").strip()
trim_decode_tokens = trim_decode_tokens.strip()
if verbose:
print("\nGenerate:", trim_decode_tokens)
if return_end_reason:
return trim_decode_tokens, end_reason
else:
return trim_decode_tokens
def decode_tokens(
tokens: Union[torch.LongTensor, TokensType],
tokenizer: PreTrainedTokenizer,
raw_text_len: int,
context_length: int,
chat_format: str,
verbose: bool = False,
return_end_reason: bool = False,
errors: str="replace",
) -> str:
if torch.is_tensor(tokens):
tokens = tokens.cpu().numpy().tolist()
if chat_format == "chatml":
return _decode_chatml(
tokens,
stop_words=[],
eod_token_ids=[tokenizer.im_start_id, tokenizer.im_end_id],
tokenizer=tokenizer,
raw_text_len=raw_text_len,
context_length=context_length,
verbose=verbose,
return_end_reason=return_end_reason,
errors=errors,
)
elif chat_format == "raw":
return _decode_default(
tokens,
stop_words=["<|endoftext|>"],
eod_words=["<|endoftext|>"],
tokenizer=tokenizer,
raw_text_len=raw_text_len,
verbose=verbose,
return_end_reason=return_end_reason,
errors=errors,
)
else:
raise NotImplementedError(f"Unknown chat format {chat_format!r}")
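# Example (minimal sketch): decoding a generated sequence back to text; `model` is assumed
# to be a loaded Qwen-VL causal LM whose generate() returns token ids.
#   raw_text, context_tokens = make_context(tokenizer, "Describe this image.")
#   output_ids = model.generate(torch.tensor([context_tokens]))[0]
#   response = decode_tokens(output_ids, tokenizer, raw_text_len=len(raw_text),
#       context_length=len(context_tokens), chat_format="chatml")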
class StopWordsLogitsProcessor(LogitsProcessor):
"""
:class:`transformers.LogitsProcessor` that forces generation to stop once any of the specified stop-word sequences appears.
Args:
stop_words_ids (:obj:`List[List[int]]`):
List of token-id sequences that should stop generation. To get the token ids
of a stop word, use :obj:`tokenizer(stop_word, add_prefix_space=True).input_ids`.
eos_token_id (:obj:`int`):
The id of the `end-of-sequence` token.
"""
def __init__(self, stop_words_ids: Iterable[Iterable[int]], eos_token_id: int):
if not isinstance(stop_words_ids, List) or len(stop_words_ids) == 0:
raise ValueError(
f"`stop_words_ids` has to be a non-emtpy list, but is {stop_words_ids}."
)
if any(not isinstance(bad_word_ids, list) for bad_word_ids in stop_words_ids):
raise ValueError(
f"`stop_words_ids` has to be a list of lists, but is {stop_words_ids}."
)
if any(
any(
(not isinstance(token_id, (int, np.integer)) or token_id < 0)
for token_id in stop_word_ids
)
for stop_word_ids in stop_words_ids
):
raise ValueError(
f"Each list in `stop_words_ids` has to be a list of positive integers, but is {stop_words_ids}."
)
self.stop_words_ids = list(
filter(
lambda bad_token_seq: bad_token_seq != [eos_token_id], stop_words_ids
)
)
self.eos_token_id = eos_token_id
for stop_token_seq in self.stop_words_ids:
assert (
len(stop_token_seq) > 0
), "Stop words token sequences {} cannot have an empty list".format(
stop_words_ids
)
def __call__(
self, input_ids: torch.LongTensor, scores: torch.FloatTensor
) -> torch.FloatTensor:
stopped_samples = self._calc_stopped_samples(input_ids)
for i, should_stop in enumerate(stopped_samples):
if should_stop:
scores[i, self.eos_token_id] = float(2**15)
return scores
def _tokens_match(self, prev_tokens: torch.LongTensor, tokens: List[int]) -> bool:
if len(tokens) == 0:
# an empty stop-word sequence matches any prefix
return True
elif len(tokens) > len(prev_tokens):
# if the stop-word sequence is longer than the generated prefix, it cannot match
return False
elif prev_tokens[-len(tokens) :].tolist() == tokens:
# if tokens match
return True
else:
return False
def _calc_stopped_samples(self, prev_input_ids: Iterable[int]) -> Iterable[int]:
stopped_samples = []
for prev_input_ids_slice in prev_input_ids:
match = False
for stop_token_seq in self.stop_words_ids:
if self._tokens_match(prev_input_ids_slice, stop_token_seq):
# a stop-word sequence matched; mark this sample as stopped
match = True
break
stopped_samples.append(match)
return stopped_samples
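# --- Illustrative usage (editor's sketch, not part of the original file) ---
# The processor is meant to be passed to `model.generate` via a transformers
# `LogitsProcessorList`; the stop-word ids below are an assumption based on the
# ChatML special tokens defined by the tokenizer:
#
#     from transformers import LogitsProcessorList
#     stop_words_ids = [[tokenizer.im_end_id], [tokenizer.im_start_id]]
#     logits_processor = LogitsProcessorList([
#         StopWordsLogitsProcessor(stop_words_ids=stop_words_ids,
#                                  eos_token_id=tokenizer.eod_id)
#     ])
#     outputs = model.generate(input_ids, logits_processor=logits_processor)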
def top_k_logits(logits, top_k=0, top_p=0.0, filter_value=-float("Inf")):
"""This function has been mostly taken from huggingface conversational
ai code at
https://medium.com/huggingface/how-to-build-a-state-of-the-art-
conversational-ai-with-transfer-learning-2d818ac26313"""
if top_k > 0:
# Remove all tokens with a probability less than the
# last token of the top-k
indices_to_remove = logits < torch.topk(logits, top_k)[0][..., -1, None]
logits[indices_to_remove] = filter_value
if top_p > 0.0:
# Sort the logits so that the cumulative (nucleus) probability can be computed
sorted_logits, sorted_indices = torch.sort(logits, descending=True, dim=-1)
cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
# Remove tokens with cumulative probability above the threshold
sorted_indices_to_remove = cumulative_probs > top_p
# Shift the indices to the right so that the first token above
# the threshold is also kept
sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
sorted_indices_to_remove[..., 0] = 0
for i in range(sorted_indices.size(0)):
indices_to_remove = sorted_indices[i][sorted_indices_to_remove[i]]
logits[i][indices_to_remove] = filter_value
return logits
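# --- Illustrative usage (editor's sketch, not part of the original file) ---
# `top_k_logits` filters a [batch, vocab] logits tensor in place and returns it;
# the sizes below are hypothetical:
#
#     logits = torch.randn(2, 151936)                      # [batch, vocab]
#     filtered = top_k_logits(logits, top_k=5, top_p=0.8)
#     probs = F.softmax(filtered, dim=-1)
#     next_token = torch.multinomial(probs, num_samples=1)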
def switch(val1, val2, boolean):
boolean = boolean.type_as(val1)
return (1 - boolean) * val1 + boolean * val2

598 tokenization_qwen.py Normal file
@@ -0,0 +1,598 @@
# Copyright (c) Alibaba Cloud.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
"""Tokenization classes for QWen."""
import base64
import logging
import os
import requests
import unicodedata
from typing import Collection, Dict, List, Set, Tuple, Union, Any, Callable, Optional
import tiktoken
import numpy as np
from PIL import Image
from PIL import ImageFont
from PIL import ImageDraw
from transformers import PreTrainedTokenizer, AddedToken
from transformers.utils import try_to_load_from_cache
import matplotlib.colors as mcolors
from matplotlib.font_manager import FontProperties
logger = logging.getLogger(__name__)
VOCAB_FILES_NAMES = {"vocab_file": "qwen.tiktoken", "ttf": "SimSun.ttf"}
FONT_PATH = try_to_load_from_cache("Qwen/Qwen-VL-Chat", "SimSun.ttf")
if FONT_PATH is None:
if not os.path.exists("SimSun.ttf"):
ttf = requests.get("https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/SimSun.ttf")
open("SimSun.ttf", "wb").write(ttf.content)
FONT_PATH = "SimSun.ttf"
PAT_STR = r"""(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}| ?[^\s\p{L}\p{N}]+[\r\n]*|\s*[\r\n]+|\s+(?!\S)|\s+"""
ENDOFTEXT = "<|endoftext|>"
IMSTART = "<|im_start|>"
IMEND = "<|im_end|>"
# as the default behavior is changed to allow special tokens in
# regular texts, the surface forms of special tokens need to be
# as different as possible to minimize the impact
EXTRAS = tuple((f"<|extra_{i}|>" for i in range(205)))
SPECIAL_TOKENS = (
ENDOFTEXT,
IMSTART,
IMEND,
) + EXTRAS
IMG_TOKEN_SPAN = 256
def _load_tiktoken_bpe(tiktoken_bpe_file: str) -> Dict[bytes, int]:
with open(tiktoken_bpe_file, "rb") as f:
contents = f.read()
return {
base64.b64decode(token): int(rank)
for token, rank in (line.split() for line in contents.splitlines() if line)
}
def _list_find(
input_list: List[Any],
candidates: Tuple[Any],
start: int = 0,
):
for i in range(start, len(input_list)):
if input_list[i] in candidates:
return i
return -1
def _replace_closed_tag(
input_tokens: List[Any],
start_tags: Union[Any, Tuple[Any]],
end_tags: Union[Any, Tuple[Any]],
inclusive_replace_func: Callable,
exclusive_replace_func: Callable = lambda x: x,
):
if isinstance(start_tags, (str, int)):
start_tags = (start_tags,)
if isinstance(end_tags, (str, int)):
end_tags = (end_tags,)
assert len(start_tags) == len(end_tags)
output_tokens = []
end = 0
while True:
start = _list_find(input_tokens, start_tags, end)
if start == -1:
break
output_tokens.extend(exclusive_replace_func(input_tokens[end : start]))
tag_idx = start_tags.index(input_tokens[start])
end = _list_find(input_tokens, (end_tags[tag_idx],), start)
if end == -1:
raise ValueError("Unclosed image token")
output_tokens.extend(inclusive_replace_func(input_tokens[start : end + 1]))
end += 1
output_tokens.extend(exclusive_replace_func(input_tokens[end : ]))
return output_tokens
class QWenTokenizer(PreTrainedTokenizer):
"""QWen tokenizer."""
vocab_files_names = VOCAB_FILES_NAMES
def __init__(
self,
vocab_file,
errors="replace",
image_start_tag='<img>',
image_end_tag='</img>',
image_pad_tag='<imgpad>',
ref_start_tag='<ref>',
ref_end_tag='</ref>',
box_start_tag='<box>',
box_end_tag='</box>',
quad_start_tag='<quad>',
quad_end_tag='</quad>',
**kwargs,
):
super().__init__(**kwargs)
self.image_start_tag = image_start_tag
self.image_end_tag = image_end_tag
self.image_pad_tag = image_pad_tag
self.ref_start_tag = ref_start_tag
self.ref_end_tag = ref_end_tag
self.box_start_tag = box_start_tag
self.box_end_tag = box_end_tag
self.quad_start_tag = quad_start_tag
self.quad_end_tag = quad_end_tag
self.IMAGE_ST = (
ref_start_tag, ref_end_tag,
box_start_tag, box_end_tag,
quad_start_tag, quad_end_tag,
image_start_tag, image_end_tag,
image_pad_tag
)
self.errors = errors # how to handle errors in decoding
self.mergeable_ranks = _load_tiktoken_bpe(vocab_file) # type: dict[bytes, int]
self.special_tokens = {
token: index
for index, token in enumerate(
SPECIAL_TOKENS + self.IMAGE_ST, start=len(self.mergeable_ranks)
)
}
self.img_start_id = self.special_tokens[self.image_start_tag]
self.img_end_id = self.special_tokens[self.image_end_tag]
self.img_pad_id = self.special_tokens[self.image_pad_tag]
self.ref_start_id = self.special_tokens[self.ref_start_tag]
self.ref_end_id = self.special_tokens[self.ref_end_tag]
self.box_start_id = self.special_tokens[self.box_start_tag]
self.box_end_id = self.special_tokens[self.box_end_tag]
self.quad_start_id = self.special_tokens[self.quad_start_tag]
self.quad_end_id = self.special_tokens[self.quad_end_tag]
self.image_special_tokens = set([
self.ref_start_id, self.ref_end_id, self.box_start_id, self.box_end_id,
self.quad_start_id, self.quad_end_id,
])
enc = tiktoken.Encoding(
"Qwen",
pat_str=PAT_STR,
mergeable_ranks=self.mergeable_ranks,
special_tokens=self.special_tokens,
)
assert (
len(self.mergeable_ranks) + len(self.special_tokens) == enc.n_vocab
), f"{len(self.mergeable_ranks) + len(self.special_tokens)} != {enc.n_vocab} in encoding"
self.decoder = {
v: k for k, v in self.mergeable_ranks.items()
} # type: dict[int, bytes|str]
self.decoder.update({v: k for k, v in self.special_tokens.items()})
self.tokenizer = enc # type: tiktoken.Encoding
self.eod_id = self.tokenizer.eot_token
self.im_start_id = self.special_tokens[IMSTART]
self.im_end_id = self.special_tokens[IMEND]
def __getstate__(self):
# for pickle lovers
state = self.__dict__.copy()
del state['tokenizer']
return state
def __setstate__(self, state):
# tokenizer is not python native; don't pass it; rebuild it
self.__dict__.update(state)
enc = tiktoken.Encoding(
"Qwen",
pat_str=PAT_STR,
mergeable_ranks=self.mergeable_ranks,
special_tokens=self.special_tokens,
)
self.tokenizer = enc
def __len__(self) -> int:
return self.tokenizer.n_vocab
def get_vocab(self) -> Dict[bytes, int]:
return self.mergeable_ranks
def convert_tokens_to_ids(
self, tokens: Union[bytes, str, List[Union[bytes, str]]]
) -> List[int]:
ids = []
if isinstance(tokens, (str, bytes)):
if tokens in self.special_tokens:
return self.special_tokens[tokens]
else:
return self.mergeable_ranks.get(tokens)
for token in tokens:
if token in self.special_tokens:
ids.append(self.special_tokens[token])
else:
ids.append(self.mergeable_ranks.get(token))
return ids
def _add_tokens(self, new_tokens: Union[List[str], List[AddedToken]], special_tokens: bool = False) -> int:
if not special_tokens and new_tokens:
raise ValueError('Adding regular tokens is not supported')
for token in new_tokens:
surface_form = token.content if isinstance(token, AddedToken) else token
if surface_form not in SPECIAL_TOKENS + self.IMAGE_ST:
raise ValueError('Adding unknown special tokens is not supported')
return 0
def save_vocabulary(self, save_directory: str, **kwargs) -> Tuple[str]:
"""
Save only the vocabulary of the tokenizer (the BPE token-to-rank mapping).
Returns:
`Tuple(str)`: Paths to the files saved.
"""
file_path = os.path.join(save_directory, "qwen.tiktoken")
with open(file_path, "w", encoding="utf8") as w:
for k, v in self.mergeable_ranks.items():
line = base64.b64encode(k).decode("utf8") + " " + str(v) + "\n"
w.write(line)
return (file_path,)
def tokenize(
self,
text: str,
allowed_special: Union[Set, str] = "all",
disallowed_special: Union[Collection, str] = (),
**kwargs,
) -> List[Union[bytes, str]]:
"""
Converts a string into a sequence of tokens.
Args:
text (`str`):
The sequence to be encoded.
allowed_special (`Literal["all"]` or `set`):
The surface forms of the tokens to be encoded as special tokens in regular texts.
Default to "all".
disallowed_special (`Literal["all"]` or `Collection`):
The surface forms of the tokens that should not be in regular texts and trigger errors.
Defaults to an empty tuple.
kwargs (additional keyword arguments, *optional*):
Will be passed to the underlying model specific encode method.
Returns:
`List[bytes|str]`: The list of tokens.
"""
tokens = []
text = unicodedata.normalize("NFC", text)
# this implementation takes a detour: text -> token id -> token surface forms
for t in self.tokenizer.encode(
text, allowed_special=allowed_special, disallowed_special=disallowed_special
):
tokens.append(self.decoder[t])
def _encode_imgurl(img_tokens):
assert img_tokens[0] == self.image_start_tag and img_tokens[-1] == self.image_end_tag
img_tokens = img_tokens[1:-1]
img_url = b''.join(img_tokens)
out_img_tokens = list(map(self.decoder.get, img_url))
if len(out_img_tokens) > IMG_TOKEN_SPAN:
raise ValueError("The content in {}..{} is too long".format(
self.image_start_tag, self.image_end_tag))
out_img_tokens.extend([self.image_pad_tag] * (IMG_TOKEN_SPAN - len(out_img_tokens)))
out_img_tokens = [self.image_start_tag] + out_img_tokens + [self.image_end_tag]
return out_img_tokens
return _replace_closed_tag(tokens, self.image_start_tag, self.image_end_tag, _encode_imgurl)
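# --- Note on image spans (editor's sketch, not part of the original file) ---
# An image reference such as "<img>assets/demo.jpeg</img>" (hypothetical path) is
# re-encoded byte by byte and right-padded with <imgpad>, so that every image span
# of at most IMG_TOKEN_SPAN bytes occupies exactly 256 positions between <img> and </img>:
#
#     tokens = tokenizer.tokenize("<img>assets/demo.jpeg</img>What is this?")
#     # the image span contributes 1 + 256 + 1 tokens regardless of the URL length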
def convert_tokens_to_string(self, tokens: List[Union[bytes, str]]) -> str:
"""
Converts a sequence of tokens into a single string.
"""
text = ""
temp = b""
for t in tokens:
if isinstance(t, str):
if temp:
text += temp.decode("utf-8", errors=self.errors)
temp = b""
text += t
elif isinstance(t, bytes):
temp += t
else:
raise TypeError("token should only be of type types or str")
if temp:
text += temp.decode("utf-8", errors=self.errors)
return text
@property
def vocab_size(self):
return self.tokenizer.n_vocab
def _convert_id_to_token(self, index: int) -> Union[bytes, str]:
"""Converts an id to a token, special tokens included"""
if index in self.decoder:
return self.decoder[index]
raise ValueError("unknown ids")
def _convert_token_to_id(self, token: Union[bytes, str]) -> int:
"""Converts a token to an id using the vocab, special tokens included"""
if token in self.special_tokens:
return self.special_tokens[token]
if token in self.mergeable_ranks:
return self.mergeable_ranks[token]
raise ValueError("unknown token")
def _tokenize(self, text: str, **kwargs):
"""
Converts a string into a sequence of tokens (strings), using the tokenizer. Splits into words for word-based
vocabularies or into sub-words for sub-word-based vocabularies (BPE/SentencePiece/WordPiece).
Does NOT take care of added tokens.
"""
raise NotImplementedError
def _decode(
self,
token_ids: Union[int, List[int]],
skip_special_tokens: bool = False,
errors: str = None,
**kwargs,
) -> str:
if isinstance(token_ids, int):
token_ids = [token_ids]
def _decode_imgurl(img_token_ids):
assert img_token_ids[0] == self.img_start_id and img_token_ids[-1] == self.img_end_id
img_token_ids = img_token_ids[1:-1]
img_token_ids = img_token_ids[ : img_token_ids.index(self.img_pad_id)]
img_url = bytes(img_token_ids).decode('utf-8')
return [self.img_start_id] + self.tokenizer.encode(img_url) + [self.img_end_id]
token_ids = _replace_closed_tag(token_ids, self.img_start_id, self.img_end_id, _decode_imgurl)
if skip_special_tokens:
if kwargs.get('keep_image_special', False):
token_ids = [i for i in token_ids if i < self.eod_id
or i in self.image_special_tokens]
else:
token_ids = [i for i in token_ids if i < self.eod_id]
return self.tokenizer.decode(token_ids, errors=errors or self.errors)
def to_list_format(self, text: str):
text = unicodedata.normalize("NFC", text)
token_ids = self.tokenizer.encode(
text, allowed_special=set(self.IMAGE_ST + (ENDOFTEXT,)))
def _encode_vl_info(tokens):
if len(tokens) == 0:
return []
if tokens[0] == self.img_start_id and tokens[-1] == self.img_end_id:
key = 'image'
elif tokens[0] == self.ref_start_id and tokens[-1] == self.ref_end_id:
key = 'ref'
elif tokens[0] == self.box_start_id and tokens[-1] == self.box_end_id:
key = 'box'
elif tokens[0] == self.quad_start_id and tokens[-1] == self.quad_end_id:
key = 'quad'
else:
_tobytes = lambda x: x.encode('utf-8') if isinstance(x, str) else x
return [{'text': b''.join(map(_tobytes, map(self.decoder.get, tokens))).decode('utf-8')}]
_tobytes = lambda x: x.encode('utf-8') if isinstance(x, str) else x
val = b''.join(map(_tobytes, map(self.decoder.get, tokens[1:-1]))).decode('utf-8')
return [{key: val}]
return _replace_closed_tag(
token_ids,
(self.img_start_id, self.ref_start_id, self.box_start_id, self.quad_start_id),
(self.img_end_id, self.ref_end_id, self.box_end_id, self.quad_end_id),
_encode_vl_info,
_encode_vl_info,
)
def from_list_format(self, list_format: List[Dict]):
text = ''
num_images = 0
for ele in list_format:
if 'image' in ele:
num_images += 1
text += f'Picture {num_images}: '
text += self.image_start_tag + ele['image'] + self.image_end_tag
text += '\n'
elif 'text' in ele:
text += ele['text']
elif 'box' in ele:
if 'ref' in ele:
text += self.ref_start_tag + ele['ref'] + self.ref_end_tag
for box in ele['box']:
text += self.box_start_tag + '(%d,%d),(%d,%d)' % (box[0], box[1], box[2], box[3]) + self.box_end_tag
else:
raise ValueError("Unsupport element: " + str(ele))
return text
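# --- Illustrative usage (editor's sketch, not part of the original file) ---
# `from_list_format` builds the interleaved query string that the chat model expects;
# the image path and text below are hypothetical:
#
#     query = tokenizer.from_list_format([
#         {'image': 'assets/demo.jpeg'},
#         {'text': 'What is this?'},
#     ])
#     # -> 'Picture 1: <img>assets/demo.jpeg</img>\nWhat is this?'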
def _fetch_latest_picture(self, response, history):
if history is None:
history = []
_history = history + [(response, None)]
for q, r in _history[::-1]:
for ele in self.to_list_format(q)[::-1]:
if 'image' in ele:
return ele['image']
return None
def _fetch_all_box_with_ref(self, text):
list_format = self.to_list_format(text)
output = []
for i, ele in enumerate(list_format):
if 'box' in ele:
bbox = tuple(map(int, ele['box'].replace('(', '').replace(')', '').split(',')))
assert len(bbox) == 4
output.append({'box': bbox})
if i > 0 and 'ref' in list_format[i-1]:
output[-1]['ref'] = list_format[i-1]['ref'].strip()
return output
def draw_bbox_on_latest_picture(
self,
response,
history=None,
) -> Optional[Image.Image]:
image = self._fetch_latest_picture(response, history)
if image is None:
return None
if image.startswith("http://") or image.startswith("https://"):
image = Image.open(requests.get(image, stream=True).raw).convert("RGB")
h, w = image.height, image.width
else:
image = np.asarray(Image.open(image).convert("RGB"))
h, w = image.shape[0], image.shape[1]
visualizer = Visualizer(image)
boxes = self._fetch_all_box_with_ref(response)
if not boxes:
return None
color = random.choice([_ for _ in mcolors.TABLEAU_COLORS.keys()]) # init color
for box in boxes:
if 'ref' in box: # random new color for new refexps
color = random.choice([_ for _ in mcolors.TABLEAU_COLORS.keys()])
x1, y1, x2, y2 = box['box']
x1, y1, x2, y2 = (int(x1 / 1000 * w), int(y1 / 1000 * h), int(x2 / 1000 * w), int(y2 / 1000 * h))
visualizer.draw_box((x1, y1, x2, y2), alpha=1, edge_color=color)
if 'ref' in box:
visualizer.draw_text(box['ref'], (x1, y1), color=color, horizontal_alignment="left")
return visualizer.output
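# --- Illustrative usage (editor's sketch, not part of the original file) ---
# When a response contains a grounded box such as
# "<ref>the apple</ref><box>(100,200),(300,400)</box>" (hypothetical), the box is
# rescaled from the 0-1000 reference frame to the size of the most recent picture
# in the dialogue history and drawn on it:
#
#     vis = tokenizer.draw_bbox_on_latest_picture(response, history)
#     if vis is not None:
#         vis.save('annotated.jpg')   # `vis` is a VisImage (defined below) with a .save() method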
import colorsys
import logging
import math
import numpy as np
import matplotlib as mpl
import matplotlib.colors as mplc
import matplotlib.figure as mplfigure
import torch
from matplotlib.backends.backend_agg import FigureCanvasAgg
from PIL import Image
import random
logger = logging.getLogger(__name__)
class VisImage:
def __init__(self, img, scale=1.0):
self.img = img
self.scale = scale
self.width, self.height = img.shape[1], img.shape[0]
self._setup_figure(img)
def _setup_figure(self, img):
fig = mplfigure.Figure(frameon=False)
self.dpi = fig.get_dpi()
# add a small 1e-2 to avoid precision loss due to matplotlib's truncation
# (https://github.com/matplotlib/matplotlib/issues/15363)
fig.set_size_inches(
(self.width * self.scale + 1e-2) / self.dpi,
(self.height * self.scale + 1e-2) / self.dpi,
)
self.canvas = FigureCanvasAgg(fig)
# self.canvas = mpl.backends.backend_cairo.FigureCanvasCairo(fig)
ax = fig.add_axes([0.0, 0.0, 1.0, 1.0])
ax.axis("off")
self.fig = fig
self.ax = ax
self.reset_image(img)
def reset_image(self, img):
img = img.astype("uint8")
self.ax.imshow(img, extent=(0, self.width, self.height, 0), interpolation="nearest")
def save(self, filepath):
self.fig.savefig(filepath)
def get_image(self):
canvas = self.canvas
s, (width, height) = canvas.print_to_buffer()
buffer = np.frombuffer(s, dtype="uint8")
img_rgba = buffer.reshape(height, width, 4)
rgb, alpha = np.split(img_rgba, [3], axis=2)
return rgb.astype("uint8")
class Visualizer:
def __init__(self, img_rgb, metadata=None, scale=1.0):
self.img = np.asarray(img_rgb).clip(0, 255).astype(np.uint8)
self.font_path = FONT_PATH
self.output = VisImage(self.img, scale=scale)
self.cpu_device = torch.device("cpu")
# texts that are too small are useless, so clamp the font size to a minimum
self._default_font_size = max(
np.sqrt(self.output.height * self.output.width) // 30, 15 // scale
)
def draw_text(
self,
text,
position,
*,
font_size=None,
color="g",
horizontal_alignment="center",
rotation=0,
):
if not font_size:
font_size = self._default_font_size
# since the text background is dark, we don't want the text to be dark
color = np.maximum(list(mplc.to_rgb(color)), 0.2)
color[np.argmax(color)] = max(0.8, np.max(color))
x, y = position
self.output.ax.text(
x,
y,
text,
size=font_size * self.output.scale,
fontproperties=FontProperties(fname=self.font_path),
bbox={"facecolor": "black", "alpha": 0.8, "pad": 0.7, "edgecolor": "none"},
verticalalignment="top",
horizontalalignment=horizontal_alignment,
color=color,
zorder=10,
rotation=rotation,
)
return self.output
def draw_box(self, box_coord, alpha=0.5, edge_color="g", line_style="-"):
x0, y0, x1, y1 = box_coord
width = x1 - x0
height = y1 - y0
linewidth = max(self._default_font_size / 4, 1)
self.output.ax.add_patch(
mpl.patches.Rectangle(
(x0, y0),
width,
height,
fill=False,
edgecolor=edge_color,
linewidth=linewidth * self.output.scale,
alpha=alpha,
linestyle=line_style,
)
)
return self.output
def get_output(self):
return self.output

10 tokenizer_config.json Normal file
@@ -0,0 +1,10 @@
{
"model_max_length": 8192,
"tokenizer_class": "QWenTokenizer",
"auto_map": {
"AutoTokenizer": [
"tokenization_qwen.QWenTokenizer",
null
]
}
}

426 visual.py Normal file
@@ -0,0 +1,426 @@
# Copyright (c) Alibaba Cloud.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
from collections import OrderedDict
import math
import requests
from io import BytesIO
from functools import partial
from PIL import Image
from typing import Callable, Optional, Sequence, Tuple, List
import numpy as np
import torch
from torch import nn
from torch.nn import functional as F
from torch.nn.init import trunc_normal_
from torchvision import transforms
from torchvision.transforms import InterpolationMode
def get_abs_pos(abs_pos, tgt_size):
# abs_pos: L, C
# tgt_size: M
# return: M, C
src_size = int(math.sqrt(abs_pos.size(0)))
tgt_size = int(math.sqrt(tgt_size))
dtype = abs_pos.dtype
if src_size != tgt_size:
return F.interpolate(
abs_pos.float().reshape(1, src_size, src_size, -1).permute(0, 3, 1, 2),
size=(tgt_size, tgt_size),
mode="bicubic",
align_corners=False,
).permute(0, 2, 3, 1).flatten(0, 2).to(dtype=dtype)
else:
return abs_pos
# https://github.com/facebookresearch/mae/blob/efb2a8062c206524e35e47d04501ed4f544c0ae8/util/pos_embed.py#L20
def get_2d_sincos_pos_embed(embed_dim, grid_size, cls_token=False):
"""
grid_size: int of the grid height and width
return:
pos_embed: [grid_size*grid_size, embed_dim] or [1+grid_size*grid_size, embed_dim] (w/ or w/o cls_token)
"""
grid_h = np.arange(grid_size, dtype=np.float32)
grid_w = np.arange(grid_size, dtype=np.float32)
grid = np.meshgrid(grid_w, grid_h) # here w goes first
grid = np.stack(grid, axis=0)
grid = grid.reshape([2, 1, grid_size, grid_size])
pos_embed = get_2d_sincos_pos_embed_from_grid(embed_dim, grid)
if cls_token:
pos_embed = np.concatenate([np.zeros([1, embed_dim]), pos_embed], axis=0)
return pos_embed
def get_2d_sincos_pos_embed_from_grid(embed_dim, grid):
assert embed_dim % 2 == 0
# use half of dimensions to encode grid_h
emb_h = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[0]) # (H*W, D/2)
emb_w = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[1]) # (H*W, D/2)
emb = np.concatenate([emb_h, emb_w], axis=1) # (H*W, D)
return emb
def get_1d_sincos_pos_embed_from_grid(embed_dim, pos):
"""
embed_dim: output dimension for each position
pos: a list of positions to be encoded: size (M,)
out: (M, D)
"""
assert embed_dim % 2 == 0
omega = np.arange(embed_dim // 2, dtype=np.float32)
omega /= embed_dim / 2.
omega = 1. / 10000**omega # (D/2,)
pos = pos.reshape(-1) # (M,)
out = np.einsum('m,d->md', pos, omega) # (M, D/2), outer product
emb_sin = np.sin(out) # (M, D/2)
emb_cos = np.cos(out) # (M, D/2)
emb = np.concatenate([emb_sin, emb_cos], axis=1) # (M, D)
return emb
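# --- Illustrative shapes (editor's sketch, not part of the original file) ---
# For a hypothetical 16x16 grid and a 1024-dim embedding, the table has one row per
# grid position; half of the channels encode the row index and half the column
# index as sin/cos pairs:
#
#     pos = get_2d_sincos_pos_embed(embed_dim=1024, grid_size=16)
#     # pos.shape == (256, 1024)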
class Resampler(nn.Module):
"""
A 2D perceiver-resampler network with one cross-attention layer, using
(grid_size**2) learnable queries and 2D sincos positional embeddings.
Outputs:
A tensor with the shape of (grid_size**2, embed_dim)
"""
def __init__(
self,
grid_size,
embed_dim,
num_heads,
kv_dim=None,
norm_layer=nn.LayerNorm
):
super().__init__()
self.num_queries = grid_size ** 2
self.embed_dim = embed_dim
self.num_heads = num_heads
self.pos_embed = nn.Parameter(
torch.from_numpy(get_2d_sincos_pos_embed(embed_dim, grid_size)).float()
).requires_grad_(False)
self.query = nn.Parameter(torch.zeros(self.num_queries, embed_dim))
trunc_normal_(self.query, std=.02)
if kv_dim is not None and kv_dim != embed_dim:
self.kv_proj = nn.Linear(kv_dim, embed_dim, bias=False)
else:
self.kv_proj = nn.Identity()
self.attn = nn.MultiheadAttention(embed_dim, num_heads)
self.ln_q = norm_layer(embed_dim)
self.ln_kv = norm_layer(embed_dim)
# self.apply(self._init_weights)
def _init_weights(self, m):
if isinstance(m, nn.Linear):
trunc_normal_(m.weight, std=.02)
if isinstance(m, nn.Linear) and m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.LayerNorm):
nn.init.constant_(m.bias, 0)
nn.init.constant_(m.weight, 1.0)
def forward(self, x, attn_mask=None):
pos_embed = get_abs_pos(self.pos_embed, x.size(1))
x = self.kv_proj(x)
x = self.ln_kv(x).permute(1, 0, 2)
N = x.shape[1]
q = self.ln_q(self.query)
out = self.attn(
self._repeat(q, N) + self.pos_embed.unsqueeze(1),
x + pos_embed.unsqueeze(1),
x,
attn_mask=attn_mask)[0]
return out.permute(1, 0, 2)
def _repeat(self, query, N: int):
return query.unsqueeze(1).repeat(1, N, 1)
class VisualAttention(nn.Module):
"""self-attention layer class.
Self-attention layer takes input with size [s, b, h]
and returns output of the same size.
"""
def __init__(self, embed_dim, num_heads,
bias=True, kdim=None, vdim=None):
super(VisualAttention, self).__init__()
self.embed_dim = embed_dim
self.kdim = kdim if kdim is not None else embed_dim
self.vdim = vdim if vdim is not None else embed_dim
self._qkv_same_embed_dim = self.kdim == embed_dim and self.vdim == embed_dim
self.num_heads = num_heads
# Per attention head and per partition values.
assert embed_dim % num_heads == 0
self.hidden_size_per_attention_head = embed_dim // num_heads
self.num_attention_heads_per_partition = num_heads
self.hidden_size_per_partition = embed_dim
# Strided linear layer.
assert self._qkv_same_embed_dim, 'Only self-attention is currently supported'
self.in_proj = nn.Linear(embed_dim, 3 * embed_dim)
self.out_proj = nn.Linear(embed_dim, embed_dim)
self.norm_factor = math.sqrt(self.hidden_size_per_attention_head)
def forward(self, query, key, value, attn_mask = None):
# query/key/value: [sq, b, h]
sq, b, _ = query.size()
assert torch.allclose(query, key), 'Only self-attention is currently supported'
sk = sq
mixed_x_layer = self.in_proj(query)
# [sq, b, (np * 3 * hn)] --> [sq, b, np, 3 * hn]
new_tensor_shape = mixed_x_layer.size()[:-1] + \
(self.num_attention_heads_per_partition,
3 * self.hidden_size_per_attention_head)
mixed_x_layer = mixed_x_layer.view(*new_tensor_shape)
# [sq, b, np, 3 * hn] --> 3 [sq, b, np, hn]
query_layer, key_layer, value_layer = mixed_x_layer.split(
self.hidden_size_per_attention_head, dim=-1)
# [sq, b, np, hn] -> [sq, b * np, hn]
query_layer = query_layer.view(sq,
b * self.num_attention_heads_per_partition,
self.hidden_size_per_attention_head).transpose(0, 1)
# [sk, b, np, hn] -> [sk, b * np, hn]
key_layer = key_layer.view(sk,
b * self.num_attention_heads_per_partition,
self.hidden_size_per_attention_head).transpose(0, 1)
q_scaled = query_layer / self.norm_factor
if attn_mask is not None:
attention_probs = torch.baddbmm(attn_mask, q_scaled, key_layer.transpose(-2, -1))
else:
attention_probs = torch.bmm(q_scaled, key_layer.transpose(-2, -1))
attention_probs = attention_probs.softmax(dim=-1)
value_layer = value_layer.view(sk,
b * self.num_attention_heads_per_partition,
self.hidden_size_per_attention_head).transpose(0, 1)
# matmul: [b * np, sq, hn]
context_layer = torch.bmm(attention_probs, value_layer)
# change view [b, np, sq, hn]
context_layer = context_layer.view(b,
self.num_attention_heads_per_partition,
sq, self.hidden_size_per_attention_head)
# [b, np, sq, hn] --> [sq, b, np, hn]
context_layer = context_layer.permute(2, 0, 1, 3).contiguous()
# [sq, b, np, hn] --> [sq, b, hp]
new_context_layer_shape = context_layer.size()[:-2] + \
(self.hidden_size_per_partition,)
context_layer = context_layer.view(*new_context_layer_shape)
output = self.out_proj(context_layer)
return output
class VisualAttentionBlock(nn.Module):
def __init__(
self,
d_model: int,
n_head: int,
mlp_ratio: float = 4.0,
act_layer: Callable = nn.GELU,
norm_layer: Callable = nn.LayerNorm,
is_cross_attention: bool = False,
):
super().__init__()
self.ln_1 = norm_layer(d_model)
if is_cross_attention:
self.ln_1_kv = norm_layer(d_model)
self.ln_2 = norm_layer(d_model)
mlp_width = int(d_model * mlp_ratio)
self.attn = VisualAttention(d_model, n_head)
self.mlp = nn.Sequential(OrderedDict([
("c_fc", nn.Linear(d_model, mlp_width)),
("gelu", act_layer()),
("c_proj", nn.Linear(mlp_width, d_model))
]))
def attention(
self,
q_x: torch.Tensor,
k_x: Optional[torch.Tensor] = None,
v_x: Optional[torch.Tensor] = None,
attn_mask: Optional[torch.Tensor] = None,
):
k_x = k_x if k_x is not None else q_x
v_x = v_x if v_x is not None else q_x
attn_mask = attn_mask.to(q_x.dtype) if attn_mask is not None else None
return self.attn(q_x, k_x, v_x, attn_mask=attn_mask)
def forward(
self,
q_x: torch.Tensor,
k_x: Optional[torch.Tensor] = None,
v_x: Optional[torch.Tensor] = None,
attn_mask: Optional[torch.Tensor] = None,
):
k_x = self.ln_1_kv(k_x) if hasattr(self, "ln_1_kv") and k_x is not None else None
v_x = self.ln_1_kv(v_x) if hasattr(self, "ln_1_kv") and v_x is not None else None
x = q_x + self.attention(q_x=self.ln_1(q_x), k_x=k_x, v_x=v_x, attn_mask=attn_mask)
x = x + self.mlp(self.ln_2(x))
return x
class TransformerBlock(nn.Module):
def __init__(
self,
width: int,
layers: int,
heads: int,
mlp_ratio: float = 4.0,
act_layer: Callable = nn.GELU,
norm_layer: Callable = nn.LayerNorm,
):
super().__init__()
self.width = width
self.layers = layers
self.resblocks = nn.ModuleList([
VisualAttentionBlock(
width, heads, mlp_ratio, act_layer=act_layer, norm_layer=norm_layer)
for _ in range(layers)
])
def get_cast_dtype(self) -> torch.dtype:
return self.resblocks[0].mlp.c_fc.weight.dtype
def get_cast_device(self) -> torch.device:
return self.resblocks[0].mlp.c_fc.weight.device
def forward(self, x: torch.Tensor, attn_mask: Optional[torch.Tensor] = None):
for r in self.resblocks:
x = r(x, attn_mask=attn_mask)
return x
class VisionTransformer(nn.Module):
def __init__(
self,
image_size: int,
patch_size: int,
width: int,
layers: int,
heads: int,
mlp_ratio: float,
n_queries: int = 256,
output_dim: int = 512,
**kwargs
):
super().__init__()
image_height, image_width = self.image_size = (image_size, image_size)
patch_height, patch_width = self.patch_size = (patch_size, patch_size)
self.grid_size = (image_height // patch_height, image_width // patch_width)
self.output_dim = output_dim
mean = (0.48145466, 0.4578275, 0.40821073)
std = (0.26862954, 0.26130258, 0.27577711)
self.image_transform = transforms.Compose([
transforms.Resize(
(image_size, image_size),
interpolation=InterpolationMode.BICUBIC
),
transforms.ToTensor(),
transforms.Normalize(mean=mean, std=std),
])
self.conv1 = nn.Conv2d(in_channels=3, out_channels=width, kernel_size=patch_size, stride=patch_size, bias=False)
# class embeddings and positional embeddings
scale = width ** -0.5
self.positional_embedding = nn.Parameter(scale * torch.randn(256, width))
norm_layer = partial(nn.LayerNorm, eps=1e-6)
act_layer = nn.GELU
self.ln_pre = norm_layer(width)
self.transformer = TransformerBlock(
width,
layers,
heads,
mlp_ratio,
act_layer=act_layer,
norm_layer=norm_layer,
)
self.attn_pool = Resampler(
grid_size=int(math.sqrt(n_queries)),
embed_dim=output_dim,
num_heads=output_dim // 128,
kv_dim=width,
norm_layer=norm_layer,
)
self.ln_post = norm_layer(output_dim)
self.proj = nn.Parameter((output_dim** -0.5) * torch.randn(output_dim, output_dim))
def forward(self, x: torch.Tensor):
x = x.to(
dtype=self.transformer.get_cast_dtype(),
device=self.transformer.get_cast_device(),
)
# to patches
x = self.conv1(x) # shape = [*, width, grid, grid]
x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2]
x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width]
x = x + get_abs_pos(self.positional_embedding, x.size(1))
x = self.ln_pre(x)
x = x.permute(1, 0, 2) # NLD -> LND
x = self.transformer(x)
x = x.permute(1, 0, 2) # LND -> NLD
x = self.attn_pool(x)
x = self.ln_post(x)
x = x @ self.proj
return x
def encode(self, image_paths: List[str]):
images = []
for image_path in image_paths:
if image_path.startswith("http://") or image_path.startswith("https://"):
image = Image.open(requests.get(image_path, stream=True).raw)
else:
image = Image.open(image_path)
image = image.convert("RGB")
images.append(self.image_transform(image))
images = torch.stack(images, dim=0)
return self(images)
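# --- Illustrative usage (editor's sketch, not part of the original file) ---
# `encode` resizes and normalizes each image to image_size x image_size, stacks the
# batch, and returns n_queries visual tokens per image. The constructor arguments
# and the image path below are assumptions for illustration only:
#
#     vit = VisionTransformer(image_size=448, patch_size=14, width=1664, layers=48,
#                             heads=16, mlp_ratio=4.9231, n_queries=256, output_dim=4096)
#     feats = vit.encode(['assets/demo.jpeg'])   # hypothetical local path
#     # feats.shape == (1, 256, 4096)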