vit-base-patch16-384_a13728443302801408312394

Vision Transformer (ViT) is a transformer encoder model (BERT-like), pretrained in a supervised fashion on a large collection of images (namely ImageNet-21k) at a resolution of 224x224 pixels.
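As a rough illustration of how a ViT turns an image into the token sequence its transformer encoder consumes, the sketch below splits a 224x224 RGB image into non-overlapping 16x16 patches (the patch size implied by the model name) and flattens each one. The `patchify` helper is purely illustrative and is not part of this repository's code; in practice the `transformers` preprocessor and the model's patch-embedding layer handle this step.

```python
import numpy as np

def patchify(image: np.ndarray, patch_size: int = 16) -> np.ndarray:
    """Split an (H, W, C) image into flattened non-overlapping patches.

    Returns an array of shape (num_patches, patch_size * patch_size * C),
    one row per patch, mirroring the tokenization a ViT performs before
    its linear patch-embedding projection.
    """
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0, "image must tile evenly"
    patches = image.reshape(h // patch_size, patch_size, w // patch_size, patch_size, c)
    # Reorder so each patch's pixels are contiguous, then flatten per patch.
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, patch_size * patch_size * c)
    return patches

# A 224x224 image with 16x16 patches yields a 14x14 grid = 196 tokens,
# each of dimension 16 * 16 * 3 = 768 before the embedding projection.
dummy = np.zeros((224, 224, 3), dtype=np.float32)
tokens = patchify(dummy)
print(tokens.shape)  # (196, 768)
```

A learnable `[CLS]` token and position embeddings are then added before the sequence enters the encoder, just as in BERT.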