---
license: apache-2.0
datasets:
- FreedomIntelligence/PubMedVision
language:
- en
- zh
pipeline_tag: text-generation
---

# HuatuoGPT-Vision-7B

## Introduction

HuatuoGPT-Vision is a multimodal LLM for medical applications, built with the PubMedVision dataset. HuatuoGPT-Vision-7B is trained from Qwen2-7B using the LLaVA-v1.5 architecture.

## Quick Start

1. Get the model inference code from GitHub:

```bash
git clone https://github.com/FreedomIntelligence/HuatuoGPT-Vision.git
```

2. Run model inference (a sketch for downloading the weights follows these steps):

```python
from cli import HuatuoChatbot

query = 'What does the picture show?'
image_paths = ['image_path1']

bot = HuatuoChatbot(huatuogpt_vision_model_path)  # loads the model from a local weights directory
output = bot.inference(query, image_paths)        # generates a response for the query and image(s)
print(output)                                     # prints the model output
```
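In the snippet above, `huatuogpt_vision_model_path` must point to a local directory holding this repository's weights. A minimal sketch for fetching them with `huggingface_hub`, assuming the weights are hosted under the repo id `FreedomIntelligence/HuatuoGPT-Vision-7B`:

```python
# Minimal sketch, not from the official instructions; the repo id below is an assumption.
from huggingface_hub import snapshot_download

huatuogpt_vision_model_path = snapshot_download(
    repo_id="FreedomIntelligence/HuatuoGPT-Vision-7B",
    local_dir="HuatuoGPT-Vision-7B",  # local target for the safetensors shards
)
```

Since `bot.inference` takes a list of image paths, a multi-image query such as `bot.inference(query, ['scan_1.png', 'scan_2.png'])` (hypothetical file names) should follow the same pattern.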

## Citation

```
@misc{chen2024huatuogptvisioninjectingmedicalvisual,
      title={HuatuoGPT-Vision, Towards Injecting Medical Visual Knowledge into Multimodal LLMs at Scale},
      author={Junying Chen and Ruyi Ouyang and Anningzhe Gao and Shunian Chen and Guiming Hardy Chen and Xidong Wang and Ruifei Zhang and Zhenyang Cai and Ke Ji and Guangjun Yu and Xiang Wan and Benyou Wang},
      year={2024},
      eprint={2406.19280},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2406.19280},
}
```