---
license: other
---
![Aquila_logo](./log.jpeg)

<h4 align="center">
    <p>
        <b>English</b> |
        <a href="https://huggingface.co/BAAI/AquilaChat2-7B/blob/main/README_zh.md">简体中文</a>
    </p>
</h4>
We open-source our **Aquila2** series, which now includes the base language models **Aquila2-7B** and **Aquila2-34B**, the chat models **AquilaChat2-7B** and **AquilaChat2-34B**, and the long-context chat models **AquilaChat2-7B-16k** and **AquilaChat2-34B-16k**.

Additional details about the Aquila2 models will be presented in the official technical report. Please stay tuned for updates on our official channels.

## Quick Start AquilaChat2-7B (Chat model)
### 1. Inference
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import BitsAndBytesConfig

device = torch.device("cuda:0")
model_info = "BAAI/AquilaChat2-7B"
tokenizer = AutoTokenizer.from_pretrained(model_info, trust_remote_code=True)

# Optional 4-bit NF4 quantization config (only used if passed to from_pretrained below).
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_info,
    trust_remote_code=True,
    torch_dtype=torch.float16,
    # quantization_config=quantization_config,  # Uncomment this line for 4-bit quantization
)
model.eval()
model.to(device)

text = "请给出10个要到北京旅游的理由。"  # "Give 10 reasons to visit Beijing."

# predict.py is distributed with this repository and handles the chat prompt construction.
from predict import predict

out = predict(model, text, tokenizer=tokenizer, max_gen_len=200, top_p=0.95,
              seed=1234, topk=100, temperature=0.9, sft=True, device=device,
              model_name="AquilaChat2-7B")
print(out)
```
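
The example above relies on the `predict.py` helper shipped with this repository, which also takes care of AquilaChat2's chat prompt construction. If you only want a quick smoke test through the plain `transformers` generation API, a minimal sketch is shown below. Treat it as an illustration rather than the official inference path: it does not apply the chat prompt template, and the sampling parameters simply mirror the ones used above.

```python
# Minimal sketch using the standard transformers generation API (no predict.py).
# Note: this skips AquilaChat2's chat prompt formatting, so outputs may differ
# from the helper-based example above.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_info = "BAAI/AquilaChat2-7B"
device = torch.device("cuda:0")

tokenizer = AutoTokenizer.from_pretrained(model_info, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_info, trust_remote_code=True, torch_dtype=torch.float16
).to(device)
model.eval()

text = "请给出10个要到北京旅游的理由。"  # "Give 10 reasons to visit Beijing."
inputs = tokenizer(text, return_tensors="pt").to(device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=200,
        do_sample=True,
        top_p=0.95,
        top_k=100,
        temperature=0.9,
    )

# Decode only the newly generated tokens that follow the prompt.
prompt_len = inputs["input_ids"].shape[1]
print(tokenizer.decode(output_ids[0][prompt_len:], skip_special_tokens=True))
```

If you enable the 4-bit `quantization_config` from the previous example, let `from_pretrained` handle device placement (for example with `device_map="auto"`) instead of calling `.to(device)` on the quantized model.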
## License
The Aquila2 series of open-source models is licensed under the [BAAI Aquila Model Licence Agreement](https://huggingface.co/BAAI/AquilaChat2-7B/blob/main/BAAI-Aquila-Model-License%20-Agreement.pdf).
## Citation
Feel free to cite this repository if you find Aquila2 useful.

```bibtex
@misc{zhang2024aquila2technicalreport,
      title={Aquila2 Technical Report},
      author={Bo-Wen Zhang and Liangdong Wang and Jijie Li and Shuhao Gu and Xinya Wu and Zhengduo Zhang and Boyan Gao and Yulong Ao and Guang Liu},
      year={2024},
      eprint={2408.07410},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2408.07410},
}
```