first commit

Charles95 committed on 2024-10-08 00:51:26 +00:00
commit a65bb8ba81
29 changed files with 919 additions and 0 deletions

.gitattributes (vendored, new file, +37 lines)

@@ -0,0 +1,37 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
image/sensevoice2.png filter=lfs diff=lfs merge=lfs -text
image/webui.png filter=lfs diff=lfs merge=lfs -text

README.md (new file, +219 lines)

@@ -0,0 +1,219 @@
---
license: other
license_name: model-license
license_link: https://github.com/modelscope/FunASR/blob/main/MODEL_LICENSE
language:
- en
- zh
- ja
- ko
library: funasr
---
([简体中文](./README_zh.md)|English|[日本語](./README_ja.md))
# Introduction
GitHub [repo](https://github.com/FunAudioLLM/SenseVoice): https://github.com/FunAudioLLM/SenseVoice
SenseVoice is a speech foundation model with multiple speech understanding capabilities, including automatic speech recognition (ASR), spoken language identification (LID), speech emotion recognition (SER), and audio event detection (AED).
<img src="image/sensevoice2.png">
[//]: # (<div align="center"><img src="image/sensevoice.png" width="700"/> </div>)
<div align="center">
<h4>
<a href="https://fun-audio-llm.github.io/"> Homepage </a>
<a href="#What's News"> What's News </a>
<a href="#Benchmarks"> Benchmarks </a>
<a href="#Install"> Install </a>
<a href="#Usage"> Usage </a>
<a href="#Community"> Community </a>
</h4>
Model Zoo:
[modelscope](https://www.modelscope.cn/models/iic/SenseVoiceSmall), [huggingface](https://huggingface.co/FunAudioLLM/SenseVoiceSmall)
Online Demo:
[modelscope demo](https://www.modelscope.cn/studios/iic/SenseVoice), [huggingface space](https://huggingface.co/spaces/FunAudioLLM/SenseVoice)
</div>
<a name="Highligts"></a>
# Highlights 🎯
**SenseVoice** focuses on high-accuracy multilingual speech recognition, speech emotion recognition, and audio event detection.
- **Multilingual Speech Recognition:** Trained with over 400,000 hours of data, supporting more than 50 languages, the recognition performance surpasses that of the Whisper model.
- **Rich Transcription:**
  - Excellent emotion recognition, matching and surpassing the best current emotion recognition models on test data.
  - Sound event detection, covering common human-computer interaction events such as background music, applause, laughter, crying, coughing, and sneezing.
- **Efficient Inference:** The SenseVoice-Small model utilizes a non-autoregressive end-to-end framework, leading to exceptionally low inference latency. It requires only 70ms to process 10 seconds of audio, which is 15 times faster than Whisper-Large.
- **Convenient Finetuning:** Provides convenient finetuning scripts and strategies, allowing users to easily address long-tail sample issues according to their business scenarios.
- **Service Deployment:** Offers a service deployment pipeline supporting multi-concurrent requests, with client-side languages including Python, C++, HTML, Java, and C#, among others.
<a name="What's News"></a>
# What's New 🔥
- 2024/7: Added export features for [ONNX](https://github.com/FunAudioLLM/SenseVoice/demo_onnx.py) and [libtorch](https://github.com/FunAudioLLM/SenseVoice/demo_libtorch.py), as well as Python runtimes: [funasr-onnx-0.4.0](https://pypi.org/project/funasr-onnx/), [funasr-torch-0.1.1](https://pypi.org/project/funasr-torch/).
- 2024/7: The [SenseVoice-Small](https://www.modelscope.cn/models/iic/SenseVoiceSmall) speech understanding model is open-sourced, offering high-precision multilingual speech recognition, emotion recognition, and audio event detection for Mandarin, Cantonese, English, Japanese, and Korean, with exceptionally low inference latency.
- 2024/7: CosyVoice, a model for natural speech generation with multi-language, timbre, and emotion control, is released. CosyVoice excels at multilingual voice generation, zero-shot voice generation, cross-lingual voice cloning, and instruction following. [CosyVoice repo](https://github.com/FunAudioLLM/CosyVoice) and [CosyVoice space](https://www.modelscope.cn/studios/iic/CosyVoice-300M).
- 2024/7: [FunASR](https://github.com/modelscope/FunASR) is a fundamental speech recognition toolkit that offers a variety of features, including speech recognition (ASR), Voice Activity Detection (VAD), Punctuation Restoration, Language Models, Speaker Verification, Speaker Diarization and multi-talker ASR.
<a name="Benchmarks"></a>
# Benchmarks 📝
## Multilingual Speech Recognition
We compared the performance of multilingual speech recognition between SenseVoice and Whisper on open-source benchmark datasets, including AISHELL-1, AISHELL-2, Wenetspeech, LibriSpeech, and Common Voice. In terms of Chinese and Cantonese recognition, the SenseVoice-Small model has advantages.
<div align="center">
<img src="image/asr_results1.png" width="400" /><img src="image/asr_results2.png" width="400" />
</div>
## Speech Emotion Recognition
Due to the current lack of widely-used benchmarks and methods for speech emotion recognition, we conducted evaluations across various metrics on multiple test sets and performed a comprehensive comparison with numerous results from recent benchmarks. The selected test sets encompass data in both Chinese and English, and include multiple styles such as performances, films, and natural conversations. Without finetuning on the target data, SenseVoice was able to achieve and exceed the performance of the current best speech emotion recognition models.
<div align="center">
<img src="image/ser_table.png" width="1000" />
</div>
Furthermore, we compared multiple open-source speech emotion recognition models on the test sets, and the results indicate that the SenseVoice-Large model achieved the best performance on nearly all datasets, while the SenseVoice-Small model also surpassed other open-source models on the majority of the datasets.
<div align="center">
<img src="image/ser_figure.png" width="500" />
</div>
## Audio Event Detection
Although trained exclusively on speech data, SenseVoice can still function as a standalone event detection model. We compared its performance on the environmental sound classification ESC-50 dataset against the widely used industry models BEATS and PANN. The SenseVoice model achieved commendable results on these tasks. However, due to limitations in training data and methodology, its event classification performance has some gaps compared to specialized AED models.
<div align="center">
<img src="image/aed_figure.png" width="500" />
</div>
## Computational Efficiency
The SenseVoice-Small model adopts a non-autoregressive end-to-end architecture, resulting in extremely low inference latency. With a parameter count similar to the Whisper-Small model, it infers more than 5 times faster than Whisper-Small and 15 times faster than Whisper-Large.
<div align="center">
<img src="image/inference.png" width="1000" />
</div>
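To sanity-check these numbers on your own hardware, here is a minimal timing sketch, assuming the `AutoModel` setup from the Usage section below and the bundled example clip; the authors' 70 ms figure was measured under their own conditions, so results will vary:
```python
import time

from funasr import AutoModel

# Load SenseVoice-Small without a VAD model so that only SenseVoice inference is timed.
model = AutoModel(model="FunAudioLLM/SenseVoiceSmall", device="cuda:0", hub="hf")
clip = f"{model.model_path}/example/en.mp3"

model.generate(input=clip, cache={}, language="auto")  # warm-up run (CUDA init, caches)

start = time.perf_counter()
model.generate(input=clip, cache={}, language="auto")
print(f"inference took {(time.perf_counter() - start) * 1000:.1f} ms")
```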
# Requirements
```shell
pip install -r requirements.txt
```
<a name="Usage"></a>
# Usage
## Inference
Supports input of audio in any format and of any duration.
```python
from funasr import AutoModel
from funasr.utils.postprocess_utils import rich_transcription_postprocess
model_dir = "FunAudioLLM/SenseVoiceSmall"
model = AutoModel(
    model=model_dir,
    vad_model="fsmn-vad",
    vad_kwargs={"max_single_segment_time": 30000},
    device="cuda:0",
    hub="hf",
)

# en
res = model.generate(
    input=f"{model.model_path}/example/en.mp3",
    cache={},
    language="auto",  # "zh", "en", "yue", "ja", "ko", "nospeech"
    use_itn=True,
    batch_size_s=60,
    merge_vad=True,  # merge short VAD-split segments
    merge_length_s=15,
)
text = rich_transcription_postprocess(res[0]["text"])
print(text)
```
Parameter Description:
- `model_dir`: The name of the model, or the path to the model on the local disk.
- `vad_model`: Enables VAD (voice activity detection), which splits long audio into shorter clips. In this case, the reported inference time covers both VAD and SenseVoice and represents the end-to-end latency. To benchmark the SenseVoice model's inference time alone, disable the VAD model.
- `vad_kwargs`: Specifies the configurations for the VAD model. `max_single_segment_time`: denotes the maximum duration for audio segmentation by the `vad_model`, with the unit being milliseconds (ms).
- `use_itn`: Whether the output result includes punctuation and inverse text normalization.
- `batch_size_s`: Indicates the use of dynamic batching, where the total duration of audio in the batch is measured in seconds (s).
- `merge_vad`: Whether to merge short audio fragments segmented by the VAD model, with the merged length being `merge_length_s`, in seconds (s).
If all inputs are short audio clips (under 30 s) and batch inference is needed to speed things up, the VAD model can be removed and `batch_size` set accordingly.
```python
model = AutoModel(model=model_dir, device="cuda:0", hub="hf")

res = model.generate(
    input=f"{model.model_path}/example/en.mp3",
    cache={},
    language="zh",  # "zh", "en", "yue", "ja", "ko", "nospeech"
    use_itn=False,
    batch_size=64,
)
```
For more usage, please refer to the [docs](https://github.com/modelscope/FunASR/blob/main/docs/tutorial/README.md).
### Inference directly
Supports audio input in any format; input duration is limited to 30 seconds or less.
```python
from model import SenseVoiceSmall
from funasr.utils.postprocess_utils import rich_transcription_postprocess
model_dir = "FunAudioLLM/SenseVoiceSmall"
m, kwargs = SenseVoiceSmall.from_pretrained(model=model_dir, device="cuda:0", hub="hf")
m.eval()
res = m.inference(
    data_in=f"{kwargs['model_path']}/example/en.mp3",
    language="auto",  # "zh", "en", "yue", "ja", "ko", "nospeech"
    use_itn=False,
    **kwargs,
)
text = rich_transcription_postprocess(res[0][0]["text"])
print(text)
```
### Export and Test (*ongoing*)
Refer to [SenseVoice](https://github.com/FunAudioLLM/SenseVoice); a minimal runtime sketch follows.
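For reference, the README_ja.md in this same commit ships ONNX and libtorch runtime examples; the sketch below is adapted from its ONNX section and assumes `funasr-onnx` is installed and the model has already been downloaded to the local modelscope cache:
```python
# pip3 install -U funasr funasr-onnx
from pathlib import Path

from funasr_onnx import SenseVoiceSmall
from funasr_onnx.utils.postprocess_utils import rich_transcription_postprocess

model_dir = "iic/SenseVoiceSmall"
# quantize=True loads the int8 ONNX model; the export lands in the original model directory.
model = SenseVoiceSmall(model_dir, batch_size=10, quantize=True)

wav_or_scp = ["{}/.cache/modelscope/hub/{}/example/en.mp3".format(Path.home(), model_dir)]
res = model(wav_or_scp, language="auto", use_itn=True)
print([rich_transcription_postprocess(i) for i in res])
```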
## Service
Refer to [SenseVoice](https://github.com/FunAudioLLM/SenseVoice).
## Finetune
Refer to [SenseVoice](https://github.com/FunAudioLLM/SenseVoice).
## WebUI
```shell
python webui.py
```
<div align="center"><img src="image/webui.png" width="700"/> </div>
<a name="Community"></a>
# Community
If you encounter problems in use, you can open an issue directly on the GitHub page.
You can also scan the DingTalk group QR code below to join the community for discussion.
| FunAudioLLM | FunASR |
|:----------------------------------------------------------------:|:--------------------------------------------------------:|
| <div align="left"><img src="image/dingding_sv.png" width="250"/> | <img src="image/dingding_funasr.png" width="250"/></div> |

README_ja.md (new file, +278 lines)

@@ -0,0 +1,278 @@
# SenseVoice
「[Simplified Chinese](./README_zh.md)」|「[English](./README.md)」|「Japanese」
GitHub [repo](https://github.com/FunAudioLLM/SenseVoice): https://github.com/FunAudioLLM/SenseVoice
SenseVoice is a speech foundation model with speech understanding capabilities, including automatic speech recognition (ASR), spoken language identification (LID), speech emotion recognition (SER), and acoustic event classification (AEC) or acoustic event detection (AED). This project introduces the SenseVoice model, provides benchmarks on several task test sets, and describes the environment setup and inference methods needed to try the model.
<div align="center">
<img src="image/sensevoice2.png">
[//]: # (<div align="center"><img src="image/sensevoice2.png" width="700"/> </div>)
<h4>
<a href="#What's New"> Documentation Home </a>
<a href="#核心功能"> Core Features </a>
</h4>
<h4>
<a href="#On Going"> What's New </a>
<a href="#Benchmark"> Benchmarks </a>
<a href="#环境安装"> Installation </a>
<a href="#用法教程"> Usage </a>
<a href="#联系我们"> Contact Us </a>
</h4>
Model Zoo: [modelscope](https://www.modelscope.cn/models/iic/SenseVoiceSmall), [huggingface](https://huggingface.co/FunAudioLLM/SenseVoiceSmall)
Online Demo:
[modelscope demo](https://www.modelscope.cn/studios/iic/SenseVoice), [huggingface space](https://huggingface.co/spaces/FunAudioLLM/SenseVoice)
</div>
<a name="核心功能"></a>
# Core Features 🎯
**SenseVoice** focuses on high-accuracy multilingual speech recognition, speech emotion recognition, and audio event detection.
- **Multilingual Recognition:** Trained on over 400,000 hours of data, supporting more than 50 languages, with recognition performance surpassing the Whisper model.
- **Rich Transcription:**
  - Excellent emotion recognition, matching and surpassing the best current emotion recognition models on test data.
  - Audio event detection, covering common human-computer interaction events such as background music, applause, laughter, crying, coughing, and sneezing.
- **Efficient Inference:** The SenseVoice-Small model uses a non-autoregressive end-to-end framework with extremely low inference latency: 10 seconds of audio takes only 70 ms, 15 times faster than Whisper-Large.
- **Convenient Finetuning:** Convenient finetuning scripts and strategies make it easy to fix long-tail sample issues for specific business scenarios.
- **Service Deployment:** A complete service deployment pipeline supports multi-concurrent requests, with client-side languages including Python, C++, HTML, Java, and C#.
<a name="最新动态"></a>
# What's New 🔥
- 2024/7: Added new export features for [ONNX](./demo_onnx.py) and [libtorch](./demo_libtorch.py), and released Python runtimes [funasr-onnx-0.4.0](https://pypi.org/project/funasr-onnx/) and [funasr-torch-0.1.1](https://pypi.org/project/funasr-torch/).
- 2024/7: The [SenseVoice-Small](https://www.modelscope.cn/models/iic/SenseVoiceSmall) multilingual speech understanding model is open-sourced, supporting multilingual speech recognition, emotion recognition, and event detection for Chinese, Cantonese, English, Japanese, and Korean, with extremely low inference latency.
- 2024/7: CosyVoice, a model dedicated to natural speech generation, supports multi-language, timbre, and emotion control and excels at multilingual voice generation, zero-shot voice generation, cross-lingual voice cloning, and instruction following. [CosyVoice repo](https://github.com/FunAudioLLM/CosyVoice) and [CosyVoice demo](https://www.modelscope.cn/studios/iic/CosyVoice-300M).
- 2024/7: [FunASR](https://github.com/modelscope/FunASR) is a fundamental speech recognition toolkit offering features including speech recognition (ASR), voice activity detection (VAD), punctuation restoration, language models, speaker verification, speaker diarization, and multi-talker ASR.
<a name="Benchmarks"></a>
# Benchmarks 📝
## Multilingual Speech Recognition
We compared the multilingual speech recognition performance and inference efficiency of SenseVoice and Whisper on open-source benchmark datasets, including AISHELL-1, AISHELL-2, Wenetspeech, LibriSpeech, and Common Voice. For Chinese and Cantonese recognition, the SenseVoice-Small model has a clear advantage.
<div align="center">
<img src="image/asr_results1.png" width="400" /><img src="image/asr_results2.png" width="400" />
</div>
## Speech Emotion Recognition
Since widely used benchmarks and methods for speech emotion recognition are currently lacking, we evaluated a variety of metrics on multiple test sets and made a comprehensive comparison with numerous results from recent benchmarks. The selected test sets include both Chinese and English data in several styles, such as performances, films, and natural conversations. Without finetuning on the target data, SenseVoice matched and surpassed the performance of the best current speech emotion recognition models on the test data.
<div align="center">
<img src="image/ser_table.png" width="1000" />
</div>
In addition, we compared several open-source speech emotion recognition models on the test sets; the results show that the SenseVoice-Large model achieved the best results on almost all datasets, and the SenseVoice-Small model also outperformed the other open-source models on the majority of datasets.
<div align="center">
<img src="image/ser_figure.png" width="500" />
</div>
## Audio Event Detection
Although SenseVoice is trained only on speech data, it can still be used as a standalone event detection model. On the environmental sound classification dataset ESC-50, we compared it with the widely used BEATS and PANN models. The SenseVoice model achieves good results on these tasks, but because of constraints in its training data and method, its event classification performance still lags behind specialized event detection models.
<div align="center">
<img src="image/aed_figure.png" width="500" />
</div>
## Computational Efficiency
The SenseVoice-Small model adopts a non-autoregressive end-to-end architecture with extremely low inference latency. With a parameter count comparable to Whisper-Small, it is 5 times faster than Whisper-Small and 15 times faster than Whisper-Large. Moreover, its inference time shows no significant increase as audio duration grows.
<div align="center">
<img src="image/inference.png" width="1000" />
</div>
<a name="环境安装"></a>
# Installation 🐍
```shell
pip install -r requirements.txt
```
<a name="用法教程"></a>
# Usage 🛠️
## Inference
Supports audio input in any format and of any duration.
```python
from funasr import AutoModel
from funasr.utils.postprocess_utils import rich_transcription_postprocess
model_dir = "iic/SenseVoiceSmall"
model = AutoModel(
    model=model_dir,
    trust_remote_code=True,
    remote_code="./model.py",
    vad_model="fsmn-vad",
    vad_kwargs={"max_single_segment_time": 30000},
    device="cuda:0",
)

# en
res = model.generate(
    input=f"{model.model_path}/example/en.mp3",
    cache={},
    language="auto",  # "zh", "en", "yue", "ja", "ko", "nospeech"
    use_itn=True,
    batch_size_s=60,
    merge_vad=True,  # merge short VAD-split segments
    merge_length_s=15,
)
text = rich_transcription_postprocess(res[0]["text"])
print(text)
```
Parameter description:
- `model_dir`: The model name, or the path to the model on the local disk.
- `trust_remote_code`:
  - `True` means the model code implementation is loaded from `remote_code`, which specifies the exact location of the `model` code (for example, `model.py` in the current directory) and supports absolute paths, relative paths, and network URLs.
  - `False` means the model code implementation is the version integrated into [FunASR](https://github.com/modelscope/FunASR); in that case, modifying `model.py` in the current directory has no effect, since the FunASR-internal version is loaded. For the model code, [see here](https://github.com/modelscope/FunASR/tree/main/funasr/models/sense_voice).
- `vad_model`: Enables VAD (voice activity detection), which splits long audio into shorter clips. In this case, the measured inference time covers both VAD and SenseVoice, i.e. the end-to-end latency; to benchmark the SenseVoice model alone, disable the VAD model.
- `vad_kwargs`: Configuration for the VAD model. `max_single_segment_time`: the maximum duration of an audio segment produced by the `vad_model`, in milliseconds (ms).
- `use_itn`: Whether the output includes punctuation and inverse text normalization.
- `batch_size_s`: Indicates dynamic batching, where the total duration of audio in a batch is measured in seconds (s).
- `merge_vad`: Whether to merge the short audio fragments produced by the VAD model; the merged length is `merge_length_s`, in seconds (s).
- `ban_emo_unk`: Disables the emo_unk label.
If all inputs are short audio clips (under 30 s) and batch inference is needed to speed things up, the VAD model can be removed and `batch_size` set accordingly.
```python
model = AutoModel(model=model_dir, trust_remote_code=True, device="cuda:0")

res = model.generate(
    input=f"{model.model_path}/example/en.mp3",
    cache={},
    language="auto",  # "zh", "en", "yue", "ja", "ko", "nospeech"
    use_itn=True,
    batch_size=64,
)
```
For more detailed usage, please refer to the [documentation](https://github.com/modelscope/FunASR/blob/main/docs/tutorial/README.md).
### Direct Inference
Supports audio input in any format; input duration is limited to 30 seconds or less.
```python
from model import SenseVoiceSmall
from funasr.utils.postprocess_utils import rich_transcription_postprocess
model_dir = "iic/SenseVoiceSmall"
m, kwargs = SenseVoiceSmall.from_pretrained(model=model_dir, device="cuda:0")
m.eval()
res = m.inference(
    data_in=f"{kwargs['model_path']}/example/en.mp3",
    language="auto",  # "zh", "en", "yue", "ja", "ko", "nospeech"
    use_itn=False,
    **kwargs,
)
text = rich_transcription_postprocess(res[0][0]["text"])
print(text)
```
## Service Deployment
To be completed.
### Export and Test
<details><summary>ONNX and libtorch export</summary>
#### ONNX
```python
# pip3 install -U funasr funasr-onnx
from pathlib import Path
from funasr_onnx import SenseVoiceSmall
from funasr_onnx.utils.postprocess_utils import rich_transcription_postprocess
model_dir = "iic/SenseVoiceSmall"
model = SenseVoiceSmall(model_dir, batch_size=10, quantize=True)
# inference
wav_or_scp = ["{}/.cache/modelscope/hub/{}/example/en.mp3".format(Path.home(), model_dir)]
res = model(wav_or_scp, language="auto", use_itn=True)
print([rich_transcription_postprocess(i) for i in res])
```
Note: the ONNX model is exported into the original model directory.
#### Libtorch
```python
from pathlib import Path
from funasr_torch import SenseVoiceSmall
from funasr_torch.utils.postprocess_utils import rich_transcription_postprocess
model_dir = "iic/SenseVoiceSmall"
model = SenseVoiceSmall(model_dir, batch_size=10, device="cuda:0")
wav_or_scp = ["{}/.cache/modelscope/hub/{}/example/en.mp3".format(Path.home(), model_dir)]
res = model(wav_or_scp, language="auto", use_itn=True)
print([rich_transcription_postprocess(i) for i in res])
```
Note: the libtorch model is exported into the original model directory.
</details>
### Deployment
To be completed.
## Finetuning
### Install the training environment
```shell
git clone https://github.com/alibaba/FunASR.git && cd FunASR
pip3 install -e ./
```
### Data preparation
The data format must include the following fields:
```text
{"key": "YOU0000008470_S0000238_punc_itn", "text_language": "<|en|>", "emo_target": "<|NEUTRAL|>", "event_target": "<|Speech|>", "with_or_wo_itn": "<|withitn|>", "target": "Including legal due diligence, subscription agreement, negotiation.", "source": "/cpfs01/shared/Group-speech/beinian.lzr/data/industrial_data/english_all/audio/YOU0000008470_S0000238.wav", "target_len": 7, "source_len": 140}
{"key": "AUD0000001556_S0007580", "text_language": "<|en|>", "emo_target": "<|NEUTRAL|>", "event_target": "<|Speech|>", "with_or_wo_itn": "<|woitn|>", "target": "there is a tendency to identify the self or take interest in what one has got used to", "source": "/cpfs01/shared/Group-speech/beinian.lzr/data/industrial_data/english_all/audio/AUD0000001556_S0007580.wav", "target_len": 18, "source_len": 360}
```
For details, see `data/train_example.jsonl`; a minimal sketch for writing such entries follows.
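For illustration, a minimal sketch that appends one entry in this format; the key, paths, transcript, and the exact semantics of `target_len`/`source_len` here are placeholders, so adapt them to your data pipeline:
```python
import json

# Placeholder values; "target_len" is assumed to count target tokens and
# "source_len" the source length used by the batch sampler, as in the examples above.
entry = {
    "key": "utt_0001",
    "text_language": "<|en|>",
    "emo_target": "<|NEUTRAL|>",
    "event_target": "<|Speech|>",
    "with_or_wo_itn": "<|withitn|>",
    "target": "Hello world.",
    "source": "/path/to/audio/utt_0001.wav",
    "target_len": 2,
    "source_len": 40,
}

with open("data/train_example.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(entry, ensure_ascii=False) + "\n")
```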
### Launch training
Be sure to change `train_tool` in `finetune.sh` to the absolute path of `funasr/bin/train_ds.py` inside the FunASR directory installed above.
```shell
bash finetune.sh
```
## WebUI
```shell
python webui.py
```
<div align="center"><img src="image/webui.png" width="700"/> </div>
# Contact Us
If you run into problems, you can open an issue directly on the GitHub page. Anyone interested in speech is welcome to scan the DingTalk group QR codes below to join the community for discussion.
| FunAudioLLM | FunASR |
|:----------------------------------------------------------------:|:--------------------------------------------------------:|
| <div align="left"><img src="image/dingding_sv.png" width="250"/> | <img src="image/dingding_funasr.png" width="250"/></div> |

README_zh.md (new file, +219 lines)

@@ -0,0 +1,219 @@
# SenseVoice
「Simplified Chinese」|「[English](./README.md)」|「[Japanese](./README_ja.md)」
GitHub [repo](https://github.com/FunAudioLLM/SenseVoice): https://github.com/FunAudioLLM/SenseVoice
SenseVoice is a speech foundation model with audio understanding capabilities, including speech recognition (ASR), language identification (LID), speech emotion recognition (SER), and acoustic event classification (AEC) or acoustic event detection (AED). This project introduces the SenseVoice model, provides benchmarks on several task test sets, and describes the environment setup and inference usage needed to try the model.
<div align="center">
<img src="image/sensevoice2.png">
[//]: # (<div align="center"><img src="image/sensevoice2.png" width="700"/> </div>)
<h4>
<a href="#What's New"> Documentation Home </a>
<a href="#核心功能"> Core Features </a>
</h4>
<h4>
<a href="#On Going"> What's New </a>
<a href="#Benchmark"> Benchmarks </a>
<a href="#环境安装"> Installation </a>
<a href="#用法教程"> Usage </a>
<a href="#联系我们"> Contact Us </a>
</h4>
Model Zoo: [modelscope](https://www.modelscope.cn/models/iic/SenseVoiceSmall), [huggingface](https://huggingface.co/FunAudioLLM/SenseVoiceSmall)
Online Demo:
[modelscope demo](https://www.modelscope.cn/studios/iic/SenseVoice), [huggingface space](https://huggingface.co/spaces/FunAudioLLM/SenseVoice)
</div>
<a name="核心功能"></a>
# Core Features 🎯
**SenseVoice** focuses on high-accuracy multilingual speech recognition, speech emotion recognition, and audio event detection.
- **Multilingual Recognition:** Trained on over 400,000 hours of data, supporting more than 50 languages, with recognition performance surpassing the Whisper model.
- **Rich Transcription:**
  - Excellent emotion recognition, matching and surpassing the best current emotion recognition models on test data.
  - Sound event detection, covering common human-computer interaction events such as music, applause, laughter, crying, coughing, and sneezing.
- **Efficient Inference:** The SenseVoice-Small model uses a non-autoregressive end-to-end framework with extremely low inference latency: 10 seconds of audio takes only 70 ms, 15 times faster than Whisper-Large.
- **Convenient Finetuning:** Convenient finetuning scripts and strategies make it easy to fix long-tail sample issues for specific business scenarios.
- **Service Deployment:** A complete service deployment pipeline supports multi-concurrent requests, with client-side languages including Python, C++, HTML, Java, and C#.
<a name="最新动态"></a>
# What's New 🔥
- 2024/7: Added export features for [ONNX](https://github.com/FunAudioLLM/SenseVoice) and [libtorch](https://github.com/FunAudioLLM/SenseVoice), as well as Python runtimes: [funasr-onnx-0.4.0](https://pypi.org/project/funasr-onnx/), [funasr-torch-0.1.1](https://pypi.org/project/funasr-torch/).
- 2024/7: The [SenseVoice-Small](https://www.modelscope.cn/models/iic/SenseVoiceSmall) multilingual speech understanding model is open-sourced, supporting multilingual speech recognition, emotion recognition, and event detection for Mandarin, Cantonese, English, Japanese, and Korean, with extremely low inference latency.
- 2024/7: CosyVoice, a model dedicated to natural speech generation, supports multi-language, timbre, and emotion control and excels at multilingual voice generation, zero-shot voice generation, cross-lingual voice cloning, and instruction following. [CosyVoice repo](https://github.com/FunAudioLLM/CosyVoice) and [CosyVoice demo](https://www.modelscope.cn/studios/iic/CosyVoice-300M).
- 2024/7: [FunASR](https://github.com/modelscope/FunASR) is a fundamental speech recognition toolkit offering features including speech recognition (ASR), voice activity detection (VAD), punctuation restoration, language models, speaker verification, speaker diarization, and multi-talker ASR.
<a name="Benchmarks"></a>
# Benchmarks 📝
## Multilingual Speech Recognition
We compared the multilingual speech recognition performance and inference efficiency of SenseVoice and Whisper on open-source benchmark datasets, including AISHELL-1, AISHELL-2, Wenetspeech, LibriSpeech, and Common Voice. For Chinese and Cantonese recognition, the SenseVoice-Small model has a clear advantage.
<div align="center">
<img src="image/asr_results1.png" width="400" /><img src="image/asr_results2.png" width="400" />
</div>
## Speech Emotion Recognition
Since widely used benchmarks and methods for speech emotion recognition are currently lacking, we evaluated a variety of metrics on multiple test sets and made a comprehensive comparison with numerous results from recent benchmarks. The selected test sets include both Chinese and English data in several styles, such as performances, films, and natural conversations. Without finetuning on the target data, SenseVoice matched and surpassed the performance of the best current speech emotion recognition models on the test data.
<div align="center">
<img src="image/ser_table.png" width="1000" />
</div>
In addition, we compared several open-source speech emotion recognition models on the test sets; the results show that the SenseVoice-Large model achieved the best results on almost all datasets, and the SenseVoice-Small model also outperformed the other open-source models on the majority of datasets.
<div align="center">
<img src="image/ser_figure.png" width="500" />
</div>
## Audio Event Detection
Although SenseVoice is trained only on speech data, it can still be used as a standalone event detection model. On the environmental sound classification dataset ESC-50, we compared it with the widely used BEATS and PANN models. The SenseVoice model achieves good results on these tasks, but because of constraints in its training data and method, its event classification performance still lags behind specialized event detection models.
<div align="center">
<img src="image/aed_figure.png" width="500" />
</div>
## Computational Efficiency
The SenseVoice-Small model adopts a non-autoregressive end-to-end architecture with extremely low inference latency. With a parameter count comparable to Whisper-Small, it is 5 times faster than Whisper-Small and 15 times faster than Whisper-Large. Moreover, its inference time shows no significant increase as audio duration grows.
<div align="center">
<img src="image/inference.png" width="1000" />
</div>
<a name="环境安装"></a>
# Installing Dependencies 🐍
```shell
pip install -r requirements.txt
```
<a name="用法教程"></a>
# Usage 🛠️
## Inference
### Inference with funasr
Supports audio input in any format and of any duration.
```python
from funasr import AutoModel
from funasr.utils.postprocess_utils import rich_transcription_postprocess
model_dir = "FunAudioLLM/SenseVoiceSmall"
model = AutoModel(
    model=model_dir,
    trust_remote_code=True,
    vad_model="fsmn-vad",
    vad_kwargs={"max_single_segment_time": 30000},
    device="cuda:0",
    hub="hf",
)

# en
res = model.generate(
    input=f"{model.model_path}/example/en.mp3",
    cache={},
    language="auto",  # "zh", "en", "yue", "ja", "ko", "nospeech"
    use_itn=True,
    batch_size_s=60,
    merge_vad=True,  # merge short VAD-split segments
    merge_length_s=15,
)
text = rich_transcription_postprocess(res[0]["text"])
print(text)
```
Parameter description:
- `model_dir`: The model name, or the path to the model on the local disk.
- `vad_model`: Enables VAD (voice activity detection), which splits long audio into shorter clips. In this case, the measured inference time covers both VAD and SenseVoice, i.e. the end-to-end latency; to benchmark the SenseVoice model alone, disable the VAD model.
- `vad_kwargs`: Configuration for the VAD model. `max_single_segment_time`: the maximum duration of an audio segment produced by the `vad_model`, in milliseconds (ms).
- `use_itn`: Whether the output includes punctuation and inverse text normalization.
- `batch_size_s`: Indicates dynamic batching, where the total duration of audio in a batch is measured in seconds (s).
- `merge_vad`: Whether to merge the short audio fragments produced by the VAD model; the merged length is `merge_length_s`, in seconds (s).
If all inputs are short audio clips (under 30 s) and batch inference is needed to speed things up, the VAD model can be removed and `batch_size` set accordingly.
```python
model = AutoModel(model=model_dir, trust_remote_code=True, device="cuda:0", hub="hf")

res = model.generate(
    input=f"{model.model_path}/example/en.mp3",
    cache={},
    language="auto",  # "zh", "en", "yue", "ja", "ko", "nospeech"
    use_itn=True,
    batch_size=64,
)
```
For more detailed usage, please refer to the [documentation](https://github.com/modelscope/FunASR/blob/main/docs/tutorial/README.md).
### Direct Inference
Supports audio input in any format; input duration is limited to 30 seconds or less.
```python
from model import SenseVoiceSmall
from funasr.utils.postprocess_utils import rich_transcription_postprocess
model_dir = "FunAudioLLM/SenseVoiceSmall"
m, kwargs = SenseVoiceSmall.from_pretrained(model=model_dir, device="cuda:0", hub="hf")
m.eval()
res = m.inference(
    data_in=f"{kwargs['model_path']}/example/en.mp3",
    language="auto",  # "zh", "en", "yue", "ja", "ko", "nospeech"
    use_itn=False,
    **kwargs,
)
text = rich_transcription_postprocess(res[0][0]["text"])
print(text)
```
## Service Deployment
Refer to [SenseVoice](https://github.com/FunAudioLLM/SenseVoice).
### Export and Test (*in progress*)
Refer to [SenseVoice](https://github.com/FunAudioLLM/SenseVoice).
### Deployment
Refer to [SenseVoice](https://github.com/FunAudioLLM/SenseVoice).
## Finetuning
Refer to [SenseVoice](https://github.com/FunAudioLLM/SenseVoice).
## WebUI
```shell
python webui.py
```
<div align="center"><img src="image/webui.png" width="700"/> </div>
# Contact Us
If you encounter problems, you can open an issue directly on the GitHub page. Speech enthusiasts are welcome to scan the DingTalk group QR codes below to join the community for discussion.
| FunAudioLLM | FunASR |
|:----------------------------------------------------------------:|:--------------------------------------------------------:|
| <div align="left"><img src="image/dingding_sv.png" width="250"/> | <img src="image/dingding_funasr.png" width="250"/></div> |

am.mvn (new file, +8 lines; diff suppressed because one or more lines are too long)

chn_jpn_yue_eng_ko_spectok.bpe.model (binary, stored with Git LFS, new file; not shown)

config.yaml (new file, +97 lines)

@@ -0,0 +1,97 @@
encoder: SenseVoiceEncoderSmall
encoder_conf:
  output_size: 512
  attention_heads: 4
  linear_units: 2048
  num_blocks: 50
  tp_blocks: 20
  dropout_rate: 0.1
  positional_dropout_rate: 0.1
  attention_dropout_rate: 0.1
  input_layer: pe
  pos_enc_class: SinusoidalPositionEncoder
  normalize_before: true
  kernel_size: 11
  sanm_shfit: 0
  selfattention_layer_type: sanm

model: SenseVoiceSmall
model_conf:
  length_normalized_loss: true
  sos: 1
  eos: 2
  ignore_id: -1

tokenizer: SentencepiecesTokenizer
tokenizer_conf:
  bpemodel: null
  unk_symbol: <unk>
  split_with_space: true

frontend: WavFrontend
frontend_conf:
  fs: 16000
  window: hamming
  n_mels: 80
  frame_length: 25
  frame_shift: 10
  lfr_m: 7
  lfr_n: 6
  cmvn_file: null

dataset: SenseVoiceCTCDataset
dataset_conf:
  index_ds: IndexDSJsonl
  batch_sampler: EspnetStyleBatchSampler
  data_split_num: 32
  batch_type: token
  batch_size: 14000
  max_token_length: 2000
  min_token_length: 60
  max_source_length: 2000
  min_source_length: 60
  max_target_length: 200
  min_target_length: 0
  shuffle: true
  num_workers: 4
  sos: ${model_conf.sos}
  eos: ${model_conf.eos}
  IndexDSJsonl: IndexDSJsonl
  retry: 20

train_conf:
  accum_grad: 1
  grad_clip: 5
  max_epoch: 20
  keep_nbest_models: 10
  avg_nbest_model: 10
  log_interval: 100
  resume: true
  validate_interval: 10000
  save_checkpoint_interval: 10000

optim: adamw
optim_conf:
  lr: 0.00002
scheduler: warmuplr
scheduler_conf:
  warmup_steps: 25000

specaug: SpecAugLFR
specaug_conf:
  apply_time_warp: false
  time_warp_window: 5
  time_warp_mode: bicubic
  apply_freq_mask: true
  freq_mask_width_range:
  - 0
  - 30
  lfr_rate: 6
  num_freq_mask: 1
  apply_time_mask: true
  time_mask_width_range:
  - 0
  - 12
  num_time_mask: 1
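The `${model_conf.sos}` and `${model_conf.eos}` references above use OmegaConf-style interpolation; a minimal sketch for inspecting this file programmatically, assuming the `omegaconf` package and a local `config.yaml`:
```python
from omegaconf import OmegaConf

cfg = OmegaConf.load("config.yaml")
print(cfg.encoder, cfg.encoder_conf.output_size)  # SenseVoiceEncoderSmall 512

# Resolve interpolations such as ${model_conf.sos} when exporting to plain dicts.
dataset_conf = OmegaConf.to_container(cfg.dataset_conf, resolve=True)
print(dataset_conf["sos"], dataset_conf["eos"])  # 1 2
```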

configuration.json (new file, +14 lines)

@@ -0,0 +1,14 @@
{
  "framework": "pytorch",
  "task": "auto-speech-recognition",
  "model": {"type": "funasr"},
  "pipeline": {"type": "funasr-pipeline"},
  "model_name_in_hub": {
    "ms": "",
    "hf": ""
  },
  "file_path_metas": {
    "init_param": "model.pt",
    "config": "config.yaml",
    "tokenizer_conf": {"bpemodel": "chn_jpn_yue_eng_ko_spectok.bpe.model"},
    "frontend_conf": {"cmvn_file": "am.mvn"}
  }
}
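The `file_path_metas` block maps loader roles to files shipped in this repository. A minimal sketch, using only the standard library and a hypothetical `resolve_file_metas` helper (not part of FunASR), of how those entries can be turned into absolute paths:
```python
import json
from pathlib import Path

def resolve_file_metas(model_dir: str) -> dict:
    """Hypothetical helper: expand file_path_metas values to absolute paths."""
    root = Path(model_dir)
    metas = json.loads((root / "configuration.json").read_text())["file_path_metas"]

    def resolve(node):
        if isinstance(node, dict):
            return {key: resolve(value) for key, value in node.items()}
        return str(root / node)  # leaf values are file names relative to model_dir

    return resolve(metas)

# e.g. {"init_param": ".../model.pt", "config": ".../config.yaml", ...}
print(resolve_file_metas("."))
```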

demo.py (new file, +27 lines)

@@ -0,0 +1,27 @@
from funasr import AutoModel
from funasr.utils.postprocess_utils import rich_transcription_postprocess
model_dir = "FunAudioLLM/SenseVoiceSmall"
model = AutoModel(
    model=model_dir,
    vad_model="fsmn-vad",
    vad_kwargs={"max_single_segment_time": 30000},
    device="cuda:0",
    hub="hf",
)

# en
res = model.generate(
    input=f"{model.model_path}/example/en.mp3",
    cache={},
    language="auto",  # "zh", "en", "yue", "ja", "ko", "nospeech"
    use_itn=True,
    batch_size_s=60,
    merge_vad=True,  # merge short VAD-split segments
    merge_length_s=15,
)
text = rich_transcription_postprocess(res[0]["text"])
print(text)

example/en.mp3 (binary, new file; not shown)

example/ja.mp3 (binary, new file; not shown)

example/ko.mp3 (binary, new file; not shown)

example/yue.mp3 (binary, new file; not shown)

example/zh.mp3 (binary, new file; not shown)

image/aed_figure.png (binary, new file, 116 KiB; not shown)

image/asr_results.png (binary, new file, 238 KiB; not shown)

image/asr_results1.png (binary, new file, 80 KiB; not shown)

image/asr_results2.png (binary, new file, 74 KiB; not shown)

image/dingding_funasr.png (binary, new file, 106 KiB; not shown)

image/dingding_sv.png (binary, new file, 114 KiB; not shown)

image/inference.png (binary, new file, 935 KiB; not shown)

image/sensevoice.png (binary, new file, 880 KiB; not shown)

image/sensevoice2.png (binary, stored with Git LFS, new file; not shown)

image/ser_figure.png (binary, new file, 194 KiB; not shown)

image/ser_table.png (binary, new file, 318 KiB; not shown)

image/webui.png (binary, stored with Git LFS, new file; not shown)

image/wechat.png (binary, new file, 187 KiB; not shown)

model.pt (binary, stored with Git LFS, new file; not shown)

requirements.txt (new file, +8 lines)

@@ -0,0 +1,8 @@
torch>=1.13
torchaudio
modelscope
huggingface
huggingface_hub
funasr>=1.1.2
numpy<=1.26.4
gradio