Deploying Qwen2.5-Omni Locally

With the official transformers demo and no optimizations (such as disabling the vision tower), Qwen2.5-Omni-7B needs roughly 26 GB of VRAM. Qwen2.5-Omni-3B takes about 12 GB just to load, rising to around 15 GB while generating a 22-second audio clip.
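To reproduce these numbers on your own machine, you can simply watch GPU memory while the demo generates, for example:

```bash
# Refresh GPU memory usage once per second while the demo runs
watch -n 1 nvidia-smi
```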

Environment

RTX 5090, CUDA 12.8, Ubuntu 22. Because the two attempts were made some time apart, the vLLM and transformers deployments ended up on different PyTorch versions. Prebuilt torch and flash_attn wheels can be downloaded from the address here.
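Before anything else, it is worth confirming that the driver and the installed torch build agree on CUDA; a quick check:

```bash
nvidia-smi   # is the driver up and the CUDA runtime visible?
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```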

1. Deploying with vLLM

Dependencies: torch 2.9.1, torchvision 0.24.1, and torchaudio 2.9.1 installed from pip; Python 3.12; vLLM 0.12.0 and vllm-omni built from source.
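For reference, the pinned installs might look like the sketch below; the cu128 wheel index is my assumption for a 5090 setup, so pick whichever index matches your CUDA install:

```bash
# Pinned PyTorch stack for the vLLM path (cu128 index is an assumption)
pip install torch==2.9.1 torchvision==0.24.1 torchaudio==2.9.1 \
    --index-url https://download.pytorch.org/whl/cu128
```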

```bash
export MAX_JOBS=4  # this variable matters a lot; see the pitfalls below
```
```bash
# Build vLLM from source
git clone https://github.com/vllm-project/vllm.git
cd vllm
git checkout v0.12.0
python use_existing_torch.py
pip install -r requirements/build.txt
pip install -r requirements/common.txt
pip install -e . --no-build-isolation
```
```bash
# Build vllm-omni from source
git clone https://github.com/vllm-project/vllm-omni.git
cd vllm-omni
pip install -e . --no-build-isolation
```
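A quick sanity check that both editable installs went through (assuming vllm-omni registers under the same name as the repo):

```bash
python -c "import vllm; print(vllm.__version__)"  # expect 0.12.0 per the checkout above
pip show vllm-omni                                # should list the editable install
```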

Pitfalls

MAX_JOBS is critical: on my 32 GB server the build ran straight out of memory when it was unset. Set the value according to how much RAM you have; it only affects how fast vLLM compiles, not how it runs. A sketch for deriving it from total RAM follows.
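As a rough starting point (my own heuristic, not an official rule), you could allow one parallel compile job per ~4 GB of RAM:

```bash
# Heuristic sketch: one compile job per ~4 GB of total RAM, floor of 1
TOTAL_GB=$(free -g | awk '/^Mem:/{print $2}')
export MAX_JOBS=$(( TOTAL_GB / 4 ))
[ "$MAX_JOBS" -lt 1 ] && export MAX_JOBS=1
echo "MAX_JOBS=$MAX_JOBS"
```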

2. Running the official transformers demo

Dependencies: torch 2.8, torchvision 0.23, torchaudio 2.8, flash_attn 2.8.3.
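A pinned-install sketch for this stack; the .0 patch versions are my assumption, and since building flash-attn from source is slow, the prebuilt wheel mentioned in the Environment section is usually the faster route:

```bash
# torch 2.8 stack for the transformers demo (patch versions assumed as .0)
pip install torch==2.8.0 torchvision==0.23.0 torchaudio==2.8.0
# flash-attn compiles against the installed torch unless you use a prebuilt wheel
pip install flash_attn==2.8.3 --no-build-isolation
```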

This route is the quickest way to see what the model can do. First install the dependencies:

```bash
pip install transformers==4.52.3
pip install accelerate
```
```bash
pip install qwen-omni-utils[decord] -U
# If that fails to install, fall back to:
# pip install qwen-omni-utils -U
```

Run the script:

```python
import soundfile as sf

from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor
from qwen_omni_utils import process_mm_info

# default: Load the model on the available device(s)
# model = Qwen2_5OmniForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-Omni-7B", torch_dtype="auto", device_map="auto")

# Enabling flash_attention_2 is recommended for faster inference and lower VRAM usage.
model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
    "../models/Qwen/Qwen2.5-Omni-7B",  # change this to your local model path
    torch_dtype="auto",
    device_map="auto",
    attn_implementation="flash_attention_2",
)

processor = Qwen2_5OmniProcessor.from_pretrained("../models/Qwen/Qwen2.5-Omni-7B")

conversation = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "你好,介绍一下杭州的美食!"},
            # {"type": "video", "video": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/draw.mp4"},
        ],
    },
]

# set use audio in video
USE_AUDIO_IN_VIDEO = True

# Preparation for inference
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audios, images, videos = process_mm_info(conversation, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors="pt", padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = inputs.to(model.device).to(model.dtype)

# Inference: Generation of the output text and audio
text_ids, audio = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO)

text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(text)
sf.write(
    "output.wav",
    audio.reshape(-1).detach().cpu().numpy(),
    samplerate=24000,
)
```
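Save it as, say, demo.py (the filename is arbitrary) and run it; the text reply is printed and the speech is written to output.wav at 24 kHz:

```bash
python demo.py    # prints the text reply and writes output.wav
aplay output.wav  # or any player; aplay assumes ALSA is available
```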

Pitfalls

When I pointed the demo at the official sample video URL (the commented-out video entry above), the server hung outright, so don't feed the remote URL in directly; a workaround sketch follows the error below. The error was:

```
[10:00:38] /github/workspace/src/video/video_reader.cc:83: ERROR opening: https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/draw.mp4, Protocol not found
```
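"Protocol not found" is the video reader failing to open an https source. A simple workaround sketch: download the file first and point the conversation's video entry at the local path instead:

```bash
# Fetch the sample video locally, then use "draw.mp4" (a local path)
# in the conversation's video entry instead of the https URL
wget https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/draw.mp4 -O draw.mp4
```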