I. About Qwen2.5-Omni
Qwen2.5-Omni is an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner.
Related links
- HuggingFace : https://huggingface.co/collections/Qwen/qwen25-omni-67de1e5f0f9464dc6314b36e
- Qwen chat : https://chat.qwen.ai/
Key features
- Omni and novel architecture: We propose the Thinker-Talker architecture, an end-to-end multimodal model that perceives diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. We also propose a novel position embedding, named TMRoPE (Time-aligned Multimodal RoPE), to synchronize the timestamps of video inputs with audio.
- Real-time voice and video chat: An architecture designed for fully real-time interaction, supporting chunked input and immediate output.
- Natural and robust speech generation: Surpasses many existing streaming and non-streaming alternatives, demonstrating superior robustness and naturalness in speech generation.
- Strong performance across modalities: Benchmarked against similarly sized single-modality models, Qwen2.5-Omni performs strongly on all modalities. Its audio capabilities surpass the similarly sized Qwen2-Audio, and it reaches performance comparable to Qwen2.5-VL-7B.
- Excellent end-to-end speech instruction following: Qwen2.5-Omni follows end-to-end speech instructions as well as it handles text input, as confirmed by benchmarks such as MMLU and GSM8K.
Model architecture
II. Quickstart
1. Install dependencies
Below we provide simple examples showing how to use Qwen2.5-Omni with 🤗 Transformers.
The code for Qwen2.5-Omni has been merged into the latest Hugging Face transformers library, and we recommend building from source with the following commands:
pip uninstall transformers
pip install git+https://github.com/huggingface/transformers@v4.51.3-Qwen2.5-Omni-preview
pip install accelerate
Otherwise, you may encounter the following error:
KeyError: 'qwen2_5_omni'
We provide a toolkit that lets you handle various types of audio and visual input as conveniently as calling an API, including base64-encoded data, URLs, and interleaved audio, images, and video. You can install it with the following command; make sure ffmpeg is installed on your system:
# It's highly recommended to use `[decord]` feature for faster video loading.
pip install qwen-omni-utils[decord] -U
If you are not on Linux, you may not be able to install decord from PyPI. In that case, you can fall back to torchvision for video processing with `pip install qwen-omni-utils -U`. You can still install decord from source to make sure decord is used when loading videos.
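As a quick sanity check that the preview build was actually installed (and that the `KeyError: 'qwen2_5_omni'` mentioned above will not appear later), you can try importing the Omni classes before downloading any weights. This is only an illustrative sketch, not part of the official instructions:

```python
# Illustrative sanity check: confirm the installed transformers build knows Qwen2.5-Omni.
import transformers

print(transformers.__version__)  # version of the build installed from the preview branch

try:
    from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor  # noqa: F401
    print("Qwen2.5-Omni classes are available.")
except ImportError:
    print("Qwen2.5-Omni classes not found; reinstall transformers from the preview branch above.")
```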
2. Usage with 🤗 Transformers
The following code snippet shows how to use the chat model with `transformers` and `qwen_omni_utils`:
import soundfile as sf
from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor
from qwen_omni_utils import process_mm_info
# default: Load the model on the available device(s)
model = Qwen2_5OmniForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-Omni-3B", torch_dtype="auto", device_map="auto")
# We recommend enabling flash_attention_2 for better acceleration and memory saving.
# model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
# "Qwen/Qwen2.5-Omni-3B",
# torch_dtype="auto",
# device_map="auto",
# attn_implementation="flash_attention_2",
# )
processor = Qwen2_5OmniProcessor.from_pretrained("Qwen/Qwen2.5-Omni-3B")
conversation = [
{
"role": "system",
"content": [
{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
],
},
{
"role": "user",
"content": [
{"type": "video", "video": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/draw.mp4"},
],
},
]
# set use audio in video
USE_AUDIO_IN_VIDEO = True
# Preparation for inference
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audios, images, videos = process_mm_info(conversation, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors="pt", padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = inputs.to(model.device).to(model.dtype)
# Inference: Generation of the output text and audio
text_ids, audio = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO)
text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(text)
sf.write(
"output.wav",
audio.reshape(-1).detach().cpu().numpy(),
samplerate=24000,
)
3. Minimum GPU memory requirements
Model | Precision | 15s video | 30s video | 60s video |
---|---|---|---|---|
Qwen-Omni-3B | FP32 | 89.10 GB | Not recommended | Not recommended |
Qwen-Omni-3B | BF16 | 18.38 GB | 22.43 GB | 28.22 GB |
Qwen-Omni-7B | FP32 | 93.56 GB | Not recommended | Not recommended |
Qwen-Omni-7B | BF16 | 31.11 GB | 41.85 GB | 60.19 GB |
Note: The table above lists the theoretical minimum GPU memory required for inference with `transformers`, with `attn_implementation="flash_attention_2"` enabled for the `BF16` tests; in practice, however, memory usage is typically at least 1.2 times the theoretical value. For more information, see the linked resource.
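To see how your own setup compares with the table above, you can measure the peak GPU memory of a single generation call, for example with `torch.cuda.max_memory_allocated()`. The snippet below is only a rough sketch: it assumes a single CUDA device and reuses the `model`, `inputs`, and `USE_AUDIO_IN_VIDEO` from the example in Section 2.

```python
import torch

# Rough sketch: measure the actual peak GPU memory of one generate() call.
# Assumes a single CUDA device and the BF16 / flash_attention_2 setup above.
torch.cuda.reset_peak_memory_stats()

text_ids, audio = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO)

peak_gib = torch.cuda.max_memory_allocated() / 1024**3
print(f"Peak GPU memory during generation: {peak_gib:.2f} GiB")
```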
4. Using video URL resources
Video URL compatibility largely depends on the third-party library version, as shown in the table below. To change the default backend, set the environment variable `FORCE_QWENVL_VIDEO_READER=torchvision` or `FORCE_QWENVL_VIDEO_READER=decord`; a small example follows the table.
Backend | HTTP | HTTPS |
---|---|---|
torchvision >= 0.19.0 | ✅ | ✅ |
torchvision < 0.19.0 | ❌ | ❌ |
decord | ✅ | ❌ |
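For example, to force the torchvision backend, set the environment variable before any video is processed; setting it before `qwen_omni_utils` is imported is the safest. This is a minimal sketch; use the value `decord` to force the decord backend instead.

```python
import os

# Choose the video-reading backend before qwen_omni_utils processes any video.
# Use "decord" here instead to force the decord backend.
os.environ["FORCE_QWENVL_VIDEO_READER"] = "torchvision"

from qwen_omni_utils import process_mm_info  # the backend choice is picked up from the environment
```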
5. Batch inference
The model can batch inputs composed of mixed samples of various types, such as text, images, audio, and video, when `return_audio=False` is set. Here is an example.
# Sample messages for batch inference
# Conversation with video only
conversation1 = [
{
"role": "system",
"content": [
{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
],
},
{
"role": "user",
"content": [
{"type": "video", "video": "/path/to/video.mp4"},
]
}
]
# Conversation with audio only
conversation2 = [
{
"role": "system",
"content": [
{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
],
},
{
"role": "user",
"content": [
{"type": "audio", "audio": "/path/to/audio.wav"},
]
}
]
# Conversation with pure text
conversation3 = [
{
"role": "system",
"content": [
{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
],
},
{
"role": "user",
"content": "who are you?"
}
]
# Conversation with mixed media
conversation4 = [
{
"role": "system",
"content": [
{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
],
},
{
"role": "user",
"content": [
{"type": "image", "image": "/path/to/image.jpg"},
{"type": "video", "video": "/path/to/video.mp4"},
{"type": "audio", "audio": "/path/to/audio.wav"},
{"type": "text", "text": "What are the elements can you see and hear in these medias?"},
],
}
]
# Combine messages for batch processing
conversations = [conversation1, conversation2, conversation3, conversation4]
# set use audio in video
USE_AUDIO_IN_VIDEO = True
# Preparation for batch inference
text = processor.apply_chat_template(conversations, add_generation_prompt=True, tokenize=False)
audios, images, videos = process_mm_info(conversations, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors="pt", padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = inputs.to(model.device).to(model.dtype)
# Batch Inference
text_ids = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO, return_audio=False)
text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(text)
III. Usage Tips
1. Prompt for audio output
If users need audio output, the system prompt must be set to "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech." Otherwise, audio output may not work as expected.
{
"role": "system",
"content": [
{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
],
}
2. Using audio in video
During multimodal interaction, videos provided by users often contain audio (for example, questions about the video content, or sounds produced by events in the video). This information helps the model deliver a better interaction experience. We therefore provide the following options for users to decide whether to use the audio in a video.
# first place, in data preprocessing
audios, images, videos = process_mm_info(conversations, use_audio_in_video=True)
# second place, in model processor
inputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors="pt",
padding=True, use_audio_in_video=True)
# third place, in model inference
text_ids, audio = model.generate(**inputs, use_audio_in_video=True)
Note that during a multi-round conversation, the `use_audio_in_video` parameter must be set to the same value in all of these places; otherwise, unexpected results will occur.
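One simple way to keep all three call sites consistent is to thread a single flag through a small helper. The `run_omni` function below is only an illustrative sketch (it is not part of the library) and assumes the `model`, `processor`, and `process_mm_info` objects from the quickstart example:

```python
def run_omni(conversation, use_audio_in_video=True, return_audio=True):
    """Illustrative helper: pass one use_audio_in_video flag to all three call sites."""
    text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
    audios, images, videos = process_mm_info(conversation, use_audio_in_video=use_audio_in_video)
    inputs = processor(
        text=text,
        audio=audios,
        images=images,
        videos=videos,
        return_tensors="pt",
        padding=True,
        use_audio_in_video=use_audio_in_video,
    )
    inputs = inputs.to(model.device).to(model.dtype)
    return model.generate(
        **inputs,
        use_audio_in_video=use_audio_in_video,
        return_audio=return_audio,
    )
```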
3. Use audio output or not
The model supports both text and audio outputs. If users do not need audio output, they can call `model.disable_talker()` after initializing the model. This option saves about ~2GB of GPU memory, but the `return_audio` option of the `generate` function can then only be set to `False`.
model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
"Qwen/Qwen2.5-Omni-3B",
torch_dtype="auto",
device_map="auto"
)
model.disable_talker()
For a more flexible experience, we recommend deciding whether to return audio at the time the `generate` function is called. If `return_audio` is set to `False`, the model returns only text outputs, so text responses come back faster.
model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
"Qwen/Qwen2.5-Omni-3B",
torch_dtype="auto",
device_map="auto"
)
...
text_ids = model.generate(**inputs, return_audio=False)
4. Changing the voice type of the output audio
Qwen2.5-Omni supports changing the voice of the output audio. The `"Qwen/Qwen2.5-Omni-3B"` checkpoint currently supports the following two voice types:
Voice type | Gender | Description |
---|---|---|
Chelsie | Female | A honeyed, velvety voice with gentle warmth and luminous clarity. |
Ethan | Male | A bright, upbeat voice with infectious energy and a warm, approachable feel. |
Users can specify the voice type with the `speaker` parameter of the `generate` function. If `speaker` is not specified, the default voice is `Chelsie`.
text_ids, audio = model.generate(**inputs, speaker="Chelsie")
text_ids, audio = model.generate(**inputs, speaker="Ethan")
5. Using Flash-Attention 2 to speed up generation
First, make sure the latest version of Flash Attention 2 is installed:
pip install -U flash-attn --no-build-isolation
Also, your hardware must be compatible with FlashAttention 2. Read more in the official documentation of the flash attention repository. FlashAttention-2 can only be used when the model is loaded in `torch.float16` or `torch.bfloat16`.
To load and run the model with FlashAttention-2, add `attn_implementation="flash_attention_2"` when loading the model:
import torch
from transformers import Qwen2_5OmniForConditionalGeneration
model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
"Qwen/Qwen2.5-Omni-3B",
device_map="auto",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
)
IV. Performance
We conducted a comprehensive evaluation of Qwen2.5-Omni, which shows strong performance across all modalities, outperforming similarly sized single-modality models such as Qwen2.5-VL-7B and Qwen2-Audio, as well as closed-source models like Gemini-1.5-Pro.
On tasks that require integrating multiple modalities, such as OmniBench, Qwen2.5-Omni achieves state-of-the-art results.
Even on single-modality tasks, it excels in areas including speech recognition (Common Voice), translation (CoVoST2), audio understanding (MMAU), image reasoning (MMMU, MMStar), video understanding (MVBench), and speech generation (Seed-tts-eval and subjective naturalness).
Multimodality → Text
OmniBench
Model | Speech | Sound Event |
---|---|---|
MIO-Instruct | 36.96% | 33.58%
AnyGPT (7B) | 17.77% | 20.75%
video-SALMONN | 34.11% | 31.70%
UnifiedIO2-xlarge | 39.56% | 36.98%
UnifiedIO2-xxlarge | 34.24% | 36.98%
MiniCPM-o | - | -
Baichuan-Omni-1.5 | - | -
Qwen2.5-Omni-3B | 52.14% | 52.08%
Qwen2.5-Omni-7B | 55.25% | 60.00%
Audio → Text
Automatic Speech Recognition (ASR)
Librispeech
Model | dev-clean | dev-other |
---|---|---|
SpeechVerse | - | -
Whisper-large-v3 | - | -
Llama-3-8B | - | -
Llama-3-70B | - | -
Seed-ASR-Multilingual | - | -
MiniCPM-o | - | -
MinMo | - | -
Qwen-Audio | 1.8 | 4.0
Qwen2-Audio | 1.3 | 3.4
Qwen2.5-Omni-3B | 2.0 | 4.1
Qwen2.5-Omni-7B | 1.6 | 3.5
Common Voice 15
Model | en | zh |
---|---|---|
MinMo | 7.9 | 6.3
Qwen2-Audio | 8.6 | 6.9
Qwen2.5-Omni-3B | 9.1 | 6.0
Qwen2.5-Omni-7B | 7.6 | 5.2
Fleurs
Model | zh | en |
---|---|---|
Whisper-large-v3 | |
Seed-ASR-Multilingual | - | 3.4
Megrez-3B-Omni | 10.8 | -
MiniCPM-o | 4.4 | -
MinMo | 3.0 | 3.8
Qwen2-Audio | 7.5 | -
Qwen2.5-Omni-3B | 3.2 | 5.4
Qwen2.5-Omni-7B | 3.0 | 4.1
Wenetspeech
Model | test-net | test-meeting |
---|---|---|
Seed-ASR-Chinese | |
Megrez-3B-Omni | - | 16.4
MiniCPM-o | 6.9 | -
MinMo | 6.8 | 7.4
Qwen2.5-Omni-3B | 6.3 | 8.1
Qwen2.5-Omni-7B | 5.9 | 7.7
Voxpopuli-V1.0-en
Model | Performance |
---|---|
Llama-3-8B | 6.2
Llama-3-70B | 5.7
Qwen2.5-Omni-3B | 6.6
Qwen2.5-Omni-7B | 5.8
Speech-to-Text Translation (S2TT)
CoVoST2
Model | en-de | de-en |
---|---|---|
SpeechLLaMA | - | 27.1
BLSP | 14.1 | -
MiniCPM-o | - | -
MinMo | - | 39.9
Qwen-Audio | 25.1 | 33.9
Qwen2-Audio | 29.9 | 35.2
Qwen2.5-Omni-3B | 28.3 | 38.1
Qwen2.5-Omni-7B | 30.2 | 37.7
Speech Emotion Recognition (SER)
Meld
Model | Performance |
---|---|
WavLM-large | 0.542
MiniCPM-o | 0.524
Qwen-Audio | 0.557
Qwen2-Audio | 0.553
Qwen2.5-Omni-3B | 0.558
Qwen2.5-Omni-7B | 0.570
Vocal Sound Classification (VSC)
VocalSound
Model | Performance |
---|---|
CLAP | 0.495
Pengi | 0.604
Qwen-Audio | 0.929
Qwen2-Audio | 0.939
Qwen2.5-Omni-3B | 0.936
Qwen2.5-Omni-7B | 0.939
Music
GiantSteps Tempo
Model | Performance |
---|---|
Llark-7B | 0.86
Qwen2.5-Omni-3B | 0.88
Qwen2.5-Omni-7B | 0.88
MusicCaps
Model | Performance |
---|---|
LP-MusicCaps | 0.291 |
Qwen2.5-Omni-3B | 0.325 | 0.163
Qwen2.5-Omni-7B | 0.328 | 0.162
Audio Reasoning
MMAU
Model | Sound | Music |
---|---|---|
Qwen2-Audio | 54.95 | 50.98
Qwen2.5-Omni-3B | 70.27 | 60.48
Qwen2.5-Omni-7B | 67.87 | 69.16
Voice Chatting
VoiceBench
Model | AlpacaEval | CommonEval |
---|---|---|
MERaLiON | 4.50 | 3.77
Megrez-3B-Omni | 3.50 | 2.95
Lyra-Base | 3.85 | 3.50
MiniCPM-o | 4.42 | 4.15
Baichuan-Omni-1.5 | 4.50 | 4.05
Qwen2-Audio | 3.74 | 3.43
Qwen2.5-Omni-3B | 4.32 | 4.00
Qwen2.5-Omni-7B | 4.49 | 3.93
VoiceBench
Model | OpenBookQA | IFEval |
---|---|---|
MERaLiON | 27.23 | 62.93
Megrez-3B-Omni | 28.35 | 25.71
Lyra-Base | 72.75 | 36.28
MiniCPM-o | 78.02 | 49.25
Baichuan-Omni-1.5 | 74.51 | 54.54
Qwen2-Audio | 49.45 | 26.33
Qwen2.5-Omni-3B | 74.73 | 42.10
Qwen2.5-Omni-7B | 81.10 | 52.87
Image → Text
Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Other Best | Qwen2.5-VL-7B | GPT-4o-mini |
---|---|---|---|---|---|
MMMU val | 59.2 | 53.1 | 53.9 | 58.6 | 60.0
MMMU-Pro overall | 36.6 | 29.7 | - | 38.3 | 37.6
MathVista testmini | 67.9 | 59.4 | 71.9 | 68.2 | 52.5
MathVision full | 25.0 | 20.8 | 23.1 | 25.1 | -
MMBench-V1.1-EN test | 81.8 | 77.8 | 80.5 | 82.6 | 76.0
MMVet turbo | 66.8 | 62.1 | 67.5 | 67.1 | 66.9
MMStar | 64.0 | 55.7 | 64.0 | 63.9 | 54.8
MME sum | 2340 | 2117 | 2372 | 2347 | 2003
MuirBench | 59.2 | 48.0 | - | 59.2 | -
CRPE relation | 76.5 | 73.7 | - | 76.4 | -
RealWorldQA avg | 70.3 | 62.6 | 71.9 | 68.5 | -
MME-RealWorld en | 61.6 | 55.6 | - | 57.4 | -
MM-MT-Bench | 6.0 | 5.0 | - | 6.3 | -
AI2D | 83.2 | 79.5 | 85.8 | 83.9 | -
TextVQA val | 84.4 | 79.8 | 83.2 | 84.9 | -
DocVQA test | 95.2 | 93.3 | 93.5 | 95.7 | -
ChartQA test avg | 85.3 | 82.8 | 84.9 | 87.3 | -
OCRBench_V2 en | 57.8 | 51.7 | - | 56.3 | -
Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Qwen2.5-VL-7B | Grounding DINO | Gemini 1.5 Pro |
---|---|---|---|---|---|
RefCOCO val | 90.5 | 88.7 | 90.0 | 90.6 | 73.2
RefCOCO testA | 93.5 | 91.8 | 92.5 | 93.2 | 72.9
RefCOCO testB | 86.6 | 84.0 | 85.4 | 88.2 | 74.6
RefCOCO+ val | 85.4 | 81.1 | 84.2 | 88.2 | 62.5
RefCOCO+ testA | 91.0 | 87.5 | 89.1 | 89.0 | 63.9
RefCOCO+ testB | 79.3 | 73.2 | 76.9 | 75.9 | 65.0
RefCOCOg val | 87.4 | 85.0 | 87.2 | 86.1 | 75.2
RefCOCOg test | 87.9 | 85.1 | 87.2 | 87.0 | 76.2
ODinW | 42.4 | 39.2 | 37.3 | 55.0 | 36.7
PointGrounding | 66.5 | 46.2 | 67.3 | - | -
Video (without audio) → Text
Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Other Best | Qwen2.5-VL-7B | GPT-4o-mini |
---|---|---|---|---|---|
Video-MME w/o sub | 64.3 | 62.0 | 63.9 | 65.1 | 64.8
Video-MME w sub | 72.4 | 68.6 | 67.9 | 71.6 | -
MVBench | 70.3 | 68.7 | 67.2 | 69.6 | -
EgoSchema test | 68.6 | 61.4 | 63.2 | 65.0 | -
Zero-shot Speech Generation
Content Consistency
SEED
Model | test-zh | test-en |
---|---|---|
Seed-TTS_RL | 1.00 | 1.94
MaskGCT | 2.27 | 2.62
E2_TTS | 1.97 | 2.19
F5-TTS | 1.56 | 1.83
CosyVoice 2 | 1.45 | 2.57
CosyVoice 2-S | 1.45 | 2.38
Qwen2.5-Omni-3B_ICL | 1.95 | 2.87
Qwen2.5-Omni-3B_RL | 1.58 | 2.51
Qwen2.5-Omni-7B_ICL | 1.70 | 2.72
Qwen2.5-Omni-7B_RL | 1.42 | 2.32
Speaker Similarity
SEED
Model | test-zh | test-en |
---|---|---|
Seed-TTS_RL | 0.801 | 0.766
MaskGCT | 0.774 | 0.714
E2_TTS | 0.730 | 0.710
F5-TTS | 0.741 | 0.647
CosyVoice 2 | 0.748 | 0.652
CosyVoice 2-S | 0.753 | 0.654
Qwen2.5-Omni-3B_ICL | 0.741 | 0.635
Qwen2.5-Omni-3B_RL | 0.744 | 0.635
Qwen2.5-Omni-7B_ICL | 0.752 | 0.632
Qwen2.5-Omni-7B_RL | 0.754 | 0.641
Text → Text
Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Qwen2.5-7B | Qwen2.5-3B | Qwen2-7B | Llama3.1-8B | Gemma2-9B |
---|---|---|---|---|---|---|---|
MMLU-Pro | 47.0 | 40.4 | 56.3 | 43.7 | 44.1 | 48.3 | 52.1 |
MMLU-redux | 71.0 | 60.9 | 75.4 | 64.4 | 67.3 | 67.2 | 72.8 |
LiveBench 0831 | 29.6 | 22.3 | 35.9 | 26.8 | 29.2 | 26.7 | 30.6
GPQA | 30.8 | 34.3 | 36.4 | 30.3 | 34.3 | 32.8 | 32.8 |
MATH | 71.5 | 63.6 | 75.5 | 65.9 | 52.9 | 51.9 | 44.3 |
GSM8K | 88.7 | 82.6 | 91.6 | 86.7 | 85.7 | 84.5 | 76.7 |
HumanEval | 78.7 | 70.7 | 84.8 | 74.4 | 79.9 | 72.6 | 68.9 |
MBPP | 73.2 | 70.4 | 79.2 | 72.7 | 67.2 | 69.6 | 74.9 |
MultiPL-E | 65.8 | 57.6 | 70.4 | 60.2 | 59.1 | 50.7 | 53.4 |
LiveCodeBench 2305-2409 | 24.6 | 16.5 | 28.7 | 19.9 | 23.9 | 8.3 | 18.9