# Day 16 Dev Log
---
## 🔧 Qwen-TTS Flash Attention Optimization (10:00)

### Background

By default, the Qwen3-TTS 1.7B model is slow to load and uses a large amount of GPU memory during inference. Introducing Flash Attention 2 significantly speeds up model loading and improves inference efficiency.

### Implementation

Install `flash-attn` in the `qwen-tts` Conda environment:

```bash
conda activate qwen-tts
pip install -U flash-attn --no-build-isolation
```

### Results

- **Load time**: reduced from ~60s to **8.9s** ⚡
- **GPU memory**: significantly reduced, eliminating the OOM risk
- **Code changes**: none; environment-only optimization (auto-detected)
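The "auto-detected" note refers to loaders that pick Flash Attention 2 when the package is importable and otherwise fall back to PyTorch's built-in SDPA kernels. A minimal sketch of that detection pattern (the helper name `pick_attn_implementation` is illustrative, not part of the project code):

```python
import importlib.util

def pick_attn_implementation() -> str:
    """Choose the attention backend to request at model load time.

    Prefers Flash Attention 2 when the flash-attn package is importable;
    otherwise falls back to PyTorch's built-in SDPA kernels. Illustrative
    helper, not taken from the project codebase.
    """
    if importlib.util.find_spec("flash_attn") is not None:
        return "flash_attention_2"
    return "sdpa"

# A transformers-style loader would then receive, for example,
# attn_implementation=pick_attn_implementation()
```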
---
## 🛡️ Service Watchdog (10:30)

### Problem

The resident services (`vigent2-qwen-tts` and `vigent2-latentsync`) can go zombie after long uptimes or due to GPU memory fragmentation (port open but unresponsive).

### Solution

Wrote a Python watchdog script that polls each service's `/health` endpoint every 30 seconds and automatically restarts a service after 3 consecutive failures.

1. **Watchdog script**: `backend/scripts/watchdog.py`
2. **Launch script**: `run_watchdog.sh` (PM2-based)

### Core Logic

```python
# Three consecutive failed heartbeats trigger a restart
if service["failures"] >= service["threshold"]:
    subprocess.run(["pm2", "restart", service["name"]])
```

### Deployment Status

- `vigent2-watchdog` is up and registered in the PM2 process list
- Monitored services: `vigent2-qwen-tts` (8009), `vigent2-latentsync` (8007)
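The full script lives in `backend/scripts/watchdog.py`; the loop it implements can be sketched roughly as follows. This is a hypothetical reconstruction of the behavior described above using only the standard library — the `runner` hook and function names are illustrative, not the script's actual API.

```python
import subprocess
import urllib.error
import urllib.request

# Hypothetical reconstruction of the watchdog state described above
SERVICES = [
    {"name": "vigent2-qwen-tts", "url": "http://127.0.0.1:8009/health",
     "failures": 0, "threshold": 3},
    {"name": "vigent2-latentsync", "url": "http://127.0.0.1:8007/health",
     "failures": 0, "threshold": 3},
]

def is_healthy(url: str, timeout: float = 5.0) -> bool:
    """Return True when the /health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def poll_once(services, runner=subprocess.run):
    """One heartbeat pass: reset counters on success, restart at threshold."""
    for service in services:
        if is_healthy(service["url"]):
            service["failures"] = 0
            continue
        service["failures"] += 1
        # Three consecutive failed heartbeats trigger a PM2 restart
        if service["failures"] >= service["threshold"]:
            runner(["pm2", "restart", service["name"]])
            service["failures"] = 0

# The real script calls poll_once(SERVICES) every 30 seconds.
```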
---
## ⚡ LatentSync Performance Audit

A code audit confirmed that LatentSync 1.6 ships with these optimizations built in:

- ✅ **Flash Attention**: natively uses `torch.nn.functional.scaled_dot_product_attention`
- ✅ **DeepCache**: enabled (`cache_interval=3`), providing a ~2.5x speedup
- ✅ **GPU concurrency**: the dual-GPU pipeline (GPU0 TTS | GPU1 LipSync) confirmed working
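DeepCache runs the full UNet only every `cache_interval` denoising steps and reuses cached deep features in between, so with `cache_interval=3` two of every three steps skip the expensive pass — which is roughly where a ~2.5x speedup comes from. An illustrative schedule of which steps pay the full cost (not LatentSync's actual code):

```python
def deepcache_schedule(num_steps: int, cache_interval: int = 3) -> list:
    """Label each denoising step: 'full' UNet pass or 'cached' feature reuse.

    Illustrates the DeepCache idea only; not LatentSync's implementation.
    """
    return ["full" if step % cache_interval == 0 else "cached"
            for step in range(num_steps)]

# With cache_interval=3, only one step in three runs the full UNet.
```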
---
## 📝 Documentation Updates

- [x] `Docs/QWEN3_TTS_DEPLOY.md`: added Flash Attention installation guide
- [x] `Docs/DEPLOY_MANUAL.md`: added Watchdog deployment notes
- [x] `Docs/task_complete.md`: progress updated to 100% (Day 16)