Compare commits — 4 commits: bc0fe9326a, 035ee29d72, a6cc919e5c, 96a298e51c

Docs/ALIPAY_DEPLOY.md (new file, 278 lines)

@@ -0,0 +1,278 @@
# Alipay Paid Membership Activation — Deployment Guide

This document covers the full deployment process for Alipay's "PC website payment" feature. After registering, users pay via Alipay and their membership is activated automatically, valid for 1 year.

---

## Prerequisites

- An Alipay enterprise / individual merchant account
- An application created on the [Alipay Open Platform](https://open.alipay.com) with an APPID
- The application has the **"PC website payment"** product enabled (the `alipay.trade.page.pay` API)
- The server domain is configured with HTTPS (Alipay callbacks require public reachability)

---

## Part 1: Alipay Open Platform configuration

### 1. Create an application

Log in to https://open.alipay.com → Console → Create application (or use an existing one).

### 2. Enable the "PC website payment" product

Go to the application details → Product binding / Product management → add **"PC website payment"** → submit for review.

> **Note**: if this product is not enabled, requests fail with an `ACQ.ACCESS_FORBIDDEN` error.

### 3. Generate the key pair

Go to the application details → Development settings → API signing method → choose **RSA2 (SHA256)**:

1. Generate an RSA 2048 key pair with Alipay's official key tool
2. Upload the **application public key** to the Open Platform
3. After uploading, the platform displays the **Alipay public key** (`alipayPublicKey_RSA2`)

You end up with two things:
- **Application private key**: kept locally; your code uses it to sign requests
- **Alipay public key**: returned by the platform; your code uses it to verify callback signatures

> The application public key is only an intermediate artifact for uploading; the code does not need it.

---

## Part 2: Server configuration

### 1. Place the key files

Save the keys in standard PEM format under the `backend/keys/` directory:

```bash
mkdir -p /home/rongye/ProgramFiles/ViGent2/backend/keys
```

**`backend/keys/app_private_key.pem`** (application private key):

```
-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASC...(your private key content)
...
-----END PRIVATE KEY-----
```

**`backend/keys/alipay_public_key.pem`** (Alipay public key):

```
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A...(Alipay public key content)
...
-----END PUBLIC KEY-----
```

#### PEM format requirements

The Alipay key tool exports the key as a single line of plain text; it must be converted into standard PEM format:

- It must have header/footer markers (`-----BEGIN/END ...-----`)
- The key body must wrap every 64 characters
- The private key header is `-----BEGIN PRIVATE KEY-----` (PKCS#8 format)
- The public key header is `-----BEGIN PUBLIC KEY-----`

If you have a single-line bare key, convert it with:

```bash
# Format the private key (assuming the bare key is in raw_private.txt)
echo "-----BEGIN PRIVATE KEY-----" > app_private_key.pem
fold -w 64 raw_private.txt >> app_private_key.pem
echo "-----END PRIVATE KEY-----" >> app_private_key.pem

# Format the public key
echo "-----BEGIN PUBLIC KEY-----" > alipay_public_key.pem
fold -w 64 raw_public.txt >> alipay_public_key.pem
echo "-----END PUBLIC KEY-----" >> alipay_public_key.pem
```

> The `backend/keys/` directory is listed in `.gitignore` and will not be committed to the repository.

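
The same conversion can be sketched in Python; `wrap_pem` is an illustrative helper, not part of the project:

```python
def wrap_pem(raw_base64: str, kind: str = "PRIVATE KEY") -> str:
    """Wrap a single-line base64 key body into standard PEM (64-char lines)."""
    body = raw_base64.strip().replace("\n", "")
    lines = [body[i:i + 64] for i in range(0, len(body), 64)]
    return f"-----BEGIN {kind}-----\n" + "\n".join(lines) + f"\n-----END {kind}-----\n"

# Dummy content for illustration only — never paste a real key into code
pem = wrap_pem("MIIEvQIBADANBgkqhkiG" * 8)
print(pem.splitlines()[0])  # -----BEGIN PRIVATE KEY-----
```
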
### 2. Configure environment variables

Add to `backend/.env`:

```ini
# =============== Alipay configuration ===============
ALIPAY_APP_ID=your application APPID
ALIPAY_PRIVATE_KEY_PATH=/home/rongye/ProgramFiles/ViGent2/backend/keys/app_private_key.pem
ALIPAY_PUBLIC_KEY_PATH=/home/rongye/ProgramFiles/ViGent2/backend/keys/alipay_public_key.pem
ALIPAY_NOTIFY_URL=https://vigent.hbyrkj.top/api/payment/notify
ALIPAY_RETURN_URL=https://vigent.hbyrkj.top/pay
```

| Variable | Description |
|------|------|
| `ALIPAY_APP_ID` | Application APPID from the Alipay Open Platform |
| `ALIPAY_PRIVATE_KEY_PATH` | Absolute path to the application private key PEM file |
| `ALIPAY_PUBLIC_KEY_PATH` | Absolute path to the Alipay public key PEM file |
| `ALIPAY_NOTIFY_URL` | Async callback URL (server-to-server); must be publicly reachable over HTTPS |
| `ALIPAY_RETURN_URL` | Sync redirect URL (the page the browser returns to after payment) |

`config.py` also has a few tunables (they have defaults and normally do not need to go into .env):

| Variable | Default | Description |
|------|--------|------|
| `ALIPAY_SANDBOX` | `false` | Whether to use the sandbox environment |
| `PAYMENT_AMOUNT` | `999.00` | Membership price (CNY) |
| `PAYMENT_EXPIRE_DAYS` | `365` | Membership validity in days |

### 3. Create the database table

Run against the local Supabase instance via Docker:

```bash
docker exec -i supabase-db psql -U postgres -c "
CREATE TABLE IF NOT EXISTS orders (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID REFERENCES users(id) ON DELETE CASCADE,
    out_trade_no TEXT UNIQUE NOT NULL,
    amount DECIMAL(10, 2) NOT NULL DEFAULT 999.00,
    status TEXT DEFAULT 'pending' CHECK (status IN ('pending', 'paid', 'failed')),
    trade_no TEXT,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
    paid_at TIMESTAMP WITH TIME ZONE
);

CREATE INDEX IF NOT EXISTS idx_orders_user_id ON orders(user_id);
CREATE INDEX IF NOT EXISTS idx_orders_out_trade_no ON orders(out_trade_no);
"
```

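
`out_trade_no` must be unique per order. The project's actual generator is not shown in this guide; a common pattern (illustrative only — `make_out_trade_no` is a hypothetical name) is a timestamp plus a random suffix:

```python
import secrets
from datetime import datetime, timezone

def make_out_trade_no() -> str:
    """Hypothetical merchant order number: UTC timestamp + 6 random hex chars.
    Alipay requires out_trade_no to be unique per merchant (max 64 chars,
    letters/digits/underscore)."""
    ts = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    return f"{ts}{secrets.token_hex(3)}"

print(make_out_trade_no())  # e.g. 20250101120000a1b2c3
```

The `UNIQUE NOT NULL` constraint on the column then guards against accidental duplicates at the database level.
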
### 4. Install dependencies

```bash
# Backend (inside the venv)
cd /home/rongye/ProgramFiles/ViGent2/backend
venv/bin/pip install python-alipay-sdk
```

> The frontend needs no additional dependencies.

### 5. Nginx configuration

Make sure Nginx proxies `/api/payment/notify` to the backend. If the existing configuration already covers the `/api/` prefix, no change is needed:

```nginx
location /api/ {
    proxy_pass http://localhost:8006;
    # ... existing configuration
}
```

### 6. Restart the services

```bash
# Build the frontend
cd /home/rongye/ProgramFiles/ViGent2/frontend
npx next build

# Restart
pm2 restart vigent2-backend
pm2 restart vigent2-frontend
```

---

## Part 3: Going live

Once testing passes, change the test amount in `backend/app/core/config.py` to the production price:

```python
PAYMENT_AMOUNT: float = 999.00  # production price
```

Or override it in `backend/.env`:

```ini
PAYMENT_AMOUNT=999.00
```

Then restart the backend:

```bash
pm2 restart vigent2-backend
```

---

## Payment flow

```
User registers → logs in (password correct but is_active=false)
→ backend returns 403 + payment_token
→ frontend redirects to the /pay page
→ POST /api/payment/create-order → returns the Alipay checkout URL
→ frontend redirects to the Alipay checkout page (QR scan, account login, balance, and other methods)
→ user completes the payment
→ Alipay async callback: POST /api/payment/notify
→ backend verifies the signature → updates the order → activates the user (is_active=true, expires_at=+365 days)
→ Alipay sync redirect back to /pay?out_trade_no=xxx
→ frontend polls GET /api/payment/status/{out_trade_no}
→ poll sees paid → success message → redirect to the login page
→ user logs in again → enters the system
```

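
The async-callback step above is the critical one. A minimal sketch of its logic, framework-agnostic and with `verify_signature`, `mark_paid`, and `activate_user` as stand-ins for the project's real helpers (the idempotency check and the literal `success` reply reflect Alipay's notify contract):

```python
def handle_alipay_notify(form: dict, verify_signature, mark_paid, activate_user) -> str:
    """Sketch of the /api/payment/notify logic. Returns the plain-text body
    Alipay expects: 'success' stops retries; anything else triggers a retry."""
    sign = form.pop("sign", None)
    form.pop("sign_type", None)  # sign/sign_type are excluded from the signed payload
    if not sign or not verify_signature(form, sign):
        return "fail"
    if form.get("trade_status") not in ("TRADE_SUCCESS", "TRADE_FINISHED"):
        return "success"  # acknowledge, but nothing to do
    out_trade_no = form["out_trade_no"]
    if mark_paid(out_trade_no, form.get("trade_no")):  # False if already paid (idempotent)
        activate_user(out_trade_no)
    return "success"
```

In the real service, `verify_signature` would delegate to the installed SDK's RSA2 verification against the Alipay public key.
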

**PC website payment vs. face-to-face payment**: PC website payment (`alipay.trade.page.pay`) redirects to Alipay's official checkout page, where the user can pay by QR scan, Alipay account login, balance, and other methods — a better experience. Face-to-face payment (`alipay.trade.precreate`) only generates a single QR code, so scanning is the only option.

Membership renewal follows the same flow: on login, an expired account returns PAYMENT_REQUIRED → redirect to /pay.

Manual activation by an administrator is unaffected; both mechanisms coexist.
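
After the sync redirect, the frontend polls the order status until it flips away from pending. The loop can be sketched independently of the HTTP layer — `fetch_status` stands in for `GET /api/payment/status/{out_trade_no}`, and the injectable `sleep`/`clock` make the sketch testable:

```python
import time

def poll_order_status(fetch_status, out_trade_no: str,
                      interval: float = 2.0, timeout: float = 120.0,
                      sleep=time.sleep, clock=time.monotonic) -> str:
    """Poll until the status is no longer 'pending' or the timeout is hit.
    fetch_status(out_trade_no) -> 'pending' | 'paid' | 'failed'."""
    deadline = clock() + timeout
    while clock() < deadline:
        status = fetch_status(out_trade_no)
        if status != "pending":
            return status
        sleep(interval)
    return "timeout"
```

The actual frontend does the equivalent in TypeScript on the `/pay` page; the shape of the loop is the same.
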

---

## Files involved

| File | Change | Description |
|------|---------|------|
| `backend/requirements.txt` | modified | add `python-alipay-sdk` |
| `backend/database/schema.sql` | modified | new `orders` table |
| `backend/app/core/config.py` | modified | Alipay configuration entries |
| `backend/app/core/security.py` | modified | payment_token functions |
| `backend/app/core/deps.py` | modified | is_active safety fallback |
| `backend/app/repositories/orders.py` | new | orders data layer |
| `backend/app/modules/payment/__init__.py` | new | module init |
| `backend/app/modules/payment/schemas.py` | new | request/response models |
| `backend/app/modules/payment/service.py` | new | payment business logic (PC website payment) |
| `backend/app/modules/payment/router.py` | new | 3 API endpoints |
| `backend/app/modules/auth/router.py` | modified | login returns PAYMENT_REQUIRED |
| `backend/app/main.py` | modified | register payment_router |
| `backend/.env` | modified | Alipay environment variables |
| `backend/keys/` | new | PEM key files |
| `frontend/src/shared/lib/auth.ts` | modified | login() handles paymentToken |
| `frontend/src/shared/api/axios.ts` | modified | add /pay to PUBLIC_PATHS |
| `frontend/src/app/login/page.tsx` | modified | paymentToken redirect |
| `frontend/src/app/register/page.tsx` | modified | registration success message |
| `frontend/src/app/pay/page.tsx` | new | payment page (redirects to the Alipay checkout) |

---

## FAQ

### RSA key format is not supported

The key file is missing the PEM header/footer markers or is not wrapped at 64 characters. Reformat it per "PEM format requirements" above.

### ACQ.ACCESS_FORBIDDEN

The application has not enabled the "PC website payment" product. Add and enable it under Alipay Open Platform → application details → Product management.

### Alipay callback never arrives

1. Check that `ALIPAY_NOTIFY_URL` is publicly reachable over HTTPS
2. Check that Nginx proxies `/api/payment/notify` to the backend
3. If the callback times out (no response within 15s), Alipay retries — 8 attempts spread over 24 hours

### Page does not redirect back after payment

Check that `ALIPAY_RETURN_URL` is correct; it must be the full URL of the frontend `/pay` page (e.g. `https://vigent.hbyrkj.top/pay`). After the user completes payment, Alipay redirects the browser there with parameters such as `out_trade_no`.

### Frontend shows "network error" instead of the actual error

The API functions lacked try/catch around axios exceptions. Fixed in `register()` and `login()` in `auth.ts`.

@@ -39,6 +39,7 @@ backend/
│   │   ├── generated_audios/   # pre-generated voiceover management (router/schemas/service)
│   │   ├── login_helper/       # QR-code login helper
│   │   ├── tools/              # tool endpoints (router/schemas/service)
│   │   ├── payment/            # Alipay paid activation (router/schemas/service)
│   │   └── admin/              # admin features
│   ├── repositories/           # Supabase data access
│   ├── services/               # external service integrations
@@ -74,6 +75,15 @@ backend/
- Errors are raised via `HTTPException`; the global exception handler returns `{success:false, message, code}`.
- `detail` is no longer used as the frontend error text (the frontend now reads `message`).

### `/api/videos/generate` parameter contract (key conventions)

- Each `custom_assignments` item uses `material_path/start/end/source_start/source_end?`, based on the segments visible on the timeline.
- `output_aspect_ratio` only allows `9:16` / `16:9`, default `9:16`.
- Title display parameters:
  - `title_display_mode`: `short` / `persistent` (default `short`)
  - `title_duration`: default `4.0` (seconds), only effective in `short` mode
- The workflow/remotion side must pass these fields through unchanged to avoid semantic drift between frontend and backend.

---

## 4. Authentication and permissions
@@ -159,6 +169,13 @@ backend/user_data/{user_uuid}/cookies/
- `DOUYIN_DEBUG_ARTIFACTS` / `DOUYIN_RECORD_VIDEO` / `DOUYIN_KEEP_SUCCESS_VIDEO`
- `DOUYIN_COOKIE` (Douyin video download cookie)

### Alipay
- `ALIPAY_APP_ID` / `ALIPAY_PRIVATE_KEY_PATH` / `ALIPAY_PUBLIC_KEY_PATH`
- `ALIPAY_NOTIFY_URL` / `ALIPAY_RETURN_URL`
- `ALIPAY_SANDBOX` (sandbox mode, default false)
- `PAYMENT_AMOUNT` (membership price, default 999.00)
- `PAYMENT_EXPIRE_DAYS` (membership validity in days, default 365)

---

## 10. Playwright publishing debug

@@ -25,6 +25,7 @@ backend/
│   │   ├── generated_audios/   # pre-generated voiceover management (router/schemas/service)
│   │   ├── login_helper/       # QR-code login helper
│   │   ├── tools/              # tool endpoints (router/schemas/service)
│   │   ├── payment/            # Alipay paid activation (router/schemas/service)
│   │   └── admin/              # admin features
│   ├── repositories/           # Supabase data access
│   ├── services/               # external service integrations (TTS/Remotion/Storage/Uploader, etc.)
@@ -51,6 +52,8 @@ backend/
   * `POST /api/auth/register`: user registration
   * `GET /api/auth/me`: get the current user

   > Validity policy: on login and on protected endpoints, the backend checks `users.expires_at`. An expired account is automatically deactivated (`is_active=false`), its session cleared, and `403: membership expired, please renew` returned.

2. **Video generation (Videos)**
   * `POST /api/videos/generate`: submit a generation task
   * `GET /api/videos/tasks/{task_id}`: query a single task's status
@@ -77,10 +80,11 @@ backend/
   * `GET /api/assets/bgm`: background music list

6. **Voice cloning (Ref Audios)**
   * `POST /api/ref-audios`: upload reference audio (multipart/form-data; ref_text auto-transcribed by Whisper)
   * `GET /api/ref-audios`: list reference audios
   * `PUT /api/ref-audios/{id}`: rename a reference audio
   * `DELETE /api/ref-audios/{id}`: delete a reference audio
   * `POST /api/ref-audios/{id}/retranscribe`: re-transcribe a reference audio (Whisper + auto-trim beyond 10s)

7. **AI features (AI)**
   * `POST /api/ai/generate-meta`: AI-generated title and tags
@@ -98,7 +102,14 @@ backend/

10. **Health checks**
    * `GET /api/lipsync/health`: LatentSync service health
    * `GET /api/voiceclone/health`: CosyVoice 3.0 service health

11. **Payment**
    * `POST /api/payment/create-order`: create an Alipay PC-website-payment order (requires payment_token)
    * `POST /api/payment/notify`: Alipay async notification callback (returns plain-text success/fail)
    * `GET /api/payment/status/{out_trade_no}`: query order payment status (polled by the frontend)

> If the account is inactive or expired at login, the backend returns 403 + `payment_token` and the frontend redirects to the `/pay` page. See the [Alipay deployment guide](ALIPAY_DEPLOY.md).

### Unified response structure

@@ -123,9 +134,13 @@ backend/
- `voice`: EdgeTTS voice ID (edgetts mode)
- `ref_audio_id` / `ref_text`: reference audio ID and text (voiceclone mode)
- `generated_audio_id`: pre-generated voiceover ID (when present, inline TTS is skipped and the existing voiceover file is used)
- `speed`: speech rate (voice-clone mode, default 1.0, range 0.8-1.2)
- `custom_assignments`: custom material assignment array (each item has `material_path` / `start` / `end` / `source_start` / `source_end?`); when present, the visible timeline segments take precedence
- `output_aspect_ratio`: output aspect ratio (`9:16` or `16:9`, default `9:16`)
- `language`: TTS language (auto-detected by default; passed through to CosyVoice 3.0 for voice cloning)
- `title`: opening title text
- `title_display_mode`: title display mode (`short` / `persistent`, default `short`)
- `title_duration`: title display duration (seconds, default `4.0`; effective in `short` mode)
- `subtitle_style_id`: subtitle style ID
- `title_style_id`: title style ID
- `subtitle_font_size`: subtitle font size (overrides the style default)
@@ -136,6 +151,12 @@ backend/
- `bgm_id`: background music ID
- `bgm_volume`: background music volume (0-1, default 0.2)

### Multi-material stability notes

- Multi-material clips are uniformly re-encoded before concatenation and forced to `25fps + CFR`, reducing stutter caused by mismatched timebases at segment boundaries.
- The concat step enables `+genpts` to rebuild timestamps, improving timeline continuity after concatenation.
- MOV materials with rotation metadata are orientation-normalized first, before resolution checks and the rest of the pipeline.

## 📦 Asset library and static assets

- Local asset directory: `backend/assets/{fonts,bgm,styles}`

Docs/COSYVOICE3_DEPLOY.md (new file, 211 lines)

@@ -0,0 +1,211 @@
# CosyVoice 3.0 Deployment Notes

## Overview

| Item | Value |
|------|------|
| Model | Fun-CosyVoice3-0.5B-2512 (0.5B parameters) |
| Port | 8010 |
| GPU | 0 (CUDA_VISIBLE_DEVICES=0) |
| PM2 name | vigent2-cosyvoice (id=15) |
| Conda env | cosyvoice (Python 3.10) |
| Launch script | `run_cosyvoice.sh` |
| Server script | `models/CosyVoice/cosyvoice_server.py` |
| Model load time | ~22-34 s |
| VRAM usage | ~3-5 GB |

## Supported languages

Chinese, English, Japanese, Korean, German, Spanish, French, Italian, Russian, plus 18+ Chinese dialects

## Directory layout

```
models/CosyVoice/
├── cosyvoice_server.py              # FastAPI server (port 8010)
├── cosyvoice/                       # CosyVoice source
│   └── cli/cosyvoice.py             # AutoModel entry point
├── third_party/Matcha-TTS/          # submodule dependency
├── pretrained_models/
│   ├── Fun-CosyVoice3-0.5B/         # model files (~8.2GB)
│   │   ├── llm.pt                   # LLM model (1.9GB)
│   │   ├── llm.rl.pt                # RL model (1.9GB, spare)
│   │   ├── flow.pt                  # flow model (1.3GB)
│   │   ├── hift.pt                  # HiFT vocoder (80MB)
│   │   ├── campplus.onnx            # speaker embedding (27MB)
│   │   ├── speech_tokenizer_v3.onnx # speech tokenizer (925MB)
│   │   ├── cosyvoice3.yaml          # model config
│   │   └── CosyVoice-BlankEN/       # Qwen tokenizer
│   └── CosyVoice-ttsfrd/            # text normalization resources
│       ├── resource/                # unpacked ttsfrd resources
│       └── resource.zip
run_cosyvoice.sh                     # PM2 launch script
```

## API

### GET /health

Health check; returns:
```json
{
  "service": "CosyVoice 3.0 Voice Clone",
  "model": "Fun-CosyVoice3-0.5B",
  "ready": true,
  "gpu_id": 0
}
```

### POST /generate

Voice-clone generation.

**Parameters (multipart/form-data):**

| Parameter | Type | Required | Description |
|------|------|------|------|
| ref_audio | File | yes | reference audio (WAV) |
| text | string | yes | text to synthesize |
| ref_text | string | yes | transcript of the reference audio |
| language | string | no | language (default "Chinese"; CosyVoice auto-detects) |
| speed | float | no | speech rate (default 1.0, range 0.5-2.0, recommended 0.8-1.2) |

**Returns:** a WAV audio file

**Status codes:**
- 200: success
- 429: GPU busy, retry
- 500: generation failed / timed out
- 503: model not loaded / service poisoned

## Safety mechanisms

1. **GPU inference lock** (`asyncio.Lock`): prevents concurrent inference from corrupting GPU state
2. **429 rejection**: if the lock is held, return 429 immediately and let the client retry
3. **Timeout guard**: `60 + len(text) * 2` seconds, capped at 300
4. **Poisoned flag**: after a timeout the service is marked poisoned and the health check returns `ready: false`
5. **Forced exit**: 1.5 s after a timeout, `os._exit(1)` is called and PM2 restarts the process
6. **Startup self-test**: on startup, run one real inference on a short text to verify the GPU inference path; on failure set `_model_loaded = False` so the health check returns `ready: false`, avoiding false positives
7. **Reference-audio auto-trim**: reference audio longer than 10 s is trimmed to the first 10 s (CosyVoice recommends 3-10 s), avoiding sampling anomalies

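
The timeout in item 3 scales with text length; as a quick sketch of the formula (function name illustrative):

```python
def inference_timeout(text: str, base: float = 60.0, per_char: float = 2.0,
                      cap: float = 300.0) -> float:
    """Timeout guard from the safety list: 60s + 2s per character, capped at 300s."""
    return min(base + per_char * len(text), cap)

print(inference_timeout("你好世界"))   # 68.0 (4 chars)
print(inference_timeout("x" * 500))  # 300.0 (cap hit)
```
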

## Operations commands

```bash
# Start
pm2 start run_cosyvoice.sh --name vigent2-cosyvoice

# Restart
pm2 restart vigent2-cosyvoice

# Tail logs
pm2 logs vigent2-cosyvoice --lines 50

# Health check
curl http://localhost:8010/health

# Stop
pm2 stop vigent2-cosyvoice
```

## Deployment from scratch

### 1. Clone the repository

```bash
cd /home/rongye/ProgramFiles/ViGent2/models
git clone --recursive https://github.com/FunAudioLLM/CosyVoice.git
cd CosyVoice
git submodule update --init --recursive
```

### 2. Create the Conda environment

```bash
conda create -n cosyvoice -y python=3.10
conda activate cosyvoice
```

### 3. Install dependencies

Note: do not run `pip install -r requirements.txt` directly; there are version conflicts to work around.

```bash
# Install PyTorch 2.3.1 (CUDA 12.1) — must come first, version is strict
pip install torch==2.3.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cu121

# Core inference dependencies
pip install conformer==0.3.2 HyperPyYAML==1.2.2 inflect==7.3.1 \
  librosa==0.10.2 lightning==2.2.4 modelscope==1.20.0 omegaconf==2.3.0 \
  pydantic==2.7.0 soundfile==0.12.1 fastapi==0.115.6 uvicorn==0.30.0 \
  transformers==4.51.3 protobuf==4.25 hydra-core==1.3.2 \
  rich==13.7.1 diffusers==0.29.0 x-transformers==2.11.24 wetext==0.0.4

# onnxruntime-gpu
pip install onnxruntime-gpu==1.18.0 \
  --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/

# Other required dependencies
pip install gdown matplotlib pyarrow wget onnx python-multipart httpx

# openai-whisper needs setuptools < 71 (which still provides pkg_resources)
pip install "setuptools<71"
pip install --no-build-isolation openai-whisper==20231117

# pyworld needs g++ and Cython
pip install Cython
PATH="/usr/bin:$PATH" pip install pyworld==0.3.4

# Critical version pins
pip install "numpy<2"            # onnxruntime-gpu is incompatible with numpy 2.x
pip install "ruamel.yaml<0.18"   # hyperpyyaml is incompatible with ruamel.yaml 0.19+
```

> **Important**: CosyVoice requires torch==2.3.1. torch 2.10+ causes CUBLAS_STATUS_INVALID_VALUE errors.
> torch 2.3.1+cu121 bundles nvidia-cudnn-cu12, so the onnxruntime CUDAExecutionProvider works out of the box.

### 4. Download the models

```bash
# Use huggingface_hub (use hf-mirror.com from mainland China)
HF_ENDPOINT=https://hf-mirror.com python -c "
from huggingface_hub import snapshot_download
snapshot_download('FunAudioLLM/Fun-CosyVoice3-0.5B-2512', local_dir='pretrained_models/Fun-CosyVoice3-0.5B')
snapshot_download('FunAudioLLM/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')
"
```

### 5. Install ttsfrd (optional; improves text normalization quality)

```bash
cd pretrained_models/CosyVoice-ttsfrd/
unzip resource.zip -d .
pip install ttsfrd_dependency-0.1-py3-none-any.whl
pip install ttsfrd-0.4.2-cp310-cp310-linux_x86_64.whl
```

### 6. Register with PM2

```bash
pm2 start run_cosyvoice.sh --name vigent2-cosyvoice
pm2 save
```

## Known issues

1. **ttsfrd "prepare tts engine failed"**: internal log from the ttsfrd C library; the Python-level init succeeds, no impact on usage
2. **Sliding Window Attention warning**: a transformers hint, does not affect inference results
3. **onnxruntime Memcpy performance hint**: `Memcpy nodes are not supported by the CUDA EP` is a performance advisory only, no functional impact

> Note: the libcudnn.so.8 issue is resolved under torch 2.3.1+cu121 (bundled nvidia-cudnn-cu12); the onnxruntime CUDAExecutionProvider loads normally.

## Comparison with Qwen3-TTS

| Feature | Qwen3-TTS (retired) | CosyVoice 3.0 (current) |
|------|-----------|----------------|
| Port | 8009 | 8010 |
| Model size | 0.6B | 0.5B |
| Languages | zh/en/ja/ko | 9 languages + 18 dialects |
| Cloning input | ref_audio + ref_text | ref_audio + ref_text |
| Prompt format | ref_text passed directly | `You are a helpful assistant.<\|endofprompt\|>` + ref_text |
| Built-in segmentation | none, client must segment | built-in text_normalize auto-segments |
| Status | retired (PM2 stopped) | in production |
@@ -213,6 +213,15 @@ cp .env.example .env
| `DOUYIN_KEEP_SUCCESS_VIDEO` | false | keep the screen recording on success |
| `CORS_ORIGINS` | `*` | allowed CORS origins (whitelist recommended in production) |
| `DOUYIN_COOKIE` | empty | Douyin video download cookie (caption extraction feature) |
| `ALIPAY_APP_ID` | empty | Alipay application APPID |
| `ALIPAY_PRIVATE_KEY_PATH` | empty | application private key PEM file path |
| `ALIPAY_PUBLIC_KEY_PATH` | empty | Alipay public key PEM file path |
| `ALIPAY_NOTIFY_URL` | empty | Alipay async callback URL (public HTTPS) |
| `ALIPAY_RETURN_URL` | empty | browser redirect URL after payment |
| `PAYMENT_AMOUNT` | `999.00` | membership price (CNY) |
| `PAYMENT_EXPIRE_DAYS` | `365` | membership validity in days |

> For the full Alipay setup (key generation, PEM format, product activation, etc.) see the **[Alipay deployment guide](ALIPAY_DEPLOY.md)**.

---

@@ -336,34 +345,28 @@ chmod +x run_latentsync.sh
pm2 start ./run_latentsync.sh --name vigent2-latentsync
```

### 4. Start the CosyVoice 3.0 voice-clone service (optional)

> Required only if you use voice cloning. For detailed deployment steps see the [CosyVoice 3.0 deployment notes](COSYVOICE3_DEPLOY.md).

1. The launch script lives in the project root: `run_cosyvoice.sh`

2. Start it with pm2:
```bash
cd /home/rongye/ProgramFiles/ViGent2
pm2 start ./run_cosyvoice.sh --name vigent2-cosyvoice
pm2 save
```

3. Verify the service:
```bash
# Health check
curl http://localhost:8010/health
```

### 5. Start the service watchdog

> 🛡️ **Recommended**: monitors CosyVoice and LatentSync health and restarts them automatically when they hang.

```bash
cd /home/rongye/ProgramFiles/ViGent2
@@ -384,7 +387,7 @@ pm2 startup
pm2 status                      # all service states
pm2 logs                        # all logs
pm2 logs vigent2-backend        # backend logs
pm2 logs vigent2-cosyvoice      # CosyVoice logs
pm2 restart all                 # restart all services
pm2 stop vigent2-latentsync     # stop the LatentSync service
pm2 delete all                  # remove all services
@@ -523,7 +526,7 @@ python3 -c "import torch; print(torch.cuda.is_available())"
sudo lsof -i :8006
sudo lsof -i :3002
sudo lsof -i :8007
sudo lsof -i :8010  # CosyVoice
```

### View logs
@@ -533,7 +536,7 @@ sudo lsof -i :8009 # Qwen3-TTS
pm2 logs vigent2-backend
pm2 logs vigent2-frontend
pm2 logs vigent2-latentsync
pm2 logs vigent2-cosyvoice
```

### SSH hangs / system sluggish
@@ -564,6 +567,7 @@ pm2 logs vigent2-qwen-tts
| `playwright` | social-media auto-publishing |
| `biliup` | Bilibili video upload |
| `loguru` | logging |
| `python-alipay-sdk` | Alipay payment integration |

### Frontend key dependencies

@@ -328,11 +328,13 @@ interface TimelineSegment {

### Overview

Based on user feedback, fixed 6 UI issues, plus the SoX path problem and VRAM cache management in the voice-clone service.

> **Note**: Qwen3-TTS was later replaced by CosyVoice 3.0 (port 8010); the notes below record the fixes as they were made at the time.

---

### Part 1: Qwen3-TTS stability fixes (since replaced by CosyVoice 3.0)

#### 1.1 SoX PATH fix

@@ -348,6 +350,8 @@ export PATH="/home/rongye/ProgramFiles/miniconda3/envs/qwen-tts/bin:$PATH"

**Fix**: `qwen_tts_server.py` calls `torch.cuda.empty_cache()` after every generation (success or failure) to prevent VRAM fragmentation from accumulating, and runs inference in a thread pool via `asyncio.to_thread()` so the event loop is not blocked and health checks do not time out.

> **Follow-up**: Qwen3-TTS has been retired; CosyVoice 3.0 carries over the same protections (GPU inference lock, timeout guard, VRAM cleanup, startup self-test).

---

### Part 2: Unified voiceover-list button layout (feedback #1 + #6)
@@ -415,8 +419,8 @@ export PATH="/home/rongye/ProgramFiles/miniconda3/envs/qwen-tts/bin:$PATH"

| File | Change |
|------|------|
| `run_qwen_tts.sh` | export the conda env bin onto PATH, fixing SoX not found (retired) |
| `models/Qwen3-TTS/qwen_tts_server.py` | `torch.cuda.empty_cache()` after each generation, asyncio.to_thread to avoid blocking (retired) |

#### Frontend changes

@@ -544,3 +548,309 @@ next.splice(toIdx, 0, moved);
| `frontend/src/features/home/model/useHomeController.ts` | integrate useSavedScripts, add handleSaveScript |
| `frontend/src/features/home/ui/HomePage.tsx` | pass savedScripts / handleSaveScript / deleteSavedScript to ScriptEditor |
| `frontend/src/features/home/model/useTimelineEditor.ts` | reorderSegments switched from property swap to array move (splice) |

---

## 🔤 Subtitle language mismatch + video aspect misalignment fixes — Phase 5 (Day 23)

### Overview

Fixed two video-generation bugs:
1. **Subtitle language mismatch**: Chinese voiceover + English translated script → subtitles wrongly showed English (Whisper transcribes independently and ignores the original text)
2. **Title/subtitle aspect misalignment**: videos generated from 9:16 portrait material rendered titles/subtitles with a 16:9 landscape layout

Also fixed the `split_word_to_chars` English-space-loss bug found during code review.

---

### Part 1: Subtitles use the original text instead of the Whisper transcript

#### Root cause

Whisper transcribes the audio independently and completely ignores the `text` parameter passed in. When the voiceover language differs from the editor script language (e.g. the user writes a Chinese script → translates it to English → generates an English voiceover → switches the script back to Chinese), Whisper "hears" English speech and outputs English subtitles.

#### Fix approach

Whisper is only used to detect the **overall speech time range** (`first_start` → `last_end`); the subtitle text always comes from the original script saved with the voiceover.

#### `whisper_service.py` — `align()` gains an `original_text` parameter

```python
async def align(self, audio_path, text, output_path=None,
                language="zh", original_text=None):
```

When `original_text` is non-empty:
1. Run Whisper transcription as usual and record `whisper_first_start` and `whisper_last_end`
2. Pass `original_text` through `split_word_to_chars()` and distribute it linearly over the overall time range
3. Break it into lines by punctuation and character count via `split_segment_to_lines()`
4. Replace Whisper's transcription result

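
Step 2's linear distribution can be sketched like this (a simplified stand-in for the real helpers — the project's token splitting and timing rules are more involved):

```python
def distribute_tokens(tokens: list[str], first_start: float, last_end: float) -> list[dict]:
    """Spread tokens evenly across the detected speech range: each token
    receives an equal share of [first_start, last_end]."""
    span = (last_end - first_start) / max(len(tokens), 1)
    return [
        {"text": tok, "start": first_start + i * span, "end": first_start + (i + 1) * span}
        for i, tok in enumerate(tokens)
    ]
```

Word-level accuracy is lost on purpose: the goal is subtitles that show the original script within the correct overall time window, not per-word sync.
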
#### `workflow.py` — unconditionally override with voiceover metadata + pass the original text

```python
# Before (only override when the script is empty)
if not req.text.strip():
    req.text = meta.get("text", req.text)

# After (always override with the voiceover metadata)
meta_text = meta.get("text", "")
if meta_text:
    req.text = meta_text
```

All 4 `whisper_service.align()` call sites now pass `original_text=req.text`.

---

### Part 2: Remotion receives the video dimensions dynamically

#### Root cause

`remotion/src/Root.tsx` hard-coded `width={1280} height={720}`. Although `render.ts` detected the real dimensions with ffprobe and overrode `composition.width/height`, the component had already been initialized at 1280×720 during the `selectComposition` phase, so title and subtitle positioning was based on the wrong canvas size.

#### Fix

##### `Root.tsx` — `calculateMetadata` reads the dimensions from props

```tsx
<Composition
  id="ViGentVideo"
  component={Video}
  durationInFrames={300}
  fps={25}
  width={1080}
  height={1920}
  calculateMetadata={async ({ props }) => ({
    width: props.width || 1080,
    height: props.height || 1920,
  })}
  defaultProps={{
    videoSrc: '',
    width: 1080,
    height: 1920,
    // ...
  }}
/>
```

The default changed from 1280×720 to 1080×1920 (portrait-first), and `calculateMetadata` ensures the `selectComposition` phase uses the real dimensions detected by ffprobe.

##### `Video.tsx` — VideoProps gains optional `width/height`

Only read by `calculateMetadata`; the component render does not reference them.

##### `render.ts` — inputProps always carries the video dimensions

```typescript
const inputProps = {
  videoSrc: videoFileName,
  captions,
  title: options.title,
  // ...
  width: videoWidth,    // detected by ffprobe
  height: videoHeight,  // detected by ffprobe
};
```

`selectComposition` and `renderMedia` share the same `inputProps`. The explicit `composition.width/height` override is kept as a safety net.

---

### Part 3: Code-review fix: lost English spaces

#### Problem

`split_word_to_chars` was designed for single Whisper words (e.g. `" Hello"`), but when `original_text` passes in a whole passage, interior spaces hit a `continue` without flushing `ascii_buffer`, so `"Hello World"` becomes `"HelloWorld"`.

#### Execution trace

```
Input: "Hello World"
H,e,l,l,o → ascii_buffer = "Hello"
' '       → continue (skipped, no flush!)
W,o,r,l,d → ascii_buffer = "HelloWorld"
Result: tokens = ["HelloWorld"] ← space lost
```

#### Fix

On whitespace, flush `ascii_buffer` and set a `pending_space` flag so the next token gets a leading space:

```python
if not char.strip():
    if ascii_buffer:
        tokens.append(ascii_buffer)
        ascii_buffer = ""
    if tokens:
        pending_space = True
    continue
```

After the fix: `"Hello World"` → tokens = `["Hello", " World"]` → subtitles render correctly. Chinese is unaffected.

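
Put together, a self-contained version of the tokenizer logic looks roughly like this (a simplified reconstruction for illustration; the project's real `split_word_to_chars` also carries timing information):

```python
def split_word_to_chars(text: str) -> list[str]:
    """Split text into CJK characters and ASCII word tokens, preserving word spaces."""
    tokens: list[str] = []
    ascii_buffer = ""
    pending_space = False

    def flush():
        nonlocal ascii_buffer, pending_space
        if ascii_buffer:
            tokens.append((" " if pending_space else "") + ascii_buffer)
            ascii_buffer = ""
            pending_space = False

    for char in text:
        if not char.strip():        # whitespace: flush and remember the gap
            flush()
            if tokens:
                pending_space = True
            continue
        if ord(char) < 128:         # ASCII: accumulate into a word
            ascii_buffer += char
        else:                       # CJK: flush any pending word, emit the char itself
            flush()
            tokens.append(char)
    flush()
    return tokens
```
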

---

### File summary

#### Backend changes

| File | Change |
|------|------|
| `backend/app/services/whisper_service.py` | `align()` gains `original_text`; `split_word_to_chars` English-space fix |
| `backend/app/modules/videos/workflow.py` | voiceover metadata unconditionally overrides text/language; 4 `align()` call sites pass `original_text` |

#### Frontend changes (Remotion)

| File | Change |
|------|------|
| `remotion/src/Root.tsx` | default size now 1080×1920; new `calculateMetadata` + width/height defaultProps |
| `remotion/src/Video.tsx` | VideoProps gains optional `width`/`height` |
| `remotion/render.ts` | inputProps carries `videoWidth`/`videoHeight`, shared by selectComposition and renderMedia |

---

## 🎤 Reference-audio auto-transcription + speech-rate control — Phase 6 (Day 23)

### Overview

Fixes the voice-clone ref_text mismatch: the old scheme used a fixed frontend passage as ref_text, but CosyVoice zero-shot cloning requires ref_text to match what the reference audio actually says; on mismatch, the model "hallucinates" extra audio at the start of the output.

**Improvement**: on upload, the reference audio is automatically transcribed by Whisper to produce ref_text; a speech-rate control was also added.

---

### Part 1: Whisper auto-transcription of reference audio

#### 1.1 `whisper_service.py` — automatic language detection

`transcribe()` used to hard-code `language="zh"`; it now takes an optional `language` parameter (default `None` = auto-detect), supporting multilingual reference audio.

#### 1.2 `ref_audios/service.py` — transcribe on upload

The upload flow is now: transcode to WAV → check duration (≥1s) → trim at a silence point beyond 10s → **Whisper auto-transcription** → verify non-empty → upload.

```python
try:
    transcribed = await whisper_service.transcribe(tmp_wav_path)
    if transcribed.strip():
        ref_text = transcribed.strip()
except Exception as e:
    logger.warning(f"Auto-transcribe failed: {e}")

if not ref_text or not ref_text.strip():
    raise ValueError("无法识别音频内容,请确保音频包含清晰的语音")
```

#### 1.3 `ref_audios/router.py` — ref_text becomes optional

`ref_text: str = Form("")` (no longer required); the frontend no longer sends the fixed passage.

---

### Part 2: Smart reference-audio trimming (10-second cap)

CosyVoice works best with 3-10 s of reference audio.

#### 2.1 Silence-point detection

Use ffmpeg `silencedetect` to find the last silence-end point within 10 seconds (threshold -30dB, minimum 0.3s), avoiding a hard cut in the middle of a word:

```python
def _find_silence_cut_point(file_path, max_duration):
    # silencedetect → parse silence_end → take the last silence point within 3s~max_duration
    # fall back to max_duration if none is found
```

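
The silence-end parsing can be sketched as a pure function over ffmpeg's stderr output (`find_cut_point` is an illustrative stand-in for the project's `_find_silence_cut_point`; the log lines mimic what the `silencedetect` filter prints):

```python
import re

def find_cut_point(ffmpeg_stderr: str, max_duration: float = 10.0,
                   min_start: float = 3.0) -> float:
    """Pick the last silence_end within [min_start, max_duration];
    fall back to max_duration if none is found."""
    ends = [float(m) for m in re.findall(r"silence_end:\s*([\d.]+)", ffmpeg_stderr)]
    candidates = [t for t in ends if min_start <= t <= max_duration]
    return max(candidates) if candidates else max_duration
```

Keeping the parse separate from the ffmpeg invocation makes the cut-point logic trivially unit-testable.
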
#### 2.2 淡出处理
|
||||
|
||||
截取时末尾 0.1 秒淡出(`afade=t=out`),避免截断爆音。
|
||||
|
||||
---

### 3. Re-transcription (legacy-data migration)

#### 3.1 New API

`POST /api/ref-audios/{audio_id}/retranscribe` — download the audio → cut if over 10s → Whisper transcription → re-upload the audio and metadata.

#### 3.2 Frontend UI

- RefAudioPanel gains a RotateCw button ("re-transcribe text"), showing `animate-spin` while transcribing
- Legacy audio whose ref_text starts with the old fixed text shows a ⚠ yellow warning

---

### 4. Speed control (CosyVoice `speed` parameter)

#### 4.1 End-to-end propagation

```
Frontend GeneratedAudiosPanel (speed selector)
→ useHomeController (speed state + persistence)
→ useGeneratedAudios.generateAudio(params)
→ POST /api/generated-audios/generate { speed: 1.0 }
→ GenerateAudioRequest.speed (Pydantic)
→ generate_audio_task → voice_clone_service.generate_audio(speed=)
→ _generate_once → POST /generate { speed: "1.0" }
→ cosyvoice_server → _model.inference_zero_shot(speed=speed)
```
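
The request side of this chain can be sketched as follows; `build_generate_payload` is a hypothetical helper (the real code lives in `useGeneratedAudios` and the Pydantic schema), shown only to illustrate restricting speed to the five supported presets:

```python
ALLOWED_SPEEDS = (0.8, 0.9, 1.0, 1.1, 1.2)

def build_generate_payload(text: str, ref_audio_id: str, speed: float = 1.0) -> dict:
    """Build the JSON body for POST /api/generated-audios/generate.

    Hypothetical sketch: rejects speeds outside the five preset levels
    instead of accepting arbitrary values.
    """
    if speed not in ALLOWED_SPEEDS:
        raise ValueError(f"speed must be one of {ALLOWED_SPEEDS}")
    return {"text": text, "ref_audio_id": ref_audio_id, "speed": speed}

print(build_generate_payload("你好", "user/clip.wav", 1.1))
```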

#### 4.2 Frontend UI

In voice-clone mode, a speed dropdown (`语速: 正常 ▼`) appears to the left of the "生成配音" (generate audio) button in the audio-list panel header:

| Label | speed value |
|------|----------|
| 较慢 (slow) | 0.8 |
| 稍慢 (slightly slow) | 0.9 |
| 正常 (normal) | 1.0 (default) |
| 稍快 (slightly fast) | 1.1 |
| 较快 (fast) | 1.2 |

The speed selection persists to localStorage (`vigent_{storageKey}_speed`).

---

### 5. Gating when reference audio is missing

In voice-clone mode, when no reference audio is selected:

- the "generate audio" button is disabled, with the tooltip "请先选择参考音频"
- the panel shows a yellow warning bar: "声音克隆模式需要先选择参考音频"

---

### 6. Frontend cleanup

- Removed the `FIXED_REF_TEXT` constant and the `fixedRefText` prop
- Removed the "please read the following text aloud" guidance block
- Upload hint simplified to "上传任意语音样本(3-10秒),系统将自动识别内容并克隆声音"
- Recording-area note: "建议 3-10 秒,超出将自动截取"

---

### Affected files

#### Backend changes

| File | Change |
|------|------|
| `backend/app/services/whisper_service.py` | `transcribe()` gains an optional `language` parameter, default None (auto-detect) |
| `backend/app/modules/ref_audios/service.py` | auto-transcription on upload + silence-point cutting + fade-out + retranscribe function |
| `backend/app/modules/ref_audios/router.py` | `ref_text` changed to Form(""); new retranscribe endpoint |
| `backend/app/modules/generated_audios/schemas.py` | `GenerateAudioRequest` gains `speed: float = 1.0` |
| `backend/app/modules/generated_audios/service.py` | passes `req.speed` to voice_clone_service |
| `backend/app/services/voice_clone_service.py` | `generate_audio()` / `_generate_once()` accept and forward speed |
| `models/CosyVoice/cosyvoice_server.py` | `/generate` endpoint accepts `speed` and forwards it to `inference_zero_shot(speed=)` |

#### Frontend changes

| File | Change |
|------|------|
| `frontend/src/features/home/model/useHomeController.ts` | adds speed state, removes FIXED_REF_TEXT, handleGenerateAudio passes speed |
| `frontend/src/features/home/model/useHomePersistence.ts` | adds speed persistence |
| `frontend/src/features/home/model/useRefAudios.ts` | removes fixedRefText, adds retranscribe |
| `frontend/src/features/home/model/useGeneratedAudios.ts` | generateAudio params gain speed |
| `frontend/src/features/home/ui/GeneratedAudiosPanel.tsx` | adds the speed selector + missing-reference-audio gating |
| `frontend/src/features/home/ui/RefAudioPanel.tsx` | removes the read-aloud guidance, adds the re-transcribe button + ⚠ warning |
| `frontend/src/features/home/ui/HomePage.tsx` | passes speed/setSpeed/ttsMode down to GeneratedAudiosPanel |
185
Docs/DevLogs/Day24.md
Normal file
@@ -0,0 +1,185 @@
## 🔧 Auth-expiry governance + multi-clip timeline stability fixes (Day 24)

### Overview

Two main tracks today:

1. **Account & auth governance**: membership expiry now takes effect at request time (triggered by the login/auth endpoints), with a unified renewal prompt.
2. **Video-generation stability**: an end-to-end pass over multi-clip timeline behavior, trim semantics, concat boundary freezes, aspect ratio, and subtitle/title fitting.

---

## 🔐 Membership expiry at request time — Phase 1 (Day 24)

### Goal

Avoid relying on scheduled jobs: expiry is evaluated, and the account deactivated, the moment the user logs in or hits a protected endpoint.

### Behavior changes

- Expiry is judged from `users.expires_at`.
- Once judged expired:
  - `is_active` is automatically set to `false`
  - all of the user's sessions are deleted
  - the request returns `403` with the message "会员已到期,请续费" (membership expired, please renew)

### Implementation notes

- `users.py` adds `deactivate_user_if_expired()` plus `_parse_expires_at()` for unified timezone parsing.
- `deps.py` wires the expiry check into both `get_current_user` and `get_current_user_optional`.
- `auth/router.py` adds expiry deactivation to the login path; `/api/auth/me` now uses `Depends(get_current_user)` uniformly.

---

## 🖼️ Aspect-ratio control + subtitle/title fitting — Phase 2 (Day 24)

### 2.1 Configurable output aspect ratio

- The timeline header gains an "aspect ratio" dropdown: `9:16` / `16:9`.
- Defaults to `9:16` and persists to localStorage.
- Generation requests carry `output_aspect_ratio`; the backend resolves the target resolution uniformly in both the single-clip and multi-clip flows.

### 2.2 Keeping titles/subtitles from overflowing on narrow canvases

To reduce "fine in preview, overflows in the final render" discrepancies, preview and render now share one strategy:

- Responsive scaling based on the composition width.
- Wrapping enabled: `white-space: normal` + `word-break` + `overflow-wrap`.
- Stroke, letter-spacing, and vertical margins scale by the same factor.
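
The scaling rule amounts to multiplying every size by `width / reference_width`; a tiny illustration (the 1080px reference width is an assumption, not the project's actual constant):

```python
def scale(value: float, composition_width: int, reference_width: int = 1080) -> float:
    """Scale a font size / stroke width / margin proportionally to canvas width."""
    return round(value * composition_width / reference_width, 2)

# a 48px base title on a 608px-wide canvas
print(scale(48, 608))  # → 27.02
```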

### 2.3 Intro-title display modes (short / persistent)

- The "intro title" row in the titles & subtitles panel gains a dropdown: `短暂显示` (short) / `常驻显示` (persistent).
- The default mode is short, with a default duration of 4 seconds.
- The user's choice persists to localStorage and survives refresh.
- Generation requests add `title_display_mode`; short mode also passes `title_duration=4.0` through.
- Remotion supports the parameter end to end:
  - `short`: the title fades out after the configured duration and stops rendering;
  - `persistent`: the title stays for the whole video (fade-in kept, no fade-out).

---

## 🎥 Orientation normalization + multi-clip concat stability — Phase 3 (Day 24)

### 3.1 MOV rotation metadata breaking portrait/landscape detection

Problem scenario: the encoded resolution is landscape, but the rotation side-data is required to display the video correctly as portrait (common with phone-shot MOVs).

Fix:

- `get_video_metadata()` now also returns `rotation/effective_width/effective_height`.
- A new `normalize_orientation()` physically normalizes clips that carry rotation metadata before the pipeline runs.
- Both the single-clip and multi-clip flows normalize orientation right after download, before any resolution decision.
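
The effective-dimension logic reduces to swapping width and height for 90°/270° rotations; a sketch, simplified relative to the real `get_video_metadata()`:

```python
def effective_dimensions(width: int, height: int, rotation: int) -> tuple[int, int]:
    """Return display dimensions after applying rotation side-data."""
    if rotation % 180 == 90:  # 90° or 270° (also handles -90): axes swap
        return height, width
    return width, height

# encoded landscape 1920x1080 with rotation=90 is effectively portrait
print(effective_dimensions(1920, 1080, 90))  # → (1080, 1920)
```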

### 3.2 Multi-clip "only the first segment shows" and boundary freezes

Two classes of protection were added for concat reliability:

- **Assignment guard**: when `custom_assignments` does not match the clip count, the backend falls back to automatic assignment, so malformed input can no longer leave only the first segment in effect.
- **Encoding consistency**:
  - segments are uniformly re-encoded during preparation;
  - the concat stage no longer uses stream copy;
  - everything is further unified to `25fps + CFR`, and concat adds `+genpts`, reducing the "frozen frame while the lips keep moving" risk caused by discontinuous timebases at segment boundaries.
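
Assembled as an ffmpeg invocation, the combination looks roughly like this (illustrative only; flag spellings vary by ffmpeg version, e.g. older builds use `-vsync cfr` instead of `-fps_mode cfr`, and the codec choices here are assumptions):

```python
def build_concat_cmd(list_file: str, out_path: str, fps: int = 25) -> list[str]:
    """Assemble an ffmpeg concat command that re-encodes at a constant frame rate."""
    return [
        "ffmpeg", "-y",
        "-fflags", "+genpts",           # regenerate PTS while demuxing the list
        "-f", "concat", "-safe", "0",
        "-i", list_file,
        "-r", str(fps),                 # force the unified frame rate
        "-fps_mode", "cfr",             # constant frame rate, no passthrough
        "-c:v", "libx264", "-c:a", "aac",
        out_path,
    ]

cmd = build_concat_cmd("segments.txt", "merged.mp4")
print(" ".join(cmd))
```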

---

## ⏱️ Timeline trim-semantics alignment — Phase 4 (Day 24)

### Background

The timeline's intended semantics are:

- each segment can set `sourceStart/sourceEnd`;
- when the total duration exceeds the audio, only the visible segments are kept and the last one is cut flush with the audio;
- when the total duration falls short, the last visible segment loops to fill the gap.

Today the frontend and backend were aligned to these semantics.

### 4.1 `source_end` wired end to end

Previously only `source_start` was sent, so the backend could not know exactly where a trim ends.

Changes:

- The frontend's `toCustomAssignments()` gains an optional `source_end`.
- The backend's `CustomAssignment` schema gains `source_end`.
- The workflow forwards `source_end` to `prepare_segment()` (single-clip and multi-clip alike).
- `prepare_segment()` gains a `source_end` parameter, computes the usable span as `[source_start, source_end)`, and when looping is needed, trims first and then loops, avoiding a misaligned loop range.

### 4.2 Effective timeline-duration fix

Fixed the effective-duration bug for `sourceStart > 0` with `sourceEnd = 0` (open end):

- the old logic used the full clip duration;
- the new logic uses `materialDuration - sourceStart`.
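
In code the rule is a one-liner; a Python rendering of the TypeScript fix (names mirror the frontend, with `sourceEnd = 0` meaning "play to the end"):

```python
def effective_duration(material_duration: float, source_start: float = 0.0,
                       source_end: float = 0.0) -> float:
    """Effective length of a trimmed segment; source_end == 0 means open-ended."""
    end = source_end if source_end > 0 else material_duration
    return max(0.0, end - source_start)

print(effective_duration(30.0, source_start=12.0))                    # → 18.0 (was 30.0 before the fix)
print(effective_duration(30.0, source_start=12.0, source_end=20.0))   # → 8.0
```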

The fix is used in both:

- the segment-duration computation in `recalcPositions()`;
- the "loop to fill" visualization ratio in TimelineEditor.

### 4.3 Visible-segment assignment priority fix

Fixed the case where "fewer visible segments than selected clips" caused `custom_assignments` to be dropped in favor of automatic assignment:

- generation requests now take the timeline's visible-segment `assignments` as authoritative;
- clips beyond the timeline do not participate in that generation.

### 4.4 Single-clip trim trigger completed

In single-clip mode, changing only the end point (`sourceEnd > 0`) now also sends `custom_assignments`, so the trim actually takes effect.

---

## 🧭 Page interaction & UX details — Phase 5 (Day 24)

- The page scrolls back to the top on refresh instead of landing at a restored scroll position.
- Clip-list and history-list auto-scrolling gains a "skip the first automatic scroll" guard, reducing jumps while state is being restored.
- Redundant copy was removed from the timeline's aspect-ratio area to keep it concise.

---

## Affected files

### Backend changes

| File | Change |
|------|------|
| `backend/app/repositories/users.py` | new `deactivate_user_if_expired()` and `_parse_expires_at()` |
| `backend/app/core/deps.py` | expiry check wired into `get_current_user` / `get_current_user_optional` |
| `backend/app/modules/auth/router.py` | expiry deactivation on login + unified auth dependency for `/api/auth/me` |
| `backend/app/modules/videos/schemas.py` | `CustomAssignment` gains `source_end`; `output_aspect_ratio` retained |
| `backend/app/modules/videos/workflow.py` | forwards `source_end` in single/multi-clip flows; unified 25fps for multi-clip prepare/concat; title display mode forwarded to Remotion |
| `backend/app/services/video_service.py` | rotation-metadata parsing and orientation normalization; `prepare_segment` supports `source_end/target_fps`; concat forces CFR + `+genpts` |
| `backend/app/services/remotion_service.py` | render supports `title_display_mode/title_duration`, forwarded to render.ts |

### Frontend changes

| File | Change |
|------|------|
| `frontend/src/features/home/model/useTimelineEditor.ts` | `CustomAssignment` gains `source_end`; fixes duration computation for sourceStart with an open end |
| `frontend/src/features/home/model/useHomeController.ts` | multi-clip requests send the visible assignments as authoritative; single-clip trim trigger completed |
| `frontend/src/features/home/ui/TimelineEditor.tsx` | aspect-ratio dropdown; loop ratio computed from the post-trim effective duration |
| `frontend/src/features/home/model/useHomePersistence.ts` | persists `outputAspectRatio` and `titleDisplayMode` |
| `frontend/src/features/home/ui/HomePage.tsx` | scroll-to-top on page entry; ClipTrimmer/Timeline interaction kept consistent |
| `frontend/src/features/home/ui/FloatingStylePreview.tsx` | title/subtitle style preview aligned with the final-render strategy |
| `frontend/src/features/home/ui/TitleSubtitlePanel.tsx` | title row gains the "short / persistent" display dropdown |

### Remotion changes

| File | Change |
|------|------|
| `remotion/src/components/Title.tsx` | responsive title scaling and auto-wrap; new short/persistent display-mode control |
| `remotion/src/components/Subtitles.tsx` | responsive subtitle scaling and auto-wrap, reducing preview/render differences |
| `remotion/src/Video.tsx` | new `titleDisplayMode` forwarded to the title component |
| `remotion/src/Root.tsx` | default props gain `titleDisplayMode='short'` and `titleDuration=4` |
| `remotion/render.ts` | new `--titleDisplayMode` CLI argument; inputProps gain `titleDisplayMode` |

---

## Verification

- Backend syntax check: `python -m py_compile backend/app/modules/videos/schemas.py backend/app/modules/videos/workflow.py backend/app/services/video_service.py backend/app/services/remotion_service.py`
- Frontend type check: `npx tsc --noEmit`
- Frontend ESLint: `npx eslint src/features/home/model/useHomeController.ts src/features/home/model/useHomePersistence.ts src/features/home/ui/HomePage.tsx src/features/home/ui/TitleSubtitlePanel.tsx`
- Remotion render-script build: `npm run build:render`

@@ -30,7 +30,7 @@
| ⚡ **Med** | `Docs/BACKEND_README.md` | **(backend docs)** API reference, architecture design |
| ⚡ **Med** | `Docs/FRONTEND_DEV.md` | **(frontend conventions)** API wrappers, date formatting, new-page conventions |
| ⚡ **Med** | `Docs/FRONTEND_README.md` | **(frontend docs)** feature descriptions, page changes |
| 🧊 **Low** | `Docs/*_DEPLOY.md` | **(subsystem deployment)** standalone deployment docs for LatentSync/Qwen3/subtitles |
| 🧊 **Low** | `Docs/*_DEPLOY.md` | **(subsystem deployment)** standalone deployment docs for LatentSync/CosyVoice/subtitles |

---

@@ -195,7 +195,8 @@ ViGent2/Docs/
├── DEPLOY_MANUAL.md       # deployment manual
├── SUPABASE_DEPLOY.md     # Supabase deployment doc
├── LATENTSYNC_DEPLOY.md   # LatentSync deployment doc
├── QWEN3_TTS_DEPLOY.md    # voice-clone deployment doc
├── COSYVOICE3_DEPLOY.md   # voice-clone deployment doc
├── ALIPAY_DEPLOY.md       # Alipay payment deployment doc
├── SUBTITLE_DEPLOY.md     # subtitle-system deployment doc
└── DevLogs/
    ├── Day1.md            # dev logs
@@ -304,4 +305,4 @@ ViGent2/Docs/

---

**Last updated**: 2026-02-08
**Last updated**: 2026-02-11

@@ -10,8 +10,9 @@ frontend/src/
│ ├── page.tsx       # home (video generation)
│ ├── publish/       # publish-management page
│ ├── admin/         # admin pages
│ ├── login/         # login
│ └── register/      # register
│ ├── login/         # login
│ ├── register/      # register
│ └── pay/           # paid membership
├── features/        # feature modules (split by domain)
│ ├── home/
│ │ ├── model/       # business-logic hooks
@@ -256,6 +257,12 @@ import { formatDate } from '@/shared/lib/media';

## ⚡️ UX optimization conventions

### Scroll to top on refresh (unified experience)

- Long pages (home/publish) scroll to the top on first mount, so the browser's restored scroll position cannot land the user mid-page.
- Recommended implementation: `useEffect(() => { window.scrollTo({ top: 0, left: 0, behavior: 'auto' }); }, [])`
- In-list auto-positioning (clips/history) should skip its first post-restore trigger, preventing a second jump after refresh.

### Route prefetch

- Use `router.prefetch("/publish")` when entering publish management from the home page

@@ -305,9 +312,12 @@ import { formatDate } from '@/shared/lib/media';
- **Must persist**:
  - title style ID / subtitle style ID
  - title font size / subtitle font size
  - title display mode (`short` / `persistent`)
  - background-music selection / volume / on-off state
  - output aspect ratio (`9:16` / `16:9`)
  - clip selection / history-work selection
  - selected audio ID (`selectedAudioId`)
  - speed (`speed`, voice-clone mode)
  - timeline segment info (`useTimelineEditor`'s localStorage)

### History scripts (persisted separately)

@@ -332,6 +342,7 @@ import { formatDate } from '@/shared/lib/media';
- The intro title and the publish-info title are both capped at 15 characters.
- No truncation during IME composition; length is validated only after composition ends.
- Editing the home intro title also writes to `vigent_${storageKey}_publish_title`.
- The title display mode uses the two fixed values `short` / `persistent`; default `short` (4-second display).
- Avoid `maxLength`, which force-truncates the IME composition state.
- Prefer `@/shared/hooks/useTitleInput` to unify the input handling.

@@ -361,9 +372,11 @@ import { formatDate } from '@/shared/lib/media';

| Endpoint | Method | Purpose |
|------|------|------|
| `/api/ref-audios` | POST | upload reference audio (multipart/form-data: file + ref_text) |
| `/api/ref-audios` | POST | upload reference audio (multipart/form-data: file; ref_text optional, backend auto-transcribes via Whisper) |
| `/api/ref-audios` | GET | list the user's reference audios |
| `/api/ref-audios/{id}` | PUT | rename a reference audio |
| `/api/ref-audios/{id}` | DELETE | delete a reference audio (id needs encodeURIComponent) |
| `/api/ref-audios/{id}/retranscribe` | POST | re-transcribe a reference audio (Whisper + auto-cut over 10s) |

### Video-generation API extensions

@@ -382,7 +395,8 @@ await api.post('/api/videos/generate', {
  text: '口播文案',
  tts_mode: 'voiceclone',
  ref_audio_id: 'user_id/timestamp_name.wav',
  ref_text: '参考音频对应文字',
  ref_text: '参考音频对应文字', // fetched automatically from the reference audio's metadata
  speed: 1.0, // speech speed (0.8-1.2)
});
```

@@ -396,8 +410,14 @@
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
const mediaRecorder = new MediaRecorder(stream, { mimeType: 'audio/webm' });
```

### Automatic reference-audio processing

- **Auto-transcription**: on upload, the backend runs Whisper on the reference audio and stores the result as `ref_text`; no manual input is needed
- **Auto-cut**: reference audio over 10 seconds is cut at a silence point to at most 10 seconds (CosyVoice recommends 3-10 s)
- **Re-transcription**: legacy reference audio can be re-transcribed and cut via the retranscribe endpoint

### UI structure

The TTS mode uses tab switching:

- **EdgeTTS voices** - preset voices in a 2x3 grid
- **Voice clone** - reference-audio list + in-browser recording + reference-text input
- **Voice clone** - reference-audio list + in-browser recording + speed dropdown (5 levels: slow / slightly slow / normal / slightly fast / fast)

@@ -35,8 +35,10 @@ ViGent2's frontend, built with Next.js 16 + TailwindCSS.

### 3. Voice cloning [Added Day 13]
- **TTS mode selection**: toggle between EdgeTTS (preset voices) and voice clone (custom voice).
- **Reference-audio management**: upload/list/delete reference audio (3-20s WAV).
- **One-click cloning**: automatically calls the Qwen3-TTS service once a reference audio is selected.
- **Reference-audio management**: upload/list/rename/delete reference audio; auto Whisper transcription of ref_text on upload + auto-cut over 10s.
- **Re-transcription**: legacy reference audio can be re-transcribed and cut (RotateCw button).
- **One-click cloning**: automatically calls the CosyVoice 3.0 service once a reference audio is selected.
- **Speed control**: 5 speed levels (0.8-1.2) in voice-clone mode, selection persisted (Day 23).
- **Multilingual support**: 10-language EdgeTTS voice list; voice-clone language passthrough (Day 22).

### 4. Audio-first workflow + timeline editing [Added Day 23]
@@ -45,10 +47,12 @@ ViGent2's frontend, built with Next.js 16 + TailwindCSS.
- **Timeline editor**: wavesurfer.js waveform + color-block clip assignment; drag dividers to adjust segment lengths.
- **Clip trim settings**: ClipTrimmer dual-handle range slider + HTML5 video preview playback.
- **Drag reorder**: timeline blocks support HTML5 Drag & Drop to swap clip order.
- **Custom assignment**: backend `custom_assignments` supports user-defined clip assignment.
- **Custom assignment**: backend `custom_assignments` supports user-defined clip assignment (including `source_start/source_end` trim spans).
- **Timeline semantics alignment**: when the audio is exceeded, only visible segments are kept and the last one is cut flush; excess segments do not participate; when the audio is underrun, the last visible segment loops to fill.
- **Aspect-ratio control**: `9:16 / 16:9` output selection atop the timeline, persisted and forwarded to the backend.

### 5. Subtitles & titles [Added Day 13]
- **Intro title**: optional, capped at 15 characters; a 3-second fade-in/out title at the start of the video.
- **Intro title**: optional, capped at 15 characters; supports "short / persistent" display, defaulting to short (4 seconds).
- **Title sync**: editing the home intro title syncs to the publish-info title.
- **Karaoke subtitles**: per-character highlight, on by default, can be disabled.
- **Auto alignment**: word-level timestamps generated with faster-whisper.

@@ -65,6 +69,12 @@ ViGent2's frontend, built with Next.js 16 + TailwindCSS.
- **Account dropdown**: shows validity period + change password + safe sign-out.
- **Change password**: modal asks for the current and new passwords; forces re-login afterwards.

### 8. Paid membership (`/pay`)
- **Alipay PC web payment**: redirects to Alipay's official checkout; supports QR code, account login, balance, and more.
- **Auto-activation**: the async callback activates membership (valid 1 year) after a successful payment; the frontend polls for the payment result.
- **Renewal on expiry**: expired members are redirected to the pay page on login; the flow matches first-time activation.
- **Admin activation**: manual admin activation coexists; the two paths do not interfere.

### 8. Script-extraction assistant (`ScriptExtractionModal`) [Added Day 15]
- **Multi-source extraction**: file drag-and-drop upload and URL paste (Bilibili/Douyin/TikTok).
- **AI rewriting**: GLM-4.7-Flash integration, automatically rewrites into a spoken script.
@@ -105,6 +115,8 @@ src/
│ ├── page.tsx       # video-generation home page
│ ├── publish/       # publish-management page
│ │ └── page.tsx
│ ├── pay/           # paid-membership page
│ │ └── page.tsx
│ └── layout.tsx     # global layout (nav bar)
├── features/
│ ├── home/

@@ -185,7 +185,8 @@ Remotion render parameters are configured in `backend/app/services/remotion_service.py`:
| Parameter | Default | Description |
|------|--------|------|
| `fps` | 25 | output frame rate |
| `title_duration` | 3.0 | title display duration (seconds) |
| `title_display_mode` | `short` | title display mode (`short` = brief; `persistent` = always on) |
| `title_duration` | 4.0 | title display duration (seconds, effective only in `short` mode) |

---

@@ -1,8 +1,8 @@
# ViGent2 Development Task Log

**Project**: ViGent2 digital-human talking-head video generation system
**Progress**: 100% (Day 23 - audio-first refactor + clip timeline editing + UX polish)
**Updated**: 2026-02-10
**Progress**: 100% (Day 25 - Alipay paid membership)
**Updated**: 2026-02-11

---

@@ -10,7 +27 @@

> Each day's core development work and milestones are recorded here.

### Day 23: Audio-first refactor + clip timeline editing + UX polish + history scripts (Current)
### Day 25: Alipay paid membership (Current)
- [x] **Alipay PC web payment**: integrates `python-alipay-sdk`; `alipay.trade.page.pay` redirects to the Alipay checkout.
- [x] **payment_token mechanism**: login returns 403 plus a short-lived JWT (30 minutes) for inactive/expired users, carrying identity to the pay page securely.
- [x] **Async notify callback**: `POST /api/payment/notify` verifies the signature → updates the order → activates the user (is_active=true, expires_at=+365 days).
- [x] **Frontend pay page**: `/pay` creates an order on first visit and redirects to the checkout; polls the status after returning from payment.
- [x] **is_active safety net**: `deps.py` checks is_active at both login and auth; expired accounts are auto-deactivated and their sessions cleared.
- [x] **orders data layer**: new `repositories/orders.py` + `orders` database table.
- [x] **Login-flow adaptation**: the login endpoint returns PAYMENT_REQUIRED; the frontend's auth.ts handles the paymentToken redirect.
- [x] **Deployment docs**: new `Docs/ALIPAY_DEPLOY.md` covering key configuration, PEM format, and product enablement.

### Day 24: Auth-expiry governance + multi-clip timeline stability fixes
- [x] **Membership expiry at request time**: login and auth endpoints uniformly check `expires_at`; on expiry the account is deactivated, sessions are cleared, and "会员已到期,请续费" is returned.
- [x] **Aspect-ratio control**: the timeline gains a `9:16 / 16:9` output selector, persisted on the frontend and forwarded to the backend; single- and multi-clip flows use the target resolution uniformly.
- [x] **Title/subtitle overflow protection**: Remotion and the frontend preview share responsive scaling, auto-wrap, and proportional stroke/letter-spacing/margin scaling, narrowing the preview-vs-render gap.
- [x] **Title display modes**: the title row gains a "short / persistent" dropdown; defaults to short (4 seconds); the choice is persisted and forwarded through the Remotion render chain.
- [x] **MOV orientation normalization**: new rotation-metadata parsing and orientation normalize, fixing portrait misdetection for "landscape encoding + rotation metadata" clips.
- [x] **Multi-clip concat stability**: segment prepare and concat unified to 25fps/CFR, concat adds `+genpts`, mitigating "frozen frame while the lips keep moving" at segment switches.
- [x] **Timeline semantics alignment**: `source_end` wired end to end; fixed the duration computation for `sourceStart>0` with `sourceEnd=0`; generation takes the timeline's visible assignments as authoritative, excess segments excluded.
- [x] **Interaction polish**: scroll to top on refresh; first-round auto-scroll suppression for clip/history lists, reducing jumps during state restore.

### Day 23: Audio-first refactor + clip timeline editing + UX polish + voice-clone enhancements

#### Phase 1: Audio-first workflow
- [x] **Standalone audio generation**: new `generated_audios` backend module (router/schemas/service) with 5 API endpoints, reusing the existing TTSService / voice_clone_service / task_store.
@@ -28,8 +48,8 @@
- [x] **MaterialSelector slimmed down**: removed the old duration bar and drag-reorder area (both moved into TimelineEditor).

#### Phase 3: UX polish + TTS stability
- [x] **TTS SoX PATH fix**: `run_qwen_tts.sh` exports the conda env bin onto PATH, fixing the `SoX could not be found!` warning.
- [x] **TTS VRAM management**: `torch.cuda.empty_cache()` after each generation; asyncio.to_thread avoids blocking the event loop.
- [x] **TTS SoX PATH fix**: `run_qwen_tts.sh` exports the conda env bin onto PATH (Qwen3-TTS retired, replaced by CosyVoice 3.0).
- [x] **TTS VRAM management**: `torch.cuda.empty_cache()` after each generation; asyncio.to_thread avoids blocking the event loop (CosyVoice keeps the same mechanism).
- [x] **Audio-list buttons unified**: Play/Edit/Delete shown on hover as one right-aligned group, matching RefAudioPanel; script summary removed.
- [x] **Clip area un-gated from audio**: removed MaterialSelector's selectedAudio overlay; clips can be uploaded and managed at any time.
- [x] **Timeline drag reorder**: TimelineEditor blocks support HTML5 Drag & Drop to swap clip order.
@@ -42,6 +62,20 @@
- [x] **Button visuals unified**: the 4 script-editor buttons fixed at height `h-7`; redundant `<span>` nesting removed.
- [x] **Bottom bar adjusted**: "save script" button moved to the bottom right; estimated-duration display removed.

#### Phase 5: Subtitle-language mismatch + video-aspect misalignment fixes
- [x] **Subtitles use the original script instead of the Whisper transcript**: `align()` gains an `original_text` parameter; subtitle text always uses the script saved with the audio.
- [x] **Remotion dynamic video size**: `calculateMetadata` reads the real dimensions from props, fixing title/subtitle aspect misalignment.
- [x] **English space loss fixed**: `split_word_to_chars` flushes the buffer on a space and sets a pending_space flag.

#### Phase 6: Reference-audio auto-transcription + speed control
- [x] **Whisper auto-transcribes ref_text**: uploading reference audio automatically transcribes it via Whisper as ref_text; the frontend's fixed text is no longer used.
- [x] **Reference-audio auto-cut**: audio over 10 seconds is cut at a silence point (ffmpeg silencedetect), with a 0.1-second tail fade-out to avoid a truncation click.
- [x] **Re-transcription**: new `POST /ref-audios/{id}/retranscribe` endpoint + frontend RotateCw button; legacy audio can be re-transcribed and cut.
- [x] **Speed control**: end-to-end speed parameter (frontend selector → persistence → backend → CosyVoice `inference_zero_shot(speed=)`), 5 levels: slow (0.8) / slightly slow (0.9) / normal (1.0) / slightly fast (1.1) / fast (1.2).
- [x] **Missing-reference-audio gating**: in voice-clone mode with no reference audio selected, the generate button is disabled with a yellow warning.
- [x] **Whisper language auto-detection**: `transcribe()`'s language parameter made optional (default None = auto-detect), supporting multilingual reference audio.
- [x] **Frontend cleanup**: removed the fixed ref_text constant and the read-aloud guidance; simplified to "upload any speech sample, and the system will auto-recognize the content and clone the voice".

### Day 22: Multi-clip optimization + AI translation + multilingual TTS
- [x] **Multi-clip bug fixes**: 6 high-priority bugs (boundary overflow, single-segment fallback, division by zero, duration validation, Whisper fallback, empty-list checks).
- [x] **Architecture refactor**: multi-clip switched from "LatentSync per segment" to "concat first, infer once", reducing inference runs N→1.
@@ -117,7 +151,7 @@
- [x] **UX detail polish**: recording-preview URL cleanup, preview-modal scroll restore, global task-toast mounting.

### Day 16: Deep performance optimization
- [x] **Qwen-TTS acceleration**: Flash Attention 2 integration, model load time down to 8.9s.
- [x] **Qwen-TTS acceleration**: Flash Attention 2 integration (retired, replaced by CosyVoice 3.0).
- [x] **Service guarding**: built the `Watchdog` mechanism to monitor and restart hung services automatically.
- [x] **LatentSync performance confirmed**: verified DeepCache + native Flash Attn in effect.
- [x] **Docs overhaul**: README, deployment manual, and backend docs fully updated.
@@ -130,10 +164,10 @@
### Day 14: AI enhancements & UX
- [x] **AI titles/tags**: GLM-4 API integration to auto-generate video metadata.
- [x] **Subtitle upgrade**: Remotion karaoke-style per-character subtitles and an animated intro.
- [x] **Model upgrade**: Qwen3-TTS upgraded to the 1.7B-Base version.
- [x] **Model upgrade**: voice cloning migrated to CosyVoice 3.0 (0.5B).

### Day 13: Voice-clone integration
- [x] **Voice-clone microservice**: Qwen3-TTS wrapped as a standalone API (port 8009).
- [x] **Voice-clone microservice**: CosyVoice 3.0 wrapped as a standalone API (port 8010, replacing Qwen3-TTS).
- [x] **Reference-audio management**: Supabase storage-bucket configuration and management endpoints.
- [x] **Multi-modal TTS**: frontend supports EdgeTTS / clone-voice switching.

@@ -186,9 +220,10 @@
| **Core API** | 100% | ✅ stable |
| **Web UI** | 100% | ✅ stable (mobile-adapted) |
| **Lip sync** | 100% | ✅ LatentSync 1.6 |
| **TTS** | 100% | ✅ EdgeTTS + Qwen3 + audio-first + timeline editing |
| **TTS** | 100% | ✅ EdgeTTS + CosyVoice 3.0 + audio-first + timeline editing + auto-transcription + speed control |
| **Auto publishing** | 100% | ✅ Douyin / WeChat Channels / Bilibili / Xiaohongshu |
| **User auth** | 100% | ✅ phone number + JWT |
| **Paid membership** | 100% | ✅ Alipay PC web payment + auto-activation |
| **Deployment & ops** | 100% | ✅ PM2 + Watchdog |

---

20
README.md
@@ -5,7 +5,7 @@
> 📹 **Upload a person** · 🎙️ **Enter a script** · 🎬 **One-click video**

An open-source digital-human talking-head video generation system based on **LatentSync 1.6 + EdgeTTS**.
Integrates **Qwen3-TTS** voice cloning and automatic social-media publishing.
Integrates **CosyVoice 3.0** voice cloning and automatic social-media publishing.

[Features](#-功能特性) • [Tech stack](#-技术栈) • [Docs hub](#-文档中心) • [Deployment guide](Docs/DEPLOY_MANUAL.md)

@@ -17,11 +17,13 @@

### Core capabilities
- 🎬 **HD lip sync** - driven by LatentSync 1.6, a 512×512 high-resolution latent-diffusion model.
- 🎙️ **Multi-modal TTS** - supports **EdgeTTS** (Microsoft neural voices, 10 languages) and **Qwen3-TTS** (3-second voice cloning). Audio-first workflow: generate audio → pick clips → generate video.
- 🎙️ **Multi-modal TTS** - supports **EdgeTTS** (Microsoft neural voices, 10 languages) and **CosyVoice 3.0** (3-second voice cloning, 9 languages + 18 dialects, adjustable speed). Uploaded reference audio is auto-transcribed by Whisper and smart-cut. Audio-first workflow: generate audio → pick clips → generate video.
- 📝 **Smart subtitles** - faster-whisper + Remotion integration, auto-generated karaoke (per-character highlight) subtitles.
- 🎨 **Style presets** - title/subtitle style selection + preview + font-size adjustment, with a custom font library.
- 🖼️ **Preview consistency** - title/subtitle preview scaled by clip resolution, closer to the final render.
- 🎞️ **Multi-clip multi-camera** - multi-select clips + timeline editor (wavesurfer.js waveform visualization), drag dividers for durations, drag-reorder for camera switching, trim source clips.
- 🏷️ **Title display modes** - intro title supports `short` / `persistent` display, defaulting to short (4 seconds); the preference auto-persists.
- 🖼️ **Preview consistency** - title/subtitle preview shares responsive scaling and auto-wrap with the Remotion render, stable even on narrow canvases.
- 🎞️ **Multi-clip multi-camera** - multi-select clips + timeline editor (wavesurfer.js waveform visualization), drag dividers for durations, drag-reorder for camera switching, trim by `source_start/source_end`.
- 📐 **Aspect-ratio control** - one-click `9:16 / 16:9` output switch on the timeline; the whole generation chain honors the target ratio.
- 💾 **User-preference persistence** - home-page state uniformly saved and restored; configuration survives refresh. Manual save/load of history scripts.
- 🎵 **Background music** - audition + volume control + mixing, keeping the voice-over level stable.
- 🤖 **AI-assisted creation** - built-in GLM-4.7-Flash: Bilibili/Douyin link script extraction, AI rewriting, title/tag generation, 9-language translation.
@@ -31,6 +33,7 @@
- 🖥️ **Publish-management preview** - supports signed-URL / relative-path previews, guaranteed directly playable.
- 📸 **Publish results visualized** - Douyin/WeChat Channels publishing returns a screenshot, viewable on the result card.
- 🛡️ **Publish misuse guard** - while publishing, warns "do not refresh or close the page" and intercepts refresh/close with a confirmation.
- 💳 **Paid membership** - Alipay PC web payment auto-activates membership; auto-deactivation and renewal guidance on expiry; manual admin activation coexists.
- 🔐 **Auth & isolation** - Supabase-backed user isolation; phone-number registration/login, password management.
- 🛡️ **Service guarding** - built-in Watchdog monitors and restarts hung services for 7x24 stability.
- 🚀 **Performance** - video pre-compression, resident model services (near-instant load), dual-GPU pipelined concurrency.

@@ -45,7 +48,7 @@
| **Backend** | FastAPI | Python 3.10, AsyncIO, PM2 |
| **Database** | Supabase | PostgreSQL, Storage (local/S3), Auth |
| **Lip sync** | LatentSync 1.6 | PyTorch 2.5, Diffusers, DeepCache |
| **Voice cloning** | Qwen3-TTS | 1.7B parameters, Flash Attention 2 acceleration |
| **Voice cloning** | CosyVoice 3.0 | 0.5B parameters, 9 languages + 18 dialects |
| **Automation** | Playwright | headless-browser social-media automation |
| **Deployment** | Docker & PM2 | hybrid deployment architecture |

@@ -57,9 +60,10 @@

### Deployment & operations
- **[Deployment manual (DEPLOY_MANUAL.md)](Docs/DEPLOY_MANUAL.md)** - 👈 **start here for deployment**! Full environment-setup steps.
- [Reference-audio service deployment (QWEN3_TTS_DEPLOY.md)](Docs/QWEN3_TTS_DEPLOY.md) - voice-clone model deployment guide.
- [Reference-audio service deployment (COSYVOICE3_DEPLOY.md)](Docs/COSYVOICE3_DEPLOY.md) - voice-clone model deployment guide.
- [LatentSync deployment guide](models/LatentSync/DEPLOY.md) - standalone lip-sync model deployment.
- [Supabase deployment guide (SUPABASE_DEPLOY.md)](Docs/SUPABASE_DEPLOY.md) - Supabase and auth-system configuration.
- [Alipay deployment guide (ALIPAY_DEPLOY.md)](Docs/ALIPAY_DEPLOY.md) - Alipay paid-membership configuration.

### Development docs
- [Backend development guide](Docs/BACKEND_README.md) - API conventions and development workflow.
@@ -82,7 +86,7 @@ ViGent2/
├── remotion/      # Remotion video rendering (title/subtitle compositing)
├── models/        # AI model repos
│ ├── LatentSync/  # lip-sync service
│ └── Qwen3-TTS/   # voice-clone service
│ └── CosyVoice/   # voice-clone service
└── Docs/          # project documentation
```

@@ -97,7 +101,7 @@
| **Web UI** | 3002 | user entry point (Next.js) |
| **Backend API** | 8006 | core business API (FastAPI) |
| **LatentSync** | 8007 | lip-sync inference service |
| **Qwen3-TTS** | 8009 | voice-clone inference service |
| **CosyVoice 3.0** | 8010 | voice-clone inference service |
| **Supabase** | 8008 | database & auth gateway |

---

@@ -73,3 +73,10 @@ SUPABASE_STORAGE_LOCAL_PATH=/home/rongye/ProgramFiles/Supabase/volumes/storage/s
# =============== Douyin video-download cookie ===============
# Used for extracting video scripts from Douyin URLs; expires and needs periodic refresh
DOUYIN_COOKIE=douyin.com; device_web_cpu_core=10; device_web_memory_size=8; __ac_nonce=06760391f00b9b51264ae; __ac_signature=_02B4Z6wo00f019a5ceAAAIDAhEZR-X3jjWfWmXVAAJLXd4; ttwid=1%7C7MTKBSMsP4eOv9h5NAh8p0E-NYIud09ftNmB0mjLpWc%7C1734359327%7C8794abeabbd47447e1f56e5abc726be089f2a0344d6343b5f75f23e7b0f0028f; UIFID_TEMP=0de8750d2b188f4235dbfd208e44abbb976428f0720eb983255afefa45d39c0c6532e1d4768dd8587bf919f866ff1396912bcb2af71efee56a14a2a9f37b74010d0a0413795262f6d4afe02a032ac7ab; s_v_web_id=verify_m4r4ribr_c7krmY1z_WoeI_43po_ATpO_I4o8U1bex2D7; hevc_supported=true; home_can_add_dy_2_desktop=%220%22; dy_swidth=2560; dy_sheight=1440; stream_recommend_feed_params=%22%7B%5C%22cookie_enabled%5C%22%3Atrue%2C%5C%22screen_width%5C%22%3A2560%2C%5C%22screen_height%5C%22%3A1440%2C%5C%22browser_online%5C%22%3Atrue%2C%5C%22cpu_core_num%5C%22%3A10%2C%5C%22device_memory%5C%22%3A8%2C%5C%22downlink%5C%22%3A10%2C%5C%22effective_type%5C%22%3A%5C%224g%5C%22%2C%5C%22round_trip_time%5C%22%3A50%7D%22; strategyABtestKey=%221734359328.577%22; csrf_session_id=2f53aed9aa6974e83aa9a1014180c3a4; fpk1=U2FsdGVkX1/IpBh0qdmlKAVhGyYHgur4/VtL9AReZoeSxadXn4juKvsakahRGqjxOPytHWspYoBogyhS/V6QSw==; fpk2=0845b309c7b9b957afd9ecf775a4c21f; passport_csrf_token=d80e0c5b2fa2328219856be5ba7e671e; passport_csrf_token_default=d80e0c5b2fa2328219856be5ba7e671e; odin_tt=3c891091d2eb0f4718c1d5645bc4a0017032d4d5aa989decb729e9da2ad570918cbe5e9133dc6b145fa8c758de98efe32ff1f81aa0d611e838cc73ab08ef7d3f6adf66ab4d10e8372ddd628f94f16b8e; volume_info=%7B%22isUserMute%22%3Afalse%2C%22isMute%22%3Afalse%2C%22volume%22%3A0.5%7D; bd_ticket_guard_client_web_domain=2; FORCE_LOGIN=%7B%22videoConsumedRemainSeconds%22%3A180%7D; 
UIFID=0de8750d2b188f4235dbfd208e44abbb976428f0720eb983255afefa45d39c0c6532e1d4768dd8587bf919f866ff139655a3c2b735923234f371c699560c657923fd3d6c5b63ab7bb9b83423b6cb4787e2ce66a7fbc4ecb24c8570f520fe6de068bbb95115023c0c6c1b6ee31b49fb7e3996fb8349f43a3fd8b7a61cd9e18e8fe65eb6a7c13de4c0960d84e344b644725db3eb2fa6b7caf821de1b50527979f2; is_dash_user=1; biz_trace_id=b57a241f; bd_ticket_guard_client_data=eyJiZC10aWNrZXQtZ3VhcmQtdmVyc2lvbiI6MiwiYmQtdGlja2V0LWd1YXJkLWl0ZXJhdGlvbi12ZXJzaW9uIjoxLCJiZC10aWNrZXQtZ3VhcmQtcmVlLXB1YmxpYy1rZXkiOiJCTEo2R0lDalVoWW1XcHpGOFdrN0Vrc0dXcCtaUzNKY1g4NGNGY2k0TTl1TEowNjdUb21mbFU5aDdvWVBGamhNRWNRQWtKdnN1MnM3RmpTWnlJQXpHMjA9IiwiYmQtdGlja2V0LWd1YXJkLXdlYi12ZXJzaW9uIjoyfQ%3D%3D; download_guide=%221%2F20241216%2F0%22; sdk_source_info=7e276470716a68645a606960273f276364697660272927676c715a6d6069756077273f276364697660272927666d776a68605a607d71606b766c6a6b5a7666776c7571273f275e58272927666a6b766a69605a696c6061273f27636469766027292762696a6764695a7364776c6467696076273f275e5827292771273f273d33323131333c3036313632342778; bit_env=RiOY4jzzpxZoVCl6zdVSVhVRjdwHRTxqcqWdqMBZLPGjMdB4Tax1kAELHNTVAAh72KuhumewE4Lq6f0-VJ2UpJrkrhSxoPw9LUb3zQrq1OSwbeSPHkRlRgRQvO89sItdGUyq1oFr0XyRCnMYG87KSeWyc4x0czGR0o50hTDoDLG5rJVoRcdQOLvjiAegsqyytKF59sPX_QM9qffK2SqYsg0hCggURc_AI6kguDDE5DvG0bnyz1utw4z1eEnIoLrkGDqzqBZj4dOAr0BVU6ofbsS-pOQ2u2PM1dLP9FlBVBlVaqYVgHJeSLsR5k76BRTddUjTb4zEilVIEwAMJWGN4I1BxVt6fC9B5tBQpuT0lj3n3eKXCKXZsd8FrEs5_pbfDsxV-e_WMiXI2ff4qxiTC0U73sfo9OpicKICtZjdq8qsHxJuu6wVR36zvXeL2Wch5C6MzprNvkivv0l8nbh2mSgy1nabZr3dmU6NcR-Bg3Q3xTWUlR9aAUmpopC-cNuXjgLpT-Lw1AYGilSUnCvosth1Gfypq-b0MpgmdSDgTrQ%3D; gulu_source_res=eyJwX2luIjoiMDhjOGQ3ZTJiODQyNjZkZWI5Y2VkMGJiODNlNmY1ZWY0ZjMyNTE2ZmYyZjAzNDMzZjI0OWU1Y2Q1NTczNTk5NyJ9; passport_auth_mix_state=hp9bc3dgb1tm5wd8p82zawus27g0e3ue; IsDouyinActive=false

# =============== Alipay configuration ===============
ALIPAY_APP_ID=********
ALIPAY_PRIVATE_KEY_PATH=/home/rongye/ProgramFiles/ViGent2/backend/keys/app_private_key.pem
ALIPAY_PUBLIC_KEY_PATH=/home/rongye/ProgramFiles/ViGent2/backend/keys/alipay_public_key.pem
ALIPAY_NOTIFY_URL=https://vigent.hbyrkj.top/api/payment/notify
ALIPAY_RETURN_URL=https://vigent.hbyrkj.top/pay

@@ -76,6 +76,16 @@ class Settings(BaseSettings):
    GLM_API_KEY: str = ""
    GLM_MODEL: str = "glm-4.7-flash"

    # Alipay configuration
    ALIPAY_APP_ID: str = ""
    ALIPAY_PRIVATE_KEY_PATH: str = ""  # path to the app private-key PEM file
    ALIPAY_PUBLIC_KEY_PATH: str = ""   # path to the Alipay public-key PEM file
    ALIPAY_NOTIFY_URL: str = ""        # async notify callback URL (publicly reachable)
    ALIPAY_RETURN_URL: str = ""        # synchronous redirect URL after payment
    ALIPAY_SANDBOX: bool = False       # whether to use the sandbox environment
    PAYMENT_AMOUNT: float = 999.00     # membership price (CNY)
    PAYMENT_EXPIRE_DAYS: int = 365     # membership validity in days

    # CORS configuration (comma-separated origin list; * allows all)
    CORS_ORIGINS: str = "*"

@@ -1,11 +1,11 @@
"""
Dependency-injection module: authentication and user retrieval
"""
from typing import Optional, Any, Dict, cast
from typing import Optional, Any, Dict, cast
from fastapi import Request, HTTPException, Depends, status
from app.core.security import decode_access_token, TokenData
from app.repositories.sessions import get_session
from app.repositories.users import get_user_by_id
from app.core.security import decode_access_token
from app.repositories.sessions import get_session, delete_sessions
from app.repositories.users import get_user_by_id, deactivate_user_if_expired
from loguru import logger


@@ -14,9 +14,9 @@ async def get_token_from_cookie(request: Request) -> Optional[str]:
    return request.cookies.get("access_token")


async def get_current_user_optional(
    request: Request
) -> Optional[Dict[str, Any]]:
async def get_current_user_optional(
    request: Request
) -> Optional[Dict[str, Any]]:
    """
    Get the current user (optional; returns None when not logged in)
    """
@@ -29,22 +29,30 @@ async def get_current_user_optional(
        return None

    # Verify the session_token (single-device login check)
    try:
        session = get_session(token_data.user_id, token_data.session_token)
        if not session:
            logger.warning(f"Session token 无效: user_id={token_data.user_id}")
            return None

        user = get_user_by_id(token_data.user_id)
        return cast(Optional[Dict[str, Any]], user)
    except Exception as e:
        logger.error(f"获取用户信息失败: {e}")
        return None
    try:
        session = get_session(token_data.user_id, token_data.session_token)
        if not session:
            logger.warning(f"Session token 无效: user_id={token_data.user_id}")
            return None

        user = cast(Optional[Dict[str, Any]], get_user_by_id(token_data.user_id))
        if user and deactivate_user_if_expired(user):
            delete_sessions(token_data.user_id)
            return None

        if user and not user.get("is_active"):
            delete_sessions(token_data.user_id)
            return None

        return user
    except Exception as e:
        logger.error(f"获取用户信息失败: {e}")
        return None


async def get_current_user(
    request: Request
) -> Dict[str, Any]:
async def get_current_user(
    request: Request
) -> Dict[str, Any]:
    """
    Get the current user (login required)

@@ -66,40 +74,45 @@ async def get_current_user(
            detail="Token 无效或已过期"
        )

    try:
        session = get_session(token_data.user_id, token_data.session_token)
        if not session:
            raise HTTPException(
                status_code=status.HTTP_403_FORBIDDEN,
                detail="会话已失效,请重新登录(可能已在其他设备登录)"
            )

        user = get_user_by_id(token_data.user_id)
        if not user:
            raise HTTPException(
                status_code=status.HTTP_401_UNAUTHORIZED,
                detail="用户不存在"
            )
        user = cast(Dict[str, Any], user)

        if user.get("expires_at"):
            from datetime import datetime, timezone
            expires_at = datetime.fromisoformat(user["expires_at"].replace("Z", "+00:00"))
            if datetime.now(timezone.utc) > expires_at:
                raise HTTPException(
                    status_code=status.HTTP_403_FORBIDDEN,
                    detail="授权已过期,请联系管理员续期"
                )

        return user
    except HTTPException:
        raise
    except Exception as e:
        logger.error(f"获取用户信息失败: {e}")
        raise HTTPException(
            status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
            detail="服务器错误"
        )
    try:
        session = get_session(token_data.user_id, token_data.session_token)
        if not session:
            raise HTTPException(
                status_code=status.HTTP_403_FORBIDDEN,
                detail="会话已失效,请重新登录(可能已在其他设备登录)"
            )

        user = get_user_by_id(token_data.user_id)
        if not user:
            raise HTTPException(
                status_code=status.HTTP_401_UNAUTHORIZED,
                detail="用户不存在"
            )
        user = cast(Dict[str, Any], user)

        if deactivate_user_if_expired(user):
            delete_sessions(token_data.user_id)
            raise HTTPException(
                status_code=status.HTTP_403_FORBIDDEN,
                detail="会员已到期,请续费"
            )

        if not user.get("is_active"):
|
||||
delete_sessions(token_data.user_id)
|
||||
raise HTTPException(
|
||||
status_code=status.HTTP_403_FORBIDDEN,
|
||||
detail="账号已停用"
|
||||
)
|
||||
|
||||
return user
|
||||
except HTTPException:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error(f"获取用户信息失败: {e}")
|
||||
raise HTTPException(
|
||||
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
|
||||
detail="服务器错误"
|
||||
)
|
||||
|
||||
|
||||
async def get_current_admin(
|
||||
|
||||
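`deactivate_user_if_expired` itself is not part of this diff, but the dependency code above relies on one behavior: it returns `True` (so the caller drops the user's sessions) once `expires_at` lies in the past. A minimal stdlib sketch of that contract, assuming ISO-8601 `expires_at` strings as seen elsewhere in the diff; the real repository function also flips `is_active` in the DB, which is omitted here:

```python
from datetime import datetime, timezone

def deactivate_user_if_expired_sketch(user: dict) -> bool:
    """Hypothetical stand-in: True means the membership has lapsed and the
    caller should invalidate the user's sessions. DB side effects omitted."""
    raw = user.get("expires_at")
    if not raw:
        return False  # no expiry recorded, nothing to do
    expires_at = datetime.fromisoformat(raw.replace("Z", "+00:00"))
    return datetime.now(timezone.utc) > expires_at
```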
```diff
@@ -110,3 +110,28 @@ def set_auth_cookie(response: Response, token: str) -> None:
 def clear_auth_cookie(response: Response) -> None:
     """清除认证 Cookie"""
     response.delete_cookie(key="access_token")
+
+
+def create_payment_token(user_id: str) -> str:
+    """生成付费专用短期 JWT token(30 分钟有效)"""
+    payload = {
+        "sub": user_id,
+        "purpose": "payment",
+        "exp": datetime.now(timezone.utc) + timedelta(minutes=30),
+    }
+    return jwt.encode(payload, settings.JWT_SECRET_KEY, algorithm=settings.JWT_ALGORITHM)
+
+
+def decode_payment_token(token: str) -> str | None:
+    """解析 payment_token,返回 user_id(仅 purpose=payment 有效)"""
+    try:
+        data = jwt.decode(
+            token,
+            settings.JWT_SECRET_KEY,
+            algorithms=[settings.JWT_ALGORITHM],
+        )
+        if data.get("purpose") != "payment":
+            return None
+        return data.get("sub")
+    except JWTError:
+        return None
```
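The pair above issues a short-lived token scoped to a single purpose (`"purpose": "payment"`), so a login cookie can never be replayed as a payment credential. The same pattern can be sketched with only the standard library (an illustrative HMAC token, not real JWT, and a demo secret in place of `settings.JWT_SECRET_KEY`):

```python
import base64, hashlib, hmac, json, time
from typing import Optional

SECRET = b"demo-secret"  # stand-in for settings.JWT_SECRET_KEY

def make_token(user_id: str, purpose: str, ttl_sec: int) -> str:
    """Sign a compact purpose-scoped token (illustrative, not a real JWT)."""
    body = json.dumps({"sub": user_id, "purpose": purpose,
                       "exp": time.time() + ttl_sec}).encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(body).decode() + "." + sig

def read_token(token: str, purpose: str) -> Optional[str]:
    """Return the user id only if signature, purpose and expiry all check out."""
    try:
        b64, sig = token.rsplit(".", 1)
        body = base64.urlsafe_b64decode(b64)
        expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return None
        data = json.loads(body)
        if data.get("purpose") != purpose or data["exp"] < time.time():
            return None
        return data.get("sub")
    except (ValueError, KeyError):
        return None
```

A token minted for `"payment"` is rejected when presented for any other purpose, mirroring the `purpose != "payment"` guard in `decode_payment_token`.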
```diff
@@ -16,6 +16,7 @@ from app.modules.ai.router import router as ai_router
 from app.modules.tools.router import router as tools_router
 from app.modules.assets.router import router as assets_router
 from app.modules.generated_audios.router import router as generated_audios_router
+from app.modules.payment.router import router as payment_router
 from loguru import logger
 import os

@@ -126,6 +127,7 @@ app.include_router(ai_router)  # /api/ai
 app.include_router(tools_router, prefix="/api/tools", tags=["Tools"])
 app.include_router(assets_router, prefix="/api/assets", tags=["Assets"])
 app.include_router(generated_audios_router, prefix="/api/generated-audios", tags=["GeneratedAudios"])
+app.include_router(payment_router)  # /api/payment


 @app.on_event("startup")
```
```diff
@@ -1,22 +1,32 @@
 """
 认证 API:注册、登录、登出、修改密码
 """
-from fastapi import APIRouter, HTTPException, Response, status, Request
+from fastapi import APIRouter, HTTPException, Response, status, Request, Depends
 from fastapi.responses import JSONResponse
 from pydantic import BaseModel, field_validator
 from app.core.security import (
     get_password_hash,
     verify_password,
     create_access_token,
     generate_session_token,
     set_auth_cookie,
     clear_auth_cookie,
-    decode_access_token
+    decode_access_token,
+    create_payment_token,
 )
 from app.repositories.sessions import create_session, delete_sessions
-from app.repositories.users import create_user, get_user_by_id, get_user_by_phone, user_exists_by_phone, update_user
+from app.repositories.users import (
+    create_user,
+    get_user_by_id,
+    get_user_by_phone,
+    user_exists_by_phone,
+    update_user,
+    deactivate_user_if_expired,
+)
+from app.core.deps import get_current_user
 from app.core.response import success_response
 from loguru import logger
 from typing import Optional, Any, cast
 import re

 router = APIRouter(prefix="/api/auth", tags=["认证"])
@@ -76,26 +86,26 @@ async def register(request: RegisterRequest):
     注册后状态为 pending,需要管理员激活
     """
     try:
         if user_exists_by_phone(request.phone):
             raise HTTPException(
                 status_code=status.HTTP_400_BAD_REQUEST,
                 detail="该手机号已注册"
             )

         # 创建用户
         password_hash = get_password_hash(request.password)

         create_user({
             "phone": request.phone,
             "password_hash": password_hash,
             "username": request.username or f"用户{request.phone[-4:]}",
             "role": "pending",
             "is_active": False
         })

         logger.info(f"新用户注册: {request.phone}")

         return success_response(message="注册成功,请等待管理员审核激活")
     except HTTPException:
         raise
     except Exception as e:
@@ -116,12 +126,12 @@ async def login(request: LoginRequest, response: Response):
     - 实现"后踢前"单设备登录
     """
     try:
         user = cast(dict[str, Any], get_user_by_phone(request.phone) or {})
         if not user:
             raise HTTPException(
                 status_code=status.HTTP_401_UNAUTHORIZED,
                 detail="手机号或密码错误"
             )

         # 验证密码
         if not verify_password(request.password, user["password_hash"]):
@@ -130,29 +140,33 @@ async def login(request: LoginRequest, response: Response):
                 detail="手机号或密码错误"
             )

-        # 检查是否激活
-        if not user["is_active"]:
-            raise HTTPException(
-                status_code=status.HTTP_403_FORBIDDEN,
-                detail="账号未激活,请等待管理员审核"
-            )
-
-        # 检查授权是否过期
-        if user.get("expires_at"):
-            from datetime import datetime, timezone
-            expires_at = datetime.fromisoformat(user["expires_at"].replace("Z", "+00:00"))
-            if datetime.now(timezone.utc) > expires_at:
-                raise HTTPException(
-                    status_code=status.HTTP_403_FORBIDDEN,
-                    detail="授权已过期,请联系管理员续期"
-                )
+        # 过期自动停用(注意:只更新 DB,不修改内存中的 user 字典)
+        expired = deactivate_user_if_expired(user)
+        if expired:
+            delete_sessions(user["id"])
+
+        # 过期 或 未激活(新注册)→ 返回付费指引
+        if expired or not user["is_active"]:
+            payment_token = create_payment_token(user["id"])
+            return JSONResponse(
+                status_code=403,
+                content={
+                    "success": False,
+                    "message": "请付费开通会员",
+                    "code": 403,
+                    "data": {
+                        "reason": "PAYMENT_REQUIRED",
+                        "payment_token": payment_token,
+                    }
+                }
+            )

         # 生成新的 session_token (后踢前)
         session_token = generate_session_token()

         # 删除旧 session,插入新 session
         delete_sessions(user["id"])
         create_session(user["id"], session_token, None)

         # 生成 JWT Token
         token = create_access_token(user["id"], session_token)
@@ -162,19 +176,19 @@ async def login(request: LoginRequest, response: Response):

         logger.info(f"用户登录: {request.phone}")

         return success_response(
             data={
                 "user": UserResponse(
                     id=user["id"],
                     phone=user["phone"],
                     username=user.get("username"),
                     role=user["role"],
                     is_active=user["is_active"],
                     expires_at=user.get("expires_at")
                 ).model_dump()
             },
             message="登录成功",
         )
     except HTTPException:
         raise
     except Exception as e:
@@ -186,10 +200,10 @@ async def login(request: LoginRequest, response: Response):


 @router.post("/logout")
 async def logout(response: Response):
     """用户登出"""
     clear_auth_cookie(response)
     return success_response(message="已登出")


 @router.post("/change-password")
@@ -217,12 +231,12 @@ async def change_password(request: ChangePasswordRequest, req: Request, response
             )

     try:
         user = cast(dict[str, Any], get_user_by_id(token_data.user_id) or {})
         if not user:
             raise HTTPException(
                 status_code=status.HTTP_401_UNAUTHORIZED,
                 detail="用户不存在"
             )

         # 验证当前密码
         if not verify_password(request.old_password, user["password_hash"]):
@@ -233,13 +247,13 @@ async def change_password(request: ChangePasswordRequest, req: Request, response

         # 更新密码
         new_password_hash = get_password_hash(request.new_password)
         update_user(user["id"], {"password_hash": new_password_hash})

         # 生成新的 session token,使旧 token 失效
         new_session_token = generate_session_token()

         delete_sessions(user["id"])
         create_session(user["id"], new_session_token, None)

         # 生成新的 JWT Token
         new_token = create_access_token(user["id"], new_session_token)
@@ -247,7 +261,7 @@ async def change_password(request: ChangePasswordRequest, req: Request, response

         logger.info(f"用户修改密码: {user['phone']}")

         return success_response(message="密码修改成功")
     except HTTPException:
         raise
     except Exception as e:
@@ -259,35 +273,13 @@ async def change_password(request: ChangePasswordRequest, req: Request, response


 @router.get("/me")
-async def get_me(request: Request):
+async def get_me(user: dict = Depends(get_current_user)):
     """获取当前用户信息"""
-    # 从 Cookie 获取用户
-    token = request.cookies.get("access_token")
-    if not token:
-        raise HTTPException(
-            status_code=status.HTTP_401_UNAUTHORIZED,
-            detail="未登录"
-        )
-
-    token_data = decode_access_token(token)
-    if not token_data:
-        raise HTTPException(
-            status_code=status.HTTP_401_UNAUTHORIZED,
-            detail="Token 无效"
-        )
-
-    user = cast(dict[str, Any], get_user_by_id(token_data.user_id) or {})
-    if not user:
-        raise HTTPException(
-            status_code=status.HTTP_401_UNAUTHORIZED,
-            detail="用户不存在"
-        )
-
     return success_response(UserResponse(
         id=user["id"],
         phone=user["phone"],
         username=user.get("username"),
         role=user["role"],
         is_active=user["is_active"],
         expires_at=user.get("expires_at")
     ).model_dump())
```
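The login flow keeps its "后踢前" (new login kicks the previous device) guarantee by deleting all of a user's sessions before inserting the fresh one, so at most one session token is ever valid. A minimal in-memory sketch of that invariant (the real code persists sessions through `create_session`/`delete_sessions`; the dict store here is illustrative):

```python
import secrets

# In-memory stand-in for the sessions table: user_id -> set of valid tokens.
SESSIONS: dict[str, set[str]] = {}

def login(user_id: str) -> str:
    """Issue a fresh session token, invalidating any previous device's token."""
    token = secrets.token_hex(16)
    SESSIONS[user_id] = {token}  # drop old sessions, keep only the new one
    return token

def is_session_valid(user_id: str, token: str) -> bool:
    return token in SESSIONS.get(user_id, set())
```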
```diff
@@ -9,6 +9,7 @@ class GenerateAudioRequest(BaseModel):
     ref_audio_id: Optional[str] = None
     ref_text: Optional[str] = None
     language: str = "zh-CN"
+    speed: float = 1.0


 class RenameAudioRequest(BaseModel):
@@ -25,7 +25,7 @@ from app.modules.generated_audios.schemas import (
 BUCKET = "generated-audios"


-def _locale_to_qwen_lang(locale: str) -> str:
+def _locale_to_tts_lang(locale: str) -> str:
     mapping = {"zh": "Chinese", "en": "English"}
     return mapping.get(locale.split("-")[0], "Auto")

@@ -73,19 +73,20 @@ async def generate_audio_task(task_id: str, req: GenerateAudioRequest, user_id:
                     async for chunk in resp.aiter_bytes():
                         f.write(chunk)

-            task_store.update(task_id, {"progress": 40, "message": "正在克隆声音 (Qwen3-TTS)..."})
+            task_store.update(task_id, {"progress": 40, "message": "正在克隆声音..."})
             await voice_clone_service.generate_audio(
                 text=req.text,
                 ref_audio_path=ref_local,
                 ref_text=req.ref_text,
                 output_path=audio_path,
-                language=_locale_to_qwen_lang(req.language),
+                language=_locale_to_tts_lang(req.language),
+                speed=req.speed,
             )
         finally:
             if os.path.exists(ref_local):
                 os.unlink(ref_local)
     else:
-        task_store.update(task_id, {"progress": 30, "message": "正在生成语音 (EdgeTTS)..."})
+        task_store.update(task_id, {"progress": 30, "message": "正在生成语音..."})
         tts = TTSService()
         await tts.generate_audio(req.text, req.voice, audio_path)
```
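The renamed `_locale_to_tts_lang` collapses a BCP-47 locale to the coarse language names the TTS backend expects, defaulting to `"Auto"` for anything outside the mapping. The function is small enough to restate and test directly:

```python
def locale_to_tts_lang(locale: str) -> str:
    """'zh-CN' -> 'Chinese', 'en-US' -> 'English', anything else -> 'Auto'."""
    mapping = {"zh": "Chinese", "en": "English"}
    # Only the primary language subtag matters; the region part is dropped.
    return mapping.get(locale.split("-")[0], "Auto")
```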
0 backend/app/modules/payment/__init__.py Normal file
52 backend/app/modules/payment/router.py Normal file
```diff
@@ -0,0 +1,52 @@
+"""
+支付 API:创建订单、异步通知、状态查询
+
+遵循 BACKEND_DEV.md 规范:router 只做参数校验、调用 service、返回统一响应
+"""
+from fastapi import APIRouter, HTTPException, Request, status
+from fastapi.responses import PlainTextResponse
+
+from app.core.response import success_response
+from .schemas import CreateOrderRequest, CreateOrderResponse, OrderStatusResponse
+from . import service
+
+router = APIRouter(prefix="/api/payment", tags=["支付"])
+
+
+@router.post("/create-order")
+async def create_payment_order(request: CreateOrderRequest):
+    """创建支付宝电脑网站支付订单,返回收银台 URL"""
+    try:
+        result = service.create_payment_order(request.payment_token)
+    except ValueError as e:
+        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail=str(e))
+    except RuntimeError as e:
+        raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail=str(e))
+
+    return success_response(
+        CreateOrderResponse(**result).model_dump()
+    )
+
+
+@router.post("/notify")
+async def payment_notify(request: Request):
+    """
+    支付宝异步通知回调
+
+    必须返回纯文本 "success"(不是 JSON),否则支付宝会重复推送。
+    """
+    form_data = await request.form()
+    verified = service.handle_payment_notify(dict(form_data))
+    return PlainTextResponse("success" if verified else "fail")
+
+
+@router.get("/status/{out_trade_no}")
+async def check_payment_status(out_trade_no: str):
+    """查询订单支付状态(前端轮询)"""
+    order_status = service.get_order_status(out_trade_no)
+    if order_status is None:
+        raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="订单不存在")
+
+    return success_response(
+        OrderStatusResponse(status=order_status).model_dump()
+    )
```
15 backend/app/modules/payment/schemas.py Normal file
```diff
@@ -0,0 +1,15 @@
+from pydantic import BaseModel
+
+
+class CreateOrderRequest(BaseModel):
+    payment_token: str
+
+
+class CreateOrderResponse(BaseModel):
+    pay_url: str
+    out_trade_no: str
+    amount: float
+
+
+class OrderStatusResponse(BaseModel):
+    status: str
```
137 backend/app/modules/payment/service.py Normal file
```diff
@@ -0,0 +1,137 @@
+"""
+支付业务服务
+
+职责:Alipay SDK 封装、创建订单、处理支付通知、查询状态
+遵循 BACKEND_DEV.md "薄路由 + 厚服务" 原则
+"""
+from datetime import datetime, timezone, timedelta
+import uuid
+
+from alipay import AliPay
+from loguru import logger
+
+from app.core.config import settings
+from app.core.security import decode_payment_token
+from app.repositories.orders import create_order, get_order_by_trade_no, update_order_status
+from app.repositories.users import update_user
+
+# 支付宝网关地址
+ALIPAY_GATEWAY = "https://openapi.alipay.com/gateway.do"
+ALIPAY_GATEWAY_SANDBOX = "https://openapi-sandbox.dl.alipaydev.com/gateway.do"
+
+
+def _get_alipay_client() -> AliPay:
+    """延迟初始化 Alipay 客户端"""
+    return AliPay(
+        appid=settings.ALIPAY_APP_ID,
+        app_notify_url=settings.ALIPAY_NOTIFY_URL,
+        app_private_key_string=open(settings.ALIPAY_PRIVATE_KEY_PATH).read(),
+        alipay_public_key_string=open(settings.ALIPAY_PUBLIC_KEY_PATH).read(),
+        sign_type="RSA2",
+        debug=settings.ALIPAY_SANDBOX,
+    )
+
+
+def _create_page_pay_url(out_trade_no: str, amount: float, subject: str) -> str | None:
+    """调用 alipay.trade.page.pay,返回支付宝收银台 URL"""
+    client = _get_alipay_client()
+    order_string = client.api_alipay_trade_page_pay(
+        subject=subject,
+        out_trade_no=out_trade_no,
+        total_amount=amount,
+        return_url=settings.ALIPAY_RETURN_URL,
+    )
+    if not order_string:
+        logger.error(f"电脑网站支付下单失败: {out_trade_no}")
+        return None
+
+    gateway = ALIPAY_GATEWAY_SANDBOX if settings.ALIPAY_SANDBOX else ALIPAY_GATEWAY
+    pay_url = f"{gateway}?{order_string}"
+    logger.info(f"电脑网站支付下单成功: {out_trade_no}")
+    return pay_url
+
+
+def _verify_signature(data: dict, signature: str) -> bool:
+    """验证支付宝异步通知签名"""
+    client = _get_alipay_client()
+    return client.verify(data, signature)
+
+
+def create_payment_order(payment_token: str) -> dict:
+    """
+    创建支付订单完整流程
+
+    Returns: {"pay_url": str, "out_trade_no": str, "amount": float}
+    Raises: ValueError (token 无效), RuntimeError (API 失败)
+    """
+    user_id = decode_payment_token(payment_token)
+    if not user_id:
+        raise ValueError("付费凭证无效或已过期,请重新登录")
+
+    out_trade_no = f"VG_{int(datetime.now().timestamp())}_{uuid.uuid4().hex[:8]}"
+    amount = settings.PAYMENT_AMOUNT
+
+    create_order(user_id, out_trade_no, amount)
+
+    pay_url = _create_page_pay_url(out_trade_no, amount, "IPAgent 会员开通")
+    if not pay_url:
+        raise RuntimeError("创建支付订单失败,请稍后重试")
+
+    logger.info(f"用户 {user_id} 创建支付订单: {out_trade_no}")
+
+    return {"pay_url": pay_url, "out_trade_no": out_trade_no, "amount": amount}
+
+
+def handle_payment_notify(form_data: dict) -> bool:
+    """
+    处理支付宝异步通知完整流程
+
+    Returns: True=验签通过, False=验签失败
+    """
+    data = dict(form_data)
+
+    signature = data.pop("sign", "")
+    data.pop("sign_type", None)
+
+    if not _verify_signature(data, signature):
+        logger.warning(f"支付宝通知验签失败: {data.get('out_trade_no')}")
+        return False
+
+    out_trade_no = data.get("out_trade_no", "")
+    trade_status = data.get("trade_status", "")
+    trade_no = data.get("trade_no", "")
+
+    logger.info(f"收到支付宝通知: {out_trade_no}, status={trade_status}, trade_no={trade_no}")
+
+    if trade_status not in ("TRADE_SUCCESS", "TRADE_FINISHED"):
+        return True
+
+    order = get_order_by_trade_no(out_trade_no)
+    if not order:
+        logger.warning(f"订单不存在: {out_trade_no}")
+        return True
+
+    if order["status"] == "paid":
+        logger.info(f"订单已处理过: {out_trade_no}")
+        return True
+
+    update_order_status(out_trade_no, "paid", trade_no)
+
+    user_id = order["user_id"]
+    expires_at = (datetime.now(timezone.utc) + timedelta(days=settings.PAYMENT_EXPIRE_DAYS)).isoformat()
+    update_user(user_id, {
+        "is_active": True,
+        "role": "user",
+        "expires_at": expires_at,
+    })
+
+    logger.success(f"用户 {user_id} 支付成功,已激活,有效期至 {expires_at}")
+    return True
+
+
+def get_order_status(out_trade_no: str) -> str | None:
+    """查询订单支付状态"""
+    order = get_order_by_trade_no(out_trade_no)
+    if not order:
+        return None
+    return order["status"]
```
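`handle_payment_notify` is deliberately idempotent: Alipay retries the callback until it receives the plain-text `success`, so a repeated `TRADE_SUCCESS` notification must not extend a membership twice. That guard reduces to "ack anything already handled, act only on the pending-to-paid transition". A minimal in-memory sketch under stated assumptions (`ORDERS` and `ACTIVATIONS` are stand-ins for the orders repository and the user activation; signature verification is omitted):

```python
# In-memory stand-ins for the orders table and the activations performed.
ORDERS = {"VG_1": {"status": "pending", "user_id": "u1"}}
ACTIVATIONS: list[str] = []

def handle_notify(out_trade_no: str, trade_status: str) -> bool:
    """Process one (possibly repeated) notification idempotently; True = ack."""
    if trade_status not in ("TRADE_SUCCESS", "TRADE_FINISHED"):
        return True  # ack non-final statuses without acting
    order = ORDERS.get(out_trade_no)
    if order is None or order["status"] == "paid":
        return True  # unknown or already processed: ack, do nothing
    order["status"] = "paid"          # mark paid before side effects
    ACTIVATIONS.append(order["user_id"])  # activate membership exactly once
    return True
```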
```diff
@@ -13,7 +13,7 @@ router = APIRouter()
 @router.post("")
 async def upload_ref_audio(
     file: UploadFile = File(...),
-    ref_text: str = Form(...),
+    ref_text: str = Form(""),
     user: dict = Depends(get_current_user)
 ):
     """上传参考音频"""
@@ -68,3 +68,21 @@ async def rename_ref_audio(
     except Exception as e:
         logger.error(f"重命名失败: {e}")
         raise HTTPException(status_code=500, detail=f"重命名失败: {str(e)}")
+
+
+@router.post("/{audio_id:path}/retranscribe")
+async def retranscribe_ref_audio(
+    audio_id: str,
+    user: dict = Depends(get_current_user)
+):
+    """重新识别参考音频的文字内容"""
+    try:
+        result = await service.retranscribe_ref_audio(audio_id, user["id"])
+        return success_response(result, message="识别完成")
+    except PermissionError as e:
+        raise HTTPException(status_code=403, detail=str(e))
+    except ValueError as e:
+        raise HTTPException(status_code=400, detail=str(e))
+    except Exception as e:
+        logger.error(f"重新识别失败: {e}")
+        raise HTTPException(status_code=500, detail=f"识别失败: {str(e)}")
```
```diff
@@ -41,16 +41,40 @@ def _get_audio_duration(file_path: str) -> float:
     return 0.0


-def _convert_to_wav(input_path: str, output_path: str) -> bool:
-    """将音频转换为 WAV 格式 (16kHz, mono)"""
+def _find_silence_cut_point(file_path: str, max_duration: float) -> float:
+    """在 max_duration 附近找一个静音点作为截取位置,找不到则回退到 max_duration"""
     try:
-        subprocess.run([
-            'ffmpeg', '-y', '-i', input_path,
-            '-ar', '16000',
-            '-ac', '1',
-            '-acodec', 'pcm_s16le',
-            output_path
-        ], capture_output=True, timeout=60, check=True)
+        # 用 silencedetect 找所有静音段(阈值 -30dB,最短 0.3 秒)
+        result = subprocess.run(
+            ['ffmpeg', '-i', file_path, '-af',
+             'silencedetect=noise=-30dB:d=0.3', '-f', 'null', '-'],
+            capture_output=True, text=True, timeout=30
+        )
+        # 解析 silence_end 时间点
+        import re as _re
+        ends = [float(m) for m in _re.findall(r'silence_end:\s*([\d.]+)', result.stderr)]
+        # 找 max_duration 之前最后一个静音结束点(至少 3 秒)
+        candidates = [t for t in ends if 3.0 <= t <= max_duration]
+        if candidates:
+            cut = candidates[-1]
+            logger.info(f"Found silence cut point at {cut:.1f}s (max={max_duration}s)")
+            return cut
+    except Exception as e:
+        logger.warning(f"Silence detection failed: {e}")
+    return max_duration
+
+
+def _convert_to_wav(input_path: str, output_path: str, max_duration: float = 0) -> bool:
+    """将音频转换为 WAV 格式 (16kHz, mono),可选截取前 max_duration 秒并淡出"""
+    try:
+        cmd = ['ffmpeg', '-y', '-i', input_path]
+        if max_duration > 0:
+            cmd += ['-t', str(max_duration)]
+            # 末尾 0.1 秒淡出,避免截断爆音
+            fade_start = max(0, max_duration - 0.1)
+            cmd += ['-af', f'afade=t=out:st={fade_start}:d=0.1']
+        cmd += ['-ar', '16000', '-ac', '1', '-acodec', 'pcm_s16le', output_path]
+        subprocess.run(cmd, capture_output=True, timeout=60, check=True)
         return True
     except Exception as e:
         logger.error(f"音频转换失败: {e}")
```
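`_find_silence_cut_point` leans on ffmpeg's `silencedetect` filter printing `silence_end: T` markers to stderr; the parsing and "last silence before the cap, but past a minimum" selection is the part worth exercising, and it can be tested without ffmpeg installed. A restatement of that selection logic (the sample stderr fragment below is illustrative, in silencedetect's output format):

```python
import re

def pick_cut_point(ffmpeg_stderr: str, max_duration: float, min_cut: float = 3.0) -> float:
    """Choose the last silence_end in [min_cut, max_duration];
    fall back to max_duration when no silence qualifies."""
    ends = [float(m) for m in re.findall(r'silence_end:\s*([\d.]+)', ffmpeg_stderr)]
    candidates = [t for t in ends if min_cut <= t <= max_duration]
    return candidates[-1] if candidates else max_duration

# Illustrative silencedetect stderr fragment:
SAMPLE = (
    "[silencedetect @ 0x1] silence_start: 2.1\n"
    "[silencedetect @ 0x1] silence_end: 2.6 | silence_duration: 0.5\n"
    "[silencedetect @ 0x1] silence_end: 8.4 | silence_duration: 0.4\n"
    "[silencedetect @ 0x1] silence_end: 12.9 | silence_duration: 0.3\n"
)
```

With a 10-second cap this picks 8.4s (the last silence before the cap), and degrades to a hard cut at the cap when no silence is found.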
```diff
@@ -67,9 +91,6 @@ async def upload_ref_audio(file, ref_text: str, user_id: str) -> dict:
     if ext not in ALLOWED_AUDIO_EXTENSIONS:
         raise ValueError(f"不支持的音频格式: {ext}。支持的格式: {', '.join(ALLOWED_AUDIO_EXTENSIONS)}")

-    if not ref_text or len(ref_text.strip()) < 2:
-        raise ValueError("参考文字不能为空")
-
     # 创建临时文件
     with tempfile.NamedTemporaryFile(delete=False, suffix=ext) as tmp_input:
         content = await file.read()
@@ -86,8 +107,31 @@ async def upload_ref_audio(file, ref_text: str, user_id: str) -> dict:
         duration = _get_audio_duration(tmp_wav_path)
         if duration < 1.0:
             raise ValueError("音频时长过短,至少需要 1 秒")
         if duration > 60.0:
             raise ValueError("音频时长过长,最多 60 秒")

+        # 超过 10 秒自动在静音点截取(CosyVoice 对 3-10 秒效果最好)
+        MAX_REF_DURATION = 10.0
+        if duration > MAX_REF_DURATION:
+            cut_point = _find_silence_cut_point(tmp_wav_path, MAX_REF_DURATION)
+            logger.info(f"Ref audio {duration:.1f}s > {MAX_REF_DURATION}s, trimming at {cut_point:.1f}s")
+            trimmed_path = tmp_input_path + "_trimmed.wav"
+            if not _convert_to_wav(tmp_wav_path, trimmed_path, max_duration=cut_point):
+                raise RuntimeError("音频截取失败")
+            os.unlink(tmp_wav_path)
+            tmp_wav_path = trimmed_path
+            duration = _get_audio_duration(tmp_wav_path)
+
+        # 自动转写参考音频内容
+        try:
+            from app.services.whisper_service import whisper_service
+            transcribed = await whisper_service.transcribe(tmp_wav_path)
+            if transcribed.strip():
+                ref_text = transcribed.strip()
+                logger.info(f"Auto-transcribed ref audio: {ref_text[:50]}...")
+        except Exception as e:
+            logger.warning(f"Auto-transcribe failed: {e}")
+
+        if not ref_text or not ref_text.strip():
+            raise ValueError("无法识别音频内容,请确保音频包含清晰的语音")

         # 检查重名
         existing_files = await storage_service.list_files(BUCKET_REF_AUDIOS, user_id)
@@ -267,3 +311,85 @@ async def rename_ref_audio(audio_id: str, new_name: str, user_id: str) -> dict:
     )

     return {"name": new_name}
+
+
+async def retranscribe_ref_audio(audio_id: str, user_id: str) -> dict:
+    """重新转写参考音频的 ref_text,并截取前 10 秒重新上传(用于迁移旧数据)"""
+    if not audio_id.startswith(f"{user_id}/"):
+        raise PermissionError("无权修改此文件")
+
+    # 下载音频到临时文件
+    audio_url = await storage_service.get_signed_url(BUCKET_REF_AUDIOS, audio_id)
+    tmp_wav_path = None
+    trimmed_path = None
+    try:
+        with tempfile.NamedTemporaryFile(delete=False, suffix=".wav") as tmp:
+            tmp_wav_path = tmp.name
+            timeout = httpx.Timeout(None)
+            async with httpx.AsyncClient(timeout=timeout) as client:
+                async with client.stream("GET", audio_url) as resp:
+                    resp.raise_for_status()
+                    async for chunk in resp.aiter_bytes():
+                        tmp.write(chunk)
+
+        # 超过 10 秒则截取前 10 秒并重新上传音频
+        MAX_REF_DURATION = 10.0
+        duration = _get_audio_duration(tmp_wav_path)
+        transcribe_path = tmp_wav_path
+        need_reupload = False
+
+        if duration > MAX_REF_DURATION:
+            cut_point = _find_silence_cut_point(tmp_wav_path, MAX_REF_DURATION)
+            logger.info(f"Retranscribe: trimming {audio_id} from {duration:.1f}s at {cut_point:.1f}s")
+            trimmed_path = tmp_wav_path + "_trimmed.wav"
+            if _convert_to_wav(tmp_wav_path, trimmed_path, max_duration=cut_point):
+                transcribe_path = trimmed_path
+                duration = _get_audio_duration(trimmed_path)
+                need_reupload = True
+
+        # Whisper 转写
+        from app.services.whisper_service import whisper_service
+        transcribed = await whisper_service.transcribe(transcribe_path)
+        if not transcribed or not transcribed.strip():
+            raise ValueError("无法识别音频内容")
+
+        ref_text = transcribed.strip()
+        logger.info(f"Re-transcribed ref audio {audio_id}: {ref_text[:50]}...")
+
+        # 截取过的音频重新上传覆盖原文件
+        if need_reupload and trimmed_path:
+            with open(trimmed_path, "rb") as f:
+                await storage_service.upload_file(
+                    bucket=BUCKET_REF_AUDIOS, path=audio_id,
+                    file_data=f.read(), content_type="audio/wav",
+                )
+            logger.info(f"Re-uploaded trimmed audio: {audio_id} ({duration:.1f}s)")
+
+        # 更新 metadata
+        metadata_path = audio_id.replace(".wav", ".json")
+        try:
+            meta_url = await storage_service.get_signed_url(BUCKET_REF_AUDIOS, metadata_path)
+            async with httpx.AsyncClient(timeout=5.0) as client:
+                resp = await client.get(meta_url)
+                if resp.status_code == 200:
+                    metadata = resp.json()
+                else:
+                    raise Exception(f"status {resp.status_code}")
+        except Exception:
+            metadata = {}
+
+        metadata["ref_text"] = ref_text
+        metadata["duration_sec"] = duration
+        await storage_service.upload_file(
+            bucket=BUCKET_REF_AUDIOS,
+            path=metadata_path,
+            file_data=json.dumps(metadata, ensure_ascii=False).encode('utf-8'),
+            content_type="application/json"
+        )
+
+        return {"ref_text": ref_text, "duration_sec": duration}
+    finally:
+        if tmp_wav_path and os.path.exists(tmp_wav_path):
+            os.unlink(tmp_wav_path)
+        if trimmed_path and os.path.exists(trimmed_path):
+            os.unlink(trimmed_path)
```
```diff
@@ -1,5 +1,5 @@
 from pydantic import BaseModel
-from typing import Optional, List
+from typing import Optional, List, Literal


 class CustomAssignment(BaseModel):
@@ -7,6 +7,7 @@ class CustomAssignment(BaseModel):
     start: float  # 音频时间轴起点
     end: float  # 音频时间轴终点
     source_start: float = 0.0  # 源视频截取起点
+    source_end: Optional[float] = None  # 源视频截取终点(可选)


 class GenerateRequest(BaseModel):
@@ -20,6 +21,8 @@ class GenerateRequest(BaseModel):
     language: str = "zh-CN"
     generated_audio_id: Optional[str] = None  # 预生成配音 ID(存在时跳过内联 TTS)
     title: Optional[str] = None
+    title_display_mode: Literal["short", "persistent"] = "short"
+    title_duration: float = 4.0
     enable_subtitles: bool = True
     subtitle_style_id: Optional[str] = None
     title_style_id: Optional[str] = None
@@ -30,3 +33,4 @@ class GenerateRequest(BaseModel):
     bgm_id: Optional[str] = None
     bgm_volume: Optional[float] = 0.2
     custom_assignments: Optional[List[CustomAssignment]] = None
+    output_aspect_ratio: Literal["9:16", "16:9"] = "9:16"
```
@@ -29,7 +29,7 @@ def _locale_to_whisper_lang(locale: str) -> str:
    return locale.split("-")[0] if "-" in locale else locale


def _locale_to_qwen_lang(locale: str) -> str:
def _locale_to_tts_lang(locale: str) -> str:
    """'zh-CN' → 'Chinese', 'en-US' → 'English', 其他 → 'Auto'"""
    mapping = {"zh": "Chinese", "en": "English"}
    return mapping.get(locale.split("-")[0], "Auto")
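Restated as a standalone sketch (hypothetical free function; the pipeline's own name is `_locale_to_tts_lang`), the mapping takes only the primary language subtag and sends anything unmapped to `Auto`, letting the TTS service detect the language itself:

```python
def locale_to_tts_lang(locale: str) -> str:
    # Same mapping as _locale_to_tts_lang above: primary subtag only,
    # "Auto" for any locale outside the explicit table.
    mapping = {"zh": "Chinese", "en": "English"}
    return mapping.get(locale.split("-")[0], "Auto")
```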
@@ -174,17 +174,27 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:

    # ── 确定素材列表 ──
    material_paths: List[str] = []
    if req.material_paths and len(req.material_paths) > 1:
    if req.custom_assignments and len(req.custom_assignments) > 1:
        material_paths = [a.material_path for a in req.custom_assignments if a.material_path]
    elif req.material_paths and len(req.material_paths) > 1:
        material_paths = req.material_paths
    else:
        material_paths = [req.material_path]

    is_multi = len(material_paths) > 1
    target_resolution = (1080, 1920) if req.output_aspect_ratio == "9:16" else (1920, 1080)

    logger.info(
        f"[Render] 输出画面比例: {req.output_aspect_ratio}, "
        f"目标分辨率: {target_resolution[0]}x{target_resolution[1]}"
    )

    _update_task(task_id, status="processing", progress=5, message="正在下载素材...")

    temp_dir = settings.UPLOAD_DIR / "temp"
    temp_dir.mkdir(parents=True, exist_ok=True)
    video = VideoService()
    input_material_path: Optional[Path] = None

    # 单素材模式:下载主素材
    if not is_multi:
@@ -192,6 +202,16 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
        temp_files.append(input_material_path)
        await _download_material(material_paths[0], input_material_path)

        # 归一化旋转元数据(如 iPhone MOV 1920x1080 + rotation=-90)
        normalized_input_path = temp_dir / f"{task_id}_input_norm.mp4"
        normalized_result = video.normalize_orientation(
            str(input_material_path),
            str(normalized_input_path),
        )
        if normalized_result != str(input_material_path):
            temp_files.append(normalized_input_path)
            input_material_path = normalized_input_path

    _update_task(task_id, message="正在生成语音...", progress=10)

    audio_path = temp_dir / f"{task_id}_audio.wav"
@@ -218,8 +238,10 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
                if resp.status_code == 200:
                    meta = resp.json()
                    req.language = meta.get("language", req.language)
                    if not req.text.strip():
                        req.text = meta.get("text", req.text)
                    # 无条件用配音元数据覆盖文案,确保字幕与配音语言一致
                    meta_text = meta.get("text", "")
                    if meta_text:
                        req.text = meta_text
        except Exception as e:
            logger.warning(f"读取配音元数据失败: {e}")

@@ -238,13 +260,13 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
        )
        await _download_material(ref_audio_url, ref_audio_local)

        _update_task(task_id, message="正在克隆声音 (Qwen3-TTS)...")
        _update_task(task_id, message="正在克隆声音...")
        await voice_clone_service.generate_audio(
            text=req.text,
            ref_audio_path=str(ref_audio_local),
            ref_text=req.ref_text,
            output_path=str(audio_path),
            language=_locale_to_qwen_lang(req.language)
            language=_locale_to_tts_lang(req.language)
        )
    else:
        _update_task(task_id, message="正在生成语音 (EdgeTTS)...")
@@ -258,7 +280,6 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
    lipsync_video_path = temp_dir / f"{task_id}_lipsync.mp4"
    temp_files.append(lipsync_video_path)

    video = VideoService()
    captions_path = None

    if is_multi:
@@ -267,7 +288,7 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
        # ══════════════════════════════════════
        _update_task(task_id, progress=12, message="正在分配素材...")

        if req.custom_assignments:
        if req.custom_assignments and len(req.custom_assignments) == len(material_paths):
            # 用户自定义分配,跳过 Whisper 均分
            assignments = [
                {
@@ -275,6 +296,7 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
                    "start": a.start,
                    "end": a.end,
                    "source_start": a.source_start,
                    "source_end": a.source_end,
                    "index": i,
                }
                for i, a in enumerate(req.custom_assignments)
@@ -290,6 +312,7 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
                    text=req.text,
                    output_path=str(captions_path),
                    language=_locale_to_whisper_lang(req.language),
                    original_text=req.text,
                )
                print(f"[Pipeline] Whisper alignment completed (custom assignments)")
            except Exception as e:
@@ -297,6 +320,49 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
                captions_path = None
            else:
                captions_path = None
        elif req.custom_assignments:
            logger.warning(
                f"[MultiMat] custom_assignments 数量({len(req.custom_assignments)})"
                f" 与素材数量({len(material_paths)})不一致,回退自动分配"
            )

            # 原有逻辑:Whisper → _split_equal
            _update_task(task_id, message="正在生成字幕 (Whisper)...")

            captions_path = temp_dir / f"{task_id}_captions.json"
            temp_files.append(captions_path)

            try:
                captions_data = await whisper_service.align(
                    audio_path=str(audio_path),
                    text=req.text,
                    output_path=str(captions_path),
                    language=_locale_to_whisper_lang(req.language),
                    original_text=req.text,
                )
                print(f"[Pipeline] Whisper alignment completed (multi-material)")
            except Exception as e:
                logger.warning(f"Whisper alignment failed: {e}")
                captions_data = None
                captions_path = None

            _update_task(task_id, progress=15, message="正在分配素材...")

            if captions_data and captions_data.get("segments"):
                assignments = _split_equal(captions_data["segments"], material_paths)
            else:
                # Whisper 失败 → 按时长均分(不依赖字符对齐)
                logger.warning("[MultiMat] Whisper 无数据,按时长均分")
                audio_dur = video._get_duration(str(audio_path))
                if audio_dur <= 0:
                    audio_dur = 30.0  # 安全兜底
                seg_dur = audio_dur / len(material_paths)
                assignments = [
                    {"material_path": material_paths[i], "start": i * seg_dur,
                     "end": (i + 1) * seg_dur, "index": i}
                    for i in range(len(material_paths))
                ]

        else:
            # 原有逻辑:Whisper → _split_equal
            _update_task(task_id, message="正在生成字幕 (Whisper)...")
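The duration-based fallback in this hunk (used when Whisper alignment returns no segments) can be sketched in isolation. `split_equal_duration` is a hypothetical name; the pipeline inlines this logic:

```python
def split_equal_duration(material_paths: list, audio_dur: float, fallback: float = 30.0) -> list:
    # Mirrors the fallback above: when duration probing fails, assume a
    # safe default, then divide the audio timeline evenly per material.
    if audio_dur <= 0:
        audio_dur = fallback
    seg_dur = audio_dur / len(material_paths)
    return [
        {"material_path": p, "start": i * seg_dur,
         "end": (i + 1) * seg_dur, "index": i}
        for i, p in enumerate(material_paths)
    ]
```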
@@ -310,6 +376,7 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
                    text=req.text,
                    output_path=str(captions_path),
                    language=_locale_to_whisper_lang(req.language),
                    original_text=req.text,
                )
                print(f"[Pipeline] Whisper alignment completed (multi-material)")
            except Exception as e:
@@ -356,12 +423,23 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
            material_local = temp_dir / f"{task_id}_material_{i}.mp4"
            temp_files.append(material_local)
            await _download_material(assignment["material_path"], material_local)

            # 归一化旋转元数据,确保分辨率判断与后续推理一致
            normalized_material = temp_dir / f"{task_id}_material_{i}_norm.mp4"
            normalized_result = video.normalize_orientation(
                str(material_local),
                str(normalized_material),
            )
            if normalized_result != str(material_local):
                temp_files.append(normalized_material)
                material_local = normalized_material

            material_locals.append(material_local)
            resolutions.append(video.get_resolution(str(material_local)))

        # 分辨率不一致时,统一到第一个素材的分辨率
        base_res = resolutions[0] if resolutions else (0, 0)
        need_scale = any(r != base_res for r in resolutions) and base_res[0] > 0
        # 按用户选择的画面比例统一分辨率
        base_res = target_resolution
        need_scale = any(r != base_res for r in resolutions)
        if need_scale:
            logger.info(f"[MultiMat] 素材分辨率不一致,统一到 {base_res[0]}x{base_res[1]}")
@@ -381,8 +459,11 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
            temp_files.append(prepared_path)
            video.prepare_segment(
                str(material_locals[i]), seg_dur, str(prepared_path),
                target_resolution=base_res if need_scale else None,
                # 多素材拼接前统一重编码为同分辨率/同编码,避免 concat 仅保留首段
                target_resolution=base_res,
                source_start=assignment.get("source_start", 0.0),
                source_end=assignment.get("source_end"),
                target_fps=25,
            )
            prepared_segments.append(prepared_path)

@@ -392,7 +473,8 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
        temp_files.append(concat_path)
        video.concat_videos(
            [str(p) for p in prepared_segments],
            str(concat_path)
            str(concat_path),
            target_fps=25,
        )

        # ── 第三步:一次 LatentSync 推理 ──
@@ -425,23 +507,31 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
        # 单素材流水线(原有逻辑)
        # ══════════════════════════════════════

        # 单素材 + source_start:先截取片段
        if input_material_path is None:
            raise RuntimeError("单素材流程缺少输入素材")

        # 单素材:按用户选择画面比例统一到目标分辨率,并应用 source_start
        single_source_start = 0.0
        single_source_end = None
        if req.custom_assignments and len(req.custom_assignments) == 1:
            single_source_start = req.custom_assignments[0].source_start
            single_source_end = req.custom_assignments[0].source_end

        if single_source_start > 0:
            _update_task(task_id, progress=20, message="正在截取素材片段...")
            audio_dur = video._get_duration(str(audio_path))
            if audio_dur <= 0:
                audio_dur = 30.0
            trimmed_path = temp_dir / f"{task_id}_trimmed.mp4"
            temp_files.append(trimmed_path)
            video.prepare_segment(
                str(input_material_path), audio_dur, str(trimmed_path),
                source_start=single_source_start,
            )
            input_material_path = trimmed_path
        _update_task(task_id, progress=20, message="正在准备素材片段...")
        audio_dur = video._get_duration(str(audio_path))
        if audio_dur <= 0:
            audio_dur = 30.0
        prepared_single_path = temp_dir / f"{task_id}_prepared_single.mp4"
        temp_files.append(prepared_single_path)
        video.prepare_segment(
            str(input_material_path),
            audio_dur,
            str(prepared_single_path),
            target_resolution=target_resolution,
            source_start=single_source_start,
            source_end=single_source_end,
        )
        input_material_path = prepared_single_path

        _update_task(task_id, progress=25)
        _update_task(task_id, message="正在合成唇形 (LatentSync)...", progress=30)
@@ -476,6 +566,7 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
            text=req.text,
            output_path=str(captions_path),
            language=_locale_to_whisper_lang(req.language),
            original_text=req.text,
        )
        print(f"[Pipeline] Whisper alignment completed")
    except Exception as e:
@@ -566,12 +657,20 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
        mapped = 87 + int(percent * 0.08)
        _update_task(task_id, progress=mapped)

    title_display_mode = (
        req.title_display_mode
        if req.title_display_mode in ("short", "persistent")
        else "short"
    )
    title_duration = max(0.5, min(float(req.title_duration or 4.0), 30.0))

    await remotion_service.render(
        video_path=str(composed_video_path),
        output_path=str(final_output_local_path),
        captions_path=str(captions_path) if captions_path else None,
        title=req.title,
        title_duration=3.0,
        title_duration=title_duration,
        title_display_mode=title_display_mode,
        fps=25,
        enable_subtitles=req.enable_subtitles,
        subtitle_style=subtitle_style,
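The title-duration handling in this hunk reduces to a small clamp, sketched here as a hypothetical standalone function: an unset or zero value falls back to 4 s, and the result is bounded to [0.5, 30] seconds before being passed to Remotion.

```python
def clamp_title_duration(value, default: float = 4.0, lo: float = 0.5, hi: float = 30.0) -> float:
    # Mirrors the pipeline above: `value or default` treats None/0 as
    # unset, then min/max bound the final duration.
    return max(lo, min(float(value or default), hi))
```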

34
backend/app/repositories/orders.py
Normal file
@@ -0,0 +1,34 @@
"""
订单数据访问层
"""
from datetime import datetime, timezone
from typing import Any, Dict, Optional, cast

from app.core.supabase import get_supabase


def create_order(user_id: str, out_trade_no: str, amount: float) -> Dict[str, Any]:
    supabase = get_supabase()
    result = supabase.table("orders").insert({
        "user_id": user_id,
        "out_trade_no": out_trade_no,
        "amount": amount,
        "status": "pending",
    }).execute()
    return cast(Dict[str, Any], (result.data or [{}])[0])


def get_order_by_trade_no(out_trade_no: str) -> Optional[Dict[str, Any]]:
    supabase = get_supabase()
    result = supabase.table("orders").select("*").eq("out_trade_no", out_trade_no).single().execute()
    return cast(Optional[Dict[str, Any]], result.data or None)


def update_order_status(out_trade_no: str, status: str, trade_no: str | None = None) -> None:
    supabase = get_supabase()
    payload: Dict[str, Any] = {"status": status}
    if trade_no:
        payload["trade_no"] = trade_no
    if status == "paid":
        payload["paid_at"] = datetime.now(timezone.utc).isoformat()
    supabase.table("orders").update(payload).eq("out_trade_no", out_trade_no).execute()
@@ -1,3 +1,4 @@
from datetime import datetime, timezone
from typing import Any, Dict, List, Optional, cast

from app.core.supabase import get_supabase
@@ -37,3 +38,33 @@ def update_user(user_id: str, payload: Dict[str, Any]) -> List[Dict[str, Any]]:
    supabase = get_supabase()
    result = supabase.table("users").update(payload).eq("id", user_id).execute()
    return cast(List[Dict[str, Any]], result.data or [])


def _parse_expires_at(expires_at: Any) -> Optional[datetime]:
    try:
        expires_at_dt = datetime.fromisoformat(str(expires_at).replace("Z", "+00:00"))
    except Exception:
        return None

    if expires_at_dt.tzinfo is None:
        expires_at_dt = expires_at_dt.replace(tzinfo=timezone.utc)
    return expires_at_dt.astimezone(timezone.utc)


def deactivate_user_if_expired(user: Dict[str, Any]) -> bool:
    expires_at = user.get("expires_at")
    if not expires_at:
        return False

    expires_at_dt = _parse_expires_at(expires_at)
    if not expires_at_dt:
        return False

    if datetime.now(timezone.utc) <= expires_at_dt:
        return False

    user_id = user.get("id")
    if user.get("is_active") and user_id:
        update_user(cast(str, user_id), {"is_active": False})

    return True
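The expiry check above hinges on `_parse_expires_at`. A standalone sketch of the same parsing rules (hypothetical name, no Supabase dependency): a trailing `Z` is normalized to `+00:00`, naive timestamps are assumed UTC, and anything unparsable yields `None` so the caller treats the user as not expired.

```python
from datetime import datetime, timezone

def parse_expires_at(expires_at):
    # Mirrors _parse_expires_at above: ISO-8601 with optional trailing
    # "Z", naive datetimes treated as UTC, None on parse failure.
    try:
        dt = datetime.fromisoformat(str(expires_at).replace("Z", "+00:00"))
    except Exception:
        return None
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone.utc)
```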

@@ -7,6 +7,7 @@ import asyncio
import json
import os
import subprocess
from collections.abc import Callable
from pathlib import Path
from typing import Optional
from loguru import logger
@@ -29,12 +30,13 @@ class RemotionService:
        output_path: str,
        captions_path: Optional[str] = None,
        title: Optional[str] = None,
        title_duration: float = 3.0,
        title_duration: float = 4.0,
        title_display_mode: str = "short",
        fps: int = 25,
        enable_subtitles: bool = True,
        subtitle_style: Optional[dict] = None,
        title_style: Optional[dict] = None,
        on_progress: Optional[callable] = None
        on_progress: Optional[Callable[[int], None]] = None
    ) -> str:
        """
        使用 Remotion 渲染视频(添加字幕和标题)
@@ -45,6 +47,7 @@ class RemotionService:
            captions_path: 字幕 JSON 文件路径(Whisper 生成)
            title: 视频标题(可选)
            title_duration: 标题显示时长(秒)
            title_display_mode: 标题显示模式(short/persistent)
            fps: 帧率
            enable_subtitles: 是否启用字幕
            on_progress: 进度回调函数
@@ -75,6 +78,7 @@ class RemotionService:
        if title:
            cmd.extend(["--title", title])
            cmd.extend(["--titleDuration", str(title_duration)])
            cmd.extend(["--titleDisplayMode", title_display_mode])

        if subtitle_style:
            cmd.extend(["--subtitleStyle", json.dumps(subtitle_style, ensure_ascii=False)])
@@ -95,8 +99,12 @@ class RemotionService:
            bufsize=1
        )

        if process.stdout is None:
            raise RuntimeError("Remotion process stdout is unavailable")
        stdout = process.stdout

        output_lines = []
        for line in iter(process.stdout.readline, ''):
        for line in iter(stdout.readline, ''):
            line = line.strip()
            if line:
                output_lines.append(line)
@@ -9,9 +9,110 @@ from pathlib import Path
from loguru import logger
from typing import Optional

class VideoService:
    def __init__(self):
        pass
class VideoService:
    def __init__(self):
        pass

    def get_video_metadata(self, file_path: str) -> dict:
        """获取视频元信息(含旋转角与有效显示分辨率)"""
        cmd = [
            "ffprobe", "-v", "error",
            "-select_streams", "v:0",
            "-show_entries", "stream=width,height:stream_side_data=rotation",
            "-of", "json",
            file_path,
        ]
        default_info = {
            "width": 0,
            "height": 0,
            "rotation": 0,
            "effective_width": 0,
            "effective_height": 0,
        }

        try:
            result = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
            if result.returncode != 0:
                return default_info

            payload = json.loads(result.stdout or "{}")
            streams = payload.get("streams") or []
            if not streams:
                return default_info

            stream = streams[0]
            width = int(stream.get("width") or 0)
            height = int(stream.get("height") or 0)

            rotation = 0
            for side_data in stream.get("side_data_list") or []:
                if not isinstance(side_data, dict):
                    continue
                raw_rotation = side_data.get("rotation")
                if raw_rotation is None:
                    continue
                try:
                    rotation = int(round(float(str(raw_rotation))))
                except Exception:
                    rotation = 0
                break

            norm_rotation = rotation % 360
            if norm_rotation > 180:
                norm_rotation -= 360
            swap_wh = abs(norm_rotation) == 90

            effective_width = height if swap_wh else width
            effective_height = width if swap_wh else height

            return {
                "width": width,
                "height": height,
                "rotation": norm_rotation,
                "effective_width": effective_width,
                "effective_height": effective_height,
            }
        except Exception as e:
            logger.warning(f"获取视频元信息失败: {e}")
            return default_info
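The rotation arithmetic above can be isolated into a pure function (hypothetical name, for illustration): normalize the ffprobe rotation into (-180, 180] and swap width/height for ±90° so later resolution checks see the displayed orientation, not the coded one.

```python
def effective_dimensions(width: int, height: int, rotation: float) -> tuple:
    # Same normalization as get_video_metadata above: fold rotation
    # into (-180, 180], then swap W/H when the video is quarter-turned.
    r = int(round(float(rotation))) % 360
    if r > 180:
        r -= 360
    if abs(r) == 90:
        return height, width, r  # portrait-rotated: swap W/H
    return width, height, r
```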

    def normalize_orientation(self, video_path: str, output_path: str) -> str:
        """将带旋转元数据的视频转为物理方向,避免后续流程忽略 rotation。"""
        info = self.get_video_metadata(video_path)
        rotation = int(info.get("rotation") or 0)
        if rotation == 0:
            return video_path

        Path(output_path).parent.mkdir(parents=True, exist_ok=True)
        logger.info(
            f"检测到旋转元数据 rotation={rotation},归一化方向: "
            f"{info.get('effective_width', 0)}x{info.get('effective_height', 0)}"
        )

        cmd = [
            "ffmpeg", "-y",
            "-i", video_path,
            "-map", "0:v:0",
            "-map", "0:a?",
            "-c:v", "libx264",
            "-preset", "fast",
            "-crf", "18",
            "-c:a", "copy",
            "-movflags", "+faststart",
            output_path,
        ]

        if self._run_ffmpeg(cmd):
            normalized = self.get_video_metadata(output_path)
            logger.info(
                "视频方向归一化完成: "
                f"coded={normalized.get('width', 0)}x{normalized.get('height', 0)}, "
                f"rotation={normalized.get('rotation', 0)}"
            )
            return output_path

        logger.warning("视频方向归一化失败,回退使用原视频")
        return video_path
    def _run_ffmpeg(self, cmd: list) -> bool:
        cmd_str = ' '.join(shlex.quote(str(c)) for c in cmd)
@@ -139,8 +240,8 @@ class VideoService:
        else:
            raise RuntimeError("FFmpeg composition failed")

    def concat_videos(self, video_paths: list, output_path: str) -> str:
        """使用 FFmpeg concat demuxer 拼接多个视频片段"""
    def concat_videos(self, video_paths: list, output_path: str, target_fps: int = 25) -> str:
        """使用 FFmpeg concat demuxer 拼接多个视频片段"""
        if not video_paths:
            raise ValueError("No video segments to concat")

@@ -152,14 +253,22 @@ class VideoService:
            for vp in video_paths:
                f.write(f"file '{vp}'\n")

        cmd = [
            "ffmpeg", "-y",
            "-f", "concat",
            "-safe", "0",
            "-i", str(list_path),
            "-c", "copy",
            output_path,
        ]
        cmd = [
            "ffmpeg", "-y",
            "-f", "concat",
            "-safe", "0",
            "-fflags", "+genpts",
            "-i", str(list_path),
            "-an",
            "-vsync", "cfr",
            "-r", str(target_fps),
            "-c:v", "libx264",
            "-preset", "fast",
            "-crf", "18",
            "-pix_fmt", "yuv420p",
            "-movflags", "+faststart",
            output_path,
        ]

        try:
            if self._run_ffmpeg(cmd):
@@ -193,54 +302,60 @@ class VideoService:
                return output_path
        raise RuntimeError(f"FFmpeg audio split failed: {start}-{end}")
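The stream-copy concat above was replaced because `-c copy` with the concat demuxer is only safe when every segment shares codec, resolution and timebase; with mixed inputs the result can effectively keep only the first segment. A sketch of the command construction (hypothetical builder function, mirroring the new `concat_videos`):

```python
def build_concat_cmd(list_path: str, output_path: str, target_fps: int = 25) -> list:
    # Re-encode while concatenating: regenerate PTS, force constant
    # frame rate, and drop audio (the voiced track is muxed later).
    return [
        "ffmpeg", "-y",
        "-f", "concat", "-safe", "0",
        "-fflags", "+genpts",
        "-i", list_path,
        "-an",
        "-vsync", "cfr", "-r", str(target_fps),
        "-c:v", "libx264", "-preset", "fast", "-crf", "18",
        "-pix_fmt", "yuv420p",
        "-movflags", "+faststart",
        output_path,
    ]
```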

    def get_resolution(self, file_path: str) -> tuple:
        """获取视频分辨率,返回 (width, height)"""
        cmd = [
            'ffprobe', '-v', 'error',
            '-select_streams', 'v:0',
            '-show_entries', 'stream=width,height',
            '-of', 'csv=p=0',
            file_path
        ]
        try:
            result = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
            parts = result.stdout.strip().split(',')
            return (int(parts[0]), int(parts[1]))
        except Exception:
            return (0, 0)
    def get_resolution(self, file_path: str) -> tuple[int, int]:
        """获取视频有效显示分辨率(考虑旋转元数据)。"""
        info = self.get_video_metadata(file_path)
        return (
            int(info.get("effective_width") or 0),
            int(info.get("effective_height") or 0),
        )

    def prepare_segment(self, video_path: str, target_duration: float, output_path: str,
                        target_resolution: tuple = None, source_start: float = 0.0) -> str:
        """将素材视频裁剪或循环到指定时长(无音频)。
        target_resolution: (width, height) 如需统一分辨率则传入,否则保持原分辨率。
        source_start: 源视频截取起点(秒),默认 0。
        """
        Path(output_path).parent.mkdir(parents=True, exist_ok=True)

        video_dur = self._get_duration(video_path)
        if video_dur <= 0:
            video_dur = target_duration

        # 可用时长 = 从 source_start 到视频结尾
        available = max(video_dur - source_start, 0.1)
        needs_loop = target_duration > available
        needs_scale = target_resolution is not None

        # 当需要循环且有 source_start 时,先裁剪出片段,再循环裁剪后的文件
        # 避免 stream_loop 循环整个视频(而不是从 source_start 开始的片段)
        actual_input = video_path
        trim_temp = None
        if needs_loop and source_start > 0:
            trim_temp = str(Path(output_path).parent / (Path(output_path).stem + "_trim_tmp.mp4"))
            trim_cmd = [
                "ffmpeg", "-y",
                "-ss", str(source_start),
                "-i", video_path,
                "-t", str(available),
                "-an",
                "-c:v", "libx264", "-preset", "fast", "-crf", "18",
                trim_temp,
            ]
    def prepare_segment(self, video_path: str, target_duration: float, output_path: str,
                        target_resolution: Optional[tuple] = None, source_start: float = 0.0,
                        source_end: Optional[float] = None, target_fps: Optional[int] = None) -> str:
        """将素材视频裁剪或循环到指定时长(无音频)。
        target_resolution: (width, height) 如需统一分辨率则传入,否则保持原分辨率。
        source_start: 源视频截取起点(秒),默认 0。
        source_end: 源视频截取终点(秒),默认到素材结尾。
        target_fps: 输出帧率(可选),用于多素材拼接前统一时间基。
        """
        Path(output_path).parent.mkdir(parents=True, exist_ok=True)

        video_dur = self._get_duration(video_path)
        if video_dur <= 0:
            video_dur = target_duration

        clip_end = video_dur
        if source_end is not None:
            try:
                source_end_value = float(source_end)
                if source_end_value > source_start:
                    clip_end = min(source_end_value, video_dur)
            except Exception:
                pass

        # 可用时长 = 从 source_start 到视频结尾
        available = max(clip_end - source_start, 0.1)
        needs_loop = target_duration > available
        needs_scale = target_resolution is not None
        needs_fps = bool(target_fps and target_fps > 0)
        has_source_end = clip_end < video_dur

        # 当需要循环且存在截取范围时,先裁剪出片段,再循环裁剪后的文件
        # 避免 stream_loop 循环整个视频(而不是截取后的片段)
        actual_input = video_path
        trim_temp = None
        if needs_loop and (source_start > 0 or has_source_end):
            trim_temp = str(Path(output_path).parent / (Path(output_path).stem + "_trim_tmp.mp4"))
            trim_cmd = [
                "ffmpeg", "-y",
                "-ss", str(source_start),
                "-i", video_path,
                "-t", str(available),
                "-an",
                "-c:v", "libx264", "-preset", "fast", "-crf", "18",
                trim_temp,
            ]
            if not self._run_ffmpeg(trim_cmd):
                raise RuntimeError(f"FFmpeg trim for loop failed: {video_path}")
            actual_input = trim_temp
@@ -253,19 +368,27 @@ class VideoService:
        cmd = ["ffmpeg", "-y"]
        if needs_loop:
            cmd.extend(["-stream_loop", str(loop_count)])
        if source_start > 0:
            cmd.extend(["-ss", str(source_start)])
        cmd.extend(["-i", actual_input, "-t", str(target_duration), "-an"])

        if needs_scale:
            w, h = target_resolution
            cmd.extend(["-vf", f"scale={w}:{h}:force_original_aspect_ratio=decrease,pad={w}:{h}:(ow-iw)/2:(oh-ih)/2"])

        # 需要循环、缩放或指定起点时必须重编码,否则用 stream copy 保持原画质
        if needs_loop or needs_scale or source_start > 0:
            cmd.extend(["-c:v", "libx264", "-preset", "fast", "-crf", "18"])
        else:
            cmd.extend(["-c:v", "copy"])
        if source_start > 0:
            cmd.extend(["-ss", str(source_start)])
        cmd.extend(["-i", actual_input, "-t", str(target_duration), "-an"])

        filters = []
        if needs_fps:
            filters.append(f"fps={int(target_fps)}")
        if needs_scale:
            w, h = target_resolution
            filters.append(f"scale={w}:{h}:force_original_aspect_ratio=decrease,pad={w}:{h}:(ow-iw)/2:(oh-ih)/2")

        if filters:
            cmd.extend(["-vf", ",".join(filters)])
        if needs_fps:
            cmd.extend(["-vsync", "cfr", "-r", str(int(target_fps))])

        # 需要循环、缩放或指定起点时必须重编码,否则用 stream copy 保持原画质
        if needs_loop or needs_scale or source_start > 0 or has_source_end or needs_fps:
            cmd.extend(["-c:v", "libx264", "-preset", "fast", "-crf", "18"])
        else:
            cmd.extend(["-c:v", "copy"])

        cmd.append(output_path)
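The source-range handling in the new `prepare_segment` reduces to a small window computation, sketched here as a hypothetical standalone function: clamp `source_end` into the video's duration, derive the usable span (never below 0.1 s), and decide whether looping is needed to fill the target duration.

```python
def clip_window(video_dur: float, target_duration: float,
                source_start: float = 0.0, source_end=None) -> tuple:
    # Mirrors the clip_end/available/needs_loop logic above.
    clip_end = video_dur
    if source_end is not None and float(source_end) > source_start:
        clip_end = min(float(source_end), video_dur)
    available = max(clip_end - source_start, 0.1)
    return available, target_duration > available
```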

@@ -1,37 +1,104 @@
"""
声音克隆服务
通过 HTTP 调用 Qwen3-TTS 独立服务 (端口 8009)
通过 HTTP 调用 CosyVoice 3.0 独立服务 (端口 8010)
"""
import httpx
import asyncio
from pathlib import Path
from typing import Optional

import httpx
from loguru import logger

from app.core.config import settings

# Qwen3-TTS 服务地址
QWEN_TTS_URL = "http://localhost:8009"
# CosyVoice 3.0 服务地址
VOICE_CLONE_URL = "http://localhost:8010"


class VoiceCloneService:
    """声音克隆服务 - 调用 Qwen3-TTS HTTP API"""
    """声音克隆服务 - 调用 CosyVoice 3.0 HTTP API"""

    def __init__(self):
        self.base_url = QWEN_TTS_URL
        self.base_url = VOICE_CLONE_URL
        # 健康状态缓存
        self._health_cache: Optional[dict] = None
        self._health_cache_time: float = 0
        # GPU 并发锁 (Serial Queue)
        self._lock = asyncio.Lock()
    async def _generate_once(
        self,
        *,
        text: str,
        ref_audio_data: bytes,
        ref_text: str,
        language: str,
        speed: float = 1.0,
        max_retries: int = 4,
    ) -> bytes:
        timeout = httpx.Timeout(240.0)

        for attempt in range(max_retries):
            try:
                async with httpx.AsyncClient(timeout=timeout) as client:
                    response = await client.post(
                        f"{self.base_url}/generate",
                        files={"ref_audio": ("ref.wav", ref_audio_data, "audio/wav")},
                        data={
                            "text": text,
                            "ref_text": ref_text,
                            "language": language,
                            "speed": str(speed),
                        },
                    )

                retryable = False
                reason = ""

                if response.status_code in (429, 502, 503, 504):
                    retryable = True
                    reason = f"HTTP {response.status_code}"
                elif response.status_code == 500 and (
                    "生成超时" in response.text or "timeout" in response.text.lower()
                ):
                    retryable = True
                    reason = "upstream timeout"

                if retryable and attempt < max_retries - 1:
                    wait = 8 * (attempt + 1)
                    logger.warning(
                        f"Voice clone retryable error ({reason}), retrying in {wait}s "
                        f"(attempt {attempt + 1}/{max_retries})"
                    )
                    await asyncio.sleep(wait)
                    continue

                response.raise_for_status()
                return response.content

            except httpx.HTTPStatusError as e:
                logger.error(f"Voice clone API error: {e.response.status_code} - {e.response.text}")
                raise RuntimeError(f"声音克隆服务错误: {e.response.text}")
            except httpx.RequestError as e:
                if attempt < max_retries - 1:
                    wait = 6 * (attempt + 1)
                    logger.warning(
                        f"Voice clone connection error: {e}; retrying in {wait}s "
                        f"(attempt {attempt + 1}/{max_retries})"
                    )
                    await asyncio.sleep(wait)
                    continue
                logger.error(f"Voice clone connection error: {e}")
                raise RuntimeError("无法连接声音克隆服务,请检查服务是否启动")

        raise RuntimeError("声音克隆服务繁忙,请稍后重试")
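The retry policy in `_generate_once` uses linear backoff; the wait schedule can be sketched on its own (hypothetical helper, for illustration): retryable HTTP errors wait 8 s per completed attempt, connection errors use a 6 s base, and no wait follows the final failing attempt.

```python
def retry_waits(max_retries: int = 4, base: float = 8.0) -> list:
    # Linear backoff as above: base * (attempt + 1) seconds between
    # attempts; the last attempt fails without sleeping.
    return [base * (attempt + 1) for attempt in range(max_retries - 1)]
```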

    async def generate_audio(
        self,
        text: str,
        ref_audio_path: str,
        ref_text: str,
        output_path: str,
        language: str = "Chinese"
        language: str = "Chinese",
        speed: float = 1.0,
    ) -> str:
        """
        使用声音克隆生成语音
@@ -51,60 +118,49 @@ class VoiceCloneService:
        logger.info(f"🎤 Voice Clone: {text[:30]}... (language={language})")
        Path(output_path).parent.mkdir(parents=True, exist_ok=True)

        # 读取参考音频
        text = text.strip()
        if not text:
            raise RuntimeError("文本为空,无法生成语音")

        with open(ref_audio_path, "rb") as f:
            ref_audio_data = f.read()

        # 调用 Qwen3-TTS 服务
        timeout = httpx.Timeout(300.0)  # 5分钟超时
        async with httpx.AsyncClient(timeout=timeout) as client:
            try:
                response = await client.post(
                    f"{self.base_url}/generate",
                    files={"ref_audio": ("ref.wav", ref_audio_data, "audio/wav")},
                    data={
                        "text": text,
                        "ref_text": ref_text,
                        "language": language
                    }
                )
                response.raise_for_status()

                # 保存返回的音频
                with open(output_path, "wb") as f:
                    f.write(response.content)

                logger.info(f"✅ Voice clone saved: {output_path}")
                return output_path

            except httpx.HTTPStatusError as e:
                logger.error(f"Qwen3-TTS API error: {e.response.status_code} - {e.response.text}")
                raise RuntimeError(f"声音克隆服务错误: {e.response.text}")
            except httpx.RequestError as e:
                logger.error(f"Qwen3-TTS connection error: {e}")
                raise RuntimeError("无法连接声音克隆服务,请检查服务是否启动")
        # CosyVoice 内部自带 text_normalize 分段,无需客户端切分
        audio_bytes = await self._generate_once(
            text=text,
            ref_audio_data=ref_audio_data,
            ref_text=ref_text,
            language=language,
            speed=speed,
        )
        with open(output_path, "wb") as f:
            f.write(audio_bytes)
        logger.info(f"✅ Voice clone saved: {output_path}")
        return output_path
    async def check_health(self) -> dict:
        """健康检查"""
        import time

-       # 5分钟缓存
+       # 30秒缓存
        now = time.time()
-       if self._health_cache and (now - self._health_cache_time) < 300:
-           return self._health_cache
+       cached = self._health_cache
+       if cached is not None and (now - self._health_cache_time) < 30:
+           return cached

        try:
            async with httpx.AsyncClient(timeout=5.0) as client:
                response = await client.get(f"{self.base_url}/health")
                response.raise_for_status()
-               self._health_cache = response.json()
+               payload = response.json()
+               self._health_cache = payload
                self._health_cache_time = now
-               return self._health_cache
+               return payload
        except Exception as e:
-           logger.warning(f"Qwen3-TTS health check failed: {e}")
+           logger.warning(f"Voice clone health check failed: {e}")
            return {
-               "service": "Qwen3-TTS Voice Clone",
-               "model": "0.6B-Base",
+               "service": "CosyVoice 3.0 Voice Clone",
+               "model": "unknown",
                "ready": False,
                "gpu_id": 0,
                "error": str(e)
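`check_health` above caches the last successful payload for 30 seconds. The same caching pattern, extracted with an injectable clock so it can be exercised without waiting (class and names here are illustrative, not part of the codebase):

```python
import time


class TTLCache:
    """Return the cached value while it is younger than ttl, else re-fetch."""

    def __init__(self, ttl: float = 30.0, clock=time.time):
        self.ttl = ttl
        self.clock = clock  # injectable for testing
        self._value = None
        self._stamp = 0.0

    def get(self, fetch):
        now = self.clock()
        if self._value is not None and (now - self._stamp) < self.ttl:
            return self._value
        self._value = fetch()
        self._stamp = now
        return self._value


# Example: with a controlled clock, only one fetch happens within 30 s
fake_now = [0.0]
calls = []
cache = TTLCache(ttl=30.0, clock=lambda: fake_now[0])
cache.get(lambda: calls.append(1) or {"ready": True})   # miss: fetch runs
fake_now[0] = 10.0
hit = cache.get(lambda: calls.append(1) or {"ready": True})   # hit: cached
fake_now[0] = 40.0
miss = cache.get(lambda: calls.append(1) or {"ready": True})  # TTL expired: fetch runs again
```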
@@ -39,12 +39,22 @@ def split_word_to_chars(word: str, start: float, end: float) -> list:

    tokens = []
    ascii_buffer = ""
+   pending_space = False  # 记录是否有待处理的空格(用于英文单词间距)

    for char in word:
        if not char.strip():
+           # 空格:flush ascii_buffer,标记下一个 token 需要前导空格
            if ascii_buffer:
                tokens.append(ascii_buffer)
                ascii_buffer = ""
+           if tokens:  # 仅在已有 token 时标记(避免开头重复空格)
+               pending_space = True
            continue

        if char.isascii() and char.isalnum():
+           if pending_space and not ascii_buffer:
+               ascii_buffer = " "  # 将空格前置到新英文单词
+               pending_space = False
            ascii_buffer += char
            continue

@@ -52,7 +62,9 @@ def split_word_to_chars(word: str, start: float, end: float) -> list:
        tokens.append(ascii_buffer)
        ascii_buffer = ""

-       tokens.append(char)
+       prefix = " " if pending_space else ""
+       pending_space = False
+       tokens.append(prefix + char)

    if ascii_buffer:
        tokens.append(ascii_buffer)
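The `pending_space` bookkeeping above splits Chinese into single characters while keeping English words whole, with each inter-word space attached as a prefix of the following token. A simplified standalone sketch of just the token-splitting part (timestamps omitted; the real `split_word_to_chars` also interpolates per-token times):

```python
def split_tokens(word: str) -> list:
    """Split Chinese into single chars, keep English words whole, prefix spaces."""
    tokens = []
    ascii_buffer = ""
    pending_space = False
    for char in word:
        if not char.strip():
            # whitespace: flush the buffer, mark that the next token needs a space
            if ascii_buffer:
                tokens.append(ascii_buffer)
                ascii_buffer = ""
            if tokens:  # only mark after the first token, avoiding a leading space
                pending_space = True
            continue
        if char.isascii() and char.isalnum():
            if pending_space and not ascii_buffer:
                ascii_buffer = " "  # carry the space into the new English word
                pending_space = False
            ascii_buffer += char
            continue
        # CJK char or punctuation: flush buffer, emit the char as its own token
        if ascii_buffer:
            tokens.append(ascii_buffer)
            ascii_buffer = ""
        prefix = " " if pending_space else ""
        pending_space = False
        tokens.append(prefix + char)
    if ascii_buffer:
        tokens.append(ascii_buffer)
    return tokens
```

Joining the tokens back together reproduces the original word spacing, which is what keeps English subtitles readable after the per-character split.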
@@ -175,6 +187,7 @@ class WhisperService:
        text: str,
        output_path: Optional[str] = None,
        language: str = "zh",
+       original_text: Optional[str] = None,
    ) -> dict:
        """
        对音频进行转录,生成字级别时间戳
@@ -184,6 +197,8 @@ class WhisperService:
            text: 原始文本(用于参考,但实际使用 whisper 转录结果)
            output_path: 可选,输出 JSON 文件路径
            language: 语言代码 (zh/en 等)
+           original_text: 原始文案。非空时,Whisper 仅用于检测总时间范围,
+               字幕文字用此原文替换(解决语言不匹配问题)

        Returns:
            包含字级别时间戳的字典
@@ -208,16 +223,19 @@ class WhisperService:

        logger.info(f"Detected language: {info.language} (prob: {info.language_probability:.2f})")

        # 收集 Whisper 转录结果(始终需要,用于获取时间范围)
        all_segments = []
+       whisper_first_start = None
+       whisper_last_end = None
        for segment in segments_iter:
            # 提取每个字的时间戳,并拆分成单字
            all_words = []
            if segment.words:
                for word_info in segment.words:
                    word_text = word_info.word
                    if word_text.strip():
-                       # 将词拆分成单字,时间戳线性插值
+                       # 保留前导空格用于英文词间距
+                       if whisper_first_start is None:
+                           whisper_first_start = word_info.start
+                       whisper_last_end = word_info.end
                        chars = split_word_to_chars(
                            word_text,
                            word_info.start,
@@ -225,11 +243,24 @@ class WhisperService:
                        )
                        all_words.extend(chars)

            # 将长段落按标点和字数拆分成多行
            if all_words:
                line_segments = split_segment_to_lines(all_words, max_chars)
                all_segments.extend(line_segments)

+       # 如果提供了 original_text,用原文替换 Whisper 转录文字
+       if original_text and original_text.strip() and whisper_first_start is not None:
+           logger.info(f"Using original_text for subtitles (len={len(original_text)}), "
+                       f"Whisper time range: {whisper_first_start:.2f}-{whisper_last_end:.2f}s")
+           # 用 split_word_to_chars 拆分原文
+           orig_chars = split_word_to_chars(
+               original_text.strip(),
+               whisper_first_start,
+               whisper_last_end
+           )
+           if orig_chars:
+               all_segments = split_segment_to_lines(orig_chars, max_chars)
+               logger.info(f"Rebuilt {len(all_segments)} subtitle segments from original text")

        logger.info(f"Generated {len(all_segments)} subtitle segments")
        return {"segments": all_segments}
@@ -247,12 +278,13 @@ class WhisperService:

        return result

-   async def transcribe(self, audio_path: str) -> str:
+   async def transcribe(self, audio_path: str, language: str | None = None) -> str:
        """
        仅转录文本(用于提取文案)

        Args:
            audio_path: 音频/视频文件路径
+           language: 语言代码,None 表示自动检测

        Returns:
            纯文本内容
@@ -266,7 +298,7 @@ class WhisperService:
        # 转录 (无需字级时间戳)
        segments_iter, _ = model.transcribe(
            audio_path,
-           language="zh",
+           language=language,
            word_timestamps=False,
            vad_filter=True,
        )
@@ -71,3 +71,18 @@ CREATE TRIGGER users_updated_at
    BEFORE UPDATE ON users
    FOR EACH ROW
    EXECUTE FUNCTION update_updated_at();
+
+-- 8. 订单表(支付宝付费)
+CREATE TABLE IF NOT EXISTS orders (
+    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+    user_id UUID REFERENCES users(id) ON DELETE CASCADE,
+    out_trade_no TEXT UNIQUE NOT NULL,
+    amount DECIMAL(10, 2) NOT NULL DEFAULT 999.00,
+    status TEXT DEFAULT 'pending' CHECK (status IN ('pending', 'paid', 'failed')),
+    trade_no TEXT,
+    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
+    paid_at TIMESTAMP WITH TIME ZONE
+);
+
+CREATE INDEX IF NOT EXISTS idx_orders_user_id ON orders(user_id);
+CREATE INDEX IF NOT EXISTS idx_orders_out_trade_no ON orders(out_trade_no);
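The `status` CHECK constraint supports an idempotent pending → paid transition driven by Alipay's async notify: only a `pending` row is updated, so a duplicate notification is a no-op. A sketch using an in-memory SQLite database (the real schema is PostgreSQL with UUID and timezone-aware columns; the order and trade numbers here are made up):

```python
import sqlite3
from datetime import datetime

# In-memory stand-in for the orders table (simplified from the PostgreSQL schema)
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        out_trade_no TEXT UNIQUE NOT NULL,
        amount REAL NOT NULL DEFAULT 999.00,
        status TEXT DEFAULT 'pending' CHECK (status IN ('pending', 'paid', 'failed')),
        trade_no TEXT,
        paid_at TEXT
    )
""")
conn.execute("INSERT INTO orders (out_trade_no) VALUES ('ORDER-DEMO-1')")


def mark_paid(out_trade_no: str, trade_no: str) -> bool:
    """Handle an Alipay notify: only a pending order flips to paid; repeats are no-ops."""
    cur = conn.execute(
        "UPDATE orders SET status='paid', trade_no=?, paid_at=? "
        "WHERE out_trade_no=? AND status='pending'",
        (trade_no, datetime.now().isoformat(), out_trade_no),
    )
    return cur.rowcount == 1  # True = this notification actually took effect


first = mark_paid("ORDER-DEMO-1", "TRADE-DEMO-1")
second = mark_paid("ORDER-DEMO-1", "TRADE-DEMO-1")  # duplicate notify: ignored
status = conn.execute(
    "SELECT status FROM orders WHERE out_trade_no='ORDER-DEMO-1'"
).fetchone()[0]
```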
31
backend/package-lock.json
generated
Normal file
@@ -0,0 +1,31 @@
{
  "name": "backend",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "dependencies": {
        "qrcode.react": "^4.2.0"
      }
    },
    "node_modules/qrcode.react": {
      "version": "4.2.0",
      "resolved": "https://registry.npmjs.org/qrcode.react/-/qrcode.react-4.2.0.tgz",
      "integrity": "sha512-QpgqWi8rD9DsS9EP3z7BT+5lY5SFhsqGjpgW5DY/i3mK4M9DTBNz3ErMi8BWYEfI3L0d8GIbGmcdFAS1uIRGjA==",
      "license": "ISC",
      "peerDependencies": {
        "react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0"
      }
    },
    "node_modules/react": {
      "version": "19.2.4",
      "resolved": "https://registry.npmjs.org/react/-/react-19.2.4.tgz",
      "integrity": "sha512-9nfp2hYpCwOjAN+8TZFGhtWEwgvWHXqESH8qT89AT/lWklpLON22Lc8pEtnpsZz7VmawabSU0gCjnj8aC0euHQ==",
      "license": "MIT",
      "peer": true,
      "engines": {
        "node": ">=0.10.0"
      }
    }
  }
}

5
backend/package.json
Normal file
@@ -0,0 +1,5 @@
{
  "dependencies": {
    "qrcode.react": "^4.2.0"
  }
}
@@ -20,14 +20,14 @@ logger = logging.getLogger("Watchdog")
# 服务配置
SERVICES = [
    {
-       "name": "vigent2-qwen-tts",
-       "url": "http://localhost:8009/health",
+       "name": "vigent2-cosyvoice",
+       "url": "http://localhost:8010/health",
        "failures": 0,
-       "threshold": 5,  # 连续5次失败才重启(5×30s = 2.5分钟容忍期)
+       "threshold": 3,  # 连续3次失败才重启(3×15s ≈ 45秒容忍期)
        "timeout": 10.0,
-       "restart_cmd": ["pm2", "restart", "vigent2-qwen-tts"],
+       "restart_cmd": ["pm2", "restart", "vigent2-cosyvoice"],
        "cooldown_until": 0,  # 重启后的冷却截止时间戳
-       "cooldown_sec": 120,  # 重启后等待120秒再开始检查
+       "cooldown_sec": 45,  # 重启后等待45秒再开始检查
    }
]

@@ -45,10 +45,20 @@ async def check_service(service):
        async with httpx.AsyncClient(timeout=timeout) as client:
            response = await client.get(service["url"])
            if response.status_code == 200:
-               if service["failures"] > 0:
-                   logger.info(f"✅ 服务 {service['name']} 已恢复正常")
-               service["failures"] = 0
-               return True
+               ready = True
+               try:
+                   payload = response.json()
+                   ready = bool(payload.get("ready", True))
+               except Exception:
+                   payload = {}
+
+               if ready:
+                   if service["failures"] > 0:
+                       logger.info(f"✅ 服务 {service['name']} 已恢复正常")
+                   service["failures"] = 0
+                   return True
+
+               logger.warning(f"⚠️ 服务 {service['name']} ready=false,健康检查未通过: {payload}")
            else:
                logger.warning(f"⚠️ 服务 {service['name']} 返回状态码 {response.status_code}")
    except Exception as e:
@@ -83,8 +93,8 @@ async def main():
        for service in SERVICES:
            await check_service(service)

-       # 每 30 秒检查一次
-       await asyncio.sleep(30)
+       # 每 15 秒检查一次
+       await asyncio.sleep(15)

if __name__ == "__main__":
    try:
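The watchdog above only restarts a service after `threshold` consecutive failures, then ignores checks until the cooldown expires. That decision logic can be pulled out into a standalone function (the real watchdog keeps this state inline in `check_service`; the function name here is illustrative):

```python
import time


def should_restart(service: dict, healthy: bool, now: float = None) -> bool:
    """Return True when the service has failed `threshold` checks in a row
    outside its post-restart cooldown window (same policy as the watchdog)."""
    now = time.time() if now is None else now
    if now < service["cooldown_until"]:
        return False  # still in post-restart cooldown: don't count failures
    if healthy:
        service["failures"] = 0
        return False
    service["failures"] += 1
    if service["failures"] >= service["threshold"]:
        service["failures"] = 0
        service["cooldown_until"] = now + service["cooldown_sec"]
        return True  # caller would run `pm2 restart` here
    return False


# Example: threshold=3, checks every 15 s; restart fires on the third failure,
# then the cooldown suppresses further restarts for 45 s.
svc = {"failures": 0, "threshold": 3, "cooldown_until": 0, "cooldown_sec": 45}
results = [should_restart(svc, healthy=False, now=t) for t in (0, 15, 30)]
in_cooldown = should_restart(svc, healthy=False, now=40)  # 40 < 30 + 45
```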
10
frontend/package-lock.json
generated
@@ -15,6 +15,7 @@
        "axios": "^1.13.4",
        "lucide-react": "^0.563.0",
        "next": "16.1.1",
+       "qrcode.react": "^4.2.0",
        "react": "19.2.3",
        "react-dom": "19.2.3",
        "sonner": "^2.0.7",
@@ -5618,6 +5619,15 @@
        "node": ">=6"
      }
    },
+   "node_modules/qrcode.react": {
+     "version": "4.2.0",
+     "resolved": "https://registry.npmjs.org/qrcode.react/-/qrcode.react-4.2.0.tgz",
+     "integrity": "sha512-QpgqWi8rD9DsS9EP3z7BT+5lY5SFhsqGjpgW5DY/i3mK4M9DTBNz3ErMi8BWYEfI3L0d8GIbGmcdFAS1uIRGjA==",
+     "license": "ISC",
+     "peerDependencies": {
+       "react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0"
+     }
+   },
    "node_modules/queue-microtask": {
      "version": "1.2.3",
      "resolved": "https://registry.npmjs.org/queue-microtask/-/queue-microtask-1.2.3.tgz",

@@ -16,6 +16,7 @@
        "axios": "^1.13.4",
        "lucide-react": "^0.563.0",
        "next": "16.1.1",
+       "qrcode.react": "^4.2.0",
        "react": "19.2.3",
        "react-dom": "19.2.3",
        "sonner": "^2.0.7",
@@ -25,7 +25,10 @@ export default function LoginPage() {

    try {
      const result = await login(phone, password);
-     if (result.success) {
+     if (result.paymentToken) {
+       sessionStorage.setItem('payment_token', result.paymentToken);
+       router.push('/pay');
+     } else if (result.success) {
        router.push('/');
      } else {
        setError(result.message || '登录失败');

160
frontend/src/app/pay/page.tsx
Normal file
@@ -0,0 +1,160 @@
'use client';

import { Suspense, useState, useEffect, useRef } from 'react';
import { useRouter, useSearchParams } from 'next/navigation';
import api from '@/shared/api/axios';

type PageStatus = 'loading' | 'redirecting' | 'checking' | 'success' | 'error';

function PayContent() {
  const router = useRouter();
  const searchParams = useSearchParams();
  const [status, setStatus] = useState<PageStatus>('loading');
  const [errorMsg, setErrorMsg] = useState('');
  const pollRef = useRef<ReturnType<typeof setInterval> | null>(null);

  useEffect(() => {
    const outTradeNo = searchParams.get('out_trade_no');
    if (outTradeNo) {
      setStatus('checking');
      startPolling(outTradeNo);
      return;
    }

    const token = sessionStorage.getItem('payment_token');
    if (!token) {
      router.replace('/login');
      return;
    }
    createOrder(token);

    return () => {
      if (pollRef.current) clearInterval(pollRef.current);
    };
  }, []);

  const createOrder = async (token: string) => {
    try {
      const { data } = await api.post('/api/payment/create-order', { payment_token: token });
      const { pay_url } = data.data;
      setStatus('redirecting');
      window.location.href = pay_url;
    } catch (err: any) {
      setStatus('error');
      setErrorMsg(err.response?.data?.message || '创建订单失败,请重新登录');
    }
  };

  const startPolling = (tradeNo: string) => {
    checkStatus(tradeNo);
    pollRef.current = setInterval(() => checkStatus(tradeNo), 3000);
  };

  const checkStatus = async (tradeNo: string) => {
    try {
      const { data } = await api.get(`/api/payment/status/${tradeNo}`);
      if (data.data.status === 'paid') {
        if (pollRef.current) clearInterval(pollRef.current);
        setStatus('success');
        sessionStorage.removeItem('payment_token');
        setTimeout(() => router.replace('/login'), 3000);
      }
    } catch {
      // ignore polling errors
    }
  };

  return (
    <div className="w-full max-w-md p-8 bg-white/10 backdrop-blur-lg rounded-2xl shadow-2xl border border-white/20">
      {(status === 'loading' || status === 'redirecting') && (
        <div className="text-center">
          <div className="mb-6">
            <svg className="animate-spin h-12 w-12 mx-auto text-purple-400" xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24">
              <circle className="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" strokeWidth="4"></circle>
              <path className="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z"></path>
            </svg>
          </div>
          <p className="text-gray-300">
            {status === 'loading' ? '正在创建订单...' : '正在跳转到支付宝...'}
          </p>
        </div>
      )}

      {status === 'checking' && (
        <div className="text-center">
          <h1 className="text-2xl font-bold text-white mb-6">支付确认中</h1>
          <div className="flex items-center justify-center gap-2 text-purple-300 mb-4">
            <svg className="animate-spin h-5 w-5" xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24">
              <circle className="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" strokeWidth="4"></circle>
              <path className="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z"></path>
            </svg>
            正在确认支付结果...
          </div>
          <p className="text-gray-400 text-sm">如果您已完成支付,请稍候</p>
        </div>
      )}

      {status === 'success' && (
        <div className="text-center">
          <div className="mb-6">
            <svg className="w-16 h-16 mx-auto text-green-400" fill="none" stroke="currentColor" viewBox="0 0 24 24">
              <path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M9 12l2 2 4-4m6 2a9 9 0 11-18 0 9 9 0 0118 0z" />
            </svg>
          </div>
          <h2 className="text-2xl font-bold text-white mb-4">支付成功!</h2>
          <p className="text-gray-300 mb-2">会员已开通,即将跳转到登录页...</p>
          <p className="text-gray-500 text-sm">请重新登录即可使用</p>
        </div>
      )}

      {status === 'error' && (
        <div className="text-center">
          <div className="mb-6">
            <svg className="w-16 h-16 mx-auto text-red-400" fill="none" stroke="currentColor" viewBox="0 0 24 24">
              <path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M12 8v4m0 4h.01M21 12a9 9 0 11-18 0 9 9 0 0118 0z" />
            </svg>
          </div>
          <h2 className="text-2xl font-bold text-white mb-4">创建订单失败</h2>
          <p className="text-red-300 mb-6">{errorMsg}</p>
          <button
            onClick={() => router.replace('/login')}
            className="py-3 px-6 bg-gradient-to-r from-purple-600 to-pink-600 text-white font-semibold rounded-lg"
          >
            返回登录
          </button>
        </div>
      )}

      {status === 'checking' && (
        <div className="mt-6 text-center">
          <button
            onClick={() => {
              if (pollRef.current) clearInterval(pollRef.current);
              router.replace('/login');
            }}
            className="text-purple-300 hover:text-purple-200 text-sm"
          >
            返回登录
          </button>
        </div>
      )}
    </div>
  );
}

export default function PayPage() {
  return (
    <div className="min-h-dvh flex items-center justify-center">
      <Suspense fallback={
        <div className="w-full max-w-md p-8 bg-white/10 backdrop-blur-lg rounded-2xl shadow-2xl border border-white/20 text-center">
          <svg className="animate-spin h-12 w-12 mx-auto text-purple-400" xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24">
            <circle className="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" strokeWidth="4"></circle>
            <path className="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z"></path>
          </svg>
        </div>
      }>
        <PayContent />
      </Suspense>
    </div>
  );
}
@@ -61,7 +61,7 @@ export default function RegisterPage() {
        </div>
        <h2 className="text-2xl font-bold text-white mb-4">注册成功!</h2>
        <p className="text-gray-300 mb-6">
-         您的账号已创建,请等待管理员审核激活后即可登录。
+         注册成功!请返回登录页,登录后完成付费即可开通。
        </p>
        <a
          href="/login"
@@ -126,6 +126,7 @@ export const useGeneratedAudios = ({
    ref_audio_id?: string;
    ref_text?: string;
    language: string;
+   speed?: number;
  }) => {
    setIsGeneratingAudio(true);
    setAudioTask({ status: "pending", progress: 0, message: "正在提交..." });
@@ -87,10 +87,9 @@ const LANG_TO_LOCALE: Record<string, string> = {
  "Português": "pt-BR",
};

+const DEFAULT_SHORT_TITLE_DURATION = 4;

-const FIXED_REF_TEXT =
-  "其实生活中有许多美好的瞬间,比如清晨的阳光,或者一杯温热的清茶。希望这次生成的音色能够自然、流畅,完美还原出我最真实的声音状态。";

const scrollContainerToItem = (container: HTMLDivElement, item: HTMLDivElement) => {
  const containerRect = container.getBoundingClientRect();
@@ -152,7 +151,9 @@ export const useHomeController = () => {
  const [subtitleSizeLocked, setSubtitleSizeLocked] = useState<boolean>(false);
  const [titleSizeLocked, setTitleSizeLocked] = useState<boolean>(false);
  const [titleTopMargin, setTitleTopMargin] = useState<number>(62);
+ const [titleDisplayMode, setTitleDisplayMode] = useState<"short" | "persistent">("short");
  const [subtitleBottomMargin, setSubtitleBottomMargin] = useState<number>(80);
+ const [outputAspectRatio, setOutputAspectRatio] = useState<"9:16" | "16:9">("9:16");
  const [showStylePreview, setShowStylePreview] = useState<boolean>(false);
  const [materialDimensions, setMaterialDimensions] = useState<{ width: number; height: number } | null>(null);
@@ -165,11 +166,14 @@ export const useHomeController = () => {
  // 声音克隆相关状态
  const [ttsMode, setTtsMode] = useState<"edgetts" | "voiceclone">("edgetts");
  const [selectedRefAudio, setSelectedRefAudio] = useState<RefAudio | null>(null);
- const [refText, setRefText] = useState(FIXED_REF_TEXT);
+ const [refText, setRefText] = useState("");

+ // 预生成配音选中 ID
+ const [selectedAudioId, setSelectedAudioId] = useState<string | null>(null);

+ // 语速控制
+ const [speed, setSpeed] = useState<number>(1.0);

  // ClipTrimmer 模态框状态
  const [clipTrimmerOpen, setClipTrimmerOpen] = useState(false);
  const [clipTrimmerSegmentId, setClipTrimmerSegmentId] = useState<string | null>(null);
@@ -286,7 +290,6 @@ export const useHomeController = () => {
    setUploadError,
    fetchMaterials,
    toggleMaterial,
    reorderMaterials,
    deleteMaterial,
    handleUpload,
  } = useMaterials({
@@ -314,8 +317,9 @@ export const useHomeController = () => {
    fetchRefAudios,
    uploadRefAudio,
    deleteRefAudio,
+   retranscribeRefAudio,
+   retranscribingId,
  } = useRefAudios({
-   fixedRefText: FIXED_REF_TEXT,
    selectedRefAudio,
    setSelectedRefAudio,
    setRefText,
@@ -446,8 +450,12 @@ export const useHomeController = () => {
    setTitleSizeLocked,
    titleTopMargin,
    setTitleTopMargin,
+   titleDisplayMode,
+   setTitleDisplayMode,
    subtitleBottomMargin,
    setSubtitleBottomMargin,
+   outputAspectRatio,
+   setOutputAspectRatio,
    selectedBgmId,
    setSelectedBgmId,
    bgmVolume,
@@ -459,6 +467,8 @@ export const useHomeController = () => {
    selectedRefAudio,
    selectedAudioId,
    setSelectedAudioId,
+   speed,
+   setSpeed,
  });

  const { savedScripts, saveScript, deleteScript: deleteSavedScript } = useSavedScripts(storageKey);
@@ -523,7 +533,6 @@ export const useHomeController = () => {

    let isActive = true;
    const video = document.createElement("video");
    video.crossOrigin = "anonymous";
    video.preload = "metadata";
    video.src = url;
    video.load();
@@ -610,7 +619,7 @@ export const useHomeController = () => {
      setSelectedVideoId(firstId);
      setGeneratedVideo(resolveMediaUrl(generatedVideos[0].path));
    }
- }, [isRestored, generatedVideos, selectedVideoId, setSelectedVideoId, setGeneratedVideo, resolveMediaUrl]);
+ }, [isRestored, generatedVideos, selectedVideoId, setSelectedVideoId, setGeneratedVideo]);

  // 【修复】BGM 默认选中逻辑
  useEffect(() => {
@@ -619,8 +628,14 @@ export const useHomeController = () => {
    }
  }, [isRestored, bgmList, selectedBgmId, enableBgm, setSelectedBgmId]);

+ const videoScrollReady = useRef(false);
  useEffect(() => {
    if (!selectedVideoId) return;
+   if (!videoScrollReady.current) {
+     videoScrollReady.current = true;
+     return;
+   }

    const target = videoItemRefs.current[selectedVideoId];
    if (target) {
      target.scrollIntoView({ block: "nearest", behavior: "smooth" });
@@ -815,6 +830,7 @@ export const useHomeController = () => {
      ref_audio_id: ttsMode === "voiceclone" ? selectedRefAudio!.id : undefined,
      ref_text: ttsMode === "voiceclone" ? refText : undefined,
      language: textLang,
+     speed: ttsMode === "voiceclone" ? speed : undefined,
    };
    await generateAudio(params);
  };
@@ -854,22 +870,59 @@ export const useHomeController = () => {
      language: selectedAudio.language || textLang,
      title: videoTitle.trim() || undefined,
      enable_subtitles: true,
+     output_aspect_ratio: outputAspectRatio,
    };

    // 多素材
    if (selectedMaterials.length > 1) {
-     payload.material_paths = selectedMaterials
+     const timelineOrderedIds = timelineSegments
+       .map((seg) => seg.materialId)
+       .filter((id, index, arr) => arr.indexOf(id) === index);
+     const orderedMaterialIds = [
+       ...timelineOrderedIds.filter((id) => selectedMaterials.includes(id)),
+       ...selectedMaterials.filter((id) => !timelineOrderedIds.includes(id)),
+     ];
+
+     const materialPaths = orderedMaterialIds
        .map((id) => materials.find((x) => x.id === id)?.path)
        .filter((path): path is string => !!path);
+
+     if (materialPaths.length === 0) {
+       toast.error("多素材解析失败,请刷新素材后重试");
+       return;
+     }
+
+     payload.material_paths = materialPaths;
+     payload.material_path = materialPaths[0];
+
+     // 发送自定义时间轴分配
+     const assignments = toCustomAssignments();
+     if (assignments.length > 0) {
+       const assignmentPaths = assignments
+         .map((a) => a.material_path)
+         .filter((path): path is string => !!path);
+
+       if (assignmentPaths.length === assignments.length) {
+         // 以时间轴可见段为准:超出时间轴的素材不会参与本次生成
+         payload.material_paths = assignmentPaths;
+         payload.material_path = assignmentPaths[0];
+       }
+       payload.custom_assignments = assignments;
+     } else {
+       console.warn(
+         "[Timeline] custom_assignments 为空,回退后端自动分配",
+         { materials: materialPaths.length }
+       );
+     }
    }

-   // 单素材 + 截取起点
-   if (selectedMaterials.length === 1 && timelineSegments[0]?.sourceStart > 0) {
+   // 单素材 + 截取范围
+   const singleSeg = timelineSegments[0];
+   if (
+     selectedMaterials.length === 1
+     && singleSeg
+     && (singleSeg.sourceStart > 0 || singleSeg.sourceEnd > 0)
+   ) {
      payload.custom_assignments = toCustomAssignments();
    }

@@ -890,6 +943,10 @@ export const useHomeController = () => {
    }

    if (videoTitle.trim()) {
+     payload.title_display_mode = titleDisplayMode;
+     if (titleDisplayMode === "short") {
+       payload.title_duration = DEFAULT_SHORT_TITLE_DURATION;
+     }
      payload.title_top_margin = Math.round(titleTopMargin);
    }
@@ -1000,8 +1057,12 @@ export const useHomeController = () => {
    setSubtitleSizeLocked,
    titleTopMargin,
    setTitleTopMargin,
+   titleDisplayMode,
+   setTitleDisplayMode,
    subtitleBottomMargin,
    setSubtitleBottomMargin,
+   outputAspectRatio,
+   setOutputAspectRatio,
    resolveAssetUrl,
    getFontFormat,
    buildTextShadow,
@@ -1029,6 +1090,8 @@ export const useHomeController = () => {
    saveEditing,
    cancelEditing,
    deleteRefAudio,
+   retranscribeRefAudio,
+   retranscribingId,
    recordedBlob,
    isRecording,
    recordingTime,
@@ -1036,7 +1099,6 @@ export const useHomeController = () => {
    stopRecording,
    useRecording,
    formatRecordingTime,
-   fixedRefText: FIXED_REF_TEXT,
    bgmList,
    bgmLoading,
    bgmError,
@@ -1072,6 +1134,8 @@ export const useHomeController = () => {
    deleteAudio,
    renameAudio,
    selectAudio,
+   speed,
+   setSpeed,
    timelineSegments,
    reorderSegments,
    setSourceRange,
@@ -37,8 +37,12 @@ interface UseHomePersistenceOptions {
|
||||
setTitleSizeLocked: React.Dispatch<React.SetStateAction<boolean>>;
|
||||
titleTopMargin: number;
|
||||
setTitleTopMargin: React.Dispatch<React.SetStateAction<number>>;
|
||||
titleDisplayMode: 'short' | 'persistent';
|
||||
setTitleDisplayMode: React.Dispatch<React.SetStateAction<'short' | 'persistent'>>;
|
||||
subtitleBottomMargin: number;
|
||||
setSubtitleBottomMargin: React.Dispatch<React.SetStateAction<number>>;
|
||||
outputAspectRatio: '9:16' | '16:9';
|
||||
setOutputAspectRatio: React.Dispatch<React.SetStateAction<'9:16' | '16:9'>>;
|
||||
selectedBgmId: string;
|
||||
setSelectedBgmId: React.Dispatch<React.SetStateAction<string>>;
|
||||
bgmVolume: number;
|
||||
@@ -50,6 +54,8 @@ interface UseHomePersistenceOptions {
|
||||
selectedRefAudio: RefAudio | null;
|
||||
selectedAudioId: string | null;
|
||||
setSelectedAudioId: React.Dispatch<React.SetStateAction<string | null>>;
|
||||
speed: number;
|
||||
setSpeed: React.Dispatch<React.SetStateAction<number>>;
|
||||
}
|
||||
|
||||
export const useHomePersistence = ({
|
||||
@@ -79,8 +85,12 @@ export const useHomePersistence = ({
|
||||
setTitleSizeLocked,
|
||||
titleTopMargin,
|
||||
setTitleTopMargin,
|
||||
titleDisplayMode,
|
||||
setTitleDisplayMode,
|
||||
subtitleBottomMargin,
|
||||
setSubtitleBottomMargin,
|
||||
outputAspectRatio,
|
||||
setOutputAspectRatio,
|
||||
selectedBgmId,
|
||||
setSelectedBgmId,
|
||||
bgmVolume,
|
||||
@@ -92,6 +102,8 @@ export const useHomePersistence = ({
|
||||
selectedRefAudio,
|
||||
selectedAudioId,
|
||||
setSelectedAudioId,
|
||||
speed,
|
||||
setSpeed,
|
||||
}: UseHomePersistenceOptions) => {
|
||||
const [isRestored, setIsRestored] = useState(false);
|
||||
|
||||
@@ -114,7 +126,10 @@ export const useHomePersistence = ({
|
||||
const savedBgmVolume = localStorage.getItem(`vigent_${storageKey}_bgmVolume`);
|
||||
const savedEnableBgm = localStorage.getItem(`vigent_${storageKey}_enableBgm`);
|
||||
const savedTitleTopMargin = localStorage.getItem(`vigent_${storageKey}_titleTopMargin`);
|
||||
const savedTitleDisplayMode = localStorage.getItem(`vigent_${storageKey}_titleDisplayMode`);
|
||||
const savedSubtitleBottomMargin = localStorage.getItem(`vigent_${storageKey}_subtitleBottomMargin`);
|
||||
const savedOutputAspectRatio = localStorage.getItem(`vigent_${storageKey}_outputAspectRatio`);
|
||||
const savedSpeed = localStorage.getItem(`vigent_${storageKey}_speed`);
|
||||
|
||||
setText(savedText || "大家好,欢迎来到我的频道,今天给大家分享一些有趣的内容。");
|
||||
setVideoTitle(savedTitle ? clampTitle(savedTitle) : "");
|
||||
@@ -164,11 +179,23 @@ export const useHomePersistence = ({
|
||||
const parsed = parseInt(savedTitleTopMargin, 10);
|
||||
if (!Number.isNaN(parsed)) setTitleTopMargin(parsed);
|
||||
}
|
||||
if (savedTitleDisplayMode === 'short' || savedTitleDisplayMode === 'persistent') {
|
||||
setTitleDisplayMode(savedTitleDisplayMode);
|
||||
}
|
||||
+    if (savedSubtitleBottomMargin) {
+      const parsed = parseInt(savedSubtitleBottomMargin, 10);
+      if (!Number.isNaN(parsed)) setSubtitleBottomMargin(parsed);
+    }
+
+    if (savedOutputAspectRatio === '9:16' || savedOutputAspectRatio === '16:9') {
+      setOutputAspectRatio(savedOutputAspectRatio);
+    }
+
+    if (savedSpeed) {
+      const parsed = parseFloat(savedSpeed);
+      if (!Number.isNaN(parsed)) setSpeed(parsed);
+    }

     // eslint-disable-next-line react-hooks/set-state-in-effect
     setIsRestored(true);
   }, [
@@ -181,6 +208,7 @@ export const useHomePersistence = ({
   setSelectedTitleStyleId,
   setSelectedVideoId,
   setSelectedAudioId,
+  setSpeed,
   setSubtitleFontSize,
   setSubtitleSizeLocked,
   setText,
@@ -188,7 +216,9 @@ export const useHomePersistence = ({
   setTitleFontSize,
   setTitleSizeLocked,
   setTitleTopMargin,
   setTitleDisplayMode,
+  setSubtitleBottomMargin,
+  setOutputAspectRatio,
   setTtsMode,
   setVideoTitle,
   setVoice,
@@ -259,12 +289,24 @@ export const useHomePersistence = ({
     }
   }, [titleTopMargin, storageKey, isRestored]);

   useEffect(() => {
     if (isRestored) {
       localStorage.setItem(`vigent_${storageKey}_titleDisplayMode`, titleDisplayMode);
     }
   }, [titleDisplayMode, storageKey, isRestored]);

+  useEffect(() => {
+    if (isRestored) {
+      localStorage.setItem(`vigent_${storageKey}_subtitleBottomMargin`, String(subtitleBottomMargin));
+    }
+  }, [subtitleBottomMargin, storageKey, isRestored]);
+
+  useEffect(() => {
+    if (isRestored) {
+      localStorage.setItem(`vigent_${storageKey}_outputAspectRatio`, outputAspectRatio);
+    }
+  }, [outputAspectRatio, storageKey, isRestored]);
+
   useEffect(() => {
     if (isRestored) {
       localStorage.setItem(`vigent_${storageKey}_bgmId`, selectedBgmId);
@@ -309,5 +351,11 @@ export const useHomePersistence = ({
     }
   }, [selectedRefAudio, storageKey, isRestored]);

+  useEffect(() => {
+    if (isRestored) {
+      localStorage.setItem(`vigent_${storageKey}_speed`, String(speed));
+    }
+  }, [speed, storageKey, isRestored]);
+
   return { isRestored };
 };
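The restore path above has to cope with `localStorage` only storing strings: every numeric value is parsed and validated before it is applied, and the aspect ratio is accepted only if it is one of the two known values. A minimal standalone sketch of those guards (`restoreInt` and `restoreAspectRatio` are hypothetical names, not part of the hook):

```typescript
// Hypothetical helpers mirroring the restore guards in useHomePersistence.
// localStorage values come back as strings (or null), so each one is parsed
// and validated before being handed to a setter.
function restoreInt(raw: string | null): number | null {
  if (!raw) return null;
  const parsed = parseInt(raw, 10);
  // Reject anything that did not parse to a real number.
  return Number.isNaN(parsed) ? null : parsed;
}

function restoreAspectRatio(raw: string | null): "9:16" | "16:9" | null {
  // Only the two known ratios are accepted; any other stored value is ignored.
  return raw === "9:16" || raw === "16:9" ? raw : null;
}
```

The same shape applies to `parseFloat` for the speed value; anything invalid simply leaves the state at its default.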
@@ -185,11 +185,14 @@ export const useMaterials = ({
       ).then((enriched) => setMaterials(enriched));
     }

-      // Find newly added material IDs and auto-select them
+      // Find newly added materials and select only the new upload by default,
+      // to avoid accidentally triggering multi-material mode
       const oldIds = new Set(materials.map((m) => m.id));
       const newIds = nextMaterials.filter((m) => !oldIds.has(m.id)).map((m) => m.id);
       if (newIds.length > 0) {
-        setSelectedMaterials((prev) => [...prev, ...newIds]);
+        setSelectedMaterials([newIds[0]]);
+      } else if (nextMaterials[0]?.id) {
+        // Fallback: even if no new item is detected, keep a single-material
+        // default selection of the most recent one
+        setSelectedMaterials([nextMaterials[0].id]);
       }
     } catch (err: unknown) {
       console.error("Upload failed:", err);
@@ -200,7 +203,7 @@ export const useMaterials = ({
     }

     e.target.value = '';
-  }, [fetchMaterials]);
+  }, [materials, setSelectedMaterials]);

   return {
     materials,
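The selection rule in the upload handler above can be restated as a pure function: diff the old and refreshed lists by ID, select only the first newly uploaded material, and fall back to the first item of the refreshed list. A sketch under that reading (`pickNewSelection` and `MaterialLike` are hypothetical names):

```typescript
// Hypothetical distillation of the post-upload selection rule above.
interface MaterialLike { id: string; }

function pickNewSelection(prev: MaterialLike[], next: MaterialLike[]): string[] {
  // IDs already present before the upload.
  const oldIds = new Set(prev.map((m) => m.id));
  // IDs that appear only in the refreshed list, i.e. the new uploads.
  const newIds = next.filter((m) => !oldIds.has(m.id)).map((m) => m.id);
  if (newIds.length > 0) return [newIds[0]];
  // Fallback: keep a single-material selection of the first item.
  if (next[0]?.id) return [next[0].id];
  return [];
}
```

The `Set`-based difference keeps the check O(n) even for long material lists.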
@@ -13,14 +13,12 @@ interface RefAudio {
 }

 interface UseRefAudiosOptions {
-  fixedRefText: string;
   selectedRefAudio: RefAudio | null;
   setSelectedRefAudio: React.Dispatch<React.SetStateAction<RefAudio | null>>;
   setRefText: React.Dispatch<React.SetStateAction<string>>;
 }

 export const useRefAudios = ({
-  fixedRefText,
   selectedRefAudio,
   setSelectedRefAudio,
   setRefText,
@@ -28,6 +26,7 @@ export const useRefAudios = ({
   const [refAudios, setRefAudios] = useState<RefAudio[]>([]);
   const [isUploadingRef, setIsUploadingRef] = useState(false);
   const [uploadRefError, setUploadRefError] = useState<string | null>(null);
+  const [retranscribingId, setRetranscribingId] = useState<string | null>(null);

   const fetchRefAudios = useCallback(async () => {
     try {
@@ -42,15 +41,12 @@ export const useRefAudios = ({
   }, []);

   const uploadRefAudio = useCallback(async (file: File) => {
-    const refTextInput = fixedRefText;
-
     setIsUploadingRef(true);
     setUploadRefError(null);

     try {
       const formData = new FormData();
       formData.append('file', file);
-      formData.append('ref_text', refTextInput);

       const { data: res } = await api.post<ApiResponse<RefAudio>>('/api/ref-audios', formData, {
         headers: { 'Content-Type': 'multipart/form-data' },
@@ -68,7 +64,7 @@ export const useRefAudios = ({
       const errorMsg = axiosErr.response?.data?.message || axiosErr.message || String(err);
       setUploadRefError(`上传失败: ${errorMsg}`);
     }
-  }, [fetchRefAudios, fixedRefText, setRefText, setSelectedRefAudio]);
+  }, [fetchRefAudios, setRefText, setSelectedRefAudio]);

   const deleteRefAudio = useCallback(async (audioId: string) => {
     if (!confirm("确定要删除这个参考音频吗?")) return;
@@ -84,6 +80,28 @@ export const useRefAudios = ({
     }
   }, [fetchRefAudios, selectedRefAudio, setRefText, setSelectedRefAudio]);

+  const retranscribeRefAudio = useCallback(async (audioId: string) => {
+    setRetranscribingId(audioId);
+    try {
+      const { data: res } = await api.post<ApiResponse<{ ref_text: string }>>(
+        `/api/ref-audios/${encodeURIComponent(audioId)}/retranscribe`
+      );
+      const payload = unwrap(res);
+      toast.success("识别完成");
+      // Refresh the list and the currently selected audio
+      await fetchRefAudios();
+      if (selectedRefAudio?.id === audioId) {
+        setRefText(payload.ref_text);
+      }
+    } catch (err: unknown) {
+      const axiosErr = err as { response?: { data?: { message?: string } }; message?: string };
+      const errorMsg = axiosErr.response?.data?.message || axiosErr.message || String(err);
+      toast.error(`识别失败: ${errorMsg}`);
+    } finally {
+      setRetranscribingId(null);
+    }
+  }, [fetchRefAudios, selectedRefAudio, setRefText]);
+
   return {
     refAudios,
     isUploadingRef,
@@ -92,5 +110,7 @@ export const useRefAudios = ({
     fetchRefAudios,
     uploadRefAudio,
     deleteRefAudio,
+    retranscribeRefAudio,
+    retranscribingId,
   };
 };
@@ -12,12 +12,13 @@ export interface TimelineSegment {
   color: string;
 }

 export interface CustomAssignment {
   material_path: string;
   start: number;
   end: number;
   source_start: number;
+  source_end?: number;
 }

 const COLORS = ["#8b5cf6", "#ec4899", "#06b6d4", "#f59e0b", "#10b981", "#f97316"];

@@ -31,14 +32,16 @@ interface SegmentSnapshot {
 }

 /** Get effective duration of a segment (clipped range or full material duration) */
 function getEffectiveDuration(
   seg: { sourceStart: number; sourceEnd: number; materialId: string },
   mats: Material[]
 ): number {
-  if (seg.sourceEnd > seg.sourceStart) return seg.sourceEnd - seg.sourceStart;
   const mat = mats.find((m) => m.id === seg.materialId);
-  return mat?.duration_sec ?? 0;
+  const matDur = mat?.duration_sec ?? 0;
+  if (seg.sourceEnd > seg.sourceStart) return seg.sourceEnd - seg.sourceStart;
+  if (seg.sourceStart > 0) return Math.max(matDur - seg.sourceStart, 0);
+  return matDur;
 }

 /**
  * Recalculate segment start/end positions based on effective durations.
@@ -97,11 +100,17 @@ export const useTimelineEditor = ({
   const prevKey = useRef("");
   const restoredRef = useRef(false);

   // Refs for stable callbacks (avoid recreating on every materials/duration change)
   const materialsRef = useRef(materials);
-  materialsRef.current = materials;
   const audioDurationRef = useRef(audioDuration);
-  audioDurationRef.current = audioDuration;
+
+  useEffect(() => {
+    materialsRef.current = materials;
+  }, [materials]);
+
+  useEffect(() => {
+    audioDurationRef.current = audioDuration;
+  }, [audioDuration]);

   // Build a durationsKey so segments re-init when material durations become available
   const durationsKey = selectedMaterials
@@ -227,14 +236,15 @@ export const useTimelineEditor = ({
       .filter((seg) => seg.start < duration)
       .map((seg) => {
         const mat = materialsRef.current.find((m) => m.id === seg.materialId);
         return {
           material_path: mat?.path || seg.materialId,
           start: seg.start,
           end: seg.end,
           source_start: seg.sourceStart,
+          source_end: seg.sourceEnd > seg.sourceStart ? seg.sourceEnd : undefined,
         };
       });
   }, [segments]);

   return {
     segments,
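The new `getEffectiveDuration` above encodes three rules: an explicit clip range (`sourceEnd > sourceStart`) wins; otherwise a bare `sourceStart` clips the tail of the material; otherwise the full material duration is used. A standalone restatement of those rules, under the same assumptions as the hook (`Seg`/`Mat` are reduced stand-ins for the real types):

```typescript
// Reduced stand-ins for the segment and Material shapes used by the hook.
interface Seg { sourceStart: number; sourceEnd: number; materialId: string; }
interface Mat { id: string; duration_sec?: number; }

function effectiveDuration(seg: Seg, mats: Mat[]): number {
  const mat = mats.find((m) => m.id === seg.materialId);
  const matDur = mat?.duration_sec ?? 0;
  // Rule 1: an explicit clipped range takes precedence.
  if (seg.sourceEnd > seg.sourceStart) return seg.sourceEnd - seg.sourceStart;
  // Rule 2: a bare sourceStart clips the remainder of the material,
  // clamped so an out-of-range start never goes negative.
  if (seg.sourceStart > 0) return Math.max(matDur - seg.sourceStart, 0);
  // Rule 3: no clipping at all — full material duration.
  return matDur;
}
```

Rule 2 is the behavioral change in this commit range: previously a segment with only `sourceStart` set fell through to the full material duration.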
@@ -86,6 +86,8 @@ export function FloatingStylePreview({

   const previewScale = windowWidth / previewBaseWidth;
   const previewHeight = previewBaseHeight * previewScale;
+  const widthScale = Math.min(1, previewBaseWidth / 1080);
+  const responsiveScale = Math.max(0.55, widthScale);

   const activeSubtitleStyle = subtitleStyles.find((s) => s.id === selectedSubtitleStyleId)
     || subtitleStyles.find((s) => s.is_default)
@@ -102,8 +104,8 @@ export function FloatingStylePreview({
   const subtitleHighlightColor = activeSubtitleStyle?.highlight_color || "#FFE600";
   const subtitleNormalColor = activeSubtitleStyle?.normal_color || "#FFFFFF";
   const subtitleStrokeColor = activeSubtitleStyle?.stroke_color || "#000000";
-  const subtitleStrokeSize = activeSubtitleStyle?.stroke_size ?? 3;
-  const subtitleLetterSpacing = activeSubtitleStyle?.letter_spacing ?? 2;
+  const subtitleStrokeSize = Math.max(1, Math.round((activeSubtitleStyle?.stroke_size ?? 3) * responsiveScale));
+  const subtitleLetterSpacing = Math.max(0, (activeSubtitleStyle?.letter_spacing ?? 2) * responsiveScale);
   const subtitleFontFamilyName = `SubtitlePreview-${activeSubtitleStyle?.id || "default"}`;
   const subtitleFontUrl = activeSubtitleStyle?.font_file
     ? resolveAssetUrl(`fonts/${activeSubtitleStyle.font_file}`)
@@ -111,14 +113,19 @@ export function FloatingStylePreview({

   const titleColor = activeTitleStyle?.color || "#FFFFFF";
   const titleStrokeColor = activeTitleStyle?.stroke_color || "#000000";
-  const titleStrokeSize = activeTitleStyle?.stroke_size ?? 8;
-  const titleLetterSpacing = activeTitleStyle?.letter_spacing ?? 4;
+  const titleStrokeSize = Math.max(1, Math.round((activeTitleStyle?.stroke_size ?? 8) * responsiveScale));
+  const titleLetterSpacing = Math.max(0, (activeTitleStyle?.letter_spacing ?? 4) * responsiveScale);
   const titleFontWeight = activeTitleStyle?.font_weight ?? 900;
   const titleFontFamilyName = `TitlePreview-${activeTitleStyle?.id || "default"}`;
   const titleFontUrl = activeTitleStyle?.font_file
     ? resolveAssetUrl(`fonts/${activeTitleStyle.font_file}`)
     : null;

+  const scaledTitleFontSize = Math.max(36, Math.round(titleFontSize * responsiveScale));
+  const scaledSubtitleFontSize = Math.max(28, Math.round(subtitleFontSize * responsiveScale));
+  const scaledTitleTopMargin = Math.max(0, Math.round(titleTopMargin * responsiveScale));
+  const scaledSubtitleBottomMargin = Math.max(0, Math.round(subtitleBottomMargin * responsiveScale));
+
   const content = (
     <div
       style={{
@@ -172,11 +179,11 @@ export function FloatingStylePreview({
       className="w-full text-center"
       style={{
         position: 'absolute',
-        top: `${titleTopMargin}px`,
+        top: `${scaledTitleTopMargin}px`,
         left: 0,
         right: 0,
         color: titleColor,
-        fontSize: `${titleFontSize}px`,
+        fontSize: `${scaledTitleFontSize}px`,
         fontWeight: titleFontWeight,
         fontFamily: titleFontUrl
           ? `'${titleFontFamilyName}', "PingFang SC", "Hiragino Sans GB", "Microsoft YaHei", "Noto Sans SC", sans-serif`
@@ -184,6 +191,10 @@ export function FloatingStylePreview({
         textShadow: buildTextShadow(titleStrokeColor, titleStrokeSize),
         letterSpacing: `${titleLetterSpacing}px`,
         lineHeight: 1.2,
+        whiteSpace: 'normal',
+        wordBreak: 'break-word',
+        overflowWrap: 'anywhere',
+        boxSizing: 'border-box',
         opacity: videoTitle.trim() ? 1 : 0.7,
         padding: '0 5%',
       }}
@@ -195,16 +206,20 @@ export function FloatingStylePreview({
       className="w-full text-center"
       style={{
         position: 'absolute',
-        bottom: `${subtitleBottomMargin}px`,
+        bottom: `${scaledSubtitleBottomMargin}px`,
         left: 0,
         right: 0,
-        fontSize: `${subtitleFontSize}px`,
+        fontSize: `${scaledSubtitleFontSize}px`,
         fontFamily: subtitleFontUrl
           ? `'${subtitleFontFamilyName}', "PingFang SC", "Hiragino Sans GB", "Microsoft YaHei", "Noto Sans SC", sans-serif`
           : '"PingFang SC", "Hiragino Sans GB", "Microsoft YaHei", "Noto Sans SC", sans-serif',
         textShadow: buildTextShadow(subtitleStrokeColor, subtitleStrokeSize),
         letterSpacing: `${subtitleLetterSpacing}px`,
         lineHeight: 1.35,
+        whiteSpace: 'normal',
+        wordBreak: 'break-word',
+        overflowWrap: 'anywhere',
+        boxSizing: 'border-box',
         padding: '0 6%',
       }}
     >
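The scaling math introduced above splits into two factors: `previewScale` fits the preview to the window, while `responsiveScale` scales design-time pixel values (stroke, spacing, fonts, margins) by the base width relative to a 1080px design, clamped so they never shrink below 55% and never grow above 100%. A sketch of just that math (function names are hypothetical extractions, not exports of the component):

```typescript
// Hypothetical extraction of the responsiveScale clamp used above.
function responsiveScale(previewBaseWidth: number): number {
  const widthScale = Math.min(1, previewBaseWidth / 1080);
  return Math.max(0.55, widthScale);
}

// Stroke sizes are rounded and floored at 1px so thin strokes stay visible.
function scaledStroke(strokeSize: number, scale: number): number {
  return Math.max(1, Math.round(strokeSize * scale));
}
```

With a 1080-wide base the scale is exactly 1, so the preview renders design values unchanged; narrower bases shrink proportionally down to the 0.55 floor.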
@@ -1,5 +1,5 @@
 import { useState, useRef, useCallback, useEffect } from "react";
-import { Play, Pause, Pencil, Trash2, Check, X, RefreshCw, Mic } from "lucide-react";
+import { Play, Pause, Pencil, Trash2, Check, X, RefreshCw, Mic, ChevronDown } from "lucide-react";
 import type { GeneratedAudio } from "@/features/home/model/useGeneratedAudios";

 interface AudioTask {
@@ -19,6 +19,10 @@ interface GeneratedAudiosPanelProps {
   onDeleteAudio: (id: string) => void;
   onRenameAudio: (id: string, newName: string) => void;
   hasText: boolean;
+  missingRefAudio?: boolean;
+  speed: number;
+  onSpeedChange: (speed: number) => void;
+  ttsMode: string;
 }

 export function GeneratedAudiosPanel({
@@ -32,11 +36,17 @@ export function GeneratedAudiosPanel({
   onDeleteAudio,
   onRenameAudio,
   hasText,
+  missingRefAudio = false,
+  speed,
+  onSpeedChange,
+  ttsMode,
 }: GeneratedAudiosPanelProps) {
   const [editingId, setEditingId] = useState<string | null>(null);
   const [editName, setEditName] = useState("");
   const [playingId, setPlayingId] = useState<string | null>(null);
+  const [speedOpen, setSpeedOpen] = useState(false);
   const audioRef = useRef<HTMLAudioElement | null>(null);
+  const speedRef = useRef<HTMLDivElement>(null);

   const stopPlaying = useCallback(() => {
     if (audioRef.current) {
@@ -57,6 +67,17 @@ export function GeneratedAudiosPanel({
     };
   }, []);

+  // Close speed dropdown on click outside
+  useEffect(() => {
+    const handler = (e: MouseEvent) => {
+      if (speedRef.current && !speedRef.current.contains(e.target as Node)) {
+        setSpeedOpen(false);
+      }
+    };
+    if (speedOpen) document.addEventListener("mousedown", handler);
+    return () => document.removeEventListener("mousedown", handler);
+  }, [speedOpen]);
+
   const togglePlay = (audio: GeneratedAudio, e: React.MouseEvent) => {
     e.stopPropagation();
     if (playingId === audio.id) {
@@ -91,19 +112,60 @@ export function GeneratedAudiosPanel({
     setEditName("");
   };

+  const canGenerate = hasText && !missingRefAudio;
+
+  const speedOptions = [
+    { value: 0.8, label: "较慢" },
+    { value: 0.9, label: "稍慢" },
+    { value: 1.0, label: "正常" },
+    { value: 1.1, label: "稍快" },
+    { value: 1.2, label: "较快" },
+  ] as const;
+  const currentSpeedLabel = speedOptions.find((o) => o.value === speed)?.label ?? "正常";
+
   return (
-    <div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm">
+    <div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm relative z-10">
       <div className="flex justify-between items-center gap-2 mb-4">
         <h2 className="text-base sm:text-lg font-semibold text-white flex items-center gap-2 whitespace-nowrap">
           <Mic className="h-4 w-4 text-purple-400" />
           配音列表
         </h2>
         <div className="flex gap-1.5">
+          {/* Speed dropdown (voice-clone mode only) */}
+          {ttsMode === "voiceclone" && (
+            <div ref={speedRef} className="relative">
+              <button
+                onClick={() => setSpeedOpen((v) => !v)}
+                className="px-2 py-1 text-xs bg-white/10 hover:bg-white/20 rounded text-gray-300 whitespace-nowrap flex items-center gap-1 transition-all"
+              >
+                语速: {currentSpeedLabel}
+                <ChevronDown className={`h-3 w-3 transition-transform ${speedOpen ? "rotate-180" : ""}`} />
+              </button>
+              {speedOpen && (
+                <div className="absolute right-0 top-full mt-1 bg-gray-800 border border-white/20 rounded-lg shadow-xl py-1 z-50 min-w-[80px]">
+                  {speedOptions.map((opt) => (
+                    <button
+                      key={opt.value}
+                      onClick={() => { onSpeedChange(opt.value); setSpeedOpen(false); }}
+                      className={`w-full text-left px-3 py-1.5 text-xs transition-colors ${
+                        speed === opt.value
+                          ? "bg-purple-600/40 text-purple-200"
+                          : "text-gray-300 hover:bg-white/10"
+                      }`}
+                    >
+                      {opt.label}
+                    </button>
+                  ))}
+                </div>
+              )}
+            </div>
+          )}
           <button
             onClick={onGenerateAudio}
-            disabled={isGeneratingAudio || !hasText}
+            disabled={isGeneratingAudio || !canGenerate}
+            title={missingRefAudio ? "请先选择参考音频" : !hasText ? "请先输入文案" : ""}
             className={`px-2 py-1 text-xs rounded transition-all whitespace-nowrap flex items-center gap-1 ${
-              isGeneratingAudio || !hasText
+              isGeneratingAudio || !canGenerate
                 ? "bg-gray-600 cursor-not-allowed text-gray-400"
                 : "bg-gradient-to-r from-purple-600 to-pink-600 hover:from-purple-700 hover:to-pink-700 text-white"
             }`}
@@ -120,6 +182,13 @@ export function GeneratedAudiosPanel({
         </div>
       </div>

+      {/* Missing reference-audio hint */}
+      {missingRefAudio && (
+        <div className="mb-3 px-3 py-2 bg-yellow-500/10 border border-yellow-500/30 rounded-lg text-yellow-300 text-xs">
+          声音克隆模式需要先选择参考音频
+        </div>
+      )}
+
       {/* Generation progress */}
       {isGeneratingAudio && audioTask && (
         <div className="mb-4 p-3 bg-purple-500/10 rounded-xl border border-purple-500/30">
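The click-outside effect added above closes the dropdown when a `mousedown` target is not contained in the dropdown's root element, and only attaches the listener while the dropdown is open. The decision itself is a pure predicate; a sketch using a DOM-free stand-in so the rule can be exercised without a browser (`shouldCloseDropdown` and `NodeLike` are hypothetical names):

```typescript
// Minimal stand-in for the part of the DOM Node API the rule depends on.
interface NodeLike { contains(other: NodeLike): boolean; }

// Close only when the dropdown is open, the container ref is mounted,
// and the event target is outside the container.
function shouldCloseDropdown(
  open: boolean,
  container: NodeLike | null,
  target: NodeLike
): boolean {
  return open && container !== null && !container.contains(target);
}
```

Gating `addEventListener` on `speedOpen`, as the component does, means the document listener exists only while it can have an effect; the cleanup removes it as soon as the dropdown closes.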
@@ -80,10 +80,13 @@ export function HomePage() {
     setTitleTopMargin,
     subtitleBottomMargin,
     setSubtitleBottomMargin,
     titleDisplayMode,
     setTitleDisplayMode,
+    outputAspectRatio,
+    setOutputAspectRatio,
     resolveAssetUrl,
     getFontFormat,
     buildTextShadow,
     materialDimensions,
     ttsMode,
     setTtsMode,
     voices,
@@ -106,6 +109,8 @@ export function HomePage() {
     saveEditing,
     cancelEditing,
     deleteRefAudio,
+    retranscribeRefAudio,
+    retranscribingId,
     recordedBlob,
     isRecording,
     recordingTime,
@@ -113,7 +118,6 @@ export function HomePage() {
     stopRecording,
     useRecording,
     formatRecordingTime,
-    fixedRefText,
     bgmList,
     bgmLoading,
     bgmError,
@@ -149,6 +153,8 @@ export function HomePage() {
     deleteAudio,
     renameAudio,
     selectAudio,
+    speed,
+    setSpeed,
     timelineSegments,
     reorderSegments,
     setSourceRange,
@@ -162,6 +168,11 @@ export function HomePage() {
     router.prefetch("/publish");
   }, [router]);

+  useEffect(() => {
+    if (typeof window === "undefined") return;
+    window.scrollTo({ top: 0, left: 0, behavior: "auto" });
+  }, []);
+
   const clipTrimmerSegment = useMemo(
     () => timelineSegments.find((s) => s.id === clipTrimmerSegmentId) ?? null,
     [timelineSegments, clipTrimmerSegmentId]
@@ -226,11 +237,13 @@ export function HomePage() {
         onTitleTopMarginChange={setTitleTopMargin}
         subtitleBottomMargin={subtitleBottomMargin}
         onSubtitleBottomMarginChange={setSubtitleBottomMargin}
         titleDisplayMode={titleDisplayMode}
         onTitleDisplayModeChange={setTitleDisplayMode}
         resolveAssetUrl={resolveAssetUrl}
         getFontFormat={getFontFormat}
         buildTextShadow={buildTextShadow}
-        previewBaseWidth={materialDimensions?.width || 1080}
-        previewBaseHeight={materialDimensions?.height || 1920}
+        previewBaseWidth={outputAspectRatio === "16:9" ? 1920 : 1080}
+        previewBaseHeight={outputAspectRatio === "16:9" ? 1080 : 1920}
       />

       {/* 3. Voiceover mode selection */}
@@ -259,6 +272,8 @@ export function HomePage() {
           onSaveEditing={saveEditing}
           onCancelEditing={cancelEditing}
           onDeleteRefAudio={deleteRefAudio}
+          onRetranscribe={retranscribeRefAudio}
+          retranscribingId={retranscribingId}
           recordedBlob={recordedBlob}
           isRecording={isRecording}
           recordingTime={recordingTime}
@@ -266,7 +281,6 @@ export function HomePage() {
           onStopRecording={stopRecording}
           onUseRecording={useRecording}
           formatRecordingTime={formatRecordingTime}
-          fixedRefText={fixedRefText}
         />
       )}
       />
@@ -283,6 +297,10 @@ export function HomePage() {
         onDeleteAudio={deleteAudio}
         onRenameAudio={renameAudio}
         hasText={!!text.trim()}
+        missingRefAudio={ttsMode === "voiceclone" && !selectedRefAudio}
+        speed={speed}
+        onSpeedChange={setSpeed}
+        ttsMode={ttsMode}
       />

       {/* 5. Video materials */}
@@ -325,6 +343,8 @@ export function HomePage() {
         audioUrl={selectedAudio ? (resolveMediaUrl(selectedAudio.path) || "") : ""}
         segments={timelineSegments}
         materials={materials}
+        outputAspectRatio={outputAspectRatio}
+        onOutputAspectRatioChange={setOutputAspectRatio}
         onReorderSegment={reorderSegments}
         onClickSegment={(seg) => {
           setClipTrimmerSegmentId(seg.id);
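The `previewBaseWidth`/`previewBaseHeight` change above stops deriving the preview canvas from the uploaded material's dimensions and instead fixes it per output ratio: 1920x1080 for landscape, 1080x1920 for portrait. The mapping can be written down directly (`previewBase` is a hypothetical helper capturing it, not code from the repo):

```typescript
// Hypothetical helper mirroring the fixed-canvas mapping used above.
function previewBase(ratio: "9:16" | "16:9"): { width: number; height: number } {
  return ratio === "16:9"
    ? { width: 1920, height: 1080 }   // landscape canvas
    : { width: 1080, height: 1920 };  // portrait canvas (default)
}
```

Fixing the canvas this way keeps the style preview stable regardless of which material is selected, so font and margin scaling only depends on the chosen output ratio.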
@@ -1,6 +1,6 @@
 import { useEffect, useState } from "react";
 import type { MouseEvent } from "react";
-import { Upload, RefreshCw, Play, Pause, Pencil, Trash2, Check, X, Mic, Square } from "lucide-react";
+import { Upload, RefreshCw, Play, Pause, Pencil, Trash2, Check, X, Mic, Square, RotateCw } from "lucide-react";

 interface RefAudio {
   id: string;
@@ -29,6 +29,8 @@ interface RefAudioPanelProps {
   onSaveEditing: (id: string, event: MouseEvent) => void;
   onCancelEditing: (event: MouseEvent) => void;
   onDeleteRefAudio: (id: string) => void;
+  onRetranscribe: (id: string) => void;
+  retranscribingId: string | null;
   recordedBlob: Blob | null;
   isRecording: boolean;
   recordingTime: number;
@@ -36,9 +38,10 @@ interface RefAudioPanelProps {
   onStopRecording: () => void;
   onUseRecording: () => void;
   formatRecordingTime: (seconds: number) => string;
-  fixedRefText: string;
 }

+const OLD_FIXED_REF_TEXT = "其实生活中有许多美好的瞬间";
+
 export function RefAudioPanel({
   refAudios,
   selectedRefAudio,
@@ -57,6 +60,8 @@ export function RefAudioPanel({
   onSaveEditing,
   onCancelEditing,
   onDeleteRefAudio,
+  onRetranscribe,
+  retranscribingId,
   recordedBlob,
   isRecording,
   recordingTime,
@@ -64,7 +69,6 @@ export function RefAudioPanel({
   onStopRecording,
   onUseRecording,
   formatRecordingTime,
-  fixedRefText,
 }: RefAudioPanelProps) {
   const [recordedUrl, setRecordedUrl] = useState<string | null>(null);

@@ -81,6 +85,9 @@ export function RefAudioPanel({
     };
   }, [recordedBlob]);

+  const needsRetranscribe = (audio: RefAudio) =>
+    audio.ref_text.startsWith(OLD_FIXED_REF_TEXT);
+
   return (
     <div className="space-y-4">
       <div>
@@ -122,7 +129,7 @@ export function RefAudioPanel({

       {isUploadingRef && (
         <div className="mb-2 p-2 bg-purple-500/10 rounded text-sm text-purple-300">
-          ⏳ 上传中...
+          ⏳ 上传并识别中...
         </div>
       )}

@@ -192,6 +199,17 @@ export function RefAudioPanel({
                 <Play className="h-3.5 w-3.5" />
               )}
             </button>
+            <button
+              onClick={(e) => {
+                e.stopPropagation();
+                onRetranscribe(audio.id);
+              }}
+              disabled={retranscribingId === audio.id}
+              className="text-gray-400 hover:text-cyan-400 text-xs disabled:opacity-50"
+              title="重新识别文字"
+            >
+              <RotateCw className={`h-3.5 w-3.5 ${retranscribingId === audio.id ? 'animate-spin' : ''}`} />
+            </button>
             <button
               onClick={(e) => onStartEditing(audio, e)}
               className="text-gray-400 hover:text-blue-400 text-xs"
@@ -211,7 +229,12 @@ export function RefAudioPanel({
             </button>
           </div>
         </div>
-        <div className="text-gray-400 text-xs">{audio.duration_sec.toFixed(1)}s</div>
+        <div className="text-gray-400 text-xs">
+          {audio.duration_sec.toFixed(1)}s
+          {needsRetranscribe(audio) && (
+            <span className="text-yellow-500 ml-1" title="需要重新识别文字">⚠</span>
+          )}
+        </div>
       </>
     )}
   </div>
@@ -221,7 +244,7 @@ export function RefAudioPanel({
       </div>

       <div className="border-t border-white/10 pt-4">
-        <span className="text-sm text-gray-300 mb-2 block">🎤 或在线录音</span>
+        <span className="text-sm text-gray-300 mb-2 block">🎤 或在线录音 <span className="text-xs text-gray-500">(建议 3-10 秒,超出将自动截取)</span></span>
         <div className="flex gap-2 items-center">
           {!isRecording ? (
             <button
@@ -264,15 +287,9 @@ export function RefAudioPanel({
           )}
         </div>

-        <div className="border-t border-white/10 pt-4">
-          <label className="text-sm text-gray-300 mb-2 block">📝 录音/上传时请朗读以下内容:</label>
-          <div className="w-full bg-black/30 border border-white/10 rounded-lg p-3 text-white text-sm">
-            {fixedRefText}
-          </div>
-          <p className="text-xs text-gray-500 mt-1">
-            请清晰朗读上述内容完成录音,系统将以此为参考克隆您的声音
-          </p>
-        </div>
+        <p className="text-xs text-gray-500 mt-2 border-t border-white/10 pt-3">
+          上传任意语音样本(3-10秒),系统将自动识别内容并克隆声音
+        </p>
       </div>
     );
 }
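The warning marker added above flags legacy reference audios: any entry whose `ref_text` still begins with the old fixed read-aloud script predates automatic transcription and should be re-recognized. As a pure function over the text alone:

```typescript
// The old fixed script users were asked to read aloud before
// automatic transcription replaced it (copied from the component above).
const OLD_FIXED_REF_TEXT = "其实生活中有许多美好的瞬间";

// A reference audio needs re-transcription if its stored text still
// starts with the legacy fixed script.
function needsRetranscribe(refText: string): boolean {
  return refText.startsWith(OLD_FIXED_REF_TEXT);
}
```

`startsWith` rather than equality catches entries where the fixed script was stored with trailing punctuation or extra recognized words appended.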
@@ -1,16 +1,19 @@
 import { useEffect, useRef, useCallback, useState } from "react";
 import WaveSurfer from "wavesurfer.js";
+import { ChevronDown } from "lucide-react";
 import type { TimelineSegment } from "@/features/home/model/useTimelineEditor";
 import type { Material } from "@/shared/types/material";

 interface TimelineEditorProps {
   audioDuration: number;
   audioUrl: string;
   segments: TimelineSegment[];
   materials: Material[];
+  outputAspectRatio: "9:16" | "16:9";
+  onOutputAspectRatioChange: (ratio: "9:16" | "16:9") => void;
   onReorderSegment: (fromIdx: number, toIdx: number) => void;
   onClickSegment: (segment: TimelineSegment) => void;
 }

 function formatTime(sec: number): string {
   const m = Math.floor(sec / 60);
@@ -18,32 +21,60 @@ function formatTime(sec: number): string {
   return `${String(m).padStart(2, "0")}:${s.toFixed(1).padStart(4, "0")}`;
 }

 export function TimelineEditor({
   audioDuration,
   audioUrl,
   segments,
   materials,
+  outputAspectRatio,
+  onOutputAspectRatioChange,
   onReorderSegment,
   onClickSegment,
 }: TimelineEditorProps) {
   const waveRef = useRef<HTMLDivElement>(null);
   const wsRef = useRef<WaveSurfer | null>(null);
   const [waveReady, setWaveReady] = useState(false);
   const [isPlaying, setIsPlaying] = useState(false);

   // Refs for high-frequency DOM updates (avoid 60fps re-renders)
   const playheadRef = useRef<HTMLDivElement>(null);
   const timeRef = useRef<HTMLSpanElement>(null);
   const audioDurationRef = useRef(audioDuration);
-  audioDurationRef.current = audioDuration;
+
+  useEffect(() => {
+    audioDurationRef.current = audioDuration;
+  }, [audioDuration]);

   // Drag-to-reorder state
   const [dragFromIdx, setDragFromIdx] = useState<number | null>(null);
   const [dragOverIdx, setDragOverIdx] = useState<number | null>(null);

+  // Aspect ratio dropdown
+  const [ratioOpen, setRatioOpen] = useState(false);
+  const ratioRef = useRef<HTMLDivElement>(null);
+  const ratioOptions = [
+    { value: "9:16" as const, label: "竖屏 9:16" },
+    { value: "16:9" as const, label: "横屏 16:9" },
+  ];
+  const currentRatioLabel =
+    ratioOptions.find((opt) => opt.value === outputAspectRatio)?.label ?? "竖屏 9:16";
+
+  useEffect(() => {
+    const handler = (e: MouseEvent) => {
+      if (ratioRef.current && !ratioRef.current.contains(e.target as Node)) {
+        setRatioOpen(false);
+      }
+    };
+    if (ratioOpen) document.addEventListener("mousedown", handler);
+    return () => document.removeEventListener("mousedown", handler);
+  }, [ratioOpen]);
+
   // Create / recreate wavesurfer when audioUrl changes
   useEffect(() => {
     if (!waveRef.current || !audioUrl) return;

+    const playheadEl = playheadRef.current;
+    const timeEl = timeRef.current;
+
     // Destroy previous instance
     if (wsRef.current) {
@@ -88,14 +119,14 @@ export function TimelineEditor({
     ws.load(audioUrl);
     wsRef.current = ws;

     return () => {
       ws.destroy();
       wsRef.current = null;
       setIsPlaying(false);
-      if (playheadRef.current) playheadRef.current.style.display = "none";
-      if (timeRef.current) timeRef.current.textContent = formatTime(0);
+      if (playheadEl) playheadEl.style.display = "none";
+      if (timeEl) timeEl.textContent = formatTime(0);
     };
   }, [audioUrl, waveReady]);

   // Callback ref to detect when waveRef div mounts
   const waveCallbackRef = useCallback((node: HTMLDivElement | null) => {
@@ -146,25 +177,60 @@ export function TimelineEditor({

   return (
     <div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm">
       <div className="flex items-center justify-between mb-3">
         <h2 className="text-base sm:text-lg font-semibold text-white flex items-center gap-2">
           🎞️ 时间轴编辑
         </h2>
-        {audioUrl && (
-          <div className="flex items-center gap-2 text-xs text-gray-400">
-            <button
-              onClick={handlePlayPause}
-              className="w-7 h-7 flex items-center justify-center rounded-full bg-white/10 hover:bg-white/20 text-white transition-colors"
-              title={isPlaying ? "暂停" : "播放"}
-            >
-              {isPlaying ? "⏸" : "▶"}
-            </button>
-            <span ref={timeRef} className="tabular-nums">00:00.0</span>
-            <span className="text-gray-600">/</span>
-            <span className="tabular-nums">{formatTime(audioDuration)}</span>
-          </div>
-        )}
+        <div className="flex items-center gap-2 text-xs text-gray-400">
+          <div ref={ratioRef} className="relative">
+            <button
+              type="button"
+              onClick={() => setRatioOpen((v) => !v)}
+              className="px-2 py-1 text-xs bg-white/10 hover:bg-white/20 rounded text-gray-300 whitespace-nowrap flex items-center gap-1 transition-all"
+              title="设置输出画面比例"
+            >
+              画面: {currentRatioLabel}
+              <ChevronDown className={`h-3 w-3 transition-transform ${ratioOpen ? "rotate-180" : ""}`} />
+            </button>
+            {ratioOpen && (
+              <div className="absolute right-0 top-full mt-1 bg-gray-800 border border-white/20 rounded-lg shadow-xl py-1 z-50 min-w-[106px]">
+                {ratioOptions.map((opt) => (
+                  <button
+                    key={opt.value}
+                    type="button"
+                    onClick={() => {
+                      onOutputAspectRatioChange(opt.value);
+                      setRatioOpen(false);
+                    }}
+                    className={`w-full text-left px-3 py-1.5 text-xs transition-colors ${
+                      outputAspectRatio === opt.value
+                        ? "bg-purple-600/40 text-purple-200"
+                        : "text-gray-300 hover:bg-white/10"
+                    }`}
+                  >
+                    {opt.label}
+                  </button>
+                ))}
+              </div>
+            )}
+          </div>
+
+          {audioUrl && (
+            <>
+              <button
+                onClick={handlePlayPause}
+                className="w-7 h-7 flex items-center justify-center rounded-full bg-white/10 hover:bg-white/20 text-white transition-colors"
+                title={isPlaying ? "暂停" : "播放"}
+              >
+                {isPlaying ? "⏸" : "▶"}
+              </button>
+              <span ref={timeRef} className="tabular-nums">00:00.0</span>
+              <span className="text-gray-600">/</span>
+              <span className="tabular-nums">{formatTime(audioDuration)}</span>
+            </>
+          )}
+        </div>
       </div>

       {/* Waveform — always rendered so ref stays mounted */}
|
||||
<div className="relative mb-1">
|
||||
@@ -187,19 +253,19 @@ export function TimelineEditor({
|
||||
const segDur = seg.end - seg.start;
|
||||
const isDragTarget = dragOverIdx === i && dragFromIdx !== i;
|
||||
|
||||
// Compute loop portion for the last visible segment
|
||||
const isLastVisible = i === visibleSegments.length - 1;
|
||||
let loopPercent = 0;
|
||||
if (isLastVisible && audioDuration > 0) {
|
||||
const mat = materials.find((m) => m.id === seg.materialId);
|
||||
const matDur = mat?.duration_sec ?? 0;
|
||||
const effDur = (seg.sourceEnd > seg.sourceStart)
|
||||
? (seg.sourceEnd - seg.sourceStart)
|
||||
: matDur;
|
||||
if (effDur > 0 && segDur > effDur + 0.1) {
|
||||
loopPercent = ((segDur - effDur) / segDur) * 100;
|
||||
}
|
||||
}
|
||||
// Compute loop portion for the last visible segment
|
||||
const isLastVisible = i === visibleSegments.length - 1;
|
||||
let loopPercent = 0;
|
||||
if (isLastVisible && audioDuration > 0) {
|
||||
const mat = materials.find((m) => m.id === seg.materialId);
|
||||
const matDur = mat?.duration_sec ?? 0;
|
||||
const effDur = (seg.sourceEnd > seg.sourceStart)
|
||||
? (seg.sourceEnd - seg.sourceStart)
|
||||
: Math.max(matDur - seg.sourceStart, 0);
|
||||
if (effDur > 0 && segDur > effDur + 0.1) {
|
||||
loopPercent = ((segDur - effDur) / segDur) * 100;
|
||||
}
|
||||
}
|
||||
|
||||
return (
|
||||
<div key={seg.id} className="absolute top-0 h-full" style={{ left: `${left}%`, width: `${width}%` }}>
|
||||
|
||||
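The loop-portion change above swaps the fallback effective duration from the full material length to the remainder after `sourceStart`. A standalone sketch of the corrected calculation — the names mirror the component, but this is a hypothetical extraction for illustration, not the component itself:

```typescript
interface Seg { start: number; end: number; sourceStart: number; sourceEnd: number; }

// Hypothetical standalone version of the loop-portion logic in the diff above:
// when a timeline segment runs longer than its available source material, the
// excess (looped) tail is reported as a percentage of the segment's width.
function loopPercent(seg: Seg, materialDurationSec: number): number {
  const segDur = seg.end - seg.start;
  // Effective source duration: explicit trim range if set, otherwise
  // whatever remains of the material after sourceStart (the fixed fallback).
  const effDur = seg.sourceEnd > seg.sourceStart
    ? seg.sourceEnd - seg.sourceStart
    : Math.max(materialDurationSec - seg.sourceStart, 0);
  if (effDur > 0 && segDur > effDur + 0.1) {
    return ((segDur - effDur) / segDur) * 100;
  }
  return 0;
}

// A 10s segment backed by only 4s of material (5s clip, starting 1s in)
// loops for 6s of its 10s width:
console.log(loopPercent({ start: 0, end: 10, sourceStart: 1, sourceEnd: 0 }, 5)); // 60
```

With the old `: matDur` fallback the same call would have used 5s instead of 4s of effective material and under-reported the looped portion.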
```diff
@@ -1,4 +1,4 @@
-import { Eye } from "lucide-react";
+import { ChevronDown, Eye } from "lucide-react";
 import { FloatingStylePreview } from "@/features/home/ui/FloatingStylePreview";

 interface SubtitleStyleOption {
@@ -52,6 +52,8 @@ interface TitleSubtitlePanelProps {
   onTitleTopMarginChange: (value: number) => void;
   subtitleBottomMargin: number;
   onSubtitleBottomMarginChange: (value: number) => void;
+  titleDisplayMode: "short" | "persistent";
+  onTitleDisplayModeChange: (mode: "short" | "persistent") => void;
   resolveAssetUrl: (path?: string | null) => string | null;
   getFontFormat: (fontFile?: string) => string;
   buildTextShadow: (color: string, size: number) => string;
@@ -80,6 +82,8 @@ export function TitleSubtitlePanel({
   onTitleTopMarginChange,
   subtitleBottomMargin,
   onSubtitleBottomMarginChange,
+  titleDisplayMode,
+  onTitleDisplayModeChange,
   resolveAssetUrl,
   getFontFormat,
   buildTextShadow,
@@ -123,7 +127,21 @@ export function TitleSubtitlePanel({
   )}

   <div className="mb-4">
-    <label className="text-sm text-gray-300 mb-2 block">片头标题(限制15个字)</label>
+    <div className="mb-2 flex items-center justify-between gap-2">
+      <label className="text-sm text-gray-300">片头标题(限制15个字)</label>
+      <div className="relative shrink-0">
+        <select
+          value={titleDisplayMode}
+          onChange={(e) => onTitleDisplayModeChange(e.target.value as "short" | "persistent")}
+          className="appearance-none rounded-lg border border-white/15 bg-black/35 px-2.5 py-1.5 pr-7 text-xs text-gray-200 outline-none transition-colors hover:border-white/25 focus:border-purple-500"
+          aria-label="标题显示方式"
+        >
+          <option value="short">短暂显示</option>
+          <option value="persistent">常驻显示</option>
+        </select>
+        <ChevronDown className="pointer-events-none absolute right-2 top-1/2 h-3.5 w-3.5 -translate-y-1/2 text-gray-400" />
+      </div>
+    </div>
     <input
       type="text"
       value={videoTitle}
```
```diff
@@ -12,7 +12,7 @@ const API_BASE = typeof window === 'undefined'
 // 防止重复跳转
 let isRedirecting = false;

-const PUBLIC_PATHS = new Set(['/login', '/register']);
+const PUBLIC_PATHS = new Set(['/login', '/register', '/pay']);

 // 创建 axios 实例
 const api = axios.create({
```
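The `PUBLIC_PATHS` set above is presumably consulted by a response interceptor so that auth failures on public pages (now including the new `/pay` page) do not trigger a login redirect. The interceptor itself is not shown in this diff, so the following is only an assumed sketch of that decision logic:

```typescript
const PUBLIC_PATHS = new Set(['/login', '/register', '/pay']);

// Hypothetical helper: decide whether a 401 response should bounce the user
// to the login page. Requests made from public pages are exempt, so an
// unauthenticated visitor on /pay is not yanked back to /login mid-payment.
function shouldRedirectToLogin(status: number, pathname: string): boolean {
  return status === 401 && !PUBLIC_PATHS.has(pathname);
}

console.log(shouldRedirectToLogin(401, '/pay'));       // false — payment page stays put
console.log(shouldRedirectToLogin(401, '/dashboard')); // true
```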
```diff
@@ -12,6 +12,7 @@ export interface AuthResponse {
   success: boolean;
   message: string;
   user?: User;
+  paymentToken?: string;
 }

 interface ApiResponse<T> {
@@ -25,20 +26,41 @@ interface ApiResponse<T> {
  * 用户注册
  */
 export async function register(phone: string, password: string, username?: string): Promise<AuthResponse> {
-  const { data: payload } = await api.post<ApiResponse<null>>('/api/auth/register', {
-    phone, password, username
-  });
-  return { success: payload.success, message: payload.message };
+  try {
+    const { data: payload } = await api.post<ApiResponse<null>>('/api/auth/register', {
+      phone, password, username
+    });
+    return { success: payload.success, message: payload.message };
+  } catch (err: any) {
+    return {
+      success: false,
+      message: err.response?.data?.message || '注册失败',
+    };
+  }
 }

 /**
  * 用户登录
  */
 export async function login(phone: string, password: string): Promise<AuthResponse> {
-  const { data: payload } = await api.post<ApiResponse<{ user?: User }>>('/api/auth/login', {
-    phone, password
-  });
-  return { success: payload.success, message: payload.message, user: payload.data?.user };
+  try {
+    const { data: payload } = await api.post<ApiResponse<{ user?: User }>>('/api/auth/login', {
+      phone, password
+    });
+    return { success: payload.success, message: payload.message, user: payload.data?.user };
+  } catch (err: any) {
+    if (err.response?.status === 403 && err.response?.data?.data?.reason === 'PAYMENT_REQUIRED') {
+      return {
+        success: false,
+        message: err.response.data.message,
+        paymentToken: err.response.data.data.payment_token,
+      };
+    }
+    return {
+      success: false,
+      message: err.response?.data?.message || '登录失败',
+    };
+  }
 }

 /**
```
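With the `paymentToken` field added to `AuthResponse`, a login caller can branch on it to send unpaid users to the payment page. A hedged sketch of that branching — the route names and query-parameter shape are assumptions, not taken from this diff:

```typescript
interface AuthResult { success: boolean; message: string; paymentToken?: string; }

// Hypothetical post-login routing: a successful login goes home, a
// PAYMENT_REQUIRED rejection carries a paymentToken and routes to the
// (assumed) /pay page with the token attached, any other failure stays
// on the login page so its message can be shown.
function nextRouteAfterLogin(res: AuthResult): string {
  if (res.success) return '/';
  if (res.paymentToken) return `/pay?token=${encodeURIComponent(res.paymentToken)}`;
  return '/login';
}

console.log(nextRouteAfterLogin({ success: false, message: 'payment required', paymentToken: 'tok123' }));
// → /pay?token=tok123
```

Note that `/pay` was added to `PUBLIC_PATHS` in the companion diff, so this redirect works for a user who is not yet authenticated.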
76 models/CosyVoice/CODE_OF_CONDUCT.md Normal file
@@ -0,0 +1,76 @@
# Contributor Covenant Code of Conduct

## Our Pledge

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

## Our Standards

Examples of behavior that contributes to creating a positive environment include:

* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting

## Our Responsibilities

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

## Scope

This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at mikelei@mobvoi.com. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html

[homepage]: https://www.contributor-covenant.org

For answers to common questions about this code of conduct, see https://www.contributor-covenant.org/faq
16 models/CosyVoice/FAQ.md Normal file
@@ -0,0 +1,16 @@
## ModuleNotFoundError: No module named 'matcha'

Matcha-TTS is a third_party module. Please check the `third_party` directory. If there is no `Matcha-TTS`, execute `git submodule update --init --recursive`.

Run `export PYTHONPATH=third_party/Matcha-TTS` if you want to use `from cosyvoice.cli.cosyvoice import CosyVoice` in a Python script.

## cannot find resource.zip or cannot unzip resource.zip

Please make sure you have git-lfs installed. Execute:

```sh
git clone https://www.modelscope.cn/iic/CosyVoice-ttsfrd.git pretrained_models/CosyVoice-ttsfrd
cd pretrained_models/CosyVoice-ttsfrd/
unzip resource.zip -d .
pip install ttsfrd-0.3.6-cp38-cp38-linux_x86_64.whl
```
201 models/CosyVoice/LICENSE Normal file
@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
264
models/CosyVoice/README.md
Normal file
264
models/CosyVoice/README.md
Normal file
@@ -0,0 +1,264 @@
|
||||

|
||||
|
||||
## 👉🏻 CosyVoice 👈🏻
|
||||
|
||||
**Fun-CosyVoice 3.0**: [Demos](https://funaudiollm.github.io/cosyvoice3/); [Paper](https://arxiv.org/pdf/2505.17589); [Modelscope](https://www.modelscope.cn/models/FunAudioLLM/Fun-CosyVoice3-0.5B-2512); [Huggingface](https://huggingface.co/FunAudioLLM/Fun-CosyVoice3-0.5B-2512); [CV3-Eval](https://github.com/FunAudioLLM/CV3-Eval)
|
||||
|
||||
**CosyVoice 2.0**: [Demos](https://funaudiollm.github.io/cosyvoice2/); [Paper](https://arxiv.org/pdf/2412.10117); [Modelscope](https://www.modelscope.cn/models/iic/CosyVoice2-0.5B); [HuggingFace](https://huggingface.co/FunAudioLLM/CosyVoice2-0.5B)
|
||||
|
||||
**CosyVoice 1.0**: [Demos](https://fun-audio-llm.github.io); [Paper](https://funaudiollm.github.io/pdf/CosyVoice_v1.pdf); [Modelscope](https://www.modelscope.cn/models/iic/CosyVoice-300M); [HuggingFace](https://huggingface.co/FunAudioLLM/CosyVoice-300M)
|
||||
|
||||
## Highlight🔥
|
||||
|
||||
**Fun-CosyVoice 3.0** is an advanced text-to-speech (TTS) system based on large language models (LLM), surpassing its predecessor (CosyVoice 2.0) in content consistency, speaker similarity, and prosody naturalness. It is designed for zero-shot multilingual speech synthesis in the wild.
|
||||
### Key Features
|
||||
- **Language Coverage**: Covers 9 common languages (Chinese, English, Japanese, Korean, German, Spanish, French, Italian, Russian), 18+ Chinese dialects/accents (Guangdong, Minnan, Sichuan, Dongbei, Shan3xi, Shan1xi, Shanghai, Tianjin, Shandong, Ningxia, Gansu, etc.) and meanwhile supports both multi-lingual/cross-lingual zero-shot voice cloning.
|
||||
- **Content Consistency & Naturalness**: Achieves state-of-the-art performance in content consistency, speaker similarity, and prosody naturalness.
|
||||
- **Pronunciation Inpainting**: Supports pronunciation inpainting of Chinese Pinyin and English CMU phonemes, providing more controllability and thus suitable for production use.
|
||||
- **Text Normalization**: Supports reading of numbers, special symbols and various text formats without a traditional frontend module.
|
||||
- **Bi-Streaming**: Support both text-in streaming and audio-out streaming, and achieves latency as low as 150ms while maintaining high-quality audio output.
|
||||
- **Instruct Support**: Supports various instructions such as languages, dialects, emotions, speed, volume, etc.
|
||||
|
||||
|
||||
## Roadmap
|
||||
|
||||
- [x] 2025/12
|
||||
|
||||
- [x] release Fun-CosyVoice3-0.5B-2512 base model, rl model and its training/inference script
|
||||
- [x] release Fun-CosyVoice3-0.5B modelscope gradio space
|
||||
|
||||
- [x] 2025/08
|
||||
|
||||
- [x] Thanks to the contribution from NVIDIA Yuekai Zhang, add triton trtllm runtime support and cosyvoice2 grpo training support
|
||||
|
||||
- [x] 2025/07
|
||||
|
||||
- [x] release Fun-CosyVoice 3.0 eval set
|
||||
|
||||
- [x] 2025/05
|
||||
|
||||
- [x] add CosyVoice2-0.5B vllm support
|
||||
|
||||
- [x] 2024/12
|
||||
|
||||
- [x] 25hz CosyVoice2-0.5B released
|
||||
|
||||
- [x] 2024/09
|
||||
|
||||
- [x] 25hz CosyVoice-300M base model
|
||||
- [x] 25hz CosyVoice-300M voice conversion function
|
||||
|
||||
- [x] 2024/08
|
||||
|
||||
- [x] Repetition Aware Sampling(RAS) inference for llm stability
|
||||
- [x] Streaming inference mode support, including kv cache and sdpa for rtf optimization
|
||||
|
||||
- [x] 2024/07
|
||||
|
||||
- [x] Flow matching training support
|
||||
- [x] WeTextProcessing support when ttsfrd is not available
|
||||
- [x] Fastapi server and client
|
||||
|
||||
## Evaluation
|
||||
|
||||
| Model | Open-Source | Model Size | test-zh<br>CER (%) ↓ | test-zh<br>SS (%) ↑ | test-en<br>WER (%) ↓ | test-en<br>SS (%) ↑ | test-hard<br>CER (%) ↓ | test-hard<br>SS (%) ↑ |
|
||||
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|
||||
| Human | - | - | 1.26 | 75.5 | 2.14 | 73.4 | - | - |
|
||||
| Seed-TTS | ❌ | - | 1.12 | 79.6 | 2.25 | 76.2 | 7.59 | 77.6 |
|
||||
| MiniMax-Speech | ❌ | - | 0.83 | 78.3 | 1.65 | 69.2 | - | - |
|
||||
| F5-TTS | ✅ | 0.3B | 1.52 | 74.1 | 2.00 | 64.7 | 8.67 | 71.3 |
|
||||
| Spark TTS | ✅ | 0.5B | 1.2 | 66.0 | 1.98 | 57.3 | - | - |
|
||||
| CosyVoice2 | ✅ | 0.5B | 1.45 | 75.7 | 2.57 | 65.9 | 6.83 | 72.4 |
|
||||
| FireRedTTS2 | ✅ | 1.5B | 1.14 | 73.2 | 1.95 | 66.5 | - | - |
|
||||
| Index-TTS2 | ✅ | 1.5B | 1.03 | 76.5 | 2.23 | 70.6 | 7.12 | 75.5 |
|
||||
| VibeVoice-1.5B | ✅ | 1.5B | 1.16 | 74.4 | 3.04 | 68.9 | - | - |
|
||||
| VibeVoice-Realtime | ✅ | 0.5B | - | - | 2.05 | 63.3 | - | - |
|
||||
| HiggsAudio-v2 | ✅ | 3B | 1.50 | 74.0 | 2.44 | 67.7 | - | - |
|
||||
| VoxCPM | ✅ | 0.5B | 0.93 | 77.2 | 1.85 | 72.9 | 8.87 | 73.0 |
|
||||
| GLM-TTS | ✅ | 1.5B | 1.03 | 76.1 | - | - | - | - |
|
||||
| GLM-TTS RL | ✅ | 1.5B | 0.89 | 76.4 | - | - | - | - |
|
||||
| Fun-CosyVoice3-0.5B-2512 | ✅ | 0.5B | 1.21 | 78.0 | 2.24 | 71.8 | 6.71 | 75.8 |
|
||||
| Fun-CosyVoice3-0.5B-2512_RL | ✅ | 0.5B | 0.81 | 77.4 | 1.68 | 69.5 | 5.44 | 75.0 |
|
||||
|
||||
|
||||
## Install
|
||||
|
||||
### Clone and install
|
||||
|
||||
- Clone the repo
|
||||
``` sh
|
||||
git clone --recursive https://github.com/FunAudioLLM/CosyVoice.git
|
||||
# If you failed to clone the submodule due to network failures, please run the following command until success
|
||||
cd CosyVoice
|
||||
git submodule update --init --recursive
|
||||
```
|
||||
|
||||
- Install Conda: please see https://docs.conda.io/en/latest/miniconda.html
|
||||
- Create Conda env:
|
||||
|
||||
``` sh
|
||||
conda create -n cosyvoice -y python=3.10
|
||||
conda activate cosyvoice
|
||||
pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
|
||||
|
||||
# If you encounter sox compatibility issues
|
||||
# ubuntu
|
||||
sudo apt-get install sox libsox-dev
|
||||
# centos
|
||||
sudo yum install sox sox-devel
|
||||
```
|
||||
|
||||
### Model download

We strongly recommend that you download our pretrained `Fun-CosyVoice3-0.5B`, `CosyVoice2-0.5B`, `CosyVoice-300M`, `CosyVoice-300M-SFT`, and `CosyVoice-300M-Instruct` models, as well as the `CosyVoice-ttsfrd` resource.

``` python
# modelscope SDK model download
from modelscope import snapshot_download
snapshot_download('FunAudioLLM/Fun-CosyVoice3-0.5B-2512', local_dir='pretrained_models/Fun-CosyVoice3-0.5B')
snapshot_download('iic/CosyVoice2-0.5B', local_dir='pretrained_models/CosyVoice2-0.5B')
snapshot_download('iic/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M')
snapshot_download('iic/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT')
snapshot_download('iic/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct')
snapshot_download('iic/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')

# for overseas users, huggingface SDK model download
from huggingface_hub import snapshot_download
snapshot_download('FunAudioLLM/Fun-CosyVoice3-0.5B-2512', local_dir='pretrained_models/Fun-CosyVoice3-0.5B')
snapshot_download('FunAudioLLM/CosyVoice2-0.5B', local_dir='pretrained_models/CosyVoice2-0.5B')
snapshot_download('FunAudioLLM/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M')
snapshot_download('FunAudioLLM/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT')
snapshot_download('FunAudioLLM/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct')
snapshot_download('FunAudioLLM/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')
```

Optionally, you can unzip the `ttsfrd` resource and install the `ttsfrd` package for better text normalization performance.

Note that this step is not necessary: if you do not install the `ttsfrd` package, wetext is used by default.

``` sh
cd pretrained_models/CosyVoice-ttsfrd/
unzip resource.zip -d .
pip install ttsfrd_dependency-0.1-py3-none-any.whl
pip install ttsfrd-0.4.2-cp310-cp310-linux_x86_64.whl
```

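After downloading, it can be handy to sanity-check that a model directory is complete before loading it. The sketch below is a hypothetical helper (not part of the repo); the file list follows what `CosyVoice.__init__` in `cosyvoice/cli/cosyvoice.py` expects to find inside a model directory:

```python
import os

# Files CosyVoice.__init__ looks for inside a model directory
# (per cosyvoice/cli/cosyvoice.py in this change).
REQUIRED = ['cosyvoice.yaml', 'llm.pt', 'flow.pt', 'hift.pt',
            'campplus.onnx', 'speech_tokenizer_v1.onnx', 'spk2info.pt']

def missing_files(model_dir):
    """Return the required files that are absent from model_dir."""
    return [f for f in REQUIRED if not os.path.exists(os.path.join(model_dir, f))]

print(missing_files('pretrained_models/CosyVoice-300M'))
```

If the returned list is non-empty, re-run the corresponding `snapshot_download` call above.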
### Basic Usage

We strongly recommend using `Fun-CosyVoice3-0.5B` for better performance.
Follow the code in `example.py` for detailed usage of each model.

```sh
python example.py
```
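Each `inference_*` method in `example.py` is a generator: every yielded dict carries a `tts_speech` tensor of shape `(1, num_samples)`, and with `stream=True` the full waveform is the concatenation of the yielded chunks along the time axis. A minimal sketch of that bookkeeping, using a stand-in generator so no model download is needed (`fake_stream` is hypothetical and only mimics the shape of e.g. `cosyvoice.inference_sft(...)` outputs):

```python
import torch

# Stand-in for cosyvoice.inference_sft(...): yields dicts shaped like the
# real model outputs, i.e. {'tts_speech': tensor of shape (1, num_samples)}.
def fake_stream(chunk_sizes):
    for n in chunk_sizes:
        yield {'tts_speech': torch.zeros(1, n)}

sample_rate = 24000  # CosyVoice2/3 output rate; CosyVoice-300M uses 22050
chunks = [out['tts_speech'] for out in fake_stream([4800, 4800, 2400])]
speech = torch.cat(chunks, dim=1)         # concatenate along the time axis
duration = speech.shape[1] / sample_rate  # seconds of synthesized audio
print(speech.shape, duration)
```

With the real model, replace `fake_stream(...)` with one of the `inference_*` calls and save `speech` via `torchaudio.save(...)`.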

#### vLLM Usage

CosyVoice2/3 now supports **vLLM 0.11.x+ (V1 engine)** and **vLLM 0.9.0 (legacy)**.
Older vLLM versions (<0.9.0) do not support CosyVoice inference, and versions in between (e.g., 0.10.x) are untested.

Note that `vllm` pins many specific dependencies. Create a separate env for it, so that if your hardware does not support vLLM your original env is not corrupted.

``` sh
conda create -n cosyvoice_vllm --clone cosyvoice
conda activate cosyvoice_vllm
# for vllm==0.9.0
pip install vllm==v0.9.0 transformers==4.51.3 numpy==1.26.4 -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
# for vllm>=0.11.0
pip install vllm==v0.11.0 transformers==4.57.1 numpy==1.26.4 -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
python vllm_example.py
```

#### Start web demo

You can use our web demo page to get familiar with CosyVoice quickly.

Please see the demo website for details.

``` sh
# use pretrained_models/CosyVoice-300M-SFT for sft inference, or pretrained_models/CosyVoice-300M-Instruct for instruct inference
python3 webui.py --port 50000 --model_dir pretrained_models/CosyVoice-300M
```

#### Advanced Usage

For advanced users, we have provided training and inference scripts in `examples/libritts`.

#### Build for deployment

Optionally, if you want service deployment, you can run the following steps.

``` sh
cd runtime/python
docker build -t cosyvoice:v1.0 .
# change iic/CosyVoice-300M to iic/CosyVoice-300M-Instruct if you want to use instruct inference
# for grpc usage
docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/grpc && python3 server.py --port 50000 --max_conc 4 --model_dir iic/CosyVoice-300M && sleep infinity"
cd grpc && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
# for fastapi usage
docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/fastapi && python3 server.py --port 50000 --model_dir iic/CosyVoice-300M && sleep infinity"
cd fastapi && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
```

#### Using Nvidia TensorRT-LLM for deployment

Using TensorRT-LLM to accelerate the cosyvoice2 llm can give a 4x speedup compared with the huggingface transformers implementation.
To quick start:

``` sh
cd runtime/triton_trtllm
docker compose up -d
```

For more details, you can check [here](https://github.com/FunAudioLLM/CosyVoice/tree/main/runtime/triton_trtllm).

## Discussion & Communication

You can directly discuss on [GitHub Issues](https://github.com/FunAudioLLM/CosyVoice/issues).

You can also scan the QR code to join our official Dingding chat group.

<img src="./asset/dingding.png" width="250px">

## Acknowledgements

1. We borrowed a lot of code from [FunASR](https://github.com/modelscope/FunASR).
2. We borrowed a lot of code from [FunCodec](https://github.com/modelscope/FunCodec).
3. We borrowed a lot of code from [Matcha-TTS](https://github.com/shivammehta25/Matcha-TTS).
4. We borrowed a lot of code from [AcademiCodec](https://github.com/yangdongchao/AcademiCodec).
5. We borrowed a lot of code from [WeNet](https://github.com/wenet-e2e/wenet).

## Citations

``` bibtex
@article{du2024cosyvoice,
  title={Cosyvoice: A scalable multilingual zero-shot text-to-speech synthesizer based on supervised semantic tokens},
  author={Du, Zhihao and Chen, Qian and Zhang, Shiliang and Hu, Kai and Lu, Heng and Yang, Yexin and Hu, Hangrui and Zheng, Siqi and Gu, Yue and Ma, Ziyang and others},
  journal={arXiv preprint arXiv:2407.05407},
  year={2024}
}

@article{du2024cosyvoice2,
  title={Cosyvoice 2: Scalable streaming speech synthesis with large language models},
  author={Du, Zhihao and Wang, Yuxuan and Chen, Qian and Shi, Xian and Lv, Xiang and Zhao, Tianyu and Gao, Zhifu and Yang, Yexin and Gao, Changfeng and Wang, Hui and others},
  journal={arXiv preprint arXiv:2412.10117},
  year={2024}
}

@article{du2025cosyvoice,
  title={CosyVoice 3: Towards In-the-wild Speech Generation via Scaling-up and Post-training},
  author={Du, Zhihao and Gao, Changfeng and Wang, Yuxuan and Yu, Fan and Zhao, Tianyu and Wang, Hao and Lv, Xiang and Wang, Hui and Shi, Xian and An, Keyu and others},
  journal={arXiv preprint arXiv:2505.17589},
  year={2025}
}

@inproceedings{lyu2025build,
  title={Build LLM-Based Zero-Shot Streaming TTS System with Cosyvoice},
  author={Lyu, Xiang and Wang, Yuxuan and Zhao, Tianyu and Wang, Hao and Liu, Huadai and Du, Zhihao},
  booktitle={ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={1--2},
  year={2025},
  organization={IEEE}
}
```

## Disclaimer

The content provided above is for academic purposes only and is intended to demonstrate technical capabilities. Some examples are sourced from the internet. If any content infringes on your rights, please contact us to request its removal.

0 models/CosyVoice/cosyvoice/__init__.py Normal file
93 models/CosyVoice/cosyvoice/bin/average_model.py Normal file
@@ -0,0 +1,93 @@
# Copyright (c) 2020 Mobvoi Inc (Di Wu)
# Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import argparse
import glob

import yaml
import torch


def get_args():
    parser = argparse.ArgumentParser(description='average model')
    parser.add_argument('--dst_model', required=True, help='averaged model')
    parser.add_argument('--src_path',
                        required=True,
                        help='src model path for average')
    parser.add_argument('--val_best',
                        action="store_true",
                        help='averaged model')
    parser.add_argument('--num',
                        default=5,
                        type=int,
                        help='nums for averaged model')

    args = parser.parse_args()
    print(args)
    return args


def main():
    args = get_args()
    val_scores = []
    if args.val_best:
        yamls = glob.glob('{}/*.yaml'.format(args.src_path))
        yamls = [
            f for f in yamls
            if not (os.path.basename(f).startswith('train')
                    or os.path.basename(f).startswith('init'))
        ]
        for y in yamls:
            with open(y, 'r') as f:
                dic_yaml = yaml.load(f, Loader=yaml.BaseLoader)
                loss = float(dic_yaml['loss_dict']['loss'])
                epoch = int(dic_yaml['epoch'])
                step = int(dic_yaml['step'])
                tag = dic_yaml['tag']
                val_scores += [[epoch, step, loss, tag]]
        sorted_val_scores = sorted(val_scores,
                                   key=lambda x: x[2],
                                   reverse=False)
        print("best val (epoch, step, loss, tag) = " +
              str(sorted_val_scores[:args.num]))
        path_list = [
            args.src_path + '/epoch_{}_whole.pt'.format(score[0])
            for score in sorted_val_scores[:args.num]
        ]
        print(path_list)
    avg = {}
    num = args.num
    assert num == len(path_list)
    for path in path_list:
        print('Processing {}'.format(path))
        states = torch.load(path, map_location=torch.device('cpu'))
        for k in states.keys():
            if k not in ['step', 'epoch']:
                if k not in avg.keys():
                    avg[k] = states[k].clone()
                else:
                    avg[k] += states[k]
    # average
    for k in avg.keys():
        if avg[k] is not None:
            # pytorch 1.6 use true_divide instead of /=
            avg[k] = torch.true_divide(avg[k], num)
    print('Saving to {}'.format(args.dst_model))
    torch.save(avg, args.dst_model)


if __name__ == '__main__':
    main()
99 models/CosyVoice/cosyvoice/bin/export_jit.py Normal file
@@ -0,0 +1,99 @@
# Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import print_function

import argparse
import logging
logging.getLogger('matplotlib').setLevel(logging.WARNING)
import os
import sys
import torch
ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.append('{}/../..'.format(ROOT_DIR))
sys.path.append('{}/../../third_party/Matcha-TTS'.format(ROOT_DIR))
from cosyvoice.cli.cosyvoice import AutoModel
from cosyvoice.utils.file_utils import logging


def get_args():
    parser = argparse.ArgumentParser(description='export your model for deployment')
    parser.add_argument('--model_dir',
                        type=str,
                        default='pretrained_models/CosyVoice-300M',
                        help='local path')
    args = parser.parse_args()
    print(args)
    return args


def get_optimized_script(model, preserved_attrs=[]):
    script = torch.jit.script(model)
    if preserved_attrs != []:
        script = torch.jit.freeze(script, preserved_attrs=preserved_attrs)
    else:
        script = torch.jit.freeze(script)
    script = torch.jit.optimize_for_inference(script)
    return script


def main():
    args = get_args()
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s %(levelname)s %(message)s')

    torch._C._jit_set_fusion_strategy([('STATIC', 1)])
    torch._C._jit_set_profiling_mode(False)
    torch._C._jit_set_profiling_executor(False)

    model = AutoModel(model_dir=args.model_dir)

    if model.__class__.__name__ == 'CosyVoice':
        # 1. export llm text_encoder
        llm_text_encoder = model.model.llm.text_encoder
        script = get_optimized_script(llm_text_encoder)
        script.save('{}/llm.text_encoder.fp32.zip'.format(args.model_dir))
        script = get_optimized_script(llm_text_encoder.half())
        script.save('{}/llm.text_encoder.fp16.zip'.format(args.model_dir))
        logging.info('successfully export llm_text_encoder')

        # 2. export llm llm
        llm_llm = model.model.llm.llm
        script = get_optimized_script(llm_llm, ['forward_chunk'])
        script.save('{}/llm.llm.fp32.zip'.format(args.model_dir))
        script = get_optimized_script(llm_llm.half(), ['forward_chunk'])
        script.save('{}/llm.llm.fp16.zip'.format(args.model_dir))
        logging.info('successfully export llm_llm')

        # 3. export flow encoder
        flow_encoder = model.model.flow.encoder
        script = get_optimized_script(flow_encoder)
        script.save('{}/flow.encoder.fp32.zip'.format(args.model_dir))
        script = get_optimized_script(flow_encoder.half())
        script.save('{}/flow.encoder.fp16.zip'.format(args.model_dir))
        logging.info('successfully export flow_encoder')
    elif model.__class__.__name__ == 'CosyVoice2':
        # 1. export flow encoder
        flow_encoder = model.model.flow.encoder
        script = get_optimized_script(flow_encoder)
        script.save('{}/flow.encoder.fp32.zip'.format(args.model_dir))
        script = get_optimized_script(flow_encoder.half())
        script.save('{}/flow.encoder.fp16.zip'.format(args.model_dir))
        logging.info('successfully export flow_encoder')
    else:
        raise ValueError('unsupported model type')


if __name__ == '__main__':
    main()
114 models/CosyVoice/cosyvoice/bin/export_onnx.py Normal file
@@ -0,0 +1,114 @@
# Copyright (c) 2024 Antgroup Inc (authors: Zhoubofan, hexisyztem@icloud.com)
# Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import print_function

import argparse
import logging
logging.getLogger('matplotlib').setLevel(logging.WARNING)
import os
import sys
import onnxruntime
import random
import torch
from tqdm import tqdm
ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.append('{}/../..'.format(ROOT_DIR))
sys.path.append('{}/../../third_party/Matcha-TTS'.format(ROOT_DIR))
from cosyvoice.cli.cosyvoice import AutoModel
from cosyvoice.utils.file_utils import logging


def get_dummy_input(batch_size, seq_len, out_channels, device):
    x = torch.rand((batch_size, out_channels, seq_len), dtype=torch.float32, device=device)
    mask = torch.ones((batch_size, 1, seq_len), dtype=torch.float32, device=device)
    mu = torch.rand((batch_size, out_channels, seq_len), dtype=torch.float32, device=device)
    t = torch.rand((batch_size), dtype=torch.float32, device=device)
    spks = torch.rand((batch_size, out_channels), dtype=torch.float32, device=device)
    cond = torch.rand((batch_size, out_channels, seq_len), dtype=torch.float32, device=device)
    return x, mask, mu, t, spks, cond


def get_args():
    parser = argparse.ArgumentParser(description='export your model for deployment')
    parser.add_argument('--model_dir',
                        type=str,
                        default='pretrained_models/CosyVoice-300M',
                        help='local path')
    args = parser.parse_args()
    print(args)
    return args


@torch.no_grad()
def main():
    args = get_args()
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s %(levelname)s %(message)s')

    model = AutoModel(model_dir=args.model_dir)

    # 1. export flow decoder estimator
    estimator = model.model.flow.decoder.estimator
    estimator.eval()

    device = model.model.device
    batch_size, seq_len = 2, 256
    out_channels = model.model.flow.decoder.estimator.out_channels
    x, mask, mu, t, spks, cond = get_dummy_input(batch_size, seq_len, out_channels, device)
    torch.onnx.export(
        estimator,
        (x, mask, mu, t, spks, cond),
        '{}/flow.decoder.estimator.fp32.onnx'.format(args.model_dir),
        export_params=True,
        opset_version=18,
        do_constant_folding=True,
        input_names=['x', 'mask', 'mu', 't', 'spks', 'cond'],
        output_names=['estimator_out'],
        dynamic_axes={
            'x': {2: 'seq_len'},
            'mask': {2: 'seq_len'},
            'mu': {2: 'seq_len'},
            'cond': {2: 'seq_len'},
            'estimator_out': {2: 'seq_len'},
        }
    )

    # 2. test computation consistency
    option = onnxruntime.SessionOptions()
    option.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL
    option.intra_op_num_threads = 1
    providers = ['CUDAExecutionProvider' if torch.cuda.is_available() else 'CPUExecutionProvider']
    estimator_onnx = onnxruntime.InferenceSession('{}/flow.decoder.estimator.fp32.onnx'.format(args.model_dir),
                                                  sess_options=option, providers=providers)

    for _ in tqdm(range(10)):
        x, mask, mu, t, spks, cond = get_dummy_input(batch_size, random.randint(16, 512), out_channels, device)
        output_pytorch = estimator(x, mask, mu, t, spks, cond)
        ort_inputs = {
            'x': x.cpu().numpy(),
            'mask': mask.cpu().numpy(),
            'mu': mu.cpu().numpy(),
            't': t.cpu().numpy(),
            'spks': spks.cpu().numpy(),
            'cond': cond.cpu().numpy()
        }
        output_onnx = estimator_onnx.run(None, ort_inputs)[0]
        torch.testing.assert_allclose(output_pytorch, torch.from_numpy(output_onnx).to(device), rtol=1e-2, atol=1e-4)
    logging.info('successfully export estimator')


if __name__ == "__main__":
    main()
195 models/CosyVoice/cosyvoice/bin/train.py Normal file
@@ -0,0 +1,195 @@
# Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import print_function
import argparse
import datetime
import logging
logging.getLogger('matplotlib').setLevel(logging.WARNING)
from copy import deepcopy
import os
import torch
import torch.distributed as dist
import deepspeed

from hyperpyyaml import load_hyperpyyaml

from torch.distributed.elastic.multiprocessing.errors import record

from cosyvoice.utils.losses import DPOLoss
from cosyvoice.utils.executor import Executor
from cosyvoice.utils.train_utils import (
    init_distributed,
    init_dataset_and_dataloader,
    init_optimizer_and_scheduler,
    init_summarywriter, save_model,
    wrap_cuda_model, check_modify_and_save_config)


def get_args():
    parser = argparse.ArgumentParser(description='training your network')
    parser.add_argument('--train_engine',
                        default='torch_ddp',
                        choices=['torch_ddp', 'deepspeed'],
                        help='Engine for paralleled training')
    parser.add_argument('--model', required=True, help='model which will be trained')
    parser.add_argument('--ref_model', required=False, help='ref model used in dpo')
    parser.add_argument('--config', required=True, help='config file')
    parser.add_argument('--train_data', required=True, help='train data file')
    parser.add_argument('--cv_data', required=True, help='cv data file')
    parser.add_argument('--qwen_pretrain_path', required=False, help='qwen pretrain path')
    parser.add_argument('--onnx_path', required=False, help='onnx path, which is required for online feature extraction')
    parser.add_argument('--checkpoint', help='checkpoint model')
    parser.add_argument('--model_dir', required=True, help='save model dir')
    parser.add_argument('--tensorboard_dir',
                        default='tensorboard',
                        help='tensorboard log dir')
    parser.add_argument('--ddp.dist_backend',
                        dest='dist_backend',
                        default='nccl',
                        choices=['nccl', 'gloo'],
                        help='distributed backend')
    parser.add_argument('--num_workers',
                        default=0,
                        type=int,
                        help='num of subprocess workers for reading')
    parser.add_argument('--prefetch',
                        default=100,
                        type=int,
                        help='prefetch number')
    parser.add_argument('--pin_memory',
                        action='store_true',
                        default=False,
                        help='Use pinned memory buffers used for reading')
    parser.add_argument('--use_amp',
                        action='store_true',
                        default=False,
                        help='Use automatic mixed precision training')
    parser.add_argument('--dpo',
                        action='store_true',
                        default=False,
                        help='Use Direct Preference Optimization')
    parser.add_argument('--deepspeed.save_states',
                        dest='save_states',
                        default='model_only',
                        choices=['model_only', 'model+optimizer'],
                        help='save model/optimizer states')
    parser.add_argument('--timeout',
                        default=60,
                        type=int,
                        help='timeout (in seconds) of cosyvoice_join.')
    parser = deepspeed.add_config_arguments(parser)
    args = parser.parse_args()
    return args


@record
def main():
    args = get_args()
    os.environ['onnx_path'] = args.onnx_path
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s %(levelname)s %(message)s')
    # gan train has some special initialization logic
    gan = True if args.model == 'hifigan' else False

    override_dict = {k: None for k in ['llm', 'flow', 'hift', 'hifigan'] if k != args.model}
    if gan is True:
        override_dict.pop('hift')
    if args.qwen_pretrain_path is not None:
        override_dict['qwen_pretrain_path'] = args.qwen_pretrain_path
    with open(args.config, 'r') as f:
        configs = load_hyperpyyaml(f, overrides=override_dict)
    if gan is True:
        configs['train_conf'] = configs['train_conf_gan']
    configs['train_conf'].update(vars(args))

    # Init env for ddp
    init_distributed(args)

    # Get dataset & dataloader
    train_dataset, cv_dataset, train_data_loader, cv_data_loader = \
        init_dataset_and_dataloader(args, configs, gan, args.dpo)

    # Do some sanity checks and save config to args.model_dir
    configs = check_modify_and_save_config(args, configs)

    # Tensorboard summary
    writer = init_summarywriter(args)

    # load checkpoint
    if args.dpo is True:
        configs[args.model].forward = configs[args.model].forward_dpo
    model = configs[args.model]
    start_step, start_epoch = 0, -1
    if args.checkpoint is not None:
        if os.path.exists(args.checkpoint):
            state_dict = torch.load(args.checkpoint, map_location='cpu')
            model.load_state_dict(state_dict, strict=False)
            if 'step' in state_dict:
                start_step = state_dict['step']
            if 'epoch' in state_dict:
                start_epoch = state_dict['epoch']
        else:
            logging.warning('checkpoint {} do not exsist!'.format(args.checkpoint))

    # Dispatch model from cpu to gpu
    model = wrap_cuda_model(args, model)

    # Get optimizer & scheduler
    model, optimizer, scheduler, optimizer_d, scheduler_d = init_optimizer_and_scheduler(args, configs, model, gan)
    scheduler.set_step(start_step)
    if scheduler_d is not None:
        scheduler_d.set_step(start_step)

    # Save init checkpoints
    info_dict = deepcopy(configs['train_conf'])
    info_dict['step'] = start_step
    info_dict['epoch'] = start_epoch
    save_model(model, 'init', info_dict)

    # DPO related
    if args.dpo is True:
        ref_model = deepcopy(configs[args.model])
        state_dict = torch.load(args.ref_model, map_location='cpu')
        ref_model.load_state_dict(state_dict, strict=False)
        dpo_loss = DPOLoss(beta=0.01, label_smoothing=0.0, ipo=False)
        # NOTE maybe it is not needed to wrap ref_model as ddp because its parameter is not updated
        ref_model = wrap_cuda_model(args, ref_model)
    else:
        ref_model, dpo_loss = None, None

    # Get executor
    executor = Executor(gan=gan, ref_model=ref_model, dpo_loss=dpo_loss)
    executor.step = start_step

    # Init scaler, used for pytorch amp mixed precision training
    scaler = torch.cuda.amp.GradScaler() if args.use_amp else None
    print('start step {} start epoch {}'.format(start_step, start_epoch))

    # Start training loop
    for epoch in range(start_epoch + 1, info_dict['max_epoch']):
        executor.epoch = epoch
        train_dataset.set_epoch(epoch)
        dist.barrier()
        group_join = dist.new_group(backend="gloo", timeout=datetime.timedelta(seconds=args.timeout))
        if gan is True:
            executor.train_one_epoc_gan(model, optimizer, scheduler, optimizer_d, scheduler_d, train_data_loader, cv_data_loader,
                                        writer, info_dict, scaler, group_join)
        else:
            executor.train_one_epoc(model, optimizer, scheduler, train_data_loader, cv_data_loader, writer, info_dict, scaler, group_join, ref_model=ref_model)
        dist.destroy_process_group(group_join)


if __name__ == '__main__':
    main()
0 models/CosyVoice/cosyvoice/cli/__init__.py Normal file
240 models/CosyVoice/cosyvoice/cli/cosyvoice.py Normal file
@@ -0,0 +1,240 @@
# Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import time
from typing import Generator
from tqdm import tqdm
from hyperpyyaml import load_hyperpyyaml
from modelscope import snapshot_download
import torch
from cosyvoice.cli.frontend import CosyVoiceFrontEnd
from cosyvoice.cli.model import CosyVoiceModel, CosyVoice2Model, CosyVoice3Model
from cosyvoice.utils.file_utils import logging
from cosyvoice.utils.class_utils import get_model_type


class CosyVoice:

    def __init__(self, model_dir, load_jit=False, load_trt=False, fp16=False, trt_concurrent=1):
        self.model_dir = model_dir
        self.fp16 = fp16
        if not os.path.exists(model_dir):
            model_dir = snapshot_download(model_dir)
        hyper_yaml_path = '{}/cosyvoice.yaml'.format(model_dir)
        if not os.path.exists(hyper_yaml_path):
            raise ValueError('{} not found!'.format(hyper_yaml_path))
        with open(hyper_yaml_path, 'r') as f:
            configs = load_hyperpyyaml(f)
        assert get_model_type(configs) == CosyVoiceModel, 'do not use {} for CosyVoice initialization!'.format(model_dir)
        self.frontend = CosyVoiceFrontEnd(configs['get_tokenizer'],
                                          configs['feat_extractor'],
                                          '{}/campplus.onnx'.format(model_dir),
                                          '{}/speech_tokenizer_v1.onnx'.format(model_dir),
                                          '{}/spk2info.pt'.format(model_dir),
                                          configs['allowed_special'])
        self.sample_rate = configs['sample_rate']
        if torch.cuda.is_available() is False and (load_jit is True or load_trt is True or fp16 is True):
            load_jit, load_trt, fp16 = False, False, False
            logging.warning('no cuda device, set load_jit/load_trt/fp16 to False')
        self.model = CosyVoiceModel(configs['llm'], configs['flow'], configs['hift'], fp16)
        self.model.load('{}/llm.pt'.format(model_dir),
                        '{}/flow.pt'.format(model_dir),
                        '{}/hift.pt'.format(model_dir))
        if load_jit:
            self.model.load_jit('{}/llm.text_encoder.{}.zip'.format(model_dir, 'fp16' if self.fp16 is True else 'fp32'),
                                '{}/llm.llm.{}.zip'.format(model_dir, 'fp16' if self.fp16 is True else 'fp32'),
                                '{}/flow.encoder.{}.zip'.format(model_dir, 'fp16' if self.fp16 is True else 'fp32'))
        if load_trt:
            self.model.load_trt('{}/flow.decoder.estimator.{}.mygpu.plan'.format(model_dir, 'fp16' if self.fp16 is True else 'fp32'),
                                '{}/flow.decoder.estimator.fp32.onnx'.format(model_dir),
                                trt_concurrent,
                                self.fp16)
        del configs

    def list_available_spks(self):
        spks = list(self.frontend.spk2info.keys())
        return spks

    def add_zero_shot_spk(self, prompt_text, prompt_wav, zero_shot_spk_id):
        assert zero_shot_spk_id != '', 'do not use empty zero_shot_spk_id'
        model_input = self.frontend.frontend_zero_shot('', prompt_text, prompt_wav, self.sample_rate, '')
        del model_input['text']
        del model_input['text_len']
        self.frontend.spk2info[zero_shot_spk_id] = model_input
        return True

    def save_spkinfo(self):
        torch.save(self.frontend.spk2info, '{}/spk2info.pt'.format(self.model_dir))

    def inference_sft(self, tts_text, spk_id, stream=False, speed=1.0, text_frontend=True):
        for i in tqdm(self.frontend.text_normalize(tts_text, split=True, text_frontend=text_frontend)):
            model_input = self.frontend.frontend_sft(i, spk_id)
            start_time = time.time()
            logging.info('synthesis text {}'.format(i))
            for model_output in self.model.tts(**model_input, stream=stream, speed=speed):
                speech_len = model_output['tts_speech'].shape[1] / self.sample_rate
                logging.info('yield speech len {}, rtf {}'.format(speech_len, (time.time() - start_time) / speech_len))
                yield model_output
                start_time = time.time()

    def inference_zero_shot(self, tts_text, prompt_text, prompt_wav, zero_shot_spk_id='', stream=False, speed=1.0, text_frontend=True):
        if self.__class__.__name__ == 'CosyVoice3' and '<|endofprompt|>' not in prompt_text + tts_text:
            logging.warning('<|endofprompt|> not found in CosyVoice3 inference, check your input text')
        prompt_text = self.frontend.text_normalize(prompt_text, split=False, text_frontend=text_frontend)
        for i in tqdm(self.frontend.text_normalize(tts_text, split=True, text_frontend=text_frontend)):
            if (not isinstance(i, Generator)) and len(i) < 0.5 * len(prompt_text):
                logging.warning('synthesis text {} too short than prompt text {}, this may lead to bad performance'.format(i, prompt_text))
            model_input = self.frontend.frontend_zero_shot(i, prompt_text, prompt_wav, self.sample_rate, zero_shot_spk_id)
            start_time = time.time()
            logging.info('synthesis text {}'.format(i))
            for model_output in self.model.tts(**model_input, stream=stream, speed=speed):
                speech_len = model_output['tts_speech'].shape[1] / self.sample_rate
                logging.info('yield speech len {}, rtf {}'.format(speech_len, (time.time() - start_time) / speech_len))
                yield model_output
                start_time = time.time()

    def inference_cross_lingual(self, tts_text, prompt_wav, zero_shot_spk_id='', stream=False, speed=1.0, text_frontend=True):
        for i in tqdm(self.frontend.text_normalize(tts_text, split=True, text_frontend=text_frontend)):
            model_input = self.frontend.frontend_cross_lingual(i, prompt_wav, self.sample_rate, zero_shot_spk_id)
            start_time = time.time()
            logging.info('synthesis text {}'.format(i))
            for model_output in self.model.tts(**model_input, stream=stream, speed=speed):
|
||||
speech_len = model_output['tts_speech'].shape[1] / self.sample_rate
|
||||
logging.info('yield speech len {}, rtf {}'.format(speech_len, (time.time() - start_time) / speech_len))
|
||||
yield model_output
|
||||
start_time = time.time()
|
||||
|
||||
def inference_instruct(self, tts_text, spk_id, instruct_text, stream=False, speed=1.0, text_frontend=True):
|
||||
assert self.__class__.__name__ == 'CosyVoice', 'inference_instruct is only implemented for CosyVoice!'
|
||||
instruct_text = self.frontend.text_normalize(instruct_text, split=False, text_frontend=text_frontend)
|
||||
for i in tqdm(self.frontend.text_normalize(tts_text, split=True, text_frontend=text_frontend)):
|
||||
model_input = self.frontend.frontend_instruct(i, spk_id, instruct_text)
|
||||
start_time = time.time()
|
||||
logging.info('synthesis text {}'.format(i))
|
||||
for model_output in self.model.tts(**model_input, stream=stream, speed=speed):
|
||||
speech_len = model_output['tts_speech'].shape[1] / self.sample_rate
|
||||
logging.info('yield speech len {}, rtf {}'.format(speech_len, (time.time() - start_time) / speech_len))
|
||||
yield model_output
|
||||
start_time = time.time()
|
||||
|
||||
def inference_vc(self, source_wav, prompt_wav, stream=False, speed=1.0):
|
||||
model_input = self.frontend.frontend_vc(source_wav, prompt_wav, self.sample_rate)
|
||||
start_time = time.time()
|
||||
for model_output in self.model.tts(**model_input, stream=stream, speed=speed):
|
||||
speech_len = model_output['tts_speech'].shape[1] / self.sample_rate
|
||||
logging.info('yield speech len {}, rtf {}'.format(speech_len, (time.time() - start_time) / speech_len))
|
||||
yield model_output
|
||||
start_time = time.time()
|
||||
|
||||
|
||||
class CosyVoice2(CosyVoice):
|
||||
|
||||
def __init__(self, model_dir, load_jit=False, load_trt=False, load_vllm=False, fp16=False, trt_concurrent=1):
|
||||
self.model_dir = model_dir
|
||||
self.fp16 = fp16
|
||||
if not os.path.exists(model_dir):
|
||||
model_dir = snapshot_download(model_dir)
|
||||
hyper_yaml_path = '{}/cosyvoice2.yaml'.format(model_dir)
|
||||
if not os.path.exists(hyper_yaml_path):
|
||||
raise ValueError('{} not found!'.format(hyper_yaml_path))
|
||||
with open(hyper_yaml_path, 'r') as f:
|
||||
configs = load_hyperpyyaml(f, overrides={'qwen_pretrain_path': os.path.join(model_dir, 'CosyVoice-BlankEN')})
|
||||
assert get_model_type(configs) == CosyVoice2Model, 'do not use {} for CosyVoice2 initialization!'.format(model_dir)
|
||||
self.frontend = CosyVoiceFrontEnd(configs['get_tokenizer'],
|
||||
configs['feat_extractor'],
|
||||
'{}/campplus.onnx'.format(model_dir),
|
||||
'{}/speech_tokenizer_v2.onnx'.format(model_dir),
|
||||
'{}/spk2info.pt'.format(model_dir),
|
||||
configs['allowed_special'])
|
||||
self.sample_rate = configs['sample_rate']
|
||||
if torch.cuda.is_available() is False and (load_jit is True or load_trt is True or load_vllm is True or fp16 is True):
|
||||
load_jit, load_trt, load_vllm, fp16 = False, False, False, False
|
||||
logging.warning('no cuda device, set load_jit/load_trt/load_vllm/fp16 to False')
|
||||
self.model = CosyVoice2Model(configs['llm'], configs['flow'], configs['hift'], fp16)
|
||||
self.model.load('{}/llm.pt'.format(model_dir),
|
||||
'{}/flow.pt'.format(model_dir),
|
||||
'{}/hift.pt'.format(model_dir))
|
||||
if load_vllm:
|
||||
self.model.load_vllm('{}/vllm'.format(model_dir))
|
||||
if load_jit:
|
||||
self.model.load_jit('{}/flow.encoder.{}.zip'.format(model_dir, 'fp16' if self.fp16 is True else 'fp32'))
|
||||
if load_trt:
|
||||
self.model.load_trt('{}/flow.decoder.estimator.{}.mygpu.plan'.format(model_dir, 'fp16' if self.fp16 is True else 'fp32'),
|
||||
'{}/flow.decoder.estimator.fp32.onnx'.format(model_dir),
|
||||
trt_concurrent,
|
||||
self.fp16)
|
||||
del configs
|
||||
|
||||
def inference_instruct2(self, tts_text, instruct_text, prompt_wav, zero_shot_spk_id='', stream=False, speed=1.0, text_frontend=True):
|
||||
for i in tqdm(self.frontend.text_normalize(tts_text, split=True, text_frontend=text_frontend)):
|
||||
model_input = self.frontend.frontend_instruct2(i, instruct_text, prompt_wav, self.sample_rate, zero_shot_spk_id)
|
||||
start_time = time.time()
|
||||
logging.info('synthesis text {}'.format(i))
|
||||
for model_output in self.model.tts(**model_input, stream=stream, speed=speed):
|
||||
speech_len = model_output['tts_speech'].shape[1] / self.sample_rate
|
||||
logging.info('yield speech len {}, rtf {}'.format(speech_len, (time.time() - start_time) / speech_len))
|
||||
yield model_output
|
||||
start_time = time.time()
|
||||
|
||||
|
||||
class CosyVoice3(CosyVoice2):
|
||||
|
||||
def __init__(self, model_dir, load_trt=False, load_vllm=False, fp16=False, trt_concurrent=1):
|
||||
self.model_dir = model_dir
|
||||
self.fp16 = fp16
|
||||
if not os.path.exists(model_dir):
|
||||
model_dir = snapshot_download(model_dir)
|
||||
hyper_yaml_path = '{}/cosyvoice3.yaml'.format(model_dir)
|
||||
if not os.path.exists(hyper_yaml_path):
|
||||
raise ValueError('{} not found!'.format(hyper_yaml_path))
|
||||
with open(hyper_yaml_path, 'r') as f:
|
||||
configs = load_hyperpyyaml(f, overrides={'qwen_pretrain_path': os.path.join(model_dir, 'CosyVoice-BlankEN')})
|
||||
assert get_model_type(configs) == CosyVoice3Model, 'do not use {} for CosyVoice3 initialization!'.format(model_dir)
|
||||
self.frontend = CosyVoiceFrontEnd(configs['get_tokenizer'],
|
||||
configs['feat_extractor'],
|
||||
'{}/campplus.onnx'.format(model_dir),
|
||||
'{}/speech_tokenizer_v3.onnx'.format(model_dir),
|
||||
'{}/spk2info.pt'.format(model_dir),
|
||||
configs['allowed_special'])
|
||||
self.sample_rate = configs['sample_rate']
|
||||
if torch.cuda.is_available() is False and (load_trt is True or fp16 is True):
|
||||
load_trt, fp16 = False, False
|
||||
logging.warning('no cuda device, set load_trt/fp16 to False')
|
||||
self.model = CosyVoice3Model(configs['llm'], configs['flow'], configs['hift'], fp16)
|
||||
self.model.load('{}/llm.pt'.format(model_dir),
|
||||
'{}/flow.pt'.format(model_dir),
|
||||
'{}/hift.pt'.format(model_dir))
|
||||
if load_vllm:
|
||||
self.model.load_vllm('{}/vllm'.format(model_dir))
|
||||
if load_trt:
|
||||
if self.fp16 is True:
|
||||
logging.warning('DiT tensorRT fp16 engine have some performance issue, use at caution!')
|
||||
self.model.load_trt('{}/flow.decoder.estimator.{}.mygpu.plan'.format(model_dir, 'fp16' if self.fp16 is True else 'fp32'),
|
||||
'{}/flow.decoder.estimator.fp32.onnx'.format(model_dir),
|
||||
trt_concurrent,
|
||||
self.fp16)
|
||||
del configs
|
||||
|
||||
|
||||
def AutoModel(**kwargs):
|
||||
if not os.path.exists(kwargs['model_dir']):
|
||||
kwargs['model_dir'] = snapshot_download(kwargs['model_dir'])
|
||||
if os.path.exists('{}/cosyvoice.yaml'.format(kwargs['model_dir'])):
|
||||
return CosyVoice(**kwargs)
|
||||
elif os.path.exists('{}/cosyvoice2.yaml'.format(kwargs['model_dir'])):
|
||||
return CosyVoice2(**kwargs)
|
||||
elif os.path.exists('{}/cosyvoice3.yaml'.format(kwargs['model_dir'])):
|
||||
return CosyVoice3(**kwargs)
|
||||
else:
|
||||
raise TypeError('No valid model type found!')
|
||||
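`AutoModel` above selects the concrete class by probing which hyperparameter YAML file exists in the model directory. A minimal, self-contained sketch of just that dispatch rule, runnable without any CosyVoice dependencies (the YAML file names come from the code above; `detect_model_version` is an illustrative helper, not part of the API):

```python
import os
import tempfile

def detect_model_version(model_dir):
    # Mirrors AutoModel's checks, in the same order: the first yaml
    # found determines which class would be constructed.
    for yaml_name, version in [('cosyvoice.yaml', 'CosyVoice'),
                               ('cosyvoice2.yaml', 'CosyVoice2'),
                               ('cosyvoice3.yaml', 'CosyVoice3')]:
        if os.path.exists(os.path.join(model_dir, yaml_name)):
            return version
    raise TypeError('No valid model type found!')

with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, 'cosyvoice2.yaml'), 'w').close()
    print(detect_model_version(d))  # prints: CosyVoice2
```

Note the ordering matters: a directory containing both `cosyvoice.yaml` and `cosyvoice2.yaml` would resolve to `CosyVoice`.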
224
models/CosyVoice/cosyvoice/cli/frontend.py
Normal file
@@ -0,0 +1,224 @@
# Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from functools import partial
from typing import Generator
import json
import onnxruntime
import torch
import numpy as np
import whisper
from typing import Callable
import torchaudio.compliance.kaldi as kaldi
import os
import re
import inflect
from cosyvoice.utils.file_utils import logging, load_wav
from cosyvoice.utils.frontend_utils import contains_chinese, replace_blank, replace_corner_mark, remove_bracket, spell_out_number, split_paragraph, is_only_punctuation


class CosyVoiceFrontEnd:

    def __init__(self,
                 get_tokenizer: Callable,
                 feat_extractor: Callable,
                 campplus_model: str,
                 speech_tokenizer_model: str,
                 spk2info: str = '',
                 allowed_special: str = 'all'):
        self.tokenizer = get_tokenizer()
        self.feat_extractor = feat_extractor
        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        option = onnxruntime.SessionOptions()
        option.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL
        option.intra_op_num_threads = 1
        self.campplus_session = onnxruntime.InferenceSession(campplus_model, sess_options=option, providers=["CPUExecutionProvider"])
        self.speech_tokenizer_session = onnxruntime.InferenceSession(speech_tokenizer_model, sess_options=option,
                                                                     providers=["CUDAExecutionProvider" if torch.cuda.is_available() else
                                                                                "CPUExecutionProvider"])
        if os.path.exists(spk2info):
            self.spk2info = torch.load(spk2info, map_location=self.device, weights_only=True)
        else:
            self.spk2info = {}
        self.allowed_special = allowed_special
        self.inflect_parser = inflect.engine()
        # NOTE stay compatible when no text frontend tool is available
        try:
            import ttsfrd
            self.frd = ttsfrd.TtsFrontendEngine()
            ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
            assert self.frd.initialize('{}/../../pretrained_models/CosyVoice-ttsfrd/resource'.format(ROOT_DIR)) is True, \
                'failed to initialize ttsfrd resource'
            self.frd.set_lang_type('pinyinvg')
            self.text_frontend = 'ttsfrd'
            logging.info('use ttsfrd frontend')
        except Exception:
            try:
                from wetext import Normalizer as ZhNormalizer
                from wetext import Normalizer as EnNormalizer
                self.zh_tn_model = ZhNormalizer(remove_erhua=False)
                self.en_tn_model = EnNormalizer()
                self.text_frontend = 'wetext'
                logging.info('use wetext frontend')
            except Exception:
                self.text_frontend = ''
                logging.info('no frontend is available')

    def _extract_text_token(self, text):
        if isinstance(text, Generator):
            logging.info('get tts_text generator, will return _extract_text_token_generator!')
            # NOTE add a dummy text_token_len for compatibility
            return self._extract_text_token_generator(text), torch.tensor([0], dtype=torch.int32).to(self.device)
        else:
            text_token = self.tokenizer.encode(text, allowed_special=self.allowed_special)
            text_token = torch.tensor([text_token], dtype=torch.int32).to(self.device)
            text_token_len = torch.tensor([text_token.shape[1]], dtype=torch.int32).to(self.device)
            return text_token, text_token_len

    def _extract_text_token_generator(self, text_generator):
        for text in text_generator:
            text_token, _ = self._extract_text_token(text)
            for i in range(text_token.shape[1]):
                yield text_token[:, i: i + 1]

    def _extract_speech_token(self, prompt_wav):
        speech = load_wav(prompt_wav, 16000)
        assert speech.shape[1] / 16000 <= 30, 'do not support extracting speech tokens for audio longer than 30s'
        feat = whisper.log_mel_spectrogram(speech, n_mels=128)
        speech_token = self.speech_tokenizer_session.run(None,
                                                         {self.speech_tokenizer_session.get_inputs()[0].name: feat.detach().cpu().numpy(),
                                                          self.speech_tokenizer_session.get_inputs()[1].name: np.array([feat.shape[2]], dtype=np.int32)})[0].flatten().tolist()
        speech_token = torch.tensor([speech_token], dtype=torch.int32).to(self.device)
        speech_token_len = torch.tensor([speech_token.shape[1]], dtype=torch.int32).to(self.device)
        return speech_token, speech_token_len

    def _extract_spk_embedding(self, prompt_wav):
        speech = load_wav(prompt_wav, 16000)
        feat = kaldi.fbank(speech,
                           num_mel_bins=80,
                           dither=0,
                           sample_frequency=16000)
        feat = feat - feat.mean(dim=0, keepdim=True)
        embedding = self.campplus_session.run(None,
                                              {self.campplus_session.get_inputs()[0].name: feat.unsqueeze(dim=0).cpu().numpy()})[0].flatten().tolist()
        embedding = torch.tensor([embedding]).to(self.device)
        return embedding

    def _extract_speech_feat(self, prompt_wav):
        speech = load_wav(prompt_wav, 24000)
        speech_feat = self.feat_extractor(speech).squeeze(dim=0).transpose(0, 1).to(self.device)
        speech_feat = speech_feat.unsqueeze(dim=0)
        speech_feat_len = torch.tensor([speech_feat.shape[1]], dtype=torch.int32).to(self.device)
        return speech_feat, speech_feat_len

    def text_normalize(self, text, split=True, text_frontend=True):
        if isinstance(text, Generator):
            logging.info('get tts_text generator, will skip text_normalize!')
            return [text]
        # NOTE skip text_frontend when an ssml symbol is in text
        if '<|' in text and '|>' in text:
            text_frontend = False
        if text_frontend is False or text == '':
            return [text] if split is True else text
        text = text.strip()
        if self.text_frontend == 'ttsfrd':
            texts = [i["text"] for i in json.loads(self.frd.do_voicegen_frd(text))["sentences"]]
            text = ''.join(texts)
        else:
            if contains_chinese(text):
                if self.text_frontend == 'wetext':
                    text = self.zh_tn_model.normalize(text)
                text = text.replace("\n", "")
                text = replace_blank(text)
                text = replace_corner_mark(text)
                text = text.replace(".", "。")
                text = text.replace(" - ", ",")
                text = remove_bracket(text)
                text = re.sub(r'[,,、]+$', '。', text)
                texts = list(split_paragraph(text, partial(self.tokenizer.encode, allowed_special=self.allowed_special), "zh", token_max_n=80,
                                             token_min_n=60, merge_len=20, comma_split=False))
            else:
                if self.text_frontend == 'wetext':
                    text = self.en_tn_model.normalize(text)
                text = spell_out_number(text, self.inflect_parser)
                texts = list(split_paragraph(text, partial(self.tokenizer.encode, allowed_special=self.allowed_special), "en", token_max_n=80,
                                             token_min_n=60, merge_len=20, comma_split=False))
        texts = [i for i in texts if not is_only_punctuation(i)]
        return texts if split is True else text

    def frontend_sft(self, tts_text, spk_id):
        tts_text_token, tts_text_token_len = self._extract_text_token(tts_text)
        embedding = self.spk2info[spk_id]['embedding']
        model_input = {'text': tts_text_token, 'text_len': tts_text_token_len, 'llm_embedding': embedding, 'flow_embedding': embedding}
        return model_input

    def frontend_zero_shot(self, tts_text, prompt_text, prompt_wav, resample_rate, zero_shot_spk_id):
        tts_text_token, tts_text_token_len = self._extract_text_token(tts_text)
        if zero_shot_spk_id == '':
            prompt_text_token, prompt_text_token_len = self._extract_text_token(prompt_text)
            speech_feat, speech_feat_len = self._extract_speech_feat(prompt_wav)
            speech_token, speech_token_len = self._extract_speech_token(prompt_wav)
            if resample_rate == 24000:
                # cosyvoice2: force a 2:1 ratio between speech_feat and speech_token
                token_len = min(int(speech_feat.shape[1] / 2), speech_token.shape[1])
                speech_feat, speech_feat_len[:] = speech_feat[:, :2 * token_len], 2 * token_len
                speech_token, speech_token_len[:] = speech_token[:, :token_len], token_len
            embedding = self._extract_spk_embedding(prompt_wav)
            model_input = {'prompt_text': prompt_text_token, 'prompt_text_len': prompt_text_token_len,
                           'llm_prompt_speech_token': speech_token, 'llm_prompt_speech_token_len': speech_token_len,
                           'flow_prompt_speech_token': speech_token, 'flow_prompt_speech_token_len': speech_token_len,
                           'prompt_speech_feat': speech_feat, 'prompt_speech_feat_len': speech_feat_len,
                           'llm_embedding': embedding, 'flow_embedding': embedding}
        else:
            model_input = {**self.spk2info[zero_shot_spk_id]}
        model_input['text'] = tts_text_token
        model_input['text_len'] = tts_text_token_len
        return model_input

    def frontend_cross_lingual(self, tts_text, prompt_wav, resample_rate, zero_shot_spk_id):
        model_input = self.frontend_zero_shot(tts_text, '', prompt_wav, resample_rate, zero_shot_spk_id)
        # in cross-lingual mode, we remove the prompt from the llm
        del model_input['prompt_text']
        del model_input['prompt_text_len']
        del model_input['llm_prompt_speech_token']
        del model_input['llm_prompt_speech_token_len']
        return model_input

    def frontend_instruct(self, tts_text, spk_id, instruct_text):
        model_input = self.frontend_sft(tts_text, spk_id)
        # in instruct mode, we remove spk_embedding in llm due to information leakage
        del model_input['llm_embedding']
        instruct_text_token, instruct_text_token_len = self._extract_text_token(instruct_text)
        model_input['prompt_text'] = instruct_text_token
        model_input['prompt_text_len'] = instruct_text_token_len
        return model_input

    def frontend_instruct2(self, tts_text, instruct_text, prompt_wav, resample_rate, zero_shot_spk_id):
        model_input = self.frontend_zero_shot(tts_text, instruct_text, prompt_wav, resample_rate, zero_shot_spk_id)
        del model_input['llm_prompt_speech_token']
        del model_input['llm_prompt_speech_token_len']
        return model_input

    def frontend_vc(self, source_speech_16k, prompt_wav, resample_rate):
        prompt_speech_token, prompt_speech_token_len = self._extract_speech_token(prompt_wav)
        prompt_speech_feat, prompt_speech_feat_len = self._extract_speech_feat(prompt_wav)
        embedding = self._extract_spk_embedding(prompt_wav)
        source_speech_token, source_speech_token_len = self._extract_speech_token(source_speech_16k)
        model_input = {'source_speech_token': source_speech_token, 'source_speech_token_len': source_speech_token_len,
                       'flow_prompt_speech_token': prompt_speech_token, 'flow_prompt_speech_token_len': prompt_speech_token_len,
                       'prompt_speech_feat': prompt_speech_feat, 'prompt_speech_feat_len': prompt_speech_feat_len,
                       'flow_embedding': embedding}
        return model_input
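In the `resample_rate == 24000` branch of `frontend_zero_shot` above, the prompt mel features and prompt speech tokens are trimmed so that exactly two mel frames correspond to each speech token. A standalone sketch of that trimming rule using plain integers instead of tensors (`align_prompt_lengths` is an illustrative name, not part of the CosyVoice API):

```python
def align_prompt_lengths(feat_len, token_len):
    # Mirror of the 24 kHz branch in frontend_zero_shot: enforce a strict
    # 2 mel frames : 1 speech token ratio by trimming both sequences to
    # the shorter side.
    token_len = min(feat_len // 2, token_len)
    return 2 * token_len, token_len

# One odd mel frame and five surplus tokens get dropped:
print(align_prompt_lengths(101, 55))  # (100, 50)
```

This keeps the flow model's conditioning consistent, since it assumes a fixed frame-per-token ratio between the prompt features and prompt tokens.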
450
models/CosyVoice/cosyvoice/cli/model.py
Normal file
@@ -0,0 +1,450 @@
# Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu)
#               2025 Alibaba Inc (authors: Xiang Lyu, Bofan Zhou)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from typing import Generator
import torch
import numpy as np
import threading
import time
from torch.nn import functional as F
from contextlib import nullcontext
import uuid
from cosyvoice.utils.common import fade_in_out
from cosyvoice.utils.file_utils import convert_onnx_to_trt, export_cosyvoice2_vllm
from cosyvoice.utils.common import TrtContextWrapper


class CosyVoiceModel:

    def __init__(self,
                 llm: torch.nn.Module,
                 flow: torch.nn.Module,
                 hift: torch.nn.Module,
                 fp16: bool = False):
        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        self.llm = llm
        self.flow = flow
        self.hift = hift
        self.fp16 = fp16
        self.token_min_hop_len = 2 * self.flow.input_frame_rate
        self.token_max_hop_len = 4 * self.flow.input_frame_rate
        self.token_overlap_len = 20
        # mel fade in/out
        self.mel_overlap_len = int(self.token_overlap_len / self.flow.input_frame_rate * 22050 / 256)
        self.mel_window = np.hamming(2 * self.mel_overlap_len)
        # hift cache
        self.mel_cache_len = 20
        self.source_cache_len = int(self.mel_cache_len * 256)
        # speech fade in/out
        self.speech_window = np.hamming(2 * self.source_cache_len)
        # rtf and decoding related
        self.stream_scale_factor = 1
        assert self.stream_scale_factor >= 1, 'stream_scale_factor should be at least 1, change it according to your actual rtf'
        self.llm_context = torch.cuda.stream(torch.cuda.Stream(self.device)) if torch.cuda.is_available() else nullcontext()
        self.lock = threading.Lock()
        # dicts used to store session-related variables
        self.tts_speech_token_dict = {}
        self.llm_end_dict = {}
        self.mel_overlap_dict = {}
        self.flow_cache_dict = {}
        self.hift_cache_dict = {}
        self.silent_tokens = []

    def load(self, llm_model, flow_model, hift_model):
        self.llm.load_state_dict(torch.load(llm_model, map_location=self.device, weights_only=True), strict=True)
        self.llm.to(self.device).eval()
        self.flow.load_state_dict(torch.load(flow_model, map_location=self.device, weights_only=True), strict=True)
        self.flow.to(self.device).eval()
        # in case hift_model is a hifigan model
        hift_state_dict = {k.replace('generator.', ''): v for k, v in torch.load(hift_model, map_location=self.device, weights_only=True).items()}
        self.hift.load_state_dict(hift_state_dict, strict=True)
        self.hift.to(self.device).eval()

    def load_jit(self, llm_text_encoder_model, llm_llm_model, flow_encoder_model):
        llm_text_encoder = torch.jit.load(llm_text_encoder_model, map_location=self.device)
        self.llm.text_encoder = llm_text_encoder
        llm_llm = torch.jit.load(llm_llm_model, map_location=self.device)
        self.llm.llm = llm_llm
        flow_encoder = torch.jit.load(flow_encoder_model, map_location=self.device)
        self.flow.encoder = flow_encoder

    def load_trt(self, flow_decoder_estimator_model, flow_decoder_onnx_model, trt_concurrent, fp16):
        assert torch.cuda.is_available(), 'tensorrt only supports gpu!'
        if not os.path.exists(flow_decoder_estimator_model) or os.path.getsize(flow_decoder_estimator_model) == 0:
            convert_onnx_to_trt(flow_decoder_estimator_model, self.get_trt_kwargs(), flow_decoder_onnx_model, fp16)
        del self.flow.decoder.estimator
        import tensorrt as trt
        with open(flow_decoder_estimator_model, 'rb') as f:
            estimator_engine = trt.Runtime(trt.Logger(trt.Logger.INFO)).deserialize_cuda_engine(f.read())
        assert estimator_engine is not None, 'failed to load trt {}'.format(flow_decoder_estimator_model)
        self.flow.decoder.estimator = TrtContextWrapper(estimator_engine, trt_concurrent=trt_concurrent, device=self.device)

    def get_trt_kwargs(self):
        min_shape = [(2, 80, 4), (2, 1, 4), (2, 80, 4), (2, 80, 4)]
        opt_shape = [(2, 80, 500), (2, 1, 500), (2, 80, 500), (2, 80, 500)]
        max_shape = [(2, 80, 3000), (2, 1, 3000), (2, 80, 3000), (2, 80, 3000)]
        input_names = ["x", "mask", "mu", "cond"]
        return {'min_shape': min_shape, 'opt_shape': opt_shape, 'max_shape': max_shape, 'input_names': input_names}

    def llm_job(self, text, prompt_text, llm_prompt_speech_token, llm_embedding, uuid):
        cur_silent_token_num, max_silent_token_num = 0, 5
        with self.llm_context, torch.cuda.amp.autocast(self.fp16 is True and hasattr(self.llm, 'vllm') is False):
            if isinstance(text, Generator):
                assert (self.__class__.__name__ != 'CosyVoiceModel') and not hasattr(self.llm, 'vllm'), \
                    'streaming input text is only implemented for CosyVoice2/3 and does not support vllm!'
                token_generator = self.llm.inference_bistream(text=text,
                                                              prompt_text=prompt_text.to(self.device),
                                                              prompt_text_len=torch.tensor([prompt_text.shape[1]], dtype=torch.int32).to(self.device),
                                                              prompt_speech_token=llm_prompt_speech_token.to(self.device),
                                                              prompt_speech_token_len=torch.tensor([llm_prompt_speech_token.shape[1]], dtype=torch.int32).to(self.device),
                                                              embedding=llm_embedding.to(self.device))
            else:
                token_generator = self.llm.inference(text=text.to(self.device),
                                                     text_len=torch.tensor([text.shape[1]], dtype=torch.int32).to(self.device),
                                                     prompt_text=prompt_text.to(self.device),
                                                     prompt_text_len=torch.tensor([prompt_text.shape[1]], dtype=torch.int32).to(self.device),
                                                     prompt_speech_token=llm_prompt_speech_token.to(self.device),
                                                     prompt_speech_token_len=torch.tensor([llm_prompt_speech_token.shape[1]], dtype=torch.int32).to(self.device),
                                                     embedding=llm_embedding.to(self.device),
                                                     uuid=uuid)
            for i in token_generator:
                if i in self.silent_tokens:
                    cur_silent_token_num += 1
                    if cur_silent_token_num > max_silent_token_num:
                        continue
                else:
                    cur_silent_token_num = 0
                self.tts_speech_token_dict[uuid].append(i)
        self.llm_end_dict[uuid] = True

    def vc_job(self, source_speech_token, uuid):
        self.tts_speech_token_dict[uuid] = source_speech_token.flatten().tolist()
        self.llm_end_dict[uuid] = True

    def token2wav(self, token, prompt_token, prompt_feat, embedding, uuid, finalize=False, speed=1.0):
        with torch.cuda.amp.autocast(self.fp16):
            tts_mel, self.flow_cache_dict[uuid] = self.flow.inference(token=token.to(self.device, dtype=torch.int32),
                                                                      token_len=torch.tensor([token.shape[1]], dtype=torch.int32).to(self.device),
                                                                      prompt_token=prompt_token.to(self.device),
                                                                      prompt_token_len=torch.tensor([prompt_token.shape[1]], dtype=torch.int32).to(self.device),
                                                                      prompt_feat=prompt_feat.to(self.device),
                                                                      prompt_feat_len=torch.tensor([prompt_feat.shape[1]], dtype=torch.int32).to(self.device),
                                                                      embedding=embedding.to(self.device),
                                                                      flow_cache=self.flow_cache_dict[uuid])

        # mel overlap fade in/out
        if self.mel_overlap_dict[uuid].shape[2] != 0:
            tts_mel = fade_in_out(tts_mel, self.mel_overlap_dict[uuid], self.mel_window)
        # append hift cache
        if self.hift_cache_dict[uuid] is not None:
            hift_cache_mel, hift_cache_source = self.hift_cache_dict[uuid]['mel'], self.hift_cache_dict[uuid]['source']
            tts_mel = torch.concat([hift_cache_mel, tts_mel], dim=2)
        else:
            hift_cache_source = torch.zeros(1, 1, 0)
        # keep overlap mel and hift cache
        if finalize is False:
            self.mel_overlap_dict[uuid] = tts_mel[:, :, -self.mel_overlap_len:]
            tts_mel = tts_mel[:, :, :-self.mel_overlap_len]
            tts_speech, tts_source = self.hift.inference(speech_feat=tts_mel, cache_source=hift_cache_source)
            if self.hift_cache_dict[uuid] is not None:
                tts_speech = fade_in_out(tts_speech, self.hift_cache_dict[uuid]['speech'], self.speech_window)
            self.hift_cache_dict[uuid] = {'mel': tts_mel[:, :, -self.mel_cache_len:],
                                          'source': tts_source[:, :, -self.source_cache_len:],
                                          'speech': tts_speech[:, -self.source_cache_len:]}
            tts_speech = tts_speech[:, :-self.source_cache_len]
        else:
            if speed != 1.0:
                assert self.hift_cache_dict[uuid] is None, 'speed change only supports non-stream inference mode'
                tts_mel = F.interpolate(tts_mel, size=int(tts_mel.shape[2] / speed), mode='linear')
            tts_speech, tts_source = self.hift.inference(speech_feat=tts_mel, cache_source=hift_cache_source)
            if self.hift_cache_dict[uuid] is not None:
                tts_speech = fade_in_out(tts_speech, self.hift_cache_dict[uuid]['speech'], self.speech_window)
        return tts_speech

    def tts(self, text=torch.zeros(1, 0, dtype=torch.int32), flow_embedding=torch.zeros(0, 192), llm_embedding=torch.zeros(0, 192),
            prompt_text=torch.zeros(1, 0, dtype=torch.int32),
            llm_prompt_speech_token=torch.zeros(1, 0, dtype=torch.int32),
            flow_prompt_speech_token=torch.zeros(1, 0, dtype=torch.int32),
            prompt_speech_feat=torch.zeros(1, 0, 80), source_speech_token=torch.zeros(1, 0, dtype=torch.int32), stream=False, speed=1.0, **kwargs):
        # this_uuid is used to track variables related to this inference thread
        this_uuid = str(uuid.uuid1())
        with self.lock:
            self.tts_speech_token_dict[this_uuid], self.llm_end_dict[this_uuid] = [], False
            self.hift_cache_dict[this_uuid] = None
            self.mel_overlap_dict[this_uuid] = torch.zeros(1, 80, 0)
            self.flow_cache_dict[this_uuid] = torch.zeros(1, 80, 0, 2)
        if source_speech_token.shape[1] == 0:
            p = threading.Thread(target=self.llm_job, args=(text, prompt_text, llm_prompt_speech_token, llm_embedding, this_uuid))
        else:
            p = threading.Thread(target=self.vc_job, args=(source_speech_token, this_uuid))
        p.start()
        if stream is True:
            token_hop_len = self.token_min_hop_len
            while True:
                time.sleep(0.1)
                if len(self.tts_speech_token_dict[this_uuid]) >= token_hop_len + self.token_overlap_len:
                    this_tts_speech_token = torch.tensor(self.tts_speech_token_dict[this_uuid][:token_hop_len + self.token_overlap_len]) \
                        .unsqueeze(dim=0)
                    this_tts_speech = self.token2wav(token=this_tts_speech_token,
                                                     prompt_token=flow_prompt_speech_token,
                                                     prompt_feat=prompt_speech_feat,
                                                     embedding=flow_embedding,
                                                     uuid=this_uuid,
                                                     finalize=False)
                    yield {'tts_speech': this_tts_speech.cpu()}
                    with self.lock:
                        self.tts_speech_token_dict[this_uuid] = self.tts_speech_token_dict[this_uuid][token_hop_len:]
                    # increase token_hop_len for better speech quality
                    token_hop_len = min(self.token_max_hop_len, int(token_hop_len * self.stream_scale_factor))
                if self.llm_end_dict[this_uuid] is True and len(self.tts_speech_token_dict[this_uuid]) < token_hop_len + self.token_overlap_len:
                    break
            p.join()
            # deal with remaining tokens; make sure the remaining token len equals token_hop_len when cache_speech is not None
            this_tts_speech_token = torch.tensor(self.tts_speech_token_dict[this_uuid]).unsqueeze(dim=0)
            this_tts_speech = self.token2wav(token=this_tts_speech_token,
                                             prompt_token=flow_prompt_speech_token,
                                             prompt_feat=prompt_speech_feat,
                                             embedding=flow_embedding,
                                             uuid=this_uuid,
                                             finalize=True)
            yield {'tts_speech': this_tts_speech.cpu()}
        else:
            # deal with all tokens
            p.join()
            this_tts_speech_token = torch.tensor(self.tts_speech_token_dict[this_uuid]).unsqueeze(dim=0)
            this_tts_speech = self.token2wav(token=this_tts_speech_token,
                                             prompt_token=flow_prompt_speech_token,
                                             prompt_feat=prompt_speech_feat,
                                             embedding=flow_embedding,
                                             uuid=this_uuid,
                                             finalize=True,
                                             speed=speed)
            yield {'tts_speech': this_tts_speech.cpu()}
        with self.lock:
            self.tts_speech_token_dict.pop(this_uuid)
|
||||
self.llm_end_dict.pop(this_uuid)
|
||||
self.mel_overlap_dict.pop(this_uuid)
|
||||
self.hift_cache_dict.pop(this_uuid)
|
||||
self.flow_cache_dict.pop(this_uuid)
|
||||
if torch.cuda.is_available():
|
||||
torch.cuda.empty_cache()
|
||||
torch.cuda.current_stream().synchronize()
|
||||
|
||||
|
||||
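The streaming branch above stitches consecutive `token2wav` chunks together with `fade_in_out` over a Hamming window (`self.speech_window = np.hamming(2 * self.source_cache_len)`). The snippet below is a minimal NumPy sketch of that overlap-add idea only, not the library's `fade_in_out` implementation; the function name `fade_in_out_sketch` and the toy sizes are illustrative assumptions.

```python
import numpy as np

def fade_in_out_sketch(new_chunk, cached_tail, window):
    # illustrative sketch, not cosyvoice.utils' fade_in_out:
    # window has length 2 * overlap; the first half fades the new chunk in,
    # the second half fades the cached tail of the previous chunk out
    overlap = len(window) // 2
    fade_in, fade_out = window[:overlap], window[overlap:]
    mixed = new_chunk.copy()
    mixed[:overlap] = new_chunk[:overlap] * fade_in + cached_tail * fade_out
    return mixed

overlap = 4
window = np.hamming(2 * overlap)
tail = np.ones(overlap)   # tail of the previous chunk (cached 'speech')
chunk = np.zeros(8)       # head of the next chunk
out = fade_in_out_sketch(chunk, tail, window)
```

With a silent new chunk the crossfaded head is just the fading-out tail, which shows why chunk boundaries stay click-free.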
class CosyVoice2Model(CosyVoiceModel):

    def __init__(self,
                 llm: torch.nn.Module,
                 flow: torch.nn.Module,
                 hift: torch.nn.Module,
                 fp16: bool = False):
        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        self.llm = llm
        self.flow = flow
        self.hift = hift
        self.fp16 = fp16
        # NOTE must match training static_chunk_size
        self.token_hop_len = 25
        # NOTE increase token_hop_len incrementally to avoid duplicate inference
        self.token_max_hop_len = 4 * self.token_hop_len
        self.stream_scale_factor = 2
        assert self.stream_scale_factor >= 1, 'stream_scale_factor should be no less than 1, change it according to your actual rtf'
        # hift cache
        self.mel_cache_len = 8
        self.source_cache_len = int(self.mel_cache_len * 480)
        # speech fade in out
        self.speech_window = np.hamming(2 * self.source_cache_len)
        # rtf and decoding related
        self.llm_context = torch.cuda.stream(torch.cuda.Stream(self.device)) if torch.cuda.is_available() else nullcontext()
        self.lock = threading.Lock()
        # dict used to store session related variables
        self.tts_speech_token_dict = {}
        self.llm_end_dict = {}
        self.hift_cache_dict = {}
        self.silent_tokens = []

    def load_jit(self, flow_encoder_model):
        flow_encoder = torch.jit.load(flow_encoder_model, map_location=self.device)
        self.flow.encoder = flow_encoder

    def load_vllm(self, model_dir):
        export_cosyvoice2_vllm(self.llm, model_dir, self.device)
        from vllm import EngineArgs, LLMEngine
        engine_args = EngineArgs(model=model_dir,
                                 skip_tokenizer_init=True,
                                 enable_prompt_embeds=True,
                                 gpu_memory_utilization=0.2)
        self.llm.vllm = LLMEngine.from_engine_args(engine_args)
        self.llm.lock = threading.Lock()
        del self.llm.llm.model.model.layers

    def token2wav(self, token, prompt_token, prompt_feat, embedding, token_offset, uuid, stream=False, finalize=False, speed=1.0):
        with torch.cuda.amp.autocast(self.fp16):
            tts_mel, _ = self.flow.inference(token=token.to(self.device, dtype=torch.int32),
                                             token_len=torch.tensor([token.shape[1]], dtype=torch.int32).to(self.device),
                                             prompt_token=prompt_token.to(self.device),
                                             prompt_token_len=torch.tensor([prompt_token.shape[1]], dtype=torch.int32).to(self.device),
                                             prompt_feat=prompt_feat.to(self.device),
                                             prompt_feat_len=torch.tensor([prompt_feat.shape[1]], dtype=torch.int32).to(self.device),
                                             embedding=embedding.to(self.device),
                                             streaming=stream,
                                             finalize=finalize)
            tts_mel = tts_mel[:, :, token_offset * self.flow.token_mel_ratio:]
            # append hift cache
            if self.hift_cache_dict[uuid] is not None:
                hift_cache_mel, hift_cache_source = self.hift_cache_dict[uuid]['mel'], self.hift_cache_dict[uuid]['source']
                tts_mel = torch.concat([hift_cache_mel, tts_mel], dim=2)
            else:
                hift_cache_source = torch.zeros(1, 1, 0)
            # keep overlap mel and hift cache
            if finalize is False:
                tts_speech, tts_source = self.hift.inference(speech_feat=tts_mel, cache_source=hift_cache_source)
                if self.hift_cache_dict[uuid] is not None:
                    tts_speech = fade_in_out(tts_speech, self.hift_cache_dict[uuid]['speech'], self.speech_window)
                self.hift_cache_dict[uuid] = {'mel': tts_mel[:, :, -self.mel_cache_len:],
                                              'source': tts_source[:, :, -self.source_cache_len:],
                                              'speech': tts_speech[:, -self.source_cache_len:]}
                tts_speech = tts_speech[:, :-self.source_cache_len]
            else:
                if speed != 1.0:
                    assert self.hift_cache_dict[uuid] is None, 'speed change only supports non-stream inference mode'
                    tts_mel = F.interpolate(tts_mel, size=int(tts_mel.shape[2] / speed), mode='linear')
                tts_speech, tts_source = self.hift.inference(speech_feat=tts_mel, cache_source=hift_cache_source)
                if self.hift_cache_dict[uuid] is not None:
                    tts_speech = fade_in_out(tts_speech, self.hift_cache_dict[uuid]['speech'], self.speech_window)
        return tts_speech

    def tts(self, text=torch.zeros(1, 0, dtype=torch.int32), flow_embedding=torch.zeros(0, 192), llm_embedding=torch.zeros(0, 192),
            prompt_text=torch.zeros(1, 0, dtype=torch.int32),
            llm_prompt_speech_token=torch.zeros(1, 0, dtype=torch.int32),
            flow_prompt_speech_token=torch.zeros(1, 0, dtype=torch.int32),
            prompt_speech_feat=torch.zeros(1, 0, 80), source_speech_token=torch.zeros(1, 0, dtype=torch.int32), stream=False, speed=1.0, **kwargs):
        # this_uuid is used to track variables related to this inference thread
        this_uuid = str(uuid.uuid1())
        with self.lock:
            self.tts_speech_token_dict[this_uuid], self.llm_end_dict[this_uuid] = [], False
            self.hift_cache_dict[this_uuid] = None
        if source_speech_token.shape[1] == 0:
            p = threading.Thread(target=self.llm_job, args=(text, prompt_text, llm_prompt_speech_token, llm_embedding, this_uuid))
        else:
            p = threading.Thread(target=self.vc_job, args=(source_speech_token, this_uuid))
        p.start()
        if stream is True:
            token_offset = 0
            prompt_token_pad = int(np.ceil(flow_prompt_speech_token.shape[1] / self.token_hop_len) * self.token_hop_len - flow_prompt_speech_token.shape[1])
            while True:
                time.sleep(0.1)
                this_token_hop_len = self.token_hop_len + prompt_token_pad if token_offset == 0 else self.token_hop_len
                if len(self.tts_speech_token_dict[this_uuid]) - token_offset >= this_token_hop_len + self.flow.pre_lookahead_len:
                    this_tts_speech_token = torch.tensor(self.tts_speech_token_dict[this_uuid][:token_offset + this_token_hop_len + self.flow.pre_lookahead_len]).unsqueeze(dim=0)
                    this_tts_speech = self.token2wav(token=this_tts_speech_token,
                                                     prompt_token=flow_prompt_speech_token,
                                                     prompt_feat=prompt_speech_feat,
                                                     embedding=flow_embedding,
                                                     token_offset=token_offset,
                                                     uuid=this_uuid,
                                                     stream=stream,
                                                     finalize=False)
                    token_offset += this_token_hop_len
                    self.token_hop_len = min(self.token_max_hop_len, self.token_hop_len * self.stream_scale_factor)
                    yield {'tts_speech': this_tts_speech.cpu()}
                if self.llm_end_dict[this_uuid] is True and len(self.tts_speech_token_dict[this_uuid]) - token_offset < this_token_hop_len + self.flow.pre_lookahead_len:
                    break
            p.join()
            # deal with remaining tokens; make sure the remaining token len equals token_hop_len when cache_speech is not None
            this_tts_speech_token = torch.tensor(self.tts_speech_token_dict[this_uuid]).unsqueeze(dim=0)
            this_tts_speech = self.token2wav(token=this_tts_speech_token,
                                             prompt_token=flow_prompt_speech_token,
                                             prompt_feat=prompt_speech_feat,
                                             embedding=flow_embedding,
                                             token_offset=token_offset,
                                             uuid=this_uuid,
                                             finalize=True)
            yield {'tts_speech': this_tts_speech.cpu()}
        else:
            # deal with all tokens
            p.join()
            this_tts_speech_token = torch.tensor(self.tts_speech_token_dict[this_uuid]).unsqueeze(dim=0)
            this_tts_speech = self.token2wav(token=this_tts_speech_token,
                                             prompt_token=flow_prompt_speech_token,
                                             prompt_feat=prompt_speech_feat,
                                             embedding=flow_embedding,
                                             token_offset=0,
                                             uuid=this_uuid,
                                             finalize=True,
                                             speed=speed)
            yield {'tts_speech': this_tts_speech.cpu()}
        with self.lock:
            self.tts_speech_token_dict.pop(this_uuid)
            self.llm_end_dict.pop(this_uuid)
            self.hift_cache_dict.pop(this_uuid)
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
            torch.cuda.current_stream().synchronize()

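The streaming loop above pads the first hop so that the prompt plus the first chunk lands on a `token_hop_len` boundary, then grows the hop by `stream_scale_factor` up to `token_max_hop_len`. A small self-contained sketch of that schedule, using the constants from `__init__` (hop 25, factor 2, max 100) and an assumed prompt length of 30 tokens; `hop_schedule` is an illustrative helper, not part of the codebase:

```python
import numpy as np

def hop_schedule(token_hop_len=25, scale=2, max_hop=100, prompt_len=30, steps=5):
    # first hop is padded so prompt_len + first_hop lands on a hop boundary,
    # mirroring prompt_token_pad in CosyVoice2Model.tts
    prompt_token_pad = int(np.ceil(prompt_len / token_hop_len) * token_hop_len - prompt_len)
    hops = []
    for i in range(steps):
        this_hop = token_hop_len + prompt_token_pad if i == 0 else token_hop_len
        hops.append(this_hop)
        # hop doubles each chunk, capped at max_hop
        token_hop_len = min(max_hop, token_hop_len * scale)
    return hops

schedule = hop_schedule()
```

Early chunks stay small for low first-packet latency, later chunks grow to amortize flow/hift calls.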
class CosyVoice3Model(CosyVoice2Model):

    def __init__(self,
                 llm: torch.nn.Module,
                 flow: torch.nn.Module,
                 hift: torch.nn.Module,
                 fp16: bool = False):
        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        self.llm = llm
        self.flow = flow
        self.hift = hift
        self.fp16 = fp16
        # NOTE must match training static_chunk_size
        self.token_hop_len = 25
        # NOTE increase token_hop_len incrementally to avoid duplicate inference
        self.token_max_hop_len = 4 * self.token_hop_len
        self.stream_scale_factor = 2
        assert self.stream_scale_factor >= 1, 'stream_scale_factor should be no less than 1, change it according to your actual rtf'
        # rtf and decoding related
        self.llm_context = torch.cuda.stream(torch.cuda.Stream(self.device)) if torch.cuda.is_available() else nullcontext()
        self.lock = threading.Lock()
        # dict used to store session related variables
        self.tts_speech_token_dict = {}
        self.llm_end_dict = {}
        self.hift_cache_dict = {}
        # FSQ silent and breath tokens
        self.silent_tokens = [1, 2, 28, 29, 55, 248, 494, 2241, 2242, 2322, 2323]

    def token2wav(self, token, prompt_token, prompt_feat, embedding, token_offset, uuid, stream=False, finalize=False, speed=1.0):
        with torch.cuda.amp.autocast(self.fp16):
            tts_mel, _ = self.flow.inference(token=token.to(self.device, dtype=torch.int32),
                                             token_len=torch.tensor([token.shape[1]], dtype=torch.int32).to(self.device),
                                             prompt_token=prompt_token.to(self.device),
                                             prompt_token_len=torch.tensor([prompt_token.shape[1]], dtype=torch.int32).to(self.device),
                                             prompt_feat=prompt_feat.to(self.device),
                                             prompt_feat_len=torch.tensor([prompt_feat.shape[1]], dtype=torch.int32).to(self.device),
                                             embedding=embedding.to(self.device),
                                             streaming=stream,
                                             finalize=finalize)
            tts_mel = tts_mel[:, :, token_offset * self.flow.token_mel_ratio:]
            # append mel cache
            if self.hift_cache_dict[uuid] is not None:
                hift_cache_mel = self.hift_cache_dict[uuid]['mel']
                tts_mel = torch.concat([hift_cache_mel, tts_mel], dim=2)
                self.hift_cache_dict[uuid]['mel'] = tts_mel
            else:
                self.hift_cache_dict[uuid] = {'mel': tts_mel, 'speech_offset': 0}
            if speed != 1.0:
                assert token_offset == 0 and finalize is True, 'speed change only supports non-stream inference mode'
                tts_mel = F.interpolate(tts_mel, size=int(tts_mel.shape[2] / speed), mode='linear')
            tts_speech, _ = self.hift.inference(speech_feat=tts_mel, finalize=finalize)
            tts_speech = tts_speech[:, self.hift_cache_dict[uuid]['speech_offset']:]
            self.hift_cache_dict[uuid]['speech_offset'] += tts_speech.shape[1]
        return tts_speech
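Unlike CosyVoice2, `CosyVoice3Model.token2wav` above keeps the whole accumulated mel in the cache, re-runs the vocoder over it, and emits only the samples past `speech_offset`. A tiny sketch of that offset-based emission with plain lists (`emit_new` is an illustrative helper, not part of the codebase):

```python
def emit_new(full_signal, state):
    # state['speech_offset'] tracks how many samples were already emitted;
    # each call returns only the newly synthesized suffix
    chunk = full_signal[state['speech_offset']:]
    state['speech_offset'] += len(chunk)
    return chunk

state = {'speech_offset': 0}
first = emit_new([0, 1, 2, 3], state)            # whole signal is new
second = emit_new([0, 1, 2, 3, 4, 5], state)     # only the 2 new samples
```

This trades redundant vocoder compute for seam-free output: no crossfade window is needed because every chunk comes from one continuous synthesis.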
0
models/CosyVoice/cosyvoice/dataset/__init__.py
Normal file
155
models/CosyVoice/cosyvoice/dataset/dataset.py
Normal file
@@ -0,0 +1,155 @@
# Copyright (c) 2021 Mobvoi Inc. (authors: Binbin Zhang)
#               2024 Alibaba Inc (authors: Xiang Lyu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import random
import math
from functools import partial

import torch
import torch.distributed as dist
from torch.utils.data import IterableDataset
from cosyvoice.utils.file_utils import read_lists


class Processor(IterableDataset):

    def __init__(self, source, f, *args, **kw):
        assert callable(f)
        self.source = source
        self.f = f
        self.args = args
        self.kw = kw

    def set_epoch(self, epoch):
        self.source.set_epoch(epoch)

    def __iter__(self):
        """ Return an iterator over the source dataset processed by the
            given processor.
        """
        assert self.source is not None
        assert callable(self.f)
        return self.f(iter(self.source), *self.args, **self.kw)

    def apply(self, f):
        assert callable(f)
        return Processor(self, f, *self.args, **self.kw)


class DistributedSampler:

    def __init__(self, shuffle=True, partition=True):
        self.epoch = -1
        self.update()
        self.shuffle = shuffle
        self.partition = partition

    def update(self):
        assert dist.is_available()
        if dist.is_initialized():
            self.rank = dist.get_rank()
            self.world_size = dist.get_world_size()
        else:
            self.rank = 0
            self.world_size = 1
        worker_info = torch.utils.data.get_worker_info()
        if worker_info is None:
            self.worker_id = 0
            self.num_workers = 1
        else:
            self.worker_id = worker_info.id
            self.num_workers = worker_info.num_workers
        return dict(rank=self.rank,
                    world_size=self.world_size,
                    worker_id=self.worker_id,
                    num_workers=self.num_workers)

    def set_epoch(self, epoch):
        self.epoch = epoch

    def sample(self, data):
        """ Sample data according to rank/world_size/num_workers

            Args:
                data(List): input data list

            Returns:
                List: data list after sampling
        """
        data = list(range(len(data)))
        # force datalist even
        if self.partition:
            if self.shuffle:
                random.Random(self.epoch).shuffle(data)
            if len(data) < self.world_size:
                data = data * math.ceil(self.world_size / len(data))
                data = data[:self.world_size]
            data = data[self.rank::self.world_size]
        if len(data) < self.num_workers:
            data = data * math.ceil(self.num_workers / len(data))
            data = data[:self.num_workers]
        data = data[self.worker_id::self.num_workers]
        return data
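`DistributedSampler.sample` above partitions indices twice: once by DDP rank, then by dataloader worker, padding the list so every consumer gets at least one item. A self-contained sketch of that two-level stride slicing (the `partition` helper name and toy sizes are illustrative, not from the codebase):

```python
import math
import random

def partition(data, rank, world_size, worker_id, num_workers, epoch=0, shuffle=True):
    # mirrors DistributedSampler.sample: shuffle indices deterministically by
    # epoch, pad so every rank/worker gets at least one item, then
    # stride-slice by rank and again by worker
    idx = list(range(len(data)))
    if shuffle:
        random.Random(epoch).shuffle(idx)
    if len(idx) < world_size:
        idx = (idx * math.ceil(world_size / len(idx)))[:world_size]
    idx = idx[rank::world_size]
    if len(idx) < num_workers:
        idx = (idx * math.ceil(num_workers / len(idx)))[:num_workers]
    return idx[worker_id::num_workers]

# 8 items, 2 ranks, 1 worker each, no shuffle: even/odd split
parts = [partition(list(range(8)), r, 2, 0, 1, shuffle=False) for r in range(2)]
```

Seeding the shuffle with the epoch keeps all ranks' permutations identical, so the stride slices are disjoint and together cover the whole list.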
class DataList(IterableDataset):

    def __init__(self, lists, shuffle=True, partition=True):
        self.lists = lists
        self.sampler = DistributedSampler(shuffle, partition)

    def set_epoch(self, epoch):
        self.sampler.set_epoch(epoch)

    def __iter__(self):
        sampler_info = self.sampler.update()
        indexes = self.sampler.sample(self.lists)
        for index in indexes:
            data = dict(src=self.lists[index])
            data.update(sampler_info)
            yield data


def Dataset(data_list_file,
            data_pipeline,
            mode='train',
            gan=False,
            dpo=False,
            shuffle=True,
            partition=True):
    """ Construct dataset from arguments

        We have two shuffle stages in the Dataset. The first is global
        shuffle at the shards tar/raw file level. The second is local
        shuffle at the training sample level.

        Args:
            data_list_file(str): data list file path
            data_pipeline(list): a list of data processing functions
            mode(str): processing mode, 'train' by default
            gan(bool): whether to prepare features for gan training
            dpo(bool): whether to prepare reject samples for dpo training
            shuffle(bool): whether to shuffle the data list
            partition(bool): whether to do data partition in terms of rank
    """
    lists = read_lists(data_list_file)
    dataset = DataList(lists,
                       shuffle=shuffle,
                       partition=partition)
    # map partial args to the compute_fbank/padding funcs
    for i in range(1, len(data_pipeline)):
        if data_pipeline[i].func.__name__ == 'compute_fbank' and gan is True:
            data_pipeline[i] = partial(data_pipeline[i], token_mel_ratio=0)
        if data_pipeline[i].func.__name__ == 'padding':
            data_pipeline[i] = partial(data_pipeline[i], gan=gan, dpo=dpo)
    for func in data_pipeline:
        dataset = Processor(dataset, func, mode=mode)
    return dataset
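`Dataset` above builds the pipeline by repeatedly wrapping the source in `Processor`, where each stage is a function taking an iterator and yielding transformed samples; iteration stays fully lazy. A minimal sketch of that chaining pattern with plain generators (the `Chain` class and the toy stages are illustrative stand-ins, not the library's classes):

```python
def double(data):
    # a pipeline stage: consumes an iterator, yields transformed items
    for x in data:
        yield x * 2

def add_one(data):
    for x in data:
        yield x + 1

class Chain:
    # minimal stand-in for Processor: lazily applies f to iter(source)
    def __init__(self, source, f):
        self.source, self.f = source, f

    def __iter__(self):
        return self.f(iter(self.source))

pipeline = [double, add_one]
dataset = [1, 2, 3]
for f in pipeline:
    dataset = Chain(dataset, f)
result = list(dataset)
```

Because every stage is a generator over the previous one, no stage materializes the dataset; this is what lets `shuffle`/`sort`/`batch` below work on unbounded streams with bounded buffers.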
431
models/CosyVoice/cosyvoice/dataset/processor.py
Normal file
@@ -0,0 +1,431 @@
# Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import random

import pyarrow.parquet as pq
from io import BytesIO
import numpy as np
import whisper
import torch
import torchaudio
from torch.nn.utils.rnn import pad_sequence
import torch.nn.functional as F
import pyworld as pw
from cosyvoice.utils.onnx import embedding_extractor, online_feature

AUDIO_FORMAT_SETS = {'flac', 'mp3', 'm4a', 'ogg', 'opus', 'wav', 'wma'}


def parquet_opener(data, mode='train'):
    """ Given a url or local file, return a file descriptor.
        Inplace operation.

        Args:
            data(Iterable[str]): url or local file list

        Returns:
            Iterable[{src, stream}]
    """
    for sample in data:
        assert 'src' in sample
        url = sample['src']
        try:
            for df in pq.ParquetFile(url).iter_batches(batch_size=64):
                df = df.to_pandas()
                for i in range(len(df)):
                    sample.update(dict(df.loc[i]))
                    # NOTE do not return sample directly, must initialize a new dict
                    yield {**sample}
        except Exception as ex:
            logging.warning('Failed to open {}, ex info {}'.format(url, ex))


def filter(data,
           max_length=10240,
           min_length=10,
           token_max_length=200,
           token_min_length=1,
           min_output_input_ratio=0.0005,
           max_output_input_ratio=1,
           mode='train'):
    """ Filter samples according to feature and label length.
        Inplace operation.

        Args:
            data: Iterable[{key, wav, label, sample_rate}]
            max_length: drop utterance which is greater than max_length(10ms)
            min_length: drop utterance which is less than min_length(10ms)
            token_max_length: drop utterance which is greater than
                token_max_length, especially when use char unit for
                english modeling
            token_min_length: drop utterance which is
                less than token_min_length
            min_output_input_ratio: minimal ratio of
                token_length / feats_length(10ms)
            max_output_input_ratio: maximum ratio of
                token_length / feats_length(10ms)

        Returns:
            Iterable[{key, wav, label, sample_rate}]
    """
    for sample in data:
        sample['speech'], sample['sample_rate'] = torchaudio.load(BytesIO(sample['audio_data']))
        sample['speech'] = sample['speech'].mean(dim=0, keepdim=True)
        del sample['audio_data']
        # sample['wav'] is torch.Tensor, we have 100 frames every second
        num_frames = sample['speech'].size(1) / sample['sample_rate'] * 100
        if num_frames < min_length:
            continue
        if num_frames > max_length:
            continue
        if len(sample['text_token']) < token_min_length:
            continue
        if len(sample['text_token']) > token_max_length:
            continue
        if online_feature is False and len(sample['speech_token']) == 0:
            continue
        if online_feature is False and 'reject_speech_token' in sample and len(sample['reject_speech_token']) == 0:
            continue
        if num_frames != 0:
            if len(sample['text_token']) / num_frames < min_output_input_ratio:
                continue
            if len(sample['text_token']) / num_frames > max_output_input_ratio:
                continue
        yield sample
def resample(data, resample_rate=22050, min_sample_rate=16000, mode='train'):
    """ Resample data.
        Inplace operation.

        Args:
            data: Iterable[{key, wav, label, sample_rate}]
            resample_rate: target resample rate

        Returns:
            Iterable[{key, wav, label, sample_rate}]
    """
    for sample in data:
        assert 'sample_rate' in sample
        assert 'speech' in sample
        sample_rate = sample['sample_rate']
        waveform = sample['speech']
        if sample_rate != resample_rate:
            if sample_rate < min_sample_rate:
                continue
            sample['sample_rate'] = resample_rate
            sample['speech'] = torchaudio.transforms.Resample(
                orig_freq=sample_rate, new_freq=resample_rate)(waveform)
        max_val = sample['speech'].abs().max()
        if max_val > 1:
            sample['speech'] /= max_val
        yield sample


def truncate(data, truncate_length=24576, mode='train'):
    """ Truncate data.

        Args:
            data: Iterable[{key, wav, label, sample_rate}]
            truncate_length: truncate length

        Returns:
            Iterable[{key, wav, label, sample_rate}]
    """
    for sample in data:
        waveform = sample['speech']
        if waveform.shape[1] > truncate_length:
            start = random.randint(0, waveform.shape[1] - truncate_length)
            waveform = waveform[:, start: start + truncate_length]
        else:
            waveform = torch.concat([waveform, torch.zeros(1, truncate_length - waveform.shape[1])], dim=1)
        sample['speech'] = waveform
        yield sample


def compute_fbank(data,
                  feat_extractor,
                  num_frames=-1,
                  mode='train'):
    """ Extract fbank

        Args:
            data: Iterable[{key, wav, label, sample_rate}]

        Returns:
            Iterable[{key, feat, label}]
    """
    for sample in data:
        assert 'sample_rate' in sample
        assert 'speech' in sample
        assert 'utt' in sample
        assert 'text_token' in sample
        # NOTE in cosyvoice2/3, we support online token extraction, so we need to align speech to 25hz first
        if num_frames != -1:
            index = int(np.ceil(sample['speech'].shape[1] / num_frames))
            sample['speech'] = torch.concat([sample['speech'], torch.zeros(1, index * num_frames - sample['speech'].shape[1])], dim=1)
        sample['speech_feat'] = feat_extractor(sample['speech']).squeeze(dim=0).transpose(0, 1)
        yield sample


def compute_whisper_fbank(data, num_frames=-1, mode='train'):
    """ Extract whisper fbank

        Args:
            data: Iterable[{key, wav, label, sample_rate}]

        Returns:
            Iterable[{key, feat, label}]
    """
    for sample in data:
        if num_frames != -1:
            assert sample['speech'].shape[1] % num_frames == 0, 'speech length is not aligned with speech_token'
        sample['speech_16k'] = torchaudio.transforms.Resample(orig_freq=sample['sample_rate'], new_freq=16000)(sample['speech'])
        sample['whisper_feat'] = whisper.log_mel_spectrogram(sample['speech_16k'], n_mels=128).squeeze(dim=0).transpose(0, 1)
        yield sample


def compute_f0(data, sample_rate, hop_size, mode='train'):
    """ Extract f0

        Args:
            data: Iterable[{key, wav, label, sample_rate}]

        Returns:
            Iterable[{key, feat, label}]
    """
    frame_period = hop_size * 1000 / sample_rate
    for sample in data:
        assert 'sample_rate' in sample
        assert 'speech' in sample
        assert 'utt' in sample
        assert 'text_token' in sample
        waveform = sample['speech']
        _f0, t = pw.harvest(waveform.squeeze(dim=0).numpy().astype('double'), sample_rate, frame_period=frame_period)
        if sum(_f0 != 0) < 5:  # this happens when the algorithm fails
            _f0, t = pw.dio(waveform.squeeze(dim=0).numpy().astype('double'), sample_rate, frame_period=frame_period)  # if harvest fails, try dio
        f0 = pw.stonemask(waveform.squeeze(dim=0).numpy().astype('double'), _f0, t, sample_rate)
        f0 = F.interpolate(torch.from_numpy(f0).view(1, 1, -1), size=sample['speech_feat'].shape[0], mode='linear').view(-1)
        sample['pitch_feat'] = f0
        yield sample
def parse_embedding(data, normalize, mode='train'):
    """ Parse utt_embedding/spk_embedding

        Args:
            data: Iterable[{key, wav, label, sample_rate}]

        Returns:
            Iterable[{key, feat, label}]
    """
    for sample in data:
        if 'utt_embedding' not in sample and 'spk_embedding' not in sample:
            sample['speech_16k'] = torchaudio.transforms.Resample(orig_freq=sample['sample_rate'], new_freq=16000)(sample['speech'])
            embedding = embedding_extractor.inference(sample['speech_16k'])
            sample['spk_embedding'] = sample['utt_embedding'] = embedding
        else:
            sample['utt_embedding'] = torch.tensor(sample['utt_embedding'], dtype=torch.float32)
            sample['spk_embedding'] = torch.tensor(sample['spk_embedding'], dtype=torch.float32)
        if normalize:
            sample['utt_embedding'] = F.normalize(sample['utt_embedding'], dim=0)
            sample['spk_embedding'] = F.normalize(sample['spk_embedding'], dim=0)
        yield sample


def tokenize(data, get_tokenizer, allowed_special, mode='train'):
    """ Decode text to chars or BPE
        Inplace operation

        Args:
            data: Iterable[{key, wav, txt, sample_rate}]

        Returns:
            Iterable[{key, wav, txt, tokens, label, sample_rate}]
    """
    tokenizer = get_tokenizer()
    for sample in data:
        assert 'text' in sample
        sample['text_token'] = tokenizer.encode(sample['text'], allowed_special=allowed_special)
        if 'instruct' in sample:
            sample['instruct_token'] = tokenizer.encode(sample['instruct'], allowed_special=allowed_special)
        yield sample


def shuffle(data, shuffle_size=10000, mode='train'):
    """ Local shuffle the data

        Args:
            data: Iterable[{key, feat, label}]
            shuffle_size: buffer size for shuffle

        Returns:
            Iterable[{key, feat, label}]
    """
    buf = []
    yield_size = int(shuffle_size / 2)
    for sample in data:
        buf.append(sample)
        if len(buf) >= shuffle_size:
            random.shuffle(buf)
            for x in buf[:yield_size]:
                yield x
            buf = buf[yield_size:]
    # The samples left over
    random.shuffle(buf)
    for x in buf:
        yield x
def sort(data, sort_size=500, mode='train'):
    """ Sort the data by feature length.
        Sort is used after shuffle and before batch, so we can group
        utts with similar lengths into a batch, and `sort_size` should
        be less than `shuffle_size`

        Args:
            data: Iterable[{key, feat, label}]
            sort_size: buffer size for sort

        Returns:
            Iterable[{key, feat, label}]
    """
    buf = []
    for sample in data:
        buf.append(sample)
        if len(buf) >= sort_size:
            buf.sort(key=lambda x: x['speech_feat'].size(0))
            for x in buf:
                yield x
            buf = []
    # The samples left over
    buf.sort(key=lambda x: x['speech_feat'].size(0))
    for x in buf:
        yield x


def static_batch(data, batch_size=16):
    """ Static batch the data by `batch_size`

        Args:
            data: Iterable[{key, feat, label}]
            batch_size: batch size

        Returns:
            Iterable[List[{key, feat, label}]]
    """
    buf = []
    for sample in data:
        buf.append(sample)
        if len(buf) >= batch_size:
            yield buf
            buf = []
    if len(buf) > 0:
        yield buf


def dynamic_batch(data, max_frames_in_batch=12000, mode='train'):
    """ Dynamic batch the data until the total frames in batch
        reach `max_frames_in_batch`

        Args:
            data: Iterable[{key, feat, label}]
            max_frames_in_batch: max_frames in one batch

        Returns:
            Iterable[List[{key, feat, label}]]
    """
    buf = []
    longest_frames = 0
    for sample in data:
        assert 'speech_feat' in sample
        assert isinstance(sample['speech_feat'], torch.Tensor)
        new_sample_frames = sample['speech_feat'].size(0)
        longest_frames = max(longest_frames, new_sample_frames)
        frames_after_padding = longest_frames * (len(buf) + 1)
        if frames_after_padding > max_frames_in_batch:
            yield buf
            buf = [sample]
            longest_frames = new_sample_frames
        else:
            buf.append(sample)
    if len(buf) > 0:
        yield buf
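`dynamic_batch` above charges each batch its padded cost, `max(length) * batch_size`, and flushes when adding a sample would exceed the frame budget. A self-contained sketch of that greedy grouping over plain lengths (the `dynamic_batches` helper name and the toy numbers are illustrative, not from the codebase):

```python
def dynamic_batches(lengths, max_frames_in_batch):
    # greedy: after padding, a batch costs max(length) * batch_size;
    # flush the buffer when the next sample would blow the budget
    buf, longest, batches = [], 0, []
    for n in lengths:
        longest = max(longest, n)
        if longest * (len(buf) + 1) > max_frames_in_batch:
            batches.append(buf)
            buf, longest = [n], n
        else:
            buf.append(n)
    if buf:
        batches.append(buf)
    return batches

batches = dynamic_batches([100, 120, 90, 300, 50], 400)
```

Note why `sort` precedes batching: grouping similar lengths keeps `max(length)` close to the mean, so less of the frame budget is wasted on padding.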
def batch(data, batch_type='static', batch_size=16, max_frames_in_batch=12000, mode='train'):
|
||||
""" Wrapper for static/dynamic batch
|
||||
"""
|
||||
if batch_type == 'static':
|
||||
return static_batch(data, batch_size)
|
||||
elif batch_type == 'dynamic':
|
||||
return dynamic_batch(data, max_frames_in_batch)
|
||||
else:
|
||||
logging.fatal('Unsupported batch type {}'.format(batch_type))
|
||||
|
||||
|
||||
def padding(data, use_spk_embedding, mode='train', gan=False, dpo=False):
    """ Padding the data into training data

    Args:
        data: Iterable[List[{key, feat, label}]]

    Returns:
        Iterable[Tuple(keys, feats, labels, feats lengths, label lengths)]
    """
    for sample in data:
        assert isinstance(sample, list)
        order = torch.argsort(torch.tensor([x['speech'].size(1) for x in sample], dtype=torch.int32), descending=True)
        batch = {}
        batch['utts'] = [sample[i]['utt'] for i in order]
        batch['text'] = [sample[i]['text'] for i in order]
        text_token = [torch.tensor(sample[i]['text_token']) for i in order]
        batch['text_token_len'] = torch.tensor([i.size(0) for i in text_token], dtype=torch.int32)
        batch['text_token'] = pad_sequence(text_token, batch_first=True, padding_value=0)
        speech_feat = [sample[i]['speech_feat'] for i in order]
        batch['speech_feat_len'] = torch.tensor([i.size(0) for i in speech_feat], dtype=torch.int32)
        batch['speech_feat'] = pad_sequence(speech_feat, batch_first=True, padding_value=0)
        batch['utt_embedding'] = torch.stack([sample[i]['utt_embedding'] for i in order], dim=0)
        batch['spk_embedding'] = torch.stack([sample[i]['spk_embedding'] for i in order], dim=0)
        if torch.tensor(['instruct_token' in sample[i] for i in order]).all():
            instruct_token = [torch.tensor(sample[i]['instruct_token']) for i in order]
            batch['instruct_token_len'] = torch.tensor([i.size(0) for i in instruct_token], dtype=torch.int32)
            batch['instruct_token'] = pad_sequence(instruct_token, batch_first=True, padding_value=0)
        if torch.tensor(['whisper_feat' in sample[i] for i in order]).all():
            whisper_feat = [sample[i]['whisper_feat'] for i in order]
            batch['whisper_feat_len'] = torch.tensor([i.size(0) for i in whisper_feat], dtype=torch.int32)
            batch['whisper_feat'] = pad_sequence(whisper_feat, batch_first=True, padding_value=0)
        if torch.tensor(['speech_token' in sample[i] for i in order]).all():
            speech_token = [torch.tensor(sample[i]['speech_token']) for i in order]
            batch['speech_token_len'] = torch.tensor([i.size(0) for i in speech_token], dtype=torch.int32)
            batch['speech_token'] = pad_sequence(speech_token, batch_first=True, padding_value=0)
        if gan is True:
            # in gan train, we need speech/pitch_feat
            speech = [sample[i]['speech'].squeeze(dim=0) for i in order]
            batch['speech_len'] = torch.tensor([i.size(0) for i in speech], dtype=torch.int32)
            batch['speech'] = pad_sequence(speech, batch_first=True, padding_value=0)
            pitch_feat = [sample[i]['pitch_feat'] for i in order]
            batch['pitch_feat_len'] = torch.tensor([i.size(0) for i in pitch_feat], dtype=torch.int32)
            batch['pitch_feat'] = pad_sequence(pitch_feat, batch_first=True, padding_value=0)
        if dpo is True:
            reject_speech_token = [torch.tensor(sample[i]['reject_speech_token']) for i in order]
            batch['reject_speech_token_len'] = torch.tensor([i.size(0) for i in reject_speech_token], dtype=torch.int32)
            batch['reject_speech_token'] = pad_sequence(reject_speech_token, batch_first=True, padding_value=0)
        if use_spk_embedding is True:
            batch["embedding"] = batch["spk_embedding"]
        else:
            batch["embedding"] = batch["utt_embedding"]
        yield batch
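The sort-and-pad step inside `padding` can be sketched without torch (`pad_batch_sketch` is a hypothetical helper using plain lists in place of `torch.argsort` and `pad_sequence`):

```python
def pad_batch_sketch(seqs, pad_value=0):
    # Mirror the padding generator above: order by length (descending),
    # keep the true lengths, right-pad everything to the longest sequence.
    order = sorted(range(len(seqs)), key=lambda i: len(seqs[i]), reverse=True)
    ordered = [seqs[i] for i in order]
    lens = [len(s) for s in ordered]
    padded = [s + [pad_value] * (max(lens) - len(s)) for s in ordered]
    return padded, lens

padded, lens = pad_batch_sketch([[1, 2], [3, 4, 5], [6]])
# padded -> [[3, 4, 5], [1, 2, 0], [6, 0, 0]], lens -> [3, 2, 1]
```

Keeping the explicit length tensors (`*_len`) alongside the padded tensors is what lets downstream code mask out the zero padding.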
176 models/CosyVoice/cosyvoice/flow/DiT/dit.py (Normal file)
@@ -0,0 +1,176 @@
"""
ein notation:
b - batch
n - sequence
nt - text sequence
nw - raw wave length
d - dimension
"""

from __future__ import annotations

import torch
from torch import nn
import torch.nn.functional as F
from einops import repeat
from x_transformers.x_transformers import RotaryEmbedding
from cosyvoice.utils.mask import add_optional_chunk_mask
from cosyvoice.flow.DiT.modules import (
    TimestepEmbedding,
    ConvNeXtV2Block,
    CausalConvPositionEmbedding,
    DiTBlock,
    AdaLayerNormZero_Final,
    precompute_freqs_cis,
    get_pos_embed_indices,
)


# Text embedding


class TextEmbedding(nn.Module):
    def __init__(self, text_num_embeds, text_dim, conv_layers=0, conv_mult=2):
        super().__init__()
        self.text_embed = nn.Embedding(text_num_embeds + 1, text_dim)  # use 0 as filler token

        if conv_layers > 0:
            self.extra_modeling = True
            self.precompute_max_pos = 4096  # ~44s of 24khz audio
            self.register_buffer("freqs_cis", precompute_freqs_cis(text_dim, self.precompute_max_pos), persistent=False)
            self.text_blocks = nn.Sequential(
                *[ConvNeXtV2Block(text_dim, text_dim * conv_mult) for _ in range(conv_layers)]
            )
        else:
            self.extra_modeling = False

    def forward(self, text: int["b nt"], seq_len, drop_text=False):  # noqa: F722
        batch, text_len = text.shape[0], text.shape[1]
        text = text + 1  # use 0 as filler token. preprocess of batch pad -1, see list_str_to_idx()
        text = text[:, :seq_len]  # curtail if character tokens are more than the mel spec tokens
        text = F.pad(text, (0, seq_len - text_len), value=0)

        if drop_text:  # cfg for text
            text = torch.zeros_like(text)

        text = self.text_embed(text)  # b n -> b n d

        # possible extra modeling
        if self.extra_modeling:
            # sinus pos emb
            batch_start = torch.zeros((batch,), dtype=torch.long)
            pos_idx = get_pos_embed_indices(batch_start, seq_len, max_pos=self.precompute_max_pos)
            text_pos_embed = self.freqs_cis[pos_idx]
            text = text + text_pos_embed

            # convnextv2 blocks
            text = self.text_blocks(text)

        return text


# noised input audio and context mixing embedding


class InputEmbedding(nn.Module):
    def __init__(self, mel_dim, text_dim, out_dim, spk_dim=None):
        super().__init__()
        spk_dim = 0 if spk_dim is None else spk_dim
        self.spk_dim = spk_dim
        self.proj = nn.Linear(mel_dim * 2 + text_dim + spk_dim, out_dim)
        self.conv_pos_embed = CausalConvPositionEmbedding(dim=out_dim)

    def forward(
        self,
        x: float["b n d"],
        cond: float["b n d"],
        text_embed: float["b n d"],
        spks: float["b d"],
    ):
        to_cat = [x, cond, text_embed]
        if self.spk_dim > 0:
            spks = repeat(spks, "b c -> b t c", t=x.shape[1])
            to_cat.append(spks)

        x = self.proj(torch.cat(to_cat, dim=-1))
        x = self.conv_pos_embed(x) + x
        return x


# Transformer backbone using DiT blocks


class DiT(nn.Module):
    def __init__(
        self,
        *,
        dim,
        depth=8,
        heads=8,
        dim_head=64,
        dropout=0.1,
        ff_mult=4,
        mel_dim=80,
        mu_dim=None,
        long_skip_connection=False,
        spk_dim=None,
        out_channels=None,
        static_chunk_size=50,
        num_decoding_left_chunks=2
    ):
        super().__init__()

        self.time_embed = TimestepEmbedding(dim)
        if mu_dim is None:
            mu_dim = mel_dim
        self.input_embed = InputEmbedding(mel_dim, mu_dim, dim, spk_dim)

        self.rotary_embed = RotaryEmbedding(dim_head)

        self.dim = dim
        self.depth = depth

        self.transformer_blocks = nn.ModuleList(
            [DiTBlock(dim=dim, heads=heads, dim_head=dim_head, ff_mult=ff_mult, dropout=dropout) for _ in range(depth)]
        )
        self.long_skip_connection = nn.Linear(dim * 2, dim, bias=False) if long_skip_connection else None

        self.norm_out = AdaLayerNormZero_Final(dim)  # final modulation
        self.proj_out = nn.Linear(dim, mel_dim)
        self.out_channels = out_channels
        self.static_chunk_size = static_chunk_size
        self.num_decoding_left_chunks = num_decoding_left_chunks

    def forward(self, x, mask, mu, t, spks=None, cond=None, streaming=False):
        x = x.transpose(1, 2)
        mu = mu.transpose(1, 2)
        cond = cond.transpose(1, 2)
        spks = spks.unsqueeze(dim=1)
        batch, seq_len = x.shape[0], x.shape[1]
        if t.ndim == 0:
            t = t.repeat(batch)

        # t: conditioning time, c: context (text + masked cond audio), x: noised input audio
        t = self.time_embed(t)
        x = self.input_embed(x, cond, mu, spks.squeeze(1))

        rope = self.rotary_embed.forward_from_seq_len(seq_len)

        if self.long_skip_connection is not None:
            residual = x

        if streaming is True:
            attn_mask = add_optional_chunk_mask(x, mask.bool(), False, False, 0, self.static_chunk_size, -1).unsqueeze(dim=1)
        else:
            attn_mask = add_optional_chunk_mask(x, mask.bool(), False, False, 0, 0, -1).repeat(1, x.size(1), 1).unsqueeze(dim=1)

        for block in self.transformer_blocks:
            x = block(x, t, mask=attn_mask.bool(), rope=rope)

        if self.long_skip_connection is not None:
            x = self.long_skip_connection(torch.cat((x, residual), dim=-1))

        x = self.norm_out(x, t)
        output = self.proj_out(x).transpose(1, 2)
        return output
616 models/CosyVoice/cosyvoice/flow/DiT/modules.py (Normal file)
@@ -0,0 +1,616 @@
"""
ein notation:
b - batch
n - sequence
nt - text sequence
nw - raw wave length
d - dimension
"""

from __future__ import annotations
from typing import Optional
import math

import torch
from torch import nn
import torch.nn.functional as F
import torchaudio

from x_transformers.x_transformers import apply_rotary_pos_emb


# raw wav to mel spec
class MelSpec(nn.Module):
    def __init__(
        self,
        filter_length=1024,
        hop_length=256,
        win_length=1024,
        n_mel_channels=100,
        target_sample_rate=24_000,
        normalize=False,
        power=1,
        norm=None,
        center=True,
    ):
        super().__init__()
        self.n_mel_channels = n_mel_channels

        self.mel_stft = torchaudio.transforms.MelSpectrogram(
            sample_rate=target_sample_rate,
            n_fft=filter_length,
            win_length=win_length,
            hop_length=hop_length,
            n_mels=n_mel_channels,
            power=power,
            center=center,
            normalized=normalize,
            norm=norm,
        )

        self.register_buffer("dummy", torch.tensor(0), persistent=False)

    def forward(self, inp):
        if len(inp.shape) == 3:
            inp = inp.squeeze(1)  # 'b 1 nw -> b nw'

        assert len(inp.shape) == 2

        if self.dummy.device != inp.device:
            self.to(inp.device)

        mel = self.mel_stft(inp)
        mel = mel.clamp(min=1e-5).log()
        return mel


# sinusoidal position embedding


class SinusPositionEmbedding(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.dim = dim

    def forward(self, x, scale=1000):
        device = x.device
        half_dim = self.dim // 2
        emb = math.log(10000) / (half_dim - 1)
        emb = torch.exp(torch.arange(half_dim, device=device).float() * -emb)
        emb = scale * x.unsqueeze(1) * emb.unsqueeze(0)
        emb = torch.cat((emb.sin(), emb.cos()), dim=-1)
        return emb


# convolutional position embedding


class ConvPositionEmbedding(nn.Module):
    def __init__(self, dim, kernel_size=31, groups=16):
        super().__init__()
        assert kernel_size % 2 != 0
        self.conv1d = nn.Sequential(
            nn.Conv1d(dim, dim, kernel_size, groups=groups, padding=kernel_size // 2),
            nn.Mish(),
            nn.Conv1d(dim, dim, kernel_size, groups=groups, padding=kernel_size // 2),
            nn.Mish(),
        )

    def forward(self, x: float["b n d"], mask: bool["b n"] | None = None):  # noqa: F722
        if mask is not None:
            mask = mask[..., None]
            x = x.masked_fill(~mask, 0.0)

        x = x.permute(0, 2, 1)
        x = self.conv1d(x)
        out = x.permute(0, 2, 1)

        if mask is not None:
            out = out.masked_fill(~mask, 0.0)

        return out


class CausalConvPositionEmbedding(nn.Module):
    def __init__(self, dim, kernel_size=31, groups=16):
        super().__init__()
        assert kernel_size % 2 != 0
        self.kernel_size = kernel_size
        self.conv1 = nn.Sequential(
            nn.Conv1d(dim, dim, kernel_size, groups=groups, padding=0),
            nn.Mish(),
        )
        self.conv2 = nn.Sequential(
            nn.Conv1d(dim, dim, kernel_size, groups=groups, padding=0),
            nn.Mish(),
        )

    def forward(self, x: float["b n d"], mask: bool["b n"] | None = None):  # noqa: F722
        if mask is not None:
            mask = mask[..., None]
            x = x.masked_fill(~mask, 0.0)

        x = x.permute(0, 2, 1)
        x = F.pad(x, (self.kernel_size - 1, 0, 0, 0))
        x = self.conv1(x)
        x = F.pad(x, (self.kernel_size - 1, 0, 0, 0))
        x = self.conv2(x)
        out = x.permute(0, 2, 1)

        if mask is not None:
            out = out.masked_fill(~mask, 0.0)

        return out


# rotary positional embedding related


def precompute_freqs_cis(dim: int, end: int, theta: float = 10000.0, theta_rescale_factor=1.0):
    # proposed by reddit user bloc97, to rescale rotary embeddings to longer sequence length without fine-tuning
    # has some connection to NTK literature
    # https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/
    # https://github.com/lucidrains/rotary-embedding-torch/blob/main/rotary_embedding_torch/rotary_embedding_torch.py
    theta *= theta_rescale_factor ** (dim / (dim - 2))
    freqs = 1.0 / (theta ** (torch.arange(0, dim, 2)[: (dim // 2)].float() / dim))
    t = torch.arange(end, device=freqs.device)  # type: ignore
    freqs = torch.outer(t, freqs).float()  # type: ignore
    freqs_cos = torch.cos(freqs)  # real part
    freqs_sin = torch.sin(freqs)  # imaginary part
    return torch.cat([freqs_cos, freqs_sin], dim=-1)


def get_pos_embed_indices(start, length, max_pos, scale=1.0):
    # length = length if isinstance(length, int) else length.max()
    scale = scale * torch.ones_like(start, dtype=torch.float32)  # in case scale is a scalar
    pos = (
        start.unsqueeze(1)
        + (torch.arange(length, device=start.device, dtype=torch.float32).unsqueeze(0) * scale.unsqueeze(1)).long()
    )
    # avoid extra long error.
    pos = torch.where(pos < max_pos, pos, max_pos - 1)
    return pos
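For `scale=1.0`, `get_pos_embed_indices` reduces to absolute positions clamped at `max_pos - 1`; a plain-Python sketch (hypothetical helper, not part of the module):

```python
def pos_embed_indices_sketch(start, length, max_pos):
    # start + [0, 1, ..., length - 1], clamped so lookups into the
    # precomputed freqs_cis table never run past max_pos - 1.
    return [min(start + i, max_pos - 1) for i in range(length)]

idx = pos_embed_indices_sketch(4094, 4, max_pos=4096)
# -> [4094, 4095, 4095, 4095]: positions past the table end reuse the last entry
```

The clamp trades positional accuracy for safety on sequences longer than the precomputed table (4096 frames here) instead of raising an index error.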
# Global Response Normalization layer (Instance Normalization ?)


class GRN(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, 1, dim))
        self.beta = nn.Parameter(torch.zeros(1, 1, dim))

    def forward(self, x):
        Gx = torch.norm(x, p=2, dim=1, keepdim=True)
        Nx = Gx / (Gx.mean(dim=-1, keepdim=True) + 1e-6)
        return self.gamma * (x * Nx) + self.beta + x


# ConvNeXt-V2 Block https://github.com/facebookresearch/ConvNeXt-V2/blob/main/models/convnextv2.py
# ref: https://github.com/bfs18/e2_tts/blob/main/rfwave/modules.py#L108


class ConvNeXtV2Block(nn.Module):
    def __init__(
        self,
        dim: int,
        intermediate_dim: int,
        dilation: int = 1,
    ):
        super().__init__()
        padding = (dilation * (7 - 1)) // 2
        self.dwconv = nn.Conv1d(
            dim, dim, kernel_size=7, padding=padding, groups=dim, dilation=dilation
        )  # depthwise conv
        self.norm = nn.LayerNorm(dim, eps=1e-6)
        self.pwconv1 = nn.Linear(dim, intermediate_dim)  # pointwise/1x1 convs, implemented with linear layers
        self.act = nn.GELU()
        self.grn = GRN(intermediate_dim)
        self.pwconv2 = nn.Linear(intermediate_dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = x.transpose(1, 2)  # b n d -> b d n
        x = self.dwconv(x)
        x = x.transpose(1, 2)  # b d n -> b n d
        x = self.norm(x)
        x = self.pwconv1(x)
        x = self.act(x)
        x = self.grn(x)
        x = self.pwconv2(x)
        return residual + x


# AdaLayerNormZero
# return with modulated x for attn input, and params for later mlp modulation


class AdaLayerNormZero(nn.Module):
    def __init__(self, dim):
        super().__init__()

        self.silu = nn.SiLU()
        self.linear = nn.Linear(dim, dim * 6)

        self.norm = nn.LayerNorm(dim, elementwise_affine=False, eps=1e-6)

    def forward(self, x, emb=None):
        emb = self.linear(self.silu(emb))
        shift_msa, scale_msa, gate_msa, shift_mlp, scale_mlp, gate_mlp = torch.chunk(emb, 6, dim=1)

        x = self.norm(x) * (1 + scale_msa[:, None]) + shift_msa[:, None]
        return x, gate_msa, shift_mlp, scale_mlp, gate_mlp


# AdaLayerNormZero for final layer
# return only with modulated x for attn input, cuz no more mlp modulation


class AdaLayerNormZero_Final(nn.Module):
    def __init__(self, dim):
        super().__init__()

        self.silu = nn.SiLU()
        self.linear = nn.Linear(dim, dim * 2)

        self.norm = nn.LayerNorm(dim, elementwise_affine=False, eps=1e-6)

    def forward(self, x, emb):
        emb = self.linear(self.silu(emb))
        scale, shift = torch.chunk(emb, 2, dim=1)

        x = self.norm(x) * (1 + scale)[:, None, :] + shift[:, None, :]
        return x


# FeedForward


class FeedForward(nn.Module):
    def __init__(self, dim, dim_out=None, mult=4, dropout=0.0, approximate: str = "none"):
        super().__init__()
        inner_dim = int(dim * mult)
        dim_out = dim_out if dim_out is not None else dim

        activation = nn.GELU(approximate=approximate)
        project_in = nn.Sequential(nn.Linear(dim, inner_dim), activation)
        self.ff = nn.Sequential(project_in, nn.Dropout(dropout), nn.Linear(inner_dim, dim_out))

    def forward(self, x):
        return self.ff(x)


# Attention with possible joint part
# modified from diffusers/src/diffusers/models/attention_processor.py


class Attention(nn.Module):
    def __init__(
        self,
        processor: JointAttnProcessor | AttnProcessor,
        dim: int,
        heads: int = 8,
        dim_head: int = 64,
        dropout: float = 0.0,
        context_dim: Optional[int] = None,  # if not None -> joint attention
        context_pre_only=None,
    ):
        super().__init__()

        if not hasattr(F, "scaled_dot_product_attention"):
            raise ImportError("Attention requires PyTorch 2.0; please upgrade PyTorch to use it.")

        self.processor = processor

        self.dim = dim
        self.heads = heads
        self.inner_dim = dim_head * heads
        self.dropout = dropout

        self.context_dim = context_dim
        self.context_pre_only = context_pre_only

        self.to_q = nn.Linear(dim, self.inner_dim)
        self.to_k = nn.Linear(dim, self.inner_dim)
        self.to_v = nn.Linear(dim, self.inner_dim)

        if self.context_dim is not None:
            self.to_k_c = nn.Linear(context_dim, self.inner_dim)
            self.to_v_c = nn.Linear(context_dim, self.inner_dim)
            if self.context_pre_only is not None:
                self.to_q_c = nn.Linear(context_dim, self.inner_dim)

        self.to_out = nn.ModuleList([])
        self.to_out.append(nn.Linear(self.inner_dim, dim))
        self.to_out.append(nn.Dropout(dropout))

        if self.context_pre_only is not None and not self.context_pre_only:
            self.to_out_c = nn.Linear(self.inner_dim, dim)

    def forward(
        self,
        x: float["b n d"],  # noised input x  # noqa: F722
        c: float["b n d"] = None,  # context c  # noqa: F722
        mask: bool["b n"] | None = None,  # noqa: F722
        rope=None,  # rotary position embedding for x
        c_rope=None,  # rotary position embedding for c
    ) -> torch.Tensor:
        if c is not None:
            return self.processor(self, x, c=c, mask=mask, rope=rope, c_rope=c_rope)
        else:
            return self.processor(self, x, mask=mask, rope=rope)


# Attention processor


class AttnProcessor:
    def __init__(self):
        pass

    def __call__(
        self,
        attn: Attention,
        x: float["b n d"],  # noised input x  # noqa: F722
        mask: bool["b n"] | None = None,  # noqa: F722
        rope=None,  # rotary position embedding
    ) -> torch.FloatTensor:
        batch_size = x.shape[0]

        # `sample` projections.
        query = attn.to_q(x)
        key = attn.to_k(x)
        value = attn.to_v(x)

        # apply rotary position embedding
        if rope is not None:
            freqs, xpos_scale = rope
            q_xpos_scale, k_xpos_scale = (xpos_scale, xpos_scale**-1.0) if xpos_scale is not None else (1.0, 1.0)

            query = apply_rotary_pos_emb(query, freqs, q_xpos_scale)
            key = apply_rotary_pos_emb(key, freqs, k_xpos_scale)

        # attention
        inner_dim = key.shape[-1]
        head_dim = inner_dim // attn.heads
        query = query.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
        key = key.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
        value = value.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)

        # mask. e.g. inference got a batch with different target durations, mask out the padding
        if mask is not None:
            attn_mask = mask
            if attn_mask.dim() == 2:
                attn_mask = attn_mask.unsqueeze(1).unsqueeze(1)  # 'b n -> b 1 1 n'
                attn_mask = attn_mask.expand(batch_size, attn.heads, query.shape[-2], key.shape[-2])
        else:
            attn_mask = None

        x = F.scaled_dot_product_attention(query, key, value, attn_mask=attn_mask, dropout_p=0.0, is_causal=False)
        x = x.transpose(1, 2).reshape(batch_size, -1, attn.heads * head_dim)
        x = x.to(query.dtype)

        # linear proj
        x = attn.to_out[0](x)
        # dropout
        x = attn.to_out[1](x)

        if mask is not None:
            if mask.dim() == 2:
                mask = mask.unsqueeze(-1)
            else:
                mask = mask[:, 0, -1].unsqueeze(-1)
            x = x.masked_fill(~mask, 0.0)

        return x


# Joint Attention processor for MM-DiT
# modified from diffusers/src/diffusers/models/attention_processor.py


class JointAttnProcessor:
    def __init__(self):
        pass

    def __call__(
        self,
        attn: Attention,
        x: float["b n d"],  # noised input x  # noqa: F722
        c: float["b nt d"] = None,  # context c, here text  # noqa: F722
        mask: bool["b n"] | None = None,  # noqa: F722
        rope=None,  # rotary position embedding for x
        c_rope=None,  # rotary position embedding for c
    ) -> torch.FloatTensor:
        residual = x

        batch_size = c.shape[0]

        # `sample` projections.
        query = attn.to_q(x)
        key = attn.to_k(x)
        value = attn.to_v(x)

        # `context` projections.
        c_query = attn.to_q_c(c)
        c_key = attn.to_k_c(c)
        c_value = attn.to_v_c(c)

        # apply rope for context and noised input independently
        if rope is not None:
            freqs, xpos_scale = rope
            q_xpos_scale, k_xpos_scale = (xpos_scale, xpos_scale**-1.0) if xpos_scale is not None else (1.0, 1.0)
            query = apply_rotary_pos_emb(query, freqs, q_xpos_scale)
            key = apply_rotary_pos_emb(key, freqs, k_xpos_scale)
        if c_rope is not None:
            freqs, xpos_scale = c_rope
            q_xpos_scale, k_xpos_scale = (xpos_scale, xpos_scale**-1.0) if xpos_scale is not None else (1.0, 1.0)
            c_query = apply_rotary_pos_emb(c_query, freqs, q_xpos_scale)
            c_key = apply_rotary_pos_emb(c_key, freqs, k_xpos_scale)

        # attention
        query = torch.cat([query, c_query], dim=1)
        key = torch.cat([key, c_key], dim=1)
        value = torch.cat([value, c_value], dim=1)

        inner_dim = key.shape[-1]
        head_dim = inner_dim // attn.heads
        query = query.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
        key = key.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
        value = value.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)

        # mask. e.g. inference got a batch with different target durations, mask out the padding
        if mask is not None:
            attn_mask = F.pad(mask, (0, c.shape[1]), value=True)  # no mask for c (text)
            attn_mask = attn_mask.unsqueeze(1).unsqueeze(1)  # 'b n -> b 1 1 n'
            attn_mask = attn_mask.expand(batch_size, attn.heads, query.shape[-2], key.shape[-2])
        else:
            attn_mask = None

        x = F.scaled_dot_product_attention(query, key, value, attn_mask=attn_mask, dropout_p=0.0, is_causal=False)
        x = x.transpose(1, 2).reshape(batch_size, -1, attn.heads * head_dim)
        x = x.to(query.dtype)

        # Split the attention outputs.
        x, c = (
            x[:, : residual.shape[1]],
            x[:, residual.shape[1]:],
        )

        # linear proj
        x = attn.to_out[0](x)
        # dropout
        x = attn.to_out[1](x)
        if not attn.context_pre_only:
            c = attn.to_out_c(c)

        if mask is not None:
            mask = mask.unsqueeze(-1)
            x = x.masked_fill(~mask, 0.0)
            # c = c.masked_fill(~mask, 0.)  # no mask for c (text)

        return x, c


# DiT Block


class DiTBlock(nn.Module):
    def __init__(self, dim, heads, dim_head, ff_mult=4, dropout=0.1):
        super().__init__()

        self.attn_norm = AdaLayerNormZero(dim)
        self.attn = Attention(
            processor=AttnProcessor(),
            dim=dim,
            heads=heads,
            dim_head=dim_head,
            dropout=dropout,
        )

        self.ff_norm = nn.LayerNorm(dim, elementwise_affine=False, eps=1e-6)
        self.ff = FeedForward(dim=dim, mult=ff_mult, dropout=dropout, approximate="tanh")

    def forward(self, x, t, mask=None, rope=None):  # x: noised input, t: time embedding
        # pre-norm & modulation for attention input
        norm, gate_msa, shift_mlp, scale_mlp, gate_mlp = self.attn_norm(x, emb=t)

        # attention
        attn_output = self.attn(x=norm, mask=mask, rope=rope)

        # process attention output for input x
        x = x + gate_msa.unsqueeze(1) * attn_output

        ff_norm = self.ff_norm(x) * (1 + scale_mlp[:, None]) + shift_mlp[:, None]
        ff_output = self.ff(ff_norm)
        x = x + gate_mlp.unsqueeze(1) * ff_output

        return x


# MMDiT Block https://arxiv.org/abs/2403.03206


class MMDiTBlock(nn.Module):
    r"""
    modified from diffusers/src/diffusers/models/attention.py

    notes.
    _c: context related. text, cond, etc. (left part in sd3 fig2.b)
    _x: noised input related. (right part)
    context_pre_only: last layer only do prenorm + modulation cuz no more ffn
    """

    def __init__(self, dim, heads, dim_head, ff_mult=4, dropout=0.1, context_pre_only=False):
        super().__init__()

        self.context_pre_only = context_pre_only

        self.attn_norm_c = AdaLayerNormZero_Final(dim) if context_pre_only else AdaLayerNormZero(dim)
        self.attn_norm_x = AdaLayerNormZero(dim)
        self.attn = Attention(
            processor=JointAttnProcessor(),
            dim=dim,
            heads=heads,
            dim_head=dim_head,
            dropout=dropout,
            context_dim=dim,
            context_pre_only=context_pre_only,
        )

        if not context_pre_only:
            self.ff_norm_c = nn.LayerNorm(dim, elementwise_affine=False, eps=1e-6)
            self.ff_c = FeedForward(dim=dim, mult=ff_mult, dropout=dropout, approximate="tanh")
        else:
            self.ff_norm_c = None
            self.ff_c = None
        self.ff_norm_x = nn.LayerNorm(dim, elementwise_affine=False, eps=1e-6)
        self.ff_x = FeedForward(dim=dim, mult=ff_mult, dropout=dropout, approximate="tanh")

    def forward(self, x, c, t, mask=None, rope=None, c_rope=None):  # x: noised input, c: context, t: time embedding
        # pre-norm & modulation for attention input
        if self.context_pre_only:
            norm_c = self.attn_norm_c(c, t)
        else:
            norm_c, c_gate_msa, c_shift_mlp, c_scale_mlp, c_gate_mlp = self.attn_norm_c(c, emb=t)
        norm_x, x_gate_msa, x_shift_mlp, x_scale_mlp, x_gate_mlp = self.attn_norm_x(x, emb=t)

        # attention
        x_attn_output, c_attn_output = self.attn(x=norm_x, c=norm_c, mask=mask, rope=rope, c_rope=c_rope)

        # process attention output for context c
        if self.context_pre_only:
            c = None
        else:  # if not last layer
            c = c + c_gate_msa.unsqueeze(1) * c_attn_output

            norm_c = self.ff_norm_c(c) * (1 + c_scale_mlp[:, None]) + c_shift_mlp[:, None]
            c_ff_output = self.ff_c(norm_c)
            c = c + c_gate_mlp.unsqueeze(1) * c_ff_output

        # process attention output for input x
        x = x + x_gate_msa.unsqueeze(1) * x_attn_output

        norm_x = self.ff_norm_x(x) * (1 + x_scale_mlp[:, None]) + x_shift_mlp[:, None]
        x_ff_output = self.ff_x(norm_x)
        x = x + x_gate_mlp.unsqueeze(1) * x_ff_output

        return c, x


# time step conditioning embedding


class TimestepEmbedding(nn.Module):
    def __init__(self, dim, freq_embed_dim=256):
        super().__init__()
        self.time_embed = SinusPositionEmbedding(freq_embed_dim)
        self.time_mlp = nn.Sequential(nn.Linear(freq_embed_dim, dim), nn.SiLU(), nn.Linear(dim, dim))

    def forward(self, timestep: float["b"]):  # noqa: F821
        time_hidden = self.time_embed(timestep)
        time_hidden = time_hidden.to(timestep.dtype)
        time = self.time_mlp(time_hidden)  # b d
        return time
494 models/CosyVoice/cosyvoice/flow/decoder.py (Normal file)
@@ -0,0 +1,494 @@
# Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu, Zhihao Du)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Tuple
import torch
import torch.nn as nn
import torch.nn.functional as F
from einops import pack, rearrange, repeat
from cosyvoice.utils.common import mask_to_bias
from cosyvoice.utils.mask import add_optional_chunk_mask
from matcha.models.components.decoder import SinusoidalPosEmb, Block1D, ResnetBlock1D, Downsample1D, TimestepEmbedding, Upsample1D
from matcha.models.components.transformer import BasicTransformerBlock


class Transpose(torch.nn.Module):
    def __init__(self, dim0: int, dim1: int):
        super().__init__()
        self.dim0 = dim0
        self.dim1 = dim1

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = torch.transpose(x, self.dim0, self.dim1)
        return x


class CausalConv1d(torch.nn.Conv1d):
    def __init__(
        self,
        in_channels: int,
        out_channels: int,
        kernel_size: int,
        stride: int = 1,
        dilation: int = 1,
        groups: int = 1,
        bias: bool = True,
        padding_mode: str = 'zeros',
        device=None,
        dtype=None
    ) -> None:
        super(CausalConv1d, self).__init__(in_channels, out_channels,
                                           kernel_size, stride,
                                           padding=0, dilation=dilation,
                                           groups=groups, bias=bias,
                                           padding_mode=padding_mode,
                                           device=device, dtype=dtype)
        assert stride == 1
        self.causal_padding = kernel_size - 1

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.pad(x, (self.causal_padding, 0), value=0.0)
        x = super(CausalConv1d, self).forward(x)
        return x


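`CausalConv1d` achieves causality purely through padding: it left-pads the sequence with `kernel_size - 1` zeros and applies an otherwise unpadded convolution, so output frame `t` can only depend on input frames up to `t`. A minimal pure-Python sketch of that padding scheme (no torch, scalar channels, kernel applied as a sliding dot product rather than a flipped convolution):

```python
def causal_conv1d(x, kernel):
    # Left-pad with kernel_size - 1 zeros, then slide the kernel so that
    # output[t] only sees x[:t + 1] -- mirroring CausalConv1d's F.pad((k-1, 0)).
    k = len(kernel)
    padded = [0.0] * (k - 1) + list(x)
    return [sum(kernel[j] * padded[t + j] for j in range(k)) for t in range(len(x))]
```

Because of the left padding, the output has the same length as the input and editing a future sample never changes an earlier output, which is what makes the block usable for streaming.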
class CausalBlock1D(Block1D):
    def __init__(self, dim: int, dim_out: int):
        super(CausalBlock1D, self).__init__(dim, dim_out)
        self.block = torch.nn.Sequential(
            CausalConv1d(dim, dim_out, 3),
            Transpose(1, 2),
            nn.LayerNorm(dim_out),
            Transpose(1, 2),
            nn.Mish(),
        )

    def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        output = self.block(x * mask)
        return output * mask


class CausalResnetBlock1D(ResnetBlock1D):
    def __init__(self, dim: int, dim_out: int, time_emb_dim: int, groups: int = 8):
        super(CausalResnetBlock1D, self).__init__(dim, dim_out, time_emb_dim, groups)
        self.block1 = CausalBlock1D(dim, dim_out)
        self.block2 = CausalBlock1D(dim_out, dim_out)


class ConditionalDecoder(nn.Module):
    def __init__(
        self,
        in_channels,
        out_channels,
        channels=(256, 256),
        dropout=0.05,
        attention_head_dim=64,
        n_blocks=1,
        num_mid_blocks=2,
        num_heads=4,
        act_fn="snake",
    ):
        """
        This decoder requires an input with the same shape as the target. So, if your text content
        is shorter or longer than the outputs, please re-sample it before feeding it to the decoder.
        """
        super().__init__()
        channels = tuple(channels)
        self.in_channels = in_channels
        self.out_channels = out_channels

        self.time_embeddings = SinusoidalPosEmb(in_channels)
        time_embed_dim = channels[0] * 4
        self.time_mlp = TimestepEmbedding(
            in_channels=in_channels,
            time_embed_dim=time_embed_dim,
            act_fn="silu",
        )
        self.down_blocks = nn.ModuleList([])
        self.mid_blocks = nn.ModuleList([])
        self.up_blocks = nn.ModuleList([])

        output_channel = in_channels
        for i in range(len(channels)):  # pylint: disable=consider-using-enumerate
            input_channel = output_channel
            output_channel = channels[i]
            is_last = i == len(channels) - 1
            resnet = ResnetBlock1D(dim=input_channel, dim_out=output_channel, time_emb_dim=time_embed_dim)
            transformer_blocks = nn.ModuleList(
                [
                    BasicTransformerBlock(
                        dim=output_channel,
                        num_attention_heads=num_heads,
                        attention_head_dim=attention_head_dim,
                        dropout=dropout,
                        activation_fn=act_fn,
                    )
                    for _ in range(n_blocks)
                ]
            )
            downsample = (
                Downsample1D(output_channel) if not is_last else nn.Conv1d(output_channel, output_channel, 3, padding=1)
            )
            self.down_blocks.append(nn.ModuleList([resnet, transformer_blocks, downsample]))

        for _ in range(num_mid_blocks):
            input_channel = channels[-1]
            out_channels = channels[-1]
            resnet = ResnetBlock1D(dim=input_channel, dim_out=output_channel, time_emb_dim=time_embed_dim)

            transformer_blocks = nn.ModuleList(
                [
                    BasicTransformerBlock(
                        dim=output_channel,
                        num_attention_heads=num_heads,
                        attention_head_dim=attention_head_dim,
                        dropout=dropout,
                        activation_fn=act_fn,
                    )
                    for _ in range(n_blocks)
                ]
            )

            self.mid_blocks.append(nn.ModuleList([resnet, transformer_blocks]))

        channels = channels[::-1] + (channels[0],)
        for i in range(len(channels) - 1):
            input_channel = channels[i] * 2
            output_channel = channels[i + 1]
            is_last = i == len(channels) - 2
            resnet = ResnetBlock1D(
                dim=input_channel,
                dim_out=output_channel,
                time_emb_dim=time_embed_dim,
            )
            transformer_blocks = nn.ModuleList(
                [
                    BasicTransformerBlock(
                        dim=output_channel,
                        num_attention_heads=num_heads,
                        attention_head_dim=attention_head_dim,
                        dropout=dropout,
                        activation_fn=act_fn,
                    )
                    for _ in range(n_blocks)
                ]
            )
            upsample = (
                Upsample1D(output_channel, use_conv_transpose=True)
                if not is_last
                else nn.Conv1d(output_channel, output_channel, 3, padding=1)
            )
            self.up_blocks.append(nn.ModuleList([resnet, transformer_blocks, upsample]))
        self.final_block = Block1D(channels[-1], channels[-1])
        self.final_proj = nn.Conv1d(channels[-1], self.out_channels, 1)
        self.initialize_weights()

    def initialize_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv1d):
                nn.init.kaiming_normal_(m.weight, nonlinearity="relu")
                if m.bias is not None:
                    nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.GroupNorm):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                nn.init.kaiming_normal_(m.weight, nonlinearity="relu")
                if m.bias is not None:
                    nn.init.constant_(m.bias, 0)

    def forward(self, x, mask, mu, t, spks=None, cond=None, streaming=False):
        """Forward pass of the UNet1DConditional model.

        Args:
            x (torch.Tensor): shape (batch_size, in_channels, time)
            mask (_type_): shape (batch_size, 1, time)
            t (_type_): shape (batch_size)
            spks (_type_, optional): shape: (batch_size, condition_channels). Defaults to None.
            cond (_type_, optional): placeholder for future use. Defaults to None.

        Raises:
            ValueError: _description_
            ValueError: _description_

        Returns:
            _type_: _description_
        """

        t = self.time_embeddings(t).to(t.dtype)
        t = self.time_mlp(t)

        x = pack([x, mu], "b * t")[0]

        if spks is not None:
            spks = repeat(spks, "b c -> b c t", t=x.shape[-1])
            x = pack([x, spks], "b * t")[0]
        if cond is not None:
            x = pack([x, cond], "b * t")[0]

        hiddens = []
        masks = [mask]
        for resnet, transformer_blocks, downsample in self.down_blocks:
            mask_down = masks[-1]
            x = resnet(x, mask_down, t)
            x = rearrange(x, "b c t -> b t c").contiguous()
            attn_mask = add_optional_chunk_mask(x, mask_down.bool(), False, False, 0, 0, -1).repeat(1, x.size(1), 1)
            attn_mask = mask_to_bias(attn_mask, x.dtype)
            for transformer_block in transformer_blocks:
                x = transformer_block(
                    hidden_states=x,
                    attention_mask=attn_mask,
                    timestep=t,
                )
            x = rearrange(x, "b t c -> b c t").contiguous()
            hiddens.append(x)  # Save hidden states for skip connections
            x = downsample(x * mask_down)
            masks.append(mask_down[:, :, ::2])
        masks = masks[:-1]
        mask_mid = masks[-1]

        for resnet, transformer_blocks in self.mid_blocks:
            x = resnet(x, mask_mid, t)
            x = rearrange(x, "b c t -> b t c").contiguous()
            attn_mask = add_optional_chunk_mask(x, mask_mid.bool(), False, False, 0, 0, -1).repeat(1, x.size(1), 1)
            attn_mask = mask_to_bias(attn_mask, x.dtype)
            for transformer_block in transformer_blocks:
                x = transformer_block(
                    hidden_states=x,
                    attention_mask=attn_mask,
                    timestep=t,
                )
            x = rearrange(x, "b t c -> b c t").contiguous()

        for resnet, transformer_blocks, upsample in self.up_blocks:
            mask_up = masks.pop()
            skip = hiddens.pop()
            x = pack([x[:, :, :skip.shape[-1]], skip], "b * t")[0]
            x = resnet(x, mask_up, t)
            x = rearrange(x, "b c t -> b t c").contiguous()
            attn_mask = add_optional_chunk_mask(x, mask_up.bool(), False, False, 0, 0, -1).repeat(1, x.size(1), 1)
            attn_mask = mask_to_bias(attn_mask, x.dtype)
            for transformer_block in transformer_blocks:
                x = transformer_block(
                    hidden_states=x,
                    attention_mask=attn_mask,
                    timestep=t,
                )
            x = rearrange(x, "b t c -> b c t").contiguous()
            x = upsample(x * mask_up)
        x = self.final_block(x, mask_up)
        output = self.final_proj(x * mask_up)
        return output * mask


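The channel bookkeeping in `ConditionalDecoder.__init__` is easy to lose track of: the down path threads `output_channel` through `channels`, while the up path walks the reversed tuple (with `channels[0]` appended) and doubles each input width because skip connections are channel-concatenated via `pack`. A small pure-Python sketch of just that bookkeeping, mirroring the two loops (the helper name is mine, not from the source):

```python
def unet_channel_plan(in_channels, channels):
    # Mirrors ConditionalDecoder.__init__: the down path consumes `channels`
    # in order; the up path walks channels[::-1] + (channels[0],) and doubles
    # each input width to account for the skip-connection concatenation.
    channels = tuple(channels)
    down = []
    output_channel = in_channels
    for c in channels:
        down.append((output_channel, c))
        output_channel = c
    up_chans = channels[::-1] + (channels[0],)
    up = [(up_chans[i] * 2, up_chans[i + 1]) for i in range(len(up_chans) - 1)]
    return down, up
```

With the default `channels=(256, 256)`, every up-path resnet therefore sees 512 input channels even though the decoder's working width is 256.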
class CausalConditionalDecoder(ConditionalDecoder):
    def __init__(
        self,
        in_channels,
        out_channels,
        channels=(256, 256),
        dropout=0.05,
        attention_head_dim=64,
        n_blocks=1,
        num_mid_blocks=2,
        num_heads=4,
        act_fn="snake",
        static_chunk_size=50,
        num_decoding_left_chunks=2,
    ):
        """
        This decoder requires an input with the same shape as the target. So, if your text content
        is shorter or longer than the outputs, please re-sample it before feeding it to the decoder.
        """
        torch.nn.Module.__init__(self)
        channels = tuple(channels)
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.time_embeddings = SinusoidalPosEmb(in_channels)
        time_embed_dim = channels[0] * 4
        self.time_mlp = TimestepEmbedding(
            in_channels=in_channels,
            time_embed_dim=time_embed_dim,
            act_fn="silu",
        )
        self.static_chunk_size = static_chunk_size
        self.num_decoding_left_chunks = num_decoding_left_chunks
        self.down_blocks = nn.ModuleList([])
        self.mid_blocks = nn.ModuleList([])
        self.up_blocks = nn.ModuleList([])

        output_channel = in_channels
        for i in range(len(channels)):  # pylint: disable=consider-using-enumerate
            input_channel = output_channel
            output_channel = channels[i]
            is_last = i == len(channels) - 1
            resnet = CausalResnetBlock1D(dim=input_channel, dim_out=output_channel, time_emb_dim=time_embed_dim)
            transformer_blocks = nn.ModuleList(
                [
                    BasicTransformerBlock(
                        dim=output_channel,
                        num_attention_heads=num_heads,
                        attention_head_dim=attention_head_dim,
                        dropout=dropout,
                        activation_fn=act_fn,
                    )
                    for _ in range(n_blocks)
                ]
            )
            downsample = (
                Downsample1D(output_channel) if not is_last else CausalConv1d(output_channel, output_channel, 3)
            )
            self.down_blocks.append(nn.ModuleList([resnet, transformer_blocks, downsample]))

        for _ in range(num_mid_blocks):
            input_channel = channels[-1]
            out_channels = channels[-1]
            resnet = CausalResnetBlock1D(dim=input_channel, dim_out=output_channel, time_emb_dim=time_embed_dim)

            transformer_blocks = nn.ModuleList(
                [
                    BasicTransformerBlock(
                        dim=output_channel,
                        num_attention_heads=num_heads,
                        attention_head_dim=attention_head_dim,
                        dropout=dropout,
                        activation_fn=act_fn,
                    )
                    for _ in range(n_blocks)
                ]
            )

            self.mid_blocks.append(nn.ModuleList([resnet, transformer_blocks]))

        channels = channels[::-1] + (channels[0],)
        for i in range(len(channels) - 1):
            input_channel = channels[i] * 2
            output_channel = channels[i + 1]
            is_last = i == len(channels) - 2
            resnet = CausalResnetBlock1D(
                dim=input_channel,
                dim_out=output_channel,
                time_emb_dim=time_embed_dim,
            )
            transformer_blocks = nn.ModuleList(
                [
                    BasicTransformerBlock(
                        dim=output_channel,
                        num_attention_heads=num_heads,
                        attention_head_dim=attention_head_dim,
                        dropout=dropout,
                        activation_fn=act_fn,
                    )
                    for _ in range(n_blocks)
                ]
            )
            upsample = (
                Upsample1D(output_channel, use_conv_transpose=True)
                if not is_last
                else CausalConv1d(output_channel, output_channel, 3)
            )
            self.up_blocks.append(nn.ModuleList([resnet, transformer_blocks, upsample]))
        self.final_block = CausalBlock1D(channels[-1], channels[-1])
        self.final_proj = nn.Conv1d(channels[-1], self.out_channels, 1)
        self.initialize_weights()

    def forward(self, x, mask, mu, t, spks=None, cond=None, streaming=False):
        """Forward pass of the UNet1DConditional model.

        Args:
            x (torch.Tensor): shape (batch_size, in_channels, time)
            mask (_type_): shape (batch_size, 1, time)
            t (_type_): shape (batch_size)
            spks (_type_, optional): shape: (batch_size, condition_channels). Defaults to None.
            cond (_type_, optional): placeholder for future use. Defaults to None.

        Raises:
            ValueError: _description_
            ValueError: _description_

        Returns:
            _type_: _description_
        """
        t = self.time_embeddings(t).to(t.dtype)
        t = self.time_mlp(t)

        x = pack([x, mu], "b * t")[0]

        if spks is not None:
            spks = repeat(spks, "b c -> b c t", t=x.shape[-1])
            x = pack([x, spks], "b * t")[0]
        if cond is not None:
            x = pack([x, cond], "b * t")[0]

        hiddens = []
        masks = [mask]
        for resnet, transformer_blocks, downsample in self.down_blocks:
            mask_down = masks[-1]
            x = resnet(x, mask_down, t)
            x = rearrange(x, "b c t -> b t c").contiguous()
            if streaming is True:
                attn_mask = add_optional_chunk_mask(x, mask_down.bool(), False, False, 0, self.static_chunk_size, -1)
            else:
                attn_mask = add_optional_chunk_mask(x, mask_down.bool(), False, False, 0, 0, -1).repeat(1, x.size(1), 1)
            attn_mask = mask_to_bias(attn_mask, x.dtype)
            for transformer_block in transformer_blocks:
                x = transformer_block(
                    hidden_states=x,
                    attention_mask=attn_mask,
                    timestep=t,
                )
            x = rearrange(x, "b t c -> b c t").contiguous()
            hiddens.append(x)  # Save hidden states for skip connections
            x = downsample(x * mask_down)
            masks.append(mask_down[:, :, ::2])
        masks = masks[:-1]
        mask_mid = masks[-1]

        for resnet, transformer_blocks in self.mid_blocks:
            x = resnet(x, mask_mid, t)
            x = rearrange(x, "b c t -> b t c").contiguous()
            if streaming is True:
                attn_mask = add_optional_chunk_mask(x, mask_mid.bool(), False, False, 0, self.static_chunk_size, -1)
            else:
                attn_mask = add_optional_chunk_mask(x, mask_mid.bool(), False, False, 0, 0, -1).repeat(1, x.size(1), 1)
            attn_mask = mask_to_bias(attn_mask, x.dtype)
            for transformer_block in transformer_blocks:
                x = transformer_block(
                    hidden_states=x,
                    attention_mask=attn_mask,
                    timestep=t,
                )
            x = rearrange(x, "b t c -> b c t").contiguous()

        for resnet, transformer_blocks, upsample in self.up_blocks:
            mask_up = masks.pop()
            skip = hiddens.pop()
            x = pack([x[:, :, :skip.shape[-1]], skip], "b * t")[0]
            x = resnet(x, mask_up, t)
            x = rearrange(x, "b c t -> b t c").contiguous()
            if streaming is True:
                attn_mask = add_optional_chunk_mask(x, mask_up.bool(), False, False, 0, self.static_chunk_size, -1)
            else:
                attn_mask = add_optional_chunk_mask(x, mask_up.bool(), False, False, 0, 0, -1).repeat(1, x.size(1), 1)
            attn_mask = mask_to_bias(attn_mask, x.dtype)
            for transformer_block in transformer_blocks:
                x = transformer_block(
                    hidden_states=x,
                    attention_mask=attn_mask,
                    timestep=t,
                )
            x = rearrange(x, "b t c -> b c t").contiguous()
            x = upsample(x * mask_up)
        x = self.final_block(x, mask_up)
        output = self.final_proj(x * mask_up)
        return output * mask
443  models/CosyVoice/cosyvoice/flow/flow.py  Normal file
@@ -0,0 +1,443 @@
# Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu, Zhihao Du)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import os
import random
from typing import Dict, Optional
import torch
import torch.nn as nn
from torch.nn import functional as F
from omegaconf import DictConfig
from cosyvoice.utils.mask import make_pad_mask
from cosyvoice.utils.onnx import SpeechTokenExtractor, online_feature, onnx_path


class MaskedDiffWithXvec(torch.nn.Module):
    def __init__(self,
                 input_size: int = 512,
                 output_size: int = 80,
                 spk_embed_dim: int = 192,
                 output_type: str = "mel",
                 vocab_size: int = 4096,
                 input_frame_rate: int = 50,
                 only_mask_loss: bool = True,
                 encoder: torch.nn.Module = None,
                 length_regulator: torch.nn.Module = None,
                 decoder: torch.nn.Module = None,
                 decoder_conf: Dict = {'in_channels': 240, 'out_channel': 80, 'spk_emb_dim': 80, 'n_spks': 1,
                                       'cfm_params': DictConfig({'sigma_min': 1e-06, 'solver': 'euler', 't_scheduler': 'cosine',
                                                                 'training_cfg_rate': 0.2, 'inference_cfg_rate': 0.7, 'reg_loss_type': 'l1'}),
                                       'decoder_params': {'channels': [256, 256], 'dropout': 0.0, 'attention_head_dim': 64,
                                                          'n_blocks': 4, 'num_mid_blocks': 12, 'num_heads': 8, 'act_fn': 'gelu'}}):
        super().__init__()
        self.input_size = input_size
        self.output_size = output_size
        self.decoder_conf = decoder_conf
        self.vocab_size = vocab_size
        self.output_type = output_type
        self.input_frame_rate = input_frame_rate
        logging.info(f"input frame rate={self.input_frame_rate}")
        self.input_embedding = nn.Embedding(vocab_size, input_size)
        self.spk_embed_affine_layer = torch.nn.Linear(spk_embed_dim, output_size)
        self.encoder = encoder
        self.encoder_proj = torch.nn.Linear(self.encoder.output_size(), output_size)
        self.decoder = decoder
        self.length_regulator = length_regulator
        self.only_mask_loss = only_mask_loss

    def forward(
            self,
            batch: dict,
            device: torch.device,
    ) -> Dict[str, Optional[torch.Tensor]]:
        token = batch['speech_token'].to(device)
        token_len = batch['speech_token_len'].to(device)
        feat = batch['speech_feat'].to(device)
        feat_len = batch['speech_feat_len'].to(device)
        embedding = batch['embedding'].to(device)

        # xvec projection
        embedding = F.normalize(embedding, dim=1)
        embedding = self.spk_embed_affine_layer(embedding)

        # concat text and prompt_text
        mask = (~make_pad_mask(token_len)).float().unsqueeze(-1).to(device)
        token = self.input_embedding(torch.clamp(token, min=0)) * mask

        # text encode
        h, h_lengths = self.encoder(token, token_len)
        h = self.encoder_proj(h)
        h, h_lengths = self.length_regulator(h, feat_len)

        # get conditions
        conds = torch.zeros(feat.shape, device=token.device)
        for i, j in enumerate(feat_len):
            if random.random() < 0.5:
                continue
            index = random.randint(0, int(0.3 * j))
            conds[i, :index] = feat[i, :index]
        conds = conds.transpose(1, 2)

        mask = (~make_pad_mask(feat_len)).to(h)
        # NOTE this is unnecessary, feat/h already same shape
        loss, _ = self.decoder.compute_loss(
            feat.transpose(1, 2).contiguous(),
            mask.unsqueeze(1),
            h.transpose(1, 2).contiguous(),
            embedding,
            cond=conds
        )
        return {'loss': loss}

    @torch.inference_mode()
    def inference(self,
                  token,
                  token_len,
                  prompt_token,
                  prompt_token_len,
                  prompt_feat,
                  prompt_feat_len,
                  embedding,
                  flow_cache):
        assert token.shape[0] == 1
        # xvec projection
        embedding = F.normalize(embedding, dim=1)
        embedding = self.spk_embed_affine_layer(embedding)

        # concat speech token and prompt speech token
        token_len1, token_len2 = prompt_token.shape[1], token.shape[1]
        token, token_len = torch.concat([prompt_token, token], dim=1), prompt_token_len + token_len
        mask = (~make_pad_mask(token_len)).unsqueeze(-1).to(embedding)
        token = self.input_embedding(torch.clamp(token, min=0)) * mask

        # text encode
        h, h_lengths = self.encoder(token, token_len)
        h = self.encoder_proj(h)
        mel_len1, mel_len2 = prompt_feat.shape[1], int(token_len2 / self.input_frame_rate * 22050 / 256)
        h, h_lengths = self.length_regulator.inference(h[:, :token_len1], h[:, token_len1:], mel_len1, mel_len2, self.input_frame_rate)

        # get conditions
        conds = torch.zeros([1, mel_len1 + mel_len2, self.output_size], device=token.device).to(h.dtype)
        conds[:, :mel_len1] = prompt_feat
        conds = conds.transpose(1, 2)

        mask = (~make_pad_mask(torch.tensor([mel_len1 + mel_len2]))).to(h)
        feat, flow_cache = self.decoder(
            mu=h.transpose(1, 2).contiguous(),
            mask=mask.unsqueeze(1),
            spks=embedding,
            cond=conds,
            n_timesteps=10,
            prompt_len=mel_len1,
            cache=flow_cache
        )
        feat = feat[:, :, mel_len1:]
        assert feat.shape[2] == mel_len2
        return feat.float(), flow_cache


||||
class CausalMaskedDiffWithXvec(torch.nn.Module):
|
||||
def __init__(self,
|
||||
input_size: int = 512,
|
||||
output_size: int = 80,
|
||||
spk_embed_dim: int = 192,
|
||||
output_type: str = "mel",
|
||||
vocab_size: int = 4096,
|
||||
input_frame_rate: int = 50,
|
||||
only_mask_loss: bool = True,
|
||||
token_mel_ratio: int = 2,
|
||||
pre_lookahead_len: int = 3,
|
||||
encoder: torch.nn.Module = None,
|
||||
decoder: torch.nn.Module = None,
|
||||
decoder_conf: Dict = {'in_channels': 240, 'out_channel': 80, 'spk_emb_dim': 80, 'n_spks': 1,
|
||||
'cfm_params': DictConfig({'sigma_min': 1e-06, 'solver': 'euler', 't_scheduler': 'cosine',
|
||||
'training_cfg_rate': 0.2, 'inference_cfg_rate': 0.7, 'reg_loss_type': 'l1'}),
|
||||
'decoder_params': {'channels': [256, 256], 'dropout': 0.0, 'attention_head_dim': 64,
|
||||
'n_blocks': 4, 'num_mid_blocks': 12, 'num_heads': 8, 'act_fn': 'gelu'}}):
|
||||
super().__init__()
|
||||
self.input_size = input_size
|
||||
self.output_size = output_size
|
||||
self.decoder_conf = decoder_conf
|
||||
self.vocab_size = vocab_size
|
||||
self.output_type = output_type
|
||||
self.input_frame_rate = input_frame_rate
|
||||
logging.info(f"input frame rate={self.input_frame_rate}")
|
||||
self.input_embedding = nn.Embedding(vocab_size, input_size)
|
||||
self.spk_embed_affine_layer = torch.nn.Linear(spk_embed_dim, output_size)
|
||||
self.encoder = encoder
|
||||
self.encoder_proj = torch.nn.Linear(self.encoder.output_size(), output_size)
|
||||
self.decoder = decoder
|
||||
self.only_mask_loss = only_mask_loss
|
||||
self.token_mel_ratio = token_mel_ratio
|
||||
self.pre_lookahead_len = pre_lookahead_len
|
||||
if online_feature is True:
|
||||
self.speech_token_extractor = SpeechTokenExtractor(model_path=os.path.join(onnx_path, 'speech_tokenizer_v2.batch.onnx'))
|
||||
|
||||
def forward(
|
||||
self,
|
||||
batch: dict,
|
||||
device: torch.device,
|
||||
) -> Dict[str, Optional[torch.Tensor]]:
|
||||
if 'speech_token' not in batch:
|
||||
token, token_len = self.speech_token_extractor.inference(batch['whisper_feat'], batch['whisper_feat_len'], device)
|
||||
else:
|
||||
token = batch['speech_token'].to(device)
|
||||
token_len = batch['speech_token_len'].to(device)
|
||||
feat = batch['speech_feat'].to(device)
|
||||
feat_len = batch['speech_feat_len'].to(device)
|
||||
embedding = batch['embedding'].to(device)
|
||||
|
||||
# NOTE unified training, static_chunk_size > 0 or = 0
|
||||
streaming = True if random.random() < 0.5 else False
|
||||
|
||||
# xvec projection
|
||||
embedding = F.normalize(embedding, dim=1)
|
||||
embedding = self.spk_embed_affine_layer(embedding)
|
||||
|
||||
# concat text and prompt_text
|
||||
mask = (~make_pad_mask(token_len)).float().unsqueeze(-1).to(device)
|
||||
token = self.input_embedding(torch.clamp(token, min=0)) * mask
|
||||
|
||||
# text encode
|
||||
h, h_lengths = self.encoder(token, token_len, streaming=streaming)
|
||||
h = self.encoder_proj(h)
|
||||
|
||||
# get conditions
|
||||
conds = torch.zeros(feat.shape, device=token.device)
|
||||
for i, j in enumerate(feat_len):
|
||||
if random.random() < 0.5:
|
||||
continue
|
||||
index = random.randint(0, int(0.3 * j))
|
||||
conds[i, :index] = feat[i, :index]
|
||||
conds = conds.transpose(1, 2)
|
||||
|
||||
mask = (~make_pad_mask(h_lengths.sum(dim=-1).squeeze(dim=1))).to(h)
|
||||
loss, _ = self.decoder.compute_loss(
|
||||
feat.transpose(1, 2).contiguous(),
|
||||
mask.unsqueeze(1),
|
||||
h.transpose(1, 2).contiguous(),
|
||||
embedding,
|
||||
cond=conds,
|
||||
streaming=streaming,
|
||||
)
|
||||
return {'loss': loss}
|
||||
|
||||
@torch.inference_mode()
|
||||
def inference(self,
|
||||
token,
|
||||
token_len,
|
||||
prompt_token,
|
||||
prompt_token_len,
|
||||
prompt_feat,
|
||||
prompt_feat_len,
|
||||
embedding,
|
||||
streaming,
|
||||
finalize):
|
||||
assert token.shape[0] == 1
|
||||
# xvec projection
|
||||
embedding = F.normalize(embedding, dim=1)
|
||||
embedding = self.spk_embed_affine_layer(embedding)
|
||||
|
||||
# concat text and prompt_text
|
||||
token, token_len = torch.concat([prompt_token, token], dim=1), prompt_token_len + token_len
|
||||
mask = (~make_pad_mask(token_len)).unsqueeze(-1).to(embedding)
|
||||
token = self.input_embedding(torch.clamp(token, min=0)) * mask
|
||||
|
||||
# text encode
|
||||
if finalize is True:
|
||||
h, h_lengths = self.encoder(token, token_len, streaming=streaming)
|
||||
else:
|
||||
token, context = token[:, :-self.pre_lookahead_len], token[:, -self.pre_lookahead_len:]
|
||||
h, h_lengths = self.encoder(token, token_len, context=context, streaming=streaming)
|
||||
mel_len1, mel_len2 = prompt_feat.shape[1], h.shape[1] - prompt_feat.shape[1]
|
||||
h = self.encoder_proj(h)
|
||||
|
||||
# get conditions
|
||||
conds = torch.zeros([1, mel_len1 + mel_len2, self.output_size], device=token.device).to(h.dtype)
|
||||
conds[:, :mel_len1] = prompt_feat
|
||||
conds = conds.transpose(1, 2)
|
||||
|
||||
mask = (~make_pad_mask(torch.tensor([mel_len1 + mel_len2]))).to(h)
|
||||
feat, _ = self.decoder(
|
||||
mu=h.transpose(1, 2).contiguous(),
|
||||
mask=mask.unsqueeze(1),
|
||||
spks=embedding,
|
||||
cond=conds,
|
||||
n_timesteps=10,
|
||||
streaming=streaming
|
||||
)
|
||||
feat = feat[:, :, mel_len1:]
|
||||
assert feat.shape[2] == mel_len2
|
||||
return feat.float(), None
|
||||
|
||||
|
||||
class CausalMaskedDiffWithDiT(torch.nn.Module):
|
||||
def __init__(self,
|
||||
input_size: int = 512,
|
||||
output_size: int = 80,
|
||||
spk_embed_dim: int = 192,
|
||||
output_type: str = "mel",
|
||||
vocab_size: int = 4096,
|
||||
input_frame_rate: int = 50,
|
||||
only_mask_loss: bool = True,
|
||||
token_mel_ratio: int = 2,
|
||||
pre_lookahead_len: int = 3,
|
||||
pre_lookahead_layer: torch.nn.Module = None,
|
||||
decoder: torch.nn.Module = None,
|
||||
decoder_conf: Dict = {'in_channels': 240, 'out_channel': 80, 'spk_emb_dim': 80, 'n_spks': 1,
|
||||
'cfm_params': DictConfig({'sigma_min': 1e-06, 'solver': 'euler', 't_scheduler': 'cosine',
|
||||
'training_cfg_rate': 0.2, 'inference_cfg_rate': 0.7, 'reg_loss_type': 'l1'}),
|
||||
'decoder_params': {'channels': [256, 256], 'dropout': 0.0, 'attention_head_dim': 64,
|
||||
'n_blocks': 4, 'num_mid_blocks': 12, 'num_heads': 8, 'act_fn': 'gelu'}}):
|
||||
super().__init__()
|
||||
self.input_size = input_size
|
||||
self.output_size = output_size
|
||||
self.decoder_conf = decoder_conf
|
||||
self.vocab_size = vocab_size
|
||||
self.output_type = output_type
|
||||
self.input_frame_rate = input_frame_rate
|
||||
logging.info(f"input frame rate={self.input_frame_rate}")
|
||||
self.input_embedding = nn.Embedding(vocab_size, input_size)
|
||||
self.spk_embed_affine_layer = torch.nn.Linear(spk_embed_dim, output_size)
|
||||
self.pre_lookahead_len = pre_lookahead_len
|
||||
self.pre_lookahead_layer = pre_lookahead_layer
|
||||
self.decoder = decoder
|
||||
self.only_mask_loss = only_mask_loss
|
||||
self.token_mel_ratio = token_mel_ratio
|
||||
if online_feature is True:
|
||||
self.speech_token_extractor = SpeechTokenExtractor(model_path=os.path.join(onnx_path, 'speech_tokenizer_v3.batch.onnx'))
|
||||
|
||||
def forward(
|
||||
self,
|
||||
batch: dict,
|
||||
device: torch.device,
|
||||
) -> Dict[str, Optional[torch.Tensor]]:
|
||||
if 'speech_token' not in batch:
|
||||
token, token_len = self.speech_token_extractor.inference(batch['whisper_feat'], batch['whisper_feat_len'], device)
|
||||
else:
|
||||
token = batch['speech_token'].to(device)
|
||||
token_len = batch['speech_token_len'].to(device)
|
||||
feat = batch['speech_feat'].to(device)
|
||||
feat_len = batch['speech_feat_len'].to(device)
|
||||
embedding = batch['embedding'].to(device)
|
||||
|
||||
# NOTE unified training, static_chunk_size > 0 or = 0
|
||||
streaming = True if random.random() < 0.5 else False
|
||||
|
||||
# xvec projection
|
||||
embedding = F.normalize(embedding, dim=1)
|
||||
embedding = self.spk_embed_affine_layer(embedding)
|
||||
|
||||
# concat text and prompt_text
|
||||
mask = (~make_pad_mask(token_len)).float().unsqueeze(-1).to(device)
|
||||
token = self.input_embedding(torch.clamp(token, min=0)) * mask
|
||||
|
||||
# text encode
|
||||
h = self.pre_lookahead_layer(token)
|
||||
h = h.repeat_interleave(self.token_mel_ratio, dim=1)
|
||||
mask = mask.repeat_interleave(self.token_mel_ratio, dim=1).squeeze(dim=-1)
|
||||
|
||||
# get conditions
|
||||
conds = torch.zeros(feat.shape, device=token.device)
|
||||
for i, j in enumerate(feat_len):
|
||||
if random.random() < 0.5:
|
||||
continue
|
||||
index = random.randint(0, int(0.3 * j))
|
||||
conds[i, :index] = feat[i, :index]
|
||||
conds = conds.transpose(1, 2)
|
||||
|
||||
loss, _ = self.decoder.compute_loss(
|
||||
feat.transpose(1, 2).contiguous(),
|
||||
mask.unsqueeze(1),
|
||||
h.transpose(1, 2).contiguous(),
|
||||
embedding,
|
||||
cond=conds,
|
||||
streaming=streaming,
|
||||
)
|
||||
return {'loss': loss}
|
||||
|
||||
@torch.inference_mode()
|
||||
def inference(self,
|
||||
token,
|
||||
token_len,
|
||||
prompt_token,
|
||||
prompt_token_len,
|
||||
prompt_feat,
|
||||
prompt_feat_len,
|
||||
embedding,
|
||||
streaming,
|
||||
finalize):
|
||||
assert token.shape[0] == 1
|
||||
# xvec projection
|
||||
embedding = F.normalize(embedding, dim=1)
|
||||
embedding = self.spk_embed_affine_layer(embedding)
|
||||
|
||||
# concat text and prompt_text
|
||||
token, token_len = torch.concat([prompt_token, token], dim=1), prompt_token_len + token_len
|
||||
mask = (~make_pad_mask(token_len)).unsqueeze(-1).to(embedding)
|
||||
token = self.input_embedding(torch.clamp(token, min=0)) * mask
|
||||
|
||||
# text encode
|
||||
if finalize is True:
|
||||
h = self.pre_lookahead_layer(token)
|
||||
else:
|
||||
h = self.pre_lookahead_layer(token[:, :-self.pre_lookahead_len], context=token[:, -self.pre_lookahead_len:])
|
||||
h = h.repeat_interleave(self.token_mel_ratio, dim=1)
|
||||
mel_len1, mel_len2 = prompt_feat.shape[1], h.shape[1] - prompt_feat.shape[1]
|
||||
|
||||
# get conditions
|
||||
conds = torch.zeros([1, mel_len1 + mel_len2, self.output_size], device=token.device).to(h.dtype)
|
||||
conds[:, :mel_len1] = prompt_feat
|
||||
conds = conds.transpose(1, 2)
|
||||
|
||||
mask = (~make_pad_mask(torch.tensor([mel_len1 + mel_len2]))).to(h)
|
||||
feat, _ = self.decoder(
|
||||
mu=h.transpose(1, 2).contiguous(),
|
||||
mask=mask.unsqueeze(1),
|
||||
spks=embedding,
|
||||
cond=conds,
|
||||
n_timesteps=10,
|
||||
streaming=streaming
|
||||
)
|
||||
feat = feat[:, :, mel_len1:]
|
||||
assert feat.shape[2] == mel_len2
|
||||
return feat.float(), None
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
torch.backends.cudnn.deterministic = True
|
||||
torch.backends.cudnn.benchmark = False
|
||||
from hyperpyyaml import load_hyperpyyaml
|
||||
with open('./pretrained_models/Fun-CosyVoice3-0.5B/cosyvoice3.yaml', 'r') as f:
|
||||
configs = load_hyperpyyaml(f, overrides={'llm': None, 'hift': None})
|
||||
model = configs['flow']
|
||||
device = 'cuda' if torch.cuda.is_available() else 'cpu'
|
||||
model.to(device)
|
||||
model.eval()
|
||||
max_len = 10 * model.decoder.estimator.static_chunk_size
|
||||
chunk_size = model.decoder.estimator.static_chunk_size
|
||||
context_size = model.pre_lookahead_layer.pre_lookahead_len
|
||||
token = torch.randint(0, 6561, size=(1, max_len)).to(device)
|
||||
token_len = torch.tensor([max_len]).to(device)
|
||||
prompt_token = torch.randint(0, 6561, size=(1, chunk_size)).to(device)
|
||||
prompt_token_len = torch.tensor([chunk_size]).to(device)
|
||||
prompt_feat = torch.rand(1, chunk_size * 2, 80).to(device)
|
||||
prompt_feat_len = torch.tensor([chunk_size * 2]).to(device)
|
||||
prompt_embedding = torch.rand(1, 192).to(device)
|
||||
pred_gt, _ = model.inference(token, token_len, prompt_token, prompt_token_len, prompt_feat, prompt_feat_len, prompt_embedding, streaming=True, finalize=True)
|
||||
for i in range(0, max_len, chunk_size):
|
||||
finalize = True if i + chunk_size + context_size >= max_len else False
|
||||
pred_chunk, _ = model.inference(token[:, :i + chunk_size + context_size], torch.tensor([token[:, :i + chunk_size + context_size].shape[1]]).to(device),
|
||||
prompt_token, prompt_token_len, prompt_feat, prompt_feat_len, prompt_embedding, streaming=True, finalize=finalize)
|
||||
pred_chunk = pred_chunk[:, :, i * model.token_mel_ratio:]
|
||||
print((pred_gt[:, :, i * model.token_mel_ratio: i * model.token_mel_ratio + pred_chunk.shape[2]] - pred_chunk).abs().max().item())
|
||||
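A plain-Python sketch of the token-to-mel length expansion used in both `forward` and `inference` above: each speech token conditions `token_mel_ratio` mel frames, which `repeat_interleave` achieves by repeating every element along the time axis. The helper name and toy values below are hypothetical, for illustration only.

```python
def repeat_interleave(seq, ratio):
    """Mimic torch.repeat_interleave along the time axis for a 1-D token list."""
    return [t for t in seq for _ in range(ratio)]

tokens = [101, 102, 103]        # 3 speech tokens
print(repeat_interleave(tokens, 2))  # [101, 101, 102, 102, 103, 103]
```

With `token_mel_ratio = 2`, a token sequence of length T yields 2T conditioning frames, matching `mel_len2` in the inference path.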
227  models/CosyVoice/cosyvoice/flow/flow_matching.py  Normal file
@@ -0,0 +1,227 @@
# Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu, Zhihao Du)
#               2025 Alibaba Inc (authors: Xiang Lyu, Bofan Zhou)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
import torch.nn.functional as F
from matcha.models.components.flow_matching import BASECFM
from cosyvoice.utils.common import set_all_random_seed


class ConditionalCFM(BASECFM):
    def __init__(self, in_channels, cfm_params, n_spks=1, spk_emb_dim=64, estimator: torch.nn.Module = None):
        super().__init__(
            n_feats=in_channels,
            cfm_params=cfm_params,
            n_spks=n_spks,
            spk_emb_dim=spk_emb_dim,
        )
        self.t_scheduler = cfm_params.t_scheduler
        self.training_cfg_rate = cfm_params.training_cfg_rate
        self.inference_cfg_rate = cfm_params.inference_cfg_rate
        in_channels = in_channels + (spk_emb_dim if n_spks > 0 else 0)
        # Just change the architecture of the estimator here
        self.estimator = estimator

    @torch.inference_mode()
    def forward(self, mu, mask, n_timesteps, temperature=1.0, spks=None, cond=None, prompt_len=0, cache=torch.zeros(1, 80, 0, 2)):
        """Forward diffusion

        Args:
            mu (torch.Tensor): output of encoder
                shape: (batch_size, n_feats, mel_timesteps)
            mask (torch.Tensor): output_mask
                shape: (batch_size, 1, mel_timesteps)
            n_timesteps (int): number of diffusion steps
            temperature (float, optional): temperature for scaling noise. Defaults to 1.0.
            spks (torch.Tensor, optional): speaker ids. Defaults to None.
                shape: (batch_size, spk_emb_dim)
            cond: Not used but kept for future purposes

        Returns:
            sample: generated mel-spectrogram
                shape: (batch_size, n_feats, mel_timesteps)
        """

        z = torch.randn_like(mu).to(mu.device).to(mu.dtype) * temperature
        cache_size = cache.shape[2]
        # fix prompt and overlap part mu and z
        if cache_size != 0:
            z[:, :, :cache_size] = cache[:, :, :, 0]
            mu[:, :, :cache_size] = cache[:, :, :, 1]
        z_cache = torch.concat([z[:, :, :prompt_len], z[:, :, -34:]], dim=2)
        mu_cache = torch.concat([mu[:, :, :prompt_len], mu[:, :, -34:]], dim=2)
        cache = torch.stack([z_cache, mu_cache], dim=-1)

        t_span = torch.linspace(0, 1, n_timesteps + 1, device=mu.device, dtype=mu.dtype)
        if self.t_scheduler == 'cosine':
            t_span = 1 - torch.cos(t_span * 0.5 * torch.pi)
        return self.solve_euler(z, t_span=t_span, mu=mu, mask=mask, spks=spks, cond=cond), cache

    def solve_euler(self, x, t_span, mu, mask, spks, cond, streaming=False):
        """
        Fixed Euler solver for ODEs.
        Args:
            x (torch.Tensor): random noise
            t_span (torch.Tensor): n_timesteps interpolated
                shape: (n_timesteps + 1,)
            mu (torch.Tensor): output of encoder
                shape: (batch_size, n_feats, mel_timesteps)
            mask (torch.Tensor): output_mask
                shape: (batch_size, 1, mel_timesteps)
            spks (torch.Tensor, optional): speaker ids. Defaults to None.
                shape: (batch_size, spk_emb_dim)
            cond: Not used but kept for future purposes
        """
        t, _, dt = t_span[0], t_span[-1], t_span[1] - t_span[0]
        t = t.unsqueeze(dim=0)

        # I am storing this because I can later plot it by putting a debugger here and saving it to a file
        # Or in future might add like a return_all_steps flag
        sol = []

        # Do not use concat: it may change the memory format and make TRT infer wrong results!
        # NOTE when flow runs in amp mode, x.dtype is float32, which causes NaN in TRT fp16 inference, so set dtype=spks.dtype
        x_in = torch.zeros([2, 80, x.size(2)], device=x.device, dtype=spks.dtype)
        mask_in = torch.zeros([2, 1, x.size(2)], device=x.device, dtype=spks.dtype)
        mu_in = torch.zeros([2, 80, x.size(2)], device=x.device, dtype=spks.dtype)
        t_in = torch.zeros([2], device=x.device, dtype=spks.dtype)
        spks_in = torch.zeros([2, 80], device=x.device, dtype=spks.dtype)
        cond_in = torch.zeros([2, 80, x.size(2)], device=x.device, dtype=spks.dtype)
        for step in range(1, len(t_span)):
            # Classifier-Free Guidance inference introduced in VoiceBox
            x_in[:] = x
            mask_in[:] = mask
            mu_in[0] = mu
            t_in[:] = t.unsqueeze(0)
            spks_in[0] = spks
            cond_in[0] = cond
            dphi_dt = self.forward_estimator(
                x_in, mask_in,
                mu_in, t_in,
                spks_in,
                cond_in,
                streaming
            )
            dphi_dt, cfg_dphi_dt = torch.split(dphi_dt, [x.size(0), x.size(0)], dim=0)
            dphi_dt = ((1.0 + self.inference_cfg_rate) * dphi_dt - self.inference_cfg_rate * cfg_dphi_dt)
            x = x + dt * dphi_dt
            t = t + dt
            sol.append(x)
            if step < len(t_span) - 1:
                dt = t_span[step + 1] - t

        return sol[-1].float()

    def forward_estimator(self, x, mask, mu, t, spks, cond, streaming=False):
        if isinstance(self.estimator, torch.nn.Module):
            return self.estimator(x, mask, mu, t, spks, cond, streaming=streaming)
        else:
            [estimator, stream], trt_engine = self.estimator.acquire_estimator()
            # NOTE need to synchronize when switching stream
            torch.cuda.current_stream().synchronize()
            with stream:
                estimator.set_input_shape('x', (2, 80, x.size(2)))
                estimator.set_input_shape('mask', (2, 1, x.size(2)))
                estimator.set_input_shape('mu', (2, 80, x.size(2)))
                estimator.set_input_shape('t', (2,))
                estimator.set_input_shape('spks', (2, 80))
                estimator.set_input_shape('cond', (2, 80, x.size(2)))
                data_ptrs = [x.contiguous().data_ptr(),
                             mask.contiguous().data_ptr(),
                             mu.contiguous().data_ptr(),
                             t.contiguous().data_ptr(),
                             spks.contiguous().data_ptr(),
                             cond.contiguous().data_ptr(),
                             x.data_ptr()]
                for i, j in enumerate(data_ptrs):
                    estimator.set_tensor_address(trt_engine.get_tensor_name(i), j)
                # run trt engine
                assert estimator.execute_async_v3(torch.cuda.current_stream().cuda_stream) is True
                torch.cuda.current_stream().synchronize()
            self.estimator.release_estimator(estimator, stream)
            return x

    def compute_loss(self, x1, mask, mu, spks=None, cond=None, streaming=False):
        """Computes diffusion loss

        Args:
            x1 (torch.Tensor): Target
                shape: (batch_size, n_feats, mel_timesteps)
            mask (torch.Tensor): target mask
                shape: (batch_size, 1, mel_timesteps)
            mu (torch.Tensor): output of encoder
                shape: (batch_size, n_feats, mel_timesteps)
            spks (torch.Tensor, optional): speaker embedding. Defaults to None.
                shape: (batch_size, spk_emb_dim)

        Returns:
            loss: conditional flow matching loss
            y: conditional flow
                shape: (batch_size, n_feats, mel_timesteps)
        """
        b, _, t = mu.shape

        # random timestep
        t = torch.rand([b, 1, 1], device=mu.device, dtype=mu.dtype)

        # sample noise p(x_0)
        z = torch.randn_like(x1)

        y = (1 - (1 - self.sigma_min) * t) * z + t * x1
        u = x1 - (1 - self.sigma_min) * z

        # during training, we randomly drop condition to trade off mode coverage and sample fidelity
        if self.training_cfg_rate > 0:
            cfg_mask = torch.rand(b, device=x1.device) > self.training_cfg_rate
            mu = mu * cfg_mask.view(-1, 1, 1)
            spks = spks * cfg_mask.view(-1, 1)
            cond = cond * cfg_mask.view(-1, 1, 1)

        pred = self.estimator(y, mask, mu, t.squeeze(), spks, cond, streaming=streaming)
        loss = F.mse_loss(pred * mask, u * mask, reduction="sum") / (torch.sum(mask) * u.shape[1])
        return loss, y

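A hedged scalar sketch of the conditional flow-matching target computed in `compute_loss` above: the interpolant `y` and velocity target `u` follow the same two formulas, here with toy numbers (`sigma_min` value assumed for illustration).

```python
# y = (1 - (1 - sigma_min) * t) * z + t * x1   (noisy interpolant at time t)
# u = x1 - (1 - sigma_min) * z                 (its time derivative dy/dt)
sigma_min = 1e-4   # assumed value, not from the config
z, x1, t = 0.5, 2.0, 0.3
y = (1 - (1 - sigma_min) * t) * z + t * x1
u = x1 - (1 - sigma_min) * z
# u is independent of t: the target velocity is constant along each path,
# which is what makes the straight-line flow-matching objective simple.
print(y, u)
```

The estimator is trained to predict `u` given `y`, `t`, and the conditioning, so the MSE above regresses exactly this quantity under the mask.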

class CausalConditionalCFM(ConditionalCFM):
    def __init__(self, in_channels, cfm_params, n_spks=1, spk_emb_dim=64, estimator: torch.nn.Module = None):
        super().__init__(in_channels, cfm_params, n_spks, spk_emb_dim, estimator)
        set_all_random_seed(0)
        self.rand_noise = torch.randn([1, 80, 50 * 300])

    @torch.inference_mode()
    def forward(self, mu, mask, n_timesteps, temperature=1.0, spks=None, cond=None, streaming=False):
        """Forward diffusion

        Args:
            mu (torch.Tensor): output of encoder
                shape: (batch_size, n_feats, mel_timesteps)
            mask (torch.Tensor): output_mask
                shape: (batch_size, 1, mel_timesteps)
            n_timesteps (int): number of diffusion steps
            temperature (float, optional): temperature for scaling noise. Defaults to 1.0.
            spks (torch.Tensor, optional): speaker ids. Defaults to None.
                shape: (batch_size, spk_emb_dim)
            cond: Not used but kept for future purposes

        Returns:
            sample: generated mel-spectrogram
                shape: (batch_size, n_feats, mel_timesteps)
        """

        z = self.rand_noise[:, :, :mu.size(2)].to(mu.device).to(mu.dtype) * temperature
        # fix prompt and overlap part mu and z
        t_span = torch.linspace(0, 1, n_timesteps + 1, device=mu.device, dtype=mu.dtype)
        if self.t_scheduler == 'cosine':
            t_span = 1 - torch.cos(t_span * 0.5 * torch.pi)
        return self.solve_euler(z, t_span=t_span, mu=mu, mask=mask, spks=spks, cond=cond, streaming=streaming), None
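A minimal 1-D sketch (function names hypothetical) of the classifier-free-guidance Euler update in `solve_euler` above: two estimator outputs, conditional and unconditional, are combined as `(1 + w) * cond - w * uncond` and integrated with fixed Euler steps over t in [0, 1].

```python
def euler_cfg(x0, f_cond, f_uncond, w, n_steps):
    """Fixed-step Euler integration with classifier-free guidance weight w."""
    x, t, dt = x0, 0.0, 1.0 / n_steps
    for _ in range(n_steps):
        dphi_dt = (1 + w) * f_cond(x, t) - w * f_uncond(x, t)
        x += dt * dphi_dt
        t += dt
    return x

# When both branches agree (constant velocity 1), guidance is a no-op and the
# integral of dx/dt = 1 over [0, 1] recovers x0 + 1.
out = euler_cfg(0.0, lambda x, t: 1.0, lambda x, t: 1.0, w=0.7, n_steps=10)
print(out)
```

In the real solver the two branches come from one batched estimator call (batch dimension 2), which is why `x_in`, `mu_in`, etc. are pre-allocated with a leading size of 2.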
70  models/CosyVoice/cosyvoice/flow/length_regulator.py  Normal file
@@ -0,0 +1,70 @@
# Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu, Zhihao Du)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Tuple
import torch.nn as nn
import torch
from torch.nn import functional as F
from cosyvoice.utils.mask import make_pad_mask


class InterpolateRegulator(nn.Module):
    def __init__(
            self,
            channels: int,
            sampling_ratios: Tuple,
            out_channels: int = None,
            groups: int = 1,
    ):
        super().__init__()
        self.sampling_ratios = sampling_ratios
        out_channels = out_channels or channels
        model = nn.ModuleList([])
        if len(sampling_ratios) > 0:
            for _ in sampling_ratios:
                module = nn.Conv1d(channels, channels, 3, 1, 1)
                norm = nn.GroupNorm(groups, channels)
                act = nn.Mish()
                model.extend([module, norm, act])
        model.append(
            nn.Conv1d(channels, out_channels, 1, 1)
        )
        self.model = nn.Sequential(*model)

    def forward(self, x, ylens=None):
        # x in (B, T, D)
        mask = (~make_pad_mask(ylens)).to(x).unsqueeze(-1)
        x = F.interpolate(x.transpose(1, 2).contiguous(), size=ylens.max(), mode='linear')
        out = self.model(x).transpose(1, 2).contiguous()
        olens = ylens
        return out * mask, olens

    def inference(self, x1, x2, mel_len1, mel_len2, input_frame_rate=50):
        # in inference mode, interpolate the prompt token and the token (head/mid/tail) separately, so we can get a clean separation point in the mel
        # NOTE 20 corresponds to token_overlap_len in cosyvoice/cli/model.py
        # x in (B, T, D)
        if x2.shape[1] > 40:
            x2_head = F.interpolate(x2[:, :20].transpose(1, 2).contiguous(), size=int(20 / input_frame_rate * 22050 / 256), mode='linear')
            x2_mid = F.interpolate(x2[:, 20:-20].transpose(1, 2).contiguous(), size=mel_len2 - int(20 / input_frame_rate * 22050 / 256) * 2,
                                   mode='linear')
            x2_tail = F.interpolate(x2[:, -20:].transpose(1, 2).contiguous(), size=int(20 / input_frame_rate * 22050 / 256), mode='linear')
            x2 = torch.concat([x2_head, x2_mid, x2_tail], dim=2)
        else:
            x2 = F.interpolate(x2.transpose(1, 2).contiguous(), size=mel_len2, mode='linear')
        if x1.shape[1] != 0:
            x1 = F.interpolate(x1.transpose(1, 2).contiguous(), size=mel_len1, mode='linear')
            x = torch.concat([x1, x2], dim=2)
        else:
            x = x2
        out = self.model(x).transpose(1, 2).contiguous()
        return out, mel_len1 + mel_len2
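A hedged pure-Python sketch of what the linear interpolation in `InterpolateRegulator` does: resample a feature sequence from one length to another. This toy version aligns the two endpoints (align_corners-style); `F.interpolate(..., mode='linear')` defaults to `align_corners=False` and samples slightly differently, so this is an illustration of the idea, not a drop-in equivalent.

```python
def linear_resize(seq, out_len):
    """Linearly resample a 1-D sequence to out_len points, endpoints aligned."""
    if out_len == 1:
        return [seq[0]]
    n = len(seq)
    out = []
    for i in range(out_len):
        pos = i * (n - 1) / (out_len - 1)   # fractional source index
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        out.append(seq[lo] * (1 - frac) + seq[hi] * frac)
    return out

print(linear_resize([0.0, 1.0, 2.0], 5))  # [0.0, 0.5, 1.0, 1.5, 2.0]
```

The head/mid/tail split in `inference` applies this resampling to each segment separately so the prompt/generated boundary lands on an exact mel frame.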
230  models/CosyVoice/cosyvoice/hifigan/discriminator.py  Normal file
@@ -0,0 +1,230 @@
import torch
import torch.nn as nn
import torch.nn.functional as F
try:
    from torch.nn.utils.parametrizations import weight_norm, spectral_norm
except ImportError:
    from torch.nn.utils import weight_norm, spectral_norm
from typing import List, Optional, Tuple
from einops import rearrange
from torchaudio.transforms import Spectrogram

LRELU_SLOPE = 0.1


class MultipleDiscriminator(nn.Module):
    def __init__(
        self, mpd: nn.Module, mrd: nn.Module
    ):
        super().__init__()
        self.mpd = mpd
        self.mrd = mrd

    def forward(self, y: torch.Tensor, y_hat: torch.Tensor):
        y_d_rs, y_d_gs, fmap_rs, fmap_gs = [], [], [], []
        this_y_d_rs, this_y_d_gs, this_fmap_rs, this_fmap_gs = self.mpd(y.unsqueeze(dim=1), y_hat.unsqueeze(dim=1))
        y_d_rs += this_y_d_rs
        y_d_gs += this_y_d_gs
        fmap_rs += this_fmap_rs
        fmap_gs += this_fmap_gs
        this_y_d_rs, this_y_d_gs, this_fmap_rs, this_fmap_gs = self.mrd(y, y_hat)
        y_d_rs += this_y_d_rs
        y_d_gs += this_y_d_gs
        fmap_rs += this_fmap_rs
        fmap_gs += this_fmap_gs
        return y_d_rs, y_d_gs, fmap_rs, fmap_gs


class MultiResolutionDiscriminator(nn.Module):
    def __init__(
        self,
        fft_sizes: Tuple[int, ...] = (2048, 1024, 512),
        num_embeddings: Optional[int] = None,
    ):
        """
        Multi-Resolution Discriminator module adapted from https://github.com/descriptinc/descript-audio-codec.
        Additionally, it allows incorporating conditional information with a learned embeddings table.

        Args:
            fft_sizes (tuple[int]): Tuple of window lengths for FFT. Defaults to (2048, 1024, 512).
            num_embeddings (int, optional): Number of embeddings. None means non-conditional discriminator.
                Defaults to None.
        """

        super().__init__()
        self.discriminators = nn.ModuleList(
            [DiscriminatorR(window_length=w, num_embeddings=num_embeddings) for w in fft_sizes]
        )

    def forward(
        self, y: torch.Tensor, y_hat: torch.Tensor, bandwidth_id: torch.Tensor = None
    ) -> Tuple[List[torch.Tensor], List[torch.Tensor], List[List[torch.Tensor]], List[List[torch.Tensor]]]:
        y_d_rs = []
        y_d_gs = []
        fmap_rs = []
        fmap_gs = []

        for d in self.discriminators:
            y_d_r, fmap_r = d(x=y, cond_embedding_id=bandwidth_id)
            y_d_g, fmap_g = d(x=y_hat, cond_embedding_id=bandwidth_id)
            y_d_rs.append(y_d_r)
            fmap_rs.append(fmap_r)
            y_d_gs.append(y_d_g)
            fmap_gs.append(fmap_g)

        return y_d_rs, y_d_gs, fmap_rs, fmap_gs


class DiscriminatorR(nn.Module):
    def __init__(
        self,
        window_length: int,
        num_embeddings: Optional[int] = None,
        channels: int = 32,
        hop_factor: float = 0.25,
        bands: Tuple[Tuple[float, float], ...] = ((0.0, 0.1), (0.1, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)),
    ):
        super().__init__()
        self.window_length = window_length
        self.hop_factor = hop_factor
        self.spec_fn = Spectrogram(
            n_fft=window_length, hop_length=int(window_length * hop_factor), win_length=window_length, power=None
        )
        n_fft = window_length // 2 + 1
        bands = [(int(b[0] * n_fft), int(b[1] * n_fft)) for b in bands]
        self.bands = bands
        convs = lambda: nn.ModuleList(
            [
                weight_norm(nn.Conv2d(2, channels, (3, 9), (1, 1), padding=(1, 4))),
                weight_norm(nn.Conv2d(channels, channels, (3, 9), (1, 2), padding=(1, 4))),
                weight_norm(nn.Conv2d(channels, channels, (3, 9), (1, 2), padding=(1, 4))),
                weight_norm(nn.Conv2d(channels, channels, (3, 9), (1, 2), padding=(1, 4))),
                weight_norm(nn.Conv2d(channels, channels, (3, 3), (1, 1), padding=(1, 1))),
            ]
        )
        self.band_convs = nn.ModuleList([convs() for _ in range(len(self.bands))])

        if num_embeddings is not None:
            self.emb = torch.nn.Embedding(num_embeddings=num_embeddings, embedding_dim=channels)
            torch.nn.init.zeros_(self.emb.weight)

        self.conv_post = weight_norm(nn.Conv2d(channels, 1, (3, 3), (1, 1), padding=(1, 1)))

    def spectrogram(self, x):
        # Remove DC offset
        x = x - x.mean(dim=-1, keepdims=True)
        # Peak normalize the volume of input audio
        x = 0.8 * x / (x.abs().max(dim=-1, keepdim=True)[0] + 1e-9)
        x = self.spec_fn(x)
        x = torch.view_as_real(x)
        x = rearrange(x, "b f t c -> b c t f")
        # Split into bands
        x_bands = [x[..., b[0]: b[1]] for b in self.bands]
        return x_bands

    def forward(self, x: torch.Tensor, cond_embedding_id: torch.Tensor = None):
        x_bands = self.spectrogram(x)
        fmap = []
        x = []
        for band, stack in zip(x_bands, self.band_convs):
            for i, layer in enumerate(stack):
                band = layer(band)
                band = torch.nn.functional.leaky_relu(band, 0.1)
                if i > 0:
                    fmap.append(band)
            x.append(band)
        x = torch.cat(x, dim=-1)
        if cond_embedding_id is not None:
            emb = self.emb(cond_embedding_id)
            h = (emb.view(1, -1, 1, 1) * x).sum(dim=1, keepdims=True)
        else:
            h = 0
        x = self.conv_post(x)
        fmap.append(x)
        x += h

        return x, fmap


class MultiResSpecDiscriminator(torch.nn.Module):

    def __init__(self,
                 fft_sizes=[1024, 2048, 512],
                 hop_sizes=[120, 240, 50],
                 win_lengths=[600, 1200, 240],
                 window="hann_window"):

        super(MultiResSpecDiscriminator, self).__init__()
        self.discriminators = nn.ModuleList([
            SpecDiscriminator(fft_sizes[0], hop_sizes[0], win_lengths[0], window),
            SpecDiscriminator(fft_sizes[1], hop_sizes[1], win_lengths[1], window),
            SpecDiscriminator(fft_sizes[2], hop_sizes[2], win_lengths[2], window)])

    def forward(self, y, y_hat):
        y_d_rs = []
        y_d_gs = []
        fmap_rs = []
        fmap_gs = []
        for _, d in enumerate(self.discriminators):
            y_d_r, fmap_r = d(y)
            y_d_g, fmap_g = d(y_hat)
            y_d_rs.append(y_d_r)
            fmap_rs.append(fmap_r)
            y_d_gs.append(y_d_g)
            fmap_gs.append(fmap_g)

        return y_d_rs, y_d_gs, fmap_rs, fmap_gs


def stft(x, fft_size, hop_size, win_length, window):
    """Perform STFT and convert to magnitude spectrogram.
    Args:
        x (Tensor): Input signal tensor (B, T).
        fft_size (int): FFT size.
        hop_size (int): Hop size.
        win_length (int): Window length.
        window (str): Window function type.
    Returns:
        Tensor: Magnitude spectrogram (B, #frames, fft_size // 2 + 1).
    """
    x_stft = torch.stft(x, fft_size, hop_size, win_length, window, return_complex=True)

    # NOTE(kan-bayashi): clamp is needed to avoid nan or inf
    return torch.abs(x_stft).transpose(2, 1)


class SpecDiscriminator(nn.Module):
    """docstring for Discriminator."""

    def __init__(self, fft_size=1024, shift_size=120, win_length=600, window="hann_window", use_spectral_norm=False):
        super(SpecDiscriminator, self).__init__()
        norm_f = weight_norm if use_spectral_norm is False else spectral_norm
        self.fft_size = fft_size
        self.shift_size = shift_size
        self.win_length = win_length
        self.window = getattr(torch, window)(win_length)
        self.discriminators = nn.ModuleList([
            norm_f(nn.Conv2d(1, 32, kernel_size=(3, 9), padding=(1, 4))),
            norm_f(nn.Conv2d(32, 32, kernel_size=(3, 9), stride=(1, 2), padding=(1, 4))),
            norm_f(nn.Conv2d(32, 32, kernel_size=(3, 9), stride=(1, 2), padding=(1, 4))),
            norm_f(nn.Conv2d(32, 32, kernel_size=(3, 9), stride=(1, 2), padding=(1, 4))),
            norm_f(nn.Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))),
        ])

        self.out = norm_f(nn.Conv2d(32, 1, 3, 1, 1))

    def forward(self, y):

        fmap = []
        y = y.squeeze(1)
        y = stft(y, self.fft_size, self.shift_size, self.win_length, self.window.to(y.device))
        y = y.unsqueeze(1)
        for _, d in enumerate(self.discriminators):
            y = d(y)
            y = F.leaky_relu(y, LRELU_SLOPE)
            fmap.append(y)

        y = self.out(y)
        fmap.append(y)

        return torch.flatten(y, 1, -1), fmap
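A small sketch of the frequency-band split used by `DiscriminatorR.spectrogram`: the fractional band edges are scaled by the number of FFT bins and each band is then sliced out of the spectrogram along the frequency axis. Values below use one of the default `fft_sizes`.

```python
window_length = 2048                  # one of the default fft_sizes
n_fft_bins = window_length // 2 + 1   # 1025 one-sided FFT bins
bands = ((0.0, 0.1), (0.1, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0))

# Convert fractional edges to integer bin indices, exactly as in __init__
edges = [(int(b0 * n_fft_bins), int(b1 * n_fft_bins)) for b0, b1 in bands]
print(edges)  # [(0, 102), (102, 256), (256, 512), (512, 768), (768, 1025)]
```

Each band then gets its own convolutional stack (`band_convs`), so low-frequency structure and high-frequency detail are judged by separate sub-discriminators.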
103  models/CosyVoice/cosyvoice/hifigan/f0_predictor.py  Normal file
@@ -0,0 +1,103 @@
# Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu, Kai Hu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
import torch.nn as nn
try:
    from torch.nn.utils.parametrizations import weight_norm
except ImportError:
    from torch.nn.utils import weight_norm
from cosyvoice.transformer.convolution import CausalConv1d


class ConvRNNF0Predictor(nn.Module):
    def __init__(self,
                 num_class: int = 1,
                 in_channels: int = 80,
                 cond_channels: int = 512
                 ):
        super().__init__()

        self.num_class = num_class
        self.condnet = nn.Sequential(
            weight_norm(
                nn.Conv1d(in_channels, cond_channels, kernel_size=3, padding=1)
            ),
            nn.ELU(),
            weight_norm(
                nn.Conv1d(cond_channels, cond_channels, kernel_size=3, padding=1)
            ),
            nn.ELU(),
            weight_norm(
                nn.Conv1d(cond_channels, cond_channels, kernel_size=3, padding=1)
            ),
            nn.ELU(),
            weight_norm(
                nn.Conv1d(cond_channels, cond_channels, kernel_size=3, padding=1)
            ),
            nn.ELU(),
            weight_norm(
                nn.Conv1d(cond_channels, cond_channels, kernel_size=3, padding=1)
            ),
            nn.ELU(),
        )
        self.classifier = nn.Linear(in_features=cond_channels, out_features=self.num_class)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.condnet(x)
        x = x.transpose(1, 2)
        return torch.abs(self.classifier(x).squeeze(-1))


class CausalConvRNNF0Predictor(nn.Module):
    def __init__(self,
                 num_class: int = 1,
                 in_channels: int = 80,
                 cond_channels: int = 512
                 ):
        super().__init__()

        self.num_class = num_class
        self.condnet = nn.Sequential(
            weight_norm(
                CausalConv1d(in_channels, cond_channels, kernel_size=4, causal_type='right')
            ),
            nn.ELU(),
            weight_norm(
                CausalConv1d(cond_channels, cond_channels, kernel_size=3, causal_type='left')
            ),
            nn.ELU(),
            weight_norm(
                CausalConv1d(cond_channels, cond_channels, kernel_size=3, causal_type='left')
            ),
            nn.ELU(),
            weight_norm(
                CausalConv1d(cond_channels, cond_channels, kernel_size=3, causal_type='left')
            ),
            nn.ELU(),
            weight_norm(
                CausalConv1d(cond_channels, cond_channels, kernel_size=3, causal_type='left')
            ),
            nn.ELU(),
        )
        self.classifier = nn.Linear(in_features=cond_channels, out_features=self.num_class)

    def forward(self, x: torch.Tensor, finalize: bool = True) -> torch.Tensor:
        if finalize is True:
            x = self.condnet[0](x)
        else:
            x = self.condnet[0](x[:, :, :-self.condnet[0].causal_padding], x[:, :, -self.condnet[0].causal_padding:])
        for i in range(1, len(self.condnet)):
            x = self.condnet[i](x)
        x = x.transpose(1, 2)
        return torch.abs(self.classifier(x).squeeze(-1))
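A hedged pure-Python sketch of left-causal 1-D convolution, the `causal_type='left'` behavior assumed for the `CausalConv1d` layers above: pad `kernel_size - 1` zeros on the left so output frame t depends only on inputs at positions <= t, which keeps the predictor streamable.

```python
def causal_conv1d(x, kernel):
    """Left-causal 1-D convolution on a plain list (zero left-padding)."""
    k = len(kernel)
    padded = [0.0] * (k - 1) + x  # pad left so no future samples are seen
    return [sum(padded[i + j] * kernel[j] for j in range(k))
            for i in range(len(x))]

print(causal_conv1d([1.0, 2.0, 3.0], [0.5, 0.5]))  # [0.5, 1.5, 2.5]
```

The first layer instead uses `causal_type='right'` with kernel size 4, presumably to grant a small lookahead; the non-finalize branch in `forward` passes that lookahead context explicitly during streaming.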
746  models/CosyVoice/cosyvoice/hifigan/generator.py  Normal file
@@ -0,0 +1,746 @@
|
||||
# Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu, Kai Hu)
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""HIFI-GAN"""

from typing import Dict, Optional, List
import numpy as np
from scipy.signal import get_window
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn import Conv1d
from torch.nn import ConvTranspose1d
from torch.nn.utils import remove_weight_norm
try:
    from torch.nn.utils.parametrizations import weight_norm
except ImportError:
    from torch.nn.utils import weight_norm
from torch.distributions.uniform import Uniform
from cosyvoice.transformer.convolution import CausalConv1d, CausalConv1dDownSample, CausalConv1dUpsample
from cosyvoice.transformer.activation import Snake
from cosyvoice.utils.common import get_padding
from cosyvoice.utils.common import init_weights


"""hifigan based generator implementation.

This code is modified from https://github.com/jik876/hifi-gan,
https://github.com/kan-bayashi/ParallelWaveGAN and
https://github.com/NVIDIA/BigVGAN
"""


class ResBlock(torch.nn.Module):
    """Residual block module in HiFiGAN/BigVGAN."""
    def __init__(
        self,
        channels: int = 512,
        kernel_size: int = 3,
        dilations: List[int] = [1, 3, 5],
        causal: bool = False,
    ):
        super(ResBlock, self).__init__()
        self.causal = causal
        self.convs1 = nn.ModuleList()
        self.convs2 = nn.ModuleList()

        for dilation in dilations:
            self.convs1.append(
                weight_norm(
                    Conv1d(
                        channels,
                        channels,
                        kernel_size,
                        1,
                        dilation=dilation,
                        padding=get_padding(kernel_size, dilation)) if causal is False else
                    CausalConv1d(
                        channels,
                        channels,
                        kernel_size,
                        1,
                        dilation=dilation,
                        causal_type='left'
                    )
                )
            )
            self.convs2.append(
                weight_norm(
                    Conv1d(
                        channels,
                        channels,
                        kernel_size,
                        1,
                        dilation=1,
                        padding=get_padding(kernel_size, 1)) if causal is False else
                    CausalConv1d(
                        channels,
                        channels,
                        kernel_size,
                        1,
                        dilation=1,
                        causal_type='left'
                    )
                )
            )
        self.convs1.apply(init_weights)
        self.convs2.apply(init_weights)
        self.activations1 = nn.ModuleList([
            Snake(channels, alpha_logscale=False)
            for _ in range(len(self.convs1))
        ])
        self.activations2 = nn.ModuleList([
            Snake(channels, alpha_logscale=False)
            for _ in range(len(self.convs2))
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for idx in range(len(self.convs1)):
            xt = self.activations1[idx](x)
            xt = self.convs1[idx](xt)
            xt = self.activations2[idx](xt)
            xt = self.convs2[idx](xt)
            x = xt + x
        return x

    def remove_weight_norm(self):
        for idx in range(len(self.convs1)):
            remove_weight_norm(self.convs1[idx])
            remove_weight_norm(self.convs2[idx])

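The non-causal branch of `ResBlock` relies on `get_padding` to keep sequence length unchanged through each dilated convolution. In the upstream HiFi-GAN code that helper is `(kernel_size * dilation - dilation) // 2`; a small sketch of the length arithmetic, using a local stand-in rather than the `cosyvoice.utils.common` import:

```python
# "Same" padding for a stride-1 dilated conv: output length equals input
# length when padding = dilation * (kernel_size - 1) // 2 (odd kernels).
def get_padding(kernel_size: int, dilation: int = 1) -> int:
    return (kernel_size * dilation - dilation) // 2

def conv_out_len(n: int, kernel_size: int, dilation: int, padding: int) -> int:
    # Standard 1-D convolution output-length formula, stride 1.
    return n + 2 * padding - dilation * (kernel_size - 1)

# Kernel/dilation pairs used by the resblocks above all preserve length.
for k, d in [(3, 1), (3, 3), (3, 5), (7, 1), (11, 5)]:
    p = get_padding(k, d)
    assert conv_out_len(100, k, d, p) == 100
```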
class SineGen(torch.nn.Module):
    """ Definition of sine generator
    SineGen(samp_rate, harmonic_num = 0,
            sine_amp = 0.1, noise_std = 0.003,
            voiced_threshold = 0,
            flag_for_pulse=False)
    samp_rate: sampling rate in Hz
    harmonic_num: number of harmonic overtones (default 0)
    sine_amp: amplitude of sine waveform (default 0.1)
    noise_std: std of Gaussian noise (default 0.003)
    voiced_threshold: F0 threshold for U/V classification (default 0)
    flag_for_pulse: this SineGen is used inside PulseGen (default False)
    Note: when flag_for_pulse is True, the first time step of a voiced
        segment is always sin(np.pi) or cos(0)
    """

    def __init__(self, samp_rate, harmonic_num=0,
                 sine_amp=0.1, noise_std=0.003,
                 voiced_threshold=0):
        super(SineGen, self).__init__()
        self.sine_amp = sine_amp
        self.noise_std = noise_std
        self.harmonic_num = harmonic_num
        self.sampling_rate = samp_rate
        self.voiced_threshold = voiced_threshold

    def _f02uv(self, f0):
        # generate uv signal
        uv = (f0 > self.voiced_threshold).type(torch.float32)
        return uv

    @torch.no_grad()
    def forward(self, f0):
        """ sine_tensor, uv = forward(f0)
        input F0: tensor(batchsize=1, dim=1, length)
        f0 for unvoiced steps should be 0
        output sine_tensor: tensor(batchsize=1, length, dim)
        output uv: tensor(batchsize=1, length, 1)
        """
        f0 = f0.transpose(1, 2)
        F_mat = torch.zeros((f0.size(0), self.harmonic_num + 1, f0.size(-1))).to(f0.device)
        for i in range(self.harmonic_num + 1):
            F_mat[:, i: i + 1, :] = f0 * (i + 1) / self.sampling_rate

        theta_mat = 2 * np.pi * (torch.cumsum(F_mat, dim=-1) % 1)
        u_dist = Uniform(low=-np.pi, high=np.pi)
        phase_vec = u_dist.sample(sample_shape=(f0.size(0), self.harmonic_num + 1, 1)).to(F_mat.device)
        phase_vec[:, 0, :] = 0

        # generate sine waveforms
        sine_waves = self.sine_amp * torch.sin(theta_mat + phase_vec)

        # generate uv signal
        uv = self._f02uv(f0)

        # noise: for unvoiced should be similar to sine_amp
        #        std = self.sine_amp/3 -> max value ~ self.sine_amp
        #        for voiced regions is self.noise_std
        noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
        noise = noise_amp * torch.randn_like(sine_waves)

        # first: set the unvoiced part to 0 by uv
        # then: additive noise
        sine_waves = sine_waves * uv + noise
        return sine_waves.transpose(1, 2), uv.transpose(1, 2), noise

class SineGen2(torch.nn.Module):
    """ Definition of sine generator
    SineGen(samp_rate, harmonic_num = 0,
            sine_amp = 0.1, noise_std = 0.003,
            voiced_threshold = 0,
            flag_for_pulse=False)
    samp_rate: sampling rate in Hz
    harmonic_num: number of harmonic overtones (default 0)
    sine_amp: amplitude of sine waveform (default 0.1)
    noise_std: std of Gaussian noise (default 0.003)
    voiced_threshold: F0 threshold for U/V classification (default 0)
    flag_for_pulse: this SineGen is used inside PulseGen (default False)
    Note: when flag_for_pulse is True, the first time step of a voiced
        segment is always sin(np.pi) or cos(0)
    """

    def __init__(self, samp_rate, upsample_scale, harmonic_num=0,
                 sine_amp=0.1, noise_std=0.003,
                 voiced_threshold=0,
                 flag_for_pulse=False,
                 causal=False):
        super(SineGen2, self).__init__()
        self.sine_amp = sine_amp
        self.noise_std = noise_std
        self.harmonic_num = harmonic_num
        self.dim = self.harmonic_num + 1
        self.sampling_rate = samp_rate
        self.voiced_threshold = voiced_threshold
        self.flag_for_pulse = flag_for_pulse
        self.upsample_scale = upsample_scale
        self.causal = causal
        if causal is True:
            self.rand_ini = torch.rand(1, 9)
            self.rand_ini[:, 0] = 0
            self.sine_waves = torch.rand(1, 300 * 24000, 9)

    def _f02uv(self, f0):
        # generate uv signal
        uv = (f0 > self.voiced_threshold).type(torch.float32)
        return uv

    def _f02sine(self, f0_values):
        """ f0_values: (batchsize, length, dim)
        where dim indicates fundamental tone and overtones
        """
        # convert to F0 in rad. The integer part n can be ignored
        # because 2 * np.pi * n doesn't affect phase
        rad_values = (f0_values / self.sampling_rate) % 1

        # initial phase noise (no noise for fundamental component)
        if self.training is False and self.causal is True:
            rad_values[:, 0, :] = rad_values[:, 0, :] + self.rand_ini.to(rad_values.device)
        else:
            rand_ini = torch.rand(f0_values.shape[0], f0_values.shape[2], device=f0_values.device)
            rand_ini[:, 0] = 0
            rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini

        # instantaneous phase sine[t] = sin(2*pi \sum_i=1 ^{t} rad)
        if not self.flag_for_pulse:
            rad_values = torch.nn.functional.interpolate(rad_values.transpose(1, 2),
                                                         scale_factor=1 / self.upsample_scale,
                                                         mode="linear").transpose(1, 2)

            phase = torch.cumsum(rad_values, dim=1) * 2 * np.pi
            phase = torch.nn.functional.interpolate(phase.transpose(1, 2) * self.upsample_scale,
                                                    scale_factor=self.upsample_scale,
                                                    mode="nearest" if self.causal is True else 'linear').transpose(1, 2)
            sines = torch.sin(phase)
        else:
            # If necessary, make sure that the first time step of every
            # voiced segment is sin(pi) or cos(0).
            # This is used for pulse-train generation.

            # identify the last time step in unvoiced segments
            uv = self._f02uv(f0_values)
            uv_1 = torch.roll(uv, shifts=-1, dims=1)
            uv_1[:, -1, :] = 1
            u_loc = (uv < 1) * (uv_1 > 0)

            # get the instantaneous phase
            tmp_cumsum = torch.cumsum(rad_values, dim=1)
            # different batches need to be processed differently
            for idx in range(f0_values.shape[0]):
                temp_sum = tmp_cumsum[idx, u_loc[idx, :, 0], :]
                temp_sum[1:, :] = temp_sum[1:, :] - temp_sum[0:-1, :]
                # stores the accumulation of i.phase within
                # each voiced segment
                tmp_cumsum[idx, :, :] = 0
                tmp_cumsum[idx, u_loc[idx, :, 0], :] = temp_sum

            # rad_values - tmp_cumsum: remove the accumulation of i.phase
            # within the previous voiced segment.
            i_phase = torch.cumsum(rad_values - tmp_cumsum, dim=1)

            # get the sines
            sines = torch.cos(i_phase * 2 * np.pi)
        return sines

    def forward(self, f0):
        """ sine_tensor, uv = forward(f0)
        input F0: tensor(batchsize=1, length, dim=1)
        f0 for unvoiced steps should be 0
        output sine_tensor: tensor(batchsize=1, length, dim)
        output uv: tensor(batchsize=1, length, 1)
        """
        # fundamental component
        fn = torch.multiply(f0, torch.FloatTensor([[range(1, self.harmonic_num + 2)]]).to(f0.device))

        # generate sine waveforms
        sine_waves = self._f02sine(fn) * self.sine_amp

        # generate uv signal
        uv = self._f02uv(f0)

        # noise: for unvoiced should be similar to sine_amp
        #        std = self.sine_amp/3 -> max value ~ self.sine_amp
        #        for voiced regions is self.noise_std
        noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
        if self.training is False and self.causal is True:
            noise = noise_amp * self.sine_waves[:, :sine_waves.shape[1]].to(sine_waves.device)
        else:
            noise = noise_amp * torch.randn_like(sine_waves)

        # first: set the unvoiced part to 0 by uv
        # then: additive noise
        sine_waves = sine_waves * uv + noise
        return sine_waves, uv, noise

class SourceModuleHnNSF(torch.nn.Module):
    """ SourceModule for hn-nsf
    SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
                 add_noise_std=0.003, voiced_threshod=0)
    sampling_rate: sampling rate in Hz
    harmonic_num: number of harmonics above F0 (default: 0)
    sine_amp: amplitude of sine source signal (default: 0.1)
    add_noise_std: std of additive Gaussian noise (default: 0.003)
        note that amplitude of noise in unvoiced is decided
        by sine_amp
    voiced_threshod: threshold to set U/V given F0 (default: 0)
    Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
    F0_sampled (batchsize, length, 1)
    Sine_source (batchsize, length, 1)
    noise_source (batchsize, length, 1)
    uv (batchsize, length, 1)
    """

    def __init__(self, sampling_rate, upsample_scale, harmonic_num=0, sine_amp=0.1,
                 add_noise_std=0.003, voiced_threshod=0, sinegen_type='1', causal=False):
        super(SourceModuleHnNSF, self).__init__()

        self.sine_amp = sine_amp
        self.noise_std = add_noise_std

        # to produce sine waveforms
        if sinegen_type == '1':
            self.l_sin_gen = SineGen(sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod)
        else:
            self.l_sin_gen = SineGen2(sampling_rate, upsample_scale, harmonic_num, sine_amp, add_noise_std, voiced_threshod, causal=causal)

        # to merge source harmonics into a single excitation
        self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
        self.l_tanh = torch.nn.Tanh()
        self.causal = causal
        if causal is True:
            self.uv = torch.rand(1, 300 * 24000, 1)

    def forward(self, x):
        """
        Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
        F0_sampled (batchsize, length, 1)
        Sine_source (batchsize, length, 1)
        noise_source (batchsize, length, 1)
        """
        # source for harmonic branch
        with torch.no_grad():
            sine_wavs, uv, _ = self.l_sin_gen(x)
        sine_merge = self.l_tanh(self.l_linear(sine_wavs))

        # source for noise branch, in the same shape as uv
        if self.training is False and self.causal is True:
            noise = self.uv[:, :uv.shape[1]] * self.sine_amp / 3
        else:
            noise = torch.randn_like(uv) * self.sine_amp / 3
        return sine_merge, noise, uv

class HiFTGenerator(nn.Module):
    """
    HiFTNet Generator: Neural Source Filter + ISTFTNet
    https://arxiv.org/abs/2309.09493
    """
    def __init__(
            self,
            in_channels: int = 80,
            base_channels: int = 512,
            nb_harmonics: int = 8,
            sampling_rate: int = 22050,
            nsf_alpha: float = 0.1,
            nsf_sigma: float = 0.003,
            nsf_voiced_threshold: float = 10,
            upsample_rates: List[int] = [8, 8],
            upsample_kernel_sizes: List[int] = [16, 16],
            istft_params: Dict[str, int] = {"n_fft": 16, "hop_len": 4},
            resblock_kernel_sizes: List[int] = [3, 7, 11],
            resblock_dilation_sizes: List[List[int]] = [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
            source_resblock_kernel_sizes: List[int] = [7, 11],
            source_resblock_dilation_sizes: List[List[int]] = [[1, 3, 5], [1, 3, 5]],
            lrelu_slope: float = 0.1,
            audio_limit: float = 0.99,
            f0_predictor: torch.nn.Module = None,
    ):
        super(HiFTGenerator, self).__init__()

        self.out_channels = 1
        self.nb_harmonics = nb_harmonics
        self.sampling_rate = sampling_rate
        self.istft_params = istft_params
        self.lrelu_slope = lrelu_slope
        self.audio_limit = audio_limit

        self.num_kernels = len(resblock_kernel_sizes)
        self.num_upsamples = len(upsample_rates)
        # NOTE in CosyVoice2, we use the original SineGen implementation
        self.m_source = SourceModuleHnNSF(
            sampling_rate=sampling_rate,
            upsample_scale=np.prod(upsample_rates) * istft_params["hop_len"],
            harmonic_num=nb_harmonics,
            sine_amp=nsf_alpha,
            add_noise_std=nsf_sigma,
            voiced_threshod=nsf_voiced_threshold,
            sinegen_type='1' if self.sampling_rate == 22050 else '2',
            causal=False)
        self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates) * istft_params["hop_len"])

        self.conv_pre = weight_norm(
            Conv1d(in_channels, base_channels, 7, 1, padding=3)
        )

        # Up
        self.ups = nn.ModuleList()
        for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
            self.ups.append(
                weight_norm(
                    ConvTranspose1d(
                        base_channels // (2**i),
                        base_channels // (2**(i + 1)),
                        k,
                        u,
                        padding=(k - u) // 2,
                    )
                )
            )

        # Down
        self.source_downs = nn.ModuleList()
        self.source_resblocks = nn.ModuleList()
        downsample_rates = [1] + upsample_rates[::-1][:-1]
        downsample_cum_rates = np.cumprod(downsample_rates)
        for i, (u, k, d) in enumerate(zip(downsample_cum_rates[::-1], source_resblock_kernel_sizes, source_resblock_dilation_sizes)):
            if u == 1:
                self.source_downs.append(
                    Conv1d(istft_params["n_fft"] + 2, base_channels // (2 ** (i + 1)), 1, 1)
                )
            else:
                self.source_downs.append(
                    Conv1d(istft_params["n_fft"] + 2, base_channels // (2 ** (i + 1)), u * 2, u, padding=(u // 2))
                )

            self.source_resblocks.append(
                ResBlock(base_channels // (2 ** (i + 1)), k, d)
            )

        self.resblocks = nn.ModuleList()
        for i in range(len(self.ups)):
            ch = base_channels // (2**(i + 1))
            for _, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
                self.resblocks.append(ResBlock(ch, k, d))

        self.conv_post = weight_norm(Conv1d(ch, istft_params["n_fft"] + 2, 7, 1, padding=3))
        self.ups.apply(init_weights)
        self.conv_post.apply(init_weights)
        self.reflection_pad = nn.ReflectionPad1d((1, 0))
        self.stft_window = torch.from_numpy(get_window("hann", istft_params["n_fft"], fftbins=True).astype(np.float32))
        self.f0_predictor = f0_predictor

    def remove_weight_norm(self):
        print('Removing weight norm...')
        for l in self.ups:
            remove_weight_norm(l)
        for l in self.resblocks:
            l.remove_weight_norm()
        remove_weight_norm(self.conv_pre)
        remove_weight_norm(self.conv_post)
        self.m_source.remove_weight_norm()
        for l in self.source_downs:
            remove_weight_norm(l)
        for l in self.source_resblocks:
            l.remove_weight_norm()

    def _stft(self, x):
        spec = torch.stft(
            x,
            self.istft_params["n_fft"], self.istft_params["hop_len"], self.istft_params["n_fft"], window=self.stft_window.to(x.device),
            return_complex=True)
        spec = torch.view_as_real(spec)  # [B, F, TT, 2]
        return spec[..., 0], spec[..., 1]

    def _istft(self, magnitude, phase):
        magnitude = torch.clip(magnitude, max=1e2)
        real = magnitude * torch.cos(phase)
        img = magnitude * torch.sin(phase)
        inverse_transform = torch.istft(torch.complex(real, img), self.istft_params["n_fft"], self.istft_params["hop_len"],
                                        self.istft_params["n_fft"], window=self.stft_window.to(magnitude.device))
        return inverse_transform

    def decode(self, x: torch.Tensor, s: torch.Tensor = torch.zeros(1, 1, 0)) -> torch.Tensor:
        s_stft_real, s_stft_imag = self._stft(s.squeeze(1))
        s_stft = torch.cat([s_stft_real, s_stft_imag], dim=1)

        x = self.conv_pre(x)
        for i in range(self.num_upsamples):
            x = F.leaky_relu(x, self.lrelu_slope)
            x = self.ups[i](x)

            if i == self.num_upsamples - 1:
                x = self.reflection_pad(x)

            # fusion
            si = self.source_downs[i](s_stft)
            si = self.source_resblocks[i](si)
            x = x + si

            xs = None
            for j in range(self.num_kernels):
                if xs is None:
                    xs = self.resblocks[i * self.num_kernels + j](x)
                else:
                    xs += self.resblocks[i * self.num_kernels + j](x)
            x = xs / self.num_kernels

        x = F.leaky_relu(x)
        x = self.conv_post(x)
        magnitude = torch.exp(x[:, :self.istft_params["n_fft"] // 2 + 1, :])
        phase = torch.sin(x[:, self.istft_params["n_fft"] // 2 + 1:, :])  # actually, sin is redundant

        x = self._istft(magnitude, phase)
        x = torch.clamp(x, -self.audio_limit, self.audio_limit)
        return x

    def forward(
            self,
            batch: dict,
            device: torch.device,
    ) -> Dict[str, Optional[torch.Tensor]]:
        speech_feat = batch['speech_feat'].transpose(1, 2).to(device)
        # mel->f0
        f0 = self.f0_predictor(speech_feat)
        # f0->source
        s = self.f0_upsamp(f0[:, None]).transpose(1, 2)  # bs,n,t
        s, _, _ = self.m_source(s)
        s = s.transpose(1, 2)
        # mel+source->speech
        generated_speech = self.decode(x=speech_feat, s=s)
        return generated_speech, f0

    @torch.inference_mode()
    def inference(self, speech_feat: torch.Tensor, cache_source: torch.Tensor = torch.zeros(1, 1, 0)) -> torch.Tensor:
        # mel->f0
        f0 = self.f0_predictor(speech_feat)
        # f0->source
        s = self.f0_upsamp(f0[:, None]).transpose(1, 2)  # bs,n,t
        s, _, _ = self.m_source(s)
        s = s.transpose(1, 2)
        # use cache_source to avoid glitch
        if cache_source.shape[2] != 0:
            s[:, :, :cache_source.shape[2]] = cache_source
        generated_speech = self.decode(x=speech_feat, s=s)
        return generated_speech, s

class CausalHiFTGenerator(HiFTGenerator):
    """
    HiFTNet Generator: Neural Source Filter + ISTFTNet
    https://arxiv.org/abs/2309.09493
    """
    def __init__(
            self,
            in_channels: int = 80,
            base_channels: int = 512,
            nb_harmonics: int = 8,
            sampling_rate: int = 22050,
            nsf_alpha: float = 0.1,
            nsf_sigma: float = 0.003,
            nsf_voiced_threshold: float = 10,
            upsample_rates: List[int] = [8, 8],
            upsample_kernel_sizes: List[int] = [16, 16],
            istft_params: Dict[str, int] = {"n_fft": 16, "hop_len": 4},
            resblock_kernel_sizes: List[int] = [3, 7, 11],
            resblock_dilation_sizes: List[List[int]] = [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
            source_resblock_kernel_sizes: List[int] = [7, 11],
            source_resblock_dilation_sizes: List[List[int]] = [[1, 3, 5], [1, 3, 5]],
            lrelu_slope: float = 0.1,
            audio_limit: float = 0.99,
            conv_pre_look_right: int = 4,
            f0_predictor: torch.nn.Module = None,
    ):
        torch.nn.Module.__init__(self)

        self.out_channels = 1
        self.nb_harmonics = nb_harmonics
        self.sampling_rate = sampling_rate
        self.istft_params = istft_params
        self.lrelu_slope = lrelu_slope
        self.audio_limit = audio_limit

        self.num_kernels = len(resblock_kernel_sizes)
        self.num_upsamples = len(upsample_rates)
        self.m_source = SourceModuleHnNSF(
            sampling_rate=sampling_rate,
            upsample_scale=np.prod(upsample_rates) * istft_params["hop_len"],
            harmonic_num=nb_harmonics,
            sine_amp=nsf_alpha,
            add_noise_std=nsf_sigma,
            voiced_threshod=nsf_voiced_threshold,
            sinegen_type='1' if self.sampling_rate == 22050 else '2',
            causal=True)
        self.upsample_rates = upsample_rates
        self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates) * istft_params["hop_len"])

        self.conv_pre = weight_norm(
            CausalConv1d(in_channels, base_channels, conv_pre_look_right + 1, 1, causal_type='right')
        )

        # Up
        self.ups = nn.ModuleList()
        for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
            self.ups.append(
                weight_norm(
                    CausalConv1dUpsample(
                        base_channels // (2**i),
                        base_channels // (2**(i + 1)),
                        k,
                        u,
                    )
                )
            )

        # Down
        self.source_downs = nn.ModuleList()
        self.source_resblocks = nn.ModuleList()
        downsample_rates = [1] + upsample_rates[::-1][:-1]
        downsample_cum_rates = np.cumprod(downsample_rates)
        for i, (u, k, d) in enumerate(zip(downsample_cum_rates[::-1], source_resblock_kernel_sizes, source_resblock_dilation_sizes)):
            if u == 1:
                self.source_downs.append(
                    CausalConv1d(istft_params["n_fft"] + 2, base_channels // (2 ** (i + 1)), 1, 1, causal_type='left')
                )
            else:
                self.source_downs.append(
                    CausalConv1dDownSample(istft_params["n_fft"] + 2, base_channels // (2 ** (i + 1)), u * 2, u)
                )

            self.source_resblocks.append(
                ResBlock(base_channels // (2 ** (i + 1)), k, d, causal=True)
            )

        self.resblocks = nn.ModuleList()
        for i in range(len(self.ups)):
            ch = base_channels // (2**(i + 1))
            for _, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
                self.resblocks.append(ResBlock(ch, k, d, causal=True))

        self.conv_post = weight_norm(CausalConv1d(ch, istft_params["n_fft"] + 2, 7, 1, causal_type='left'))
        self.ups.apply(init_weights)
        self.conv_post.apply(init_weights)
        self.reflection_pad = nn.ReflectionPad1d((1, 0))
        self.stft_window = torch.from_numpy(get_window("hann", istft_params["n_fft"], fftbins=True).astype(np.float32))
        self.conv_pre_look_right = conv_pre_look_right
        self.f0_predictor = f0_predictor

    def decode(self, x: torch.Tensor, s: torch.Tensor = torch.zeros(1, 1, 0), finalize: bool = True) -> torch.Tensor:
        s_stft_real, s_stft_imag = self._stft(s.squeeze(1))
        if finalize is True:
            x = self.conv_pre(x)
        else:
            x = self.conv_pre(x[:, :, :-self.conv_pre_look_right], x[:, :, -self.conv_pre_look_right:])
            s_stft_real = s_stft_real[:, :, :-int(np.prod(self.upsample_rates) * self.conv_pre_look_right)]
            s_stft_imag = s_stft_imag[:, :, :-int(np.prod(self.upsample_rates) * self.conv_pre_look_right)]
        s_stft = torch.cat([s_stft_real, s_stft_imag], dim=1)

        for i in range(self.num_upsamples):
            x = F.leaky_relu(x, self.lrelu_slope)
            x = self.ups[i](x)

            if i == self.num_upsamples - 1:
                x = self.reflection_pad(x)

            # fusion
            si = self.source_downs[i](s_stft)
            si = self.source_resblocks[i](si)
            x = x + si

            xs = None
            for j in range(self.num_kernels):
                if xs is None:
                    xs = self.resblocks[i * self.num_kernels + j](x)
                else:
                    xs += self.resblocks[i * self.num_kernels + j](x)
            x = xs / self.num_kernels

        x = F.leaky_relu(x)
        x = self.conv_post(x)
        magnitude = torch.exp(x[:, :self.istft_params["n_fft"] // 2 + 1, :])
        phase = torch.sin(x[:, self.istft_params["n_fft"] // 2 + 1:, :])  # actually, sin is redundant

        x = self._istft(magnitude, phase)
        if finalize is False:
            x = x[:, :-int(np.prod(self.upsample_rates) * self.istft_params['hop_len'])]
        x = torch.clamp(x, -self.audio_limit, self.audio_limit)
        return x

    @torch.inference_mode()
    def inference(self, speech_feat: torch.Tensor, finalize: bool = True) -> torch.Tensor:
        # mel->f0 NOTE f0_predictor precision is crucial for causal inference, move self.f0_predictor to cpu if necessary
        self.f0_predictor.to(torch.float64)
        f0 = self.f0_predictor(speech_feat.to(torch.float64), finalize=finalize).to(speech_feat)
        # f0->source
        s = self.f0_upsamp(f0[:, None]).transpose(1, 2)  # bs,n,t
        s, _, _ = self.m_source(s)
        s = s.transpose(1, 2)
        if finalize is True:
            generated_speech = self.decode(x=speech_feat, s=s, finalize=finalize)
        else:
            generated_speech = self.decode(x=speech_feat[:, :, :-self.f0_predictor.condnet[0].causal_padding], s=s, finalize=finalize)
        return generated_speech, s


if __name__ == '__main__':
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    from hyperpyyaml import load_hyperpyyaml
    with open('./pretrained_models/Fun-CosyVoice3-0.5B/cosyvoice3.yaml', 'r') as f:
        configs = load_hyperpyyaml(f, overrides={'llm': None, 'flow': None})
    model = configs['hift']
    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    model.to(device)
    model.eval()
    max_len, chunk_size, context_size = 300, 30, 8
    mel = torch.rand(1, 80, max_len).to(device)
    pred_gt, _ = model.inference(mel)
    for i in range(0, max_len, chunk_size):
        finalize = True if i + chunk_size + context_size >= max_len else False
        pred_chunk, _ = model.inference(mel[:, :, : i + chunk_size + context_size], finalize=finalize)
        pred_chunk = pred_chunk[:, i * 480:]
        print((pred_gt[:, i * 480:i * 480 + pred_chunk.shape[1]] - pred_chunk).abs().max().item())
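The streaming check in `__main__` compares chunked inference against a full-utterance pass, stepping the mel input by `chunk_size` frames with `context_size` look-ahead frames, where each mel frame yields a fixed number of output samples (480 in the 24 kHz configuration, matching the `i * 480` indexing). A framework-free sketch of that bookkeeping, using the same constants:

```python
# Bookkeeping sketch for the chunked-inference check: each call sees all mel
# frames up to i + chunk_size + context_size, and the samples before
# i * SAMPLES_PER_FRAME were already emitted by earlier calls.
SAMPLES_PER_FRAME = 480  # output samples per mel frame in the 24 kHz config
max_len, chunk_size, context_size = 300, 30, 8

calls = []
for i in range(0, max_len, chunk_size):
    finalize = i + chunk_size + context_size >= max_len
    frames_seen = min(i + chunk_size + context_size, max_len)
    new_sample_start = i * SAMPLES_PER_FRAME
    calls.append((frames_seen, new_sample_start, finalize))
```

Only the last call sets `finalize=True`, so look-ahead padding is flushed exactly once at the end of the utterance.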
67
models/CosyVoice/cosyvoice/hifigan/hifigan.py
Normal file
@@ -0,0 +1,67 @@
from typing import Dict, Optional
import torch
import torch.nn as nn
import torch.nn.functional as F
from matcha.hifigan.models import feature_loss, generator_loss, discriminator_loss
from cosyvoice.utils.losses import tpr_loss, mel_loss


class HiFiGan(nn.Module):
    def __init__(self, generator, discriminator, mel_spec_transform,
                 multi_mel_spectral_recon_loss_weight=45, feat_match_loss_weight=2.0,
                 tpr_loss_weight=1.0, tpr_loss_tau=0.04):
        super(HiFiGan, self).__init__()
        self.generator = generator
        self.discriminator = discriminator
        self.mel_spec_transform = mel_spec_transform
        self.multi_mel_spectral_recon_loss_weight = multi_mel_spectral_recon_loss_weight
        self.feat_match_loss_weight = feat_match_loss_weight
        self.tpr_loss_weight = tpr_loss_weight
        self.tpr_loss_tau = tpr_loss_tau

    def forward(
            self,
            batch: dict,
            device: torch.device,
    ) -> Dict[str, Optional[torch.Tensor]]:
        if batch['turn'] == 'generator':
            return self.forward_generator(batch, device)
        else:
            return self.forward_discriminator(batch, device)

    def forward_generator(self, batch, device):
        real_speech = batch['speech'].to(device)
        pitch_feat = batch['pitch_feat'].to(device)
        # 1. calculate generator outputs
        generated_speech, generated_f0 = self.generator(batch, device)
        # 2. calculate discriminator outputs
        y_d_rs, y_d_gs, fmap_rs, fmap_gs = self.discriminator(real_speech, generated_speech)
        # 3. calculate generator losses, feature loss, mel loss, tpr losses [Optional]
        loss_gen, _ = generator_loss(y_d_gs)
        loss_fm = feature_loss(fmap_rs, fmap_gs)
        loss_mel = mel_loss(real_speech, generated_speech, self.mel_spec_transform)
        if self.tpr_loss_weight != 0:
            loss_tpr = tpr_loss(y_d_gs, y_d_rs, self.tpr_loss_tau)
        else:
            loss_tpr = torch.zeros(1).to(device)
        loss_f0 = F.l1_loss(generated_f0, pitch_feat)
        loss = loss_gen + self.feat_match_loss_weight * loss_fm + \
            self.multi_mel_spectral_recon_loss_weight * loss_mel + \
            self.tpr_loss_weight * loss_tpr + loss_f0
        return {'loss': loss, 'loss_gen': loss_gen, 'loss_fm': loss_fm, 'loss_mel': loss_mel, 'loss_tpr': loss_tpr, 'loss_f0': loss_f0}

    def forward_discriminator(self, batch, device):
        real_speech = batch['speech'].to(device)
        # 1. calculate generator outputs
        with torch.no_grad():
            generated_speech, generated_f0 = self.generator(batch, device)
        # 2. calculate discriminator outputs
        y_d_rs, y_d_gs, fmap_rs, fmap_gs = self.discriminator(real_speech, generated_speech.detach())
        # 3. calculate discriminator losses, tpr losses [Optional]
        loss_disc, _, _ = discriminator_loss(y_d_rs, y_d_gs)
        if self.tpr_loss_weight != 0:
            loss_tpr = tpr_loss(y_d_rs, y_d_gs, self.tpr_loss_tau)
        else:
            loss_tpr = torch.zeros(1).to(device)
        loss = loss_disc + self.tpr_loss_weight * loss_tpr
        return {'loss': loss, 'loss_disc': loss_disc, 'loss_tpr': loss_tpr}
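`HiFiGan.forward` dispatches on `batch['turn']`: a training loop alternates between generator and discriminator updates, and each turn gets its own loss path. A framework-free sketch of that dispatch pattern (the class and names below are illustrative stand-ins, not the real CosyVoice modules):

```python
# Sketch of turn-based GAN dispatch: the trainer tags each batch with whose
# turn it is, and forward() routes to the matching loss computation.
class ToyGanWrapper:
    def forward(self, batch):
        if batch['turn'] == 'generator':
            # real code: adversarial + feature-matching + mel + f0 losses
            return {'path': 'generator'}
        # real code: discriminator loss on the detached generator output
        return {'path': 'discriminator'}

wrapper = ToyGanWrapper()
turns = []
for step in range(4):
    # alternate updates each step, as a GAN training loop typically does
    turn = 'generator' if step % 2 == 0 else 'discriminator'
    turns.append(wrapper.forward({'turn': turn})['path'])
```

Note that `forward_discriminator` both runs the generator under `torch.no_grad()` and calls `.detach()` on its output, so no gradient ever flows into the generator on a discriminator turn.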
767
models/CosyVoice/cosyvoice/llm/llm.py
Normal file
@@ -0,0 +1,767 @@
|
||||
# Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu, Zhihao Du)
|
||||
# 2025 Alibaba Inc (authors: Xiang Lyu, Yabin Li, Qihua, Shengqiang Li)
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
import os, queue
|
||||
import random
|
||||
import time
|
||||
import threading
|
||||
from typing import Dict, Optional, Callable, List, Generator
|
||||
import numpy as np
|
||||
import torch
|
||||
from torch import nn
|
||||
import torch.nn.functional as F
|
||||
from transformers import Qwen2ForCausalLM
|
||||
from torch.nn.utils.rnn import pad_sequence, unpad_sequence
|
||||
from cosyvoice.utils.common import IGNORE_ID
|
||||
from cosyvoice.transformer.label_smoothing_loss import LabelSmoothingLoss
|
||||
from cosyvoice.utils.common import th_accuracy
|
||||
from cosyvoice.utils.file_utils import logging
|
||||
from cosyvoice.utils.mask import make_pad_mask
|
||||
from cosyvoice.utils.onnx import SpeechTokenExtractor, online_feature, onnx_path
|
||||
|
||||
|
||||
class TransformerLM(torch.nn.Module):
    def __init__(
            self,
            text_encoder_input_size: int,
            llm_input_size: int,
            llm_output_size: int,
            text_token_size: int,
            speech_token_size: int,
            text_encoder: torch.nn.Module,
            llm: torch.nn.Module,
            sampling: Callable,
            length_normalized_loss: bool = True,
            lsm_weight: float = 0.0,
            spk_embed_dim: int = 192,
    ):
        super().__init__()
        self.llm_input_size = llm_input_size
        self.speech_token_size = speech_token_size
        # 1. build text token inputs related modules
        self.text_embedding = torch.nn.Embedding(text_token_size, text_encoder_input_size)
        self.text_encoder = text_encoder
        self.text_encoder_affine_layer = nn.Linear(
            self.text_encoder.output_size(),
            llm_input_size
        )

        # 2. build speech token language model related modules
        self.sos = 0
        self.task_id = 1
        self.eos_token = self.speech_token_size
        self.llm_embedding = torch.nn.Embedding(2, llm_input_size)
        self.llm = llm
        self.llm_decoder = nn.Linear(llm_output_size, speech_token_size + 1)
        self.criterion_ce = LabelSmoothingLoss(
            size=speech_token_size + 1,
            padding_idx=IGNORE_ID,
            smoothing=lsm_weight,
            normalize_length=length_normalized_loss,
        )

        # 3. [Optional] build speech token related modules
        self.speech_embedding = torch.nn.Embedding(speech_token_size, llm_input_size)
        self.spk_embed_affine_layer = torch.nn.Linear(spk_embed_dim, llm_input_size)

        # 4. sampling method
        self.sampling = sampling

    def encode(
            self,
            text: torch.Tensor,
            text_lengths: torch.Tensor,
    ):
        encoder_out, encoder_mask = self.text_encoder(text, text_lengths, decoding_chunk_size=1, num_decoding_left_chunks=-1)
        encoder_out_lens = encoder_mask.squeeze(1).sum(1)
        encoder_out = self.text_encoder_affine_layer(encoder_out)
        return encoder_out, encoder_out_lens

    def pad_unpad_sequence(self, sos_emb, embedding, text_token, text_token_len, task_id_emb, speech_token, speech_token_len):
        text_token = unpad_sequence(text_token, text_token_len.cpu(), batch_first=True)
        speech_token = unpad_sequence(speech_token, speech_token_len.cpu(), batch_first=True)
        lm_input = [torch.concat([sos_emb.squeeze(dim=0), embedding[i], text_token[i], task_id_emb.squeeze(dim=0), speech_token[i]], dim=0)
                    for i in range(len(text_token))]
        lm_input_len = torch.tensor([i.size(0) for i in lm_input], dtype=torch.int32)
        lm_input = pad_sequence(lm_input, batch_first=True, padding_value=IGNORE_ID)
        return lm_input, lm_input_len

    def forward(
            self,
            batch: dict,
            device: torch.device,
    ) -> Dict[str, Optional[torch.Tensor]]:
        """
        Args:
            text: (B, L, D)
            text_lengths: (B,)
            audio: (B, T, N) or (B, T)
            audio_lengths: (B,)
        """
        text_token = batch['text_token'].to(device)
        text_token_len = batch['text_token_len'].to(device)
        speech_token = batch['speech_token'].to(device)
        speech_token_len = batch['speech_token_len'].to(device)
        embedding = batch['embedding'].to(device)

        # 1. prepare lm_target
        lm_target = [torch.tensor([IGNORE_ID] * (2 + text_token_len[i]) + speech_token[i, :speech_token_len[i]].tolist() +
                                  [self.speech_token_size]) for i in range(text_token.size(0))]
        lm_target = pad_sequence(lm_target, batch_first=True, padding_value=IGNORE_ID).to(device)

        # 2. encode text_token
        text_token = self.text_embedding(text_token)
        text_token, text_token_len = self.encode(text_token, text_token_len)

        # 3. embedding projection
        embedding = F.normalize(embedding, dim=1)
        embedding = self.spk_embed_affine_layer(embedding)
        embedding = embedding.unsqueeze(1)

        # 4. sos and task_id
        sos_emb = self.llm_embedding.weight[self.sos].reshape(1, 1, -1)
        task_id_emb = self.llm_embedding.weight[self.task_id].reshape(1, 1, -1)

        # 5. encode speech_token
        speech_token = self.speech_embedding(speech_token)

        # 6. unpad and pad
        lm_input, lm_input_len = self.pad_unpad_sequence(sos_emb, embedding, text_token, text_token_len,
                                                         task_id_emb, speech_token, speech_token_len)

        # 7. run lm forward
        lm_output, lm_output_mask = self.llm(lm_input, lm_input_len.to(device))
        logits = self.llm_decoder(lm_output)
        loss = self.criterion_ce(logits, lm_target)
        acc = th_accuracy(logits.view(-1, self.speech_token_size + 1), lm_target, ignore_label=IGNORE_ID)
        return {'loss': loss, 'acc': acc}

    def sampling_ids(
            self,
            weighted_scores: torch.Tensor,
            decoded_tokens: List,
            sampling: int,
            ignore_eos: bool = True,
    ):
        num_trials, max_trials = 0, 100
        while True:
            top_ids = self.sampling(weighted_scores, decoded_tokens, sampling)
            if (not ignore_eos) or (top_ids < self.speech_token_size):
                break
            num_trials += 1
            if num_trials > max_trials:
                raise RuntimeError('sampling reaches max_trials {} and still get eos when ignore_eos is True, check your input!'.format(max_trials))
        return top_ids

    @torch.inference_mode()
    def inference(
            self,
            text: torch.Tensor,
            text_len: torch.Tensor,
            prompt_text: torch.Tensor,
            prompt_text_len: torch.Tensor,
            prompt_speech_token: torch.Tensor,
            prompt_speech_token_len: torch.Tensor,
            embedding: torch.Tensor,
            sampling: int = 25,
            max_token_text_ratio: float = 20,
            min_token_text_ratio: float = 2,
            uuid: str = '',
    ) -> Generator[torch.Tensor, None, None]:
        device = text.device
        text = torch.concat([prompt_text, text], dim=1)
        text_len += prompt_text_len
        text = self.text_embedding(text)

        # 1. encode text
        text, text_len = self.encode(text, text_len)

        # 2. encode embedding
        if embedding.shape[0] != 0:
            embedding = F.normalize(embedding, dim=1)
            embedding = self.spk_embed_affine_layer(embedding)
            embedding = embedding.unsqueeze(dim=1)
        else:
            embedding = torch.zeros(1, 0, self.llm_input_size, dtype=text.dtype).to(device)

        # 3. concat llm_input
        sos_emb = self.llm_embedding.weight[self.sos].reshape(1, 1, -1)
        task_id_emb = self.llm_embedding.weight[self.task_id].reshape(1, 1, -1)
        if prompt_speech_token_len != 0:
            prompt_speech_token_emb = self.speech_embedding(prompt_speech_token)
        else:
            prompt_speech_token_emb = torch.zeros(1, 0, self.llm_input_size, dtype=text.dtype).to(device)
        lm_input = torch.concat([sos_emb, embedding, text, task_id_emb, prompt_speech_token_emb], dim=1)

        # 4. cal min/max_length
        min_len = int((text_len - prompt_text_len) * min_token_text_ratio)
        max_len = int((text_len - prompt_text_len) * max_token_text_ratio)

        # 5. step by step decode
        out_tokens = []
        offset = 0
        att_cache, cnn_cache = torch.zeros((0, 0, 0, 0), device=lm_input.device), torch.zeros((0, 0, 0, 0), device=lm_input.device)
        for i in range(max_len):
            y_pred, att_cache, cnn_cache = self.llm.forward_chunk(lm_input, offset=offset, required_cache_size=-1,
                                                                  att_cache=att_cache, cnn_cache=cnn_cache,
                                                                  att_mask=torch.tril(torch.ones((1, lm_input.shape[1], lm_input.shape[1]),
                                                                                                 device=lm_input.device)).to(torch.bool))
            logp = self.llm_decoder(y_pred[:, -1]).log_softmax(dim=-1)
            top_ids = self.sampling_ids(logp.squeeze(dim=0), out_tokens, sampling, ignore_eos=True if i < min_len else False)
            if top_ids == self.eos_token:
                break
            # in stream mode, yield token one by one
            yield top_ids
            out_tokens.append(top_ids)
            offset += lm_input.size(1)
            lm_input = self.speech_embedding.weight[top_ids].reshape(1, 1, -1)


class Qwen2Encoder(torch.nn.Module):
    def __init__(self, pretrain_path):
        super().__init__()
        attn_impl = os.getenv('COSYVOICE_ATTN_IMPL', 'eager')
        try:
            self.model = Qwen2ForCausalLM.from_pretrained(pretrain_path, attn_implementation=attn_impl)
            logging.info(f'Qwen2ForCausalLM loaded with attn_implementation={attn_impl}')
        except TypeError:
            # older transformers versions do not accept the attn_implementation argument
            self.model = Qwen2ForCausalLM.from_pretrained(pretrain_path)
            logging.info('Qwen2ForCausalLM loaded without attn_implementation override')

    def forward(self, xs: torch.Tensor, xs_lens: torch.Tensor):
        T = xs.size(1)
        masks = ~make_pad_mask(xs_lens, T)
        outs = self.model(
            inputs_embeds=xs,
            attention_mask=masks,
            output_hidden_states=True,
            return_dict=True,
        )
        return outs.hidden_states[-1], masks.unsqueeze(1)

    def forward_one_step(self, xs, masks, cache=None):
        input_masks = masks[:, -1, :]
        outs = self.model(
            inputs_embeds=xs,
            attention_mask=input_masks,
            output_hidden_states=True,
            return_dict=True,
            use_cache=True,
            past_key_values=cache,
        )
        xs = outs.hidden_states[-1]
        new_cache = outs.past_key_values
        return xs, new_cache


class Qwen2LM(TransformerLM):
    def __init__(
            self,
            llm_input_size: int,
            llm_output_size: int,
            speech_token_size: int,
            llm: torch.nn.Module,
            sampling: Callable,
            length_normalized_loss: bool = True,
            lsm_weight: float = 0.0,
            mix_ratio: List[int] = [5, 15],
    ):
        torch.nn.Module.__init__(self)
        self.llm_input_size = llm_input_size
        self.llm_output_size = llm_output_size
        self.speech_token_size = speech_token_size
        # 1. build speech token language model related modules
        self.sos = 0
        self.task_id = 1
        self.eos_token = speech_token_size
        self.fill_token = speech_token_size + 2

        self.llm_embedding = torch.nn.Embedding(2, llm_input_size)
        self.llm = llm
        self.llm_decoder = nn.Linear(llm_output_size, speech_token_size + 3)
        self.criterion_ce = LabelSmoothingLoss(
            size=speech_token_size + 3,
            padding_idx=IGNORE_ID,
            smoothing=lsm_weight,
            normalize_length=length_normalized_loss,
        )

        # 2. [Optional] build speech token related modules
        self.speech_embedding = torch.nn.Embedding(speech_token_size + 3, llm_input_size)

        # 3. sampling method
        self.sampling = sampling
        self.mix_ratio = mix_ratio

        # 4. vllm related
        self.stop_token_ids = [speech_token_size + i for i in range(3)]
        self.vllm_output_queue = {}
        if online_feature is True:
            self.speech_token_extractor = SpeechTokenExtractor(model_path=os.path.join(onnx_path, 'speech_tokenizer_v2.batch.onnx'))

    def prepare_lm_input_target(self, sos_emb, text_token, text_token_emb, text_token_len, task_id_emb, speech_token, speech_token_emb, speech_token_len,
                                instruct_token=None, instruct_token_emb=None, instruct_token_len=None):
        lm_target, lm_input = [], []
        text_token = unpad_sequence(text_token, text_token_len.cpu(), batch_first=True)
        speech_token = unpad_sequence(speech_token, speech_token_len.cpu(), batch_first=True)
        text_token_emb = unpad_sequence(text_token_emb, text_token_len.cpu(), batch_first=True)
        speech_token_emb = unpad_sequence(speech_token_emb, speech_token_len.cpu(), batch_first=True)
        # NOTE add instruct_token in CosyVoice3
        if instruct_token is not None and instruct_token_emb is not None and instruct_token_len is not None:
            instruct_token = unpad_sequence(instruct_token, instruct_token_len.cpu(), batch_first=True)
            instruct_token_emb = unpad_sequence(instruct_token_emb, instruct_token_len.cpu(), batch_first=True)
        else:
            instruct_token = [torch.empty(0).to(text_token[0])] * len(text_token)
            instruct_token_emb = [torch.empty(0, 896).to(text_token_emb[0])] * len(text_token)
            instruct_token_len = torch.zeros(len(text_token)).to(text_token_len)
        for i in range(len(text_token)):
            # bistream sequence
            if random.random() < 0.5 and speech_token_len[i] / text_token_len[i] > self.mix_ratio[1] / self.mix_ratio[0]:
                this_lm_target, this_lm_input = [IGNORE_ID], [sos_emb.squeeze(dim=0)]
                this_lm_target += [IGNORE_ID] * instruct_token_len[i]
                this_lm_input.append(instruct_token_emb[i])
                for j in range(((text_token_len[i] + 1) / self.mix_ratio[0]).ceil().int().item()):
                    this_text_token = text_token[i][j * self.mix_ratio[0]: (j + 1) * self.mix_ratio[0]].tolist()
                    this_speech_token = speech_token[i][j * self.mix_ratio[1]: (j + 1) * self.mix_ratio[1]].tolist()
                    if len(this_text_token) == self.mix_ratio[0]:
                        assert len(this_speech_token) == self.mix_ratio[1]
                        this_lm_target += [IGNORE_ID] * (self.mix_ratio[0] - 1)
                        this_lm_target += this_speech_token
                        this_lm_target.append(self.fill_token)
                        this_lm_input.append(text_token_emb[i][j * self.mix_ratio[0]: (j + 1) * self.mix_ratio[0]])
                        this_lm_input.append(speech_token_emb[i][j * self.mix_ratio[1]: (j + 1) * self.mix_ratio[1]])
                    else:
                        this_lm_target += [IGNORE_ID] * len(this_text_token)
                        this_lm_target += speech_token[i][j * self.mix_ratio[1]:].tolist()
                        this_lm_target.append(self.eos_token)
                        this_lm_input.append(text_token_emb[i][j * self.mix_ratio[0]:])
                        this_lm_input.append(task_id_emb.squeeze(dim=0))
                        this_lm_input.append(speech_token_emb[i][j * self.mix_ratio[1]:])
                this_lm_target, this_lm_input = torch.tensor(this_lm_target), torch.concat(this_lm_input, dim=0)
            # unistream sequence
            else:
                this_lm_target = torch.tensor([IGNORE_ID] * (1 + instruct_token_len[i] + text_token_len[i]) + speech_token[i].tolist() + [self.eos_token])
                this_lm_input = torch.concat([sos_emb.squeeze(dim=0), instruct_token_emb[i], text_token_emb[i], task_id_emb.squeeze(dim=0), speech_token_emb[i]], dim=0)
            lm_target.append(this_lm_target)
            lm_input.append(this_lm_input)
        lm_input_len = torch.tensor([i.size(0) for i in lm_input], dtype=torch.int32)
        lm_input = pad_sequence(lm_input, batch_first=True, padding_value=IGNORE_ID)
        lm_target = pad_sequence(lm_target, batch_first=True, padding_value=IGNORE_ID)
        return lm_target, lm_input, lm_input_len

    def forward(
            self,
            batch: dict,
            device: torch.device,
    ) -> Dict[str, Optional[torch.Tensor]]:
        """
        Args:
            text: (B, L, D)
            text_lengths: (B,)
            audio: (B, T, N) or (B, T)
            audio_lengths: (B,)
        """
        text_token = batch['text_token'].to(device)
        text_token_len = batch['text_token_len'].to(device)
        if 'speech_token' not in batch:
            speech_token, speech_token_len = self.speech_token_extractor.inference(batch['whisper_feat'], batch['whisper_feat_len'], device)
        else:
            speech_token = batch['speech_token'].to(device)
            speech_token_len = batch['speech_token_len'].to(device)

        # 1. encode text_token
        text_token_emb = self.llm.model.model.embed_tokens(text_token)

        # 2. sos and task_id
        sos_emb = self.llm_embedding.weight[self.sos].reshape(1, 1, -1)
        task_id_emb = self.llm_embedding.weight[self.task_id].reshape(1, 1, -1)

        # 3. encode speech_token
        speech_token_emb = self.speech_embedding(speech_token)

        # 4. prepare llm_input/target
        lm_target, lm_input, lm_input_len = self.prepare_lm_input_target(sos_emb, text_token, text_token_emb, text_token_len, task_id_emb,
                                                                         speech_token, speech_token_emb, speech_token_len)
        lm_target = lm_target.to(device)

        # 5. run lm forward
        lm_output, lm_output_mask = self.llm(lm_input, lm_input_len.to(device))
        logits = self.llm_decoder(lm_output)
        loss = self.criterion_ce(logits, lm_target)
        acc = th_accuracy(logits.view(-1, self.speech_token_size + 3), lm_target, ignore_label=IGNORE_ID)
        return {'loss': loss, 'acc': acc}

    def forward_dpo(
            self,
            batch: dict,
            device: torch.device,
    ) -> Dict[str, Optional[torch.Tensor]]:
        text_token = batch['text_token'].to(device)
        text_token_len = batch['text_token_len'].to(device)
        speech_token = batch['speech_token'].to(device)
        speech_token_len = batch['speech_token_len'].to(device)
        reject_speech_token = batch['reject_speech_token'].to(device)
        reject_speech_token_len = batch['reject_speech_token_len'].to(device)

        # 1. encode text_token
        text_token_emb = self.llm.model.model.embed_tokens(text_token)

        # 2. sos and task_id
        sos_emb = self.llm_embedding.weight[self.sos].reshape(1, 1, -1)
        task_id_emb = self.llm_embedding.weight[self.task_id].reshape(1, 1, -1)

        # 3. encode chosen/rejected speech_token in one combined batch
        speech_token = unpad_sequence(speech_token, speech_token_len.cpu(), batch_first=True)
        reject_speech_token = unpad_sequence(reject_speech_token, reject_speech_token_len.cpu(), batch_first=True)
        speech_token_combined = speech_token + reject_speech_token
        speech_token_combined = pad_sequence(speech_token_combined, batch_first=True, padding_value=0)
        speech_token_combined_len = torch.concat([speech_token_len, reject_speech_token_len], dim=0)
        speech_token_combined_emb = self.speech_embedding(speech_token_combined)

        # 4. prepare llm_input/target
        lm_target, lm_input, lm_input_len = self.prepare_lm_input_target(sos_emb, text_token.repeat(2, 1), text_token_emb.repeat(2, 1, 1), text_token_len.repeat(2),
                                                                         task_id_emb, speech_token_combined, speech_token_combined_emb, speech_token_combined_len)
        lm_target = lm_target.to(device)

        # 5. run lm forward
        lm_output, lm_output_mask = self.llm(lm_input, lm_input_len.to(device))
        logits = self.llm_decoder(lm_output)
        chosen_logits = logits[:text_token.shape[0]]
        rejected_logits = logits[text_token.shape[0]:]
        chosen_lm_target = lm_target[:text_token.shape[0]]
        rejected_lm_target = lm_target[text_token.shape[0]:]
        loss = self.criterion_ce(chosen_logits, chosen_lm_target)
        acc = th_accuracy(chosen_logits.view(-1, self.speech_token_size + 3), chosen_lm_target, ignore_label=IGNORE_ID)

        # 6. calculate dpo logps
        chosen_lm_mask = chosen_lm_target == IGNORE_ID
        rejected_lm_mask = rejected_lm_target == IGNORE_ID
        chosen_logps = torch.gather(chosen_logits.log_softmax(dim=-1), dim=2, index=chosen_lm_target.masked_fill(chosen_lm_mask, 0).unsqueeze(dim=-1)).squeeze(dim=-1)
        rejected_logps = torch.gather(rejected_logits.log_softmax(dim=-1), dim=2, index=rejected_lm_target.masked_fill(rejected_lm_mask, 0).unsqueeze(dim=-1)).squeeze(dim=-1)
        chosen_logps = (chosen_logps * chosen_lm_mask).sum(dim=-1) / chosen_lm_mask.sum(dim=-1)
        rejected_logps = (rejected_logps * rejected_lm_mask).sum(dim=-1) / rejected_lm_mask.sum(dim=-1)
        return {'loss': loss, 'acc': acc, 'chosen_logps': chosen_logps, 'rejected_logps': rejected_logps}

    @torch.inference_mode()
    def inference(
            self,
            text: torch.Tensor,
            text_len: torch.Tensor,
            prompt_text: torch.Tensor,
            prompt_text_len: torch.Tensor,
            prompt_speech_token: torch.Tensor,
            prompt_speech_token_len: torch.Tensor,
            embedding: torch.Tensor,
            sampling: int = 25,
            max_token_text_ratio: float = 20,
            min_token_text_ratio: float = 2,
            uuid: str = '',
    ) -> Generator[torch.Tensor, None, None]:
        device = text.device
        text = torch.concat([prompt_text, text], dim=1)
        text_len += prompt_text_len
        text = self.llm.model.model.embed_tokens(text)

        # 1. concat llm_input
        sos_emb = self.llm_embedding.weight[self.sos].reshape(1, 1, -1)
        task_id_emb = self.llm_embedding.weight[self.task_id].reshape(1, 1, -1)
        if prompt_speech_token_len != 0:
            prompt_speech_token_emb = self.speech_embedding(prompt_speech_token)
        else:
            prompt_speech_token_emb = torch.zeros(1, 0, self.llm_input_size, dtype=text.dtype).to(device)
        lm_input = torch.concat([sos_emb, text, task_id_emb, prompt_speech_token_emb], dim=1)

        # 2. cal min/max_length
        min_len = int((text_len - prompt_text_len) * min_token_text_ratio)
        max_len = int((text_len - prompt_text_len) * max_token_text_ratio)

        # 3. step by step decode
        for token in self.inference_wrapper(lm_input, sampling, min_len, max_len, uuid):
            yield token

    @torch.inference_mode()
    def inference_wrapper(self, lm_input, sampling, min_len, max_len, uuid):
        if hasattr(self, 'vllm'):
            from vllm import SamplingParams, RequestOutput
            sampling_params = SamplingParams(top_k=sampling,
                                             stop_token_ids=self.stop_token_ids,
                                             min_tokens=min_len,
                                             max_tokens=max_len)
            with self.lock:
                self.vllm.add_request(uuid, {"prompt_embeds": lm_input.squeeze(0).to(torch.bfloat16).to(lm_input.device)}, sampling_params)
                self.vllm_output_queue[uuid] = queue.Queue()
            out_tokens = []
            while True:
                with self.lock:
                    if self.vllm_output_queue[uuid].empty() is True:
                        request_outputs: List[RequestOutput] = self.vllm.step()
                        for request_output in request_outputs:
                            top_ids = list(request_output.outputs[0].token_ids)[-1]
                            self.vllm_output_queue[request_output.request_id].put(top_ids)
                if self.vllm_output_queue[uuid].empty() is False:
                    top_ids = self.vllm_output_queue[uuid].get()
                    if top_ids in self.stop_token_ids:
                        break
                    # in stream mode, yield token one by one
                    yield top_ids
                    out_tokens.append(top_ids)
                    if len(out_tokens) == max_len:
                        break
                time.sleep(0.001)
            with self.lock:
                self.vllm_output_queue.pop(uuid)
        else:
            out_tokens = []
            cache = None
            for i in range(max_len):
                y_pred, cache = self.llm.forward_one_step(lm_input,
                                                          masks=torch.tril(torch.ones((1, lm_input.shape[1], lm_input.shape[1]), device=lm_input.device)).to(torch.bool),
                                                          cache=cache)
                logp = self.llm_decoder(y_pred[:, -1]).log_softmax(dim=-1)
                top_ids = self.sampling_ids(logp.squeeze(dim=0), out_tokens, sampling, ignore_eos=True if i < min_len else False)
                if top_ids in self.stop_token_ids:
                    break
                # in stream mode, yield token one by one
                yield top_ids
                out_tokens.append(top_ids)
                lm_input = self.speech_embedding.weight[top_ids].reshape(1, 1, -1)

    @torch.inference_mode()
    def inference_bistream(
            self,
            text: Generator,
            prompt_text: torch.Tensor,
            prompt_text_len: torch.Tensor,
            prompt_speech_token: torch.Tensor,
            prompt_speech_token_len: torch.Tensor,
            embedding: torch.Tensor,
            sampling: int = 25,
            max_token_text_ratio: float = 20,
            min_token_text_ratio: float = 2,
    ) -> Generator[torch.Tensor, None, None]:

        device = prompt_text.device
        # 1. prepare input
        sos_emb = self.llm_embedding.weight[self.sos].reshape(1, 1, -1)
        task_id_emb = self.llm_embedding.weight[self.task_id].reshape(1, 1, -1)
        if prompt_speech_token_len != 0:
            prompt_speech_token_emb = self.speech_embedding(prompt_speech_token)
        else:
            prompt_speech_token_emb = torch.zeros(1, 0, self.llm_input_size, dtype=prompt_text.dtype).to(device)
        lm_input = torch.concat([sos_emb], dim=1)

        # 2. iterate text
        out_tokens = []
        cache = None
        # NOTE init prompt_text as text_cache as it is basically impossible prompt_speech_token/prompt_text < 15/5
        text_cache = self.llm.model.model.embed_tokens(prompt_text)
        next_fill_index = (int(prompt_speech_token.shape[1] / self.mix_ratio[1]) + 1) * self.mix_ratio[1] - prompt_speech_token.shape[1]
        for this_text in text:
            text_cache = torch.concat([text_cache, self.llm.model.model.embed_tokens(this_text)], dim=1)
            # prompt_speech_token_emb not empty, try append to lm_input
            while prompt_speech_token_emb.size(1) != 0:
                if text_cache.size(1) >= self.mix_ratio[0]:
                    lm_input_text, lm_input_speech = text_cache[:, :self.mix_ratio[0]], prompt_speech_token_emb[:, :self.mix_ratio[1]]
                    logging.info('append {} text token {} speech token'.format(lm_input_text.size(1), lm_input_speech.size(1)))
                    lm_input = torch.concat([lm_input, lm_input_text, lm_input_speech], dim=1)
                    text_cache, prompt_speech_token_emb = text_cache[:, self.mix_ratio[0]:], prompt_speech_token_emb[:, self.mix_ratio[1]:]
                else:
                    logging.info('not enough text token to decode, wait for more')
                    break
            # no prompt_speech_token_emb remain, can decode some speech token
            if prompt_speech_token_emb.size(1) == 0:
                if (len(out_tokens) != 0 and out_tokens[-1] == self.fill_token) or (len(out_tokens) == 0 and lm_input.size(1) == 1):
                    logging.info('get fill token, need to append more text token')
                    if text_cache.size(1) >= self.mix_ratio[0]:
                        lm_input_text = text_cache[:, :self.mix_ratio[0]]
                        logging.info('append {} text token'.format(lm_input_text.size(1)))
                        if len(out_tokens) != 0 and out_tokens[-1] == self.fill_token:
                            lm_input = lm_input_text
                        else:
                            lm_input = torch.concat([lm_input, lm_input_text], dim=1)
                        text_cache = text_cache[:, self.mix_ratio[0]:]
                    else:
                        logging.info('not enough text token to decode, wait for more')
                        continue
                while True:
                    seq_len = lm_input.shape[1] if cache is None else lm_input.shape[1] + cache[0][0].size(2)
                    y_pred, cache = self.llm.forward_one_step(lm_input,
                                                              masks=torch.tril(torch.ones((1, seq_len, seq_len), device=lm_input.device)).to(torch.bool),
                                                              cache=cache)
                    logp = self.llm_decoder(y_pred[:, -1]).log_softmax(dim=-1)
                    if next_fill_index != -1 and len(out_tokens) == next_fill_index:
                        top_ids = self.fill_token
                        next_fill_index += (self.mix_ratio[1] + 1)
                    else:
                        top_ids = self.sampling_ids(logp.squeeze(dim=0), out_tokens, sampling, ignore_eos=True)
                    if top_ids == self.fill_token:
                        next_fill_index = len(out_tokens) + self.mix_ratio[1] + 1
                        logging.info('fill_token index {} next fill_token index {}'.format(len(out_tokens), next_fill_index))
                    out_tokens.append(top_ids)
                    if top_ids >= self.speech_token_size:
                        if top_ids == self.fill_token:
                            break
                        else:
                            raise ValueError('should not get token {}'.format(top_ids))
                    yield top_ids
                    lm_input = self.speech_embedding.weight[top_ids].reshape(1, 1, -1)

        # 3. final decode
        lm_input = torch.concat([lm_input, text_cache, task_id_emb], dim=1)
        logging.info('no more text token, decode until met eos')
        while True:
            seq_len = lm_input.shape[1] if cache is None else lm_input.shape[1] + cache[0][0].size(2)
            y_pred, cache = self.llm.forward_one_step(lm_input,
                                                      masks=torch.tril(torch.ones((1, seq_len, seq_len), device=lm_input.device)).to(torch.bool),
                                                      cache=cache)
            logp = self.llm_decoder(y_pred[:, -1]).log_softmax(dim=-1)
            top_ids = self.sampling_ids(logp.squeeze(dim=0), out_tokens, sampling, ignore_eos=False)
            out_tokens.append(top_ids)
            if top_ids >= self.speech_token_size:
                if top_ids == self.eos_token:
                    break
                else:
                    raise ValueError('should not get token {}'.format(top_ids))
            # in stream mode, yield token one by one
            yield top_ids
            lm_input = self.speech_embedding.weight[top_ids].reshape(1, 1, -1)


class CosyVoice3LM(Qwen2LM):
|
||||
def __init__(
|
||||
self,
|
||||
llm_input_size: int,
|
||||
llm_output_size: int,
|
||||
speech_token_size: int,
|
||||
llm: torch.nn.Module,
|
||||
sampling: Callable,
|
||||
length_normalized_loss: bool = True,
|
||||
lsm_weight: float = 0.0,
|
||||
mix_ratio: List[int] = [5, 15],
|
||||
):
|
||||
torch.nn.Module.__init__(self)
|
||||
self.llm_input_size = llm_input_size
|
||||
self.llm_output_size = llm_output_size
|
||||
self.speech_token_size = speech_token_size
|
||||
# 2. build speech token language model related modules
|
||||
self.sos = speech_token_size + 0
|
||||
self.eos_token = speech_token_size + 1
|
||||
self.task_id = speech_token_size + 2
|
||||
self.fill_token = speech_token_size + 3
|
||||
|
||||
self.llm = llm
|
||||
self.llm_decoder = nn.Linear(llm_output_size, speech_token_size + 200, bias=False)
|
||||
self.criterion_ce = LabelSmoothingLoss(
|
||||
size=speech_token_size + 200,
|
||||
padding_idx=IGNORE_ID,
|
||||
smoothing=lsm_weight,
|
||||
normalize_length=length_normalized_loss,
|
||||
)
|
||||
|
||||
# 3. [Optional] build speech token related modules
|
||||
self.speech_embedding = torch.nn.Embedding(speech_token_size + 200, llm_input_size)
|
||||
|
||||
# 4. sampling method
|
||||
self.sampling = sampling
|
||||
self.mix_ratio = mix_ratio
|
||||
|
||||
# 5. vllm related
|
||||
self.stop_token_ids = [speech_token_size + i for i in range(200)]
|
||||
self.vllm_output_queue = {}
|
||||
if online_feature is True:
|
||||
self.speech_token_extractor = SpeechTokenExtractor(model_path=os.path.join(onnx_path, 'speech_tokenizer_v3.batch.onnx'))
|
||||
|
||||
def forward(
|
||||
self,
|
||||
batch: dict,
|
||||
device: torch.device,
|
||||
) -> Dict[str, Optional[torch.Tensor]]:
|
||||
"""
|
||||
Args:
|
||||
text: (B, L, D)
|
||||
text_lengths: (B,)
|
||||
audio: (B, T, N) or (B, T)
|
||||
audio_lengths: (B,)
|
||||
"""
|
||||
text_token = batch['text_token'].to(device)
|
||||
        text_token_len = batch['text_token_len'].to(device)
        if 'speech_token' not in batch:
            speech_token, speech_token_len = self.speech_token_extractor.inference(batch['whisper_feat'], batch['whisper_feat_len'], device)
        else:
            speech_token = batch['speech_token'].to(device)
            speech_token_len = batch['speech_token_len'].to(device)

        # NOTE should append instruct_token to sequence, not implemented yet
        instruct_token = batch['instruct_token'].to(device)
        instruct_token_len = batch['instruct_token_len'].to(device)

        # 1. encode text_token
        text_token_emb = self.llm.model.model.embed_tokens(text_token)
        instruct_token_emb = self.llm.model.model.embed_tokens(instruct_token)

        # 2. sos and task_id
        sos_emb = self.speech_embedding.weight[self.sos].reshape(1, 1, -1)
        task_id_emb = self.speech_embedding.weight[self.task_id].reshape(1, 1, -1)

        # 3. encode speech_token
        speech_token_emb = self.speech_embedding(speech_token)

        # 4. prepare llm_input/target
        lm_target, lm_input, lm_input_len = self.prepare_lm_input_target(sos_emb, text_token, text_token_emb, text_token_len, task_id_emb,
                                                                         speech_token, speech_token_emb, speech_token_len,
                                                                         instruct_token, instruct_token_emb, instruct_token_len)
        lm_target = lm_target.to(device)

        # 5. run lm forward
        lm_output, lm_output_mask = self.llm(lm_input, lm_input_len.to(device))
        logits = self.llm_decoder(lm_output)
        loss = self.criterion_ce(logits, lm_target.to(device))
        acc = th_accuracy(logits.view(-1, self.speech_token_size + 200), lm_target, ignore_label=IGNORE_ID)
        return {'loss': loss, 'acc': acc}

    @torch.inference_mode()
    def inference(
            self,
            text: torch.Tensor,
            text_len: torch.Tensor,
            prompt_text: torch.Tensor,
            prompt_text_len: torch.Tensor,
            prompt_speech_token: torch.Tensor,
            prompt_speech_token_len: torch.Tensor,
            embedding: torch.Tensor,
            sampling: int = 25,
            max_token_text_ratio: float = 20,
            min_token_text_ratio: float = 2,
            uuid: str = '',
    ) -> Generator[torch.Tensor, None, None]:
        device = text.device
        # 1. concat prompt_text/text and embed
        text = torch.concat([prompt_text, text], dim=1)
        text_len += prompt_text_len
        text = self.llm.model.model.embed_tokens(text)

        # 2. sos, task_id and prompt speech token embeddings
        sos_emb = self.speech_embedding.weight[self.sos].reshape(1, 1, -1)
        task_id_emb = self.speech_embedding.weight[self.task_id].reshape(1, 1, -1)
        if prompt_speech_token_len != 0:
            prompt_speech_token_emb = self.speech_embedding(prompt_speech_token)
        else:
            prompt_speech_token_emb = torch.zeros(1, 0, self.llm_input_size, dtype=text.dtype).to(device)

        # 3. concat llm_input
        lm_input = torch.concat([sos_emb, text, task_id_emb, prompt_speech_token_emb], dim=1)

        # 4. cal min/max_length
        min_len = int((text_len - prompt_text_len) * min_token_text_ratio)
        max_len = int((text_len - prompt_text_len) * max_token_text_ratio)

        # 5. step by step decode
        for token in self.inference_wrapper(lm_input, sampling, min_len, max_len, uuid):
            yield token
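The min/max decode-length arithmetic in step 4 of `inference()` can be sanity-checked in isolation. This is a plain-int sketch with a hypothetical helper name (`decode_length_bounds`); the real code operates on tensors:

```python
def decode_length_bounds(text_len: int, prompt_text_len: int,
                         min_token_text_ratio: float = 2,
                         max_token_text_ratio: float = 20):
    # Both bounds are proportional to the non-prompt portion of the text.
    n = text_len - prompt_text_len
    return int(n * min_token_text_ratio), int(n * max_token_text_ratio)

print(decode_length_bounds(30, 10))  # (40, 400)
```

With the default ratios, a text that is entirely prompt yields `(0, 0)`, i.e. no generation budget.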
File diff suppressed because it is too large

327 models/CosyVoice/cosyvoice/tokenizer/tokenizer.py Normal file
@@ -0,0 +1,327 @@
import base64
import os
from functools import lru_cache
from typing import Optional
import torch
from transformers import AutoTokenizer
from whisper.tokenizer import Tokenizer

import tiktoken

LANGUAGES = {
    "en": "english",
    "zh": "chinese",
    "de": "german",
    "es": "spanish",
    "ru": "russian",
    "ko": "korean",
    "fr": "french",
    "ja": "japanese",
    "pt": "portuguese",
    "tr": "turkish",
    "pl": "polish",
    "ca": "catalan",
    "nl": "dutch",
    "ar": "arabic",
    "sv": "swedish",
    "it": "italian",
    "id": "indonesian",
    "hi": "hindi",
    "fi": "finnish",
    "vi": "vietnamese",
    "he": "hebrew",
    "uk": "ukrainian",
    "el": "greek",
    "ms": "malay",
    "cs": "czech",
    "ro": "romanian",
    "da": "danish",
    "hu": "hungarian",
    "ta": "tamil",
    "no": "norwegian",
    "th": "thai",
    "ur": "urdu",
    "hr": "croatian",
    "bg": "bulgarian",
    "lt": "lithuanian",
    "la": "latin",
    "mi": "maori",
    "ml": "malayalam",
    "cy": "welsh",
    "sk": "slovak",
    "te": "telugu",
    "fa": "persian",
    "lv": "latvian",
    "bn": "bengali",
    "sr": "serbian",
    "az": "azerbaijani",
    "sl": "slovenian",
    "kn": "kannada",
    "et": "estonian",
    "mk": "macedonian",
    "br": "breton",
    "eu": "basque",
    "is": "icelandic",
    "hy": "armenian",
    "ne": "nepali",
    "mn": "mongolian",
    "bs": "bosnian",
    "kk": "kazakh",
    "sq": "albanian",
    "sw": "swahili",
    "gl": "galician",
    "mr": "marathi",
    "pa": "punjabi",
    "si": "sinhala",
    "km": "khmer",
    "sn": "shona",
    "yo": "yoruba",
    "so": "somali",
    "af": "afrikaans",
    "oc": "occitan",
    "ka": "georgian",
    "be": "belarusian",
    "tg": "tajik",
    "sd": "sindhi",
    "gu": "gujarati",
    "am": "amharic",
    "yi": "yiddish",
    "lo": "lao",
    "uz": "uzbek",
    "fo": "faroese",
    "ht": "haitian creole",
    "ps": "pashto",
    "tk": "turkmen",
    "nn": "nynorsk",
    "mt": "maltese",
    "sa": "sanskrit",
    "lb": "luxembourgish",
    "my": "myanmar",
    "bo": "tibetan",
    "tl": "tagalog",
    "mg": "malagasy",
    "as": "assamese",
    "tt": "tatar",
    "haw": "hawaiian",
    "ln": "lingala",
    "ha": "hausa",
    "ba": "bashkir",
    "jw": "javanese",
    "su": "sundanese",
    "yue": "cantonese",
    "minnan": "minnan",
    "wuyu": "wuyu",
    "dialect": "dialect",
    "zh/en": "zh/en",
    "en/zh": "en/zh",
}

# language code lookup by name, with a few language aliases
TO_LANGUAGE_CODE = {
    **{language: code for code, language in LANGUAGES.items()},
    "burmese": "my",
    "valencian": "ca",
    "flemish": "nl",
    "haitian": "ht",
    "letzeburgesch": "lb",
    "pushto": "ps",
    "panjabi": "pa",
    "moldavian": "ro",
    "moldovan": "ro",
    "sinhalese": "si",
    "castilian": "es",
    "mandarin": "zh",
}

AUDIO_EVENT = {
    "ASR": "ASR",
    "AED": "AED",
    "SER": "SER",
    "Speech": "Speech",
    "/Speech": "/Speech",
    "BGM": "BGM",
    "/BGM": "/BGM",
    "Laughter": "Laughter",
    "/Laughter": "/Laughter",
    "Applause": "Applause",
    "/Applause": "/Applause",
}

EMOTION = {
    "HAPPY": "HAPPY",
    "SAD": "SAD",
    "ANGRY": "ANGRY",
    "NEUTRAL": "NEUTRAL",
}

TTS_Vocal_Token = {
    "TTS/B": "TTS/B",
    "TTS/O": "TTS/O",
    "TTS/Q": "TTS/Q",
    "TTS/A": "TTS/A",
    "TTS/CO": "TTS/CO",
    "TTS/CL": "TTS/CL",
    "TTS/H": "TTS/H",
    **{f"TTS/SP{i:02d}": f"TTS/SP{i:02d}" for i in range(1, 14)}
}


@lru_cache(maxsize=None)
def get_encoding(name: str = "gpt2", num_languages: int = 99):
    vocab_path = os.path.join(os.path.dirname(__file__), "assets", f"{name}.tiktoken")
    ranks = {
        base64.b64decode(token): int(rank)
        for token, rank in (line.split() for line in open(vocab_path) if line)
    }
    n_vocab = len(ranks)
    special_tokens = {}

    specials = [
        "<|endoftext|>",
        "<|startoftranscript|>",
        *[f"<|{lang}|>" for lang in list(LANGUAGES.keys())[:num_languages]],
        *[f"<|{audio_event}|>" for audio_event in list(AUDIO_EVENT.keys())],
        *[f"<|{emotion}|>" for emotion in list(EMOTION.keys())],
        "<|translate|>",
        "<|transcribe|>",
        "<|startoflm|>",
        "<|startofprev|>",
        "<|nospeech|>",
        "<|notimestamps|>",
        *[f"<|SPECIAL_TOKEN_{i}|>" for i in range(1, 31)],  # register special tokens for ASR
        *[f"<|{tts}|>" for tts in list(TTS_Vocal_Token.keys())],  # register special tokens for TTS
        *[f"<|{i * 0.02:.2f}|>" for i in range(1501)],
    ]

    for token in specials:
        special_tokens[token] = n_vocab
        n_vocab += 1

    return tiktoken.Encoding(
        name=os.path.basename(vocab_path),
        explicit_n_vocab=n_vocab,
        pat_str=r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+""",
        mergeable_ranks=ranks,
        special_tokens=special_tokens,
    )


@lru_cache(maxsize=None)
def get_tokenizer(
    multilingual: bool,
    *,
    num_languages: int = 99,
    language: Optional[str] = None,
    task: Optional[str] = None,  # Literal["transcribe", "translate", None]
) -> Tokenizer:
    if language is not None:
        language = language.lower()
        if language not in LANGUAGES:
            if language in TO_LANGUAGE_CODE:
                language = TO_LANGUAGE_CODE[language]
            else:
                raise ValueError(f"Unsupported language: {language}")

    if multilingual:
        encoding_name = "multilingual_zh_ja_yue_char_del"
        language = language or "en"
        task = task or "transcribe"
    else:
        encoding_name = "gpt2"
        language = None
        task = None

    encoding = get_encoding(name=encoding_name, num_languages=num_languages)

    return Tokenizer(
        encoding=encoding, num_languages=num_languages, language=language, task=task
    )


class CosyVoice2Tokenizer():
    def __init__(self, token_path, skip_special_tokens=True):
        super().__init__()
        # NOTE: non-chat model, all these special tokens keep randomly initialized.
        special_tokens = {
            'eos_token': '<|endoftext|>',
            'pad_token': '<|endoftext|>',
            'additional_special_tokens': [
                '<|im_start|>', '<|im_end|>', '<|endofprompt|>',
                '[breath]', '<strong>', '</strong>', '[noise]',
                '[laughter]', '[cough]', '[clucking]', '[accent]',
                '[quick_breath]',
                "<laughter>", "</laughter>",
                "[hissing]", "[sigh]", "[vocalized-noise]",
                "[lipsmack]", "[mn]"
            ]
        }
        self.special_tokens = special_tokens
        self.tokenizer = AutoTokenizer.from_pretrained(token_path)
        self.tokenizer.add_special_tokens(special_tokens)
        self.skip_special_tokens = skip_special_tokens

    def encode(self, text, **kwargs):
        tokens = self.tokenizer([text], return_tensors="pt")
        tokens = tokens["input_ids"][0].cpu().tolist()
        return tokens

    def decode(self, tokens):
        tokens = torch.tensor(tokens, dtype=torch.int64)
        text = self.tokenizer.batch_decode([tokens], skip_special_tokens=self.skip_special_tokens)[0]
        return text


class CosyVoice3Tokenizer(CosyVoice2Tokenizer):
    def __init__(self, token_path, skip_special_tokens=True):
        # NOTE: non-chat model, all these special tokens keep randomly initialized.
        special_tokens = {
            'eos_token': '<|endoftext|>',
            'pad_token': '<|endoftext|>',
            'additional_special_tokens': [
                '<|im_start|>', '<|im_end|>', '<|endofprompt|>',
                '[breath]', '<strong>', '</strong>', '[noise]',
                '[laughter]', '[cough]', '[clucking]', '[accent]',
                '[quick_breath]',
                "<laughter>", "</laughter>",
                "[hissing]", "[sigh]", "[vocalized-noise]",
                "[lipsmack]", "[mn]", "<|endofsystem|>",
                "[AA]", "[AA0]", "[AA1]", "[AA2]", "[AE]", "[AE0]", "[AE1]", "[AE2]", "[AH]", "[AH0]", "[AH1]", "[AH2]",
                "[AO]", "[AO0]", "[AO1]", "[AO2]", "[AW]", "[AW0]", "[AW1]", "[AW2]", "[AY]", "[AY0]", "[AY1]", "[AY2]",
                "[B]", "[CH]", "[D]", "[DH]", "[EH]", "[EH0]", "[EH1]", "[EH2]", "[ER]", "[ER0]", "[ER1]", "[ER2]", "[EY]",
                "[EY0]", "[EY1]", "[EY2]", "[F]", "[G]", "[HH]", "[IH]", "[IH0]", "[IH1]", "[IH2]", "[IY]", "[IY0]", "[IY1]",
                "[IY2]", "[JH]", "[K]", "[L]", "[M]", "[N]", "[NG]", "[OW]", "[OW0]", "[OW1]", "[OW2]", "[OY]", "[OY0]",
                "[OY1]", "[OY2]", "[P]", "[R]", "[S]", "[SH]", "[T]", "[TH]", "[UH]", "[UH0]", "[UH1]", "[UH2]", "[UW]",
                "[UW0]", "[UW1]", "[UW2]", "[V]", "[W]", "[Y]", "[Z]", "[ZH]",
                "[a]", "[ai]", "[an]", "[ang]", "[ao]", "[b]", "[c]", "[ch]", "[d]", "[e]", "[ei]", "[en]", "[eng]", "[f]",
                "[g]", "[h]", "[i]", "[ian]", "[in]", "[ing]", "[iu]", "[ià]", "[iàn]", "[iàng]", "[iào]", "[iá]", "[ián]",
                "[iáng]", "[iáo]", "[iè]", "[ié]", "[iòng]", "[ióng]", "[iù]", "[iú]", "[iā]", "[iān]", "[iāng]", "[iāo]",
                "[iē]", "[iě]", "[iōng]", "[iū]", "[iǎ]", "[iǎn]", "[iǎng]", "[iǎo]", "[iǒng]", "[iǔ]", "[j]", "[k]", "[l]",
                "[m]", "[n]", "[o]", "[ong]", "[ou]", "[p]", "[q]", "[r]", "[s]", "[sh]", "[t]", "[u]", "[uang]", "[ue]",
                "[un]", "[uo]", "[uà]", "[uài]", "[uàn]", "[uàng]", "[uá]", "[uái]", "[uán]", "[uáng]", "[uè]", "[ué]", "[uì]",
                "[uí]", "[uò]", "[uó]", "[uā]", "[uāi]", "[uān]", "[uāng]", "[uē]", "[uě]", "[uī]", "[uō]", "[uǎ]", "[uǎi]",
                "[uǎn]", "[uǎng]", "[uǐ]", "[uǒ]", "[vè]", "[w]", "[x]", "[y]", "[z]", "[zh]", "[à]", "[ài]", "[àn]", "[àng]",
                "[ào]", "[á]", "[ái]", "[án]", "[áng]", "[áo]", "[è]", "[èi]", "[èn]", "[èng]", "[èr]", "[é]", "[éi]", "[én]",
                "[éng]", "[ér]", "[ì]", "[ìn]", "[ìng]", "[í]", "[ín]", "[íng]", "[ò]", "[òng]", "[òu]", "[ó]", "[óng]", "[óu]",
                "[ù]", "[ùn]", "[ú]", "[ún]", "[ā]", "[āi]", "[ān]", "[āng]", "[āo]", "[ē]", "[ēi]", "[ēn]", "[ēng]", "[ě]",
                "[ěi]", "[ěn]", "[ěng]", "[ěr]", "[ī]", "[īn]", "[īng]", "[ō]", "[ōng]", "[ōu]", "[ū]", "[ūn]", "[ǎ]", "[ǎi]",
                "[ǎn]", "[ǎng]", "[ǎo]", "[ǐ]", "[ǐn]", "[ǐng]", "[ǒ]", "[ǒng]", "[ǒu]", "[ǔ]", "[ǔn]", "[ǘ]", "[ǚ]", "[ǜ]"
            ]
        }
        self.special_tokens = special_tokens
        self.tokenizer = AutoTokenizer.from_pretrained(token_path)
        self.tokenizer.add_special_tokens(special_tokens)
        self.skip_special_tokens = skip_special_tokens


@lru_cache(maxsize=None)
def get_qwen_tokenizer(
    token_path: str,
    skip_special_tokens: bool,
    version: str = 'cosyvoice2'
):
    if version == 'cosyvoice2':
        return CosyVoice2Tokenizer(token_path=token_path, skip_special_tokens=skip_special_tokens)
    elif version == 'cosyvoice3':
        return CosyVoice3Tokenizer(token_path=token_path, skip_special_tokens=skip_special_tokens)
    else:
        raise ValueError
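The id-assignment loop inside `get_encoding` appends each special token directly after the base vocabulary. A stand-alone toy sketch (hypothetical `assign_special_ids` helper and a toy vocab size in place of the real `.tiktoken` asset):

```python
def assign_special_ids(n_vocab, specials):
    # Each special token takes the next free id after the base vocabulary,
    # exactly as the loop in get_encoding() does.
    special_tokens = {}
    for token in specials:
        special_tokens[token] = n_vocab
        n_vocab += 1
    return special_tokens, n_vocab

specials = ["<|endoftext|>", "<|startoftranscript|>", "<|en|>", "<|zh|>"]
ids, final_vocab = assign_special_ids(100, specials)
print(ids["<|endoftext|>"], ids["<|zh|>"], final_vocab)  # 100 103 104
```

`final_vocab` is what the real code passes as `explicit_n_vocab`, so tiktoken can verify that no id is left unused.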
0 models/CosyVoice/cosyvoice/transformer/__init__.py Normal file

84 models/CosyVoice/cosyvoice/transformer/activation.py Normal file
@@ -0,0 +1,84 @@
# Copyright (c) 2020 Johns Hopkins University (Shinji Watanabe)
#               2020 Northwestern Polytechnical University (Pengcheng Guo)
#               2020 Mobvoi Inc (Binbin Zhang)
#               2024 Alibaba Inc (Xiang Lyu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Swish() activation function for Conformer."""

import torch
from torch import nn, sin, pow
from torch.nn import Parameter


class Swish(torch.nn.Module):
    """Construct a Swish object."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """Return Swish activation function."""
        return x * torch.sigmoid(x)


# Implementation adapted from https://github.com/EdwardDixon/snake under the MIT license.
# LICENSE is in incl_licenses directory.
class Snake(nn.Module):
    '''
    Implementation of a sine-based periodic activation function
    Shape:
        - Input: (B, C, T)
        - Output: (B, C, T), same shape as the input
    Parameters:
        - alpha - trainable parameter
    References:
        - This activation function is from this paper by Liu Ziyin, Tilman Hartwig, Masahito Ueda:
        https://arxiv.org/abs/2006.08195
    Examples:
        >>> a1 = Snake(256)
        >>> x = torch.randn(256)
        >>> x = a1(x)
    '''
    def __init__(self, in_features, alpha=1.0, alpha_trainable=True, alpha_logscale=False):
        '''
        Initialization.
        INPUT:
            - in_features: shape of the input
            - alpha: trainable parameter
            alpha is initialized to 1 by default; higher values mean higher frequency.
            alpha will be trained along with the rest of your model.
        '''
        super(Snake, self).__init__()
        self.in_features = in_features

        # initialize alpha
        self.alpha_logscale = alpha_logscale
        if self.alpha_logscale:  # log scale alphas initialized to zeros
            self.alpha = Parameter(torch.zeros(in_features) * alpha)
        else:  # linear scale alphas initialized to ones
            self.alpha = Parameter(torch.ones(in_features) * alpha)

        self.alpha.requires_grad = alpha_trainable

        self.no_div_by_zero = 0.000000001

    def forward(self, x):
        '''
        Forward pass of the function.
        Applies the function to the input elementwise.
        Snake := x + 1/a * sin^2 (xa)
        '''
        alpha = self.alpha.unsqueeze(0).unsqueeze(-1)  # line up with x to [B, C, T]
        if self.alpha_logscale:
            alpha = torch.exp(alpha)
        x = x + (1.0 / (alpha + self.no_div_by_zero)) * pow(sin(x * alpha), 2)

        return x
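The Snake formula above, `x + (1/alpha) * sin^2(alpha * x)` with the same no-div-by-zero guard, can be checked against a scalar pure-Python version (hypothetical `snake_scalar` helper, stdlib only, no torch):

```python
import math

def snake_scalar(x, alpha=1.0, eps=1e-9):
    # Scalar Snake: x + (1 / (alpha + eps)) * sin^2(alpha * x)
    return x + (1.0 / (alpha + eps)) * math.sin(alpha * x) ** 2

# sin(pi) is ~0, so Snake leaves x = pi essentially unchanged
assert abs(snake_scalar(math.pi) - math.pi) < 1e-9
# at x = pi/2 with alpha = 1, sin^2 = 1, so the output is x + ~1
assert abs(snake_scalar(math.pi / 2) - (math.pi / 2 + 1.0)) < 1e-6
```

The periodic sin^2 term is what lets Snake model periodic signals (audio) better than monotone activations.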
330 models/CosyVoice/cosyvoice/transformer/attention.py Normal file
@@ -0,0 +1,330 @@
# Copyright (c) 2019 Shigeki Karita
#               2020 Mobvoi Inc (Binbin Zhang)
#               2022 Xingchen Song (sxc19@mails.tsinghua.edu.cn)
#               2024 Alibaba Inc (Xiang Lyu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Multi-Head Attention layer definition."""

import math
from typing import Tuple

import torch
from torch import nn


class MultiHeadedAttention(nn.Module):
    """Multi-Head Attention layer.

    Args:
        n_head (int): The number of heads.
        n_feat (int): The number of features.
        dropout_rate (float): Dropout rate.

    """

    def __init__(self,
                 n_head: int,
                 n_feat: int,
                 dropout_rate: float,
                 key_bias: bool = True):
        """Construct a MultiHeadedAttention object."""
        super().__init__()
        assert n_feat % n_head == 0
        # We assume d_v always equals d_k
        self.d_k = n_feat // n_head
        self.h = n_head
        self.linear_q = nn.Linear(n_feat, n_feat)
        self.linear_k = nn.Linear(n_feat, n_feat, bias=key_bias)
        self.linear_v = nn.Linear(n_feat, n_feat)
        self.linear_out = nn.Linear(n_feat, n_feat)
        self.dropout = nn.Dropout(p=dropout_rate)

    def forward_qkv(
        self, query: torch.Tensor, key: torch.Tensor, value: torch.Tensor
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """Transform query, key and value.

        Args:
            query (torch.Tensor): Query tensor (#batch, time1, size).
            key (torch.Tensor): Key tensor (#batch, time2, size).
            value (torch.Tensor): Value tensor (#batch, time2, size).

        Returns:
            torch.Tensor: Transformed query tensor, size
                (#batch, n_head, time1, d_k).
            torch.Tensor: Transformed key tensor, size
                (#batch, n_head, time2, d_k).
            torch.Tensor: Transformed value tensor, size
                (#batch, n_head, time2, d_k).

        """
        n_batch = query.size(0)
        q = self.linear_q(query).view(n_batch, -1, self.h, self.d_k)
        k = self.linear_k(key).view(n_batch, -1, self.h, self.d_k)
        v = self.linear_v(value).view(n_batch, -1, self.h, self.d_k)
        q = q.transpose(1, 2)  # (batch, head, time1, d_k)
        k = k.transpose(1, 2)  # (batch, head, time2, d_k)
        v = v.transpose(1, 2)  # (batch, head, time2, d_k)

        return q, k, v

    def forward_attention(
        self,
        value: torch.Tensor,
        scores: torch.Tensor,
        mask: torch.Tensor = torch.ones((0, 0, 0), dtype=torch.bool)
    ) -> torch.Tensor:
        """Compute attention context vector.

        Args:
            value (torch.Tensor): Transformed value, size
                (#batch, n_head, time2, d_k).
            scores (torch.Tensor): Attention score, size
                (#batch, n_head, time1, time2).
            mask (torch.Tensor): Mask, size (#batch, 1, time2) or
                (#batch, time1, time2), (0, 0, 0) means fake mask.

        Returns:
            torch.Tensor: Transformed value (#batch, time1, d_model)
                weighted by the attention score (#batch, time1, time2).

        """
        n_batch = value.size(0)
        # NOTE(xcsong): When will `if mask.size(2) > 0` be True?
        #   1. onnx(16/4) [WHY? Because we feed real cache & real mask for the
        #      1st chunk to ease the onnx export.]
        #   2. pytorch training
        if mask.size(2) > 0:  # time2 > 0
            mask = mask.unsqueeze(1).eq(0)  # (batch, 1, *, time2)
            # For last chunk, time2 might be larger than scores.size(-1)
            mask = mask[:, :, :, :scores.size(-1)]  # (batch, 1, *, time2)
            scores = scores.masked_fill(mask, -float('inf'))
            attn = torch.softmax(scores, dim=-1).masked_fill(
                mask, 0.0)  # (batch, head, time1, time2)
        # NOTE(xcsong): When will `if mask.size(2) > 0` be False?
        #   1. onnx(16/-1, -1/-1, 16/0)
        #   2. jit (16/-1, -1/-1, 16/0, 16/4)
        else:
            attn = torch.softmax(scores, dim=-1)  # (batch, head, time1, time2)

        p_attn = self.dropout(attn)
        x = torch.matmul(p_attn, value)  # (batch, head, time1, d_k)
        x = (x.transpose(1, 2).contiguous().view(n_batch, -1,
                                                 self.h * self.d_k)
             )  # (batch, time1, d_model)

        return self.linear_out(x)  # (batch, time1, d_model)

    def forward(
        self,
        query: torch.Tensor,
        key: torch.Tensor,
        value: torch.Tensor,
        mask: torch.Tensor = torch.ones((0, 0, 0), dtype=torch.bool),
        pos_emb: torch.Tensor = torch.empty(0),
        cache: torch.Tensor = torch.zeros((0, 0, 0, 0))
    ) -> Tuple[torch.Tensor, torch.Tensor]:
        """Compute scaled dot product attention.

        Args:
            query (torch.Tensor): Query tensor (#batch, time1, size).
            key (torch.Tensor): Key tensor (#batch, time2, size).
            value (torch.Tensor): Value tensor (#batch, time2, size).
            mask (torch.Tensor): Mask tensor (#batch, 1, time2) or
                (#batch, time1, time2).
                1. When applying cross attention between decoder and encoder,
                   the batch padding mask for input is in (#batch, 1, T) shape.
                2. When applying self attention of encoder,
                   the mask is in (#batch, T, T) shape.
                3. When applying self attention of decoder,
                   the mask is in (#batch, L, L) shape.
                4. If the different position in decoder sees different blocks
                   of the encoder, such as Mocha, the passed-in mask could be
                   in (#batch, L, T) shape. But there is no such case in current
                   CosyVoice.
            cache (torch.Tensor): Cache tensor (1, head, cache_t, d_k * 2),
                where `cache_t == chunk_size * num_decoding_left_chunks`
                and `head * d_k == size`

        Returns:
            torch.Tensor: Output tensor (#batch, time1, d_model).
            torch.Tensor: Cache tensor (1, head, cache_t + time1, d_k * 2)
                where `cache_t == chunk_size * num_decoding_left_chunks`
                and `head * d_k == size`

        """
        q, k, v = self.forward_qkv(query, key, value)

        # NOTE(xcsong):
        #   when exporting an onnx model, for the 1st chunk, we feed
        #     cache(1, head, 0, d_k * 2) (16/-1, -1/-1, 16/0 mode)
        #     or cache(1, head, real_cache_t, d_k * 2) (16/4 mode).
        #     In all modes, `if cache.size(0) > 0` will always be `True`
        #     and we will always do splitting and
        #     concatenation (this simplifies onnx export). Note that
        #     it's OK to concat & split zero-shaped tensors (see code below).
        #   when exporting a jit model, for the 1st chunk, we always feed
        #     cache(0, 0, 0, 0) since jit supports dynamic if-branch.
        #   >>> a = torch.ones((1, 2, 0, 4))
        #   >>> b = torch.ones((1, 2, 3, 4))
        #   >>> c = torch.cat((a, b), dim=2)
        #   >>> torch.equal(b, c)        # True
        #   >>> d = torch.split(a, 2, dim=-1)
        #   >>> torch.equal(d[0], d[1])  # True
        if cache.size(0) > 0:
            key_cache, value_cache = torch.split(cache,
                                                 cache.size(-1) // 2,
                                                 dim=-1)
            k = torch.cat([key_cache, k], dim=2)
            v = torch.cat([value_cache, v], dim=2)
        # NOTE(xcsong): We do cache slicing in encoder.forward_chunk, since it's
        #   non-trivial to calculate `next_cache_start` here.
        new_cache = torch.cat((k, v), dim=-1)

        scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(self.d_k)
        return self.forward_attention(v, scores, mask), new_cache


class RelPositionMultiHeadedAttention(MultiHeadedAttention):
    """Multi-Head Attention layer with relative position encoding.
    Paper: https://arxiv.org/abs/1901.02860
    Args:
        n_head (int): The number of heads.
        n_feat (int): The number of features.
        dropout_rate (float): Dropout rate.
    """

    def __init__(self,
                 n_head: int,
                 n_feat: int,
                 dropout_rate: float,
                 key_bias: bool = True):
        """Construct a RelPositionMultiHeadedAttention object."""
        super().__init__(n_head, n_feat, dropout_rate, key_bias)
        # linear transformation for positional encoding
        self.linear_pos = nn.Linear(n_feat, n_feat, bias=False)
        # these two learnable biases are used in matrix c and matrix d
        # as described in https://arxiv.org/abs/1901.02860 Section 3.3
        self.pos_bias_u = nn.Parameter(torch.Tensor(self.h, self.d_k))
        self.pos_bias_v = nn.Parameter(torch.Tensor(self.h, self.d_k))
        torch.nn.init.xavier_uniform_(self.pos_bias_u)
        torch.nn.init.xavier_uniform_(self.pos_bias_v)

    def rel_shift(self, x: torch.Tensor) -> torch.Tensor:
        """Compute relative positional encoding.

        Args:
            x (torch.Tensor): Input tensor (batch, head, time1, 2*time1-1).
                time1 means the length of query vector.

        Returns:
            torch.Tensor: Output tensor.

        """
        zero_pad = torch.zeros((x.size()[0], x.size()[1], x.size()[2], 1),
                               device=x.device,
                               dtype=x.dtype)
        x_padded = torch.cat([zero_pad, x], dim=-1)

        x_padded = x_padded.view(x.size()[0],
                                 x.size()[1],
                                 x.size(3) + 1, x.size(2))
        x = x_padded[:, :, 1:].view_as(x)[
            :, :, :, : x.size(-1) // 2 + 1
        ]  # only keep the positions from 0 to time2
        return x

    def forward(
        self,
        query: torch.Tensor,
        key: torch.Tensor,
        value: torch.Tensor,
        mask: torch.Tensor = torch.ones((0, 0, 0), dtype=torch.bool),
        pos_emb: torch.Tensor = torch.empty(0),
        cache: torch.Tensor = torch.zeros((0, 0, 0, 0))
    ) -> Tuple[torch.Tensor, torch.Tensor]:
        """Compute 'Scaled Dot Product Attention' with rel. positional encoding.
        Args:
            query (torch.Tensor): Query tensor (#batch, time1, size).
            key (torch.Tensor): Key tensor (#batch, time2, size).
            value (torch.Tensor): Value tensor (#batch, time2, size).
            mask (torch.Tensor): Mask tensor (#batch, 1, time2) or
                (#batch, time1, time2), (0, 0, 0) means fake mask.
            pos_emb (torch.Tensor): Positional embedding tensor
                (#batch, time2, size).
            cache (torch.Tensor): Cache tensor (1, head, cache_t, d_k * 2),
                where `cache_t == chunk_size * num_decoding_left_chunks`
                and `head * d_k == size`
        Returns:
            torch.Tensor: Output tensor (#batch, time1, d_model).
            torch.Tensor: Cache tensor (1, head, cache_t + time1, d_k * 2)
                where `cache_t == chunk_size * num_decoding_left_chunks`
                and `head * d_k == size`
        """
        q, k, v = self.forward_qkv(query, key, value)
        q = q.transpose(1, 2)  # (batch, time1, head, d_k)

        # NOTE(xcsong): see the cache discussion in MultiHeadedAttention.forward;
        #   the same split/concat scheme applies here (zero-shaped caches are
        #   safe to concat & split, which simplifies onnx export).
        if cache.size(0) > 0:
            key_cache, value_cache = torch.split(cache,
                                                 cache.size(-1) // 2,
                                                 dim=-1)
            k = torch.cat([key_cache, k], dim=2)
            v = torch.cat([value_cache, v], dim=2)
        # NOTE(xcsong): We do cache slicing in encoder.forward_chunk, since it's
        #   non-trivial to calculate `next_cache_start` here.
        new_cache = torch.cat((k, v), dim=-1)

        n_batch_pos = pos_emb.size(0)
        p = self.linear_pos(pos_emb).view(n_batch_pos, -1, self.h, self.d_k)
        p = p.transpose(1, 2)  # (batch, head, time1, d_k)

        # (batch, head, time1, d_k)
        q_with_bias_u = (q + self.pos_bias_u).transpose(1, 2)
        # (batch, head, time1, d_k)
        q_with_bias_v = (q + self.pos_bias_v).transpose(1, 2)

        # compute attention score
        # first compute matrix a and matrix c
        # as described in https://arxiv.org/abs/1901.02860 Section 3.3
        # (batch, head, time1, time2)
        matrix_ac = torch.matmul(q_with_bias_u, k.transpose(-2, -1))

        # compute matrix b and matrix d
        # (batch, head, time1, time2)
        matrix_bd = torch.matmul(q_with_bias_v, p.transpose(-2, -1))
        # NOTE(Xiang Lyu): Keep rel_shift since espnet rel_pos_emb is used
        if matrix_ac.shape != matrix_bd.shape:
            matrix_bd = self.rel_shift(matrix_bd)

        scores = (matrix_ac + matrix_bd) / math.sqrt(
            self.d_k)  # (batch, head, time1, time2)

        return self.forward_attention(v, scores, mask), new_cache
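The core of `forward_attention` (scaled dot product, softmax, weighted sum of values) can be mirrored for a single head in pure Python. A toy sketch with a hypothetical `attention_1head` helper (no mask, no dropout, plain lists instead of tensors):

```python
import math

def attention_1head(q, k, v):
    # q: (time1, d_k), k/v: (time2, d_k) as lists of lists.
    d_k = len(q[0])
    out = []
    for qi in q:
        # scaled dot-product scores against every key
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d_k) for kj in k]
        # numerically stable softmax
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        attn = [e / z for e in exps]
        # attention-weighted sum of value rows
        out.append([sum(a * vj[c] for a, vj in zip(attn, v)) for c in range(len(v[0]))])
    return out

# Identical keys give uniform weights, so the output is the mean of v:
out = attention_1head([[1.0, 0.0]], [[1.0, 0.0], [1.0, 0.0]], [[0.0, 2.0], [4.0, 0.0]])
print(out)  # [[2.0, 1.0]]
```

The module above does the same per head in parallel, then concatenates heads and applies `linear_out`.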
258 models/CosyVoice/cosyvoice/transformer/convolution.py Normal file
@@ -0,0 +1,258 @@
# Copyright (c) 2020 Mobvoi Inc. (authors: Binbin Zhang, Di Wu)
#               2024 Alibaba Inc (Xiang Lyu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Modified from ESPnet(https://github.com/espnet/espnet)
"""ConvolutionModule definition."""

from typing import Tuple

import torch
from torch import nn
import torch.nn.functional as F


class ConvolutionModule(nn.Module):
    """ConvolutionModule in Conformer model."""

    def __init__(self,
                 channels: int,
                 kernel_size: int = 15,
                 activation: nn.Module = nn.ReLU(),
                 norm: str = "batch_norm",
                 causal: bool = False,
                 bias: bool = True):
        """Construct a ConvolutionModule object.
        Args:
            channels (int): The number of channels of conv layers.
            kernel_size (int): Kernel size of conv layers.
            causal (bool): Whether to use causal convolution or not.
        """
        super().__init__()

        self.pointwise_conv1 = nn.Conv1d(
            channels,
            2 * channels,
            kernel_size=1,
            stride=1,
            padding=0,
            bias=bias,
        )
        # self.lorder is used to distinguish if it's a causal convolution:
        # if self.lorder > 0, it's a causal convolution and the input will be
        # padded with self.lorder frames on the left in forward();
        # otherwise it's a symmetrical convolution.
        if causal:
            padding = 0
            self.lorder = kernel_size - 1
        else:
            # kernel_size should be an odd number for non-causal convolution
            assert (kernel_size - 1) % 2 == 0
            padding = (kernel_size - 1) // 2
            self.lorder = 0
        self.depthwise_conv = nn.Conv1d(
            channels,
            channels,
            kernel_size,
            stride=1,
            padding=padding,
            groups=channels,
            bias=bias,
        )

        assert norm in ['batch_norm', 'layer_norm']
        if norm == "batch_norm":
            self.use_layer_norm = False
            self.norm = nn.BatchNorm1d(channels)
        else:
            self.use_layer_norm = True
            self.norm = nn.LayerNorm(channels)

        self.pointwise_conv2 = nn.Conv1d(
            channels,
            channels,
            kernel_size=1,
            stride=1,
            padding=0,
            bias=bias,
        )
        self.activation = activation

    def forward(
        self,
        x: torch.Tensor,
        mask_pad: torch.Tensor = torch.ones((0, 0, 0), dtype=torch.bool),
        cache: torch.Tensor = torch.zeros((0, 0, 0)),
    ) -> Tuple[torch.Tensor, torch.Tensor]:
        """Compute convolution module.
        Args:
            x (torch.Tensor): Input tensor (#batch, time, channels).
            mask_pad (torch.Tensor): used for batch padding (#batch, 1, time),
                (0, 0, 0) means fake mask.
            cache (torch.Tensor): left context cache, it is only
                used in causal convolution (#batch, channels, cache_t),
                (0, 0, 0) means fake cache.
        Returns:
            torch.Tensor: Output tensor (#batch, time, channels).
        """
        # exchange the temporal dimension and the feature dimension
        x = x.transpose(1, 2)  # (#batch, channels, time)

        # mask batch padding
        if mask_pad.size(2) > 0:  # time > 0
            x.masked_fill_(~mask_pad, 0.0)

        if self.lorder > 0:
            if cache.size(2) == 0:  # cache_t == 0
                x = nn.functional.pad(x, (self.lorder, 0), 'constant', 0.0)
            else:
                assert cache.size(0) == x.size(0)  # equal batch
                assert cache.size(1) == x.size(1)  # equal channel
                x = torch.cat((cache, x), dim=2)
            assert (x.size(2) > self.lorder)
            new_cache = x[:, :, -self.lorder:]
        else:
            # It's better we just return None if no cache is required,
            # however, for JIT export, here we just fake one tensor instead
            # of None.
            new_cache = torch.zeros((0, 0, 0), dtype=x.dtype, device=x.device)

        # GLU mechanism
        x = self.pointwise_conv1(x)  # (batch, 2*channel, time)
        x = nn.functional.glu(x, dim=1)  # (batch, channel, time)

        # 1D Depthwise Conv
        x = self.depthwise_conv(x)
        if self.use_layer_norm:
            x = x.transpose(1, 2)
        x = self.activation(self.norm(x))
        if self.use_layer_norm:
            x = x.transpose(1, 2)
        x = self.pointwise_conv2(x)
        # mask batch padding
        if mask_pad.size(2) > 0:  # time > 0
            x.masked_fill_(~mask_pad, 0.0)

        return x.transpose(1, 2), new_cache


# NOTE(Xiang Lyu) causal conv module used in convolution-based vocoder
class CausalConv1d(torch.nn.Conv1d):
    def __init__(
        self,
        in_channels: int,
        out_channels: int,
        kernel_size: int,
        stride: int = 1,
        dilation: int = 1,
        groups: int = 1,
        bias: bool = True,
        padding_mode: str = 'zeros',
        causal_type: str = 'left',
        device=None,
        dtype=None
    ) -> None:
        super(CausalConv1d, self).__init__(in_channels, out_channels,
                                           kernel_size, stride=1,
                                           padding=0, dilation=dilation,
                                           groups=groups, bias=bias,
                                           padding_mode=padding_mode,
                                           device=device, dtype=dtype)
        assert stride == 1
        self.causal_padding = int((kernel_size * dilation - dilation) / 2) * 2 + (kernel_size + 1) % 2
        assert causal_type in ['left', 'right']
        self.causal_type = causal_type

    def forward(self, x: torch.Tensor, cache: torch.Tensor = torch.zeros(0, 0, 0)) -> torch.Tensor:
        input_timestep = x.shape[2]
        if cache.size(2) == 0:
            cache = torch.zeros(x.shape[0], x.shape[1], self.causal_padding).to(x)
        assert cache.size(2) == self.causal_padding
        if self.causal_type == 'left':
            x = torch.concat([cache, x], dim=2)
        else:
            x = torch.concat([x, cache], dim=2)
        x = super(CausalConv1d, self).forward(x)
        assert x.shape[2] == input_timestep
        return x


class CausalConv1dDownSample(torch.nn.Conv1d):
    def __init__(
        self,
        in_channels: int,
        out_channels: int,
        kernel_size: int,
        stride: int = 1,
        dilation: int = 1,
        groups: int = 1,
        bias: bool = True,
        padding_mode: str = 'zeros',
        device=None,
        dtype=None
    ) -> None:
        super(CausalConv1dDownSample, self).__init__(in_channels, out_channels,
                                                     kernel_size, stride,
                                                     padding=0, dilation=dilation,
                                                     groups=groups, bias=bias,
                                                     padding_mode=padding_mode,
                                                     device=device, dtype=dtype)
        assert stride != 1 and dilation == 1
        assert kernel_size % stride == 0
        self.causal_padding = stride - 1

    def forward(self, x: torch.Tensor, cache: torch.Tensor = torch.zeros(0, 0, 0)) -> torch.Tensor:
        if cache.size(2) == 0:
            x = F.pad(x, (self.causal_padding, 0), value=0.0)
        else:
            assert cache.size(2) == self.causal_padding
            x = torch.concat([cache, x], dim=2)
        x = super(CausalConv1dDownSample, self).forward(x)
        return x


class CausalConv1dUpsample(torch.nn.Conv1d):
    def __init__(
        self,
        in_channels: int,
        out_channels: int,
        kernel_size: int,
        stride: int = 1,
        dilation: int = 1,
        groups: int = 1,
        bias: bool = True,
        padding_mode: str = 'zeros',
        device=None,
        dtype=None
    ) -> None:
        super(CausalConv1dUpsample, self).__init__(in_channels, out_channels,
                                                   kernel_size, 1,
                                                   padding=0, dilation=dilation,
                                                   groups=groups, bias=bias,
                                                   padding_mode=padding_mode,
                                                   device=device, dtype=dtype)
        assert dilation == 1
        self.causal_padding = kernel_size - 1
        self.upsample = torch.nn.Upsample(scale_factor=stride, mode='nearest')

    def forward(self, x: torch.Tensor, cache: torch.Tensor = torch.zeros(0, 0, 0)) -> torch.Tensor:
        x = self.upsample(x)
        input_timestep = x.shape[2]
        if cache.size(2) == 0:
            x = F.pad(x, (self.causal_padding, 0), value=0.0)
        else:
            assert cache.size(2) == self.causal_padding
            x = torch.concat([cache, x], dim=2)
        x = super(CausalConv1dUpsample, self).forward(x)
        assert input_timestep == x.shape[2]
        return x
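The causal branch of `ConvolutionModule` pads `lorder = kernel_size - 1` zero frames on the left so a "valid" convolution preserves sequence length while each output frame depends only on current and past inputs. A pure-Python sketch of that padding arithmetic (toy kernels, illustrative only):

```python
def causal_conv1d(x, kernel):
    """'Valid' 1-D convolution after left-padding with kernel_size - 1
    zeros, mirroring ConvolutionModule's causal branch (lorder = k - 1)."""
    k = len(kernel)
    padded = [0.0] * (k - 1) + list(x)
    # output length equals input length; output[t] sees only inputs <= t
    return [sum(padded[t + j] * kernel[j] for j in range(k))
            for t in range(len(x))]

x = [1.0, 2.0, 3.0, 4.0]
print(causal_conv1d(x, [0.0, 0.0, 1.0]))  # identity at the last tap: [1.0, 2.0, 3.0, 4.0]
print(causal_conv1d(x, [1.0, 0.0, 0.0]))  # two-frame delay: [0.0, 0.0, 1.0, 2.0]
```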
models/CosyVoice/cosyvoice/transformer/decoder.py  (new file, 396 lines)
@@ -0,0 +1,396 @@
# Copyright (c) 2021 Mobvoi Inc. (authors: Binbin Zhang, Di Wu)
#               2024 Alibaba Inc (Xiang Lyu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Modified from ESPnet(https://github.com/espnet/espnet)
"""Decoder definition."""
from typing import Tuple, List, Optional

import torch
import torch.utils.checkpoint as ckpt
import logging

from cosyvoice.transformer.decoder_layer import DecoderLayer
from cosyvoice.transformer.positionwise_feed_forward import PositionwiseFeedForward
from cosyvoice.utils.class_utils import (
    COSYVOICE_EMB_CLASSES,
    COSYVOICE_ATTENTION_CLASSES,
    COSYVOICE_ACTIVATION_CLASSES,
)
from cosyvoice.utils.mask import (subsequent_mask, make_pad_mask)


class TransformerDecoder(torch.nn.Module):
    """Base class of Transformer decoder module.
    Args:
        vocab_size: output dim
        encoder_output_size: dimension of attention
        attention_heads: the number of heads of multi head attention
        linear_units: the hidden units number of position-wise feedforward
        num_blocks: the number of decoder blocks
        dropout_rate: dropout rate
        self_attention_dropout_rate: dropout rate for attention
        input_layer: input layer type
        use_output_layer: whether to use output layer
        pos_enc_class: PositionalEncoding or ScaledPositionalEncoding
        normalize_before:
            True: use layer_norm before each sub-block of a layer.
            False: use layer_norm after each sub-block of a layer.
        src_attention: if False, encoder-decoder cross attention is not
            applied, e.g. in a CIF model
        key_bias: whether to use bias in attention.linear_k, False for whisper models.
        gradient_checkpointing: rerun a forward-pass segment for each
            checkpointed segment during backward.
        tie_word_embedding: tie or clone module weights depending on whether
            we are using TorchScript or not
    """

    def __init__(
        self,
        vocab_size: int,
        encoder_output_size: int,
        attention_heads: int = 4,
        linear_units: int = 2048,
        num_blocks: int = 6,
        dropout_rate: float = 0.1,
        positional_dropout_rate: float = 0.1,
        self_attention_dropout_rate: float = 0.0,
        src_attention_dropout_rate: float = 0.0,
        input_layer: str = "embed",
        use_output_layer: bool = True,
        normalize_before: bool = True,
        src_attention: bool = True,
        key_bias: bool = True,
        activation_type: str = "relu",
        gradient_checkpointing: bool = False,
        tie_word_embedding: bool = False,
    ):
        super().__init__()
        attention_dim = encoder_output_size
        activation = COSYVOICE_ACTIVATION_CLASSES[activation_type]()

        self.embed = torch.nn.Sequential(
            torch.nn.Identity() if input_layer == "no_pos" else
            torch.nn.Embedding(vocab_size, attention_dim),
            COSYVOICE_EMB_CLASSES[input_layer](attention_dim,
                                               positional_dropout_rate),
        )

        self.normalize_before = normalize_before
        self.after_norm = torch.nn.LayerNorm(attention_dim, eps=1e-5)
        self.use_output_layer = use_output_layer
        if use_output_layer:
            self.output_layer = torch.nn.Linear(attention_dim, vocab_size)
        else:
            self.output_layer = torch.nn.Identity()
        self.num_blocks = num_blocks
        self.decoders = torch.nn.ModuleList([
            DecoderLayer(
                attention_dim,
                COSYVOICE_ATTENTION_CLASSES["selfattn"](
                    attention_heads, attention_dim,
                    self_attention_dropout_rate, key_bias),
                COSYVOICE_ATTENTION_CLASSES["selfattn"](
                    attention_heads, attention_dim, src_attention_dropout_rate,
                    key_bias) if src_attention else None,
                PositionwiseFeedForward(attention_dim, linear_units,
                                        dropout_rate, activation),
                dropout_rate,
                normalize_before,
            ) for _ in range(self.num_blocks)
        ])

        self.gradient_checkpointing = gradient_checkpointing
        self.tie_word_embedding = tie_word_embedding

    def forward(
        self,
        memory: torch.Tensor,
        memory_mask: torch.Tensor,
        ys_in_pad: torch.Tensor,
        ys_in_lens: torch.Tensor,
        r_ys_in_pad: torch.Tensor = torch.empty(0),
        reverse_weight: float = 0.0,
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """Forward decoder.
        Args:
            memory: encoded memory, float32 (batch, maxlen_in, feat)
            memory_mask: encoder memory mask, (batch, 1, maxlen_in)
            ys_in_pad: padded input token ids, int64 (batch, maxlen_out)
            ys_in_lens: input lengths of this batch (batch)
            r_ys_in_pad: not used in transformer decoder, kept to unify the
                api with the bidirectional decoder
            reverse_weight: not used in transformer decoder, kept to unify
                the api with the bidirectional decoder
        Returns:
            (tuple): tuple containing:
                x: decoded token score before softmax (batch, maxlen_out,
                    vocab_size) if use_output_layer is True,
                torch.tensor(0.0), in order to unify api with bidirectional decoder
                olens: (batch, )
        NOTE(xcsong):
            We pass the `__call__` method of the modules instead of `forward` to the
            checkpointing API because `__call__` attaches all the hooks of the module.
            https://discuss.pytorch.org/t/any-different-between-model-input-and-model-forward-input/3690/2
        """
        tgt = ys_in_pad
        maxlen = tgt.size(1)
        # tgt_mask: (B, 1, L)
        tgt_mask = ~make_pad_mask(ys_in_lens, maxlen).unsqueeze(1)
        tgt_mask = tgt_mask.to(tgt.device)
        # m: (1, L, L)
        m = subsequent_mask(tgt_mask.size(-1),
                            device=tgt_mask.device).unsqueeze(0)
        # tgt_mask: (B, L, L)
        tgt_mask = tgt_mask & m
        x, _ = self.embed(tgt)
        if self.gradient_checkpointing and self.training:
            x = self.forward_layers_checkpointed(x, tgt_mask, memory,
                                                 memory_mask)
        else:
            x = self.forward_layers(x, tgt_mask, memory, memory_mask)
        if self.normalize_before:
            x = self.after_norm(x)
        if self.use_output_layer:
            x = self.output_layer(x)
        olens = tgt_mask.sum(1)
        return x, torch.tensor(0.0), olens

    def forward_layers(self, x: torch.Tensor, tgt_mask: torch.Tensor,
                       memory: torch.Tensor,
                       memory_mask: torch.Tensor) -> torch.Tensor:
        for layer in self.decoders:
            x, tgt_mask, memory, memory_mask = layer(x, tgt_mask, memory,
                                                     memory_mask)
        return x

    @torch.jit.unused
    def forward_layers_checkpointed(self, x: torch.Tensor,
                                    tgt_mask: torch.Tensor,
                                    memory: torch.Tensor,
                                    memory_mask: torch.Tensor) -> torch.Tensor:
        for layer in self.decoders:
            x, tgt_mask, memory, memory_mask = ckpt.checkpoint(
                layer.__call__, x, tgt_mask, memory, memory_mask)
        return x

    def forward_one_step(
        self,
        memory: torch.Tensor,
        memory_mask: torch.Tensor,
        tgt: torch.Tensor,
        tgt_mask: torch.Tensor,
        cache: Optional[List[torch.Tensor]] = None,
    ) -> Tuple[torch.Tensor, List[torch.Tensor]]:
        """Forward one step.
        This is only used for decoding.
        Args:
            memory: encoded memory, float32 (batch, maxlen_in, feat)
            memory_mask: encoded memory mask, (batch, 1, maxlen_in)
            tgt: input token ids, int64 (batch, maxlen_out)
            tgt_mask: input token mask, (batch, maxlen_out),
                dtype=torch.uint8 in PyTorch 1.2-,
                dtype=torch.bool in PyTorch 1.2+ (include 1.2)
            cache: cached output list of (batch, max_time_out-1, size)
        Returns:
            y, cache: NN output value and cache per `self.decoders`.
                `y.shape` is (batch, maxlen_out, token)
        """
        x, _ = self.embed(tgt)
        new_cache = []
        for i, decoder in enumerate(self.decoders):
            if cache is None:
                c = None
            else:
                c = cache[i]
            x, tgt_mask, memory, memory_mask = decoder(x,
                                                       tgt_mask,
                                                       memory,
                                                       memory_mask,
                                                       cache=c)
            new_cache.append(x)
        if self.normalize_before:
            y = self.after_norm(x[:, -1])
        else:
            y = x[:, -1]
        if self.use_output_layer:
            y = torch.log_softmax(self.output_layer(y), dim=-1)
        return y, new_cache

    def tie_or_clone_weights(self, jit_mode: bool = True):
        """Tie or clone module weights (between word_emb and output_layer)
        depending on whether we are using TorchScript or not."""
        if not self.use_output_layer:
            return
        if jit_mode:
            logging.info("clone emb.weight to output.weight")
            self.output_layer.weight = torch.nn.Parameter(
                self.embed[0].weight.clone())
        else:
            logging.info("tie emb.weight with output.weight")
            self.output_layer.weight = self.embed[0].weight

        if getattr(self.output_layer, "bias", None) is not None:
            self.output_layer.bias.data = torch.nn.functional.pad(
                self.output_layer.bias.data,
                (
                    0,
                    self.output_layer.weight.shape[0] -
                    self.output_layer.bias.shape[0],
                ),
                "constant",
                0,
            )


class BiTransformerDecoder(torch.nn.Module):
    """Base class of bidirectional Transformer decoder module.
    Args:
        vocab_size: output dim
        encoder_output_size: dimension of attention
        attention_heads: the number of heads of multi head attention
        linear_units: the hidden units number of position-wise feedforward
        num_blocks: the number of decoder blocks
        r_num_blocks: the number of right-to-left decoder blocks
        dropout_rate: dropout rate
        self_attention_dropout_rate: dropout rate for attention
        input_layer: input layer type
        use_output_layer: whether to use output layer
        pos_enc_class: PositionalEncoding or ScaledPositionalEncoding
        normalize_before:
            True: use layer_norm before each sub-block of a layer.
            False: use layer_norm after each sub-block of a layer.
        key_bias: whether to use bias in attention.linear_k, False for whisper models.
    """

    def __init__(
        self,
        vocab_size: int,
        encoder_output_size: int,
        attention_heads: int = 4,
        linear_units: int = 2048,
        num_blocks: int = 6,
        r_num_blocks: int = 0,
        dropout_rate: float = 0.1,
        positional_dropout_rate: float = 0.1,
        self_attention_dropout_rate: float = 0.0,
        src_attention_dropout_rate: float = 0.0,
        input_layer: str = "embed",
        use_output_layer: bool = True,
        normalize_before: bool = True,
        key_bias: bool = True,
        gradient_checkpointing: bool = False,
        tie_word_embedding: bool = False,
    ):

        super().__init__()
        self.tie_word_embedding = tie_word_embedding
        self.left_decoder = TransformerDecoder(
            vocab_size,
            encoder_output_size,
            attention_heads,
            linear_units,
            num_blocks,
            dropout_rate,
            positional_dropout_rate,
            self_attention_dropout_rate,
            src_attention_dropout_rate,
            input_layer,
            use_output_layer,
            normalize_before,
            key_bias=key_bias,
            gradient_checkpointing=gradient_checkpointing,
            tie_word_embedding=tie_word_embedding)

        self.right_decoder = TransformerDecoder(
            vocab_size,
            encoder_output_size,
            attention_heads,
            linear_units,
            r_num_blocks,
            dropout_rate,
            positional_dropout_rate,
            self_attention_dropout_rate,
            src_attention_dropout_rate,
            input_layer,
            use_output_layer,
            normalize_before,
            key_bias=key_bias,
            gradient_checkpointing=gradient_checkpointing,
            tie_word_embedding=tie_word_embedding)

    def forward(
        self,
        memory: torch.Tensor,
        memory_mask: torch.Tensor,
        ys_in_pad: torch.Tensor,
        ys_in_lens: torch.Tensor,
        r_ys_in_pad: torch.Tensor,
        reverse_weight: float = 0.0,
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """Forward decoder.
        Args:
            memory: encoded memory, float32 (batch, maxlen_in, feat)
            memory_mask: encoder memory mask, (batch, 1, maxlen_in)
            ys_in_pad: padded input token ids, int64 (batch, maxlen_out)
            ys_in_lens: input lengths of this batch (batch)
            r_ys_in_pad: padded input token ids, int64 (batch, maxlen_out),
                used for the right-to-left decoder
            reverse_weight: used for the right-to-left decoder
        Returns:
            (tuple): tuple containing:
                x: decoded token score before softmax (batch, maxlen_out,
                    vocab_size) if use_output_layer is True,
                r_x: decoded token score (right-to-left decoder)
                    before softmax (batch, maxlen_out, vocab_size)
                    if use_output_layer is True,
                olens: (batch, )
        """
        l_x, _, olens = self.left_decoder(memory, memory_mask, ys_in_pad,
                                          ys_in_lens)
        r_x = torch.tensor(0.0)
        if reverse_weight > 0.0:
            r_x, _, olens = self.right_decoder(memory, memory_mask,
                                               r_ys_in_pad, ys_in_lens)
        return l_x, r_x, olens

    def forward_one_step(
        self,
        memory: torch.Tensor,
        memory_mask: torch.Tensor,
        tgt: torch.Tensor,
        tgt_mask: torch.Tensor,
        cache: Optional[List[torch.Tensor]] = None,
    ) -> Tuple[torch.Tensor, List[torch.Tensor]]:
        """Forward one step.
        This is only used for decoding.
        Args:
            memory: encoded memory, float32 (batch, maxlen_in, feat)
            memory_mask: encoded memory mask, (batch, 1, maxlen_in)
            tgt: input token ids, int64 (batch, maxlen_out)
            tgt_mask: input token mask, (batch, maxlen_out),
                dtype=torch.uint8 in PyTorch 1.2-,
                dtype=torch.bool in PyTorch 1.2+ (include 1.2)
            cache: cached output list of (batch, max_time_out-1, size)
        Returns:
            y, cache: NN output value and cache per `self.decoders`.
                `y.shape` is (batch, maxlen_out, token)
        """
        return self.left_decoder.forward_one_step(memory, memory_mask, tgt,
                                                  tgt_mask, cache)

    def tie_or_clone_weights(self, jit_mode: bool = True):
        """Tie or clone module weights (between word_emb and output_layer)
        depending on whether we are using TorchScript or not."""
        self.left_decoder.tie_or_clone_weights(jit_mode)
        self.right_decoder.tie_or_clone_weights(jit_mode)
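`TransformerDecoder.forward` combines a padding mask with a lower-triangular `subsequent_mask` to build the (B, L, L) target mask. A pure-Python sketch of that combination (the helper names here are illustrative stand-ins for `cosyvoice.utils.mask`, not the real implementations):

```python
def subsequent_mask(size):
    """Lower-triangular causal mask: position i may attend to j <= i."""
    return [[j <= i for j in range(size)] for i in range(size)]

def non_pad_mask(length, max_len):
    """True for real frames, False for padding at the tail."""
    return [t < length for t in range(max_len)]

L = 3
sub = subsequent_mask(L)
pad = non_pad_mask(2, L)  # one padded frame at the end
# elementwise AND, mirroring `tgt_mask = tgt_mask & m` for one batch item
tgt_mask = [[sub[i][j] and pad[j] for j in range(L)] for i in range(L)]
print(tgt_mask)
```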
models/CosyVoice/cosyvoice/transformer/decoder_layer.py  (new file, 132 lines)
@@ -0,0 +1,132 @@
# Copyright (c) 2019 Shigeki Karita
#               2020 Mobvoi Inc (Binbin Zhang)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Decoder self-attention layer definition."""
from typing import Optional, Tuple

import torch
from torch import nn


class DecoderLayer(nn.Module):
    """Single decoder layer module.

    Args:
        size (int): Input dimension.
        self_attn (torch.nn.Module): Self-attention module instance.
            `MultiHeadedAttention` instance can be used as the argument.
        src_attn (torch.nn.Module): Inter-attention module instance.
            `MultiHeadedAttention` instance can be used as the argument.
            If `None` is passed, inter-attention is not used, as in
            CIF, GPT, and other decoder-only models.
        feed_forward (torch.nn.Module): Feed-forward module instance.
            `PositionwiseFeedForward` instance can be used as the argument.
        dropout_rate (float): Dropout rate.
        normalize_before (bool):
            True: use layer_norm before each sub-block.
            False: use layer_norm after each sub-block.
    """

    def __init__(
        self,
        size: int,
        self_attn: nn.Module,
        src_attn: Optional[nn.Module],
        feed_forward: nn.Module,
        dropout_rate: float,
        normalize_before: bool = True,
    ):
        """Construct a DecoderLayer object."""
        super().__init__()
        self.size = size
        self.self_attn = self_attn
        self.src_attn = src_attn
        self.feed_forward = feed_forward
        self.norm1 = nn.LayerNorm(size, eps=1e-5)
        self.norm2 = nn.LayerNorm(size, eps=1e-5)
        self.norm3 = nn.LayerNorm(size, eps=1e-5)
        self.dropout = nn.Dropout(dropout_rate)
        self.normalize_before = normalize_before

    def forward(
        self,
        tgt: torch.Tensor,
        tgt_mask: torch.Tensor,
        memory: torch.Tensor,
        memory_mask: torch.Tensor,
        cache: Optional[torch.Tensor] = None
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
        """Compute decoded features.

        Args:
            tgt (torch.Tensor): Input tensor (#batch, maxlen_out, size).
            tgt_mask (torch.Tensor): Mask for input tensor
                (#batch, maxlen_out).
            memory (torch.Tensor): Encoded memory
                (#batch, maxlen_in, size).
            memory_mask (torch.Tensor): Encoded memory mask
                (#batch, maxlen_in).
            cache (torch.Tensor): cached tensors,
                (#batch, maxlen_out - 1, size).

        Returns:
            torch.Tensor: Output tensor (#batch, maxlen_out, size).
            torch.Tensor: Mask for output tensor (#batch, maxlen_out).
            torch.Tensor: Encoded memory (#batch, maxlen_in, size).
            torch.Tensor: Encoded memory mask (#batch, maxlen_in).
        """
        residual = tgt
        if self.normalize_before:
            tgt = self.norm1(tgt)

        if cache is None:
            tgt_q = tgt
            tgt_q_mask = tgt_mask
        else:
            # compute only the last frame query keeping dim: max_time_out -> 1
            assert cache.shape == (
                tgt.shape[0],
                tgt.shape[1] - 1,
                self.size,
            ), f"{cache.shape} == {(tgt.shape[0], tgt.shape[1] - 1, self.size)}"
            tgt_q = tgt[:, -1:, :]
            residual = residual[:, -1:, :]
            tgt_q_mask = tgt_mask[:, -1:, :]

        x = residual + self.dropout(
            self.self_attn(tgt_q, tgt, tgt, tgt_q_mask)[0])
        if not self.normalize_before:
            x = self.norm1(x)

        if self.src_attn is not None:
            residual = x
            if self.normalize_before:
                x = self.norm2(x)
            x = residual + self.dropout(
                self.src_attn(x, memory, memory, memory_mask)[0])
            if not self.normalize_before:
                x = self.norm2(x)

        residual = x
        if self.normalize_before:
            x = self.norm3(x)
        x = residual + self.dropout(self.feed_forward(x))
        if not self.normalize_before:
            x = self.norm3(x)

        if cache is not None:
            x = torch.cat([cache, x], dim=1)

        return x, tgt_mask, memory, memory_mask
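The cache path in `DecoderLayer.forward` recomputes only the last query frame and prepends the cached first T-1 output frames. A stand-in sketch of that control flow (pure Python; `process` is a hypothetical placeholder for the attention + feed-forward stack):

```python
def process(frame):
    """Hypothetical stand-in for attention + feed-forward on one frame."""
    return frame * 2

def step_with_cache(tgt, cache):
    """Sketch of DecoderLayer's incremental path: with a cache of the
    first T-1 output frames, only the last frame is recomputed and the
    result is concatenated back onto the cache."""
    if cache is None:
        return [process(frame) for frame in tgt]       # full pass
    assert len(cache) == len(tgt) - 1                  # (B, T-1, size) check
    return cache + [process(tgt[-1])]                  # last frame only

print(step_with_cache([1, 2, 3], None))    # [2, 4, 6]
print(step_with_cache([1, 2, 3], [2, 4]))  # [2, 4, 6]
```

Both paths produce the same output; the cached path just avoids redoing work for frames already decoded.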
models/CosyVoice/cosyvoice/transformer/embedding.py  (new file, 302 lines)
@@ -0,0 +1,302 @@
# Copyright (c) 2020 Mobvoi Inc. (authors: Binbin Zhang, Di Wu)
#               2024 Alibaba Inc (Xiang Lyu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Modified from ESPnet(https://github.com/espnet/espnet)
"""Positional Encoding Module."""

import math
from typing import Tuple, Union

import torch
import torch.nn.functional as F
import numpy as np


class PositionalEncoding(torch.nn.Module):
    """Positional encoding.

    :param int d_model: embedding dim
    :param float dropout_rate: dropout rate
    :param int max_len: maximum input length

    PE(pos, 2i)   = sin(pos / (10000^(2i/d_model)))
    PE(pos, 2i+1) = cos(pos / (10000^(2i/d_model)))
    """

    def __init__(self,
                 d_model: int,
                 dropout_rate: float,
                 max_len: int = 5000,
                 reverse: bool = False):
        """Construct a PositionalEncoding object."""
        super().__init__()
        self.d_model = d_model
        self.xscale = math.sqrt(self.d_model)
        self.dropout = torch.nn.Dropout(p=dropout_rate)
        self.max_len = max_len

        self.pe = torch.zeros(self.max_len, self.d_model)
        position = torch.arange(0, self.max_len,
                                dtype=torch.float32).unsqueeze(1)
        div_term = torch.exp(
            torch.arange(0, self.d_model, 2, dtype=torch.float32) *
            -(math.log(10000.0) / self.d_model))
        self.pe[:, 0::2] = torch.sin(position * div_term)
        self.pe[:, 1::2] = torch.cos(position * div_term)
        self.pe = self.pe.unsqueeze(0)

    def forward(self,
                x: torch.Tensor,
                offset: Union[int, torch.Tensor] = 0) \
            -> Tuple[torch.Tensor, torch.Tensor]:
        """Add positional encoding.

        Args:
            x (torch.Tensor): Input. Its shape is (batch, time, ...)
            offset (int, torch.Tensor): position offset

        Returns:
            torch.Tensor: Encoded tensor. Its shape is (batch, time, ...)
            torch.Tensor: for compatibility with RelPositionalEncoding
        """

        self.pe = self.pe.to(x.device)
        pos_emb = self.position_encoding(offset, x.size(1), False)
        x = x * self.xscale + pos_emb
        return self.dropout(x), self.dropout(pos_emb)

    def position_encoding(self,
                          offset: Union[int, torch.Tensor],
                          size: int,
                          apply_dropout: bool = True) -> torch.Tensor:
        """For getting encoding in a streaming fashion.

        Attention!!!!!
        We apply dropout only once at the whole utterance level in a
        non-streaming way, but this function will be called several times
        with increasing input size in a streaming scenario, so the dropout
        will be applied several times.

        Args:
            offset (int or torch.Tensor): start offset
            size (int): required size of position encoding

        Returns:
            torch.Tensor: Corresponding encoding
        """
        # How to subscript a Union type:
        # https://github.com/pytorch/pytorch/issues/69434
        if isinstance(offset, int):
            assert offset + size <= self.max_len
            pos_emb = self.pe[:, offset:offset + size]
        elif isinstance(offset, torch.Tensor) and offset.dim() == 0:  # scalar
            assert offset + size <= self.max_len
            pos_emb = self.pe[:, offset:offset + size]
        else:  # for batched streaming decoding on GPU
            assert torch.max(offset) + size <= self.max_len
            index = offset.unsqueeze(1) + \
                torch.arange(0, size).to(offset.device)  # B X T
            flag = index > 0
            # remove negative offset
            index = index * flag
            pos_emb = F.embedding(index, self.pe[0])  # B X T X d_model

        if apply_dropout:
            pos_emb = self.dropout(pos_emb)
        return pos_emb


class RelPositionalEncoding(PositionalEncoding):
    """Relative positional encoding module.
    See: Appendix B in https://arxiv.org/abs/1901.02860
    Args:
        d_model (int): Embedding dimension.
        dropout_rate (float): Dropout rate.
        max_len (int): Maximum input length.
    """

    def __init__(self, d_model: int, dropout_rate: float, max_len: int = 5000):
        """Initialize class."""
        super().__init__(d_model, dropout_rate, max_len, reverse=True)

    def forward(self,
                x: torch.Tensor,
                offset: Union[int, torch.Tensor] = 0) \
            -> Tuple[torch.Tensor, torch.Tensor]:
        """Compute positional encoding.
        Args:
            x (torch.Tensor): Input tensor (batch, time, `*`).
        Returns:
            torch.Tensor: Encoded tensor (batch, time, `*`).
            torch.Tensor: Positional embedding tensor (1, time, `*`).
        """
        self.pe = self.pe.to(x.device)
        x = x * self.xscale
        pos_emb = self.position_encoding(offset, x.size(1), False)
        return self.dropout(x), self.dropout(pos_emb)


class WhisperPositionalEncoding(PositionalEncoding):
    """Sinusoids position encoding used in openai-whisper.encoder."""

    def __init__(self, d_model: int, dropout_rate: float, max_len: int = 1500):
        super().__init__(d_model, dropout_rate, max_len)
        self.xscale = 1.0
        log_timescale_increment = np.log(10000) / (d_model // 2 - 1)
        inv_timescales = torch.exp(-log_timescale_increment *
                                   torch.arange(d_model // 2))
        scaled_time = torch.arange(max_len)[:, np.newaxis] * \
            inv_timescales[np.newaxis, :]
        pe = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], dim=1)
|
||||
delattr(self, "pe")
|
||||
self.register_buffer("pe", pe.unsqueeze(0))
|
||||
|
||||
|
||||
class LearnablePositionalEncoding(PositionalEncoding):
|
||||
""" Learnable position encoding used in openai-whisper.decoder
|
||||
"""
|
||||
|
||||
def __init__(self, d_model: int, dropout_rate: float, max_len: int = 448):
|
||||
super().__init__(d_model, dropout_rate, max_len)
|
||||
# NOTE(xcsong): overwrite self.pe & self.xscale
|
||||
self.pe = torch.nn.Parameter(torch.empty(1, max_len, d_model))
|
||||
self.xscale = 1.0
|
||||
|
||||
|
||||
class NoPositionalEncoding(torch.nn.Module):
|
||||
""" No position encoding
|
||||
"""
|
||||
|
||||
def __init__(self, d_model: int, dropout_rate: float):
|
||||
super().__init__()
|
||||
self.d_model = d_model
|
||||
self.dropout = torch.nn.Dropout(p=dropout_rate)
|
||||
|
||||
def forward(self,
|
||||
x: torch.Tensor,
|
||||
offset: Union[int, torch.Tensor] = 0) \
|
||||
-> Tuple[torch.Tensor, torch.Tensor]:
|
||||
""" Just return zero vector for interface compatibility
|
||||
"""
|
||||
pos_emb = torch.zeros(1, x.size(1), self.d_model).to(x.device)
|
||||
return self.dropout(x), pos_emb
|
||||
|
||||
def position_encoding(self, offset: Union[int, torch.Tensor],
|
||||
size: int) -> torch.Tensor:
|
||||
return torch.zeros(1, size, self.d_model)
|
||||
|
||||
|
||||
class EspnetRelPositionalEncoding(torch.nn.Module):
|
||||
"""Relative positional encoding module (new implementation).
|
||||
|
||||
Details can be found in https://github.com/espnet/espnet/pull/2816.
|
||||
|
||||
See : Appendix B in https://arxiv.org/abs/1901.02860
|
||||
|
||||
Args:
|
||||
d_model (int): Embedding dimension.
|
||||
dropout_rate (float): Dropout rate.
|
||||
max_len (int): Maximum input length.
|
||||
|
||||
"""
|
||||
|
||||
def __init__(self, d_model: int, dropout_rate: float, max_len: int = 5000):
|
||||
"""Construct an PositionalEncoding object."""
|
||||
super(EspnetRelPositionalEncoding, self).__init__()
|
||||
self.d_model = d_model
|
||||
self.xscale = math.sqrt(self.d_model)
|
||||
self.dropout = torch.nn.Dropout(p=dropout_rate)
|
||||
self.pe = None
|
||||
self.extend_pe(torch.tensor(0.0).expand(1, max_len))
|
||||
|
||||
def extend_pe(self, x: torch.Tensor):
|
||||
"""Reset the positional encodings."""
|
||||
if self.pe is not None:
|
||||
# self.pe contains both positive and negative parts
|
||||
# the length of self.pe is 2 * input_len - 1
|
||||
if self.pe.size(1) >= x.size(1) * 2 - 1:
|
||||
if self.pe.dtype != x.dtype or self.pe.device != x.device:
|
||||
self.pe = self.pe.to(dtype=x.dtype, device=x.device)
|
||||
return
|
||||
# Suppose `i` means to the position of query vecotr and `j` means the
|
||||
# position of key vector. We use position relative positions when keys
|
||||
# are to the left (i>j) and negative relative positions otherwise (i<j).
|
||||
pe_positive = torch.zeros(x.size(1), self.d_model)
|
||||
pe_negative = torch.zeros(x.size(1), self.d_model)
|
||||
position = torch.arange(0, x.size(1), dtype=torch.float32).unsqueeze(1)
|
||||
div_term = torch.exp(
|
||||
torch.arange(0, self.d_model, 2, dtype=torch.float32)
|
||||
* -(math.log(10000.0) / self.d_model)
|
||||
)
|
||||
pe_positive[:, 0::2] = torch.sin(position * div_term)
|
||||
pe_positive[:, 1::2] = torch.cos(position * div_term)
|
||||
pe_negative[:, 0::2] = torch.sin(-1 * position * div_term)
|
||||
pe_negative[:, 1::2] = torch.cos(-1 * position * div_term)
|
||||
|
||||
# Reserve the order of positive indices and concat both positive and
|
||||
# negative indices. This is used to support the shifting trick
|
||||
# as in https://arxiv.org/abs/1901.02860
|
||||
pe_positive = torch.flip(pe_positive, [0]).unsqueeze(0)
|
||||
pe_negative = pe_negative[1:].unsqueeze(0)
|
||||
pe = torch.cat([pe_positive, pe_negative], dim=1)
|
||||
self.pe = pe.to(device=x.device, dtype=x.dtype)
|
||||
|
||||
def forward(self, x: torch.Tensor, offset: Union[int, torch.Tensor] = 0) \
|
||||
-> Tuple[torch.Tensor, torch.Tensor]:
|
||||
"""Add positional encoding.
|
||||
|
||||
Args:
|
||||
x (torch.Tensor): Input tensor (batch, time, `*`).
|
||||
|
||||
Returns:
|
||||
torch.Tensor: Encoded tensor (batch, time, `*`).
|
||||
|
||||
"""
|
||||
self.extend_pe(x)
|
||||
x = x * self.xscale
|
||||
pos_emb = self.position_encoding(size=x.size(1), offset=offset)
|
||||
return self.dropout(x), self.dropout(pos_emb)
|
||||
|
||||
def position_encoding(self,
|
||||
offset: Union[int, torch.Tensor],
|
||||
size: int) -> torch.Tensor:
|
||||
""" For getting encoding in a streaming fashion
|
||||
|
||||
Attention!!!!!
|
||||
we apply dropout only once at the whole utterance level in a none
|
||||
streaming way, but will call this function several times with
|
||||
increasing input size in a streaming scenario, so the dropout will
|
||||
be applied several times.
|
||||
|
||||
Args:
|
||||
offset (int or torch.tensor): start offset
|
||||
size (int): required size of position encoding
|
||||
|
||||
Returns:
|
||||
torch.Tensor: Corresponding encoding
|
||||
"""
|
||||
# How to subscript a Union type:
|
||||
# https://github.com/pytorch/pytorch/issues/69434
|
||||
if isinstance(offset, int):
|
||||
pos_emb = self.pe[
|
||||
:,
|
||||
self.pe.size(1) // 2 - size - offset + 1: self.pe.size(1) // 2 + size + offset,
|
||||
]
|
||||
elif isinstance(offset, torch.Tensor):
|
||||
pos_emb = self.pe[
|
||||
:,
|
||||
self.pe.size(1) // 2 - size - offset + 1: self.pe.size(1) // 2 + size + offset,
|
||||
]
|
||||
return pos_emb
|
||||
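For reference, the sinusoidal table that `PositionalEncoding` precomputes into `self.pe` follows the standard Transformer formula, `pe[pos][2i] = sin(pos / 10000^(2i/d))` and `pe[pos][2i+1] = cos(pos / 10000^(2i/d))`. A minimal pure-Python sketch of that precomputation (the helper name `sinusoidal_pe` is illustrative, not from this file):

```python
import math

def sinusoidal_pe(max_len, d_model):
    """Build the standard sinusoidal table as a max_len x d_model list.

    Even columns hold sin(pos * exp(-ln(10000) * i / d_model)),
    odd columns hold the matching cos, mirroring the torch version above.
    """
    pe = [[0.0] * d_model for _ in range(max_len)]
    for pos in range(max_len):
        for i in range(0, d_model, 2):
            angle = pos * math.exp(-(math.log(10000.0) / d_model) * i)
            pe[pos][i] = math.sin(angle)
            pe[pos][i + 1] = math.cos(angle)
    return pe

pe = sinusoidal_pe(100, 8)
# position 0 encodes to sin(0)=0 on even dims and cos(0)=1 on odd dims
```

Because every position maps to a fixed row of this table, streaming decoding only needs to slice rows `offset:offset + size`, which is exactly what `position_encoding` does above.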
474	models/CosyVoice/cosyvoice/transformer/encoder.py	Normal file
@@ -0,0 +1,474 @@
# Copyright (c) 2021 Mobvoi Inc (Binbin Zhang, Di Wu)
#               2022 Xingchen Song (sxc19@mails.tsinghua.edu.cn)
#               2024 Alibaba Inc (Xiang Lyu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Modified from ESPnet(https://github.com/espnet/espnet)
"""Encoder definition."""
from typing import Tuple

import torch
import torch.utils.checkpoint as ckpt

from cosyvoice.transformer.convolution import ConvolutionModule
from cosyvoice.transformer.encoder_layer import TransformerEncoderLayer
from cosyvoice.transformer.encoder_layer import ConformerEncoderLayer
from cosyvoice.transformer.positionwise_feed_forward import PositionwiseFeedForward
from cosyvoice.utils.class_utils import (
    COSYVOICE_EMB_CLASSES,
    COSYVOICE_SUBSAMPLE_CLASSES,
    COSYVOICE_ATTENTION_CLASSES,
    COSYVOICE_ACTIVATION_CLASSES,
)
from cosyvoice.utils.mask import make_pad_mask
from cosyvoice.utils.mask import add_optional_chunk_mask


class BaseEncoder(torch.nn.Module):

    def __init__(
        self,
        input_size: int,
        output_size: int = 256,
        attention_heads: int = 4,
        linear_units: int = 2048,
        num_blocks: int = 6,
        dropout_rate: float = 0.1,
        positional_dropout_rate: float = 0.1,
        attention_dropout_rate: float = 0.0,
        input_layer: str = "conv2d",
        pos_enc_layer_type: str = "abs_pos",
        normalize_before: bool = True,
        static_chunk_size: int = 0,
        use_dynamic_chunk: bool = False,
        global_cmvn: torch.nn.Module = None,
        use_dynamic_left_chunk: bool = False,
        gradient_checkpointing: bool = False,
    ):
        """
        Args:
            input_size (int): input dim
            output_size (int): dimension of attention
            attention_heads (int): the number of heads of multi head attention
            linear_units (int): the hidden units number of position-wise feed
                forward
            num_blocks (int): the number of encoder blocks
            dropout_rate (float): dropout rate
            attention_dropout_rate (float): dropout rate in attention
            positional_dropout_rate (float): dropout rate after adding
                positional encoding
            input_layer (str): input layer type.
                optional [linear, conv2d, conv2d6, conv2d8]
            pos_enc_layer_type (str): Encoder positional encoding layer type.
                optional [abs_pos, scaled_abs_pos, rel_pos, no_pos]
            normalize_before (bool):
                True: use layer_norm before each sub-block of a layer.
                False: use layer_norm after each sub-block of a layer.
            static_chunk_size (int): chunk size for static chunk training and
                decoding
            use_dynamic_chunk (bool): whether to use dynamic chunk size for
                training or not. You can only use a fixed chunk
                (chunk_size > 0) or a dynamic chunk size
                (use_dynamic_chunk = True)
            global_cmvn (Optional[torch.nn.Module]): Optional GlobalCMVN module
            use_dynamic_left_chunk (bool): whether to use dynamic left chunk in
                dynamic chunk training
            gradient_checkpointing: rerun a forward-pass segment for each
                checkpointed segment during backward.
        """
        super().__init__()
        self._output_size = output_size

        self.global_cmvn = global_cmvn
        self.embed = COSYVOICE_SUBSAMPLE_CLASSES[input_layer](
            input_size,
            output_size,
            dropout_rate,
            COSYVOICE_EMB_CLASSES[pos_enc_layer_type](output_size,
                                                      positional_dropout_rate),
        )

        self.normalize_before = normalize_before
        self.after_norm = torch.nn.LayerNorm(output_size, eps=1e-5)
        self.static_chunk_size = static_chunk_size
        self.use_dynamic_chunk = use_dynamic_chunk
        self.use_dynamic_left_chunk = use_dynamic_left_chunk
        self.gradient_checkpointing = gradient_checkpointing

    def output_size(self) -> int:
        return self._output_size

    def forward(
        self,
        xs: torch.Tensor,
        xs_lens: torch.Tensor,
        decoding_chunk_size: int = 0,
        num_decoding_left_chunks: int = -1,
    ) -> Tuple[torch.Tensor, torch.Tensor]:
        """Embed positions in tensor.

        Args:
            xs: padded input tensor (B, T, D)
            xs_lens: input length (B)
            decoding_chunk_size: decoding chunk size for dynamic chunk
                0: default for training, use random dynamic chunk.
                <0: for decoding, use full chunk.
                >0: for decoding, use fixed chunk size as set.
            num_decoding_left_chunks: number of left chunks, this is for
                decoding, the chunk size is decoding_chunk_size.
                >=0: use num_decoding_left_chunks
                <0: use all left chunks
        Returns:
            encoder output tensor xs, and subsampled masks
            xs: padded output tensor (B, T' ~= T/subsample_rate, D)
            masks: torch.Tensor batch padding mask after subsample
                (B, 1, T' ~= T/subsample_rate)
        NOTE(xcsong):
            We pass the `__call__` method of the modules instead of `forward` to the
            checkpointing API because `__call__` attaches all the hooks of the module.
            https://discuss.pytorch.org/t/any-different-between-model-input-and-model-forward-input/3690/2
        """
        T = xs.size(1)
        masks = ~make_pad_mask(xs_lens, T).unsqueeze(1)  # (B, 1, T)
        if self.global_cmvn is not None:
            xs = self.global_cmvn(xs)
        xs, pos_emb, masks = self.embed(xs, masks)
        mask_pad = masks  # (B, 1, T/subsample_rate)
        chunk_masks = add_optional_chunk_mask(xs, masks,
                                              self.use_dynamic_chunk,
                                              self.use_dynamic_left_chunk,
                                              decoding_chunk_size,
                                              self.static_chunk_size,
                                              num_decoding_left_chunks)
        if self.gradient_checkpointing and self.training:
            xs = self.forward_layers_checkpointed(xs, chunk_masks, pos_emb,
                                                  mask_pad)
        else:
            xs = self.forward_layers(xs, chunk_masks, pos_emb, mask_pad)
        if self.normalize_before:
            xs = self.after_norm(xs)
        # Here we assume the mask is not changed in encoder layers, so just
        # return the masks before encoder layers, and the masks will be used
        # for cross attention with decoder later
        return xs, masks

    def forward_layers(self, xs: torch.Tensor, chunk_masks: torch.Tensor,
                       pos_emb: torch.Tensor,
                       mask_pad: torch.Tensor) -> torch.Tensor:
        for layer in self.encoders:
            xs, chunk_masks, _, _ = layer(xs, chunk_masks, pos_emb, mask_pad)
        return xs

    @torch.jit.unused
    def forward_layers_checkpointed(self, xs: torch.Tensor,
                                    chunk_masks: torch.Tensor,
                                    pos_emb: torch.Tensor,
                                    mask_pad: torch.Tensor) -> torch.Tensor:
        for layer in self.encoders:
            xs, chunk_masks, _, _ = ckpt.checkpoint(layer.__call__, xs,
                                                    chunk_masks, pos_emb,
                                                    mask_pad)
        return xs

    @torch.jit.export
    def forward_chunk(
        self,
        xs: torch.Tensor,
        offset: int,
        required_cache_size: int,
        att_cache: torch.Tensor = torch.zeros(0, 0, 0, 0),
        cnn_cache: torch.Tensor = torch.zeros(0, 0, 0, 0),
        att_mask: torch.Tensor = torch.ones((0, 0, 0), dtype=torch.bool),
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """ Forward just one chunk

        Args:
            xs (torch.Tensor): chunk input, with shape (b=1, time, mel-dim),
                where `time == (chunk_size - 1) * subsample_rate +
                subsample.right_context + 1`
            offset (int): current offset in encoder output time stamp
            required_cache_size (int): cache size required for next chunk
                computation
                >=0: actual cache size
                <0: means all history cache is required
            att_cache (torch.Tensor): cache tensor for KEY & VALUE in
                transformer/conformer attention, with shape
                (elayers, head, cache_t1, d_k * 2), where
                `head * d_k == hidden-dim` and
                `cache_t1 == chunk_size * num_decoding_left_chunks`.
            cnn_cache (torch.Tensor): cache tensor for cnn_module in conformer,
                (elayers, b=1, hidden-dim, cache_t2), where
                `cache_t2 == cnn.lorder - 1`

        Returns:
            torch.Tensor: output of current input xs,
                with shape (b=1, chunk_size, hidden-dim).
            torch.Tensor: new attention cache required for next chunk, with
                dynamic shape (elayers, head, ?, d_k * 2)
                depending on required_cache_size.
            torch.Tensor: new conformer cnn cache required for next chunk, with
                same shape as the original cnn_cache.

        """
        assert xs.size(0) == 1
        # tmp_masks is just for interface compatibility
        tmp_masks = torch.ones(1,
                               xs.size(1),
                               device=xs.device,
                               dtype=torch.bool)
        tmp_masks = tmp_masks.unsqueeze(1)
        if self.global_cmvn is not None:
            xs = self.global_cmvn(xs)
        # NOTE(xcsong): Before embed, shape(xs) is (b=1, time, mel-dim)
        xs, pos_emb, _ = self.embed(xs, tmp_masks, offset)
        # NOTE(xcsong): After embed, shape(xs) is (b=1, chunk_size, hidden-dim)
        elayers, cache_t1 = att_cache.size(0), att_cache.size(2)
        chunk_size = xs.size(1)
        attention_key_size = cache_t1 + chunk_size
        pos_emb = self.embed.position_encoding(offset=offset - cache_t1,
                                               size=attention_key_size)
        if required_cache_size < 0:
            next_cache_start = 0
        elif required_cache_size == 0:
            next_cache_start = attention_key_size
        else:
            next_cache_start = max(attention_key_size - required_cache_size, 0)
        r_att_cache = []
        r_cnn_cache = []
        for i, layer in enumerate(self.encoders):
            # NOTE(xcsong): Before layer.forward
            #   shape(att_cache[i:i + 1]) is (1, head, cache_t1, d_k * 2),
            #   shape(cnn_cache[i]) is (b=1, hidden-dim, cache_t2)
            xs, _, new_att_cache, new_cnn_cache = layer(
                xs,
                att_mask,
                pos_emb,
                att_cache=att_cache[i:i + 1] if elayers > 0 else att_cache,
                cnn_cache=cnn_cache[i] if cnn_cache.size(0) > 0 else cnn_cache)
            # NOTE(xcsong): After layer.forward
            #   shape(new_att_cache) is (1, head, attention_key_size, d_k * 2),
            #   shape(new_cnn_cache) is (b=1, hidden-dim, cache_t2)
            r_att_cache.append(new_att_cache[:, :, next_cache_start:, :])
            r_cnn_cache.append(new_cnn_cache.unsqueeze(0))
        if self.normalize_before:
            xs = self.after_norm(xs)

        # NOTE(xcsong): shape(r_att_cache) is (elayers, head, ?, d_k * 2),
        #   ? may be larger than cache_t1, it depends on required_cache_size
        r_att_cache = torch.cat(r_att_cache, dim=0)
        # NOTE(xcsong): shape(r_cnn_cache) is (e, b=1, hidden-dim, cache_t2)
        r_cnn_cache = torch.cat(r_cnn_cache, dim=0)

        return (xs, r_att_cache, r_cnn_cache)

    @torch.jit.unused
    def forward_chunk_by_chunk(
        self,
        xs: torch.Tensor,
        decoding_chunk_size: int,
        num_decoding_left_chunks: int = -1,
    ) -> Tuple[torch.Tensor, torch.Tensor]:
        """ Forward input chunk by chunk with chunk_size like a streaming
            fashion

        Here we should pay special attention to the computation cache in the
        streaming style forward chunk by chunk. Three things should be taken
        into account for computation in the current network:
            1. transformer/conformer encoder layers output cache
            2. convolution in conformer
            3. convolution in subsampling

        However, we don't implement subsampling cache because:
            1. We can control the subsampling module to output the right
               result by overlapping input instead of caching left context,
               even though it wastes some computation; subsampling only takes
               a very small fraction of the computation in the whole model.
            2. Typically, there are several convolution layers with
               subsampling in the subsampling module, so it is tricky and
               complicated to cache across convolution layers with different
               subsampling rates.
            3. Currently, nn.Sequential is used to stack all the convolution
               layers in subsampling; we would need to rewrite it to make it
               work with cache, which is not preferred.
        Args:
            xs (torch.Tensor): (1, max_len, dim)
            decoding_chunk_size (int): decoding chunk size
        """
        assert decoding_chunk_size > 0
        # The model is trained by static or dynamic chunk
        assert self.static_chunk_size > 0 or self.use_dynamic_chunk
        subsampling = self.embed.subsampling_rate
        context = self.embed.right_context + 1  # Add current frame
        stride = subsampling * decoding_chunk_size
        decoding_window = (decoding_chunk_size - 1) * subsampling + context
        num_frames = xs.size(1)
        att_cache: torch.Tensor = torch.zeros((0, 0, 0, 0), device=xs.device)
        cnn_cache: torch.Tensor = torch.zeros((0, 0, 0, 0), device=xs.device)
        outputs = []
        offset = 0
        required_cache_size = decoding_chunk_size * num_decoding_left_chunks

        # Feed forward overlapping input step by step
        for cur in range(0, num_frames - context + 1, stride):
            end = min(cur + decoding_window, num_frames)
            chunk_xs = xs[:, cur:end, :]
            (y, att_cache,
             cnn_cache) = self.forward_chunk(chunk_xs, offset,
                                             required_cache_size, att_cache,
                                             cnn_cache)
            outputs.append(y)
            offset += y.size(1)
        ys = torch.cat(outputs, 1)
        masks = torch.ones((1, 1, ys.size(1)),
                           device=ys.device,
                           dtype=torch.bool)
        return ys, masks


class TransformerEncoder(BaseEncoder):
    """Transformer encoder module."""

    def __init__(
        self,
        input_size: int,
        output_size: int = 256,
        attention_heads: int = 4,
        linear_units: int = 2048,
        num_blocks: int = 6,
        dropout_rate: float = 0.1,
        positional_dropout_rate: float = 0.1,
        attention_dropout_rate: float = 0.0,
        input_layer: str = "conv2d",
        pos_enc_layer_type: str = "abs_pos",
        normalize_before: bool = True,
        static_chunk_size: int = 0,
        use_dynamic_chunk: bool = False,
        global_cmvn: torch.nn.Module = None,
        use_dynamic_left_chunk: bool = False,
        key_bias: bool = True,
        selfattention_layer_type: str = "selfattn",
        activation_type: str = "relu",
        gradient_checkpointing: bool = False,
    ):
        """ Construct TransformerEncoder

        See BaseEncoder for the meaning of each parameter.
        """
        super().__init__(input_size, output_size, attention_heads,
                         linear_units, num_blocks, dropout_rate,
                         positional_dropout_rate, attention_dropout_rate,
                         input_layer, pos_enc_layer_type, normalize_before,
                         static_chunk_size, use_dynamic_chunk, global_cmvn,
                         use_dynamic_left_chunk, gradient_checkpointing)
        activation = COSYVOICE_ACTIVATION_CLASSES[activation_type]()
        self.encoders = torch.nn.ModuleList([
            TransformerEncoderLayer(
                output_size,
                COSYVOICE_ATTENTION_CLASSES[selfattention_layer_type](attention_heads,
                                                                      output_size,
                                                                      attention_dropout_rate,
                                                                      key_bias),
                PositionwiseFeedForward(output_size, linear_units,
                                        dropout_rate, activation),
                dropout_rate, normalize_before) for _ in range(num_blocks)
        ])


class ConformerEncoder(BaseEncoder):
    """Conformer encoder module."""

    def __init__(
        self,
        input_size: int,
        output_size: int = 256,
        attention_heads: int = 4,
        linear_units: int = 2048,
        num_blocks: int = 6,
        dropout_rate: float = 0.1,
        positional_dropout_rate: float = 0.1,
        attention_dropout_rate: float = 0.0,
        input_layer: str = "conv2d",
        pos_enc_layer_type: str = "rel_pos",
        normalize_before: bool = True,
        static_chunk_size: int = 0,
        use_dynamic_chunk: bool = False,
        global_cmvn: torch.nn.Module = None,
        use_dynamic_left_chunk: bool = False,
        positionwise_conv_kernel_size: int = 1,
        macaron_style: bool = True,
        selfattention_layer_type: str = "rel_selfattn",
        activation_type: str = "swish",
        use_cnn_module: bool = True,
        cnn_module_kernel: int = 15,
        causal: bool = False,
        cnn_module_norm: str = "batch_norm",
        key_bias: bool = True,
        gradient_checkpointing: bool = False,
    ):
        """Construct ConformerEncoder

        Args:
            input_size to use_dynamic_chunk, see in BaseEncoder
            positionwise_conv_kernel_size (int): Kernel size of positionwise
                conv1d layer.
            macaron_style (bool): Whether to use macaron style for
                positionwise layer.
            selfattention_layer_type (str): Encoder attention layer type,
                the parameter has no effect now, it's just kept for
                configuration compatibility.
            activation_type (str): Encoder activation function type.
            use_cnn_module (bool): Whether to use convolution module.
            cnn_module_kernel (int): Kernel size of convolution module.
            causal (bool): whether to use causal convolution or not.
            key_bias: whether to use bias in attention.linear_k,
                False for whisper models.
        """
        super().__init__(input_size, output_size, attention_heads,
                         linear_units, num_blocks, dropout_rate,
                         positional_dropout_rate, attention_dropout_rate,
                         input_layer, pos_enc_layer_type, normalize_before,
                         static_chunk_size, use_dynamic_chunk, global_cmvn,
                         use_dynamic_left_chunk, gradient_checkpointing)
        activation = COSYVOICE_ACTIVATION_CLASSES[activation_type]()

        # self-attention module definition
        encoder_selfattn_layer_args = (
            attention_heads,
            output_size,
            attention_dropout_rate,
            key_bias,
        )
        # feed-forward module definition
        positionwise_layer_args = (
            output_size,
            linear_units,
            dropout_rate,
            activation,
        )
        # convolution module definition
        convolution_layer_args = (output_size, cnn_module_kernel, activation,
                                  cnn_module_norm, causal)

        self.encoders = torch.nn.ModuleList([
            ConformerEncoderLayer(
                output_size,
                COSYVOICE_ATTENTION_CLASSES[selfattention_layer_type](
                    *encoder_selfattn_layer_args),
                PositionwiseFeedForward(*positionwise_layer_args),
                PositionwiseFeedForward(
                    *positionwise_layer_args) if macaron_style else None,
                ConvolutionModule(
                    *convolution_layer_args) if use_cnn_module else None,
                dropout_rate,
                normalize_before,
            ) for _ in range(num_blocks)
        ])
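The window arithmetic that `forward_chunk_by_chunk` uses to walk over the input can be sketched standalone. A minimal sketch, assuming illustrative values for the subsampling rate and right context (e.g. a 4x conv2d subsampling with `right_context == 6`; the helper name `chunk_windows` is hypothetical):

```python
def chunk_windows(num_frames, decoding_chunk_size, subsampling, right_context):
    """Enumerate the (start, end) input windows fed to forward_chunk.

    Mirrors the loop in forward_chunk_by_chunk: consecutive windows
    overlap by `decoding_window - stride` frames so the subsampling
    convolutions see enough context without an explicit cache.
    """
    context = right_context + 1  # add the current frame
    stride = subsampling * decoding_chunk_size
    decoding_window = (decoding_chunk_size - 1) * subsampling + context
    windows = []
    for cur in range(0, num_frames - context + 1, stride):
        end = min(cur + decoding_window, num_frames)
        windows.append((cur, end))
    return windows

windows = chunk_windows(num_frames=67, decoding_chunk_size=4,
                        subsampling=4, right_context=6)
# → [(0, 19), (16, 35), (32, 51), (48, 67)]
```

Each window is 19 frames wide but the stride is only 16, so successive chunks overlap by 3 frames; this overlap is the "waste some computation instead of caching subsampling" trade-off described in the docstring above.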
236	models/CosyVoice/cosyvoice/transformer/encoder_layer.py	Normal file
@@ -0,0 +1,236 @@
|
||||
# Copyright (c) 2021 Mobvoi Inc (Binbin Zhang, Di Wu)
|
||||
# 2022 Xingchen Song (sxc19@mails.tsinghua.edu.cn)
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
# Modified from ESPnet(https://github.com/espnet/espnet)
|
||||
"""Encoder self-attention layer definition."""
|
||||
|
||||
from typing import Optional, Tuple
|
||||
|
||||
import torch
|
||||
from torch import nn
|
||||
|
||||
|
||||
class TransformerEncoderLayer(nn.Module):
|
||||
"""Encoder layer module.
|
||||
|
||||
Args:
|
||||
size (int): Input dimension.
|
||||
self_attn (torch.nn.Module): Self-attention module instance.
|
||||
`MultiHeadedAttention` or `RelPositionMultiHeadedAttention`
|
||||
instance can be used as the argument.
|
||||
feed_forward (torch.nn.Module): Feed-forward module instance.
|
||||
`PositionwiseFeedForward`, instance can be used as the argument.
|
||||
dropout_rate (float): Dropout rate.
|
||||
normalize_before (bool):
|
||||
True: use layer_norm before each sub-block.
|
||||
False: to use layer_norm after each sub-block.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
size: int,
|
||||
self_attn: torch.nn.Module,
|
||||
feed_forward: torch.nn.Module,
|
||||
dropout_rate: float,
|
||||
normalize_before: bool = True,
|
||||
):
|
||||
"""Construct an EncoderLayer object."""
|
||||
super().__init__()
|
||||
self.self_attn = self_attn
|
||||
self.feed_forward = feed_forward
|
||||
self.norm1 = nn.LayerNorm(size, eps=1e-12)
|
||||
self.norm2 = nn.LayerNorm(size, eps=1e-12)
|
||||
self.dropout = nn.Dropout(dropout_rate)
|
||||
self.size = size
|
||||
self.normalize_before = normalize_before
|
||||
|
||||
def forward(
|
||||
self,
|
||||
x: torch.Tensor,
|
||||
mask: torch.Tensor,
|
||||
pos_emb: torch.Tensor,
|
||||
mask_pad: torch.Tensor = torch.ones((0, 0, 0), dtype=torch.bool),
|
||||
att_cache: torch.Tensor = torch.zeros((0, 0, 0, 0)),
|
||||
cnn_cache: torch.Tensor = torch.zeros((0, 0, 0, 0)),
|
||||
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
|
||||
"""Compute encoded features.
|
||||
|
||||
Args:
|
||||
x (torch.Tensor): (#batch, time, size)
|
||||
mask (torch.Tensor): Mask tensor for the input (#batch, time,time),
|
||||
(0, 0, 0) means fake mask.
|
||||
pos_emb (torch.Tensor): just for interface compatibility
|
||||
to ConformerEncoderLayer
|
||||
mask_pad (torch.Tensor): does not used in transformer layer,
|
||||
just for unified api with conformer.
|
||||
att_cache (torch.Tensor): Cache tensor of the KEY & VALUE
|
||||
(#batch=1, head, cache_t1, d_k * 2), head * d_k == size.
|
||||
cnn_cache (torch.Tensor): Convolution cache in conformer layer
|
||||
(#batch=1, size, cache_t2), not used here, it's for interface
|
||||
compatibility to ConformerEncoderLayer.
|
||||
Returns:
|
||||
torch.Tensor: Output tensor (#batch, time, size).
|
||||
torch.Tensor: Mask tensor (#batch, time, time).
|
||||
torch.Tensor: att_cache tensor,
|
||||
(#batch=1, head, cache_t1 + time, d_k * 2).
|
||||
torch.Tensor: cnn_cahce tensor (#batch=1, size, cache_t2).
|
||||
|
||||
"""
|
||||
residual = x
|
||||
if self.normalize_before:
|
||||
x = self.norm1(x)
|
||||
x_att, new_att_cache = self.self_attn(x, x, x, mask, pos_emb=pos_emb, cache=att_cache)
|
||||
x = residual + self.dropout(x_att)
|
||||
if not self.normalize_before:
|
||||
x = self.norm1(x)
|
||||
|
||||
residual = x
|
||||
if self.normalize_before:
|
||||
x = self.norm2(x)
|
||||
x = residual + self.dropout(self.feed_forward(x))
|
||||
if not self.normalize_before:
|
||||
x = self.norm2(x)
|
||||
|
||||
fake_cnn_cache = torch.zeros((0, 0, 0), dtype=x.dtype, device=x.device)
|
||||
return x, mask, new_att_cache, fake_cnn_cache
|
||||
|
||||
|
||||
class ConformerEncoderLayer(nn.Module):
    """Encoder layer module.
    Args:
        size (int): Input dimension.
        self_attn (torch.nn.Module): Self-attention module instance.
            `MultiHeadedAttention` or `RelPositionMultiHeadedAttention`
            instance can be used as the argument.
        feed_forward (torch.nn.Module): Feed-forward module instance.
            `PositionwiseFeedForward` instance can be used as the argument.
        feed_forward_macaron (torch.nn.Module): Additional feed-forward module
            instance.
            `PositionwiseFeedForward` instance can be used as the argument.
        conv_module (torch.nn.Module): Convolution module instance.
            `ConvolutionModule` instance can be used as the argument.
        dropout_rate (float): Dropout rate.
        normalize_before (bool):
            True: use layer_norm before each sub-block.
            False: use layer_norm after each sub-block.
    """

    def __init__(
        self,
        size: int,
        self_attn: torch.nn.Module,
        feed_forward: Optional[nn.Module] = None,
        feed_forward_macaron: Optional[nn.Module] = None,
        conv_module: Optional[nn.Module] = None,
        dropout_rate: float = 0.1,
        normalize_before: bool = True,
    ):
        """Construct an EncoderLayer object."""
        super().__init__()
        self.self_attn = self_attn
        self.feed_forward = feed_forward
        self.feed_forward_macaron = feed_forward_macaron
        self.conv_module = conv_module
        self.norm_ff = nn.LayerNorm(size, eps=1e-12)  # for the FNN module
        self.norm_mha = nn.LayerNorm(size, eps=1e-12)  # for the MHA module
        if feed_forward_macaron is not None:
            self.norm_ff_macaron = nn.LayerNorm(size, eps=1e-12)
            self.ff_scale = 0.5
        else:
            self.ff_scale = 1.0
        if self.conv_module is not None:
            self.norm_conv = nn.LayerNorm(size, eps=1e-12)  # for the CNN module
            self.norm_final = nn.LayerNorm(
                size, eps=1e-12)  # for the final output of the block
        self.dropout = nn.Dropout(dropout_rate)
        self.size = size
        self.normalize_before = normalize_before

    def forward(
        self,
        x: torch.Tensor,
        mask: torch.Tensor,
        pos_emb: torch.Tensor,
        mask_pad: torch.Tensor = torch.ones((0, 0, 0), dtype=torch.bool),
        att_cache: torch.Tensor = torch.zeros((0, 0, 0, 0)),
        cnn_cache: torch.Tensor = torch.zeros((0, 0, 0, 0)),
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
        """Compute encoded features.

        Args:
            x (torch.Tensor): (#batch, time, size)
            mask (torch.Tensor): Mask tensor for the input (#batch, time, time),
                (0, 0, 0) means fake mask.
            pos_emb (torch.Tensor): positional encoding, must not be None
                for ConformerEncoderLayer.
            mask_pad (torch.Tensor): batch padding mask used for conv module,
                (#batch, 1, time), (0, 0, 0) means fake mask.
            att_cache (torch.Tensor): Cache tensor of the KEY & VALUE
                (#batch=1, head, cache_t1, d_k * 2), head * d_k == size.
            cnn_cache (torch.Tensor): Convolution cache in conformer layer
                (#batch=1, size, cache_t2)
        Returns:
            torch.Tensor: Output tensor (#batch, time, size).
            torch.Tensor: Mask tensor (#batch, time, time).
            torch.Tensor: att_cache tensor,
                (#batch=1, head, cache_t1 + time, d_k * 2).
            torch.Tensor: cnn_cache tensor (#batch, size, cache_t2).
        """

        # whether to use macaron style
        if self.feed_forward_macaron is not None:
            residual = x
            if self.normalize_before:
                x = self.norm_ff_macaron(x)
            x = residual + self.ff_scale * self.dropout(
                self.feed_forward_macaron(x))
            if not self.normalize_before:
                x = self.norm_ff_macaron(x)

        # multi-headed self-attention module
        residual = x
        if self.normalize_before:
            x = self.norm_mha(x)
        x_att, new_att_cache = self.self_attn(x, x, x, mask, pos_emb,
                                              att_cache)
        x = residual + self.dropout(x_att)
        if not self.normalize_before:
            x = self.norm_mha(x)

        # convolution module
        # Fake new cnn cache here, and then change it in conv_module
        new_cnn_cache = torch.zeros((0, 0, 0), dtype=x.dtype, device=x.device)
        if self.conv_module is not None:
            residual = x
            if self.normalize_before:
                x = self.norm_conv(x)
            x, new_cnn_cache = self.conv_module(x, mask_pad, cnn_cache)
            x = residual + self.dropout(x)

            if not self.normalize_before:
                x = self.norm_conv(x)

        # feed forward module
        residual = x
        if self.normalize_before:
            x = self.norm_ff(x)

        x = residual + self.ff_scale * self.dropout(self.feed_forward(x))
        if not self.normalize_before:
            x = self.norm_ff(x)

        if self.conv_module is not None:
            x = self.norm_final(x)

        return x, mask, new_att_cache, new_cnn_cache
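The `ff_scale` bookkeeping above can be illustrated with toy numbers (a minimal sketch with made-up values, not the module's API):

```python
# Macaron-style half-step residuals as in ConformerEncoderLayer:
# when feed_forward_macaron is set, ff_scale = 0.5, so each of the two
# feed-forward blocks adds half an update and together they contribute
# one full FFN-sized update around the attention/convolution blocks.
ff_scale = 0.5

def half_step(x, ffn_out):
    # residual connection with the half-step scale
    return x + ff_scale * ffn_out

x = 1.0
x = half_step(x, 0.8)  # macaron FFN before self-attention
x = half_step(x, 0.8)  # final FFN after attention/convolution
# net effect: 1.0 + 0.5 * 0.8 + 0.5 * 0.8 = 1.8, i.e. one full 0.8 update
```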
@@ -0,0 +1,96 @@
# Copyright (c) 2019 Shigeki Karita
#               2020 Mobvoi Inc (Binbin Zhang)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Label smoothing module."""

import torch
from torch import nn


class LabelSmoothingLoss(nn.Module):
    """Label-smoothing loss.

    In a standard CE loss, the label's data distribution is:
    [0,1,2] ->
    [
        [1.0, 0.0, 0.0],
        [0.0, 1.0, 0.0],
        [0.0, 0.0, 1.0],
    ]

    In the smoothed version of CE loss, some probability mass
    is taken from the true label prob (1.0) and divided
    among the other labels.

    e.g.
    smoothing=0.1
    [0,1,2] ->
    [
        [0.9, 0.05, 0.05],
        [0.05, 0.9, 0.05],
        [0.05, 0.05, 0.9],
    ]

    Args:
        size (int): the number of classes
        padding_idx (int): padding class id which will be ignored for loss
        smoothing (float): smoothing rate (0.0 means the conventional CE)
        normalize_length (bool):
            normalize loss by sequence length if True
            normalize loss by batch size if False
    """

    def __init__(self,
                 size: int,
                 padding_idx: int,
                 smoothing: float,
                 normalize_length: bool = False):
        """Construct a LabelSmoothingLoss object."""
        super(LabelSmoothingLoss, self).__init__()
        self.criterion = nn.KLDivLoss(reduction="none")
        self.padding_idx = padding_idx
        self.confidence = 1.0 - smoothing
        self.smoothing = smoothing
        self.size = size
        self.normalize_length = normalize_length

    def forward(self, x: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        """Compute loss between x and target.

        The model outputs and data labels tensors are flattened to
        (batch*seqlen, class) shape and a mask is applied to the
        padding part which should not be calculated for loss.

        Args:
            x (torch.Tensor): prediction (batch, seqlen, class)
            target (torch.Tensor):
                target signal masked with self.padding_id (batch, seqlen)
        Returns:
            loss (torch.Tensor): The KL loss, scalar float value
        """
        assert x.size(2) == self.size
        batch_size = x.size(0)
        x = x.view(-1, self.size)
        target = target.view(-1)
        # use zeros_like instead of torch.no_grad() for true_dist,
        # since no_grad() can not be exported by JIT
        true_dist = torch.zeros_like(x)
        true_dist.fill_(self.smoothing / (self.size - 1))
        ignore = target == self.padding_idx  # (B,)
        total = len(target) - ignore.sum().item()
        target = target.masked_fill(ignore, 0)  # avoid -1 index
        true_dist.scatter_(1, target.unsqueeze(1), self.confidence)
        kl = self.criterion(torch.log_softmax(x, dim=1), true_dist)
        denom = total if self.normalize_length else batch_size
        return kl.masked_fill(ignore.unsqueeze(1), 0).sum() / denom
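The scatter-based construction of `true_dist` reduces to simple arithmetic; here is a standalone pure-Python sketch of the docstring example (`smooth_labels` is a hypothetical helper for illustration, not the module's API):

```python
# Build the smoothed target distribution: the true class keeps
# 1 - smoothing, every other class gets smoothing / (size - 1).
def smooth_labels(targets, size, smoothing):
    off = smoothing / (size - 1)   # mass given to each wrong class
    conf = 1.0 - smoothing         # mass kept on the true class
    return [[conf if j == t else off for j in range(size)] for t in targets]

rows = smooth_labels([0, 1, 2], size=3, smoothing=0.1)
# rows matches the docstring example: [[0.9, 0.05, 0.05], ...]
```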
@@ -0,0 +1,115 @@
# Copyright (c) 2019 Shigeki Karita
#               2020 Mobvoi Inc (Binbin Zhang)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Positionwise feed forward layer definition."""

import torch


class PositionwiseFeedForward(torch.nn.Module):
    """Positionwise feed forward layer.

    The feed forward is applied on each position of the sequence.
    The output dim is the same as the input dim.

    Args:
        idim (int): Input dimension.
        hidden_units (int): The number of hidden units.
        dropout_rate (float): Dropout rate.
        activation (torch.nn.Module): Activation function
    """

    def __init__(
        self,
        idim: int,
        hidden_units: int,
        dropout_rate: float,
        activation: torch.nn.Module = torch.nn.ReLU(),
    ):
        """Construct a PositionwiseFeedForward object."""
        super(PositionwiseFeedForward, self).__init__()
        self.w_1 = torch.nn.Linear(idim, hidden_units)
        self.activation = activation
        self.dropout = torch.nn.Dropout(dropout_rate)
        self.w_2 = torch.nn.Linear(hidden_units, idim)

    def forward(self, xs: torch.Tensor) -> torch.Tensor:
        """Forward function.

        Args:
            xs: input tensor (B, L, D)
        Returns:
            output tensor, (B, L, D)
        """
        return self.w_2(self.dropout(self.activation(self.w_1(xs))))
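The `w_2(dropout(activation(w_1(x))))` pattern, applied independently at each position, can be sketched with toy weights (the matrices below are made-up numbers for illustration, not the module's API):

```python
# Position-wise FFN: project up (idim -> hidden), apply ReLU, project
# back down (hidden -> idim); output dimension equals input dimension.
def ffn(x, w1, w2):
    h = [max(0.0, sum(xi * w for xi, w in zip(x, row))) for row in w1]  # ReLU(w_1 x)
    return [sum(hi * w for hi, w in zip(h, row)) for row in w2]          # w_2 h

w1 = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # idim=2 -> hidden=3
w2 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]     # hidden=3 -> idim=2
y = ffn([2.0, -1.0], w1, w2)
# hidden = [2.0, 0.0, 1.0] after ReLU; output keeps idim = 2
```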


class MoEFFNLayer(torch.nn.Module):
    """
    Mixture of experts with positionwise feed forward layer.
    See also figure 1 in https://arxiv.org/pdf/2305.15663.pdf
    The output dim is the same as the input dim.

    Modified from https://github.com/Lightning-AI/lit-gpt/pull/823
        https://github.com/mistralai/mistral-src/blob/b46d6/moe_one_file_ref.py#L203-L219
    Args:
        n_expert: number of experts.
        n_expert_per_token: the actual number of experts used for each frame
        idim (int): Input dimension.
        hidden_units (int): The number of hidden units.
        dropout_rate (float): Dropout rate.
        activation (torch.nn.Module): Activation function
    """

    def __init__(
        self,
        n_expert: int,
        n_expert_per_token: int,
        idim: int,
        hidden_units: int,
        dropout_rate: float,
        activation: torch.nn.Module = torch.nn.ReLU(),
    ):
        super(MoEFFNLayer, self).__init__()
        self.gate = torch.nn.Linear(idim, n_expert, bias=False)
        self.experts = torch.nn.ModuleList(
            PositionwiseFeedForward(idim, hidden_units, dropout_rate,
                                    activation) for _ in range(n_expert))
        self.n_expert_per_token = n_expert_per_token

    def forward(self, xs: torch.Tensor) -> torch.Tensor:
        """Forward function.
        Args:
            xs: input tensor (B, L, D)
        Returns:
            output tensor, (B, L, D)
        """
        B, L, D = xs.size(
        )  # batch size, sequence length, embedding dimension (idim)
        xs = xs.view(-1, D)  # (B*L, D)
        router = self.gate(xs)  # (B*L, n_expert)
        logits, indices = torch.topk(
            router, self.n_expert_per_token
        )  # logits: (B*L, n_expert_per_token), indices: (B*L, n_expert_per_token)
        weights = torch.nn.functional.softmax(
            logits, dim=1,
            dtype=torch.float).to(dtype=xs.dtype)  # (B*L, n_expert_per_token)
        output = torch.zeros_like(xs)  # (B*L, D)
        for i, expert in enumerate(self.experts):
            mask = indices == i
            batch_idx, ith_expert = torch.where(mask)
            output[batch_idx] += weights[batch_idx, ith_expert, None] * expert(
                xs[batch_idx])
        return output.view(B, L, D)
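The routing above (keep the top-k gate scores per token, softmax over only the kept logits, then weight each selected expert's output) can be sketched in plain Python; `route` is a hypothetical helper for illustration, not part of the module:

```python
import math

# Top-k expert routing for a single token's router scores.
def route(router_scores, k):
    """Return [(expert_index, softmax_weight)] for the top-k scores."""
    top = sorted(range(len(router_scores)),
                 key=lambda i: router_scores[i], reverse=True)[:k]
    exps = [math.exp(router_scores[i]) for i in top]
    z = sum(exps)
    return [(i, e / z) for i, e in zip(top, exps)]

picks = route([0.1, 2.0, -1.0, 2.0], k=2)
# the two experts scoring 2.0 are selected; equal logits -> equal weights
```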
383 models/CosyVoice/cosyvoice/transformer/subsampling.py Normal file
@@ -0,0 +1,383 @@
# Copyright (c) 2021 Mobvoi Inc (Binbin Zhang, Di Wu)
#               2024 Alibaba Inc (Xiang Lyu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Modified from ESPnet(https://github.com/espnet/espnet)
"""Subsampling layer definition."""

from typing import Tuple, Union

import torch


class BaseSubsampling(torch.nn.Module):

    def __init__(self):
        super().__init__()
        self.right_context = 0
        self.subsampling_rate = 1

    def position_encoding(self, offset: Union[int, torch.Tensor],
                          size: int) -> torch.Tensor:
        return self.pos_enc.position_encoding(offset, size)

class EmbedinigNoSubsampling(BaseSubsampling):
    """Embedding input without subsampling."""

    def __init__(self, idim: int, odim: int, dropout_rate: float,
                 pos_enc_class: torch.nn.Module):
        super().__init__()
        self.embed = torch.nn.Embedding(idim, odim)
        self.pos_enc = pos_enc_class

    def forward(
        self,
        x: torch.Tensor,
        x_mask: torch.Tensor,
        offset: Union[int, torch.Tensor] = 0
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """Input x.

        Args:
            x (torch.Tensor): Input tensor (#batch, time, idim).
            x_mask (torch.Tensor): Input mask (#batch, 1, time).

        Returns:
            torch.Tensor: linear input tensor (#batch, time', odim),
                where time' = time.
            torch.Tensor: linear input mask (#batch, 1, time'),
                where time' = time.
        """
        x = self.embed(x)
        x, pos_emb = self.pos_enc(x, offset)
        return x, pos_emb, x_mask


class LinearNoSubsampling(BaseSubsampling):
    """Linear transform the input without subsampling.

    Args:
        idim (int): Input dimension.
        odim (int): Output dimension.
        dropout_rate (float): Dropout rate.
    """

    def __init__(self, idim: int, odim: int, dropout_rate: float,
                 pos_enc_class: torch.nn.Module):
        """Construct a linear object."""
        super().__init__()
        self.out = torch.nn.Sequential(
            torch.nn.Linear(idim, odim),
            torch.nn.LayerNorm(odim, eps=1e-5),
            torch.nn.Dropout(dropout_rate),
        )
        self.pos_enc = pos_enc_class
        self.right_context = 0
        self.subsampling_rate = 1

    def forward(
        self,
        x: torch.Tensor,
        x_mask: torch.Tensor,
        offset: Union[int, torch.Tensor] = 0
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """Input x.

        Args:
            x (torch.Tensor): Input tensor (#batch, time, idim).
            x_mask (torch.Tensor): Input mask (#batch, 1, time).

        Returns:
            torch.Tensor: linear input tensor (#batch, time', odim),
                where time' = time.
            torch.Tensor: linear input mask (#batch, 1, time'),
                where time' = time.
        """
        x = self.out(x)
        x, pos_emb = self.pos_enc(x, offset)
        return x, pos_emb, x_mask


class Conv1dSubsampling2(BaseSubsampling):
    """Convolutional 1D subsampling (to 1/2 length).
    It is designed for Whisper, ref:
    https://github.com/openai/whisper/blob/main/whisper/model.py

    Args:
        idim (int): Input dimension.
        odim (int): Output dimension.
        dropout_rate (float): Dropout rate.
    """

    def __init__(self, idim: int, odim: int, dropout_rate: float,
                 pos_enc_class: torch.nn.Module):
        """Construct a Conv1dSubsampling2 object."""
        super().__init__()
        self.conv = torch.nn.Sequential(
            torch.nn.Conv1d(idim, odim, kernel_size=3, padding=1),
            torch.nn.GELU(),
            torch.nn.Conv1d(odim, odim, kernel_size=3, stride=2, padding=1),
            torch.nn.GELU(),
        )
        self.pos_enc = pos_enc_class
        # The right context for every conv layer is computed by:
        # (kernel_size - 1) * frame_rate_of_this_layer
        self.subsampling_rate = 2
        # 4 = (3 - 1) * 1 + (3 - 1) * 1
        self.right_context = 4

    def forward(
        self,
        x: torch.Tensor,
        x_mask: torch.Tensor,
        offset: Union[int, torch.Tensor] = 0
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """Subsample x.

        Args:
            x (torch.Tensor): Input tensor (#batch, time, idim).
            x_mask (torch.Tensor): Input mask (#batch, 1, time).

        Returns:
            torch.Tensor: Subsampled tensor (#batch, time', odim),
                where time' = time // 2.
            torch.Tensor: Subsampled mask (#batch, 1, time'),
                where time' = time // 2.
            torch.Tensor: positional encoding
        """
        time = x.size(1)
        x = x.transpose(1, 2)  # (b, f, t)
        x = self.conv(x)
        x = x.transpose(1, 2)  # (b, t, f)
        x, pos_emb = self.pos_enc(x, offset)
        return x, pos_emb, x_mask[:, :, (time + 1) % 2::2]


class Conv2dSubsampling4(BaseSubsampling):
    """Convolutional 2D subsampling (to 1/4 length).

    Args:
        idim (int): Input dimension.
        odim (int): Output dimension.
        dropout_rate (float): Dropout rate.
    """

    def __init__(self, idim: int, odim: int, dropout_rate: float,
                 pos_enc_class: torch.nn.Module):
        """Construct a Conv2dSubsampling4 object."""
        super().__init__()
        self.conv = torch.nn.Sequential(
            torch.nn.Conv2d(1, odim, 3, 2),
            torch.nn.ReLU(),
            torch.nn.Conv2d(odim, odim, 3, 2),
            torch.nn.ReLU(),
        )
        self.out = torch.nn.Sequential(
            torch.nn.Linear(odim * (((idim - 1) // 2 - 1) // 2), odim))
        self.pos_enc = pos_enc_class
        # The right context for every conv layer is computed by:
        # (kernel_size - 1) * frame_rate_of_this_layer
        self.subsampling_rate = 4
        # 6 = (3 - 1) * 1 + (3 - 1) * 2
        self.right_context = 6

    def forward(
        self,
        x: torch.Tensor,
        x_mask: torch.Tensor,
        offset: Union[int, torch.Tensor] = 0
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """Subsample x.

        Args:
            x (torch.Tensor): Input tensor (#batch, time, idim).
            x_mask (torch.Tensor): Input mask (#batch, 1, time).

        Returns:
            torch.Tensor: Subsampled tensor (#batch, time', odim),
                where time' = time // 4.
            torch.Tensor: Subsampled mask (#batch, 1, time'),
                where time' = time // 4.
            torch.Tensor: positional encoding
        """
        x = x.unsqueeze(1)  # (b, c=1, t, f)
        x = self.conv(x)
        b, c, t, f = x.size()
        x = self.out(x.transpose(1, 2).contiguous().view(b, t, c * f))
        x, pos_emb = self.pos_enc(x, offset)
        return x, pos_emb, x_mask[:, :, 2::2][:, :, 2::2]
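The comment arithmetic here ("6 = (3 - 1) * 1 + (3 - 1) * 2", time' ≈ time // 4) can be checked with plain integer math; `conv_out_len` and `subsample4_len` are hypothetical helpers for illustration, not part of the module:

```python
# Length arithmetic for two stacked Conv2d(kernel=3, stride=2) layers,
# as used by Conv2dSubsampling4 (no padding).
def conv_out_len(t, kernel=3, stride=2):
    # output length of a valid (unpadded) strided convolution
    return (t - kernel) // stride + 1

def subsample4_len(t):
    return conv_out_len(conv_out_len(t))

# right context accumulates (kernel - 1) * frame_rate per conv layer
right_context = (3 - 1) * 1 + (3 - 1) * 2
```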


class Conv2dSubsampling6(BaseSubsampling):
    """Convolutional 2D subsampling (to 1/6 length).
    Args:
        idim (int): Input dimension.
        odim (int): Output dimension.
        dropout_rate (float): Dropout rate.
        pos_enc (torch.nn.Module): Custom position encoding layer.
    """

    def __init__(self, idim: int, odim: int, dropout_rate: float,
                 pos_enc_class: torch.nn.Module):
        """Construct a Conv2dSubsampling6 object."""
        super().__init__()
        self.conv = torch.nn.Sequential(
            torch.nn.Conv2d(1, odim, 3, 2),
            torch.nn.ReLU(),
            torch.nn.Conv2d(odim, odim, 5, 3),
            torch.nn.ReLU(),
        )
        self.linear = torch.nn.Linear(odim * (((idim - 1) // 2 - 2) // 3),
                                      odim)
        self.pos_enc = pos_enc_class
        # 10 = (3 - 1) * 1 + (5 - 1) * 2
        self.subsampling_rate = 6
        self.right_context = 10

    def forward(
        self,
        x: torch.Tensor,
        x_mask: torch.Tensor,
        offset: Union[int, torch.Tensor] = 0
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """Subsample x.
        Args:
            x (torch.Tensor): Input tensor (#batch, time, idim).
            x_mask (torch.Tensor): Input mask (#batch, 1, time).

        Returns:
            torch.Tensor: Subsampled tensor (#batch, time', odim),
                where time' = time // 6.
            torch.Tensor: Subsampled mask (#batch, 1, time'),
                where time' = time // 6.
            torch.Tensor: positional encoding
        """
        x = x.unsqueeze(1)  # (b, c, t, f)
        x = self.conv(x)
        b, c, t, f = x.size()
        x = self.linear(x.transpose(1, 2).contiguous().view(b, t, c * f))
        x, pos_emb = self.pos_enc(x, offset)
        return x, pos_emb, x_mask[:, :, 2::2][:, :, 4::3]


class Conv2dSubsampling8(BaseSubsampling):
    """Convolutional 2D subsampling (to 1/8 length).

    Args:
        idim (int): Input dimension.
        odim (int): Output dimension.
        dropout_rate (float): Dropout rate.
    """

    def __init__(self, idim: int, odim: int, dropout_rate: float,
                 pos_enc_class: torch.nn.Module):
        """Construct a Conv2dSubsampling8 object."""
        super().__init__()
        self.conv = torch.nn.Sequential(
            torch.nn.Conv2d(1, odim, 3, 2),
            torch.nn.ReLU(),
            torch.nn.Conv2d(odim, odim, 3, 2),
            torch.nn.ReLU(),
            torch.nn.Conv2d(odim, odim, 3, 2),
            torch.nn.ReLU(),
        )
        self.linear = torch.nn.Linear(
            odim * ((((idim - 1) // 2 - 1) // 2 - 1) // 2), odim)
        self.pos_enc = pos_enc_class
        self.subsampling_rate = 8
        # 14 = (3 - 1) * 1 + (3 - 1) * 2 + (3 - 1) * 4
        self.right_context = 14

    def forward(
        self,
        x: torch.Tensor,
        x_mask: torch.Tensor,
        offset: Union[int, torch.Tensor] = 0
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """Subsample x.

        Args:
            x (torch.Tensor): Input tensor (#batch, time, idim).
            x_mask (torch.Tensor): Input mask (#batch, 1, time).

        Returns:
            torch.Tensor: Subsampled tensor (#batch, time', odim),
                where time' = time // 8.
            torch.Tensor: Subsampled mask (#batch, 1, time'),
                where time' = time // 8.
            torch.Tensor: positional encoding
        """
        x = x.unsqueeze(1)  # (b, c, t, f)
        x = self.conv(x)
        b, c, t, f = x.size()
        x = self.linear(x.transpose(1, 2).contiguous().view(b, t, c * f))
        x, pos_emb = self.pos_enc(x, offset)
        return x, pos_emb, x_mask[:, :, 2::2][:, :, 2::2][:, :, 2::2]


class LegacyLinearNoSubsampling(BaseSubsampling):
    """Linear transform the input without subsampling.

    Args:
        idim (int): Input dimension.
        odim (int): Output dimension.
        dropout_rate (float): Dropout rate.
    """

    def __init__(self, idim: int, odim: int, dropout_rate: float,
                 pos_enc_class: torch.nn.Module):
        """Construct a linear object."""
        super().__init__()
        self.out = torch.nn.Sequential(
            torch.nn.Linear(idim, odim),
            torch.nn.LayerNorm(odim, eps=1e-5),
            torch.nn.Dropout(dropout_rate),
            torch.nn.ReLU(),
        )
        self.pos_enc = pos_enc_class
        self.right_context = 0
        self.subsampling_rate = 1

    def forward(
        self,
        x: torch.Tensor,
        x_mask: torch.Tensor,
        offset: Union[int, torch.Tensor] = 0
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """Input x.

        Args:
            x (torch.Tensor): Input tensor (#batch, time, idim).
            x_mask (torch.Tensor): Input mask (#batch, 1, time).

        Returns:
            torch.Tensor: linear input tensor (#batch, time', odim),
                where time' = time.
            torch.Tensor: linear input mask (#batch, 1, time'),
                where time' = time.
        """
        x = self.out(x)
        x, pos_emb = self.pos_enc(x, offset)
        return x, pos_emb, x_mask
321 models/CosyVoice/cosyvoice/transformer/upsample_encoder.py Normal file
@@ -0,0 +1,321 @@
# Copyright (c) 2021 Mobvoi Inc (Binbin Zhang, Di Wu)
#               2022 Xingchen Song (sxc19@mails.tsinghua.edu.cn)
#               2024 Alibaba Inc (Xiang Lyu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Modified from ESPnet(https://github.com/espnet/espnet)
"""Encoder definition."""
from typing import Tuple

import torch
from torch import nn
from torch.nn import functional as F

from cosyvoice.transformer.convolution import ConvolutionModule
from cosyvoice.transformer.encoder_layer import ConformerEncoderLayer
from cosyvoice.transformer.positionwise_feed_forward import PositionwiseFeedForward
from cosyvoice.utils.class_utils import (
    COSYVOICE_EMB_CLASSES,
    COSYVOICE_SUBSAMPLE_CLASSES,
    COSYVOICE_ATTENTION_CLASSES,
    COSYVOICE_ACTIVATION_CLASSES,
)
from cosyvoice.utils.mask import make_pad_mask
from cosyvoice.utils.mask import add_optional_chunk_mask

class Upsample1D(nn.Module):
    """A 1D upsampling layer with a convolution.

    Parameters:
        channels (`int`):
            number of channels in the inputs.
        out_channels (`int`):
            number of output channels.
        stride (`int`, default `2`):
            upsampling factor.
    """

    def __init__(self, channels: int, out_channels: int, stride: int = 2):
        super().__init__()
        self.channels = channels
        self.out_channels = out_channels
        self.stride = stride
        # In this mode, first repeat interpolate, then conv with stride=1
        self.conv = nn.Conv1d(self.channels, self.out_channels, stride * 2 + 1, stride=1, padding=0)

    def forward(self, inputs: torch.Tensor, input_lengths: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
        outputs = F.interpolate(inputs, scale_factor=float(self.stride), mode="nearest")
        outputs = F.pad(outputs, (self.stride * 2, 0), value=0.0)
        outputs = self.conv(outputs)
        return outputs, input_lengths * self.stride
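The `F.interpolate(..., mode="nearest")` step simply repeats each frame `stride` times along the time axis; a pure-Python sketch over a 1-D sequence (`upsample_nearest` is a hypothetical helper for illustration):

```python
# Nearest-neighbour upsampling: each element is repeated `stride` times,
# so the sequence length grows by the stride factor, matching the
# input_lengths * self.stride bookkeeping in Upsample1D.forward.
def upsample_nearest(seq, stride=2):
    return [v for v in seq for _ in range(stride)]

out = upsample_nearest([1, 2, 3], stride=2)
```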


class PreLookaheadLayer(nn.Module):
    def __init__(self, in_channels: int, channels: int, pre_lookahead_len: int = 1):
        super().__init__()
        self.in_channels = in_channels
        self.channels = channels
        self.pre_lookahead_len = pre_lookahead_len
        self.conv1 = nn.Conv1d(
            in_channels, channels,
            kernel_size=pre_lookahead_len + 1,
            stride=1, padding=0,
        )
        self.conv2 = nn.Conv1d(
            channels, in_channels,
            kernel_size=3, stride=1, padding=0,
        )

    def forward(self, inputs: torch.Tensor, context: torch.Tensor = torch.zeros(0, 0, 0)) -> torch.Tensor:
        """
        inputs: (batch_size, seq_len, channels)
        """
        outputs = inputs.transpose(1, 2).contiguous()
        context = context.transpose(1, 2).contiguous()
        # look ahead
        if context.size(2) == 0:
            outputs = F.pad(outputs, (0, self.pre_lookahead_len), mode='constant', value=0.0)
        else:
            assert self.training is False, 'you have passed context, make sure that you are running inference mode'
            assert context.size(2) == self.pre_lookahead_len
            outputs = F.pad(torch.concat([outputs, context], dim=2), (0, self.pre_lookahead_len - context.size(2)), mode='constant', value=0.0)
        outputs = F.leaky_relu(self.conv1(outputs))
        # outputs
        outputs = F.pad(outputs, (self.conv2.kernel_size[0] - 1, 0), mode='constant', value=0.0)
        outputs = self.conv2(outputs)
        outputs = outputs.transpose(1, 2).contiguous()

        # residual connection
        outputs = outputs + inputs
        return outputs


class UpsampleConformerEncoder(torch.nn.Module):

    def __init__(
        self,
        input_size: int,
        output_size: int = 256,
        attention_heads: int = 4,
        linear_units: int = 2048,
        num_blocks: int = 6,
        dropout_rate: float = 0.1,
        positional_dropout_rate: float = 0.1,
        attention_dropout_rate: float = 0.0,
        input_layer: str = "conv2d",
        pos_enc_layer_type: str = "rel_pos",
        normalize_before: bool = True,
        static_chunk_size: int = 0,
        use_dynamic_chunk: bool = False,
        global_cmvn: torch.nn.Module = None,
        use_dynamic_left_chunk: bool = False,
        positionwise_conv_kernel_size: int = 1,
        macaron_style: bool = True,
        selfattention_layer_type: str = "rel_selfattn",
        activation_type: str = "swish",
        use_cnn_module: bool = True,
        cnn_module_kernel: int = 15,
        causal: bool = False,
        cnn_module_norm: str = "batch_norm",
        key_bias: bool = True,
        gradient_checkpointing: bool = False,
    ):
        """
        Args:
            input_size (int): input dim
            output_size (int): dimension of attention
            attention_heads (int): the number of heads of multi head attention
            linear_units (int): the hidden units number of position-wise feed
                forward
            num_blocks (int): the number of decoder blocks
            dropout_rate (float): dropout rate
            attention_dropout_rate (float): dropout rate in attention
            positional_dropout_rate (float): dropout rate after adding
                positional encoding
            input_layer (str): input layer type.
                optional [linear, conv2d, conv2d6, conv2d8]
            pos_enc_layer_type (str): Encoder positional encoding layer type.
                optional [abs_pos, scaled_abs_pos, rel_pos, no_pos]
            normalize_before (bool):
                True: use layer_norm before each sub-block of a layer.
                False: use layer_norm after each sub-block of a layer.
            static_chunk_size (int): chunk size for static chunk training and
                decoding
            use_dynamic_chunk (bool): whether to use dynamic chunk size for
                training or not. You can only use a fixed chunk (chunk_size > 0)
                or a dynamic chunk size (use_dynamic_chunk = True)
            global_cmvn (Optional[torch.nn.Module]): Optional GlobalCMVN module
            use_dynamic_left_chunk (bool): whether to use dynamic left chunk in
                dynamic chunk training
            key_bias: whether to use bias in attention.linear_k, False for whisper models.
            gradient_checkpointing: rerunning a forward-pass segment for each
                checkpointed segment during backward.
        """
super().__init__()
|
||||
self._output_size = output_size
|
||||
|
||||
self.global_cmvn = global_cmvn
|
||||
self.embed = COSYVOICE_SUBSAMPLE_CLASSES[input_layer](
|
||||
input_size,
|
||||
output_size,
|
||||
dropout_rate,
|
||||
COSYVOICE_EMB_CLASSES[pos_enc_layer_type](output_size,
|
||||
positional_dropout_rate),
|
||||
)
|
||||
|
||||
self.normalize_before = normalize_before
|
||||
self.after_norm = torch.nn.LayerNorm(output_size, eps=1e-5)
|
||||
self.static_chunk_size = static_chunk_size
|
||||
self.use_dynamic_chunk = use_dynamic_chunk
|
||||
self.use_dynamic_left_chunk = use_dynamic_left_chunk
|
||||
self.gradient_checkpointing = gradient_checkpointing
|
||||
activation = COSYVOICE_ACTIVATION_CLASSES[activation_type]()
|
||||
# self-attention module definition
|
||||
encoder_selfattn_layer_args = (
|
||||
attention_heads,
|
||||
output_size,
|
||||
attention_dropout_rate,
|
||||
key_bias,
|
||||
)
|
||||
# feed-forward module definition
|
||||
positionwise_layer_args = (
|
||||
output_size,
|
||||
linear_units,
|
||||
dropout_rate,
|
||||
activation,
|
||||
)
|
||||
# convolution module definition
|
||||
convolution_layer_args = (output_size, cnn_module_kernel, activation,
|
||||
cnn_module_norm, causal)
|
||||
self.pre_lookahead_layer = PreLookaheadLayer(in_channels=512, channels=512, pre_lookahead_len=3)
|
||||
self.encoders = torch.nn.ModuleList([
|
||||
ConformerEncoderLayer(
|
||||
output_size,
|
||||
COSYVOICE_ATTENTION_CLASSES[selfattention_layer_type](
|
||||
*encoder_selfattn_layer_args),
|
||||
PositionwiseFeedForward(*positionwise_layer_args),
|
||||
PositionwiseFeedForward(
|
||||
*positionwise_layer_args) if macaron_style else None,
|
||||
ConvolutionModule(
|
||||
*convolution_layer_args) if use_cnn_module else None,
|
||||
dropout_rate,
|
||||
normalize_before,
|
||||
) for _ in range(num_blocks)
|
||||
])
|
||||
self.up_layer = Upsample1D(channels=512, out_channels=512, stride=2)
|
||||
self.up_embed = COSYVOICE_SUBSAMPLE_CLASSES[input_layer](
|
||||
input_size,
|
||||
output_size,
|
||||
dropout_rate,
|
||||
COSYVOICE_EMB_CLASSES[pos_enc_layer_type](output_size,
|
||||
positional_dropout_rate),
|
||||
)
|
||||
self.up_encoders = torch.nn.ModuleList([
|
||||
ConformerEncoderLayer(
|
||||
output_size,
|
||||
COSYVOICE_ATTENTION_CLASSES[selfattention_layer_type](
|
||||
*encoder_selfattn_layer_args),
|
||||
PositionwiseFeedForward(*positionwise_layer_args),
|
||||
PositionwiseFeedForward(
|
||||
*positionwise_layer_args) if macaron_style else None,
|
||||
ConvolutionModule(
|
||||
*convolution_layer_args) if use_cnn_module else None,
|
||||
dropout_rate,
|
||||
normalize_before,
|
||||
) for _ in range(4)
|
||||
])
|
||||
    def output_size(self) -> int:
        return self._output_size
    def forward(
        self,
        xs: torch.Tensor,
        xs_lens: torch.Tensor,
        context: torch.Tensor = torch.zeros(0, 0, 0),
        decoding_chunk_size: int = 0,
        num_decoding_left_chunks: int = -1,
        streaming: bool = False,
    ) -> Tuple[torch.Tensor, torch.Tensor]:
        """Embed positions in tensor.

        Args:
            xs: padded input tensor (B, T, D)
            xs_lens: input length (B)
            decoding_chunk_size: decoding chunk size for dynamic chunk
                0: default for training, use random dynamic chunk.
                <0: for decoding, use full chunk.
                >0: for decoding, use fixed chunk size as set.
            num_decoding_left_chunks: number of left chunks, this is for decoding,
                the chunk size is decoding_chunk_size.
                >=0: use num_decoding_left_chunks
                <0: use all left chunks
        Returns:
            encoder output tensor xs, and subsampled masks
            xs: padded output tensor (B, T' ~= T/subsample_rate, D)
            masks: torch.Tensor batch padding mask after subsample
                (B, 1, T' ~= T/subsample_rate)
        NOTE(xcsong):
            We pass the `__call__` method of the modules instead of `forward` to the
            checkpointing API because `__call__` attaches all the hooks of the module.
            https://discuss.pytorch.org/t/any-different-between-model-input-and-model-forward-input/3690/2
        """
        T = xs.size(1)
        masks = ~make_pad_mask(xs_lens, T).unsqueeze(1)  # (B, 1, T)
        if self.global_cmvn is not None:
            xs = self.global_cmvn(xs)
        xs, pos_emb, masks = self.embed(xs, masks)
        if context.size(1) != 0:
            assert self.training is False, 'you have passed context, make sure that you are running inference mode'
            context_masks = torch.ones(1, 1, context.size(1)).to(masks)
            context, _, _ = self.embed(context, context_masks, offset=xs.size(1))
        mask_pad = masks  # (B, 1, T/subsample_rate)
        chunk_masks = add_optional_chunk_mask(xs, masks, False, False, 0, self.static_chunk_size if streaming is True else 0, -1)
        # lookahead + conformer encoder
        xs = self.pre_lookahead_layer(xs, context=context)
        xs = self.forward_layers(xs, chunk_masks, pos_emb, mask_pad)

        # upsample + conformer encoder
        xs = xs.transpose(1, 2).contiguous()
        xs, xs_lens = self.up_layer(xs, xs_lens)
        xs = xs.transpose(1, 2).contiguous()
        T = xs.size(1)
        masks = ~make_pad_mask(xs_lens, T).unsqueeze(1)  # (B, 1, T)
        xs, pos_emb, masks = self.up_embed(xs, masks)
        mask_pad = masks  # (B, 1, T/subsample_rate)
        chunk_masks = add_optional_chunk_mask(xs, masks, False, False, 0, self.static_chunk_size * self.up_layer.stride if streaming is True else 0, -1)
        xs = self.forward_up_layers(xs, chunk_masks, pos_emb, mask_pad)

        if self.normalize_before:
            xs = self.after_norm(xs)
        # Here we assume the mask is not changed in encoder layers, so just
        # return the masks before encoder layers, and the masks will be used
        # for cross attention with decoder later
        return xs, masks
    def forward_layers(self, xs: torch.Tensor, chunk_masks: torch.Tensor,
                       pos_emb: torch.Tensor,
                       mask_pad: torch.Tensor) -> torch.Tensor:
        for layer in self.encoders:
            xs, chunk_masks, _, _ = layer(xs, chunk_masks, pos_emb, mask_pad)
        return xs
    def forward_up_layers(self, xs: torch.Tensor, chunk_masks: torch.Tensor,
                          pos_emb: torch.Tensor,
                          mask_pad: torch.Tensor) -> torch.Tensor:
        for layer in self.up_encoders:
            xs, chunk_masks, _, _ = layer(xs, chunk_masks, pos_emb, mask_pad)
        return xs
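
The forward pass of this encoder is largely length/mask bookkeeping around the two conformer stacks: build a padding mask from `xs_lens`, run the first stack, upsample the sequence by the stride of `Upsample1D` (2), then rebuild the mask from the new lengths before the second stack. A torch-free toy sketch of that bookkeeping (the helpers `make_pad_mask` and `upsample_lengths` here are simplified stand-ins that only track lengths, not the real tensor-valued modules):

```python
def make_pad_mask(lengths, max_len):
    # True marks padded positions; the encoder negates this to mark valid ones.
    return [[t >= n for t in range(max_len)] for n in lengths]

def upsample_lengths(lengths, stride=2):
    # A stride-2 upsample roughly doubles each valid sequence length.
    return [n * stride for n in lengths]

lengths = [3, 5]
# Valid-position mask, as in `~make_pad_mask(xs_lens, T)`.
masks = [[not p for p in row] for row in make_pad_mask(lengths, max(lengths))]
up = upsample_lengths(lengths)
print(masks[0])  # [True, True, True, False, False]
print(up)        # [6, 10]
```

After the upsample the mask must be recomputed from the new lengths, which is why `forward` calls `make_pad_mask` a second time before `up_embed`.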