Compare commits

10 Commits

| SHA1 |
|------|
| 0e3502c6f0 |
| a1604979f0 |
| 08221e48de |
| 42b5cc0c02 |
| 1717635bfd |
| 0a5a17402c |
| bc0fe9326a |
| 035ee29d72 |
| a6cc919e5c |
| 96a298e51c |
**`.gitignore`** (vendored, 1 line changed)

```diff
@@ -40,6 +40,7 @@ backend/uploads/
 backend/cookies/
 backend/user_data/
 backend/debug_screenshots/
+backend/keys/
 *_cookies.json

 # ============ 模型权重 ============
```

**`Docs/ALIPAY_DEPLOY.md`** (new file, 278 lines)
# Alipay Paid Membership Activation: Deployment Guide

This document covers the full deployment of the Alipay PC-website payment feature. After registering, users pay through Alipay and their membership is activated automatically, valid for one year.

---

## Prerequisites

- An Alipay enterprise or individual-merchant account
- An application created on the [Alipay Open Platform](https://open.alipay.com), with its APPID obtained
- The **"PC Website Payment"** product enabled for the application (the `alipay.trade.page.pay` API)
- HTTPS configured for the server domain (Alipay callbacks require a publicly reachable endpoint)

---

## Part 1: Alipay Open Platform Configuration

### 1. Create an application

Log in to https://open.alipay.com → Console → create an application (or use an existing one).

### 2. Enable the "PC Website Payment" product

Go to the application details → product binding / product management → add **"PC Website Payment"** → submit for review.

> **Note**: skipping this step causes the `ACQ.ACCESS_FORBIDDEN` error.

### 3. Generate a key pair

Go to the application details → development settings → API signing method → choose **RSA2 (SHA256)**:

1. Generate an RSA-2048 key pair with the official Alipay key tool
2. Upload the **application public key** to the Open Platform
3. After uploading, the platform displays the **Alipay public key** (`alipayPublicKey_RSA2`)

You end up with two things:

- **Application private key**: kept locally; the code uses it to sign requests
- **Alipay public key**: returned by the platform; the code uses it to verify callback signatures

> The application public key is only an intermediate artifact for uploading; the code does not need it.

---

## Part 2: Server Configuration

### 1. Place the key files

Save the keys in standard PEM format under `backend/keys/`:

```bash
mkdir -p /home/rongye/ProgramFiles/ViGent2/backend/keys
```

**`backend/keys/app_private_key.pem`** (application private key):

```
-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASC...(your private key)
...
-----END PRIVATE KEY-----
```

**`backend/keys/alipay_public_key.pem`** (Alipay public key):

```
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A...(the Alipay public key)
...
-----END PUBLIC KEY-----
```

#### PEM format requirements

The Alipay key tool exports a single line of plain text, which must be converted to standard PEM:

- Header and footer markers are required (`-----BEGIN/END ...-----`)
- The key body wraps every 64 characters
- The private-key header is `-----BEGIN PRIVATE KEY-----` (PKCS#8)
- The public-key header is `-----BEGIN PUBLIC KEY-----`

If you have a bare one-line key, convert it with:

```bash
# Format the private key (assuming the bare key is in raw_private.txt)
echo "-----BEGIN PRIVATE KEY-----" > app_private_key.pem
fold -w 64 raw_private.txt >> app_private_key.pem
echo "-----END PRIVATE KEY-----" >> app_private_key.pem

# Format the public key
echo "-----BEGIN PUBLIC KEY-----" > alipay_public_key.pem
fold -w 64 raw_public.txt >> alipay_public_key.pem
echo "-----END PUBLIC KEY-----" >> alipay_public_key.pem
```

> `backend/keys/` is in `.gitignore` and will not be committed to the repository.
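If you would rather do the re-wrapping in Python than with `fold`, the same conversion can be sketched as a small stdlib-only helper (not part of the project code; the function name is illustrative):

```python
import textwrap

def wrap_pem(bare_key: str, kind: str = "PRIVATE KEY") -> str:
    """Wrap a bare one-line base64 key into standard PEM format:
    BEGIN/END markers plus a body wrapped at 64 characters."""
    body = "\n".join(textwrap.wrap(bare_key.strip(), 64))
    return f"-----BEGIN {kind}-----\n{body}\n-----END {kind}-----\n"

# Example with a dummy, non-functional key string:
pem = wrap_pem("MIIEvQIBADAN" * 12)
print(pem.splitlines()[0])        # -----BEGIN PRIVATE KEY-----
print(len(pem.splitlines()[1]))   # 64
```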
### 2. Configure environment variables

Add to `backend/.env`:

```ini
# =============== Alipay configuration ===============
ALIPAY_APP_ID=your_app_id
ALIPAY_PRIVATE_KEY_PATH=/home/rongye/ProgramFiles/ViGent2/backend/keys/app_private_key.pem
ALIPAY_PUBLIC_KEY_PATH=/home/rongye/ProgramFiles/ViGent2/backend/keys/alipay_public_key.pem
ALIPAY_NOTIFY_URL=https://vigent.hbyrkj.top/api/payment/notify
ALIPAY_RETURN_URL=https://vigent.hbyrkj.top/pay
```

| Variable | Description |
|------|------|
| `ALIPAY_APP_ID` | Application APPID from the Alipay Open Platform |
| `ALIPAY_PRIVATE_KEY_PATH` | Absolute path to the application private-key PEM file |
| `ALIPAY_PUBLIC_KEY_PATH` | Absolute path to the Alipay public-key PEM file |
| `ALIPAY_NOTIFY_URL` | Asynchronous callback URL (server-to-server); must be reachable over public HTTPS |
| `ALIPAY_RETURN_URL` | Synchronous redirect URL (the page the browser returns to after payment) |

`config.py` also exposes a few tunables (they have defaults and normally need not go into `.env`):

| Variable | Default | Description |
|------|--------|------|
| `ALIPAY_SANDBOX` | `false` | Whether to use the sandbox environment |
| `PAYMENT_AMOUNT` | `999.00` | Membership price (CNY) |
| `PAYMENT_EXPIRE_DAYS` | `365` | Membership validity in days |

### 3. Create the database table

Run against the local Supabase instance via Docker:

```bash
docker exec -i supabase-db psql -U postgres -c "
CREATE TABLE IF NOT EXISTS orders (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID REFERENCES users(id) ON DELETE CASCADE,
    out_trade_no TEXT UNIQUE NOT NULL,
    amount DECIMAL(10, 2) NOT NULL DEFAULT 999.00,
    status TEXT DEFAULT 'pending' CHECK (status IN ('pending', 'paid', 'failed')),
    trade_no TEXT,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
    paid_at TIMESTAMP WITH TIME ZONE
);

CREATE INDEX IF NOT EXISTS idx_orders_user_id ON orders(user_id);
CREATE INDEX IF NOT EXISTS idx_orders_out_trade_no ON orders(out_trade_no);
"
```

### 4. Install dependencies

```bash
# Backend (inside the venv)
cd /home/rongye/ProgramFiles/ViGent2/backend
venv/bin/pip install python-alipay-sdk
```

> The frontend needs no additional dependencies.

### 5. Nginx configuration

Make sure Nginx proxies `/api/payment/notify` to the backend. If the existing configuration already covers the `/api/` prefix, no extra changes are needed:

```nginx
location /api/ {
    proxy_pass http://localhost:8006;
    # ... existing configuration
}
```

### 6. Restart the services

```bash
# Build the frontend
cd /home/rongye/ProgramFiles/ViGent2/frontend
npx next build

# Restart
pm2 restart vigent2-backend
pm2 restart vigent2-frontend
```

---

## Part 3: Going Live

Once testing passes, change the test amount in `backend/app/core/config.py` to the production price:

```python
PAYMENT_AMOUNT: float = 999.00  # production price
```

Or override it in `backend/.env`:

```ini
PAYMENT_AMOUNT=999.00
```

Then restart the backend:

```bash
pm2 restart vigent2-backend
```

---

## Payment Flow

```
User registers → logs in (password correct but is_active=false)
  → backend returns 403 + payment_token
  → frontend redirects to the /pay page
  → POST /api/payment/create-order → returns the Alipay checkout URL
  → frontend redirects to the Alipay checkout page (QR code, account login, balance, and other methods)
  → user completes the payment
  → Alipay asynchronously calls back POST /api/payment/notify
  → backend verifies the signature → updates the order → activates the user (is_active=true, expires_at=+365 days)
  → Alipay synchronously redirects back to /pay?out_trade_no=xxx
  → frontend polls GET /api/payment/status/{out_trade_no}
  → poll returns paid → success message → redirect to the login page
  → user logs in again → enters the system
```
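Because Alipay may deliver the notify callback more than once, the activation step in the flow above has to be idempotent. A minimal sketch of the order-number generation and the post-verification activation logic, using in-memory dicts in place of the real `orders` and `users` tables (all names here are illustrative, not the project's actual API):

```python
import uuid
from datetime import datetime, timedelta, timezone

def new_out_trade_no() -> str:
    """Generate a unique merchant order number, e.g. '20240101120000-ab12cd34'."""
    ts = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    return f"{ts}-{uuid.uuid4().hex[:8]}"

def handle_paid_notify(orders: dict, users: dict, out_trade_no: str, trade_no: str) -> str:
    """Mark an order as paid and activate its user; safe to call repeatedly."""
    order = orders.get(out_trade_no)
    if order is None:
        return "fail"                       # unknown order
    if order["status"] == "paid":
        return "success"                    # duplicate callback: ack, change nothing
    order["status"] = "paid"
    order["trade_no"] = trade_no
    order["paid_at"] = datetime.now(timezone.utc)
    user = users[order["user_id"]]
    user["is_active"] = True
    user["expires_at"] = datetime.now(timezone.utc) + timedelta(days=365)
    return "success"                        # Alipay expects the literal text "success"
```

In the real service the lookup and update run against the `orders` table, and this logic only executes after the RSA2 signature check on the callback parameters has passed.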
**PC Website Payment vs. Face-to-Face Payment**: PC Website Payment (`alipay.trade.page.pay`) redirects to the official Alipay checkout page, where the user can pay by QR code, Alipay account login, balance, and more, which is the better experience. Face-to-Face Payment (`alipay.trade.precreate`) only generates a single QR code, so scanning is the only option.

Membership renewal follows the same flow: expiry is detected at login → the backend returns PAYMENT_REQUIRED → redirect to /pay.

Manual activation by an administrator is unaffected; both paths coexist.

---

## Files Involved

| File | Change | Description |
|------|---------|------|
| `backend/requirements.txt` | modified | Add `python-alipay-sdk` |
| `backend/database/schema.sql` | modified | New `orders` table |
| `backend/app/core/config.py` | modified | Alipay configuration options |
| `backend/app/core/security.py` | modified | payment_token functions |
| `backend/app/core/deps.py` | modified | is_active safety fallback |
| `backend/app/repositories/orders.py` | new | orders data layer |
| `backend/app/modules/payment/__init__.py` | new | Module init |
| `backend/app/modules/payment/schemas.py` | new | Request/response models |
| `backend/app/modules/payment/service.py` | new | Payment business logic (PC website payment) |
| `backend/app/modules/payment/router.py` | new | 3 API endpoints |
| `backend/app/modules/auth/router.py` | modified | Login returns PAYMENT_REQUIRED |
| `backend/app/main.py` | modified | Register payment_router |
| `backend/.env` | modified | Alipay environment variables |
| `backend/keys/` | new | PEM key files |
| `frontend/src/shared/lib/auth.ts` | modified | login() handles paymentToken |
| `frontend/src/shared/api/axios.ts` | modified | Add /pay to PUBLIC_PATHS |
| `frontend/src/app/login/page.tsx` | modified | paymentToken redirect |
| `frontend/src/app/register/page.tsx` | modified | Post-registration success copy |
| `frontend/src/app/pay/page.tsx` | new | Payment page (redirects to the Alipay checkout) |

---

## FAQ

### RSA key format is not supported

The key file is missing the PEM header/footer markers or is not wrapped at 64 characters. Reformat it as described under "PEM format requirements".

### ACQ.ACCESS_FORBIDDEN

The application has not enabled the "PC Website Payment" product. Add and enable it under Alipay Open Platform → application details → product management.

### Alipay callbacks never arrive

1. Check that `ALIPAY_NOTIFY_URL` is reachable over public HTTPS
2. Check that Nginx proxies `/api/payment/notify` to the backend
3. If the callback times out (no response within 15 s), Alipay retries: 8 attempts over 24 hours

### The page does not redirect back after payment

Check that `ALIPAY_RETURN_URL` is correct: it must be the full URL of the frontend `/pay` page (e.g. `https://vigent.hbyrkj.top/pay`). After the user pays, Alipay redirects the browser there with `out_trade_no` and other parameters attached.

### The frontend shows "network error" instead of the actual error

The API function lacked a try/catch around the axios exception. Fixed in `register()` and `login()` in `auth.ts`.
```diff
@@ -39,6 +39,7 @@ backend/
 │   │   ├── generated_audios/   # Pre-generated voiceover management (router/schemas/service)
 │   │   ├── login_helper/       # QR-code login helper
 │   │   ├── tools/              # Tool APIs (router/schemas/service)
+│   │   ├── payment/            # Alipay paid membership activation (router/schemas/service)
 │   │   └── admin/              # Admin features
 │   ├── repositories/           # Supabase data access
 │   ├── services/               # External service integrations
@@ -74,6 +75,18 @@ backend/
 - Errors are raised via `HTTPException`; the global exception handler returns `{success:false, message, code}`.
 - `detail` is no longer used for frontend error copy (the frontend now reads `message`).
+
+### `/api/videos/generate` parameter contract (key conventions)
+
+- Each `custom_assignments` item uses `material_path/start/end/source_start/source_end?`, based on the segments visible on the timeline.
+- `output_aspect_ratio` allows only `9:16` / `16:9` (default `9:16`).
+- Title display parameters:
+  - `title_display_mode`: `short` / `persistent` (default `short`)
+  - `title_duration`: default `4.0` seconds; effective only in `short` mode
+- Opening secondary-title parameters:
+  - `secondary_title`: secondary-title text (optional, max 20 characters); rendered only in the video frame, never in the publish title
+  - `secondary_title_style_id` / `secondary_title_font_size` / `secondary_title_top_margin`: secondary-title style settings
+- The workflow/remotion side must pass these fields through unchanged to avoid semantic drift between frontend and backend.
 
 ---
 
 ## 4. Authentication & Permissions
```
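For readers wiring up a client against the `/api/videos/generate` contract above, a minimal payload honoring those conventions might look like this (all values are illustrative, not taken from the project):

```python
# Illustrative /api/videos/generate payload fragment (hypothetical values).
payload = {
    "custom_assignments": [
        {"material_path": "uploads/clip1.mp4", "start": 0.0, "end": 4.5,
         "source_start": 2.0},
        # source_end is optional (the trailing "?" in the contract)
        {"material_path": "uploads/clip2.mp4", "start": 4.5, "end": 9.0,
         "source_start": 0.0, "source_end": 4.5},
    ],
    "output_aspect_ratio": "9:16",   # only "9:16" or "16:9"
    "title_display_mode": "short",   # "short" or "persistent"
    "title_duration": 4.0,           # seconds; only used in "short" mode
    "secondary_title": "Daily practice",  # optional, max 20 characters
}

# The contract's constraints, checked client-side:
assert payload["output_aspect_ratio"] in ("9:16", "16:9")
assert payload["title_display_mode"] in ("short", "persistent")
assert len(payload["secondary_title"]) <= 20
print("payload ok")
```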
```diff
@@ -143,6 +156,14 @@ backend/user_data/{user_uuid}/cookies/
 - `LATENTSYNC_*`
 - `CORS_ORIGINS` (CORS allowlist, default *)
+
+### MuseTalk / hybrid lip sync
+
+- `MUSETALK_GPU_ID` (GPU index, default 0)
+- `MUSETALK_API_URL` (resident-service address, default http://localhost:8011)
+- `MUSETALK_BATCH_SIZE` (inference batch size, default 32)
+- `MUSETALK_VERSION` (v15)
+- `MUSETALK_USE_FLOAT16` (half precision, default true)
+- `LIPSYNC_DURATION_THRESHOLD` (seconds; >= this value uses MuseTalk, default 120)
+
 ### WeChat Channels
 - `WEIXIN_HEADLESS_MODE` (headful/headless-new)
 - `WEIXIN_CHROME_PATH` / `WEIXIN_BROWSER_CHANNEL`
@@ -157,7 +178,13 @@ backend/user_data/{user_uuid}/cookies/
 - `DOUYIN_LOCALE` / `DOUYIN_TIMEZONE_ID`
 - `DOUYIN_FORCE_SWIFTSHADER`
 - `DOUYIN_DEBUG_ARTIFACTS` / `DOUYIN_RECORD_VIDEO` / `DOUYIN_KEEP_SUCCESS_VIDEO`
-- `DOUYIN_COOKIE` (Douyin video-download cookie)
+
+### Alipay
+- `ALIPAY_APP_ID` / `ALIPAY_PRIVATE_KEY_PATH` / `ALIPAY_PUBLIC_KEY_PATH`
+- `ALIPAY_NOTIFY_URL` / `ALIPAY_RETURN_URL`
+- `ALIPAY_SANDBOX` (sandbox mode, default false)
+- `PAYMENT_AMOUNT` (membership price, default 999.00)
+- `PAYMENT_EXPIRE_DAYS` (membership validity in days, default 365)
 
 ---
```
````diff
@@ -25,6 +25,7 @@ backend/
 │   │   ├── generated_audios/   # Pre-generated voiceover management (router/schemas/service)
 │   │   ├── login_helper/       # QR-code login helper
 │   │   ├── tools/              # Tool APIs (router/schemas/service)
+│   │   ├── payment/            # Alipay paid membership activation (router/schemas/service)
 │   │   └── admin/              # Admin features
 │   ├── repositories/           # Supabase data access
 │   ├── services/               # External service integrations (TTS/Remotion/Storage/Uploader, etc.)
@@ -51,6 +52,8 @@ backend/
 * `POST /api/auth/register`: user registration
 * `GET /api/auth/me`: get the current user
+
+> Authorization-expiry policy: during login and on protected endpoints, the backend checks `users.expires_at`. An expired account is automatically deactivated (`is_active=false`), its sessions are cleared, and the API returns `403: membership expired, please renew`.
 
 2. **Video generation (Videos)**
 * `POST /api/videos/generate`: submit a generation task
 * `GET /api/videos/tasks/{task_id}`: query a single task's status
@@ -77,10 +80,11 @@ backend/
 * `GET /api/assets/bgm`: background-music list
 
 6. **Voice cloning (Ref Audios)**
-* `POST /api/ref-audios`: upload a reference audio (multipart/form-data)
+* `POST /api/ref-audios`: upload a reference audio (multipart/form-data; ref_text auto-transcribed with Whisper)
 * `GET /api/ref-audios`: list reference audios
 * `PUT /api/ref-audios/{id}`: rename a reference audio
 * `DELETE /api/ref-audios/{id}`: delete a reference audio
+* `POST /api/ref-audios/{id}/retranscribe`: re-transcribe the reference audio (Whisper transcription; clips over 10 s are auto-trimmed)
 
 7. **AI features (AI)**
 * `POST /api/ai/generate-meta`: AI-generated title and tags
@@ -97,8 +101,15 @@ backend/
 * `POST /api/tools/extract-script`: extract the script from a video link
 
 10. **Health checks**
-* `GET /api/lipsync/health`: LatentSync service health
-* `GET /api/voiceclone/health`: Qwen3-TTS service health
+* `GET /api/lipsync/health`: lip-sync service health (LatentSync + MuseTalk + hybrid-routing threshold)
+* `GET /api/voiceclone/health`: CosyVoice 3.0 service health
+
+11. **Payment (Payment)**
+* `POST /api/payment/create-order`: create an Alipay PC-website payment order (requires payment_token)
+* `POST /api/payment/notify`: Alipay asynchronous notification callback (returns plain-text success/fail)
+* `GET /api/payment/status/{out_trade_no}`: query an order's payment status (polled by the frontend)
+
+> If the account is inactive or expired at login, the API returns 403 + `payment_token` and the frontend redirects to `/pay` to complete payment. See the [Alipay deployment guide](ALIPAY_DEPLOY.md).
 
 ### Unified response structure
 
@@ -123,19 +134,33 @@ backend/
 - `voice`: EdgeTTS voice ID (edgetts mode)
 - `ref_audio_id` / `ref_text`: reference-audio ID and text (voiceclone mode)
 - `generated_audio_id`: pre-generated voiceover ID (when present, inline TTS is skipped and the existing voiceover file is used)
-- `custom_assignments`: custom material-assignment array (each item has `material_path` / `start` / `end` / `source_start`); when present, Whisper even-splitting is skipped
-- `language`: TTS language (auto-detected by default; passed through to Qwen3-TTS for voice cloning)
+- `speed`: speech rate (voice-cloning mode; default 1.0, range 0.8-1.2)
+- `custom_assignments`: custom material-assignment array (each item has `material_path` / `start` / `end` / `source_start` / `source_end?`); when present, generation follows the visible timeline segments
+- `output_aspect_ratio`: output aspect ratio (`9:16` or `16:9`, default `9:16`)
+- `language`: TTS language (auto-detected by default; passed through to CosyVoice 3.0 for voice cloning)
 - `title`: opening title text
+- `title_display_mode`: title display mode (`short` / `persistent`, default `short`)
+- `title_duration`: title display duration (seconds, default `4.0`; `short` mode only)
 - `subtitle_style_id`: subtitle style ID
 - `title_style_id`: title style ID
 - `subtitle_font_size`: subtitle font size (overrides the style default)
 - `title_font_size`: title font size (overrides the style default)
 - `title_top_margin`: title offset from the top, in pixels
+- `secondary_title`: opening secondary-title text (optional, max 20 characters; shown only in the video frame)
+- `secondary_title_style_id`: secondary-title style ID
+- `secondary_title_font_size`: secondary-title font size
+- `secondary_title_top_margin`: gap between the secondary title and the main title
 - `subtitle_bottom_margin`: subtitle offset from the bottom, in pixels
 - `enable_subtitles`: whether subtitles are enabled
 - `bgm_id`: background-music ID
 - `bgm_volume`: background-music volume (0-1, default 0.2)
+
+### Multi-material stability notes
+
+- Multi-material segments are uniformly re-encoded before concatenation, forcing `25fps + CFR`, to reduce stutter caused by mismatched time bases at segment boundaries.
+- The concat pipeline enables `+genpts` to rebuild timestamps, improving timeline continuity after concatenation.
+- MOV materials carrying rotation metadata are orientation-normalized before resolution checks and the rest of the pipeline.
 
 ## 📦 Resource Library & Static Assets
 
 - Local asset directories: `backend/assets/{fonts,bgm,styles}`
@@ -177,6 +202,12 @@ GLM_API_KEY=your_glm_api_key
 
 # LatentSync configuration
 LATENTSYNC_GPU_ID=1
+
+# MuseTalk configuration (long-video lip sync)
+MUSETALK_GPU_ID=0
+MUSETALK_API_URL=http://localhost:8011
+MUSETALK_BATCH_SIZE=32
+LIPSYNC_DURATION_THRESHOLD=120
 ```
 
 ### 4. Start the services
@@ -199,6 +230,14 @@ uvicorn app.main:app --host 0.0.0.0 --port 8006 --reload
 3. **Important**: if the model occupies the GPU, always use an `asyncio.Lock` for concurrency control to prevent OOM.
 4. Create the module under `app/modules/` with router/service/schemas, and register the router in `main.py`.
+
+### Hybrid lip-sync routing
+
+`lipsync_service.py` implements hybrid LatentSync + MuseTalk routing:
+
+- Short videos (< `LIPSYNC_DURATION_THRESHOLD` s) → LatentSync 1.6 (GPU1, port 8007)
+- Long videos (>= threshold) → MuseTalk 1.5 (GPU0, port 8011)
+- When MuseTalk is unavailable, routing automatically falls back to LatentSync
+- The routing is fully transparent to the workflow
 
 ### Adding scheduled tasks
 
 Currently **APScheduler** or **Crontab** is recommended for managing scheduled tasks.
````
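The hybrid routing rule described above reduces to a small pure function. A sketch of the decision logic (a simplification of what `lipsync_service.py` presumably does; the function name is illustrative):

```python
LIPSYNC_DURATION_THRESHOLD = 120  # seconds; same default as the backend config

def pick_lipsync_backend(duration_s: float, musetalk_available: bool = True) -> str:
    """Route a clip to MuseTalk (long videos) or LatentSync (short videos).

    Falls back to LatentSync whenever MuseTalk is unavailable, so the
    caller (the workflow) never has to know which service ran.
    """
    if duration_s >= LIPSYNC_DURATION_THRESHOLD and musetalk_available:
        return "musetalk"    # GPU0, port 8011
    return "latentsync"      # GPU1, port 8007
```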
**`Docs/COSYVOICE3_DEPLOY.md`** (new file, 212 lines)

# CosyVoice 3.0 Deployment Guide

## Overview

| Item | Value |
|------|------|
| Model | Fun-CosyVoice3-0.5B-2512 (0.5B parameters) |
| Port | 8010 |
| GPU | 0 (CUDA_VISIBLE_DEVICES=0) |
| Inference precision | FP16 (automatic mixed precision) |
| PM2 name | vigent2-cosyvoice (id=15) |
| Conda environment | cosyvoice (Python 3.10) |
| Launch script | `run_cosyvoice.sh` |
| Service script | `models/CosyVoice/cosyvoice_server.py` |
| Model load time | ~22-34 s |
| VRAM usage | ~3-5 GB |

## Supported languages

Chinese, English, Japanese, Korean, German, Spanish, French, Italian, Russian, plus 18+ Chinese dialects.

## Directory layout

```
models/CosyVoice/
├── cosyvoice_server.py               # FastAPI service (port 8010)
├── cosyvoice/                        # CosyVoice source
│   └── cli/cosyvoice.py              # AutoModel entry point
├── third_party/Matcha-TTS/           # submodule dependency
├── pretrained_models/
│   ├── Fun-CosyVoice3-0.5B/          # model files (~8.2GB)
│   │   ├── llm.pt                    # LLM model (1.9GB)
│   │   ├── llm.rl.pt                 # RL model (1.9GB, spare)
│   │   ├── flow.pt                   # flow model (1.3GB)
│   │   ├── hift.pt                   # HiFT vocoder (80MB)
│   │   ├── campplus.onnx             # speaker embedding (27MB)
│   │   ├── speech_tokenizer_v3.onnx  # speech tokenizer (925MB)
│   │   ├── cosyvoice3.yaml           # model config
│   │   └── CosyVoice-BlankEN/        # Qwen tokenizer
│   └── CosyVoice-ttsfrd/             # text-normalization resources
│       ├── resource/                 # unpacked ttsfrd resources
│       └── resource.zip
run_cosyvoice.sh                      # PM2 launch script
```

## API

### GET /health

Health check; returns:

```json
{
  "service": "CosyVoice 3.0 Voice Clone",
  "model": "Fun-CosyVoice3-0.5B",
  "ready": true,
  "gpu_id": 0
}
```

### POST /generate

Voice-clone generation.

**Parameters (multipart/form-data):**

| Parameter | Type | Required | Description |
|------|------|------|------|
| ref_audio | File | yes | reference audio (WAV) |
| text | string | yes | text to synthesize |
| ref_text | string | yes | transcription of the reference audio |
| language | string | no | language (default "Chinese"; CosyVoice auto-detects) |
| speed | float | no | speech rate (default 1.0, range 0.5-2.0, recommended 0.8-1.2) |

**Returns:** a WAV audio file

**Status codes:**

- 200: success
- 429: GPU busy, retry
- 500: generation failed / timed out
- 503: model not loaded / service poisoned

## Safety mechanisms

1. **GPU inference lock** (`asyncio.Lock`): prevents concurrent inference from corrupting GPU state
2. **429 rejection**: while the lock is held, new requests are rejected immediately with 429 and the client retries
3. **Timeout guard**: `60 + len(text) * 2` seconds, capped at 300 s
4. **Poisoned flag**: after a timeout the service is marked poisoned and the health check returns `ready: false`
5. **Forced exit**: 1.5 s after a timeout the process calls `os._exit(1)` and PM2 restarts it
6. **Startup self-check**: at startup the service runs one real inference on a short text to verify the GPU inference path; on failure it sets `_model_loaded = False` and the health check returns `ready: false`, avoiding false positives
7. **Automatic reference-audio trimming**: reference audio longer than 10 s is trimmed to the first 10 s (CosyVoice recommends 3-10 s) to avoid sampling anomalies
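The timeout guard in item 3 can be written out explicitly. A minimal sketch of the formula as documented (the real server-side names may differ):

```python
def generation_timeout(text: str, base: int = 60, per_char: int = 2, cap: int = 300) -> int:
    """Timeout in seconds for one /generate call:
    60 s base plus 2 s per character of input text, capped at 300 s."""
    return min(base + per_char * len(text), cap)

print(generation_timeout("你好"))     # 64
print(generation_timeout("x" * 500))  # 300 (capped)
```

Item 5 then guarantees the process dies shortly after this deadline passes, so PM2 can bring up a clean instance instead of serving from a poisoned GPU.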
## Operations

```bash
# Start
pm2 start run_cosyvoice.sh --name vigent2-cosyvoice

# Restart
pm2 restart vigent2-cosyvoice

# Tail logs
pm2 logs vigent2-cosyvoice --lines 50

# Health check
curl http://localhost:8010/health

# Stop
pm2 stop vigent2-cosyvoice
```

## Deploying from scratch

### 1. Clone the repository

```bash
cd /home/rongye/ProgramFiles/ViGent2/models
git clone --recursive https://github.com/FunAudioLLM/CosyVoice.git
cd CosyVoice
git submodule update --init --recursive
```

### 2. Create the Conda environment

```bash
conda create -n cosyvoice -y python=3.10
conda activate cosyvoice
```

### 3. Install dependencies

Note: do not run `pip install -r requirements.txt` directly; several version conflicts must be handled manually.

```bash
# PyTorch 2.3.1 (CUDA 12.1) must be installed first; the version requirement is strict
pip install torch==2.3.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cu121

# Core inference dependencies
pip install conformer==0.3.2 HyperPyYAML==1.2.2 inflect==7.3.1 \
  librosa==0.10.2 lightning==2.2.4 modelscope==1.20.0 omegaconf==2.3.0 \
  pydantic==2.7.0 soundfile==0.12.1 fastapi==0.115.6 uvicorn==0.30.0 \
  transformers==4.51.3 protobuf==4.25 hydra-core==1.3.2 \
  rich==13.7.1 diffusers==0.29.0 x-transformers==2.11.24 wetext==0.0.4

# onnxruntime-gpu
pip install onnxruntime-gpu==1.18.0 \
  --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/

# Other required dependencies
pip install gdown matplotlib pyarrow wget onnx python-multipart httpx

# openai-whisper needs setuptools < 71 (which still provides pkg_resources)
pip install "setuptools<71"
pip install --no-build-isolation openai-whisper==20231117

# pyworld needs g++ and Cython
pip install Cython
PATH="/usr/bin:$PATH" pip install pyworld==0.3.4

# Critical version pins
pip install "numpy<2"           # onnxruntime-gpu is incompatible with numpy 2.x
pip install "ruamel.yaml<0.18"  # hyperpyyaml is incompatible with ruamel.yaml 0.18+
```

> **Important**: CosyVoice requires torch==2.3.1. torch 2.10+ causes CUBLAS_STATUS_INVALID_VALUE errors.
> torch 2.3.1+cu121 bundles nvidia-cudnn-cu12, so the onnxruntime CUDAExecutionProvider works normally.

### 4. Download the models

```bash
# Via huggingface_hub (use hf-mirror.com from mainland China)
HF_ENDPOINT=https://hf-mirror.com python -c "
from huggingface_hub import snapshot_download
snapshot_download('FunAudioLLM/Fun-CosyVoice3-0.5B-2512', local_dir='pretrained_models/Fun-CosyVoice3-0.5B')
snapshot_download('FunAudioLLM/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')
"
```

### 5. Install ttsfrd (optional; improves text-normalization quality)

```bash
cd pretrained_models/CosyVoice-ttsfrd/
unzip resource.zip -d .
pip install ttsfrd_dependency-0.1-py3-none-any.whl
pip install ttsfrd-0.4.2-cp310-cp310-linux_x86_64.whl
```

### 6. Register with PM2

```bash
pm2 start run_cosyvoice.sh --name vigent2-cosyvoice
pm2 save
```

## Known issues

1. **ttsfrd "prepare tts engine failed"**: an internal log from the ttsfrd C library; Python-level initialization succeeds, so usage is unaffected
2. **Sliding Window Attention warning**: emitted by the transformers library; does not affect inference results
3. **onnxruntime Memcpy performance notice**: `Memcpy nodes are not supported by the CUDA EP` is only a performance-advice log; no functional impact

> Note: the libcudnn.so.8 issue is resolved under torch 2.3.1+cu121 (which bundles nvidia-cudnn-cu12); the onnxruntime CUDAExecutionProvider loads normally.

## Comparison with Qwen3-TTS

| Feature | Qwen3-TTS (retired) | CosyVoice 3.0 (current) |
|------|-----------|----------------|
| Port | 8009 | 8010 |
| Model size | 0.6B | 0.5B |
| Languages | zh/en/ja/ko | 9 languages + 18 dialects |
| Cloning inputs | ref_audio + ref_text | ref_audio + ref_text |
| Prompt format | ref_text passed directly | `You are a helpful assistant.<\|endofprompt\|>` + ref_text |
| Built-in segmentation | none; the client must segment | built-in text_normalize auto-segmentation |
| Status | retired (PM2 stopped) | in production |
@@ -7,8 +7,8 @@
|
|||||||
| 服务器 | Dell PowerEdge R730 |
|
| 服务器 | Dell PowerEdge R730 |
|
||||||
| CPU | 2× Intel Xeon E5-2680 v4 (56 线程) |
|
| CPU | 2× Intel Xeon E5-2680 v4 (56 线程) |
|
||||||
| 内存 | 192GB DDR4 |
|
| 内存 | 192GB DDR4 |
|
||||||
| GPU 0 | NVIDIA RTX 3090 24GB |
|
| GPU 0 | NVIDIA RTX 3090 24GB (MuseTalk + CosyVoice) |
|
||||||
| GPU 1 | NVIDIA RTX 3090 24GB (用于 LatentSync) |
|
| GPU 1 | NVIDIA RTX 3090 24GB (LatentSync) |
|
||||||
| 部署路径 | `/home/rongye/ProgramFiles/ViGent2` |
|
| 部署路径 | `/home/rongye/ProgramFiles/ViGent2` |
|
||||||
|
|
||||||
---
|
---
|
||||||
@@ -72,7 +72,9 @@ cd /home/rongye/ProgramFiles/ViGent2
|
|||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
## 步骤 3: 部署 AI 模型 (LatentSync 1.6)
|
## 步骤 3: 部署 AI 模型
|
||||||
|
|
||||||
|
### 3a. LatentSync 1.6 (短视频唇形同步, GPU1)
|
||||||
|
|
||||||
> ⚠️ **重要**:LatentSync 需要独立的 Conda 环境和 **~18GB VRAM**。请**不要**直接安装在后端环境中。
|
> ⚠️ **重要**:LatentSync 需要独立的 Conda 环境和 **~18GB VRAM**。请**不要**直接安装在后端环境中。
|
||||||
|
|
||||||
@@ -93,6 +95,26 @@ conda activate latentsync
|
|||||||
python -m scripts.server # 测试能否启动,Ctrl+C 退出
|
python -m scripts.server # 测试能否启动,Ctrl+C 退出
|
||||||
```
|
```
|
||||||
|
|
||||||
|
### 3b. MuseTalk 1.5 (长视频唇形同步, GPU0)
|
||||||
|
|
||||||
|
> MuseTalk 是单步潜空间修复模型(非扩散模型),推理速度接近实时,适合 >=120s 的长视频。与 CosyVoice 共享 GPU0,fp16 推理约需 4-8GB 显存。
|
||||||
|
|
||||||
|
请参考详细的独立部署指南:
|
||||||
|
**[MuseTalk 部署指南](MUSETALK_DEPLOY.md)**
|
||||||
|
|
||||||
|
简要步骤:
|
||||||
|
1. 创建独立的 `musetalk` Conda 环境 (Python 3.10 + PyTorch 2.0.1 + CUDA 11.8)
|
||||||
|
2. 安装 mmcv/mmdet/mmpose 等依赖
|
||||||
|
3. 下载模型权重 (`download_weights.sh`)
|
||||||
|
4. 创建必要的软链接 (`musetalk/config.json`, `musetalk/musetalkV15`)
|
||||||
|
|
||||||
|
**验证 MuseTalk 部署**:
|
||||||
|
```bash
|
||||||
|
cd /home/rongye/ProgramFiles/ViGent2/models/MuseTalk
|
||||||
|
/home/rongye/ProgramFiles/miniconda3/envs/musetalk/bin/python scripts/server.py
|
||||||
|
# 另一个终端: curl http://localhost:8011/health
|
||||||
|
```
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
## 步骤 4: 安装后端依赖
|
## 步骤 4: 安装后端依赖
|
||||||
@@ -189,7 +211,7 @@ cp .env.example .env
|
|||||||
| `SUPABASE_PUBLIC_URL` | `https://api.hbyrkj.top` | Supabase API 公网地址 (前端访问) |
|
| `SUPABASE_PUBLIC_URL` | `https://api.hbyrkj.top` | Supabase API 公网地址 (前端访问) |
|
||||||
| `LATENTSYNC_GPU_ID` | 1 | GPU 选择 (0 或 1) |
|
| `LATENTSYNC_GPU_ID` | 1 | GPU 选择 (0 或 1) |
|
||||||
| `LATENTSYNC_USE_SERVER` | false | 设为 true 以启用常驻服务加速 |
|
| `LATENTSYNC_USE_SERVER` | false | 设为 true 以启用常驻服务加速 |
|
||||||
| `LATENTSYNC_INFERENCE_STEPS` | 20 | 推理步数 (20-50) |
|
| `LATENTSYNC_INFERENCE_STEPS` | 16 | 推理步数 (16-50) |
|
||||||
| `LATENTSYNC_GUIDANCE_SCALE` | 1.5 | 引导系数 (1.0-3.0) |
|
| `LATENTSYNC_GUIDANCE_SCALE` | 1.5 | 引导系数 (1.0-3.0) |
|
||||||
| `DEBUG` | true | 生产环境改为 false |
|
| `DEBUG` | true | 生产环境改为 false |
|
||||||
| `REDIS_URL` | `redis://localhost:6379/0` | 任务状态存储(不可用时回退内存) |
|
| `REDIS_URL` | `redis://localhost:6379/0` | 任务状态存储(不可用时回退内存) |
|
||||||
@@ -212,7 +234,21 @@ cp .env.example .env
|
|||||||
| `DOUYIN_RECORD_VIDEO` | false | 录制浏览器操作视频 |
|
| `DOUYIN_RECORD_VIDEO` | false | 录制浏览器操作视频 |
|
||||||
| `DOUYIN_KEEP_SUCCESS_VIDEO` | false | 成功后保留录屏 |
|
| `DOUYIN_KEEP_SUCCESS_VIDEO` | false | 成功后保留录屏 |
|
||||||
| `CORS_ORIGINS` | `*` | CORS 允许源 (生产环境建议白名单) |
|
| `CORS_ORIGINS` | `*` | CORS 允许源 (生产环境建议白名单) |
|
||||||
| `DOUYIN_COOKIE` | 空 | 抖音视频下载 Cookie (文案提取功能) |
|
| `MUSETALK_GPU_ID` | 0 | MuseTalk GPU 编号 |
|
||||||
|
| `MUSETALK_API_URL` | `http://localhost:8011` | MuseTalk 常驻服务地址 |
|
||||||
|
| `MUSETALK_BATCH_SIZE` | 32 | MuseTalk 推理批大小 |
|
||||||
|
| `MUSETALK_VERSION` | v15 | MuseTalk 模型版本 |
|
||||||
|
| `MUSETALK_USE_FLOAT16` | true | MuseTalk 半精度加速 |
|
||||||
|
| `LIPSYNC_DURATION_THRESHOLD` | 120 | 秒,>=此值用 MuseTalk,<此值用 LatentSync |
|
||||||
|
| `ALIPAY_APP_ID` | 空 | 支付宝应用 APPID |
|
||||||
|
| `ALIPAY_PRIVATE_KEY_PATH` | 空 | 应用私钥 PEM 文件路径 |
|
||||||
|
| `ALIPAY_PUBLIC_KEY_PATH` | 空 | 支付宝公钥 PEM 文件路径 |
|
||||||
|
| `ALIPAY_NOTIFY_URL` | 空 | 支付宝异步回调地址 (公网 HTTPS) |
|
||||||
|
| `ALIPAY_RETURN_URL` | 空 | 支付完成后浏览器跳转地址 |
|
||||||
|
| `PAYMENT_AMOUNT` | `999.00` | 会员价格 (元) |
|
||||||
|
| `PAYMENT_EXPIRE_DAYS` | `365` | 会员有效天数 |
|
||||||
|
|
||||||
|
> 支付宝完整配置步骤(密钥生成、PEM 格式、产品开通等)请参考 **[支付宝部署指南](ALIPAY_DEPLOY.md)**。
|
||||||
|
|
||||||
---
@@ -262,6 +298,13 @@ cd /home/rongye/ProgramFiles/ViGent2/models/LatentSync

```bash
conda activate latentsync
python -m scripts.server
```

### Start MuseTalk (terminal 4, long-video lip sync)

```bash
cd /home/rongye/ProgramFiles/ViGent2/models/MuseTalk
/home/rongye/ProgramFiles/miniconda3/envs/musetalk/bin/python scripts/server.py
```

### Verify
@@ -336,34 +379,48 @@ chmod +x run_latentsync.sh

```bash
pm2 start ./run_latentsync.sh --name vigent2-latentsync
```

### 4. Start the CosyVoice 3.0 voice-cloning service (optional)

> Required for the voice-cloning feature. See the [CosyVoice 3.0 deployment guide](COSYVOICE3_DEPLOY.md) for detailed steps.

1. The startup script lives in the project root: `run_cosyvoice.sh`

2. Start it with pm2:

```bash
cd /home/rongye/ProgramFiles/ViGent2
pm2 start ./run_cosyvoice.sh --name vigent2-cosyvoice
pm2 save
```

3. Verify the service:

```bash
# Check health status
curl http://localhost:8010/health
```

### 5. Start the MuseTalk long-video lip-sync service

> Long videos (>= 120 s) are routed to MuseTalk automatically; when MuseTalk is unavailable, the system falls back to LatentSync.

> See the [MuseTalk deployment guide](MUSETALK_DEPLOY.md) for detailed steps.

1. The startup script lives in the project root: `run_musetalk.sh`

2. Start it with pm2:

```bash
cd /home/rongye/ProgramFiles/ViGent2
pm2 start ./run_musetalk.sh --name vigent2-musetalk
pm2 save
```

3. Verify the service:

```bash
curl http://localhost:8011/health
# {"status":"ok","model_loaded":true}
```

### 6. Start the service watchdog (Watchdog)

> 🛡️ **Recommended**: monitors the health of the CosyVoice and LatentSync services and restarts them automatically when they hang.

```bash
cd /home/rongye/ProgramFiles/ViGent2
```
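The kind of health probe the watchdog performs against the `/health` endpoints can be sketched as follows (illustrative only; the project's actual watchdog script and restart logic may differ):

```python
import json
import urllib.request

SERVICES = {
    "cosyvoice": "http://localhost:8010/health",
    "musetalk": "http://localhost:8011/health",
}

def check_health(url: str, timeout: float = 5.0) -> bool:
    """Return True when the service answers with {"status": "ok", ...}."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.load(resp).get("status") == "ok"
    except (OSError, ValueError):
        # Connection refused, timeout, or malformed JSON → treat as down.
        return False

if __name__ == "__main__":
    for name, url in SERVICES.items():
        print(name, "up" if check_health(url) else "down")
```

A real watchdog would call `pm2 restart <name>` when a service stays down across several polls.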
@@ -378,13 +435,16 @@ pm2 save

```bash
pm2 startup
```

> **Tip**: the full PM2 process list should contain 5-6 services: vigent2-backend, vigent2-frontend, vigent2-latentsync, vigent2-cosyvoice, vigent2-musetalk, vigent2-watchdog.

### Common pm2 commands

```bash
pm2 status                    # status of all services
pm2 logs                      # tail all logs
pm2 logs vigent2-backend      # backend logs
pm2 logs vigent2-cosyvoice    # CosyVoice logs
pm2 logs vigent2-musetalk     # MuseTalk logs
pm2 restart all               # restart all services
pm2 stop vigent2-latentsync   # stop the LatentSync service
pm2 delete all                # remove all services
```
@@ -523,7 +583,8 @@ python3 -c "import torch; print(torch.cuda.is_available())"

```bash
sudo lsof -i :8006
sudo lsof -i :3002
sudo lsof -i :8007
sudo lsof -i :8010  # CosyVoice
sudo lsof -i :8011  # MuseTalk
```

### View logs
@@ -533,7 +594,8 @@ sudo lsof -i :8009  # Qwen3-TTS

```bash
pm2 logs vigent2-backend
pm2 logs vigent2-frontend
pm2 logs vigent2-latentsync
pm2 logs vigent2-cosyvoice
pm2 logs vigent2-musetalk
```

### SSH connection lag / slow system response
@@ -564,6 +626,7 @@ pm2 logs vigent2-qwen-tts

| `playwright` | automated social-media publishing |
| `biliup` | Bilibili video upload |
| `loguru` | logging |
| `python-alipay-sdk` | Alipay payment integration |

### Key frontend dependencies
@@ -328,11 +328,13 @@ interface TimelineSegment {

### Overview

Based on user feedback, fixed 6 UI issues, plus the voice-cloning service's SoX path problem and GPU-memory cache management.

> **Note**: Qwen3-TTS has since been replaced by CosyVoice 3.0 (port 8010); the notes below record the fix as done at the time.

---

### 1. Qwen3-TTS stability fixes (since replaced by CosyVoice 3.0)

#### 1.1 SoX PATH fix
@@ -348,6 +350,8 @@ export PATH="/home/rongye/ProgramFiles/miniconda3/envs/qwen-tts/bin:$PATH"

**Fix**: `qwen_tts_server.py` now calls `torch.cuda.empty_cache()` after every generation (success or failure) to stop GPU-memory fragmentation from accumulating, and runs inference in a thread pool via `asyncio.to_thread()` so the event loop is not blocked and health checks no longer time out.

> **Follow-up**: Qwen3-TTS has been retired; CosyVoice 3.0 keeps the same protections (GPU inference lock, timeout guard, memory cleanup, startup self-check).
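The thread-pool plus cache-cleanup pattern described above can be sketched as follows (a minimal sketch; `run_inference` is a hypothetical stand-in for the real blocking model call):

```python
import asyncio

def run_inference(text: str) -> bytes:
    # Hypothetical stand-in for the blocking TTS model call.
    return text.encode("utf-8")

async def generate(text: str) -> bytes:
    try:
        # Run the blocking call in a worker thread so the event loop
        # (and the /health endpoint) stays responsive.
        return await asyncio.to_thread(run_inference, text)
    finally:
        # Release cached GPU memory whether generation succeeded or failed.
        try:
            import torch
            torch.cuda.empty_cache()
        except ImportError:
            pass  # CPU-only environment; nothing to clean up

print(asyncio.run(generate("hello")))
```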
---

### 2. Unified dub-list button layout (feedback #1 + #6)
@@ -415,8 +419,8 @@ export PATH="/home/rongye/ProgramFiles/miniconda3/envs/qwen-tts/bin:$PATH"

| File | Change |
|------|------|
| `run_qwen_tts.sh` | exports the conda env bin onto PATH, fixing SoX not being found (now retired) |
| `models/Qwen3-TTS/qwen_tts_server.py` | `torch.cuda.empty_cache()` after every generation; `asyncio.to_thread` to avoid blocking (now retired) |

#### Frontend changes
@@ -544,3 +548,309 @@ next.splice(toIdx, 0, moved);

| `frontend/src/features/home/model/useHomeController.ts` | integrates useSavedScripts; adds handleSaveScript |
| `frontend/src/features/home/ui/HomePage.tsx` | passes savedScripts / handleSaveScript / deleteSavedScript to ScriptEditor |
| `frontend/src/features/home/model/useTimelineEditor.ts` | reorderSegments changed from property swap to array move (splice) |

---
## 🔤 Subtitle-language mismatch + video aspect-ratio misalignment fixes — Phase 5 (Day 23)

### Overview

Fixed two video-generation bugs:

1. **Subtitle-language mismatch**: Chinese dub + English translated script → subtitles wrongly showed English (Whisper transcribes independently and ignores the original text)
2. **Title/subtitle aspect misalignment**: videos generated from 9:16 portrait footage rendered titles/subtitles with the 16:9 landscape layout

Also fixed an English-space-loss bug in `split_word_to_chars` found during code review.

---

### 1. Subtitles use the original text instead of Whisper's transcription

#### Root cause

Whisper transcribes the audio independently and completely ignores the `text` parameter passed in. When the dub language differs from the editor script language (e.g. the user writes a Chinese script → translates it to English → generates an English dub → switches the script back to Chinese), Whisper "hears" English speech and emits English subtitles.

#### Fix strategy

Whisper is only responsible for detecting the **overall speech time range** (`first_start` → `last_end`); the subtitle text always comes from the original script saved with the dub.

#### `whisper_service.py` — `align()` gains an `original_text` parameter

```python
async def align(self, audio_path, text, output_path=None,
                language="zh", original_text=None):
```

When `original_text` is non-empty:

1. Run Whisper transcription as usual and record `whisper_first_start` and `whisper_last_end`
2. Pass `original_text` to `split_word_to_chars()` and distribute it linearly over the total time range
3. Break lines with `split_segment_to_lines()` by punctuation and character count
4. Replace Whisper's transcription result
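Step 2's linear distribution over the detected speech range can be sketched as follows (a minimal sketch; `distribute_tokens` is a hypothetical helper, not the project's actual function):

```python
def distribute_tokens(tokens, start, end):
    """Spread tokens evenly over [start, end): one (token, t0, t1) per token."""
    if not tokens:
        return []
    step = (end - start) / len(tokens)
    return [
        (tok, start + i * step, start + (i + 1) * step)
        for i, tok in enumerate(tokens)
    ]

# Four characters spread over a 2-second speech range detected by Whisper.
print(distribute_tokens(list("你好世界"), 0.5, 2.5))
```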
#### `workflow.py` — dub metadata now overrides unconditionally + original text passed through

```python
# Before (only overrode when the script was empty)
if not req.text.strip():
    req.text = meta.get("text", req.text)

# After (dub metadata overrides unconditionally)
meta_text = meta.get("text", "")
if meta_text:
    req.text = meta_text
```

`original_text=req.text` was added at all 4 `whisper_service.align()` call sites.

---
### 2. Remotion receives the video dimensions dynamically

#### Root cause

`remotion/src/Root.tsx` hard-coded `width={1280} height={720}`. Although `render.ts` detected the real dimensions with ffprobe and overrode `composition.width/height`, the component had already been initialized at 1280×720 during the `selectComposition` phase, so title and subtitle positioning was based on the wrong canvas size.

#### Fix

##### `Root.tsx` — `calculateMetadata` reads the dimensions from props

```tsx
<Composition
  id="ViGentVideo"
  component={Video}
  durationInFrames={300}
  fps={25}
  width={1080}
  height={1920}
  calculateMetadata={async ({ props }) => ({
    width: props.width || 1080,
    height: props.height || 1920,
  })}
  defaultProps={{
    videoSrc: '',
    width: 1080,
    height: 1920,
    // ...
  }}
/>
```

The default changed from 1280×720 to 1080×1920 (portrait-first), and `calculateMetadata` ensures the `selectComposition` phase uses the real dimensions detected by ffprobe.
##### `Video.tsx` — VideoProps gains optional `width/height`

Used only by `calculateMetadata`; the component render does not reference them.

##### `render.ts` — inputProps passes the video dimensions uniformly

```typescript
const inputProps = {
  videoSrc: videoFileName,
  captions,
  title: options.title,
  // ...
  width: videoWidth,   // detected by ffprobe
  height: videoHeight, // detected by ffprobe
};
```

`selectComposition` and `renderMedia` share the same `inputProps`. The explicit `composition.width/height` override is kept as a safety net.

---
### 3. Code-review fix: English spaces lost

#### Problem

`split_word_to_chars` was designed to handle a single Whisper word (such as `" Hello"`), but when `original_text` passes in a whole passage, interior spaces hit a `continue` without flushing `ascii_buffer`, so `"Hello World"` becomes `"HelloWorld"`.

#### Execution trace

```
Input: "Hello World"
H,e,l,l,o  → ascii_buffer = "Hello"
' '        → continue (skipped, no flush!)
W,o,r,l,d  → ascii_buffer = "HelloWorld"
Result: tokens = ["HelloWorld"]  ← space lost
```

#### Fix

On whitespace, flush `ascii_buffer` and set a `pending_space` flag so the next token gets a leading space:

```python
if not char.strip():
    if ascii_buffer:
        tokens.append(ascii_buffer)
        ascii_buffer = ""
    if tokens:
        pending_space = True
    continue
```

After the fix: `"Hello World"` → tokens = `["Hello", " World"]` → subtitles display correctly. Chinese text is unaffected.

---
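A self-contained re-implementation of the tokenization rule described above, showing the fixed behavior end to end (illustrative only; not the project's actual `split_word_to_chars`):

```python
def split_to_tokens(text: str) -> list:
    """CJK characters become single tokens; ASCII runs stay whole words.
    Whitespace flushes the ASCII buffer and prefixes the next token."""
    tokens = []
    ascii_buffer = ""
    pending_space = False

    def flush():
        nonlocal ascii_buffer, pending_space
        if ascii_buffer:
            tokens.append((" " if pending_space else "") + ascii_buffer)
            pending_space = False
            ascii_buffer = ""

    for char in text:
        if not char.strip():        # whitespace: flush buffer, remember the gap
            flush()
            if tokens:
                pending_space = True
            continue
        if char.isascii():          # ASCII: accumulate into one word token
            ascii_buffer += char
        else:                       # CJK etc.: flush, then one token per char
            flush()
            tokens.append((" " if pending_space else "") + char)
            pending_space = False
    flush()
    return tokens

print(split_to_tokens("Hello World"))  # ['Hello', ' World']
print(split_to_tokens("你好"))          # ['你', '好']
```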
### Files touched

#### Backend changes

| File | Change |
|------|------|
| `backend/app/services/whisper_service.py` | `align()` gains `original_text`; `split_word_to_chars` English-space-loss fix |
| `backend/app/modules/videos/workflow.py` | dub metadata unconditionally overrides text/language; 4 `align()` call sites pass `original_text` |

#### Frontend changes (Remotion)

| File | Change |
|------|------|
| `remotion/src/Root.tsx` | default size changed to 1080×1920; adds `calculateMetadata` + width/height defaultProps |
| `remotion/src/Video.tsx` | VideoProps gains optional `width`/`height` |
| `remotion/render.ts` | inputProps passes `videoWidth`/`videoHeight`, shared by selectComposition and renderMedia |

---
## 🎤 Automatic reference-audio transcription + speech-rate control — Phase 6 (Day 23)

### Overview

Fixes the voice-cloning ref_text mismatch: the old approach used fixed frontend text as ref_text, but CosyVoice zero-shot cloning requires ref_text to match what the reference audio actually says; when it does not, the model "hallucinates" extra fragments at the start of the generated audio.

**Improvement**: when a reference audio is uploaded, Whisper automatically transcribes it to produce ref_text; a speech-rate control was also added.

---

### 1. Whisper auto-transcription of reference audio

#### 1.1 `whisper_service.py` — automatic language detection

`transcribe()` previously hard-coded `language="zh"`; it now accepts an optional `language` parameter (default `None` = auto-detect), supporting multilingual reference audio.

#### 1.2 `ref_audios/service.py` — auto-transcribe on upload

New upload flow: transcode to WAV → check duration (≥ 1 s) → cut at a silence point when over 10 s → **Whisper auto-transcription** → verify non-empty → upload.

```python
try:
    transcribed = await whisper_service.transcribe(tmp_wav_path)
    if transcribed.strip():
        ref_text = transcribed.strip()
except Exception as e:
    logger.warning(f"Auto-transcribe failed: {e}")

if not ref_text or not ref_text.strip():
    raise ValueError("无法识别音频内容,请确保音频包含清晰的语音")
```
#### 1.3 `ref_audios/router.py` — ref_text becomes optional

`ref_text: str = Form("")` (no longer required); the frontend no longer sends fixed text.

---

### 2. Smart reference-audio trimming (10-second cap)

CosyVoice works best with 3-10 second reference audio.

#### 2.1 Silence-point detection

Uses ffmpeg `silencedetect` to find the last silence end within the first 10 seconds (threshold -30dB, minimum 0.3 s), avoiding a hard cut mid-word:

```python
def _find_silence_cut_point(file_path, max_duration):
    # silencedetect → parse silence_end → last silence point within 3s~max_duration
    # fall back to max_duration when none is found
```
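The parsing half of that helper can be sketched against `silencedetect`'s log output (a minimal sketch under stated assumptions; `pick_cut_point` and the sample log are illustrative, not the project's actual code):

```python
import re

def pick_cut_point(ffmpeg_stderr: str, max_duration: float,
                   min_start: float = 3.0) -> float:
    """Return the last silence_end within (min_start, max_duration],
    falling back to max_duration when none is found."""
    ends = [
        float(m.group(1))
        for m in re.finditer(r"silence_end:\s*([\d.]+)", ffmpeg_stderr)
    ]
    candidates = [t for t in ends if min_start < t <= max_duration]
    return candidates[-1] if candidates else max_duration

log = ("[silencedetect] silence_end: 4.25 | silence_duration: 0.40\n"
       "[silencedetect] silence_end: 8.70 | silence_duration: 0.35")
print(pick_cut_point(log, 10.0))  # 8.7
```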
#### 2.2 Fade-out

When trimming, the last 0.1 s fades out (`afade=t=out`) to avoid a clipping pop.

---
### 3. Re-transcribe feature (legacy-data migration)

#### 3.1 New API

`POST /api/ref-audios/{audio_id}/retranscribe` — download the audio → trim if over 10 s → Whisper transcription → re-upload the audio and metadata.

#### 3.2 Frontend UI

- RefAudioPanel gains a RotateCw button ("re-recognize text"); shows `animate-spin` while transcribing
- A ⚠ yellow warning appears when an old audio's ref_text starts with the legacy fixed text

---

### 4. Speech-rate control (CosyVoice speed parameter)

#### 4.1 End-to-end pass-through

```
frontend GeneratedAudiosPanel (speed selector)
  → useHomeController (speed state + persistence)
  → useGeneratedAudios.generateAudio(params)
  → POST /api/generated-audios/generate { speed: 1.0 }
  → GenerateAudioRequest.speed (Pydantic)
  → generate_audio_task → voice_clone_service.generate_audio(speed=)
  → _generate_once → POST /generate { speed: "1.0" }
  → cosyvoice_server → _model.inference_zero_shot(speed=speed)
```
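The hand-off in that chain — a numeric `speed` on the request becoming a string field in the TTS server's form payload — can be sketched as (a minimal sketch; `GenerateAudioParams` and `to_tts_payload` are illustrative stand-ins, not the project's actual Pydantic schema):

```python
from dataclasses import dataclass

@dataclass
class GenerateAudioParams:
    text: str
    speed: float = 1.0  # mirrors the GenerateAudioRequest.speed default

def to_tts_payload(p: GenerateAudioParams) -> dict:
    """Shape of the data POSTed to the TTS server's /generate endpoint,
    where speed travels as a string (as in { speed: "1.0" } above)."""
    return {"text": p.text, "speed": str(p.speed)}

print(to_tts_payload(GenerateAudioParams("你好", 1.1)))
```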
#### 4.2 Frontend UI

In voice-clone mode, a speed dropdown (`Speed: normal ▼`) appears to the left of the "generate dub" button in the dub-list panel header:

| Label | speed |
|------|----------|
| slower | 0.8 |
| slightly slow | 0.9 |
| normal | 1.0 (default) |
| slightly fast | 1.1 |
| faster | 1.2 |

The speed choice persists to localStorage (`vigent_{storageKey}_speed`).

---

### 5. Missing-reference-audio gating

In voice-clone mode with no reference audio selected:

- the "generate dub" button is disabled, with a title tooltip "please select a reference audio first"
- the panel shows a yellow warning bar "voice-clone mode requires a reference audio"

---

### 6. Frontend cleanup

- Removed the `FIXED_REF_TEXT` constant and the `fixedRefText` prop
- Removed the "please read the following aloud" guidance block
- Upload hint simplified to "upload any speech sample (3-10 s); the system will recognize its content automatically and clone the voice"
- Recording area note: "3-10 s recommended; longer audio is trimmed automatically"

---

### Files touched

#### Backend changes

| File | Change |
|------|------|
| `backend/app/services/whisper_service.py` | `transcribe()` gains an optional `language` parameter, default None (auto-detect) |
| `backend/app/modules/ref_audios/service.py` | auto-transcribe on upload + silence-point trim + fade-out + retranscribe function |
| `backend/app/modules/ref_audios/router.py` | `ref_text` becomes Form(""); new retranscribe endpoint |
| `backend/app/modules/generated_audios/schemas.py` | `GenerateAudioRequest` gains `speed: float = 1.0` |
| `backend/app/modules/generated_audios/service.py` | passes `req.speed` to voice_clone_service |
| `backend/app/services/voice_clone_service.py` | `generate_audio()` / `_generate_once()` accept and pass speed |
| `models/CosyVoice/cosyvoice_server.py` | `/generate` accepts `speed` and forwards it to `inference_zero_shot(speed=)` |

#### Frontend changes

| File | Change |
|------|------|
| `frontend/src/features/home/model/useHomeController.ts` | new speed state; removed FIXED_REF_TEXT; handleGenerateAudio passes speed |
| `frontend/src/features/home/model/useHomePersistence.ts` | speed persistence |
| `frontend/src/features/home/model/useRefAudios.ts` | removed fixedRefText; added retranscribe |
| `frontend/src/features/home/model/useGeneratedAudios.ts` | generateAudio params gain speed |
| `frontend/src/features/home/ui/GeneratedAudiosPanel.tsx` | speed selector + missing-reference-audio gating |
| `frontend/src/features/home/ui/RefAudioPanel.tsx` | removed reading guidance; added re-recognize button + ⚠ warning |
| `frontend/src/features/home/ui/HomePage.tsx` | passes speed/setSpeed/ttsMode to GeneratedAudiosPanel |
185
Docs/DevLogs/Day24.md
Normal file
@@ -0,0 +1,185 @@

## 🔧 Auth-expiry governance + multi-clip timeline stability fixes (Day 24)

### Overview

Two main tracks today:

1. **Account and auth governance**: membership expiry now takes effect at request time (triggered by login/auth endpoints), with a unified renewal prompt.
2. **Video-generation stability**: an end-to-end round of fixes around the multi-clip timeline, trim semantics, freeze-frames at concat boundaries, and aspect-ratio/subtitle/title adaptation.

---

## 🔐 Membership expiry enforced at request time — Phase 1 (Day 24)

### Goal

Avoid relying on a scheduled job: expiry is checked and the account deactivated the moment the user logs in or hits a protected endpoint.

### Behavior changes

- Expiry is judged against `users.expires_at`.
- Once expired:
  - `is_active` is automatically set to `false`
  - all of the user's sessions are deleted
  - a `403` is returned with the message "会员已到期,请续费" (membership expired, please renew)

### Implementation points

- `users.py` adds `deactivate_user_if_expired()` plus `_parse_expires_at()` for consistent timezone parsing.
- `deps.py` wires the expiry check into `get_current_user` / `get_current_user_optional`.
- `auth/router.py` adds expiry deactivation on the login path; `/api/auth/me` now goes through `Depends(get_current_user)`.
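The expiry decision at the core of this flow can be sketched as (a minimal sketch; `is_expired` is a hypothetical helper mirroring the intent of `_parse_expires_at`, not the actual repository code):

```python
from datetime import datetime, timezone
from typing import Optional

def is_expired(expires_at: Optional[str], now: Optional[datetime] = None) -> bool:
    """Membership is expired once 'now' reaches users.expires_at.
    A missing expires_at is treated as not expired."""
    if not expires_at:
        return False
    parsed = datetime.fromisoformat(expires_at)
    if parsed.tzinfo is None:
        # Normalize naive timestamps to UTC so comparisons are consistent.
        parsed = parsed.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return now >= parsed

print(is_expired("2000-01-01T00:00:00+00:00"))  # True
```

A request-time hook would call this inside the auth dependency, then flip `is_active`, drop sessions, and raise a 403.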
## 🖼️ Aspect-ratio control + title/subtitle adaptation — Phase 2 (Day 24)

### 2.1 Configurable output aspect ratio

- An "aspect ratio" dropdown at the top of the timeline: `9:16` / `16:9`.
- Defaults to `9:16` and persists to localStorage.
- Generation requests carry `output_aspect_ratio`; the backend applies the target resolution uniformly in both single-clip and multi-clip flows.

### 2.2 Keeping titles/subtitles from overflowing on narrow canvases

To reduce "fine in preview, overflows in the final video" discrepancies, preview and render now share one strategy:

- Responsive scaling based on the composition width.
- Wrapping enabled: `white-space: normal` + `word-break` + `overflow-wrap`.
- Stroke, letter spacing, and vertical margins scale proportionally.

### 2.3 Intro-title display mode (brief/persistent)

- A dropdown at the end of the "intro title" row in the "Titles & Subtitles" panel: `brief` / `persistent`.
- Default is `brief`; brief mode lasts 4 seconds by default.
- The choice persists to localStorage and survives a refresh.
- Generation requests gain `title_display_mode`; brief mode passes `title_duration=4.0`.
- Remotion supports the parameter end to end:
  - `short`: the title fades out after the configured duration and stops rendering;
  - `persistent`: the title stays for the whole video (fade-in kept, no fade-out).

---

## 🎥 Orientation normalization + multi-clip concat stability — Phase 3 (Day 24)

### 3.1 MOV rotation metadata breaking portrait/landscape detection

Scenario: the encoded resolution is landscape, but the clip only displays as portrait via rotation side-data (common in phone MOV files).

Fix:

- `get_video_metadata()` now also returns `rotation/effective_width/effective_height`.
- New `normalize_orientation()` physically normalizes clips carrying rotation metadata before the pipeline runs.
- Both single-clip and multi-clip flows normalize orientation right after download, before any resolution decisions.

### 3.2 Multi-clip "only the first segment plays" and boundary freezes

Two classes of protection were added for concat reliability:

- **Assignment guard**: when `custom_assignments` does not match the clip count, the backend falls back to automatic assignment, so malformed input can no longer leave only the first segment effective.
- **Encoding consistency**:
  - segments are uniformly re-encoded during preparation;
  - concat no longer uses stream copy;
  - everything is further unified to `25fps + CFR`, and concat adds `+genpts`, reducing the "frozen frame while the lips keep moving" risk caused by discontinuous timebases at segment boundaries.

---

## ⏱️ Timeline trim-semantics alignment fixes — Phase 4 (Day 24)

### Background

The timeline's intended semantics:

- each segment may set `sourceStart/sourceEnd`;
- when the total duration exceeds the audio, only visible segments are kept and the last one is cut to the audio length;
- when the total duration falls short, the last visible segment loops to fill the gap.

Today's work aligned frontend and backend to these semantics.

### 4.1 `source_end` wired end to end

Previously only `source_start` was sent, so the backend never knew exactly where the cut should stop.

Changes:

- Frontend `toCustomAssignments()` gains an optional `source_end`.
- Backend `CustomAssignment` schema gains `source_end`.
- The workflow passes `source_end` through to `prepare_segment()` (single- and multi-clip alike).
- `prepare_segment()` gains a `source_end` parameter, computes the usable clip as `[source_start, source_end)`, and when looping is needed, trims first and loops afterwards so the loop range is not misaligned.

### 4.2 Effective timeline-duration calculation fix

Fixed the wrong effective duration when `sourceStart > 0` and `sourceEnd = 0`:

- the old logic used the full material duration;
- the new logic uses `materialDuration - sourceStart`.

The fix is used by both:

- segment-duration calculation in `recalcPositions()`;
- the "loop to fill" visualization ratio in TimelineEditor.

### 4.3 Visible-segment assignment-priority fix

Fixed "custom_assignments dropped in favor of auto-assignment whenever visible segments < selected clips":

- generation requests now treat the timeline's visible segments' `assignments` as the source of truth;
- clips beyond the timeline do not take part in this generation.

### 4.4 Single-clip trim trigger completed

In single-clip mode, changing only the end point (`sourceEnd > 0`) now also sends `custom_assignments`, so the trim takes effect.
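The trim semantics in 4.1/4.2 can be sketched in one helper (a minimal sketch; `effective_duration` is illustrative, not the actual `recalcPositions()` code):

```python
def effective_duration(material_duration: float, source_start: float = 0.0,
                       source_end: float = 0.0) -> float:
    """Usable length of a segment. source_end == 0 means 'open end':
    play from source_start to the end of the material."""
    end = source_end if source_end > 0 else material_duration
    return max(0.0, min(end, material_duration) - source_start)

print(effective_duration(10.0, 2.0, 0.0))  # 8.0  (the old logic wrongly gave 10.0)
print(effective_duration(10.0, 2.0, 6.0))  # 4.0  (the [source_start, source_end) window)
```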
---

## 🧭 Page interaction and UX details — Phase 5 (Day 24)

- The page scrolls back to the top after a refresh instead of entering at a historical scroll position.
- Material and history-video lists gain a "skip first auto-scroll" guard, reducing page jumps while state is being restored.
- Removed redundant copy from the timeline ratio area to keep it concise.

---

## Files touched

### Backend changes

| File | Change |
|------|------|
| `backend/app/repositories/users.py` | adds `deactivate_user_if_expired()` and `_parse_expires_at()` |
| `backend/app/core/deps.py` | `get_current_user` / `get_current_user_optional` run the expiry-deactivation check |
| `backend/app/modules/auth/router.py` | expiry deactivation on login + unified auth dependency for `/api/auth/me` |
| `backend/app/modules/videos/schemas.py` | `CustomAssignment` gains `source_end`; `output_aspect_ratio` kept |
| `backend/app/modules/videos/workflow.py` | multi-/single-clip pass `source_end`; multi-clip prepare/concat unified at 25fps; title display mode passed to Remotion |
| `backend/app/services/video_service.py` | rotation-metadata parsing and orientation normalization; `prepare_segment` supports `source_end/target_fps`; concat forces CFR + `+genpts` |
| `backend/app/services/remotion_service.py` | render supports `title_display_mode/title_duration` and forwards them to render.ts |

### Frontend changes

| File | Change |
|------|------|
| `frontend/src/features/home/model/useTimelineEditor.ts` | `CustomAssignment` gains `source_end`; fixes duration calc for sourceStart with an open end |
| `frontend/src/features/home/model/useHomeController.ts` | multi-clip sends visible assignments as the source of truth; single-clip trim trigger completed |
| `frontend/src/features/home/ui/TimelineEditor.tsx` | aspect-ratio dropdown; loop ratio computed from post-trim effective duration |
| `frontend/src/features/home/model/useHomePersistence.ts` | persists `outputAspectRatio` and `titleDisplayMode` |
| `frontend/src/features/home/ui/HomePage.tsx` | scroll to top on page entry; ClipTrimmer/Timeline interactions kept consistent |
| `frontend/src/features/home/ui/FloatingStylePreview.tsx` | title/subtitle style preview aligned with the final-render strategy |
| `frontend/src/features/home/ui/TitleSubtitlePanel.tsx` | title row gains a "brief/persistent" dropdown |

### Remotion changes

| File | Change |
|------|------|
| `remotion/src/components/Title.tsx` | responsive title scaling and wrapping; new brief/persistent display-mode control |
| `remotion/src/components/Subtitles.tsx` | responsive subtitle scaling and wrapping, reducing preview/render differences |
| `remotion/src/Video.tsx` | passes `titleDisplayMode` through to the title component |
| `remotion/src/Root.tsx` | default props gain `titleDisplayMode='short'` and `titleDuration=4` |
| `remotion/render.ts` | CLI gains `--titleDisplayMode`; inputProps gains `titleDisplayMode` |

---

## Verification log

- Backend syntax check: `python -m py_compile backend/app/modules/videos/schemas.py backend/app/modules/videos/workflow.py backend/app/services/video_service.py backend/app/services/remotion_service.py`
- Frontend type check: `npx tsc --noEmit`
- Frontend ESLint: `npx eslint src/features/home/model/useHomeController.ts src/features/home/model/useHomePersistence.ts src/features/home/ui/HomePage.tsx src/features/home/ui/TitleSubtitlePanel.tsx`
- Remotion render-script build: `npm run build:render`
254
Docs/DevLogs/Day25.md
Normal file
@@ -0,0 +1,254 @@

## 🔧 Script-extraction assistant fix — Douyin links failing to extract scripts (Day 25)

### Overview

Pasting a Douyin link into the script-extraction assistant failed: yt-dlp errored with `Fresh cookies are needed`, and the manual fallback had also broken after Douyin changed its page structure. Today completed the full fix and removed the now-unneeded `DOUYIN_COOKIE` config.

---

## 🐛 Diagnosis

### Failure chain

1. **yt-dlp failure**: `ERROR: [Douyin] Fresh cookies (not necessarily logged in) are needed`
   - yt-dlp version `2025.12.08` was too old
   - the Douyin API `aweme/v1/web/aweme/detail/` requires signed cookies (`s_v_web_id` etc.); even upgrading yt-dlp to the latest version and passing cookies does not help — a known yt-dlp issue
2. **Manual fallback failure**: `Could not find RENDER_DATA in page`
   - the old approach used the desktop user-profile page + `modal_id`; Douyin's SSR no longer returns `videoDetail` data
3. **`DOUYIN_COOKIE` in `.env`**: timestamped December 2024, long expired

---

## ✅ Fix: mobile share page + automatic ttwid

### Core idea

Stop relying on yt-dlp for Douyin downloads and on manually maintained cookies; instead:

1. Automatically fetch a fresh `ttwid` (an anonymous token, not tied to any account) from a public ByteDance API
2. Use the `ttwid` to fetch the mobile share page `m.douyin.com/share/video/{id}`
3. Extract the `play_addr` playback URL from the page's embedded JSON and download it

### Key code (`_download_douyin_manual` rewritten)

```python
# 1. Fetch a fresh ttwid
ttwid_resp = await client.post(
    "https://ttwid.bytedance.com/ttwid/union/register/",
    json={"region": "cn", "aid": 6383, "service": "www.douyin.com", ...}
)
ttwid = ttwid_resp.cookies.get("ttwid", "")

# 2. Fetch the mobile share page
page_resp = await client.get(
    f"https://m.douyin.com/share/video/{video_id}",
    headers={"cookie": f"ttwid={ttwid}", ...}
)

# 3. Extract play_addr
addr_match = re.search(r'"play_addr":\{"uri":"([^"]+)","url_list":\["([^"]+)"', page_text)
video_url = addr_match.group(2).replace(r"\u002F", "/")
```
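Step 3 is runnable in isolation against a sample of the embedded JSON (the `page_text` below is fabricated illustrative data, not a real Douyin response):

```python
import re

# Hypothetical fragment of the inline JSON embedded in the share page.
page_text = ('{"play_addr":{"uri":"v0300f","url_list":'
             '["https:\\u002F\\u002Fv.example.com\\u002Fplay\\u002F123"]}}')

addr_match = re.search(r'"play_addr":\{"uri":"([^"]+)","url_list":\["([^"]+)"',
                       page_text)
# The URL arrives with \u002F escapes still in the raw text; unescape them.
video_url = addr_match.group(2).replace("\\u002F", "/")
print(video_url)  # https://v.example.com/play/123
```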
### Advantages

- No more manually maintained `DOUYIN_COOKIE`; a ttwid is fetched automatically per request
- Independent of yt-dlp's Douyin support status
- Works for all users; not tied to any particular account

---

## 🧹 DOUYIN_COOKIE config cleanup

`DOUYIN_COOKIE` was only used for script extraction; the new approach no longer needs it, so it was removed from:

| File | Change |
|------|------|
| `backend/.env` | removed the `DOUYIN_COOKIE` entry and its comments |
| `backend/app/core/config.py` | removed the `DOUYIN_COOKIE: str = ""` field |
| `backend/app/modules/tools/service.py` | removed the yt-dlp cookie-passing logic and the `_write_netscape_cookies` helper |

---

## 🔤 Frontend copy fix

Changed the script-extraction UI label "AI 洗稿结果" to "AI 改写结果" (AI rewrite result).

| File | Change |
|------|------|
| `frontend/src/features/home/ui/ScriptExtractionModal.tsx` | `AI 洗稿结果` → `AI 改写结果` |
| `backend/app/modules/tools/service.py` | "洗稿" → "改写" in comments |
| `backend/app/services/glm_service.py` | "洗稿" → "改写文案" in the docstring |

---

## 📦 Other changes

- **yt-dlp upgrade**: `2025.12.08` → `2026.2.21`
- **yt-dlp initialization fix**: now `YoutubeDL(ydl_opts)` with options passed directly (updating params after an empty init had no effect)
- **User-Agent update**: `Chrome/91` → `Chrome/131` in yt-dlp

---

## Files touched

### Backend changes

| File | Change |
|------|------|
| `backend/app/modules/tools/service.py` | rewrote `_download_douyin_manual` (mobile share-page approach); fixed yt-dlp init; removed cookie code; comment wording |
| `backend/app/services/glm_service.py` | docstring "洗稿" → "改写文案" |
| `backend/app/core/config.py` | removed the `DOUYIN_COOKIE` field |
| `backend/.env` | removed the `DOUYIN_COOKIE` entry |
| `backend/requirements.txt` | yt-dlp version bump |

### Frontend changes

| File | Change |
|------|------|
| `frontend/src/features/home/ui/ScriptExtractionModal.tsx` | "AI 洗稿结果" → "AI 改写结果" |

---

## ✏️ AI smart rewrite — custom prompt feature

### Overview

"AI smart rewrite" in the script-extraction assistant previously used a hard-coded prompt, so users could not customize the rewriting style. A collapsible "custom prompt" area was added to the right of the checkbox; the custom prompt persists to localStorage, and the backend substitutes it for the default prompt when provided.

### Backend changes

**Router layer** (`router.py`): `extract_script_tool` gains an optional Form parameter `custom_prompt: Optional[str] = Form(None)`, passed through to the service.

**Service layer** (`service.py`): `extract_script()` gains `custom_prompt` and passes it to `glm_service.rewrite_script(script, custom_prompt)`.

**AI layer** (`glm_service.py`): `rewrite_script(self, text, custom_prompt=None)`; when `custom_prompt` is set, it is concatenated with the original text, otherwise the existing default prompt is used.

```python
if custom_prompt and custom_prompt.strip():
    prompt = f"""{custom_prompt.strip()}

原始文案:
{text}"""
else:
    prompt = f"""请将以下视频文案进行改写。...(existing default)"""
```

### Frontend changes

**Hook** (`useScriptExtraction.ts`):
- new `customPrompt` / `showCustomPrompt` state
- initial value restored from `localStorage.getItem("vigent_rewriteCustomPrompt")`
- changes to `customPrompt` save to localStorage with a 300 ms debounce
- in `handleExtract()`, when `doRewrite && customPrompt.trim()` is set, `formData.append("custom_prompt", ...)` is added
- the modal reset does not clear customPrompt (persisted preference)

**UI** (`ScriptExtractionModal.tsx`):
- a "custom prompt ▼" button to the right of the checkbox (shown only while `doRewrite` is on)
- clicking it expands a textarea; footer hint: "leave empty to use the default prompt"
- unchecking AI smart rewrite hides the custom-prompt area

### Files touched

| File | Change |
|------|------|
| `backend/app/modules/tools/router.py` | new `custom_prompt` Form parameter |
| `backend/app/modules/tools/service.py` | `extract_script()` passes `custom_prompt` through |
| `backend/app/services/glm_service.py` | `rewrite_script()` supports a custom prompt |
| `frontend/.../useScriptExtraction.ts` | new state, localStorage persistence, FormData parameter |
| `frontend/.../ScriptExtractionModal.tsx` | UI button + expandable textarea |

### Verification

- Backend: `python -m py_compile` passes on the three files
- Frontend: `npx tsc --noEmit` passes

---
## 🐛 SSR 构建修复 — localStorage is not defined
|
||||||
|
|
||||||
|
### 问题
|
||||||
|
|
||||||
|
`npm run build` 报错 `ReferenceError: localStorage is not defined`,因为 `useScriptExtraction.ts` 中 `useState` 的初始化函数在 SSR(Node.js)环境下也会执行,而服务端没有 `localStorage`。
|
||||||
|
|
||||||
|
### 修复
|
||||||
|
|
||||||
|
`useState` 初始化加 `typeof window !== "undefined"` 守卫:
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
const [customPrompt, setCustomPrompt] = useState(
|
||||||
|
() => typeof window !== "undefined" ? localStorage.getItem(CUSTOM_PROMPT_KEY) || "" : ""
|
||||||
|
);
|
||||||
|
```

| File | Change |
|------|------|
| `frontend/.../useScriptExtraction.ts` | Add an SSR-safe guard to the `useState` initializer |

---

## 🎬 Intro secondary title

### Overview

Add an intro secondary title (`secondary_title`) displayed below the main title, for supplementary context or a teaser hook. It has independent style settings (font, size, color, etc.), can be generated by the AI together with the main title, is capped at 20 characters, appears only in the video frame, and does not feed into the publish title.

Naming convention: backend `secondary_title` (snake_case), frontend `videoSecondaryTitle` (camelCase), UI label "片头副标题".

---

### Backend changes

| File | Change |
|------|------|
| `backend/app/modules/videos/schemas.py` | `GenerateRequest` gains 4 optional fields: `secondary_title`, `secondary_title_style_id`, `secondary_title_font_size`, `secondary_title_top_margin` |
| `backend/app/services/glm_service.py` | AI prompt now also asks for a secondary title (max 20 chars); JSON output gains a `secondary_title` field |
| `backend/app/modules/ai/router.py` | `GenerateMetaResponse` gains `secondary_title: str = ""`; the endpoint returns `result.get("secondary_title", "")` |
| `backend/app/modules/videos/workflow.py` | `use_remotion` condition gains `or req.secondary_title`; secondary-title style resolution reuses `get_style("title", ...)`; font-size/margin overrides; `prepare_style_for_remotion` handles the secondary-title font; `remotion_service.render()` receives `secondary_title` + `secondary_title_style` |
| `backend/app/services/remotion_service.py` | `render()` gains `secondary_title` and `secondary_title_style` parameters, building CLI args `--secondaryTitle` and `--secondaryTitleStyle` |

### Remotion changes

| File | Change |
|------|------|
| `remotion/render.ts` | `RenderOptions` gains `secondaryTitle?` + `secondaryTitleStyle?`; new switch cases in `parseArgs()`; two new `inputProps` fields |
| `remotion/src/components/Title.tsx` | `TitleProps` gains `secondaryTitle?` and `secondaryTitleStyle?`; `AbsoluteFill` switched to `flexDirection: 'column'` for vertical stacking; an `<h2>` secondary title after the main `<h1>`, independently styled (default 48px, weight 700), sharing the fade in/out animation; the secondary title loads its own `@font-face` (`SecondaryTitleFont`) to avoid clashing with the main title |
| `remotion/src/Video.tsx` | `VideoProps` gains `secondaryTitle?` + `secondaryTitleStyle?`, passed to `<Title>`; render condition changed to `{(title \|\| secondaryTitle) && ...}` |
| `remotion/src/Root.tsx` | `defaultProps` gains `secondaryTitle: undefined` + `secondaryTitleStyle: undefined` |

### Frontend changes

| File | Change |
|------|------|
| `frontend/src/shared/lib/title.ts` | Add `SECONDARY_TITLE_MAX_LENGTH = 20` and `clampSecondaryTitle()` |
| `frontend/src/features/home/model/useHomeController.ts` | New state `videoSecondaryTitle`, `selectedSecondaryTitleStyleId`, `secondaryTitleFontSize`, `secondaryTitleTopMargin`, `secondaryTitleSizeLocked`; new `secondaryTitleInput = useTitleInput({ maxLength: 20 })` (not synced to the publish page); `handleGenerateMeta()` receives and fills `secondary_title`; `handleGenerate()` adds the secondary-title fields to the payload; all new state exposed in the return |
| `frontend/src/features/home/model/useHomePersistence.ts` | New localStorage keys `secondaryTitle`, `secondaryTitleStyle`, `secondaryTitleFontSize`, `secondaryTitleTopMargin`, with matching restore/save effects |
| `frontend/src/features/home/ui/TitleSubtitlePanel.tsx` | New secondary-title props; a "片头副标题(限制20个字)" input below the main title input; style picker (reusing the titleStyles presets), font-size slider (30-100px), margin slider (0-100px) |
| `frontend/src/features/home/ui/FloatingStylePreview.tsx` | Title preview switched to a flex-column layout; a secondary-title preview row below the main title, rendered with its own style |
| `frontend/src/features/home/ui/HomePage.tsx` | Destructure the new state from `useHomeController` and pass it to `TitleSubtitlePanel` |

---

## 🐛 Reference-audio upload — InvalidKey on Chinese filenames

### Problem

Uploading a reference audio with a Chinese name (e.g. "我的声音.wav") made Supabase Storage return `InvalidKey`, because the storage path used the raw Chinese filename.

### Fix

Add a `sanitize_filename()` function in `ref_audios/service.py` that reduces the filename used in the storage path to ASCII-safe characters (only `A-Za-z0-9._-`):

- NFKD normalization → drop non-ASCII → replace illegal characters with `_`
- If a pure Chinese/emoji name is emptied by cleaning, fall back to an MD5 hash (e.g. `audio_e924b1193007`)
- Filenames capped at 50 characters
- The original Chinese filename is kept in metadata as the display name, so the frontend display is unaffected

```
Before: cbbe.../1771915755_我的声音.wav → InvalidKey
After:  cbbe.../1771915755_audio_xxxxxxxx.wav → upload succeeds
```
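
The cleaning steps above can be sketched as follows. This is an illustrative reconstruction: the helper name matches the log, but the signature and details of the shipped version in `ref_audios/service.py` may differ.

```python
import hashlib
import re
import unicodedata


def sanitize_filename(name: str, max_len: int = 50) -> str:
    """Reduce a user-supplied filename to ASCII-safe storage-key characters.

    Sketch of the approach described above, not the shipped code.
    """
    stem, dot, ext = name.rpartition(".")
    if not dot:  # no extension at all
        stem, ext = name, ""
    # NFKD-normalize, drop non-ASCII, replace anything outside A-Za-z0-9._- with "_"
    ascii_stem = unicodedata.normalize("NFKD", stem).encode("ascii", "ignore").decode()
    ascii_stem = re.sub(r"[^A-Za-z0-9._-]", "_", ascii_stem).strip("._")
    if not ascii_stem:
        # Pure Chinese/emoji names collapse to nothing: fall back to an MD5 digest
        ascii_stem = "audio_" + hashlib.md5(stem.encode("utf-8")).hexdigest()[:12]
    ascii_ext = re.sub(r"[^A-Za-z0-9]", "", ext)
    out = ascii_stem[:max_len]
    return f"{out}.{ascii_ext}" if ascii_ext else out


print(sanitize_filename("我的声音.wav"))   # audio_<12 hex chars>.wav
print(sanitize_filename("My Voice.wav"))  # -> My_Voice.wav
```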

| File | Change |
|------|------|
| `backend/app/modules/ref_audios/service.py` | Add `sanitize_filename()`; the upload path uses the sanitized name |

---

`Docs/DevLogs/Day26.md` (new file, 239 lines)

## 🎨 Frontend polish: merged panels + numbered headings + UI refinement (Day 26)

### Overview

The home page had 9 independent panels (7 in the left column + 2 in the right), each with its own card container and heading, which felt visually fragmented. This pass merges related panels into 5 main panels, adds Chinese ordinal numbering (一~十), removes emoji icons, and refines layout and interaction details across many subcomponents.

---

## ✅ Changes

### 1. Panel merge plan

**Left column (4 main panels + 2 standalone areas):**

| No. | Panel | Sub-panels | Original components |
|------|--------|--------|--------|
| 一 | 文案提取与编辑 (script extraction & editing) | — | ScriptEditor |
| 二 | 标题与字幕 (title & subtitles) | — | TitleSubtitlePanel |
| 三 | 配音 (voiceover) | 配音方式 / 配音列表 | VoiceSelector + GeneratedAudiosPanel |
| 四 | 素材编辑 (material editing) | 视频素材 / 时间轴编辑 | MaterialSelector + TimelineEditor |
| 五 | 背景音乐 (BGM) | — | BgmPanel |
| — | Generate button | — | GenerateActionBar (unnumbered) |

**Right column (1 main panel):**

| No. | Panel | Sub-panels | Original components |
|------|--------|--------|--------|
| 六 | 作品 (works) | 作品列表 / 作品预览 | HistoryList + PreviewPanel |

**Publish page (/publish):**

| No. | Panel |
|------|--------|
| 七 | 平台账号 (platform accounts) |
| 八 | 选择发布作品 (pick work to publish) |
| 九 | 发布信息 (publish info) |
| 十 | 选择发布平台 (pick platforms) |

### 2. embedded mode

6 components gain an `embedded?: boolean` prop (default `false`):

- `VoiceSelector`: skips the outer card and main heading when embedded
- `GeneratedAudiosPanel`: two-row layout when embedded, row 1 (speed + generate button, right-aligned) and row 2 (voiceover list + refresh)
- `MaterialSelector`: renders its own h3 sub-heading "视频素材" plus upload/refresh buttons on the same row
- `TimelineEditor`: renders its own h3 sub-heading "时间轴编辑" plus aspect-ratio/playback controls on the same row
- `PreviewPanel`: skips the outer card and heading when embedded
- `HistoryList`: skips the outer card and heading when embedded (refresh button supplied by HomePage)

### 3. Numbered headings + emoji removal

All numbered panels drop their emoji icons and use plain Chinese ordinals:

- ScriptEditor: `✍️ 文案提取与编辑` → `一、文案提取与编辑`
- TitleSubtitlePanel: `🎬 标题与字幕` → `二、标题与字幕`
- BgmPanel: `🎵 背景音乐` → `五、背景音乐`
- HomePage right column: `五、作品` → `六、作品`
- PublishPage: `👤 平台账号` → `七、平台账号`, `📹 选择发布作品` → `八、选择发布作品`, `✍️ 发布信息` → `九、发布信息`, `📱 选择发布平台` → `十、选择发布平台`

### 4. Sub-heading and divider styles

- **Main heading**: `text-base sm:text-lg font-semibold text-white`
- **Sub-heading**: `text-sm font-medium text-gray-400`
- **Divider**: `<div className="border-t border-white/10 my-4" />`

### 5. Voiceover list layout

GeneratedAudiosPanel uses a two-row layout in embedded mode:

- **Row 1**: speed dropdown + generate-voiceover button (right-aligned, `flex justify-end`)
- **Row 2**: `<h3>配音列表</h3>` + refresh button (justified to both ends)
- Non-embedded mode keeps the original single-row layout

### 6. TitleSubtitlePanel dropdown alignment

- The three label cells (title style / secondary-title style / subtitle style) share a fixed `w-20` (80px) so the dropdowns align vertically
- Dropdown width `w-1/3 min-w-[100px]` to avoid over-wide menus

### 7. RefAudioPanel copy simplification

- The bottom paragraph "上传任意语音样本(3-10秒)…" moved next to the "我的参考音频" heading, shortened to `(上传3-10秒语音样本)`

### 8. Phone number in the account dropdown

- AccountSettingsDropdown shows the phone number above the account expiry
- Displays `user?.phone || '未知账户'`

### 9. Title display mode now applies to the secondary title

- **Payload fix**: in `useHomeController.ts`, the send condition for `title_display_mode` changed from `videoTitle.trim()` to `videoTitle.trim() || videoSecondaryTitle.trim()`, so the display mode is sent even when only a secondary title is set
- **UI tweak**: the 短暂显示/常驻显示 dropdown moved from the intro-title input row up to the "二、标题与字幕" panel heading row (next to the style-preview button), making clear it governs both titles
- The Remotion side (`Title.tsx`) already supports this: title and secondary title render as one component under a single `displayMode`

### 10. Timeline blur overlay

The overlay moved from the outer wrapper into the "四、素材编辑" card, covering only the timeline sub-area (`rounded-xl`).

### 11. User info available immediately after login

- AuthContext exposes a new `setUser` method to consumers
- The login page calls `setUser(result.user)` on success, writing to Context immediately instead of waiting for a page refresh
- Fixes the account dropdown showing "未知账户" after login until the page was refreshed

### 12. Copy and option tweaks

- MaterialSelector description `(可多选,最多4个)` → `(上传自拍视频,最多可选4个)`
- TitleSubtitlePanel display-mode options `短暂显示/常驻显示` → `标题短暂显示/标题常驻显示`

### 13. UI/UX improvements (6 items)

- **Action buttons visible on mobile**: action buttons in the voiceover, works, material, reference-audio, and script-history lists changed from `opacity-0` (hover-only) to `opacity-40` (faintly visible, full opacity on hover), fixing undiscoverable buttons on touch devices
- **Phone masking**: AccountSettingsDropdown masks the middle four digits, `138****5678`
- **Title character counter**: the title/secondary-title inputs show a live count like `3/15` on the right, turning red over the limit
- **List scrollbar hint**: ~~voiceover/works/material/BGM lists switched from `hide-scrollbar` to `custom-scrollbar`~~ → all reverted to `hide-scrollbar` (scrolling unchanged)
- **Timeline drag hint**: TimelineEditor blocks gain a `GripVertical` grip icon in the top-left corner, hinting at drag-to-reorder
- **Bigger trim handles**: ClipTrimmer handles grew from 16px to 20px, touch targets from 32px to 40px
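
The masking rule from the list above, as a minimal sketch (the shipped version lives in the TypeScript component; this just states the rule for 11-digit CN mobile numbers):

```python
def mask_phone(phone: str) -> str:
    """Mask the middle four digits of an 11-digit mobile number."""
    if len(phone) != 11:
        return phone  # leave anything unexpected untouched
    return phone[:3] + "****" + phone[7:]


print(mask_phone("13812345678"))  # -> 138****5678
```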

### 14. Code-quality fixes (4 items)

- **AccountSettingsDropdown**: closing the password modal now also calls `setSuccess('')`
- **MaterialSelector**: `selectedSet` wrapped in `useMemo` to avoid rebuilding every render
- **TimelineEditor**: `visibleSegments`/`overflowSegments` wrapped in `useMemo`
- **MaterialSelector**: unselected-item buttons get `disabled` once 4 materials are selected

### 15. Responsive platform-account layout on the publish page

- **Single-row layout**: icon + name + status on the left, button on the right (`flex items-center`)
- **Compact on mobile**: icon `h-6 w-6`, button `text-xs px-2 py-1 rounded-md`, spacing `space-y-2 px-3 py-2.5`
- **Roomier on desktop**: `sm:h-7 sm:w-7`, `sm:text-sm sm:px-3 sm:py-1.5 sm:rounded-lg`, `sm:space-y-3 sm:px-4 sm:py-3.5`
- Looks right at both sizes, consistent with the other panels

### 16. Mobile refresh scroll-to-top fix

- **Problem**: refreshing on mobile did not return to the top; the page landed at the BGM panel
- **Root cause**: 1) the browser's native scroll restoration overrode `scrollTo(0,0)`; 2) the list scroll effect had two dependencies (`selectedId` + `list`), so when data loaded asynchronously the second trigger slipped past the one-shot ref guard and ran `scrollIntoView`, jolting the page
- **Fix**: three measures: ① `history.scrollRestoration = "manual"` to disable native restoration; ② a time-gated `scrollEffectsEnabled` ref (all list auto-scrolls suppressed for 1 second) replacing the one-shot guard; ③ a 200ms delayed fallback `scrollTo(0,0)`

### 17. Smaller style-preview window on mobile

- **Problem**: on mobile, "预览样式" opened a near-fullscreen window (358px wide, ~636px tall) that covered the style controls
- **Fix**: mobile width shrunk from `window.innerWidth - 32` to **160px**; position moved from top-left to **bottom-right** (`right:12, bottom:12`) so the controls above stay visible; max height capped at `50dvh`
- Desktop unchanged (280px, top-left)

### 18. Hidden scrollbars across lists

- The 7 spots switched to `custom-scrollbar` (thin purple scrollbar) earlier in Day 26 were all reverted to `hide-scrollbar`
- Affected: BgmPanel, GeneratedAudiosPanel, HistoryList, MaterialSelector (2 spots), ScriptExtractionModal (2 spots)
- Scrolling still works; only the scrollbar is hidden

### 19. Voiceover buttons on mobile

- VoiceSelector "选择声音/克隆声音" buttons: padding `px-4` → `px-2 sm:px-4`, font size `text-sm sm:text-base`, icons get `shrink-0`
- Fixes "克隆声音" being squeezed out of view on narrow mobile screens

### 20. Material heading overflow fix

- MaterialSelector embedded heading row drops `whitespace-nowrap`
- The description `(上传自拍视频,最多可选4个)` is hidden on mobile (`hidden sm:inline`), shown on desktop
- Fixes the refresh button being pushed outside the container on mobile

### 21. Bigger generate-voiceover button

- "生成配音" is the core action; upgraded from auxiliary to primary sizing
- Padding `px-2/px-3 py-1/py-1.5` → `px-4 py-2`, font `text-xs` → `text-sm font-medium`
- Icon `h-3.5 w-3.5` → `h-4 w-4`, plus `shadow-sm` and hover `shadow-md`
- Enlarged in both embedded and non-embedded modes

### 22. Generation progress bar relocated

- **Problem**: the progress bar sat inside the "六、作品" card (below the preview) and was easy to miss
- **Fix**: extracted from PreviewPanel into the HomePage right column, rendered as a standalone card **above** the "六、作品" card
- Purple border (`border-purple-500/30`) to set it apart; shows task message and percentage
- PreviewPanel no longer renders the bar in embedded mode (receives `currentTask={null}`)
- The progress card disappears automatically when generation finishes

### 23. LatentSync timeout fix

- **Problem**: a ~2-minute video (3023 frames, 190 inference chunks) needed an estimated 54 minutes of inference, but the httpx timeout was only 20 minutes, so the LatentSync call failed and fell back to no lip sync
- **Root cause**: `httpx.AsyncClient(timeout=1200.0)` in `lipsync_service.py` cannot cover long-video inference
- **Fix**: timeout raised from `1200s` (20 min) to `3600s` (1 hour), enough to cover 2-3 minute videos

### 24. Subtitle timestamp rhythm mapping (fixes long-video subtitle drift)

- **Problem**: on a 2-minute video the subtitles visibly lagged the speech, drifting further as playback went on
- **Root cause**: the `original_text` path in `whisper_service.py` discarded Whisper's per-word timestamps, kept only the overall time range, and interpolated linearly across it, giving every character the same duration and ignoring speed changes and pauses
- **Fix**: keep Whisper's per-character timestamps as a speech-rhythm template and map the original text's characters proportionally onto it (rhythm mapping) instead of splitting time evenly. The subtitle text is unchanged; only the timestamps now follow the real pacing
- **Algorithm**: the i-th of N original characters maps to position `(i/N)*M` on Whisper's timeline (M = Whisper character count), linearly interpolated between adjacent Whisper timestamps
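
A minimal sketch of the rhythm mapping, assuming Whisper yields a start time for each of its M characters plus one trailing end time (the shipped `whisper_service.py` code also carries the text and word grouping):

```python
def map_chars_to_rhythm(n_chars: int, whisper_times: list[float]) -> list[float]:
    """Map N original characters onto Whisper's per-character timeline.

    whisper_times: start times of the M recognized characters plus one
    trailing entry for the end of the last one (length M + 1).
    Returns a start time for each of the n_chars original characters.
    """
    m = len(whisper_times) - 1  # M recognized characters
    starts = []
    for i in range(n_chars):
        pos = i / n_chars * m            # fractional index into Whisper's timeline
        lo = int(pos)
        frac = pos - lo
        # Linear interpolation between adjacent Whisper timestamps
        t = whisper_times[lo] + frac * (whisper_times[lo + 1] - whisper_times[lo])
        starts.append(t)
    return starts


# A pause between 1.0s and 3.0s is preserved instead of being averaged away:
print(map_chars_to_rhythm(4, [0.0, 0.5, 1.0, 3.0, 3.5]))  # -> [0.0, 0.5, 1.0, 3.0]
```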

---

## 📁 Files changed

| File | Change |
|------|------|
| `VoiceSelector.tsx` | embedded prop, mobile button sizing (`px-2 sm:px-4`) |
| `GeneratedAudiosPanel.tsx` | embedded prop, two-row layout, action-button visibility, bigger "生成配音" button |
| `MaterialSelector.tsx` | embedded prop, self-rendered sub-heading + action buttons, useMemo, disabled guard, action-button visibility, heading overflow fix |
| `TimelineEditor.tsx` | embedded prop, self-rendered sub-heading + controls, useMemo, drag grip icon |
| `PreviewPanel.tsx` | embedded prop |
| `HistoryList.tsx` | embedded prop, action-button visibility |
| `ScriptEditor.tsx` | numbered heading, emoji removed, action-button visibility |
| `TitleSubtitlePanel.tsx` | numbered heading, emoji removed, dropdown alignment, display-mode dropdown moved up, character counter |
| `BgmPanel.tsx` | numbered heading |
| `HomePage.tsx` | core refactor: merged panels, numbered headings, generate-voiceover button moved in, `scrollRestoration` + delayed fallback for scroll-to-top, progress bar extracted above the works card |
| `PublishPage.tsx` | four panels numbered (七~十), emoji removed, responsive single-row platform cards |
| `RefAudioPanel.tsx` | simplified hint copy, action-button visibility |
| `AccountSettingsDropdown.tsx` | phone number display (masked), success-state cleanup |
| `AuthContext.tsx` | new `setUser` method; user state updates right after login |
| `login/page.tsx` | calls `setUser` with the user data on successful login |
| `useHomeController.ts` | titleDisplayMode condition fix, time-gated `scrollEffectsEnabled` for list scrolling |
| `FloatingStylePreview.tsx` | mobile preview shrunk (160px) and moved to the bottom-right |
| `ScriptExtractionModal.tsx` | scrollbar hidden again |
| `ClipTrimmer.tsx` | bigger slider handles, taller touch targets |
| `lipsync_service.py` | httpx timeout 1200s → 3600s |
| `whisper_service.py` | subtitle timestamps switched from linear interpolation to Whisper rhythm mapping |

---

## 🔍 Verification

- `npm run build`: no errors or warnings
- Merged layout: sub-panels clearly separated, numbered main headings
- Backward compatible: `embedded` defaults to `false`; standalone use unaffected
- Voiceover two-row layout: speed + generate on top, list + refresh below
- Dropdowns align vertically
- Display mode applies to both title and secondary title
- Action buttons visible on touch devices
- Phone numbers masked
- Title character counter works
- All list scrollbars hidden
- Timeline drag grip icon shows
- Publish-page platform cards: compact on mobile, roomy on desktop, consistent styling
- Mobile refresh returns to the top instead of the BGM panel
- Mobile style-preview window no longer covers the controls
- Both voiceover buttons (选择声音/克隆声音) visible on mobile
- Material heading-row buttons no longer overflow on mobile
- Generate-voiceover button visually outranks auxiliary buttons
- Progress bar shows standalone above the works card
- LatentSync no longer times out and falls back on long videos
- Subtitle timestamps stay in sync with the speech; no drift on long videos

---

`Docs/DevLogs/Day27.md` (new file, 231 lines)

## Remotion stroke fix + font style expansion + TypeScript fixes (Day 27)

### Overview

Fix title/subtitle stroke rendering (strokes too thick + ghosting on the secondary title), expand the font style options (titles 4→12, subtitles 4→8), and fix TypeScript type errors in the Remotion project.

---

## ✅ Changes

### 1. Stroke rendering fix (title + subtitles)

- **Problem**: black title strokes rendered too thick; the secondary title showed ghosting
- **Root cause**: `buildTextShadow` faked the stroke with a 4-direction `textShadow`; the diagonal overlaps made the stroke look thicker than the actual `stroke_size`, and the 4 corner offsets left gaps and overlaps in between, causing ghosting
- **Fix**: switch to native CSS strokes, `-webkit-text-stroke` + `paint-order: stroke fill` (Remotion renders in Chromium, which supports both)
- **Old approach**:
```javascript
textShadow: `-8px -8px 0 #000, 8px -8px 0 #000, -8px 8px 0 #000, 8px 8px 0 #000, 0 0 16px rgba(0,0,0,0.5), 0 2px 4px rgba(0,0,0,0.3)`
```
- **New approach**:
```javascript
WebkitTextStroke: `5px #000000`,
paintOrder: 'stroke fill',
textShadow: `0 2px 4px rgba(0,0,0,0.3)`,
```
- All preset styles also drop `stroke_size` from 8 to 5, which looks cleaner with native strokes

### 2. Font style expansion

**Title styles**: 4 → 12 (+8)

| ID | Name | Font | Colors |
|----|--------|------|------|
| title_pangmen | 庞门正道 | 庞门正道标题体3.0 | white, black stroke |
| title_round | 优设标题圆 | 优设标题圆 | white, purple stroke |
| title_alibaba | 阿里数黑体 | 阿里巴巴数黑体 | white, black stroke |
| title_chaohei | 文道潮黑 | 文道潮黑 | cyan-blue, dark-blue stroke |
| title_wujie | 无界黑 | 标小智无界黑 | white, dark-gray stroke |
| title_houdi | 厚底黑 | Aa厚底黑 | red, deep-black stroke |
| title_banyuan | 寒蝉半圆体 | 寒蝉半圆体 | white, black stroke |
| title_jixiang | 欣意吉祥宋 | 字体圈欣意吉祥宋 | gold, brown stroke |

**Subtitle styles**: 4 → 8 (+4)

| ID | Name | Font | Highlight |
|----|--------|------|--------|
| subtitle_pink | 少女粉 | DingTalk JinBuTi | pink #FF69B4 |
| subtitle_lime | 清新绿 | DingTalk Sans | neon green #76FF03 |
| subtitle_gold | 金色隶书 | 阿里妈妈刀隶体 | gold #FDE68A |
| subtitle_kai | 楷体红字 | SimKai | red #FF4444 |

### 3. TypeScript type fixes

- **Root.tsx**: the `Composition` generic type mismatched the `calculateMetadata` parameter type; inlined `calculateMetadata` with explicitly annotated parameters, and constrained `defaultProps` with `satisfies VideoProps`
- **Video.tsx**: `VideoProps` gains a `[key: string]: unknown` index signature to satisfy Remotion's `Record<string, unknown>` constraint
- **VideoLayer.tsx**: `OffthreadVideo` does not support a `loop` prop; removed (it was ignored anyway)

### 4. Progress-bar copy reverted

- **Problem**: the progress bar showed detailed backend stage messages (e.g. "正在合成唇型"); users want only "正在AI生成中..."
- **Fix**: `HomePage.tsx` progress copy changed from `{currentTask.message || "正在AI生成中..."}` to the fixed `正在AI生成中...`

---

## 📁 Files changed

| File | Change |
|------|------|
| `remotion/src/components/Title.tsx` | `buildTextShadow` → `buildStrokeStyle` (native CSS stroke), applied to title + secondary title |
| `remotion/src/components/Subtitles.tsx` | `buildTextShadow` → `buildStrokeStyle` (native CSS stroke) |
| `remotion/src/Root.tsx` | fix `Composition` generic and `calculateMetadata` parameter types |
| `remotion/src/Video.tsx` | index signature on `VideoProps` |
| `remotion/src/components/VideoLayer.tsx` | remove the unsupported `loop` prop from `OffthreadVideo` |
| `backend/assets/styles/title.json` | title styles 4 → 12, `stroke_size` 8→5 |
| `backend/assets/styles/subtitle.json` | subtitle styles 4 → 8 |
| `frontend/.../HomePage.tsx` | progress copy reverted to the fixed "正在AI生成中..." |

---

## 🔍 Verification

- `npx tsc --noEmit`: zero errors
- `npm run build:render`: render script compiles
- `npm run build` (frontend): no errors
- Strokes: title/secondary title/subtitles use native CSS strokes, no ghosting, no bloat
- Style pickers load all 12 title + 8 subtitle styles

---

## Video-generation pipeline performance optimization

### Overview

A broad performance pass over the generation pipeline: FFmpeg encoding parameters, LatentSync inference parameters, multi-material parallelization, and post-processing parallelization. Estimated: a 15s single-material video drops from ~280s to ~190s (32%), a 30s two-material video from ~400s to ~240s (40%).

**Server**: 2x RTX 3090 (24GB), 2x Xeon E5-2680 v4 (56 cores), 192GB RAM

### Stage 1: FFmpeg encoding

**Final compose preset `slow` → `medium`**

- Compose time drops from ~50s to ~25s with nearly identical quality

**Intermediate files CRF 18 → 23**

- Intermediates (trim, prepare_segment, concat, loop, normalize_orientation) are not the final output and don't need high-quality encoding
- Each intermediate step gets 3-8 seconds faster

**Final compose CRF 18 → 20**

- On a 15s talking-head video, CRF 18 vs 20 is visually indistinguishable
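
The parameter split can be sketched as a small argument builder. This is a hypothetical wrapper assuming a plain libx264 invocation; the real commands in `video_service.py` carry more filters and stream options:

```python
import shlex


def encode_args(src: str, dst: str, final: bool) -> list[str]:
    """Build x264 flags per the tuning above: intermediates get the cheaper
    CRF 23, while the final compose keeps CRF 20 with preset medium."""
    crf = "20" if final else "23"
    return [
        "ffmpeg", "-y", "-i", src,
        "-c:v", "libx264", "-preset", "medium", "-crf", crf,
        "-c:a", "aac",
        dst,
    ]


print(shlex.join(encode_args("in.mp4", "out.mp4", final=True)))
```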

### Stage 2: LatentSync inference tuning

**inference_steps 20 → 16**

- Inference time shrinks linearly by 20% (~180s → ~144s)

**guidance_scale 2.0 → 1.5**

- Lower classifier-free guidance weight slightly reduces per-step compute (5-10%)

> ⚠️ Both changes need a LatentSync service restart and a lip-quality check before keeping; revert the .env values if quality suffers.

### Stage 3: multi-material parallelization

**Parallel material download + normalization**

- The sequential `for` loop becomes `asyncio.gather()`; `normalize_orientation` runs in a thread pool via `run_in_executor`
- N materials go from N×5s sequential to ~5s

**Parallel segment preparation**

- One-by-one `prepare_segment` becomes `asyncio.gather()` + `run_in_executor`
- 2 materials ~90s → ~50s; 4 materials ~180s → ~60s
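
The pattern behind both parallel steps can be sketched like this. `prepare_segment` is a stand-in for the blocking FFmpeg call, not the real function body:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor


def prepare_segment(seg: dict) -> str:
    """Stand-in for the blocking FFmpeg work (trim/scale/re-encode one clip)."""
    return f"prepared:{seg['id']}"


async def prepare_all(segments: list[dict]) -> list[str]:
    # Each blocking FFmpeg invocation runs in its own worker thread while
    # the event loop stays free; gather() preserves input order.
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor() as pool:
        tasks = [loop.run_in_executor(pool, prepare_segment, s) for s in segments]
        return await asyncio.gather(*tasks)


print(asyncio.run(prepare_all([{"id": 1}, {"id": 2}])))  # -> ['prepared:1', 'prepared:2']
```

The same `gather()` shape covers the download/normalize step and the Whisper+BGM overlap in stage 4.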

### Stage 4: pipeline overlap

**Whisper subtitle alignment in parallel with BGM mixing**

- The two are independent (both depend only on audio_path), so they run under `asyncio.gather()`
- In single-material mode, Whisper moves from a sequential step after LatentSync to running alongside BGM
- Behavior is unchanged when BGM or subtitles are off; they only parallelize when both are enabled

### Files changed

| File | Change |
|------|------|
| `backend/app/services/video_service.py` | compose: preset slow→medium, CRF 18→20; normalize_orientation/prepare_segment/concat: CRF 18→23 |
| `backend/app/services/lipsync_service.py` | _loop_video_to_duration: CRF 18→23 |
| `backend/.env` | LATENTSYNC_INFERENCE_STEPS=16, LATENTSYNC_GUIDANCE_SCALE=1.5 |
| `backend/app/modules/videos/workflow.py` | import asyncio; parallel material download/normalization; parallel segment prep; Whisper+BGM in parallel |

### Rollback

- FFmpeg: if quality is unsatisfactory, set the final CRF back to 18 and preset back to slow
- LatentSync: if lip quality degrades, set `INFERENCE_STEPS` back to 20 and `GUIDANCE_SCALE` back to 2.0 in .env
- Parallelization: pure architecture change, no quality impact, nothing to roll back

---

## MuseTalk + LatentSync hybrid lip sync

### Overview

LatentSync 1.6 is high quality but extremely slow at inference (~78% of total runtime); long videos (>=2min) take an unacceptable 20-60 minutes. MuseTalk 1.5 is single-step latent inpainting (not a diffusion sampler), with near-real-time per-frame inference (30fps+ on V100), making it suitable for long videos. The hybrid routes automatically by audio duration: LatentSync for quality on short videos, MuseTalk for speed on long ones.

### Architecture

- **Routing threshold**: `LIPSYNC_DURATION_THRESHOLD` (default 120s)
- **Short videos (<120s)**: LatentSync 1.6 (GPU1, port 8007)
- **Long videos (>=120s)**: MuseTalk 1.5 (GPU0, port 8011)
- **Fallback**: automatic fallback to LatentSync when MuseTalk is unavailable

### Files changed

| File | Change |
|------|------|
| `models/MuseTalk/` | code copied from Temp/MuseTalk + weights downloaded |
| `models/MuseTalk/scripts/server.py` | new resident FastAPI service (port 8011, GPU0) |
| `backend/app/core/config.py` | new MUSETALK_* and LIPSYNC_DURATION_THRESHOLD settings |
| `backend/.env` | matching environment variables |
| `backend/app/services/lipsync_service.py` | new `_call_musetalk_server()` + hybrid routing + extended `check_health()` |

---

## MuseTalk inference optimization (server.py v2)

### Overview

The first long-video MuseTalk test (136s, 3404 frames) took 1799s (~30 min). Profiling showed the bottlenecks were face detection (28%), BiSeNet compositing (22%), and I/O (17%), not UNet inference itself (17%). Six optimizations should bring this down to 8-10 minutes (~3x speedup).

### Bottleneck profile (before, 1799s)

| Stage | Time | Share | Cause |
|------|------|------|---------|
| DWPose + face detection | ~510s | 28% | `batch_size_fa=1`, 2 NNs per frame, fully sequential |
| Compositing + BiSeNet face parsing | ~400s | 22% | BiSeNet + PNG write on every frame |
| UNet inference | ~300s | 17% | batch_size=8 too small |
| I/O (PNG read/write + FFmpeg) | ~300s | 17% | slow PNG compression, ffmpeg→PNG→imread chain |
| VAE encoding | ~100s | 6% | per-frame encoding, unbatched |

### The 6 optimizations

| # | Item | Details |
|---|--------|------|
| 1 | **batch_size 8→32** | `.env` change; the RTX 3090 has VRAM headroom |
| 2 | **cv2.VideoCapture frame reads** | skips the ffmpeg→PNG→imread chain, saving 3404 PNG encode/decodes |
| 3 | **Face detection every 5th frame** | run DWPose + FaceAlignment only every 5th frame, linearly interpolate bboxes in between |
| 4 | **BiSeNet mask cache (every 5th frame)** | run `get_image_prepare_material` every 5th frame; in-between frames reuse the cached mask via `get_image_blending` |
| 5 | **cv2.VideoWriter direct writes** | skip per-frame PNG writes + ffmpeg re-encode; write the mp4 directly |
| 6 | **Per-stage timing** | precise timing of 7 stages to guide further tuning |
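
Optimization #3's bbox interpolation can be sketched as below, assuming keyframe boxes exist on every `step`-th frame (the function name is illustrative; the real code is `_detect_faces_subsampled()` in server.py):

```python
def interpolate_bboxes(keyframe_boxes: dict[int, tuple], n_frames: int, step: int = 5):
    """Detect faces only on every `step`-th frame and linearly interpolate the
    (x1, y1, x2, y2) box for the frames in between, clamping at the tail."""
    last_key = max(keyframe_boxes)
    boxes = []
    for f in range(n_frames):
        prev = (f // step) * step
        nxt = min(prev + step, last_key)
        if f == prev or prev == nxt:
            boxes.append(keyframe_boxes[prev])
            continue
        t = (f - prev) / (nxt - prev)  # interpolation weight within the gap
        a, b = keyframe_boxes[prev], keyframe_boxes[nxt]
        boxes.append(tuple(round(av + t * (bv - av), 1) for av, bv in zip(a, b)))
    return boxes


kf = {0: (100, 100, 200, 200), 5: (110, 100, 210, 200)}
print(interpolate_bboxes(kf, 6))
```

Running only 1 in 5 frames through DWPose + FaceAlignment is what turns the ~510s detection stage into roughly a fifth of itself, at the cost of slightly smoothed boxes during fast head motion.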

### Files changed

| File | Change |
|------|------|
| `models/MuseTalk/scripts/server.py` | full rewrite of `_run_inference()`, new `_detect_faces_subsampled()` |
| `backend/.env` | `MUSETALK_BATCH_SIZE` 8→32 |

---

## Remotion concurrency

### Overview

On the 56-core server, Remotion rendering defaulted to only 8 concurrent workers (`min(8, cores/2)`). Raising it to 16 should cut rendering from ~5 minutes to ~2-3 minutes.

### Changes

- `remotion/render.ts`: `renderMedia()` gains a `concurrency` option (default 16), overridable via a `--concurrency` CLI flag
- `remotion/dist/render.js`: recompiled

### Files changed

| File | Change |
|------|------|
| `remotion/render.ts` | `RenderOptions` gains a `concurrency` field, passed to `renderMedia()` |
| `remotion/dist/render.js` | recompiled from TypeScript |

---

`Docs/DevLogs/Day28.md` (new file, 203 lines)

## CosyVoice FP16 speedup + doc updates + AI-rewrite UI refactor + title/subtitle panel reorder with video-frame preview (Day 28)

### Overview

Enable FP16 half-precision inference for the CosyVoice 3.0 voice-clone service, an estimated 30-40% speedup. Update 4 project docs. Refactor the AI script-rewrite UI (two-step RewriteModal flow + logic extracted from ScriptExtractionModal). On the frontend, move the "标题与字幕" panel from step two to step four (after material editing), and replace the style preview's purple-pink gradient background with a frame grab from the video's opening, for a WYSIWYG preview.

---

## ✅ Changes

### 1. CosyVoice FP16 half-precision

- **Problem**: CosyVoice 3.0 ran in full FP32 with an RTF (real-time factor) of ~0.9-1.35x; generating 2 minutes of audio took about 2 minutes
- **Root cause**: `AutoModel()` was initialized without `fp16=True`, so both LLM inference and Flow Matching (DiT) ran in FP32
- **Fix**: a one-line change enabling FP16 automatic mixed precision

```python
# Old: _model = AutoModel(model_dir=str(MODEL_DIR))
# New:
_model = AutoModel(model_dir=str(MODEL_DIR), fp16=True)
```

- **How it takes effect**: `CosyVoice3Model` wraps `llm_job()` and `token2wav()` in `torch.cuda.amp.autocast(self.fp16)`, converting the compute to FP16 automatically
- **Expected effect**:
  - 30-40% faster inference
  - ~30% lower VRAM usage
  - essentially lossless audio quality (FP16 is plenty for a 0.5B model)
- **Verified**: service restarted, self-test passed, health check `ready: true`

### 2. Doc refresh (4 files)

Backfill Day 27's additions (MuseTalk hybrid lip sync, performance work, Remotion concurrency) into all relevant docs.

#### README.md

- Project description updated to "LatentSync 1.6 + MuseTalk 1.5 hybrid lip sync"
- Lip-sync feature description switched to the hybrid scheme (LatentSync for short videos, MuseTalk for long)
- Tech-stack table gains MuseTalk 1.5
- Project structure gains `models/MuseTalk/`
- Service architecture table gains MuseTalk (port 8011)
- Docs index gains a link to the MuseTalk deploy guide
- Performance notes gain subsampled detection + Remotion 16-way concurrency

#### DEPLOY_MANUAL.md

- GPU allocation updated (GPU0=MuseTalk+CosyVoice, GPU1=LatentSync)
- Step 3 split into 3a (LatentSync) + 3b (MuseTalk)
- Env table gains 7 MuseTalk variables, drops the stale `DOUYIN_COOKIE`
- LatentSync default inference steps 20→16
- Test-run section gains a MuseTalk startup terminal
- PM2 management gains the MuseTalk service (item 5)
- Port checks and log commands gain 8011/vigent2-musetalk

#### SUBTITLE_DEPLOY.md

- Architecture diagram updated for LatentSync/MuseTalk hybrid routing
- New lip-sync routing notes
- Remotion config table gains the `concurrency` parameter (default 16)
- GPU allocation updated
- Changelog gains a v1.3.0 entry

#### BACKEND_README.md

- Health-check endpoint description now covers LatentSync + MuseTalk + the hybrid routing threshold
- Env config gains the MuseTalk variables
- Integration guide gains a "lip-sync hybrid routing" section

---

### 3. AI-rewrite UI refactor

#### RewriteModal refactor

The AI-rewrite modal becomes a two-step flow for a better interaction:

**Step 1 — configure and trigger**:
- Optional custom-prompt input, auto-persisted to localStorage
- The "开始改写" button fires the `/api/ai/rewrite` request

**Step 2 — compare and choose**:
- Top: the AI rewrite plus a "使用此结果" button (purple-pink gradient, prominent)
- Bottom: the original text plus a "保留原文" button (muted gray)
- Footer: "重新改写" returns to step 1, keeping the custom prompt
- ESC closes the modal

#### ScriptExtractionModal logic extraction

All business logic of the extraction modal moved into a standalone hook, `useScriptExtraction`:

- **useScriptExtraction.ts** (new): URL/file dual-mode input, drag-and-drop upload, extraction requests, the step state machine (config → processing → result), clipboard copy
- **ScriptExtractionModal.tsx**: pure presentational component consuming the hook; new ESC/Enter shortcuts

#### ScriptEditor toolbar

- Button group right-aligned (`justify-end`), unified `h-7` height and corner radius
- "历史文案" in gray (bg-gray-600) to mark the auxiliary feature
- "文案提取助手" in purple (bg-purple-600) as the primary feature
- "AI多语言" in an emerald-teal gradient, "AI生成标题标签" in a blue-cyan gradient
- "AI智能改写" and "保存文案" moved to the status bar below the textarea

---

### 4. Title/subtitle panel reorder + video-frame preview background

#### Panel reorder

`<TitleSubtitlePanel>` moves from step two to step four (after material editing), so users set title/subtitle styles after they have picked materials and arranged the timeline.

New order:
```
一、文案提取与编辑 (unchanged)
二、配音 (was 三)
三、素材编辑 (was 四)
四、标题与字幕 (was 二) → moved after material editing
```

#### New useVideoFrameCapture hook

Grabs the frame at 0.1s from a video URL and returns a JPEG data URL:

- Creates a `<video>` element with `crossOrigin="anonymous"` (materials live at a cross-origin Supabase Storage address)
- Binds `loadedmetadata` / `canplay` / `seeked` / `error` listeners before setting src (so no event is missed)
- After `loadedmetadata` or `canplay` fires, seeks to 0.1s; the `seeked` callback draws the frame via canvas `drawImage`
- Canvas scaled down to 480px wide before encoding (the preview maxes out at 280px, saving memory)
- Exports via `canvas.toDataURL("image/jpeg", 0.7)`
- Defends against `videoWidth/videoHeight` being 0
- try-catch against canvas taint; returns null on failure (falls back to the gradient)
- An `isActive` flag plus a `seeked` dedup flag prevent stale and duplicate updates
- Cleans up the video element after capture to release memory

#### On-demand capture (performance)

Capture is triggered only while the style preview window is open:

```typescript
const materialPosterUrl = useVideoFrameCapture(
  showStylePreview ? firstTimelineMaterialUrl : null
);
```

The capture source prefers the **first timeline segment** (the real opening clip after the user's drag ordering), falling back to `selectedMaterials[0]` (when no voiceover exists yet and the timeline is empty).

#### Preview background swap

When a video frame is available, `FloatingStylePreview` shows the raw frame (no translucent overlay, so colors stay true) and relies on the text strokes for readability; without a frame it falls back to the original purple-pink gradient.

#### Pitfalls

1. **CORS tainted canvas**: materials live in Supabase Storage (`api.hbyrkj.top`) behind cross-origin signed URLs; `video.crossOrigin = "anonymous"` is required or canvas `toDataURL` is blocked with a SecurityError
2. **Empty timeline**: `useTimelineEditor` returns an empty array when `audioDuration <= 0` (no voiceover selected), so fall back to `selectedMaterials[0]`
3. **Listener order**: bind the listeners before setting `video.src`, or fast loads can drop events

---

## 📁 Files changed

| File | Change |
|------|------|
| `models/CosyVoice/cosyvoice_server.py` | `AutoModel()` gains the `fp16=True` parameter |
| `README.md` | hybrid lip-sync description, tech stack, service architecture, project structure |
| `Docs/DEPLOY_MANUAL.md` | MuseTalk deploy steps, env vars, PM2 management, port checks |
| `Docs/SUBTITLE_DEPLOY.md` | architecture diagram, Remotion concurrency, GPU allocation, changelog |
| `Docs/BACKEND_README.md` | health check, env vars, hybrid-routing section |
| `frontend/.../RewriteModal.tsx` | two-step rewrite flow (custom prompt → result comparison) |
| `frontend/.../script-extraction/useScriptExtraction.ts` | **new**: script-extraction logic hook |
| `frontend/.../ScriptExtractionModal.tsx` | pure presentational component, consumes the hook, new shortcuts |
| `frontend/.../ScriptEditor.tsx` | toolbar right-aligned + button colors + rewrite/save moved to the bottom |
| `frontend/.../useVideoFrameCapture.ts` | **new**: video frame-capture hook, crossOrigin + canvas scaling |
| `frontend/.../useHomeController.ts` | useMemo-computed material URL, frame-capture call, showStylePreview gating |
| `frontend/.../HomePage.tsx` | panel reorder (二↔四 swapped), renumbering, materialPosterUrl passthrough |
| `frontend/.../TitleSubtitlePanel.tsx` | number "二"→"四", new previewBackgroundUrl prop |
| `frontend/.../FloatingStylePreview.tsx` | new previewBackgroundUrl prop, conditional video-frame/gradient background |

---

## 🔍 Verification

- CosyVoice restarted, health check `{"ready": true}`
- Self-test inference passed (7.2s for "你好")
- FP16 active via `torch.cuda.amp.autocast(self.fp16)` in the LLM and Flow Matching stages
- `npx tsc --noEmit`: zero errors
- AI rewrite: custom prompt persists → rewrite vs original comparison → "使用此结果"/"保留原文"
- Script extraction: URL/file modes → processing animation → result filled in
- Panel order: 一→文案, 二→配音, 三→素材编辑, 四→标题与字幕
- Preview background: real opening frame when materials exist, gradient fallback otherwise
- No capture while the preview is closed; no wasted work

---
|
||||||
|
|
||||||
|
## 💡 CosyVoice 性能分析备注
|
||||||
|
|
||||||
|
### 当前性能基线 (FP32, 优化前)
|
||||||
|
|
||||||
|
| 文本长度 | 音频时长 | 推理耗时 | RTF |
|
||||||
|
|----------|----------|----------|-----|
|
||||||
|
| 42 字 | 9.8s | 13.2s | 1.35x |
|
||||||
|
| 89 字 | 18.2s | 20.3s | 1.12x |
|
||||||
|
| ~530 字 | 115.8s | 107.7s | 0.93x |
|
||||||
|
| ~670 字 | 143.5s | 131.6s | 0.92x |
|
||||||
|
|
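RTF (real-time factor) here is inference time divided by audio duration, so values below 1.0 mean synthesis runs faster than playback:

```typescript
// RTF (real-time factor) = inference time / audio duration.
// Below 1.0 the model synthesizes faster than real time.
function rtf(inferenceSeconds: number, audioSeconds: number): number {
  return inferenceSeconds / audioSeconds;
}

console.log(rtf(13.2, 9.8).toFixed(2));    // short text: "1.35" (slower than real time)
console.log(rtf(107.7, 115.8).toFixed(2)); // long text:  "0.93" (faster than real time)
```

Note how RTF improves with longer texts: the fixed per-request overhead is amortized over more audio.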
### Optional future optimizations (diminishing returns; not implemented for now)

| Optimization | Expected gain | Complexity |
|--------|----------|--------|
| TensorRT (DiT module) | +20-30% | Requires compiling a .plan engine |
| torch.compile() | +10-20% | One line of code, but slow first compilation |
| vLLM (LLM module) | +10-15% | Extra dependency |
@@ -30,7 +30,7 @@
 | ⚡ **Med** | `Docs/BACKEND_README.md` | **(backend docs)** API descriptions, architecture design |
 | ⚡ **Med** | `Docs/FRONTEND_DEV.md` | **(frontend conventions)** API wrappers, date formatting, new-page conventions |
 | ⚡ **Med** | `Docs/FRONTEND_README.md` | **(frontend docs)** feature descriptions, page changes |
-| 🧊 **Low** | `Docs/*_DEPLOY.md` | **(subsystem deployment)** standalone deployment docs for LatentSync/Qwen3/subtitles, etc. |
+| 🧊 **Low** | `Docs/*_DEPLOY.md` | **(subsystem deployment)** standalone deployment docs for LatentSync/CosyVoice/subtitles, etc. |
 
 ---
@@ -195,7 +195,8 @@ ViGent2/Docs/
 ├── DEPLOY_MANUAL.md        # Deployment manual
 ├── SUPABASE_DEPLOY.md      # Supabase deployment doc
 ├── LATENTSYNC_DEPLOY.md    # LatentSync deployment doc
-├── QWEN3_TTS_DEPLOY.md     # Voice-cloning deployment doc
+├── COSYVOICE3_DEPLOY.md    # Voice-cloning deployment doc
+├── ALIPAY_DEPLOY.md        # Alipay payment deployment doc
 ├── SUBTITLE_DEPLOY.md      # Subtitle system deployment doc
 └── DevLogs/
     ├── Day1.md             # Dev log
@@ -304,4 +305,4 @@ ViGent2/Docs/
 
 ---
 
-**Last updated**: 2026-02-08
+**Last updated**: 2026-02-11
@@ -10,8 +10,9 @@ frontend/src/
 │   ├── page.tsx          # Home page (video generation)
 │   ├── publish/          # Publish management page
 │   ├── admin/            # Admin pages
 │   ├── login/            # Login
-│   └── register/         # Registration
+│   ├── register/         # Registration
+│   └── pay/              # Paid membership activation
 ├── features/             # Feature modules (split by business domain)
 │   ├── home/
 │   │   ├── model/        # Business-logic hooks
@@ -150,6 +151,33 @@ body {
 | `sm:` | ≥ 640px | Tablet/desktop |
 | `lg:` | ≥ 1024px | Large desktop |
 
+### The embedded component pattern
+
+When panels are merged, child components use an `embedded?: boolean` prop to control whether they render the outer card container and main heading.
+
+```tsx
+// embedded=false (standalone use): renders the full card
+<div className="bg-white/5 rounded-2xl p-6 border border-white/10">
+  <h2>Title</h2>
+  {content}
+</div>
+
+// embedded=true (nested inside a parent card): renders only the content
+{content}
+```
+
+- Sub-headings use `<h3 className="text-sm font-medium text-gray-400">`
+- Dividers use `<div className="border-t border-white/10 my-4" />`
+- Avoid `whitespace-nowrap` on mobile heading rows; long descriptions can be hidden on mobile with `hidden sm:inline`
+
+### Button visual hierarchy
+
+| Level | Styles | Usage |
+|------|------|------|
+| Primary action | `px-4 py-2 text-sm font-medium bg-gradient-to-r from-purple-600 to-pink-600 shadow-sm` | Generate voiceover, Publish now |
+| Secondary action | `px-2 py-1 text-xs bg-white/10 rounded` | Refresh, upload, speed |
+| Touch-visible | `opacity-40 group-hover:opacity-100` | Inline row actions (edit/delete) |
 
 ---
 
 ## API request conventions
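The three-level hierarchy can be centralized in one lookup so panels stay consistent — a sketch, with illustrative names (not an existing helper in the codebase):

```typescript
type ButtonLevel = "primary" | "secondary" | "inline";

// Tailwind class strings mirroring the three levels in the table above.
const LEVEL_CLASSES: Record<ButtonLevel, string> = {
  primary:
    "px-4 py-2 text-sm font-medium bg-gradient-to-r from-purple-600 to-pink-600 shadow-sm",
  secondary: "px-2 py-1 text-xs bg-white/10 rounded",
  inline: "opacity-40 group-hover:opacity-100",
};

// Look up the class list for a given level.
function buttonClass(level: ButtonLevel): string {
  return LEVEL_CLASSES[level];
}
```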
@@ -256,6 +284,38 @@ import { formatDate } from '@/shared/lib/media';
 
 ## ⚡️ UX optimization conventions
 
+### Scroll to top on refresh (consistent experience)
+
+- Long pages (e.g. home / publish) uniformly scroll back to the top on first mount.
+- You **must** set `history.scrollRestoration = "manual"` in a page-level `useEffect` to disable the browser's native scroll restoration.
+- Call `window.scrollTo({ top: 0, left: 0, behavior: "auto" })` and add a 200ms delayed fallback (guards against async effects overriding it).
+- **List auto-scrolling must be time-gated**: all list auto-scroll effects are disabled for 1 second after page load (a `scrollEffectsEnabled` ref), preventing persisted-state restoration plus async data loading from triggering `scrollIntoView` and making the page jump.
+- Recommended pattern:
+
+```typescript
+// Page level (HomePage / PublishPage)
+useEffect(() => {
+  if (typeof window === "undefined") return;
+  if ("scrollRestoration" in history) history.scrollRestoration = "manual";
+  window.scrollTo({ top: 0, left: 0, behavior: "auto" });
+  const timer = setTimeout(() => window.scrollTo({ top: 0, left: 0, behavior: "auto" }), 200);
+  return () => clearTimeout(timer);
+}, []);
+
+// Controller level (time gate for list scrolling)
+const scrollEffectsEnabled = useRef(false);
+useEffect(() => {
+  const timer = setTimeout(() => { scrollEffectsEnabled.current = true; }, 1000);
+  return () => clearTimeout(timer);
+}, []);
+
+// List scroll effects (BGM / materials / videos, etc.)
+useEffect(() => {
+  if (!selectedId || !scrollEffectsEnabled.current) return;
+  target?.scrollIntoView({ block: "nearest", behavior: "smooth" });
+}, [selectedId, list]);
+```
+
 ### Route prefetching
 
 - Use `router.prefetch("/publish")` when entering publish management from the home page
@@ -305,9 +365,12 @@ import { formatDate } from '@/shared/lib/media';
 - **Must persist**:
   - Title style ID / subtitle style ID
   - Title font size / subtitle font size
+  - Title display mode (`short` / `persistent`)
   - Background music selection / volume / on-off state
+  - Output aspect ratio (`9:16` / `16:9`)
   - Material selection / past-works selection
   - Selected voiceover ID (`selectedAudioId`)
+  - Speed (`speed`, voice-clone mode)
   - Timeline segment info (`useTimelineEditor`'s localStorage)
 
 ### Past scripts (persisted independently)
@@ -332,6 +395,7 @@ import { formatDate } from '@/shared/lib/media';
 - The opening title and the publish-info title are both limited to 15 characters.
 - Never truncate during IME composition; validate length only after composition ends.
 - Editing the opening title on the home page also writes to `vigent_${storageKey}_publish_title`.
+- The title display mode uses two fixed values, `short` / `persistent`; the default is `short` (shown briefly for 4 seconds).
 - Avoid `maxLength`, which force-truncates IME composition state.
 - Prefer `@/shared/hooks/useTitleInput` to centralize the input logic.
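The IME rule above boils down to deferring validation until `compositionend` — a framework-free sketch of the decision logic (names are illustrative; the project's `useTitleInput` hook is the real home for this):

```typescript
const MAX_TITLE_LEN = 15;
let composing = false;

// Returns the value to keep: during IME composition the raw value passes
// through untouched; once composition ends, over-long input is rejected
// (keeping the previous value) rather than truncated.
function handleTitleChange(next: string, previous: string): string {
  if (composing) return next; // never cut mid-composition
  return next.length <= MAX_TITLE_LEN ? next : previous;
}

// Typical wiring:
//   input.addEventListener("compositionstart", () => { composing = true; });
//   input.addEventListener("compositionend", () => { composing = false; /* re-validate */ });
```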
@@ -361,9 +425,11 @@ import { formatDate } from '@/shared/lib/media';
 
 | Endpoint | Method | Purpose |
 |------|------|------|
-| `/api/ref-audios` | POST | Upload a reference audio (multipart/form-data: file + ref_text) |
+| `/api/ref-audios` | POST | Upload a reference audio (multipart/form-data: file; ref_text optional — the backend auto-transcribes with Whisper) |
 | `/api/ref-audios` | GET | List the user's reference audios |
+| `/api/ref-audios/{id}` | PUT | Rename a reference audio |
 | `/api/ref-audios/{id}` | DELETE | Delete a reference audio (id must be encodeURIComponent-ed) |
+| `/api/ref-audios/{id}/retranscribe` | POST | Re-transcribe a reference audio (Whisper transcription + auto-trim beyond 10s) |
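A sketch of building the upload body for the table above (only the multipart assembly; sending it through the project's `api` wrapper is shown in a comment):

```typescript
// Build the multipart body for POST /api/ref-audios.
// ref_text is optional: when omitted, the backend auto-transcribes with Whisper.
function buildRefAudioForm(file: Blob, fileName: string, refText?: string): FormData {
  const form = new FormData();
  form.append("file", file, fileName);
  if (refText !== undefined && refText !== "") form.append("ref_text", refText);
  return form;
}

// Usage: await api.post("/api/ref-audios", buildRefAudioForm(file, file.name));
// Deleting later must encode the id (it contains a slash, "user_id/..."):
//   await api.delete(`/api/ref-audios/${encodeURIComponent(id)}`);
```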
 
 ### Video-generation API extensions
@@ -382,7 +448,8 @@ await api.post('/api/videos/generate', {
   text: 'voiceover script',
   tts_mode: 'voiceclone',
   ref_audio_id: 'user_id/timestamp_name.wav',
-  ref_text: 'text matching the reference audio',
+  ref_text: 'text matching the reference audio', // fetched automatically from the reference-audio metadata
+  speed: 1.0, // speech rate (0.8-1.2)
 });
 ```
@@ -396,8 +463,14 @@ const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
 const mediaRecorder = new MediaRecorder(stream, { mimeType: 'audio/webm' });
 ```
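Continuing the recording flow above — collecting the recorded chunks into a single uploadable Blob is standard MediaRecorder usage (the upload target shown in the comment is this project's reference-audio endpoint):

```typescript
// Assemble recorded chunks into one uploadable Blob once recording stops.
function chunksToBlob(chunks: BlobPart[], mimeType = "audio/webm"): Blob {
  return new Blob(chunks, { type: mimeType });
}

// Typical wiring:
//   const chunks: Blob[] = [];
//   mediaRecorder.ondataavailable = (e) => { if (e.data.size > 0) chunks.push(e.data); };
//   mediaRecorder.onstop = () => {
//     const blob = chunksToBlob(chunks);
//     // upload as the "file" field of POST /api/ref-audios
//   };
//   mediaRecorder.start();
```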
+### Automatic reference-audio processing
+
+- **Auto-transcription**: when a reference audio is uploaded, the backend automatically runs Whisper and uses the transcript as `ref_text`; no manual input is needed
+- **Auto-trimming**: reference audios longer than 10 seconds are automatically trimmed at a silence point to the first 10 seconds (CosyVoice recommends 3-10 seconds)
+- **Re-transcription**: older reference audios can be re-transcribed and trimmed via the retranscribe endpoint
+
 ### UI structure
 
 The voiceover modes use tab switching:
 - **EdgeTTS voices** - preset voices in a 2x3 grid
-- **Voice clone** - reference-audio list + in-browser recording + reference-text input
+- **Voice clone** - reference-audio list + in-browser recording + speed dropdown (5 levels: slower / slightly slow / normal / slightly fast / faster)
@@ -5,14 +5,12 @@ The ViGent2 frontend, built with Next.js 16 + TailwindCSS.
 ## ✨ Core features
 
 ### 1. Video generation (`/`)
-- **Material management**: drag-and-drop upload of presenter videos, live preview.
-- **Material renaming**: materials can be renamed directly in the list.
-- **Script voiceover**: integrated EdgeTTS with multiple voices (Yunxi / Xiaoxiao).
-- **AI titles/tags**: one-click generation of video titles and tags (Day 14).
-- **Title/subtitle styles**: style selection + preview + font-size adjustment (Day 16).
-- **Background music**: preview + volume control + persisted selection (Day 16).
-- **Interaction polish**: persisted selections, in-list scrolling, scroll-to-top on refresh (Day 16).
-- **Preview consistency**: title/subtitle previews scale to the material's resolution, closer to the final render (Day 17).
+- **一、Script extraction & editing**: script input / extraction / translation / saving.
+- **二、Voiceover**: voiceover mode (EdgeTTS / voice clone) + voiceover list (generate / preview / manage) merged into one panel.
+- **三、Material editing**: video materials (upload / select / manage) + timeline editing (waveform / color blocks / drag-to-reorder) merged into one panel.
+- **四、Title & subtitles**: opening title / secondary title / subtitle style configuration; brief or persistent display; the style preview uses an opening video frame as the real background (Day 28).
+- **五、Background music**: preview + volume control + persisted selection.
+- **六、Works** (right column): works list + work preview merged into one panel.
 - **Progress tracking**: live video-generation progress display (10% -> 100%).
 - **Work preview**: play and download directly after generation (work preview + past works).
 - **Preview optimization**: preview videos prefetch `metadata` for a faster first frame.
@@ -35,8 +33,10 @@ The ViGent2 frontend, built with Next.js 16 + TailwindCSS.
 
 ### 3. Voice cloning [added Day 13]
 - **TTS mode selection**: switch between EdgeTTS (preset voices) and voice clone (custom voice).
-- **Reference-audio management**: upload / list / delete reference audios (3-20s WAV).
-- **One-click cloning**: after selecting a reference audio, the Qwen3-TTS service is called automatically.
+- **Reference-audio management**: upload / list / rename / delete reference audios; after upload, `ref_text` is auto-transcribed with Whisper and clips over 10s are auto-trimmed.
+- **Re-transcription**: older reference audios can be re-transcribed and trimmed (RotateCw button).
+- **One-click cloning**: after selecting a reference audio, the CosyVoice 3.0 service is called automatically.
+- **Speed control**: 5 speed levels (0.8-1.2) in voice-clone mode, selection persisted (Day 23).
 - **Multi-language support**: 10-language EdgeTTS voice list; voice-clone language passed through (Day 22).
 
 ### 4. Voiceover-first + timeline arrangement [added Day 23]
@@ -45,16 +45,19 @@ The ViGent2 frontend, built with Next.js 16 + TailwindCSS.
 - **Timeline editor**: wavesurfer.js audio waveform + color-block visualization of material assignment; drag dividers to adjust each segment's duration.
 - **Clip trimming**: ClipTrimmer dual-handle range slider + HTML5 video preview playback.
 - **Drag to reorder**: timeline color blocks support HTML5 Drag & Drop to swap material order.
-- **Custom assignment**: backend `custom_assignments` supports user-defined material assignment plans.
+- **Custom assignment**: backend `custom_assignments` supports user-defined material assignment plans (including `source_start`/`source_end` trim ranges).
+- **Timeline semantic alignment**: when the timeline exceeds the audio, only visible segments are kept and the last is clipped flush; overflow segments do not take part in generation. When shorter than the audio, the last visible segment loops to fill.
+- **Aspect-ratio control**: the timeline header offers a `9:16 / 16:9` output-ratio selector; the setting is persisted and passed to the backend.
 
 ### 5. Subtitles & titles [added Day 13]
-- **Opening title**: optional input, 15-character limit; shown for 3 seconds at the start of the video with fade in/out.
+- **Opening title**: optional input, 15-character limit; supports "brief display / persistent display", defaulting to brief (4 seconds); applies to both the title and the secondary title.
+- **Opening secondary title**: optional input, 20-character limit; shown beneath the main title for extra context or a teaser; independent style configuration (font / size / color / spacing); can be AI-generated together with the title; shares the title's display-mode setting; appears only in the video frame, not in the publish title (Day 25).
 - **Title sync**: edits to the opening title on the home page sync to the publish-info title.
 - **Word-by-word highlighted subtitles**: karaoke effect, on by default, can be disabled.
 - **Auto alignment**: word-level timestamps generated with faster-whisper.
-- **Style presets**: title/subtitle style selection + preview + font-size adjustment (Day 16).
+- **Style presets**: title/subtitle/secondary-title style selection + preview + font-size adjustment (Day 16/25).
 - **Default styles**: title 90px ZCOOL KuaiLe (站酷快乐体); subtitles 60px classic yellow + DingTalkJinBuTi (Day 17).
-- **Style persistence**: title/subtitle styles and font sizes survive refresh (Day 17).
+- **Style persistence**: title/subtitle/secondary-title styles and font sizes survive refresh (Day 17/25).
 
 ### 6. Background music [added Day 16]
 - **Preview**: clicking preview also selects the track; the volume slider takes effect live.
@@ -62,12 +65,20 @@ The ViGent2 frontend, built with Next.js 16 + TailwindCSS.
 
 ### 7. Account settings [added Day 15]
 - **Phone-number login**: login with a validated 11-digit Chinese phone number.
-- **Account dropdown**: shows validity period + change password + sign out.
+- **Account dropdown**: shows the phone number (middle four digits masked) + validity period + change password + sign out.
 - **Change password**: modal asks for the current and new passwords; forces re-login afterwards.
+- **Login takes effect immediately**: AuthContext writes the user data right after a successful login, so the phone number shows without a refresh.
+
+### 8. Paid membership activation (`/pay`)
+- **Alipay desktop website payment**: redirects to Alipay's official checkout; supports QR code, account login, balance, and other payment methods.
+- **Automatic activation**: after a successful payment, the async callback activates the membership automatically (valid for 1 year); the frontend polls for the payment result.
+- **Renewal on expiry**: when the membership expires, login redirects to the payment page to renew; the flow matches first-time activation.
+- **Admin activation**: manual activation by an admin coexists; the two paths do not interfere.
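A sketch of the frontend polling loop described above, assuming a hypothetical status endpoint that reports whether the membership has been activated (the endpoint name and response shape are assumptions, not the project's actual API):

```typescript
// Poll until the backend reports the membership as active, or give up.
async function pollPaymentResult(
  checkStatus: () => Promise<{ active: boolean }>,
  { intervalMs = 3000, maxAttempts = 100 } = {},
): Promise<boolean> {
  for (let i = 0; i < maxAttempts; i++) {
    const { active } = await checkStatus();
    if (active) return true; // the async Alipay callback has activated the account
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return false; // let the user retry or contact support
}

// Usage (assumed endpoint):
//   const ok = await pollPaymentResult(() => api.get("/api/pay/status").then(r => r.data));
```

Polling is needed because Alipay confirms payment via a server-to-server callback; the browser never receives the result directly.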
 ### 9. Script-extraction assistant (`ScriptExtractionModal`) [added Day 15]
 - **Multi-source extraction**: file drag-and-drop upload and URL pasting (Bilibili / Douyin / TikTok).
-- **AI rewriting**: integrated GLM-4.7-Flash, automatically rewrites into a voiceover script.
+- **AI smart rewriting**: integrated GLM-4.7-Flash, automatically rewrites into a voiceover script.
+- **Custom prompt**: the rewrite prompt can be customized (leave empty for the default); the setting persists to localStorage (Day 25).
 - **One-click fill-in**: extraction results fill the video-generation input directly.
 - **Smart interaction**: live progress display, accidental-dismiss protection.
@@ -105,6 +116,8 @@ src/
 │   ├── page.tsx          # Video-generation home page
 │   ├── publish/          # Publish management page
 │   │   └── page.tsx
+│   ├── pay/              # Paid membership activation page
+│   │   └── page.tsx
 │   └── layout.tsx        # Global layout (nav bar)
 ├── features/
 │   ├── home/
@@ -129,5 +142,8 @@ src/
 ## 🎨 Design conventions
 
 - **Primary palette**: deep purple / black (dark mode)
-- **Interaction**: subtle hover animations
-- **Responsive**: tuned for large desktop screens
+- **Interaction**: subtle hover animations; action buttons are semi-visible by default (opacity-40) and fully lit on hover, accommodating touch devices
+- **Responsive**: adapted for desktop and mobile; publish-page platform cards use a responsive layout (compact on mobile, spacious on desktop)
+- **Scrolling**: list scrollbars are uniformly hidden (hide-scrollbar); refresh returns to the top (browser scroll restoration disabled + time-gated list scrolling)
+- **Style preview**: floating preview window, 280px top-left on desktop, 160px bottom-right on mobile (does not cover controls)
+- **Input assistance**: live character counters on title/secondary-title inputs, turning red when over the limit
Docs/MUSETALK_DEPLOY.md (new file, 252 lines)
@@ -0,0 +1,252 @@
# MuseTalk Deployment Guide

> **Last updated**: 2026-02-27
> **Applies to**: MuseTalk v1.5 (resident-service mode)
> **Architecture**: FastAPI resident service + PM2 process management

---

## Architecture overview

MuseTalk is the long-video engine in the **hybrid lip-sync scheme**:

- **Short videos (<120s)** → LatentSync 1.6 (GPU1, port 8007)
- **Long videos (>=120s)** → MuseTalk 1.5 (GPU0, port 8011)
- The routing threshold is controlled by `LIPSYNC_DURATION_THRESHOLD`
- Falls back automatically to LatentSync when MuseTalk is unavailable
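The routing rule can be sketched as follows (the actual implementation lives in the Python backend's `lipsync_service.py`; this TypeScript version is illustrative only):

```typescript
const LIPSYNC_DURATION_THRESHOLD = 120; // seconds

type Engine = "latentsync" | "musetalk";

// Pick the lip-sync engine by audio duration, falling back to
// LatentSync when the MuseTalk service is not healthy.
function chooseLipsyncEngine(audioSeconds: number, musetalkHealthy: boolean): Engine {
  if (audioSeconds >= LIPSYNC_DURATION_THRESHOLD && musetalkHealthy) return "musetalk";
  return "latentsync";
}
```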
---

## Hardware requirements

| Item | Minimum | Recommended |
|------|----------|----------|
| GPU | 8GB VRAM (RTX 3060) | 24GB VRAM (RTX 3090) |
| RAM | 32GB | 64GB |
| CUDA | 11.7+ | 11.8 |

> MuseTalk fp16 inference needs roughly 4-8GB of VRAM and can share GPU0 with CosyVoice.

---

## Installation

### 1. Conda environment

```bash
cd /home/rongye/ProgramFiles/ViGent2/models/MuseTalk
conda create -n musetalk python=3.10 -y
conda activate musetalk
```

### 2. PyTorch 2.0.1 + CUDA 11.8

> This exact version is required; the prebuilt mmcv wheel depends on it.

```bash
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118
```

### 3. Dependencies

```bash
pip install -r requirements.txt

# MMLab stack
pip install --no-cache-dir -U openmim
mim install mmengine
mim install "mmcv==2.0.1"
mim install "mmdet==3.1.0"
pip install chumpy --no-build-isolation
pip install "mmpose==1.1.0" --no-deps

# FastAPI service dependencies
pip install fastapi uvicorn httpx
```

---
## Model weights

### Directory layout

```
models/MuseTalk/models/
├── musetalk/                ← v1 base model
│   ├── config.json -> musetalk.json (symlink)
│   ├── musetalk.json
│   ├── musetalkV15 -> ../musetalkV15 (symlink, critical!)
│   └── pytorch_model.bin (~3.2GB)
├── musetalkV15/             ← v1.5 UNet model
│   ├── musetalk.json
│   └── unet.pth (~3.2GB)
├── sd-vae/                  ← Stable Diffusion VAE
│   ├── config.json
│   └── diffusion_pytorch_model.bin
├── whisper/                 ← OpenAI Whisper Tiny
│   ├── config.json
│   ├── pytorch_model.bin (~151MB)
│   └── preprocessor_config.json
├── dwpose/                  ← DWPose human pose detection
│   └── dw-ll_ucoco_384.pth (~387MB)
├── syncnet/                 ← SyncNet lip-sync evaluation
│   └── latentsync_syncnet.pt
└── face-parse-bisent/       ← face-parsing model
    ├── 79999_iter.pth (~53MB)
    └── resnet18-5c106cde.pth (~45MB)
```

### Downloading

Using the project's bundled script:

```bash
cd /home/rongye/ProgramFiles/ViGent2/models/MuseTalk
conda activate musetalk
bash download_weights.sh
```

Or manually via the Python API:

```bash
conda activate musetalk
export HF_ENDPOINT=https://hf-mirror.com
python -c "
from huggingface_hub import snapshot_download
snapshot_download('TMElyralab/MuseTalk', local_dir='models',
                  allow_patterns=['musetalk/*', 'musetalkV15/*'])
snapshot_download('stabilityai/sd-vae-ft-mse', local_dir='models/sd-vae',
                  allow_patterns=['config.json', 'diffusion_pytorch_model.bin'])
snapshot_download('openai/whisper-tiny', local_dir='models/whisper',
                  allow_patterns=['config.json', 'pytorch_model.bin', 'preprocessor_config.json'])
snapshot_download('yzd-v/DWPose', local_dir='models/dwpose',
                  allow_patterns=['dw-ll_ucoco_384.pth'])
"
```

### Creating the required symlinks

```bash
cd /home/rongye/ProgramFiles/ViGent2/models/MuseTalk/models/musetalk
ln -sf musetalk.json config.json
ln -sf ../musetalkV15 musetalkV15
```

> **Critical**: a missing `musetalk/musetalkV15` symlink makes the weight check fail (`weights: False`).
---

## Starting the service

### PM2 process management (recommended)

```bash
# First-time registration
cd /home/rongye/ProgramFiles/ViGent2
pm2 start run_musetalk.sh --name vigent2-musetalk
pm2 save

# Day-to-day management
pm2 restart vigent2-musetalk
pm2 logs vigent2-musetalk
pm2 stop vigent2-musetalk
```

### Manual start

```bash
cd /home/rongye/ProgramFiles/ViGent2/models/MuseTalk
/home/rongye/ProgramFiles/miniconda3/envs/musetalk/bin/python scripts/server.py
```

### Health check

```bash
curl http://localhost:8011/health
# {"status":"ok","model_loaded":true}
```

---

## Backend configuration

Relevant variables in `backend/.env`:

```ini
# MuseTalk settings
MUSETALK_GPU_ID=0                        # GPU index (shares GPU0 with CosyVoice)
MUSETALK_API_URL=http://localhost:8011   # resident-service address
MUSETALK_BATCH_SIZE=32                   # inference batch size
MUSETALK_VERSION=v15                     # model version
MUSETALK_USE_FLOAT16=true                # half-precision acceleration

# Hybrid lip-sync routing
LIPSYNC_DURATION_THRESHOLD=120           # seconds; >= this value uses MuseTalk
```

---

## Related files

| File | Description |
|------|------|
| `models/MuseTalk/scripts/server.py` | FastAPI resident service (port 8011) |
| `run_musetalk.sh` | PM2 launch script |
| `backend/app/services/lipsync_service.py` | Hybrid routing + `_call_musetalk_server()` |
| `backend/app/core/config.py` | `MUSETALK_*` settings |

---
## Performance optimization (server.py v2)

The first long-video test (136s, 3404 frames) took 30 minutes. Profiling showed the bottlenecks were face detection (28%), BiSeNet blending (22%), and I/O (17%) — not UNet inference (17%).

### Optimizations applied

| Optimization | Description |
|--------|------|
| `MUSETALK_BATCH_SIZE` 8→32 | RTX 3090 has ample VRAM; ~3x faster UNet inference |
| Direct frame reads via cv2.VideoCapture | Skips the ffmpeg→PNG→imread chain |
| Face detection every 5th frame | DWPose + FaceAlignment run only on sampled frames; intermediate bboxes are linearly interpolated |
| BiSeNet mask cache (every 5th frame) | `get_image_prepare_material` runs every 5 frames; intermediate frames reuse it via `get_image_blending` |
| Direct writes via cv2.VideoWriter | Skips per-frame PNG dumps + ffmpeg re-encode |
| Per-stage timing | Precise timing of 7 stages for later tuning |
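The bbox interpolation between sampled frames is plain linear interpolation — a sketch of the math (the real code is Python inside `server.py`; this TypeScript version is illustrative only):

```typescript
type BBox = [number, number, number, number]; // x1, y1, x2, y2

// Linearly interpolate a face bbox between two detected frames.
// t is the fractional position of the intermediate frame in [0, 1].
function lerpBBox(a: BBox, b: BBox, t: number): BBox {
  return a.map((v, i) => v + (b[i] - v) * t) as BBox;
}

// With DETECT_EVERY = 5, frame 12 lies between detections at frames 10 and 15,
// so t = (12 - 10) / 5 = 0.4.
```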
### Tuning parameters

Adjustable at the top of `models/MuseTalk/scripts/server.py`:

```python
DETECT_EVERY = 5       # face-detection sampling interval (frames)
BLEND_CACHE_EVERY = 5  # BiSeNet mask cache interval (frames)
```

> For talking-head videos (the face barely moves), the interpolation error at a 5-frame interval is negligible.
> For scenes with strong face motion, lower it to 2-3.

---

## Troubleshooting

### huggingface-hub version conflict

```
ImportError: huggingface-hub>=0.19.3,<1.0 is required
```

**Fix**: downgrade huggingface-hub

```bash
pip install "huggingface-hub>=0.19.3,<1.0"
```

### mmcv import failure

```bash
pip uninstall mmcv mmcv-full -y
mim install "mmcv==2.0.1"
```

### Audio/video length mismatch

Already fixed in `musetalk/utils/audio_processor.py` (zero-padding logic); no extra action is needed.
@@ -16,14 +16,16 @@
 Text → EdgeTTS → audio → LatentSync → FFmpeg compositing → final video
 
 New pipeline (single material):
-Text → EdgeTTS/Qwen3-TTS/pre-generated voiceover → audio ─┬→ LatentSync → lip-synced video ─┐
+Text → EdgeTTS/CosyVoice/pre-generated voiceover → audio ─┬→ LatentSync/MuseTalk → lip-synced video ─┐
                                                           └→ faster-whisper → subtitle JSON ─┴→ Remotion compositing → final video
 
 New pipeline (multi-material):
-Audio → materials concatenated per custom_assignments → LatentSync (single inference) → lip-synced video ─┐
+Audio → materials concatenated per custom_assignments → LatentSync/MuseTalk (single inference) → lip-synced video ─┐
 Audio → faster-whisper → subtitle JSON ────────────────────────────────────────────────────────────────────┴→ Remotion compositing → final video
 ```
 
+> **Lip-sync routing**: short videos (<120s) use LatentSync 1.6 (GPU1); long videos (>=120s) use MuseTalk 1.5 (GPU0), controlled by `LIPSYNC_DURATION_THRESHOLD`.
+
 ## System requirements
 
 | Component | Requirement |
@@ -185,7 +187,9 @@ Remotion render parameters are configured in `backend/app/services/remotion_service.py`:
 | Parameter | Default | Description |
 |------|--------|------|
 | `fps` | 25 | Output frame rate |
-| `title_duration` | 3.0 | Title display duration (seconds) |
+| `concurrency` | 16 | Remotion concurrent render processes (default 16; can be overridden with the `--concurrency` CLI flag) |
+| `title_display_mode` | `short` | Title display mode (`short` = brief; `persistent` = always on) |
+| `title_duration` | 4.0 | Title display duration (seconds; only in `short` mode) |
 
 ---
@@ -272,7 +276,7 @@ wget https://github.com/googlefonts/noto-cjk/raw/main/Sans/OTF/SimplifiedChinese
 ### Using GPU 0
 
-faster-whisper uses GPU 0 by default, separate from LatentSync (GPU 1), avoiding VRAM contention. To pin a GPU:
+faster-whisper uses GPU 0 by default, sharing GPU 0 with MuseTalk; LatentSync uses GPU 1, so they do not conflict. To pin a GPU:
 
 ```python
 # Edit in whisper_service.py
@@ -288,3 +292,5 @@ WhisperService(device="cuda:0")  # or "cuda:1"
 | 2026-01-29 | 1.0.0 | Initial version: word-by-word highlighted subtitles and opening titles via faster-whisper + Remotion |
 | 2026-02-10 | 1.1.0 | Updated architecture diagram: multi-material concat-then-infer, pre-generated voiceover option |
 | 2026-01-30 | 1.0.1 | Subtitle highlight styling and title animation polish for clearer visuals |
+| 2026-02-25 | 1.2.0 | Subtitle timestamps switched from linear interpolation to Whisper rhythm mapping, fixing long-video subtitle drift |
+| 2026-02-27 | 1.3.0 | Architecture diagram updated for MuseTalk hybrid routing; Remotion render concurrency raised from 8 to 16; GPU allocation notes updated |
@@ -1,8 +1,8 @@
|
|||||||
# ViGent2 开发任务清单 (Task Log)
|
# ViGent2 开发任务清单 (Task Log)
|
||||||
|
|
||||||
**项目**: ViGent2 数字人口播视频生成系统
|
**项目**: ViGent2 数字人口播视频生成系统
|
||||||
**进度**: 100% (Day 23 - 配音前置重构 + 素材时间轴编排 + UI 体验优化)
|
**进度**: 100% (Day 28 - CosyVoice FP16 加速 + 文档全面更新)
|
||||||
**更新时间**: 2026-02-10
|
**更新时间**: 2026-02-27
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
@@ -10,7 +10,65 @@
 > Each day's core development work and milestones are recorded here.
 
-### Day 23: Voiceover-first refactor + material timeline arrangement + UI polish + past scripts (Current)
+### Day 28: CosyVoice FP16 acceleration + full documentation refresh (Current)
+- [x] **CosyVoice FP16 half-precision acceleration**: `AutoModel()` enables `fp16=True`; LLM inference and Flow Matching run under automatic mixed precision — estimated 30-40% speedup and ~30% lower VRAM.
+- [x] **Full documentation refresh**: README.md / DEPLOY_MANUAL.md / SUBTITLE_DEPLOY.md / BACKEND_README.md updated with the MuseTalk hybrid lip-sync scheme, performance optimizations, Remotion concurrent rendering, and more.
+
+### Day 27: Remotion stroke fix + font-style expansion + hybrid lip sync + performance work
+- [x] **Stroke rendering fix**: titles/secondary titles/subtitles switched from 4-direction `textShadow` emulation to native CSS `-webkit-text-stroke` + `paint-order: stroke fill`, fixing over-thick strokes and secondary-title ghosting.
+- [x] **Font-style expansion**: title styles 4→12 (+ 庞门正道 / 优设标题圆 / 阿里数黑体 / 文道潮黑 / 无界黑 / 厚底黑 / 寒蝉半圆体 / 欣意吉祥宋); subtitle styles 4→8 (+ girly pink / fresh green / gold lishu / kaiti red).
+- [x] **Stroke parameter tuning**: all presets' `stroke_size` lowered from 8 to 4-5; cleaner look with native strokes.
+- [x] **TypeScript type fixes**: Root.tsx `Composition` generic aligned with `calculateMetadata` parameter types; Video.tsx `VideoProps` given an index signature compatible with `Record<string, unknown>`; VideoLayer.tsx dropped the `loop` prop unsupported by `OffthreadVideo`.
+- [x] **Progress-bar copy reverted**: the progress bar shows a fixed "正在AI生成中..." again instead of backend-pushed messages.
+- [x] **MuseTalk hybrid lip sync**: deployed the MuseTalk 1.5 resident service (GPU0, port 8011) with automatic routing by audio duration — short videos (<120s) use LatentSync, long videos (>=120s) use MuseTalk, with automatic fallback when MuseTalk is unavailable.
+- [x] **MuseTalk inference optimization**: server.py v2 rewrite — direct cv2 frame reads (skip ffmpeg→PNG), face detection every 5th frame, BiSeNet mask cache (every 5th frame), direct cv2.VideoWriter output (skip PNG dumps), batch_size 8→32; estimated 30min→8-10min (~3x).
+- [x] **Remotion concurrency**: render.ts gained a concurrency parameter, raised from the default 8 to 16 (56-core CPU); estimated 5min→2-3min.
+
+### Day 26: Frontend polish: panel merging + numbered headings + UI refinement
+- [x] **Panel merging**: 9 standalone home-page panels merged into 5 main panels (voiceover mode + voiceover list → 三、voiceover; video materials + timeline → 四、material editing; past works + preview → 六、works).
+- [x] **Chinese numbered headings**: panels numbered 一~十 (home 一~六, publish 七~十); all emoji icons removed.
+- [x] **embedded mode**: 6 components support an `embedded` prop; when embedded they skip the outer card/heading.
+- [x] **Two-row voiceover-list layout**: in embedded mode, row 1 holds speed + generate (right-aligned), row 2 the voiceover list + refresh.
+- [x] **Child-rendered sub-headings**: MaterialSelector/TimelineEditor render their own h3 sub-heading with action buttons inline when embedded.
+- [x] **Dropdown alignment**: TitleSubtitlePanel labels fixed at `w-20`, dropdowns `w-1/3 min-w-[100px]`, vertically aligned.
+- [x] **Reference-audio copy simplified**: bottom paragraph moved next to the heading, shortened to "(upload a 3-10s voice sample)".
+- [x] **Account phone display**: AccountSettingsDropdown now shows the phone number.
+- [x] **Display mode applies to secondary titles**: payload condition fixed + the UI dropdown moved up to the panel heading row.
+- [x] **User info available right after login**: AuthContext exposes `setUser`; user data is written immediately on login success, fixing the "unknown account" display after login.
+- [x] **Copy tweaks**: material description now reads "upload selfie videos, up to 4 selectable"; display-mode options prefixed with "标题".
+- [x] **UI/UX polish**: action buttons visible on mobile (opacity-40), phone-number masking, title character counters, timeline drag-handle icons, larger trim sliders.
+- [x] **Code-quality fixes**: password modal clears its success state; MaterialSelector useMemo + disabled guard; TimelineEditor useMemo.
+- [x] **Responsive publish page**: platform-account cards in a single-row layout, compact on mobile (small icons/buttons), spacious on desktop (matching the other panels).
+- [x] **Mobile scroll-to-top on refresh**: `scrollRestoration = "manual"` + time-gated list scrolling (`scrollEffectsEnabled` ref; auto-scroll disabled for 1 second) + delayed `scrollTo(0,0)` fallback.
+- [x] **Smaller mobile style preview**: FloatingStylePreview shrinks to 160px on mobile and moves to the bottom-right, out of the way of the style controls.
+- [x] **Scrollbars hidden everywhere**: all lists (BGM/voiceovers/works/materials/script extraction) back to `hide-scrollbar`.
+- [x] **Mobile voiceover/material fixes**: VoiceSelector buttons shrink on mobile (`px-2 sm:px-4`), fixing the hidden clone-voice button; MaterialSelector heading drops `whitespace-nowrap` and hides the description on mobile, fixing refresh-button overflow.
+- [x] **Bigger generate-voiceover button**: upgraded from secondary size (`text-xs px-2 py-1`) to primary size (`text-sm font-medium px-4 py-2`) with a shadow.
+- [x] **Progress-bar placement**: moved out of the "六、works" card into its own right-column card above the works card; more prominent.
+- [x] **LatentSync timeout fix**: httpx timeout raised from 1200s (20 minutes) to 3600s (1 hour), fixing lip-sync inference timeouts and fallbacks on videos over 2 minutes.
+- [x] **Subtitle timestamp rhythm mapping**: `whisper_service.py` switched from whole-clip linear interpolation to per-word Whisper rhythm mapping, fixing long-video subtitle drift.
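The rhythm-mapping idea — distribute each subtitle character across its Whisper word's own time span instead of interpolating linearly over the whole clip — can be sketched like this (the real implementation is Python in `whisper_service.py`; the types and names here are illustrative):

```typescript
interface Word { start: number; end: number; text: string }
interface CharStamp { char: string; start: number; end: number }

// Spread the characters of each recognized word evenly across that
// word's own time span, preserving the speaker's rhythm. Linear
// interpolation over the full clip drifts on long videos because
// speech rate is not constant; per-word mapping cannot drift.
function mapCharsToWords(words: Word[]): CharStamp[] {
  const out: CharStamp[] = [];
  for (const w of words) {
    const chars = [...w.text];
    const step = (w.end - w.start) / chars.length;
    chars.forEach((c, i) => {
      out.push({ char: c, start: w.start + i * step, end: w.start + (i + 1) * step });
    });
  }
  return out;
}
```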
+
+### Day 25: Script-extraction fix + custom prompts + opening secondary title
+- [x] **Douyin extraction fix**: yt-dlp "fresh cookies" errors; rewrote `_download_douyin_manual` to use the mobile share page + automatic ttwid fetching.
+- [x] **DOUYIN_COOKIE cleanup**: the new approach no longer needs manually maintained cookies; removed from `.env`/`config.py`/`service.py`.
+- [x] **Custom prompt for AI rewriting**: backend `rewrite_script()` accepts a `custom_prompt` parameter; the frontend adds a collapsible prompt editor next to the checkbox, persisted to localStorage.
+- [x] **SSR build fix**: `localStorage` access in `useState` initializers guarded with `typeof window`, fixing `npm run build` failures.
+- [x] **Opening secondary title**: added secondary_title across the stack (backend/Remotion/frontend), AI-generated together with the title, independent style configuration, 20-character limit.
+- [x] **Frontend copy fix**: "AI 洗稿结果" → "AI 改写结果".
+- [x] **yt-dlp upgrade**: `2025.12.08` → `2026.2.21`.
+- [x] **Chinese reference-audio filename fix**: `sanitize_filename()` cleans storage paths to ASCII-safe characters, hashing purely Chinese names as a fallback; the original name is kept as the display name.
+
+### Day 24: Auth expiry governance + multi-material timeline stability fixes
+- [x] **Membership expiry enforced per request**: login and authenticated endpoints uniformly check `expires_at`; expired accounts are auto-deactivated, sessions cleared, and "membership expired, please renew" returned.
+- [x] **Aspect-ratio control**: the timeline gained a `9:16 / 16:9` output-ratio selector, persisted on the frontend and passed to the backend; single- and multi-material paths both target the chosen resolution.
+- [x] **Title/subtitle overflow guards**: Remotion and the frontend preview share responsive scaling, auto-wrapping, and proportional stroke/tracking/margin scaling, narrowing the preview-vs-render gap.
+- [x] **Title display mode**: the title row gained a "brief / persistent" dropdown; defaults to brief (4 seconds); the choice is persisted and passed through to the Remotion render chain.
+- [x] **MOV orientation normalization**: added rotation-metadata parsing and orientation normalization, fixing portrait misdetection on "landscape-encoded + rotation metadata" clips.
+- [x] **Multi-material concat stability**: segment prepare and concat unified at 25fps/CFR; concat adds `+genpts`, mitigating "frozen frame with moving lips" at segment boundaries.
+- [x] **Timeline semantic alignment**: `source_end` wired end to end; fixed duration math when `sourceStart>0` and `sourceEnd=0`; generation uses the timeline's visible-segment assignments, excluding overflow segments.
+- [x] **Interaction details**: scroll to top on page refresh; first-round auto-scroll suppression in material/history lists, reducing jumps while restoring state.
+
+### Day 23: Voiceover-first refactor + material timeline arrangement + UI polish + voice-clone enhancements
 
 #### Phase 1: Voiceover first
 - [x] **Standalone voiceover generation**: new `generated_audios` backend module (router/schemas/service), 5 API endpoints, reusing the existing TTSService / voice_clone_service / task_store.
@@ -28,8 +86,8 @@
 - [x] **MaterialSelector slimmed down**: removed the old duration bar and drag-reorder area (functionality moved to TimelineEditor).
 
 #### Phase 3: UI polish + TTS stability
-- [x] **TTS SoX PATH fix**: `run_qwen_tts.sh` exports the conda env bin onto PATH, fixing the `SoX could not be found!` warning.
-- [x] **TTS VRAM management**: `torch.cuda.empty_cache()` after each generation; asyncio.to_thread avoids blocking the event loop.
+- [x] **TTS SoX PATH fix**: `run_qwen_tts.sh` exports the conda env bin onto PATH (Qwen3-TTS has been retired, replaced by CosyVoice 3.0).
+- [x] **TTS VRAM management**: `torch.cuda.empty_cache()` after each generation; asyncio.to_thread avoids blocking the event loop (CosyVoice keeps the same mechanism).
 - [x] **Unified voiceover-list buttons**: Play/Edit/Delete shown on hover as a right-side group, matching RefAudioPanel; script summary removed.
 - [x] **Materials ungated from voiceover**: removed MaterialSelector's selectedAudio overlay; materials can be uploaded and managed at any time.
 - [x] **Timeline drag reorder**: TimelineEditor color blocks support HTML5 Drag & Drop to swap material order.
@@ -42,6 +100,20 @@
|
|||||||
- [x] **按钮视觉统一**: 文案编辑区 4 个按钮统一为固定高度 `h-7`,移除多余 `<span>` 嵌套。
|
- [x] **按钮视觉统一**: 文案编辑区 4 个按钮统一为固定高度 `h-7`,移除多余 `<span>` 嵌套。
|
||||||
- [x] **底部栏调整**: "保存文案"按钮移至底部右侧,移除预计时长显示。
|
- [x] **底部栏调整**: "保存文案"按钮移至底部右侧,移除预计时长显示。
|
||||||
|
|
||||||
|
#### 第五阶段:字幕语言不匹配 + 视频比例错位修复
|
||||||
|
- [x] **字幕用原文替换 Whisper 转录**: `align()` 新增 `original_text` 参数,字幕文字永远用配音保存的原始文案。
|
||||||
|
- [x] **Remotion 动态视频尺寸**: `calculateMetadata` 从 props 读取真实尺寸,修复标题/字幕比例错位。
|
||||||
|
- [x] **英文空格丢失修复**: `split_word_to_chars` 遇到空格时 flush buffer + pending_space 标记。
|
||||||
|
|
||||||
|
#### 第六阶段:参考音频自动转写 + 语速控制
|
||||||
|
- [x] **Whisper 自动转写 ref_text**: 上传参考音频时自动调用 Whisper 转写内容作为 ref_text,不再使用前端固定文字。
|
||||||
|
- [x] **参考音频自动截取**: 超过 10 秒自动在静音点截取(ffmpeg silencedetect),末尾 0.1 秒淡出避免截断爆音。
|
||||||
|
- [x] **重新识别功能**: 新增 `POST /ref-audios/{id}/retranscribe` 端点 + 前端 RotateCw 按钮,旧音频可重新转写并截取。
|
||||||
|
- [x] **语速控制**: 全链路 speed 参数(前端选择器 → 持久化 → 后端 → CosyVoice `inference_zero_shot(speed=)`),5 档:较慢(0.8)/稍慢(0.9)/正常(1.0)/稍快(1.1)/较快(1.2)。
|
||||||
|
- [x] **缺少参考音频门控**: 声音克隆模式下未选参考音频时,生成配音按钮禁用 + 黄色警告提示。
|
||||||
|
- [x] **Whisper 语言自动检测**: `transcribe()` language 参数改为可选(默认 None = 自动检测),支持多语言参考音频。
|
||||||
|
- [x] **前端清理**: 移除固定 ref_text 常量、朗读引导文字,简化为"上传任意语音样本,系统将自动识别内容并克隆声音"。
|
||||||
|
|
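The "auto-truncate at a silence point" step above boils down to parsing ffmpeg `silencedetect` output and choosing a cut point inside the first 10 seconds. A minimal sketch of that parsing, assuming the stderr line format `silence_start: <t>` (function name and thresholds are illustrative):

```python
import re


def pick_cut_point(ffmpeg_stderr: str, max_len: float = 10.0) -> float:
    """Pick a cut point for a reference-audio clip: the last detected
    silence start within the first `max_len` seconds, falling back to
    a hard cut at `max_len` when no usable silence is found."""
    starts = [float(s) for s in re.findall(r"silence_start:\s*([\d.]+)", ffmpeg_stderr)]
    # Ignore silences at the very beginning and anything past the limit.
    candidates = [s for s in starts if 0.5 < s <= max_len]
    return max(candidates) if candidates else max_len


sample = (
    "[silencedetect @ 0x55] silence_start: 3.20\n"
    "[silencedetect @ 0x55] silence_end: 3.65 | silence_duration: 0.45\n"
    "[silencedetect @ 0x55] silence_start: 8.74\n"
    "[silencedetect @ 0x55] silence_start: 14.10\n"
)
print(pick_cut_point(sample))  # 8.74
```

Cutting at a silence boundary (plus the short fade-out the changelog mentions) avoids the audible click of truncating mid-word.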
 ### Day 22: 多素材优化 + AI 翻译 + TTS 多语言
 - [x] **多素材 Bug 修复**: 6 个高优 Bug(边界溢出、单段 fallback、除零、duration 校验、Whisper 兜底、空列表检查)。
 - [x] **架构重构**: 多素材从"逐段 LatentSync"重构为"先拼接再推理",推理次数 N→1。
@@ -117,7 +189,7 @@
 - [x] **体验细节优化**: 录音预览 URL 回收,预览弹窗滚动恢复,全局任务提示挂载。
 
 ### Day 16: 深度性能优化
-- [x] **Qwen-TTS 加速**: 集成 Flash Attention 2,模型加载速度提升至 8.9s。
+- [x] **Qwen-TTS 加速**: 集成 Flash Attention 2 (已停用,被 CosyVoice 3.0 替换)。
 - [x] **服务守护**: 开发 `Watchdog` 看门狗机制,自动监控并重启僵死服务。
 - [x] **LatentSync 性能确认**: 验证 DeepCache + 原生 Flash Attn 生效。
 - [x] **文档重构**: 全面更新 README、部署手册及后端文档。
@@ -130,10 +202,10 @@
 ### Day 14: AI 增强与体验优化
 - [x] **AI 标题/标签**: 集成 GLM-4 API 自动生成视频元数据。
 - [x] **字幕升级**: Remotion 逐字高亮字幕 (卡拉OK效果) 及动画片头。
-- [x] **模型升级**: Qwen3-TTS 升级至 1.7B-Base 版本。
+- [x] **模型升级**: 声音克隆已迁移至 CosyVoice 3.0 (0.5B)。
 
 ### Day 13: 声音克隆集成
-- [x] **声音克隆微服务**: 封装 Qwen3-TTS 为独立 API (8009端口)。
+- [x] **声音克隆微服务**: 封装 CosyVoice 3.0 为独立 API (8010端口,替换 Qwen3-TTS)。
 - [x] **参考音频管理**: Supabase 存储桶配置与管理接口。
 - [x] **多模态 TTS**: 前端支持 EdgeTTS / Clone Voice 切换。
 
@@ -186,9 +258,10 @@
 | **核心 API** | 100% | ✅ 稳定 |
 | **Web UI** | 100% | ✅ 稳定 (移动端适配) |
 | **唇形同步** | 100% | ✅ LatentSync 1.6 |
-| **TTS 配音** | 100% | ✅ EdgeTTS + Qwen3 + 配音前置 + 时间轴编排 |
+| **TTS 配音** | 100% | ✅ EdgeTTS + CosyVoice 3.0 + 配音前置 + 时间轴编排 + 自动转写 + 语速控制 |
 | **自动发布** | 100% | ✅ 抖音/微信视频号/B站/小红书 |
 | **用户认证** | 100% | ✅ 手机号 + JWT |
+| **付费会员** | 100% | ✅ 支付宝电脑网站支付 + 自动激活 |
 | **部署运维** | 100% | ✅ PM2 + Watchdog |
 
 ---
**README.md**(52 行变更)
@@ -4,8 +4,8 @@
 > 📹 **上传人物** · 🎙️ **输入文案** · 🎬 **一键成片**
 
-基于 **LatentSync 1.6 + EdgeTTS** 的开源数字人口播视频生成系统。
-集成 **Qwen3-TTS** 声音克隆与自动社交媒体发布功能。
+基于 **LatentSync 1.6 + MuseTalk 1.5 混合唇形同步** 的开源数字人口播视频生成系统。
+集成 **CosyVoice 3.0** 声音克隆与自动社交媒体发布功能。
 
 [功能特性](#-功能特性) • [技术栈](#-技术栈) • [文档中心](#-文档中心) • [部署指南](Docs/DEPLOY_MANUAL.md)
 
@@ -16,24 +16,28 @@
 ## ✨ 功能特性
 
 ### 核心能力
-- 🎬 **高清唇形同步** - LatentSync 1.6 驱动,512×512 高分辨率 Latent Diffusion 模型。
-- 🎙️ **多模态配音** - 支持 **EdgeTTS** (微软超自然语音, 10 语言) 和 **Qwen3-TTS** (3秒极速声音克隆)。配音前置工作流:先生成配音 → 选素材 → 生成视频。
+- 🎬 **高清唇形同步** - 混合方案:短视频 (<120s) 用 LatentSync 1.6 (高质量 Latent Diffusion),长视频 (>=120s) 用 MuseTalk 1.5 (实时级单步推理),自动路由 + 回退。
+- 🎙️ **多模态配音** - 支持 **EdgeTTS** (微软超自然语音, 10 语言) 和 **CosyVoice 3.0** (3秒极速声音克隆, 9语言+18方言, 语速可调)。上传参考音频自动 Whisper 转写 + 智能截取。配音前置工作流:先生成配音 → 选素材 → 生成视频。
 - 📝 **智能字幕** - 集成 faster-whisper + Remotion,自动生成逐字高亮 (卡拉OK效果) 字幕。
-- 🎨 **样式预设** - 标题/字幕样式选择 + 预览 + 字号调节,支持自定义字体库。
-- 🖼️ **作品预览一致性** - 标题/字幕预览按素材分辨率缩放,效果更接近成片。
-- 🎞️ **多素材多机位** - 支持多选素材 + 时间轴编辑器 (wavesurfer.js 波形可视化),拖拽分割线调整时长、拖拽排序切换机位、截取源视频片段。
+- 🎨 **样式预设** - 12 种标题 + 8 种字幕样式预设,支持预览 + 字号调节 + 自定义字体库。CSS 原生描边渲染,清晰无重影。
+- 🏷️ **标题显示模式** - 片头标题支持 `短暂显示` / `常驻显示`,默认短暂显示(4秒),用户偏好自动持久化。
+- 📌 **片头副标题** - 可选副标题显示在主标题下方,独立样式配置,AI 可同时生成,20 字限制。
+- 🖼️ **作品预览一致性** - 标题/字幕预览与 Remotion 成片统一响应式缩放和自动换行,窄屏画布也稳定显示。
+- 🎞️ **多素材多机位** - 支持多选素材 + 时间轴编辑器 (wavesurfer.js 波形可视化),拖拽分割线调整时长、拖拽排序切换机位、按 `source_start/source_end` 截取片段。
+- 📐 **画面比例控制** - 时间轴一键切换 `9:16 / 16:9` 输出比例,生成链路全程按目标比例处理。
 - 💾 **用户偏好持久化** - 首页状态统一恢复/保存,刷新后延续上次配置。历史文案手动保存与加载。
 - 🎵 **背景音乐** - 试听 + 音量控制 + 混音,保持配音音量稳定。
-- 🤖 **AI 辅助创作** - 内置 GLM-4.7-Flash,支持 B站/抖音链接文案提取、AI 洗稿、标题/标签自动生成、9 语言翻译。
+- 🤖 **AI 辅助创作** - 内置 GLM-4.7-Flash,支持 B站/抖音链接文案提取、AI 智能改写(支持自定义提示词)、标题/标签自动生成、9 语言翻译。
 
 ### 平台化功能
 - 📱 **全自动发布** - 支持抖音/微信视频号/B站/小红书立即发布;扫码登录 + Cookie 持久化。
 - 🖥️ **发布管理预览** - 支持签名 URL / 相对路径作品预览,确保可直接播放。
 - 📸 **发布结果可视化** - 抖音/微信视频号发布成功后返回截图,发布页结果卡片可直接查看。
 - 🛡️ **发布防误操作** - 发布进行中自动提示“请勿刷新或关闭网页”,并拦截刷新/关页二次确认。
+- 💳 **付费会员** - 支付宝电脑网站支付自动开通会员,到期自动停用并引导续费,管理员手动激活并存。
 - 🔐 **认证与隔离** - 基于 Supabase 的用户隔离,支持手机号注册/登录、密码管理。
 - 🛡️ **服务守护** - 内置 Watchdog 看门狗机制,自动监控并重启僵死服务,确保 7x24h 稳定运行。
-- 🚀 **性能优化** - 视频预压缩、模型常驻服务(近实时加载)、双 GPU 流水线并发。
+- 🚀 **性能优化** - 视频预压缩、模型常驻服务(近实时加载)、双 GPU 流水线并发、MuseTalk 人脸检测降频 + BiSeNet 缓存、Remotion 16 并发渲染。
 
 ---
 
@@ -42,10 +46,10 @@
 | 领域 | 核心技术 | 说明 |
 |------|----------|------|
 | **前端** | Next.js 16 | TypeScript, TailwindCSS, SWR, wavesurfer.js |
-| **后端** | FastAPI | Python 3.10, AsyncIO, PM2 |
+| **后端** | FastAPI | Python 3.12, AsyncIO, PM2 |
 | **数据库** | Supabase | PostgreSQL, Storage (本地/S3), Auth |
-| **唇形同步** | LatentSync 1.6 | PyTorch 2.5, Diffusers, DeepCache |
-| **声音克隆** | Qwen3-TTS | 1.7B 参数量,Flash Attention 2 加速 |
+| **唇形同步** | LatentSync 1.6 + MuseTalk 1.5 | 混合路由:短视频 Diffusion 高质量,长视频单步实时推理 |
+| **声音克隆** | CosyVoice 3.0 | 0.5B 参数量,9 语言 + 18 方言 |
 | **自动化** | Playwright | 社交媒体无头浏览器自动化 |
 | **部署** | Docker & PM2 | 混合部署架构 |
 
@@ -57,14 +61,18 @@
 ### 部署运维
 - **[部署手册 (DEPLOY_MANUAL.md)](Docs/DEPLOY_MANUAL.md)** - 👈 **部署请看这里**!包含完整的环境搭建步骤。
-- [参考音频服务部署 (QWEN3_TTS_DEPLOY.md)](Docs/QWEN3_TTS_DEPLOY.md) - 声音克隆模型部署指南。
-- [LatentSync 部署指南](models/LatentSync/DEPLOY.md) - 唇形同步模型独立部署。
+- [参考音频服务部署 (COSYVOICE3_DEPLOY.md)](Docs/COSYVOICE3_DEPLOY.md) - 声音克隆模型部署指南。
+- [LatentSync 部署指南 (LATENTSYNC_DEPLOY.md)](Docs/LATENTSYNC_DEPLOY.md) - 唇形同步模型独立部署。
+- [MuseTalk 部署指南 (MUSETALK_DEPLOY.md)](Docs/MUSETALK_DEPLOY.md) - 长视频唇形同步模型部署。
 - [Supabase 部署指南 (SUPABASE_DEPLOY.md)](Docs/SUPABASE_DEPLOY.md) - Supabase 与认证系统配置。
+- [支付宝部署指南 (ALIPAY_DEPLOY.md)](Docs/ALIPAY_DEPLOY.md) - 支付宝付费开通会员配置。
 
 ### 开发文档
-- [后端开发指南](Docs/BACKEND_README.md) - 接口规范与开发流程。
-- [后端开发规范](Docs/BACKEND_DEV.md) - 分层约定与开发习惯。
-- [前端开发指南](Docs/FRONTEND_DEV.md) - UI 组件与页面规范。
+- [后端开发指南 (BACKEND_README.md)](Docs/BACKEND_README.md) - 接口规范与开发流程。
+- [后端开发规范 (BACKEND_DEV.md)](Docs/BACKEND_DEV.md) - 分层约定与开发习惯。
+- [前端开发指南 (FRONTEND_DEV.md)](Docs/FRONTEND_DEV.md) - UI 组件与页面规范。
+- [前端组件文档 (FRONTEND_README.md)](Docs/FRONTEND_README.md) - 组件结构与板块说明。
+- [Remotion 字幕部署 (SUBTITLE_DEPLOY.md)](Docs/SUBTITLE_DEPLOY.md) - 字幕渲染服务部署。
 - [开发日志 (DevLogs)](Docs/DevLogs/) - 每日开发进度与技术决策记录。
 
 ---
 
@@ -81,8 +89,9 @@ ViGent2/
 ├── frontend/      # Next.js 前端应用
 ├── remotion/      # Remotion 视频渲染 (标题/字幕合成)
 ├── models/        # AI 模型仓库
-│   ├── LatentSync/   # 唇形同步服务
-│   └── Qwen3-TTS/    # 声音克隆服务
+│   ├── LatentSync/   # 唇形同步服务 (GPU1, 短视频)
+│   ├── MuseTalk/     # 唇形同步服务 (GPU0, 长视频)
+│   └── CosyVoice/    # 声音克隆服务
 └── Docs/          # 项目文档
 ```
 
@@ -96,8 +105,9 @@ ViGent2/
 |----------|------|------|
 | **Web UI** | 3002 | 用户访问入口 (Next.js) |
 | **Backend API** | 8006 | 核心业务接口 (FastAPI) |
-| **LatentSync** | 8007 | 唇形同步推理服务 |
-| **Qwen3-TTS** | 8009 | 声音克隆推理服务 |
+| **LatentSync** | 8007 | 唇形同步推理服务 (GPU1, 短视频) |
+| **MuseTalk** | 8011 | 唇形同步推理服务 (GPU0, 长视频) |
+| **CosyVoice 3.0** | 8010 | 声音克隆推理服务 |
 | **Supabase** | 8008 | 数据库与认证网关 |
 
 ---
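The hybrid lip-sync routing the README describes (short clips to LatentSync on port 8007, clips of 120 s or more to MuseTalk on 8011, with fallback) reduces to a threshold check. A minimal sketch; the function and constant names are illustrative, only the 120-second threshold and the fallback behavior come from the document:

```python
LIPSYNC_DURATION_THRESHOLD = 120.0  # seconds, mirrors the config default


def pick_engine(audio_duration: float, musetalk_available: bool = True) -> str:
    """Route short clips to LatentSync (quality) and long clips to
    MuseTalk (speed), falling back to LatentSync when the MuseTalk
    service is unreachable."""
    if audio_duration >= LIPSYNC_DURATION_THRESHOLD and musetalk_available:
        return "musetalk"
    return "latentsync"


print(pick_engine(45.0))           # latentsync
print(pick_engine(300.0))          # musetalk
print(pick_engine(300.0, False))   # latentsync (fallback)
```

Keeping the threshold in one shared constant (sourced from the env value below) lets the backend and any monitoring agree on which service a given job should have hit.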
@@ -25,10 +25,10 @@ LATENTSYNC_USE_SERVER=true
 # LATENTSYNC_API_URL=http://localhost:8007
 
 # 推理步数 (20-50, 越高质量越好,速度越慢)
-LATENTSYNC_INFERENCE_STEPS=40
+LATENTSYNC_INFERENCE_STEPS=16
 
 # 引导系数 (1.0-3.0, 越高唇同步越准,但可能抖动)
-LATENTSYNC_GUIDANCE_SCALE=2.0
+LATENTSYNC_GUIDANCE_SCALE=1.5
 
 # 启用 DeepCache 加速 (推荐开启)
 LATENTSYNC_ENABLE_DEEPCACHE=true
@@ -36,6 +36,26 @@ LATENTSYNC_ENABLE_DEEPCACHE=true
 # 随机种子 (设为 -1 则随机)
 LATENTSYNC_SEED=1247
 
+# =============== MuseTalk 配置 ===============
+# GPU 选择 (默认 GPU0,与 CosyVoice 共存)
+MUSETALK_GPU_ID=0
+
+# 常驻服务地址 (端口 8011)
+MUSETALK_API_URL=http://localhost:8011
+
+# 推理批大小
+MUSETALK_BATCH_SIZE=32
+
+# 模型版本
+MUSETALK_VERSION=v15
+
+# 半精度加速
+MUSETALK_USE_FLOAT16=true
+
+# =============== 混合唇形同步路由 ===============
+# 音频时长 >= 此阈值(秒)用 MuseTalk,< 此阈值用 LatentSync
+LIPSYNC_DURATION_THRESHOLD=120
+
 # =============== 上传配置 ===============
 # 最大上传文件大小 (MB)
 MAX_UPLOAD_SIZE_MB=500
@@ -70,6 +90,9 @@ GLM_MODEL=glm-4.7-flash
 # 确保存储卷映射正确,避免硬编码路径
 SUPABASE_STORAGE_LOCAL_PATH=/home/rongye/ProgramFiles/Supabase/volumes/storage/stub/stub
 
-# =============== 抖音视频下载 Cookie ===============
-# 用于从抖音 URL 提取视频文案功能,会过期需要定期更新
+# =============== 支付宝配置 ===============
+ALIPAY_APP_ID=2021006132600283
DOUYIN_COOKIE=douyin.com; device_web_cpu_core=10; device_web_memory_size=8; __ac_nonce=06760391f00b9b51264ae; __ac_signature=_02B4Z6wo00f019a5ceAAAIDAhEZR-X3jjWfWmXVAAJLXd4; ttwid=1%7C7MTKBSMsP4eOv9h5NAh8p0E-NYIud09ftNmB0mjLpWc%7C1734359327%7C8794abeabbd47447e1f56e5abc726be089f2a0344d6343b5f75f23e7b0f0028f; UIFID_TEMP=0de8750d2b188f4235dbfd208e44abbb976428f0720eb983255afefa45d39c0c6532e1d4768dd8587bf919f866ff1396912bcb2af71efee56a14a2a9f37b74010d0a0413795262f6d4afe02a032ac7ab; s_v_web_id=verify_m4r4ribr_c7krmY1z_WoeI_43po_ATpO_I4o8U1bex2D7; hevc_supported=true; home_can_add_dy_2_desktop=%220%22; dy_swidth=2560; dy_sheight=1440; stream_recommend_feed_params=%22%7B%5C%22cookie_enabled%5C%22%3Atrue%2C%5C%22screen_width%5C%22%3A2560%2C%5C%22screen_height%5C%22%3A1440%2C%5C%22browser_online%5C%22%3Atrue%2C%5C%22cpu_core_num%5C%22%3A10%2C%5C%22device_memory%5C%22%3A8%2C%5C%22downlink%5C%22%3A10%2C%5C%22effective_type%5C%22%3A%5C%224g%5C%22%2C%5C%22round_trip_time%5C%22%3A50%7D%22; strategyABtestKey=%221734359328.577%22; csrf_session_id=2f53aed9aa6974e83aa9a1014180c3a4; fpk1=U2FsdGVkX1/IpBh0qdmlKAVhGyYHgur4/VtL9AReZoeSxadXn4juKvsakahRGqjxOPytHWspYoBogyhS/V6QSw==; fpk2=0845b309c7b9b957afd9ecf775a4c21f; passport_csrf_token=d80e0c5b2fa2328219856be5ba7e671e; passport_csrf_token_default=d80e0c5b2fa2328219856be5ba7e671e; odin_tt=3c891091d2eb0f4718c1d5645bc4a0017032d4d5aa989decb729e9da2ad570918cbe5e9133dc6b145fa8c758de98efe32ff1f81aa0d611e838cc73ab08ef7d3f6adf66ab4d10e8372ddd628f94f16b8e; volume_info=%7B%22isUserMute%22%3Afalse%2C%22isMute%22%3Afalse%2C%22volume%22%3A0.5%7D; bd_ticket_guard_client_web_domain=2; FORCE_LOGIN=%7B%22videoConsumedRemainSeconds%22%3A180%7D; 
UIFID=0de8750d2b188f4235dbfd208e44abbb976428f0720eb983255afefa45d39c0c6532e1d4768dd8587bf919f866ff139655a3c2b735923234f371c699560c657923fd3d6c5b63ab7bb9b83423b6cb4787e2ce66a7fbc4ecb24c8570f520fe6de068bbb95115023c0c6c1b6ee31b49fb7e3996fb8349f43a3fd8b7a61cd9e18e8fe65eb6a7c13de4c0960d84e344b644725db3eb2fa6b7caf821de1b50527979f2; is_dash_user=1; biz_trace_id=b57a241f; bd_ticket_guard_client_data=eyJiZC10aWNrZXQtZ3VhcmQtdmVyc2lvbiI6MiwiYmQtdGlja2V0LWd1YXJkLWl0ZXJhdGlvbi12ZXJzaW9uIjoxLCJiZC10aWNrZXQtZ3VhcmQtcmVlLXB1YmxpYy1rZXkiOiJCTEo2R0lDalVoWW1XcHpGOFdrN0Vrc0dXcCtaUzNKY1g4NGNGY2k0TTl1TEowNjdUb21mbFU5aDdvWVBGamhNRWNRQWtKdnN1MnM3RmpTWnlJQXpHMjA9IiwiYmQtdGlja2V0LWd1YXJkLXdlYi12ZXJzaW9uIjoyfQ%3D%3D; download_guide=%221%2F20241216%2F0%22; sdk_source_info=7e276470716a68645a606960273f276364697660272927676c715a6d6069756077273f276364697660272927666d776a68605a607d71606b766c6a6b5a7666776c7571273f275e58272927666a6b766a69605a696c6061273f27636469766027292762696a6764695a7364776c6467696076273f275e5827292771273f273d33323131333c3036313632342778; bit_env=RiOY4jzzpxZoVCl6zdVSVhVRjdwHRTxqcqWdqMBZLPGjMdB4Tax1kAELHNTVAAh72KuhumewE4Lq6f0-VJ2UpJrkrhSxoPw9LUb3zQrq1OSwbeSPHkRlRgRQvO89sItdGUyq1oFr0XyRCnMYG87KSeWyc4x0czGR0o50hTDoDLG5rJVoRcdQOLvjiAegsqyytKF59sPX_QM9qffK2SqYsg0hCggURc_AI6kguDDE5DvG0bnyz1utw4z1eEnIoLrkGDqzqBZj4dOAr0BVU6ofbsS-pOQ2u2PM1dLP9FlBVBlVaqYVgHJeSLsR5k76BRTddUjTb4zEilVIEwAMJWGN4I1BxVt6fC9B5tBQpuT0lj3n3eKXCKXZsd8FrEs5_pbfDsxV-e_WMiXI2ff4qxiTC0U73sfo9OpicKICtZjdq8qsHxJuu6wVR36zvXeL2Wch5C6MzprNvkivv0l8nbh2mSgy1nabZr3dmU6NcR-Bg3Q3xTWUlR9aAUmpopC-cNuXjgLpT-Lw1AYGilSUnCvosth1Gfypq-b0MpgmdSDgTrQ%3D; gulu_source_res=eyJwX2luIjoiMDhjOGQ3ZTJiODQyNjZkZWI5Y2VkMGJiODNlNmY1ZWY0ZjMyNTE2ZmYyZjAzNDMzZjI0OWU1Y2Q1NTczNTk5NyJ9; passport_auth_mix_state=hp9bc3dgb1tm5wd8p82zawus27g0e3ue; IsDouyinActive=false
+ALIPAY_PRIVATE_KEY_PATH=/home/rongye/ProgramFiles/ViGent2/backend/keys/app_private_key.pem
+ALIPAY_PUBLIC_KEY_PATH=/home/rongye/ProgramFiles/ViGent2/backend/keys/alipay_public_key.pem
+ALIPAY_NOTIFY_URL=https://vigent.hbyrkj.top/api/payment/notify
+ALIPAY_RETURN_URL=https://vigent.hbyrkj.top/pay
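The deploy guide's RSA2 scheme signs the request parameters of `alipay.trade.page.pay` sorted by key and joined as `k=v&k=v`. A stdlib-only sketch of assembling that sign content; the parameter values are illustrative, and the actual RSA-SHA256 signature over this string (which needs the app private key) is deliberately left out:

```python
import json


def build_sign_content(params: dict[str, str]) -> str:
    """Alipay's RSA2 signing preamble: drop `sign` and empty values,
    sort the remaining parameters by key, join as k=v with '&'."""
    items = sorted((k, v) for k, v in params.items() if k != "sign" and v)
    return "&".join(f"{k}={v}" for k, v in items)


params = {
    "app_id": "2021006132600283",
    "method": "alipay.trade.page.pay",
    "charset": "utf-8",
    "sign_type": "RSA2",
    "biz_content": json.dumps({
        "out_trade_no": "order-0001",          # illustrative order id
        "total_amount": "999.00",              # matches PAYMENT_AMOUNT
        "subject": "ViGent 会员(1 年)",
        "product_code": "FAST_INSTANT_TRADE_PAY",
    }, ensure_ascii=False, separators=(",", ":")),
}
print(build_sign_content(params).split("&")[0])  # app_id=2021006132600283
```

The string produced here is what gets signed with `app_private_key.pem`; the same assembly (on the received notify parameters) is verified against `alipay_public_key.pem` in the callback.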
@@ -57,7 +57,17 @@ class Settings(BaseSettings):
     LATENTSYNC_ENABLE_DEEPCACHE: bool = True  # 启用 DeepCache 加速
     LATENTSYNC_SEED: int = 1247               # 随机种子 (-1 则随机)
     LATENTSYNC_USE_SERVER: bool = True        # 使用常驻服务 (Persistent Server) 加速
 
+    # MuseTalk 配置
+    MUSETALK_GPU_ID: int = 0                          # GPU ID (默认使用 GPU0)
+    MUSETALK_API_URL: str = "http://localhost:8011"   # 常驻服务地址
+    MUSETALK_BATCH_SIZE: int = 8                      # 推理批大小
+    MUSETALK_VERSION: str = "v15"                     # 模型版本
+    MUSETALK_USE_FLOAT16: bool = True                 # 半精度加速
+
+    # 混合唇形同步路由
+    LIPSYNC_DURATION_THRESHOLD: float = 120.0         # 秒,>=此值用 MuseTalk
+
     # Supabase 配置
     SUPABASE_URL: str = ""
     SUPABASE_PUBLIC_URL: str = ""  # 公网访问地址,用于生成前端可访问的 URL
@@ -76,17 +86,28 @@ class Settings(BaseSettings):
     GLM_API_KEY: str = ""
     GLM_MODEL: str = "glm-4.7-flash"
 
+    # 支付宝配置
+    ALIPAY_APP_ID: str = ""
+    ALIPAY_PRIVATE_KEY_PATH: str = ""  # 应用私钥 PEM 文件路径
+    ALIPAY_PUBLIC_KEY_PATH: str = ""   # 支付宝公钥 PEM 文件路径
+    ALIPAY_NOTIFY_URL: str = ""        # 异步通知回调地址(公网可达)
+    ALIPAY_RETURN_URL: str = ""        # 支付成功后同步跳转地址
+    ALIPAY_SANDBOX: bool = False       # 是否使用沙箱环境
+    PAYMENT_AMOUNT: float = 999.00     # 会员价格(元)
+    PAYMENT_EXPIRE_DAYS: int = 365     # 会员有效天数
+
     # CORS 配置 (逗号分隔的域名列表,* 表示允许所有)
     CORS_ORIGINS: str = "*"
 
-    # 抖音 Cookie (用于视频下载功能,会过期需要定期更新)
-    DOUYIN_COOKIE: str = ""
 
     @property
     def LATENTSYNC_DIR(self) -> Path:
         """LatentSync 目录路径 (动态计算)"""
         return self.BASE_DIR.parent.parent / "models" / "LatentSync"
 
+    @property
+    def MUSETALK_DIR(self) -> Path:
+        """MuseTalk 目录路径 (动态计算)"""
+        return self.BASE_DIR.parent.parent / "models" / "MuseTalk"
+
     class Config:
         env_file = ".env"
         extra = "ignore"  # 忽略未知的环境变量
@@ -1,11 +1,11 @@
 """
 依赖注入模块:认证和用户获取
 """
 from typing import Optional, Any, Dict, cast
 from fastapi import Request, HTTPException, Depends, status
-from app.core.security import decode_access_token, TokenData
-from app.repositories.sessions import get_session
-from app.repositories.users import get_user_by_id
+from app.core.security import decode_access_token
+from app.repositories.sessions import get_session, delete_sessions
+from app.repositories.users import get_user_by_id, deactivate_user_if_expired
 from loguru import logger
 
@@ -14,9 +14,9 @@ async def get_token_from_cookie(request: Request) -> Optional[str]:
     return request.cookies.get("access_token")
 
 
 async def get_current_user_optional(
     request: Request
 ) -> Optional[Dict[str, Any]]:
     """
     获取当前用户 (可选,未登录返回 None)
     """
@@ -29,22 +29,30 @@ async def get_current_user_optional(
         return None
 
     # 验证 session_token 是否有效 (单设备登录检查)
     try:
         session = get_session(token_data.user_id, token_data.session_token)
         if not session:
             logger.warning(f"Session token 无效: user_id={token_data.user_id}")
             return None
 
-        user = get_user_by_id(token_data.user_id)
-        return cast(Optional[Dict[str, Any]], user)
+        user = cast(Optional[Dict[str, Any]], get_user_by_id(token_data.user_id))
+        if user and deactivate_user_if_expired(user):
+            delete_sessions(token_data.user_id)
+            return None
+
+        if user and not user.get("is_active"):
+            delete_sessions(token_data.user_id)
+            return None
+
+        return user
     except Exception as e:
         logger.error(f"获取用户信息失败: {e}")
         return None
 
 
 async def get_current_user(
     request: Request
 ) -> Dict[str, Any]:
     """
     获取当前用户 (必须登录)
 
@@ -66,40 +74,45 @@ async def get_current_user(
             detail="Token 无效或已过期"
         )
 
     try:
         session = get_session(token_data.user_id, token_data.session_token)
         if not session:
             raise HTTPException(
                 status_code=status.HTTP_403_FORBIDDEN,
                 detail="会话已失效,请重新登录(可能已在其他设备登录)"
             )
 
         user = get_user_by_id(token_data.user_id)
         if not user:
             raise HTTPException(
                 status_code=status.HTTP_401_UNAUTHORIZED,
                 detail="用户不存在"
             )
         user = cast(Dict[str, Any], user)
 
-        if user.get("expires_at"):
-            from datetime import datetime, timezone
-            expires_at = datetime.fromisoformat(user["expires_at"].replace("Z", "+00:00"))
-            if datetime.now(timezone.utc) > expires_at:
-                raise HTTPException(
-                    status_code=status.HTTP_403_FORBIDDEN,
-                    detail="授权已过期,请联系管理员续期"
-                )
-
-        return user
+        if deactivate_user_if_expired(user):
+            delete_sessions(token_data.user_id)
+            raise HTTPException(
+                status_code=status.HTTP_403_FORBIDDEN,
+                detail="会员已到期,请续费"
+            )
+
+        if not user.get("is_active"):
+            delete_sessions(token_data.user_id)
+            raise HTTPException(
+                status_code=status.HTTP_403_FORBIDDEN,
+                detail="账号已停用"
+            )
+
+        return user
     except HTTPException:
         raise
     except Exception as e:
         logger.error(f"获取用户信息失败: {e}")
         raise HTTPException(
             status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
             detail="服务器错误"
         )
 
 
 async def get_current_admin(
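The diff above calls `deactivate_user_if_expired` but the repository helper itself lies outside this compare view. A plausible sketch of its contract, assuming the same ISO-8601 `expires_at` handling as the old inline check it replaces (the real helper would also persist `is_active=False` to the database; here that side effect is reduced to mutating the dict):

```python
from datetime import datetime, timezone


def deactivate_user_if_expired(user: dict) -> bool:
    """Return True when the user's membership has lapsed, flagging the
    account inactive. No `expires_at` means a non-expiring account."""
    expires_at = user.get("expires_at")
    if not expires_at:
        return False
    # Supabase returns ISO-8601 with a trailing Z; fromisoformat needs +00:00.
    expiry = datetime.fromisoformat(expires_at.replace("Z", "+00:00"))
    if datetime.now(timezone.utc) > expiry:
        user["is_active"] = False
        return True
    return False


print(deactivate_user_if_expired({"expires_at": "2000-01-01T00:00:00Z"}))  # True
print(deactivate_user_if_expired({"expires_at": None}))                    # False
```

Returning a boolean (rather than raising) lets the two call sites above decide between silently clearing the session and raising the "会员已到期,请续费" 403.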
@@ -110,3 +110,28 @@ def set_auth_cookie(response: Response, token: str) -> None:
 def clear_auth_cookie(response: Response) -> None:
     """清除认证 Cookie"""
     response.delete_cookie(key="access_token")
+
+
+def create_payment_token(user_id: str) -> str:
+    """生成付费专用短期 JWT token(30 分钟有效)"""
+    payload = {
+        "sub": user_id,
+        "purpose": "payment",
+        "exp": datetime.now(timezone.utc) + timedelta(minutes=30),
+    }
+    return jwt.encode(payload, settings.JWT_SECRET_KEY, algorithm=settings.JWT_ALGORITHM)
+
+
+def decode_payment_token(token: str) -> str | None:
+    """解析 payment_token,返回 user_id(仅 purpose=payment 有效)"""
+    try:
+        data = jwt.decode(
+            token,
+            settings.JWT_SECRET_KEY,
+            algorithms=[settings.JWT_ALGORITHM],
+        )
+        if data.get("purpose") != "payment":
+            return None
+        return data.get("sub")
+    except JWTError:
+        return None
@@ -16,6 +16,7 @@ from app.modules.ai.router import router as ai_router
 from app.modules.tools.router import router as tools_router
 from app.modules.assets.router import router as assets_router
 from app.modules.generated_audios.router import router as generated_audios_router
+from app.modules.payment.router import router as payment_router
 from loguru import logger
 import os
 
@@ -126,6 +127,7 @@ app.include_router(ai_router)  # /api/ai
 app.include_router(tools_router, prefix="/api/tools", tags=["Tools"])
 app.include_router(assets_router, prefix="/api/assets", tags=["Assets"])
 app.include_router(generated_audios_router, prefix="/api/generated-audios", tags=["GeneratedAudios"])
+app.include_router(payment_router)  # /api/payment
 
 
 @app.on_event("startup")
@@ -2,6 +2,8 @@
 AI 相关 API 路由
 """
+
+from typing import Optional
+
 from fastapi import APIRouter, HTTPException
 from pydantic import BaseModel
 from loguru import logger
@@ -21,9 +23,16 @@ class GenerateMetaRequest(BaseModel):
 class GenerateMetaResponse(BaseModel):
     """生成标题标签响应"""
     title: str
+    secondary_title: str = ""
     tags: list[str]
 
 
+class RewriteRequest(BaseModel):
+    """改写请求"""
+    text: str
+    custom_prompt: Optional[str] = None
+
+
 class TranslateRequest(BaseModel):
     """翻译请求"""
     text: str
@@ -66,8 +75,24 @@ async def generate_meta(req: GenerateMetaRequest):
         result = await glm_service.generate_title_tags(req.text)
         return success_response(GenerateMetaResponse(
             title=result.get("title", ""),
+            secondary_title=result.get("secondary_title", ""),
             tags=result.get("tags", [])
         ).model_dump())
     except Exception as e:
         logger.error(f"Generate meta failed: {e}")
         raise HTTPException(status_code=500, detail=str(e))
+
+
+@router.post("/rewrite")
+async def rewrite_script(req: RewriteRequest):
+    """AI 改写文案"""
+    if not req.text or not req.text.strip():
+        raise HTTPException(status_code=400, detail="文案不能为空")
+
+    try:
+        logger.info(f"Rewriting text: {req.text[:50]}...")
+        rewritten = await glm_service.rewrite_script(req.text.strip(), req.custom_prompt)
+        return success_response({"rewritten_text": rewritten})
+    except Exception as e:
+        logger.error(f"Rewrite failed: {e}")
+        raise HTTPException(status_code=500, detail=str(e))
|||||||
@@ -1,22 +1,32 @@
 """
 认证 API:注册、登录、登出、修改密码
 """
-from fastapi import APIRouter, HTTPException, Response, status, Request
+from fastapi import APIRouter, HTTPException, Response, status, Request, Depends
+from fastapi.responses import JSONResponse
 from pydantic import BaseModel, field_validator
 from app.core.security import (
     get_password_hash,
     verify_password,
     create_access_token,
     generate_session_token,
     set_auth_cookie,
     clear_auth_cookie,
-    decode_access_token
+    decode_access_token,
+    create_payment_token,
 )
 from app.repositories.sessions import create_session, delete_sessions
-from app.repositories.users import create_user, get_user_by_id, get_user_by_phone, user_exists_by_phone, update_user
+from app.repositories.users import (
+    create_user,
+    get_user_by_id,
+    get_user_by_phone,
+    user_exists_by_phone,
+    update_user,
+    deactivate_user_if_expired,
+)
+from app.core.deps import get_current_user
 from app.core.response import success_response
 from loguru import logger
 from typing import Optional, Any, cast
 import re


 router = APIRouter(prefix="/api/auth", tags=["认证"])
@@ -76,26 +86,26 @@ async def register(request: RegisterRequest):
     注册后状态为 pending,需要管理员激活
     """
     try:
         if user_exists_by_phone(request.phone):
             raise HTTPException(
                 status_code=status.HTTP_400_BAD_REQUEST,
                 detail="该手机号已注册"
             )

         # 创建用户
         password_hash = get_password_hash(request.password)

         create_user({
             "phone": request.phone,
             "password_hash": password_hash,
             "username": request.username or f"用户{request.phone[-4:]}",
             "role": "pending",
             "is_active": False
         })

         logger.info(f"新用户注册: {request.phone}")

         return success_response(message="注册成功,请等待管理员审核激活")
     except HTTPException:
         raise
     except Exception as e:
@@ -116,12 +126,12 @@ async def login(request: LoginRequest, response: Response):
     - 实现"后踢前"单设备登录
     """
     try:
         user = cast(dict[str, Any], get_user_by_phone(request.phone) or {})
         if not user:
             raise HTTPException(
                 status_code=status.HTTP_401_UNAUTHORIZED,
                 detail="手机号或密码错误"
             )

         # 验证密码
         if not verify_password(request.password, user["password_hash"]):
@@ -130,29 +140,33 @@ async def login(request: LoginRequest, response: Response):
                 detail="手机号或密码错误"
             )

-        # 检查是否激活
-        if not user["is_active"]:
-            raise HTTPException(
-                status_code=status.HTTP_403_FORBIDDEN,
-                detail="账号未激活,请等待管理员审核"
-            )
-
-        # 检查授权是否过期
-        if user.get("expires_at"):
-            from datetime import datetime, timezone
-            expires_at = datetime.fromisoformat(user["expires_at"].replace("Z", "+00:00"))
-            if datetime.now(timezone.utc) > expires_at:
-                raise HTTPException(
-                    status_code=status.HTTP_403_FORBIDDEN,
-                    detail="授权已过期,请联系管理员续期"
-                )
+        # 过期自动停用(注意:只更新 DB,不修改内存中的 user 字典)
+        expired = deactivate_user_if_expired(user)
+        if expired:
+            delete_sessions(user["id"])
+
+        # 过期 或 未激活(新注册)→ 返回付费指引
+        if expired or not user["is_active"]:
+            payment_token = create_payment_token(user["id"])
+            return JSONResponse(
+                status_code=403,
+                content={
+                    "success": False,
+                    "message": "请付费开通会员",
+                    "code": 403,
+                    "data": {
+                        "reason": "PAYMENT_REQUIRED",
+                        "payment_token": payment_token,
+                    }
+                }
+            )

         # 生成新的 session_token (后踢前)
         session_token = generate_session_token()

         # 删除旧 session,插入新 session
         delete_sessions(user["id"])
         create_session(user["id"], session_token, None)

         # 生成 JWT Token
         token = create_access_token(user["id"], session_token)
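The login branch in this hunk now returns a structured 403 payload instead of a bare error, so the client can route the user to the cashier. A minimal sketch of how a caller might dispatch on that payload — the field names come from the diff, but `next_step` itself is a hypothetical helper, not code from this repo:

```python
def next_step(resp_status: int, payload: dict) -> str:
    """Decide what a client should do after POST /api/auth/login.

    Mirrors the contract in the diff: a 403 whose data.reason is
    PAYMENT_REQUIRED carries a payment_token for /api/payment/create-order.
    """
    if resp_status == 200:
        return "enter_app"
    data = payload.get("data") or {}
    if resp_status == 403 and data.get("reason") == "PAYMENT_REQUIRED":
        token = data.get("payment_token")
        # Without a token the cashier cannot be opened; treat as an error
        return f"open_cashier:{token}" if token else "error"
    return "show_error"

print(next_step(403, {"data": {"reason": "PAYMENT_REQUIRED", "payment_token": "t1"}}))
```

Keeping the dispatch keyed on `data.reason` (rather than on the 403 status alone) leaves room for other 403 causes later without breaking the payment flow.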
@@ -162,19 +176,19 @@ async def login(request: LoginRequest, response: Response):

         logger.info(f"用户登录: {request.phone}")

         return success_response(
             data={
                 "user": UserResponse(
                     id=user["id"],
                     phone=user["phone"],
                     username=user.get("username"),
                     role=user["role"],
                     is_active=user["is_active"],
                     expires_at=user.get("expires_at")
                 ).model_dump()
             },
             message="登录成功",
         )
     except HTTPException:
         raise
     except Exception as e:
||||||
@@ -186,10 +200,10 @@ async def login(request: LoginRequest, response: Response):


 @router.post("/logout")
 async def logout(response: Response):
     """用户登出"""
     clear_auth_cookie(response)
     return success_response(message="已登出")


 @router.post("/change-password")
@@ -217,12 +231,12 @@ async def change_password(request: ChangePasswordRequest, req: Request, response
         )

     try:
         user = cast(dict[str, Any], get_user_by_id(token_data.user_id) or {})
         if not user:
             raise HTTPException(
                 status_code=status.HTTP_401_UNAUTHORIZED,
                 detail="用户不存在"
             )

         # 验证当前密码
         if not verify_password(request.old_password, user["password_hash"]):
@@ -233,13 +247,13 @@ async def change_password(request: ChangePasswordRequest, req: Request, response

         # 更新密码
         new_password_hash = get_password_hash(request.new_password)
         update_user(user["id"], {"password_hash": new_password_hash})

         # 生成新的 session token,使旧 token 失效
         new_session_token = generate_session_token()

         delete_sessions(user["id"])
         create_session(user["id"], new_session_token, None)

         # 生成新的 JWT Token
         new_token = create_access_token(user["id"], new_session_token)
@@ -247,7 +261,7 @@ async def change_password(request: ChangePasswordRequest, req: Request, response

         logger.info(f"用户修改密码: {user['phone']}")

         return success_response(message="密码修改成功")
     except HTTPException:
         raise
     except Exception as e:
@@ -259,35 +273,13 @@ async def change_password(request: ChangePasswordRequest, req: Request, response


 @router.get("/me")
-async def get_me(request: Request):
+async def get_me(user: dict = Depends(get_current_user)):
     """获取当前用户信息"""
-    # 从 Cookie 获取用户
-    token = request.cookies.get("access_token")
-    if not token:
-        raise HTTPException(
-            status_code=status.HTTP_401_UNAUTHORIZED,
-            detail="未登录"
-        )
-
-    token_data = decode_access_token(token)
-    if not token_data:
-        raise HTTPException(
-            status_code=status.HTTP_401_UNAUTHORIZED,
-            detail="Token 无效"
-        )
-
-    user = cast(dict[str, Any], get_user_by_id(token_data.user_id) or {})
-    if not user:
-        raise HTTPException(
-            status_code=status.HTTP_401_UNAUTHORIZED,
-            detail="用户不存在"
-        )
-
     return success_response(UserResponse(
         id=user["id"],
         phone=user["phone"],
         username=user.get("username"),
         role=user["role"],
         is_active=user["is_active"],
         expires_at=user.get("expires_at")
     ).model_dump())
@@ -9,6 +9,7 @@ class GenerateAudioRequest(BaseModel):
     ref_audio_id: Optional[str] = None
     ref_text: Optional[str] = None
     language: str = "zh-CN"
+    speed: float = 1.0


 class RenameAudioRequest(BaseModel):
@@ -25,7 +25,7 @@ from app.modules.generated_audios.schemas import (
 BUCKET = "generated-audios"


-def _locale_to_qwen_lang(locale: str) -> str:
+def _locale_to_tts_lang(locale: str) -> str:
     mapping = {"zh": "Chinese", "en": "English"}
     return mapping.get(locale.split("-")[0], "Auto")

@@ -73,19 +73,20 @@ async def generate_audio_task(task_id: str, req: GenerateAudioRequest, user_id:
                     async for chunk in resp.aiter_bytes():
                         f.write(chunk)

-            task_store.update(task_id, {"progress": 40, "message": "正在克隆声音 (Qwen3-TTS)..."})
+            task_store.update(task_id, {"progress": 40, "message": "正在克隆声音..."})
             await voice_clone_service.generate_audio(
                 text=req.text,
                 ref_audio_path=ref_local,
                 ref_text=req.ref_text,
                 output_path=audio_path,
-                language=_locale_to_qwen_lang(req.language),
+                language=_locale_to_tts_lang(req.language),
+                speed=req.speed,
             )
         finally:
             if os.path.exists(ref_local):
                 os.unlink(ref_local)
     else:
-        task_store.update(task_id, {"progress": 30, "message": "正在生成语音 (EdgeTTS)..."})
+        task_store.update(task_id, {"progress": 30, "message": "正在生成语音..."})
         tts = TTSService()
         await tts.generate_audio(req.text, req.voice, audio_path)

0	backend/app/modules/payment/__init__.py	(new file)
52	backend/app/modules/payment/router.py	(new file)
@@ -0,0 +1,52 @@
+"""
+支付 API:创建订单、异步通知、状态查询
+
+遵循 BACKEND_DEV.md 规范:router 只做参数校验、调用 service、返回统一响应
+"""
+from fastapi import APIRouter, HTTPException, Request, status
+from fastapi.responses import PlainTextResponse
+
+from app.core.response import success_response
+from .schemas import CreateOrderRequest, CreateOrderResponse, OrderStatusResponse
+from . import service
+
+router = APIRouter(prefix="/api/payment", tags=["支付"])
+
+
+@router.post("/create-order")
+async def create_payment_order(request: CreateOrderRequest):
+    """创建支付宝电脑网站支付订单,返回收银台 URL"""
+    try:
+        result = service.create_payment_order(request.payment_token)
+    except ValueError as e:
+        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail=str(e))
+    except RuntimeError as e:
+        raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail=str(e))
+
+    return success_response(
+        CreateOrderResponse(**result).model_dump()
+    )
+
+
+@router.post("/notify")
+async def payment_notify(request: Request):
+    """
+    支付宝异步通知回调
+
+    必须返回纯文本 "success"(不是 JSON),否则支付宝会重复推送。
+    """
+    form_data = await request.form()
+    verified = service.handle_payment_notify(dict(form_data))
+    return PlainTextResponse("success" if verified else "fail")
+
+
+@router.get("/status/{out_trade_no}")
+async def check_payment_status(out_trade_no: str):
+    """查询订单支付状态(前端轮询)"""
+    order_status = service.get_order_status(out_trade_no)
+    if order_status is None:
+        raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="订单不存在")
+
+    return success_response(
+        OrderStatusResponse(status=order_status).model_dump()
+    )
15	backend/app/modules/payment/schemas.py	(new file)
@@ -0,0 +1,15 @@
+from pydantic import BaseModel
+
+
+class CreateOrderRequest(BaseModel):
+    payment_token: str
+
+
+class CreateOrderResponse(BaseModel):
+    pay_url: str
+    out_trade_no: str
+    amount: float
+
+
+class OrderStatusResponse(BaseModel):
+    status: str
137	backend/app/modules/payment/service.py	(new file)
@@ -0,0 +1,137 @@
+"""
+支付业务服务
+
+职责:Alipay SDK 封装、创建订单、处理支付通知、查询状态
+遵循 BACKEND_DEV.md "薄路由 + 厚服务" 原则
+"""
+from datetime import datetime, timezone, timedelta
+import uuid
+
+from alipay import AliPay
+from loguru import logger
+
+from app.core.config import settings
+from app.core.security import decode_payment_token
+from app.repositories.orders import create_order, get_order_by_trade_no, update_order_status
+from app.repositories.users import update_user
+
+# 支付宝网关地址
+ALIPAY_GATEWAY = "https://openapi.alipay.com/gateway.do"
+ALIPAY_GATEWAY_SANDBOX = "https://openapi-sandbox.dl.alipaydev.com/gateway.do"
+
+
+def _get_alipay_client() -> AliPay:
+    """延迟初始化 Alipay 客户端"""
+    return AliPay(
+        appid=settings.ALIPAY_APP_ID,
+        app_notify_url=settings.ALIPAY_NOTIFY_URL,
+        app_private_key_string=open(settings.ALIPAY_PRIVATE_KEY_PATH).read(),
+        alipay_public_key_string=open(settings.ALIPAY_PUBLIC_KEY_PATH).read(),
+        sign_type="RSA2",
+        debug=settings.ALIPAY_SANDBOX,
+    )
+
+
+def _create_page_pay_url(out_trade_no: str, amount: float, subject: str) -> str | None:
+    """调用 alipay.trade.page.pay,返回支付宝收银台 URL"""
+    client = _get_alipay_client()
+    order_string = client.api_alipay_trade_page_pay(
+        subject=subject,
+        out_trade_no=out_trade_no,
+        total_amount=amount,
+        return_url=settings.ALIPAY_RETURN_URL,
+    )
+    if not order_string:
+        logger.error(f"电脑网站支付下单失败: {out_trade_no}")
+        return None
+
+    gateway = ALIPAY_GATEWAY_SANDBOX if settings.ALIPAY_SANDBOX else ALIPAY_GATEWAY
+    pay_url = f"{gateway}?{order_string}"
+    logger.info(f"电脑网站支付下单成功: {out_trade_no}")
+    return pay_url
+
+
+def _verify_signature(data: dict, signature: str) -> bool:
+    """验证支付宝异步通知签名"""
+    client = _get_alipay_client()
+    return client.verify(data, signature)
+
+
+def create_payment_order(payment_token: str) -> dict:
+    """
+    创建支付订单完整流程
+
+    Returns: {"pay_url": str, "out_trade_no": str, "amount": float}
+    Raises: ValueError (token 无效), RuntimeError (API 失败)
+    """
+    user_id = decode_payment_token(payment_token)
+    if not user_id:
+        raise ValueError("付费凭证无效或已过期,请重新登录")
+
+    out_trade_no = f"VG_{int(datetime.now().timestamp())}_{uuid.uuid4().hex[:8]}"
+    amount = settings.PAYMENT_AMOUNT
+
+    create_order(user_id, out_trade_no, amount)
+
+    pay_url = _create_page_pay_url(out_trade_no, amount, "IPAgent 会员开通")
+    if not pay_url:
+        raise RuntimeError("创建支付订单失败,请稍后重试")
+
+    logger.info(f"用户 {user_id} 创建支付订单: {out_trade_no}")
+
+    return {"pay_url": pay_url, "out_trade_no": out_trade_no, "amount": amount}
+
+
+def handle_payment_notify(form_data: dict) -> bool:
+    """
+    处理支付宝异步通知完整流程
+
+    Returns: True=验签通过, False=验签失败
+    """
+    data = dict(form_data)
+
+    signature = data.pop("sign", "")
+    data.pop("sign_type", None)
+
+    if not _verify_signature(data, signature):
+        logger.warning(f"支付宝通知验签失败: {data.get('out_trade_no')}")
+        return False
+
+    out_trade_no = data.get("out_trade_no", "")
+    trade_status = data.get("trade_status", "")
+    trade_no = data.get("trade_no", "")
+
+    logger.info(f"收到支付宝通知: {out_trade_no}, status={trade_status}, trade_no={trade_no}")
+
+    if trade_status not in ("TRADE_SUCCESS", "TRADE_FINISHED"):
+        return True
+
+    order = get_order_by_trade_no(out_trade_no)
+    if not order:
+        logger.warning(f"订单不存在: {out_trade_no}")
+        return True
+
+    if order["status"] == "paid":
+        logger.info(f"订单已处理过: {out_trade_no}")
+        return True
+
+    update_order_status(out_trade_no, "paid", trade_no)
+
+    user_id = order["user_id"]
+    expires_at = (datetime.now(timezone.utc) + timedelta(days=settings.PAYMENT_EXPIRE_DAYS)).isoformat()
+    update_user(user_id, {
+        "is_active": True,
+        "role": "user",
+        "expires_at": expires_at,
+    })
+
+    logger.success(f"用户 {user_id} 支付成功,已激活,有效期至 {expires_at}")
+    return True
+
+
+def get_order_status(out_trade_no: str) -> str | None:
+    """查询订单支付状态"""
+    order = get_order_by_trade_no(out_trade_no)
+    if not order:
+        return None
+    return order["status"]
@@ -13,7 +13,7 @@ router = APIRouter()
 @router.post("")
 async def upload_ref_audio(
     file: UploadFile = File(...),
-    ref_text: str = Form(...),
+    ref_text: str = Form(""),
     user: dict = Depends(get_current_user)
 ):
     """上传参考音频"""
@@ -68,3 +68,21 @@ async def rename_ref_audio(
     except Exception as e:
         logger.error(f"重命名失败: {e}")
         raise HTTPException(status_code=500, detail=f"重命名失败: {str(e)}")
+
+
+@router.post("/{audio_id:path}/retranscribe")
+async def retranscribe_ref_audio(
+    audio_id: str,
+    user: dict = Depends(get_current_user)
+):
+    """重新识别参考音频的文字内容"""
+    try:
+        result = await service.retranscribe_ref_audio(audio_id, user["id"])
+        return success_response(result, message="识别完成")
+    except PermissionError as e:
+        raise HTTPException(status_code=403, detail=str(e))
+    except ValueError as e:
+        raise HTTPException(status_code=400, detail=str(e))
+    except Exception as e:
+        logger.error(f"重新识别失败: {e}")
+        raise HTTPException(status_code=500, detail=f"识别失败: {str(e)}")
@@ -2,9 +2,11 @@ import re
 import os
 import time
 import json
+import hashlib
 import asyncio
 import subprocess
 import tempfile
+import unicodedata
 from pathlib import Path
 from typing import Optional

@@ -19,8 +21,16 @@ BUCKET_REF_AUDIOS = "ref-audios"


 def sanitize_filename(filename: str) -> str:
-    """清理文件名,移除特殊字符"""
-    safe_name = re.sub(r'[<>:"/\\|?*\s]', '_', filename)
+    """清理文件名用于 Storage key(仅保留 ASCII 安全字符)。"""
+    normalized = unicodedata.normalize("NFKD", filename)
+    ascii_name = normalized.encode("ascii", "ignore").decode("ascii")
+    safe_name = re.sub(r"[^A-Za-z0-9._-]+", "_", ascii_name).strip("._-")
+
+    # 纯中文/emoji 等场景会被清空,使用稳定哈希兜底,避免 InvalidKey
+    if not safe_name:
+        digest = hashlib.md5(filename.encode("utf-8")).hexdigest()[:12]
+        safe_name = f"audio_{digest}"
+
     if len(safe_name) > 50:
         ext = Path(safe_name).suffix
         safe_name = safe_name[:50 - len(ext)] + ext
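The rewritten `sanitize_filename` is self-contained enough to run standalone. A stdlib-only rendering of the same steps from the hunk above, with the behavior made explicit — NFKD decomposition, ASCII filtering, hash fallback, and the 50-char cap:

```python
import hashlib
import re
import unicodedata
from pathlib import Path

def sanitize_filename(filename: str) -> str:
    """Reduce a filename to an ASCII-safe storage key (mirrors the diff)."""
    # NFKD-decompose, drop non-ASCII, squeeze unsafe runs to "_"
    normalized = unicodedata.normalize("NFKD", filename)
    ascii_name = normalized.encode("ascii", "ignore").decode("ascii")
    safe_name = re.sub(r"[^A-Za-z0-9._-]+", "_", ascii_name).strip("._-")

    # Pure-CJK/emoji names decompose to nothing; a stable hash keeps the key valid
    if not safe_name:
        digest = hashlib.md5(filename.encode("utf-8")).hexdigest()[:12]
        safe_name = f"audio_{digest}"

    # Cap at 50 chars, preserving the extension
    if len(safe_name) > 50:
        ext = Path(safe_name).suffix
        safe_name = safe_name[:50 - len(ext)] + ext
    return safe_name

print(sanitize_filename("My File (1).wav"))  # My_File_1_.wav
```

Note one subtlety: a name like `参考音频.wav` does not hit the hash fallback, because the ASCII extension survives the filter and the leading `.` is stripped, leaving `wav`; only names with no ASCII-safe characters at all get the `audio_<hash>` key.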
@@ -41,16 +51,40 @@ def _get_audio_duration(file_path: str) -> float:
     return 0.0


-def _convert_to_wav(input_path: str, output_path: str) -> bool:
-    """将音频转换为 WAV 格式 (16kHz, mono)"""
-    try:
-        subprocess.run([
-            'ffmpeg', '-y', '-i', input_path,
-            '-ar', '16000',
-            '-ac', '1',
-            '-acodec', 'pcm_s16le',
-            output_path
-        ], capture_output=True, timeout=60, check=True)
+def _find_silence_cut_point(file_path: str, max_duration: float) -> float:
+    """在 max_duration 附近找一个静音点作为截取位置,找不到则回退到 max_duration"""
+    try:
+        # 用 silencedetect 找所有静音段(阈值 -30dB,最短 0.3 秒)
+        result = subprocess.run(
+            ['ffmpeg', '-i', file_path, '-af',
+             'silencedetect=noise=-30dB:d=0.3', '-f', 'null', '-'],
+            capture_output=True, text=True, timeout=30
+        )
+        # 解析 silence_end 时间点
+        import re as _re
+        ends = [float(m) for m in _re.findall(r'silence_end:\s*([\d.]+)', result.stderr)]
+        # 找 max_duration 之前最后一个静音结束点(至少 3 秒)
+        candidates = [t for t in ends if 3.0 <= t <= max_duration]
+        if candidates:
+            cut = candidates[-1]
+            logger.info(f"Found silence cut point at {cut:.1f}s (max={max_duration}s)")
+            return cut
+    except Exception as e:
+        logger.warning(f"Silence detection failed: {e}")
+    return max_duration
+
+
+def _convert_to_wav(input_path: str, output_path: str, max_duration: float = 0) -> bool:
+    """将音频转换为 WAV 格式 (16kHz, mono),可选截取前 max_duration 秒并淡出"""
+    try:
+        cmd = ['ffmpeg', '-y', '-i', input_path]
+        if max_duration > 0:
+            cmd += ['-t', str(max_duration)]
+            # 末尾 0.1 秒淡出,避免截断爆音
+            fade_start = max(0, max_duration - 0.1)
+            cmd += ['-af', f'afade=t=out:st={fade_start}:d=0.1']
+        cmd += ['-ar', '16000', '-ac', '1', '-acodec', 'pcm_s16le', output_path]
+        subprocess.run(cmd, capture_output=True, timeout=60, check=True)
         return True
     except Exception as e:
         logger.error(f"音频转换失败: {e}")
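`_find_silence_cut_point` shells out to ffmpeg, but the selection logic over the parsed `silence_end` timestamps is pure and easy to check in isolation. A sketch that assumes the same `silencedetect` stderr format the hunk parses:

```python
import re

def pick_cut_point(ffmpeg_stderr: str, max_duration: float) -> float:
    """Choose the last silence_end in [3.0, max_duration]; fall back to max_duration."""
    ends = [float(m) for m in re.findall(r"silence_end:\s*([\d.]+)", ffmpeg_stderr)]
    candidates = [t for t in ends if 3.0 <= t <= max_duration]
    return candidates[-1] if candidates else max_duration

# Typical silencedetect output lines (abbreviated)
stderr = (
    "[silencedetect @ 0x1] silence_start: 2.1\n"
    "[silencedetect @ 0x1] silence_end: 2.5 | silence_duration: 0.4\n"
    "[silencedetect @ 0x1] silence_end: 8.7 | silence_duration: 0.35\n"
    "[silencedetect @ 0x1] silence_end: 12.4 | silence_duration: 0.5\n"
)
print(pick_cut_point(stderr, 10.0))  # 8.7 (2.5 is below the 3s floor, 12.4 is past the cap)
```

The 3-second floor prevents degenerate cuts at the very start of the clip; when no silence falls in the window, trimming at `max_duration` plus the 0.1s fade in `_convert_to_wav` keeps the truncation click-free.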
@@ -67,9 +101,6 @@ async def upload_ref_audio(file, ref_text: str, user_id: str) -> dict:
     if ext not in ALLOWED_AUDIO_EXTENSIONS:
         raise ValueError(f"不支持的音频格式: {ext}。支持的格式: {', '.join(ALLOWED_AUDIO_EXTENSIONS)}")

-    if not ref_text or len(ref_text.strip()) < 2:
-        raise ValueError("参考文字不能为空")
-
     # 创建临时文件
     with tempfile.NamedTemporaryFile(delete=False, suffix=ext) as tmp_input:
         content = await file.read()
@@ -86,8 +117,31 @@ async def upload_ref_audio(file, ref_text: str, user_id: str) -> dict:
     duration = _get_audio_duration(tmp_wav_path)
     if duration < 1.0:
         raise ValueError("音频时长过短,至少需要 1 秒")
-    if duration > 60.0:
-        raise ValueError("音频时长过长,最多 60 秒")
+
+    # 超过 10 秒自动在静音点截取(CosyVoice 对 3-10 秒效果最好)
+    MAX_REF_DURATION = 10.0
+    if duration > MAX_REF_DURATION:
+        cut_point = _find_silence_cut_point(tmp_wav_path, MAX_REF_DURATION)
+        logger.info(f"Ref audio {duration:.1f}s > {MAX_REF_DURATION}s, trimming at {cut_point:.1f}s")
+        trimmed_path = tmp_input_path + "_trimmed.wav"
+        if not _convert_to_wav(tmp_wav_path, trimmed_path, max_duration=cut_point):
+            raise RuntimeError("音频截取失败")
+        os.unlink(tmp_wav_path)
+        tmp_wav_path = trimmed_path
+        duration = _get_audio_duration(tmp_wav_path)
+
+    # 自动转写参考音频内容
+    try:
+        from app.services.whisper_service import whisper_service
+        transcribed = await whisper_service.transcribe(tmp_wav_path)
+        if transcribed.strip():
+            ref_text = transcribed.strip()
+            logger.info(f"Auto-transcribed ref audio: {ref_text[:50]}...")
+    except Exception as e:
+        logger.warning(f"Auto-transcribe failed: {e}")
+
+    if not ref_text or not ref_text.strip():
+        raise ValueError("无法识别音频内容,请确保音频包含清晰的语音")
+
     # 检查重名
     existing_files = await storage_service.list_files(BUCKET_REF_AUDIOS, user_id)
@@ -267,3 +321,85 @@ async def rename_ref_audio(audio_id: str, new_name: str, user_id: str) -> dict:
     )

     return {"name": new_name}
+
+
+async def retranscribe_ref_audio(audio_id: str, user_id: str) -> dict:
+    """重新转写参考音频的 ref_text,并截取前 10 秒重新上传(用于迁移旧数据)"""
+    if not audio_id.startswith(f"{user_id}/"):
+        raise PermissionError("无权修改此文件")
+
+    # 下载音频到临时文件
+    audio_url = await storage_service.get_signed_url(BUCKET_REF_AUDIOS, audio_id)
+    tmp_wav_path = None
+    trimmed_path = None
+    try:
+        with tempfile.NamedTemporaryFile(delete=False, suffix=".wav") as tmp:
+            tmp_wav_path = tmp.name
+            timeout = httpx.Timeout(None)
+            async with httpx.AsyncClient(timeout=timeout) as client:
+                async with client.stream("GET", audio_url) as resp:
+                    resp.raise_for_status()
+                    async for chunk in resp.aiter_bytes():
+                        tmp.write(chunk)
+
+        # 超过 10 秒则截取前 10 秒并重新上传音频
+        MAX_REF_DURATION = 10.0
+        duration = _get_audio_duration(tmp_wav_path)
+        transcribe_path = tmp_wav_path
+        need_reupload = False
+
+        if duration > MAX_REF_DURATION:
+            cut_point = _find_silence_cut_point(tmp_wav_path, MAX_REF_DURATION)
+            logger.info(f"Retranscribe: trimming {audio_id} from {duration:.1f}s at {cut_point:.1f}s")
+            trimmed_path = tmp_wav_path + "_trimmed.wav"
+            if _convert_to_wav(tmp_wav_path, trimmed_path, max_duration=cut_point):
+                transcribe_path = trimmed_path
+                duration = _get_audio_duration(trimmed_path)
+                need_reupload = True
+
+        # Whisper 转写
+        from app.services.whisper_service import whisper_service
+        transcribed = await whisper_service.transcribe(transcribe_path)
+        if not transcribed or not transcribed.strip():
+            raise ValueError("无法识别音频内容")
+
+        ref_text = transcribed.strip()
+        logger.info(f"Re-transcribed ref audio {audio_id}: {ref_text[:50]}...")
+
+        # 截取过的音频重新上传覆盖原文件
+        if need_reupload and trimmed_path:
+            with open(trimmed_path, "rb") as f:
+                await storage_service.upload_file(
+                    bucket=BUCKET_REF_AUDIOS, path=audio_id,
+                    file_data=f.read(), content_type="audio/wav",
+                )
+            logger.info(f"Re-uploaded trimmed audio: {audio_id} ({duration:.1f}s)")
+
+        # 更新 metadata
+        metadata_path = audio_id.replace(".wav", ".json")
+        try:
+            meta_url = await storage_service.get_signed_url(BUCKET_REF_AUDIOS, metadata_path)
+            async with httpx.AsyncClient(timeout=5.0) as client:
+                resp = await client.get(meta_url)
+                if resp.status_code == 200:
+                    metadata = resp.json()
+                else:
+                    raise Exception(f"status {resp.status_code}")
+        except Exception:
+            metadata = {}
+
+        metadata["ref_text"] = ref_text
+        metadata["duration_sec"] = duration
+        await storage_service.upload_file(
+            bucket=BUCKET_REF_AUDIOS,
+            path=metadata_path,
+            file_data=json.dumps(metadata, ensure_ascii=False).encode('utf-8'),
+            content_type="application/json"
+        )
+
+        return {"ref_text": ref_text, "duration_sec": duration}
+    finally:
+        if tmp_wav_path and os.path.exists(tmp_wav_path):
+            os.unlink(tmp_wav_path)
+        if trimmed_path and os.path.exists(trimmed_path):
+            os.unlink(trimmed_path)
@@ -13,11 +13,12 @@ router = APIRouter()
 async def extract_script_tool(
     file: Optional[UploadFile] = File(None),
     url: Optional[str] = Form(None),
-    rewrite: bool = Form(True)
+    rewrite: bool = Form(True),
+    custom_prompt: Optional[str] = Form(None)
 ):
     """独立文案提取工具"""
     try:
-        result = await service.extract_script(file=file, url=url, rewrite=rewrite)
+        result = await service.extract_script(file=file, url=url, rewrite=rewrite, custom_prompt=custom_prompt)
         return success_response(result)
     except ValueError as e:
         raise HTTPException(400, str(e))
@@ -17,9 +17,9 @@ from app.services.whisper_service import whisper_service
|
|||||||
from app.services.glm_service import glm_service
|
from app.services.glm_service import glm_service
|
||||||
|
|
||||||
|
|
||||||
async def extract_script(file=None, url: Optional[str] = None, rewrite: bool = True) -> dict:
|
async def extract_script(file=None, url: Optional[str] = None, rewrite: bool = True, custom_prompt: Optional[str] = None) -> dict:
|
||||||
"""
|
"""
|
||||||
文案提取:上传文件或视频链接 -> Whisper 转写 -> (可选) GLM 洗稿
|
文案提取:上传文件或视频链接 -> Whisper 转写 -> (可选) GLM 改写
|
||||||
"""
|
"""
|
||||||
if not file and not url:
|
if not file and not url:
|
||||||
raise ValueError("必须提供文件或视频链接")
|
raise ValueError("必须提供文件或视频链接")
|
||||||
@@ -63,11 +63,15 @@ async def extract_script(file=None, url: Optional[str] = None, rewrite: bool = T
     # 2. 提取文案 (Whisper)
     script = await whisper_service.transcribe(str(audio_path))

-    # 3. AI 洗稿 (GLM)
+    # 3. AI 改写 (GLM) — 失败时降级返回原文
     rewritten = None
     if rewrite and script and len(script.strip()) > 0:
         logger.info("Rewriting script...")
-        rewritten = await glm_service.rewrite_script(script)
+        try:
+            rewritten = await glm_service.rewrite_script(script, custom_prompt)
+        except Exception as e:
+            logger.warning(f"GLM rewrite failed, returning original script: {e}")
+            rewritten = None

     return {
         "original_script": script,
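The hunk above wraps the GLM call so a rewrite failure degrades to the raw transcript instead of failing the whole request. A minimal sketch of the same pattern, with `glm_service.rewrite_script` replaced by a hypothetical always-failing stub, and with the `rewritten_script` key assumed (the diff only shows `original_script` in the return dict):

```python
import asyncio
import logging

logging.basicConfig()
logger = logging.getLogger("extract")

async def failing_rewrite(script: str, custom_prompt=None) -> str:
    # Hypothetical stand-in for glm_service.rewrite_script; always raises here.
    raise RuntimeError("GLM API timeout")

async def extract_with_fallback(script: str) -> dict:
    # Same shape as the hunk above: a rewrite failure must not fail the request.
    rewritten = None
    try:
        rewritten = await failing_rewrite(script)
    except Exception as e:
        logger.warning(f"GLM rewrite failed, returning original script: {e}")
        rewritten = None
    return {"original_script": script, "rewritten_script": rewritten}

result = asyncio.run(extract_with_fallback("今天分享三个技巧"))
print(result)
```

The caller still gets a usable payload: `original_script` is always present, and a `None` rewrite signals that the downgrade path was taken.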
@@ -156,125 +160,120 @@ def _download_yt_dlp(url_value: str, temp_dir: Path, timestamp: int) -> Path:
         'quiet': True,
         'no_warnings': True,
         'http_headers': {
-            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
+            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36',
             'Referer': 'https://www.douyin.com/',
         }
     }

-    with yt_dlp.YoutubeDL() as ydl_raw:
-        ydl: Any = ydl_raw
-        ydl.params.update(ydl_opts)
+    with yt_dlp.YoutubeDL(ydl_opts) as ydl:
         info = ydl.extract_info(url_value, download=True)
         if 'requested_downloads' in info:
             downloaded_file = info['requested_downloads'][0]['filepath']
         else:
             ext = info.get('ext', 'mp4')
-            id = info.get('id')
-            downloaded_file = str(temp_dir / f"tool_download_{timestamp}_{id}.{ext}")
+            vid_id = info.get('id')
+            downloaded_file = str(temp_dir / f"tool_download_{timestamp}_{vid_id}.{ext}")

     return Path(downloaded_file)


 async def _download_douyin_manual(url: str, temp_dir: Path, timestamp: int) -> Optional[Path]:
-    """手动下载抖音视频 (Fallback)"""
-    logger.info(f"[SuperIPAgent] Starting download for: {url}")
+    """手动下载抖音视频 (Fallback) — 通过移动端分享页获取播放地址"""
+    logger.info(f"[douyin-fallback] Starting download for: {url}")

     try:
+        # 1. 解析短链接,提取视频 ID
         headers = {
-            "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36"
+            "user-agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X) AppleWebKit/605.1.15"
         }

         async with httpx.AsyncClient(follow_redirects=True, timeout=10.0) as client:
             resp = await client.get(url, headers=headers)
             final_url = str(resp.url)

-        logger.info(f"[SuperIPAgent] Final URL: {final_url}")
+        logger.info(f"[douyin-fallback] Final URL: {final_url}")

-        modal_id = None
+        video_id = None
         match = re.search(r'/video/(\d+)', final_url)
         if match:
-            modal_id = match.group(1)
+            video_id = match.group(1)

-        if not modal_id:
-            logger.error("[SuperIPAgent] Could not extract modal_id")
+        if not video_id:
+            logger.error("[douyin-fallback] Could not extract video_id")
             return None

-        logger.info(f"[SuperIPAgent] Extracted modal_id: {modal_id}")
+        logger.info(f"[douyin-fallback] Extracted video_id: {video_id}")

-        target_url = f"https://www.douyin.com/user/MS4wLjABAAAAN_s_hups7LD0N4qnrM3o2gI0vuG3pozNaEolz2_py3cHTTrpVr1Z4dukFD9SOlwY?from_tab_name=main&modal_id={modal_id}"
+        # 2. 获取新鲜 ttwid
+        ttwid = ""
+        try:
+            async with httpx.AsyncClient(timeout=10.0) as client:
+                ttwid_resp = await client.post(
+                    "https://ttwid.bytedance.com/ttwid/union/register/",
+                    json={
+                        "region": "cn", "aid": 6383, "needFid": False,
+                        "service": "www.douyin.com",
+                        "migrate_info": {"ticket": "", "source": "node"},
+                        "cbUrlProtocol": "https", "union": True,
+                    }
+                )
+                ttwid = ttwid_resp.cookies.get("ttwid", "")
+                logger.info(f"[douyin-fallback] Got fresh ttwid (len={len(ttwid)})")
+        except Exception as e:
+            logger.warning(f"[douyin-fallback] Failed to get ttwid: {e}")

-        from app.core.config import settings
-        if not settings.DOUYIN_COOKIE:
-            logger.warning("[SuperIPAgent] DOUYIN_COOKIE 未配置,视频下载可能失败")
-
-        headers_with_cookie = {
-            "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7",
-            "cookie": settings.DOUYIN_COOKIE,
-            "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36",
+        # 3. 访问移动端分享页提取播放地址
+        page_headers = {
+            "user-agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X) AppleWebKit/605.1.15",
+            "cookie": f"ttwid={ttwid}" if ttwid else "",
         }

-        logger.info(f"[SuperIPAgent] Requesting page with Cookie...")
-
-        async with httpx.AsyncClient(timeout=10.0) as client:
-            response = await client.get(target_url, headers=headers_with_cookie)
+        async with httpx.AsyncClient(follow_redirects=True, timeout=15.0) as client:
+            page_resp = await client.get(
+                f"https://m.douyin.com/share/video/{video_id}",
+                headers=page_headers,
+            )

-        content_match = re.findall(r'<script id="RENDER_DATA" type="application/json">(.*?)</script>', response.text)
-        if not content_match:
-            if "SSR_HYDRATED_DATA" in response.text:
-                content_match = re.findall(r'<script id="SSR_HYDRATED_DATA" type="application/json">(.*?)</script>', response.text)
+        page_text = page_resp.text
+        logger.info(f"[douyin-fallback] Mobile page length: {len(page_text)}")

-        if not content_match:
-            logger.error(f"[SuperIPAgent] Could not find RENDER_DATA in page (len={len(response.text)})")
-            return None
-
-        content = unquote(content_match[0])
-        try:
-            data = json.loads(content)
-        except:
-            logger.error("[SuperIPAgent] JSON decode failed")
-            return None
-
-        video_url = None
-        try:
-            if "app" in data and "videoDetail" in data["app"]:
-                info = data["app"]["videoDetail"]["video"]
-                if "bitRateList" in info and info["bitRateList"]:
-                    video_url = info["bitRateList"][0]["playAddr"][0]["src"]
-                elif "playAddr" in info and info["playAddr"]:
-                    video_url = info["playAddr"][0]["src"]
-        except Exception as e:
-            logger.error(f"[SuperIPAgent] Path extraction failed: {e}")
-
-        if not video_url:
-            logger.error("[SuperIPAgent] No video_url found")
+        # 4. 提取 play_addr
+        addr_match = re.search(
+            r'"play_addr":\{"uri":"([^"]+)","url_list":\["([^"]+)"',
+            page_text,
+        )
+        if not addr_match:
+            logger.error("[douyin-fallback] Could not find play_addr in mobile page")
             return None

+        video_url = addr_match.group(2).replace(r"\u002F", "/")
         if video_url.startswith("//"):
             video_url = "https:" + video_url

-        logger.info(f"[SuperIPAgent] Found video URL: {video_url[:50]}...")
+        logger.info(f"[douyin-fallback] Found video URL: {video_url[:80]}...")

+        # 5. 下载视频
         temp_path = temp_dir / f"douyin_manual_{timestamp}.mp4"
         download_headers = {
-            'Referer': 'https://www.douyin.com/',
-            'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36',
+            "Referer": "https://www.douyin.com/",
+            "User-Agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X) AppleWebKit/605.1.15",
         }

-        async with httpx.AsyncClient(timeout=60.0) as client:
+        async with httpx.AsyncClient(timeout=120.0, follow_redirects=True) as client:
             async with client.stream("GET", video_url, headers=download_headers) as dl_resp:
                 if dl_resp.status_code == 200:
-                    with open(temp_path, 'wb') as f:
+                    with open(temp_path, "wb") as f:
                         async for chunk in dl_resp.aiter_bytes(chunk_size=8192):
                             f.write(chunk)

-                    logger.info(f"[SuperIPAgent] Downloaded successfully: {temp_path}")
+                    logger.info(f"[douyin-fallback] Downloaded successfully: {temp_path}")
                     return temp_path
                 else:
-                    logger.error(f"[SuperIPAgent] Download failed: {dl_resp.status_code}")
+                    logger.error(f"[douyin-fallback] Download failed: {dl_resp.status_code}")
                     return None

     except Exception as e:
-        logger.error(f"[SuperIPAgent] Logic failed: {e}")
+        logger.error(f"[douyin-fallback] Logic failed: {e}")
        return None


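Both the video-ID extraction and the `play_addr` scrape in the fallback above are plain regex work and can be checked offline. A sketch against a synthetic resolved URL and a synthetic page fragment (the ID, host, and `uri` values below are made up for illustration):

```python
import re

# Resolved share URL -> numeric video id, same pattern as the fallback.
final_url = "https://www.iesdouyin.com/share/video/7301234567890123456/?region=CN"
match = re.search(r'/video/(\d+)', final_url)
video_id = match.group(1) if match else None

# Synthetic fragment mimicking the JSON embedded in the mobile share page,
# with forward slashes escaped as \u002F the way the page serves them.
page_text = r'..."play_addr":{"uri":"v0300f","url_list":["https:\u002F\u002Fv3-web.example.com\u002Fvideo.mp4"]...'
addr_match = re.search(r'"play_addr":\{"uri":"([^"]+)","url_list":\["([^"]+)"', page_text)

# Unescape the literal \u002F sequences back into '/' and fix protocol-relative URLs.
video_url = addr_match.group(2).replace(r"\u002F", "/")
if video_url.startswith("//"):
    video_url = "https:" + video_url

print(video_id, video_url)
```

The `replace(r"\u002F", "/")` step matters because the page embeds the URL as raw JSON-escaped text rather than decoding it first.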
@@ -1,5 +1,5 @@
 from pydantic import BaseModel
-from typing import Optional, List
+from typing import Optional, List, Literal


 class CustomAssignment(BaseModel):
@@ -7,6 +7,7 @@ class CustomAssignment(BaseModel):
     start: float # 音频时间轴起点
     end: float # 音频时间轴终点
     source_start: float = 0.0 # 源视频截取起点
+    source_end: Optional[float] = None # 源视频截取终点(可选)


 class GenerateRequest(BaseModel):
@@ -20,9 +21,15 @@ class GenerateRequest(BaseModel):
     language: str = "zh-CN"
     generated_audio_id: Optional[str] = None # 预生成配音 ID(存在时跳过内联 TTS)
     title: Optional[str] = None
+    title_display_mode: Literal["short", "persistent"] = "short"
+    title_duration: float = 4.0
     enable_subtitles: bool = True
     subtitle_style_id: Optional[str] = None
     title_style_id: Optional[str] = None
+    secondary_title: Optional[str] = None
+    secondary_title_style_id: Optional[str] = None
+    secondary_title_font_size: Optional[int] = None
+    secondary_title_top_margin: Optional[int] = None
     subtitle_font_size: Optional[int] = None
     title_font_size: Optional[int] = None
     title_top_margin: Optional[int] = None
@@ -30,3 +37,4 @@ class GenerateRequest(BaseModel):
     bgm_id: Optional[str] = None
     bgm_volume: Optional[float] = 0.2
     custom_assignments: Optional[List[CustomAssignment]] = None
+    output_aspect_ratio: Literal["9:16", "16:9"] = "9:16"
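The new `Literal` fields get request validation from pydantic automatically, and the allowed alternatives can also be introspected, which is handy for building front-end option lists. A stdlib-only sketch of the equivalent check (no pydantic dependency; the validator function is illustrative, not part of the schema):

```python
from typing import Literal, get_args

# Same alternatives as the schema fields above.
TitleDisplayMode = Literal["short", "persistent"]
AspectRatio = Literal["9:16", "16:9"]

def validate_aspect_ratio(value: str) -> str:
    # Rough equivalent of what pydantic enforces for a Literal-typed field:
    # reject anything outside the declared alternatives.
    allowed = get_args(AspectRatio)
    if value not in allowed:
        raise ValueError(f"output_aspect_ratio must be one of {allowed}")
    return value

print(get_args(TitleDisplayMode))  # option list for a UI dropdown
print(validate_aspect_ratio("9:16"))
```

`get_args` returns the alternatives as a tuple, so the same type alias can drive both validation and the dropdown the client renders.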
@@ -1,5 +1,6 @@
 from typing import Optional, Any, List
 from pathlib import Path
+import asyncio
 import time
 import traceback
 import httpx
@@ -29,7 +30,7 @@ def _locale_to_whisper_lang(locale: str) -> str:
     return locale.split("-")[0] if "-" in locale else locale


-def _locale_to_qwen_lang(locale: str) -> str:
+def _locale_to_tts_lang(locale: str) -> str:
     """'zh-CN' → 'Chinese', 'en-US' → 'English', 其他 → 'Auto'"""
     mapping = {"zh": "Chinese", "en": "English"}
     return mapping.get(locale.split("-")[0], "Auto")
@@ -174,17 +175,27 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:

     # ── 确定素材列表 ──
     material_paths: List[str] = []
-    if req.material_paths and len(req.material_paths) > 1:
+    if req.custom_assignments and len(req.custom_assignments) > 1:
+        material_paths = [a.material_path for a in req.custom_assignments if a.material_path]
+    elif req.material_paths and len(req.material_paths) > 1:
         material_paths = req.material_paths
     else:
         material_paths = [req.material_path]

     is_multi = len(material_paths) > 1
+    target_resolution = (1080, 1920) if req.output_aspect_ratio == "9:16" else (1920, 1080)
+
+    logger.info(
+        f"[Render] 输出画面比例: {req.output_aspect_ratio}, "
+        f"目标分辨率: {target_resolution[0]}x{target_resolution[1]}"
+    )

     _update_task(task_id, status="processing", progress=5, message="正在下载素材...")

     temp_dir = settings.UPLOAD_DIR / "temp"
     temp_dir.mkdir(parents=True, exist_ok=True)
+    video = VideoService()
+    input_material_path: Optional[Path] = None

     # 单素材模式:下载主素材
     if not is_multi:
@@ -192,6 +203,16 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
         temp_files.append(input_material_path)
         await _download_material(material_paths[0], input_material_path)

+        # 归一化旋转元数据(如 iPhone MOV 1920x1080 + rotation=-90)
+        normalized_input_path = temp_dir / f"{task_id}_input_norm.mp4"
+        normalized_result = video.normalize_orientation(
+            str(input_material_path),
+            str(normalized_input_path),
+        )
+        if normalized_result != str(input_material_path):
+            temp_files.append(normalized_input_path)
+            input_material_path = normalized_input_path
+
     _update_task(task_id, message="正在生成语音...", progress=10)

     audio_path = temp_dir / f"{task_id}_audio.wav"
@@ -218,8 +239,10 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
             if resp.status_code == 200:
                 meta = resp.json()
                 req.language = meta.get("language", req.language)
-                if not req.text.strip():
-                    req.text = meta.get("text", req.text)
+                # 无条件用配音元数据覆盖文案,确保字幕与配音语言一致
+                meta_text = meta.get("text", "")
+                if meta_text:
+                    req.text = meta_text
         except Exception as e:
             logger.warning(f"读取配音元数据失败: {e}")

@@ -238,13 +261,13 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
             )
             await _download_material(ref_audio_url, ref_audio_local)

-            _update_task(task_id, message="正在克隆声音 (Qwen3-TTS)...")
+            _update_task(task_id, message="正在克隆声音...")
             await voice_clone_service.generate_audio(
                 text=req.text,
                 ref_audio_path=str(ref_audio_local),
                 ref_text=req.ref_text,
                 output_path=str(audio_path),
-                language=_locale_to_qwen_lang(req.language)
+                language=_locale_to_tts_lang(req.language)
             )
         else:
             _update_task(task_id, message="正在生成语音 (EdgeTTS)...")
@@ -258,7 +281,6 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
     lipsync_video_path = temp_dir / f"{task_id}_lipsync.mp4"
     temp_files.append(lipsync_video_path)

-    video = VideoService()
     captions_path = None

     if is_multi:
@@ -267,7 +289,7 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
         # ══════════════════════════════════════
         _update_task(task_id, progress=12, message="正在分配素材...")

-        if req.custom_assignments:
+        if req.custom_assignments and len(req.custom_assignments) == len(material_paths):
             # 用户自定义分配,跳过 Whisper 均分
             assignments = [
                 {
@@ -275,6 +297,7 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
                     "start": a.start,
                     "end": a.end,
                     "source_start": a.source_start,
+                    "source_end": a.source_end,
                     "index": i,
                 }
                 for i, a in enumerate(req.custom_assignments)
@@ -290,6 +313,7 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
                     text=req.text,
                     output_path=str(captions_path),
                     language=_locale_to_whisper_lang(req.language),
+                    original_text=req.text,
                 )
                 print(f"[Pipeline] Whisper alignment completed (custom assignments)")
             except Exception as e:
@@ -297,6 +321,49 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
                 captions_path = None
         else:
             captions_path = None
+        elif req.custom_assignments:
+            logger.warning(
+                f"[MultiMat] custom_assignments 数量({len(req.custom_assignments)})"
+                f" 与素材数量({len(material_paths)})不一致,回退自动分配"
+            )
+
+            # 原有逻辑:Whisper → _split_equal
+            _update_task(task_id, message="正在生成字幕 (Whisper)...")
+
+            captions_path = temp_dir / f"{task_id}_captions.json"
+            temp_files.append(captions_path)
+
+            try:
+                captions_data = await whisper_service.align(
+                    audio_path=str(audio_path),
+                    text=req.text,
+                    output_path=str(captions_path),
+                    language=_locale_to_whisper_lang(req.language),
+                    original_text=req.text,
+                )
+                print(f"[Pipeline] Whisper alignment completed (multi-material)")
+            except Exception as e:
+                logger.warning(f"Whisper alignment failed: {e}")
+                captions_data = None
+                captions_path = None
+
+            _update_task(task_id, progress=15, message="正在分配素材...")
+
+            if captions_data and captions_data.get("segments"):
+                assignments = _split_equal(captions_data["segments"], material_paths)
+            else:
+                # Whisper 失败 → 按时长均分(不依赖字符对齐)
+                logger.warning("[MultiMat] Whisper 无数据,按时长均分")
+                audio_dur = video._get_duration(str(audio_path))
+                if audio_dur <= 0:
+                    audio_dur = 30.0 # 安全兜底
+                seg_dur = audio_dur / len(material_paths)
+                assignments = [
+                    {"material_path": material_paths[i], "start": i * seg_dur,
+                     "end": (i + 1) * seg_dur, "index": i}
+                    for i in range(len(material_paths))
+                ]
+
         else:
             # 原有逻辑:Whisper → _split_equal
             _update_task(task_id, message="正在生成字幕 (Whisper)...")
@@ -310,6 +377,7 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
                 text=req.text,
                 output_path=str(captions_path),
                 language=_locale_to_whisper_lang(req.language),
+                original_text=req.text,
             )
             print(f"[Pipeline] Whisper alignment completed (multi-material)")
         except Exception as e:
@@ -348,43 +416,82 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:

         lipsync_start = time.time()

-        # ── 第一步:下载所有素材并检测分辨率 ──
+        # ── 第一步:并行下载所有素材并检测分辨率 ──
         material_locals: List[Path] = []
         resolutions = []

-        for i, assignment in enumerate(assignments):
+        async def _download_and_normalize(i: int, assignment: dict):
+            """下载单个素材并归一化方向"""
             material_local = temp_dir / f"{task_id}_material_{i}.mp4"
             temp_files.append(material_local)
             await _download_material(assignment["material_path"], material_local)
-            material_locals.append(material_local)
-            resolutions.append(video.get_resolution(str(material_local)))

-        # 分辨率不一致时,统一到第一个素材的分辨率
-        base_res = resolutions[0] if resolutions else (0, 0)
-        need_scale = any(r != base_res for r in resolutions) and base_res[0] > 0
+            normalized_material = temp_dir / f"{task_id}_material_{i}_norm.mp4"
+            loop = asyncio.get_event_loop()
+            normalized_result = await loop.run_in_executor(
+                None,
+                video.normalize_orientation,
+                str(material_local),
+                str(normalized_material),
+            )
+            if normalized_result != str(material_local):
+                temp_files.append(normalized_material)
+                material_local = normalized_material
+
+            res = video.get_resolution(str(material_local))
+            return material_local, res
+
+        download_tasks = [
+            _download_and_normalize(i, assignment)
+            for i, assignment in enumerate(assignments)
+        ]
+        download_results = await asyncio.gather(*download_tasks)
+        for local, res in download_results:
+            material_locals.append(local)
+            resolutions.append(res)
+
+        # 按用户选择的画面比例统一分辨率
+        base_res = target_resolution
+        need_scale = any(r != base_res for r in resolutions)
         if need_scale:
             logger.info(f"[MultiMat] 素材分辨率不一致,统一到 {base_res[0]}x{base_res[1]}")

-        # ── 第二步:裁剪每段素材到对应时长 ──
-        prepared_segments: List[Path] = []
-
-        for i, assignment in enumerate(assignments):
-            seg_progress = 15 + int((i / num_segments) * 30)  # 15% → 45%
+        # ── 第二步:并行裁剪每段素材到对应时长 ──
+        prepared_segments: List[Path] = [None] * num_segments
+
+        async def _prepare_one_segment(i: int, assignment: dict):
+            """将单个素材裁剪/循环到对应时长"""
             seg_dur = assignment["end"] - assignment["start"]
-            _update_task(
-                task_id,
-                progress=seg_progress,
-                message=f"正在准备素材 {i+1}/{num_segments}..."
-            )

             prepared_path = temp_dir / f"{task_id}_prepared_{i}.mp4"
             temp_files.append(prepared_path)
-            video.prepare_segment(
-                str(material_locals[i]), seg_dur, str(prepared_path),
-                target_resolution=base_res if need_scale else None,
-                source_start=assignment.get("source_start", 0.0),
+            loop = asyncio.get_event_loop()
+            await loop.run_in_executor(
+                None,
+                video.prepare_segment,
+                str(material_locals[i]),
+                seg_dur,
+                str(prepared_path),
+                base_res,
+                assignment.get("source_start", 0.0),
+                assignment.get("source_end"),
+                25,
             )
-            prepared_segments.append(prepared_path)
+            return i, prepared_path
+
+        _update_task(
+            task_id,
+            progress=15,
+            message=f"正在并行准备 {num_segments} 个素材片段..."
+        )
+
+        prepare_tasks = [
+            _prepare_one_segment(i, assignment)
+            for i, assignment in enumerate(assignments)
+        ]
+        prepare_results = await asyncio.gather(*prepare_tasks)
+        for i, path in prepare_results:
+            prepared_segments[i] = path

         # ── 第二步:拼接所有素材片段 ──
         _update_task(task_id, progress=50, message="正在拼接素材片段...")
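The concurrency pattern introduced above — wrapping blocking ffmpeg-style calls in `loop.run_in_executor` and fanning them out with `asyncio.gather`, then restoring order by index — can be demonstrated with a dummy blocking function standing in for `video.prepare_segment`:

```python
import asyncio
import time

def blocking_work(i: int) -> str:
    # Stand-in for a blocking subprocess call such as an ffmpeg trim.
    time.sleep(0.2)
    return f"segment_{i}"

async def prepare_all(n: int):
    loop = asyncio.get_event_loop()

    async def _one(i: int):
        # Offload the blocking call to the default thread pool.
        result = await loop.run_in_executor(None, blocking_work, i)
        return i, result

    results = await asyncio.gather(*[_one(i) for i in range(n)])
    # gather preserves submission order, but returning the index makes the
    # reassembly explicit, as in the pipeline above.
    ordered = [None] * n
    for i, r in results:
        ordered[i] = r
    return ordered

start = time.perf_counter()
segments = asyncio.run(prepare_all(4))
elapsed = time.perf_counter() - start
print(segments, f"{elapsed:.2f}s")
```

With four 0.2 s sleeps overlapping in the thread pool, wall time stays close to 0.2 s instead of 0.8 s, which is the same win the pipeline gets when several segments are trimmed at once.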
@@ -392,7 +499,8 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
         temp_files.append(concat_path)
         video.concat_videos(
             [str(p) for p in prepared_segments],
-            str(concat_path)
+            str(concat_path),
+            target_fps=25,
         )

         # ── 第三步:一次 LatentSync 推理 ──
@@ -425,23 +533,31 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
         # 单素材流水线(原有逻辑)
         # ══════════════════════════════════════

-        # 单素材 + source_start:先截取片段
+        if input_material_path is None:
+            raise RuntimeError("单素材流程缺少输入素材")
+
+        # 单素材:按用户选择画面比例统一到目标分辨率,并应用 source_start
         single_source_start = 0.0
+        single_source_end = None
         if req.custom_assignments and len(req.custom_assignments) == 1:
             single_source_start = req.custom_assignments[0].source_start
+            single_source_end = req.custom_assignments[0].source_end

-        if single_source_start > 0:
-            _update_task(task_id, progress=20, message="正在截取素材片段...")
-            audio_dur = video._get_duration(str(audio_path))
-            if audio_dur <= 0:
-                audio_dur = 30.0
-            trimmed_path = temp_dir / f"{task_id}_trimmed.mp4"
-            temp_files.append(trimmed_path)
-            video.prepare_segment(
-                str(input_material_path), audio_dur, str(trimmed_path),
-                source_start=single_source_start,
-            )
-            input_material_path = trimmed_path
+        _update_task(task_id, progress=20, message="正在准备素材片段...")
+        audio_dur = video._get_duration(str(audio_path))
+        if audio_dur <= 0:
+            audio_dur = 30.0
+        prepared_single_path = temp_dir / f"{task_id}_prepared_single.mp4"
+        temp_files.append(prepared_single_path)
+        video.prepare_segment(
+            str(input_material_path),
+            audio_dur,
+            str(prepared_single_path),
+            target_resolution=target_resolution,
+            source_start=single_source_start,
+            source_end=single_source_end,
+        )
+        input_material_path = prepared_single_path

         _update_task(task_id, progress=25)
         _update_task(task_id, message="正在合成唇形 (LatentSync)...", progress=30)
@@ -463,58 +579,100 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
         print(f"[Pipeline] LipSync completed in {lipsync_time:.1f}s")
         _update_task(task_id, progress=80)

-        # 单素材模式:Whisper 在 LatentSync 之后
-        if req.enable_subtitles:
+        # 单素材模式:Whisper 延迟到下方与 BGM 并行执行
+        if not req.enable_subtitles:
+            captions_path = None
+
+    _update_task(task_id, progress=85)
+
+    # ── Whisper 字幕 + BGM 混音 并行(两者都只依赖 audio_path)──
+    final_audio_path = audio_path
+    _whisper_task = None
+    _bgm_task = None
+
+    # 单素材模式下 Whisper 尚未执行,这里与 BGM 并行启动
+    need_whisper = not is_multi and req.enable_subtitles and captions_path is None
+    if need_whisper:
+        captions_path = temp_dir / f"{task_id}_captions.json"
+        temp_files.append(captions_path)
+        _captions_path_str = str(captions_path)
+
+        async def _run_whisper():
             _update_task(task_id, message="正在生成字幕 (Whisper)...", progress=82)

-            captions_path = temp_dir / f"{task_id}_captions.json"
-            temp_files.append(captions_path)
-
             try:
                 await whisper_service.align(
                     audio_path=str(audio_path),
                     text=req.text,
-                    output_path=str(captions_path),
+                    output_path=_captions_path_str,
                     language=_locale_to_whisper_lang(req.language),
+                    original_text=req.text,
                 )
                 print(f"[Pipeline] Whisper alignment completed")
+                return True
             except Exception as e:
                 logger.warning(f"Whisper alignment failed, skipping subtitles: {e}")
-                captions_path = None
+                return False

-        _update_task(task_id, progress=85)
+        _whisper_task = _run_whisper()

-        final_audio_path = audio_path
     if req.bgm_id:
-        _update_task(task_id, message="正在合成背景音乐...", progress=86)
-
         bgm_path = resolve_bgm_path(req.bgm_id)
         if bgm_path:
             mix_output_path = temp_dir / f"{task_id}_audio_mix.wav"
             temp_files.append(mix_output_path)
             volume = req.bgm_volume if req.bgm_volume is not None else 0.2
             volume = max(0.0, min(float(volume), 1.0))
-            try:
+            _mix_output = str(mix_output_path)
|
||||||
video.mix_audio(
|
_bgm_path = str(bgm_path)
|
||||||
voice_path=str(audio_path),
|
_voice_path = str(audio_path)
|
||||||
bgm_path=str(bgm_path),
|
_volume = volume
|
||||||
output_path=str(mix_output_path),
|
|
||||||
bgm_volume=volume
|
async def _run_bgm():
|
||||||
)
|
_update_task(task_id, message="正在合成背景音乐...", progress=86)
|
||||||
final_audio_path = mix_output_path
|
loop = asyncio.get_event_loop()
|
||||||
except Exception as e:
|
try:
|
||||||
logger.warning(f"BGM mix failed, fallback to voice only: {e}")
|
await loop.run_in_executor(
|
||||||
|
None,
|
||||||
|
video.mix_audio,
|
||||||
|
_voice_path,
|
||||||
|
_bgm_path,
|
||||||
|
_mix_output,
|
||||||
|
_volume,
|
||||||
|
)
|
||||||
|
return True
|
||||||
|
except Exception as e:
|
||||||
|
logger.warning(f"BGM mix failed, fallback to voice only: {e}")
|
||||||
|
return False
|
||||||
|
|
||||||
|
_bgm_task = _run_bgm()
|
||||||
else:
|
else:
|
||||||
logger.warning(f"BGM not found: {req.bgm_id}")
|
logger.warning(f"BGM not found: {req.bgm_id}")
|
||||||
|
|
||||||
use_remotion = (captions_path and captions_path.exists()) or req.title
|
# 并行等待 Whisper + BGM
|
||||||
|
parallel_tasks = [t for t in (_whisper_task, _bgm_task) if t is not None]
|
||||||
|
if parallel_tasks:
|
||||||
|
results = await asyncio.gather(*parallel_tasks)
|
||||||
|
result_idx = 0
|
||||||
|
if _whisper_task is not None:
|
||||||
|
if not results[result_idx]:
|
||||||
|
captions_path = None
|
||||||
|
result_idx += 1
|
||||||
|
if _bgm_task is not None:
|
||||||
|
if results[result_idx]:
|
||||||
|
final_audio_path = mix_output_path
|
||||||
|
|
||||||
|
|
||||||
|
use_remotion = (captions_path and captions_path.exists()) or req.title or req.secondary_title
|
||||||
|
|
||||||
subtitle_style = None
|
subtitle_style = None
|
||||||
title_style = None
|
title_style = None
|
||||||
|
secondary_title_style = None
|
||||||
if req.enable_subtitles:
|
if req.enable_subtitles:
|
||||||
subtitle_style = get_style("subtitle", req.subtitle_style_id) or get_default_style("subtitle")
|
subtitle_style = get_style("subtitle", req.subtitle_style_id) or get_default_style("subtitle")
|
||||||
if req.title:
|
if req.title:
|
||||||
title_style = get_style("title", req.title_style_id) or get_default_style("title")
|
title_style = get_style("title", req.title_style_id) or get_default_style("title")
|
||||||
|
if req.secondary_title:
|
||||||
|
secondary_title_style = get_style("title", req.secondary_title_style_id) or get_default_style("title")
|
||||||
|
|
||||||
if req.subtitle_font_size and req.enable_subtitles:
|
if req.subtitle_font_size and req.enable_subtitles:
|
||||||
if subtitle_style is None:
|
if subtitle_style is None:
|
||||||
@@ -536,6 +694,16 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
|
|||||||
subtitle_style = {}
|
subtitle_style = {}
|
||||||
subtitle_style["bottom_margin"] = int(req.subtitle_bottom_margin)
|
subtitle_style["bottom_margin"] = int(req.subtitle_bottom_margin)
|
||||||
|
|
||||||
|
if req.secondary_title_font_size and req.secondary_title:
|
||||||
|
if secondary_title_style is None:
|
||||||
|
secondary_title_style = {}
|
||||||
|
secondary_title_style["font_size"] = int(req.secondary_title_font_size)
|
||||||
|
|
||||||
|
if req.secondary_title_top_margin is not None and req.secondary_title:
|
||||||
|
if secondary_title_style is None:
|
||||||
|
secondary_title_style = {}
|
||||||
|
secondary_title_style["top_margin"] = int(req.secondary_title_top_margin)
|
||||||
|
|
||||||
if use_remotion:
|
if use_remotion:
|
||||||
subtitle_style = prepare_style_for_remotion(
|
subtitle_style = prepare_style_for_remotion(
|
||||||
subtitle_style,
|
subtitle_style,
|
||||||
@@ -547,6 +715,11 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
|
|||||||
temp_dir,
|
temp_dir,
|
||||||
f"{task_id}_title_font"
|
f"{task_id}_title_font"
|
||||||
)
|
)
|
||||||
|
secondary_title_style = prepare_style_for_remotion(
|
||||||
|
secondary_title_style,
|
||||||
|
temp_dir,
|
||||||
|
f"{task_id}_secondary_title_font"
|
||||||
|
)
|
||||||
|
|
||||||
final_output_local_path = temp_dir / f"{task_id}_output.mp4"
|
final_output_local_path = temp_dir / f"{task_id}_output.mp4"
|
||||||
temp_files.append(final_output_local_path)
|
temp_files.append(final_output_local_path)
|
||||||
@@ -566,16 +739,26 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
|
|||||||
mapped = 87 + int(percent * 0.08)
|
mapped = 87 + int(percent * 0.08)
|
||||||
_update_task(task_id, progress=mapped)
|
_update_task(task_id, progress=mapped)
|
||||||
|
|
||||||
|
title_display_mode = (
|
||||||
|
req.title_display_mode
|
||||||
|
if req.title_display_mode in ("short", "persistent")
|
||||||
|
else "short"
|
||||||
|
)
|
||||||
|
title_duration = max(0.5, min(float(req.title_duration or 4.0), 30.0))
|
||||||
|
|
||||||
await remotion_service.render(
|
await remotion_service.render(
|
||||||
video_path=str(composed_video_path),
|
video_path=str(composed_video_path),
|
||||||
output_path=str(final_output_local_path),
|
output_path=str(final_output_local_path),
|
||||||
captions_path=str(captions_path) if captions_path else None,
|
captions_path=str(captions_path) if captions_path else None,
|
||||||
title=req.title,
|
title=req.title,
|
||||||
title_duration=3.0,
|
title_duration=title_duration,
|
||||||
|
title_display_mode=title_display_mode,
|
||||||
fps=25,
|
fps=25,
|
||||||
enable_subtitles=req.enable_subtitles,
|
enable_subtitles=req.enable_subtitles,
|
||||||
subtitle_style=subtitle_style,
|
subtitle_style=subtitle_style,
|
||||||
title_style=title_style,
|
title_style=title_style,
|
||||||
|
secondary_title=req.secondary_title,
|
||||||
|
secondary_title_style=secondary_title_style,
|
||||||
on_progress=on_remotion_progress
|
on_progress=on_remotion_progress
|
||||||
)
|
)
|
||||||
print(f"[Pipeline] Remotion render completed")
|
print(f"[Pipeline] Remotion render completed")
|
||||||
|
|||||||
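The core change in this hunk is running the Whisper and BGM coroutines concurrently and unpacking their boolean results positionally. A minimal sketch of that pattern (names here are illustrative stand-ins, not the project's API):

```python
import asyncio

async def main() -> tuple[bool, bool]:
    async def run_whisper() -> bool:
        await asyncio.sleep(0)  # stands in for whisper_service.align(...)
        return True

    async def run_bgm() -> bool:
        await asyncio.sleep(0)  # stands in for video.mix_audio in an executor
        return True

    # Gather both coroutines concurrently; results come back in submit order.
    tasks = [t for t in (run_whisper(), run_bgm()) if t is not None]
    results = await asyncio.gather(*tasks)
    return results[0], results[1]

whisper_ok, bgm_ok = asyncio.run(main())
```

Because both tasks depend only on `audio_path`, neither waits for the other; the positional `result_idx` bookkeeping in the diff handles the case where one of the two tasks was never started.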
34
backend/app/repositories/orders.py
Normal file
@@ -0,0 +1,34 @@
+"""
+订单数据访问层
+"""
+from datetime import datetime, timezone
+from typing import Any, Dict, Optional, cast
+
+from app.core.supabase import get_supabase
+
+
+def create_order(user_id: str, out_trade_no: str, amount: float) -> Dict[str, Any]:
+    supabase = get_supabase()
+    result = supabase.table("orders").insert({
+        "user_id": user_id,
+        "out_trade_no": out_trade_no,
+        "amount": amount,
+        "status": "pending",
+    }).execute()
+    return cast(Dict[str, Any], (result.data or [{}])[0])
+
+
+def get_order_by_trade_no(out_trade_no: str) -> Optional[Dict[str, Any]]:
+    supabase = get_supabase()
+    result = supabase.table("orders").select("*").eq("out_trade_no", out_trade_no).single().execute()
+    return cast(Optional[Dict[str, Any]], result.data or None)
+
+
+def update_order_status(out_trade_no: str, status: str, trade_no: str | None = None) -> None:
+    supabase = get_supabase()
+    payload: Dict[str, Any] = {"status": status}
+    if trade_no:
+        payload["trade_no"] = trade_no
+    if status == "paid":
+        payload["paid_at"] = datetime.now(timezone.utc).isoformat()
+    supabase.table("orders").update(payload).eq("out_trade_no", out_trade_no).execute()
@@ -1,3 +1,4 @@
+from datetime import datetime, timezone
 from typing import Any, Dict, List, Optional, cast
 
 from app.core.supabase import get_supabase
@@ -37,3 +38,33 @@ def update_user(user_id: str, payload: Dict[str, Any]) -> List[Dict[str, Any]]:
     supabase = get_supabase()
     result = supabase.table("users").update(payload).eq("id", user_id).execute()
     return cast(List[Dict[str, Any]], result.data or [])
+
+
+def _parse_expires_at(expires_at: Any) -> Optional[datetime]:
+    try:
+        expires_at_dt = datetime.fromisoformat(str(expires_at).replace("Z", "+00:00"))
+    except Exception:
+        return None
+
+    if expires_at_dt.tzinfo is None:
+        expires_at_dt = expires_at_dt.replace(tzinfo=timezone.utc)
+    return expires_at_dt.astimezone(timezone.utc)
+
+
+def deactivate_user_if_expired(user: Dict[str, Any]) -> bool:
+    expires_at = user.get("expires_at")
+    if not expires_at:
+        return False
+
+    expires_at_dt = _parse_expires_at(expires_at)
+    if not expires_at_dt:
+        return False
+
+    if datetime.now(timezone.utc) <= expires_at_dt:
+        return False
+
+    user_id = user.get("id")
+    if user.get("is_active") and user_id:
+        update_user(cast(str, user_id), {"is_active": False})
+
+    return True
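The expiry parsing added here normalizes three timestamp shapes: ISO-8601 with a trailing `Z`, aware offsets, and naive values (assumed UTC). A standalone copy of the same logic, runnable outside the repository module:

```python
from datetime import datetime, timezone
from typing import Any, Optional

def parse_expires_at(expires_at: Any) -> Optional[datetime]:
    # Same logic as the diff's _parse_expires_at: accept ISO-8601 with a
    # trailing "Z", assume UTC for naive values, return None when unparsable.
    try:
        dt = datetime.fromisoformat(str(expires_at).replace("Z", "+00:00"))
    except Exception:
        return None
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone.utc)

assert parse_expires_at("2026-01-01T00:00:00Z") == datetime(2026, 1, 1, tzinfo=timezone.utc)
assert parse_expires_at("not-a-date") is None
```

Returning `None` (rather than raising) lets `deactivate_user_if_expired` treat an unparsable expiry as "not expired" and fail open instead of locking users out on bad data.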
@@ -35,18 +35,19 @@ class GLMService:
         Returns:
             {"title": "标题", "tags": ["标签1", "标签2", ...]}
         """
-        prompt = f"""根据以下口播文案,生成一个吸引人的短视频标题和3个相关标签。
+        prompt = f"""根据以下口播文案,生成一个吸引人的短视频标题、副标题和3个相关标签。
 
 口播文案:
 {text}
 
 要求:
 1. 标题要简洁有力,能吸引观众点击,不超过10个字
-2. 标签要与内容相关,便于搜索和推荐,只要3个
-3. 标题和标签必须使用与口播文案相同的语言(如文案是英文就用英文,日文就用日文)
+2. 副标题是对标题的补充说明或描述性文字,不超过20个字
+3. 标签要与内容相关,便于搜索和推荐,只要3个
+4. 标题、副标题和标签必须使用与口播文案相同的语言(如文案是英文就用英文,日文就用日文)
 
 请严格按以下JSON格式返回(不要包含其他内容):
-{{"title": "标题", "tags": ["标签1", "标签2", "标签3"]}}"""
+{{"title": "标题", "secondary_title": "副标题", "tags": ["标签1", "标签2", "标签3"]}}"""
 
         try:
             client = self._get_client()
@@ -75,17 +76,24 @@ class GLMService:
             logger.error(f"GLM service error: {e}")
             raise Exception(f"AI 生成失败: {str(e)}")
 
-    async def rewrite_script(self, text: str) -> str:
+    async def rewrite_script(self, text: str, custom_prompt: str = None) -> str:
         """
-        AI 洗稿(文案改写)
+        AI 改写文案
 
         Args:
             text: 原始文案
+            custom_prompt: 自定义提示词,为空则使用默认提示词
 
         Returns:
             改写后的文案
         """
-        prompt = f"""请将以下视频文案进行改写。
+        if custom_prompt and custom_prompt.strip():
+            prompt = f"""{custom_prompt.strip()}
+
+原始文案:
+{text}"""
+        else:
+            prompt = f"""请将以下视频文案进行改写。
 
 原始文案:
 {text}
@@ -174,6 +182,8 @@ class GLMService:
 
         # 尝试提取 JSON 块
         json_match = re.search(r'\{[^{}]*"title"[^{}]*"tags"[^{}]*\}', content, re.DOTALL)
+        if not json_match:
+            json_match = re.search(r'\{[^{}]*"title"[^{}]*"secondary_title"[^{}]*"tags"[^{}]*\}', content, re.DOTALL)
         if json_match:
             try:
                 return json.loads(json_match.group())
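The JSON-extraction change above adds a second regex as a fallback. A self-contained sketch of the same two-stage extraction (function name is illustrative):

```python
import json
import re

def extract_json_block(content: str):
    # Mirrors the diff: first try the title+tags pattern, then a pattern
    # that also requires secondary_title between them.
    m = re.search(r'\{[^{}]*"title"[^{}]*"tags"[^{}]*\}', content, re.DOTALL)
    if not m:
        m = re.search(r'\{[^{}]*"title"[^{}]*"secondary_title"[^{}]*"tags"[^{}]*\}', content, re.DOTALL)
    if m:
        try:
            return json.loads(m.group())
        except json.JSONDecodeError:
            return None
    return None

out = extract_json_block('前缀噪声 {"title": "标题", "secondary_title": "副标题", "tags": ["a", "b", "c"]} 后缀')
```

Note that `[^{}]*` between `"title"` and `"tags"` already spans an intervening `"secondary_title"` key, so the first pattern usually matches both shapes; the fallback only fires on responses where the first pattern fails.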
@@ -1,7 +1,7 @@
 """
 唇形同步服务
-通过 subprocess 调用 LatentSync conda 环境进行推理
-配置为使用 GPU1 (CUDA:1)
+混合方案: 短视频用 LatentSync (高质量), 长视频用 MuseTalk (高速度)
+路由阈值: LIPSYNC_DURATION_THRESHOLD (默认 120s)
 """
 import os
 import shutil
@@ -17,15 +17,18 @@ from app.core.config import settings
 
 
 class LipSyncService:
-    """唇形同步服务 - LatentSync 1.6 集成 (Subprocess 方式)"""
+    """唇形同步服务 - LatentSync 1.6 + MuseTalk 1.5 混合方案"""
 
     def __init__(self):
         self.use_local = settings.LATENTSYNC_LOCAL
         self.api_url = settings.LATENTSYNC_API_URL
         self.latentsync_dir = settings.LATENTSYNC_DIR
         self.gpu_id = settings.LATENTSYNC_GPU_ID
         self.use_server = settings.LATENTSYNC_USE_SERVER
+
+        # MuseTalk 配置
+        self.musetalk_api_url = settings.MUSETALK_API_URL
 
         # GPU 并发锁 (Serial Queue)
         self._lock = asyncio.Lock()
 
@@ -103,7 +106,7 @@ class LipSyncService:
             "-t", str(target_duration),  # 截取到目标时长
             "-c:v", "libx264",
             "-preset", "fast",
-            "-crf", "18",
+            "-crf", "23",
             "-an",  # 去掉原音频
             output_path
         ]
@@ -268,6 +271,18 @@ class LipSyncService:
         else:
             actual_video_path = video_path
 
+        # 混合路由: 长视频走 MuseTalk,短视频走 LatentSync
+        if audio_duration and audio_duration >= settings.LIPSYNC_DURATION_THRESHOLD:
+            logger.info(
+                f"🔄 音频 {audio_duration:.1f}s >= {settings.LIPSYNC_DURATION_THRESHOLD}s,路由到 MuseTalk"
+            )
+            musetalk_result = await self._call_musetalk_server(
+                actual_video_path, audio_path, output_path
+            )
+            if musetalk_result:
+                return musetalk_result
+            logger.warning("⚠️ MuseTalk 不可用,回退到 LatentSync(长视频,会较慢)")
+
         if self.use_server:
             # 模式 A: 调用常驻服务 (加速模式)
             return await self._call_persistent_server(actual_video_path, audio_path, output_path)
@@ -352,6 +367,55 @@ class LipSyncService:
             shutil.copy(video_path, output_path)
             return output_path
 
+    async def _call_musetalk_server(
+        self, video_path: str, audio_path: str, output_path: str
+    ) -> Optional[str]:
+        """
+        调用 MuseTalk 常驻服务。
+        成功返回 output_path,不可用返回 None(信号上层回退到 LatentSync)。
+        """
+        server_url = self.musetalk_api_url
+        logger.info(f"⚡ 调用 MuseTalk 服务: {server_url}")
+
+        try:
+            async with httpx.AsyncClient(timeout=3600.0) as client:
+                # 健康检查
+                try:
+                    resp = await client.get(f"{server_url}/health", timeout=5.0)
+                    if resp.status_code != 200:
+                        logger.warning("⚠️ MuseTalk 健康检查失败")
+                        return None
+                    health = resp.json()
+                    if not health.get("model_loaded"):
+                        logger.warning("⚠️ MuseTalk 模型未加载")
+                        return None
+                except Exception:
+                    logger.warning("⚠️ 无法连接 MuseTalk 服务")
+                    return None
+
+                # 发送推理请求
+                payload = {
+                    "video_path": str(Path(video_path).resolve()),
+                    "audio_path": str(Path(audio_path).resolve()),
+                    "video_out_path": str(Path(output_path).resolve()),
+                    "batch_size": settings.MUSETALK_BATCH_SIZE,
+                }
+
+                response = await client.post(f"{server_url}/lipsync", json=payload)
+
+                if response.status_code == 200:
+                    result = response.json()
+                    if Path(result["output_path"]).exists():
+                        logger.info(f"✅ MuseTalk 推理完成: {output_path}")
+                        return output_path
+
+                logger.error(f"❌ MuseTalk 服务报错: {response.text}")
+                return None
+
+        except Exception as e:
+            logger.error(f"❌ MuseTalk 调用失败: {e}")
+            return None
+
     async def _call_persistent_server(self, video_path: str, audio_path: str, output_path: str) -> str:
         """调用本地常驻服务 (server.py)"""
         server_url = "http://localhost:8007"
@@ -369,7 +433,7 @@ class LipSyncService:
         }
 
         try:
-            async with httpx.AsyncClient(timeout=1200.0) as client:
+            async with httpx.AsyncClient(timeout=3600.0) as client:
                 # 先检查健康状态
                 try:
                     resp = await client.get(f"{server_url}/health", timeout=5.0)
@@ -477,8 +541,18 @@ class LipSyncService:
         except:
             pass
 
+        # 检查 MuseTalk 服务
+        musetalk_ready = False
+        try:
+            async with httpx.AsyncClient(timeout=5.0) as client:
+                resp = await client.get(f"{self.musetalk_api_url}/health")
+                if resp.status_code == 200:
+                    musetalk_ready = resp.json().get("model_loaded", False)
+        except Exception:
+            pass
+
         return {
-            "model": "LatentSync 1.6",
+            "model": "LatentSync 1.6 + MuseTalk 1.5",
             "conda_env": conda_ok,
             "weights": weights_ok,
             "gpu": gpu_ok,
@@ -486,5 +560,7 @@ class LipSyncService:
             "gpu_id": self.gpu_id,
             "inference_steps": settings.LATENTSYNC_INFERENCE_STEPS,
             "guidance_scale": settings.LATENTSYNC_GUIDANCE_SCALE,
-            "ready": conda_ok and weights_ok and gpu_ok
+            "ready": conda_ok and weights_ok and gpu_ok,
+            "musetalk_ready": musetalk_ready,
+            "lipsync_threshold": settings.LIPSYNC_DURATION_THRESHOLD,
         }
@@ -7,6 +7,7 @@ import asyncio
 import json
 import os
 import subprocess
+from collections.abc import Callable
 from pathlib import Path
 from typing import Optional
 from loguru import logger
@@ -29,12 +30,15 @@ class RemotionService:
         output_path: str,
         captions_path: Optional[str] = None,
         title: Optional[str] = None,
-        title_duration: float = 3.0,
+        title_duration: float = 4.0,
+        title_display_mode: str = "short",
         fps: int = 25,
         enable_subtitles: bool = True,
         subtitle_style: Optional[dict] = None,
         title_style: Optional[dict] = None,
-        on_progress: Optional[callable] = None
+        secondary_title: Optional[str] = None,
+        secondary_title_style: Optional[dict] = None,
+        on_progress: Optional[Callable[[int], None]] = None
     ) -> str:
         """
         使用 Remotion 渲染视频(添加字幕和标题)
@@ -45,6 +49,7 @@
             captions_path: 字幕 JSON 文件路径(Whisper 生成)
             title: 视频标题(可选)
             title_duration: 标题显示时长(秒)
+            title_display_mode: 标题显示模式(short/persistent)
             fps: 帧率
             enable_subtitles: 是否启用字幕
             on_progress: 进度回调函数
@@ -75,6 +80,7 @@
         if title:
             cmd.extend(["--title", title])
             cmd.extend(["--titleDuration", str(title_duration)])
+            cmd.extend(["--titleDisplayMode", title_display_mode])
 
         if subtitle_style:
             cmd.extend(["--subtitleStyle", json.dumps(subtitle_style, ensure_ascii=False)])
@@ -82,6 +88,12 @@
         if title_style:
             cmd.extend(["--titleStyle", json.dumps(title_style, ensure_ascii=False)])
 
+        if secondary_title:
+            cmd.extend(["--secondaryTitle", secondary_title])
+
+        if secondary_title_style:
+            cmd.extend(["--secondaryTitleStyle", json.dumps(secondary_title_style, ensure_ascii=False)])
+
         logger.info(f"Running Remotion render: {' '.join(cmd)}")
 
         # 在线程池中运行子进程
@@ -95,8 +107,12 @@
             bufsize=1
         )
 
+        if process.stdout is None:
+            raise RuntimeError("Remotion process stdout is unavailable")
+        stdout = process.stdout
+
         output_lines = []
-        for line in iter(process.stdout.readline, ''):
+        for line in iter(stdout.readline, ''):
             line = line.strip()
             if line:
                 output_lines.append(line)
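The render method builds the Remotion CLI invocation by conditionally appending paired flags, with style dicts serialized as JSON. A reduced sketch of that assembly (helper name and flag subset are illustrative; the flag spellings follow the diff):

```python
import json

def build_render_args(title=None, title_display_mode="short",
                      secondary_title=None, secondary_title_style=None):
    # Optional values append paired CLI arguments; dicts travel as JSON
    # so non-ASCII style text survives the subprocess boundary.
    cmd = []
    if title:
        cmd.extend(["--title", title])
        cmd.extend(["--titleDisplayMode", title_display_mode])
    if secondary_title:
        cmd.extend(["--secondaryTitle", secondary_title])
    if secondary_title_style:
        cmd.extend(["--secondaryTitleStyle",
                    json.dumps(secondary_title_style, ensure_ascii=False)])
    return cmd

args = build_render_args(title="主标题", secondary_title="副标题")
```

Passing the argument list to `subprocess.Popen` without `shell=True` means no quoting is needed even for JSON values containing spaces or quotes.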
@@ -1,282 +1,405 @@
 """
 视频合成服务
 """
 import os
 import subprocess
 import json
 import shlex
 from pathlib import Path
 from loguru import logger
 from typing import Optional
 
 
 class VideoService:
     def __init__(self):
         pass
 
+    def get_video_metadata(self, file_path: str) -> dict:
+        """获取视频元信息(含旋转角与有效显示分辨率)"""
+        cmd = [
+            "ffprobe", "-v", "error",
+            "-select_streams", "v:0",
+            "-show_entries", "stream=width,height:stream_side_data=rotation",
+            "-of", "json",
+            file_path,
+        ]
+        default_info = {
+            "width": 0,
+            "height": 0,
+            "rotation": 0,
+            "effective_width": 0,
+            "effective_height": 0,
+        }
+
+        try:
+            result = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
+            if result.returncode != 0:
+                return default_info
+
+            payload = json.loads(result.stdout or "{}")
+            streams = payload.get("streams") or []
+            if not streams:
+                return default_info
+
+            stream = streams[0]
+            width = int(stream.get("width") or 0)
+            height = int(stream.get("height") or 0)
+
+            rotation = 0
+            for side_data in stream.get("side_data_list") or []:
+                if not isinstance(side_data, dict):
+                    continue
+                raw_rotation = side_data.get("rotation")
+                if raw_rotation is None:
+                    continue
+                try:
+                    rotation = int(round(float(str(raw_rotation))))
+                except Exception:
+                    rotation = 0
+                break
+
+            norm_rotation = rotation % 360
+            if norm_rotation > 180:
+                norm_rotation -= 360
+            swap_wh = abs(norm_rotation) == 90
+
+            effective_width = height if swap_wh else width
+            effective_height = width if swap_wh else height
+
+            return {
+                "width": width,
+                "height": height,
+                "rotation": norm_rotation,
+                "effective_width": effective_width,
+                "effective_height": effective_height,
+            }
+        except Exception as e:
+            logger.warning(f"获取视频元信息失败: {e}")
+            return default_info
+
+    def normalize_orientation(self, video_path: str, output_path: str) -> str:
+        """将带旋转元数据的视频转为物理方向,避免后续流程忽略 rotation。"""
+        info = self.get_video_metadata(video_path)
+        rotation = int(info.get("rotation") or 0)
+        if rotation == 0:
+            return video_path
+
+        Path(output_path).parent.mkdir(parents=True, exist_ok=True)
+        logger.info(
+            f"检测到旋转元数据 rotation={rotation},归一化方向: "
+            f"{info.get('effective_width', 0)}x{info.get('effective_height', 0)}"
+        )
+
+        cmd = [
+            "ffmpeg", "-y",
+            "-i", video_path,
+            "-map", "0:v:0",
+            "-map", "0:a?",
+            "-c:v", "libx264",
+            "-preset", "fast",
+            "-crf", "23",
+            "-c:a", "copy",
+            "-movflags", "+faststart",
+            output_path,
+        ]
+
+        if self._run_ffmpeg(cmd):
+            normalized = self.get_video_metadata(output_path)
+            logger.info(
+                "视频方向归一化完成: "
+                f"coded={normalized.get('width', 0)}x{normalized.get('height', 0)}, "
+                f"rotation={normalized.get('rotation', 0)}"
+            )
+            return output_path
+
+        logger.warning("视频方向归一化失败,回退使用原视频")
+        return video_path
+
     def _run_ffmpeg(self, cmd: list) -> bool:
         cmd_str = ' '.join(shlex.quote(str(c)) for c in cmd)
         logger.debug(f"FFmpeg CMD: {cmd_str}")
         try:
             # Synchronous call for BackgroundTasks compatibility
             result = subprocess.run(
                 cmd,
                 shell=False,
                 capture_output=True,
                 text=True,
                 encoding='utf-8',
             )
             if result.returncode != 0:
                 logger.error(f"FFmpeg Error: {result.stderr}")
                 return False
             return True
         except Exception as e:
             logger.error(f"FFmpeg Exception: {e}")
             return False
 
     def _get_duration(self, file_path: str) -> float:
         # Synchronous call for BackgroundTasks compatibility
         # 使用参数列表形式避免 shell=True 的命令注入风险
         cmd = [
             'ffprobe', '-v', 'error',
             '-show_entries', 'format=duration',
             '-of', 'default=noprint_wrappers=1:nokey=1',
             file_path
         ]
         try:
             result = subprocess.run(
                 cmd,
                 capture_output=True,
                 text=True,
             )
             return float(result.stdout.strip())
         except Exception:
             return 0.0
 
     def mix_audio(
         self,
         voice_path: str,
         bgm_path: str,
         output_path: str,
         bgm_volume: float = 0.2
     ) -> str:
         """混合人声与背景音乐"""
         Path(output_path).parent.mkdir(parents=True, exist_ok=True)
 
         volume = max(0.0, min(float(bgm_volume), 1.0))
         filter_complex = (
             f"[0:a]volume=1.0[a0];"
             f"[1:a]volume={volume}[a1];"
             f"[a0][a1]amix=inputs=2:duration=first:dropout_transition=2:normalize=0[aout]"
         )
 
         cmd = [
             "ffmpeg", "-y",
             "-i", voice_path,
             "-stream_loop", "-1", "-i", bgm_path,
             "-filter_complex", filter_complex,
             "-map", "[aout]",
             "-c:a", "pcm_s16le",
             "-shortest",
             output_path,
         ]
 
         if self._run_ffmpeg(cmd):
             return output_path
         raise RuntimeError("FFmpeg audio mix failed")
 
     async def compose(
         self,
         video_path: str,
         audio_path: str,
         output_path: str,
         subtitle_path: Optional[str] = None
     ) -> str:
         """合成视频"""
         # Ensure output dir
         Path(output_path).parent.mkdir(parents=True, exist_ok=True)
 
         video_duration = self._get_duration(video_path)
         audio_duration = self._get_duration(audio_path)
 
         # Audio loop if needed
         loop_count = 1
         if audio_duration > video_duration and video_duration > 0:
             loop_count = int(audio_duration / video_duration) + 1
 
         cmd = ["ffmpeg", "-y"]
 
         # Input video (stream_loop must be before -i)
         if loop_count > 1:
             cmd.extend(["-stream_loop", str(loop_count)])
         cmd.extend(["-i", video_path])
 
         # Input audio
         cmd.extend(["-i", audio_path])
 
         # Filter complex
         filter_complex = []
 
         # Subtitles (skip for now to mimic previous state or implement basic)
         # Previous state: subtitles disabled due to font issues
         # if subtitle_path: ...
 
         # Audio map with high quality encoding
         cmd.extend([
             "-c:v", "libx264",
             "-preset", "slow",  # 慢速预设,更好的压缩效率
             "-crf", "18",  # 高质量(与 LatentSync 一致)
             "-c:a", "aac",
             "-b:a", "192k",  # 音频比特率
             "-shortest"
         ])
         # Use audio from input 1
         cmd.extend(["-map", "0:v", "-map", "1:a"])
 
         cmd.append(output_path)
 
         if self._run_ffmpeg(cmd):
             return output_path
         else:
             raise RuntimeError("FFmpeg composition failed")
 
     def concat_videos(self, video_paths: list, output_path: str) -> str:
         """使用 FFmpeg concat demuxer 拼接多个视频片段"""
         if not video_paths:
             raise ValueError("No video segments to concat")
 
         Path(output_path).parent.mkdir(parents=True, exist_ok=True)
 
         # 生成 concat list 文件
         list_path = Path(output_path).parent / f"{Path(output_path).stem}_concat.txt"
         with open(list_path, "w", encoding="utf-8") as f:
             for vp in video_paths:
                 f.write(f"file '{vp}'\n")
 
         cmd = [
             "ffmpeg", "-y",
             "-f", "concat",
             "-safe", "0",
             "-i", str(list_path),
             "-c", "copy",
             output_path,
         ]
 
         try:
             if self._run_ffmpeg(cmd):
                 return output_path
             else:
                 raise RuntimeError("FFmpeg concat failed")
         finally:
             try:
                 list_path.unlink(missing_ok=True)
             except Exception:
                 pass
 
     def split_audio(self, audio_path: str, start: float, end: float, output_path: str) -> str:
         """用 FFmpeg 按时间范围切分音频"""
         Path(output_path).parent.mkdir(parents=True, exist_ok=True)
 
         duration = end - start
|
||||||
if duration <= 0:
|
"-shortest",
|
||||||
raise ValueError(f"Invalid audio split range: start={start}, end={end}, duration={duration}")
|
output_path,
|
||||||
|
]
|
||||||
cmd = [
|
|
||||||
"ffmpeg", "-y",
|
if self._run_ffmpeg(cmd):
|
||||||
"-ss", str(start),
|
return output_path
|
||||||
"-t", str(duration),
|
raise RuntimeError("FFmpeg audio mix failed")
|
||||||
"-i", audio_path,
|
|
||||||
"-c", "copy",
|
async def compose(
|
||||||
output_path,
|
self,
|
||||||
]
|
video_path: str,
|
||||||
|
audio_path: str,
|
||||||
if self._run_ffmpeg(cmd):
|
output_path: str,
|
||||||
return output_path
|
subtitle_path: Optional[str] = None
|
||||||
raise RuntimeError(f"FFmpeg audio split failed: {start}-{end}")
|
) -> str:
|
||||||
|
"""合成视频"""
|
||||||
def get_resolution(self, file_path: str) -> tuple:
|
# Ensure output dir
|
||||||
"""获取视频分辨率,返回 (width, height)"""
|
Path(output_path).parent.mkdir(parents=True, exist_ok=True)
|
||||||
cmd = [
|
|
||||||
'ffprobe', '-v', 'error',
|
video_duration = self._get_duration(video_path)
|
||||||
'-select_streams', 'v:0',
|
audio_duration = self._get_duration(audio_path)
|
||||||
'-show_entries', 'stream=width,height',
|
|
||||||
'-of', 'csv=p=0',
|
# Audio loop if needed
|
||||||
file_path
|
loop_count = 1
|
||||||
]
|
if audio_duration > video_duration and video_duration > 0:
|
||||||
try:
|
loop_count = int(audio_duration / video_duration) + 1
|
||||||
result = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
|
|
||||||
parts = result.stdout.strip().split(',')
|
cmd = ["ffmpeg", "-y"]
|
||||||
return (int(parts[0]), int(parts[1]))
|
|
||||||
except Exception:
|
# Input video (stream_loop must be before -i)
|
||||||
return (0, 0)
|
if loop_count > 1:
|
||||||
|
cmd.extend(["-stream_loop", str(loop_count)])
|
||||||
def prepare_segment(self, video_path: str, target_duration: float, output_path: str,
|
cmd.extend(["-i", video_path])
|
||||||
target_resolution: tuple = None, source_start: float = 0.0) -> str:
|
|
||||||
"""将素材视频裁剪或循环到指定时长(无音频)。
|
# Input audio
|
||||||
target_resolution: (width, height) 如需统一分辨率则传入,否则保持原分辨率。
|
cmd.extend(["-i", audio_path])
|
||||||
source_start: 源视频截取起点(秒),默认 0。
|
|
||||||
"""
|
# Filter complex
|
||||||
Path(output_path).parent.mkdir(parents=True, exist_ok=True)
|
filter_complex = []
|
||||||
|
|
||||||
video_dur = self._get_duration(video_path)
|
# Subtitles (skip for now to mimic previous state or implement basic)
|
||||||
if video_dur <= 0:
|
# Previous state: subtitles disabled due to font issues
|
||||||
video_dur = target_duration
|
# if subtitle_path: ...
|
||||||
|
|
||||||
# 可用时长 = 从 source_start 到视频结尾
|
# Audio map with high quality encoding
|
||||||
available = max(video_dur - source_start, 0.1)
|
cmd.extend([
|
||||||
needs_loop = target_duration > available
|
"-c:v", "libx264",
|
||||||
needs_scale = target_resolution is not None
|
"-preset", "medium", # 平衡速度与压缩效率
|
||||||
|
"-crf", "20", # 最终输出:高质量(肉眼无损)
|
||||||
# 当需要循环且有 source_start 时,先裁剪出片段,再循环裁剪后的文件
|
"-c:a", "aac",
|
||||||
# 避免 stream_loop 循环整个视频(而不是从 source_start 开始的片段)
|
"-b:a", "192k", # 音频比特率
|
||||||
actual_input = video_path
|
"-shortest"
|
||||||
trim_temp = None
|
])
|
||||||
if needs_loop and source_start > 0:
|
# Use audio from input 1
|
||||||
trim_temp = str(Path(output_path).parent / (Path(output_path).stem + "_trim_tmp.mp4"))
|
cmd.extend(["-map", "0:v", "-map", "1:a"])
|
||||||
trim_cmd = [
|
|
||||||
"ffmpeg", "-y",
|
cmd.append(output_path)
|
||||||
"-ss", str(source_start),
|
|
||||||
"-i", video_path,
|
if self._run_ffmpeg(cmd):
|
||||||
"-t", str(available),
|
return output_path
|
||||||
"-an",
|
else:
|
||||||
"-c:v", "libx264", "-preset", "fast", "-crf", "18",
|
raise RuntimeError("FFmpeg composition failed")
|
||||||
trim_temp,
|
|
||||||
]
|
def concat_videos(self, video_paths: list, output_path: str, target_fps: int = 25) -> str:
|
||||||
if not self._run_ffmpeg(trim_cmd):
|
"""使用 FFmpeg concat demuxer 拼接多个视频片段"""
|
||||||
raise RuntimeError(f"FFmpeg trim for loop failed: {video_path}")
|
if not video_paths:
|
||||||
actual_input = trim_temp
|
raise ValueError("No video segments to concat")
|
||||||
source_start = 0.0 # 已裁剪,不需要再 seek
|
|
||||||
# 重新计算循环次数(基于裁剪后文件)
|
Path(output_path).parent.mkdir(parents=True, exist_ok=True)
|
||||||
available = self._get_duration(trim_temp) or available
|
|
||||||
|
# 生成 concat list 文件
|
||||||
loop_count = int(target_duration / available) + 1 if needs_loop else 0
|
list_path = Path(output_path).parent / f"{Path(output_path).stem}_concat.txt"
|
||||||
|
with open(list_path, "w", encoding="utf-8") as f:
|
||||||
cmd = ["ffmpeg", "-y"]
|
for vp in video_paths:
|
||||||
if needs_loop:
|
f.write(f"file '{vp}'\n")
|
||||||
cmd.extend(["-stream_loop", str(loop_count)])
|
|
||||||
if source_start > 0:
|
cmd = [
|
||||||
cmd.extend(["-ss", str(source_start)])
|
"ffmpeg", "-y",
|
||||||
cmd.extend(["-i", actual_input, "-t", str(target_duration), "-an"])
|
"-f", "concat",
|
||||||
|
"-safe", "0",
|
||||||
if needs_scale:
|
"-fflags", "+genpts",
|
||||||
w, h = target_resolution
|
"-i", str(list_path),
|
||||||
cmd.extend(["-vf", f"scale={w}:{h}:force_original_aspect_ratio=decrease,pad={w}:{h}:(ow-iw)/2:(oh-ih)/2"])
|
"-an",
|
||||||
|
"-vsync", "cfr",
|
||||||
# 需要循环、缩放或指定起点时必须重编码,否则用 stream copy 保持原画质
|
"-r", str(target_fps),
|
||||||
if needs_loop or needs_scale or source_start > 0:
|
"-c:v", "libx264",
|
||||||
cmd.extend(["-c:v", "libx264", "-preset", "fast", "-crf", "18"])
|
"-preset", "fast",
|
||||||
else:
|
"-crf", "23",
|
||||||
cmd.extend(["-c:v", "copy"])
|
"-pix_fmt", "yuv420p",
|
||||||
|
"-movflags", "+faststart",
|
||||||
cmd.append(output_path)
|
output_path,
|
||||||
|
]
|
||||||
try:
|
|
||||||
if self._run_ffmpeg(cmd):
|
try:
|
||||||
return output_path
|
if self._run_ffmpeg(cmd):
|
||||||
raise RuntimeError(f"FFmpeg prepare_segment failed: {video_path}")
|
return output_path
|
||||||
finally:
|
else:
|
||||||
# 清理裁剪临时文件
|
raise RuntimeError("FFmpeg concat failed")
|
||||||
if trim_temp:
|
finally:
|
||||||
try:
|
try:
|
||||||
Path(trim_temp).unlink(missing_ok=True)
|
list_path.unlink(missing_ok=True)
|
||||||
except Exception:
|
except Exception:
|
||||||
pass
|
pass
|
||||||
|
|
||||||
|
def split_audio(self, audio_path: str, start: float, end: float, output_path: str) -> str:
|
||||||
|
"""用 FFmpeg 按时间范围切分音频"""
|
||||||
|
Path(output_path).parent.mkdir(parents=True, exist_ok=True)
|
||||||
|
|
||||||
|
duration = end - start
|
||||||
|
if duration <= 0:
|
||||||
|
raise ValueError(f"Invalid audio split range: start={start}, end={end}, duration={duration}")
|
||||||
|
|
||||||
|
cmd = [
|
||||||
|
"ffmpeg", "-y",
|
||||||
|
"-ss", str(start),
|
||||||
|
"-t", str(duration),
|
||||||
|
"-i", audio_path,
|
||||||
|
"-c", "copy",
|
||||||
|
output_path,
|
||||||
|
]
|
||||||
|
|
||||||
|
if self._run_ffmpeg(cmd):
|
||||||
|
return output_path
|
||||||
|
raise RuntimeError(f"FFmpeg audio split failed: {start}-{end}")
|
||||||
|
|
||||||
|
def get_resolution(self, file_path: str) -> tuple[int, int]:
|
||||||
|
"""获取视频有效显示分辨率(考虑旋转元数据)。"""
|
||||||
|
info = self.get_video_metadata(file_path)
|
||||||
|
return (
|
||||||
|
int(info.get("effective_width") or 0),
|
||||||
|
int(info.get("effective_height") or 0),
|
||||||
|
)
|
||||||
|
|
||||||
|
def prepare_segment(self, video_path: str, target_duration: float, output_path: str,
|
||||||
|
target_resolution: Optional[tuple] = None, source_start: float = 0.0,
|
||||||
|
source_end: Optional[float] = None, target_fps: Optional[int] = None) -> str:
|
||||||
|
"""将素材视频裁剪或循环到指定时长(无音频)。
|
||||||
|
target_resolution: (width, height) 如需统一分辨率则传入,否则保持原分辨率。
|
||||||
|
source_start: 源视频截取起点(秒),默认 0。
|
||||||
|
source_end: 源视频截取终点(秒),默认到素材结尾。
|
||||||
|
target_fps: 输出帧率(可选),用于多素材拼接前统一时间基。
|
||||||
|
"""
|
||||||
|
Path(output_path).parent.mkdir(parents=True, exist_ok=True)
|
||||||
|
|
||||||
|
video_dur = self._get_duration(video_path)
|
||||||
|
if video_dur <= 0:
|
||||||
|
video_dur = target_duration
|
||||||
|
|
||||||
|
clip_end = video_dur
|
||||||
|
if source_end is not None:
|
||||||
|
try:
|
||||||
|
source_end_value = float(source_end)
|
||||||
|
if source_end_value > source_start:
|
||||||
|
clip_end = min(source_end_value, video_dur)
|
||||||
|
except Exception:
|
||||||
|
pass
|
||||||
|
|
||||||
|
# 可用时长 = 从 source_start 到视频结尾
|
||||||
|
available = max(clip_end - source_start, 0.1)
|
||||||
|
needs_loop = target_duration > available
|
||||||
|
needs_scale = target_resolution is not None
|
||||||
|
needs_fps = bool(target_fps and target_fps > 0)
|
||||||
|
has_source_end = clip_end < video_dur
|
||||||
|
|
||||||
|
# 当需要循环且存在截取范围时,先裁剪出片段,再循环裁剪后的文件
|
||||||
|
# 避免 stream_loop 循环整个视频(而不是截取后的片段)
|
||||||
|
actual_input = video_path
|
||||||
|
trim_temp = None
|
||||||
|
if needs_loop and (source_start > 0 or has_source_end):
|
||||||
|
trim_temp = str(Path(output_path).parent / (Path(output_path).stem + "_trim_tmp.mp4"))
|
||||||
|
trim_cmd = [
|
||||||
|
"ffmpeg", "-y",
|
||||||
|
"-ss", str(source_start),
|
||||||
|
"-i", video_path,
|
||||||
|
"-t", str(available),
|
||||||
|
"-an",
|
||||||
|
"-c:v", "libx264", "-preset", "fast", "-crf", "23",
|
||||||
|
trim_temp,
|
||||||
|
]
|
||||||
|
if not self._run_ffmpeg(trim_cmd):
|
||||||
|
raise RuntimeError(f"FFmpeg trim for loop failed: {video_path}")
|
||||||
|
actual_input = trim_temp
|
||||||
|
source_start = 0.0 # 已裁剪,不需要再 seek
|
||||||
|
# 重新计算循环次数(基于裁剪后文件)
|
||||||
|
available = self._get_duration(trim_temp) or available
|
||||||
|
|
||||||
|
loop_count = int(target_duration / available) + 1 if needs_loop else 0
|
||||||
|
|
||||||
|
cmd = ["ffmpeg", "-y"]
|
||||||
|
if needs_loop:
|
||||||
|
cmd.extend(["-stream_loop", str(loop_count)])
|
||||||
|
if source_start > 0:
|
||||||
|
cmd.extend(["-ss", str(source_start)])
|
||||||
|
cmd.extend(["-i", actual_input, "-t", str(target_duration), "-an"])
|
||||||
|
|
||||||
|
filters = []
|
||||||
|
if needs_fps:
|
||||||
|
filters.append(f"fps={int(target_fps)}")
|
||||||
|
if needs_scale:
|
||||||
|
w, h = target_resolution
|
||||||
|
filters.append(f"scale={w}:{h}:force_original_aspect_ratio=decrease,pad={w}:{h}:(ow-iw)/2:(oh-ih)/2")
|
||||||
|
|
||||||
|
if filters:
|
||||||
|
cmd.extend(["-vf", ",".join(filters)])
|
||||||
|
if needs_fps:
|
||||||
|
cmd.extend(["-vsync", "cfr", "-r", str(int(target_fps))])
|
||||||
|
|
||||||
|
# 需要循环、缩放或指定起点时必须重编码,否则用 stream copy 保持原画质
|
||||||
|
if needs_loop or needs_scale or source_start > 0 or has_source_end or needs_fps:
|
||||||
|
cmd.extend(["-c:v", "libx264", "-preset", "fast", "-crf", "23"])
|
||||||
|
else:
|
||||||
|
cmd.extend(["-c:v", "copy"])
|
||||||
|
|
||||||
|
cmd.append(output_path)
|
||||||
|
|
||||||
|
try:
|
||||||
|
if self._run_ffmpeg(cmd):
|
||||||
|
return output_path
|
||||||
|
raise RuntimeError(f"FFmpeg prepare_segment failed: {video_path}")
|
||||||
|
finally:
|
||||||
|
# 清理裁剪临时文件
|
||||||
|
if trim_temp:
|
||||||
|
try:
|
||||||
|
Path(trim_temp).unlink(missing_ok=True)
|
||||||
|
except Exception:
|
||||||
|
pass
|
||||||
|
|||||||
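The `prepare_segment` changes above derive a `-stream_loop` count from the clipped "available" duration, looping one extra time and letting `-t` trim the overshoot. A standalone sketch of that arithmetic (no FFmpeg involved; `plan_segment` is a hypothetical helper that only mirrors the variable names in the diff):

```python
from typing import Optional


def plan_segment(video_dur: float, target_duration: float,
                 source_start: float = 0.0,
                 source_end: Optional[float] = None):
    """Return (available, needs_loop, loop_count) for a material clip."""
    clip_end = video_dur
    if source_end is not None and source_end > source_start:
        clip_end = min(source_end, video_dur)
    # usable duration runs from source_start to clip_end, floored at 0.1s
    available = max(clip_end - source_start, 0.1)
    needs_loop = target_duration > available
    # one extra loop guarantees coverage; -t trims the overshoot afterwards
    loop_count = int(target_duration / available) + 1 if needs_loop else 0
    return available, needs_loop, loop_count
```

For example, a 10 s clip looped to fill 25 s needs `loop_count = 3`, while a 2–5 s sub-range stretched to 4 s needs `loop_count = 2`.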
**Voice clone service:**

```diff
@@ -1,37 +1,104 @@
 """
 声音克隆服务
-通过 HTTP 调用 Qwen3-TTS 独立服务 (端口 8009)
+通过 HTTP 调用 CosyVoice 3.0 独立服务 (端口 8010)
 """
-import httpx
 import asyncio
 from pathlib import Path
 from typing import Optional
 
+import httpx
 from loguru import logger
 
-from app.core.config import settings
-
-# Qwen3-TTS 服务地址
-QWEN_TTS_URL = "http://localhost:8009"
+# CosyVoice 3.0 服务地址
+VOICE_CLONE_URL = "http://localhost:8010"
 
 
 class VoiceCloneService:
-    """声音克隆服务 - 调用 Qwen3-TTS HTTP API"""
+    """声音克隆服务 - 调用 CosyVoice 3.0 HTTP API"""
 
     def __init__(self):
-        self.base_url = QWEN_TTS_URL
+        self.base_url = VOICE_CLONE_URL
         # 健康状态缓存
         self._health_cache: Optional[dict] = None
         self._health_cache_time: float = 0
         # GPU 并发锁 (Serial Queue)
         self._lock = asyncio.Lock()
 
+    async def _generate_once(
+        self,
+        *,
+        text: str,
+        ref_audio_data: bytes,
+        ref_text: str,
+        language: str,
+        speed: float = 1.0,
+        max_retries: int = 4,
+    ) -> bytes:
+        timeout = httpx.Timeout(240.0)
+
+        for attempt in range(max_retries):
+            try:
+                async with httpx.AsyncClient(timeout=timeout) as client:
+                    response = await client.post(
+                        f"{self.base_url}/generate",
+                        files={"ref_audio": ("ref.wav", ref_audio_data, "audio/wav")},
+                        data={
+                            "text": text,
+                            "ref_text": ref_text,
+                            "language": language,
+                            "speed": str(speed),
+                        },
+                    )
+
+                retryable = False
+                reason = ""
+
+                if response.status_code in (429, 502, 503, 504):
+                    retryable = True
+                    reason = f"HTTP {response.status_code}"
+                elif response.status_code == 500 and (
+                    "生成超时" in response.text or "timeout" in response.text.lower()
+                ):
+                    retryable = True
+                    reason = "upstream timeout"
+
+                if retryable and attempt < max_retries - 1:
+                    wait = 8 * (attempt + 1)
+                    logger.warning(
+                        f"Voice clone retryable error ({reason}), retrying in {wait}s "
+                        f"(attempt {attempt + 1}/{max_retries})"
+                    )
+                    await asyncio.sleep(wait)
+                    continue
+
+                response.raise_for_status()
+                return response.content
+
+            except httpx.HTTPStatusError as e:
+                logger.error(f"Voice clone API error: {e.response.status_code} - {e.response.text}")
+                raise RuntimeError(f"声音克隆服务错误: {e.response.text}")
+            except httpx.RequestError as e:
+                if attempt < max_retries - 1:
+                    wait = 6 * (attempt + 1)
+                    logger.warning(
+                        f"Voice clone connection error: {e}; retrying in {wait}s "
+                        f"(attempt {attempt + 1}/{max_retries})"
+                    )
+                    await asyncio.sleep(wait)
+                    continue
+                logger.error(f"Voice clone connection error: {e}")
+                raise RuntimeError("无法连接声音克隆服务,请检查服务是否启动")
+
+        raise RuntimeError("声音克隆服务繁忙,请稍后重试")
+
     async def generate_audio(
         self,
         text: str,
         ref_audio_path: str,
         ref_text: str,
         output_path: str,
-        language: str = "Chinese"
+        language: str = "Chinese",
+        speed: float = 1.0,
     ) -> str:
         """
         使用声音克隆生成语音
@@ -51,60 +118,49 @@ class VoiceCloneService:
         logger.info(f"🎤 Voice Clone: {text[:30]}... (language={language})")
         Path(output_path).parent.mkdir(parents=True, exist_ok=True)
 
-        # 读取参考音频
+        text = text.strip()
+        if not text:
+            raise RuntimeError("文本为空,无法生成语音")
+
         with open(ref_audio_path, "rb") as f:
             ref_audio_data = f.read()
 
-        # 调用 Qwen3-TTS 服务
-        timeout = httpx.Timeout(300.0)  # 5分钟超时
-        async with httpx.AsyncClient(timeout=timeout) as client:
-            try:
-                response = await client.post(
-                    f"{self.base_url}/generate",
-                    files={"ref_audio": ("ref.wav", ref_audio_data, "audio/wav")},
-                    data={
-                        "text": text,
-                        "ref_text": ref_text,
-                        "language": language
-                    }
-                )
-                response.raise_for_status()
-
-                # 保存返回的音频
-                with open(output_path, "wb") as f:
-                    f.write(response.content)
-
-                logger.info(f"✅ Voice clone saved: {output_path}")
-                return output_path
-
-            except httpx.HTTPStatusError as e:
-                logger.error(f"Qwen3-TTS API error: {e.response.status_code} - {e.response.text}")
-                raise RuntimeError(f"声音克隆服务错误: {e.response.text}")
-            except httpx.RequestError as e:
-                logger.error(f"Qwen3-TTS connection error: {e}")
-                raise RuntimeError("无法连接声音克隆服务,请检查服务是否启动")
+        # CosyVoice 内部自带 text_normalize 分段,无需客户端切分
+        audio_bytes = await self._generate_once(
+            text=text,
+            ref_audio_data=ref_audio_data,
+            ref_text=ref_text,
+            language=language,
+            speed=speed,
+        )
+        with open(output_path, "wb") as f:
+            f.write(audio_bytes)
+        logger.info(f"✅ Voice clone saved: {output_path}")
+        return output_path
 
     async def check_health(self) -> dict:
         """健康检查"""
         import time
 
-        # 5分钟缓存
+        # 30秒缓存
        now = time.time()
-        if self._health_cache and (now - self._health_cache_time) < 300:
-            return self._health_cache
+        cached = self._health_cache
+        if cached is not None and (now - self._health_cache_time) < 30:
+            return cached
 
         try:
             async with httpx.AsyncClient(timeout=5.0) as client:
                 response = await client.get(f"{self.base_url}/health")
                 response.raise_for_status()
-                self._health_cache = response.json()
+                payload = response.json()
+                self._health_cache = payload
                 self._health_cache_time = now
-                return self._health_cache
+                return payload
         except Exception as e:
-            logger.warning(f"Qwen3-TTS health check failed: {e}")
+            logger.warning(f"Voice clone health check failed: {e}")
             return {
-                "service": "Qwen3-TTS Voice Clone",
+                "service": "CosyVoice 3.0 Voice Clone",
-                "model": "0.6B-Base",
+                "model": "unknown",
                 "ready": False,
                 "gpu_id": 0,
                 "error": str(e)
```
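The `_generate_once` retry loop above backs off linearly on retryable statuses (429/502/503/504 and upstream timeouts): 8 s, 16 s, 24 s before each subsequent attempt, and the last attempt is never retried. A minimal sketch of that schedule (`retry_waits` is a hypothetical helper, not part of the service code):

```python
# Statuses the diff above treats as transient and worth retrying.
RETRYABLE_STATUSES = {429, 502, 503, 504}


def retry_waits(max_retries: int = 4, base: int = 8) -> list:
    """Seconds to sleep before each retry; the final attempt is terminal."""
    # attempt indices 0 .. max_retries-2 retry; wait = base * (attempt + 1)
    return [base * (attempt + 1) for attempt in range(max_retries - 1)]
```

With the defaults from the diff this yields `[8, 16, 24]`, i.e. roughly a minute of total patience before giving up.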
**Whisper subtitle service:**

```diff
@@ -39,12 +39,22 @@ def split_word_to_chars(word: str, start: float, end: float) -> list:
 
     tokens = []
     ascii_buffer = ""
+    pending_space = False  # 记录是否有待处理的空格(用于英文单词间距)
 
     for char in word:
         if not char.strip():
+            # 空格:flush ascii_buffer,标记下一个 token 需要前导空格
+            if ascii_buffer:
+                tokens.append(ascii_buffer)
+                ascii_buffer = ""
+            if tokens:  # 仅在已有 token 时标记(避免开头重复空格)
+                pending_space = True
             continue
 
         if char.isascii() and char.isalnum():
+            if pending_space and not ascii_buffer:
+                ascii_buffer = " "  # 将空格前置到新英文单词
+                pending_space = False
             ascii_buffer += char
             continue
 
@@ -52,7 +62,9 @@ def split_word_to_chars(word: str, start: float, end: float) -> list:
             tokens.append(ascii_buffer)
             ascii_buffer = ""
 
-        tokens.append(char)
+        prefix = " " if pending_space else ""
+        pending_space = False
+        tokens.append(prefix + char)
 
     if ascii_buffer:
         tokens.append(ascii_buffer)
@@ -175,6 +187,7 @@ class WhisperService:
         text: str,
         output_path: Optional[str] = None,
         language: str = "zh",
+        original_text: Optional[str] = None,
     ) -> dict:
         """
         对音频进行转录,生成字级别时间戳
@@ -184,6 +197,8 @@ class WhisperService:
             text: 原始文本(用于参考,但实际使用 whisper 转录结果)
             output_path: 可选,输出 JSON 文件路径
             language: 语言代码 (zh/en 等)
+            original_text: 原始文案。非空时,Whisper 仅用于检测总时间范围,
+                字幕文字用此原文替换(解决语言不匹配问题)
 
         Returns:
             包含字级别时间戳的字典
@@ -208,16 +223,19 @@ class WhisperService:
 
         logger.info(f"Detected language: {info.language} (prob: {info.language_probability:.2f})")
 
+        # 收集 Whisper 转录结果(始终需要,用于获取时间范围)
         all_segments = []
+        whisper_first_start = None
+        whisper_last_end = None
         for segment in segments_iter:
-            # 提取每个字的时间戳,并拆分成单字
             all_words = []
             if segment.words:
                 for word_info in segment.words:
                     word_text = word_info.word
                     if word_text.strip():
-                        # 将词拆分成单字,时间戳线性插值
-                        # 保留前导空格用于英文词间距
+                        if whisper_first_start is None:
+                            whisper_first_start = word_info.start
+                        whisper_last_end = word_info.end
                         chars = split_word_to_chars(
                             word_text,
                             word_info.start,
@@ -225,11 +243,72 @@ class WhisperService:
                         )
                         all_words.extend(chars)
 
-            # 将长段落按标点和字数拆分成多行
             if all_words:
                 line_segments = split_segment_to_lines(all_words, max_chars)
                 all_segments.extend(line_segments)
 
+        # 如果提供了 original_text,用原文替换 Whisper 转录文字,保留语音节奏
+        if original_text and original_text.strip() and whisper_first_start is not None:
+            # 收集 Whisper 逐字时间戳(保留真实语音节奏)
+            whisper_chars = []
+            for seg in all_segments:
+                whisper_chars.extend(seg.get("words", []))
+
+            # 用原文字符 + Whisper 节奏生成新的时间戳
+            orig_chars = split_word_to_chars(
+                original_text.strip(),
+                whisper_first_start,
+                whisper_last_end
+            )
+
+            if orig_chars and len(whisper_chars) >= 2:
+                # 将原文字符按比例映射到 Whisper 的时间节奏上
+                n_w = len(whisper_chars)
+                n_o = len(orig_chars)
+                w_starts = [c["start"] for c in whisper_chars]
+                w_final_end = whisper_chars[-1]["end"]
+
+                logger.info(
+                    f"Using original_text for subtitles (len={len(original_text)}), "
+                    f"rhythm-mapping {n_o} orig chars onto {n_w} Whisper chars, "
+                    f"time range: {whisper_first_start:.2f}-{whisper_last_end:.2f}s"
+                )
+
+                remapped = []
+                for i, oc in enumerate(orig_chars):
+                    # 原文第 i 个字符对应 Whisper 时间线的位置
+                    pos = (i / n_o) * n_w
+                    idx = min(int(pos), n_w - 1)
+                    frac = pos - idx
+                    t_start = (
+                        w_starts[idx] + frac * (w_starts[idx + 1] - w_starts[idx])
+                        if idx < n_w - 1
+                        else w_starts[idx] + frac * (w_final_end - w_starts[idx])
+                    )
+
+                    # 结束时间 = 下一个字符的开始时间
+                    pos_next = ((i + 1) / n_o) * n_w
+                    idx_n = min(int(pos_next), n_w - 1)
+                    frac_n = pos_next - idx_n
+                    t_end = (
+                        w_starts[idx_n] + frac_n * (w_starts[idx_n + 1] - w_starts[idx_n])
+                        if idx_n < n_w - 1
+                        else w_starts[idx_n] + frac_n * (w_final_end - w_starts[idx_n])
+                    )
+
+                    remapped.append({
+                        "word": oc["word"],
+                        "start": round(t_start, 3),
+                        "end": round(t_end, 3),
+                    })
+
+                all_segments = split_segment_to_lines(remapped, max_chars)
+                logger.info(f"Rebuilt {len(all_segments)} subtitle segments (rhythm-mapped)")
+            elif orig_chars:
+                # Whisper 字符不足,退回线性插值
+                all_segments = split_segment_to_lines(orig_chars, max_chars)
+                logger.info(f"Rebuilt {len(all_segments)} subtitle segments (linear fallback)")
+
         logger.info(f"Generated {len(all_segments)} subtitle segments")
         return {"segments": all_segments}
 
@@ -247,12 +326,13 @@ class WhisperService:
 
         return result
 
-    async def transcribe(self, audio_path: str) -> str:
+    async def transcribe(self, audio_path: str, language: str | None = None) -> str:
         """
         仅转录文本(用于提取文案)
 
         Args:
             audio_path: 音频/视频文件路径
+            language: 语言代码,None 表示自动检测
 
         Returns:
             纯文本内容
@@ -266,7 +346,7 @@ class WhisperService:
         # 转录 (无需字级时间戳)
         segments_iter, _ = model.transcribe(
             audio_path,
-            language="zh",
+            language=language,
             word_timestamps=False,
             vad_filter=True,
         )
```
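The rhythm-mapping hunk above projects original-text characters onto the Whisper per-character timeline by proportional position, interpolating between neighbouring Whisper char start times so the replaced subtitles keep the real speech cadence. A condensed sketch of the same interpolation (`remap_chars` is a hypothetical restatement with plain character lists instead of the per-word dicts used in the service):

```python
def remap_chars(orig_chars, w_starts, w_final_end):
    """Map each original char to a (start, end) on the Whisper timeline."""
    n_w, n_o = len(w_starts), len(orig_chars)

    def t_at(pos: float) -> float:
        # interpolate between Whisper char starts; past the last start,
        # interpolate toward the final end time
        idx = min(int(pos), n_w - 1)
        frac = pos - idx
        nxt = w_starts[idx + 1] if idx < n_w - 1 else w_final_end
        return w_starts[idx] + frac * (nxt - w_starts[idx])

    out = []
    for i, ch in enumerate(orig_chars):
        start = t_at(i / n_o * n_w)          # position of char i on the timeline
        end = t_at((i + 1) / n_o * n_w)      # end = start of the next char
        out.append({"word": ch, "start": round(start, 3), "end": round(end, 3)})
    return out
```

With a uniform Whisper rhythm (starts `[0.0, 1.0]`, final end `2.0`) four original characters land evenly at 0.5 s intervals; with uneven Whisper starts the original text inherits the same unevenness.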
**Subtitle style presets:**

```diff
@@ -54,5 +54,61 @@
     "letter_spacing": 1,
     "bottom_margin": 72,
     "is_default": false
+  },
+  {
+    "id": "subtitle_pink",
+    "label": "少女粉",
+    "font_file": "DingTalk JinBuTi.ttf",
+    "font_family": "DingTalkJinBuTi",
+    "font_size": 56,
+    "highlight_color": "#FF69B4",
+    "normal_color": "#FFFFFF",
+    "stroke_color": "#1A0010",
+    "stroke_size": 3,
+    "letter_spacing": 2,
+    "bottom_margin": 80,
+    "is_default": false
+  },
+  {
+    "id": "subtitle_lime",
+    "label": "清新绿",
+    "font_file": "DingTalk Sans.ttf",
+    "font_family": "DingTalkSans",
+    "font_size": 50,
+    "highlight_color": "#76FF03",
+    "normal_color": "#FFFFFF",
+    "stroke_color": "#001A00",
+    "stroke_size": 3,
+    "letter_spacing": 1,
+    "bottom_margin": 78,
+    "is_default": false
+  },
+  {
+    "id": "subtitle_gold",
+    "label": "金色隶书",
+    "font_file": "阿里妈妈刀隶体.ttf",
+    "font_family": "AliMamaDaoLiTi",
+    "font_size": 56,
+    "highlight_color": "#FDE68A",
+    "normal_color": "#E8D5B0",
+    "stroke_color": "#2B1B00",
+    "stroke_size": 3,
+    "letter_spacing": 3,
+    "bottom_margin": 80,
+    "is_default": false
+  },
+  {
+    "id": "subtitle_kai",
+    "label": "楷体红字",
+    "font_file": "simkai.ttf",
+    "font_family": "SimKai",
+    "font_size": 54,
+    "highlight_color": "#FF4444",
+    "normal_color": "#FFFFFF",
+    "stroke_color": "#000000",
+    "stroke_size": 3,
+    "letter_spacing": 2,
+    "bottom_margin": 80,
+    "is_default": false
   }
 ]
```
**Title style presets:**

```diff
@@ -7,7 +7,7 @@
     "font_size": 90,
     "color": "#FFFFFF",
     "stroke_color": "#000000",
-    "stroke_size": 8,
+    "stroke_size": 5,
     "letter_spacing": 5,
     "top_margin": 62,
     "font_weight": 900,
@@ -21,7 +21,7 @@
     "font_size": 72,
     "color": "#FFFFFF",
     "stroke_color": "#000000",
-    "stroke_size": 8,
+    "stroke_size": 5,
     "letter_spacing": 4,
     "top_margin": 60,
     "font_weight": 900,
@@ -35,7 +35,7 @@
     "font_size": 70,
     "color": "#FDE68A",
     "stroke_color": "#2B1B00",
-    "stroke_size": 8,
+    "stroke_size": 5,
     "letter_spacing": 3,
     "top_margin": 58,
     "font_weight": 800,
@@ -49,10 +49,122 @@
     "font_size": 72,
     "color": "#FFFFFF",
     "stroke_color": "#1F0A00",
-    "stroke_size": 8,
+    "stroke_size": 5,
     "letter_spacing": 4,
     "top_margin": 60,
     "font_weight": 900,
     "is_default": false
+  },
+  {
+    "id": "title_pangmen",
+    "label": "庞门正道",
+    "font_file": "title/庞门正道标题体3.0.ttf",
+    "font_family": "PangMenZhengDao",
+    "font_size": 80,
+    "color": "#FFFFFF",
+    "stroke_color": "#000000",
+    "stroke_size": 5,
+    "letter_spacing": 5,
+    "top_margin": 60,
+    "font_weight": 900,
+    "is_default": false
+  },
+  {
+    "id": "title_round",
+    "label": "优设标题圆",
+    "font_file": "title/优设标题圆.otf",
+    "font_family": "YouSheBiaoTiYuan",
+    "font_size": 78,
+    "color": "#FFFFFF",
+    "stroke_color": "#4A1A6B",
+    "stroke_size": 5,
+    "letter_spacing": 4,
+    "top_margin": 60,
+    "font_weight": 900,
+    "is_default": false
+  },
+  {
+    "id": "title_alibaba",
+    "label": "阿里数黑体",
+    "font_file": "title/阿里巴巴数黑体.ttf",
+    "font_family": "AlibabaShuHeiTi",
+    "font_size": 72,
+    "color": "#FFFFFF",
+    "stroke_color": "#000000",
+    "stroke_size": 4,
+    "letter_spacing": 3,
+    "top_margin": 60,
+    "font_weight": 900,
+    "is_default": false
+  },
+  {
+    "id": "title_chaohei",
+    "label": "文道潮黑",
+    "font_file": "title/文道潮黑.ttf",
+    "font_family": "WenDaoChaoHei",
+    "font_size": 76,
+    "color": "#00E5FF",
+    "stroke_color": "#001A33",
+    "stroke_size": 5,
+    "letter_spacing": 4,
+    "top_margin": 60,
+    "font_weight": 900,
+    "is_default": false
+  },
+  {
+    "id": "title_wujie",
+    "label": "无界黑",
+    "font_file": "title/标小智无界黑.otf",
+    "font_family": "BiaoXiaoZhiWuJieHei",
+    "font_size": 74,
+    "color": "#FFFFFF",
+    "stroke_color": "#1A1A1A",
+    "stroke_size": 4,
+    "letter_spacing": 3,
+    "top_margin": 60,
+    "font_weight": 900,
+    "is_default": false
+  },
+  {
+    "id": "title_houdi",
+    "label": "厚底黑",
+    "font_file": "title/Aa厚底黑.ttf",
+    "font_family": "AaHouDiHei",
```
|
"font_size": 76,
|
||||||
|
"color": "#FF6B6B",
|
||||||
|
"stroke_color": "#1A0000",
|
||||||
|
"stroke_size": 5,
|
||||||
|
"letter_spacing": 4,
|
||||||
|
"top_margin": 60,
|
||||||
|
"font_weight": 900,
|
||||||
|
"is_default": false
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "title_banyuan",
|
||||||
|
"label": "寒蝉半圆体",
|
||||||
|
"font_file": "title/寒蝉半圆体.otf",
|
||||||
|
"font_family": "HanChanBanYuan",
|
||||||
|
"font_size": 78,
|
||||||
|
"color": "#FFFFFF",
|
||||||
|
"stroke_color": "#000000",
|
||||||
|
"stroke_size": 5,
|
||||||
|
"letter_spacing": 4,
|
||||||
|
"top_margin": 60,
|
||||||
|
"font_weight": 900,
|
||||||
|
"is_default": false
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "title_jixiang",
|
||||||
|
"label": "欣意吉祥宋",
|
||||||
|
"font_file": "title/字体圈欣意吉祥宋.ttf",
|
||||||
|
"font_family": "XinYiJiXiangSong",
|
||||||
|
"font_size": 70,
|
||||||
|
"color": "#FDE68A",
|
||||||
|
"stroke_color": "#2B1B00",
|
||||||
|
"stroke_size": 5,
|
||||||
|
"letter_spacing": 3,
|
||||||
|
"top_margin": 58,
|
||||||
|
"font_weight": 800,
|
||||||
|
"is_default": false
|
||||||
}
|
}
|
||||||
]
|
]
|
||||||
|
|||||||
@@ -71,3 +71,18 @@ CREATE TRIGGER users_updated_at
 BEFORE UPDATE ON users
 FOR EACH ROW
 EXECUTE FUNCTION update_updated_at();
+
+-- 8. 订单表(支付宝付费)
+CREATE TABLE IF NOT EXISTS orders (
+    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+    user_id UUID REFERENCES users(id) ON DELETE CASCADE,
+    out_trade_no TEXT UNIQUE NOT NULL,
+    amount DECIMAL(10, 2) NOT NULL DEFAULT 999.00,
+    status TEXT DEFAULT 'pending' CHECK (status IN ('pending', 'paid', 'failed')),
+    trade_no TEXT,
+    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
+    paid_at TIMESTAMP WITH TIME ZONE
+);
+
+CREATE INDEX IF NOT EXISTS idx_orders_user_id ON orders(user_id);
+CREATE INDEX IF NOT EXISTS idx_orders_out_trade_no ON orders(out_trade_no);
backend/package-lock.json (generated, new file, 31 lines)
@@ -0,0 +1,31 @@
+{
+  "name": "backend",
+  "lockfileVersion": 3,
+  "requires": true,
+  "packages": {
+    "": {
+      "dependencies": {
+        "qrcode.react": "^4.2.0"
+      }
+    },
+    "node_modules/qrcode.react": {
+      "version": "4.2.0",
+      "resolved": "https://registry.npmjs.org/qrcode.react/-/qrcode.react-4.2.0.tgz",
+      "integrity": "sha512-QpgqWi8rD9DsS9EP3z7BT+5lY5SFhsqGjpgW5DY/i3mK4M9DTBNz3ErMi8BWYEfI3L0d8GIbGmcdFAS1uIRGjA==",
+      "license": "ISC",
+      "peerDependencies": {
+        "react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0"
+      }
+    },
+    "node_modules/react": {
+      "version": "19.2.4",
+      "resolved": "https://registry.npmjs.org/react/-/react-19.2.4.tgz",
+      "integrity": "sha512-9nfp2hYpCwOjAN+8TZFGhtWEwgvWHXqESH8qT89AT/lWklpLON22Lc8pEtnpsZz7VmawabSU0gCjnj8aC0euHQ==",
+      "license": "MIT",
+      "peer": true,
+      "engines": {
+        "node": ">=0.10.0"
+      }
+    }
+  }
+}
backend/package.json (new file, 5 lines)
@@ -0,0 +1,5 @@
+{
+  "dependencies": {
+    "qrcode.react": "^4.2.0"
+  }
+}
@@ -29,6 +29,9 @@ python-jose[cryptography]>=3.3.0
 passlib[bcrypt]>=1.7.4
 bcrypt==4.0.1
 
+# 支付宝支付
+python-alipay-sdk>=3.6.0
+
 # 字幕对齐
 faster-whisper>=1.0.0
 
@@ -20,14 +20,14 @@ logger = logging.getLogger("Watchdog")
 # 服务配置
 SERVICES = [
     {
-        "name": "vigent2-qwen-tts",
-        "url": "http://localhost:8009/health",
+        "name": "vigent2-cosyvoice",
+        "url": "http://localhost:8010/health",
         "failures": 0,
-        "threshold": 5,  # 连续5次失败才重启(5×30s = 2.5分钟容忍期)
+        "threshold": 3,  # 连续3次失败才重启(3×15s ≈ 45秒容忍期)
         "timeout": 10.0,
-        "restart_cmd": ["pm2", "restart", "vigent2-qwen-tts"],
+        "restart_cmd": ["pm2", "restart", "vigent2-cosyvoice"],
         "cooldown_until": 0,  # 重启后的冷却截止时间戳
-        "cooldown_sec": 120,  # 重启后等待120秒再开始检查
+        "cooldown_sec": 45,  # 重启后等待45秒再开始检查
     }
 ]
 
@@ -45,10 +45,20 @@ async def check_service(service):
         async with httpx.AsyncClient(timeout=timeout) as client:
             response = await client.get(service["url"])
             if response.status_code == 200:
-                if service["failures"] > 0:
-                    logger.info(f"✅ 服务 {service['name']} 已恢复正常")
-                service["failures"] = 0
-                return True
+                ready = True
+                try:
+                    payload = response.json()
+                    ready = bool(payload.get("ready", True))
+                except Exception:
+                    payload = {}
+
+                if ready:
+                    if service["failures"] > 0:
+                        logger.info(f"✅ 服务 {service['name']} 已恢复正常")
+                    service["failures"] = 0
+                    return True
+
+                logger.warning(f"⚠️ 服务 {service['name']} ready=false,健康检查未通过: {payload}")
             else:
                 logger.warning(f"⚠️ 服务 {service['name']} 返回状态码 {response.status_code}")
     except Exception as e:
@@ -83,8 +93,8 @@ async def main():
         for service in SERVICES:
             await check_service(service)
 
-        # 每 30 秒检查一次
-        await asyncio.sleep(30)
+        # 每 15 秒检查一次
+        await asyncio.sleep(15)
 
 if __name__ == "__main__":
     try:
frontend/package-lock.json (generated)
@@ -15,6 +15,7 @@
       "axios": "^1.13.4",
       "lucide-react": "^0.563.0",
       "next": "16.1.1",
+      "qrcode.react": "^4.2.0",
       "react": "19.2.3",
       "react-dom": "19.2.3",
       "sonner": "^2.0.7",
@@ -5618,6 +5619,15 @@
         "node": ">=6"
       }
     },
+    "node_modules/qrcode.react": {
+      "version": "4.2.0",
+      "resolved": "https://registry.npmjs.org/qrcode.react/-/qrcode.react-4.2.0.tgz",
+      "integrity": "sha512-QpgqWi8rD9DsS9EP3z7BT+5lY5SFhsqGjpgW5DY/i3mK4M9DTBNz3ErMi8BWYEfI3L0d8GIbGmcdFAS1uIRGjA==",
+      "license": "ISC",
+      "peerDependencies": {
+        "react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0"
+      }
+    },
     "node_modules/queue-microtask": {
       "version": "1.2.3",
       "resolved": "https://registry.npmjs.org/queue-microtask/-/queue-microtask-1.2.3.tgz",
@@ -16,6 +16,7 @@
     "axios": "^1.13.4",
     "lucide-react": "^0.563.0",
     "next": "16.1.1",
+    "qrcode.react": "^4.2.0",
     "react": "19.2.3",
     "react-dom": "19.2.3",
     "sonner": "^2.0.7",
@@ -3,9 +3,11 @@
 import { useState } from 'react';
 import { useRouter } from 'next/navigation';
 import { login } from "@/shared/lib/auth";
+import { useAuth } from "@/shared/contexts/AuthContext";
 
 export default function LoginPage() {
   const router = useRouter();
+  const { setUser } = useAuth();
   const [phone, setPhone] = useState('');
   const [password, setPassword] = useState('');
   const [error, setError] = useState('');
@@ -25,7 +27,11 @@ export default function LoginPage() {
 
     try {
       const result = await login(phone, password);
-      if (result.success) {
+      if (result.paymentToken) {
+        sessionStorage.setItem('payment_token', result.paymentToken);
+        router.push('/pay');
+      } else if (result.success) {
+        if (result.user) setUser(result.user);
         router.push('/');
       } else {
         setError(result.message || '登录失败');
frontend/src/app/pay/page.tsx (new file, 160 lines)
@@ -0,0 +1,160 @@
+'use client';
+
+import { Suspense, useState, useEffect, useRef } from 'react';
+import { useRouter, useSearchParams } from 'next/navigation';
+import api from '@/shared/api/axios';
+
+type PageStatus = 'loading' | 'redirecting' | 'checking' | 'success' | 'error';
+
+function PayContent() {
+  const router = useRouter();
+  const searchParams = useSearchParams();
+  const [status, setStatus] = useState<PageStatus>('loading');
+  const [errorMsg, setErrorMsg] = useState('');
+  const pollRef = useRef<ReturnType<typeof setInterval> | null>(null);
+
+  useEffect(() => {
+    const outTradeNo = searchParams.get('out_trade_no');
+    if (outTradeNo) {
+      setStatus('checking');
+      startPolling(outTradeNo);
+      return;
+    }
+
+    const token = sessionStorage.getItem('payment_token');
+    if (!token) {
+      router.replace('/login');
+      return;
+    }
+    createOrder(token);
+
+    return () => {
+      if (pollRef.current) clearInterval(pollRef.current);
+    };
+  }, []);
+
+  const createOrder = async (token: string) => {
+    try {
+      const { data } = await api.post('/api/payment/create-order', { payment_token: token });
+      const { pay_url } = data.data;
+      setStatus('redirecting');
+      window.location.href = pay_url;
+    } catch (err: any) {
+      setStatus('error');
+      setErrorMsg(err.response?.data?.message || '创建订单失败,请重新登录');
+    }
+  };
+
+  const startPolling = (tradeNo: string) => {
+    checkStatus(tradeNo);
+    pollRef.current = setInterval(() => checkStatus(tradeNo), 3000);
+  };
+
+  const checkStatus = async (tradeNo: string) => {
+    try {
+      const { data } = await api.get(`/api/payment/status/${tradeNo}`);
+      if (data.data.status === 'paid') {
+        if (pollRef.current) clearInterval(pollRef.current);
+        setStatus('success');
+        sessionStorage.removeItem('payment_token');
+        setTimeout(() => router.replace('/login'), 3000);
+      }
+    } catch {
+      // ignore polling errors
+    }
+  };
+
+  return (
+    <div className="w-full max-w-md p-8 bg-white/10 backdrop-blur-lg rounded-2xl shadow-2xl border border-white/20">
+      {(status === 'loading' || status === 'redirecting') && (
+        <div className="text-center">
+          <div className="mb-6">
+            <svg className="animate-spin h-12 w-12 mx-auto text-purple-400" xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24">
+              <circle className="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" strokeWidth="4"></circle>
+              <path className="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z"></path>
+            </svg>
+          </div>
+          <p className="text-gray-300">
+            {status === 'loading' ? '正在创建订单...' : '正在跳转到支付宝...'}
+          </p>
+        </div>
+      )}
+
+      {status === 'checking' && (
+        <div className="text-center">
+          <h1 className="text-2xl font-bold text-white mb-6">支付确认中</h1>
+          <div className="flex items-center justify-center gap-2 text-purple-300 mb-4">
+            <svg className="animate-spin h-5 w-5" xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24">
+              <circle className="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" strokeWidth="4"></circle>
+              <path className="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z"></path>
+            </svg>
+            正在确认支付结果...
+          </div>
+          <p className="text-gray-400 text-sm">如果您已完成支付,请稍候</p>
+        </div>
+      )}
+
+      {status === 'success' && (
+        <div className="text-center">
+          <div className="mb-6">
+            <svg className="w-16 h-16 mx-auto text-green-400" fill="none" stroke="currentColor" viewBox="0 0 24 24">
+              <path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M9 12l2 2 4-4m6 2a9 9 0 11-18 0 9 9 0 0118 0z" />
+            </svg>
+          </div>
+          <h2 className="text-2xl font-bold text-white mb-4">支付成功!</h2>
+          <p className="text-gray-300 mb-2">会员已开通,即将跳转到登录页...</p>
+          <p className="text-gray-500 text-sm">请重新登录即可使用</p>
+        </div>
+      )}
+
+      {status === 'error' && (
+        <div className="text-center">
+          <div className="mb-6">
+            <svg className="w-16 h-16 mx-auto text-red-400" fill="none" stroke="currentColor" viewBox="0 0 24 24">
+              <path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M12 8v4m0 4h.01M21 12a9 9 0 11-18 0 9 9 0 0118 0z" />
+            </svg>
+          </div>
+          <h2 className="text-2xl font-bold text-white mb-4">创建订单失败</h2>
+          <p className="text-red-300 mb-6">{errorMsg}</p>
+          <button
+            onClick={() => router.replace('/login')}
+            className="py-3 px-6 bg-gradient-to-r from-purple-600 to-pink-600 text-white font-semibold rounded-lg"
+          >
+            返回登录
+          </button>
+        </div>
+      )}
+
+      {status === 'checking' && (
+        <div className="mt-6 text-center">
+          <button
+            onClick={() => {
+              if (pollRef.current) clearInterval(pollRef.current);
+              router.replace('/login');
+            }}
+            className="text-purple-300 hover:text-purple-200 text-sm"
+          >
+            返回登录
+          </button>
+        </div>
+      )}
+    </div>
+  );
+}
+
+export default function PayPage() {
+  return (
+    <div className="min-h-dvh flex items-center justify-center">
+      <Suspense fallback={
+        <div className="w-full max-w-md p-8 bg-white/10 backdrop-blur-lg rounded-2xl shadow-2xl border border-white/20 text-center">
+          <svg className="animate-spin h-12 w-12 mx-auto text-purple-400" xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24">
+            <circle className="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" strokeWidth="4"></circle>
+            <path className="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z"></path>
+          </svg>
+        </div>
+      }>
+        <PayContent />
+      </Suspense>
+    </div>
+  );
+}
@@ -61,7 +61,7 @@ export default function RegisterPage() {
           </div>
           <h2 className="text-2xl font-bold text-white mb-4">注册成功!</h2>
           <p className="text-gray-300 mb-6">
-            您的账号已创建,请等待管理员审核激活后即可登录。
+            注册成功!请返回登录页,登录后完成付费即可开通。
           </p>
           <a
             href="/login"
@@ -106,6 +106,10 @@ export default function AccountSettingsDropdown() {
       {/* 下拉菜单 */}
       {isOpen && (
         <div className="absolute right-0 mt-2 bg-gray-800 border border-white/10 rounded-lg shadow-xl z-[160] overflow-hidden whitespace-nowrap">
+          {/* 账户名称 */}
+          <div className="px-3 py-2 border-b border-white/10 text-center">
+            <div className="text-sm text-white font-medium">{user?.phone ? `${user.phone.slice(0, 3)}****${user.phone.slice(-4)}` : '未知账户'}</div>
+          </div>
           {/* 有效期显示 */}
           <div className="px-3 py-2 border-b border-white/10 text-center">
             <div className="text-xs text-gray-400">账户有效期</div>
@@ -188,6 +192,7 @@ export default function AccountSettingsDropdown() {
               onClick={() => {
                 setShowPasswordModal(false);
                 setError('');
+                setSuccess('');
                 setOldPassword('');
                 setNewPassword('');
                 setConfirmPassword('');
@@ -126,6 +126,7 @@ export const useGeneratedAudios = ({
     ref_audio_id?: string;
     ref_text?: string;
     language: string;
+    speed?: number;
   }) => {
     setIsGeneratingAudio(true);
     setAudioTask({ status: "pending", progress: 0, message: "正在提交..." });
@@ -12,7 +12,7 @@ interface GeneratedVideo {
 }
 
 interface UseGeneratedVideosOptions {
+  storageKey: string;
   selectedVideoId: string | null;
   setSelectedVideoId: React.Dispatch<React.SetStateAction<string | null>>;
   setGeneratedVideo: React.Dispatch<React.SetStateAction<string | null>>;
@@ -20,7 +20,7 @@ interface UseGeneratedVideosOptions {
 }
 
 export const useGeneratedVideos = ({
+  storageKey,
   selectedVideoId,
   setSelectedVideoId,
   setGeneratedVideo,
@@ -45,6 +45,8 @@ export const useGeneratedVideos = ({
     if (preferVideoId === "__latest__") {
       setSelectedVideoId(videos[0].id);
       setGeneratedVideo(resolveMediaUrl(videos[0].path));
+      // 写入跨页面共享标记,让另一个页面也能感知最新生成的视频
+      localStorage.setItem(`vigent_${storageKey}_latestGeneratedVideoId`, videos[0].id);
     } else {
       const found = videos.find(v => v.id === preferVideoId);
       if (found) {
@@ -1,4 +1,4 @@
-import { useEffect, useRef, useState } from "react";
+import { useEffect, useMemo, useRef, useState } from "react";
 import api from "@/shared/api/axios";
 import {
   buildTextShadow,
@@ -9,7 +9,7 @@ import {
   resolveBgmUrl,
   resolveMediaUrl,
 } from "@/shared/lib/media";
-import { clampTitle } from "@/shared/lib/title";
+import { clampTitle, clampSecondaryTitle, SECONDARY_TITLE_MAX_LENGTH } from "@/shared/lib/title";
 import { useTitleInput } from "@/shared/hooks/useTitleInput";
 import { useAuth } from "@/shared/contexts/AuthContext";
 import { useTask } from "@/shared/contexts/TaskContext";
@@ -26,6 +26,7 @@ import { useRefAudios } from "@/features/home/model/useRefAudios";
 import { useTitleSubtitleStyles } from "@/features/home/model/useTitleSubtitleStyles";
 import { useTimelineEditor } from "@/features/home/model/useTimelineEditor";
 import { useSavedScripts } from "@/features/home/model/useSavedScripts";
+import { useVideoFrameCapture } from "@/features/home/model/useVideoFrameCapture";
 import { ApiResponse, unwrap } from "@/shared/api/types";
 
 const VOICES: Record<string, { id: string; name: string }[]> = {
@@ -87,10 +88,9 @@ const LANG_TO_LOCALE: Record<string, string> = {
   "Português": "pt-BR",
 };
 
+const DEFAULT_SHORT_TITLE_DURATION = 4;
 
 
-const FIXED_REF_TEXT =
-  "其实生活中有许多美好的瞬间,比如清晨的阳光,或者一杯温热的清茶。希望这次生成的音色能够自然、流畅,完美还原出我最真实的声音状态。";
 
 const scrollContainerToItem = (container: HTMLDivElement, item: HTMLDivElement) => {
   const containerRect = container.getBoundingClientRect();
@@ -152,10 +152,19 @@ export const useHomeController = () => {
   const [subtitleSizeLocked, setSubtitleSizeLocked] = useState<boolean>(false);
   const [titleSizeLocked, setTitleSizeLocked] = useState<boolean>(false);
   const [titleTopMargin, setTitleTopMargin] = useState<number>(62);
+  const [titleDisplayMode, setTitleDisplayMode] = useState<"short" | "persistent">("short");
   const [subtitleBottomMargin, setSubtitleBottomMargin] = useState<number>(80);
+  const [outputAspectRatio, setOutputAspectRatio] = useState<"9:16" | "16:9">("9:16");
   const [showStylePreview, setShowStylePreview] = useState<boolean>(false);
   const [materialDimensions, setMaterialDimensions] = useState<{ width: number; height: number } | null>(null);
+
+  // 副标题相关状态
+  const [videoSecondaryTitle, setVideoSecondaryTitle] = useState<string>("");
+  const [selectedSecondaryTitleStyleId, setSelectedSecondaryTitleStyleId] = useState<string>("");
+  const [secondaryTitleFontSize, setSecondaryTitleFontSize] = useState<number>(48);
+  const [secondaryTitleTopMargin, setSecondaryTitleTopMargin] = useState<number>(12);
+  const [secondaryTitleSizeLocked, setSecondaryTitleSizeLocked] = useState<boolean>(false);
 
 
   // 背景音乐相关状态
   const [selectedBgmId, setSelectedBgmId] = useState<string>("");
@@ -165,11 +174,14 @@
   // 声音克隆相关状态
   const [ttsMode, setTtsMode] = useState<"edgetts" | "voiceclone">("edgetts");
   const [selectedRefAudio, setSelectedRefAudio] = useState<RefAudio | null>(null);
-  const [refText, setRefText] = useState(FIXED_REF_TEXT);
+  const [refText, setRefText] = useState("");
 
   // 预生成配音选中 ID
   const [selectedAudioId, setSelectedAudioId] = useState<string | null>(null);
 
+  // 语速控制
+  const [speed, setSpeed] = useState<number>(1.0);
+
   // ClipTrimmer 模态框状态
   const [clipTrimmerOpen, setClipTrimmerOpen] = useState(false);
   const [clipTrimmerSegmentId, setClipTrimmerSegmentId] = useState<string | null>(null);
@@ -269,6 +281,9 @@
   // 文案提取模态框
   const [extractModalOpen, setExtractModalOpen] = useState(false);
 
+  // AI 改写模态框
+  const [rewriteModalOpen, setRewriteModalOpen] = useState(false);
+
   // 获取存储 key 的前缀(登录用户使用 userId,未登录使用 guest)
   const storageKey = userId || "guest";
 
@@ -286,7 +301,6 @@
     setUploadError,
     fetchMaterials,
     toggleMaterial,
-    reorderMaterials,
     deleteMaterial,
     handleUpload,
   } = useMaterials({
@@ -314,8 +328,9 @@
     fetchRefAudios,
     uploadRefAudio,
     deleteRefAudio,
+    retranscribeRefAudio,
+    retranscribingId,
   } = useRefAudios({
-    fixedRefText: FIXED_REF_TEXT,
     selectedRefAudio,
     setSelectedRefAudio,
     setRefText,
@@ -350,7 +365,7 @@
     fetchGeneratedVideos,
     deleteVideo,
   } = useGeneratedVideos({
+    storageKey,
     selectedVideoId,
     setSelectedVideoId,
     setGeneratedVideo,
@@ -384,6 +399,18 @@
     storageKey,
   });
 
+  // 时间轴第一段素材的视频 URL(用于帧截取预览)
+  // 有时间轴段时用第一段,没有(如未选配音)回退到 selectedMaterials[0]
+  const firstTimelineMaterialUrl = useMemo(() => {
+    const firstSeg = timelineSegments[0];
+    const matId = firstSeg?.materialId ?? selectedMaterials[0];
+    if (!matId) return null;
+    const mat = materials.find((m) => m.id === matId);
+    return mat?.path ? resolveMediaUrl(mat.path) : null;
+  }, [materials, timelineSegments, selectedMaterials]);
+
+  const materialPosterUrl = useVideoFrameCapture(showStylePreview ? firstTimelineMaterialUrl : null);
+
   useEffect(() => {
     if (isAuthLoading || !userId) return;
     let active = true;
@@ -426,6 +453,8 @@
     setText,
     videoTitle,
     setVideoTitle,
+    videoSecondaryTitle,
+    setVideoSecondaryTitle,
     ttsMode,
     setTtsMode,
     voice,
@@ -438,16 +467,27 @@
     setSelectedSubtitleStyleId,
     selectedTitleStyleId,
     setSelectedTitleStyleId,
+    selectedSecondaryTitleStyleId,
+    setSelectedSecondaryTitleStyleId,
     subtitleFontSize,
     setSubtitleFontSize,
     titleFontSize,
     setTitleFontSize,
+    secondaryTitleFontSize,
+    setSecondaryTitleFontSize,
     setSubtitleSizeLocked,
     setTitleSizeLocked,
+    setSecondaryTitleSizeLocked,
     titleTopMargin,
     setTitleTopMargin,
+    secondaryTitleTopMargin,
+    setSecondaryTitleTopMargin,
+    titleDisplayMode,
+    setTitleDisplayMode,
     subtitleBottomMargin,
     setSubtitleBottomMargin,
+    outputAspectRatio,
+    setOutputAspectRatio,
     selectedBgmId,
     setSelectedBgmId,
     bgmVolume,
@@ -459,6 +499,8 @@ export const useHomeController = () => {
|
|||||||
selectedRefAudio,
|
selectedRefAudio,
|
||||||
selectedAudioId,
|
selectedAudioId,
|
||||||
setSelectedAudioId,
|
setSelectedAudioId,
|
||||||
|
speed,
|
||||||
|
setSpeed,
|
||||||
});
|
});
|
||||||
|
|
||||||
const { savedScripts, saveScript, deleteScript: deleteSavedScript } = useSavedScripts(storageKey);
|
const { savedScripts, saveScript, deleteScript: deleteSavedScript } = useSavedScripts(storageKey);
|
||||||
@@ -481,6 +523,12 @@ export const useHomeController = () => {
|
|||||||
onCommit: syncTitleToPublish,
|
onCommit: syncTitleToPublish,
|
||||||
});
|
});
|
||||||
|
|
||||||
|
const secondaryTitleInput = useTitleInput({
|
||||||
|
value: videoSecondaryTitle,
|
||||||
|
onChange: setVideoSecondaryTitle,
|
||||||
|
maxLength: SECONDARY_TITLE_MAX_LENGTH,
|
||||||
|
});
|
||||||
|
|
||||||
// 加载素材列表和历史视频
|
// 加载素材列表和历史视频
|
||||||
useEffect(() => {
|
useEffect(() => {
|
||||||
if (isAuthLoading) return;
|
if (isAuthLoading) return;
|
||||||
@@ -523,7 +571,6 @@ export const useHomeController = () => {
|
|||||||
|
|
||||||
let isActive = true;
|
let isActive = true;
|
||||||
const video = document.createElement("video");
|
const video = document.createElement("video");
|
||||||
video.crossOrigin = "anonymous";
|
|
||||||
video.preload = "metadata";
|
video.preload = "metadata";
|
||||||
video.src = url;
|
video.src = url;
|
||||||
video.load();
|
video.load();
|
||||||
@@ -573,11 +620,32 @@ export const useHomeController = () => {
|
|||||||
}
|
}
|
||||||
}, [titleStyles, selectedTitleStyleId, titleSizeLocked]);
|
}, [titleStyles, selectedTitleStyleId, titleSizeLocked]);
|
||||||
|
|
||||||
|
useEffect(() => {
|
||||||
|
if (secondaryTitleSizeLocked || titleStyles.length === 0) return;
|
||||||
|
const active = titleStyles.find((s) => s.id === selectedSecondaryTitleStyleId)
|
||||||
|
|| titleStyles.find((s) => s.is_default)
|
||||||
|
|| titleStyles[0];
|
||||||
|
if (active?.font_size) {
|
||||||
|
setSecondaryTitleFontSize(active.font_size);
|
||||||
|
}
|
||||||
|
}, [titleStyles, selectedSecondaryTitleStyleId, secondaryTitleSizeLocked]);
|
||||||
|
|
||||||
// 移除重复的 BGM 持久化恢复逻辑 (已统一移动到 useHomePersistence 中)
|
// 移除重复的 BGM 持久化恢复逻辑 (已统一移动到 useHomePersistence 中)
|
||||||
// useEffect(() => { ... })
|
// useEffect(() => { ... })
|
||||||
|
|
||||||
|
// 时间门控:页面加载后 1 秒内禁止所有列表自动滚动效果
|
||||||
|
// 防止持久化恢复 + 异步数据加载触发 scrollIntoView 导致移动端页面跳动
|
||||||
|
const scrollEffectsEnabled = useRef(false);
|
||||||
useEffect(() => {
|
useEffect(() => {
|
||||||
if (!selectedBgmId) return;
|
const timer = setTimeout(() => {
|
||||||
|
scrollEffectsEnabled.current = true;
|
||||||
|
}, 1000);
|
||||||
|
return () => clearTimeout(timer);
|
||||||
|
}, []);
|
||||||
|
|
||||||
|
// BGM 列表滚动
|
||||||
|
useEffect(() => {
|
||||||
|
if (!selectedBgmId || !scrollEffectsEnabled.current) return;
|
||||||
const container = bgmListContainerRef.current;
|
const container = bgmListContainerRef.current;
|
||||||
const target = bgmItemRefs.current[selectedBgmId];
|
const target = bgmItemRefs.current[selectedBgmId];
|
||||||
if (container && target) {
|
if (container && target) {
|
||||||
@@ -585,16 +653,10 @@ export const useHomeController = () => {
|
|||||||
}
|
}
|
||||||
}, [selectedBgmId, bgmList]);
|
}, [selectedBgmId, bgmList]);
|
||||||
|
|
||||||
// 素材列表滚动:跳过首次恢复,仅用户主动操作时滚动
|
// 素材列表滚动
|
||||||
const materialScrollReady = useRef(false);
|
|
||||||
useEffect(() => {
|
useEffect(() => {
|
||||||
const firstSelected = selectedMaterials[0];
|
const firstSelected = selectedMaterials[0];
|
||||||
if (!firstSelected) return;
|
if (!firstSelected || !scrollEffectsEnabled.current) return;
|
||||||
if (!materialScrollReady.current) {
|
|
||||||
// 首次有选中素材时标记就绪,但不滚动(避免刷新后整页跳动)
|
|
||||||
materialScrollReady.current = true;
|
|
||||||
return;
|
|
||||||
}
|
|
||||||
const target = materialItemRefs.current[firstSelected];
|
const target = materialItemRefs.current[firstSelected];
|
||||||
if (target) {
|
if (target) {
|
||||||
target.scrollIntoView({ block: "nearest", behavior: "smooth" });
|
target.scrollIntoView({ block: "nearest", behavior: "smooth" });
|
||||||
@@ -610,7 +672,7 @@ export const useHomeController = () => {
|
|||||||
setSelectedVideoId(firstId);
|
setSelectedVideoId(firstId);
|
||||||
setGeneratedVideo(resolveMediaUrl(generatedVideos[0].path));
|
setGeneratedVideo(resolveMediaUrl(generatedVideos[0].path));
|
||||||
}
|
}
|
||||||
}, [isRestored, generatedVideos, selectedVideoId, setSelectedVideoId, setGeneratedVideo, resolveMediaUrl]);
|
}, [isRestored, generatedVideos, selectedVideoId, setSelectedVideoId, setGeneratedVideo]);
|
||||||
|
|
||||||
// 【修复】BGM 默认选中逻辑
|
// 【修复】BGM 默认选中逻辑
|
||||||
useEffect(() => {
|
useEffect(() => {
|
||||||
@@ -619,8 +681,9 @@ export const useHomeController = () => {
|
|||||||
}
|
}
|
||||||
}, [isRestored, bgmList, selectedBgmId, enableBgm, setSelectedBgmId]);
|
}, [isRestored, bgmList, selectedBgmId, enableBgm, setSelectedBgmId]);
|
||||||
|
|
||||||
|
// 视频列表滚动
|
||||||
useEffect(() => {
|
useEffect(() => {
|
||||||
if (!selectedVideoId) return;
|
if (!selectedVideoId || !scrollEffectsEnabled.current) return;
|
||||||
const target = videoItemRefs.current[selectedVideoId];
|
const target = videoItemRefs.current[selectedVideoId];
|
||||||
if (target) {
|
if (target) {
|
||||||
target.scrollIntoView({ block: "nearest", behavior: "smooth" });
|
target.scrollIntoView({ block: "nearest", behavior: "smooth" });
|
||||||
@@ -726,7 +789,7 @@ export const useHomeController = () => {
|
|||||||
|
|
||||||
setIsGeneratingMeta(true);
|
setIsGeneratingMeta(true);
|
||||||
try {
|
try {
|
||||||
const { data: res } = await api.post<ApiResponse<{ title?: string; tags?: string[] }>>(
|
const { data: res } = await api.post<ApiResponse<{ title?: string; secondary_title?: string; tags?: string[] }>>(
|
||||||
"/api/ai/generate-meta",
|
"/api/ai/generate-meta",
|
||||||
{ text: text.trim() }
|
{ text: text.trim() }
|
||||||
);
|
);
|
||||||
@@ -736,6 +799,10 @@ export const useHomeController = () => {
|
|||||||
const nextTitle = clampTitle(payload.title || "");
|
const nextTitle = clampTitle(payload.title || "");
|
||||||
titleInput.commitValue(nextTitle);
|
titleInput.commitValue(nextTitle);
|
||||||
|
|
||||||
|
// 更新副标题
|
||||||
|
const nextSecondaryTitle = clampSecondaryTitle(payload.secondary_title || "");
|
||||||
|
secondaryTitleInput.commitValue(nextSecondaryTitle);
|
||||||
|
|
||||||
// 同步到发布页 localStorage
|
// 同步到发布页 localStorage
|
||||||
localStorage.setItem(`vigent_${storageKey}_publish_tags`, JSON.stringify(payload.tags || []));
|
localStorage.setItem(`vigent_${storageKey}_publish_tags`, JSON.stringify(payload.tags || []));
|
||||||
} catch (err: unknown) {
|
} catch (err: unknown) {
|
||||||
@@ -815,6 +882,7 @@ export const useHomeController = () => {
|
|||||||
ref_audio_id: ttsMode === "voiceclone" ? selectedRefAudio!.id : undefined,
|
ref_audio_id: ttsMode === "voiceclone" ? selectedRefAudio!.id : undefined,
|
||||||
ref_text: ttsMode === "voiceclone" ? refText : undefined,
|
ref_text: ttsMode === "voiceclone" ? refText : undefined,
|
||||||
language: textLang,
|
language: textLang,
|
||||||
|
speed: ttsMode === "voiceclone" ? speed : undefined,
|
||||||
};
|
};
|
||||||
await generateAudio(params);
|
await generateAudio(params);
|
||||||
};
|
};
|
||||||
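The hunk above gates the new voice-clone `speed` field on `ttsMode`: outside voiceclone mode the field is set to `undefined`, so it drops out of the serialized request body entirely. A minimal framework-free sketch of that conditional-payload pattern (the helper name and `ref_audio_id` argument here are illustrative, not taken from the diff):

```typescript
type TtsMode = "edgetts" | "voiceclone";

// Hypothetical standalone helper mirroring the pattern above: voiceclone-only
// fields become undefined in edgetts mode, and JSON.stringify silently drops
// undefined-valued properties from the serialized body.
function buildTtsParams(mode: TtsMode, speed: number, refAudioId?: string) {
  return {
    ref_audio_id: mode === "voiceclone" ? refAudioId : undefined,
    speed: mode === "voiceclone" ? speed : undefined,
  };
}
```

In edgetts mode `JSON.stringify(buildTtsParams("edgetts", 1.2, "ref-1"))` yields `{}`, so the backend never sees voiceclone-only keys.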
```diff
@@ -854,22 +922,59 @@ export const useHomeController = () => {
 language: selectedAudio.language || textLang,
 title: videoTitle.trim() || undefined,
 enable_subtitles: true,
+output_aspect_ratio: outputAspectRatio,
 };

 // 多素材
 if (selectedMaterials.length > 1) {
-payload.material_paths = selectedMaterials
+const timelineOrderedIds = timelineSegments
+.map((seg) => seg.materialId)
+.filter((id, index, arr) => arr.indexOf(id) === index);
+const orderedMaterialIds = [
+...timelineOrderedIds.filter((id) => selectedMaterials.includes(id)),
+...selectedMaterials.filter((id) => !timelineOrderedIds.includes(id)),
+];
+
+const materialPaths = orderedMaterialIds
 .map((id) => materials.find((x) => x.id === id)?.path)
 .filter((path): path is string => !!path);
+
+if (materialPaths.length === 0) {
+toast.error("多素材解析失败,请刷新素材后重试");
+return;
+}
+
+payload.material_paths = materialPaths;
+payload.material_path = materialPaths[0];
+
 // 发送自定义时间轴分配
 const assignments = toCustomAssignments();
 if (assignments.length > 0) {
+const assignmentPaths = assignments
+.map((a) => a.material_path)
+.filter((path): path is string => !!path);
+
+if (assignmentPaths.length === assignments.length) {
+// 以时间轴可见段为准:超出时间轴的素材不会参与本次生成
+payload.material_paths = assignmentPaths;
+payload.material_path = assignmentPaths[0];
+}
 payload.custom_assignments = assignments;
+} else {
+console.warn(
+"[Timeline] custom_assignments 为空,回退后端自动分配",
+{ materials: materialPaths.length }
+);
 }
 }

-// 单素材 + 截取起点
-if (selectedMaterials.length === 1 && timelineSegments[0]?.sourceStart > 0) {
+// 单素材 + 截取范围
+const singleSeg = timelineSegments[0];
+if (
+selectedMaterials.length === 1
+&& singleSeg
+&& (singleSeg.sourceStart > 0 || singleSeg.sourceEnd > 0)
+) {
 payload.custom_assignments = toCustomAssignments();
 }

@@ -889,10 +994,28 @@ export const useHomeController = () => {
 payload.title_font_size = Math.round(titleFontSize);
 }
+
+if (videoTitle.trim() || videoSecondaryTitle.trim()) {
+payload.title_display_mode = titleDisplayMode;
+if (titleDisplayMode === "short") {
+payload.title_duration = DEFAULT_SHORT_TITLE_DURATION;
+}
+}
+
 if (videoTitle.trim()) {
 payload.title_top_margin = Math.round(titleTopMargin);
 }
+
+if (videoSecondaryTitle.trim()) {
+payload.secondary_title = videoSecondaryTitle.trim();
+if (selectedSecondaryTitleStyleId) {
+payload.secondary_title_style_id = selectedSecondaryTitleStyleId;
+}
+if (secondaryTitleFontSize) {
+payload.secondary_title_font_size = Math.round(secondaryTitleFontSize);
+}
+payload.secondary_title_top_margin = Math.round(secondaryTitleTopMargin);
+}

 payload.subtitle_bottom_margin = Math.round(subtitleBottomMargin);

 if (enableBgm && selectedBgmId) {
@@ -973,6 +1096,8 @@ export const useHomeController = () => {
 setText,
 extractModalOpen,
 setExtractModalOpen,
+rewriteModalOpen,
+setRewriteModalOpen,
 handleGenerateMeta,
 isGeneratingMeta,
 handleTranslate,
@@ -992,6 +1117,15 @@ export const useHomeController = () => {
 titleFontSize,
 setTitleFontSize,
 setTitleSizeLocked,
+videoSecondaryTitle,
+secondaryTitleInput,
+selectedSecondaryTitleStyleId,
+setSelectedSecondaryTitleStyleId,
+secondaryTitleFontSize,
+setSecondaryTitleFontSize,
+setSecondaryTitleSizeLocked,
+secondaryTitleTopMargin,
+setSecondaryTitleTopMargin,
 subtitleStyles,
 selectedSubtitleStyleId,
 setSelectedSubtitleStyleId,
@@ -1000,12 +1134,17 @@ export const useHomeController = () => {
 setSubtitleSizeLocked,
 titleTopMargin,
 setTitleTopMargin,
+titleDisplayMode,
+setTitleDisplayMode,
 subtitleBottomMargin,
 setSubtitleBottomMargin,
+outputAspectRatio,
+setOutputAspectRatio,
 resolveAssetUrl,
 getFontFormat,
 buildTextShadow,
 materialDimensions,
+materialPosterUrl,
 ttsMode,
 setTtsMode,
 voices: VOICES[textLang] || VOICES["zh-CN"],
@@ -1029,6 +1168,8 @@ export const useHomeController = () => {
 saveEditing,
 cancelEditing,
 deleteRefAudio,
+retranscribeRefAudio,
+retranscribingId,
 recordedBlob,
 isRecording,
 recordingTime,
@@ -1036,7 +1177,6 @@ export const useHomeController = () => {
 stopRecording,
 useRecording,
 formatRecordingTime,
-fixedRefText: FIXED_REF_TEXT,
 bgmList,
 bgmLoading,
 bgmError,
@@ -1072,6 +1212,8 @@ export const useHomeController = () => {
 deleteAudio,
 renameAudio,
 selectAudio,
+speed,
+setSpeed,
 timelineSegments,
 reorderSegments,
 setSourceRange,
```
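The multi-material branch above reorders `selectedMaterials` by each id's first appearance on the timeline and appends any selected materials the timeline never references. That ordering step reduces to a pure function; the `Segment` shape and helper name below are assumptions for illustration, not part of the diff:

```typescript
interface Segment {
  materialId: string;
}

// Order selected material ids by first timeline appearance; ids that are
// selected but never appear on the timeline keep their original relative
// order and are appended at the end.
function orderMaterialIds(segments: Segment[], selected: string[]): string[] {
  const timelineOrdered = segments
    .map((seg) => seg.materialId)
    .filter((id, index, arr) => arr.indexOf(id) === index); // dedupe, first occurrence wins
  return [
    ...timelineOrdered.filter((id) => selected.includes(id)),
    ...selected.filter((id) => !timelineOrdered.includes(id)),
  ];
}
```

Deduplicating before the merge is what lets one material occupy several timeline segments without being sent to the backend twice.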
|
|||||||
@@ -1,5 +1,5 @@
|
|||||||
import { useEffect, useState } from "react";
|
import { useEffect, useState } from "react";
|
||||||
import { clampTitle } from "@/shared/lib/title";
|
import { clampTitle, clampSecondaryTitle } from "@/shared/lib/title";
|
||||||
|
|
||||||
interface RefAudio {
|
interface RefAudio {
|
||||||
id: string;
|
id: string;
|
||||||
@@ -17,6 +17,8 @@ interface UseHomePersistenceOptions {
|
|||||||
setText: React.Dispatch<React.SetStateAction<string>>;
|
setText: React.Dispatch<React.SetStateAction<string>>;
|
||||||
videoTitle: string;
|
videoTitle: string;
|
||||||
setVideoTitle: React.Dispatch<React.SetStateAction<string>>;
|
setVideoTitle: React.Dispatch<React.SetStateAction<string>>;
|
||||||
|
videoSecondaryTitle: string;
|
||||||
|
setVideoSecondaryTitle: React.Dispatch<React.SetStateAction<string>>;
|
||||||
ttsMode: 'edgetts' | 'voiceclone';
|
ttsMode: 'edgetts' | 'voiceclone';
|
||||||
setTtsMode: React.Dispatch<React.SetStateAction<'edgetts' | 'voiceclone'>>;
|
setTtsMode: React.Dispatch<React.SetStateAction<'edgetts' | 'voiceclone'>>;
|
||||||
voice: string;
|
voice: string;
|
||||||
@@ -29,16 +31,27 @@ interface UseHomePersistenceOptions {
|
|||||||
setSelectedSubtitleStyleId: React.Dispatch<React.SetStateAction<string>>;
|
setSelectedSubtitleStyleId: React.Dispatch<React.SetStateAction<string>>;
|
||||||
selectedTitleStyleId: string;
|
selectedTitleStyleId: string;
|
||||||
setSelectedTitleStyleId: React.Dispatch<React.SetStateAction<string>>;
|
setSelectedTitleStyleId: React.Dispatch<React.SetStateAction<string>>;
|
||||||
|
selectedSecondaryTitleStyleId: string;
|
||||||
|
setSelectedSecondaryTitleStyleId: React.Dispatch<React.SetStateAction<string>>;
|
||||||
subtitleFontSize: number;
|
subtitleFontSize: number;
|
||||||
setSubtitleFontSize: React.Dispatch<React.SetStateAction<number>>;
|
setSubtitleFontSize: React.Dispatch<React.SetStateAction<number>>;
|
||||||
titleFontSize: number;
|
titleFontSize: number;
|
||||||
setTitleFontSize: React.Dispatch<React.SetStateAction<number>>;
|
setTitleFontSize: React.Dispatch<React.SetStateAction<number>>;
|
||||||
|
secondaryTitleFontSize: number;
|
||||||
|
setSecondaryTitleFontSize: React.Dispatch<React.SetStateAction<number>>;
|
||||||
setSubtitleSizeLocked: React.Dispatch<React.SetStateAction<boolean>>;
|
setSubtitleSizeLocked: React.Dispatch<React.SetStateAction<boolean>>;
|
||||||
setTitleSizeLocked: React.Dispatch<React.SetStateAction<boolean>>;
|
setTitleSizeLocked: React.Dispatch<React.SetStateAction<boolean>>;
|
||||||
|
setSecondaryTitleSizeLocked: React.Dispatch<React.SetStateAction<boolean>>;
|
||||||
titleTopMargin: number;
|
titleTopMargin: number;
|
||||||
setTitleTopMargin: React.Dispatch<React.SetStateAction<number>>;
|
setTitleTopMargin: React.Dispatch<React.SetStateAction<number>>;
|
||||||
|
secondaryTitleTopMargin: number;
|
||||||
|
setSecondaryTitleTopMargin: React.Dispatch<React.SetStateAction<number>>;
|
||||||
|
titleDisplayMode: 'short' | 'persistent';
|
||||||
|
setTitleDisplayMode: React.Dispatch<React.SetStateAction<'short' | 'persistent'>>;
|
||||||
subtitleBottomMargin: number;
|
subtitleBottomMargin: number;
|
||||||
setSubtitleBottomMargin: React.Dispatch<React.SetStateAction<number>>;
|
setSubtitleBottomMargin: React.Dispatch<React.SetStateAction<number>>;
|
||||||
|
outputAspectRatio: '9:16' | '16:9';
|
||||||
|
setOutputAspectRatio: React.Dispatch<React.SetStateAction<'9:16' | '16:9'>>;
|
||||||
selectedBgmId: string;
|
selectedBgmId: string;
|
||||||
setSelectedBgmId: React.Dispatch<React.SetStateAction<string>>;
|
setSelectedBgmId: React.Dispatch<React.SetStateAction<string>>;
|
||||||
bgmVolume: number;
|
bgmVolume: number;
|
||||||
@@ -50,6 +63,8 @@ interface UseHomePersistenceOptions {
|
|||||||
selectedRefAudio: RefAudio | null;
|
selectedRefAudio: RefAudio | null;
|
||||||
selectedAudioId: string | null;
|
selectedAudioId: string | null;
|
||||||
setSelectedAudioId: React.Dispatch<React.SetStateAction<string | null>>;
|
setSelectedAudioId: React.Dispatch<React.SetStateAction<string | null>>;
|
||||||
|
speed: number;
|
||||||
|
setSpeed: React.Dispatch<React.SetStateAction<number>>;
|
||||||
}
|
}
|
||||||
|
|
||||||
export const useHomePersistence = ({
|
export const useHomePersistence = ({
|
||||||
@@ -59,6 +74,8 @@ export const useHomePersistence = ({
|
|||||||
setText,
|
setText,
|
||||||
videoTitle,
|
videoTitle,
|
||||||
setVideoTitle,
|
setVideoTitle,
|
||||||
|
videoSecondaryTitle,
|
||||||
|
setVideoSecondaryTitle,
|
||||||
ttsMode,
|
ttsMode,
|
||||||
setTtsMode,
|
setTtsMode,
|
||||||
voice,
|
voice,
|
||||||
@@ -71,16 +88,27 @@ export const useHomePersistence = ({
|
|||||||
setSelectedSubtitleStyleId,
|
setSelectedSubtitleStyleId,
|
||||||
selectedTitleStyleId,
|
selectedTitleStyleId,
|
||||||
setSelectedTitleStyleId,
|
setSelectedTitleStyleId,
|
||||||
|
selectedSecondaryTitleStyleId,
|
||||||
|
setSelectedSecondaryTitleStyleId,
|
||||||
subtitleFontSize,
|
subtitleFontSize,
|
||||||
setSubtitleFontSize,
|
setSubtitleFontSize,
|
||||||
titleFontSize,
|
titleFontSize,
|
||||||
setTitleFontSize,
|
setTitleFontSize,
|
||||||
|
secondaryTitleFontSize,
|
||||||
|
setSecondaryTitleFontSize,
|
||||||
setSubtitleSizeLocked,
|
setSubtitleSizeLocked,
|
||||||
setTitleSizeLocked,
|
setTitleSizeLocked,
|
||||||
|
setSecondaryTitleSizeLocked,
|
||||||
titleTopMargin,
|
titleTopMargin,
|
||||||
setTitleTopMargin,
|
setTitleTopMargin,
|
||||||
|
secondaryTitleTopMargin,
|
||||||
|
setSecondaryTitleTopMargin,
|
||||||
|
titleDisplayMode,
|
||||||
|
setTitleDisplayMode,
|
||||||
subtitleBottomMargin,
|
subtitleBottomMargin,
|
||||||
setSubtitleBottomMargin,
|
setSubtitleBottomMargin,
|
||||||
|
outputAspectRatio,
|
||||||
|
setOutputAspectRatio,
|
||||||
selectedBgmId,
|
selectedBgmId,
|
||||||
setSelectedBgmId,
|
setSelectedBgmId,
|
||||||
bgmVolume,
|
bgmVolume,
|
||||||
@@ -92,6 +120,8 @@ export const useHomePersistence = ({
|
|||||||
selectedRefAudio,
|
selectedRefAudio,
|
||||||
selectedAudioId,
|
selectedAudioId,
|
||||||
setSelectedAudioId,
|
setSelectedAudioId,
|
||||||
|
speed,
|
||||||
|
setSpeed,
|
||||||
}: UseHomePersistenceOptions) => {
|
}: UseHomePersistenceOptions) => {
|
||||||
const [isRestored, setIsRestored] = useState(false);
|
const [isRestored, setIsRestored] = useState(false);
|
||||||
|
|
||||||
@@ -100,24 +130,33 @@ export const useHomePersistence = ({
|
|||||||
|
|
||||||
const savedText = localStorage.getItem(`vigent_${storageKey}_text`);
|
const savedText = localStorage.getItem(`vigent_${storageKey}_text`);
|
||||||
const savedTitle = localStorage.getItem(`vigent_${storageKey}_title`);
|
const savedTitle = localStorage.getItem(`vigent_${storageKey}_title`);
|
||||||
|
const savedSecondaryTitle = localStorage.getItem(`vigent_${storageKey}_secondaryTitle`);
|
||||||
const savedTtsMode = localStorage.getItem(`vigent_${storageKey}_ttsMode`);
|
const savedTtsMode = localStorage.getItem(`vigent_${storageKey}_ttsMode`);
|
||||||
const savedVoice = localStorage.getItem(`vigent_${storageKey}_voice`);
|
const savedVoice = localStorage.getItem(`vigent_${storageKey}_voice`);
|
||||||
const savedTextLang = localStorage.getItem(`vigent_${storageKey}_textLang`);
|
const savedTextLang = localStorage.getItem(`vigent_${storageKey}_textLang`);
|
||||||
const savedMaterial = localStorage.getItem(`vigent_${storageKey}_material`);
|
const savedMaterial = localStorage.getItem(`vigent_${storageKey}_material`);
|
||||||
const savedSubtitleStyle = localStorage.getItem(`vigent_${storageKey}_subtitleStyle`);
|
const savedSubtitleStyle = localStorage.getItem(`vigent_${storageKey}_subtitleStyle`);
|
||||||
const savedTitleStyle = localStorage.getItem(`vigent_${storageKey}_titleStyle`);
|
const savedTitleStyle = localStorage.getItem(`vigent_${storageKey}_titleStyle`);
|
||||||
|
const savedSecondaryTitleStyle = localStorage.getItem(`vigent_${storageKey}_secondaryTitleStyle`);
|
||||||
const savedSubtitleFontSize = localStorage.getItem(`vigent_${storageKey}_subtitleFontSize`);
|
const savedSubtitleFontSize = localStorage.getItem(`vigent_${storageKey}_subtitleFontSize`);
|
||||||
const savedTitleFontSize = localStorage.getItem(`vigent_${storageKey}_titleFontSize`);
|
const savedTitleFontSize = localStorage.getItem(`vigent_${storageKey}_titleFontSize`);
|
||||||
|
const savedSecondaryTitleFontSize = localStorage.getItem(`vigent_${storageKey}_secondaryTitleFontSize`);
|
||||||
const savedBgmId = localStorage.getItem(`vigent_${storageKey}_bgmId`);
|
const savedBgmId = localStorage.getItem(`vigent_${storageKey}_bgmId`);
|
||||||
const savedSelectedVideoId = localStorage.getItem(`vigent_${storageKey}_selectedVideoId`);
|
const savedSelectedVideoId = localStorage.getItem(`vigent_${storageKey}_latestGeneratedVideoId`)
|
||||||
|
|| localStorage.getItem(`vigent_${storageKey}_selectedVideoId`);
|
||||||
const savedSelectedAudioId = localStorage.getItem(`vigent_${storageKey}_selectedAudioId`);
|
const savedSelectedAudioId = localStorage.getItem(`vigent_${storageKey}_selectedAudioId`);
|
||||||
const savedBgmVolume = localStorage.getItem(`vigent_${storageKey}_bgmVolume`);
|
const savedBgmVolume = localStorage.getItem(`vigent_${storageKey}_bgmVolume`);
|
||||||
const savedEnableBgm = localStorage.getItem(`vigent_${storageKey}_enableBgm`);
|
const savedEnableBgm = localStorage.getItem(`vigent_${storageKey}_enableBgm`);
|
||||||
const savedTitleTopMargin = localStorage.getItem(`vigent_${storageKey}_titleTopMargin`);
|
const savedTitleTopMargin = localStorage.getItem(`vigent_${storageKey}_titleTopMargin`);
|
||||||
|
const savedSecondaryTitleTopMargin = localStorage.getItem(`vigent_${storageKey}_secondaryTitleTopMargin`);
|
||||||
|
const savedTitleDisplayMode = localStorage.getItem(`vigent_${storageKey}_titleDisplayMode`);
|
||||||
const savedSubtitleBottomMargin = localStorage.getItem(`vigent_${storageKey}_subtitleBottomMargin`);
|
const savedSubtitleBottomMargin = localStorage.getItem(`vigent_${storageKey}_subtitleBottomMargin`);
|
||||||
|
const savedOutputAspectRatio = localStorage.getItem(`vigent_${storageKey}_outputAspectRatio`);
|
||||||
|
const savedSpeed = localStorage.getItem(`vigent_${storageKey}_speed`);
|
||||||
|
|
||||||
setText(savedText || "大家好,欢迎来到我的频道,今天给大家分享一些有趣的内容。");
|
setText(savedText || "大家好,欢迎来到我的频道,今天给大家分享一些有趣的内容。");
|
||||||
setVideoTitle(savedTitle ? clampTitle(savedTitle) : "");
|
setVideoTitle(savedTitle ? clampTitle(savedTitle) : "");
|
||||||
|
setVideoSecondaryTitle(savedSecondaryTitle ? clampSecondaryTitle(savedSecondaryTitle) : "");
|
||||||
setTtsMode((savedTtsMode as 'edgetts' | 'voiceclone') || 'edgetts');
|
setTtsMode((savedTtsMode as 'edgetts' | 'voiceclone') || 'edgetts');
|
||||||
setVoice(savedVoice || "zh-CN-YunxiNeural");
|
setVoice(savedVoice || "zh-CN-YunxiNeural");
|
||||||
if (savedTextLang) setTextLang(savedTextLang);
|
if (savedTextLang) setTextLang(savedTextLang);
|
||||||
@@ -137,6 +176,7 @@ export const useHomePersistence = ({
|
|||||||
}
|
}
|
||||||
if (savedSubtitleStyle) setSelectedSubtitleStyleId(savedSubtitleStyle);
|
if (savedSubtitleStyle) setSelectedSubtitleStyleId(savedSubtitleStyle);
|
||||||
if (savedTitleStyle) setSelectedTitleStyleId(savedTitleStyle);
|
if (savedTitleStyle) setSelectedTitleStyleId(savedTitleStyle);
|
||||||
|
if (savedSecondaryTitleStyle) setSelectedSecondaryTitleStyleId(savedSecondaryTitleStyle);
|
||||||
|
|
||||||
if (savedSubtitleFontSize) {
|
if (savedSubtitleFontSize) {
|
||||||
const parsed = parseInt(savedSubtitleFontSize, 10);
|
const parsed = parseInt(savedSubtitleFontSize, 10);
|
||||||
@@ -154,21 +194,47 @@ export const useHomePersistence = ({
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
if (savedSecondaryTitleFontSize) {
|
||||||
|
const parsed = parseInt(savedSecondaryTitleFontSize, 10);
|
||||||
|
if (!Number.isNaN(parsed)) {
|
||||||
|
setSecondaryTitleFontSize(parsed);
|
||||||
|
setSecondaryTitleSizeLocked(true);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
if (savedBgmId) setSelectedBgmId(savedBgmId);
|
if (savedBgmId) setSelectedBgmId(savedBgmId);
|
||||||
if (savedBgmVolume) setBgmVolume(parseFloat(savedBgmVolume));
|
if (savedBgmVolume) setBgmVolume(parseFloat(savedBgmVolume));
|
||||||
if (savedEnableBgm !== null) setEnableBgm(savedEnableBgm === 'true');
|
if (savedEnableBgm !== null) setEnableBgm(savedEnableBgm === 'true');
|
||||||
if (savedSelectedVideoId) setSelectedVideoId(savedSelectedVideoId);
|
if (savedSelectedVideoId) setSelectedVideoId(savedSelectedVideoId);
|
||||||
|
// 消费后清除跨页面共享标记,避免反复覆盖
|
||||||
|
localStorage.removeItem(`vigent_${storageKey}_latestGeneratedVideoId`);
|
||||||
if (savedSelectedAudioId) setSelectedAudioId(savedSelectedAudioId);
|
if (savedSelectedAudioId) setSelectedAudioId(savedSelectedAudioId);
|
||||||
|
|
||||||
if (savedTitleTopMargin) {
|
if (savedTitleTopMargin) {
|
||||||
const parsed = parseInt(savedTitleTopMargin, 10);
|
const parsed = parseInt(savedTitleTopMargin, 10);
|
||||||
if (!Number.isNaN(parsed)) setTitleTopMargin(parsed);
|
if (!Number.isNaN(parsed)) setTitleTopMargin(parsed);
|
||||||
}
|
}
|
||||||
|
if (savedSecondaryTitleTopMargin) {
|
||||||
|
const parsed = parseInt(savedSecondaryTitleTopMargin, 10);
|
||||||
|
if (!Number.isNaN(parsed)) setSecondaryTitleTopMargin(parsed);
|
||||||
|
}
|
||||||
|
if (savedTitleDisplayMode === 'short' || savedTitleDisplayMode === 'persistent') {
|
||||||
|
setTitleDisplayMode(savedTitleDisplayMode);
|
||||||
|
}
|
||||||
if (savedSubtitleBottomMargin) {
|
if (savedSubtitleBottomMargin) {
|
||||||
const parsed = parseInt(savedSubtitleBottomMargin, 10);
|
const parsed = parseInt(savedSubtitleBottomMargin, 10);
|
||||||
if (!Number.isNaN(parsed)) setSubtitleBottomMargin(parsed);
|
if (!Number.isNaN(parsed)) setSubtitleBottomMargin(parsed);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
if (savedOutputAspectRatio === '9:16' || savedOutputAspectRatio === '16:9') {
|
||||||
|
setOutputAspectRatio(savedOutputAspectRatio);
|
||||||
|
}
|
||||||
|
|
||||||
|
if (savedSpeed) {
|
||||||
|
const parsed = parseFloat(savedSpeed);
|
||||||
|
if (!Number.isNaN(parsed)) setSpeed(parsed);
|
||||||
|
}
|
||||||
|
|
||||||
// eslint-disable-next-line react-hooks/set-state-in-effect
|
// eslint-disable-next-line react-hooks/set-state-in-effect
|
||||||
setIsRestored(true);
|
setIsRestored(true);
|
||||||
}, [
|
}, [
|
||||||
@@ -179,18 +245,26 @@ export const useHomePersistence = ({
|
|||||||
setSelectedMaterials,
|
setSelectedMaterials,
|
||||||
setSelectedSubtitleStyleId,
|
setSelectedSubtitleStyleId,
|
||||||
setSelectedTitleStyleId,
|
setSelectedTitleStyleId,
|
||||||
|
setSelectedSecondaryTitleStyleId,
|
||||||
setSelectedVideoId,
|
setSelectedVideoId,
|
||||||
setSelectedAudioId,
|
setSelectedAudioId,
|
||||||
|
setSpeed,
|
||||||
setSubtitleFontSize,
|
setSubtitleFontSize,
|
||||||
setSubtitleSizeLocked,
|
setSubtitleSizeLocked,
|
||||||
setText,
|
setText,
|
||||||
setTextLang,
|
setTextLang,
|
||||||
setTitleFontSize,
|
setTitleFontSize,
|
||||||
setTitleSizeLocked,
|
setTitleSizeLocked,
|
||||||
|
setSecondaryTitleFontSize,
|
||||||
|
setSecondaryTitleSizeLocked,
|
||||||
setTitleTopMargin,
|
setTitleTopMargin,
|
||||||
|
setSecondaryTitleTopMargin,
|
||||||
|
setTitleDisplayMode,
|
||||||
setSubtitleBottomMargin,
|
setSubtitleBottomMargin,
|
||||||
|
setOutputAspectRatio,
|
||||||
setTtsMode,
|
setTtsMode,
|
||||||
setVideoTitle,
|
setVideoTitle,
|
||||||
|
setVideoSecondaryTitle,
|
||||||
setVoice,
|
setVoice,
|
||||||
storageKey,
|
storageKey,
|
||||||
]);
|
]);
|
||||||
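The restore block above applies the same two guards throughout: numeric values pass through `parseInt`/`parseFloat` with a `Number.isNaN` check, and union-typed values are accepted only when they equal a known literal. Extracted as hypothetical standalone helpers (names and the fallback convention are not from the diff):

```typescript
// NaN-guarded numeric restore: a missing or unparseable value falls back
// instead of poisoning state with NaN.
function restoreNumber(raw: string | null, fallback: number): number {
  if (raw === null) return fallback;
  const parsed = parseFloat(raw);
  return Number.isNaN(parsed) ? fallback : parsed;
}

// Literal-union restore: only exact known values are accepted, so a stale
// or hand-edited localStorage entry can never widen the union type.
type AspectRatio = "9:16" | "16:9";
function restoreAspectRatio(raw: string | null, fallback: AspectRatio): AspectRatio {
  return raw === "9:16" || raw === "16:9" ? raw : fallback;
}
```

The equality comparison also acts as a TypeScript narrowing step: inside the ternary, `raw` is typed `"9:16" | "16:9"`, so no cast is needed.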
```diff
@@ -211,6 +285,14 @@ export const useHomePersistence = ({
 return () => clearTimeout(timeout);
 }, [videoTitle, storageKey, isRestored]);
+
+useEffect(() => {
+if (!isRestored) return;
+const timeout = setTimeout(() => {
+localStorage.setItem(`vigent_${storageKey}_secondaryTitle`, videoSecondaryTitle);
+}, 300);
+return () => clearTimeout(timeout);
+}, [videoSecondaryTitle, storageKey, isRestored]);

 useEffect(() => {
 if (isRestored) localStorage.setItem(`vigent_${storageKey}_ttsMode`, ttsMode);
 }, [ttsMode, storageKey, isRestored]);
@@ -241,6 +323,12 @@ export const useHomePersistence = ({
 }
 }, [selectedTitleStyleId, storageKey, isRestored]);
+
+useEffect(() => {
+if (isRestored && selectedSecondaryTitleStyleId) {
+localStorage.setItem(`vigent_${storageKey}_secondaryTitleStyle`, selectedSecondaryTitleStyleId);
+}
+}, [selectedSecondaryTitleStyleId, storageKey, isRestored]);

 useEffect(() => {
 if (isRestored) {
 localStorage.setItem(`vigent_${storageKey}_subtitleFontSize`, String(subtitleFontSize));
@@ -253,18 +341,42 @@ export const useHomePersistence = ({
 }
 }, [titleFontSize, storageKey, isRestored]);
+
+useEffect(() => {
+if (isRestored) {
+localStorage.setItem(`vigent_${storageKey}_secondaryTitleFontSize`, String(secondaryTitleFontSize));
+}
+}, [secondaryTitleFontSize, storageKey, isRestored]);

 useEffect(() => {
 if (isRestored) {
 localStorage.setItem(`vigent_${storageKey}_titleTopMargin`, String(titleTopMargin));
 }
 }, [titleTopMargin, storageKey, isRestored]);
+
+useEffect(() => {
+if (isRestored) {
+localStorage.setItem(`vigent_${storageKey}_secondaryTitleTopMargin`, String(secondaryTitleTopMargin));
+}
+}, [secondaryTitleTopMargin, storageKey, isRestored]);
+
+useEffect(() => {
+if (isRestored) {
```
|
||||||
|
localStorage.setItem(`vigent_${storageKey}_titleDisplayMode`, titleDisplayMode);
|
||||||
|
}
|
||||||
|
}, [titleDisplayMode, storageKey, isRestored]);
|
||||||
|
|
||||||
useEffect(() => {
|
useEffect(() => {
|
||||||
if (isRestored) {
|
if (isRestored) {
|
||||||
localStorage.setItem(`vigent_${storageKey}_subtitleBottomMargin`, String(subtitleBottomMargin));
|
localStorage.setItem(`vigent_${storageKey}_subtitleBottomMargin`, String(subtitleBottomMargin));
|
||||||
}
|
}
|
||||||
}, [subtitleBottomMargin, storageKey, isRestored]);
|
}, [subtitleBottomMargin, storageKey, isRestored]);
|
||||||
|
|
||||||
|
useEffect(() => {
|
||||||
|
if (isRestored) {
|
||||||
|
localStorage.setItem(`vigent_${storageKey}_outputAspectRatio`, outputAspectRatio);
|
||||||
|
}
|
||||||
|
}, [outputAspectRatio, storageKey, isRestored]);
|
||||||
|
|
||||||
useEffect(() => {
|
useEffect(() => {
|
||||||
if (isRestored) {
|
if (isRestored) {
|
||||||
localStorage.setItem(`vigent_${storageKey}_bgmId`, selectedBgmId);
|
localStorage.setItem(`vigent_${storageKey}_bgmId`, selectedBgmId);
|
||||||
@@ -309,5 +421,11 @@ export const useHomePersistence = ({
|
|||||||
}
|
}
|
||||||
}, [selectedRefAudio, storageKey, isRestored]);
|
}, [selectedRefAudio, storageKey, isRestored]);
|
||||||
|
|
||||||
|
useEffect(() => {
|
||||||
|
if (isRestored) {
|
||||||
|
localStorage.setItem(`vigent_${storageKey}_speed`, String(speed));
|
||||||
|
}
|
||||||
|
}, [speed, storageKey, isRestored]);
|
||||||
|
|
||||||
return { isRestored };
|
return { isRestored };
|
||||||
};
|
};
|
||||||
|
|||||||
@@ -185,11 +185,14 @@ export const useMaterials = ({
       ).then((enriched) => setMaterials(enriched));
     }
 
-      // Find newly added material IDs and auto-select them
+      // Find newly added materials and select only the new upload by default, to avoid accidentally triggering multi-material mode
       const oldIds = new Set(materials.map((m) => m.id));
       const newIds = nextMaterials.filter((m) => !oldIds.has(m.id)).map((m) => m.id);
       if (newIds.length > 0) {
-        setSelectedMaterials((prev) => [...prev, ...newIds]);
+        setSelectedMaterials([newIds[0]]);
+      } else if (nextMaterials[0]?.id) {
+        // Fallback: even if no new item is detected, keep the single-material default of selecting the latest one
+        setSelectedMaterials([nextMaterials[0].id]);
       }
     } catch (err: unknown) {
       console.error("Upload failed:", err);
@@ -200,7 +203,7 @@ export const useMaterials = ({
     }
 
     e.target.value = '';
-  }, [fetchMaterials]);
+  }, [materials, setSelectedMaterials]);
 
   return {
     materials,
@@ -13,14 +13,12 @@ interface RefAudio {
 }
 
 interface UseRefAudiosOptions {
-  fixedRefText: string;
   selectedRefAudio: RefAudio | null;
   setSelectedRefAudio: React.Dispatch<React.SetStateAction<RefAudio | null>>;
   setRefText: React.Dispatch<React.SetStateAction<string>>;
 }
 
 export const useRefAudios = ({
-  fixedRefText,
   selectedRefAudio,
   setSelectedRefAudio,
   setRefText,
@@ -28,6 +26,7 @@ export const useRefAudios = ({
   const [refAudios, setRefAudios] = useState<RefAudio[]>([]);
   const [isUploadingRef, setIsUploadingRef] = useState(false);
   const [uploadRefError, setUploadRefError] = useState<string | null>(null);
+  const [retranscribingId, setRetranscribingId] = useState<string | null>(null);
 
   const fetchRefAudios = useCallback(async () => {
     try {
@@ -42,15 +41,12 @@ export const useRefAudios = ({
   }, []);
 
   const uploadRefAudio = useCallback(async (file: File) => {
-    const refTextInput = fixedRefText;
-
     setIsUploadingRef(true);
     setUploadRefError(null);
 
     try {
       const formData = new FormData();
       formData.append('file', file);
-      formData.append('ref_text', refTextInput);
 
       const { data: res } = await api.post<ApiResponse<RefAudio>>('/api/ref-audios', formData, {
         headers: { 'Content-Type': 'multipart/form-data' },
@@ -68,7 +64,7 @@ export const useRefAudios = ({
       const errorMsg = axiosErr.response?.data?.message || axiosErr.message || String(err);
       setUploadRefError(`上传失败: ${errorMsg}`);
     }
-  }, [fetchRefAudios, fixedRefText, setRefText, setSelectedRefAudio]);
+  }, [fetchRefAudios, setRefText, setSelectedRefAudio]);
 
   const deleteRefAudio = useCallback(async (audioId: string) => {
     if (!confirm("确定要删除这个参考音频吗?")) return;
@@ -84,6 +80,28 @@ export const useRefAudios = ({
     }
   }, [fetchRefAudios, selectedRefAudio, setRefText, setSelectedRefAudio]);
 
+  const retranscribeRefAudio = useCallback(async (audioId: string) => {
+    setRetranscribingId(audioId);
+    try {
+      const { data: res } = await api.post<ApiResponse<{ ref_text: string }>>(
+        `/api/ref-audios/${encodeURIComponent(audioId)}/retranscribe`
+      );
+      const payload = unwrap(res);
+      toast.success("识别完成");
+      // Refresh the list and update the current selection
+      await fetchRefAudios();
+      if (selectedRefAudio?.id === audioId) {
+        setRefText(payload.ref_text);
+      }
+    } catch (err: unknown) {
+      const axiosErr = err as { response?: { data?: { message?: string } }; message?: string };
+      const errorMsg = axiosErr.response?.data?.message || axiosErr.message || String(err);
+      toast.error(`识别失败: ${errorMsg}`);
+    } finally {
+      setRetranscribingId(null);
+    }
+  }, [fetchRefAudios, selectedRefAudio, setRefText]);
+
   return {
     refAudios,
     isUploadingRef,
@@ -92,5 +110,7 @@ export const useRefAudios = ({
     fetchRefAudios,
     uploadRefAudio,
     deleteRefAudio,
+    retranscribeRefAudio,
+    retranscribingId,
   };
 };
@@ -12,12 +12,13 @@ export interface TimelineSegment {
   color: string;
 }
 
 export interface CustomAssignment {
   material_path: string;
   start: number;
   end: number;
   source_start: number;
+  source_end?: number;
 }
 
 const COLORS = ["#8b5cf6", "#ec4899", "#06b6d4", "#f59e0b", "#10b981", "#f97316"];
 
@@ -31,14 +32,16 @@ interface SegmentSnapshot {
 }
 
 /** Get effective duration of a segment (clipped range or full material duration) */
 function getEffectiveDuration(
   seg: { sourceStart: number; sourceEnd: number; materialId: string },
   mats: Material[]
 ): number {
-  if (seg.sourceEnd > seg.sourceStart) return seg.sourceEnd - seg.sourceStart;
-  const mat = mats.find((m) => m.id === seg.materialId);
-  return mat?.duration_sec ?? 0;
+  const mat = mats.find((m) => m.id === seg.materialId);
+  const matDur = mat?.duration_sec ?? 0;
+  if (seg.sourceEnd > seg.sourceStart) return seg.sourceEnd - seg.sourceStart;
+  if (seg.sourceStart > 0) return Math.max(matDur - seg.sourceStart, 0);
+  return matDur;
 }
 
 /**
  * Recalculate segment start/end positions based on effective durations.
@@ -97,11 +100,17 @@ export const useTimelineEditor = ({
   const prevKey = useRef("");
   const restoredRef = useRef(false);
 
   // Refs for stable callbacks (avoid recreating on every materials/duration change)
   const materialsRef = useRef(materials);
-  materialsRef.current = materials;
   const audioDurationRef = useRef(audioDuration);
-  audioDurationRef.current = audioDuration;
+
+  useEffect(() => {
+    materialsRef.current = materials;
+  }, [materials]);
+
+  useEffect(() => {
+    audioDurationRef.current = audioDuration;
+  }, [audioDuration]);
 
   // Build a durationsKey so segments re-init when material durations become available
   const durationsKey = selectedMaterials
@@ -227,14 +236,15 @@ export const useTimelineEditor = ({
       .filter((seg) => seg.start < duration)
       .map((seg) => {
         const mat = materialsRef.current.find((m) => m.id === seg.materialId);
         return {
           material_path: mat?.path || seg.materialId,
           start: seg.start,
           end: seg.end,
           source_start: seg.sourceStart,
+          source_end: seg.sourceEnd > seg.sourceStart ? seg.sourceEnd : undefined,
         };
       });
   }, [segments]);
 
   return {
     segments,
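The reworked fallback chain in `getEffectiveDuration` can be exercised in isolation. The following is a minimal sketch that restates the diff's logic with an assumed `Material` shape (only `id` and `duration_sec`, which is all the function reads): an explicit clip range wins, then "from `sourceStart` to the end of the material", then the full material duration.

```typescript
interface Material { id: string; duration_sec?: number }

// Fallback chain: clipped range → tail after sourceStart → full duration.
function getEffectiveDuration(
  seg: { sourceStart: number; sourceEnd: number; materialId: string },
  mats: Material[]
): number {
  const mat = mats.find((m) => m.id === seg.materialId);
  const matDur = mat?.duration_sec ?? 0;
  if (seg.sourceEnd > seg.sourceStart) return seg.sourceEnd - seg.sourceStart;
  if (seg.sourceStart > 0) return Math.max(matDur - seg.sourceStart, 0);
  return matDur;
}

// A 10s material: clipped [2,5] → 3; start-only clip at 4 → 6; unclipped → 10.
const mats: Material[] = [{ id: "a", duration_sec: 10 }];
console.log(getEffectiveDuration({ sourceStart: 2, sourceEnd: 5, materialId: "a" }, mats)); // 3
console.log(getEffectiveDuration({ sourceStart: 4, sourceEnd: 0, materialId: "a" }, mats)); // 6
console.log(getEffectiveDuration({ sourceStart: 0, sourceEnd: 0, materialId: "a" }, mats)); // 10
```

The middle branch is the behavior the old version lacked: a segment trimmed only at its start previously fell through to the full material duration.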
94	frontend/src/features/home/model/useVideoFrameCapture.ts	Normal file
@@ -0,0 +1,94 @@
+import { useEffect, useState } from "react";
+
+/** The preview window is at most 280px wide, so captures never need to exceed this size */
+const MAX_CAPTURE_WIDTH = 480;
+
+/**
+ * Capture the frame at 0.1s from a video URL and return it as a JPEG data URL.
+ * Returns null on failure (the caller falls back to the gradient background).
+ */
+export function useVideoFrameCapture(videoUrl: string | null): string | null {
+  const [frameUrl, setFrameUrl] = useState<string | null>(null);
+
+  useEffect(() => {
+    if (!videoUrl) {
+      setFrameUrl(null);
+      return;
+    }
+
+    let isActive = true;
+    const video = document.createElement("video");
+    video.crossOrigin = "anonymous";
+    video.muted = true;
+    video.preload = "auto";
+    video.playsInline = true;
+
+    const cleanup = () => {
+      video.removeEventListener("loadedmetadata", onLoaded);
+      video.removeEventListener("canplay", onLoaded);
+      video.removeEventListener("seeked", onSeeked);
+      video.removeEventListener("error", onError);
+      video.src = "";
+      video.load();
+    };
+
+    const onSeeked = () => {
+      if (!isActive) return;
+      try {
+        const vw = video.videoWidth;
+        const vh = video.videoHeight;
+        if (!vw || !vh) {
+          if (isActive) setFrameUrl(null);
+          cleanup();
+          return;
+        }
+
+        const scale = Math.min(1, MAX_CAPTURE_WIDTH / vw);
+        const cw = Math.round(vw * scale);
+        const ch = Math.round(vh * scale);
+
+        const canvas = document.createElement("canvas");
+        canvas.width = cw;
+        canvas.height = ch;
+        const ctx = canvas.getContext("2d");
+        if (!ctx) {
+          if (isActive) setFrameUrl(null);
+          cleanup();
+          return;
+        }
+        ctx.drawImage(video, 0, 0, cw, ch);
+        const dataUrl = canvas.toDataURL("image/jpeg", 0.7);
+        if (isActive) setFrameUrl(dataUrl);
+      } catch {
+        if (isActive) setFrameUrl(null);
+      }
+      cleanup();
+    };
+
+    let seeked = false;
+    const onLoaded = () => {
+      if (!isActive || seeked) return;
+      seeked = true;
+      video.currentTime = 0.1;
+    };
+
+    const onError = () => {
+      if (isActive) setFrameUrl(null);
+      cleanup();
+    };
+
+    // Attach listeners before setting src
+    video.addEventListener("loadedmetadata", onLoaded);
+    video.addEventListener("canplay", onLoaded);
+    video.addEventListener("seeked", onSeeked);
+    video.addEventListener("error", onError);
+    video.src = videoUrl;
+
+    return () => {
+      isActive = false;
+      cleanup();
+    };
+  }, [videoUrl]);
+
+  return frameUrl;
+}
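The scaling arithmetic in the new hook caps the capture at `MAX_CAPTURE_WIDTH` while preserving aspect ratio. A standalone sketch of just that step, extracted from the hook (the `captureSize` name is illustrative, not part of the source):

```typescript
const MAX_CAPTURE_WIDTH = 480;

// The downscale factor is clamped to 1, so videos narrower than the cap
// are captured at native size rather than upscaled.
function captureSize(vw: number, vh: number): { cw: number; ch: number } {
  const scale = Math.min(1, MAX_CAPTURE_WIDTH / vw);
  return { cw: Math.round(vw * scale), ch: Math.round(vh * scale) };
}

console.log(captureSize(1920, 1080)); // → { cw: 480, ch: 270 }
console.log(captureSize(320, 240));   // → { cw: 320, ch: 240 }
```

Capturing at 480px wide keeps the JPEG data URL small while still exceeding the 280px preview window the file comment mentions.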
@@ -43,7 +43,7 @@ export function BgmPanel({
   return (
     <div className="bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm">
       <div className="flex items-center justify-between mb-4">
-        <h2 className="text-lg font-semibold text-white flex items-center gap-2">🎵 背景音乐</h2>
+        <h2 className="text-lg font-semibold text-white flex items-center gap-2">五、背景音乐</h2>
         <div className="flex items-center gap-2">
           <button
             onClick={onRefresh}
@@ -213,7 +213,7 @@ export function ClipTrimmer({
       {/* Custom range track */}
       <div
         ref={trackRef}
-        className="relative h-8 cursor-pointer select-none touch-none"
+        className="relative h-10 cursor-pointer select-none touch-none"
         onPointerMove={handleTrackPointerMove}
         onPointerUp={handleTrackPointerUp}
         onPointerLeave={handleTrackPointerUp}
@@ -242,7 +242,7 @@ export function ClipTrimmer({
       {/* Start thumb */}
       <div
         onPointerDown={(e) => handleThumbPointerDown("start", e)}
-        className="absolute top-1/2 -translate-y-1/2 -translate-x-1/2 w-4 h-4 rounded-full bg-purple-500 border-2 border-white shadow-lg cursor-grab active:cursor-grabbing hover:scale-110 transition-transform z-10"
+        className="absolute top-1/2 -translate-y-1/2 -translate-x-1/2 w-5 h-5 rounded-full bg-purple-500 border-2 border-white shadow-lg cursor-grab active:cursor-grabbing hover:scale-110 transition-transform z-10"
         style={{ left: `${startPct}%` }}
         title={`起点: ${formatSec(sourceStart)}`}
       />
@@ -250,7 +250,7 @@ export function ClipTrimmer({
       {/* End thumb */}
       <div
         onPointerDown={(e) => handleThumbPointerDown("end", e)}
-        className="absolute top-1/2 -translate-y-1/2 -translate-x-1/2 w-4 h-4 rounded-full bg-pink-500 border-2 border-white shadow-lg cursor-grab active:cursor-grabbing hover:scale-110 transition-transform z-10"
+        className="absolute top-1/2 -translate-y-1/2 -translate-x-1/2 w-5 h-5 rounded-full bg-pink-500 border-2 border-white shadow-lg cursor-grab active:cursor-grabbing hover:scale-110 transition-transform z-10"
         style={{ left: `${endPct}%` }}
         title={`终点: ${formatSec(effectiveEnd)}`}
       />
@@ -35,9 +35,13 @@ interface TitleStyleOption {
 interface FloatingStylePreviewProps {
   onClose: () => void;
   videoTitle: string;
+  videoSecondaryTitle: string;
   titleStyles: TitleStyleOption[];
   selectedTitleStyleId: string;
   titleFontSize: number;
+  selectedSecondaryTitleStyleId: string;
+  secondaryTitleFontSize: number;
+  secondaryTitleTopMargin: number;
   subtitleStyles: SubtitleStyleOption[];
   selectedSubtitleStyleId: string;
   subtitleFontSize: number;
@@ -49,16 +53,22 @@ interface FloatingStylePreviewProps {
   buildTextShadow: (color: string, size: number) => string;
   previewBaseWidth: number;
   previewBaseHeight: number;
+  previewBackgroundUrl?: string | null;
 }
 
 const DESKTOP_WIDTH = 280;
+const MOBILE_WIDTH = 160;
 
 export function FloatingStylePreview({
   onClose,
   videoTitle,
+  videoSecondaryTitle,
   titleStyles,
   selectedTitleStyleId,
   titleFontSize,
+  selectedSecondaryTitleStyleId,
+  secondaryTitleFontSize,
+  secondaryTitleTopMargin,
   subtitleStyles,
   selectedSubtitleStyleId,
   subtitleFontSize,
@@ -70,11 +80,10 @@ export function FloatingStylePreview({
   buildTextShadow,
   previewBaseWidth,
   previewBaseHeight,
+  previewBackgroundUrl,
 }: FloatingStylePreviewProps) {
   const isMobile = typeof window !== "undefined" && window.innerWidth < 640;
-  const windowWidth = isMobile
-    ? Math.min(window.innerWidth - 32, 360)
-    : DESKTOP_WIDTH;
+  const windowWidth = isMobile ? MOBILE_WIDTH : DESKTOP_WIDTH;
 
   useEffect(() => {
     const handleKeyDown = (e: KeyboardEvent) => {
@@ -86,6 +95,8 @@ export function FloatingStylePreview({
 
   const previewScale = windowWidth / previewBaseWidth;
   const previewHeight = previewBaseHeight * previewScale;
+  const widthScale = Math.min(1, previewBaseWidth / 1080);
+  const responsiveScale = Math.max(0.55, widthScale);
 
   const activeSubtitleStyle = subtitleStyles.find((s) => s.id === selectedSubtitleStyleId)
     || subtitleStyles.find((s) => s.is_default)
@@ -102,8 +113,8 @@ export function FloatingStylePreview({
   const subtitleHighlightColor = activeSubtitleStyle?.highlight_color || "#FFE600";
   const subtitleNormalColor = activeSubtitleStyle?.normal_color || "#FFFFFF";
   const subtitleStrokeColor = activeSubtitleStyle?.stroke_color || "#000000";
-  const subtitleStrokeSize = activeSubtitleStyle?.stroke_size ?? 3;
-  const subtitleLetterSpacing = activeSubtitleStyle?.letter_spacing ?? 2;
+  const subtitleStrokeSize = Math.max(1, Math.round((activeSubtitleStyle?.stroke_size ?? 3) * responsiveScale));
+  const subtitleLetterSpacing = Math.max(0, (activeSubtitleStyle?.letter_spacing ?? 2) * responsiveScale);
   const subtitleFontFamilyName = `SubtitlePreview-${activeSubtitleStyle?.id || "default"}`;
   const subtitleFontUrl = activeSubtitleStyle?.font_file
     ? resolveAssetUrl(`fonts/${activeSubtitleStyle.font_file}`)
@@ -111,23 +122,45 @@ export function FloatingStylePreview({
 
   const titleColor = activeTitleStyle?.color || "#FFFFFF";
   const titleStrokeColor = activeTitleStyle?.stroke_color || "#000000";
-  const titleStrokeSize = activeTitleStyle?.stroke_size ?? 8;
-  const titleLetterSpacing = activeTitleStyle?.letter_spacing ?? 4;
+  const titleStrokeSize = Math.max(1, Math.round((activeTitleStyle?.stroke_size ?? 8) * responsiveScale));
+  const titleLetterSpacing = Math.max(0, (activeTitleStyle?.letter_spacing ?? 4) * responsiveScale);
   const titleFontWeight = activeTitleStyle?.font_weight ?? 900;
   const titleFontFamilyName = `TitlePreview-${activeTitleStyle?.id || "default"}`;
   const titleFontUrl = activeTitleStyle?.font_file
     ? resolveAssetUrl(`fonts/${activeTitleStyle.font_file}`)
     : null;
 
+  const scaledTitleFontSize = Math.max(36, Math.round(titleFontSize * responsiveScale));
+  const scaledSubtitleFontSize = Math.max(28, Math.round(subtitleFontSize * responsiveScale));
+  const scaledTitleTopMargin = Math.max(0, Math.round(titleTopMargin * responsiveScale));
+  const scaledSubtitleBottomMargin = Math.max(0, Math.round(subtitleBottomMargin * responsiveScale));
+
+  // Secondary title styles
+  const activeSecondaryTitleStyle = titleStyles.find((s) => s.id === selectedSecondaryTitleStyleId)
+    || activeTitleStyle;
+  const stColor = activeSecondaryTitleStyle?.color || "#FFFFFF";
+  const stStrokeColor = activeSecondaryTitleStyle?.stroke_color || "#000000";
+  const stStrokeSize = Math.max(1, Math.round((activeSecondaryTitleStyle?.stroke_size ?? 6) * responsiveScale));
+  const stLetterSpacing = Math.max(0, (activeSecondaryTitleStyle?.letter_spacing ?? 2) * responsiveScale);
+  const stFontWeight = activeSecondaryTitleStyle?.font_weight ?? 700;
+  const stFontFamilyName = `SecondaryTitlePreview-${activeSecondaryTitleStyle?.id || "default"}`;
+  const stFontUrl = activeSecondaryTitleStyle?.font_file
+    ? resolveAssetUrl(`fonts/${activeSecondaryTitleStyle.font_file}`)
+    : null;
+  const scaledSecondaryTitleFontSize = Math.max(24, Math.round(secondaryTitleFontSize * responsiveScale));
+  const scaledSecondaryTitleTopMargin = Math.max(0, Math.round(secondaryTitleTopMargin * responsiveScale));
+  const previewSecondaryTitleText = videoSecondaryTitle.trim() || "";
+
   const content = (
     <div
       style={{
         position: "fixed",
-        left: "16px",
-        top: "16px",
+        ...(isMobile
+          ? { right: "12px", bottom: "12px" }
+          : { left: "16px", top: "16px" }),
         width: `${windowWidth}px`,
         zIndex: 150,
-        maxHeight: "calc(100dvh - 32px)",
+        maxHeight: isMobile ? "calc(50dvh)" : "calc(100dvh - 32px)",
         overflow: "hidden",
       }}
       className="rounded-xl border border-white/20 bg-gray-900/95 backdrop-blur-md shadow-2xl"
@@ -152,13 +185,18 @@ export function FloatingStylePreview({
       className="relative overflow-hidden rounded-b-xl"
       style={{ height: `${previewHeight}px` }}
     >
-      {(titleFontUrl || subtitleFontUrl) && (
+      {(titleFontUrl || subtitleFontUrl || stFontUrl) && (
         <style>{`
           ${titleFontUrl ? `@font-face { font-family: '${titleFontFamilyName}'; src: url('${titleFontUrl}') format('${getFontFormat(activeTitleStyle?.font_file)}'); font-weight: 400; font-style: normal; }` : ''}
+          ${stFontUrl && stFontUrl !== titleFontUrl ? `@font-face { font-family: '${stFontFamilyName}'; src: url('${stFontUrl}') format('${getFontFormat(activeSecondaryTitleStyle?.font_file)}'); font-weight: 400; font-style: normal; }` : ''}
          ${subtitleFontUrl ? `@font-face { font-family: '${subtitleFontFamilyName}'; src: url('${subtitleFontUrl}') format('${getFontFormat(activeSubtitleStyle?.font_file)}'); font-weight: 400; font-style: normal; }` : ''}
         `}</style>
       )}
-      <div className="absolute inset-0 opacity-20 bg-gradient-to-br from-purple-500/40 via-transparent to-pink-500/30" />
+      {previewBackgroundUrl ? (
+        <img src={previewBackgroundUrl} alt="" className="absolute inset-0 w-full h-full object-cover" />
+      ) : (
+        <div className="absolute inset-0 opacity-20 bg-gradient-to-br from-purple-500/40 via-transparent to-pink-500/30" />
+      )}
       <div
         className="absolute top-0 left-0"
         style={{
@@ -172,39 +210,78 @@ export function FloatingStylePreview({
         className="w-full text-center"
         style={{
           position: 'absolute',
-          top: `${titleTopMargin}px`,
+          top: `${scaledTitleTopMargin}px`,
           left: 0,
           right: 0,
-          color: titleColor,
-          fontSize: `${titleFontSize}px`,
-          fontWeight: titleFontWeight,
-          fontFamily: titleFontUrl
-            ? `'${titleFontFamilyName}', "PingFang SC", "Hiragino Sans GB", "Microsoft YaHei", "Noto Sans SC", sans-serif`
-            : '"PingFang SC", "Hiragino Sans GB", "Microsoft YaHei", "Noto Sans SC", sans-serif',
-          textShadow: buildTextShadow(titleStrokeColor, titleStrokeSize),
-          letterSpacing: `${titleLetterSpacing}px`,
-          lineHeight: 1.2,
-          opacity: videoTitle.trim() ? 1 : 0.7,
+          display: 'flex',
+          flexDirection: 'column',
+          alignItems: 'center',
           padding: '0 5%',
+          boxSizing: 'border-box',
         }}
       >
-        {previewTitleText}
+        <div
+          style={{
+            color: titleColor,
+            fontSize: `${scaledTitleFontSize}px`,
+            fontWeight: titleFontWeight,
+            fontFamily: titleFontUrl
+              ? `'${titleFontFamilyName}', "PingFang SC", "Hiragino Sans GB", "Microsoft YaHei", "Noto Sans SC", sans-serif`
+              : '"PingFang SC", "Hiragino Sans GB", "Microsoft YaHei", "Noto Sans SC", sans-serif',
+            textShadow: buildTextShadow(titleStrokeColor, titleStrokeSize),
+            letterSpacing: `${titleLetterSpacing}px`,
+            lineHeight: 1.2,
+            whiteSpace: 'normal',
+            wordBreak: 'break-word',
+            overflowWrap: 'anywhere',
+            opacity: videoTitle.trim() ? 1 : 0.7,
+          }}
+        >
+          {previewTitleText}
+        </div>
+        {previewSecondaryTitleText && (
+          <div
+            style={{
+              marginTop: `${scaledSecondaryTitleTopMargin}px`,
+              color: stColor,
+              fontSize: `${scaledSecondaryTitleFontSize}px`,
+              fontWeight: stFontWeight,
+              fontFamily: stFontUrl && stFontUrl !== titleFontUrl
+                ? `'${stFontFamilyName}', "PingFang SC", "Hiragino Sans GB", "Microsoft YaHei", "Noto Sans SC", sans-serif`
+                : titleFontUrl
+                  ? `'${titleFontFamilyName}', "PingFang SC", "Hiragino Sans GB", "Microsoft YaHei", "Noto Sans SC", sans-serif`
+                  : '"PingFang SC", "Hiragino Sans GB", "Microsoft YaHei", "Noto Sans SC", sans-serif',
+              textShadow: buildTextShadow(stStrokeColor, stStrokeSize),
+              letterSpacing: `${stLetterSpacing}px`,
+              lineHeight: 1.2,
+              whiteSpace: 'normal',
+              wordBreak: 'break-word',
+              overflowWrap: 'anywhere',
+            }}
+          >
+            {previewSecondaryTitleText}
+          </div>
+        )}
       </div>
 
       <div
         className="w-full text-center"
         style={{
           position: 'absolute',
-          bottom: `${subtitleBottomMargin}px`,
+          bottom: `${scaledSubtitleBottomMargin}px`,
           left: 0,
           right: 0,
|
||||||
fontSize: `${subtitleFontSize}px`,
|
fontSize: `${scaledSubtitleFontSize}px`,
|
||||||
fontFamily: subtitleFontUrl
|
fontFamily: subtitleFontUrl
|
||||||
? `'${subtitleFontFamilyName}', "PingFang SC", "Hiragino Sans GB", "Microsoft YaHei", "Noto Sans SC", sans-serif`
|
? `'${subtitleFontFamilyName}', "PingFang SC", "Hiragino Sans GB", "Microsoft YaHei", "Noto Sans SC", sans-serif`
|
||||||
: '"PingFang SC", "Hiragino Sans GB", "Microsoft YaHei", "Noto Sans SC", sans-serif',
|
: '"PingFang SC", "Hiragino Sans GB", "Microsoft YaHei", "Noto Sans SC", sans-serif',
|
||||||
textShadow: buildTextShadow(subtitleStrokeColor, subtitleStrokeSize),
|
textShadow: buildTextShadow(subtitleStrokeColor, subtitleStrokeSize),
|
||||||
letterSpacing: `${subtitleLetterSpacing}px`,
|
letterSpacing: `${subtitleLetterSpacing}px`,
|
||||||
lineHeight: 1.35,
|
lineHeight: 1.35,
|
||||||
|
whiteSpace: 'normal',
|
||||||
|
wordBreak: 'break-word',
|
||||||
|
overflowWrap: 'anywhere',
|
||||||
|
boxSizing: 'border-box',
|
||||||
padding: '0 6%',
|
padding: '0 6%',
|
||||||
}}
|
}}
|
||||||
>
|
>
|
||||||
|
|||||||
```diff
@@ -1,5 +1,5 @@
 import { useState, useRef, useCallback, useEffect } from "react";
-import { Play, Pause, Pencil, Trash2, Check, X, RefreshCw, Mic } from "lucide-react";
+import { Play, Pause, Pencil, Trash2, Check, X, RefreshCw, Mic, ChevronDown } from "lucide-react";
 import type { GeneratedAudio } from "@/features/home/model/useGeneratedAudios";
 
 interface AudioTask {
@@ -19,6 +19,11 @@ interface GeneratedAudiosPanelProps {
 onDeleteAudio: (id: string) => void;
 onRenameAudio: (id: string, newName: string) => void;
 hasText: boolean;
+missingRefAudio?: boolean;
+speed: number;
+onSpeedChange: (speed: number) => void;
+ttsMode: string;
+embedded?: boolean;
 }
 
 export function GeneratedAudiosPanel({
@@ -32,11 +37,18 @@ export function GeneratedAudiosPanel({
 onDeleteAudio,
 onRenameAudio,
 hasText,
+missingRefAudio = false,
+speed,
+onSpeedChange,
+ttsMode,
+embedded = false,
 }: GeneratedAudiosPanelProps) {
 const [editingId, setEditingId] = useState<string | null>(null);
 const [editName, setEditName] = useState("");
 const [playingId, setPlayingId] = useState<string | null>(null);
+const [speedOpen, setSpeedOpen] = useState(false);
 const audioRef = useRef<HTMLAudioElement | null>(null);
+const speedRef = useRef<HTMLDivElement>(null);
 
 const stopPlaying = useCallback(() => {
 if (audioRef.current) {
@@ -57,6 +69,17 @@ export function GeneratedAudiosPanel({
 };
 }, []);
 
+// Close speed dropdown on click outside
+useEffect(() => {
+const handler = (e: MouseEvent) => {
+if (speedRef.current && !speedRef.current.contains(e.target as Node)) {
+setSpeedOpen(false);
+}
+};
+if (speedOpen) document.addEventListener("mousedown", handler);
+return () => document.removeEventListener("mousedown", handler);
+}, [speedOpen]);
+
 const togglePlay = (audio: GeneratedAudio, e: React.MouseEvent) => {
 e.stopPropagation();
 if (playingId === audio.id) {
@@ -91,34 +114,142 @@ export function GeneratedAudiosPanel({
 setEditName("");
 };
 
-return (
-<div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm">
-<div className="flex justify-between items-center gap-2 mb-4">
-<h2 className="text-base sm:text-lg font-semibold text-white flex items-center gap-2 whitespace-nowrap">
-<Mic className="h-4 w-4 text-purple-400" />
-配音列表
-</h2>
-<div className="flex gap-1.5">
-<button
-onClick={onGenerateAudio}
-disabled={isGeneratingAudio || !hasText}
-className={`px-2 py-1 text-xs rounded transition-all whitespace-nowrap flex items-center gap-1 ${
-isGeneratingAudio || !hasText
-? "bg-gray-600 cursor-not-allowed text-gray-400"
-: "bg-gradient-to-r from-purple-600 to-pink-600 hover:from-purple-700 hover:to-pink-700 text-white"
-}`}
->
-<Mic className="h-3.5 w-3.5" />
-生成配音
-</button>
-<button
-onClick={onRefresh}
-className="px-2 py-1 text-xs bg-white/10 hover:bg-white/20 rounded text-gray-300 whitespace-nowrap flex items-center gap-1"
->
-<RefreshCw className="h-3.5 w-3.5" />
+const canGenerate = hasText && !missingRefAudio;
+
+const speedOptions = [
+{ value: 0.8, label: "较慢" },
+{ value: 0.9, label: "稍慢" },
+{ value: 1.0, label: "正常" },
+{ value: 1.1, label: "稍快" },
+{ value: 1.2, label: "较快" },
+] as const;
+const currentSpeedLabel = speedOptions.find((o) => o.value === speed)?.label ?? "正常";
+
+const content = (
+<>
+{embedded ? (
+<>
+{/* Row 1: 语速 + 生成配音 (right-aligned) */}
+<div className="flex justify-end items-center gap-1.5 mb-3">
+{ttsMode === "voiceclone" && (
+<div ref={speedRef} className="relative">
+<button
+onClick={() => setSpeedOpen((v) => !v)}
+className="px-2 py-1 text-xs bg-white/10 hover:bg-white/20 rounded text-gray-300 whitespace-nowrap flex items-center gap-1 transition-all"
+>
+语速: {currentSpeedLabel}
+<ChevronDown className={`h-3 w-3 transition-transform ${speedOpen ? "rotate-180" : ""}`} />
 </button>
+{speedOpen && (
+<div className="absolute right-0 top-full mt-1 bg-gray-800 border border-white/20 rounded-lg shadow-xl py-1 z-50 min-w-[80px]">
+{speedOptions.map((opt) => (
+<button
+key={opt.value}
+onClick={() => { onSpeedChange(opt.value); setSpeedOpen(false); }}
+className={`w-full text-left px-3 py-1.5 text-xs transition-colors ${
+speed === opt.value
+? "bg-purple-600/40 text-purple-200"
+: "text-gray-300 hover:bg-white/10"
+}`}
+>
+{opt.label}
+</button>
+))}
+</div>
+)}
+</div>
+)}
+<button
+onClick={onGenerateAudio}
+disabled={isGeneratingAudio || !canGenerate}
+title={missingRefAudio ? "请先选择参考音频" : !hasText ? "请先输入文案" : ""}
+className={`px-4 py-2 text-sm font-medium rounded-lg transition-all whitespace-nowrap flex items-center gap-1.5 shadow-sm ${
+isGeneratingAudio || !canGenerate
+? "bg-gray-600 cursor-not-allowed text-gray-400"
+: "bg-gradient-to-r from-purple-600 to-pink-600 hover:from-purple-700 hover:to-pink-700 text-white hover:shadow-md"
+}`}
+>
+<Mic className="h-4 w-4" />
+生成配音
+</button>
+</div>
+{/* Row 2: 配音列表 + 刷新 */}
+<div className="flex justify-between items-center mb-3">
+<h3 className="text-sm font-medium text-gray-400">配音列表</h3>
+<button
+onClick={onRefresh}
+className="px-2 py-1 text-xs bg-white/10 hover:bg-white/20 rounded text-gray-300 whitespace-nowrap flex items-center gap-1"
+>
+<RefreshCw className="h-3.5 w-3.5" />
+刷新
+</button>
+</div>
+</>
+) : (
+<div className="flex justify-between items-center gap-2 mb-4">
+<h2 className="text-base sm:text-lg font-semibold text-white flex items-center gap-2 whitespace-nowrap">
+<Mic className="h-4 w-4 text-purple-400" />
+配音列表
+</h2>
+<div className="flex gap-1.5">
+{ttsMode === "voiceclone" && (
+<div ref={speedRef} className="relative">
+<button
+onClick={() => setSpeedOpen((v) => !v)}
+className="px-2 py-1 text-xs bg-white/10 hover:bg-white/20 rounded text-gray-300 whitespace-nowrap flex items-center gap-1 transition-all"
+>
+语速: {currentSpeedLabel}
+<ChevronDown className={`h-3 w-3 transition-transform ${speedOpen ? "rotate-180" : ""}`} />
+</button>
+{speedOpen && (
+<div className="absolute right-0 top-full mt-1 bg-gray-800 border border-white/20 rounded-lg shadow-xl py-1 z-50 min-w-[80px]">
+{speedOptions.map((opt) => (
+<button
+key={opt.value}
+onClick={() => { onSpeedChange(opt.value); setSpeedOpen(false); }}
+className={`w-full text-left px-3 py-1.5 text-xs transition-colors ${
+speed === opt.value
+? "bg-purple-600/40 text-purple-200"
+: "text-gray-300 hover:bg-white/10"
+}`}
+>
+{opt.label}
+</button>
+))}
+</div>
+)}
+</div>
+)}
+<button
+onClick={onGenerateAudio}
+disabled={isGeneratingAudio || !canGenerate}
+title={missingRefAudio ? "请先选择参考音频" : !hasText ? "请先输入文案" : ""}
+className={`px-4 py-2 text-sm font-medium rounded-lg transition-all whitespace-nowrap flex items-center gap-1.5 shadow-sm ${
+isGeneratingAudio || !canGenerate
+? "bg-gray-600 cursor-not-allowed text-gray-400"
+: "bg-gradient-to-r from-purple-600 to-pink-600 hover:from-purple-700 hover:to-pink-700 text-white hover:shadow-md"
+}`}
+>
+<Mic className="h-4 w-4" />
+生成配音
+</button>
+<button
+onClick={onRefresh}
+className="px-2 py-1 text-xs bg-white/10 hover:bg-white/20 rounded text-gray-300 whitespace-nowrap flex items-center gap-1"
+>
+<RefreshCw className="h-3.5 w-3.5" />
+刷新
+</button>
+</div>
 </div>
-</div>
+)}
 
+{/* 缺少参考音频提示 */}
+{missingRefAudio && (
+<div className="mb-3 px-3 py-2 bg-yellow-500/10 border border-yellow-500/30 rounded-lg text-yellow-300 text-xs">
+声音克隆模式需要先选择参考音频
+</div>
+)}
+
 {/* 生成进度 */}
 {isGeneratingAudio && audioTask && (
@@ -181,7 +312,7 @@ export function GeneratedAudiosPanel({
 <div className="text-white text-sm truncate">{audio.name}</div>
 <div className="text-gray-400 text-xs">{audio.duration_sec.toFixed(1)}s</div>
 </div>
-<div className="flex items-center gap-1 pl-2 opacity-0 group-hover:opacity-100 transition-opacity">
+<div className="flex items-center gap-1 pl-2 opacity-40 group-hover:opacity-100 transition-opacity">
 <button
 onClick={(e) => togglePlay(audio, e)}
 className="p-1 text-gray-500 hover:text-purple-400 transition-colors"
@@ -218,7 +349,14 @@ export function GeneratedAudiosPanel({
 })}
 </div>
 )}
+</>
+);
 
+if (embedded) return content;
+
+return (
+<div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm relative z-10">
+{content}
 </div>
 );
 }
```
```diff
@@ -16,6 +16,7 @@ interface HistoryListProps {
 onRefresh: () => void;
 registerVideoRef: (id: string, element: HTMLDivElement | null) => void;
 formatDate: (timestamp: number) => string;
+embedded?: boolean;
 }
 
 export function HistoryList({
@@ -26,19 +27,22 @@ export function HistoryList({
 onRefresh,
 registerVideoRef,
 formatDate,
+embedded = false,
 }: HistoryListProps) {
-return (
-<div className="bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm">
-<div className="flex justify-between items-center mb-4">
-<h2 className="text-lg font-semibold text-white flex items-center gap-2">📂 历史作品</h2>
-<button
-onClick={onRefresh}
-className="px-3 py-1 text-xs bg-white/10 hover:bg-white/20 rounded text-gray-300 flex items-center gap-1"
->
-<RefreshCw className="h-3.5 w-3.5" />
-刷新
-</button>
-</div>
+const content = (
+<>
+{!embedded && (
+<div className="flex justify-between items-center mb-4">
+<h2 className="text-lg font-semibold text-white flex items-center gap-2">历史作品</h2>
+<button
+onClick={onRefresh}
+className="px-3 py-1 text-xs bg-white/10 hover:bg-white/20 rounded text-gray-300 flex items-center gap-1"
+>
+<RefreshCw className="h-3.5 w-3.5" />
+刷新
+</button>
+</div>
+)}
 {generatedVideos.length === 0 ? (
 <div className="text-center py-4 text-gray-500">
 <p>暂无生成的作品</p>
@@ -66,7 +70,7 @@ export function HistoryList({
 e.stopPropagation();
 onDeleteVideo(v.id);
 }}
-className="p-1 text-gray-500 hover:text-red-400 opacity-0 group-hover:opacity-100 transition-opacity"
+className="p-1 text-gray-500 hover:text-red-400 opacity-40 group-hover:opacity-100 transition-opacity"
 title="删除视频"
 >
 <Trash2 className="h-4 w-4" />
@@ -75,6 +79,14 @@ export function HistoryList({
 ))}
 </div>
 )}
+</>
+);
+
+if (embedded) return content;
+
+return (
+<div className="bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm">
+{content}
 </div>
 );
 }
```
```diff
@@ -2,8 +2,10 @@
 
 import { useEffect, useMemo } from "react";
 import { useRouter } from "next/navigation";
+import { RefreshCw } from "lucide-react";
 import VideoPreviewModal from "@/components/VideoPreviewModal";
 import ScriptExtractionModal from "./ScriptExtractionModal";
+import RewriteModal from "./RewriteModal";
 import { useHomeController } from "@/features/home/model/useHomeController";
 import { resolveMediaUrl } from "@/shared/lib/media";
 import { BgmPanel } from "@/features/home/ui/BgmPanel";
@@ -51,6 +53,8 @@ export function HomePage() {
 setText,
 extractModalOpen,
 setExtractModalOpen,
+rewriteModalOpen,
+setRewriteModalOpen,
 handleGenerateMeta,
 isGeneratingMeta,
 handleTranslate,
@@ -70,6 +74,15 @@ export function HomePage() {
 titleFontSize,
 setTitleFontSize,
 setTitleSizeLocked,
+videoSecondaryTitle,
+secondaryTitleInput,
+selectedSecondaryTitleStyleId,
+setSelectedSecondaryTitleStyleId,
+secondaryTitleFontSize,
+setSecondaryTitleFontSize,
+setSecondaryTitleSizeLocked,
+secondaryTitleTopMargin,
+setSecondaryTitleTopMargin,
 subtitleStyles,
 selectedSubtitleStyleId,
 setSelectedSubtitleStyleId,
@@ -80,10 +93,13 @@ export function HomePage() {
 setTitleTopMargin,
 subtitleBottomMargin,
 setSubtitleBottomMargin,
+titleDisplayMode,
+setTitleDisplayMode,
+outputAspectRatio,
+setOutputAspectRatio,
 resolveAssetUrl,
 getFontFormat,
 buildTextShadow,
-materialDimensions,
 ttsMode,
 setTtsMode,
 voices,
@@ -106,6 +122,8 @@ export function HomePage() {
 saveEditing,
 cancelEditing,
 deleteRefAudio,
+retranscribeRefAudio,
+retranscribingId,
 recordedBlob,
 isRecording,
 recordingTime,
@@ -113,7 +131,6 @@ export function HomePage() {
 stopRecording,
 useRecording,
 formatRecordingTime,
-fixedRefText,
 bgmList,
 bgmLoading,
 bgmError,
@@ -149,6 +166,8 @@ export function HomePage() {
 deleteAudio,
 renameAudio,
 selectAudio,
+speed,
+setSpeed,
 timelineSegments,
 reorderSegments,
 setSourceRange,
@@ -156,12 +175,26 @@ export function HomePage() {
 setClipTrimmerOpen,
 clipTrimmerSegmentId,
 setClipTrimmerSegmentId,
+materialPosterUrl,
 } = useHomeController();
 
 useEffect(() => {
 router.prefetch("/publish");
 }, [router]);
 
+useEffect(() => {
+if (typeof window === "undefined") return;
+if ("scrollRestoration" in history) {
+history.scrollRestoration = "manual";
+}
+window.scrollTo({ top: 0, left: 0, behavior: "auto" });
+// 兜底:等所有恢复 effect + 异步数据加载 settle 后再次强制回顶部
+const timer = setTimeout(() => {
+window.scrollTo({ top: 0, left: 0, behavior: "auto" });
+}, 200);
+return () => clearTimeout(timer);
+}, []);
+
 const clipTrimmerSegment = useMemo(
 () => timelineSegments.find((s) => s.id === clipTrimmerSegmentId) ?? null,
 [timelineSegments, clipTrimmerSegmentId]
@@ -181,11 +214,12 @@ export function HomePage() {
 <div className="grid grid-cols-1 lg:grid-cols-2 gap-8">
 {/* 左侧: 输入区域 */}
 <div className="space-y-6">
-{/* 1. 文案输入 */}
+{/* 一、文案提取与编辑 */}
 <ScriptEditor
 text={text}
 onChangeText={setText}
 onOpenExtractModal={() => setExtractModalOpen(true)}
+onOpenRewriteModal={() => setRewriteModalOpen(true)}
 onGenerateMeta={handleGenerateMeta}
 isGeneratingMeta={isGeneratingMeta}
 onTranslate={handleTranslate}
@@ -198,95 +232,77 @@ export function HomePage() {
 onDeleteScript={deleteSavedScript}
 />
 
-{/* 2. 标题和字幕设置 */}
-<TitleSubtitlePanel
-showStylePreview={showStylePreview}
-onTogglePreview={() => setShowStylePreview((prev) => !prev)}
-videoTitle={videoTitle}
-onTitleChange={titleInput.handleChange}
-onTitleCompositionStart={titleInput.handleCompositionStart}
-onTitleCompositionEnd={titleInput.handleCompositionEnd}
-titleStyles={titleStyles}
-selectedTitleStyleId={selectedTitleStyleId}
-onSelectTitleStyle={setSelectedTitleStyleId}
-titleFontSize={titleFontSize}
-onTitleFontSizeChange={(value) => {
-setTitleFontSize(value);
-setTitleSizeLocked(true);
-}}
-subtitleStyles={subtitleStyles}
-selectedSubtitleStyleId={selectedSubtitleStyleId}
-onSelectSubtitleStyle={setSelectedSubtitleStyleId}
-subtitleFontSize={subtitleFontSize}
-onSubtitleFontSizeChange={(value) => {
-setSubtitleFontSize(value);
-setSubtitleSizeLocked(true);
-}}
-titleTopMargin={titleTopMargin}
-onTitleTopMarginChange={setTitleTopMargin}
-subtitleBottomMargin={subtitleBottomMargin}
-onSubtitleBottomMarginChange={setSubtitleBottomMargin}
-resolveAssetUrl={resolveAssetUrl}
-getFontFormat={getFontFormat}
-buildTextShadow={buildTextShadow}
-previewBaseWidth={materialDimensions?.width || 1080}
-previewBaseHeight={materialDimensions?.height || 1920}
-/>
+{/* 二、配音 */}
+<div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm">
+<h2 className="text-base sm:text-lg font-semibold text-white mb-4">
+二、配音
+</h2>
+<h3 className="text-sm font-medium text-gray-400 mb-3">配音方式</h3>
+<VoiceSelector
+embedded
+ttsMode={ttsMode}
+onSelectTtsMode={setTtsMode}
+voices={voices}
+voice={voice}
+onSelectVoice={setVoice}
+voiceCloneSlot={(
+<RefAudioPanel
+refAudios={refAudios}
+selectedRefAudio={selectedRefAudio}
+onSelectRefAudio={handleSelectRefAudio}
+isUploadingRef={isUploadingRef}
+uploadRefError={uploadRefError}
+onClearUploadRefError={() => setUploadRefError(null)}
+onUploadRefAudio={uploadRefAudio}
+onFetchRefAudios={fetchRefAudios}
+playingAudioId={playingAudioId}
+onTogglePlayPreview={togglePlayPreview}
+editingAudioId={editingAudioId}
+editName={editName}
+onEditNameChange={setEditName}
+onStartEditing={startEditing}
+onSaveEditing={saveEditing}
+onCancelEditing={cancelEditing}
+onDeleteRefAudio={deleteRefAudio}
+onRetranscribe={retranscribeRefAudio}
+retranscribingId={retranscribingId}
+recordedBlob={recordedBlob}
+isRecording={isRecording}
+recordingTime={recordingTime}
+onStartRecording={startRecording}
+onStopRecording={stopRecording}
+onUseRecording={useRecording}
+formatRecordingTime={formatRecordingTime}
+/>
+)}
+/>
+<div className="border-t border-white/10 my-4" />
+<GeneratedAudiosPanel
+embedded
+generatedAudios={generatedAudios}
+selectedAudioId={selectedAudioId}
+isGeneratingAudio={isGeneratingAudio}
+audioTask={audioTask}
+onGenerateAudio={handleGenerateAudio}
+onRefresh={() => fetchGeneratedAudios()}
+onSelectAudio={selectAudio}
+onDeleteAudio={deleteAudio}
+onRenameAudio={renameAudio}
+hasText={!!text.trim()}
+missingRefAudio={ttsMode === "voiceclone" && !selectedRefAudio}
+speed={speed}
+onSpeedChange={setSpeed}
+ttsMode={ttsMode}
+/>
+</div>
 
-{/* 3. 配音方式选择 */}
-<VoiceSelector
-ttsMode={ttsMode}
-onSelectTtsMode={setTtsMode}
-voices={voices}
-voice={voice}
-onSelectVoice={setVoice}
-voiceCloneSlot={(
-<RefAudioPanel
-refAudios={refAudios}
-selectedRefAudio={selectedRefAudio}
-onSelectRefAudio={handleSelectRefAudio}
-isUploadingRef={isUploadingRef}
-uploadRefError={uploadRefError}
-onClearUploadRefError={() => setUploadRefError(null)}
-onUploadRefAudio={uploadRefAudio}
-onFetchRefAudios={fetchRefAudios}
-playingAudioId={playingAudioId}
-onTogglePlayPreview={togglePlayPreview}
-editingAudioId={editingAudioId}
-editName={editName}
-onEditNameChange={setEditName}
-onStartEditing={startEditing}
-onSaveEditing={saveEditing}
-onCancelEditing={cancelEditing}
-onDeleteRefAudio={deleteRefAudio}
-recordedBlob={recordedBlob}
-isRecording={isRecording}
-recordingTime={recordingTime}
-onStartRecording={startRecording}
-onStopRecording={stopRecording}
-onUseRecording={useRecording}
-formatRecordingTime={formatRecordingTime}
-fixedRefText={fixedRefText}
-/>
-)}
-/>
-
-{/* 4. 配音列表 */}
-<GeneratedAudiosPanel
-generatedAudios={generatedAudios}
-selectedAudioId={selectedAudioId}
-isGeneratingAudio={isGeneratingAudio}
-audioTask={audioTask}
-onGenerateAudio={handleGenerateAudio}
-onRefresh={() => fetchGeneratedAudios()}
-onSelectAudio={selectAudio}
-onDeleteAudio={deleteAudio}
-onRenameAudio={renameAudio}
-hasText={!!text.trim()}
-/>
-
-{/* 5. 视频素材 */}
-<MaterialSelector
+{/* 三、素材编辑 */}
+<div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm">
+<h2 className="text-base sm:text-lg font-semibold text-white mb-4">
+三、素材编辑
+</h2>
+<MaterialSelector
+embedded
 materials={materials}
 selectedMaterials={selectedMaterials}
 isFetching={isFetching}
@@ -310,30 +326,84 @@ export function HomePage() {
 onClearUploadError={() => setUploadError(null)}
 registerMaterialRef={registerMaterialRef}
 />
-{/* 5.5 时间轴编辑器 — 未选配音/素材时模糊遮挡 */}
-<div className="relative">
-{(!selectedAudio || selectedMaterials.length === 0) && (
-<div className="absolute inset-0 bg-black/50 backdrop-blur-sm rounded-2xl flex items-center justify-center z-10">
-<p className="text-gray-400">
-{!selectedAudio ? "请先生成并选中配音" : "请先选择素材"}
-</p>
-</div>
-)}
-<TimelineEditor
+<div className="border-t border-white/10 my-4" />
+<div className="relative">
+{(!selectedAudio || selectedMaterials.length === 0) && (
+<div className="absolute inset-0 bg-black/50 backdrop-blur-sm rounded-xl flex items-center justify-center z-10">
+<p className="text-gray-400">
+{!selectedAudio ? "请先生成并选中配音" : "请先选择素材"}
+</p>
+</div>
+)}
+<TimelineEditor
+embedded
 audioDuration={selectedAudio?.duration_sec ?? 0}
 audioUrl={selectedAudio ? (resolveMediaUrl(selectedAudio.path) || "") : ""}
 segments={timelineSegments}
 materials={materials}
-onReorderSegment={reorderSegments}
-onClickSegment={(seg) => {
-setClipTrimmerSegmentId(seg.id);
-setClipTrimmerOpen(true);
-}}
-/>
+outputAspectRatio={outputAspectRatio}
+onOutputAspectRatioChange={setOutputAspectRatio}
+onReorderSegment={reorderSegments}
+onClickSegment={(seg) => {
+setClipTrimmerSegmentId(seg.id);
+setClipTrimmerOpen(true);
+}}
+/>
+</div>
 </div>
 
-{/* 6. 背景音乐 */}
+{/* 四、标题与字幕 */}
+<TitleSubtitlePanel
+showStylePreview={showStylePreview}
+onTogglePreview={() => setShowStylePreview((prev) => !prev)}
+videoTitle={videoTitle}
+onTitleChange={titleInput.handleChange}
+onTitleCompositionStart={titleInput.handleCompositionStart}
+onTitleCompositionEnd={titleInput.handleCompositionEnd}
+videoSecondaryTitle={videoSecondaryTitle}
+onSecondaryTitleChange={secondaryTitleInput.handleChange}
+onSecondaryTitleCompositionStart={secondaryTitleInput.handleCompositionStart}
+onSecondaryTitleCompositionEnd={secondaryTitleInput.handleCompositionEnd}
+titleStyles={titleStyles}
+selectedTitleStyleId={selectedTitleStyleId}
+onSelectTitleStyle={setSelectedTitleStyleId}
+titleFontSize={titleFontSize}
+onTitleFontSizeChange={(value) => {
+setTitleFontSize(value);
+setTitleSizeLocked(true);
+}}
+selectedSecondaryTitleStyleId={selectedSecondaryTitleStyleId}
+onSelectSecondaryTitleStyle={setSelectedSecondaryTitleStyleId}
+secondaryTitleFontSize={secondaryTitleFontSize}
+onSecondaryTitleFontSizeChange={(value) => {
+setSecondaryTitleFontSize(value);
+setSecondaryTitleSizeLocked(true);
+}}
+secondaryTitleTopMargin={secondaryTitleTopMargin}
+onSecondaryTitleTopMarginChange={setSecondaryTitleTopMargin}
+subtitleStyles={subtitleStyles}
+selectedSubtitleStyleId={selectedSubtitleStyleId}
+onSelectSubtitleStyle={setSelectedSubtitleStyleId}
+subtitleFontSize={subtitleFontSize}
+onSubtitleFontSizeChange={(value) => {
+setSubtitleFontSize(value);
+setSubtitleSizeLocked(true);
+}}
+titleTopMargin={titleTopMargin}
+onTitleTopMarginChange={setTitleTopMargin}
+subtitleBottomMargin={subtitleBottomMargin}
+onSubtitleBottomMarginChange={setSubtitleBottomMargin}
+titleDisplayMode={titleDisplayMode}
+onTitleDisplayModeChange={setTitleDisplayMode}
+resolveAssetUrl={resolveAssetUrl}
```
|
||||||
|
getFontFormat={getFontFormat}
|
||||||
|
buildTextShadow={buildTextShadow}
|
||||||
|
previewBaseWidth={outputAspectRatio === "16:9" ? 1920 : 1080}
|
||||||
|
previewBaseHeight={outputAspectRatio === "16:9" ? 1080 : 1920}
|
||||||
|
previewBackgroundUrl={materialPosterUrl}
|
||||||
|
/>
|
||||||
|
|
||||||
|
{/* 背景音乐 (不编号) */}
|
||||||
<BgmPanel
|
<BgmPanel
|
||||||
bgmList={bgmList}
|
bgmList={bgmList}
|
||||||
bgmLoading={bgmLoading}
|
bgmLoading={bgmLoading}
|
||||||
@@ -351,7 +421,7 @@ export function HomePage() {
   registerBgmItemRef={registerBgmItemRef}
 />

-{/* 7. 生成按钮 */}
+{/* 生成按钮 (不编号) */}
 <GenerateActionBar
   isGenerating={isGenerating}
   progress={currentTask?.progress || 0}
@@ -361,23 +431,59 @@ export function HomePage() {
   />
 </div>

-{/* 右侧: 预览区域 */}
+{/* 右侧: 作品区域 */}
 <div className="space-y-6">
-  <PreviewPanel
-    currentTask={currentTask}
-    isGenerating={isGenerating}
-    generatedVideo={generatedVideo}
-  />
-
-  <HistoryList
-    generatedVideos={generatedVideos}
-    selectedVideoId={selectedVideoId}
-    onSelectVideo={handleSelectVideo}
-    onDeleteVideo={deleteVideo}
-    onRefresh={() => fetchGeneratedVideos()}
-    registerVideoRef={registerVideoRef}
-    formatDate={formatDate}
-  />
+  {/* 生成进度(在作品卡片上方) */}
+  {currentTask && isGenerating && (
+    <div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-purple-500/30 backdrop-blur-sm">
+      <div className="space-y-3">
+        <div className="flex justify-between text-sm text-purple-300 mb-1">
+          <span>正在AI生成中...</span>
+          <span>{currentTask.progress || 0}%</span>
+        </div>
+        <div className="h-3 bg-black/30 rounded-full overflow-hidden">
+          <div
+            className="h-full bg-gradient-to-r from-purple-500 to-pink-500 transition-all duration-300"
+            style={{ width: `${currentTask.progress || 0}%` }}
+          />
+        </div>
+      </div>
+    </div>
+  )}
+  {/* 六、作品 */}
+  <div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm">
+    <h2 className="text-base sm:text-lg font-semibold text-white mb-4">
+      六、作品
+    </h2>
+    <div className="flex justify-between items-center mb-3">
+      <h3 className="text-sm font-medium text-gray-400">作品列表</h3>
+      <button
+        onClick={() => fetchGeneratedVideos()}
+        className="px-2 py-1 text-xs bg-white/10 hover:bg-white/20 rounded text-gray-300 flex items-center gap-1"
+      >
+        <RefreshCw className="h-3.5 w-3.5" />
+        刷新
+      </button>
+    </div>
+    <HistoryList
+      embedded
+      generatedVideos={generatedVideos}
+      selectedVideoId={selectedVideoId}
+      onSelectVideo={handleSelectVideo}
+      onDeleteVideo={deleteVideo}
+      onRefresh={() => fetchGeneratedVideos()}
+      registerVideoRef={registerVideoRef}
+      formatDate={formatDate}
+    />
+    <div className="border-t border-white/10 my-4" />
+    <h3 className="text-sm font-medium text-gray-400 mb-3">作品预览</h3>
+    <PreviewPanel
+      embedded
+      currentTask={null}
+      isGenerating={false}
+      generatedVideo={generatedVideo}
+    />
+  </div>
 </div>
 </div>
 </main>
@@ -393,6 +499,13 @@
   onApply={(nextText) => setText(nextText)}
 />

+<RewriteModal
+  isOpen={rewriteModalOpen}
+  onClose={() => setRewriteModalOpen(false)}
+  originalText={text}
+  onApply={(newText) => setText(newText)}
+/>
+
 <ClipTrimmer
   isOpen={clipTrimmerOpen}
   segment={clipTrimmerSegment}
@@ -1,4 +1,4 @@
-import { type ChangeEvent, type MouseEvent } from "react";
+import { type ChangeEvent, type MouseEvent, useMemo } from "react";
 import { Upload, RefreshCw, Eye, Trash2, X, Pencil, Check } from "lucide-react";
 import type { Material } from "@/shared/types/material";

@@ -25,6 +25,7 @@ interface MaterialSelectorProps {
   onDeleteMaterial: (id: string) => void;
   onClearUploadError: () => void;
   registerMaterialRef: (id: string, element: HTMLDivElement | null) => void;
+  embedded?: boolean;
 }

 export function MaterialSelector({
@@ -50,19 +51,27 @@ export function MaterialSelector({
   onDeleteMaterial,
   onClearUploadError,
   registerMaterialRef,
+  embedded = false,
 }: MaterialSelectorProps) {
-  const selectedSet = new Set(selectedMaterials);
+  const selectedSet = useMemo(() => new Set(selectedMaterials), [selectedMaterials]);
   const isFull = selectedMaterials.length >= 4;

-  return (
-    <div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm">
+  const content = (
+    <>
       <div className="flex justify-between items-center gap-2 mb-4">
-        <h2 className="text-base sm:text-lg font-semibold text-white flex items-center gap-2 whitespace-nowrap">
-          📹 视频素材
-          <span className="ml-1 text-[11px] sm:text-xs text-gray-400/90 font-normal">
-            (可多选,最多4个)
-          </span>
-        </h2>
+        {!embedded ? (
+          <h2 className="text-base sm:text-lg font-semibold text-white flex items-center gap-2 min-w-0">
+            <span className="shrink-0">视频素材</span>
+            <span className="text-[11px] sm:text-xs text-gray-400/90 font-normal truncate">
+              (上传自拍视频,最多可选4个)
+            </span>
+          </h2>
+        ) : (
+          <h3 className="text-sm font-medium text-gray-400 min-w-0">
+            <span className="shrink-0">视频素材</span>
+            <span className="ml-1 text-[11px] text-gray-400/90 font-normal hidden sm:inline">(上传自拍视频,最多可选4个)</span>
+          </h3>
+        )}
         <div className="flex gap-1.5">
           <input
             type="file"
@@ -94,7 +103,7 @@ export function MaterialSelector({
 {isUploading && (
   <div className="mb-4 p-4 bg-purple-500/10 rounded-xl border border-purple-500/30">
     <div className="flex justify-between text-sm text-purple-300 mb-2">
-      <span>📤 上传中...</span>
+      <span>上传中...</span>
       <span>{uploadProgress}%</span>
     </div>
     <div className="h-2 bg-black/30 rounded-full overflow-hidden">
@@ -108,7 +117,7 @@ export function MaterialSelector({

 {uploadError && (
   <div className="mb-4 p-4 bg-red-500/20 text-red-200 rounded-xl text-sm flex justify-between items-center">
-    <span>❌ {uploadError}</span>
+    <span>{uploadError}</span>
     <button onClick={onClearUploadError} className="text-red-300 hover:text-white">
       <X className="h-3.5 w-3.5" />
     </button>
@@ -138,7 +147,7 @@ export function MaterialSelector({
   <div className="text-5xl mb-4">📁</div>
   <p>暂无视频素材</p>
   <p className="text-sm mt-2">
-    点击上方「📤 上传视频」按钮添加视频素材
+    点击上方「上传」按钮添加视频素材
   </p>
 </div>
 ) : (
@@ -183,7 +192,7 @@ export function MaterialSelector({
     </button>
   </div>
 ) : (
-  <button onClick={() => onToggleMaterial(m.id)} className="flex-1 text-left flex items-center gap-2">
+  <button onClick={() => onToggleMaterial(m.id)} disabled={isFull && !isSelected} className="flex-1 text-left flex items-center gap-2">
     {/* 复选框 */}
     <span
       className={`flex-shrink-0 w-4 h-4 rounded border flex items-center justify-center text-[10px] ${isSelected
@@ -207,7 +216,7 @@ export function MaterialSelector({
     onPreviewMaterial(m.path);
   }
 }}
-className="p-1 text-gray-500 hover:text-white opacity-0 group-hover:opacity-100 transition-opacity"
+className="p-1 text-gray-500 hover:text-white opacity-40 group-hover:opacity-100 transition-opacity"
 title="预览视频"
 >
   <Eye className="h-4 w-4" />
@@ -215,7 +224,7 @@ export function MaterialSelector({
 {editingMaterialId !== m.id && (
   <button
     onClick={(e) => onStartEditing(m, e)}
-    className="p-1 text-gray-500 hover:text-white opacity-0 group-hover:opacity-100 transition-opacity"
+    className="p-1 text-gray-500 hover:text-white opacity-40 group-hover:opacity-100 transition-opacity"
     title="重命名"
   >
     <Pencil className="h-4 w-4" />
@@ -226,7 +235,7 @@ export function MaterialSelector({
   e.stopPropagation();
   onDeleteMaterial(m.id);
 }}
-className="p-1 text-gray-500 hover:text-red-400 opacity-0 group-hover:opacity-100 transition-opacity"
+className="p-1 text-gray-500 hover:text-red-400 opacity-40 group-hover:opacity-100 transition-opacity"
 title="删除素材"
 >
   <Trash2 className="h-4 w-4" />
@@ -237,6 +246,14 @@ export function MaterialSelector({
   })}
   </div>
 )}
+  </>
+  );
+
+  if (embedded) return content;
+
+  return (
+    <div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm">
+      {content}
     </div>
   );
 }
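The MaterialSelector hunks above cap selection at four items (`isFull`) and disable further selection with `disabled={isFull && !isSelected}`. The same rule, reduced to a plain TypeScript sketch for illustration (the function name `toggleMaterial` is hypothetical, not part of the component):

```typescript
// Illustrative sketch: membership checks go through a Set, and selecting a
// new item is ignored once the cap (4) is reached; re-clicking deselects.
function toggleMaterial(selected: string[], id: string, max = 4): string[] {
  const set = new Set(selected); // O(1) has/delete; the component memoizes this with useMemo
  if (set.has(id)) {
    set.delete(id); // clicking an already-selected item deselects it
  } else if (set.size < max) {
    set.add(id); // new selections are accepted only under the cap
  }
  return [...set];
}
```

This mirrors why the diff wraps `new Set(selectedMaterials)` in `useMemo`: the Set is derived data and only needs rebuilding when `selectedMaterials` changes.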
@@ -12,18 +12,20 @@ interface PreviewPanelProps {
   currentTask: Task | null;
   isGenerating: boolean;
   generatedVideo: string | null;
+  embedded?: boolean;
 }

 export function PreviewPanel({
   currentTask,
   isGenerating,
   generatedVideo,
+  embedded = false,
 }: PreviewPanelProps) {
-  return (
+  const content = (
     <>
       {currentTask && isGenerating && (
-        <div className="bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm">
-          <h2 className="text-lg font-semibold text-white mb-4">⏳ 生成进度</h2>
+        <div className={embedded ? "mb-4" : "bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm"}>
+          {!embedded && <h2 className="text-lg font-semibold text-white mb-4">生成进度</h2>}
           <div className="space-y-3">
             <div className="h-3 bg-black/30 rounded-full overflow-hidden">
               <div
@@ -36,8 +38,8 @@ export function PreviewPanel({
         </div>
       )}

-      <div className="bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm">
-        <h2 className="text-lg font-semibold text-white mb-4 flex items-center gap-2">🎥 作品预览</h2>
+      <div className={embedded ? "" : "bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm"}>
+        {!embedded && <h2 className="text-lg font-semibold text-white mb-4 flex items-center gap-2">作品预览</h2>}
         <div className="aspect-video bg-black/50 rounded-xl overflow-hidden flex items-center justify-center">
           {generatedVideo ? (
             <video src={generatedVideo} controls preload="metadata" className="w-full h-full object-contain" />
@@ -71,4 +73,6 @@ export function PreviewPanel({
     </div>
   </>
   );
+
+  return content;
 }
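MaterialSelector and PreviewPanel both adopt the same `embedded` pattern: the panel body is built once, then either returned bare (the parent card supplies the chrome) or wrapped in the component's own card. A reduced sketch of that design choice, with strings standing in for JSX:

```typescript
// `embedded` pattern, reduced: the content is built exactly once,
// and only the outer wrapper is conditional on the prop.
function renderPanel(content: string, embedded: boolean): string {
  if (embedded) return content;       // parent section already provides the card chrome
  return `<card>${content}</card>`;   // standalone usage: wrap in own card
}
```

The payoff is that the same component can be dropped into the new combined「六、作品」card without duplicating its markup.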
@@ -1,6 +1,6 @@
 import { useEffect, useState } from "react";
 import type { MouseEvent } from "react";
-import { Upload, RefreshCw, Play, Pause, Pencil, Trash2, Check, X, Mic, Square } from "lucide-react";
+import { Upload, RefreshCw, Play, Pause, Pencil, Trash2, Check, X, Mic, Square, RotateCw } from "lucide-react";

 interface RefAudio {
   id: string;
@@ -29,6 +29,8 @@ interface RefAudioPanelProps {
   onSaveEditing: (id: string, event: MouseEvent) => void;
   onCancelEditing: (event: MouseEvent) => void;
   onDeleteRefAudio: (id: string) => void;
+  onRetranscribe: (id: string) => void;
+  retranscribingId: string | null;
   recordedBlob: Blob | null;
   isRecording: boolean;
   recordingTime: number;
@@ -36,9 +38,10 @@ interface RefAudioPanelProps {
   onStopRecording: () => void;
   onUseRecording: () => void;
   formatRecordingTime: (seconds: number) => string;
-  fixedRefText: string;
 }

+const OLD_FIXED_REF_TEXT = "其实生活中有许多美好的瞬间";
+
 export function RefAudioPanel({
   refAudios,
   selectedRefAudio,
@@ -57,6 +60,8 @@ export function RefAudioPanel({
   onSaveEditing,
   onCancelEditing,
   onDeleteRefAudio,
+  onRetranscribe,
+  retranscribingId,
   recordedBlob,
   isRecording,
   recordingTime,
@@ -64,7 +69,6 @@ export function RefAudioPanel({
   onStopRecording,
   onUseRecording,
   formatRecordingTime,
-  fixedRefText,
 }: RefAudioPanelProps) {
   const [recordedUrl, setRecordedUrl] = useState<string | null>(null);

@@ -81,11 +85,14 @@ export function RefAudioPanel({
     };
   }, [recordedBlob]);

+  const needsRetranscribe = (audio: RefAudio) =>
+    audio.ref_text.startsWith(OLD_FIXED_REF_TEXT);
+
   return (
     <div className="space-y-4">
       <div>
         <div className="flex justify-between items-center mb-2">
-          <span className="text-sm text-gray-300">📁 我的参考音频</span>
+          <span className="text-sm text-gray-300">📁 我的参考音频 <span className="text-xs text-gray-500 font-normal">(上传3-10秒语音样本)</span></span>
           <div className="flex gap-2">
             <input
               type="file"
@@ -122,7 +129,7 @@ export function RefAudioPanel({

 {isUploadingRef && (
   <div className="mb-2 p-2 bg-purple-500/10 rounded text-sm text-purple-300">
-    ⏳ 上传中...
+    ⏳ 上传并识别中...
   </div>
 )}

@@ -180,7 +187,7 @@ export function RefAudioPanel({
 <div className="text-white text-xs truncate pr-1 flex-1" title={audio.name}>
   {audio.name}
 </div>
-<div className="flex gap-1 opacity-0 group-hover:opacity-100 transition-opacity">
+<div className="flex gap-1 opacity-40 group-hover:opacity-100 transition-opacity">
   <button
     onClick={(e) => onTogglePlayPreview(audio, e)}
     className="text-gray-400 hover:text-purple-400 text-xs"
@@ -192,6 +199,17 @@ export function RefAudioPanel({
       <Play className="h-3.5 w-3.5" />
     )}
   </button>
+  <button
+    onClick={(e) => {
+      e.stopPropagation();
+      onRetranscribe(audio.id);
+    }}
+    disabled={retranscribingId === audio.id}
+    className="text-gray-400 hover:text-cyan-400 text-xs disabled:opacity-50"
+    title="重新识别文字"
+  >
+    <RotateCw className={`h-3.5 w-3.5 ${retranscribingId === audio.id ? 'animate-spin' : ''}`} />
+  </button>
   <button
     onClick={(e) => onStartEditing(audio, e)}
     className="text-gray-400 hover:text-blue-400 text-xs"
@@ -211,7 +229,12 @@ export function RefAudioPanel({
     </button>
   </div>
 </div>
-<div className="text-gray-400 text-xs">{audio.duration_sec.toFixed(1)}s</div>
+<div className="text-gray-400 text-xs">
+  {audio.duration_sec.toFixed(1)}s
+  {needsRetranscribe(audio) && (
+    <span className="text-yellow-500 ml-1" title="需要重新识别文字">⚠</span>
+  )}
+</div>
 </>
 )}
 </div>
@@ -221,7 +244,7 @@ export function RefAudioPanel({
 </div>

 <div className="border-t border-white/10 pt-4">
-  <span className="text-sm text-gray-300 mb-2 block">🎤 或在线录音</span>
+  <span className="text-sm text-gray-300 mb-2 block">🎤 或在线录音 <span className="text-xs text-gray-500">(建议 3-10 秒,超出将自动截取)</span></span>
   <div className="flex gap-2 items-center">
     {!isRecording ? (
       <button
@@ -264,15 +287,6 @@ export function RefAudioPanel({
       )}
     </div>

-    <div className="border-t border-white/10 pt-4">
-      <label className="text-sm text-gray-300 mb-2 block">📝 录音/上传时请朗读以下内容:</label>
-      <div className="w-full bg-black/30 border border-white/10 rounded-lg p-3 text-white text-sm">
-        {fixedRefText}
-      </div>
-      <p className="text-xs text-gray-500 mt-1">
-        请清晰朗读上述内容完成录音,系统将以此为参考克隆您的声音
-      </p>
-    </div>
   </div>
 );
 }
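The RefAudioPanel change drops the fixed read-aloud script in favor of per-audio transcription; recordings made under the old flow still carry the fixed sentence as the prefix of their `ref_text` and are flagged with a ⚠ for re-recognition. That check, extracted as a standalone function for clarity:

```typescript
// Audios transcribed under the old fixed-script flow still start with the
// fixed sentence; those need re-transcription via the new RotateCw button.
const OLD_FIXED_REF_TEXT = "其实生活中有许多美好的瞬间";

function needsRetranscribe(refText: string): boolean {
  return refText.startsWith(OLD_FIXED_REF_TEXT);
}
```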
213  frontend/src/features/home/ui/RewriteModal.tsx  Normal file
@@ -0,0 +1,213 @@
+import { useState, useEffect, useRef, useCallback } from "react";
+import { Loader2, Sparkles } from "lucide-react";
+import api from "@/shared/api/axios";
+import { ApiResponse, unwrap } from "@/shared/api/types";
+
+const CUSTOM_PROMPT_KEY = "vigent_rewriteCustomPrompt";
+
+interface RewriteModalProps {
+  isOpen: boolean;
+  onClose: () => void;
+  originalText: string;
+  onApply: (text: string) => void;
+}
+
+export default function RewriteModal({
+  isOpen,
+  onClose,
+  originalText,
+  onApply,
+}: RewriteModalProps) {
+  const [customPrompt, setCustomPrompt] = useState(
+    () => (typeof window !== "undefined" ? localStorage.getItem(CUSTOM_PROMPT_KEY) || "" : "")
+  );
+  const [rewrittenText, setRewrittenText] = useState("");
+  const [isLoading, setIsLoading] = useState(false);
+  const [error, setError] = useState<string | null>(null);
+
+  // Debounced save customPrompt to localStorage
+  const debounceRef = useRef<ReturnType<typeof setTimeout>>(undefined);
+  useEffect(() => {
+    debounceRef.current = setTimeout(() => {
+      localStorage.setItem(CUSTOM_PROMPT_KEY, customPrompt);
+    }, 300);
+    return () => clearTimeout(debounceRef.current);
+  }, [customPrompt]);
+
+  // Reset state when modal opens
+  useEffect(() => {
+    if (isOpen) {
+      setRewrittenText("");
+      setError(null);
+      setIsLoading(false);
+    }
+  }, [isOpen]);
+
+  const handleRewrite = useCallback(async () => {
+    if (!originalText.trim()) return;
+
+    setIsLoading(true);
+    setError(null);
+
+    try {
+      const { data: res } = await api.post<
+        ApiResponse<{ rewritten_text: string }>
+      >("/api/ai/rewrite", {
+        text: originalText,
+        custom_prompt: customPrompt.trim() || null,
+      });
+      const payload = unwrap(res);
+      setRewrittenText(payload.rewritten_text || "");
+    } catch (err: unknown) {
+      console.error("AI rewrite failed:", err);
+      const axiosErr = err as {
+        response?: { data?: { message?: string } };
+        message?: string;
+      };
+      const msg =
+        axiosErr.response?.data?.message || axiosErr.message || "改写失败,请重试";
+      setError(msg);
+    } finally {
+      setIsLoading(false);
+    }
+  }, [originalText, customPrompt]);
+
+  const handleApply = () => {
+    onApply(rewrittenText);
+    onClose();
+  };
+
+  const handleRetry = () => {
+    setRewrittenText("");
+    setError(null);
+  };
+
+  // ESC to close
+  useEffect(() => {
+    if (!isOpen) return;
+    const handleKeyDown = (e: KeyboardEvent) => {
+      if (e.key === "Escape") onClose();
+    };
+    document.addEventListener("keydown", handleKeyDown);
+    return () => document.removeEventListener("keydown", handleKeyDown);
+  }, [isOpen, onClose]);
+
+  if (!isOpen) return null;
+
+  return (
+    <div className="fixed inset-0 z-50 flex items-center justify-center bg-black/80 backdrop-blur-sm p-4 animate-in fade-in duration-200">
+      <div className="bg-[#1a1a1a] border border-white/10 rounded-2xl w-full max-w-2xl max-h-[90vh] overflow-hidden flex flex-col shadow-2xl">
+        {/* Header */}
+        <div className="flex items-center justify-between p-4 border-b border-white/10 bg-white/5">
+          <h3 className="text-lg font-semibold text-white flex items-center gap-2">
+            <Sparkles className="h-5 w-5 text-purple-400" />
+            AI 智能改写
+          </h3>
+          <button
+            onClick={onClose}
+            className="text-gray-400 hover:text-white transition-colors text-2xl leading-none"
+          >
+            ×
+          </button>
+        </div>
+
+        {/* Content */}
+        <div className="flex-1 overflow-y-auto p-6 space-y-5">
+          {/* Custom Prompt */}
+          <div className="space-y-2">
+            <label className="text-sm text-gray-300">
+              自定义提示词 (可选)
+            </label>
+            <textarea
+              value={customPrompt}
+              onChange={(e) => setCustomPrompt(e.target.value)}
+              placeholder="输入改写要求..."
+              rows={3}
+              className="w-full bg-black/20 border border-white/10 rounded-xl px-3 py-2 text-sm text-white placeholder-gray-500 focus:outline-none focus:border-purple-500 transition-colors resize-none"
+            />
+            <p className="text-xs text-gray-500">留空则使用默认提示词</p>
+          </div>
+
+          {/* Action button (before result) */}
+          {!rewrittenText && (
+            <button
+              onClick={handleRewrite}
+              disabled={isLoading || !originalText.trim()}
+              className="w-full py-3 px-4 bg-gradient-to-r from-purple-600 to-pink-600 hover:from-purple-500 hover:to-pink-500 disabled:opacity-50 disabled:cursor-not-allowed text-white rounded-xl transition-all font-medium shadow-lg flex items-center justify-center gap-2"
+            >
+              {isLoading ? (
+                <>
+                  <Loader2 className="w-5 h-5 animate-spin" />
+                  改写中...
+                </>
+              ) : (
+                <>
+                  <Sparkles className="w-5 h-5" />
+                  开始改写
+                </>
+              )}
+            </button>
+          )}
+
+          {/* Error */}
+          {error && (
+            <div className="bg-red-500/10 border border-red-500/30 rounded-xl p-4">
+              <p className="text-red-400 text-sm">{error}</p>
+            </div>
+          )}
+
+          {/* Rewritten result */}
+          {rewrittenText && (
+            <>
+              <div className="space-y-2">
+                <div className="flex justify-between items-center">
+                  <h4 className="font-semibold text-purple-300 flex items-center gap-2">
+                    <Sparkles className="h-4 w-4" />
+                    AI 改写结果
+                  </h4>
+                  <button
+                    onClick={handleApply}
+                    className="text-xs bg-gradient-to-r from-purple-600 to-pink-600 hover:from-purple-500 hover:to-pink-500 text-white px-3 py-1.5 rounded-lg transition-colors shadow-sm"
+                  >
+                    使用此结果
+                  </button>
+                </div>
+                <div className="bg-purple-900/10 border border-purple-500/20 rounded-xl p-4 max-h-60 overflow-y-auto hide-scrollbar">
+                  <p className="text-gray-200 text-sm leading-relaxed whitespace-pre-wrap">
+                    {rewrittenText}
+                  </p>
+                </div>
+              </div>
+
+              <div className="space-y-2">
+                <div className="flex justify-between items-center">
+                  <h4 className="font-semibold text-gray-400 flex items-center gap-2">
+                    📝 原文对比
+                  </h4>
+                  <button
+                    onClick={onClose}
+                    className="text-xs bg-white/10 hover:bg-white/20 text-white px-3 py-1.5 rounded-lg transition-colors"
+                  >
+                    保留原文
+                  </button>
+                </div>
+                <div className="bg-white/5 border border-white/10 rounded-xl p-4 max-h-40 overflow-y-auto hide-scrollbar">
+                  <p className="text-gray-400 text-sm leading-relaxed whitespace-pre-wrap">
+                    {originalText}
+                  </p>
+                </div>
+              </div>
+
+              <button
+                onClick={handleRetry}
+                className="w-full py-2.5 px-4 bg-white/10 hover:bg-white/20 text-white rounded-xl transition-colors"
+              >
+                重新改写
+              </button>
+            </>
+          )}
+        </div>
+      </div>
+    </div>
+  );
+}
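RewriteModal persists the custom prompt to localStorage 300 ms after the last keystroke, using a ref-held timer that each render clears and reschedules. The same trailing-edge debounce as a standalone helper (a sketch; the component inlines this inside a `useEffect` rather than using a helper function):

```typescript
// Generic trailing-edge debounce: of a burst of calls, only the last one
// fires, `ms` milliseconds after the burst ends.
function debounce<A extends unknown[]>(fn: (...args: A) => void, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A): void => {
    if (timer !== undefined) clearTimeout(timer); // drop the pending call
    timer = setTimeout(() => fn(...args), ms);    // reschedule with latest args
  };
}
```

In the modal this keeps localStorage writes off the keystroke hot path while still capturing the final prompt text.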
```diff
@@ -18,6 +18,7 @@ interface ScriptEditorProps {
   text: string;
   onChangeText: (value: string) => void;
   onOpenExtractModal: () => void;
+  onOpenRewriteModal: () => void;
   onGenerateMeta: () => void;
   isGeneratingMeta: boolean;
   onTranslate: (targetLang: string) => void;
@@ -34,6 +35,7 @@ export function ScriptEditor({
   text,
   onChangeText,
   onOpenExtractModal,
+  onOpenRewriteModal,
   onGenerateMeta,
   isGeneratingMeta,
   onTranslate,
@@ -86,7 +88,7 @@ export function ScriptEditor({
     <div className="relative z-10 bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm">
       <div className="mb-4 space-y-3">
         <h2 className="text-base sm:text-lg font-semibold text-white flex items-center gap-2">
-          ✍️ 文案提取与编辑
+          一、文案提取与编辑
         </h2>
         <div className="flex gap-2 flex-wrap justify-end items-center">
           {/* 历史文案 */}
@@ -123,7 +125,7 @@ export function ScriptEditor({
                     e.stopPropagation();
                     onDeleteScript(script.id);
                   }}
-                  className="opacity-0 group-hover:opacity-100 p-1 text-gray-500 hover:text-red-400 transition-all shrink-0"
+                  className="opacity-40 group-hover:opacity-100 p-1 text-gray-500 hover:text-red-400 transition-all shrink-0"
                 >
                   <Trash2 className="h-3 w-3" />
                 </button>
@@ -218,18 +220,32 @@ export function ScriptEditor({
         />
         <div className="flex items-center justify-between mt-2 text-sm text-gray-400">
           <span>{text.length} 字</span>
-          <button
-            onClick={onSaveScript}
-            disabled={!text.trim()}
-            className={`px-2.5 py-1 text-xs rounded transition-all flex items-center gap-1 ${
-              !text.trim()
-                ? "bg-gray-700 cursor-not-allowed text-gray-500"
-                : "bg-amber-600/80 hover:bg-amber-600 text-white"
-            }`}
-          >
-            <Save className="h-3 w-3" />
-            保存文案
-          </button>
+          <div className="flex items-center gap-2">
+            <button
+              onClick={onOpenRewriteModal}
+              disabled={!text.trim()}
+              className={`px-2.5 py-1 text-xs rounded transition-all flex items-center gap-1 ${
+                !text.trim()
+                  ? "bg-gray-700 cursor-not-allowed text-gray-500"
+                  : "bg-purple-600/80 hover:bg-purple-600 text-white"
+              }`}
+            >
+              <Sparkles className="h-3 w-3" />
+              AI智能改写
+            </button>
+            <button
+              onClick={onSaveScript}
+              disabled={!text.trim()}
+              className={`px-2.5 py-1 text-xs rounded transition-all flex items-center gap-1 ${
+                !text.trim()
+                  ? "bg-gray-700 cursor-not-allowed text-gray-500"
+                  : "bg-amber-600/80 hover:bg-amber-600 text-white"
+              }`}
+            >
+              <Save className="h-3 w-3" />
+              保存文案
+            </button>
+          </div>
         </div>
       </div>
     );
```
```diff
@@ -18,15 +18,12 @@ export default function ScriptExtractionModal({
   const {
     isLoading,
     script,
-    rewrittenScript,
     error,
-    doRewrite,
     step,
     dragActive,
     selectedFile,
     activeTab,
     inputUrl,
-    setDoRewrite,
     setActiveTab,
     setInputUrl,
     handleDrag,
@@ -186,21 +183,6 @@ export default function ScriptExtractionModal({
             </div>
           )}

-          {/* Options */}
-          <div className="flex items-center gap-3 bg-white/5 rounded-xl p-4 border border-white/10">
-            <label className="flex items-center gap-2 cursor-pointer">
-              <input
-                type="checkbox"
-                checked={doRewrite}
-                onChange={(e) => setDoRewrite(e.target.checked)}
-                className="w-4 h-4 rounded bg-white/10 border-white/20 text-purple-500 focus:ring-purple-500"
-              />
-              <span className="text-sm text-gray-300">
-                AI 智能改写(去口语化)
-              </span>
-            </label>
-          </div>
-
           {/* Error */}
           {error && (
             <div className="bg-red-500/10 border border-red-500/30 rounded-xl p-4">
@@ -244,9 +226,7 @@ export default function ScriptExtractionModal({
             <p className="text-sm text-gray-400 text-center max-w-sm px-4">
               {activeTab === "url" && "正在下载视频..."}
               <br />
-              {doRewrite
-                ? "正在进行语音识别和 AI 智能改写..."
-                : "正在进行语音识别..."}
+              正在进行语音识别...
               <br />
               <span className="opacity-75">
                 大文件可能需要几分钟,请不要关闭窗口
@@ -257,60 +237,30 @@ export default function ScriptExtractionModal({

           {step === "result" && (
             <div className="space-y-6">
-              {rewrittenScript && (
-                <div className="space-y-2">
-                  <div className="flex justify-between items-center">
-                    <h4 className="font-semibold text-purple-300 flex items-center gap-2">
-                      ✨ AI 洗稿结果{" "}
-                      <span className="text-xs font-normal text-purple-400/70">
-                        (推荐)
-                      </span>
-                    </h4>
+              <div className="space-y-2">
+                <div className="flex justify-between items-center">
+                  <h4 className="font-semibold text-gray-300 flex items-center gap-2">
+                    🎙️ 识别结果
+                  </h4>
+                  <div className="flex items-center gap-2">
                     {onApply && (
                       <button
-                        onClick={() => handleApplyAndClose(rewrittenScript)}
+                        onClick={() => handleApplyAndClose(script)}
                         className="text-xs bg-gradient-to-r from-purple-600 to-pink-600 hover:from-purple-500 hover:to-pink-500 text-white px-3 py-1.5 rounded-lg transition-colors flex items-center gap-1 shadow-sm"
                       >
                         📥 填入
                       </button>
                     )}
                     <button
-                      onClick={() => copyToClipboard(rewrittenScript)}
-                      className="text-xs bg-purple-600 hover:bg-purple-500 text-white px-3 py-1.5 rounded-lg transition-colors flex items-center gap-1"
+                      onClick={() => copyToClipboard(script)}
+                      className="text-xs bg-white/10 hover:bg-white/20 text-white px-3 py-1.5 rounded-lg transition-colors"
                     >
-                      📋 复制内容
+                      复制
                     </button>
                   </div>
-                  <div className="bg-purple-900/10 border border-purple-500/20 rounded-xl p-4 max-h-60 overflow-y-auto custom-scrollbar">
-                    <p className="text-gray-200 text-sm leading-relaxed whitespace-pre-wrap">
-                      {rewrittenScript}
-                    </p>
-                  </div>
                 </div>
-              )}
-
-              <div className="space-y-2">
-                <div className="flex justify-between items-center">
-                  <h4 className="font-semibold text-gray-400 flex items-center gap-2">
-                    🎙️ 原始识别结果
-                  </h4>
-                  {onApply && (
-                    <button
-                      onClick={() => handleApplyAndClose(script)}
-                      className="text-xs bg-white/10 hover:bg-white/20 text-white px-3 py-1.5 rounded-lg transition-colors flex items-center gap-1"
-                    >
-                      📥 填入
-                    </button>
-                  )}
-                  <button
-                    onClick={() => copyToClipboard(script)}
-                    className="text-xs bg-white/10 hover:bg-white/20 text-white px-3 py-1.5 rounded-lg transition-colors"
-                  >
-                    复制
-                  </button>
-                </div>
-                <div className="bg-white/5 border border-white/10 rounded-xl p-4 max-h-40 overflow-y-auto custom-scrollbar">
-                  <p className="text-gray-400 text-sm leading-relaxed whitespace-pre-wrap">
+                <div className="bg-white/5 border border-white/10 rounded-xl p-4 max-h-60 overflow-y-auto hide-scrollbar">
+                  <p className="text-gray-200 text-sm leading-relaxed whitespace-pre-wrap">
                     {script}
                   </p>
                 </div>
```
@@ -1,283 +1,364 @@
|
|||||||
import { useEffect, useRef, useCallback, useState } from "react";
|
import { useEffect, useRef, useCallback, useState, useMemo } from "react";
|
||||||
import WaveSurfer from "wavesurfer.js";
|
import WaveSurfer from "wavesurfer.js";
|
||||||
import type { TimelineSegment } from "@/features/home/model/useTimelineEditor";
|
import { ChevronDown, GripVertical } from "lucide-react";
|
||||||
import type { Material } from "@/shared/types/material";
|
import type { TimelineSegment } from "@/features/home/model/useTimelineEditor";
|
||||||
|
import type { Material } from "@/shared/types/material";
|
||||||
interface TimelineEditorProps {
|
|
||||||
audioDuration: number;
|
interface TimelineEditorProps {
|
||||||
audioUrl: string;
|
audioDuration: number;
|
||||||
segments: TimelineSegment[];
|
audioUrl: string;
|
||||||
materials: Material[];
|
segments: TimelineSegment[];
|
||||||
onReorderSegment: (fromIdx: number, toIdx: number) => void;
|
materials: Material[];
|
||||||
onClickSegment: (segment: TimelineSegment) => void;
|
outputAspectRatio: "9:16" | "16:9";
|
||||||
}
|
onOutputAspectRatioChange: (ratio: "9:16" | "16:9") => void;
|
||||||
|
onReorderSegment: (fromIdx: number, toIdx: number) => void;
|
||||||
function formatTime(sec: number): string {
|
onClickSegment: (segment: TimelineSegment) => void;
|
||||||
const m = Math.floor(sec / 60);
|
embedded?: boolean;
|
||||||
const s = sec % 60;
|
}
|
||||||
return `${String(m).padStart(2, "0")}:${s.toFixed(1).padStart(4, "0")}`;
|
|
||||||
}
|
function formatTime(sec: number): string {
|
||||||
|
const m = Math.floor(sec / 60);
|
||||||
export function TimelineEditor({
|
const s = sec % 60;
|
||||||
audioDuration,
|
return `${String(m).padStart(2, "0")}:${s.toFixed(1).padStart(4, "0")}`;
|
||||||
audioUrl,
|
}
|
||||||
segments,
|
|
||||||
materials,
|
export function TimelineEditor({
|
||||||
onReorderSegment,
|
audioDuration,
|
||||||
onClickSegment,
|
audioUrl,
|
||||||
}: TimelineEditorProps) {
|
segments,
|
||||||
const waveRef = useRef<HTMLDivElement>(null);
|
materials,
|
||||||
const wsRef = useRef<WaveSurfer | null>(null);
|
outputAspectRatio,
|
||||||
const [waveReady, setWaveReady] = useState(false);
|
onOutputAspectRatioChange,
|
||||||
const [isPlaying, setIsPlaying] = useState(false);
|
onReorderSegment,
|
||||||
|
onClickSegment,
|
||||||
// Refs for high-frequency DOM updates (avoid 60fps re-renders)
|
embedded = false,
|
||||||
const playheadRef = useRef<HTMLDivElement>(null);
|
}: TimelineEditorProps) {
|
||||||
const timeRef = useRef<HTMLSpanElement>(null);
|
const waveRef = useRef<HTMLDivElement>(null);
|
||||||
const audioDurationRef = useRef(audioDuration);
|
const wsRef = useRef<WaveSurfer | null>(null);
|
||||||
audioDurationRef.current = audioDuration;
|
const [waveReady, setWaveReady] = useState(false);
|
||||||
|
const [isPlaying, setIsPlaying] = useState(false);
|
||||||
// Drag-to-reorder state
|
|
||||||
const [dragFromIdx, setDragFromIdx] = useState<number | null>(null);
|
// Refs for high-frequency DOM updates (avoid 60fps re-renders)
|
||||||
const [dragOverIdx, setDragOverIdx] = useState<number | null>(null);
|
const playheadRef = useRef<HTMLDivElement>(null);
|
||||||
|
const timeRef = useRef<HTMLSpanElement>(null);
|
||||||
// Create / recreate wavesurfer when audioUrl changes
|
const audioDurationRef = useRef(audioDuration);
|
||||||
useEffect(() => {
|
|
||||||
if (!waveRef.current || !audioUrl) return;
|
useEffect(() => {
|
||||||
|
audioDurationRef.current = audioDuration;
|
||||||
// Destroy previous instance
|
}, [audioDuration]);
|
||||||
if (wsRef.current) {
|
|
||||||
wsRef.current.destroy();
|
// Drag-to-reorder state
|
||||||
wsRef.current = null;
|
const [dragFromIdx, setDragFromIdx] = useState<number | null>(null);
|
||||||
}
|
const [dragOverIdx, setDragOverIdx] = useState<number | null>(null);
|
||||||
|
|
||||||
const ws = WaveSurfer.create({
|
// Aspect ratio dropdown
|
||||||
container: waveRef.current,
|
const [ratioOpen, setRatioOpen] = useState(false);
|
||||||
height: 56,
|
const ratioRef = useRef<HTMLDivElement>(null);
|
||||||
waveColor: "#6d28d9",
|
const ratioOptions = [
|
||||||
progressColor: "#a855f7",
|
{ value: "9:16" as const, label: "竖屏 9:16" },
|
||||||
barWidth: 2,
|
{ value: "16:9" as const, label: "横屏 16:9" },
|
||||||
barGap: 1,
|
];
|
||||||
barRadius: 2,
|
const currentRatioLabel =
|
||||||
cursorWidth: 1,
|
ratioOptions.find((opt) => opt.value === outputAspectRatio)?.label ?? "竖屏 9:16";
|
||||||
cursorColor: "#e879f9",
|
|
||||||
interact: true,
|
useEffect(() => {
|
||||||
normalize: true,
|
const handler = (e: MouseEvent) => {
|
||||||
});
|
if (ratioRef.current && !ratioRef.current.contains(e.target as Node)) {
|
||||||
|
setRatioOpen(false);
|
||||||
// Click waveform → seek + auto-play
|
}
|
||||||
ws.on("interaction", () => ws.play());
|
};
|
||||||
ws.on("play", () => setIsPlaying(true));
|
if (ratioOpen) document.addEventListener("mousedown", handler);
|
||||||
ws.on("pause", () => setIsPlaying(false));
|
return () => document.removeEventListener("mousedown", handler);
|
||||||
ws.on("finish", () => {
|
}, [ratioOpen]);
|
||||||
setIsPlaying(false);
|
|
||||||
if (playheadRef.current) playheadRef.current.style.display = "none";
|
// Create / recreate wavesurfer when audioUrl changes
|
||||||
});
|
useEffect(() => {
|
||||||
// High-frequency: update playhead + time via refs (no React re-render)
|
if (!waveRef.current || !audioUrl) return;
|
||||||
ws.on("timeupdate", (time: number) => {
|
|
||||||
const dur = audioDurationRef.current;
|
const playheadEl = playheadRef.current;
|
||||||
if (playheadRef.current && dur > 0) {
|
const timeEl = timeRef.current;
|
||||||
playheadRef.current.style.left = `${(time / dur) * 100}%`;
|
|
||||||
playheadRef.current.style.display = "block";
|
// Destroy previous instance
|
||||||
}
|
if (wsRef.current) {
|
||||||
if (timeRef.current) {
|
wsRef.current.destroy();
|
||||||
timeRef.current.textContent = formatTime(time);
|
wsRef.current = null;
|
||||||
}
|
}
|
||||||
});
|
|
||||||
|
const ws = WaveSurfer.create({
|
||||||
ws.load(audioUrl);
|
container: waveRef.current,
|
||||||
wsRef.current = ws;
|
height: 56,
|
||||||
|
waveColor: "#6d28d9",
|
||||||
return () => {
|
progressColor: "#a855f7",
|
||||||
ws.destroy();
|
barWidth: 2,
|
||||||
wsRef.current = null;
|
barGap: 1,
|
||||||
setIsPlaying(false);
|
barRadius: 2,
|
||||||
if (playheadRef.current) playheadRef.current.style.display = "none";
|
cursorWidth: 1,
|
||||||
if (timeRef.current) timeRef.current.textContent = formatTime(0);
|
cursorColor: "#e879f9",
|
||||||
};
|
interact: true,
|
||||||
}, [audioUrl, waveReady]);
|
normalize: true,
|
||||||
|
});
|
||||||
// Callback ref to detect when waveRef div mounts
|
|
||||||
const waveCallbackRef = useCallback((node: HTMLDivElement | null) => {
|
// Click waveform → seek + auto-play
|
||||||
(waveRef as React.MutableRefObject<HTMLDivElement | null>).current = node;
|
ws.on("interaction", () => ws.play());
|
||||||
setWaveReady(!!node);
|
ws.on("play", () => setIsPlaying(true));
|
||||||
}, []);
|
ws.on("pause", () => setIsPlaying(false));
|
||||||
|
ws.on("finish", () => {
|
||||||
const handlePlayPause = useCallback(() => {
|
setIsPlaying(false);
|
||||||
wsRef.current?.playPause();
|
if (playheadRef.current) playheadRef.current.style.display = "none";
|
||||||
}, []);
|
});
|
||||||
|
// High-frequency: update playhead + time via refs (no React re-render)
|
||||||
// Drag-to-reorder handlers
|
ws.on("timeupdate", (time: number) => {
|
||||||
const handleDragStart = useCallback((idx: number, e: React.DragEvent) => {
|
const dur = audioDurationRef.current;
|
||||||
setDragFromIdx(idx);
|
if (playheadRef.current && dur > 0) {
|
||||||
e.dataTransfer.effectAllowed = "move";
|
playheadRef.current.style.left = `${(time / dur) * 100}%`;
|
||||||
e.dataTransfer.setData("text/plain", String(idx));
|
playheadRef.current.style.display = "block";
|
||||||
}, []);
|
}
|
||||||
|
if (timeRef.current) {
|
||||||
const handleDragOver = useCallback((idx: number, e: React.DragEvent) => {
|
timeRef.current.textContent = formatTime(time);
|
||||||
e.preventDefault();
|
}
|
||||||
e.dataTransfer.dropEffect = "move";
|
});
|
||||||
setDragOverIdx(idx);
|
|
||||||
}, []);
|
ws.load(audioUrl);
|
||||||
|
wsRef.current = ws;
|
||||||
const handleDragLeave = useCallback(() => {
|
|
||||||
setDragOverIdx(null);
|
return () => {
|
||||||
}, []);
|
ws.destroy();
|
||||||
|
wsRef.current = null;
|
||||||
const handleDrop = useCallback((toIdx: number, e: React.DragEvent) => {
|
setIsPlaying(false);
|
||||||
e.preventDefault();
|
if (playheadEl) playheadEl.style.display = "none";
|
||||||
const fromIdx = parseInt(e.dataTransfer.getData("text/plain"), 10);
|
if (timeEl) timeEl.textContent = formatTime(0);
|
||||||
if (!isNaN(fromIdx) && fromIdx !== toIdx) {
|
};
|
||||||
onReorderSegment(fromIdx, toIdx);
|
}, [audioUrl, waveReady]);
|
||||||
}
|
|
||||||
setDragFromIdx(null);
|
// Callback ref to detect when waveRef div mounts
|
||||||
setDragOverIdx(null);
|
const waveCallbackRef = useCallback((node: HTMLDivElement | null) => {
|
||||||
}, [onReorderSegment]);
|
(waveRef as React.MutableRefObject<HTMLDivElement | null>).current = node;
|
||||||
|
setWaveReady(!!node);
|
||||||
const handleDragEnd = useCallback(() => {
|
}, []);
|
||||||
setDragFromIdx(null);
|
|
||||||
setDragOverIdx(null);
|
const handlePlayPause = useCallback(() => {
|
||||||
}, []);
|
wsRef.current?.playPause();
|
||||||
|
}, []);
|
||||||
// Filter visible vs overflow segments
|
|
||||||
const visibleSegments = segments.filter((s) => s.start < audioDuration);
|
// Drag-to-reorder handlers
|
||||||
const overflowSegments = segments.filter((s) => s.start >= audioDuration);
|
const handleDragStart = useCallback((idx: number, e: React.DragEvent) => {
|
||||||
const hasSegments = visibleSegments.length > 0;
|
setDragFromIdx(idx);
|
||||||
|
e.dataTransfer.effectAllowed = "move";
|
||||||
return (
|
e.dataTransfer.setData("text/plain", String(idx));
|
||||||
<div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm">
|
}, []);
|
||||||
<div className="flex items-center justify-between mb-3">
|
|
||||||
<h2 className="text-base sm:text-lg font-semibold text-white flex items-center gap-2">
|
const handleDragOver = useCallback((idx: number, e: React.DragEvent) => {
|
||||||
🎞️ 时间轴编辑
|
e.preventDefault();
|
||||||
</h2>
|
e.dataTransfer.dropEffect = "move";
|
||||||
{audioUrl && (
|
setDragOverIdx(idx);
|
||||||
<div className="flex items-center gap-2 text-xs text-gray-400">
|
}, []);
|
||||||
<button
|
|
||||||
onClick={handlePlayPause}
|
const handleDragLeave = useCallback(() => {
|
||||||
className="w-7 h-7 flex items-center justify-center rounded-full bg-white/10 hover:bg-white/20 text-white transition-colors"
|
setDragOverIdx(null);
|
||||||
title={isPlaying ? "暂停" : "播放"}
|
}, []);
|
||||||
>
|
|
||||||
{isPlaying ? "⏸" : "▶"}
|
const handleDrop = useCallback((toIdx: number, e: React.DragEvent) => {
|
||||||
</button>
|
e.preventDefault();
|
||||||
<span ref={timeRef} className="tabular-nums">00:00.0</span>
|
const fromIdx = parseInt(e.dataTransfer.getData("text/plain"), 10);
|
||||||
<span className="text-gray-600">/</span>
|
if (!isNaN(fromIdx) && fromIdx !== toIdx) {
|
||||||
<span className="tabular-nums">{formatTime(audioDuration)}</span>
|
onReorderSegment(fromIdx, toIdx);
|
||||||
</div>
|
}
|
||||||
)}
|
setDragFromIdx(null);
|
||||||
</div>
|
setDragOverIdx(null);
|
||||||
|
}, [onReorderSegment]);
|
||||||
{/* Waveform — always rendered so ref stays mounted */}
|
|
||||||
<div className="relative mb-1">
|
const handleDragEnd = useCallback(() => {
|
||||||
<div ref={waveCallbackRef} className="rounded-lg overflow-hidden bg-black/20 cursor-pointer" style={{ minHeight: 56 }} />
|
setDragFromIdx(null);
|
||||||
</div>
|
setDragOverIdx(null);
|
||||||
|
}, []);
|
||||||
{/* Segment blocks or empty placeholder */}
|
|
||||||
{hasSegments ? (
|
// Filter visible vs overflow segments
|
||||||
<>
|
const visibleSegments = useMemo(() => segments.filter((s) => s.start < audioDuration), [segments, audioDuration]);
|
||||||
<div className="relative h-14 flex select-none">
|
const overflowSegments = useMemo(() => segments.filter((s) => s.start >= audioDuration), [segments, audioDuration]);
|
||||||
{/* Playhead — syncs with audio playback */}
|
const hasSegments = visibleSegments.length > 0;
|
||||||
<div
|
|
||||||
ref={playheadRef}
|
const content = (
|
||||||
className="absolute top-0 h-full w-0.5 bg-fuchsia-400 z-10 pointer-events-none"
|
<>
|
||||||
style={{ display: "none", left: "0%" }}
|
<div className="flex items-center justify-between mb-3">
|
||||||
/>
|
{!embedded ? (
|
||||||
{visibleSegments.map((seg, i) => {
|
<h2 className="text-base sm:text-lg font-semibold text-white flex items-center gap-2">
|
||||||
const left = (seg.start / audioDuration) * 100;
|
时间轴编辑
|
||||||
const width = ((seg.end - seg.start) / audioDuration) * 100;
|
</h2>
|
||||||
const segDur = seg.end - seg.start;
|
) : (
|
||||||
const isDragTarget = dragOverIdx === i && dragFromIdx !== i;
|
<h3 className="text-sm font-medium text-gray-400">时间轴编辑</h3>
|
||||||
|
)}
|
||||||
// Compute loop portion for the last visible segment
|
<div className="flex items-center gap-2 text-xs text-gray-400">
|
||||||
const isLastVisible = i === visibleSegments.length - 1;
|
<div ref={ratioRef} className="relative">
|
||||||
let loopPercent = 0;
|
<button
|
||||||
if (isLastVisible && audioDuration > 0) {
|
type="button"
|
||||||
const mat = materials.find((m) => m.id === seg.materialId);
|
onClick={() => setRatioOpen((v) => !v)}
|
||||||
const matDur = mat?.duration_sec ?? 0;
|
className="px-2 py-1 text-xs bg-white/10 hover:bg-white/20 rounded text-gray-300 whitespace-nowrap flex items-center gap-1 transition-all"
|
||||||
const effDur = (seg.sourceEnd > seg.sourceStart)
|
title="设置输出画面比例"
|
||||||
? (seg.sourceEnd - seg.sourceStart)
|
>
|
||||||
: matDur;
|
画面: {currentRatioLabel}
|
||||||
if (effDur > 0 && segDur > effDur + 0.1) {
|
<ChevronDown className={`h-3 w-3 transition-transform ${ratioOpen ? "rotate-180" : ""}`} />
|
||||||
loopPercent = ((segDur - effDur) / segDur) * 100;
|
</button>
|
||||||
}
|
{ratioOpen && (
|
||||||
}
|
<div className="absolute right-0 top-full mt-1 bg-gray-800 border border-white/20 rounded-lg shadow-xl py-1 z-50 min-w-[106px]">
|
||||||
|
{ratioOptions.map((opt) => (
|
||||||
return (
|
<button
|
||||||
<div key={seg.id} className="absolute top-0 h-full" style={{ left: `${left}%`, width: `${width}%` }}>
|
key={opt.value}
|
||||||
<button
|
type="button"
|
||||||
draggable
|
onClick={() => {
|
||||||
onDragStart={(e) => handleDragStart(i, e)}
|
onOutputAspectRatioChange(opt.value);
|
||||||
onDragOver={(e) => handleDragOver(i, e)}
|
setRatioOpen(false);
|
||||||
onDragLeave={handleDragLeave}
|
}}
|
||||||
onDrop={(e) => handleDrop(i, e)}
|
className={`w-full text-left px-3 py-1.5 text-xs transition-colors ${
|
||||||
onDragEnd={handleDragEnd}
|
outputAspectRatio === opt.value
|
||||||
onClick={() => onClickSegment(seg)}
|
? "bg-purple-600/40 text-purple-200"
|
||||||
className={`relative w-full h-full rounded-lg flex flex-col items-center justify-center overflow-hidden cursor-grab active:cursor-grabbing transition-all border ${
|
: "text-gray-300 hover:bg-white/10"
|
||||||
isDragTarget
|
}`}
|
||||||
? "ring-2 ring-purple-400 border-purple-400 scale-[1.02]"
|
>
|
||||||
: dragFromIdx === i
|
{opt.label}
|
||||||
? "opacity-50 border-white/10"
|
</button>
|
||||||
: "hover:opacity-90 border-white/10"
|
))}
|
||||||
}`}
|
</div>
|
||||||
style={{ backgroundColor: seg.color + "33", borderColor: isDragTarget ? undefined : seg.color + "66" }}
|
)}
|
||||||
title={`拖拽可调换顺序 · 点击设置截取范围\n${seg.materialName}\n${segDur.toFixed(1)}s${loopPercent > 0 ? ` (含循环 ${(segDur * loopPercent / 100).toFixed(1)}s)` : ""}`}
|
</div>
|
||||||
>
|
|
||||||
<span className="text-[11px] text-white/90 truncate max-w-full px-1 leading-tight z-[1]">
|
{audioUrl && (
|
||||||
{seg.materialName}
|
<>
|
||||||
</span>
|
<button
|
||||||
<span className="text-[10px] text-white/60 leading-tight z-[1]">
|
onClick={handlePlayPause}
|
||||||
{segDur.toFixed(1)}s
|
className="w-7 h-7 flex items-center justify-center rounded-full bg-white/10 hover:bg-white/20 text-white transition-colors"
|
||||||
</span>
|
title={isPlaying ? "暂停" : "播放"}
|
||||||
{seg.sourceStart > 0 && (
|
>
|
||||||
<span className="text-[9px] text-amber-400/80 leading-tight z-[1]">
|
{isPlaying ? "⏸" : "▶"}
|
||||||
✂ {seg.sourceStart.toFixed(1)}s
|
</button>
|
||||||
</span>
|
<span ref={timeRef} className="tabular-nums">00:00.0</span>
|
||||||
)}
|
<span className="text-gray-600">/</span>
|
||||||
{/* Loop fill stripe overlay */}
|
<span className="tabular-nums">{formatTime(audioDuration)}</span>
|
||||||
{loopPercent > 0 && (
|
</>
|
||||||
<div
|
)}
|
||||||
className="absolute top-0 right-0 h-full pointer-events-none flex items-center justify-center"
|
</div>
|
||||||
style={{
|
</div>
|
||||||
width: `${loopPercent}%`,
|
|
||||||
background: `repeating-linear-gradient(-45deg, transparent, transparent 3px, rgba(255,255,255,0.07) 3px, rgba(255,255,255,0.07) 6px)`,
|
{/* Waveform — always rendered so ref stays mounted */}
|
||||||
borderLeft: "1px dashed rgba(255,255,255,0.25)",
|
<div className="relative mb-1">
|
||||||
}}
|
<div ref={waveCallbackRef} className="rounded-lg overflow-hidden bg-black/20 cursor-pointer" style={{ minHeight: 56 }} />
|
||||||
>
|
</div>
|
||||||
<span className="text-[9px] text-white/30">循环</span>
|
|
||||||
</div>
|
{/* Segment blocks or empty placeholder */}
|
||||||
)}
|
{hasSegments ? (
|
||||||
</button>
|
<>
|
||||||
</div>
|
<div className="relative h-14 flex select-none">
|
||||||
);
|
{/* Playhead — syncs with audio playback */}
|
||||||
})}
|
<div
|
||||||
</div>
|
ref={playheadRef}
|
||||||
|
className="absolute top-0 h-full w-0.5 bg-fuchsia-400 z-10 pointer-events-none"
|
||||||
{/* Overflow segments — shown as gray chips */}
|
style={{ display: "none", left: "0%" }}
|
||||||
{overflowSegments.length > 0 && (
|
/>
|
||||||
<div className="flex flex-wrap items-center gap-1.5 mt-1.5">
|
{visibleSegments.map((seg, i) => {
|
||||||
<span className="text-[10px] text-gray-500">未使用:</span>
|
const left = (seg.start / audioDuration) * 100;
|
||||||
{overflowSegments.map((seg) => (
|
const width = ((seg.end - seg.start) / audioDuration) * 100;
|
||||||
<span
|
const segDur = seg.end - seg.start;
|
||||||
key={seg.id}
|
const isDragTarget = dragOverIdx === i && dragFromIdx !== i;
|
||||||
className="text-[10px] text-gray-500 bg-white/5 border border-white/10 rounded px-1.5 py-0.5"
|
|
||||||
>
|
// Compute loop portion for the last visible segment
|
||||||
{seg.materialName}
|
const isLastVisible = i === visibleSegments.length - 1;
|
||||||
</span>
|
let loopPercent = 0;
|
||||||
))}
|
if (isLastVisible && audioDuration > 0) {
|
||||||
</div>
|
const mat = materials.find((m) => m.id === seg.materialId);
|
||||||
)}
|
const matDur = mat?.duration_sec ?? 0;
|
||||||
|
const effDur = (seg.sourceEnd > seg.sourceStart)
|
||||||
<p className="text-[10px] text-gray-500 mt-1.5">
|
? (seg.sourceEnd - seg.sourceStart)
|
||||||
点击波形定位播放 · 拖拽色块调换顺序 · 点击色块设置截取范围
|
: Math.max(matDur - seg.sourceStart, 0);
|
||||||
</p>
|
if (effDur > 0 && segDur > effDur + 0.1) {
|
||||||
</>
|
loopPercent = ((segDur - effDur) / segDur) * 100;
|
||||||
) : (
|
}
|
||||||
<>
|
}
|
||||||
<div className="h-14 bg-white/5 rounded-lg" />
|
|
||||||
<p className="text-[10px] text-gray-500 mt-1.5">
|
return (
|
||||||
选中配音和素材后可编辑时间轴
|
<div key={seg.id} className="absolute top-0 h-full" style={{ left: `${left}%`, width: `${width}%` }}>
|
||||||
</p>
|
<button
|
||||||
</>
|
draggable
|
||||||
)}
|
onDragStart={(e) => handleDragStart(i, e)}
|
||||||
</div>
|
onDragOver={(e) => handleDragOver(i, e)}
|
||||||
);
|
onDragLeave={handleDragLeave}
|
||||||
}
|
onDrop={(e) => handleDrop(i, e)}
|
||||||
|
onDragEnd={handleDragEnd}
|
||||||
|
onClick={() => onClickSegment(seg)}
|
||||||
|
className={`relative w-full h-full rounded-lg flex flex-col items-center justify-center overflow-hidden cursor-grab active:cursor-grabbing transition-all border ${
|
||||||
|
isDragTarget
|
||||||
|
? "ring-2 ring-purple-400 border-purple-400 scale-[1.02]"
|
||||||
|
: dragFromIdx === i
|
||||||
|
? "opacity-50 border-white/10"
|
||||||
|
: "hover:opacity-90 border-white/10"
|
||||||
|
}`}
|
||||||
|
style={{ backgroundColor: seg.color + "33", borderColor: isDragTarget ? undefined : seg.color + "66" }}
|
||||||
|
title={`拖拽可调换顺序 · 点击设置截取范围\n${seg.materialName}\n${segDur.toFixed(1)}s${loopPercent > 0 ? ` (含循环 ${(segDur * loopPercent / 100).toFixed(1)}s)` : ""}`}
|
||||||
|
>
|
||||||
|
<GripVertical className="absolute top-0.5 left-0.5 h-3 w-3 text-white/30 z-[1]" />
|
||||||
|
<span className="text-[11px] text-white/90 truncate max-w-full px-1 leading-tight z-[1]">
|
||||||
|
{seg.materialName}
|
||||||
|
</span>
|
||||||
|
<span className="text-[10px] text-white/60 leading-tight z-[1]">
|
||||||
|
{segDur.toFixed(1)}s
|
||||||
|
</span>
|
||||||
|
{seg.sourceStart > 0 && (
+                        <span className="text-[9px] text-amber-400/80 leading-tight z-[1]">
+                          ✂ {seg.sourceStart.toFixed(1)}s
+                        </span>
+                      )}
+                      {/* Loop fill stripe overlay */}
+                      {loopPercent > 0 && (
+                        <div
+                          className="absolute top-0 right-0 h-full pointer-events-none flex items-center justify-center"
+                          style={{
+                            width: `${loopPercent}%`,
+                            background: `repeating-linear-gradient(-45deg, transparent, transparent 3px, rgba(255,255,255,0.07) 3px, rgba(255,255,255,0.07) 6px)`,
+                            borderLeft: "1px dashed rgba(255,255,255,0.25)",
+                          }}
+                        >
+                          <span className="text-[9px] text-white/30">循环</span>
+                        </div>
+                      )}
+                    </button>
+                  </div>
+                );
+              })}
+            </div>
+
+            {/* Overflow segments — shown as gray chips */}
+            {overflowSegments.length > 0 && (
+              <div className="flex flex-wrap items-center gap-1.5 mt-1.5">
+                <span className="text-[10px] text-gray-500">未使用:</span>
+                {overflowSegments.map((seg) => (
+                  <span
+                    key={seg.id}
+                    className="text-[10px] text-gray-500 bg-white/5 border border-white/10 rounded px-1.5 py-0.5"
+                  >
+                    {seg.materialName}
+                  </span>
+                ))}
+              </div>
+            )}
+
+            <p className="text-[10px] text-gray-500 mt-1.5">
+              点击波形定位播放 · 拖拽色块调换顺序 · 点击色块设置截取范围
+            </p>
+          </>
+        ) : (
+          <>
+            <div className="h-14 bg-white/5 rounded-lg" />
+            <p className="text-[10px] text-gray-500 mt-1.5">
+              选中配音和素材后可编辑时间轴
+            </p>
+          </>
+        )}
+    </>
+  );
+
+  if (embedded) return content;
+
+  return (
+    <div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm">
+      {content}
+    </div>
+  );
+}
@@ -1,4 +1,4 @@
-import { Eye } from "lucide-react";
+import { ChevronDown, Eye } from "lucide-react";
 import { FloatingStylePreview } from "@/features/home/ui/FloatingStylePreview";
 
 interface SubtitleStyleOption {
@@ -38,11 +38,21 @@ interface TitleSubtitlePanelProps {
   onTitleChange: (value: string) => void;
   onTitleCompositionStart?: () => void;
   onTitleCompositionEnd?: (value: string) => void;
+  videoSecondaryTitle: string;
+  onSecondaryTitleChange: (value: string) => void;
+  onSecondaryTitleCompositionStart?: () => void;
+  onSecondaryTitleCompositionEnd?: (value: string) => void;
   titleStyles: TitleStyleOption[];
   selectedTitleStyleId: string;
   onSelectTitleStyle: (id: string) => void;
   titleFontSize: number;
   onTitleFontSizeChange: (value: number) => void;
+  selectedSecondaryTitleStyleId: string;
+  onSelectSecondaryTitleStyle: (id: string) => void;
+  secondaryTitleFontSize: number;
+  onSecondaryTitleFontSizeChange: (value: number) => void;
+  secondaryTitleTopMargin: number;
+  onSecondaryTitleTopMarginChange: (value: number) => void;
   subtitleStyles: SubtitleStyleOption[];
   selectedSubtitleStyleId: string;
   onSelectSubtitleStyle: (id: string) => void;
@@ -52,11 +62,14 @@ interface TitleSubtitlePanelProps {
   onTitleTopMarginChange: (value: number) => void;
   subtitleBottomMargin: number;
   onSubtitleBottomMarginChange: (value: number) => void;
+  titleDisplayMode: "short" | "persistent";
+  onTitleDisplayModeChange: (mode: "short" | "persistent") => void;
   resolveAssetUrl: (path?: string | null) => string | null;
   getFontFormat: (fontFile?: string) => string;
   buildTextShadow: (color: string, size: number) => string;
   previewBaseWidth?: number;
   previewBaseHeight?: number;
+  previewBackgroundUrl?: string | null;
 }
 
 export function TitleSubtitlePanel({
@@ -66,11 +79,21 @@ export function TitleSubtitlePanel({
   onTitleChange,
   onTitleCompositionStart,
   onTitleCompositionEnd,
+  videoSecondaryTitle,
+  onSecondaryTitleChange,
+  onSecondaryTitleCompositionStart,
+  onSecondaryTitleCompositionEnd,
   titleStyles,
   selectedTitleStyleId,
   onSelectTitleStyle,
   titleFontSize,
   onTitleFontSizeChange,
+  selectedSecondaryTitleStyleId,
+  onSelectSecondaryTitleStyle,
+  secondaryTitleFontSize,
+  onSecondaryTitleFontSizeChange,
+  secondaryTitleTopMargin,
+  onSecondaryTitleTopMarginChange,
   subtitleStyles,
   selectedSubtitleStyleId,
   onSelectSubtitleStyle,
@@ -80,34 +103,55 @@ export function TitleSubtitlePanel({
   onTitleTopMarginChange,
   subtitleBottomMargin,
   onSubtitleBottomMarginChange,
+  titleDisplayMode,
+  onTitleDisplayModeChange,
   resolveAssetUrl,
   getFontFormat,
   buildTextShadow,
   previewBaseWidth = 1080,
   previewBaseHeight = 1920,
+  previewBackgroundUrl,
 }: TitleSubtitlePanelProps) {
   return (
     <div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm">
       <div className="flex items-center justify-between mb-4 gap-2">
         <h2 className="text-base sm:text-lg font-semibold text-white flex items-center gap-2">
-          🎬 标题与字幕
+          四、标题与字幕
         </h2>
-        <button
-          onClick={onTogglePreview}
-          className="px-2 py-1 text-xs bg-white/10 hover:bg-white/20 rounded text-gray-300 flex items-center gap-1"
-        >
-          <Eye className="h-3.5 w-3.5" />
-          {showStylePreview ? "收起预览" : "预览样式"}
-        </button>
+        <div className="flex items-center gap-1.5">
+          <div className="relative shrink-0">
+            <select
+              value={titleDisplayMode}
+              onChange={(e) => onTitleDisplayModeChange(e.target.value as "short" | "persistent")}
+              className="appearance-none rounded-lg border border-white/15 bg-black/35 px-2.5 py-1.5 pr-7 text-xs text-gray-200 outline-none transition-colors hover:border-white/25 focus:border-purple-500"
+              aria-label="标题显示方式"
+            >
+              <option value="short">标题短暂显示</option>
+              <option value="persistent">标题常驻显示</option>
+            </select>
+            <ChevronDown className="pointer-events-none absolute right-2 top-1/2 h-3.5 w-3.5 -translate-y-1/2 text-gray-400" />
+          </div>
+          <button
+            onClick={onTogglePreview}
+            className="px-2 py-1 text-xs bg-white/10 hover:bg-white/20 rounded text-gray-300 flex items-center gap-1"
+          >
+            <Eye className="h-3.5 w-3.5" />
+            {showStylePreview ? "收起预览" : "预览样式"}
+          </button>
+        </div>
       </div>
 
       {showStylePreview && (
         <FloatingStylePreview
           onClose={onTogglePreview}
           videoTitle={videoTitle}
+          videoSecondaryTitle={videoSecondaryTitle}
           titleStyles={titleStyles}
           selectedTitleStyleId={selectedTitleStyleId}
           titleFontSize={titleFontSize}
+          selectedSecondaryTitleStyleId={selectedSecondaryTitleStyleId}
+          secondaryTitleFontSize={secondaryTitleFontSize}
+          secondaryTitleTopMargin={secondaryTitleTopMargin}
          subtitleStyles={subtitleStyles}
           selectedSubtitleStyleId={selectedSubtitleStyleId}
           subtitleFontSize={subtitleFontSize}
@@ -119,11 +163,15 @@ export function TitleSubtitlePanel({
           buildTextShadow={buildTextShadow}
           previewBaseWidth={previewBaseWidth}
           previewBaseHeight={previewBaseHeight}
+          previewBackgroundUrl={previewBackgroundUrl}
         />
       )}
 
       <div className="mb-4">
-        <label className="text-sm text-gray-300 mb-2 block">片头标题(限制15个字)</label>
+        <div className="flex items-center justify-between mb-2">
+          <label className="text-sm text-gray-300">片头标题</label>
+          <span className={`text-xs ${videoTitle.length > 15 ? "text-red-400" : "text-gray-500"}`}>{videoTitle.length}/15</span>
+        </div>
         <input
           type="text"
           value={videoTitle}
@@ -135,96 +183,102 @@ export function TitleSubtitlePanel({
         />
       </div>
 
+      <div className="mb-4">
+        <div className="flex items-center justify-between mb-2">
+          <label className="text-sm text-gray-300">片头副标题</label>
+          <span className={`text-xs ${videoSecondaryTitle.length > 20 ? "text-red-400" : "text-gray-500"}`}>{videoSecondaryTitle.length}/20</span>
+        </div>
+        <input
+          type="text"
+          value={videoSecondaryTitle}
+          onChange={(e) => onSecondaryTitleChange(e.target.value)}
+          onCompositionStart={onSecondaryTitleCompositionStart}
+          onCompositionEnd={(e) => onSecondaryTitleCompositionEnd?.(e.currentTarget.value)}
+          placeholder="输入副标题,显示在主标题下方"
+          className="w-full px-3 sm:px-4 py-2 text-sm sm:text-base bg-black/30 border border-white/10 rounded-xl text-white placeholder-gray-500 focus:outline-none focus:border-purple-500 transition-colors"
+        />
+      </div>
+
       {titleStyles.length > 0 && (
-        <div className="mb-4">
-          <label className="text-sm text-gray-300 mb-2 block">标题样式</label>
-          <div className="grid grid-cols-2 gap-2">
-            {titleStyles.map((style) => (
-              <button
-                key={style.id}
-                onClick={() => onSelectTitleStyle(style.id)}
-                className={`p-2 rounded-lg border transition-all text-left ${selectedTitleStyleId === style.id
-                  ? "border-purple-500 bg-purple-500/20"
-                  : "border-white/10 bg-white/5 hover:border-white/30"
-                  }`}
+        <div className="mb-4 space-y-3">
+          <div className="flex items-center gap-3">
+            <label className="text-sm text-gray-300 shrink-0 w-20">标题样式</label>
+            <div className="relative w-1/3 min-w-[100px]">
+              <select
+                value={selectedTitleStyleId}
+                onChange={(e) => onSelectTitleStyle(e.target.value)}
+                className="w-full appearance-none rounded-lg border border-white/15 bg-black/35 px-3 py-2 pr-8 text-sm text-gray-200 outline-none transition-colors hover:border-white/25 focus:border-purple-500"
               >
-                <div className="text-white text-sm truncate">{style.label}</div>
-                <div className="text-xs text-gray-400 truncate">
-                  {style.font_family || style.font_file || ""}
-                </div>
-              </button>
-            ))}
+                {titleStyles.map((style) => (
+                  <option key={style.id} value={style.id}>{style.label}</option>
+                ))}
+              </select>
+              <ChevronDown className="pointer-events-none absolute right-2.5 top-1/2 h-3.5 w-3.5 -translate-y-1/2 text-gray-400" />
+            </div>
           </div>
-          <div className="mt-3">
-            <label className="text-xs text-gray-400 mb-2 block">标题字号: {titleFontSize}px</label>
-            <input
-              type="range"
-              min="60"
-              max="150"
-              step="1"
-              value={titleFontSize}
-              onChange={(e) => onTitleFontSizeChange(parseInt(e.target.value, 10))}
-              className="w-full accent-purple-500"
-            />
+          <div className="flex items-center gap-3">
+            <label className="text-xs text-gray-400 shrink-0 w-20">字号 {titleFontSize}</label>
+            <input type="range" min="60" max="150" step="1" value={titleFontSize} onChange={(e) => onTitleFontSizeChange(parseInt(e.target.value, 10))} className="flex-1 accent-purple-500" />
           </div>
-          <div className="mt-3">
-            <label className="text-xs text-gray-400 mb-2 block">标题位置: {titleTopMargin}px</label>
-            <input
-              type="range"
-              min="0"
-              max="300"
-              step="1"
-              value={titleTopMargin}
-              onChange={(e) => onTitleTopMarginChange(parseInt(e.target.value, 10))}
-              className="w-full accent-purple-500"
-            />
+          <div className="flex items-center gap-3">
+            <label className="text-xs text-gray-400 shrink-0 w-20">位置 {titleTopMargin}</label>
+            <input type="range" min="0" max="300" step="1" value={titleTopMargin} onChange={(e) => onTitleTopMarginChange(parseInt(e.target.value, 10))} className="flex-1 accent-purple-500" />
+          </div>
+        </div>
+      )}
+
+      {titleStyles.length > 0 && (
+        <div className="mb-4 space-y-3">
+          <div className="flex items-center gap-3">
+            <label className="text-sm text-gray-300 shrink-0 w-20">副标题样式</label>
+            <div className="relative w-1/3 min-w-[100px]">
+              <select
+                value={selectedSecondaryTitleStyleId}
+                onChange={(e) => onSelectSecondaryTitleStyle(e.target.value)}
+                className="w-full appearance-none rounded-lg border border-white/15 bg-black/35 px-3 py-2 pr-8 text-sm text-gray-200 outline-none transition-colors hover:border-white/25 focus:border-purple-500"
+              >
+                {titleStyles.map((style) => (
+                  <option key={style.id} value={style.id}>{style.label}</option>
+                ))}
+              </select>
+              <ChevronDown className="pointer-events-none absolute right-2.5 top-1/2 h-3.5 w-3.5 -translate-y-1/2 text-gray-400" />
+            </div>
+          </div>
+          <div className="flex items-center gap-3">
+            <label className="text-xs text-gray-400 shrink-0 w-20">字号 {secondaryTitleFontSize}</label>
+            <input type="range" min="30" max="100" step="1" value={secondaryTitleFontSize} onChange={(e) => onSecondaryTitleFontSizeChange(parseInt(e.target.value, 10))} className="flex-1 accent-purple-500" />
+          </div>
+          <div className="flex items-center gap-3">
+            <label className="text-xs text-gray-400 shrink-0 w-20">间距 {secondaryTitleTopMargin}</label>
+            <input type="range" min="0" max="100" step="1" value={secondaryTitleTopMargin} onChange={(e) => onSecondaryTitleTopMarginChange(parseInt(e.target.value, 10))} className="flex-1 accent-purple-500" />
           </div>
         </div>
       )}
 
       {subtitleStyles.length > 0 && (
-        <div className="mt-4">
-          <label className="text-sm text-gray-300 mb-2 block">字幕样式</label>
-          <div className="grid grid-cols-2 gap-2">
-            {subtitleStyles.map((style) => (
-              <button
-                key={style.id}
-                onClick={() => onSelectSubtitleStyle(style.id)}
-                className={`p-2 rounded-lg border transition-all text-left ${selectedSubtitleStyleId === style.id
-                  ? "border-purple-500 bg-purple-500/20"
-                  : "border-white/10 bg-white/5 hover:border-white/30"
-                  }`}
+        <div className="mt-4 space-y-3">
+          <div className="flex items-center gap-3">
+            <label className="text-sm text-gray-300 shrink-0 w-20">字幕样式</label>
+            <div className="relative w-1/3 min-w-[100px]">
+              <select
+                value={selectedSubtitleStyleId}
+                onChange={(e) => onSelectSubtitleStyle(e.target.value)}
+                className="w-full appearance-none rounded-lg border border-white/15 bg-black/35 px-3 py-2 pr-8 text-sm text-gray-200 outline-none transition-colors hover:border-white/25 focus:border-purple-500"
               >
-                <div className="text-white text-sm truncate">{style.label}</div>
-                <div className="text-xs text-gray-400 truncate">
-                  {style.font_family || style.font_file || ""}
-                </div>
-              </button>
-            ))}
+                {subtitleStyles.map((style) => (
+                  <option key={style.id} value={style.id}>{style.label}</option>
+                ))}
+              </select>
+              <ChevronDown className="pointer-events-none absolute right-2.5 top-1/2 h-3.5 w-3.5 -translate-y-1/2 text-gray-400" />
+            </div>
           </div>
-          <div className="mt-3">
-            <label className="text-xs text-gray-400 mb-2 block">字幕字号: {subtitleFontSize}px</label>
-            <input
-              type="range"
-              min="40"
-              max="90"
-              step="1"
-              value={subtitleFontSize}
-              onChange={(e) => onSubtitleFontSizeChange(parseInt(e.target.value, 10))}
-              className="w-full accent-purple-500"
-            />
+          <div className="flex items-center gap-3">
+            <label className="text-xs text-gray-400 shrink-0 w-20">字号 {subtitleFontSize}</label>
+            <input type="range" min="40" max="90" step="1" value={subtitleFontSize} onChange={(e) => onSubtitleFontSizeChange(parseInt(e.target.value, 10))} className="flex-1 accent-purple-500" />
           </div>
-          <div className="mt-3">
-            <label className="text-xs text-gray-400 mb-2 block">字幕位置: {subtitleBottomMargin}px</label>
-            <input
-              type="range"
-              min="0"
-              max="300"
-              step="1"
-              value={subtitleBottomMargin}
-              onChange={(e) => onSubtitleBottomMarginChange(parseInt(e.target.value, 10))}
-              className="w-full accent-purple-500"
-            />
+          <div className="flex items-center gap-3">
+            <label className="text-xs text-gray-400 shrink-0 w-20">位置 {subtitleBottomMargin}</label>
+            <input type="range" min="0" max="300" step="1" value={subtitleBottomMargin} onChange={(e) => onSubtitleBottomMarginChange(parseInt(e.target.value, 10))} className="flex-1 accent-purple-500" />
           </div>
         </div>
       )}
@@ -13,6 +13,7 @@ interface VoiceSelectorProps {
   voice: string;
   onSelectVoice: (id: string) => void;
   voiceCloneSlot: ReactNode;
+  embedded?: boolean;
 }
 
 export function VoiceSelector({
@@ -22,32 +23,29 @@ export function VoiceSelector({
   voice,
   onSelectVoice,
   voiceCloneSlot,
+  embedded = false,
 }: VoiceSelectorProps) {
-  return (
-    <div className="bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm">
-      <h2 className="text-lg font-semibold text-white mb-4 flex items-center gap-2">
-        🎙️ 配音方式
-      </h2>
-
+  const content = (
+    <>
       <div className="flex gap-2 mb-4">
         <button
           onClick={() => onSelectTtsMode("edgetts")}
-          className={`flex-1 py-2 px-4 rounded-lg font-medium transition-all flex items-center justify-center gap-2 ${ttsMode === "edgetts"
+          className={`flex-1 py-2 px-2 sm:px-4 rounded-lg text-sm sm:text-base font-medium transition-all flex items-center justify-center gap-1.5 sm:gap-2 ${ttsMode === "edgetts"
            ? "bg-purple-600 text-white"
            : "bg-white/10 text-gray-300 hover:bg-white/20"
            }`}
         >
-          <Volume2 className="h-4 w-4" />
+          <Volume2 className="h-4 w-4 shrink-0" />
           选择声音
         </button>
         <button
           onClick={() => onSelectTtsMode("voiceclone")}
-          className={`flex-1 py-2 px-4 rounded-lg font-medium transition-all flex items-center justify-center gap-2 ${ttsMode === "voiceclone"
+          className={`flex-1 py-2 px-2 sm:px-4 rounded-lg text-sm sm:text-base font-medium transition-all flex items-center justify-center gap-1.5 sm:gap-2 ${ttsMode === "voiceclone"
            ? "bg-purple-600 text-white"
            : "bg-white/10 text-gray-300 hover:bg-white/20"
            }`}
         >
-          <Mic className="h-4 w-4" />
+          <Mic className="h-4 w-4 shrink-0" />
           克隆声音
         </button>
       </div>
@@ -70,6 +68,17 @@ export function VoiceSelector({
       )}
 
       {ttsMode === "voiceclone" && voiceCloneSlot}
+    </>
+  );
+
+  if (embedded) return content;
+
+  return (
+    <div className="bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm">
+      <h2 className="text-lg font-semibold text-white mb-4 flex items-center gap-2">
+        🎙️ 配音方式
+      </h2>
+      {content}
     </div>
   );
 }
@@ -15,9 +15,7 @@ interface UseScriptExtractionOptions {
 export const useScriptExtraction = ({ isOpen }: UseScriptExtractionOptions) => {
   const [isLoading, setIsLoading] = useState(false);
   const [script, setScript] = useState("");
-  const [rewrittenScript, setRewrittenScript] = useState("");
   const [error, setError] = useState<string | null>(null);
-  const [doRewrite, setDoRewrite] = useState(true);
   const [step, setStep] = useState<ExtractionStep>("config");
   const [dragActive, setDragActive] = useState(false);
   const [selectedFile, setSelectedFile] = useState<File | null>(null);
@@ -29,7 +27,6 @@ export const useScriptExtraction = ({ isOpen }: UseScriptExtractionOptions) => {
     if (isOpen) {
       setStep("config");
       setScript("");
-      setRewrittenScript("");
       setError(null);
       setIsLoading(false);
       setSelectedFile(null);
@@ -100,10 +97,10 @@ export const useScriptExtraction = ({ isOpen }: UseScriptExtractionOptions) => {
       } else if (activeTab === "url") {
         formData.append("url", inputUrl.trim());
       }
-      formData.append("rewrite", doRewrite ? "true" : "false");
+      formData.append("rewrite", "false");
 
       const { data: res } = await api.post<
-        ApiResponse<{ original_script: string; rewritten_script?: string }>
+        ApiResponse<{ original_script: string }>
       >("/api/tools/extract-script", formData, {
         headers: { "Content-Type": "multipart/form-data" },
         timeout: 180000, // 3 minutes timeout
@@ -111,7 +108,6 @@ export const useScriptExtraction = ({ isOpen }: UseScriptExtractionOptions) => {
 
       const payload = unwrap(res);
       setScript(payload.original_script);
-      setRewrittenScript(payload.rewritten_script || "");
       setStep("result");
     } catch (err: unknown) {
       console.error(err);
@@ -126,7 +122,7 @@ export const useScriptExtraction = ({ isOpen }: UseScriptExtractionOptions) => {
     } finally {
       setIsLoading(false);
     }
-  }, [activeTab, selectedFile, inputUrl, doRewrite]);
+  }, [activeTab, selectedFile, inputUrl]);
 
   const copyToClipboard = useCallback((text: string) => {
     if (navigator.clipboard && window.isSecureContext) {
@@ -185,16 +181,13 @@ export const useScriptExtraction = ({ isOpen }: UseScriptExtractionOptions) => {
     // State
     isLoading,
     script,
-    rewrittenScript,
     error,
-    doRewrite,
     step,
     dragActive,
     selectedFile,
     activeTab,
     inputUrl,
     // Setters
-    setDoRewrite,
     setActiveTab,
     setInputUrl,
     // Handlers
@@ -83,6 +83,8 @@ export const usePublishController = () => {
       setVideos(nextVideos);
       if (nextVideos.length > 0 && autoSelectLatest) {
         setSelectedVideo(nextVideos[0].id);
+        // 写入跨页面共享标记,让首页也能感知最新生成的视频
+        localStorage.setItem(`vigent_${getStorageKey()}_latestGeneratedVideoId`, nextVideos[0].id);
       }
       updatePrefetch({ videos: nextVideos });
     } catch (error) {
@@ -109,16 +111,23 @@ export const usePublishController = () => {
 
   // ---- 视频选择恢复(唯一一个 effect,条件极简) ----
   // 等 auth 完成 + videos 有数据 → 恢复一次,之后再也不跑
+  // 优先检查跨页面共享标记(最新生成的视频),其次恢复上次选择
   useEffect(() => {
     if (isAuthLoading || videos.length === 0 || videoRestoredRef.current) return;
     videoRestoredRef.current = true;
 
     const key = getStorageKey();
-    const saved = localStorage.getItem(`vigent_${key}_publish_selected_video`);
-    if (saved && videos.some(v => v.id === saved)) {
-      setSelectedVideo(saved);
+    const latestId = localStorage.getItem(`vigent_${key}_latestGeneratedVideoId`);
+    if (latestId && videos.some(v => v.id === latestId)) {
+      setSelectedVideo(latestId);
+      localStorage.removeItem(`vigent_${key}_latestGeneratedVideoId`);
     } else {
-      setSelectedVideo(videos[0].id);
+      const saved = localStorage.getItem(`vigent_${key}_publish_selected_video`);
+      if (saved && videos.some(v => v.id === saved)) {
+        setSelectedVideo(saved);
+      } else {
+        setSelectedVideo(videos[0].id);
+      }
     }
   }, [isAuthLoading, videos, getStorageKey]);
 
@@ -135,7 +135,7 @@ export function PublishPage() {
     <div className="space-y-6">
       <div className="bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm">
         <h2 className="text-lg font-semibold text-white mb-4 flex items-center gap-2">
-          👤 平台账号
+          七、平台账号
         </h2>
 
         {isAccountsLoading ? (
@@ -157,62 +157,60 @@ export function PublishPage() {
             ))}
           </div>
         ) : (
-          <div className="space-y-3">
+          <div className="space-y-2 sm:space-y-3">
             {accounts.map((account) => (
               <div
                 key={account.platform}
-                className="flex items-center justify-between p-4 bg-black/30 rounded-xl"
+                className="flex items-center gap-3 px-3 py-2.5 sm:px-4 sm:py-3.5 bg-black/30 rounded-xl"
               >
-                <div className="flex items-center gap-3">
-                  {platformIcons[account.platform] ? (
-                    <Image
-                      src={platformIcons[account.platform].src}
-                      alt={platformIcons[account.platform].alt}
-                      width={28}
-                      height={28}
-                      className="h-7 w-7"
-                    />
-                  ) : (
-                    <span className="text-2xl">🌐</span>
-                  )}
-                  <div>
-                    <div className="text-white font-medium">
-                      {account.name}
-                    </div>
-                    <div
-                      className={`text-sm ${account.logged_in
-                        ? "text-green-400"
-                        : "text-gray-500"
-                        }`}
-                    >
-                      {account.logged_in ? "✓ 已登录" : "未登录"}
-                    </div>
-                  </div>
+                {platformIcons[account.platform] ? (
+                  <Image
+                    src={platformIcons[account.platform].src}
+                    alt={platformIcons[account.platform].alt}
+                    width={28}
+                    height={28}
+                    className="h-6 w-6 sm:h-7 sm:w-7 shrink-0"
+                  />
+                ) : (
+                  <span className="text-xl sm:text-2xl">🌐</span>
+                )}
+                <div className="min-w-0 flex-1">
+                  <div className="text-sm sm:text-base text-white font-medium leading-tight">
+                    {account.name}
+                  </div>
+                  <div
+                    className={`text-xs sm:text-sm leading-tight ${account.logged_in
+                      ? "text-green-400"
+                      : "text-gray-500"
+                      }`}
+                  >
+                    {account.logged_in ? "✓ 已登录" : "未登录"}
+                  </div>
                 </div>
-                <div className="flex gap-2">
+                <div className="flex items-center gap-1.5 sm:gap-2 shrink-0">
                   {account.logged_in ? (
                     <>
                       <button
                         onClick={() => handleLogin(account.platform)}
-                        className="px-3 py-1 bg-white/10 hover:bg-white/20 text-white text-sm rounded-lg transition-colors flex items-center gap-1"
+                        className="px-2 py-1 sm:px-3 sm:py-1.5 bg-white/10 hover:bg-white/20 text-white text-xs sm:text-sm rounded-md sm:rounded-lg transition-colors flex items-center gap-1"
                       >
-                        <RotateCcw className="h-3.5 w-3.5" />
+                        <RotateCcw className="h-3 w-3 sm:h-3.5 sm:w-3.5" />
                         重新登录
                       </button>
                       <button
                         onClick={() => handleLogout(account.platform)}
-                        className="px-3 py-1 bg-red-500/80 hover:bg-red-600 text-white text-sm rounded-lg transition-colors flex items-center gap-1"
+                        className="px-2 py-1 sm:px-3 sm:py-1.5 bg-red-500/80 hover:bg-red-600 text-white text-xs sm:text-sm rounded-md sm:rounded-lg transition-colors flex items-center gap-1"
                       >
-                        <LogOut className="h-3.5 w-3.5" />
+                        <LogOut className="h-3 w-3 sm:h-3.5 sm:w-3.5" />
                         注销
                       </button>
                     </>
                   ) : (
                     <button
                       onClick={() => handleLogin(account.platform)}
-                      className="px-3 py-1 bg-purple-500/80 hover:bg-purple-600 text-white text-sm rounded-lg transition-colors flex items-center gap-1"
+                      className="px-2 py-1 sm:px-3 sm:py-1.5 bg-purple-500/80 hover:bg-purple-600 text-white text-xs sm:text-sm rounded-md sm:rounded-lg transition-colors flex items-center gap-1"
                     >
-                      <QrCode className="h-3.5 w-3.5" />
+                      <QrCode className="h-3 w-3 sm:h-3.5 sm:w-3.5" />
                       登录
                     </button>
                   )}
@@ -228,7 +226,7 @@ export function PublishPage() {
           <div className="space-y-6">
             {/* 选择视频 */}
             <div className="bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm">
-              <h2 className="text-lg font-semibold text-white mb-4">📹 选择发布作品</h2>
+              <h2 className="text-lg font-semibold text-white mb-4">八、选择发布作品</h2>
 
               <div className="flex items-center gap-3 mb-4">
                 <Search className="text-gray-400 w-4 h-4" />
@@ -303,7 +301,7 @@ export function PublishPage() {
 
           {/* 填写信息 */}
           <div className="bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm">
-            <h2 className="text-lg font-semibold text-white mb-4">✍️ 发布信息</h2>
+            <h2 className="text-lg font-semibold text-white mb-4">九、发布信息</h2>
 
             <div className="space-y-4">
               <div>
@@ -337,7 +335,7 @@ export function PublishPage() {
 
           {/* 选择平台 */}
           <div className="bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm">
-            <h2 className="text-lg font-semibold text-white mb-4">📱 选择发布平台</h2>
+            <h2 className="text-lg font-semibold text-white mb-4">十、选择发布平台</h2>
 
             <div className="grid grid-cols-3 gap-3">
               {accounts
@@ -12,7 +12,7 @@ const API_BASE = typeof window === 'undefined'
 // 防止重复跳转
 let isRedirecting = false;
 
-const PUBLIC_PATHS = new Set(['/login', '/register']);
+const PUBLIC_PATHS = new Set(['/login', '/register', '/pay']);
 
 // 创建 axios 实例
 const api = axios.create({
@@ -11,6 +11,7 @@ interface AuthContextType {
   user: User | null;
   isLoading: boolean;
   isAuthenticated: boolean;
+  setUser: (user: User | null) => void;
 }
 
 const AuthContext = createContext<AuthContextType>({
@@ -18,6 +19,7 @@ const AuthContext = createContext<AuthContextType>({
   user: null,
   isLoading: true,
   isAuthenticated: false,
+  setUser: () => {},
 });
 
 export function AuthProvider({ children }: { children: ReactNode }) {
@@ -63,7 +65,8 @@ export function AuthProvider({ children }: { children: ReactNode }) {
       userId: user?.id || null,
       user,
       isLoading,
-      isAuthenticated: !!user
+      isAuthenticated: !!user,
+      setUser,
     }}>
       {children}
     </AuthContext.Provider>
@@ -12,6 +12,7 @@ export interface AuthResponse {
   success: boolean;
   message: string;
   user?: User;
+  paymentToken?: string;
 }
 
 interface ApiResponse<T> {
@@ -25,20 +26,41 @@ interface ApiResponse<T> {
  * 用户注册
  */
 export async function register(phone: string, password: string, username?: string): Promise<AuthResponse> {
-  const { data: payload } = await api.post<ApiResponse<null>>('/api/auth/register', {
-    phone, password, username
-  });
-  return { success: payload.success, message: payload.message };
+  try {
+    const { data: payload } = await api.post<ApiResponse<null>>('/api/auth/register', {
+      phone, password, username
+    });
+    return { success: payload.success, message: payload.message };
+  } catch (err: any) {
+    return {
+      success: false,
+      message: err.response?.data?.message || '注册失败',
+    };
+  }
 }
 
 /**
  * 用户登录
  */
 export async function login(phone: string, password: string): Promise<AuthResponse> {
-  const { data: payload } = await api.post<ApiResponse<{ user?: User }>>('/api/auth/login', {
-    phone, password
-  });
-  return { success: payload.success, message: payload.message, user: payload.data?.user };
+  try {
+    const { data: payload } = await api.post<ApiResponse<{ user?: User }>>('/api/auth/login', {
+      phone, password
+    });
+    return { success: payload.success, message: payload.message, user: payload.data?.user };
+  } catch (err: any) {
+    if (err.response?.status === 403 && err.response?.data?.data?.reason === 'PAYMENT_REQUIRED') {
+      return {
+        success: false,
+        message: err.response.data.message,
+        paymentToken: err.response.data.data.payment_token,
+      };
+    }
+    return {
+      success: false,
+      message: err.response?.data?.message || '登录失败',
+    };
+  }
 }
 
 /**
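For illustration, a caller-side sketch of how the new payment flow can be consumed. The `nextRoute` helper and its query-string format are hypothetical (not from the repo); `AuthResponse.paymentToken`, the 403 `PAYMENT_REQUIRED` reason, and the public `/pay` route are taken from the diffs above.

```typescript
// Hypothetical helper: decide where to send the user after login() resolves.
// Per the api.ts diff, paymentToken is only set when the backend replied
// 403 with reason PAYMENT_REQUIRED instead of a normal login failure.
interface AuthResponse {
  success: boolean;
  message: string;
  paymentToken?: string;
}

function nextRoute(res: AuthResponse): string {
  if (res.success) return "/";
  if (res.paymentToken) {
    // '/pay' is whitelisted in PUBLIC_PATHS, so this redirect is reachable
    // without an authenticated session.
    return `/pay?token=${encodeURIComponent(res.paymentToken)}`;
  }
  return "/login"; // surface res.message on the login form
}

console.log(nextRoute({ success: false, message: "membership expired", paymentToken: "tok_123" }));
// prints "/pay?token=tok_123"
```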
@@ -1,8 +1,12 @@
 export const TITLE_MAX_LENGTH = 15;
+export const SECONDARY_TITLE_MAX_LENGTH = 20;
 
 export const clampTitle = (value: string, maxLength: number = TITLE_MAX_LENGTH) =>
   value.slice(0, maxLength);
 
+export const clampSecondaryTitle = (value: string, maxLength: number = SECONDARY_TITLE_MAX_LENGTH) =>
+  value.slice(0, maxLength);
+
 export const applyTitleLimit = (
   prev: string,
   next: string,
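The clamp helpers in the hunk above are plain slice-based truncators; a self-contained sketch (constants and names copied from the titleUtils diff):

```typescript
// Mirrors the titleUtils helpers: truncate a string to a fixed length.
const TITLE_MAX_LENGTH = 15;
const SECONDARY_TITLE_MAX_LENGTH = 20;

const clampTitle = (value: string, maxLength: number = TITLE_MAX_LENGTH): string =>
  value.slice(0, maxLength);

const clampSecondaryTitle = (value: string, maxLength: number = SECONDARY_TITLE_MAX_LENGTH): string =>
  value.slice(0, maxLength);

// A 30-character input is truncated to each field's limit;
// shorter inputs pass through unchanged.
const raw = "a".repeat(30);
console.log(clampTitle(raw).length);          // prints 15
console.log(clampSecondaryTitle(raw).length); // prints 20
```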
models/CosyVoice/CODE_OF_CONDUCT.md (new file, 76 lines)
@@ -0,0 +1,76 @@
+# Contributor Covenant Code of Conduct
+
+## Our Pledge
+
+In the interest of fostering an open and welcoming environment, we as
+contributors and maintainers pledge to making participation in our project and
+our community a harassment-free experience for everyone, regardless of age, body
+size, disability, ethnicity, sex characteristics, gender identity and expression,
+level of experience, education, socio-economic status, nationality, personal
+appearance, race, religion, or sexual identity and orientation.
+
+## Our Standards
+
+Examples of behavior that contributes to creating a positive environment
+include:
+
+* Using welcoming and inclusive language
+* Being respectful of differing viewpoints and experiences
+* Gracefully accepting constructive criticism
+* Focusing on what is best for the community
+* Showing empathy towards other community members
+
+Examples of unacceptable behavior by participants include:
+
+* The use of sexualized language or imagery and unwelcome sexual attention or
+  advances
+* Trolling, insulting/derogatory comments, and personal or political attacks
+* Public or private harassment
+* Publishing others' private information, such as a physical or electronic
+  address, without explicit permission
+* Other conduct which could reasonably be considered inappropriate in a
+  professional setting
+
+## Our Responsibilities
+
+Project maintainers are responsible for clarifying the standards of acceptable
+behavior and are expected to take appropriate and fair corrective action in
+response to any instances of unacceptable behavior.
+
+Project maintainers have the right and responsibility to remove, edit, or
+reject comments, commits, code, wiki edits, issues, and other contributions
+that are not aligned to this Code of Conduct, or to ban temporarily or
+permanently any contributor for other behaviors that they deem inappropriate,
+threatening, offensive, or harmful.
+
+## Scope
+
+This Code of Conduct applies both within project spaces and in public spaces
+when an individual is representing the project or its community. Examples of
+representing a project or community include using an official project e-mail
+address, posting via an official social media account, or acting as an appointed
+representative at an online or offline event. Representation of a project may be
+further defined and clarified by project maintainers.
+
+## Enforcement
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be
+reported by contacting the project team at mikelei@mobvoi.com. All
+complaints will be reviewed and investigated and will result in a response that
+is deemed necessary and appropriate to the circumstances. The project team is
+obligated to maintain confidentiality with regard to the reporter of an incident.
+Further details of specific enforcement policies may be posted separately.
+
+Project maintainers who do not follow or enforce the Code of Conduct in good
+faith may face temporary or permanent repercussions as determined by other
+members of the project's leadership.
+
+## Attribution
+
+This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
+available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
+
+[homepage]: https://www.contributor-covenant.org
+
+For answers to common questions about this code of conduct, see
+https://www.contributor-covenant.org/faq
models/CosyVoice/FAQ.md (new file, 16 lines)
@@ -0,0 +1,16 @@
+## ModuleNotFoundError: No module named 'matcha'
+
+Matcha-TTS is a third_party module. Please check `third_party` directory. If there is no `Matcha-TTS`, execute `git submodule update --init --recursive`.
+
+run `export PYTHONPATH=third_party/Matcha-TTS` if you want to use `from cosyvoice.cli.cosyvoice import CosyVoice` in python script.
+
+## cannot find resource.zip or cannot unzip resource.zip
+
+Please make sure you have git-lfs installed. Execute
+
+```sh
+git clone https://www.modelscope.cn/iic/CosyVoice-ttsfrd.git pretrained_models/CosyVoice-ttsfrd
+cd pretrained_models/CosyVoice-ttsfrd/
+unzip resource.zip -d .
+pip install ttsfrd-0.3.6-cp38-cp38-linux_x86_64.whl
+```
models/CosyVoice/LICENSE (new file, 201 lines)
@@ -0,0 +1,201 @@
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!) The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
264
models/CosyVoice/README.md
Normal file
264
models/CosyVoice/README.md
Normal file
@@ -0,0 +1,264 @@
|
|||||||
|

|
||||||
|
|
||||||
|
## 👉🏻 CosyVoice 👈🏻
|
||||||
|
|
||||||
|
**Fun-CosyVoice 3.0**: [Demos](https://funaudiollm.github.io/cosyvoice3/); [Paper](https://arxiv.org/pdf/2505.17589); [Modelscope](https://www.modelscope.cn/models/FunAudioLLM/Fun-CosyVoice3-0.5B-2512); [Huggingface](https://huggingface.co/FunAudioLLM/Fun-CosyVoice3-0.5B-2512); [CV3-Eval](https://github.com/FunAudioLLM/CV3-Eval)
|
||||||
|
|
||||||
|
**CosyVoice 2.0**: [Demos](https://funaudiollm.github.io/cosyvoice2/); [Paper](https://arxiv.org/pdf/2412.10117); [Modelscope](https://www.modelscope.cn/models/iic/CosyVoice2-0.5B); [HuggingFace](https://huggingface.co/FunAudioLLM/CosyVoice2-0.5B)
|
||||||
|
|
||||||
|
**CosyVoice 1.0**: [Demos](https://fun-audio-llm.github.io); [Paper](https://funaudiollm.github.io/pdf/CosyVoice_v1.pdf); [Modelscope](https://www.modelscope.cn/models/iic/CosyVoice-300M); [HuggingFace](https://huggingface.co/FunAudioLLM/CosyVoice-300M)
|
||||||
|
|
||||||
|
## Highlight🔥
|
||||||
|
|
||||||
|
**Fun-CosyVoice 3.0** is an advanced text-to-speech (TTS) system based on large language models (LLM), surpassing its predecessor (CosyVoice 2.0) in content consistency, speaker similarity, and prosody naturalness. It is designed for zero-shot multilingual speech synthesis in the wild.
|
||||||
|
### Key Features
|
||||||
|
- **Language Coverage**: Covers 9 common languages (Chinese, English, Japanese, Korean, German, Spanish, French, Italian, Russian), 18+ Chinese dialects/accents (Guangdong, Minnan, Sichuan, Dongbei, Shan3xi, Shan1xi, Shanghai, Tianjin, Shandong, Ningxia, Gansu, etc.) and meanwhile supports both multi-lingual/cross-lingual zero-shot voice cloning.
|
||||||
|
- **Content Consistency & Naturalness**: Achieves state-of-the-art performance in content consistency, speaker similarity, and prosody naturalness.
|
||||||
|
- **Pronunciation Inpainting**: Supports pronunciation inpainting of Chinese Pinyin and English CMU phonemes, providing more controllability and thus suitable for production use.
|
||||||
|
- **Text Normalization**: Supports reading of numbers, special symbols and various text formats without a traditional frontend module.
|
||||||
|
- **Bi-Streaming**: Support both text-in streaming and audio-out streaming, and achieves latency as low as 150ms while maintaining high-quality audio output.
|
||||||
|
- **Instruct Support**: Supports various instructions such as languages, dialects, emotions, speed, volume, etc.
|
||||||
|
|
||||||
|
|
||||||
|
## Roadmap
|
||||||
|
|
||||||
|
- [x] 2025/12
|
||||||
|
|
||||||
|
- [x] release Fun-CosyVoice3-0.5B-2512 base model, rl model and its training/inference script
|
||||||
|
- [x] release Fun-CosyVoice3-0.5B modelscope gradio space
|
||||||
|
|
||||||
|
- [x] 2025/08
|
||||||
|
|
||||||
|
- [x] Thanks to the contribution from NVIDIA Yuekai Zhang, add triton trtllm runtime support and cosyvoice2 grpo training support
|
||||||
|
|
||||||
|
- [x] 2025/07
|
||||||
|
|
||||||
|
- [x] release Fun-CosyVoice 3.0 eval set
|
||||||
|
|
||||||
|
- [x] 2025/05
|
||||||
|
|
||||||
|
- [x] add CosyVoice2-0.5B vllm support
|
||||||
|
|
||||||
|
- [x] 2024/12
|
||||||
|
|
||||||
|
- [x] 25hz CosyVoice2-0.5B released
|
||||||
|
|
||||||
|
- [x] 2024/09
|
||||||
|
|
||||||
|
- [x] 25hz CosyVoice-300M base model
|
||||||
|
- [x] 25hz CosyVoice-300M voice conversion function
|
||||||
|
|
||||||
|
- [x] 2024/08
|
||||||
|
|
||||||
|
- [x] Repetition Aware Sampling(RAS) inference for llm stability
|
||||||
|
- [x] Streaming inference mode support, including kv cache and sdpa for rtf optimization
|
||||||
|
|
||||||
|
- [x] 2024/07
|
||||||
|
|
||||||
|
- [x] Flow matching training support
|
||||||
|
- [x] WeTextProcessing support when ttsfrd is not available
|
||||||
|
- [x] Fastapi server and client
|
||||||
|
|
||||||
|
## Evaluation
|
||||||
|
|
||||||
|
| Model | Open-Source | Model Size | test-zh<br>CER (%) ↓ | test-zh<br>SS (%) ↑ | test-en<br>WER (%) ↓ | test-en<br>SS (%) ↑ | test-hard<br>CER (%) ↓ | test-hard<br>SS (%) ↑ |
|
||||||
|
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|
||||||
|
| Human | - | - | 1.26 | 75.5 | 2.14 | 73.4 | - | - |
|
||||||
|
| Seed-TTS | ❌ | - | 1.12 | 79.6 | 2.25 | 76.2 | 7.59 | 77.6 |
|
||||||
|
| MiniMax-Speech | ❌ | - | 0.83 | 78.3 | 1.65 | 69.2 | - | - |
|
||||||
|
| F5-TTS | ✅ | 0.3B | 1.52 | 74.1 | 2.00 | 64.7 | 8.67 | 71.3 |
|
||||||
|
| Spark TTS | ✅ | 0.5B | 1.2 | 66.0 | 1.98 | 57.3 | - | - |
|
||||||
|
| CosyVoice2 | ✅ | 0.5B | 1.45 | 75.7 | 2.57 | 65.9 | 6.83 | 72.4 |
|
||||||
|
| FireRedTTS2 | ✅ | 1.5B | 1.14 | 73.2 | 1.95 | 66.5 | - | - |
|
||||||
|
| Index-TTS2 | ✅ | 1.5B | 1.03 | 76.5 | 2.23 | 70.6 | 7.12 | 75.5 |
|
||||||
|
| VibeVoice-1.5B | ✅ | 1.5B | 1.16 | 74.4 | 3.04 | 68.9 | - | - |
|
||||||
|
| VibeVoice-Realtime | ✅ | 0.5B | - | - | 2.05 | 63.3 | - | - |
|
||||||
|
| HiggsAudio-v2 | ✅ | 3B | 1.50 | 74.0 | 2.44 | 67.7 | - | - |
|
||||||
|
| VoxCPM | ✅ | 0.5B | 0.93 | 77.2 | 1.85 | 72.9 | 8.87 | 73.0 |
|
||||||
|
| GLM-TTS | ✅ | 1.5B | 1.03 | 76.1 | - | - | - | - |
|
||||||
|
| GLM-TTS RL | ✅ | 1.5B | 0.89 | 76.4 | - | - | - | - |
|
||||||
|
| Fun-CosyVoice3-0.5B-2512 | ✅ | 0.5B | 1.21 | 78.0 | 2.24 | 71.8 | 6.71 | 75.8 |
|
||||||
|
| Fun-CosyVoice3-0.5B-2512_RL | ✅ | 0.5B | 0.81 | 77.4 | 1.68 | 69.5 | 5.44 | 75.0 |
|
||||||
|
|
||||||
|
|
||||||
|
## Install
|
||||||
|
|
||||||
|
### Clone and install
|
||||||
|
|
||||||
|
- Clone the repo
|
||||||
|
``` sh
|
||||||
|
git clone --recursive https://github.com/FunAudioLLM/CosyVoice.git
|
||||||
|
# If you failed to clone the submodule due to network failures, please run the following command until success
|
||||||
|
cd CosyVoice
|
||||||
|
git submodule update --init --recursive
|
||||||
|
```
|
||||||
|
|
||||||
|
- Install Conda: please see https://docs.conda.io/en/latest/miniconda.html
|
||||||
|
- Create Conda env:
|
||||||
|
|
||||||
|
``` sh
|
||||||
|
conda create -n cosyvoice -y python=3.10
|
||||||
|
conda activate cosyvoice
|
||||||
|
pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
|
||||||
|
|
||||||
|
# If you encounter sox compatibility issues
|
||||||
|
# ubuntu
|
||||||
|
sudo apt-get install sox libsox-dev
|
||||||
|
# centos
|
||||||
|
sudo yum install sox sox-devel
|
||||||
|
```
|
||||||
|
|
||||||
|
### Model download
|
||||||
|
|
||||||
|
We strongly recommend that you download our pretrained `Fun-CosyVoice3-0.5B` `CosyVoice2-0.5B` `CosyVoice-300M` `CosyVoice-300M-SFT` `CosyVoice-300M-Instruct` model and `CosyVoice-ttsfrd` resource.
|
||||||
|
|
||||||
|
``` python
# modelscope SDK model download
from modelscope import snapshot_download
snapshot_download('FunAudioLLM/Fun-CosyVoice3-0.5B-2512', local_dir='pretrained_models/Fun-CosyVoice3-0.5B')
snapshot_download('iic/CosyVoice2-0.5B', local_dir='pretrained_models/CosyVoice2-0.5B')
snapshot_download('iic/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M')
snapshot_download('iic/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT')
snapshot_download('iic/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct')
snapshot_download('iic/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')

# for overseas users, huggingface SDK model download
from huggingface_hub import snapshot_download
snapshot_download('FunAudioLLM/Fun-CosyVoice3-0.5B-2512', local_dir='pretrained_models/Fun-CosyVoice3-0.5B')
snapshot_download('FunAudioLLM/CosyVoice2-0.5B', local_dir='pretrained_models/CosyVoice2-0.5B')
snapshot_download('FunAudioLLM/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M')
snapshot_download('FunAudioLLM/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT')
snapshot_download('FunAudioLLM/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct')
snapshot_download('FunAudioLLM/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')
```
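Both SDKs skip files that are already present, but if you script the downloads yourself you may want an explicit guard. A minimal sketch — `ensure_model` and `fake_download` are illustrative names, not part of either SDK; `download_fn` stands in for modelscope's or huggingface_hub's `snapshot_download`:

```python
import os

def ensure_model(model_id, local_dir, download_fn):
    # Download only when the target directory is missing, mirroring the
    # snapshot_download(..., local_dir=...) calls above.
    if not os.path.exists(local_dir):
        download_fn(model_id, local_dir=local_dir)
        return 'downloaded'
    return 'cached'

calls = []
def fake_download(model_id, local_dir):
    # Records calls instead of downloading anything.
    calls.append((model_id, local_dir))

# '.' always exists, so no download is triggered here.
print(ensure_model('iic/CosyVoice2-0.5B', '.', fake_download))  # cached
```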

Optionally, you can unzip the `ttsfrd` resource and install the `ttsfrd` package for better text normalization performance.

Note that this step is not necessary: if you do not install the `ttsfrd` package, we will use wetext by default.

``` sh
cd pretrained_models/CosyVoice-ttsfrd/
unzip resource.zip -d .
pip install ttsfrd_dependency-0.1-py3-none-any.whl
pip install ttsfrd-0.4.2-cp310-cp310-linux_x86_64.whl
```

### Basic Usage

We strongly recommend using `Fun-CosyVoice3-0.5B` for better performance.
Follow the code in `example.py` for detailed usage of each model.

```sh
python example.py
```
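The inference APIs demonstrated in `example.py` are generators that yield audio chunk by chunk. The consumption pattern looks like the following sketch, where `fake_inference` is a stub standing in for the model's `inference_*` methods and the `'tts_speech'` key follows the repo's example code:

```python
def fake_inference(text):
    # Stub generator standing in for a streaming inference_* method:
    # yields one dict per synthesized chunk.
    for word in text.split():
        yield {'tts_speech': '<audio:{}>'.format(word)}

# Collect all chunks, as you would before concatenating and saving audio.
chunks = [out['tts_speech'] for out in fake_inference('hello world')]
print(chunks)  # ['<audio:hello>', '<audio:world>']
```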

#### vLLM Usage

CosyVoice2/3 now supports **vLLM 0.11.x+ (V1 engine)** and **vLLM 0.9.0 (legacy)**.
Older vLLM versions (<0.9.0) do not support CosyVoice inference, and versions in between (e.g., 0.10.x) are not tested.

Note that `vllm` has many specific requirements. You may want to create a new env in case your hardware does not support vllm and your old env gets corrupted.

``` sh
conda create -n cosyvoice_vllm --clone cosyvoice
conda activate cosyvoice_vllm
# for vllm==0.9.0
pip install vllm==v0.9.0 transformers==4.51.3 numpy==1.26.4 -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
# for vllm>=0.11.0
pip install vllm==v0.11.0 transformers==4.57.1 numpy==1.26.4 -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
python vllm_example.py
```
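The support matrix above can be summarized as a simple version gate. `vllm_support` is a hypothetical helper written only to illustrate the stated policy; it is not a function in the repo:

```python
def vllm_support(version):
    # 0.9.x -> legacy engine path; >=0.11 -> V1 engine; 0.10.x untested;
    # anything older is unsupported.
    major, minor = (int(x) for x in version.split('.')[:2])
    if (major, minor) == (0, 9):
        return 'legacy'
    if (major, minor) == (0, 10):
        return 'untested'
    if (major, minor) >= (0, 11):
        return 'v1'
    return 'unsupported'

print(vllm_support('0.9.0'))   # legacy
print(vllm_support('0.11.2'))  # v1
print(vllm_support('0.8.5'))   # unsupported
```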

#### Start web demo

You can use our web demo page to get familiar with CosyVoice quickly.
Please see the demo website for details.

``` python
# change iic/CosyVoice-300M-SFT for sft inference, or iic/CosyVoice-300M-Instruct for instruct inference
python3 webui.py --port 50000 --model_dir pretrained_models/CosyVoice-300M
```

#### Advanced Usage

For advanced users, we have provided training and inference scripts in `examples/libritts`.
#### Build for deployment

Optionally, if you want service deployment, you can run the following steps.

``` sh
cd runtime/python
docker build -t cosyvoice:v1.0 .
# change iic/CosyVoice-300M to iic/CosyVoice-300M-Instruct if you want to use instruct inference
# for grpc usage
docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/grpc && python3 server.py --port 50000 --max_conc 4 --model_dir iic/CosyVoice-300M && sleep infinity"
cd grpc && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
# for fastapi usage
docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/fastapi && python3 server.py --port 50000 --model_dir iic/CosyVoice-300M && sleep infinity"
cd fastapi && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
```

#### Using Nvidia TensorRT-LLM for deployment

Using TensorRT-LLM to accelerate the cosyvoice2 llm can give a 4x speedup compared with the huggingface transformers implementation.
To quick start:

``` sh
cd runtime/triton_trtllm
docker compose up -d
```

For more details, you can check [here](https://github.com/FunAudioLLM/CosyVoice/tree/main/runtime/triton_trtllm).
## Discussion & Communication

You can directly discuss on [Github Issues](https://github.com/FunAudioLLM/CosyVoice/issues).

You can also scan the QR code to join our official Dingding chat group.

<img src="./asset/dingding.png" width="250px">
## Acknowledge

1. We borrowed a lot of code from [FunASR](https://github.com/modelscope/FunASR).
2. We borrowed a lot of code from [FunCodec](https://github.com/modelscope/FunCodec).
3. We borrowed a lot of code from [Matcha-TTS](https://github.com/shivammehta25/Matcha-TTS).
4. We borrowed a lot of code from [AcademiCodec](https://github.com/yangdongchao/AcademiCodec).
5. We borrowed a lot of code from [WeNet](https://github.com/wenet-e2e/wenet).
## Citations

``` bibtex
@article{du2024cosyvoice,
  title={Cosyvoice: A scalable multilingual zero-shot text-to-speech synthesizer based on supervised semantic tokens},
  author={Du, Zhihao and Chen, Qian and Zhang, Shiliang and Hu, Kai and Lu, Heng and Yang, Yexin and Hu, Hangrui and Zheng, Siqi and Gu, Yue and Ma, Ziyang and others},
  journal={arXiv preprint arXiv:2407.05407},
  year={2024}
}

@article{du2024cosyvoice2,
  title={Cosyvoice 2: Scalable streaming speech synthesis with large language models},
  author={Du, Zhihao and Wang, Yuxuan and Chen, Qian and Shi, Xian and Lv, Xiang and Zhao, Tianyu and Gao, Zhifu and Yang, Yexin and Gao, Changfeng and Wang, Hui and others},
  journal={arXiv preprint arXiv:2412.10117},
  year={2024}
}

@article{du2025cosyvoice,
  title={CosyVoice 3: Towards In-the-wild Speech Generation via Scaling-up and Post-training},
  author={Du, Zhihao and Gao, Changfeng and Wang, Yuxuan and Yu, Fan and Zhao, Tianyu and Wang, Hao and Lv, Xiang and Wang, Hui and Shi, Xian and An, Keyu and others},
  journal={arXiv preprint arXiv:2505.17589},
  year={2025}
}

@inproceedings{lyu2025build,
  title={Build LLM-Based Zero-Shot Streaming TTS System with Cosyvoice},
  author={Lyu, Xiang and Wang, Yuxuan and Zhao, Tianyu and Wang, Hao and Liu, Huadai and Du, Zhihao},
  booktitle={ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={1--2},
  year={2025},
  organization={IEEE}
}
```
## Disclaimer

The content provided above is for academic purposes only and is intended to demonstrate technical capabilities. Some examples are sourced from the internet. If any content infringes on your rights, please contact us to request its removal.
0 models/CosyVoice/cosyvoice/__init__.py Normal file

93 models/CosyVoice/cosyvoice/bin/average_model.py Normal file
@@ -0,0 +1,93 @@
# Copyright (c) 2020 Mobvoi Inc (Di Wu)
# Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import argparse
import glob

import yaml
import torch


def get_args():
    parser = argparse.ArgumentParser(description='average model')
    parser.add_argument('--dst_model', required=True, help='averaged model')
    parser.add_argument('--src_path',
                        required=True,
                        help='src model path for average')
    parser.add_argument('--val_best',
                        action="store_true",
                        help='averaged model')
    parser.add_argument('--num',
                        default=5,
                        type=int,
                        help='nums for averaged model')

    args = parser.parse_args()
    print(args)
    return args


def main():
    args = get_args()
    val_scores = []
    if args.val_best:
        yamls = glob.glob('{}/*.yaml'.format(args.src_path))
        yamls = [
            f for f in yamls
            if not (os.path.basename(f).startswith('train')
                    or os.path.basename(f).startswith('init'))
        ]
        for y in yamls:
            with open(y, 'r') as f:
                dic_yaml = yaml.load(f, Loader=yaml.BaseLoader)
                loss = float(dic_yaml['loss_dict']['loss'])
                epoch = int(dic_yaml['epoch'])
                step = int(dic_yaml['step'])
                tag = dic_yaml['tag']
                val_scores += [[epoch, step, loss, tag]]
        sorted_val_scores = sorted(val_scores,
                                   key=lambda x: x[2],
                                   reverse=False)
        print("best val (epoch, step, loss, tag) = " +
              str(sorted_val_scores[:args.num]))
        path_list = [
            args.src_path + '/epoch_{}_whole.pt'.format(score[0])
            for score in sorted_val_scores[:args.num]
        ]
        print(path_list)
    avg = {}
    num = args.num
    assert num == len(path_list)
    for path in path_list:
        print('Processing {}'.format(path))
        states = torch.load(path, map_location=torch.device('cpu'))
        for k in states.keys():
            if k not in ['step', 'epoch']:
                if k not in avg.keys():
                    avg[k] = states[k].clone()
                else:
                    avg[k] += states[k]
    # average
    for k in avg.keys():
        if avg[k] is not None:
            # pytorch 1.6 use true_divide instead of /=
            avg[k] = torch.true_divide(avg[k], num)
    print('Saving to {}'.format(args.dst_model))
    torch.save(avg, args.dst_model)


if __name__ == '__main__':
    main()
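The accumulate-then-divide loop in `average_model.py` reduces to the following self-contained sketch, using plain floats in place of tensors; `average_checkpoints` is an illustrative name, not a function in the script:

```python
def average_checkpoints(state_dicts):
    # Sum matching entries across checkpoints, skipping the bookkeeping
    # keys, then divide by the number of checkpoints -- the same shape as
    # the avg[k] accumulation in average_model.py.
    avg = {}
    num = len(state_dicts)
    for states in state_dicts:
        for k, v in states.items():
            if k in ('step', 'epoch'):
                continue
            avg[k] = avg.get(k, 0.0) + v
    return {k: v / num for k, v in avg.items()}

ckpts = [{'w': 1.0, 'b': 3.0, 'step': 10},
         {'w': 3.0, 'b': 5.0, 'step': 20}]
print(average_checkpoints(ckpts))  # {'w': 2.0, 'b': 4.0}
```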
99 models/CosyVoice/cosyvoice/bin/export_jit.py Normal file
@@ -0,0 +1,99 @@
# Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import print_function

import argparse
import logging
logging.getLogger('matplotlib').setLevel(logging.WARNING)
import os
import sys
import torch
ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.append('{}/../..'.format(ROOT_DIR))
sys.path.append('{}/../../third_party/Matcha-TTS'.format(ROOT_DIR))
from cosyvoice.cli.cosyvoice import AutoModel
from cosyvoice.utils.file_utils import logging


def get_args():
    parser = argparse.ArgumentParser(description='export your model for deployment')
    parser.add_argument('--model_dir',
                        type=str,
                        default='pretrained_models/CosyVoice-300M',
                        help='local path')
    args = parser.parse_args()
    print(args)
    return args


def get_optimized_script(model, preserved_attrs=[]):
    script = torch.jit.script(model)
    if preserved_attrs != []:
        script = torch.jit.freeze(script, preserved_attrs=preserved_attrs)
    else:
        script = torch.jit.freeze(script)
    script = torch.jit.optimize_for_inference(script)
    return script


def main():
    args = get_args()
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s %(levelname)s %(message)s')

    torch._C._jit_set_fusion_strategy([('STATIC', 1)])
    torch._C._jit_set_profiling_mode(False)
    torch._C._jit_set_profiling_executor(False)

    model = AutoModel(model_dir=args.model_dir)

    if model.__class__.__name__ == 'CosyVoice':
        # 1. export llm text_encoder
        llm_text_encoder = model.model.llm.text_encoder
        script = get_optimized_script(llm_text_encoder)
        script.save('{}/llm.text_encoder.fp32.zip'.format(args.model_dir))
        script = get_optimized_script(llm_text_encoder.half())
        script.save('{}/llm.text_encoder.fp16.zip'.format(args.model_dir))
        logging.info('successfully export llm_text_encoder')

        # 2. export llm llm
        llm_llm = model.model.llm.llm
        script = get_optimized_script(llm_llm, ['forward_chunk'])
        script.save('{}/llm.llm.fp32.zip'.format(args.model_dir))
        script = get_optimized_script(llm_llm.half(), ['forward_chunk'])
        script.save('{}/llm.llm.fp16.zip'.format(args.model_dir))
        logging.info('successfully export llm_llm')

        # 3. export flow encoder
        flow_encoder = model.model.flow.encoder
        script = get_optimized_script(flow_encoder)
        script.save('{}/flow.encoder.fp32.zip'.format(args.model_dir))
        script = get_optimized_script(flow_encoder.half())
        script.save('{}/flow.encoder.fp16.zip'.format(args.model_dir))
        logging.info('successfully export flow_encoder')
    elif model.__class__.__name__ == 'CosyVoice2':
        # 1. export flow encoder
        flow_encoder = model.model.flow.encoder
        script = get_optimized_script(flow_encoder)
        script.save('{}/flow.encoder.fp32.zip'.format(args.model_dir))
        script = get_optimized_script(flow_encoder.half())
        script.save('{}/flow.encoder.fp16.zip'.format(args.model_dir))
        logging.info('successfully export flow_encoder')
    else:
        raise ValueError('unsupported model type')


if __name__ == '__main__':
    main()
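The per-model branching in `export_jit.py` amounts to a small export plan: `CosyVoice` exports three submodules, `CosyVoice2` only one. This sketch captures that dispatch with a hypothetical `export_plan` helper (not a function in the script):

```python
def export_plan(model_class_name):
    # Which submodules each model class exports to TorchScript, mirroring
    # the if/elif branches in export_jit.py. Unknown classes raise, just as
    # the script does.
    plans = {
        'CosyVoice': ['llm.text_encoder', 'llm.llm', 'flow.encoder'],
        'CosyVoice2': ['flow.encoder'],
    }
    if model_class_name not in plans:
        raise ValueError('unsupported model type')
    return plans[model_class_name]

print(export_plan('CosyVoice2'))  # ['flow.encoder']
```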
114 models/CosyVoice/cosyvoice/bin/export_onnx.py Normal file
@@ -0,0 +1,114 @@
# Copyright (c) 2024 Antgroup Inc (authors: Zhoubofan, hexisyztem@icloud.com)
# Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import print_function

import argparse
import logging
logging.getLogger('matplotlib').setLevel(logging.WARNING)
import os
import sys
import onnxruntime
import random
import torch
from tqdm import tqdm
ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.append('{}/../..'.format(ROOT_DIR))
sys.path.append('{}/../../third_party/Matcha-TTS'.format(ROOT_DIR))
from cosyvoice.cli.cosyvoice import AutoModel
from cosyvoice.utils.file_utils import logging


def get_dummy_input(batch_size, seq_len, out_channels, device):
    x = torch.rand((batch_size, out_channels, seq_len), dtype=torch.float32, device=device)
    mask = torch.ones((batch_size, 1, seq_len), dtype=torch.float32, device=device)
    mu = torch.rand((batch_size, out_channels, seq_len), dtype=torch.float32, device=device)
    t = torch.rand((batch_size), dtype=torch.float32, device=device)
    spks = torch.rand((batch_size, out_channels), dtype=torch.float32, device=device)
    cond = torch.rand((batch_size, out_channels, seq_len), dtype=torch.float32, device=device)
    return x, mask, mu, t, spks, cond


def get_args():
    parser = argparse.ArgumentParser(description='export your model for deployment')
    parser.add_argument('--model_dir',
                        type=str,
                        default='pretrained_models/CosyVoice-300M',
                        help='local path')
    args = parser.parse_args()
    print(args)
    return args


@torch.no_grad()
def main():
    args = get_args()
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s %(levelname)s %(message)s')

    model = AutoModel(model_dir=args.model_dir)

    # 1. export flow decoder estimator
    estimator = model.model.flow.decoder.estimator
    estimator.eval()

    device = model.model.device
    batch_size, seq_len = 2, 256
    out_channels = model.model.flow.decoder.estimator.out_channels
    x, mask, mu, t, spks, cond = get_dummy_input(batch_size, seq_len, out_channels, device)
    torch.onnx.export(
        estimator,
        (x, mask, mu, t, spks, cond),
        '{}/flow.decoder.estimator.fp32.onnx'.format(args.model_dir),
        export_params=True,
        opset_version=18,
        do_constant_folding=True,
        input_names=['x', 'mask', 'mu', 't', 'spks', 'cond'],
        output_names=['estimator_out'],
        dynamic_axes={
            'x': {2: 'seq_len'},
            'mask': {2: 'seq_len'},
            'mu': {2: 'seq_len'},
            'cond': {2: 'seq_len'},
            'estimator_out': {2: 'seq_len'},
        }
    )

    # 2. test computation consistency
    option = onnxruntime.SessionOptions()
    option.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL
    option.intra_op_num_threads = 1
    providers = ['CUDAExecutionProvider' if torch.cuda.is_available() else 'CPUExecutionProvider']
    estimator_onnx = onnxruntime.InferenceSession('{}/flow.decoder.estimator.fp32.onnx'.format(args.model_dir),
                                                  sess_options=option, providers=providers)

    for _ in tqdm(range(10)):
        x, mask, mu, t, spks, cond = get_dummy_input(batch_size, random.randint(16, 512), out_channels, device)
        output_pytorch = estimator(x, mask, mu, t, spks, cond)
        ort_inputs = {
            'x': x.cpu().numpy(),
            'mask': mask.cpu().numpy(),
            'mu': mu.cpu().numpy(),
            't': t.cpu().numpy(),
            'spks': spks.cpu().numpy(),
            'cond': cond.cpu().numpy()
        }
        output_onnx = estimator_onnx.run(None, ort_inputs)[0]
        torch.testing.assert_allclose(output_pytorch, torch.from_numpy(output_onnx).to(device), rtol=1e-2, atol=1e-4)
    logging.info('successfully export estimator')


if __name__ == "__main__":
    main()
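The consistency loop in `export_onnx.py` accepts ONNX outputs within `rtol=1e-2, atol=1e-4` of the PyTorch outputs. The same style of check with plain floats is sketched below; `allclose` here is an illustrative stand-in for `torch.testing.assert_allclose`, whose tolerance combines atol and rtol additively rather than taking a max as `math.isclose` does:

```python
import math

def allclose(a, b, rtol=1e-2, atol=1e-4):
    # Elementwise closeness check in the spirit of the torch assertion
    # above: each pair must agree within the relative/absolute tolerance.
    return all(math.isclose(x, y, rel_tol=rtol, abs_tol=atol)
               for x, y in zip(a, b))

print(allclose([1.0, 2.0], [1.005, 2.0099]))  # True
print(allclose([1.0, 2.0], [1.5, 2.0]))       # False
```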
195 models/CosyVoice/cosyvoice/bin/train.py Normal file
@@ -0,0 +1,195 @@
# Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import print_function
import argparse
import datetime
import logging
logging.getLogger('matplotlib').setLevel(logging.WARNING)
from copy import deepcopy
import os
import torch
import torch.distributed as dist
import deepspeed

from hyperpyyaml import load_hyperpyyaml

from torch.distributed.elastic.multiprocessing.errors import record

from cosyvoice.utils.losses import DPOLoss
from cosyvoice.utils.executor import Executor
from cosyvoice.utils.train_utils import (
    init_distributed,
    init_dataset_and_dataloader,
    init_optimizer_and_scheduler,
    init_summarywriter, save_model,
    wrap_cuda_model, check_modify_and_save_config)


def get_args():
    parser = argparse.ArgumentParser(description='training your network')
    parser.add_argument('--train_engine',
                        default='torch_ddp',
                        choices=['torch_ddp', 'deepspeed'],
                        help='Engine for paralleled training')
    parser.add_argument('--model', required=True, help='model which will be trained')
    parser.add_argument('--ref_model', required=False, help='ref model used in dpo')
    parser.add_argument('--config', required=True, help='config file')
    parser.add_argument('--train_data', required=True, help='train data file')
    parser.add_argument('--cv_data', required=True, help='cv data file')
    parser.add_argument('--qwen_pretrain_path', required=False, help='qwen pretrain path')
    parser.add_argument('--onnx_path', required=False, help='onnx path, which is required for online feature extraction')
    parser.add_argument('--checkpoint', help='checkpoint model')
    parser.add_argument('--model_dir', required=True, help='save model dir')
    parser.add_argument('--tensorboard_dir',
                        default='tensorboard',
                        help='tensorboard log dir')
    parser.add_argument('--ddp.dist_backend',
                        dest='dist_backend',
                        default='nccl',
                        choices=['nccl', 'gloo'],
                        help='distributed backend')
    parser.add_argument('--num_workers',
                        default=0,
                        type=int,
                        help='num of subprocess workers for reading')
    parser.add_argument('--prefetch',
                        default=100,
                        type=int,
                        help='prefetch number')
    parser.add_argument('--pin_memory',
                        action='store_true',
                        default=False,
                        help='Use pinned memory buffers used for reading')
    parser.add_argument('--use_amp',
                        action='store_true',
                        default=False,
                        help='Use automatic mixed precision training')
    parser.add_argument('--dpo',
                        action='store_true',
                        default=False,
                        help='Use Direct Preference Optimization')
    parser.add_argument('--deepspeed.save_states',
                        dest='save_states',
                        default='model_only',
                        choices=['model_only', 'model+optimizer'],
                        help='save model/optimizer states')
    parser.add_argument('--timeout',
                        default=60,
                        type=int,
                        help='timeout (in seconds) of cosyvoice_join.')
    parser = deepspeed.add_config_arguments(parser)
    args = parser.parse_args()
    return args


@record
def main():
    args = get_args()
    os.environ['onnx_path'] = args.onnx_path
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s %(levelname)s %(message)s')
    # gan train has some special initialization logic
    gan = True if args.model == 'hifigan' else False

    override_dict = {k: None for k in ['llm', 'flow', 'hift', 'hifigan'] if k != args.model}
    if gan is True:
        override_dict.pop('hift')
    if args.qwen_pretrain_path is not None:
        override_dict['qwen_pretrain_path'] = args.qwen_pretrain_path
    with open(args.config, 'r') as f:
        configs = load_hyperpyyaml(f, overrides=override_dict)
    if gan is True:
        configs['train_conf'] = configs['train_conf_gan']
    configs['train_conf'].update(vars(args))

    # Init env for ddp
    init_distributed(args)

    # Get dataset & dataloader
    train_dataset, cv_dataset, train_data_loader, cv_data_loader = \
        init_dataset_and_dataloader(args, configs, gan, args.dpo)

    # Do some sanity checks and save config to args.model_dir
    configs = check_modify_and_save_config(args, configs)

    # Tensorboard summary
    writer = init_summarywriter(args)

    # load checkpoint
    if args.dpo is True:
        configs[args.model].forward = configs[args.model].forward_dpo
    model = configs[args.model]
    start_step, start_epoch = 0, -1
    if args.checkpoint is not None:
        if os.path.exists(args.checkpoint):
            state_dict = torch.load(args.checkpoint, map_location='cpu')
            model.load_state_dict(state_dict, strict=False)
            if 'step' in state_dict:
                start_step = state_dict['step']
            if 'epoch' in state_dict:
                start_epoch = state_dict['epoch']
        else:
            logging.warning('checkpoint {} does not exist!'.format(args.checkpoint))

    # Dispatch model from cpu to gpu
    model = wrap_cuda_model(args, model)

    # Get optimizer & scheduler
    model, optimizer, scheduler, optimizer_d, scheduler_d = init_optimizer_and_scheduler(args, configs, model, gan)
    scheduler.set_step(start_step)
    if scheduler_d is not None:
        scheduler_d.set_step(start_step)

    # Save init checkpoints
    info_dict = deepcopy(configs['train_conf'])
    info_dict['step'] = start_step
    info_dict['epoch'] = start_epoch
    save_model(model, 'init', info_dict)

    # DPO related
    if args.dpo is True:
        ref_model = deepcopy(configs[args.model])
        state_dict = torch.load(args.ref_model, map_location='cpu')
        ref_model.load_state_dict(state_dict, strict=False)
        dpo_loss = DPOLoss(beta=0.01, label_smoothing=0.0, ipo=False)
        # NOTE maybe it is not needed to wrap ref_model as ddp because its parameter is not updated
        ref_model = wrap_cuda_model(args, ref_model)
    else:
        ref_model, dpo_loss = None, None

    # Get executor
    executor = Executor(gan=gan, ref_model=ref_model, dpo_loss=dpo_loss)
    executor.step = start_step

    # Init scaler, used for pytorch amp mixed precision training
    scaler = torch.cuda.amp.GradScaler() if args.use_amp else None
    print('start step {} start epoch {}'.format(start_step, start_epoch))

    # Start training loop
    for epoch in range(start_epoch + 1, info_dict['max_epoch']):
        executor.epoch = epoch
        train_dataset.set_epoch(epoch)
        dist.barrier()
        group_join = dist.new_group(backend="gloo", timeout=datetime.timedelta(seconds=args.timeout))
        if gan is True:
            executor.train_one_epoc_gan(model, optimizer, scheduler, optimizer_d, scheduler_d, train_data_loader, cv_data_loader,
                                        writer, info_dict, scaler, group_join)
        else:
            executor.train_one_epoc(model, optimizer, scheduler, train_data_loader, cv_data_loader, writer, info_dict, scaler, group_join, ref_model=ref_model)
        dist.destroy_process_group(group_join)


if __name__ == '__main__':
    main()
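The checkpoint-resume defaults in `train.py` (step 0, epoch -1, so the loop `range(start_epoch + 1, max_epoch)` begins at epoch 0 on a fresh run) can be isolated as the following sketch; `resume_state` is an illustrative name, not a function in the script:

```python
def resume_state(state_dict):
    # Mirrors the resume logic in train.py: missing 'step'/'epoch' keys
    # fall back to the fresh-run defaults.
    start_step = state_dict.get('step', 0)
    start_epoch = state_dict.get('epoch', -1)
    return start_step, start_epoch

print(resume_state({'step': 1200, 'epoch': 3}))  # (1200, 3)
print(resume_state({}))                          # (0, -1)
```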
0 models/CosyVoice/cosyvoice/cli/__init__.py Normal file

240 models/CosyVoice/cosyvoice/cli/cosyvoice.py Normal file
@@ -0,0 +1,240 @@
# Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import time
from typing import Generator
from tqdm import tqdm
from hyperpyyaml import load_hyperpyyaml
from modelscope import snapshot_download
import torch
from cosyvoice.cli.frontend import CosyVoiceFrontEnd
from cosyvoice.cli.model import CosyVoiceModel, CosyVoice2Model, CosyVoice3Model
from cosyvoice.utils.file_utils import logging
from cosyvoice.utils.class_utils import get_model_type


class CosyVoice:

    def __init__(self, model_dir, load_jit=False, load_trt=False, fp16=False, trt_concurrent=1):
        self.model_dir = model_dir
        self.fp16 = fp16
        if not os.path.exists(model_dir):
            model_dir = snapshot_download(model_dir)
        hyper_yaml_path = '{}/cosyvoice.yaml'.format(model_dir)
        if not os.path.exists(hyper_yaml_path):
            raise ValueError('{} not found!'.format(hyper_yaml_path))
        with open(hyper_yaml_path, 'r') as f:
            configs = load_hyperpyyaml(f)
        assert get_model_type(configs) == CosyVoiceModel, 'do not use {} for CosyVoice initialization!'.format(model_dir)
        self.frontend = CosyVoiceFrontEnd(configs['get_tokenizer'],
                                          configs['feat_extractor'],
                                          '{}/campplus.onnx'.format(model_dir),
                                          '{}/speech_tokenizer_v1.onnx'.format(model_dir),
                                          '{}/spk2info.pt'.format(model_dir),
                                          configs['allowed_special'])
        self.sample_rate = configs['sample_rate']
        if torch.cuda.is_available() is False and (load_jit is True or load_trt is True or fp16 is True):
            load_jit, load_trt, fp16 = False, False, False
            logging.warning('no cuda device, set load_jit/load_trt/fp16 to False')
        self.model = CosyVoiceModel(configs['llm'], configs['flow'], configs['hift'], fp16)
        self.model.load('{}/llm.pt'.format(model_dir),
                        '{}/flow.pt'.format(model_dir),
                        '{}/hift.pt'.format(model_dir))
        if load_jit:
            self.model.load_jit('{}/llm.text_encoder.{}.zip'.format(model_dir, 'fp16' if self.fp16 is True else 'fp32'),
                                '{}/llm.llm.{}.zip'.format(model_dir, 'fp16' if self.fp16 is True else 'fp32'),
                                '{}/flow.encoder.{}.zip'.format(model_dir, 'fp16' if self.fp16 is True else 'fp32'))
        if load_trt:
            self.model.load_trt('{}/flow.decoder.estimator.{}.mygpu.plan'.format(model_dir, 'fp16' if self.fp16 is True else 'fp32'),
                                '{}/flow.decoder.estimator.fp32.onnx'.format(model_dir),
                                trt_concurrent,
                                self.fp16)
        del configs

    def list_available_spks(self):
        spks = list(self.frontend.spk2info.keys())
|
||||||
|
return spks
|
||||||
|
|
||||||
|
def add_zero_shot_spk(self, prompt_text, prompt_wav, zero_shot_spk_id):
|
||||||
|
assert zero_shot_spk_id != '', 'do not use empty zero_shot_spk_id'
|
||||||
|
model_input = self.frontend.frontend_zero_shot('', prompt_text, prompt_wav, self.sample_rate, '')
|
||||||
|
del model_input['text']
|
||||||
|
del model_input['text_len']
|
||||||
|
self.frontend.spk2info[zero_shot_spk_id] = model_input
|
||||||
|
return True
|
||||||
|
|
||||||
|
def save_spkinfo(self):
|
||||||
|
torch.save(self.frontend.spk2info, '{}/spk2info.pt'.format(self.model_dir))
|
||||||
|
|
||||||
|
def inference_sft(self, tts_text, spk_id, stream=False, speed=1.0, text_frontend=True):
|
||||||
|
for i in tqdm(self.frontend.text_normalize(tts_text, split=True, text_frontend=text_frontend)):
|
||||||
|
model_input = self.frontend.frontend_sft(i, spk_id)
|
||||||
|
start_time = time.time()
|
||||||
|
logging.info('synthesis text {}'.format(i))
|
||||||
|
for model_output in self.model.tts(**model_input, stream=stream, speed=speed):
|
||||||
|
speech_len = model_output['tts_speech'].shape[1] / self.sample_rate
|
||||||
|
logging.info('yield speech len {}, rtf {}'.format(speech_len, (time.time() - start_time) / speech_len))
|
||||||
|
yield model_output
|
||||||
|
start_time = time.time()
|
||||||
|
|
||||||
|
def inference_zero_shot(self, tts_text, prompt_text, prompt_wav, zero_shot_spk_id='', stream=False, speed=1.0, text_frontend=True):
|
||||||
|
if self.__class__.__name__ == 'CosyVoice3' and '<|endofprompt|>' not in prompt_text + tts_text:
|
||||||
|
logging.warning('<|endofprompt|> not found in CosyVoice3 inference, check your input text')
|
||||||
|
prompt_text = self.frontend.text_normalize(prompt_text, split=False, text_frontend=text_frontend)
|
||||||
|
for i in tqdm(self.frontend.text_normalize(tts_text, split=True, text_frontend=text_frontend)):
|
||||||
|
if (not isinstance(i, Generator)) and len(i) < 0.5 * len(prompt_text):
|
||||||
|
logging.warning('synthesis text {} too short than prompt text {}, this may lead to bad performance'.format(i, prompt_text))
|
||||||
|
model_input = self.frontend.frontend_zero_shot(i, prompt_text, prompt_wav, self.sample_rate, zero_shot_spk_id)
|
||||||
|
start_time = time.time()
|
||||||
|
logging.info('synthesis text {}'.format(i))
|
||||||
|
for model_output in self.model.tts(**model_input, stream=stream, speed=speed):
|
||||||
|
speech_len = model_output['tts_speech'].shape[1] / self.sample_rate
|
||||||
|
logging.info('yield speech len {}, rtf {}'.format(speech_len, (time.time() - start_time) / speech_len))
|
||||||
|
yield model_output
|
||||||
|
start_time = time.time()
|
||||||
|
|
||||||
|
def inference_cross_lingual(self, tts_text, prompt_wav, zero_shot_spk_id='', stream=False, speed=1.0, text_frontend=True):
|
||||||
|
for i in tqdm(self.frontend.text_normalize(tts_text, split=True, text_frontend=text_frontend)):
|
||||||
|
model_input = self.frontend.frontend_cross_lingual(i, prompt_wav, self.sample_rate, zero_shot_spk_id)
|
||||||
|
start_time = time.time()
|
||||||
|
logging.info('synthesis text {}'.format(i))
|
||||||
|
for model_output in self.model.tts(**model_input, stream=stream, speed=speed):
|
||||||
|
speech_len = model_output['tts_speech'].shape[1] / self.sample_rate
|
||||||
|
logging.info('yield speech len {}, rtf {}'.format(speech_len, (time.time() - start_time) / speech_len))
|
||||||
|
yield model_output
|
||||||
|
start_time = time.time()
|
||||||
|
|
||||||
|
def inference_instruct(self, tts_text, spk_id, instruct_text, stream=False, speed=1.0, text_frontend=True):
|
||||||
|
assert self.__class__.__name__ == 'CosyVoice', 'inference_instruct is only implemented for CosyVoice!'
|
||||||
|
instruct_text = self.frontend.text_normalize(instruct_text, split=False, text_frontend=text_frontend)
|
||||||
|
for i in tqdm(self.frontend.text_normalize(tts_text, split=True, text_frontend=text_frontend)):
|
||||||
|
model_input = self.frontend.frontend_instruct(i, spk_id, instruct_text)
|
||||||
|
start_time = time.time()
|
||||||
|
logging.info('synthesis text {}'.format(i))
|
||||||
|
for model_output in self.model.tts(**model_input, stream=stream, speed=speed):
|
||||||
|
speech_len = model_output['tts_speech'].shape[1] / self.sample_rate
|
||||||
|
logging.info('yield speech len {}, rtf {}'.format(speech_len, (time.time() - start_time) / speech_len))
|
||||||
|
yield model_output
|
||||||
|
start_time = time.time()
|
||||||
|
|
||||||
|
def inference_vc(self, source_wav, prompt_wav, stream=False, speed=1.0):
|
||||||
|
model_input = self.frontend.frontend_vc(source_wav, prompt_wav, self.sample_rate)
|
||||||
|
start_time = time.time()
|
||||||
|
for model_output in self.model.tts(**model_input, stream=stream, speed=speed):
|
||||||
|
speech_len = model_output['tts_speech'].shape[1] / self.sample_rate
|
||||||
|
logging.info('yield speech len {}, rtf {}'.format(speech_len, (time.time() - start_time) / speech_len))
|
||||||
|
yield model_output
|
||||||
|
start_time = time.time()
|
||||||
|
|
||||||
|
|
||||||
|
class CosyVoice2(CosyVoice):
|
||||||
|
|
||||||
|
def __init__(self, model_dir, load_jit=False, load_trt=False, load_vllm=False, fp16=False, trt_concurrent=1):
|
||||||
|
self.model_dir = model_dir
|
||||||
|
self.fp16 = fp16
|
||||||
|
if not os.path.exists(model_dir):
|
||||||
|
model_dir = snapshot_download(model_dir)
|
||||||
|
hyper_yaml_path = '{}/cosyvoice2.yaml'.format(model_dir)
|
||||||
|
if not os.path.exists(hyper_yaml_path):
|
||||||
|
raise ValueError('{} not found!'.format(hyper_yaml_path))
|
||||||
|
with open(hyper_yaml_path, 'r') as f:
|
||||||
|
configs = load_hyperpyyaml(f, overrides={'qwen_pretrain_path': os.path.join(model_dir, 'CosyVoice-BlankEN')})
|
||||||
|
assert get_model_type(configs) == CosyVoice2Model, 'do not use {} for CosyVoice2 initialization!'.format(model_dir)
|
||||||
|
self.frontend = CosyVoiceFrontEnd(configs['get_tokenizer'],
|
||||||
|
configs['feat_extractor'],
|
||||||
|
'{}/campplus.onnx'.format(model_dir),
|
||||||
|
'{}/speech_tokenizer_v2.onnx'.format(model_dir),
|
||||||
|
'{}/spk2info.pt'.format(model_dir),
|
||||||
|
configs['allowed_special'])
|
||||||
|
self.sample_rate = configs['sample_rate']
|
||||||
|
if torch.cuda.is_available() is False and (load_jit is True or load_trt is True or load_vllm is True or fp16 is True):
|
||||||
|
load_jit, load_trt, load_vllm, fp16 = False, False, False, False
|
||||||
|
logging.warning('no cuda device, set load_jit/load_trt/load_vllm/fp16 to False')
|
||||||
|
self.model = CosyVoice2Model(configs['llm'], configs['flow'], configs['hift'], fp16)
|
||||||
|
self.model.load('{}/llm.pt'.format(model_dir),
|
||||||
|
'{}/flow.pt'.format(model_dir),
|
||||||
|
'{}/hift.pt'.format(model_dir))
|
||||||
|
if load_vllm:
|
||||||
|
self.model.load_vllm('{}/vllm'.format(model_dir))
|
||||||
|
if load_jit:
|
||||||
|
self.model.load_jit('{}/flow.encoder.{}.zip'.format(model_dir, 'fp16' if self.fp16 is True else 'fp32'))
|
||||||
|
if load_trt:
|
||||||
|
self.model.load_trt('{}/flow.decoder.estimator.{}.mygpu.plan'.format(model_dir, 'fp16' if self.fp16 is True else 'fp32'),
|
||||||
|
'{}/flow.decoder.estimator.fp32.onnx'.format(model_dir),
|
||||||
|
trt_concurrent,
|
||||||
|
self.fp16)
|
||||||
|
del configs
|
||||||
|
|
||||||
|
def inference_instruct2(self, tts_text, instruct_text, prompt_wav, zero_shot_spk_id='', stream=False, speed=1.0, text_frontend=True):
|
||||||
|
for i in tqdm(self.frontend.text_normalize(tts_text, split=True, text_frontend=text_frontend)):
|
||||||
|
model_input = self.frontend.frontend_instruct2(i, instruct_text, prompt_wav, self.sample_rate, zero_shot_spk_id)
|
||||||
|
start_time = time.time()
|
||||||
|
logging.info('synthesis text {}'.format(i))
|
||||||
|
for model_output in self.model.tts(**model_input, stream=stream, speed=speed):
|
||||||
|
speech_len = model_output['tts_speech'].shape[1] / self.sample_rate
|
||||||
|
logging.info('yield speech len {}, rtf {}'.format(speech_len, (time.time() - start_time) / speech_len))
|
||||||
|
yield model_output
|
||||||
|
start_time = time.time()
|
||||||
|
|
||||||
|
|
||||||
|
class CosyVoice3(CosyVoice2):
|
||||||
|
|
||||||
|
def __init__(self, model_dir, load_trt=False, load_vllm=False, fp16=False, trt_concurrent=1):
|
||||||
|
self.model_dir = model_dir
|
||||||
|
self.fp16 = fp16
|
||||||
|
if not os.path.exists(model_dir):
|
||||||
|
model_dir = snapshot_download(model_dir)
|
||||||
|
hyper_yaml_path = '{}/cosyvoice3.yaml'.format(model_dir)
|
||||||
|
if not os.path.exists(hyper_yaml_path):
|
||||||
|
raise ValueError('{} not found!'.format(hyper_yaml_path))
|
||||||
|
with open(hyper_yaml_path, 'r') as f:
|
||||||
|
configs = load_hyperpyyaml(f, overrides={'qwen_pretrain_path': os.path.join(model_dir, 'CosyVoice-BlankEN')})
|
||||||
|
assert get_model_type(configs) == CosyVoice3Model, 'do not use {} for CosyVoice3 initialization!'.format(model_dir)
|
||||||
|
self.frontend = CosyVoiceFrontEnd(configs['get_tokenizer'],
|
||||||
|
configs['feat_extractor'],
|
||||||
|
'{}/campplus.onnx'.format(model_dir),
|
||||||
|
'{}/speech_tokenizer_v3.onnx'.format(model_dir),
|
||||||
|
'{}/spk2info.pt'.format(model_dir),
|
||||||
|
configs['allowed_special'])
|
||||||
|
self.sample_rate = configs['sample_rate']
|
||||||
|
if torch.cuda.is_available() is False and (load_trt is True or fp16 is True):
|
||||||
|
load_trt, fp16 = False, False
|
||||||
|
logging.warning('no cuda device, set load_trt/fp16 to False')
|
||||||
|
self.model = CosyVoice3Model(configs['llm'], configs['flow'], configs['hift'], fp16)
|
||||||
|
self.model.load('{}/llm.pt'.format(model_dir),
|
||||||
|
'{}/flow.pt'.format(model_dir),
|
||||||
|
'{}/hift.pt'.format(model_dir))
|
||||||
|
if load_vllm:
|
||||||
|
self.model.load_vllm('{}/vllm'.format(model_dir))
|
||||||
|
if load_trt:
|
||||||
|
if self.fp16 is True:
|
||||||
|
logging.warning('DiT tensorRT fp16 engine have some performance issue, use at caution!')
|
||||||
|
self.model.load_trt('{}/flow.decoder.estimator.{}.mygpu.plan'.format(model_dir, 'fp16' if self.fp16 is True else 'fp32'),
|
||||||
|
'{}/flow.decoder.estimator.fp32.onnx'.format(model_dir),
|
||||||
|
trt_concurrent,
|
||||||
|
self.fp16)
|
||||||
|
del configs
|
||||||
|
|
||||||
|
|
||||||
|
def AutoModel(**kwargs):
|
||||||
|
if not os.path.exists(kwargs['model_dir']):
|
||||||
|
kwargs['model_dir'] = snapshot_download(kwargs['model_dir'])
|
||||||
|
if os.path.exists('{}/cosyvoice.yaml'.format(kwargs['model_dir'])):
|
||||||
|
return CosyVoice(**kwargs)
|
||||||
|
elif os.path.exists('{}/cosyvoice2.yaml'.format(kwargs['model_dir'])):
|
||||||
|
return CosyVoice2(**kwargs)
|
||||||
|
elif os.path.exists('{}/cosyvoice3.yaml'.format(kwargs['model_dir'])):
|
||||||
|
return CosyVoice3(**kwargs)
|
||||||
|
else:
|
||||||
|
raise TypeError('No valid model type found!')
|
||||||
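The `AutoModel` factory above selects the class by probing which version-specific yaml file exists in `model_dir`. The dispatch pattern can be illustrated standalone; the stub classes and the `auto_model` name below are placeholders for illustration only, not the real CosyVoice models.

```python
# Standalone sketch of AutoModel's yaml-probe dispatch (stub classes, not
# the real CosyVoice models, which also download weights and build frontends).
import os
import tempfile


class CosyVoice:  # stand-in for cosyvoice.cli.cosyvoice.CosyVoice
    def __init__(self, model_dir):
        self.model_dir = model_dir


class CosyVoice2(CosyVoice):
    pass


class CosyVoice3(CosyVoice2):
    pass


def auto_model(model_dir):
    # Mirror of AutoModel(): the first matching config file wins.
    if os.path.exists(os.path.join(model_dir, 'cosyvoice.yaml')):
        return CosyVoice(model_dir)
    elif os.path.exists(os.path.join(model_dir, 'cosyvoice2.yaml')):
        return CosyVoice2(model_dir)
    elif os.path.exists(os.path.join(model_dir, 'cosyvoice3.yaml')):
        return CosyVoice3(model_dir)
    raise TypeError('No valid model type found!')


with tempfile.TemporaryDirectory() as d:
    # A directory containing cosyvoice2.yaml dispatches to CosyVoice2.
    open(os.path.join(d, 'cosyvoice2.yaml'), 'w').close()
    model = auto_model(d)
    print(type(model).__name__)  # CosyVoice2
```

One consequence of this ordering: a directory that somehow contained both `cosyvoice.yaml` and `cosyvoice2.yaml` would be loaded as the v1 `CosyVoice`, since the checks are tried in version order.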
Some files were not shown because too many files have changed in this diff.