Compare commits


8 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| Kevin Wong | 0e3502c6f0 | 更新 | 2026-02-27 16:11:34 +08:00 |
| Kevin Wong | a1604979f0 | 更新 | 2026-02-26 11:13:03 +08:00 |
| Kevin Wong | 08221e48de | 更新 | 2026-02-26 10:49:22 +08:00 |
| Kevin Wong | 42b5cc0c02 | 更新 | 2026-02-26 10:14:41 +08:00 |
| Kevin Wong | 1717635bfd | 更新 | 2026-02-25 17:51:58 +08:00 |
| Kevin Wong | 0a5a17402c | 更新 | 2026-02-24 16:55:29 +08:00 |
| Kevin Wong | bc0fe9326a | 更新 | 2026-02-11 17:48:38 +08:00 |
| Kevin Wong | 035ee29d72 | 更新 | 2026-02-11 14:33:05 +08:00 |
170 changed files with 119133 additions and 1376 deletions

.gitignore vendored

@@ -40,6 +40,7 @@ backend/uploads/
backend/cookies/
backend/user_data/
backend/debug_screenshots/
backend/keys/
*_cookies.json
# ============ 模型权重 ============

Docs/ALIPAY_DEPLOY.md Normal file

@@ -0,0 +1,278 @@
# 支付宝付费开通会员 — 部署指南
本文档涵盖支付宝电脑网站支付功能的完整部署流程。用户注册后通过支付宝付费自动激活会员,有效期 1 年。
---
## 前置条件
- 支付宝企业/个体商户账号
- 已在 [支付宝开放平台](https://open.alipay.com) 创建应用并获取 APPID
- 应用已开通 **「电脑网站支付」** 产品权限(`alipay.trade.page.pay` 接口)
- 服务器域名已配置 HTTPS(支付宝回调要求公网可达)
---
## 第一部分:支付宝开放平台配置
### 1. 创建应用
登录 https://open.alipay.com → 控制台 → 创建应用(或使用已有应用)。
### 2. 开通「电脑网站支付」产品
进入应用详情 → 产品绑定/产品管理 → 添加 **「电脑网站支付」** → 提交审核。
> **注意**:未开通此产品会导致 `ACQ.ACCESS_FORBIDDEN` 错误。
### 3. 生成密钥对
进入应用详情 → 开发设置 → 接口加签方式 → 选择 **RSA2(SHA256)**
1. 使用支付宝官方密钥工具生成 RSA2048 密钥对
2. 将 **应用公钥** 上传到开放平台
3. 上传后平台会显示 **支付宝公钥**(`alipayPublicKey_RSA2`)
最终你会得到两样东西:
- **应用私钥**:你本地保存,代码用来签名请求
- **支付宝公钥**:平台返回给你,代码用来验证回调签名
> 应用公钥只是上传用的中间产物,代码中不需要。
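下面给出一段示意代码(非项目源码,按 `python-alipay-sdk` 的常见用法假设,具体参数名以所用 SDK 版本文档为准),说明两把密钥在代码中各自的位置:应用私钥用于请求签名,支付宝公钥用于回调验签。

```python
# 示意:初始化支付宝客户端(假设使用 python-alipay-sdk,参数名以 SDK 文档为准)
from alipay import AliPay

# 应用私钥:本地保存,代码用来对请求参数签名
app_private_key = open("backend/keys/app_private_key.pem").read()
# 支付宝公钥:开放平台返回,代码用来验证异步回调签名(不是“应用公钥”)
alipay_public_key = open("backend/keys/alipay_public_key.pem").read()

alipay = AliPay(
    appid="你的APPID",
    app_notify_url=None,                         # notify_url 也可在下单时单独传入
    app_private_key_string=app_private_key,
    alipay_public_key_string=alipay_public_key,
    sign_type="RSA2",
    debug=False,                                 # True 时走沙箱网关(对应 ALIPAY_SANDBOX)
)
```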
---
## 第二部分:服务器配置
### 1. 放置密钥文件
将密钥保存为标准 PEM 格式,放到 `backend/keys/` 目录:
```bash
mkdir -p /home/rongye/ProgramFiles/ViGent2/backend/keys
```
**`backend/keys/app_private_key.pem`**(应用私钥):
```
-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASC...(你的私钥内容)
...
-----END PRIVATE KEY-----
```
**`backend/keys/alipay_public_key.pem`**(支付宝公钥):
```
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A...(支付宝公钥内容)
...
-----END PUBLIC KEY-----
```
#### PEM 格式要求
支付宝密钥工具导出的是一行纯文本,需要转换为标准 PEM 格式:
- 必须有头尾标记(`-----BEGIN/END ...-----`)
- 密钥内容每 64 字符换行
- 私钥头标记为 `-----BEGIN PRIVATE KEY-----`(PKCS#8 格式)
- 公钥头标记为 `-----BEGIN PUBLIC KEY-----`
如果你拿到的是一行裸密钥,用以下命令转换:
```bash
# 私钥格式化(假设裸密钥在 raw_private.txt 中)
echo "-----BEGIN PRIVATE KEY-----" > app_private_key.pem
cat raw_private.txt | fold -w 64 >> app_private_key.pem
echo "-----END PRIVATE KEY-----" >> app_private_key.pem
# 公钥格式化
echo "-----BEGIN PUBLIC KEY-----" > alipay_public_key.pem
cat raw_public.txt | fold -w 64 >> alipay_public_key.pem
echo "-----END PUBLIC KEY-----" >> alipay_public_key.pem
```
> `backend/keys/` 目录已加入 `.gitignore`,不会被提交到仓库。
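格式化完成后,建议先在本地确认 PEM 能被正常解析,避免服务启动后才出现「RSA key format is not supported」。下面是一段基于 `cryptography` 库的检查脚本(仅作示意):

```python
# 示意:验证两个 PEM 文件是否为可解析的标准格式
from cryptography.hazmat.primitives import serialization

with open("backend/keys/app_private_key.pem", "rb") as f:
    private_key = serialization.load_pem_private_key(f.read(), password=None)
print("应用私钥 OK,位数:", private_key.key_size)

with open("backend/keys/alipay_public_key.pem", "rb") as f:
    public_key = serialization.load_pem_public_key(f.read())
print("支付宝公钥 OK,位数:", public_key.key_size)
```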
### 2. 配置环境变量
`backend/.env` 中添加:
```ini
# =============== 支付宝配置 ===============
ALIPAY_APP_ID=你的应用APPID
ALIPAY_PRIVATE_KEY_PATH=/home/rongye/ProgramFiles/ViGent2/backend/keys/app_private_key.pem
ALIPAY_PUBLIC_KEY_PATH=/home/rongye/ProgramFiles/ViGent2/backend/keys/alipay_public_key.pem
ALIPAY_NOTIFY_URL=https://vigent.hbyrkj.top/api/payment/notify
ALIPAY_RETURN_URL=https://vigent.hbyrkj.top/pay
```
| 变量 | 说明 |
|------|------|
| `ALIPAY_APP_ID` | 支付宝开放平台应用 APPID |
| `ALIPAY_PRIVATE_KEY_PATH` | 应用私钥 PEM 文件绝对路径 |
| `ALIPAY_PUBLIC_KEY_PATH` | 支付宝公钥 PEM 文件绝对路径 |
| `ALIPAY_NOTIFY_URL` | 异步回调地址(服务器间通信),必须公网 HTTPS 可达 |
| `ALIPAY_RETURN_URL` | 同步跳转地址(用户支付完成后浏览器跳转回的页面) |
`config.py` 中还有几个可调参数(已有默认值,一般不需要加到 .env):
| 变量 | 默认值 | 说明 |
|------|--------|------|
| `ALIPAY_SANDBOX` | `false` | 是否使用沙箱环境 |
| `PAYMENT_AMOUNT` | `999.00` | 会员价格(元) |
| `PAYMENT_EXPIRE_DAYS` | `365` | 会员有效天数 |
### 3. 创建数据库表
通过 Docker 在本地 Supabase 中执行:
```bash
docker exec -i supabase-db psql -U postgres -c "
CREATE TABLE IF NOT EXISTS orders (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID REFERENCES users(id) ON DELETE CASCADE,
out_trade_no TEXT UNIQUE NOT NULL,
amount DECIMAL(10, 2) NOT NULL DEFAULT 999.00,
status TEXT DEFAULT 'pending' CHECK (status IN ('pending', 'paid', 'failed')),
trade_no TEXT,
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
paid_at TIMESTAMP WITH TIME ZONE
);
CREATE INDEX IF NOT EXISTS idx_orders_user_id ON orders(user_id);
CREATE INDEX IF NOT EXISTS idx_orders_out_trade_no ON orders(out_trade_no);
"
```
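建表后,创建订单时一般是生成全局唯一的 `out_trade_no` 并写入一条 `pending` 记录。下面是一个示意写法(假设通过 supabase-py 客户端访问,函数名与取值仅作演示,非项目源码):

```python
# 示意:生成商户订单号并写入 pending 订单(假设使用 supabase-py)
import time
import uuid

def create_pending_order(supabase, user_id: str, amount: float = 999.00) -> str:
    # out_trade_no 需全局唯一,这里用时间戳 + 随机串组合(前缀仅为演示)
    out_trade_no = f"VIG{int(time.time())}{uuid.uuid4().hex[:8]}"
    supabase.table("orders").insert({
        "user_id": user_id,
        "out_trade_no": out_trade_no,
        "amount": amount,
        "status": "pending",
    }).execute()
    return out_trade_no
```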
### 4. 安装依赖
```bash
# 后端(在 venv 中)
cd /home/rongye/ProgramFiles/ViGent2/backend
venv/bin/pip install python-alipay-sdk
```
> 前端无额外依赖需要安装。
### 5. Nginx 配置
确保 Nginx 将 `/api/payment/notify` 代理到后端。如果现有配置已覆盖 `/api/` 前缀,则无需额外修改:
```nginx
location /api/ {
proxy_pass http://localhost:8006;
# ... 现有配置
}
```
### 6. 重启服务
```bash
# 构建前端
cd /home/rongye/ProgramFiles/ViGent2/frontend
npx next build
# 重启
pm2 restart vigent2-backend
pm2 restart vigent2-frontend
```
---
## 第三部分:正式上线
测试通过后,将 `backend/app/core/config.py` 中的测试金额改为正式价格:
```python
PAYMENT_AMOUNT: float = 999.00 # 正式价格
```
或在 `backend/.env` 中添加覆盖:
```ini
PAYMENT_AMOUNT=999.00
```
然后重启后端:
```bash
pm2 restart vigent2-backend
```
---
## 支付流程说明
```
用户注册 → 登录(密码正确但 is_active=false)
→ 后端返回 403 + payment_token
→ 前端跳转 /pay 页面
→ POST /api/payment/create-order → 返回支付宝收银台 URL
→ 前端重定向到支付宝收银台页面(支持扫码、账号登录、余额等多种支付方式)
→ 用户完成支付
→ 支付宝异步回调 POST /api/payment/notify
→ 后端验签 → 更新订单 → 激活用户(is_active=true, expires_at=+365天)
→ 支付宝同步跳转回 /pay?out_trade_no=xxx
→ 前端轮询 GET /api/payment/status/{out_trade_no}
→ 轮询到 paid → 提示成功 → 跳转登录页
→ 用户重新登录 → 成功进入系统
```
**电脑网站支付 vs 当面付**:电脑网站支付(`alipay.trade.page.pay`)会跳转到支付宝官方收银台页面,用户可以选择扫码、支付宝账号登录、余额等多种方式支付,体验更好。当面付(`alipay.trade.precreate`)仅生成一个二维码,只能扫码支付。
会员到期续费同流程:登录时检测到过期 → 返回 PAYMENT_REQUIRED → 跳转 /pay。
管理员手动激活功能不受影响,两种方式并存。
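其中异步回调是关键一环:后端必须先验签,再校验交易状态,最后激活用户并向支付宝返回纯文本 `success`。下面是一个简化骨架(示意代码:`alipay.verify` 按 python-alipay-sdk 的常见验签接口假设,`mark_order_paid` / `activate_user_by_order` 为占位函数,实际实现见 `payment/router.py` 与 `payment/service.py`):

```python
# 示意:/api/payment/notify 回调处理骨架(非项目源码)
from datetime import datetime, timedelta, timezone

from fastapi import APIRouter, Request
from fastapi.responses import PlainTextResponse

router = APIRouter()

@router.post("/api/payment/notify")
async def alipay_notify(request: Request):
    form = dict(await request.form())
    signature = form.pop("sign", None)
    form.pop("sign_type", None)

    # 1. 用支付宝公钥验证回调签名,验签失败直接返回 fail
    if not signature or not alipay.verify(form, signature):
        return PlainTextResponse("fail")

    # 2. 校验交易状态,更新订单并激活用户(is_active=true,有效期 +365 天)
    if form.get("trade_status") in ("TRADE_SUCCESS", "TRADE_FINISHED"):
        out_trade_no = form["out_trade_no"]
        mark_order_paid(out_trade_no, trade_no=form.get("trade_no"))      # 占位函数
        activate_user_by_order(                                           # 占位函数
            out_trade_no,
            expires_at=datetime.now(timezone.utc) + timedelta(days=365),
        )

    # 3. 必须返回纯文本 success,否则支付宝会按重试策略持续回调
    return PlainTextResponse("success")
```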
---
## 涉及文件
| 文件 | 变更类型 | 说明 |
|------|---------|------|
| `backend/requirements.txt` | 修改 | 添加 `python-alipay-sdk` |
| `backend/database/schema.sql` | 修改 | 新增 `orders` 表 |
| `backend/app/core/config.py` | 修改 | 支付宝配置项 |
| `backend/app/core/security.py` | 修改 | payment_token 函数 |
| `backend/app/core/deps.py` | 修改 | is_active 安全兜底 |
| `backend/app/repositories/orders.py` | 新建 | orders 数据层 |
| `backend/app/modules/payment/__init__.py` | 新建 | 模块初始化 |
| `backend/app/modules/payment/schemas.py` | 新建 | 请求/响应模型 |
| `backend/app/modules/payment/service.py` | 新建 | 支付业务逻辑(电脑网站支付) |
| `backend/app/modules/payment/router.py` | 新建 | 3 个 API 端点 |
| `backend/app/modules/auth/router.py` | 修改 | 登录返回 PAYMENT_REQUIRED |
| `backend/app/main.py` | 修改 | 注册 payment_router |
| `backend/.env` | 修改 | 支付宝环境变量 |
| `backend/keys/` | 新建 | PEM 密钥文件 |
| `frontend/src/shared/lib/auth.ts` | 修改 | login() 处理 paymentToken |
| `frontend/src/shared/api/axios.ts` | 修改 | PUBLIC_PATHS 加 /pay |
| `frontend/src/app/login/page.tsx` | 修改 | paymentToken 跳转 |
| `frontend/src/app/register/page.tsx` | 修改 | 注册成功提示文案 |
| `frontend/src/app/pay/page.tsx` | 新建 | 付费页面(重定向到支付宝收银台) |
---
## 常见问题
### RSA key format is not supported
密钥文件缺少 PEM 头尾标记或未按 64 字符换行。参考「PEM 格式要求」重新格式化。
### ACQ.ACCESS_FORBIDDEN
应用未开通「电脑网站支付」产品。在支付宝开放平台 → 应用详情 → 产品管理中添加并开通。
### 支付宝回调不到
1. 检查 `ALIPAY_NOTIFY_URL` 是否公网 HTTPS 可达
2. 检查 Nginx 是否将 `/api/payment/notify` 代理到后端
3. 支付宝回调超时15s 未响应)会重试,共重试 8 次,持续 24 小时
### 支付完成后页面未跳转回来
检查 `ALIPAY_RETURN_URL` 配置是否正确,必须是前端 `/pay` 页面的完整 URL(`https://vigent.hbyrkj.top/pay`)。支付宝会在用户支付完成后将浏览器重定向到此地址,并附带 `out_trade_no` 等参数。
### 前端显示"网络错误"而非具体错误
API 函数缺少 try/catch 捕获 axios 异常。已在 `auth.ts` 的 `register()` 与 `login()` 中修复。


@@ -39,6 +39,7 @@ backend/
│ │ ├── generated_audios/ # 预生成配音管理(router/schemas/service)
│ │ ├── login_helper/ # 扫码登录辅助
│ │ ├── tools/ # 工具接口(router/schemas/service)
│ │ ├── payment/ # 支付宝付费开通(router/schemas/service)
│ │ └── admin/ # 管理员功能
│ ├── repositories/ # Supabase 数据访问
│ ├── services/ # 外部服务集成
@@ -74,6 +75,18 @@ backend/
- 错误通过 `HTTPException` 抛出,统一由全局异常处理返回 `{success:false, message, code}`
- 不再使用 `detail` 作为前端错误文案(前端已改为读 `message`)。
### `/api/videos/generate` 参数契约(关键约定)
- `custom_assignments` 每项使用 `material_path/start/end/source_start/source_end?`,并以时间轴可见段为准。
- `output_aspect_ratio` 仅允许 `9:16` / `16:9`,默认 `9:16`
- 标题显示模式参数:
- `title_display_mode`: `short` / `persistent`(默认 `short`
- `title_duration`: 默认 `4.0`(秒),仅 `short` 模式生效
- 片头副标题参数:
- `secondary_title`: 副标题文字(可选,限 20 字),仅在视频画面中显示,不参与发布标题
- `secondary_title_style_id` / `secondary_title_font_size` / `secondary_title_top_margin`: 副标题样式配置
- workflow/remotion 侧需保持字段透传一致,避免前后端语义漂移。
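按上述契约组织的请求体大致如下(示意:字段取值仅作演示,假设后端以 JSON 方式接收并需要登录态):

```python
# 示意:按上述契约构造 /api/videos/generate 请求(字段取值仅为演示)
import httpx

payload = {
    "custom_assignments": [
        # 以时间轴可见段为准,每项包含 material_path/start/end/source_start(/source_end)
        {"material_path": "materials/a.mp4", "start": 0.0, "end": 8.0,
         "source_start": 2.0, "source_end": 10.0},
        {"material_path": "materials/b.mp4", "start": 8.0, "end": 15.0,
         "source_start": 0.0},
    ],
    "output_aspect_ratio": "9:16",      # 仅允许 9:16 / 16:9
    "title": "示例标题",
    "title_display_mode": "short",      # short / persistent,默认 short
    "title_duration": 4.0,              # 仅 short 模式生效
    "secondary_title": "示例副标题",     # 可选,限 20 字,仅在视频画面中显示
    "secondary_title_style_id": "title_round",
}

resp = httpx.post(
    "http://localhost:8006/api/videos/generate",
    json=payload,
    headers={"Authorization": "Bearer <token>"},
)
print(resp.json())
```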
---
## 4. 认证与权限
@@ -143,6 +156,14 @@ backend/user_data/{user_uuid}/cookies/
- `LATENTSYNC_*`
- `CORS_ORIGINS` (CORS 白名单,默认 *)
### MuseTalk / 混合唇形同步
- `MUSETALK_GPU_ID` (GPU 编号,默认 0)
- `MUSETALK_API_URL` (常驻服务地址,默认 http://localhost:8011)
- `MUSETALK_BATCH_SIZE` (推理批大小,默认 32)
- `MUSETALK_VERSION` (v15)
- `MUSETALK_USE_FLOAT16` (半精度,默认 true)
- `LIPSYNC_DURATION_THRESHOLD` (秒,>=此值用 MuseTalk默认 120)
### 微信视频号
- `WEIXIN_HEADLESS_MODE` (headful/headless-new)
- `WEIXIN_CHROME_PATH` / `WEIXIN_BROWSER_CHANNEL`
@@ -157,7 +178,13 @@ backend/user_data/{user_uuid}/cookies/
- `DOUYIN_LOCALE` / `DOUYIN_TIMEZONE_ID`
- `DOUYIN_FORCE_SWIFTSHADER`
- `DOUYIN_DEBUG_ARTIFACTS` / `DOUYIN_RECORD_VIDEO` / `DOUYIN_KEEP_SUCCESS_VIDEO`
- `DOUYIN_COOKIE` (抖音视频下载 Cookie)
### 支付宝
- `ALIPAY_APP_ID` / `ALIPAY_PRIVATE_KEY_PATH` / `ALIPAY_PUBLIC_KEY_PATH`
- `ALIPAY_NOTIFY_URL` / `ALIPAY_RETURN_URL`
- `ALIPAY_SANDBOX` (沙箱模式,默认 false)
- `PAYMENT_AMOUNT` (会员价格,默认 999.00)
- `PAYMENT_EXPIRE_DAYS` (会员有效天数,默认 365)
---


@@ -25,6 +25,7 @@ backend/
│ │ ├── generated_audios/ # 预生成配音管理(router/schemas/service)
│ │ ├── login_helper/ # 扫码登录辅助
│ │ ├── tools/ # 工具接口(router/schemas/service)
│ │ ├── payment/ # 支付宝付费开通(router/schemas/service)
│ │ └── admin/ # 管理员功能
│ ├── repositories/ # Supabase 数据访问
│ ├── services/ # 外部服务集成 (TTS/Remotion/Storage/Uploader 等)
@@ -100,9 +101,16 @@ backend/
* `POST /api/tools/extract-script`: 从视频链接提取文案
10. **健康检查**
* `GET /api/lipsync/health`: LatentSync 服务健康状态
* `GET /api/lipsync/health`: 唇形同步服务健康状态(含 LatentSync + MuseTalk + 混合路由阈值)
* `GET /api/voiceclone/health`: CosyVoice 3.0 服务健康状态
11. **支付 (Payment)**
* `POST /api/payment/create-order`: 创建支付宝电脑网站支付订单(需 payment_token)
* `POST /api/payment/notify`: 支付宝异步通知回调(返回纯文本 success/fail)
* `GET /api/payment/status/{out_trade_no}`: 查询订单支付状态(前端轮询)
> 登录时若账号未激活或已过期,返回 403 + `payment_token`,前端跳转 `/pay` 页面完成付费。详见 [支付宝部署指南](ALIPAY_DEPLOY.md)。
### 统一响应结构
```json
@@ -131,11 +139,17 @@ backend/
- `output_aspect_ratio`: 输出画面比例(`9:16``16:9`,默认 `9:16`
- `language`: TTS 语言(默认自动检测,声音克隆时透传给 CosyVoice 3.0
- `title`: 片头标题文字
- `title_display_mode`: 标题显示模式(`short` / `persistent`,默认 `short`
- `title_duration`: 标题显示时长(秒,默认 `4.0``short` 模式生效)
- `subtitle_style_id`: 字幕样式 ID
- `title_style_id`: 标题样式 ID
- `subtitle_font_size`: 字幕字号(覆盖样式默认值)
- `title_font_size`: 标题字号(覆盖样式默认值)
- `title_top_margin`: 标题距顶部像素
- `secondary_title`: 片头副标题文字(可选,限 20 字,仅视频画面显示)
- `secondary_title_style_id`: 副标题样式 ID
- `secondary_title_font_size`: 副标题字号
- `secondary_title_top_margin`: 副标题距主标题间距
- `subtitle_bottom_margin`: 字幕距底部像素
- `enable_subtitles`: 是否启用字幕
- `bgm_id`: 背景音乐 ID
@@ -188,6 +202,12 @@ GLM_API_KEY=your_glm_api_key
# LatentSync 配置
LATENTSYNC_GPU_ID=1
# MuseTalk 配置 (长视频唇形同步)
MUSETALK_GPU_ID=0
MUSETALK_API_URL=http://localhost:8011
MUSETALK_BATCH_SIZE=32
LIPSYNC_DURATION_THRESHOLD=120
```
### 4. 启动服务
@@ -210,6 +230,14 @@ uvicorn app.main:app --host 0.0.0.0 --port 8006 --reload
3. **重要**: 如果模型占用 GPU请务必使用 `asyncio.Lock` 进行并发控制,防止 OOM。
4.`app/modules/` 下创建对应模块,添加 router/service/schemas并在 `main.py` 注册路由。
### 唇形同步混合路由
`lipsync_service.py` 实现了 LatentSync + MuseTalk 混合路由:
- 短视频 (<`LIPSYNC_DURATION_THRESHOLD`s) → LatentSync 1.6 (GPU1, 端口 8007)
- 长视频 (>=阈值) → MuseTalk 1.5 (GPU0, 端口 8011)
- MuseTalk 不可用时自动回退到 LatentSync
- 路由逻辑对 workflow 完全透明
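路由本质上只是「按音频时长选服务地址 + 健康检查回退」,最小示意如下(非 `lipsync_service.py` 原文):

```python
# 示意:按音频时长路由到 MuseTalk / LatentSync,MuseTalk 不可用时回退
import httpx

LIPSYNC_DURATION_THRESHOLD = 120            # 秒
MUSETALK_API_URL = "http://localhost:8011"
LATENTSYNC_API_URL = "http://localhost:8007"

async def pick_lipsync_backend(audio_duration: float) -> str:
    if audio_duration < LIPSYNC_DURATION_THRESHOLD:
        return LATENTSYNC_API_URL           # 短视频:LatentSync 1.6 (GPU1)
    try:
        async with httpx.AsyncClient(timeout=5.0) as client:
            resp = await client.get(f"{MUSETALK_API_URL}/health")
            if resp.status_code == 200:
                return MUSETALK_API_URL     # 长视频:MuseTalk 1.5 (GPU0)
    except httpx.HTTPError:
        pass
    return LATENTSYNC_API_URL               # MuseTalk 不可用,自动回退
```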
### 添加定时任务
目前推荐使用 **APScheduler****Crontab** 来管理定时任务。


@@ -7,6 +7,7 @@
| 模型 | Fun-CosyVoice3-0.5B-2512 (0.5B 参数) |
| 端口 | 8010 |
| GPU | 0 (CUDA_VISIBLE_DEVICES=0) |
| 推理精度 | FP16 (自动混合精度) |
| PM2 名称 | vigent2-cosyvoice (id=15) |
| Conda 环境 | cosyvoice (Python 3.10) |
| 启动脚本 | `run_cosyvoice.sh` |


@@ -7,8 +7,8 @@
| 服务器 | Dell PowerEdge R730 |
| CPU | 2× Intel Xeon E5-2680 v4 (56 线程) |
| 内存 | 192GB DDR4 |
| GPU 0 | NVIDIA RTX 3090 24GB |
| GPU 1 | NVIDIA RTX 3090 24GB (用于 LatentSync) |
| GPU 0 | NVIDIA RTX 3090 24GB (MuseTalk + CosyVoice) |
| GPU 1 | NVIDIA RTX 3090 24GB (LatentSync) |
| 部署路径 | `/home/rongye/ProgramFiles/ViGent2` |
---
@@ -72,7 +72,9 @@ cd /home/rongye/ProgramFiles/ViGent2
---
## 步骤 3: 部署 AI 模型 (LatentSync 1.6)
## 步骤 3: 部署 AI 模型
### 3a. LatentSync 1.6 (短视频唇形同步, GPU1)
> ⚠️ **重要**LatentSync 需要独立的 Conda 环境和 **~18GB VRAM**。请**不要**直接安装在后端环境中。
@@ -93,6 +95,26 @@ conda activate latentsync
python -m scripts.server # 测试能否启动Ctrl+C 退出
```
### 3b. MuseTalk 1.5 (长视频唇形同步, GPU0)
> MuseTalk 是单步潜空间修复模型(非扩散模型),推理速度接近实时,适合 >=120s 的长视频。与 CosyVoice 共享 GPU0,fp16 推理约需 4-8GB 显存。
请参考详细的独立部署指南:
**[MuseTalk 部署指南](MUSETALK_DEPLOY.md)**
简要步骤:
1. 创建独立的 `musetalk` Conda 环境 (Python 3.10 + PyTorch 2.0.1 + CUDA 11.8)
2. 安装 mmcv/mmdet/mmpose 等依赖
3. 下载模型权重 (`download_weights.sh`)
4. 创建必要的软链接 (`musetalk/config.json`, `musetalk/musetalkV15`)
**验证 MuseTalk 部署**:
```bash
cd /home/rongye/ProgramFiles/ViGent2/models/MuseTalk
/home/rongye/ProgramFiles/miniconda3/envs/musetalk/bin/python scripts/server.py
# 另一个终端: curl http://localhost:8011/health
```
---
## 步骤 4: 安装后端依赖
@@ -189,7 +211,7 @@ cp .env.example .env
| `SUPABASE_PUBLIC_URL` | `https://api.hbyrkj.top` | Supabase API 公网地址 (前端访问) |
| `LATENTSYNC_GPU_ID` | 1 | GPU 选择 (0 或 1) |
| `LATENTSYNC_USE_SERVER` | false | 设为 true 以启用常驻服务加速 |
| `LATENTSYNC_INFERENCE_STEPS` | 20 | 推理步数 (20-50) |
| `LATENTSYNC_INFERENCE_STEPS` | 16 | 推理步数 (16-50) |
| `LATENTSYNC_GUIDANCE_SCALE` | 1.5 | 引导系数 (1.0-3.0) |
| `DEBUG` | true | 生产环境改为 false |
| `REDIS_URL` | `redis://localhost:6379/0` | 任务状态存储(不可用时回退内存) |
@@ -212,7 +234,21 @@ cp .env.example .env
| `DOUYIN_RECORD_VIDEO` | false | 录制浏览器操作视频 |
| `DOUYIN_KEEP_SUCCESS_VIDEO` | false | 成功后保留录屏 |
| `CORS_ORIGINS` | `*` | CORS 允许源 (生产环境建议白名单) |
| `DOUYIN_COOKIE` | | 抖音视频下载 Cookie (文案提取功能) |
| `MUSETALK_GPU_ID` | 0 | MuseTalk GPU 编号 |
| `MUSETALK_API_URL` | `http://localhost:8011` | MuseTalk 常驻服务地址 |
| `MUSETALK_BATCH_SIZE` | 32 | MuseTalk 推理批大小 |
| `MUSETALK_VERSION` | v15 | MuseTalk 模型版本 |
| `MUSETALK_USE_FLOAT16` | true | MuseTalk 半精度加速 |
| `LIPSYNC_DURATION_THRESHOLD` | 120 | 秒,>=此值用 MuseTalk<此值用 LatentSync |
| `ALIPAY_APP_ID` | 空 | 支付宝应用 APPID |
| `ALIPAY_PRIVATE_KEY_PATH` | 空 | 应用私钥 PEM 文件路径 |
| `ALIPAY_PUBLIC_KEY_PATH` | 空 | 支付宝公钥 PEM 文件路径 |
| `ALIPAY_NOTIFY_URL` | 空 | 支付宝异步回调地址 (公网 HTTPS) |
| `ALIPAY_RETURN_URL` | 空 | 支付完成后浏览器跳转地址 |
| `PAYMENT_AMOUNT` | `999.00` | 会员价格 (元) |
| `PAYMENT_EXPIRE_DAYS` | `365` | 会员有效天数 |
> 支付宝完整配置步骤密钥生成、PEM 格式、产品开通等)请参考 **[支付宝部署指南](ALIPAY_DEPLOY.md)**。
---
@@ -262,6 +298,13 @@ cd /home/rongye/ProgramFiles/ViGent2/models/LatentSync
conda activate latentsync
python -m scripts.server
```
### 启动 MuseTalk (终端 4, 长视频唇形同步)
```bash
cd /home/rongye/ProgramFiles/ViGent2/models/MuseTalk
/home/rongye/ProgramFiles/miniconda3/envs/musetalk/bin/python scripts/server.py
```
### 验证
@@ -355,7 +398,27 @@ pm2 save
curl http://localhost:8010/health
```
### 5. 启动服务看门狗 (Watchdog)
### 5. 启动 MuseTalk 长视频唇形同步服务
> 长视频 (>=120s) 自动路由到 MuseTalk。MuseTalk 不可用时自动回退 LatentSync。
> 详细部署步骤见 [MuseTalk 部署指南](MUSETALK_DEPLOY.md)。
1. 启动脚本位于项目根目录: `run_musetalk.sh`
2. 使用 pm2 启动:
```bash
cd /home/rongye/ProgramFiles/ViGent2
pm2 start ./run_musetalk.sh --name vigent2-musetalk
pm2 save
```
3. 验证服务:
```bash
curl http://localhost:8011/health
# {"status":"ok","model_loaded":true}
```
### 6. 启动服务看门狗 (Watchdog)
> 🛡️ **推荐**:监控 CosyVoice 和 LatentSync 服务健康状态,卡死时自动重启。
@@ -372,6 +435,8 @@ pm2 save
pm2 startup
```
> **提示**: 完整的 PM2 进程列表应包含 5-6 个服务: vigent2-backend, vigent2-frontend, vigent2-latentsync, vigent2-cosyvoice, vigent2-musetalk, vigent2-watchdog。
### pm2 常用命令
```bash
@@ -379,6 +444,7 @@ pm2 status # 查看所有服务状态
pm2 logs # 查看所有日志
pm2 logs vigent2-backend # 查看后端日志
pm2 logs vigent2-cosyvoice # 查看 CosyVoice 日志
pm2 logs vigent2-musetalk # 查看 MuseTalk 日志
pm2 restart all # 重启所有服务
pm2 stop vigent2-latentsync # 停止 LatentSync 服务
pm2 delete all # 删除所有服务
@@ -518,6 +584,7 @@ sudo lsof -i :8006
sudo lsof -i :3002
sudo lsof -i :8007
sudo lsof -i :8010 # CosyVoice
sudo lsof -i :8011 # MuseTalk
```
### 查看日志
@@ -528,6 +595,7 @@ pm2 logs vigent2-backend
pm2 logs vigent2-frontend
pm2 logs vigent2-latentsync
pm2 logs vigent2-cosyvoice
pm2 logs vigent2-musetalk
```
### SSH 连接卡顿 / 系统响应慢
@@ -558,6 +626,7 @@ pm2 logs vigent2-cosyvoice
| `playwright` | 社交媒体自动发布 |
| `biliup` | B站视频上传 |
| `loguru` | 日志管理 |
| `python-alipay-sdk` | 支付宝支付集成 |
### 前端关键依赖


@@ -47,6 +47,16 @@
- 开启可换行:`white-space: normal` + `word-break` + `overflow-wrap`
- 描边、字距、上下边距同步按比例缩放。
### 2.3 片头标题显示模式(短暂/常驻)
- 在“标题与字幕”面板的“片头标题”行尾新增下拉,支持:`短暂显示` / `常驻显示`
- 默认模式为 `短暂显示`,短暂模式默认时长为 4 秒。
- 用户选择会持久化到 localStorage刷新后保持上次配置。
- 生成请求新增 `title_display_mode`,短暂模式透传 `title_duration=4.0`
- Remotion 端到端支持该参数:
- `short`:标题在设定时长后淡出并结束渲染;
- `persistent`:标题全程常驻(保留淡入动画,不执行淡出)。
---
## 🎥 方向归一化 + 多素材拼接稳定性 — 第三阶段 (Day 24)
@@ -139,8 +149,9 @@
| `backend/app/core/deps.py` | `get_current_user` / `get_current_user_optional` 接入到期失效检查 |
| `backend/app/modules/auth/router.py` | 登录时到期停用 + `/api/auth/me` 统一鉴权依赖 |
| `backend/app/modules/videos/schemas.py` | `CustomAssignment` 新增 `source_end`;保留 `output_aspect_ratio` |
| `backend/app/modules/videos/workflow.py` | 多素材/单素材透传 `source_end`;多素材 prepare/concat 统一 25fps |
| `backend/app/modules/videos/workflow.py` | 多素材/单素材透传 `source_end`;多素材 prepare/concat 统一 25fps;标题显示模式参数透传 Remotion |
| `backend/app/services/video_service.py` | 旋转元数据解析与方向归一化;`prepare_segment` 支持 `source_end/target_fps`concat 强制 CFR + `+genpts` |
| `backend/app/services/remotion_service.py` | render 支持 `title_display_mode/title_duration` 并传递到 render.ts |
### 前端修改
@@ -149,20 +160,26 @@
| `frontend/src/features/home/model/useTimelineEditor.ts` | `CustomAssignment` 新增 `source_end`;修复 sourceStart 开放终点时长计算 |
| `frontend/src/features/home/model/useHomeController.ts` | 多素材以可见 assignments 为准发送;单素材截取触发条件补齐 |
| `frontend/src/features/home/ui/TimelineEditor.tsx` | 画面比例下拉;循环比例按截取后有效时长计算 |
| `frontend/src/features/home/model/useHomePersistence.ts` | `outputAspectRatio` 持久化 |
| `frontend/src/features/home/model/useHomePersistence.ts` | `outputAspectRatio``titleDisplayMode` 持久化 |
| `frontend/src/features/home/ui/HomePage.tsx` | 页面进入滚动到顶部ClipTrimmer/Timeline 交互保持一致 |
| `frontend/src/features/home/ui/FloatingStylePreview.tsx` | 标题/字幕样式预览与成片渲染策略对齐 |
| `frontend/src/features/home/ui/TitleSubtitlePanel.tsx` | 标题行新增“短暂显示/常驻显示”下拉 |
### Remotion 修改
| 文件 | 变更 |
|------|------|
| `remotion/src/components/Title.tsx` | 标题响应式缩放与自动换行,优化竖屏窄画布适配 |
| `remotion/src/components/Title.tsx` | 标题响应式缩放与自动换行;新增短暂/常驻显示模式控制 |
| `remotion/src/components/Subtitles.tsx` | 字幕响应式缩放与自动换行,减少预览/成片差异 |
| `remotion/src/Video.tsx` | 新增 `titleDisplayMode` 透传到标题组件 |
| `remotion/src/Root.tsx` | 默认 props 增加 `titleDisplayMode='short'``titleDuration=4` |
| `remotion/render.ts` | CLI 参数新增 `--titleDisplayMode`inputProps 增加 `titleDisplayMode` |
---
## 验证记录
- 后端语法检查:`python -m py_compile backend/app/modules/videos/schemas.py backend/app/modules/videos/workflow.py backend/app/services/video_service.py`
- 后端语法检查:`python -m py_compile backend/app/modules/videos/schemas.py backend/app/modules/videos/workflow.py backend/app/services/video_service.py backend/app/services/remotion_service.py`
- 前端类型检查:`npx tsc --noEmit`
- 前端 ESLint`npx eslint src/features/home/model/useHomeController.ts src/features/home/model/useHomePersistence.ts src/features/home/ui/HomePage.tsx src/features/home/ui/TitleSubtitlePanel.tsx`
- Remotion 渲染脚本构建:`npm run build:render`

Docs/DevLogs/Day25.md Normal file

@@ -0,0 +1,254 @@
## 🔧 文案提取助手修复 — 抖音链接无法提取文案 (Day 25)
### 概述
文案提取助手粘贴抖音链接后无法提取文案,yt-dlp 报错 `Fresh cookies are needed`,手动回退方案也因抖音页面结构变化失效。本日完成了完整修复,并清理了不再需要的 `DOUYIN_COOKIE` 配置。
---
## 🐛 问题诊断
### 错误链路
1. **yt-dlp 失败**`ERROR: [Douyin] Fresh cookies (not necessarily logged in) are needed`
- yt-dlp 版本 `2025.12.08` 过旧
- 抖音 API `aweme/v1/web/aweme/detail/` 需要签名 cookie(`s_v_web_id` 等),即使升级 yt-dlp 到最新版 + 传入 cookie 仍无法解决,属 yt-dlp 已知问题
2. **手动回退失败**`Could not find RENDER_DATA in page`
- 旧方案通过桌面端用户主页 + `modal_id` 访问,抖音 SSR 已不再返回 `videoDetail` 数据
3. **`.env` 中的 `DOUYIN_COOKIE`**:时间戳 2024 年 12 月,早已过期
---
## ✅ 修复方案:移动端分享页 + 自动获取 ttwid
### 核心思路
放弃依赖 yt-dlp 下载抖音视频和手动维护 cookie,改为:
1. 自动从 ByteDance 公共 API 获取新鲜 `ttwid`(匿名令牌,不绑定账号)
2.`ttwid` 访问移动端分享页 `m.douyin.com/share/video/{id}`
3. 从页面内嵌 JSON 中提取 `play_addr` 播放地址并下载
### 关键代码(`_download_douyin_manual` 重写)
```python
# 1. 获取新鲜 ttwid
ttwid_resp = await client.post(
"https://ttwid.bytedance.com/ttwid/union/register/",
json={"region": "cn", "aid": 6383, "service": "www.douyin.com", ...}
)
ttwid = ttwid_resp.cookies.get("ttwid", "")
# 2. 访问移动端分享页
page_resp = await client.get(
f"https://m.douyin.com/share/video/{video_id}",
headers={"cookie": f"ttwid={ttwid}", ...}
)
# 3. 提取 play_addr
addr_match = re.search(r'"play_addr":\{"uri":"([^"]+)","url_list":\["([^"]+)"', page_text)
video_url = addr_match.group(2).replace(r"\u002F", "/")
```
### 优势
- 不再依赖手动维护的 `DOUYIN_COOKIE`,ttwid 每次请求自动获取
- 不受 yt-dlp 对抖音支持状况影响
- 所有用户通用,不绑定特定账号
---
## 🧹 清理 DOUYIN_COOKIE 配置
`DOUYIN_COOKIE` 仅用于文案提取,新方案不再需要,已从以下位置删除:
| 文件 | 变更 |
|------|------|
| `backend/.env` | 删除 `DOUYIN_COOKIE` 配置项及注释 |
| `backend/app/core/config.py` | 删除 `DOUYIN_COOKIE: str = ""` 字段定义 |
| `backend/app/modules/tools/service.py` | 删除 yt-dlp 传 cookie 逻辑和 `_write_netscape_cookies` 辅助函数 |
---
## 🔤 前端文案修正
将文案提取界面中的"AI 洗稿结果"改为"AI 改写结果"。
| 文件 | 变更 |
|------|------|
| `frontend/src/features/home/ui/ScriptExtractionModal.tsx` | `AI 洗稿结果``AI 改写结果` |
| `backend/app/modules/tools/service.py` | 注释中"洗稿"→"改写" |
| `backend/app/services/glm_service.py` | docstring 中"洗稿"→"改写文案" |
---
## 📦 其他变更
- **yt-dlp 升级**`2025.12.08``2026.2.21`
- **yt-dlp 初始化修正**:改为 `YoutubeDL(ydl_opts)` 直接传参初始化(原先空初始化后 update params 不生效)
- **User-Agent 更新**yt-dlp 中 `Chrome/91``Chrome/131`
---
## 涉及文件汇总
### 后端修改
| 文件 | 变更 |
|------|------|
| `backend/app/modules/tools/service.py` | 重写 `_download_douyin_manual`(移动端分享页方案);修正 yt-dlp 初始化;清理 cookie 相关代码;注释改写 |
| `backend/app/services/glm_service.py` | docstring "洗稿" → "改写文案" |
| `backend/app/core/config.py` | 删除 `DOUYIN_COOKIE` 字段 |
| `backend/.env` | 删除 `DOUYIN_COOKIE` 配置 |
| `backend/requirements.txt` | yt-dlp 版本升级 |
### 前端修改
| 文件 | 变更 |
|------|------|
| `frontend/src/features/home/ui/ScriptExtractionModal.tsx` | "AI 洗稿结果" → "AI 改写结果" |
---
## ✏️ AI 智能改写 — 自定义提示词功能
### 概述
文案提取助手的"AI 智能改写"原先使用硬编码 prompt,用户无法定制改写风格。本次在 checkbox 右侧新增"自定义提示词"折叠区域,用户可编辑自定义 prompt,持久化到 localStorage,后端按需替换默认 prompt。
### 后端修改
**路由层** (`router.py`)`extract_script_tool` 新增可选 Form 参数 `custom_prompt: Optional[str] = Form(None)`,透传给 service。
**服务层** (`service.py`)`extract_script()` 签名新增 `custom_prompt`,透传给 `glm_service.rewrite_script(script, custom_prompt)`
**AI 层** (`glm_service.py`)`rewrite_script(self, text, custom_prompt=None)`,若 `custom_prompt` 有值则用自定义 prompt + 原文拼接,否则保持原有默认 prompt。
```python
if custom_prompt and custom_prompt.strip():
prompt = f"""{custom_prompt.strip()}
原始文案:
{text}"""
else:
prompt = f"""请将以下视频文案进行改写。...(原有默认)"""
```
### 前端修改
**Hook** (`useScriptExtraction.ts`)
- 新增 `customPrompt` / `showCustomPrompt` 状态
- 初始值从 `localStorage.getItem("vigent_rewriteCustomPrompt")` 恢复
- `customPrompt` 变化时防抖 300ms 保存到 localStorage
- `handleExtract()` 中若 `doRewrite && customPrompt.trim()` 有值,追加 `formData.append("custom_prompt", ...)`
- modal 重置时不清空 customPrompt持久化偏好
**UI** (`ScriptExtractionModal.tsx`)
- checkbox 同行右侧新增"自定义提示词 ▼"按钮(仅 `doRewrite` 时显示)
- 点击展开 textarea 编辑区域,底部提示"留空则使用默认提示词"
- 取消勾选 AI 智能改写时,自定义提示词区域自动隐藏
### 涉及文件
| 文件 | 变更 |
|------|------|
| `backend/app/modules/tools/router.py` | 新增 `custom_prompt` Form 参数 |
| `backend/app/modules/tools/service.py` | `extract_script()` 透传 `custom_prompt` |
| `backend/app/services/glm_service.py` | `rewrite_script()` 支持自定义 prompt |
| `frontend/.../useScriptExtraction.ts` | 新增状态、localStorage 持久化、FormData 传参 |
| `frontend/.../ScriptExtractionModal.tsx` | UI 按钮 + 展开 textarea |
### 验证
- 后端 `python -m py_compile` 三个文件通过
- 前端 `npx tsc --noEmit` 通过
---
## 🐛 SSR 构建修复 — localStorage is not defined
### 问题
`npm run build` 报错 `ReferenceError: localStorage is not defined`,因为 `useScriptExtraction.ts` 中 `useState` 的初始化函数在 SSR(Node.js)环境下也会执行,而服务端没有 `localStorage`。
### 修复
`useState` 初始化加 `typeof window !== "undefined"` 守卫:
```typescript
const [customPrompt, setCustomPrompt] = useState(
() => typeof window !== "undefined" ? localStorage.getItem(CUSTOM_PROMPT_KEY) || "" : ""
);
```
| 文件 | 变更 |
|------|------|
| `frontend/.../useScriptExtraction.ts` | `useState` 初始化增加 SSR 安全守卫 |
---
## 🎬 片头副标题功能
### 概述
新增片头副标题(secondary_title),显示在主标题下方,用于补充说明或悬念引导。副标题有独立的样式配置(字体、字号、颜色等),可由 AI 同时生成,限 20 字,仅在视频画面中显示,不参与发布标题。
命名约定:后端 `secondary_title`(snake_case),前端 `videoSecondaryTitle`(camelCase),用户界面"片头副标题"。
---
### 后端修改
| 文件 | 变更 |
|------|------|
| `backend/app/modules/videos/schemas.py` | `GenerateRequest` 新增 4 个可选字段:`secondary_title``secondary_title_style_id``secondary_title_font_size``secondary_title_top_margin` |
| `backend/app/services/glm_service.py` | AI prompt 增加副标题生成要求(不超过 20 字),JSON 格式新增 `secondary_title` 字段 |
| `backend/app/modules/ai/router.py` | `GenerateMetaResponse` 增加 `secondary_title: str = ""`endpoint 返回时取 `result.get("secondary_title", "")` |
| `backend/app/modules/videos/workflow.py` | `use_remotion` 条件增加 `or req.secondary_title`;副标题样式解析复用 `get_style("title", ...)`;字号/间距覆盖;`prepare_style_for_remotion` 处理副标题字体;`remotion_service.render()` 传入 `secondary_title` + `secondary_title_style` |
| `backend/app/services/remotion_service.py` | `render()` 新增 `secondary_title``secondary_title_style` 参数,构建 CLI 参数 `--secondaryTitle``--secondaryTitleStyle` |
### Remotion 修改
| 文件 | 变更 |
|------|------|
| `remotion/render.ts` | `RenderOptions` 新增 `secondaryTitle?` + `secondaryTitleStyle?``parseArgs()` 新增 switch case`inputProps` 新增两个字段 |
| `remotion/src/components/Title.tsx` | `TitleProps` 新增 `secondaryTitle?` 和 `secondaryTitleStyle?`;`AbsoluteFill` 改为 `flexDirection: 'column'` 垂直堆叠;主标题 `<h1>` 后增加副标题 `<h2>`,独立样式(默认字号 48px、字重 700),共享淡入淡出动画;副标题字体使用独立 `@font-face`(`SecondaryTitleFont`),避免与主标题冲突 |
| `remotion/src/Video.tsx` | `VideoProps` 新增 `secondaryTitle?` + `secondaryTitleStyle?`;传递给 `<Title>` 组件;渲染条件改为 `{(title \|\| secondaryTitle) && ...}` |
| `remotion/src/Root.tsx` | `defaultProps` 新增 `secondaryTitle: undefined` + `secondaryTitleStyle: undefined` |
### 前端修改
| 文件 | 变更 |
|------|------|
| `frontend/src/shared/lib/title.ts` | 新增 `SECONDARY_TITLE_MAX_LENGTH = 20``clampSecondaryTitle()` |
| `frontend/src/features/home/model/useHomeController.ts` | 新增状态 `videoSecondaryTitle`、`selectedSecondaryTitleStyleId`、`secondaryTitleFontSize`、`secondaryTitleTopMargin`、`secondaryTitleSizeLocked`;新建 `secondaryTitleInput = useTitleInput({ maxLength: 20 })`(不 sync 到发布页);`handleGenerateMeta()` 接收并填充 `secondary_title`;`handleGenerate()` 构建 payload 增加副标题字段;return 暴露所有新状态 |
| `frontend/src/features/home/model/useHomePersistence.ts` | 新增 localStorage key`secondaryTitle``secondaryTitleStyle``secondaryTitleFontSize``secondaryTitleTopMargin`;对应的恢复和保存 effect |
| `frontend/src/features/home/ui/TitleSubtitlePanel.tsx` | Props 新增副标题相关;主标题输入框下方添加"片头副标题(限制 20 个字)"输入框;副标题样式选择器(复用 titleStyles 预设)、字号滑块(30-100px)、间距滑块(0-100px) |
| `frontend/src/features/home/ui/FloatingStylePreview.tsx` | 标题预览改为 flex column 布局;主标题下方增加副标题预览行,独立样式渲染 |
| `frontend/src/features/home/ui/HomePage.tsx` | 从 `useHomeController` 解构新状态,传给 `TitleSubtitlePanel` |
---
## 🐛 参考音频上传 — 中文文件名 InvalidKey 修复
### 问题
上传中文名参考音频(如"我的声音.wav")时,Supabase Storage 报 `InvalidKey`,因为存储路径直接使用了原始中文文件名。
### 修复
`ref_audios/service.py` 新增 `sanitize_filename()` 函数,将存储路径的文件名清洗为 ASCII 安全字符(仅 `A-Za-z0-9._-`
- NFKD 规范化 → 丢弃非 ASCII → 非法字符替换为 `_`
- 纯中文/emoji 清洗后为空时,使用 MD5 哈希兜底(如 `audio_e924b1193007`
- 文件名限长 50 字符
- 原始中文文件名保留在 metadata 中作为展示名,前端显示不受影响
```
修复前: cbbe.../1771915755_我的声音.wav → InvalidKey
修复后: cbbe.../1771915755_audio_xxxxxxxx.wav → 上传成功
```
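按上述规则,清洗逻辑大致如下(示意实现,非 `ref_audios/service.py` 原文逐字照搬):

```python
# 示意:把文件名清洗为 Supabase Storage 可接受的 ASCII 安全字符
import hashlib
import re
import unicodedata

def sanitize_filename(filename: str, max_len: int = 50) -> str:
    stem, dot, ext = filename.rpartition(".")
    if not dot:                 # 没有扩展名时,rpartition 会把整个名字放在 ext
        stem, ext = ext, ""
    # 1. NFKD 规范化后丢弃非 ASCII 字符
    ascii_stem = unicodedata.normalize("NFKD", stem).encode("ascii", "ignore").decode()
    # 2. 非法字符统一替换为 "_",仅保留 A-Za-z0-9._-
    ascii_stem = re.sub(r"[^A-Za-z0-9._-]", "_", ascii_stem).strip("._")
    # 3. 纯中文/emoji 清洗后为空时,用 MD5 哈希兜底
    if not ascii_stem:
        digest = hashlib.md5(filename.encode("utf-8")).hexdigest()[:12]
        ascii_stem = f"audio_{digest}"
    safe_ext = re.sub(r"[^A-Za-z0-9]", "", ext)
    result = f"{ascii_stem}.{safe_ext}" if safe_ext else ascii_stem
    return result[:max_len]     # 文件名限长 50 字符
```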
| 文件 | 变更 |
|------|------|
| `backend/app/modules/ref_audios/service.py` | 新增 `sanitize_filename()` 函数,上传路径使用清洗后文件名 |

Docs/DevLogs/Day26.md Normal file

@@ -0,0 +1,239 @@
## 🎨 前端优化:板块合并 + 序号标题 + UI 精细化 (Day 26)
### 概述
首页原有 9 个独立板块(左栏 7 个 + 右栏 2 个),每个都有自己的卡片容器和标题,视觉碎片化严重。本次将相关板块合并为 5 个主板块,添加中文序号(一~十),移除 emoji 图标,并对多个子组件的布局和交互细节进行优化。
---
## ✅ 改动内容
### 1. 板块合并方案
**左栏(4 个主板块 + 2 个独立区域):**
| 序号 | 板块名 | 子板块 | 原组件 |
|------|--------|--------|--------|
| 一 | 文案提取与编辑 | — | ScriptEditor |
| 二 | 标题与字幕 | — | TitleSubtitlePanel |
| 三 | 配音 | 配音方式 / 配音列表 | VoiceSelector + GeneratedAudiosPanel |
| 四 | 素材编辑 | 视频素材 / 时间轴编辑 | MaterialSelector + TimelineEditor |
| 五 | 背景音乐 | — | BgmPanel |
| — | 生成按钮 | — | GenerateActionBar(不编号) |
**右栏(1 个主板块):**
| 序号 | 板块名 | 子板块 | 原组件 |
|------|--------|--------|--------|
| 六 | 作品 | 作品列表 / 作品预览 | HistoryList + PreviewPanel |
**发布页(/publish):**
| 序号 | 板块名 |
|------|--------|
| 七 | 平台账号 |
| 八 | 选择发布作品 |
| 九 | 发布信息 |
| 十 | 选择发布平台 |
### 2. embedded 模式
6 个组件新增 `embedded?: boolean` prop(默认 `false`):
- `VoiceSelector` — embedded 时不渲染外层卡片和主标题
- `GeneratedAudiosPanel` — embedded 时两行布局:第 1 行(语速+生成配音右对齐)、第 2 行(配音列表+刷新)
- `MaterialSelector` — embedded 时自渲染 h3 子标题"视频素材"+ 上传/刷新按钮同行
- `TimelineEditor` — embedded 时自渲染 h3 子标题"时间轴编辑"+ 画面比例/播放控件同行
- `PreviewPanel` — embedded 时不渲染外层卡片和标题
- `HistoryList` — embedded 时不渲染外层卡片和标题(刷新按钮由 HomePage 提供)
### 3. 序号标题 + emoji 移除
所有编号板块移除 emoji 图标,使用纯中文序号:
- ScriptEditor: `✍️ 文案提取与编辑``一、文案提取与编辑`
- TitleSubtitlePanel: `🎬 标题与字幕``二、标题与字幕`
- BgmPanel: `🎵 背景音乐``五、背景音乐`
- HomePage 右栏: `五、作品``六、作品`
- PublishPage: `👤 平台账号``七、平台账号``📹 选择发布作品``八、选择发布作品``✍️ 发布信息``九、发布信息``📱 选择发布平台``十、选择发布平台`
### 4. 子标题与分隔样式
- **主标题**: `text-base sm:text-lg font-semibold text-white`
- **子标题**: `text-sm font-medium text-gray-400`
- **分隔线**: `<div className="border-t border-white/10 my-4" />`
### 5. 配音列表布局优化
GeneratedAudiosPanel embedded 模式下采用两行布局:
- **第 1 行**:语速下拉 + 生成配音按钮(右对齐,`flex justify-end`)
- **第 2 行**`<h3>配音列表</h3>` + 刷新按钮(两端对齐)
- 非 embedded 模式保持原单行布局
### 6. TitleSubtitlePanel 下拉对齐
- 标题样式/副标题样式/字幕样式三行标签统一 `w-20`(固定 80px),确保下拉菜单垂直对齐
- 下拉菜单宽度 `w-1/3 min-w-[100px]`,避免过宽
### 7. RefAudioPanel 文案简化
- 原底部段落"上传任意语音样本3-10秒…" 移至 "我的参考音频" 标题旁,简化为 `(上传3-10秒语音样本)`
### 8. 账户下拉菜单添加手机号
- AccountSettingsDropdown 在账户有效期上方新增手机号显示区域
- 显示 `user?.phone || '未知账户'`
### 9. 标题显示模式对副标题生效
- **payload 修复**: `useHomeController.ts``title_display_mode` 的发送条件从 `videoTitle.trim()` 改为 `videoTitle.trim() || videoSecondaryTitle.trim()`,确保仅有副标题时也能发送显示模式
- **UI 调整**: 短暂显示/常驻显示下拉从片头标题输入行移至"二、标题与字幕"板块标题行(与预览样式按钮同行),明确表示该设置对标题和副标题同时生效
- Remotion 端 `Title.tsx` 已支持(标题和副标题作为整体组件渲染,`displayMode` 统一控制)
### 10. 时间轴模糊遮罩
遮罩从外层 wrapper 移入"四、素材编辑"卡片内,仅覆盖时间轴子区域(`rounded-xl`)。
### 11. 登录后用户信息立即可用
- AuthContext 新增 `setUser` 方法暴露给消费者
- 登录页成功后调用 `setUser(result.user)` 立即写入 Context无需等页面刷新
- 修复登录后账户下拉显示"未知账户"、刷新后才显示手机号的问题
### 12. 文案与选项微调
- MaterialSelector 描述 `(可多选最多4个)``(上传自拍视频最多可选4个)`
- TitleSubtitlePanel 显示模式选项 `短暂显示/常驻显示``标题短暂显示/标题常驻显示`
### 13. UI/UX 体验优化6 项)
- **操作按钮移动端可见**: 配音列表、作品列表、素材列表、参考音频、历史文案的操作按钮从 `opacity-0`(hover 才显示)改为 `opacity-40`(平时半透明可见,hover 全亮),解决触屏设备无法发现按钮的问题
- **手机号脱敏**: AccountSettingsDropdown 手机号中间四位遮掩 `138****5678`
- **标题字数计数器**: TitleSubtitlePanel 标题/副标题输入框右侧显示实时字数 `3/15`,超限变红
- **列表滚动条提示**: ~~配音列表、作品列表、素材列表、BGM 列表从 `hide-scrollbar` 改为 `custom-scrollbar`~~ → 已全部改回 `hide-scrollbar` 隐藏滚动条(滚动功能不变)
- **时间轴拖拽提示**: TimelineEditor 色块左上角新增 `GripVertical` 抓手图标,暗示可拖拽排序
- **截取滑块放大**: ClipTrimmer 手柄从 16px 放大到 20px触控区从 32px 放大到 40px
### 14. 代码质量修复4 项)
- **AccountSettingsDropdown**: 关闭密码弹窗补齐 `setSuccess('')` 清空
- **MaterialSelector**: `selectedSet``useMemo` 避免每次渲染重建
- **TimelineEditor**: `visibleSegments`/`overflowSegments``useMemo`
- **MaterialSelector**: 素材满 4 个时非选中项按钮加 `disabled`
### 15. 发布页平台账号响应式布局
- **单行布局**:图标+名称+状态在左,按钮在右(`flex items-center`
- **移动端紧凑**:图标 `h-6 w-6`、按钮 `text-xs px-2 py-1 rounded-md`、间距 `space-y-2 px-3 py-2.5`
- **桌面端宽松**`sm:h-7 sm:w-7``sm:text-sm sm:px-3 sm:py-1.5 sm:rounded-lg``sm:space-y-3 sm:px-4 sm:py-3.5`
- 两端各自美观,风格与其他板块一致
### 16. 移动端刷新回顶部修复
- **问题**: 移动端刷新页面后不回到顶部,而是滚动到背景音乐板块
- **根因**: 1) 浏览器原生滚动恢复覆盖 `scrollTo(0,0)`;2) 列表 scroll effect 有双依赖(`selectedId` + `list`),数据异步加载时第二次触发跳过了 ref 守卫,执行了 `scrollIntoView` 导致页面跳动
- **修复**: 三管齐下:① `history.scrollRestoration = "manual"` 禁用浏览器原生恢复;② 时间门控 `scrollEffectsEnabled` ref(1 秒内禁止所有列表自动滚动)替代单次 ref 守卫;③ 200ms 延迟兜底 `scrollTo(0,0)`
### 17. 移动端样式预览窗口缩小
- **问题**: 移动端点击"预览样式"后窗口占满整屏(宽 358px,高约 636px),遮挡样式调节控件
- **修复**: 移动端宽度从 `window.innerWidth - 32` 缩小到 **160px**;位置从左上角改为**右下角**(`right:12, bottom:12`),不遮挡上方控件;最大高度限制 `50dvh`
- 桌面端保持不变(280px,左上角)
### 18. 列表滚动条统一隐藏
- 将 Day 26 早期改为 `custom-scrollbar`(细紫色滚动条)的 7 处全部改回 `hide-scrollbar`
- 涉及:BgmPanel、GeneratedAudiosPanel、HistoryList、MaterialSelector(2处)、ScriptExtractionModal(2处)
- 滚动功能不受影响,仅视觉上不显示滚动条
### 19. 配音按钮移动端适配
- VoiceSelector "选择声音/克隆声音" 按钮:内边距 `px-4``px-2 sm:px-4`,字号加 `text-sm sm:text-base`,图标加 `shrink-0`
- 修复移动端窄屏下按钮被挤压导致"克隆声音"不可见的问题
### 20. 素材标题溢出修复
- MaterialSelector embedded 标题行移除 `whitespace-nowrap`
- 描述文字 `(上传自拍视频最多可选4个)` 在移动端隐藏(`hidden sm:inline`),桌面端正常显示
- 修复移动端刷新按钮被推出容器外的问题
### 21. 生成配音按钮放大
- "生成配音" 作为核心操作按钮,从辅助尺寸升级为主操作尺寸
- 内边距 `px-2/px-3 py-1/py-1.5``px-4 py-2`,字号 `text-xs``text-sm font-medium`
- 图标 `h-3.5 w-3.5``h-4 w-4`,新增 `shadow-sm` + hover `shadow-md`
- embedded 与非 embedded 模式统一放大
### 22. 生成进度条位置调整
- **问题**: 生成进度条在"六、作品"卡片内部(作品预览下方),不够醒目
- **修复**: 进度条从 PreviewPanel 内部提取到 HomePage 右栏,作为独立卡片渲染在"六、作品"卡片**上方**
- 使用紫色边框(`border-purple-500/30`)区分,显示任务消息和百分比
- PreviewPanel embedded 模式下不再渲染进度条(传入 `currentTask={null}`)
- 生成完成后进度卡片自动消失
### 23. LatentSync 超时修复
- **问题**: 约 2 分钟的视频(3023 帧,190 段推理)预计推理 54 分钟,但 httpx 超时仅 20 分钟,导致 LatentSync 调用失败并回退到无口型同步
- **根因**: `lipsync_service.py``httpx.AsyncClient(timeout=1200.0)` 不足以覆盖长视频推理时间
- **修复**: 超时从 `1200s`(20 分钟)改为 `3600s`(1 小时),足以覆盖 2-3 分钟视频的推理
### 24. 字幕时间戳节奏映射(修复长视频字幕漂移)
- **问题**: 2 分钟视频字幕明显对不上语音,越到后面偏差越大
- **根因**: `whisper_service.py` 中 `original_text` 的处理逻辑丢弃了 Whisper 逐词时间戳,仅保留总时间范围后做全程线性插值,每个字分配相同时长,完全忽略语速变化和停顿
- **修复**: 保留 Whisper 的逐字时间戳作为语音节奏模板,将原文字符按比例映射到 Whisper 时间节奏上(rhythm-mapping),而非线性均分。字幕文字不变,只是时间戳跟随真实语速
- **算法**: 原文第 i 个字符映射到 Whisper 时间线的 `(i/N)*M` 位置(N=原文字符数,M=Whisper 字符数),在相邻 Whisper 时间点间线性插值
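节奏映射的最小示意如下(假设 Whisper 逐字时间戳已整理为 `[(字符, start, end), ...]`,非 `whisper_service.py` 原文):

```python
# 示意:把原文 N 个字符按比例映射到 Whisper M 个字符的时间节奏上
def rhythm_map(original_text, whisper_chars):
    """whisper_chars: [(char, start, end), ...],来自 Whisper 的逐字时间戳。"""
    n, m = len(original_text), len(whisper_chars)

    def time_at(pos: float) -> float:
        # pos 为 Whisper 时间线上的"字符坐标":整数部分定位字符,小数部分在该字符内插值
        j = min(int(pos), m - 1)
        _, start, end = whisper_chars[j]
        return start + (end - start) * min(pos - j, 1.0)

    mapped = []
    for i, ch in enumerate(original_text):
        # 原文第 i 个字符对应 Whisper 时间线的 (i/N)*M ~ ((i+1)/N)*M 区间
        mapped.append({
            "char": ch,
            "start": time_at(i / n * m),
            "end": time_at((i + 1) / n * m),
        })
    return mapped
```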
---
## 📁 修改文件清单
| 文件 | 改动 |
|------|------|
| `VoiceSelector.tsx` | 新增 embedded prop移动端按钮适配`px-2 sm:px-4` |
| `GeneratedAudiosPanel.tsx` | 新增 embedded prop两行布局操作按钮可见度"生成配音"按钮放大 |
| `MaterialSelector.tsx` | 新增 embedded prop自渲染子标题+操作按钮useMemodisabled 守卫,操作按钮可见度,标题溢出修复 |
| `TimelineEditor.tsx` | 新增 embedded prop自渲染子标题+控件useMemo拖拽抓手图标 |
| `PreviewPanel.tsx` | 新增 embedded prop |
| `HistoryList.tsx` | 新增 embedded prop操作按钮可见度 |
| `ScriptEditor.tsx` | 标题加序号,移除 emoji操作按钮可见度 |
| `TitleSubtitlePanel.tsx` | 标题加序号,移除 emoji下拉对齐显示模式下拉上移字数计数器 |
| `BgmPanel.tsx` | 标题加序号 |
| `HomePage.tsx` | 核心重构:合并板块、序号标题、生成配音按钮迁入、`scrollRestoration` + 延迟兜底修复刷新回顶部、生成进度条提取到作品卡片上方 |
| `PublishPage.tsx` | 四个板块加序号(七~十),移除 emoji平台卡片响应式单行布局 |
| `RefAudioPanel.tsx` | 简化提示文案,操作按钮可见度 |
| `AccountSettingsDropdown.tsx` | 新增手机号显示(脱敏),补齐 success 清空 |
| `AuthContext.tsx` | 新增 `setUser` 方法,登录后立即更新用户状态 |
| `login/page.tsx` | 登录成功后调用 `setUser` 写入用户数据 |
| `useHomeController.ts` | titleDisplayMode 条件修复,列表 scroll 时间门控 `scrollEffectsEnabled` |
| `FloatingStylePreview.tsx` | 移动端预览窗口缩小160px并移至右下角 |
| `ScriptExtractionModal.tsx` | 滚动条改回隐藏 |
| `ClipTrimmer.tsx` | 滑块手柄放大、触控区增高 |
| `lipsync_service.py` | httpx 超时从 1200s 改为 3600s |
| `whisper_service.py` | 字幕时间戳从线性插值改为 Whisper 节奏映射 |
---
## 🔍 验证
- `npm run build` — 零报错零警告
- 合并后布局:各子板块分隔清晰、主标题有序号
- 向后兼容:`embedded` 默认 `false`,组件独立使用不受影响
- 配音列表两行布局:语速+生成配音在上,配音列表+刷新在下
- 下拉菜单垂直对齐正确
- 短暂显示/常驻显示对标题和副标题同时生效
- 操作按钮在移动端(触屏)可见
- 手机号脱敏显示
- 标题字数计数器正常
- 列表滚动条全部隐藏
- 时间轴拖拽抓手图标显示
- 发布页平台卡片:移动端紧凑、桌面端宽松,风格一致
- 移动端刷新后回到顶部,不再滚动到背景音乐位置
- 移动端样式预览窗口不遮挡控件
- 移动端配音按钮(选择声音/克隆声音)均可见
- 移动端素材标题行按钮不溢出
- 生成配音按钮视觉层级高于辅助按钮
- 生成进度条在作品卡片上方独立显示
- LatentSync 长视频推理不再超时回退
- 字幕时间戳与语音节奏同步,长视频不漂移

Docs/DevLogs/Day27.md Normal file

@@ -0,0 +1,231 @@
## Remotion 描边修复 + 字体样式扩展 + TypeScript 修复 (Day 27)
### 概述
修复标题/字幕描边渲染问题(描边过粗 + 副标题重影),扩展字体样式选项(标题 4→12、字幕 4→8),修复 Remotion 项目 TypeScript 类型错误。
---
## ✅ 改动内容
### 1. 描边渲染修复(标题 + 字幕)
- **问题**: 标题黑色描边过粗,副标题出现重影/鬼影
- **根因**: `buildTextShadow` 用 4 方向 `textShadow` 模拟描边,对角线叠加导致描边视觉上比实际 `stroke_size` 更粗;4 角方向在中间有间隙和叠加,造成重影
- **修复**: 改用 CSS 原生描边 `-webkit-text-stroke` + `paint-order: stroke fill`(Remotion 用 Chromium 渲染,完美支持)
- **旧方案**:
```javascript
textShadow: `-8px -8px 0 #000, 8px -8px 0 #000, -8px 8px 0 #000, 8px 8px 0 #000, 0 0 16px rgba(0,0,0,0.5), 0 2px 4px rgba(0,0,0,0.3)`
```
- **新方案**:
```javascript
WebkitTextStroke: `5px #000000`,
paintOrder: 'stroke fill',
textShadow: `0 2px 4px rgba(0,0,0,0.3)`,
```
- 同时将所有预设样式的 `stroke_size` 从 8 降到 5,配合原生描边视觉更干净
### 2. 字体样式扩展
**标题样式**: 4 个 → 12 个(+8)
| ID | 样式名 | 字体 | 配色 |
|----|--------|------|------|
| title_pangmen | 庞门正道 | 庞门正道标题体3.0 | 白字黑描 |
| title_round | 优设标题圆 | 优设标题圆 | 白字紫描 |
| title_alibaba | 阿里数黑体 | 阿里巴巴数黑体 | 白字黑描 |
| title_chaohei | 文道潮黑 | 文道潮黑 | 青蓝字深蓝描 |
| title_wujie | 无界黑 | 标小智无界黑 | 白字深灰描 |
| title_houdi | 厚底黑 | Aa厚底黑 | 红字深黑描 |
| title_banyuan | 寒蝉半圆体 | 寒蝉半圆体 | 白字黑描 |
| title_jixiang | 欣意吉祥宋 | 字体圈欣意吉祥宋 | 金字棕描 |
**字幕样式**: 4 个 → 8 个(+4)
| ID | 样式名 | 字体 | 高亮色 |
|----|--------|------|--------|
| subtitle_pink | 少女粉 | DingTalk JinBuTi | 粉色 #FF69B4 |
| subtitle_lime | 清新绿 | DingTalk Sans | 荧光绿 #76FF03 |
| subtitle_gold | 金色隶书 | 阿里妈妈刀隶体 | 金色 #FDE68A |
| subtitle_kai | 楷体红字 | SimKai | 红色 #FF4444 |
### 3. TypeScript 类型错误修复
- **Root.tsx**: `Composition` 泛型类型与 `calculateMetadata` 参数类型不匹配 — 内联 `calculateMetadata` 并显式标注参数类型,`defaultProps` 使用 `satisfies VideoProps` 约束
- **Video.tsx**: `VideoProps` 接口添加 `[key: string]: unknown` 索引签名,兼容 Remotion 要求的 `Record<string, unknown>` 约束
- **VideoLayer.tsx**: `OffthreadVideo` 组件不支持 `loop` prop — 移除(该 prop 原本就被忽略)
### 4. 进度条文案还原
- **问题**: 进度条显示后端推送的详细阶段消息(如"正在合成唇型"),用户希望只显示"正在AI生成中..."
- **修复**: `HomePage.tsx` 进度条文案从 `{currentTask.message || "正在AI生成中..."}` 改为固定 `正在AI生成中...`
---
## 📁 修改文件清单
| 文件 | 改动 |
|------|------|
| `remotion/src/components/Title.tsx` | `buildTextShadow` → `buildStrokeStyle`CSS 原生描边),标题+副标题同时生效 |
| `remotion/src/components/Subtitles.tsx` | `buildTextShadow` → `buildStrokeStyle`CSS 原生描边) |
| `remotion/src/Root.tsx` | 修复 `Composition` 泛型类型、`calculateMetadata` 参数类型 |
| `remotion/src/Video.tsx` | `VideoProps` 添加索引签名 |
| `remotion/src/components/VideoLayer.tsx` | 移除 `OffthreadVideo` 不支持的 `loop` prop |
| `backend/assets/styles/title.json` | 标题样式从 4 个扩展到 12 个,`stroke_size` 8→5 |
| `backend/assets/styles/subtitle.json` | 字幕样式从 4 个扩展到 8 个 |
| `frontend/.../HomePage.tsx` | 进度条文案还原为固定"正在AI生成中..." |
---
## 🔍 验证
- `npx tsc --noEmit` — 零错误
- `npm run build:render` — 渲染脚本编译成功
- `npm run build`(前端)— 零报错
- 描边:标题/副标题/字幕使用 CSS 原生描边,无重影、无虚胖
- 样式选择:前端下拉可加载全部 12 个标题 + 8 个字幕样式
---
## 视频生成流水线性能优化
### 概述
针对视频生成流水线进行全面性能优化,涵盖 FFmpeg 编码参数、LatentSync 推理参数、多素材并行化以及后处理阶段并行化。预估 15s 单素材视频从 ~280s 降至 ~190s (32%),30s 双素材从 ~400s 降至 ~240s (40%)。
**服务器配置**: 2x RTX 3090 (24GB), 2x Xeon E5-2680 v4 (56核), 192GB RAM
### 第一阶段FFmpeg 编码优化
**最终合成 preset `slow` → `medium`**
- 合成阶段从 ~50s 降到 ~25s,质量几乎无变化
**中间文件 CRF 18 → 23**
- 中间产物(trim、prepare_segment、concat、loop、normalize_orientation)不是最终输出,不需要高质量编码
- 每个中间步骤快 3-8 秒
**最终合成 CRF 18 → 20**
- 15 秒口播视频 CRF 18 vs 20 肉眼无法区分
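两级编码差异直接体现在 x264 参数上:中间产物用 CRF 23 换速度,最终合成保留 `medium` + CRF 20。示意如下(命令按本节描述组合,非 `video_service.py` 原文):

```python
# 示意:中间产物与最终合成使用不同的 x264 参数
import subprocess

def encode(src: str, dst: str, final: bool) -> None:
    args = ["ffmpeg", "-y", "-i", src, "-c:v", "libx264"]
    if final:
        args += ["-preset", "medium", "-crf", "20"]   # 最终合成:质量优先
    else:
        args += ["-crf", "23"]                        # 中间产物:速度优先
    args += ["-c:a", "aac", dst]
    subprocess.run(args, check=True)
```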
### 第二阶段LatentSync 推理参数调优
**inference_steps 20 → 16**
- 推理时间线性减少 20%(~180s → ~144s)
**guidance_scale 2.0 → 1.5**
- classifier-free guidance 权重降低,每步计算量微降(5-10%)
> ⚠️ 两项需重启 LatentSync 服务后测试唇形质量,确认可接受再保留。如质量不佳可回退 .env 参数。
### 第三阶段:多素材流水线并行化
**素材下载 + 归一化并行**
- 串行 `for` 循环改为 `asyncio.gather()``normalize_orientation` 通过 `run_in_executor` 在线程池执行
- N 个素材从串行 N×5s → ~5s
**片段预处理并行**
- 逐个 `prepare_segment` 改为 `asyncio.gather()` + `run_in_executor`
- 2 素材 ~90s → ~50s,4 素材 ~180s → ~60s
### 第四阶段:流水线交叠
**Whisper 字幕对齐 与 BGM 混音 并行**
- 两者互不依赖(都只依赖 audio_path),用 `asyncio.gather()` 并行执行
- 单素材模式下 Whisper 从 LatentSync 之后的串行步骤移至与 BGM 并行
- 不开 BGM 或不开字幕时行为不变,只有同时启用时才并行
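三、四两个阶段复用同一套模式:把互不依赖的阻塞步骤放进线程池,再用 `asyncio.gather()` 并发等待。示意如下(`prepare_segment` 为占位函数,实际签名以 `video_service.py` 为准):

```python
# 示意:把阻塞的 FFmpeg 片段预处理放进线程池并发执行
import asyncio
from functools import partial

async def prepare_segments_parallel(segments: list[dict]) -> list[str]:
    loop = asyncio.get_running_loop()
    tasks = [
        # prepare_segment 为占位函数,代表一次阻塞的片段预处理
        loop.run_in_executor(None, partial(prepare_segment, seg))
        for seg in segments
    ]
    # N 个片段由"串行 N×单段耗时"变为约等于最慢一段的耗时
    return await asyncio.gather(*tasks)
```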
### 修改文件
| 文件 | 改动 |
|------|------|
| `backend/app/services/video_service.py` | compose: preset slow→medium, CRF 18→20; normalize_orientation/prepare_segment/concat: CRF 18→23 |
| `backend/app/services/lipsync_service.py` | _loop_video_to_duration: CRF 18→23 |
| `backend/.env` | LATENTSYNC_INFERENCE_STEPS=16, LATENTSYNC_GUIDANCE_SCALE=1.5 |
| `backend/app/modules/videos/workflow.py` | import asyncio; 素材下载/归一化并行; 片段预处理并行; Whisper+BGM 并行 |
### 回退方案
- FFmpeg 参数:如画质不满意,将最终 CRF 改回 18、preset 改回 slow
- LatentSync如唇形质量下降将 .env 中 `INFERENCE_STEPS` 改回 20、`GUIDANCE_SCALE` 改回 2.0
- 并行化:纯架构优化,无质量影响,无需回退
---
## MuseTalk + LatentSync 混合唇形同步方案
### 概述
LatentSync 1.6 质量高但推理极慢(~78% 总时长),长视频(>=2min)耗时 20-60 分钟,不可接受。MuseTalk 1.5 是单步潜空间修复(非扩散)模型,逐帧推理速度接近实时(30fps+ on V100),适合长视频。混合方案按音频时长自动路由:短视频用 LatentSync 保质量,长视频用 MuseTalk 保速度。
### 架构
- **路由阈值**: `LIPSYNC_DURATION_THRESHOLD` (默认 120s)
- **短视频 (<120s)**: LatentSync 1.6 (GPU1, 端口 8007)
- **长视频 (>=120s)**: MuseTalk 1.5 (GPU0, 端口 8011)
- **回退**: MuseTalk 不可用时自动 fallback 到 LatentSync
### 改动文件
| 文件 | 改动 |
|------|------|
| `models/MuseTalk/` | 从 Temp/MuseTalk 复制代码 + 下载权重 |
| `models/MuseTalk/scripts/server.py` | 新建 FastAPI 常驻服务 (端口 8011, GPU0) |
| `backend/app/core/config.py` | 新增 MUSETALK_* 和 LIPSYNC_DURATION_THRESHOLD |
| `backend/.env` | 新增对应环境变量 |
| `backend/app/services/lipsync_service.py` | 新增 `_call_musetalk_server()` + 混合路由逻辑 + 扩展 `check_health()` |
---
## MuseTalk 推理性能优化 (server.py v2)
### 概述
MuseTalk 首次长视频测试 (136s, 3404 帧) 耗时 1799s (~30 分钟),分析发现瓶颈集中在人脸检测 (28%)、BiSeNet 合成 (22%)、I/O (17%),而非 UNet 推理本身 (17%)。通过 6 项优化预估降至 8-10 分钟 (~3x 加速)。
### 性能瓶颈分析 (优化前, 1799s)
| 阶段 | 耗时 | 占比 | 瓶颈原因 |
|------|------|------|---------|
| DWPose + 人脸检测 | ~510s | 28% | `batch_size_fa=1`, 每帧跑 2 个 NN, 完全串行 |
| 合成 + BiSeNet 人脸解析 | ~400s | 22% | 每帧都跑 BiSeNet + PNG 写盘 |
| UNet 推理 | ~300s | 17% | batch_size=8 太小 |
| I/O (PNG 读写 + FFmpeg) | ~300s | 17% | PNG 压缩慢, ffmpeg→PNG→imread 链路 |
| VAE 编码 | ~100s | 6% | 逐帧编码, 未批处理 |
### 6 项优化
| # | 优化项 | 详情 |
|---|--------|------|
| 1 | **batch_size 8→32** | `.env` 修改, RTX 3090 显存充裕 |
| 2 | **cv2.VideoCapture 直读帧** | 跳过 ffmpeg→PNG→imread 链路, 省去 3404 次 PNG 编解码 |
| 3 | **人脸检测降频 (每5帧)** | 每 5 帧运行 DWPose + FaceAlignment, 中间帧线性插值 bbox |
| 4 | **BiSeNet mask 缓存 (每5帧)** | 每 5 帧运行 `get_image_prepare_material`, 中间帧用 `get_image_blending` 复用缓存 mask |
| 5 | **cv2.VideoWriter 直写** | 跳过逐帧 PNG 写盘 + ffmpeg 重编码, 用 VideoWriter 直写 mp4 |
| 6 | **每阶段计时** | 7 个阶段精确计时, 方便后续进一步调优 |
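其中第 3 项「检测降频 + 中间帧插值」可以用几行代码说明(示意:`detect_face_bbox` 为占位的人脸检测调用):

```python
# 示意:每 detect_every 帧做一次真实人脸检测,中间帧线性插值 bbox
def interpolate_bboxes(frames, detect_every: int = 5):
    if not frames:
        return []
    keyframe_ids = list(range(0, len(frames), detect_every))
    if keyframe_ids[-1] != len(frames) - 1:
        keyframe_ids.append(len(frames) - 1)      # 保证最后一帧也是关键帧
    key_bboxes = {i: detect_face_bbox(frames[i]) for i in keyframe_ids}  # 占位调用

    bboxes = []
    for i in range(len(frames)):
        if i in key_bboxes:
            bboxes.append(key_bboxes[i])
            continue
        prev = max(k for k in keyframe_ids if k < i)
        nxt = min(k for k in keyframe_ids if k > i)
        t = (i - prev) / (nxt - prev)
        # 在前后两个关键帧的 (x1, y1, x2, y2) 之间线性插值
        bboxes.append(tuple(
            (1 - t) * a + t * b
            for a, b in zip(key_bboxes[prev], key_bboxes[nxt])
        ))
    return bboxes
```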
### 修改文件
| 文件 | 改动 |
|------|------|
| `models/MuseTalk/scripts/server.py` | 完全重写 `_run_inference()`, 新增 `_detect_faces_subsampled()` |
| `backend/.env` | `MUSETALK_BATCH_SIZE` 8→32 |
---
## Remotion 并发渲染优化
### 概述
Remotion 渲染在 56 核服务器上默认只用 8 并发 (`min(8, cores/2)`),改为 16 并发,预估从 ~5 分钟降到 ~2-3 分钟。
### 改动
- `remotion/render.ts`: `renderMedia()` 新增 `concurrency` 参数 (默认 16), 支持 `--concurrency` CLI 参数覆盖
- `remotion/dist/render.js`: 重新编译
### 修改文件
| 文件 | 改动 |
|------|------|
| `remotion/render.ts` | `RenderOptions` 新增 `concurrency` 字段, `renderMedia()` 传入 `concurrency` |
| `remotion/dist/render.js` | TypeScript 重新编译 |

Docs/DevLogs/Day28.md Normal file

@@ -0,0 +1,203 @@
## CosyVoice FP16 加速 + 文档更新 + AI改写界面重构 + 标题字幕面板重排与视频帧预览 (Day 28)
### 概述
CosyVoice 3.0 声音克隆服务开启 FP16 半精度推理,预估提速 30-40%。同步更新 4 个项目文档。重构 AI 改写文案界面(RewriteModal 两步流程 + ScriptExtractionModal 逻辑抽取)。前端将"标题与字幕"面板从第二步移至第四步(素材编辑之后),样式预览窗口背景从紫粉渐变改为视频片头帧截图,实现所见即所得。
---
## ✅ 改动内容
### 1. CosyVoice FP16 半精度加速
- **问题**: CosyVoice 3.0 以 FP32 全精度运行,RTF (Real-Time Factor) 约 0.9-1.35x,生成 2 分钟音频需要约 2 分钟
- **根因**: `AutoModel()` 初始化时未传入 `fp16=True`,LLM 推理和 Flow Matching (DiT) 均在 FP32 下运行
- **修复**: 一行改动开启 FP16 自动混合精度
```python
# 旧: _model = AutoModel(model_dir=str(MODEL_DIR))
# 新:
_model = AutoModel(model_dir=str(MODEL_DIR), fp16=True)
```
- **生效机制**: 在 `CosyVoice3Model` 的 `llm_job()` 和 `token2wav()` 中通过 `torch.cuda.amp.autocast(self.fp16)` 自动将计算转为 FP16
- **预期效果**:
- 推理速度提升 30-40%
- 显存占用降低 ~30%
- 语音质量基本无损(0.5B 模型 FP16 精度充足)
- **验证**: 服务重启后自检通过,健康检查 `ready: true`
### 2. 文档全面更新 (4 个文件)
补充 Day 27 新增的 MuseTalk 混合唇形同步方案、性能优化、Remotion 并发渲染等内容到所有相关文档。
#### README.md
- 项目描述更新为 "LatentSync 1.6 + MuseTalk 1.5 混合唇形同步"
- 唇形同步功能描述改为混合方案(短视频 LatentSync,长视频 MuseTalk)
- 技术栈表新增 MuseTalk 1.5
- 项目结构新增 `models/MuseTalk/`
- 服务架构表新增 MuseTalk (端口 8011)
- 文档中心新增 MuseTalk 部署指南链接
- 性能优化描述新增降频检测 + Remotion 16 并发
#### DEPLOY_MANUAL.md
- GPU 分配说明更新 (GPU0=MuseTalk+CosyVoice, GPU1=LatentSync)
- 步骤 3 拆分为 3a (LatentSync) + 3b (MuseTalk)
- 环境变量表新增 7 个 MuseTalk 变量,移除过时的 `DOUYIN_COOKIE`
- LatentSync 推理步数默认值 20→16
- 测试运行新增 MuseTalk 启动终端
- PM2 管理新增 MuseTalk 服务(第 5 项)
- 端口检查、日志查看命令新增 8011/vigent2-musetalk
#### SUBTITLE_DEPLOY.md
- 技术架构图更新为 LatentSync/MuseTalk 混合路由
- 新增唇形同步路由说明
- Remotion 配置表新增 `concurrency` 参数 (默认 16)
- GPU 分配说明更新
- 更新日志新增 v1.3.0 条目
#### BACKEND_README.md
- 健康检查接口描述更新为含 LatentSync + MuseTalk + 混合路由阈值
- 环境变量配置新增 MuseTalk 相关变量
- 服务集成指南新增"唇形同步混合路由"章节
---
### 3. AI 改写文案界面重构
#### RewriteModal 重构
将 AI 改写弹窗改为两步式流程,提升交互体验:
**第一步 — 配置与触发**
- 自定义提示词输入(可选),自动持久化到 localStorage
- "开始改写"按钮触发 `/api/ai/rewrite` 请求
**第二步 — 结果对比与选择**
- 上方AI 改写结果 + "使用此结果"按钮(紫粉渐变色,醒目)
- 下方:原文对比 + "保留原文"按钮(灰色低调)
- 底部:可"重新改写"(重回第一步,保留自定义提示词)
- ESC 快捷键关闭
#### ScriptExtractionModal 逻辑抽取
将文案提取模态框的全部业务逻辑抽取到独立 hook `useScriptExtraction`
- **useScriptExtraction.ts** (新建): 管理 URL/文件双模式输入、拖拽上传、提取请求、步骤状态机 (config → processing → result)、剪贴板复制
- **ScriptExtractionModal.tsx**: 纯展示组件,消费 hook 返回值,新增 ESC/Enter 快捷键
#### ScriptEditor 工具栏调整
- 按钮组右对齐 (`justify-end`),统一高度 `h-7` 和圆角
- "历史文案"按钮用灰色 (bg-gray-600) 区分辅助功能
- "文案提取助手"用紫色 (bg-purple-600) 表示主功能
- "AI多语言"用绿渐变 (emerald-teal)"AI生成标题标签"用蓝渐变 (blue-cyan)
- "AI智能改写"和"保存文案"移至文本框下方状态栏
---
### 4. 标题字幕面板重排 + 视频帧背景预览
#### 面板顺序重排
`<TitleSubtitlePanel>` 从第二步移至第四步(素材编辑之后),使用户在设置标题字幕样式时已经完成了素材选择和时间轴编排。
新顺序:
```
一、文案提取与编辑(不变)
二、配音(原三)
三、素材编辑(原四)
四、标题与字幕(原二)→ 移到素材编辑之后
```
#### 新建 useVideoFrameCapture hook
从视频 URL 截取 0.1s 处帧画面,返回 JPEG data URL
- 创建 `<video>` 元素,设置 `crossOrigin="anonymous"`(素材存储在 Supabase Storage 跨域地址)
- 先绑定 `loadedmetadata` / `canplay` / `seeked` / `error` 事件监听,再设 src,避免事件丢失
- `loadedmetadata` 或 `canplay` 触发后 seek 到 0.1s,在 `seeked` 回调中用 canvas `drawImage` 截帧
- canvas 缩放到 480px 宽再编码(预览窗口最大 280px),节省内存
- `canvas.toDataURL("image/jpeg", 0.7)` 导出
- 防御 `videoWidth/videoHeight` 为 0 的边界情况
- try-catch 防 canvas taint,失败返回 null,降级渐变
- `isActive` 标志 + `seeked` 去重标志防止 stale 和重复更新
- 截图完成后清理 video 元素释放内存
#### 按需截取(性能优化)
只在样式预览窗口打开时才触发截取:
```typescript
const materialPosterUrl = useVideoFrameCapture(
showStylePreview ? firstTimelineMaterialUrl : null
);
```
截取源优先使用**时间轴第一段素材**(用户拖拽排序后的真实片头),回退到 `selectedMaterials[0]`(未生成配音、时间轴为空时)。
#### 预览背景替换
`FloatingStylePreview` 有视频帧时直接显示原始画面(不加半透明,保证颜色真实),文字靠描边保证可读性;无视频帧时降级为原紫粉渐变背景。
#### 踩坑记录
1. **CORS tainted canvas**: 素材文件存储在 Supabase Storage (`api.hbyrkj.top`),是跨域签名链接。必须设 `video.crossOrigin = "anonymous"` 才能让 canvas `toDataURL` 不被 SecurityError 拦截
2. **时间轴为空**: `useTimelineEditor``audioDuration <= 0`(未选配音)时返回空数组,需回退到 `selectedMaterials[0]`
3. **事件监听顺序**: 必须先绑定事件监听再设 `video.src`,否则快速加载时事件可能丢失
---
## 📁 修改文件清单
| 文件 | 改动 |
|------|------|
| `models/CosyVoice/cosyvoice_server.py` | `AutoModel()` 新增 `fp16=True` 参数 |
| `README.md` | 混合唇形同步描述、技术栈、服务架构、项目结构更新 |
| `Docs/DEPLOY_MANUAL.md` | MuseTalk 部署步骤、环境变量、PM2 管理、端口检查 |
| `Docs/SUBTITLE_DEPLOY.md` | 架构图、Remotion concurrency、GPU 分配、更新日志 |
| `Docs/BACKEND_README.md` | 健康检查、环境变量、混合路由章节 |
| `frontend/.../RewriteModal.tsx` | 两步式改写流程(自定义提示词 → 结果对比) |
| `frontend/.../script-extraction/useScriptExtraction.ts` | **新建** — 文案提取逻辑 hook |
| `frontend/.../ScriptExtractionModal.tsx` | 纯展示组件,消费 hook新增快捷键 |
| `frontend/.../ScriptEditor.tsx` | 工具栏右对齐 + 按钮分色 + 改写/保存移至底部 |
| `frontend/.../useVideoFrameCapture.ts` | **新建** — 视频帧截取 hookcrossOrigin + canvas 缩放 |
| `frontend/.../useHomeController.ts` | 新增 useMemo 计算素材 URL调用帧截取 hookshowStylePreview 门控 |
| `frontend/.../HomePage.tsx` | 面板重排(二↔四互换),编号更新,透传 materialPosterUrl |
| `frontend/.../TitleSubtitlePanel.tsx` | 编号"二"→"四",新增 previewBackgroundUrl prop |
| `frontend/.../FloatingStylePreview.tsx` | 新增 previewBackgroundUrl prop条件渲染视频帧/渐变背景 |
---
## 🔍 验证
- CosyVoice 重启成功,健康检查 `{"ready": true}`
- 自检推理通过(7.2s for "你好")
- FP16 通过 `torch.cuda.amp.autocast(self.fp16)` 在 LLM 和 Flow Matching 阶段生效
- `npx tsc --noEmit` — 零错误
- AI 改写:自定义提示词持久化 → 改写结果 + 原文对比 → "使用此结果"/"保留原文"
- 文案提取URL / 文件双模式 → 处理中动画 → 结果填入
- 面板顺序:一→文案、二→配音、三→素材编辑、四→标题与字幕
- 样式预览背景:有素材时显示真实视频片头帧,无素材降级紫粉渐变
- 预览关闭时不触发截取,不浪费资源
---
## 💡 CosyVoice 性能分析备注
### 当前性能基线 (FP32, 优化前)
| 文本长度 | 音频时长 | 推理耗时 | RTF |
|----------|----------|----------|-----|
| 42 字 | 9.8s | 13.2s | 1.35x |
| 89 字 | 18.2s | 20.3s | 1.12x |
| ~530 字 | 115.8s | 107.7s | 0.93x |
| ~670 字 | 143.5s | 131.6s | 0.92x |
### 未来可选优化(收益递减,暂不实施)
| 优化项 | 预期提升 | 复杂度 |
|--------|----------|--------|
| TensorRT (DiT 模块) | +20-30% | 需编译 .plan 引擎 |
| torch.compile() | +10-20% | 一行代码,但首次编译慢 |
| vLLM (LLM 模块) | +10-15% | 额外依赖 |


@@ -196,6 +196,7 @@ ViGent2/Docs/
├── SUPABASE_DEPLOY.md # Supabase 部署文档
├── LATENTSYNC_DEPLOY.md # LatentSync 部署文档
├── COSYVOICE3_DEPLOY.md # 声音克隆部署文档
├── ALIPAY_DEPLOY.md # 支付宝付费部署文档
├── SUBTITLE_DEPLOY.md # 字幕系统部署文档
└── DevLogs/
├── Day1.md # 开发日志
@@ -304,4 +305,4 @@ ViGent2/Docs/
---
**最后更新**2026-02-08
**最后更新**2026-02-11


@@ -10,8 +10,9 @@ frontend/src/
│ ├── page.tsx # 首页(视频生成)
│ ├── publish/ # 发布管理页
│ ├── admin/ # 管理员页面
│ ├── login/ # 登录
│ └── register/ # 注册
│ ├── login/ # 登录
│ ├── register/ # 注册
│ └── pay/ # 付费开通会员
├── features/ # 功能模块(按业务拆分)
│ ├── home/
│ │ ├── model/ # 业务逻辑 hooks
@@ -150,6 +151,33 @@ body {
| `sm:` | ≥ 640px | 平板/桌面 |
| `lg:` | ≥ 1024px | 大屏桌面 |
### embedded 组件模式
合并板块时,子组件通过 `embedded?: boolean` prop 控制是否渲染外层卡片容器和主标题。
```tsx
// embedded=false独立使用渲染完整卡片
<div className="bg-white/5 rounded-2xl p-6 border border-white/10">
<h2></h2>
{content}
</div>
// embedded=true嵌入父卡片只渲染内容
{content}
```
- 子标题使用 `<h3 className="text-sm font-medium text-gray-400">`
- 分隔线使用 `<div className="border-t border-white/10 my-4" />`
- 移动端标题行避免 `whitespace-nowrap`,长描述文字可用 `hidden sm:inline` 在移动端隐藏
### 按钮视觉层级
| 层级 | 样式 | 用途 |
|------|------|------|
| 主操作 | `px-4 py-2 text-sm font-medium bg-gradient-to-r from-purple-600 to-pink-600 shadow-sm` | 生成配音、立即发布 |
| 辅助操作 | `px-2 py-1 text-xs bg-white/10 rounded` | 刷新、上传、语速 |
| 触屏可见 | `opacity-40 group-hover:opacity-100` | 列表行内操作(编辑/删除) |
---
## API 请求规范
@@ -256,6 +284,38 @@ import { formatDate } from '@/shared/lib/media';
## ⚡️ 体验优化规范
### 刷新回顶部(统一体验)
- 长页面(如首页/发布页)在首次挂载时统一回到顶部。
- **必须**在页面级 `useEffect` 中设置 `history.scrollRestoration = "manual"` 禁用浏览器原生滚动恢复。
- 调用 `window.scrollTo({ top: 0, left: 0, behavior: "auto" })` 并追加 200ms 延迟兜底(防止异步 effect 覆盖)。
- **列表自动滚动必须使用时间门控**:页面加载后 1 秒内禁止所有列表自动滚动效果(`scrollEffectsEnabled` ref防止持久化恢复 + 异步数据加载触发 `scrollIntoView` 导致页面跳动。
- 推荐模式:
```typescript
// 页面级HomePage / PublishPage
useEffect(() => {
if (typeof window === "undefined") return;
if ("scrollRestoration" in history) history.scrollRestoration = "manual";
window.scrollTo({ top: 0, left: 0, behavior: "auto" });
const timer = setTimeout(() => window.scrollTo({ top: 0, left: 0, behavior: "auto" }), 200);
return () => clearTimeout(timer);
}, []);
// Controller 级(列表滚动时间门控)
const scrollEffectsEnabled = useRef(false);
useEffect(() => {
const timer = setTimeout(() => { scrollEffectsEnabled.current = true; }, 1000);
return () => clearTimeout(timer);
}, []);
// 列表滚动 effectBGM/素材/视频等)
useEffect(() => {
if (!selectedId || !scrollEffectsEnabled.current) return;
target?.scrollIntoView({ block: "nearest", behavior: "smooth" });
}, [selectedId, list]);
```
### 路由预取
- 首页进入发布管理时使用 `router.prefetch("/publish")`
@@ -305,7 +365,9 @@ import { formatDate } from '@/shared/lib/media';
- **必须持久化**
- 标题样式 ID / 字幕样式 ID
- 标题字号 / 字幕字号
- 标题显示模式(`short` / `persistent`
- 背景音乐选择 / 音量 / 开关状态
- 输出画面比例(`9:16` / `16:9`
- 素材选择 / 历史作品选择
- 选中配音 ID (`selectedAudioId`)
- 语速 (`speed`,声音克隆模式)
@@ -333,6 +395,7 @@ import { formatDate } from '@/shared/lib/media';
- 片头标题与发布信息标题统一限制 15 字。
- 中文输入法合成阶段不截断,合成结束后才校验长度。
- 首页片头标题修改会同步写入 `vigent_${storageKey}_publish_title`
- 标题显示模式使用 `short` / `persistent` 两个固定值;默认 `short`(短暂显示 4 秒)。
- 避免使用 `maxLength` 强制截断输入法合成态。
- 推荐使用 `@/shared/hooks/useTitleInput` 统一处理输入逻辑。


@@ -5,14 +5,12 @@ ViGent2 的前端界面,采用 Next.js 16 + TailwindCSS 构建。
## ✨ 核心功能
### 1. 视频生成 (`/`)
- **素材管理**: 拖拽上传人物视频,实时预览
- **素材重命名**: 支持在列表中直接重命名素材
- **文案配音**: 集成 EdgeTTS支持多音色选择 (云溪 / 晓晓)
- **AI 标题/标签**: 一键生成视频标题与标签 (Day 14)。
- **标题/字幕样式**: 样式选择 + 预览 + 字号调节 (Day 16)
- **背景音乐**: 试听 + 音量控制 + 选择持久化 (Day 16)
- **交互优化**: 选择项持久化、列表内定位、刷新回顶部 (Day 16)。
- **预览一致性**: 标题/字幕预览按素材分辨率缩放,效果更接近成片 (Day 17)。
- **一、文案提取与编辑**: 文案输入/提取/翻译/保存
- **二、配音**: 配音方式EdgeTTS/声音克隆)+ 配音列表(生成/试听/管理)合并为一个板块
- **三、素材编辑**: 视频素材(上传/选择/管理)+ 时间轴编辑(波形/色块/拖拽排序)合并为一个板块
- **四、标题与字幕**: 片头标题/副标题/字幕样式配置;短暂显示/常驻显示;样式预览使用视频片头帧作为真实背景 (Day 28)。
- **五、背景音乐**: 试听 + 音量控制 + 选择持久化
- **六、作品**(右栏): 作品列表 + 作品预览合并为一个板块
- **进度追踪**: 实时显示视频生成进度 (10% -> 100%)。
- **作品预览**: 生成完成后直接播放下载(作品预览 + 历史作品)。
- **预览优化**: 预览视频 `metadata` 预取,首帧加载更快。
@@ -52,13 +50,14 @@ ViGent2 的前端界面,采用 Next.js 16 + TailwindCSS 构建。
- **画面比例控制**: 时间轴顶部支持 `9:16 / 16:9` 输出比例选择,设置持久化并透传后端。
### 5. 字幕与标题 [Day 13 新增]
- **片头标题**: 可选输入,限制 15 字,视频开头显示 3 秒淡入淡出标题
- **片头标题**: 可选输入,限制 15 字;支持“短暂显示 / 常驻显示”(默认短暂显示 4 秒),对标题和副标题同时生效
- **片头副标题**: 可选输入,限制 20 字;显示在主标题下方,用于补充说明或悬念引导;独立样式配置(字体/字号/颜色/间距),可由 AI 同时生成;与标题共享显示模式设定;仅在视频画面中显示,不参与发布标题 (Day 25)。
- **标题同步**: 首页片头标题修改会同步到发布信息标题。
- **逐字高亮字幕**: 卡拉OK效果默认开启可关闭。
- **自动对齐**: 基于 faster-whisper 生成字级别时间戳。
- **样式预设**: 标题/字幕样式选择 + 预览 + 字号调节 (Day 16)。
- **样式预设**: 标题/字幕/副标题样式选择 + 预览 + 字号调节 (Day 16/25)。
- **默认样式**: 标题 90px 站酷快乐体;字幕 60px 经典黄字 + DingTalkJinBuTi (Day 17)。
- **样式持久化**: 标题/字幕样式与字号刷新保留 (Day 17)。
- **样式持久化**: 标题/字幕/副标题样式与字号刷新保留 (Day 17/25)。
### 6. 背景音乐 [Day 16 新增]
- **试听预览**: 点击试听即选中,音量滑块实时生效。
@@ -66,12 +65,20 @@ ViGent2 的前端界面,采用 Next.js 16 + TailwindCSS 构建。
### 7. 账户设置 [Day 15 新增]
- **手机号登录**: 11位中国手机号验证登录。
- **账户下拉菜单**: 显示有效期 + 修改密码 + 安全退出。
- **账户下拉菜单**: 显示手机号(中间四位脱敏)+ 有效期 + 修改密码 + 安全退出。
- **修改密码**: 弹窗输入当前密码与新密码,修改后强制重新登录。
- **登录即时生效**: 登录成功后 AuthContext 立即写入用户数据,无需刷新即显示手机号。
### 8. 付费开通会员 (`/pay`)
- **支付宝电脑网站支付**: 跳转支付宝官方收银台,支持扫码/账号登录/余额等多种支付方式。
- **自动激活**: 支付成功后异步回调自动激活会员(有效期 1 年),前端轮询检测支付结果。
- **到期续费**: 会员到期后登录自动跳转付费页续费,流程与首次开通一致。
- **管理员激活**: 管理员手动激活功能并存,两种方式互不影响。
### 8. 文案提取助手 (`ScriptExtractionModal`) [Day 15 新增]
- **多源提取**: 支持文件拖拽上传与 URL 粘贴 (B站/抖音/TikTok)。
- **AI 洗稿**: 集成 GLM-4.7-Flash自动改写为口播文案。
- **AI 智能改写**: 集成 GLM-4.7-Flash自动改写为口播文案。
- **自定义提示词**: 可自定义改写提示词,留空使用默认;设置持久化到 localStorage (Day 25)。
- **一键填入**: 提取结果直接填充至视频生成输入框。
- **智能交互**: 实时进度展示,防误触设计。
@@ -109,6 +116,8 @@ src/
│ ├── page.tsx # 视频生成主页
│ ├── publish/ # 发布管理页
│ │ └── page.tsx
│ ├── pay/ # 付费开通会员页
│ │ └── page.tsx
│ └── layout.tsx # 全局布局 (导航栏)
├── features/
│ ├── home/
@@ -133,5 +142,8 @@ src/
## 🎨 设计规范
- **主色调**: 深紫/黑色系 (Dark Mode)
- **交互**: 悬停微动画 (Hover Effects)
- **响应式**: 适配桌面端大屏操作
- **交互**: 悬停微动画 (Hover Effects);操作按钮默认半透明可见 (opacity-40)hover 时全亮,兼顾触屏设备
- **响应式**: 适配桌面端与移动端;发布页平台卡片响应式布局(移动端紧凑/桌面端宽松)
- **滚动体验**: 列表滚动条统一隐藏 (hide-scrollbar);刷新后自动回到顶部(禁用浏览器滚动恢复 + 列表 scroll 时间门控)
- **样式预览**: 浮动预览窗口,桌面端左上角 280px移动端右下角 160px不遮挡控件
- **输入辅助**: 标题/副标题输入框实时字数计数器,超限变红

Docs/MUSETALK_DEPLOY.md Normal file

@@ -0,0 +1,252 @@
# MuseTalk 部署指南
> **更新时间**2026-02-27
> **适用版本**MuseTalk v1.5 (常驻服务模式)
> **架构**FastAPI 常驻服务 + PM2 进程管理
---
## 架构概览
MuseTalk 作为 **混合唇形同步方案** 的长视频引擎:
- **短视频 (<120s)** → LatentSync 1.6 (GPU1, 端口 8007)
- **长视频 (>=120s)** → MuseTalk 1.5 (GPU0, 端口 8011)
- 路由阈值由 `LIPSYNC_DURATION_THRESHOLD` 控制
- MuseTalk 不可用时自动回退到 LatentSync
---
## 硬件要求
| 配置 | 最低要求 | 推荐配置 |
|------|----------|----------|
| GPU | 8GB VRAM (RTX 3060) | 24GB VRAM (RTX 3090) |
| 内存 | 32GB | 64GB |
| CUDA | 11.7+ | 11.8 |
> MuseTalk fp16 推理约需 4-8GB 显存,可与 CosyVoice 共享 GPU0。
---
## 安装步骤
### 1. Conda 环境
```bash
cd /home/rongye/ProgramFiles/ViGent2/models/MuseTalk
conda create -n musetalk python=3.10 -y
conda activate musetalk
```
### 2. PyTorch 2.0.1 + CUDA 11.8
> 必须使用此版本(mmcv 预编译包依赖)。
```bash
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118
```
### 3. 依赖安装
```bash
pip install -r requirements.txt
# MMLab 系列
pip install --no-cache-dir -U openmim
mim install mmengine
mim install "mmcv==2.0.1"
mim install "mmdet==3.1.0"
pip install chumpy --no-build-isolation
pip install "mmpose==1.1.0" --no-deps
# FastAPI 服务依赖
pip install fastapi uvicorn httpx
```
---
## 模型权重
### 目录结构
```
models/MuseTalk/models/
├── musetalk/ ← v1 基础模型
│ ├── config.json -> musetalk.json (软链接)
│ ├── musetalk.json
│ ├── musetalkV15 -> ../musetalkV15 (软链接, 关键!)
│ └── pytorch_model.bin (~3.2GB)
├── musetalkV15/ ← v1.5 UNet 模型
│ ├── musetalk.json
│ └── unet.pth (~3.2GB)
├── sd-vae/ ← Stable Diffusion VAE
│ ├── config.json
│ └── diffusion_pytorch_model.bin
├── whisper/ ← OpenAI Whisper Tiny
│ ├── config.json
│ ├── pytorch_model.bin (~151MB)
│ └── preprocessor_config.json
├── dwpose/ ← DWPose 人体姿态检测
│ └── dw-ll_ucoco_384.pth (~387MB)
├── syncnet/ ← SyncNet 唇形同步评估
│ └── latentsync_syncnet.pt
└── face-parse-bisent/ ← 人脸解析模型
├── 79999_iter.pth (~53MB)
└── resnet18-5c106cde.pth (~45MB)
```
### 下载方式
使用项目自带脚本:
```bash
cd /home/rongye/ProgramFiles/ViGent2/models/MuseTalk
conda activate musetalk
bash download_weights.sh
```
或手动 Python API 下载:
```bash
conda activate musetalk
export HF_ENDPOINT=https://hf-mirror.com
python -c "
from huggingface_hub import snapshot_download
snapshot_download('TMElyralab/MuseTalk', local_dir='models',
allow_patterns=['musetalk/*', 'musetalkV15/*'])
snapshot_download('stabilityai/sd-vae-ft-mse', local_dir='models/sd-vae',
allow_patterns=['config.json', 'diffusion_pytorch_model.bin'])
snapshot_download('openai/whisper-tiny', local_dir='models/whisper',
allow_patterns=['config.json', 'pytorch_model.bin', 'preprocessor_config.json'])
snapshot_download('yzd-v/DWPose', local_dir='models/dwpose',
allow_patterns=['dw-ll_ucoco_384.pth'])
"
```
### 创建必要的软链接
```bash
cd /home/rongye/ProgramFiles/ViGent2/models/MuseTalk/models/musetalk
ln -sf musetalk.json config.json
ln -sf ../musetalkV15 musetalkV15
```
> **关键**`musetalk/musetalkV15` 软链接缺失会导致权重检测失败 (`weights: False`)。
---
## 服务启动
### PM2 进程管理(推荐)
```bash
# 首次注册
cd /home/rongye/ProgramFiles/ViGent2
pm2 start run_musetalk.sh --name vigent2-musetalk
pm2 save
# 日常管理
pm2 restart vigent2-musetalk
pm2 logs vigent2-musetalk
pm2 stop vigent2-musetalk
```
### 手动启动
```bash
cd /home/rongye/ProgramFiles/ViGent2/models/MuseTalk
/home/rongye/ProgramFiles/miniconda3/envs/musetalk/bin/python scripts/server.py
```
### 健康检查
```bash
curl http://localhost:8011/health
# {"status":"ok","model_loaded":true}
```
---
## 后端配置
`backend/.env` 中的相关变量:
```ini
# MuseTalk 配置
MUSETALK_GPU_ID=0 # GPU 编号 (与 CosyVoice 共存)
MUSETALK_API_URL=http://localhost:8011 # 常驻服务地址
MUSETALK_BATCH_SIZE=32 # 推理批大小
MUSETALK_VERSION=v15 # 模型版本
MUSETALK_USE_FLOAT16=true # 半精度加速
# 混合唇形同步路由
LIPSYNC_DURATION_THRESHOLD=120 # 秒, >=此值用 MuseTalk
```
---
## 相关文件
| 文件 | 说明 |
|------|------|
| `models/MuseTalk/scripts/server.py` | FastAPI 常驻服务 (端口 8011) |
| `run_musetalk.sh` | PM2 启动脚本 |
| `backend/app/services/lipsync_service.py` | 混合路由 + `_call_musetalk_server()` |
| `backend/app/core/config.py` | `MUSETALK_*` 配置项 |
---
## 性能优化 (server.py v2)
首次长视频测试 (136s, 3404 帧) 耗时 30 分钟。分析发现瓶颈在人脸检测 (28%)、BiSeNet 合成 (22%)、I/O (17%),而非 UNet 推理 (17%)。
### 已实施优化
| 优化项 | 说明 |
|--------|------|
| `MUSETALK_BATCH_SIZE` 8→32 | RTX 3090 显存充裕UNet 推理加速 ~3x |
| cv2.VideoCapture 直读帧 | 跳过 ffmpeg→PNG→imread 链路 |
| 人脸检测降频 (每5帧) | DWPose + FaceAlignment 只在采样帧运行,中间帧线性插值 bbox |
| BiSeNet mask 缓存 (每5帧) | `get_image_prepare_material` 每 5 帧运行,中间帧用 `get_image_blending` 复用 |
| cv2.VideoWriter 直写 | 跳过逐帧 PNG 写盘 + ffmpeg 重编码 |
| 每阶段计时 | 7 个阶段精确计时,方便后续调优 |
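其中「cv2.VideoCapture 直读帧 / cv2.VideoWriter 直写」的思路可用下面的概念性示意理解(非 server.py 原文,`process_frame` 代表唇形合成等逐帧处理的占位函数):

```python
# 示意:帧直接读入内存、处理后直接写出,跳过 ffmpeg→PNG→imread 链路与逐帧 PNG 写盘
import cv2

def process_video(in_path: str, out_path: str, process_frame) -> None:
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(process_frame(frame))  # 逐帧处理后立即写出,全程不落盘中间 PNG
    cap.release()
    writer.release()
```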
### 调优参数
`models/MuseTalk/scripts/server.py` 顶部可调:
```python
DETECT_EVERY = 5 # 人脸检测降频间隔 (帧)
BLEND_CACHE_EVERY = 5 # BiSeNet mask 缓存间隔 (帧)
```
> 对于口播视频 (人脸几乎不动)5 帧间隔的插值误差可忽略。
> 如人脸运动剧烈的场景,可降低为 2-3。
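降频检测的核心做法是:只在每 `DETECT_EVERY` 帧运行一次人脸检测,中间帧的 bbox 由相邻采样帧线性插值得到。以下为概念性示意(非 server.py 原文,`detect_face` 为假设的检测函数):

```python
# 示意:每 DETECT_EVERY 帧检测一次人脸bbox 在相邻采样帧之间线性插值
import numpy as np

DETECT_EVERY = 5

def interpolate_bboxes(frames, detect_face):
    """frames: 帧列表detect_face(frame) -> np.ndarray([x1, y1, x2, y2])"""
    n = len(frames)
    if n == 0:
        return []
    key_ids = list(range(0, n, DETECT_EVERY))
    if key_ids[-1] != n - 1:
        key_ids.append(n - 1)  # 末帧也检测,保证插值区间右端点存在
    key_boxes = {i: detect_face(frames[i]).astype(float) for i in key_ids}
    if len(key_ids) == 1:
        return [key_boxes[key_ids[0]]] * n
    bboxes = [None] * n
    for a, b in zip(key_ids[:-1], key_ids[1:]):
        for i in range(a, b + 1):
            t = (i - a) / (b - a)
            bboxes[i] = (1 - t) * key_boxes[a] + t * key_boxes[b]
    return bboxes
```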
---
## 常见问题
### huggingface-hub 版本冲突
```
ImportError: huggingface-hub>=0.19.3,<1.0 is required
```
**解决**:降级 huggingface-hub
```bash
pip install "huggingface-hub>=0.19.3,<1.0"
```
### mmcv 导入失败
```bash
pip uninstall mmcv mmcv-full -y
mim install "mmcv==2.0.1"
```
### 音视频长度不匹配
已在 `musetalk/utils/audio_processor.py` 中修复(零填充逻辑),无需额外处理。
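其修复思路可用下面的概念性示意理解(非 audio_processor.py 原文):音频特征帧数少于视频帧数时在末尾补零,多出则截断,使两者一一对应。

```python
# 示意:零填充音频特征,使帧数与视频帧数对齐
import numpy as np

def pad_audio_features(audio_feats: np.ndarray, num_video_frames: int) -> np.ndarray:
    """audio_feats: (T, D) 逐帧音频特征;返回形状为 (num_video_frames, D) 的数组。"""
    t, d = audio_feats.shape
    if t >= num_video_frames:
        return audio_feats[:num_video_frames]
    pad = np.zeros((num_video_frames - t, d), dtype=audio_feats.dtype)
    return np.concatenate([audio_feats, pad], axis=0)
```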

View File

@@ -16,14 +16,16 @@
文本 → EdgeTTS → 音频 → LatentSync → FFmpeg合成 → 最终视频
新流程 (单素材):
文本 → EdgeTTS/Qwen3-TTS/预生成配音 → 音频 ─┬→ LatentSync → 唇形视频 ─┐
文本 → EdgeTTS/CosyVoice/预生成配音 → 音频 ─┬→ LatentSync/MuseTalk → 唇形视频 ─┐
└→ faster-whisper → 字幕JSON ─┴→ Remotion合成 → 最终视频
新流程 (多素材):
音频 → 多素材按 custom_assignments 拼接 → LatentSync (单次推理) → 唇形视频 ─┐
音频 → 多素材按 custom_assignments 拼接 → LatentSync/MuseTalk (单次推理) → 唇形视频 ─┐
音频 → faster-whisper → 字幕JSON ─────────────────────────────────────────────┴→ Remotion合成 → 最终视频
```
> **唇形同步路由**: 短视频 (<120s) 用 LatentSync 1.6 (GPU1),长视频 (>=120s) 用 MuseTalk 1.5 (GPU0),由 `LIPSYNC_DURATION_THRESHOLD` 控制。
## 系统要求
| 组件 | 要求 |
@@ -185,7 +187,9 @@ Remotion 渲染参数在 `backend/app/services/remotion_service.py` 中配置:
| 参数 | 默认值 | 说明 |
|------|--------|------|
| `fps` | 25 | 输出帧率 |
| `title_duration` | 3.0 | 标题显示时长(秒) |
| `concurrency` | 16 | Remotion 并发渲染进程数(默认 16可通过 `--concurrency` CLI 参数覆盖) |
| `title_display_mode` | `short` | 标题显示模式(`short`=短暂显示;`persistent`=常驻显示) |
| `title_duration` | 4.0 | 标题显示时长(秒,仅 `short` 模式生效) |
---
@@ -272,7 +276,7 @@ wget https://github.com/googlefonts/noto-cjk/raw/main/Sans/OTF/SimplifiedChinese
### 使用 GPU 0
faster-whisper 默认使用 GPU 0与 LatentSync (GPU 1) 分开,避免显存冲突。如需指定 GPU
faster-whisper 默认使用 GPU 0MuseTalk 共享 GPU 0LatentSync 使用 GPU 1,互不冲突。如需指定 GPU
```python
# 在 whisper_service.py 中修改
@@ -288,3 +292,5 @@ WhisperService(device="cuda:0") # 或 "cuda:1"
| 2026-01-29 | 1.0.0 | 初始版本,使用 faster-whisper + Remotion 实现逐字高亮字幕和片头标题 |
| 2026-02-10 | 1.1.0 | 更新架构图:多素材 concat-then-infer、预生成配音选项 |
| 2026-01-30 | 1.0.1 | 字幕高亮样式与标题动画优化,视觉表现更清晰 |
| 2026-02-25 | 1.2.0 | 字幕时间戳从线性插值改为 Whisper 节奏映射,修复长视频字幕漂移 |
| 2026-02-27 | 1.3.0 | 架构图更新 MuseTalk 混合路由Remotion 并发渲染从 8 提升到 16GPU 分配说明更新 |

View File

@@ -1,8 +1,8 @@
# ViGent2 开发任务清单 (Task Log)
**项目**: ViGent2 数字人口播视频生成系统
**进度**: 100% (Day 24 - 鉴权到期治理 + 多素材时间轴稳定性修复)
**更新时间**: 2026-02-11
**进度**: 100% (Day 28 - CosyVoice FP16 加速 + 文档全面更新)
**更新时间**: 2026-02-27
---
@@ -10,16 +10,65 @@
> 这里记录了每一天的核心开发内容与 milestone。
### Day 24: 鉴权到期治理 + 多素材时间轴稳定性修复 (Current)
- [x] **会员到期请求时失效**: 登录与鉴权接口统一执行 `expires_at` 检查;到期后自动停用账号、清理 session并返回“会员已到期请续费”
- [x] **画面比例控制**: 时间轴新增 `9:16 / 16:9` 输出比例选择,前端持久化并透传后端,单素材/多素材统一按目标分辨率处理
- [x] **标题/字幕防溢出**: Remotion 与前端预览统一响应式缩放、自动换行、描边/字距/边距比例缩放,降低预览与成片差异。
- [x] **MOV 方向归一化**: 新增旋转元数据解析与 orientation normalize修复“编码横屏+旋转元数据”导致的竖屏判断偏差。
- [x] **多素材拼接稳定性**: 片段 prepare 与 concat 统一 25fps/CFRconcat 增加 `+genpts`,缓解段切换处“画面冻结口型还动”
- [x] **时间轴语义对齐**: 打通 `source_end` 全链路;修复 `sourceStart>0 且 sourceEnd=0` 时长计算;生成时以时间轴可见段 assignments 为准,超出段不参与
- [x] **交互细节优化**: 页面刷新回顶部;素材/历史列表首轮自动滚动抑制,减少恢复状态时页面跳动
### Day 23: 配音前置重构 + 素材时间轴编排 + UI 体验优化 + 声音克隆增强
### Day 28: CosyVoice FP16 加速 + 文档全面更新 (Current)
- [x] **CosyVoice FP16 半精度加速**: `AutoModel()` 开启 `fp16=True`LLM 推理和 Flow Matching 自动混合精度运行,预估提速 30-40%、显存降低 ~30%
- [x] **文档全面更新**: README.md / DEPLOY_MANUAL.md / SUBTITLE_DEPLOY.md / BACKEND_README.md 补充 MuseTalk 混合唇形同步方案、性能优化、Remotion 并发渲染等内容
### Day 27: Remotion 描边修复 + 字体样式扩展 + 混合唇形同步 + 性能优化
- [x] **描边渲染修复**: 标题/副标题/字幕从 `textShadow` 4 方向模拟改为 CSS 原生 `-webkit-text-stroke` + `paint-order: stroke fill`,修复描边过粗和副标题重影问题
- [x] **字体样式扩展**: 标题样式 4→12 个(+庞门正道/优设标题圆/阿里数黑体/文道潮黑/无界黑/厚底黑/寒蝉半圆体/欣意吉祥宋),字幕样式 4→8 个(+少女粉/清新绿/金色隶书/楷体红字)
- [x] **描边参数优化**: 所有预设 `stroke_size` 从 8 降至 4~5配合原生描边视觉更干净
- [x] **TypeScript 类型修复**: Root.tsx `Composition` 泛型与 `calculateMetadata` 参数类型对齐Video.tsx `VideoProps` 添加索引签名兼容 `Record<string, unknown>`VideoLayer.tsx 移除 `OffthreadVideo` 不支持的 `loop` prop。
- [x] **进度条文案还原**: 进度条从显示后端推送消息改回固定 `正在AI生成中...`
- [x] **MuseTalk 混合唇形同步**: 部署 MuseTalk 1.5 常驻服务 (GPU0, 端口 8011),按音频时长自动路由 — 短视频 (<120s) 走 LatentSync长视频 (>=120s) 走 MuseTalkMuseTalk 不可用时自动回退。
- [x] **MuseTalk 推理性能优化**: server.py v2 重写 — cv2 直读帧(跳过 ffmpeg→PNG)、人脸检测降频(每5帧)、BiSeNet mask 缓存(每5帧)、cv2.VideoWriter 直写(跳过 PNG 写盘)、batch_size 8→32预估 30min→8-10min (~3x)。
- [x] **Remotion 并发渲染优化**: render.ts 新增 concurrency 参数,从默认 8 提升到 16 (56核 CPU),预估 5min→2-3min。
### Day 26: 前端优化:板块合并 + 序号标题 + UI 精细化
- [x] **板块合并**: 首页 9 个独立板块合并为 5 个主板块(配音方式+配音列表→三、配音;视频素材+时间轴→四、素材编辑;历史作品+作品预览→六、作品)。
- [x] **中文序号标题**: 一~十编号(首页一~六,发布页七~十),移除所有 emoji 图标。
- [x] **embedded 模式**: 6 个组件支持 `embedded` prop嵌入时不渲染外层卡片/标题。
- [x] **配音列表两行布局**: embedded 模式第 1 行语速+生成配音(右对齐),第 2 行配音列表+刷新。
- [x] **子组件自渲染子标题**: MaterialSelector/TimelineEditor embedded 时自渲染 h3 子标题+操作按钮同行。
- [x] **下拉对齐**: TitleSubtitlePanel 标签统一 `w-20`,下拉 `w-1/3 min-w-[100px]`,垂直对齐。
- [x] **参考音频文案简化**: 底部段落移至标题旁,简化为 `(上传3-10秒语音样本)`
- [x] **账户手机号显示**: AccountSettingsDropdown 新增手机号显示。
- [x] **标题显示模式对副标题生效**: payload 条件修复 + UI 下拉上移至板块标题行。
- [x] **登录后用户信息立即可用**: AuthContext 暴露 `setUser`,登录成功后立即写入用户数据,修复登录后显示"未知账户"的问题。
- [x] **文案微调**: 素材描述改为"上传自拍视频最多可选4个";显示模式选项加"标题"前缀。
- [x] **UI/UX 体验优化**: 操作按钮移动端可见opacity-40、手机号脱敏、标题字数计数器、时间轴拖拽抓手图标、截取滑块放大。
- [x] **代码质量修复**: 密码弹窗 success 清空、MaterialSelector useMemo + disabled 守卫、TimelineEditor useMemo。
- [x] **发布页响应式布局**: 平台账号卡片单行布局,移动端紧凑(小图标/小按钮),桌面端宽松(与其他板块风格一致)。
- [x] **移动端刷新回顶部**: `scrollRestoration = "manual"` + 列表 scroll 时间门控(`scrollEffectsEnabled` ref1 秒内禁止自动滚动)+ 延迟兜底 `scrollTo(0,0)`。
- [x] **移动端样式预览缩小**: FloatingStylePreview 移动端宽度缩至 160px位置改为右下角不遮挡样式调节控件。
- [x] **列表滚动条统一隐藏**: 所有列表BGM/配音/作品/素材/文案提取)滚动条改回 `hide-scrollbar`
- [x] **移动端配音/素材适配**: VoiceSelector 按钮移动端缩小(`px-2 sm:px-4`),修复克隆声音不可见MaterialSelector 标题行移除 `whitespace-nowrap`,描述移动端隐藏,修复刷新按钮溢出。
- [x] **生成配音按钮放大**: 从辅助尺寸(`text-xs px-2 py-1`)升级为主操作尺寸(`text-sm font-medium px-4 py-2`),新增阴影。
- [x] **生成进度条位置调整**: 从"六、作品"卡片内部提取到右栏独立卡片,显示在作品卡片上方,更醒目。
- [x] **LatentSync 超时修复**: httpx 超时从 1200s20 分钟)改为 3600s1 小时),修复 2 分钟以上视频口型推理超时回退问题。
- [x] **字幕时间戳节奏映射**: `whisper_service.py` 从全程线性插值改为 Whisper 逐词节奏映射,修复长视频字幕漂移。
### Day 25: 文案提取修复 + 自定义提示词 + 片头副标题
- [x] **抖音文案提取修复**: yt-dlp Fresh cookies 报错,重写 `_download_douyin_manual` 为移动端分享页 + 自动获取 ttwid 方案。
- [x] **清理 DOUYIN_COOKIE**: 新方案不再需要手动维护 Cookie`.env`/`config.py`/`service.py` 全面删除。
- [x] **AI 智能改写自定义提示词**: 后端 `rewrite_script()` 支持 `custom_prompt` 参数;前端 checkbox 旁新增折叠式提示词编辑区localStorage 持久化。
- [x] **SSR 构建修复**: `useState` 初始化 `localStorage` 访问加 `typeof window` 守卫,修复 `npm run build` 报错。
- [x] **片头副标题**: 新增 secondary_title后端/Remotion/前端全链路AI 同时生成独立样式配置20 字限制。
- [x] **前端文案修正**: "AI 洗稿结果"→"AI 改写结果"。
- [x] **yt-dlp 升级**: `2025.12.08` → `2026.2.21`
- [x] **参考音频中文文件名修复**: `sanitize_filename()` 将存储路径清洗为 ASCII 安全字符,纯中文名哈希兜底,原始名保留为展示名。
### Day 24: 鉴权到期治理 + 多素材时间轴稳定性修复
- [x] **会员到期请求时失效**: 登录与鉴权接口统一执行 `expires_at` 检查;到期后自动停用账号、清理 session并返回“会员已到期请续费”。
- [x] **画面比例控制**: 时间轴新增 `9:16 / 16:9` 输出比例选择,前端持久化并透传后端,单素材/多素材统一按目标分辨率处理。
- [x] **标题/字幕防溢出**: Remotion 与前端预览统一响应式缩放、自动换行、描边/字距/边距比例缩放,降低预览与成片差异。
- [x] **标题显示模式**: 标题行新增“短暂显示/常驻显示”下拉默认短暂显示4 秒),用户选择持久化并透传至 Remotion 渲染链路。
- [x] **MOV 方向归一化**: 新增旋转元数据解析与 orientation normalize修复“编码横屏+旋转元数据”导致的竖屏判断偏差。
- [x] **多素材拼接稳定性**: 片段 prepare 与 concat 统一 25fps/CFRconcat 增加 `+genpts`,缓解段切换处“画面冻结口型还动”。
- [x] **时间轴语义对齐**: 打通 `source_end` 全链路;修复 `sourceStart>0 且 sourceEnd=0` 时长计算;生成时以时间轴可见段 assignments 为准,超出段不参与。
- [x] **交互细节优化**: 页面刷新回顶部;素材/历史列表首轮自动滚动抑制,减少恢复状态时页面跳动。
### Day 23: 配音前置重构 + 素材时间轴编排 + UI 体验优化 + 声音克隆增强
#### 第一阶段:配音前置
- [x] **配音生成独立化**: 新增 `generated_audios` 后端模块router/schemas/service5 个 API 端点,复用现有 TTSService / voice_clone_service / task_store。
@@ -212,6 +261,7 @@
| **TTS 配音** | 100% | ✅ EdgeTTS + CosyVoice 3.0 + 配音前置 + 时间轴编排 + 自动转写 + 语速控制 |
| **自动发布** | 100% | ✅ 抖音/微信视频号/B站/小红书 |
| **用户认证** | 100% | ✅ 手机号 + JWT |
| **付费会员** | 100% | ✅ 支付宝电脑网站支付 + 自动激活 |
| **部署运维** | 100% | ✅ PM2 + Watchdog |
---

View File

@@ -4,7 +4,7 @@
> 📹 **上传人物** · 🎙️ **输入文案** · 🎬 **一键成片**
基于 **LatentSync 1.6 + EdgeTTS** 的开源数字人口播视频生成系统。
基于 **LatentSync 1.6 + MuseTalk 1.5 混合唇形同步** 的开源数字人口播视频生成系统。
集成 **CosyVoice 3.0** 声音克隆与自动社交媒体发布功能。
[功能特性](#-功能特性) • [技术栈](#-技术栈) • [文档中心](#-文档中心) • [部署指南](Docs/DEPLOY_MANUAL.md)
@@ -16,25 +16,28 @@
## ✨ 功能特性
### 核心能力
- 🎬 **高清唇形同步** - LatentSync 1.6 驱动512×512 高分辨率 Latent Diffusion 模型
- 🎬 **高清唇形同步** - 混合方案:短视频 (<120s) 用 LatentSync 1.6 (高质量 Latent Diffusion),长视频 (>=120s) 用 MuseTalk 1.5 (实时级单步推理),自动路由 + 回退
- 🎙️ **多模态配音** - 支持 **EdgeTTS** (微软超自然语音, 10 语言) 和 **CosyVoice 3.0** (3秒极速声音克隆, 9语言+18方言, 语速可调)。上传参考音频自动 Whisper 转写 + 智能截取。配音前置工作流:先生成配音 → 选素材 → 生成视频。
- 📝 **智能字幕** - 集成 faster-whisper + Remotion自动生成逐字高亮 (卡拉OK效果) 字幕。
- 🎨 **样式预设** - 标题/字幕样式选择 + 预览 + 字号调节,支持自定义字体库。
- 🖼 **作品预览一致性** - 标题/字幕预览与 Remotion 成片统一响应式缩放和自动换行,窄屏画布也稳定显示
- 🎞️ **多素材多机位** - 支持多选素材 + 时间轴编辑器 (wavesurfer.js 波形可视化),拖拽分割线调整时长、拖拽排序切换机位、按 `source_start/source_end` 截取片段
- 📐 **画面比例控制** - 时间轴一键切换 `9:16 / 16:9` 输出比例,生成链路全程按目标比例处理
- 🎨 **样式预设** - 12 种标题 + 8 种字幕样式预设,支持预览 + 字号调节 + 自定义字体库。CSS 原生描边渲染,清晰无重影。
- 🏷 **标题显示模式** - 片头标题支持 `短暂显示` / `常驻显示`默认短暂显示4秒),用户偏好自动持久化
- 📌 **片头副标题** - 可选副标题显示在主标题下方独立样式配置AI 可同时生成20 字限制
- 🖼️ **作品预览一致性** - 标题/字幕预览与 Remotion 成片统一响应式缩放和自动换行,窄屏画布也稳定显示
- 🎞️ **多素材多机位** - 支持多选素材 + 时间轴编辑器 (wavesurfer.js 波形可视化),拖拽分割线调整时长、拖拽排序切换机位、按 `source_start/source_end` 截取片段。
- 📐 **画面比例控制** - 时间轴一键切换 `9:16 / 16:9` 输出比例,生成链路全程按目标比例处理。
- 💾 **用户偏好持久化** - 首页状态统一恢复/保存,刷新后延续上次配置。历史文案手动保存与加载。
- 🎵 **背景音乐** - 试听 + 音量控制 + 混音,保持配音音量稳定。
- 🤖 **AI 辅助创作** - 内置 GLM-4.7-Flash支持 B站/抖音链接文案提取、AI 洗稿、标题/标签自动生成、9 语言翻译。
- 🤖 **AI 辅助创作** - 内置 GLM-4.7-Flash支持 B站/抖音链接文案提取、AI 智能改写(支持自定义提示词)、标题/标签自动生成、9 语言翻译。
### 平台化功能
- 📱 **全自动发布** - 支持抖音/微信视频号/B站/小红书立即发布;扫码登录 + Cookie 持久化。
- 🖥️ **发布管理预览** - 支持签名 URL / 相对路径作品预览,确保可直接播放。
- 📸 **发布结果可视化** - 抖音/微信视频号发布成功后返回截图,发布页结果卡片可直接查看。
- 🛡️ **发布防误操作** - 发布进行中自动提示“请勿刷新或关闭网页”,并拦截刷新/关页二次确认。
- 💳 **付费会员** - 支付宝电脑网站支付自动开通会员,到期自动停用并引导续费,管理员手动激活并存。
- 🔐 **认证与隔离** - 基于 Supabase 的用户隔离,支持手机号注册/登录、密码管理。
- 🛡️ **服务守护** - 内置 Watchdog 看门狗机制,自动监控并重启僵死服务,确保 7x24h 稳定运行。
- 🚀 **性能优化** - 视频预压缩、模型常驻服务(近实时加载)、双 GPU 流水线并发。
- 🚀 **性能优化** - 视频预压缩、模型常驻服务(近实时加载)、双 GPU 流水线并发、MuseTalk 人脸检测降频 + BiSeNet 缓存、Remotion 16 并发渲染
---
@@ -43,9 +46,9 @@
| 领域 | 核心技术 | 说明 |
|------|----------|------|
| **前端** | Next.js 16 | TypeScript, TailwindCSS, SWR, wavesurfer.js |
| **后端** | FastAPI | Python 3.10, AsyncIO, PM2 |
| **后端** | FastAPI | Python 3.12, AsyncIO, PM2 |
| **数据库** | Supabase | PostgreSQL, Storage (本地/S3), Auth |
| **唇形同步** | LatentSync 1.6 | PyTorch 2.5, Diffusers, DeepCache |
| **唇形同步** | LatentSync 1.6 + MuseTalk 1.5 | 混合路由:短视频 Diffusion 高质量,长视频单步实时推理 |
| **声音克隆** | CosyVoice 3.0 | 0.5B 参数量9 语言 + 18 方言 |
| **自动化** | Playwright | 社交媒体无头浏览器自动化 |
| **部署** | Docker & PM2 | 混合部署架构 |
@@ -59,13 +62,17 @@
### 部署运维
- **[部署手册 (DEPLOY_MANUAL.md)](Docs/DEPLOY_MANUAL.md)** - 👈 **部署请看这里**!包含完整的环境搭建步骤。
- [参考音频服务部署 (COSYVOICE3_DEPLOY.md)](Docs/COSYVOICE3_DEPLOY.md) - 声音克隆模型部署指南。
- [LatentSync 部署指南](models/LatentSync/DEPLOY.md) - 唇形同步模型独立部署。
- [LatentSync 部署指南 (LATENTSYNC_DEPLOY.md)](Docs/LATENTSYNC_DEPLOY.md) - 唇形同步模型独立部署。
- [MuseTalk 部署指南 (MUSETALK_DEPLOY.md)](Docs/MUSETALK_DEPLOY.md) - 长视频唇形同步模型部署。
- [Supabase 部署指南 (SUPABASE_DEPLOY.md)](Docs/SUPABASE_DEPLOY.md) - Supabase 与认证系统配置。
- [支付宝部署指南 (ALIPAY_DEPLOY.md)](Docs/ALIPAY_DEPLOY.md) - 支付宝付费开通会员配置。
### 开发文档
- [后端开发指南](Docs/BACKEND_README.md) - 接口规范与开发流程。
- [后端开发规范](Docs/BACKEND_DEV.md) - 分层约定与开发习惯。
- [前端开发指南](Docs/FRONTEND_DEV.md) - UI 组件与页面规范。
- [后端开发指南 (BACKEND_README.md)](Docs/BACKEND_README.md) - 接口规范与开发流程。
- [后端开发规范 (BACKEND_DEV.md)](Docs/BACKEND_DEV.md) - 分层约定与开发习惯。
- [前端开发指南 (FRONTEND_DEV.md)](Docs/FRONTEND_DEV.md) - UI 组件与页面规范。
- [前端组件文档 (FRONTEND_README.md)](Docs/FRONTEND_README.md) - 组件结构与板块说明。
- [Remotion 字幕部署 (SUBTITLE_DEPLOY.md)](Docs/SUBTITLE_DEPLOY.md) - 字幕渲染服务部署。
- [开发日志 (DevLogs)](Docs/DevLogs/) - 每日开发进度与技术决策记录。
---
@@ -82,7 +89,8 @@ ViGent2/
├── frontend/ # Next.js 前端应用
├── remotion/ # Remotion 视频渲染 (标题/字幕合成)
├── models/ # AI 模型仓库
│ ├── LatentSync/ # 唇形同步服务
│ ├── LatentSync/ # 唇形同步服务 (GPU1, 短视频)
│ ├── MuseTalk/ # 唇形同步服务 (GPU0, 长视频)
│ └── CosyVoice/ # 声音克隆服务
└── Docs/ # 项目文档
```
@@ -97,7 +105,8 @@ ViGent2/
|----------|------|------|
| **Web UI** | 3002 | 用户访问入口 (Next.js) |
| **Backend API** | 8006 | 核心业务接口 (FastAPI) |
| **LatentSync** | 8007 | 唇形同步推理服务 |
| **LatentSync** | 8007 | 唇形同步推理服务 (GPU1, 短视频) |
| **MuseTalk** | 8011 | 唇形同步推理服务 (GPU0, 长视频) |
| **CosyVoice 3.0** | 8010 | 声音克隆推理服务 |
| **Supabase** | 8008 | 数据库与认证网关 |

View File

@@ -25,10 +25,10 @@ LATENTSYNC_USE_SERVER=true
# LATENTSYNC_API_URL=http://localhost:8007
# 推理步数 (20-50, 越高质量越好,速度越慢)
LATENTSYNC_INFERENCE_STEPS=40
LATENTSYNC_INFERENCE_STEPS=16
# 引导系数 (1.0-3.0, 越高唇同步越准,但可能抖动)
LATENTSYNC_GUIDANCE_SCALE=2.0
LATENTSYNC_GUIDANCE_SCALE=1.5
# 启用 DeepCache 加速 (推荐开启)
LATENTSYNC_ENABLE_DEEPCACHE=true
@@ -36,6 +36,26 @@ LATENTSYNC_ENABLE_DEEPCACHE=true
# 随机种子 (设为 -1 则随机)
LATENTSYNC_SEED=1247
# =============== MuseTalk 配置 ===============
# GPU 选择 (默认 GPU0与 CosyVoice 共存)
MUSETALK_GPU_ID=0
# 常驻服务地址 (端口 8011)
MUSETALK_API_URL=http://localhost:8011
# 推理批大小
MUSETALK_BATCH_SIZE=32
# 模型版本
MUSETALK_VERSION=v15
# 半精度加速
MUSETALK_USE_FLOAT16=true
# =============== 混合唇形同步路由 ===============
# 音频时长 >= 此阈值(秒)用 MuseTalk< 此阈值用 LatentSync
LIPSYNC_DURATION_THRESHOLD=120
# =============== 上传配置 ===============
# 最大上传文件大小 (MB)
MAX_UPLOAD_SIZE_MB=500
@@ -70,6 +90,9 @@ GLM_MODEL=glm-4.7-flash
# 确保存储卷映射正确,避免硬编码路径
SUPABASE_STORAGE_LOCAL_PATH=/home/rongye/ProgramFiles/Supabase/volumes/storage/stub/stub
# =============== 抖音视频下载 Cookie ===============
# 用于从抖音 URL 提取视频文案功能,会过期需要定期更新
DOUYIN_COOKIE=douyin.com; device_web_cpu_core=10; device_web_memory_size=8; __ac_nonce=06760391f00b9b51264ae; __ac_signature=_02B4Z6wo00f019a5ceAAAIDAhEZR-X3jjWfWmXVAAJLXd4; ttwid=1%7C7MTKBSMsP4eOv9h5NAh8p0E-NYIud09ftNmB0mjLpWc%7C1734359327%7C8794abeabbd47447e1f56e5abc726be089f2a0344d6343b5f75f23e7b0f0028f; UIFID_TEMP=0de8750d2b188f4235dbfd208e44abbb976428f0720eb983255afefa45d39c0c6532e1d4768dd8587bf919f866ff1396912bcb2af71efee56a14a2a9f37b74010d0a0413795262f6d4afe02a032ac7ab; s_v_web_id=verify_m4r4ribr_c7krmY1z_WoeI_43po_ATpO_I4o8U1bex2D7; hevc_supported=true; home_can_add_dy_2_desktop=%220%22; dy_swidth=2560; dy_sheight=1440; stream_recommend_feed_params=%22%7B%5C%22cookie_enabled%5C%22%3Atrue%2C%5C%22screen_width%5C%22%3A2560%2C%5C%22screen_height%5C%22%3A1440%2C%5C%22browser_online%5C%22%3Atrue%2C%5C%22cpu_core_num%5C%22%3A10%2C%5C%22device_memory%5C%22%3A8%2C%5C%22downlink%5C%22%3A10%2C%5C%22effective_type%5C%22%3A%5C%224g%5C%22%2C%5C%22round_trip_time%5C%22%3A50%7D%22; strategyABtestKey=%221734359328.577%22; csrf_session_id=2f53aed9aa6974e83aa9a1014180c3a4; fpk1=U2FsdGVkX1/IpBh0qdmlKAVhGyYHgur4/VtL9AReZoeSxadXn4juKvsakahRGqjxOPytHWspYoBogyhS/V6QSw==; fpk2=0845b309c7b9b957afd9ecf775a4c21f; passport_csrf_token=d80e0c5b2fa2328219856be5ba7e671e; passport_csrf_token_default=d80e0c5b2fa2328219856be5ba7e671e; odin_tt=3c891091d2eb0f4718c1d5645bc4a0017032d4d5aa989decb729e9da2ad570918cbe5e9133dc6b145fa8c758de98efe32ff1f81aa0d611e838cc73ab08ef7d3f6adf66ab4d10e8372ddd628f94f16b8e; volume_info=%7B%22isUserMute%22%3Afalse%2C%22isMute%22%3Afalse%2C%22volume%22%3A0.5%7D; bd_ticket_guard_client_web_domain=2; FORCE_LOGIN=%7B%22videoConsumedRemainSeconds%22%3A180%7D; UIFID=0de8750d2b188f4235dbfd208e44abbb976428f0720eb983255afefa45d39c0c6532e1d4768dd8587bf919f866ff139655a3c2b735923234f371c699560c657923fd3d6c5b63ab7bb9b83423b6cb4787e2ce66a7fbc4ecb24c8570f520fe6de068bbb95115023c0c6c1b6ee31b49fb7e3996fb8349f43a3fd8b7a61cd9e18e8fe65eb6a7c13de4c0960d84e344b644725db3eb2fa6b7caf821de1b50527979f2; is_dash_user=1; biz_trace_id=b57a241f; bd_ticket_guard_client_data=eyJiZC10aWNrZXQtZ3VhcmQtdmVyc2lvbiI6MiwiYmQtdGlja2V0LWd1YXJkLWl0ZXJhdGlvbi12ZXJzaW9uIjoxLCJiZC10aWNrZXQtZ3VhcmQtcmVlLXB1YmxpYy1rZXkiOiJCTEo2R0lDalVoWW1XcHpGOFdrN0Vrc0dXcCtaUzNKY1g4NGNGY2k0TTl1TEowNjdUb21mbFU5aDdvWVBGamhNRWNRQWtKdnN1MnM3RmpTWnlJQXpHMjA9IiwiYmQtdGlja2V0LWd1YXJkLXdlYi12ZXJzaW9uIjoyfQ%3D%3D; download_guide=%221%2F20241216%2F0%22; sdk_source_info=7e276470716a68645a606960273f276364697660272927676c715a6d6069756077273f276364697660272927666d776a68605a607d71606b766c6a6b5a7666776c7571273f275e58272927666a6b766a69605a696c6061273f27636469766027292762696a6764695a7364776c6467696076273f275e5827292771273f273d33323131333c3036313632342778; bit_env=RiOY4jzzpxZoVCl6zdVSVhVRjdwHRTxqcqWdqMBZLPGjMdB4Tax1kAELHNTVAAh72KuhumewE4Lq6f0-VJ2UpJrkrhSxoPw9LUb3zQrq1OSwbeSPHkRlRgRQvO89sItdGUyq1oFr0XyRCnMYG87KSeWyc4x0czGR0o50hTDoDLG5rJVoRcdQOLvjiAegsqyytKF59sPX_QM9qffK2SqYsg0hCggURc_AI6kguDDE5DvG0bnyz1utw4z1eEnIoLrkGDqzqBZj4dOAr0BVU6ofbsS-pOQ2u2PM1dLP9FlBVBlVaqYVgHJeSLsR5k76BRTddUjTb4zEilVIEwAMJWGN4I1BxVt6fC9B5tBQpuT0lj3n3eKXCKXZsd8FrEs5_pbfDsxV-e_WMiXI2ff4qxiTC0U73sfo9OpicKICtZjdq8qsHxJuu6wVR36zvXeL2Wch5C6MzprNvkivv0l8nbh2mSgy1nabZr3dmU6NcR-Bg3Q3xTWUlR9aAUmpopC-cNuXjgLpT-Lw1AYGilSUnCvosth1Gfypq-b0MpgmdSDgTrQ%3D; gulu_source_res=eyJwX2luIjoiMDhjOGQ3ZTJiODQyNjZkZWI5Y2VkMGJiODNlNmY1ZWY0ZjMyNTE2ZmYyZjAzNDMzZjI0OWU1Y2Q1NTczNTk5NyJ9; passport_auth_mix_state=hp9bc3dgb1tm5wd8p82zawus27g0e3ue; IsDouyinActive=false
# =============== 支付宝配置 ===============
ALIPAY_APP_ID=2021006132600283
ALIPAY_PRIVATE_KEY_PATH=/home/rongye/ProgramFiles/ViGent2/backend/keys/app_private_key.pem
ALIPAY_PUBLIC_KEY_PATH=/home/rongye/ProgramFiles/ViGent2/backend/keys/alipay_public_key.pem
ALIPAY_NOTIFY_URL=https://vigent.hbyrkj.top/api/payment/notify
ALIPAY_RETURN_URL=https://vigent.hbyrkj.top/pay

View File

@@ -57,7 +57,17 @@ class Settings(BaseSettings):
LATENTSYNC_ENABLE_DEEPCACHE: bool = True # 启用 DeepCache 加速
LATENTSYNC_SEED: int = 1247 # 随机种子 (-1 则随机)
LATENTSYNC_USE_SERVER: bool = True # 使用常驻服务 (Persistent Server) 加速
# MuseTalk 配置
MUSETALK_GPU_ID: int = 0 # GPU ID (默认使用 GPU0)
MUSETALK_API_URL: str = "http://localhost:8011" # 常驻服务地址
MUSETALK_BATCH_SIZE: int = 8 # 推理批大小
MUSETALK_VERSION: str = "v15" # 模型版本
MUSETALK_USE_FLOAT16: bool = True # 半精度加速
# 混合唇形同步路由
LIPSYNC_DURATION_THRESHOLD: float = 120.0 # 秒,>=此值用 MuseTalk
# Supabase 配置
SUPABASE_URL: str = ""
SUPABASE_PUBLIC_URL: str = "" # 公网访问地址,用于生成前端可访问的 URL
@@ -76,17 +86,28 @@ class Settings(BaseSettings):
GLM_API_KEY: str = ""
GLM_MODEL: str = "glm-4.7-flash"
# 支付宝配置
ALIPAY_APP_ID: str = ""
ALIPAY_PRIVATE_KEY_PATH: str = "" # 应用私钥 PEM 文件路径
ALIPAY_PUBLIC_KEY_PATH: str = "" # 支付宝公钥 PEM 文件路径
ALIPAY_NOTIFY_URL: str = "" # 异步通知回调地址(公网可达)
ALIPAY_RETURN_URL: str = "" # 支付成功后同步跳转地址
ALIPAY_SANDBOX: bool = False # 是否使用沙箱环境
PAYMENT_AMOUNT: float = 999.00 # 会员价格(元)
PAYMENT_EXPIRE_DAYS: int = 365 # 会员有效天数
# CORS 配置 (逗号分隔的域名列表,* 表示允许所有)
CORS_ORIGINS: str = "*"
# 抖音 Cookie (用于视频下载功能,会过期需要定期更新)
DOUYIN_COOKIE: str = ""
@property
def LATENTSYNC_DIR(self) -> Path:
"""LatentSync 目录路径 (动态计算)"""
return self.BASE_DIR.parent.parent / "models" / "LatentSync"
@property
def MUSETALK_DIR(self) -> Path:
"""MuseTalk 目录路径 (动态计算)"""
return self.BASE_DIR.parent.parent / "models" / "MuseTalk"
class Config:
env_file = ".env"
extra = "ignore" # 忽略未知的环境变量

View File

@@ -1,12 +1,12 @@
"""
依赖注入模块:认证和用户获取
"""
from typing import Optional, Any, Dict, cast
from fastapi import Request, HTTPException, Depends, status
from app.core.security import decode_access_token
from app.repositories.sessions import get_session, delete_sessions
from app.repositories.users import get_user_by_id, deactivate_user_if_expired
from loguru import logger
from typing import Optional, Any, Dict, cast
from fastapi import Request, HTTPException, Depends, status
from app.core.security import decode_access_token
from app.repositories.sessions import get_session, delete_sessions
from app.repositories.users import get_user_by_id, deactivate_user_if_expired
from loguru import logger
async def get_token_from_cookie(request: Request) -> Optional[str]:
@@ -14,9 +14,9 @@ async def get_token_from_cookie(request: Request) -> Optional[str]:
return request.cookies.get("access_token")
async def get_current_user_optional(
request: Request
) -> Optional[Dict[str, Any]]:
async def get_current_user_optional(
request: Request
) -> Optional[Dict[str, Any]]:
"""
获取当前用户 (可选,未登录返回 None)
"""
@@ -29,26 +29,30 @@ async def get_current_user_optional(
return None
# 验证 session_token 是否有效 (单设备登录检查)
try:
session = get_session(token_data.user_id, token_data.session_token)
if not session:
logger.warning(f"Session token 无效: user_id={token_data.user_id}")
return None
user = cast(Optional[Dict[str, Any]], get_user_by_id(token_data.user_id))
if user and deactivate_user_if_expired(user):
delete_sessions(token_data.user_id)
return None
return user
except Exception as e:
logger.error(f"获取用户信息失败: {e}")
return None
try:
session = get_session(token_data.user_id, token_data.session_token)
if not session:
logger.warning(f"Session token 无效: user_id={token_data.user_id}")
return None
user = cast(Optional[Dict[str, Any]], get_user_by_id(token_data.user_id))
if user and deactivate_user_if_expired(user):
delete_sessions(token_data.user_id)
return None
if user and not user.get("is_active"):
delete_sessions(token_data.user_id)
return None
return user
except Exception as e:
logger.error(f"获取用户信息失败: {e}")
return None
async def get_current_user(
request: Request
) -> Dict[str, Any]:
async def get_current_user(
request: Request
) -> Dict[str, Any]:
"""
获取当前用户 (必须登录)
@@ -70,38 +74,45 @@ async def get_current_user(
detail="Token 无效或已过期"
)
try:
session = get_session(token_data.user_id, token_data.session_token)
if not session:
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail="会话已失效,请重新登录(可能已在其他设备登录)"
)
user = get_user_by_id(token_data.user_id)
if not user:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="用户不存在"
)
user = cast(Dict[str, Any], user)
if deactivate_user_if_expired(user):
delete_sessions(token_data.user_id)
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail="会员已到期,请续费"
)
return user
except HTTPException:
raise
except Exception as e:
logger.error(f"获取用户信息失败: {e}")
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail="服务器错误"
)
try:
session = get_session(token_data.user_id, token_data.session_token)
if not session:
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail="会话已失效,请重新登录(可能已在其他设备登录)"
)
user = get_user_by_id(token_data.user_id)
if not user:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="用户不存在"
)
user = cast(Dict[str, Any], user)
if deactivate_user_if_expired(user):
delete_sessions(token_data.user_id)
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail="会员已到期,请续费"
)
if not user.get("is_active"):
delete_sessions(token_data.user_id)
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail="账号已停用"
)
return user
except HTTPException:
raise
except Exception as e:
logger.error(f"获取用户信息失败: {e}")
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail="服务器错误"
)
async def get_current_admin(

View File

@@ -110,3 +110,28 @@ def set_auth_cookie(response: Response, token: str) -> None:
def clear_auth_cookie(response: Response) -> None:
"""清除认证 Cookie"""
response.delete_cookie(key="access_token")
def create_payment_token(user_id: str) -> str:
"""生成付费专用短期 JWT token30 分钟有效)"""
payload = {
"sub": user_id,
"purpose": "payment",
"exp": datetime.now(timezone.utc) + timedelta(minutes=30),
}
return jwt.encode(payload, settings.JWT_SECRET_KEY, algorithm=settings.JWT_ALGORITHM)
def decode_payment_token(token: str) -> str | None:
"""解析 payment_token返回 user_id仅 purpose=payment 有效)"""
try:
data = jwt.decode(
token,
settings.JWT_SECRET_KEY,
algorithms=[settings.JWT_ALGORITHM],
)
if data.get("purpose") != "payment":
return None
return data.get("sub")
except JWTError:
return None

View File

@@ -16,6 +16,7 @@ from app.modules.ai.router import router as ai_router
from app.modules.tools.router import router as tools_router
from app.modules.assets.router import router as assets_router
from app.modules.generated_audios.router import router as generated_audios_router
from app.modules.payment.router import router as payment_router
from loguru import logger
import os
@@ -126,6 +127,7 @@ app.include_router(ai_router) # /api/ai
app.include_router(tools_router, prefix="/api/tools", tags=["Tools"])
app.include_router(assets_router, prefix="/api/assets", tags=["Assets"])
app.include_router(generated_audios_router, prefix="/api/generated-audios", tags=["GeneratedAudios"])
app.include_router(payment_router) # /api/payment
@app.on_event("startup")

View File

@@ -2,6 +2,8 @@
AI 相关 API 路由
"""
from typing import Optional
from fastapi import APIRouter, HTTPException
from pydantic import BaseModel
from loguru import logger
@@ -21,9 +23,16 @@ class GenerateMetaRequest(BaseModel):
class GenerateMetaResponse(BaseModel):
"""生成标题标签响应"""
title: str
secondary_title: str = ""
tags: list[str]
class RewriteRequest(BaseModel):
"""改写请求"""
text: str
custom_prompt: Optional[str] = None
class TranslateRequest(BaseModel):
"""翻译请求"""
text: str
@@ -66,8 +75,24 @@ async def generate_meta(req: GenerateMetaRequest):
result = await glm_service.generate_title_tags(req.text)
return success_response(GenerateMetaResponse(
title=result.get("title", ""),
secondary_title=result.get("secondary_title", ""),
tags=result.get("tags", [])
).model_dump())
except Exception as e:
logger.error(f"Generate meta failed: {e}")
raise HTTPException(status_code=500, detail=str(e))
@router.post("/rewrite")
async def rewrite_script(req: RewriteRequest):
"""AI 改写文案"""
if not req.text or not req.text.strip():
raise HTTPException(status_code=400, detail="文案不能为空")
try:
logger.info(f"Rewriting text: {req.text[:50]}...")
rewritten = await glm_service.rewrite_script(req.text.strip(), req.custom_prompt)
return success_response({"rewritten_text": rewritten})
except Exception as e:
logger.error(f"Rewrite failed: {e}")
raise HTTPException(status_code=500, detail=str(e))

View File

@@ -1,30 +1,32 @@
"""
认证 API注册、登录、登出、修改密码
"""
from fastapi import APIRouter, HTTPException, Response, status, Request, Depends
from fastapi import APIRouter, HTTPException, Response, status, Request, Depends
from fastapi.responses import JSONResponse
from pydantic import BaseModel, field_validator
from app.core.security import (
get_password_hash,
verify_password,
create_access_token,
generate_session_token,
set_auth_cookie,
clear_auth_cookie,
decode_access_token
)
from app.repositories.sessions import create_session, delete_sessions
from app.repositories.users import (
create_user,
get_user_by_id,
get_user_by_phone,
user_exists_by_phone,
update_user,
deactivate_user_if_expired,
)
from app.core.deps import get_current_user
from app.core.response import success_response
from app.core.security import (
get_password_hash,
verify_password,
create_access_token,
generate_session_token,
set_auth_cookie,
clear_auth_cookie,
decode_access_token,
create_payment_token,
)
from app.repositories.sessions import create_session, delete_sessions
from app.repositories.users import (
create_user,
get_user_by_id,
get_user_by_phone,
user_exists_by_phone,
update_user,
deactivate_user_if_expired,
)
from app.core.deps import get_current_user
from app.core.response import success_response
from loguru import logger
from typing import Optional, Any, cast
from typing import Optional, Any, cast
import re
router = APIRouter(prefix="/api/auth", tags=["认证"])
@@ -84,26 +86,26 @@ async def register(request: RegisterRequest):
注册后状态为 pending需要管理员激活
"""
try:
if user_exists_by_phone(request.phone):
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="该手机号已注册"
)
if user_exists_by_phone(request.phone):
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="该手机号已注册"
)
# 创建用户
password_hash = get_password_hash(request.password)
create_user({
"phone": request.phone,
"password_hash": password_hash,
"username": request.username or f"用户{request.phone[-4:]}",
"role": "pending",
"is_active": False
})
create_user({
"phone": request.phone,
"password_hash": password_hash,
"username": request.username or f"用户{request.phone[-4:]}",
"role": "pending",
"is_active": False
})
logger.info(f"新用户注册: {request.phone}")
return success_response(message="注册成功,请等待管理员审核激活")
return success_response(message="注册成功,请等待管理员审核激活")
except HTTPException:
raise
except Exception as e:
@@ -124,12 +126,12 @@ async def login(request: LoginRequest, response: Response):
- 实现"后踢前"单设备登录
"""
try:
user = cast(dict[str, Any], get_user_by_phone(request.phone) or {})
if not user:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="手机号或密码错误"
)
user = cast(dict[str, Any], get_user_by_phone(request.phone) or {})
if not user:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="手机号或密码错误"
)
# 验证密码
if not verify_password(request.password, user["password_hash"]):
@@ -138,27 +140,33 @@ async def login(request: LoginRequest, response: Response):
detail="手机号或密码错误"
)
# 授权过期自动停用账号
if deactivate_user_if_expired(user):
delete_sessions(user["id"])
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail="会员已到期,请续费"
)
# 检查是否激活
if not user["is_active"]:
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail="账号未激活,请等待管理员审核"
)
# 过期自动停用(注意:只更新 DB不修改内存中的 user 字典)
expired = deactivate_user_if_expired(user)
if expired:
delete_sessions(user["id"])
# 过期 或 未激活(新注册)→ 返回付费指引
if expired or not user["is_active"]:
payment_token = create_payment_token(user["id"])
return JSONResponse(
status_code=403,
content={
"success": False,
"message": "请付费开通会员",
"code": 403,
"data": {
"reason": "PAYMENT_REQUIRED",
"payment_token": payment_token,
}
}
)
# 生成新的 session_token (后踢前)
session_token = generate_session_token()
# 删除旧 session插入新 session
delete_sessions(user["id"])
create_session(user["id"], session_token, None)
delete_sessions(user["id"])
create_session(user["id"], session_token, None)
# 生成 JWT Token
token = create_access_token(user["id"], session_token)
@@ -168,19 +176,19 @@ async def login(request: LoginRequest, response: Response):
logger.info(f"用户登录: {request.phone}")
return success_response(
data={
"user": UserResponse(
id=user["id"],
phone=user["phone"],
username=user.get("username"),
role=user["role"],
is_active=user["is_active"],
expires_at=user.get("expires_at")
).model_dump()
},
message="登录成功",
)
return success_response(
data={
"user": UserResponse(
id=user["id"],
phone=user["phone"],
username=user.get("username"),
role=user["role"],
is_active=user["is_active"],
expires_at=user.get("expires_at")
).model_dump()
},
message="登录成功",
)
except HTTPException:
raise
except Exception as e:
@@ -192,10 +200,10 @@ async def login(request: LoginRequest, response: Response):
@router.post("/logout")
async def logout(response: Response):
"""用户登出"""
clear_auth_cookie(response)
return success_response(message="已登出")
async def logout(response: Response):
"""用户登出"""
clear_auth_cookie(response)
return success_response(message="已登出")
@router.post("/change-password")
@@ -223,12 +231,12 @@ async def change_password(request: ChangePasswordRequest, req: Request, response
)
try:
user = cast(dict[str, Any], get_user_by_id(token_data.user_id) or {})
if not user:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="用户不存在"
)
user = cast(dict[str, Any], get_user_by_id(token_data.user_id) or {})
if not user:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="用户不存在"
)
# 验证当前密码
if not verify_password(request.old_password, user["password_hash"]):
@@ -239,13 +247,13 @@ async def change_password(request: ChangePasswordRequest, req: Request, response
# 更新密码
new_password_hash = get_password_hash(request.new_password)
update_user(user["id"], {"password_hash": new_password_hash})
update_user(user["id"], {"password_hash": new_password_hash})
# 生成新的 session token使旧 token 失效
new_session_token = generate_session_token()
delete_sessions(user["id"])
create_session(user["id"], new_session_token, None)
delete_sessions(user["id"])
create_session(user["id"], new_session_token, None)
# 生成新的 JWT Token
new_token = create_access_token(user["id"], new_session_token)
@@ -253,7 +261,7 @@ async def change_password(request: ChangePasswordRequest, req: Request, response
logger.info(f"用户修改密码: {user['phone']}")
return success_response(message="密码修改成功")
return success_response(message="密码修改成功")
except HTTPException:
raise
except Exception as e:
@@ -264,14 +272,14 @@ async def change_password(request: ChangePasswordRequest, req: Request, response
)
@router.get("/me")
async def get_me(user: dict = Depends(get_current_user)):
"""获取当前用户信息"""
return success_response(UserResponse(
id=user["id"],
phone=user["phone"],
username=user.get("username"),
role=user["role"],
is_active=user["is_active"],
expires_at=user.get("expires_at")
).model_dump())
@router.get("/me")
async def get_me(user: dict = Depends(get_current_user)):
"""获取当前用户信息"""
return success_response(UserResponse(
id=user["id"],
phone=user["phone"],
username=user.get("username"),
role=user["role"],
is_active=user["is_active"],
expires_at=user.get("expires_at")
).model_dump())

View File

View File

@@ -0,0 +1,52 @@
"""
支付 API创建订单、异步通知、状态查询
遵循 BACKEND_DEV.md 规范router 只做参数校验、调用 service、返回统一响应
"""
from fastapi import APIRouter, HTTPException, Request, status
from fastapi.responses import PlainTextResponse
from app.core.response import success_response
from .schemas import CreateOrderRequest, CreateOrderResponse, OrderStatusResponse
from . import service
router = APIRouter(prefix="/api/payment", tags=["支付"])
@router.post("/create-order")
async def create_payment_order(request: CreateOrderRequest):
"""创建支付宝电脑网站支付订单,返回收银台 URL"""
try:
result = service.create_payment_order(request.payment_token)
except ValueError as e:
raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail=str(e))
except RuntimeError as e:
raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail=str(e))
return success_response(
CreateOrderResponse(**result).model_dump()
)
@router.post("/notify")
async def payment_notify(request: Request):
"""
支付宝异步通知回调
必须返回纯文本 "success"(不是 JSON否则支付宝会重复推送。
"""
form_data = await request.form()
verified = service.handle_payment_notify(dict(form_data))
return PlainTextResponse("success" if verified else "fail")
@router.get("/status/{out_trade_no}")
async def check_payment_status(out_trade_no: str):
"""查询订单支付状态(前端轮询)"""
order_status = service.get_order_status(out_trade_no)
if order_status is None:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="订单不存在")
return success_response(
OrderStatusResponse(status=order_status).model_dump()
)

View File

@@ -0,0 +1,15 @@
from pydantic import BaseModel
class CreateOrderRequest(BaseModel):
payment_token: str
class CreateOrderResponse(BaseModel):
pay_url: str
out_trade_no: str
amount: float
class OrderStatusResponse(BaseModel):
status: str

View File

@@ -0,0 +1,137 @@
"""
支付业务服务
职责Alipay SDK 封装、创建订单、处理支付通知、查询状态
遵循 BACKEND_DEV.md "薄路由 + 厚服务" 原则
"""
from datetime import datetime, timezone, timedelta
import uuid
from alipay import AliPay
from loguru import logger
from app.core.config import settings
from app.core.security import decode_payment_token
from app.repositories.orders import create_order, get_order_by_trade_no, update_order_status
from app.repositories.users import update_user
# 支付宝网关地址
ALIPAY_GATEWAY = "https://openapi.alipay.com/gateway.do"
ALIPAY_GATEWAY_SANDBOX = "https://openapi-sandbox.dl.alipaydev.com/gateway.do"
def _get_alipay_client() -> AliPay:
"""延迟初始化 Alipay 客户端"""
return AliPay(
appid=settings.ALIPAY_APP_ID,
app_notify_url=settings.ALIPAY_NOTIFY_URL,
app_private_key_string=open(settings.ALIPAY_PRIVATE_KEY_PATH).read(),
alipay_public_key_string=open(settings.ALIPAY_PUBLIC_KEY_PATH).read(),
sign_type="RSA2",
debug=settings.ALIPAY_SANDBOX,
)
def _create_page_pay_url(out_trade_no: str, amount: float, subject: str) -> str | None:
"""调用 alipay.trade.page.pay返回支付宝收银台 URL"""
client = _get_alipay_client()
order_string = client.api_alipay_trade_page_pay(
subject=subject,
out_trade_no=out_trade_no,
total_amount=amount,
return_url=settings.ALIPAY_RETURN_URL,
)
if not order_string:
logger.error(f"电脑网站支付下单失败: {out_trade_no}")
return None
gateway = ALIPAY_GATEWAY_SANDBOX if settings.ALIPAY_SANDBOX else ALIPAY_GATEWAY
pay_url = f"{gateway}?{order_string}"
logger.info(f"电脑网站支付下单成功: {out_trade_no}")
return pay_url
def _verify_signature(data: dict, signature: str) -> bool:
"""验证支付宝异步通知签名"""
client = _get_alipay_client()
return client.verify(data, signature)
def create_payment_order(payment_token: str) -> dict:
"""
创建支付订单完整流程
Returns: {"pay_url": str, "out_trade_no": str, "amount": float}
Raises: ValueError (token 无效), RuntimeError (API 失败)
"""
user_id = decode_payment_token(payment_token)
if not user_id:
raise ValueError("付费凭证无效或已过期,请重新登录")
out_trade_no = f"VG_{int(datetime.now().timestamp())}_{uuid.uuid4().hex[:8]}"
amount = settings.PAYMENT_AMOUNT
create_order(user_id, out_trade_no, amount)
pay_url = _create_page_pay_url(out_trade_no, amount, "IPAgent 会员开通")
if not pay_url:
raise RuntimeError("创建支付订单失败,请稍后重试")
logger.info(f"用户 {user_id} 创建支付订单: {out_trade_no}")
return {"pay_url": pay_url, "out_trade_no": out_trade_no, "amount": amount}
def handle_payment_notify(form_data: dict) -> bool:
"""
处理支付宝异步通知完整流程
Returns: True=验签通过, False=验签失败
"""
data = dict(form_data)
signature = data.pop("sign", "")
data.pop("sign_type", None)
if not _verify_signature(data, signature):
logger.warning(f"支付宝通知验签失败: {data.get('out_trade_no')}")
return False
out_trade_no = data.get("out_trade_no", "")
trade_status = data.get("trade_status", "")
trade_no = data.get("trade_no", "")
logger.info(f"收到支付宝通知: {out_trade_no}, status={trade_status}, trade_no={trade_no}")
if trade_status not in ("TRADE_SUCCESS", "TRADE_FINISHED"):
return True
order = get_order_by_trade_no(out_trade_no)
if not order:
logger.warning(f"订单不存在: {out_trade_no}")
return True
if order["status"] == "paid":
logger.info(f"订单已处理过: {out_trade_no}")
return True
update_order_status(out_trade_no, "paid", trade_no)
user_id = order["user_id"]
expires_at = (datetime.now(timezone.utc) + timedelta(days=settings.PAYMENT_EXPIRE_DAYS)).isoformat()
update_user(user_id, {
"is_active": True,
"role": "user",
"expires_at": expires_at,
})
logger.success(f"用户 {user_id} 支付成功,已激活,有效期至 {expires_at}")
return True
def get_order_status(out_trade_no: str) -> str | None:
"""查询订单支付状态"""
order = get_order_by_trade_no(out_trade_no)
if not order:
return None
return order["status"]

View File

@@ -2,9 +2,11 @@ import re
import os
import time
import json
import hashlib
import asyncio
import subprocess
import tempfile
import unicodedata
from pathlib import Path
from typing import Optional
@@ -19,8 +21,16 @@ BUCKET_REF_AUDIOS = "ref-audios"
def sanitize_filename(filename: str) -> str:
"""清理文件名,移除特殊字符"""
safe_name = re.sub(r'[<>:"/\\|?*\s]', '_', filename)
"""清理文件名用于 Storage key仅保留 ASCII 安全字符)。"""
normalized = unicodedata.normalize("NFKD", filename)
ascii_name = normalized.encode("ascii", "ignore").decode("ascii")
safe_name = re.sub(r"[^A-Za-z0-9._-]+", "_", ascii_name).strip("._-")
# 纯中文/emoji 等场景会被清空,使用稳定哈希兜底,避免 InvalidKey
if not safe_name:
digest = hashlib.md5(filename.encode("utf-8")).hexdigest()[:12]
safe_name = f"audio_{digest}"
if len(safe_name) > 50:
ext = Path(safe_name).suffix
safe_name = safe_name[:50 - len(ext)] + ext

View File

@@ -13,11 +13,12 @@ router = APIRouter()
async def extract_script_tool(
file: Optional[UploadFile] = File(None),
url: Optional[str] = Form(None),
rewrite: bool = Form(True)
rewrite: bool = Form(True),
custom_prompt: Optional[str] = Form(None)
):
"""独立文案提取工具"""
try:
result = await service.extract_script(file=file, url=url, rewrite=rewrite)
result = await service.extract_script(file=file, url=url, rewrite=rewrite, custom_prompt=custom_prompt)
return success_response(result)
except ValueError as e:
raise HTTPException(400, str(e))

View File

@@ -17,9 +17,9 @@ from app.services.whisper_service import whisper_service
from app.services.glm_service import glm_service
async def extract_script(file=None, url: Optional[str] = None, rewrite: bool = True) -> dict:
async def extract_script(file=None, url: Optional[str] = None, rewrite: bool = True, custom_prompt: Optional[str] = None) -> dict:
"""
文案提取:上传文件或视频链接 -> Whisper 转写 -> (可选) GLM 洗稿
文案提取:上传文件或视频链接 -> Whisper 转写 -> (可选) GLM 改写
"""
if not file and not url:
raise ValueError("必须提供文件或视频链接")
@@ -63,11 +63,15 @@ async def extract_script(file=None, url: Optional[str] = None, rewrite: bool = T
# 2. 提取文案 (Whisper)
script = await whisper_service.transcribe(str(audio_path))
# 3. AI 洗稿 (GLM)
# 3. AI 改写 (GLM) — 失败时降级返回原文
rewritten = None
if rewrite and script and len(script.strip()) > 0:
logger.info("Rewriting script...")
rewritten = await glm_service.rewrite_script(script)
try:
rewritten = await glm_service.rewrite_script(script, custom_prompt)
except Exception as e:
logger.warning(f"GLM rewrite failed, returning original script: {e}")
rewritten = None
return {
"original_script": script,
@@ -156,125 +160,120 @@ def _download_yt_dlp(url_value: str, temp_dir: Path, timestamp: int) -> Path:
'quiet': True,
'no_warnings': True,
'http_headers': {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36',
'Referer': 'https://www.douyin.com/',
}
}
with yt_dlp.YoutubeDL() as ydl_raw:
ydl: Any = ydl_raw
ydl.params.update(ydl_opts)
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
info = ydl.extract_info(url_value, download=True)
if 'requested_downloads' in info:
downloaded_file = info['requested_downloads'][0]['filepath']
else:
ext = info.get('ext', 'mp4')
id = info.get('id')
downloaded_file = str(temp_dir / f"tool_download_{timestamp}_{id}.{ext}")
vid_id = info.get('id')
downloaded_file = str(temp_dir / f"tool_download_{timestamp}_{vid_id}.{ext}")
return Path(downloaded_file)
async def _download_douyin_manual(url: str, temp_dir: Path, timestamp: int) -> Optional[Path]:
"""手动下载抖音视频 (Fallback)"""
logger.info(f"[SuperIPAgent] Starting download for: {url}")
"""手动下载抖音视频 (Fallback) — 通过移动端分享页获取播放地址"""
logger.info(f"[douyin-fallback] Starting download for: {url}")
try:
# 1. 解析短链接,提取视频 ID
headers = {
"user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36"
"user-agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X) AppleWebKit/605.1.15"
}
async with httpx.AsyncClient(follow_redirects=True, timeout=10.0) as client:
resp = await client.get(url, headers=headers)
final_url = str(resp.url)
logger.info(f"[SuperIPAgent] Final URL: {final_url}")
logger.info(f"[douyin-fallback] Final URL: {final_url}")
modal_id = None
video_id = None
match = re.search(r'/video/(\d+)', final_url)
if match:
modal_id = match.group(1)
video_id = match.group(1)
if not modal_id:
logger.error("[SuperIPAgent] Could not extract modal_id")
if not video_id:
logger.error("[douyin-fallback] Could not extract video_id")
return None
logger.info(f"[SuperIPAgent] Extracted modal_id: {modal_id}")
logger.info(f"[douyin-fallback] Extracted video_id: {video_id}")
target_url = f"https://www.douyin.com/user/MS4wLjABAAAAN_s_hups7LD0N4qnrM3o2gI0vuG3pozNaEolz2_py3cHTTrpVr1Z4dukFD9SOlwY?from_tab_name=main&modal_id={modal_id}"
# 2. 获取新鲜 ttwid
ttwid = ""
try:
async with httpx.AsyncClient(timeout=10.0) as client:
ttwid_resp = await client.post(
"https://ttwid.bytedance.com/ttwid/union/register/",
json={
"region": "cn", "aid": 6383, "needFid": False,
"service": "www.douyin.com",
"migrate_info": {"ticket": "", "source": "node"},
"cbUrlProtocol": "https", "union": True,
}
)
ttwid = ttwid_resp.cookies.get("ttwid", "")
logger.info(f"[douyin-fallback] Got fresh ttwid (len={len(ttwid)})")
except Exception as e:
logger.warning(f"[douyin-fallback] Failed to get ttwid: {e}")
from app.core.config import settings
if not settings.DOUYIN_COOKIE:
logger.warning("[SuperIPAgent] DOUYIN_COOKIE 未配置,视频下载可能失败")
headers_with_cookie = {
"accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7",
"cookie": settings.DOUYIN_COOKIE,
"user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36",
# 3. 访问移动端分享页提取播放地址
page_headers = {
"user-agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X) AppleWebKit/605.1.15",
"cookie": f"ttwid={ttwid}" if ttwid else "",
}
logger.info(f"[SuperIPAgent] Requesting page with Cookie...")
async with httpx.AsyncClient(follow_redirects=True, timeout=15.0) as client:
page_resp = await client.get(
f"https://m.douyin.com/share/video/{video_id}",
headers=page_headers,
)
async with httpx.AsyncClient(timeout=10.0) as client:
response = await client.get(target_url, headers=headers_with_cookie)
page_text = page_resp.text
logger.info(f"[douyin-fallback] Mobile page length: {len(page_text)}")
content_match = re.findall(r'<script id="RENDER_DATA" type="application/json">(.*?)</script>', response.text)
if not content_match:
if "SSR_HYDRATED_DATA" in response.text:
content_match = re.findall(r'<script id="SSR_HYDRATED_DATA" type="application/json">(.*?)</script>', response.text)
if not content_match:
logger.error(f"[SuperIPAgent] Could not find RENDER_DATA in page (len={len(response.text)})")
return None
content = unquote(content_match[0])
try:
data = json.loads(content)
except:
logger.error("[SuperIPAgent] JSON decode failed")
return None
video_url = None
try:
if "app" in data and "videoDetail" in data["app"]:
info = data["app"]["videoDetail"]["video"]
if "bitRateList" in info and info["bitRateList"]:
video_url = info["bitRateList"][0]["playAddr"][0]["src"]
elif "playAddr" in info and info["playAddr"]:
video_url = info["playAddr"][0]["src"]
except Exception as e:
logger.error(f"[SuperIPAgent] Path extraction failed: {e}")
if not video_url:
logger.error("[SuperIPAgent] No video_url found")
# 4. 提取 play_addr
addr_match = re.search(
r'"play_addr":\{"uri":"([^"]+)","url_list":\["([^"]+)"',
page_text,
)
if not addr_match:
logger.error("[douyin-fallback] Could not find play_addr in mobile page")
return None
video_url = addr_match.group(2).replace(r"\u002F", "/")
if video_url.startswith("//"):
video_url = "https:" + video_url
logger.info(f"[SuperIPAgent] Found video URL: {video_url[:50]}...")
logger.info(f"[douyin-fallback] Found video URL: {video_url[:80]}...")
# 5. 下载视频
temp_path = temp_dir / f"douyin_manual_{timestamp}.mp4"
download_headers = {
'Referer': 'https://www.douyin.com/',
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36',
"Referer": "https://www.douyin.com/",
"User-Agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X) AppleWebKit/605.1.15",
}
async with httpx.AsyncClient(timeout=60.0) as client:
async with httpx.AsyncClient(timeout=120.0, follow_redirects=True) as client:
async with client.stream("GET", video_url, headers=download_headers) as dl_resp:
if dl_resp.status_code == 200:
with open(temp_path, 'wb') as f:
with open(temp_path, "wb") as f:
async for chunk in dl_resp.aiter_bytes(chunk_size=8192):
f.write(chunk)
logger.info(f"[SuperIPAgent] Downloaded successfully: {temp_path}")
logger.info(f"[douyin-fallback] Downloaded successfully: {temp_path}")
return temp_path
else:
logger.error(f"[SuperIPAgent] Download failed: {dl_resp.status_code}")
logger.error(f"[douyin-fallback] Download failed: {dl_resp.status_code}")
return None
except Exception as e:
logger.error(f"[SuperIPAgent] Logic failed: {e}")
logger.error(f"[douyin-fallback] Logic failed: {e}")
return None

View File

@@ -21,9 +21,15 @@ class GenerateRequest(BaseModel):
language: str = "zh-CN"
generated_audio_id: Optional[str] = None # 预生成配音 ID存在时跳过内联 TTS
title: Optional[str] = None
title_display_mode: Literal["short", "persistent"] = "short"
title_duration: float = 4.0
enable_subtitles: bool = True
subtitle_style_id: Optional[str] = None
title_style_id: Optional[str] = None
secondary_title: Optional[str] = None
secondary_title_style_id: Optional[str] = None
secondary_title_font_size: Optional[int] = None
secondary_title_top_margin: Optional[int] = None
subtitle_font_size: Optional[int] = None
title_font_size: Optional[int] = None
title_top_margin: Optional[int] = None

View File

@@ -1,5 +1,6 @@
from typing import Optional, Any, List
from pathlib import Path
import asyncio
import time
import traceback
import httpx
@@ -415,18 +416,21 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
lipsync_start = time.time()
# ── 第一步:下载所有素材并检测分辨率 ──
# ── 第一步:并行下载所有素材并检测分辨率 ──
material_locals: List[Path] = []
resolutions = []
for i, assignment in enumerate(assignments):
async def _download_and_normalize(i: int, assignment: dict):
"""下载单个素材并归一化方向"""
material_local = temp_dir / f"{task_id}_material_{i}.mp4"
temp_files.append(material_local)
await _download_material(assignment["material_path"], material_local)
# 归一化旋转元数据,确保分辨率判断与后续推理一致
normalized_material = temp_dir / f"{task_id}_material_{i}_norm.mp4"
normalized_result = video.normalize_orientation(
loop = asyncio.get_event_loop()
normalized_result = await loop.run_in_executor(
None,
video.normalize_orientation,
str(material_local),
str(normalized_material),
)
@@ -434,8 +438,17 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
temp_files.append(normalized_material)
material_local = normalized_material
material_locals.append(material_local)
resolutions.append(video.get_resolution(str(material_local)))
res = video.get_resolution(str(material_local))
return material_local, res
download_tasks = [
_download_and_normalize(i, assignment)
for i, assignment in enumerate(assignments)
]
download_results = await asyncio.gather(*download_tasks)
for local, res in download_results:
material_locals.append(local)
resolutions.append(res)
# 按用户选择的画面比例统一分辨率
base_res = target_resolution
@@ -443,29 +456,42 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
if need_scale:
logger.info(f"[MultiMat] 素材分辨率不一致,统一到 {base_res[0]}x{base_res[1]}")
# ── 第二步:裁剪每段素材到对应时长 ──
prepared_segments: List[Path] = []
# ── 第二步:并行裁剪每段素材到对应时长 ──
prepared_segments: List[Path] = [None] * num_segments
for i, assignment in enumerate(assignments):
seg_progress = 15 + int((i / num_segments) * 30) # 15% → 45%
async def _prepare_one_segment(i: int, assignment: dict):
"""将单个素材裁剪/循环到对应时长"""
seg_dur = assignment["end"] - assignment["start"]
_update_task(
task_id,
progress=seg_progress,
message=f"正在准备素材 {i+1}/{num_segments}..."
)
prepared_path = temp_dir / f"{task_id}_prepared_{i}.mp4"
temp_files.append(prepared_path)
video.prepare_segment(
str(material_locals[i]), seg_dur, str(prepared_path),
# 多素材拼接前统一重编码为同分辨率/同编码,避免 concat 仅保留首段
target_resolution=base_res,
source_start=assignment.get("source_start", 0.0),
source_end=assignment.get("source_end"),
target_fps=25,
loop = asyncio.get_event_loop()
await loop.run_in_executor(
None,
video.prepare_segment,
str(material_locals[i]),
seg_dur,
str(prepared_path),
base_res,
assignment.get("source_start", 0.0),
assignment.get("source_end"),
25,
)
prepared_segments.append(prepared_path)
return i, prepared_path
_update_task(
task_id,
progress=15,
message=f"正在并行准备 {num_segments} 个素材片段..."
)
prepare_tasks = [
_prepare_one_segment(i, assignment)
for i, assignment in enumerate(assignments)
]
prepare_results = await asyncio.gather(*prepare_tasks)
for i, path in prepare_results:
prepared_segments[i] = path
# ── 第三步:拼接所有素材片段 ──
_update_task(task_id, progress=50, message="正在拼接素材片段...")
@@ -553,59 +579,100 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
print(f"[Pipeline] LipSync completed in {lipsync_time:.1f}s")
_update_task(task_id, progress=80)
# 单素材模式Whisper 在 LatentSync 之后
if req.enable_subtitles:
# 单素材模式Whisper 延迟到下方与 BGM 并行执行
if not req.enable_subtitles:
captions_path = None
_update_task(task_id, progress=85)
# ── Whisper 字幕 + BGM 混音 并行(两者都只依赖 audio_path──
final_audio_path = audio_path
_whisper_task = None
_bgm_task = None
# 单素材模式下 Whisper 尚未执行,这里与 BGM 并行启动
need_whisper = not is_multi and req.enable_subtitles and captions_path is None
if need_whisper:
captions_path = temp_dir / f"{task_id}_captions.json"
temp_files.append(captions_path)
_captions_path_str = str(captions_path)
async def _run_whisper():
_update_task(task_id, message="正在生成字幕 (Whisper)...", progress=82)
captions_path = temp_dir / f"{task_id}_captions.json"
temp_files.append(captions_path)
try:
await whisper_service.align(
audio_path=str(audio_path),
text=req.text,
output_path=str(captions_path),
output_path=_captions_path_str,
language=_locale_to_whisper_lang(req.language),
original_text=req.text,
)
print(f"[Pipeline] Whisper alignment completed")
return True
except Exception as e:
logger.warning(f"Whisper alignment failed, skipping subtitles: {e}")
captions_path = None
return False
_update_task(task_id, progress=85)
_whisper_task = _run_whisper()
final_audio_path = audio_path
if req.bgm_id:
_update_task(task_id, message="正在合成背景音乐...", progress=86)
bgm_path = resolve_bgm_path(req.bgm_id)
if bgm_path:
mix_output_path = temp_dir / f"{task_id}_audio_mix.wav"
temp_files.append(mix_output_path)
volume = req.bgm_volume if req.bgm_volume is not None else 0.2
volume = max(0.0, min(float(volume), 1.0))
try:
video.mix_audio(
voice_path=str(audio_path),
bgm_path=str(bgm_path),
output_path=str(mix_output_path),
bgm_volume=volume
)
final_audio_path = mix_output_path
except Exception as e:
logger.warning(f"BGM mix failed, fallback to voice only: {e}")
_mix_output = str(mix_output_path)
_bgm_path = str(bgm_path)
_voice_path = str(audio_path)
_volume = volume
async def _run_bgm():
_update_task(task_id, message="正在合成背景音乐...", progress=86)
loop = asyncio.get_event_loop()
try:
await loop.run_in_executor(
None,
video.mix_audio,
_voice_path,
_bgm_path,
_mix_output,
_volume,
)
return True
except Exception as e:
logger.warning(f"BGM mix failed, fallback to voice only: {e}")
return False
_bgm_task = _run_bgm()
else:
logger.warning(f"BGM not found: {req.bgm_id}")
use_remotion = (captions_path and captions_path.exists()) or req.title
# 并行等待 Whisper + BGM
parallel_tasks = [t for t in (_whisper_task, _bgm_task) if t is not None]
if parallel_tasks:
results = await asyncio.gather(*parallel_tasks)
result_idx = 0
if _whisper_task is not None:
if not results[result_idx]:
captions_path = None
result_idx += 1
if _bgm_task is not None:
if results[result_idx]:
final_audio_path = mix_output_path
use_remotion = (captions_path and captions_path.exists()) or req.title or req.secondary_title
subtitle_style = None
title_style = None
secondary_title_style = None
if req.enable_subtitles:
subtitle_style = get_style("subtitle", req.subtitle_style_id) or get_default_style("subtitle")
if req.title:
title_style = get_style("title", req.title_style_id) or get_default_style("title")
if req.secondary_title:
secondary_title_style = get_style("title", req.secondary_title_style_id) or get_default_style("title")
if req.subtitle_font_size and req.enable_subtitles:
if subtitle_style is None:
@@ -627,6 +694,16 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
subtitle_style = {}
subtitle_style["bottom_margin"] = int(req.subtitle_bottom_margin)
if req.secondary_title_font_size and req.secondary_title:
if secondary_title_style is None:
secondary_title_style = {}
secondary_title_style["font_size"] = int(req.secondary_title_font_size)
if req.secondary_title_top_margin is not None and req.secondary_title:
if secondary_title_style is None:
secondary_title_style = {}
secondary_title_style["top_margin"] = int(req.secondary_title_top_margin)
if use_remotion:
subtitle_style = prepare_style_for_remotion(
subtitle_style,
@@ -638,6 +715,11 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
temp_dir,
f"{task_id}_title_font"
)
secondary_title_style = prepare_style_for_remotion(
secondary_title_style,
temp_dir,
f"{task_id}_secondary_title_font"
)
final_output_local_path = temp_dir / f"{task_id}_output.mp4"
temp_files.append(final_output_local_path)
@@ -657,16 +739,26 @@ async def process_video_generation(task_id: str, req: GenerateRequest, user_id:
mapped = 87 + int(percent * 0.08)
_update_task(task_id, progress=mapped)
title_display_mode = (
req.title_display_mode
if req.title_display_mode in ("short", "persistent")
else "short"
)
title_duration = max(0.5, min(float(req.title_duration or 4.0), 30.0))
await remotion_service.render(
video_path=str(composed_video_path),
output_path=str(final_output_local_path),
captions_path=str(captions_path) if captions_path else None,
title=req.title,
title_duration=3.0,
title_duration=title_duration,
title_display_mode=title_display_mode,
fps=25,
enable_subtitles=req.enable_subtitles,
subtitle_style=subtitle_style,
title_style=title_style,
secondary_title=req.secondary_title,
secondary_title_style=secondary_title_style,
on_progress=on_remotion_progress
)
print(f"[Pipeline] Remotion render completed")

View File

@@ -0,0 +1,34 @@
"""
订单数据访问层
"""
from datetime import datetime, timezone
from typing import Any, Dict, Optional, cast
from app.core.supabase import get_supabase
def create_order(user_id: str, out_trade_no: str, amount: float) -> Dict[str, Any]:
supabase = get_supabase()
result = supabase.table("orders").insert({
"user_id": user_id,
"out_trade_no": out_trade_no,
"amount": amount,
"status": "pending",
}).execute()
return cast(Dict[str, Any], (result.data or [{}])[0])
def get_order_by_trade_no(out_trade_no: str) -> Optional[Dict[str, Any]]:
supabase = get_supabase()
result = supabase.table("orders").select("*").eq("out_trade_no", out_trade_no).single().execute()
return cast(Optional[Dict[str, Any]], result.data or None)
def update_order_status(out_trade_no: str, status: str, trade_no: str | None = None) -> None:
supabase = get_supabase()
payload: Dict[str, Any] = {"status": status}
if trade_no:
payload["trade_no"] = trade_no
if status == "paid":
payload["paid_at"] = datetime.now(timezone.utc).isoformat()
supabase.table("orders").update(payload).eq("out_trade_no", out_trade_no).execute()

View File

@@ -35,18 +35,19 @@ class GLMService:
Returns:
{"title": "标题", "tags": ["标签1", "标签2", ...]}
"""
prompt = f"""根据以下口播文案生成一个吸引人的短视频标题和3个相关标签。
prompt = f"""根据以下口播文案,生成一个吸引人的短视频标题、副标题和3个相关标签。
口播文案:
{text}
要求:
1. 标题要简洁有力能吸引观众点击不超过10个字
2. 标签要与内容相关便于搜索和推荐只要3个
3. 标题和标签必须使用与口播文案相同的语言(如文案是英文就用英文,日文就用日文)
2. 副标题是对标题的补充说明或描述性文字不超过20个字
3. 标签要与内容相关便于搜索和推荐只要3个
4. 标题、副标题和标签必须使用与口播文案相同的语言(如文案是英文就用英文,日文就用日文)
请严格按以下JSON格式返回不要包含其他内容
{{"title": "标题", "tags": ["标签1", "标签2", "标签3"]}}"""
{{"title": "标题", "secondary_title": "副标题", "tags": ["标签1", "标签2", "标签3"]}}"""
try:
client = self._get_client()
@@ -75,17 +76,24 @@ class GLMService:
logger.error(f"GLM service error: {e}")
raise Exception(f"AI 生成失败: {str(e)}")
async def rewrite_script(self, text: str) -> str:
async def rewrite_script(self, text: str, custom_prompt: str | None = None) -> str:
"""
AI 洗稿(文案改写)
AI 改写文案
Args:
text: 原始文案
custom_prompt: 自定义提示词,为空则使用默认提示词
Returns:
改写后的文案
"""
prompt = f"""请将以下视频文案进行改写。
if custom_prompt and custom_prompt.strip():
prompt = f"""{custom_prompt.strip()}
原始文案:
{text}"""
else:
prompt = f"""请将以下视频文案进行改写。
原始文案:
{text}
@@ -174,6 +182,8 @@ class GLMService:
# 尝试提取 JSON 块
json_match = re.search(r'\{[^{}]*"title"[^{}]*"tags"[^{}]*\}', content, re.DOTALL)
if not json_match:
json_match = re.search(r'\{[^{}]*"title"[^{}]*"secondary_title"[^{}]*"tags"[^{}]*\}', content, re.DOTALL)
if json_match:
try:
return json.loads(json_match.group())
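Two behavioural changes sit in this file: the meta prompt now also asks for a `secondary_title`, and `rewrite_script` accepts a caller-supplied prompt that replaces the built-in rewrite instructions while the original script is still appended under 「原始文案:」. A brief usage sketch; the instance wiring is assumed:

```python
import asyncio

async def demo(glm_service: "GLMService") -> None:
    # Built-in rewrite instructions
    out_default = await glm_service.rewrite_script("今天分享三个提升效率的小技巧。")

    # A caller-supplied prompt replaces the default instructions entirely;
    # the service still appends the original text under "原始文案:".
    out_custom = await glm_service.rewrite_script(
        "今天分享三个提升效率的小技巧。",
        custom_prompt="请改写成更口语化的短视频风格,保留所有数字。",
    )
    print(out_default, out_custom)

# asyncio.run(demo(GLMService()))  # requires a configured GLM API key
```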

View File

@@ -1,7 +1,7 @@
"""
唇形同步服务
通过 subprocess 调用 LatentSync conda 环境进行推理
配置为使用 GPU1 (CUDA:1)
混合方案: 短视频用 LatentSync (高质量), 长视频用 MuseTalk (高速度)
路由阈值: LIPSYNC_DURATION_THRESHOLD (默认 120s)
"""
import os
import shutil
@@ -17,15 +17,18 @@ from app.core.config import settings
class LipSyncService:
"""唇形同步服务 - LatentSync 1.6 集成 (Subprocess 方式)"""
"""唇形同步服务 - LatentSync 1.6 + MuseTalk 1.5 混合方案"""
def __init__(self):
self.use_local = settings.LATENTSYNC_LOCAL
self.api_url = settings.LATENTSYNC_API_URL
self.latentsync_dir = settings.LATENTSYNC_DIR
self.gpu_id = settings.LATENTSYNC_GPU_ID
self.use_server = settings.LATENTSYNC_USE_SERVER
# MuseTalk 配置
self.musetalk_api_url = settings.MUSETALK_API_URL
# GPU 并发锁 (Serial Queue)
self._lock = asyncio.Lock()
@@ -103,7 +106,7 @@ class LipSyncService:
"-t", str(target_duration), # 截取到目标时长
"-c:v", "libx264",
"-preset", "fast",
"-crf", "18",
"-crf", "23",
"-an", # 去掉原音频
output_path
]
@@ -268,6 +271,18 @@ class LipSyncService:
else:
actual_video_path = video_path
# 混合路由: 长视频走 MuseTalk短视频走 LatentSync
if audio_duration and audio_duration >= settings.LIPSYNC_DURATION_THRESHOLD:
logger.info(
f"🔄 音频 {audio_duration:.1f}s >= {settings.LIPSYNC_DURATION_THRESHOLD}s路由到 MuseTalk"
)
musetalk_result = await self._call_musetalk_server(
actual_video_path, audio_path, output_path
)
if musetalk_result:
return musetalk_result
logger.warning("⚠️ MuseTalk 不可用,回退到 LatentSync长视频会较慢")
if self.use_server:
# 模式 A: 调用常驻服务 (加速模式)
return await self._call_persistent_server(actual_video_path, audio_path, output_path)
@@ -352,6 +367,55 @@ class LipSyncService:
shutil.copy(video_path, output_path)
return output_path
async def _call_musetalk_server(
self, video_path: str, audio_path: str, output_path: str
) -> Optional[str]:
"""
调用 MuseTalk 常驻服务。
成功返回 output_path不可用返回 None信号上层回退到 LatentSync
"""
server_url = self.musetalk_api_url
logger.info(f"⚡ 调用 MuseTalk 服务: {server_url}")
try:
async with httpx.AsyncClient(timeout=3600.0) as client:
# 健康检查
try:
resp = await client.get(f"{server_url}/health", timeout=5.0)
if resp.status_code != 200:
logger.warning("⚠️ MuseTalk 健康检查失败")
return None
health = resp.json()
if not health.get("model_loaded"):
logger.warning("⚠️ MuseTalk 模型未加载")
return None
except Exception:
logger.warning("⚠️ 无法连接 MuseTalk 服务")
return None
# 发送推理请求
payload = {
"video_path": str(Path(video_path).resolve()),
"audio_path": str(Path(audio_path).resolve()),
"video_out_path": str(Path(output_path).resolve()),
"batch_size": settings.MUSETALK_BATCH_SIZE,
}
response = await client.post(f"{server_url}/lipsync", json=payload)
if response.status_code == 200:
result = response.json()
if Path(result["output_path"]).exists():
logger.info(f"✅ MuseTalk 推理完成: {output_path}")
return output_path
logger.error(f"❌ MuseTalk 服务报错: {response.text}")
return None
except Exception as e:
logger.error(f"❌ MuseTalk 调用失败: {e}")
return None
async def _call_persistent_server(self, video_path: str, audio_path: str, output_path: str) -> str:
"""调用本地常驻服务 (server.py)"""
server_url = "http://localhost:8007"
@@ -369,7 +433,7 @@ class LipSyncService:
}
try:
async with httpx.AsyncClient(timeout=1200.0) as client:
async with httpx.AsyncClient(timeout=3600.0) as client:
# 先检查健康状态
try:
resp = await client.get(f"{server_url}/health", timeout=5.0)
@@ -477,8 +541,18 @@ class LipSyncService:
except:
pass
# 检查 MuseTalk 服务
musetalk_ready = False
try:
async with httpx.AsyncClient(timeout=5.0) as client:
resp = await client.get(f"{self.musetalk_api_url}/health")
if resp.status_code == 200:
musetalk_ready = resp.json().get("model_loaded", False)
except Exception:
pass
return {
"model": "LatentSync 1.6",
"model": "LatentSync 1.6 + MuseTalk 1.5",
"conda_env": conda_ok,
"weights": weights_ok,
"gpu": gpu_ok,
@@ -486,5 +560,7 @@ class LipSyncService:
"gpu_id": self.gpu_id,
"inference_steps": settings.LATENTSYNC_INFERENCE_STEPS,
"guidance_scale": settings.LATENTSYNC_GUIDANCE_SCALE,
"ready": conda_ok and weights_ok and gpu_ok
"ready": conda_ok and weights_ok and gpu_ok,
"musetalk_ready": musetalk_ready,
"lipsync_threshold": settings.LIPSYNC_DURATION_THRESHOLD,
}
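Taken together, the routing policy in this hunk is small enough to state on its own: audio at or above `LIPSYNC_DURATION_THRESHOLD` seconds tries MuseTalk first, an unhealthy or failed MuseTalk falls through to LatentSync, and LatentSync itself runs either against the persistent server or via subprocess. A compact sketch of that decision order, with placeholder callables standing in for `_call_musetalk_server`, `_call_persistent_server` and the subprocess path:

```python
from typing import Awaitable, Callable, Optional

async def route_lipsync(
    audio_duration: Optional[float],
    threshold: float,
    try_musetalk: Callable[[], Awaitable[Optional[str]]],
    latentsync_server: Callable[[], Awaitable[str]],
    latentsync_subprocess: Callable[[], Awaitable[str]],
    use_server: bool,
) -> str:
    # Long clips prefer MuseTalk for throughput; None signals "unavailable, fall back".
    if audio_duration is not None and audio_duration >= threshold:
        result = await try_musetalk()
        if result:
            return result
    # Short clips, or MuseTalk down: LatentSync for quality.
    if use_server:
        return await latentsync_server()
    return await latentsync_subprocess()
```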

View File

@@ -7,6 +7,7 @@ import asyncio
import json
import os
import subprocess
from collections.abc import Callable
from pathlib import Path
from typing import Optional
from loguru import logger
@@ -29,12 +30,15 @@ class RemotionService:
output_path: str,
captions_path: Optional[str] = None,
title: Optional[str] = None,
title_duration: float = 3.0,
title_duration: float = 4.0,
title_display_mode: str = "short",
fps: int = 25,
enable_subtitles: bool = True,
subtitle_style: Optional[dict] = None,
title_style: Optional[dict] = None,
on_progress: Optional[callable] = None
secondary_title: Optional[str] = None,
secondary_title_style: Optional[dict] = None,
on_progress: Optional[Callable[[int], None]] = None
) -> str:
"""
使用 Remotion 渲染视频(添加字幕和标题)
@@ -45,6 +49,7 @@ class RemotionService:
captions_path: 字幕 JSON 文件路径Whisper 生成)
title: 视频标题(可选)
title_duration: 标题显示时长(秒)
title_display_mode: 标题显示模式short/persistent
fps: 帧率
enable_subtitles: 是否启用字幕
on_progress: 进度回调函数
@@ -75,6 +80,7 @@ class RemotionService:
if title:
cmd.extend(["--title", title])
cmd.extend(["--titleDuration", str(title_duration)])
cmd.extend(["--titleDisplayMode", title_display_mode])
if subtitle_style:
cmd.extend(["--subtitleStyle", json.dumps(subtitle_style, ensure_ascii=False)])
@@ -82,6 +88,12 @@ class RemotionService:
if title_style:
cmd.extend(["--titleStyle", json.dumps(title_style, ensure_ascii=False)])
if secondary_title:
cmd.extend(["--secondaryTitle", secondary_title])
if secondary_title_style:
cmd.extend(["--secondaryTitleStyle", json.dumps(secondary_title_style, ensure_ascii=False)])
logger.info(f"Running Remotion render: {' '.join(cmd)}")
# 在线程池中运行子进程
@@ -95,8 +107,12 @@ class RemotionService:
bufsize=1
)
if process.stdout is None:
raise RuntimeError("Remotion process stdout is unavailable")
stdout = process.stdout
output_lines = []
for line in iter(process.stdout.readline, ''):
for line in iter(stdout.readline, ''):
line = line.strip()
if line:
output_lines.append(line)

View File

@@ -1,14 +1,14 @@
"""
视频合成服务
"""
import os
import subprocess
import json
import shlex
from pathlib import Path
from loguru import logger
from typing import Optional
"""
视频合成服务
"""
import os
import subprocess
import json
import shlex
from pathlib import Path
from loguru import logger
from typing import Optional
class VideoService:
def __init__(self):
pass
@@ -96,7 +96,7 @@ class VideoService:
"-map", "0:a?",
"-c:v", "libx264",
"-preset", "fast",
"-crf", "18",
"-crf", "23",
"-c:a", "copy",
"-movflags", "+faststart",
output_path,
@@ -113,146 +113,146 @@ class VideoService:
logger.warning("视频方向归一化失败,回退使用原视频")
return video_path
def _run_ffmpeg(self, cmd: list) -> bool:
cmd_str = ' '.join(shlex.quote(str(c)) for c in cmd)
logger.debug(f"FFmpeg CMD: {cmd_str}")
try:
# Synchronous call for BackgroundTasks compatibility
result = subprocess.run(
cmd,
shell=False,
capture_output=True,
text=True,
encoding='utf-8',
)
if result.returncode != 0:
logger.error(f"FFmpeg Error: {result.stderr}")
return False
return True
except Exception as e:
logger.error(f"FFmpeg Exception: {e}")
return False
def _get_duration(self, file_path: str) -> float:
# Synchronous call for BackgroundTasks compatibility
# 使用参数列表形式避免 shell=True 的命令注入风险
cmd = [
'ffprobe', '-v', 'error',
'-show_entries', 'format=duration',
'-of', 'default=noprint_wrappers=1:nokey=1',
file_path
]
try:
result = subprocess.run(
cmd,
capture_output=True,
text=True,
)
return float(result.stdout.strip())
except Exception:
return 0.0
def mix_audio(
self,
voice_path: str,
bgm_path: str,
output_path: str,
bgm_volume: float = 0.2
) -> str:
"""混合人声与背景音乐"""
Path(output_path).parent.mkdir(parents=True, exist_ok=True)
volume = max(0.0, min(float(bgm_volume), 1.0))
filter_complex = (
f"[0:a]volume=1.0[a0];"
f"[1:a]volume={volume}[a1];"
f"[a0][a1]amix=inputs=2:duration=first:dropout_transition=2:normalize=0[aout]"
)
cmd = [
"ffmpeg", "-y",
"-i", voice_path,
"-stream_loop", "-1", "-i", bgm_path,
"-filter_complex", filter_complex,
"-map", "[aout]",
"-c:a", "pcm_s16le",
"-shortest",
output_path,
]
if self._run_ffmpeg(cmd):
return output_path
raise RuntimeError("FFmpeg audio mix failed")
async def compose(
self,
video_path: str,
audio_path: str,
output_path: str,
subtitle_path: Optional[str] = None
) -> str:
"""合成视频"""
# Ensure output dir
Path(output_path).parent.mkdir(parents=True, exist_ok=True)
video_duration = self._get_duration(video_path)
audio_duration = self._get_duration(audio_path)
# Audio loop if needed
loop_count = 1
if audio_duration > video_duration and video_duration > 0:
loop_count = int(audio_duration / video_duration) + 1
cmd = ["ffmpeg", "-y"]
# Input video (stream_loop must be before -i)
if loop_count > 1:
cmd.extend(["-stream_loop", str(loop_count)])
cmd.extend(["-i", video_path])
# Input audio
cmd.extend(["-i", audio_path])
# Filter complex
filter_complex = []
# Subtitles (skip for now to mimic previous state or implement basic)
# Previous state: subtitles disabled due to font issues
# if subtitle_path: ...
# Audio map with high quality encoding
cmd.extend([
"-c:v", "libx264",
"-preset", "slow", # 慢速预设,更好的压缩效率
"-crf", "18", # 高质量(与 LatentSync 一致
"-c:a", "aac",
"-b:a", "192k", # 音频比特率
"-shortest"
])
# Use audio from input 1
cmd.extend(["-map", "0:v", "-map", "1:a"])
cmd.append(output_path)
if self._run_ffmpeg(cmd):
return output_path
else:
raise RuntimeError("FFmpeg composition failed")
def _run_ffmpeg(self, cmd: list) -> bool:
cmd_str = ' '.join(shlex.quote(str(c)) for c in cmd)
logger.debug(f"FFmpeg CMD: {cmd_str}")
try:
# Synchronous call for BackgroundTasks compatibility
result = subprocess.run(
cmd,
shell=False,
capture_output=True,
text=True,
encoding='utf-8',
)
if result.returncode != 0:
logger.error(f"FFmpeg Error: {result.stderr}")
return False
return True
except Exception as e:
logger.error(f"FFmpeg Exception: {e}")
return False
def _get_duration(self, file_path: str) -> float:
# Synchronous call for BackgroundTasks compatibility
# 使用参数列表形式避免 shell=True 的命令注入风险
cmd = [
'ffprobe', '-v', 'error',
'-show_entries', 'format=duration',
'-of', 'default=noprint_wrappers=1:nokey=1',
file_path
]
try:
result = subprocess.run(
cmd,
capture_output=True,
text=True,
)
return float(result.stdout.strip())
except Exception:
return 0.0
def mix_audio(
self,
voice_path: str,
bgm_path: str,
output_path: str,
bgm_volume: float = 0.2
) -> str:
"""混合人声与背景音乐"""
Path(output_path).parent.mkdir(parents=True, exist_ok=True)
volume = max(0.0, min(float(bgm_volume), 1.0))
filter_complex = (
f"[0:a]volume=1.0[a0];"
f"[1:a]volume={volume}[a1];"
f"[a0][a1]amix=inputs=2:duration=first:dropout_transition=2:normalize=0[aout]"
)
cmd = [
"ffmpeg", "-y",
"-i", voice_path,
"-stream_loop", "-1", "-i", bgm_path,
"-filter_complex", filter_complex,
"-map", "[aout]",
"-c:a", "pcm_s16le",
"-shortest",
output_path,
]
if self._run_ffmpeg(cmd):
return output_path
raise RuntimeError("FFmpeg audio mix failed")
async def compose(
self,
video_path: str,
audio_path: str,
output_path: str,
subtitle_path: Optional[str] = None
) -> str:
"""合成视频"""
# Ensure output dir
Path(output_path).parent.mkdir(parents=True, exist_ok=True)
video_duration = self._get_duration(video_path)
audio_duration = self._get_duration(audio_path)
# Audio loop if needed
loop_count = 1
if audio_duration > video_duration and video_duration > 0:
loop_count = int(audio_duration / video_duration) + 1
cmd = ["ffmpeg", "-y"]
# Input video (stream_loop must be before -i)
if loop_count > 1:
cmd.extend(["-stream_loop", str(loop_count)])
cmd.extend(["-i", video_path])
# Input audio
cmd.extend(["-i", audio_path])
# Filter complex
filter_complex = []
# Subtitles (skip for now to mimic previous state or implement basic)
# Previous state: subtitles disabled due to font issues
# if subtitle_path: ...
# Audio map with high quality encoding
cmd.extend([
"-c:v", "libx264",
"-preset", "medium", # 平衡速度与压缩效率
"-crf", "20", # 最终输出:高质量(肉眼无损
"-c:a", "aac",
"-b:a", "192k", # 音频比特率
"-shortest"
])
# Use audio from input 1
cmd.extend(["-map", "0:v", "-map", "1:a"])
cmd.append(output_path)
if self._run_ffmpeg(cmd):
return output_path
else:
raise RuntimeError("FFmpeg composition failed")
def concat_videos(self, video_paths: list, output_path: str, target_fps: int = 25) -> str:
"""使用 FFmpeg concat demuxer 拼接多个视频片段"""
if not video_paths:
raise ValueError("No video segments to concat")
Path(output_path).parent.mkdir(parents=True, exist_ok=True)
# 生成 concat list 文件
list_path = Path(output_path).parent / f"{Path(output_path).stem}_concat.txt"
with open(list_path, "w", encoding="utf-8") as f:
for vp in video_paths:
f.write(f"file '{vp}'\n")
if not video_paths:
raise ValueError("No video segments to concat")
Path(output_path).parent.mkdir(parents=True, exist_ok=True)
# 生成 concat list 文件
list_path = Path(output_path).parent / f"{Path(output_path).stem}_concat.txt"
with open(list_path, "w", encoding="utf-8") as f:
for vp in video_paths:
f.write(f"file '{vp}'\n")
cmd = [
"ffmpeg", "-y",
"-f", "concat",
@@ -264,44 +264,44 @@ class VideoService:
"-r", str(target_fps),
"-c:v", "libx264",
"-preset", "fast",
"-crf", "18",
"-crf", "23",
"-pix_fmt", "yuv420p",
"-movflags", "+faststart",
output_path,
]
try:
if self._run_ffmpeg(cmd):
return output_path
else:
raise RuntimeError("FFmpeg concat failed")
finally:
try:
list_path.unlink(missing_ok=True)
except Exception:
pass
def split_audio(self, audio_path: str, start: float, end: float, output_path: str) -> str:
"""用 FFmpeg 按时间范围切分音频"""
Path(output_path).parent.mkdir(parents=True, exist_ok=True)
duration = end - start
if duration <= 0:
raise ValueError(f"Invalid audio split range: start={start}, end={end}, duration={duration}")
cmd = [
"ffmpeg", "-y",
"-ss", str(start),
"-t", str(duration),
"-i", audio_path,
"-c", "copy",
output_path,
]
if self._run_ffmpeg(cmd):
return output_path
raise RuntimeError(f"FFmpeg audio split failed: {start}-{end}")
try:
if self._run_ffmpeg(cmd):
return output_path
else:
raise RuntimeError("FFmpeg concat failed")
finally:
try:
list_path.unlink(missing_ok=True)
except Exception:
pass
def split_audio(self, audio_path: str, start: float, end: float, output_path: str) -> str:
"""用 FFmpeg 按时间范围切分音频"""
Path(output_path).parent.mkdir(parents=True, exist_ok=True)
duration = end - start
if duration <= 0:
raise ValueError(f"Invalid audio split range: start={start}, end={end}, duration={duration}")
cmd = [
"ffmpeg", "-y",
"-ss", str(start),
"-t", str(duration),
"-i", audio_path,
"-c", "copy",
output_path,
]
if self._run_ffmpeg(cmd):
return output_path
raise RuntimeError(f"FFmpeg audio split failed: {start}-{end}")
def get_resolution(self, file_path: str) -> tuple[int, int]:
"""获取视频有效显示分辨率(考虑旋转元数据)。"""
info = self.get_video_metadata(file_path)
@@ -309,7 +309,7 @@ class VideoService:
int(info.get("effective_width") or 0),
int(info.get("effective_height") or 0),
)
def prepare_segment(self, video_path: str, target_duration: float, output_path: str,
target_resolution: Optional[tuple] = None, source_start: float = 0.0,
source_end: Optional[float] = None, target_fps: Optional[int] = None) -> str:
@@ -353,21 +353,21 @@ class VideoService:
"-i", video_path,
"-t", str(available),
"-an",
"-c:v", "libx264", "-preset", "fast", "-crf", "18",
"-c:v", "libx264", "-preset", "fast", "-crf", "23",
trim_temp,
]
if not self._run_ffmpeg(trim_cmd):
raise RuntimeError(f"FFmpeg trim for loop failed: {video_path}")
actual_input = trim_temp
source_start = 0.0 # 已裁剪,不需要再 seek
# 重新计算循环次数(基于裁剪后文件)
available = self._get_duration(trim_temp) or available
loop_count = int(target_duration / available) + 1 if needs_loop else 0
cmd = ["ffmpeg", "-y"]
if needs_loop:
cmd.extend(["-stream_loop", str(loop_count)])
if not self._run_ffmpeg(trim_cmd):
raise RuntimeError(f"FFmpeg trim for loop failed: {video_path}")
actual_input = trim_temp
source_start = 0.0 # 已裁剪,不需要再 seek
# 重新计算循环次数(基于裁剪后文件)
available = self._get_duration(trim_temp) or available
loop_count = int(target_duration / available) + 1 if needs_loop else 0
cmd = ["ffmpeg", "-y"]
if needs_loop:
cmd.extend(["-stream_loop", str(loop_count)])
if source_start > 0:
cmd.extend(["-ss", str(source_start)])
cmd.extend(["-i", actual_input, "-t", str(target_duration), "-an"])
@@ -386,20 +386,20 @@ class VideoService:
# 需要循环、缩放或指定起点时必须重编码,否则用 stream copy 保持原画质
if needs_loop or needs_scale or source_start > 0 or has_source_end or needs_fps:
cmd.extend(["-c:v", "libx264", "-preset", "fast", "-crf", "18"])
cmd.extend(["-c:v", "libx264", "-preset", "fast", "-crf", "23"])
else:
cmd.extend(["-c:v", "copy"])
cmd.append(output_path)
try:
if self._run_ffmpeg(cmd):
return output_path
raise RuntimeError(f"FFmpeg prepare_segment failed: {video_path}")
finally:
# 清理裁剪临时文件
if trim_temp:
try:
Path(trim_temp).unlink(missing_ok=True)
except Exception:
pass
cmd.append(output_path)
try:
if self._run_ffmpeg(cmd):
return output_path
raise RuntimeError(f"FFmpeg prepare_segment failed: {video_path}")
finally:
# 清理裁剪临时文件
if trim_temp:
try:
Path(trim_temp).unlink(missing_ok=True)
except Exception:
pass

View File

@@ -247,19 +247,67 @@ class WhisperService:
line_segments = split_segment_to_lines(all_words, max_chars)
all_segments.extend(line_segments)
# 如果提供了 original_text用原文替换 Whisper 转录文字
# 如果提供了 original_text用原文替换 Whisper 转录文字,保留语音节奏
if original_text and original_text.strip() and whisper_first_start is not None:
logger.info(f"Using original_text for subtitles (len={len(original_text)}), "
f"Whisper time range: {whisper_first_start:.2f}-{whisper_last_end:.2f}s")
# 用 split_word_to_chars 拆分原文
# 收集 Whisper 逐字时间戳(保留真实语音节奏)
whisper_chars = []
for seg in all_segments:
whisper_chars.extend(seg.get("words", []))
# 用原文字符 + Whisper 节奏生成新的时间戳
orig_chars = split_word_to_chars(
original_text.strip(),
whisper_first_start,
whisper_last_end
)
if orig_chars:
if orig_chars and len(whisper_chars) >= 2:
# 将原文字符按比例映射到 Whisper 的时间节奏上
n_w = len(whisper_chars)
n_o = len(orig_chars)
w_starts = [c["start"] for c in whisper_chars]
w_final_end = whisper_chars[-1]["end"]
logger.info(
f"Using original_text for subtitles (len={len(original_text)}), "
f"rhythm-mapping {n_o} orig chars onto {n_w} Whisper chars, "
f"time range: {whisper_first_start:.2f}-{whisper_last_end:.2f}s"
)
remapped = []
for i, oc in enumerate(orig_chars):
# 原文第 i 个字符对应 Whisper 时间线的位置
pos = (i / n_o) * n_w
idx = min(int(pos), n_w - 1)
frac = pos - idx
t_start = (
w_starts[idx] + frac * (w_starts[idx + 1] - w_starts[idx])
if idx < n_w - 1
else w_starts[idx] + frac * (w_final_end - w_starts[idx])
)
# 结束时间 = 下一个字符的开始时间
pos_next = ((i + 1) / n_o) * n_w
idx_n = min(int(pos_next), n_w - 1)
frac_n = pos_next - idx_n
t_end = (
w_starts[idx_n] + frac_n * (w_starts[idx_n + 1] - w_starts[idx_n])
if idx_n < n_w - 1
else w_starts[idx_n] + frac_n * (w_final_end - w_starts[idx_n])
)
remapped.append({
"word": oc["word"],
"start": round(t_start, 3),
"end": round(t_end, 3),
})
all_segments = split_segment_to_lines(remapped, max_chars)
logger.info(f"Rebuilt {len(all_segments)} subtitle segments (rhythm-mapped)")
elif orig_chars:
# Whisper 字符不足,退回线性插值
all_segments = split_segment_to_lines(orig_chars, max_chars)
logger.info(f"Rebuilt {len(all_segments)} subtitle segments from original text")
logger.info(f"Rebuilt {len(all_segments)} subtitle segments (linear fallback)")
logger.info(f"Generated {len(all_segments)} subtitle segments")
return {"segments": all_segments}

View File

@@ -54,5 +54,61 @@
"letter_spacing": 1,
"bottom_margin": 72,
"is_default": false
},
{
"id": "subtitle_pink",
"label": "少女粉",
"font_file": "DingTalk JinBuTi.ttf",
"font_family": "DingTalkJinBuTi",
"font_size": 56,
"highlight_color": "#FF69B4",
"normal_color": "#FFFFFF",
"stroke_color": "#1A0010",
"stroke_size": 3,
"letter_spacing": 2,
"bottom_margin": 80,
"is_default": false
},
{
"id": "subtitle_lime",
"label": "清新绿",
"font_file": "DingTalk Sans.ttf",
"font_family": "DingTalkSans",
"font_size": 50,
"highlight_color": "#76FF03",
"normal_color": "#FFFFFF",
"stroke_color": "#001A00",
"stroke_size": 3,
"letter_spacing": 1,
"bottom_margin": 78,
"is_default": false
},
{
"id": "subtitle_gold",
"label": "金色隶书",
"font_file": "阿里妈妈刀隶体.ttf",
"font_family": "AliMamaDaoLiTi",
"font_size": 56,
"highlight_color": "#FDE68A",
"normal_color": "#E8D5B0",
"stroke_color": "#2B1B00",
"stroke_size": 3,
"letter_spacing": 3,
"bottom_margin": 80,
"is_default": false
},
{
"id": "subtitle_kai",
"label": "楷体红字",
"font_file": "simkai.ttf",
"font_family": "SimKai",
"font_size": 54,
"highlight_color": "#FF4444",
"normal_color": "#FFFFFF",
"stroke_color": "#000000",
"stroke_size": 3,
"letter_spacing": 2,
"bottom_margin": 80,
"is_default": false
}
]

View File

@@ -7,7 +7,7 @@
"font_size": 90,
"color": "#FFFFFF",
"stroke_color": "#000000",
"stroke_size": 8,
"stroke_size": 5,
"letter_spacing": 5,
"top_margin": 62,
"font_weight": 900,
@@ -21,7 +21,7 @@
"font_size": 72,
"color": "#FFFFFF",
"stroke_color": "#000000",
"stroke_size": 8,
"stroke_size": 5,
"letter_spacing": 4,
"top_margin": 60,
"font_weight": 900,
@@ -35,7 +35,7 @@
"font_size": 70,
"color": "#FDE68A",
"stroke_color": "#2B1B00",
"stroke_size": 8,
"stroke_size": 5,
"letter_spacing": 3,
"top_margin": 58,
"font_weight": 800,
@@ -49,10 +49,122 @@
"font_size": 72,
"color": "#FFFFFF",
"stroke_color": "#1F0A00",
"stroke_size": 8,
"stroke_size": 5,
"letter_spacing": 4,
"top_margin": 60,
"font_weight": 900,
"is_default": false
},
{
"id": "title_pangmen",
"label": "庞门正道",
"font_file": "title/庞门正道标题体3.0.ttf",
"font_family": "PangMenZhengDao",
"font_size": 80,
"color": "#FFFFFF",
"stroke_color": "#000000",
"stroke_size": 5,
"letter_spacing": 5,
"top_margin": 60,
"font_weight": 900,
"is_default": false
},
{
"id": "title_round",
"label": "优设标题圆",
"font_file": "title/优设标题圆.otf",
"font_family": "YouSheBiaoTiYuan",
"font_size": 78,
"color": "#FFFFFF",
"stroke_color": "#4A1A6B",
"stroke_size": 5,
"letter_spacing": 4,
"top_margin": 60,
"font_weight": 900,
"is_default": false
},
{
"id": "title_alibaba",
"label": "阿里数黑体",
"font_file": "title/阿里巴巴数黑体.ttf",
"font_family": "AlibabaShuHeiTi",
"font_size": 72,
"color": "#FFFFFF",
"stroke_color": "#000000",
"stroke_size": 4,
"letter_spacing": 3,
"top_margin": 60,
"font_weight": 900,
"is_default": false
},
{
"id": "title_chaohei",
"label": "文道潮黑",
"font_file": "title/文道潮黑.ttf",
"font_family": "WenDaoChaoHei",
"font_size": 76,
"color": "#00E5FF",
"stroke_color": "#001A33",
"stroke_size": 5,
"letter_spacing": 4,
"top_margin": 60,
"font_weight": 900,
"is_default": false
},
{
"id": "title_wujie",
"label": "无界黑",
"font_file": "title/标小智无界黑.otf",
"font_family": "BiaoXiaoZhiWuJieHei",
"font_size": 74,
"color": "#FFFFFF",
"stroke_color": "#1A1A1A",
"stroke_size": 4,
"letter_spacing": 3,
"top_margin": 60,
"font_weight": 900,
"is_default": false
},
{
"id": "title_houdi",
"label": "厚底黑",
"font_file": "title/Aa厚底黑.ttf",
"font_family": "AaHouDiHei",
"font_size": 76,
"color": "#FF6B6B",
"stroke_color": "#1A0000",
"stroke_size": 5,
"letter_spacing": 4,
"top_margin": 60,
"font_weight": 900,
"is_default": false
},
{
"id": "title_banyuan",
"label": "寒蝉半圆体",
"font_file": "title/寒蝉半圆体.otf",
"font_family": "HanChanBanYuan",
"font_size": 78,
"color": "#FFFFFF",
"stroke_color": "#000000",
"stroke_size": 5,
"letter_spacing": 4,
"top_margin": 60,
"font_weight": 900,
"is_default": false
},
{
"id": "title_jixiang",
"label": "欣意吉祥宋",
"font_file": "title/字体圈欣意吉祥宋.ttf",
"font_family": "XinYiJiXiangSong",
"font_size": 70,
"color": "#FDE68A",
"stroke_color": "#2B1B00",
"stroke_size": 5,
"letter_spacing": 3,
"top_margin": 58,
"font_weight": 800,
"is_default": false
}
]

View File

@@ -71,3 +71,18 @@ CREATE TRIGGER users_updated_at
BEFORE UPDATE ON users
FOR EACH ROW
EXECUTE FUNCTION update_updated_at();
-- 8. 订单表(支付宝付费)
CREATE TABLE IF NOT EXISTS orders (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID REFERENCES users(id) ON DELETE CASCADE,
out_trade_no TEXT UNIQUE NOT NULL,
amount DECIMAL(10, 2) NOT NULL DEFAULT 999.00,
status TEXT DEFAULT 'pending' CHECK (status IN ('pending', 'paid', 'failed')),
trade_no TEXT,
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
paid_at TIMESTAMP WITH TIME ZONE
);
CREATE INDEX IF NOT EXISTS idx_orders_user_id ON orders(user_id);
CREATE INDEX IF NOT EXISTS idx_orders_out_trade_no ON orders(out_trade_no);

31
backend/package-lock.json generated Normal file
View File

@@ -0,0 +1,31 @@
{
"name": "backend",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"dependencies": {
"qrcode.react": "^4.2.0"
}
},
"node_modules/qrcode.react": {
"version": "4.2.0",
"resolved": "https://registry.npmjs.org/qrcode.react/-/qrcode.react-4.2.0.tgz",
"integrity": "sha512-QpgqWi8rD9DsS9EP3z7BT+5lY5SFhsqGjpgW5DY/i3mK4M9DTBNz3ErMi8BWYEfI3L0d8GIbGmcdFAS1uIRGjA==",
"license": "ISC",
"peerDependencies": {
"react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0"
}
},
"node_modules/react": {
"version": "19.2.4",
"resolved": "https://registry.npmjs.org/react/-/react-19.2.4.tgz",
"integrity": "sha512-9nfp2hYpCwOjAN+8TZFGhtWEwgvWHXqESH8qT89AT/lWklpLON22Lc8pEtnpsZz7VmawabSU0gCjnj8aC0euHQ==",
"license": "MIT",
"peer": true,
"engines": {
"node": ">=0.10.0"
}
}
}
}

5
backend/package.json Normal file
View File

@@ -0,0 +1,5 @@
{
"dependencies": {
"qrcode.react": "^4.2.0"
}
}

View File

@@ -29,6 +29,9 @@ python-jose[cryptography]>=3.3.0
passlib[bcrypt]>=1.7.4
bcrypt==4.0.1
# 支付宝支付
python-alipay-sdk>=3.6.0
# 字幕对齐
faster-whisper>=1.0.0

View File

@@ -15,6 +15,7 @@
"axios": "^1.13.4",
"lucide-react": "^0.563.0",
"next": "16.1.1",
"qrcode.react": "^4.2.0",
"react": "19.2.3",
"react-dom": "19.2.3",
"sonner": "^2.0.7",
@@ -5618,6 +5619,15 @@
"node": ">=6"
}
},
"node_modules/qrcode.react": {
"version": "4.2.0",
"resolved": "https://registry.npmjs.org/qrcode.react/-/qrcode.react-4.2.0.tgz",
"integrity": "sha512-QpgqWi8rD9DsS9EP3z7BT+5lY5SFhsqGjpgW5DY/i3mK4M9DTBNz3ErMi8BWYEfI3L0d8GIbGmcdFAS1uIRGjA==",
"license": "ISC",
"peerDependencies": {
"react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0"
}
},
"node_modules/queue-microtask": {
"version": "1.2.3",
"resolved": "https://registry.npmjs.org/queue-microtask/-/queue-microtask-1.2.3.tgz",

View File

@@ -16,6 +16,7 @@
"axios": "^1.13.4",
"lucide-react": "^0.563.0",
"next": "16.1.1",
"qrcode.react": "^4.2.0",
"react": "19.2.3",
"react-dom": "19.2.3",
"sonner": "^2.0.7",

View File

@@ -3,9 +3,11 @@
import { useState } from 'react';
import { useRouter } from 'next/navigation';
import { login } from "@/shared/lib/auth";
import { useAuth } from "@/shared/contexts/AuthContext";
export default function LoginPage() {
const router = useRouter();
const { setUser } = useAuth();
const [phone, setPhone] = useState('');
const [password, setPassword] = useState('');
const [error, setError] = useState('');
@@ -25,7 +27,11 @@ export default function LoginPage() {
try {
const result = await login(phone, password);
if (result.success) {
if (result.paymentToken) {
sessionStorage.setItem('payment_token', result.paymentToken);
router.push('/pay');
} else if (result.success) {
if (result.user) setUser(result.user);
router.push('/');
} else {
setError(result.message || '登录失败');

View File

@@ -0,0 +1,160 @@
'use client';
import { Suspense, useState, useEffect, useRef } from 'react';
import { useRouter, useSearchParams } from 'next/navigation';
import api from '@/shared/api/axios';
type PageStatus = 'loading' | 'redirecting' | 'checking' | 'success' | 'error';
function PayContent() {
const router = useRouter();
const searchParams = useSearchParams();
const [status, setStatus] = useState<PageStatus>('loading');
const [errorMsg, setErrorMsg] = useState('');
const pollRef = useRef<ReturnType<typeof setInterval> | null>(null);
useEffect(() => {
const outTradeNo = searchParams.get('out_trade_no');
if (outTradeNo) {
setStatus('checking');
startPolling(outTradeNo);
return;
}
const token = sessionStorage.getItem('payment_token');
if (!token) {
router.replace('/login');
return;
}
createOrder(token);
return () => {
if (pollRef.current) clearInterval(pollRef.current);
};
}, []);
const createOrder = async (token: string) => {
try {
const { data } = await api.post('/api/payment/create-order', { payment_token: token });
const { pay_url } = data.data;
setStatus('redirecting');
window.location.href = pay_url;
} catch (err: any) {
setStatus('error');
setErrorMsg(err.response?.data?.message || '创建订单失败,请重新登录');
}
};
const startPolling = (tradeNo: string) => {
checkStatus(tradeNo);
pollRef.current = setInterval(() => checkStatus(tradeNo), 3000);
};
const checkStatus = async (tradeNo: string) => {
try {
const { data } = await api.get(`/api/payment/status/${tradeNo}`);
if (data.data.status === 'paid') {
if (pollRef.current) clearInterval(pollRef.current);
setStatus('success');
sessionStorage.removeItem('payment_token');
setTimeout(() => router.replace('/login'), 3000);
}
} catch {
// ignore polling errors
}
};
return (
<div className="w-full max-w-md p-8 bg-white/10 backdrop-blur-lg rounded-2xl shadow-2xl border border-white/20">
{(status === 'loading' || status === 'redirecting') && (
<div className="text-center">
<div className="mb-6">
<svg className="animate-spin h-12 w-12 mx-auto text-purple-400" xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24">
<circle className="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" strokeWidth="4"></circle>
<path className="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z"></path>
</svg>
</div>
<p className="text-gray-300">
{status === 'loading' ? '正在创建订单...' : '正在跳转到支付宝...'}
</p>
</div>
)}
{status === 'checking' && (
<div className="text-center">
<h1 className="text-2xl font-bold text-white mb-6"></h1>
<div className="flex items-center justify-center gap-2 text-purple-300 mb-4">
<svg className="animate-spin h-5 w-5" xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24">
<circle className="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" strokeWidth="4"></circle>
<path className="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z"></path>
</svg>
...
</div>
<p className="text-gray-400 text-sm"></p>
</div>
)}
{status === 'success' && (
<div className="text-center">
<div className="mb-6">
<svg className="w-16 h-16 mx-auto text-green-400" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M9 12l2 2 4-4m6 2a9 9 0 11-18 0 9 9 0 0118 0z" />
</svg>
</div>
<h2 className="text-2xl font-bold text-white mb-4"></h2>
<p className="text-gray-300 mb-2">...</p>
<p className="text-gray-500 text-sm">使</p>
</div>
)}
{status === 'error' && (
<div className="text-center">
<div className="mb-6">
<svg className="w-16 h-16 mx-auto text-red-400" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M12 8v4m0 4h.01M21 12a9 9 0 11-18 0 9 9 0 0118 0z" />
</svg>
</div>
<h2 className="text-2xl font-bold text-white mb-4"></h2>
<p className="text-red-300 mb-6">{errorMsg}</p>
<button
onClick={() => router.replace('/login')}
className="py-3 px-6 bg-gradient-to-r from-purple-600 to-pink-600 text-white font-semibold rounded-lg"
>
</button>
</div>
)}
{status === 'checking' && (
<div className="mt-6 text-center">
<button
onClick={() => {
if (pollRef.current) clearInterval(pollRef.current);
router.replace('/login');
}}
className="text-purple-300 hover:text-purple-200 text-sm"
>
</button>
</div>
)}
</div>
);
}
export default function PayPage() {
return (
<div className="min-h-dvh flex items-center justify-center">
<Suspense fallback={
<div className="w-full max-w-md p-8 bg-white/10 backdrop-blur-lg rounded-2xl shadow-2xl border border-white/20 text-center">
<svg className="animate-spin h-12 w-12 mx-auto text-purple-400" xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24">
<circle className="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" strokeWidth="4"></circle>
<path className="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z"></path>
</svg>
</div>
}>
<PayContent />
</Suspense>
</div>
);
}

View File

@@ -61,7 +61,7 @@ export default function RegisterPage() {
</div>
<h2 className="text-2xl font-bold text-white mb-4"></h2>
<p className="text-gray-300 mb-6">
</p>
<a
href="/login"

View File

@@ -106,6 +106,10 @@ export default function AccountSettingsDropdown() {
{/* 下拉菜单 */}
{isOpen && (
<div className="absolute right-0 mt-2 bg-gray-800 border border-white/10 rounded-lg shadow-xl z-[160] overflow-hidden whitespace-nowrap">
{/* 账户名称 */}
<div className="px-3 py-2 border-b border-white/10 text-center">
<div className="text-sm text-white font-medium">{user?.phone ? `${user.phone.slice(0, 3)}****${user.phone.slice(-4)}` : '未知账户'}</div>
</div>
{/* 有效期显示 */}
<div className="px-3 py-2 border-b border-white/10 text-center">
<div className="text-xs text-gray-400"></div>
@@ -188,6 +192,7 @@ export default function AccountSettingsDropdown() {
onClick={() => {
setShowPasswordModal(false);
setError('');
setSuccess('');
setOldPassword('');
setNewPassword('');
setConfirmPassword('');

View File

@@ -12,7 +12,7 @@ interface GeneratedVideo {
}
interface UseGeneratedVideosOptions {
storageKey: string;
selectedVideoId: string | null;
setSelectedVideoId: React.Dispatch<React.SetStateAction<string | null>>;
setGeneratedVideo: React.Dispatch<React.SetStateAction<string | null>>;
@@ -20,7 +20,7 @@ interface UseGeneratedVideosOptions {
}
export const useGeneratedVideos = ({
storageKey,
selectedVideoId,
setSelectedVideoId,
setGeneratedVideo,
@@ -45,6 +45,8 @@ export const useGeneratedVideos = ({
if (preferVideoId === "__latest__") {
setSelectedVideoId(videos[0].id);
setGeneratedVideo(resolveMediaUrl(videos[0].path));
// 写入跨页面共享标记,让另一个页面也能感知最新生成的视频
localStorage.setItem(`vigent_${storageKey}_latestGeneratedVideoId`, videos[0].id);
} else {
const found = videos.find(v => v.id === preferVideoId);
if (found) {

View File

@@ -1,4 +1,4 @@
import { useEffect, useRef, useState } from "react";
import { useEffect, useMemo, useRef, useState } from "react";
import api from "@/shared/api/axios";
import {
buildTextShadow,
@@ -9,7 +9,7 @@ import {
resolveBgmUrl,
resolveMediaUrl,
} from "@/shared/lib/media";
import { clampTitle } from "@/shared/lib/title";
import { clampTitle, clampSecondaryTitle, SECONDARY_TITLE_MAX_LENGTH } from "@/shared/lib/title";
import { useTitleInput } from "@/shared/hooks/useTitleInput";
import { useAuth } from "@/shared/contexts/AuthContext";
import { useTask } from "@/shared/contexts/TaskContext";
@@ -26,6 +26,7 @@ import { useRefAudios } from "@/features/home/model/useRefAudios";
import { useTitleSubtitleStyles } from "@/features/home/model/useTitleSubtitleStyles";
import { useTimelineEditor } from "@/features/home/model/useTimelineEditor";
import { useSavedScripts } from "@/features/home/model/useSavedScripts";
import { useVideoFrameCapture } from "@/features/home/model/useVideoFrameCapture";
import { ApiResponse, unwrap } from "@/shared/api/types";
const VOICES: Record<string, { id: string; name: string }[]> = {
@@ -87,6 +88,8 @@ const LANG_TO_LOCALE: Record<string, string> = {
"Português": "pt-BR",
};
const DEFAULT_SHORT_TITLE_DURATION = 4;
const scrollContainerToItem = (container: HTMLDivElement, item: HTMLDivElement) => {
@@ -149,11 +152,19 @@ export const useHomeController = () => {
const [subtitleSizeLocked, setSubtitleSizeLocked] = useState<boolean>(false);
const [titleSizeLocked, setTitleSizeLocked] = useState<boolean>(false);
const [titleTopMargin, setTitleTopMargin] = useState<number>(62);
const [titleDisplayMode, setTitleDisplayMode] = useState<"short" | "persistent">("short");
const [subtitleBottomMargin, setSubtitleBottomMargin] = useState<number>(80);
const [outputAspectRatio, setOutputAspectRatio] = useState<"9:16" | "16:9">("9:16");
const [showStylePreview, setShowStylePreview] = useState<boolean>(false);
const [materialDimensions, setMaterialDimensions] = useState<{ width: number; height: number } | null>(null);
// 副标题相关状态
const [videoSecondaryTitle, setVideoSecondaryTitle] = useState<string>("");
const [selectedSecondaryTitleStyleId, setSelectedSecondaryTitleStyleId] = useState<string>("");
const [secondaryTitleFontSize, setSecondaryTitleFontSize] = useState<number>(48);
const [secondaryTitleTopMargin, setSecondaryTitleTopMargin] = useState<number>(12);
const [secondaryTitleSizeLocked, setSecondaryTitleSizeLocked] = useState<boolean>(false);
// 背景音乐相关状态
const [selectedBgmId, setSelectedBgmId] = useState<string>("");
@@ -270,6 +281,9 @@ export const useHomeController = () => {
// 文案提取模态框
const [extractModalOpen, setExtractModalOpen] = useState(false);
// AI 改写模态框
const [rewriteModalOpen, setRewriteModalOpen] = useState(false);
// 获取存储 key 的前缀(登录用户使用 userId未登录使用 guest
const storageKey = userId || "guest";
@@ -351,7 +365,7 @@ export const useHomeController = () => {
fetchGeneratedVideos,
deleteVideo,
} = useGeneratedVideos({
storageKey,
selectedVideoId,
setSelectedVideoId,
setGeneratedVideo,
@@ -385,6 +399,18 @@ export const useHomeController = () => {
storageKey,
});
// 时间轴第一段素材的视频 URL用于帧截取预览
// 有时间轴段时用第一段,没有(如未选配音)回退到 selectedMaterials[0]
const firstTimelineMaterialUrl = useMemo(() => {
const firstSeg = timelineSegments[0];
const matId = firstSeg?.materialId ?? selectedMaterials[0];
if (!matId) return null;
const mat = materials.find((m) => m.id === matId);
return mat?.path ? resolveMediaUrl(mat.path) : null;
}, [materials, timelineSegments, selectedMaterials]);
const materialPosterUrl = useVideoFrameCapture(showStylePreview ? firstTimelineMaterialUrl : null);
useEffect(() => {
if (isAuthLoading || !userId) return;
let active = true;
@@ -427,6 +453,8 @@ export const useHomeController = () => {
setText,
videoTitle,
setVideoTitle,
videoSecondaryTitle,
setVideoSecondaryTitle,
ttsMode,
setTtsMode,
voice,
@@ -439,14 +467,23 @@ export const useHomeController = () => {
setSelectedSubtitleStyleId,
selectedTitleStyleId,
setSelectedTitleStyleId,
selectedSecondaryTitleStyleId,
setSelectedSecondaryTitleStyleId,
subtitleFontSize,
setSubtitleFontSize,
titleFontSize,
setTitleFontSize,
secondaryTitleFontSize,
setSecondaryTitleFontSize,
setSubtitleSizeLocked,
setTitleSizeLocked,
setSecondaryTitleSizeLocked,
titleTopMargin,
setTitleTopMargin,
secondaryTitleTopMargin,
setSecondaryTitleTopMargin,
titleDisplayMode,
setTitleDisplayMode,
subtitleBottomMargin,
setSubtitleBottomMargin,
outputAspectRatio,
@@ -486,6 +523,12 @@ export const useHomeController = () => {
onCommit: syncTitleToPublish,
});
const secondaryTitleInput = useTitleInput({
value: videoSecondaryTitle,
onChange: setVideoSecondaryTitle,
maxLength: SECONDARY_TITLE_MAX_LENGTH,
});
// 加载素材列表和历史视频
useEffect(() => {
if (isAuthLoading) return;
@@ -577,11 +620,32 @@ export const useHomeController = () => {
}
}, [titleStyles, selectedTitleStyleId, titleSizeLocked]);
useEffect(() => {
if (secondaryTitleSizeLocked || titleStyles.length === 0) return;
const active = titleStyles.find((s) => s.id === selectedSecondaryTitleStyleId)
|| titleStyles.find((s) => s.is_default)
|| titleStyles[0];
if (active?.font_size) {
setSecondaryTitleFontSize(active.font_size);
}
}, [titleStyles, selectedSecondaryTitleStyleId, secondaryTitleSizeLocked]);
// 移除重复的 BGM 持久化恢复逻辑 (已统一移动到 useHomePersistence 中)
// useEffect(() => { ... })
// 时间门控:页面加载后 1 秒内禁止所有列表自动滚动效果
// 防止持久化恢复 + 异步数据加载触发 scrollIntoView 导致移动端页面跳动
const scrollEffectsEnabled = useRef(false);
useEffect(() => {
if (!selectedBgmId) return;
const timer = setTimeout(() => {
scrollEffectsEnabled.current = true;
}, 1000);
return () => clearTimeout(timer);
}, []);
// BGM 列表滚动
useEffect(() => {
if (!selectedBgmId || !scrollEffectsEnabled.current) return;
const container = bgmListContainerRef.current;
const target = bgmItemRefs.current[selectedBgmId];
if (container && target) {
@@ -589,16 +653,10 @@ export const useHomeController = () => {
}
}, [selectedBgmId, bgmList]);
// 素材列表滚动:跳过首次恢复,仅用户主动操作时滚动
const materialScrollReady = useRef(false);
// 素材列表滚动
useEffect(() => {
const firstSelected = selectedMaterials[0];
if (!firstSelected) return;
if (!materialScrollReady.current) {
// 首次有选中素材时标记就绪,但不滚动(避免刷新后整页跳动)
materialScrollReady.current = true;
return;
}
if (!firstSelected || !scrollEffectsEnabled.current) return;
const target = materialItemRefs.current[firstSelected];
if (target) {
target.scrollIntoView({ block: "nearest", behavior: "smooth" });
@@ -623,14 +681,9 @@ export const useHomeController = () => {
}
}, [isRestored, bgmList, selectedBgmId, enableBgm, setSelectedBgmId]);
const videoScrollReady = useRef(false);
// 视频列表滚动
useEffect(() => {
if (!selectedVideoId) return;
if (!videoScrollReady.current) {
videoScrollReady.current = true;
return;
}
if (!selectedVideoId || !scrollEffectsEnabled.current) return;
const target = videoItemRefs.current[selectedVideoId];
if (target) {
target.scrollIntoView({ block: "nearest", behavior: "smooth" });
@@ -736,7 +789,7 @@ export const useHomeController = () => {
setIsGeneratingMeta(true);
try {
const { data: res } = await api.post<ApiResponse<{ title?: string; tags?: string[] }>>(
const { data: res } = await api.post<ApiResponse<{ title?: string; secondary_title?: string; tags?: string[] }>>(
"/api/ai/generate-meta",
{ text: text.trim() }
);
@@ -746,6 +799,10 @@ export const useHomeController = () => {
const nextTitle = clampTitle(payload.title || "");
titleInput.commitValue(nextTitle);
// 更新副标题
const nextSecondaryTitle = clampSecondaryTitle(payload.secondary_title || "");
secondaryTitleInput.commitValue(nextSecondaryTitle);
// 同步到发布页 localStorage
localStorage.setItem(`vigent_${storageKey}_publish_tags`, JSON.stringify(payload.tags || []));
} catch (err: unknown) {
@@ -937,10 +994,28 @@ export const useHomeController = () => {
payload.title_font_size = Math.round(titleFontSize);
}
if (videoTitle.trim() || videoSecondaryTitle.trim()) {
payload.title_display_mode = titleDisplayMode;
if (titleDisplayMode === "short") {
payload.title_duration = DEFAULT_SHORT_TITLE_DURATION;
}
}
if (videoTitle.trim()) {
payload.title_top_margin = Math.round(titleTopMargin);
}
if (videoSecondaryTitle.trim()) {
payload.secondary_title = videoSecondaryTitle.trim();
if (selectedSecondaryTitleStyleId) {
payload.secondary_title_style_id = selectedSecondaryTitleStyleId;
}
if (secondaryTitleFontSize) {
payload.secondary_title_font_size = Math.round(secondaryTitleFontSize);
}
payload.secondary_title_top_margin = Math.round(secondaryTitleTopMargin);
}
payload.subtitle_bottom_margin = Math.round(subtitleBottomMargin);
if (enableBgm && selectedBgmId) {
@@ -1021,6 +1096,8 @@ export const useHomeController = () => {
setText,
extractModalOpen,
setExtractModalOpen,
rewriteModalOpen,
setRewriteModalOpen,
handleGenerateMeta,
isGeneratingMeta,
handleTranslate,
@@ -1040,6 +1117,15 @@ export const useHomeController = () => {
titleFontSize,
setTitleFontSize,
setTitleSizeLocked,
videoSecondaryTitle,
secondaryTitleInput,
selectedSecondaryTitleStyleId,
setSelectedSecondaryTitleStyleId,
secondaryTitleFontSize,
setSecondaryTitleFontSize,
setSecondaryTitleSizeLocked,
secondaryTitleTopMargin,
setSecondaryTitleTopMargin,
subtitleStyles,
selectedSubtitleStyleId,
setSelectedSubtitleStyleId,
@@ -1048,6 +1134,8 @@ export const useHomeController = () => {
setSubtitleSizeLocked,
titleTopMargin,
setTitleTopMargin,
titleDisplayMode,
setTitleDisplayMode,
subtitleBottomMargin,
setSubtitleBottomMargin,
outputAspectRatio,
@@ -1056,6 +1144,7 @@ export const useHomeController = () => {
getFontFormat,
buildTextShadow,
materialDimensions,
materialPosterUrl,
ttsMode,
setTtsMode,
voices: VOICES[textLang] || VOICES["zh-CN"],

View File

@@ -1,5 +1,5 @@
import { useEffect, useState } from "react";
import { clampTitle } from "@/shared/lib/title";
import { clampTitle, clampSecondaryTitle } from "@/shared/lib/title";
interface RefAudio {
id: string;
@@ -17,6 +17,8 @@ interface UseHomePersistenceOptions {
setText: React.Dispatch<React.SetStateAction<string>>;
videoTitle: string;
setVideoTitle: React.Dispatch<React.SetStateAction<string>>;
videoSecondaryTitle: string;
setVideoSecondaryTitle: React.Dispatch<React.SetStateAction<string>>;
ttsMode: 'edgetts' | 'voiceclone';
setTtsMode: React.Dispatch<React.SetStateAction<'edgetts' | 'voiceclone'>>;
voice: string;
@@ -29,14 +31,23 @@ interface UseHomePersistenceOptions {
setSelectedSubtitleStyleId: React.Dispatch<React.SetStateAction<string>>;
selectedTitleStyleId: string;
setSelectedTitleStyleId: React.Dispatch<React.SetStateAction<string>>;
selectedSecondaryTitleStyleId: string;
setSelectedSecondaryTitleStyleId: React.Dispatch<React.SetStateAction<string>>;
subtitleFontSize: number;
setSubtitleFontSize: React.Dispatch<React.SetStateAction<number>>;
titleFontSize: number;
setTitleFontSize: React.Dispatch<React.SetStateAction<number>>;
secondaryTitleFontSize: number;
setSecondaryTitleFontSize: React.Dispatch<React.SetStateAction<number>>;
setSubtitleSizeLocked: React.Dispatch<React.SetStateAction<boolean>>;
setTitleSizeLocked: React.Dispatch<React.SetStateAction<boolean>>;
setSecondaryTitleSizeLocked: React.Dispatch<React.SetStateAction<boolean>>;
titleTopMargin: number;
setTitleTopMargin: React.Dispatch<React.SetStateAction<number>>;
secondaryTitleTopMargin: number;
setSecondaryTitleTopMargin: React.Dispatch<React.SetStateAction<number>>;
titleDisplayMode: 'short' | 'persistent';
setTitleDisplayMode: React.Dispatch<React.SetStateAction<'short' | 'persistent'>>;
subtitleBottomMargin: number;
setSubtitleBottomMargin: React.Dispatch<React.SetStateAction<number>>;
outputAspectRatio: '9:16' | '16:9';
@@ -63,6 +74,8 @@ export const useHomePersistence = ({
setText,
videoTitle,
setVideoTitle,
videoSecondaryTitle,
setVideoSecondaryTitle,
ttsMode,
setTtsMode,
voice,
@@ -75,14 +88,23 @@ export const useHomePersistence = ({
setSelectedSubtitleStyleId,
selectedTitleStyleId,
setSelectedTitleStyleId,
selectedSecondaryTitleStyleId,
setSelectedSecondaryTitleStyleId,
subtitleFontSize,
setSubtitleFontSize,
titleFontSize,
setTitleFontSize,
secondaryTitleFontSize,
setSecondaryTitleFontSize,
setSubtitleSizeLocked,
setTitleSizeLocked,
setSecondaryTitleSizeLocked,
titleTopMargin,
setTitleTopMargin,
secondaryTitleTopMargin,
setSecondaryTitleTopMargin,
titleDisplayMode,
setTitleDisplayMode,
subtitleBottomMargin,
setSubtitleBottomMargin,
outputAspectRatio,
@@ -108,26 +130,33 @@ export const useHomePersistence = ({
const savedText = localStorage.getItem(`vigent_${storageKey}_text`);
const savedTitle = localStorage.getItem(`vigent_${storageKey}_title`);
const savedSecondaryTitle = localStorage.getItem(`vigent_${storageKey}_secondaryTitle`);
const savedTtsMode = localStorage.getItem(`vigent_${storageKey}_ttsMode`);
const savedVoice = localStorage.getItem(`vigent_${storageKey}_voice`);
const savedTextLang = localStorage.getItem(`vigent_${storageKey}_textLang`);
const savedMaterial = localStorage.getItem(`vigent_${storageKey}_material`);
const savedSubtitleStyle = localStorage.getItem(`vigent_${storageKey}_subtitleStyle`);
const savedTitleStyle = localStorage.getItem(`vigent_${storageKey}_titleStyle`);
const savedSecondaryTitleStyle = localStorage.getItem(`vigent_${storageKey}_secondaryTitleStyle`);
const savedSubtitleFontSize = localStorage.getItem(`vigent_${storageKey}_subtitleFontSize`);
const savedTitleFontSize = localStorage.getItem(`vigent_${storageKey}_titleFontSize`);
const savedSecondaryTitleFontSize = localStorage.getItem(`vigent_${storageKey}_secondaryTitleFontSize`);
const savedBgmId = localStorage.getItem(`vigent_${storageKey}_bgmId`);
const savedSelectedVideoId = localStorage.getItem(`vigent_${storageKey}_selectedVideoId`);
const savedSelectedVideoId = localStorage.getItem(`vigent_${storageKey}_latestGeneratedVideoId`)
|| localStorage.getItem(`vigent_${storageKey}_selectedVideoId`);
const savedSelectedAudioId = localStorage.getItem(`vigent_${storageKey}_selectedAudioId`);
const savedBgmVolume = localStorage.getItem(`vigent_${storageKey}_bgmVolume`);
const savedEnableBgm = localStorage.getItem(`vigent_${storageKey}_enableBgm`);
const savedTitleTopMargin = localStorage.getItem(`vigent_${storageKey}_titleTopMargin`);
const savedSecondaryTitleTopMargin = localStorage.getItem(`vigent_${storageKey}_secondaryTitleTopMargin`);
const savedTitleDisplayMode = localStorage.getItem(`vigent_${storageKey}_titleDisplayMode`);
const savedSubtitleBottomMargin = localStorage.getItem(`vigent_${storageKey}_subtitleBottomMargin`);
const savedOutputAspectRatio = localStorage.getItem(`vigent_${storageKey}_outputAspectRatio`);
const savedSpeed = localStorage.getItem(`vigent_${storageKey}_speed`);
setText(savedText || "大家好,欢迎来到我的频道,今天给大家分享一些有趣的内容。");
setVideoTitle(savedTitle ? clampTitle(savedTitle) : "");
setVideoSecondaryTitle(savedSecondaryTitle ? clampSecondaryTitle(savedSecondaryTitle) : "");
setTtsMode((savedTtsMode as 'edgetts' | 'voiceclone') || 'edgetts');
setVoice(savedVoice || "zh-CN-YunxiNeural");
if (savedTextLang) setTextLang(savedTextLang);
@@ -147,6 +176,7 @@ export const useHomePersistence = ({
}
if (savedSubtitleStyle) setSelectedSubtitleStyleId(savedSubtitleStyle);
if (savedTitleStyle) setSelectedTitleStyleId(savedTitleStyle);
if (savedSecondaryTitleStyle) setSelectedSecondaryTitleStyleId(savedSecondaryTitleStyle);
if (savedSubtitleFontSize) {
const parsed = parseInt(savedSubtitleFontSize, 10);
@@ -164,16 +194,33 @@ export const useHomePersistence = ({
}
}
if (savedSecondaryTitleFontSize) {
const parsed = parseInt(savedSecondaryTitleFontSize, 10);
if (!Number.isNaN(parsed)) {
setSecondaryTitleFontSize(parsed);
setSecondaryTitleSizeLocked(true);
}
}
if (savedBgmId) setSelectedBgmId(savedBgmId);
if (savedBgmVolume) setBgmVolume(parseFloat(savedBgmVolume));
if (savedEnableBgm !== null) setEnableBgm(savedEnableBgm === 'true');
if (savedSelectedVideoId) setSelectedVideoId(savedSelectedVideoId);
// 消费后清除跨页面共享标记,避免反复覆盖
localStorage.removeItem(`vigent_${storageKey}_latestGeneratedVideoId`);
if (savedSelectedAudioId) setSelectedAudioId(savedSelectedAudioId);
if (savedTitleTopMargin) {
const parsed = parseInt(savedTitleTopMargin, 10);
if (!Number.isNaN(parsed)) setTitleTopMargin(parsed);
}
if (savedSecondaryTitleTopMargin) {
const parsed = parseInt(savedSecondaryTitleTopMargin, 10);
if (!Number.isNaN(parsed)) setSecondaryTitleTopMargin(parsed);
}
if (savedTitleDisplayMode === 'short' || savedTitleDisplayMode === 'persistent') {
setTitleDisplayMode(savedTitleDisplayMode);
}
if (savedSubtitleBottomMargin) {
const parsed = parseInt(savedSubtitleBottomMargin, 10);
if (!Number.isNaN(parsed)) setSubtitleBottomMargin(parsed);
@@ -198,6 +245,7 @@ export const useHomePersistence = ({
setSelectedMaterials,
setSelectedSubtitleStyleId,
setSelectedTitleStyleId,
setSelectedSecondaryTitleStyleId,
setSelectedVideoId,
setSelectedAudioId,
setSpeed,
@@ -207,11 +255,16 @@ export const useHomePersistence = ({
setTextLang,
setTitleFontSize,
setTitleSizeLocked,
setSecondaryTitleFontSize,
setSecondaryTitleSizeLocked,
setTitleTopMargin,
setSecondaryTitleTopMargin,
setTitleDisplayMode,
setSubtitleBottomMargin,
setOutputAspectRatio,
setTtsMode,
setVideoTitle,
setVideoSecondaryTitle,
setVoice,
storageKey,
]);
@@ -232,6 +285,14 @@ export const useHomePersistence = ({
return () => clearTimeout(timeout);
}, [videoTitle, storageKey, isRestored]);
useEffect(() => {
if (!isRestored) return;
const timeout = setTimeout(() => {
localStorage.setItem(`vigent_${storageKey}_secondaryTitle`, videoSecondaryTitle);
}, 300);
return () => clearTimeout(timeout);
}, [videoSecondaryTitle, storageKey, isRestored]);
useEffect(() => {
if (isRestored) localStorage.setItem(`vigent_${storageKey}_ttsMode`, ttsMode);
}, [ttsMode, storageKey, isRestored]);
@@ -262,6 +323,12 @@ export const useHomePersistence = ({
}
}, [selectedTitleStyleId, storageKey, isRestored]);
useEffect(() => {
if (isRestored && selectedSecondaryTitleStyleId) {
localStorage.setItem(`vigent_${storageKey}_secondaryTitleStyle`, selectedSecondaryTitleStyleId);
}
}, [selectedSecondaryTitleStyleId, storageKey, isRestored]);
useEffect(() => {
if (isRestored) {
localStorage.setItem(`vigent_${storageKey}_subtitleFontSize`, String(subtitleFontSize));
@@ -274,12 +341,30 @@ export const useHomePersistence = ({
}
}, [titleFontSize, storageKey, isRestored]);
useEffect(() => {
if (isRestored) {
localStorage.setItem(`vigent_${storageKey}_secondaryTitleFontSize`, String(secondaryTitleFontSize));
}
}, [secondaryTitleFontSize, storageKey, isRestored]);
useEffect(() => {
if (isRestored) {
localStorage.setItem(`vigent_${storageKey}_titleTopMargin`, String(titleTopMargin));
}
}, [titleTopMargin, storageKey, isRestored]);
useEffect(() => {
if (isRestored) {
localStorage.setItem(`vigent_${storageKey}_secondaryTitleTopMargin`, String(secondaryTitleTopMargin));
}
}, [secondaryTitleTopMargin, storageKey, isRestored]);
useEffect(() => {
if (isRestored) {
localStorage.setItem(`vigent_${storageKey}_titleDisplayMode`, titleDisplayMode);
}
}, [titleDisplayMode, storageKey, isRestored]);
useEffect(() => {
if (isRestored) {
localStorage.setItem(`vigent_${storageKey}_subtitleBottomMargin`, String(subtitleBottomMargin));
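The hunks above all follow one persist-on-change pattern: each setting is written to a `vigent_${storageKey}_*` key, gated by `isRestored` so the initial restore pass never overwrites saved values, and free-text fields are debounced. A minimal sketch of that pattern, with illustrative names rather than the project's actual hook:

```tsx
import { useEffect } from "react";

// Sketch only: a generic debounced-persist effect mirroring the isRestored gate
// used in the hunks above. The key layout and parameter names are assumptions.
export function usePersistedSetting(
  storageKey: string,
  field: string,
  value: string,
  isRestored: boolean,
) {
  useEffect(() => {
    if (!isRestored) return; // never write while the initial restore is still running
    const timeout = setTimeout(() => {
      localStorage.setItem(`vigent_${storageKey}_${field}`, value);
    }, 300); // debounce rapid edits, as the title/secondaryTitle effects do
    return () => clearTimeout(timeout);
  }, [storageKey, field, value, isRestored]);
}
```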

View File

@@ -0,0 +1,94 @@
import { useEffect, useState } from "react";
/** The preview window is at most 280px wide, so captures never need to exceed this size */
const MAX_CAPTURE_WIDTH = 480;
/**
* Capture the frame at 0.1s from a video URL and return it as a JPEG data URL.
* Returns null on failure (caller falls back to the gradient background).
*/
export function useVideoFrameCapture(videoUrl: string | null): string | null {
const [frameUrl, setFrameUrl] = useState<string | null>(null);
useEffect(() => {
if (!videoUrl) {
setFrameUrl(null);
return;
}
let isActive = true;
const video = document.createElement("video");
video.crossOrigin = "anonymous";
video.muted = true;
video.preload = "auto";
video.playsInline = true;
const cleanup = () => {
video.removeEventListener("loadedmetadata", onLoaded);
video.removeEventListener("canplay", onLoaded);
video.removeEventListener("seeked", onSeeked);
video.removeEventListener("error", onError);
video.src = "";
video.load();
};
const onSeeked = () => {
if (!isActive) return;
try {
const vw = video.videoWidth;
const vh = video.videoHeight;
if (!vw || !vh) {
if (isActive) setFrameUrl(null);
cleanup();
return;
}
const scale = Math.min(1, MAX_CAPTURE_WIDTH / vw);
const cw = Math.round(vw * scale);
const ch = Math.round(vh * scale);
const canvas = document.createElement("canvas");
canvas.width = cw;
canvas.height = ch;
const ctx = canvas.getContext("2d");
if (!ctx) {
if (isActive) setFrameUrl(null);
cleanup();
return;
}
ctx.drawImage(video, 0, 0, cw, ch);
const dataUrl = canvas.toDataURL("image/jpeg", 0.7);
if (isActive) setFrameUrl(dataUrl);
} catch {
if (isActive) setFrameUrl(null);
}
cleanup();
};
let seeked = false;
const onLoaded = () => {
if (!isActive || seeked) return;
seeked = true;
video.currentTime = 0.1;
};
const onError = () => {
if (isActive) setFrameUrl(null);
cleanup();
};
// Bind listeners first, then set src
video.addEventListener("loadedmetadata", onLoaded);
video.addEventListener("canplay", onLoaded);
video.addEventListener("seeked", onSeeked);
video.addEventListener("error", onError);
video.src = videoUrl;
return () => {
isActive = false;
cleanup();
};
}, [videoUrl]);
return frameUrl;
}
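A hypothetical consumer of this hook (the component and prop names below are illustrative, not taken from the repo): pass an already-resolved, CORS-accessible material URL and fall back to the gradient when the capture returns null:

```tsx
import { useVideoFrameCapture } from "./useVideoFrameCapture";

// Sketch: how a preview component might consume the hook.
function PreviewBackdrop({ videoUrl }: { videoUrl: string | null }) {
  const frameUrl = useVideoFrameCapture(videoUrl);
  return frameUrl ? (
    <img src={frameUrl} alt="" className="absolute inset-0 w-full h-full object-cover" />
  ) : (
    <div className="absolute inset-0 bg-gradient-to-br from-purple-500/40 to-pink-500/30" />
  );
}
```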

View File

@@ -43,7 +43,7 @@ export function BgmPanel({
return (
<div className="bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm">
<div className="flex items-center justify-between mb-4">
<h2 className="text-lg font-semibold text-white flex items-center gap-2">🎵 </h2>
<h2 className="text-lg font-semibold text-white flex items-center gap-2"></h2>
<div className="flex items-center gap-2">
<button
onClick={onRefresh}

View File

@@ -213,7 +213,7 @@ export function ClipTrimmer({
{/* Custom range track */}
<div
ref={trackRef}
className="relative h-8 cursor-pointer select-none touch-none"
className="relative h-10 cursor-pointer select-none touch-none"
onPointerMove={handleTrackPointerMove}
onPointerUp={handleTrackPointerUp}
onPointerLeave={handleTrackPointerUp}
@@ -242,7 +242,7 @@ export function ClipTrimmer({
{/* Start thumb */}
<div
onPointerDown={(e) => handleThumbPointerDown("start", e)}
className="absolute top-1/2 -translate-y-1/2 -translate-x-1/2 w-4 h-4 rounded-full bg-purple-500 border-2 border-white shadow-lg cursor-grab active:cursor-grabbing hover:scale-110 transition-transform z-10"
className="absolute top-1/2 -translate-y-1/2 -translate-x-1/2 w-5 h-5 rounded-full bg-purple-500 border-2 border-white shadow-lg cursor-grab active:cursor-grabbing hover:scale-110 transition-transform z-10"
style={{ left: `${startPct}%` }}
title={`起点: ${formatSec(sourceStart)}`}
/>
@@ -250,7 +250,7 @@ export function ClipTrimmer({
{/* End thumb */}
<div
onPointerDown={(e) => handleThumbPointerDown("end", e)}
className="absolute top-1/2 -translate-y-1/2 -translate-x-1/2 w-4 h-4 rounded-full bg-pink-500 border-2 border-white shadow-lg cursor-grab active:cursor-grabbing hover:scale-110 transition-transform z-10"
className="absolute top-1/2 -translate-y-1/2 -translate-x-1/2 w-5 h-5 rounded-full bg-pink-500 border-2 border-white shadow-lg cursor-grab active:cursor-grabbing hover:scale-110 transition-transform z-10"
style={{ left: `${endPct}%` }}
title={`终点: ${formatSec(effectiveEnd)}`}
/>
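The two thumbs above are positioned purely by percentage (`startPct` / `endPct`), so the pointer handlers presumably convert the pointer's X offset inside the track into a percentage and then into seconds. A sketch of that conversion, assuming the track maps linearly onto the source clip duration (helper name and parameters are hypothetical):

```tsx
import type { PointerEvent } from "react";

// Sketch: convert a pointer position on the trimmer track into seconds.
// `trackEl` and `sourceDuration` are assumed from the surrounding component.
function pointerToSeconds(
  e: PointerEvent,
  trackEl: HTMLDivElement,
  sourceDuration: number,
): number {
  const rect = trackEl.getBoundingClientRect();
  const pct = Math.min(1, Math.max(0, (e.clientX - rect.left) / rect.width)); // clamp to [0, 1]
  return pct * sourceDuration; // linear mapping of track position onto the clip
}
```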

View File

@@ -35,9 +35,13 @@ interface TitleStyleOption {
interface FloatingStylePreviewProps {
onClose: () => void;
videoTitle: string;
videoSecondaryTitle: string;
titleStyles: TitleStyleOption[];
selectedTitleStyleId: string;
titleFontSize: number;
selectedSecondaryTitleStyleId: string;
secondaryTitleFontSize: number;
secondaryTitleTopMargin: number;
subtitleStyles: SubtitleStyleOption[];
selectedSubtitleStyleId: string;
subtitleFontSize: number;
@@ -49,16 +53,22 @@ interface FloatingStylePreviewProps {
buildTextShadow: (color: string, size: number) => string;
previewBaseWidth: number;
previewBaseHeight: number;
previewBackgroundUrl?: string | null;
}
const DESKTOP_WIDTH = 280;
const MOBILE_WIDTH = 160;
export function FloatingStylePreview({
onClose,
videoTitle,
videoSecondaryTitle,
titleStyles,
selectedTitleStyleId,
titleFontSize,
selectedSecondaryTitleStyleId,
secondaryTitleFontSize,
secondaryTitleTopMargin,
subtitleStyles,
selectedSubtitleStyleId,
subtitleFontSize,
@@ -70,11 +80,10 @@ export function FloatingStylePreview({
buildTextShadow,
previewBaseWidth,
previewBaseHeight,
previewBackgroundUrl,
}: FloatingStylePreviewProps) {
const isMobile = typeof window !== "undefined" && window.innerWidth < 640;
const windowWidth = isMobile
? Math.min(window.innerWidth - 32, 360)
: DESKTOP_WIDTH;
const windowWidth = isMobile ? MOBILE_WIDTH : DESKTOP_WIDTH;
useEffect(() => {
const handleKeyDown = (e: KeyboardEvent) => {
@@ -126,15 +135,32 @@ export function FloatingStylePreview({
const scaledTitleTopMargin = Math.max(0, Math.round(titleTopMargin * responsiveScale));
const scaledSubtitleBottomMargin = Math.max(0, Math.round(subtitleBottomMargin * responsiveScale));
// Secondary title style
const activeSecondaryTitleStyle = titleStyles.find((s) => s.id === selectedSecondaryTitleStyleId)
|| activeTitleStyle;
const stColor = activeSecondaryTitleStyle?.color || "#FFFFFF";
const stStrokeColor = activeSecondaryTitleStyle?.stroke_color || "#000000";
const stStrokeSize = Math.max(1, Math.round((activeSecondaryTitleStyle?.stroke_size ?? 6) * responsiveScale));
const stLetterSpacing = Math.max(0, (activeSecondaryTitleStyle?.letter_spacing ?? 2) * responsiveScale);
const stFontWeight = activeSecondaryTitleStyle?.font_weight ?? 700;
const stFontFamilyName = `SecondaryTitlePreview-${activeSecondaryTitleStyle?.id || "default"}`;
const stFontUrl = activeSecondaryTitleStyle?.font_file
? resolveAssetUrl(`fonts/${activeSecondaryTitleStyle.font_file}`)
: null;
const scaledSecondaryTitleFontSize = Math.max(24, Math.round(secondaryTitleFontSize * responsiveScale));
const scaledSecondaryTitleTopMargin = Math.max(0, Math.round(secondaryTitleTopMargin * responsiveScale));
const previewSecondaryTitleText = videoSecondaryTitle.trim() || "";
const content = (
<div
style={{
position: "fixed",
left: "16px",
top: "16px",
...(isMobile
? { right: "12px", bottom: "12px" }
: { left: "16px", top: "16px" }),
width: `${windowWidth}px`,
zIndex: 150,
maxHeight: "calc(100dvh - 32px)",
maxHeight: isMobile ? "calc(50dvh)" : "calc(100dvh - 32px)",
overflow: "hidden",
}}
className="rounded-xl border border-white/20 bg-gray-900/95 backdrop-blur-md shadow-2xl"
@@ -159,13 +185,18 @@ export function FloatingStylePreview({
className="relative overflow-hidden rounded-b-xl"
style={{ height: `${previewHeight}px` }}
>
{(titleFontUrl || subtitleFontUrl) && (
{(titleFontUrl || subtitleFontUrl || stFontUrl) && (
<style>{`
${titleFontUrl ? `@font-face { font-family: '${titleFontFamilyName}'; src: url('${titleFontUrl}') format('${getFontFormat(activeTitleStyle?.font_file)}'); font-weight: 400; font-style: normal; }` : ''}
${stFontUrl && stFontUrl !== titleFontUrl ? `@font-face { font-family: '${stFontFamilyName}'; src: url('${stFontUrl}') format('${getFontFormat(activeSecondaryTitleStyle?.font_file)}'); font-weight: 400; font-style: normal; }` : ''}
${subtitleFontUrl ? `@font-face { font-family: '${subtitleFontFamilyName}'; src: url('${subtitleFontUrl}') format('${getFontFormat(activeSubtitleStyle?.font_file)}'); font-weight: 400; font-style: normal; }` : ''}
`}</style>
)}
<div className="absolute inset-0 opacity-20 bg-gradient-to-br from-purple-500/40 via-transparent to-pink-500/30" />
{previewBackgroundUrl ? (
<img src={previewBackgroundUrl} alt="" className="absolute inset-0 w-full h-full object-cover" />
) : (
<div className="absolute inset-0 opacity-20 bg-gradient-to-br from-purple-500/40 via-transparent to-pink-500/30" />
)}
<div
className="absolute top-0 left-0"
style={{
@@ -182,24 +213,55 @@ export function FloatingStylePreview({
top: `${scaledTitleTopMargin}px`,
left: 0,
right: 0,
color: titleColor,
fontSize: `${scaledTitleFontSize}px`,
fontWeight: titleFontWeight,
fontFamily: titleFontUrl
? `'${titleFontFamilyName}', "PingFang SC", "Hiragino Sans GB", "Microsoft YaHei", "Noto Sans SC", sans-serif`
: '"PingFang SC", "Hiragino Sans GB", "Microsoft YaHei", "Noto Sans SC", sans-serif',
textShadow: buildTextShadow(titleStrokeColor, titleStrokeSize),
letterSpacing: `${titleLetterSpacing}px`,
lineHeight: 1.2,
whiteSpace: 'normal',
wordBreak: 'break-word',
overflowWrap: 'anywhere',
boxSizing: 'border-box',
opacity: videoTitle.trim() ? 1 : 0.7,
display: 'flex',
flexDirection: 'column',
alignItems: 'center',
padding: '0 5%',
boxSizing: 'border-box',
}}
>
{previewTitleText}
<div
style={{
color: titleColor,
fontSize: `${scaledTitleFontSize}px`,
fontWeight: titleFontWeight,
fontFamily: titleFontUrl
? `'${titleFontFamilyName}', "PingFang SC", "Hiragino Sans GB", "Microsoft YaHei", "Noto Sans SC", sans-serif`
: '"PingFang SC", "Hiragino Sans GB", "Microsoft YaHei", "Noto Sans SC", sans-serif',
textShadow: buildTextShadow(titleStrokeColor, titleStrokeSize),
letterSpacing: `${titleLetterSpacing}px`,
lineHeight: 1.2,
whiteSpace: 'normal',
wordBreak: 'break-word',
overflowWrap: 'anywhere',
opacity: videoTitle.trim() ? 1 : 0.7,
}}
>
{previewTitleText}
</div>
{previewSecondaryTitleText && (
<div
style={{
marginTop: `${scaledSecondaryTitleTopMargin}px`,
color: stColor,
fontSize: `${scaledSecondaryTitleFontSize}px`,
fontWeight: stFontWeight,
fontFamily: stFontUrl && stFontUrl !== titleFontUrl
? `'${stFontFamilyName}', "PingFang SC", "Hiragino Sans GB", "Microsoft YaHei", "Noto Sans SC", sans-serif`
: titleFontUrl
? `'${titleFontFamilyName}', "PingFang SC", "Hiragino Sans GB", "Microsoft YaHei", "Noto Sans SC", sans-serif`
: '"PingFang SC", "Hiragino Sans GB", "Microsoft YaHei", "Noto Sans SC", sans-serif',
textShadow: buildTextShadow(stStrokeColor, stStrokeSize),
letterSpacing: `${stLetterSpacing}px`,
lineHeight: 1.2,
whiteSpace: 'normal',
wordBreak: 'break-word',
overflowWrap: 'anywhere',
}}
>
{previewSecondaryTitleText}
</div>
)}
</div>
<div

View File

@@ -23,6 +23,7 @@ interface GeneratedAudiosPanelProps {
speed: number;
onSpeedChange: (speed: number) => void;
ttsMode: string;
embedded?: boolean;
}
export function GeneratedAudiosPanel({
@@ -40,6 +41,7 @@ export function GeneratedAudiosPanel({
speed,
onSpeedChange,
ttsMode,
embedded = false,
}: GeneratedAudiosPanelProps) {
const [editingId, setEditingId] = useState<string | null>(null);
const [editName, setEditName] = useState("");
@@ -123,64 +125,124 @@ export function GeneratedAudiosPanel({
] as const;
const currentSpeedLabel = speedOptions.find((o) => o.value === speed)?.label ?? "正常";
return (
<div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm relative z-10">
<div className="flex justify-between items-center gap-2 mb-4">
<h2 className="text-base sm:text-lg font-semibold text-white flex items-center gap-2 whitespace-nowrap">
<Mic className="h-4 w-4 text-purple-400" />
</h2>
<div className="flex gap-1.5">
{/* Speed dropdown (voice-clone mode only) */}
{ttsMode === "voiceclone" && (
<div ref={speedRef} className="relative">
<button
onClick={() => setSpeedOpen((v) => !v)}
className="px-2 py-1 text-xs bg-white/10 hover:bg-white/20 rounded text-gray-300 whitespace-nowrap flex items-center gap-1 transition-all"
>
: {currentSpeedLabel}
<ChevronDown className={`h-3 w-3 transition-transform ${speedOpen ? "rotate-180" : ""}`} />
</button>
{speedOpen && (
<div className="absolute right-0 top-full mt-1 bg-gray-800 border border-white/20 rounded-lg shadow-xl py-1 z-50 min-w-[80px]">
{speedOptions.map((opt) => (
<button
key={opt.value}
onClick={() => { onSpeedChange(opt.value); setSpeedOpen(false); }}
className={`w-full text-left px-3 py-1.5 text-xs transition-colors ${
speed === opt.value
? "bg-purple-600/40 text-purple-200"
: "text-gray-300 hover:bg-white/10"
}`}
>
{opt.label}
</button>
))}
</div>
)}
</div>
)}
<button
onClick={onGenerateAudio}
disabled={isGeneratingAudio || !canGenerate}
title={missingRefAudio ? "请先选择参考音频" : !hasText ? "请先输入文案" : ""}
className={`px-2 py-1 text-xs rounded transition-all whitespace-nowrap flex items-center gap-1 ${
isGeneratingAudio || !canGenerate
? "bg-gray-600 cursor-not-allowed text-gray-400"
: "bg-gradient-to-r from-purple-600 to-pink-600 hover:from-purple-700 hover:to-pink-700 text-white"
}`}
>
<Mic className="h-3.5 w-3.5" />
</button>
<button
onClick={onRefresh}
className="px-2 py-1 text-xs bg-white/10 hover:bg-white/20 rounded text-gray-300 whitespace-nowrap flex items-center gap-1"
>
<RefreshCw className="h-3.5 w-3.5" />
</button>
const content = (
<>
{embedded ? (
<>
{/* Row 1: speed + generate voiceover (right-aligned) */}
<div className="flex justify-end items-center gap-1.5 mb-3">
{ttsMode === "voiceclone" && (
<div ref={speedRef} className="relative">
<button
onClick={() => setSpeedOpen((v) => !v)}
className="px-2 py-1 text-xs bg-white/10 hover:bg-white/20 rounded text-gray-300 whitespace-nowrap flex items-center gap-1 transition-all"
>
: {currentSpeedLabel}
<ChevronDown className={`h-3 w-3 transition-transform ${speedOpen ? "rotate-180" : ""}`} />
</button>
{speedOpen && (
<div className="absolute right-0 top-full mt-1 bg-gray-800 border border-white/20 rounded-lg shadow-xl py-1 z-50 min-w-[80px]">
{speedOptions.map((opt) => (
<button
key={opt.value}
onClick={() => { onSpeedChange(opt.value); setSpeedOpen(false); }}
className={`w-full text-left px-3 py-1.5 text-xs transition-colors ${
speed === opt.value
? "bg-purple-600/40 text-purple-200"
: "text-gray-300 hover:bg-white/10"
}`}
>
{opt.label}
</button>
))}
</div>
)}
</div>
)}
<button
onClick={onGenerateAudio}
disabled={isGeneratingAudio || !canGenerate}
title={missingRefAudio ? "请先选择参考音频" : !hasText ? "请先输入文案" : ""}
className={`px-4 py-2 text-sm font-medium rounded-lg transition-all whitespace-nowrap flex items-center gap-1.5 shadow-sm ${
isGeneratingAudio || !canGenerate
? "bg-gray-600 cursor-not-allowed text-gray-400"
: "bg-gradient-to-r from-purple-600 to-pink-600 hover:from-purple-700 hover:to-pink-700 text-white hover:shadow-md"
}`}
>
<Mic className="h-4 w-4" />
</button>
</div>
{/* Row 2: voiceover list + refresh */}
<div className="flex justify-between items-center mb-3">
<h3 className="text-sm font-medium text-gray-400"></h3>
<button
onClick={onRefresh}
className="px-2 py-1 text-xs bg-white/10 hover:bg-white/20 rounded text-gray-300 whitespace-nowrap flex items-center gap-1"
>
<RefreshCw className="h-3.5 w-3.5" />
</button>
</div>
</>
) : (
<div className="flex justify-between items-center gap-2 mb-4">
<h2 className="text-base sm:text-lg font-semibold text-white flex items-center gap-2 whitespace-nowrap">
<Mic className="h-4 w-4 text-purple-400" />
</h2>
<div className="flex gap-1.5">
{ttsMode === "voiceclone" && (
<div ref={speedRef} className="relative">
<button
onClick={() => setSpeedOpen((v) => !v)}
className="px-2 py-1 text-xs bg-white/10 hover:bg-white/20 rounded text-gray-300 whitespace-nowrap flex items-center gap-1 transition-all"
>
: {currentSpeedLabel}
<ChevronDown className={`h-3 w-3 transition-transform ${speedOpen ? "rotate-180" : ""}`} />
</button>
{speedOpen && (
<div className="absolute right-0 top-full mt-1 bg-gray-800 border border-white/20 rounded-lg shadow-xl py-1 z-50 min-w-[80px]">
{speedOptions.map((opt) => (
<button
key={opt.value}
onClick={() => { onSpeedChange(opt.value); setSpeedOpen(false); }}
className={`w-full text-left px-3 py-1.5 text-xs transition-colors ${
speed === opt.value
? "bg-purple-600/40 text-purple-200"
: "text-gray-300 hover:bg-white/10"
}`}
>
{opt.label}
</button>
))}
</div>
)}
</div>
)}
<button
onClick={onGenerateAudio}
disabled={isGeneratingAudio || !canGenerate}
title={missingRefAudio ? "请先选择参考音频" : !hasText ? "请先输入文案" : ""}
className={`px-4 py-2 text-sm font-medium rounded-lg transition-all whitespace-nowrap flex items-center gap-1.5 shadow-sm ${
isGeneratingAudio || !canGenerate
? "bg-gray-600 cursor-not-allowed text-gray-400"
: "bg-gradient-to-r from-purple-600 to-pink-600 hover:from-purple-700 hover:to-pink-700 text-white hover:shadow-md"
}`}
>
<Mic className="h-4 w-4" />
</button>
<button
onClick={onRefresh}
className="px-2 py-1 text-xs bg-white/10 hover:bg-white/20 rounded text-gray-300 whitespace-nowrap flex items-center gap-1"
>
<RefreshCw className="h-3.5 w-3.5" />
</button>
</div>
</div>
</div>
)}
{/* Missing reference-audio hint */}
{missingRefAudio && (
@@ -250,7 +312,7 @@ export function GeneratedAudiosPanel({
<div className="text-white text-sm truncate">{audio.name}</div>
<div className="text-gray-400 text-xs">{audio.duration_sec.toFixed(1)}s</div>
</div>
<div className="flex items-center gap-1 pl-2 opacity-0 group-hover:opacity-100 transition-opacity">
<div className="flex items-center gap-1 pl-2 opacity-40 group-hover:opacity-100 transition-opacity">
<button
onClick={(e) => togglePlay(audio, e)}
className="p-1 text-gray-500 hover:text-purple-400 transition-colors"
@@ -287,7 +349,14 @@ export function GeneratedAudiosPanel({
})}
</div>
)}
</>
);
if (embedded) return content;
return (
<div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm relative z-10">
{content}
</div>
);
}
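The `embedded` prop introduced here (and added to `HistoryList`, `MaterialSelector`, `PreviewPanel`, and `TimelineEditor` below) follows the same shape everywhere: build the body once, then either return it bare for embedding in a parent card or wrap it in the panel's own card chrome. Distilled, with illustrative names:

```tsx
// Sketch of the recurring embedded/standalone pattern used by these panels.
export function Panel({ embedded = false }: { embedded?: boolean }) {
  const content = <>{/* panel body */}</>;
  if (embedded) return content; // parent supplies the card chrome
  return (
    <div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm">
      {content}
    </div>
  );
}
```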

View File

@@ -16,6 +16,7 @@ interface HistoryListProps {
onRefresh: () => void;
registerVideoRef: (id: string, element: HTMLDivElement | null) => void;
formatDate: (timestamp: number) => string;
embedded?: boolean;
}
export function HistoryList({
@@ -26,19 +27,22 @@ export function HistoryList({
onRefresh,
registerVideoRef,
formatDate,
embedded = false,
}: HistoryListProps) {
return (
<div className="bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm">
<div className="flex justify-between items-center mb-4">
<h2 className="text-lg font-semibold text-white flex items-center gap-2">📂 </h2>
<button
onClick={onRefresh}
className="px-3 py-1 text-xs bg-white/10 hover:bg-white/20 rounded text-gray-300 flex items-center gap-1"
>
<RefreshCw className="h-3.5 w-3.5" />
</button>
</div>
const content = (
<>
{!embedded && (
<div className="flex justify-between items-center mb-4">
<h2 className="text-lg font-semibold text-white flex items-center gap-2"></h2>
<button
onClick={onRefresh}
className="px-3 py-1 text-xs bg-white/10 hover:bg-white/20 rounded text-gray-300 flex items-center gap-1"
>
<RefreshCw className="h-3.5 w-3.5" />
</button>
</div>
)}
{generatedVideos.length === 0 ? (
<div className="text-center py-4 text-gray-500">
<p></p>
@@ -66,7 +70,7 @@ export function HistoryList({
e.stopPropagation();
onDeleteVideo(v.id);
}}
className="p-1 text-gray-500 hover:text-red-400 opacity-0 group-hover:opacity-100 transition-opacity"
className="p-1 text-gray-500 hover:text-red-400 opacity-40 group-hover:opacity-100 transition-opacity"
title="删除视频"
>
<Trash2 className="h-4 w-4" />
@@ -75,6 +79,14 @@ export function HistoryList({
))}
</div>
)}
</>
);
if (embedded) return content;
return (
<div className="bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm">
{content}
</div>
);
}

View File

@@ -2,8 +2,10 @@
import { useEffect, useMemo } from "react";
import { useRouter } from "next/navigation";
import { RefreshCw } from "lucide-react";
import VideoPreviewModal from "@/components/VideoPreviewModal";
import ScriptExtractionModal from "./ScriptExtractionModal";
import RewriteModal from "./RewriteModal";
import { useHomeController } from "@/features/home/model/useHomeController";
import { resolveMediaUrl } from "@/shared/lib/media";
import { BgmPanel } from "@/features/home/ui/BgmPanel";
@@ -51,6 +53,8 @@ export function HomePage() {
setText,
extractModalOpen,
setExtractModalOpen,
rewriteModalOpen,
setRewriteModalOpen,
handleGenerateMeta,
isGeneratingMeta,
handleTranslate,
@@ -70,6 +74,15 @@ export function HomePage() {
titleFontSize,
setTitleFontSize,
setTitleSizeLocked,
videoSecondaryTitle,
secondaryTitleInput,
selectedSecondaryTitleStyleId,
setSelectedSecondaryTitleStyleId,
secondaryTitleFontSize,
setSecondaryTitleFontSize,
setSecondaryTitleSizeLocked,
secondaryTitleTopMargin,
setSecondaryTitleTopMargin,
subtitleStyles,
selectedSubtitleStyleId,
setSelectedSubtitleStyleId,
@@ -80,6 +93,8 @@ export function HomePage() {
setTitleTopMargin,
subtitleBottomMargin,
setSubtitleBottomMargin,
titleDisplayMode,
setTitleDisplayMode,
outputAspectRatio,
setOutputAspectRatio,
resolveAssetUrl,
@@ -160,6 +175,7 @@ export function HomePage() {
setClipTrimmerOpen,
clipTrimmerSegmentId,
setClipTrimmerSegmentId,
materialPosterUrl,
} = useHomeController();
useEffect(() => {
@@ -168,7 +184,15 @@ export function HomePage() {
useEffect(() => {
if (typeof window === "undefined") return;
if ("scrollRestoration" in history) {
history.scrollRestoration = "manual";
}
window.scrollTo({ top: 0, left: 0, behavior: "auto" });
// Fallback: once all restore effects and async data loads have settled, force scroll back to the top again
const timer = setTimeout(() => {
window.scrollTo({ top: 0, left: 0, behavior: "auto" });
}, 200);
return () => clearTimeout(timer);
}, []);
const clipTrimmerSegment = useMemo(
@@ -190,11 +214,12 @@ export function HomePage() {
<div className="grid grid-cols-1 lg:grid-cols-2 gap-8">
{/* Left column: input area */}
<div className="space-y-6">
{/* 1. Script input */}
{/* Section 1: script extraction & editing */}
<ScriptEditor
text={text}
onChangeText={setText}
onOpenExtractModal={() => setExtractModalOpen(true)}
onOpenRewriteModal={() => setRewriteModalOpen(true)}
onGenerateMeta={handleGenerateMeta}
isGeneratingMeta={isGeneratingMeta}
onTranslate={handleTranslate}
@@ -207,100 +232,77 @@ export function HomePage() {
onDeleteScript={deleteSavedScript}
/>
{/* 2. Title & subtitle settings */}
<TitleSubtitlePanel
showStylePreview={showStylePreview}
onTogglePreview={() => setShowStylePreview((prev) => !prev)}
videoTitle={videoTitle}
onTitleChange={titleInput.handleChange}
onTitleCompositionStart={titleInput.handleCompositionStart}
onTitleCompositionEnd={titleInput.handleCompositionEnd}
titleStyles={titleStyles}
selectedTitleStyleId={selectedTitleStyleId}
onSelectTitleStyle={setSelectedTitleStyleId}
titleFontSize={titleFontSize}
onTitleFontSizeChange={(value) => {
setTitleFontSize(value);
setTitleSizeLocked(true);
}}
subtitleStyles={subtitleStyles}
selectedSubtitleStyleId={selectedSubtitleStyleId}
onSelectSubtitleStyle={setSelectedSubtitleStyleId}
subtitleFontSize={subtitleFontSize}
onSubtitleFontSizeChange={(value) => {
setSubtitleFontSize(value);
setSubtitleSizeLocked(true);
}}
titleTopMargin={titleTopMargin}
onTitleTopMarginChange={setTitleTopMargin}
subtitleBottomMargin={subtitleBottomMargin}
onSubtitleBottomMarginChange={setSubtitleBottomMargin}
resolveAssetUrl={resolveAssetUrl}
getFontFormat={getFontFormat}
buildTextShadow={buildTextShadow}
previewBaseWidth={outputAspectRatio === "16:9" ? 1920 : 1080}
previewBaseHeight={outputAspectRatio === "16:9" ? 1080 : 1920}
/>
{/* Section 2: voiceover */}
<div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm">
<h2 className="text-base sm:text-lg font-semibold text-white mb-4">
</h2>
<h3 className="text-sm font-medium text-gray-400 mb-3"></h3>
<VoiceSelector
embedded
ttsMode={ttsMode}
onSelectTtsMode={setTtsMode}
voices={voices}
voice={voice}
onSelectVoice={setVoice}
voiceCloneSlot={(
<RefAudioPanel
refAudios={refAudios}
selectedRefAudio={selectedRefAudio}
onSelectRefAudio={handleSelectRefAudio}
isUploadingRef={isUploadingRef}
uploadRefError={uploadRefError}
onClearUploadRefError={() => setUploadRefError(null)}
onUploadRefAudio={uploadRefAudio}
onFetchRefAudios={fetchRefAudios}
playingAudioId={playingAudioId}
onTogglePlayPreview={togglePlayPreview}
editingAudioId={editingAudioId}
editName={editName}
onEditNameChange={setEditName}
onStartEditing={startEditing}
onSaveEditing={saveEditing}
onCancelEditing={cancelEditing}
onDeleteRefAudio={deleteRefAudio}
onRetranscribe={retranscribeRefAudio}
retranscribingId={retranscribingId}
recordedBlob={recordedBlob}
isRecording={isRecording}
recordingTime={recordingTime}
onStartRecording={startRecording}
onStopRecording={stopRecording}
onUseRecording={useRecording}
formatRecordingTime={formatRecordingTime}
/>
)}
/>
<div className="border-t border-white/10 my-4" />
<GeneratedAudiosPanel
embedded
generatedAudios={generatedAudios}
selectedAudioId={selectedAudioId}
isGeneratingAudio={isGeneratingAudio}
audioTask={audioTask}
onGenerateAudio={handleGenerateAudio}
onRefresh={() => fetchGeneratedAudios()}
onSelectAudio={selectAudio}
onDeleteAudio={deleteAudio}
onRenameAudio={renameAudio}
hasText={!!text.trim()}
missingRefAudio={ttsMode === "voiceclone" && !selectedRefAudio}
speed={speed}
onSpeedChange={setSpeed}
ttsMode={ttsMode}
/>
</div>
{/* 3. Voiceover mode selection */}
<VoiceSelector
ttsMode={ttsMode}
onSelectTtsMode={setTtsMode}
voices={voices}
voice={voice}
onSelectVoice={setVoice}
voiceCloneSlot={(
<RefAudioPanel
refAudios={refAudios}
selectedRefAudio={selectedRefAudio}
onSelectRefAudio={handleSelectRefAudio}
isUploadingRef={isUploadingRef}
uploadRefError={uploadRefError}
onClearUploadRefError={() => setUploadRefError(null)}
onUploadRefAudio={uploadRefAudio}
onFetchRefAudios={fetchRefAudios}
playingAudioId={playingAudioId}
onTogglePlayPreview={togglePlayPreview}
editingAudioId={editingAudioId}
editName={editName}
onEditNameChange={setEditName}
onStartEditing={startEditing}
onSaveEditing={saveEditing}
onCancelEditing={cancelEditing}
onDeleteRefAudio={deleteRefAudio}
onRetranscribe={retranscribeRefAudio}
retranscribingId={retranscribingId}
recordedBlob={recordedBlob}
isRecording={isRecording}
recordingTime={recordingTime}
onStartRecording={startRecording}
onStopRecording={stopRecording}
onUseRecording={useRecording}
formatRecordingTime={formatRecordingTime}
/>
)}
/>
{/* 4. Voiceover list */}
<GeneratedAudiosPanel
generatedAudios={generatedAudios}
selectedAudioId={selectedAudioId}
isGeneratingAudio={isGeneratingAudio}
audioTask={audioTask}
onGenerateAudio={handleGenerateAudio}
onRefresh={() => fetchGeneratedAudios()}
onSelectAudio={selectAudio}
onDeleteAudio={deleteAudio}
onRenameAudio={renameAudio}
hasText={!!text.trim()}
missingRefAudio={ttsMode === "voiceclone" && !selectedRefAudio}
speed={speed}
onSpeedChange={setSpeed}
ttsMode={ttsMode}
/>
{/* 5. Video materials */}
<MaterialSelector
{/* Section 3: material editing */}
<div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm">
<h2 className="text-base sm:text-lg font-semibold text-white mb-4">
</h2>
<MaterialSelector
embedded
materials={materials}
selectedMaterials={selectedMaterials}
isFetching={isFetching}
@@ -324,32 +326,84 @@ export function HomePage() {
onClearUploadError={() => setUploadError(null)}
registerMaterialRef={registerMaterialRef}
/>
{/* 5.5 Timeline editor: blurred overlay until a voiceover and materials are selected */}
<div className="relative">
{(!selectedAudio || selectedMaterials.length === 0) && (
<div className="absolute inset-0 bg-black/50 backdrop-blur-sm rounded-2xl flex items-center justify-center z-10">
<p className="text-gray-400">
{!selectedAudio ? "请先生成并选中配音" : "请先选择素材"}
</p>
</div>
)}
<TimelineEditor
audioDuration={selectedAudio?.duration_sec ?? 0}
audioUrl={selectedAudio ? (resolveMediaUrl(selectedAudio.path) || "") : ""}
segments={timelineSegments}
materials={materials}
outputAspectRatio={outputAspectRatio}
onOutputAspectRatioChange={setOutputAspectRatio}
onReorderSegment={reorderSegments}
onClickSegment={(seg) => {
setClipTrimmerSegmentId(seg.id);
setClipTrimmerOpen(true);
}}
/>
<div className="border-t border-white/10 my-4" />
<div className="relative">
{(!selectedAudio || selectedMaterials.length === 0) && (
<div className="absolute inset-0 bg-black/50 backdrop-blur-sm rounded-xl flex items-center justify-center z-10">
<p className="text-gray-400">
{!selectedAudio ? "请先生成并选中配音" : "请先选择素材"}
</p>
</div>
)}
<TimelineEditor
embedded
audioDuration={selectedAudio?.duration_sec ?? 0}
audioUrl={selectedAudio ? (resolveMediaUrl(selectedAudio.path) || "") : ""}
segments={timelineSegments}
materials={materials}
outputAspectRatio={outputAspectRatio}
onOutputAspectRatioChange={setOutputAspectRatio}
onReorderSegment={reorderSegments}
onClickSegment={(seg) => {
setClipTrimmerSegmentId(seg.id);
setClipTrimmerOpen(true);
}}
/>
</div>
</div>
{/* 6. Background music */}
{/* Section 4: title & subtitles */}
<TitleSubtitlePanel
showStylePreview={showStylePreview}
onTogglePreview={() => setShowStylePreview((prev) => !prev)}
videoTitle={videoTitle}
onTitleChange={titleInput.handleChange}
onTitleCompositionStart={titleInput.handleCompositionStart}
onTitleCompositionEnd={titleInput.handleCompositionEnd}
videoSecondaryTitle={videoSecondaryTitle}
onSecondaryTitleChange={secondaryTitleInput.handleChange}
onSecondaryTitleCompositionStart={secondaryTitleInput.handleCompositionStart}
onSecondaryTitleCompositionEnd={secondaryTitleInput.handleCompositionEnd}
titleStyles={titleStyles}
selectedTitleStyleId={selectedTitleStyleId}
onSelectTitleStyle={setSelectedTitleStyleId}
titleFontSize={titleFontSize}
onTitleFontSizeChange={(value) => {
setTitleFontSize(value);
setTitleSizeLocked(true);
}}
selectedSecondaryTitleStyleId={selectedSecondaryTitleStyleId}
onSelectSecondaryTitleStyle={setSelectedSecondaryTitleStyleId}
secondaryTitleFontSize={secondaryTitleFontSize}
onSecondaryTitleFontSizeChange={(value) => {
setSecondaryTitleFontSize(value);
setSecondaryTitleSizeLocked(true);
}}
secondaryTitleTopMargin={secondaryTitleTopMargin}
onSecondaryTitleTopMarginChange={setSecondaryTitleTopMargin}
subtitleStyles={subtitleStyles}
selectedSubtitleStyleId={selectedSubtitleStyleId}
onSelectSubtitleStyle={setSelectedSubtitleStyleId}
subtitleFontSize={subtitleFontSize}
onSubtitleFontSizeChange={(value) => {
setSubtitleFontSize(value);
setSubtitleSizeLocked(true);
}}
titleTopMargin={titleTopMargin}
onTitleTopMarginChange={setTitleTopMargin}
subtitleBottomMargin={subtitleBottomMargin}
onSubtitleBottomMarginChange={setSubtitleBottomMargin}
titleDisplayMode={titleDisplayMode}
onTitleDisplayModeChange={setTitleDisplayMode}
resolveAssetUrl={resolveAssetUrl}
getFontFormat={getFontFormat}
buildTextShadow={buildTextShadow}
previewBaseWidth={outputAspectRatio === "16:9" ? 1920 : 1080}
previewBaseHeight={outputAspectRatio === "16:9" ? 1080 : 1920}
previewBackgroundUrl={materialPosterUrl}
/>
{/* Background music (unnumbered) */}
<BgmPanel
bgmList={bgmList}
bgmLoading={bgmLoading}
@@ -367,7 +421,7 @@ export function HomePage() {
registerBgmItemRef={registerBgmItemRef}
/>
{/* 7. Generate button */}
{/* Generate button (unnumbered) */}
<GenerateActionBar
isGenerating={isGenerating}
progress={currentTask?.progress || 0}
@@ -377,23 +431,59 @@ export function HomePage() {
/>
</div>
{/* Right column: preview area */}
{/* Right column: works area */}
<div className="space-y-6">
<PreviewPanel
currentTask={currentTask}
isGenerating={isGenerating}
generatedVideo={generatedVideo}
/>
<HistoryList
generatedVideos={generatedVideos}
selectedVideoId={selectedVideoId}
onSelectVideo={handleSelectVideo}
onDeleteVideo={deleteVideo}
onRefresh={() => fetchGeneratedVideos()}
registerVideoRef={registerVideoRef}
formatDate={formatDate}
/>
{/* Generation progress (shown above the works card) */}
{currentTask && isGenerating && (
<div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-purple-500/30 backdrop-blur-sm">
<div className="space-y-3">
<div className="flex justify-between text-sm text-purple-300 mb-1">
<span>AI生成中...</span>
<span>{currentTask.progress || 0}%</span>
</div>
<div className="h-3 bg-black/30 rounded-full overflow-hidden">
<div
className="h-full bg-gradient-to-r from-purple-500 to-pink-500 transition-all duration-300"
style={{ width: `${currentTask.progress || 0}%` }}
/>
</div>
</div>
</div>
)}
{/* Section 6: works */}
<div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm">
<h2 className="text-base sm:text-lg font-semibold text-white mb-4">
</h2>
<div className="flex justify-between items-center mb-3">
<h3 className="text-sm font-medium text-gray-400"></h3>
<button
onClick={() => fetchGeneratedVideos()}
className="px-2 py-1 text-xs bg-white/10 hover:bg-white/20 rounded text-gray-300 flex items-center gap-1"
>
<RefreshCw className="h-3.5 w-3.5" />
</button>
</div>
<HistoryList
embedded
generatedVideos={generatedVideos}
selectedVideoId={selectedVideoId}
onSelectVideo={handleSelectVideo}
onDeleteVideo={deleteVideo}
onRefresh={() => fetchGeneratedVideos()}
registerVideoRef={registerVideoRef}
formatDate={formatDate}
/>
<div className="border-t border-white/10 my-4" />
<h3 className="text-sm font-medium text-gray-400 mb-3"></h3>
<PreviewPanel
embedded
currentTask={null}
isGenerating={false}
generatedVideo={generatedVideo}
/>
</div>
</div>
</div>
</main>
@@ -409,6 +499,13 @@ export function HomePage() {
onApply={(nextText) => setText(nextText)}
/>
<RewriteModal
isOpen={rewriteModalOpen}
onClose={() => setRewriteModalOpen(false)}
originalText={text}
onApply={(newText) => setText(newText)}
/>
<ClipTrimmer
isOpen={clipTrimmerOpen}
segment={clipTrimmerSegment}

View File

@@ -1,4 +1,4 @@
import { type ChangeEvent, type MouseEvent } from "react";
import { type ChangeEvent, type MouseEvent, useMemo } from "react";
import { Upload, RefreshCw, Eye, Trash2, X, Pencil, Check } from "lucide-react";
import type { Material } from "@/shared/types/material";
@@ -25,6 +25,7 @@ interface MaterialSelectorProps {
onDeleteMaterial: (id: string) => void;
onClearUploadError: () => void;
registerMaterialRef: (id: string, element: HTMLDivElement | null) => void;
embedded?: boolean;
}
export function MaterialSelector({
@@ -50,19 +51,27 @@ export function MaterialSelector({
onDeleteMaterial,
onClearUploadError,
registerMaterialRef,
embedded = false,
}: MaterialSelectorProps) {
const selectedSet = new Set(selectedMaterials);
const selectedSet = useMemo(() => new Set(selectedMaterials), [selectedMaterials]);
const isFull = selectedMaterials.length >= 4;
return (
<div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm">
const content = (
<>
<div className="flex justify-between items-center gap-2 mb-4">
<h2 className="text-base sm:text-lg font-semibold text-white flex items-center gap-2 whitespace-nowrap">
📹
<span className="ml-1 text-[11px] sm:text-xs text-gray-400/90 font-normal">
(4)
</span>
</h2>
{!embedded ? (
<h2 className="text-base sm:text-lg font-semibold text-white flex items-center gap-2 min-w-0">
<span className="shrink-0"></span>
<span className="text-[11px] sm:text-xs text-gray-400/90 font-normal truncate">
(4)
</span>
</h2>
) : (
<h3 className="text-sm font-medium text-gray-400 min-w-0">
<span className="shrink-0"></span>
<span className="ml-1 text-[11px] text-gray-400/90 font-normal hidden sm:inline">(4)</span>
</h3>
)}
<div className="flex gap-1.5">
<input
type="file"
@@ -94,7 +103,7 @@ export function MaterialSelector({
{isUploading && (
<div className="mb-4 p-4 bg-purple-500/10 rounded-xl border border-purple-500/30">
<div className="flex justify-between text-sm text-purple-300 mb-2">
<span>📤 ...</span>
<span>...</span>
<span>{uploadProgress}%</span>
</div>
<div className="h-2 bg-black/30 rounded-full overflow-hidden">
@@ -108,7 +117,7 @@ export function MaterialSelector({
{uploadError && (
<div className="mb-4 p-4 bg-red-500/20 text-red-200 rounded-xl text-sm flex justify-between items-center">
<span> {uploadError}</span>
<span>{uploadError}</span>
<button onClick={onClearUploadError} className="text-red-300 hover:text-white">
<X className="h-3.5 w-3.5" />
</button>
@@ -138,7 +147,7 @@ export function MaterialSelector({
<div className="text-5xl mb-4">📁</div>
<p></p>
<p className="text-sm mt-2">
📤
</p>
</div>
) : (
@@ -183,7 +192,7 @@ export function MaterialSelector({
</button>
</div>
) : (
<button onClick={() => onToggleMaterial(m.id)} className="flex-1 text-left flex items-center gap-2">
<button onClick={() => onToggleMaterial(m.id)} disabled={isFull && !isSelected} className="flex-1 text-left flex items-center gap-2">
{/* Checkbox */}
<span
className={`flex-shrink-0 w-4 h-4 rounded border flex items-center justify-center text-[10px] ${isSelected
@@ -207,7 +216,7 @@ export function MaterialSelector({
onPreviewMaterial(m.path);
}
}}
className="p-1 text-gray-500 hover:text-white opacity-0 group-hover:opacity-100 transition-opacity"
className="p-1 text-gray-500 hover:text-white opacity-40 group-hover:opacity-100 transition-opacity"
title="预览视频"
>
<Eye className="h-4 w-4" />
@@ -215,7 +224,7 @@ export function MaterialSelector({
{editingMaterialId !== m.id && (
<button
onClick={(e) => onStartEditing(m, e)}
className="p-1 text-gray-500 hover:text-white opacity-0 group-hover:opacity-100 transition-opacity"
className="p-1 text-gray-500 hover:text-white opacity-40 group-hover:opacity-100 transition-opacity"
title="重命名"
>
<Pencil className="h-4 w-4" />
@@ -226,7 +235,7 @@ export function MaterialSelector({
e.stopPropagation();
onDeleteMaterial(m.id);
}}
className="p-1 text-gray-500 hover:text-red-400 opacity-0 group-hover:opacity-100 transition-opacity"
className="p-1 text-gray-500 hover:text-red-400 opacity-40 group-hover:opacity-100 transition-opacity"
title="删除素材"
>
<Trash2 className="h-4 w-4" />
@@ -237,6 +246,14 @@ export function MaterialSelector({
})}
</div>
)}
</>
);
if (embedded) return content;
return (
<div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm">
{content}
</div>
);
}

View File

@@ -12,18 +12,20 @@ interface PreviewPanelProps {
currentTask: Task | null;
isGenerating: boolean;
generatedVideo: string | null;
embedded?: boolean;
}
export function PreviewPanel({
currentTask,
isGenerating,
generatedVideo,
embedded = false,
}: PreviewPanelProps) {
return (
const content = (
<>
{currentTask && isGenerating && (
<div className="bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm">
<h2 className="text-lg font-semibold text-white mb-4"> </h2>
<div className={embedded ? "mb-4" : "bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm"}>
{!embedded && <h2 className="text-lg font-semibold text-white mb-4"></h2>}
<div className="space-y-3">
<div className="h-3 bg-black/30 rounded-full overflow-hidden">
<div
@@ -36,8 +38,8 @@ export function PreviewPanel({
</div>
)}
<div className="bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm">
<h2 className="text-lg font-semibold text-white mb-4 flex items-center gap-2">🎥 </h2>
<div className={embedded ? "" : "bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm"}>
{!embedded && <h2 className="text-lg font-semibold text-white mb-4 flex items-center gap-2"></h2>}
<div className="aspect-video bg-black/50 rounded-xl overflow-hidden flex items-center justify-center">
{generatedVideo ? (
<video src={generatedVideo} controls preload="metadata" className="w-full h-full object-contain" />
@@ -71,4 +73,6 @@ export function PreviewPanel({
</div>
</>
);
return content;
}

View File

@@ -92,7 +92,7 @@ export function RefAudioPanel({
<div className="space-y-4">
<div>
<div className="flex justify-between items-center mb-2">
<span className="text-sm text-gray-300">📁 </span>
<span className="text-sm text-gray-300">📁 <span className="text-xs text-gray-500 font-normal">(3-10)</span></span>
<div className="flex gap-2">
<input
type="file"
@@ -187,7 +187,7 @@ export function RefAudioPanel({
<div className="text-white text-xs truncate pr-1 flex-1" title={audio.name}>
{audio.name}
</div>
<div className="flex gap-1 opacity-0 group-hover:opacity-100 transition-opacity">
<div className="flex gap-1 opacity-40 group-hover:opacity-100 transition-opacity">
<button
onClick={(e) => onTogglePlayPreview(audio, e)}
className="text-gray-400 hover:text-purple-400 text-xs"
@@ -287,9 +287,6 @@ export function RefAudioPanel({
)}
</div>
<p className="text-xs text-gray-500 mt-2 border-t border-white/10 pt-3">
3-10
</p>
</div>
);
}

View File

@@ -0,0 +1,213 @@
import { useState, useEffect, useRef, useCallback } from "react";
import { Loader2, Sparkles } from "lucide-react";
import api from "@/shared/api/axios";
import { ApiResponse, unwrap } from "@/shared/api/types";
const CUSTOM_PROMPT_KEY = "vigent_rewriteCustomPrompt";
interface RewriteModalProps {
isOpen: boolean;
onClose: () => void;
originalText: string;
onApply: (text: string) => void;
}
export default function RewriteModal({
isOpen,
onClose,
originalText,
onApply,
}: RewriteModalProps) {
const [customPrompt, setCustomPrompt] = useState(
() => (typeof window !== "undefined" ? localStorage.getItem(CUSTOM_PROMPT_KEY) || "" : "")
);
const [rewrittenText, setRewrittenText] = useState("");
const [isLoading, setIsLoading] = useState(false);
const [error, setError] = useState<string | null>(null);
// Debounced save customPrompt to localStorage
const debounceRef = useRef<ReturnType<typeof setTimeout>>(undefined);
useEffect(() => {
debounceRef.current = setTimeout(() => {
localStorage.setItem(CUSTOM_PROMPT_KEY, customPrompt);
}, 300);
return () => clearTimeout(debounceRef.current);
}, [customPrompt]);
// Reset state when modal opens
useEffect(() => {
if (isOpen) {
setRewrittenText("");
setError(null);
setIsLoading(false);
}
}, [isOpen]);
const handleRewrite = useCallback(async () => {
if (!originalText.trim()) return;
setIsLoading(true);
setError(null);
try {
const { data: res } = await api.post<
ApiResponse<{ rewritten_text: string }>
>("/api/ai/rewrite", {
text: originalText,
custom_prompt: customPrompt.trim() || null,
});
const payload = unwrap(res);
setRewrittenText(payload.rewritten_text || "");
} catch (err: unknown) {
console.error("AI rewrite failed:", err);
const axiosErr = err as {
response?: { data?: { message?: string } };
message?: string;
};
const msg =
axiosErr.response?.data?.message || axiosErr.message || "改写失败,请重试";
setError(msg);
} finally {
setIsLoading(false);
}
}, [originalText, customPrompt]);
const handleApply = () => {
onApply(rewrittenText);
onClose();
};
const handleRetry = () => {
setRewrittenText("");
setError(null);
};
// ESC to close
useEffect(() => {
if (!isOpen) return;
const handleKeyDown = (e: KeyboardEvent) => {
if (e.key === "Escape") onClose();
};
document.addEventListener("keydown", handleKeyDown);
return () => document.removeEventListener("keydown", handleKeyDown);
}, [isOpen, onClose]);
if (!isOpen) return null;
return (
<div className="fixed inset-0 z-50 flex items-center justify-center bg-black/80 backdrop-blur-sm p-4 animate-in fade-in duration-200">
<div className="bg-[#1a1a1a] border border-white/10 rounded-2xl w-full max-w-2xl max-h-[90vh] overflow-hidden flex flex-col shadow-2xl">
{/* Header */}
<div className="flex items-center justify-between p-4 border-b border-white/10 bg-white/5">
<h3 className="text-lg font-semibold text-white flex items-center gap-2">
<Sparkles className="h-5 w-5 text-purple-400" />
AI
</h3>
<button
onClick={onClose}
className="text-gray-400 hover:text-white transition-colors text-2xl leading-none"
>
&times;
</button>
</div>
{/* Content */}
<div className="flex-1 overflow-y-auto p-6 space-y-5">
{/* Custom Prompt */}
<div className="space-y-2">
<label className="text-sm text-gray-300">
()
</label>
<textarea
value={customPrompt}
onChange={(e) => setCustomPrompt(e.target.value)}
placeholder="输入改写要求..."
rows={3}
className="w-full bg-black/20 border border-white/10 rounded-xl px-3 py-2 text-sm text-white placeholder-gray-500 focus:outline-none focus:border-purple-500 transition-colors resize-none"
/>
<p className="text-xs text-gray-500">使</p>
</div>
{/* Action button (before result) */}
{!rewrittenText && (
<button
onClick={handleRewrite}
disabled={isLoading || !originalText.trim()}
className="w-full py-3 px-4 bg-gradient-to-r from-purple-600 to-pink-600 hover:from-purple-500 hover:to-pink-500 disabled:opacity-50 disabled:cursor-not-allowed text-white rounded-xl transition-all font-medium shadow-lg flex items-center justify-center gap-2"
>
{isLoading ? (
<>
<Loader2 className="w-5 h-5 animate-spin" />
...
</>
) : (
<>
<Sparkles className="w-5 h-5" />
</>
)}
</button>
)}
{/* Error */}
{error && (
<div className="bg-red-500/10 border border-red-500/30 rounded-xl p-4">
<p className="text-red-400 text-sm">{error}</p>
</div>
)}
{/* Rewritten result */}
{rewrittenText && (
<>
<div className="space-y-2">
<div className="flex justify-between items-center">
<h4 className="font-semibold text-purple-300 flex items-center gap-2">
<Sparkles className="h-4 w-4" />
AI
</h4>
<button
onClick={handleApply}
className="text-xs bg-gradient-to-r from-purple-600 to-pink-600 hover:from-purple-500 hover:to-pink-500 text-white px-3 py-1.5 rounded-lg transition-colors shadow-sm"
>
使
</button>
</div>
<div className="bg-purple-900/10 border border-purple-500/20 rounded-xl p-4 max-h-60 overflow-y-auto hide-scrollbar">
<p className="text-gray-200 text-sm leading-relaxed whitespace-pre-wrap">
{rewrittenText}
</p>
</div>
</div>
<div className="space-y-2">
<div className="flex justify-between items-center">
<h4 className="font-semibold text-gray-400 flex items-center gap-2">
📝
</h4>
<button
onClick={onClose}
className="text-xs bg-white/10 hover:bg-white/20 text-white px-3 py-1.5 rounded-lg transition-colors"
>
</button>
</div>
<div className="bg-white/5 border border-white/10 rounded-xl p-4 max-h-40 overflow-y-auto hide-scrollbar">
<p className="text-gray-400 text-sm leading-relaxed whitespace-pre-wrap">
{originalText}
</p>
</div>
</div>
<button
onClick={handleRetry}
className="w-full py-2.5 px-4 bg-white/10 hover:bg-white/20 text-white rounded-xl transition-colors"
>
</button>
</>
)}
</div>
</div>
</div>
);
}
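As used in `handleRewrite` above, the modal assumes a backend endpoint with the following request/response contract. This is inferred purely from the call site in this diff; the server-side definition is not shown here:

```ts
// Inferred contract for POST /api/ai/rewrite, wrapped in the shared ApiResponse envelope.
interface RewriteRequest {
  text: string;                 // the original script text
  custom_prompt: string | null; // optional user instructions, null when left empty
}

interface RewriteResponse {
  rewritten_text: string;       // unwrapped via unwrap(res) in handleRewrite
}
```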

View File

@@ -18,6 +18,7 @@ interface ScriptEditorProps {
text: string;
onChangeText: (value: string) => void;
onOpenExtractModal: () => void;
onOpenRewriteModal: () => void;
onGenerateMeta: () => void;
isGeneratingMeta: boolean;
onTranslate: (targetLang: string) => void;
@@ -34,6 +35,7 @@ export function ScriptEditor({
text,
onChangeText,
onOpenExtractModal,
onOpenRewriteModal,
onGenerateMeta,
isGeneratingMeta,
onTranslate,
@@ -86,7 +88,7 @@ export function ScriptEditor({
<div className="relative z-10 bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm">
<div className="mb-4 space-y-3">
<h2 className="text-base sm:text-lg font-semibold text-white flex items-center gap-2">
</h2>
<div className="flex gap-2 flex-wrap justify-end items-center">
{/* 历史文案 */}
@@ -123,7 +125,7 @@ export function ScriptEditor({
e.stopPropagation();
onDeleteScript(script.id);
}}
className="opacity-0 group-hover:opacity-100 p-1 text-gray-500 hover:text-red-400 transition-all shrink-0"
className="opacity-40 group-hover:opacity-100 p-1 text-gray-500 hover:text-red-400 transition-all shrink-0"
>
<Trash2 className="h-3 w-3" />
</button>
@@ -218,18 +220,32 @@ export function ScriptEditor({
/>
<div className="flex items-center justify-between mt-2 text-sm text-gray-400">
<span>{text.length} </span>
<button
onClick={onSaveScript}
disabled={!text.trim()}
className={`px-2.5 py-1 text-xs rounded transition-all flex items-center gap-1 ${
!text.trim()
? "bg-gray-700 cursor-not-allowed text-gray-500"
: "bg-amber-600/80 hover:bg-amber-600 text-white"
}`}
>
<Save className="h-3 w-3" />
</button>
<div className="flex items-center gap-2">
<button
onClick={onOpenRewriteModal}
disabled={!text.trim()}
className={`px-2.5 py-1 text-xs rounded transition-all flex items-center gap-1 ${
!text.trim()
? "bg-gray-700 cursor-not-allowed text-gray-500"
: "bg-purple-600/80 hover:bg-purple-600 text-white"
}`}
>
<Sparkles className="h-3 w-3" />
AI智能改写
</button>
<button
onClick={onSaveScript}
disabled={!text.trim()}
className={`px-2.5 py-1 text-xs rounded transition-all flex items-center gap-1 ${
!text.trim()
? "bg-gray-700 cursor-not-allowed text-gray-500"
: "bg-amber-600/80 hover:bg-amber-600 text-white"
}`}
>
<Save className="h-3 w-3" />
</button>
</div>
</div>
</div>
);

View File

@@ -18,15 +18,12 @@ export default function ScriptExtractionModal({
const {
isLoading,
script,
rewrittenScript,
error,
doRewrite,
step,
dragActive,
selectedFile,
activeTab,
inputUrl,
setDoRewrite,
setActiveTab,
setInputUrl,
handleDrag,
@@ -186,21 +183,6 @@ export default function ScriptExtractionModal({
</div>
)}
{/* Options */}
<div className="flex items-center gap-3 bg-white/5 rounded-xl p-4 border border-white/10">
<label className="flex items-center gap-2 cursor-pointer">
<input
type="checkbox"
checked={doRewrite}
onChange={(e) => setDoRewrite(e.target.checked)}
className="w-4 h-4 rounded bg-white/10 border-white/20 text-purple-500 focus:ring-purple-500"
/>
<span className="text-sm text-gray-300">
AI
</span>
</label>
</div>
{/* Error */}
{error && (
<div className="bg-red-500/10 border border-red-500/30 rounded-xl p-4">
@@ -244,9 +226,7 @@ export default function ScriptExtractionModal({
<p className="text-sm text-gray-400 text-center max-w-sm px-4">
{activeTab === "url" && "正在下载视频..."}
<br />
{doRewrite
? "正在进行语音识别和 AI 智能改写..."
: "正在进行语音识别..."}
...
<br />
<span className="opacity-75">
@@ -257,60 +237,30 @@ export default function ScriptExtractionModal({
{step === "result" && (
<div className="space-y-6">
{rewrittenScript && (
<div className="space-y-2">
<div className="flex justify-between items-center">
<h4 className="font-semibold text-purple-300 flex items-center gap-2">
AI 稿{" "}
<span className="text-xs font-normal text-purple-400/70">
()
</span>
</h4>
<div className="space-y-2">
<div className="flex justify-between items-center">
<h4 className="font-semibold text-gray-300 flex items-center gap-2">
🎙
</h4>
<div className="flex items-center gap-2">
{onApply && (
<button
onClick={() => handleApplyAndClose(rewrittenScript)}
onClick={() => handleApplyAndClose(script)}
className="text-xs bg-gradient-to-r from-purple-600 to-pink-600 hover:from-purple-500 hover:to-pink-500 text-white px-3 py-1.5 rounded-lg transition-colors flex items-center gap-1 shadow-sm"
>
📥
</button>
)}
<button
onClick={() => copyToClipboard(rewrittenScript)}
className="text-xs bg-purple-600 hover:bg-purple-500 text-white px-3 py-1.5 rounded-lg transition-colors flex items-center gap-1"
onClick={() => copyToClipboard(script)}
className="text-xs bg-white/10 hover:bg-white/20 text-white px-3 py-1.5 rounded-lg transition-colors"
>
📋
</button>
</div>
<div className="bg-purple-900/10 border border-purple-500/20 rounded-xl p-4 max-h-60 overflow-y-auto custom-scrollbar">
<p className="text-gray-200 text-sm leading-relaxed whitespace-pre-wrap">
{rewrittenScript}
</p>
</div>
</div>
)}
<div className="space-y-2">
<div className="flex justify-between items-center">
<h4 className="font-semibold text-gray-400 flex items-center gap-2">
🎙
</h4>
{onApply && (
<button
onClick={() => handleApplyAndClose(script)}
className="text-xs bg-white/10 hover:bg-white/20 text-white px-3 py-1.5 rounded-lg transition-colors flex items-center gap-1"
>
📥
</button>
)}
<button
onClick={() => copyToClipboard(script)}
className="text-xs bg-white/10 hover:bg-white/20 text-white px-3 py-1.5 rounded-lg transition-colors"
>
</button>
</div>
<div className="bg-white/5 border border-white/10 rounded-xl p-4 max-h-40 overflow-y-auto custom-scrollbar">
<p className="text-gray-400 text-sm leading-relaxed whitespace-pre-wrap">
<div className="bg-white/5 border border-white/10 rounded-xl p-4 max-h-60 overflow-y-auto hide-scrollbar">
<p className="text-gray-200 text-sm leading-relaxed whitespace-pre-wrap">
{script}
</p>
</div>

View File

@@ -1,9 +1,9 @@
import { useEffect, useRef, useCallback, useState } from "react";
import { useEffect, useRef, useCallback, useState, useMemo } from "react";
import WaveSurfer from "wavesurfer.js";
import { ChevronDown } from "lucide-react";
import { ChevronDown, GripVertical } from "lucide-react";
import type { TimelineSegment } from "@/features/home/model/useTimelineEditor";
import type { Material } from "@/shared/types/material";
interface TimelineEditorProps {
audioDuration: number;
audioUrl: string;
@@ -13,14 +13,15 @@ interface TimelineEditorProps {
onOutputAspectRatioChange: (ratio: "9:16" | "16:9") => void;
onReorderSegment: (fromIdx: number, toIdx: number) => void;
onClickSegment: (segment: TimelineSegment) => void;
embedded?: boolean;
}
function formatTime(sec: number): string {
const m = Math.floor(sec / 60);
const s = sec % 60;
return `${String(m).padStart(2, "0")}:${s.toFixed(1).padStart(4, "0")}`;
}
function formatTime(sec: number): string {
const m = Math.floor(sec / 60);
const s = sec % 60;
return `${String(m).padStart(2, "0")}:${s.toFixed(1).padStart(4, "0")}`;
}
export function TimelineEditor({
audioDuration,
audioUrl,
@@ -30,12 +31,13 @@ export function TimelineEditor({
onOutputAspectRatioChange,
onReorderSegment,
onClickSegment,
embedded = false,
}: TimelineEditorProps) {
const waveRef = useRef<HTMLDivElement>(null);
const wsRef = useRef<WaveSurfer | null>(null);
const [waveReady, setWaveReady] = useState(false);
const [isPlaying, setIsPlaying] = useState(false);
const waveRef = useRef<HTMLDivElement>(null);
const wsRef = useRef<WaveSurfer | null>(null);
const [waveReady, setWaveReady] = useState(false);
const [isPlaying, setIsPlaying] = useState(false);
// Refs for high-frequency DOM updates (avoid 60fps re-renders)
const playheadRef = useRef<HTMLDivElement>(null);
const timeRef = useRef<HTMLSpanElement>(null);
@@ -44,7 +46,7 @@ export function TimelineEditor({
useEffect(() => {
audioDurationRef.current = audioDuration;
}, [audioDuration]);
// Drag-to-reorder state
const [dragFromIdx, setDragFromIdx] = useState<number | null>(null);
const [dragOverIdx, setDragOverIdx] = useState<number | null>(null);
@@ -68,57 +70,57 @@ export function TimelineEditor({
if (ratioOpen) document.addEventListener("mousedown", handler);
return () => document.removeEventListener("mousedown", handler);
}, [ratioOpen]);
// Create / recreate wavesurfer when audioUrl changes
// Create / recreate wavesurfer when audioUrl changes
useEffect(() => {
if (!waveRef.current || !audioUrl) return;
const playheadEl = playheadRef.current;
const timeEl = timeRef.current;
// Destroy previous instance
if (wsRef.current) {
wsRef.current.destroy();
wsRef.current = null;
}
const ws = WaveSurfer.create({
container: waveRef.current,
height: 56,
waveColor: "#6d28d9",
progressColor: "#a855f7",
barWidth: 2,
barGap: 1,
barRadius: 2,
cursorWidth: 1,
cursorColor: "#e879f9",
interact: true,
normalize: true,
});
// Click waveform → seek + auto-play
ws.on("interaction", () => ws.play());
ws.on("play", () => setIsPlaying(true));
ws.on("pause", () => setIsPlaying(false));
ws.on("finish", () => {
setIsPlaying(false);
if (playheadRef.current) playheadRef.current.style.display = "none";
});
// High-frequency: update playhead + time via refs (no React re-render)
ws.on("timeupdate", (time: number) => {
const dur = audioDurationRef.current;
if (playheadRef.current && dur > 0) {
playheadRef.current.style.left = `${(time / dur) * 100}%`;
playheadRef.current.style.display = "block";
}
if (timeRef.current) {
timeRef.current.textContent = formatTime(time);
}
});
ws.load(audioUrl);
wsRef.current = ws;
// Destroy previous instance
if (wsRef.current) {
wsRef.current.destroy();
wsRef.current = null;
}
const ws = WaveSurfer.create({
container: waveRef.current,
height: 56,
waveColor: "#6d28d9",
progressColor: "#a855f7",
barWidth: 2,
barGap: 1,
barRadius: 2,
cursorWidth: 1,
cursorColor: "#e879f9",
interact: true,
normalize: true,
});
// Click waveform → seek + auto-play
ws.on("interaction", () => ws.play());
ws.on("play", () => setIsPlaying(true));
ws.on("pause", () => setIsPlaying(false));
ws.on("finish", () => {
setIsPlaying(false);
if (playheadRef.current) playheadRef.current.style.display = "none";
});
// High-frequency: update playhead + time via refs (no React re-render)
ws.on("timeupdate", (time: number) => {
const dur = audioDurationRef.current;
if (playheadRef.current && dur > 0) {
playheadRef.current.style.left = `${(time / dur) * 100}%`;
playheadRef.current.style.display = "block";
}
if (timeRef.current) {
timeRef.current.textContent = formatTime(time);
}
});
ws.load(audioUrl);
wsRef.current = ws;
return () => {
ws.destroy();
wsRef.current = null;
@@ -127,60 +129,64 @@ export function TimelineEditor({
if (timeEl) timeEl.textContent = formatTime(0);
};
}, [audioUrl, waveReady]);
// Callback ref to detect when waveRef div mounts
const waveCallbackRef = useCallback((node: HTMLDivElement | null) => {
(waveRef as React.MutableRefObject<HTMLDivElement | null>).current = node;
setWaveReady(!!node);
}, []);
const handlePlayPause = useCallback(() => {
wsRef.current?.playPause();
}, []);
// Drag-to-reorder handlers
const handleDragStart = useCallback((idx: number, e: React.DragEvent) => {
setDragFromIdx(idx);
e.dataTransfer.effectAllowed = "move";
e.dataTransfer.setData("text/plain", String(idx));
}, []);
const handleDragOver = useCallback((idx: number, e: React.DragEvent) => {
e.preventDefault();
e.dataTransfer.dropEffect = "move";
setDragOverIdx(idx);
}, []);
const handleDragLeave = useCallback(() => {
setDragOverIdx(null);
}, []);
const handleDrop = useCallback((toIdx: number, e: React.DragEvent) => {
e.preventDefault();
const fromIdx = parseInt(e.dataTransfer.getData("text/plain"), 10);
if (!isNaN(fromIdx) && fromIdx !== toIdx) {
onReorderSegment(fromIdx, toIdx);
}
setDragFromIdx(null);
setDragOverIdx(null);
}, [onReorderSegment]);
const handleDragEnd = useCallback(() => {
setDragFromIdx(null);
setDragOverIdx(null);
}, []);
// Filter visible vs overflow segments
const visibleSegments = segments.filter((s) => s.start < audioDuration);
const overflowSegments = segments.filter((s) => s.start >= audioDuration);
const hasSegments = visibleSegments.length > 0;
return (
<div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm">
// Callback ref to detect when waveRef div mounts
const waveCallbackRef = useCallback((node: HTMLDivElement | null) => {
(waveRef as React.MutableRefObject<HTMLDivElement | null>).current = node;
setWaveReady(!!node);
}, []);
const handlePlayPause = useCallback(() => {
wsRef.current?.playPause();
}, []);
// Drag-to-reorder handlers
const handleDragStart = useCallback((idx: number, e: React.DragEvent) => {
setDragFromIdx(idx);
e.dataTransfer.effectAllowed = "move";
e.dataTransfer.setData("text/plain", String(idx));
}, []);
const handleDragOver = useCallback((idx: number, e: React.DragEvent) => {
e.preventDefault();
e.dataTransfer.dropEffect = "move";
setDragOverIdx(idx);
}, []);
const handleDragLeave = useCallback(() => {
setDragOverIdx(null);
}, []);
const handleDrop = useCallback((toIdx: number, e: React.DragEvent) => {
e.preventDefault();
const fromIdx = parseInt(e.dataTransfer.getData("text/plain"), 10);
if (!isNaN(fromIdx) && fromIdx !== toIdx) {
onReorderSegment(fromIdx, toIdx);
}
setDragFromIdx(null);
setDragOverIdx(null);
}, [onReorderSegment]);
const handleDragEnd = useCallback(() => {
setDragFromIdx(null);
setDragOverIdx(null);
}, []);
// Filter visible vs overflow segments
const visibleSegments = useMemo(() => segments.filter((s) => s.start < audioDuration), [segments, audioDuration]);
const overflowSegments = useMemo(() => segments.filter((s) => s.start >= audioDuration), [segments, audioDuration]);
const hasSegments = visibleSegments.length > 0;
const content = (
<>
<div className="flex items-center justify-between mb-3">
<h2 className="text-base sm:text-lg font-semibold text-white flex items-center gap-2">
🎞
</h2>
{!embedded ? (
<h2 className="text-base sm:text-lg font-semibold text-white flex items-center gap-2">
</h2>
) : (
<h3 className="text-sm font-medium text-gray-400"></h3>
)}
<div className="flex items-center gap-2 text-xs text-gray-400">
<div ref={ratioRef} className="relative">
<button
@@ -231,28 +237,28 @@ export function TimelineEditor({
)}
</div>
</div>
{/* Waveform — always rendered so ref stays mounted */}
<div className="relative mb-1">
<div ref={waveCallbackRef} className="rounded-lg overflow-hidden bg-black/20 cursor-pointer" style={{ minHeight: 56 }} />
</div>
{/* Segment blocks or empty placeholder */}
{hasSegments ? (
<>
<div className="relative h-14 flex select-none">
{/* Playhead — syncs with audio playback */}
<div
ref={playheadRef}
className="absolute top-0 h-full w-0.5 bg-fuchsia-400 z-10 pointer-events-none"
style={{ display: "none", left: "0%" }}
/>
{visibleSegments.map((seg, i) => {
const left = (seg.start / audioDuration) * 100;
const width = ((seg.end - seg.start) / audioDuration) * 100;
const segDur = seg.end - seg.start;
const isDragTarget = dragOverIdx === i && dragFromIdx !== i;
{/* Waveform — always rendered so ref stays mounted */}
<div className="relative mb-1">
<div ref={waveCallbackRef} className="rounded-lg overflow-hidden bg-black/20 cursor-pointer" style={{ minHeight: 56 }} />
</div>
{/* Segment blocks or empty placeholder */}
{hasSegments ? (
<>
<div className="relative h-14 flex select-none">
{/* Playhead — syncs with audio playback */}
<div
ref={playheadRef}
className="absolute top-0 h-full w-0.5 bg-fuchsia-400 z-10 pointer-events-none"
style={{ display: "none", left: "0%" }}
/>
{visibleSegments.map((seg, i) => {
const left = (seg.start / audioDuration) * 100;
const width = ((seg.end - seg.start) / audioDuration) * 100;
const segDur = seg.end - seg.start;
const isDragTarget = dragOverIdx === i && dragFromIdx !== i;
// Compute loop portion for the last visible segment
const isLastVisible = i === visibleSegments.length - 1;
let loopPercent = 0;
@@ -266,84 +272,93 @@ export function TimelineEditor({
loopPercent = ((segDur - effDur) / segDur) * 100;
}
}
return (
<div key={seg.id} className="absolute top-0 h-full" style={{ left: `${left}%`, width: `${width}%` }}>
<button
draggable
onDragStart={(e) => handleDragStart(i, e)}
onDragOver={(e) => handleDragOver(i, e)}
onDragLeave={handleDragLeave}
onDrop={(e) => handleDrop(i, e)}
onDragEnd={handleDragEnd}
onClick={() => onClickSegment(seg)}
className={`relative w-full h-full rounded-lg flex flex-col items-center justify-center overflow-hidden cursor-grab active:cursor-grabbing transition-all border ${
isDragTarget
? "ring-2 ring-purple-400 border-purple-400 scale-[1.02]"
: dragFromIdx === i
? "opacity-50 border-white/10"
: "hover:opacity-90 border-white/10"
}`}
style={{ backgroundColor: seg.color + "33", borderColor: isDragTarget ? undefined : seg.color + "66" }}
title={`拖拽可调换顺序 · 点击设置截取范围\n${seg.materialName}\n${segDur.toFixed(1)}s${loopPercent > 0 ? ` (含循环 ${(segDur * loopPercent / 100).toFixed(1)}s)` : ""}`}
>
<span className="text-[11px] text-white/90 truncate max-w-full px-1 leading-tight z-[1]">
{seg.materialName}
</span>
<span className="text-[10px] text-white/60 leading-tight z-[1]">
{segDur.toFixed(1)}s
</span>
{seg.sourceStart > 0 && (
<span className="text-[9px] text-amber-400/80 leading-tight z-[1]">
{seg.sourceStart.toFixed(1)}s
</span>
)}
{/* Loop fill stripe overlay */}
{loopPercent > 0 && (
<div
className="absolute top-0 right-0 h-full pointer-events-none flex items-center justify-center"
style={{
width: `${loopPercent}%`,
background: `repeating-linear-gradient(-45deg, transparent, transparent 3px, rgba(255,255,255,0.07) 3px, rgba(255,255,255,0.07) 6px)`,
borderLeft: "1px dashed rgba(255,255,255,0.25)",
}}
>
<span className="text-[9px] text-white/30"></span>
</div>
)}
</button>
</div>
);
})}
</div>
{/* Overflow segments — shown as gray chips */}
{overflowSegments.length > 0 && (
<div className="flex flex-wrap items-center gap-1.5 mt-1.5">
<span className="text-[10px] text-gray-500">使:</span>
{overflowSegments.map((seg) => (
<span
key={seg.id}
className="text-[10px] text-gray-500 bg-white/5 border border-white/10 rounded px-1.5 py-0.5"
>
{seg.materialName}
</span>
))}
</div>
)}
<p className="text-[10px] text-gray-500 mt-1.5">
· ·
</p>
</>
) : (
<>
<div className="h-14 bg-white/5 rounded-lg" />
<p className="text-[10px] text-gray-500 mt-1.5">
</p>
</>
)}
</div>
);
}
return (
<div key={seg.id} className="absolute top-0 h-full" style={{ left: `${left}%`, width: `${width}%` }}>
<button
draggable
onDragStart={(e) => handleDragStart(i, e)}
onDragOver={(e) => handleDragOver(i, e)}
onDragLeave={handleDragLeave}
onDrop={(e) => handleDrop(i, e)}
onDragEnd={handleDragEnd}
onClick={() => onClickSegment(seg)}
className={`relative w-full h-full rounded-lg flex flex-col items-center justify-center overflow-hidden cursor-grab active:cursor-grabbing transition-all border ${
isDragTarget
? "ring-2 ring-purple-400 border-purple-400 scale-[1.02]"
: dragFromIdx === i
? "opacity-50 border-white/10"
: "hover:opacity-90 border-white/10"
}`}
style={{ backgroundColor: seg.color + "33", borderColor: isDragTarget ? undefined : seg.color + "66" }}
title={`拖拽可调换顺序 · 点击设置截取范围\n${seg.materialName}\n${segDur.toFixed(1)}s${loopPercent > 0 ? ` (含循环 ${(segDur * loopPercent / 100).toFixed(1)}s)` : ""}`}
>
<GripVertical className="absolute top-0.5 left-0.5 h-3 w-3 text-white/30 z-[1]" />
<span className="text-[11px] text-white/90 truncate max-w-full px-1 leading-tight z-[1]">
{seg.materialName}
</span>
<span className="text-[10px] text-white/60 leading-tight z-[1]">
{segDur.toFixed(1)}s
</span>
{seg.sourceStart > 0 && (
<span className="text-[9px] text-amber-400/80 leading-tight z-[1]">
{seg.sourceStart.toFixed(1)}s
</span>
)}
{/* Loop fill stripe overlay */}
{loopPercent > 0 && (
<div
className="absolute top-0 right-0 h-full pointer-events-none flex items-center justify-center"
style={{
width: `${loopPercent}%`,
background: `repeating-linear-gradient(-45deg, transparent, transparent 3px, rgba(255,255,255,0.07) 3px, rgba(255,255,255,0.07) 6px)`,
borderLeft: "1px dashed rgba(255,255,255,0.25)",
}}
>
<span className="text-[9px] text-white/30"></span>
</div>
)}
</button>
</div>
);
})}
</div>
{/* Overflow segments — shown as gray chips */}
{overflowSegments.length > 0 && (
<div className="flex flex-wrap items-center gap-1.5 mt-1.5">
<span className="text-[10px] text-gray-500">使:</span>
{overflowSegments.map((seg) => (
<span
key={seg.id}
className="text-[10px] text-gray-500 bg-white/5 border border-white/10 rounded px-1.5 py-0.5"
>
{seg.materialName}
</span>
))}
</div>
)}
<p className="text-[10px] text-gray-500 mt-1.5">
· ·
</p>
</>
) : (
<>
<div className="h-14 bg-white/5 rounded-lg" />
<p className="text-[10px] text-gray-500 mt-1.5">
</p>
</>
)}
</>
);
if (embedded) return content;
return (
<div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm">
{content}
</div>
);
}


@@ -1,4 +1,4 @@
import { Eye } from "lucide-react";
import { ChevronDown, Eye } from "lucide-react";
import { FloatingStylePreview } from "@/features/home/ui/FloatingStylePreview";
interface SubtitleStyleOption {
@@ -38,11 +38,21 @@ interface TitleSubtitlePanelProps {
onTitleChange: (value: string) => void;
onTitleCompositionStart?: () => void;
onTitleCompositionEnd?: (value: string) => void;
videoSecondaryTitle: string;
onSecondaryTitleChange: (value: string) => void;
onSecondaryTitleCompositionStart?: () => void;
onSecondaryTitleCompositionEnd?: (value: string) => void;
titleStyles: TitleStyleOption[];
selectedTitleStyleId: string;
onSelectTitleStyle: (id: string) => void;
titleFontSize: number;
onTitleFontSizeChange: (value: number) => void;
selectedSecondaryTitleStyleId: string;
onSelectSecondaryTitleStyle: (id: string) => void;
secondaryTitleFontSize: number;
onSecondaryTitleFontSizeChange: (value: number) => void;
secondaryTitleTopMargin: number;
onSecondaryTitleTopMarginChange: (value: number) => void;
subtitleStyles: SubtitleStyleOption[];
selectedSubtitleStyleId: string;
onSelectSubtitleStyle: (id: string) => void;
@@ -52,11 +62,14 @@ interface TitleSubtitlePanelProps {
onTitleTopMarginChange: (value: number) => void;
subtitleBottomMargin: number;
onSubtitleBottomMarginChange: (value: number) => void;
titleDisplayMode: "short" | "persistent";
onTitleDisplayModeChange: (mode: "short" | "persistent") => void;
resolveAssetUrl: (path?: string | null) => string | null;
getFontFormat: (fontFile?: string) => string;
buildTextShadow: (color: string, size: number) => string;
previewBaseWidth?: number;
previewBaseHeight?: number;
previewBackgroundUrl?: string | null;
}
export function TitleSubtitlePanel({
@@ -66,11 +79,21 @@ export function TitleSubtitlePanel({
onTitleChange,
onTitleCompositionStart,
onTitleCompositionEnd,
videoSecondaryTitle,
onSecondaryTitleChange,
onSecondaryTitleCompositionStart,
onSecondaryTitleCompositionEnd,
titleStyles,
selectedTitleStyleId,
onSelectTitleStyle,
titleFontSize,
onTitleFontSizeChange,
selectedSecondaryTitleStyleId,
onSelectSecondaryTitleStyle,
secondaryTitleFontSize,
onSecondaryTitleFontSizeChange,
secondaryTitleTopMargin,
onSecondaryTitleTopMarginChange,
subtitleStyles,
selectedSubtitleStyleId,
onSelectSubtitleStyle,
@@ -80,34 +103,55 @@ export function TitleSubtitlePanel({
onTitleTopMarginChange,
subtitleBottomMargin,
onSubtitleBottomMarginChange,
titleDisplayMode,
onTitleDisplayModeChange,
resolveAssetUrl,
getFontFormat,
buildTextShadow,
previewBaseWidth = 1080,
previewBaseHeight = 1920,
previewBackgroundUrl,
}: TitleSubtitlePanelProps) {
return (
<div className="bg-white/5 rounded-2xl p-4 sm:p-6 border border-white/10 backdrop-blur-sm">
<div className="flex items-center justify-between mb-4 gap-2">
<h2 className="text-base sm:text-lg font-semibold text-white flex items-center gap-2">
🎬
</h2>
<button
onClick={onTogglePreview}
className="px-2 py-1 text-xs bg-white/10 hover:bg-white/20 rounded text-gray-300 flex items-center gap-1"
>
<Eye className="h-3.5 w-3.5" />
{showStylePreview ? "收起预览" : "预览样式"}
</button>
<div className="flex items-center gap-1.5">
<div className="relative shrink-0">
<select
value={titleDisplayMode}
onChange={(e) => onTitleDisplayModeChange(e.target.value as "short" | "persistent")}
className="appearance-none rounded-lg border border-white/15 bg-black/35 px-2.5 py-1.5 pr-7 text-xs text-gray-200 outline-none transition-colors hover:border-white/25 focus:border-purple-500"
aria-label="标题显示方式"
>
<option value="short"></option>
<option value="persistent"></option>
</select>
<ChevronDown className="pointer-events-none absolute right-2 top-1/2 h-3.5 w-3.5 -translate-y-1/2 text-gray-400" />
</div>
<button
onClick={onTogglePreview}
className="px-2 py-1 text-xs bg-white/10 hover:bg-white/20 rounded text-gray-300 flex items-center gap-1"
>
<Eye className="h-3.5 w-3.5" />
{showStylePreview ? "收起预览" : "预览样式"}
</button>
</div>
</div>
{showStylePreview && (
<FloatingStylePreview
onClose={onTogglePreview}
videoTitle={videoTitle}
videoSecondaryTitle={videoSecondaryTitle}
titleStyles={titleStyles}
selectedTitleStyleId={selectedTitleStyleId}
titleFontSize={titleFontSize}
selectedSecondaryTitleStyleId={selectedSecondaryTitleStyleId}
secondaryTitleFontSize={secondaryTitleFontSize}
secondaryTitleTopMargin={secondaryTitleTopMargin}
subtitleStyles={subtitleStyles}
selectedSubtitleStyleId={selectedSubtitleStyleId}
subtitleFontSize={subtitleFontSize}
@@ -119,11 +163,15 @@ export function TitleSubtitlePanel({
buildTextShadow={buildTextShadow}
previewBaseWidth={previewBaseWidth}
previewBaseHeight={previewBaseHeight}
previewBackgroundUrl={previewBackgroundUrl}
/>
)}
<div className="mb-4">
<label className="text-sm text-gray-300 mb-2 block">15</label>
<div className="flex items-center justify-between mb-2">
<label className="text-sm text-gray-300"></label>
<span className={`text-xs ${videoTitle.length > 15 ? "text-red-400" : "text-gray-500"}`}>{videoTitle.length}/15</span>
</div>
<input
type="text"
value={videoTitle}
@@ -135,96 +183,102 @@ export function TitleSubtitlePanel({
/>
</div>
<div className="mb-4">
<div className="flex items-center justify-between mb-2">
<label className="text-sm text-gray-300"></label>
<span className={`text-xs ${videoSecondaryTitle.length > 20 ? "text-red-400" : "text-gray-500"}`}>{videoSecondaryTitle.length}/20</span>
</div>
<input
type="text"
value={videoSecondaryTitle}
onChange={(e) => onSecondaryTitleChange(e.target.value)}
onCompositionStart={onSecondaryTitleCompositionStart}
onCompositionEnd={(e) => onSecondaryTitleCompositionEnd?.(e.currentTarget.value)}
placeholder="输入副标题,显示在主标题下方"
className="w-full px-3 sm:px-4 py-2 text-sm sm:text-base bg-black/30 border border-white/10 rounded-xl text-white placeholder-gray-500 focus:outline-none focus:border-purple-500 transition-colors"
/>
</div>
{titleStyles.length > 0 && (
<div className="mb-4">
<label className="text-sm text-gray-300 mb-2 block"></label>
<div className="grid grid-cols-2 gap-2">
{titleStyles.map((style) => (
<button
key={style.id}
onClick={() => onSelectTitleStyle(style.id)}
className={`p-2 rounded-lg border transition-all text-left ${selectedTitleStyleId === style.id
? "border-purple-500 bg-purple-500/20"
: "border-white/10 bg-white/5 hover:border-white/30"
}`}
<div className="mb-4 space-y-3">
<div className="flex items-center gap-3">
<label className="text-sm text-gray-300 shrink-0 w-20"></label>
<div className="relative w-1/3 min-w-[100px]">
<select
value={selectedTitleStyleId}
onChange={(e) => onSelectTitleStyle(e.target.value)}
className="w-full appearance-none rounded-lg border border-white/15 bg-black/35 px-3 py-2 pr-8 text-sm text-gray-200 outline-none transition-colors hover:border-white/25 focus:border-purple-500"
>
<div className="text-white text-sm truncate">{style.label}</div>
<div className="text-xs text-gray-400 truncate">
{style.font_family || style.font_file || ""}
</div>
</button>
))}
{titleStyles.map((style) => (
<option key={style.id} value={style.id}>{style.label}</option>
))}
</select>
<ChevronDown className="pointer-events-none absolute right-2.5 top-1/2 h-3.5 w-3.5 -translate-y-1/2 text-gray-400" />
</div>
</div>
<div className="mt-3">
<label className="text-xs text-gray-400 mb-2 block">: {titleFontSize}px</label>
<input
type="range"
min="60"
max="150"
step="1"
value={titleFontSize}
onChange={(e) => onTitleFontSizeChange(parseInt(e.target.value, 10))}
className="w-full accent-purple-500"
/>
<div className="flex items-center gap-3">
<label className="text-xs text-gray-400 shrink-0 w-20"> {titleFontSize}</label>
<input type="range" min="60" max="150" step="1" value={titleFontSize} onChange={(e) => onTitleFontSizeChange(parseInt(e.target.value, 10))} className="flex-1 accent-purple-500" />
</div>
<div className="mt-3">
<label className="text-xs text-gray-400 mb-2 block">: {titleTopMargin}px</label>
<input
type="range"
min="0"
max="300"
step="1"
value={titleTopMargin}
onChange={(e) => onTitleTopMarginChange(parseInt(e.target.value, 10))}
className="w-full accent-purple-500"
/>
<div className="flex items-center gap-3">
<label className="text-xs text-gray-400 shrink-0 w-20"> {titleTopMargin}</label>
<input type="range" min="0" max="300" step="1" value={titleTopMargin} onChange={(e) => onTitleTopMarginChange(parseInt(e.target.value, 10))} className="flex-1 accent-purple-500" />
</div>
</div>
)}
{titleStyles.length > 0 && (
<div className="mb-4 space-y-3">
<div className="flex items-center gap-3">
<label className="text-sm text-gray-300 shrink-0 w-20"></label>
<div className="relative w-1/3 min-w-[100px]">
<select
value={selectedSecondaryTitleStyleId}
onChange={(e) => onSelectSecondaryTitleStyle(e.target.value)}
className="w-full appearance-none rounded-lg border border-white/15 bg-black/35 px-3 py-2 pr-8 text-sm text-gray-200 outline-none transition-colors hover:border-white/25 focus:border-purple-500"
>
{titleStyles.map((style) => (
<option key={style.id} value={style.id}>{style.label}</option>
))}
</select>
<ChevronDown className="pointer-events-none absolute right-2.5 top-1/2 h-3.5 w-3.5 -translate-y-1/2 text-gray-400" />
</div>
</div>
<div className="flex items-center gap-3">
<label className="text-xs text-gray-400 shrink-0 w-20"> {secondaryTitleFontSize}</label>
<input type="range" min="30" max="100" step="1" value={secondaryTitleFontSize} onChange={(e) => onSecondaryTitleFontSizeChange(parseInt(e.target.value, 10))} className="flex-1 accent-purple-500" />
</div>
<div className="flex items-center gap-3">
<label className="text-xs text-gray-400 shrink-0 w-20"> {secondaryTitleTopMargin}</label>
<input type="range" min="0" max="100" step="1" value={secondaryTitleTopMargin} onChange={(e) => onSecondaryTitleTopMarginChange(parseInt(e.target.value, 10))} className="flex-1 accent-purple-500" />
</div>
</div>
)}
{subtitleStyles.length > 0 && (
<div className="mt-4">
<label className="text-sm text-gray-300 mb-2 block"></label>
<div className="grid grid-cols-2 gap-2">
{subtitleStyles.map((style) => (
<button
key={style.id}
onClick={() => onSelectSubtitleStyle(style.id)}
className={`p-2 rounded-lg border transition-all text-left ${selectedSubtitleStyleId === style.id
? "border-purple-500 bg-purple-500/20"
: "border-white/10 bg-white/5 hover:border-white/30"
}`}
<div className="mt-4 space-y-3">
<div className="flex items-center gap-3">
<label className="text-sm text-gray-300 shrink-0 w-20"></label>
<div className="relative w-1/3 min-w-[100px]">
<select
value={selectedSubtitleStyleId}
onChange={(e) => onSelectSubtitleStyle(e.target.value)}
className="w-full appearance-none rounded-lg border border-white/15 bg-black/35 px-3 py-2 pr-8 text-sm text-gray-200 outline-none transition-colors hover:border-white/25 focus:border-purple-500"
>
<div className="text-white text-sm truncate">{style.label}</div>
<div className="text-xs text-gray-400 truncate">
{style.font_family || style.font_file || ""}
</div>
</button>
))}
{subtitleStyles.map((style) => (
<option key={style.id} value={style.id}>{style.label}</option>
))}
</select>
<ChevronDown className="pointer-events-none absolute right-2.5 top-1/2 h-3.5 w-3.5 -translate-y-1/2 text-gray-400" />
</div>
</div>
<div className="mt-3">
<label className="text-xs text-gray-400 mb-2 block">: {subtitleFontSize}px</label>
<input
type="range"
min="40"
max="90"
step="1"
value={subtitleFontSize}
onChange={(e) => onSubtitleFontSizeChange(parseInt(e.target.value, 10))}
className="w-full accent-purple-500"
/>
<div className="flex items-center gap-3">
<label className="text-xs text-gray-400 shrink-0 w-20"> {subtitleFontSize}</label>
<input type="range" min="40" max="90" step="1" value={subtitleFontSize} onChange={(e) => onSubtitleFontSizeChange(parseInt(e.target.value, 10))} className="flex-1 accent-purple-500" />
</div>
<div className="mt-3">
<label className="text-xs text-gray-400 mb-2 block">: {subtitleBottomMargin}px</label>
<input
type="range"
min="0"
max="300"
step="1"
value={subtitleBottomMargin}
onChange={(e) => onSubtitleBottomMarginChange(parseInt(e.target.value, 10))}
className="w-full accent-purple-500"
/>
<div className="flex items-center gap-3">
<label className="text-xs text-gray-400 shrink-0 w-20"> {subtitleBottomMargin}</label>
<input type="range" min="0" max="300" step="1" value={subtitleBottomMargin} onChange={(e) => onSubtitleBottomMarginChange(parseInt(e.target.value, 10))} className="flex-1 accent-purple-500" />
</div>
</div>
)}


@@ -13,6 +13,7 @@ interface VoiceSelectorProps {
voice: string;
onSelectVoice: (id: string) => void;
voiceCloneSlot: ReactNode;
embedded?: boolean;
}
export function VoiceSelector({
@@ -22,32 +23,29 @@ export function VoiceSelector({
voice,
onSelectVoice,
voiceCloneSlot,
embedded = false,
}: VoiceSelectorProps) {
return (
<div className="bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm">
<h2 className="text-lg font-semibold text-white mb-4 flex items-center gap-2">
🎙
</h2>
const content = (
<>
<div className="flex gap-2 mb-4">
<button
onClick={() => onSelectTtsMode("edgetts")}
className={`flex-1 py-2 px-4 rounded-lg font-medium transition-all flex items-center justify-center gap-2 ${ttsMode === "edgetts"
className={`flex-1 py-2 px-2 sm:px-4 rounded-lg text-sm sm:text-base font-medium transition-all flex items-center justify-center gap-1.5 sm:gap-2 ${ttsMode === "edgetts"
? "bg-purple-600 text-white"
: "bg-white/10 text-gray-300 hover:bg-white/20"
}`}
>
<Volume2 className="h-4 w-4" />
<Volume2 className="h-4 w-4 shrink-0" />
</button>
<button
onClick={() => onSelectTtsMode("voiceclone")}
className={`flex-1 py-2 px-4 rounded-lg font-medium transition-all flex items-center justify-center gap-2 ${ttsMode === "voiceclone"
className={`flex-1 py-2 px-2 sm:px-4 rounded-lg text-sm sm:text-base font-medium transition-all flex items-center justify-center gap-1.5 sm:gap-2 ${ttsMode === "voiceclone"
? "bg-purple-600 text-white"
: "bg-white/10 text-gray-300 hover:bg-white/20"
}`}
>
<Mic className="h-4 w-4" />
<Mic className="h-4 w-4 shrink-0" />
</button>
</div>
@@ -70,6 +68,17 @@ export function VoiceSelector({
)}
{ttsMode === "voiceclone" && voiceCloneSlot}
</>
);
if (embedded) return content;
return (
<div className="bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm">
<h2 className="text-lg font-semibold text-white mb-4 flex items-center gap-2">
🎙
</h2>
{content}
</div>
);
}


@@ -15,9 +15,7 @@ interface UseScriptExtractionOptions {
export const useScriptExtraction = ({ isOpen }: UseScriptExtractionOptions) => {
const [isLoading, setIsLoading] = useState(false);
const [script, setScript] = useState("");
const [rewrittenScript, setRewrittenScript] = useState("");
const [error, setError] = useState<string | null>(null);
const [doRewrite, setDoRewrite] = useState(true);
const [step, setStep] = useState<ExtractionStep>("config");
const [dragActive, setDragActive] = useState(false);
const [selectedFile, setSelectedFile] = useState<File | null>(null);
@@ -29,7 +27,6 @@ export const useScriptExtraction = ({ isOpen }: UseScriptExtractionOptions) => {
if (isOpen) {
setStep("config");
setScript("");
setRewrittenScript("");
setError(null);
setIsLoading(false);
setSelectedFile(null);
@@ -100,10 +97,10 @@ export const useScriptExtraction = ({ isOpen }: UseScriptExtractionOptions) => {
} else if (activeTab === "url") {
formData.append("url", inputUrl.trim());
}
formData.append("rewrite", doRewrite ? "true" : "false");
formData.append("rewrite", "false");
const { data: res } = await api.post<
ApiResponse<{ original_script: string; rewritten_script?: string }>
ApiResponse<{ original_script: string }>
>("/api/tools/extract-script", formData, {
headers: { "Content-Type": "multipart/form-data" },
timeout: 180000, // 3 minutes timeout
@@ -111,7 +108,6 @@ export const useScriptExtraction = ({ isOpen }: UseScriptExtractionOptions) => {
const payload = unwrap(res);
setScript(payload.original_script);
setRewrittenScript(payload.rewritten_script || "");
setStep("result");
} catch (err: unknown) {
console.error(err);
@@ -126,7 +122,7 @@ export const useScriptExtraction = ({ isOpen }: UseScriptExtractionOptions) => {
} finally {
setIsLoading(false);
}
}, [activeTab, selectedFile, inputUrl, doRewrite]);
}, [activeTab, selectedFile, inputUrl]);
const copyToClipboard = useCallback((text: string) => {
if (navigator.clipboard && window.isSecureContext) {
@@ -185,16 +181,13 @@ export const useScriptExtraction = ({ isOpen }: UseScriptExtractionOptions) => {
// State
isLoading,
script,
rewrittenScript,
error,
doRewrite,
step,
dragActive,
selectedFile,
activeTab,
inputUrl,
// Setters
setDoRewrite,
setActiveTab,
setInputUrl,
// Handlers


@@ -83,6 +83,8 @@ export const usePublishController = () => {
setVideos(nextVideos);
if (nextVideos.length > 0 && autoSelectLatest) {
setSelectedVideo(nextVideos[0].id);
// Write a cross-page shared flag so the home page can also pick up the latest generated video
localStorage.setItem(`vigent_${getStorageKey()}_latestGeneratedVideoId`, nextVideos[0].id);
}
updatePrefetch({ videos: nextVideos });
} catch (error) {
@@ -109,16 +111,23 @@ export const usePublishController = () => {
// ---- Video selection restore (the only effect here; minimal conditions) ----
// Wait for auth to finish + videos to load → restore once, then never run again
// Prefer the cross-page shared flag (latest generated video); otherwise restore the last selection
useEffect(() => {
if (isAuthLoading || videos.length === 0 || videoRestoredRef.current) return;
videoRestoredRef.current = true;
const key = getStorageKey();
const saved = localStorage.getItem(`vigent_${key}_publish_selected_video`);
if (saved && videos.some(v => v.id === saved)) {
setSelectedVideo(saved);
const latestId = localStorage.getItem(`vigent_${key}_latestGeneratedVideoId`);
if (latestId && videos.some(v => v.id === latestId)) {
setSelectedVideo(latestId);
localStorage.removeItem(`vigent_${key}_latestGeneratedVideoId`);
} else {
setSelectedVideo(videos[0].id);
const saved = localStorage.getItem(`vigent_${key}_publish_selected_video`);
if (saved && videos.some(v => v.id === saved)) {
setSelectedVideo(saved);
} else {
setSelectedVideo(videos[0].id);
}
}
}, [isAuthLoading, videos, getStorageKey]);


@@ -135,7 +135,7 @@ export function PublishPage() {
<div className="space-y-6">
<div className="bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm">
<h2 className="text-lg font-semibold text-white mb-4 flex items-center gap-2">
👤
</h2>
{isAccountsLoading ? (
@@ -157,62 +157,60 @@ export function PublishPage() {
))}
</div>
) : (
<div className="space-y-3">
<div className="space-y-2 sm:space-y-3">
{accounts.map((account) => (
<div
key={account.platform}
className="flex items-center justify-between p-4 bg-black/30 rounded-xl"
className="flex items-center gap-3 px-3 py-2.5 sm:px-4 sm:py-3.5 bg-black/30 rounded-xl"
>
<div className="flex items-center gap-3">
{platformIcons[account.platform] ? (
<Image
src={platformIcons[account.platform].src}
alt={platformIcons[account.platform].alt}
width={28}
height={28}
className="h-7 w-7"
/>
) : (
<span className="text-2xl">🌐</span>
)}
<div>
<div className="text-white font-medium">
{account.name}
</div>
<div
className={`text-sm ${account.logged_in
? "text-green-400"
: "text-gray-500"
}`}
>
{account.logged_in ? "✓ 已登录" : "未登录"}
</div>
{platformIcons[account.platform] ? (
<Image
src={platformIcons[account.platform].src}
alt={platformIcons[account.platform].alt}
width={28}
height={28}
className="h-6 w-6 sm:h-7 sm:w-7 shrink-0"
/>
) : (
<span className="text-xl sm:text-2xl">🌐</span>
)}
<div className="min-w-0 flex-1">
<div className="text-sm sm:text-base text-white font-medium leading-tight">
{account.name}
</div>
<div
className={`text-xs sm:text-sm leading-tight ${account.logged_in
? "text-green-400"
: "text-gray-500"
}`}
>
{account.logged_in ? "✓ 已登录" : "未登录"}
</div>
</div>
<div className="flex gap-2">
<div className="flex items-center gap-1.5 sm:gap-2 shrink-0">
{account.logged_in ? (
<>
<button
onClick={() => handleLogin(account.platform)}
className="px-3 py-1 bg-white/10 hover:bg-white/20 text-white text-sm rounded-lg transition-colors flex items-center gap-1"
className="px-2 py-1 sm:px-3 sm:py-1.5 bg-white/10 hover:bg-white/20 text-white text-xs sm:text-sm rounded-md sm:rounded-lg transition-colors flex items-center gap-1"
>
<RotateCcw className="h-3.5 w-3.5" />
<RotateCcw className="h-3 w-3 sm:h-3.5 sm:w-3.5" />
</button>
<button
onClick={() => handleLogout(account.platform)}
className="px-3 py-1 bg-red-500/80 hover:bg-red-600 text-white text-sm rounded-lg transition-colors flex items-center gap-1"
className="px-2 py-1 sm:px-3 sm:py-1.5 bg-red-500/80 hover:bg-red-600 text-white text-xs sm:text-sm rounded-md sm:rounded-lg transition-colors flex items-center gap-1"
>
<LogOut className="h-3.5 w-3.5" />
<LogOut className="h-3 w-3 sm:h-3.5 sm:w-3.5" />
</button>
</>
) : (
<button
onClick={() => handleLogin(account.platform)}
className="px-3 py-1 bg-purple-500/80 hover:bg-purple-600 text-white text-sm rounded-lg transition-colors flex items-center gap-1"
className="px-2 py-1 sm:px-3 sm:py-1.5 bg-purple-500/80 hover:bg-purple-600 text-white text-xs sm:text-sm rounded-md sm:rounded-lg transition-colors flex items-center gap-1"
>
<QrCode className="h-3.5 w-3.5" />
<QrCode className="h-3 w-3 sm:h-3.5 sm:w-3.5" />
</button>
)}
@@ -228,7 +226,7 @@ export function PublishPage() {
<div className="space-y-6">
{/* Select video */}
<div className="bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm">
<h2 className="text-lg font-semibold text-white mb-4">📹 </h2>
<h2 className="text-lg font-semibold text-white mb-4"></h2>
<div className="flex items-center gap-3 mb-4">
<Search className="text-gray-400 w-4 h-4" />
@@ -303,7 +301,7 @@ export function PublishPage() {
{/* Fill in details */}
<div className="bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm">
<h2 className="text-lg font-semibold text-white mb-4"> </h2>
<h2 className="text-lg font-semibold text-white mb-4"></h2>
<div className="space-y-4">
<div>
@@ -337,7 +335,7 @@ export function PublishPage() {
{/* Select platforms */}
<div className="bg-white/5 rounded-2xl p-6 border border-white/10 backdrop-blur-sm">
<h2 className="text-lg font-semibold text-white mb-4">📱 </h2>
<h2 className="text-lg font-semibold text-white mb-4"></h2>
<div className="grid grid-cols-3 gap-3">
{accounts


@@ -12,7 +12,7 @@ const API_BASE = typeof window === 'undefined'
// Prevent duplicate redirects
let isRedirecting = false;
const PUBLIC_PATHS = new Set(['/login', '/register']);
const PUBLIC_PATHS = new Set(['/login', '/register', '/pay']);
// Create the axios instance
const api = axios.create({


@@ -11,6 +11,7 @@ interface AuthContextType {
user: User | null;
isLoading: boolean;
isAuthenticated: boolean;
setUser: (user: User | null) => void;
}
const AuthContext = createContext<AuthContextType>({
@@ -18,6 +19,7 @@ const AuthContext = createContext<AuthContextType>({
user: null,
isLoading: true,
isAuthenticated: false,
setUser: () => {},
});
export function AuthProvider({ children }: { children: ReactNode }) {
@@ -63,7 +65,8 @@ export function AuthProvider({ children }: { children: ReactNode }) {
userId: user?.id || null,
user,
isLoading,
isAuthenticated: !!user
isAuthenticated: !!user,
setUser,
}}>
{children}
</AuthContext.Provider>


@@ -12,6 +12,7 @@ export interface AuthResponse {
success: boolean;
message: string;
user?: User;
paymentToken?: string;
}
interface ApiResponse<T> {
@@ -25,20 +26,41 @@ interface ApiResponse<T> {
* User registration
*/
export async function register(phone: string, password: string, username?: string): Promise<AuthResponse> {
const { data: payload } = await api.post<ApiResponse<null>>('/api/auth/register', {
phone, password, username
});
return { success: payload.success, message: payload.message };
try {
const { data: payload } = await api.post<ApiResponse<null>>('/api/auth/register', {
phone, password, username
});
return { success: payload.success, message: payload.message };
} catch (err: any) {
return {
success: false,
message: err.response?.data?.message || '注册失败',
};
}
}
/**
* User login
*/
export async function login(phone: string, password: string): Promise<AuthResponse> {
const { data: payload } = await api.post<ApiResponse<{ user?: User }>>('/api/auth/login', {
phone, password
});
return { success: payload.success, message: payload.message, user: payload.data?.user };
try {
const { data: payload } = await api.post<ApiResponse<{ user?: User }>>('/api/auth/login', {
phone, password
});
return { success: payload.success, message: payload.message, user: payload.data?.user };
} catch (err: any) {
if (err.response?.status === 403 && err.response?.data?.data?.reason === 'PAYMENT_REQUIRED') {
return {
success: false,
message: err.response.data.message,
paymentToken: err.response.data.data.payment_token,
};
}
return {
success: false,
message: err.response?.data?.message || '登录失败',
};
}
}
/**


@@ -1,8 +1,12 @@
export const TITLE_MAX_LENGTH = 15;
export const SECONDARY_TITLE_MAX_LENGTH = 20;
export const clampTitle = (value: string, maxLength: number = TITLE_MAX_LENGTH) =>
value.slice(0, maxLength);
export const clampSecondaryTitle = (value: string, maxLength: number = SECONDARY_TITLE_MAX_LENGTH) =>
value.slice(0, maxLength);
export const applyTitleLimit = (
prev: string,
next: string,


@@ -65,7 +65,7 @@ def load_model():
start = time.time()
from cosyvoice.cli.cosyvoice import AutoModel
_model = AutoModel(model_dir=str(MODEL_DIR))
_model = AutoModel(model_dir=str(MODEL_DIR), fp16=True)
_model_loaded = True
print(f"✅ CosyVoice 3.0 model loaded in {time.time() - start:.1f}s")

models/MuseTalk/LICENSE

@@ -0,0 +1,159 @@
MIT License
Copyright (c) 2024 Tencent Music Entertainment Group
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Other dependencies and licenses:
Open Source Software Licensed under the MIT License:
--------------------------------------------------------------------
1. sd-vae-ft-mse
Files: https://huggingface.co/stabilityai/sd-vae-ft-mse/tree/main
License: MIT license
For details: https://choosealicense.com/licenses/mit/
2. whisper
Files: https://github.com/openai/whisper
License: MIT license
Copyright (c) 2022 OpenAI
For details: https://github.com/openai/whisper/blob/main/LICENSE
3. face-parsing.PyTorch
Files: https://github.com/zllrunning/face-parsing.PyTorch
License: MIT License
Copyright (c) 2019 zll
For details: https://github.com/zllrunning/face-parsing.PyTorch/blob/master/LICENSE
Open Source Software Licensed under the Apache License Version 2.0:
--------------------------------------------------------------------
1. DWpose
Files: https://huggingface.co/yzd-v/DWPose/tree/main
License: Apache-2.0
For details: https://choosealicense.com/licenses/apache-2.0/
Terms of the Apache License Version 2.0:
--------------------------------------------------------------------
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
You must give any other recipients of the Work or Derivative Works a copy of this License; and
You must cause any modified files to carry prominent notices stating that You changed the files; and
You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
Open Source Software Licensed under the BSD 3-Clause License:
--------------------------------------------------------------------
1. face-alignment
Files: https://github.com/1adrianb/face-alignment/tree/master
License: BSD 3-Clause License
Copyright (c) 2017, Adrian Bulat
All rights reserved.
For details: https://github.com/1adrianb/face-alignment/blob/master/LICENSE
Terms of the BSD 3-Clause License:
--------------------------------------------------------------------
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Open Source Software
--------------------------------------------------------------------
1. s3FD
Files: https://github.com/yxlijun/S3FD.pytorch

models/MuseTalk/README.md

@@ -0,0 +1,556 @@
# MuseTalk
> **ViGent2 integration notes**
>
> This directory is a deployment copy of MuseTalk v1.5, used as the long-video engine of the hybrid lip-sync pipeline.
>
> - **Service**: `scripts/server.py` — resident FastAPI inference service (port 8011, GPU0)
> - **PM2**: `vigent2-musetalk` (startup script `run_musetalk.sh`)
> - **Routing**: audio >= 120 s is routed to MuseTalk automatically, < 120 s goes to LatentSync (see the sketch below)
> - **Deployment docs**: [`Docs/MUSETALK_DEPLOY.md`](../../Docs/MUSETALK_DEPLOY.md)
> - **Local changes**: `scripts/inference.py` — hardened FFmpeg invocation plus extra CLI arguments; `musetalk/utils/audio_processor.py` — zero-pads audio when audio and video lengths do not match
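The dispatch itself is nothing more than a duration threshold. A minimal sketch of that routing, assuming only the 120 s cutoff stated in the note (the function and engine names below are illustrative placeholders, not the actual ViGent2 code):

```python
# Editor's sketch of the duration-based routing described in the note above.
# Only the 120 s threshold is taken from the note; everything else is a
# placeholder, not the real ViGent2 implementation.
MUSETALK_MIN_AUDIO_SECONDS = 120.0

def choose_lipsync_engine(audio_duration_s: float) -> str:
    """Pick the lip-sync backend for a narration of the given length."""
    if audio_duration_s >= MUSETALK_MIN_AUDIO_SECONDS:
        return "musetalk"   # long videos -> resident MuseTalk service (port 8011)
    return "latentsync"     # short clips -> LatentSync
```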
---
<strong>MuseTalk: Real-Time High-Fidelity Video Dubbing via Spatio-Temporal Sampling</strong>
Yue Zhang<sup>\*</sup>,
Zhizhou Zhong<sup>\*</sup>,
Minhao Liu<sup>\*</sup>,
Zhaokang Chen,
Bin Wu<sup>†</sup>,
Yubin Zeng,
Chao Zhan,
Junxin Huang,
Yingjie He,
Wenjiang Zhou
(<sup>*</sup>Equal Contribution, <sup>†</sup>Corresponding Author, benbinwu@tencent.com)
Lyra Lab, Tencent Music Entertainment
**[github](https://github.com/TMElyralab/MuseTalk)** **[huggingface](https://huggingface.co/TMElyralab/MuseTalk)** **[space](https://huggingface.co/spaces/TMElyralab/MuseTalk)** **[Technical report](https://arxiv.org/abs/2410.10122)**
We introduce `MuseTalk`, a **real-time high quality** lip-syncing model (30fps+ on an NVIDIA Tesla V100). MuseTalk can be applied with input videos, e.g., generated by [MuseV](https://github.com/TMElyralab/MuseV), as a complete virtual human solution.
## 🔥 Updates
We're excited to unveil MuseTalk 1.5.
This version **(1)** integrates training with perceptual loss, GAN loss, and sync loss, significantly boosting its overall performance. **(2)** We've implemented a two-stage training strategy and a spatio-temporal data sampling approach to strike a balance between visual quality and lip-sync accuracy.
Learn more details [here](https://arxiv.org/abs/2410.10122).
**The inference codes, training codes and model weights of MuseTalk 1.5 are all available now!** 🚀
# Overview
`MuseTalk` is a real-time high quality audio-driven lip-syncing model trained in the latent space of `ft-mse-vae`, which
1. modifies an unseen face according to the input audio, with a face-region size of `256 x 256`.
1. supports audio in various languages, such as Chinese, English, and Japanese.
1. supports real-time inference with 30fps+ on an NVIDIA Tesla V100.
1. supports modification of the proposed center point of the face region, which **SIGNIFICANTLY** affects generation results.
1. provides a checkpoint trained on the HDTF and a private dataset.
# News
- [04/05/2025] :mega: We are excited to announce that the training code is now open-sourced! You can now train your own MuseTalk model using our provided training scripts and configurations.
- [03/28/2025] We are thrilled to announce the release of our 1.5 version. This version is a significant improvement over the 1.0 version, with enhanced clarity, identity consistency, and precise lip-speech synchronization. We update the [technical report](https://arxiv.org/abs/2410.10122) with more details.
- [10/18/2024] We release the [technical report](https://arxiv.org/abs/2410.10122v2). Our report details a superior model to the open-source L1 loss version. It includes GAN and perceptual losses for improved clarity, and sync loss for enhanced performance.
- [04/17/2024] We release a pipeline that utilizes MuseTalk for real-time inference.
- [04/16/2024] Release Gradio [demo](https://huggingface.co/spaces/TMElyralab/MuseTalk) on HuggingFace Spaces (thanks to HF team for their community grant)
- [04/02/2024] Release MuseTalk project and pretrained models.
## Model
![Model Structure](https://github.com/user-attachments/assets/02f4a214-1bdd-4326-983c-e70b478accba)
MuseTalk was trained in latent space, where the images were encoded by a frozen VAE. The audio was encoded by a frozen `whisper-tiny` model. The architecture of the generation network was borrowed from the UNet of `stable-diffusion-v1-4`, with the audio embeddings fused into the image embeddings by cross-attention.
Note that although we use an architecture very similar to Stable Diffusion, MuseTalk is distinct in that it is **NOT** a diffusion model. Instead, MuseTalk operates by inpainting in the latent space in a single step.
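To make the "single step" point concrete, here is a purely conceptual sketch of one frame of inference. The `vae`, `unet`, and `audio_encoder` objects are placeholders standing in for the frozen components described above; this is not the actual MuseTalk API.

```python
import torch

def mask_mouth_region(latents: torch.Tensor) -> torch.Tensor:
    # Crude stand-in for MuseTalk's mouth mask: zero the lower half of the
    # latent grid so the UNet has to inpaint it.
    masked = latents.clone()
    masked[..., masked.shape[-2] // 2 :, :] = 0
    return masked

@torch.no_grad()
def lipsync_frame(face_crop, audio_window, vae, unet, audio_encoder):
    """face_crop: 256x256 face region; audio_window: audio features for this frame."""
    latents = vae.encode(face_crop)           # frozen VAE encoder
    audio_emb = audio_encoder(audio_window)   # frozen whisper-tiny features
    # One UNet forward pass: audio embeddings enter via cross-attention and the
    # masked mouth region is predicted directly -- no iterative diffusion sampling.
    pred = unet(mask_mouth_region(latents), encoder_hidden_states=audio_emb)
    return vae.decode(pred)                   # back to pixel space
```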
## Cases
<table>
<tr>
<td width="33%">
### Input Video
---
https://github.com/TMElyralab/MuseTalk/assets/163980830/37a3a666-7b90-4244-8d3a-058cb0e44107
---
https://github.com/user-attachments/assets/1ce3e850-90ac-4a31-a45f-8dfa4f2960ac
---
https://github.com/user-attachments/assets/fa3b13a1-ae26-4d1d-899e-87435f8d22b3
---
https://github.com/user-attachments/assets/15800692-39d1-4f4c-99f2-aef044dc3251
---
https://github.com/user-attachments/assets/a843f9c9-136d-4ed4-9303-4a7269787a60
---
https://github.com/user-attachments/assets/6eb4e70e-9e19-48e9-85a9-bbfa589c5fcb
</td>
<td width="33%">
### MuseTalk 1.0
---
https://github.com/user-attachments/assets/c04f3cd5-9f77-40e9-aafd-61978380d0ef
---
https://github.com/user-attachments/assets/2051a388-1cef-4c1d-b2a2-3c1ceee5dc99
---
https://github.com/user-attachments/assets/b5f56f71-5cdc-4e2e-a519-454242000d32
---
https://github.com/user-attachments/assets/a5843835-04ab-4c31-989f-0995cfc22f34
---
https://github.com/user-attachments/assets/3dc7f1d7-8747-4733-bbdd-97874af0c028
---
https://github.com/user-attachments/assets/3c78064e-faad-4637-83ae-28452a22b09a
</td>
<td width="33%">
### MuseTalk 1.5
---
https://github.com/user-attachments/assets/999a6f5b-61dd-48e1-b902-bb3f9cbc7247
---
https://github.com/user-attachments/assets/d26a5c9a-003c-489d-a043-c9a331456e75
---
https://github.com/user-attachments/assets/471290d7-b157-4cf6-8a6d-7e899afa302c
---
https://github.com/user-attachments/assets/1ee77c4c-8c70-4add-b6db-583a12faa7dc
---
https://github.com/user-attachments/assets/370510ea-624c-43b7-bbb0-ab5333e0fcc4
---
https://github.com/user-attachments/assets/b011ece9-a332-4bc1-b8b7-ef6e383d7bde
</td>
</tr>
</table>
# TODO:
- [x] trained models and inference codes.
- [x] Huggingface Gradio [demo](https://huggingface.co/spaces/TMElyralab/MuseTalk).
- [x] codes for real-time inference.
- [x] [technical report](https://arxiv.org/abs/2410.10122v2).
- [x] a better model with updated [technical report](https://arxiv.org/abs/2410.10122).
- [x] realtime inference code for 1.5 version.
- [x] training and data preprocessing codes.
- [ ] **always** welcome to submit issues and PRs to improve this repository! 😊
# Getting Started
We provide a detailed tutorial about the installation and the basic usage of MuseTalk for new users:
## Third party integration
Thanks to the community for the third-party integrations, which make installation and use more convenient for everyone.
Please note that we have not verified, maintained, or updated these third-party integrations; refer to the respective projects for specific results.
### [ComfyUI](https://github.com/chaojie/ComfyUI-MuseTalk)
## Installation
To prepare the Python environment and install additional packages such as opencv, diffusers, mmcv, etc., please follow the steps below:
### Build environment
We recommend Python 3.10 and CUDA 11.7. Set up your environment as follows:
```shell
conda create -n MuseTalk python==3.10
conda activate MuseTalk
```
### Install PyTorch 2.0.1
Choose one of the following installation methods:
```shell
# Option 1: Using pip
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118
# Option 2: Using conda
conda install pytorch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 pytorch-cuda=11.8 -c pytorch -c nvidia
```
### Install Dependencies
Install the remaining required packages:
```shell
pip install -r requirements.txt
```
### Install MMLab Packages
Install the MMLab ecosystem packages:
```bash
pip install --no-cache-dir -U openmim
mim install mmengine
mim install "mmcv==2.0.1"
mim install "mmdet==3.1.0"
mim install "mmpose==1.1.0"
```
### Setup FFmpeg
1. [Download](https://github.com/BtbN/FFmpeg-Builds/releases) the ffmpeg-static package
2. Configure FFmpeg based on your operating system:
For Linux:
```bash
export FFMPEG_PATH=/path/to/ffmpeg
# Example:
export FFMPEG_PATH=/musetalk/ffmpeg-4.4-amd64-static
```
For Windows:
Add the `ffmpeg-xxx\bin` directory to your system's PATH environment variable. Verify the installation by running `ffmpeg -version` in the command prompt; it should display the FFmpeg version information.
### Download weights
You can download weights in two ways:
#### Option 1: Using Download Scripts
We provide two scripts for automatic downloading:
For Linux:
```bash
sh ./download_weights.sh
```
For Windows:
```batch
:: Run the script
download_weights.bat
```
#### Option 2: Manual Download
You can also download the weights manually from the following links:
1. Download our trained [weights](https://huggingface.co/TMElyralab/MuseTalk/tree/main)
2. Download the weights of other components:
- [sd-vae-ft-mse](https://huggingface.co/stabilityai/sd-vae-ft-mse/tree/main)
- [whisper](https://huggingface.co/openai/whisper-tiny/tree/main)
- [dwpose](https://huggingface.co/yzd-v/DWPose/tree/main)
- [syncnet](https://huggingface.co/ByteDance/LatentSync/tree/main)
- [face-parse-bisent](https://drive.google.com/file/d/154JgKpzCPW82qINcVieuPH3fZ2e0P812/view?pli=1)
- [resnet18](https://download.pytorch.org/models/resnet18-5c106cde.pth)
Finally, these weights should be organized in `models` as follows:
```
./models/
├── musetalk
│ └── musetalk.json
│ └── pytorch_model.bin
├── musetalkV15
│ └── musetalk.json
│ └── unet.pth
├── syncnet
│ └── latentsync_syncnet.pt
├── dwpose
│ └── dw-ll_ucoco_384.pth
├── face-parse-bisent
│ ├── 79999_iter.pth
│ └── resnet18-5c106cde.pth
├── sd-vae
│ ├── config.json
│ └── diffusion_pytorch_model.bin
└── whisper
├── config.json
├── pytorch_model.bin
└── preprocessor_config.json
```
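As a quick sanity check before running inference, you can verify that the key files from the layout above are in place (a minimal sketch; extend the list to cover any other weights you need):
```bash
# Verify the main checkpoint files exist (paths follow the layout above)
for f in models/musetalkV15/unet.pth models/musetalkV15/musetalk.json \
         models/sd-vae/config.json models/whisper/pytorch_model.bin \
         models/dwpose/dw-ll_ucoco_384.pth models/face-parse-bisent/79999_iter.pth; do
  [ -f "$f" ] && echo "OK       $f" || echo "MISSING  $f"
done
```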
## Quickstart
### Inference
We provide inference scripts for both versions of MuseTalk:
#### Prerequisites
Before running inference, please ensure ffmpeg is installed and accessible:
```bash
# Check ffmpeg installation
ffmpeg -version
```
If ffmpeg is not found, please install it first:
- Windows: Download from [ffmpeg-static](https://github.com/BtbN/FFmpeg-Builds/releases) and add to PATH
- Linux: `sudo apt-get install ffmpeg`
#### Normal Inference
##### Linux Environment
```bash
# MuseTalk 1.5 (Recommended)
sh inference.sh v1.5 normal
# MuseTalk 1.0
sh inference.sh v1.0 normal
```
##### Windows Environment
Please ensure that you set the `ffmpeg_path` to match the actual location of your FFmpeg installation.
```bash
# MuseTalk 1.5 (Recommended)
python -m scripts.inference --inference_config configs\inference\test.yaml --result_dir results\test --unet_model_path models\musetalkV15\unet.pth --unet_config models\musetalkV15\musetalk.json --version v15 --ffmpeg_path ffmpeg-master-latest-win64-gpl-shared\bin
# For MuseTalk 1.0, change:
# - models\musetalkV15 -> models\musetalk
# - unet.pth -> pytorch_model.bin
# - --version v15 -> --version v1
```
#### Real-time Inference
##### Linux Environment
```bash
# MuseTalk 1.5 (Recommended)
sh inference.sh v1.5 realtime
# MuseTalk 1.0
sh inference.sh v1.0 realtime
```
##### Windows Environment
```bash
# MuseTalk 1.5 (Recommended)
python -m scripts.realtime_inference --inference_config configs\inference\realtime.yaml --result_dir results\realtime --unet_model_path models\musetalkV15\unet.pth --unet_config models\musetalkV15\musetalk.json --version v15 --fps 25 --ffmpeg_path ffmpeg-master-latest-win64-gpl-shared\bin
# For MuseTalk 1.0, change:
# - models\musetalkV15 -> models\musetalk
# - unet.pth -> pytorch_model.bin
# - --version v15 -> --version v1
```
The configuration file `configs/inference/test.yaml` contains the inference settings, including:
- `video_path`: Path to the input video, image file, or directory of images
- `audio_path`: Path to the input audio file
Note: For optimal results, we recommend using input videos with 25fps, which is the same fps used during model training. If your video has a lower frame rate, you can use frame interpolation or convert it to 25fps using ffmpeg.
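For reference, a minimal ffmpeg conversion to 25fps might look like the following (input and output paths are placeholders; adjust codec settings to your needs):
```bash
# Re-encode the input video at 25fps before running inference
ffmpeg -i data/video/input.mp4 -r 25 -c:v libx264 -crf 18 -c:a copy data/video/input_25fps.mp4
```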
Important notes for real-time inference:
1. Set `preparation` to `True` when processing a new avatar
2. After preparation, the avatar will generate videos using audio clips from `audio_clips`
3. The generation process can achieve 30fps+ on an NVIDIA Tesla V100
4. Set `preparation` to `False` for generating more videos with the same avatar
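Putting notes 1 and 4 together, a typical two-pass workflow might look like this (a sketch assuming the default `configs/inference/realtime.yaml`; the `preparation` flag is toggled with `sed` here, but you can also edit the file by hand):
```bash
# First pass: build the avatar cache for a new avatar
sed -i 's/preparation: .*/preparation: True/' configs/inference/realtime.yaml
sh inference.sh v1.5 realtime

# Later passes: reuse the cached avatar for additional audio clips
sed -i 's/preparation: .*/preparation: False/' configs/inference/realtime.yaml
sh inference.sh v1.5 realtime
```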
For faster generation without saving images, you can use:
```bash
python -m scripts.realtime_inference --inference_config configs/inference/realtime.yaml --skip_save_images
```
## Gradio Demo
We provide an intuitive web interface through Gradio for users to easily adjust input parameters. To optimize inference time, users can generate only the **first frame** to fine-tune the best lip-sync parameters, which helps reduce facial artifacts in the final output.
![para](assets/figs/gradio_2.png)
For minimum hardware requirements, we tested the system on a Windows environment using an NVIDIA GeForce RTX 3050 Ti Laptop GPU with 4GB VRAM. In fp16 mode, generating an 8-second video takes approximately 5 minutes. ![speed](assets/figs/gradio.png)
Both Linux and Windows users can launch the demo using the following command. Please ensure that the `ffmpeg_path` parameter matches your actual FFmpeg installation path:
```bash
# You can remove --use_float16 for better quality, but it will increase VRAM usage and inference time
python app.py --use_float16 --ffmpeg_path ffmpeg-master-latest-win64-gpl-shared\bin
```
## Training
### Data Preparation
To train MuseTalk, you need to prepare your dataset following these steps:
1. **Place your source videos**
For example, if you're using the HDTF dataset, place all your video files in `./dataset/HDTF/source`.
2. **Run the preprocessing script**
```bash
python -m scripts.preprocess --config ./configs/training/preprocess.yaml
```
This script will:
- Extract frames from videos
- Detect and align faces
- Generate audio features
- Create the necessary data structure for training
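Putting the two steps together, a minimal end-to-end preparation run might look like this (the source video path is a placeholder; the target directory follows `configs/training/preprocess.yaml`):
```bash
# 1. Place raw videos under the expected source directory
mkdir -p ./dataset/HDTF/source
cp /path/to/your/HDTF/*.mp4 ./dataset/HDTF/source/

# 2. Run preprocessing (frame extraction, face alignment, audio features)
python -m scripts.preprocess --config ./configs/training/preprocess.yaml
```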
### Training Process
After data preprocessing, you can start the training process:
1. **First Stage**
```bash
sh train.sh stage1
```
2. **Second Stage**
```bash
sh train.sh stage2
```
### Configuration Adjustment
Before starting the training, you should adjust the configuration files according to your hardware and requirements:
1. **GPU Configuration** (`configs/training/gpu.yaml`):
- `gpu_ids`: Specify the GPU IDs you want to use (e.g., "0,1,2,3")
- `num_processes`: Set this to match the number of GPUs you're using
2. **Stage 1 Configuration** (`configs/training/stage1.yaml`):
- `data.train_bs`: Adjust batch size based on your GPU memory (default: 32)
- `data.n_sample_frames`: Number of sampled frames per video (default: 1)
3. **Stage 2 Configuration** (`configs/training/stage2.yaml`):
- `random_init_unet`: Must be set to `False` to use the model from stage 1
- `data.train_bs`: Smaller batch size due to high GPU memory cost (default: 2)
- `data.n_sample_frames`: Higher value for temporal consistency (default: 16)
- `solver.gradient_accumulation_steps`: Increase to simulate larger batch sizes (default: 8)
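For example, a two-GPU run of stage 1 might only need the GPU settings updated before launching (a sketch; you can also edit `configs/training/gpu.yaml` by hand):
```bash
# Point accelerate at GPUs 0 and 1, then launch stage 1
sed -i 's/^gpu_ids:.*/gpu_ids: "0,1"/' configs/training/gpu.yaml
sed -i 's/^num_processes:.*/num_processes: 2/' configs/training/gpu.yaml
sh train.sh stage1
```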
### GPU Memory Requirements
Based on our testing on a machine with 8 NVIDIA H20 GPUs:
#### Stage 1 Memory Usage
| Batch Size | Gradient Accumulation | Memory per GPU | Recommendation |
|:----------:|:----------------------:|:--------------:|:--------------:|
| 8 | 1 | ~32GB | |
| 16 | 1 | ~45GB | |
| 32 | 1 | ~74GB | ✓ |
#### Stage 2 Memory Usage
| Batch Size | Gradient Accumulation | Memory per GPU | Recommendation |
|:----------:|:----------------------:|:--------------:|:--------------:|
| 1 | 8 | ~54GB | |
| 2 | 2 | ~80GB | |
| 2 | 8 | ~85GB | ✓ |
<details close>
## TestCases For 1.0
<table class="center">
<tr style="font-weight: bolder;text-align:center;">
<td width="33%">Image</td>
<td width="33%">MuseV</td>
<td width="33%">+MuseTalk</td>
</tr>
<tr>
<td>
<img src=assets/demo/musk/musk.png width="95%">
</td>
<td >
<video src=https://github.com/TMElyralab/MuseTalk/assets/163980830/4a4bb2d1-9d14-4ca9-85c8-7f19c39f712e controls preload></video>
</td>
<td >
<video src=https://github.com/TMElyralab/MuseTalk/assets/163980830/b2a879c2-e23a-4d39-911d-51f0343218e4 controls preload></video>
</td>
</tr>
<tr>
<td>
<img src=assets/demo/yongen/yongen.jpeg width="95%">
</td>
<td >
<video src=https://github.com/TMElyralab/MuseTalk/assets/163980830/57ef9dee-a9fd-4dc8-839b-3fbbbf0ff3f4 controls preload></video>
</td>
<td >
<video src=https://github.com/TMElyralab/MuseTalk/assets/163980830/94d8dcba-1bcd-4b54-9d1d-8b6fc53228f0 controls preload></video>
</td>
</tr>
<tr>
<td>
<img src=assets/demo/sit/sit.jpeg width="95%">
</td>
<td >
<video src=https://github.com/TMElyralab/MuseTalk/assets/163980830/5fbab81b-d3f2-4c75-abb5-14c76e51769e controls preload></video>
</td>
<td >
<video src=https://github.com/TMElyralab/MuseTalk/assets/163980830/f8100f4a-3df8-4151-8de2-291b09269f66 controls preload></video>
</td>
</tr>
<tr>
<td>
<img src=assets/demo/man/man.png width="95%">
</td>
<td >
<video src=https://github.com/TMElyralab/MuseTalk/assets/163980830/a6e7d431-5643-4745-9868-8b423a454153 controls preload></video>
</td>
<td >
<video src=https://github.com/TMElyralab/MuseTalk/assets/163980830/6ccf7bc7-cb48-42de-85bd-076d5ee8a623 controls preload></video>
</td>
</tr>
<tr>
<td>
<img src=assets/demo/monalisa/monalisa.png width="95%">
</td>
<td >
<video src=https://github.com/TMElyralab/MuseTalk/assets/163980830/1568f604-a34f-4526-a13a-7d282aa2e773 controls preload></video>
</td>
<td >
<video src=https://github.com/TMElyralab/MuseTalk/assets/163980830/a40784fc-a885-4c1f-9b7e-8f87b7caf4e0 controls preload></video>
</td>
</tr>
<tr>
<td>
<img src=assets/demo/sun1/sun.png width="95%">
</td>
<td >
<video src=https://github.com/TMElyralab/MuseTalk/assets/163980830/37a3a666-7b90-4244-8d3a-058cb0e44107 controls preload></video>
</td>
<td >
<video src=https://github.com/TMElyralab/MuseTalk/assets/163980830/172f4ff1-d432-45bd-a5a7-a07dec33a26b controls preload></video>
</td>
</tr>
<tr>
<td>
<img src=assets/demo/sun2/sun.png width="95%">
</td>
<td >
<video src=https://github.com/TMElyralab/MuseTalk/assets/163980830/37a3a666-7b90-4244-8d3a-058cb0e44107 controls preload></video>
</td>
<td >
<video src=https://github.com/TMElyralab/MuseTalk/assets/163980830/85a6873d-a028-4cce-af2b-6c59a1f2971d controls preload></video>
</td>
</tr>
</table >
#### Use of bbox_shift to have adjustable results (For 1.0)
:mag_right: We have found that the upper bound of the mask has an important impact on mouth openness. Thus, to control the mask region, we suggest using the `bbox_shift` parameter. Positive values (moving towards the lower half) increase mouth openness, while negative values (moving towards the upper half) decrease it.
You can start by running with the default configuration to obtain the adjustable value range, and then re-run the script within this range.
For example, in the case of `Xinying Sun`, after running the default configuration, it shows that the adjustable value range is [-9, 9]. Then, to decrease the mouth openness, we set the value to `-7`.
```
python -m scripts.inference --inference_config configs/inference/test.yaml --bbox_shift -7
```
:pushpin: More technical details can be found in [bbox_shift](assets/BBOX_SHIFT.md).
#### Combining MuseV and MuseTalk
As a complete solution to virtual human generation, we suggest first applying [MuseV](https://github.com/TMElyralab/MuseV) to generate a video (text-to-video, image-to-video, or pose-to-video) by referring to [this](https://github.com/TMElyralab/MuseV?tab=readme-ov-file#text2video). Frame interpolation is suggested to increase the frame rate. Then, you can use `MuseTalk` to generate a lip-sync video by referring to [this](https://github.com/TMElyralab/MuseTalk?tab=readme-ov-file#inference).
# Acknowledgement
1. We thank open-source components like [whisper](https://github.com/openai/whisper), [dwpose](https://github.com/IDEA-Research/DWPose), [face-alignment](https://github.com/1adrianb/face-alignment), [face-parsing](https://github.com/zllrunning/face-parsing.PyTorch), [S3FD](https://github.com/yxlijun/S3FD.pytorch) and [LatentSync](https://huggingface.co/ByteDance/LatentSync/tree/main).
1. MuseTalk has referred much to [diffusers](https://github.com/huggingface/diffusers) and [isaacOnline/whisper](https://github.com/isaacOnline/whisper/tree/extract-embeddings).
1. MuseTalk has been built on [HDTF](https://github.com/MRzzm/HDTF) datasets.
Thanks for open-sourcing!
# Limitations
- Resolution: Though MuseTalk uses a face region size of 256 x 256, which makes it better than other open-source methods, it has not yet reached the theoretical resolution bound. We will continue to work on this problem.
If you need higher resolution, you could apply super-resolution models such as [GFPGAN](https://github.com/TencentARC/GFPGAN) in combination with MuseTalk.
- Identity preservation: Some details of the original face are not well preserved, such as mustache, lip shape and color.
- Jitter: There exists some jitter as the current pipeline adopts single-frame generation.
# Citation
```bib
@article{musetalk,
title={MuseTalk: Real-Time High-Fidelity Video Dubbing via Spatio-Temporal Sampling},
author={Zhang, Yue and Zhong, Zhizhou and Liu, Minhao and Chen, Zhaokang and Wu, Bin and Zeng, Yubin and Zhan, Chao and He, Yingjie and Huang, Junxin and Zhou, Wenjiang},
journal={arxiv},
year={2025}
}
```
# Disclaimer/License
1. `code`: The code of MuseTalk is released under the MIT License. There is no limitation for both academic and commercial usage.
1. `model`: The trained models are available for any purpose, even commercially.
1. `other opensource model`: Other open-source models used must comply with their licenses, such as `whisper`, `ft-mse-vae`, `dwpose`, `S3FD`, etc.
1. The test data are collected from the internet and are available for non-commercial research purposes only.
1. `AIGC`: This project strives to impact the domain of AI-driven video generation positively. Users are granted the freedom to create videos using this tool, but they are expected to comply with local laws and utilize it responsibly. The developers do not assume any responsibility for potential misuse by users.

570
models/MuseTalk/app.py Normal file
View File

@@ -0,0 +1,570 @@
import os
import time
import pdb
import re
import gradio as gr
import numpy as np
import sys
import subprocess
from huggingface_hub import snapshot_download
import requests
import argparse
import os
from omegaconf import OmegaConf
import numpy as np
import cv2
import torch
import glob
import pickle
from tqdm import tqdm
import copy
from argparse import Namespace
import shutil
import gdown
import imageio
import ffmpeg
from moviepy.editor import *
from transformers import WhisperModel
ProjectDir = os.path.abspath(os.path.dirname(__file__))
CheckpointsDir = os.path.join(ProjectDir, "models")
@torch.no_grad()
def debug_inpainting(video_path, bbox_shift, extra_margin=10, parsing_mode="jaw",
left_cheek_width=90, right_cheek_width=90):
"""Debug inpainting parameters, only process the first frame"""
# Set default parameters
args_dict = {
"result_dir": './results/debug',
"fps": 25,
"batch_size": 1,
"output_vid_name": '',
"use_saved_coord": False,
"audio_padding_length_left": 2,
"audio_padding_length_right": 2,
"version": "v15",
"extra_margin": extra_margin,
"parsing_mode": parsing_mode,
"left_cheek_width": left_cheek_width,
"right_cheek_width": right_cheek_width
}
args = Namespace(**args_dict)
# Create debug directory
os.makedirs(args.result_dir, exist_ok=True)
# Read first frame
if get_file_type(video_path) == "video":
reader = imageio.get_reader(video_path)
first_frame = reader.get_data(0)
reader.close()
else:
first_frame = cv2.imread(video_path)
first_frame = cv2.cvtColor(first_frame, cv2.COLOR_BGR2RGB)
# Save first frame
debug_frame_path = os.path.join(args.result_dir, "debug_frame.png")
cv2.imwrite(debug_frame_path, cv2.cvtColor(first_frame, cv2.COLOR_RGB2BGR))
# Get face coordinates
coord_list, frame_list = get_landmark_and_bbox([debug_frame_path], bbox_shift)
bbox = coord_list[0]
frame = frame_list[0]
if bbox == coord_placeholder:
return None, "No face detected, please adjust bbox_shift parameter"
# Initialize face parser
fp = FaceParsing(
left_cheek_width=args.left_cheek_width,
right_cheek_width=args.right_cheek_width
)
# Process first frame
x1, y1, x2, y2 = bbox
y2 = y2 + args.extra_margin
y2 = min(y2, frame.shape[0])
crop_frame = frame[y1:y2, x1:x2]
crop_frame = cv2.resize(crop_frame,(256,256),interpolation = cv2.INTER_LANCZOS4)
# Generate random audio features
random_audio = torch.randn(1, 50, 384, device=device, dtype=weight_dtype)
audio_feature = pe(random_audio)
# Get latents
latents = vae.get_latents_for_unet(crop_frame)
latents = latents.to(dtype=weight_dtype)
# Generate prediction results
pred_latents = unet.model(latents, timesteps, encoder_hidden_states=audio_feature).sample
recon = vae.decode_latents(pred_latents)
# Inpaint back to original image
res_frame = recon[0]
res_frame = cv2.resize(res_frame.astype(np.uint8),(x2-x1,y2-y1))
combine_frame = get_image(frame, res_frame, [x1, y1, x2, y2], mode=args.parsing_mode, fp=fp)
# Save results (no need to convert color space again since get_image already returns RGB format)
debug_result_path = os.path.join(args.result_dir, "debug_result.png")
cv2.imwrite(debug_result_path, combine_frame)
# Create information text
info_text = f"Parameter information:\n" + \
f"bbox_shift: {bbox_shift}\n" + \
f"extra_margin: {extra_margin}\n" + \
f"parsing_mode: {parsing_mode}\n" + \
f"left_cheek_width: {left_cheek_width}\n" + \
f"right_cheek_width: {right_cheek_width}\n" + \
f"Detected face coordinates: [{x1}, {y1}, {x2}, {y2}]"
return cv2.cvtColor(combine_frame, cv2.COLOR_RGB2BGR), info_text
def print_directory_contents(path):
for child in os.listdir(path):
child_path = os.path.join(path, child)
if os.path.isdir(child_path):
print(child_path)
def download_model():
# Check whether the required model files exist
required_models = {
"MuseTalk UNet": f"{CheckpointsDir}/musetalkV15/unet.pth",
"MuseTalk config": f"{CheckpointsDir}/musetalkV15/musetalk.json",
"SD VAE": f"{CheckpointsDir}/sd-vae/config.json",
"Whisper": f"{CheckpointsDir}/whisper/config.json",
"DWPose": f"{CheckpointsDir}/dwpose/dw-ll_ucoco_384.pth",
"SyncNet": f"{CheckpointsDir}/syncnet/latentsync_syncnet.pt",
"Face Parse": f"{CheckpointsDir}/face-parse-bisent/79999_iter.pth",
"ResNet": f"{CheckpointsDir}/face-parse-bisent/resnet18-5c106cde.pth"
}
missing_models = []
for model_name, model_path in required_models.items():
if not os.path.exists(model_path):
missing_models.append(model_name)
if missing_models:
# Use English for all messages
print("The following required model files are missing:")
for model in missing_models:
print(f"- {model}")
print("\nPlease run the download script to download the missing models:")
if sys.platform == "win32":
print("Windows: Run download_weights.bat")
else:
print("Linux/Mac: Run ./download_weights.sh")
sys.exit(1)
else:
print("All required model files exist.")
download_model() # for huggingface deployment.
from musetalk.utils.blending import get_image
from musetalk.utils.face_parsing import FaceParsing
from musetalk.utils.audio_processor import AudioProcessor
from musetalk.utils.utils import get_file_type, get_video_fps, datagen, load_all_model
from musetalk.utils.preprocessing import get_landmark_and_bbox, read_imgs, coord_placeholder, get_bbox_range
def fast_check_ffmpeg():
try:
subprocess.run(["ffmpeg", "-version"], capture_output=True, check=True)
return True
except:
return False
@torch.no_grad()
def inference(audio_path, video_path, bbox_shift, extra_margin=10, parsing_mode="jaw",
left_cheek_width=90, right_cheek_width=90, progress=gr.Progress(track_tqdm=True)):
# Set default parameters, aligned with inference.py
args_dict = {
"result_dir": './results/output',
"fps": 25,
"batch_size": 8,
"output_vid_name": '',
"use_saved_coord": False,
"audio_padding_length_left": 2,
"audio_padding_length_right": 2,
"version": "v15", # Fixed use v15 version
"extra_margin": extra_margin,
"parsing_mode": parsing_mode,
"left_cheek_width": left_cheek_width,
"right_cheek_width": right_cheek_width
}
args = Namespace(**args_dict)
# Check ffmpeg
if not fast_check_ffmpeg():
print("Warning: Unable to find ffmpeg, please ensure ffmpeg is properly installed")
input_basename = os.path.basename(video_path).split('.')[0]
audio_basename = os.path.basename(audio_path).split('.')[0]
output_basename = f"{input_basename}_{audio_basename}"
# Create temporary directory
temp_dir = os.path.join(args.result_dir, f"{args.version}")
os.makedirs(temp_dir, exist_ok=True)
# Set result save path
result_img_save_path = os.path.join(temp_dir, output_basename)
crop_coord_save_path = os.path.join(args.result_dir, "../", input_basename+".pkl")
os.makedirs(result_img_save_path, exist_ok=True)
if args.output_vid_name == "":
output_vid_name = os.path.join(temp_dir, output_basename+".mp4")
else:
output_vid_name = os.path.join(temp_dir, args.output_vid_name)
############################################## extract frames from source video ##############################################
if get_file_type(video_path) == "video":
save_dir_full = os.path.join(temp_dir, input_basename)
os.makedirs(save_dir_full, exist_ok=True)
# Read video
reader = imageio.get_reader(video_path)
# Save images
for i, im in enumerate(reader):
imageio.imwrite(f"{save_dir_full}/{i:08d}.png", im)
input_img_list = sorted(glob.glob(os.path.join(save_dir_full, '*.[jpJP][pnPN]*[gG]')))
fps = get_video_fps(video_path)
else: # input img folder
input_img_list = glob.glob(os.path.join(video_path, '*.[jpJP][pnPN]*[gG]'))
input_img_list = sorted(input_img_list, key=lambda x: int(os.path.splitext(os.path.basename(x))[0]))
fps = args.fps
############################################## extract audio feature ##############################################
# Extract audio features
whisper_input_features, librosa_length = audio_processor.get_audio_feature(audio_path)
whisper_chunks = audio_processor.get_whisper_chunk(
whisper_input_features,
device,
weight_dtype,
whisper,
librosa_length,
fps=fps,
audio_padding_length_left=args.audio_padding_length_left,
audio_padding_length_right=args.audio_padding_length_right,
)
############################################## preprocess input image ##############################################
if os.path.exists(crop_coord_save_path) and args.use_saved_coord:
print("using extracted coordinates")
with open(crop_coord_save_path,'rb') as f:
coord_list = pickle.load(f)
frame_list = read_imgs(input_img_list)
else:
print("extracting landmarks...time consuming")
coord_list, frame_list = get_landmark_and_bbox(input_img_list, bbox_shift)
with open(crop_coord_save_path, 'wb') as f:
pickle.dump(coord_list, f)
bbox_shift_text = get_bbox_range(input_img_list, bbox_shift)
# Initialize face parser
fp = FaceParsing(
left_cheek_width=args.left_cheek_width,
right_cheek_width=args.right_cheek_width
)
i = 0
input_latent_list = []
for bbox, frame in zip(coord_list, frame_list):
if bbox == coord_placeholder:
continue
x1, y1, x2, y2 = bbox
y2 = y2 + args.extra_margin
y2 = min(y2, frame.shape[0])
crop_frame = frame[y1:y2, x1:x2]
crop_frame = cv2.resize(crop_frame,(256,256),interpolation = cv2.INTER_LANCZOS4)
latents = vae.get_latents_for_unet(crop_frame)
input_latent_list.append(latents)
# to smooth the first and the last frame
frame_list_cycle = frame_list + frame_list[::-1]
coord_list_cycle = coord_list + coord_list[::-1]
input_latent_list_cycle = input_latent_list + input_latent_list[::-1]
############################################## inference batch by batch ##############################################
print("start inference")
video_num = len(whisper_chunks)
batch_size = args.batch_size
gen = datagen(
whisper_chunks=whisper_chunks,
vae_encode_latents=input_latent_list_cycle,
batch_size=batch_size,
delay_frame=0,
device=device,
)
res_frame_list = []
for i, (whisper_batch,latent_batch) in enumerate(tqdm(gen,total=int(np.ceil(float(video_num)/batch_size)))):
audio_feature_batch = pe(whisper_batch)
# Ensure latent_batch is consistent with model weight type
latent_batch = latent_batch.to(dtype=weight_dtype)
pred_latents = unet.model(latent_batch, timesteps, encoder_hidden_states=audio_feature_batch).sample
recon = vae.decode_latents(pred_latents)
for res_frame in recon:
res_frame_list.append(res_frame)
############################################## pad to full image ##############################################
print("pad talking image to original video")
for i, res_frame in enumerate(tqdm(res_frame_list)):
bbox = coord_list_cycle[i%(len(coord_list_cycle))]
ori_frame = copy.deepcopy(frame_list_cycle[i%(len(frame_list_cycle))])
x1, y1, x2, y2 = bbox
y2 = y2 + args.extra_margin
y2 = min(y2, frame.shape[0])
try:
res_frame = cv2.resize(res_frame.astype(np.uint8),(x2-x1,y2-y1))
except:
continue
# Use v15 version blending
combine_frame = get_image(ori_frame, res_frame, [x1, y1, x2, y2], mode=args.parsing_mode, fp=fp)
cv2.imwrite(f"{result_img_save_path}/{str(i).zfill(8)}.png",combine_frame)
# Frame rate
fps = 25
# Output video path
output_video = 'temp.mp4'
# Read images
def is_valid_image(file):
pattern = re.compile(r'\d{8}\.png')
return pattern.match(file)
images = []
files = [file for file in os.listdir(result_img_save_path) if is_valid_image(file)]
files.sort(key=lambda x: int(x.split('.')[0]))
for file in files:
filename = os.path.join(result_img_save_path, file)
images.append(imageio.imread(filename))
# Save video
imageio.mimwrite(output_video, images, 'FFMPEG', fps=fps, codec='libx264', pixelformat='yuv420p')
input_video = './temp.mp4'
# Check if the input_video and audio_path exist
if not os.path.exists(input_video):
raise FileNotFoundError(f"Input video file not found: {input_video}")
if not os.path.exists(audio_path):
raise FileNotFoundError(f"Audio file not found: {audio_path}")
# Read video
reader = imageio.get_reader(input_video)
fps = reader.get_meta_data()['fps'] # Get original video frame rate
reader.close() # Otherwise, error on win11: PermissionError: [WinError 32] Another program is using this file, process cannot access. : 'temp.mp4'
# Store frames in list
frames = images
print(len(frames))
# Load the video
video_clip = VideoFileClip(input_video)
# Load the audio
audio_clip = AudioFileClip(audio_path)
# Set the audio to the video
video_clip = video_clip.set_audio(audio_clip)
# Write the output video
video_clip.write_videofile(output_vid_name, codec='libx264', audio_codec='aac',fps=25)
os.remove("temp.mp4")
#shutil.rmtree(result_img_save_path)
print(f"result is save to {output_vid_name}")
return output_vid_name,bbox_shift_text
# load model weights
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
vae, unet, pe = load_all_model(
unet_model_path="./models/musetalkV15/unet.pth",
vae_type="sd-vae",
unet_config="./models/musetalkV15/musetalk.json",
device=device
)
# Parse command line arguments
parser = argparse.ArgumentParser()
parser.add_argument("--ffmpeg_path", type=str, default=r"ffmpeg-master-latest-win64-gpl-shared\bin", help="Path to ffmpeg executable")
parser.add_argument("--ip", type=str, default="127.0.0.1", help="IP address to bind to")
parser.add_argument("--port", type=int, default=7860, help="Port to bind to")
parser.add_argument("--share", action="store_true", help="Create a public link")
parser.add_argument("--use_float16", action="store_true", help="Use float16 for faster inference")
args = parser.parse_args()
# Set data type
if args.use_float16:
# Convert models to half precision for better performance
pe = pe.half()
vae.vae = vae.vae.half()
unet.model = unet.model.half()
weight_dtype = torch.float16
else:
weight_dtype = torch.float32
# Move models to specified device
pe = pe.to(device)
vae.vae = vae.vae.to(device)
unet.model = unet.model.to(device)
timesteps = torch.tensor([0], device=device)
# Initialize audio processor and Whisper model
audio_processor = AudioProcessor(feature_extractor_path="./models/whisper")
whisper = WhisperModel.from_pretrained("./models/whisper")
whisper = whisper.to(device=device, dtype=weight_dtype).eval()
whisper.requires_grad_(False)
def check_video(video):
if not isinstance(video, str):
return video # in case of none type
# Define the output video file name
dir_path, file_name = os.path.split(video)
if file_name.startswith("outputxxx_"):
return video
# Add the output prefix to the file name
output_file_name = "outputxxx_" + file_name
os.makedirs('./results',exist_ok=True)
os.makedirs('./results/output',exist_ok=True)
os.makedirs('./results/input',exist_ok=True)
# Combine the directory path and the new file name
output_video = os.path.join('./results/input', output_file_name)
# read video
reader = imageio.get_reader(video)
fps = reader.get_meta_data()['fps'] # get fps from original video
# convert fps to 25
frames = [im for im in reader]
target_fps = 25
L = len(frames)
L_target = int(L / fps * target_fps)
original_t = [x / fps for x in range(1, L+1)]
t_idx = 0
target_frames = []
for target_t in range(1, L_target+1):
while target_t / target_fps > original_t[t_idx]:
t_idx += 1 # find the first t_idx so that target_t / target_fps <= original_t[t_idx]
if t_idx >= L:
break
target_frames.append(frames[t_idx])
# save video
imageio.mimwrite(output_video, target_frames, 'FFMPEG', fps=25, codec='libx264', quality=9, pixelformat='yuv420p')
return output_video
css = """#input_img {max-width: 1024px !important} #output_vid {max-width: 1024px; max-height: 576px}"""
with gr.Blocks(css=css) as demo:
gr.Markdown(
"""<div align='center'> <h1>MuseTalk: Real-Time High-Fidelity Video Dubbing via Spatio-Temporal Sampling</h1> \
<h2 style='font-weight: 450; font-size: 1rem; margin: 0rem'>\
</br>\
Yue Zhang <sup>*</sup>,\
Zhizhou Zhong <sup>*</sup>,\
Minhao Liu<sup>*</sup>,\
Zhaokang Chen,\
Bin Wu<sup>†</sup>,\
Yubin Zeng,\
Chao Zhang,\
Yingjie He,\
Junxin Huang,\
Wenjiang Zhou <br>\
(<sup>*</sup>Equal Contribution, <sup>†</sup>Corresponding Author, benbinwu@tencent.com)\
Lyra Lab, Tencent Music Entertainment\
</h2> \
<a style='font-size:18px;color: #000000' href='https://github.com/TMElyralab/MuseTalk'>[Github Repo]</a>\
<a style='font-size:18px;color: #000000' href='https://github.com/TMElyralab/MuseTalk'>[Huggingface]</a>\
<a style='font-size:18px;color: #000000' href='https://arxiv.org/abs/2410.10122'> [Technical report] </a>"""
)
with gr.Row():
with gr.Column():
audio = gr.Audio(label="Drving Audio",type="filepath")
video = gr.Video(label="Reference Video",sources=['upload'])
bbox_shift = gr.Number(label="BBox_shift value, px", value=0)
extra_margin = gr.Slider(label="Extra Margin", minimum=0, maximum=40, value=10, step=1)
parsing_mode = gr.Radio(label="Parsing Mode", choices=["jaw", "raw"], value="jaw")
left_cheek_width = gr.Slider(label="Left Cheek Width", minimum=20, maximum=160, value=90, step=5)
right_cheek_width = gr.Slider(label="Right Cheek Width", minimum=20, maximum=160, value=90, step=5)
bbox_shift_scale = gr.Textbox(label="'left_cheek_width' and 'right_cheek_width' parameters determine the range of left and right cheeks editing when parsing model is 'jaw'. The 'extra_margin' parameter determines the movement range of the jaw. Users can freely adjust these three parameters to obtain better inpainting results.")
with gr.Row():
debug_btn = gr.Button("1. Test Inpainting ")
btn = gr.Button("2. Generate")
with gr.Column():
debug_image = gr.Image(label="Test Inpainting Result (First Frame)")
debug_info = gr.Textbox(label="Parameter Information", lines=5)
out1 = gr.Video()
video.change(
fn=check_video, inputs=[video], outputs=[video]
)
btn.click(
fn=inference,
inputs=[
audio,
video,
bbox_shift,
extra_margin,
parsing_mode,
left_cheek_width,
right_cheek_width
],
outputs=[out1,bbox_shift_scale]
)
debug_btn.click(
fn=debug_inpainting,
inputs=[
video,
bbox_shift,
extra_margin,
parsing_mode,
left_cheek_width,
right_cheek_width
],
outputs=[debug_image, debug_info]
)
# Check ffmpeg and add to PATH
if not fast_check_ffmpeg():
print(f"Adding ffmpeg to PATH: {args.ffmpeg_path}")
# According to operating system, choose path separator
path_separator = ';' if sys.platform == 'win32' else ':'
os.environ["PATH"] = f"{args.ffmpeg_path}{path_separator}{os.environ['PATH']}"
if not fast_check_ffmpeg():
print("Warning: Unable to find ffmpeg, please ensure ffmpeg is properly installed")
# Solve asynchronous IO issues on Windows
if sys.platform == 'win32':
import asyncio
asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
# Start Gradio application
demo.queue().launch(
share=args.share,
debug=True,
server_name=args.ip,
server_port=args.port
)

View File

@@ -0,0 +1,10 @@
avator_1:
preparation: True # you can set it to False to reuse an existing avatar, which saves time
bbox_shift: 5
video_path: "data/video/yongen.mp4"
audio_clips:
audio_0: "data/audio/yongen.wav"
audio_1: "data/audio/eng.wav"

View File

@@ -0,0 +1,10 @@
task_0:
video_path: "data/video/yongen.mp4"
audio_path: "data/audio/yongen.wav"
task_1:
video_path: "data/video/yongen.mp4"
audio_path: "data/audio/eng.wav"
bbox_shift: -7

View File

@@ -0,0 +1,21 @@
compute_environment: LOCAL_MACHINE
debug: True
deepspeed_config:
offload_optimizer_device: none
offload_param_device: none
zero3_init_flag: False
zero_stage: 2
distributed_type: DEEPSPEED
downcast_bf16: 'no'
gpu_ids: "5, 7" # modify this according to your GPU number
machine_rank: 0
main_training_function: main
num_machines: 1
num_processes: 2 # it should be the same as the number of GPUs
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false

View File

@@ -0,0 +1,31 @@
clip_len_second: 30 # the length of the video clip
video_root_raw: "./dataset/HDTF/source/" # the path of the original video
val_list_hdtf:
- RD_Radio7_000
- RD_Radio8_000
- RD_Radio9_000
- WDA_TinaSmith_000
- WDA_TomCarper_000
- WDA_TomPerez_000
- WDA_TomUdall_000
- WDA_VeronicaEscobar0_000
- WDA_VeronicaEscobar1_000
- WDA_WhipJimClyburn_000
- WDA_XavierBecerra_000
- WDA_XavierBecerra_001
- WDA_XavierBecerra_002
- WDA_ZoeLofgren_000
- WRA_SteveScalise1_000
- WRA_TimScott_000
- WRA_ToddYoung_000
- WRA_TomCotton_000
- WRA_TomPrice_000
- WRA_VickyHartzler_000
# following dir will be automatically generated
video_root_25fps: "./dataset/HDTF/video_root_25fps/"
video_file_list: "./dataset/HDTF/video_file_list.txt"
video_audio_clip_root: "./dataset/HDTF/video_audio_clip_root/"
meta_root: "./dataset/HDTF/meta/"
video_clip_file_list_train: "./dataset/HDTF/train.txt"
video_clip_file_list_val: "./dataset/HDTF/val.txt"

View File

@@ -0,0 +1,89 @@
exp_name: 'test' # Name of the experiment
output_dir: './exp_out/stage1/' # Directory to save experiment outputs
unet_sub_folder: musetalk # Subfolder name for UNet model
random_init_unet: True # Whether to randomly initialize UNet (stage1) or use pretrained weights (stage2)
whisper_path: "./models/whisper" # Path to the Whisper model
pretrained_model_name_or_path: "./models" # Path to pretrained models
resume_from_checkpoint: True # Whether to resume training from a checkpoint
padding_pixel_mouth: 10 # Number of pixels to pad around the mouth region
vae_type: "sd-vae" # Type of VAE model to use
# Validation parameters
num_images_to_keep: 8 # Number of validation images to keep
ref_dropout_rate: 0 # Dropout rate for reference images
syncnet_config_path: "./configs/training/syncnet.yaml" # Path to SyncNet configuration
use_adapted_weight: False # Whether to use adapted weights for loss calculation
cropping_jaw2edge_margin_mean: 10 # Mean margin for jaw-to-edge cropping
cropping_jaw2edge_margin_std: 10 # Standard deviation for jaw-to-edge cropping
crop_type: "crop_resize" # Type of cropping method
random_margin_method: "normal" # Method for random margin generation
num_backward_frames: 16 # Number of frames to use for backward pass in SyncNet
data:
dataset_key: "HDTF" # Dataset to use for training
train_bs: 32 # Training batch size (actual batch size is train_bs*n_sample_frames)
image_size: 256 # Size of input images
n_sample_frames: 1 # Number of frames to sample per batch
num_workers: 8 # Number of data loading workers
audio_padding_length_left: 2 # Left padding length for audio features
audio_padding_length_right: 2 # Right padding length for audio features
sample_method: pose_similarity_and_mouth_dissimilarity # Method for sampling frames
top_k_ratio: 0.51 # Ratio for top-k sampling
contorl_face_min_size: True # Whether to control minimum face size
min_face_size: 150 # Minimum face size in pixels
loss_params:
l1_loss: 1.0 # Weight for L1 loss
vgg_loss: 0.01 # Weight for VGG perceptual loss
vgg_layer_weight: [1, 1, 1, 1, 1] # Weights for different VGG layers
pyramid_scale: [1, 0.5, 0.25, 0.125] # Scales for image pyramid
gan_loss: 0 # Weight for GAN loss
fm_loss: [1.0, 1.0, 1.0, 1.0] # Weights for feature matching loss
sync_loss: 0 # Weight for sync loss
mouth_gan_loss: 0 # Weight for mouth-specific GAN loss
model_params:
discriminator_params:
scales: [1] # Scales for discriminator
block_expansion: 32 # Expansion factor for discriminator blocks
max_features: 512 # Maximum number of features in discriminator
num_blocks: 4 # Number of blocks in discriminator
sn: True # Whether to use spectral normalization
image_channel: 3 # Number of image channels
estimate_jacobian: False # Whether to estimate Jacobian
discriminator_train_params:
lr: 0.000005 # Learning rate for discriminator
eps: 0.00000001 # Epsilon for optimizer
weight_decay: 0.01 # Weight decay for optimizer
patch_size: 1 # Size of patches for discriminator
betas: [0.5, 0.999] # Beta parameters for Adam optimizer
epochs: 10000 # Number of training epochs
start_gan: 1000 # Step to start GAN training
solver:
gradient_accumulation_steps: 1 # Number of steps for gradient accumulation
uncond_steps: 10 # Number of unconditional steps
mixed_precision: 'fp32' # Precision mode for training
enable_xformers_memory_efficient_attention: True # Whether to use memory efficient attention
gradient_checkpointing: True # Whether to use gradient checkpointing
max_train_steps: 250000 # Maximum number of training steps
max_grad_norm: 1.0 # Maximum gradient norm for clipping
# Learning rate parameters
learning_rate: 2.0e-5 # Base learning rate
scale_lr: False # Whether to scale learning rate
lr_warmup_steps: 1000 # Number of warmup steps for learning rate
lr_scheduler: "linear" # Type of learning rate scheduler
# Optimizer parameters
use_8bit_adam: False # Whether to use 8-bit Adam optimizer
adam_beta1: 0.5 # Beta1 parameter for Adam optimizer
adam_beta2: 0.999 # Beta2 parameter for Adam optimizer
adam_weight_decay: 1.0e-2 # Weight decay for Adam optimizer
adam_epsilon: 1.0e-8 # Epsilon for Adam optimizer
total_limit: 10 # Maximum number of checkpoints to keep
save_model_epoch_interval: 250000 # Interval between model saves
checkpointing_steps: 10000 # Number of steps between checkpoints
val_freq: 2000 # Frequency of validation
seed: 41 # Random seed for reproducibility

View File

@@ -0,0 +1,89 @@
exp_name: 'test' # Name of the experiment
output_dir: './exp_out/stage2/' # Directory to save experiment outputs
unet_sub_folder: musetalk # Subfolder name for UNet model
random_init_unet: False # Whether to randomly initialize UNet (stage1) or use pretrained weights (stage2)
whisper_path: "./models/whisper" # Path to the Whisper model
pretrained_model_name_or_path: "./models" # Path to pretrained models
resume_from_checkpoint: True # Whether to resume training from a checkpoint
padding_pixel_mouth: 10 # Number of pixels to pad around the mouth region
vae_type: "sd-vae" # Type of VAE model to use
# Validation parameters
num_images_to_keep: 8 # Number of validation images to keep
ref_dropout_rate: 0 # Dropout rate for reference images
syncnet_config_path: "./configs/training/syncnet.yaml" # Path to SyncNet configuration
use_adapted_weight: False # Whether to use adapted weights for loss calculation
cropping_jaw2edge_margin_mean: 10 # Mean margin for jaw-to-edge cropping
cropping_jaw2edge_margin_std: 10 # Standard deviation for jaw-to-edge cropping
crop_type: "dynamic_margin_crop_resize" # Type of cropping method
random_margin_method: "normal" # Method for random margin generation
num_backward_frames: 16 # Number of frames to use for backward pass in SyncNet
data:
dataset_key: "HDTF" # Dataset to use for training
train_bs: 2 # Training batch size (actual batch size is train_bs*n_sample_frames)
image_size: 256 # Size of input images
n_sample_frames: 16 # Number of frames to sample per batch
num_workers: 8 # Number of data loading workers
audio_padding_length_left: 2 # Left padding length for audio features
audio_padding_length_right: 2 # Right padding length for audio features
sample_method: pose_similarity_and_mouth_dissimilarity # Method for sampling frames
top_k_ratio: 0.51 # Ratio for top-k sampling
contorl_face_min_size: True # Whether to control minimum face size
min_face_size: 200 # Minimum face size in pixels
loss_params:
l1_loss: 1.0 # Weight for L1 loss
vgg_loss: 0.01 # Weight for VGG perceptual loss
vgg_layer_weight: [1, 1, 1, 1, 1] # Weights for different VGG layers
pyramid_scale: [1, 0.5, 0.25, 0.125] # Scales for image pyramid
gan_loss: 0.01 # Weight for GAN loss
fm_loss: [1.0, 1.0, 1.0, 1.0] # Weights for feature matching loss
sync_loss: 0.05 # Weight for sync loss
mouth_gan_loss: 0.01 # Weight for mouth-specific GAN loss
model_params:
discriminator_params:
scales: [1] # Scales for discriminator
block_expansion: 32 # Expansion factor for discriminator blocks
max_features: 512 # Maximum number of features in discriminator
num_blocks: 4 # Number of blocks in discriminator
sn: True # Whether to use spectral normalization
image_channel: 3 # Number of image channels
estimate_jacobian: False # Whether to estimate Jacobian
discriminator_train_params:
lr: 0.000005 # Learning rate for discriminator
eps: 0.00000001 # Epsilon for optimizer
weight_decay: 0.01 # Weight decay for optimizer
patch_size: 1 # Size of patches for discriminator
betas: [0.5, 0.999] # Beta parameters for Adam optimizer
epochs: 10000 # Number of training epochs
start_gan: 1000 # Step to start GAN training
solver:
gradient_accumulation_steps: 8 # Number of steps for gradient accumulation
uncond_steps: 10 # Number of unconditional steps
mixed_precision: 'fp32' # Precision mode for training
enable_xformers_memory_efficient_attention: True # Whether to use memory efficient attention
gradient_checkpointing: True # Whether to use gradient checkpointing
max_train_steps: 250000 # Maximum number of training steps
max_grad_norm: 1.0 # Maximum gradient norm for clipping
# Learning rate parameters
learning_rate: 5.0e-6 # Base learning rate
scale_lr: False # Whether to scale learning rate
lr_warmup_steps: 1000 # Number of warmup steps for learning rate
lr_scheduler: "linear" # Type of learning rate scheduler
# Optimizer parameters
use_8bit_adam: False # Whether to use 8-bit Adam optimizer
adam_beta1: 0.5 # Beta1 parameter for Adam optimizer
adam_beta2: 0.999 # Beta2 parameter for Adam optimizer
adam_weight_decay: 1.0e-2 # Weight decay for Adam optimizer
adam_epsilon: 1.0e-8 # Epsilon for Adam optimizer
total_limit: 10 # Maximum number of checkpoints to keep
save_model_epoch_interval: 250000 # Interval between model saves
checkpointing_steps: 2000 # Number of steps between checkpoints
val_freq: 2000 # Frequency of validation
seed: 41 # Random seed for reproducibility

View File

@@ -0,0 +1,19 @@
# This file is modified from LatentSync (https://github.com/bytedance/LatentSync/blob/main/latentsync/configs/training/syncnet_16_pixel.yaml).
model:
audio_encoder: # input (1, 80, 52)
in_channels: 1
block_out_channels: [32, 64, 128, 256, 512, 1024, 2048]
downsample_factors: [[2, 1], 2, 2, 1, 2, 2, [2, 3]]
attn_blocks: [0, 0, 0, 0, 0, 0, 0]
dropout: 0.0
visual_encoder: # input (48, 128, 256)
in_channels: 48
block_out_channels: [64, 128, 256, 256, 512, 1024, 2048, 2048]
downsample_factors: [[1, 2], 2, 2, 2, 2, 2, 2, 2]
attn_blocks: [0, 0, 0, 0, 0, 0, 0, 0]
dropout: 0.0
ckpt:
resume_ckpt_path: ""
inference_ckpt_path: ./models/syncnet/latentsync_syncnet.pt # this pretrained model is from LatentSync (https://huggingface.co/ByteDance/LatentSync/tree/main)
save_ckpt_steps: 2500

View File

@@ -0,0 +1,41 @@
@echo off
setlocal
:: Set the checkpoints directory
set CheckpointsDir=models
:: Create necessary directories
mkdir %CheckpointsDir%\musetalk
mkdir %CheckpointsDir%\musetalkV15
mkdir %CheckpointsDir%\syncnet
mkdir %CheckpointsDir%\dwpose
mkdir %CheckpointsDir%\face-parse-bisent
mkdir %CheckpointsDir%\sd-vae-ft-mse
mkdir %CheckpointsDir%\whisper
:: Install required packages
pip install -U "huggingface_hub[hf_xet]"
:: Set HuggingFace endpoint
set HF_ENDPOINT=https://hf-mirror.com
:: Download MuseTalk weights
hf download TMElyralab/MuseTalk --local-dir %CheckpointsDir%
:: Download SD VAE weights
hf download stabilityai/sd-vae-ft-mse --local-dir %CheckpointsDir%\sd-vae --include "config.json" "diffusion_pytorch_model.bin"
:: Download Whisper weights
hf download openai/whisper-tiny --local-dir %CheckpointsDir%\whisper --include "config.json" "pytorch_model.bin" "preprocessor_config.json"
:: Download DWPose weights
hf download yzd-v/DWPose --local-dir %CheckpointsDir%\dwpose --include "dw-ll_ucoco_384.pth"
:: Download SyncNet weights
hf download ByteDance/LatentSync --local-dir %CheckpointsDir%\syncnet --include "latentsync_syncnet.pt"
:: Download face-parse-bisent weights
hf download ManyOtherFunctions/face-parse-bisent --local-dir %CheckpointsDir%\face-parse-bisent --include "79999_iter.pth" "resnet18-5c106cde.pth"
echo All weights have been downloaded successfully!
endlocal

View File

@@ -0,0 +1,51 @@
#!/bin/bash
# Set the checkpoints directory
CheckpointsDir="models"
# Create necessary directories
mkdir -p models/musetalk models/musetalkV15 models/syncnet models/dwpose models/face-parse-bisent models/sd-vae models/whisper
# Install required packages
pip install -U "huggingface_hub[cli]"
pip install gdown
# Set HuggingFace mirror endpoint
export HF_ENDPOINT=https://hf-mirror.com
# Download MuseTalk V1.0 weights
huggingface-cli download TMElyralab/MuseTalk \
--local-dir $CheckpointsDir \
--include "musetalk/musetalk.json" "musetalk/pytorch_model.bin"
# Download MuseTalk V1.5 weights (unet.pth)
huggingface-cli download TMElyralab/MuseTalk \
--local-dir $CheckpointsDir \
--include "musetalkV15/musetalk.json" "musetalkV15/unet.pth"
# Download SD VAE weights
huggingface-cli download stabilityai/sd-vae-ft-mse \
--local-dir $CheckpointsDir/sd-vae \
--include "config.json" "diffusion_pytorch_model.bin"
# Download Whisper weights
huggingface-cli download openai/whisper-tiny \
--local-dir $CheckpointsDir/whisper \
--include "config.json" "pytorch_model.bin" "preprocessor_config.json"
# Download DWPose weights
huggingface-cli download yzd-v/DWPose \
--local-dir $CheckpointsDir/dwpose \
--include "dw-ll_ucoco_384.pth"
# Download SyncNet weights
huggingface-cli download ByteDance/LatentSync \
--local-dir $CheckpointsDir/syncnet \
--include "latentsync_syncnet.pt"
# Download Face Parse Bisent weights
gdown --id 154JgKpzCPW82qINcVieuPH3fZ2e0P812 -O $CheckpointsDir/face-parse-bisent/79999_iter.pth
curl -L https://download.pytorch.org/models/resnet18-5c106cde.pth \
-o $CheckpointsDir/face-parse-bisent/resnet18-5c106cde.pth
echo "✅ All weights have been downloaded successfully!"

View File

@@ -0,0 +1,9 @@
#!/bin/bash
echo "entrypoint.sh"
whoami
which python
source /opt/conda/etc/profile.d/conda.sh
conda activate musev
which python
python app.py

View File

@@ -0,0 +1,72 @@
#!/bin/bash
# This script runs inference based on the version and mode specified by the user.
# Usage:
# To run v1.0 inference: sh inference.sh v1.0 [normal|realtime]
# To run v1.5 inference: sh inference.sh v1.5 [normal|realtime]
# Check if the correct number of arguments is provided
if [ "$#" -ne 2 ]; then
echo "Usage: $0 <version> <mode>"
echo "Example: $0 v1.0 normal or $0 v1.5 realtime"
exit 1
fi
# Get the version and mode from the user input
version=$1
mode=$2
# Validate mode
if [ "$mode" != "normal" ] && [ "$mode" != "realtime" ]; then
echo "Invalid mode specified. Please use 'normal' or 'realtime'."
exit 1
fi
# Set config path based on mode
if [ "$mode" = "normal" ]; then
config_path="./configs/inference/test.yaml"
result_dir="./results/test"
else
config_path="./configs/inference/realtime.yaml"
result_dir="./results/realtime"
fi
# Define the model paths based on the version
if [ "$version" = "v1.0" ]; then
model_dir="./models/musetalk"
unet_model_path="$model_dir/pytorch_model.bin"
unet_config="$model_dir/musetalk.json"
version_arg="v1"
elif [ "$version" = "v1.5" ]; then
model_dir="./models/musetalkV15"
unet_model_path="$model_dir/unet.pth"
unet_config="$model_dir/musetalk.json"
version_arg="v15"
else
echo "Invalid version specified. Please use v1.0 or v1.5."
exit 1
fi
# Set script name based on mode
if [ "$mode" = "normal" ]; then
script_name="scripts.inference"
else
script_name="scripts.realtime_inference"
fi
# Base command arguments
cmd_args="--inference_config $config_path \
--result_dir $result_dir \
--unet_model_path $unet_model_path \
--unet_config $unet_config \
--version $version_arg"
# Add realtime-specific arguments if in realtime mode
if [ "$mode" = "realtime" ]; then
cmd_args="$cmd_args \
--fps 25 \
--version $version_arg"
fi
# Run inference
python3 -m $script_name $cmd_args

View File

@@ -0,0 +1,168 @@
import librosa
import librosa.filters
import numpy as np
from scipy import signal
from scipy.io import wavfile
class HParams:
# copy from wav2lip
def __init__(self):
self.n_fft = 800
self.hop_size = 200
self.win_size = 800
self.sample_rate = 16000
self.frame_shift_ms = None
self.signal_normalization = True
self.allow_clipping_in_normalization = True
self.symmetric_mels = True
self.max_abs_value = 4.0
self.preemphasize = True
self.preemphasis = 0.97
self.min_level_db = -100
self.ref_level_db = 20
self.fmin = 55
self.fmax=7600
self.use_lws=False
self.num_mels=80 # Number of mel-spectrogram channels and local conditioning dimensionality
self.rescale=True # Whether to rescale audio prior to preprocessing
self.rescaling_max=0.9 # Rescaling value
self.use_lws=False
hp = HParams()
def load_wav(path, sr):
return librosa.core.load(path, sr=sr)[0]
#def load_wav(path, sr):
# audio, sr_native = sf.read(path)
# if sr != sr_native:
# audio = librosa.resample(audio.T, sr_native, sr).T
# return audio
def save_wav(wav, path, sr):
wav *= 32767 / max(0.01, np.max(np.abs(wav)))
#proposed by @dsmiller
wavfile.write(path, sr, wav.astype(np.int16))
def save_wavenet_wav(wav, path, sr):
librosa.output.write_wav(path, wav, sr=sr)
def preemphasis(wav, k, preemphasize=True):
if preemphasize:
return signal.lfilter([1, -k], [1], wav)
return wav
def inv_preemphasis(wav, k, inv_preemphasize=True):
if inv_preemphasize:
return signal.lfilter([1], [1, -k], wav)
return wav
def get_hop_size():
hop_size = hp.hop_size
if hop_size is None:
assert hp.frame_shift_ms is not None
hop_size = int(hp.frame_shift_ms / 1000 * hp.sample_rate)
return hop_size
def linearspectrogram(wav):
D = _stft(preemphasis(wav, hp.preemphasis, hp.preemphasize))
S = _amp_to_db(np.abs(D)) - hp.ref_level_db
if hp.signal_normalization:
return _normalize(S)
return S
def melspectrogram(wav):
D = _stft(preemphasis(wav, hp.preemphasis, hp.preemphasize))
S = _amp_to_db(_linear_to_mel(np.abs(D))) - hp.ref_level_db
if hp.signal_normalization:
return _normalize(S)
return S
def _lws_processor():
import lws
return lws.lws(hp.n_fft, get_hop_size(), fftsize=hp.win_size, mode="speech")
def _stft(y):
if hp.use_lws:
return _lws_processor().stft(y).T
else:
return librosa.stft(y=y, n_fft=hp.n_fft, hop_length=get_hop_size(), win_length=hp.win_size)
##########################################################
#Those are only correct when using lws!!! (This was messing with Wavenet quality for a long time!)
def num_frames(length, fsize, fshift):
"""Compute number of time frames of spectrogram
"""
pad = (fsize - fshift)
if length % fshift == 0:
M = (length + pad * 2 - fsize) // fshift + 1
else:
M = (length + pad * 2 - fsize) // fshift + 2
return M
def pad_lr(x, fsize, fshift):
"""Compute left and right padding
"""
M = num_frames(len(x), fsize, fshift)
pad = (fsize - fshift)
T = len(x) + 2 * pad
r = (M - 1) * fshift + fsize - T
return pad, pad + r
##########################################################
#Librosa correct padding
def librosa_pad_lr(x, fsize, fshift):
return 0, (x.shape[0] // fshift + 1) * fshift - x.shape[0]
# Conversions
_mel_basis = None
def _linear_to_mel(spectogram):
global _mel_basis
if _mel_basis is None:
_mel_basis = _build_mel_basis()
return np.dot(_mel_basis, spectogram)
def _build_mel_basis():
assert hp.fmax <= hp.sample_rate // 2
return librosa.filters.mel(sr=hp.sample_rate, n_fft=hp.n_fft, n_mels=hp.num_mels,
fmin=hp.fmin, fmax=hp.fmax)
def _amp_to_db(x):
min_level = np.exp(hp.min_level_db / 20 * np.log(10))
return 20 * np.log10(np.maximum(min_level, x))
def _db_to_amp(x):
return np.power(10.0, (x) * 0.05)
def _normalize(S):
if hp.allow_clipping_in_normalization:
if hp.symmetric_mels:
return np.clip((2 * hp.max_abs_value) * ((S - hp.min_level_db) / (-hp.min_level_db)) - hp.max_abs_value,
-hp.max_abs_value, hp.max_abs_value)
else:
return np.clip(hp.max_abs_value * ((S - hp.min_level_db) / (-hp.min_level_db)), 0, hp.max_abs_value)
assert S.max() <= 0 and S.min() - hp.min_level_db >= 0
if hp.symmetric_mels:
return (2 * hp.max_abs_value) * ((S - hp.min_level_db) / (-hp.min_level_db)) - hp.max_abs_value
else:
return hp.max_abs_value * ((S - hp.min_level_db) / (-hp.min_level_db))
def _denormalize(D):
if hp.allow_clipping_in_normalization:
if hp.symmetric_mels:
return (((np.clip(D, -hp.max_abs_value,
hp.max_abs_value) + hp.max_abs_value) * -hp.min_level_db / (2 * hp.max_abs_value))
+ hp.min_level_db)
else:
return ((np.clip(D, 0, hp.max_abs_value) * -hp.min_level_db / hp.max_abs_value) + hp.min_level_db)
if hp.symmetric_mels:
return (((D + hp.max_abs_value) * -hp.min_level_db / (2 * hp.max_abs_value)) + hp.min_level_db)
else:
return ((D * -hp.min_level_db / hp.max_abs_value) + hp.min_level_db)

View File

@@ -0,0 +1,610 @@
import os
import numpy as np
import random
from PIL import Image
import torch
from torch.utils.data import Dataset, ConcatDataset
import torchvision.transforms as transforms
from transformers import AutoFeatureExtractor
import librosa
import time
import json
import math
from decord import AudioReader, VideoReader
from decord.ndarray import cpu
from musetalk.data.sample_method import get_src_idx, shift_landmarks_to_face_coordinates, resize_landmark
from musetalk.data import audio
from musetalk.utils.audio_utils import ensure_wav
syncnet_mel_step_size = math.ceil(16 / 5 * 16) # latentsync
class FaceDataset(Dataset):
"""Dataset class for loading and processing video data
Each video can be represented as:
- Concatenated frame images
- '.mp4' or '.gif' files
- Folder containing all frames
"""
def __init__(self,
cfg,
list_paths,
root_path='./dataset/',
repeats=None):
# Initialize dataset paths
meta_paths = []
if repeats is None:
repeats = [1] * len(list_paths)
assert len(repeats) == len(list_paths)
# Load data list
for list_path, repeat_time in zip(list_paths, repeats):
with open(list_path, 'r') as f:
num = 0
f.readline() # Skip header line
for line in f.readlines():
line_info = line.strip()
meta = line_info.split()
meta = meta[0]
meta_paths.extend([os.path.join(root_path, meta)] * repeat_time)
num += 1
print(f'{list_path}: {num} x {repeat_time} = {num * repeat_time} samples')
# Set basic attributes
self.meta_paths = meta_paths
self.root_path = root_path
self.image_size = cfg['image_size']
self.min_face_size = cfg['min_face_size']
self.T = cfg['T']
self.sample_method = cfg['sample_method']
self.top_k_ratio = cfg['top_k_ratio']
self.max_attempts = 200
self.padding_pixel_mouth = cfg['padding_pixel_mouth']
# Cropping related parameters
self.crop_type = cfg['crop_type']
self.jaw2edge_margin_mean = cfg['cropping_jaw2edge_margin_mean']
self.jaw2edge_margin_std = cfg['cropping_jaw2edge_margin_std']
self.random_margin_method = cfg['random_margin_method']
# Image transformations
self.to_tensor = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
self.pose_to_tensor = transforms.Compose([
transforms.ToTensor(),
])
# Feature extractor
self.feature_extractor = AutoFeatureExtractor.from_pretrained(cfg['whisper_path'])
self.contorl_face_min_size = cfg["contorl_face_min_size"]
print("The sample method is: ", self.sample_method)
print(f"only use face size > {self.min_face_size}", self.contorl_face_min_size)
def generate_random_value(self):
"""Generate random value
Returns:
float: Generated random value
"""
if self.random_margin_method == "uniform":
random_value = np.random.uniform(
self.jaw2edge_margin_mean - self.jaw2edge_margin_std,
self.jaw2edge_margin_mean + self.jaw2edge_margin_std
)
elif self.random_margin_method == "normal":
random_value = np.random.normal(
loc=self.jaw2edge_margin_mean,
scale=self.jaw2edge_margin_std
)
random_value = np.clip(
random_value,
self.jaw2edge_margin_mean - self.jaw2edge_margin_std,
self.jaw2edge_margin_mean + self.jaw2edge_margin_std,
)
else:
raise ValueError(f"Invalid random margin method: {self.random_margin_method}")
return max(0, random_value)
def dynamic_margin_crop(self, img, original_bbox, extra_margin=None):
"""Dynamically crop image with dynamic margin
Args:
img: Input image
original_bbox: Original bounding box
extra_margin: Extra margin
Returns:
tuple: (x1, y1, x2, y2, extra_margin)
"""
if extra_margin is None:
extra_margin = self.generate_random_value()
w, h = img.size
x1, y1, x2, y2 = original_bbox
y2 = min(y2 + int(extra_margin), h)
return x1, y1, x2, y2, extra_margin
def crop_resize_img(self, img, bbox, crop_type='crop_resize', extra_margin=None):
"""Crop and resize image
Args:
img: Input image
bbox: Bounding box
crop_type: Type of cropping
extra_margin: Extra margin
Returns:
tuple: (Processed image, extra_margin, mask_scaled_factor)
"""
mask_scaled_factor = 1.
if crop_type == 'crop_resize':
x1, y1, x2, y2 = bbox
img = img.crop((x1, y1, x2, y2))
img = img.resize((self.image_size, self.image_size), Image.LANCZOS)
elif crop_type == 'dynamic_margin_crop_resize':
x1, y1, x2, y2, extra_margin = self.dynamic_margin_crop(img, bbox, extra_margin)
w_original, _ = img.size
img = img.crop((x1, y1, x2, y2))
w_cropped, _ = img.size
mask_scaled_factor = w_cropped / w_original
img = img.resize((self.image_size, self.image_size), Image.LANCZOS)
elif crop_type == 'resize':
w, h = img.size
scale = np.sqrt(self.image_size ** 2 / (h * w))
new_w = int(w * scale) // 64 * 64  # round down to a multiple of 64; resize() needs ints
new_h = int(h * scale) // 64 * 64
img = img.resize((new_w, new_h), Image.LANCZOS)
return img, extra_margin, mask_scaled_factor
def get_audio_file(self, wav_path, start_index):
"""Get audio file features
Args:
wav_path: Audio file path
start_index: Starting index
Returns:
tuple: (Audio features, start index)
"""
if not os.path.exists(wav_path):
return None
wav_path_converted = ensure_wav(wav_path)
audio_input_librosa, sampling_rate = librosa.load(wav_path_converted, sr=16000)
assert sampling_rate == 16000
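# Whisper's feature extractor works on a fixed 30 s window (25 fps * 30 = 750 video
# frames), so start_index is re-based into the chunk that contains it; each video
# frame then maps to 2 frames of the 1500-frame encoder feature stream (see asserts).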
while start_index >= 25 * 30:
audio_input_librosa = audio_input_librosa[16000*30:]  # advance to the next 30 s chunk
start_index -= 25 * 30
if start_index + 2 * 25 >= 25 * 30:
start_index -= 4 * 25
audio_input = audio_input_librosa[16000*4:16000*34]
else:
audio_input = audio_input_librosa[:16000*30]
assert 2 * (start_index) >= 0
assert 2 * (start_index + 2 * 25) <= 1500
audio_input = self.feature_extractor(
audio_input,
return_tensors="pt",
sampling_rate=sampling_rate
).input_features
return audio_input, start_index
def get_audio_file_mel(self, wav_path, start_index):
"""Get mel spectrogram of audio file
Args:
wav_path: Audio file path
start_index: Starting index
Returns:
tuple: (Mel spectrogram, start index)
"""
if not os.path.exists(wav_path):
return None
wav_path_converted = ensure_wav(wav_path)
audio_input_librosa, sampling_rate = librosa.load(wav_path_converted, sr=16000)
assert sampling_rate == 16000
audio_mel = self.mel_feature_extractor(audio_input_librosa)
return audio_mel, start_index
def mel_feature_extractor(self, audio_input):
"""Extract mel spectrogram features
Args:
audio_input: Input audio
Returns:
ndarray: Mel spectrogram features
"""
orig_mel = audio.melspectrogram(audio_input)
return orig_mel.T
def crop_audio_window(self, spec, start_frame_num, fps=25):
"""Crop audio window
Args:
spec: Spectrogram
start_frame_num: Starting frame number
fps: Frames per second
Returns:
ndarray: Cropped spectrogram
"""
start_idx = int(80. * (start_frame_num / float(fps)))
end_idx = start_idx + syncnet_mel_step_size
return spec[start_idx: end_idx, :]
def get_syncnet_input(self, video_path):
"""Get SyncNet input features
Args:
video_path: Video file path
Returns:
ndarray: SyncNet input features
"""
ar = AudioReader(video_path, sample_rate=16000)
original_mel = audio.melspectrogram(ar[:].asnumpy().squeeze(0))
return original_mel.T
def get_resized_mouth_mask(
self,
img_resized,
landmark_array,
face_shape,
padding_pixel_mouth=0,
image_size=256,
crop_margin=0
):
landmark_array = np.array(landmark_array)
resized_landmark = resize_landmark(
landmark_array, w=face_shape[0], h=face_shape[1], new_w=image_size, new_h=image_size)
landmark_array = np.array(resized_landmark[48 : 67]) # the lip landmarks in 68 landmarks format
min_x, min_y = np.min(landmark_array, axis=0)
max_x, max_y = np.max(landmark_array, axis=0)
min_x = min_x - padding_pixel_mouth
max_x = max_x + padding_pixel_mouth
# Calculate x-axis length and use it for y-axis
width = max_x - min_x
# Calculate old center point
center_y = (max_y + min_y) / 2
# Determine new min_y and max_y based on width
min_y = center_y - width / 4
max_y = center_y + width / 4
# Adjust mask position for dynamic crop, shift y-axis
min_y = min_y - crop_margin
max_y = max_y - crop_margin
# Prevent out of bounds
min_x = max(min_x, 0)
min_y = max(min_y, 0)
max_x = min(max_x, face_shape[0])
max_y = min(max_y, face_shape[1])
mask = np.zeros_like(np.array(img_resized))
mask[round(min_y):round(max_y), round(min_x):round(max_x)] = 255
return Image.fromarray(mask)
def __len__(self):
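# Samples are drawn at random in __getitem__, so this nominal length only sets
# how many samples make up one epoch.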
return 100000
def __getitem__(self, idx):
attempts = 0
while attempts < self.max_attempts:
try:
meta_path = random.sample(self.meta_paths, k=1)[0]
with open(meta_path, 'r') as f:
meta_data = json.load(f)
except Exception as e:
print(f"meta file error:{meta_path}")
print(e)
attempts += 1
time.sleep(0.1)
continue
video_path = meta_data["mp4_path"]
wav_path = meta_data["wav_path"]
bbox_list = meta_data["face_list"]
landmark_list = meta_data["landmark_list"]
T = self.T
s = 0
e = meta_data["frames"]
len_valid_clip = e - s
if len_valid_clip < T * 10:
attempts += 1
print(f"video {video_path} has less than {T * 10} frames")
continue
try:
cap = VideoReader(video_path, fault_tol=1, ctx=cpu(0))
total_frames = len(cap)
assert total_frames == len(landmark_list)
assert total_frames == len(bbox_list)
landmark_shape = np.array(landmark_list).shape
if landmark_shape != (total_frames, 68, 2):
attempts += 1
print(f"video {video_path} has invalid landmark shape: {landmark_shape}, expected: {(total_frames, 68, 2)}") # we use 68 landmarks
continue
except Exception as e:
print(f"video file error:{video_path}")
print(e)
attempts += 1
time.sleep(0.1)
continue
shift_landmarks, bbox_list_union, face_shapes = shift_landmarks_to_face_coordinates(
landmark_list,
bbox_list
)
if self.contorl_face_min_size and face_shapes[0][0] < self.min_face_size:
print(f"video {video_path} has face size {face_shapes[0][0]} less than minimum required {self.min_face_size}")
attempts += 1
continue
step = 1
drive_idx_start = random.randint(s, e - T * step)
drive_idx_list = list(
range(drive_idx_start, drive_idx_start + T * step, step))
assert len(drive_idx_list) == T
src_idx_list = []
list_index_out_of_range = False
for drive_idx in drive_idx_list:
src_idx = get_src_idx(
drive_idx, T, self.sample_method, shift_landmarks, face_shapes, self.top_k_ratio)
if src_idx is None:
list_index_out_of_range = True
break
src_idx = min(src_idx, e - 1)
src_idx = max(src_idx, s)
src_idx_list.append(src_idx)
if list_index_out_of_range:
attempts += 1
print(f"video {video_path} has invalid source index for drive frames")
continue
ref_face_valid_flag = True
extra_margin = self.generate_random_value()
# Get reference images
ref_imgs = []
for src_idx in src_idx_list:
imSrc = Image.fromarray(cap[src_idx].asnumpy())
bbox_s = bbox_list_union[src_idx]
imSrc, _, _ = self.crop_resize_img(
imSrc,
bbox_s,
self.crop_type,
extra_margin=None
)
if self.contorl_face_min_size and min(imSrc.size[0], imSrc.size[1]) < self.min_face_size:
ref_face_valid_flag = False
break
ref_imgs.append(imSrc)
if not ref_face_valid_flag:
attempts += 1
print(f"video {video_path} has reference face size smaller than minimum required {self.min_face_size}")
continue
# Get target images and masks
imSameIDs = []
bboxes = []
face_masks = []
face_mask_valid = True
target_face_valid_flag = True
for drive_idx in drive_idx_list:
imSameID = Image.fromarray(cap[drive_idx].asnumpy())
bbox_s = bbox_list_union[drive_idx]
imSameID, _ , mask_scaled_factor = self.crop_resize_img(
imSameID,
bbox_s,
self.crop_type,
extra_margin=extra_margin
)
if self.contorl_face_min_size and min(imSameID.size[0], imSameID.size[1]) < self.min_face_size:
target_face_valid_flag = False
break
crop_margin = extra_margin * mask_scaled_factor
face_mask = self.get_resized_mouth_mask(
imSameID,
shift_landmarks[drive_idx],
face_shapes[drive_idx],
self.padding_pixel_mouth,
self.image_size,
crop_margin=crop_margin
)
if np.count_nonzero(face_mask) == 0:
face_mask_valid = False
break
if face_mask.size[1] == 0 or face_mask.size[0] == 0:
print(f"video {video_path} has invalid face mask size at frame {drive_idx}")
face_mask_valid = False
break
imSameIDs.append(imSameID)
bboxes.append(bbox_s)
face_masks.append(face_mask)
if not face_mask_valid:
attempts += 1
print(f"video {video_path} has invalid face mask")
continue
if not target_face_valid_flag:
attempts += 1
print(f"video {video_path} has target face size smaller than minimum required {self.min_face_size}")
continue
# Process audio features
audio_offset = drive_idx_list[0]
audio_step = step
fps = 25.0 / step
try:
audio_feature, audio_offset = self.get_audio_file(wav_path, audio_offset)
_, audio_offset = self.get_audio_file_mel(wav_path, audio_offset)
audio_feature_mel = self.get_syncnet_input(video_path)
except Exception as e:
print(f"audio file error:{wav_path}")
print(e)
attempts += 1
time.sleep(0.1)
continue
mel = self.crop_audio_window(audio_feature_mel, audio_offset)
if mel.shape[0] != syncnet_mel_step_size:
attempts += 1
print(f"video {video_path} has invalid mel spectrogram shape: {mel.shape}, expected: {syncnet_mel_step_size}")
continue
mel = torch.FloatTensor(mel.T).unsqueeze(0)
# Build sample dictionary
sample = dict(
pixel_values_vid=torch.stack(
[self.to_tensor(imSameID) for imSameID in imSameIDs], dim=0),
pixel_values_ref_img=torch.stack(
[self.to_tensor(ref_img) for ref_img in ref_imgs], dim=0),
pixel_values_face_mask=torch.stack(
[self.pose_to_tensor(face_mask) for face_mask in face_masks], dim=0),
audio_feature=audio_feature[0],
audio_offset=audio_offset,
audio_step=audio_step,
mel=mel,
wav_path=wav_path,
fps=fps,
)
return sample
raise ValueError("Unable to find a valid sample after maximum attempts.")
class HDTFDataset(FaceDataset):
"""HDTF dataset class"""
def __init__(self, cfg):
root_path = './dataset/HDTF/meta'
list_paths = [
'./dataset/HDTF/train.txt',
]
repeats = [10]
super().__init__(cfg, list_paths, root_path, repeats)
print('HDTFDataset: ', len(self))
class VFHQDataset(FaceDataset):
"""VFHQ dataset class"""
def __init__(self, cfg):
root_path = './dataset/VFHQ/meta'
list_paths = [
'./dataset/VFHQ/train.txt',
]
repeats = [1]
super().__init__(cfg, list_paths, root_path, repeats)
print('VFHQDataset: ', len(self))
def PortraitDataset(cfg=None):
"""Return dataset based on configuration
Args:
cfg: Configuration dictionary
Returns:
Dataset: Combined dataset
"""
if cfg["dataset_key"] == "HDTF":
return ConcatDataset([HDTFDataset(cfg)])
elif cfg["dataset_key"] == "VFHQ":
return ConcatDataset([VFHQDataset(cfg)])
else:
print("############ use all dataset ############ ")
return ConcatDataset([HDTFDataset(cfg), VFHQDataset(cfg)])
if __name__ == '__main__':
# Set random seeds for reproducibility
seed = 42
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
# Create dataset with configuration parameters
dataset = PortraitDataset(cfg={
'T': 1, # Number of frames to process at once
'random_margin_method': "normal", # Method for generating random margins: "normal" or "uniform"
'dataset_key': "HDTF", # Dataset to use: "HDTF", "VFHQ", or None for both
'image_size': 256, # Size of processed images (height and width)
'sample_method': 'pose_similarity_and_mouth_dissimilarity', # Method for selecting reference frames
'top_k_ratio': 0.51, # Ratio for top-k selection in reference frame sampling
'contorl_face_min_size': True, # Whether to enforce minimum face size
'padding_pixel_mouth': 10, # Padding pixels around mouth region in mask
'min_face_size': 200, # Minimum face size requirement for dataset
'whisper_path': "./models/whisper", # Path to Whisper model
'cropping_jaw2edge_margin_mean': 10, # Mean margin for jaw-to-edge cropping
'cropping_jaw2edge_margin_std': 10, # Standard deviation for jaw-to-edge cropping
'crop_type': "dynamic_margin_crop_resize", # Type of cropping: "crop_resize", "dynamic_margin_crop_resize", or "resize"
})
print(len(dataset))
import torchvision
os.makedirs('debug', exist_ok=True)
for i in range(10): # Check 10 samples
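# The index passed to the dataset is ignored; every call draws a fresh random clip.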
sample = dataset[0]
print(f"processing {i}")
# Get images and mask
ref_img = (sample['pixel_values_ref_img'] + 1.0) / 2 # (b, c, h, w)
target_img = (sample['pixel_values_vid'] + 1.0) / 2
face_mask = sample['pixel_values_face_mask']
# Print dimension information
print(f"ref_img shape: {ref_img.shape}")
print(f"target_img shape: {target_img.shape}")
print(f"face_mask shape: {face_mask.shape}")
# Create visualization images
b, c, h, w = ref_img.shape
# Apply mask only to target image
target_mask = face_mask
# Keep reference image unchanged
ref_with_mask = ref_img.clone()
# Create mask overlay for target image
target_with_mask = target_img.clone()
target_with_mask = target_with_mask * (1 - target_mask) + target_mask # Apply mask only to target
# Save original images, mask, and overlay results
# First row: original images
# Second row: mask
# Third row: overlay effect
concatenated_img = torch.cat((
ref_img, target_img, # Original images
torch.zeros_like(ref_img), target_mask, # Mask (black for ref)
ref_with_mask, target_with_mask # Overlay effect
), dim=3)
torchvision.utils.save_image(
concatenated_img, f'debug/mask_check_{i}.jpg', nrow=2)

View File

@@ -0,0 +1,233 @@
import numpy as np
import random
def summarize_tensor(x):
return f"\033[34m{str(tuple(x.shape)).ljust(24)}\033[0m (\033[31mmin {x.min().item():+.4f}\033[0m / \033[32mmean {x.mean().item():+.4f}\033[0m / \033[33mmax {x.max().item():+.4f}\033[0m)"
def calculate_mouth_open_similarity(landmarks_list, select_idx,top_k=50,ascending=True):
num_landmarks = len(landmarks_list)
mouth_open_ratios = np.zeros(num_landmarks) # Initialize as a numpy array
print(np.shape(landmarks_list))
## Calculate mouth opening ratios
for i, landmarks in enumerate(landmarks_list):
# Assuming landmarks are in the format [x, y] and accessible by index
mouth_top = landmarks[165] # Adjust index according to your landmarks format
mouth_bottom = landmarks[147] # Adjust index according to your landmarks format
mouth_open_ratio = np.linalg.norm(mouth_top - mouth_bottom)
mouth_open_ratios[i] = mouth_open_ratio
# Calculate differences matrix
differences_matrix = np.abs(mouth_open_ratios[:, np.newaxis] - mouth_open_ratios[select_idx])
differences_matrix_with_signs = mouth_open_ratios[:, np.newaxis] - mouth_open_ratios[select_idx]
print(differences_matrix.shape)
# Find top_k similar indices for each landmark set
if ascending:
top_indices = np.argsort(differences_matrix[i])[:top_k]
else:
top_indices = np.argsort(-differences_matrix[i])[:top_k]
similar_landmarks_indices = top_indices.tolist()
similar_landmarks_distances = differences_matrix_with_signs[i].tolist()  # note: do not sort these distances
return similar_landmarks_indices, similar_landmarks_distances
#############################################################################################
def get_closed_mouth(landmarks_list,ascending=True,top_k=50):
num_landmarks = len(landmarks_list)
mouth_open_ratios = np.zeros(num_landmarks) # Initialize as a numpy array
## Calculate mouth opening ratios
#print("landmarks shape",np.shape(landmarks_list))
for i, landmarks in enumerate(landmarks_list):
# Assuming landmarks are in the format [x, y] and accessible by index
#print(landmarks[165])
mouth_top = np.array(landmarks[165])# Adjust index according to your landmarks format
mouth_bottom = np.array(landmarks[147]) # Adjust index according to your landmarks format
mouth_open_ratio = np.linalg.norm(mouth_top - mouth_bottom)
mouth_open_ratios[i] = mouth_open_ratio
# Find top_k similar indices for each landmark set
if ascending:
top_indices = np.argsort(mouth_open_ratios)[:top_k]
else:
top_indices = np.argsort(-mouth_open_ratios)[:top_k]
return top_indices
def calculate_landmarks_similarity(selected_idx, landmarks_list,image_shapes, start_index, end_index, top_k=50,ascending=True):
"""
Calculate the similarity between sets of facial landmarks and return the indices of the most similar faces.
Parameters:
landmarks_list (list): A list containing sets of facial landmarks, each element is a set of landmarks.
image_shapes (list): A list containing the shape of each image, each element is a (width, height) tuple.
start_index (int): The starting index of the facial landmarks.
end_index (int): The ending index of the facial landmarks.
top_k (int): The number of most similar landmark sets to return. Default is 50.
ascending (bool): Controls the sorting order. If True, sort in ascending order; If False, sort in descending order. Default is True.
Returns:
similar_landmarks_indices (list): A list containing the indices of the most similar facial landmarks for each face.
resized_landmarks (list): A list containing the resized facial landmarks.
"""
num_landmarks = len(landmarks_list)
resized_landmarks = []
# Preprocess landmarks
for i in range(num_landmarks):
landmark_array = np.array(landmarks_list[i])
selected_landmarks = landmark_array[start_index:end_index]
resized_landmark = resize_landmark(selected_landmarks, w=image_shapes[i][0], h=image_shapes[i][1],new_w=256,new_h=256)
resized_landmarks.append(resized_landmark)
resized_landmarks_array = np.array(resized_landmarks) # Convert list to array for easier manipulation
# Calculate similarity
distances = np.linalg.norm(resized_landmarks_array - resized_landmarks_array[selected_idx][np.newaxis, :], axis=2)
overall_distances = np.mean(distances, axis=1) # Calculate mean distance for each set of landmarks
if ascending:
sorted_indices = np.argsort(overall_distances)
similar_landmarks_indices = sorted_indices[1:top_k+1].tolist() # Exclude self and take top_k
else:
sorted_indices = np.argsort(-overall_distances)
similar_landmarks_indices = sorted_indices[0:top_k].tolist()
return similar_landmarks_indices
def process_bbox_musetalk(face_array, landmark_array):
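# Union of the face-detector bbox and the landmark bounding box, clamped at the
# image origin, so the crop is guaranteed to contain every landmark.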
x_min_face, y_min_face, x_max_face, y_max_face = map(int, face_array)
x_min_lm = min([int(x) for x, y in landmark_array])
y_min_lm = min([int(y) for x, y in landmark_array])
x_max_lm = max([int(x) for x, y in landmark_array])
y_max_lm = max([int(y) for x, y in landmark_array])
x_min = min(x_min_face, x_min_lm)
y_min = min(y_min_face, y_min_lm)
x_max = max(x_max_face, x_max_lm)
y_max = max(y_max_face, y_max_lm)
x_min = max(x_min, 0)
y_min = max(y_min, 0)
return [x_min, y_min, x_max, y_max]
def shift_landmarks_to_face_coordinates(landmark_list, face_list):
"""
Translates the data in landmark_list to the coordinates of the cropped larger face.
Parameters:
landmark_list (list): A list containing multiple sets of facial landmarks.
face_list (list): A list containing multiple facial images.
Returns:
landmark_list_shift (list): The list of translated landmarks.
bbox_union (list): The list of union bounding boxes.
face_shapes (list): The list of facial shapes.
"""
landmark_list_shift = []
bbox_union = []
face_shapes = []
for i in range(len(face_list)):
landmark_array = np.array(landmark_list[i])  # convert to a numpy array (makes a copy)
face_array = face_list[i]
f_landmark_bbox = process_bbox_musetalk(face_array, landmark_array)
x_min, y_min, x_max, y_max = f_landmark_bbox
landmark_array[:, 0] = landmark_array[:, 0] - f_landmark_bbox[0]
landmark_array[:, 1] = landmark_array[:, 1] - f_landmark_bbox[1]
landmark_list_shift.append(landmark_array)
bbox_union.append(f_landmark_bbox)
face_shapes.append((x_max - x_min, y_max - y_min))
return landmark_list_shift, bbox_union, face_shapes
def resize_landmark(landmark, w, h, new_w, new_h):
landmark_norm = landmark / [w, h]
landmark_resized = landmark_norm * [new_w, new_h]
return landmark_resized
def get_src_idx(drive_idx, T, sample_method,landmarks_list,image_shapes,top_k_ratio):
"""
Calculate the source index (src_idx) based on the given drive index, T, s, e, and sampling method.
Parameters:
- drive_idx (int): The current drive index.
- T (int): Total number of frames or a specific range limit.
- sample_method (str): Sampling method, which can be "random" or other methods.
- landmarks_list (list): List of facial landmarks.
- image_shapes (list): List of image shapes.
- top_k_ratio (float): Ratio for selecting top k similar frames.
Returns:
- src_idx (int): The calculated source index.
"""
if sample_method == "random":
src_idx = random.randint(drive_idx - 5 * T, drive_idx + 5 * T)
elif sample_method == "pose_similarity":
top_k = int(top_k_ratio*len(landmarks_list))
try:
top_k = int(top_k_ratio*len(landmarks_list))
# facial contour
landmark_start_idx = 0
landmark_end_idx = 16
pose_similarity_list = calculate_landmarks_similarity(drive_idx, landmarks_list,image_shapes, landmark_start_idx, landmark_end_idx,top_k=top_k, ascending=True)
src_idx = random.choice(pose_similarity_list)
while abs(src_idx-drive_idx)<5:
src_idx = random.choice(pose_similarity_list)
except Exception as e:
print(e)
return None
elif sample_method=="pose_similarity_and_closed_mouth":
# facial contour
landmark_start_idx = 0
landmark_end_idx = 16
try:
top_k = int(top_k_ratio*len(landmarks_list))
closed_mouth_list = get_closed_mouth(landmarks_list, ascending=True,top_k=top_k)
#print("closed_mouth_list",closed_mouth_list)
pose_similarity_list = calculate_landmarks_similarity(drive_idx, landmarks_list,image_shapes, landmark_start_idx, landmark_end_idx,top_k=top_k, ascending=True)
#print("pose_similarity_list",pose_similarity_list)
common_list = list(set(closed_mouth_list).intersection(set(pose_similarity_list)))
if len(common_list) == 0:
src_idx = random.randint(drive_idx - 5 * T, drive_idx + 5 * T)
else:
src_idx = random.choice(common_list)
while abs(src_idx-drive_idx) <5:
src_idx = random.randint(drive_idx - 5 * T, drive_idx + 5 * T)
except Exception as e:
print(e)
return None
elif sample_method=="pose_similarity_and_mouth_dissimilarity":
top_k = int(top_k_ratio*len(landmarks_list))
try:
top_k = int(top_k_ratio*len(landmarks_list))
# facial contour for 68 landmarks format
landmark_start_idx = 0
landmark_end_idx = 16
pose_similarity_list = calculate_landmarks_similarity(drive_idx, landmarks_list,image_shapes, landmark_start_idx, landmark_end_idx,top_k=top_k, ascending=True)
# Mouth inner coutour for 68 landmarks format
landmark_start_idx = 60
landmark_end_idx = 67
mouth_dissimilarity_list = calculate_landmarks_similarity(drive_idx, landmarks_list,image_shapes, landmark_start_idx, landmark_end_idx,top_k=top_k, ascending=False)
common_list = list(set(pose_similarity_list).intersection(set(mouth_dissimilarity_list)))
if len(common_list) == 0:
src_idx = random.randint(drive_idx - 5 * T, drive_idx + 5 * T)
else:
src_idx = random.choice(common_list)
while abs(src_idx-drive_idx) <5:
src_idx = random.randint(drive_idx - 5 * T, drive_idx + 5 * T)
except Exception as e:
print(e)
return None
else:
raise ValueError(f"Unknown sample_method: {sample_method}")
return src_idx

View File

@@ -0,0 +1,81 @@
import torch
import torch.nn.functional as F
from omegaconf import OmegaConf
from torch import nn, optim
from torch.optim.lr_scheduler import CosineAnnealingLR
from musetalk.loss.discriminator import MultiScaleDiscriminator, DiscriminatorFullModel
import musetalk.loss.vgg_face as vgg_face
class Interpolate(nn.Module):
def __init__(self, size=None, scale_factor=None, mode='nearest', align_corners=None):
super(Interpolate, self).__init__()
self.size = size
self.scale_factor = scale_factor
self.mode = mode
self.align_corners = align_corners
def forward(self, input):
return F.interpolate(input, self.size, self.scale_factor, self.mode, self.align_corners)
def set_requires_grad(net, requires_grad=False):
if net is not None:
for param in net.parameters():
param.requires_grad = requires_grad
if __name__ == "__main__":
cfg = OmegaConf.load("config/audio_adapter/E7.yaml")
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
pyramid_scale = [1, 0.5, 0.25, 0.125]
vgg_IN = vgg_face.Vgg19().to(device)
pyramid = vgg_face.ImagePyramide(cfg.loss_params.pyramid_scale, 3).to(device)
vgg_IN.eval()
downsampler = Interpolate(size=(224, 224), mode='bilinear', align_corners=False)
image = torch.rand(8, 3, 256, 256).to(device)
image_pred = torch.rand(8, 3, 256, 256).to(device)
pyramide_real = pyramid(downsampler(image))
pyramide_generated = pyramid(downsampler(image_pred))
loss_IN = 0
for scale in cfg.loss_params.pyramid_scale:
x_vgg = vgg_IN(pyramide_generated['prediction_' + str(scale)])
y_vgg = vgg_IN(pyramide_real['prediction_' + str(scale)])
for i, weight in enumerate(cfg.loss_params.vgg_layer_weight):
value = torch.abs(x_vgg[i] - y_vgg[i].detach()).mean()
loss_IN += weight * value
loss_IN /= sum(cfg.loss_params.vgg_layer_weight)  # average over the VGG layer weights; the pyramid loss is summed over every scale
print(loss_IN)
#print(cfg.model_params.discriminator_params)
discriminator = MultiScaleDiscriminator(**cfg.model_params.discriminator_params).to(device)
discriminator_full = DiscriminatorFullModel(discriminator)
disc_scales = cfg.model_params.discriminator_params.scales
# Prepare optimizer and loss function
optimizer_D = optim.AdamW(discriminator.parameters(),
lr=cfg.discriminator_train_params.lr,
weight_decay=cfg.discriminator_train_params.weight_decay,
betas=cfg.discriminator_train_params.betas,
eps=cfg.discriminator_train_params.eps)
scheduler_D = CosineAnnealingLR(optimizer_D,
T_max=cfg.discriminator_train_params.epochs,
eta_min=1e-6)
discriminator.train()
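# Freeze the discriminator while computing the generator's adversarial loss below.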
set_requires_grad(discriminator, False)
loss_G = 0.
discriminator_maps_generated = discriminator(pyramide_generated)
discriminator_maps_real = discriminator(pyramide_real)
for scale in disc_scales:
key = 'prediction_map_%s' % scale
value = ((1 - discriminator_maps_generated[key]) ** 2).mean()
loss_G += value
print(loss_G)

View File

@@ -0,0 +1,44 @@
import torch
from torch import nn
from torch.nn import functional as F
class Conv2d(nn.Module):
def __init__(self, cin, cout, kernel_size, stride, padding, residual=False, *args, **kwargs):
super().__init__(*args, **kwargs)
self.conv_block = nn.Sequential(
nn.Conv2d(cin, cout, kernel_size, stride, padding),
nn.BatchNorm2d(cout)
)
self.act = nn.ReLU()
self.residual = residual
def forward(self, x):
out = self.conv_block(x)
if self.residual:
out += x
return self.act(out)
class nonorm_Conv2d(nn.Module):
def __init__(self, cin, cout, kernel_size, stride, padding, residual=False, *args, **kwargs):
super().__init__(*args, **kwargs)
self.conv_block = nn.Sequential(
nn.Conv2d(cin, cout, kernel_size, stride, padding),
)
self.act = nn.LeakyReLU(0.01, inplace=True)
def forward(self, x):
out = self.conv_block(x)
return self.act(out)
class Conv2dTranspose(nn.Module):
def __init__(self, cin, cout, kernel_size, stride, padding, output_padding=0, *args, **kwargs):
super().__init__(*args, **kwargs)
self.conv_block = nn.Sequential(
nn.ConvTranspose2d(cin, cout, kernel_size, stride, padding, output_padding),
nn.BatchNorm2d(cout)
)
self.act = nn.ReLU()
def forward(self, x):
out = self.conv_block(x)
return self.act(out)

View File

@@ -0,0 +1,145 @@
from torch import nn
import torch.nn.functional as F
import torch
from musetalk.loss.vgg_face import ImagePyramide
class DownBlock2d(nn.Module):
"""
Simple block for processing video (encoder).
"""
def __init__(self, in_features, out_features, norm=False, kernel_size=4, pool=False, sn=False):
super(DownBlock2d, self).__init__()
self.conv = nn.Conv2d(in_channels=in_features, out_channels=out_features, kernel_size=kernel_size)
if sn:
self.conv = nn.utils.spectral_norm(self.conv)
if norm:
self.norm = nn.InstanceNorm2d(out_features, affine=True)
else:
self.norm = None
self.pool = pool
def forward(self, x):
out = x
out = self.conv(out)
if self.norm:
out = self.norm(out)
out = F.leaky_relu(out, 0.2)
if self.pool:
out = F.avg_pool2d(out, (2, 2))
return out
class Discriminator(nn.Module):
"""
Discriminator similar to Pix2Pix
"""
def __init__(self, num_channels=3, block_expansion=64, num_blocks=4, max_features=512,
sn=False, **kwargs):
super(Discriminator, self).__init__()
down_blocks = []
for i in range(num_blocks):
down_blocks.append(
DownBlock2d(num_channels if i == 0 else min(max_features, block_expansion * (2 ** i)),
min(max_features, block_expansion * (2 ** (i + 1))),
norm=(i != 0), kernel_size=4, pool=(i != num_blocks - 1), sn=sn))
self.down_blocks = nn.ModuleList(down_blocks)
self.conv = nn.Conv2d(self.down_blocks[-1].conv.out_channels, out_channels=1, kernel_size=1)
if sn:
self.conv = nn.utils.spectral_norm(self.conv)
def forward(self, x):
feature_maps = []
out = x
for down_block in self.down_blocks:
feature_maps.append(down_block(out))
out = feature_maps[-1]
prediction_map = self.conv(out)
return feature_maps, prediction_map
class MultiScaleDiscriminator(nn.Module):
"""
Multi-scale (scale) discriminator
"""
def __init__(self, scales=(), **kwargs):
super(MultiScaleDiscriminator, self).__init__()
self.scales = scales
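# nn.ModuleDict keys may not contain '.', so a scale such as 0.25 is stored under
# '0-25' and mapped back to '0.25' in forward().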
discs = {}
for scale in scales:
discs[str(scale).replace('.', '-')] = Discriminator(**kwargs)
self.discs = nn.ModuleDict(discs)
def forward(self, x):
out_dict = {}
for scale, disc in self.discs.items():
scale = str(scale).replace('-', '.')
key = 'prediction_' + scale
#print(key)
#print(x)
feature_maps, prediction_map = disc(x[key])
out_dict['feature_maps_' + scale] = feature_maps
out_dict['prediction_map_' + scale] = prediction_map
return out_dict
class DiscriminatorFullModel(torch.nn.Module):
"""
Merge all discriminator related updates into single model for better multi-gpu usage
"""
def __init__(self, discriminator):
super(DiscriminatorFullModel, self).__init__()
self.discriminator = discriminator
self.scales = self.discriminator.scales
print("scales",self.scales)
self.pyramid = ImagePyramide(self.scales, 3)
if torch.cuda.is_available():
self.pyramid = self.pyramid.cuda()
self.zero_tensor = None
def get_zero_tensor(self, input):
if self.zero_tensor is None:
self.zero_tensor = torch.FloatTensor(1).fill_(0).cuda()
self.zero_tensor.requires_grad_(False)
return self.zero_tensor.expand_as(input)
def forward(self, x, generated, gan_mode='ls'):
pyramide_real = self.pyramid(x)
pyramide_generated = self.pyramid(generated.detach())
discriminator_maps_generated = self.discriminator(pyramide_generated)
discriminator_maps_real = self.discriminator(pyramide_real)
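# 'ls' is the least-squares GAN loss (real -> 1, fake -> 0); 'hinge' uses
# max(0, 1 - D(real)) + max(0, 1 + D(fake)), each averaged over every scale.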
value_total = 0
for scale in self.scales:
key = 'prediction_map_%s' % scale
if gan_mode == 'hinge':
value = -torch.mean(torch.min(discriminator_maps_real[key]-1, self.get_zero_tensor(discriminator_maps_real[key]))) - torch.mean(torch.min(-discriminator_maps_generated[key]-1, self.get_zero_tensor(discriminator_maps_generated[key])))
elif gan_mode == 'ls':
value = ((1 - discriminator_maps_real[key]) ** 2 + discriminator_maps_generated[key] ** 2).mean()
else:
raise ValueError('Unexpected gan_mode {}'.format(gan_mode))
value_total += value
return value_total
def main():
discriminator = MultiScaleDiscriminator(scales=[1],
block_expansion=32,
max_features=512,
num_blocks=4,
sn=True,
image_channel=3,
estimate_jacobian=False)

View File

@@ -0,0 +1,152 @@
import torch.nn as nn
import math
__all__ = ['ResNet', 'resnet50']
def conv3x3(in_planes, out_planes, stride=1):
"""3x3 convolution with padding"""
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
padding=1, bias=False)
class BasicBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None):
super(BasicBlock, self).__init__()
self.conv1 = conv3x3(inplanes, planes, stride)
self.bn1 = nn.BatchNorm2d(planes)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(planes, planes)
self.bn2 = nn.BatchNorm2d(planes)
self.downsample = downsample
self.stride = stride
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, downsample=None):
super(Bottleneck, self).__init__()
self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, stride=stride, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes * 4)
self.relu = nn.ReLU(inplace=True)
self.downsample = downsample
self.stride = stride
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
class ResNet(nn.Module):
def __init__(self, block, layers, num_classes=1000, include_top=True):
self.inplanes = 64
super(ResNet, self).__init__()
self.include_top = include_top
self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
self.bn1 = nn.BatchNorm2d(64)
self.relu = nn.ReLU(inplace=True)
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=0, ceil_mode=True)
self.layer1 = self._make_layer(block, 64, layers[0])
self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
self.avgpool = nn.AvgPool2d(7, stride=1)
self.fc = nn.Linear(512 * block.expansion, num_classes)
for m in self.modules():
if isinstance(m, nn.Conv2d):
n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, math.sqrt(2. / n))
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
def _make_layer(self, block, planes, blocks, stride=1):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
nn.Conv2d(self.inplanes, planes * block.expansion,
kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(planes * block.expansion),
)
layers = []
layers.append(block(self.inplanes, planes, stride, downsample))
self.inplanes = planes * block.expansion
for i in range(1, blocks):
layers.append(block(self.inplanes, planes))
return nn.Sequential(*layers)
def forward(self, x):
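# Inputs are expected as [0, 1] RGB; rescale to [0, 255] and flip channels to BGR,
# presumably to match Caffe-style pretrained VGGFace weights.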
x = x * 255.
x = x.flip(1)
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.avgpool(x)
if not self.include_top:
return x
x = x.view(x.size(0), -1)
x = self.fc(x)
return x
def resnet50(**kwargs):
"""Constructs a ResNet-50 model.
"""
model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
return model

Some files were not shown because too many files have changed in this diff