Diffusion Policy Model Fine-Tuning
Overview
Diffusion Policy is a visuomotor policy learning method built on diffusion models, bringing their generative power to robot control. By learning a diffusion process over the action distribution, it can generate diverse, high-quality robot action sequences and performs well on complex manipulation tasks.
Key Features
- Diffusion-based generation: uses a diffusion model to generate continuous action sequences
- Multimodal actions: handles tasks that admit multiple valid solutions
- High-quality output: produces smooth, natural robot motions
- Robustness: tolerates noise and perturbations well
- Expressiveness: can model complex action distributions
Prerequisites
System Requirements
- Operating system: Linux (Ubuntu 20.04+ recommended) or macOS
- Python version: 3.8+
- GPU: NVIDIA GPU (RTX 3080 or better recommended) with at least 10 GB of VRAM
- Memory: at least 32 GB RAM
- Storage: at least 50 GB of free space
Environment Setup
1. Install LeRobot
# Clone the LeRobot repository
git clone https://github.com/huggingface/lerobot.git
cd lerobot
# Create a virtual environment
conda create -n lerobot python=3.10
conda activate lerobot
# Install dependencies
pip install -e .
2. Install Diffusion Policy specific dependencies
# Install diffusion-model dependencies
pip install diffusers
pip install accelerate
pip install transformers
pip install einops
pip install wandb
# Install numerical computing libraries
pip install scipy
pip install scikit-learn
# Log in to Weights & Biases (optional)
wandb login
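Before launching a long training run, it is worth confirming that the GPU and the key libraries installed above are actually visible to Python. A minimal, illustrative check (not part of the official setup steps) might look like this:
# check_env.py - quick sanity check of the training environment (illustrative)
import torch
import diffusers
import transformers

print(f"PyTorch: {torch.__version__}")
print(f"diffusers: {diffusers.__version__}")
print(f"transformers: {transformers.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    # The requirements above call for at least 10 GB of VRAM
    print(f"GPU: {props.name}, VRAM: {props.total_memory / 1024**3:.1f} GB")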
Diffusion Policy Architecture
Core Components
- Vision encoder: extracts image features
- State encoder: processes robot state information
- Conditioning encoder: fuses visual and state information
- Diffusion network: learns the diffusion process over the action distribution
- Noise scheduler: controls the noise level across the diffusion process
Diffusion Process
- Forward process: progressively adds noise to the action sequence
- Reverse process: progressively recovers the action sequence from noise
- Conditional generation: generates actions conditioned on the observations (a minimal sketch follows this list)
- Sampling strategy: uses DDPM or DDIM sampling
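The sketch below illustrates the forward/reverse idea with the diffusers DDPMScheduler, independently of LeRobot's internals: during training, noise is added to a clean action chunk at a random timestep and the network learns to predict that noise (the epsilon prediction type used later in this guide); at inference, sampling starts from pure noise and is denoised step by step under the observation conditioning. The function noise_pred_net is a placeholder for the conditional U-Net, so treat this as illustrative rather than LeRobot's actual implementation.
# diffusion_sketch.py - forward noising and epsilon-prediction loss (illustrative only)
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=100, beta_schedule="squaredcos_cap_v2")

def training_loss(noise_pred_net, clean_actions, obs_cond):
    # clean_actions: [batch, horizon, action_dim]; obs_cond: conditioning features
    noise = torch.randn_like(clean_actions)
    timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (clean_actions.shape[0],))
    # Forward process: corrupt the clean action chunk at the sampled timesteps
    noisy_actions = scheduler.add_noise(clean_actions, noise, timesteps)
    # The network predicts the added noise, conditioned on the observations
    noise_pred = noise_pred_net(noisy_actions, timesteps, obs_cond)
    return torch.nn.functional.mse_loss(noise_pred, noise)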
Data Preparation
LeRobot-Format Data
Diffusion Policy expects datasets in the LeRobot format:
your_dataset/
├── data/
│   ├── chunk-001/
│   │   ├── observation.images.cam_high.png
│   │   ├── observation.images.cam_low.png
│   │   ├── observation.state.npy
│   │   ├── action.npy
│   │   └── ...
│   └── chunk-002/
│       └── ...
├── meta.json
├── stats.safetensors
└── videos/
    ├── episode_000000.mp4
    └── ...
Data Quality Requirements
- At least 100 episodes for basic training (a quick loading check is sketched after this list)
- 500+ episodes recommended for best results
- Action sequences should be smooth and continuous
- Cover a diverse range of task scenarios
- High-quality visual observations
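Before training, it helps to confirm that the dataset loads and that the episode count meets the guidelines above. A minimal sketch using LeRobotDataset; the repo_id is a placeholder and attribute names can differ between LeRobot versions, so treat it as illustrative:
# inspect_dataset.py - quick dataset sanity check (attribute names may vary by LeRobot version)
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("your-username/your_dataset")  # placeholder repo_id
print(f"Episodes: {dataset.num_episodes}")
print(f"Frames: {len(dataset)}")
# Inspect one sample: keys should include image/state observations and actions
sample = dataset[0]
for key, value in sample.items():
    shape = tuple(value.shape) if hasattr(value, "shape") else type(value).__name__
    print(f"{key}: {shape}")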
Fine-Tuning
Basic Training Command
# Set environment variables
export HF_USER="your-huggingface-username"
export CUDA_VISIBLE_DEVICES=0
# Launch Diffusion Policy training
lerobot-train \
--policy.path=lerobot/diffusion_policy \
--dataset.repo_id=${HF_USER}/your_dataset \
--batch_size=64 \
--steps=100000 \
--output_dir=outputs/train/diffusion_policy_finetuned \
--job_name=diffusion_policy_finetuning \
--policy.device=cuda \
--policy.horizon=16 \
--policy.n_action_steps=8 \
--policy.n_obs_steps=2 \
--policy.num_inference_steps=100 \
--training.learning_rate=1e-4 \
--training.weight_decay=1e-6 \
--wandb.enable=true
Advanced Training Configurations
Multi-Step Prediction Configuration
# Configuration for long-horizon prediction
lerobot-train \
--policy.path=lerobot/diffusion_policy \
--dataset.repo_id=${HF_USER}/your_dataset \
--batch_size=32 \
--steps=150000 \
--output_dir=outputs/train/diffusion_policy_long_horizon \
--job_name=diffusion_policy_long_horizon \
--policy.device=cuda \
--policy.horizon=32 \
--policy.n_action_steps=16 \
--policy.n_obs_steps=4 \
--policy.num_inference_steps=100 \
--policy.beta_schedule=squaredcos_cap_v2 \
--policy.clip_sample=true \
--policy.prediction_type=epsilon \
--training.learning_rate=1e-4 \
--training.lr_scheduler=cosine \
--training.warmup_steps=5000 \
--wandb.enable=true
Memory-Optimized Configuration
# For GPUs with limited VRAM
lerobot-train \
--policy.path=lerobot/diffusion_policy \
--dataset.repo_id=${HF_USER}/your_dataset \
--batch_size=16 \
--steps=200000 \
--output_dir=outputs/train/diffusion_policy_memory_opt \
--job_name=diffusion_policy_memory_optimized \
--policy.device=cuda \
--policy.horizon=16 \
--policy.n_action_steps=8 \
--policy.num_inference_steps=50 \
--training.learning_rate=5e-5 \
--training.gradient_accumulation_steps=4 \
--training.mixed_precision=fp16 \
--training.gradient_checkpointing=true \
--wandb.enable=true
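Note that with this configuration the effective batch size is still batch_size × gradient_accumulation_steps = 16 × 4 = 64, the same as in the basic command, while only a micro-batch of 16 has to fit in VRAM at any one time; fp16 and gradient checkpointing then trade a little extra compute for further memory savings.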
Detailed Parameter Reference
Core Parameters
Parameter | Meaning | Recommended Value | Notes |
---|---|---|---|
--policy.path | Pretrained model path | lerobot/diffusion_policy | Official LeRobot Diffusion Policy model |
--dataset.repo_id | Dataset repository ID | ${HF_USER}/dataset | Your HuggingFace dataset |
--batch_size | Batch size | 64 | Adjust to available VRAM; 32-64 recommended on an RTX 3080 |
--steps | Training steps | 100000 | Diffusion models typically need more training steps |
--output_dir | Output directory | outputs/train/diffusion_policy_finetuned | Where the model is saved |
--job_name | Job name | diffusion_policy_finetuning | Used for logging and experiment tracking |
Diffusion Policy Specific Parameters
Parameter | Meaning | Recommended Value | Notes |
---|---|---|---|
--policy.horizon | Prediction horizon | 16 | Length of the predicted action sequence |
--policy.n_action_steps | Executed action steps | 8 | Number of actions executed per prediction |
--policy.n_obs_steps | Observation steps | 2 | Number of past observations used |
--policy.num_inference_steps | Inference steps | 100 | Number of diffusion sampling steps |
--policy.beta_schedule | Noise schedule | squaredcos_cap_v2 | Schedule used when adding noise |
--policy.clip_sample | Sample clipping | true | Whether to clip generated samples |
--policy.prediction_type | Prediction type | epsilon | Predict the noise (epsilon) or the sample |
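As a concrete example of how these parameters interact: with n_obs_steps=2, horizon=16, and n_action_steps=8, the policy conditions on the two most recent observations, predicts a chunk of 16 future actions, executes the first 8, and then replans. At a 10 Hz control rate that leaves roughly 0.8 s for one full sampling pass of num_inference_steps denoising iterations.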
Network Architecture Parameters
Parameter | Meaning | Recommended Value | Notes |
---|---|---|---|
--policy.vision_backbone | Vision backbone | resnet18 | Image feature extraction network |
--policy.crop_shape | Image crop size | [224,224] | Input image size |
--policy.diffusion_step_embed_dim | Step embedding dimension | 128 | Embedding dimension of the diffusion timestep |
--policy.down_dims | Downsampling dimensions | [512,1024,2048] | Channel dimensions along the U-Net downsampling path |
--policy.kernel_size | Convolution kernel size | 5 | Kernel size of the 1D convolutions |
--policy.n_groups | GroupNorm groups | 8 | Number of groups in GroupNorm |
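These flags mirror fields of LeRobot's DiffusionConfig. The sketch below expresses the same architecture settings directly in Python; it assumes the DiffusionConfig dataclass shipped with the lerobot package, and field names may differ slightly between versions:
# config_sketch.py - architecture settings as a DiffusionConfig (illustrative; field names may vary)
from lerobot.common.policies.diffusion.configuration_diffusion import DiffusionConfig

config = DiffusionConfig(
    horizon=16,
    n_action_steps=8,
    n_obs_steps=2,
    vision_backbone="resnet18",
    crop_shape=(224, 224),
    diffusion_step_embed_dim=128,
    down_dims=(512, 1024, 2048),
    kernel_size=5,
    n_groups=8,
    num_inference_steps=100,
    beta_schedule="squaredcos_cap_v2",
    prediction_type="epsilon",
)
print(config)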
Training Parameters
Parameter | Meaning | Recommended Value | Notes |
---|---|---|---|
--training.learning_rate | Learning rate | 1e-4 | Recommended learning rate for diffusion models |
--training.weight_decay | Weight decay | 1e-6 | Regularization strength |
--training.lr_scheduler | Learning rate scheduler | cosine | Cosine annealing schedule |
--training.warmup_steps | Warmup steps | 5000 | Learning rate warmup |
--training.gradient_accumulation_steps | Gradient accumulation | 1 | Increase when VRAM is limited |
--training.mixed_precision | Mixed precision | fp16 | Saves VRAM |
--training.gradient_checkpointing | Gradient checkpointing | false | Saves additional VRAM at some compute cost |
Training Monitoring and Debugging
Weights & Biases Integration
# Detailed W&B configuration
lerobot-train \
--wandb.enable=true \
--wandb.project=diffusion_policy_experiments \
--wandb.run_name=diffusion_policy_v1 \
--wandb.notes="Diffusion Policy training with long horizon" \
--wandb.tags="[diffusion,policy,long_horizon]" \
# ... other arguments
Key Metrics to Monitor
Metrics worth watching during training:
- Diffusion Loss: overall loss of the diffusion model
- MSE Loss: mean squared error loss
- Learning Rate: learning rate curve
- Gradient Norm: gradient norm
- Inference Time: time per inference pass
- Sample Quality: quality of the generated action samples
Training Visualization
# visualization.py
import torch
import matplotlib.pyplot as plt
import numpy as np
from lerobot.common.policies.diffusion.modeling_diffusion import DiffusionPolicy

def visualize_diffusion_process(model_path, observation):
    # Load the model
    policy = DiffusionPolicy.from_pretrained(model_path, device="cuda")
    policy.eval()
    # Generate an action sequence while recording the denoising process
    with torch.no_grad():
        # Start from pure noise
        noise = torch.randn(1, policy.horizon, policy.action_dim, device="cuda")
        # Reverse diffusion sampling
        actions_sequence = []
        policy.scheduler.set_timesteps(policy.num_inference_steps)
        for i, t in enumerate(policy.scheduler.timesteps):
            # Predict the noise at this timestep
            noise_pred = policy.unet(noise, t, observation)
            # Take one denoising step
            noise = policy.scheduler.step(noise_pred, t, noise).prev_sample
            # Save intermediate results
            if i % 10 == 0:
                actions_sequence.append(noise.cpu().numpy())
        final_actions = noise.cpu().numpy()
    # Plot the denoising process
    fig, axes = plt.subplots(2, 3, figsize=(15, 10))
    for i, actions in enumerate(actions_sequence[:6]):
        ax = axes[i // 3, i % 3]
        ax.plot(actions[0, :, 0], label='Action Dim 0')
        ax.plot(actions[0, :, 1], label='Action Dim 1')
        ax.set_title(f'Diffusion Step {i * 10}')
        ax.legend()
    plt.tight_layout()
    plt.savefig('diffusion_process.png')
    plt.show()
    return final_actions

if __name__ == "__main__":
    model_path = "outputs/train/diffusion_policy_finetuned/checkpoints/last"
    # Dummy observation
    observation = {
        "observation.images.cam_high": torch.randn(1, 3, 224, 224, device="cuda"),
        "observation.state": torch.randn(1, 7, device="cuda")
    }
    actions = visualize_diffusion_process(model_path, observation)
    print(f"Generated actions shape: {actions.shape}")
Model Evaluation
Offline Evaluation
# offline_evaluation.py
import torch
import numpy as np
from lerobot.common.policies.diffusion.modeling_diffusion import DiffusionPolicy
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

def evaluate_diffusion_policy(model_path, dataset_path):
    # Load the model
    policy = DiffusionPolicy.from_pretrained(model_path, device="cuda")
    policy.eval()
    # Load the test dataset
    dataset = LeRobotDataset(dataset_path, split="test")
    total_mse_loss = 0
    total_mae_loss = 0
    num_samples = 0
    with torch.no_grad():
        for batch in dataset:
            # Run the model
            prediction = policy(batch)
            # Compute the errors against the ground-truth actions
            target_actions = batch['action']
            predicted_actions = prediction['action']
            mse_loss = torch.mean((predicted_actions - target_actions) ** 2)
            mae_loss = torch.mean(torch.abs(predicted_actions - target_actions))
            total_mse_loss += mse_loss.item()
            total_mae_loss += mae_loss.item()
            num_samples += 1
    avg_mse_loss = total_mse_loss / num_samples
    avg_mae_loss = total_mae_loss / num_samples
    print(f"Average MSE Loss: {avg_mse_loss:.4f}")
    print(f"Average MAE Loss: {avg_mae_loss:.4f}")
    return avg_mse_loss, avg_mae_loss

def evaluate_action_diversity(model_path, observation, num_samples=10):
    # Evaluate action diversity by sampling the policy repeatedly for the same observation
    policy = DiffusionPolicy.from_pretrained(model_path, device="cuda")
    policy.eval()
    actions_list = []
    with torch.no_grad():
        for _ in range(num_samples):
            prediction = policy(observation)
            actions_list.append(prediction['action'].cpu().numpy())
    actions_array = np.array(actions_list)  # [num_samples, horizon, action_dim]
    # Diversity metric: standard deviation across samples
    action_std = np.std(actions_array, axis=0)  # [horizon, action_dim]
    avg_std = np.mean(action_std)
    print(f"Average action standard deviation: {avg_std:.4f}")
    return avg_std, actions_array

if __name__ == "__main__":
    model_path = "outputs/train/diffusion_policy_finetuned/checkpoints/last"
    dataset_path = "path/to/your/test/dataset"
    # Offline evaluation
    evaluate_diffusion_policy(model_path, dataset_path)
    # Diversity evaluation
    observation = {
        "observation.images.cam_high": torch.randn(1, 3, 224, 224, device="cuda"),
        "observation.state": torch.randn(1, 7, device="cuda")
    }
    evaluate_action_diversity(model_path, observation)
Online Evaluation (Robot Environment)
# robot_evaluation.py
import torch
import numpy as np
from lerobot.common.policies.diffusion.modeling_diffusion import DiffusionPolicy

class DiffusionPolicyController:
    def __init__(self, model_path, num_inference_steps=50):
        self.policy = DiffusionPolicy.from_pretrained(model_path, device="cuda")
        self.policy.eval()
        self.num_inference_steps = num_inference_steps
        self.action_queue = []
        self.current_obs_history = []

    def get_action(self, observations):
        # Update the observation history
        self.current_obs_history.append(observations)
        if len(self.current_obs_history) > self.policy.n_obs_steps:
            self.current_obs_history.pop(0)
        # Generate a new action sequence when the queue is empty or replanning is needed
        if len(self.action_queue) == 0 or self.should_replan():
            with torch.no_grad():
                # Build the model input
                batch = self.prepare_observation_batch()
                # Set the number of inference steps
                self.policy.scheduler.set_timesteps(self.num_inference_steps)
                # Generate the action sequence
                prediction = self.policy(batch)
                actions = prediction['action'].cpu().numpy()[0]  # [horizon, action_dim]
                # Refill the action queue
                self.action_queue = list(actions[:self.policy.n_action_steps])
        # Return the next action
        return self.action_queue.pop(0)

    def should_replan(self):
        # Simple replanning rule: replan once fewer than half of the actions remain
        return len(self.action_queue) < self.policy.n_action_steps // 2

    def prepare_observation_batch(self):
        batch = {}
        # Image observations
        if "observation.images.cam_high" in self.current_obs_history[-1]:
            images = []
            for obs in self.current_obs_history:
                image = obs["observation.images.cam_high"]
                image_tensor = self.preprocess_image(image)
                images.append(image_tensor)
            # Pad a short history by repeating the earliest observation
            while len(images) < self.policy.n_obs_steps:
                images.insert(0, images[0])
            batch["observation.images.cam_high"] = torch.stack(images).unsqueeze(0)
        # State observations
        if "observation.state" in self.current_obs_history[-1]:
            states = []
            for obs in self.current_obs_history:
                state = torch.tensor(obs["observation.state"], dtype=torch.float32)
                states.append(state)
            # Pad a short history by repeating the earliest state
            while len(states) < self.policy.n_obs_steps:
                states.insert(0, states[0])
            batch["observation.state"] = torch.stack(states).unsqueeze(0)
        return batch

    def preprocess_image(self, image):
        # Image preprocessing: HWC uint8 -> CHW float in [0, 1]
        image_tensor = torch.tensor(image).permute(2, 0, 1).float() / 255.0
        return image_tensor

# Usage example
if __name__ == "__main__":
    controller = DiffusionPolicyController(
        model_path="outputs/train/diffusion_policy_finetuned/checkpoints/last",
        num_inference_steps=50
    )
    # Simulated robot control loop
    for step in range(100):
        # Get the current observation
        observations = {
            "observation.images.cam_high": np.random.randint(0, 255, (224, 224, 3)),
            "observation.state": np.random.randn(7)
        }
        # Query the policy for an action
        action = controller.get_action(observations)
        # Execute the action
        print(f"Step {step}: Action = {action}")
        # Here the action would be sent to the real robot
        # robot.execute_action(action)
Deployment and Optimization
Inference Acceleration
# fast_inference.py
import torch
from lerobot.common.policies.diffusion.modeling_diffusion import DiffusionPolicy
from diffusers import DDIMScheduler

class FastDiffusionInference:
    def __init__(self, model_path, num_inference_steps=10):
        self.policy = DiffusionPolicy.from_pretrained(model_path, device="cuda")
        self.policy.eval()
        # Swap in a DDIM scheduler for fast sampling
        self.policy.scheduler = DDIMScheduler.from_config(self.policy.scheduler.config)
        self.num_inference_steps = num_inference_steps
        # Warm up the model
        self.warmup()

    def warmup(self):
        # Warm up with dummy data
        dummy_batch = {
            "observation.images.cam_high": torch.randn(1, 2, 3, 224, 224, device="cuda"),
            "observation.state": torch.randn(1, 2, 7, device="cuda")
        }
        with torch.no_grad():
            for _ in range(5):
                _ = self.predict(dummy_batch)

    @torch.no_grad()
    def predict(self, observations):
        # Set the number of inference steps
        self.policy.scheduler.set_timesteps(self.num_inference_steps)
        # Fast inference
        prediction = self.policy(observations)
        return prediction['action'].cpu().numpy()

if __name__ == "__main__":
    fast_inference = FastDiffusionInference(
        "outputs/train/diffusion_policy_finetuned/checkpoints/last",
        num_inference_steps=10
    )
    # Benchmark inference speed
    import time
    observations = {
        "observation.images.cam_high": torch.randn(1, 2, 3, 224, 224, device="cuda"),
        "observation.state": torch.randn(1, 2, 7, device="cuda")
    }
    start_time = time.time()
    for _ in range(100):
        action = fast_inference.predict(observations)
    end_time = time.time()
    avg_inference_time = (end_time - start_time) / 100
    print(f"Average inference time: {avg_inference_time:.4f} seconds")
    print(f"Inference frequency: {1 / avg_inference_time:.2f} Hz")