This repository contains the implementation of MDM-2-DiffGAN, accompanying our paper.
HumanML3D - Follow the setup instructions in the HumanML3D repository, then copy the resulting dataset into this repository:
cp -r ../HumanML3D/HumanML3D ./dataset/HumanML3D
Download humanml-encoder-512 and save it to the ./save folder.
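Before training, it may help to confirm both setup steps landed where the code expects them. Below is a quick sanity check; the HumanML3D file list reflects the standard processed release and is an assumption about what this repo's loaders actually read, not something verified against our code.

```python
import os

# Paths assumed by the commands in this README; the HumanML3D file names
# come from the standard processed release (assumption).
checks = [os.path.join('./dataset/HumanML3D', name)
          for name in ('new_joint_vecs', 'texts', 'Mean.npy', 'Std.npy', 'train.txt', 'test.txt')]
checks.append('./save')  # the downloaded humanml-encoder-512 goes here
for path in checks:
    print(('ok     ' if os.path.exists(path) else 'MISSING'), path)
```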
To train our model, run the following command.
python -m train_ddgan --dataset humanml --num_channels 263 --batch_size 32
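Under the hood, the trainer couples diffusion with a GAN in the style of denoising diffusion GANs: a generator predicts a clean sample x_0 from a noisy x_t, the DDPM posterior turns that prediction into an x_{t-1} sample, and a discriminator compares it against a real x_{t-1} drawn from the data. Below is a minimal, self-contained sketch of one such training step; the toy networks, noise schedule, and function names are illustrative assumptions, not this repository's actual modules.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

T, D, Z = 4, 263, 64                         # few denoising steps; 263 = HumanML3D per-frame feature dim
betas = torch.linspace(0.1, 0.5, T)          # toy noise schedule (assumption)
alphas = 1.0 - betas
abar = torch.cumprod(alphas, dim=0)
abar_prev = torch.cat([torch.ones(1), abar[:-1]])

def q_sample(x0, t, noise):
    """Forward diffusion: sample x_t ~ q(x_t | x_0)."""
    a = abar[t].unsqueeze(1)
    return a.sqrt() * x0 + (1.0 - a).sqrt() * noise

def q_posterior_sample(x0, xt, t):
    """Sample x_{t-1} ~ q(x_{t-1} | x_t, x_0), the standard DDPM posterior."""
    b = betas[t].unsqueeze(1)
    a, ap = abar[t].unsqueeze(1), abar_prev[t].unsqueeze(1)
    mean = (ap.sqrt() * b * x0 + alphas[t].unsqueeze(1).sqrt() * (1.0 - ap) * xt) / (1.0 - a)
    var = b * (1.0 - ap) / (1.0 - a)
    return mean + var.sqrt() * torch.randn_like(xt)

# Toy stand-ins for the real motion generator / discriminator (assumptions).
G = nn.Sequential(nn.Linear(D + Z + 1, 512), nn.SiLU(), nn.Linear(512, D))      # (x_t, z, t) -> x_0
Dnet = nn.Sequential(nn.Linear(2 * D + 1, 512), nn.SiLU(), nn.Linear(512, 1))   # (x_{t-1}, x_t, t) -> score

def ddgan_step(x0, opt_g, opt_d):
    B = x0.size(0)
    t = torch.randint(0, T, (B,))
    tf = t.float().unsqueeze(1) / T                        # scalar timestep conditioning
    xt = q_sample(x0, t, torch.randn_like(x0))
    x_prev_real = q_posterior_sample(x0, xt, t)            # "real" x_{t-1} from data

    # Discriminator: real vs. generated x_{t-1}, both conditioned on x_t.
    z = torch.randn(B, Z)
    x_prev_fake = q_posterior_sample(G(torch.cat([xt, z, tf], 1)), xt, t)
    d_loss = (F.softplus(-Dnet(torch.cat([x_prev_real, xt, tf], 1))).mean()
              + F.softplus(Dnet(torch.cat([x_prev_fake.detach(), xt, tf], 1))).mean())
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: make the denoised sample indistinguishable from a real one.
    z = torch.randn(B, Z)
    x_prev_fake = q_posterior_sample(G(torch.cat([xt, z, tf], 1)), xt, t)
    g_loss = F.softplus(-Dnet(torch.cat([x_prev_fake, xt, tf], 1))).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# One step on random stand-in features, just to show the call pattern.
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(Dnet.parameters(), lr=1e-4)
print(ddgan_step(torch.randn(32, D), opt_g, opt_d))
```

The adversarial setup is what lets the model denoise in only a handful of steps (T = 4 above) instead of the hundreds a Gaussian reverse process needs.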
To generate motion from a text prompt with a trained checkpoint:
python -m sample.generate --dataset humanml --output_dir ./save/epoch325/toolbox --exp experiment --epoch_id 325 --text_prompt "the person walked forward and is picking up his toolbox."
To generate multiple samples, with several repetitions each, from the downloaded pretrained model:
python -m sample.generate --model_path ./save/humanml_trans_enc_512/model000200000.pt --num_samples 10 --num_repetitions 3
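A generation run writes its outputs to --output_dir. Assuming the sampler follows MDM's convention of saving a pickled results.npy dict (the keys 'motion' and 'text' are an assumption here; check your run's output for the actual file name and contents), the results can be inspected like this:

```python
import numpy as np

# Load an MDM-style results file; the path matches the --output_dir used
# above, and the dict keys are assumed from MDM's sampler, not verified here.
results = np.load('./save/epoch325/toolbox/results.npy', allow_pickle=True).item()
print(results['text'])            # the prompt(s) each sample was conditioned on
print(results['motion'].shape)    # one generated motion per sample/repetition
```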
To evaluate a trained checkpoint on the HumanML3D benchmark (mm_short runs the shorter multimodality evaluation):
python -m eval.eval_humanml --dataset humanml --model_path saved_info/dd-gan/humanml/experiment/netG_325 --eval_mode mm_short --output_dir ./save --exp experiment --epoch_id 325 --node_rank 0 --text_prompt "A person jumping"
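The multimodality metric targeted by mm_short comes from the standard HumanML3D evaluation suite: several motions are generated per text, embedded with the downloaded motion encoder, and scored by the average distance between pairs of embeddings. The sketch below shows that final reduction only; the shapes and the half-vs-half pairing scheme are illustrative assumptions, and the official evaluator's exact pair sampling may differ.

```python
import torch

# feats: (num_texts, repetitions, feat_dim) motion embeddings, several
# repetitions per text. Higher scores mean more diverse generations per prompt.
def multimodality(feats: torch.Tensor) -> float:
    half = feats.shape[1] // 2
    a, b = feats[:, :half], feats[:, half:2 * half]   # pair the two halves of the repetitions
    return torch.linalg.norm(a - b, dim=-1).mean().item()

print(multimodality(torch.randn(32, 10, 512)))        # random stand-in embeddings
```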
We thank the authors of "Human Motion Diffusion Model" and "Tackling the Generative Learning Trilemma with Denoising Diffusion GANs" for their contributions. Their ideas, valuable insights, and codebases allowed us to implement our work.
Our code is distributed under both the MIT License and the NVIDIA License.