Music-Driven Group Choreography

AIOZ
University of Liverpool

Abstract

Music-driven choreography is a challenging problem with a wide variety of industrial applications. Recently, many methods have been proposed to synthesize dance motions from music for a single dancer. However, generating dance motion for a group remains an open problem. In this paper, we present AIOZ-GDANCE, a new large-scale dataset for music-driven group dance generation. Unlike existing datasets that only support single-dancer choreography, our new dataset contains group dance videos, hence supporting the study of group choreography. We propose a semi-automatic labeling method with humans in the loop to obtain the 3D ground truth for our dataset. The proposed dataset consists of 16.7 hours of paired music and 3D motion from in-the-wild videos, covering 7 dance styles and 16 music genres. We show that naively applying a single-dancer generation technique to create group dance motion can lead to unsatisfactory results, such as inconsistent movements and collisions between dancers. Based on our new dataset, we propose a new method that takes an input music sequence and a set of 3D positions of dancers to efficiently produce multiple group-coherent choreographies. We propose new evaluation metrics for measuring group dance quality and perform intensive experiments to demonstrate the effectiveness of our method. Our code and dataset will be released to facilitate future research on group dance generation.

Dataset Description

AIOZ-GDANCE comprises 16.7 hours of whole-body motion and music audio of group dancing. The duration of each video in the dataset ranges from 15 to 60 seconds. We randomly split the videos into train, validation, and test sets containing 80%, 10%, and 10% of the total videos, respectively (a minimal split sketch follows the table below). The dataset is large-scale and covers a wide variety of music genres and dance styles.
Criteria          Train    Validate  Test    Total
Duration (hours)  13.5     1.6       1.6     16.7
Frames            1,459K   175K      174K    1,808K
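
To make the split protocol concrete, below is a minimal Python sketch of a random 80/10/10 video-level split. The function name, seed, and use of the standard random module are illustrative assumptions, not part of the released codebase.

import random

def split_videos(video_ids, seed=0, ratios=(0.8, 0.1, 0.1)):
    # Randomly partition video IDs into train/validate/test subsets.
    # The 80/10/10 ratios follow the dataset protocol; the signature
    # and seed are assumptions for illustration only.
    ids = list(video_ids)
    random.Random(seed).shuffle(ids)
    n_train = int(ratios[0] * len(ids))
    n_val = int(ratios[1] * len(ids))
    return {
        "train": ids[:n_train],
        "validate": ids[n_train:n_train + n_val],
        "test": ids[n_train + n_val:],
    }

# Example: 1,000 hypothetical video IDs -> 800 / 100 / 100.
splits = split_videos([f"video_{i:04d}" for i in range(1000)])
print({name: len(v) for name, v in splits.items()})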

Method

We propose the first baseline for group dance generation that can jointly generate multiple dance motions expressively and coherently. Our model takes as input a music sequence and a set of initial dancer positions, and then auto-regressively generates coherent group dance motions that are attuned to the input music.
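
For illustration, the following is a minimal PyTorch sketch of the auto-regressive rollout described above. The model interface, tensor names, and shapes here are our assumptions for exposition; they do not describe the actual network architecture.

import torch

@torch.no_grad()
def generate_group_dance(model, music_features, init_positions, num_frames):
    # Assumed model interface:
    #   model.initial_poses(init_positions) -> (N, D_pose) seed pose per dancer
    #   model(music, history)               -> (N, D_pose) next pose per dancer
    # music_features: (T, D_music) per-frame audio features, T >= num_frames
    # init_positions: (N, 3) initial 3D position of each of the N dancers
    poses = [model.initial_poses(init_positions)]
    for t in range(1, num_frames):
        history = torch.stack(poses, dim=0)                  # (t, N, D_pose)
        next_pose = model(music_features[: t + 1], history)  # condition on music + past motion
        poses.append(next_pose)
    return torch.stack(poses, dim=0)                         # (num_frames, N, D_pose)

Predicting all dancers jointly at every step, rather than running a single-dancer model independently per dancer, is what allows the generated group to stay coherent and avoid the collisions noted in the abstract.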

More Demonstrations

[Video demonstrations of generated group choreographies.]

BibTeX

@inproceedings{aiozGdance,
  author    = {Le, Nhat and Pham, Thang and Do, Tuong and Tjiputra, Erman and Tran, Quang D. and Nguyen, Anh},
  title     = {Music-Driven Group Choreography},
  booktitle = {CVPR},
  year      = {2023},
}