Modeling Temporal Dependencies in High-Dimensional Sequences: Application to Polyphonic Music Generation and Transcription

Nicolas Boulanger-Lewandowski, Yoshua Bengio and Pascal Vincent (Université de Montréal)
In Proceedings of the 29th International Conference on Machine Learning (ICML 2012)

Download PDF

Listen to MP3 samples

Browse our Python Hessian-free code

Dataset information

Important: If you intend to run experiments and compare your model's accuracy to our results, make sure to compute the expected frame-level accuracy correctly. For each time step, given the ground-truth sequence history, compute the expectation of the accuracy of the output vector for that time step, taken over the output vector configurations defined by the conditional distribution of your probabilistic model. Accuracy is computed as in Bay et al. (2009) and takes into account insertions, misses and replacement errors. Then report the average of that expectation across time steps. If the expectation is not tractable under your model, as for the RNN-RBM/NADE, you can estimate it by sampling vector configurations and reporting the empirical mean of the accuracy; increase the number of samples until the standard error of the mean is satisfactory (usually 20-50 samples per time step for < 0.1% error). Note that the log-likelihood metric is much more meaningful than accuracy for polyphonic music generation and transcription, and I recommend basing your evaluation on it exclusively.
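A minimal sketch of this sampling-based estimate follows. This is not our evaluation code: model_sample is a hypothetical stand-in for whatever routine draws one binary output frame from your model's conditional distribution given the true history, and counting all-silent frames as correct is an assumption made here for simplicity.

import numpy as np

def frame_accuracy(pred, truth):
    # Bay et al. (2009) frame-level accuracy: TP / (TP + FP + FN)
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    denom = tp + fp + fn
    # all-silent frame: counted as correct here (an assumption)
    return tp / float(denom) if denom > 0 else 1.0

def expected_accuracy(truth, model_sample, n_samples=50):
    # Monte Carlo estimate of the per-step expected accuracy, averaged
    # over time steps, always conditioning on the ground-truth history
    step_means = []
    for t in range(len(truth)):
        history = truth[:t]
        samples = [model_sample(history) for _ in range(n_samples)]
        step_means.append(np.mean([frame_accuracy(s, truth[t]) for s in samples]))
    return np.mean(step_means)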

Below are the source files (MIDI) for the four datasets evaluated in the paper, split into train, validation and test sets. You can generate piano-rolls from the source files by transposing each sequence to C major or C minor and sampling frames every eighth note (quarter note for the JSB chorales), following the beat information present in the MIDI file; a rough sketch of the sampling step is given after the footnotes. Alternatively, pickled piano-rolls for use with the Python language are also provided.

1. Please see the Copyright page.
2. The original collection is also available in ABC format.
3. Please read the License Agreement.
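Here is a rough sketch of the eighth-note sampling step, assuming the third-party pretty_midi library (not what we used for the paper); transposition to C major/C minor and the quarter-note case for the JSB chorales are omitted:

import pretty_midi  # third-party library, used here for illustration only

pm = pretty_midi.PrettyMIDI('some_file.mid')
beats = pm.get_beats()  # beat (quarter-note) times from the MIDI beat information
# insert midpoints between beats to obtain eighth-note sampling times
times = []
for b0, b1 in zip(beats[:-1], beats[1:]):
    times.extend([b0, (b0 + b1) / 2.0])
# list of active MIDI pitches at each sampling time
piano_roll = [sorted({note.pitch
                      for inst in pm.instruments if not inst.is_drum
                      for note in inst.notes
                      if note.start <= t < note.end})
              for t in times]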


Example usage:

import cPickle
# open in binary mode to read the pickled piano-rolls (Python 2)
dataset = cPickle.load(open('MuseData.pickle', 'rb'))

This will load a dictionary with 'train', 'valid' and 'test' keys, whose corresponding values are lists of sequences. Each sequence is itself a list of time steps, and each time step is a list of the non-zero elements of the piano-roll at that instant (as MIDI note numbers, between 21 and 108 inclusive).
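For convenience, each sequence can be converted to a binary piano-roll matrix. A small sketch, assuming the 88-key range (MIDI notes 21-108) described above:

import numpy as np

def to_matrix(sequence, min_pitch=21, max_pitch=108):
    # one row per time step, one column per piano key; 1 marks an active note
    roll = np.zeros((len(sequence), max_pitch - min_pitch + 1), dtype=np.int8)
    for t, notes in enumerate(sequence):
        for pitch in notes:
            roll[t, pitch - min_pitch] = 1
    return roll

train_rolls = [to_matrix(seq) for seq in dataset['train']]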