This web repository contains musical examples from our paper *LakhNES: Improving multi-instrumental music generation with cross-domain pre-training*. LakhNES is a Transformer model that benefits from transfer learning: it is pre-trained on the Lakh MIDI dataset and fine-tuned on the NES-MDB dataset of 8-bit music. Code can be found here.
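For a concrete picture of this pre-train/fine-tune recipe, here is a minimal PyTorch sketch. The `EventLM` model, the placeholder batches, and all hyperparameters below are illustrative stand-ins, not the Transformer-XL configuration or training scripts from our code repository.

```python
# Minimal sketch of the two-stage recipe: pre-train a next-event language
# model on Lakh MIDI (mapped onto the NES event vocabulary), then fine-tune
# it on NES-MDB. EventLM is a small stand-in, not the actual Transformer-XL.
import torch
import torch.nn as nn

class EventLM(nn.Module):
    """Tiny causal next-event language model (stand-in for Transformer-XL)."""
    def __init__(self, vocab_size=512, d_model=256, n_layers=4, n_heads=4, max_len=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):  # tokens: (batch, seq_len) LongTensor of event ids
        x = self.embed(tokens) + self.pos(torch.arange(tokens.size(1)))
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        return self.head(self.encoder(x, mask=mask))  # (batch, seq_len, vocab)

def train_lm(model, batches, epochs, lr):
    """Shared loop for both stages: next-event prediction with cross-entropy."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for tokens in batches:
            logits = model(tokens[:, :-1])  # predict event t+1 from events <= t
            loss = loss_fn(logits.reshape(-1, logits.size(-1)),
                           tokens[:, 1:].reshape(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()

# Placeholder data: real batches would be event-id sequences extracted from
# Lakh MIDI and NES-MDB by the preprocessing code in the repository.
lakh_event_batches = [torch.randint(0, 512, (8, 128)) for _ in range(4)]
nesmdb_event_batches = [torch.randint(0, 512, (8, 128)) for _ in range(4)]

model = EventLM()
train_lm(model, lakh_event_batches, epochs=1, lr=2e-4)    # stage 1: pre-train
train_lm(model, nesmdb_event_batches, epochs=1, lr=2e-5)  # stage 2: fine-tune
```

The same next-event objective is used in both stages; only the data source (and, in this sketch, the learning rate) changes between pre-training and fine-tuning.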
This section showcases unconditional examples from our LakhNES model, i.e., chiptunes generated from scratch.
This section showcases examples of our model continuing human-composed material. We do this by feeding LakhNES a snippet of a human-composed song that it never saw during training and asking it to continue the piece.
Game | Priming sequence | LakhNES Continuation 1 | LakhNES Continuation 2 | Human Continuation |
---|---|---|---|---|
Bubble Bobble | ||||
Bubble Bobble | ||||
Operation Wolf | ||||
Vs. Duck Hunt | ||||
The Battle of Midway | ||||
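To make the continuation procedure described above concrete, here is a minimal sampling sketch. It assumes a trained next-event model with the interface of the `EventLM` sketch above; `prime_events` stands for the event-id encoding of one of the priming sequences in the table, and the context length, temperature, and number of generated events are illustrative rather than the settings used for these examples.

```python
# Hedged sketch of priming + continuation; not the generation script from the
# repository. `model` is any causal next-event model (e.g. the EventLM sketch
# above), and `prime_events` is a list of event ids encoding a human snippet.
import torch

@torch.no_grad()
def continue_events(model, prime_events, num_new=500, context_len=512, temperature=0.95):
    """Feed the human-composed prime, then sample new events one at a time."""
    events = list(prime_events)
    for _ in range(num_new):
        context = torch.tensor(events[-context_len:]).unsqueeze(0)  # (1, <=context_len)
        logits = model(context)[0, -1] / temperature                # last-step logits
        probs = torch.softmax(logits, dim=-1)
        events.append(torch.multinomial(probs, 1).item())
    return events[len(prime_events):]  # keep only the model's continuation

# e.g. continuation = continue_events(model, prime_events)
```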
It would be useful if our generative model could assist in human-guided composition. Here, we condition LakhNES on the rhythm from a human-composed song and ask it to fill in the notes.
Game | Rhythm specification | LakhNES Notes 1 | LakhNES Notes 2 | Human notes |
---|---|---|---|---|
Choplifter | ||||
Digital Devil Story | ||||
Dragon Buster II | ||||
Terra Cresta | ||||
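One plausible way to implement this kind of rhythm conditioning is sketched below; it is not necessarily the exact procedure used to produce the examples above. Timing ("wait") events are copied verbatim from the human rhythm, and the model's output distribution is restricted to note events when filling each slot. `is_wait_event` and `note_event_ids` are assumed helpers describing the event vocabulary.

```python
# Hedged sketch of rhythm-conditioned sampling; not necessarily the procedure
# used for the examples above. Timing ("wait") events are copied from the
# human rhythm, and the model only picks which note events fill each slot.
# `is_wait_event` and `note_event_ids` are assumed vocabulary helpers.
import torch

@torch.no_grad()
def fill_in_notes(model, rhythm_events, is_wait_event, note_event_ids,
                  context_len=512, temperature=0.95):
    note_ids = torch.tensor(sorted(note_event_ids))
    out = []
    for ev in rhythm_events:
        if is_wait_event(ev) or not out:
            out.append(ev)  # keep human timing (and the very first event) verbatim
            continue
        context = torch.tensor(out[-context_len:]).unsqueeze(0)
        logits = model(context)[0, -1] / temperature
        probs = torch.softmax(logits[note_ids], dim=-1)  # restrict choices to note events
        out.append(note_ids[torch.multinomial(probs, 1)].item())
    return out
```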
In this section we show an uncurated collection of eight sound examples from each method included in our user study. This is intended both to contextualize the study's results and to illustrate the general capabilities of each method.
Method | Example 1 | Example 2 | Example 3 | Example 4 | Example 5 | Example 6 | Example 7 | Example 8 |
---|---|---|---|---|---|---|---|---|
Random (control) | ||||||||
5-gram | ||||||||
LSTM | ||||||||
Transformer-XL | ||||||||
LakhNES | ||||||||
Real data | ||||||||