The Magenta team is very proud to have been awarded “Best Demo” at the Neural Information Processing Systems (NIPS) conference in Barcelona. Here is a short video of the demo in action at the Google Brain office:
The demo consists of several components:
- The Magenta library, which is built with TensorFlow and provides an API to generate MIDI data with neural network models.
- The Magenta-MIDI interface, which provides an interactive communication layer between TensorFlow and MIDI.
- Models developed by the Magenta team and our collaborators, including a set of 6 neural networks: a basic one-hot melody RNN, a melody RNN with lookback, a melody RNN with attention, an RNN which had been fine-tuned with reinforcement learning, a drum RNN, and a polyphonic RNN based on BachBot. Pre-trained versions of these models can all be downloaded from our GitHub.
- A visualization built by
- A MIDI keyboard and drum pad.
- An Ableton Live / MaxMSP interface that connects all of the above, so that participants can improvise with the models.
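To give a flavor of the “one-hot” representation mentioned above, here is a minimal sketch of how a melody can be turned into one-hot vectors. This is our own illustration, not Magenta’s actual encoder; the pitch range and event vocabulary below are assumptions made for the example.

```python
# Illustrative one-hot melody encoding (not Magenta's real encoder).
# Event vocabulary: index 0 = rest, index 1 = note-off,
# indices 2.. = MIDI pitches in an assumed range of 48-84.
LOW, HIGH = 48, 84
VOCAB_SIZE = 2 + (HIGH - LOW + 1)

def encode_event(event):
    """Map a melody event to a one-hot vector: all zeros except a single 1."""
    if event == "rest":
        index = 0
    elif event == "note_off":
        index = 1
    else:  # event is a MIDI pitch number
        index = 2 + (event - LOW)
    vec = [0] * VOCAB_SIZE
    vec[index] = 1
    return vec

# A short fragment: middle C (MIDI 60), then the note is released, then a rest.
melody = [60, "note_off", "rest"]
one_hot = [encode_event(e) for e in melody]
```

At each step, the RNN consumes one of these vectors and outputs a probability distribution over the same vocabulary for the next event, which is how the melody models can continue a phrase a participant plays on the keyboard.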
We’re also pleased to share all of the code we wrote to put this demo together. Instructions and code for running the full demo, which requires Ableton Live and MaxMSP, are now available in our
If you do not have access to these expensive software packages, you can run a
simplified version of the demo using these
If you use this interface to produce music, aid a live performance, or even hold your own AI jam session, we’d love to hear about it! Please drop us a line at