Making a Neural Synthesizer Instrument

By | machinelearning, ML, TensorFlow

In a previous post, we described the details of NSynth (Neural Audio Synthesis), a new approach to audio synthesis using neural networks. We hinted at further releases to enable you to make your own music with these technologies. Today, we’re excited to follow through on that promise by releasing a playable set of neural synthesizer instruments:

  • An interactive AI Experiment made in collaboration with Google Creative Lab that lets you interpolate between pairs of instruments to create new sounds.
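The interpolation behind that experiment can be pictured as blending points in NSynth's learned embedding space. Here is a minimal sketch; the names and tiny vectors are hypothetical, since real NSynth embeddings come from its encoder and are decoded back to audio by its WaveNet decoder:

```python
def interpolate(z_a, z_b, alpha):
    """Blend two instrument embeddings: alpha=0 gives z_a, alpha=1 gives z_b."""
    return [(1.0 - alpha) * a + alpha * b for a, b in zip(z_a, z_b)]

z_flute = [0.2, -1.0, 0.5, 0.0]  # toy 4-dimensional embeddings
z_organ = [1.0, 0.0, -0.5, 2.0]
z_mix = interpolate(z_flute, z_organ, 0.5)  # a point halfway between the two
```

Sweeping alpha from 0 to 1 traces a path between the two instruments, which is what the experiment's slider does.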

Generating Long-Term Structure in Songs and Stories


One of the difficult problems in using machine learning to generate sequences, such as melodies, is creating long-term structure. Long-term structure comes very naturally to people, but it’s very hard for machines. Basic machine learning systems can generate a short melody that stays in key, but they have trouble generating a longer melody that follows a chord progression, or follows a multi-bar song structure of verses and choruses. Likewise, they can produce a screenplay with grammatically correct sentences, but not one with a compelling plot line. Without long-term structure, the content produced by recurrent neural networks (RNNs) often seems wandering and random.

But what if these RNN models could recognize and reproduce longer-term structure? Read More

Magenta MIDI Interface


The Magenta team is happy to announce our first step toward providing an easy-to-use
interface between musicians and TensorFlow. This release makes it
possible to connect a TensorFlow model to a MIDI controller and synthesizer in
real time.

Don’t have your own MIDI keyboard? There are many free software
components you can download and use with our interface. Find out more details on
setting up your own TensorFlow-powered MIDI rig in the full post.
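The plumbing such a rig handles can be pictured with a couple of toy helpers (hypothetical names, not the interface's actual API): incoming controller messages become events a model can consume, and model output becomes notes a synthesizer can play.

```python
def midi_to_hz(note):
    """Frequency in Hz of a MIDI note number (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

def notes_from_messages(messages):
    """Keep only note-on events, as (note, time) pairs, for the model."""
    return [(m['note'], m['time']) for m in messages if m['type'] == 'note_on']

# A captured "call" phrase the model would answer in real time.
call = notes_from_messages([
    {'type': 'note_on', 'note': 60, 'time': 0.0},
    {'type': 'note_off', 'note': 60, 'time': 0.4},
    {'type': 'note_on', 'note': 64, 'time': 0.5},
])
```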

Read More

Multistyle Pastiche Generator


Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur have extended
image style transfer by creating a single network that performs more than
one stylization of an image. The paper[1] has also been summarized in a
Research Blog post. The source code and trained models behind the paper
are being released here.

The model creates a succinct description of a style. These descriptions can be
combined to create new mixtures of styles. The full post includes an interactive
demo in which a picture of Picabo[5] is stylized with a mixture of three
different styles, with sliders below the image for blending new combinations.
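The paper represents each style as the scale and shift parameters of conditional instance normalization layers, and blending styles amounts, roughly, to taking a convex combination of those per-style parameters. A sketch of that idea, with an illustrative helper and plain-list vectors rather than the released code:

```python
def blend_styles(style_vectors, weights):
    """Blend per-style parameter vectors with weights normalized to sum to 1.

    Illustrative only: in the real model the "style description" is the set
    of scale/shift parameters of the instance-normalization layers.
    """
    total = float(sum(weights))
    if total <= 0:
        raise ValueError("weights must sum to a positive value")
    w = [v / total for v in weights]
    dim = len(style_vectors[0])
    return [sum(w[i] * sv[d] for i, sv in enumerate(style_vectors))
            for d in range(dim)]

# Equal parts of two styles lands at the midpoint of their descriptions.
mix = blend_styles([[1.0, 0.0], [0.0, 1.0]], [1, 1])
```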





Read More

Tuning Recurrent Neural Networks with Reinforcement Learning


We are excited to announce our new RL Tuner algorithm, a method that uses Reinforcement Learning (RL) to enhance the performance of an LSTM trained on data. We create an RL reward function that teaches the model to follow certain rules, while still allowing it to retain information learned from data. We use RL Tuner to teach concepts of music theory to an LSTM trained to generate melodies. The two videos in the full post show samples from the original LSTM model and from the same model enhanced with RL Tuner.
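The shape of such a reward can be sketched as follows; this is a hedged illustration, where the single in-key rule and the constant c stand in for the paper's larger set of music-theory rules and its exact weighting:

```python
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of the C major scale

def theory_reward(note, scale=C_MAJOR):
    """A toy music-theory rule: +1 for staying in key, -1 otherwise."""
    return 1.0 if note % 12 in scale else -1.0

def total_reward(log_prob, note, c=0.5):
    """Combine the pre-trained LSTM's log-probability for the note (so the
    model retains what it learned from data) with the rule-based reward."""
    return log_prob + c * theory_reward(note)

r = total_reward(log_prob=-1.2, note=60)  # C4 is in key, so the rule adds c
```

Because the data term stays in the reward, the tuned model cannot satisfy the rules by abandoning everything it learned from the training melodies.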

Read More

Learning from A.I. Duet


Google Creative Lab just released A.I. Duet, an interactive experiment
which lets you play a music duet with the computer. You no longer need code or
special equipment to play along with a Magenta music generation model. Just point
your browser at A.I. Duet
and use your laptop keyboard or a MIDI keyboard to make some music. You can learn
more by reading Alex Chen’s
Google Blog post.
A.I. Duet is a really fun way to interact with a Magenta music model.
As A.I. Duet is open source,
it can also grow into a powerful tool for machine learning research.
I learned a lot by experimenting with the underlying code.

Read More

Magenta returns to Moogfest


Magenta was first announced to the public
nearly one year ago at Moogfest, a yearly music
festival in Durham, NC that brings together artists, futurist thinkers,
inventors, entrepreneurs, designers, engineers, scientists, and musicians to
explore emerging sound technologies.

This year we will be returning
to continue the conversation, share what we’ve built in the last year, and help
you make music with Magenta.

Read More