When computers learn to swear: Using machine learning for better online conversations

By | machinelearning, TensorFlow | No Comments

Imagine trying to have a conversation with your friends about the news you read this morning, but every time you said something, someone shouted in your face, called you a nasty name or accused you of some awful crime. You’d probably leave the conversation. Unfortunately, this happens all too frequently online as people try to discuss ideas on their favorite news sites but instead get bombarded with toxic comments.  

Seventy-two percent of American internet users have witnessed harassment online and nearly half have personally experienced it. Almost a third self-censor what they post online for fear of retribution. According to the same report, online harassment has affected the lives of roughly 140 million people in the U.S., and many more elsewhere.

This problem doesn’t just impact online readers. News organizations want to encourage engagement and discussion around their content, but find that sorting through millions of comments to find those that are trolling or abusive takes a lot of money, labor, and time. As a result, many sites have shut down comments altogether. But they tell us that isn’t the solution they want. We think technology can help.

Today, Google and Jigsaw are launching Perspective, an early-stage technology that uses machine learning to help identify toxic comments. Through an API, publishers—including members of the Digital News Initiative—and platforms can access this technology and use it for their sites.

How it works

Perspective reviews comments and scores them based on how similar they are to comments people said were “toxic” or likely to make someone leave a conversation. To learn how to spot potentially toxic language, Perspective examined hundreds of thousands of comments that had been labeled by human reviewers. Each time Perspective finds new examples of potentially toxic comments, or is provided with corrections from users, it can get better at scoring future comments.

Publishers can choose what they want to do with the information they get from Perspective. For example, a publisher could flag comments for its own moderators to review and decide whether to include them in a conversation. Or a publisher could provide tools to help their community understand the impact of what they are writing—by, for example, letting the commenter see the potential toxicity of their comment as they write it. Publishers could even just allow readers to sort comments by toxicity themselves, making it easier to find great discussions hidden under toxic ones.
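A publisher consuming such scores could rank or gate comments client-side. Here is a minimal sketch of that idea; the dict format and the 0.0–1.0 score scale are illustrative assumptions, not the actual Perspective API response format:

```python
# Hypothetical sketch: ranking comments by a toxicity score such as the one
# Perspective returns. Comments above a threshold are held for human review;
# the rest are shown least-toxic-first.

def sort_by_toxicity(comments, threshold=0.8):
    """Split comments into a readable list (least toxic first) and a
    held-back list for moderator review."""
    readable = sorted(
        (c for c in comments if c["toxicity"] < threshold),
        key=lambda c: c["toxicity"],
    )
    held = [c for c in comments if c["toxicity"] >= threshold]
    return readable, held

comments = [
    {"text": "Great reporting, thanks!", "toxicity": 0.02},
    {"text": "You are an idiot.", "toxicity": 0.95},
    {"text": "I disagree with the premise.", "toxicity": 0.15},
]
readable, held = sort_by_toxicity(comments)
```

The threshold and the decision of what to do with held-back comments remain entirely the publisher's, which is the point of exposing a raw score rather than a verdict.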

We’ve been testing a version of this technology with The New York Times, where an entire team sifts through and moderates each comment before it’s posted—reviewing an average of 11,000 comments every day. That’s a lot of comments. As a result, the Times has comments on only about 10 percent of its articles. We’ve worked together to train models that allow Times moderators to sort through comments more quickly, and we’ll work with them to enable comments on more articles every day.

Where we go from here

Perspective joins the TensorFlow library and the Cloud Machine Learning Platform as one of many new machine learning resources Google has made available to developers. This technology is still developing. But that’s what’s so great about machine learning—even though the models are complex, they’ll improve over time. When Perspective is in the hands of publishers, it will be exposed to more comments and develop a better understanding of what makes certain comments toxic.

While we improve the technology, we’re also working to expand it. Our first model is designed to spot toxic language, but over the next year we’re keen to partner and deliver new models that work in languages other than English as well as models that can identify other perspectives, such as when comments are unsubstantial or off-topic.

In the long run, Perspective is about more than just improving comments. We hope we can help improve conversations online.




Data Science Weekly – Issue 170




Curated news, articles and jobs related to Data Science. 
Keep up with all the latest developments

Issue #170

Feb 23 2017

Editor Picks

A Message from this week's Sponsor:

  • Harness the business power of big data.

    How far could you go with the right experience and education? Find out at Capitol Technology University. Earn your PhD in Management & Decision Sciences — in as little as three years — in convenient online classes. Banking, healthcare, energy and business all rely on insightful analysis, and business analytics spending will grow to $89.6 billion in 2018. This is a tremendous opportunity — and Capitol’s PhD program will prepare you for it. Learn more now.

 


 

Data Science Articles & Videos

  • Tracking my movements on the football pitch with Fitbit
    I mostly run and play football, and when it comes to tracking my movements while jogging, my brand new Fitbit Surge does the job almost perfectly. I decided to test its effectiveness on the football field, so I wore it during a game in Paris. Fitbit allows you to export your data in a .TCX format. I did that, then imported it into Google Earth to check whether the GPS was accurate or not…
  • Feel The Kern – Generating Proportional Fonts with AI
    About a year ago I read two blog posts about generating fonts with deep learning; one by Erik Bernhardsson and one by TJ Torres at StitchFix… So why not just take what Erik and TJ have made and simply use that to generate new fonts? Because their models are lacking something: even though they manage to capture the styles of individual characters very well, they do not incorporate the styling found between pairs of characters, namely the intended spacing between them, known as kerning…
  • Learning from A.I. Duet
    Google Creative Lab just released A.I. Duet, an interactive experiment which lets you play a music duet with the computer. You no longer need code or special equipment to play along with a Magenta music generation model. Just point your browser at A.I. Duet and use your laptop keyboard or a MIDI keyboard to make some music…
  • Twitter researchers offer clues for why Trump won
    Two University of Rochester researchers are out with a new study about why the 2016 Presidential election turned out the way it did. Professor Jiebo Luo and PhD candidate Yu Wang conducted an extensive 14-month study of each candidate’s Twitter followers and arrived at some very interesting results…
  • Learning to generate one-sentence biographies from Wikidata
    We investigate the generation of one-sentence Wikipedia biographies from facts derived from Wikidata slot-value pairs. We train a recurrent neural network sequence-to-sequence model with attention to select facts and generate textual summaries. These automated one-sentence "biographies" from Wikidata are preferred by readers over Wikipedia's first sentence in 40% of cases…
  • Deep Nets Don't Learn Via Memorization
    We use empirical methods to argue that deep neural networks (DNNs) do not achieve their performance by memorizing training data, in spite of overly expressive model architectures. Instead, they learn a simple available hypothesis that fits the finite data samples…
  • Playing SNES in the Retro Learning Environment
    Mastering a video game requires skill, tactics and strategy. While these attributes may be acquired naturally by human players, teaching them to a computer program is a far more challenging task. As a result, the Arcade Learning Environment (ALE) has become a commonly used benchmark environment allowing algorithms to train on various Atari 2600 games. In this paper we introduce a new learning environment, the Retro Learning Environment — RLE, that can run games on the Super Nintendo Entertainment System (SNES), Sega Genesis and several other gaming consoles…

 


 

Jobs

 

  • Data Scientist – SeatGeek – NYC

    SeatGeek operates a unique business model in a complicated, opaque market. Many of the hardest problems we face have never been tackled at scale and do not have clear questions, let alone answers. Moving forward requires critical thinking, rapid prototyping, and intellectual dexterity…

 


 

Training & Resources

 

  • Text mining in R: a tutorial
    At the end of this tutorial, you’ll have developed the skills to read in large files with text and derive meaningful insights you can share from that analysis. You’ll have learned how to do text mining in R, an essential data mining tool…

  • Experiment with Dask and TensorFlow
    This post briefly describes potential interactions between Dask and TensorFlow and then goes through a concrete example using them together for distributed training with a moderately complex architecture…

 


 

 
P.S. Interested in reaching fellow readers of this newsletter? Consider sponsoring! Email us for details 🙂 – All the best, Hannah & Sebastian

Follow on Twitter
Copyright © 2013-2016 DataScienceWeekly.org, All rights reserved.


Google Cloud supports $3M in grant credits for the NSF BIGDATA program


Google Cloud Platform (GCP) serves more than one billion end-users, and we continue to seek ways to give researchers access to these powerful tools. Through the National Science Foundation’s BIGDATA grants program, we’re offering researchers $3M in Google Cloud Platform credits to use the same infrastructure, analytics and machine learning that we use to drive innovation at Google.

About the BIGDATA grants

The National Science Foundation (NSF) recently announced its flagship research program on big data, Critical Techniques, Technologies and Methodologies for Advancing Foundations and Applications of Big Data Sciences and Engineering (BIGDATA). The BIGDATA program encourages experimentation with datasets at scale. Google will provide cloud credits to qualifying NSF-funded projects, giving researchers access to the breadth of services on GCP, from scalable data management (Google Cloud Storage, Google Cloud Bigtable, Google Cloud Datastore), to analysis (Google BigQuery, Google Cloud Dataflow, Google Cloud Dataproc, Google Cloud Datalab, Google Genomics), to machine learning (Google Cloud Machine Learning, TensorFlow).

This collaboration combines NSF’s experience in managing diverse research portfolios with Google’s proven track record in secure and intelligent cloud computing and data science. NSF is accepting proposals from March 15, 2017 through March 22, 2017.  All proposals that meet NSF requirements will be reviewed through NSF’s merit review process.

GCP in action at Stanford University

To get an idea of the potential impact of GCP, consider Stanford University’s Center for Genomics and Personalized Medicine, where scientists work with data at a massive scale. Director Mike Snyder and his lab have been involved in a number of large efforts, from ENCODE to the Million Veteran Program. Snyder and his colleagues turned to Google Genomics, which gives scientists access to GCP to help secure, store, process, explore and share biological datasets. With the costs of cloud computing dropping significantly and demand for ever-larger genomics studies growing, Snyder thinks fewer labs will continue relying on local infrastructure.

“We’re entering an era where people are working with thousands or tens of thousands or even million genome projects, and you’re never going to do that on a local cluster very easily,” he says. “Cloud computing is where the field is going.”

“What you can do with Google Genomics — and you can’t do in-house — is run 1,000 genomes in parallel,” says Somalee Datta, bioinformatics director of Stanford University’s Center for Genomics and Personalized Medicine. “From our point of view, it’s almost infinite resources.”




GPUs are now available for Google Compute Engine and Cloud Machine Learning


Google Cloud Platform gets a performance boost today with the much anticipated public beta of NVIDIA Tesla K80 GPUs. You can now spin up NVIDIA GPU-based VMs in three GCP regions: us-east1, asia-east1 and europe-west1, using the gcloud command-line tool. Support for creating GPU VMs using the Cloud Console will be available later this week.

If you need extra computational power for deep learning, you can attach up to eight GPUs (4 K80 boards) to any custom Google Compute Engine virtual machine. GPUs can accelerate many types of computing and analysis, including video and image transcoding, seismic analysis, molecular modeling, genomics, computational finance, simulations, high performance data analysis, computational chemistry, fluid dynamics and visualization.

NVIDIA K80 GPU Accelerator Board

Rather than constructing a GPU cluster in your own datacenter, just add GPUs to virtual machines running in our cloud. GPUs on Google Compute Engine are attached directly to the VM, providing bare-metal performance. Each NVIDIA GPU in a K80 has 2,496 stream processors with 12 GB of GDDR5 memory. You can shape your instances for optimal performance by flexibly attaching 1, 2, 4 or 8 NVIDIA GPUs to custom machine shapes.

Google Cloud supports as many as 8 GPUs attached to custom VMs, allowing you to optimize the performance of your applications.

These instances support popular machine learning and deep learning frameworks such as TensorFlow, Theano, Torch, MXNet and Caffe, as well as NVIDIA’s popular CUDA software for building GPU-accelerated applications.

Pricing

Like the rest of our infrastructure, the GPUs are priced competitively and are billed per minute (10 minute minimum). In the US, each K80 GPU attached to a VM is priced at $0.700 per hour per GPU and in Asia and Europe, $0.770 per hour per GPU. As always, you only pay for what you use. This frees you up to spin up a large cluster of GPU machines for rapid deep learning and machine learning training with zero capital investment.
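The per-minute billing rule above can be made concrete with a small sketch. The rates are the US and Asia/Europe K80 prices quoted in this post; the function itself is an illustration of the described rule, not an official billing formula:

```python
# Sketch of per-minute GPU billing with a 10-minute minimum, using the
# K80 rates quoted above. Illustrative only.

US_RATE_PER_GPU_HOUR = 0.700
EU_ASIA_RATE_PER_GPU_HOUR = 0.770

def gpu_cost(minutes, num_gpus=1, rate_per_gpu_hour=US_RATE_PER_GPU_HOUR):
    """Estimated cost in USD of a GPU VM run, billed per minute
    with a 10-minute minimum."""
    billed_minutes = max(minutes, 10)
    return billed_minutes * (rate_per_gpu_hour / 60.0) * num_gpus

# A 5-minute experiment on one US K80 still bills the 10-minute minimum.
short_run = gpu_cost(minutes=5)

# Two hours on an 8-GPU machine in the US: 120 * (0.700 / 60) * 8 = $11.20
training_run = gpu_cost(minutes=120, num_gpus=8)
```

Per-minute granularity is what makes short, bursty experiments cheap: you pay for ten minutes, not a full hour.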

Supercharge machine learning

The new Google Cloud GPUs are tightly integrated with Google Cloud Machine Learning (Cloud ML), helping you slash the time it takes to train machine learning models at scale using the TensorFlow framework. Now, instead of taking several days to train an image classifier on a large image dataset on a single machine, you can run distributed training with multiple GPU workers on Cloud ML, dramatically shortening your development cycle and letting you iterate quickly on the model.

Cloud ML is a fully-managed service that provides end-to-end training and prediction workflow with cloud computing tools such as Google Cloud Dataflow, Google BigQuery, Google Cloud Storage and Google Cloud Datalab.

Start small and train a TensorFlow model locally on a small dataset. Then, kick off a larger Cloud ML training job against a full dataset in the cloud to take advantage of the scale and performance of Google Cloud GPUs. For more on Cloud ML, please see the Quickstart guide to get started, or this document to dive into using GPUs.

Next steps

Register for Cloud NEXT, sign up for the CloudML Bootcamp and learn how to supercharge performance using GPUs in the cloud. You can use the gcloud command-line tool to create a VM today and start experimenting with TensorFlow-accelerated machine learning. Detailed documentation is available on our website.




Play a duet with a computer, through machine learning


Technology can inspire people to be creative in new ways. Magenta, an open-source project we launched last year, aims to do that by giving developers tools to explore music using neural networks.

To help show what’s possible with Magenta, we’ve created an interactive experiment called A.I. Duet, which lets you play a duet with the computer. Just play some notes, and the computer will respond to your melody. You don’t even have to know how to play piano—it’s fun to just press some keys and listen to what comes back. We hope it inspires you—whether you’re a developer or musician, or just curious—to imagine how technology can help creative ideas come to life. Watch our video above to learn more, or just start playing with it.




Debug TensorFlow Models with tfdbg


Posted by Shanqing Cai, Software Engineer, Tools and Infrastructure.

We are excited to share TensorFlow Debugger (tfdbg), a tool that makes debugging of machine learning (ML) models in TensorFlow easier.

TensorFlow, Google’s open-source ML library, is based on dataflow graphs. A typical TensorFlow ML program consists of two separate stages:

  1. Setting up the ML model as a dataflow graph by using the library’s Python API,
  2. Training or performing inference on the graph by using the Session.run() method.

If errors and bugs occur during the second stage (i.e., the TensorFlow runtime), they are difficult to debug.

To understand why that is the case, note that to standard Python debuggers, the Session.run() call is effectively a single statement and does not expose the running graph’s internal structure (nodes and their connections) and state (output arrays or tensors of the nodes). Lower-level debuggers such as gdb cannot organize stack frames and variable values in a way relevant to TensorFlow graph operations. A specialized runtime debugger has been among the most frequently raised feature requests from TensorFlow users.

tfdbg addresses this runtime debugging need. Let’s see tfdbg in action with a short snippet of code that sets up and runs a simple TensorFlow graph to fit a simple linear equation through gradient descent.

import numpy as np
import tensorflow as tf
import tensorflow.python.debug as tf_debug
xs = np.linspace(-0.5, 0.49, 100)
x = tf.placeholder(tf.float32, shape=[None], name="x")
y = tf.placeholder(tf.float32, shape=[None], name="y")
k = tf.Variable([0.0], name="k")
y_hat = tf.multiply(k, x, name="y_hat")
sse = tf.reduce_sum((y - y_hat) * (y - y_hat), name="sse")
train_op = tf.train.GradientDescentOptimizer(learning_rate=0.02).minimize(sse)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

# Wrap the session for debugging: subsequent run() calls launch the tfdbg CLI.
sess = tf_debug.LocalCLIDebugWrapperSession(sess)
for _ in range(10):
    sess.run(train_op, feed_dict={x: xs, y: 42 * xs})

As the wrapper line in this example shows, the session object is wrapped in a debugging class (LocalCLIDebugWrapperSession), so calling the run() method will launch the command-line interface (CLI) of tfdbg. Using mouse clicks or commands, you can proceed through the successive run calls, inspect the graph’s nodes and their attributes, and visualize the complete history of the execution of all relevant nodes in the graph through the list of intermediate tensors. By using the invoke_stepper command, you can let the Session.run() call execute in “stepper mode”, in which you can step to nodes of your choice, observe and modify their outputs, and take further stepping actions, in a way analogous to debugging procedural languages (e.g., in gdb or pdb).

A frequently encountered class of issues in developing TensorFlow ML models is the appearance of bad numerical values (infinities and NaNs) due to overflow, division by zero, log of zero, etc. In large TensorFlow graphs, finding the source of such values can be tedious and time-consuming. With the help of the tfdbg CLI and its conditional breakpoint support, you can quickly identify the culprit node. The video below demonstrates how to debug infinity/NaN issues in a neural network with tfdbg:

A screencast of the TensorFlow Debugger in action, from this tutorial.
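The conditional breakpoints above work by scanning each intermediate tensor with a predicate, such as tfdbg's built-in has_inf_or_nan filter. The predicate itself reduces to a simple check, sketched here standalone with NumPy (the real tfdbg filter also receives tensor metadata; this is a simplification for illustration):

```python
import numpy as np

def has_inf_or_nan(tensor):
    """Return True if any element of the array is inf or NaN.
    Non-floating dtypes cannot hold inf/NaN, so they pass trivially."""
    tensor = np.asarray(tensor)
    if not np.issubdtype(tensor.dtype, np.floating):
        return False
    return bool(np.any(np.isinf(tensor)) or np.any(np.isnan(tensor)))

healthy = np.array([0.5, -1.2, 3.0])
overflowed = np.array([1.0, np.inf, 0.0])
divided_by_zero = np.array([np.nan])
```

A debugger that applies such a predicate to tensors in execution order stops at the first node whose output goes bad, which is exactly how the culprit node is pinpointed.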

Compared with alternative debugging options such as Print ops, tfdbg requires fewer lines of code change, provides more comprehensive coverage of the graphs, and offers a more interactive debugging experience. It will speed up your model development and debugging workflows. It also offers additional features such as offline debugging of dumped tensors from server environments and integration with tf.contrib.learn. To get started, please visit this documentation. This research paper lays out the design of tfdbg in greater detail.

The minimum required TensorFlow version for tfdbg is 0.12.1. To report bugs, please open issues on TensorFlow’s GitHub Issues Page. For general usage help, please post questions on StackOverflow using the tag tensorflow.

Acknowledgements
This project would not be possible without the help and feedback from members of the Google TensorFlow Core/API Team and the Applied Machine Intelligence Team.




Developer Advocates offer up their favorite Google Cloud NEXT 17 sessions


Here at Google Cloud, we employ a small army of developer advocates, DAs for short, who are out on the front lines at conferences, on customer premises, or on social media, explaining our technologies and communicating back to people like me and our product teams about your needs as a member of a development community.

DAs take the responsibility of advocating for developers seriously, and have spent time poring over the extensive Google Cloud Next ’17 session catalog, bookmarking the talks that will benefit you. To wit:

  • If you’re a developer working in Ruby, you know to turn to Aja Hammerly for all things Ruby/Google Cloud Platform (GCP)-related. Aja’s top pick for Rubyists at Next is Google Cloud Platform <3 Ruby with Google Developer Program Engineer Remi Taylor, but there are other noteworthy mentions on her personal blog.
  • Mete Atamel is your go-to DA for all things Windows on GCP. Selfishly, his top Next session is his own about running ASP.NET apps on GCP, but he has plenty more suggestions for you to choose from.
  • Groovy nut Guillaume Laforge is going to be one busy guy at Next, jumping between sessions about PaaS, serverless and containers, to name a few. Here’s the full list of his must-see sessions.
  • If you’re a game developer, let Mark Mandel be your guide. Besides co-presenting with Rob Whitehead, CTO of Improbable, Mark has bookmarked sessions about location-based gaming, using GPUs and game analytics. Mosey on over to his personal blog for the full list.
  • In the past year, Google Apps Script has opened the door to building amazing customizations for G Suite, our communication and collaboration platform. In this G Suite Developers blog post, Wesley Chun walks you through some of the cool Apps Script sessions, as well as sessions about App Maker and some nifty G Suite APIs. 
  • Want to attend sessions that teach you about our machine learning services? That’s where you’ll find our hands-on ML expert Sara Robinson, who in addition to recommending her favorite Next sessions, also examines her talk from last year’s event using Cloud Natural Language API. 

For my part, I’m really looking forward to Day 3, which we’re modeling after my favorite open source conferences thanks to Sarah Novotny’s leadership. We’ll have a carefully assembled set of open talks on Kubernetes, TensorFlow and Apache Beam that cover the technologies, how to contribute, the ecosystems around them and small group discussions with the developers. For a full list of keynotes, bootcamps and breakout sessions, check out the schedule and reserve your spot.




Learning from A.I. Duet


Google Creative Lab just released A.I. Duet, an interactive experiment which lets you play a music duet with the computer. You no longer need code or special equipment to play along with a Magenta music generation model. Just point your browser at A.I. Duet and use your laptop keyboard or a MIDI keyboard to make some music. You can learn more by reading Alex Chen’s Google Blog post.

A.I. Duet is a really fun way to interact with a Magenta music model. As A.I. Duet is open source, it can also grow into a powerful tool for machine learning research. I learned a lot by experimenting with the underlying code.
