How Google Cloud is transforming Japanese businesses

Filed under: machine learning, TensorFlow

This week, we welcomed 13,000 executives, developers, IT managers and partners to our largest Asia-Pacific Cloud event, Google Cloud Next Tokyo. During this event, we celebrated the many ways that Japanese companies such as Kewpie, Sony (and even cucumber farmers) have transformed and scaled their businesses using Google Cloud. 

Since the launch of the Google Cloud Tokyo region last November, roughly 40 percent of Google Compute Engine core-hour usage in Tokyo has come from customers new to Google Cloud Platform (GCP). The number of new customers using Compute Engine has grown by an average of 21 percent month over month for the last three months, and the total number of paid customers in Japan has grown by 70 percent over the last year.
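To put that month-over-month figure in perspective, here is a quick back-of-the-envelope calculation (assuming the 21 percent average compounds each month) of what it implies over a quarter:

```python
# A 21% average month-over-month increase, compounded over three months.
monthly_growth = 0.21
quarterly_factor = (1 + monthly_growth) ** 3

# About 1.77x, i.e. roughly 77% cumulative growth over the quarter.
print(round(quarterly_factor, 2))
```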

By supplying compliance statements and documents for FISC — an important Japanese compliance standard — for both GCP and G Suite, we’re making it easier to do business with Google Cloud in Japan.

Here are a few of the exciting announcements that came out of Next Tokyo:

Retailers embracing enterprise innovation  

One of Japan's biggest retailers, FamilyMart, will work with Google's Professional Services Organization to transform the way it works, reform its store operations, and build a retail model for the next generation. The project, dubbed "Famima 10x," kicks off with G Suite, which FamilyMart is using to enable a more flexible work style and a more collaborative, innovative culture, and to help its business embrace an ever-changing landscape. FamilyMart also plans to use big data analysis and machine learning to develop new ways of managing store operations.

Modernizing food production with cloud computing, data analytics and machine learning

Kewpie, a major Japanese food manufacturer famous for its mayonnaise, takes high standards of food production seriously. For its baby food, it used to depend on human eyes to evaluate four to five tons of food materials daily, per factory, to root out bad potato cubes, a labor-intensive task that required intense focus on the production line. Over the course of six months, Kewpie tested Cloud Machine Learning Engine and TensorFlow to help identify the bad cubes, and the results were so successful that it adopted the technology.
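Kewpie hasn't published its model, but the shape of the inspection system is easy to sketch: score each cube image, then reject the ones above a defect threshold. In the minimal stand-in below, `score_cube`, the darkness heuristic, and the threshold are all hypothetical; in production the score would come from a trained TensorFlow model served on Cloud Machine Learning Engine, not a hand-rolled statistic.

```python
# Hypothetical sketch of an automated visual-inspection pass.
# In production this score would come from a trained TensorFlow model;
# here a simple brightness statistic stands in for it.

def score_cube(pixels):
    """Return a defect score in [0, 1] for one grayscale cube image.

    `pixels` is a list of 0-255 intensity values; dark regions
    (bruising, discoloration) push the score up.
    """
    dark = sum(1 for p in pixels if p < 80)
    return dark / len(pixels)

def inspect(cubes, threshold=0.25):
    """Split (cube_id, pixels) pairs into (accepted, rejected) lists."""
    accepted, rejected = [], []
    for cube_id, pixels in cubes:
        (rejected if score_cube(pixels) >= threshold else accepted).append(cube_id)
    return accepted, rejected

# Two toy "images": one uniformly bright cube, one with dark patches.
good = ("cube-1", [200] * 100)
bad = ("cube-2", [200] * 60 + [30] * 40)
accepted, rejected = inspect([good, bad])
```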

Empowering employees to conduct effective data analysis

Sony Network Communications Inc. is a division of Sony Group that develops and operates cloud services and applications for Sony group companies. It migrated from Hive/Hadoop to BigQuery and built a data analysis platform on top of it, called Private Data Management Platform. This not only reduces data preparation and maintenance costs, but also lets a wide range of employees, from data scientists to those who are only familiar with SQL, conduct effective data analysis, making its data-driven business more productive than before.
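For the SQL-familiar employees such a platform targets, an analysis job on BigQuery is just a standard SQL query. The dataset and column names below are invented for illustration; only the shape of the workflow is the point.

```python
# Hypothetical example of the kind of standard SQL an analyst could run
# on BigQuery; the dataset and column names are made up.
query = """
SELECT
  device_type,
  COUNT(*) AS sessions,
  AVG(watch_minutes) AS avg_watch_minutes
FROM `analytics.playback_events`
WHERE event_date >= '2017-06-01'
GROUP BY device_type
ORDER BY sessions DESC
"""

# With the google-cloud-bigquery client library, this would run roughly as:
#   from google.cloud import bigquery
#   rows = bigquery.Client().query(query).result()
```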

Collaborating with partners

During Next Tokyo, we announced five new Japanese partners that will help Google Cloud better serve customers.

  • NTT Communications Corporation is a respected Japanese cloud solution provider and new Google Cloud partner that helps enterprises worldwide optimize their information and communications technology environments. GCP will connect with NTT Communications’ Enterprise Cloud, and NTT Communications plans to develop new services utilizing Google Cloud’s big data analysis and machine intelligence solutions. NTT Communications will use both G Suite and GCP to run its own business and will use its experiences to help both Japanese and international enterprises.

  • KDDI is already a key partner for G Suite and Chrome devices and will offer GCP to the Japanese market this summer, in addition to an expanded networking partnership.

  • SoftBank has been a G Suite partner since 2011 and will expand its collaboration with Google Cloud to include solutions utilizing GCP in its offerings. As part of the collaboration, SoftBank plans to link GCP with its own “White Cloud” service, in addition to promoting next-generation workplaces with G Suite.

  • SORACOM, which uses cellular and LoRaWAN networks to provide connectivity for IoT devices, announced two new integrations with GCP. SORACOM Beam, its data transfer support service, now supports Google Cloud IoT Core, and SORACOM Funnel, its cloud resource adapter service, enables constrained devices to send messages to Google Cloud Pub/Sub. This means that a small, battery-powered sensor can keep sending data to GCP over LoRaWAN for months, for example.
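On the cloud side, what arrives from a service like Funnel is an ordinary Cloud Pub/Sub message: base64-encoded payload data plus key-value attributes. The sensor reading and attribute names below are made up for illustration; the envelope follows Pub/Sub's publish message format.

```python
import base64
import json

# Sketch of a Pub/Sub publish body carrying one device reading.
# Pub/Sub message data is base64-encoded; the attribute names here
# (device id, transport) are illustrative, not a SORACOM schema.
reading = {"temperature_c": 21.4, "battery_pct": 97}
message = {
    "messages": [{
        "data": base64.b64encode(json.dumps(reading).encode()).decode(),
        "attributes": {"device": "sensor-42", "transport": "lorawan"},
    }]
}

# A subscriber decodes the payload back into the original reading.
decoded = json.loads(base64.b64decode(message["messages"][0]["data"]))
```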

Create Cloud Spanner instances in Tokyo

Cloud Spanner is the world’s first horizontally scalable, strongly consistent relational database service. It became generally available in May, delivering long-term value for our customers with mission-critical applications in the cloud, including customer authentication systems, business-transaction and inventory-management systems, and high-volume media systems that require low latency and high throughput. Starting today, customers can create Spanner instances and store data directly in our Tokyo region.
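Creating an instance in the new region comes down to choosing the Tokyo instance configuration (GCP's Tokyo region is asia-northeast1). The instance name, display name, and node count below are placeholders:

```python
# Minimal sketch of the pieces needed to create a Spanner instance in
# Tokyo. "regional-asia-northeast1" is the Tokyo instance configuration;
# the instance name, display name, and node count are placeholders.
instance = {
    "name": "orders-db",
    "config": "regional-asia-northeast1",
    "nodes": 1,
    "display_name": "Orders (Tokyo)",
}

# Roughly the equivalent gcloud invocation (not executed here):
#   gcloud spanner instances create orders-db \
#       --config=regional-asia-northeast1 --nodes=1 \
#       --description="Orders (Tokyo)"
```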

Jamboard coming to Japan in 2018

At Next Tokyo, businesses discussed how they can use technology to improve productivity and make it easier for employees to work together. Jamboard, a digital whiteboard designed specifically for the cloud, lets employees sketch ideas whiteboard-style on a brilliant 4K display, then drop in images, add notes and pull content directly from the web while collaborating with team members from anywhere. This week, we announced that Jamboard will be generally available in Japan in 2018.

Why Japanese companies are choosing Google Cloud

For Kewpie, Sony and FamilyMart, Google’s track record building secure infrastructure all over the world was an important consideration for their move to Google Cloud. From energy-efficient data centers to custom servers to custom networking gear to a software-defined global backbone to specialized ASICs for machine learning, Google has been living cloud at scale for more than 15 years—and we bring all of it to bear in Google Cloud.

We hope to see many of you as we go on the road to meet with customers and partners, and encourage you to learn more about upcoming Google Cloud events.




Data Science Weekly – Issue 185

Filed under: machine learning, TensorFlow




Curated news, articles and jobs related to Data Science. 
Keep up with all the latest developments

Issue #185

June 8 2017

Editor Picks

 

  • Google Brain Residency
    Last year, after nerding out a bit on TensorFlow, I applied and was accepted into the inaugural class of the Google Brain Residency Program. The program invites two dozen people, with varying backgrounds in ML, to spend a year at Google's deep learning research lab in Mountain View to work with the scientists and engineers pushing on the forefront of this technology. The year has just concluded and this is a summary of how I spent it…
  • A Retiree Discovers An Elusive Math Proof – And No-one Notices
    As he was brushing his teeth on the morning of July 17, 2014, Thomas Royen, a little-known retired German statistician, suddenly lit upon the proof of a famous conjecture at the intersection of geometry, probability theory, and statistics that had eluded top experts for decades…

 


 

A Message from this week's Sponsor:

 

 

  • Get started with Python for data science in minutes

    Using Python for data science and machine learning is easy with ActiveState’s Python distribution. Pre-bundled with 300+ packages, ActivePython includes NumPy, SciPy, scikit-learn, TensorFlow, Theano and Keras, and leverages the Intel Math Kernel Library, so you can focus on your data and not setting up software. Download ActivePython and start developing for free.
     


 

Data Science Articles & Videos

 

  • You can probably use deep learning even if your data isn't that big
    Over at Simply Stats Jeff Leek posted an article entitled “Don’t use deep learning your data isn’t that big” that I’ll admit, rustled my jimmies a little bit. To be clear, I don’t think deep learning is a universal panacea and I mostly agree with his central thesis (more on that later), but I think there are several things going on at once, and I’d like to explore a few of those further in this post…
  • How big data can help you pick better wine
    There are currently over 5,000 distinct bottles of Bordeaux-style red blends available for purchase on Wine.com. Rather than segmenting these wines using traditional structured data — like price, vintage, winery, grape varietal — what if we could instead rely on the rich, expressive language used in the product description and expert reviews posted online? Enter NLP (natural language processing)…
  • How to Call BS on Big Data: A Practical Guide
    "Nothing that you will learn in the course of your studies will be of the slightest possible use to you,” the Oxford philosophy professor John Alexander Smith told his students, in 1914, “save only this: if you work hard and intelligently, you should be able to detect when a man is talking rot.”…
  • What If People Run Out of Things to Do?
    What gives our lives meaning? And what if one day, whatever gives us meaning went away—what would we do then? I’m still thinking about those weighty questions after finishing Homo Deus, the provocative new book by Yuval Noah Harari…
  • Google Sprinkles AI on Its Spreadsheets to Automate Away Some Office Work
    In Google’s commercial for its virtual assistant, people ask it to play dance music, videos, and set a timer. A new feature from the search giant that lets you ask questions of its online spreadsheets is less flashy, but it could be the start of something that has a huge impact on how some companies operate…
  • Geometry of Optimization and Implicit Regularization in Deep Learning
    We argue that the optimization plays a crucial role in generalization of deep learning models through implicit regularization. We do this by demonstrating that generalization ability is not controlled by network size but rather by some other implicit control. We then demonstrate how changing the empirical optimization procedure can improve generalization, even if actual optimization quality is not affected. We do so by studying the geometry of the parameter space of deep networks, and devising an optimization algorithm attuned to this geometry…
  • A simple neural network module for relational reasoning
    Relational reasoning is a central component of generally intelligent behavior, but has proven difficult for neural networks to learn. In this paper we describe how to use Relation Networks (RNs) as a simple plug-and-play module to solve problems that fundamentally hinge on relational reasoning…

 


 

Jobs

 

  • Netflix – Los Gatos & Los Angeles, CA

    We are looking to fill several key roles across our Data Science groups. 

    • Director, Production Science & Algorithms

      In this role, you will lead a high-impact data science team focused on the digital supply chain at Netflix. The problems this team will work on have a direct impact on the viewing experience of our global member base, including ensuring that the digital assets (video, audio, and subtitle/text files) are of high quality, and developing new algorithms and metrics to improve the perceptual quality of our encoded assets.

    • Manager, Content Programming Science & Algorithms

      The ideal candidate for Manager of Content Programming Science & Algorithms is an experienced and entrepreneurial-minded data scientist. This is a high-impact and challenging role that will require both strong leadership and technical prowess.

    • Senior Data Scientist, Content Science & Algorithms

      We are looking for an experienced individual who is passionate about data science and enjoys working in a collaborative environment. Members of the Content Science team typically work on one or two projects (e.g. predicting movie viewership) over any six-month period.

 


 

Training & Resources

 

  • NeuroNER
    Named-entity recognition using neural networks. Easy-to-use and state-of-the-art results…

  • miner and craft
    In addition to our miner package and our in-development bookdown book, the R/minecraft team from the rOpenSci Unconference created a bunch of other useful code for interacting with Minecraft from R, which we’re putting into a second package…

 


 

 
P.S. Looking to hire a Data Scientist? Find an awesome one among our readers! Email us for details on how to post your job 🙂 – All the best, Hannah & Sebastian

Copyright © 2013-2016 DataScienceWeekly.org, All rights reserved.


Data Science Weekly – Issue 183

Filed under: machine learning, TensorFlow


table[id=templateContainer]{
max-width:600px !important;
width:75% !important;
}

} @media only screen and (max-width: 500px){
h1{
font-size:40px !important;
line-height:100% !important;
}

} @media only screen and (max-width: 500px){
h2{
font-size:20px !important;
line-height:100% !important;
}

} @media only screen and (max-width: 500px){
h3{
font-size:18px !important;
line-height:100% !important;
}

} @media only screen and (max-width: 500px){
h4{
font-size:16px !important;
line-height:100% !important;
}

} @media only screen and (max-width: 500px){
table[id=templatePreheader]{
display:none !important;
}

} @media only screen and (max-width: 500px){
td[class=headerContent]{
font-size:20px !important;
line-height:150% !important;
}

} @media only screen and (max-width: 500px){
td[class=bodyContent]{
font-size:18px !important;
line-height:125% !important;
}

} @media only screen and (max-width: 500px){
td[class=footerContent]{
font-size:14px !important;
line-height:150% !important;
}

} @media only screen and (max-width: 500px){
td[class=footerContent] a{
display:block !important;
}

}


Curated news, articles and jobs related to Data Science. 
Keep up with all the latest developments

Issue #183

May 25 2017

Editor Picks

 

  • New paint colors invented by neural network
    So if you’ve ever picked out paint, you know that every infinitesimally different shade of blue, beige, and gray has its own descriptive, attractive name. Tuscan sunrise, blushing pear, Tradewind, etc… There are in fact people who invent these names for a living. But given that the human eye can see millions of distinct colors, sooner or later we’re going to run out of good names. Can AI help?…
  • Predicting Lung Cancer: Solution Write-up
    The Data Science Bowl is an annual data science competition hosted by Kaggle. In this year’s edition, the goal was to detect lung cancer from chest CT scans of people diagnosed with cancer within a year. To tackle this challenge, we formed a mixed team of machine-learning-savvy people, none of whom had specific knowledge of medical image analysis or cancer prediction. Hence, the competition was both a noble challenge and a good learning experience for us. The competition just finished, and our team Deep Breath finished 9th! In this post, we explain our approach…
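
The paint-colors post above describes a model that learns character-level patterns from existing color names and samples new ones. As a rough, minimal stand-in for that idea (the original used a character-level neural network, not a Markov chain), here is a bigram Markov sketch in Python; the training names below are invented for illustration:

```python
import random

# Minimal character-level bigram model: a stand-in for the char-RNN
# in the post above. Training names are made up for illustration.
random.seed(1)
NAMES = ["tuscan sunrise", "blushing pear", "tradewind",
         "sandy taupe", "misty rose", "ocean breeze"]

def build_model(names):
    """Map each character to the list of characters that follow it."""
    model = {}
    for name in names:
        padded = "^" + name + "$"          # start/end markers
        for a, b in zip(padded, padded[1:]):
            model.setdefault(a, []).append(b)
    return model

def sample_name(model, max_len=20):
    """Walk the bigram chain from the start marker until the end marker."""
    ch, out = "^", []
    while len(out) < max_len:
        ch = random.choice(model[ch])
        if ch == "$":
            break
        out.append(ch)
    return "".join(out)

model = build_model(NAMES)
print([sample_name(model) for _ in range(3)])
```

With only six training names the samples mostly recombine fragments of the inputs; a neural model trained on thousands of real color names generalizes further, which is what makes its occasional near-miss inventions so entertaining.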

 


 

A Message from this week's Sponsor:

 

 

  • Harness the business power of big data.

    How far could you go with the right experience and education? Find out at Capitol Technology University. Earn your PhD in Management & Decision Sciences — in as little as three years — in convenient online classes. Banking, healthcare, energy and business all rely on insightful analysis, and business analytics spending will grow to $89.6 billion in 2018. This is a tremendous opportunity — and Capitol’s PhD program will prepare you for it. Learn more now.
     


 

Data Science Articles & Videos

 

  • A Day in the Life of Americans
    So again I looked at microdata from the American Time Use Survey from 2014, which asked thousands of people what they did during a 24-hour period. I used the data to simulate a single day for 1,000 Americans representative of the population — to the minute. More specifically, I tabulated transition probabilities for one activity to the other, such as from work to traveling, for every minute of the day. That provided 1,440 transition matrices, which let me model a day as a time-varying Markov chain. The simulations below come from this model, and it’s kind of mesmerizing…
  • Using Machine Learning to Explore Neural Network Architecture
    To make this process of designing machine learning models much more accessible, we’ve been exploring ways to automate the design of machine learning models. Among many algorithms we’ve studied, evolutionary algorithms [1] and reinforcement learning algorithms [2] have shown great promise. But in this blog post, we’ll focus on our reinforcement learning approach and the early results we’ve gotten so far…
  • First In-Depth Look at Google's New Second Generation TPU
    This morning at Google’s I/O event, the company stole Nvidia’s recent Volta GPU thunder by releasing details about its second-generation tensor processing unit (TPU), which will manage both training and inference in a rather staggering 180-teraflop system board, complete with a custom network to lash several together into “TPU pods” that can deliver Top 500-class supercomputing might at up to 11.5 petaflops of peak performance…
  • Automated Machine Learning (AML) — A Paradigm Shift That Accelerates Data Scientist Productivity @ Airbnb
    The scope of AML is ambitious, but is it really effective? The answer depends on how you use it. Our view is that it is difficult to replace a data scientist wholesale with an AML framework, because most machine learning problems require domain knowledge and human judgment to set up correctly. We have also found AML tools to be most useful for regression and classification problems involving tabular datasets, though the state of the art in this area is advancing quickly. In summary, we believe that in certain cases AML can vastly increase a data scientist’s productivity, often by an order of magnitude…
  • A new algorithm for finding a visual center of a polygon
    We came up with a neat little algorithm that may be useful for placing labels and tooltips on polygons, accompanied by a JavaScript library. It’s now going to be used in Mapbox GL and Mapbox Studio. Let’s see how it works…
  • Who Owns England?
    Who owns land is one of England's most closely-guarded secrets. This map is a first attempt to display major landowners in England, combining public data with Freedom of Information requests…
  • Get Up To Speed Fast As A Junior Data Scientist
    You are a new junior data scientist and you want to get started the right way. You want to make sure you don't make the same mistakes others have made early in their data scientist careers because you want to prove to your employers that they made the right choice. As such, you need to figure out how to get up to speed as fast as possible…
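
The “Day in the Life of Americans” piece above describes a concrete algorithm: a time-varying Markov chain with one transition matrix per minute of the day. A minimal sketch of that simulation loop follows, with made-up activities and random (rather than survey-derived) transition probabilities:

```python
import numpy as np

# Hedged sketch of the time-varying Markov chain described above.
# The activities and probabilities here are invented; the original
# tabulated 1,440 per-minute transition matrices from American Time
# Use Survey microdata.
ACTIVITIES = ["sleep", "work", "travel", "leisure"]
MINUTES_PER_DAY = 1440

rng = np.random.default_rng(0)

def toy_transition_matrices():
    """One row-stochastic matrix per minute of the day."""
    mats = rng.random((MINUTES_PER_DAY, len(ACTIVITIES), len(ACTIVITIES)))
    return mats / mats.sum(axis=2, keepdims=True)  # normalize each row

def simulate_day(mats, start=0):
    """Walk the chain one minute at a time; returns 1,440 activity indices."""
    state, path = start, []
    for minute in range(MINUTES_PER_DAY):
        path.append(state)
        # Transition probabilities depend on the current minute,
        # which is what makes the chain time-varying.
        state = rng.choice(len(ACTIVITIES), p=mats[minute, state])
    return path

path = simulate_day(toy_transition_matrices())
print(len(path), ACTIVITIES[path[0]])
```

Running this once per simulated person (the original used 1,000 people) and animating the resulting activity counts minute by minute reproduces the mesmerizing effect the author describes.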

 


 

Jobs

 

  • Deep Learning Engineer – New Relic – San Francisco, CA

    New Relic is a leading digital intelligence company, delivering full-stack visibility and analytics with more than 14,000 paid business accounts. Our platform provides actionable insights to drive digital business results. Every minute, New Relic collects over 1.37 billion data points from computers, phones, browsers, and applications all over the world. To handle this incredible influx, we built massively scalable systems capable of ingesting, analyzing, and storing this data.

    We’re looking for talented deep learning engineers to join us in our quest to analyze this data and solve immediate, real-world challenges facing modern digital businesses. If you’re someone who lives and breathes everything from logistic regression to LSTMs and CNNs, we’d love to hear from you!

    Join us and apply your expertise in machine learning to solve our hardest problems. You’ll build valuable new products and features, and take a significant role in shaping future product and technology directions at New Relic….

 


 

Training & Resources

 

  • Deep learning for natural language processing, Part 1
    Natural language processing is yet another field that underwent a small revolution thanks to the second coming of artificial neural networks. Let’s just briefly discuss two advances in the natural language processing toolbox made thanks to artificial neural networks and deep learning techniques…

 


 

Books

 

 


 

Source link

All 101 announcements from Google I/O ‘17

By | machinelearning, TensorFlow

It’s been a busy three days here in Mountain View, as more than 7,000 developers joined us at Shoreline Amphitheatre for this year’s Google I/O. From AI to VR, and everything in between, here’s an exhaustive—we mean that—recap of everything we announced.

1. The Google Assistant is already available on more than 100 million devices!
2. Soon, with Google Lens—a new way for computers to “see”—you’ll be able to learn more about and take action on the things around you, while you’re in a conversation with your Assistant.
3. We’ve brought your Google Assistant to iPhones.
4. Call me maybe? With new hands-free calling on Google Home, you’ll be able to make calls with the Assistant to landlines and mobile numbers in the U.S. and Canada for free.
5. You can now type to your Google Assistant on eligible Android phones and iPhones.
6. Bonjour. Later this year people in Australia, Canada, France, Germany and Japan will be able to give the Assistant on Google Home a try.
7. And Hallo. Soon the Assistant will roll out to eligible Android phones in Brazilian Portuguese, French, German and Japanese. By the end of the year the Assistant will support Italian, Korean and Spanish.
8. We’re also adding transactions and payments to your Assistant on phones—soon you can order and pay for food and more, with your Assistant.  
9. With 70+ home automation partners, you can water your lawn and check the status of your smoke alarm with the Assistant on Google Home and phones.
10. Soon you’ll get proactive notifications for reminders, flight delays and traffic alerts with the Assistant on Google Home and phones. With multi-user support, you can control the type of notifications to fit your daily life.
11. Listen to all your favorite tunes. We’ve added Deezer and SoundCloud as partners, plus Spotify’s free music offering is coming soon.
12. Bluetooth support is coming to Google Home, so you can play any audio from your iOS or Android device.
13. Don’t know the name of a song, but remember a few of the lyrics? Now you can just ask the Assistant to “play that song that goes like…” and list some of the lyrics.
14. Use your voice to play your favorite shows and more from 20+ new partners (HBO NOW, CBS All Access, and HGTV) straight to your TV.
15. With visual responses from your Assistant on TVs with Chromecast, you’ll be able to see Assistant answers on the biggest screen in your house.
16. You can stream with your voice with Google Home on 50 million Cast and Cast-enabled devices.
17. For developers, we’re bringing Actions on Google to the Assistant on phones—on both Android and iOS. Soon you’ll find conversation apps for the Assistant that help you do things like shopping for clothes or ordering food from a lengthy menu.
18. Also for developers, we’re adding ways for you to get data on your app’s usage and performance, with a new console.
19. We’re rolling out an app directory, so people can find apps from developers directly in the Google Assistant.
20. People can now also create shortcuts for apps in the Google Assistant, so instead of saying “Ok Google, ask Forecaster Joe what’s the surf report for the Outer Banks,” someone can just say their personal shortcut, like “Ok Google, is the surf up?”
21. Last month we previewed the Google Assistant SDK, and now we’re updating it with hotword support, so developers can build devices that are triggered by a simple “Ok Google.”
22. We’re also adding to the SDK the ability to have both timers and alarms.
23. And finally, we’re launching our first developer competition for Actions on Google.

AI, ML and Cloud

24. With the addition of Smart Reply to Gmail on Android and iOS, we’re using machine learning to make responding to emails easier for more than a billion Gmail users.
25. New Cloud TPUs—the second generation of our custom hardware built specifically for machine learning—are optimized for training ML models as well as running them, and will be available in the Google Compute Engine.
26. And to speed up the pace of open machine-learning research, we’re introducing the TensorFlow Research Cloud, a cluster of 1,000 Cloud TPUs available for free to top researchers.
27. Google for Jobs is our initiative to use our products to help people find work, using machine learning. Through Google Search and the Cloud Jobs API, we’re committed to helping companies connect with potential employees and job seekers with available opportunities.
28. The Google Cloud Jobs API is helping customers like Johnson & Johnson recruit the best candidates. Only months after launch, they’ve found that job seekers are 18 percent more likely to apply on their career page now that they’re using the Cloud Jobs API.
29. With Google.ai, we’re pulling all our AI initiatives together to put more powerful computing tools and research in the hands of researchers, developers and companies. We’ve already seen promising research in the fields of pathology and DNA research.
30. We must go deeper. AutoML uses neural nets to design neural nets, potentially cutting down the time-intensive process of setting up an AI system, and helping non-experts build AI for their particular needs.
31. We’ve partnered with world-class medical researchers to explore how machine learning could help improve care for patients, avoid costly incidents and save lives.
32. We introduced a new Google Cloud Platform service called Google Cloud IoT Core, which makes it easy for Google Cloud customers to gain business insights through secure device connections to our rich data and analytics tools.

Photos

33. We first launched Google Photos two years ago, and now it has more than 500 million monthly users.
34. Every day more than 1.2 billion photos and videos are uploaded to Google Photos.
35. Soon Google Photos will give you sharing suggestions by selecting the right photos, and suggesting who you should send them to based on who was in them.
36. Shared libraries will let you effortlessly share photos with a specific person. You can share your full photo library, or photos of certain people or from a certain date forward.
37. With photo books, once you select the photos, Google Photos can curate an album for you with all the best shots, which you can then print for $9.99 (20-page softcover) or $19.99 (20-page hardcover), in the U.S. for now.
38. Google Lens is coming to Photos later this year, so you’ll be able to look back on your photos to learn more or take action—like find more information about a painting from a photo you took in a museum.

Android

39. We reached 2 billion monthly active devices on Android!
40. Android O, coming later this year, is getting improvements to “vitals” like battery life and performance, and bringing more fluid experiences to your smaller screen, from improved notifications to autofill.
41. With picture-in-picture in Android O, you can do two tasks simultaneously, like checking your calendar while on a Duo video call.
42. Smart text selection in Android O improves copy and paste to recognize entities on the screen—like a complete address—so you can easily select text with a double tap, and even bring up an app like Maps to help navigate you there.
43. Our emoji are going through a major design refresh in Android O.
44. For developers, the first beta release of Android O is now available.
45. We introduced Google Play Protect—a set of security protections for Android that’s always on and automatically takes action to keep your data and device safe, so you don’t have to lift a finger.
46. The new Find My Device app helps you locate, ring, lock and erase your lost Android devices—phones, tablets, and even watches.
47. We previewed a new initiative aimed at getting computing into the hands of more people on entry-level Android devices. Internally called Android Go, it’s designed to be relevant for people who have limited data connectivity and speak multiple languages.
48. Android Auto is now supported by 300 car models, and Android Auto users have grown 10x since last year.
49. With partners in 70+ countries, we’re seeing 1 million new Android TV device activations every two months, doubling the number of users since last year.
50. We’ve refreshed the look and feel of the Android TV homescreen, making it easy for people to find, preview and watch content provided by apps.
51. With new partners like Emporio Armani, Movado and New Balance, Android Wear now powers almost 50 different watches.
52. We shared an early look at TensorFlow Lite, which is designed to help developers take advantage of machine learning to improve the user experience on Android.
53. As part of TensorFlow Lite, we’re working on a Neural Network API that TensorFlow can take advantage of to accelerate computation.
54. An incredible 82 billion apps were downloaded from Google Play in the last year.
55. We honored 12 Google Play Awards winners—apps and games that give their fans particularly delightful and memorable experiences.
56. We’re now previewing Android Studio 3.0, focused on speed and Android platform support.
57. We’re making Kotlin an officially supported programming language in Android, with the goal of making Android development faster and more fun.
58. And we’ll be collaborating with JetBrains, the creators of Kotlin, to move Kotlin into a nonprofit foundation.
59. Android Instant Apps are now open to all developers, so anyone can build and publish apps that can be run without requiring installation.
60. Thousands of developers from 60+ countries are now using Android Things to create connected devices that have easy access to services like the Google Assistant, TensorFlow and more.
61. Android Things will be fully released later this year.
62. Over the last year, the number of Google Play developers with more than 1 million installs grew 35 percent.
63. The number of people buying on Google Play grew by almost 30 percent this past year.
64. We’re updating the Google Play Console with new features to help developers improve your app’s performance and quality, and grow your business on Google Play.
65. We’re also adding a new subscriptions dashboard in the Play Console, bringing together data like new subscribers and churn so you can make better business decisions.
66. To make it easier and more fun for developers to write robust apps, we announced a guide to Android app architecture along with a preview of Architecture Components.  
67. We’re adding four new tools to the Complications API for Android Wear, to help give users more informative watch faces.
68. Also for Android Wear, we’re open sourcing some components in the Android Support Library.

VR and AR

69. More Daydream-ready phones are coming soon, including the Samsung Galaxy S8 and S8+, LG’s next flagship phone, and devices from Motorola and ASUS.
70. Today there are 150+ applications available for Daydream.
71. More than 2 million students have gone on virtual reality Expeditions using Google Cardboard, with more than 600 tours available.
72. We’re expanding Daydream to support standalone VR headsets, which don’t require a phone or PC. HTC VIVE and Lenovo are both working on devices, based on a Qualcomm reference design.
73. Standalone Daydream headsets will include WorldSense, a new technology based on Tango which enables the headset to track your precise movements in space, without any extra sensors.
74. The next smartphone with Tango technology will be the ASUS ZenFone AR, available this summer.
75. We worked with the Google Maps team to create a new Visual Positioning Service (VPS) for developers, which helps devices quickly and accurately understand their location indoors.
76. We’re bringing AR to the classroom with Expeditions AR, launching with a Pioneer Program this fall.
77. We previewed Euphrates, the latest release of Daydream, which will let you capture what you’re seeing and cast your virtual world right onto the screen in your living room, coming later this year.
78. A new tool for VR developers, Instant Preview, lets developers make changes on a computer and see them reflected on a headset in seconds, not minutes.
79. Seurat is a new technology that makes it possible to render high-fidelity scenes on mobile VR headsets in real time. Somebody warn Cameron Frye.
80. We’re releasing an experimental build of Chromium with an augmented reality API, to help bring AR to the web.

YouTube

81. Soon you’ll be able to watch and control 360-degree YouTube videos and live streams on your TV, and use your game controller or remote to pan around an immersive experience.
82. Super Chat lets fans interact directly with YouTube creators during live streams by purchasing highlighted chat messages that stay pinned to the top of the chat window. We previewed a developer integration that showed how the Super Chat API can be used to trigger actions in the real world—such as turning the lights on and off in a creator’s apartment.
83. A new feature in the YouTube VR app will soon let people watch and discuss videos together.

Developer tools

84. We announced that we will make Fabric’s Crashlytics the primary crash reporting product in Firebase.
85.  We’re bringing phone number authentication to Firebase, working closely with the Fabric Digits team, so your users can sign in to your apps with their phone numbers.
86. New Firebase Performance Monitoring will help diagnose issues resulting from poorly performing code or challenging network conditions.
87. We’ve improved Firebase Cloud Messaging.
88. For game developers, we’ve built Game Loop support & FPS monitoring into Test Lab for Android, allowing you to evaluate your game’s frame rate before you deploy.
89. We’ve taken some big steps to open source many of our Firebase SDKs on GitHub.
90. We’re expanding Firebase Hosting to integrate with Cloud Functions, letting you do things like send a notification when a user signs up or automatically create thumbnails when an image is uploaded to Cloud Storage.
91. Developers interested in testing the cutting edge of our products can now sign up for a Firebase Alpha program.
92. We’re adding two new certifications for web developers, in addition to the Associate Android Developer Certification announced last year.
93. We opened an Early Access Program for Chatbase, a new analytics tool in API.ai that helps developers monitor the activity in their chatbots.
94. We’ve completely redesigned AdMob, which helps developers promote, measure and monetize mobile apps, with a new user flow and publisher controls.
95. AdMob is also now integrated with Google Analytics for Firebase, giving developers a complete picture of ads revenue, mediation revenue and in-app purchase revenue in one place.
96. With a new Google Payment API, developers can enable easy in-app or online payments for customers who already have credit and debit cards stored on Google properties.
97. We’re introducing new ways for merchants to engage and reward customers, including the new Card Linked Offers API.
98. We’re introducing new options for ad placement through Universal App Campaigns to help users discover your apps in the Google Play Store.
99. An update to Smart Bidding strategies in Universal App Campaigns helps you gain high-value users of your apps—like players who level-up in your game or the loyal travelers who book several flights a month.
100. A new program, App Attribution Partners, integrates data into AdWords from seven third-party measurement providers so you can more easily find and take action on insights about how users engage with your app.
101. Firebase partnered up with Google Cloud to offer free storage for up to 10 gigabytes in BigQuery so you can quickly, easily and affordably run queries on it.

That’s all, folks! Thanks to everyone who joined us at I/O this year, whether in person, at an I/O Extended event or via the live stream. See you in 2018.



Source link

Making AI work for everyone

By | machinelearning, TensorFlow

I’ve now been at Google for 13 years, and it’s remarkable how the company’s founding mission of making information universally accessible and useful is as relevant today as it was when I joined. From the start, we’ve looked to solve complex problems using deep computer science and insights, even as the technology around us forces dramatic change.

The most complex problems tend to be ones that affect people’s daily lives, and it’s exciting to see how many people have made Google a part of their day—we’ve just passed 2 billion monthly active Android devices; YouTube has not only 1 billion users but also 1 billion hours of watch time every day; people find their way along 1 billion kilometers across the planet using Google Maps each day. This growth would have been unthinkable without computing’s shift to mobile, which made us rethink all of our products—reinventing them to reflect new models of interaction like multi-touch screens.

We are now witnessing a new shift in computing: the move from a mobile-first to an AI-first world. And as before, it is forcing us to reimagine our products for a world that allows a more natural, seamless way of interacting with technology. Think about Google Search: it was built on our ability to understand text in webpages. But now, thanks to advances in deep learning, we’re able to make images, photos and videos useful to people in a way they simply haven’t been before. Your camera can “see”; you can speak to your phone and get answers back—speech and vision are becoming as important to computing as the keyboard or multi-touch screens.  

The Assistant is a powerful example of these advances at work. It’s already available across 100 million devices, and getting more useful every day. We can now distinguish between different voices in Google Home, making it possible for people to have a more personalized experience when they interact with the device. We are now also in a position to make the smartphone camera a tool to get things done. Google Lens is a set of vision-based computing capabilities that can understand what you’re looking at and help you take action based on that information. If you’ve ever crawled around on a friend’s apartment floor to read a long, complicated Wi-Fi password on the back of a router, your phone can now recognize the password, see that you’re trying to log into a Wi-Fi network and automatically log you in. The key thing is, you don’t need to learn anything new to make this work—the interface and the experience can be much more intuitive than, for example, copying and pasting across apps on a smartphone. We’ll first be bringing Google Lens capabilities to the Assistant and Google Photos, and you can expect it to make its way to other products as well.

[Warning, geeky stuff ahead!!!]

All of this requires the right computational architecture. Last year at I/O, we announced the first generation of our TPUs, which allow us to run our machine learning algorithms faster and more efficiently. Today we announced our next generation of TPUs—Cloud TPUs, which are optimized for both inference and training and can process a LOT of information. We’ll be bringing Cloud TPUs to the Google Compute Engine so that companies and developers can take advantage of it.

It’s important to us to make these advances work better for everyone—not just for the users of Google products. We believe huge breakthroughs in complex social problems will be possible if scientists and engineers can have better, more powerful computing tools and research at their fingertips. But today, there are too many barriers to making this happen. 

That’s the motivation behind Google.ai, which pulls all our AI initiatives into one effort that can lower these barriers and accelerate how researchers, developers and companies work in this field.

One way we hope to make AI more accessible is by simplifying the creation of machine learning models called neural networks. Today, designing neural nets is extremely time intensive, and requires an expertise that limits its use to a smaller community of scientists and engineers. That’s why we’ve created an approach called AutoML, showing that it’s possible for neural nets to design neural nets. We hope AutoML will take an ability that a few PhDs have today and will make it possible in three to five years for hundreds of thousands of developers to design new neural nets for their particular needs. 
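
The search loop at the heart of an AutoML-style system can be sketched very roughly. The version below swaps the controller networks and reinforcement learning described in Google’s research for plain random search over invented layer widths, scored by a toy proxy function, purely to illustrate the shape of the idea:

```python
import random

# A heavily simplified stand-in for the AutoML idea described above:
# propose candidate network architectures, score each one, keep the
# best. Real AutoML trains a controller to propose architectures and
# scores them by actually training the child networks; both steps are
# replaced here with toys so the loop itself is visible.
random.seed(0)

LAYER_CHOICES = [16, 32, 64, 128]   # hypothetical hidden-layer widths
MAX_DEPTH = 4

def sample_architecture():
    """Propose a random architecture: a list of hidden-layer widths."""
    depth = random.randint(1, MAX_DEPTH)
    return [random.choice(LAYER_CHOICES) for _ in range(depth)]

def proxy_score(arch):
    # Toy proxy: pretend quality rises with capacity but is penalized
    # per layer, so the search has a real trade-off to navigate.
    capacity = sum(arch)
    return capacity / (capacity + 100) - 0.001 * len(arch)

def search(trials=50):
    """Keep the best-scoring architecture seen across all trials."""
    best, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture()
        score = proxy_score(arch)
        if score > best_score:
            best, best_score = arch, score
    return best, best_score

best_arch, best_score = search()
print(best_arch, round(best_score, 3))
```

The expensive part in practice is the scoring step—each candidate is a full training run—which is why the research emphasizes learned controllers that propose good candidates in far fewer trials than blind search.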

In addition, Google.ai has been teaming Google researchers with scientists and developers to tackle problems across a range of disciplines, with promising results. We’ve used ML to improve the algorithm that detects the spread of breast cancer to adjacent lymph nodes. We’ve also seen AI make strides in the time and accuracy with which researchers can guess the properties of molecules and even sequence the human genome.

This shift isn’t just about building futuristic devices or conducting cutting-edge research. We also think it can help millions of people today by democratizing access to information and surfacing new opportunities. For example, almost half of U.S. employers say they still have issues filling open positions. Meanwhile, job seekers often don’t know there’s a job opening just around the corner from them, because the nature of job posts—high turnover, low traffic, inconsistency in job titles—have made them hard for search engines to classify. Through a new initiative, Google for Jobs, we hope to connect companies with potential employees, and help job seekers find new opportunities. As part of this effort, we will be launching a new feature in Search in the coming weeks that helps people look for jobs across experience and wage levels—including jobs that have traditionally been much harder to search for and classify, like service and retail jobs. 

It’s inspiring to see how AI is starting to bear fruit that people can actually taste. There is still a long way to go before we are truly an AI-first world, but the more we can work to democratize access to the technology—both in terms of the tools people can use and the way we apply it—the sooner everyone will benefit. 

To read more about the many, many other announcements at Google I/O—for Android, and Photos, and VR, and more, please see our latest stories.




All 101 announcements from Google I/O ‘17

By | machinelearning, TensorFlow

It’s been a busy three days here in Mountain View, as more than 7,000 developers joined us at Shoreline Amphitheatre for this year’s Google I/O. From AI to VR, and everything in between, here’s an exhaustive—we mean that—recap of everything we announced.

1. The Google Assistant is already available on more than 100 million devices!
2. Soon, with Google Lens—a new way for computers to “see”—you’ll be able to learn more about and take action on the things around you, while you’re in a conversation with your Assistant.
3. We’ve brought your Google Assistant to iPhones.
4. Call me maybe? With new hands-free calling on Google Home, you’ll be able to make calls with the Assistant to landlines and mobile numbers in the U.S. and Canada for free.
5. You can now type to your Google Assistant on eligible Android phones and iPhones.
6. Bonjour. Later this year people in Australia, Canada, France, Germany and Japan will be able to give the Assistant on Google Home a try.
7. And Hallo. Soon the Assistant will roll out to eligible Android phones in Brazilian Portuguese, French, German and Japanese. By the end of the year the Assistant will support Italian, Korean and Spanish.
8. We’re also adding transactions and payments to your Assistant on phones—soon you’ll be able to order and pay for food and more with your Assistant.
9. With 70+ home automation partners, you can water your lawn and check the status of your smoke alarm with the Assistant on Google Home and phones.
10. Soon you’ll get proactive notifications for reminders, flight delays and traffic alerts with the Assistant on Google Home and phones. With multi-user support, you can control the type of notifications to fit your daily life.
11. Listen to all your favorite tunes. We’ve added Deezer and Soundcloud as partners, plus Spotify’s free music offering coming soon.
12. Bluetooth support is coming to Google Home, so you can play any audio from your iOS or Android device.
13. Don’t know the name of a song, but remember a few of the lyrics? Now you can just ask the Assistant to “play that song that goes like…” and list some of the lyrics.
14. Use your voice to play your favorite shows and more from 20+ new partners (HBO NOW, CBS All Access, and HGTV) straight to your TV.
15. With visual responses from your Assistant on TVs with Chromecast, you’ll be able to see Assistant answers on the biggest screen in your house.
16. You can stream with your voice with Google Home on 50 million Cast and Cast-enabled devices.
17. For developers, we’re bringing Actions on Google to the Assistant on phones—on both Android and iOS. Soon you’ll find conversation apps for the Assistant that help you do things like shopping for clothes or ordering food from a lengthy menu.
18. Also for developers, we’re adding ways for you to get data on your app’s usage and performance, with a new console.
19. We’re rolling out an app directory, so people can find apps from developers directly in the Google Assistant.
20. People can now also create shortcuts for apps in the Google Assistant, so instead of saying “Ok Google, ask Forecaster Joe what’s the surf report for the Outer Banks,” someone can just say their personal shortcut, like “Ok Google, is the surf up?”
21. Last month we previewed the Google Assistant SDK, and now we’re updating it with hotword support, so developers can build devices that are triggered by a simple “Ok Google.”
22. We’re also adding to the SDK the ability to have both timers and alarms.
23. And finally, we’re launching our first developer competition for Actions on Google.
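
The lyric lookup in item 13 is, at heart, a fuzzy retrieval problem: match a remembered fragment against an indexed corpus of lyrics. The tiny index and word-overlap scoring below are purely illustrative; the Assistant searches a licensed lyrics corpus with far more robust matching.

```python
# Toy lyrics index; keys are song titles, values are lyric snippets.
LYRICS = {
    "Bohemian Rhapsody": "is this the real life is this just fantasy",
    "Hey Jude": "take a sad song and make it better",
    "Let It Be": "whisper words of wisdom let it be",
}

def find_song(fragment):
    """Return the title whose lyrics share the most words with the
    remembered fragment, or None if nothing overlaps at all."""
    words = set(fragment.lower().split())
    def overlap(title):
        return len(words & set(LYRICS[title].split()))
    best = max(LYRICS, key=overlap)
    return best if overlap(best) > 0 else None
```

Asking for “that song that goes take a sad song” overlaps most with the “Hey Jude” snippet, so that title wins even though the query quotes the lyric imperfectly.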

AI, ML and Cloud

24. With the addition of Smart Reply to Gmail on Android and iOS, we’re using machine learning to make responding to emails easier for more than a billion Gmail users.
25. New Cloud TPUs—the second generation of our custom hardware built specifically for machine learning—are optimized for training ML models as well as running them, and will be available in the Google Compute Engine.
26. And to speed up the pace of open machine-learning research, we’re introducing the TensorFlow Research Cloud, a cluster of 1,000 Cloud TPUs available for free to top researchers.
27. Google for Jobs is our initiative to use our products to help people find work, using machine learning. Through Google Search and the Cloud Jobs API, we’re committed to helping companies connect with potential employees and job seekers with available opportunities.
28. The Google Cloud Jobs API is helping customers like Johnson & Johnson recruit the best candidates. Only months after launching, the company has found that job seekers are 18 percent more likely to apply on its career page now that it’s using the Cloud Jobs API.
29. With Google.ai, we’re pulling all our AI initiatives together to put more powerful computing tools and research in the hands of researchers, developers and companies. We’ve already seen promising research in the fields of pathology and DNA research.
30. We must go deeper. AutoML uses neural nets to design neural nets, potentially cutting down the time-intensive process of setting up an AI system, and helping non-experts build AI for their particular needs.
31. We’ve partnered with world-class medical researchers to explore how machine learning could help improve care for patients, avoid costly incidents and save lives.
32. We introduced a new Google Cloud Platform service called Google Cloud IoT Core, which makes it easy for Google Cloud customers to gain business insights through secure device connections to our rich data and analytics tools.
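
Smart Reply (item 24) can be thought of as retrieving a few likely responses for an incoming message. As a rough, hedged sketch: the canned-reply table and bag-of-words matching below are invented stand-ins for what is, in production, a learned sequence model trained over a huge email corpus.

```python
# Toy table mapping example prompts to candidate replies.
REPLIES = {
    "are you free for lunch tomorrow": ["Sure, what time?", "Sorry, I can't."],
    "can you send me the report": ["Sending it now.", "I'll send it shortly."],
    "thanks for your help": ["Happy to help!", "Anytime!"],
}

def suggest_replies(incoming, k=2):
    """Return up to k canned replies for the stored prompt most
    similar to the incoming message (bag-of-words overlap)."""
    words = set(incoming.lower().split())
    best = max(REPLIES, key=lambda prompt: len(words & set(prompt.split())))
    return REPLIES[best][:k]
```

The learned version generalizes to messages it has never seen, but the shape of the feature is the same: rank candidate responses, surface the top few, let the user tap one.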

Photos

33. We first launched Google Photos two years ago, and now it has more than 500 million monthly users.
34. Every day more than 1.2 billion photos and videos are uploaded to Google Photos.
35. Soon Google Photos will give you sharing suggestions, selecting the right photos and suggesting who you should send them to based on who’s in them.
36. Shared libraries will let you effortlessly share photos with a specific person. You can share your full photo library, or photos of certain people or from a certain date forward.
37. With photo books, once you select the photos, Google Photos can curate an album for you with all the best shots, which you can then print for $9.99 (20-page softcover) or $19.99 (20-page hardcover), in the U.S. for now.
38. Google Lens is coming to Photos later this year, so you’ll be able to look back on your photos to learn more or take action—like find more information about a painting from a photo you took in a museum.

Android

39. We reached 2 billion monthly active devices on Android!
40. Android O, coming later this year, is getting improvements to “vitals” like battery life and performance, and bringing more fluid experiences to your smaller screen, from improved notifications to autofill.
41. With picture-in-picture in Android O, you can do two tasks simultaneously, like checking your calendar while on a Duo video call.
42. Smart text selection in Android O improves copy and paste to recognize entities on the screen—like a complete address—so you can easily select text with a double tap, and even bring up an app like Maps to help navigate you there.
43. Our emoji are going through a major design refresh in Android O.
44. For developers, the first beta release of Android O is now available.
45. We introduced Google Play Protect—a set of security protections for Android that’s always on and automatically takes action to keep your data and device safe, so you don’t have to lift a finger.
46. The new Find My Device app helps you locate, ring, lock and erase your lost Android devices—phones, tablets, and even watches.
47. We previewed a new initiative aimed at getting computing into the hands of more people on entry-level Android devices. Internally called Android Go, it’s designed to be relevant for people who have limited data connectivity and speak multiple languages.
48. Android Auto is now supported by 300 car models, and Android Auto users have grown 10x since last year.
49. With partners in 70+ countries, we’re seeing 1 million new Android TV device activations every two months, doubling the number of users since last year.
50. We’ve refreshed the look and feel of the Android TV homescreen, making it easy for people to find, preview and watch content provided by apps.
51. With new partners like Emporio Armani, Movado and New Balance, Android Wear now powers almost 50 different watches.
52. We shared an early look at TensorFlow Lite, which is designed to help developers take advantage of machine learning to improve the user experience on Android.
53. As part of TensorFlow Lite, we’re working on a Neural Network API that TensorFlow can take advantage of to accelerate computation.
54. An incredible 82 billion apps were downloaded from Google Play in the last year.
55. We honored 12 Google Play Awards winners—apps and games that give their fans particularly delightful and memorable experiences.
56. We’re now previewing Android Studio 3.0, focused on speed and Android platform support.
57. We’re making Kotlin an officially supported programming language in Android, with the goal of making Android development faster and more fun.
58. And we’ll be collaborating with JetBrains, the creators of Kotlin, to move Kotlin into a nonprofit foundation.
59. Android Instant Apps are now open to all developers, so anyone can build and publish apps that can be run without requiring installation.
60. Thousands of developers from 60+ countries are now using Android Things to create connected devices that have easy access to services like the Google Assistant, TensorFlow and more.
61. Android Things will be fully released later this year.
62. Over the last year, the number of Google Play developers with more than 1 million installs grew 35 percent.
63. The number of people buying on Google Play grew by almost 30 percent this past year.
64. We’re updating the Google Play Console with new features to help developers improve your app’s performance and quality, and grow your business on Google Play.
65. We’re also adding a new subscriptions dashboard in the Play Console, bringing together data like new subscribers and churn so you can make better business decisions.
66. To make it easier and more fun for developers to write robust apps, we announced a guide to Android app architecture along with a preview of Architecture Components.  
67. We’re adding four new tools to the Complications API for Android Wear, to help give users more informative watch faces.
68. Also for Android Wear, we’re open sourcing some components in the Android Support Library.
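
Smart text selection (item 42) expands a double tap into the full entity that contains it. Android does this with an on-device ML model; the sketch below substitutes a couple of toy regex entity patterns just to show the expand-to-entity behavior, with a plain word-boundary fallback.

```python
import re

# Toy entity patterns standing in for a learned entity recognizer.
ENTITY_PATTERNS = [
    re.compile(r"\b\d+\s+\w+\s+(?:St|Ave|Rd)\b"),  # street address
    re.compile(r"\b\d{3}-\d{4}\b"),                # short phone number
]

def smart_select(text, tap_index):
    """Expand a double tap at tap_index to the entity span containing
    it; fall back to selecting just the tapped word."""
    for pattern in ENTITY_PATTERNS:
        for match in pattern.finditer(text):
            if match.start() <= tap_index < match.end():
                return match.group()
    # Fallback: stitch together the word halves around the tap point.
    left = re.search(r"\w*$", text[:tap_index]).group()
    right = re.search(r"^\w*", text[tap_index:]).group()
    return left + right
```

Tapping anywhere inside “1600 Amphitheatre Ave” selects the whole address in one gesture, which is what makes the follow-on action—handing the span to Maps—feel seamless.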

VR and AR

69. More Daydream-ready phones are coming soon, including the Samsung Galaxy S8 and S8+, LG’s next flagship phone, and devices from Motorola and ASUS.
70. Today there are 150+ applications available for Daydream.
71. More than 2 million students have gone on virtual reality Expeditions using Google Cardboard, with more than 600 tours available.
72. We’re expanding Daydream to support standalone VR headsets, which don’t require a phone or PC. HTC VIVE and Lenovo are both working on devices, based on a Qualcomm reference design.
73. Standalone Daydream headsets will include WorldSense, a new technology based on Tango which enables the headset to track your precise movements in space, without any extra sensors.
74. The next smartphone with Tango technology will be the ASUS ZenFone AR, available this summer.
75. We worked with the Google Maps team to create a new Visual Positioning Service (VPS) for developers, which helps devices quickly and accurately understand their location indoors.
76. We’re bringing AR to the classroom with Expeditions AR, launching with a Pioneer Program this fall.
77. We previewed Euphrates, the latest release of Daydream, which will let you capture what you’re seeing and cast your virtual world right onto the screen in your living room, coming later this year.
78. A new tool for VR developers, Instant Preview, lets developers make changes on a computer and see them reflected on a headset in seconds, not minutes.
79. Seurat is a new technology that makes it possible to render high-fidelity scenes on mobile VR headsets in real time. Somebody warn Cameron Frye.
80. We’re releasing an experimental build of Chromium with an augmented reality API, to help bring AR to the web.

YouTube

81. Soon you’ll be able to watch and control 360-degree YouTube videos and live streams on your TV, and use your game controller or remote to pan around an immersive experience.
82. Super Chat lets fans interact directly with YouTube creators during live streams by purchasing highlighted chat messages that stay pinned to the top of the chat window. We previewed a developer integration that showed how the Super Chat API can be used to trigger actions in the real world—such as turning the lights on and off in a creator’s apartment.
83. A new feature in the YouTube VR app will soon let people watch and discuss videos together.

Developer tools

84. We announced that we will make Fabric’s Crashlytics the primary crash reporting product in Firebase.
85.  We’re bringing phone number authentication to Firebase, working closely with the Fabric Digits team, so your users can sign in to your apps with their phone numbers.
86. New Firebase Performance Monitoring will help diagnose issues resulting from poorly performing code or challenging network conditions.
87. We’ve improved Firebase Cloud Messaging.
88. For game developers, we’ve built Game Loop support & FPS monitoring into Test Lab for Android, allowing you to evaluate your game’s frame rate before you deploy.
89. We’ve taken some big steps to open source many of our Firebase SDKs on GitHub.
90. We’re expanding Firebase Hosting to integrate with Cloud Functions, letting you do things like send a notification when a user signs up or automatically create thumbnails when an image is uploaded to Cloud Storage.
91. Developers interested in testing the cutting edge of our products can now sign up for a Firebase Alpha program.
92. We’re adding two new certifications for web developers, in addition to the Associate Android Developer Certification announced last year.
93. We opened an Early Access Program for Chatbase, a new analytics tool in API.ai that helps developers monitor the activity in their chatbots.
94. We’ve completely redesigned AdMob, which helps developers promote, measure and monetize mobile apps, with a new user flow and publisher controls.
95. AdMob is also now integrated with Google Analytics for Firebase, giving developers a complete picture of ads revenue, mediation revenue and in-app purchase revenue in one place.
96. With a new Google Payment API, developers can enable easy in-app or online payments for customers who already have credit and debit cards stored on Google properties.
97. We’re introducing new ways for merchants to engage and reward customers, including the new Card Linked Offers API.
98. We’re introducing new options for ad placement through Universal App Campaigns to help users discover your apps in the Google Play Store.
99. An update to Smart Bidding strategies in Universal App Campaigns helps you gain high-value users of your apps—like players who level-up in your game or the loyal travelers who book several flights a month.
100. A new program, App Attribution Partners, integrates data into AdWords from seven third-party measurement providers so you can more easily find and take action on insights about how users engage with your app.
101. Firebase partnered up with Google Cloud to offer free storage for up to 10 gigabytes in BigQuery so you can quickly, easily and affordably run queries on it.

That’s all, folks! Thanks to everyone who joined us at I/O this year, whether in person, at an I/O Extended event or via the live stream. See you in 2018.


