
Google Cloud supports $3M in grant credits for the NSF BIGDATA program

Filed under: machinelearning, TensorFlow

Google Cloud Platform (GCP) serves more than one billion end-users, and we continue to seek ways to give researchers access to these powerful tools. Through the National Science Foundation’s BIGDATA grants program, we’re offering researchers $3M in Google Cloud Platform credits to use the same infrastructure, analytics and machine learning that we use to drive innovation at Google.

About the BIGDATA grants

The National Science Foundation (NSF) recently announced its flagship research program on big data, Critical Techniques, Technologies and Methodologies for Advancing Foundations and Applications of Big Data Sciences and Engineering (BIGDATA). The BIGDATA program encourages experimentation with datasets at scale. Google will provide cloud credits to qualifying NSF-funded projects, giving researchers access to the breadth of services on GCP, from scalable data management (Google Cloud Storage, Google Cloud Bigtable, Google Cloud Datastore) to analysis (Google BigQuery, Google Cloud Dataflow, Google Cloud Dataproc, Google Cloud Datalab, Google Genomics) to machine learning (Google Cloud Machine Learning, TensorFlow).

This collaboration combines NSF’s experience in managing diverse research portfolios with Google’s proven track record in secure and intelligent cloud computing and data science. NSF is accepting proposals from March 15 through March 22, 2017. All proposals that meet NSF requirements will be reviewed through NSF’s merit review process.

GCP in action at Stanford University

To get an idea of the potential impact of GCP, consider Stanford University’s Center for Genomics and Personalized Medicine, where scientists work with data at a massive scale. Director Mike Snyder and his lab have been involved in a number of large efforts, from ENCODE to the Million Veteran Program. Snyder and his colleagues turned to Google Genomics, which gives scientists access to GCP to help secure, store, process, explore and share biological datasets. With the costs of cloud computing dropping significantly and demand for ever-larger genomics studies growing, Snyder thinks fewer labs will continue relying on local infrastructure.

“We’re entering an era where people are working with thousands or tens of thousands or even million-genome projects, and you’re never going to do that on a local cluster very easily,” he says. “Cloud computing is where the field is going.”

“What you can do with Google Genomics — and you can’t do in-house — is run 1,000 genomes in parallel,” says Somalee Datta, bioinformatics director of Stanford University’s Center for Genomics and Personalized Medicine. “From our point of view, it’s almost infinite resources.”



Source link

Exposure to Stan has changed my defaults: a non-haiku

Filed under: ai, bigdata, machinelearning

(This article was originally published at Statistical Modeling, Causal Inference, and Social Science, and syndicated at StatsBlogs.)

Now when I look at my old R code, it looks really weird because there are no semicolons
Each line of code just looks incomplete
As if I were writing my sentences like this
Whassup with that, huh
Also can I please no longer do <-
I much prefer =
Please

The post Exposure to Stan has changed my defaults: a non-haiku appeared first on Statistical Modeling, Causal Inference, and Social Science.

Source link


The ICLR2017 program is out

Filed under: machinelearning

ICLR2017 has just released its program (the open review site for the Workshop track is open as well).

Monday April 24, 2017
Morning Session
8.45 – 9.00 Opening Remarks
9.00 – 9.40 Invited talk 1: Eero Simoncelli
9.40 – 10.00 Contributed talk 1: End-to-end Optimized Image Compression
10.00 – 10.20 Contributed talk 2: Amortised MAP Inference for Image Super-resolution
10.20 – 10.30 Coffee Break
10.30 – 12.30 Poster Session 1
12.30 – 14.30 Lunch provided by ICLR
Afternoon Session
14.30 – 15.10 Invited talk 2: Benjamin Recht
15.10 – 15.30 Contributed Talk 3: Understanding deep learning requires rethinking generalization – BEST PAPER AWARD
16.10 – 16.30 Coffee Break
16.30 – 18.30 Poster Session 2
Tuesday April 25, 2017
Morning Session
9.00 – 9.40 Invited talk 1: Chloe Azencott
9.40 – 10.00 Contributed talk 1: Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data – BEST PAPER AWARD
10.00 – 10.20 Contributed talk 2: Learning Graphical State Transitions
10.20 – 10.30 Coffee Break
10.30 – 12.30 Poster Session 1
12.30 – 14.30 Lunch provided by ICLR
Afternoon Session
14.30 – 15.10 Invited talk 2: Riccardo Zecchina
15.10 – 15.30 Contributed Talk 3: Learning to Act by Predicting the Future
16.10 – 16.30 Coffee Break
16.30 – 18.30 Poster Session 2
19.00 – 21.00 Gala dinner offered by ICLR
Wednesday April 26, 2017
Morning Session
9.00 – 9.40 Invited talk 1: Regina Barzilay
9.40 – 10.00 Contributed talk 1: Learning End-to-End Goal-Oriented Dialog
10.00 – 10.30 Coffee Break
10.30 – 12.30 Poster Session 1
12.30 – 14.30 Lunch provided by ICLR
Afternoon Session
14.30 – 15.10 Invited talk 2: Alex Graves
15.10 – 15.30 Contributed Talk 3: Making Neural Programming Architectures Generalize via Recursion – BEST PAPER AWARD
15.30 – 15.50 Contributed Talk 4: Neural Architecture Search with Reinforcement Learning
15.50 – 16.10 Contributed Talk 5: Optimization as a Model for Few-Shot Learning
16.10 – 16.30 Coffee Break
16.30 – 18.30 Poster Session 2

Photo credit: BaptisteMPM, own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=37629070

Source link


When computers learn to swear: Using machine learning for better online conversations

Filed under: machinelearning, TensorFlow

Imagine trying to have a conversation with your friends about the news you read this morning, but every time you said something, someone shouted in your face, called you a nasty name or accused you of some awful crime. You’d probably leave the conversation. Unfortunately, this happens all too frequently online as people try to discuss ideas on their favorite news sites but instead get bombarded with toxic comments.  

Seventy-two percent of American internet users have witnessed harassment online and nearly half have personally experienced it. Almost a third self-censor what they post online for fear of retribution. According to the same report, online harassment has affected the lives of roughly 140 million people in the U.S., and many more elsewhere.

This problem doesn’t just impact online readers. News organizations want to encourage engagement and discussion around their content, but find that sorting through millions of comments to find those that are trolling or abusive takes a lot of money, labor, and time. As a result, many sites have shut down comments altogether. But they tell us that isn’t the solution they want. We think technology can help.

Today, Google and Jigsaw are launching Perspective, an early-stage technology that uses machine learning to help identify toxic comments. Through an API, publishers—including members of the Digital News Initiative—and platforms can access this technology and use it for their sites.

How it works

Perspective reviews comments and scores them based on how similar they are to comments people said were “toxic” or likely to make someone leave a conversation. To learn how to spot potentially toxic language, Perspective examined hundreds of thousands of comments that had been labeled by human reviewers. Each time Perspective finds new examples of potentially toxic comments, or is provided with corrections from users, it can get better at scoring future comments.
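
To make this concrete, here is a rough sketch (not an official client library) of how a publisher might request a toxicity score from R. It assumes the Comment Analyzer REST endpoint and an API key stored in a PERSPECTIVE_API_KEY environment variable; the helper name and sample comments are invented for illustration.

library(httr)

# Sketch: request a TOXICITY score for one comment. Assumes an API key
# in the PERSPECTIVE_API_KEY environment variable; the helper is illustrative.
score_toxicity <- function(text) {
  res <- POST(
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze",
    query = list(key = Sys.getenv("PERSPECTIVE_API_KEY")),
    body = list(
      comment = list(text = text),
      requestedAttributes = list(TOXICITY = list(scoreType = "PROBABILITY"))
    ),
    encode = "json"
  )
  stop_for_status(res)
  # The summary score is a probability-like value between 0 and 1.
  content(res, as = "parsed")$attributeScores$TOXICITY$summaryScore$value
}

# Example: rank comments from least to most toxic before display.
comments <- c("Thanks, that was a thoughtful reply.",
              "You are an idiot and should leave.")
scores <- vapply(comments, score_toxicity, numeric(1))
comments[order(scores)]

A score near 1 means the comment closely resembles those human reviewers labeled toxic; publishers decide what threshold, if any, fits their community.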

Publishers can choose what they want to do with the information they get from Perspective. For example, a publisher could flag comments for its own moderators to review and decide whether to include them in a conversation. Or a publisher could provide tools to help its community understand the impact of what they are writing—by, for example, letting commenters see the potential toxicity of their comments as they write them. Publishers could even just allow readers to sort comments by toxicity themselves, making it easier to find great discussions hidden under toxic ones.

We’ve been testing a version of this technology with The New York Times, where an entire team sifts through and moderates each comment before it’s posted—reviewing an average of 11,000 comments every day. That’s a lot of comments. As a result, the Times has comments on only about 10 percent of its articles. We’ve worked together to train models that allow Times moderators to sort through comments more quickly, and we’ll work with them to enable comments on more articles every day.

Where we go from here

Perspective joins the TensorFlow library and the Cloud Machine Learning Platform as one of many new machine learning resources Google has made available to developers. This technology is still developing. But that’s what’s so great about machine learning—even though the models are complex, they’ll improve over time. When Perspective is in the hands of publishers, it will be exposed to more comments and develop a better understanding of what makes certain comments toxic.

While we improve the technology, we’re also working to expand it. Our first model is designed to spot toxic language, but over the next year we’re keen to partner and deliver new models that work in languages other than English as well as models that can identify other perspectives, such as when comments are unsubstantial or off-topic.

In the long run, Perspective is about more than just improving comments. We hope we can help improve conversations online.



Source link

When Size Matters: Weighted Effect Coding

Filed under: ai, bigdata, machinelearning

Categorical variables in regression models are often included as dummy variables. In R, this is done with factor variables with treatment coding. Typically, the difference and significance of each category are tested against a preselected reference category. We present a useful alternative.

If all categories have (roughly) the same number of observations, you can also test all categories against the grand mean using effect (ANOVA) coding. In observational studies, however, the number of observations per category typically varies. Our new paper shows how the categories of a factor variable can be tested against the sample mean. Although the paper has been online for some time now (and this post is an update to an earlier post from some time ago), we are happy to announce that our paper has now officially been published in the International Journal of Public Health.
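
In a nutshell, and using generic notation rather than the paper’s: with weighted effect coding, the coefficient for category j estimates how far that category’s mean lies from the sample mean, with categories weighted by their size,

β_j = μ_j − μ̄,  where  μ̄ = (1/n) Σ_k n_k μ_k.

Because μ̄ is the weighted mean, it coincides with the overall sample mean even when the design is unbalanced, which is exactly the situation in most observational studies.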

To apply weighted effect coding, as introduced in these papers, procedures are made available for R, SPSS, and Stata. For R, we created the ‘wec’ package, which can be installed by typing:

install.packages("wec")
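
Once installed, typical usage looks roughly like the sketch below; the data and variable names are invented for illustration, and contr.wec() reflects our reading of the package interface, so consult the package documentation for the authoritative details.

library(wec)

# Invented, unbalanced example data: outcome y and a three-level factor.
set.seed(1)
region <- factor(sample(c("east", "north", "south"), size = 200,
                        replace = TRUE, prob = c(0.2, 0.5, 0.3)))
mu <- c(east = 3, north = 1, south = 2)
y <- rnorm(200, mean = mu[as.character(region)])

# Weighted effect coding: each listed category is tested against the
# weighted sample mean; one category has to be omitted.
contrasts(region) <- contr.wec(region, omitted = "north")
summary(lm(y ~ region))

# Re-estimate with another omitted category to obtain its estimate too.
contrasts(region) <- contr.wec(region, omitted = "east")
summary(lm(y ~ region))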

References

Grotenhuis, M., Pelzer, B., Eisinga, R., Nieuwenhuis, R., Schmidt-Catran, A., & Konig, R. (2017). When size matters: advantages of weighted effect coding in observational studies. International Journal of Public Health, 62, 163–167. http://doi.org/10.1007/s00038-016-0901-1

Sweeney, R., & Ulveling, E. F. (1972). A transformation for simplifying the interpretation of coefficients of binary variables in regression analysis. The American Statistician, 26, 30–32.




Source link