
The ICLR2017 program is out


ICLR 2017 has just released its program (the Open Review site for the Workshop track is open as well).

Monday April 24, 2017
Morning Session
8.45 – 9.00 Opening Remarks
9.00 – 9.40 Invited talk 1: Eero Simoncelli
9.40 – 10.00 Contributed talk 1: End-to-end Optimized Image Compression
10.00 – 10.20 Contributed talk 2: Amortised MAP Inference for Image Super-resolution
10.20 – 10.30 Coffee Break
10.30 – 12.30 Poster Session 1
12.30 – 14.30 Lunch provided by ICLR
Afternoon Session
14.30 – 15.10 Invited talk 2: Benjamin Recht
15.10 – 15.30 Contributed Talk 3: Understanding deep learning requires rethinking generalization – BEST PAPER AWARD
16.10 – 16.30 Coffee Break
16.30 – 18.30 Poster Session 2
Tuesday April 25, 2017
Morning Session
9.00 – 9.40 Invited talk 1: Chloe Azencott
9.40 – 10.00 Contributed talk 1: Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data – BEST PAPER AWARD
10.00 – 10.20 Contributed talk 2: Learning Graphical State Transitions
10.20 – 10.30 Coffee Break
10.30 – 12.30 Poster Session 1
12.30 – 14.30 Lunch provided by ICLR
Afternoon Session
14.30 – 15.10 Invited talk 2: Riccardo Zecchina
15.10 – 15.30 Contributed Talk 3: Learning to Act by Predicting the Future
16.10 – 16.30 Coffee Break
16.30 – 18.30 Poster Session 2
19.00 – 21.00 Gala dinner offered by ICLR
Wednesday April 26, 2017
Morning Session
9.00 – 9.40 Invited talk 1: Regina Barzilay
9.40 – 10.00 Contributed talk 1: Learning End-to-End Goal-Oriented Dialog
10.00 – 10.30 Coffee Break
10.30 – 12.30 Poster Session 1
12.30 – 14.30 Lunch provided by ICLR
Afternoon Session
14.30 – 15.10 Invited talk 2: Alex Graves
15.10 – 15.30 Contributed Talk 3: Making Neural Programming Architectures Generalize via Recursion – BEST PAPER AWARD
15.30 – 15.50 Contributed Talk 4: Neural Architecture Search with Reinforcement Learning
15.50 – 16.10 Contributed Talk 5: Optimization as a Model for Few-Shot Learning
16.10 – 16.30 Coffee Break
16.30 – 18.30 Poster Session 2







Photo credit: By BaptisteMPM — Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=37629070

Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there!
Liked this entry? Subscribe to Nuit Blanche’s feed, there’s more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.




Exposure to Stan has changed my defaults: a non-haiku


Now when I look at my old R code, it looks really weird because there are no semicolons
Each line of code just looks incomplete
As if I were writing my sentences like this
Whassup with that, huh
Also can I please no longer do <-
I much prefer =
Please

The post Exposure to Stan has changed my defaults: a non-haiku appeared first on Statistical Modeling, Causal Inference, and Social Science.




When computers learn to swear: Using machine learning for better online conversations


Imagine trying to have a conversation with your friends about the news you read this morning, but every time you said something, someone shouted in your face, called you a nasty name or accused you of some awful crime. You’d probably leave the conversation. Unfortunately, this happens all too frequently online as people try to discuss ideas on their favorite news sites but instead get bombarded with toxic comments.  

Seventy-two percent of American internet users have witnessed harassment online and nearly half have personally experienced it. Almost a third self-censor what they post online for fear of retribution. According to the same report, online harassment has affected the lives of roughly 140 million people in the U.S., and many more elsewhere.

This problem doesn’t just impact online readers. News organizations want to encourage engagement and discussion around their content, but find that sorting through millions of comments to find those that are trolling or abusive takes a lot of money, labor, and time. As a result, many sites have shut down comments altogether. But they tell us that isn’t the solution they want. We think technology can help.

Today, Google and Jigsaw are launching Perspective, an early-stage technology that uses machine learning to help identify toxic comments. Through an API, publishers—including members of the Digital News Initiative—and platforms can access this technology and use it for their sites.

How it works

Perspective reviews comments and scores them based on how similar they are to comments people said were “toxic” or likely to make someone leave a conversation. To learn how to spot potentially toxic language, Perspective examined hundreds of thousands of comments that had been labeled by human reviewers. Each time Perspective finds new examples of potentially toxic comments, or is provided with corrections from users, it can get better at scoring future comments.
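
For a sense of what this might look like in practice, here is a minimal, unofficial sketch in R of the kind of request a publisher could send. The endpoint URL, the request fields, the TOXICITY attribute and the score_comment helper are assumptions drawn from Perspective's public API documentation rather than from this post, and my_api_key stands in for a real key.

library(httr)

# Sketch only: the endpoint and field names below are assumed from the public
# Perspective API docs, not stated in this post.
# Ask the Comment Analyzer endpoint to score one comment for toxicity and
# return the summary score (roughly a probability in [0, 1]; higher = more toxic).
score_comment <- function(text, api_key) {
  res <- POST(
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze",
    query = list(key = api_key),
    body = list(
      comment = list(text = text),
      requestedAttributes = list(TOXICITY = list(scoreType = "PROBABILITY"))
    ),
    encode = "json"
  )
  content(res, as = "parsed")$attributeScores$TOXICITY$summaryScore$value
}

# Usage with a hypothetical key:
# score_comment("what a dumb idea", my_api_key)

A publisher could then flag, sort or surface comments based on that score, along the lines described below.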

Publishers can choose what they want to do with the information they get from Perspective. For example, a publisher could flag comments for its own moderators to review and decide whether to include them in a conversation. Or a publisher could provide tools to help their community understand the impact of what they are writing—by, for example, letting the commenter see the potential toxicity of their comment as they write it. Publishers could even just allow readers to sort comments by toxicity themselves, making it easier to find great discussions hidden under toxic ones.

We’ve been testing a version of this technology with The New York Times, where an entire team sifts through and moderates each comment before it’s posted—reviewing an average of 11,000 comments every day. That’s a lot of comments. As a result the Times has comments on only about 10 percent of its articles. We’ve worked together to train models that allow Times moderators to sort through comments more quickly, and we’ll work with them to enable comments on more articles every day.

Where we go from here

Perspective joins the TensorFlow library and the Cloud Machine Learning Platform as one of many new machine learning resources Google has made available to developers. This technology is still developing. But that’s what’s so great about machine learning—even though the models are complex, they’ll improve over time. When Perspective is in the hands of publishers, it will be exposed to more comments and develop a better understanding of what makes certain comments toxic.

While we improve the technology, we’re also working to expand it. Our first model is designed to spot toxic language, but over the next year we’re keen to partner and deliver new models that work in languages other than English as well as models that can identify other perspectives, such as when comments are unsubstantial or off-topic.

In the long run, Perspective is about more than just improving comments. We hope we can help improve conversations online.




When Size Matters: Weighted Effect Coding


Categorical variables in regression models are often included via dummy variables. In R, this is done with factor variables using treatment coding. Typically, the difference and significance of each category are tested against a preselected reference category. We present a useful alternative.

If all categories have (roughly) the same number of observations, you can also test all categories against the grand mean using effect (ANOVA) coding. In observational studies, however, the number of observations per category typically varies. Our new paper shows how categories of a factor variable can be tested against the sample mean. Although the paper has been online for some time now (and this post is an update to an earlier post from some time ago), we are happy to announce that it has now officially been published in the International Journal of Public Health.

To apply these procedures, called weighted effect coding, routines are available for R, SPSS, and Stata. For R, we created the ‘wec’ package, which can be installed by typing:

install.packages("wec")
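
As a brief sketch of how this might be used (this reflects my reading of the package interface; the package vignette and the papers below are the definitive reference), the lines that follow apply weighted effect coding to an unbalanced factor from the standard mtcars dataset and fit a linear model, so that the intercept estimates the sample mean and each coefficient a category's deviation from it.

library(wec)

# Unbalanced categorical predictor: mtcars has 11, 7 and 14 cars with
# 4, 6 and 8 cylinders respectively.
mtcars$cyl <- factor(mtcars$cyl)

# Weighted effect coding: each category is contrasted against the sample
# mean, weighted by how often it occurs ("4" is the omitted category here).
# Note: contr.wec(x, omitted) follows my reading of the wec documentation.
contrasts(mtcars$cyl) <- contr.wec(mtcars$cyl, omitted = "4")

fit <- lm(mpg ~ cyl, data = mtcars)
summary(fit)
# Intercept ~ overall mean of mpg; the cyl coefficients are deviations from
# the sample mean rather than from a reference group.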

References

Grotenhuis, M., Pelzer, B., Eisinga, R., Nieuwenhuis, R., Schmidt-Catran, A., & Konig, R. (2017). When size matters: advantages of weighted effect coding in observational studies. International Journal of Public Health, 62, 163–167. http://doi.org/10.1007/s00038-016-0901-1

Sweeney, R., & Ulveling, E. F. (1972). A transformation for simplifying the interpretation of coefficients of binary variables in regression analysis. The American Statistician, 26, 30–32.






Roadmap to High Performance Computing


High-performance computing is not only about running advanced applications; it is transforming the way the world works. Michael Keegan (International Business Leader, Chairman and Board Member), Andy Stevenson (Head of Middle East, Turkey and India, and Managing Director for India) and Ravi Krishnamoorthi (Senior Vice President and Head of Business Consulting) of Fujitsu reveal a transformational roadmap, built on HPC, for how entities in a connected world can function, in conversation with Shanosh Kumar from the EFY Group.


Michael Keegan

Q. How does high-performance computing address classic problems of automation?

A. When we look at the most technologically advanced nations, HPC is used to bring in private-sector research to deliver solutions. Take the example of a seaport in Singapore. HPC systems there help optimise the flow of cargo in and out of the port, interacting directly with ships and guiding them. They also make sure the inflow and outflow of cargo is handled by analysing traffic patterns near the port. We can see a real crossover from research to impact.

Q. What are the infrastructural demands required to support smart environments?

Ravi Krishnamoorthi

A. Data, which exists in both structured and unstructured form, can now be sourced from multiple devices and mediums, and intelligent analysis of it can help analytics engines make knowledgeable decisions for us. That is intelligent analytics over big data.

Q. Can HPC address India-specific problems?

A. Michael: This is a big growth area in India: there is more data out there, and the requirement to gather intelligent information and insight backed by data is driving an increase in high-performance supercomputing capabilities. This becomes the fundamental engine that underpins digital transformation.

Andy Stevenson

There are some problems specific to the Indian context: last-mile connectivity, front-end security, a greater presence of unstructured than structured data, and legacy systems still in operation. If India is to realise its smart-city initiatives and become Digital India, infrastructure connectivity and infrastructure demand need to be sorted out, says Ravi.

Q. What factors are pushing corporations to link up every process to cloud platforms and HPC?

A. Andy: Timing is everything. The big macroeconomic factors with respect to HPC are that the price of computing is coming down and the ability to process structured and unstructured data is becoming cheaper. Connectivity and bandwidth are improving all the time. We also have a population that is increasingly impatient for information, wanting to make more intelligent decisions based on data rather than belief.

Ravi: To add to that, the consumer’s mindset revolves around mobility and access, and dependence on technology drives the whole data spectrum. This is making a whole lot of processes automated. Interoperability will happen.

Q. What is your take on interoperability?

A. Andy: Open standards make more room for interoperability. There used to be very few true open standards. Software innovation and a growing number of people, including those who run big businesses, now look to open-standard software as a key asset. The age of proprietary software is going away. Transformation will take place when a farmer, or anyone else, can use this information through a smartphone-like device backed by compute power somewhere.

Q. How accurate are HPC systems post their operational deployment?

A. Andy: Looking across the spectrum, HPC has some problem domains that do not lend themselves to end-point devices. For example, today’s algorithms and models can gauge the impact of weather down to about a one-kilometre grid. The Meteorological Department of India is trying to take the next generation of models down to a grid of less than 100 metres on a global scale. We can imagine the amount of insight generated as the analytics engine takes into account ocean currents, temperature, convection currents and all sorts of other parameters.

Q. Would the future of compute power lie in a distributed environment or in the palm of our hands, given shrinking device sizes?

A. Michael: With respect to HPC, the platform has been part of various deployments across the world; meteorological departments and tsunami alert systems all use it to run simulations and make predictions. Going forward, the Spark HPC platform will be combined with ARM-based technology, and we will be looking to provide successive roadmaps, particularly in the mobile space; this collaboration will be the future direction.

Q. What is your take on security on HPC?

A. Michael: The approach to security depends on what business or consumer services are made available. We have learned a lot from the banking industry, for example in using internet security protocols for safe payments.

We also see a rise in cybercrime. The first threat is the insider threat, such as fraud by people who have privileged access and are able to cause loss of money or data. We can protect against those through process, by controlling access and assessing risk. These are major threats from a customer standpoint. Today we can limit access to certain people simply by giving them a wearable.


The post Roadmap to High Performance Computing appeared first on Internet Of Things | IoT India.
