
The ICLR2017 program is out


ICLR 2017 has just released its program (the OpenReview site for the Workshop track is also open).

Monday April 24, 2017
Morning Session
8.45 – 9.00 Opening Remarks
9.00 – 9.40 Invited talk 1: Eero Simoncelli
9.40 – 10.00 Contributed talk 1: End-to-end Optimized Image Compression
10.00 – 10.20 Contributed talk 2: Amortised MAP Inference for Image Super-resolution
10.20 – 10.30 Coffee Break
10.30 – 12.30 Poster Session 1
12.30 – 14.30 Lunch provided by ICLR
Afternoon Session
14.30 – 15.10 Invited talk 2: Benjamin Recht
15.10 – 15.30 Contributed Talk 3: Understanding deep learning requires rethinking generalization – BEST PAPER AWARD
16.10 – 16.30 Coffee Break
16.30 – 18.30 Poster Session 2
Tuesday April 25, 2017
Morning Session
9.00 – 9.40 Invited talk 1: Chloe Azencott
9.40 – 10.00 Contributed talk 1: Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data – BEST PAPER AWARD
10.00 – 10.20 Contributed talk 2: Learning Graphical State Transitions
10.20 – 10.30 Coffee Break
10.30 – 12.30 Poster Session 1
12.30 – 14.30 Lunch provided by ICLR
Afternoon Session
14.30 – 15.10 Invited talk 2: Riccardo Zecchina
15.10 – 15.30 Contributed Talk 3: Learning to Act by Predicting the Future
16.10 – 16.30 Coffee Break
16.30 – 18.30 Poster Session 2
19.00 – 21.00 Gala dinner offered by ICLR
Wednesday April 26, 2017
Morning Session
9.00 – 9.40 Invited talk 1: Regina Barzilay
9.40 – 10.00 Contributed talk 1: Learning End-to-End Goal-Oriented Dialog
10.00 – 10.30 Coffee Break
10.30 – 12.30 Poster Session 1
12.30 – 14.30 Lunch provided by ICLR
Afternoon Session
14.30 – 15.10 Invited talk 2: Alex Graves
15.10 – 15.30 Contributed Talk 3: Making Neural Programming Architectures Generalize via Recursion – BEST PAPER AWARD
15.30 – 15.50 Contributed Talk 4: Neural Architecture Search with Reinforcement Learning
15.50 – 16.10 Contributed Talk 5: Optimization as a Model for Few-Shot Learning
16.10 – 16.30 Coffee Break
16.30 – 18.30 Poster Session 2







Photo credit: BaptisteMPM, own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=37629070

Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there!
Liked this entry? Subscribe to Nuit Blanche’s feed, there’s more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.



Source link

Exposure to Stan has changed my defaults: a non-haiku


Now when I look at my old R code, it looks really weird because there are no semicolons
Each line of code just looks incomplete
As if I were writing my sentences like this
Whassup with that, huh
Also can I please no longer do <-
I much prefer =
Please

The post Exposure to Stan has changed my defaults: a non-haiku appeared first on Statistical Modeling, Causal Inference, and Social Science.




When computers learn to swear: Using machine learning for better online conversations


Imagine trying to have a conversation with your friends about the news you read this morning, but every time you said something, someone shouted in your face, called you a nasty name or accused you of some awful crime. You’d probably leave the conversation. Unfortunately, this happens all too frequently online as people try to discuss ideas on their favorite news sites but instead get bombarded with toxic comments.  

Seventy-two percent of American internet users have witnessed harassment online and nearly half have personally experienced it. Almost a third self-censor what they post online for fear of retribution. According to the same report, online harassment has affected the lives of roughly 140 million people in the U.S., and many more elsewhere.

This problem doesn’t just impact online readers. News organizations want to encourage engagement and discussion around their content, but find that sorting through millions of comments to find those that are trolling or abusive takes a lot of money, labor, and time. As a result, many sites have shut down comments altogether. But they tell us that isn’t the solution they want. We think technology can help.

Today, Google and Jigsaw are launching Perspective, an early-stage technology that uses machine learning to help identify toxic comments. Through an API, publishers—including members of the Digital News Initiative—and platforms can access this technology and use it for their sites.

How it works

Perspective reviews comments and scores them based on how similar they are to comments people said were “toxic” or likely to make someone leave a conversation. To learn how to spot potentially toxic language, Perspective examined hundreds of thousands of comments that had been labeled by human reviewers. Each time Perspective finds new examples of potentially toxic comments, or is provided with corrections from users, it can get better at scoring future comments.
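For readers curious what "access this technology through an API" looks like in practice, here is a minimal Python sketch using only the standard library. The endpoint path, attribute name, and response shape below reflect Perspective's public documentation at launch (v1alpha1, "TOXICITY") but should be treated as assumptions rather than a definitive reference; the API key is a placeholder.

```python
import json
import urllib.request

# Endpoint and attribute name are assumptions based on the public
# Perspective documentation at launch (v1alpha1, "TOXICITY").
API_KEY = "YOUR_API_KEY"  # placeholder: request your own key
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

# Score a single comment, requesting only the toxicity attribute
payload = {
    "comment": {"text": "Imagine trying to have a conversation..."},
    "requestedAttributes": {"TOXICITY": {}},
}
request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# With a valid key, the response carries a summary score in [0, 1]:
# response = json.load(urllib.request.urlopen(request))
# score = response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```

A publisher could then threshold or sort on that score, for example flagging comments above 0.9 for human review.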

Publishers can choose what they want to do with the information they get from Perspective. For example, a publisher could flag comments for its own moderators to review and decide whether to include them in a conversation. Or a publisher could provide tools to help their community understand the impact of what they are writing—by, for example, letting the commenter see the potential toxicity of their comment as they write it. Publishers could even just allow readers to sort comments by toxicity themselves, making it easier to find great discussions hidden under toxic ones.

We’ve been testing a version of this technology with The New York Times, where an entire team sifts through and moderates each comment before it’s posted, reviewing an average of 11,000 comments every day. That’s a lot of comments. As a result, the Times has comments on only about 10 percent of its articles. We’ve worked together to train models that allow Times moderators to sort through comments more quickly, and we’ll work with them to enable comments on more articles every day.

Where we go from here

Perspective joins the TensorFlow library and the Cloud Machine Learning Platform as one of many new machine learning resources Google has made available to developers. This technology is still developing. But that’s what’s so great about machine learning—even though the models are complex, they’ll improve over time. When Perspective is in the hands of publishers, it will be exposed to more comments and develop a better understanding of what makes certain comments toxic.

While we improve the technology, we’re also working to expand it. Our first model is designed to spot toxic language, but over the next year we’re keen to partner and deliver new models that work in languages other than English as well as models that can identify other perspectives, such as when comments are unsubstantial or off-topic.

In the long run, Perspective is about more than just improving comments. We hope we can help improve conversations online.




When Size Matters: Weighted Effect Coding


Categorical variables are often included in regression models as dummy variables. In R, this is done with factor variables using treatment coding. Typically, each category is tested for its difference from, and significance against, a preselected reference category. We present a useful alternative.

If all categories have (roughly) the same number of observations, you can also test all categories against the grand mean using effect (ANOVA) coding. In observational studies, however, the number of observations per category typically varies. Our new paper shows how the categories of a factor variable can instead be tested against the sample mean. Although the paper has been online for some time now (and this post is an update to an earlier post from some time ago), we are happy to announce that it has now officially been published in the International Journal of Public Health.

To apply the procedure introduced in these papers, called weighted effect coding, routines are available for R, SPSS, and Stata. For R, we created the 'wec' package, which can be installed by typing:

install.packages("wec")
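The coding scheme itself is easy to illustrate outside R as well. Below is a minimal sketch in Python (numpy, with made-up data; this is an illustration of the coding, not the 'wec' package) that builds a weighted-effect-coded design matrix by hand: observations in category j get 1 in column j, observations in the reference category get -n_j/n_ref, so every column sums to zero over the sample and the regression intercept estimates the sample mean rather than the reference-category mean.

```python
import numpy as np

# Toy factor with unequal group sizes, as in observational data
labels = np.array(["a"] * 5 + ["b"] * 3 + ["c"] * 2)
cats, counts = np.unique(labels, return_counts=True)
ref = "c"  # the omitted (reference) category
n_ref = counts[list(cats).index(ref)]

# One column per non-reference category
cols = [c for c in cats if c != ref]
X = np.zeros((len(labels), len(cols)))
for j, c in enumerate(cols):
    n_c = counts[list(cats).index(c)]
    X[labels == c, j] = 1.0          # members of category c
    X[labels == ref, j] = -n_c / n_ref  # reference rows get -n_c / n_ref

# Each column sums to zero over the sample, so the intercept
# recovers the sample mean even with unequal group sizes
print(X.sum(axis=0))  # -> [0. 0.]
```

With ordinary (unweighted) effect coding the reference rows would all be -1, and the columns would only sum to zero when the groups are equally sized.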

References

Grotenhuis, M., Pelzer, B., Eisinga, R., Nieuwenhuis, R., Schmidt-Catran, A., & Konig, R. (2017). When size matters: Advantages of weighted effect coding in observational studies. International Journal of Public Health, 62, 163–167. http://doi.org/10.1007/s00038-016-0901-1

Sweeney, R., & Ulveling, E. F. (1972). A transformation for simplifying the interpretation of coefficients of binary variables in regression analysis. The American Statistician, 26, 30–32.





Black Knight: Mortgage Delinquencies Declined in January



From Black Knight: Black Knight Financial Services’ First Look at January Mortgage Data: Impact of Rising Rates Felt as Prepayments Decline by 30 Percent in January

• Prepayment speeds (historically a good indicator of refinance activity) declined by 30 percent in January to the lowest level since February 2016

• Delinquencies improved by 3.9 percent from December and were down 17 percent from January 2016

• Foreclosure starts rose 18 percent for the month; January’s 70,400 starts were the most since March 2016

• 2.6 million borrowers are behind on mortgage payments, the lowest number since August 2006, immediately following the pre-crisis national peak in home prices

According to Black Knight’s First Look report for January, the percent of loans delinquent decreased 3.9% in January compared to December, and declined 16.6% year-over-year.

The percent of loans in the foreclosure process declined 0.5% in January and was down 27.6% over the last year.

Black Knight reported the U.S. mortgage delinquency rate (loans 30 or more days past due, but not in foreclosure) was 4.25% in January, down from 4.42% in December.

The percent of loans in the foreclosure process declined in January to 0.94%.

The number of delinquent properties, but not in foreclosure, is down 413,000 properties year-over-year, and the number of properties in the foreclosure process is down 178,000 properties year-over-year.

Black Knight: Percent Loans Delinquent and in Foreclosure Process

                                        Jan 2017   Dec 2016   Jan 2016   Jan 2015
Delinquent                                 4.25%      4.42%      5.09%      5.48%
In Foreclosure                             0.94%      0.95%      1.30%      1.76%

Number of properties:
Delinquent, but not in foreclosure     2,162,000  2,248,000  2,575,000  2,764,000
In foreclosure pre-sale inventory        481,000    483,000    659,000    885,000
Total Properties                       2,643,000  2,731,000  3,234,000  3,649,000
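The percent changes quoted above can be checked directly against the table. The small differences from the reported 3.9 and 16.6 percent declines arise because Black Knight computes changes on unrounded rates; a quick arithmetic sketch:

```python
# Delinquency rates (percent of loans 30+ days past due, not in foreclosure)
jan_2017, dec_2016, jan_2016 = 4.25, 4.42, 5.09

month_over_month = (jan_2017 / dec_2016 - 1) * 100  # vs. December 2016
year_over_year = (jan_2017 / jan_2016 - 1) * 100    # vs. January 2016

# Close to the reported -3.9% and -16.6% (those use unrounded rates)
print(round(month_over_month, 1), round(year_over_year, 1))  # -> -3.8 -16.5
```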





Is Rigor Contagious? (my talk next Monday 4:15pm at Columbia)


Is Rigor Contagious?

Much of the theory and practice of statistics and econometrics is characterized by a toxic mixture of rigor and sloppiness. Methods are justified based on seemingly pure principles that can’t survive reality. Examples of these principles include random sampling, unbiased estimation, hypothesis testing, Bayesian inference, and causal identification. Examples of uncomfortable reality include nonresponse, varying effects, researcher degrees of freedom, actual prior information, and the desire for external validity. We discuss a series of scenarios where researchers naively think that rigor in one part of their design and analysis will assure rigor on their larger conclusions, and then we discuss possible hierarchical Bayesian solutions in which the load of rigor is more evenly balanced across the chain of scientific reasoning.

The talk (for the Sustainable Development seminar) will be Mon 27 Feb, 4:15-5:45, in room 801 International Affairs Building at Columbia.

The post Is Rigor Contagious? (my talk next Monday 4:15pm at Columbia) appeared first on Statistical Modeling, Causal Inference, and Social Science.


TOSR04 – 4 Channel Smartphone Bluetooth Relay Kit – (Android/iOS)


Description:
The TOSR04 Bluetooth relay kit supports both Android and iOS. It is a convenient, easy-to-use product that can be used to control electrical equipment. Pair it with a mobile phone or computer: open the app, choose Connect Device, and search for a new Bluetooth device. The module appears as “BT Bee-BLE” (iOS) or “BT Bee-EDR” (Android). Once connected, you can switch the relays on and off. The communication method is also easy to change: replace the Bluetooth Bee module with a WiFi Bee and the board becomes a WiFi relay.

The TOSR04 also allows computer-controlled switching of external devices through your computer's USB port. In addition, it has a wireless extension port that accepts an XBee, Bluetooth Bee, or WiFi Bee module, so you can control your devices via ZigBee, Bluetooth, or WiFi.

The TOSR04 provides four volt-free contact relay outputs with a current rating of up to 10 A each, plus one port for a DS18B20 temperature sensor. The module is powered from a 5 V DC supply (or USB). The DC input jack is 2.1 mm with positive core polarity, and the relays are SPDT types. To power the board from 12 V or 24 V DC, use a 12 V/24 V-to-5 V DC converter.
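In USB mode the board shows up as a serial port, so switching a relay amounts to writing a command byte. A hedged Python sketch follows; note that the command codes used here are placeholders for illustration only, not the board's real protocol (the actual command set is defined in the documentation linked at the end of this post).

```python
# The command codes below are PLACEHOLDERS for illustration only --
# consult the TOSR0x documentation for the real command set.
ON_BASE, OFF_BASE = 0x64, 0x6E  # hypothetical on/off base codes

def relay_command(relay: int, on: bool) -> bytes:
    """Build a one-byte command for relays 1-4 (hypothetical framing)."""
    if not 1 <= relay <= 4:
        raise ValueError("TOSR04 has four relays, numbered 1-4")
    return bytes([(ON_BASE if on else OFF_BASE) + relay])

# Typical usage over the board's USB serial port (pyserial, not run here):
# import serial
# with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
#     port.write(relay_command(1, on=True))
print(relay_command(1, True))  # -> b'e'
```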

Features
- Working voltage: 5 V DC
- Dimensions: 124 mm x 45 mm x 17.2 mm
- Standby current: 10 mA at 5 V DC
- Relay current rating: up to 10 A each

NOTE
If you want to use USB mode, remove the Bluetooth module while the board is powered off.

Please visit the following link to get all the documents:
https://s3-us-west-2.amazonaws.com/relaytinysine/TOSR0x+Relay+Board.zip

USB cable included.

$51.00