
What do women do in films? If you analyze the stage directions in film scripts — as Julia Silge, Russell Goldenberg and Amber Thomas have done for this visual essay for ThePudding — it seems that women (but not men) are written to snuggle, giggle and squeal, while men (but not women) shoot, gallop and strap things to other things.

This is all based on an analysis of almost 2,000 film scripts mostly from 1990 and after. The words come from pairs of words beginning with “he” and “she” in the stage directions (but not the dialogue) in the screenplays — directions like “she snuggles up to him, strokes his back” and “he straps on a holster under his sealskin cloak”. The essay also includes an analysis of words by the writer and character's gender, and includes lots of lovely interactive elements (including the ability to see examples of the stage directions).

The analysis, including the chart above, was created using the R language, and the R code is available on GitHub. The screenplay analysis makes use of the tidytext package, which simplifies the process of handling the text-based data (the screenplays), extracting the stage directions, and tabulating the word pairs.

You can find the complete essay linked below, and it's well worth checking out to experience the interactive elements.



After a blogging break caused by writing research papers, I managed to find time to write something new about time series forecasting. This time I want to share my experience with seasonal-trend time series forecasting using simple regression trees. Classification and regression trees (or decision trees) are a widely used machine learning method for modeling. They are popular for several reasons:

simple to understand (white box)

from a tree we can extract interpretable results and make simple decisions

they are helpful for exploratory analysis, as the binary structure of a tree is simple to visualize

very good prediction accuracy

very fast

they can easily be tuned by ensemble learning techniques

But! There is always some "but": they adapt poorly when new, unexpected situations (values) appear. In other words, they cannot detect and adapt to change or concept drift well. This is because during learning a tree creates only simple rules based on the training data. A simple decision tree does not compute regression coefficients the way linear regression does, so trend modeling is not possible. You may ask, then, why are we talking about time series forecasting with regression trees at all? I will explain how to deal with this in more detail later in this post.

You will learn in this post how to:

decompose double-seasonal time series

detrend time series

model and forecast double-seasonal time series with trend

use two types of simple regression trees

set important hyperparameters of regression trees

Exploring time series data of electricity consumption

As in previous posts, I will use smart meter data of electricity consumption to demonstrate forecasting of seasonal time series. I created a new dataset of the aggregated electricity load of consumers from an anonymous area. The time series is 17 weeks long.

Firstly, let's load all of the packages needed for data analysis, modeling and visualization.
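The original code chunks live in the linked GitHub repository; a plausible loading block for the workflow described below (the exact package list may differ) is:

```r
library(feather)    # fast on-disk data frames (read_feather)
library(data.table) # data manipulation
library(rpart)      # CART regression trees
library(rpart.plot) # fancy tree plots
library(party)      # conditional inference trees (ctree)
library(forecast)   # msts, fourier, auto.arima
library(ggplot2)    # visualization
```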

Now read the time series data with read_feather into one data.table.

And store the date information and the period of the time series, which is 48 (half-hourly measurements per day).
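A sketch of these two steps, with an illustrative file name (the actual code is in the GitHub repository):

```r
# Read the smart meter data into a data.table
DT <- as.data.table(read_feather("DT_load_17weeks"))

# Store the dates and the period of the series:
# 48 half-hourly measurements per day
n_date <- unique(DT[, date])
period <- 48
```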

For data visualization, I store my favorite ggplot theme settings via the theme function.

Now, pick dates spanning three weeks from the dataset to split the data into train and test parts. The test set is only one day long because we will perform one-day-ahead forecasts of electricity consumption.

Let’s plot the train set and corresponding average weekly values of electricity load.

We can see an increasing trend over time; perhaps air conditioning is used more as it gets hotter in summer. The double-seasonal (daily and weekly) character of the time series is obvious.

A very useful method for visualization and analysis of time series is STL decomposition.
STL decomposition is based on Loess regression, and it decomposes time series to three parts: seasonal, trend and remainder.
We will use results from the STL decomposition to model our data as well.
I am using stl() from the stats package, and before the computation we must set a weekly seasonality on our time series object. Let's look at the results:
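A minimal sketch of this step, assuming the training values are stored in data_train$value (object names are illustrative):

```r
# Weekly seasonality: 48 half-hours * 7 days = 336 observations
data_ts <- ts(data_train$value, freq = period * 7)
decomp_ts <- stl(data_ts, s.window = "periodic", robust = TRUE)
plot(decomp_ts)
```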

As expected from the previous plot, we can see a trend that increases and decreases "slightly" (by around 100 kW, so not that slight after all).
The remainder part (noise) fluctuates strongly and does not look like classical white noise (we are obviously missing additional information such as weather and other unexpected events).

Constructing features to model

In this section I will engineer features for modeling the double-seasonal time series with trend as well as possible using only the available historical values.

The classical way to handle seasonality is to add seasonal features to a model as vectors of the form \( (1, \dots, DailyPeriod, 1, \dots, DailyPeriod, \dots) \) for the daily season or \( (1, \dots, 1, 2, \dots, 2, \dots, 7, 1, \dots) \) for the weekly season. I used this approach in my previous post about GAM, and in a somewhat similar way with multiple linear regression.

A better way to model seasonal variables (features) with nonlinear regression methods like trees is to transform them to Fourier terms (sine and cosine). This is more effective for tree models, and for other nonlinear machine learning methods as well. I will explain why later in this post.

The Fourier terms for the daily and weekly seasons have the form

\( \left( \sin\left(\frac{2\pi j t}{48}\right),\ \cos\left(\frac{2\pi j t}{48}\right) \right)_{j=1}^{ds}, \qquad \left( \sin\left(\frac{2\pi j t}{336}\right),\ \cos\left(\frac{2\pi j t}{336}\right) \right)_{j=1}^{ws}, \)

where \( ds \) is the number of daily seasonality Fourier pairs and \( ws \) is the number of weekly seasonality Fourier pairs.

Another great feature (often the most powerful one) is a lag of the original time series. We can lag by one day, one week, etc.
The lagged time series can be preprocessed by removing noise or the trend, for example with the STL decomposition method, to ensure stability.

As mentioned earlier, regression trees cannot predict a trend, because they only make rules and predict future values by the rules learned from the training set.
Therefore, the original time series that enters the regression tree as the dependent variable must be detrended (the trend part of the time series removed). The extracted trend part can then be forecasted separately, for example by an ARIMA model.

Let’s go to constructing mentioned features and trend forecasting.

Double-seasonal Fourier terms can be simply extracted by the fourier function from the forecast package.
First, we must create a multiple-seasonality time series object with the function msts.

Now use the fourier function with two values of K, one for each seasonality.
Set both K values, for example, to just 2.
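Sketched in code (object names are illustrative):

```r
# Multiple-seasonality time series: daily and weekly periods
data_msts <- msts(data_train$value, seasonal.periods = c(period, period * 7))

# K = c(2, 2): two Fourier pairs per seasonality, i.e. 8 columns
fourier_terms <- fourier(data_msts, K = c(2, 2))
```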

It made 2 pairs (sine and cosine) of daily and weekly seasonal signals.
If we compare them with the approach described in my previous posts, i.e. simple periodic vectors, they look like this:

where four-daily is the Fourier term for the daily season, simple-daily is the simple feature for the daily season, four-weekly is the Fourier term for the weekly season, and simple-weekly is the simple feature for the weekly season. The advantage of Fourier terms is that there is much more continuity between the end and the start of a day or a week, which is more natural.

Now, let's use the data from the STL decomposition to forecast the trend part of the time series. I will use the auto.arima procedure from the forecast package for this.
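A sketch of this step, assuming decomp_ts holds the STL result from above:

```r
# Forecast the STL trend component one day (48 steps) ahead
trend_part <- ts(decomp_ts$time.series[, "trend"])
trend_fit <- auto.arima(trend_part)
trend_for <- forecast(trend_fit, h = period)$mean
```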

Let’s plot it:

The auto.arima function chose an ARIMA(0,2,0) model as best for the trend forecast.

Next, create the final feature for the model (the lag) and construct the training matrix (model matrix).
I am creating a lag of one day and taking only the seasonal part from the STL decomposition (to have a smooth lagged time series feature).
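One way this training matrix might be assembled (variable names are illustrative and assume the objects created earlier in the post):

```r
N <- nrow(data_train)
window <- (N / period) - 1  # days available after lagging by one day

# Detrended target: seasonal + remainder parts of the STL decomposition
new_load <- rowSums(decomp_ts$time.series[, c("seasonal", "remainder")])
# Lag feature: seasonal component shifted by one day
lag_seas <- decomp_ts$time.series[1:(window * period), "seasonal"]

matrix_train <- data.table(Load = tail(new_load, window * period),
                           fourier_terms[(period + 1):N, ],
                           Lag = lag_seas)
```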

The accuracy of the forecast (or of a model's fitted values) will be measured by MAPE; let's define it:
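MAPE can be defined in a couple of lines:

```r
# Mean Absolute Percentage Error, in percent
mape <- function(real, pred) {
  100 * mean(abs((real - pred) / real))
}
```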

RPART (CART) tree

In the next two sections, I will describe two regression tree methods. The first is RPART, or CART (Classification and Regression Trees); the second will be CTREE. RPART is a recursive-partitioning type of binary tree for classification or regression tasks. It performs a search over all possible splits, maximizing an information measure of node impurity and selecting the covariate with the best split.

I'm using the rpart implementation from the package of the same name. Let's go ahead with modeling and try the default settings of the rpart function:
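A default fit might look like this (matrix_train is the training matrix built above, with Load as the detrended target):

```r
tree_1 <- rpart(Load ~ ., data = matrix_train)
```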

It produces many interesting outputs to check; for example, we can see a table of nodes and corresponding errors with printcp(tree_1), or a detailed summary of the created nodes with summary(tree_1). Let's check the variable importance and the number of splits:

We can see that the most important variables are the lag and the cosine forms of the daily and weekly seasons. The number of splits is only 10; hmm, is that enough for a time series of 1008 values?

Let's plot the created rules with the fancy rpart.plot function from the package of the same name:

We can see the values, the rules, and the percentage of observations in each split. Pretty simple and interpretable.

Now plot fitted values to see results of the tree_1 model.

And see the error of fitted values against real values.

Whoops. It's a bit too simple (rectangular) and not very accurate, but that is the logical result of a simple tree model.
The key to achieving better results and a more accurate fit is to set the control hyperparameters of rpart manually.
Check ?rpart.control for more information.
The "hack" is to set the cp (complexity) parameter very low to produce more splits (nodes). The cp is a threshold deciding whether a branch fulfills the conditions for further processing, so only nodes with a fitness larger than cp are processed. Other important parameters are the minimum number of observations needed in a node to attempt a split (minsplit) and the maximal depth of the tree (maxdepth).
Set minsplit to 2 and maxdepth to its maximal value, 30.
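Sketched in code (the exact cp value is illustrative; any very small value has the same effect of growing a deep tree):

```r
tree_2 <- rpart(Load ~ ., data = matrix_train,
                control = rpart.control(minsplit = 2,
                                        maxdepth = 30,
                                        cp = 0.000001))
```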

Now make a simple plot to see the depth of the created tree…

That's quite an impressive difference from the previous one, isn't it?
Let's also check the number of splits.

600 is higher than 10

Let’s plot fitted values from the model tree_2:

And see the error of fitted values against real values.

Much better, but obviously the model can be overfitted now.

Now let's put together everything we have so far and forecast the load one day ahead.
Let's create the testing data matrix:

Predict the detrended part of the time series with the tree_2 model, then add the trend part forecasted by the ARIMA model.
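A sketch of the test matrix and the combined forecast (names assume the objects built earlier in the post):

```r
# Fourier terms for the next day and the last day's seasonal component
# (the one-day lag for the forecasted day)
fourier_test <- fourier(data_msts, K = c(2, 2), h = period)
test_lag <- decomp_ts$time.series[(N - period + 1):N, "seasonal"]
matrix_test <- data.table(fourier_test, Lag = test_lag)

# Tree forecast of the detrended series + ARIMA forecast of the trend
for_rpart <- predict(tree_2, matrix_test) + trend_for
```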

Let’s plot the results and compare it with real values from data_test.

Not bad. For clarity, let's compare these forecasting results with a model without separate trend forecasting and detrending.

We can see that the RPART model without trend manipulation forecasts higher values.
Let's evaluate the results with the MAPE forecasting measure.

We can see a large difference in MAPE. So detrending the original time series and forecasting the trend part separately really works, but let's not generalize from this one result. You can read more about the RPART method in its great package vignette.

CTREE

The second simple regression tree method we will use is CTREE. Conditional inference trees (CTREE) are a statistical approach to recursive partitioning that takes into account the distributional properties of the data. CTREE performs multiple test procedures to determine whether there is no significant association between any of the features and the response (the load in our case), in which case the recursion stops.
In R, CTREE is implemented in the party package in the function ctree.

Let's try to fit a simple ctree with default values.
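A default fit, analogous to the rpart one (matrix_train as before):

```r
ctree_1 <- ctree(Load ~ ., data = matrix_train)
```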

The constructed tree can again be plotted simply with the plot function, but it makes so many splits that the plot is cluttered.

Let’s plot fitted values from ctree_1 model.

And see the error of fitted values against real values.

Actually, this is pretty nice, but again, it can be tuned.

For the available tuning hyperparameters, check ?ctree_control. I changed the hyperparameters minsplit and minbucket, which have a similar meaning to the cp parameter in RPART. The mincriterion parameter can be tuned as well; it is the significance level (1 − p-value) that must be exceeded for a split to be implemented. Let's plot the results.
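One possible tuned configuration (the specific values here are illustrative):

```r
ctree_2 <- ctree(Load ~ ., data = matrix_train,
                 controls = ctree_control(mincriterion = 0.92,
                                          minsplit = 1,
                                          minbucket = 1))
```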

And see the error of fitted values against real values.

It’s better. Now forecast values with ctree_2 model.

And compare CTREE with RPART model.

The MAPE value is slightly better with RPART, but again, we should not generalize from one day. You can read more about the CTREE method in its great package vignette.
Now let's forecast future values using all the available electricity load data with a sliding window approach (a window three weeks long) over a period of more than three months (98 days).

Comparison

Let's define functions that produce the forecasts, adding up everything we have learned so far.

I created a plotly boxplot graph of the MAPE values from four models: simple CTREE, CTREE with detrending, simple RPART, and RPART with detrending. The whole evaluation can be seen in the script stored in my GitHub repository.

We can see that detrending the electricity consumption time series improves the forecast accuracy for both regression tree methods, RPART and CTREE. My approach works as expected.

It has become a habit of my posts that an animation must appear. So I prepared two animations (animated dashboards) for you, using the animation, grid, ggplot2 and ggforce (for zooming) packages, that visualize the forecasting results.

We can see that on many days the forecast is almost perfect, but on some days there is room for improvement.

Conclusion

In this post, I showed you how to handle trends in seasonal time series using a regression tree model. Detrending the time series is an important (indeed mandatory) step for regression tree methods, due to the nature of decision trees. The trend part of the time series was extracted by STL decomposition and forecasted separately with a simple ARIMA model. I evaluated this approach on a dataset of smart meter measurements of electricity consumption. Regression (decision) trees are a great technique for getting simple and interpretable results in very fast computational time.

In a future post, I will focus on enhancing the predictive performance of simple regression tree methods with ensemble learning methods like bagging, random forests, and similar.


To leave a comment for the author, please follow the link and comment on their blog: Peter Laurinec.

Reports on Fifth District manufacturing activity were largely unchanged in August, according to the latest survey by the Federal Reserve Bank of Richmond. The composite index remained at 14 in August, with an increase in the employment index offsetting a decrease in the shipments index and a very slight decline in the new orders metric. Although the employment index rose from 10 to 17 in August, other measures of labor market activity — wages and average workweek — were largely unchanged. emphasis added

The first notation is an operator called the “named map builder”. This is a cute notation that essentially does the job of stats::setNames(). It allows for code such as the following:

library("seplyr")
names <- c('a', 'b')
names := c('x', 'y')
#> a b
#> "x" "y"

This can be very useful when programming in R, as it allows indirection or abstraction on the left-hand side of inline name assignments (unlike c(a = 'x', b = 'y'), where all left-hand-sides are concrete values even if not quoted).

A nifty property of the named map builder is that it commutes (in the sense of algebra or category theory) with R's "c()" combine/concatenate function. That is: c('a' := 'x', 'b' := 'y') is the same as c('a', 'b') := c('x', 'y'). Roughly, this means the two operations play well with each other.

The second notation is an operator called the "anonymous function builder". For technical reasons we use the same ":=" notation for this (and, as is common in R, pick the correct behavior based on runtime types).

The function construction is written as "variables := { code }" (the braces are required), and the semantics are roughly the same as "function(variables) { code }". This is derived from some of the work of Konrad Rudolph, who noted that most functional languages have a more concise "lambda syntax" than "function(){}" (please see here and here for some details, and be aware the seplyr notation is not as concise as is possible).

This notation allows us to write the squares of 1 through 4 as:

sapply(1:4, x:={x^2})

instead of writing:

sapply(1:4, function(x) x^2)

It is only a few characters of savings, but being able to choose notation can be a big deal. A real victory would be being able to directly use lambda-calculus notation such as "(λx.x^2)". In the development version of seplyr we are experimenting with the following additional notations:

sapply(1:4, lambda(x)(x^2))

sapply(1:4, λ(x, x^2))

(Both of these currently work in the development version, though we are not sure about submitting source files with non-ASCII characters to CRAN.)

The recent release of the blscrapeR package brings the “tidyverse” into the fold. Inspired by my recent collaboration with Kyle Walker on his excellent tidycensus package, blscrapeR has been optimized for use within the tidyverse as of the current version 3.0.0.

New things you’ll notice right away include:

All data now returned as tibbles.

dplyr and purrr are now imported packages, along with magrittr and ggplot, which were imported from the start.

No need to call any packages other than tidyverse and blscrapeR.

Major internal changes

Switched from base R to dplyr in instances where performance could be increased.

Standard apply functions replaced with purrr map() functions where performance could be increased.

The BLS: More than Unemployment

The American Time Use Survey is one of the BLS’ more interesting data sets. Below is an API query that compares the time Americans spend watching TV on a daily basis compared to the time spent socializing and communicating.

It should be noted, some familiarity with BLS series id numbers is required here. The BLS Data Finder is a nice tool to find series id numbers.

Unemployment Rates

The main attraction of the BLS are the monthly employment and unemployment data. Below is an API query and plot of three of the major BLS unemployment rates.

U-3: The “official unemployment rate.” Total unemployed, as a percent of the civilian labor force.

U-5: Total unemployed, plus discouraged workers, plus all other marginally attached workers, as a percent of the civilian labor force plus all marginally attached workers.

U-6: Total unemployed, plus all marginally attached workers, plus total employed part time for economic reasons, as a percent of the civilian labor force plus all marginally attached workers.
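A hedged sketch of such a query with blscrapeR's bls_api function. The U-3 series id LNS14000000 is the standard official unemployment rate; the U-5/U-6 ids shown here should be verified with the BLS Data Finder before use:

```r
library(blscrapeR)
library(tidyverse)

# U-3, U-5 and U-6 unemployment rates (seasonally adjusted)
df <- bls_api(c("LNS14000000", "LNS13327708", "LNS13327709"),
              startyear = 2000, endyear = 2017) %>%
  dateCast()  # adds a proper Date column for plotting
```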

For more information and examples, please see the package vignettes.


To leave a comment for the author, please follow the link and comment on their blog: Data Science Riot!.

In June of 2017, Intel partnered with MobileODT to challenge Kagglers to develop an algorithm with tangible, real-world impact: accurately identifying a woman's cervix type in images. This is really important because assigning effective cervical cancer treatment depends on the doctor's ability to do exactly that. While cervical cancer is easy to prevent if caught in its pre-cancerous stage, many doctors don't have the skills to reliably discern the appropriate treatment.

In this winners' interview, the first-place team, 'Towards Empirically Stable Training', shares insights into their approach, such as why it was important to invest in creating a validation set and why they drew bounding boxes for each photo.

The basics:

What was your background prior to entering this challenge?

Ignas Namajūnas (bobutis) – Mathematics BSc, Computer Science MSc and 3 years of R&D work, including around 9 months of being the research lead for a surveillance project.

Jonas Bialopetravičius (zadrras) – Software Engineering BSc, Computer Science MSc, 7 years of professional experience in computer vision and ML, currently studying astrophysics where I also apply deep learning methods.

Darius Barušauskas (raddar) – BSc & MSc in Econometrics, 7 years of ML applications in various fields, such as finance, telcos, utilities.

Do you have any prior experience or domain knowledge that helped you succeed in this competition?

We have a lot of experience in training object detectors. Additionally, Jonas and Ignas won a previous deep learning competition, The Nature Conservancy Fisheries Monitoring Competition, which required similar know-how that could easily be transferred to this task.

How did you get started competing on Kaggle?

We saw Kaggle as an opportunity to apply our knowledge and skills obtained in our daily jobs to other fields as well. We also saw a lot of opportunity to learn from the great Machine Learning community the Kaggle platform has.

What made you decide to enter this competition?

The importance of this problem and the fact that it could be approached as object detection, where we already had success in a previous competition.

Let’s get technical:

Did any past research or previous competitions inform your approach?

We have been using Faster R-CNN in many tasks we have done so far. We believe that by tuning the right details it can be adapted to quite different problems.

What preprocessing steps have you done?

Since we had a very noisy dataset, we spent lots of time manually looking at the given data. We noticed that the additionally provided dataset had many blurry and non-informative photos, and we discarded a large portion of them (roughly 15%). We also hand-labeled photos by creating bounding boxes around the regions of interest in each photo (in both the original and the additional dataset). This was essential for our methods to work, and it helped a lot during model training.

What supervised learning methods did you use?

We used a few different variants of Faster R-CNN models with VGG-16 feature extractors. In the end, we ensembled 4 models. These were accompanied by complementary models for generating bounding boxes on the public test set and for detecting night-vision-like images. Some of the 4 models alone would have been enough to place us 1st.

What was your most important insight into the data?

A proper validation scheme was super important. We noticed that the additional dataset had many photos similar to those in the original training set, which caused problems when we wanted to use the additional data in our models. Therefore, we applied K-means clustering to create a trustworthy validation set: we clustered all the photos into 100 clusters and took 20 random clusters as our validation set. This helped us track whether the data augmentations we used in our models were useful or not.

We also saw that augmenting the red color channel was critical, therefore we used a few different red color augmentations in our models.

Having two datasets with differing quality, we also experimented with undersampling the additional dataset. We found out that keeping the original:additional dataset image count ratio to 1:2 was optimal (in contrast to a ratio of 1:4, if no undersampling was applied).

Were you surprised by any of your findings?

From manual inspection, it seems that different types of cancerous cervixes had differing blood patterns. So focusing on blood color in the photos seemed logical.

Which tools did you use?

We used our customized R-FCN (which also includes Faster R-CNN). Original version can be obtained at https://github.com/Orpine/py-R-FCN.

How did you spend your time on this competition?

The first few days were dedicated to creating image bounding boxes and thinking of how to construct a proper validation set. After that we kept our GPU’s running non-stop while discussing which data augmentations we should try.

What does your hardware setup look like?

We had 2 GTX 1080s and 1 GTX 980 for model training. The whole ensemble takes 50 hours to train, and single-image inference takes 7-10 seconds. Our best single model takes 8 hours to train and 0.7 seconds per image for inference.

Words of wisdom:

Many different problems could be tackled using the same DL algorithms. If a problem can be interpreted as an image detection problem, detecting fish types or certain cervix types becomes somewhat equivalent, even though knowing which details to tune for each problem might be very important.

Teamwork:

How did your team form?

We have been colleagues and acquaintances for a long time. On top of that, we are part of a larger team aiming to solve medical tasks with computer vision and deep learning.

How did your team work together?

We were using slack for communication and had a few meetings as well.

How did competing on a team help you succeed?

It was much easier as we could split roles. Darius worked on image bounding boxes and setting up the validation, Jonas worked on the codebase, and Ignas was brainstorming which data augmentations to test.

Just for fun:

If you could run a Kaggle competition, what problem would you want to pose to other Kagglers?

Given the patient medical records, X-rays, ultrasounds, etc. predict which disease a patient is likely to suffer in the future. Combining different sources of information sounds like an interesting challenge.

What is your dream job?

Creating deep-learning based software for doctors to assist them in faster and more accurate decisions and more efficient patient treatment.