Thursday: Unemployment Claims, Durable Goods, Pending Home Sales

By | ai, bigdata, machinelearning


Thursday:
• At 8:30 AM ET, the initial weekly unemployment claims report will be released. The consensus is for 243 thousand initial claims, down from 244 thousand the previous week.

• Also at 8:30 AM, Durable Goods Orders for February from the Census Bureau. The consensus is for a 1.1% increase in durable goods orders.

• At 10:00 AM, Pending Home Sales Index for March. The consensus is for a 0.4% decrease in the index.

• Also at 10:00 AM, the Q1 2017 Housing Vacancies and Homeownership from the Census Bureau.

• At 11:00 AM, the Kansas City Fed manufacturing survey for April. This is the last of the regional Fed surveys for April.



SanDisk Ultra 64GB microSDXC UHS-I Card with Adapter, Grey/Red, Standard Packaging (SDSQUNC-064G-GN6MA)

By | iot, machinelearning

Capture, carry, and keep more high-quality photos and full HD video on your Android smartphone or tablet. Transfer pictures and videos from the card to your PC at a no-wait rate of up to 80 MB/s. The SanDisk Memory Zone app, available on the Google Play store, makes it easy to view, access, and back up the files in your phone’s memory. To help your smartphone run at its peak performance, set the app to automatically off-load files from your smartphone’s internal memory to the card. Built to perform in extreme conditions, SanDisk Ultra microSDHC and microSDXC cards are waterproof, temperature-proof, shock-proof, X-ray-proof, and magnet-proof. The SanDisk Ultra card is rated Class 10 for Full HD video and comes with an SD adapter and a ten-year warranty.

Ideal for premium Android-based smartphones and tablets
Up to 80 MB/s transfer speed
Class 10 for Full HD video recording and playback
Waterproof, temperature-proof, shock-proof, X-ray-proof, and magnet-proof
Memory Zone app lets you auto-manage media and memory for peak phone performance
Comes with an SD adapter for use in cameras. Note: an SD card will normally work in an SDHC device (with lower performance), but an SDHC card will not work in an SD-only device such as an older camera or card reader.

$21.99



Using prior knowledge in frequentist tests

By | ai, bigdata, machinelearning

(This article was originally published at Statistical Modeling, Causal Inference, and Social Science, and syndicated at StatsBlogs.)

Christian Bartels sent along this paper, which he described as an attempt to use informative priors for frequentist test statistics.

I replied:

I’ve not tried to follow the details but this reminds me of our paper on posterior predictive checks. People think of this as very Bayesian but my original idea when doing this research was to construct frequentist tests using Bayesian averaging in order to get p-values. This was motivated by a degrees-of-freedom-correction problem where the model had nonlinear constraints and so one could not simply do a classical correction based on counting parameters.
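(As a minimal sketch of that idea, and not the construction from either paper: the R snippet below computes a posterior predictive p-value by Bayesian averaging. The model, a normal mean with known variance and a flat prior, and the discrepancy max(y) are assumptions chosen only to keep the example short.)

```r
# Hedged sketch: posterior predictive p-value via Bayesian averaging.
# Toy model: y_i ~ N(theta, 1), flat prior on theta, discrepancy T(y) = max(y).
set.seed(1)
y <- rnorm(30, mean = 0.3)            # "observed" data
n <- length(y)
n_sims <- 4000

# Posterior of theta under a flat prior with known sd = 1: N(mean(y), 1/n)
theta_draws <- rnorm(n_sims, mean = mean(y), sd = 1 / sqrt(n))

# For each posterior draw, simulate a replicated data set and compute T(y_rep)
T_rep <- vapply(theta_draws,
                function(th) max(rnorm(n, mean = th, sd = 1)),
                numeric(1))

# Posterior predictive p-value: Pr(T(y_rep) >= T(y) | y), averaging over theta
p_post <- mean(T_rep >= max(y))
p_post
```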

To which Bartels wrote:

Your work starts from the same point as mine: existing frequentist tests may be inadequate for the problem of interest. Your work also ends where I would like to end: performing tests via integration over (i.e., sampling of) parameters and future observations using the likelihood and prior.

In addition, I try to anchor the approach in decision theory (as referenced in my write-up). Perhaps this is too ambitious; we’ll see.

Results so far, using the language of your publication:
– The posterior distribution p(theta|y) is a good choice for the deviance D(y,theta). It gives optimal confidence intervals/sets in the sense proposed by Schafer, C. M., and Stark, P. B. (2009), “Constructing confidence regions of optimal expected size,” Journal of the American Statistical Association, 104(487), 1080-1089. (A schematic of this construction is sketched after this list.)
– Using informative priors for the deviance D(y,theta) = p(theta|y) may improve the quality of decisions, e.g., may improve the power of tests.
– For the marginalization, I find it difficult to strike the balance between proposing something that can be argued/shown to give optimal tests and something that can be calculated with available computational resources. I hope to end up with something like one of the variants shown in your Figure 1.
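To make the first point above concrete, here is a hedged schematic (the notation, in particular the cutoff c_alpha(theta), is illustrative and mine, not Bartels’s or Schafer and Stark’s) of test inversion with the posterior density used as the deviance:

```latex
% Schematic only: invert a family of level-alpha tests whose statistic is the
% posterior density; c_alpha(theta) is illustrative notation.
\[
  D(y,\theta) = p(\theta \mid y), \qquad
  C_\alpha(y) = \{\, \theta : D(y,\theta) \ge c_\alpha(\theta) \,\},
\]
\[
  \text{where } c_\alpha(\theta) \text{ is chosen so that }
  \Pr_\theta\!\big( D(Y,\theta) \ge c_\alpha(\theta) \big) \ge 1 - \alpha
  \text{ for every } \theta .
\]
```

By construction C_alpha(y) has frequentist coverage of at least 1 - alpha; any optimality (e.g., expected size) then hinges on how the cutoffs are chosen.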

I noted that you distinguish test statistics, which do not depend on the parameters, from deviances, which do. I’m not aware of anything that prevents you from using deviances that depend on the parameters in frequentist tests; it is just inconvenient if you are after generic, closed-form solutions for tests. I did not make this distinction, and refer to tests regardless of whether or not they depend on the parameters.

I don’t really have anything more to say here, as I have not thought about these issues for a while. But I thought Bartels’s paper and this discussion might interest some of you.






Assorted Shiny apps collection, full code and data

By | ai, bigdata, machinelearning

(This article was first published on rbloggers – SNAP tech blog, and kindly contributed to R-bloggers)

Here is an assortment of R Shiny apps that you may find useful for exploration if you are in the process of learning Shiny and looking for something different. Some of these apps are very small and simple whereas others are large and complex. This repository provides full code and any necessary accompanying data sets. The repo also links to the apps hosted online at shinyapps.io so that you can run apps in your browser without having to download the entire collection repo to run apps locally. That and other details can be found at the repo linked above. This isn’t a tutorial or other form of support, but it’s plenty of R code to peruse if that is what you are looking for.

A bit of backstory: if I recall correctly, I began exploring RStudio’s Shiny package when I first heard of it in late 2012. Needless to say, a lot has changed since then: the code-breaking changes of the early alpha releases that I had to adjust to when making my first apps, the features and capabilities Shiny has grown to offer, and, not least, how I go about coding apps, which has evolved alongside the package’s continued development. None of the apps in this repository are quite that old, though a few are close. Even so, they have been maintained, updated, and tweaked since then to keep up with the times as necessary.

Most of the apps are newer. One nice thing about this collection is that it shows a diversity of approaches to coding different features and behaviors into apps, depending on their purposes, and how that has changed for me over time. For example, some apps are heavy on maps. Before Leaflet was robustly available in Shiny, I would tend to have an app display maps using static (but reactive) plots made with lattice or ggplot2. There are many ways to do the same thing, and the way that is best in one case is not always best in another.
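As a rough sketch of that contrast (my own minimal example, not one of the apps in the collection; the quakes data set and the input names are just placeholders), the same reactive subset can be shown both as a static ggplot2 map and as a Leaflet map:

```r
# Minimal Shiny sketch: one filtered data set, two map styles.
library(shiny)
library(leaflet)
library(ggplot2)

ui <- fluidPage(
  sliderInput("mag", "Minimum magnitude", min = 4, max = 6, value = 4.5, step = 0.1),
  plotOutput("static_map"),     # older style: static but reactive ggplot2 map
  leafletOutput("leaflet_map")  # newer style: interactive Leaflet map
)

server <- function(input, output, session) {
  pts <- reactive(quakes[quakes$mag >= input$mag, ])

  output$static_map <- renderPlot({
    ggplot(pts(), aes(long, lat, size = mag)) +
      geom_point(alpha = 0.5) +
      coord_quickmap()
  })

  output$leaflet_map <- renderLeaflet({
    leaflet(pts()) %>%
      addTiles() %>%
      addCircleMarkers(~long, ~lat, radius = ~mag)
  })
}

shinyApp(ui, server)
```

The static version redraws the whole plot on every input change, while the Leaflet map can also be updated in place with leafletProxy() once it exists, which is part of why the interactive approach tends to win out for map-heavy apps.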

Across these apps there are many other examples of different ways to implement the same general task, depending on how I want it presented to the user in a specific app. In other cases, some approaches have proven more powerful than others and have simply won out, and it is equally useful to see examples where what was once considered “good enough” no longer is.

Lastly, if you do happen to stumble upon something that is actually broken, I am unaware of it, so please let me know.






3 Fresh Approaches to Maximize Customer Value with Data

By | iot


New customer acquisition is costly. And customers are increasingly demanding, fickle, and empowered with endless options — new and old — to spend their dollars. So brands are rightly focused on increasing retention and share of wallet to maximize customer value.

Brands know that data holds the key to making the customer value gains they want to see. But they struggle to leverage that data in the right way. Here are 3 fresh approaches many brands are not using, but should consider, to improve customer value.

1. The More Data, The Merrier

You are collecting some data — likely even a lot of data — about your customers. You’ve got some demographics, geography/location, and purchase history. You may have their customer service history and website behavior as well.

Don’t stop there. Do you know their marital status, education level, income? How about the words they said when talking to a customer service rep? How about their tweets? How loyal to your brand are their friends, family, and coworkers in their social networks?

And don’t stop with your customers. What about the enterprise itself? You’ve got a wealth of data about every aspect of the business, including sales data, ops data, and much more.

Why is this important?

Many …

Read More on Datafloq
