An efficient approach to continuous documentation

By | ai, bigdata, machinelearning

How to cultivate lasting shared knowledge without giving up all that free time you don’t really have.

An outside observer watching a software developer work on a small feature in a real project would find that the process looks less like engineering and more like a contrived scavenger hunt for knowledge, new and old.

The problem is that we scatter what we learn and teach to the winds. A quick comment on a pull request transfers knowledge from one head to another, then falls off the radar. A blog post captures a lesson learned while working on a project, but is never touched again after it’s written. A StackOverflow link gets passed around in Slack, then disappears into the backscroll.


Podcast Special Episode 2 – The Future of AI with Dr. Ed Felten


In this audio-only Becoming a Data Scientist Podcast Special Episode, I interview Dr. Ed Felten, Deputy U.S. Chief Technology Officer, about the Future of Artificial Intelligence (from The White House!).

You can stream or download the audio at this link (download by right-clicking on the player and choosing “Save As”), or listen to it in podcast players like iTunes and Stitcher. Enjoy!

Show Notes:

The White House Names Dr. Ed Felten as Deputy U.S. Chief Technology Officer

Edward W. Felten at Princeton University

Dr. Edward Felten on Wikipedia

White House Office of Science and Technology Policy (OSTP)

The Administration’s Report on the Future of Artificial Intelligence (White House Report from October 2016)

Artificial Intelligence, Automation, and the Economy (White House Report from December 2016)

Ed Felten on Twitter: Official / Personal

Freedom to Tinker blog

———

Other Podcasts in this Government Data Series:

Becoming a Data Scientist Patreon Campaign



NAR: “Existing-Home Sales Stumble in February”



From the NAR: Existing-Home Sales Stumble in February

Total existing-home sales, which are completed transactions that include single-family homes, townhomes, condominiums and co-ops, retreated 3.7 percent to a seasonally adjusted annual rate of 5.48 million in February from 5.69 million in January. Despite last month’s decline, February’s sales pace is still 5.4 percent above a year ago.

Total housing inventory at the end of February increased 4.2 percent to 1.75 million existing homes available for sale, but is still 6.4 percent lower than a year ago (1.87 million) and has fallen year-over-year for 21 straight months. Unsold inventory is at a 3.8-month supply at the current sales pace (3.5 months in January).
emphasis added

[Graph: Existing Home Sales]

This graph shows existing home sales, on a Seasonally Adjusted Annual Rate (SAAR) basis since 1993.

Sales in February (5.48 million SAAR) were 3.7% lower than in January, but were 5.4% above the February 2016 rate.

The second graph shows nationwide inventory for existing homes.

[Graph: Existing Home Inventory]

According to the NAR, inventory increased to 1.75 million in February from 1.68 million in January. Headline inventory is not seasonally adjusted; it usually falls to its seasonal lows in December and January and peaks in mid-to-late summer.

The last graph shows the year-over-year (YoY) change in reported existing home inventory and months-of-supply. Since inventory is not seasonally adjusted, it really helps to look at the YoY change. Note: Months-of-supply is based on the seasonally adjusted sales and not seasonally adjusted inventory.
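As a sanity check on the figures above, months-of-supply is just inventory divided by the monthly sales pace. This is a minimal sketch of that arithmetic; the NAR’s exact rounding conventions may differ slightly:

```python
# Months-of-supply: how long the current inventory would last at the
# current seasonally adjusted annual sales rate (SAAR).
def months_of_supply(inventory_millions, saar_millions):
    """Inventory divided by the monthly sales pace (SAAR / 12)."""
    return inventory_millions / (saar_millions / 12)

# NAR figures quoted above: 1.75M homes for sale and a 5.48M SAAR in
# February; 1.68M and 5.69M in January.
print(round(months_of_supply(1.75, 5.48), 1))  # → 3.8 (February)
print(round(months_of_supply(1.68, 5.69), 1))  # → 3.5 (January)
```

Both values match the report, which also illustrates the note above: the numerator (inventory) is not seasonally adjusted while the denominator (sales) is.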

[Graph: Year-over-year Inventory]

Inventory decreased 6.4% year-over-year compared to February 2016.

Months of supply was at 3.8 months in February.

This was below consensus expectations. For existing home sales, a key number is inventory – and inventory is still low. I’ll have more later …



Elegoo Upgraded Electronics Fun Kit w/ Power Supply Module, Jumper Wire, Precision Potentiometer, 830 tie-points Breadboard for Arduino, Raspberry Pi, STM32

By | iot, machinelearning

Elegoo Inc. is a professional manufacturer and exporter concerned with the design, development, production, and marketing of Arduino, 3D printer, Raspberry Pi, and STM32 products.
Our dream is to offer customers the best price and the best quality products.
We welcome your valuable suggestions so that we can improve our products for you.

Component listing:
1pcs Power Supply Module (WARNING: please do not use a voltage higher than 9 V)
1pcs 830 tie-points Breadboard
1pcs 65 Jumper Wire
140pcs Solderless Jumper Wire
20pcs Female-to-male Dupont Wire
2pcs Pin header (40pin)
1pcs Precision Potentiometer
2pcs Photoresistor
1pcs Thermistor
5pcs Diode Rectifier (1N4007)
5pcs NPN Transistor (PN2222)
1pcs IC 4N35
1pcs IC 74HC595
1pcs Active Buzzer
1pcs Passive Buzzer
10pcs Button (small)
10pcs 22pf Ceramic Capacitor
10pcs 104 Ceramic Capacitor
5pcs Electrolytic Capacitor (10UF 50V)
5pcs Electrolytic Capacitor (100UF 50V)
10pcs White LED
10pcs Yellow LED
10pcs Blue LED
10pcs Green LED
10pcs Red LED
1pcs RGB LED
10pcs Resistor (10R)
10pcs Resistor (100R)
10pcs Resistor (220R)
10pcs Resistor (330R)
10pcs Resistor (1K)
10pcs Resistor (2K)
10pcs Resistor (5K1)
10pcs Resistor (10K)
10pcs Resistor (100K)
10pcs Resistor (1M)

A high cost-performance kit, with more than 300 components.
The datasheet is available to download from our official website, or you can contact our customer service.
Continuously improved based on our customers’ suggestions.
Great quality, packed in a plastic box.
Controller board NOT included.

$20.99



Let’s accept the idea that treatment effects vary—not as something special but just as a matter of course


(This article was originally published at Statistical Modeling, Causal Inference, and Social Science, and syndicated at StatsBlogs.)

Tyler Cowen writes:

Does knowing the price lower your enjoyment of goods and services?

I [Cowen] don’t quite agree with this as stated, as the experience of enjoying a bargain can make it more pleasurable, or at least I have seen this for many people. Some in fact enjoy the bargain only, not the actual good or service. Nonetheless here is the abstract [of a recent article by Kelly Haws, Brent McFerran, and Joseph Redden]:

Prices are typically critical to consumption decisions, but can the presence of price impact enjoyment over the course of an experience? We examine the effect of price on consumers’ satisfaction over the course of consumption. We find that, compared to when no pricing information is available, the presence of prices accelerates satiation (i.e., enjoyment declines faster). . . .

I have no special thoughts on pricing and enjoyment, nor am I criticizing the paper by Haws et al. which I have not had the opportunity to read (see P.S. below).

The thing I did want to talk about was Cowen’s implicit assumption in his header that a treatment has a fixed effect. It’s clear that Cowen doesn’t believe this—the very first sentence of his post recognizes variation—so it’s not that he’s making this conceptual error. Rather, my problem here is that the whole discussion, by default, is taking place on the turf of constant effects.

The idea, I think, is that you first establish the effect and then you look for interactions. But if interactions are the entire story—as seems plausible here—that main-effect-first approach will be a disaster. Just as it was with power pose, priming, etc.
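A toy calculation makes the point concrete (all numbers here are invented for illustration, not taken from the Haws et al. paper):

```python
# When treatment effects vary by subgroup, the average ("main") effect
# can be near zero even though the effect is large within every
# subgroup. Hypothetical scenario: bargain-lovers enjoy a purchase MORE
# when they see the price; everyone else enjoys it less.
effects = {
    "bargain_lovers": +0.8,   # seeing the price raises enjoyment
    "everyone_else":  -0.8,   # seeing the price lowers enjoyment
}
shares = {"bargain_lovers": 0.5, "everyone_else": 0.5}

# The main effect is the share-weighted average of the subgroup effects.
main_effect = sum(effects[g] * shares[g] for g in effects)
print(f"average treatment effect: {main_effect:+.2f}")  # → +0.00
for g in effects:
    print(f"  {g}: {effects[g]:+.2f}")
```

A main-effect-first analysis of these (made-up) data would report “no effect” and stop, when the interaction is the entire story.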

Framing questions in terms of “the effect” can be a hard habit to break.

P.S.

I was thinking of just paying the $35.95 but, just from the very fact of knowing the price, my satiation increased and my enjoyment declined and I couldn’t bring myself to do it. In future, perhaps Elsevier can learn from its own research and try some hidden pricing: Click on this article and we’ll remove a random amount of money from your bank account! That sort of thing.

The post Let’s accept the idea that treatment effects vary—not as something special but just as a matter of course appeared first on Statistical Modeling, Causal Inference, and Social Science.

Please comment on the article here: Statistical Modeling, Causal Inference, and Social Science






RApiDatetime 0.0.2


(This article was first published on Thinking inside the box , and kindly contributed to R-bloggers)

Two days after the initial 0.0.1 release, a new version of RApiDatetime has just arrived on CRAN.

RApiDatetime provides six entry points for C-level functions of the R API for Date and Datetime calculations. The functions asPOSIXlt and asPOSIXct convert between long and compact datetime representation, formatPOSIXlt and Rstrptime convert to and from character strings, and POSIXlt2D and D2POSIXlt convert between Date and POSIXlt datetime. These six functions are all fairly essential and useful, but not one of them was previously exported by R.

Josh Ulrich took one hard look at the package — and added the one line we needed to enable the Windows support that was missing in the initial release. We now build on all platforms supported by R and CRAN. Otherwise, I just added a NEWS file and called it a bugfix release.

Changes in RApiDatetime version 0.0.2 (2017-03-25)

  • Windows support has been added (Josh Ulrich in #1)

Changes in RApiDatetime version 0.0.1 (2017-03-23)

  • Initial release with six accessible functions

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the rapidatetime page.

For questions or comments please use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

To leave a comment for the author, please follow the link and comment on their blog: Thinking inside the box .






BLS: Unemployment Rates “significantly lower in February in 10 states”, Arkansas and Oregon at New Lows



From the BLS: Regional and State Employment and Unemployment Summary

Unemployment rates were significantly lower in February in 10 states, higher in 1 state, and stable in 39 states and the District of Columbia, the U.S. Bureau of Labor Statistics reported today. Nine states had notable jobless rate decreases from a year earlier, and 41 states and the District had no significant change. The national unemployment rate, at 4.7 percent, was little changed from January but 0.2 percentage point lower than in February 2016.

New Hampshire had the lowest unemployment rate in February, 2.7 percent, closely followed by Hawaii and South Dakota, 2.8 percent each, and Colorado and North Dakota, 2.9 percent each. The rates in both Arkansas (3.7 percent) and Oregon (4.0 percent) set new series lows. … New Mexico had the highest jobless rate, 6.8 percent, followed by Alaska and Alabama, 6.4 percent and 6.2 percent, respectively.
emphasis added

[Graph: State Unemployment Rates]

This graph shows the current unemployment rate for each state (red), and the max during the recession (blue). All states are well below the maximum unemployment rate for the recession.

The size of the blue bar indicates the amount of improvement. The yellow squares are the lowest unemployment rate per state since 1976.

The states are ranked by the highest current unemployment rate. New Mexico, at 6.8%, had the highest state unemployment rate.

[Graph: Number of States at or Above Unemployment Rate Levels]

The second graph shows the number of states (and D.C.) with unemployment rates at or above certain levels since January 2006. At the worst of the employment recession, there were 11 states with an unemployment rate at or above 11% (red).

Currently no state has an unemployment rate at or above 7% (light blue); only three states are at or above 6% (dark blue): New Mexico (6.8%), Alaska (6.4%), and Alabama (6.2%).

