Handheld Laser 2D Barcode Scanner USB QR Code Reader MaxiCode, DataMatrix, PDF417 Wired Code Scanning for POS Systems – RD-8099

By | iot, machinelearning

Scanning Performance Characteristics
Light Source: 630 nm light; 528 nm LED indicator light
Field of View: 34° V x 46° H
Scan Angle : Roll 360°, Pitch ±65°, Yaw ±60°
Scan Performance (typical depth of field)
5 mil (Code 39): 9.0-12.0 cm
13 mil (100% UPC/EAN): 3.0-30.5 cm
20 mil (Code 39): 3.0-42.0 cm
10 mil (PDF417): 5.0-22.5 cm
10 mil (Data Matrix): 5.2-24.0 cm
Symbology Decode Capability

2D: MaxiCode, DataMatrix (ECC 200), QR Code

1D:
UPC/EAN with supplementals, Code 39, Code 39 Full ASCII, Trioptic Code 39, RSS variants, UCC/EAN-128, Code 128, Code 128 Full ASCII, Code 93, Codabar (NW-7), Interleaved 2 of 5, Discrete 2 of 5, MSI, Code 11, IATA, Bookland EAN, Code 32
PDF417: PDF417, MicroPDF417, and composite codes
Postal: U.S. Postnet and Planet; U.K., Japanese, Australian, and Dutch postal codes
Print Contrast: >25%
Interfaces : USB
Imaging Characteristics
Image (Pixels): 0.35 megapixel, 752 (H) × 480 (V) pixels
Graphics Formats: BMP, JPEG, TIFF
Image Transfer Speed: USB 1.1: up to 12 Mbit/s; RS-232: up to 115.2 kbaud
Image Transfer Time: a typical USB transfer takes about 0.2 seconds for a 100 KB compressed JPEG
Physical Characteristics/User Environment
Voltage and Current: 5 V DC ±10%, 350 mA
Operating Temperature: 0 °C to 50 °C
Storage Temperature: -40 °C to 70 °C
Humidity: 5% to 95%
Drop/Shock Specifications: Withstands 2,000 G shock from drops to concrete
Includes Barcode Scanner and USB Cable

$138.50



Schedule R Code to Be Executed Periodically

By | ai, bigdata, machinelearning

(This article was originally published at Yihui’s Blog on Yihui Xie | 谢益辉, and syndicated at StatsBlogs.)

A couple of months ago, while I was trying to implement an alternative approach for LiveReload in blogdown (Section D.2) using Hugo’s built-in server, I played with Joe’s later package for a little while, and my colleague Gabor gave me an interesting, useful, and elegant tip. I thought it might be useful to other people, too, so I’m writing it down here.

Motivation

The original problem I wanted to solve was this: Hugo’s server can automatically rebuild the site and refresh the page in the browser when any source file changes. This is great and, as usual, lightning fast. What is missing, however, is the ability to recompile Rmd files. Of course, Hugo knows nothing about Rmd, so I have to start another process to watch for changes in Rmd files and rebuild them.

I can certainly use an infinite loop like while (TRUE) { watch_and_rebuild() }, but the problem is that an infinite loop will block the user’s R session, i.e., you cannot do anything else in the R console after you start the loop. How can I check for changes in Rmd files periodically and rebuild them when necessary without blocking the R console?

Recursive later()

The solution is later::later() + recursion. later::later() can schedule a task and run it later. Before the task is executed, you can still use the R console to do other things. The problem is that this task is only executed once. How to execute it periodically? The answer is recursion. Below is a simple example, in which we print the time in the R console every 10 seconds.

print_time = function(interval = 10) {
  timestamp()  # print the current time to the console
  # schedule this function to run again after `interval` seconds; the
  # anonymous function preserves a non-default interval across calls
  later::later(function() print_time(interval), interval)
}

print_time()

This function will keep running until the R session is terminated or restarted. Every time print_time() is called, the current time is printed, and the same task is scheduled again.
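
The same pattern solves my original problem. Below is a hypothetical sketch (not blogdown’s actual implementation) that polls the Rmd files in a directory every two seconds and re-renders any file whose modification time has advanced, all without blocking the console; the name watch_rmd and the use of rmarkdown::render() are illustrative choices:

watch_rmd = function(dir = ".", interval = 2) {
  # snapshot the modification times of all Rmd files in `dir`
  mtime = function() {
    files = list.files(dir, "[.]Rmd$", full.names = TRUE)
    setNames(file.mtime(files), files)
  }
  last = mtime()
  check = function() {
    now = mtime()
    # a file is stale if it is new or its modification time has advanced
    stale = is.na(last[names(now)]) | now > last[names(now)]
    for (f in names(now)[stale]) rmarkdown::render(f)
    last <<- now
    later::later(check, interval)  # reschedule, as in print_time() above
  }
  check()
}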


Caveats

I don’t want to mislead users who have not read the documentation of later, so I want to repeat this note from its documentation:

[…] scheduled operations only run when there is no other R code present on the execution stack; i.e., when R is sitting at the top-level prompt.

Basically, this means you cannot expect the task to be executed precisely every N seconds (e.g., N = 10 in the example above), especially when another task is occupying the R console. This is not a deal breaker for my original problem; I don’t need the timing to be precise.
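
A quick way to see this behaviour:

later::later(function() timestamp(), 1)  # scheduled to fire in 1 second
Sys.sleep(5)  # blocks the console, so the timestamp prints only after
              # about 5 seconds, once R returns to the top-level prompt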

Further reading

If you want to run a job in a new background process, you should definitely try Gabor’s processx package. It is extremely powerful. In my own case, I wanted to run the job in the current R process, so I didn’t use processx directly.
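
For the curious, here is a minimal sketch of the background-process approach; the child command and the expression it runs are purely illustrative:

# launch a separate R process that does some slow work
p = processx::process$new(
  "Rscript", c("-e", "Sys.sleep(5); cat('done\\n')"),
  stdout = "|"  # pipe the child's standard output back to the parent
)
p$is_alive()  # TRUE while the background job is still running
# after it finishes, p$read_output_lines() returns "done"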


Please comment on the article here: Yihui’s Blog on Yihui Xie | 谢益辉

The post Schedule R Code to Be Executed Periodically appeared first on All About Statistics.





3 Big Data Privacy Risks and What They Mean to Everyday Users

By | iot


When the internet was conceived, many people believed it was the pinnacle of digital communications. It enabled users to share information seamlessly despite being continents apart. And as the online repository of unstructured data grew to a massive scale, technology pioneers began connecting the dots and took digital information sharing to a whole new level.

Today, big data has become one of the most promising concepts in the tech space. You can find it everywhere — from music streaming services to hospitals that store medical records digitally. Big data analytics also enable businesses to refine their lead and customer engagement strategies as well as execute data-driven marketing campaigns.

But what if you’re an everyday user who’s never even heard of big data before? What if you’re simply an employee who’s in no position to worry about big data analytics?

Chances are, you might consider giving up some of your information to certain brands in exchange for personalized services and experiences. This, however, could open up gaping holes in your online security and privacy. After all, the World Wide Web is no stranger to embarrassing data breaches that endanger the information of users such as yourself.

Without further ado, here is a closer look at three …

Read More on Datafloq


Four short links: 19 October 2017

By | ai, bigdata, machinelearning

Detecting Bias, Programmable Liquid Matter, Inside Siri, and Hypothesis Trees

  1. RobotReviewer: Automatically Assessing Bias in Clinical Trials — “Risk of bias assessment may be automated with reasonable accuracy. Automatically identified text supporting bias assessment is of equal quality to the manually identified text in the CDSR. This technology could substantially reduce reviewer workload and expedite evidence syntheses.” Code is open source.
  2. Programmable Liquid Matter — “Taking advantage of the high conductivity of liquid metals, we introduce a shape changing, reconfigurable smart circuit as an example of unique applications.”
  3. Hey Siri: An On-device DNN-powered Voice Trigger for Apple’s Personal Assistant — a lot of detail on Siri’s implementation.
  4. Make Product Decisions Without Doubt — My Lessons from Twitter and Slack — “The power of the hypothesis tree structure is that it not only can weaken or strengthen your beliefs logically, but also help you separate logical concerns from change aversion and other fears around your idea.”

Continue reading Four short links: 19 October 2017.





The academic tip: What is Deep Learning?

By | ai, bigdata, machinelearning

This is a guest post from Jacques Zuber, Data Science Teacher at HEIG-VD.

Deep learning, also called hierarchical learning, is now a popular trend in machine learning. Recently, at the Swiss Analytics Meeting, Prof. Dr. Sven F. Crone presented how deep learning can be used in industry from a forecasting perspective (beer forecasting for manufacturing, lettuce forecasting in retail outlets, container forecasts). Deep learning has a variety of applications, for example image and handwritten character recognition: it can analyse a picture and conclude whether it shows a dog, a human, or something else, and after a learning process it can learn your handwriting and then read and interpret a draft paper you have quickly written. But briefly, what exactly is deep learning?

Deep learning plays an important role in artificial intelligence. It is considered a method of machine learning and, roughly speaking, means neural networks. More precisely, artificial neural networks are intended to simulate the behaviour of biological systems; they are composed of multiple layers of nodes (or computational units), usually interconnected in a feed-forward way, so that each node in one layer has directed connections to the nodes of the subsequent layer. Feed-forward neural networks can be considered a type of non-linear predictive model that takes inputs (very often huge amounts of both labelled and unlabelled data) and transforms and weights them through many hidden layers to produce a set of outputs (predictions). The use of a sequence of layers, organised in deep or hierarchical levels, explains the term “deep learning”. Each layer receives as input the information contained in the previous layer, transforms it, and passes it on to the following layer, refining and enriching it along the way.
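
To make this concrete, here is a toy sketch in R of a (shallow) feed-forward network with a single hidden layer, using the nnet package; the data set, the hidden-layer size, and the tuning parameters are arbitrary illustrative choices, not a recipe:

library(nnet)
set.seed(1)
# inputs: the four iris measurements; one hidden layer of 5 nodes;
# outputs: the three species (a 4-5-3 feed-forward network)
fit = nnet(Species ~ ., data = iris, size = 5, decay = 0.01,
           maxit = 200, trace = FALSE)
pred = predict(fit, iris, type = "class")
mean(pred == iris$Species)  # training accuracy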

We consider the well-known image recognition problem to illustrate how deep learning works in practice. In the first hidden layer, the network analyses the pixels and classifies them, for example by colour. The results are then studied in the second layer to identify relevant relationships; for instance, some lines and shadowing effects are detected. A third hidden layer analyses and combines these curves to discover forms such as human faces. New layers can be added to improve and refine the model and to discover better patterns. This process can continue until the network can identify the nature of the picture (a dog, a cat, or a human, for example).

To conclude this academic tip: deep learning is a learning mechanism. It is very attractive and effective for almost any machine learning and Internet of Things (IoT) task, especially classification. But it needs a lot of data and requires very long model-training times, especially when the number of hidden layers is large. Nevertheless, the availability of new hardware, particularly GPUs, and modern parallel computing have made computation much cheaper and faster.

Neural network models are very flexible but typically over-parametrized. They are so-called “black-box” models and provide results that are not always human-interpretable, even for an expert.

Recent developments have been carried out to improve deep learning methods and algorithms. Different libraries are now available, for example the open-source TensorFlow developed by Google, as well as MXNetR, darch, deepnet, h2o, and deepr, all libraries for the free statistical computing system R. Deep learning is found in commercial business software packages too, for example SAS Enterprise Miner.

This article was originally published in the Swiss Analytics Magazine.






The 501st Reminder About Reproducible Examples

By | ai, bigdata, machinelearning

(This article was originally published at Yihui’s Blog on Yihui Xie | 谢益辉, and syndicated at StatsBlogs.)

First things first: this post is not meant to complain. It is just to document one “miserable” aspect of a software developer’s (daily) life. I wrote about the “reproducible example paradox” last month, and said I had probably reminded users 500 times to provide a reproducible example.

Perhaps it was the 501st time this morning in rstudio/markdown#86.

I’m okay with constantly reminding users, and I totally have the patience for that. What makes me sad is that sometimes my reminders are simply ignored.

Below is how I started a (sort of) typical morning.

Last night I saw rstudio/markdown#86 (I’m a night owl), and I told the issue poster it was a wrong repo. I can understand that the two repositories markdown and rmarkdown can be confusing to users. This has happened a few times, and I’m fine with that.

I asked her¹ to provide a reproducible example when filing the issue in a different repo (no matter whether it was rmarkdown or shiny). This morning, when I started to triage rmarkdown issues, I saw rstudio/rmarkdown#1160. There was no reproducible example.

I felt the issue might have been posted to the shiny repo, too. I went there, and indeed the same issue rstudio/shiny#1872 was there. Nowadays I no longer actively watch the shiny repo, so I won’t see an issue unless I go there and check it.

So it was cross-posted, but not mentioned.

My colleague Winston asked for a reproducible example, too (the 502nd time?). Then she provided an example. I started to investigate the issue with her example, and quickly found I had to correct the code in a few places before I was able to run it, such as adding the missing library(data.table) and library(ggplot2) calls.

It seemed Winston did exactly the same thing. Finally both of us independently corrected her code, and came up with the actual reproducible example.
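
For comparison, a minimal, self-contained, reproducible example declares everything it needs up front, so it runs in a fresh R session; a hypothetical sketch (not her actual code):

library(ggplot2)  # every package loaded explicitly
# data generated inline instead of living only in the poster's workspace
df = data.frame(x = 1:10, y = (1:10)^2)
ggplot(df, aes(x, y)) + geom_point()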

Then Winston worked harder to create another reproducible example and filed the issue rstudio/rmarkdown#1162. When I saw this example, I tried to turn it into a pure Shiny app without R Markdown, and the problem didn’t exist in the Shiny app.

Only at this point was I sure it was probably rmarkdown’s fault, which meant I should be responsible for it. After about half an hour, I put together the fix in rstudio/rmarkdown#1165.

So in the end, four GitHub issues (actually there was one more), one pull request, two users, and two developers were involved. In an ideal world, one GitHub issue plus one minimal reproducible example should have sufficed.

Why does this happen all the time? I think we can find very thoughtful explanations in Peter Baumgartner’s recent blog post “WHAT IS OBVIOUS AND FOR WHOM?”. The world views of users and developers/authors are just different. My conclusion, as pessimistic as it is, is that this problem does not and will not have a solution. When a problem does not have a solution, it is not a problem, but a condition (that you must accept). My pessimistic conclusion mainly comes from the fact that the number of users is significantly larger than developers. As Peter pointed out:

We are explaining the same thing over and over but – to different people, students, learners, classes etc. Even if we explicitly mention common errors and misunderstanding: there are always some learners who will commit these mistakes.

Educating one (different) person at a time is just too slow. I don’t even think educating a hundred at a time helps. The best thing I could do is to try to be emotionless when I feel the negative energy is coming up to my mind, and say “please provide a minimal, self-contained, and reproducible example” like a robot. Well, in fact, this is one of the saved replies in my Github account, so the good thing is that I don’t have to type it every time.



  1. I apologize if I got the gender wrong.


Please comment on the article here: Yihui’s Blog on Yihui Xie | 谢益辉

The post The 501st Reminder About Reproducible Examples appeared first on All About Statistics.





TensorFlow: The Cookbook

By | iot, machinelearning

Defining, designing, creating, and implementing a process to solve a business challenge or meet a business objective is the most valuable role… In EVERY company, organization and department.

Unless you are talking about a one-time, single-use project within a business, there should be a process. Whether that process is managed and implemented by humans, AI, or a combination of the two, it needs to be designed by someone with a perspective complex enough to ask the right questions: someone capable of stepping back and saying, ‘What are we really trying to accomplish here? And is there a different way to look at it?’

For more than twenty years, The Art of Service’s Self-Assessments have empowered people who can do just that – whether their title is marketer, entrepreneur, manager, salesperson, consultant, business process manager, executive assistant, IT manager, or CxO – they are the people who rule the future. They are the people who watch the process as it happens and ask the right questions to make it work better.

This book is for managers, advisors, consultants, specialists, professionals and anyone interested in TensorFlow assessment.

All the tools you need for an in-depth TensorFlow Self-Assessment. Featuring 616 new and updated case-based questions, organized into seven core areas of process design, this Self-Assessment will help you identify areas in which TensorFlow improvements can be made.

In using the questions you will be better able to:

– diagnose TensorFlow projects, initiatives, organizations, businesses and processes using accepted diagnostic standards and practices

– implement evidence-based best practice strategies aligned with overall goals

– integrate recent advances in TensorFlow and process design strategies into practice according to best practice guidelines

Using a Self-Assessment tool known as the TensorFlow Scorecard, you will develop a clear picture of which TensorFlow areas need attention.

Included with your purchase of the book is the TensorFlow Self-Assessment downloadable resource, which contains all the questions and Self-Assessment areas of this book in a ready-to-use Excel dashboard, including the self-assessment, graphic insights, and project planning automation – all with examples to get you started right away. Access instructions can be found in the book.

You are free to use the Self-Assessment contents in your presentations and materials for customers without asking us – we are here to help.

$76.89