Big Data Strategy (Part III): Is Your Company Data-Driven?



If you missed the first two parts: I have previously proposed some tips for analyzing corporate data, as well as a data maturity map for understanding an organization's stage of data development. Now, in this final article, I want to conclude this mini-series with some final food for thought and considerations on big data capabilities in a company context.

I. Where is home for big data capabilities?

First of all, I want to spend a few more words on the organizational home (Pearson and Wegener, 2013) for data analytics. I claimed that the Centre of Excellence is the cutting-edge structure for incorporating and supervising the data functions within a company. Its main task is to coordinate cross-unit activities, which include:

Maintaining and upgrading the technological infrastructure;
Deciding what data should be gathered and from which departments;
Helping with talent recruitment;
Planning the insight-generation phase and setting the privacy, compliance, and ethics policies.

However, other forms exist, and it is essential to know them, since they may sometimes fit better with preexisting business models.

[Figure: Data analytics organizational models]

The figure shows different combinations of data analytics independence and business models, ranging from business units (BUs) that are completely independent of one another to independent BUs that join efforts in some …

Read More on Datafloq


From enterprise service bus to microservices


The present and future of data integration in the cloud.

In this episode of the O’Reilly podcast, Jon Bruner sat down with Rahul Kamdar, director of product management and strategy at TIBCO Software. They discussed the shift from the centralized enterprise service bus (ESB) to a distributed data architecture based on microservices, APIs, and cloud-native applications.

Here are some highlights from their conversation:

Cheaper, more scalable, and open to broader interaction

In some ways, [microservices] are derived from the traditional service-oriented architecture (SOA) style of services. But really, they represent a refinement of that architecture, in terms of the set of practices you follow to build them so they are easy to develop and less expensive to manage, operate, and deploy. Microservices need to be elastic and scalable when taken to environments such as the cloud.

The second aspect is obviously around APIs, something that is key to companies pretty much all over the world right now, both from a technology standpoint and from a business standpoint. APIs extend the concept of microservices by making them available to external consumers, which really means defining the services in a standardized format and making them available in a consumable manner for developers and third-party partners alike.
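As a concrete illustration of that idea, here is a minimal sketch of a single-purpose service exposing one JSON endpoint over HTTP, using only the Python standard library. The service name, route, and port are hypothetical choices for illustration, not anything prescribed by TIBCO or the podcast.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class HealthHandler(BaseHTTPRequestHandler):
        # One small, independently deployable service: it does one thing
        # (report the hypothetical inventory service's health) as an API.
        def do_GET(self):
            if self.path == "/health":
                body = json.dumps({"service": "inventory", "status": "ok"}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        # Each microservice runs as its own process, typically behind an API
        # gateway that makes it consumable by developers and partners.
        HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()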

Changing your architecture means changing your culture

For most users and developers who are moving toward this kind of architecture, the first thing that is really important is accepting that change is required not only on the technology side but, in many cases, on the organizational or cultural side as well. Not only do you break down your applications from large, traditionally monolithic applications, but you also break down your teams into smaller functional groups that work in an agile way (agile scrum, in many cases) so they can reach their end results more quickly and easily while focusing on a specific, common functional set of applications they want to build.

What’s coming down the road?

In the new world, you’re still going to build distributed applications and the architecture that supports them, but at the same time, the set of different systems, data sources, and applications you still need to talk to is going to remain, and probably get more complex because of hybrid deployments. Some customers who have always been on-premises or in a private data center are starting to deploy to multiple clouds; in some cases, something runs in a private cloud, something in a public cloud, and something with third-party partners. They still want all their systems to be able to talk to each other.

Integration by itself is becoming even more key, but how it’s being done is definitely changing. ESBs mostly serve the traditional architectures that remain, but new projects are likely to adopt a more distributed integration architecture.

This post is part of a collaboration between O’Reilly and TIBCO. See our statement of editorial independence.

Continue reading From enterprise service bus to microservices.





Three Simple Steps to Customer Discovery


Building a data product is no different than building any software product in that you have to really know your customer and value proposition before you go to market and scale. The process of getting to know your customer and how your proposed solution can help solve a problem is most commonly known as customer discovery. As you’ve seen in our other blog posts about the Blueprint product, we went through an extensive customer discovery process prior to developing our go-to-market strategy.

If the customer discovery concept is new to you, I’d recommend reading the following two books before diving head first into new product development:

The Lean Startup by Eric Ries

Four Steps to the Epiphany by Steve Blank

We’ve adapted what we’ve learned from these books to a process that we can use to test the viability of other products that we’ve built on the Juicebox Platform such as JuiceSelect (a product that helps chambers of commerce communicate data and drive to action), and now we’re sharing it with you.

Step 1: Craft your value proposition hypothesis. Before you start having conversations with potential customers, you need to have an idea of the problem you believe you are solving with your product. Once you have a basic outline of the problem and your solution, you’re ready to test your hypothesis. Here’s how we structured our initial description of the JuiceSelect value proposition:

  • Target audience – The primary audience is lawmakers and chamber members/investors

  • Urgent need – Chambers need to publish data to support important policymaking decisions and track progress against strategic plans

  • Ease of setup – Turning the website on requires minimal effort from chamber staff

Step 2: Set up phone calls and in-person meetings to test your value proposition and demo your product (if you don’t yet have a minimum viable product [MVP], wireframes are good enough at this step). You should set up meetings with potential customers in your market and with organizations affiliated with your potential customers. For JuiceSelect, this meant reaching out to small, large, state, and regional chambers to make sure we were testing all aspects of our market. We also reached out to an Association for Chambers of Commerce and to a few vendors that sell other products to chambers to get a better understanding of our potential clients.

Step 3: Compile feedback and re-assess product-market fit. Now it’s time to pull together all of your findings and figure out whether your original hypotheses about the problem and your solution were correct.
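As a toy illustration of this step (the numbers and segments below are made up for the example, not our actual interview data), you can tally responses by market segment to see where the value proposition actually resonates:

    from collections import defaultdict

    # (segment, did the interviewee confirm the urgent need?) - hypothetical data
    interviews = [
        ("state chamber", True), ("state chamber", True),
        ("regional chamber", False), ("regional chamber", True),
        ("regional chamber", False),
    ]

    tally = defaultdict(lambda: [0, 0])   # segment -> [confirmed, total]
    for segment, confirmed in interviews:
        tally[segment][0] += int(confirmed)
        tally[segment][1] += 1

    for segment, (confirmed, total) in tally.items():
        print(f"{segment}: {confirmed}/{total} confirmed the urgent need")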

After completing these three steps, you’ll often find that you didn’t completely understand the problem and/or that your proposed solution is really only a partial solution. For instance, when we started our customer discovery process for the JuiceSelect product, we had made an assumption that the product would be valuable to all 2,000+ chambers of commerce nationwide. After a few weeks of demos and conversations about our value proposition, we discovered that the product was really primarily suitable for state chambers of commerce. State chambers of commerce need a public website to display all key economic metrics to help drive public policy decisions, while regional chambers only want to display data relevant to helping them attract new businesses to their region.

Good thing we didn’t sink tons of marketing and sales dollars into a market for which we didn’t have the right fit! However, all is not lost. We can still sell the original product to the state chambers while developing a related product that will fit the needs of the remaining 1,950 regional chambers.

If you’re interested in seeing how Blueprint or JuiceSelect can help your organization, we’d love to hear from you. Send us a message at [email protected] or tell us about yourself in the form below!


Intel Edison Breakout Board Kit – EDI1BB.AL.K



General Information
Brand Name: Intel
Manufacturer Part Number: EDI1BB.AL.K
Manufacturer: Intel Corporation
Product Name: Edison Breakout Board Kit
Product Type: Network Upgrade Kit

Miscellaneous
Additional Information (Essentials):
  • Board Form Factor: 25mm x 35.5mm
  • Socket: 70-pin Hirose 0.4mm
  • Lithography: 22 nm
  • DC Input Voltage Supported: 3.15V-4.5V
  • Description: Wi-Fi/BT Compute Module
  • Max Memory Size (dependent on memory type): 4 GB
  • Memory Types: DDR3, NAND Flash
  • Physical Address Extensions: 32-bit
  • USB Revision: USB 2.0
  • # of USB Ports: 1; USB 2.0 Configuration (Back + Internal): 1
  • # of Serial Ports: 2; Serial Port via Internal Header: Yes
  • Audio (back channel + front channel): 1 I2S
  • Integrated Wi-Fi: Yes, 802.11n
  • Integrated Bluetooth: Yes
  • Max CPU Configuration: 1
  • Package Size: 25mm x 35.5mm
  • Intel® Platform Protection Technology (Trusted Execution Technology): Yes

Warranty
Limited Warranty: 1 Year

Height: 1.50 inches
Width: 3.50 inches
Length: 4.25 inches
Shipping Weight: 0.20 pounds

$96.24



The Master Statistique & Econométrie Is Dead


After 10 years, the Statistique & Econométrie master's program at the Université de Rennes 1 is going away… only to be reborn from its ashes, starting next academic year, under the name Mathématiques Appliquées, Statistique, jointly run by the Université de Rennes 1, Université Rennes 2, Agrocampus Ouest, ENSAI, and INSA Rennes. In particular, there will be a track called “Prévision et Prédiction Economiques”. There will be many new courses, but I will have the chance to say more about that later…



AIA: Architecture Billings Index increased in February



Note: This index is a leading indicator primarily for new Commercial Real Estate (CRE) investment.

From the AIA: Architecture Billings Index rebounds into positive territory

The Architecture Billings Index (ABI) returned to growth mode in February, after a weak showing in January. As a leading economic indicator of construction activity, the ABI reflects the approximate nine to twelve month lead time between architecture billings and construction spending. The American Institute of Architects (AIA) reported the February ABI score was 50.7, up from a score of 49.5 in the previous month. This score reflects a minor increase in design services (any score above 50 indicates an increase in billings). The new projects inquiry index was 61.5, up from a reading of 60.0 the previous month, while the new design contracts index climbed from 52.1 to 54.7.

“The sluggish start to the year in architecture firm billings should give way to stronger design activity as the year progresses,” said AIA Chief Economist, Kermit Baker, Hon. AIA, PhD. “New project inquiries have been very strong through the first two months of the year, and in February new design contracts at architecture firms posted their largest monthly gain in over two years.”

• Regional averages: Midwest (52.4), South (50.5), Northeast (50.0), West (47.5)

• Sector index breakdown: institutional (51.8), multi-family residential (49.3), mixed practice (49.2), commercial / industrial (48.9)
emphasis added

[Graph: AIA Architecture Billings Index]

This graph shows the Architecture Billings Index since 1996. The index was at 50.7 in February, up from 49.5 in January. Anything above 50 indicates expansion in demand for architects’ services.

Note: This includes commercial and industrial facilities like hotels and office buildings, multi-family residential, as well as schools, hospitals and other institutions.

According to the AIA, there is an “approximate nine to twelve month lag time between architecture billings and construction spending” on non-residential construction. This index was positive in 9 of the last 12 months, suggesting a further increase in CRE investment in 2017.
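For readers wondering where the 50 threshold comes from: the ABI is a diffusion index, and a common construction for such indexes (the AIA's exact methodology may differ) scores the share of firms reporting higher billings plus half the share reporting no change. A quick sketch with made-up survey counts:

    # Illustrative diffusion-index arithmetic with hypothetical survey counts;
    # a common construction, not necessarily the AIA's exact methodology.
    def diffusion_index(up: int, flat: int, down: int) -> float:
        total = up + flat + down
        return 100.0 * (up + 0.5 * flat) / total

    # Hypothetical panel of 200 firms: 72 report higher billings, 59 no change.
    print(diffusion_index(up=72, flat=59, down=69))   # 50.75: modest growth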



Paris Machine Learning Hors Série #10: SPARK Workshop (Session 1)





Leonardo Noleto, data scientist at KPMG, walks us through the process of cleaning raw data and transforming it into “clean” data with Apache Spark.
Apache Spark is a general-purpose open-source framework designed for distributed data processing. It extends the MapReduce model, with the advantage of processing data in memory and interactively. Spark offers a set of components for data analysis: Spark SQL, Spark Streaming, MLlib (machine learning), and GraphX (graphs).
This workshop focuses on the fundamentals of Spark and its data-processing paradigm through the Python programming interface (more precisely, PySpark).
Installation, configuration, cluster processing, Spark Streaming, MLlib, and GraphX will not be covered in this workshop.
The material to install is available here.
Objectives
  • Understand the fundamentals of Spark and place it within the Big Data ecosystem;
  • Know how it differs from Hadoop MapReduce;
  • Use RDDs (Resilient Distributed Datasets);
  • Use the most common actions and transformations to manipulate and analyze data;
  • Write a data-transformation pipeline (see the sketch after this list);
  • Use the PySpark programming API.
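As a concrete companion to these objectives, here is a minimal PySpark sketch of the workshop's core pattern: load raw text into an RDD, chain lazy transformations to clean it, and trigger computation with an action. The file name and parsing steps are illustrative assumptions, not the workshop's actual exercises.

    # Minimal RDD cleaning pipeline (illustrative; assumes a local Spark
    # installation, and "raw_logs.txt" is a hypothetical input file).
    from pyspark import SparkContext

    sc = SparkContext("local[*]", "cleaning-demo")

    raw = sc.textFile("raw_logs.txt")                  # RDD of raw lines (lazy)
    clean = (raw
             .map(lambda line: line.strip())           # transformation: trim whitespace
             .filter(lambda line: line != "")          # transformation: drop empty lines
             .map(lambda line: line.split(",")))       # transformation: split into fields

    print(clean.take(5))   # action: only now does Spark actually run the pipeline
    sc.stop()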
This workshop is the first in a series of two workshops on Apache Spark. To follow the next workshops, you must have attended the previous ones or be comfortable with the topics already covered.
What are the prerequisites?
  • Know the basics of the Python language (or learn them quickly via this online course: Python Introduction)
  • Have some exposure to data processing with R, Python, or Bash (why not?)
  • No prior knowledge of distributed processing or Apache Spark is required. This is an introductory workshop. People with prior Spark experience (in Scala, Java, or R) may get bored (this is a beginners' workshop).
How should I prepare for this workshop?
  • Bring a reasonably modern laptop with at least 4 GB of memory and a web browser installed. You must be able to connect to the Internet over Wi-Fi.
  • Follow the instructions to prepare for the workshop (install Docker plus the workshop's Docker image).
  • The data to clean is included in the Docker image. The exercises will be provided during the workshop as Jupyter notebooks.

