Communicating data science: An interview with a storytelling expert | Tyler Byers

July 14, 2016

In May I announced that I was assembling a series for the blog covering topics related to creating and presenting analyses, including the ingredients of a well-constructed analysis, data visualization, and practical guides to using tools like R Markdown and Jupyter notebooks. The internet is host to innumerable tutorials on every aspect of machine learning, from simple linear regression to cutting-edge algorithms in deep learning. However, it’s often acknowledged that a career in data science typically demands more time and effort on data analysis and understanding than on intense computation. What more does it take to translate Kaggle ranking points into tangible job prospects?

Soon after, I came across an excellent write-up by Tyler Byers, Data Science: Beyond the Kaggle, and I knew I was in luck. Tyler, a data scientist and software developer with Comverge, deftly discusses how excelling in Kaggle competitions represents only some of the multi-faceted responsibilities of a career in the field. Seeing the alignment in our perspectives, I asked Tyler whether he would be willing to share more thoughts on what aspiring data scientists can learn that’s not found in a textbook or online tutorial.

To kick off this series on communicating data science, I interview Tyler about how he uses his skills in data visualization and effective reporting to collaborate and influence in his career. His advice to those who are talented at rising to the top of Kaggle’s leaderboard, but need help finding their voice when it comes to communicating the insights in their ensemble? Read extensively outside of your domain and listen to stand-up comedy!

Let’s set the scene

Can you tell us about your education and background?

I earned my Bachelor’s degree in Engineering Mathematics from the University of Arizona, which I had attended largely for athletics – they had a wheelchair racing team and recruited me to race for them. The Engineering Mathematics degree was mostly a mix of applied math, computer science, and mechanical engineering. During the summers, I had internships with three different companies where I was doing data work, which in the early 2000s meant I was writing thousand-line Excel macros!

After university, I worked in government for nearly a decade, doing data analysis with aerospace engineering applications. This was mostly classic data analysis, including a lot of writing reports and giving presentations, and unfortunately less coding than I would have liked. In 2013, I discovered MOOCs, and took as many as possible over the next two years to update my skill set for the modern world, including the Coursera Data Science specialization and several Udacity data analysis and data science courses. In 2015 I moved to the private sector, and now work as a Data Scientist/Machine Learning Software Developer for a company in the clean energy business in Denver, Colorado. And finally, 11 years after getting my BS degree, I have begun working on a Master’s degree in Data Science from Regis University here in Denver.

Can you describe what you do in your job? What does collaboration look like, and how do you interface with other teams?

My title is “Software Developer, Machine Learning,” but my manager and I see what I do as more of a core Data Science role, with software development being the last-mile effort after I’ve done a large analysis project. I basically spend my time getting dirty with our data, and figuring out what we can do to provide energy forecasting capabilities for our customers. After I acquire my data (mostly via SQL, web scraping, and unzipping customer CSVs) and do a lot of exploratory analysis, I develop machine learning models. Like a Kaggle competition, I’ll iterate on my models, improving them against a metric, until I or my manager says “that’s good enough for now.” Then I’ll write production-level code to integrate with the application our Data Engineers are developing.

… one of my big focuses is keeping up a blog, telling the story of my data as I’m analyzing it, and making sure to have lots of great visualizations to refer to later.

Collaboration with the team, and especially my manager, is extremely important throughout all phases of my project. I usually find that what I’m working on is several months in front of what the developers are working on. We’re a small company, and I’m the only Data Scientist integrated with our team. In real-time collaboration, I participate in the morning stand-up meeting, am active in Slack chat, and of course answer any questions from developers if they are having difficulty integrating the data pipes with my models. But because I am months out in front of our team, it can be hard to remember what I worked on, and why it was important, when I get those questions. So one of my big focuses is keeping up a blog, telling the story of my data as I’m analyzing it, and making sure to have lots of great visualizations to refer to later.

In addition to collaboration with my peers on my team, I make sure to collaborate with higher-ups in the company. Either via informal conversations in the break room, or scheduled meetings, I keep them up-to-date about what’s being done with the data we’re collecting, and what I think is possible. This is where I really have to work on my storytelling skills and try to influence. The data are powerful, but we need to find business pain to cure, especially our customers’ pain. Rather than just chasing down a problem because I think it is interesting, I’ll ask managers “is this a business need?” or “which project do you think is higher priority for our customers at this time?” The higher managers know the business better than I do (but I’m trying to learn!), so I work to get confirmation that my projects address our business needs.

What has been your experience with Kaggle? How has it helped you in your career?

I first learned about Kaggle in 2014 while taking the Intro to Data Science course from Udacity. Like many people, my first taste of competition was with the Titanic data set.

I loved it. Working on a model, improving it, and watching my place on the leaderboard climb as my score improved was intoxicating. I learned much more about Machine Learning in a short time from Kaggle than I could have from any course. I couldn’t believe how much help people gave out – for free! – in the Kaggle forums. I did a few competitions in 2014 and 2015; unfortunately with a full-time job, growing family, and a full MOOC load my time was limited, and Kaggle was left for those few weekends when I wasn’t enjoying the Colorado outdoors or working on a course assignment, so I never finished particularly high on the leaderboard (I did have one top 10% – in the 2015 Analytics Edge course competition).

Bike trails in the Colorado mountains.

Sometimes you just need to spend your weekends on the bike trails in the Colorado mountains. `gsub('sometimes', 'often', last_sentence)`.

But even though my participation was limited, I still think about the lessons I learned in Kaggle competitions, and apply those to my job. In particular, learning about cross-validation techniques, feature engineering, and model ensembling, which I mostly learned about from Kaggle, have proven to be valuable as I improve my models at work. As I’ve written about elsewhere, I’ve tried to devote much of my time to learning a broader spectrum of Data Science than what can be learned from Kaggle competitions; however, I greatly value the time that I have spent with the competitions, and cruising the forums and the blog, because they have shaped my career in a very positive manner. And I still check back from time to time, and am very interested in spending more time with Kaggle scripts and Kaggle datasets!
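The cross-validation habit Tyler mentions carries over directly from Kaggle to the job. A minimal sketch of k-fold cross-validation in base R; the data and the linear model here are invented for illustration, not drawn from his actual work:

```r
# K-fold cross-validation sketch: estimate out-of-sample error by
# repeatedly holding out one fold for testing.
set.seed(42)
df <- data.frame(x = runif(100))
df$y <- 3 * df$x + rnorm(100, sd = 0.2)   # toy data with known signal

k <- 5
folds <- sample(rep(1:k, length.out = nrow(df)))  # assign each row a fold

rmse_per_fold <- sapply(1:k, function(i) {
  train <- df[folds != i, ]
  test  <- df[folds == i, ]
  fit   <- lm(y ~ x, data = train)        # stand-in for any model
  preds <- predict(fit, newdata = test)
  sqrt(mean((preds - test$y)^2))          # RMSE on the held-out fold
})

mean(rmse_per_fold)  # cross-validated estimate of generalization error
```

Iterating on a model against this kind of held-out metric, rather than training error, is exactly the discipline the leaderboard teaches.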

The plot thickens

Can you tell us more about the importance of storytelling in data science?

Humans are story-telling animals. We love a good story. Stories, whether in the form of movies, epics, songs, or myths, interest us, bind generations together, and put us in touch with deeper messages. Everyone knows how to tell stories. But certainly some of us seem to have more story-telling talent than others. That group of five or six friends you hung out with in college: there was probably one person in that group that everyone loved to listen to. They could tell stories for hours on end – funny stories, stories that made you think, stories that made you cry, stories that changed your life. This person didn’t necessarily know more about the world than you. But they knew how to craft their message in an interesting manner, and likely had an outsized influence on you and your circle of friends.

“You have to read widely, constantly refining (and redefining) your own work as you do so. It’s hard for me to believe that people who read very little (or not at all in some cases) should presume to write and expect people to like what they have written, but I know it’s true.” — Stephen King

As Data Scientists, we need to be able to influence. We have data and insights that can shape the direction of a business. But all our insights are for naught if we can’t convince the leaders in our business to act on them. Our Data Science peers might be interested in the details of our heroic data wrangling, or seeing how many models we ensembled, or how many features we engineered. Leaders of our business? Unless our work can save or generate dollars, they see nothing more than perhaps a grown-up version of a science fair project. That’s right, I’m comparing your sophisticated models that took you three months to build to a fifth-grader’s papier-mâché volcano. Interesting and fun, but eventually just clutter in your bedroom that you throw out with the garbage. Hopefully you had fun and learned something, even if you didn’t win that ribbon!

So, as Data Scientists, how can we influence? How can we convince our company leadership to act on our insights? I struggled with this when I began my current job. The five-minute morning standup meeting and occasional ggplot visualizations dropped into Slack chat weren’t enough. I was doing (what I thought was) cool work, but it wasn’t affecting the direction of our product. So I found my voice. I began telling my data’s stories, and writing them down. Working hard on visualizations. Thinking about my audience’s needs. Asking questions – finding out what my business needs were, not just what I found interesting in the data. I began focusing on how I was communicating my insights. And things began to change. It’s still (and always will be) a work in progress, but I am having increasing influence as my communication and storytelling improve.

How do you communicate results of your work in your career?

It starts with the blog. Last September, I spent several days figuring out how to start a Jekyll-driven blog on our company’s GitHub Enterprise install. At first I was worried that my time setting it up would be wasted – this was a few days when I wasn’t doing data work, after all. But this has been one of the best decisions I have made. I blog about two to three times a month, at various “stopping points” in whatever project I’m working on. I love the blog because the words are mine, and I don’t have to worry about sounding too formal. I can easily cross-reference previous blog posts. Writing helps me organize my thoughts, helps me record additional questions, and provides a repository of ready visualizations if I need to have a presentation ready quickly. I forget what I worked on just two months ago. I forget what ideas I had two months ago. If I go to my blog, I get that refresher much better than I would had I left the results in an R Markdown file in a random directory on GitHub. And I can easily share my blog links with company leadership. In fact, I’ve been shocked how many times I’ve been asked a question by company leadership, and I can say “oh yeah, I think I did a project on that 4 months ago…yep, here’s the link!”

Comverge blog screenshot.

In my internal blog, I regularly communicate my analytic findings in order to share with colleagues, managers, and my future self.

On a short-term basis, I still share day-to-day findings with my team in the morning standup meeting, but I try to limit what I share to about 15 seconds. I might still drop a ggplot visualization or two in our Slack chat every couple weeks. These are nice to keep a conversation going, or to maybe get a question answered, but are easily forgotten and not very influential.

… since I frequently write on my blog, I find delivering the content of the presentations to be rather easy. I know the material, have done my best to explain it in my blog, and rarely do I get very technical questions that I can’t answer.

Finally, I’ll occasionally give presentations to customers or potential customers, usually with leaders from my business in attendance. These can be high-stress, because business and dollars rely on the outcomes! But, since I frequently write on my blog, I find delivering the content of the presentations to be rather easy. I know the material, have done my best to explain it in my blog, and rarely do I get very technical questions that I can’t answer. The presentations are more about showing pretty pictures and telling how our way is an improvement on the old way (and it’s not very hard showing that Machine Learning is an improvement compared to analysis from Excel spreadsheets!).

How does effective reporting, analysis, & visualization add value to your work?

Effective reporting, detailed analysis, and visualization are what give my “core” work value. My core work is developing machine learning software. So it’s similar to a Kaggle competition, where I develop a model, test it against a metric, and iterate to improve the model. Once I get to a good “stopping point” with my model, I’ll write production-level code to integrate with the rest of our application.

Effective reporting, detailed analysis, and visualization are what give my “core” work value.

But, as any good Kaggler knows, in the course of creating my model, I’ve created a beast! In particular, I’ve come up with several new features that must be engineered. I don’t do the engineering for the production systems! I write code in R! My team writes code in Ruby on Rails! Their engineering pipelines feed my models.

I’ve just spent two months creating this new, better model. Now comes the real work. The work of influence. The work of convincing my team lead that it’s worth his effort to write new Pivotal stories, point them up, and assign them to team members. Maybe push our product deadline back by a few weeks to accommodate the new work I’ve created. Why and how is my new model better?

If I’ve been smart, I’ve been blogging about my new model all along. Creating visualizations showing the improvements against the test metric. Admitting in my blog when a new feature that I thought would improve the model actually made it worse.

My analysis must be extremely detailed. I need to anticipate questions. I need to have had conversations with my team lead all along, to try to figure out what changes might be feasible, and which ones I should leave until the next version of our app.

Comverge blog visualization.

Visualizations are a critical aspect of communicating data science findings (source: Comverge blog).

I’ve found that visualizations are even more effective than my words. I can write for five pages trying to explain my argument. But a good visualization or two captures the mind, captures the imagination, and I’ve found can ultimately be the influencing factor in convincing my team to take a new route.

Supplementary reading

Do you have any advice for someone with the technical chops to be a data scientist, but who struggles with crafting a story?

Every person has the ability to tell a story. Clearly, some people are more talented than others, but storytelling is ultimately a skill, and skills can be improved with intentional practice. Technical people in particular can really struggle to tell a story. I don’t know if it’s the whole left-brain versus right-brain thing, but clearly crafting a story is a creative process. People with technical skills often tend to get bogged down in the details. Effective storytelling is more about keeping an eye on the big picture – where you want to start, where you want to peak, and where you want to end – while giving enough details to fill in the gaps in the big picture. Here are some of my tips for becoming a better storyteller:

  • Read as much as possible. Fiction, non-fiction, technical books, blog posts, magazines, newspapers. If you’re only reading material in Data Science, or in technology, most people will find you boring. You won’t be able to connect with many people. Human connection is critical to effective storytelling. Writers who get paid to write are essentially paid storytellers. Learn from them! I love reading National Geographic magazine in particular. The photographs are the story, and the essays surrounding the photographs add in the color and context. If you must read data-related stories (and you should!), I think FiveThirtyEight sets the standard. Take inspiration!
  • Write as much as possible. Writing provides clarity to your thoughts. You will ask better questions. You will be able to provide better answers when asked about your work. A well-written blog post or report can give you significantly more influence than you may think! But you must practice. Do the Work.
  • Find your voice. You won’t be influential if your reports are boring or overly academic. Leave the academic work for academia. That’s its place, and it works there. You are in business now! Find your voice. Don’t be afraid of saying “I” in your blog posts. Make people want to read your work. Make people want to act on what you say. This may be especially hard for people with PhDs, who have written academically for years. But once you get past this, it is extremely freeing.
  • Visualizations First: When writing or giving a presentation in Data Science, before you write a word, add in your visualizations. Your visualizations should be your story. The words you write or say should be meant to add context to your visualizations.
  • Listen to Stand-up Comedy. Comedians have a special gift for noticing absurdities in the world around us. But listen to the best stand-up comedians. Are they standing up there telling one-liners? Knock-knock jokes? No! They are communicating the absurd via stories, often very elaborate stories. Listen to how they craft these stories. You don’t want to necessarily emulate a stand-up comedian when you are giving a Data Science presentation (I highly recommend against it), but you can learn a lot through their use of story as a communication tool, and can take cues from their cadence and inflection.
  • Know your Audience: This is key to connecting, and thus influencing. If your audience is the decision-makers of a potential client, they aren’t going to care about the details of your five-model ensemble, or what features you had to create. They have business pain; how is your model solving it? Who do you expect to read your reports? Target their needs. Sure, you might need to write up a technical document for your future self or future teammates (or your replacement when you leave). But then write another version of that report that your third-line manager can understand and act on. If you don’t know your audience’s needs, you won’t connect with them; if you don’t connect, you won’t influence; if you can’t influence, then hopefully your work has been interesting to you, because it’s likely useless to your business.

What tools do you use for analysis and reporting?

For analysis, I spend almost all of my time using R in RStudio. I love the ease of data manipulation and visualization that R provides, and of course the Hadleyverse has made using R an absolute joy. I’ve had to do a few things in Python, in particular data acquisition from a (terrible) municipal government SOAP API. But, I try to keep my analysis in a single language to cut down on the cognitive cost of switching back and forth.

RStudio screenshot from the How Much Did it Rain II competition.

I love to document my analytic steps in Rmd files in RStudio and create ggplot visualizations along the way. This is a screenshot from the Kaggle How Much Did it Rain II contest.

For reporting, as I mentioned before, I mostly do blogging, on our GitHub Enterprise with a Jekyll-powered blog. I love using Jekyll, because writing in Markdown is so easy that I can focus on the content, and not try to fight with a UI.
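For anyone wanting to replicate this setup, a Jekyll post is just a Markdown file whose name encodes the date, with a small YAML front matter block at the top. A minimal sketch (the filename and title below are hypothetical, not from the Comverge blog):

```markdown
<!-- _posts/2016-06-01-forecasting-notes.md -->
---
layout: post
title: "Forecasting model notes (hypothetical title)"
---

Everything below the front matter is plain Markdown, so the writing
stays focused on content rather than fighting with a UI.
```

GitHub Enterprise renders the Jekyll site automatically via GitHub Pages, which is what makes this such a low-friction internal blogging option.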

I’m also working on some automated reports for customers, and R Markdown helps make beautiful PDF reports relatively simple (I had to learn a little bit of LaTeX too!). Finally, I’m working on a making an energy analytics dashboard for internal company use, and am very excited to be experimenting with the brand-new “flexdashboards” package, which combines R, R Markdown, and Shiny to create a reactive dashboard that’s pretty easy to put together – again, it puts the focus on the content, not fighting with your tools!


Thanks for reading the first entry in this series on communicating data science. Stay tuned for a guide to presenting a well-constructed analysis!

