The Fundamentals of Data Science

 

Two of the biggest buzzwords in our industry are “big data” and “data science”. Big Data attracts a lot of interest right now, but Data Science is fast becoming a very hot topic in its own right.

I think there’s room to really define the science of data science – what are those fundamentals that are needed to make data science truly a science we can build upon?

Below are my thoughts for an outline for such a set of fundamentals:

Fundamentals of Data Science

Introduction

The easiest thing for people within the big data / analytics / data science disciplines is to say “I do data science”. However, when it comes to data science fundamentals, we need to ask the following critical questions: What really is “data”, what are we trying to do with data, and how do we apply scientific principles to achieve our goals with data?

– What is Data?
– The Goal of Data Science
– The Scientific Method

Probability and Statistics

The world is a probabilistic one, so we work with data that is probabilistic – meaning that, given a certain set of preconditions, data will appear to you in a specific way only part of the time.  To apply data science properly, one must become familiar and comfortable with probability and statistics.

– The Two Characteristics of Data
– Examples of Statistical Data
– Introduction to Probability
– Probability Distributions
– Connection with Statistical Distributions
– Statistical Properties (Mean, Mode, Median, Moments, Standard Deviation, etc.)
– Common Probability Distributions (Discrete, Binomial, Normal)
– Other Probability Distributions (Chi-Square, Poisson)
– Joint and Conditional Probabilities
– Bayes’ Rule
– Bayesian Inference
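
As a quick illustration of how Bayes’ rule turns a prior belief and a likelihood into a posterior, here is a minimal sketch in Python; the prevalence and test-accuracy numbers are made up purely for illustration.

```python
# A minimal Bayes' rule example: probability of a condition given a positive test.
# The prevalence and accuracy figures below are illustrative, not real data.

def bayes_posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' rule."""
    evidence = sensitivity * prior + false_positive_rate * (1.0 - prior)
    return sensitivity * prior / evidence

prior = 0.01            # P(condition): 1% prevalence (assumed)
sensitivity = 0.95      # P(positive | condition)
false_positive = 0.05   # P(positive | no condition)

posterior = bayes_posterior(prior, sensitivity, false_positive)
print(f"P(condition | positive test) = {posterior:.3f}")  # roughly 0.16
```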

Decision Theory

This section covers one of the key fundamentals of data science. Whether applied in scientific, engineering, or business fields, we are trying to make decisions using data. Data itself isn’t useful unless it’s telling us something, and we have to decide what it is telling us. How do we come up with those decisions? What factors go into the decision-making process? What is the best method for making decisions with data? This section tells us…

– Hypothesis Testing
– Binary Hypothesis Test
– Likelihood Ratio and Log Likelihood Ratio
– Bayes Risk
– Neyman-Pearson Criterion
– Receiver Operating Characteristic (ROC) Curve
– M-ary Hypothesis Test
– Optimal Decision Making
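
To make the binary hypothesis test and likelihood ratio from the list above concrete, here is a small, hedged sketch that decides between two Gaussian hypotheses; the means, variance, and threshold are arbitrary choices for illustration.

```python
# Sketch of a binary hypothesis test via the log-likelihood ratio.
# H0: x ~ N(0, 1), H1: x ~ N(1, 1). All parameters are illustrative.

def log_likelihood_ratio(x, mu0=0.0, mu1=1.0, sigma=1.0):
    """log [ p(x | H1) / p(x | H0) ] for Gaussian hypotheses with equal variance."""
    return ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)

def decide(x, threshold=0.0):
    """Choose H1 when the log-likelihood ratio exceeds the threshold, else H0."""
    return "H1" if log_likelihood_ratio(x) > threshold else "H0"

for observation in (-0.3, 0.4, 1.2):
    print(observation, "->", decide(observation))
```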

Estimation Theory

Sometimes we need to characterize data – averages, parameter estimates, and so on. Estimation from data is essentially an extension of decision making, which makes this a natural follow-on to Decision Theory.

– Estimation as Extension of M-ary Hypothesis Test
– Unbiased Estimation
– Minimum Mean Square Error (MMSE)
– Maximum Likelihood Estimation (MLE)
– Maximum A Posteriori Estimation (MAP)
– Kalman Filter
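
As a simple illustration, the maximum likelihood estimate of a Gaussian mean is just the sample average, while a MAP estimate blends the data with a prior. The sketch below uses made-up observations and an assumed Gaussian prior purely for illustration.

```python
# Sketch: ML and MAP estimates of a Gaussian mean from noisy samples.
# The data, noise variance, and prior below are illustrative, not from a real problem.
samples = [2.1, 1.8, 2.4, 2.0, 1.9]   # observations, assumed ~ N(mu, sigma2)
sigma2 = 0.25                          # known measurement variance (assumed)
prior_mean, prior_var = 0.0, 1.0       # Gaussian prior on mu (assumed)

n = len(samples)
mle = sum(samples) / n                 # maximum likelihood estimate = sample mean

# MAP estimate for a Gaussian prior: precision-weighted blend of prior and data.
posterior_precision = 1.0 / prior_var + n / sigma2
map_estimate = (prior_mean / prior_var + sum(samples) / sigma2) / posterior_precision

print(f"MLE: {mle:.3f}, MAP: {map_estimate:.3f}")
```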

Coordinate Systems

To bring various data elements together into a common decision making framework, we need to know how to align the data.  Knowledge of coordinate systems and how they are used becomes important to lay a solid foundation for bringing disparate data together.

– Introduction to Coordinate Systems
– Euclidean Spaces
– Orthogonal Coordinate Systems
– Properties of Orthogonal Coordinate Systems (angle, dot product, coordinate transformations, etc.)
– Cartesian Coordinate System
– Polar Coordinate System
– Cylindrical Coordinate System
– Spherical Coordinate System
– Transformations Between Coordinate Systems
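
As a small example of moving between coordinate systems, here is a sketch of the Cartesian-to-polar transformation and its inverse in plain Python.

```python
# Sketch of a coordinate transformation: Cartesian <-> polar in the plane.
import math

def cartesian_to_polar(x, y):
    """Return (r, theta), with theta in radians."""
    return math.hypot(x, y), math.atan2(y, x)

def polar_to_cartesian(r, theta):
    return r * math.cos(theta), r * math.sin(theta)

r, theta = cartesian_to_polar(3.0, 4.0)
print(r, theta)                      # 5.0, ~0.927 rad
print(polar_to_cartesian(r, theta))  # back to (3.0, 4.0), up to rounding
```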

Linear Transformations

Once we understand coordinate systems, we can learn how and why to transform the data to get at the underlying information. This section describes how we can transform our data into other useful data products through various types of transformations, including the popular Fourier transform.

– Introduction to Linear Transformations
– Properties of Linear Transformations
– Matrix Multiplication
– Fourier Transform
– Properties of Fourier Transforms (time-frequency relationship, shift invariance, spectral properties, Parseval’s Theorem, Convolution Theorem, etc.)
– Discrete and Continuous Fourier Transforms
– Uncertainty Principle and Aliasing
– Wavelet and Other Transforms
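
As one concrete example, here is a short sketch (assuming NumPy is available) that takes the discrete Fourier transform of a sampled sinusoid and recovers its frequency; the sample rate and tone are arbitrary illustrative values.

```python
# Sketch: discrete Fourier transform of a sampled sinusoid (assumes NumPy is installed).
import numpy as np

fs = 100.0                                  # sample rate in Hz (illustrative)
t = np.arange(0, 1, 1 / fs)                 # one second of samples
signal = np.sin(2 * np.pi * 5 * t)          # a 5 Hz tone

spectrum = np.fft.rfft(signal)              # real-input DFT
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
peak = freqs[np.argmax(np.abs(spectrum))]
print(f"Dominant frequency: {peak:.1f} Hz")  # should report ~5.0 Hz
```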

Effects of Computation on Data

An often overlooked aspect of data science is the impact that the algorithms we apply have on the information we are seeking. Merely applying algorithms and computations to create analytics and other data products affects the effectiveness of our data-driven decision making. This section takes us on a journey through more advanced aspects of data science.

– Mathematical Representation of Computation
– Reversible Computations (Bijective Mapping)
– Irreversible Computations
– Impulse Response Functions
– Transformation of Probability Distributions (due to addition, subtraction, multiplication, division, arbitrary computations, etc.)
– Impacts on Decision Making
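
As a hedged illustration of how even a simple computation reshapes a probability distribution, the sketch below adds two uniform random variables and shows that the sum is no longer uniform but concentrates toward the middle (a triangular distribution); the sample size is arbitrary.

```python
# Sketch: adding two uniform random variables changes the distribution of the result.
import random

random.seed(0)
n = 100_000
sums = [random.random() + random.random() for _ in range(n)]

# Crude histogram: the sum piles up near 1.0 (triangular), unlike a uniform variable.
bins = [0] * 10
for s in sums:
    bins[min(int(s / 0.2), 9)] += 1
for i, count in enumerate(bins):
    print(f"{i * 0.2:.1f}-{(i + 1) * 0.2:.1f}: {count}")
```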

Prototype Coding / Programming

One of the key elements to data science is the willingness of practitioners to “get their hands dirty” with data.  This means being able to write programs that access, process, and visualize data in important languages in science and industry. This section takes us on a tour of these important elements.

– Introduction to Programming
– Data Types, Variables, and Functions
– Data Structures (Arrays, etc.)
– Loops, Comparisons, If-Then-Else
– Functions
– Scripting Languages vs. Compilable Languages
– SQL
– SAS
– R
– Python
– C++
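
As a minimal example of getting one’s hands dirty in Python, here is a sketch that reads a data file, computes summary statistics, and prints a crude text histogram; the file name “measurements.csv” and its “value” column are hypothetical placeholders.

```python
# Sketch: access a CSV file, compute summary statistics, and visualize with text.
# "measurements.csv" and its "value" column are hypothetical placeholders.
import csv
import statistics

with open("measurements.csv", newline="") as f:
    values = [float(row["value"]) for row in csv.DictReader(f)]

print(f"n = {len(values)}, mean = {statistics.mean(values):.2f}, "
      f"stdev = {statistics.stdev(values):.2f}")

# Crude text histogram: one '#' per observation in each of five equal-width bins.
lo, hi = min(values), max(values)
width = (hi - lo) / 5 or 1.0
for i in range(5):
    left = lo + i * width
    right = left + width
    in_bin = [v for v in values if left <= v < right or (i == 4 and v == hi)]
    print(f"{left:8.2f} | " + "#" * len(in_bin))
```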

Graph Theory

Graphs are ways to illustrate connections between different data elements, and they are important in today’s interconnected world.

– Introduction to Graph Theory
– Undirected Graphs
– Directed Graphs
– Various Graph Data Structures
– Route and Network Problems
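
To tie the list above to code, here is a small sketch of an undirected graph stored as an adjacency list, with breadth-first search used to find a fewest-hop path between two nodes; the toy graph itself is made up.

```python
# Sketch: an undirected graph as an adjacency list, with BFS for fewest-hop paths.
from collections import deque

graph = {                      # illustrative toy graph
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def shortest_path(graph, start, goal):
    """Breadth-first search: returns a fewest-hop path, or None if unreachable."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

print(shortest_path(graph, "A", "E"))   # e.g. ['A', 'B', 'D', 'E']
```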

Algorithms

Key to data science is understanding the use of algorithms to compute important data-derived metrics.  Popular data manipulation algorithms are included in this section.

– Introduction to Algorithms
– Recursive Algorithms
– Serial, Parallel, and Distributed Algorithms
– Exhaustive Search
– Divide-and-Conquer (Binary Search)
– Gradient Search
– Sorting Algorithms
– Linear Programming
– Greedy Algorithms
– Heuristic Algorithms
– Randomized Algorithms
– Shortest Path Algorithms for Graphs
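
As a quick illustration of the divide-and-conquer entry above, here is a standard binary search sketch over a sorted list.

```python
# Sketch: binary search (divide-and-conquer) over a sorted list.
def binary_search(sorted_items, target):
    """Return the index of target, or -1 if it is not present."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = [2, 3, 5, 7, 11, 13, 17]
print(binary_search(data, 11))   # 4
print(binary_search(data, 4))    # -1
```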

Machine Learning

No data science fundamentals course would be complete without exposure to machine learning.  However, it’s important to know that these techniques build upon the fundamentals described in previous sections.  This section gives practitioners an understanding of useful and popular machine learning techniques and why they are applied.

– Introduction to Machine Learning
– Linear Classifiers (Logistic Regression, Naive Bayes Classifier, Support Vector Machines)
– Decision Trees (Random Forests)
– Bayesian Networks
– Hidden Markov Models
– Expectation-Maximization
– Artificial Neural Networks and Deep Learning
– Vector Quantization
– K-Means Clustering
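
As one concrete example from the list above, here is a bare-bones k-means clustering sketch in plain Python; the points, number of clusters, and iteration count are arbitrary illustrative choices.

```python
# Sketch: a bare-bones k-means clustering loop on 2-D points (illustrative data).
import random

def kmeans(points, k, iterations=20, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)          # start from k random points
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers]
            clusters[d.index(min(d))].append(p)
        # Update step: move each center to the mean of its cluster.
        for i, cluster in enumerate(clusters):
            if cluster:
                centers[i] = (sum(p[0] for p in cluster) / len(cluster),
                              sum(p[1] for p in cluster) / len(cluster))
    return centers

points = [(1.0, 1.1), (0.9, 1.0), (1.2, 0.8), (5.0, 5.1), (5.2, 4.9), (4.8, 5.0)]
print(kmeans(points, k=2))   # two centers, one near (1, 1) and one near (5, 5)
```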

Question:  Do you have any thoughts on the fundamentals of data science? You can leave a comment below.

A Data Science Lesson from Richard Feynman


Richard Feynman is one of the greatest scientific minds, and what I love about him, aside from his brilliance, is his perspective on why we perform science.   I’ve been reading the compilation of short works of Feynman titled The Pleasure of Finding Things Out, and I recently came across a section that really hit home with me.

In the world of data science, much is made about the algorithms used to work with data, such as random forests or k-means clustering. However, I believe there is a missing component – one that deals with the fundamentals underlying data science, and that is the real science of data science.

10 Things To Know When Hiring Data Scientists

I’ve been performing data science since before there was a field called “data science”, so I’ve had the opportunity to work with and hire a lot of great people.  But if you’re trying to hire a data scientist, how do you know what to look for, and what should you consider in the interview process?


I’ve been doing what is now called “data science” since the early 1990s and have helped to hire numerous scientists and engineers over the years.  The teams I’ve had the opportunity to work with are some of the best in the world, tackling some of the most challenging problems facing our country.  These folks are also some of the smartest people I’ve ever had the opportunity to work with.

That said, not everyone is a good fit, and the discipline of data science requires important key elements.  Hiring someone into your team is incredibly important to your business, especially if you’re a small startup or building a critical internal data science team; mistakes can be expensive in both time and money.  This can be even more intimidating if you don’t have the background or experience in hiring scientists, especially someone responsible for this new discipline of working with data.

How Wolfram|Alpha Can Help You Discover Your Own Social Network

Ever wonder what your own personal network looks like?  You are likely connected to many different groups (family, friends, community, work), but do you know how they are connected?  Or are they connected at all?  Are you the glue that connects these various groups?


This is a great age we’re living in, and I’m glad to be involved with developing lots of really advanced technologies.  One of the technology areas that I’m really fascinated with has been pushed forward by Stephen Wolfram.  He created the industry standard computing environment Mathematica, which now serves as the engine behind his company’s newest creation, Wolfram|Alpha.  (I’ve written a few posts on Wolfram|Alpha in the past, and you can read them here and here).

A Review of The Signal and The Noise, A New Book by Nate Silver

Imagine a guy with glasses who used to model baseball stats and play online poker nailing the outcome of the 2012 elections. And when I say “nailing”, I mean that he correctly predicted the U.S. Presidential contest in every one of the 50 states (and nearly every U.S. Senate race, too). He even performed better than some of the most widely-used polling firms. Now imagine that he gives his thoughts on making these types of predictions. That’s exactly what Nate Silver does in his new book The Signal and the Noise.

I’ve worked in what’s now being called “data science” for nearly twenty years. The title of Silver’s book – The Signal and the Noise – presents an important and sometimes overlooked part of this science. The “signal” is what we’re looking for in the data, and the “noise” is all the stuff in the data that gets in the way of what we’re looking for.

60 Minutes: Cancer Science Fraud at Duke

60 Minutes aired a piece last night about scientific fraud at Duke University, where data was fabricated in order to support alleged discoveries in individualized cancer therapies.  As a result of these investigations, a number of previously published scientific articles have been retracted.

Less than a week ago, I highlighted an infographic from Jen Rhee about the alarming statistics in science fraud.  I’m really disheartened that such a highly visible example came up so quickly… 

Introducing Wolfram|Alpha Pro

Stephen Wolfram is doing it again.  I’m a big fan of Wolfram (you can read some of my other posts here, here, and here…), and am always intrigued by what he comes up with.  A couple of days ago, Wolfram launched his latest contribution to data science and computational understanding – Wolfram|Alpha Pro.

Here’s an overview of what the new Pro version of Wolfram|Alpha can provide:

With Wolfram|Alpha Pro, you can compute with your own data. Just input numeric or tabular data right in your browser, and Pro will automatically analyze it—effortlessly handling not just pure numbers, but also dates, places, strings, and more.

Upload 60+ types of data, sound, text, and other files to Wolfram|Alpha Pro for automatic analysis and computation. CSV, XLS, TXT, WAV, 3DS, HDF, GXL, XML…

Zoom in to see the details of any output—rendering it at a larger size and higher resolution.

Perform longer computations as a Wolfram|Alpha Pro subscriber by requesting extra time on the Wolfram|Alpha compute servers when you need it.

Licenses of prototyping and analysis software go for several thousand dollars (Matlab, IDL, even Mathematica) – student versions can be had for a few hundred dollars, but you can’t leverage data science for business purposes on student licenses.

Wolfram|Alpha Pro lets anyone with a computer, an internet connection, and a small budget leverage the power of data science.  Right now, you can get a free trial subscription, and from there, the cost is $4.99/month.  This price is introductory, but it could be seductive enough to attract a lot of users (I’ve already signed up – all you need for the free trial is an e-mail address…)

One option that I find really interesting is Wolfram’s creation of the Computable Document Format (CDF), whose interactivity lets you get dynamic versions of existing Wolfram|Alpha output as well as access to new content using interactive controls, 3D rotation, and animation.  It’s like having Wolfram|Alpha embedded in the document.

I attended a Wolfram Science Conference back in 2006 and saw the potential for such a document format even then.  A number of presenters later wrote up their work into papers, published by the journal Complex Systems.  Since many of the presentations relied on real interactivity with the data, I could see how much of the insight would be lost when people tried to write things down and limit their visualizations to simple, static graphs and figures.

I remember contacting Jean Buck at Wolfram Research, and recommending such a format.  Who knows whether that had any impact, but I’m certainly glad to see that this is finally becoming a reality.  I actually got the opportunity to meet Wolfram at the conference (he even signed a copy of his Cellular Automata and Complexity for me… – Jean was kind enough to arrange that for me – thanks, Jean!)

If you’re interested in data science and have a spare $5 this month, try out Wolfram|Alpha Pro!

Bad Science

Jen Rhee has done some great homework on bad science and put it into a cool infographic that’s worth looking at.  Here are some of the highlights from her research into bad science:

  • 1 in 3 scientists admit to using questionable research practices
  • 1 in 50 admits falsifying or fabricating data outright
  • Among biomedical researcher trainees at UC-San Diego, 81% said they would modify or fabricate results to win a grant or publish a paper

This is obviously disturbing, and worth highlighting to try to root these things out.  Science is about finding the truth – no matter what it is – and as more businesses start using data science to drive business outcomes, we need to make sure that science remains about being honest – with the truth and with ourselves.

The scientific method was developed to provide the best way to figure out what the truth is, given the data we’ve got.  It doesn’t make perfect decisions (no method can), but it’s the best method available.

Real scientists (the ones not highlighted in Jen’s research) care about what the data is actually saying and about discovering the truth.  When someone cares about something other than the truth (money, celebrity, fame, etc.), bad science is what you get.  Of course, when there are people involved, sometimes the truth isn’t the top priority.

Great infographic, Jen!  You can find it here

Data Science Tidbits

Here are some data science nuggets that I thought were interesting for a mid-January day…

The first comes from TechMASH about data science being the next big thing.  The primary nugget of note is that the supply of employees with the needed data science skills – those people who really understand how to pull relevant information out of data reliably – is going to have a tough time meeting demand.  Here’s an interesting infographic on the current disconnects – for example, while 37% of “business intelligence” professionals studied business in school, 42% of today’s “data scientists” studied computer science, engineering, or the natural sciences.  This highlights the increasing demand for students who have solid mathematics backgrounds – it’s becoming more about knowing how to pull information from data, regardless of application.

Don’t get me wrong – to be effective at applying data science, you need two things: a subject matter expert who understands what makes sense and what doesn’t, and someone who really understands data and how to pull out the information.  Sometimes both can reside within one person, but that’s rare and takes many years of training to acquire the necessary excellence in both fields.  And as the demands for data analysis grow, these two areas will likely form into distinct disciplines, creating interesting partnership opportunities.

Data science itself is still being defined, but I’m convinced it will have a huge impact in the next five years.  And while the science aspects of data are starting to take shape, the engineering aspects of data and analytics are truly in their infancy…

On the same thread, here’s a Forbes article by Tom Groenfeldt on the need for data scientists, or Excel jockeys, or whatever they will be called in the future.  For some companies, the move to “data science” is quite apparent, but for others, the current assembly of business professionals who have figured out the ins and outs of Excel spreadsheets works quite well.  This is likely a snapshot of where things are today, but I do believe that as the questions we ask of the data get more complicated, we will clearly see the need for a more rigorous, science-based approach to data wrangling…

The last tidbit is from the Wall Street Journal about the healthcare field being the next big area for Big Data.  I do think that healthcare is ripe for leveraging data, and I’ve written other posts on the subject.  One former Chief Medical Officer I spoke with mentioned that one of the big problems is just getting the data usable in the first place.  He said that, as of today, 85% of all medical records are still in paper form.  The figure seems a bit high to me, but I don’t really know how many patient records in individual doctors’ offices are still sitting in folders on shelves.

There has been a big push lately, spurred by financial support from the U.S. government, for upgrading to electronic health records (EHR).  This will help to solve the data collection problem – if you can’t get data into an electronic format, you can’t utilize information technologies to pull information out of the data.

Rise of the Algorithm

I ran across this article from the Independent today about the impacts of data algorithms, the ethics of data mining, and the future of our lives in an automated, data-crunching world.  Below is a quote from the article by Jaron Lanier, musician, computer scientist and author of the bestseller You Are Not a Gadget.

Algorithms themselves are a form of creativity. The problem is the illusion that they’re free-standing. If you start to think that information isn’t just a mask behind which people are hiding, if you forget that, you’ll pay a price for that way of thinking. It will cause you to be less creative.

"If you show me an algorithm that dehumanises, impoverishes, manipulates or spies upon people," he continues, "that same core maths can be applied differently. In every case. Take Facebook’s new Timeline feature [a diary-style way of displaying personal information]. It’s an idea that has been proposed since the 1980s [by Lanier himself]. But there are two problems with it. One, it’s owned by Facebook; what happens if Facebook goes bankrupt? Your life disappears – that’s weird. And two, it becomes fodder for advertisers to manipulate you. That’s creepy. But its underlying algorithms, if packaged in a different way, could be wonderful because they address a human cognitive need."

I think this is a really great read for anyone who’s interested in data, algorithms, and their impact on society – there’s a lot of really good stuff to take in.  You can read the entire article here