A Review of The Signal and The Noise, A New Book by Nate Silver

Imagine a guy with glasses who used to model baseball stats and play online poker nailing the outcome of the 2012 elections. And when I say “nailing”, I mean that he correctly predicted the U.S. Presidential contest in every one of the 50 states (and nearly every U.S. Senate race, too). He even performed better than some of the most widely-used polling firms. Now imagine that he gives his thoughts on making these types of predictions. That’s exactly what Nate Silver does in his new book The Signal and the Noise.

I’ve worked in what’s now being called “data science” for nearly twenty years. The title of Silver’s book – The Signal and the Noise – captures an important and sometimes overlooked distinction in this field. The “signal” is what we’re looking for in the data; the “noise” is everything else in the data that gets in the way of finding it.
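To make the distinction concrete, here’s a minimal sketch in Python (my own illustration, not from the book) of a known signal buried in noise, with a simple moving average pulling it back out:

```python
import numpy as np

# A known "signal" (a slow sine wave) buried in random "noise"
rng = np.random.default_rng(42)
t = np.linspace(0, 10, 500)
signal = np.sin(t)                          # what we're looking for
noise = rng.normal(scale=0.8, size=t.size)  # everything that gets in the way
observed = signal + noise

# One crude way to recover the signal: a moving average
window = 25
recovered = np.convolve(observed, np.ones(window) / window, mode="same")

# The smoothed series tracks the underlying sine far better than the raw data
print("RMS error, raw observations:", np.sqrt(np.mean((observed - signal) ** 2)))
print("RMS error, moving average:  ", np.sqrt(np.mean((recovered - signal) ** 2)))
```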

60 Minutes: Cancer Science Fraud at Duke

60 Minutes aired a piece last night about scientific fraud at Duke University, where data was fabricated in order to support alleged discoveries in individualized cancer therapies.  As a result of the ensuing investigations, a number of previously published scientific articles have been retracted.

Less than a week ago, I highlighted an infographic from Jen Rhee about the alarming statistics on scientific fraud.  I’m really disheartened that such a highly visible example came up so quickly…

Introducing Wolfram|Alpha Pro

Stephen Wolfram is doing it again.  I’m a big fan of Wolfram (you can read some of my other posts here, here, and here…), and I’m always intrigued by what he comes up with.  A couple of days ago, Wolfram launched his latest contribution to data science and computational understanding – Wolfram|Alpha Pro.

Here’s an overview of what the new Pro version of Wolfram|Alpha can provide:

With Wolfram|Alpha Pro, you can compute with your own data. Just input numeric or tabular data right in your browser, and Pro will automatically analyze it—effortlessly handling not just pure numbers, but also dates, places, strings, and more.

Upload 60+ types of data, sound, text, and other files to Wolfram|Alpha Pro for automatic analysis and computation. CSV, XLS, TXT, WAV, 3DS, HDF, GXL, XML…

Zoom in to see the details of any output—rendering it at a larger size and higher resolution.

Perform longer computations as a Wolfram|Alpha Pro subscriber by requesting extra time on the Wolfram|Alpha compute servers when you need it.
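To give a rough sense of what “automatic analysis” of uploaded tabular data looks like, here’s a minimal sketch in Python – my own analogy, not how Wolfram|Alpha Pro works under the hood, and the file name and columns are made up:

```python
import pandas as pd

# Hypothetical CSV with mixed column types: numbers, dates, and strings
df = pd.read_csv("sales.csv", parse_dates=["order_date"])

# Summary statistics for the numeric columns
print(df.describe())

# The non-numeric columns get handled too: date ranges and category counts
print(df["order_date"].min(), "to", df["order_date"].max())
print(df["region"].value_counts())
```

The appeal of the Pro service is that this kind of analysis happens automatically, right in the browser, without writing any code.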

Licenses for prototyping and analysis software go for several thousand dollars (Matlab, IDL, even Mathematica) – student versions can be had for a few hundred dollars, but you can’t leverage data science for business purposes on a student license.

Wolfram|Alpha Pro lets anyone with a computer, an internet connection, and a small budget leverage the power of data science.  Right now, you can get a free trial subscription, and from there, the cost is $4.99/month.  This price is introductory, but it could be seductive enough to attract a lot of users (I’ve already signed up – all you need for the free trial is an e-mail address…).

One option that I find really interesting is Wolfram’s creation of the Computable Document Format (CDF), which lets you get dynamic versions of existing Wolfram|Alpha output, as well as access to new content, using interactive controls, 3D rotation, and animation.  It’s like having Wolfram|Alpha embedded in the document.

I attended a Wolfram Science Conference back in 2006 and saw the potential for such a document format even then.  A number of presenters later wrote up their work in papers published by the journal Complex Systems.  Since many of the presentations relied on real interactivity with the data, I could see how much of the insight would be lost when people tried to write things down and limit their visualizations to simple, static graphs and figures.

I remember contacting Jean Buck at Wolfram Research, and recommending such a format.  Who knows whether that had any impact, but I’m certainly glad to see that this is finally becoming a reality.  I actually got the opportunity to meet Wolfram at the conference (he even signed a copy of his Cellular Automata and Complexity for me… – Jean was kind enough to arrange that for me – thanks, Jean!)

If you’re interested in data science and have a spare $5 this month, try out Wolfram|Alpha Pro!

Bad Science

Jen Rhee has done some great homework on bad science and put her findings into a cool infographic that’s worth looking at.  Here are some of the highlights from her research into bad science:

  • 1 in 3 scientists admit to using questionable research practices
  • 1 in 50 admits falsifying or fabricating data outright
  • Among biomedical researcher trainees at UC-San Diego, 81% said they would modify or fabricate results to win a grant or publish a paper

This is obviously disturbing, and worth highlighting to try and root these practices out.  Science is about finding the truth – no matter what it is – and as more businesses start using data science to drive business outcomes, we need to make sure that science stays honest – with the truth and with ourselves.

The scientific method was developed to provide the best way to figure out what the truth is, given the data we’ve got.  It doesn’t make perfect decisions (no method can), but it’s the best method available.

Real scientists (the ones not highlighted in Jen’s research) care about what the data is actually saying and about discovering the truth.  When someone cares about something other than the truth (money, celebrity, fame, etc.), bad science is what you get.  Of course, when there are people involved, sometimes the truth isn’t the top priority.

Great infographic, Jen!  You can find it here

Data Science Tidbits

Here are some data science nuggets that caught my eye on a mid-January day…

The first comes from TechMASH, about data science being the next big thing.  The primary nugget of note is that the supply of employees with the needed skills as data scientists – those people who really understand how to pull relevant information out of data reliably – is going to have a tough time meeting demand.  Here’s an interesting infographic on the current disconnects – for example, while 37% of “business intelligence” professionals studied business in school, 42% of today’s “data scientists” studied computer science, engineering, or the natural sciences.  This highlights the increasing demand for students who have solid mathematics backgrounds – it’s becoming more about knowing how to pull information from data, regardless of application.

Don’t get me wrong – to be effective applying data science, you need two things: a subject matter expert who understands what makes sense and what doesn’t, and someone who really understands how to pull information out of the data.  Sometimes both can reside within one person, but that’s rare, and it takes many years of training to acquire the necessary excellence in both fields.  As the demands for data analysis grow, these two areas will likely form into distinct disciplines, creating interesting partnership opportunities.

Data science itself is still being defined, but I’m convinced it will have a huge impact in the next five years.  And while the science aspects of data are starting to take shape, the engineering aspects of data and analytics are truly in their infancy…

On the same thread, here’s a Forbes article by Tom Groenfeldt on the need for data scientists, or Excel jockeys, or whatever they will be called in the future.  For some companies, the move to “data science” is quite apparent, but for others, the current assemblage of business professionals who have figured out the ins and outs of Excel spreadsheets works quite well.  This is likely a snapshot of where things are today, but I do believe that as the questions we ask of the data get more complicated, we will clearly see the need for a more rigorous, science-based discipline for data wrangling…

The last tidbit is from the Wall Street Journal, about the healthcare field being the next big area for Big Data.  I do think that healthcare is ripe for leveraging data, and I’ve written other posts on the subject.  One former Chief Medical Officer I spoke with mentioned that one of the big problems is just getting the data usable in the first place.  He said that, as of today, 85% of all medical records are still in paper form.  The figure seems a bit high to me, but I don’t really know how many patient records in individual doctors’ offices are still sitting in folders on shelves.

There has been a big push lately, spurred by financial support from the U.S. government, for upgrading to electronic health records (EHR).  This will help solve the data collection problem – if you can’t get data into an electronic format, you can’t use information technologies to pull information out of it.

Rise of the Algorithm

I ran across this article from the Independent today about the impacts of data algorithms, the ethics of data mining, and the future of our lives in an automated, data-crunching world.  Below is a quote from the article, from Jaron Lanier – musician, computer scientist, and author of the bestseller You Are Not a Gadget.

Algorithms themselves are a form of creativity. The problem is the illusion that they’re free-standing. If you start to think that information isn’t just a mask behind which people are hiding, if you forget that, you’ll pay a price for that way of thinking. It will cause you to be less creative.

If you show me an algorithm that dehumanises, impoverishes, manipulates or spies upon people,” he continues, “that same core maths can be applied differently. In every case. Take Facebook’s new Timeline feature [a diary-style way of displaying personal information]. It’s an idea that has been proposed since the 1980s [by Lanier himself]. But there are two problems with it. One, it’s owned by Facebook; what happens if Facebook goes bankrupt? Your life disappears – that’s weird. And two, it becomes fodder for advertisers to manipulate you. That’s creepy. But its underlying algorithms, if packaged in a different way, could be wonderful because they address a human cognitive need.

I think this is a really great read for anyone who’s interested in data, algorithms, and their impact on society – there’s a lot of really good stuff to take in.  You can read the entire article here

Nerd Pride Friday: ‘Wars’ Trumps ‘Trek’

It appears that Princess Leia and Captain Kirk are in a bit of interstellar combat, albeit 30+ years later.  And regardless of what Kirk has to say about it, I think that Star Wars rules!  So, the big question is, why are we even talking about this?…

Well, back in September, William Shatner, who portrayed Captain Kirk on the original Star Trek series, posted a YouTube video about how Star Trek was really better than Star Wars – how Lucas’ space opera was really “derivative” of Gene Roddenberry’s creation, which preceded it by 10-20 years.

Well, a couple of weeks ago, Carrie Fisher, who played Princess Leia in the original Star Wars trilogy of movies, had to call Shatner out, posting her own video rebutting the critique.

Below are the videos of both Shatner’s original post and Fisher’s reply.  Personally, I’d rather see the original Kirk and Leia duke it out, but I guess we’ll have to settle for their earthly thespians…

 

Forbes: Can Big Data Fix Healthcare?

This is the very question asked by Colin Hill, CEO and co-founder of GNS Healthcare, a healthcare analytics company.  Hill hopes to make the case that healthcare can benefit from what a recent McKinsey report calls “the next frontier for innovation, competition and productivity.”  

I think Hill is onto something, especially with this insight:

What will healthcare look like in the year 2020?  One thing is certain: we can’t afford its current trajectory.  Left unchecked, our $2.6 trillion in annual spending will grow to $4.6 trillion by 2020, one-fifth of GDP.  With almost 80 million Baby Boomers approaching retirement, economists forecast these trends will likely bankrupt Medicare and Medicaid in the near future.  And while healthcare reform ignites a number of important changes, alone it does not resolve our issues.  It’s critical we fix our system now.

Something’s got to give, and better decisions from better data can yield significant healthcare savings if done right.  Saving lives and dramatically reducing costs in healthcare would qualify as one of those hard problems where disciplined approaches can pay off in a big way.  Here is Hill’s post on Forbes…

Fast Company: Interview with LinkedIn’s Reid Hoffman

I ran across this Fast Company interview with LinkedIn co-founder Reid Hoffman about his new book The Start-Up of You and the need for companies to have a data strategy, or risk losing “potentially a lot” in the future.  Here’s a brief bit from the Hoffman interview:

What do companies miss out on if they don’t have a data strategy?

Potentially a lot. If you say the way our products and services are constituted, how we determine our strategy and maintain a competitive edge against other folks–if data is a very strong element of each of these, and you’re not doing anything, it’s like trying to run a business without business intelligence. I’m not sure I have a broad enough view that I would say every company needs to have a data strategy. But I would say many companies do. I certainly think that any company that is over 20 people needs to have a technology strategy, and data is essential to where technology is going.

LinkedIn is already on record as not worrying about Facebook taking over its business.  According to Hoffman, “People with advanced degrees are three times more likely to use LinkedIn.”

You can read the Fast Company interview here

Banks Predicting Your Divorce?

Are banks predicting divorces?  Well, if there’s data to help them predict such things, they may very well use it to optimize their business.

Forbes has a couple of posts that peek into businesses’ use of “big data”.  The first talks about the race to build new analytics to tackle the challenges of large volumes of data.  Here’s a snippet from Tom Groenfeldt‘s post, quoting Scott Gnau, head of research and development at Teradata:

Thought leaders in a number of industries are starting to leverage the additional analytic content from big data and combine it with what they have in large volume data stores as well. It is interesting to understand social media and consumer sentiment, but when that information is analyzed in combination with traditional consumer data it provides new, rich intelligence helping companies to identify trends and react to immediate business conditions.

According to another Forbes article, a number of studies show that companies that characterize themselves as “data driven” are the best corporate performers.  When we say “data driven”, we mean in how the company operates, not necessarily in what it produces as technology.  Top-performing companies are determined to use the data they have (especially about themselves) to improve what they do and how they do it.

Also, banks are on the lookout for changes that could affect how they do business with their customers, and of course, their bottom line:

Banks, for example, worry about their customers divorcing, because divorce causes a change in credit-worthiness. No problem. They can now see a divorce coming before the couple does. All from the data.
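To illustrate the kind of modeling that quote implies, here’s a minimal sketch in Python – entirely synthetic data and made-up features, not any bank’s actual method – of fitting a classifier that scores the likelihood of a life event from changes in account behavior:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic customer features (all made up): e.g., change in joint-account
# spending, number of new individual accounts, change in billing addresses
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))

# Synthetic "life event" label, loosely driven by those features
logits = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2]
y = (rng.random(5000) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Score each held-out customer with a probability of the event
probs = model.predict_proba(X_test)[:, 1]
print("Held-out accuracy:", round(model.score(X_test, y_test), 3))
print("Highest-risk score in the test set:", round(float(probs.max()), 3))
```

With real data, of course, the hard part is deciding which behavioral signals to feed the model in the first place.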

The “Computer Science or Data Science” panel at Techonomy 2011 in Tucson, AZ this week explored how data science has taken its place next to computer science as a fundamental element of information technology.  New technologies are coming out seemingly every day, not only to handle big data, but to help us understand how to extract relevant information from the ocean of data we’re swimming in.

A company in Silicon Valley, ai-one, announced today that they have “a breakthrough method to graphically represent knowledge enables software developers to easily build intelligent agents such as Apple’s SIRI and IBM Watson”.  The technology, ai-Fingerprint, is geared toward natural language processing, allowing developers to create new technologies that use natural language as input data.

Apple’s Siri and IBM’s Watson are definitely heading in the right direction for this type of technology.  I just bought an iPhone 4S and I’ve tested Siri out a number of times.  While Siri doesn’t get everything right (it keeps thinking my name is “Nick” when I say “Mic”), it does get more right than I expected.  I was able to send texts and e-mails to people without keystrokes, and I took some notes using the voice feature, getting nearly every word correct.  Pretty amazing stuff!…

Watson is the supercomputer that beat two longtime Jeopardy! champions, and it uses a technology approach that looks for the best answer to the question being asked (or, in this case, the best question for the answer being presented – it is Jeopardy!, after all…).  These are definitely the models to emulate; that said, ai-one’s announcement is a press release, so until we see results, let’s chalk this up for the moment as good marketing…