Why Lies Spread Faster than the Truth

With information coming at us faster than ever, how do we know what’s true and what’s not – or even worse, what’s fake?

Figuring out what’s true and false is tough, and understanding what to do about it can be even tougher.  But we should recognize one key difference between lies and the truth.

Lies spread faster.  Here’s why.

The Advent of Analytics Engineering

Data science has exploded in recent years, and whether the focus is machine learning, artificial intelligence, or citizen data science, the discipline is creating very high expectations.

There is indeed much promise for data science, where predictive models and decision engines can target skin cancer in patient imagery, presciently recommend a new product that piques your interest, or power your self-driving car to evade a potential accident.

However, that promise takes much effort to realize.  It requires a lot of work and brand-new engineering disciplines that are not yet mature or even widely employed.  As recognition of the value of data science grows, and the generation of data increases at exponential rates, this engineering effort is just starting and will soon grow beyond its adolescence.

This is why we are at the advent of a new engineering discipline that can truly realize the promise of data science – a discipline that I call “analytics engineering”.

How Wolfram|Alpha Can Help You Discover Your Own Social Network

Ever wonder what your own personal network looks like?  You are likely connected to many different groups (family, friends, community, work), but do you know how they are connected?  Or are they connected at all?  Are you the glue that connects these various groups?
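One way to make that “glue” question precise is to treat your contacts as a graph and ask whether removing yourself disconnects it – in graph terms, whether you are an articulation point.  Here’s a minimal plain-Python sketch of the idea; the people and connections below are invented purely for illustration:

```python
from collections import deque

# Hypothetical toy network: nodes are people, edges are who knows whom.
# Here "you" is the only link between otherwise-separate circles.
edges = [
    ("you", "mom"), ("mom", "dad"),          # family
    ("you", "boss"), ("boss", "coworker"),   # work
    ("you", "alice"), ("alice", "bob"),      # friends
]

def neighbors(edges):
    """Build an adjacency map from an edge list."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return adj

def is_connected(adj, skip=None):
    """BFS over the graph, optionally ignoring one node."""
    nodes = [n for n in adj if n != skip]
    if not nodes:
        return True
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        for nxt in adj[queue.popleft()]:
            if nxt != skip and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen) == len(nodes)

adj = neighbors(edges)
print(is_connected(adj))              # True: everyone is reachable
print(is_connected(adj, skip="you"))  # False: without "you", the groups split apart
```

If removing yourself disconnects the graph, you really are the glue holding those groups together.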


This is a great age we’re living in, and I’m glad to be involved with developing lots of really advanced technologies.  One of the technology areas that I’m really fascinated with has been pushed forward by Stephen Wolfram.  He created the industry standard computing environment Mathematica, which now serves as the engine behind his company’s newest creation, Wolfram|Alpha.  (I’ve written a few posts on Wolfram|Alpha in the past, and you can read them here and here).

How to Make Your Own Custom URL Shortener

This is a technical post about what I’ve discovered in creating my own custom URL shortener.  Hopefully, you can learn to do the same things I did, and my experience will save you some headaches if it’s something you’re interested in trying.


On my website, I focus a lot on decisions and discovery.  I love finding out how the world works and then applying what I’ve learned to make better decisions, and I try to share what I can along the way.  I hope that it helps others.

Introducing Wolfram|Alpha Pro

Stephen Wolfram is doing it again.  I’m a big fan of Wolfram (you can read some of my other posts here, here, and here…), and I’m always intrigued by what he comes up with.  A couple of days ago, Wolfram launched his latest contribution to data science and computational understanding – Wolfram|Alpha Pro.

Here’s an overview of what the new Pro version of Wolfram|Alpha can provide:

With Wolfram|Alpha Pro, you can compute with your own data. Just input numeric or tabular data right in your browser, and Pro will automatically analyze it—effortlessly handling not just pure numbers, but also dates, places, strings, and more.

Upload 60+ types of data, sound, text, and other files to Wolfram|Alpha Pro for automatic analysis and computation. CSV, XLS, TXT, WAV, 3DS, HDF, GXL, XML…

Zoom in to see the details of any output—rendering it at a larger size and higher resolution.

Perform longer computations as a Wolfram|Alpha Pro subscriber by requesting extra time on the Wolfram|Alpha compute servers when you need it.

Licenses for prototyping and analysis software (Matlab, IDL, even Mathematica) go for several thousand dollars – student versions can be had for a few hundred dollars, but student licenses don’t let you leverage data science for business purposes.

Wolfram|Alpha Pro lets anyone with a computer, an internet connection, and a small budget leverage the power of data science.  Right now, you can get a free trial subscription, and from there, the cost is $4.99/month.  This price is introductory, but it could be seductive enough to attract a lot of users (I’ve already signed up – all you need for the free trial is an e-mail address…)

One option that I find really interesting is Wolfram’s creation of the Computable Document Format (CDF), which lets you get dynamic versions of existing Wolfram|Alpha output as well as access new content through interactive controls, 3D rotation, and animation.  It’s like having Wolfram|Alpha embedded in the document.

I attended a Wolfram Science Conference back in 2006 and saw the potential for such a document format even then.  A number of presenters later wrote up their work into papers published by the journal Complex Systems.  Since many of the presentations relied on real interactivity with the data, I could see how much of the insight would be lost when people tried to write things down and limit their visualizations to simple, static graphs and figures.

I remember contacting Jean Buck at Wolfram Research, and recommending such a format.  Who knows whether that had any impact, but I’m certainly glad to see that this is finally becoming a reality.  I actually got the opportunity to meet Wolfram at the conference (he even signed a copy of his Cellular Automata and Complexity for me… – Jean was kind enough to arrange that for me – thanks, Jean!)

If you’re interested in data science and have a spare $5 this month, try out Wolfram|Alpha Pro!

Gartner Magic Quadrant Report on Big Data Integration Tools

Based upon their Magic Quadrant analysis of data integration tools, Gartner rates Informatica Corp. and IBM as the top software vendors in the space.

Gartner uses a Magic Quadrant to rate companies as leaders, challengers, niche players and visionaries based on several criteria including “completeness of vision” and “ability to execute.”  From Gartner’s website:

  • Leaders execute well against their current vision and are well positioned for tomorrow.
  • Visionaries understand where the market is going or have a vision for changing market rules, but do not yet execute well.
  • Niche Players focus successfully on a small segment, or are unfocused and do not out-innovate or outperform others.
  • Challengers execute well today or may dominate a large segment, but do not demonstrate an understanding of market direction.

A post by Mark Brunelli, Senior News Editor at SearchDataManagement, has a more detailed analysis of the Gartner report.  Here’s what Brunelli wrote, detailing some of the thoughts of Ted Friedman, a Gartner vice president, information management analyst, and co-author of the report:

“You’re hearing a lot about big data and analytics around big data,” Friedman said. “To do that kind of stuff you’ve got to collect the data that you want to analyze and put it somewhere. [That] in effect is a job for data integration tools.”

It does seem that the main focus right now in this space is on data handling and data management.  Companies are doing a lot of work to create data visualization tools that help extract insight from data, but as the problems get harder, better analytics approaches will need to be brought to bear.  The real key over the next few years will be the smart analysis of all this data – turning it into reliable, actionable information.

WSJ: The King of Big Data

On the Wall Street Journal website today, Ben Rooney posts an interview with Hortonworks CEO Eric Baldeschwieler, co-creator of Hadoop.  For those new to the big data space: the Hadoop project develops open-source software for reliable, scalable, distributed computing, and Hortonworks is focused on accelerating Hadoop’s development and adoption.

In Rooney’s interview, Baldeschwieler describes the problem Hadoop is designed to solve:

At its base, it is just a way to take bulk data and storage in a way that is cheap and replicated and can pull up data very, very fast.

Hadoop is at one level much simpler than other databases. It has two primary components; a storage layer that lets you combine the local disks of a whole bunch of commodity computers, cheap computers. It lets you combine that into a shared file system, a way to store data without worrying which computer it is on. What that means is you can use cheap computers. That lets you strip a lot of cost out of the hardware layer.

The thing that people don’t appreciate when you drop a lower price point is that it is not about saving money, it is about being able to do an order of magnitude more on the same budget. That is revolutionary. You can score five to 10 times more data and you can process it in ways that you can’t imagine. A lot of the innovation it opens up is just the speed of innovation. You get to an answer faster, you move into production faster, you make revenue faster.
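The model Baldeschwieler describes – spread cheap storage across many machines, then push the computation out to the data – is usually illustrated with MapReduce’s canonical word count.  Here’s a hedged, in-memory Python sketch of the map/shuffle/reduce idea; it’s a toy stand-in for the programming model, not Hadoop’s actual API:

```python
from collections import defaultdict
from itertools import chain

# Toy corpus standing in for input splits spread across HDFS data nodes.
docs = ["big data big ideas", "data moves fast"]

def map_phase(doc):
    # Mapper: emit a (word, 1) pair for every word in one input split.
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reducer: combine each key's values into a final count.
    return {word: sum(counts) for word, counts in grouped.items()}

counts = reduce_phase(shuffle(chain.from_iterable(map_phase(d) for d in docs)))
print(counts)  # {'big': 2, 'data': 2, 'ideas': 1, 'moves': 1, 'fast': 1}
```

In real Hadoop, the mappers run on the machines that already hold the data and the shuffle happens over the network – that locality is what makes the cheap-hardware model pay off.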

You can read Rooney’s WSJ interview here.

Popular Science: Wolfram on Big Data

I’m a big fan of Stephen Wolfram and his efforts in building Mathematica and pushing forward his approach to scientific discovery, A New Kind of Science.  In a recent post, Popular Science editor Mark Jannot talks to Wolfram about big data, human understanding, and the origin of the universe.

Here’s just one back-and-forth between Jannot and Wolfram:

Jannot:  A couple years ago at TED, Tim Berners-Lee led the audience in a chant of “More raw data now!” so he’s out there trying to create the data Web. And your project in Wolfram Alpha is not to deliver raw data but to deliver the interface that allows for raw data to turn into meaning.

Wolfram:  Yes, what we’re trying to do—as far as I’m concerned, the thing that’s great about all this data is that it’s possible to answer lots of questions about the world and to make predictions about things that might happen in the world, and so on. But possible how? Possible for an expert to go in and say, “Well, gosh, we have this data, we know what the weather was on such-and-such a date, we know what the economic conditions at such-and-such place were, so now we can go and figure something out from that.” But the thing that I’m interested in is, Can one just walk up to a computer and basically be able to say, “OK, answer me this question.”

This part of the Q&A is particularly interesting, since it highlights a difference in what some want from technology.  Berners-Lee seems to want more “raw data”, while Wolfram is highlighting that data isn’t really important unless you can turn it into actionable information.  Wolfram|Alpha does just this – the technology uses Wolfram’s understanding of computation (built as part of his wildly successful Mathematica product line) to let us answer questions.

It’s an incredibly rich article – one worth reading (1) if you’re interested in data and where it’s taking us, and (2) if you’re interested in Wolfram and his take on science and technology.  I’m interested in both, so I think it’s well worth highlighting…

Here’s the Popular Science article, and another post from Wolfram|Alpha that highlights the history of computable knowledge (you can even order a poster of the timeline here…).  I’ve had a number of other posts on Wolfram and his scientific approach, which might be worth looking into as well…

Upcoming Data Science Conferences

Looking ahead, there are a number of interesting upcoming conferences within the “big data” and “data science” fields, so I thought I’d list a few in this post.

First, Strata is probably one of the biggest and fastest-growing conferences in the field, and they’ll be holding their next conference in Santa Clara, CA, starting on February 28, 2012.  Featured speakers include danah boyd from Microsoft Research, Hal Varian, Chief Economist at Google, and Pete Warden, CTO of Jetpac.  They hold online conferences as well – here’s more info on the December 7 event from a previous post.

Next, Predictive Analytics World hosts a number of conferences around the world and for differing industries.  Their next conference is scheduled for November 30 in London, UK, in conjunction with the Marketing Optimization Summit and Conversion Conference as part of Data Driven Business Week.  PAW held a recent conference in New York this past October and will be hosting another one in San Francisco in March of 2012.

Also, the Big Data Summit team announced today the session details for its upcoming technology event, November 8-10, 2011 in Miami, Florida.  The Summit gathers C-level executives involved in data storage, data management, and data analysis to discuss how companies can effectively manage, protect, and leverage the growing amounts of data in the enterprise.

Nerd Pride Friday: Triumph of the Nerds

This Friday, I thought I’d highlight one of my favorite and nerdiest documentaries of the technology industry.  Given the presence of the late Steve Jobs in the media lately – his loss to cancer, the new Walter Isaacson biography, and Apple’s re-creation of personal computing – I thought it would be nice to reflect on Jobs’ first major impact on our technological lives.

Fifteen years ago, PBS created a documentary called Triumph of the Nerds:  An Irreverent History of the PC Industry.  It was hosted by Bob Cringely, who at the time wrote a column for InfoWorld about the goings-on of Silicon Valley and now writes a weekly column, I, Cringely.  In it, Cringely humorously and effectively describes how Steve Jobs and Bill Gates created the PC industry as we know it.

I was always impressed with this documentary and the impact its subjects (especially Jobs) had on our lives through their business and technology pursuits.  I personally think it’s amazing and worth your time to watch.  They made another documentary about the history of the Internet, Nerds 2.0.1 – that second one was done in 1998, when the Internet as we know it was probably only three years old or so…