The Netra Blog

IBM’s artificial intelligence was poised to transform industry. Why didn't it succeed?

Written by amit-phansalker | Jan 28, 2022 5:00:00 AM

In 2011, riding high on Watson’s stunning win on Jeopardy!, IBM leadership proclaimed that their AI technology could solve any challenge open to transformation through Artificial Intelligence (AI). Yet while Watson introduced burgeoning AI to American households, it was clouded by underlying challenges. Despite plowing $4B into Watson to fix its numerous shortcomings, IBM announced last Friday that it was selling its Watson Health division to the PE firm Francisco Partners for $1B.

Three key factors led to Watson’s limited success:

1) IBM’s Artificial Intelligence capabilities were early and limited. The Watson system did not evolve fast enough to incorporate the more advanced AI capabilities now emerging.

Based on interviews in The New York Times, senior IBM leaders were completely starry-eyed about Watson’s potential and promise. AI, however, was still new, and IBM’s leadership may not have understood the extent of Watson’s limitations. Given that this was an emerging technology, the principles of AI-based solution development were still a new concept for both IBM’s leadership and the public. IBM’s leaders touted how their “moonshot” technology would soon revolutionize healthcare and breathe new life into IBM. But there was no magic switch that would translate Watson’s Jeopardy! success to a range of applications overnight. Watson was trained specifically to win a quiz show, and even that feat required years of training to solve a relatively narrow objective.

2) With a basic level of AI and an unclear deployment strategy, IBM's expectations that Watson would quickly make breakthroughs in healthcare were unrealistic.

IBM set its sights high on revolutionizing healthcare through an early partnership with the MD Anderson Cancer Center in Houston. The engagement quickly became snarled.

Most AI models must “learn” from massive amounts of pre-existing, reliable data before they can make suggestions. For example, to diagnose a condition such as lung cancer, the model needs an extensive database of historical data (a “sample set”) on individuals who both have and do not have lung cancer. The model also needs a list of contributing criteria that could influence a diagnosis (e.g., smoking status, age, having lived with a smoker, diet, medical complaints, family history). Once these data points are provided, AI models are adept at finding connections among them and producing powerful estimates of the early onset of medical conditions in future patients, based on each patient’s own criteria.
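To make the mechanics concrete, here is a minimal, hypothetical sketch of this kind of supervised learning: a tiny hand-rolled logistic-regression model fit to an invented sample set of patient criteria. All features, data, and numbers below are made up for illustration and are not drawn from any real clinical system.

```python
import math

# Hypothetical sample set. Each patient is a tuple of binary criteria
# (smoker, age_over_60, lived_with_smoker) paired with a diagnosis label.
samples = [
    ((1, 1, 1), 1), ((1, 1, 0), 1), ((1, 0, 1), 1), ((0, 1, 1), 1),
    ((0, 0, 0), 0), ((0, 1, 0), 0), ((0, 0, 1), 0), ((1, 0, 0), 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# "Learning": fit a tiny logistic-regression model by gradient descent,
# nudging the weights toward whatever separates positives from negatives.
weights = [0.0, 0.0, 0.0]
bias = 0.0
for _ in range(2000):
    for features, label in samples:
        pred = sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)
        err = pred - label
        weights = [w - 0.1 * err * x for w, x in zip(weights, features)]
        bias -= 0.1 * err

def risk(features):
    """Estimated probability that a new patient has the condition."""
    return sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)

# A high-risk profile should now score higher than a low-risk one.
print(risk((1, 1, 1)) > risk((0, 0, 0)))  # prints True
```

The point is not the model itself but the dependency: without a reliable, consistently labeled sample set, there is nothing for the weights to learn from — which is exactly where healthcare data falls short, as described below.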

However, healthcare data and patient record systems are notoriously fraught with problems and lack common structure and labeling taxonomies. For example, tracking a single patient’s information across electronic records, paper records, and multiple doctors’ notes is difficult within a single visit, and becomes more complicated still when assembling a complete view of a patient’s medical history. This common-taxonomy problem plagues many of today’s AI efforts. Further, hospital IT systems are notoriously clunky and require deep domain expertise, and electronic health record systems tightly control access to patient data. These obstacles, coupled with privacy laws that prevented hospital systems from letting AI platforms such as Watson extract patient data directly, created a challenging learning environment.

While IBM’s ambitions to tackle healthcare were noble, in hindsight the company chose to test its new technology on one of the most complex corners of a minefield-ridden industry. As a result, Watson’s technology did not deliver as hoped.

3) Lofty Expectations: AI is for automation; AI is for transformation; AI delivers over time, not instantly

IBM presented Watson as a shiny new object that was set to rapidly change the face of healthcare, but its ambitions were hampered by a few realities:

  • AI technology takes time to mature and needs enough data to separate signal from noise
  • Watson’s technology missed a critical capability that more advanced offerings from emerging AI firms such as Netra now provide: active learning, in which the model flags its least certain cases for human review.
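That active-learning idea can be illustrated with a simplified, hypothetical loop — the data, the oracle, and the nearest-neighbor stand-in model below are all invented. Instead of labeling everything up front, the model asks a human reviewer to label only the examples it is least certain about.

```python
# Hypothetical sketch of an active-learning loop: the model requests labels
# only for the examples it is least certain about. All data is invented.

# Oracle standing in for a human reviewer: label is 1 when value > 0.5.
oracle = lambda x: 1 if x > 0.5 else 0

labeled = [(0.1, 0), (0.9, 1)]           # tiny seed set of (value, label)
unlabeled = [i / 20 for i in range(21)]  # pool of examples awaiting labels

def predict(x):
    # Nearest-labeled-neighbor classifier: a simple stand-in for a real model.
    return min(labeled, key=lambda pair: abs(pair[0] - x))[1]

def uncertainty(x):
    # Distance to the nearest labeled point: far away means less certain.
    return min(abs(pair[0] - x) for pair in labeled)

for _ in range(5):
    # Query the single most uncertain example instead of labeling everything.
    query = max(unlabeled, key=uncertainty)
    unlabeled.remove(query)
    labeled.append((query, oracle(query)))

print(len(labeled))  # 7: only five human labels were requested
```

The design point is economy: the human labels five carefully chosen examples rather than the whole pool of twenty-one, which is what makes learning feasible in domains (like healthcare) where expert labels are scarce and expensive.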

Rather than accepting its limitations and pursuing new paths for its technology, IBM let Watson’s ambitions exceed its capabilities, and the platform was unable to deliver. At the conclusion of the MD Anderson engagement, the hospital’s auditors concluded that the project had cost $62 million and failed to accomplish its objectives.

How will this impact AI’s future?

To its credit, IBM ushered in an important era of AI, made the emerging technology a household name, and delivered some of the first mainstream, creepy-free messaging on how AI can revolutionize and transform the human experience. But, unfortunately, IBM’s dreams for Watson stumbled.

Shane Greenstein, a Harvard Business School professor, told the New York Times, “they chose the highest bar possible, real-time cancer diagnosis, with an immature technology.” Had the team been armed with a well-thought-out strategy, Watson could have succeeded by:

  • First, approaching industries that are easier to break into, with more accessible data
  • Second, targeting applications that are well suited to computer intervention rather than human review, such as radiology image interpretation
  • Third, incorporating extensive labeling and classification like that of Netra’s offering

As AI matures and learning engines such as Netra penetrate the mainstream, advanced AI will finally achieve the ambitions Watson set out. Technologies such as Netra will make automation easier in the marketplace, reduce the errors that humans are prone to, and distill mountains of data into the critical nuggets of information that scientists and business leaders have long dreamed of unearthing. AI, however, will not eliminate the need for human intervention. The market will soon reflect a composite approach that pairs human judgment with advanced AI.