Visualizing Machine Learning: How Do We Humanize The Intelligence?

These are exciting times for data science professionals. With the rapid advances being made in analytics, specifically around machine learning, deep learning and AI, researchers and practitioners around the world have started applying these technologies to deep, human-impacting applications.

The Age of Advanced Analytics

Social talk is replete with conversations that have shifted gears: from self-driving cars to flying vehicles, from smart machines to robots already collaborating quietly in our homes, and from chatbots to conversational AI that is becoming omnipresent, contextual and indistinguishable from a human response.

Many are proud of the fact that, as a race, humans have managed to create real intelligence, one that hasn’t evolved naturally. With advances across disciplines, we have finally cracked several challenges in simulating human intelligence and are starting to surpass it in some areas.

The Great Analytics Divide: Salvation or Sorcery?

Interestingly, this is where public opinion across the world splits. At the other end of this spectrum, we have people who question the very existence and intent of AI, with deep debates not just on the trustworthiness of advanced analytics, but also on its very utility.

At some level, there is a sense of fear gripping most consumers. There is an eerie uncertainty around the optimistic spin given to the areas of potential application, with questions raised about feasibility, scale and impact.

There’s a connection between the advances that are made in technology and the sense of primitive fear people develop in response to it. — Don DeLillo

Over the decades, rapid advances in technology have always been accompanied by an escalation of fear. Earlier technology, however complex, was perfectly rational. With deep learning and AI we can no longer claim this, since these systems now operate beyond the realm of human logic and understanding.

The Challenge of Analytics Consumption

Lack of basic awareness is the biggest challenge facing the analytics discipline today, ranking even higher than the ethical dilemmas around its adoption. As users increasingly flounder in their understanding of a new technology, the noise around its perceived utility grows louder and questions about its adoption multiply.

While large-scale adoption of advanced analytics by end-consumers is still playing out incrementally, this is already a clear and present challenge for enterprises. Despite million-dollar investments in data science to glean intelligent insights, businesses face huge resistance from within.

The biggest challenge in enterprise projects is not model engineering or accuracy, but the on-ground adoption of analytics applications and the implementation of recommendations provided by these intelligent models, more so when those recommendations run counter to industry heuristics and gut feel.

Model accuracies from a forecasting project, where neural networks outshine other models

Models with high accuracy and low acceptance

Over years of implementing advanced analytics engagements, we have come across many instances where outstanding but complex black-box models meet the engagement objective with exceptional accuracy, yet fail to meet human acceptance standards.

While black-box models like neural networks bring a significant jump in accuracy, it often comes at the cost of explainability and human comprehension. Compare this with the ease of explaining a decision tree, with its intuitive if-then-else conditions and simple thresholds.
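
To make this concrete, here is a minimal sketch of the trade-off, assuming scikit-learn and its bundled Iris dataset as stand-ins for a real engagement’s data and models:

```python
# Hedged sketch: compare a black-box neural net with a readable decision tree.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=42)

# Black-box model: often more accurate, but its weights defy plain reading.
nn = MLPClassifier(max_iter=2000, random_state=42).fit(X_train, y_train)
print("Neural net accuracy:", nn.score(X_test, y_test))

# White-box model: its logic prints as if-then-else rules with thresholds.
tree = DecisionTreeClassifier(max_depth=3, random_state=42).fit(X_train, y_train)
print("Decision tree accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=data.feature_names))
```

Printing the tree’s rules is often all it takes for a domain expert to validate, or challenge, the model’s logic; no equivalent one-liner exists for the neural network.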

In practice, however, the improvements in business benefits made possible by complex, black-box models are too alluring to ignore. As data science practitioners, our responsibility is to bridge this divide, enable consumption of machine learning insights and gently nudge users towards prescriptive actions.

If you can’t explain it simply, you don’t understand it well enough. — Albert Einstein

A Visual Framework for Machine Learning

Charts convey information far more powerfully than a table of numbers, and a visual framework can be particularly effective in humanizing the intelligence from machine learning.

Here’s a look at the four key elements of this visual framework, which can promote easy comprehension and help demystify advanced analytics models.

A visual framework for humanizing Machine Learning

Information Design:

Visual storytelling with data is the foremost approach: presenting not just a table of numbers but, more importantly, the statistical results and interpretations of an algorithm’s output, so that users can arrive at prescriptive actions.

A standardized, user-centric approach to information design, with the right navigation workflow, pertinent representations and relevant visual design, is the right place to start on this journey.

Demonstration of how a static visual presentation can encapsulate and illustrate model results well
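
As an illustrative sketch (assuming matplotlib, with made-up numbers in place of real model output), the difference between dumping data and designing information can be as small as annotating the takeaway directly on the chart:

```python
# Hedged sketch of information design: the numbers below are illustrative.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
actuals = [120, 132, 128, 150, 170, 190]
forecast = [118, 130, 135, 148, 165, 200]
x = range(len(months))

fig, ax = plt.subplots()
ax.plot(x, actuals, marker="o", label="Actual sales")
ax.plot(x, forecast, marker="o", linestyle="--", label="Model forecast")

# The story, not just the numbers: call out the insight a user should act on.
ax.annotate("Forecast overshoots in Jun:\nreview promotion assumptions",
            xy=(5, 200), xytext=(1.5, 195),
            arrowprops=dict(arrowstyle="->"))
ax.set_xticks(x)
ax.set_xticklabels(months)
ax.set_ylabel("Units (thousands)")
ax.legend()
plt.show()
```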

Adaptive Abstraction:

The most powerful way to gain insight into a system is by moving across levels of abstraction. Data scientists and designers instinctively move up and down these levels to glean insights and lay out solutions for users.

It’s imperative to empower users with some of that fluidity, so they can take in the bird’s-eye view (abstract summaries), digest the ground-level detail, and navigate the data dynamically, adapted to context and user expertise.

Bret Victor’s ladder of abstraction is a useful reference, where he demonstrates steering around a problem by abstracting over time, algorithm and data. Applying this in a contextual, domain-driven way can demystify the analytical solution by shining a light on the underlying design approach.

Bret Victor’s ladder of abstraction, with a toy car example as a walk-through
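
A minimal sketch of this fluidity, assuming pandas and a synthetic daily sales series in place of real data, shows how the same numbers can be served at three rungs of the ladder:

```python
# Hedged sketch: one synthetic series, three levels of abstraction.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
days = pd.date_range("2024-01-01", periods=365, freq="D")
sales = pd.Series(100 + rng.normal(0, 10, size=365).cumsum(), index=days)

# Ground-level detail: one row per day, for the expert user.
print(sales.head())

# One rung up: monthly averages, a manager's working view.
print(sales.groupby(sales.index.month).mean())

# Top rung: quarterly averages, the bird's-eye summary for an executive.
print(sales.groupby(sales.index.quarter).mean())
```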

Model Unraveling:

Equally important in the journey of onboarding users is giving them a sneak peek into the model internals, albeit in a non-overwhelming way. While how algorithms like neural networks learn to map data to the desired output still befuddles us, research is progressing fast in this area.

There are several early attempts at unpacking the internal sequence of steps in deep learning, particularly in areas such as classification and image recognition. If we can unzip the model and enable simple traceability to the output, while keeping the user safe from toxic statistical jargon, this can go a long way in making people appreciate the beauty of these black-box models.

A powerful methodology for classification models; Distill has set up a prize for outstanding work in this area
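
One model-agnostic way to offer that sneak peek, sketched below with scikit-learn’s permutation importance (the dataset and model here are stand-ins): shuffle one input at a time and watch how much the black box’s accuracy drops.

```python
# Hedged sketch: tracing a black-box model's output back to its inputs.
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = MLPClassifier(max_iter=2000, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn; the bigger the accuracy drop, the more the
# model leans on that feature. No statistical jargon needed to read this.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```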

User Interactivity:

User interactivity can be the powerful glue that stitches together the other elements of this framework. It enables a visual storytelling interface that promotes meaningful user journeys across the levels of abstraction, helping users understand the salience and operation of a machine learning model.

By making all user interactions consistent, perceivable, learnable and predictable, the entire experience can be turned around from doubt-inducing to meaningful and awe-inspiring.

Case study: What-if modeling for prescriptive action, enabled in a visual causal analysis
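
A minimal sketch of such what-if interactivity, assuming a Jupyter notebook with ipywidgets and a stand-in regression model (the three driver names are hypothetical):

```python
# Hedged sketch: sliders that re-score a model as the user drags them.
from ipywidgets import interact
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

# Stand-in model with three drivers (think price, promo spend, distribution).
X, y = make_regression(n_samples=200, n_features=3, noise=5, random_state=1)
model = LinearRegression().fit(X, y)

def what_if(price=0.0, promo=0.0, distribution=0.0):
    """Re-score the model for the scenario the user has dialed in."""
    prediction = model.predict([[price, promo, distribution]])[0]
    print(f"Predicted outcome: {prediction:.1f}")

# Each keyword becomes a slider; every drag reruns the prediction instantly.
interact(what_if, price=(-3.0, 3.0), promo=(-3.0, 3.0),
         distribution=(-3.0, 3.0))
```

Letting users probe the model’s behaviour themselves, scenario by scenario, is often what converts doubt into trust.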

Summary

While the data science and AI disciplines go through exciting and exhilarating advances, it is important to keep the user’s expectations and experiences in perspective. This is critical, since a sizeable segment of AI’s target users is being alienated by a deepening disconnect and distrust.

It doesn’t take a big dash of imagination to bridge this gap. Many of the enablers needed to build user trust and promote understanding are already available in our toolkit, and research in the field is quickly unraveling the rest.

What’s needed is an acknowledgment of this divide, and a conscious effort to address it by adopting the visual framework above, with its four key elements, while implementing machine learning solutions.

Ganes Kesari

Ganes Kesari is a Co-founder of Gramener, a pioneering organization in the Data Analytics and Visualisation space. He has played several startup leadership roles to scale Gramener, with a demonstrated ability to conceive strategy and execute on the ground.
