Welcome to the Artificial Intelligence Quotient

ShadowBox Solutions for AI: Human-Centered AI

AI has made amazing advances, yet it is used far less than imagined. Why? Frankly, because AI is not well understood (and is sometimes feared) by many people. That is why we have developed a toolkit to give these future users some perspective. Why spend big money on a tool that never makes it in the field because people reject it? That is a lot of time and frustration for little reward.

 

So what is the solution? Let us share a little about our Human-Centered AI tools. While we are not the first to talk about Human-Centered AI, we are among the first to provide practical considerations and solutions for system developers and users.

 

Why do we do this? Because we value skillful workers. As a company that specializes in studying expertise, we know the value it brings to the workforce. We know how expertise is developed and how it is stifled. Our broad approach to these workplace technologies is therefore to identify leverage points that help users better understand them and place appropriate trust in them.

The AIQ Framework

 

Our services for improving AI systems center on four main goals:

  • Increase the rate of adoption of AI systems
  • Enhance the explainability of AI technologies
  • Improve joint human-AI performance
  • Help people use these systems to make quicker decisions, understand situations more effectively, detect problems sooner, and coordinate better with their teams

The Mental Model Matrix (MMM) is a framework designed to help you explore your team’s or users’ beliefs about their own capabilities and limitations regarding an AI technology. The MMM can be used to identify potential gaps in understanding and align users’ cognitive models with what the AI system actually does. Its goal is to elicit insights in the following areas:

 

            Capabilities                    Limitations
System      How the system works:           How the system fails:
User        How to make the system work:    How the user gets confused:
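As a hypothetical sketch, the four quadrant prompts above can be treated as a checklist during elicitation sessions; the example answers and the `find_gaps` helper below are invented for illustration, not part of the MMM itself:

```python
# The four MMM quadrants, keyed by (row, column), with their prompts.
MMM_QUADRANTS = {
    ("System", "Capabilities"): "How the system works",
    ("System", "Limitations"): "How the system fails",
    ("User", "Capabilities"): "How to make the system work",
    ("User", "Limitations"): "How the user gets confused",
}

def find_gaps(answers):
    """Return the prompts for quadrants with no elicited insights yet."""
    return [prompt for quadrant, prompt in MMM_QUADRANTS.items()
            if not answers.get(quadrant)]

# Hypothetical answers gathered from one interview session.
answers = {
    ("System", "Capabilities"): ["Ranks alerts by model confidence"],
    ("User", "Limitations"): ["Users over-trust high-confidence alerts"],
}

for prompt in find_gaps(answers):
    print("Still missing insights on:", prompt)
```

Running the sketch flags the two quadrants that still need probing ("How the system fails" and "How to make the system work"), which is the kind of gap analysis the MMM is meant to surface.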

The Cognitive Tutorial is a resource for your end-users, helping them better appreciate how your AI technology works (and how it does not) so that they may know the best way to use the tool and the necessary workarounds.

 

The Cognitive Tutorial sits between an instruction manual and a training module. Its primary objectives are to:

  1. Identify where a user’s mental model of a system does not match what the system is really doing.
  2. Compress the many lessons users learn along the pathway to expertise, including how to operate the system effectively, its boundary conditions, expert workarounds, and common errors users are likely to encounter.

 

One example of where the Cognitive Tutorial can be, and has been, used is machine translation systems (MTS). Many users understand how to operate such a system but know little about why and how an MTS makes mistakes. The tutorial bridges that gap, helping users get better results from the MTS.

The score-card is a tool that developers and system designers can consult when designing their systems to be more transparent and explainable to end users. It presents an ordinal scale of the types of explanatory features that can be designed into their programs.

 

For example:

LEVEL           CONTENT                                          EXAMPLES
Lowest Level    • No attempts at self-explanation                Traditional neural networks or black-box AI
                  (most descriptive reports fit here)            models with no transparency in decision-making.
Mid Level       • Offers glimpses into decision logic and        Decision tree showing a step-by-step process,
                  AI reasoning                                   e.g., “These conditions led to the ‘safe’
                                                                 classification.”
Highest Level   • Identifies reasons for failures                We do not have any real-world examples of
                • Allows the user to manipulate the AI and       this final level.
                  infer a diagnosis from its effects on output
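The mid-level row can be made concrete with a small sketch: a classifier that surfaces the conditions behind each decision, so the user can see which conditions led to a “safe” classification. The feature names and thresholds here are hypothetical illustrations, not taken from any real system:

```python
# Mid-level explainability sketch: a hand-rolled decision rule that
# reports the step-by-step conditions behind its output.
def classify_with_trace(reading):
    """Classify a sensor reading as 'safe' or 'unsafe', recording
    each condition checked along the way."""
    trace = []
    if reading["temperature"] < 80:
        trace.append("temperature < 80 (ok)")
        if reading["pressure"] < 30:
            trace.append("pressure < 30 (ok)")
            label = "safe"
        else:
            trace.append("pressure >= 30 (high)")
            label = "unsafe"
    else:
        trace.append("temperature >= 80 (high)")
        label = "unsafe"
    return label, trace

label, trace = classify_with_trace({"temperature": 72, "pressure": 25})
print(label)  # safe
for step in trace:  # the conditions that led to the 'safe' classification
    print("-", step)
```

Even this toy version offers the “glimpse into decision logic” the mid level describes; the highest level would additionally let the user perturb inputs and diagnose the system from the resulting changes in output.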

Hoffman, R. R., Mueller, S. T., Klein, G., & Litman, J. (2018). Metrics for Explainable AI: Challenges and Prospects. arXiv preprint arXiv:1812.04608.

AI is Built to Serve.

We approach AI systems with the understanding that they are meant to serve and support human success. They are not our teammates; they are tools. As such, AI must be developed around human practicality. Developers need to frame design around users’ needs to ensure intuitive systems that appropriately assist and enhance expertise. That is why we support a future focused on what we call Human-Centered AI.

 

So, how do we (here at ShadowBox) define Human-Centered AI?

1. Comprehensive & Usable

Systems that are clear and easy to use, with tools that minimize complexity and difficulty (cognitive load) for users.

2. Transparent

Design and reasoning transparency are vital in demystifying the “black box” of AI.

3. User-Centric

System designs that adapt to people, and not the other way around.

4. User-Calibrated

Systems that meet users where they are in their training and understanding.

5. Expertise Enhancing

Systems that are useful for both experts and novices. Tools that enhance training and improve skills.

6. Expertise Preserving

Systems that leverage and preserve the skills of top field experts and that avoid complacency and skill loss.