Human Understanding Holds the Key to the Future Success of Artificial Intelligence.

1 January, 2022

It turns out that the way we humans arrive at decisions is one of the key factors in determining whether mass adoption of AI technologies will occur.

A recent article published in MIT News, titled “When should someone trust an AI assistant’s predictions?”, detailed the efforts of a group of AI researchers to understand and model how and why humans trust AI systems to help us make decisions.

The implications of this research are enormous. Across all industry segments, the factor causing the most resistance to the adoption of artificial intelligence and machine learning systems is human apprehension.

A number of key factors feed this apprehension, including dystopian visions of an AI-controlled future, fear of job losses, resistance to change and, perhaps most damaging of all, a fundamental lack of understanding of what current AI and ML systems can do and where their strengths and weaknesses lie.

The film industry is a good example of a sector where all of these factors are feeding global resistance to the adoption of AI-assisted filmmaking tools. If that resistance goes unchanged, it threatens to destroy the industry by leaving the power of AI in the hands of a few giant platforms, such as Netflix, that are already using these tools to better understand their audiences and gain a huge advantage over their competitors.

Such a scenario will play out in nearly every industry on the planet where the use of big data is part of everyday operations. The lesson is straightforward: those that resist AI will be left behind.

The researchers, funded in part by the National Science Foundation, set about trying to understand the mechanisms behind human interaction with artificial intelligence systems and to create a method to better help “workers collaborate with artificial intelligence systems.” – MIT News

The team began by examining how we humans decide whether to let other people help us make our decisions. Unsurprisingly, key to this is what we perceive as the other person’s strengths and weaknesses: factors such as their experience, their track record of being correct in past decisions, and so on.

They discovered that we apply the same decision model to AI systems too. The big drawback, however, is that most people lack any familiarity with AI systems and their strengths and weaknesses.

To overcome this, the researchers created a teaching model, an onboarding process that trained human test groups on the strengths and weaknesses of an AI system. Participants were asked to keep a written record of rules describing how and where the system works, and were then tested to see how effective the teaching had been.

What they discovered: “When teaching is successful, it has a significant impact. That is the takeaway here. When we are able to teach participants effectively, they are able to do better than if you actually gave them the answer.”

However, “the results also show there is still a gap. Only 50 percent of those who were trained built accurate mental models of the AI, and even those who did were only right 63 percent of the time. Even though they learned accurate lessons, they didn’t always follow their own rules.”

These findings have enormous implications for the mass adoption of artificial intelligence systems. To start with, they show that humans who are properly trained to understand how an AI system arrives at its answers can use it to get better results than humans who are simply shown the right answer without understanding how or why the system reached that conclusion.

Understanding is Key

It seems paramount that we understand the reasoning behind AI systems in order to evaluate when and where to trust the results they provide.

Take, for example, AI-assisted filmmaking tools that suggest the most suitable actor for a role. If filmmakers are to decide correctly whether to trust the system and cast a particular actor, they are far more likely to get it right when they understand why the system picked that actor and whether this is an area where the system has been accurate in the past.

The implications of this research for AI companies of all shapes and sizes are also enormous. If they wish to boost customer interaction with, and faith in, their AI systems, they need to provide better upfront training on how the system works and how best to use it. They also need to give customers details of how the AI reached a particular insight and how often it has been right in the past.

Such insight into the actual workings of an AI system will, according to this research, improve the results users get and, as a consequence, dramatically boost customer retention.

The facts are in. Now we will have to wait and see whether the AI companies listen to the data and adapt their business models to encourage us all to gain a better understanding of how and when to trust AI with our decisions.
