
A Question of Ethnicity: Walking a Tightrope in Creating ‘Morally Acceptable’ AI Systems

22 October, 2020

“More human than human is our motto” proudly exclaims Dr. Tyrell, the genius responsible for the creation of replicant AI human beings in Ridley Scott’s 1982 classic movie Blade Runner.

 

This quote sums up the ultimate goal of the creators of artificial intelligence lifeforms. The reason for this is very simple – we humans require understandable patterns in everything we do and accept.

 

From the dominant portrayal in Christianity of God as a white-bearded old man to the ‘humanness’ of animal characters in Disney movies, we need to be able to emotionally and logically relate to all things, and this includes artificial intelligence.

 

If AI systems are going to be successful, they will have to understand and, eventually, mimic humans – in effect, becoming as human as we are.

 

Or so you would think.

 

The creators of AI systems have begun to understand that there are choices that need to be made when ‘training’ their systems. The reason for this is simple – humans and the societies we have created are heavily infused with a whole range of biases, many of which are no longer acceptable.

 

The best-known examples of this phenomenon are chatbots. Several of the world’s leading AI software companies have had to pull their chatbots after the systems learned racial and other biases. In the most shocking example, Microsoft’s Tay developed racist and gender-based biases less than 24 hours after being turned on. The chatbot was quickly found to be referring to feminism as a ‘cult’ and making offensive remarks about Adolf Hitler.


It is clear how much of a challenge learning the rights and wrongs of our societies is going to be for AI – a challenge that will, in effect, require it to become ‘more human than human’.

 

Given that movies are a representation of humans and our complexities, the challenge for AI systems will be no less difficult when it comes to AI in film.

 

In the movie Goodfellas (1990), the main character, Henry Hill, states in the narration that “Jimmy was the kind of guy that rooted for bad guys in the movies.” This narration is intended to clearly establish the psychopathic criminal nature of Robert De Niro’s character Jimmy Conway – a device that attempts to break audience identification with the character by convincing them that he is in fact a very dangerous person.

 

Arguably, the only failing of Scorsese’s classic film is that by the end of the movie, the audience still has a great affection for Jimmy, seeing him in the same light as Western outlaws such as Jesse James and not as the vicious, amoral criminal he really is.

 

This is a good example of the minefield that AI-assisted moviemaking systems will have to navigate in trying to understand what is moral and what is not, and why some of us hold views that run counter to our stated morals when it comes to things such as movie characters and endings. The likeable anti-hero is just the tip of the challenge that AI faces.

More Human Than Human

The issue of minority representation, both onscreen and behind the camera, is very important in AI development.

 

A 2019 report by UCLA highlighted some shocking facts about the underrepresentation of minority groups, which constituted “nearly 40 percent of the U.S. population” in 2017 but accounted for only 19.8% of film leads, 12.6% of film directors, and 7.8% of writers that year.

 

It is certain that AI systems will be increasingly asked to help prevent prejudice in films.

 

In a recent AI Masterclass with MediaDesk Hamburg, Largo.AI CEO Sami Arpa was asked how AI systems choose the most suitable actors for a role. As part of his answer, he stated that “we definitely don’t want to give ethnicity information to A.I. because then it will learn about bias in terms of ethnicity in the society.”

 


He went on to say that the system “doesn’t focus on gender or ethnicity but rather on how much a character and actor matches.”

 

Could this mean that AI could be the first-ever truly unbiased way to ensure fair minority representation?

 

Ostensibly, the case for AI as a source of unbiased insights looks strong. Removing human bias based on ethnicity is certainly a perspective that current AI systems can provide. As Arpa pointed out, because these systems focus on matching data relating to an actor’s past performances, acting style, range, and so on – and not on categories such as ethnicity – they are less biased than any human could be.
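
To make this concrete, here is a minimal, hypothetical sketch of what such ‘blind’ matching might look like. The feature names, scores, and the use of cosine similarity are all assumptions for illustration – Largo.AI has not published its actual method.

```python
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical actor profiles built only from performance-related
# features; sensitive attributes such as ethnicity and gender are
# deliberately never included in the feature set.
actors = pd.DataFrame({
    "actor":        ["Actor A", "Actor B", "Actor C"],
    "drama_score":  [0.9, 0.4, 0.7],   # strength in dramatic roles
    "comedy_score": [0.2, 0.8, 0.5],   # strength in comedic roles
    "range_score":  [0.8, 0.6, 0.9],   # versatility across genres
})

# Hypothetical character profile derived from the screenplay,
# expressed in the same feature space.
character_profile = [[0.85, 0.3, 0.75]]

# Rank actors purely by how closely their profile matches the character.
features = actors[["drama_score", "comedy_score", "range_score"]]
actors["match"] = cosine_similarity(features, character_profile).ravel()
print(actors.sort_values("match", ascending=False)[["actor", "match"]])
```

The point of the sketch is simply that the ranking can only ever reflect what is in the feature set: leave ethnicity out, and the match score cannot directly depend on it.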

 

Based on these criteria, studios can already select the most suitable actors for a role and use the data provided by impartial AI systems to defend their choices against accusations of bias.

 

This represents a huge step in the battle to end discrimination.

 

In the longer term, however, the complexity of human and social bias, the increasing sophistication of AI systems, and the inevitable demand on AI to reduce even the most subtle forms of discrimination will require tough choices to be made.

 

During the same interview, Prof. Björn Stockleben outlined this difficulty, saying that “this technology cannot be developed in an ivory tower, this is a place where it confronts reality, and we together need to decide what options we want to give, what data we want to feed, and what we might want to exclude consciously.”

 

Herein lies the possibility for human, decision-based bias to enter AI filmmaking solutions. By consciously excluding certain data analysis criteria, AI companies run the risk of inadvertently creating biases themselves.

 

It is clear what a tightrope these companies will walk in teaching their AI systems to remove minority bias while still accurately understanding why international audiences might be more willing to see an actor of one ethnic background or sex play a role over another.

 

Since the role of AI systems is to understand audience preferences, AI technicians carry an immense responsibility: training AI systems to accurately understand both what we want and what is morally acceptable.

 

This is a particularly big challenge because culture-based biases routinely influence much of what we see onscreen. While certain biases might be obvious – e.g. a white actor playing Othello – the vast majority are subtle and historic in nature, and often go unnoticed.

 

Unfortunately, this means that these biases already exist deep inside the historical data pools that AI analyzes, and they cannot simply be separated out by removing ‘wide’ categories such as ethnicity. Worse, attempting to do so can also make the AI’s predictions less accurate.
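
A small, invented illustration of why dropping an ethnicity column is not enough: other features in the historical data can act as proxies for it. The data and column names below are hypothetical.

```python
import pandas as pd

# Invented historical casting data. The 'ethnicity' column can be
# dropped before training, but decades of biased casting have left
# their mark on the remaining columns.
history = pd.DataFrame({
    "ethnicity":       ["X", "X", "Y", "Y", "X", "Y"],
    "past_lead_roles": [9, 7, 2, 1, 8, 3],
    "awards":          [4, 3, 0, 1, 5, 0],
})

# A quick proxy check: if the remaining features differ sharply
# between groups, a model can effectively reconstruct the dropped
# attribute from them, and the bias survives the deletion.
print(history.groupby("ethnicity")[["past_lead_roles", "awards"]].mean())
```

In this toy example, the large gap in past lead roles between the two groups means any model trained on the remaining columns still ‘sees’ the deleted category indirectly.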

 

It is clear that real change will require both AI and us humans to evolve. In time, AI-assisted filmmaking solutions will have to be increasingly trained to identify and understand what constitutes unacceptable bias in all aspects of movie production and film content. This will require AI technicians to work closely, on a long-term basis, with experts on racial and other forms of bias to constantly refine AI’s ability to identify prejudice, no matter how subtle.

 

The decisions that AI technicians make regarding which data sets to analyze and which to exclude will shape how successful future AI systems are at remaining as unbiased as ‘humanly’ possible. A symbiotic relationship between AI experts, experts on minority group representation, and the communities themselves will be vital in ensuring that AI becomes a powerful tool for identifying all forms of discrimination and, ultimately, does not end up with its own learned set of biases.

 

This journey will not end there either.

 

As AI systems become more and more sophisticated, they will be asked to take on the challenge of becoming an educational tool too. This will involve providing new types of insights that help educate both filmmakers and audiences in new, non-biased ways of thinking.

 

An excellent and very welcome recent example of this point, though not one from an AI system, was the representation of the fictional East African country of ‘Wakanda’ as an advanced technological civilization in the movie Black Panther (2018).

 


The term ‘Africa’ is still misused when, for example, referring to specific African countries as if they were the whole continent – a continent which, despite being one of the world’s most culturally rich, sadly still has simplistic, negative connotations attached to it. One only has to read the comments made by U.S. President Donald Trump to see this point perfectly illustrated.

 

The positive portrayal of an, albeit fictional, East African country in Black Panther will have had at least a subconscious effect on most audiences, hopefully causing a number of them to rethink what ‘Africa’ really means.

 

With sufficient development and guidance from experts and minority communities themselves, AI systems will be able to offer what could be termed ‘positive representational creative changes’, like the example above, allowing filmmakers to alter everything from where their movies are set to how their characters speak.

 

This would have two positive impacts: first, it would offer new and unique elements to a film (imagine if Black Panther had taken place in a futuristic U.S. like all other recent superhero films); second, it would enable the film to ‘educate’ existing audiences into accepting new forms of minority representation.

 

As with increasing the diversity of the films we get to see via AI-driven audience market-size identification and gross-earnings predictions, AI would become a win-win for film production companies as well as for the societies in which their films are seen.
