Why diversity is vital for Artificial Intelligence


There is a great deal of interest in how AI systems can achieve a sufficient degree of alignment with human values and behaviour.

However, most of the research in this area has concentrated on the implications of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI), while we already have problems with the lowest level of AI, defined as Artificial Narrow Intelligence (ANI).

For example, the failure of some crime-assistance systems used in the US, and of the Robodebt scheme in Australia, showed how easily misalignment can occur.

Those investigating high-level AI alignment are often concerned with trying to establish a globally accepted framework for AI ethics, but this is likely to be impossible to achieve. After all, we have no such international code for human behaviour, and it is a limiting approach in any case, since diversity is a vital element in the growth of intelligence.

When facial recognition programs became available, governments throughout the world rapidly applied them to their infrastructure, such as surveillance camera networks. However, the way the technology was used differed markedly.

Rating the ‘goodness’ of every citizen

Having a culture based largely on Christian ethics, the US deployed the new systems to identify and follow miscreants and potential miscreants, with the aim of challenging evil. China, by contrast, has deployed the technology to rate the “goodness” of every citizen, drawing on traditional Taoist ideas, with the aim of improving individual behaviour.

Without taking a position, it is interesting to note that AI experts on each side regard the other’s approach as unacceptable. The Chinese believe that using network-based systems developed to identify terrorists to single out potential miscreants, and then to monitor them without their knowledge, is unconscionable.

Many in the US, in turn, see compiling a behavioural file on every individual in the country as completely unacceptable. What is important is that both systems, though different, reflect the aspirations of the governments concerned.

We’re used to living in environments where values differ

On a much more personal note, there is no reason why the values of our AI systems should be identical. In fact, they are not at present. On a daily basis, when an individual goes to a search engine, the response is tailored to that person’s individual search history or location.

Moreover, humans are used to living in environments where values differ. Those who strongly support the death penalty live successfully alongside those who oppose it, in countries, workplaces and even families.

The reason personal AI systems need to adopt, or at least appear to adopt, the values of the humans involved can easily be demonstrated by some work being done by a Japanese colleague. He is developing robots to assist in palliative care. Central to the system is a chatbot, and the most commonly asked question is: “Is there life after death?”

To provide comfort, the machine attempts to identify the patient’s own beliefs and reinforce them, rather than give a single pre-scripted response.
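A minimal sketch of how such a belief-matching reply strategy might look is given below. It is purely illustrative and assumes a crude keyword heuristic and a small set of canned replies; it is not a description of the actual palliative-care system.

# Illustrative sketch only: a hypothetical belief-matching reply strategy,
# not the actual palliative-care chatbot described above.

BELIEF_RESPONSES = {
    "religious": "Many people find great comfort in the belief that life continues after death.",
    "secular": "Many people find peace in the thought that a life well lived endures in the memories of others.",
    "uncertain": "No one can say for certain, and it is natural to wonder about it.",
}

def infer_belief(history):
    """Very rough keyword-based guess at the patient's outlook (an assumed heuristic)."""
    text = " ".join(history).lower()
    if any(word in text for word in ("pray", "god", "heaven", "faith")):
        return "religious"
    if any(word in text for word in ("atheist", "no afterlife", "just memories")):
        return "secular"
    return "uncertain"

def respond(question, history):
    """Reinforce the patient's own view rather than return one fixed answer."""
    if "life after death" in question.lower():
        return BELIEF_RESPONSES[infer_belief(history)]
    return "Tell me more about how you are feeling."

The point is not the crude heuristic but the design choice: the response is selected to align with the values of the individual patient rather than with a single, globally agreed answer.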

Building a symbiotic relationship between artificial and human intelligence

The way, we believe, the process could be applied more generally is for AI to acquire a system of ethics, morals and values in a similar way to humans. When a medical, engineering or accounting graduate completes their studies, they have a high level of skill but are not yet fit to practise, as they have only a rudimentary understanding of the behaviour expected of them.

To overcome this, formal or informal mentoring takes place. Of course, the mentee does not achieve perfect compliance with best practice, having acquired some of the same foibles as the mentor, but in practice should be able to function just as adequately, as sketched below.
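As a very rough illustration of that mentoring idea, the toy loop below has an AI “mentee” adjust its preferences from a human mentor’s feedback. The scenario, the scoring and the occasional mentor “foible” are entirely our own assumptions, not a description of any existing system.

# Illustrative sketch only: a toy "mentoring" loop in which an AI mentee
# gradually adopts a human mentor's preferences from feedback.
# The actions, the approval rule and the learning rate are assumptions.

import random

ACTIONS = ["disclose risk to client", "delay disclosure", "omit disclosure"]

def mentor_approves(action):
    """Stand-in for the mentor's (imperfect, human) judgement."""
    preferred = "disclose risk to client"
    # The mentor occasionally lets a poor choice slide: a 'foible'.
    return action == preferred or random.random() < 0.05

# The mentee starts with no preference between actions.
weights = {a: 1.0 for a in ACTIONS}

for episode in range(1000):
    # Choose an action in proportion to current preference weights.
    action = random.choices(ACTIONS, weights=[weights[a] for a in ACTIONS])[0]
    # Reinforce approved actions, discourage disapproved ones.
    if mentor_approves(action):
        weights[action] *= 1.05
    else:
        weights[action] *= 0.95

print(weights)  # Disclosure dominates, but the mentor's leniency leaves imperfections.

The mentee ends up strongly favouring the behaviour the mentor rewards, without ever matching the mentor’s judgement perfectly.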

To maximise the advantages that can be gained through artificial intelligence, we need to build a symbiotic relationship between artificial and human intelligence.

This does not, of course, imply an equal relationship, any more than the parties to biological symbiotic relationships have equal roles and authority. However, to maximise the return from the relationship, it is vital that maximum diversity exists in each partner.

John Page is an Adjunct Senior Lecturer at UNSW and Editor of the International Journal of Intelligent Unmanned Systems. Faqihza Mukhlish is a Research Student at UNSW.
