Here’s why artificial intelligence needs pro-social values

We need a tech-literate population thinking differently about AI.

AI is now part of our everyday lives – so much so that we often take it for granted – but too often it seems to serve only corporations and big business.

The Facebook-Cambridge Analytica data scandal of 2018, in which the personal data of millions of people was used without their consent for political advertising, shone a spotlight on the path we could find ourselves on if we fail to bring more pro-social values to the development process.

Professor Jon Whittle, Dean of the Monash Faculty of Information Technology, is at the forefront of driving a change in how we think about AI, Big Data and robotics.

He believes that scandals such as the one that engulfed Facebook and Cambridge Analytica are, in part, the result of the massive and rapid advances in technology. Companies have forged ahead – maliciously or not – without really thinking about the ramifications.

Now is the time for our social conscience, regulation and legislation to catch up.

“We need to seriously rethink our approach to intelligent machines,” he said. “Rather than, ‘OK, let’s use this to make money’, we should be saying ‘Can we use this to make the world a better place?’”

So where should we start this new conversation? For Professor Whittle, it starts with defining what we are trying to achieve – and that, in turn, begins with values.

Any new technology affects human values, deliberately or unintentionally. So positive social values should be reflected not only in the way AI systems are used, but also in the way the software is written.

AI with pro-social values

Software engineers should thoroughly understand and consider the values of end-users and stakeholders. They, and the software they produce, should first and foremost express and reflect the human values of equality, diversity, respect for tradition, privacy, a sense of belonging, and freedom.

In the commercial world, companies should be clear about what they stand for, so their customers can make informed choices about whether they share those values and are prepared to do business with the company.

“It’s more about transparency rather than trying to come up with some magical set of values that everybody buys into,” said Professor Whittle.

Publicly funded research is also vital if the agenda is to focus research on social good. Research driven by our universities encompasses broader areas of inquiry and brings together multiple disciplines to tackle questions. Perhaps most importantly, unlike research driven by the corporate world, its ultimate driver is not profit.

But our citizens need to be empowered as well. There is a growing belief among many AI researchers that the field should not deepen the division of the world into “haves” and “have-nots”.

That means a growing focus on human-centred AI: developing AI to support human goals, values, and activities.

The democratisation of AI for positive social ends is also central to the work of Dr Aldeida Aleti, a senior lecturer in Cybersecurity and Systems at Monash.

She is exploring the potential of deep learning - a type of machine learning - and said that researchers must take into account people’s fears and concerns. But first, we must make sure people have the skills to fully participate in the conversation.

She points to Finland as an example of how we should be moving forward.

The country is running an ambitious, pioneering training program to teach an initial 55,000 people the basic concepts at the root of artificial intelligence technology.

Once that first cohort has been trained, the program will be rolled out nationally with the aim of getting the entire population thinking about high-end applications of artificial intelligence that would help their area of expertise.

“We need to do this too,” Dr Aleti said. “Educating people so they first understand what artificial intelligence is, is vital, as it gives them back the power over their lives.”

Software engineering is a democratic project

An important part of bridging that divide is making the technology more accessible and transparent - to experts and non-experts alike. That’s why Dr Aleti’s research is open source, which means anyone can access what she and her team are developing.

Dr Aleti’s specialty is investigating how AI can help build software automatically or semi-automatically. That will not only save costs but also enable people to build their own systems without knowing much about programming.

At the same time, we need to keep an eye on what the software is doing, to ensure that more people can be comfortable with it.

“These AI systems are software – code – so it’s about making sure that it’s doing the right thing, that it’s reliable and safe.”

Professor Whittle and his team are now working with global software companies to help them take a more rigorous approach to values-based software development.

“While 85 of the FTSE 100 companies have public values statements, a key issue is how to translate those values into the software that the companies use or develop,” he said.

“We’re working with industry to fundamentally change the way that they develop software, so at every stage they’re thinking about these values and making design decisions that respect those values rather than forgetting about them.”

A question of bias

One inevitability of AI is bias - a flaw that has led to algorithms that discriminate against certain groups of people.

At times, this has had disastrous outcomes, as with Compas (Correctional Offender Management Profiling for Alternative Sanctions), a sentencing tool used in the US that was developed in 1998 and has been in use since 2000.

Compas uses an algorithm to assess the potential risk of a defendant re-offending, but analysis found that it routinely, and erroneously, predicted that black defendants posed a higher risk of re-offending than white defendants - findings that Equivant, the company that developed the system, still disputes.
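
How does that kind of bias creep in? Here is a minimal sketch in Python of one common route, using entirely synthetic, hypothetical data (the real Compas model is proprietary): a model fitted to historical labels that are already skewed against one group will reproduce the skew rather than correct it.

```python
# A minimal, hypothetical sketch of how a risk model inherits bias.
# All data here is synthetic; nothing below reflects the real Compas model.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic defendants: one legitimate risk signal, one protected attribute.
prior_offences = rng.poisson(2.0, n)    # legitimate predictor of risk
group = rng.integers(0, 2, n)           # protected attribute: 0 or 1

# True re-offending probability depends only on prior offences ...
true_p = 1 / (1 + np.exp(-(prior_offences - 2)))
reoffends = rng.random(n) < true_p

# ... but the historical labels the model learns from were recorded under
# heavier scrutiny of group 1, inflating that group's apparent risk.
label_p = np.clip(true_p + 0.15 * group, 0.0, 1.0)
biased_labels = rng.random(n) < label_p

# Stand-in for any model fitted to those labels: the group-conditional
# label rate, i.e. the "risk" the training data teaches the model to predict.
for g in (0, 1):
    predicted = biased_labels[group == g].mean()
    actual = reoffends[group == g].mean()
    print(f"group {g}: learned risk {predicted:.2f}, actual risk {actual:.2f}")

# Group 1's learned risk exceeds its actual risk: the model reproduces
# the bias in its training data rather than discovering new truth.
```

The bias, in other words, was in the data before it was in the algorithm.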

“We are all introducing our own biases to everything we do,” said Dr Aleti. “So I think AI unfortunately is going to be biased, but the important thing is to be aware of that.”

But in the simple fact of knowing that AI systems are likely to carry all-too-human biases lies a degree of security. That knowledge has fuelled the growing demand for Explainable AI, a focus of Monash’s Dr Mor Vered.

“When you talk about human-centred explainable AI, you’re looking at explanations from a more positive social human perspective. How do people understand? What kind of explanation do we need to help them understand?” she said.

An approach like that would have made the Compas sentencing decisions more transparent.

“If you had a human-centred explanation you could say, ‘you know what, I’m seeing this decision because he is a man of colour’, and if you can spot it, the decision can be dismissed as bias.”
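
As an illustration of what such an explanation might look like in practice, here is a minimal sketch for a linear risk model; the model, its weights, and its features are all hypothetical. For a linear model, each feature’s contribution to a single decision is simply its weight times its value, which can be read out in plain terms.

```python
# A minimal, hypothetical sketch of a human-centred explanation for one
# decision of a linear risk model. Weights and features are invented.
weights = {"prior_offences": 0.8, "age": -0.3, "race": 1.2}
defendant = {"prior_offences": 1.0, "age": 2.0, "race": 1.0}  # scaled inputs

# Each feature's contribution to this decision is weight * value.
contributions = {f: weights[f] * defendant[f] for f in weights}

# Report the drivers of the decision, largest first.
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {c:+.2f}")

# If "race" tops the list, the explanation has surfaced exactly the kind
# of decision that, once spotted, can be dismissed as bias.
```

Real explainable-AI tools are far more sophisticated, but the principle is the same: make the reasons for a decision visible, so a biased one can be challenged.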

For Dr Aleti, problems like this point back to education.

“Even with Explainable AI, it is still important that everyone first understands what AI is and the important role it will play in all our futures. But also to know its limitations.”