Will artificial intelligence create killer robots or steal our jobs? Here’s how a Singapore bank deals with the ethics of AI


A scene from “Ex Machina”, a sci-fi thriller which revolves around artificial intelligence.

The fourth season of Netflix’s “Black Mirror” offered a sneak peek into a very possible future altered by technological advancement, with many episodes exploring how humans interact with AI and the issues that arise as a result.

*SPOILER ALERT* In one of the most spine-chilling episodes, “Metalhead”, a group of human beings encountered a relentless pack of killing machines in the form of autonomous robot “dogs”. A cat-and-mouse game ensued and eventually – you guessed it – the “dogs” killed them all.

While there has been no record of such catastrophes so far, last year, over 100 robotics and AI technology leaders, including Elon Musk and Google’s DeepMind co-founder Mustafa Suleyman, issued a warning about the risks posed by super-intelligent machines.

Their open letter to the UN Convention on Certain Conventional Weapons stated: “These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s Box is opened, it will be hard to close.”

Closer to home, the Singapore government recently announced that it is setting up an advisory council to delve into the ethical use of artificial intelligence (AI). The Advisory Council will assist the Government in developing ethics standards and reference governance frameworks, and will issue advisory guidelines, practical guidance and codes of practice for voluntary adoption by businesses.

OCBC Bank also launched its own AI unit earlier this year with an initial investment budget of S$10 million over three years to strategically develop in-house capabilities. Broadly, these are the four key principles that steer the bank’s development in AI:

1. Augment jobs, don’t kill jobs

High-profile technocrat Jack Ma shared at the World Economic Forum that AI and robots are going to kill a lot of jobs. He is not alone in this assessment: IT research firm Gartner estimates that by 2025, a whopping one third of jobs will be replaced by robots and smart machines.

We think otherwise.

When developing digital capabilities, we view fast-developing technologies like AI as augmenting our jobs, not killing them – adding value to what our people are already doing. Robots and machines are enablers that help us make smarter decisions, not substitutes that decide in our place.

We see economic benefits and job creation through people and machines working in collaboration. Take our home and renovation loan chatbot Emma for example. It was developed with the intention to complement the efforts of our mortgage sales teams, and not to replace them.

Specifically, it caters to the growing segment of self-serve consumers who prefer to do things themselves, at any time and from anywhere. It helped to close more than S$70 million in home loans in less than a year.

2. Flourish alongside AI, leave no one behind

While we look to AI to augment jobs, we recognise the inevitable: the skillsets of our people need to keep pace with the technology so that they can flourish alongside it and cater to the evolving needs of our customers.

This is akin to a car mechanic who started plying his trade in the 1980s and has kept abreast of the latest technological advancements to service the new types of cars his customers bring to him.

Today, he needs to learn to fix a complicated hybrid car model with auto start-stop systems, a far cry from the conventional fuel injection engines he was used to decades ago. In another three years’ time, he may have to upgrade himself again – to repair flying cars (who knows?). The learning journey never ends.

In the same vein, we are conscientiously taking steps to ensure that our people have the competencies to thrive, and to create a strong learning culture that encourages a mindset receptive to learning, unlearning and relearning.

Our S$20 million Future Smart programme, launched in May this year, is a testament to this principle – to train and develop the digital skills of all 29,000 employees of the OCBC Group globally.

3. Keep AI fair, minimise bias

American mathematician Cathy O’Neil shared her concerns, in her book ‘Weapons of Math Destruction’, about the possibility of AI algorithms – trained on past data – increasingly reinforcing pre-existing inequality.

For example, if a poor student can’t get a loan because a lending model deems him too risky (by virtue of his zip code), he’s then cut off from the kind of education that could pull him out of poverty, and a vicious circle ensues.

There is a high risk that blindly adopting AI decision-making black boxes will make the world even more unfair than it is now and widen the social gap between rich and poor.

We want to avoid this.

Our colleagues who are subject-matter experts in the areas we work on are just as involved as our AI scientists in developing these initiatives. They are our AI trainer-equivalents, continuously validating and providing feedback on the algorithms even after an AI product has been launched.

It’s not a sexy job, but it is a meaningful one that makes us a better, more inclusive organisation.

4. Protect customer data, uphold trust and integrity

AI thrives in businesses where there is big data, benefiting organisations and consumers alike. However, it can go eerily wrong when the data is misused; the recent Cambridge Analytica scandal comes to mind.

How companies manage, secure and share consumer data is fast becoming a key factor in the relationship they have with their customers. When done well, companies will be in a good position to capture the most valuable element of the relationship – trust – and to reap the full benefits of AI and digital technology. When poorly managed, companies lose not only customers but their reputation as well.

Our journey with big data started some 15 years ago – well before “big data” became a buzzword – in order to serve our customers better. We have invested heavily in both protecting our customers’ data and using it responsibly. For example, access to the data is tightly controlled, and we strive to use it to offer customers products and services that are relevant to them, not to spam them.

To us, this is not merely a case of fulfilling our fiduciary obligations. It is about our values: upholding the highest level of integrity in everything we do, treating our customers with respect, and consistently dealing with them in a fair and professional manner.

While we strive to advance our capabilities, we will not do so at the expense of our customers or the public at large. We endeavour to build long-term relationships on the basis of integrity and fair dealing.

The writer is OCBC Bank’s Senior Vice President, Group Operations and Technology.