Speech transcript, 19 July, Cranlana Programme: 'How do we build ethical machines?'

- 17 mins

On 19 July 2017 I gave a speech about data ethics to the Cranlana Programme, which was broadcast as part of ABC Radio National’s ‘Big Ideas’. You can listen to the full conversation on ABC Radio National here. A transcript of the prepared speech (before the conversation with the room) is below.

Google thinks that I like American football.

Google also thinks that I like combat sports and the blues, but it’s American football that confuses me most.

Who here has a Gmail account? Have you ever opened your ad settings? You can see how ads are personalised for you, based on the kinds of things you search for via Google and watch on YouTube. And you can change them, or turn off personalisation entirely.

Google has correctly guessed that I like pop music, sci-fi and - although I’m ashamed to admit it - celebrity gossip.

But have I been sleep-searching American football?

See, Google’s algorithm has assembled a figure representing me - a kind of data shadow - based on the data it has access to about me. Of course Google doesn’t know everything about me but it’s had a go at figuring me out.

Maybe Google thinks I like American football because I genuinely do like Friday Night Lights, an American TV series set in a small town in Texas that revolves around American football. And maybe Google’s algorithm can’t tell the difference between liking a show about fictional American football and liking the real thing.

In this context - showing me ads that I never click on anyway - it doesn’t really bother me. It does capture rather beautifully, though, the difference between my data shadow as Google interprets it and what the data actually reveals about me.

We use the term ‘artificial intelligence’ to describe machines that can perform tasks and make decisions that we used to think only people could. While AI has been around in various forms for decades, the kinds of tasks it can perform and decisions it can make are quickly becoming more sophisticated and widespread, and that’s because of data.

Enormous, endlessly expanding, diverse, messy data.

Many of our interactions take place online now. We sign up to loyalty programmes and browse the Web. We pay our bills electronically and research symptoms online. We buy smart fridges and smart TVs. Sensors, mobile phones with GPS and satellite imagery capture how we move through the world. And our online lives leave thick data trails.

Data is powering automated cars, trains and planes. Automated systems learn from data to make lots of different kinds of decisions: about what we might like to buy online, when we could be at risk of getting sick. They decide who our potential romantic partners might be. The insurance premiums we get. The news we’re exposed to.

The rapid advances in AI have been exhilarating for some and disturbing for others. For me, it’s a bit of both.

Tonight I want to talk about three themes: access, control and accountability. Because within the question, ‘how can we build ethical machines?’ are profound structural and historical choices about data - how it is collected, who has access to it, and how it is used (or misused) - waiting to be unpacked.

Because data, if you like, is what gives AI life. It makes it smarter. You can’t build smart machines without it.

And so we need to ask questions like: Who has access to data? Who collects enormous data sources? What kind of organisations? And what responsibilities should they have? Do we as people have the ability to control and question automated decisions made about us? And, who gets held accountable when a machine gets it wrong?

Because things can go wrong.
There are lots of stories about AI getting into trouble.

The social media chatbot that quickly becomes horrifically sexist and racist. The Google Photos update that sees black people labelled as gorillas. The camera that recognises images of Asian faces as people blinking.

These kinds of glaring problems are typically picked up quickly. But sometimes the challenge of training bias and prejudice out of AI can be more insidious, and more troubling.

Joy Buolamwini, a computer science researcher at the MIT Media Lab in the US, has spoken about issues she’s had as a researcher getting robots to interact with her: to recognise her face, to play peekaboo.

But when Joy, who is black, puts a white mask over her face, the robots can see her.

The problem here is poor data being used to train a robot about what faces look like.

Facial recognition software learns faces from big datasets of images of faces. If the images in what is called your ‘training data’ aren’t diverse, then the software doesn’t learn to recognise diverse faces.
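A rough sketch of the point, under purely hypothetical assumptions: even before any model is trained, you can check whether a face-image training set is skewed. The group labels and the 10% threshold below are illustrative only; real audits need carefully sourced demographic metadata and domain expertise.

```python
# Hypothetical training-data diversity check, run before any model is trained.
# The group labels and 10% threshold are illustrative, not a real standard.
from collections import Counter

def audit_training_set(group_labels, min_share=0.10):
    """Flag demographic groups making up less than `min_share` of the dataset."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    return {group: count / total
            for group, count in counts.items()
            if count / total < min_share}

# Made-up metadata for a face-image training set: 950 lighter-skinned faces,
# 50 darker-skinned faces.
labels = ["lighter-skinned"] * 950 + ["darker-skinned"] * 50
print(audit_training_set(labels))
# {'darker-skinned': 0.05} - a warning sign before training even begins.
```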

A bit like humans really. AI is shaped by its environment just as we are. It’s impressionable. And so we need to take care not to encode biases within machines that we’re still wrestling with as humans.

In 2016, the first international beauty contest judged by AI - which promoted itself as analysing ‘objective’ features like facial symmetry and wrinkles - selected nearly all white winners.

In the US, sentencing algorithms are being developed to predict the likelihood that people convicted of crimes will reoffend, and to adjust sentences accordingly. One of these algorithms was found to falsely flag black defendants as future criminals at twice the rate of non-black defendants.

It’s not just race either: researchers from Carnegie Mellon University have discovered that women are significantly less likely than men to be shown ads online for high paying jobs.

In one machine learning experiment helping AI make sense of language, words like “female” and “woman” were closely associated by the AI with arts and humanities and with the home, while “man” and “male” were associated with science and engineering.

In that experiment, the machine learning tool was trained on what’s called a “Common Crawl” corpus: 840 billion words of material published on the Web.
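A simplified sketch of that kind of association test, in the spirit of the study described above. It uses gensim’s small, publicly downloadable GloVe vectors rather than the 840-billion-word Common Crawl embeddings from the original experiment, so the exact numbers will differ.

```python
# Simplified word-association test: compare how close "man" and "woman" sit
# to science words versus arts words in a pre-trained embedding space.
import gensim.downloader as api
import numpy as np

model = api.load("glove-wiki-gigaword-50")  # small public GloVe model

def mean_similarity(word, attribute_words):
    """Average cosine similarity between one word and a set of attribute words."""
    return float(np.mean([model.similarity(word, a) for a in attribute_words]))

science = ["science", "engineering", "technology", "physics"]
arts = ["poetry", "art", "dance", "literature"]

for word in ["man", "woman"]:
    print(f"{word}: science={mean_similarity(word, science):.3f}, "
          f"arts={mean_similarity(word, arts):.3f}")
```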

Training AI on historical data can freeze our society in its current setting, or even turn it back.

If women aren’t shown advertisements for high paying jobs, then it will be harder for women to actually apply for high paying jobs. There’ll be fewer women in high paying jobs.

Robots that struggle to read emotions on non-white faces will only reinforce the experiences of otherness, of invisibility, that can already be felt by racial minorities in western societies.

The extent to which a person or an organisation can be held responsible for a machine that is racist or sexist is a question coming up a lot in AI debates.

On the one hand, there’s a fairly straightforward answer: people designing AI need to be accountable for how AI could hurt people. The hard part with AI can sometimes be figuring out when harm could reasonably have been prevented.

The creeping, quiet bias in data and AI can be hard to pin down. I have no idea if I’m not being shown ads for high paying jobs because I’m a woman. I don’t know what I’m not being shown.

As AI becomes more sophisticated, and depending on the technique being used, it can be hard for the people who have designed an AI to figure out why it makes certain decisions. It evolves and learns on its own.

Take my American-football-loving data shadow from Google.

I don’t know how Google’s algorithm actually works, even though I can see all of the data being used to guess (because Google’s actually pretty transparent about it). And what’s weird is, of all of the topics Google thinks I like, there are none related to technology or data or AI. And yet every day - I can see in the data - it’s technology and data related stories that I’m looking at online.

Maybe the algorithm deduced that data is my job based on the frequency of my data-related searches, so I might not “like” it.

Or maybe it’s basing its assumptions about what I might like more on my gender and age than on what I actually search for. I don’t know what’s being weighted. I don’t really have a way of asking Google whether they can explain it either.

What does ‘control’ mean - who can ask questions - in an age of machines?

In the United States a class action lawsuit has been underway for two years about cuts that have been made to Medicaid assistance for people with developmental and intellectual disabilities.

The decisions about where cuts would fall were based on a closed data model. When lawyers representing people affected by the cuts asked to see how the data model worked, the Medicaid program came back and said, “we can’t tell you that. It’s a trade secret.”

In California a defendant was jailed for life without parole in a case in which the prosecution relied on the results of a piece of software that analysed DNA traces at crime scenes.

When expert witnesses for the defendant asked to see the source code for the software, the developer refused, saying the code was a trade secret. The language and expectations of business are increasingly intertwined with government when it comes to AI. A “trade secret” is something we understand from the commercial world.

But when should it be ok to refuse someone the information they need to exercise their democratic right to an appeal, because the algorithm being used is a “trade secret”?

Partnering with private sector organisations to deliver automated, predictive public services is becoming a necessity for government. We don’t have clear expectations of the nature of those relationships: who owns the AI being developed using public funding; who should have control over and access to data used by the AI; and what our democratic rights are to understand and control how automation, algorithms and artificial intelligence shape our interactions with government.

We need to have this discussion in Australia. Just this year, as well as the much-covered Centrelink debt recovery programme, the government has also announced investments in predictive systems to identify welfare recipients for drug testing and - just last week - identifying ‘at risk’ gamblers online.

When Centrelink began sending automated debt notices over Christmas in 2016, it became front page news and the subject of a Parliamentary inquiry. The data model had flaws. The systems surrounding its implementation had flaws. The data matching process at the heart of Centrelink’s debt recovery programme wasn’t new. Automating the process simply exposed existing flaws and scaled them up with devastating effect.

Access to data is power. If you’re a startup, a business, a researcher, or a government department building AI, you need access to high quality data sources.

And if you’re someone on the receiving end of an automated decision, not having ready access to data to challenge it with immediately puts you in a less powerful position.

In the Centrelink case, the only way to challenge a decision was to validate the model - submit data about your employment and pay slips that might expose an error. How accessible to you are your employment histories as data? Not the snippets, the payslips and documents. Your employment details as data that can be interpreted by a machine.

As more and more services are automated - applying for a home loan, getting health insurance - having access to our own data, or the ability to entrust it to someone else, will become increasingly important. The world we live in now is shaped by information flows and information hierarchies. And there’s a trend emerging in the machines being built for tomorrow.

Automation is disproportionately affecting already vulnerable and marginalised people. We’re at risk of entrenching - making permanent - existing structural inequalities.

In this new age of machines our power structures might look a little different at the top - tech and online giants replacing mainstream media giants - but it’s the same people left excluded and even more marginalised at the bottom.

The good news is that while there are challenges, there are also great possibilities.

At the same time we’re wrestling with these challenges, systems are being developed to try to address some of the issues of bias and under representation we struggle with in society.

Take recruitment. Challenges addressing gender and racial bias in recruiting processes have been well documented.

Today a range of tools are being developed which try to reduce that particular aspect of recruitment bias. One UK-based startup, Applied, offers gendered language detection in job descriptions and blind application scoring.
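A toy sketch of what gendered language detection can look like - not Applied’s actual implementation; the word lists are a small illustrative sample drawn from research on gendered wording in job ads.

```python
# Toy gendered-language check for a job ad. The word lists are a small
# illustrative sample, not the vocabulary any real tool uses.
import re

MASCULINE_CODED = {"competitive", "dominant", "ambitious", "assertive", "ninja"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "interpersonal"}

def gendered_terms(job_ad):
    """Return the masculine- and feminine-coded words found in a job ad."""
    words = set(re.findall(r"[a-z]+", job_ad.lower()))
    return {"masculine-coded": sorted(words & MASCULINE_CODED),
            "feminine-coded": sorted(words & FEMININE_CODED)}

ad = "We need a competitive, ambitious ninja to join our collaborative team."
print(gendered_terms(ad))
# {'masculine-coded': ['ambitious', 'competitive', 'ninja'],
#  'feminine-coded': ['collaborative']}
```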

Historically in medical research, the treatments that have been developed tend to be most suitable for middle-aged men. That’s because men are overrepresented in Australian clinical trials. Women make more difficult clinical trial participants because we menstruate. The impacts of drugs and other treatments are rarely tested on pregnant women at all.

Now, we have access to data about how people respond to treatments beyond expensive clinical trials. We have digitised scans, x-rays, blood tests, DNA histories. We have smart devices and mobile applications tracking symptoms and reactions in real time. With the right data security mechanisms in place, we can use all of this to devise fairer treatments for everyone.

Artificial intelligence is being used to support and protect marginalised communities. In the UK, volunteers are teaching AI to spot potential slavery sites in satellite imagery - South Asian brick kilns, which are often sites of forced labour.

But when we see and hear stories about how data is being misused and abused, and driving bad automated systems, it makes it harder to have meaningful conversations about these kinds of possibilities. It makes it harder to trust.

A lack of trust is bad for business and bad for government. The economics are rubbish. When trust is low, investment is low and innovation is harder. But the issues we’re dealing with in AI aren’t new issues.

Statisticians, scientists and social researchers have always worked within guidelines for managing data responsibly and reducing bias. Issues around bias and prejudice in decision making aren’t new either - society’s reckoning with them is reflected in our anti-discrimination laws, our employment laws, our consumer rights laws.

What we need for this next machine age is a systems update.

People and organisations around the world are designing ways to handle data ethically, to build ethical machines and drive a fairer future for everyone.

Sage Bionetworks, a non-profit research organisation in the US, is developing design solutions for data sharing and consent - meeting people where they live with the ethics, not just the technology. And they’re building massive, intentionally diverse health datasets for future use as training data.

The Open Data Institute is developing a data ethics canvas to help teams work through the risks and potential impacts of data projects. The ODI has also been leading conversations in the UK and Europe about how openness can help organisations build trust.

Elon Musk is one of the sponsors of a non-profit called OpenAI, committed to researching and promoting AI safety. Just last week Google launched PAIR: the People + AI Research Initiative to study how humans interact with AI.

In New York, AI Now, an initiative co-founded by Australian researcher Kate Crawford, was recently launched to study the social impacts of AI.

There is a gap though. It’s a knowledge gap that exists between people working on AI-related issues and our senior leaders who make decisions about where AI should be deployed.

We don’t all need to become machine learning experts. We don’t need to know how to build a car engine from scratch to know when it’s at risk of breaking down. We have lights that flash on our dashboards, we have smells and sounds that trigger warnings. We understand some of the basic things that keep our cars healthy, and we learn how to respect others on our roads.

We do all need to develop a basic awareness of AI warning signs (dodgy data, unreasonable secrecy about how a system works, over-reliance on automated results over common sense) - the bad smells.

And organisations designing artificial systems or debating their role within different sectors need to develop the dashboard warnings, the indicators, to help people investing in AI check for errors before pressing the accelerator.

We need to give senior decision makers, our politicians and leaders, the skills and information they need to ask the right questions. To follow their noses. To know when AI stinks.

There are also broader policy questions to be debated about what a healthy AI ecosystem looks like, and how it should be regulated. This is where I return to those three themes that will shape the evolution of our AI systems and who gets to benefit from them: access, control and accountability.

Data privacy is no longer the biggest challenge we’re facing - we have other challenges like data monopolies. Technology giants like Google, Facebook and Amazon are sitting on enormous data sources about billions of people, and are acquiring artificial intelligence startups quickly.

We talk about accessing data held outside government for national security purposes, but what about for public interest purposes? Healthcare, transport planning? How do we generate competitive AI economies when who holds data holds the power? And what controls do we put around this?

When we talk about a dystopian future in which man is slave to machines, we tend to have these images of beings with super intelligence and super strength.

I’m more worried about stubborn, short-sighted AI who can’t distinguish me from my data shadows. Who will not listen, can’t be argued with and can’t be changed. Who respond to every request with “computer says no”.

The control we retain as humans - to appeal, to challenge, to choose - will determine the power structures in this new age of machines.

It’s the organisations designing and implementing AI now who will determine the controls we have. What are their responsibilities? How should they be held accountable for systems that make unethical or simply inaccurate decisions?

Access, control, accountability. How we apply these concepts to AI now will shape our future. We can’t simply ignore the bad smells. But we also can’t throw the keys away, halt development. There are risks and questions to be worked through, but there’s also opportunities for AI to be used in genuinely powerful ways to improve our lives.

So. Take a moment, clear your nose. And let’s work on that sense of smell of yours.

Thank you.

Ellen Broad
