Google trusts that humans will use AI for good

It’s the main stage at The Next Web 2019 conference in Amsterdam. It’s early May and outside the sun has made an appearance, giving the whole site a festival-like feel. Food stalls are doing a roaring trade, and there’s even a waft of weed from somewhere over by the ‘beach’ area (think Ibiza bar, not Mauritius).

Inside the arena, lights and lasers sweep across the crowd as the sound system gets them pumped up. It’s a tech conference, but not as I know it.

Enter Cassie Kozyrkov, the Chief Decision Scientist at Google. She’s impressive, or at least the 40ft versions of her on the screens either side of the stage are.

Cassie has trained over 15,000 Googlers in AI. She says she used to be worried about the implications of it. She talks about how, because AI can scale so quickly, those programming it have their hands on a lever with the power to change the world.

She’s much more relaxed about it now though. It’s just a tool after all, right? Like a pencil. And humans have a long history of developing and using tools to make repetitive tasks less dependent on us. Cassie has faith that we will use this new tool wisely, and Google are waving a $25m financial incentive in front of those who want to use AI for good too.

She sets out the rules that she thinks make AI safe (a rough sketch of what they might look like in practice follows the list):

  1. Wise objective to optimise
  2. Relevant datasets
  3. Well-crafted exams
  4. Prudent safety nets
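
For the sake of illustration, here's a minimal sketch of how those four ingredients might map onto an everyday machine-learning workflow. It's my interpretation, not Cassie's: the dataset, metric and threshold are stand-ins, and a production system would need far more than this.

```python
# A toy illustration of the four ingredients, using scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# 1. A wise objective to optimise: here, catching as many true positives as
#    possible (recall), rather than raw accuracy for its own sake.
# 2. Relevant datasets: data that actually reflects the people the system will
#    be used on (a stock dataset stands in for that here).
X, y = load_breast_cancer(return_X_y=True)

# 3. Well-crafted exams: hold back data the model has never seen and test it
#    there, not on the data it was trained on.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
exam_score = recall_score(y_test, model.predict(X_test))

# 4. Prudent safety nets: refuse to launch if the exam result falls short,
#    rather than shipping and hoping.
MINIMUM_ACCEPTABLE_RECALL = 0.95  # an arbitrary bar, chosen for illustration
if exam_score < MINIMUM_ACCEPTABLE_RECALL:
    raise RuntimeError(f"Model failed its exam ({exam_score:.2f}); do not deploy.")
```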

It’s a tool that fundamentally changes the digital world we inhabit, though, and to some extent it makes step-by-step instructional programming by humans redundant. That’s a potential problem, because that step-by-step process brought oversight with it.

Now she talks about how she tells programmers that they need to spend as much time thinking about the two lines of AI they code as they used to spend thinking about 10,000 lines. Whether they will or not remains to be seen.

The implications of AI launched without due care are unknown - that's inherently scary.

AI in the real world

Pre-AI, you couldn’t tell a computer to go and find you a cat. Now you can. It’s like having a genie in a lamp. We can now build a robot to go around and stroke cats... or kick them (my example, not hers).

What does that mean for ‘normal’ companies though, who aren’t working with AI and don’t plan on finding cats? Well, actually, they are impacted by AI, and will be more and more. If they use automated marketing tools, or customers find them through search or asking Siri, then they’re already under the influence of AI.

We're living in an AI world already, and much like the early days of Search Engine Optimisation, those organisations that know how to exploit that will see the rewards, whilst those who don't, won't.

Getting in early

More forward-thinking organisations have a tool with incredible potential power at their disposal.

Imagine learning what your customers want and need on an individual basis, and being able to react to them as individuals. Or being able to build new products by testing millions of variations, as this whisky company is doing with the help of Microsoft's AI.

There are hundreds of examples of AI being used in businesses today already.

You could also take the relaxed view that we don't really need to worry about any of it, because AI will be so smart it will figure out how to make previously indecipherable content on your website intelligible, for example. Of course, what it then goes on to make of that content, and whether it decides it should be the first or second thing it recommends to its user, is another question. It's a question many will be uncomfortable with.

Your real-world profits will ultimately be decided by a machine programmed by someone in Silicon Valley who is unlikely to have ever met you. If that machine is trained on US data, then it will likely 'reward' the content that US consumers like (if that programmer hasn't spent the time considering the implications of her two lines of code).

The future's already happening

Search engines deciding what they should prioritise based on human signals such as links from other websites pointing to your content, or how you describe it, will soon be considered quaint.

Machines are already deciding what we see, when and how, and they're only going to get better and faster at doing it. We as humans will go along with it because, well, their judgement will prove better than our own.

More and more of us trust Google to tell us how to get from A to B, but what do we lose in handing over this power, and who are we actually trusting? We no longer need to think about how to get to places, or ask people for directions, or look at the sky for the weather. We no longer need to tell our Uber driver we're not in the mood for talking - there's a feature for that.

Even if Cassie is right and humans use AI purely for 'good' outside of the Google bubble, have they spent enough time thinking about what we'll lose anyway?

Facial recognition

Outside of the main arena, in the exhibition hall, Adobe has a stand. They're asking people to stand in front of a camera. Underneath the camera is a screen that shows what the camera sees and overlays this with the computer's best guess at their age, gender and emotions. This is the kind of tech made possible by AI.

It goes on to try to predict what they would like to drink. The few that I see get it laughably wrong. It's always cappuccino, followed by sparkling water.

Seeing the AI fail gives me a strange comfort. The next day I walk past the same exhibit, and it gets one person's drink spot on. It's learnt. This isn't a machine knowing what you want to drink though; it's a tool that a human has built. What else will it eventually know, and who will control it?

The world is struggling with this already. San Francisco has just banned facial recognition tech, whilst a UK police force have rushed headlong into Orwellian nightmare territory, no doubt with good intentions.

I wanted to interview Cassie Kozyrkov, but she wasn’t doing any interviews. If I had, I would’ve pressed her on her seemingly naive optimism that humans are capable of using powerful tools wisely, for good. The human race doesn’t have a great history of doing that, no matter how well-intentioned the people developing those tools started out. Arrows soon moved from hunting food to killing humans, and Einstein’s theories eventually led to nuclear bombs being dropped.

Googlers may get it. They probably do, and it’s good that they’re on the side of ‘good’, at least for now. It’s the rest of the world I worry about: a world that, in recent times, Silicon Valley has fallen woefully short of stopping from using its platforms and technology for bad.

How much responsibility should the people who build tools take for how others use them? It seems Google's answer will be a depressing 'none'.

Further reading:

Do the benefits of AI outweigh the risks?

Google's responsible AI practices documentation

