Tuesday, August 22, 2017

The Trouble with AI: No Single Wringable Neck

Some pretty exciting news: as of June 2017 I am now certified in three things -- Open Water Scuba Diving, Internal Medicine, and Scrum Agile Product Ownership.  (The first was the most fun.)

Anyhow, one of the concepts brought up in Scrum Agile is the idea that the Product Owner is the single wringable neck: the one person on the team that the customer/management/anybody can blame when everything goes wrong.  Sounds like a fun job, right?

So why do you need this? Because something always goes wrong. (And thankfully so!)

What exactly does this have to do with AI?  As a doctor working in technology, I am constantly bombarded with really exciting applications of artificial intelligence that are going to make me irrelevant.  These are usually put forward by very well-intentioned, highly intelligent, and well-funded individuals who haven't the faintest idea of what a primary care doctor actually does (shame on us).  In general I have adopted a wait-and-see attitude, although there have been some promising breakthroughs in using AI to accomplish what needs machines the most-- the dreaded paperwork of medicine.

Even outside of healthcare we see robots that can climb stairs, robots that can fold clothing, and, most obviously, self-driving cars.  And we hear the same story-- once these get good enough, they are going to take over.

And in some sense, I agree.  However, the individuals working on these technologies are product people: scientists, engineers, product managers, designers, etc. They are all motivated to make the best/coolest thing possible.  And that is exactly what they are doing.

But this approach alone will fail.

What exactly is the quality threshold at which we will trust AI/robots/computer-driven cars?  We will likely see long stretches of accident-free days like we never have before.  But perfection today is no guarantee of perfection tomorrow.  Somewhere, at some point during this utopia of perfect driving, one of these cars will kill someone.  Or it won't, but we won't be able to shake the idea that it could happen.  These cars may already have far fewer accidents than human drivers do.  But they lack something humans have: a wringable neck.

If you get hit by another driver, we have an elaborate set of mechanisms to punish that person and make you whole(ish) again.  We have someone to blame and a way to blame them.

How do we hold the robots accountable?

I propose that this is the hurdle we must overcome in order to expand the role of robots, AI, and other autonomous devices in our lives.  It is a legal hurdle, a regulatory one, a cultural one, but not a technical one.  No amount of perfection will rid us of the human desire for retribution when harmed.  So to all of the makers out there-- please keep improving the quality, but know that you must offer a neck at some point if you want others to adopt your technology.