
The Case for Transparent AI



I can’t go on Facebook without seeing magicians.

I can trace it back to when I watched a video from America’s Got Talent. It started with singers, but soon it moved on to other categories, including illusionists. That was enough to tell Facebook’s algorithm that I had to be interested in magic and that it should show me more of what it deduced I wanted to see. Now I have to be careful, because if I click on any of that content, it will reinforce the algorithm’s notion that I must really be interested in card tricks, and pretty soon that’s all Facebook will ever show me, even though it was all just a passing curiosity.
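That reinforcement loop is simple enough to sketch in a few lines of Python. To be clear, this is a toy illustration, not Facebook’s actual system, but the dynamic is the same: every click multiplies a topic’s weight, and the feed collapses around it.

```python
import random

# Toy feed: one interest weight per topic, all equal to start.
scores = {"singers": 1.0, "illusionists": 1.0, "cooking": 1.0, "sports": 1.0}

def next_post(scores):
    """Sample the next post, weighted by current interest scores."""
    topics, weights = zip(*scores.items())
    return random.choices(topics, weights=weights)[0]

def on_click(topic, scores, boost=2.0):
    """Every click multiplies that topic's weight: the feedback loop."""
    scores[topic] *= boost

# Watch one magic video, click a handful of the follow-ups...
for _ in range(5):
    on_click("illusionists", scores)

# ...and roughly 9 out of 10 suggested posts are now card tricks.
print(sum(next_post(scores) == "illusionists" for _ in range(1000)) / 1000)
# ~0.91 (illusionists' weight is now 32 out of a total of 35)
```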

My experience is not new or particularly unique — Eli Pariser warned us about social media “filter bubbles” back in 2011 — but it’s a handy illustration of the dark places an algorithm can take you. I may get a bit annoyed when Facebook serves up a David Blaine video, but filter bubbles can be downright dangerous, turning otherwise neutral platforms into breeding grounds for all sorts of ugly ideas.

Where does my data go?

The truth is, most people have little understanding of how AI works — they just know that computers are collecting their data. And that can be scary.

Where does that data go, and who has access to it? Is it being used for my benefit, or is it being harnessed to sell me things and increase corporate profits? If you are offering a product or service with AI built into it, these are the questions your users and customers will ask. If someone is entrusting you with their data, you don’t just owe them answers. You owe them transparency.

When we were first designing Charli — our software that uses AI to help customers automate tasks and keep track of all their content and other “stuff” — we envisioned it as a “fire-and-forget” product. In other words, we were asking people to hand their data over to Charli and let the AI worry about it.

It was a nice idea, but we soon realized that a lot of people aren’t comfortable with that opaque, black-box approach. They are afraid to give control of their content to a machine, and understandably so. Entire film franchises have been built around this fear, and while The Matrix and The Terminator are certainly entertaining, no one wants to live in them for real.


AI is inherently biased

Sci-fi nightmare scenarios aside, we want to program our machines to learn and evolve, but we want to do it in a measured, predictable way. My magic-filled social media filter bubble might irk me, but it’s how algorithms work today. If you’re building a network of AI models to automate a specific set of tasks for your customers, they will likely appreciate the fact that the AI has learned enough about them to be reliable. If, for example, someone is counting on an app to retrieve their data when they ask for it, they don’t want any surprises. They just want it to work.

That’s where bias comes in. All sorts of studies and articles have been written about bias in artificial intelligence, and it certainly can be a problem, but the fact is that AI is inherently biased. That’s because AI depends on models and training data developed by human beings who carry biases of their own. And that inherent bias often works to the user’s advantage, as when it lets the AI learn how to work for individual users, each of whom has their own set of preferences.
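A trivial sketch makes the point. Train the same ranking logic on two users’ histories and you get two different, usefully “biased” models. This is illustrative Python only; no real recommender is this simple:

```python
from collections import Counter

def learn_preferences(history):
    """A user's 'model' is just the frequencies of what they engaged with."""
    return Counter(history)

def rank(items, prefs):
    """Rank items by learned preference: the model's bias is the feature."""
    return sorted(items, key=lambda item: prefs[item], reverse=True)

items = ["invoices", "contracts", "photos"]

# Two users, two histories, two deliberately different biases.
alice = learn_preferences(["invoices", "invoices", "contracts"])
bob = learn_preferences(["photos", "photos", "photos", "contracts"])

print(rank(items, alice))  # ['invoices', 'contracts', 'photos']
print(rank(items, bob))    # ['photos', 'contracts', 'invoices']
```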

To broaden our horizons, we will have to introduce diversity into AI, just as we have to introduce it into our real lives.


Give the customer a steering wheel

Let’s change gears for a second. The era of the fully self-driving automobile has yet to arrive, but it’s not too far down the road. There are all sorts of designs in the works, and some of them don’t even have steering wheels.

Brilliant, talented engineers are hard at work on these projects, and I trust them, up to a point. But if I’m in a self-driving car and something goes wrong, I want to be able to grab a steering wheel and pull that thing over to the side of the highway. In short, I want the option of turning off the AI.

If you want your customers to trust you with their data, give them a steering wheel and put them in the driver’s seat. In our case, that meant telling our AI that it could not do anything with a user’s content without first storing that content in Google Drive. That way, users always know where their stuff is, and they are always ultimately in control. They may grant Charli permission to access their data and automate certain processes around it, but they can also see what is happening and take over whenever they want.
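In code, that policy boils down to a hard rule: persist first, act second, and keep the off switch within reach. Here is a minimal sketch of the pattern; the class and method names are hypothetical, not Charli’s actual API:

```python
class TransparentAssistant:
    """Illustrative 'store first, stay in control' pattern (hypothetical API)."""

    def __init__(self, drive, automation_enabled=True):
        self.drive = drive                  # user-visible storage, e.g. a Google Drive client
        self.automation_enabled = automation_enabled

    def ingest(self, content):
        # Rule #1: nothing happens to content before the user can see it.
        file_id = self.drive.save(content)  # user can open, inspect or delete it anytime
        if self.automation_enabled:
            self.automate(file_id)
        return file_id                      # the user always knows where it lives

    def automate(self, file_id):
        ...  # tagging, filing, reminders; every action traces back to the stored file

    def disable_automation(self):
        # The steering wheel: the user can switch the AI off at any time.
        self.automation_enabled = False
```

The point of the design is that the stored file, not the AI, is the source of truth; the automation is always optional and always inspectable.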

Artificial intelligence has come a long way, but it’s still in its early stages. We’re just now scratching the surface of what AI can do, and we’re a long way from finding all the answers to the problems of bias and diversity. What we can do, however, is offer our customers transparency and control over their own stuff. There’s nothing magical about that; it’s just good business.

