We need to design mistrust into AI systems to make them safer


It’s interesting that you say that, in these kinds of scenarios, you have to actively engineer mistrust into the system to make it safer.

Yes, that’s what you have to do. We are currently running an experiment around the idea of denial of service. We don’t have results yet, and we are grappling with some ethical concerns. Because once we talk about it and publish the results, we’ll have to explain why, sometimes, you might not want to give the AI the ability to deny a service either. How do you take the service away from someone who really needs it?

But here’s an example of the mistrust idea using Tesla. Denial of service would work like this: I build a profile of your trust based on the number of times you’ve disengaged or taken your hands off the wheel. Given those disengagement profiles, I can then model how deep you are in that state of trust. We did this, not with Tesla data, but with our own data. And at some point, the next time you get in the car, you would get a denial of service: you don’t have access to the system for period X.

It’s almost like punishing a teenager by taking away their phone. You know teenagers won’t do whatever it is you didn’t want them to do if you tie it to their mode of communication.
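To make the profile-and-lockout idea above concrete, here is a minimal sketch in Python. It is not the researchers’ actual system: the count-based trust model, the threshold, and the lockout period are all illustrative assumptions.

```python
from datetime import datetime, timedelta

# Hypothetical illustration of the profile-and-lockout idea described above.
# A simple count of hands-off-wheel disengagements stands in for a real
# trust model; the threshold and lockout period are invented values.

DISENGAGEMENT_THRESHOLD = 3            # events before service is denied
LOCKOUT_PERIOD = timedelta(hours=24)   # the "period X" without access

class TrustProfile:
    """Tracks one driver's disengagement events and lockout state."""

    def __init__(self):
        self.events = []
        self.locked_until = None

    def record_disengagement(self, when):
        """Log a hands-off-wheel disengagement; lock out on over-trust."""
        self.events.append(when)
        if len(self.events) >= DISENGAGEMENT_THRESHOLD:
            self.locked_until = when + LOCKOUT_PERIOD
            self.events.clear()

    def service_available(self, now):
        """The denial of service: no access until the lockout expires."""
        return self.locked_until is None or now >= self.locked_until

profile = TrustProfile()
for _ in range(DISENGAGEMENT_THRESHOLD):
    profile.record_disengagement(datetime.now())
print(profile.service_available(datetime.now()))  # False: locked out for 24h
```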

What other mechanisms have you explored for building mistrust into these systems?

The other mechanism we’ve explored is what’s called explainable AI, where the system provides an explanation of some of its risks or uncertainties. Because all of these systems are uncertain; none of them is 100 percent accurate. And a system knows when it’s uncertain. So it can provide that information in a way a human can understand, so that people will change their behavior.

For example, say I’m a self-driving car and I have all of my map information, and I know certain intersections are more accident-prone than others. As we approach one of them, I would say, “We are approaching an intersection where 10 people died last year.” You explain it in a way that makes someone go, “Oh wait, maybe I should be more aware.”
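As a rough illustration of that kind of risk-aware explanation, here is a minimal sketch in Python. The map record, its field names, and the alert threshold are assumptions made for the example, not a real vehicle API.

```python
# Hypothetical sketch of an explainable-AI style warning: when map data
# shows an upcoming intersection with a high accident history, surface
# a human-readable explanation. The data and threshold are invented.

FATALITY_ALERT_THRESHOLD = 5  # yearly deaths that trigger a warning

def risk_explanation(intersection):
    """Return a plain-language warning for a high-risk intersection, or None."""
    deaths = intersection.get("fatalities_last_year", 0)
    if deaths >= FATALITY_ALERT_THRESHOLD:
        return (f"We are approaching an intersection where "
                f"{deaths} people died last year.")
    return None  # nothing noteworthy to explain

upcoming = {"name": "example crossing", "fatalities_last_year": 10}
message = risk_explanation(upcoming)
if message:
    print(message)  # nudges the driver to pay closer attention
```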

We’ve already talked about some of your concerns about our tendency to over-trust these systems. What are the other downsides? And on the flip side, are there advantages as well?

The negatives really come down to bias. That’s why I always talk about bias and trust interchangeably. Because if I over-trust these systems, and these systems are making decisions that have different outcomes for different groups of people (say, a medical diagnosis system that performs differently for women versus men), then we are building systems that amplify the inequities we already have. That’s a problem. And when you tie it to things related to health or transportation, both of which can create life-or-death situations, one bad decision can lead to something you can’t recover from. So we really have to fix it.

The positives are that automated systems are, in general, better than people. I think they can be made even better, but personally, there are situations where I would rather interact with an AI system than with certain humans. I know there are issues, but give me the AI. Give me the robot. They have more data; they are more accurate. Especially compared with a novice. It’s a better outcome; the outcome just may not be equal for everyone.

In addition to your research in robotics and AI, you have been a strong advocate for increasing diversity in the field throughout your career. You started a mentorship program for at-risk junior high school girls 20 years ago, long before many people were thinking about this. Why is that important to you, and why is it so important for the field?

It’s important to me because I can identify times in my life when someone basically gave me access to engineering and computer science. I didn’t even know it was a thing. And that’s really why, later on, I never had a problem knowing that I could do it. So I’ve always felt it was only right for me to do for others what had been done for me. As I got older, I also noticed that there weren’t a lot of people who looked like me in the room. So I realized: wait, there’s definitely a problem here, because people just don’t have the role models, they don’t have the access; they don’t even know this is a thing.

And why it’s important for the field is that everyone brings a different experience. Just like I was thinking about human-robot interaction before it was even a thing. It wasn’t because I was brilliant. It’s because I looked at the problem in a different way. And when I talk to someone with a different point of view, it’s like, “Oh, let’s try to combine these and get the best of both worlds.”

Airbags kill more women and children. Why is that? Well, I’d say it’s because somebody wasn’t in the room to say, “Hey, why don’t we test this on a woman in the front seat?” There are a bunch of problems that have killed or endangered certain groups of people. And I would argue that, if you go back, it’s because there weren’t enough people there to say, “Hey, have you thought about this?”, because people speak from their own experience, their environment, and their community.

How do you expect AI and robotics research to evolve over time? What is your vision for the field?

If you think about coding and programming, pretty much anyone can do it now. There are so many organizations, like Code.org. The resources and tools are out there. I would love to have a conversation with a student one day where I ask, “Do you know about AI and machine learning?” and they say, “Dr. H, I’ve been doing that since the third grade!” I want to be shocked like that, because that would be wonderful. Of course, then I’d have to think about what my next job is, but that’s a whole other story.

But I think when you have the tools of coding, AI, and machine learning, you can create your own jobs, you can create your own future, your own solutions. That would be my dream.


