Azeem’s Picks: Creating AI Responsibly with Joanna Bryson
What does it mean to design and implement AI responsibly?
Artificial Intelligence (AI) is on every business leader’s agenda. How do you ensure the AI systems you deploy are harmless and trustworthy? This month, Azeem picks some of his favorite conversations with leading AI safety experts to help you break through the noise.
Today’s pick is Azeem’s conversation with Joanna Bryson, a leading expert on the questions of AI governance and the impact of technology on human cooperation.
They discuss:
- The concept of intelligence and AI as a “cognitive prosthetic.”
- The scale of development and the benefits of developing AI.
- The problem of explainability in AI systems.
Further resources:
- A Very Short Primer on AI & IP, including Copyright (Joanna Bryson Blog, 2023)
- Inside the Loop: AI May Launch a Race No-one Can Afford to Lose (Azeem Azhar, 2023)
AZEEM AZHAR: Hi there. It’s Azeem Azhar, founder of Exponential View. We are moving into an age of artificial intelligence. These tools of productivity, efficiency, and creativity are coming on in leaps and bounds even if they remain incomplete and immature today. Implementations of AI are becoming priorities amongst top execs in the largest firms all over the world. Now, one big question is: how do you make sure your AI systems behave ethically and fairly? It’s a huge issue, and it’s one I’ve been exploring since 2015 in my newsletter, Exponential View. And over the years I’ve hosted some of the leading experts on this subject on this very podcast. I know that ethical AI implementation is top of mind for leaders like you. So, to help you think through the questions of responsibility, accountability, and power in the context of AI development, I’m bringing back some of my previous conversations over the next five weeks. This week I’m bringing back my 2019 conversation with Professor Joanna Bryson. She’s Professor of Ethics and Technology at the Hertie School of Governance. Joanna is a leading expert on the questions of AI governance and the impact of technology on human cooperation. In 2010, she co-authored the first national-level AI ethics policy, the UK’s Principles of Robotics. In our conversation, Joanna and I go into the concept of intelligence and AI as a cognitive prosthetic. We tackle the question of explainability and what the development of professional standards for software engineers needs to look like. Here’s my conversation with Professor Bryson. Joanna, I’m really delighted to have your brilliance with me for this conversation.
JOANNA BRYSON: Well, thank you very much.
AZEEM AZHAR: So, you are recognized for your work on artificial intelligence and in particular around ethics. Why is it important for us to think about ethics with this technology?
JOANNA BRYSON: I think, of course, it depends on what you mean by ethics. People are now talking as if there’s a gold rush for the ethics thing. But I think what there really is, is a recognition of the extent of the impact that artificial intelligence is already having on our society. And so as we try to figure out how we’re going to deal with that, a lot of people are looking for how to frame it, and they’re looking for how to control the people who are going to try to control AI.
AZEEM AZHAR: What would you say artificial intelligence is?
JOANNA BRYSON: Well, I find the most useful thing to do, if you’re trying to understand the impacts that AI is already having, is to use a very, very basic definition. It’s actually the same one I was taught as a master’s student in artificial intelligence in the 1990s, and as a behavioral scientist in the 1980s, when I was more of a psychology major as an undergraduate. All that says is that intelligence is behaving appropriately to the context. Or, as a computer scientist, I like to say it’s a form of computation: it’s transforming information, and in particular, intelligence is transforming perception (our awareness of the present context) into action. So if you accept that that’s intelligence, then artificial intelligence is just an artifact, that is, something that a human is responsible for building, that expresses intelligence.
AZEEM AZHAR: One of the things that you didn’t mention though was this idea of there being a goal. And that seems to me to be one of the critical parts of an intelligence. That it’s looking at its external environment, its perceptions. It’s taking those signals and figuring out what it needs to actuate, what it needs to do in order to achieve some end state. Do you think having a goal is an important part of an intelligent system?
JOANNA BRYSON: Well, that’s a really interesting question. Even for humans, who are pretty intelligent as intelligence goes, the vast majority of the behavior we express is behavior we haven’t thought twice about. In fact, we don’t even recognize the extent to which we’re influenced, for example, socially or culturally, or by something that’s happening in our peripheral vision. So an awful lot of our behavior is implicit, and it’s only in these sorts of rare or hard cases that we actually get our explicit mind involved and try to pre-plan something and think of the consequences. Neuroscience shows us that that part of intelligence actually requires having a lot of possible plans and then inhibiting most of them. You’re holding back until something really looks like it passes a threshold where you can behave. We can build stuff like that in AI if we want to, but it would be a mistake to think that, even in humans, that’s the majority of the sources of our behavior. It’s not.
AZEEM AZHAR: Right. But you’ve got me thinking now, because that suggests to me that in some sense we do have some goals, but those goals have been gifted to us, not by the frontal part of our brains that does executive function, but over the eons, over the millions and billions of years, because we clearly do have a certain number of goals.
JOANNA BRYSON: I’ve forgotten who said it, but somebody very smart said that nothing in biology makes sense except in the light of evolution. And so [inaudible 00:05:16] actually, I’ve certainly heard talk about these kinds of things. When you have anything that’s not human, you basically have to assume that the ultimate goal, and literally the definition of ultimate goal in biology, is persistence, right? It’s survival. Some people mistakenly think it’s reproduction, but reproduction is one of the means by which you persist. You also have a lifespan. So you combine those two things and you get a variety of strategies. We are living biological organisms, so we do have those same ultimate goals, which actually motivate quite a lot of the goals we’re explicitly aware of. And again, this is open research. Lots of people try to understand why we are strongly motivated by things like identity or music or art. Certainly when we build an artifact, we are the ones who pick the goal. We are the ones who pick what it’s going to do. And by the definition I gave you, and this goes back to some of the ancient 50-year-old arguments in AI, thermostats are intelligent. So I think some people find this kind of definition of intelligence unsatisfying, because they want only to talk about organisms that choose their own goals. And as you just pointed out, we choose some of our own goals, but others of our goals, the things that give us basic satisfaction, are often things that, as you said, we have at birth.
AZEEM AZHAR: Well, my thermostat has sent about 60 gigabytes of data out of my home network over the past six months. I have no idea who it’s talking to, but it’s certainly got a mind of its own. When you defined AI, you used another word that was quite interesting. You used the word computation, and it made me reflect that sometimes people say, “Well, AI is just maths or math.” Why computation rather than math?
JOANNA BRYSON: Well, math is an abstraction. And I think a lot of people make this mistake. They think that intelligence is just an algorithm. It’s not. An algorithm is a recipe. I mean, there is a recipe, but somebody has to execute that recipe, and that takes time, it takes space, it takes energy. So it’s very important to realize that intelligence, as computation, really is moving from a set of information into action. And that will always take time, space, and energy. That’s why you can’t be omniscient. There’s not going to be some one magic algorithm.
AZEEM AZHAR: You just have to look at the energy consumption of data centers today to know that machine learning is more than maths.
JOANNA BRYSON: Well, people forget about that. And it’s true. The data really matters. The algorithms matter too, but it’s not just the data.
AZEEM AZHAR: People used to think the algorithms mattered a lot, and then deep learning came along and we started to say, “Well, it’s all about the data. The more data you have, you just throw it into a deep neural network.” But if you look at the people who are building the most advanced deep neural networks, they’re not just random folks off the street. They are some of the best computer science researchers in the world, and they’re having to work very hard to get these systems to do what they do.
JOANNA BRYSON: I like the guys at DeepMind, but it is just hysterically funny when they say things like, “Oh, we’re going to create artificial general intelligence.” The reason there was 400 million pounds or whatever it was spent on these 12 guys was because there was no systematic way to figure out how to tune the parameters. So they were hiring 12 guys who were really, really good at tuning parameters. And then those guys claimed, “Oh, look, it’s AGI.” And it’s like, well, if it were AGI, they wouldn’t need you.
AZEEM AZHAR: Well, that’s right. And just for the audience to give a sense of the scale of this, several years ago, the early deep neural networks, I’m talking about 2012, 2013, ’14, might’ve had 10 million parameters that needed to be trained. And today we regularly see networks that have tens of billions, maybe even more parameters that need to be tuned for their specific application. So in that context of a general agreement about maybe what AI is and what it does, what are the kind of benefits that you see from it?
JOANNA BRYSON: It’s insane in a way. Everything that we think about, literally everything we’re aware of has to do with intelligence. So everything you care about, you should be able to do it better if you have basically prosthetic intelligence that you’re bringing in.
AZEEM AZHAR: Let’s think about this AI as a prosthetic. It’s a cognitive prosthetic that is allowing me to offload a bunch of things into a device or onto the cloud. Do you think it’s more important than some of the other prosthetics we’ve had, the pen and paper or the plow? How should we think about it in its historical context?
JOANNA BRYSON: When I’m trying to understand what technology is doing, or rather what we are doing, technology is never the actor; it’s what we are doing with ourselves, to ourselves, and to each other with technology. I go all the way back and say, “Look, all of those things are things that enhance our capacity to express action given our context.” So I don’t draw the line anywhere. I would say all of that’s AI. One of the big dangers people have cited about AI, Nick Bostrom among them, is a superintelligence that’s able to learn to learn, and he talks about the exponential explosion you would expect if you had a system that could do that. Well, that’s what we’ve had since we’ve had writing. Ten thousand years ago, there were more macaques than there were hominids. I mean, hominids were doing okay, but since then, our success has become one of our problems, of course, because now we’re trying to make sure that we’re doing things sustainably.
AZEEM AZHAR: Nick Bostrom, who you just referred to, uses that famous example of the AI whose job it is to build paperclips. It focuses on improving its ability to build paperclips and to create more and more of them. It has this very simple objective, and eventually, in his thought experiment, it uses all the resources in the universe to produce paperclips. I remember reading that a few years ago, and I thought, “Well, how different is that, actually, to a company?” Companies go out and they’re told, “You have to maximize shareholder profit. You have to build as many tires or widgets or whatever it is.” And companies have historically not really known any limits other than their own internal limits: the management doesn’t scale or the shareholders get annoyed. But essentially they’ve single-mindedly pursued a goal without care for the consequences.
JOANNA BRYSON: There’s actually a book about that, by List and Pettit, from 2011. They talk about corporations as AI, and you can also think of governments that way. I have to say I don’t like the Bostrom example, because it just shows absolute ignorance of how we actually build intelligent systems. That isn’t the kind of context we would ever construct in something we would normally refer to as AI, a digital system that’s been built for a purpose.
AZEEM AZHAR: So we’ve got to this stage where we’re exploring this idea of AI as a prosthetic, AI helping us with our maps, or potentially even companies being a type of AI. We’re quite far away from some media perceptions of AI, which are often shiny white robots or glaring red eyes. I think you’ve got some views on the anthropomorphization of AI. How do you think about it?
JOANNA BRYSON: I think when you anthropomorphize AI, you get into a lot of problems, basically because any artifact is something for which people have accountability. AI is a discipline and it’s a software technique, and you can have intelligence that you build into a robot. But the idea that there’s some anthropomorphic thing that is responsible for itself, that’s hurt if you turn it off, that is like a person, is an overextension of our identity onto these things that we want to be powerful allies of ours. And generally speaking, when people pander to that, I worry that it’s a pretty good sign that someone’s trying to get out of responsibility for something.
AZEEM AZHAR: What are the types of ethical challenges that we are seeing as a result of this particular technology?
JOANNA BRYSON: I got into artificial intelligence because I wanted to understand natural intelligence. So I have been using AI to do scientific experiments, including that big bias study that came out, where we showed that you can wind up with human implicit biases in artificial intelligence. So when AI ethics became an important thing, I think the reason I had value is because I was taking the stuff I already knew about natural intelligence and applying what I knew about AI to answering questions about where human intelligence would go, and where human society could go, when you add more intelligence in.
AZEEM AZHAR: One of the things that’s been emerging in the media in the last two, three, four years has been machine learning or AI systems that have come up with socially unacceptable outcomes. Whether it’s racial bias in scoring resumes, misrecognizing people in photographs, or miscalculating credit scores. And I think one of the reasons it’s important to look at this is that as we move humans out of these systems, the systems are becoming quite automated, and they’re going to really run on their own, because ultimately they’re software running in data centers somewhere. And so, unchecked, they could really embed societal biases or bad outcomes, which then might be quite hard to rectify.
JOANNA BRYSON: Right. To me, one of the core principles of AI ethics that companies should be looking at is making sure that there are always humans involved, that your customers can talk to a person. When your customers have to fill in a form, you lose some of them because they aren’t good at filling in forms. But then when you hire people to fill in the forms for those people and you still don’t allow those people the flexibility to really solve problems, then you again are going to lose business opportunities. And so I’m putting this in a very economic sense, but you can also put it in a flourishing sense or a human rights sense: just the freedom to think and be different from each other. That’s something that we aren’t happy unless we have. We want that. It’s central to human flourishing, but it’s also central to safety in society. We need people who think differently. We need to be able to innovate. And innovation comes from variation. Again, that’s literally called the fundamental theorem of biology: the more variation you have, the faster things change. Now, sometimes things can change too fast, so it’s not like there’s a limit in either direction. It’s always about figuring out how to get the balances right. But generally speaking, there’s a big problem if you try to clamp everything down and make it easier to implement.

I mean, the example you use of resumes is really interesting, because of course, in some of the most famous resume studies, we assume at least that it was humans who were showing these biases. You send out identical resumes and you look and see how many interviews you get back, and if they have African-American names, they have something like half the chance of being offered an interview. When you talk under Chatham House rules with HR people from major companies, they say, “Oh, AI is helping us get much better results. We’re finding much better diversity. We’re finding the kinds of people we were missing in our pools of applicants before.” But nobody wants to document it, because they don’t want to admit that their system was flawed before. So it’s very hard to get people to go on the record about that.
AZEEM AZHAR: But there’s an interesting silver lining, because a lot has been written about how we’ve applied these natural language processing systems, let’s call them AI, to large bodies of text, and we’ve identified that the bodies of text reflect certain relationships. So man is to doctor as woman is to nurse. And what I think is so interesting about that is, first of all, most people knew that already. They knew that bias existed in the corpus. You didn’t need to be Einstein or James Joyce to understand that. But the thing that’s interesting is that we made it explicit and we presented it and we said, “Look, here it is, and here is the data that shows that these relationships exist.” Now, are we happy as a society that these are the relationships in the corpus of texts informing our systems? And if we’re not happy with it, we can actually measure whatever changes or adjustments we make to the system, to see whether we’re getting a result we are happier with. Am I just a starry-eyed optimist about that?
JOANNA BRYSON: No, I think we are seeing some people doing that. But I would say you were wrong to think that everybody already knew that bias was there. I think they sort of do know now; that message was picked up really quickly. But initially people were astounded. They sort of thought that prejudice was something bad people had, because they’re innately bad, or because their parents are bad, or somehow they got these bad ideas, but that otherwise everybody’s the same and everybody’s lived experience is the same. And of course, some of us know that our lived experiences aren’t the same as other people’s, but it was something that we couldn’t measure so accurately. We showed that there are things we explicitly think we shouldn’t think, that all nurses are female or something, so we’ve said, “Oh, don’t make that assumption.” But the reason why people do make that assumption may very well be because most of the nurses they’ve met were female.
It’s still important that we explicitly pick goals that are better than our world. I mean, that’s essential. And I think we found out really important things about what it is to be human and how we use our minds.
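The man-is-to-doctor, woman-is-to-nurse pattern can be made concrete in a few lines of code. Below is a minimal sketch in the spirit of the association tests used in the bias study mentioned earlier (Caliskan, Bryson, and Narayanan, 2017); the tiny four-dimensional vectors are invented toy values, whereas a real test would load trained embeddings such as GloVe.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical toy "embeddings"; real studies use vectors trained on large corpora.
emb = {
    "man":    np.array([0.9, 0.1, 0.3, 0.0]),
    "woman":  np.array([0.1, 0.9, 0.3, 0.0]),
    "doctor": np.array([0.8, 0.3, 0.5, 0.1]),
    "nurse":  np.array([0.2, 0.8, 0.5, 0.1]),
}

def gender_association(word):
    """Positive = closer to 'man', negative = closer to 'woman'."""
    return cosine(emb[word], emb["man"]) - cosine(emb[word], emb["woman"])

for occupation in ("doctor", "nurse"):
    print(f"{occupation}: {gender_association(occupation):+.3f}")
# With embeddings trained on real text, "doctor" tends to score positive and
# "nurse" negative: the corpus bias made explicit and measurable, which is
# exactly what lets us audit any adjustments we make.
```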
AZEEM AZHAR: Another challenge that comes up amongst enterprises and corporations thinking about AI systems is the question of explainability and security. Do they know what is going on inside that black box, or is it just a kind of bundle of complicated maths and matrices that no human can really understand? How important is it to make things explainable, and what can we do to achieve that?
JOANNA BRYSON: The GDPR has mandated that we have explainable systems, but what’s going to be worked out in the courts now is what counts as an explanation. Again, remember that humans don’t even recognize that we have both implicit and explicit knowledge and beliefs, which are different from each other. So we don’t understand ourselves, and we can’t explain our own behavior in a way that’s reliable and coherent. That’s been shown over and over again. We are still accountable. If you have a bank, it’s still accountable for what its employees do, even though nobody knows what the individual synapses in their heads do. And similarly with AI and with machine learning, we can say, “Look, there’s a way to use these tools and we can keep track of it.” It’s called DevOps. It’s called logging, right? So you can go through and say: who changed which line of code? Why did they do it? When did they do it? Which libraries did you use to train your machine learning system? Where did they come from? Most importantly, how did you know you were done? What tests did you make the system go through before you released it? And if it’s a system that’s continuing to learn, what kind of monitoring do you do to make sure that it’s still operating within parameters? Now, these are ordinary questions that we ask about software systems, and hardware systems in fact. If you’re running a bank or a car or an airplane, you’ve got a million monitors that are checking to make sure everything’s going okay right now.
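Bryson’s list of questions, who changed which line, which libraries were used, what tests passed, what monitoring is in place, maps naturally onto a release-audit record. Here is a minimal sketch in Python; every field name and value is an illustrative assumption rather than a standard schema.

```python
import json
import datetime

def log_model_release(model, git_commit, author, libraries,
                      tests_passed, monitoring, path="model_audit.log"):
    """Append one auditable record per model release."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model,
        "git_commit": git_commit,         # who changed which line, and when
        "author": author,
        "training_libraries": libraries,  # where did the learning code come from?
        "tests_passed": tests_passed,     # how did you know you were done?
        "monitoring": monitoring,         # is it still operating within parameters?
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage for an imaginary credit-scoring model.
log_model_release(
    model="credit_scorer_v3",
    git_commit="abc123",  # placeholder hash
    author="jane.doe",
    libraries={"scikit-learn": "1.4.0"},
    tests_passed=["holdout_auc >= 0.85", "subgroup_auc_gap <= 0.05"],
    monitoring=["daily_drift_check", "weekly_subgroup_audit"],
)
```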
AZEEM AZHAR: You are bringing memories back to me. In my last startup, we did a bunch of machine learning on lots and lots of social data, on Twitter data. We would have to contend with what we called topic drift. We’d have a classifier saying that when people use particular words, they’re talking about soccer or basketball. But of course, the language and the terms were very, very dynamic, and you could quite quickly have a classifier that wasn’t doing its job very well. One of the things I used to look at was the ROC curve of our classifiers. It’s basically a measure of how effectively a classifier is sorting things into two different classes. And we would look at those ROC curves for different demographic characteristics. So we would be asking, how well is this classifier working for 18- to 30-year-olds compared to 30- to 50-year-olds? It was an attempt, back nearly 10 years ago, to say: are our systems working reliably? Are they doing what we expect them to do? And are we happy with the mistakes they’re making?
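That per-demographic ROC check can be sketched in a few lines with scikit-learn. The data below is synthetic, and the deliberately unequal noise levels stand in for a model that serves one age band better than the other.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
age_group = rng.choice(["18-30", "30-50"], size=n)
y_true = rng.integers(0, 2, size=n)  # ground-truth binary labels
# Hypothetical model scores: noisier, hence less accurate, for the older group.
noise = np.where(age_group == "18-30", 0.3, 0.6)
y_score = y_true + noise * rng.normal(size=n)

# Compare the classifier's discrimination ability across demographic slices.
for group in ("18-30", "30-50"):
    mask = age_group == group
    print(f"AUC for {group}: {roc_auc_score(y_true[mask], y_score[mask]):.3f}")
# A gap between the groups' AUCs, or one that widens over time as topics
# drift, is the signal that the classifier is no longer doing its job
# equally well for everyone.
```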
JOANNA BRYSON: I mean, that’s what intelligence is, right? We’re learning to behave in different ways. When you’re developing a system, and especially when you have a lot of interdependent parts, then if one of them changes for any reason, that little conflict between two classifiers could switch one way or the other, and then everything else changes too. And so this has been a problem for people architecting systems that way. What I’ve seen is people coming back more towards the kind of stuff that I was doing. My PhD was literally about making it easier to develop human-like intelligence, real-time intelligence. And people said, “Well, why are you doing all that architecting? AI should just teach itself like children do.” As if teaching children for 18 years, however many hours a day, is easy.
AZEEM AZHAR: There’s an ongoing debate amongst some AI theoreticians between a view that says these systems should learn everything for themselves, which is the belief the deep learning school of thought comes from, and another school that says, “Look, we actually need to bootstrap these systems with our own experience, our own learned behaviors, and reflect that in the architectures that go into the systems.” That’s sort of the dichotomy.
JOANNA BRYSON: Both of the approaches you’re talking about, you could even think of them as two ways of describing what everybody doing deep learning is doing anyway. It was proven, I think back in the 1980s, that in theory you don’t need deep neural networks. A three-layer network, a multi-layer perceptron with one middle layer, is sufficient to solve any problem. In practice, it’s very unlikely you’ll get to the right solution if you do that. It takes a really long time. So when you create more depth and you set these parameters, how big are the different layers, how do you train these different things, what are they being exposed to, you are providing priors. Of course you’re providing priors. As I said before, you can’t suddenly know everything. We don’t have the computation, we don’t have the energy or the time or the space to know everything. So you’re always using some kind of prior and you’re always introducing some kind of architecture. And the big question is: what kind, and how much do you need? But I was actually going even a little further than that and saying, “Look, you want to constrain the subparts of your system. If you’re figuring out the GPS of your autonomous car, there’s just no reason for that to have anything whatsoever to do with the things that are steering around puddles or whatever.” So of course you modularize that. Back in the nineties, a lot of people were thinking about how you build the modules, how you decompose the problem. And then other people said, “Oh, you silly people. You’re programming.” But the point was that even if you’re using machine learning, you may want to constrain the problem so that the machine learning is more likely to actually solve the problem you set it, and also so that it won’t interfere as much with the other bits of the machine learning.
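The “three layers are enough in theory” point is easy to demonstrate: XOR cannot be computed by any single-layer (linear) model, but one hidden layer suffices. The weights below are wired by hand so the sketch is deterministic; a trained multi-layer perceptron would have to find something equivalent.

```python
import numpy as np

def step(x):
    """Threshold activation: 1 if the input is positive, else 0."""
    return (x > 0).astype(float)

def xor_net(x):
    """One-hidden-layer network computing XOR on 0/1 inputs of shape (n, 2)."""
    # Hidden layer: first unit fires on OR(x1, x2), second on AND(x1, x2).
    W1 = np.array([[1.0, 1.0],
                   [1.0, 1.0]])
    b1 = np.array([-0.5, -1.5])
    h = step(x @ W1.T + b1)
    # Output unit: OR and not AND, which is exactly XOR.
    w2 = np.array([1.0, -1.0])
    return step(h @ w2 - 0.5)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
print(xor_net(X))  # -> [0. 1. 1. 0.]
# Bryson's caveat is the practical one: nothing guarantees that training will
# find such weights quickly. Depth, layer sizes, and modular structure are
# the priors we add to make the search tractable.
```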
AZEEM AZHAR: Well, I think that’s something autonomous vehicle manufacturers are learning quite rapidly: you need to modularize, you need different systems to handle everything from distant pedestrians to puddles or holes in the road to braking, and so on and so forth.
JOANNA BRYSON: I’m just wondering how many billions they could have saved by reading my PhD.
AZEEM AZHAR: That’s true. Well, I think one of the things that happens when you get a technological breakthrough is that people get very excited by what they can do with the tinkering and the building on these parts, and they don’t necessarily see what the limitations are, whether those are within the system itself or whether they are the sort of spillovers into society. And it’s interesting that we are spending some time thinking about the spillovers for this particular technology. I wonder about, given the proliferation of deep learning and data and computation, whether there’s any real way of implementing best practices around the design of AI systems. It feels like this is a powerful technology, but unlike powerful technologies like nuclear energy, you don’t have to be a nation state to be able to play around with it and plug it into the economy or into the real world. How do we create rules of the road for how people or organizations should build these systems or can we indeed do that?
JOANNA BRYSON: I think it’s important to understand, as I said, that intelligence is essential to everything humans do. And in some ways, making a big exception for AI is a mistake. I think it’s about time that we start enumerating more of the professional obligations for software engineering in general. I don’t think we should make a big exception and start getting into fights about whether something’s intelligent enough that we need a different set of standards. Everyone who is a computer scientist or a programmer or in the tech sector should have this kind of training. I mean, look at architecture: it used to be that people could just build anything anywhere if they had enough money, right?
AZEEM AZHAR: And bridges used to fall down a lot.
JOANNA BRYSON: And also, putting a building or a bridge somewhere affects the entire city plan, so it’s affecting everyone. So now we have a rule of law that requires, among other things, that you get planning permission, that buildings are inspected, that architects are licensed. I think we’re getting to that stage now with software, and maybe we’re a little past where we should be: the buildings have fallen over and killed a few people, and we need to get to the point where we have that kind of professional standard. But even besides that, I don’t want to pin this all on the programmers. The point is, there is nothing a human does that doesn’t already have some kind of legal and regulatory framework, especially if you’re selling anything. If you’re making money, then there are already laws that constrain you. And the real issue is: do we have the best laws for helping us handle these situations? I actually think, at least in the UK, and I’m a British academic, so that’s where I know most about it, we aren’t really thinking of adding a lot of laws. We’re thinking more about how to provide the expertise to the regulatory bodies so that they can go and pursue these kinds of cases. I mean, Facebook got the maximum penalty under, unfortunately, a relatively old law, but the regulators were able to say, “Yeah, that was wrong. You can’t do that.” So we already have enough expertise if something’s important enough that it comes to trial. But just as we do with the environment or with medicine, we may want to have technology or AI boards that you can go to, which would spend time both responding when people come with a complaint and proactively seeking to make sure that things are properly constructed and properly maintained.
AZEEM AZHAR: Joanna, that’s a super interesting idea, and I would love to continue this conversation, but I also know that we’re running short of time. Joanna Bryson, it’s been brilliant to talk to you. You are so full of great insights and you have an amazing Twitter game. So how can people stay in touch with you?
JOANNA BRYSON: My Twitter handle is J2Bryson. And I would say there is one other thing. The British government has just given the University of Bath a bunch of money, and it’s not just the government; we’re working with a whole lot of really interesting organizations to create something called the Accountable, Responsible, and Transparent AI Doctoral Training Center. So we are looking for partners. If you have problems that you’d like solved by smart PhD students, or if you are someone who’s thinking it’s a good time to go back and do a PhD, the only requirement is that you have supervisors from at least two different faculties, not just departments but faculties. So it has to be interdisciplinary, and you have to be willing to work on real problems. But that would be another way to get in touch, for a long time: it’d be four years to finish a PhD.
AZEEM AZHAR: Well, it would almost certainly be worth it. Joanna, thanks so much for your time.
JOANNA BRYSON: Thank you.
AZEEM AZHAR: Thank you for listening to Exponential View. If you’re new to my podcast, I encourage you to explore the archives. You’ll find my previous conversations with many of the world’s leading thinkers on the tech revolution. And remember to give us a five star rating on your podcast platform of choice. To stay in touch, follow me on Twitter, I’m @azeem. That is A-Z-E-E-M, and subscribe to my newsletter, Exponential View at www.exponentialview.co. I’m your host, Azeem Azhar. This podcast was produced by Marija Gavrilov and Fred Cassella. Bojan Sabioncello is the sound editor. Exponential View is a production of E to the Pi I Plus One, Limited. And Exponential View is part of the HBR Presents network.