About this Episode
These days, if we want to know something, we ask Alexa or Siri. We increasingly rely on artificial intelligence to help us make big (and sometimes wrong) decisions about who gets a job, who gets a mortgage, and who gets sent to prison. Few people realize just how much AI is shaped by existing information and the limitations of the people building it. We talked to experts from MIT and Cornell to understand how AI leads to unfair decision making and exacerbates bias—and what we can do about it.
Transcript
Rachel Thomas: Welcome to Tilted, a Lean In Podcast. Each week we explore the uneven playing field—the gender bias that lurks in unexpected places, the impact it has on our everyday lives, and what happens when women lean in and start driving change. I’m your host Rachel Thomas, co-founder and president of Lean In.
Rachel Thomas: It would be hard to imagine our lives without technology. If I want to know the weather, I ask Alexa. If I forget the capital of Kentucky, Google reminds me. And artificial intelligence in particular is changing everything. We rely on it to curate our lives and help us make decisions—from what to buy to who to hire. The problem is we often forget that it relies on existing information and is shaped by the strengths and limitations of the people building it.
Rachel Thomas: And that’s about where my understanding of AI starts and stops—I know it’s going to dramatically change our lives and I know it’s far from perfect. But that’s it. So I reached out to some experts to learn more about AI and what it means for us—and spoiler alert, the stakes are higher—much higher—than I thought.
Rachel Thomas: I am here with Meredith Broussard, a data journalist, author of Artificial Unintelligence: How Computers Misunderstand the World, assistant professor at the Arthur L. Carter Journalism Institute at NYU, and an affiliate faculty member at the NYU Center for Data Science. Welcome, Meredith.
Meredith Broussard: Thank you for having me. It’s a pleasure to be here.
Rachel Thomas: Also sitting down with us is Rediet Abebe, a Ph.D. student in the Department of Computer Science at Cornell University and co-founder of two groups: Mechanism Design for Social Good and Black in AI. She has received the Google Generation Scholarship and is a Facebook Fellow. Thank you for joining us.
Rediet Abebe: Thank you for having me. It’s great to be here.
Rachel Thomas: And last but certainly not least, Stephanie Dinkins, a transdisciplinary artist focused on artificial intelligence as it intersects with race and gender. Stephanie is also an associate professor of art at Stony Brook University. Welcome.
Stephanie Dinkins: Thank you. It’s great to be here.
Rachel Thomas: So ladies, let’s start with the basics. How does artificial intelligence work? What is artificial intelligence?
Meredith Broussard: I’ll be happy to take that one. We talk a lot about AI but many, many of us are unsure what it actually means and how it really works. The easiest way to think about it is that artificial intelligence is a branch of computer science, the same way that algebra is a branch of mathematics. And when people say AI nowadays, usually what they mean is machine learning and machine learning is a subfield of artificial intelligence. Now there are other subfields like expert systems or natural language processing but machine learning is the one that's most popular right now, and so that’s generally what people mean.
Meredith Broussard: And then another important distinction to keep in mind is what’s real and what’s imaginary about AI. So probably when you think about artificial intelligence, a Hollywood image pops into your mind—maybe you think about Arnold Schwarzenegger as the Terminator, maybe you think about that movie Ex Machina, or maybe you think about killer robots and the stock-art robot guy you see on the Internet, right? But those things are purely imaginary, right? There are no robots who are going to take over the world, even though it’s so fun to think about. What we actually have, what's real about AI, is math—really, really gorgeous and complicated math.
Rachel Thomas: So Rediet, could you unpack a little bit? What are some very specific examples in our everyday lives of when we interact with AI?
Rediet Abebe: There are so many examples, and I'm really glad that we've already narrowed in on the everyday AI rather than the sort of Hollywood, I'm-watching-a-movie AI—the things that are fun to think about but will probably not happen any time soon. So if you're thinking about that everyday AI, certainly one example that a lot of my friends deal with every day is autocorrect, right? When you're typing, there's the autofill, and then it autocorrects maybe something that you typed. Sometimes it's almost magical—you type the first letter and it knows exactly what you want it to say—and other times it's not. Behind that, what's happening is a sort of pattern recognition, right?
Rediet Abebe: This AI, behind the scenes, has learned from human language—maybe even from your own language and your own use of text—to predict what you might be saying next. So it's looking at a string of words that you've already typed and then trying to predict what the next thing you would say might be. That is part of many of our everyday lives. A lot of us also use search engines—that's another one where you type in something, sometimes just two words, and it shows you exactly what you want to see on the very first page; it highlights it at the very top of the first page. What's happening there is also something that's really remarkable and, as Meredith said, really beautiful. It's going through the millions and billions of web pages that are out there and finding the one that you would actually find the most helpful.
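To make the pattern-recognition idea concrete, here is a minimal sketch of a toy next-word predictor built from bigram counts. It is purely illustrative—real keyboard and search systems use far larger models and data—but the underlying idea of learning which word tends to follow which is the same.

```python
# Illustrative sketch only: a toy bigram "autocorrect" predictor.
from collections import Counter, defaultdict

corpus = "see you soon . see you later . talk to you soon . see you soon".split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = next_word_counts.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

print(predict_next("see"))  # -> "you"
print(predict_next("you"))  # -> "soon", the most common continuation in this tiny corpus
```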
Rachel Thomas: So before we get into bias let's do a little bit on how is AI helping make the world a better place? Anybody?
Rediet Abebe: I can get started ... wow, I was really hoping not to take that tech-optimist stance here, but think about how much easier it is now for people to find knowledge on the web. This was not something that was accessible to people before. I have a lot of friends who are growing up all over the continent of Africa who don't have access to the kind of education that I have access to here, but they're able to learn so much from the web and make really great things that help their communities. And that's facilitated through AI. You asked the question and we all sort of paused, and I can tell you why I paused. The reason is that even though I know there are so many ways in which AI is more helpful to us now than before, I see a gap between what we could be using it for and what we are currently using it for. And I think when we use AI for good, we're not using it for the good of all, we're using it for the good of some. That gap is something that I grapple with in my research and in my day-to-day life, and it's something that I would love to talk more about.
Rachel Thomas: So, I see some vigorous head shaking. So, jump in, ladies.
Meredith Broussard: I think that is really well put—the idea that AI is being used for the good of some versus the good of all. One of the things I talk about in my book is this idea of technochauvinism: the idea that technology is always the highest and best solution, and that using a tech-based solution is better than using a people-based solution. But, really, it should be about using the right tool for the task. One of the ways we see this playing out is in hiring that uses AI. You probably heard recently that Amazon was using an algorithm—using AI—to try and sort through resumes. The algorithm was trained on a data set of resumes of people who were already successful, and then they said, “Alright, go find us resumes that correspond to people who are going to be successful.” But what the algorithm did was discriminate against women—it discriminated against anybody who went to a women's college. And so the AI was simply reproducing the inequality that already exists in the world, because if you go and look at the upper echelons of power in Fortune 500 companies, those upper echelons of power look really homogeneous.
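As a rough illustration of the mechanism Meredith describes—not Amazon's actual system, and with entirely synthetic data—here is a toy sketch in which a model trained on historically biased hiring outcomes learns a negative weight on a proxy feature such as attendance at a women's college.

```python
# Toy illustration (not Amazon's actual system): a model trained on biased
# historical hiring outcomes learns to penalize a proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(size=n)                   # true ability, equal across groups
womens_college = rng.integers(0, 2, size=n)  # 1 = resume mentions a women's college

# Historical "hired" labels reflect past bias: equally skilled candidates from
# women's colleges were hired less often.
hired = (skill - 0.8 * womens_college + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, womens_college])
model = LogisticRegression().fit(X, hired)

# The learned weight on the women's-college feature comes out negative: the model
# has absorbed the historical bias and will reproduce it on new applicants.
print(model.coef_)
```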
Rachel Thomas: Yeah, it's mostly men, and mostly white men.
Meredith Broussard: Exactly. And, yes, you could write an algorithm that just sorts for Ivy League–educated white men, but there are lots of other people in the world who are also really, really talented. So, when we just use computational systems, and when we assume that the computational system is going to do a better job than a human, we’re being technochauvinists. Now, we can also look at an example of AI that goes a little bit further than just discriminating.
Meredith Broussard: There's a company called Atipica that is run by a woman named Laura Gomez, and what Atipica does is use AI to predict the race and gender of applicants, and then it goes one step further and assembles a more diverse applicant pool for you. It says, “No, there is structural inequality in the world, and we're going to use AI to try and remedy that structural inequality.” So, I think we need to push past the naive technochauvinist view of “Oh, if we just use the computer it will be better,” and have a more nuanced approach and say, “How can we use AI to achieve the world as it should be, and not necessarily replicate the world as it is?”
Stephanie Dinkins: Yeah, that’s wonderful. That was my hesitation as well—thinking about this world, it depends whose hands the technology is in, in a lot of ways, and what gets done and what it’s used for. So every time I start thinking about AI in relation to communities and what it can do, it’s always the double-edged sword: is it going to be used for something that betters us and makes us more of what we want to become, or is it something that’s going to be used to just continue what’s going on? And even in systems that are positive, what happens when somebody else gets that positive system and starts using it to weed out rather than to weed in? How do we start to really think those systems through, and how we’re using them? So, for me, it always comes back to, “Well, AI is, or can be, good. I’m optimistic about it because I don’t really think I have a choice, and I think lots of us need to be thinking about these issues. But also, how do we really drill down into who we are as people using these systems that are working side by side with us, to make the world exactly what we would like it to be, rather than what it is at the moment?”
Rachel Thomas: What are some of the most startling examples of bias in AI that you see? Let’s paint a picture for people listening so they know how significant this can be.
Meredith Broussard: One example that I like to point out is that facial recognition systems do not work as well for women as they do for men, and that they do not recognize darker-skinned faces as well as they recognize lighter-skinned faces. Now, for many years, this problem was ignored, in part because we don't have diverse teams making image recognition technology. And it's not enough to just make the facial recognition systems recognize darker-skinned people—we also have to think about how these systems are being used. So, for example, people are increasingly trying to use facial recognition systems in policing, and policing technologies have been disproportionately employed against communities of color. It's one of the reasons we have such a terrible problem with mass incarceration in the US. So, we really have to be careful about how we use AI, even if we're making more inclusive systems.
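One reason this kind of gap can go unnoticed is that accuracy is often reported only in aggregate. Here is a minimal sketch, with made-up data, of why audits instead break error rates out per subgroup.

```python
# Minimal sketch: aggregate accuracy can hide large gaps between subgroups,
# which is why audits report error rates per group (the data here is made up).
from collections import defaultdict

# Each record: (group, true_label, predicted_label)
results = [
    ("lighter-skinned men", 1, 1), ("lighter-skinned men", 0, 0),
    ("lighter-skinned men", 1, 1), ("lighter-skinned men", 0, 0),
    ("darker-skinned women", 1, 0), ("darker-skinned women", 0, 0),
    ("darker-skinned women", 1, 0), ("darker-skinned women", 1, 1),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, pred in results:
    total[group] += 1
    correct[group] += int(truth == pred)

overall = sum(correct.values()) / sum(total.values())
print(f"overall accuracy: {overall:.0%}")                    # looks acceptable in aggregate
for group in total:
    print(f"{group}: {correct[group] / total[group]:.0%}")   # the gap only shows up per group
```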
Rediet Abebe: Yeah, I really like that example. One thing that I do in my work is think about the more invisible uses of AI—where there will never be an instance so bad that we'll have a breaking-news article about it, but it's still working in these very insidious ways day to day and undermining the quality of justice that we've all been fighting for. One such example is in housing. Over the past year and a half, I’ve been learning about how algorithmic decision-making tools and AI are being used in filtering housing applicants. I’m focused on how this is playing out in New York City, and we know so little about how landlords and these big management companies are using algorithmic decision-making tools to facilitate the process of screening out applicants.
Rediet Abebe: We know that there are third-party vendors that they’re using. We don't really know what algorithms they're using, and we don't really know whether they're violating laws that we've put in place for good reasons—laws against taking into account protected categories like race or marital status or immigration status or any of these things. And that worries me. I think about the many people who may have been denied housing as a result of this, not because they shouldn't have been given housing but because they happened to be black, or happened to be an immigrant, or happened to be a woman, or a mother, or married, or whatever it is.
Stephanie Dinkins: Yeah, and I would say that you could extrapolate that to many different arenas—education, applying for schools, testing, banking. The same patterns run through all of them, and we're dealing with the same kinds of issues. And for me, it's about how we break through those.
Rachel Thomas: I mean, this is amazing—just to step back, we're literally saying it impacts who gets jobs, who gets houses, who gets loans, who's put in jail and who isn't. You really begin to internalize the huge impact this has on everybody's lives. So, one of the things I'm also trying to understand—unpack this for me—is that it actually exacerbates the problem, right? We're kind of pushing bias forward in a way that is probably even more damaging in many cases than the bias in the systems today. Does that make sense?
Rediet Abebe: It’s basically human bias on steroids is how I think about it.
Rachel Thomas: Much better put than I said it. But yes, this idea that it's exacerbating a problem we talk about so much as a culture and are trying to solve—and, yet, we're building products and tools and algorithms that are actually just going to make it worse.
Meredith Broussard: Yeah. When we act as technochauvinists, when we say, “Oh yeah, it's going to be better if we build a world where the computers make all the decisions”—well, actually, all we're doing is taking the existing inequality in the world, embedding it in code, and making it harder to see and impossible to eradicate.
Stephanie Dinkins: Yeah. And it’s so interesting because, when we’re looking at AI and computational systems used for profit, the end desire is to make money, not to make a better society. So, how do we start thinking around that or through that, so that we’re also asking, “Well, how's the algorithm working? Are we demanding transparency from our algorithms? What are we doing with this data? Are we just feeding years and years of old histories into systems that then use those to make current decisions? How are we starting to mediate and mitigate some of that?”
Rediet Abebe: I wanted to go back to a point Stephanie made about who has power, who is behind these algorithms, and how they are making decisions. I think that's very important, because at the end of the day, AI is neither good nor bad—it is what you use it for. But one thing that we haven't done is talk about what it can’t do, and what it can’t do is give you a question to answer. You have to come up with a question that you want to answer, and you have to set the objective yourself.
Rediet Abebe: That is where the human comes in—you as a researcher or a practitioner of AI come in and you say, “I would like to solve this problem and I would like to maximize this, or I would like to filter in this way,” whatever it is. You get to decide what data you’re going to use for that. And I think it would be disingenuous to assume that people’s lived experiences do not matter here—it absolutely matters what your day-to-day experiences are and what your background is, in the questions that you're setting and the objectives that you set out to maximize. And so that goes back to the question of who this field is working for. Because if you have people who have very similar experiences and who have a ton of power in this field, then you're just going to be serving that subcommunity rather than everyone.
Rachel Thomas: So, what are some of the stats? Who's making AI right now? What percentage are men, what percentage are, I'm assuming, white men? What are the numbers?
Meredith Broussard: It's pretty much white men.
Rachel Thomas: And I know that you are also passionate about, and speaking publicly on, the issue of getting bias out of AI and really changing the input so we get better output. What are other people doing, what can we do, what does the solution look like?
Rediet Abebe: So, I wanted to talk a little bit about this idea of gatekeeping that's happening. If you are a woman and you look around you and see that a lot of the people taking classes with you are not women—that a lot of them are men—then you might already be thinking, “Maybe this is not for me, maybe this is not something I’m cut out to do.” And this is what happens in computer science programs here at Cornell and in other departments: when you first start, actually, it's almost even—you almost have a 50/50 breakdown—and then it just keeps going down and down and down, and by the time you get to graduate programs or large tech firms or whatever, you're down to 10 percent, sometimes less.
Rediet Abebe: I’m one of the founders of the Black in AI group, which is a group we started because many of us were feeling very isolated in our experiences. We would go to conferences and we wouldn’t see that many black people; we’d walk around our departments and we wouldn’t see that many black people. There were just all these things that were making us feel unsupported and isolated, so we started this group because we wanted to create a community where people can come together—and these are people that I would not have run into in my day-to-day. I would not have met them at conferences, because maybe they didn't have the opportunity to show up to those conferences, and so on. That’s something we're doing—we're trying to undo as much of the gatekeeping as we can. There are a lot of people working on diversifying the field, because there is awareness that whoever gets to ask the questions and set the objectives has a lot of power, and we want to make sure that people have equal access to that power.
Rachel Thomas: What is interesting for me, at the risk of oversimplifying, is that the conversation about how you get bias out of systems moving forward is actually leading people to come together and talk really openly about bias at the beginning of the process. In other words, I think it's moving the discussion about unconscious bias and stereotypes and how they work forward, because it feels dire—like if we don't solve it now, we're just going to push it forward and exacerbate it. So, I think it's really interesting that it's bringing industries together to talk about bias in a way I don't think we've talked about it in the past.
Stephanie Dinkins: Yeah, if we allow ourselves, this is such a huge opportunity, because we are willing to think about these things that we're hard-coding into systems, and it does feel super dangerous. I've been in rooms where we're having discussions that felt impossible before, and that's good—right now, if we can put them to real use. We'll see. But I'm optimistic about that; there's something that feels good about it, if we can get it right.
Meredith Broussard: Agreed.
Rachel Thomas: I have learned more in the last—I don't know how long we've been talking, it feels like it was 10 minutes—it was so interesting, but thank you so much. I've learned a lot and I think our listeners have learned a lot as well. And the next time I interact with Alexa, or I'm typing and autocorrect is telling me what I'm supposed to say next, I will think about it differently. So thank you for that.
Rachel Thomas: I'm here with Max Tegmark. He is an MIT physics professor, co-founder of The Future of Life Institute and author of two books, Our Mathematical Universe and Life 3.0: Being Human in the Age of Artificial Intelligence. Thanks for joining us, Max.
Max Tegmark: Thank you. It's a pleasure to be here.
Rachel Thomas: So, about a week ago, I spoke to this incredible panel of experts about gender bias in AI and bias more broadly in AI. And I asked them, “Do you think AI can be a force for good?” And they said that was a really hard question to answer. What do you think?
Max Tegmark: Yeah, I'm not surprised at all that they stumbled on your question there because the fact is, AI is a technology and technology is not automatically a force for good or a force for bad - technology is a tool that amplifies our ability to do good or bad. That's why I'm so interested, not just in making the technology more powerful, but also developing the wisdom that we need to direct it to good uses.
Max Tegmark: When people ask me if I am for AI or against AI, I always ask them if they're for fire or against fire. Obviously you're for fire to keep your home warm in the winter and against its use for arson. But that isn't something that just happens automatically; we instead created a society where we strongly discourage setting other people's houses on fire—to the point that we’ll lock people up if they do it—and we incentivize good uses. In the same way, we need to very carefully think through how we are going to steer this ever more powerful AI technology to ensure that it becomes a force for good. And I think we've been too sloppy with that so far, frankly. You would never build a rocket and just focus on making the engines powerful without thinking about how you're going to steer it and where you want to go with it. But I think we're making this mistake with AI—we're so in love with just making the tech itself more powerful that we haven't paid enough attention to how to steer it towards good uses, or thought hard about what sort of society we're ultimately hoping to create if we succeed.
Rachel Thomas: Sticking with your earlier metaphor, that there’s fire for good and fire for bad, what should we be concerned about when we think about AI?
Max Tegmark: Artificial intelligence enables us to solve all sorts of difficult problems we couldn't solve before. In principle, it can be used to create a fantastic future where we resolve the toughest problems that face us today and tomorrow—cure diseases that were supposed to be incurable, solve our climate problems, lift people out of poverty—or it can be used to start a race-to-the-bottom arms race and new classes of horrible ways of killing people anonymously, which is just going to make everybody worse off. The intelligence itself is the power. And intelligence gives great power that I think you want to make sure to harness for good use.
Rachel Thomas: You mentioned that we're moving too quickly—there’s kind of a race to make AI as powerful as possible without perhaps thinking through how to build AI that amplifies our best selves, for lack of a better way to frame it. Can you walk through how it is moving too quickly? And then how does the industry need to start thinking differently?
Max Tegmark: So I'm not saying that the technology is developing too quickly per se. What I'm saying is that I'm optimistic that we can create an inspiring high-tech future with advanced AI, if we win this race between the growing power of the AI and the growing wisdom with which we manage it. One strategy for trying to win that race is to slow down the progress of the technology. I don't think that's realistic. I don't even think it's desirable. A better way to win this race is to speed up how we accumulate this wisdom.
Max Tegmark: So, for example, I've worked a lot with the Future of Life Institute to organize conferences and bring together AI researchers and other thinkers who think precisely about these questions of how to steer the technology in good directions and where we want to go with it. I think that's also much, much easier to do because, since this sort of wisdom creation has been largely neglected, a small effort can make a great difference. You asked concretely what that boils down to in society—there are many, many examples. You started by asking about bias, and you’re probably aware of how people rushed to deploy AI systems in American courtrooms that advise on whether to give people probation or not. Judges trusted this a lot because it was new and shiny—“Oh, AI, it must be good”—without realizing that it actually was racially biased, and without realizing that they hadn't understood well enough even how it worked.
Max Tegmark: We also have to, as AI keeps getting smarter, figure out how to make machines actually understand and adopt our human goals, because today's machines are so dumb they have no clue what we actually want. For example, when someone builds an airplane, the engineers never want that airplane to fly into a fixed object, yet that's what the September 11 hijackers were able to do. And more recently, as was in the press, the Germanwings pilot Andreas Lubitz flew his passenger jet into the Alps, killing over 100 people, and he did it just by telling the computer to do it. Even though the computer had the full map of the Alps and GPS and everything, the computer just said, “OK.” So, people hadn't even bothered teaching this flight computer kindergarten ethics of the most basic kind. We have to get into the habit now, as we put more intelligence into machines, of not just making them smart but also being good parents and teaching them right from wrong at a level they can understand. That would eliminate problems like this.
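As a hypothetical sketch of the kind of basic constraint Tegmark is describing—the names and numbers are invented for illustration, and real avionics are far more complex—the function below refuses any commanded altitude that would put an aircraft below the surrounding terrain.

```python
# Hypothetical sketch only: a flight computer that validates commands against
# a basic "don't fly into terrain" constraint before accepting them.
def accept_altitude_command(commanded_altitude_ft: float,
                            terrain_elevation_ft: float,
                            safety_margin_ft: float = 1000.0) -> bool:
    """Reject any commanded altitude below the terrain elevation plus a safety margin."""
    return commanded_altitude_ft >= terrain_elevation_ft + safety_margin_ft

print(accept_altitude_command(commanded_altitude_ft=100, terrain_elevation_ft=6000))    # False: refused
print(accept_altitude_command(commanded_altitude_ft=12000, terrain_elevation_ft=6000))  # True: accepted
```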
Rachel Thomas: That makes a ton of sense. It's one of those things that’s so obvious when you say it, and yet we're not doing it. One of the things that's hard for me to wrap my head around—and I'm sure you have a really strong point of view on a solution—is that undoubtedly the data sets that AI is built on are going to be biased and imperfect, because we are biased and imperfect as a culture. How do we get bias out of past data so that we can set a new, fair, and unbiased standard going forward? Practically, what does that look like, and how hard is it to do?
Max Tegmark: So, many of the AI systems that we have today that learn from data unfortunately have the problem that if you give them biased data, they're going to behave in biased ways. But it doesn't have to be that way. Maybe we should take a step back and just talk a little bit about what artificial intelligence is. I define intelligence simply as the ability to accomplish goals—and the more complicated those goals are, the more intelligent—so artificial intelligence is just intelligence that's not biological. It's just intelligence, it's just about information processing, and it doesn't matter whether it's being processed by carbon atoms inside of cells inside of your brain or by silicon atoms inside of some computer. And we already know that people like you can be fed all sorts of very biased data and nonetheless come to reasonable conclusions. So, it's perfectly possible to build intelligent systems that can learn to become unbiased from biased data.
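Tegmark doesn't name a specific method here, but one well-known technique for learning fairer models from biased data is reweighing (Kamiran and Calders, 2012), sketched minimally below: each training example is weighted so that group membership and outcome become statistically independent in the reweighted data.

```python
# Minimal sketch of reweighing (Kamiran & Calders, 2012), one approach to
# training fairer models on biased historical data.
from collections import Counter

# (group, label) pairs from a biased historical dataset
data = [("a", 1)] * 60 + [("a", 0)] * 40 + [("b", 1)] * 20 + [("b", 0)] * 80

n = len(data)
group_counts = Counter(g for g, _ in data)
label_counts = Counter(y for _, y in data)
pair_counts = Counter(data)

def weight(group, label):
    # expected frequency (if group and label were independent) / observed frequency
    expected = group_counts[group] * label_counts[label] / n
    return expected / pair_counts[(group, label)]

for g in ("a", "b"):
    for y in (1, 0):
        print(g, y, round(weight(g, y), 2))
# A downstream model trained with these sample weights sees group "b" positives
# up-weighted and group "a" positives down-weighted, counteracting the skew.
```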
Rachel Thomas: Isn't it also an issue around who's developing the AI? I have to imagine that a lot of these teams are not particularly diverse and inclusive, just given what technologists generally look like these days. So, how big an issue do you think that is?
Max Tegmark: I think it's a big issue. When I went to the last big AI conference last year, with roughly 8,000 people, it was the most gender-imbalanced conference I have ever been to in my life—and that's saying a lot. I'm a physicist; I usually go to physics conferences, and this was even worse. So we obviously have a long way to go to get more diversity into the people who are building AI. But I think even before that, we also just need to diversify the group of people who are in this conversation.
Max Tegmark: The reason I wrote the second book you mentioned, Life 3.0, was precisely to make it easier for everybody to get into this conversation and understand how this is affecting them. Because if we fail to diversify the conversation itself about what kind of society we're going to be building with AI, you know who's going to make all the important decisions about our future—it's going to be some incredibly non-representative group of dudes in some little backroom, maybe in a tech company somewhere. And I just don't feel that that's right—it's not morally right, because it's everyone's future, and it's not even right in the practical sense, because those people don't have any special qualifications to know what makes humans happy more broadly.
Max Tegmark: It's so crucial that we get people from all walks of life, from all diverse backgrounds, in on this, because this is everybody's future. Take inequality, for example: AI by default is just going to make it worse, because when you replace human workers with machines, the money that used to go in salaries to everyday workers now goes to those who own the machines instead. So, how are we going to deal with this? My opinion is that if you can grow the overall economic pie tremendously with AI and have machines produce all of this stuff we need, it's very easy to figure out some way of sharing this wealth so everyone gets better off. Yet, we're doing the exact opposite. And if in the future we still can't figure out how to make everybody better off with this, then I think shame on us, and we need to have a healthy discussion about the best way to do it.
Rachel Thomas: Are any companies or organizations out there getting this right?
Max Tegmark: We organized the first ever conference almost five years ago now where we brought together AI researchers to talk about this wisdom stuff. And since then, there is this thing called the “Partnership on AI,” where leading tech companies from around the world have gotten together to discuss precisely these issues about how you can make sure that this technology becomes beneficial.
Max Tegmark: And I think that's actually a really, really good development. Many of these company leaders are very idealistic and want to make sure that this becomes beneficial, and now there are a lot more discussions happening where they can share best practices with each other, share lessons learned from mistakes, and so on. So, I would say the companies by and large have actually done a better job trying to look one step ahead than the world's politicians, who are still largely asleep at the wheel on this issue. I was so embarrassed watching the debates between Hillary Clinton and Donald Trump before the last presidential election—neither of them even mentioned AI at all. Like, “Hello!” Here’s the biggest transformation of our time happening, and it’s not even on their radar.
Rachel Thomas: What can we as individuals do to step back and make sure that we’re not overly relying on AI in our decision making?
Max Tegmark: That’s a great question. First of all, we are only effective decision makers with real power if we are actually informed on the issues. So, that's why it's so important that everybody try to learn more about how this will impact them, so that when they vote and exert influence in other ways, they really do so in a way that serves their own interests. Certainly, as you said, people shouldn't just automatically trust something because it's shiny and made of metal; they should be just as skeptical of advice coming from machines as from other people. And the more women we can get into decision-making positions here, the better it's going to be. Nowhere is this more true than when it comes to the issue of weaponizing AI and using it to make slaughterbots and other new ways of assassinating people, which is a very current issue—it's being discussed in the United Nations whether these kinds of disgusting weapons should be banned, like we've banned biological weapons before, or whether they should be completely legal and everybody should be able to have these little assassination drones freely, or what exactly. And I just find it so striking that on these issues, pretty much everybody who is for an unlimited arms race is male, and among the voices saying, “Hey, wait a minute, let's draw a line in the sand here between acceptable and unacceptable uses and negotiate an international treaty,” there are a lot of women. And I think if we can get more women into decision-making positions, really starting to affect what kind of future we make, that's going to be a better future.
Rachel Thomas: I cannot agree with you more, and that is the perfect point to end on. Thank you so much. This was really helpful and I learned a ton.
Max Tegmark: Thank you so much.
Rachel Thomas: I am so grateful for our experts this week—they opened up a topic that too few of us understand and I am struck by how passionate they are about getting this next phase in our advancement right.
AI is complicated stuff—but here’s what I know for sure:
We should all learn more about it, how it works, and how it is shaping our future. And like everything, AI is created by people—and when only one group of people is doing the creating, we all miss out.
We need more women, more people of color and more people with diverse backgrounds driving the development of AI. AI can either perpetuate biases and systems that lead to inequality or it can help us level the playing field. It can make things worse or it can make things better. And I want better, and I’m willing to bet you do, too.
Please subscribe to us on Apple Podcasts, Stitcher, Spotify, Google Play or wherever you get your podcasts.
Our producer is Jordan Bell. Special thanks to Katie Miserany, Ali Bohrer, Megan Rooney, and Sarah Maisel from the Lean In team and Laura Mayer at Stitcher.
Our engineers are Jared O’Connell, Bianca Taylor, and Matt Russell, and our music was composed by Casey Holford.
This has been Tilted, and I’m your host Rachel Thomas.