>> Hello, everybody. How's it going? We are, what, ten days into South By now? So this is like the lone survivors in this room. My name is Allison Morrison. I work in public policy at Meta, but I am also a part-time MBA student here in Austin at the University of Texas. I have spent many years now coming to South By, so it's super cool to actually be up here on stage moderating. And I'm super excited to introduce to you guys, and have this conversation with, Dr. Rumman Chowdhury. She is a data scientist, a social scientist, and CEO of Humane Intelligence, which builds a community of practice around evaluations of AI models. She was recently named one of TIME's 100 Most Influential People in AI, one of the BBC's 100 Women, and one of Forbes's five people shaping AI. So welcome to Austin. Welcome to South By. >> Thank you. I'm really excited to see you all and happy to be here. So, to get things started: I'm sure folks in this room know a lot about you and your background and have read your bio, but what's something that people might not know about you or your background that led you into the work that you do?
your bio, but what's something that people might not know about you or your background that led you into the work that you do? Yeah. I think one of the most interesting things is I am I always say I'm not born of tech. I did not start my career in tech. I actually started my career working at nonprofits, public policy orgs, and I was an economist for a while. And this was before really tech was an industry that you went into unless you were a computer scientist or a programmer. I think that's kind of the big
thing. I think a lot of us who have been in tech for a while, people assume that it's the only world we know. But what one thing that gives me a lot of the perspectives that I have is the fact that I had whole other careers one in academia, one in public policy and nonprofits and other types of industries before I came into tech. And I think it it helps you see a lot of the potential of tech, but also it gives you a maturity to see how things could be better. So on that in
So, on that note: in 2017, you were hired by Accenture to be their very first lead in responsible AI, at a time when we didn't really know what it meant to make a responsible technology or a responsible system. I'm curious how, at that time, you viewed your background in academia, in nonprofits, in political science and government. How did that help you think about some of those early questions before there was much of a roadmap? And I think a follow-on to that is: what's the importance of bringing people from different disciplines, not just AI research or engineering, to work on the problems we're facing right now? Yeah. Well, like you said, when Accenture brought me on for responsible AI, I was coming in as a data scientist, and I saw this problem as a series of basically quantitative social science problems that needed to be solved. One of the most interesting things for me coming into Silicon Valley was that people talked about human beings but understood very, very little about them, and actually disdained people who understood people or researched human beings. And that was reflected in the technology that was being built.
Responsible AI was an opportunity to remedy that, right? We're not just talking about computers going brr. What we're talking about is: you make this thing and it impacts somebody's life. And now people like myself, and lots of people in these fields, have studied that, and we can actually build solutions for it. So during my time at Accenture, I actually made the first bias detection and mitigation tool that enterprise companies could use. And now it seems obvious, because there's an entire industry around that. But at that time, the idea that you could take a concept that seemed nebulous, like bias, and quantify it and measure it in a model, and then go do something about it so that your output was not biased, was not something people had really considered. So Accenture was this amazing opportunity to really think big, think globally, and start solving big problems.
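(For readers who want a concrete sense of what "quantifying bias" can look like, here is a minimal illustrative sketch. It is an editorial addition, not the Accenture tool, whose internals are not described in this conversation. It computes one common fairness measure, the demographic parity difference, over a classifier's outputs; the data and group labels are hypothetical.)

```python
# Illustrative sketch only: one way to put a number on "bias" in a classifier's output.
# The predictions and group labels below are hypothetical, not from any real tool.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between group 1 and group 0."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # share of group 0 receiving the favorable outcome
    rate_1 = y_pred[group == 1].mean()  # share of group 1 receiving the favorable outcome
    return rate_1 - rate_0

# Hypothetical hiring-screen predictions (1 = "advance the candidate") and group labels.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # -0.5: group 1 advances far less often
```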
And since then, how has the practice of evaluating models, their inputs and outputs, and the bias in them evolved over the past five, six, seven years? Where do we go from here? How much progress have we made? So I'll give you both an optimistic and a pessimistic answer. The pessimistic answer is that it remains very hard, because there are no legal mandates for companies to adopt any of these solutions, right? So when they bring on responsible AI teams, sometimes it is an uphill battle to get teams to adopt what you're doing, to pay attention to what you want, and to convey that you're not making the company worse. You're not trying to take someone's baby away from them; you're actually trying to make it work better for more people. And that's a really hard road to navigate inside a corporation. Working outside a corporation can be more emotionally satisfying, because you can say the big things, but you have very little agency over what gets done. So those are often the trade-offs. That's the pessimistic side of things. On the more optimistic side, I do think that especially with newer forms of AI, like generative AI, the general public is just more aware of AI systems.
If we rewind four or five years, I would have to explain to people why they should care about AI, because to them it was this abstract concept from the movies. I was trying to explain that, you know, when you apply to a job, AI and machine learning are being used on your resume to determine whether or not you're a good fit for that job. How do you know there isn't bias in that model? You have no access and no forms of redress. And it just would not make sense to people. Generative AI really put AI into everybody's hands. Now you don't need me to explain it to you; you can just do it yourself, and you can say, hey, why is it that when I ask for five scientists, it's only ever giving me men, right? And then you can start questioning for yourself. So that, I think, is the optimistic side of things. What remains to be done is making the bridge between the awareness and the ability to act on that awareness.
Yeah. So how do we actually act on some of these discoveries that we're making about the bias that does exist in these models? Yeah, and that is the purpose of my nonprofit. As you mentioned, the goal of Humane Intelligence is to build a community of practice around algorithmic evaluations, which sounds very fancy, but really all it is, is I want to give people the tools and critical thinking to test AI models from their perspective, and make a pipeline for companies to correct the issues they find. So when we do what are called red-teaming exercises, these are exercises in which people can come in and basically hack AI models. We often work with companies to give that feedback, and we design these evaluations around communities and perspectives that tend to be underrepresented in tech. So one of the things that I'm doing is trying to build that bridge. Awesome. In your TED Talk last year on the right to repair AI systems, you said that we're asking the wrong question. It's not what companies can build to ensure that people trust AI, but
rather, what tools can we build so that people can make AI beneficial for them? So what are some of those tools? How do we teach ourselves, and society at large, to think and act more critically about these technologies that are becoming an ever more present part of our lives and our work? Yeah, that is the million-dollar question. The fundamental structural problem is that with a lot of the technology we have, whether it's smartphones or computers or even things that are not physical hardware, like AI models, we are just the recipients of what somebody else has built. We don't really have a say in how it's being used, and if it doesn't work for us, we don't really have a method of redress. I mean, I worked at Twitter, right? If there was something happening on Twitter, what could you do? You could log a ticket. That's really the best you could do. There's nothing you could actually, fundamentally do about these things. And that's the mindset: these are the tools that need to be built. But the starting point would not
even be to make a thing. It's actually: how can we shift the conversation by asking questions differently? And actually, this revelation came to me as I was writing that TED Talk, because you have to write out a script of what you want to say, and as I was thinking through how I was going to wrap it up, I thought, oh, we can talk about all the stuff people are doing to make trustworthy tech. And then I thought, is that really what it is? Is there going to be some tech that is just so good that we're all going to say, oh, that's the trustworthy thing, and everyone's just going to blindly agree? That's actually not the world I want, right? Because again, that world still requires people to just be in compliance with what I have decided is fair and what I have decided is good. And if what I want is for people to make choices for themselves, then how do we make pathways for people to make that choice? So what does that require companies to do? Part of it is just having less ownership of things. So
let's talk about things that are more capitalistic and structural. One would just be market diversity; the fact is that so few companies own so much tech, right? Number two, and this ties into the battle that happens around open source: I'm a big open source advocate, in part because it helps people think through different problems in different ways, and it gives access to these tools to people who couldn't otherwise afford them, because generative AI models are prohibitively expensive to build. And the third, and we haven't quite solved this yet, would be methods of not just raising problems, because today the only way to raise a problem with tech is to go viral on social media, which is the wrong way to raise problems about tech, but of actually addressing those concerns. Some people say it's regulation. I think regulation is one way to do it, but we forget that there's an entire middle ground between no regulation and all regulation: civil society organizations, independent actors, people who can live in this space and represent specific communities.
On open source: I feel like open source has become a more frequently discussed word and concept in the AI community, but maybe a lot of people don't know what it actually means. So for the folks in the audience who don't necessarily know what open source is in the context of AI systems, can you break that down a little, and then explain how an open source system, as opposed to a closed source one, actually strengthens the models and can potentially make them less biased, or safer, or easier to test by, for example, the things you were doing at your organization? Yeah. So there's a spectrum, right? Meta's models are open access, which means that you can take the model and fine-tune it and train it, but you don't get access to the data it was built on. Full open source means you get access to the data; you get access to basically everything. That can leave models open and vulnerable to hackers, for example, or to people who want to execute things like data-poisoning attacks. So one
maybe simplistic way to put it is that greater transparency does open up the possibility of greater exploitation by malicious actors. However, we're now in this world of exploring the spectrum of open source and open access and what it all means. Here's the thing I'm always optimistic about, as somebody who has been in this space for quite some time: if folks in the audience are, say, graduate students, you can't afford half the stuff that's out there. So you go online and you teach yourself, and you find some janky thing on GitHub and you slap it together. And that is how we learn things, right? That is how we get good at this industry. So I feel like it's a bit disingenuous; I think actually very few people in the industry vilify open source. I think it's a lot of people who don't know how open source is used. Open source has been the backbone of the tech industry, and not just structurally; it's how so many people get into tech. What makes tech so accessible as an industry is the fact that I can go online and learn stuff off of YouTube or whatever, grab some free code off of GitHub. If we all just decide to stick it in these big corporate silos, it will stifle innovation, it will limit oversight and scrutiny, and it will actually kill the soul of what made tech so appealing and desirable to people. It became this place of innovation. That's what innovation is: regular people being able to be hobbyists, being able to play with something, in a way that we can't really do in a lot of other fields and industries.
Just a housekeeping note: we'll save about 15 minutes at the end for questions, so if you submit those (I think there's a QR code up) we'll get to them at the end. But back to the career path. After Accenture, from March 2021 to November of '22, you were the director of the Machine Learning Ethics, Transparency and Accountability team at Twitter, until Elon Musk came in and essentially eliminated the team. What was that experience like, and how did you decide what was next for you? Yeah, I mean, working backwards, we didn't really have much of a choice. I think a good third or more of the company was fired. But I'll also add, and this went under-discussed and underreported, that we don't even know how many people left before he took over. Anecdotally, I can tell you we were hearing about people leaving every single day. Some estimates say about 20% of the company had already left before he even took over, and
then he fired another 30% on top of that. So we're talking about a big chunk of the company being just gone, whether of their own volition or not. But I think the most important thing was that the looming shadow of his existence completely killed the culture of Twitter. I really loved being at Twitter. It somehow managed to retain some of the startup-y vibe while it was still this global company, for better or for worse. There are many critiques one can have of Twitter, and most of them are probably quite valid, but there was this spirit of trying at Twitter, of trying to be open and honest about when things worked and when things didn't. A lot of companies, especially when they move out of startup mode into something more corporatized, are very scared to talk about their failures. Twitter was never really scared to talk about our failures, and I liked that about it. I think it's what made the Twitter community really like the company. And we see some of that reflected in Bluesky today; some of the spirit of
Twitter lives on in Bluesky. But that last year was pretty brutal, for many reasons. And sadly, I think it's very relevant today, because we're seeing the US government go through the same thing; I'm very sympathetic to my friends in government who are trying to run teams and manage workloads. The thing is, you still have to do the work. So when your funding is frozen and you don't know if you're going to be fired, and there's this absolutely unhinged person saying weird things on the internet constantly, you still have to do your job, right? The lights have to be kept on, because Elon Musk is not the one keeping the lights on, even though he would like you to think that. It is the rank-and-file employee who's keeping the lights on through all this uncertainty. So one of the concerns I have, especially with the federal government today, because I saw this at Twitter, is the brain drain that happens, where if you're smart and you're capable, you're like, I can't be in this environment. And again, whether you like him or not as a human
being, he breeds chaotic environments, right? And chaos is not where good work happens. It's not where good things get done. I think it is appealing to some people, because you think that if you pull an all-nighter you did a lot of stuff, but if you actually do things, you realize that if you pulled an all-nighter you've just managed your time poorly; you've not actually gotten a lot done. Sleeping on the factory floor is not impressive to people who have actually built things and done them wisely. And it's scary to see that happening to the US government, an institution that matters significantly in every American's life, and even to people beyond American borders. I mean, Twitter was a company. Yes, it was an impactful company, but it's not the same as a government. That's very concerning. But yeah, to your question of what it was like: not great is the short answer, and again, it was the uncertainty and the chaos that made it not that great. Yeah. You know, I think
a lot of people in tech, and in other industries as well, over the past several years have been laid off with little to no notice, and are maybe embarking on a job search or a new chapter of their careers, and thinking about how to navigate that amid the rapid technological change and innovation that's happening. I certainly am, as somebody who is about to graduate from business school and is looking ahead at not only the next five but the next 20, 30 years of my career, trying to prepare for how these technological shifts are going to impact me and the world around me. Do you have thoughts on how everything we're experiencing now is going to impact the future of work, and how this is going to be similar to or different from past technologies we've seen? Obviously the internet, email, the smartphone, but even the calculator, which also had a very interesting impact on the way people did their jobs and what jobs were actually available. So how do we all navigate that, and how can people prepare themselves to be valuable
workers, I guess, over the next five, ten, 20 years? Yeah, I'm glad for that question. I feel like the future of work comes up in these very abstract ways that, I think, often leave people feeling more scared. I do a lot of talks at universities, and overwhelmingly, especially undergraduates, frankly, but also a lot of grad students, will come up and ask: well, what do I do so that I can have a job? I'm still in school, I can go get a degree; what degree should I get? And there is no clear answer to give them. And what I've seen is that there's a lot of mythology students have around this. I've been told everything from, oh, I'm studying to be an accountant because people will always need these sort of boring, rank-and-file kinds of jobs, a computer is not going to replace that. I've heard people say you should do programming because you need to know how to build these systems. I've heard people say you shouldn't learn programming because AI is just going to do all the programming, so all those jobs are going to be gone. Somebody told me they're in an MD program but they're also taking a communications minor, because in the future you have to have both EQ and IQ. So again, absent clear guidance or knowledge, students and young people are just going around trying to navigate a world that they consider to be very uncertain. What I'm concerned about is that there seem to be a lot of up-here, Davos-level conversations that are quite philosophical, and then there's a lot of fear down here, this sort of existential dread, but nothing in between connecting the two and saying, well, these are viable pathways forward.
One example that's not much discussed: I actually think a lot of programming jobs are being automated away, because we have pretty sophisticated tools like Copilot that can actually help with unit tests and programming, and there's nothing inherently wrong with that. We want technological advancement; we want these jobs to be better. But that should then be made quite clear to the profession. So I think the simple answer I can give is that it's always good to learn how to use these tools. Understanding how you might be able to use some of these AI tools to augment the work you do can be helpful. It also helps you understand what the limitations of these tools are, and there are many. One of the problems with this very scary future-of-work narrative, and the lack of an in-between, is the existential dread that there will be no jobs left tomorrow, and what are we all going to do unless Sam Altman decides to pay
us out of his piggy bank, right? That is the perception a lot of people have, that very rich people are going to have to set up UBI because there'll be no jobs in the future. And that will not be the reality; I can at least tell you that much. The reality is that there's going to be a market shift in what jobs are out there, and also a change in the nature of jobs. I actually think a lot of the more qualitative kinds of jobs, whether it's things like policy research, program management, or project management, will become more useful. I know tech is swinging into this "we want to lay off middle management" perspective, which happens in tech every handful of years, and then they realize what a dumb idea it is and swing back, because engineers cannot self-manage, right? It's not a thing. It's not what they are there for. Dumping the responsibility of managing a team on somebody who is also being asked to build is not a smart way to do things, period. This is not a knock on engineers. It's actually saying that their job is to make a thing; they can't make a thing while also looking around and making sure everybody else is making their thing. So you do need a management layer, but what does a management layer look like now, given that you have AI tools at your disposal? Maybe it looks like a bit more automation of certain tasks, like allocating tools to people, things like that, and even in hiring. But we're not having substantial conversations
about these things, because there's this high-level fear-mongering narrative that, frankly, a lot of people are very complicit in, and that is maybe useful in the short term but not beneficial to society in the long term. So what I hope is that when we talk about the future of work, we have conversations about new professions that will exist. Going back to Humane Intelligence, that is one of the reasons I want people to be smarter about how to interrogate AI systems. My way of talking about the future of work is to say that I think algorithmic auditing and algorithmic assessment should be its own profession. And that doesn't just mean this profession falls out of the sky. It means someone needs to build the infrastructure, build the education, build a pipeline for people to get these jobs, and then meaningful ways to go and do the work. So if we just talk up here about how rich people will have to pay a UBI because we're not going to have jobs, that's not filling in that middle. Yeah. I think algorithmic assessment as a profession is
super interesting, and it's an example of a job that obviously has not existed up to this point but may become extremely important, or a very viable way to create a career and make a living. If there are students, or even people looking to switch careers, who are curious about building the types of skills one would need to become an auditor or assessor of algorithms, how would you go about doing that? Yeah, that's a harder one to answer. I often get that question. One thing I did realize is that in the field of responsible AI, we've built a lot of celebrities. There are a lot of people who are essentially household names in our world. But what we did not do a really good job of is extending the ladder so that other people can rise, and we're in this position where there's a handful of people who are these big rock stars, and unless one of them is in the room getting the press and telling the
man what's what, nothing's getting done. That's just not a sustainable way to do things. So what does it look like to get a career in algorithmic assessment? Right now there's not a clear pipeline. There are analogs in other industries, though. If you look at finance, for example, the people who build financial models are not the same people who assess those models. Model risk management is a field that exists in finance, and these are people who evaluate the financial models that are being built. They actually do analyze them for things like fairness and bias, because those are required by law, and they analyze them for bigger market impacts. But again, the skill set of the people who build these models is not the same as the skill set needed to analyze them. So I think the first thing is getting rid of the mindset that an engineer can just go do everything. There's this assumption that because you can build an AI model, you're the best and smartest person to evaluate that model, when actually that's not true. You're probably the party that has the most blind spots when it comes to it, right? Because nobody's going into building these models trying to be racist; they're not trying to do bad things with them. It comes from a lack of knowledge and understanding, it comes from a lack of a sufficient educational background, and it comes from a lack of skills and tools. So those are the three things that would need to be built. I will say that there's more and more model governance happening. One of the places where I am seeing more movement in algorithmic assessment, in a sense,
is the privacy world and the legal world in general, right? Because that world is getting pretty big. There's regulation coming out of the EU, and there's also regulation coming out of multiple other parts of the world. We're entering a really interesting geopolitical phase in this whole world of AI, which is fascinating. So we're seeing a lot of legal people coming in, and for a lot of them to understand how to be in compliance with these laws that are coming out, they need either technical people working with them or for them, or they need to become technical themselves. Policy is another place I'm seeing it, which is very fascinating. When people think "policy," I think their image is of a lobbyist, but increasingly there are engineers on policy teams, which is really interesting, and again, that's only been in the past few years. So, absent technical teams bringing those people in, what we're seeing is that the people at companies who did these roles are now putting technical people on their staff to augment what they do. So I guess another way to put it is: if you are a technical person, don't automatically assume that the
only jobs you can get are with product-building teams, because there are now more and more opportunities, ranging from policy to governance, risk, and compliance and beyond, that did not exist before. Totally. And I think that as regulators and policymakers are increasingly involved in developing the policies needed to help us meet these challenges, they need to be educated on the tech behind the things they are regulating. A lot of policymakers and regulators... you know, I think there was the meme going around every time a tech CEO testifies in front of Congress, speaking to a bunch of people whose password is probably "password." And so I think there's a role for engineers and technical people to actually educate policymakers to help them make better policies. So I think that will be an interesting area. Yeah. And I think there are a lot of people interested in doing that. One of the things that's happened is that a lot of people went into tech with really good
intentions; they really loved a lot of the things that I love about tech. One is this open mindset of innovation: let's just try, let's just see if it works. There's a negative way that gets done, but there's a positive way it gets done as well. I think a lot of people were attracted to that, to what seemed like a lack of barriers and obstacles, as it were, and to the feeling that if you were a smart person with a good idea, you could get that thing done, right? It's not always the case, but it is more so that way in tech than in other industries I've been in. Honestly, though, that's been closing off over the past few years because of a lot of shifts in ideology. It's using the words of innovation and building, but it's not in the spirit of innovation and building; it is in the spirit of hoarding power. But I do think the other thing that is really appealing to a lot of people in tech is this ability to learn
from failing, which to me was striking, coming out of a graduate program. I was in a PhD program, and if anyone here is an academic, you know that is not an environment where you're allowed to fail, right? You're not allowed to have wild ideas. You're not allowed to push outside the hierarchy. And you're certainly not allowed to fail, because God forbid you try something and fail, you're forever marked as the person with the one failure. Which is dumb, right? Because human beings learn by failing. We learn to walk; we don't just get up and walk as babies one day. We trip, we fall on our faces, we bloody our knees, and then we get up and we do it again. And if we treat our professions and our jobs and our lives as if you're not allowed to bloody your knee every once in a while, then you're never going to get anything done. So I think that is another part of tech that is actually great and pretty appealing. But again, there is a way that gets used very negatively, where it becomes this mentality of, the world is my collateral damage, which is not how to do it. The better way is to say, how can I be humble and learn from these failures? And that's permission the tech industry has often given people. So I do want to emphasize that there is a good ethos there that sometimes gets warped for bad reasons, but underneath, it is fundamentally a good ethos that I think a lot of people can learn and build from. We've talked a bit about
the company that you started, Humane Intelligence, which you founded in the summer of 2023. I'm sure that you mostly get asked about your experience and thoughts on AI and its direction of travel, but I'm very curious what your experience has been as a founder, building a business and a brand, and how you have navigated that, because you worked at a big corporation, you worked in nonprofits, and now you're a founder. How has that experience been? Yeah, and I'm glad for this question, because you're right: I literally worked for the man. Accenture is truly the definition of "the man," and I loved my time there; I say that jokingly. I had never worked in an organization that big before, and one of the things about working at a place that big is that it pushes you to think that big. I had never really thought about what a global solution looks like. I was a data science manager before I was brought into Accenture, and I was really good at managing a team, building a product, and pushing it out there. But that is different from saying, how do I take this thing and get Fortune 50 companies around the world to adopt it, and make sure it is as relevant in Singapore as it is in Paris as it is in San Francisco? That is a whole other level that Accenture unlocked in my brain. After Accenture, I ran a startup for a few years that I sold to Twitter, and running a startup versus now, where I run a nonprofit, was actually
fundamentally the same. What is interesting to me, having run both a small startup and a small nonprofit, is that they're actually structurally quite similar; the only difference is your legal designation. To be perfectly honest, I think there's a lot we put emotionally on startups and on nonprofits, and we assume that if something is a nonprofit, it is just fundamentally good. Actually, there is nothing in a nonprofit charter that legally says you have to be doing things for good. It really doesn't. And there is nothing in a startup's legal documentation that says you can only do things ruthlessly and capitalistically. All in all, it comes down to where your money gets placed and how your money gets placed. If you run the organization, you are the person who chooses what that means. And I think we're actually starting to see more of that exploration happen. There are companies like Patagonia, for example, that are for-profit companies trying to really do good with what they do, and I think there are some organizations in tech that are thinking
about this more and more. Some of the limitations of a nonprofit are that I can only raise money from certain types of organizations, right? I can't give my employees equity, and that's really hard when you're trying to attract technical talent like I am. If I need to bring in data scientists and engineers but I'm faced with the financial limitations a nonprofit has, then it's really hard to recruit that talent. For-profits maybe don't have that problem, but when I ran a startup, there was not a universe of VCs that truly understood what a mission-oriented startup could look like. They thought of responsible AI as compliance, and a lot of the responsible AI companies that exist in the space are compliance companies, and those are two different things. Compliance is the floor; that's the bare minimum, what you have to do. Responsible AI is going above that. But it was very hard to find like-minded VCs. So when I created Humane Intelligence, I purposely made it a nonprofit instead of a for-profit, because it was more important to me to find funders that were aligned with me from a mission
perspective. And again, it's not because making a for-profit is inherently evil; it's because it's hard to find VCs that fund that way. But I also think the landscape has shifted quite a bit. There was a big Paris AI summit about a month ago, and at that there was this new vibe of nonprofits and VCs almost starting to meld in a weird way. There's a lot more investment in technology happening by nonprofits, and there are more and more VCs that are looking at long-term benefits. One of the big restrictions of running a for-profit is that you're forced to think quarter over quarter and year over year; ARR, your annual recurring revenue, is your big metric, and you've got to dance to that. That's very limiting and frustrating, but it's only imposed by who your funders are. Finding funders who say, "I care what you're going to look like in five years," is really interesting, and those are the people that are starting to emerge a bit more. Yeah, it's not as binary as "a nonprofit is automatically doing positive social-impact work and for-profit companies are automatically preoccupied only with profits"; it's a lot more nuanced than that. And it's interesting to see how the landscape is shifting around where people find funding. Okay, I do have a few more questions, but I want to make sure we get to some of the questions in the room, so maybe we'll take a brief intermission and see how long we
can turn to some of these. Yeah. So we have: how do you ensure that engineering teams understand the rationale behind policy recommendations, and then feel invested in the implementation of those recommendations? You picked my favorite one. I was actually hoping you would. I was trying to read your mind. You really did, because it's the last one on the screen, for those who can't see it, and I literally was going to say, oh, that's my favorite one. I love that question, because I was often in that role; in most of my jobs, I've been in that role, where you're kind of this translator. So specifically, structurally: another thing, if you look for a job in this field, one of the most important things is to look at where your team is located organizationally. That is actually how I choose my jobs. It's how I chose my job at Twitter: I was an engineering director, and I was in the core AI/ML services team called Cortex. If you're at all familiar with Twitter, it was like the brain of all the AI products, and all the stuff you saw on Twitter was built on top of the infrastructure that the team I was on made. Why was that important to me? Because I didn't have to ask permission to be in the room; I was in that room. My peers were the people making the stuff. And that is one of the most important things: you shouldn't have to ask permission to be there if you are structurally there. So, to the question of how you do that translation: I spent a lot
of time with our legal teams and our risk and compliance teams. This was the early days of the Digital Services Act, for example, which, if you work at all in the social media world or at any of the very large online platforms, you think about quite a bit, because it is a very big and very impactful law. And explaining things... so the Digital Services Act says things like: you have to demonstrate that your models don't violate fundamental human rights. Well, how the hell are you going to go do that? Honestly, I don't know if I know the answer to how we're going to do that. But translating what that means technically, that's the bridge, that's the infrastructure that hasn't been built computationally. This is why people like quantitative social scientists belong on these teams, not from a philosophical perspective, but because they are literally people who have taken abstract concepts and measured them. In my field of study, people talk about things like corruption; we talk about things like transparency. My dissertation was on the idea of social capital. People sit down and try to operationalize, to measure, these things. So when there is a law that says, okay, can you demonstrate that your algorithms are not making children mentally ill and not creating addiction, somebody has to go sit and figure out that measurement. That's the bridge between the two. So yes, it is about understanding the rationale, but I think it's more that it needs to be explained in terms that make sense technically. Contrary to popular belief, I've really rarely been in a situation where a technical person challenged me and said, well, I don't see why I should care whether or not AI systems perpetuate bias. They all care. They just didn't realize it's something you can measure, that it's not ideological. I think people assume it's ideological or philosophical, and it's not, when the reality is we already do this; we measure this. And related, you know, we were talking a bit about
a lot of the AI models that are out there, and I know there's a question here about open source. One of the most interesting things to me, again going back to this top-down way that AI is imposed on all of us, is that we are just told that new model X is the best model. Based on what? How have we decided that this general-purpose model, which these companies claim performs like a human being, is the best? If I were to say you're the best because you scored a perfect score on the SAT, therefore you're the smartest, I think plenty of people in this room would say, ah, you know, the SAT is not the only way to measure whether somebody is smart or good at something, right? Or if I say, well, we're going to run down the hallway and you're going to beat me, so therefore you're a better athlete overall, somebody else might disagree, right? It's like all those dudes who thought they could beat Serena Williams. Whether or not they could mess with Serena... very sweet of them to think
so. Yeah. But what I'm getting at is that what we use to measure these models matters. All of these models that are being sold to us as replacing us, as smarter than us and better than us, are measured on medical questions, coding questions, physics questions, and there's maybe one library of bias-type questions. And we just decide that we agree that's how things should be measured. So I loved that question, not just because engineering teams should understand the rationale behind policy recommendations, but because engineering teams should not just blindly decide that some measurement is the best way to measure something. In, let's say, the machine learning world, it was always the R-squared, and I'm a statistician by background, right? That is not the only way you measure a model. It just became this mantra: oh, R-squared is the best way to measure it, therefore you just optimize for that. It's just not a nuanced way of looking at things.
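(To make that concrete, here is a small illustrative sketch, an editorial addition rather than anything shown on stage, and it assumes scikit-learn as the tooling: the same hypothetical predictions scored with several regression metrics, each telling a slightly different story, with R-squared being just one of them.)

```python
# Illustrative sketch only: the same hypothetical predictions scored three ways.
# R-squared is one lens among several, not "the" measure of model quality.
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = [3.0, 5.0, 7.5, 10.0, 12.0]   # hypothetical observed values
y_pred = [2.8, 5.5, 7.0, 11.0, 15.0]   # hypothetical model predictions

print("R^2: ", r2_score(y_true, y_pred))                   # share of variance explained
print("MAE: ", mean_absolute_error(y_true, y_pred))        # average absolute miss
print("RMSE:", mean_squared_error(y_true, y_pred) ** 0.5)  # penalizes large misses more
```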
Yeah. Well, I think it's also another great example of why it's important to have engineers and people with product expertise in the policy conversation. Absolutely. So that relaying those recommendations actually takes into account what's technically feasible and all of that. Right, right. And we are seeing this: we're seeing smarter and smarter laws being written, because we're starting to bridge, to your point, what's technically feasible and what the aspiration of the law is. Mhm. I think that's a good segue to this question: is it possible to retrain already existing and deployed AI to be more responsible, or is it
only net-new models and releases? You know, can we retrofit things that are already out in the world and improve them? So the short answer is yes. The longer answer is that it is not easy. Step one would be just identifying what "responsible" means. Step two would be identifying where the model is becoming irresponsible. Often it is the data, but it is not always the data; the easiest culprit is to say, oh well, society is biased, therefore the data is biased, therefore the model output is biased. That's not always actually where it comes from; it depends on the decisions made about the model and how it's used. Now we have people building extensive guardrails, so if we think about retraining, it doesn't actually have to be retraining. It can just be building guardrails on top of the model, which is actually easier than retraining a whole new model. It also depends on what we mean when we're talking about AI. If we're talking about generative AI models, retraining is insanely expensive, and you are better off investing in building better guardrails. If it
is machine learning models, you probably can just retrain and recreate them, because they are much, much cheaper and easier to build. So it really just depends.
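(As a rough illustration of the guardrail idea, an editorial sketch rather than any particular company's implementation: instead of retraining the underlying model, you wrap it and screen its inputs and outputs. The `generate` function, the blocklist, and the refusal messages here are hypothetical placeholders.)

```python
# Illustrative sketch only: a guardrail wrapped around an existing model, no retraining.
# `generate` stands in for any already-deployed text model; the blocklist and refusal
# messages are hypothetical placeholders for real policy checks.
BLOCKED_PHRASES = {"social security number", "home address"}

def generate(prompt: str) -> str:
    """Placeholder for a call to an existing, already-trained model."""
    return f"[model response to: {prompt}]"

def guarded_generate(prompt: str) -> str:
    # Input-side guardrail: refuse prompts that clearly ask for disallowed content.
    if any(phrase in prompt.lower() for phrase in BLOCKED_PHRASES):
        return "Sorry, I can't help with that request."
    response = generate(prompt)
    # Output-side guardrail: screen the model's answer before returning it.
    if any(phrase in response.lower() for phrase in BLOCKED_PHRASES):
        return "Sorry, I can't share that."
    return response

print(guarded_generate("What is the CEO's home address?"))  # caught by the input check
```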
But I think the crux of your question is also this concept of "more responsible." More responsible to whom, and for what, is always the question to ask. There is no universal gold standard of responsibility; there is no universal gold standard of ethics or of being unbiased. And so, back to your question about being a startup founder: one of the things I was absolutely abysmal at is lying. When you have to pitch as a founder, you have to be really confident in saying shit like, "I know, I know, I definitely built this pretty well." And also this word "unbiased": I would pitch, and there'd be other founders pitching, and they'd say, we're going to have unbiased AI. And they'd say it with such confidence, and the VCs would just be nodding, so thrilled, and I'm thinking, nothing you said is true, because there is no such thing as "unbiased." Now we're actually in a more sophisticated world where you can't say things like that, but in 2021? I know, 100%, because I took VC calls and I just wouldn't say things like that, and it cost me. Because if we're being honest about responsibility, if we think about who we are as human beings today, in the geopolitical climate we're in today, we can't agree on what good is. And that's not necessarily a bad thing. I think the hard part is that we want there to be an answer, and then we're like, oh good, now I can just figure out how to get there. But the answer is that it is the journey. It's an aspirational journey toward a better outcome, not something where the outcome is very clear. People can say, "I want anti-racist tech"; that is still aspirational and unmeasurable, right? The point is that there's not some point you reach where you go, yep, that's anti-racist tech, we're good, we're done now. Right, right. And people who are building these companies are
trying to get support and funding to embark on that journey. To be fair to them, you have to speak in simplistic terms because you're trying to pitch to an audience, right? You can't get up there spinning an incredibly nuanced story when you've got, like, ten minutes of this guy's time, and you're playing salesman at that point, right? Yeah. So, there's a question on here about global cooperation. I think it's been, what, a month since DeepSeek came onto the scene and brought a bunch of these discussions around US-China competition and the AI arms race to the forefront. So how real is the AI arms race? How important is it for the US to win it? And is global cooperation possible? What does that look like? How do we come together as a global community to build the standards and frameworks needed to make sure that all models are built responsibly? So, two years ago I had an
op-ed in Wired about how AI needs global oversight, because AI is a borderless technology, and trying to regulate it only within our own borders is much like what we did with climate, right? You can try to regulate air quality within your borders, but air does not know borders; if your neighbor to the north is polluting, you're getting polluted air. So thinking about AI that way is the smarter way to think about it. And my answer to your question, if you had asked me two years ago, would have been very different, because we lived in a world of significantly more global cooperation than we have today. One of the most interesting things (and I use the word "interesting" in a very broad sense) that has happened in just the past maybe six to eight months is increasing factionalization in the world. 2025 will be the year of sovereign AI; that's one of my 2025 predictions. We are going to see homegrown AI models being built by different nation-states, which is not something we've had before. So there are a couple of things worth noting. One is that
there's this blurring between state and corporate. So when Singapore is making its own AI models, and India is making its own AI models, and the US is, and France is, yes, it is a corporate endeavor, but it is a corporate endeavor that is state-subsidized, which means something very, very different from what the AI world was a few years ago. Yes, it was centered in Silicon Valley, but those were distinctly corporate efforts. They were corporations; they were not state-funded and state-run. So that's one: sovereign AI. Two is that we're going to see a lot of rolling back of the cooperation we have historically seen, which I don't think bodes well for global regulation, and I am concerned about that. We are seeing an increase in scientific cooperation, which is a good thing overall, and that's actually what we saw with climate as well. In climate we had the IPCC, which was a scientific panel of climate scientists who came from different countries. It was very clear that their mandate was not to represent their country on the panel; their job was to be a scientist on the panel. And obviously, it's not Severance, we can't just turn half of our brains off, but the intent and the goal was not to make it ideological or nation-state based. I think with AI we just jumped the gun and decided everything should be decided by borders, when that's not actually the best way to structure any of this. Okay. We have one that is more
related to the culture at Twitter. It says: can you speak to how the algorithm might have been biased to the left by the culture at Twitter, and how it might be biased to the right today at X? That's a great question. I think today it is very obvious, because Elon Musk is not a subtle person, how X is being used as a megaphone to push his perspective on the world. It's very, very clear; I don't think there's any debate or discussion to be had there. Jack Dorsey did not act anywhere near the way Elon Musk does. He was not on Twitter every day with his opinions, blocking people, amplifying others, calling some people terrorists; Jack did not act that way. So the question is a bit of a false equivalency, right? We're talking about somebody literally using the platform as propaganda for his own ideology versus, hey, do we think that sometimes Twitter as an organization acted in a way that was ideologically left because the organization leaned left? Maybe. Sure. But was I ever in a room where we sat
around and discussed shadow banning certain people? No. Shadow banning is not real. I think that is one of those phrases that got really popular but has zero credence, at least in the old world of social media. So, for example, the decision to ban Donald Trump from the platform was made incredibly transparently. It started within the company; people discussed it at the company. And when the decision was made to ban him from the platform, there was a lot of clear discussion publicly; it was shared publicly. That is very, very different, I'm just going to point out, incredibly different, from Elon Musk unilaterally deciding he's mad at somebody, yelling at them and shoving them off the platform, or worse, siccing people on them the way he has done in the past, which has led to people having to go under protection, disappear for a bit, and move away from their homes. So again, to point out the false equivalency: at no point did anybody at Twitter treat anybody on the right in a way that made them
have to protect themselves from violence. And that is something that happens today on Elon Musk's X. Yeah, I can imagine that was an insane roller coaster to be on. We have another question on open source. They ask: is open source the only way to gain control? Couldn't there be a regulated minimum amount of user control or transparency on a model in use? And maybe another way to phrase this is: are there scenarios in which open source actually isn't the best route to go, or when should we not be open-sourcing models? Yeah. The balance is always between openness and security, right? For highly critical, highly sensitive models, open source is not the best decision, because you have to be very careful about exposing critical flaws or enabling backdoors: where your training data is coming from, how your model is being built. So there will always be that discussion of transparency versus safety and security, which is very, very real and should definitely be taken under consideration. For example, during my time at Twitter, we actually open-sourced some of our code in order to run an algorithmic bias bounty. We wanted anybody in the world to be able to contribute issues of bias that they found in one model we used, an image-cropping model. We couldn't do the same thing with other kinds of models, for example the models we used to identify bots or malicious actors. You can't open source that code, because bots and malicious actors are trying to get around it. If you've told them
your formula, they're just going to figure out how to get around it, right? So I like the second part of your question; this is the creativity I want to see, and this is the world we don't have today. We don't have a world in which we can talk about what a regulated minimum amount of user control and transparency would be. We were talking about my TED Talk, which is my wild future vision of a world we don't have today, the right to repair. I think one step toward that would be user control and transparency on models that are run by corporations. Absolutely, why not? But we need to have that conversation by having a spectrum of offerings. We can have fully closed-source models, let's say for highly secure purposes; we can have fully open-source models for grad students to play with and for hobbyists to learn from; and then there's this in-between space where models are used, let's say, in hiring, where, as a user who might be
impacted by these models, maybe I want some transparency to understand how my resume is being ranked and assessed, and maybe some ability to say, well, I don't think this is correct, or, how can I make that look different? And maybe that responsibility wouldn't fall on the user; it might even be a third-party organization that does this. Again, this is a paradigm that exists in other industries; we have it in privacy somewhat. I think the best analog I can come up with is how you can download ad blockers for your browser. Somebody had to go build that ad blocker, right? And somebody had to have the permission to build it so that they're not sued by the companies saying, you're blocking us from revenue. All of those institutions needed to exist for it to be viable for a third-party organization, which has to pay its bills, to make an ad blocker, and for you as a consumer to use it. There's a whole ecosystem
that was built around all these institutions, all these questions. So I love that question, because that's not something we know how to do today, but we have analogs by which we might be able to build something like it. Cool. So we're coming to the end of our time, and we've covered a lot of ground, obviously. When we think about where AI as a technology is going, and say we have achieved some level of equity, an absence or near-absence of bias, to a point that is satisfactory: what are you most excited about when you think about the technology in its best form, in our lifetimes? Yeah. I think the shortest answer is choice. Replacing one set of people who we think don't mean well with another set of people we think do mean well is not actually how we get to a better future. Fundamentally, the thing we have to question is the paradigm of the power structure. In order to do that, people just have to have choice. And
that's something that we actually can start building. It doesn't fundamentally mean that we have to blow up every institution we have today. It means there are ways we can build this that actually make better products and better markets and better outcomes for people. So for me, it's just choice: choice about what models to use for what purposes. It can mean many things; there's a wide range of ways in which people want choice. Some people want choice over whether or not certain algorithms are being used on them. They want choice, and maybe visibility into whether or not they're comfortable with the output of these models: again, things we have no right to and no access to today. If you're somebody who builds and develops models, maybe you want choice over what models you could use or build. If you're starting a company or you work at a company, again, having some choice in vendors, some competition, will lead to better outcomes. So it ranges based on what your level of access to or interest in AI is. It's not just choice for AI developers and tech people; it's choice for regular people out there. What choice does a DoorDash driver or an Uber driver have today over the algorithms that run their day-to-day jobs? What might choice look like in that environment, in a way that actually cultivates a healthier and better workforce? Awesome. Well, thank you so much for being here and sharing your thoughts with us. And thank you, everybody, for joining us today.