Is Tricking A Robot Hacking? with Prof. Ryan Calo and David O’Hair (Big Conversations)

We discuss adversarial machine learning, the CFAA, and AI bias with Prof. Ryan Calo (Professor of Law at the University of Washington School of Law) and David O’Hair (associate at Knobbe Martens), co-authors of “Is Tricking a Robot Hacking?” from our Journal’s Volume 34, Issue 3.

Hosts: Haley Broughton (J.D. ’23) and Allan Holder (J.D. ’21)

SoundCloud | Spotify | Google Podcasts | Apple Podcasts | PocketCasts

[Haley] You’re listening to the Berkeley Technology Law Journal Podcast’s Big Conversations. I’m Haley Broughton.

[Allan] And I’m Allan Holder. Today on the podcast we will be speaking with Professor Ryan Calo and Mr. David O’Hair.

[Haley] Professor Calo and Mr. O’Hair are two of the five co-authors of the article entitled “Is Tricking a Robot Hacking?” from our Journal’s recent Volume 34, Issue 3. The other three authors are Ivan Evtimov, a Ph.D. student at the Paul G. Allen School of Computer Science and Engineering at the University of Washington; Prof. Tadayoshi Kohno, also of the Paul G. Allen School of Computer Science and Engineering; and Earlence Fernandes, Assistant Professor in the Department of Computer Sciences at the University of Wisconsin-Madison.

[Allan] Professor Calo is the Lane Powell & D. Wayne Gittinger Endowed Professor at the University of Washington School of Law. Before becoming a professor in 2012, Prof. Calo worked as Research Director and Fellow at Stanford Law’s Center for Internet and Society. Earlier, he was an associate at Covington & Burling’s Washington, D.C. office, and he clerked for The Honorable Judge R. Guy Cole, Jr. of the U.S. Court of Appeals for the Sixth Circuit. Professor Calo graduated from the University of Michigan Law School in 2005.

[Haley] David O’Hair is a trademark and copyright associate at Knobbe Martens’s Seattle office. He graduated from the University of Washington School of Law in 2019. While in law school, Mr. O’Hair was the Chief Online Editor of the University of Washington Journal of Law, Technology, and Arts.

[Allan] Here is our Big Conversation with Prof. Ryan Calo and Mr. David O’Hair, co-authors of “Is Tricking a Robot Hacking?”:

[Haley] So I guess for starters, we’d like it if y’all could give us a broad statement just about the issues or problems that you were both aiming to address in this article.

[Mr. O’Hair] For this article, obviously, we talked about adversarial machine learning and legal liability around it, but I think the big issue we were addressing, because we wrote this with three computer scientists, was the potential liability for testing and research, and how most of the case law and academic writing around that is, to summarize, ambiguous right now on liability. So I think that’s the umbrella of what we looked at: there’s this testing and research that needs to be done, here’s this hacking law, how do those get married, and what are the issues with that?

[Prof. Calo] What I would say is that law and technology, as a discipline, tends to proceed in a kind of regular way. Not all of it, but a lot of it. And it takes this formula: there’s this emerging technology, in this case machine learning, in particular adversarial machine learning, and there’s this existing set of laws. And that change in technology winds up causing us to have to revisit the adequacy of laws that, after all, were written at a time of very different technology, in this case dramatically so, since it was the mid 80s. And so, as David said, we really tried to draw from the technical expertise of the computer scientists to tell us what the precise state of the technology is. But then David and I, as legal scholars, our responsibility was to see where there was a mismatch between the affordances of the technology and the law itself, and where in particular that creates concern: concern about researchers and research being chilled, or concern about companies not having adequately strong security standards.

[Haley] I’m curious about how you bridge that gap in understanding between the computer scientists and yourselves in terms of the laws around this?

[Prof. Calo] David and I, he as a student and I, at the time, as a faculty member, were both members of the Tech Policy Lab at the University of Washington. And the Tech Policy Lab puts together interdisciplinary teams to work on cutting-edge issues of technology policy. And so we have a lot of experience integrating multiple disciplines. Of the founding members of the Tech Policy Lab, one is a co-author, Yoshi Kohno, Tadayoshi Kohno, who is in computer science and engineering, and the other is Batya Friedman, who is an information scientist, and the three of us have had a lot of experience with this. And so we often bring in the technologists to carefully define the technology and talk about what is contingent about it, rather than just defining it. Only once we really have our arms around what the technology is, what has to be true about it, what doesn’t have to be true about it, do we structure a question that David and I can investigate. But David, if you want to reflect on your experience working across disciplines, I’d love to hear it, too.

[Mr. O’Hair] Yeah, I mean, obviously, I echo all of that. And I think the Tech Policy Lab is a very unique grouping of people. What I saw, which was really interesting, is that the technology people we worked with were very interested in the legal aspect of it as well, and very willing to dive into that, and to go through, especially in drafting this paper, multiple iterations of their explanations of the technology to make them more digestible for the legal community as a whole. So it was very interesting working with them and seeing a very different world, and how those two can match up.

[Allan] We were about to ask you for a working definition of the technology so that we, and our listeners, could follow your argument. But I find it interesting that you mentioned that you bring in the computer scientists to say what is contingent about the technology, as opposed to just defining it. So can you expand a little bit on that distinction, and on the interesting parts of the technology that informed the rest of the paper?

[Prof. Calo] Sure. I mean, imagine this: somebody asks you, can you regulate AI? You’re going to be able to easily deflect that question if you want to, and say, how can you regulate AI? AI is not a thing like a train, you know what I mean? AI’s not like a genie in a bottle where you open it up and just tell the AI what to do; you can’t regulate it, that doesn’t make sense. And so it’s easy to deflect if you define it too generally, versus if you say, what would be an appropriate regulatory response to Amazon Echo and Alexa in the home? Then you might say, okay, well, let’s think about that for a moment. What is the relationship between the consumer and Amazon? How does Amazon interact with third parties, like the government, and so on? You need to define something at the appropriate level of generality before you can figure out what your legal analysis has to do, and I think that’s not often done in law and technology. People just say, I’m looking at augmented reality, I’m looking at artificial intelligence. And so what we do is we talk about it like this: artificial intelligence is a set of techniques that aim to approximate some aspect of human or animal cognition using a machine. Machine learning is a subset of artificial intelligence that works, roughly, by training a model with a ton of data and then applying that trained model to perform a pattern recognition function on either data that’s been held back or novel, new data. And adversarial machine learning is when you attempt to ascertain how a model works, and then you purposely fool it. And we have on the paper a person who is getting their Ph.D. in adversarial machine learning, the lead author, Ivan, and he was able to carefully describe how it works right now and how it could work. Because again, if what we say is we’ve got to regulate deep fakes, and what we mean by deep fakes is the precise way that deep fakes work right this moment, the exact technique of GANs, and that’s how we’re going to regulate, then technology just subtly shifts out from under you. If you get the level of generality wrong, if you describe the technology at an inappropriate level, you fail to see what about it is contingent and what about it is core. And I think that’s critical to doing a legal analysis. So what we figured out was, look, for the purposes of law, what’s novel here is the ability to get a system to misbehave, to behave in the way that you want rather than the way it’s designed, not by hacking into it, not by bypassing a security protocol, but through some other means that uses the understanding of the model purposely against it. And we listed out a number of different examples of adversarial machine learning techniques that get at the core of what the law cares about, which is whether the CFAA is outdated. So anyway, I found that you can understand technology well enough through conversation, but there’s nothing like having a dedicated set of co-authors who really get it.
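
[Editor’s note: to make the idea of purposely fooling a model concrete, here is a minimal sketch of one well-known adversarial technique, the fast gradient sign method. It is illustrative only and is not taken from the article; the PyTorch classifier, labels, and epsilon value are assumptions.]

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Fast Gradient Sign Method: nudge each pixel slightly in the direction
    that most increases the model's loss, so a human sees essentially the same
    picture but the classifier's prediction can flip."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of the loss gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Hypothetical usage with an off-the-shelf image classifier:
# model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
# adv = fgsm_perturb(model, img.unsqueeze(0), torch.tensor([label]))
# model(adv).argmax()  # may now differ from the prediction on the original image
```

[Nothing in this sketch bypasses a security protocol; the attacker only queries the model and shapes the input, which is the gap in traditional definitions of hacking that the authors describe.]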

[Haley] I have a follow-up, because I thought one thing that was pretty amazing about the article was that this is essentially the first attempt to create a taxonomy for these kinds of legal issues. And so I’m curious whether, since the paper has been published, you have found others who have begun to use some of the definitions that you wrote about and published.

[Prof. Calo] I will just quickly say that I have had conversations with lawmakers, state and federal, and explained to them the inadequacy of the definition and the like. And I anticipate that if there were new laws related to security standards or anti-hacking, this would be useful. I don’t know if you wanted to add anything else, David, but that’s sort of it. As far as I know, since the publication last year, there just hasn’t been much. But now I think NIST is looking at definitions of security, and I think this conversation will be in the mix there, right? Because if you were to define security today and leave out adversarial machine learning, that would be a very significant omission.

[Mr. O’Hair] Yeah, I have not been having discussions with state and federal lawmakers. So no update on my end.

[Prof. Calo] The team that wrote this paper has been circulating different reports and news stories that have cited to it. And so I think it’s percolating in the conversation. But I don’t know that any new laws or standards have been written that have relied upon it yet, and we wouldn’t expect that just yet.

[Haley] Got it. And then for our listeners who may not be familiar with the Computer Fraud and Abuse Act, could you give a high-level overview of the law, perhaps within the context of your article, and also explain why you describe it as outdated?

[Mr. O’Hair] Sure, Ryan, I can take the first stab at this. Basically, the CFAA, or Computer Fraud and Abuse Act, can be thought of as the nation’s umbrella anti-hacking law. If it sounds broad, it’s because it is broad. It set out in the 80s to prevent hacking of primarily government computers and financial institution computers, which it labeled “protected computers.” But in reality, the law also protects against, let’s just say, hacking of anything affecting interstate commerce that can be defined as a computer, which has been read to cover roughly anything that can be defined as a computer: cell phones, personal computers, etc. So that’s the general overview; it’s to prevent hacking and damaging intrusions into computers, particularly government computers. And we looked at it for our paper, I think, in part because it’s often the law that gets cherry-picked into any conversation, or any prosecution, of any intrusion into anything connected to the internet, or anything that can be defined as a computer. There’s a lot of criticism of it because it’s been applied very broadly. And that’s why we looked at it: for these techniques that we’re looking at, and that the computer scientists are looking at, on their face a reasonable person might not say that’s hacking. But when we look at this law that’s been applied so broadly, that’s really when the question comes up: does that fit into this law?

[Prof. Calo] The Computer Fraud and Abuse Act is the national anti-hacking law. But we were careful, I think, in the paper to show that this idea of having to bypass a security protocol as the definition of hacking is much broader. We even cite to international cybersecurity standards where that was the case. And the Computer Fraud and Abuse Act is written on purpose to be very broad indeed, again, as David said, so that any time you exceed authorization or engage in unauthorized access on a protected computer, if it’s a government computer, that’s it, no more; it’s already a violation. If it’s a non-government computer, then you have to cause some additional mischief. And it’s so broad along all these different dimensions that it is very interesting that it technically does not reach adversarial machine learning. Having been written so broadly, and having survived relatively well for such a long time, nearly 40 years, it’s remarkable that the direction that security, that hacking, is going is finally something it doesn’t reach. And so that’s why we really just used the CFAA as a stand-in for the idea that the law treats bypassing a security protocol as the core definition of hacking.

[Haley] I’m curious about the most commonplace or seemingly harmless ways a regular user could potentially be hacking artificial intelligence that uses machine learning.

[Prof. Calo] The innocuous example we use in the paper is: imagine that you go to an airport, and you’re wearing makeup that thwarts facial recognition, and that facial recognition is being used on a government computer. If we come to define the CFAA so broadly, then tricking the system with your makeup, or your hat, or whatever it is you’re doing, becomes hacking. So that’s one extreme. On the other extreme, imagine that you have adequate security as a company, but you deploy a system that is incredibly easy to game. It’s not that anyone can hack into it; everything’s zipped up tight, all the ports are blocked, you’re ready for a buffer overflow, you’re ready for whatever, but it’s just super easy to game. And then people get hurt, their money gets stolen, they get physically hurt, some opportunity is denied to them, because of gaming. Why shouldn’t the company be said to fall below the requisite standard of security? Why isn’t their security just as poor as it would be if they left a peer-to-peer connection open on their sensitive documents? And so it’s really interesting: all you have to do is subtly shift the way that you steal information, the way that you cause the system to behave the way that you want rather than the way it’s intended, and all of a sudden you don’t have an adequate security standard in terms of holding companies accountable, but you also have a law that is quite over-inclusive. And I’ve got to say, just quickly, it’s especially dangerous, especially pernicious, because in the world of artificial intelligence, one of the primary ways that we hold systems and companies and governments accountable is that third party researchers come in and show that the system is biased: it disproportionately has a negative impact on people of color, on women. Or it’s not safe, because it overreacts or under-reacts to stimulus in the world, and it’s a driverless car. These things are done by third parties who come in; they’re journalists, they’re researchers. And if that can be construed as hacking, that’s a problem, right? Because it chills the accountability mechanism we have to determine whether AI is fair and safe. Now, ultimately, the case law itself could settle this, because while there’s no research exemption to the Computer Fraud and Abuse Act, there has been some movement in the courts, hopefully in that direction. But as David mentioned, it’s the ambiguity that’s the problem. It’s the ambiguity that’s the problem: the fact that you don’t know. I remember attending a talk by a prominent journalist, one of the main people holding these systems accountable, the lead author on the ProPublica story. She made a joke at an event where she was describing her methodology, and she said, I’m probably in violation of federal law. She went ahead, but which journalists maybe didn’t go ahead?

[Haley] I’m wondering if you could expand a bit more on the ways in which the biases of machine learning come up that researchers would be able to look into, if legislators, policymakers, and folks with that power were to take some of the suggestions and considerations in your article into account. How would that work to solve some of the issues that you see within machine learning that are biased, racist, sexist, and things like this?

[Prof. Calo] David?

[Mr. O’Hair] Yeah, I think a really good example, specifically talking about the bias that could be present, is the audit-based testing we discuss in the paper; in this example it was for real estate. And it’s a good example because what is currently a very hot debate topic for the CFAA is violating a website’s Terms of Service, and whether just that, in itself, is a violation. For all this audit-based testing, you inherently have to create fictitious profiles of people on these websites, which is clearly violating any good website’s Terms of Service. And that is a barrier in itself, right there, to pursuing that very basic type of research that can very easily verify whether there’s bias in a system. I think that’s a very good and very simple example of the type of research that is getting potentially chilled by a federal anti-hacking law, something a journalist might not consider when simply doing this audit testing.
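
[Editor’s note: as a rough illustration of what audit-based testing measures, here is a small sketch, not taken from the article; the groups, records, and numbers are hypothetical. Matched test inquiries that differ only in a protected attribute are sent, and the outcomes are compared.]

```python
# Hypothetical audit results: each record is (group, got_positive_response),
# where the two groups' test profiles are identical except for the protected attribute.
responses = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

def response_rate(records, group):
    """Share of matched inquiries from `group` that received a positive reply."""
    outcomes = [ok for g, ok in records if g == group]
    return sum(outcomes) / len(outcomes)

gap = response_rate(responses, "group_a") - response_rate(responses, "group_b")
print(f"Response-rate gap: {gap:.0%}")  # a large, persistent gap is evidence of bias
```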

[Haley] This may be a bit of a silly question, but as I was reading the examples of how we can trick robots, I thought about a time when my friends and I held my phone up to each of us and just kept saying a product over and over again, like “soy milk, soy milk,” to see if we could trick the advertisements into being all soy milk. And, you know, of course, as a lay person, I’m thinking, oh, I can’t get prosecuted for this. But I’m curious about how your article, or how the folks you’ve worked with, would categorize that situation.

[Prof. Calo] That’s a good one. Yeah. I mean, imagine that you were doing it because you had some financial gain. You see what I mean; you weren’t just doing it to screw around. You were doing it because you’re trying to sell soy milk, or you’re trying to get clicks on soy milk, or make it look like it’s been clicked upon, because you’re an intermediary for a soy milk advertiser. Then you’re doing it for financial gain; you’re gaming the system. You’re probably violating Terms of Service, but you’re not hacking into anything; you’re not bypassing security protocols. So that would be a very good example, because the thing about the Computer Fraud and Abuse Act is you do have to do some mischief; unless it’s a government computer, you have to do something. The mere violation of a terms of service, without more, shouldn’t land you in CFAA territory. But certainly a lot of prosecutors, and some courts, seem to think it should. If you read the letter of the law, though, if it’s not a government computer, you’ve got to do something pernicious. So if we just tweak your hypothetical a little, I think we get a good example.

[Haley] So for the folks listening at home, if you’d like to trick a robot, there you go. All right. So another question I have is that in the article, you explained that the problem of malicious actors attempting to attack machine learning models is still technologically young. We know that the law is often reactive rather than preemptive. How can lawyers, computer scientists, and policy advocates convince decision makers to act on research like yours before these issues become major threats?

[Mr. O’Hair] Yeah, I’ll punt that one to Ryan. He has much more experience interacting with the key players in this arena.

[Prof. Calo] I mean, I don’t know, David; I think, hopefully, more and more. So David graduated a couple of years ago now. When did you graduate, David, in 2000 and…?

[Mr. O’Hair] 2019. I just started my second year as an associate.

[Prof. Calo] Second year. Yeah, exactly. So, people who are listening to this podcast should hire David, because he’s got great experience and knowledge. You know, the truth is that technology provides particular challenges for policymakers, and they’re not always the ones that you think. A very common claim about technology is that it outpaces the law, you know what I mean? Technology’s too fast for the law, and the law is always trying to keep up, and so on. I’ve got to say that, in my experience, that’s not usually the problem. Usually the problem is not that technology is outpacing law; it’s that there’s a problem of political will, or that somebody has misunderstood the technology or only understood one particular stakeholder’s conception of the technology. And so if you look at the Computer Fraud and Abuse Act, here is a law that technology is outpacing, the CFAA, 40 years later. Do you see what I mean? And if the CFAA doesn’t change, to update what security means 40 years later, it’s not because technology was so fast and it takes 40 years to change the law. It’s because law enforcement loves the Computer Fraud and Abuse Act, and they don’t want to see any changes to it. That’s what the reason would be. Another example is drones. We’ve been told that the laws can’t keep up, that drones are so amazing the law can’t keep up. Setting aside the fact that we strapped cameras to pigeons in World War One, the truth of the matter is that the jurisdictions that went ahead with drone testing for drone delivery, all the places we were told the drone industry was going to disappear to because the United States could not let enough people do drones and because of the Federal Aviation Administration, they’re no closer to drone delivery. Why? Because technology’s super hard. Robots are super hard, and it has nothing to do with the law. And so I think what policymakers need to do is appreciate that in regulating technology, they have to hear from a range of stakeholders, particularly impartial ones who don’t have a robot dog in the race. And also, they should be mindful of the fact that usually the relevant values are already well understood and established, and the role of the law really is not to completely reinvent everything, but rather to make sure that those important values are put forward. And so here, what I’d like them to do is to say, look, we’ve got to hold companies responsible for releasing dangerous products into the world. I mean, if your product is biased in 2020, which many, many, many are? Come on, how many decades has it been since the Civil Rights Movement? Are you kidding me? If your technology is dangerous because you haven’t thought about that edge case where someone could put stickers on the ground and cause a driverless car accident, the law should throw the book at you. These values are not new. Too often in this domain, people talk in terms of the way the technology disrupts the norms and the values, and how we need to think about where we are now and everything else; no, that is a delay tactic. Right?
We know what our values are, we know what we should be doing, and the question is, do we have the political will to do it? Anyway, I think the utility of this is that, now that David and our co-authors and I have said the hacking law is inadequate, and here’s exactly why, and so on, then if they ever get the political will, they can pick up this document and have a roadmap for how to fix it. But we can’t give them the political will. We can’t un-entrench law enforcement interests. That’s just not in our capability as scholars.

[Allan] So I think you’ve alluded to some of this in the interview already, but we wanted to ask, for yourselves and your co-authors: why this article now? And to the extent that it is a call to action to industry, policymakers, and lawmakers, why this call to action, and this article, in this moment in history?

[Mr. O’Hair] Yeah, I think one definite motivating factor is that you have now, in this moment in history, as you said, all these different, very consumer-accessible services and products that are using this potentially vulnerable, potentially biased, oftentimes biased technology. That technology kind of requires the type of testing that right now is ambiguous, that right now may be violating the CFAA. Rewind the clock 15 years: while people could have predicted that this would be an issue, 15 years ago ten-year-olds weren’t downloading services or apps that actively employ this kind of vulnerable technology. We’re there now; so much of what we use every day is, in my opinion, requiring this type of testing and this type of research to spot vulnerabilities. So I think it was just the confluence of the technology evolving to a place where it is reaching consumers in large numbers now, and so we should probably greenlight the security researchers a little more. That’s my opinion of why the paper came about at this time.

[Prof. Calo] Yeah, I agree with that. I mean, both greenlight the research and also hold companies and governments accountable for their products. Those are the two big ones, because it’s a double-edged sword: on the one hand, you worry about over-enforcement against researchers, and on the other hand, you worry about having a definition of security that is not comprehensive. So in an ideal world, you want to make sure that the Tesla can’t easily be gamed, right? But at the same time, you want people to be able to test the Tesla to make sure that it isn’t easily gamed. Isn’t that interesting? We don’t quite know how to thread that needle. The other thing I would say is that the Tech Policy Lab, generally speaking, tries to pick technologies that are far enough along to be obviously important and to have societal impact, but not so far along that they’re entirely path dependent. That’s why we select the technologies that we do, and why we wrote about augmented reality a bunch of years ago, why we wrote about adversarial machine learning one or two years ago, and why we’re writing about brain-machine interfaces now. You don’t always do that correctly, and sometimes you get it wrong, but you try to pick technologies that are far enough along to know that they matter, but not so far along that the ship has sailed, so to speak.

[Allan] Thank you for that. And for our final question, we wanted to shift gears a little bit and ask both of you for your advice. Many of our listeners are students, and we wanted to ask if you have any advice for law students who are interested in pursuing careers in technology law.

[Mr. O’Hair] Ryan, go for it.

[Prof. Calo] Oh, okay. Well, I’ll start off, and then David can go. I think that most schools, but not all of them, have some technology offerings these days. And if your school happens not to… I mean, if you’re in a place like Berkeley, or the University of Washington, or Georgetown, or Colorado, there are all these places that have these incredible law and technology programs that are just world class. But there are other places that have one or two just really good faculty members. And even if you’re at a place that doesn’t have a faculty member who thinks of themselves as being law and technology, look across campus. If you’re part of a university system, there are other departments, and I increasingly hear stories that warm my heart about students, on their own initiative, going to another student in another department, someone they met at, like, a party back when we used to have parties, or that they knew through somebody, or they just admired something they saw about them, whatever it happens to be, and saying, hey, do you want to team up on a paper? I’ve seen that more and more, and I love it, because we created structures like the Tech Policy Lab in order to facilitate that from the top down, by putting students into teams, but it doesn’t have to be top down. It could also just be peers going to each other. And so what I would say is, look at what your school has to offer, and look across campus, and find the people – not just the technologists, but the sociologists of technology, the anthropologists of technology, the people who are interested in these questions, the science and technology studies folks, communications, information science – and see if maybe you can team up, because there’s a lot that the law can lend. The law is interesting because, you know, technology is fascinating and you can look at it from any number of lenses, but where do our decisions get carried out and implemented into policy that affects real people’s lives? One of my favorite quotes from a law review article is from Robert Cover’s “Violence and the Word,” which opens with, “legal interpretation takes place in a field of pain and death.” And that dramatic opening sentence, while having little to do with technology, says everything we need to know about law, which is that when judges interpret law, when a judge makes a decision about somebody, she’s making a decision about their property, about their liberty, about their status, in ways that affect real people and their real lived experience. And law and policy is the place where we figure out what actually to do, what to make mandatory, what to prohibit, and where to invest massive resources that still continue to be greater than anything that individual, corporate, or foundation donors can do. And so I would just say, you have a lot to add: you’re studying something that people don’t understand, and you’re studying the levers of power. So you should go to those other places and say, I want to learn from you. But, you know what, I’ll give you a great example. I’m going to go off; this is like a whole thing for me.
But so you can edit this out, just be like, I cannot believe Professor Calo would not shut up about this. So just be like, edit it out!

[Mr. O’Hair] Oh, man, imagine being in class with him, man, I know.

[Prof. Calo] Exactly.

Exactly, exactly.

But think about this for a moment. You have a bunch of people who, for years, have been studying fairness, accountability, and transparency within algorithms. These are people who are thinking about what makes an algorithm fair, how we balance transparency, efficacy, and so on. You know who’s been thinking about that for hundreds of years? People who work in civil, criminal, and constitutional procedure. What do people do in criminal and civil procedure other than try to figure out how to balance competing values like efficiency and fairness, devising conceptions of fairness, devising mechanisms for ensuring accountability and participation? What have we been doing for hundreds of years? I get so angry whenever I hear technologists say things like, well, lawyers are fine, but they don’t build anything. They say lawyers don’t build anything, and I’m like, yeah, we built the rule of law. You’re welcome. Not that it isn’t falling apart right now, around us. But the point of the matter is that you have value, they have value, so go meet them. All right, I’ll stop.

[Mr. O’Hair] Yeah, of course, I echo that. And I don’t have the best tips, because I happened to be at a school that had this great program already completely built around me, and all I had to do was plug in. So obviously, if anything like that is around, take advantage of it. And even outside of that, I worked with other professors outside of the Tech Policy Lab on tech-policy-adjacent work that they were interested in but just didn’t have time to research or delve into. So take whatever you can that’s even tech-policy-esque. There’s not a ton of opportunity to leave law school and go straight into a tech policy advisory or academia role, but wherever you end up, you can carry your policy knowledge over. I do IP litigation, and obviously there’s a ton of technology involved, not a ton of policy, but when there’s any CFAA issue at work, I’ve been on all of those so far at the firm, which has been super cool. So you can bring whatever specialty you have developed into wherever you end up, because not a ton of private practitioners at law firms are sitting around thinking about policy. I’d say wherever you end up, even if it doesn’t happen to be strictly tech policy, still continue to be that person, because there’s a ton of need for it in firms as well.

[Haley] I have to ask, because you mentioned you attended class with Professor Calo, and I was wondering if y’all could expand on how you met each other, and then how you’ve continued to grow your relationship, now publishing a paper together?

[Prof. Calo] I mean, look, we have a lot of great students, and those who are both excellent and express a deep interest in technology policy, students like David, we try to get them involved. What we do at the Lab is we handpick a couple of just great students and pair them up in interdisciplinary teams, which is largely a function of faculty-driven research interests, but it also has to dovetail with what the students themselves want to do. So yeah, I had David in class, and he was great, and when he expressed an interest in tech policy, I was excited, and then I put him on a team to look at adversarial machine learning and law. His contributions were very significant, and so of course he became a co-author, but it’s pretty organic. I wish we could provide more of that, but the Tech Policy Lab is small, and at any given time we only have a few students. Now we also have a center around misinformation, called the Center for an Informed Public. At the moment I have one LL.M., or no, I’m sorry, one Ph.D. candidate in law, and one student, who are working at the Lab, and then I have another student who is working at the Center. So it’s just a function of scale, but many other law students at the University of Washington are able to come to our events and get involved and things like that. I wish it were more commonplace that people could have David’s experience of being able to actually roll up their sleeves, do deep research across the interdisciplinary lines, and have a paper come out of it. That would be the gold standard, but I think we’d have to scale up in ways that are difficult in order to accomplish that. But at Berkeley, you all have a lot of that. You have so many people working on this stuff, and so many of them are oriented across the whole campus. You’re really lucky to be where you are.

 

[Allan] Professor Calo and David, thank you so much for joining us today and sharing your insights with us. We’re very grateful and our listeners will be as well. Thank you.

[Mr. O’Hair] Yeah, of course. Thanks for having us.

[Prof. Calo] Thank you so much.

[Haley] Thank you for listening! The BTLJ Podcast is brought to you by Podcast Editors Andy Zachrich and Haley Broughton. Our Executive Producer is BTLJ Senior Online Content Editor Allan Holder. BTLJ’s Editor-in-Chief is Emma Lee.

[Allan] If you enjoyed our podcast, please support us by subscribing and rating us on Apple Podcasts, Spotify, or wherever you listen to your podcasts.

[Haley] If you have any questions, comments, or suggestions, write us at btljpodcast@gmail.com. If you would like to read Professor Calo and Mr. O’Hair’s article, or explore more of BTLJ’s scholarship, it is available online at btlj.org.

[Allan] The information presented here does not constitute legal advice. This podcast is intended for academic and entertainment purposes only.