SXSWorld
Issue link: https://sxsw.uberflip.com/i/842052
KATE CRAWFORD
Why Equality and Social Impact Must Be Part of the A.I. Future
By Susan Elizabeth Shepard

Before she became a prominent researcher called on by the White House and Microsoft for her analysis of the social and ethical implications of artificial intelligence (AI) and machine learning, Kate Crawford was a self-described "music nerd" as both a fan and a musician. Her experience as a performer and television host in her native Australia explains why she's such a dynamic speaker, as she regularly makes the case for why those creating and utilizing machine learning systems should be taking their cultural impacts into account. Crawford has served as co-chair for the White House's AI Now symposium, emphasis on the "Now." Her primary concern is not with what might happen but with what is already happening with AI, and specifically who is being affected.

"If there's a big overarching theme that I think we really need to think a lot about, it's social inequality. How does AI contribute to or ameliorate social inequality?" says Crawford. "We're already starting to see these kinds of distinctions being made where some communities are experiencing the downsides more than others, but how is this already feeding into societies that already have issues with social inequality?"

As big data and machine learning play an increasingly large role in institutional decision-making, Crawford says issues of transparency and accountability have to be addressed, citing a ProPublica investigation that found black defendants in Florida were twice as likely as white defendants to be wrongly predicted as "at risk of violent recidivism" by a risk assessment algorithm. This translated into real-world differences in sentencing and probation conditions, influenced by a tool whose workings defendants (and prosecutors) didn't grasp.

When untested systems and tools are rushed into social institutions without an understanding of their workings, the public trust is at stake. As continued interest and investment pours into AI and new applications arise, Crawford says that a highly visible, poorly functioning system can do much more damage to the field than a delay to make the system more fair. "What we've been seeing in what I would call almost precursor AI systems, we're seeing forms of sort of racial, class and gender bias, and that should concern all of us," she says.

A recent editorial Crawford co-authored with law and technology researcher Ryan Calo for Nature calls for establishing the means to measure the effect of AI on the populations in which it's being used. "There's a lot of money being spent on AI right now, a lot of research being done and a lot of new startups, specifically looking at the potential around applying AI in everyday life," says Crawford. "There's this huge gap in terms of thinking about what are the actual impacts of AI ... How do we assess them? How do we see and test and verify and measure the impacts on human populations?"

Creating the protocols for accomplishing these tasks from scratch is a massive undertaking, made even more difficult by the opacity of some systems, such as deep neural nets, whose workings aren't yet fully understood. Professional organizations and conferences such as Fairness, Accountability, and Transparency in Machine Learning (FAT-ML) are places where these questions get hashed out. "We've got some amazing people who are really now just starting to come together and think about how do we build fair systems? And that's a really new question. That's not a question that we've had to face before," she says.

Crawford likes the cross-disciplinary audience available at SXSW. "You have data scientists talking to artists talking to social scientists talking to philanthropists talking to designers," she says. "That's a really nice space to have conversations about how we make these systems better."

Last year, the Future of Life Institute, whose board members include Stephen Hawking and Elon Musk, released an open letter articulating their priorities for AI research, with an accompanying document that closed with a warning about the dangers of a superintelligence that escapes human control. Crawford considers this something of a luxury concern. "When it gets down to it, being concerned about the creation of a superintelligent AI that will dominate and exploit humanity is something that people who are already very wealthy and generally white and male tend to worry about, because if you want to look at technology that's creating domination and producing unfair results, we already see that happening," she says. "The things that are currently already affecting marginalized populations are of a much bigger concern to me than killer robots of the future."

But of course, one of the reasons that imagery is so prominent in our minds is the cultural representation of AI, which is why "Skynet," from the Terminator movies, is such effective shorthand for a malevolent superintelligence. That's why Crawford would love to see culture makers in the audience when she speaks: "I would love to see Sam Esmail think about the next Mr. Robot as being about what will happen in society when AI is part of everything we do … where it's like the water pipes, it's the infrastructure, it's the traffic lights, it's phones, it's when you walk into a school, when you walk into the shop, when you walk in the street, it's omnipresent."

Kate Crawford will be a Featured Speaker in the Intelligent Future Track at SXSW 2017.

