Despite its sophistication, AI can and does discriminate and amplify bias. Our digital society needs to act swiftly: we need more transparency, ways to correct mistakes, and means to enforce accountability. If we succeed, we can build an innovative *and* just digital society.
Artificial intelligence is anything but simple. It’s a broad term for a range of sophisticated technologies. And each of those technologies has nuanced, complicated impacts on society and our everyday lives.
Despite this, AI is often oversimplified in mainstream conversation. The tl;dr on AI? It’s either an existential threat, just a few lines of code away from world domination. Elon Musk famously tweeted: “If you're not concerned about AI safety, you should be. Vastly more risk than North Korea.” Or, AI is a panacea: a means to a utopian, post-work world where humans relax and machines make difficult decisions on our behalf.
In reality, AI is neither inherently good nor evil. And its impacts on society aren’t so obvious. AI can be an algorithm on social media, determining what story you read next. It can be the code inside a smart car, determining when to brake. Or, it can be the technology in a hospital, diagnosing whether a patient has melanoma.
In his talk, Mark Surman will examine why the popular tl;dr understanding of AI is misguided and dangerous. By oversimplifying AI, we overlook the more nuanced problems: Is that algorithm recommending content that is misleading or addictive? Is that smart car more likely to recognize (and thus brake for) white faces instead of brown faces? Is that cancer-detection technology trained using data from men, and not women? Just how convincing are deepfakes, and how do we identify them? These problems aren’t as apparent as a six-foot Terminator. But their impact can be just as devastating.
Mark will also examine the positive work underway to make AI more understandable and responsible. Like an AI watchdog agency in New York City. Like an AI “justice league” fighting bias in algorithms. Like Mozilla's Responsible Computer Science Challenge. And like a former Silicon Valley engineer who's speaking out about the need for more AI accountability.
If we want better machine decision making, we can’t reduce AI to tl;dr tropes.