Building responsible and equitable algorithms is one of the toughest challenges technologists face today. Expert Cathy O'Neil explains why and what to do about it.
Ryan Fuller, Co-Founder and CEO at Round, sat down with Cathy O’Neil, a Harvard and MIT-trained mathematician, algorithmic auditor, and author of the New York Times Bestseller Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.
Fuller and O’Neil chatted about the power and inherent dangers of predictive algorithms, how to design algorithms that are both efficient and fair using the ethical matrix created by her algorithm-auditing organization ORCAA, what responsibility tech companies have when things go wrong, and more. Here are five key takeaways from their hour-long conversation:
The basic mechanics of an algorithm are fairly simple (and sound innocuous). A data scientist creates a set of rules that tells a computer how to interpret a set of data. Feed historical data into those rules and suddenly you have what seems like an accurate predictor of future success. But as we’ve seen throughout history, predictive algorithms that affect very basic functions, such as who gets a mortgage or who gets a raise, have evolved into “weapons of math destruction”.
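To make those mechanics concrete, here is a minimal sketch in Python — our illustration, not an example from the conversation — using a small, hypothetical set of historical mortgage decisions. It shows how a model fit to that history simply learns to reproduce it, bias included:

```python
# Minimal sketch of the mechanics described above (hypothetical data):
# a model fit to historical decisions learns to reproduce whatever patterns,
# including biases, those decisions contain.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical historical mortgage decisions: income, debt ratio, and a
# group flag that should be irrelevant but correlates with past denials.
n = 1_000
income = rng.normal(60, 15, n)        # in $1000s
debt_ratio = rng.uniform(0.1, 0.6, n)
group = rng.integers(0, 2, n)         # 0 or 1

# Past approvals depended on income and debt, but also (unfairly) on group.
approved = (income - 80 * debt_ratio - 10 * group + rng.normal(0, 5, n)) > 20

X = np.column_stack([income, debt_ratio, group])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# The "accurate predictor of future success" simply echoes the historical pattern:
print("predicted approval rate, group 0:", model.predict(X[group == 0]).mean())
print("predicted approval rate, group 1:", model.predict(X[group == 1]).mean())
```

Nothing in the code is malicious; the unfairness arrives with the data.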
Reflecting on the five years since she released her New York Times Bestseller, Cathy believes we’ve made progress: many consumers no longer believe that algorithms are scientific instruments to be trusted blindly. But there’s more work to do, since many algorithms (mainly B2B ones) still remain unchecked and unvetted.
Humans and algorithms have one thing in common: they’re imperfect. The difference is that even when a human has decent internal values, an algorithm isn’t designed to operate on those values. Out of the box, it operates on historical data, which tends to be biased. So what can you do to create fairer and more equitable algorithms? Cathy dives into her ethical matrix framework:
The goal of an ethical matrix is to identify all stakeholders in a given algorithm, both internal and external (i.e. everyone who will be impacted by it one way or another), to have a meaningful conversation about how algorithms aren’t inherently equitable, and, more importantly, to establish what it would mean to each of those stakeholders for the algorithm to succeed or fail. By building and implementing tests, you can see the extent to which a problem is actually happening, which gives you an opportunity to mitigate it. Learn more at orcaarisk.com.
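As a rough illustration of what such a test might look like — this is our sketch, not ORCAA’s tooling — a team could compare an algorithm’s selection rates across the stakeholder groups identified in the matrix and flag large gaps, in the spirit of the “four-fifths rule” used in US employment law:

```python
# Illustrative only. One simple per-group test a team might implement:
# compare selection rates and flag groups whose rate falls well below the best.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose selection rate is below threshold * the best group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical audit log of an algorithm's decisions: (group, was_selected)
log = ([("A", True)] * 80 + [("A", False)] * 20 +
       [("B", True)] * 50 + [("B", False)] * 50)
print(selection_rates(log))        # {'A': 0.8, 'B': 0.5}
print(disparate_impact_flags(log)) # {'A': False, 'B': True}
```

A raised flag doesn’t prove discrimination on its own, but it shows where a problem may actually be happening and where the mitigation conversation should start.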
What can data scientists do when they see something, or are asked to do something, they find immoral? Right now, not a whole lot beyond invoking whistleblower protections. While Cathy believes that a version of the Hippocratic Oath for data scientists would be too vague, she does think there’s an opportunity to form a professional society, like lawyers have with the American Bar Association or electrical and electronics engineers have with the IEEE.
Sure, professional societies are far from a panacea in any industry. But data scientists should not have to choose between their morals and their careers. As Cathy says, though, this is a complicated, two-sided conversation. “Nobody backs us up when we say we shouldn’t do something,” she adds. “But we also never get in trouble for doing something.”
It’s no secret that success at Facebook is defined by profit: the longer you stay on the platform and the more things you click on, the more money Facebook makes. Hence, Facebook’s algorithm emphasizes and prioritizes content that divides us, anchors us, and makes us fight, and it minimizes the kind of content that makes us realize we’re wasting our time on the platform. But is it possible for Facebook to optimize for something else, perhaps for truth or decency?
“Every time you hear someone like Mark Zuckerberg or one of those guys say that artificial intelligence is going to fix the problems, you should be rest assured that it’s not.”
Let’s say that you’re a manager at a big tech company and you’re in charge of performance reviews. To determine who’s “productive,” you use a pre-existing HR algorithm that defines a productive employee as someone who gets high marks during their year-end performance review cycle. On paper, that might make sense—until you consider the biases that exist in the performance review process.
If we accept an algorithm as fact, we end up skipping some really important questions about the biases that may exist across an entire organization. Algorithms can make important business decisions easier (and faster), but when we trust them blindly, we ultimately end up ignoring human problems.
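One concrete example of what “not trusting blindly” can mean in the performance-review scenario above: before treating year-end scores as ground truth for “productivity,” check whether the scores themselves differ systematically across groups. The data and group names below are hypothetical:

```python
# Hypothetical sanity check before treating review scores as ground truth for
# "productivity": if the label itself differs systematically across groups,
# any model trained on it inherits that bias.
from statistics import mean

# Hypothetical (group, year_end_score) pairs pulled from HR data.
review_scores = [
    ("team_a", 4.2), ("team_a", 3.9), ("team_a", 4.5),
    ("team_b", 3.1), ("team_b", 3.4), ("team_b", 3.0),
]

by_group = {}
for group, score in review_scores:
    by_group.setdefault(group, []).append(score)

averages = {group: mean(scores) for group, scores in by_group.items()}
print(averages)  # roughly {'team_a': 4.2, 'team_b': 3.17}

# A persistent gap like this is a question for humans to investigate (is one
# group genuinely less productive, or is the review process itself biased?)
# before the scores are fed to any algorithm as "fact".
```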