Last month, Twitter users uncovered a disturbing example of bias on the platform: An image-detection algorithm designed to optimize photo previews was cropping out Black faces in favor of white ones. Twitter apologized for this botched algorithm, but the bug remains.
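Twitter has described the cropper as saliency-based: a model trained to predict where a viewer looks first scores every pixel, and the preview is centered on the highest-scoring point. The Python sketch below is a minimal, hypothetical version of that pipeline; the function names are invented and the saliency model is a stand-in, but it shows how any bias in the model’s scores passes straight through to the crop.

```python
import numpy as np

def crop_to_preview(image, saliency_model, crop_h=200, crop_w=200):
    """Center a fixed-size preview crop on the most 'salient' pixel.

    If the saliency model systematically scores some faces lower,
    those faces are cropped out of every preview. No step here has
    to intend that outcome for it to happen, automatically, at scale.
    """
    saliency = saliency_model(image)        # per-pixel "interest" scores
    y, x = np.unravel_index(saliency.argmax(), saliency.shape)
    top = int(np.clip(y - crop_h // 2, 0, image.shape[0] - crop_h))
    left = int(np.clip(x - crop_w // 2, 0, image.shape[1] - crop_w))
    return image[top:top + crop_h, left:left + crop_w]
```

Nothing in the cropping logic mentions race; the disparity rides in entirely on the learned scores, which is part of what makes a bug like this so hard to see from the outside.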

Acts of technological racism might not always be so blatant, but they are largely unavoidable. Black defendants are more likely to be unfairly sentenced or labeled as future re-offenders, not just by judges but also by a sentencing algorithm advertised in part as a remedy for human biases. Predictive models methodically deny ailing Black and Hispanic patients access to treatments that are regularly provided to less sick white patients. Examples like these abound.
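In the health-care case, researchers traced the disparity to a proxy: the model predicted patients’ future medical costs and treated cost as a stand-in for medical need, even though the system historically spends less on Black patients who are just as sick. The simulation below, with entirely made-up numbers, is a minimal sketch of how a “race-blind” proxy reproduces the gap.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical population: the same distribution of true illness per group.
illness = rng.normal(50, 10, n)                 # true medical need
group = rng.choice(["A", "B"], n)

# Historical spending runs lower for group B at the same level of need,
# mirroring the unequal access documented in the published audit.
spending = illness * np.where(group == "A", 1.0, 0.7) + rng.normal(0, 5, n)

# A model that ranks patients by predicted *spending* as a proxy for *need*
# flags far fewer group-B patients, and only the very sickest of them.
threshold = np.quantile(spending, 0.9)          # top 10% get extra care
for g in ("A", "B"):
    flagged = (spending > threshold) & (group == g)
    print(g, "patients flagged:", int(flagged.sum()),
          "| mean true illness:", round(float(illness[flagged].mean()), 1))
```

The model never sees race; the inequality enters through the label it is trained to predict.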

These sorts of systematic, inequality-perpetuating errors in predictive technologies are commonly known as “algorithmic bias.” They are, in short, the technological manifestations of America’s anti-Black zeitgeist. They are also the focus of my doctoral research exploring the influence of machine learning and AI on identity development. Sustained, frequent exposure to biases in automated technologies undoubtedly shapes the way we see ourselves and our understanding of how the world values us. And these biases don’t affect people of all ages equally.

Importantly, algorithmic biases likely have long-term psychological effects on teenagers, many of whom spend almost every waking minute online: A 2018 Pew Research Center study found that 95 percent of teens have access to a smartphone, and 45 percent describe themselves as being online “almost constantly.” Hispanic teens, in particular, spend more time online than their white peers, according to the same study. Given America’s reliance on remote learning during the pandemic, adolescents are likely spending even more time on the internet than they did before.

Research suggests that being on the receiving end of discrimination is correlated with poor mental-health outcomes across all ages. And when youth of color experience discrimination, their sleep, academic performance, and self-esteem might suffer. Experiencing discrimination can even alter gene expression across the life span.

Algorithmic racism frequently functions as a sort of technological microaggression: one of those thinly veiled, prejudiced behaviors that often happen without the aggressor intending to hurt anyone. But the algorithmic variety differs from human microaggressions in several ways. For one, a person’s intent might be hard to pin down, but the computational models imbued with algorithmic bias can be far more opaque. Several common machine-learning models, such as neural networks, are so complex that even the engineers who design them struggle to explain precisely how they work. Further, technological microaggressions can occur far more frequently than in-person ones, both because of how much time teens spend on devices and because programmed systems repeat themselves automatically and tirelessly. And everyone knows that human opinions are subjective, but algorithms operate under the guise of computational objectivity, which obscures their existence and lends legitimacy to their use.
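To make that opacity concrete: even a small feedforward network, tiny by production standards, spreads one decision across hundreds of thousands of weights, none of which means anything on its own. The sketch below uses random weights as a stand-in for a trained model and arbitrary layer sizes; the only point is the count.

```python
import numpy as np

rng = np.random.default_rng(1)
layer_sizes = [1024, 512, 256, 2]       # modest by production standards

# Random weights stand in for a trained model; the point is scale.
weights = [rng.normal(size=(m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def predict(x):
    """Run one input through the network; return a binary verdict."""
    for w in weights[:-1]:
        x = np.maximum(x @ w, 0)        # hidden layers with ReLU activations
    return int((x @ weights[-1]).argmax())

verdict = predict(rng.normal(size=1024))    # e.g. show/hide, approve/deny
print(sum(w.size for w in weights), "parameters behind that one verdict")
# 655,872 parameters, and no single one of them "is" the explanation
```

There is no single line of code to point to; when bias is present, it lives in the numbers.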

Teens of color aren’t the only ones at risk of computer-generated racism. Living in a world controlled by discriminatory algorithms can further segregate white youths from their peers of color. TikTok’s content-filtering algorithm, for example, can drive adolescents toward echo chambers where everyone looks the same. This risks diminishing teens’ capacity for empathy and depriving them of opportunities to develop the skills and experiences necessary to thrive in a country that’s growing only more diverse.
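TikTok doesn’t publish its ranking system, but the echo-chamber dynamic is easy to reproduce with the simplest recommender imaginable: score every post by similarity to what a user has already watched, then serve the closest matches. Everything in the sketch below, the embeddings, the function names, the numbers, is invented; what it shows is the feed’s variety collapsing after a few rounds of “more of what you liked.”

```python
import numpy as np

rng = np.random.default_rng(2)
catalog = rng.normal(size=(5_000, 16))          # hypothetical post embeddings
catalog /= np.linalg.norm(catalog, axis=1, keepdims=True)

def recommend(seen, k=10):
    """Return the k unseen posts most similar to the user's average taste."""
    taste = catalog[seen].mean(axis=0)
    taste /= np.linalg.norm(taste)
    scores = catalog @ taste                    # cosine similarity to taste
    seen_set = set(seen)
    ranked = [i for i in np.argsort(scores)[::-1] if i not in seen_set]
    return ranked[:k]

seen = list(rng.choice(len(catalog), 3))        # a few initial views
for _ in range(20):                             # each session narrows the feed
    seen += recommend(seen)

print(f"feed spread: {catalog[seen].std(axis=0).mean():.3f}  "
      f"whole catalog: {catalog.std(axis=0).mean():.3f}")
```

No one programmed the feed to segregate; homogeneity is simply what similarity maximization converges to.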

Algorithmic racism exists in a thriving ecosystem of online discrimination, and algorithms have been shown to amplify the voices of human racists. Black teens experience, on average, more than five instances of racial discrimination a day, much of it happening online and therefore mediated by algorithms. Radicalization pipelines on social platforms such as YouTube can lead down rabbit holes of videos designed to recruit young people, radicalize them, and inspire them to commit real-world violence. Before the internet, parents could discourage their kids from spending time with bad influences by monitoring their whereabouts. Today, teens can fraternize with neo-Nazis and spread eugenics propaganda while just feet away from a well-intentioned but unaware parent. Part of the problem is that parents don’t see the underlying structures of popular platforms, such as YouTube, Facebook, and Reddit, as strangers that can take a teenager by the hand and guide them deeper and deeper into disturbing corners of the web. There is no “stranger danger” equivalent for a recommendation engine.
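The rabbit hole is that same similarity logic run in sequence. YouTube’s actual recommender is vastly more complex and not public, but a toy model captures the ratchet: if “related” videos sit near the current one, and more provocative material reliably holds attention a little better, a greedy “up next” choice drifts, click by click, toward the extreme end of the catalog. Every number and assumption below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
intensity = rng.random(2_000)   # hypothetical 0-1 "provocativeness" per video

def up_next(current):
    """Greedy 'up next': among related videos, pick the most engaging one.

    Related videos sit within a narrow band of the current one, and the
    engagement proxy rewards intensity, so each pick is a half-step more
    extreme: a ratchet, not a single leap.
    """
    related = np.flatnonzero(np.abs(intensity - intensity[current]) < 0.05)
    return related[intensity[related].argmax()]

video = int(intensity.argmin())   # start with the mildest clip in the catalog
for _ in range(30):               # thirty autoplays later...
    video = up_next(video)

print(f"started at intensity {intensity.min():.2f}, "
      f"ended at {intensity[video]:.2f}")
```

No single hop looks alarming, which is why a parent glancing at the screen sees nothing; the drift shows up only across the whole session.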

Big tech companies know that all sorts of racism, algorithmic bias among them, are a problem on their platforms. But equity-focused projects at these companies have historically amounted to little more than lip service. For example, the day after a Google blog post highlighted the perspectives of three employees on bias in machine learning, news broke that Google was rolling back its diversity and inclusion programs. And instead of fixing a bug in Google Photos that retrieved images of Black people for a query on gorillas, the service simply stopped labeling any images, even those of actual apes, as monkeys, gorillas, or chimpanzees. Still, some work from internal teams led by researchers such as Spotify’s Henriette Cramer seems promising. Across the industry and academia, women of color have paved the way in addressing technological racism and algorithmic bias. Organizations such as NYU’s AI Now Institute and the Algorithmic Justice League, out of the MIT Media Lab, are developing guidelines for ethical artificial intelligence. But research on algorithmic bias typically fails to account for age as a dimension of inequity, a gap my own research aims to address.

Ignoring age is a misguided approach because teens are psychologically different from adults. By adulthood, our identities are mostly well formed and durable, but adolescents are still deeply immersed in the process of figuring out who they are and where they fit in the world. At the same time, teens’ still-developing brains have less intense fear responses than adults’, which makes them more curious, but also impulsive and likely to engage in risky behavior. Further, instances of race-based trauma have far longer-lasting effects when experienced in adolescence, a period when neural architectures are being rewired and are most susceptible to environmental inputs.

Algorithms powerfully shape development; they are socializing an entire generation. And though U.S. regulations currently fail to account for the age-based effects of algorithms, there is precedent for taking age into consideration when designing media policy: When the Federal Communications Commission began regulating commercials in children’s TV in the 1990s, for example, the policies were based on well-documented cognitive and emotional differences between children and adults. With input from developmental psychologists, data scientists, and youth advocates, 21st-century policies around data privacy and algorithmic design could likewise be constructed with adolescents’ particular needs in mind. If we instead continue to downplay or ignore the ways that teens are vulnerable to algorithmic racism, the harms are likely to reverberate through generations to come.

Originally published by The Atlantic.
