Ángel Díaz Explains How Data On Content Moderation Can Expose Harms Against Marginalized Users

Taking On Tech is an informative series that explores artificial intelligence, data science, algorithms, and mass censorship. In this installment, For(bes) The Culture examines how social media policies disproportionately target marginalized groups.

The Brennan Center for Justice recently released a report entitled “Double Standards in Social Media Content Moderation.” The document examines how discretionary enforcement subjects marginalized communities to heightened scrutiny while failing to protect them from harm. Vulnerable groups are disproportionately flagged for policy violations and typically receive harsher penalties, yet they can rarely appeal a platform’s decisions. Meanwhile, users targeted by abuse report that their harassers operate with impunity.

According to the report, platforms like Facebook, Twitter, and YouTube are setting a dangerous precedent by deprioritizing ethical content moderation. Experts believe these companies and their subsidiaries restrict activists’ speech under the guise of community safety. As the platforms continue to roll out new rules, marginalized users experience escalating abuse. This raises the question, “Who exactly are these guidelines protecting?”

Social media companies are being urged to realign their operations around the safety of marginalized groups. As expected, executives respond to these concerns by denying any implicit bias in their approach to content moderation. Without transparency, oversight, and regulatory governance, proving discrimination can seem daunting.

Ángel Díaz is a former counsel in the Liberty & National Security Program at the Brennan Center for Justice. He co-authored the Brennan Center’s latest report and recently became a lecturer at the UCLA School of Law. For(bes) The Culture spoke with Díaz about the report’s findings.

For(bes) The Culture: What inspired you to advocate for marginalized communities through policy work?

Ángel Díaz: Between undergrad and law school, I worked in the legal department at Google. That’s where I was first exposed to policy and to the way private rules shape public discourse. After law school, I worked at a number of law firms, where I helped tech companies design deliberately broad, vague policies with built-in flexibility in how enforcement works. After leaving my last firm, I joined the Brennan Center, where I focused mainly on two areas. One was police surveillance and its impact on free expression and equal protection. The other was content moderation and how those decisions affect marginalized groups.

For(bes) The Culture: Do you think social media platforms are really invested in the safety of marginalized communities?

Ángel Díaz: These policies lay out broad ideas about freedom of expression, human rights, and user safety. But if you look closely, they reflect a series of decisions about whose voices get protected and who bears the burden. Those decisions tend to align pretty neatly with existing power dynamics.

For(bes) The Culture: How would you say platform policies contribute to this dynamic?

Ángel Díaz: Most of these policies are designed to protect public figures and powerful constituencies. When platforms choose to enforce overly broad rules, those rules are typically applied only to marginalized groups. I acknowledge that content moderation is difficult and that mistakes are inevitable; you can’t always get it right. But platforms have chosen to protect public figures, elected officials, and other powerful actors, and those people are often the leading sources of harassment, hate speech, and violence. Focusing on high-profile accounts is a more logical place to intervene. It’s certainly better than what we have now: sweeping enforcement against the marginalized and a deferential, light-touch approach for the powerful.

For(bes) The Culture: What do you think prevents corporate executives from making equitable enforcement an immediate priority?

Ángel Díaz: The decision-makers at the top are privileged and have a fairly limited view of the world, which limits their understanding of the dangers marginalized groups face. Leadership doesn’t believe the threats are as serious as we say they are until something like the Capitol insurrection unfolds. Then it becomes undeniable.

For(bes) The Culture: In the report, you emphasize the importance of data collection. What role does data play in reducing the abuse marginalized groups face?

Ángel Díaz: Detailed data can help us document harm. Those insights can empower advocates and attorneys to sue these companies and hold them accountable for the harms they perpetuate. Right now, platforms decide what to share and which metrics to report. We have no real way of checking their homework. If we don’t understand what is actually going on, we can’t advocate intelligently for regulation.

For(bes) The Culture: Facebook recently justified withholding data requested for public-interest research by citing concern for user privacy. What do you make of that statement?

Ángel Díaz: Facebook said, “We are subject to a consent decree with the Federal Trade Commission, so we cannot share this type of information or facilitate this type of research because it would be a violation.” Protecting users’ privacy is valuable, and I think everyone agrees on that. But pretending the FTC decree prevents them from facilitating research in the public interest is just not true. The FTC recently sent Facebook a letter saying, in effect, “Do not use this consent decree to justify the action you have taken.” The FTC only asked them to inform users about how their data is used. It is misleading to pretend there are limitations that prevent them from coming up with solutions. The current regulatory system is very permissive about what platforms can share or withhold.

*** For(bes) The Culture reached out to a Facebook spokesperson for comment. No official statement had been provided at the time of publication. ***

For(bes) The Culture: Your report addresses the quiet way in which Facebook announces policy updates. Do you think there should be stricter disclosure requirements?

Ángel Díaz: It’s almost a full-time job just to understand what is and isn’t allowed on their platform. I have to look for clues and piece it all together. They have a blog, but sometimes they make announcements on Twitter, which isn’t even their platform. Often they make an announcement but never incorporate it into their Community Standards. It isn’t difficult to notify users of changes; there are more than enough channels for doing so. Their decision not to use them reflects an attempt to fly under the radar, because they know there can be backlash.

For(bes) The Culture: How well do you think social media companies have moderated COVID-19-related content?

Ángel Díaz: Warning labels are pretty useful interventions; they can break the binary of either leaving content up or taking it down. It’s useful to have contextual tools that educate people. However, Facebook’s decision to put a COVID-19 warning label on all COVID-related content isn’t very helpful. People just get desensitized. At this point, I ignore the notice every time I see it. It doesn’t say, “Hey, this is actually misinformation about COVID. You should learn more about it here.” It’s more like saying, “Well, we warned you. Whatever happens next is up to you.” That kind of warning label almost feels like it’s set up to fail.

For(bes) The Culture: What do you think of government intervention? Should there be federal laws to promote transparency?

Ángel Díaz: Right now, their disclosures mostly focus on how much content they removed. That doesn’t necessarily reflect success. Which communities are affected by all these removals? It’s not difficult to track. They could say, “Hey, we removed X pieces of content that were hate speech against Black people. We removed X pieces of content that harassed women.” There are ways to gather better data on who is affected by these decisions and how.

For(bes) The Culture: What steps should be taken if the government decides to intervene?

Ángel Díaz: From a legislative point of view, there are many difficult questions. As someone whose work includes police surveillance, I’m nervous about the idea of simply handing platform data over to the government. We have a long history of law enforcement and intelligence agencies targeting civil rights activists and protesters, and I would be concerned about enabling that kind of surveillance in the name of the public interest. That is not to say we shouldn’t involve the government in regulation. We just have to be prudent about how we enable public-interest research. There should be guidelines that say, “You must not give this information to the police or intelligence agencies.” If a researcher is studying the data for surveillance purposes, the answer should be no; we can be strategic in overseeing research mandates so they don’t inadvertently facilitate surveillance.

For(bes) The Culture: What could this surveillance look like?

Ángel Díaz: In countries like Israel, there are internet referral units, where government officials sit at computers all day flagging content to be removed. This allows governments to have platforms delete content that they are not legally permitted to remove themselves. America has a parallel system of censorship that is largely invisible. Platforms have a strong incentive to comply with these requests. One of the things we call for in the report is full transparency about whether platforms are removing content at the behest of a government agency. If an agency was involved in a removal, we also need to know which rule it invoked in requesting it.

For(bes) The Culture: What would you say to platform executives who still deny the experiences of marginalized users on their platforms?

Ángel Díaz: As much as social media companies love telling people of color that we’re wrong about how these systems work, our experiences always turn out to be true. Build products and policies that actually support marginalized communities. Given all that we contribute to these platforms, it’s long overdue.
