California social media rules — a harder sell in 2021 – East Bay Times
Posted by Zayna Syed | CalMatters
California lawmaker Cristina Garcia was hanging out with a friend when she asked her friend’s 8-year-old why she refused to eat.
“I said, ‘Why don’t you eat lunch?’ And she says, ‘Well, because I have to stay thin and look a certain way to be a YouTube star and get followers,’” said Garcia, a Democrat from Bell Gardens who serves in the state Assembly.
After spending 13 years teaching, she’d seen eating disorders and body image concerns affect her students – and she blamed altered images on social media for fueling the problem. “I think this affects you even as an adult, but I think it’s especially harmful when you’re a young child, when you’re still trying to figure things out,” she said. “You are bombarded with these images, which set unrealistic expectations.”
Her bill would require social media companies to disclose whether certain images have been retouched. If someone – influencers in particular – artificially smoothed their skin or slimmed their body in an image posted to make money, the platform would have to say so, and even specify what was changed.
But California state lawmakers who have introduced bills to further tighten social media practices have had little success so far this year.
Legislators delayed consideration of Garcia’s Assembly Bill 613 until next year after social media companies pushed back. The companies said it is difficult for them to even know when an image has been edited, because the editing often happens in a third-party tool such as a photo-editing app.
Other bills similarly moved to a two-year track include one that would ban features like autoplay for children unless parents opt in, and another that would require social media companies to report obscene or violent posts on their platforms.
Michael Karanicolas, executive director of UCLA’s Institute for Technology, Law & Policy, said restricting social media is difficult to judge from a constitutional perspective because laws and court rulings are constantly in flux.
“It is not always easy to draw a clear line between what the government can force you to disclose … and what the government cannot force you to say,” Karanicolas said. “That doesn’t mean they won’t regulate the space, because the state government may think it’s worth rolling the dice and seeing whether the law survives a constitutional challenge.”
And majority Democrats gave the cold shoulder to a Republican-sponsored bill that would have classified social media platforms as a new kind of “public square” and barred them from restricting speech that is lawful under the First Amendment. A real-world example of such a restriction: Twitter banning the account of former President Donald Trump.
Regulations – too far or not far enough?
California’s pioneering consumer privacy law, which went into effect last year, was the first of its kind in the country to give people more control over their digital data. It grants Californians the right to learn, free of charge, what information companies collect about them, and companies must give users the option to opt out of the sale of their data.
Critics say the law doesn’t go far enough to contain social media platforms.
They have called for increased protection for children online and for regulation to prevent the spread of misinformation and hate speech on social media.
Although tech companies spend less on lobbying than traditional heavyweights like oil interests and unions, they still wield enormous influence in the state because of their economic weight. California’s surprising budget surplus this year is due in part to the huge success of the Golden State-based tech giants during the pandemic.
A “look under the hood” of social media
Gabriel hopes his proposed law will achieve two things: encourage “good behavior” and enable policymakers to better understand how misinformation and hate speech spread on social media and influence hate crimes.
The proposed legislation would force social media companies “to let people look under the hood a bit, because there is just a lot of confusion and a lot of skepticism about what they’re doing right now,” Gabriel said. “I firmly believe in transparency, because it encourages people to behave the way they would want the public to expect them to behave.”
Gabriel noted that the bill would target companies with revenue over $100 million in the past year. He said, “I think companies like that can do what we’re asking of them pretty easily, and I think a lot of the information we’re asking for is information they look at every day, maybe even more often.”
A Facebook spokesperson emailed CalMatters that Facebook already publishes “regular transparency reports, including our quarterly Community Standards Enforcement Report.” The report includes data on how many posts violated Facebook’s content standards and what actions the company took in response. The reports can be found on the company’s Transparency Center website.
Facebook recently announced that Ernst & Young will independently audit these reports. “So we’re not grading our own homework,” said the Facebook representative.
Giving bad actors a blueprint?
Business interests like the Internet Association, which represents big tech companies like Facebook, Twitter, and Google, claim that Gabriel’s bill could undermine the goal of reducing misinformation and hate speech by giving “bad actors” a detailed blueprint for evading detection.
“While these requirements are well-intentioned, they will ultimately allow scammers, spammers and other bad actors to take advantage of our systems and moderators,” the association argued in comments cited in the Assembly’s analysis of the bill.
These groups also warn that Gabriel’s bill could open social media companies to lawsuits over routine decisions made by content moderators, and perhaps even over how effective corporate moderation practices are – which the platforms predict could discourage them from investing in content moderation.
An industry spokesman for the Internet Association, who would only discuss the bill without being named, noted that it would not apply to some social media companies involved in spreading misinformation and hate speech. For example, the bill as written would cover the high-end exercise bike company Peloton, but not the right-wing social media sites Parler or Gab, as they don’t meet the $100 million revenue threshold.
One of the bills that lawmakers held until next year, AB 1545 from Buffy Wicks, a Democratic Assemblymember from Berkeley, aims to add more parental controls over autoplay features and in-app purchases.
Do Parents Need Big Tech Support?
Wicks said that autoplay on sites like YouTube can lead children to offensive content. Her example: If parents play a Thomas the Tank Engine video on YouTube, an hour later their child might be watching a video about train wrecks, depending on what YouTube’s algorithms count as related content.
Her bill would require sites like YouTube to make autoplay for children opt-in for parents. An earlier version – which died – would have created broader regulation.
Wicks said the bill has been popular with both Democrats and Republicans, especially parents: “Any parent who has navigated technology with their children these days knows this problem.”
The Internet Association opposed the bill. It argued, for example, that the bill would require social media companies to disclose whether a person is making money from a post, which could be difficult to determine. And it said the bill’s requirement for an annual audit of compliance with the federal children’s online privacy rule is unnecessary, since the attorney general is already empowered to enforce that rule.
“Just because you can create a product for young people doesn’t mean you should,” said Marvin Deon, vice president of Common Sense Media, a nonprofit that provides families with media literacy resources and age-based ratings of movies, television shows and books. “We have to make sure that we are keeping an eye on the Constitution, but also that we are not shirking our duty to protect children.
“That means looking at the addictive nature of some of the designs on these platforms, like autoplay, where a kid can watch a Disney cartoon, and 20 minutes later a video of someone selling toys, and then 20 minutes later a video showing how someone blows something up with some kind of explosive.”
Tech and social media companies often counter that parents are responsible for monitoring and regulating children’s online and social media use. But David Monahan, campaign manager for the Campaign for a Commercial-Free Childhood, a nonprofit that advocates for children’s privacy, disagrees.
Monahan said laws are needed until companies stop manipulative and unfair practices such as encouraging children to spend excessive amounts of time online, share personal information, view advertisements, and make in-game or in-app purchases.
“We see companies pointing the finger at families and parents and saying, ‘You’re the gatekeeper. Why don’t you protect your kids?’ And that’s really unfair,” Monahan said. “Parents need support from Big Tech.”