Social media companies ‘not trusted to fight online abuse’
The public does not trust social media companies to tackle the problem of online abuse and hateful content, a survey suggests.
It also found that a majority of people in the UK support greater regulation of tech companies.
The study, by the anti-abuse campaign group Hope not Hate, found that 74% of respondents did not trust social media companies alone to decide what counts as extremist content or disinformation when it appears on their platforms.
Online abuse remains a key public concern: 73% of respondents said they were worried about the amount of such content on social media.
There is also strong public support for stricter regulation forcing tech companies to act against harmful content: 71% agree that they should be held legally responsible for the content on their platforms, and 73% say they should be required to remove such content when it appears.
The government’s draft Online Safety Bill, which would place a duty of care on platforms to protect their users, with heavy fines for those that fail, is due to be scrutinised by MPs and peers this month.
The proposals would also force platforms to identify “legal but harmful” content and set out how they would monitor it on their sites, which has raised concerns among some about a possible chilling effect on free speech.
However, Hope not Hate’s research suggests the public is supportive of the move. 80% of respondents said that while they believe in freedom of expression, there must be limits to stop the spread of extremist content on the internet.
“Allowing people to post hateful and obnoxious content online is not a way of protecting freedom of expression, but rather risks sowing division and reinforcing the hideous views of a tiny minority,” said the group’s director of research, Joe Mulhall.
“Currently, online speech that causes division and harm is often defended on the grounds that removing it would undermine freedom of expression.
“In reality, allowing such speech to be amplified only degrades the quality of public debate and harms the groups that such speech targets. This defence diminishes freedom of expression in both theory and practice.
“As our survey shows, there is clearly an overwhelming consensus that hateful content, even if it’s legal, is too visible on social media platforms.
“The only way to genuinely ensure freedom of expression for everyone is to protect those currently being attacked or marginalised on the basis of characteristics such as race, gender or sexual orientation.
“Keeping legal but harmful content within the scope of the Online Safety Bill is therefore the best way to ensure that social media companies have effective systems and processes in place to reduce the promotion of hatred and abuse while protecting freedom of expression.”