Leaked documents claim Facebook’s algorithm ‘promoted toxic and hateful content by favouring posts that got lots of angry emojis over those that received lots of likes’

  • Facebook is claimed to have made misinformation and clickbait more prominent
  • Algorithm allegedly used reaction emoji to push site’s more provocative content
  • Five emojis of ‘love,’ ‘haha,’ ‘wow,’ ‘sad’ and ‘angry’ were launched five years ago 
  • Algorithm meant emojis were treated as five times more valuable than ‘likes’

Facebook spent three years making misinformation and clickbait more prominent in users’ news feeds to keep them more engaged on its network, it was claimed today.

The firm’s algorithm, which decides what people see in their news feed, was allegedly programmed to use the reaction emoji as a sign to push more provocative content.

The five emojis of ‘love,’ ‘haha,’ ‘wow,’ ‘sad’ and ‘angry’ were launched five years ago to give users an alternative way to react to content aside from the traditional ‘like’.

But a ranking algorithm meant emoji reactions were treated as five times more valuable than ‘likes’, according to internal papers revealed by the Washington Post.

The idea behind this was that high numbers of reaction emojis on posts were keeping users more engaged – a crucial element to Facebook’s business model.
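The weighting described in the leaked papers can be illustrated with a minimal sketch. The function, names and structure below are assumptions for illustration only – not Facebook’s actual code – with the single detail from the documents being that an emoji reaction counted five times as much as a ‘like’:

```python
# Illustrative sketch only: a hypothetical ranking score in which every
# reaction emoji counts five times as much as a plain 'like', as the
# leaked papers describe. All names and weights here are assumptions.
LIKE_WEIGHT = 1
REACTION_WEIGHT = 5  # each emoji reaction treated as 5x a like


def engagement_score(likes: int, reactions: dict) -> int:
    """Score a post: likes count once, emoji reactions count five times."""
    return LIKE_WEIGHT * likes + REACTION_WEIGHT * sum(reactions.values())


# Under such a weighting, a post with a handful of 'angry' reactions can
# outrank a post with far more likes.
calm_post = engagement_score(likes=100, reactions={})
angry_post = engagement_score(likes=10, reactions={"angry": 30, "wow": 5})
```

On these made-up numbers the angry post scores 185 against the calm post’s 100, which is the dynamic the internal researchers reportedly flagged: provocative content rises even when fewer people actively ‘like’ it.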


However, the company’s own researchers and scientists found that posts prompting angry reactions were far more likely to include misinformation and low-quality news.

One staffer allegedly wrote that favouring ‘controversial’ posts such as those making people angry could open ‘the door to more spam/abuse/clickbait inadvertently’.

Another is said to have replied: ‘It’s possible’. In 2019, its data scientists confirmed the link between posts sparking the angry emoji and toxicity on its platform.

This means Facebook stands accused of promoting the worst parts of its site for three years – making it more prominent and seeing it reach a much bigger audience.

It would have also had a negative effect on the work of its content moderators who were trying to reduce the amount of toxic and harmful posts being seen by users.

Facebook whistleblower Frances Haugen told MPs yesterday that the firm is ‘unquestionably’ making online hate worse because it is programmed to prioritise extreme content

The discussions between staff were revealed in papers given to the Securities and Exchange Commission and provided to Congress by the lawyers of Frances Haugen.

How Facebook’s profits shot up as daily active users hit 1.93billion

Facebook profits shot higher as the number of daily active users on its site and apps hit 1.93billion on average in September.

This was 6 per cent up on last year.

Around 3.6billion people used Facebook or one of its other platforms – which include WhatsApp and Instagram – last month.

Facebook’s profits shot 17 per cent higher to £6.7billion in the third quarter amid the jump in users.

But the company’s revenues fell short of Wall Street forecasts as Apple’s new privacy rules hit sales.

Since April, Apple has required all apps to ask users if they want to be tracked, which has made it harder for advertisers to target the right audiences. Facebook said Apple’s new regime would continue to hit business for the rest of the year.

Facebook’s total revenue – most of which comes from advertising – rose to £21billion in the third quarter.

This was £400million below expectations – though it was more than a third higher than the same period of last year when companies had put their marketing budgets on ice during the pandemic.

The whistleblower said in London just yesterday that Facebook was ‘unquestionably’ making online hate worse because it is programmed to prioritise extreme content.

Miss Haugen told MPs and peers that bosses at the firm were guilty of ‘negligence’ in not accepting how the workings of their algorithm were damaging society.

The American data scientist claimed the tech giant was ‘subsidising hate’ because its business model made it cheaper to run angry and divisive adverts.

She said there was ‘no doubt’ the platform’s systems would drive more violent events because its most extreme content is targeted at the most impressionable people.

Miss Haugen also issued a stark warning to parents that Instagram, owned by Facebook, may never be safe for children as its own research found it turned them into addicts. 

She also told the joint committee on the draft Online Safety Bill that it was a ‘critical moment for the UK to stand up’ and improve social media.

The Bill will impose a duty of care on social media companies to protect users from harmful content and give watchdog Ofcom the power to fine them up to 10 per cent of their global turnover.

Facebook is currently battling a crisis after Miss Haugen, a former product manager at the firm, leaked thousands of internal documents that revealed its inner workings.

Its founder Mark Zuckerberg has previously rejected her claims, saying her attacks on the company were ‘misrepresenting’ the work it does.

Yesterday the committee highlighted how the tech giant had previously claimed it removes 97 per cent of hateful posts on the platform.

But leaked research showed its own staff estimated that it only took down around 3 to 5 per cent of hate speech and 0.6 per cent of content that breached its rules on violence and incitement.


Asked about hate speech, Miss Haugen said: ‘Unquestionably it is making hate worse.’ 

She said Facebook was ‘very good at dancing with data’ to make it seem as though it was on top of the problem but was reluctant to sacrifice even a ‘slither of profit’ to make the platform safer.

The committee also heard how Facebook’s research found that 10 to 15 per cent of ten-year-olds were on the platform – despite the minimum age being 13.

Lord Black of Brentwood noted that while the Bill exempts legitimate news publishers from its scope, there is no obligation on Facebook and other platforms to carry such journalism in the way they would be obliged to observe the regulator’s codes.

AI would effectively be making these decisions, he said, and asked if Miss Haugen trusted AI to make these types of judgment. 

The thumbs up ‘Like’ logo is shown on a sign at Facebook’s offices in Menlo Park, California

Miss Haugen said the Bill should not treat a ‘random blogger’ the same way as a recognised news source as this would dilute users’ access to high quality news on the platform.

She said: ‘I’m very concerned that if you just exempted across the board you will make the regulations ineffective.’ 

She further warned that ‘any system where the solution is AI is a system that’s going to fail’.

A Facebook spokesman said last night: ‘We’ve always had the commercial incentive to remove harmful content from our sites. 

‘People don’t want to see it when they use our apps and advertisers don’t want their ads next to it.’

MailOnline has also contacted the company today for comment on the latest report about emojis. 
