- Facebook deliberately flouted competition and privacy laws, behaving like “digital gangsters,” according to a damning report by lawmakers.
- British politicians concluded an 18-month investigation into disinformation, and said democracy was “at risk” from foreign countries trying to influence UK elections through social media ads.
- Lawmakers called for an independent regulator to make sure tech firms like Facebook take down illegal content.
- They also accused Facebook of blocking their investigation.
- Facebook said it shares the concerns of the committee and is taking steps to improve its processes.
British lawmakers have accused Facebook of behaving like “digital gangsters”, abusing its dominant position in social networking, and acting as if it were “ahead of and beyond the law.”
The comments come in a devastating report on fake news by British Parliament’s Digital, Culture, Media and Sport Committee, made up of cross-party UK politicians and led by Conservative MP Damian Collins.
Facebook deliberately flouted privacy and competition laws and should be subject to new regulation, the committee wrote.
Lawmakers accused Facebook CEO Mark Zuckerberg of showing “contempt” for the UK Parliament, and for a larger gathering of international parliaments, by refusing three times to give evidence.
The report is the culmination of a sprawling, 18-month parliamentary investigation into disinformation and fake news online, which heard from 73 witnesses and received 170 written submissions.
It examined Facebook’s role in the Cambridge Analytica scandal and its overall privacy practices. The report also touched on possible Russian interference in the Brexit referendum.
Collins said: “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms we use every day. Much of this is directed from agencies working in foreign countries, including Russia.
“The big tech companies are failing in the duty of care they owe to their users to act against harmful content, and to respect their data privacy rights.”
The committee claimed that Facebook had deliberately “sought to frustrate” its work, putting up executives who were poorly briefed on areas such as election interference.
The report calls for major changes to the way the UK regulates its elections and technology, including:
- Stricter rules forcing tech firms to take down illegal content on their sites
- A code of ethics that defines “harmful content”
- An independent regulator to oversee enforcement of that code
- New laws around political advertising online
UK Culture Secretary Jeremy Wright will head to the US this week to meet the heads of major tech firms, including Zuckerberg, to discuss harmful content online.
Facebook denied it had breached competition and privacy laws, and said it hadn’t found evidence of foreign interference in the Brexit referendum.
In a statement, Facebook’s public policy manager Karim Palant said the company shares the committee’s concerns and is taking steps to improve its processes. He said:
“We share the Committee’s concerns about false news and election integrity and are pleased to have made a significant contribution to their investigation over the past 18 months, answering more than 700 questions and with four of our most senior executives giving evidence.
“We are open to meaningful regulation and support the committee’s recommendation for electoral law reform. But we’re not waiting. We have already made substantial changes so that every political ad on Facebook has to be authorised, state who is paying for it and then is stored in a searchable archive for 7 years. No other channel for political advertising is as transparent and offers the tools that we do.
“We also support effective privacy legislation that holds companies to high standards in their use of data and transparency for users.
“While we still have more to do, we are not the same company we were a year ago. We have tripled the size of the team working to detect and protect users from bad content to 30,000 people and invested heavily in machine learning, artificial intelligence and computer vision technology to help prevent this type of abuse.”