Internet companies should provide real-time data on disinformation, Lords told

by Jeremy

The UK’s upcoming Online Safety Bill should place requirements on social media and internet companies to provide real-time updates about suspected disinformation, fact-checking experts have told a House of Lords committee.

To limit the spread of mis- and disinformation online, the UK government’s current approach – set out in its full response of December 2020 to the Online Harms White Paper published in April 2019 – is to require online companies to state explicitly, in clear and accessible terms and conditions, what content and behaviour is acceptable on their services, including how they will handle content that is legal but could still cause significant physical or psychological harm.


Companies in Category 1 – those with the most significant online presence and high-risk features, a group likely to include Facebook, TikTok, Instagram and Twitter – will also be under a legal requirement to publish transparency reports on the measures they have taken to tackle online harms.

To comply with the duty of care, the companies covered by the legislation will have to abide by a statutory code of practice drawn up by Ofcom, which the government’s full response officially confirmed will be the online harms regulator.

If they fail in this duty of care, Ofcom will have the power to fine companies up to £18m or 10% of annual global turnover, whichever is higher, and will also be empowered to block non-compliant services from being accessed within the UK.

However, addressing the House of Lords Communications and Digital Committee on 23 February as part of its ongoing inquiry into freedom of expression online, Full Fact CEO Will Moy said real-time information from internet companies about suspected disinformation is needed to enable informed public debate.

“We need real-time information on suspected misinformation from the internet companies, not as the government is [currently] proposing in the Online Safety Bill,” said Moy, adding that Ofcom should be granted similar powers to the Financial Conduct Authority to demand information from businesses that fall under its remit.

“We need independent scrutiny of the use of artificial intelligence [AI] by those companies and its unintended consequences – not just what they think it’s doing, but what it’s doing – and we need real-time information on the content moderation actions these companies take and their effects,” he said.
