Peter Wright gives evidence over new Online Safety Bill

News organisations such as MailOnline should be given a 'positive exemption' from the provisions of the new Online Safety Bill, an industry representative said today.

Peter Wright, editor emeritus of DMG Media, said social media platforms should not moderate journalistic content when it is produced by 'recognised news publishers'.

He told a Parliamentary committee that he was 'hugely sceptical' about Facebook's teams of fact checkers, some of whom appeared to be 'single-issue lobby groups'.

Speaking to the Joint Committee on the Draft Online Safety Bill, he also pointed out that Google and Facebook's algorithms and artificial intelligence are 'very poor'.

Mr Wright added that DMG's titles, including MailOnline, were already 'fully subject to the law', and that a publisher could find itself complying with the Independent Press Standards Organisation (Ipso) code of conduct while breaching a US tech firm's terms of service.

He wants a positive exemption for journalism from the provisions of the bill, which aims to establish a new regulatory framework to tackle harmful content online.

Facebook and Google will fall within the scope of the new bill, which will give UK regulator Ofcom the power to hand out multi-million-pound fines to tech companies.

Peter Wright, editor emeritus of DMG Media, whose brands include MailOnline and the Daily Mail, gives evidence to the Joint Committee on the Draft Online Safety Bill today

Mr Wright was asked today whether the protection given to journalistic organisations from the duty of care principles within the bill was sufficient.

He said: 'My reading of the bill is that we're protected in the first case by the fact that the duty of care does not apply to our content, both on our own websites but also when it's distributed on search and social media, so there's no obligation on the platforms to censor our content.

Social media bosses face tough criminal sanctions for hosting extremist content 

Social media bosses could face 'criminal sanctions with tough sentences' if they allow extremist content to appear on their platforms, Boris Johnson said yesterday.

He told MPs that the forthcoming Online Safety Bill would tackle web giants if they allow 'foul content' to circulate.

And he promised the long-awaited legislation would make quick progress in the Commons, with the bill receiving its second reading before Christmas.

But a Whitehall source later said the second reading might not take place until early next year.

Published in May, the draft bill gives regulator Ofcom the power to impose multibillion-pound fines on technology giants that fail to show a duty of care to users.

But it stops short of bringing criminal sanctions against bosses. 

Instead, a new criminal offence for managers has been included as a deferred power that can be introduced if Ofcom finds that firms are failing to keep to their new responsibilities.

Some campaigners have raised fears that the rules risk stifling the free press, 'silencing marginalised voices' and introducing 'state-backed censorship'.


'However, the problem comes because there is also no compulsion on them not to, and clearly the authors of the bill envisage that they will block and take down items of content because the journalistic protections are there.

'And the journalistic protections specifically apply to news publisher content, so we have to look at the journalistic protections and ask how effective they are, and in my view they're not effective.'

He added that content moderation by social media firms would be done by algorithm, and that the bill puts them under threat of possible criminal penalties.

Mr Wright continued: 'And their inevitable response to that will be to set the parameters of any moderation they do as widely as possible. It's human nature – they'll want to protect themselves. So they'll be using a very blunt instrument. 

'I saw in Google's submission, they say their algorithms are very poor at understanding context, they're going to find moderating journalism particularly difficult.

'And we also know from articles in the Wall Street Journal over the weekend that Facebook's artificial intelligence is very poor at actually moderating this type of content. But what does the bill demand of them? That they take freedom of expression into account? 

'Well, that can mean almost anything. It's left to the platforms to determine how they do this, what rules they set.'

The Wall Street Journal claimed that documents revealed Facebook's artificial intelligence cannot consistently identify first-person shooting videos, racist rants and even the difference between cockfighting and car crashes.

Mr Wright continued: 'From what I've seen of how Facebook have been trying to moderate journalism in the USA – where they're doing it for completely different reasons, they had an advertiser boycott last year, which has prompted them to do this – it's arbitrary, it often fails to understand the nature of the content, it's imposed without any sort of process, it is not in line with

