UK wants tech firms to build tools to block terrorist content

UK Home Secretary Amber Rudd is holding talks with several major Internet companies today to urge them to be more proactive about tackling the spread of extremist content online. Companies in attendance include Google, Microsoft, Twitter and Facebook, along with some smaller Internet companies.


We’ve contacted the four named companies for comment and will update this story with any response.


Writing in the Telegraph newspaper on Saturday, in the wake of last week’s terror attack in London, Rudd said the UK government will shortly be setting out an updated counterterrorism strategy that will prioritize doing more to tackle radicalisation online.


“Of paramount importance in this strategy will be how we tackle radicalisation online, and provide a counter-narrative to the vile material being spewed out by the likes of Daesh, and extreme Right-wing groups such as National Action, which I made illegal last year,” she wrote. “Each attack confirms again the role that the internet is playing in serving as a conduit, inciting and inspiring violence, and spreading extremist ideology of all kinds.”


Leaning on tech firms to build tools appears to be a key plank of that forthcoming strategy.


A government source told us that Rudd will urge web companies today to use technical solutions to automatically identify terrorist content before it can be widely disseminated.


We also understand the Home Secretary wants the companies to form an industry-wide body to take greater responsibility for tackling extremist content online — which is a slightly odd ask, given that Facebook, Microsoft, Twitter and YouTube already announced just such a collaboration in December last year (including creating a shared industry database to speed up the identification and removal of terrorist content).
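For a sense of how that kind of shared database works, here is a minimal, purely illustrative sketch in Python. It assumes a simple exact-match design; the names and the placeholder entry are hypothetical, and the companies have not published the database's actual format.

```python
import hashlib

# Hypothetical shared set of digests contributed by partner companies.
# The entry below is a placeholder, not a real hash.
SHARED_HASH_DB = {
    "0" * 64,  # placeholder digest
}

def content_fingerprint(data: bytes) -> str:
    """Return a hex digest identifying a piece of uploaded content."""
    return hashlib.sha256(data).hexdigest()

def is_known_terrorist_content(data: bytes) -> bool:
    """Check an upload against the shared database before it is published."""
    return content_fingerprint(data) in SHARED_HASH_DB
```

An exact cryptographic digest like this only catches byte-identical copies of a file, which is why the industry effort is described in terms of "digital fingerprints": hashing techniques designed to tolerate re-encoding and small edits to an image or video.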


Perhaps Rudd wants more Internet companies to be part of the collaboration. Or else she wants more effective techniques for identifying and removing content at speed to be developed.


At today’s roundtable we’re told Rudd will also raise concerns about encryption — another technology she criticized in the wake of last week’s attack, arguing that law enforcement agencies must be able to “get into situations like encrypted WhatsApp”.


Such calls are of course hugely controversial, given how encryption is used to safeguard data from exploitation by bad actors — the UK government itself utilizes encryption technology, as you’d expect.


So it remains to be seen whether Rudd’s public call for encrypted data to be accessible to law enforcement agencies constitutes the beginning of a serious clampdown on end-to-end encryption in the UK (NB: the government has already given itself powers to limit companies’ use of the tech, via last year’s Investigatory Powers Act) — or merely a strategy to apply high-profile pressure to social media companies to try to strong-arm them into doing more about removing extremist content from their public networks.


We understand the main thrust of today’s discussions will certainly be on the latter issue, with the government seeking greater co-operation from social platforms in combating the spread of terrorist propaganda. Encryption is set to be addressed in separate follow-up discussions, we are told.


In her Telegraph article, Rudd argued that the government cannot fight terrorism without the help of Internet companies, big and small.


“We need the help of social media companies, the Googles, the Twitters, the Facebooks of this world. And the smaller ones, too: platforms such as Telegram, WordPress and Justpaste.it. We need them to take a more proactive and leading role in tackling the terrorist abuse of their platforms. We need them to develop further technology solutions. We need them to set up an industry-wide forum to address the global threat,” she wrote.


One stark irony of the Brexit process — which got under way in the UK this Wednesday, when the government formally informed the European Union of its intention to leave the bloc — is that security cooperation between the UK and the EU is apparently being used as a bargaining chip, with the UK government warning it may no longer share data with the EU’s central law enforcement agency in future if there is no Brexit deal.


Which does rather cast a sickly light over Rudd’s call for Internet companies to be more proactive in fighting terrorism.


Not all of the companies Rudd called out in her article will be in attendance at today’s meeting. Pavel Durov, co-founder of the messaging app Telegram, confirmed to TechCrunch that it will not be there, for instance. The messaging app has frequently been criticized as a ‘tool of choice’ for terrorists, although Durov has stood firm in his defense of encryption — arguing that users’ right to privacy is more important than “our fear of bad things happening”.


Telegram has today announced the rollout of end-to-end encrypted voice calls to its platform, doubling down on one of Rudd’s technologies of concern (albeit, Telegram’s ‘homebrew’ encryption is not the same as the respected Signal Protocol, used by WhatsApp, and has taken heavy criticism from security researchers).


But on the public propaganda front, Telegram does already act to remove terrorist content being spread via its public channels. Earlier this week it published a blog post defending the role of end-to-end encryption in safeguarding people’s privacy and freedom of speech, and accusing the mass media of being the primary conduit through which terrorist propaganda spreads.


“Terrorist channels still pop up [on Telegram] — just as they do on other networks — but they are reported almost immediately and are shut down within hours, well before they can get any traction,” it added.


Meanwhile, in a biannual Transparency Report published last week, Twitter revealed it had suspended a total of 636,248 accounts between August 1, 2015 and December 31, 2016 for violations related to the promotion of terrorism — saying the majority of the accounts (74 percent) were identified by its own “internal, proprietary spam-fighting tools”, rather than via user reports.


Twitter’s report underlines the scale of the challenge posed by extremist content spread via social platforms, given the volume of uploads involved — a volume that is orders of magnitude greater on more popular platforms like Facebook and YouTube, meaning there is far more material to sift through to locate and remove extremist content.


In February, Facebook CEO Mark Zuckerberg also discussed the issue of terrorist content online, and specifically his hope that AI will play a larger role in future to tackle this challenge, although he also cautioned that “it will take many years to fully develop these systems”.


“Right now, we’re starting to explore ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda so we can quickly remove anyone trying to use our services to recruit for a terrorist organization. This is technically difficult as it requires building AI that can read and understand news, but we need to work on this to help fight terrorism worldwide,” he wrote then.
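To make the distinction Zuckerberg describes concrete, here is a hedged, minimal sketch of that kind of text classifier in Python, using scikit-learn. It is emphatically not Facebook’s system; the tiny labelled corpus and the review threshold are invented for illustration only.

```python
# Illustrative only: separating news coverage of terrorism from propaganda.
# Facebook's actual systems are not public; the corpus below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "example news report describing a terrorist attack",
    "example recruitment post promoting a terrorist group",
    "example analysis article about online radicalisation",
    "example post glorifying violence and urging attacks",
]
labels = [0, 1, 0, 1]  # 0 = news coverage, 1 = propaganda

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # bag-of-words and bigram features
    LogisticRegression(max_iter=1000),    # simple linear classifier on top
)
model.fit(texts, labels)

def flag_for_review(post: str, threshold: float = 0.9) -> bool:
    """Queue a post for human review when the model is confident it is propaganda."""
    return model.predict_proba([post])[0][1] >= threshold
```

The hard part, as Zuckerberg’s own caveat makes clear, is not the plumbing but giving a model enough grasp of language and context to tell reporting from recruitment, at the scale and error tolerance a platform of Facebook’s size requires.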


In an earlier draft of the open letter, Zuckerberg suggested AI could even be used to identify terrorists plotting attacks via private channels — likely via analysis of account behavior patterns, according to a source, not by backdooring encryption (the company already uses machine learning for fighting spam and malware on the end-to-end encrypted WhatsApp, for example).


His edited comment on private channels suggests there are metadata-focused alternative techniques that governments could pursue to glean intel from within encrypted apps without needing to demand access to the content itself — although political pressure may well fall on the social platforms themselves to do the legwork there.
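As a rough illustration of what content-free, metadata-only signals might look like, here is a hedged sketch in Python. The fields, thresholds and weights are invented assumptions, and nothing suggests this mirrors what WhatsApp or Facebook actually run.

```python
# Illustrative only: scoring an account on behavioural metadata, without
# inspecting any message content. All fields, thresholds and weights are
# invented for this sketch, not a description of any real system.
from dataclasses import dataclass

@dataclass
class AccountMetadata:
    account_age_days: int
    groups_joined_last_day: int
    messages_sent_last_hour: int
    share_of_recipients_not_in_contacts: float  # 0.0 to 1.0

def risk_score(meta: AccountMetadata) -> float:
    """Combine behavioural signals into a rough 0-1 score."""
    score = 0.0
    if meta.account_age_days < 2:
        score += 0.3  # brand-new account
    if meta.groups_joined_last_day > 20:
        score += 0.3  # mass-joining groups
    if meta.messages_sent_last_hour > 200:
        score += 0.2  # bulk fan-out of messages
    score += 0.2 * meta.share_of_recipients_not_in_contacts
    return min(score, 1.0)

# An account scoring above some threshold might be rate-limited or flagged
# for human review, all without touching the encrypted message bodies.
needs_review = risk_score(AccountMetadata(1, 35, 400, 0.9)) > 0.7
```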


Rudd is clearly pushing Internet companies to do more and do it quicker when it comes to removing extremist content. So Zuckerberg’s timeframe of a potential AI fix “many years” ahead likely won’t wash. Political timeframes tend to be much tighter.


She’s not the only politician stepping up the rhetoric either. Social media giants are facing growing pressure in Germany, which earlier this month proposed a new law for social media platforms to deal with hate speech complaints. The country previously secured agreements from the companies to remove illegal content within 24 hours of a complaint being made, but the government has accused Facebook and Twitter especially of not taking user complaints seriously enough — hence, it says, it’s going down a legislative route now.


A report in the Telegraph last week suggested the UK government is also considering a new law to prosecute Internet companies if terrorist content is not immediately taken down when reported, although ministers were apparently questioning how such a law could be enforced when companies are based overseas, as indeed most of the Internet companies in question are.


Another possibility: the Home Office was selectively leaking a threat of legislation ahead of today’s meeting, to try to encourage Internet companies to come up with alternative fixes.


Yesterday, digital and human rights groups including Privacy International, the Open Rights Group, Liberty and Human Rights Watch called on the UK government to be “transparent” and “open” about the discussions it’s having with Internet companies. “Private, informal agreements are not consistent with open, democratic governance,” they wrote.


“Government requests directed to tech companies to take down content is de facto state censorship. Some requests may be entirely legitimate but the sheer volumes make us highly concerned about their validity and the accountability of the processes.”


“We need assurances that only illegal material will be sought out by government officials and taken down by tech companies,” they added. “Transparency and judicial oversight are needed over government takedown requests.”


The group also called out Rudd for not publicly referencing existing powers at the government’s disposal, and expressed concern that any “technological limitations to encryption” the government seeks could have damaging implications for citizens’ “personal security”.


They wrote:



We also note that Ms Rudd may seek to use Technical Capability Notices (TCNs) to enforce changes [to encryption]; and these would require secrecy. We are therefore surprised that public comments by Ms Rudd have not referenced her existing powers.


We do not believe that the TCN process is robust enough in any case, nor that it should be applied to non-UK providers, and are concerned about the precedent that may be set by companies complying with a government over requests like these.



The Home Office did not respond to a request for comment on the group’s open letter, nor to specific questions about its discussions with Internet companies today, but a government source told us that the meeting is private.


Earlier this week Rudd faced ridicule on social media, and suggestions from tech industry figures that she does not fully understand the workings of the technologies she’s calling out, following comments made during a BBC interview on Sunday — in which she said people in the technology industry understand “the necessary hashtags to stop this stuff even being put up”.


The more likely explanation is that the undoubtedly well-briefed Home Secretary is playing politics in an attempt to gain an edge with a group of very powerful, overseas-based Internet giants.
