Day-to-day, we’re inseparable from our always-connected devices, but following a number of issues raised this year, it seems social media platforms, online retailers and even government watchdogs are having to consider some fairly drastic changes to how the internet is safeguarded.
Enter the UK’s Investigatory Powers Act. This controversial solution – which includes forcing ISPs to store users’ web-browsing history for 12 months – became law last year and is already under intense scrutiny from human rights groups and privacy activists. Though the bill’s intention is to fight online crime and extremism, many members of the public object to the government being able to dip into all of our personal browsing history – not to mention the sheer volume of it that will be stored. The recent TalkTalk hack, which saw the personal details of over 150,000 customers compromised, illustrated how helpless the average citizen is when entrusting their details to outside institutions. Under the new act, our personal lives were, potentially at least, more vulnerable and less private than ever.
The European Court of Justice felt similarly. Weeks after the act became law, the court ruled that such bulk data collection was illegal, and a recent tribunal between the UK’s spy agencies and privacy advocates has since seen the case escalated further. Both parties have agreed that this is a matter for the Grand Chamber – and as of today, the debate continues.
So too, however, does the threat of ‘Fake News’, extremist content and other online hazards. With connectivity now as much a part of our daily lives as electricity and running water, who decides what is and isn’t acceptable? Where does the responsibility lie?
Theresa May lays the blame at the feet of the tech companies, asserting at recent UN assemblies that technology giants have a responsibility to go further and faster in combatting illegal online content. Following reports that the UK is the biggest target audience for ISIS propaganda, one can’t blame her for her insistence; she’s adamant that tech companies should be able to identify and take down dangerous content within two hours.
Yet whilst some of her demands might seem unrealistic, they could well improve upon systems which currently feel lacking. Facebook’s advertising tools were recently found to let advertisers target self-described hate groups, whilst Amazon’s algorithms ‘helpfully’ recognise bomb ingredients and compile them neatly under its “frequently bought together” feature. Meanwhile, algorithms designed specifically to combat hate speech or propaganda continue to turn up false positives – flagging perfectly innocuous news reports, for instance – whilst criminals simply retreat to more secretive corners of the internet to conduct business.
Few could have predicted these behaviours, but then few would rely on algorithms alone to make logical or ethical decisions. That’s why companies such as Facebook, Twitter and Google have started pooling digital fingerprints (‘hashes’) of known suspicious content through what they call the Global Internet Forum. As this pool of data grows, so too does their combined knowledge of criminal methods. Twitter can already boast that 75% of the extremist accounts it removes are suspended before a single Tweet is sent, so this co-operation seems like a promising start.
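In practice, that pooling works something like a shared blocklist of content fingerprints: one platform flags a video or image, and every other member can then recognise re-uploads of the same file. The Python sketch below is a deliberately minimal illustration of the idea, not the forum’s actual implementation – the class and method names are invented, and real systems use perceptual hashes such as PhotoDNA that survive re-encoding, where plain SHA-256 only matches byte-identical copies.

```python
import hashlib


class SharedHashDB:
    """Toy stand-in for a cross-industry database of flagged content.

    NOTE: illustrative only. Production systems use perceptual hashes
    (e.g. PhotoDNA) that still match re-encoded or lightly edited
    copies; SHA-256 only matches exact bytes.
    """

    def __init__(self):
        self._flagged = set()

    @staticmethod
    def fingerprint(content: bytes) -> str:
        # Reduce the raw bytes to a short, shareable fingerprint.
        return hashlib.sha256(content).hexdigest()

    def flag(self, content: bytes) -> None:
        # Called by whichever member platform first identifies the content.
        self._flagged.add(self.fingerprint(content))

    def is_flagged(self, content: bytes) -> bool:
        # Checked by every member at upload time, before publication.
        return self.fingerprint(content) in self._flagged


# One platform flags a propaganda video; every other member can now block
# re-uploads of the identical file without ever seeing the original report.
db = SharedHashDB()
db.flag(b"bytes of a known propaganda video")
print(db.is_flagged(b"bytes of a known propaganda video"))   # True
print(db.is_flagged(b"bytes of an unrelated holiday photo")) # False
```

The appeal of the design is that members share only fingerprints, never the content itself – which is also why the combined pool grows more useful with every platform that contributes to it.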
Recent UN meetings have seen the British, French and Italian governments commit to the aforementioned two-hour limit for removing extremist material, pledging to take legislative action against internet companies who fail to eliminate dangerous content promptly. Reacting to this hard-line stance, companies are now having to research and build safeguarding technology more advanced than any current measure.
The Global Internet Forum has demonstrated how encouraging the statistics can be when efforts are combined. One can’t help but wonder, however, why tech companies’ preventative measures aren’t as calculated and thorough as their marketing technology. Perhaps, if governments can make efforts instead of demands, and tech companies can offer solutions instead of excuses, a stronger collaborative effort lies just around the corner.