As we come to terms with recent tragic events in the UK, there is understandably great anxiety and there are many questions about the causes of such terrible loss of life. The events have again highlighted the debate around regulating internet giants like Google, Facebook, Twitter and Amazon. These channels have given criminals and terrorists the opportunity to broadcast their message, so politicians in the UK responded in the first instance by suggesting the technology industry should play its part in addressing this huge challenge. However, the Queen’s Speech, the list of laws the government hopes to get approved by Parliament over the coming year, leaves me confused.
The rhetoric in earlier comments from policy makers suggested the new Government would push tougher legislation onto the technology industry without proper checks and balances in place. These concerns were heightened by Matt Burgess’ report claiming the Government wanted to push through demands for tech companies to provide access to user information by breaking end-to-end encryption as needed.
The Home Secretary’s comments, especially in relation to encryption, compounded that concern, so it was very pleasing to see positive signals from the European Union on the individual’s right to privacy. The European Parliament’s Committee on Civil Liberties, Justice and Home Affairs underlined its support for the “principle of confidentiality.”
However, a week really is a long time in politics, especially when it comes to digital and technology legislation.
The Queen’s Speech has highlighted a commitment to make “the UK the safest place online” and added new “right to be forgotten” laws, as well as a determination to comply with the European Union’s GDPR legislation. The speech also included a pledge to review counter-terrorism strategy. This might suggest the Government is revising its view on cybersecurity, placing the individual’s right to privacy above national security concerns. Unfortunately, the vagueness of the Queen’s address leaves far too much room for interpretation. The talk of a Digital Charter is good if its goal is to protect the privacy of consumers, but how will that be weighed against national security needs?
From the perspective of MaidSafe we applaud attempts to protect user privacy. However, there is no clarity on the question of encryption, particularly giving intelligence services “exceptional access” in the name of national security. The Investigatory Powers Bill still stands and there appears to have been no mention of the unassumingly named Investigatory Powers (Technical Capability) Regulations, which will require service and application providers to give access to information. While this remains unaddressed we have one simple question for the authorities: what if the technology has been designed so that it cannot reveal user information?
Creating a better internet
As most people who follow the story of MaidSafe know, the starting point for the SAFE Network was creating a better internet – one where users were in control of their data and privacy was paramount. That is why it has been designed with encryption at its core and why users are the only ones who control access to their data. However, to ensure MaidSafe cannot compromise a user’s identity and data, MaidSafe has no way to break the encryption. The user is the only one with the keys and we have no “master key” that can override the system. Bottom line: we cannot put a backdoor into our network, because we have no way of identifying users once they are set up.
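The principle behind a no-master-key design can be shown with a toy sketch (this is purely illustrative, not MaidSafe’s actual implementation; the names and the cipher are invented for the example). The encryption key is derived on the user’s own device from their credentials and never leaves it, so the operator stores only ciphertext it has no means to open:

```python
import hashlib

def derive_key(username: str, password: str) -> bytes:
    # Key derivation happens client-side; the network never sees the
    # password or the derived key, so there is nothing to hand over.
    return hashlib.pbkdf2_hmac("sha256", password.encode(),
                               username.encode(), 100_000)

def keystream(key: bytes, length: int) -> bytes:
    # Toy counter-mode keystream built from SHA-256 (illustrative only;
    # a real system would use a vetted cipher such as AES-GCM).
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

# Only the holder of the credentials can recover the data; the operator
# holds ciphertext with no master key that could override the system.
key = derive_key("alice", "correct horse battery staple")
ct = encrypt(key, b"private note")
assert decrypt(key, ct) == b"private note"
assert decrypt(derive_key("alice", "wrong password"), ct) != b"private note"
```

The design choice this illustrates is structural: because no key material ever reaches the operator, a court order for “exceptional access” would demand something that simply does not exist on the operator’s side.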
If you listen to the arguments from politicians, the potential threat outweighs the right to privacy and freedom of speech. We believe that rushing legislation through is the wrong approach. This should be a time for cool reflection and a recognition that it is a complex problem, one that cannot be solved by pressuring technology companies to create backdoors to their products. Even if you do not accept the fundamental right of individuals to privacy and freedom of speech, there is a simple practical point - weakening encryption will make it…well, insecure. A vast array of organisations use encryption today for everything from banking to processing legal documents, tax accounts and protecting email. Creating mechanisms for the security services to access information means there is a weak point which hackers can exploit too. If you don’t believe they will, then you have clearly erased WannaCry from your memory. The excellent article by Andy Greenberg in Wired on the extent of the hacking in Ukraine shows how devastating cyberattacks already are without giving hackers a shortcut, and this week’s episode has only served as a stark reminder.
The more difficult moral debate, of which we are fully aware, is that we are building a network which could be used for both good and bad purposes. It is our view that users should be given the right to make this choice for themselves. If they control their data and who they share it with, they control whether or not an individual can broadcast information to them. Security services may also say the SAFE Network will make it harder for them to do their jobs, but there is little or no evidence that mass surveillance and breaking encryption make it easier to catch criminals. Indeed, while the bad guys appear to take an innovative approach to new technologies, it often seems as though the authorities wish to take a step backwards.
Compromising security and allowing sweeping powers more often than not leads to abuses of such authority. We have seen this time and again. We would argue there is evidence the police and security services are more successful with targeted surveillance and building partnerships with communities. John Thornhill at the Financial Times recently reminded me of a report I had seen before, originally published in 2015. MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) produced a damning criticism of backdoor access to encryption – the title of the report underlining the crudeness of such an approach: “Keys Under Doormats: Mandating insecurity by requiring government access to all data and communications.” While it sounds obvious, there is absolutely no point in locking the door and leaving the keys where the bad guy can find them. It makes for good drama in Hollywood, but in real life it has serious consequences.
The intelligence community terms this breaking of encryption “exceptional access”, which makes it sound very benign. However, MIT CSAIL was clear about the consequences in its report: “In the wake of the growing economic and social cost of the fundamental insecurity of today’s Internet environment, any proposals that alter the security dynamics online should be approached with caution. Exceptional access would force Internet system developers to reverse “forward secrecy” design practices that seek to minimise the impact on user privacy when systems are breached.”
If you are not convinced on moral grounds, there is also a simple technical reason why giving control back to users works. If an individual controls his or her identity, that person is anonymous, but also potentially traceable. As John Thornhill rightly points out, using encryption also authenticates the user, and in environments such as the blockchain it should not be forgotten that once an individual, including a hacker, adds something to the blockchain, it is recorded for posterity. Suggesting that encryption is an enabler for the bad guys shows a lack of understanding of next-generation technologies, because, unlike the simple good-guys-versus-bad-guys framing of the past, technologies in the current landscape are more complex.
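The twin properties mentioned here – that cryptography authenticates an author and that appended records persist for posterity – can be sketched with a toy hash chain (an invented illustration, not any real blockchain’s code). Each record carries an authentication tag only the author’s key could have produced, and each links to the hash of the previous record, so rewriting history is detectable:

```python
import hashlib
import hmac

def add_record(chain: list, author_key: bytes, content: bytes) -> None:
    # The HMAC tag authenticates the author (only the key holder could
    # produce it); linking to the previous hash makes the chain
    # tamper-evident, so every addition is recorded for posterity.
    prev_hash = chain[-1]["hash"] if chain else b"\x00" * 32
    tag = hmac.new(author_key, prev_hash + content, "sha256").digest()
    rec_hash = hashlib.sha256(prev_hash + content + tag).digest()
    chain.append({"prev": prev_hash, "content": content,
                  "tag": tag, "hash": rec_hash})

def verify_chain(chain: list, author_key: bytes) -> bool:
    # Re-derive every tag and hash; any altered record breaks the chain.
    prev_hash = b"\x00" * 32
    for rec in chain:
        tag = hmac.new(author_key, prev_hash + rec["content"], "sha256").digest()
        if rec["prev"] != prev_hash or not hmac.compare_digest(tag, rec["tag"]):
            return False
        prev_hash = hashlib.sha256(prev_hash + rec["content"] + tag).digest()
    return True

chain = []
key = b"alice-secret-key"
add_record(chain, key, b"first entry")
add_record(chain, key, b"second entry")
assert verify_chain(chain, key)

# Retroactively altering a record is immediately detectable.
chain[0]["content"] = b"rewritten history"
assert not verify_chain(chain, key)
```

This is the sense in which strong cryptography cuts both ways for a wrongdoer: the same mathematics that hides content also binds an author to what they published.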
At its heart this debate needs a reset, because cybersecurity strategy still feels stuck in the early 2000s, when Web 2.0 came along. The technology is cleverer now, but so too are the users, and the technology reflects what users want. They want privacy, but equally they do not want to propagate terrorism or hatred. They believe technology exists that can balance the individual’s absolute right to privacy against the need for national security.
Sadly we do not live in a perfect world, and technology is unfortunately being used by bad actors to do some nefarious things. Certainly, the response of the big tech companies to growing consumer and political concerns has not been as quick as many would like, but weakening encryption in the name of national security is not the answer. Paul Bernal, in Matt Burgess’ article, raised the important issue of accountability and oversight. If the Technical Capability Regulations are passed into law, there is also an even more fundamental question about the right to privacy and the right to freedom of speech. This is a time for cool heads. The MIT CSAIL report is good not just in its technical analysis but also as a historical reminder. We have been debating this issue since the 1970s, when computers became increasingly mainstream. Today we are seeing rights undermined increasingly around the world, and if a country like the UK is seen to be promoting more draconian laws, it will give more authoritarian states the justification they need to implement similar or worse rules. If we force technology companies to break their encryption, we do not just compromise security; we compromise fundamental human rights.
Nick Lambert, Chief Operating Officer, MaidSafe
Image Credit: Yuri Samoilov / Flickr