In light of recent UK terrorist attacks, Prime Minister Theresa May has suggested that the Internet must now be regulated so as to ‘deprive the extremists of their safe spaces online’. To those paying attention, this is not a new position for Theresa May, but rather the latest in a long line of attempts by her political party to impose greater controls upon cyberspace. Many on the left and in libertarian circles decry such moves, seeing them as an attempt to control and limit personal freedom. On the other side of the debate are the government, police and intelligence services, who favour regulation so as to be able to deal with very real cyber threats. The debate between the two factions largely boils down to an argument of privacy versus security, and there is a great deal of confusion and misinformation in the public domain regarding both. Thus, given our current situation, there is a real need to ask what we actually mean when we refer to ‘privacy’, and what our definition of ‘security’ is. Further, is it possible to develop a technological system that would satisfy both sides of this debate? If so, what might its requirements be?
Those challenging the desire to intensify security tend to discuss and present privacy as though it were an inalienable individual right, as though it had always been at the nub of our legal arrangement as UK citizens. This perception of privacy may be entwined with the perceived basis of the social contract between State and individual. The popular argument goes that in order to be a constituent of a political state (e.g. a country) one gives up certain ‘God-given’ rights so as to be afforded protections by that state. A simple analogy might be joining a gang, though instead of getting a tattoo, a motorcycle and a bunch of new intimidating friends, you get a passport, a National Insurance number, the right to vote and access to the affordances of the state, such as the National Health Service. One such perceived affordance is personal privacy.
Interestingly, in the UK the legal right to privacy is afforded to citizens via the Human Rights Act (1998), which in Article 8 sets out the right to privacy. Prior to this, personal privacy was not recognised in UK law per se. That said, several other areas of law could be viewed as affording some legal protections to private property: those dealing with trespass, nuisance, defamation and malicious falsehood. In this sense, personal privacy in law could be seen as a logical evolution, a recognition that data is a ‘thing’ to be owned and protected. Thus privacy, while an old concept, is a very recent addition to the civil rights of UK citizens. It must be made explicit, though, that the duration of its existence as a civil right should have no bearing on any judgement concerning its value. Perhaps instead of debating the pros and cons of preserving this right in the current climate, it would be more useful to identify what we actually wish to protect through its existence. Therefore one must ask: to what are we actually referring when we use the term privacy?
In terms of terrorism and cyber security, privacy debates tend to focus on the individual’s right to control access to their data, and the types of control people desire are complicated. Ensuring privacy is not simply a matter of controlling which data can and cannot be accessed, but also of which people can access what data and, further, when and under what circumstances they are allowed to access it. Thus a discussion of privacy quickly becomes a discussion of how data is secured. Through this, a definition of personal privacy is conflated with a definition of personal security, and these two distinct issues are made to appear one and the same. If privacy is not the means by which it is assured, then what is it? The answer lies in an understanding of what it is that people want to protect. To define this we must unpick what exactly is being referred to when people speak of ‘data’.
When non-specialists speak of data they tend to use the term as a kind of shorthand for personal information. It could be said that, because of this common usage, data and information are in many people’s minds equivalent. The question here is how helpful this conflation of data with information is, and whether they really are the same thing. I would argue that they are not, and I would invite the reader to consider the usefulness of the taxonomy I am now proposing. Data is something that individuals produce – knowingly or unknowingly – through their daily activities, even when asleep or unconscious. Think of it as a kind of digital perspiration. Information is data that has been read by human beings. That it is read by human beings, rather than processed by AI, is an important issue in this taxonomy, though one beyond the scope of this text. For now I will focus on the question of why this distinction between data and information is useful and important.
If we take data to mean the binary ones and zeros as they are to themselves and to the machines they pass through, and information to mean that data as it is when read by human beings, then we can begin to distinguish between a signal and its reception. This is important because the production of data is an unavoidable consequence of every citizen’s daily life. What can be controlled, or at the very least mediated, is whether and how this data is interpreted. Thus, in terms of understanding what privacy is, I would propose that it is not the generation of data that concerns most people, but rather the fact that any data they generate may be seen by another person. Once data is read by another human being, within this proposed taxonomy, it is classified as information, because it informs the reader of something.
The fact that the reader is informed of something does not necessarily mean that they will remember, analyse, draw conclusions from or act upon this information further. This is important, since it is the difference between something being received and something being interpreted. Yet the fact that the reader can process this information further requires a third category to be introduced to this taxonomy: that of knowledge. Thus, if data can be considered as signal, and its reception by humans as its transmogrification into information, then the process of interpretation is that which refines it into knowledge.
This knowledge is always knowledge of something, and that something is the person who produced the data from which the information and knowledge could be extrapolated. Understanding this factor is key in understanding people’s motivations in their desire to protect access to their personal data. By extension it is also key in helping to identify what privacy actually is.
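For readers of a more technical bent, the three-tier taxonomy above can be made concrete in code. What follows is a minimal, purely illustrative sketch – the type names, functions and example are my own invention, not part of any existing system. Modelling data, information and knowledge as distinct types makes the boundaries between them explicit, which is precisely where any control over access would have to sit:

```python
from dataclasses import dataclass

# Illustrative sketch of the proposed taxonomy. All names are hypothetical.

@dataclass(frozen=True)
class Data:
    """Raw signal: bytes produced as a by-product of activity, unread by anyone."""
    payload: bytes

@dataclass(frozen=True)
class Information:
    """Data at the moment of reception: read by a human, but not yet interpreted."""
    source: Data
    reading: str

@dataclass(frozen=True)
class Knowledge:
    """Information after interpretation: a conclusion about the data's producer."""
    source: Information
    inference: str

def receive(data: Data) -> Information:
    # Reception: the moment a human reads the signal, data becomes information.
    return Information(source=data, reading=data.payload.decode("utf-8"))

def interpret(info: Information, inference: str) -> Knowledge:
    # Interpretation: the reader draws a conclusion, refining information into knowledge.
    return Knowledge(source=info, inference=inference)

# Example: a location ping (data) is read (information), then interpreted (knowledge).
ping = Data(b"51.5074,-0.1278 @ 03:00")
info = receive(ping)
claim = interpret(info, "the producer was in central London at 3 a.m.")
print(claim.inference)
```

The point of separating the two functions is that a system honouring this taxonomy could let machines store and route `Data` freely while placing legal or technical gates on `receive` and `interpret` – that is, on reception and interpretation, rather than on production.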
If privacy is not the means by which it is assured, and if we now have a nuanced understanding of the differences between data, its reception as information and its processing into knowledge, then how do we begin to define privacy on this basis? When people talk about privacy, what they are interested in controlling is not access to their data but rather access to information about themselves that may give others knowledge that they do not want others to have. The desire for privacy is therefore the desire to control our boundaries with others. When we control our boundaries with others, what we are actually controlling is our level of personal intimacy with them. Thus privacy is not the ability to protect one’s data, but rather the ability to control the levels of intimacy in our relationships. Part of this control is the ability to choose who and/or what to have intimate relationships with, and how to negotiate those relationships.
Where this gets interesting is that we do not only have intimate relationships with other people, but also with objects and, significantly, with ourselves. This last point is absolutely vital to an understanding of what intimacy actually is, and therefore also of the source of the tension between privacy and security. Human beings are not simply producers of data; they are also readers of information. Not only can they read and acknowledge other people’s data, but also their own. Through experiencing and processing the generation of our own personal data, we develop an ongoing and evolving feedback loop of data processing that we refer to as self-knowledge. Thus we are not just thinkers of thoughts but also a presence in those thoughts, witnessing ourselves think. This becomes more complex when we recognise that not only are we a presence in our own thoughts, witnessing them being thought, but we also author the presence of others into our thinking processes, to postulate and ruminate upon their possible perceptions and judgements. All of this is key to understanding what intimacy actually is.
Intimacy is not simply our degree of familiarity with others, but also our degree of familiarity with ourselves. This familiarity is established and developed through the process described above, and it is on the foundations of this first intimacy with ourselves that the intimacy of all other relationships is established. If we lose the freedom to behave in this way, we lose the ability to have a developed sense of ourselves. Thus what we are afraid of losing through intensified, invasive security is not simply the ability to control our levels of intimacy, but our capacity for all intimacy. To have total intimacy we must be able to exclude all unwelcome agencies; intimacy is primarily the ability to be alone with ourselves, and secondarily with others. Thus, when people speak of or refer to a right to privacy, it is my contention that at the nub of what they are seeking to protect is intimacy: the right to be alone with ourselves and with others. It is a freedom to protect a core human behaviour.
Therefore, instead of ripping up the Human Rights Act (1998), as May has proposed, wouldn’t it be more sensible to challenge those who are engineering security systems to implement solutions that understand and account for this human behavioural need? Further, rather than pushing an unwelcome solution upon the population, wouldn’t it make sense to encourage a public debate about the kinds of analysis that would be acceptable, and under what terms? What I am suggesting is that if we can accept that terrorism is becoming part of our daily reality, and that our fight against it is in many ways an ideological fight to preserve our way of life, then perhaps we should be seeking to develop systems, both technical and legislative, that can protect personal privacy on the basis of the distinction between data, information and knowledge.
 http://www.independent.co.uk/news/uk/politics/theresa-may-internet-regulated-london-bridge-terror-attack-google-facebook-whatsapp-borough-security-a7771896.html – site accessed 05/06/2017
 though not the text from which it is abridged
 I am using the terms ‘read’ and ‘reader’ in the way they are used in semiotics. Thus the information being processed doesn’t necessarily have to be a piece of writing; it can be a picture, or anything else for that matter.
 https://www.theguardian.com/commentisfree/2017/jun/07/theresa-may-human-rights-european-charter-terrorists – site accessed 10/06/2017