The Internet of Things, or IoT as it is commonly abbreviated, has become part of current technological rhetoric. A quick Google search will provide the reader with a multitude of information, ranging from media articles about what it will do for society and how many connected devices each person will have by 2020, to attempts at providing a chronology of its emergence. Yet behind all this information one question remains: what actually is the Internet of Things?
In order to unpick this question, one must recognise that there is a difference between technologies and the possibilities that they afford to the world. Behind the rhetoric and the dreams of social possibilities, the Internet of Things is, in essence, the convergence of three spheres of technological development: computer networking, radio-spectrum communication, and sensors and sensor networks. It is worth taking some time to familiarise ourselves with each of these so that we can begin to understand how our current technological state of existence came into being, and through this consider its place in the wider world.
The first and most obvious component giving birth to the Internet of Things is the Internet itself. The first emergence of something resembling the modern Internet came with the advent of Arpanet in 1969, developed by America’s Advanced Research Projects Agency (ARPA). Arpanet initially consisted of the interconnection of four computers using a packet-switching network and the Network Control Program (NCP), an early forerunner of TCP/IP. NCP essentially provided the middle layers of the protocol stack, part of the layered structure that still makes the Internet work today.
By 1974 NCP was being outmoded by research into what would eventually become TCP/IP. TCP/IP resulted in large part from the collaborative research of Vinton Cerf, one of NCP’s original designers for Arpanet, and Robert E. Kahn. Cerf and Kahn worked out what would become TCP/IP in broad brush strokes; Cerf then refined this into a working reality through his own work at Stanford University. The central difference between NCP and TCP/IP was that Kahn and Cerf had reformulated the concept underpinning the network so that the hosts, rather than the network itself, were responsible for reliability. Although the beginnings of TCP/IP date to 1974, it was not until 1 January 1983 that TCP/IP replaced NCP on Arpanet. This change marked the beginning of the modern Internet.
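The shift of responsibility from network to hosts can be illustrated with a toy simulation. This is not TCP itself, merely a minimal stop-and-wait sketch of the principle: the network below is assumed to be unreliable and may drop packets, and it is the sending host that notices the missing acknowledgement and retransmits.

```python
import random

def unreliable_send(packet, loss_rate, rng):
    """Simulate a best-effort network that may silently drop a packet."""
    return packet if rng.random() >= loss_rate else None

def reliable_transfer(packets, loss_rate=0.3, seed=42):
    """Stop-and-wait style delivery: the sending host keeps retransmitting
    each packet until the (simulated) receiving host acknowledges it."""
    rng = random.Random(seed)
    delivered = []
    for packet in packets:
        while True:
            received = unreliable_send(packet, loss_rate, rng)
            if received is not None:   # receiver got it and acknowledges
                delivered.append(received)
                break                  # sender moves on to the next packet
            # no acknowledgement: the host, not the network, retries
    return delivered

print(reliable_transfer(["SYN", "DATA-1", "DATA-2", "FIN"]))
```

Despite the lossy channel, every packet eventually arrives in order, because the endpoints take charge of reliability rather than trusting the network to guarantee it.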
During that same year Arpanet introduced the first domain name system. Before its existence, addressing relied on each computer on the network retrieving the HOSTS.TXT file from a computer at the Stanford Research Institute (SRI); this file mapped each host’s name to a numerical address. As the network grew, so centralised a system became unmanageable. By 1985 the domain name system was in use and the first domain names had been registered.
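The HOSTS.TXT approach amounts to nothing more than a shared lookup table. A minimal sketch of the idea follows; the file format here is simplified (the real SRI file used a richer syntax) and the addresses and host names are hypothetical, in the style of the period.

```python
def parse_hosts(text):
    """Parse a HOSTS.TXT-style table of 'address hostname' lines into a
    name-to-address lookup, ignoring blank lines and comments."""
    table = {}
    for line in text.splitlines():
        line = line.split("#")[0].strip()   # drop comments
        if not line:
            continue
        address, name = line.split()[:2]
        table[name.upper()] = address
    return table

# Hypothetical entries in the spirit of the SRI-maintained file.
HOSTS_TXT = """
10.0.0.51  SRI-NIC
10.1.0.2   UCLA
10.2.0.5   MIT-AI
"""

table = parse_hosts(HOSTS_TXT)
print(table["UCLA"])  # -> 10.1.0.2
```

Every host on the network had to fetch and re-parse this one file whenever it changed, which is precisely why the scheme could not scale and a distributed, hierarchical domain name system replaced it.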
The combination of TCP/IP and the introduction of the domain name system paved the way for the creation of the first web page by Tim Berners-Lee, then based at the European Organisation for Nuclear Research (CERN). He originally proposed the project in 1989 and the page went live on 6 August 1991, hosted on a NeXT computer at CERN. The page provided information on the World Wide Web project, or the World Wide Web Virtual Library as it was known. It was not long after this that the first servers were tested and run on networks, and the internet, spurred on by the birth of the web, began to take shape. By the mid-1990s the growth of the World Wide Web meant that internet use had spread beyond the military, academics and geeks; the public was becoming interested in what it was and what it could do. To many, the early web – and the internet on which it sat – seemed a gimmick or passing fad. As we now know, this view was very wrong.
The early web was something of a Wild West: frames, animated GIFs, flaming, and every man, woman and child with their own home page in the wondrous labyrinth that was GeoCities. In many ways the technological accessibility of the early web encouraged mass adoption. The ease of HTML coding and the lack of standardisation meant that the barrier to entry was very low. Pretty soon everyone and their cat (or dog) became involved in making pages and sharing information via the World Wide Web. The early web was a remarkable place; it made information and people more accessible than they had previously been, encouraging collaboration and experimentation with and by friends and strangers. Everyone was now an explorer, and the web was the frontier.
The culture of the early web has much in common with the culture that arose around early public telephone systems. If one investigates the history of hacking and phreaking, it is interesting to note the amount of play inherent in the activities on those phone networks. The same is true of the web. During this period several things were created, many simply for fun, that are commonly cited as early portents of the emergence of the Internet of Things.
One commonly cited example is the Trojan Room Coffee Pot (1993), located in the Trojan Room of the computer labs at Cambridge University and created by Quentin Stafford-Fraser and Paul Jardetzky. It consisted of a camera that updated an image of the coffee pot on a web page roughly every 20 seconds, allowing the computer scientists to check in their browsers whether fresh coffee was available. Another example of Internet of Things-style invention on the early web was Steve Mann’s wearcam, a wearable camera that broadcast image updates to the web of what he was seeing in the real world as he interacted with it. Today this sounds like a fairly unradical idea, but for the world of 1994 it seemed groundbreaking.
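In modern terms, both rigs amount to a tiny loop that periodically captures a frame and republishes it as a web page. The sketch below illustrates that publish-and-refresh pattern only; the file names, the stubbed-out capture function and the meta-refresh mechanism are assumptions for illustration, not a description of the original Cambridge setup.

```python
def capture_frame():
    """Stand-in for a real camera grab; returns raw image bytes.
    (Hypothetical - the real rig used a frame grabber.)"""
    return b"frame-bytes"

def snapshot_page(image_name, period_seconds=20):
    """Build a web page that embeds the latest snapshot and asks the
    browser to reload itself on the same cadence as the camera."""
    return (
        "<html><head>"
        f'<meta http-equiv="refresh" content="{period_seconds}">'
        "</head><body>"
        f'<img src="{image_name}" alt="coffee pot">'
        "</body></html>"
    )

# One iteration of the loop: grab a frame, then regenerate the page.
frame = capture_frame()
page = snapshot_page("coffee.png")
print(page)
```

In a running version the loop would write the frame and page to a web server’s document root and sleep for the refresh period, but the essential idea is just this: a sensor in the world, periodically mirrored onto the web.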
Both these examples demonstrate the latent potential for an Internet not simply of hypertext, but of objects, or Things. Yet it would be some time before the full convergence of technologies required to produce the Internet of Things would occur. An early visionary whose work plays a key part in its development is Mark Weiser.
Mark Weiser’s paper ‘The Computer for the 21st Century’ (1991), first published in Scientific American, provides a prescient analysis of the then-emerging technological reality and where it would lead. Weiser’s research focus was ubiquitous computing, or ubicomp as it is generally known, which primarily concerns the integration of computing technology into everyday life. As Weiser himself put it, ‘we are trying to conceive a new way of thinking about computers in the world, one that takes into account the natural human environment and allows the computers themselves to vanish into the background’.
In this paper Weiser describes the types of technologies that he and his colleagues at the Palo Alto Research Center (PARC) had been developing and testing, devices akin to now-familiar hardware such as Radio Frequency Identification (RFID) tags and computer tablets. He also provides an example of what a world filled with these devices might look like and discusses what it might be like to live in such a world. While his viewpoint is utopian, and therefore open to some critique, it describes a world eerily like the one that is emerging through the Internet of Things.
While the development of the Internet – both in terms of physical architecture and communication protocols – is fundamental to the existence of the Internet of Things, other fields of research would have to develop and converge with this radical new communication system to allow the Internet of Things to come into existence.