Part Two: Notes toward a history of the Internet of Things

As stated at the end of Part One of this series: while the development of the Internet – both in terms of physical architecture and communication protocols – is fundamental to the existence of the Internet of Things, other fields of research had to develop and converge with this radical new communication system before the Internet of Things could come into existence. The second key field of research is that of radio spectrum communication devices. Within this field, the development of two technologies is of particular significance: RFID, which played a significant role in the early developmental stages of the Internet of Things; and cellular communication, which paved the way for the mobile web.

RFID is in many ways a simple technology. In its most basic form it exists in one of two configurations: passive and active. In a passive RFID system, a signal is sent to a transponder, which reflects the signal back to the reader. In an active RFID system, the transponder broadcasts its own signal back. Historically, the invention of RFID owes something to the invention of radar in 1935 by the Scottish physicist Sir Robert Alexander Watson-Watt.

During the Second World War both the Allied and Axis forces used radar to scan their respective skies. ‘The Germans discovered that if their pilots rolled their planes as they returned to base, it would change the radio signal reflected back to their radar dishes’ (Violino). This was in essence the first occurrence of a passive RFID system. Watson-Watt, working for the British, likewise developed what could broadly be described as the first active RFID system. In Watson-Watt’s design, British planes were equipped with transmitters. When a plane received an appropriate signal from a British radar station it would broadcast a signal back that allowed the operators on the ground to identify the plane as one of their own.

Radio frequency communication remained of significant interest after the Second World War, and much work was done by the scientific community to explore its potential uses. This research continued through the 1940s, 1950s and 1960s. By 1973 Mario W. Cardullo had received the first U.S. patent for an active RFID tag, while Charles Walton, a Californian entrepreneur, received a patent for a passive RFID transponder used to unlock a door without a key, a precursor of the modern swipe card.

Given the birth of RFID technology as a military application, it is unsurprising that Los Alamos National Laboratory was also involved in developing RFID-based systems. In the 1970s, at the behest of the Energy Department, it developed a system for tracking nuclear materials. The Los Alamos scientists’ solution was to fix RFID transponders to trucks and readers at the gates of secure facilities. The transponders on board the trucks would respond to the transmissions from the gate readers by supplying an ID.

Los Alamos also developed RFID technologies for monitoring cattle, to ensure that animals were not given double doses of medication or hormones. For this they developed a passive RFID system that drew a small charge from the RFID reader and reflected back a modulated signal, a technique known as backscatter. Later developments saw the RFID chip encased in a small glass tube that could be embedded under the cow’s hide.

This later refinement accompanied the development of 125 kHz low-frequency RFID. As the technology matured, this frequency band was commercialised and researchers began to experiment with higher-frequency communication, which allowed for greater range and faster data transfer. This led to the development of 13.56 MHz RFID, which is still in common use today in many everyday devices, such as the Oyster card used on London’s transport network.

In the 1990s IBM developed an ultra-high-frequency (UHF) RFID system, which it later sold to Intermec, a bar code systems provider. Under Intermec, UHF RFID became more common in public use, most notably for stock control in warehouses, where it is still used today. In 1999 UHF RFID took a significant leap forward thanks to the work of David Brock and Sanjay Sarma at the Massachusetts Institute of Technology (MIT). In a shift akin to the switch from NCP to TCP in the history of the Internet, Brock and Sarma realised that if responsibility for the data were moved onto the network itself, and the tag were used only to store an identification number acting as a key to that data, then the cost of producing and using the chips could be dramatically reduced. The RFID tags they produced therefore stored only a serial number, which could then be used to look up the data associated with the tag in a networked database. This innovation changed RFID from an identification system into a network communication tool: from this moment forward, objects could be linked to the Internet via RFID.
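The idea can be illustrated with a minimal sketch, assuming hypothetical tag IDs, product records and a lookup function rather than the actual EPC network architecture: the tag carries nothing but an identifier, and everything known about the tagged object lives in a database reachable over the network.

```cpp
// Minimal sketch of the "ID on the tag, data on the network" idea.
// Tag IDs, product records and the lookup are hypothetical stand-ins.
#include <iostream>
#include <string>
#include <unordered_map>

struct ProductRecord {
    std::string description;
    std::string location;
};

// Stand-in for a networked database keyed by the serial number on the tag.
std::unordered_map<std::string, ProductRecord> networkedDatabase = {
    {"TAG-0001", {"Pallet of tinned tomatoes", "Warehouse A, bay 12"}},
    {"TAG-0002", {"Crate of spare parts", "In transit to depot 7"}},
};

void onTagRead(const std::string& serialNumber) {
    // The reader only ever sees the serial number; the data lives elsewhere.
    auto it = networkedDatabase.find(serialNumber);
    if (it != networkedDatabase.end()) {
        std::cout << serialNumber << ": " << it->second.description
                  << " (" << it->second.location << ")\n";
    } else {
        std::cout << serialNumber << ": unknown tag\n";
    }
}

int main() {
    onTagRead("TAG-0001");  // simulate a reader scanning a cheap UHF tag
    onTagRead("TAG-9999");  // a tag the database knows nothing about
}
```

Because the tag itself only has to hold and transmit a short number, it can be made extremely cheaply, while the data behind it can be as rich as the network allows.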

This was a significant development because it meant that many tasks in a supply chain that would otherwise have to be performed manually could now be automated. For example, letting a customer know when an item had been shipped, and letting the business know when it had arrived, could now be handled automatically by UHF RFID tags and readers.

Brock and Sarma’s research led to the establishment of the Auto-ID Center at MIT. The Auto-ID Center became a significant entity in the development of this aspect of the Internet of Things, opening research labs in the UK, Switzerland, Australia, China and Japan. It developed two air interface protocols (Class 1 and Class 2), the Electronic Product Code (EPC) and a network architecture for looking up the data associated with an RFID tag via the Internet. The Auto-ID Center closed in October 2003 and its research responsibilities were passed on to the Auto-ID Labs, currently the leading global network of academic research laboratories in the field of the Internet of Things.

Of similar significance is the evolution of cellular communication technology, which would eventually bring the web to mobile devices. The first developments relating to what would become mobile cellular communication happened at Bell Labs, whose scientists proposed hexagonal cells for mobile phones in cars and other vehicles in 1947. By 1956 an automated system named MTA had been developed in Sweden, which allowed direct-dial calls to be placed to vehicles; by today’s standards it was extremely limited. The first true handheld mobile phone was developed by Motorola and publicly demonstrated by one of its executives, Martin Cooper, in 1973.

Following quickly on the heels of this innovation, Japan’s NTT (Nippon Telegraph and Telephone) launched the first automated cellular network in 1979. This was followed closely in the West by the launch of the Nordic Mobile Telephone (NMT) system in Denmark, Sweden, Finland and Norway. Further cellular networks were then launched in America in 1983, Israel in 1986 and Australia in 1987. These first-generation (1G) networks were analogue, with several of them, including the American network, built on the Advanced Mobile Phone System (AMPS) standard developed by Bell Laboratories.

One of the factors fuelling the development of these networks was the commercial release of the Motorola DynaTAC 8000X in 1983, the first commercially available cellular phone. It offered thirty minutes of talk time and took ten hours to charge, yet despite this the device was hugely popular. Its release marked the moment at which mobile cellular communication became available to the general public.

By the 1990s, second-generation (2G) mobile phone technology was under development. Unlike 1G, 2G networks were digital, utilising the Global System for Mobile Communications (originally Groupe Spécial Mobile), or GSM for short, a standard developed by the European Telecommunications Standards Institute (ETSI). The first GSM network was deployed in Finland in 1991, and the standard spread quickly.

The new digital 2G devices had significant new features that made them more appealing to the mass market, most notably the Short Message Service (SMS), which was available initially on GSM but spread to other digital networks. The first SMS message was sent from a computer on 3 December 1992, and the first person-to-person SMS followed in 1993, marking the beginning of a mode of communication taken for granted today.

By 1993, IBM had developed the IBM Simon, the world’s first smartphone. Though primitive by today’s standards, it incorporated a touch-screen interface, phone, pager, fax, email and PDA facilities in a single device. The Simon was assembled under licence by Mitsubishi and cost $899 on a two-year contract or $1,099 on a one-year contract. Despite its high price, the device shifted 50,000 units in its first six months on the American market alone.

By the mid-1990s, like the Internet, mobile phones were beginning to take hold in the consumer domain. By the end of the decade the network’s latent capacity for media content was being exploited, with the first ringtones sold to users in 1998. By this point the falling cost of the technology, combined with growing public interest, had led to the first pay-as-you-go tariffs, which made the devices accessible to nearly all social strata in the West and beyond.

By 2001 the demand for data on mobile devices was becoming all too apparent as consumers sought to make use of their handsets’ Internet connectivity. Understanding that this demand would only grow, and that the circuit-switching technology on which 2G networks were built was not up to the job, industry stakeholders began to look for ways to implement packet switching for data transmission on mobile networks. This led to the development of 3G via a crowded field of competing solutions, each seeking to become the commercially accepted standard. 3G revolutionised cellular communication by connecting the mobile telephone to the Internet at speeds viable for consumer use; streaming and download services became available on mobile devices for the first time.

However, 3G still relied on circuit switching for voice communication. This, together with growing public demand for data-intensive services and the evident significance of the Internet in consumers’ everyday lives, led the mobile industry to realise that 3G networks would eventually be overwhelmed if an alternative was not created. To this end it sought to create a communications standard fully reliant on IP-based packet switching. By 2009, with the arrival of 4G technologies, voice communication was for the first time in the history of mobile devices treated like any other data, carried using VoIP.

With the realisation of 4G and the growing sophistication of the sensors built into mobile devices – microphone, camera, movement, GPS and so on – cellular phones had become fully integrated nodes in the fabric of the Internet. Significantly, cellular phones are almost always personal devices, so they became personal interfaces for their owners: private – or so they seemed – digital spaces in which to curate content and interact while on the move.

As a result, there now existed an integrated planetary system of communications that could service both human and machine communication. RFID and mobile cellular devices lent objects in the physical world the ability to interface with the web via the automated sensing of their presence. Yet while the cameras and microphones built into mobile phones, and the serial numbers of RFID tags, allowed these objects to participate in the materiality of the web, another set of technologies was required for the latent potential of such devices to be fully harnessed. This fourth set of technologies had, like cellular and Internet technologies, been around for a long time, and like the other technologies discussed so far, it only came to participate in the public realm when its production became cheap enough to allow it. I am referring to sensors.

Like the Internet, the development of sensors, and more significantly sensor networks, owes a lot to the military-industrial complex and, in particular, DARPA. The first sensor network with any significant relationship to the kinds of sensor networks used today was the Sound Surveillance System (SOSUS), developed in the 1950s by the U.S. military. This system was used for identifying and tracking Soviet submarines in the Atlantic and Pacific Oceans, using submerged acoustic sensors known as hydrophones. It is still in place today, although it is now used for monitoring underwater volcanic activity and wildlife rather than submarines.

Building on these and other activities over the following two decades, DARPA became interested in the potential of such technologies for strategic defence purposes and started the Distributed Sensor Network (DSN) program in 1980. This program focused on the formal scientific exploration of the development and application of wireless sensor networks (WSNs) and other sensing devices. Through the links between DARPA and university research institutions, WSN research soon found its way into academia.

Drawing on academic research in this field, governments in particular took a strong interest in the potential of these networks. Early networks deployed for civilian ends were used for such things as air quality monitoring, forest fire detection, weather stations, natural disaster provision (volcanic eruptions and the like) and the structural monitoring of buildings. These early deployments could be considered the first wave of large-scale automated data collection and data mining.

While interest in wireless sensor networks was strong, it proved difficult to bring the technology further into the public domain, for several reasons. The first was that the sensors being developed and deployed were expensive, bulky and governed by proprietary networking protocols that required specialist knowledge to work with. The second was that these early devices were optimised primarily for functionality and performance, at the expense of power consumption, scalability, networking standards, and hardware and deployment costs. This combination of high production and deployment costs and low production volumes prevented wireless sensor networks from being developed and deployed in other potential areas.

During the 1990s and early 2000s a number of initiatives set out to address these problems. They involved collaboration between academia and industry, both of which recognised the latent commercial potential of these devices. Significant initiatives included: the UCLA Wireless Integrated Network Sensors project (1993), the University of California at Berkeley PicoRadio program (1999), the μAMPS (micro-Adaptive Multi-domain Power-aware Sensors) program at MIT (2000), NASA Sensor Webs (2001), the ZigBee Alliance (2002) and the Center for Embedded Network Sensing (2002).

At the core of many of these research partnerships was the desire to enable high-volume deployment of wireless sensor networks in both industrial and consumer goods. The strategy for achieving this was to make sensors more energy efficient and less costly to produce, and to simplify their development and maintenance.

At the heart of any WSN are the sensors themselves. Over the past ten years, thanks to the efforts of the groups above and others, significant progress has been made towards these goals, to the extent that today many common devices have some form of inbuilt sensor array. Enabling such developments in the consumer and commercial realm have been advances in three specific areas of sensor technology: micro-electro-mechanical systems (MEMS), complementary metal-oxide-semiconductor (CMOS) sensors and light-emitting diode (LED) based sensors.

MEMS, also known as micromachines in Japan and microsystems technology (MST) in Europe, are minuscule part-electronic, part-mechanical devices that can both sense and process data. Some MEMS devices are so small that they enter the realms of nanotechnology. To give an idea of scale, the components used to assemble MEMS are normally in the range of 0.001–0.1 millimetres, while complete devices are typically in the range of 0.02–1.00 millimetres. MEMS usually consist of a central microprocessor that interacts with the sensors and other components attached to it. Common examples of MEMS technology in everyday use include gyroscopes, pressure sensors and accelerometers, the last of which are commonly used to trigger airbags in modern cars.

CMOS refers both to a type of electronic circuit design – including sensor circuits – and to the process by which such circuits are fabricated. CMOS circuits have extremely low static power consumption and are highly resistant to noise. Both factors make them ideal for deployment in ambient devices, or for adding to existing devices with little if any effect on their power consumption or function. Good examples of CMOS sensors used in Internet of Things products are temperature, humidity and proximity sensors.

LEDs can be used for more than emitting light. The same construction can also act as a photodiode, essentially converting light into current. In this mode they can be used to detect and measure things such as ambient light levels and proximity.

What is significant about all three of these areas is that they allow analogue phenomena to be captured as data in a way that purely digital devices cannot. This means that things such as acceleration, pressure, presence, movement and sound can be measured and monitored. Any device, at least in theory, can therefore be fitted with an array of different sensors that enable it to capture data from its local environment, use this data as the basis for computational processing, adjust its own behaviour on the basis of that processing, and even communicate the data across the Internet to other devices or people, as the sketch below illustrates. The potential of this kind of technology is evidently significant.
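A minimal sketch of that sense–process–act–communicate loop might look as follows; the temperature source, threshold and reporting call are hypothetical stand-ins, with the network transmission reduced to a stub.

```cpp
// Minimal sketch of a sensor node's sense -> process -> act -> communicate loop.
// The sensor, threshold and reporting endpoint are hypothetical stand-ins.
#include <chrono>
#include <iostream>
#include <random>
#include <thread>

// Stand-in for reading an attached analogue sensor (e.g. a CMOS temperature sensor).
double readTemperatureCelsius() {
    static std::mt19937 gen{std::random_device{}()};
    static std::normal_distribution<double> ambient{21.0, 3.0};
    return ambient(gen);
}

// Stand-in for actuation: in a real device this might drive a fan or a valve.
void setCoolingEnabled(bool on) {
    std::cout << (on ? "[actuate] cooling ON\n" : "[actuate] cooling OFF\n");
}

// Stand-in for communicating the reading to a networked service.
void reportToNetwork(double celsius) {
    std::cout << "[report] sending temperature=" << celsius << "\n";
}

int main() {
    const double threshold = 24.0;  // hypothetical set point
    for (int i = 0; i < 5; ++i) {   // a few iterations instead of an endless loop
        double celsius = readTemperatureCelsius();   // sense
        bool tooWarm = celsius > threshold;          // process
        setCoolingEnabled(tooWarm);                  // act
        reportToNetwork(celsius);                    // communicate
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
    }
}
```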

What is potentially even more significant is the effect that the emergence of these technologies, and how cheap they have become on the mass market, has had on the development of the Internet of Things. In many ways the Internet of Things has largely come about through highly funded, top-tier research and development; the significant role DARPA has played in the development of both the Internet and sensor technology, for example, is unarguable. However, since such technologies have become more commonly available to the general public, a whole new wave of invention has emerged.

The ethos of the open source movement has played a large part in this. Emerging from within these communities, the development of the open source Arduino platform in 2005 has had a significant effect on what the Internet of Things might become. Thanks to its simple design and accessible programming environment, the Arduino meant that those relatively new to both coding and electronics could become involved in designing and developing networked devices for very little cost, as the short example below suggests. This has led to the growth of a plethora of maker communities creating and sharing designs for all manner of connected devices that integrate with the Internet and cellular technologies discussed so far. Somewhere in the tension between high-end government research and low-level maker community initiatives, not to mention a whole raft of IoT start-ups, the Internet of Things is currently taking form.
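To give a flavour of that accessibility, here is a minimal Arduino-style sketch; the light sensor on analogue pin A0, the pin choice and the sampling rate are illustrative assumptions, not taken from any particular project.

```cpp
// Minimal Arduino-style sketch: read a light sensor and report over serial.
// Assumes a light-dependent resistor wired to analogue pin A0 (hypothetical wiring).
const int LIGHT_SENSOR_PIN = A0;

void setup() {
  Serial.begin(9600);            // open the serial link to a connected computer
}

void loop() {
  int lightLevel = analogRead(LIGHT_SENSOR_PIN);  // raw reading, 0-1023
  Serial.print("light=");
  Serial.println(lightLevel);    // a host machine could forward this to the web
  delay(1000);                   // sample once per second
}
```

A handful of lines like these, paired with a networked board or a host computer, is enough to put a home-made sensor on the Internet: the same sense-and-report pattern sketched above, now in the hands of hobbyists.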

References:
Violino, B., ‘The History of RFID Technology’, RFID Journal, January 2015.
Auto-ID Labs: http://autoidlabs.org
Silicon Labs, ‘The Evolution of Wireless Sensor Networks’: http://www.silabs.com/Support%20Documents/TechnicalDocs/evolution-of-wireless-sensor-networks.pdf
