The Internet and the seeds of a democratic renaissance in Ukraine

“The Internet revolution in cyberspace and on the streets”

As I write this, I am watching a live video stream from Hrushevsky Street, in Kyiv, Ukraine. Several hundred protesters are facing off against several hundred riot police. Right now the situation is at a stalemate, but there is a ceaseless cacophony coming from the protesters, who are beating clubs against metal barrels, light standards, and the burnt-out hulks of buses lying across the road. This is an episode in the ongoing Euromaidan protest movement, which started out as a reaction to the failure of the Ukrainian government to enter into a long-promised association agreement with the European Union. Euromaidan is an amalgam of the words for Europe and Square: Independence Square in the centre of Kyiv, where the main rallies take place, has given its name to the whole movement. It has since morphed into a wider movement of disaffection with the failure of any meaningful reform of Ukraine’s society since the break-up of the Soviet Union and the nation’s independence, 22 years ago.

While a handful of policemen are video-recording the crowd and keeping the information secret, almost everyone in the crowd is taking videos, snapping pictures, and sending text messages, making the information public. The state authorities’ technical means of suppressing dissent are traditional in every respect, while the behaviour of the crowd is unprecedented. It is not a traditional mob mentality, but very much what Marshall McLuhan called the tribal mind, enabled by its technical connector, the Internet. An even better example of the tribal mind in action, within the global village that is the Internet, is not this street confrontation with state authorities, but the “AutoMaidan” protests going on in Ukraine now, in which small groups of motorists co-ordinate mobile observation and demonstration actions with their smartphones. Since the participants do not even need to see each other, this is truly a “crowd-sourced” democratic revolution. Co-ordinated by mobile telephone calls and text messaging, AutoMaidan has even become a quasi-vigilante force against the “titushky” (hooligans, or street thugs) who have been paid to engage in vandalism and disruptions designed to discredit the protest movement.

But why am I so interested in what is going on in the streets of Kyiv? After all, the whole point of the Internet is that it is the communications medium of the Universal and Homogeneous State. It is everywhere, and it is everywhere the same. The reason, naturally enough, is personal. Because of my own experience living and working in Ukraine in the 1990s, I can draw a connection between the birth of the Internet in that country, and what is happening now in the squares and streets of central Kyiv, and indeed in most of the regions of Ukraine.

Behind the protesters I am watching on Hrushevsky Street, there is a traffic circle at the top of Volodymyrsky Uzviz. Going down that hill towards the Dnieper River, one is in the neighbourhood of Podil, the lower town of Kyiv. Podil is the home of the National University of Kiev-Mohyla Academy (NaUKMA), an ancient institution of higher learning that was re-founded immediately after Ukrainian independence. Its languages of instruction are Ukrainian and English, and it does its best to practice academic freedom and self-governance, free of government diktat, on a more Western model of a university. I was a lecturer in politics and the director of an Internet access project at NaUKMA, from 1993 to 1996. This was a time of great pride in Ukrainian nationhood and independence, but also of crushing hyper-inflation and the start of oligarch acquisitiveness and the crony capitalist system that has bedevilled Ukraine to this day.

“The Kiev-Mohyla Academy: First online university in Ukraine”

My work as a lecturer can be characterized as traditional. I taught my curious students about different political constitutions, but more importantly about different political cultures. The lecture hall was a safe place to argue anything, but idealism stopped there. Building civil society — which is what it was supposed to be all about — was left for countries that had magically found the “secret”, like Britain, Canada, and the United States. The world of vibrant non-governmental organizations, spontaneous and genuine volunteerism, and of a public life that had little connection to state institutions — all these things were for “outside”, and not for Ukraine.

It was different with the Internet access project. Here was something that had no precedent in the Soviet Union, nor did it have a substantial cultural precedent in the West. The World Wide Web had only recently been invented, and the first graphical browsers were being introduced. Control of the Internet had long been released from the U.S. military, and had recently been released from the control of the National Science Foundation in the United States. The Internet was going to be what the world made of it, and that included Ukraine.

I knew that the Internet was going to be revolutionary, and that it was going to be revolutionary in a democratic way. I was one of the earliest members of the National Capital FreeNet in Ottawa, Canada, a pioneering effort by Carleton University to open its portal onto the global academic and research computer network to the public, for free. When I went to Ukraine, I took this sensibility of free and open access to the Internet with me. I am told that I was the first paying customer for an online Internet service in Ukraine, for my personal use. When I started to formulate a plan for free and open access to the Internet for students and faculty at the Kiev-Mohyla Academy, I found enthusiastic support, both among the university community and foreign donors.

The goal of the project was stated on our web site, the first version of which appeared in 1994. You can still view it at the Wayback Machine Internet Archive. Our mission statement read: “The University of ‘Kiev-Mohyla Academy’ Internet Project is a large-scale effort to provide access and training for computer communications to each and every student, professor and administrator at the University. The goal is to transform the UKMA into Ukraine’s first on-line campus.” The students who visited the computer lab of the NaUKMA Internet Project were amazed. Here was this gold mine of information and communication, and it was theirs to access for free. This was everything the Soviet Union was not, and it seemed to symbolize the brave new world of independent Ukraine among the family of nations. Beyond learning English and studying under a few Western professors, here was something truly new and exciting that marked these young people as distinct. They were peers with their cohorts from Western countries, and they were set apart from their parents’ generation in their own country.

“The global village’s tribal mind confronts traditional politics”

So what did this first generation of Internet-savvy Ukrainians do with their knowledge and power? For many, the path they trod was the familiar one of emigration, away from their impoverished native land. This is regrettable, but understandable. I was not only training the future elite of Ukraine, but also helping to create a pool of highly-qualified emigrants, whose lives and talents ended up benefitting Western Europe and North America. But if my ideas about vibrant civil society barely left the political science lecture hall, the practical civil society ethos inherent in the Internet, and introduced to them through the NaUKMA Internet Project, went with my students beyond the university. It was embedded in the technology that took over the whole world in the dot-com and tech-boom era of the late 1990s. It was in the first mobile phone they bought, or the first laptop, when they went on to graduate school or to their first jobs.

In Ukraine itself, there was a leap-frogging of technology. Land-line telephones were never very good, and never would be, so mobile telephones became ubiquitous and leading edge. The citizens of Kyiv were using text messaging for hailing taxis and for making restaurant reservations long before anyone else. Television was banal, and fell under state control or under the control of the oligarchs, and so for real information and communication Ukrainians turned to online media. They were chatting, posting, and blogging online before Facebook, sharing music before BitTorrent, and they were recording and watching videos online before YouTube. Everyone in Ukraine knows that the real news is to be learned online, and is to be watched — and generated! — on a smartphone, tablet or computer, and not on a television.

Ukrainians built civil society online, because the virtual space of the Internet was the only space in which that could happen for them. The freedom and openness required for real, non-state and non-private civil society is inherent in the technological processes that underlie the Internet. They’re inherent in the suite of protocols for spontaneous and open interconnection of nodes over packet-switched wide area networks. They’re in the DNA of the Internet.

That same freedom and openness has not yet come to the traditional, physical space of social life in Ukraine, and has remained a “beyond the borders” dream until now. I was told by my students in the 1990s that my idealism was misplaced, and that the entire generation of people raised under the Soviet Union had to die, or to age beyond the relevance of power, before anything like the civil society I talked about in my politics lectures and demonstrated at the Internet access project would come to fruition in Ukraine. I didn’t believe them then, but they were wise in a way that I was not. Now, it is up to a new generation, who do not even have childhood memories of the Soviet Union, to take up the fight.

Which brings us to why there is all this fuss over a boring, bureaucratic, and incremental association agreement with the European Union. It is because the European Union — or more importantly the standards and practices and the ideals of the European Union for civil society — is seen as the completion of the democratic revolution that started in 1991. A bottom-up democratic revolution has been held back by hide-bound elites stuck in the practices of the past. Commissars transformed themselves into oligarchs, and they found useful mouthpieces to front governments for them, and that has been the story throughout the former Soviet Union — bar those countries with the luck or wisdom to place themselves in the orbit of the EU. The great hope of the last few years for Ukrainians has been that the external influence of the EU could ameliorate the people’s suffering, and offer a way out of their misery.

It used to be said that in Soviet Union times freedom was not to be found in the streets, but only around the kitchen table. In other words, the public space was enslavement while the private space was liberty. A civil space between the two was nowhere to be found. The promise of independence for Ukraine was supposed to be the extension of private and familial liberty into civil society and into the public life of citizens within the state. That did not happen with most of civil society, which remains weak, and it did not happen at all with state institutions, which remain profoundly corrupt, inefficient, arbitrary, and oppressive. The shining exception was a new, virtual civil society sustained by the Internet, which flourishes in Ukraine. This is the force that Euromaidan protesters carry with them, and which offers a glimmer of hope amidst so much despair. Ukraine today has almost no independent news media outside the Internet, and so the liberal ideal of a citizen who is a rational and informed calculator of his or her own self-interest can only be realized by Ukrainians who are Internet-connected. Internet access has become a human right, for those who take democracy seriously.

“Virtual communities and the seeds of a democratic renaissance”

I was profoundly moved by one image I saw, taken from the largest demonstration yet held in the central square of Kyiv. Half a million people gathered, and in the winter darkness they were filmed by a pilotless drone, chanting “Glory to Ukraine!” and punching their fists in the air. In their clenched fists they held mobile phones, shining blue-white light into the night sky. Everything about this scene — the pilotless drone with a camera, the hundreds of thousands of people with their independent information and communication devices — spoke to the power of the Internet, and its force in sustaining civil society. I can draw a connection between this moment of mass political action, that I was witnessing from my own Internet-connected computer from the other side of the world, and quieter moments in a computer lab at the Kiev-Mohyla Academy almost 20 years ago. I think of how many times I patiently explained to new arrivals at the Internet Project how a graphical browser worked, or how to write an email message. I never had to tell them twice. Soon they were teaching themselves. Once the match was lit, the flame never died. It never could, and I am inspired by the strength of the fire of liberty that we kindled then.

Big Brother meets the global village

A while back, I did a traceroute from a home computer in Ottawa to one of my company’s servers in Toronto. traceroute is a venerable networking utility that shows the relays or “hops” that encapsulated packets of data take in traversing the Internet. I was disappointed, but not surprised, to see that my traffic went by way of Chicago. A round-trip at the speed of light that should have taken 0.00235 seconds took 0.00346 seconds instead, about one-and-a-half times slower than the direct trip. If you don’t remember your geography, Chicago is on the other side of two of the Great Lakes from Toronto, while Ottawa is a city just a 40-minute flight away within the same province.
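For readers who want to check the arithmetic, here is a small Python sketch. The two round-trip times are the figures quoted above; nothing else is assumed:

```python
# Round-trip times quoted above, in seconds.
direct_rtt = 0.00235   # the straight Ottawa-Toronto-Ottawa trip at light speed
detour_rtt = 0.00346   # the observed path by way of Chicago

# How much slower is the detour, as a ratio of the two times?
slowdown = detour_rtt / direct_rtt
print(f"The detour takes {slowdown:.2f} times as long as the direct trip")
# prints a ratio of about 1.47, i.e. roughly one-and-a-half times slower
```

The extra milliseconds are invisible to a human, but they are a measurable tax paid on every single packet.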

I was not surprised to see my network traffic leave Canada, go to a foreign country, and return to Canada, instead of following a direct line domestically. I know that Canada does not have a national infrastructure plan or policy for the Internet, and merely piggy-backs on top of the infrastructure of the United States. The hands of government regulators have been “off the tiller” for decades, and control of the revolutionary packet-switched network of networks has fallen into the hands of an effective monopoly of old-line, carrier-based telephone and cable television companies. The lack of investment is apparent to every Canadian who gets the latest smartphone, the latest next-generation network, the latest streaming/downloading service, and so forth, six months to a year after everybody else. High speed Internet has become a legal right in Finland, but Canada is nowhere close to matching this wise policy. Watching my packets crawl across the border and then back again is only one of the many effects of living in a country in the second tier of technological innovation.

My disappointment comes from my awareness of the political and social implications of my technological dependence on the United States. When my packets of data and metadata went through Chicago, they landed on a server there. A rich collection of IP addresses, port numbers, host names and other identifiers was gathered and logged. Because the data payload was plain to see, it might have been recorded too. Being physically in the United States, my data and metadata were subject to the laws and regulations of that country. That means that my information was completely subject to the alphabet soup of American warrantless surveillance: FISA, NSA, PRISM, ECHELON, and an unknown number of other programs and entities set up to eavesdrop on communications.

What was true about my packets, which happened to pass through the United States on this occasion, is also true about any Canadian who uses “cloud” services like web mail from the big providers, document storage and network applications to process these documents, music and video services where content is uploaded to central servers — in short, anyone who uses the Internet in the new ways that are coming to predominate.

There is a conflict between the American government’s desire to snoop on Internet traffic, under programs which they claim are justified and which they assert are legal, and the Canadian government’s desire to protect the privacy rights of its citizens, as required under constitutional law. Here, I think that realpolitik trumps all. An overwhelming superpower is determined to intrude on the civil liberties of its own citizens, let alone foreigners, and has little regard for consequences once it has set itself on this course of action. The Internet is no different than air travel has become. Canadians for many years have had to abandon their constitutional rights when they take a flight from Toronto to Vancouver. Because that route enters American airspace at points, personal details which are prohibited from being revealed domestically are betrayed into the hands of a foreign government. The Canadian government is too weak to object, because the consequences of American retaliation for non-compliance are too severe.

When details of the PRISM program of warrantless surveillance by the National Security Agency were revealed recently, a defence of this program made by the executive branch of government in the U.S. was that the data (but not the metadata) of Americans was not a target while it was within the United States. This left me wondering about my status. After all, I am not an American, and I am not in the United States. What category do I fall under? My data and my metadata are in the United States all the time, so I have to assume they are the intended target of PRISM. The very minimal reassurances given to the U.S. domestic audience do not apply to me, or to any other Canadian, or to Europeans, or to any of the other 4 billion users of the Internet who are not U.S. citizens physically located there. No wonder the big “cloud” companies are running scared, and are desperately trying to get legal permission to reveal the details of the extent to which they are subject to FISA warrants! Companies like Microsoft, Apple, Google, and Amazon have been going to court to try to remove their gag orders — so far without success. They know that as long as the cloak of secrecy is over them and their participation in domestic spying on Internet traffic, their businesses face an existential threat. What reason does anyone have to believe that their “big data” provider will not betray every detail of their business to the U.S. government? At the moment, they have none.

There is a mis-match between the global, trans-national character of the Internet, and the legal and jurisdictional parochialism of national sovereignty. We live in the global village technologically, but our politics and our laws have not caught up. There are no “foreigners” on the Internet, but the entire premise of institutions like the NSA and programs like PRISM is that there are.

What is most odd and upsetting about the imbroglio over U.S. domestic spy programs is that the U.S. is very much in the lead when it comes to technological innovation, but far behind when it comes to politics. Enthusiasts for the “cloud” make a mistake in ignoring the fact that American backwardness in the political sphere threatens their dreams of a new tech boom. For all the talk about the United States being a new republic, it is in fact an old democracy. Its constitution is 226 years old. The Charter of Rights and Freedoms is only 30 years old in Canada, and the Charter of Fundamental Rights of the European Union was only proclaimed 13 years ago. When it comes to how venerable one’s constitution is, the salient point is privacy. In Canada and in Europe, people have the impression that the security of the person against unreasonable intrusion by the state extends beyond the physical realms of selfhood, such as the sanctity of one’s home, to the virtual realms of selfhood, such as the sanctity of one’s data and communications. Privacy rights are felt to be more fundamental and closer to being constitutionally protected, whereas in the United States they are felt to abide by convention and are treated in a more ad hoc fashion. The chances of Americans being able to amend their constitution to enshrine privacy rights for themselves are very slim. The U.S. Constitution has been fetishised to such an extent that it is all but impossible to alter it. This is a country, after all, that has failed to ratify an Equal Rights Amendment whose only principle is that men and women should be treated equally under the law.

For more than a century, the American government has been constrained by the courts and by a regime of legally-obtained warrants in how it can intrude on communications carried out over the telephone and by mail. When Big Brother taps your phone or opens your mail, he has a warrant which will be revealed, in time. A newer means of communication, the Internet, has been held back from these protections in the United States — but not in Canada or in Europe! — by political elites invoking the threat of “foreign terrorism”, and by a docile and compliant media and public accepting it. The exaggeration of prospective harm from a minuscule and insignificant enemy is never balanced against the real damage done by warrantless surveillance, because it is not in the nature of security and intelligence bureaucracies, in their isolated “silos”, to do that. Canada and the U.S. are supposed to have a free trade agreement, such that goods and services flow back and forth across the border without constraint. Under the guise of “security”, the United States has brought back protectionism, and there is little the Canadian government can do to stop its partner from turning the clock back to the days of tariff walls and hardened borders.

I’ve heard that sales of George Orwell’s book “Nineteen Eighty-Four” are up 7,000 percent. That’s not surprising, because Big Brother *is* watching you. Talk of the “military-industrial complex,” which President Eisenhower warned about in 1961, is being replaced by talk of the “security-industrial complex.” The Cold War then and the “War on Terror” now are both examples of George Orwell’s perpetual war — wars that are fought without a plan for how they will end. Living in a satellite state of the American empire as I do, I shudder to think that I’m like Winston Smith living in Oceania.

I am not wearing my tin foil hat, yet. I am, though, an even greater skeptic about the “cloud” because of the exposure of PRISM and the extent of warrantless surveillance in the United States. If I can use my technical competence to keep my data and my metadata out of the hands of a foreign government, I will do that. As a loyal Canadian, subject to my country’s Charter of Rights and Freedoms and honour-bound to defend it, how can I do otherwise? George Orwell’s “Big Brother” and Marshall McLuhan’s “global village” are coming together in a tumultuous and unpredictable fashion. The Internet generation is adopting McLuhan’s tribal base in a virtual collective that is global in scale, while the world’s governments still work in a world of citizens and foreigners — of “us” and “them.” In praising the benefits of the technology, we ignore the harm done by our anachronistic politics at our peril.


Cropping up in the news these days is the word “technocrat.” It is an unfamiliar word to some, but those who are au courant with the political scene in Italy and in Greece frequently described the last governments in each of those countries as technocratic. Presumably, what is meant is that the former prime ministers of Italy and Greece were not chosen for their positions by the usual means, whereby telegenic tokens of elites with a perfunctory legal education are put forward for validation by citizens who happen to vote in elections. Instead, these men came to power by the merit of their expertise, which is economics. The idea is that if they are only there to steer their countries through an economic crisis, and if they have no expectation of renewing their grasp on power through election, then they can make the “tough choices” that are necessary to get their countries out of trouble. As it turns out, the bitter medicine prescribed by these caretaker governments was too unpalatable for voters, and at the first opportunity they were tossed out in favour of more populist and pandering politicians cut from the traditional mold.

I would argue, though, that calling these economists-turned-leaders “technocrats” is inaccurate. Economics is not a science, just as political science is not a science, because there can be no objective test to tell the difference between experts and lucky amateurs. The former leaders of Italy and Greece are more like a “lame duck” President of the United States, who because of the constitutional term limit might feel at liberty to make unpopular but rational decisions in the public interest towards the end of eight years in power.

Technocracy is the government or control of society or industry by an elite of technical experts. “Techne” is an ancient Greek word meaning “art” or “craft”, and since the Industrial Revolution we have come to use the word “technology” to mean the mechanical or systemic aspects of our civilisation.

The idea that we should be governed in the public sphere by those who are the wisest among us is an ancient one, and goes back at least as far as Plato’s Republic, written almost 2400 years ago. Since that time, we have been ruled mostly by tyrants, occasionally by leaders chosen popularly, but never by the wisest among us. The reason — as Plato correctly observed — is that the best men and women, being wise, would never voluntarily choose to govern. They would avoid the shabby theatre of politics, and would enter it only if they were forced to do so. In over 2000 years, no society has come up with a system of education whereby those who are wise rise to the top, and are then forced to enter politics and lead those who are less wise.

What has happened in more recent centuries is that we have built a mechanical or technical civilization whereby specialized knowledge (but not wisdom) has found an arena in which it can rise to the top based on objective and verifiable criteria. When George Stephenson built the first public railway using a steam engine, he found a way to move goods and people that was better than the system of canals using barges that had been spreading throughout Britain. When the Advanced Research Projects Agency Network researchers successfully proved the efficacy of inter-connected computer networks using packet-switching, they invented the Internet and started the slow death of the circuit-based networks built by the telephone companies. These men were technocrats. They proved their success in a field of technology, and then they were thrust into a position where they guided public policy.

My favourite fictional account of a modern technocracy is “The Shape of Things to Come”, written by H.G. Wells and published in 1933. H.G. Wells was also the screenwriter on the film adaptation, “Things to Come”, which was released in 1936 and stars Raymond Massey and Ralph Richardson. Raymond Massey’s character, John Cabal, is the leader of a group of aviators and scientists who have formed a technologically advanced world government called “Wings Over the World.” They put an end to endless global war and the resultant descent into barbarism. It is clear, though, that although the rule of Wings Over the World is a benevolent one and something that H.G. Wells finds praiseworthy, it is a dictatorship. In his wonderfully forceful style, Raymond Massey makes it clear in the film that his character is disgusted at being forced to rule over others, so that he and his fellow elite of technical experts can do what needs to be done for the benefit of all mankind.

Away from the realm of utopian fiction and into the realm of recent history, I think that the Manhattan Project during the Second World War can be held up as a model of a technocracy. This spectacularly large, secretive, and expensive project was undertaken by the United States government to build the atomic bomb. The director of the Manhattan Project, Brigadier General Leslie Groves, made Professor Robert Oppenheimer the head of the secret weapons laboratory at Los Alamos, and the leader of the group that would eventually succeed in exploding the first atomic bomb at the Trinity test site near Alamogordo, New Mexico, on 16 July 1945. It can be said that the military man, Groves, forced the theoretical physicist, Oppenheimer, to undertake something that he was not inclined to do — lead a large, complex and untested bureaucracy towards an uncertain goal — so that he could do what he really wanted to do, which was to conduct research at the leading edge of his field of study. The emergency of the war led the United States government to consider an almost pure technocracy for the Manhattan Project, giving Oppenheimer, and also Enrico Fermi, carte blanche to do what needed to be done.

The fictional John Cabal in “Things to Come” was able and eager to fly airplanes and to find a cure for the sleeping sickness that plagued the new Dark Age, but only reluctantly became the leader of Wings Over the World. The theoretical physicist Robert Oppenheimer had the knowledge to design an atomic bomb, but to actually build one he needed to direct a project involving a large number of people. They found themselves in circumstances where — as in Plato’s Republic — the wise were forced to rule.

Which brings me to the subject of information technology companies today. It would seem that this would be a happy hunting ground for technocracy, because the connection between technical efficiency and business success is more direct. Indeed, small information technology start-up companies usually follow this pattern. Skilled system administrators and programmers and engineers build something, but then those same people are forced to become the sales people, marketers, managers, CEOs and board members, to let the world know about the “great new thing” and to sustain a growing business. Companies that force skilled and successful workers to become managers are technocracies.

Companies that put in place leaders who actually *want* to be managers, and who imagine that management is an area of expertise separate from the things and people managed, are not technocracies. Instead of forcing the most technically competent to manage, large corporations and government departments tend to place in authority persons whose supposed expertise is “management” itself. I always ask myself: “Manage what? Productive workers? Who are producing what, and to what end?” Large IT companies enter a different world of shareholders, quarterly reports, missed and met market expectations, mergers and acquisitions — a rarefied world that becomes disconnected from their origins in technical competence, and from the ability to produce goods and services that people actually need to live a good life. Our popular culture has even put forward memes to express this disconnect from reality. We have TV shows like “Undercover Boss” and the cartoon strip “Dilbert” to show us CEOs and managers who are buffoons and clowns, and who couldn’t drive a forklift truck or write a line of software code if their lives depended on it. The Emperor has no clothes. But we are not in a crisis like a war, and there is no compelling need to “call out” our pointy-haired bosses, because we can get by with quiet incompetence. And so we live in a North America where we can’t ride high-speed trains (even though we could), we have not gone back to the moon in 40 years (even though we could), we have not reduced our industrial carbon emissions (even though we could), and we have not eliminated child poverty in rich societies (even though we could).
Just as we have celebrities who are “famous for being famous”, with no discernible talent, and we have politicians who know nothing else except the pursuit of power for its own sake, so we have managers divorced from the reality of workers and systems, and from production and consumption. The reason is that we are led by men and women who want to be leaders, and we have little idea how to bring reluctant heroes to the fore instead. Technocracy is not here yet, and H.G. Wells’ “Wings Over the World” remains a “what if” of utopian fiction.

An Internet Protocol address is not a personal identifier

As it has become more popular, many misconceptions about the Internet have crept into public consciousness. Because it is large, complex, and incomprehensible from any single point-of-view, many people make the mistake of thinking that there is some kind of technical mastery behind it all, to make it all work. Nothing could be further from the truth. The Internet is messy, error-prone, and inefficient. It was designed right from its origins to be thus — “assume the network is unreliable” was a mantra of its early developers. One part of the Internet doesn’t know what another part is doing or what it is like, and it doesn’t need to know this information to function. Limited and imperfect knowledge is all that is required for communication to succeed. The Internet is a machine language for communication, and just like a natural human language it has rules of grammar and spelling and pronunciation that are broken all the time by its native speakers.

An Internet Protocol address, or IP address, is a sequence of ones and zeroes that identifies a node on a network. A node is a computer or a smartphone or some kind of intelligent device that can communicate. Knowing this address, an intelligent communicating device can calculate how to move data farther along in the network towards the final destination, and ultimately right up to the destination itself. That is a procedure called routing. IP addresses are the information, and routing is the action based upon this information.

Notice that in talking about an IP address I didn’t say anything about a human being who might be using the node on the network. I didn’t need to, because an IP address doesn’t have anything to do with a person. An IP address pertains to the task of networking devices.

The Internet works according to a layered model. Layers or functions define a type of action that must be taken for communication to occur. There are identifiers that are meaningful in describing what is going on at each layer. An IP address is an identifier for a node, and it is meaningful at the Internet layer that defines how packets are routed to their final destination. But that is just one of the functions that must be carried out for successful communication to occur.

There is no authentication between layers. This is referred to as a “stateless” mode of communication. Think of it like the connection between a person dropping a letter in a mailbox and the post office worker who collects all the letters to take to the sorting station. What is the connection between these two steps in communication? There isn’t one, immediately. Each function can operate while treating logically prior functions as a fait accompli. I can drop off my letter with only the vague awareness that it should be picked up later; the post office worker can collect the letter with only the vague awareness that it is properly addressed and has the right postage.

With the Internet, there is no direct connection between a packet of data moving across a network and a human being who may have initiated the sending of that data. That is because the Internet developed from earlier “store-and-forward” models of communication, that worked much like the post office in delivering paper mail.

Network communication can be initiated either by a human being operating a computer, or by the computer itself. From the point of view of the network, there is no way to know the difference. That is like saying that the post office can deliver mail whether it was individually sent by a human being or sent in bulk by a machine, or it is like saying that the telephone company can route a call whether a human being is calling or a machine is robo-dialing the call.

In summary, the layered model of the Internet can be described as follows. A process runs on a computer, and may be something like browsing a web page or sending an email. The “conversation” between computers at this layer can be described as a session, and it has identifying descriptors including port numbers. A packet of data containing a very small part of the conversation is then transferred to the networking layer, where an identifying number called an IP address is attached to it, to help route this piece of the conversation to its final destination. Now properly addressed, this chunk of data is passed to the data link layer, where it goes out an interface or port to exit the computer and traverse a network, transiting a series of interfaces on its way to the final destination. There is an identifier at this level too, and it is commonly an Ethernet address that is associated with an interface. Finally, a signal is sent across a wire or through the air to physically reach another interface. Once at the final destination, the entire procedure is followed in reverse, until the corresponding application at the other end receives this small packet of data, which is one piece among many of the overall conversation.
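The encapsulation just described can be sketched as plain data structures. Every identifier below is invented; the point is only which kind of name lives at which layer — a port, a node, an interface, but never a person:

```python
# Encapsulation sketch: each layer wraps the data with its own identifier.
payload = b"GET / HTTP/1.1\r\n\r\n"  # application data

# Session/transport layer: identified by port numbers.
segment = {"src_port": 49152, "dst_port": 80, "data": payload}

# Internet layer: identified by IP addresses (nodes, not people).
packet = {"src_ip": "", "dst_ip": "", "data": segment}

# Data link layer: identified by interface (Ethernet) addresses.
frame = {"src_mac": "aa:bb:cc:dd:ee:01",
         "dst_mac": "aa:bb:cc:dd:ee:02",
         "data": packet}

def unwrap(frame):
    """The receiving end peels the layers off in reverse order."""
    packet = frame["data"]
    segment = packet["data"]
    return segment["data"]

assert unwrap(frame) == payload
```

No layer carries a field for the human being who may have started the exchange; connecting a person to any of these identifiers requires evidence gathered outside the network itself.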

An IP address is not a personal identifier, and on its own cannot connect a human being to data transmitted across a network. The Internet does not have such a thing as a personal ID, like a Canadian has a Social Insurance Number in interactions with the federal government. The only way to connect a person with Internet use is by taking a bottom-up approach to the network layers, and capturing all of the meta-data and data involved along the way.

Here is what it would take to convincingly prove that a person did anything on the Internet. First, prove that a person made use of a networked device by possessing and operating it. Then ascertain the identifying information at the data link layer for the port that sent and received packets of data. Then connect this meta-data with the IP address used to identify the node in the whole Internet. Then isolate the session layer information that identifies the particular conversation. Finally, reconstitute the pieces of the conversation, such that the actual data transmitted or received can be determined.

When the Internet began 40 years ago, each of the layers had a physical analogue and it was easy to picture what was going on. An operator was a real person; a node was a real computer; a port was a physical connection with a wire attached and there was only one address needed to identify it uniquely. Now, the trend of virtualization means that every level is more abstract. A node can be a virtual machine, which can be one of dozens or hundreds of computing/communicating computer-like objects inside the physical box. The physical port can manifest itself as any number of network presences, each one of which can have many IP addresses associated with it. From a static and human-shaped Internet, we have evolved to a dynamic Internet consisting mostly of machines talking to machines.

The amount of information that must be collected to tie an Internet Protocol address to the activities of a person using the Internet is very large, and it is computationally and monetarily expensive to record and store all of this. Logging data and meta-data, on its own, serves no security purpose. Only analyzing logged data serves a security purpose. This too has costs, which are substantial.

It is technically possible to record every telephone conversation. Capturing, storing, retrieving and analyzing all of this information would have large costs. It is technically possible to open everybody’s mail. Recording all the information contained in postal addresses and letters and then analyzing all this meta-data and data would have large costs. For the Internet, large data centres would have to be maintained to store all the captured meta-data and data, and large numbers of skilled analysts would have to be employed to make sense of it all.

The replacement of IPv4 with IPv6 does not change the nature of the Internet or the purpose of the Internet layer. The Internet is still a “best possible effort” relay system, and the Internet layer is still stateless, which is to say that each packet is handled independently, with no guarantee of delivery and no error correction at this stage.

It is not true that IPv6 addresses are “persistent” in a way that IPv4 addresses are not. In fact, the opposite is the case. IPv6 addresses are transient, and more removed from human agency. When IPv4 addresses were introduced in 1982, they were fixed, unique addresses assigned by human beings to networked devices. As IPv4 evolved, ways of assigning addresses that were more automated began to be used, such as the Dynamic Host Configuration Protocol. With DHCP, pools of addresses are controlled by an Internet Service Provider, and assigned to connecting devices and revoked from disconnecting devices without human intervention. Now, with IPv6, we have a built-in scheme of stateless auto-configuration, where the device itself “makes up” its own interface identity, solicits a network prefix from a router, and concocts a complete 128-bit address thereby. An IPv6 address is a unique identity among some 340 undecillion possibilities, and is firmly in the realm of autonomous technology — it is all about machines talking to machines. The future shape of the Internet is such that IPv6 addresses not only can be but perhaps should be random, unpredictable, and utterly removed from human agency.
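A sketch of that auto-configuration idea in Python. This illustrates the principle — a random interface identifier appended to a router-supplied prefix — not the actual SLAAC state machine; the prefix is drawn from the IPv6 documentation range and the randomness is in the spirit of the privacy-address scheme:

```python
import ipaddress
import secrets

def slaac_address(prefix):
    """Sketch of stateless auto-configuration: the device invents a
    random 64-bit interface identifier and appends it to the /64
    network prefix advertised by a router. No human, and no central
    registry, is involved."""
    net = ipaddress.IPv6Network(prefix)
    interface_id = secrets.randbits(64)
    return ipaddress.IPv6Address(int(net.network_address) | interface_id)

# Documentation prefix, purely illustrative.
addr = slaac_address("2001:db8:1:2::/64")
print(addr)  # a different random address on every run
```

Run it twice and you get two different addresses for the “same” device, which is exactly why treating an IPv6 address as a stable personal identifier is a mistake.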

If one has in mind a single IP address fixed to one physical device, which is proven to be under the control of one identifiable human being, then one is thinking about the Internet as it was created in 1982. The Internet does not work that way now, and it will be even more removed from that in the future as it improves in scale with IPv6 and improves in security with strong encryption and authentication.

One might consider consolidating multiple sources of information. The Internet is a node-based network of networks, so let’s suppose we gather meta-data and data anywhere along the line from the source, through intermediate relays, to the final destination. The “store-and-forward” nature of the connection between the nodes is stateless, just like the connection between the layers in the network stack is stateless. Therefore, it is up to us to put the pieces together. A unique problem with coordinating information gathered from different nodes arises from the need to accurately establish a timeline. The “chain of custody” must be followed through time. Timekeeping with most computing and networking devices is not very good. The finest interval of time most devices can resolve is a millisecond. A gigabit network card can push packets across a wire at a rate of over 14,000 per second, which means that some 14 packets can arrive within a single millisecond tick, all sharing the same timestamp. The Internet has a feature called the Network Time Protocol, but there is no requirement to run it. Most networked devices are going to have the wrong time, uncoordinated time, and a coarse measurement of time. Think of it like a grainy surveillance video in a store — it may record a robbery, but it may be unusable for the purposes of an investigation or a prosecution.
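The arithmetic behind that timing mismatch fits in a few lines. The frame size is an assumption (9,000-byte jumbo frames), chosen as one plausible way to arrive at a rate in the ballpark quoted above:

```python
# Back-of-envelope: how many packets share one clock tick?
link_bits_per_second = 1_000_000_000   # a gigabit link
frame_bits = 9000 * 8                  # assumed jumbo frame size

packets_per_second = link_bits_per_second / frame_bits  # ~13,900
clock_resolution_seconds = 0.001                        # one millisecond

packets_per_tick = packets_per_second * clock_resolution_seconds
print(round(packets_per_tick))  # ~14 packets per indistinguishable tick
```

With smaller frames the rate is far higher still, so the conclusion only gets worse: many distinct events on the wire collapse into a single, coarse timestamp.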

This is not to say that gathering information from many sources is impossible. It is problematic, though, and it is not a panacea to the substantial cost and effort required in order to convincingly tie an Internet Protocol address to the actions of a human being.

Trying to isolate Internet spying to a domestic context is impossible, because the Internet is by its very nature trans-national. The trends of virtualization and so-called “cloud computing” make this even more so. The connection between an IP address and a human being is a weak one, and the always indirect connection between what happens at the network level and what a person does has been made even more tenuous and remote by the explosive growth and speed of the 21st century Internet.

The Domain Name System

The Domain Name System (DNS) was first defined in 1983, and has been vital to the functioning of the Internet ever since.  This is the scheme whereby Internet Protocol addresses — which are hard-to-remember numbers — are translated into easy-to-remember names. DNS is hierarchical in structure, global in scope, and works in an automated way, so that most of the people who use browsers and email and other Internet applications have no idea that it is there, let alone understand how it works. It is enough that DNS is the magic glue: someone types a URL into a browser or puts an address on an email, and it simply works. A domain name is an address like www.example.com. An Internet Protocol address looks like (IPv4) or 2001:f38:1f70::b99:df8:7148:6e8 (IPv6). Human beings are much better at remembering and using names, whereas computers are by nature number crunchers. DNS bridges the gap for human beings as users of computers, by translating name addresses to number addresses, and vice versa.

It is essential that DNS do this in a way that is accurate, and which makes sense to people. Since its foundation in 1983, DNS has been very successful because of its accuracy and sensibility, and it has become a global classification scheme on a par with that triumph of nineteenth century standardization, the Universal Postal Union. A big part of the common-sense validity of the Domain Name System comes from the concept of the top-level domain. A top-level domain is the largest-scale category of the name, giving the general sense of the kind of entity that has the name.  The first top-level names were organizational, and applied only to the United States.  They were .edu, .org, .net, .gov, and others.  Educational institutions like universities belonged in .edu; non-profit, non-governmental organizations belonged in .org; Internet entities belonged in .net; governmental agencies belonged in .gov.
Until the early 1990s, the Internet was restricted from commercial exploitation, so the top-level domain of .com was effectively a joke.  When this restriction was lifted and when the World Wide Web was invented by Sir Tim Berners-Lee, .com suddenly became serious, and registering a name under .com became an essential part of doing business and eventually to protecting trademarks.

The Internet became international in scope, and DNS expanded with it. Two-letter country codes derived from ISO standards became top-level domains.  Now, top-level domains like .ca for Canada and .ua for Ukraine and .za for South Africa and about 200 other designations competed with the traditional organizational ones like .com.  With the delegation of authority away from the United States-based university and military researchers, the clarity of DNS began to erode.  Should a Canadian company register under .ca or under .com, or should it register under both?  My own employer faced this decision, and was lucky to get its corporate trademark registered intact under both .ca and .com, but many other entities had to make compromises, with the result that a customer who is an Internet user does not have the old, comfortable assurances of where a name logically belongs.

Further weakening of DNS occurred with top-level country codes that belonged to states that were too weak to have a viable Internet, but who had two-letter combinations that because of their appearance were valuable as ersatz organizational domains.  For example, the South Pacific island nation of Tuvalu has the top-level country code of .tv. To the 10,000 people who live on Tuvalu, the letters “.tv” probably mean Tuvalu, but to the almost 7 billion other people who live on our planet the letters “.tv” bring to mind the word “television.”  Accordingly, the .tv top-level country code was exploited, early on, by television stations and similar TV-related entities, for their Internet presence, even though that presence had nothing to do with a small South Pacific island.

The original organizational top-level domains such as .edu and .net have since been expanded in number by the Internet Corporation for Assigned Names and Numbers (ICANN) and the Internet Assigned Numbers Authority (IANA).  Names ending in .museum and .aero and .pro and others have joined the list, with varying rules of assignment and varying popularity in terms of adoption.  Most recently, the top-level domain .xxx was sanctioned, intended for adult entertainment entities.  The intent of all of this expansion has been to internationalize and to democratize the Domain Name System, but its practical effect has been to devalue domain names.  Before, there was an artificial scarcity, and so a single coveted domain name was considered valuable enough that somebody spent 13 million dollars for the right to use it.  Now, companies looking to protect trademarks or secure coveted labels are registering in dozens of top-level organizational and country code domains, but it is an investment yielding diminishing returns.

One reason that chasing after prestigious domain names is becoming a nostalgic pastime is that search engines are now the primary means by which Internet users are finding what they want.  It once was the case that someone using a web browser would guess at a likely name address, and enter it manually in the URL field.  It made sense for any business to want to have a named presence that was short, easy-to-remember, and made sense for the kind of entity that they were.  Very few people type URLs any more, or even know what they are.  Web users are clicking on links, and those links are pushed at them by search engines.  There is no longer the requirement that the domain name in the URL be short and meaningful.  DNS is still needed to resolve the domain name in a URL to an IP address, but the content of the domain name is of less importance for human eyes than it was before.


I said that DNS thrives because of accuracy and sensibility.  I have addressed the issue of sensibility, arguing that the expansion of top-level domains and the increasing importance of search engines have eroded the primacy of short, meaningful domain names as the cornerstone of anyone’s presence on the Internet.  What I would not have expected is the need to address the issue of accuracy.  The honesty of the Domain Name System as a reliable means to translate a name to a number and vice versa is now under attack from an unexpected source: the United States government.  Yes, the U.S. government, which created the Internet through the Defense Advanced Research Projects Agency, is now considering hobbling DNS, by forcing the authoritative bodies that control name servers to falsify their records.  The Stop Online Piracy Act (SOPA) went before the U.S. Congress last year, and it had the backing of lobbyists from the old-line entertainment industries that have seen their monopoly power of distribution badly eroded because of the Internet.  The bill sought to bring under criminal law the civil law torts that organizations like the Motion Picture Association of America (MPAA) and the Recording Industry Association of America (RIAA) claim.  SOPA would have forced search engines not to acknowledge an Internet presence that actually exists, forcing DNS servers not to do their jobs honestly — all at the behest of a “take-down order” issued by someone who claims rights to intellectual property.  Thankfully, the immediate threat of SOPA was withdrawn, but the persistent threat of forcing DNS to tell lies remains.

This legalized hacking strikes at the very heart of the Internet.  By cutting away at the integrity of search engines and DNS, SOPA and legal measures like it look to do damage to the electronic network economy that has grown up over the past twenty years.  Laws like these are being made by people who do not understand the Internet (U.S. Representatives and Senators) at the behest of people who are afraid of the Internet (the old-line entertainment industry).  An honest DNS is a bystander victim in the battle of old media versus the Internet.  If DNS dies because of legislated corruption, that would be a shame, as it has been a tremendous success, and it made sense for a long time.  However, its demise would be the end of a long-term decline, because the Internet is evolving to new ways of connecting people with information by means of networks and computers.  My hope is that SOPA and similar legislative attacks on the Internet in the U.S. and Canada continue to “miss the mark,” because these are sledgehammer solutions to non-existent problems, and they cannot work because they don’t match what is true about the way the Internet works.  28 years after DNS was invented, some people in power are finally becoming aware of what it is.  They don’t understand it, they don’t like it, and they are afraid of it, and they may succeed in destroying DNS as an honest arbiter.  They are too late, because the Internet is moving on, beyond the United States and beyond its twentieth century foundational principles.  The Internet is dead — long live the Internet!

Pardon Alan Turing!

Alan Turing

It is shocking to the very core to find out what happened to Alan Turing after the war. The achievements of Bletchley Park were kept in profound secrecy until the 1970s. No one knew the great debt of gratitude that we all owe to Alan Turing. Photo by Flickr User basegreen

Alan Turing is perhaps the greatest computer scientist who has ever lived, and among the greatest British scientists of the 20th century. He was a key figure among the Bletchley Park “boffins” who cracked the Enigma code used by Nazi Germany, and who built Colossus, the world’s first programmable electronic digital computer. Alan Turing and the Bletchley Park team are directly responsible for saving many merchant ships and warships and hundreds of lives in the Battle of the Atlantic, and they can be credited with making the war shorter than it otherwise would have been. Alan Turing is a war hero. As a computer scientist, he is the inventor of the deceptively simple Turing Test, which is a benchmark for artificial intelligence to this day. He thought about the potential of electronic computers in a way that is so fundamental and revolutionary that it takes your breath away to try to grasp his concepts today. He pioneered the concepts of the algorithm and of a stored-program computer, and that is how we have come to understand electronic computing as a union of software and hardware.


Still waiting for Internet Protocol version 6

The Internet Protocol version 4 began in 1981, and eventually someone will send the last packet using it. I'll bet you they will send it from a horse-and-buggy North American ISP.

Sixteen years ago, the replacement for the Internet Protocol was made official. Years of work by groups of experts from around the world had solved the problems that were faced by the old way of doing things, which was called Internet Protocol version 4. For a short while, they called this grand collaborative effort IP “Next Generation” — lots of Star Trek fans in that bunch! — but eventually it was decided to call the new, improved set of rules “Internet Protocol version 6,” or IPv6 for short.


Digital vs. Analog, or Consumers vs. Industry

To me, it is like the year 1900, when stable owners and buggy whip makers could force compliant lawmakers to enforce a 5 mile per hour speed limit on automobiles, or demand that a flag man walk in front of cars when they drove through town.

The technology to allow entertaining performances to be preserved and then experienced later, again and again, was invented in the nineteenth century. The photograph, the audio recording, and the motion picture are all nineteenth century sensations. Starting with these, there was some medium introduced between the artist and the audience. People could still go to plays and they could still listen to what was starting to be called live music, but they now could also go to a cinema and watch a movie, or they could buy a record and play it. A business was born to mediate between the artist and the audience, and it even came to be known collectively as “the media.”

Encryption as a munition

The World Wide Web is considered to be the “killer app” of the Internet.  Because of Sir Tim Berners-Lee’s invention, computer networking moved from the world of the military, universities and research institutions, and into everybody’s home and business.  Electronic commerce followed, but many people do not realise that in early days an important ingredient was missing from the mix.  That missing ingredient was strong encryption, and it is only because we have it now — built in to every browser and every electronic commerce web server — that we are able to think about the Internet as a place to do business.

Computers and the Tragedy of the Commons

People who use computers and networks are very comfortable with discussing questions of how these resources are to be used, but are very uncomfortable discussing what they should be used for, or why.  The technical mind-set prefers to dwell on considerations of means over ends, and computers are very much utilitarian objects.  Computers are how-to devices, and computer users and administrators and programmers prefer to be how-to problem solvers using them.