During the 1980s the telecommunications industry developed an architecture and set of services branded the "Intelligent Network". The Intelligent Network was based on a series of standards issued by the International Telecommunication Union (ITU) and grew from the widespread deployment of digital switches controlled by Unix-based computers. These standards enabled services such as 800 and 900 numbers, re-dialing, call-back, and voicemail, but precluded any possibility of new services being offered from outside the network. Those of you of a certain age may recall that it was only in 1974, following an FCC ruling, that AT&T finally agreed to allow devices other than its own products, a telephone for example, to be connected to its network.
Beginning around 1993 the Internet began to emerge in public consciousness and demonstrated exponential growth in the number of hosts (although exponential growth from a small base had been a characteristic since the earliest days of the Internet). The telecommunications industry provided the backbone links (T1 and T3 lines in the early days), but at the same time the industry scoffed at the notion of a large-scale network without central management. There were frequent predictions of imminent "meltdown" (which has never occurred). By 1997 it was hard to argue that the Internet could not scale, and David Isenberg, an employee at AT&T Labs, published a white paper entitled "Rise of the Stupid Network". This article contrasted the core assumption of the Intelligent Network, namely a centralized intelligence supporting dumb terminals at the periphery, with the highly distributed approach of the Internet. In the Internet the network is (in a certain sense) "stupid", and the intelligence, including the transport protocol management, is provided by intelligent hosts or terminals at the periphery. This has enabled amazing innovation in the creation of services that can be delivered over the stupid Internet. I believe that David Isenberg's subsequent career at AT&T Labs was short, although, thanks to that same Internet, AT&T was unable to prevent the dissemination of his article.
Since 1997 the industry has transformed considerably, initially adopting IP telephony on international and later on domestic long-distance links, while at the same time attempting to block the use of IP telephony end-to-end. Switches have also evolved to the point that a telephone exchange is almost indistinguishable from an Internet routing center. But the commitment to centrally managed communications remains, even as hand-held devices now rival switch controllers in computing power.
We are now developing another set of large-scale network-based services, Smarter Cities, in which it seems perfectly logical to connect millions of sensors via networks to a centralized intelligence for the purpose of extracting insight from this flood of information. But some of us are agitating for transparency in access to municipal and local government information. We are encouraging cities and regions to think of their citizens as active and intelligent participants in these systems and not merely the dumb recipients of centralized decision-making. We are wondering what kinds of innovative, external services could be created by this kind of openness. What will happen if we view the Smarter City not as a centralized facility for analyzing real-world data, but as a set of cleansed and labelled flows of information, together with reference and historical databases, that anyone (modulo privacy and security issues) could access? What innovation would this unleash? Could citizens manage municipal services better - and more cheaply - than the local government agencies? Are we missing an important lesson here? It may not be a stupid thought.