Nothing captures the imagination of philosophers, entrepreneurs and technophiles more than the idea of accelerating progress. But what is progress? Can it be measured? And is the pace of technological change accelerating as fast as we think? Greg Stevens looks back over 200 years of our obsession with progress and concludes that we need to fundamentally reimagine our understanding of the concept.
The Social Media Revolution video produced by socialnomics.com is played relentlessly in corporate meeting rooms across the country by executives, their hearts a-flutter, trying to capture some of the magic of today’s technological acceleration.
The video boasts that radio took 38 years to reach 50 million users, television took 13 years, the internet took 4 years, the iPod took 3 years and Facebook added 100 million new users in 9 months. What more evidence could you need that we live in extraordinary times?
The language and ideas used by modern-day “futurists” can be traced back to intellectual movements of the 1960s, popularised by books like Alvin Toffler’s Future Shock. But the tradition actually goes back much further. People have watched the progress of technology and civilisation, progress that is seemingly without bounds, with breathless and rapt attention for over 200 years.
The obsession can be seen as far back as Marie Jean Antoine Nicolas Caritat, Marquis de Condorcet, in his Esquisse d’un tableau historique des progrès de l’esprit humain (“Sketch for a Historical Picture of the Progress of the Human Mind”), written some time before his death in 1794:
“Applying [our] general reflections to the different sciences, we shall give examples of their successive improvement that will leave no doubt as to the certainty of the future improvements we can expect… [And] if we now turn to the mechanical arts, we shall see that their progress can have no other limit than the reach of the scientific theories on which they depend; […] Instruments, machines, looms will increasingly supplement the strength and skill of men; will augment at the same time the perfection and the precision of manufactures by lessening both the time and the labor needed to produce them.
“All these causes of the improvement of the human species, all these means that assure it, will by their nature act continuously and acquire a constantly growing momentum. […] Would it be absurd now to suppose that the improvement of the human race should be regarded as capable of unlimited progress?”
This faith in the unlimited acceleration of progress is one of the defining characteristics of the modern era, stretching from the 1700s into the last century: belief in the objective and inexorable forward march of all things human. Today, it has seeped into the public consciousness, and can be seen in toys as well as boardrooms. We listen to a selection of a lifetime’s worth of music stored on a postage stamp-sized iPod and chuckle at how badly we used to underestimate what gadgets would be able to do.
Contemporary chatter about exponential increases in storage capacity and viral YouTube videos about the rapid adoption of social media are only the latest incarnation of what is, in many ways, the pre-eminent fetish of the Enlightenment era.
Fetish and criticism
The term “fetish,” used in the technical, anthropological sense, has nothing to do with sex. In the original sense of the word, a fetish was an object that was perceived as having inherent magical or supernatural power. It could refer to charms or totems, for example: objects that were assumed to have power over man, not because of their function or history or even their symbolism, but by their very nature.
When a superstition or myth maintains that blood can be used to heal an illness, not because of some chemical property of the blood but simply because it is blood, then blood is being treated as a fetish. Over time, the term has been extended to refer to anything that is perceived as having an intrinsic power over people: something that captures people’s minds and feelings, something that bodily and inescapably grabs the attention, something that can direct people’s behaviour simply because it is what it is.
The mythology of the ever-accelerating progress of man also has many characteristics of a fetish. Though students of futures studies relentlessly seek confirmation in hard data, they generally admit that it all starts with an intuition: a feeling that the world is progressing faster and faster over time. This sense is so powerful, so pervasive, that it has captured the human imagination for literally hundreds of years.
It is so intrinsically compelling that even sober scientists have rarely paused to consider that an “all-pervasive feeling” of acceleration may say more about the way things feel than about the actual progress of technology itself.
Hand-in-hand with that feeling, there is also a fascination with where such rapid progress will lead. Today’s futurists talk about the idea of “technological singularity,” a term first coined by science fiction author Vernor Vinge, but popularised by inventor and futurist Ray Kurzweil: a point when human technology has progressed to such an extent that humans become immortal, human intelligence becomes super-human, and further advances become literally unimaginable by people living today.
But, as with the notion of accelerating progress more generally, the “singularity” concept is not a new idea. Indeed, later in the same piece, the Marquis de Condorcet asks whether there will one day be a time “when death would result only from extraordinary accidents or the more and more gradual wearing out of vitality, and that, finally, the duration of the average interval between birth and wearing out has itself no specific limit whatsoever?”
This speculation is not data-driven. The idea of exponential progress leading to unimaginable and super-human technological “singularity” is popular first and foremost because it feels good. We have a visceral attraction to it. We have a bodily response to it. For technophiles, it functions as a fetish.
Futurists, whose feverish writings can be seen in all the glossy technology magazines and in the technology sections of local bookstores, seek out statistics to support the idea that progress is accelerating exponentially: Moore’s Law, the observation that the number of transistors that can be placed on an integrated circuit doubles every two years; Gilder’s Law, the observation that bandwidth capacity doubles every six months. An entire family of statistics has been tagged to exponential curves (a matter that will be addressed in a moment).
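It is worth pausing on what these doubling claims actually compound to. The sketch below uses the doubling periods quoted above; the ten-year horizon is an arbitrary illustration, not a figure from either “law.”

```python
def growth_factor(doubling_months: float, horizon_months: float) -> float:
    """Total multiplicative growth after steady doubling over the horizon."""
    return 2.0 ** (horizon_months / doubling_months)

decade = 120  # months

# Moore's Law: transistor counts double roughly every two years.
moore = growth_factor(24, decade)   # 2**5 = 32x over a decade

# Gilder's Law: bandwidth capacity doubles roughly every six months.
gilder = growth_factor(6, decade)   # 2**20, roughly a million-fold

print(f"Moore's Law over a decade:  {moore:,.0f}x")
print(f"Gilder's Law over a decade: {gilder:,.0f}x")
```

The asymmetry is the point: a shorter doubling period does not merely grow “faster,” it produces a qualitatively different order of magnitude over the same horizon.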
Yet there is something about the sheer rapture with which futurists talk about “progress” that makes its scientific grounding seem suspect; indeed, the tone of some futurists talking about “singularity” echoes that of end times evangelists. “2045: The Year Man Becomes Immortal” reads the Time Magazine headline of an interview with Ray Kurzweil. Are you so sure that you want to put an actual date to it, Mr. Kurzweil? (Even after seeing how that worked out for Harold Camping?)
To understand how emotionally-driven this belief actually is, one only has to look at the responses elicited by its criticism. In 2005, Jonathan Huebner published “A Possible Declining Trend for Worldwide Innovation,” in which he used both the number of US patents granted per person per year and the number of “important innovation events” per person per year to argue that the rate of innovation actually peaked somewhere around a century ago – and has been declining ever since.
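Huebner’s basic bookkeeping is easy to reproduce in miniature. The sketch below uses invented event counts and rough population figures purely for illustration (his actual analysis used US patent grants and a curated list of innovation events); the point is only that a raw count can keep rising even while the per-capita rate peaks and declines.

```python
# Hypothetical "important innovation events" per decade, and population
# in millions. All numbers are invented for illustration only.
events = {1880: 40, 1890: 55, 1900: 70, 1910: 78, 1920: 82,
          1930: 84, 1940: 80, 1950: 85, 1960: 95, 1970: 100}
population = {1880: 50, 1890: 63, 1900: 76, 1910: 92, 1920: 106,
              1930: 123, 1940: 132, 1950: 151, 1960: 179, 1970: 203}

# Huebner-style metric: events per million people per decade.
rate = {year: events[year] / population[year] for year in events}

peak_year = max(rate, key=rate.get)
print(f"Raw event counts peak in {max(events, key=events.get)}")
print(f"Per-capita innovation rate peaks in {peak_year}")
```

In this toy series the absolute number of events grows right to the end, yet the per-capita rate tops out decades earlier – exactly the shape of result that provoked the controversy.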
The outrage among futurists was immediate and, to say the least, hot-headed. His conclusions were criticised in every conceivable methodological way. How do you decide what counts as an “important” innovation event? How do you count innovations that spawn other innovations? How do you deal with the fact that not all innovations have equal impact? Do you count “per person” as “per person in the world” or just “per person in innovating countries”? And so on.
Although these are all valid questions, none of them functioned as a satisfying rebuttal of Huebner’s hypothesis – that the rate of innovation is not increasing exponentially – for the simple reason that they call equally into question the data behind any of the futurists’ own hypotheses. In fact, taken as a whole, they call into question whether innovation can be measured at all.
One of the better critiques was offered by John Smart, the president of the Acceleration Studies Foundation. There are two specific points in his article worth noting.
“It is my intuition, supported by today’s crude exponential technology capacity growth metrics such as Moore’s law (processing), Gilder’s law (bandwidth), Poor’s law (network node density), Cooper’s law (wireless bandwidth), Kurzweil’s law (price performance of computation over 120 years) and many others, that technological capacity and technological innovation have always accelerated since the birth of human civilization, and that their growth remains exponential or gently superexponential today.”
It is difficult to understand exactly what argument Smart is making here. Moore’s Law says that the number of transistors that can be placed on an integrated circuit doubles every two years. Miniaturisation is great, but it’s not a measure of innovation, invention, or discovery. Gilder’s Law says that bandwidth capacity doubles every six months. Bandwidth speed is also great, but it is also not a measure of innovation, invention, or discovery. None of these supposed “laws” are actually material to innovation, which was the subject of Huebner’s analysis.
Perhaps Smart’s argument is simply: “Lots of things in the world show exponential growth, therefore it would be weird if innovation didn’t.” There is certainly a lot of evidence to support the first half of that claim. Would you like to know some of the things that have been described as following an exponential curve, or a “power law” relationship?
The relationship between practice and learning follows a power law. The intensity of wars from 1816–1980 measured as the number of battle deaths per 10,000 of the combined populations of the warring nations follows a power law. The severity of terrorist attacks worldwide from February 1968 to June 2006, measured as the number of deaths directly resulting, follows a power law. The numbers of copies of bestselling books sold in the United States during the period 1895 to 1965 follows a power law.
To put it bluntly: mathematicians have a power-law fixation, and they always have, in every field of science. The fact of the matter is that when you are looking at noisy data, a great many things can be “fit” approximately to an exponential curve. And, not coincidentally, exponential equations are extremely easy to analyse.
So, after a certain point, one has to ask whether there is some grand universal “power law” design to the entire universe, or whether mathematicians simply favour exponential functions because those are the functions they know best. As the saying goes, “When all you have is a hammer…”
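The point is easy to demonstrate. The sketch below, using entirely synthetic data, fits an exponential model to values generated by a plain quadratic; even though the underlying process is not exponential at all, the fit still explains most of the variance.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data from a quadratic law (definitely not exponential),
# with mild multiplicative noise. All numbers are illustrative.
x = np.arange(1, 51, dtype=float)
y = x ** 2 * np.exp(rng.normal(0.0, 0.05, x.size))

# "Exponential fit": least squares of ln(y) against x,
# i.e. the model y ~ exp(a + b * x).
b, a = np.polyfit(x, np.log(y), 1)

residuals = np.log(y) - (a + b * x)
r2 = 1.0 - residuals.var() / np.log(y).var()
print(f"R^2 of an exponential fit to quadratic data: {r2:.2f}")
```

An R² north of 0.8 on data that is not exponential is precisely why a “good fit” alone is weak evidence: a goodness-of-fit number cannot distinguish between genuinely exponential growth and any other smoothly accelerating curve.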
At any rate, Smart’s reference to “power law” dynamics in things like network capacity and miniaturisation is at best a form of “argument by really vague analogy.” One could present the hypothesis that the progress in making stuff smaller “should be” mirrored by the progress of innovation and invention, but that is clearly a stretch.
It would be equally sensible to say that with the rapid increase of computational capacity in today’s world, it may follow that innovation would be dampened because it simply isn’t as necessary. After all, if you can solve a problem faster by putting it on a faster microprocessor, why invent a more efficient solution?
The second argument that Smart presents is here:
“[T]echnological innovation may be becoming both smoother and subtler in its exponential growth the closer we get to the modern era. Perhaps this is because since the industrial revolution, innovation is being done increasingly by our machines, not by human brains. I believe it is increasingly going on below the perception of humans who are catalysts, not controllers, of our ever more autonomous technological world system.
“Ask yourself, how many innovations were required to make a gasoline-electric hybrid automobile like the Toyota Prius, for example? This is just one of many systems that look the same ‘above the hood’ as their predecessors, yet are radically more complex than previous versions. How many of the Prius innovations were a direct result of the computations done by the technological systems involved (CAD-CAM programs, infrastructures, supply chains, etc.) and how many are instead attributable to the computations of individual human minds? How many computations today have become so incremental and abstract that we no longer see them as innovations?”
Again, Smart is visibly squirming to find an explanation as to why the data found by Huebner doesn’t match his own “intuition” that the rate of progress is increasing. Once again, he is missing the point. The idea behind measuring innovation is to measure real, life-changing events that affect the day-to-day experiences of masses of people in the world.
It might be very cool that the inside of a car, what is going on “under the hood,” both metaphorically and literally, is getting more and more complex over time. But is this really the type of “important innovation” that Huebner is trying to measure? Is this the kind of change that we intuitively associate with “world-changing inventions”? To the end-user, it’s still just a car.
These responses to Huebner – the question of how an “important” innovation event is assessed, the seeming exponential growth of other measurements of progress (e.g., Moore’s Law), and the gradual increase in the complexity of technology – all point to a common and fundamental problem. Despite centuries of intuition and writing about the feeling of acceleration, there is still no consensus on a basic definition of “progress.”
In the final analysis, any serious discussion of our culture’s acceleration fetish must address this question head-on: how, exactly, is progress measured? What, exactly, is the thing that is accelerating?
The Mis-Measure of Technology
Huebner wanted to measure “important” innovation events, which is an ill-defined category. Some critics have suggested that any list of “important” technological advances should be validated through survey data showing, at the very least, an objectively measurable consensus as to which technological innovations should be counted. This makes sense, but only to an extent.
On the one hand, “important” suggests that we have to take into account the impact that an innovation has on the public consciousness, because “important” developments intuitively must be those that have had a large and material impact on society.
On the other hand, the “public consciousness” is notoriously susceptible to fads and fancy. Tesco Mobile surveyed around 4,000 consumers aged 18 to 65 about the technological developments that were most important in their lives. They placed the iPhone ahead of both the toilet and the combustion engine.
Scientific discoveries often facilitate the development of technology in a way that is ultimately invisible to most of the population. The same is true of industrial technological innovations, particularly raw materials, such as plastic, and manufacturing processes, such as thin-film solar cell manufacturing. What people are most aware of is the development of consumer products: computers, phones and cars.
But while the importance of an end-user product might well be measured by a survey of the general population, the importance of scientific discoveries or industrial developments might be better gauged by other methodologies.
There is another reason that these types of innovation should be measured separately: they actually represent different types of progress. Scientific discovery, industrial innovation, and product invention arise through different processes, and often from different segments of society. Each is likely to be influenced by different factors: for example, raw materials shortages may impact industrial innovation more than scientific discovery, while governmental regulations or fads and fashions in public opinion may have a stronger impact on end-user products.
Does an increase in scientific discovery always lead to a corresponding increase in product innovation? Does a new industrial invention always lead to a burst in product development? These are empirical questions, and the answers should be measured, not assumed.
This same argument can be extended beyond the measurement of innovation to the measurement of progress more generally. The common intuition is that progress includes but is not limited to innovation. Indeed, Smart’s references to “Moore’s Law” and other “power law” dynamics point to another complementary, but completely separate, phenomenon that people associate with progress: the quantitative enhancement of existing technology, rather than the creation of a qualitatively new or different technology. We could call this amplification.
It seems to me that the exponential increases in processor speed, or bandwidth, or number of chips that can fit on the head of a pin, all fall under the heading of amplification, rather than innovation.
Are there other factors? Should the rate of social acceptance be added into the mix of “things that people associate with progress” as well? Clearly this is open for discussion. But what is equally clear is that questions, studies, or claims about progress that fail to address these distinctions will stir up more mud than gold.
This is the guilty – and not-so-well-kept – secret of the study of progress and the future. Even the Acceleration Watch website, dedicated to futurist studies, ends its “Brief History of Future Studies” by noting,
“One might expect that the 1980s would have seen marked improvement of futures methodology and a steady advancement of this important new academic field, yet unfortunately, this did not occur… As accelerating and convergent technological changes continue to drive our technological environment to new heights… it seems reasonable to expect that futures studies will one day soon finally enter its own Golden Age. Until that time, as a practical matter, the field must remain on the edge of social legitimacy.”
The methodological morass in which “futures studies” finds itself is obviously in need of an overhaul. On the bright side, its situation is by no means unique. In fact, the state of affairs I have just described has a strong parallel in another scientific field that had its own revolution in the last century: the study of intelligence. Allow me a brief diversion.
The scientific study of intelligence was initially plagued by contradiction and confusion. Despite the strong intuition that intelligence is a real and objective trait, nobody was clear on exactly how to define it or measure it. Intelligence intuitively seemed related to the ability to think quickly, but clearly thinking quickly was not sufficient for intelligence.
Intelligence was partially correlated with creativity, but certainly there was plenty of evidence that the two did not always go hand-in-hand. Intelligence seemed to be enhanced by in-depth knowledge, but common knowledge suggested that certain types of intelligence were only demonstrated in novel situations, where knowledge was not a factor. So, in the end, none of these measurable characteristics (problem-solving speed, creativity, or knowledge) quite matched up to anyone’s intuitions about what intelligence ought to mean.
After decades of failed attempts to find the underlying thing that somehow could be partially-but-not-completely measured in this wide variety of ways, it dawned on people: there is simply no such thing as intelligence. Or rather: what we intuitively respond to as intelligence is a composite, a symptom that can arise from a number of different factors in a number of different ways that are correlated but which function independently of one another.
It is true that people who think quickly can sometimes also learn more, and that people who think quickly sometimes have more time to come up with creative results. But this sense of intelligence is an end result without a single cause: it is what we see in the capabilities of a person who is able to harness any of a number of traits and get them to work together to produce a result. In the end, a person can display intelligence by being very quick, or very knowledgeable, or very creative, or any mixture of these or other helpful traits.
This is the way we should think about progress. If the Enlightenment era attitude of infinite unidirectional objective progress is what academics call a “key fetish” of modernity, then what Huebner’s criticism has pointed us to is the need for a post-modern critique of our understanding of what progress is. We must deconstruct and study the elements that give us that ubiquitous feeling of progress and study them as what they are: separate, independent, interrelated processes and phenomena.
Happily, the study of intelligence continues in the field of psychology as strongly as ever; it has simply done away with the belief in a single core phenomenon called “intelligence.” Futurists and students of acceleration can take heart in this.
For psychologists who study intelligence, the discovery that their subject is in fact not a thing, but a family of interwoven and mutually connected phenomena, has produced a burst of new research questions and measurement methods. If the field of psychology is taken as a model, futurists can look forward to spending decades refining their measurement of individual factors, investigating how each impacts the others, and hypothesising exactly how and why it may, or may not, be accelerating.
As long as they start by doing away with the idea of progress.