Let me begin this post by confessing total ignorance as to what the physical basis, if any, of “Moore’s Law” might be. But I kind of have this pet notion related to it that Kevin Drum inadvertently reminded me of:
World-changing inventions just don’t come around all that often, and when they do it takes a long and variable time for them to become integrated enough and advanced enough to have an explosive economic effect. Steam took the better part of a century, electrification took about half that, and computers — well, we don’t really know yet. So far it’s been about 60 years and obviously computers have had a huge impact on the world. But I suspect that even if you put the potential of AI to one side, we’re barely halfway into the computer revolution yet. To a surprisingly large extent, we’re still using computers to automate stuff we’ve always done instead of actually building the world around what computers can do.
My pet notion is that improvements in computer power have, in some sense, come along at a sub-optimally rapid pace. To actually think up smart new ways to deploy a new technology, then persuade other people to listen to you, then implement the change, then have the competitive advantage this gives you play out in the form of increased market share, takes time. The underlying technology is changing so rapidly that it may not be fully worthwhile to spend a lot of time thinking about optimizing the use of existing technology. And senior officials in large, influential organizations may simply be uncomfortable with state-of-the-art stuff. But the really big economic gains come not from the existence of new technologies but from their actual use to accomplish something. So I conjecture that if, after doubling, then doubling again, then doubling a third time, the frontier starts advancing more slowly, we might actually start to see more “real world” gains as people come up with better ideas for what to do with all this computing power.