It seems to be bon ton these days to talk about the end of Moore’s law. Though originally stated for the number of transistors that fit in a given area of silicon, it has quickly been extended to CPU clock speed and performance as well.
For a long time, both the number of transistors in a CPU and its clock speed doubled roughly every 1.5 to 2 years. This held until around five years ago. By the laws of physics, power dissipation, and with it heat, grows roughly cubically with clock speed, so pushing clock speeds much above 3 GHz became impractical.
As a result, rather than increasing the clock speed, chip manufacturers opted for increasing the number of cores in a CPU, thereby increasing the theoretical performance by adding parallelism rather than by increasing the clock speed.
Yet even this trend may soon face a serious challenge, as the distances between transistors quickly shrink toward the minimum required by CMOS technology.
All this is great news for computer science, in my opinion. For a long time, people got used to being lazy. If computers become twice as fast every 1.5 to 2 years, there is no point in investing much effort in writing efficient code.
If something does not run fast enough, simply wait for the next generation of Intel x86 and everything will be resolved. In particular, CPUs became fast enough that traditional programming languages and efficient data structures and algorithms were abandoned in favor of high-level scripting languages whose most sophisticated data structure is an associative array. Suddenly, every Joe Shmoe could become a programmer developing sophisticated Web applications with no effort – no need for hard-earned computer science degrees anymore.
All this could change with the end of Moore’s law. As CPUs become parallel, programmers need to learn how to write parallel code and deal with all the intricacies of concurrent execution. They need to understand how the system executes their code, dealing with memory consistency issues, avoiding synchronization in order to facilitate parallelism, and so on.
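To give a flavor of these intricacies, here is a minimal sketch (in Python, an illustrative choice on my part, not from the post) of the classic shared-counter pitfall: the read-modify-write in `counter += 1` is not atomic, so concurrent updates can be lost unless they are protected by a lock.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        # `counter += 1` is a read, an add, and a write; without the
        # lock, two threads can interleave these steps and lose updates.
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # → 400000 with the lock; may come out lower without it
```

The lock makes the result deterministic, but it also serializes the increments, which is exactly the tension the text describes: synchronization buys correctness at the cost of parallelism.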
There is suddenly great demand for innovation in compiler technology for automatically parallelizing sequential programs. Programming models are suddenly an important topic again. Data structure libraries need to be parallelized in an efficient and scalable manner. Operating systems must be redesigned and re-architected to make effective use of the many cores at their disposal.
Moreover, as the number of transistors may be reaching a limit, even the number of cores on a CPU will likely be limited. Also, by Amdahl’s law, the maximal benefit from parallelism is in any case bounded by the serial fraction of a program. Hence, writing efficient code will suddenly become important again. For this, a strong background in computer science is a must!
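As a reminder, Amdahl’s law says that if a fraction p of a program’s work can be parallelized across n cores, the overall speedup is at most 1/((1 − p) + p/n), which approaches 1/(1 − p) no matter how many cores you add. A quick sketch (in Python, my choice for illustration):

```python
def amdahl_speedup(p, n):
    """Maximum speedup when a fraction p of the work is parallelizable
    over n cores and the remaining (1 - p) fraction stays serial."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 90% of the work parallelizable, 16 cores give only 6.4x,
# and no number of cores can do better than 10x.
print(amdahl_speedup(0.9, 16))  # → 6.4
```

This is why shaving down the serial fraction of a program, i.e. writing efficient sequential code, matters at least as much as adding cores.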