The Roots of Cloud Computing
By Bruce Harmon, Ph.D.
What is all this hype over cloud computing? Is it really something new under the sun? As one who has witnessed the last forty years of computing technology, I say yes and no.
Think back to the 1960s if you can and will. In those days individuals accessed “time-share” computers by logging into a “dumb” terminal and connecting via telephone line to a central mainframe computer. The user’s company and department would be billed for the minutes connected, plus CPU-minutes, plus storage (disk space), plus any long-distance charges. The IBM System/360 was typical.
Then in the 1970s came minicomputers, highly distributed within the user’s company and owned outright as assets. Departments might or might not be charged by the minute. Digital’s VAX running VMS was typical. Throughout this time, the CPU was built on a printed circuit board from many components, including integrated circuits, or chips. Toward the end of the 1970s, the CPU could be a single chip: the microprocessor.
With the microprocessor came the client-server model, which put the heavy-duty work on the server and the light work on the client. Networking became essential. Digital, HP, Sun, IBM, and Apollo, among others, offered many competing products. Then came the personal computer (PC), initially from IBM. The PC often took the place of the client at a much reduced price. As PCs proliferated, it became economical for a user to own one outright, which meant it could sit idle much of the time.
As Moore’s Law marched on, more and more circuitry could be placed on a single chip, until it became possible to put multiple microprocessor cores on one chip. Today, the modern server in the rack in your data center holds two microprocessor chips, each of which holds as many as six such cores. Along the way, it occurred to some people to “virtualize” such servers, that is, to allocate running programs across each and every core. A further refinement allows a running operating system to be dynamically allocated to each core, with multiple simultaneous processes running on top of it.
Thus it became logical to centralize computing once again, this time into efficient virtualized servers, a far more efficient way to use these dramatic advances in computing power. Cloud computing can be via a private cloud, owned by the user’s company, or a public cloud, owned by a company in that business such as Amazon, HP, or Oracle. With a public cloud, the user is billed by the minute and by the unit of storage.
Thus, in a sense, we are back to the future!
Learn more about Bruce Harmon, Ph.D., the University Doctoral Chair of Computer Science at Colorado Technical University.