Companies with computer systems often set up data centers to provide backup in case crucial information is lost during a disaster. Your company may operate a single data center or several, housed in one place or spread across different locations.
When you hear about a data center, what probably comes to mind is a room full of servers and cables. True as that once was, this picture no longer captures what data centers have become in today’s tech-savvy world. Here is how data centers have evolved over the past 50 years.
Until the early 1960s, computers were used mainly by government organizations. These were typically enormous machines, weighing up to 30 tons, occupying up to 1,800 square feet of floor space, and requiring around six full-time technicians to keep them running. These large mainframe computers were housed in rooms that we would today call “data centers.” Because the mainframes themselves were expensive, maintaining them was costly as well, and the machines were highly prone to errors and breakdowns.
The release of the first commercial microprocessor by Intel in 1971 led to a significant reduction in the size of mainframe computers. Data centers, in particular, gained prominence due to the growing need for formal data recovery plans in the event of a disaster. Many in the computing industry reasoned that a disaster would not necessarily disrupt business operations, since computers at the time handled only after-the-fact bookkeeping duties. Toward the end of the 1970s, the huge mainframes were gradually phased out and replaced by air-cooled computers that could be moved into offices.
The 1980s were defined by the introduction of IBM’s personal computer (PC). IBM subsequently established a $30 million computing facility at Cornell University, intended to act as a super data center for IBM PCs. The introduction of the IBM Application System/400 (AS/400) in 1988 made it even more important for companies to back up their business computing systems. The unprecedented growth of information technology resources raised awareness of the importance of setting up and managing IT resources, including data centers.
In the 1990s, microcomputers (now referred to as servers) began moving into the old computer rooms that had previously housed mainframes. Companies increasingly set up server rooms on their own premises, thanks to the availability of relatively affordable networking equipment. Data centers became even more prominent during the dot-com bubble, as companies needed fast Internet connections to establish a footprint on the web. Consequently, many companies started building large facilities to provide data backups.
2000 to Present
Computing technology has never been more capable, and almost every sizable company now operates at least one data center, with ever-larger facilities being built to meet the industry’s needs. In recent years, data centers have been virtualized thanks to the advent of cloud computing. Today, the resources you get from huge corporations such as Google, Amazon and Microsoft are provided by those companies’ data centers and delivered to you over the Internet.
This clearly illustrates how data centers have made cloud computing a reality: before large-scale data centers existed, selling Internet-based computing resources was impractical. Thanks to their evolution, you can now access information quickly over the Internet. Today’s data centers are gradually shifting from a hardware-and-software ownership model to a subscription, capacity-on-demand model. With an average of 5.75 million new servers deployed annually, the volume of online data will keep growing exponentially.