A history of the cloud and an outlook to edge computing (part 2)

AntiNews
5 min read · Dec 22, 2020

Our physical infrastructure dictates the limits of our virtual world. This truth is important for understanding challenges in our past as well as opportunities in our near future.

Although many elements have determined the pace at which we continue to accelerate technology, two key pieces of infrastructure go hand-in-hand in importance:

1. Computing capacity (CPU) — servers, computers, IoT devices
2. Network capacity (bandwidth) — the network itself (LAN, internet, 4G, 5G)

As an analogy, think of computing capacity (CPU) as the speed of a car and network capacity as the number of lanes on a highway. Having high computing power (CPU) connected to a network with limited bandwidth is the equivalent of a Ferrari on a busy one-lane highway: it doesn't matter how fast you can process information if you are limited by the speed at which you can send it to others. Computing capacity and network capacity are both critical to enabling our cloud-driven world, and both have historically been bottlenecks.
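To make the bottleneck concrete, here is a rough back-of-envelope sketch in Python (the file size and link speeds are illustrative assumptions, not figures from this article) showing that no amount of local processing power shortens the time it takes to push a file through a slow link:

```python
# Rough illustration: transfer time depends on the network link, not the CPU.
# File size and link speeds below are illustrative assumptions.

FILE_SIZE_MB = 5                      # e.g. a handful of photos
LINKS_KBPS = {
    "56k dial-up modem": 56,          # typical 1990s consumer connection
    "100 Mbit/s broadband": 100_000,  # typical modern consumer connection
}

file_size_kbit = FILE_SIZE_MB * 8 * 1000  # megabytes -> kilobits (approx.)

for name, kbps in LINKS_KBPS.items():
    seconds = file_size_kbit / kbps
    if seconds > 90:
        print(f"{name}: ~{seconds / 60:.0f} minutes")
    else:
        print(f"{name}: ~{seconds:.1f} seconds")
```

With these assumed numbers, the same 5 MB takes roughly 12 minutes over dial-up and well under a second over modern broadband, regardless of how fast the machine at either end is.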

If you have not yet read the previous article on pre-web computing and early networking, you can follow this link first. We continue below with the rise of the internet and how the dot-com boom eventually gave rise to cloud computing, due to the inefficient use and management of (non-core business) server/CPU resources on a large scale. The web 1.0 period started around the mid 90s and was characterised by a significant increase in network capacity as the internet took shape. The early internet was seen as web 1.0: one-way (read-only) traffic of content published by companies on static HTML websites.

Mid 1990s — Understanding how we got to web 1.0

The boom in PCs and the rise in on-premise server (CPU) parks

The processing power of microchips increased significantly from the 80s onwards and throughout the 90s, which decreased the cost of computing power and made it possible to bring CPU processing to the masses via the personal computer (PC). The falling cost of PCs led to growing numbers of households buying PCs, as well as business users adopting PCs and the software that runs on them (e.g. Microsoft Office).

Moore’s Law has largely held true over time and states

“the speed and capability of computers can be expected to double every two years, as a result of increases in the number of transistors a microchip can contain.”

Source: Economist
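As a quick illustration of what "doubling every two years" compounds to, the sketch below runs the arithmetic forward from an assumed early-80s starting point (the starting transistor count is a rough illustrative figure, not taken from the chart above):

```python
# Moore's Law as stated above: capacity doubles roughly every two years.
# The starting year and transistor count are illustrative assumptions.

start_year = 1980
start_transistors = 30_000          # roughly the order of an early-80s CPU

for year in range(1980, 2021, 10):
    doublings = (year - start_year) / 2
    transistors = start_transistors * 2 ** doublings
    print(f"{year}: ~{transistors:,.0f} transistors")
```

Compounding like this turns tens of thousands of transistors in 1980 into tens of billions by 2020, which is the right order of magnitude for why computing power became cheap enough for every home and office.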

The percentage of households with a PC grew rapidly throughout the 90s and early 2000s. The cost of personal computers decreased, and their value increased as companies started creating software applications that could run on them (e.g. notepad, email, messaging). As more individuals purchased PCs and started using these communication applications, the value of the applications increased (network effects), creating a greater incentive for new users to also buy a computer and join the network. In 1990, 15% of US households had a computer; by 1997 this had grown to 35%, and ten years later to roughly 70% (Statista). With a growing PC user base, the ROI on developing software became higher, as the market of potential buyers kept expanding.
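One common way to make the network-effect argument concrete is Metcalfe's law, which the article does not cite but which captures the same idea: the number of possible user-to-user connections grows roughly with the square of the number of users, so every new PC owner makes the network more valuable for everyone already on it. A minimal sketch:

```python
# Metcalfe's-law style illustration: the number of possible pairwise
# connections grows quadratically with the number of connected users.

def possible_connections(users: int) -> int:
    """Distinct user-to-user pairs in a network of `users` people."""
    return users * (users - 1) // 2

for users in (10, 100, 1_000, 10_000):
    print(f"{users:>6} users -> {possible_connections(users):>12,} possible connections")
```

Ten users can form 45 pairs, while ten thousand users can form nearly fifty million, which is why each wave of new PC owners made messaging and email dramatically more useful.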

PCs were effective for running simple applications such as Microsoft Office (e.g. Word, Excel, PowerPoint), games and communication tools, as well as web browsing. The interaction with the internet was primarily one-way: websites published information that PC users would read. There was limited two-way interaction in the sense that PC users were not uploading content (e.g. images, files, videos) to the internet, as the bandwidth was too slow. This age can be seen as being (relatively) good in CPU power, but limited by slow dial-up internet speeds. The "killer apps" were messaging (AOL, MSN), Microsoft Word/PowerPoint/Excel and basic browsing to find information online.

Source: The Verge

Network-of-networks (internet is born)

Despite an increase in computing capacity, networks were still limited by their overall reach and by protocol standards. The standardization of network protocols (TCP/IP) was one of the most important steps in the development of the internet as we know it today. The internet originated from ARPANET, initially an initiative of the US Department of Defense to keep information distributed so it could survive an attack on a central node (information broker) in the network. The idea was developed further and led to universities establishing an early network, as depicted below.

Source: Wiki History of Internet

Over the years, ARPANET grew as more nodes and endpoints were connected to the network. Furthermore, multiple separate networks began to form, which raised the question of how they would all communicate with each other.

Source: Wikipedia

With the implementation of the TCP/IP protocol, a standard was established for communication across multiple networks. This meant that a single network-of-networks was formed, which came to be known as the internet. Although these networks could now communicate effectively with each other, this was still not the internet as we know it today, as there was no browsable web. In 1989, Tim Berners-Lee, a British researcher at CERN, developed a way to link hypertext documents into an information system that any member (node) of the network could access. This was the origin of the World Wide Web as we know it today, allowing us to navigate to websites.
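A minimal sketch of what that standardization means in practice: because every node speaks TCP/IP, a program on one network can open a connection to a server on another and request a hypertext document over HTTP. The snippet below uses Python's standard socket library; "example.com" is just a placeholder host.

```python
import socket

# Open a TCP connection (the standardized transport layer) to a web server
# and request a hypertext document over HTTP. "example.com" is a placeholder.
HOST, PORT = "example.com", 80

with socket.create_connection((HOST, PORT)) as conn:
    # A minimal HTTP/1.0-style request for the root document.
    request = f"GET / HTTP/1.0\r\nHost: {HOST}\r\n\r\n"
    conn.sendall(request.encode("ascii"))

    # Read the response (status line, headers and the HTML page) until
    # the server closes the connection.
    response = b""
    while chunk := conn.recv(4096):
        response += chunk

print(response.decode("utf-8", errors="replace")[:500])  # start of the page
```

The point of the sketch is that the two layers are separable: TCP/IP moves bytes between any two nodes on the network-of-networks, while HTTP and HTML (the web) describe what those bytes mean.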

The internet use-case of the early 90s was seen as web 1.0: websites were a one-way stream of information from content creators to static HTML pages. As web pages across the internet were now reachable (thanks to TCP/IP and internet browsers), users could visit whatever pages they wished. The problem was that although the internet was filling up with information and content, there needed to be some way to filter it and find what one was searching for. Google was one of the early companies that understood this problem, and effectively solved it by crawling and indexing the pages of the web and then ranking which pages best matched a given search phrase, based largely on the text of a page and the links pointing to it. As user-friendly tools emerged for web search and communication (AOL, MSN, e-mail), internet adoption grew rapidly. A key limitation that remained was that network speeds were not fast or reliable enough to send rich files such as audio or video.
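To give a feel for what "indexing and ranking" involves, here is a toy sketch of a search index (my own simplification for illustration, not how Google actually worked): it builds an inverted index over a few made-up pages and ranks results by how often the search terms appear in each page's text.

```python
# Toy search engine: build an inverted index over a few "pages" and rank
# results by simple term frequency. Page contents are made up for illustration.
from collections import defaultdict

pages = {
    "cars.html":   "ferrari speed highway car engine car",
    "cloud.html":  "server cloud computing capacity server network",
    "recipe.html": "pasta recipe tomato sauce dinner",
}

# Inverted index: word -> {page: number of occurrences}
index: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))
for url, text in pages.items():
    for word in text.split():
        index[word][url] += 1

def search(query: str) -> list[tuple[str, int]]:
    """Rank pages by how many times the query terms appear in their text."""
    scores: dict[str, int] = defaultdict(int)
    for word in query.lower().split():
        for url, count in index.get(word, {}).items():
            scores[url] += count
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(search("car speed"))   # [('cars.html', 3)]
print(search("server"))      # [('cloud.html', 2)]
```

Real search engines went much further, in particular by also weighting pages by the links pointing to them, but the crawl-index-rank loop is the same basic shape.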

The article continues here with part 3, which goes into our shift to web 2.0, where user-generated content becomes the norm. In part 4 we shift towards where we stand today with CDNs and how edge networks will drive machine-to-machine communication.
