Techies today don't know they're born.
It is hard for me personally to imagine life without cloud services, but that is because I am a relatively junior techie who got into the industry when cloud services were just starting to be widely adopted. For many of my team members that is not the case. I am constantly hearing from them about how much easier and more efficient it is to use cloud services compared to the old-school way of buying your own servers. I cannot fully appreciate this ease of use, as I have never experienced life without it. So I sat down with my coworkers and we had a discussion about what life was like for them before cloud services.
Photo credit: Rezaid
Provisioning a development environment on the cloud is relatively easy: you can have a complete environment up and running in a few minutes just by running a few scripts. With an AWS account, for example, provisioning an EC2 instance mostly comes down to making decisions about instance size, AMI type, security groups, NACLs, VPCs, subnets, route tables, DHCP options and so on. Creating a development environment without the cloud, by contrast, is a difficult task that can cost a great deal of money. Talking to some of my older friends and colleagues, I am amazed at how much they used to have to know to get anything working.
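To make that concrete, here is a minimal Terraform sketch of those decisions. It is an illustration rather than a recipe: the region, AMI ID, CIDR ranges and resource names are all placeholder assumptions.

```hcl
# A minimal dev-environment sketch: one VPC, one subnet,
# one security group and one EC2 instance.
provider "aws" {
  region = "eu-west-1" # assumed region
}

resource "aws_vpc" "dev" {
  cidr_block = "10.0.0.0/16" # placeholder address range
}

resource "aws_subnet" "dev" {
  vpc_id     = aws_vpc.dev.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_security_group" "dev" {
  vpc_id = aws_vpc.dev.id

  # SSH access only, and only from an assumed office range.
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["203.0.113.0/24"]
  }
}

resource "aws_instance" "dev" {
  ami                    = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type          = "t3.micro"              # the "instance size" decision
  subnet_id              = aws_subnet.dev.id
  vpc_security_group_ids = [aws_security_group.dev.id]
}
```

Each block corresponds to one of the decisions above, and the whole thing applies in minutes.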
On the server and operating system side, when they wanted to provision a new server they first had to place an order with a hardware vendor. Even before that stage, the development team would have had to draw up detailed plans for the new server: which model to buy, which options to have installed up front, and which options they might need to purchase later over the projected lifetime of the machine.
Once the purchase was made, the vendor would deliver the server to the team, which often took a couple of weeks or more. Compare that to provisioning an environment on AWS, which takes a matter of minutes.
Once the hardware arrived, an engineering team had to install it in a rack in a server room. The server then had to be connected to multiple power supplies for resilience, and likewise to multiple network connections. The network connections would be configured by a separate network team, typically a group of gurus in Cisco network management. With the hardware racked and connected came the task of installing the operating system, a process that evolved from various tape formats, through CDs and DVDs, to installing from a network install server.
With the OS installed came the task of configuring multiple parameters to get the new server talking to other systems on the network, parameters that seemed to be variations on a theme from one organisation to another. Standardisation was a very slow process, not helped by there being multiple vendors of incompatible hardware and operating systems. Installing applications meant obtaining the software on suitable distribution media; for the sake of discussion let's assume CD or DVD. The software package would come with detailed release notes specifying various OS parameters that needed to be configured in the kernel or the network stack, and all of these had to be set manually.
Eventually software packaging systems came along and installation was reduced to installing a few packages plus their dependencies. These evolved into package management systems that understood package dependencies and had access to online repositories of operating system packages. This all sounds like a lot of work, demanding a deep understanding of how the details fit together. I most certainly appreciate cloud services after learning about this ordeal.
With cloud services I can just run my Terraform scripts and have my system up and running in a few minutes, with the applications, all their software dependencies and the network configured and ready to use. If I want to change something I just modify my code and run it again. The main decision is whether to create a new instance or to change the existing one.
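Terraform even makes that decision visible before anything happens: a plan reports whether a change can be applied in place or whether it forces a replacement. A hedged sketch, reusing the hypothetical instance from the earlier example:

```hcl
resource "aws_instance" "dev" {
  # Changing the instance type is applied in place:
  # the instance is stopped, resized and restarted.
  instance_type = "t3.small" # was "t3.micro"

  # Changing the AMI cannot be done in place, so the plan
  # marks it "forces replacement": destroy, then recreate.
  ami = "ami-0fedcba9876543210" # placeholder AMI ID
}
```

Running terraform plan before terraform apply tells you which of the two you are about to get.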
All this is thanks to virtualisation and the way the bottom layer of the entire software stack has moved from hardware into software. The hardware layer still exists, but it is no longer managed by me; it is managed by the dark arts of a hypervisor layer that sits between the real hardware and the virtualised 'hardware' we now deploy our OSes onto. As an engineer, I don't need to know as much about the OS as the old school did; I only need to understand enough to get my application running. If something goes wrong it is often quicker and easier to tear the instance down and replace it with a new one, and I can build that replacement process into my Infrastructure as Code scripts to minimise interruptions to my application's service.
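In Terraform, one way to make that replacement less disruptive is the create_before_destroy lifecycle setting, which stands up the new instance before tearing down the old one. Again a minimal sketch on the same hypothetical instance:

```hcl
resource "aws_instance" "dev" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"

  lifecycle {
    # Build the replacement first, then destroy the old
    # instance, so a forced replacement means less downtime.
    create_before_destroy = true
  }
}
```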
Considering application development, I have at my disposal all sorts of modules for routine functionality that I can incorporate into my code, whether I am coding in Java, Python, Scala, Go or any other language. My older software developer coworkers often had to write everything from scratch. At times they would reuse their own code, which they would package into libraries, or use libraries written by others within their organisation.
Because CPU speeds and RAM configurations were minuscule compared to what we have now, engineers had to choose algorithms and write code to be as frugal as possible with CPU clock cycles and memory. One old techie friend even talked about a technical interview that turned on reducing the number of CPU clock cycles in a loop by two! This was when CPU speeds were below 100 MHz, so a two-cycle saving could make a significant difference. I can't imagine having to be that concerned about anything I write, with CPUs running at multiple GHz and easy access to double-figure CPU counts. I have never had to look at the assembler output of the compilation process and wonder how to make it run faster, because code optimisers do such a good job that they are probably better than almost any coder now (in fact I had to read up on the compilation process and optimisers to understand what these older techies were actually talking about!)
Before git, engineers used systems with names like Synergy, Perforce or ClearCase, and checking code in or out might take long enough to go for a coffee or even lunch. Builds of a software system would be scheduled to run overnight, with the results of any build failures available in the morning. If the build worked, groups of testers would then run a barrage of tests on the components and the integrated system. With its far more rapid and flexible code management, git gave rise to automated testing that was triggered as soon as code was checked in. Testing now happens continuously, including integration testing, in what we call CI/CD systems. Couple CI/CD with virtualisation and Infrastructure as Code tools such as Terraform or CloudFormation, and the infrastructure itself is now stored under version control, so any infrastructure change can be subjected to the same CI/CD treatment and tested automatically.
The journey doesn’t stop here though. While I still have to know enough about Linux OS to get my Python and Scala code running I am learning about serverless which seem to be just a container with my application plus dependencies bundled together and submitted to run out there in the cloud somewhere without me having to know anything about the OS at all. I am also learning of developments where some serverless functions can get executed in the CDNs to bring some processing nearer to the consumers of the service of web-scale apps. What are CDNs I hear you ask. Well the acronym is for Content Delivery Network and is basically some caching servers hosted all around in ISPs equipment racks. The CDN caches static content, things like images used on websites for example but these caching servers are also just commodity computer hardware running an OS and some applications, in particular some web caching apps. As I have learnt, back when Moore’s Law was following the standard trajectory a doubling of CPU speed every 18 months, back then network latency wasn’t a big issue for apps whereas now that apps span the globe with millions of users concurrently interacting and demands for media files in real time .
This was a lot for me to take in, but the journey of continuous development never ends. I appreciate the beauty of cloud services all the more now.