So, it’s HER fault! :)
I recently wrote about why Y2K was such a big deal. To summarize, one major reason was the cost of computer memory, and another was that programmers in the early days could not imagine that many of their programs would still be running decades later.
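To make the memory cost concrete, here is a minimal COBOL-style sketch (the program and field names are invented, purely for illustration). Storing a year in two digits instead of four saved storage on every record, at the price of ambiguity once the year 2000 arrived:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. Y2K-DEMO.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
      *> Two-digit year: cheaper per record, but "00" is ambiguous.
      *> Does it mean 1900 or 2000? That is the heart of the Y2K bug.
       01 EXPIRY-YEAR-SHORT  PIC 99   VALUE 0.
      *> Four-digit year: unambiguous, but costlier back when every
      *> character of storage mattered.
       01 EXPIRY-YEAR-FULL   PIC 9(4) VALUE 2000.
       PROCEDURE DIVISION.
           DISPLAY "TWO-DIGIT YEAR:  " EXPIRY-YEAR-SHORT.
           DISPLAY "FOUR-DIGIT YEAR: " EXPIRY-YEAR-FULL.
           STOP RUN.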
For a bit of background, all computer programs must eventually be converted to machine code, the set of instructions available to a given processor architecture, represented as strings of binary digits. Over the years, progressively higher levels of abstraction have become available, starting with assembly language, which is vastly more user-friendly (or at least less user-unfriendly...), though still tied to a specific processor architecture.
In practice, this meant that a given program would only be expected to work on a specific type of machine, and since the technology was moving quickly, there was no realistic expectation that a program would be useful for more than a few years.
Until Grace Hopper came along...
Hopper was one of the pioneers of computer programming and had an enormous influence on the evolution of programming languages. She was one of the first programmers of the Harvard Mark I (one of the earliest electromechanical computers), is credited with writing the first computer manual, and was part of the team that developed the UNIVAC I (the first general-purpose electronic digital computer for business applications in the United States).
While working on the UNIVAC I team, Hopper recommended developing a new programming language based entirely on English words, but received pushback because “computers don’t speak English”. By 1952 she had a working compiler, the A-0, and went on to work on some of the earliest compiler-based programming languages. While these languages were still quite complicated and cryptic, they were far simpler for humans to read and understand than raw machine instructions.
The breakthrough, however, came in 1959, when experts from industry and government met for a two-day conference known as the Conference on Data Systems Languages (CODASYL). Hopper was a technical consultant for the committee, and several of her former employees served on the short-term committee which defined the Common Business-Oriented Language (COBOL), which was partly based on FLOW-MATIC, designed by Hopper a few years earlier.
While calling Hopper the “mother of COBOL” would be going too far, her nickname “Grandma COBOL” is well-earned. She even has a “Google Doodle” celebrating her 107th birthday...
So, while computer memory was still ridiculously expensive, COBOL was far less dependent on a specific processor architecture than earlier languages, was designed specifically for business applications, and was strongly promoted by the US Department of Defense. It was also comparatively simple for humans to write and read, so more and more people started using it.
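To give a feel for that readability, here is a minimal, hypothetical COBOL program (the names are invented, not from any real system). Even someone who has never programmed can follow the business logic:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. INTEREST-DEMO.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01 PRINCIPAL      PIC 9(7)V99 VALUE 1000.00.
       01 INTEREST-RATE  PIC 9V9999  VALUE 0.0450.
       01 INTEREST       PIC 9(7)V99.
       PROCEDURE DIVISION.
      *> The arithmetic reads almost like an English sentence.
           MULTIPLY PRINCIPAL BY INTEREST-RATE GIVING INTEREST ROUNDED.
           DISPLAY "INTEREST DUE: " INTEREST.
           STOP RUN.

Compare that with the assembly-level code it replaced, which only specialists could read, and it is easy to see why adoption spread.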
In these early days, organizations in banking and financial services were among the few large enough to use computers extensively, required very high reliability, and were generally quite risk-averse (at least in technological terms). This resulted in both the propagation of COBOL and the development/evolution of mainframe computers.
In a 2022 survey of COBOL-connected architects, software engineers, development managers, and IT executives from 49 countries, 92 percent of respondents viewed COBOL as a strategic platform, and the amount of COBOL code in daily use was estimated at around 800 billion (with a “b”) lines.
Why is this “ancient” language still running?
For organizations of the type described above, the business case usually focuses on speed, stability, and reliability. This is the sweet spot of both COBOL and mainframe computers, which have been quietly evolving for decades.
It’s easy to picture computers evolving from massive mainframes, down to desktop computers, and then on to modern cloud computing built from racks of similar machines. In reality, mainframes and desktops evolved along separate paths, but very few people were paying attention to the mainframes chugging along in the background.
According to Ars Technica, an estimated 10,000 mainframes are currently in use, almost exclusively by the largest companies in the world (two-thirds of the Fortune 500, 45 of the top 50 banks, 8 of the top 10 insurers, 7 of the top 10 retailers, and 8 of the top 10 telecom companies). Just the sort of companies that need high volume, high speed, and high reliability.
Modern mainframes can have up to 240 server-grade CPUs, 40TB of error-correcting RAM, secondary storage measured in petabytes (i.e., thousands of terabytes), and 99.999% uptime (roughly five minutes of downtime per YEAR). All of the hardware is “hot-swappable”, meaning a failed component can simply be removed and replaced without rebooting. There is really no comparison: even the best modern Unix, Linux, or Windows servers are dangerously unstable next to a mainframe. Secondary storage follows the same pattern: you don’t plug in individual drives, but entire storage arrays, which are much faster and have far greater capacity.
So, picture a company that has been maintaining a retail or banking application for decades. Any bugs were corrected years ago, along with any performance tuning, so system reliability is near 100%. Would you replace it just because it’s old? High cost and high risk, for minimal benefit, so...
But we need new applications!!
Most organizations solved this by building interfaces to their legacy systems, which let them write new programs in whatever languages they wanted and plug them into the existing core (see the sketch below). That said, this is still an aging platform with practical limits, including the fact that there are relatively few COBOL programmers left.
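As a rough sketch of what the COBOL side of such an interface can look like (the program and field names here are invented, purely for illustration), the legacy logic is typically exposed as a callable subprogram with a fixed parameter layout, which newer front-ends then invoke through middleware:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. GETBAL.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
      *> A real system would read the balance from a database or
      *> VSAM file; a constant keeps this sketch self-contained.
       01 WS-DEMO-BALANCE  PIC S9(9)V99 VALUE 1234.56.
       LINKAGE SECTION.
      *> Fixed parameter layout that callers, written in whatever
      *> language the middleware supports, must match exactly.
       01 LS-ACCOUNT-ID    PIC X(10).
       01 LS-BALANCE       PIC S9(9)V99.
       PROCEDURE DIVISION USING LS-ACCOUNT-ID, LS-BALANCE.
           MOVE WS-DEMO-BALANCE TO LS-BALANCE.
           GOBACK.

New applications never touch the COBOL internals; they only need to respect the agreed parameter layout, which is what makes this pattern so durable.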
All is not lost, though, and AI may be able to help. As an example, IBM is promoting watsonx Code Assistant for Z, an AI-powered tool aimed at modernizing mainframe COBOL, but it’s still early days. Modern application and infrastructure scanners can also help, but the consensus seems to be that COBOL isn’t going anywhere quickly.
Grandma COBOL would be proud!
Cheers!