Today we’ll survey the early history of programming languages, and I’ll follow this post with others that explore more recent developments. I’m going to intentionally leave out people and focus on the languages in general terms – although the personalities involved make quite a story, too.
In the very early days of computing, the only “language” employed consisted of native machine instructions, which were often “entered” by flipping switches and moving cables around. Programmers had to know the numeric representation of each instruction, and they had to calculate addresses for data and execution paths by hand. Can you say “brittle code?”
Sometime in the 1950s someone got the bright idea of writing instructions in a human-readable form by using symbols for instructions and memory addresses. They called this “assembly language”, because they ran this text through a utility called an “assembler” that translated the nearly-human-readable code into machine instructions. Assembly language is often considered the second generation of computer languages. Naturally, each type of processor has its own flavor of assembly language corresponding to its unique instruction set and addressing capabilities. Translating a program from one processor’s assembly to another can be quite a task, especially if it’s over 30,000 lines of code (sans comments) that make use of the idiosyncrasies of the source processor (I still haven’t let go of the pain).
The fifties also saw the rise of the third generation of programming languages, which sought to solve the machine-specific problem as well as to make programs even more understandable to humans. Fortran and COBOL are both imperative languages (which means that they’re written in a sequential “do this, now do this” style) — and each tried in various ways to mimic human language with the goal of eliminating the need for programmers. Scientists could code Fortran, and business people could code COBOL — or so the grand vision ran. Lisp, which was introduced at about the same time, never made any pretensions to human language. It expresses abstractions in functional form and allows code and data to be interchanged easily. Thus, it appeals to mathematicians and cognitive scientists — and it has proven a source of inspiration for other programming languages ever since.
Third generation languages (aka “3GLs”) made it possible for businesses to create huge, complex applications that would remain in service for decades. Soon it became obvious that a programming methodology that made code easier to understand and modify would provide a distinct advantage. Thus, structured programming was born. The GOTO statement became anathema. Programmers were encouraged to write programs top-down, starting with the general processing steps and then breaking those down into smaller logical chunks, all called in a hierarchical fashion with clear entry/exit points.
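As a concrete sketch of that top-down style, here is a small, hypothetical payroll calculation in C (the function names, the overtime rule, and the flat tax rate are all illustrative, not anyone’s real payroll logic):

```c
/* Lowest-level logical chunks: each does one thing and has a
   single entry point and a single exit point -- no GOTO needed. */
static double gross_pay(double hours, double rate) {
    double pay = hours * rate;
    if (hours > 40.0)                     /* overtime at time-and-a-half */
        pay += (hours - 40.0) * rate * 0.5;
    return pay;
}

static double tax_due(double gross) {
    return gross * 0.20;                  /* flat 20% tax, purely for illustration */
}

/* The general processing step, broken down into the smaller
   chunks above and calling them in a clear hierarchy. */
double net_pay(double hours, double rate) {
    double gross = gross_pay(hours, rate);
    return gross - tax_due(gross);
}
```

Each function can be read, tested, and modified in isolation, which is exactly the property structured programming was after.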
But it was dang hard to write structured code in Fortran IV (although you could if you really tried – I’ve even written structured assembly!). COBOL, though it had the modular PERFORM paragraph construct, wasn’t quite up to the task either. Then along came languages like ALGOL, Pascal, C, and Ada. These languages provided the ability to define, within one source file, discrete functions that each have their own private data scope – encapsulating portions of the application and exposing only a limited interface via each function’s arguments and return value, so the innards of each can be modified without affecting the rest of the application.
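In C, that encapsulation might look like the following sketch (the example itself is hypothetical): the working data lives only inside the function, and callers see nothing but the arguments and the return value.

```c
/* The working data of average() is private to the function: callers
   cannot see or corrupt it, and the function's innards can change
   freely as long as the interface (arguments in, result out) stays put. */
double average(const double *values, int count) {
    double total = 0.0;               /* private working data */
    for (int i = 0; i < count; i++)
        total += values[i];
    return count > 0 ? total / count : 0.0;
}
```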
In yet another attempt to eliminate programmers, quite a few companies tried their hand at what became known as fourth-generation languages (4GLs). These languages sought to abstract business application development to the point where only the business rules needed to be specified. In the ’80s and early ’90s many attempts were made to rewrite applications in a 4GL, and most failed miserably: real-world applications require exceptions to any rule, and unless you can easily get at lower layers of abstraction you can’t use a highly abstract language for all purposes. Thus, 4GLs are really only suitable for specific problem domains, and over the years they have morphed into DSLs or scripting languages for specific parts of applications, like VBA for MS Office or SQL for database access.
Back in the thriving land of 3GLs, software developers began to realize that collections of encapsulated functions could become reusable across applications if they were made generic enough. They developed “utility libraries”, and they’d sometimes even document the functions contained therein. They also started noticing that some of their functions were closely related to one another — often operating on the same data. Sometimes this led to combining the functions and adding a parameter to indicate which operation to perform — a crude way to aggregate methods.
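That operation-selector trick might look something like this in C; the stack example and all of its names are hypothetical:

```c
/* Three closely related operations on the same data, folded into one
   function with a parameter that selects the operation -- the crude
   aggregation of methods described above. */
enum stack_op { STACK_PUSH, STACK_POP, STACK_TOP };

#define STACK_MAX 16
static int stack_data[STACK_MAX];   /* the shared data every operation touches */
static int stack_count = 0;

int stack(enum stack_op op, int value) {
    switch (op) {
    case STACK_PUSH:
        if (stack_count < STACK_MAX)
            stack_data[stack_count++] = value;
        return stack_count;           /* new depth */
    case STACK_POP:
        return stack_count > 0 ? stack_data[--stack_count] : -1;
    case STACK_TOP:
        return stack_count > 0 ? stack_data[stack_count - 1] : -1;
    }
    return -1;                        /* unreachable; keeps compilers quiet */
}
```

Grouping operations around the data they share like this is only one conceptual step away from an object with methods.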
The desire for reusability, the encapsulation of functions and data behind a controlled interface, and the logical grouping of functions around data – these were some of the ideas that led to the development of object-oriented programming and the languages that support it. But let’s save that topic for a future post, along with the rise of scripting languages and the flowering of functional and dynamic programming.