The Early Days: JCL
At least as early as 1964, programmers realized that having to do everything in code, including copying files, was too tedious. So IBM introduced a “job control language” (JCL) for OS/360. In this language, you could copy a file from one location to another using only nine lines (punched on cards, of course), compared to writing a far more complex Fortran program. JCL was also used to run programs in batch mode, specify their parameters, and branch based on their results.
I remember when, as a lowly operator on a Data General Eclipse system, I needed to load the contents of a tape into a file on an IBM 4331 in another department. I strolled over to their operations window with the tape in hand. When I finally got an operator’s attention, I asked him to “please run this tape into a disk file for me.” He stared at me and said, “Where’s your JCL?” I replied, “JC who? On the DG, I can just enter a command on the console to do that.” He looked smug and said, “That’s impossible. You can’t read a tape without JCL.” So, I had to research the required JCL and punch the cards, which goes to show how command line interpreters on minicomputers had already advanced beyond mainframe batch jobs by the late seventies.
Scripting Command Line Interpreters
Command line interpreters, such as DG’s CLI, DEC’s MCR and DCL, and more famously Unix’s Bourne shell, not only accepted simple commands to perform (what at the time seemed like) complex operations, but also provided a way to execute a series of those commands stored in a file. DG called them “macros”, DEC called them “command files”, and Unix called them “shell scripts”. These early scripting languages also provided local variables and flow-control mechanisms, so it wasn’t long before they began to be used for even more complex operations that had previously required compiled programs.
When I was first introduced to Unix systems back in 1984, I was thrilled with the C language, the Bourne shell, and powerful utilities like awk and sed. But there was one thing that bothered me: their syntaxes were similar, but not identical — and the overlap in problem domain was significant. I thought, wouldn’t it be cool if you could have one language that addressed the combined problem domain in a consistent syntax? But I was too busy to do anything about it.
A few years later, Larry Wall found the time to do something about it: the Perl language. Perl combines the ease of shell scripting with powerful features borrowed from C and various Unix utilities. Immediate and easy access to regular expressions, lists, associative arrays, the ability to treat any value as a string, no need for scaffolding (not even a “main()” declaration), automatic resource management, eval (the ability to execute a string as code at runtime), and the ability to combine different programming paradigms freely are some of the language’s strengths that combine to make “easy things easy and hard things possible.” And you can choose from a number of ways to do the same thing, a principle known as TIMTOWTDI (There Is More Than One Way To Do It).
Object-orientation can be a good approach for managing complexity, but languages that force the use of object-oriented notation usually add unnecessary complexity to almost every project. When Perl added support for objects, it did so in an optional way that can be freely mixed with non-OO code so that complex modeling can be achieved when needed, but otherwise not required. The syntax for objects was initially a bit of a kludge, but that seems sufficiently remedied in Perl 6.
About the same time (the late 1980s), John Ousterhout created the Tcl scripting language. It’s an interesting functional language in its own right, but its enduring contribution to scripting languages has been Tk, a framework for building cross-platform graphical user interfaces that can be used from within many popular scripting languages.
Guido van Rossum was also at work in the late 1980s on a new scripting language: Python. Unlike Perl, Python’s philosophy is that there should be only one obvious way to do something. This philosophical rigidity is reflected in the language’s syntax. Rather than curly braces, statements are grouped by indentation (significant whitespace). Rather than a semicolon, statements are terminated by the end of the line. These rules may prevent clever programmers from writing code that is easily misunderstood, but they also limit flexibility of coding style.
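A minimal sketch of those two rules in action (the function name here is just illustrative):

```python
def classify(n):
    # The if/else bodies are grouped by indentation, not braces,
    # and each statement ends at the end of the line; no semicolons.
    if n % 2 == 0:
        return "even"
    else:
        return "odd"

print(classify(4))  # even
print(classify(7))  # odd
```

Misindent a line in that function and the interpreter rejects it outright, which is exactly the kind of enforced uniformity Python is after.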
On the other hand, Python does not limit the choice of programming paradigm. It readily supports structured, functional, object-oriented, and aspect-oriented programming — and allows you to mix those styles.
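As a sketch of that mixing (the names here are illustrative, not from any library), the same list can be processed in a functional style and then fed to an object in a plain imperative loop:

```python
from functools import reduce

# Object-oriented style: a small class holding state.
class Counter:
    def __init__(self):
        self.total = 0

    def add(self, n):
        self.total += n
        return self

nums = [1, 2, 3, 4, 5]

# Functional style: a comprehension plus a reduce.
evens_squared = [n * n for n in nums if n % 2 == 0]   # [4, 16]
total = reduce(lambda a, b: a + b, evens_squared)     # 20

# Structured/imperative style: an ordinary loop driving the object.
c = Counter()
for n in nums:
    c.add(n)
print(c.total)  # 15
```

All three styles coexist in one short program without any ceremony to switch between them.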
Python gave us the term duck typing (although some earlier languages also provided forms of this). If a class provides the methods expected for an interface, then it meets the requirements for that interface, whether or not it is derived from any base class of that interface. The name comes from the duck test: “if it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck”. In Python, if an object implements the methods we want to call, then it qualifies as a receiver. This seemingly simplistic form of typing enables polymorphism without requiring inheritance, eliminating the need for complex inheritance hierarchies.
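A minimal illustration, using two hypothetical classes: Duck and Person share no base class, but both provide quack(), so either is an acceptable receiver.

```python
class Duck:
    def quack(self):
        return "Quack!"

class Person:
    def quack(self):
        return "I'm quacking!"

def make_it_quack(obj):
    # No isinstance check and no shared base class:
    # any object with a quack() method qualifies.
    return obj.quack()

print(make_it_quack(Duck()))    # Quack!
print(make_it_quack(Person()))  # I'm quacking!
```

Passing an object without a quack() method would fail at call time with an AttributeError, which is the whole bargain of duck typing: flexibility up front, errors deferred to runtime.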
I may not be qualified to write objectively about Ruby, because it’s currently my favorite programming language. Yukihiro Matsumoto, affectionately known to the Ruby community as “Matz”, released Ruby to the public in 1995. His design philosophy focuses on creating a language for human programmers rather than for the machine. The result draws largely on Perl and Smalltalk, with nods to other languages. It follows POLS (the “Principle of Least Surprise”): even where it may be minutely inconsistent, it usually does what the programmer expects. Ruby also embraces Perl’s concept of TIMTOWTDI; you can probably find eight or more ways to perform even the simplest operation.
Ruby is one of the more thoroughly object-oriented languages around. Almost anything can be treated as an object, even literals, which provides amazing expressiveness. Nevertheless, you aren’t required to use classes explicitly unless you want to. If you define a function that isn’t in a class, it actually extends the global Object class. You don’t have to be aware of that, and yet you can also use that fact to your advantage.
Ruby uses duck typing, like Python. Ruby also allows any class to be reopened and extended. For instance, you can add your own methods to String and Integer, or override the ones that are predefined. This makes most OO purists wet their pants. Potentially it’s a truly dangerous capability, but in the right hands at the right time it can be amazingly powerful.
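Ruby lets you reopen String itself; Python offers only a rough analog via monkey patching, and only for user-defined classes, not built-ins like str. A sketch with a hypothetical Greeting class:

```python
class Greeting:
    def __init__(self, text):
        self.text = text

# "Reopen" the class after the fact by attaching a new method.
# (In Ruby you could do the same to the built-in String class.)
def shout(self):
    return self.text.upper() + "!"

Greeting.shout = shout

print(Greeting("hello").shout())  # HELLO!
```

Every existing and future Greeting instance picks up the new method immediately, which hints at both the power and the danger: the change is global to the class.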
Because of its great flexibility, Ruby is inherently multi-paradigmatic. It’s possible to write Ruby code that reads like Lisp, Java, or even Pascal. You can tune for readability, performance, minimum code, cleverness, or just about any other emphasis you’d like to achieve.
And since we’ve been talking about Ruby, now might be a good time to throw in a continuation (pardon the pun).
To be continued…
There is so much more ground to cover, and I’ve probably already exhausted your attention span. So, I’ll continue this thread in a later post on the use of scripting languages and the web. I have also neglected to talk about scripting languages embedded in applications. There are many scripting languages that I won’t be able to cover, and many distinguishing features and capabilities that I won’t have space to discuss. But I hope you’ll find this overview helpful.
This post is part three of a series on the history of programming languages. For the first two parts, see: