An introduction to object oriented languages



By Sterling “Chip” Camden
Contributing Writer, [GAS]

Object-orientation can refer to a set of design principles, a programming style, or features of programming languages that support that style. Continuing from an earlier post on the history of programming languages, let’s next concentrate on the purpose and history of the languages that support OOP.

The purpose of object orientation is to model, in code, the objects that make up the application you’re writing and the interactions between them. As in human language, it’s impossible to describe any process without referring to the nouns that are involved. All programming languages provide some nouns, but until object-oriented languages arrived on the scene, the programmer couldn’t easily create their own nouns. Programming was limited to talking about the set of nouns provided by a language: numbers, characters, channels, etc. Of course, programmers built more abstract structures around this limited set of nouns, but the code that described those abstractions was much more complex than talking about them in English. Object-oriented languages allow you to define types of objects (called classes) that are derived from, or composed of, other types. In addition to this data component, the functions (also called methods) that “belong” to the data are also grouped in the class. This has at least three benefits:

  1. Encapsulation. Functions that are internal to a class can be marked as “private”. This means that they’re hidden from any code outside the class, so their implementation can be changed without bothering any code that uses the class. Conversely, the methods that are marked “public” form a well-defined interface that should not be changed without due consideration, because client code relies on it.
  2. Inheritance. You can derive one class from another, and the new class automatically contains all of the methods and data of the original class. This is useful when some subset of your objects needs an additional capability, but you don’t want to give that capability to all of the other objects.
  3. Polymorphism. Polly who? It’s a Greek-derived term that means “many forms”. In OOP, it means that sending the same message (in most OO languages, calling a method by name) may evoke different responses depending on the type of the receiver. Polymorphism itself comes in more than one form. The first is when a derived class overrides an inherited method with its own implementation, so that sending the same message to two different objects yields different behavior depending on their types. A second form is method overloading, sometimes called “ad-hoc polymorphism”, in which a class provides different implementations of a method depending on the types of the parameters passed to it. (The related term “parametric polymorphism” usually refers to something else: generic code, such as C++ templates or Java generics, that works uniformly across many types.)
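The three benefits above can be sketched in a few lines of Python (a language we’ll meet later in this post). The class and method names here are invented purely for illustration, and note that Python marks privacy only by convention, with a leading underscore, rather than with enforced access modifiers:

```python
class Account:
    def __init__(self, balance):
        self._balance = balance          # leading underscore: "private" by convention

    def _audit(self):                    # encapsulation: an internal helper that can
        pass                             # change without breaking any calling code

    def describe(self):                  # public interface: client code relies on this
        return f"Account with balance {self._balance}"

class SavingsAccount(Account):           # inheritance: gets Account's data and methods
    def describe(self):                  # polymorphism: overrides the inherited method
        return f"Savings account with balance {self._balance}"

# The same message, different behavior depending on the object's type:
for acct in (Account(100), SavingsAccount(200)):
    print(acct.describe())
```

Running this prints a different description for each object, even though both receive the identical `describe()` message.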

Have I lost you yet? Good.

Many people these days automatically think of Java when they hear “object-oriented language”, but Java was far from the first object-oriented language. That distinction belongs to Simula, which was developed back in the 1960s. But even though Simula introduced the concepts of object-orientation, the first language to be called “object-oriented” was Smalltalk — and it earned that moniker by making literally everything an instance of a class, even literals (a feature that Ruby picked up later).
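That “everything is an object” idea survives in languages like Python and Ruby. A quick Python illustration (the parentheses around the literal keep the dot from being read as a decimal point):

```python
# Even literals are instances of a class, and respond to method calls.
print(type(42))           # the integer literal's class
print((42).bit_length())  # a method call on a literal: 42 is 0b101010, six bits
print(type(type(42)))     # classes themselves are objects too
```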

Object-orientation made for good theory back in the eighties when I was first introduced to it, but it didn’t receive wide adoption until it became widely available as extensions to more popular languages like C (as C++) and Pascal (as Object Pascal). Borland helped to popularize both of these languages on the PC with their Turbo C++ and Turbo Pascal products. Since both C++ and Object Pascal are layered on top of non-OOP languages, they were often criticized by OO purists as “hybrids” — because it was still possible to write non-OO code in those languages.

When Java came along in the mid-1990s, it introduced a syntax similar to C++, but simpler in the object-orientation department. It also eliminated the “hybrid” problem (sort of) by forcing all routines to be members of a class — even the main routine. Unfortunately, this has led to the creation of gratuitous classes merely to enclose functions, and thus to an over-abundance of nouns in the code conversation. But neither is Java a pure OO language, because primitive types are not members of a class (although they can be coerced to objects). Because of these limitations, and the casting required by static typing, programming in Java often becomes an exercise in verbosity. C# is largely a Microsoft-centric variant of Java, though it has introduced many verbosity-reducing features, some of which have subsequently been copied by Java in what looks to me like an attempt to keep up with the Joneses.

Perhaps the best thing that Java did for object-oriented programming was to demonstrate by negative example that not every function belongs in a class. If one of the main goals of object-orientation is to model in code the objects and processes that comprise the application, then forcing an unnatural discussion of the actor for every action misses that goal. In English, we often describe a process by saying “first, do this… then, do that” without mentioning the implied subject (“you”). More recent languages like Python and Ruby give you the option of using object-oriented syntax, but they don’t force it down your throat. They are called “multi-paradigmatic” languages, because you can write code in an imperative, object-oriented, or functional style — and you can mix these styles as you like.
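Here is the same small job — summing the squares of some numbers — done in each of the three styles in Python, to make the “multi-paradigmatic” point concrete (the class name is mine, invented for the example):

```python
nums = [1, 2, 3, 4]

# Imperative style: "first do this, then do that", no explicit actor.
total = 0
for n in nums:
    total += n * n

# Object-oriented style: wrap the data and the behavior together in a class.
class SquareSummer:
    def __init__(self, values):
        self.values = values
    def total(self):
        return sum(v * v for v in self.values)

# Functional style: compose expressions, no mutation.
functional_total = sum(map(lambda n: n * n, nums))

print(total, SquareSummer(nums).total(), functional_total)  # 30 30 30
```

All three produce the same answer; the language doesn’t care which vocabulary you choose, or whether you mix them in one program.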

So it turns out that the “hybrid” nature of Object Pascal and C++, so criticized by OO purists, was actually a strength. As OOP has caught on, almost all programming languages have added capabilities to support it, including even Perl, PHP, Lisp, COBOL, and a language that I’ve worked on for many years, Synergy/DE. Because these languages weren’t originally OOP languages, they too are “hybrids”.

So object-orientation finds its natural place in the programmer’s vocabulary. It isn’t the be-all and end-all that it was originally advertised to be, but it can be quite useful. You need complex nouns in a hierarchy of abstraction to be able to describe things well. But you can’t be confined to using only one hierarchy of abstraction, nor to describing all actions in terms of the nouns involved.





19 Responses to An introduction to object oriented languages

  1. Pingback: links for 2008-04-05 -- Chip’s Quips

  2. I’ve been using Objective-C for a few years now, and love the “multiparadigmatic” nature of it. Gives the flexibility I need for performance while still making it convenient to take advantage of the most useful characteristics of objects.

  4. Pingback: The Early History of Programming Languages - Part 1

  5. Nice.
    I like that you mentioned the hybrid-ness of some languages and that purists frown on this.
    Would you all agree that when we are not writing a Game or “user-interface application” like a painting program or word processor that following a pure OO design is just not possible, or at least not feasible? In my experience, the number one culprit is the fact that entities are created and edited by admin users. If all I had to do was business, then my objects would never expose an attribute. However, I cannot have them display themselves on the 3 UIs I support, and some of their business logic results are useful for reports.

    -Joe

    • I think that a pure OO design is always possible, but not always desirable. Often, you end up talking about things that are irrelevant to the problem at hand — like constructing an ornate hierarchy of abstract classes that nobody will ever subclass again — in fact, they’ll probably have to dismantle that hierarchy completely as requirements change, because the abstractions are too specific and unidirectional.

      • I’m glad. I would appreciate any thoughts from someone “in the middle.” I find many OO developers are either purists or the other extreme…anemic object modelers (as Fowler would put it). I appreciate anyone putting real effort into finding a balance.
        -Joe

  7. Pingback: The ascent of scripting languages

  8. Pingback: Scripting Languages and the Web

  9. Pingback: An introduction to functional programming

  10. I usually find that these kinds of articles, where technologies like Objective-C, Smalltalk or Prolog are criticized, are written by people with no real working experience (I mean with systems that are used in production by users).
    Is the point of the article that real object technology (“pure”, as you said) is not worthwhile just because of some kind of noun explosion? Let me put it this way: where is the support for claiming that having more options is better than having one, just per se? More options = more things to learn = more time
