Vinyl: Plays music, holds it

I don’t know about you guys, but my DVDs are lying in a pile on my floor, mixed in with laundry and magazines. I guess if you’re the type who likes to keep your precious optical media nice and tidy, this retro DVD holder should do the trick.

It is made out of records, so be prepared for a trip down memory lane!

The Early History of Programming Languages

Today I’ll give an overview of the early history of programming languages, and I’ll follow this post with others that explore more recent developments.  I’m going to intentionally leave out the people and focus on the languages in general terms — although the personalities involved make quite a story, too.

In the very early days of computing, the only “language” available consisted of native machine instructions, which were often “entered” by flipping switches and moving cables around.  Programmers had to know the numeric representation of each instruction, and they had to calculate addresses for data and execution paths by hand.  Can you say “brittle code”?

Sometime in the 1950s, someone got the bright idea of writing instructions in a human-readable form by using symbols for instructions and memory addresses.  They called this “assembly language”, because they ran the text through a utility called an “assembler” that translated the nearly-human-readable code into machine instructions.  Assembly language is often considered the second generation of computer languages.  Naturally, each type of processor has its own flavor of assembly language, corresponding to its unique instruction set and addressing capabilities.  Translating a program from one processor’s assembly to another can be quite a task, especially when it’s over 30,000 lines of code (sans comments) that exploit the idiosyncrasies of the original processor (I still haven’t let go of the pain).
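
To make that concrete, here’s a toy sketch in C of the kind of translation an assembler performs: symbolic mnemonics and addresses in, numeric machine words out. The mnemonics, opcodes, and instruction format below are all invented for illustration, not taken from any real processor.

```c
#include <stdio.h>
#include <string.h>

/* A hypothetical opcode table for an imaginary accumulator machine. */
struct op { const char *mnemonic; int opcode; };

static const struct op OPS[] = {
    { "LOAD",  0x01 },  /* load the accumulator from an address */
    { "ADD",   0x02 },  /* add the value at an address */
    { "STORE", 0x03 },  /* store the accumulator to an address */
};

/* Translate one symbolic line like "ADD 11" into a 16-bit machine word:
 * opcode in the high byte, address in the low byte. */
static int assemble(const char *line, unsigned *word)
{
    char mnemonic[16];
    unsigned address;

    if (sscanf(line, "%15s %u", mnemonic, &address) != 2)
        return -1;
    for (size_t i = 0; i < sizeof OPS / sizeof OPS[0]; i++) {
        if (strcmp(mnemonic, OPS[i].mnemonic) == 0) {
            *word = (unsigned)(OPS[i].opcode << 8) | (address & 0xFFu);
            return 0;
        }
    }
    return -1;  /* unknown mnemonic */
}

int main(void)
{
    const char *program[] = { "LOAD 10", "ADD 11", "STORE 12" };

    for (size_t i = 0; i < sizeof program / sizeof program[0]; i++) {
        unsigned word;
        if (assemble(program[i], &word) == 0)
            printf("%-8s -> 0x%04X\n", program[i], word);
    }
    return 0;
}
```

Real assemblers also resolve symbolic labels into addresses, but the basic job is this same bookkeeping that programmers once did by hand.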

The fifties also saw the rise of the third generation of programming languages, which sought to solve the machine-specific problem as well as to make programs even more understandable to humans.  Fortran and COBOL are both imperative languages (which means that they’re written in a sequential “do this, now do this” style) — and each tried in various ways to mimic human language, with the goal of eliminating the need for programmers.  Scientists could code Fortran, and business people could code COBOL — or so the grand vision ran.  Lisp, which was introduced at about the same time, never made any pretensions to human language.  It expresses abstractions in a largely functional form and allows code and data to be interchanged easily.  Thus, it appeals to mathematicians and cognitive scientists — and has proven to be a source of inspiration for other programming languages ever since.

Third-generation languages (aka “3GLs”) made it possible for businesses to create huge, complex applications that would remain in service for decades.  Soon it became obvious that a programming methodology that made code easier to understand and modify would provide a distinct advantage.  Thus, structured programming was born.  The GOTO statement became anathema.  Programmers were encouraged to write programs top-down, starting with the general processing steps and then breaking those down into smaller logical chunks, all called in a hierarchical fashion with clear entry and exit points.
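
For flavor, here’s a minimal sketch of that top-down style in C (the step names and values are invented): the general processing steps live in main, and each step is its own small routine with a single entry and a single exit, and not a GOTO in sight.

```c
#include <stdio.h>

/* Invented stand-ins for the "smaller logical chunks". */
static int  read_input(void)         { return 21; }              /* step 1: get a value   */
static int  process(int value)       { return value * 2; }       /* step 2: do the work   */
static void write_output(int result) { printf("%d\n", result); } /* step 3: report result */

int main(void)
{
    int value  = read_input();
    int result = process(value);

    write_output(result);   /* prints 42 */
    return 0;
}
```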

But it was dang hard to write structured code in Fortran IV (although you could if you really tried — I’ve even written structured assembly!).  COBOL, though it had the modular PERFORM paragraph construct, wasn’t quite up to the task either.  Then along came languages like ALGOL, Pascal, C, and Ada.  These languages let you define, within one source file, discrete functions that each have their own private data scope — encapsulating portions of the application and exposing only a limited interface via each function’s arguments and return value, so the innards of each piece can be modified without affecting the rest of the application.
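
Here’s a rough sketch of that encapsulation idea in C (the counter example and the names are mine, purely for illustration): the data is private to the source file, and the rest of the program can reach it only through a couple of functions.

```c
#include <stdio.h>

/* Private data: 'static' at file scope keeps it invisible outside this file. */
static int counter = 0;

/* The limited interface the rest of the application sees. */
void counter_increment(void) { counter++; }
int  counter_value(void)     { return counter; }

int main(void)
{
    counter_increment();
    counter_increment();
    printf("count = %d\n", counter_value());   /* prints: count = 2 */
    return 0;
}
```

Change how the count is stored and nothing outside those two functions has to know about it.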

In yet another attempt to eliminate programmers, quite a few companies tried their hand at what became known as fourth-generation languages (4GLs).  These languages sought to abstract business application development to the point where only the business rules needed to be specified.  In the ’80s and early ’90s many attempts were made to rewrite applications in a 4GL, most of which failed miserably, because real-world applications require exceptions to any rule, and unless you can easily get at lower layers of abstraction you can’t use a highly abstract language for all purposes.  Thus, 4GLs are really only suitable for specific problem domains, and over the years they have morphed into domain-specific languages (DSLs) or scripting languages for specific parts of applications, like VBA for MS Office or SQL for database access.

Back in the thriving land of 3GLs, software developers began to realize that collections of encapsulated functions could be reused between applications if they were made generic enough.  They developed “utility libraries”, and they’d sometimes even document the functions contained therein.  They also started noticing that some of their functions were closely related to one another — often operating on the same data.  Sometimes this led to combining the functions and adding a parameter to indicate which operation to perform — a crude way to aggregate methods.
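
A small sketch of that “one function plus an operation code” pattern in C (the bank-account example is invented): several related operations on the same data, folded into a single routine and selected by a parameter. It works, but you can see why grouping data and operations more formally started to look attractive.

```c
#include <stdio.h>

enum account_op { OP_DEPOSIT, OP_WITHDRAW, OP_BALANCE };

/* One routine standing in for several related functions, chosen by 'op'. */
static double account_do(enum account_op op, double amount)
{
    static double balance = 0.0;   /* the shared data they all operate on */

    switch (op) {
    case OP_DEPOSIT:  balance += amount; break;
    case OP_WITHDRAW: balance -= amount; break;
    case OP_BALANCE:  break;       /* just report the current balance */
    }
    return balance;
}

int main(void)
{
    account_do(OP_DEPOSIT, 100.0);
    account_do(OP_WITHDRAW, 25.0);
    printf("balance = %.2f\n", account_do(OP_BALANCE, 0.0));  /* prints 75.00 */
    return 0;
}
```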

The desire for reusability, the encapsulation of functions and data behind a controlled interface, and the logical grouping of functions around data — these were some of the ideas that led to the development of object-oriented programming and the languages that support it.  But let’s save that topic for a future post, along with the rise of scripting languages and the flowering of functional and dynamic programming.

Part 2: An introduction to object-oriented languages
Part 3: The ascent of scripting languages



MP3 Players: 10 Years of Digital Music

This month, exactly 10 years ago, the father of all MP3 players, the MPMan F10, was released.

Initially presented at the 1998 CeBIT exposition, the MPMan F10 featured 32 MB of memory, enough to hold around eight songs, and a tiny LCD screen. The gadget’s starting price was set at $250, and for an additional $69, you could get your hands on a 64 MB version.

Media players have certainly come a long way since then. Today, the highest-capacity iPod manufactured by Apple can hold up to 160 GB of music, or about 40,000 songs.

What will the future hold for MP3 players? Capacity has grown roughly 5,000-fold in those 10 years; if the next decade keeps up that pace, would future players be able to hold around 800 terabytes of data? And would you be able to fill it?

Wal-Mart Ends Test of Linux in Stores

A few months ago, Wal-Mart started selling $199 Everex “Green PCs” in about 600 of its stores throughout the U.S. Unfortunately, it seems that the project was a total failure and that interest in running penguin-powered systems among Wal-Mart customers just wasn’t there.

Computers that run the Linux operating system instead of Microsoft Corp.’s Windows didn’t attract enough attention from Wal-Mart customers, and the chain has stopped selling them in stores, a spokeswoman said Monday.

“This really wasn’t what our customers were looking for,” said Wal-Mart Stores Inc. spokeswoman Melissa O’Brien.


Study shows gamers get a thrill out of dying

By Mark O’Neill

Not being a gamer, I wouldn’t know myself. But according to Wired, a new study just out has shown that gamers get distressed when they shoot their opponents dead but get very happy when they themselves are killed.

According to the piece:

Ravajas (the study author) isn’t entirely sure why gamers feel this way, though he has theories. If we feel distress when we kill an in-game opponent, it may be because it violates our ingrained sense of morality; we know killing is bad, even when it’s virtual.

His much weirder experimental result, though, is our thrill at dying. Ravajas thinks this might occur because getting killed is “transient relief from engagement”: A first-person shooter is so incredibly stressful that we’re happy to get any respite, even if it requires being blown to pieces.

What do you think? When you get blown to pieces in a computer game, are you annoyed as hell or are you completely exhilarated that you’re being hacked to pieces by a two-headed monster with a bad breath problem and a bad attitude?

On the verge of creating synthetic life

“Can we create new life out of our digital universe?” asks Craig Venter. And his answer is, yes, and pretty soon. He walks the TED2008 audience through his latest research into “fourth-generation fuels” — biologically created fuels with CO2 as their feedstock. His talk covers the details of creating brand-new chromosomes using digital technology, the reasons why we would want to do this, and the bioethics of synthetic life.

Want to play God? Hold the Milky Way in the palm of your hand

I don’t usually fall for useless desktop accessories, but when I saw this 3D Living World model of the Milky Way, my brain started trying to convince me that I couldn’t live without one. Each 12 x 12 cm glass cube is created using real space data collected from Japan’s National Astronomical Observatory and holds 80,000 laser-etched stars. Unfortunately, the privilege of holding a galaxy in the palm of your hand comes at a very steep price: $770.

[Product Page]

When Dad wants to be your Facebook friend

By Mark O’Neill

It seems that American teenagers these days are terrified of logging onto Facebook and finding one thing.

Nope, it’s not finding out that they’ve been slaughtered by a ten-year-old at Scrabulous, but instead discovering that Dad has sent them a friend request!  Oh shock!  Horror!  How will you be able to show yourself in polite society ever again?

What about you?  Would you be mortified if your parents tried to befriend you on your favourite social network?  Or would a poke from them on Facebook be going one step too far?