Introduction


In today's fast-paced society, IT technology has such a short lifespan that systems become obsolete in a year or two, creating a throw-away mentality and endless upgrade cycles at ever-increasing speed. And while I marvel at the latest technology with its possibilities and improvements, I have also grown fond of "the good old times". Could an "ancient" system from the year 1992 still do its job? Could less be even more?

Sparc Classic

Pictured left: Sparc Classic: 50 MHz MicroSparc processor, 96MB RAM, internal 2GB hard disk, 2x SBus slots at 20MHz, 10BaseT network, 8-bit audio, upgraded cgsix SBus graphics card, Solaris 9. Speed: 59.1 MIPS, 4.6 MFLOPS

While I have been working with servers and workstations from SUN Microsystems for years now, I still remember my fascination with UNIX as a teenager, at a time when UNIX workstations were state-of-the-art and thus unavailable to me. I admired the innovation, the solid engineering, and their superiority compared to PCs. In 2003, when I saw a Sparc Classic for sale, I bought it in a sentimental moment, giving the surprised owner $20 for what he thought was junk.

I liked the small size of the Sparc Classic, which is commonly referred to as the "lunchbox". What a great space-saving design, many years before today's success of mini barebone and cube PCs. But what would I use the system for?

SUN makes great efforts to ensure their operating system Solaris is backward compatible with older hardware, so I planned to install a late Solaris version and use the machine as a training system for Shell/Perl scripting and 'C' programming. Of course I expected some difficulties due to the unaltered, 12-year-old hardware, but to my surprise I managed to install the latest Solaris 9 OS, run CDE, and work almost flawlessly. In fact, it ran so well that I considered extending its use to serve as a web server for my homepage and to host the software I wrote. Now, I did say *almost* flawless, didn't I? Well, here are the challenges I encountered while working with an antique system:

Solaris 9
Pictured left: Sparc Classic: "df - k" on a 2GB harddisk with Solaris 9 (click to enlarge)

Modern operating systems have grown huge. Graphical user interfaces such as CDE and integrated Internet applications are now spread over several installation CDs or come on a DVD, easily filling more than a gigabyte of disk space after installation. Ancient systems like the Sparc Classic originally came with hard disks as small as 210MB. Mine came with a 2GB disk, which, along with the 8-bit cgsix graphics card, had been its only upgrade during its life.

After 512MB had been set aside for the swap partition, the remaining space had to be filled with carefully selected software packages, so that the system had enough room left for applications. Disk space was also an issue when I installed additional software such as the GNU C compiler gcc. The standard gcc package now includes a lot more than just the plain C and C++ compilers: its 92MB archive decompressed to 400MB on disk, too much for the Classic to handle. Luckily, a smaller package was still available. Also, modern operating systems start an enormous number of processes and daemons for caching, device control, and system management. They run fine on the latest systems (at least nobody much notices or cares about the wasted CPU cycles), but they would kill the Classic, competing for its limited resources.
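
If you need to thin those daemons out, Solaris 9 predates SMF, so services start from rc scripts. A common trick is to rename the capital "S" prefix to lowercase, so the script is skipped at boot but kept around for later; the service below is just an example pick:

    # Solaris 9 starts daemons from /etc/rc2.d at boot; renaming the
    # leading "S" to lowercase disables a service without deleting it.
    # (S90wbem, the WBEM management daemon, is only an example.)
    mv /etc/rc2.d/S90wbem /etc/rc2.d/s90wbem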

Solaris 9 running top
Pictured left: Sparc Classic: "top" on a 50MHz system with 96MB RAM (click to enlarge)

Finally, the age shows when running some modern applications that were not available during the heyday of the Classic. For example, file transfer with scp is awfully slow because the processor is so busy crunching numbers for the encryption. And just reading a longer man page meant a half-minute wait for the formatting process. For the man pages, I decided to spend some precious disk space and created preformatted versions with catman (shown below). For local file transfers, I fell back on good old ftp. Compiling modern software also requires a lot of patience; a full day went by while I compiled the latest Nmap and Nessus packages. But what really caused me headaches was an issue I encountered after bringing up the Apache web server. While it served static pages just fine, I saw an unbelievable 7-second delay when I tried my first .cgi, written in Perl. Unacceptable! I was ready to give up.
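
The catman step is a one-time trade of disk space for speed: run as root, it formats the whole on-line manual ahead of time, so man only has to print the stored text instead of running the formatter on a 50MHz CPU:

    # Preformat the on-line manual once (as root); man will then serve
    # the stored cat files instead of formatting pages on every view.
    catman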

But then I accepted the challenge and searched for alternatives: mod_perl, mod_jserv, and CGI programming in 'C'. While mod_perl showed no improvement at all, mod_jserv was a big surprise to me. I had not expected Java to be an option at all, but once the JVM was running, pages were delivered fast. Still, I thought hard about it and decided against it. First, the JVM is another process that permanently takes 40MB of memory. Although not all of it is resident, running it would only make sense under constant web load, which is rarely the case for a personal website.

Second, handling the complexity of a 'jserv' environment is also hard to justify. Instead, I opted to rewrite my Perl CGIs in 'C', and tests showed acceptable performance, even though much of the convenience of Perl and its modules was lost. I found Thomas Boutell's cgic library, which helps with basic CGI functions like reading environment variables and handling forms (a minimal sketch follows below). He provides his library for free use as long as his copyright notice appears under the credits, which I gladly do at the end of this article. Thanks, Thomas. For those of you who don't know, Thomas Boutell is also the author of the GD graphics library used in popular applications like MRTG. Programming applications in 'C' down at the library level isn't very easy. WebCert was particularly difficult for me, since the OpenSSL libraries are not very well documented. However, I have now learned firsthand what ASN.1 is and how an X509 certificate is built from scratch. Still, I wonder if this knowledge will help me one day and if it was worth the time...
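
To give an idea of what such a rewrite looks like, here is a minimal sketch of a form-handling CGI with cgic. The field name "visitor" and the output are made up for illustration; cgic supplies main() and parses the request before calling your cgiMain():

    /* hello.c - minimal cgic sketch; the "visitor" field is hypothetical */
    #include <stdio.h>
    #include <string.h>
    #include "cgic.h"

    int cgiMain() {
        char visitor[81];

        /* fetch one text field from the form, falling back to a default */
        if (cgiFormString("visitor", visitor, sizeof(visitor)) != cgiFormSuccess) {
            strcpy(visitor, "stranger");
        }

        cgiHeaderContentType("text/html");
        fprintf(cgiOut, "<html><body>Hello, %s!</body></html>\n", visitor);
        return 0;
    }

Since cgic is distributed as a single cgic.c, it compiles straight into the program (e.g. "gcc -o hello.cgi hello.c cgic.c"), and the resulting binary answers requests without any interpreter start-up cost.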

I asked myself why I went through all this effort instead of just going out and buying a brand-new system. Trying to answer, I think it's a mixture of reasons: nostalgia, the challenge it presents, and the experience that often in life, an abundance of resources does not encourage using them to their full potential. Working on an ancient system also has its benefits: rock-solid hardware and a software installation of manageable size. Plus, the satisfaction of demonstrating my skills is of course priceless. I plan to keep the system for some years to come; I have already extended its use by implementing an NTP time server with a locally attached radio receiver that synchronizes time with the atomic clock in Fort Collins, Colorado (a configuration sketch follows below).
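
For the curious, such a radio-clock setup on Solaris 9 comes down to a few lines in /etc/inet/ntp.conf. The refclock driver number depends on the receiver model; type 4 (Spectracom WWVB) below is only an assumed example:

    # /etc/inet/ntp.conf - hypothetical sketch; pick the driver for your receiver
    server    127.127.4.0               # refclock driver type 4: Spectracom WWVB
    fudge     127.127.4.0 time1 0.0     # per-receiver offset calibration
    driftfile /var/ntp/ntp.drift        # remembers the local clock's drift rate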

I hope I could inspire you to take a second look at an old computer system, and to be thankful for the engineers who developed great, lasting hardware and software that became legends in the industry. What do we really do with all the gigahertz of power we have available now? And shouldn't what we build last longer than just a season?

Credits and Copyrights:

CGIC, copyright 1996, 1997, 1998, 1999, 2000, 2001, 2002 by Thomas Boutell and Boutell.Com, Inc.