Martin's Computing Experiences

School

I wrote my first program (which simulated throwing a 3-sided die) soon after going to "big" school, at age 11. It was run by my teacher at the local technical college - batch processing! Fortunately my school soon became ¼-owner of this computer - a DEC PDP-8/e - so interactive computing was opened up to me, albeit on an ASR-33 Teletype! As I entered the "6th form" (at age 16), the school started teaching computing, and upgraded to a "Research Machines 380Z", a Z80-based system which had the best "software front panel" that I have (still) ever seen. Pupils were taught the ICL-CES language CESIL (mentioned here) as a stylized assembly-level language, but the system for our computer was very primitive. It required a custom keyboard, which was very expensive (~£100 was a lot of money in those days...). The classic "I could do better" feeling kicked in, and I implemented a replacement CESIL system in BASIC, and so called it "BASIL". It predicted the user's input to minimise hunt-and-peck keystrokes, and thus avoided the custom keyboard. The school gave BASIL to Research Machines (unfortunately they dropped the name...) in return for a dual 8" floppy disk and CP/M upgrade, and we came one step closer to real computing! (I heard that the company making the custom keyboards went bust!)

I designed and built two computers around this time: the first was based on the Z80, and had 1 Kbyte of RAM and a hardware front panel - mostly because I didn't have access to an EPROM programmer, and so had no other way to boot it! The second was a 'proper' computer, based on the 6809, and having 64 Kbytes of DRAM - by this time I could program a homebrew battery-backed-up pseudo PROM using my (less powerful!) BBC Micro. However, bootstrapping a usable software environment on the 6809 proved to be a much bigger task than getting the hardware running!

University

Going to the University of York to study Maths, I got access to big computers for the first time - they had hard disks - gosh! I also learnt my first high-level programming language - ALGOL-60 (BASIC didn't count in those days - does it now? :-). I did two computing projects during my maths degree: a sputtering simulation in ALGOL-60, using a novel self-adjusting Runge-Kutta numerical integration algorithm, and the symbolic calculation of the space-time curvature tensor near a spinning (axially symmetric) black hole, using REDUCE.

Research

Finishing my maths degree in 1983, I moved to the Software Technology Research Centre at the Department of Computer Science to do a D.Phil looking at the implementation of inheritance in object-oriented systems. This was before C++ existed, and everyone used to ask "what does 'Object-Oriented' mean?". I became particularly interested in strongly-typed object-oriented languages (as now typified by C++), and, later on, in their relationship with strongly-typed functional languages (such as Standard ML).

In my spare time I was also working on the first reasonably cheap CD-quality hard-disk sound recording system - part of the Composers' Desktop Project (CDP). The system ran on Atari ST computers (520, 1040, TT and later, the Falcon), used 'gigantic' SCSI disks (i.e., 80 Mbytes, or more!), and attached to Sony PCM digital recording equipment using a custom interface plugged into the computer's cartridge port. I wrote the sound filing system, the device driver for the custom interface, and various utilities, and provided technical support to the project for 12 years.

In 1987 I spent 6 months at the IBM Thomas J. Watson Research Center, adding inheritance to the AML/X language. Here I also became interested in "destroy methods", leading to a paper with Lee Nackman - see publications. This provided some practical experience, and much-needed impetus to my D.Phil work. It was also great fun to be at IBM, and to live close to New York City!

Returning to York, I worked on the Ten15 system developed by Michael Foster, Ian Currie, Philip Core, et al, at the Royal Signals and Radar Establishment (now QinetiQ). Like Microsoft's .NET framework, Ten15 was a virtual machine intended to support the interworking of a number of high-level languages. Unlike .NET, the virtual machine wasn't designed as the union of all the features in all the languages that were of interest; rather it was a strongly-typed higher-order eager-evaluation lambda calculus, which by its generality could automatically support any typed imperative language up to, and well beyond, Standard ML, and anything that could be mapped into any of these. Ten15's type system included explicit, bounded universal and existential polymorphism, only approached in expressibility by Cardelli's F<: and Quest, and encompassed persistence and remote objects. Some novel features, such as unique types, and a solution to the problem of typing updateable values, never, to my knowledge, even got published. (Until now, that is - if you are interested, see my unofficial Ten15 homepage: An Introduction to Ten15.)

Unfortunately, the one language Ten15 could not support was C, since C is not adequately typed, and this - around the height of C's dominance - was a big reason for Ten15's downfall. In a desperate bid to get something out of Ten15, it became the foundation for TDF, which was proposed to, and adopted by, the Open Software Foundation for their ANDF technology. However, most of the nice features of Ten15 (including all the type stuff!) were lost in the transition, and who now uses TDF/ANDF (except the valiant people at TenDRA.org)? :-(

However, one good thing came out of it all - my thesis was finished in 1989, and ended up being mostly about how to implement object-oriented languages in Ten15.

Independent Consultant

I left the University to become an independent consultant, and worked on a real-time networked livestock auction system (operating well before the web existed!); and database applications, including an ODBC driver that added calculated columns to those coming from a conventional data source, in order to make it easier to write complex reports.

During this period I also did a lot more work for the CDP, writing a portable version of the sound filing system which used normal files in standard ".wav" and ".aiff" formats. This enabled the porting of the whole CDP system (over 100 sound transformation programs) to SGI workstations, DOS/Windows PCs, and Linux.

Northern Real-Time Technologies Limited

My association with real-time started in 1995, when I founded Northern Real-Time Technologies Limited (NRTT) with former colleagues from the University of York. Here I co-authored two "hard" real-time operating systems. They were static-priority pre-emptive kernels, and used the theory of deadline-monotonic scheduling to provide absolute guarantees that deadlines would be met. I also co-authored the Volcano real-time communications infrastructure for CAN bus, which provided similar guarantees for all network communications. All this was developed for the Volvo S80, and we had to work with Volvo's suppliers to incorporate Volcano into their systems. I presented training courses for our software to Volvo and their suppliers, both in Sweden and in Japan. I left the company in 1997, and since then, NRTT has evolved into LiveDevices, and Volcano is now marketed by Volcano Communication Technologies.
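(The "absolute guarantees" come from analysing each task's worst-case response time offline. A minimal Python sketch of the standard fixed-priority response-time recurrence is below - this is the textbook analysis, not NRTT's actual code, and the task set is purely illustrative.)

```python
import math

def response_times(tasks):
    """Worst-case response-time analysis for fixed-priority pre-emptive
    scheduling.  tasks is a list of (C, T) pairs - worst-case computation
    time and period - sorted highest-priority first; under deadline-monotonic
    assignment that means ascending deadline (here taken equal to the period).
    Returns R_i for each task, or None where the deadline would be missed."""
    results = []
    for i, (C_i, T_i) in enumerate(tasks):
        R = C_i
        while True:
            # Interference from all higher-priority tasks released in [0, R)
            R_next = C_i + sum(math.ceil(R / T_j) * C_j
                               for C_j, T_j in tasks[:i])
            if R_next == R:
                results.append(R)       # fixed point: worst-case response
                break
            if R_next > T_i:
                results.append(None)    # unschedulable: deadline missed
                break
            R = R_next
    return results

# Illustrative task set: (C, T) = (3, 7), (3, 12), (5, 20)
worst_case = response_times([(3, 7), (3, 12), (5, 20)])  # → [3, 6, 20]
```

If every task's computed response time is at or below its deadline, the guarantee holds for every possible runtime interleaving - which is what makes such a kernel "hard" real-time.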

Network Administration

From NRTT, I returned to the Department of Computer Science, but this time to a non-academic post - the head of the system support group. I was responsible for all the internal networks, servers and workstations in the department - the disparate mixture of Unix, Linux and various versions of Windows that is needed in a research environment. But the real challenge was that the whole department (at that time, scattered in several buildings around the campus) was just about to move into a brand-new building. This was the opportunity to replace the department's aging networks (based around several 10-base5 thick-coax ethernet segments, with 10-base2 spurs around the offices - yes! Coax everywhere - a reliability nightmare!), and to design a new network using state-of-the-art infrastructure, and structured 10-baseT wiring. Of course, the whole move had to be done in the (rapidly disappearing!) time between the completion of the building, and the start of the new academic year with the arrival of lots of students, and various servers had to be upgraded simultaneously!

After a bidding process, we eventually settled on Cabletron (now Enterasys) switches for the core, and 3Com 'port-switching' hubs (PS Hub 40) for desktop connections. The 3Com hubs were great - with 24 ports (plus 2 uplink ports) they could be sub-divided into smaller collision domains, and they were managed, so we could work out what was going on! The Cabletron hardware was also good, but we were running virtual LANs, and the software in the switches was more bleeding edge than we had expected, and reliability suffered somewhat! All the backbone links to the wiring cabinets were dual, redundant 100-baseFX fibre links, and this worked well - you could unplug a backbone link, and no-one would notice (not that we ever did this, of course)!

In the middle of all of this, I also taught a ½ course (Introduction to Computer Systems) to the first-year undergraduates in the term after the 'big move', and the following year too.

Mission Critical Applications Limited

The big challenge over, I left in 1999 to start Mission Critical Applications Ltd, together with my wife, Divya, as a vehicle for our unusually broad spectrum of experience and expertise. She brings a research background in safety-critical systems, formal methods, dependability, measurement theory and decision analysis, as well as other previous experience.

Apart from pursuing our own ideas, we have also had some external consultancy projects, more details on which can be found here.

A particularly interesting project began late in 2002, when I was asked by Vita Nuova to complete the draft of the book Inferno Programming, which had been started by Rob Pike and Howard Trickey at Bell Labs before Vita Nuova acquired Inferno in 1999 (my involvement was announced here). Inferno is a virtual machine and an operating system that can run natively on bare hardware, or in a process under another operating system. It was a successor to Plan 9 (which was itself the successor to Unix 10th Edition, 8th Edition, etc...), and includes many of the same ideas - most notably taking the Unix idea that "everything is a file" to its logical conclusion. Both systems provide applications with automatic location independence: since everything is a file, and files can transparently be local or remote, programs can access their resources without knowing (or caring) whether they are implemented on the local machine, or on a remote machine. Inferno's virtual machine allows it to go one step further, since it no longer matters what kind of processor the local machine has: the same binary executable of the program will run, with the same results. Thus Inferno encourages distributable applications, rather than just distributed systems. The book is expected to be released in March 2003.