Bye bye, London!

My last project at the BBC is finished, as is the related contract. My worldly possessions are in boxes and being shipped. The lease on my London flat has ended. A goodbye party (how unlike me!) was had. I’m now at my parents’, sleeping, reading, playing with the dogs, resting.

I don’t think I’ll miss London very much (too big and busy, too expensive, takes too long to get out of), but I will definitely miss all the people a lot. Those I worked with and those I partied with, but especially those I just plain hung out with. I think my fondest memories will be of having a beer in the park on a sunny afternoon, or having a fancy lunch at a Notting Hill restaurant, or strolling along the South Bank amidst the tourists.

What’s next? I have my sights set on Amsterdam, to work and maybe to live. I’m looking at buying a car, which I’ve actually never done before. I’m also looking for interesting work. Talking to a few interesting folks already, it seems like Amsterdam is filled with startups these days 🙂

A personal history of computers and the internet

I’m pretty sure this is not very interesting to anyone at all, but I wanted to write it down so I can look back at it some years from now and remember.

Early years

In ’88 or ’89, when I was about 5 or 6, my dad bought an Amiga 500. I remember that he taught me how to draw Mandelbrot fractals with it. We would sometimes leave it running for two days slowly drawing the picture. We had the extra floppy drive, which meant that we could play games that came on two floppy disks. At first my brother and I mostly played a demo version of Pang that had come with a magazine, and then we got Rainbow Islands, which we played for ages and ages. I think we got it for Sinterklaas. We also got Batman eventually.

Then, much later (the next-door neighbours had an IBM PC by then, though we weren’t allowed to play with it), with another magazine we got some kind of a programming environment; I can’t remember which one. I figured out how to make it display something like “your computer has been infected with a virus”, and I also figured out how to change the high scores on some of our games. We had that Amiga for a long time.

The next computer to enter the house was an old IBM PC that had been obsoleted at my dad’s work. It had an 80×24 screen with green text on it, it ran MS-DOS, and it had a hard drive. We broke it the same day! With the Amiga I had learned that if things went wrong you could just turn off the power and turn it on again. With this computer, that was enough to crash the hard drive.

When the Pentium processor came out in ’93 my dad bought us a new computer. It came with DOS and Windows 3.1. I read the entire manual cover to cover and figured out how to do simple things with BASIC in DOS, including messing with a version of Snake that I typed in from a magazine.

When we got CompuServe with WinCIM and a 14k4 modem, my 10-year-old self got properly addicted to the computer. Online time was expensive, so I was allowed only 30 minutes a week at first. When my dad learned of the internet (I think after a business trip to the US) we spent ages together trying to get Winsock and WinCIM to work together. After calling CompuServe support they sent us a new version of WinCIM with Mosaic, and we got on the world wide web. I’m not sure when exactly this was, but the WWW was grey with blue and black, and we used WebCrawler to find stuff, which was really hard.

I remember I didn’t like the world wide web very much, because it wasn’t as easy to use as the CompuServe forums. It all changed when I read about Yahoo! in a magazine; that was probably some time in 1995. I was then active on the CompuServe AD&D forum, and as hobbyists set up AD&D websites, the web became very interesting.

Teenage years

I learned some HTML and made my own website on GeoCities, I think in 1996 or 1997. When we switched ISP to Demon my website moved there; the Wayback Machine keeps some copies of it. I became an Amazon associate and made commission on AD&D and other roleplaying books. I used the credit earned to buy roleplaying books and then a JavaScript book. I was learning JavaScript and eventually some PHP, mostly to manage the book catalog.

According to my Amazon history I got my first book credit in May 1998, when I was 15. That’s also roughly the time when I started making my first “company presence” websites for money. I started my first company just before I turned 16 and invested my first earnings in a copy of Flash 4, which I used to build lots of stuff with. I never made much money, but it was a bit more than I made from being a paperboy.

In 1999 or 2000 or so I had learned enough PHP and MySQL and Corel PhotoPaint 4 to help out on a major rework of another roleplaying site, AtFantasy, which stayed pretty popular for several years; I think the guy who started it eventually made a living out of running the site.

I started lurking on the PHP-Dev mailing list at some point, where there was this Sam Ruby person talking about server-side Java all the time. It was intriguing, so I read a book or two about Java, started programming in it, got interested in servers, and one thing led to another. I was voted in as a committer on Apache Avalon in March of 2001 (aged 17; my mum had to sign the CLA).

I switched my desktop to (Red Hat) Linux around this time.

Going professional

In June 2001, just after finishing high school, I got a job as a web programmer for Planet Internet, which I quit after 3 months to go backpacking in Australia. It was my first real job, and I learned a lot in those 3 months (how to take down an Oracle cluster, how to royally piss off the sysadmins by asserting they misconfigured the reverse proxy, how to write Tcl, how to do cross-browser HTML and JavaScript, how to use flasm, just how many support calls you cause if you accidentally publish the wrong version of the help pages).

After returning from Australia I got a job at Multi-M/IA, where I was hired to do some PHP CMS work. I then worked on an e-mail-based CMS in Java (for UNAIDS field doctors who had about 20 minutes of GPRS connectivity per day), a bulk mail tool using JavaMail, and several filesharing/intranet projects. This is where I first learned about server administration (we had managed hosting with Rackspace), which probably also triggered my interest in build engineering. Once I started studying physics, I eventually quit that job.

I partially switched to Mac when I bought an iBook G4, though I kept using (Ubuntu) Linux on my desktop for a long time.

The next key turning point was when I got a phone call from Dirk-Willem, who needed someone to help out on some project infrastructure and a build system for a major web service project for the Dutch government. I found I enjoyed that way more than studying, so I quit uni.

I got a beefy PowerMac and a 30″ Cinema Display and have been Mac-only ever since.

I worked as a freelancer for 2 years, most of which was with Asemantics, where I learned about low-level engineering, business, and large-scale commercial software projects.

Asemantics was a subcontractor for Joost, whom I joined in October 2006 and then left in May this year. Like everyone else there I worked long hours and my open source contributions dwindled, but working with many very smart and talented people meant I continued to learn quite a lot. In particular I ended up learning a lot more about data modeling and databases, eventually leading a migration effort away from an RDF database to a relational model.

At the moment I’m back to contracting, working at the BBC, where I’m part of a platform engineering team responsible for the BBC’s shiny new web platform. This is the first time I’ve been fully embedded in a really big (and old) organization. That means learning about policies and processes and organization structures, and then sometimes trying to change them. When a lot of engineering choices are dictated by non-technical constraints (“we must be able to hire people 10 years from now that can maintain our software”), your perspective does change. I think.

You don’t know and you don’t understand

You know much less than you think you know. You misunderstand many more things than you think you do. You’re also much more wrong much more often than you think.

(Don’t worry, it’s not just you, the same is true for everyone else.)

Even better, this is how science works. Being a scientist is all about actively trying to be wrong (and proving everyone else wrong), all the time. When you do science, you don’t know, and even what you learn by doing the science you never know for sure.

The scientific method

Here are the basic steps in the scientific method:

  1. Based on your own past experience and that of others, try to make some sense of a problem
  2. Try to find a reasonable explanation for the problem
  3. Ask: if the explanation is correct, what else would you be able to see or measure?
  4. Try to disprove the explanation by making those observations and measurements

Scientists do this all day every day, they do it together on a world-wide scale, and they do it to each other.
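
As an aside for the programmers: the same loop maps neatly onto debugging. Here’s a toy sketch in Python – the function and the bug are entirely made up, but the shape of the reasoning is the point:

# A made-up debugging session, framed as the four steps above.
# 1. Observation: report_average() sometimes blows up in production.
# 2. Explanation: maybe it breaks when a user has no scores yet.
# 3. Prediction: if that is right, an empty list should reproduce it.
# 4. Experiment: actually try to disprove the explanation.

def report_average(scores):
    return sum(scores) / len(scores)  # divides by zero for []

try:
    report_average([])
    print("explanation disproved: empty input works fine, keep looking")
except ZeroDivisionError:
    print("explanation survives (for now): empty input reproduces the crash")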

Experimentation

In uni, studying applied physics, I was trained in a specific application of the scientific method to experimentation, which went something like:

  1. Define a question to answer.
  2. Define what you already know (or will assume) that is related.
  3. Form a hypothesis of what the answer may be.
  4. Figure out what you can measure.
  5. Define how those measurements could be interpreted to verify or disprove the hypothesis.
  6. Do the experiments and collect the measurements.
  7. Analyze the data.
  8. Check the internal consistency of the experimental data by applying statistics.
  9. Draw conclusions from the analysis.

The course was called Introduction to Experimentation, and it included many more specifics than just that process. For example, it was also about teamwork basics, the use of lab journals, safe lab practices, how to think about accuracy and precision, and quite a lot of engineering discipline.

The course was nearly completely free of actually interesting math or physics content. For example, the first two 4-hour practicums of the course centered around the measurement of the resistance of a 10 ohm resistor. Some of the brightest 18- and 19-year-olds in the country would leave that practicum feeling properly stupid for the first time, very frustrated that they had “proven” the resistor to have a resistance of 11 +/- 0.4 ohm (where in reality the resistor was “known” to be something like 10.000 +/- 0.001 ohm).
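
If you want a feel for the statistics involved, here’s a minimal sketch in Python (with made-up numbers, not real practicum data): estimate the resistance from repeated voltage and current readings via Ohm’s law, and attach an uncertainty to it.

import math

# Made-up repeated measurements; a real practicum has many more.
voltages = [1.09, 1.11, 1.10, 1.12, 1.08]       # volts
currents = [0.100, 0.099, 0.101, 0.100, 0.102]  # amperes

# Ohm's law per measurement: R = V / I
resistances = [v / i for v, i in zip(voltages, currents)]

n = len(resistances)
mean = sum(resistances) / n
# Sample standard deviation, then the standard error of the mean.
stdev = math.sqrt(sum((r - mean) ** 2 for r in resistances) / (n - 1))
stderr = stdev / math.sqrt(n)

print("R = %.2f +/- %.2f ohm" % (mean, stderr))

The painful lesson is that this spread only captures random error: a miscalibrated meter or a forgotten lead resistance can hand you a beautifully tight interval that sits nowhere near the “known” 10.000 ohm.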

The art of being wrong

Teaching that same course (some 2 years later) has turned out to be one of the most valuable things I’ve ever done in my life. One of the key things that students learned in that course was that the teacher might not know either – after all, a lab is a strange and wonderful place, and voltmeters can in fact break! The teacher in turn learned that even when teaching something seemingly trivial it is possible to be utterly wrong. Powerful phrases that I learned to use included “I don’t know either”, “You are probably right, but I really don’t understand what’s going on”, “Are you sure?”, “I’m not sure”, “How can you be so sure?”, “How can we test that?”, and the uber-powerful “Ah yes, so I was wrong” (eclipsed in power only by “Ok, enough of this, let’s go drink beer”).

This way of inquisitive thinking, with its fundamental acceptance of uncertainty and being wrong, was later amplified by studying things like quantum mechanics, with its horrible math and even more horrible concepts. “I don’t know” became my default mind-state. Today, it is one of the most important things I contribute to my work environment (whether it is software development, project management, or business analytics doesn’t matter) – the power to say “I don’t know” and/or “I was wrong”.

For the last week or two I’ve had lots of fun working closely with a similarly schooled engineer (he really doesn’t know anything either…) to try and debug and change a complex software system. It’s been useful staring at the same screen, arguing with each other that really we don’t know enough about X or Y or Z to even try and form a hypothesis. Communicating out to the wider group, I’ve found that almost everyone cringes at the phrase “we don’t know” or my recent favorite “we still have many unknown unknowns”. Not knowing seems to be a horrible state of mind, rather than the normal one.

Bits and bytes don’t lie?

I have a hypothesis about that aversion to the unknown: people see computers as doing simple boolean logic on bits and bytes, so it should be quite possible to just know everything about a software system. As systems grow bigger, all that changes is that there are more operations on more data, but you never really stop knowing. A sound and safe castle of logic!

In fact, I think that’s a lot of what computer science teaches (as far as I know – I never actually studied computer science in university, I just argued a lot with the computer so-called-scientists). You start with clean discrete math, and through state machines and automata and functional programming you can eventually find your way to the design of distributed systems and all the way to the nirvana of artificial intelligence. (AI being much better than the messy biological reality of forgetting things and the like.) Dealing with uncertainty and unknowns is not what computer science seems to be about.

The model of “clean logic all the way down” is completely useless when doing actual software development work. Do you really know which compiler was used on which version of the source code that led to the firmware that is now in your RAID controller, and that there are no relevant bugs in it or in that compiler? Are you sure the RAM is plugged in correctly in all your 200 boxes? Is your data centre shielded enough from magnetic disturbances? Is that code you wrote 6 months ago really bug-free? What about that open source library you’re using everywhere?

In fact, this computer science focus on logic and algorithms, and its high appreciation of building systems, is worse than just useless. It creates real problems. It means the associated industry sees its output in terms of lines of code written, features delivered, etc. The most revered superstar engineers are those that crank out new software all the time. Web frameworks are popular because you can build an entire blog with them in 5 minutes.

Debugging and testing – that’s what people who make mistakes have to do. Software design is a group activity, but debugging is something you do on your own, without telling anyone that you can’t find your own mistake. If you are really good you will make fewer mistakes, will have to spend less time testing, and so produce more and better software more quickly. If you are really really good you might do test-driven development, and with your 100% test coverage you just know that you cannot be wrong…
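
Except that coverage proves much less than it feels like it proves. A contrived sketch (made-up code, not from any real project): the two asserts below execute every line of clamp, so a coverage tool will happily report 100%, and the function is still wrong.

def clamp(value, low, high):
    """Clamp value into the range [low, high]."""
    if value < low:
        return low
    return min(value, high)  # silently assumes low <= high

def test_clamp():
    # These two calls execute every line: 100% coverage.
    assert clamp(-5, 0, 10) == 0
    assert clamp(15, 0, 10) == 10

test_clamp()
# Yet clamp(5, 10, 0) returns 10, "clamped" above its own upper
# bound of 0. Full coverage said nothing about the inputs we
# never thought of.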

The environment in which we develop software is not nearly as controlled as we tend to assume. Our brains are not nearly as powerful as we believe. By not looking at the environment, by not accepting that there is quite a lot we don’t know, we become very bad at forming a reasonable hypothesis, and worse at interpreting our test data.

Go measure a resistor

So here’s my advice to people that want to become better software developers: try and measure some resistors. Accept that you’re wrong, that you don’t know, and that you don’t understand.

Massive change is inevitable

Massive change is inevitable.
I’m not scared.
I’m not panicking.
My mood is eager impatience.

The way we ran the world was wrong.
The market is not the complete answer.

We will not be defeated.
We will directly attack our worst problems.
We will fix the world.
I don’t know what to do.
We can work it out together.
Grow up to the size of your challenges.

The future is unwritten.

(adapted from and inspired by Bruce Sterling)

The startup suck

Startups suck. Not suck as in suck ass, but suck as in suck you in. It’s an interesting experience.

I work at a company that somewhat resembles a startup. I’ve worked there (as an employee, that is) for 3 months now. Like many of my colleagues, I tend to work 60-hour weeks.

Before that, I was self-employed, and I worked 60-hour weeks too, and I dare say with more stress. However, somehow I always managed to find some time to stay somewhat involved in open source stuff on the side, whereas now I don’t even seem to find the time to read the members@apache.org mailing list.

It’s not because we don’t use open source software at work:

$ pwd
/Users/lsimons/dev2/tvp/bistro-trunk/external-libs
$ find . -name '*.jar' | grep -v '.svn' | wc -l
     294

It’s also not because my employer doesn’t want me working on open source projects (quote: “if this thing starts taking up more than 30% of your time we should sort-of talk about it”).

It’s about the rhythm. Just about everyone in our little p2p video company works like crazy, yet parties like students (actually, the students tend to leave the bars way early in Leiden, whereas we often get kicked out). In many ways, coming into the office (which I tend to do for about two days a week, the rest I work from home) feels more like arriving at a mini-conference than like, ehm, coming into the office. It just sucks me in, and I like it that way.

What do I do these days? Judging by Flickr: crowd control, ranting, eating, drinking, and making a fool of myself (and others). I don’t think there’s a public picture to be found on Flickr yet of me actually working. Hmm….

I guess I should be happy I don’t work at YouTube — then the above links would’ve been to the even more embarrassing Christmas dinner video footage, which, as far as I know, hasn’t made it off of our SSL-secured intranet. Of course, if we do our job really well on the COW (short for Content Owner Website, which looks to be the main project for me to work on in January), we might be seeing a video channel on TVP about TVP before I can come up with a ploy to prevent it…

PS: you’ll be missed, Mads. I’m sure you’ll find the job you’re looking for soon.