Simplifying the Landscape

At the end of the last post I wrote about the actual implementation of my Clockwork Aphid project, I said the next step was going to be display simplification. At that point I’d generated a few landscapes which were just barely starting to test the limits of my computer, though they were nothing like the size or complexity I had in mind. That said, I was looking at landscapes containing 1,579,008 polygons, and it was obvious that not all of these needed to be put on screen. Moreover, because my landscapes are essentially made up of discrete samples (or nodes), I needed to reduce the number of samples displayed to the user, otherwise performance was really going to tank as the landscapes increased in size.

Shamus Young talked about terrain simplification some time ago during his original terrain project. This seemed as good a place as any to start, so I grabbed a copy of the paper he used to build his algorithm. I didn’t find it as complicated as it appears he did, but this is probably because I’m more used to reading papers like this (I must have read hundreds during my PhD, and even wrote a couple), so I’m reasonably fluent in academicese. It was, as I suspected, a good starting point, though I wouldn’t be able to use the algorithm wholesale as it’s not directly compatible with the representation I’m using. Happily, my representation does make it very simple to use the core idea, though.

If you remember, my representation stores the individual points in a sparse array, indexed using fractional coordinates. This makes it very flexible, and allows me to use an irregular level of detail (more on that later). Unlike the representation used in the paper, this means I can’t make optimisations based on the assumption that my data is stored in a regular grid. Thankfully, the first stage of the simplification algorithm doesn’t care about this and examines points individually. Also thankfully, the simplification algorithm uses the same parent/child based tessellation strategy that I do.
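For illustration, here’s a minimal sketch of what that sparse representation might look like in Java. The names and structure are hypothetical (not the actual Clockwork Aphid code), and the string key is a crude stand-in for a proper coordinate class:

import java.util.HashMap;
import java.util.Map;

class LandscapeNode {
  double height;      // the sample's height value
  int granularity;    // distance to the node's parents (level of detail)
}

class Landscape {
  // Sparse array: only the points which actually exist are stored,
  // indexed by their fractional coordinates.
  Map<String, LandscapeNode> nodes = new HashMap<String, LandscapeNode>();

  String key(double x, double y) {
    return x + "," + y;
  }

  void put(double x, double y, LandscapeNode node) {
    nodes.put(key(x, y), node);
  }

  LandscapeNode get(double x, double y) {
    return nodes.get(key(x, y));  // null if there's no sample at these coordinates
  }
}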

The first step is to decide which points are “active”. This is essentially based on two variables:

  • The amount of “object space error” a point has (i.e. how much it differs from its parents);
  • The distance between the point and the “camera”.

A local constant is also present for each point:

  • The point’s bounding radius, or the distance to its furthest child (if it has children);

I’m not sure if I actually need this last value in my current implementation (my gut says no, I’ll explain why later), but I’m leaving it in for the time being. Finally, two global constants are used for tuning, and we end up with this:

λεi / (di − ri) > τ

Where:

  • i = the point in question
  • λ = a constant
  • εi = the object space error of i
  • di = the distance between i and the camera
  • ri = the bounding radius of i
  • τ = another constant

This is not entirely optimal for processing, but a little bit of maths wizardry transforms this like so:

(λεi + τri)² > τ²di²

This looks more complicated, and it’s less intuitive to see what it actually does, but from the point of view of the computer it’s a lot simpler, as it avoids the square root needed to calculate the distance between the point and the camera. Now we get to the fun part: diagrams! Consider the following small landscape, coloured according to the granularity of each of the points (aka the distance to the node’s parents, see this post):

AllPoints

Next, we’ll pick some arbitrary values for the constants mentioned above (ones which work well for explanatory purposes), place the viewpoint in the top left-hand corner, and we end up with the following active points (inactive points are hidden):

ActivePoints

Now, we take the active points with the smallest granularity, and we have them draw their polygons, exactly as before, which looks like this:

SmallestPolygons

When we come to draw the polygons of the next highest granularity, though, you’ll see that we have a problem: the previous set of polygons have encroached on their territory. To avoid this, each node informs its parents that it is active, and the parent then doesn’t draw any polygons in the direction of its active children. If we add in the polygons drawn by each of the other levels of granularity, we now end up with this:

FilledPolygons

Oh no! There’s a hole in my landscape! I was actually expecting that my simplistic approach would lead to more or less this result, but it was still a little annoying when it happened. If I was a proper analytical type I would next have sat down and worked over the geometry at play here, then attempted to find a formulation which would prevent this from happening. Instead, though, I stared at it for a good long while, displaying it in various different ways, and waited for something to jump out at me.

Eventually it did, and thankfully it was a very simple rule. Each parent stores a list of the directions in which it has active children in order to prevent overdrawing (as mentioned above). The new rule is that a node is also considered active if this list is non-empty. With this addition, our tessellated landscape now looks like this:

BackfIlledPolygons

Presto! A nice simple rule which fills in all of the gaps in the landscape without any over- or under-simplification, or any overdrawing. I suspect this rule also negates the need for the bounding radius mentioned above, though I have not as yet tested that thought. To recap, we have three simple rules (there’s a rough code sketch after the list):

  1. A node is active if the object space error/distance equation says it is;
  2. A node is active if any of its children are active;
  3. Polygons are tessellated for each active point, but not in the direction of any active children.
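Here’s a rough sketch of how those three rules might look in code. This is purely illustrative: the Node class is hypothetical, I’ve assumed four compass directions for the parent/child relationships, and the activation test uses the squared-distance form of the equation from earlier.

import java.util.EnumSet;
import java.util.Set;

enum Direction { NORTH, EAST, SOUTH, WEST }

class Node {
  double error;           // object space error (εi)
  double boundingRadius;  // distance to the furthest child (ri)
  double x, y, z;         // position of the point
  // Directions in which this node has active children, filled in by the
  // children themselves when they become active.
  Set<Direction> activeChildDirections = EnumSet.noneOf(Direction.class);

  // Rule 1: active if the object space error / distance test says so.
  // Works on the squared distance, so no square root is required.
  boolean errorTestPasses(double camX, double camY, double camZ,
                          double lambda, double tau) {
    double dx = x - camX, dy = y - camY, dz = z - camZ;
    double distanceSquared = dx * dx + dy * dy + dz * dz;
    double threshold = lambda * error + tau * boundingRadius;
    return threshold * threshold > tau * tau * distanceSquared;
  }

  // Rule 2: also active if any of its children are active.
  boolean isActive(double camX, double camY, double camZ,
                   double lambda, double tau) {
    return errorTestPasses(camX, camY, camZ, lambda, tau)
        || !activeChildDirections.isEmpty();
  }

  // Rule 3: tessellate polygons for this point, but not in the direction
  // of any active children.
  void tessellate() {
    for (Direction d : Direction.values()) {
      if (!activeChildDirections.contains(d)) {
        drawPolygonsTowards(d);
      }
    }
  }

  void drawPolygonsTowards(Direction d) {
    // rendering omitted from this sketch
  }
}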

But what does this look like in actual eye-popping 3D landscapes? Well, here’s an example, using the height based colouring I’ve used before:

SimplifiedLandscape

I’m quite pleased with this, though what I’m doing here is still quite inefficient and in need of some serious tuning. There are a couple of further simplification tricks I can try (including the next step from the paper). More to come later. Honest.


Revisiting the Language Issue

Some time ago, I wrote a series of posts about language choice for my Clockwork Aphid project. In the end I decided to start the project in Java, this being the language I’m most comfortable with. Once the project reaches a stable state, with some minimum amount of functionality, the plan is to port it to C++ for comparison purposes, this being the language which is likely to provide the best performance.

I still plan on doing this, but I’ve also decided to add a couple of extra candidate languages to the melting pot and get an even broader comparison. The first of these languages is Go, a relatively new language developed at Google. This is not a coincidence. I’ve been doing some reading about it lately and finding a lot of things I really like. It has the potential to provide the benefits of both Java and C++, whilst avoiding many of the pitfalls. This is definitely a good thing. It will also give me a chance to dogfood (there’s that word again!) some more Google technology.

One of Go’s features which I really like is implicit interfaces. Allow me to explain. In most regular statically typed object oriented languages, such as Java (which I’ll use for this example), you can abstract functionality using something like an interface. For example, let’s say I have a class which looks like this:

class Counter {
  int value;
  int get() {
    return value;
  }
}

Here we have defined a class which declares a single method returning an integer value. I might then make use of an instance of this class elsewhere:

class Printer {
  void update(Counter counter) {
    System.out.println(counter.get());
  }
}

All is good with the world, unless I decide I want to change the behaviour of the code. Perhaps I want the value to increment after each call, for example. I could extend the Counter class and change its behaviour that way:

class IncrementingCounter extends Counter {
  int get() {
    return value++;
  }
}

I can now pass an instance of this new class into the update method of the Printer. Done. Right? Well… no. This is a bit of a clumsy way to go about it. It doesn’t scale and it’s not always possible. A better way to handle this is to use an interface:

interface CounterInterface {
  int get();
}

This specifies the methods’ signatures, but not their implementation. We can then change the Printer class to use this interface, rather than the concrete class:

class Printer {
  void update(CounterInterface counter) {
    System.out.println(counter.get());
  }
}

Now any class which implements this interface can be passed to the Printer. So, going back to our original example:

class Counter implements CounterInterface {
  int value;
  public int get() {
    return value;
  }
}

We can now make any number of alternative implementations (incrementing, decrementing, random, Fibonacci…) and as long as they implement the interface they can be passed to the Printer. This is fine if you’re in control of the implementation, and even more fine if you’re in control of the interface as well. There are times, however, when you’re in charge of neither. Things can get a little messy and you may have to find a way of pushing a round peg through a square hole.
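For instance, here’s a minimal sketch of the incrementing counter from earlier, reworked to implement the interface; the Printer will now accept it without any further changes to either class:

class IncrementingCounter implements CounterInterface {
  int value;
  public int get() {
    return value++;
  }
}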

In dynamically typed languages, such as Python and Ruby, things work a little differently. These languages are often referred to as being “duck” typed, as they make the assumption that if something “looks like a duck and quacks like a duck, treat it as though it’s a duck.” In this case we wouldn’t bother with any of the interfaces and our Printer class would look more like this:

class Printer:
  def update(self, counter):
    print counter.get()

So long as the counter object has a method called get() we don’t have a problem. Everything will be fine. This is much simpler, and is one of the things which makes Python very quick to program in, but it does have problems. The main problem (for me, at least) is specification. Without examining the source code, I can’t see what sort of object I have to pass into the update method. If the method has been manually commented then there’s no problem, but this is an incredibly tedious thing to have to do. In the Java code I can see the type right there in the auto-generated documentation, and even if the writer has written no comments at all (what a bastard!) I can still get a good idea of what I need to pass into the method.

Go takes a different approach. It’s statically typed, and it has interfaces, but a class doesn’t need to state that it implements an interface. This is implicit and automatic. If a class has the methods defined in an interface, then it is automatically considered to implement it. You get the flexibility of Python with the specification and predictability of Java. This is just one of the things in Go which I think is a really good idea.

On the other hand, I think functional programming is a really stupid idea. I find the languages to be completely horrendous. I feel they must be created by the sort of people who think Linux is user friendly. I consider them curiosities whose only merit is academic. It appears to me that their major use case is to make programming appear more obscure than it actually is and to abstract away the programmer’s knowledge of what the computer is actually doing.

You may be surprised to learn, then, that the third language I’m going to be trying to port Clockwork Aphid into is Scala, a functional programming language. The reason for this is simple: while I personally believe that functional programming (FP) is rubbish, many people disagree. Not a majority, but certainly a very vocal minority. Within Google this minority is very vocal indeed. The word “fundamentalists” might be appropriate to describe them. When someone believes something that hard it makes me very curious. This in turn tends to lead me towards testing my own beliefs. Sometimes I discover something new and exciting which I was missing out on previously*, and sometimes my initial suspicions are confirmed**. We’ll see which way it goes with Scala.

* Such as the Harry Potter books, which I had stubbornly refused to read until just before the first film was released.

** Such as when I noticed that the Twilight books had taken up the first four places on the Waterstone’s chart and decided I ought to find out what all the fuss was about.

s/@seebyte\.com/@google\.com/g

Yes. That’s right. I did it. I used a sed expression as post title.

I’ve been very quiet as of late, though in my defence I’ve been very busy for a few months. In the middle of that I had a potentially life changing decision to make, and then I was dealing with the ramifications of the choice I made.

As you may have gathered from the post title (even if it mostly looks like crazy speak to you), the choice was whether I should accept a job at Google or not. Believe it or not, it was a choice, and a fairly hard one. There are various reasons for this. I’m not going to go into all of them, though I will go into some, but let’s start with a little bit of background.

It started with the receipt of a LinkedIn message with the subject “Hello from Google.” and ended with me standing in a car park being offered a very good job. Regarding what happened in between: the Google interview process is lengthy and pretty hardcore. Reputedly the most hardcore in the entire tech industry. But having a gruelling four and a half hour viva a little over a week before your main interview can make it seem like a walk in the park, albeit a mentally tiring one.

So then I was left with a choice. I could stay at my good job at a small but growing company with a lot of potential, at which I knew I had some prospects. I’d still be working in an industry which I know, and which to some extent knows me. I’d stay in a city I love (and have loved since the moment I set eyes upon it ten years ago), surrounded by a wonderful group of friends.

Alternatively I could accept an incredible opportunity to work at one of the most exciting companies in the world, which is famous for treating its employees incredibly well, and has projects which excite me more than I can adequately express in words. But I’d be changing industries, and a good portion of my existing knowledge might be useless (or more useless, as the case may be). I’d have to move to London, a city I like but don’t know that well, and feel slightly intimidated by. As luck would have it, though, I do have a group of close friends living in London, who are also awesome.

It was a very hard choice, and it came down to a couple of things:

  • A former colleague put it to me that if I turned this offer down I’d hate myself for it every time I had a bad day (or spent a year putting my life on hold for a field trip which was consistently two weeks away from happening);
  • Another colleague suggested that I would be swallowed up by Google. A tiny cog in a huge machine. Which is potentially true… and a little scary. But… the other analogy people use here is “small fish in a big pond.” There’s a distinction to be made: fish grow, cogs don’t. Unless you put yourself in a bigger pond, you’ll never find out if you have the potential to get any bigger.
  • It would be nice to not work for the oil companies and the military. Not necessarily because either party is evil, but because of the sheer amount of red tape involved.
  • I’d been feeling as though I’d been stuck in a rut for a while, and really wanted to shake things up somehow.
  • It’s frickin’ GOOGLE!

So. Here I am. In London. Staying in wicked temporary accommodation. Tomorrow is my first day at my shiny new Google job, and right now I should really go to bed!

PS More updates coming soon I swear, though it may be a month before I can get back to my Clockwork Aphid project, for logistical reasons.

Brave New Worlds

If you were writing a taxonomy of stories you might choose books as a good place to start. Flicking your way through the world’s libraries, time and the Dewey Decimal System would eventually bring you to the fantasy genre. There is a lot of fantasy writing out there, and you might choose to subdivide it further. One potential way of cutting it neatly in twain is like so:

  1. Stories set in our world;
  2. Stories not set in our world.

Simple. Harry Potter, for instance, is set in our world. That’s a big part of the appeal. Likewise so are Neverwhere, Kraken (if you liked Neverwhere, you’ll want to read this), and most of Stephen King’s work. Lord of the Rings is not set in our world. Simple. There are other options, of course. What about the Chronicles of Narnia, for example (which is set in a world beyond our own)? Or Magic Bites* (which is set in an alternate version of our world)? Clearly we’re looking at some shades of grey here as well, but I’m sticking with the original idea while it still serves my purpose.

What is my purpose here, though? Why the ramblings on this most nerdy of genres? Well, I’m thinking about procedurally generated landscapes again, you see. Clearly, if you’re generating your landscape procedurally, it’s not going to be set in our world. Existing fantasy landscapes are a good place to look for ideas, then, particularly because they were designed specifically for the purpose of telling stories in.

The quintessential fantasy landscape is, of course, Tolkien’s Middle Earth, which looks a lot like this:

Middle Earth

That should look vaguely familiar to anyone who’s read the films or seen the books (or words to that effect). There’s a definite feeling of size there. Clearly we’re looking at a chunk of a continent, split into something not unlike countries. It always bothers me on maps like this, though: what about the rest of it?

There are a couple of well known knock offs of Tolkien’s work out there, so why don’t we consider a few of those as well? One which used to be close to my heart in my adolescent years is Games Workshop’s Warhammer:

The Warhammer World

Now, it should be quite obvious that the good chaps at GW are knocking off more than just Tolkien here. The Big G / the anthropic principle (depending on your world view) could probably claim some royalties here, because the shape of some of those continents looks very familiar. Grand Cathay, indeed. Now we appear to be looking at close to an entire planet, though, unwrapped using something not unlike the good old Mercator projection (or possibly something more politically correct). I assume so, anyway. It’s entirely possible that the Warhammer world is flat.

Another world with more than a bit of Tolkien about it is the World of Warcraft:

The World of Warcraft

Blizzard have taken no chances, though. This is a world you can actually go wandering about on, virtually speaking. They’ve made sure there aren’t any inviting edges for you to go wandering off. If this isn’t the entirety of the world, it is at least self-contained. That doesn’t stop Blizzard causing new continents to pop to the surface whenever they need to make new content, of course.

Okay, I’m only going to show one more, then I’ll get to the actual point. This is a big one, though, so take a deep breath:

Westeros

This is also one you might be less familiar with. If you haven’t already, I heartily suggest you take a look at George R R Martin’s “A Song of Ice and Fire” series, which starts with “A Game of Thrones,” which is not coincidentally the name of the TV adaptation of the books which starts on HBO quite soon. This is probably the best of the four maps I’ve posted here, thanks to a fairly stupid amount of detail (click on it, I dare you). This detail is evident in the books themselves (which I have to confess are not for the faint of heart) as well. This giant map actually represents only a smallish portion of the world these books are set in. It makes a good illustration though. We have mountain ranges, plains, rivers, cities, castles and so on.

If I’m going to procedurally generate landscapes to tell stories in, they need to have at least a percentage of this amount of detail. Take “The Neck,” the narrow portion of land around halfway up, for example. The fact that the land perceptibly narrows here feeds heavily into the plot at several points in the books. This is a choke point which cuts the continent in half. Likewise, “The Eyrie” (*shudder*) is a fort sitting at the peak of a mountain range. Towns are in places that towns would be placed: bridging points on rivers, sheltered harbours, and so on.

The point is this: my procedurally generated landscapes will need variety, but the right kind of variety. They will need “features.” That’s the first major problem I’m going to need to work on once the basic engine is in place, but first I need to make a decision: should this be done top down, or bottom up? Or some combination of the two?

First, though, I need some terrain simplification and some unit tests. The unit tests I’m actually quite looking forward to doing (oddly), since I’m going to try doing them in Groovy.

PS I wanted to include the world from Brandon Sanderson’s utterly spectacular “Mistborn” series here as well, but I couldn’t find a good map online. These books are truly awesome, though. As well as being part of a series, each actually stands alone and completes its own story, unlike the vast majority of series in the fantasy genre. Seriously. Read them.

* Confession: I enjoyed this book, even if there is a grammatical error in the first sentence. The very first sentence.

The Day Job Part 2: Let’s get SAUC-E!

If you know when I started my PhD you’ll be aware that it took quite some time for me to finish it. There are various reasons for this. One is that I spent quite a bit of time working and getting industrial experience during it. The other is that it took me something in the region of 18 months to figure out what it was I was actually going to do. This happens quite a bit at the Ocean Systems Laboratory: you don’t start out working on a particular project or problem, you just sort of find something which seems to need doing. It also didn’t help that it took me 9 months to get any feedback on the first draft of my thesis. One of the major things which took up my time, though, was the Student Autonomous Underwater Challenge – Europe, or SAUC-E for short.

This post is going to focus on my own part in the proceedings, but you should assume that everyone else on the teams worked at least as hard as I did, and their contribution was at least as important as my own. This is my blog, though, so I’ll mainly be talking about me.

Basically, teams of students build an underwater robot which then has to complete an obstacle course. Let me stress: it is not in any way like Robot Wars, so you can abandon that notion right now. The crucial word here is autonomous, as in you have no contact with the vehicle after you push go; it all has to run on autopilot. The first competition was just ramping up when I joined the OSL and I offered to go along to the site and help out. The fact that it was held at Pinewood Studios (which is a movie studio, not a furniture store) was no small bonus (I’ve wandered about on the sets for Casino Royale and Stardust), but the competition itself was also very cool indeed. I leapt at the chance to be part of the team for the second year of the competition, and then stared in disbelief as a technical issue nuked our chances right before the final. Up until that point we’d been leading the field by quite a margin, so finishing second to last was no fun at all.

The next year, one of my colleagues and I decided this wasn’t going to happen again, so we damn near killed ourselves working sixteen hour days for a couple of months, and then we took the robot to France for SAUC-E 2008. Each year there are a number of supplementary tasks which must be completed. One is to write a paper (or “report” if you’re not down with the academic lingo) about your entry, another is to produce a video diary. Our video diary for the 2008 competition does a pretty good job of showing our frustration at the previous year’s result and the amount of preparation we put into it this time round:

It was a hell of a lot of work, but we got there and we damn well kicked everyone’s asses. In the final we cleared the entire course on our first try. Everyone else used their entire twenty minutes of time. We used about seven. It was a good feeling. It isn’t a hugely fascinating video to watch, but here’s the official video of our final run:

What you don’t see in that video is me standing on a table with a microphone in my hand narrating what I think the robot is doing, then the entire team practically leaping into the air when it completed the course. Still, here’s a picture of us (minus one team member who had to leave a day early) from an article in an industry journal:

Team Nessie victorious at SAUC-E 2008. Click for the article this picture is taken from.

The next year we reworked a lot of the electronics and moved to a much more hydrodynamic external design. This was a good thing, because previously Nessie had been slow, and now the competition area was a lot bigger. The 2009 video diary sums it up:

Most of the tasks were adaptations of those from previous years, but a fairly intense new one was also introduced: dock inside an elongated box placed on the floor of the pool. None of the other teams even attempted this last task. I’d arrived a little late to the on-site practice time thanks to other commitments. Everyone had their jobs and things were running pretty well. My responsibility was the mission controller (the captain, if you want to use a ship’s crew as a metaphor) and we weren’t quite ready to start doing any serious tests with this yet. So I started working on a strategy to do the docking. There was plenty of grunt work for me to help out with; I did some code review and put together some mission plans, but at this point I was essentially surplus to requirements. So I messed about with the docking thing.

Nessie IV in the practice pool.

First of all, I built a detector which could find the box from above. I’m not really a computer vision guy, but after a lot of experimentation and playing I managed to put something quite stable together. There was no space in the practice pool to actually attempt the docking itself, though. I put a behaviour together which I was pretty sure would get the vehicle into the box, but I had no reliable way of testing it. I put it to one side and got on with the serious business of the competition itself. This was held at a different location, so needless to say every algorithm needed to be re-tuned to the new environment.

One of the other teams did very well in the qualifying stages and went into the final in second place by a very narrow margin. We knew what we could do, and we knew what they could do. They had a much better sonar, but no cameras, so some tasks were just plain out of their reach. Even so, it was going to be close. The day of the final we got a bit of extra practice time in the morning. At this point the docking had received about ten minutes worth of practice time, and its performance had not been what you would call “successful,” exactly. I was fairly sure the previous night’s late-night coding session had found all the bugs, though. “Fine,” said the team. “Give it a shot, but we don’t want to waste too much of the practice time on it.”

It worked first time.

“Did anyone see that?” There was a judge at the other side of the pool, but no one was looking at us. We wouldn’t get any points, but still, we wanted the judges to SEE it. Someone from one of the other teams saw some of it though, it seems, because they took this video:

Our supervisor came running over. “Was that autonomous?!” he demanded. Apparently one of the other judges was standing at the monitors and there was a camera inside the box. There was a little bit of a buzz.

We tried it again. It worked again. This time someone had actually put a tape in the VCR, which is nice, but I don’t have that video.

The organisers were smiling, but not in a 100% friendly way. No one was actually supposed to pull off the docking this time around. But we had. There was no time to add this behaviour into the plan for our main run in the final, but this year there was a new rule: you could demonstrate the tasks individually to pick up extra points. The final, as it turned out, was not quite as close as we were worried it might be, and Team Nessie picked up another first prize.

Team Nessie victorious at SAUC-E 2009. Click for the article this picture is taken from.

As well as the industry journal from which the above picture has been taken, the BBC paid some attention to the competition this time around, as well (which you can find here). Yep, that’s me at the end of the video. They interviewed me for the article they wrote on the competition and used a lot of my quotes. I learned an important lesson from this: assume anything you say to a journalist will be taken literally, and don’t assume that they’ll check their facts, even if you specifically tell them to because you’re not certain of your numbers. For example, the figure of £10,000 mentioned in the article is actually closer to £100,000.

A Short Note Regarding the Deafening Silence

For a while there it really looked as though I was on top of this whole blogging lark, didn’t it?

The problem is that writing this blog (and to an even greater extent, working on the Clockwork Aphid project) doesn’t feel like procrastinating. It feels like doing something. Not working exactly, but definitely making an active contribution.

As a result, if there’s something else I’m supposed to be doing with my time, I have a really hard time working on either without the guilt setting in (it’s happening right now). This doesn’t stop me from dicking around on the web, complaining about the latest change to Facebook and working my way through my mountain of articles which get dumped into my RSS reader on a daily basis, of course. They do feel like procrastinating, you see, so I waste plenty of time doing those.

This is one of those times when there’s something more important I need to work on.

Hopefully I’ll be able to talk more about it later. That doesn’t mean I’m not spending time thinking up new blog posts (I have at least three fairly big ones sitting in my head) or going over possible implementation strategies for Clockwork Aphid (I probably need to find a catchier name for that). At the moment I’m giving some thought to the management of the landscape data. As in:

  • How much do I show to the user?
  • How much do I keep in memory?
  • How much do I keep on the disk?
  • How much do I keep just on the server?
  • Also: why don’t I use more bulleted lists?

To help with this I’m doing a bit of research and reading a journal paper called “Terrain Simplification Simplified: A General Framework for View-Dependent Out-of-Core Visualization” by Peter Lindstrom and Valerio Pascucci. The previously mentioned Shamus Young used it in one of his previous projects and talked about it here. The links he gives are dead now, but if you’re interested you can find a copy of the paper by googling the title.

As for Shamus’ current project, he’s doing something with a landscape subdivided into hexagons. This makes me think it might be some sort of turn-based game, as hexagons have the nice side benefit of being equidistant from their neighbours (measuring the distance from centre to centre). Interesting… This makes me wonder if there’s a good method of doing fractal subdivision using a hexagonal (rather than square) topography…

You’re Speaking my Language, Baby. Part 4: Objective-C.

Author’s note: As this post started out HUGE, it’s been split into parts. You’ll find the introduction here, my comments on Java here, and my comments on C++ here.

The last language I’m considering is Objective-C. I know this language the least of the three. To make matters worse, while Java and C++ share a similar syntax, Objective-C is completely different in places. That being said, it’s semantically very similar to Java (more so than C++) and people who know it well speak very highly of it; i.e. it does not appear to be anywhere near as broken as C++. The language itself has some dynamic capability built in, but also has all of the additional dynamic options available to C++ (more on that later) and an excellent Ruby implementation which sits directly on top of the Objective-C runtime (MacRuby).

In general, Objective-C should be faster than Java, but not as fast as C++. It doesn’t use a virtual machine, but it does have a minimal run time which is used to implement the more dynamic message passing paradigm it uses in place of standard message calls between objects. It also has optional garbage collection, allowing you to make a choice between stability and performance when you need to (i.e. you can get the code working and worry about the memory allocation later). It’s also able to leverage all of the power of both the LLVM back end and the newer Clang front end, which C++ currently can’t.

While there aren’t a lot of directly relevant tools available for Objective-C itself, it is able to directly use any code or library written in either C or C++. No problems there, then.

It’s the last metric which is the kick in the teeth for Objective-C, though. In short: no one really uses it unless they’re programming for an Apple platform. As a result, unless you’re programming specifically for either OSX or iOS you’ll lose out on a lot of frameworks. Objective-C is a first class language in the GNU Compiler Collection (GCC), so it can be deployed easily enough under Linux (minus a lot of the good frameworks). This is not the case under Windows, however, where there don’t seem to be any good deployment options. I have no problem ignoring Windows, but directly precluding it would appear to be somewhat foolhardy when building a piece of technology related to computer games. It wouldn’t be too much of a problem if I was only doing this as an academic exercise, but I actually have delusions of people using it.

That’s the last of the languages I’m considering. Look for my conclusion (and possibly a bit of a twist) tomorrow (here).

You’re Speaking my Language, Baby. Part 3: C++.

Author’s note: As this post started out HUGE, it’s been split into parts. You’ll find the introduction here, and my comments on Java here.

The second language I’m considering is C++. This is the language that I use the most at my day job. It’s also the language that’s used to build the vast majority of computer games and one hell of a lot of commercial software. I’m not as familiar with it as I am with Java, but I know it well enough to be productive with it. I’m also familiar enough with it to know how horribly broken it is in many respects. One of the major design goals of Java (among other more modern programming languages) was to fix the problems with C++. It also has no dynamic capabilities whatsoever, but it’s possible to paper over this by using a minimal dynamic runtime such as Lua for scripting.

All things being equal, C++ is the fastest of the three languages. It is also the one you’re most likely to write bad code in, though, so there’s a bit of a trade off here.

As I mentioned, most games are programmed using C++. As a result, there is a veritable shit load of graphics engine options. I would probably tend towards using the open source Ogre3D rendering engine (or something similar), but it’s worth bearing in mind that I could easily switch to using, say, the Quake 3 engine (open sourced by id) if I wanted to. I could also port the project to using a commercial graphics engine if I had the desire to do such a thing.

The measure of applicability to other parties is definitely a point in favour of C++. Code written in C++ would be the easiest of the three for deployment as part of a larger project, as that project is most likely to be written in C++. In terms of acting as a developer showcase C++ has the edge as well, as it’s the language a lot of companies ask for code samples in.

Look for my comments on the last language I’m considering tomorrow (here).

You’re Speaking my Language, Baby. Part 2: Java.

Author’s note: As this post started out HUGE, it’s been split into parts. You’ll find the introduction here.

The first language I’m considering is Java. This is by far the language I’m most comfortable and proficient with. It was used for about 90% of my Bachelors degree, I wrote the entire codebase of my PhD using it, and it gets used here and there in my day job. I’m comfortable with Java, and find it to be quite a pleasant language to program in. Big tick on the question regarding my ability to use it, then. Java has some modest dynamic capabilities built in, but it also has a lot of small options for using higher level languages for the scripting, the cleanest of which is possibly Groovy.

Java has a bad reputation performance wise, but this largely isn’t true any more. It does run using a virtual machine, but is compiled to native code at run time. It’s a lot easier to write good code using Java than the other languages I’m considering, and that can help with performance a lot, but in general Java has the potential to be the slowest of the three, all things being equal.

Tools are actually not a problem. There are a lot of high quality graphics engines available for Java, with the Java Monkey Engine (JME) being my favourite. A physics add-on is available in the form of JMEPhysics, with the next version slated to have a physics engine baked in. Raw OpenGL is also an option with LWJGL, should I want it. Likewise, I suspect that the Red Dwarf Server is likely to meet my communication needs.

The applicability of Java to other interested parties is an interesting question. A lot of software gets written in Java. A LOT. But the vast majority of it is not games. Largely, I think this is because it’s perceived to be lacking in the performance department. It’s also a little harder to protect your code when you’re writing in Java, too. The previously mentioned JME has the support of a commercial games company, though, so clearly there is interest. Computers are getting faster at quite a rate, so performance has the potential to be less of a concern, especially if the project you’re working on has the whiff of a server side application about it. When it comes to server side code, I think Java is definitely winning the race. Frankly, I have a bit of trouble calling this one either way.

One language down, two to go. Look for the next post tomorrow (here), should you be interested in such things.

You’re Speaking my Language, Baby. Part 1: Introduction.

Author’s note: This post started out HUGE, so I’ve split it up. Look for the other parts over the next couple of days.

If you’re about to start on a programming project of some sort (and I am), then the first choice you have to make is the main programming language you’re going to use. Now, if you’re carrying out this project on your employer’s time they probably have very specific views about that. I’m not doing this project on company time, though, so the world is my oyster, figuratively speaking. There are, at a rough guess, shit loads of programming languages out there. There’s a reasonably good list to be found here, though it is missing a couple of the weird ones. While constructing the project using a language which uses LOL cat type speech for syntax, or takes its input in the form of abstract art, would be an excellent mental challenge, I’m sure, that’s just not what I’m looking for.

I’m also, right off the bat, going to eliminate a couple of other classes of language. First of all: no functional programming languages. I have no patience for learning a new programming paradigm, especially one which up until now has shown limited application outside academia. No Haskell, no CAML and absolutely no Prolog.

I’m also not considering high level dynamically typed languages, so no Python and no Ruby. For that matter, no Groovy or Lua either. You can program very quickly in them, but I’m not prepared to take the performance hit which comes with them. Python might be very popular, but I think it actively encourages bad programming practice and I want no part of that. It’s an excellent hobbyist language, but that isn’t what I’m looking for.

Lastly: I’m not looking at anything based on Microsoft’s .Net platform, and that includes Mono.

The questions I’m going to be asking of the languages I am considering are the following:

  • How well can I use it?
  • Generally speaking, how good is the performance?
  • What tools are available? Specifically, does it have access to the libraries I’ll need to build the project? These are mostly ones relating to 3D graphics, inter-computer (client-server) communication and (possibly) physics. There are probably a couple of other things I haven’t thought of yet.
  • How relevant is it to others? That is, if I write the project in this language will it be useful to other interested parties?

I considered three languages and I’ll tackle them one at a time in future posts, starting here.