Simplifying the Landscape

At the end of the last post about the actual implementation of my Clockwork Aphid project, I said the next step was going to be display simplification. At that point I’d generated a few landscapes which were just barely starting to test the limits of my computer, though they were nothing like the size or complexity I had in mind. That said, I was looking at landscapes containing 1,579,008 polygons, and it was obvious that not all of these needed to be put on screen. Moreover, because my landscapes are essentially made up of discrete samples (or nodes), I needed to reduce the number of samples displayed to the user, otherwise my performance was really going to tank as the landscapes increased in size.

Shamus Young talked about terrain simplification some time ago during his original terrain project. This seemed as good a place as any to start, so I grabbed a copy of the paper he used to build his algorithm. I didn’t find it as complicated as he appears to have done, but this is probably because I’m more used to reading papers like this (I must have read hundreds during my PhD, and even wrote a couple), so I’m reasonably fluent in academicese. It was, as I suspected, a good starting point, though I won’t be able to use the algorithm wholesale, as it’s not directly compatible with the representation I’m using. Happily, my representation does make it very simple to use the core idea.

If you remember, my representation stores the individual points in a sparse array, indexed using fractional coordinates. This makes it very flexible, and allows me to use an irregular level of detail (more on that later). Unlike the representation used in the paper, this means I can’t make optimisations based on the assumption that my data is stored in a regular grid. Thankfully, the first stage of the simplification algorithm doesn’t care about this, as it examines points individually. Also thankfully, the simplification algorithm uses the same parent/child based tessellation strategy that I do.
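As a concrete (if simplified) illustration, the core of that representation might look something like the Java sketch below. To be clear, the names here are invented for this post rather than lifted from my actual code, and I’m glossing over how points link to their parents and children:

```java
import java.util.HashMap;
import java.util.Map;

/** A sketch of a sparse landscape: sample points indexed by fractional coordinates. */
public class SparseLandscape {

    /** An immutable fractional coordinate pair, usable as a map key. */
    public record Coordinate(double x, double y) {}

    /** A single sample point; elevation is all we need for this sketch. */
    public record Point(double elevation) {}

    // Sparse storage: only the points which actually exist take up space,
    // which is what allows the level of detail to vary from place to place.
    private final Map<Coordinate, Point> points = new HashMap<>();

    public void put(double x, double y, Point point) {
        points.put(new Coordinate(x, y), point);
    }

    /** Returns null if no sample exists at the given coordinates. */
    public Point get(double x, double y) {
        return points.get(new Coordinate(x, y));
    }
}
```

Using doubles as map keys would normally be asking for trouble, but the fractional coordinates produced by repeated subdivision are exact binary fractions (halves, quarters, eighths and so on), which doubles can represent without rounding.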

The first step is to decide which points are “active”. This is essentially based on two variables:

  • The amount of “object space error” a point has (i.e. how much it differs from its parents);
  • The distance between the point and the “camera”.

A local constant is also present for each point:

  • The point’s bounding radius, or the distance to its furthest child (if it has children).

I’m not sure if I actually need this last one in my current implementation (my gut says no; I’ll explain why later), but I’m leaving it in for the time being. Finally, two global constants are used for tuning, and we end up with this:

$$\frac{\lambda \, \varepsilon_i}{d_i - r_i} > \tau$$

Where:

  • $i$ = the point in question
  • $\lambda$ = a constant
  • $\varepsilon_i$ = the object space error of $i$
  • $d_i$ = the distance between $i$ and the camera
  • $r_i$ = the bounding radius of $i$
  • $\tau$ = another constant

This is not entirely optimal for processing, but a little bit of maths wizardry (multiply both sides by $d_i - r_i$, move the $\tau \, r_i$ term across, then square both sides) transforms it like so:

$$(\lambda \, \varepsilon_i + \tau \, r_i)^2 > \tau^2 \, d_i^2$$

This looks more complicated, and it’s harder to see intuitively what it actually does, but from the point of view of the computer it’s a lot simpler, as it avoids the square root needed to calculate the distance between the point and the camera.
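To make that concrete, here’s a minimal sketch of the test in Java. The class and parameter names are my own inventions for illustration, with lambda and tau being the two tuning constants; the thing to notice is that the comparison never calls Math.sqrt:

```java
/** A sketch of the square-root-free version of the activity test. */
public final class ActivityTest {

    private final double lambda; // first tuning constant (λ)
    private final double tau;    // second tuning constant (τ)

    public ActivityTest(double lambda, double tau) {
        this.lambda = lambda;
        this.tau = tau;
    }

    /**
     * Decides whether a point should be active.
     *
     * @param error          the point's object space error (ε)
     * @param boundingRadius the distance to the point's furthest child (r)
     * @param point          the point's position, as {x, y, z}
     * @param camera         the camera's position, as {x, y, z}
     */
    public boolean isActive(double error, double boundingRadius,
                            double[] point, double[] camera) {
        // Squared distance between the point and the camera: no Math.sqrt needed.
        double dx = point[0] - camera[0];
        double dy = point[1] - camera[1];
        double dz = point[2] - camera[2];
        double distanceSquared = dx * dx + dy * dy + dz * dz;

        // (λε + τr)² > τ²d², which is equivalent to λε / (d − r) > τ.
        double lhs = lambda * error + tau * boundingRadius;
        return lhs * lhs > tau * tau * distanceSquared;
    }
}
```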

Now we get to the fun part: diagrams! Consider the following small landscape, coloured according to the granularity of each of the points (i.e. the distance to a node’s parents; see this post):

[Figure: AllPoints — every point in the landscape, coloured by granularity]

Next, we’ll pick some arbitrary values for the constants mentioned above (ones which work well for explanatory purposes) and place the viewpoint in the top left-hand corner, and we end up with the following active points (inactive points are hidden):

[Figure: ActivePoints — the points left active by the test, with the viewpoint in the top left-hand corner]

Now, we take the active points with the smallest granularity, and we have them draw their polygons, exactly as before, which looks like this:

[Figure: SmallestPolygons — the polygons drawn by the active points with the smallest granularity]

When we come to draw the polygons of the next highest granularity, though, you’ll see that we have a problem: the previous set of polygons have encroached on their territory. To avoid this, each node informs its parents that it is active, and a parent then doesn’t draw any polygons in the direction of its active children. If we add in the polygons drawn by each of the other levels of granularity, we now end up with this:

[Figure: FilledPolygons — polygons from every level of granularity, leaving a hole in the landscape]

Oh no! There’s a hole in my landscape! I was actually expecting that my simplistic approach would lead to more or less this result, but it was still a little annoying when it happened. If I were a proper analytical type I would next have sat down and worked through the geometry at play here, then attempted to find a formulation which would prevent this from happening. Instead, though, I stared at it for a good long while, displaying it in various different ways, and waited for something to jump out at me.

Eventually it did, and thankfully it was a very simple rule. Each parent stores a list of the directions in which it has active children, in order to prevent overdrawing (as mentioned above). The new rule is that a node is also considered active if this list is non-empty. With this addition, our tessellated landscape now looks like this:

[Figure: BackfIlledPolygons — the same landscape with all of the gaps filled in]

Presto! A nice simple rule which fills in all of the gaps in the landscape without any over- or under-simplification, or any overdrawing. I suspect this rule also negates the need for the bounding radius mentioned above, though I have not as yet tested that thought. To recap, we have three simple rules, sketched in code after the list:

  1. A node is active if the object space error/distance equation says it is;
  2. A node is active if any of its children are active;
  3. Polygons are tessellated for each active point, but not in the direction of any active children.
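Here’s roughly how those three rules might hang together in Java. As before, this is an illustrative sketch rather than my actual implementation: the names, the four-way Direction enum and the tessellation stub are all assumptions:

```java
import java.util.EnumSet;
import java.util.Set;

/** A sketch of the three activity rules; names and structure are illustrative. */
public class LandscapeNode {

    /** The direction in which a child lies, relative to its parent. */
    public enum Direction { NORTH_EAST, NORTH_WEST, SOUTH_EAST, SOUTH_WEST }

    private final LandscapeNode parent;
    private final Direction directionFromParent;

    // Rule 3's bookkeeping: the directions in which this node has active children.
    private final Set<Direction> activeChildDirections =
            EnumSet.noneOf(Direction.class);

    public LandscapeNode(LandscapeNode parent, Direction directionFromParent) {
        this.parent = parent;
        this.directionFromParent = directionFromParent;
    }

    /** Rules 1 and 2: active if the error test says so, or if any child is active. */
    public boolean isActive(boolean passesErrorTest) {
        return passesErrorTest || !activeChildDirections.isEmpty();
    }

    /** Called when this node becomes active, so its parent can honour rules 2 and 3. */
    public void notifyParent() {
        if (parent != null) {
            parent.childActivated(directionFromParent);
        }
    }

    private void childActivated(Direction direction) {
        boolean wasAlreadyActive = !activeChildDirections.isEmpty();
        activeChildDirections.add(direction);
        if (!wasAlreadyActive) {
            // Rule 2 has just made this node active too, so the news propagates
            // up the hierarchy. (A fuller version would also check rule 1 here.)
            notifyParent();
        }
    }

    /** Rule 3: draw this node's polygons, skipping the sectors of active children. */
    public void tessellate() {
        for (Direction direction : Direction.values()) {
            if (!activeChildDirections.contains(direction)) {
                // ... generate the triangles for this sector of the node ...
            }
        }
    }
}
```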

But what does this look like in actual, eye-poppingly 3D landscapes? Well, here’s an example, using the height-based colouring I’ve used before:

[Figure: SimplifiedLandscape — a simplified landscape rendered in 3D, with height-based colouring]

I’m quite pleased with this, though what I’m doing here is still quite inefficient and in need of some serious tuning. There are a couple of further simplification tricks I can try (including the next step from the (paper) paper). More to come later. Honest.


Dogfood, Nom Nom Nom

Dog food, the common noun, is reasonably self-explanatory (pro tip: it’s food for dogs). Dogfood, the verb, or dogfooding, the verbal noun, though, might require a little bit of explanation.

At the root of it is this: if you make dog food, you should feed it to your own dogs. There are essentially two reasons for this:

  1. If you don’t personally test it, how will you know if it’s any good?
  2. If your dogs don’t eat it, why the hell should anyone else’s?

The same principle applies to software. Even more so, in fact, as it’s something you’re more able to test directly. As a simple example: at Google, we use Google Docs for most documentation purposes (design docs, presentations, etc.). I’m willing to bet that folks at Apple use iWork for much the same purpose. I’m absolutely certain that Microsoft employees use Office, excepting those times when it’s necessary to write a document in the blood of a green-eyed virgin upon the pressed skin of an albino goat.

This process is called dogfooding. You use the software internally before it’s released to users, ensuring that it gets a lot more test usage. As an added bonus, developers who actually use the software they create are more likely to create software that’s actually worth using. That’s not always the case, of course, since most developers don’t really fall into the “average users” category. Case in point: upgrading my computer to Lion broke my installation of Steam. I fixed it with a quick command line incantation, then performed a couple more in order to make The Stanley Parable functional under OSX. Most computer users would not be comfortable doing something like this, nor should they have to.

As well as using your company’s products at work, it’s generally a good idea to use them at home. It’s always good to have a feel for what your company actually does and to get more experience with it. I’ve used Google search more or less exclusively for at least ten years. That’s not really a hard choice: it gives the best results. Likewise, I started using Google Chrome as my main web browser pretty much as soon as it was available for the platforms I used (in my last job that was actually Windows, OSX and Linux). I use an iPhone in preference to Android, however, though I do have an upgrade due towards the end of the year and it’s not completely inconceivable that I might switch. For the time being at least, I’m definitely sticking with WordPress, so I won’t get to play with Blogger, Google Analytics or AdSense, either.

As well as dogfooding Google products at work, we also dogfood technologies and platforms. This sounds fairly obvious, but it’s not always the case with companies who create platform technology. Microsoft, for instance, used to be famous for not using the technologies they provided to developers internally, though they are getting better now. Some of Google’s technologies are open source, and thus available for everyone to use. Guice and Protocol Buffers are pretty good examples of this. Guice is amazing, by the way. This being the case, there’s nothing to stop me using them on personal projects, should that be appropriate. Personal projects such as Clockwork Aphid, for example.

I’ll talk about which particular Google technologies I think might be useful in later blog posts, but since I brought it up, I suppose I should probably say something about Clockwork Aphid. I’ve blown the dust off the code, tweaked a couple of things and got a feature I’d left half-finished back on track. I tried switching my current implementation from jMonkeyEngine version 2 to version 3, only to find that while it does appear a lot faster and improved in several other ways, the documentation is pretty crappy, and it’s less… functional.

I’ll talk about that more later, but for now just know that things are once again happening.

The Elephant in the Room

Since I haven’t been able to do any actual work on my Clockwork Aphid project of late, I suppose I may as well talk about the background behind it a little more. Those who talk about it the most are the ones doing it the least, and all that. I’ve spoken a little about virtual worlds before and focussed almost entirely on World of Warcraft, because it’s the big one. It’s not the only MMORPG, and it definitely wasn’t the first, but it is the one that I have the most experience with, and statistically the one most other people are likely to have experience with, as well.

There are several other virtual worlds I really should talk about, but the elephant in the room is another extremely large, and very notable, virtual world. One which has double relevance, because I’ve made an oblique reference to it already in another post.

This is a virtual world whose currency has an exchange rate with the real world, and the sale of virtual goods within it has turned people into real-life millionaires. There are architectural practices whose entire portfolios exist “in world.” Sweden, among several other countries, has an embassy in this virtual world, and presumably pays staff to work there. Several musicians have given live concerts there (don’t ask me how that works). This virtual world is not itself a game (as you may have gathered), but it has the infrastructure which has allowed people to build games inside it. Despite all this, though, it has a reputation of, for want of a better word, lameness.

This is, in and of itself, slightly frustrating, because I can’t help feeling that it could be awesome. It should be awesome. It bears more than a passing resemblance to the “Metaverse” from Neal Stephenson’s fantastic Snow Crash, you see. I presume you’ve read Snow Crash? No? Well go and read it. Now. It’s okay, I’ll wait until you’ve finished.

Done? Okay, good. Those are some big ideas, right? Yes, I thought she was a little young, too. Anyway. In case you just went right ahead and skipped over my suggestion there, the Metaverse can be summarised thus:

The Metaverse is our collective online shared space, created by the convergence of virtually enhanced physical reality and physically persistent virtual space, including the sum of all virtual worlds, augmented reality, and the internet.

I’m talking, of course, about Second Life. If you’re not familiar with it, it looks a bit like this:

[Figure: a screenshot of Second Life]

One thing you might notice right away is that the graphics have a bit of a low-fi look about them, and there’s a reasonably good reason for this*. In our old friend World of Warcraft the graphics aren’t exactly stellar either, but they’re much sharper than this. In WoW, by and large, the landscape doesn’t really change, unless (topically) a large new expansion is released, which results in sweeping changes to the world. When this does happen, the game forces you to download the changes before it lets you start playing. This might be a lot of data (in the order of gigabytes), but it doesn’t happen often. As previously noted, the World of Warcraft is essentially static. Not so Second Life, whose landscape is built by its users. Just because a location contained an island with the Empire State Building rising out of it yesterday doesn’t mean that you won’t find a scale replica of the starship Enterprise there tomorrow. The content of the world is therefore streamed to the user as they “play,” and the polygon counts need to be kept reasonably low so that this can happen in a timely fashion. Even so, you might teleport to a new location, only to find that the walls appear ten seconds after the floor, and then suddenly you’re standing in the middle of a sofa which wasn’t there a second ago.

The issue with Second Life, for me at least, is that it’s not as immersive as I want it to be. I don’t feel as though I’m connected to it. I feel restricted by it. There’s something cold and dead about it, much like the eyes of the characters in The Polar Express. Something is missing, and I can’t quite put my finger on what it is. Also, sometimes the walls appear ten seconds after the floor. That said, it is a fully formed virtual world with a large population and a proven record of acting as a canvas for people’s ideas. Given that the point of Clockwork Aphid is to tell stories in a virtual world (I mentioned that, right?), why not tell those stories in Second Life?

This is an idea I’m still exploring, and I keep going backwards and forwards about it, because I’m still not sure if the juice is worth the squeeze. I’d get an awful lot of ready-built scope and a huge canvas to play with, but I’m not sure if it’s the right type of canvas. This is a canvas which comes with no small number of restrictions, and I would basically be hitching my wagon to a horse entirely outside of my control. Mixed metaphors could be the least of my worries. That said, did I mention that people have become millionaires trading inside Second Life? Then again, Second Life doesn’t exactly represent a living, breathing virtual world so much as the occasionally grotesque spawn of its users’ collective unconscious. Sometimes it’s not pretty; other times quite impressive results emerge.

Your thoughts are, as always, both welcome and encouraged, below.

* To be fair, the graphics in Second Life are actually a lot better than they used to be.