Simplifying the Landscape

At the end of the last post I wrote about the actual implementation of my Clockwork Aphid project, I said the next step was going to be display simplification. At that point I’d generated a few landscapes which were just barely starting to test the limits of my computer, though they were nothing like the size or complexity I had in mind. That said, I was looking at landscapes containing 1,579,008 polygons, and it was obvious that not all of these needed to be put on screen. Moreover, because my landscapes are essentially made up of discrete samples (or nodes), I needed to reduce the number of samples displayed to the user, otherwise my performance was really going to tank as the landscapes increased in size.

Shamus Young talked about terrain simplification some time ago during his original terrain project. This seemed as good a place as any to start, so I grabbed a copy of the paper he used to build his algorithm. I didn’t find it as complicated as he appears to have done, but this is probably because I’m more used to reading papers like this (I must have read hundreds during my PhD, and even wrote a couple), so I’m reasonably fluent in academicese. It was, as I suspected, a good starting point, though I wouldn’t be able to use the algorithm wholesale, as it’s not directly compatible with the representation I’m using. Happily, my representation does make it very simple to use the core idea.

If you remember, my representation stores the individual points in a sparse array, indexed using fractional coordinates. This makes it very flexible, and allows me to use an irregular level of detail (more on that later). Unlike the representation used in the paper, though, it means I can’t make optimisations based on the assumption that my data is stored in a regular grid. Thankfully, the first stage of the simplification algorithm doesn’t care about this, as it examines points individually. Also thankfully, the simplification algorithm uses the same parent/child based tessellation strategy that I do.

The first step is to decide which points are “active”. This is essentially based on two variables:

  • The amount of “object space error” a point has (i.e. how much it differs from its parents);
  • The distance between the point and the “camera”.

A local constant is also present for each point:

  • The point’s bounding radius, or the distance to its furthest child (if it has children);

I’m not sure if I actually need this last value in my current implementation (my gut says no; I’ll explain why later), but I’m leaving it in for the time being. Finally, two global constants are used for tuning, and we end up with this:

    λ·εi / (di − ri) > τ

Where:

  • i = the point in question
  • λ = a constant
  • εi = the object space error of i
  • di = the distance between i and the camera
  • ri = the bounding radius of i
  • τ = another constant

This is not entirely optimal for processing, but a little bit of maths wizardry transforms this like so:

    (λ·εi + τ·ri)² > τ²·di²

This looks more complicated, and it’s less intuitive to see what it actually does, but from the point of view of the computer it’s a lot simpler, as it avoids the square root needed to calculate the distance between the point and the camera (there’s a sketch of this test in code after the next diagram). Now we get to the fun part: diagrams! Consider the following small landscape, coloured according to the granularity of each of the points (aka the distance to the node’s parents, see this post):

[Image: all of the points in the landscape, coloured by granularity.]
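
In code, the squared form of the test works out as something like the sketch below. This is only a sketch: LandscapeNode and Camera are stand-ins for whatever classes your implementation actually has, and the two constants are illustrative values rather than ones I can recommend.

    // Illustrative tuning constants; the real values need experimentation.
    static final float LAMBDA = 512f;
    static final float TAU    = 2f;

    // The squared form of the activation test. Note that it only ever
    // needs the squared distance, so no square root is computed.
    static boolean isActive(LandscapeNode node, Camera camera) {
        float dx = node.x() - camera.x();
        float dy = node.y() - camera.y();
        float dz = node.z() - camera.z();
        float distanceSquared = dx * dx + dy * dy + dz * dz;

        float lhs = LAMBDA * node.objectSpaceError() + TAU * node.boundingRadius();
        return lhs * lhs > TAU * TAU * distanceSquared;
    }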

Next, we’ll pick some arbitrary values for the constants mentioned above (ones which work well for explanatory purposes) and place the viewpoint in the top left hand corner, and we end up with the following active points (inactive points are hidden):

[Image: the active points, seen from a viewpoint in the top left corner.]

Now, we take the active points with the smallest granularity, and we have them draw their polygons, exactly as before, which looks like this:

[Image: the polygons drawn by the active points with the smallest granularity.]

When we come to draw the polygons of the next highest granularity, though, you’ll see that we have a problem: the previous set of polygons has encroached on their territory. To avoid this, each node informs its parents that it is active, and a parent then doesn’t draw any polygons in the direction of its active children. If we add in the polygons drawn by each of the other levels of granularity, we now end up with this:

[Image: the polygons from every level of granularity filled in.]

Oh no! There’s a hole in my landscape! I was actually expecting that my simplistic approach would lead to more or less this result, but it was still a little annoying when it happened. If I was a proper analytical type I would next have sat down and worked over the geometry at play here, then attempted to find a formulation which would prevent this from happening. Instead, though, I stared at it for a good long while, displaying it in various different ways, and waited for something to jump out at me.

Eventually it did, and thankfully it was a very simple rule. Each parent stores a list of the directions in which it has active children, in order to prevent overdrawing (as mentioned above). The new rule is that a node is also considered active if this list is non-empty. With this addition, our tessellated landscape now looks like this:

[Image: the tessellated landscape with the gaps filled in.]

Presto! A nice simple rule which fills in all of the gaps in the landscape, without any over or under simplification, or any overdrawing. I suspect this rule also negates the need for the bounding radius mentioned above, though I have not as yet tested that thought. To recap, we have three simple rules (sketched in code after the list):

  1. A node is active if the object space error/distance equation says it is;
  2. A node is active if any of its children are active;
  3. Polygons are tessellated for each active point, but not in the direction of any active children.
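
Put together, the bookkeeping might look roughly like this. Again, LandscapeNode, Direction and Camera are hypothetical stand-ins, isActive is the error/distance test from earlier, and drawPolygons is whatever actually emits geometry. It assumes nodes are visited finest granularity first, so parents hear from their children before they themselves are tested.

    // Pass 1: decide which nodes are active, finest granularity first.
    void updateActive(LandscapeNode node, Camera camera) {
        boolean active = !node.activeChildDirections().isEmpty()   // rule 2
                      || isActive(node, camera);                   // rule 1
        node.setActive(active);
        if (active) {
            for (LandscapeNode parent : node.parents()) {
                // Tell each parent not to draw towards this node.
                parent.activeChildDirections().add(parent.directionTo(node));
            }
        }
    }

    // Pass 2: each active node tessellates its polygons, skipping the
    // directions claimed by its active children (rule 3).
    void tessellate(LandscapeNode node) {
        if (node.isActive()) {
            for (Direction direction : Direction.values()) {
                if (!node.activeChildDirections().contains(direction)) {
                    drawPolygons(node, direction);
                }
            }
        }
    }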

But what does this look like in actual eye-poppingly 3D landscapes? Well, here’s an example, using the height based colouring I’ve used before:

[Image: the simplified landscape, rendered in 3D with height based colouring.]

I’m quite pleased with this, though what I’m doing here is still quite inefficient and in need of some serious tuning. There are a couple of further simplification tricks I can try (including the next step from the paper). More to come later. Honest.

Fractal Errata

Some of the particularly sharp/anal ones amongst you might have noticed that while the technique for generating fractal landscapes I previously described works (and works well), it’s not 100% correct. Specifically, the fact that it uses the same scaling factor for nodes created by the diamond and square steps isn’t quite right.

Why is this? Because they generate nodes which adhere to different levels of detail, that’s why. Let’s go back to the last diagram from the post which described the algorithm:

Diamond step (left) and square step (right).

While both steps add nodes that can be addressed using fractions with two as their denominator, the distance from the nodes created by the diamond step to their parents is greater than it is for those created by the square step.

The nodes created by the square step are orthogonal to their parents, so the distance between them is proportional to one half, which, as luck would have it, has the same denominator as the fractions used to address the node. How convenient!

The nodes created by the diamond step, on the other hand, are diagonal to their parents. This means that the distance to their parents follows from Pythagoras, so in this specific case:

sqrt(½*½ + ½*½) = sqrt(¼ + ¼) = sqrt(½) ≈ 0.707

Once again, the key fraction used to work this out has the same denominator as those used to address the node in the landscape. Thus, if d is equal to the denominator we’re using to address a node, the basic scaling factor used to offset a new node from its parents would be the following:

    // d = the denominator of the fractions used to address the new node
    double limit = diamondStep ? Math.sqrt(1.0/d * 1.0/d * 2)   // diagonal parents
                               : 1.0 / d;                       // orthogonal parents
    // the new node is offset from its parents by a random value in [-limit, limit]
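
To put numbers on that: the first diamond step creates nodes addressed with halves (d = 2), so its offsets are drawn from roughly [-0.707, 0.707], while the square step at the same level of detail draws from [-0.5, 0.5].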

As I said before, this won’t make a lot of difference, but it will be more correct and that’s important to some people. Myself included.

For comparison purposes this is the effect this change has on the example landscape I’ve been using:

The original scaling method.
The updated scaling method.

There’s some difference visible, but not a huge amount. Mostly, it’s just increased the range the data are occupying and expanded the bell curve accordingly. Hence, more high points and more low points, but the land is the same basic shape.

Now In Eye Popping 3D!

It took a little bit of fighting with bugs that weren’t showing up in the 2D view, and a bit of time to figure out what was going on with the lighting system in JME, but I finally got the 3D display of the fractal landscapes working.

The first stage was just displaying each node as a discrete point so I could see that each was in about the right place. It looks a little bit like this:

Fractal landscape as points (click for bigger).

I did this by simply piping the spatial coordinates and colour information of each node into a pair of standard Java FloatBuffers, passing these to a JME Point class (which should really be called PointSet, in my opinion) and attaching this to the root display node of my JME application. The colouring scheme is the same as the one used for the 2D display. Some things don’t look quite right, largely due to the fact that I’ve just drawn the “underwater” points as blue, rather than adding any actual “water.” Don’t fret, it’s on the todo list.
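
For the curious, the plumbing looks roughly like the following. Treat it as a sketch: LandscapeNode and its accessors are stand-ins for my actual classes, and I’m assuming the jME 2 style Point constructor (name, vertices, normals, colours, texture coordinates), with nulls for the buffers that aren’t needed.

    import java.nio.FloatBuffer;
    import com.jme.scene.Point;
    import com.jme.util.geom.BufferUtils;

    // Three position floats and four colour floats per node.
    FloatBuffer vertices = BufferUtils.createFloatBuffer(nodes.size() * 3);
    FloatBuffer colours  = BufferUtils.createFloatBuffer(nodes.size() * 4);

    for (LandscapeNode node : nodes) {
        vertices.put(node.east()).put(node.altitude()).put(-node.north());
        colours.put(node.red()).put(node.green()).put(node.blue()).put(1f); // opaque
    }

    // jME's Point is really a point *set*: one dot per vertex.
    Point points = new Point("landscape", vertices, null, colours, null);
    rootNode.attachChild(points);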

That said, the landscape looks about right. All the points seem to be in their correct location. As a quick implementation note, I’m defining the (x, y, z) coordinates of the scene in the following way:

x = east

y = altitude

z = -north

With some scaling factors used to map the values from the [0,1] range used to address them to slightly more real world like dimensions.
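
As a sketch, with made-up scaling constants standing in for the ones I’m actually using:

    import com.jme.math.Vector3f;

    // Hypothetical scaling factors: world units per [0,1] of landscape.
    static final float HORIZONTAL_SCALE = 1024f;
    static final float VERTICAL_SCALE   = 256f;

    static Vector3f toScene(float east, float north, float altitude) {
        return new Vector3f(east * HORIZONTAL_SCALE,      // x = east
                            altitude * VERTICAL_SCALE,    // y = altitude
                            -north * HORIZONTAL_SCALE);   // z = -north
    }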

The next stage was to display the landscape in wireframe, to make sure the connections I’ll be using to create a more solid looking polygon based display are all working correctly. Why not just go directly to polygons? You can see the detail better in the wireframe display, making debugging much easier. I’ll definitely be using it again later.

This time, instead of piping each and every node into the vertex array, only the nodes at the highest level of detail are used. These are the nodes generated during the final “square stage” of the fractal algorithm, for those of you playing at home. Each draws a triangle (consisting of three separate lines) into the vertex buffer for each pair of parents it has in the landscape. The result looks something like this:

Fractal landscape as lines (click for bigger).
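
The wireframe is built in much the same way as the point set. Roughly, and with the same caveats as before (ParentPair, finestNodes and putLine are all stand-ins, and I’m assuming the jME 2 style Line constructor, which follows the same pattern as Point):

    import java.nio.FloatBuffer;
    import com.jme.scene.Line;
    import com.jme.util.geom.BufferUtils;

    // Three lines per triangle, two endpoints per line, three floats each.
    FloatBuffer lineVerts = BufferUtils.createFloatBuffer(triangleCount * 3 * 2 * 3);

    for (LandscapeNode node : finestNodes) {
        for (ParentPair parents : node.parentPairs()) {
            putLine(lineVerts, node, parents.first());
            putLine(lineVerts, parents.first(), parents.second());
            putLine(lineVerts, parents.second(), node);
        }
    }

    Line wireframe = new Line("wireframe", lineVerts, null, null, null);
    rootNode.attachChild(wireframe);

Where putLine just appends the two endpoints of a single segment:

    static void putLine(FloatBuffer buffer, LandscapeNode a, LandscapeNode b) {
        buffer.put(a.x()).put(a.y()).put(a.z());
        buffer.put(b.x()).put(b.y()).put(b.z());
    }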

Everything seems to be in good order there, I think. One or two things don’t look quite right, particularly the beaches, but the tessellation and coverage of the polygons looks right. Here’s a closer look at some of the polygons so you can see what the tessellation scheme actually produces:

Polygon tessellation (click for bigger).

You can (hopefully) see that each of the “active” nodes sits at the centre of a diamond formed from the shape of its parents, so it’s the points with four lines branching from them (rather than eight) which are actually being used to draw the scene.

Next: polygons. Again, only the nodes at the highest level of detail are used. This time, each inserts itself into the vertex buffer, then adds its parents if they’re not in there already. Each node remembers its position in the vertex buffer, and these indices are then used to draw the actual polygons. These are declared by passing the indices in sets of three into a standard Java IntBuffer. The buffers are then passed to one of JME’s TriMesh geometry classes and displayed, like this:

Fractal landscape as polygons (click for bigger).
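
The index buffer construction, sketched under the same assumptions as before (addVertex is a hypothetical helper, and I’m assuming the jME 2 six-argument TriMesh constructor):

    import java.nio.FloatBuffer;
    import java.nio.IntBuffer;
    import com.jme.scene.TriMesh;
    import com.jme.util.geom.BufferUtils;

    FloatBuffer vertices = BufferUtils.createFloatBuffer(maxVertices * 3);
    FloatBuffer colours  = BufferUtils.createFloatBuffer(maxVertices * 4);
    IntBuffer   indices  = BufferUtils.createIntBuffer(triangleCount * 3);

    for (LandscapeNode node : finestNodes) {
        // addVertex appends the node if it isn't already in the buffer;
        // either way it returns the index the node is stored at.
        int centre = addVertex(vertices, colours, node);
        for (ParentPair parents : node.parentPairs()) {
            int a = addVertex(vertices, colours, parents.first());
            int b = addVertex(vertices, colours, parents.second());
            indices.put(centre).put(a).put(b);   // one triangle, three indices
        }
    }

    TriMesh mesh = new TriMesh("landscape", vertices, null, colours, null, indices);
    rootNode.attachChild(mesh);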

Again, the beaches don’t look quite right, but otherwise I’m reasonably pleased. I still need to add the actual water and improve the form of the landscape itself (and about a million other things), but in terms of display this is looking pretty good. Except for one thing: I’m using far more detail than I need to. Let me illustrate this with a slightly more extreme example. The pictures I’ve posted so far were generated using seven iterations of the diamond square algorithm. Here’s what happens when I use ten iterations (remember, the number of points increases exponentially):

MOAR POLYGONS! (click for bigger)

On the bright side the beaches look better, but that’s a lot of polygons. Far more than we actually need to display. 1,579,008 polygons, in fact. We need to reduce that somewhat if we’re going to make things more complicated and maintain a reasonable frame rate (I’m getting about 60fps with this view at the moment). You can see the problem more clearly if I show you the same view using lines rather than polygons:

MOAR LINES! (click for bigger)

You can just about see the individual triangles up close, but further away the lines just mush together. I think we can afford to reduce the level of detail as we get further away, don’t you?

Well, I’ll get right on that, then…

Some Random Landscapes

I don’t have any 3D views of the fractal landscapes I’ve been making to show you yet, as I’m still working through the different implementation options. I did get a little distracted with the 2D views of the landscape this morning, though, and played with the colouring scheme.

First of all, let’s start again with the example landscape used in yesterday’s post, only with slightly more sober colours and a bar on the right showing how the height values map to actual colours:

Fractal coastlines.

Now that looks reasonably neat already, in a “my first atlas” kind of way, but clearly there’s a lot of information missing. We can see this if I plot the height values in monochrome, giving the landscape a more “plasma cloud” appearance:

Plasma cloud landscape.

Now we can see the extra information, but we’ve lost a lot of the sense that what we’re looking at is a landscape. It looks more like a cloud. We can get some of that back by combining the two approaches and using different shades of blue and green:

Shaded landscape.

Now this looks a lot better. I think the water looks quite reasonable using this scheme, but the landscape is a bit… homogeneous. Is every part of the land covered in grass? How boring!

We can make things a bit more interesting by adding a thin band of “sand” around the coast, and some “mountainy” colours (and snow!) higher up, like so:

More in depth colouring scheme.
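
Under the hood this is just a height-to-colour lookup. Something along these lines, with the band thresholds invented for illustration rather than taken from my actual code:

    import java.awt.Color;

    // Map a height in [0,1] to a terrain colour. Thresholds are guesses.
    static Color colourFor(float height) {
        if (height < 0.50f) return shade(Color.BLUE, height / 0.50f);            // water
        if (height < 0.52f) return new Color(0.9f, 0.85f, 0.6f);                 // sand
        if (height < 0.75f) return shade(Color.GREEN, (height - 0.52f) / 0.23f); // grass
        if (height < 0.90f) return shade(Color.GRAY, (height - 0.75f) / 0.15f);  // rock
        return Color.WHITE;                                                      // snow
    }

    // Scale a base colour towards full brightness as t goes from 0 to 1.
    static Color shade(Color base, float t) {
        float f = 0.5f + 0.5f * t;
        return new Color((int) (base.getRed() * f),
                         (int) (base.getGreen() * f),
                         (int) (base.getBlue() * f));
    }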

Now this looks better, the sand in particular. The mountains look okay, but not quite right; something’s a little off. That’s not what mountains actually look like. We also don’t have any rivers or above sea level lakes. These are problems I’m going to look at later, after I get a reasonable 3D display system working. In the meantime, though, here are a couple more 2D landscapes for your viewing pleasure:

A bit more snow and an inland sea.
Yet another coastal region.