Some of the particularly sharp/anal ones amongst you might have noticed that while the technique for generating fractal landscapes I previously described works (and works well), it's not 100% correct. Specifically, the fact that it uses the same scaling factor for nodes created by the diamond and square steps isn't quite right.

Why is this? Because they generate nodes which adhere to different levels of detail, that's why. Let's go back to the last diagram from the post which described the algorithm:

Now while you'll note that both steps add nodes that can be addressed using fractions with two as their denominator, the nodes created by the diamond step are further from their parents than those created by the square step.

The nodes created by the square step are orthogonal to their parents, so the distance between them is a half, which as luck would have it is the reciprocal of the denominator of the fractions used to address the node. How convenient!

The nodes created by the diamond step, on the other hand, are diagonal to their parents. This means the distance to their parents is the Pythagorean combination of two of these half-steps, so in this specific case:

sqrt(½×½ + ½×½) = sqrt(¼ + ¼) = sqrt(½) ≈ 0.707

Once again, the key fraction used to work this out has the same denominator as those used to address the node in the landscape. Thus, if d is equal to the denominator we’re using to address a node, the basic scaling factor used to offset a new node from its parents would be the following:

if (diamond step) range = [-sqrt(1/d*1/d*2), sqrt(1/d*1/d*2)]

else range = [-1/d, 1/d]
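In Java, that corrected pair of ranges might look something like the sketch below. The names are mine rather than anything from the real code, but the arithmetic is exactly the pseudocode above: square-step offsets span ±1/d, diamond-step offsets span ±sqrt(2)/d.

```java
import java.util.Random;

public class OffsetRange {
    // "d" is the denominator used to address a node (2, 4, 8, ... as the
    // iterations progress). Returns the half-width of the offset range.
    static double halfRange(int d, boolean diamondStep) {
        double base = 1.0 / d;
        // Diamond-step nodes sit diagonally from their parents, so their
        // distance (and hence their offset range) grows by a factor of sqrt(2).
        return diamondStep ? Math.sqrt(base * base * 2.0) : base;
    }

    // A uniformly random offset in [-halfRange, halfRange].
    static double randomOffset(Random rng, int d, boolean diamondStep) {
        double h = halfRange(d, diamondStep);
        return (rng.nextDouble() * 2.0 - 1.0) * h;
    }
}
```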

As I said before, this won’t make a lot of difference, but it will be more correct and that’s important to some people. Myself included.

For comparison purposes this is the effect this change has on the example landscape I’ve been using:

There’s some difference visible, but not a huge amount. Mostly, it’s just increased the range the data are occupying and expanded the bell curve accordingly. Hence, more high points and more low points, but the land is the same basic shape.

It took a little bit of fighting with bugs that weren't showing up in the 2D view, and a bit of time to figure out what was going on with the lighting system in JME, but I finally got the 3D display of the fractal landscapes working.

The first stage was just displaying each node as a discrete point so I could see that each was in about the right place. It looks a little bit like this:

I did this by simply piping the spatial coordinates and colour information of each node into a pair of standard Java FloatBuffers, passing these to a JME Point class (which should really be called PointSet, in my opinion) and attaching this to the root display node of my JME application. The colouring scheme is the same as the one used for the 2D display. Some things don’t look quite right, largely due to the fact that I’ve just drawn the “underwater” points as blue, rather than adding any actual “water.” Don’t fret, it’s on the todo list.
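The buffer-filling part of that is plain `java.nio`, so here's a rough sketch of what it amounts to. The node-array representation is a stand-in for the real landscape structure, and I'm only showing the packing step, not the JME `Point` construction itself.

```java
import java.nio.FloatBuffer;

public class PointBuffers {
    // Pack per-node spatial coordinates into the flat FloatBuffer layout
    // that point-set geometry classes typically consume: x, y, z, x, y, z...
    static FloatBuffer packPositions(float[][] nodes /* {x, y, z} each */) {
        FloatBuffer buf = FloatBuffer.allocate(nodes.length * 3);
        for (float[] n : nodes) {
            buf.put(n[0]).put(n[1]).put(n[2]);
        }
        buf.flip(); // rewind so the consumer reads from the start
        return buf;
    }

    // Same idea for colours: r, g, b, a per node.
    static FloatBuffer packColours(float[][] colours /* {r, g, b, a} each */) {
        FloatBuffer buf = FloatBuffer.allocate(colours.length * 4);
        for (float[] c : colours) {
            buf.put(c[0]).put(c[1]).put(c[2]).put(c[3]);
        }
        buf.flip();
        return buf;
    }
}
```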

That said, the landscape looks about right. All the points seem to be in their correct location. As a quick implementation note, I’m defining the (x, y, z) coordinates of the scene in the following way:

x = east

y = altitude

z = -north

With some scaling factors used to map the values from the [0,1] range used to address them to slightly more real world like dimensions.
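As a sketch, the mapping looks like this. The scale constants are placeholders of my own, not the values I'm actually using:

```java
public class WorldCoords {
    // x = east, y = altitude, z = -north, with some scaling to take the
    // [0,1] addressing range to more real-world-like dimensions.
    static final float HORIZONTAL_SCALE = 1000f; // assumed, metres per unit
    static final float VERTICAL_SCALE = 100f;    // assumed, metres per unit

    static float[] toWorld(float east, float north, float altitude) {
        return new float[] {
            east * HORIZONTAL_SCALE,   // x
            altitude * VERTICAL_SCALE, // y
            -north * HORIZONTAL_SCALE  // z: north points into the screen
        };
    }
}
```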

The next stage was to display the landscape in wireframe to make sure the connections I'll be using to create a more solid-looking polygon-based display are all working correctly. Why not just go directly to polygons? You can see the detail better in the wireframe display, making debugging much easier. I'll definitely be using it again later.

This time, instead of piping each and every node into the vertex array, only the nodes at the highest level of detail are used. These are the nodes generated during the final “square stage” of the fractal algorithm, for those of you playing at home. Each draws a triangle (consisting of three separate lines) into the vertex buffer for each pair of parents it has in the landscape. The result looks something like this:
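Sketched in code, the per-node edge generation looks roughly like this. Vertices are referred to by index, and I'm assuming the node's parents are ordered so that adjacent entries form the pairs described above:

```java
import java.util.ArrayList;
import java.util.List;

public class WireframeTriangles {
    // For an active node and each adjacent pair of its parents, emit the
    // three edges of the triangle they form (as index pairs for a line list).
    static List<int[]> triangleEdges(int node, int[] parents) {
        List<int[]> edges = new ArrayList<>();
        for (int i = 0; i < parents.length; i++) {
            int a = parents[i];
            int b = parents[(i + 1) % parents.length]; // next parent round the diamond
            edges.add(new int[] { node, a });
            edges.add(new int[] { a, b });
            edges.add(new int[] { node, b });
        }
        return edges;
    }
}
```

A node with the full four parents therefore contributes four triangles, i.e. twelve line segments (some shared edges get drawn twice, which doesn't matter for debugging purposes).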

Everything seems to be in good order there, I think. One or two things don't look quite right, particularly the beaches, but the tessellation and coverage of the polygons looks right. Here's a closer look at some of the polygons so you can see what the tessellation scheme actually produces:

You can (hopefully) see that each of the “active” nodes sits at the centre of a diamond formed from the shape of its parents, so it’s the points with four lines branching from them (rather than eight) which are actually being used to draw the scene.

Next: polygons. Again, only the nodes at the highest level of detail are used. This time, each inserts itself into the vertex buffer, then adds its parents if they're not in there already. Each node remembers its position in the vertex buffer, and these indices are then used to draw the actual polygons. These are declared by passing the indices in sets of three into a standard Java IntBuffer. The buffers are then passed to one of JME's TriMesh geometry classes and displayed, like this:
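The insert-and-remember bookkeeping is the interesting part, so here's a rough sketch of it. The class and method names are mine, and I'm using a map keyed on the node itself to stand in for whatever the real landscape structure does:

```java
import java.nio.IntBuffer;
import java.util.LinkedHashMap;
import java.util.Map;

public class TriMeshIndices {
    // Each node is inserted into the vertex buffer at most once and
    // remembers its slot; triangles are then declared as index triples.
    final Map<Object, Integer> slots = new LinkedHashMap<>();

    int indexOf(Object node) {
        // Insert the node if it isn't in the buffer already; either way,
        // hand back its remembered position.
        return slots.computeIfAbsent(node, n -> slots.size());
    }

    // One triangle, as a set of three indices ready for an index buffer.
    IntBuffer triangle(Object centre, Object parentA, Object parentB) {
        IntBuffer tri = IntBuffer.allocate(3);
        tri.put(indexOf(centre)).put(indexOf(parentA)).put(indexOf(parentB));
        tri.flip();
        return tri;
    }
}
```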

Again, the beaches don't look quite right, but otherwise I'm reasonably pleased. I still need to add the actual water and improve the form of the landscape itself (and about a million other things), but in terms of display this is looking pretty good. Except for one thing: I'm using far more detail than I need to. Let me illustrate this with a slightly more extreme example. The pictures I've posted so far were generated using seven iterations of the diamond square algorithm. Here's what happens when I use ten iterations (remember, the number of points increases exponentially):

On the bright side, the beaches look better, but that's a lot of polygons. Far more than we actually need to display. 1579008 polygons, in fact. We need to reduce that somewhat if we're going to make things more complicated and maintain a reasonable frame rate (I'm getting about 60fps with this view at the moment). You can see the problem more clearly if I show you the same view using lines rather than polygons:

You can just about see the individual triangles up close, but further away the lines just mush together. I think we can afford to reduce the level of detail as we get further away, don’t you?

I don't have any 3D views of the fractal landscapes I've been making to show you yet, as I'm still working through the different implementation options. I did get a little distracted with the 2D views of the landscape this morning, though, and played with the colouring scheme.

First of all, let’s start again with the example landscape used in yesterday’s post, only with slightly more sober colours and a bar on the right showing how the height values map to actual colours:

Now that looks reasonably neat already, in a "my first atlas" kind of way, but clearly there's a lot of information missing. We can see this if I plot the height values in monochrome, giving the landscape a more "plasma cloud" appearance:

Now we can see the extra information, but we’ve lost a lot of the sense that what we’re looking at is a landscape. It looks more like a cloud. We can get some of that back by combining the two approaches and using different shades of blue and green:

Now this looks a lot better. I think the water looks quite reasonable using this scheme, but the landscape is a bit… homogeneous. Is every part of the land covered in grass? How boring!

We can make things a bit more interesting by adding a thin band of “sand” around the coast, and some “mountainy” colours (and snow!) higher up, like so:
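The banding scheme boils down to a simple threshold ladder over the height values. Here's a sketch of the idea, with heights in [0, 1] and sea level assumed at 0.5; the actual thresholds are guesses of mine, not the values behind the real pictures:

```java
public class HeightColours {
    // Map a height value to a colour band: blues below sea level, a thin
    // strip of sand at the coast, grass above it, then rock and snow.
    static String bandFor(double height) {
        if (height < 0.5)  return "water";
        if (height < 0.52) return "sand"; // deliberately thin coastal band
        if (height < 0.8)  return "grass";
        if (height < 0.95) return "rock";
        return "snow";
    }
}
```

Within each band, the actual colour can then be shaded by how far the height sits through the band, which is what gives the graded blues and greens.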

Now this looks better, the sand in particular. The mountains look okay, but not quite right. Something's a little off. That's not what mountains actually look like. We also don't have any rivers or above-sea-level lakes. These are problems I'm going to look at later, after I get a reasonable 3D display system working. In the meantime, though, here are a couple more 2D landscapes for your viewing pleasure: