Simplifying the Landscape

At the end of the last post I wrote about the actual implementation of my Clockwork Aphid project, I said the next step was going to be display simplification. At that point I’d generated a few landscapes which were just barely starting to test the limits of my computer, though they were nothing like the size or complexity I had in mind. That said, I was looking at landscapes containing 1,579,008 polygons, and it was obvious that not all of these needed to be put on screen. Moreover, because my landscapes are essentially made up of discrete samples (or nodes), I needed to reduce the number of samples displayed to the user, otherwise my performance was really going to tank as the landscapes increased in size.

Shamus Young talked about terrain simplification some time ago during his original terrain project. This seemed as good a place as any to start, so I grabbed a copy of the paper he used to build his algorithm. I didn’t find it as complicated as it appears he did, but this is probably because I’m more used to reading papers like this (I must have read hundreds during my PhD, and even wrote a couple), so I’m reasonably fluent in academicese. It was, as I suspected, a good starting point, though I wouldn’t be able to use the algorithm wholesale, as it’s not directly compatible with the representation I’m using. Happily, though, my representation does make it very simple to use the core idea.

If you remember, my representation stores the individual points in a sparse array, indexed using fractional coordinates. This makes it very flexible, and allows me to use an irregular level of detail (more on that later). Unlike the representation used in the paper, though, this means I can’t make optimisations based on the assumption that my data is stored in a regular grid. Thankfully, the first stage of the simplification algorithm doesn’t care about this and examines points individually. Also thankfully, the simplification algorithm uses the same parent/child based tessellation strategy that I do.

The first step is to decide which points are “active”. This is essentially based on two variables:

  • The amount of “object space error” a point has (i.e. how much it differs from its parents);
  • The distance between the point and the “camera”.

A local constant is also present for each point:

  • The point’s bounding radius, or the distance to its furthest child (if it has children).

I’m not sure if I actually need this last value in my current implementation (my gut says no, I’ll explain why later), but I’m leaving it in for the time being. Finally, two global constants are used for tuning, and we end up with this:

λεi / (di − ri) ≥ τ

Where:

  • i = the point in question
  • λ = a constant
  • εi = the object space error of i
  • di = the distance between i and the camera
  • ri = the bounding radius of i
  • τ = another constant

This is not entirely optimal for processing, but a little bit of maths wizardry transforms it like so:

(λεi + τri)² ≥ τ²di²

This looks more complicated, and it’s less intuitive to see what it actually does, but from the computer’s point of view it’s a lot simpler, as it avoids the square root needed to calculate the distance between the point and the camera (the squared distance falls straight out of a dot product). Now we get to the fun part: diagrams! Consider the following small landscape, coloured according to the granularity of each of the points (aka the distance to the node’s parents, see this post):

All of the points in the landscape, coloured by granularity.

Next, we’ll pick some arbitrary values for the constants mentioned above (ones which work well for explanatory purposes), place the viewpoint in the top left hand corner, and we end up with the following active points (inactive points are hidden):

The active points.

Now, we take the active points with the smallest granularity, and we have them draw their polygons, exactly as before, which looks like this:

The polygons drawn by the points with the smallest granularity.

When we come to draw the polygons of the next highest granularity, though, you’ll see that we have a problem: the previous set of polygons has encroached on their territory. To avoid this, each node informs its parents that it is active, and the parent then doesn’t draw any polygons in the direction of its active children. If we add in the polygons drawn by each of the other levels of granularity, we now end up with this:

The polygons drawn at each level of granularity.

Oh no! There’s a hole in my landscape! I was actually expecting that my simplistic approach would lead to more or less this result, but it was still a little annoying when it happened. If I was a proper analytical type I would next have sat down and worked over the geometry at play here, then attempted to find a formulation which would prevent this from happening. Instead, though, I stared at it for a good long while, displaying it in various different ways, and waited for something to jump out at me.

Eventually it did, and thankfully it was a very simple rule. Each parent stores a list of the directions in which it has active children, in order to prevent overdrawing (as mentioned above). The new rule is that a node is also considered active if this list is non-empty. With this addition, our tessellated landscape now looks like this:

The landscape with the gaps filled in.

Presto! A nice simple rule which fills in all of the gaps in the landscape without any over- or under-simplification, or any overdrawing. I suspect this rule also negates the need for the bounding radius mentioned above, though I have not as yet tested that thought. To recap, we have three simple rules (sketched in code below):

  1. A node is active if the object space error/distance equation says it is;
  2. A node is active if any of its children are active;
  3. Polygons are tessellated for each active point, but not in the direction of any active children.
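
To make those rules concrete, here’s a minimal sketch of the first two in Java. The names are hypothetical rather than taken from the actual code, and the error test uses the square-root-free form of the equation from above:

enum Direction { NORTH_EAST, NORTH_WEST, SOUTH_EAST, SOUTH_WEST }

class LandscapeNode {
  double x, y, z;   // position in world space
  double error;     // object space error (epsilon)
  double radius;    // bounding radius (r)

  // Directions in which this node has active children; also used to
  // suppress tessellation towards those children (rule 3).
  final java.util.Set<Direction> activeChildDirections =
      java.util.EnumSet.noneOf(Direction.class);

  // Rule 1: the object space error/distance test, in its squared form,
  // so no square root is needed to find the distance to the camera.
  boolean passesErrorTest(double camX, double camY, double camZ,
                          double lambda, double tau) {
    double dx = x - camX, dy = y - camY, dz = z - camZ;
    double distanceSquared = dx * dx + dy * dy + dz * dz;
    double lhs = lambda * error + tau * radius;
    return lhs * lhs >= tau * tau * distanceSquared;
  }

  // Rule 2: a node is also active if any of its children are active,
  // which is true exactly when the direction list is non-empty.
  boolean isActive(double camX, double camY, double camZ,
                   double lambda, double tau) {
    return !activeChildDirections.isEmpty()
        || passesErrorTest(camX, camY, camZ, lambda, tau);
  }
}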

But what does this look like in actual eye-popping 3D landscapes? Well, here’s an example, using the height based colouring I’ve used before:

The simplified landscape in 3D.

I’m quite pleased with this, though what I’m doing here is still quite inefficient and in need of some serious tuning. There are a couple of further simplification tricks I can try (including the next step from the (paper) paper). More to come later. Honest.


Revisiting the Language Issue

Some time ago, I wrote a series of posts about language choice for my Clockwork Aphid project. In the end I decided to start the project in Java, this being the language I’m most comfortable with. Once the project reaches a stable state, with some minimum amount of functionality, the plan is to port it to C++ for comparison purposes, this being the language which is likely to provide the best performance.

I still plan on doing this, but I’ve also decided to add a couple of extra candidate languages to the melting pot and get an even broader comparison. The first of these languages is Go, a relatively new language developed at Google. This is not a coincidence. I’ve been doing some reading about it lately and finding a lot of things I really like. It has the potential to provide the benefits of both Java and C++, whilst avoiding many of the pitfalls. This is definitely a good thing. It will also give me a chance to dogfood (there’s that word again!) some more Google technology.

One of Go’s features which I really like is implicit interfaces. Allow me to explain. In most regular statically typed object orientated languages, such as Java (which I’ll use for this example), you can abstract functionality using something like an interface. For example, let’s say I have a class which looks like this:

class Counter {
  int value;
  int get() {
    return value;
  }
}

Here we have defined a class which declares a single method which returns an integer value. I might then make use of an instance of this class elsewhere:

class Printer {
  void update(Counter counter) {
    System.out.println(counter.get());
  }
}

All is good with the world, unless I decide I want to change the behaviour of the code. Perhaps I want the value to increment after each call, for example. I could extend the Counter class and change its behaviour that way:

class IncrementingCounter extends Counter {
  int get() {
    return value++;
  }
}

I can now pass an instance of this new class into the update method of the Printer. Done. Right? Well… no. This is a bit of a clumsy way to go about it. It doesn’t scale and it’s not always possible. A better way to handle this is to use an interface:

interface CounterInterface {
  int get();
}

This specifies the interface of the methods, but not their implementation. We can then change the Printer class to use this interface, rather than the concrete class:

class Printer {
  void update(CounterInterface counter) {
    System.out.println(counter.get());
  }
}

Now any class which implements this interface can be passed to the Printer. So, going back to our original example:

class Counter implements CounterInterface {
  int value;
  int get() {
    return value;
  }
}

We can now make any number of alternative implementations (incrementing, decrementing, random, Fibonacci…) and as long as they implement the interface they can be passed to the printer. This is fine if you’re in control of the implementation, and even more fine if you’re in control of the interface as well. There are times, however, when you’re in charge of neither. Things can get a little messy and you may have to find a way of pushing a round peg through a square hole.

In dynamically typed languages, such as Python and Ruby, things work a little differently. These languages are often referred to as being “duck” typed, as they make the assumption that if something “looks like a duck and quacks like a duck, treat it as though it’s a duck.” In this case we wouldn’t bother with any of the interfaces and our Printer class would look more like this:

class Printer:
  def update(self, counter):
    print(counter.get())

So long as the counter object has a method called get() we don’t have a problem. Everything will be fine. This is much simpler, and is one of the things which makes Python very quick to program in, but it does have problems. The main problem (for me, at least) is specification. Without examining the source code, I can’t see what sort of object I have to pass into the update method. If the method has been manually commented then there’s no problem, but this is an incredibly tedious thing to have to do. In the Java code I can see the type right there in the auto-generated documentation, and even if the writer has written no comments at all (what a bastard!) I can still get a good idea of what I need to pass into the method.

Go takes a different approach. It’s statically typed, and it has interfaces, but a class doesn’t need to state that it implements an interface. This is implicit and automatic. If a class has the methods defined in an interface, then it is automatically considered to implement it. You get the flexibility of Python with the specification and predictability of Java. This is just one of the things in Go which I think is a really good idea.

On the other hand, I think functional programming is a really stupid idea. I find the languages to be completely horrendous. I feel they must be created by the sort of people who think Linux is user friendly. I consider them curiosities whose only merit is academic. It appears to me that their major use case is to make programming appear more obscure than it actually is and to abstract away the programmer’s knowledge of what the computer is actually doing.

You may be surprised to learn, then, that the third language I’m going to be trying to port Clockwork Aphid into is Scala, a functional programming language. The reason for this is simple: while I personally believe that functional programming (FP) is rubbish, many people disagree. Not a majority, but certainly a very vocal minority. Within Google this minority is very vocal indeed. The word “fundamentalists” might be appropriate to describe them. When someone believes something that hard it makes me very curious. This in turn tends to lead me towards testing my own beliefs. Sometimes I discover something new and exciting which I was missing out on previously*, and sometimes my initial suspicions are confirmed**. We’ll see which way it goes with Scala.

* Such as the Harry Potter books, which I had stubbornly refused to read until just before the first film was released.

** Such as when I noticed that the Twilight books had taken up the first four places on the Waterstone’s chart and decided I ought to find out what all the fuss was about.

Dogfood, Nom Nom Nom

Dog food, the common noun, is reasonably self explanatory (pro tip: it’s food for dogs). Dogfood the verb, or dogfooding the verbal noun, though, might require a little bit of explanation.

At the root of it is this: if you make dog food, you should feed it to your own dogs. There are essentially two reasons for this:

  1. If you don’t personally test it, how will you know if it’s any good?
  2. If your dogs don’t eat it, why the hell should anyone else’s?

The same principle applies to software. Even more so, in fact, as it’s something you’re more able to test directly. As a simple example: at Google, we use Google Docs for most documentation purposes (design docs, presentations, etc.). I’m willing to bet that folks at Apple use iWork for much the same purpose. I’m absolutely certain that Microsoft employees use Office, excepting those times when it’s necessary to write a document in the blood of a green eyed virgin upon the pressed skin of an albino goat.

This process is called dogfooding. You use the software internally before it’s released to users, ensuring that it gets a lot more test usage. As an added bonus, developers who actually use the software they create are more likely to create software that’s actually worth using. That’s not always the case, of course, since most developers don’t really fall into the “average users” category. Case in point: upgrading my computer to Lion broke my installation of Steam. I fixed it with a quick command line incantation, then performed a couple more in order to make The Stanley Parable functional under OSX. Most computer users would not be comfortable doing something like this, nor should they have to.

As well as using your company’s products at work, it’s generally a good idea to use them at home. It’s always good to have a feel for what your company actually does and get more experience with it. I’ve used Google search more or less exclusively for at least ten years. That’s not really a hard choice. It gives the best results. Likewise, I started using Google Chrome as my main web browser pretty much as soon as it was available for the platforms I used (in my last job that was actually Windows, OSX and Linux). I use an iPhone in preference to Android, however, though I do have an upgrade due towards the end of the year and it’s not completely inconceivable that I might switch. For the time being at least, I’m definitely sticking with WordPress, so I won’t get to play with Blogger, Google Analytics or AdSense, either.

As well as dogfooding Google products at work, we also dogfood technologies and platforms. This sounds fairly obvious, but it’s not always the case with companies who create platform technology. Microsoft, for instance, used to be famous for not using the technologies they provided to developers internally, though they are getting better now. Some of Google’s technologies are open source, and thus available for everyone to use. Guice and Protocol Buffers are pretty good examples of this. Guice is amazing, by the way. This being the case, there’s nothing to stop me using them on personal projects, should that be appropriate. Personal projects such as Clockwork Aphid, for example.

I’ll talk about which particular Google technologies I think might be useful in later blog posts, but since I brought it up, I suppose I should probably say something about Clockwork Aphid. I’ve blown the dust off the code, tweaked a couple of things and got a feature I’d left half finished back on track. I tried switching my current implementation from jMonkeyEngine version 2 to version 3, only to find that while it does appear a lot faster and improved in several other ways, the documentation is pretty crappy, and it’s less… functional.

I’ll talk about that more later, but for now just know that things are once again happening.

Features

So, in my last post I remarked that for procedurally generated landscapes to be interesting, they would need features. But what sort of features was I talking about? Population centres, in particular, tend to be found close to certain kinds of… things. The first of these which always comes to my mind is the “defensible position.” Something like, say, this:

Edinburgh Castle

A lot of the truly great cities in Europe tend to centre around castles, but you don’t tend to find castles just plonked anywhere on the landscape. Edinburgh Castle is a good example because:

  1. I think it looks awesome. Look at the way it seems to just grow out of the rock of those cliffs!
  2. It’s pretty much unassailable from every direction but one (I think I mentioned the cliffs);
  3. It’s on a raised vantage point with a decent line of sight to just about every direction of approach.

It’s not the most castle-ey looking castle and it lacks, for instance:

  1. A moat;
  2. The sort of tower Rapunzel might hang out in.

But I think it makes a reasonable example here. I think this is a good rule of thumb: a castle should look good and be well situated (defensibly speaking).

Another feature which population centres tend to spring up around is the harbor. Harbors are very important for trade, and thus they attract people. Wikipedia served up a reasonable example:

Capri harbour, Italy

Rule of thumb number one for a harbor: it should be at the intersection of the land and the sea, ideally. If we’re playing in a fantasy sand pit this isn’t actually the only option, but that’s a story for another day. The nature of the coast is important as well, though. You need reasonably deep water right next to the land, but not too deep. Lastly, and just as importantly: it should provide shelter. The boats in the harbor need to be able to survive the harsh weather the sea sometimes serves up. A sheltered cove with high cliffs to each side sounds about right, no?

The last kind of feature I’m thinking of here is again something quite important to trade: the bridging point. You need to get your trade caravan across the river (or gorge) which separates the harbor and the castle (for example), thus you need to cross the bridge. The bridge spans the river at a conveniently narrow point and it’s the only one for miles. Thus, the bridge becomes a nexus of activity and a town springs up there. Over time the town increases in size and more bridges get built. Before you know it… BOOM, Florence!

 

Ponte Vecchio

This is, of course, just the tip of the iceberg. We’ve not even talked about churches, cathedrals, monasteries, oases, and any number of geological features.

Another question, of course, is how to go about actually generating the population centres themselves. Well, I’m going to talk more about this later, when I actually get around to talking about some other people doing procedural content related stuff, but here’s a little something to whet your appetite:

Cool, no? More about procedural city generation from me later (I don’t want to get too far ahead of myself), but you can read more about this project at this excellent series of blogs. Start at the beginning, and be ready to lose a good hour of your time.

The Elephant in the Room

Since I haven’t been able to do any actual work on my Clockwork Aphid project as of late, I suppose I may as well talk about the background behind it a little more. Those who talk about it the most are the ones doing it the least, and all that. I’ve spoken a little about virtual worlds before and focussed almost entirely on World of Warcraft, because it’s the big one. It’s not the only MMORPG, and it definitely wasn’t the first. It is the one that I have the most experience with, and statistically the one most other people are likely to have experience with, as well.

There are several other virtual worlds I really should talk about, but the elephant in the room is another extremely large, and very notable, virtual world. One which has double relevance, because I’ve made an oblique reference to it already in another post.

This is a virtual world whose currency has an exchange rate with the real world, and sale of virtual goods within this world has turned people into real life millionaires. There exist architectural practices whose entire portfolio exists “in world.” Sweden, among several other countries, has an embassy in this virtual world, and presumably pays staff to work there. Several musicians have given live concerts there (don’t ask me how that works). This virtual world is not itself a game (as you may have gathered), but it has the infrastructure which has allowed people to build games inside it. Despite all this, though, it has a reputation of, for want of a better word, lameness.

This is, in and of itself, slightly frustrating, because I can’t help feeling that it could be awesome. It should be awesome. It bears more than a passing resemblance to the “Metaverse” from Neal Stephenson’s fantastic Snow Crash, you see. I presume you’ve read Snow Crash? No? Well go and read it. Now. It’s okay, I’ll wait until you’ve finished.

Done? Okay, good. Those are some big ideas, right? Yes, I thought she was a little young, too. Anyway. In case you just went right ahead and skipped over my suggestion there, the Metaverse can be summarised thus:

The Metaverse is our collective online shared space, created by the convergence of virtually enhanced physical reality and physically persistent virtual space, including the sum of all virtual worlds, augmented reality, and the internet.

I’m talking, of course, about Second Life. If you’re not familiar with it, it looks a bit like this:

 

One thing you might notice right away is that the graphics have a bit of a low-fi look about them, and there’s a reasonably good reason for this*. In our old friend World of Warcraft, the graphics aren’t exactly stellar either, but they’re much sharper than this. In WoW, by and large, the landscape doesn’t really change, unless (topically) a large new expansion is being released which results in sweeping changes to the world. When this does happen, the game forces you to download the changes before it lets you start playing. This might be a lot of data (in the order of gigabytes) but it doesn’t happen often. As previously noted, the World of Warcraft is essentially static. Not so Second Life, though, as its landscape is built by its users. Just because a location contained an island with the Empire State Building rising out of it yesterday doesn’t mean that you won’t find a scale replica of the starship Enterprise there tomorrow. Thus, the content of the game is streamed to the user as they “play,” and the polygon counts need to be kept reasonably low so that this can happen in a timely fashion. Even so, you might teleport to a new location, only to find that the walls appear ten seconds after the floor, and then suddenly you’re standing in the middle of a sofa which wasn’t there a second ago.

The issue with Second Life, for me at least, is that it’s not as immersive as I want it to be. I don’t feel as though I’m connected to it. I feel restricted by it. There’s something cold and dead about it, much like the eyes of the characters in The Polar Express. Something is missing, and I can’t quite put my finger on what it is. Also, sometimes the walls appear ten seconds after the floor. That said, it is a fully formed virtual world with a large population and a proven record for acting as a canvas for people’s ideas. Given that the point of Clockwork Aphid is to tell stories in a virtual world (I mentioned that, right?), why not tell those stories in Second Life?

This is an idea I’m still exploring, and I keep going backwards and forwards about it, because I’m still not sure if the juice is worth the squeeze. I’d get an awful lot of ready built scope and a huge canvas to play with, but I’m not sure if it’s the right type of canvas. This is a canvas which comes with no small number of restrictions, and I would basically be attaching my wagon to a horse which was entirely outside of my control. Mixed metaphors could be the least of my worries. That said, did I mention that people have become millionaires trading inside Second Life? Then again, Second Life doesn’t exactly represent a living breathing virtual world, so much as the occasionally grotesque spawn of its users’ collective unconsciouses. Sometimes it’s not pretty, other times quite impressive results emerge.

Your thoughts are, as always, both welcome and encouraged, below.

* To be fair, the graphics in Second Life are actually a lot better than they used to be.

What todo?

The return train ride after a visit to my parents’ house is, if anything, more pleasant than the outward journey. This is not least, of course, because it ends in Edinburgh, rather than Doncaster*. Be that as it may, this is perhaps a good time to pick up the thought I left hanging at the end of my last entry, in which I talked a little about ways of keeping notes and writing down ideas. Having dealt with information, we now come to action. From stasis, to process. Less obliquely: what, exactly, are you going to do about those ideas?

For the longest time I never kept todo lists. I tried to keep my goals and the individual steps required to reach them inside my head. Sometimes this worked quite well; thoughts are things which are simple to rearrange and update as requirements change, after all. Other times, not so much. In short: I forgot things.

My first attempt at organising these things was quite simple. I wrote a todo list down in my lab book. Paper is always a good place to start for a lot of things. Simple is generally a good place to start, also. I’d say this worked almost flawlessly on the occasions when I finished every item on the todo list I wrote on a particular morning by the end of that day. The problem appears on the days when this doesn’t happen. You don’t always carry out the tasks in the order you wrote them down, because that doesn’t always make sense, and there’s no way to rearrange the order of the items short of rewriting the list (yeah, right). As a result, you end up with a partially completed list to be carried over to the next day. Worse still, oftentimes a single item will subdivide into numerous smaller tasks when you come to take a closer look at it. So now you have two lists on different pages of your book, with some miscellaneous notes (and quickly jotted down take-away orders) separating them.

Version 2 worked a lot better, and I still use it from time to time. Basically, you write each item down on a post-it note (the 2.5*7.5cm ones are ideal, though a little hard to find) and stick it on a flat surface close to where you work. You’re then free to re-arrange them to your heart’s content and, best of all, when you finish an item you get to take it down, screw it up into a little ball, and throw it in the bin. Very satisfying. Take that, todo list! Obviously this isn’t as mobile as version one, though you can pull the post-its down and stick them in a book to take with you. If you always work in the same place it’s pretty great, though. It’s also fairly obvious, so apologies if I’m not telling you anything new here.

What version 2 doesn’t do is allow you to easily add things wherever you are, share your todo lists, or integrate into your more everyday todo type scenarios. In particular, I do not recommend using this approach for shopping lists. That isn’t going to work. As a result, I started looking at more computer based approaches (surprise, surprise). I gave Remember The Milk a try for a while, but the browser based approach doesn’t work for me. I like applications to integrate with my desktop.

This was around the time that the brilliant OmniGroup announced OmniFocus. I already use (and love) OmniOutliner and OmniGraffle, so this looked like a pretty good bet. Plus they later announced that an iPhone version was coming as well. Perfect, or so I thought. As it turned out, I much prefer Cultured Code’s Things, however. It is, quite frankly, the desktop todo manager of my dreams. It’s Mac only though, which doesn’t help me if I think of something at the office, or on the bus. Fortunately, there are also iPhone and iPad apps available, which are equally sweet. They lack one crucial feature, though: web sync. Sure, I can sync them over wireless, but this is much less use to me. I have to actively do it, rather than just fire up the application and wait for the magic to happen. It’s on the road map, but I’m still waiting. Side note: OmniFocus does have web sync, but I don’t like it as much, and it’s a lot more expensive. It might be perfect for your needs, however.

Which brings me sort of full circle, to my shiny new Clockwork Aphid project. At this point I’ve mostly just been implementing things as the mood takes me, maybe planning a couple of steps ahead. I probably need a bit more structure than that. The post-it note and Things based approaches both have their benefits, but there are a couple of other things to take into consideration. First of all, I’m using the fabulous BitBucket.org to host my source code, and that comes with a handy issue tracker. Especially handy should anyone else join the project once it gets a little bit more fleshed out. Another interesting possibility is the Mylyn plug-in for Eclipse (my code editor of choice), which provides you with a “task based interface.” In other words, it hides the clutter and shows you only the parts of your project you need to care about for the particular task you’re working on. That’s quite interesting, but works best when you link it up to a central issue tracker. Frustratingly, it doesn’t work with the BitBucket issue tracker. It does work with Atlassian’s Jira, however, and Atlassian recently acquired BitBucket, so hope is not lost.

These approaches are clearly better tailored to what I’m actually doing, but neither is as user friendly as either the post-its or Things, especially when it comes to adding new tasks, and that’s the bit you need to do quickly, brain-dump style. The search continues…

I, of course, welcome any alternative todo list solutions you’d like to leave in the comments. <subliminal>Please comment on my blog.</subliminal>

* Know this: no written text, regardless of italicisation, can accurately reproduce the tone of my voice when I say the word “Doncaster.”

 

Fractal Errata

Some of the particularly sharp/anal ones amongst you might have noticed that while the technique for generating fractal landscapes I previously described works (and works well), it’s not 100% correct. Specifically, the fact that it uses the same scaling factor for nodes created by the diamond and square steps isn’t quite right.

Why is this? Because they generate nodes which adhere to different levels of detail, that’s why. Let’s go back to the last diagram from the post which described the algorithm:

Diamond step (left) and square step (right).

Now, while you’ll note that both steps add nodes that can be addressed using fractions with two as their denominator, the distance from the nodes created by the diamond step to their parents is greater than that for the nodes created by the square step.

The nodes created by the square step are orthogonal to their parents, so the distance between them is proportional to a half, which as luck would have it has the same denominator as the fractions used to address the node. How convenient!

The nodes created by the diamond step, on the other hand, are diagonal to their parents. This means that the distance to their parents is the Pythagorean combination of this same distance in both directions, so in this specific case:

sqrt(½*½ + ½*½) = sqrt(¼ + ¼) = sqrt(½) = 1/√2 ≈ 0.707

Once again, the key fraction used to work this out has the same denominator as those used to address the node in the landscape. Thus, if d is equal to the denominator we’re using to address a node, the basic scaling factor used to offset a new node from its parents would be the following:

if (diamond step) range = [-sqrt(1/d * 1/d * 2), sqrt(1/d * 1/d * 2)] = [-sqrt(2)/d, sqrt(2)/d]

else range = [-1/d, 1/d]
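
In Java, the offset generation might look something like this (a sketch, not the actual project code):

// A sketch of the corrected offset ranges. The square step's parents are
// orthogonal (half a step away), while the diamond step's are diagonal,
// so its range is scaled up by the Pythagorean factor sqrt(2).
static double randomOffset(java.util.Random random, int denominator,
                           boolean diamondStep) {
  double range = (diamondStep ? Math.sqrt(2.0) : 1.0) / denominator;
  return (random.nextDouble() * 2.0 - 1.0) * range; // uniform in [-range, range]
}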

As I said before, this won’t make a lot of difference, but it will be more correct and that’s important to some people. Myself included.

For comparison purposes this is the effect this change has on the example landscape I’ve been using:

The original scaling method.
The updated scaling method.

There’s some difference visible, but not a huge amount. Mostly, it’s just increased the range the data are occupying and expanded the bell curve accordingly. Hence, more high points and more low points, but the land is the same basic shape.

Now In Eye Popping 3D!

It took a little bit of fighting with bugs that weren’t showing up in the 2D view, and a bit of time to figure out what was going on with the lighting system in JME, but I finally got the 3D display of the fractal landscapes working.

The first stage was just displaying each node as a discrete point so I could see that each was in about the right place. It looks a little bit like this:

 

Fractal landscape as points (click for bigger).

 

I did this by simply piping the spatial coordinates and colour information of each node into a pair of standard Java FloatBuffers, passing these to a JME Point class (which should really be called PointSet, in my opinion) and attaching this to the root display node of my JME application. The colouring scheme is the same as the one used for the 2D display. Some things don’t look quite right, largely due to the fact that I’ve just drawn the “underwater” points as blue, rather than adding any actual “water.” Don’t fret, it’s on the todo list.

That said, the landscape looks about right. All the points seem to be in their correct location. As a quick implementation note, I’m defining the (x, y, z) coordinates of the scene in the following way:

x = east

y = altitude

z = -north

With some scaling factors used to map the values from the [0,1] range used to address them to slightly more real world like dimensions.
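As a sketch of that plumbing (with a hypothetical, stripped down node type; the real class is rather more involved):

import java.nio.FloatBuffer;
import java.util.List;

// A stand-in for the landscape node, exposing coordinates in the [0,1]
// addressing range.
interface Node {
  float east();
  float north();
  float altitude();
}

class PointBufferBuilder {
  static final float SCALE = 1000f; // arbitrary world-size scaling factor

  static FloatBuffer build(List<? extends Node> nodes) {
    FloatBuffer vertices = FloatBuffer.allocate(nodes.size() * 3);
    for (Node node : nodes) {
      vertices.put(node.east() * SCALE);     // x = east
      vertices.put(node.altitude() * SCALE); // y = altitude
      vertices.put(-node.north() * SCALE);   // z = -north
    }
    vertices.rewind();
    return vertices; // the colour buffer is built the same way, as RGBA
  }
}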

The next stage was to display the landscape in wireframe, to make sure the connections I’ll be using to create a more solid looking polygon based display are all working correctly. Why not just go directly to polygons? You can see the detail better in the wireframe display, making debugging much easier. I’ll definitely be using it again later.

This time, instead of piping each and every node into the vertex array, only the nodes at the highest level of detail are used. These are the nodes generated during the final “square stage” of the fractal algorithm, for those of you playing at home. Each draws a triangle (consisting of three separate lines) into the vertex buffer for each pair of parents it has in the landscape. The result looks something like this:

 

Fractal landscape as lines (click for bigger).

 

Everything seems to be in good order there, I think. One or two things don’t look quite right, particularly the beaches, but the tessellation and coverage of the polygons look right. Here’s a closer look at some of the polygons so you can see what the tessellation scheme actually produces:

 

Polygon tessellation (click for bigger).

 

You can (hopefully) see that each of the “active” nodes sits at the centre of a diamond formed from the shape of its parents, so it’s the points with four lines branching from them (rather than eight) which are actually being used to draw the scene.

Next: polygons. Again, only the nodes at the highest level of detail are used. This time, each inserts itself into the vertex buffer, then adds its parents if they’re not in there already. Each node remembers its position in the vertex buffer, and these indices are then used to draw the actual polygons. These are declared by passing the indices in sets of three into a standard Java IntBuffer. The buffers are then passed to one of JME’s TriMesh geometry classes and displayed; the result is pictured below.
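
First, though, a rough sketch of that index bookkeeping, with hypothetical names rather than the actual project code (V stands in for the node class):

import java.nio.IntBuffer;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Each node goes into the vertex list once and remembers its index;
// triangles are then declared as triples of those indices.
class IndexedMeshBuilder<V> {
  private final List<V> vertices = new ArrayList<V>();
  private final Map<V, Integer> indices = new HashMap<V, Integer>();
  private final List<Integer> triangles = new ArrayList<Integer>();

  private int indexOf(V vertex) {
    Integer index = indices.get(vertex);
    if (index == null) {        // not in the buffer already?
      index = vertices.size();
      vertices.add(vertex);
      indices.put(vertex, index);
    }
    return index;
  }

  void addTriangle(V a, V b, V c) {
    triangles.add(indexOf(a));
    triangles.add(indexOf(b));
    triangles.add(indexOf(c));
  }

  IntBuffer indexBuffer() {
    IntBuffer buffer = IntBuffer.allocate(triangles.size());
    for (int index : triangles) {
      buffer.put(index);
    }
    buffer.rewind();
    return buffer; // handed, with the vertex buffer, to a TriMesh
  }
}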

 

Fractal landscape as polygons (click for bigger).

 

Again, the beaches don’t look quite right, but otherwise I’m reasonably pleased. I still need to add the actual water and improve the form of the landscape itself (and about a million other things), but in terms of display this is looking pretty good. Except for one thing: I’m using far more detail than I need to. Let me illustrate this with a slightly more extreme example. The pictures I’ve posted so far were generated using seven iterations of the diamond square algorithm. Here’s what happens when I use ten iterations (remember, the number of points increases exponentially):

 

MOAR POLYGONS! (click for bigger)

 

On the bright side the beaches look better, but that’s a lot of polygons. Far more than we actually need to display. 1,579,008 polygons, in fact. We need to reduce that somewhat if we’re going to make things more complicated and maintain a reasonable frame rate (I’m getting about 60fps with this view at the moment). You can see the problem more clearly if I show you the same view using lines rather than polygons:

 

MOAR LINES! (click for bigger)

 

You can just about see the individual triangles up close, but further away the lines just mush together. I think we can afford to reduce the level of detail as we get further away, don’t you?

Well, I’ll get right on that, then…

Some Random Landscapes

I don’t have any 3D views of the fractal landscapes I’ve been making to show you yet, as I’m still working through the different implementation options. I did get a little distracted by the 2D views of the landscape this morning, though, and played with the colouring scheme.

First of all, let’s start again with the example landscape used in yesterday’s post, only with slightly more sober colours and a bar on the right showing how the height values map to actual colours:

Fractal coastlines.

Now that looks reasonably neat already, in a “my first atlas” kind of way, but clearly there’s a lot of information missing. We can see this if I plot the height values in monochrome, giving the landscape a more “plasma cloud” appearance:

Plasma cloud landscape.

Now we can see the extra information, but we’ve lost a lot of the sense that what we’re looking at is a landscape. It looks more like a cloud. We can get some of that back by combining the two approaches and using different shades of blue and green:

Shaded landscape.

Now this looks a lot better. I think the water looks quite reasonable using this scheme, but the landscape is a bit… homogeneous. Is every part of the land covered in grass? How boring!

We can make things a bit more interesting by adding a thin band of “sand” around the coast, and some “mountainy” colours (and snow!) higher up, like so:

More in depth colouring scheme.

Now this looks better, the sand in particular. The mountains look okay, but not quite right. Something’s a little off. That’s not what mountains actually look like. We also don’t have any rivers or above sea level lakes. These are problems I’m going to look at later, after I get a reasonable 3D display system working. In the mean time, though, here are a couple more 2D landscapes for your viewing pleasure:

A bit more snow and an inland sea.
Yet another coastal region.

You’re Speaking My Landscape, Baby.

No, that isn’t a typo… but yes, it is a bad play on words. That’s the bad news. The good news: finally! A Clockwork Aphid implementation post!

If you’re building something which relates in some way to virtual worlds, then the first thing you’re going to need is a virtual world. This gives you two options:

  1. Use a ready made one;
  2. Roll your own.

Option 1 is a possibility, and one that I’m going to come back to, but for now let’s think about option 2. So then, when building a virtual world the first thing you need is the landscape. Once again you have two options, and let me just cut this short and say that I’m taking the second one. I did use to be a bit of a CAD ninja in a previous job, but I’m not a 3D modeller and I have no desire to build the landscape by hand.

So I’m going to generate one procedurally. As to what that means exactly, if you don’t already know… well I’m hoping that will become obvious as I go along.

Traditional Fractal Landscape Generation

There are several ways of generating a landscape. A pretty good one (and one I’m quite familiar with, thanks to a certain first year computer science assignment) is the fractal method. It goes something like this:

Start off with a square grid of floating point numbers, the length of whose sides is a power of two plus one. I’m going to use a 5*5 (2*2 + 1) grid for the purposes of this explanation.

Set the four corners to have the value 0.5 (the centre point of the range I’ll be using), thus:

Fractal Landscape Step One
Grid based starting point.

Now, we’re going to generate the landscape by repeatedly subdividing this and introducing fractal randomness (random fractility?) using the diamond square algorithm. First comes the diamond step, which in this iteration will set the value of the central cell based on the values of the four corners:

The first diamond step.

To do this we take the average of the four corners (which I calculate to be 0.5 in this case, because I did maths at school) and add a small randomly generated offset, which has been scaled according to the size of the subdivision we’re making. How exactly you do this varies between implementations, but a good simple way of doing it is to use a random number in the range [-0.25,0.25] at this stage and halve it at each subsequent iteration. So, on this occasion let’s say I roll the dice and come up with 0.23. This now leaves us with:

Result of the first diamond step.

Next, we have the square step, which will set the cells in the centre of each of the sides. Once again we take the averages of other cells as a starting point, this time in the following pattern:

The first square step.

Now we generate more random numbers in the same range and use them to offset the average values, giving us this:

Result of the first square step.

That completes an iteration of the algorithm. Next we halve the size of the range to [-0.125,0.125] and start again with the diamond step:

The second diamond step.

…and so on until you’ve filled your grid. I think you get the general idea. I’ve left out one potentially important factor here and that’s “roughness,” which is an extra control you can use to adjust the appearance of the landscape. I’m going to come back to that in a later post, because (hopefully) I have a little more that I want to say about it. I need to play with it some more first, though.
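For the curious, the whole grid based algorithm fits comfortably in a page of Java. This is a from-scratch sketch of the method as described above, not the Clockwork Aphid code (which, as explained below, takes a different approach):

import java.util.Random;

// A compact sketch of the grid based diamond square algorithm. The grid
// size must be a power of two plus one (e.g. 513).
class DiamondSquare {
  static double[][] generate(int size, long seed) {
    double[][] grid = new double[size][size];
    Random random = new Random(seed);
    // Set the four corners to the centre point of the range.
    grid[0][0] = grid[0][size - 1] = 0.5;
    grid[size - 1][0] = grid[size - 1][size - 1] = 0.5;

    double range = 0.25; // halved at each iteration
    for (int step = size - 1; step > 1; step /= 2, range /= 2) {
      int half = step / 2;
      // Diamond step: set the centre of each square from its four corners.
      for (int y = half; y < size; y += step) {
        for (int x = half; x < size; x += step) {
          double average = (grid[y - half][x - half] + grid[y - half][x + half]
              + grid[y + half][x - half] + grid[y + half][x + half]) / 4.0;
          grid[y][x] = average + offset(random, range);
        }
      }
      // Square step: set the centre of each edge from its neighbours
      // (three of them on the grid's border, four elsewhere).
      for (int y = 0; y < size; y += half) {
        for (int x = (y + half) % step; x < size; x += step) {
          double sum = 0;
          int count = 0;
          if (y >= half)       { sum += grid[y - half][x]; count++; }
          if (y + half < size) { sum += grid[y + half][x]; count++; }
          if (x >= half)       { sum += grid[y][x - half]; count++; }
          if (x + half < size) { sum += grid[y][x + half]; count++; }
          grid[y][x] = sum / count + offset(random, range);
        }
      }
    }
    return grid;
  }

  static double offset(Random random, double range) {
    return (random.nextDouble() * 2 - 1) * range; // uniform in [-range, range]
  }
}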

Once you’ve finished here you can do a couple of different things if you want to actually look at your creation. The simplest is to pick a threshold value and call this “sea level,” then draw the grid as an image with pixels set to blue below the threshold and green above it:

Generated fractal landscape.

This was obviously generated with a slightly larger grid (513*513), but as you can see it creates quite reasonable coastlines. You can do slightly fancier things with it, such as more in depth colouring schemes and 3D display. For 3D, the simplest method is to use each cell as a vertex in your 3D space and tessellate the grid into triangles like this:

Simple grid based tessellation.

You can then do a couple of fancy things to remove the triangles you don’t need, either based on the amount of detail they actually add or their distance from the user (level of detail).
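For completeness, generating the triangle indices for this simple scheme is a short loop. A sketch, assuming the grid’s vertices are numbered row by row, left to right:

// Two triangles per grid cell.
class GridTessellation {
  static int[] triangleIndices(int size) {
    int[] indices = new int[(size - 1) * (size - 1) * 6];
    int i = 0;
    for (int y = 0; y < size - 1; y++) {
      for (int x = 0; x < size - 1; x++) {
        int topLeft = y * size + x;
        int topRight = topLeft + 1;
        int bottomLeft = topLeft + size;
        int bottomRight = bottomLeft + 1;
        // First triangle of the cell...
        indices[i++] = topLeft;
        indices[i++] = bottomLeft;
        indices[i++] = topRight;
        // ...and the second.
        indices[i++] = topRight;
        indices[i++] = bottomLeft;
        indices[i++] = bottomRight;
      }
    }
    return indices;
  }
}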

This system works quite well, but tends to produce quite regular landscapes, without the variation we’re used to or the features created by rivers, differing geology, coastal erosion, glaciation and the other forces which affect the landscape of the real world. Additionally, because the data is stored in a height map, there are some things it’s just not capable of displaying, such as sheer cliffs, overhangs, and cave systems. The grid structure is also very efficient, but quite inflexible.

How I’m Doing it

Needless to say that’s not exactly how I’m doing it. Of course there’s generally very little sense in reinventing the wheel, but sometimes it’s fun to try.

I’m not doing too much differently with the actual algorithm, but I am using a slightly different data representation. Rather than a grid, I’m using discrete nodes. So you start off with something like this:

Node based starting point.

Which then is transformed like this to generate the actual landscape:

Adding further nodes.

..and so on.

What you can’t see from the diagrams is that I’m using fractions to address the individual nodes. So, for instance, the node in the centre is (1/2,1/2) and the node on the centre right is (1/1, 1/2). This means I don’t need to worry about how many nodes I have in the landscape, and the address of each never has to change. The next set of nodes will be addressed using fractions with 4 as the denominator, then 8, 16 and so on. Before looking up a node you first reduce its coordinates down to their lowest terms (dividing numerator and denominator by factors of 2) and then pull it out of the correct layer. I’m currently using maps as sparse arrays to store the data in a structure which looks like this:

Map<int, Map<int, Map<int, LandscapeNode>>>

If you’re thinking that this isn’t possible in Java, you’re half right. I’m actually using one of these. The first int addresses the denominator, then the east numerator, then the north numerator. I’ve looked at a couple of other strategies for hashing the three ints together to locate a unique node, but this one seems to work the best in terms of speed and memory usage. I might look at other options later, but not yet.
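
In sketch form, the lookup goes something like this (boxed Integers stand in for the primitive int maps of the real implementation, and N for the node class):

import java.util.HashMap;
import java.util.Map;

// A sketch of the sparse fractional addressing described above.
class SparseLandscape<N> {
  private final Map<Integer, Map<Integer, Map<Integer, N>>> nodes =
      new HashMap<Integer, Map<Integer, Map<Integer, N>>>();

  N get(int eastNumerator, int northNumerator, int denominator) {
    // Reduce the coordinates to their lowest terms first; denominators
    // are always powers of two, so we only ever need to divide by two.
    while (denominator > 1 && eastNumerator % 2 == 0 && northNumerator % 2 == 0) {
      eastNumerator /= 2;
      northNumerator /= 2;
      denominator /= 2;
    }
    Map<Integer, Map<Integer, N>> byEast = nodes.get(denominator);
    if (byEast == null) {
      return null;
    }
    Map<Integer, N> byNorth = byEast.get(eastNumerator);
    return byNorth == null ? null : byNorth.get(northNumerator);
  }
}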

This is a much more flexible representation, which removes some of the limitations of the grid. I can keep adding more detail to my heart’s content (up to a point) and I don’t have do it in a regular fashion. i.e. the native level of detail doesn’t have to be the same across the entire map. More remote areas can have less detail, for instance. By the same token, I can keep the entire “landscape” in memory, but flexibly pull individual nodes in or out depending on where the user actually is in the world, saving memory. This also potentially gives me the following:

  1. The possibility to decouple the geometry of the landscape from the topography of the representation;
  2. A “native” way of implementing different levels of detail;
  3. A natural tessellation strategy based on connecting a node to its parents (maybe you spotted it);
  4. Enough data to allow the landscape to be modified to produce more dramatic features across different levels of detail;
  5. The processes for the above should be very parallelisable.

There are still a couple of things I’m working on (3D display for a start), as I’ve been obsessing over how to organise the data structures I’m using. Hopefully I’ll be back tomorrow with some 3D views.

If you’re interested in the code you can find it here. If what you found at the end of that link didn’t make any sense to you, then you’re probably not a programmer (or you’re still learning). If you still want a look drop me a comment and I’ll see what I can do.

Disclaimer: As far as I’m aware I didn’t steal this from anybody, but I don’t claim it’s completely original, either.