Dogfood, Nom Nom Nom

Dog food, the common noun, is reasonably self-explanatory (pro tip: it’s food for dogs). Dogfood, the verb, or dogfooding, the verbal noun, though, might require a little bit of explanation.

At the root of it is this: if you make dog food, you should feed it to your own dogs. There are essentially two reasons for this:

  1. If you don’t personally test it, how will you know if it’s any good?
  2. If your dogs don’t eat it, why the hell should anyone else’s?

The same principle applies to software. Even more so, in fact, as it’s something you’re more able to test directly. As a simple example: at Google, we use Google Docs for most documentation purposes (design docs, presentations, etc.). I’m willing to bet that folks at Apple use iWork for much the same purpose. I’m absolutely certain that Microsoft employees use Office, excepting those times when it’s necessary to write a document in the blood of a green-eyed virgin upon the pressed skin of an albino goat.

This process is called dogfooding. You use the software internally before it’s released to users, ensuring that it gets a lot more test usage. As an added bonus, developers who actually use the software they create are more likely to create software that’s actually worth using. That’s not always the case, of course, since most developers don’t really fall into the “average users” category. Case in point: upgrading my computer to Lion broke my installation of Steam. I fixed it with a quick command line incantation, then performed a couple more in order to make The Stanley Parable functional under OSX. Most computer users would not be comfortable doing something like this, nor should they have to.

As well as using your company’s products at work, it’s generally a good idea to use them at home. It’s always good to have a feel for what your company actually does and get more experience with it. I’ve used Google search more or less exclusively for at least ten years. That’s not really a hard choice: it gives the best results. Likewise, I started using Google Chrome as my main web browser pretty much as soon as it was available for the platforms I used (in my last job that was actually Windows, OSX and Linux). I use an iPhone in preference to Android, however, though with an upgrade due towards the end of the year it’s not completely inconceivable that I might switch. For the time being at least, I’m definitely sticking with WordPress, so I won’t get to play with Blogger, Google Analytics or AdSense, either.

As well as dogfooding Google products at work, we also dogfood technologies and platforms. This sounds fairly obvious, but it’s not always the case with companies who create platform technology. Microsoft, for instance, used to be famous for not using the technologies they provided to developers internally, though they are getting better now. Some of Google’s technologies are open source, and thus available for everyone to use. Guice and Protocol Buffers are pretty good examples of this. Guice is amazing, by the way. This being the case, there’s nothing to stop me using them on personal projects, should that be appropriate. Personal projects such as Clockwork Aphid, for example.

I’ll talk about which particular Google technologies I think might be useful in later blog posts, but since I brought it up, I suppose I should probably say something about Clockwork Aphid. I’ve blown the dust off the code, tweaked a couple of things and got a feature I’d left half finished back on track. I tried switching my current implementation from jMonkeyEngine version 2 to version 3, only to find that while it does appear a lot faster and improved in several other ways, the documentation is pretty crappy, and it’s less… functional.

I’ll talk about that more later, but for now just know that things are once again happening.

Full Nerd II: Nerd Harder

It seems that people really enjoyed my post about the Computer History Museum. At the time I wrote it, I was worried that it might constitute just a little bit too much nerd, so I held back on my initial impulse to put in more pictures and gush enthusiastically about how awesome it all was.

With hindsight, perhaps I can afford to ignore that particular mental stopcock, at least for a little while. I do not, I regret to tell you, have any more pictures of the teapot. I do intend to buy myself a Melitta teapot at some point quite soon, however, so that it may sit in my flat and act as a most nerdy in-joke.

“Tea, anyone?”

“Yes, please.”

Pause.

“Why are you grinning like that?”

I do, however, have pictures of many other fun things. Let’s start with a reference to my current employer:

[Photo: one of the original Google server racks]

This is one of the original Google server racks. At one point, if you typed a query into the Google homepage, this is where the magic happened. If you have any familiarity at all with how servers usually look, you might be scratching your head and thinking that this one does not look entirely right. Let me help you with that:

[Photo: a closer look at the rack’s exposed components]

Yes, you can see all of the components. No, that is not normal. Yes, each individual server would normally have its own case. No, under normal circumstances sheets of cardboard would not be used as the insulation between motherboard and shelf. Yes, that is an awful lot of servers to fit into a single rack. Yes again, that would require very good air circulation, but you’ll have a bloody difficult time finding a case that gives better circulation than no case at all. No, you would not expect a server to bow in the middle like that…

Two things the early Google was known for: providing the best search results, and being very, very frugal when it came to equipment purchasing.

Let’s talk about something a bit more fundamental, though. Hard disks, for example. The one in the computer I’m writing this on has a capacity of around 120 GB (depending on how you measure a gigabyte, but that’s a different story). Wikipedia tells me that it measures around 69.85 mm × 15 mm × 100 mm, so quite small. This is also a hard drive:

[Photo: the world’s first hard disk]

Assuming I haven’t gotten mixed up here somewhere, this is the world’s first hard disk, made up of fifty 24″ disks holding a grand total of 5 million characters. Now, if each of those characters is a one-byte ASCII character (or similar), that’s approximately 5 MB, or 0.005 GB. Quite the difference in storage density, no?

Here’s a (slightly) more recent example of a hard disk, one which I’m told is actually still in use to some extent:

[Photo: a (slightly) more recent hard disk]

Now, if my understanding is correct, this next piece looks like a hard drive, but is much closer in function to RAM:

[Photo: a device that looks like a hard drive but functions more like RAM]

What’s particularly neat is that it’s based on an original design by the regrettably late, extremely great, and utterly brilliant Alan Turing.

The museum has an entire section devoted to the evolution of storage, and it’s quite fascinating. Another of the forebears of modern RAM is magnetic core memory, which looks like this:

[Photo: magnetic core memory]

Now that’s quite cool, and I’d say it’s also quite pleasing to the eye. I think I’d happily frame that, mount it, and have it hanging on the wall in my flat (somewhere close to the teapot). People walk through castles and talk in hushed tones of all the many things “these stones” have seen. All of the stories they might tell, if they could only speak. But this… unlike your average rock, this is unquestionably memory, and memory which lived through very exciting times in the development of our society. Here’s something I can look at and wonder what stories it might be able to tell, and what stories it has been forced to forget.

There are many things at the Computer History Museum which are very cool and certainly raise a smile (as well as an appreciative thought as to how far things have come). There are also things which just plain stop you in your tracks: the Difference Engine, for example. Well, at the risk of repeating myself, I’m going to post another picture of it, this time from the other side, so you can see a little bit more of the mechanism:

[Photo: the Difference Engine, seen from the other side]

Now there’s a thing I would really and truly love to have in my flat. Ideally in a more coffee table friendly size, of course.

The Process and the Platform

The effort required to actually publish that last post was… considerable. Several factors contributed to this:

  • I have slow internet access at my hotel;
  • Currently, my only (full) computer is my work laptop;
  • This doesn’t have iPhoto installed (this is mostly for reasons of simplicity; I’d probably get it if I asked).

In the end, I wrote the text of the post in Evernote on my iPad (using an external keyboard), since I’m supposed to install an absolute minimum of third-party software on the laptop. Next, I copied and pasted it into BlogPress, a blogging app which lets you insert images inline. In theory the official WordPress app also does this, except that the upload always fails for me. As you may have gathered, I inserted the images at this point. BlogPress also rescales them, so you don’t need to upload all 12 megapixels. Next, I uploaded it to my actual blog as a draft, and used the web interface to fix any formatting errors and add any extra formatting, since BlogPress doesn’t allow bold or italic text (that I’ve found). That done, I hit the publish button, and: presto!

Needless to say, this is not an ideal workflow. I like using Evernote for writing the actual text (on both iPad and Mac) because it has about the right amount of functionality and it backs everything up and synchronises it between all of my devices. I like having that always available record which I can look back over and search as I see fit. I also like being able to drop one device, pick up another and keep working on the same document more or less seamlessly.

Digression: I also like that I can use it as a permanent record of my notes. Before I moved down to London I was part way through scanning my notebooks from my PhD into Evernote. Evernote runs OCR on the images, finds the text (when my handwriting makes this feasible) and makes it searchable. Brilliant. It’s like being able to carry all of my old notebooks around with me, all of the time.

What Evernote doesn’t allow you to do, however, is freely mix text and images. The WordPress interface does (obviously), but that leaves me with the problem of uploading the images. This is where the low bandwidth and lack of iPhoto became problems. I suspect the cleanest workflow would be to immediately upload an album to Flickr (or another photo sharing site) and then use the appropriate URL to include each image in the blog post. Searching through the iPhoto library manually sure as hell isn’t ideal, and that’s the only real option for doing it image by image in the web interface.

The most pleasant experience I’ve had putting together blog posts with both text and pictures was actually iWeb, and by a metric mile. iWeb uses the built-in OSX controls and lets you select images according to their metadata and what they look like, rather than their file names, as though you were working with a file system specifically designed for serving you images. The iPad does more or less the same thing, in fact, so that part wasn’t actually too bad. Using iWeb leaves you with very limited options for your blog, however. It’s made me think I should look at using a dedicated program, such as MarsEdit, for writing my posts, or at least for the final stages.

Ideally, I need to find a decent workflow which doesn’t break down when I’m away from home and likely to actually have semi-interesting things to blog about, but doesn’t restrict me when I am at home. It should allow me to jump between different machines with a minimum of effort, and not require me to always add the final touches from the same machine. The workflow should also not break down when no internet connection is available. Text is fairly easy. Images make things more difficult, especially if the images were recorded using my own camera.

Lastly, I’m giving some consideration to porting this blog over to Blogger (only if I can transfer all of my posts and comments, however). It’s not a coincidence that I now work for the company responsible for Blogger’s infrastructure. Becoming more familiar with that platform can only really be a good thing for me here. Blogger also gives me a couple of options which WordPress doesn’t. Thoughts?

…In Which I Go Full Nerd

Jet lag is a funny thing. Right now it actually seems to be working in my favour; it’s managed to knock a couple of bad habits out of me. Specifically, these happen to be the not entirely unrelated habits of going to bed too late (then making it later by reading for a good long while) and getting up too late. Right now I seem to be fighting to keep my eyes open by around nine, and then being wide awake by seven. Which is more or less the position I found myself in on Sunday. Since day one at Google camp was a couple of days away I thought I’d check out my immediate surroundings.

This may come as a bit of a surprise to you, but there isn’t actually a lot to do in Mountain View. One of the things there is to do, however, is the Computer History Museum, which I’d been told is exactly as awesome as it sounds. In case my meaning isn’t clear: really awesome. There is no sarcasm here. Look at my face. Awesome. This is not my sarcastic face. Awesome. Face. Awesome.

I have a rental car, but the brakes scare the shit out of me, and the place didn’t look too far away, so I decided to walk. Now, I’d been warned that no one walks in America, but I wasn’t quite prepared for it to be true. I must have walked 5 miles on Sunday, saw a grand total of perhaps 3 other pedestrians, and found that drivers looked at me as though I was a crazy person. I think perhaps one reason for this might be that the pavements (or sidewalks, if you like) are… well… shit. Any time you have a height difference of more than an inch between two slabs… that’s bad.

Slightly thankful that there were no other pedestrians to see me trip, I arrived at the building in question. Externally, it’s kind of neat. You might mistake it for the headquarters of some hip new tech startup. If it wasn’t for the big sign saying “Computer History Museum” outside, obviously. Inside, though, it reminded me quite a bit of the Science Museum (“Which science museum?” “The Science Museum”). It’s nowhere near as grandiose, and has a much narrower focus, but the comparison feels apt.

The scope of the exhibits is quite impressive, starting with slide rules and abaci, moving through Babbage (oh, I’ll come back to Babbage), on to Turing and right up to the present day. Here are a couple of examples of things which made me smile:

The Altair 8800, quite an important machine in the history of Microsoft, of which the fictionalised version of Steve Jobs in Pirates of Silicon Valley says “I never had any problem with the Altair… until I tried to use it.”

You know what’s better than that, though? A computer made out of wood. If you bought an Apple I you received a box of parts and some schematics. You had to supply the case yourself. You know what else is awesome? UNIX is awesome:

You see? There’s a badge and everything. What says awesome more than a badge? Oh wait, I know:

Oh yeah. That’s right. I bet you wish you were cool enough to have that licence plate. As a side note: I wonder if anyone does have that licence plate, since I assume this one isn’t real. Furthermore: what kind of car would you put that on? This is fodder for Pimp My Ride right here. They should get on that (“Yo, we heard you like UNIX…”).

There was a lot more at the museum. Too much, in fact. I arrived about an hour after it opened and literally left as they locked the door behind me. I probably skipped about half of the section on the internet, and only had time for a brief look at the exhibit on the history of computer chess. Did I mention that they have half of Deep Blue? They also have something else very, very cool, and that’s one of the two Babbage Difference Engines which we only very recently developed the capability to actually build:

You really can’t do justice to this thing in a photograph. It’s beautiful. A marvel of engineering, it actually works exactly as Babbage said it would, and he built it entirely on paper. In 1849. Somebody should build an Analytical Engine. That, my friends, would truly be something.

One last thing:

I think this might be THE teapot.

Update: Someone is building an analytical engine!

Another update: there is a follow-up post here.

Magic!

Okay, you’ve probably heard about this already, since it seems to be spreading across the internet like some sort of tube based wildfire/hot cakes hybrid, but it’s still awesome. Check this out:

This is exactly the sort of thing I love to see as an app on a smartphone. It’s a very cool use of augmented reality, it’s actually very useful, and it has a near perfect user interface. As in: the interface (and the device itself) essentially becomes invisible. You’re left looking through a magic window into a world in which you can read all of the writing. I’ve been playing with it and it’s very, very cool. It’s not perfect yet, and tends to deliver imperfect translations (occasionally with hilarious results) and mangled grammar, but it’s close enough most of the time for you to understand the general meaning. I recommend downloading the (free) app just to play with the demos. These let you either remove words from the scene or reverse the order of the letters; the language packs cost extra. You can find the developer’s website here.

Disclaimer: this may sound like an advert, but it’s not. I’ve received nothing from these guys. I just think this is really, really cool.

The Elephant in the Room

Since I haven’t been able to do any actual work on my Clockwork Aphid project of late, I suppose I may as well talk about the background behind it a little more. Those who talk about it the most are the ones doing it the least, and all that. I’ve spoken a little about virtual worlds before and focussed almost entirely on World of Warcraft, because it’s the big one. It’s not the only MMORPG, and it definitely wasn’t the first, but it is the one I have the most experience with, and statistically the one most other people are likely to have experience with as well.

There are several other virtual worlds I really should talk about, but the elephant in the room is another extremely large, and very notable, virtual world. One which has double relevance, because I’ve made an oblique reference to it already in another post.

This is a virtual world whose currency has an exchange rate with the real world, and sale of virtual goods within this world has turned people into real life millionaires. There exist architectural practices whose entire portfolio exists “in world.” Sweden, among several other countries, has an embassy in this virtual world, and presumably pays staff to work there. Several musicians have given live concerts there (don’t ask me how that works). This virtual world is not itself a game (as you may have gathered), but it has the infrastructure which has allowed people to build games inside it. Despite all this, though, it has a reputation of, for want of a better word, lameness.

This is, in and of itself, slightly frustrating, because I can’t help feeling that it could be awesome. It should be awesome. It bears more than a passing resemblance to the “Metaverse” from Neal Stephenson’s fantastic Snow Crash, you see. I presume you’ve read Snow Crash? No? Well go and read it. Now. It’s okay, I’ll wait until you’ve finished.

Done? Okay, good. Those are some big ideas, right? Yes, I thought she was a little young, too. Anyway. In case you just went right ahead and skipped over my suggestion there, the Metaverse can be summarised thus:

The Metaverse is our collective online shared space, created by the convergence of virtually enhanced physical reality and physically persistent virtual space, including the sum of all virtual worlds, augmented reality, and the internet.

I’m talking, of course, about Second Life. If you’re not familiar with it, it looks a bit like this:

 

One thing you might notice right away is that the graphics have a bit of a low-fi look about them, and there’s a reasonably good reason for this*. In our old friend World of Warcraft, the graphics aren’t exactly stellar either, but they’re much sharper than this. In WoW, by and large, the landscape doesn’t really change, unless (topically) a large new expansion is released which results in sweeping changes to the world. When this does happen, the game forces you to download the changes before it lets you start playing. This might be a lot of data (in the order of gigabytes), but it doesn’t happen often. As previously noted, the World of Warcraft is essentially static. Not so Second Life, whose landscape is built by its users. Just because a location contained an island with the Empire State Building rising out of it yesterday doesn’t mean that you won’t find a scale replica of the starship Enterprise there tomorrow. Thus, the content of the game is streamed to the user as they “play,” and the polygon counts need to be kept reasonably low so that this can happen in a timely fashion. Even so, you might teleport to a new location, only to find that the walls appear ten seconds after the floor, and then suddenly you’re standing in the middle of a sofa which wasn’t there a second ago.

The issue with Second Life, for me at least, is that it’s not as immersive as I want it to be. I don’t feel as though I’m connected to it. I feel restricted by it. There’s something cold and dead about it, much like the eyes of the characters in The Polar Express. Something is missing, and I can’t quite put my finger on what it is. Also, sometimes the walls appear ten seconds after the floor. That said, it is a fully formed virtual world with a large population and a proven record for acting as a canvas for people’s ideas. Given that the point of Clockwork Aphid is to tell stories in a virtual world (I mentioned that, right?), why not tell those stories in Second Life?

This is an idea I’m still exploring, and I keep going backwards and forwards on it, because I’m still not sure if the juice is worth the squeeze. I’d get an awful lot of ready-built scope and a huge canvas to play with, but I’m not sure if it’s the right type of canvas. This is a canvas which comes with no small number of restrictions, and I would basically be attaching my wagon to a horse which was entirely outside of my control. Mixed metaphors could be the least of my worries. That said, did I mention that people have become millionaires trading inside Second Life? Then again, Second Life doesn’t exactly represent a living, breathing virtual world, so much as the occasionally grotesque spawn of its users’ collective unconsciouses. Sometimes it’s not pretty; other times quite impressive results emerge.

Your thoughts are, as always, both welcome and encouraged, below.

* To be fair, the graphics in Second Life are actually a lot better than they used to be.

The Journey Home

Author’s note: This post was actually written on Sunday. As it turns out, writing it on the iPad was no problem at all, but actually posting it (with the picture and links) was a different matter entirely.

You know, the train journey from Edinburgh to Doncaster is really quite beautiful, for the most part. A reasonable amount of it happens within sight of the sea, and most of the rest passes through open country. The Yorkshire Dales are a landscape I find quite pleasing to look out over, unless I’m driving through them, in which case I find they mostly feel endless. Some of the towns you pass through have more charm than others, of course. Newcastle isn’t without its fans, and Berwick-Upon-Tweed is gorgeous, but I can’t see anyone wanting to put Doncaster on a postcard any time soon.

Why the sudden reflection? I’m on the way down to visit my parents for a couple of days, and for the first time in years I’m not taking a “real” computer with me. The implication here, of course, is that I’m taking something which is not a real computer, or at least something which some don’t consider to be one. I picked up an iPad on my recent work trip down under (the AU$ was doing quite well against the US$ at the time, which knocked a considerable amount off the price as far as I was concerned) and that’s the gadget I have with me. Yes, I’m typing this on an iPad. I actually typed the majority of this post on it as well, using the on-screen keyboard, and it wasn’t too bad at all. The biggest problem was that it reduced the effective screen area so much. Right now, though, I’m using a Bluetooth keyboard for the typing. The iPad itself is resting on the crappy little shelf attached to the seat in front (angled using one of these cases, which I highly recommend), while the keyboard is sitting on my knee, actually under the shelf. All in all, this setup is one hell of a lot more comfortable than a laptop would be in these circumstances. So that’s definitely a win.

I didn’t buy this thing to replace a laptop exactly, though. That would be silly. Nor did I intend to replace a smartphone with it. That makes no sense. What, then, is it actually for? This seems to be the number two reaction to seeing the thing, in my experience, number one being “cool!”, with “it looks just like a giant iPhone!” quite high up the list as well. No, what I intended to replace with this gizmo is one of these:

As a general rule, I need to write things down more. I should probably make more notes of ideas and such which occur to me when I’m not in a position to do anything about them, and I find that todo lists are basically a necessity when it comes to keeping myself organised. Paper is pretty good for this, as a general rule. I’m pretty fond of my squared-paper Moleskine journal, and when it comes to just scribbling things down I’d say it’s pretty much unsurpassable. But, and there’s usually a but, carrying it around at all times isn’t exactly practical, and it doesn’t have an erase (or move) function, which isn’t ideal when you suddenly realise that the diagram you’ve been working on for the last half an hour really needs to be about an inch to the left if you’re actually going to fit the whole thing on the page.

If you’re using Evernote for your notes, though, then jotting things down with a phone (which I do carry at all times) becomes an option. Throw in a couple of other applications and scribbling, brainstorming and generally playing with ideas becomes a legitimate possibility. A possibility contained in something no larger than a Moleskine, which does have an erase, and indeed a shift-a-bit-to-the-left, function. Todo lists are a different matter, though, and I’m going to come back to those later.

As a final note here, it should probably be quite obvious that I’m not going to be doing any Clockwork Aphid related work in the next couple of days, as hacking is not currently a legitimate possibility on the iPad. Someone really needs to write an app which interfaces with Eclipse (or any other IDE) on your desktop and turns it into a context-sensitive programmer’s assistant keyboard type thing, though. Expect good things to show up towards the end of the week, however.

Fractal Errata

Some of the particularly sharp/anal ones amongst you might have noticed that while the technique for generating fractal landscapes I previously described works (and works well), it’s not 100% correct. Specifically, the fact that it uses the same scaling factor for nodes created by the diamond and square steps isn’t quite right.

Why is this? Because they generate nodes which adhere to different levels of detail, that’s why. Let’s go back to the last diagram from the post which described the algorithm:

Diamond step (left) and square step (right).

Now, while you’ll note that both steps add nodes that can be addressed using fractions with two as their denominator, the distance from the nodes created by the diamond step to their parents is greater than for those created by the square step.

The nodes created by the square step are orthogonal to their parents, so the distance between them is proportional to a half, which, as luck would have it, has the same denominator as the fractions used to address the node. How convenient!

The nodes created by the diamond step, on the other hand, are diagonal to their parents. This means that the distance to their parents comes from applying Pythagoras to that same fraction, so in this specific case:

sqrt(½*½ + ½*½) = sqrt(¼+¼) = sqrt(½) ≈ 0.707

Once again, the key fraction used to work this out has the same denominator as those used to address the node in the landscape. Thus, if d is equal to the denominator we’re using to address a node, the basic scaling factor used to offset a new node from its parents would be the following:

if (diamond step) range = [-sqrt(1/d*1/d*2), sqrt(1/d*1/d*2)]

else range = [-1/d, 1/d]
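
To make that concrete, here’s a minimal Java sketch of how the corrected offset might be calculated. None of these names are lifted from the actual Clockwork Aphid code; it’s purely an illustration of the two cases:

import java.util.Random;

public class OffsetCalculator {

    private static final Random RANDOM = new Random();

    // Returns a random offset in [-range, range], where the range depends on
    // how far the new node is from the parents it is interpolated from.
    public static double randomOffset(int d, boolean diamondStep) {
        double range;
        if (diamondStep) {
            // Diagonal parents: sqrt((1/d)^2 + (1/d)^2) = sqrt(2) / d
            range = Math.sqrt((1.0 / d) * (1.0 / d) * 2.0);
        } else {
            // Orthogonal parents: 1 / d
            range = 1.0 / d;
        }
        return (RANDOM.nextDouble() * 2.0 - 1.0) * range;
    }
}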

As I said before, this won’t make a lot of difference, but it will be more correct and that’s important to some people. Myself included.

For comparison purposes this is the effect this change has on the example landscape I’ve been using:

The original scaling method.
The updated scaling method.

There’s some difference visible, but not a huge amount. Mostly, it’s just increased the range the data are occupying and expanded the bell curve accordingly. Hence, more high points and more low points, but the land is the same basic shape.

Now In Eye Popping 3D!

It took a little bit of fighting with bugs that weren’t showing up in the 2D view, and a bit of time to figure out what was going on with the lighting system in JME, but I finally got the 3D display of the fractal landscapes working.

The first stage was just displaying each node as a discrete point so I could see that each was in about the right place. It looks a little bit like this:

 

Fractal landscape as points (click for bigger).

 

I did this by simply piping the spatial coordinates and colour information of each node into a pair of standard Java FloatBuffers, passing these to a JME Point class (which should really be called PointSet, in my opinion) and attaching this to the root display node of my JME application. The colouring scheme is the same as the one used for the 2D display. Some things don’t look quite right, largely due to the fact that I’ve just drawn the “underwater” points as blue, rather than adding any actual “water.” Don’t fret, it’s on the todo list.
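
In case it helps, the buffer-filling part looks roughly like the following. The node accessors here are hypothetical, and the final jME call is left as a comment because the exact constructor arguments depend on which version of the engine you’re using:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.util.List;

public class PointCloudBuilder {

    // Hypothetical stand-in for the real landscape node class.
    public interface LandscapeNode {
        float sceneX(); float sceneY(); float sceneZ();
        float red(); float green(); float blue();
    }

    // Direct buffers are generally what the graphics layer wants to see.
    private static FloatBuffer directBuffer(int floats) {
        return ByteBuffer.allocateDirect(floats * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
    }

    // Packs every node's position and colour into a pair of buffers.
    public static FloatBuffer[] buildBuffers(List<LandscapeNode> nodes) {
        FloatBuffer positions = directBuffer(nodes.size() * 3);
        FloatBuffer colours = directBuffer(nodes.size() * 4);
        for (LandscapeNode node : nodes) {
            positions.put(node.sceneX()).put(node.sceneY()).put(node.sceneZ());
            colours.put(node.red()).put(node.green()).put(node.blue()).put(1f);
        }
        positions.flip();
        colours.flip();
        // The buffers would then be wrapped in a jME Point and attached to the
        // application's root display node.
        return new FloatBuffer[] {positions, colours};
    }
}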

That said, the landscape looks about right. All the points seem to be in their correct location. As a quick implementation note, I’m defining the (x, y, z) coordinates of the scene in the following way:

x = east

y = altitude

z = -north

With some scaling factors used to map the values from the [0,1] range used to address them to slightly more real-world dimensions.
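
Expressed as code, that mapping is nothing more exciting than this (the scale constants are placeholders rather than the values I’m actually using):

public class SceneCoordinates {

    private static final float HORIZONTAL_SCALE = 1000f; // placeholder value
    private static final float VERTICAL_SCALE = 150f;    // placeholder value

    // Maps a node addressed by (east, north, altitude), each in [0, 1],
    // into scene space: x = east, y = altitude, z = -north.
    public static float[] toScene(float east, float north, float altitude) {
        float x = east * HORIZONTAL_SCALE;
        float y = altitude * VERTICAL_SCALE;
        float z = -north * HORIZONTAL_SCALE;
        return new float[] {x, y, z};
    }
}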

The next stage was to display the landscape in wireframe, to make sure the connections I’ll be using to create a more solid looking polygon-based display are all working correctly. Why not just go directly to polygons? You can see the detail better in the wireframe display, making debugging much easier. I’ll definitely be using it again later.

This time, instead of piping each and every node into the vertex array, only the nodes at the highest level of detail are used. These are the nodes generated during the final “square stage” of the fractal algorithm, for those of you playing at home. Each draws a triangle (consisting of three separate lines) into the vertex buffer for each pair of parents it has in the landscape. The result looks something like this:

 

Fractal landscape as lines (click for bigger).
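
For the curious, the line pass itself boils down to something like this, called once per highest-detail node for each pair of its parents. The coordinates arrive as plain float arrays purely for the sake of the sketch:

import java.nio.FloatBuffer;

public class WireframeBuilder {

    // Writes the outline of one triangle into the vertex buffer as three
    // separate line segments: node-parentA, parentA-parentB, parentB-node.
    public static void addTriangleOutline(FloatBuffer lines, float[] node,
                                          float[] parentA, float[] parentB) {
        addLine(lines, node, parentA);
        addLine(lines, parentA, parentB);
        addLine(lines, parentB, node);
    }

    // Each line segment is just a pair of (x, y, z) end points.
    private static void addLine(FloatBuffer lines, float[] from, float[] to) {
        lines.put(from[0]).put(from[1]).put(from[2]);
        lines.put(to[0]).put(to[1]).put(to[2]);
    }
}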

 

Everything seems to be in good order there, I think. One or two things don’t look quite right, particularly the beaches, but the tessellation and coverage of the polygons look right. Here’s a closer look at some of the polygons, so you can see what the tessellation scheme actually produces:

 

Polygon tessellation (click for bigger).

 

You can (hopefully) see that each of the “active” nodes sits at the centre of a diamond formed from the shape of its parents, so it’s the points with four lines branching from them (rather than eight) which are actually being used to draw the scene.

Next: polygons. Again, only the nodes at the highest level of detail are used. This time, each inserts itself into the vertex buffer, then adds its parents if they’re not in there already. Each node remembers its position in the vertex buffer, and these indices are then used to draw the actual polygons. These are declared by passing the indices in sets of three into a standard Java IntBuffer. The buffers are then passed to one of JME’s TriMesh geometry classes and displayed, like this:

 

Fractal landscape as polygons (click for bigger).
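
Sketched out, the index-building step looks something like the following. The node interface and its methods are invented for illustration, and the jME hand-off appears only as a comment because the exact TriMesh API differs between versions:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.IntBuffer;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class IndexedMeshBuilder {

    // Hypothetical stand-in for the real node class: each active node knows
    // which pairs of parents it forms triangles with.
    public interface Node {
        List<Node[]> parentPairs();
    }

    // Builds the index buffer, assigning each node a slot in the vertex buffer
    // the first time it appears and reusing that slot afterwards.
    public static IntBuffer buildIndices(List<Node> activeNodes) {
        Map<Node, Integer> slots = new LinkedHashMap<Node, Integer>();
        List<Integer> indices = new ArrayList<Integer>();

        for (Node node : activeNodes) {
            for (Node[] parents : node.parentPairs()) {
                indices.add(slotOf(node, slots));
                indices.add(slotOf(parents[0], slots));
                indices.add(slotOf(parents[1], slots));
            }
        }

        IntBuffer buffer = ByteBuffer.allocateDirect(indices.size() * 4)
                .order(ByteOrder.nativeOrder())
                .asIntBuffer();
        for (int index : indices) {
            buffer.put(index);
        }
        buffer.flip();
        // The vertex positions would be written into a FloatBuffer in slot
        // order (iterating over the map's keys), then both buffers handed to
        // a jME TriMesh and the result attached to the scene.
        return buffer;
    }

    private static int slotOf(Node node, Map<Node, Integer> slots) {
        Integer slot = slots.get(node);
        if (slot == null) {
            slot = slots.size();
            slots.put(node, slot);
        }
        return slot;
    }
}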

 

Again, the beaches don’t look quite right, but otherwise I’m reasonably pleased. I still need to add the actual water and improve the form of the landscape itself (and about a million other things), but in terms of display this is looking pretty good. Except for one thing: I’m using far more detail than I need to. Let me illustrate this with a slightly more extreme example. The pictures I’ve posted so far were generated using seven iterations of the diamond square algorithm. Here’s what happens when I use ten iterations (remember, the number of points increases exponentially):

 

MOAR POLYGONS! (click for bigger)

 

On the bright side the beaches look better, but that’s a lot of polygons. Far more than we actually need to display. 1,579,008 polygons, in fact. We need to reduce that somewhat if we’re going to make things more complicated and maintain a reasonable frame rate (I’m getting about 60fps with this view at the moment). You can see the problem more clearly if I show you the same view using lines rather than polygons:

 

MOAR LINES! (click for bigger)

 

You can just about see the individual triangles up close, but further away the lines just mush together. I think we can afford to reduce the level of detail as we get further away, don’t you?

Well, I’ll get right on that, then…