Planetary Map Data Model

I’ve been doing a lot of thinking lately on how planetary data should be stored/manipulated.

Why? Because the data structure behind a planet determines how big the planet can be, what operations can be done over it, what can’t be done, and so on.

As a bit of a primer: There have been a number of discussions on planetary data modelling in the past, usually implicit within the question of planetary size, or something similar. I’ve noticed that the trend in all of them has been to assume a planet would be stored as some sort of regular array of data, sampled over a regular (or close-to-regular, for example, sphere-cube) grid. Thus, planetary scale is constrained primarily by max resolution, and all algorithms run over a regular grid with some fudging, since the size of a grid cell changes depending on where on the planet it is (look at a picture of a quadrilateralized sphere-cube and note the distortion at the corners). There were additional ideas of using, for example, quadtrees to model areas where less resolution is required.

What I realized, a little while ago, was that we don’t have to do it this way. I had been bashing my head against the problem of how to do plate tectonics in such a datastructure – essentially, a problem of translating and rotating a rasterized polygon, with the additional issue (a biiiig one) of having to do it on a distorted canvas. I thought to myself, well, why can’t I just do straight geometry, and just write the results to the grid? That’s certainly possible, so then I wondered why we need the grid at all.

Essentially, the idea I’ve taken this long to get to is this: Model the planet using a datastructure capable of efficiently rotating arbitrary polygons over the surface of a sphere, and finding their (possibly approximate) intersections. Rotation can be done easily with quaternions, especially when rotating lots of points by the same quaternion. Intersection is a lot harder – if we use arbitrary polygons, we’ll need to either convert a clipping algorithm for use on the surface of a sphere, or work out a suitable approximation.

This is where I started spitballing ideas for alternatives to working just with polygons (and you’re all free to join in on that).

Here’s one idea that intrigued me:

  • Instead of storing polygons, store sets of “thick line segments” (approximately rectangles)
  • Intersections are approximate: using some form of spatial partitioning, make intersection-finding between sets of lines run in O(n log n) time, then produce a resulting line segment using some to-be-determined computational geometry
  • Imo, this would work best when the segments are short, thin, and plentiful; and would break down completely with very long/thick line segments which are distorted greatly due to being on a sphere

To help steer our discussion, here are a few questions we could start with:

  • How do we generate terrain to render from this?
  • How do we get environmental data for the biological simulation from this?
  • How do we model land-use etc for strategy stages with this?

I have plenty of ideas for all of these, but I’m going to stop myself here for now to get input from you guys.

I’m not sure I completely understand (I’m no expert in data structures).

Are you suggesting covering the sphere in small polygons (approximately rectangles) and then sticking lots of these little polygons together to form a tectonic plate? Then the boundary of the plate is the jagged edge of all the polygons of which it is made?

Could you use a physics engine to run this? Make the small polygons into collision objects and then have them float around, a bit like a fluid? If they could stack on top of each other (move in 3d) then you could generate mountain ranges that way.

Moreover you could use each mini polygon as a tile for environmental / strategic data and then assume they have a uniform climate across the whole thing. Just an idea.

I should post some more of my thinking:

I have a vague idea of most of the planetary generation process:

  • Tectonics:
      • A bunch of convex polygons (for scale, each somewhere between the size of France and India) are created on the planet’s surface. These are cratons: the large, strong rocks that clump together with weaker rocks and each other to form continental plates. There wouldn’t be all too many – about 30 for Earth, for example.
      • The planet is divided somehow into plates – probably using the cratons as a starting point, since plate boundaries will never pass through cratons.
      • Each plate moves in a certain direction with a certain rotation, and has a bunch of stuff sitting atop it – mountains, land, sea, life, etc.
      • Important stuff happens at plate boundaries – more land is added to plates at divergent boundaries, mountains form at convergent ones.
      • Hotspots: static points relative to the planet’s reference frame, which produce volcanic chains on the plate passing over.
  • From the tectonic sim, we would get a planet full of regions of different geologic type:
      • Orogeny: mountains/plateaus, with some metadata for age (for erosion, height, etc.), and maybe other metadata as well.
      • Assorted continental plate: perhaps with a few subtypes, useful for knowing what minerals would be contained within for strategy stages.
      • Continental rift
      • Oceanic plate
      • Oceanic rift
      • Ocean trench
      • Volcanic island/chain: not sure if single islands are better, or archipelagoes with data within for islands.
  • Ocean currents:
      • Cut the oceans up into large gyres: polygons (possibly convex) with a circulation pseudovector.
      • Currents have a smoothing effect on water temperature within them.
  • Next we do climate, to produce temperatures and rainfalls. Everything here is just about covered in this thread.
      • From basic temperature data, planet size, and rotation rate, divide the planet into atmospheric cells.
      • From axis tilt and continental placement, create approximate north/south extents of the ITCZ.
      • Use diagonal sweeplines to cast rain paths from the edges between the cells, along prevailing winds.
  • A couple of steps would happen in an order I’m not sure of yet:
      • Route rivers, working similarly to what this paper does.
      • An erosion pass: based on climate data, modify the metadata of land zones (particularly orogenic zones) to clarify what the topography is like, useful for the simulation and for rendering.
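To make the craton step concrete, here’s a minimal Python sketch – the function names, the tangent-plane sampling trick, and the parameter choices are all my own assumptions, not a settled design:

```python
import math, random

def normalize(p):
    n = math.sqrt(sum(a * a for a in p))
    return [a / n for a in p]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def convex_hull(pts):
    # monotone-chain convex hull in 2D
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def turn(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and turn(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

def random_craton(center, ang_radius, n_samples=12, rng=random):
    # Sample points in the tangent plane at `center`, hull them in 2D,
    # and project the hull back onto the unit sphere.
    center = normalize(center)
    ref = [0.0, 0.0, 1.0] if abs(center[2]) < 0.9 else [1.0, 0.0, 0.0]
    u = normalize(cross(center, ref))     # tangent-plane basis vector 1
    v = cross(center, u)                  # tangent-plane basis vector 2
    pts = [(rng.uniform(-ang_radius, ang_radius),
            rng.uniform(-ang_radius, ang_radius)) for _ in range(n_samples)]
    return [normalize([c + x*a + y*b for c, a, b in zip(center, u, v)])
            for x, y in convex_hull(pts)]
```

The tangent-plane shortcut keeps the hull math flat and simple; it should hold up fine at craton scale, but would distort for continent-sized polygons.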

The result would be a planet full of hopefully enough environmental data for the CPA simulation, and now I think I should conclude this post, as it’s already taken me far too long to write.

Ah yes, the point I was going to get to but never did: I listed the above to help clarify the requirements of the underlying data model. In so doing, I think I’ve made myself more confident that something involving polygonal intersection is the way to go.

I also came up with an obvious-in-retrospect alternative to using thick line segments – circles. The math is simpler, and they would be more useful in certain cases, but in most cases I think they would be less useful.


Wow this sounds really awesome.

I really like how hierarchical it is – each new piece of data rests on the one below it. I think that could allow for some really awesome effects: you could change the orbit of a planet and see the climate change, or mega-weapons could affect the plate tectonics.

Circles do sound like an interesting way of doing it; by tying together circles of different sizes you could pretty much make any shape. Moreover, the only data stored would be the centers of the circles, their radii, and which other circles they are affixed to – you wouldn’t need any shape data.

Yeah, circles are simpler and easier for bulky shapes (which many would be), but when I originally got to this idea, I was considering how to deal with the boundaries of moving plates – and it seemed easier to me to deal with that using polylines.

We could certainly use both lines/polylines and circles as primitives – circle-line collision is easy, and approximate intersection would just be a bit more cheap math on top.

Although your algorithm sounds well thought out and feasible, I always thought planets in Thrive would be generated a different way, so I am just going to add my two cents.

My understanding was that at the start of the game the planet would be created procedurally (I have a couple of really good articles about this if you would like to read them). Afterwards, this planet would be stored as a heightmap: basically, every mile (this number is off the top of my head – might be too big or too small), you take a point and find its height.

Plotting the resulting points on a sphere and connecting them to get triangles gives us a very coarse outline of the original planet. Assuming you are looking at it from space, at a far-away distance this planet would look exactly like the original one. As you zoom in, you would procedurally add detail by using noise maps, fractals, and quadtrees. Although every single time you come to a specific part of the planet it will be slightly different (because you do all of this procedurally), this difference shouldn’t be large enough to be noticeable or affect gameplay in any way.

To back up my algorithm with a proof-of-concept, I bring up Outerra:

This amazing replication of the Earth relies on a dataset that gives the height of the earth every 150 metres. Another example would be Proland, which, although not as beautiful, has a lot of papers dedicated to explaining the modeling of realistic trees, rivers, clouds, etc. (since we are on the topic of trees, I want to share this video by Soeren):

I don’t know if you know this, but procedural modeling and rendering of virtual terrains is something I am planning to do professionally in a couple of years, and, as a result, I have an extensive library of research papers that allow for the creation and real-time rendering of many aspects of nature that I could share with you.

I also bring up this book, which describes the procedural creation of planets with dynamic LOD in OpenGL:

For plate tectonics, we could simply create a slope field covering the surface of the planet (which would simulate the convection currents of magma) and move the points of the heightmap accordingly. These points could then be surrounded by lines to create continents.

moopli that sounds cool. I guess wrapping a poly-line around a cluster of circles (or filling a polygon with circles) would be relatively easy so both formulations are somewhat equivalent.

The Creator awesome, didn’t know you had so many talents!

The thing about even a coarse heightmap is that the dataset is huge.

My point is that we should not care about the shape of mountain x, only that it is one of many mountains in the area.

If we are to generate heightmaps, then we would have to put a lot of work into going “backwards”, distilling a heightmap into data about drainage basins, mountainous areas, etc.

Luckily, we don’t have to do that. Many, maybe even most, methods for creating realistic-looking terrain involve lots of steps with conditionals carefully tweaked, so, for example, you don’t get fjords in the middle of a flat desert.

Outerra doesn’t help, because you have to get the original dataset from somewhere, and it will only be as realistic as your generation method. I’ve essentially detailed a physically-based planetary-generation method above, and the fact that it is physically-based is important, as not only does that mean that we can get a realistic result for an earth-like planet, but we can get realistic results even with different parameters.

I’ve read through dozens of papers on terrain generation, and almost all of them focus on realism in the small scale (ie, mountains which look like mountains, etc), but neglect realism in the large scale, which is what we actually need for this game. We do not need mountains which look good. I would love to have mountains which look good, I would love to work with you on making them happen, but they are not needed for the actual simulation that forms the core of the game.

So what I’m saying is that what you thought the game would do, and what I’m suggesting the game do, are (almost entirely) non-conflicting. My algorithm above would spit out a bunch of info, saying such things as:

  • This area over here (which coincidentally is the size of France) is a bunch of cold sand dune desert.
  • This area over here has these species in these amounts.
  • This area over here has a lot of mountains which are about this eroded, with fjords and a lot of volcanism.

From that data, we can generate terrain for rendering, on the fly, possibly with some precomputation if necessary, and only for the area which our species will be exploring for this round. And we can do so using whatever methods you like.

Sorry if I sound snippy, btw, I just escaped a heated argument IRL. I could edit this for tone once I’ve chilled out a bit.

Yeah, you’re right about that. The Outerra dataset is 15 GB, which is a lot even for the most modern computers.

I was thinking of using something along the lines of this video that I shared a while back:

It generates a continuous terrain with mountains, plateaus, and rivers based on a map that shows how fast the height is changing at any given point. The good thing about this algorithm is that it uses a method that is similar to how terrain is actually formed in real life. But as you mentioned, storing this kind of data would involve tons of space that we don’t have.

Truer words have never been said. If I may suggest one thing, though: would it be possible to somehow create different ecosystem patches (mountains, wetlands) and then populate them with terrain generated on a small scale, with a gradual transition between them?

Never said they were :slight_smile: I was just trying to contribute to the discussion with my ideas. After reading through your post again, however, I do see that I slightly misunderstood what you were talking about.

Oh hey, that’s the paper I thought I was referencing. I guess I referenced the wrong one :stuck_out_tongue:

From basic rainfall and geologic data we can generate rivers much like how that video shows, and from there we could generate terrain for the parts of the planet the player needs to see.

If I may suggest one thing, though, would it be possible to somehow create different ecosystem patches (mountains, wetlands) and then populate them with terrain generated on a small scale with a gradual transition between them?

I don’t see any reason why not.

Never said they were :smile: I was just trying to contribute to the discussion with my ideas. After reading through your post again, however, I do see that I slightly misunderstood what you were talking about.

You brought up good points. Happy to clarify :smiley:

More thoughts on the collision-testing datastructure:

One way to go about it is to use a quadrilateralized sphere-cube, with some form of spatial partitioning on each face (quadtree, simple grid, etc.).

  • But how do we efficiently bin the collision primitives?
  • And how do we efficiently collision-test primitives in the same bin?

Primitives can be larger than bins – for example, a large circular prim, which forms the core of a tectonic plate – so we can’t simply bin by location, we have to find all the colliding bins. I think the math could be done, but it’s very messy and it might end up being less efficient than simple brute-force because binning is hard and the primitives will move a lot, and thus need to be re-binned.

Then again, each primitive will likely only need to be moved and collision-tested a small number of times for every step of the planet simulator, and each step only happens once during every timeskip, so there may be a case against worrying too much about optimization for large objects.

Maybe we could have a two-part system: brute-force circle-circle collision (I think I’ve settled on using just circles instead of overly-complex lines) for the large primitives, and a second layer with a quadtree-faced quadrilateralized sphere-cube for sufficiently small objects. …Of course, in that case we would still have to bin the large primitives…

Working a bit more on the math:

To translate a point on the surface of the sphere is to rotate it by a certain angle around a specific axis. In quaternion terms, the rotation quat would look like:

q = [cos(theta/2), sin(theta/2) * v]

Where theta is the rotation angle and v is the axis, in cartesian 3D.
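As a quick sanity check of that formula, here’s a minimal plain-Python sketch (helper names are mine; a real implementation would batch many points per quat):

```python
import math

def axis_angle_quat(axis, theta):
    # q = [cos(theta/2), sin(theta/2) * v]; axis must be a unit 3-vector
    s = math.sin(theta / 2)
    return [math.cos(theta / 2), axis[0]*s, axis[1]*s, axis[2]*s]

def quat_mul(a, b):
    # Hamilton product of quaternions stored as [w, x, y, z]
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return [aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw]

def rotate_point(q, p):
    # p' = q p q*  (conjugation of the pure quaternion [0, p])
    q_conj = [q[0], -q[1], -q[2], -q[3]]
    w = quat_mul(quat_mul(q, [0.0, p[0], p[1], p[2]]), q_conj)
    return w[1:]

# rotating lots of points by the same quaternion reuses one q, many conjugations
q = axis_angle_quat((0.0, 0.0, 1.0), math.pi / 2)
print(rotate_point(q, (1.0, 0.0, 0.0)))   # ~ (0, 1, 0)
```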

To collision-check two circles on the unit sphere, you check whether:

r1 + r2 >= 2 * asin(||c1 - c2|| / 2)

Where r1, r2 are the angular radii and c1, c2 are the circle centers as unit 3-vectors (so ||c1 - c2|| is the chord length between them).

This is all well and good; however, I don’t like how both of these have trig in them. In the first case it is probably possible to work entirely with quats and thus avoid trig, but I’m not sure how we can do trig-free collision testing. Maybe the trig will be cheap enough that it doesn’t matter, but I have my doubts.
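For reference, the check above as a hedged sketch (it assumes c1 and c2 are unit vectors, i.e. a unit sphere, and the function name is my own):

```python
import math

def circles_collide(c1, r1, c2, r2):
    # c1, c2: circle centers as unit 3-vectors on the unit sphere
    # r1, r2: angular radii in radians
    chord = math.dist(c1, c2)          # straight-line distance through the sphere
    dist = 2 * math.asin(chord / 2)    # great-circle distance between the centers
    return r1 + r2 >= dist

# circles centered on the x- and y-axes are pi/2 apart along the surface
print(circles_collide((1, 0, 0), 0.8, (0, 1, 0), 0.8))   # True  (1.6 >= pi/2)
print(circles_collide((1, 0, 0), 0.7, (0, 1, 0), 0.7))   # False (1.4 <  pi/2)
```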


What if you covered the sphere in a grid (by mapping from a cube) and then a plate would be a list of the grid points it covered? Then you only let the plate move in the eight grid directions (so it always snaps to the gridpoints). A complex movement could be made up of lots of small steps, so you could go left, left, left, down, for example. The plate could have a “real” direction of movement that would, over time, be approximated by these discrete steps.

That way a collision would be very obvious, you would just compare the lists of gridpoints of each plate. You could then look for any gridpoints that weren’t covered by a plate and then fill them in with volcanoes / rifts.

If you had a lot of gridpoints this could become very smooth. Moreover, you could fit a Bézier curve around the edge when you display it to the player so it looks nice and smooth.

What do you think? Is this crazy?

It’s not crazy. However, I have considered this (that was where I was discussing the merits of spatial partitioning above), and making a “nice” grid on a sphere’s surface is not trivial. You’re trying to get around ugly binning computation by making all primitive motions discrete instead of continuous. However, that is not going to work unless we accept an egregious amount of distortion.

Mathematically, you’re suggesting we translate the shapes over the surface of the cube instead of over the surface of the sphere. It’s bound to be much cheaper, because the grid of test points (or of collision-testing spatial-partitioning bins) is regular over the surface of the cube. However, each face of the cube gets strongly fisheye-distorted upon projection to the sphere, so as a shape moves over the surface of the cube, the spherical projection changes shape and size. Since we actually care about what happens in sphere-space, that’s not good.
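To put a rough number on that distortion (assuming the usual “normalize the cube point” mapping; the area-scale formula below is its standard Jacobian):

```python
import math

# A cube face (the x = 1 plane, say) maps onto the unit sphere by
# (u, v) -> normalize((1, u, v)).  The local area scale is
# dA_sphere / dA_face = (1 + u^2 + v^2) ** -1.5, so equal-sized face
# cells shrink toward the corners when projected onto the sphere.
def area_scale(u, v):
    return (1 + u*u + v*v) ** -1.5

center = area_scale(0.0, 0.0)   # 1.0 at the face center
corner = area_scale(1.0, 1.0)   # 3 ** -1.5 at a face corner
print(center / corner)          # a corner cell covers ~5.2x less sphere area
```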

Maybe the answer is to try a geodesic subdivision instead – a sphere-cube is nice mainly because it’s a simple, low-distortion way to impose an easy-to-work-with cartesian grid onto a sphere, most importantly so a single texture can be spat onto a sphere with minimal fuss and distortion. If we look into geodesic subdivisions for spatially partitioning collision-set data and use the sphere-cube just for, say, cheap planet rendering from far away, that could work perfectly fine.

I’m just going to throw a crazy idea out there. What if instead of dividing a sphere into squares as @tjwhale suggested, we divide it into hexagonal tiles similar to the civ games. Here is an example of what I mean:
Although normally geometry with hexes is hard, if all we need to do is find a bordering hex, that should be fairly straightforward.

I quite like that. Basically a planet goes from being a sphere to being a polyhedron.

Problem is, that’s still not regular – you need a couple of pentagons (twelve, in fact) – it’s less isotropic than a geodesic triangle tessellation, and while it’s quite a pretty way to divide a planet if we want to discretize everything (which I really don’t want to do), it doesn’t have the properties we need for a nice partitioning scheme for easily binning circles for collision testing.

In more detail, we need much more than simply checking neighbors – we need to be able to find the (approximate) intersections of arbitrary circles (or arcs thereof, possibly) on the surface of the sphere. If I have a datastructure that will do that cheaply for me, everything else falls into place.
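For what it’s worth, exact circle-circle intersection on the sphere is just plane geometry: each circle is the sphere cut by a plane, so intersecting the two planes and then the sphere gives the points. A sketch (names are mine, not a settled API):

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def circle_intersections(c1, r1, c2, r2):
    # Circle i is {x on the unit sphere : ci . x = cos(ri)}, the sphere cut
    # by a plane.  Solve x = a*c1 + b*c2 + t*(c1 x c2) against both planes.
    h1, h2 = math.cos(r1), math.cos(r2)
    d = sum(p*q for p, q in zip(c1, c2))   # cos of the angle between centers
    denom = 1.0 - d*d
    if denom < 1e-12:
        return []                          # same/antipodal centers: degenerate
    a = (h1 - h2*d) / denom
    b = (h2 - h1*d) / denom
    t2 = (1.0 - (a*a + b*b + 2*a*b*d)) / denom
    if t2 < 0:
        return []                          # the circles don't reach each other
    t = math.sqrt(t2)
    n = cross(c1, c2)
    base = [a*p + b*q for p, q in zip(c1, c2)]
    return [[p + t*q for p, q in zip(base, n)],
            [p - t*q for p, q in zip(base, n)]]
```

For example, two great circles (angular radius pi/2) around the x- and y-axes intersect at the two poles ±(0, 0, 1).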

But maybe you are suggesting we discretize the entire simulation, from the ground up, so we can avoid (most of) these questions of computational geometry. Are you?

What about this: just use distance in 3D and ignore the spherical geometry altogether. I drew a picture:

Basically you approximate the distance on the surface with the straight-line distance through the sphere (which requires no trig). So long as the planet is big and the circles/plates/cratons are sufficiently small, this distance will be a good approximation.

Edit: Also, if you wanted to move the plates on the surface of the sphere, you could move them in 3d and then normalise their radius from center of the sphere. That would avoid trig completely.
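Both ideas in one sketch (assuming a unit sphere; since chord = 2·sin(arc/2), the chord underestimates an arc of length s by about s³/24, so the error really does vanish for small shapes):

```python
import math

def arc_dist(p, q):
    # exact great-circle distance between unit vectors (needs trig)
    return 2 * math.asin(math.dist(p, q) / 2)

def chord_dist(p, q):
    # trig-free stand-in: straight-line distance through the sphere
    return math.dist(p, q)

def move_on_sphere(p, step):
    # move the point in plain 3D, then renormalize back onto the unit sphere
    q = [a + b for a, b in zip(p, step)]
    n = math.sqrt(sum(a * a for a in q))
    return [a / n for a in q]
```

For nearby points the two distances agree to a few parts per million; for points a quarter-sphere apart the chord is about 10% short, which is where the "sufficiently small" caveat bites.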

I think this is correct, though I’m not 100% sure.

@Moopli and all, I’d like some feedback on this before I go any further with it.

Basically I have made a prototype for points moving on a sphere. It moves the points using quaternions for rotation. I think this is a good system, as it doesn’t have gimbal lock and can move the points in any direction (though this will need a bit more thought). For the collisions I’ve just used Euclidean distance between the points. I think this is enough as long as we make the points small. Here is a hard-to-understand video.

I imagine the next step is to make a raft full of points and have that be a tectonic plate. Then we can do collisions between the points on the edge of the raft, like this. Points which are locked together would always move together; ties could get made and broken over time. The points could be the patches.

Thoughts? (Code in the prototypes repo as always)
