I’ve been doing a lot of thinking lately on how planetary data should be stored/manipulated.

Why? Because the type of data structure behind a planet determines how big the planet can be, what operations can be done over it, what can’t, and so on.

As a bit of a primer: there have been a number of discussions on planetary data modelling in the past, usually implicit within the question of planetary size or something similar. I’ve noticed that the trend in all of them has been to assume a planet would be stored as some sort of regular array of data, sampled over a regular (or close-to-regular, e.g. sphere-cube) grid. Planetary scale is then constrained primarily by maximum resolution, and all algorithms run over a regular grid with some fudging, since the size of a grid cell changes depending on where on the planet it sits (look at a picture of a quadrilateralized sphere-cube and note the distortion at the corners). There were additional ideas of using, for example, quadtrees to model areas where less resolution is required.
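To put a rough number on that distortion, here’s a quick sketch in plain Python. I’m assuming the simplest cube-sphere mapping (gnomonic: normalize a point on the cube face onto the unit sphere); real implementations often use a tweaked mapping, so treat the exact ratio as illustrative.

```python
import math

def cube_to_sphere(x, y):
    """Gnomonic mapping: project (x, y) on the +Z cube face ([-1,1]^2) onto the unit sphere."""
    n = math.sqrt(x * x + y * y + 1.0)
    return (x / n, y / n, 1.0 / n)

def patch_area(x, y, h=1e-4):
    """Approximate sphere-surface area of a tiny h-by-h grid cell anchored at (x, y)."""
    p = cube_to_sphere(x, y)
    px = cube_to_sphere(x + h, y)
    py = cube_to_sphere(x, y + h)
    # Tangent vectors of the cell edges on the sphere.
    ux = tuple(a - b for a, b in zip(px, p))
    uy = tuple(a - b for a, b in zip(py, p))
    # Area of the cell is approximately |ux x uy| (cross product magnitude).
    cross = (
        ux[1] * uy[2] - ux[2] * uy[1],
        ux[2] * uy[0] - ux[0] * uy[2],
        ux[0] * uy[1] - ux[1] * uy[0],
    )
    return math.sqrt(sum(c * c for c in cross))

center = patch_area(0.0, 0.0)
corner = patch_area(0.999, 0.999)
print(center / corner)  # roughly 5.2: a face-center cell covers ~5x the area of a corner cell
```

So equal-sized cells on the cube face differ by about a factor of five in actual surface area between face center and corner, which is exactly the “fudging” any grid-based algorithm has to account for.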

What I realized, a little while ago, was that we don’t have to do it this way. I had been bashing my head against the problem of how to do plate tectonics in such a data structure – essentially, a problem of translating and rotating a rasterized polygon, with the additional issue (a biiiig one) of having to do it on a distorted canvas. I thought to myself, well, why can’t I just do straight geometry and write the results to the grid? That’s certainly possible, so then I wondered why we need the grid at all.

Essentially, the idea I’ve taken this long to get to is this: Model the planet using a datastructure capable of efficiently rotating arbitrary polygons over the surface of a sphere, and finding their (possibly approximate) intersections. Rotation can be done easily with quaternions, especially when rotating lots of points by the same quaternion. Intersection is a lot harder – if we use arbitrary polygons, we’ll need to either convert a clipping algorithm for use on the surface of a sphere, or work out a suitable approximation.
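The rotation half really is the easy part. Here’s a minimal pure-Python sketch of rotating a batch of points on the unit sphere by a single quaternion (all names here are mine, for illustration; in practice you’d use a vector library and vectorize the loop):

```python
import math

def axis_angle_quat(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` radians about unit `axis`."""
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(a, b):
    """Hamilton product of two quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    )

def rotate_points(points, q):
    """Rotate many 3D points by the same quaternion: p' = q p q*."""
    qc = (q[0], -q[1], -q[2], -q[3])  # conjugate (inverse, since q is unit)
    out = []
    for p in points:
        r = quat_mul(quat_mul(q, (0.0,) + tuple(p)), qc)
        out.append(r[1:])
    return out

# Rotate a point 90 degrees about the z-axis: (1,0,0) -> (0,1,0).
q = axis_angle_quat((0.0, 0.0, 1.0), math.pi / 2)
print(rotate_points([(1.0, 0.0, 0.0)], q))
```

Since every vertex of a plate gets rotated by the same quaternion, moving a whole plate is one quaternion construction plus one cheap multiply per vertex, and the points stay exactly on the sphere (no raster distortion to fudge).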

This is where I started spitballing ideas for alternatives to working just with polygons (and you’re all free to join in on that).

Here’s one idea that intrigued me:

- Instead of storing polygons, store sets of “thick line segments” (approximately rectangles)
- Intersections are approximate: using some form of spatial partitioning, make intersection-finding between sets of segments run in *n log n* time, then produce a resulting line segment using some to-be-determined computational geometry
- Imo, this would work best when the segments are short, thin, and plentiful, and would break down completely with very long/thick segments that are heavily distorted by lying on a sphere
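For the spatial-partitioning part, a simple option is a spatial hash: bucket each segment by the coarse grid cells its bounding box touches, then only test segment pairs that share a cell. Here’s a sketch in flat 2D (all names mine; on the actual sphere you’d hash by spherical cells instead, and the pairs returned are only *candidates* that still need a real intersection test):

```python
from collections import defaultdict

def cells(seg, size):
    """Yield the grid cells overlapped by the segment's bounding box (coarse, conservative)."""
    (x0, y0), (x1, y1) = seg
    xmin, xmax = sorted((x0, x1))
    ymin, ymax = sorted((y0, y1))
    for i in range(int(xmin // size), int(xmax // size) + 1):
        for j in range(int(ymin // size), int(ymax // size) + 1):
            yield (i, j)

def candidate_pairs(set_a, set_b, size=1.0):
    """Spatial hash: only segments sharing a grid cell are candidates for intersection."""
    grid = defaultdict(list)
    for a in set_a:
        for c in cells(a, size):
            grid[c].append(a)
    pairs = set()
    for b in set_b:
        for c in cells(b, size):
            for a in grid[c]:
                pairs.add((a, b))  # set() dedupes pairs that share several cells
    return pairs
```

With short, plentiful segments and a sensibly chosen cell size, each bucket stays small, so the candidate set is near-linear in practice – which lines up with the intuition above that the scheme favors many short segments over a few long distorted ones.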

To help steer our discussion, here are a few questions we could start with:

- How do we generate terrain to render from this?
- How do we get environmental data for the biological simulation from this?
- How do we model land-use etc for strategy stages with this?

I have plenty of ideas for all of these, but I’m going to stop myself here for now to get input from you guys.