Convolution Surfaces: A better alternative to metaballs

I saw the metaballs concept on my Facebook feed, and figured I should probably make this post (about 4 years overdue). One of the things I was working on back in 2016 was figuring out a good solution for 2D and 3D organism editing. There are quite a few constraints that a good editor should meet, but the 3 main ones are that it should be easy to use (you don’t need a 2-hour-long course to design a creature), it should be sufficiently powerful (any creature you can come up with should be possible to create), and it should produce good results (people who aren’t artistically inclined should be able to make good-looking creatures without much effort).

Metaballs, which is what we’ve been discussing using for this purpose since way before TJWhale designed and I implemented the current membrane code, unfortunately don’t satisfy the last two requirements: they are incredibly frustrating to deal with (have you ever tried creating a sausage-like shape, such as a tail? Practically impossible), and the results always look like blobs.

Anyway, introduction aside: after much research and testing out various solutions, I came to the conclusion that the best solution is what is known as “Convolution Surfaces”, specifically a variation known as “SCALe-invariant Integral Surfaces” (SCALIS).

The way they work is that you place a few points, specify a radius around each point, and connect the points with segments. The convolution surface will then contain all of the “spheres” you placed within its boundaries, with very good-looking transitions between them. Unlike metaballs, the surface stays the specified radius away from each point, which lends itself to very fine-grained and intuitive control. It also doesn’t produce unwanted bulges:

[image: three points joined by two line segments; the metaball surface on the left, the convolution surface on the right]

In the above image, with 3 points connected by two line segments, metaballs would produce the surface on the left, while convolution surfaces would produce the one on the right. In fact, adding any number of points along the segments between the outer two would not change the shape of the surface, provided they have the same radius.
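To make that concrete: the source data is essentially just weighted points plus the segments that connect them. Here is a minimal sketch of that structure (the type and member names are mine for illustration, not from the paper or my repo):

    using System.Collections.Generic;
    using UnityEngine;

    // A point on the convolution surface skeleton: a position plus the
    // desired surface distance ("radius") around it.
    public struct SkeletonPoint
    {
        public Vector3 Position;
        public float Radius;
    }

    // The full skeleton: weighted points joined by segments. The surface is an
    // iso-contour of a field obtained by integrating a kernel along each segment.
    public class Skeleton
    {
        public List<SkeletonPoint> Points = new List<SkeletonPoint>();
        public List<(int a, int b)> Segments = new List<(int, int)>(); // indices into Points
    }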

And here is an ant created using convolution surfaces, in solid and wireframe representations:

[image: ant modeled with convolution surfaces, solid render]
[image: the same ant, wireframe render]

*The above images are taken from the SCALIS paper linked above.

And here’s another example of a hand:

[image: hand modeled with convolution surfaces]

Here’s a prototype that I wrote using convolution surfaces and marching squares for rendering:

[prototype demo: a red surface contoured around blue skeleton circles]

It runs at like 2 frames per second because I wrote it in Python, it’s in 2D, and it’s missing quite a few features (you can see that in some cases the blue circle pokes outside the red surface, but that is only because I didn’t have time to implement that step), but it is a good demo of what could be.
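For anyone wondering how marching squares fits in: the renderer samples a scalar field on a grid and contours it at an iso-level. Below is a rough sketch of such a field, reusing the skeleton types sketched above; SegmentContribution is a crude numeric stand-in for the closed-form SCALIS integrals (an actual transcription of those appears further down this thread), so treat it as illustrative only:

    using UnityEngine;

    public static class FieldSketch
    {
        // Crude numeric stand-in for the closed-form SCALIS segment integral:
        // integrate a scale-invariant (radius / distance) power kernel along AB.
        static float SegmentContribution(SkeletonPoint a, SkeletonPoint b, Vector3 p)
        {
            const int steps = 16;
            float length = Vector3.Distance(a.Position, b.Position);
            float sum = 0f;
            for (int s = 0; s < steps; s++)
            {
                float t = (s + 0.5f) / steps;
                Vector3 x = Vector3.Lerp(a.Position, b.Position, t);
                float r = Mathf.Lerp(a.Radius, b.Radius, t); // radius can vary along the segment
                float d = Mathf.Max(Vector3.Distance(x, p), 1e-4f);
                sum += Mathf.Pow(r / d, 3f); // kernel degree chosen arbitrarily here
            }
            return sum * length / steps;
        }

        // Marching squares (or cubes in 3D) samples this over a grid and then
        // extracts the contour where the field crosses the chosen iso-level.
        public static float Field(Skeleton skeleton, Vector3 p)
        {
            float total = 0f;
            foreach (var (a, b) in skeleton.Segments)
                total += SegmentContribution(skeleton.Points[a], skeleton.Points[b], p);
            return total;
        }
    }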

Another incredibly useful feature is that if you put 7 circles in a hexagonal grid, it creates a near-perfect circle, which is something I tried forever to accomplish but never could with the current membrane code:

[image: seven points in a hexagonal grid producing a near-perfect circle]

So for the cell stage, using this method we could 1) have better-looking membranes, 2) have a more deterministic algorithm for membranes, 3) allow concave membranes, and 4) prevent organelles from spilling outside the membrane. You would just need to place a point in each hex that is part of the cell, and set the “radius” of each point to the bounding sphere of the organelle at that position plus some buffer, roughly as sketched below.
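Here is a hedged sketch of that hex-to-skeleton step, reusing the skeleton types from earlier; the Organelle type, the hex spacing, and the adjacency threshold are made-up stand-ins for whatever the real cell layout code provides:

    using System.Collections.Generic;
    using UnityEngine;

    // Hypothetical stand-in for whatever the real cell layout provides.
    public struct Organelle
    {
        public Vector3 HexCenter;    // world-space center of the hex it occupies
        public float BoundingRadius; // radius of the organelle's bounding sphere
    }

    public static class MembraneSkeletonBuilder
    {
        public static Skeleton Build(IList<Organelle> organelles, float buffer)
        {
            var skeleton = new Skeleton();

            // One point per occupied hex; radius = bounding sphere + buffer.
            foreach (var organelle in organelles)
            {
                skeleton.Points.Add(new SkeletonPoint
                {
                    Position = organelle.HexCenter,
                    Radius = organelle.BoundingRadius + buffer,
                });
            }

            // Connect points in neighbouring hexes (assumes hex spacing of ~1 unit).
            for (int i = 0; i < skeleton.Points.Count; i++)
                for (int j = i + 1; j < skeleton.Points.Count; j++)
                    if (Vector3.Distance(skeleton.Points[i].Position, skeleton.Points[j].Position) < 1.1f)
                        skeleton.Segments.Add((i, j));

            return skeleton;
        }
    }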

For early multicellular, each point could be a cell, and the surface would produce a good-looking outline. For the organism stage, the points could be organs/muscles. My implementation uses 2 points to define a segment, but the algorithm supports linking 3 (or more) points into a triangle, which produces some interesting results:

[image: fin built from triangle primitives]

The above image is a fin created using triangle primitives.
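On the data side, triangle support would mostly mean one more primitive list on the skeleton; the triangle integration kernel itself is in the paper. A tiny sketch (again with made-up names):

    using System.Collections.Generic;

    // Sketch: the skeleton from earlier, extended with triangle primitives.
    public class TriangleSkeleton : Skeleton
    {
        public List<(int a, int b, int c)> Triangles = new List<(int, int, int)>(); // indices into Points
    }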

Feel free to check out my code: GitHub - TheCreator--/SCALIS: Implementation of SCALe-invariant Integral Surfaces
It’s clearly in a prototype state and barely runs (again, Python probably wasn’t the best choice for this), but it should give a good starting point to anyone who decides to pick this task up.


Interesting, I’ll look into this more when I have time.

My current implementation of marching squares uses segments: it connects every point like a graph and then decides how thick the membrane should be based on various radius definitions along each segment.

My way definitely has problems (the line distance calculation is sloowwwwww), so it’ll be interesting to see how difficult this method is to implement.

It’s how I made this.

[image: membrane produced by this marching squares implementation]
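For reference, the line distance query in question is the standard point-to-segment distance; a sketch is below. The usual speedup is not a faster formula but a spatial grid or quadtree, so that each sampled grid point only tests nearby segments (this is not the actual membrane code):

    using UnityEngine;

    public static class SegmentDistance
    {
        // Closest distance from p to segment ab: project p onto the segment's
        // direction, clamp the projection to the endpoints, then measure.
        public static float DistanceToSegment(Vector3 p, Vector3 a, Vector3 b)
        {
            Vector3 ab = b - a;
            float denom = ab.sqrMagnitude;
            float t = denom > 0f ? Mathf.Clamp01(Vector3.Dot(p - a, ab) / denom) : 0f;
            return Vector3.Distance(p, a + t * ab);
        }
    }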


In 3D these kinds of spheres tend to be somewhat problematic because they often have odd/heavy geometry. This is essentially how sculpting programs work, but whenever a model is made that way it usually must be reworked and optimized in a separate program where individual triangles can be manipulated. I am unsure of its performance as a 2D object, however.


Convolution surfaces are very different from metaballs. For one thing, a convolution surface mesh actually has a full-blown skeleton made out of points and lines. Unless you use many points and lines (or add support for curves), you sort of get joints by default.

That can be worked around, right? Having the computer place “bones” in nonsensical locations doesn’t sound nice…

Not really; the skeleton is a core part of the convolution algorithm, since it’s what the ‘kernel’ marches along and gets shape data from. Metaballs can be the way they are because you’re literally just finding the sum of some fields.
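To illustrate what “the sum of some fields” means, here is a minimal metaball field sketch (using a classic inverse-square falloff; real implementations vary the falloff function). Note that no skeleton appears anywhere in it, which is exactly why the blending can’t be controlled:

    using System.Collections.Generic;
    using UnityEngine;

    public static class Metaballs
    {
        // Every ball contributes a radially symmetric falloff; the surface is
        // simply an iso-contour of the summed field.
        public static float Field(IList<(Vector3 center, float radius)> balls, Vector3 p)
        {
            float sum = 0f;
            foreach (var (center, radius) in balls)
                sum += (radius * radius) / Mathf.Max((p - center).sqrMagnitude, 1e-6f);
            return sum; // surface wherever Field(...) equals some threshold
        }
    }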

How would we implement bones, though, or animate them? Because they will need to be animated. What would be easier to animate is my question, because I could animate the “bones” I showed above and have the metaballs or CS wrapped around them…

Wouldn’t it then maybe be better for the user to make the skeleton themselves and set some properties on the bones for the convolution surface?

I was under the impression that convolution surfaces would use the same data as metaballs. But if that isn’t the case, maybe I need to rethink my plans for the early 3D editor?

Significantly better. These skeletons can become complicated depending on how you place points and lines.

That seems a bit much. What are your thoughts on using metaballs as the base, but then allowing the player to somehow tweak the bone placement, or select it from a few possibilities?

Yeah that’s exactly what I’m thinking

The main issue with metaballs is that you can’t control the way they fuse. There will always be massive bulges and lumps, and things like fingers would fuse into something almost mitten-like.

What if we have pre-rigged metaball things in the form of bones? The bones are actually rigged, and that rig controls the metaball/CS and how it moves.

Hm… do we have the coding chops to use CS though?

Isn’t that image you shared showing a bunch of metaballs + the skeleton? Those balls are what I’m referring to when I say metaballs.

No, that image is convolution surfaces. Those balls you see are the weights of different points. The blue is the flow of the lines.

So what’s the source data used by the convolution surfaces then? I think we’ll have an easier time if I understand that and can make the beginning of the 3D editor with that in mind.


My question is how this would animate. Would they animate themselves, or will we need to make prebuilt animations?

As for the source data, there are 5 inputs: k, i, A, B, and P. k and i are meaningless to you; they mainly control the way things fuse and the math that needs to be done. A and B are consecutive skeleton points, and P is a given surface point, I believe. Here is the function for Power Integral Convolution if it helps.

    using UnityEngine;

    public static class ScalisConvolution
    {
        // Helper functions the snippet assumes; these definitions are my
        // reading of the intended semantics (Vector(a, b) points from a to b).
        static Vector3 Vector(Vector3 from, Vector3 to) => to - from;

        static float Distance(Vector3 a, Vector3 b) => Vector3.Distance(a, b);

        static float DistanceSquared(Vector3 a, Vector3 b) => (b - a).sqrMagnitude;

        public static float Convolution(int k, int i, Vector3 pointA, Vector3 pointB, Vector3 pointP)
        {
            // |AB|^2 * |AP|^2 - (AB . AP)^2 = |AB x AP|^2; zero means P lies on
            // the line through A and B, where the closed forms below degenerate.
            float discriminant = DistanceSquared(pointA, pointB) * DistanceSquared(pointA, pointP)
                - Mathf.Pow(Vector3.Dot(Vector(pointA, pointB), Vector(pointA, pointP)), 2);
            if (discriminant <= 0)
            {
                return 0;
            }

            if (k == 0)
            {
                // Base case i == 1.
                if (i == 1)
                {
                    return Mathf.Log((Distance(pointB, pointA) * Distance(pointB, pointP)
                            + Vector3.Dot(Vector(pointB, pointA), Vector(pointB, pointP)))
                        / (Distance(pointA, pointB) * Distance(pointA, pointP) -
                            Vector3.Dot(Vector(pointA, pointB), Vector(pointA, pointP))));
                }

                // Base case i == 2; the |AB| / sqrt(disc) factor applies to the
                // sum of both arctangents.
                if (i == 2)
                {
                    return (Mathf.Atan(Vector3.Dot(Vector(pointB, pointA), Vector(pointB, pointP))
                            / Mathf.Sqrt(discriminant))
                        + Mathf.Atan(Vector3.Dot(Vector(pointA, pointB), Vector(pointA, pointP))
                            / Mathf.Sqrt(discriminant)))
                        * Distance(pointA, pointB) / Mathf.Sqrt(discriminant);
                }

                // Recurrence in i for k == 0.
                return Distance(pointA, pointB) / (i - 2) / discriminant
                    * ((i - 3) * Distance(pointA, pointB) *
                        Convolution(0, i - 2, pointA, pointB, pointP)
                        + Vector3.Dot(Vector(pointB, pointA), Vector(pointB, pointP)) /
                        Mathf.Pow(Distance(pointB, pointP), i - 2)
                        + Vector3.Dot(Vector(pointA, pointB), Vector(pointA, pointP)) /
                        Mathf.Pow(Distance(pointA, pointP), i - 2));
            }

            if (k == 1)
            {
                // Base case k == 1, i == 2 (the log is the 2 - i -> 0 limit of
                // the power term in the general k == 1 formula below).
                if (i == 2)
                {
                    return Vector3.Dot(Vector(pointA, pointB), Vector(pointA, pointP)) /
                        DistanceSquared(pointA, pointB) * Convolution(0, 2, pointA, pointB, pointP)
                        + Mathf.Log(Distance(pointB, pointP) / Distance(pointA, pointP))
                        / Distance(pointA, pointB);
                }

                // Recurrence in i for k == 1.
                return Vector3.Dot(Vector(pointA, pointB), Vector(pointA, pointP)) /
                    DistanceSquared(pointA, pointB) * Convolution(0, i, pointA, pointB, pointP)
                    + (Mathf.Pow(Distance(pointB, pointP), 2 - i)
                        - Mathf.Pow(Distance(pointA, pointP), 2 - i)) /
                    Distance(pointA, pointB) / (2 - i);
            }

            // Special case k == i - 1; the general recurrence below would
            // divide by zero here.
            if (k == i - 1)
            {
                return Vector3.Dot(Vector(pointA, pointB), Vector(pointA, pointP)) / DistanceSquared(pointA, pointB)
                    * Convolution(i - 2, i, pointA, pointB, pointP)
                    + Convolution(i - 3, i - 2, pointA, pointB, pointP)
                    / DistanceSquared(pointA, pointB)
                    + Mathf.Pow(Distance(pointB, pointP), 2 - i) / (2 - i)
                    / Distance(pointA, pointB);
            }

            // General recurrence in k. Reconstructed: the version posted here
            // had a garbled final exponent ("2 - 1") and appeared to drop the
            // middle term, so this is rewritten to be consistent with the
            // k == 0 and k == 1 branches above. The float casts avoid integer
            // division in the coefficients.
            return (i - 2 * k) / (float)(i - k - 1)
                * Vector3.Dot(Vector(pointA, pointB), Vector(pointA, pointP))
                / DistanceSquared(pointA, pointB)
                * Convolution(k - 1, i, pointA, pointB, pointP)
                + (k - 1) / (float)(i - k - 1)
                * DistanceSquared(pointA, pointP) / DistanceSquared(pointA, pointB)
                * Convolution(k - 2, i, pointA, pointB, pointP)
                - Mathf.Pow(Distance(pointB, pointP), 2 - i)
                / Distance(pointA, pointB) / (i - k - 1);
        }
    }

So the source data is a list of points, and each point also has the k and i parameters on it? I do still kind of think that rendering the points as differently sized metaballs wouldn’t be too far away from converting that to the convolution surface data, by taking the metaball centers and their attachments to each other as the line segments. Figuring out sensible k and i values would be harder, though.
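If that mapping holds, the conversion itself could be tiny, something like the sketch below (reusing the skeleton types sketched earlier in the thread, and treating k and i as global tuning constants rather than per-point data, which is my reading of the explanation above):

    using System.Collections.Generic;
    using UnityEngine;

    public static class MetaballImport
    {
        // Metaball centers become skeleton points, ball radii become the point
        // weights, and each attachment becomes a segment.
        public static Skeleton FromMetaballs(
            IList<(Vector3 center, float radius)> balls,
            IList<(int a, int b)> attachments)
        {
            var skeleton = new Skeleton();
            foreach (var (center, radius) in balls)
                skeleton.Points.Add(new SkeletonPoint { Position = center, Radius = radius });
            foreach (var link in attachments)
                skeleton.Segments.Add(link);
            return skeleton;
        }
    }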