Utility AI

The AI of the microbe stage is already reaching a state where it is fun to play with and against, and it makes for quite the challenge in some instances whether you be predator or prey. However, there has been little to no talk, as far as I have seen, about catching the current AI up with implemented features (Pilus, Colonies, New Organelles, Sessility, etc.), nor any talk on how to future-proof for later stages.

I come bearing a possible solution. Utility-based AI and modular behaviors.

For those who don’t know and don’t want to sit through a 3-hour programming lecture, I will boil both of these down. Basically, Utility-based AI is AI that has a set of tasks and calculates the usefulness (or Utility) of each task using an algorithm specific to each task.

Here is a strong YouTube video on the subject:

Now, you would be right to point out this is similar to the system that exists currently. The main difference is that each behavior that an AI microbe can perform is mixed in with the calculation, and there is a clear order/priority to the behaviors that microbes undertake (e.g. they will ALWAYS try to search for chunks before tumbling, and will ALWAYS try to search for prey before either). With Utility AI, this would be broken up into three phases: Analysis, Decision, and Action.

Analysis

This stage would occur at the beginning of an AI’s “thought process”, so to speak. The AI would take in every needed piece of data from the environment and calculate Utility Values going forward. An example of this would be as follows:

The utility of the action “Gather Glucose from Cloud” would be calculated by analyzing whether Glucose is the primary food source of the AI, the current Glucose levels, how much it needs to keep operating for an arbitrary amount of time, the closest glucose cloud size, and whether the AI would gain more from consuming a prey item. Of course, this is an arbitrarily constructed example, and would likely be different in the final implementation.
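To make the example above concrete, here is a minimal sketch of what such a utility calculation could look like. All names, fields, and weights here are hypothetical illustrations of the idea, not actual Thrive code, and the final formula would surely look different:

```python
from dataclasses import dataclass

# Hypothetical inputs the analysis phase would gather from the environment.
@dataclass
class GlucoseContext:
    uses_glucose: bool         # is glucose a primary food source for this species?
    stored_glucose: float      # current internal glucose level
    glucose_needed: float      # amount needed to keep operating for a while
    nearest_cloud_size: float  # size of the closest glucose cloud
    prey_payoff: float         # estimated gain from consuming a prey item instead

def gather_glucose_utility(ctx: GlucoseContext) -> float:
    """Score the 'Gather Glucose from Cloud' task from 0 upward."""
    if not ctx.uses_glucose or ctx.nearest_cloud_size <= 0:
        return 0.0
    # Hunger grows as stored glucose falls below what is needed.
    hunger = max(0.0, ctx.glucose_needed - ctx.stored_glucose) / ctx.glucose_needed
    # Bigger clouds are more worthwhile targets, capped at "fully worthwhile".
    payoff = min(1.0, ctx.nearest_cloud_size / ctx.glucose_needed)
    score = hunger * payoff
    # Discount the task if hunting prey would yield more than grazing.
    return score * 0.5 if ctx.prey_payoff > score else score
```

A starving glucose-eater next to a large cloud would score near the maximum, while a species that doesn’t use glucose at all would score a flat zero and never consider the task.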

After the utility of each task is calculated, it moves into its decision phase.

Decision

This stage forms the middle of the AI’s thought process, and consists of simple comparisons and priority management.

Due to the nature of balancing many needs and actions in a game such as Thrive, simply having an AI flatly compare every task would not only be relatively resource intensive but would also create plenty of “dumb” behaviors. I suggest we use the Bucket System used in The Sims. The Sims, another utility-based game, separates its utility calculations into groups known as Buckets. These Buckets are then processed with an additional weight on each, allowing the AI to avoid dumb behaviors and keep itself self-sustaining. An example of this would be processing the Food need at a higher priority than the Entertainment need.
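A minimal sketch of the bucket idea, with made-up task names and scores: buckets are ordered by priority, and a lower bucket is only consulted when everything in the higher buckets scored near zero, so the AI never weighs “tumble around” against “flee predator” directly:

```python
# Hypothetical Sims-style bucket system, not actual Thrive code.
def pick_task(buckets):
    """buckets: list of (bucket_name, [(task_name, utility), ...]),
    ordered from highest priority to lowest."""
    for name, tasks in buckets:
        # Ignore tasks whose utility is effectively zero.
        scored = [(task, u) for task, u in tasks if u > 0.01]
        if scored:
            # Something in this bucket is worth doing; never look lower.
            return max(scored, key=lambda t: t[1])[0]
    return "idle"  # nothing anywhere is worth doing

buckets = [
    ("survival", [("flee predator", 0.0), ("gather glucose", 0.7)]),
    ("exploration", [("tumble around", 0.4)]),
]
```

With these example scores, `pick_task(buckets)` returns `"gather glucose"`: the survival bucket has a live task, so exploration is never even evaluated, which is both cheaper and less prone to dumb trade-offs.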

After it finds the action of the highest utility, it then moves into the action phase.

Action

This stage forms the end of the process, and is the part the player actually sees. Once a cell finds the action of the highest utility, it will proceed to execute this task until it finds another task of higher utility.

You might be rightly worried about microbes bouncing between tasks rapidly if they have incredibly similar Utility Values. One solution, proposed by this paper, posits that giving each task inertia would be a good way to solve this. I wholeheartedly agree with this! If you give extra weight to a task already being performed, it can make an AI want to do that task until a truly better option comes along.
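The inertia idea can be sketched in a few lines: give the currently running task a small bonus before comparing, so a competitor has to be genuinely better, not just marginally better, to win. The bonus value here is an arbitrary tuning number, not something from the paper or from Thrive:

```python
# Hypothetical task-inertia sketch. INERTIA_BONUS is an arbitrary tuning value.
INERTIA_BONUS = 0.1

def choose_with_inertia(utilities, current_task):
    """utilities: dict mapping task name -> raw utility score.
    current_task: the task already being performed (or None)."""
    def effective(task):
        # The active task gets a small head start in the comparison.
        bonus = INERTIA_BONUS if task == current_task else 0.0
        return utilities[task] + bonus
    return max(utilities, key=effective)
```

So a microbe grazing at 0.50 utility would ignore a hunting opportunity at 0.55 (within the inertia margin) but would still switch for one at 0.70, which is exactly the “commit until a truly better option comes along” behavior described above.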

Modular Behaviors
Now, onto the Modular Behaviors part. There is something to be said about the potential size of this system. Separating all potential actions into tasks and then assigning a value to each can sound like an extraordinarily big job, especially if we account for every possible part a microbe can have.

I suggest we, in this case, only attach specific tasks to the AI if it is physically possible with the AI’s current biology. For example, we can attach all relevant Pilus code to the AI when it is spawned if it actually has a pilus, whereas microbes without a pilus don’t even possess the AI to use one.
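One way to sketch this: each task declares the organelles it requires, and at spawn time a microbe only receives the tasks its body can actually perform. Task names and requirements below are hypothetical examples, not Thrive’s actual part list:

```python
# Hypothetical modular task registry: task -> set of required organelles.
TASK_REQUIREMENTS = {
    "stab with pilus": {"pilus"},
    "shoot toxin": {"toxin vacuole"},
    "engulf prey": set(),      # no special organelle required
    "gather glucose": set(),
}

def build_task_list(organelles):
    """Return only the tasks this microbe's biology supports."""
    organelles = set(organelles)
    return [task for task, needs in TASK_REQUIREMENTS.items()
            if needs <= organelles]  # all requirements present
```

A microbe spawned with only a pilus would get the pilus task but never even load the toxin logic, keeping each individual AI small no matter how many tasks exist game-wide.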

There is a loosely comparable concept in UE4 Behavior Trees, where each task is a self-contained piece of code that is loaded into a behavior tree and can be changed separately from the tree itself.

The idea behind this is to hopefully make both microbe and multicellular/aware AI light and easily expandable. After all, an exclusively aquatic creature doesn’t need the AI necessary to create anthills!

The Utility System combined with a Modular Task system should allow us to continuously add behaviors related to new features while creating a system that allows for emergent behavior! The distinct phases of the AI's system should also allow for easier debugging and modification.

That’s pretty much the long and short of my concept behind a potential AI cleanup/overhaul. I would love to hear from other programmers and designers on this. If you can’t tell, this was one of the primary reasons for me joining the project, and AI is something that could use cleaning on this project!


The devil’s in the details, but I certainly think this approach can work if implemented well.

I am obviously biased, but I don’t think that most of the AI-related complaints we’ve had were really about our current pseudo-decision-tree model not being good enough at comparing different tactics against each other. Mostly, I think we’re now seeing issues in particular sub-routines not working under new game mechanics (mostly the new slower turn rates, which the AI has no way to handle). I don’t want to talk anyone out of grand ideas (I made a few big refactors while I was active), but if you want to make sure you have an immediate impact I would focus on the modular tasks, particularly those that don’t work well now, rather than the grand strategy of the AI as a whole. In later stages, of course, things may need to get more sophisticated, but we’ve also got a little time before we need to put that to code.

I also wouldn’t worry too much about the AI seeming “spastic” and switching between objectives. A lot of behaviors that players seem to like now are actually combinations of other behaviors (quite frankly, accidentally) working together. For example, you may notice microbes weaving in and out of range of a larger player, trying to sneak in and steal some chunks, or even doing hit-and-run with toxin. It looks really clever, but that’s actually just a result of a species with a higher “seek chunks” distance than “run from predators” distance, changing its mind back and forth over and over again. (Fun fact, there’s an Isaac Asimov short story on robots doing this!) Of course a sufficiently advanced AI can always handle all of these behaviors explicitly, but that’s easier said than done.

The last thing I’d want to say is, watching the many videos of people playing, I think players respond to interesting behavior more than strictly good behavior. People play Thrive to see new forms of life they’ve never seen before, and in my opinion that comes down a lot more to HOW the species seems to act than the particular combination of colored chunks inside the blob. The fact that a microbe has so few distinct behaviors (where do you point, where do you move, are you shooting toxin, are you engulfing, and that’s it right now) makes it hard for the AI to “look dumb” by doing something like raising up a gun just to turn around and walk off the other way. One “ohh wow look at that” behavior can be worth two or three brainless microscopic things doing something dumb. Whatever you do, don’t forget to make things awesome!


As the one who wrote the original state-machine-based AI before Thim improved it, I agree with @Thim. This is basically what I said in the #programming channel on the Discord. The basis of the current AI is simply a state machine that rolls its personality values against the environment to see what state to switch into, and then those same values modify what the AI does in a given state, based on arbitrary ideas of what cells could do.
E.g. with the state machine + personality values, an AI that thinks of itself as sessile but also has very high aggression and is in hunt mode wouldn’t chase, but could still shoot toxins like a turret and engulf things unfortunate enough to bump directly into its membrane. This is interesting behavior.
The purpose is more to give different species…well…different personalities and to make them feel alive, and to make their behavior interesting to observe and to see more complex behavior emerge. Not to have an AI that is actually the most effective/optimized at staying alive, sure that helps but the main purpose is the interesting lifelike behavior.
I know that is counter-intuitive, but in a video game, it’s more important that things are interesting.
Keep it general while keeping the impact of these behavior values very visible and very variable, so that players can observe interesting, unique behaviors emerge from different cells based on their personality values. I don’t know how well this can be done with the suggested AI approach, but if the approach Ivy suggested can match the state-machine-based one for dynamism and emergent behavior, or exceed it, it’s probably worth trying. Just make sure to keep the AI dynamic and surprising, and ensure more complex emergent behavior can happen. I don’t think a list of specific behaviors would be able to do this, but I would be happy to be surprised.


What you said is so incredibly important to understanding why we designed the AI we did.

The last thing I’d want to say is, watching the many videos of people playing, I think players respond to interesting behavior more than strictly good behavior.

This 1000 times.

I think perhaps my original point behind this might have been lost with me discussing the positives of Utility AI. The point of Utility AI is to create a framework for emergent behavior, not to kneecap it. Ideally, the range of tasks and dynamic task management on the AI’s part would encourage interesting behavior! Again, think Sims or Three Kingdoms!

I actually covered this in my original post: Task Inertia. The idea is that the AI would commit to a current behavior until it feels it should switch to a different task, and this would be accomplished by weighting currently active tasks over other tasks (keeping this extra weight low so that the AI can still switch).

I will absolutely be tackling some smaller changes to make the AI play better with the current game! I proposed this mainly to float an idea of how we could create a system that is not only easily modifiable, but expandable for future stages! I can only imagine how creating dynamic AI for the multicellular stage will work.

See above, Utility AI is meant to give the AI the tools to not only act smart but encourage interesting behavior. But of course, this is something that is very important and is a very good thing to reiterate and keep in mind!

Overall, I think I may have focused too much on the concept of Utility AI itself instead of how it could improve Thrive. To be clear, I think this is a good framework for current and future AI and the nature of having to create custom algorithms for utility calculation allows us to make these weights based on personality, player actions, and any number of other dynamic factors.

As it stands, I will move into fixing up some minor stuff with the AI and try to implement some kind of Modular Task system. I have a few ideas for how this could be accomplished, but I will wait until I have a proper dev environment and get settled before I make an extreme change.


I agree with the concerns brought up by Thim and Untrustedlife, but at the same time I see the utility in having a better structured AI system to make it easier to understand and extend.

Now’s basically as good a time as any to overhaul the AI, as no one else is working on it, so there won’t be any merge conflicts. Though, the AI changes should be finished within a month to give playtesting and code review enough time before the next release, leaving room for multiple rounds of tweaks to the AI; it’s an important system to have working correctly so that the next release won’t end up being worse off.