Utility AI / restructuring the AI system

The AI of the microbe stage is already reaching a state where it is fun to play with and against, and it makes for quite the challenge in some instances, whether you be predator or prey. However, as far as I have seen there has been little to no talk of catching the current AI up with implemented features (Pilus, Colonies, New Organelles, Sessility, etc.), nor any talk on how to future-proof it for later stages.

I come bearing a possible solution: Utility-based AI and modular behaviors.

For those who don't know and don't want to sit through a 3-hour programming lecture, I will boil both of these down. Basically, Utility-based AI is AI that has a set of tasks and calculates the usefulness (or Utility) of each task using an algorithm specific to each task.
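To make that a bit more concrete, here is a minimal sketch of what a task could look like in code. Every name here (`IUtilityTask`, `AIContext`, the fields) is hypothetical and purely illustrative, not existing Thrive code:

```csharp
// Hypothetical sketch only; none of these types exist in Thrive today.
public class AIContext
{
    // Snapshot of everything the AI sensed this update: compound levels,
    // nearby clouds, chunks, other microbes, its own organelles, and so on.
    public bool UsesGlucose;
    public float StoredGlucose;
    public float GlucoseNeededToKeepOperating;
    public float NearestGlucoseCloudDistance;
    public float NearestGlucoseCloudSize;
    public float ExpectedGainFromNearestPrey;
}

public interface IUtilityTask
{
    // Scores how useful this task would be right now, given the context.
    float CalculateUtility(AIContext context);

    // Carries out the task for this AI update (steer, engulf, fire toxin, ...).
    void Execute(AIContext context, float delta);
}
```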

Here is a strong YouTube video on the subject:

Now, you would be right to point out this is similar to the system that exists currently. The main difference is that each behavior that an AI microbe can perform is mixed in with the calculation, and there is a clear order/priority to the behaviors that microbes undertake (e.g. they will ALWAYS try to search for chunks before tumbling, and will ALWAYS try to search for prey before either). With Utility AI, this would be broken up into three phases: Analysis, Decision, and Action.

Analysis
This stage would occur at the beginning of an AI's "thought process", so to speak. The AI would intake every needed piece of data from the environment, and calculate Utility Values going forward. An example of this would be as follows:

The utility of the action "Gather Glucose from Cloud" would be calculated by analyzing whether Glucose is the primary food source of the AI, the current Glucose levels, how much it needs to keep operating for an arbitrary amount of time, the size of the closest glucose cloud, and whether the AI would gain more from consuming a prey item. Of course, this is an arbitrarily constructed example, and would likely be different in the final implementation.
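As a purely illustrative sketch of how that example could translate to code, building on the hypothetical `IUtilityTask` interface above (the weights, field names, and thresholds are all made up, not a proposed final formula):

```csharp
// Illustrative only: every field and weight here is a placeholder.
public class GatherGlucoseTask : IUtilityTask
{
    public float CalculateUtility(AIContext context)
    {
        // No point chasing glucose if we don't use it or already have enough.
        if (!context.UsesGlucose || context.StoredGlucose >= context.GlucoseNeededToKeepOperating)
            return 0.0f;

        // The hungrier we are and the closer/bigger the cloud, the higher the score.
        float hunger = 1.0f - (context.StoredGlucose / context.GlucoseNeededToKeepOperating);
        float proximity = 1.0f / (1.0f + context.NearestGlucoseCloudDistance);
        float utility = hunger * proximity * context.NearestGlucoseCloudSize;

        // If eating a prey item would net us more, let the hunting task outscore us.
        if (context.ExpectedGainFromNearestPrey > utility)
            utility *= 0.5f;

        return utility;
    }

    public void Execute(AIContext context, float delta)
    {
        // Steer toward the nearest glucose cloud (movement code omitted).
    }
}
```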

After the utility of each task is calculated, it moves into its decision phase.

Decision
This stage forms the middle of the AI's thought process, and consists of simple comparisons and priority management.

Due to the nature of balancing many needs and actions in a game such as Thrive, simply having an AI flatly compare each task would not only be relatively resource intensive but would create plenty of "dumb" behaviors. I suggest we use the Bucket System used in The Sims. The Sims, another utility-based game, separates each utility calculation into groups known as Buckets. These Buckets are then processed with an additional weight on each, allowing the AI to avoid dumb behaviors and remain self-sustaining. An example of this would be processing the Food need at a higher level than the Entertainment need.
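A rough sketch of how such buckets could sit on top of the task interface above. The bucket grouping, the weights, and the early-out rule are all assumptions for illustration, not an established design:

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative only: tasks grouped into weighted buckets, with higher-priority
// buckets evaluated first so survival needs always outrank idle behaviors.
public class UtilityBucket
{
    public string Name;
    public float Weight;   // e.g. Survival > Feeding > Exploration
    public List<IUtilityTask> Tasks = new List<IUtilityTask>();
}

public static class DecisionPhase
{
    public static IUtilityTask ChooseTask(List<UtilityBucket> buckets, AIContext context)
    {
        IUtilityTask best = null;
        float bestScore = 0.0f;

        // Walk buckets from the most important to the least important. As soon
        // as a bucket produces any viable task, skip the cheaper buckets below it.
        foreach (var bucket in buckets.OrderByDescending(b => b.Weight))
        {
            foreach (var task in bucket.Tasks)
            {
                float score = task.CalculateUtility(context) * bucket.Weight;
                if (score > bestScore)
                {
                    best = task;
                    bestScore = score;
                }
            }

            if (best != null)
                return best;
        }

        return best;
    }
}
```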

After it finds the action of the highest utility, it then moves into the action phase.

Action
This stage forms the end of the process, and the part the player actually sees. Once a cell finds an action of the highest utility, it will proceed to execute this task until it finds another task of higher utility.

You might be rightly worried about microbes bouncing between tasks rapidly if they have incredibly similar Utility Values. One solution, proposed by this paper, posits that giving each task inertia would be a good way to solve this. I wholeheartedly agree with this! If you give extra weight to a task already being performed, it can make an AI want to do that task until a truly better option comes along.
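In code, task inertia could be as simple as a small bonus for whichever task is already running. This is a minimal sketch, and the 1.15 multiplier is an arbitrary example value, not a tuned number:

```csharp
using System.Collections.Generic;

// Minimal sketch of task inertia: the active task gets a small multiplier, so
// the AI only switches when a genuinely better option comes along.
public class TaskSelector
{
    private const float InertiaBonus = 1.15f;   // arbitrary example value

    private IUtilityTask currentTask;

    public IUtilityTask Select(IEnumerable<IUtilityTask> tasks, AIContext context)
    {
        IUtilityTask best = null;
        float bestScore = float.MinValue;

        foreach (var task in tasks)
        {
            float score = task.CalculateUtility(context);

            // Slightly favour the task we are already performing.
            if (task == currentTask)
                score *= InertiaBonus;

            if (score > bestScore)
            {
                best = task;
                bestScore = score;
            }
        }

        currentTask = best;
        return best;
    }
}
```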


Modular Behaviors
Now, onto the Modular Behaviors part. There is something to be said about the potential size of this system. Separating all potential actions into tasks and then assigning a value to each can sound extraordinarily big, especially if we account for every possible part a microbe can have.

I suggest we, in this case, only attach specific tasks to the AI if it is physically possible with the AI's current biology. For example, we can attach all relevant Pilus code to the AI when it is spawned if it actually has a pilus, whereas microbes without a pilus don't even possess the AI to use one.
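A sketch of what that attachment could look like at spawn time. The `Microbe` properties checked here (`HasPilus` and so on) and the task classes other than the earlier `GatherGlucoseTask` are placeholders, not names from the actual codebase:

```csharp
using System.Collections.Generic;

// Hypothetical: build the task list from the microbe's actual body plan, so a
// cell without a pilus never even owns the pilus-related AI code.
public static class TaskAssembler
{
    public static List<IUtilityTask> BuildTasksFor(Microbe microbe)
    {
        var tasks = new List<IUtilityTask>
        {
            new GatherGlucoseTask(),   // every cell can look for food
        };

        if (microbe.HasPilus)
            tasks.Add(new PilusStabTask());

        if (microbe.CanEngulf)
            tasks.Add(new EngulfPreyTask());

        if (microbe.HasToxinVacuole)
            tasks.Add(new FireToxinTask());

        return tasks;
    }
}
```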

There is a similar, albeit distinct, concept in UE4 Behavior Trees, where each task is a self-contained piece of code that is loaded into a behavior tree and can be changed separately from the tree itself.

The idea behind this is to hopefully make both microbe and multicellular/aware AI light and easily expandable. After all, an exclusively aquatic creature doesn't need the AI necessary to create anthills!


The Utility System combined with a Modular Task system should allow us to continuously add behaviors related to new features while creating a system that allows for emergent behavior! The distinct phases of the AI's thought process should also make the system easier to debug and modify.

That's pretty much the long and short of my concept behind a potential AI cleanup/overhaul. I would love to hear from other programmers and designers on this. If you can't tell, this was one of the primary reasons for me joining the project, and the AI is something on this project that could use some cleaning up!


The devil's in the details, but I certainly think this approach can work if implemented well.

I am obviously biased, but I don't think that most of the AI-related complaints we've had were really about our current pseudo-decision-tree model not being good enough at comparing different tactics against each other. Mostly, I think we're now seeing issues in particular sub-routines not working under new game mechanics (mostly the new slower turn rates, which the AI has no way to handle). I don't want to talk anyone out of grand ideas (I made a few big refactors while I was active), but if you want to make sure you have an immediate impact I would focus on the modular tasks, particularly those that don't work well now, rather than the grand strategy of the AI as a whole. In later stages, of course, things may need to get more sophisticated, but we've also got a little time before we need to put that to code.

I also wouldn't worry too much about the AI seeming "spastic" and switching between objectives. A lot of behaviors that players seem to like now are actually combinations of other behaviors (quite frankly, accidentally) working together. For example, you may notice microbes weaving in and out of range of a larger player, trying to sneak in and steal some chunks, or even doing hit-and-run with toxin. It looks really clever, but that's actually just the result of a species with a higher "seek chunks" distance than "run from predators" distance changing its mind back and forth over and over again. (Fun fact, there's an Isaac Asimov short story about robots doing this!) Of course a sufficiently advanced AI can always handle all of these behaviors explicitly, but that's easier said than done.

The last thing I'd want to say is, watching the many videos of people playing, I think players respond to interesting behavior more than strictly good behavior. People play Thrive to see new forms of life they've never seen before, and in my opinion that comes down a lot more to HOW the species seems to act than the particular combination of colored chunks inside the blob. The nice aspect of how few distinct behaviors a microbe has (where do you point, where do you move, are you shooting toxin, are you engulfing, and that's it right now) is that it's hard for the AI to "look dumb" by doing something like raising a gun just to turn around and walk off the other way. One "ohh wow look at that" behavior can be worth two or three moments of brainless microscopic things doing something brainless. Whatever you do, don't forget to make things awesome!


As the one who wrote the original state-machine-based AI before Thim improved it, I agree with @Thim. This is basically what I said in the #programming channel on the Discord. The basis of the current AI is simply a state machine that rolls its personality values against the environment to see what state to switch into, and then those same values modify what the AI does in a given state, based on arbitrary ideas of what cells could do.
E.g. with the state machine + personality values, an AI that thinks of itself as sessile but also has very high aggression and is in hunt mode wouldn't chase, but could still shoot toxins like a turret and engulf things unfortunate enough to bump directly into its membrane. This is interesting behavior.
The purpose is more to give different species... well... different personalities, to make them feel alive, to make their behavior interesting to observe, and to see more complex behavior emerge. Not to have an AI that is actually the most effective/optimized at staying alive; sure, that helps, but the main purpose is the interesting, lifelike behavior.
I know that is counter-intuitive, but in a video game, it's more important that things are interesting.
Keep it general while keeping the impact of these behavior values very visible and very variable.
So that players can observe interesting, unique behaviors emerge from different cells based on their personality values. I don't know how well this can be done with the suggested AI approach, but if the approach Ivy suggested can be done with the same level of dynamism and emergent behavior as the state-machine-based one, or more, it's probably worth trying. Just make sure to keep the AI dynamic and surprising and ensure more complex emergent behavior can happen. I don't think a list of specific behaviors would be able to do this, but I would be happy to be surprised.


What you said is so incredibly important to understanding why we designed the AI we did.

The last thing I'd want to say is, watching the many videos of people playing, I think players respond to interesting behavior more than strictly good behavior.

This 1000 times.

I think perhaps my original point behind this might have been lost with me discussing the positives of Utility AI. The point of Utility AI is to create a framework for emergent behavior, not to kneecap it. Ideally, the range of tasks and dynamic task management on the AI's part would encourage interesting behavior! Again, think Sims or Three Kingdoms!

I actually covered this in my original post: Task Inertia. The idea is that the AI would commit to a current behavior until it feels it should switch to a different task, and this would be accomplished by weighting currently active tasks over other tasks (keeping this extra weight low so that the AI can still switch).

I will absolutely be tackling some smaller changes to make the AI play better with the current game! I proposed this mainly to float an idea of how we could create a system that is not only easily modifiable, but also expandable for future stages! I can only imagine how creating dynamic AI for the multicellular stage will work.

See above; Utility AI is meant to give the AI the tools to not only act smart but also to encourage interesting behavior. But of course, this is something that is very important and is a very good thing to reiterate and keep in mind!

Overall, I think I may have focused too much on the concept of Utility AI itself instead of how it could improve Thrive. To be clear, I think this is a good framework for current and future AI, and the nature of having to create custom algorithms for utility calculation allows us to base these weights on personality, player actions, and any number of other dynamic factors.

As it stands, I will move into fixing up some minor stuff with the AI and try to implement some kind of Modular Task system. I have a few ideas for how this could be accomplished, but I will wait until I have a proper dev environment and get settled before I make an extreme change.


I agree with the concerns brought up by Thim and Untrustedlife, but at the same time I see the utility in having a better structured AI system to make it easier to understand and extend.

Now's basically as good a time as any to overhaul the AI, as no one else is working on it, so there won't be any merge conflicts. Though, the AI changes should be finished within a month to give playtesting and code review enough time before the next release, so that there's time for multiple rounds of tweaks to the AI; it's an important system to have working correctly so that the next release won't end up being worse off.

Hello all,

this will be a longer explanation, so please bear with me (or skip it :smiling_face_with_tear:). I looked at the current AI code and I also noticed that some things are supposed to be changed there anyway. First of all, some problems I see with the current solution:

  • It scales extremely poorly / is very poorly extensible.
  • It is computationally intensive with many instances.
  • Decisions are made in a linear fashion (unintentional prioritization of actions).

For an easier understanding of what the current AI does (a.k.a. for myself), I created a flowchart of the ChooseActions function in MicrobeAI.cs. If you've already dealt with this, just ignore it; it's not too complex, this is just for illustration.

First of all, this is a simple, unfortunately rather poorly extensible, hardcoded variant of a behavior tree. Arguments for Behavior Trees are valid and my proposal can therefore obviously be rejected, although it's just as powerful. However, I think the solution I propose is more suitable and flexible for the purposes used here; at the very least, we should have a system that provides some means of extension other than hard coding. (Whoever designed this, please don't be offended. I don't mean to be harsh, just honest.) So now here's my idea:

I've worked out a framework for AI systems in Thrive in general. It is based on a state machine approach (not a classic FSM) and offers a very flexible framework, which can be used for all possible forms of automation and/or AI (also later stages). For this I have already prepared a test project that implements the framework. It consists of a machine that holds states and transitions. So far so good. The Machine class provides all functions for building and executing the Machine. Besides all available states, it holds the active state, the start state, and the transitions starting from the current state. Also nothing new here. A State object holds its name/ID, and one delegate each for entering the State, exiting the State, and continuous execution within the State. A Transition object holds the IDs of the source and target states, and any number of delegates as conditions under which that transition should be triggered.

So we have an arbitrary set of states, whose entry and exit trigger arbitrary functions and which execute an arbitrary function as long as they are active. Transitions between these states are defined with a start state, a destination state, and any number of checks that, if all are true, trigger the transition. Alternatively, transitions can be defined with Godot's signal system, in which case they are triggered by any Godot signal, provided the transition is possible from the current state. States and transitions are exclusively created by the user and assigned to the Machine instance. Everything else is done by the machine.

An example implementation or use of the framework could look like this (this one does what it should):

using System;
using Godot;

public class MicrobeMachine : Node
{
    private Machine _baseMachine;
    
    public override void _Ready()
    {
        //Initialize base machine
        _baseMachine = new Machine();
        
        //Initialize states A and B, add them to base machine
        State stateA = new State("A", ExampleEnter, ExampleExit, ExampleProcess);
        _baseMachine.AddState(stateA);
        State stateB = new State("B", ExampleEnter, ExampleExit, ExampleProcess);
        _baseMachine.AddState(stateB);

        //Create transition from A to B, add a conditional function for the
        //transition to happen, add it to the base machine's transition pool
        Transition aToB = new Transition("A", "B");
        aToB.AddCondition(ExampleCondition);
        _baseMachine.AddTransition(aToB);
        
        //Create transition from B to A, and register a signal that triggers
        //this transition
        Transition bToA = new Transition("B", "A");
        _baseMachine.AddSignalTransition(GetNode("Button"), "pressed", bToA);

        //Set starting state and start base machine
        _baseMachine.StartState = "A";
        _baseMachine.StartMachine();
        GD.Print("_Ready of MicrobeMachine done...");
    }

    public override void _Process(float delta)
    {
        _baseMachine.ProcessMachineState();
    }

    public override void _PhysicsProcess(float delta)
    {
        _baseMachine.ProcessMachineTransitions();
    }

    private void ExampleEnter(State myState)
    {
        GD.Print("Entered state " + myState.Name);
    }
    private void ExampleExit(State myState)
    {
        GD.Print("Exited state " + myState.Name);
    }
    private void ExampleProcess(State myState)
    {
        //GD.Print("Processing state " + myState.Name);
    }

    private bool ExampleCondition()
    {
        return true;
    }
}

The decision of when and how often the current state's process function is triggered is thus left to the user. This can be done, for example, in the _Process function of the implementing node. Alternatively, if this is more suitable for the use case, one could implement a ticker that triggers the processing of the machine. Also, when the transitions are to be checked can be defined separately; here, for example, in the _PhysicsProcess function of the node. Separating this can be useful if many conditions have to be checked, but you want to avoid checking them in every frame (because it is computationally intensive). Checking them every frame is mostly unnecessary anyway, and depending on the use case a transition check a few times per second, or even only every few seconds, is sufficient.

Now to the creation of such a machine, here in the _Ready function of the executing node. First, an instance of the Machine class is created. This represents our base machine, in which all functionalities run. Theoretically it is possible to define further machines as states (quasi sub-machines) of an overlying machine. Whether this makes sense might be discussed, so ignore this for now. Next, any number of states are created and added to the machine. For the creation of the states, so-called delegates are used. Simplified, with a delegate you can point to a function in an object. The functions themselves are not defined in any State, Machine or Transition, so you won't have to write a separate class for each state, etc. They are defined only in the actual, concrete implementation (here the MicrobeMachine class) and the States refer to them. They may also redirect to any function of another object that you have access to. All tidy. Maybe you can see where this is going: the framework is completely detached from any specific use case.
Transitions are added just as simply. Here, for example, a simple one (A to B) with a condition that always returns true. The previously mentioned alternative is the use of signals as transition triggers (here B to A). For this, the desired signal-giving node, the signal, and the transition to be executed are passed to the machine with the function AddSignalTransition.

That was it already. Now only the ID of the start state is defined and the machine is started. From here on the machine runs as desired and defined. So much for the structure and the use of the framework. What are the advantages of such a solution?

  • Very easily scalable / expandable
  • Computational effort adaptable & more performant (always calculates only current state and conditions of currently possible transitions)
  • Reacts just as fast to changes in the environment
  • Transitions allow more constraints
  • Applicable to all possible automations and AI (not only microbes)
  • Very easy to use

Of course I have some bias here because it's my framework. But I hope you see the same advantages. The framework is already up and running. If the idea meets with approval, my approach would be to first convert the existing AI system to the framework (static). If this works, the system offers some interesting possibilities that could be discussed. For example, the framework offers a simple way to generate the machines based on the organelles of the respective microbe in the current evolutionary stage, creating species-specific behavior. (Certain states are only of interest with certain organelles.) Furthermore, the same could be done with the player species, even giving the player control over the construction and rules of the machine. But this is all far away, if it is accepted at all.

One small note: if the conditions for multiple transitions from a state are met, arbitrary heuristics can be implemented. For example, right now a random one of the fulfilled transitions is chosen, but a rating system based on the utility model could be included. (This is also already implemented; for this purpose, evaluation functions can be given to a transition.) I have read the topic by IvyGM and I think Utility AI has its advantages, but it is difficult to get it where you want it, especially when the AI is playing your opponents. Balancing, for example, becomes hard to get right, in my experience. Personally, I'm a fan of having as much control over an AI's behavior as possible, so that emergent behavior can be aimed for rather than hoped for. And last but not least: it is also possible to use states as nodes and endpoints. Then, theoretically, a behavior tree can be built with this. (Although I'd rather not.)
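For illustration only, here is a standalone sketch of what such a rating heuristic could look like when several transitions are fulfilled at once. These classes are simplified stand-ins, not the actual API of the framework described above:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Standalone sketch, not the real framework classes: among transitions whose
// conditions are all met, pick the highest-rated one instead of a random one.
public class RatedTransition
{
    public List<Func<bool>> Conditions = new List<Func<bool>>();
    public Func<float> Rating = () => 0.0f;   // e.g. a utility-style score

    public bool IsFulfilled()
    {
        return Conditions.All(condition => condition());
    }
}

public static class TransitionPicker
{
    public static RatedTransition PickBest(IEnumerable<RatedTransition> candidates)
    {
        return candidates
            .Where(t => t.IsFulfilled())
            .OrderByDescending(t => t.Rating())
            .FirstOrDefault();
    }
}
```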

I am curious what you think about this! :slight_smile:

Gyzoto


I didn't have time yet to read all of your post, but at least your point about the extensibility of the AI was already discussed in this thread. Sorry if I mismerged these threads.

I basically already agreed someone can work on restructuring the entire AI:

Though you should make sure with @IvyGM that you aren't both trying to restructure the AI at the same time, as that's going to be a mess.

First off, it's really fun seeing the code I (mostly) wrote turned into a flowchart; maybe I should have done that! You switched some of the yes/no labels, but I don't think that really matters for this discussion.

I think my comments are more or less the same as what I said above about the utility AI idea, with the added caveat that I'd keep an eye out for the AI getting "stuck" on a behavior. Obviously an FSM is fully capable of having the right set of transitions to avoid this, so the devil's in the details here.

Before about a year ago the AI WAS a state machine: Improve Microbe AI by adQuid · Pull Request #2230 · Revolutionary-Games/Thrive · GitHub, and the reason we switched to the pseudo-behavior-tree was less about not liking the stateful algorithm and more about feeling it wasn't necessary.

If you've already got a working implementation up, I'd love to see that branch. Nothing sells me on an idea like a working prototype!


I think overall we don't want to go too hard into a pure state machine design, because as @Thim said, we need reactivity in our AI. So basically all states would end up with a bunch of exit conditions to swap to any other state if environmental changes warrant it.