Sunday, May 20, 2012

Turn Based Strategy Game AI part 7

Continued from part 6

For this installment, I watched the AI play several games and made note of a few behaviors it was performing that were less than ideal.  Some of these were solved by tweaking the behavior weights while others required some code changes.

On the test map I have been using for these tests, there is an isolated castle across a river that the AI had been ignoring.  By increasing the weight value used when scoring capture moves, the AI began responding to isolated settlements such as this one.
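To make the weight tweak concrete, here is a rough sketch (in Java for illustration; the function, names, and numbers are all made up, not the game's actual code) of how a tunable capture weight can lift a distant settlement's score above competing goals:

```java
// Hypothetical capture-move scoring, illustrative only.
public class CaptureScoring {
    // Score a capture move as (settlement value minus travel cost),
    // scaled by a tunable behavior weight.
    static float scoreCapture(float settlementValue, float distance, float captureWeight) {
        return captureWeight * (settlementValue - distance);
    }

    public static void main(String[] args) {
        float farDistance = 20f;  // e.g. the isolated castle across the river
        // With a low weight, the distant capture barely competes with other goals...
        System.out.println(scoreCapture(50f, farDistance, 1.0f));  // 30.0
        // ...while a higher weight amplifies its score relative to everything else.
        System.out.println(scoreCapture(50f, farDistance, 2.5f));  // 75.0
    }
}
```

The point is simply that the capture weight scales the whole term, so raising it shifts the AI's preference toward captures without touching any other goal's scoring.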

The AI was also assigning units to attack goals for which they were ill-suited.  To correct this, I added a minimum suitability threshold to the check for whether a goal has enough resources.  If it does not, the goal is not assigned.

        public bool HasSufficientResources
        {
            get
            {
                int numSuitableResources = 0;
                foreach (GoalResource gr in PotentialResources)
                {
                    // only count those that meet the minimum threshold
                    if (gr.Suitability > SUITABILITY_THRESHOLD) numSuitableResources++;
                }

                return (numSuitableResources >= NeededResources);
            }
        }

Another bad behavior I witnessed was that the AI would abandon its settlements, letting the enemy easily capture them on the next turn.  Somewhat related, healer units were moving close to the enemy to heal their targets, because their attack goal rewarded them for being near the target.

To address both of these issues, I added in two new AI Goals:
  • Support - Applicable to all friendly units; supporting a unit means using friendly abilities (such as heal) on it, or simply staying close to it to reduce the number of directions from which the enemy can attack.
  • Protect - Applicable to all owned settlements; protecting a settlement means staying close to it, preferably right atop it, to prevent the enemy from capturing it.

The related code is so similar to the code for the existing goals that I am going to skip it here in the interest of space.  It will be presented in full when I do a complete overview of the code in a couple of weeks.
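That said, to give a flavor of the idea before the full overview, here is a minimal sketch (in Java for illustration; the names and the decay formula are hypothetical, simplified from the location-scoring style shown elsewhere in this series) of how a Protect goal might score a unit's position:

```java
// Hypothetical Protect-goal location scoring, illustrative only.
public class ProtectSketch {
    // Best score (1.0) for standing right atop the settlement,
    // decaying as the unit moves farther away.
    static float scoreProtectLocation(int unitX, int unitY, int townX, int townY) {
        // Manhattan distance on the tile grid
        float dist = Math.abs(unitX - townX) + Math.abs(unitY - townY);
        return 1f / (1f + dist);
    }

    public static void main(String[] args) {
        System.out.println(scoreProtectLocation(3, 3, 3, 3)); // atop the settlement: 1.0
        System.out.println(scoreProtectLocation(5, 3, 3, 3)); // two tiles away scores lower
    }
}
```

Support works the same way, except the target is a friendly unit rather than a settlement, and ability range comes into play.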

Because the AI rewarded units with attack goals for being as close to their target as possible, ranged units were attacking from much closer than necessary, leaving them open to counterattacks.  To correct this, I updated the location scoring algorithm to take the attacker's range into account, awarding high scores for attacking from maximum range.

        private float GetDistanceToGoalImproval(UnitStatus u, Point newLoc, int idealRange)
        {
            float oldDistance = Globals.GetDistance(u.X, u.Y, u.AiGoalTarget.X, u.AiGoalTarget.Y);
            float newDistance = Globals.GetDistance(newLoc, u.AiGoalTarget);

            // return factor representing improvement in position, protect vs divide by 0
            return (newDistance <= idealRange) ?
                1f + newDistance / 10f :            // give bonus for maximizing within range
                (oldDistance > 0) ? (1f - newDistance / oldDistance) : (1f - newDistance);
        }

        private float ScoreLocationForUnit(Point loc, UnitStatus u, GameDetail gd)
        {
            float ret = 0f;

            // score location based on goal
            switch (u.AiGoal)
            {
                case Goal.GoalType.ATTACK:
                    ret = GetDistanceToGoalImproval(u, loc, u.Range) * CurrentDisposition.AttackGoal.TargetDistanceFactor;
                    break;
                case Goal.GoalType.CAPTURE:
                    ret = GetDistanceToGoalImproval(u, loc, 0) * CurrentDisposition.CaptureGoal.TargetDistanceFactor;
                    break;
                case Goal.GoalType.SUPPORT:
                    ret = GetDistanceToGoalImproval(u, loc, u.FriendRange) * CurrentDisposition.SupportGoal.TargetDistanceFactor;
                    break;
                case Goal.GoalType.PROTECT:
                    ret = GetDistanceToGoalImproval(u, loc, 0) * CurrentDisposition.ProtectGoal.TargetDistanceFactor;
                    break;
                case Goal.GoalType.FORTIFY:
                default:
                    // get location score based on influence (tend towards 0)
                    ret = CurrentDisposition.FortifyGoal.TargetDistanceFactor - _influenceMap.GetTotalInfluencePercentAt(loc.X, loc.Y) * CurrentDisposition.FortifyGoal.TargetDistanceFactor;
                    break;
            }

            return ret;
        }

The largest change made to the AI, however, was giving it the ability to behave differently based on the current game situation.  There are 4 different game states recognized by the AI:
  • Balanced - The default behavior the AI has been using up to this point.  This behavior strikes a balance between attack and defense.
  • Expansion - Representing the early-game situation where neither player has many units and both are attempting to expand their influence and capture resource points (settlements).
  • Winning - When the AI detects it is winning, it will play more aggressively, pushing the attack even if it means taking more losses than normal.
  • Losing - When the AI detects it is losing, it will play more defensively: pulling back to reinforce its positions, purchasing units better suited and more cost-effective for defending, and avoiding risky attacks.

This is implemented rather easily.  Each of the 4 states has a separate set of goal and move weights that it uses when planning its moves.  As these weights are already packaged up into the DispositionProfile class, we now have 4 DispositionProfiles, one for each game state.  The AI detects which state the game is in at the start of each turn, and then uses the corresponding DispositionProfile.

The game state is detected by summing the value of all units and resources for both players and comparing the totals.  If both players score below a certain threshold, the game state is set to "Expansion".  Otherwise, the ratio of the players' scores is computed and used to determine who, if anyone, is currently winning, and the game state is set appropriately.

This logic relies upon 3 new constants that set the various thresholds.  It took several attempts to arrive at values that worked correctly.  I feel pretty good about the values chosen for determining Winning vs Losing, but the expansion threshold has me a bit concerned.  I can see this value varying based on the map being played, and I expect to eventually compute it from the game map instead.
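As a purely speculative sketch of what that future map-derived threshold might look like (in Java for illustration; the 40% fraction and the sample incomes are placeholders, not anything the game actually does), one option is to take a fraction of the total gold income available on the map:

```java
// Hypothetical map-derived expansion threshold, illustrative only.
public class ExpansionThreshold {
    // Expansion phase ends once a player's score reaches roughly
    // 40% of the total settlement income on the map.
    static float computeThreshold(int[] settlementIncomes) {
        float total = 0f;
        for (int income : settlementIncomes) total += income;
        return total * 0.4f;
    }

    public static void main(String[] args) {
        // A made-up map with 400 total gold income across 5 settlements
        int[] incomes = {50, 50, 100, 100, 100};
        System.out.println(computeThreshold(incomes)); // 160.0
    }
}
```

On a map like this hypothetical one, the computed value would coincide with the hand-tuned constant of 160, but it would scale up or down automatically on richer or poorer maps.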

This logic all fit nicely into a new class called a DispositionProfileSet which is shown below:

    public class DispositionProfileSet
    {

        public List<DispositionProfile> Profiles;

        private const int PROFILE_BALANCED = 0;
        private const int PROFILE_WINNING = 1;
        private const int PROFILE_LOSING = 2;
        private const int PROFILE_EXPANSION = 3;

        private const float EXPANSION_THRESHOLD = 160f;
        private const float LOSING_THRESHOLD = 1.25f;
        private const float WINNING_THRESHOLD = 0.8f;

        private int _currentProfile;

        public DispositionProfileSet()
        {
            Profiles = new List<DispositionProfile>();
            _currentProfile = PROFILE_BALANCED;
        }

        public DispositionProfile ActiveProfile
        {
            get
            {
                return Profiles[_currentProfile];
            }
        }

        public void ComputeActiveProfile(GameState gs)
        {
            float friendlyUnitsScore = 0f;
            float enemyUnitsScore = 0f;

            // loop through all units
            foreach (UnitStatus u in gs.Units)
            {
                // ensure unit is living
                if (u.IsAlive)
                {
                    // add unit score to running total based on which side it is on
                    if (gs.AreFriendly(u.Owner, gs.CurrentPlayer))
                    {
                        // friendly
                        friendlyUnitsScore += u.Cost; 
                    }
                    else
                    {
                        // enemy
                        enemyUnitsScore += u.Cost; 
                    }
                }
            }

            // loop through settlements and add gold income from each
            foreach (Settlement s in gs.Settlements)
            {
                // is settlement owned?
                if (s.Owner != Globals.PLAYER_NONE)
                {
                    if (gs.AreFriendly(s.Owner, gs.CurrentPlayer))
                    {
                        // friendly
                        friendlyUnitsScore += s.GoldIncome;
                    }
                    else
                    {
                        // enemy
                        enemyUnitsScore += s.GoldIncome;
                    }
                }
            }

            // now compare our scores to see which profile to set as active
            
            // if both unit scores are less than base threshold, consider this to be expansion phase
            if (friendlyUnitsScore < EXPANSION_THRESHOLD && enemyUnitsScore < EXPANSION_THRESHOLD)
            {
                _currentProfile = PROFILE_EXPANSION;
            }
            else
            {
                // compare scores
                float scoreRatio = enemyUnitsScore / friendlyUnitsScore;
                // assume balanced
                _currentProfile = PROFILE_BALANCED;
                if (scoreRatio < WINNING_THRESHOLD) _currentProfile = PROFILE_WINNING;
                if (scoreRatio > LOSING_THRESHOLD) _currentProfile = PROFILE_LOSING;
            }
        }
    }

And as usual, here is this week's modified AI (codenamed "George") against Frank from last week.  It should be noted that Frank gained the benefit of several of this week's changes, such as the updated weight values and the minimum suitability threshold, because those changes were made to the AI framework itself.

Next time I am planning on running the collected weights through a genetic algorithm to arrive at hopefully better values.  Until then, happy coding!


Continued in part 8

2 comments:

Unknown said...

I love how far you have progressed! I have been following you from AIgameDev - please keep posting!

Assuming you have played against the latest AI incarnation (George?) - how do you fare?

Fuhans Puji Saputra said...

Hi, I am interested in your project, especially the AI. Could you please tell us how to make an AI for a turn-based strategy game? I am doing something similar to yours. I am able to make the AI character move by itself, but it does not follow the right procedure (it does not follow the pathfinding). Thank you very much! I will keep in touch with this game!