|Posted by Heem on March 15, 2014 at 2:20 PM||comments (2)|
So, I finally got to a point where the players and goalkeepers interact with the ball: they pass it around, kick, dribble, etc. (sort of). I realized that it is not at all how I want it to work, but hey, progress is progress, and I am glad that I have reached this point. I plan on revamping the whole AI so that it is more in line with how I want the agents to behave. So yeah, that is what I have been doing the last week! You can see the current behavior in the link at the end of this post.
In order to revamp the AI, I plan on going from bottom to top. I will work on the AI for goalkeepers and test it, then I'll move on to the AI for field players, and then finally build the state machines for the teams. This gives me the ability to test the states for each type of player individually, and then when I move on to the teams I can test their states.
|Posted by Heem on March 10, 2014 at 1:20 AM||comments (0)|
So, you ask how the project has been going? Well, I have been working hard on it since last week, and I can say that the three most important systems of my project have been implemented: the StateMachine handler, the Steering Behavior manager, and the Message Passing system.
I tested the three systems and they are working great! One of my aims while making these systems was to make them modular and independent of each other, and I can say that they are. So how do they work, you might ask? Let me describe them to you one by one -
Steering Behaviors -
So for this system, I created a normal class and used it as an object in the Players class. The SteeringBehavior class has a member called m_pOwner, which maintains a pointer to the player it is associated with. So now, all I need to do is turn on a behavior in the SteeringBehavior object of that player. For example, in order to pursue the ball I would make the following call -
playerObj->getSteeringBehaviour()->setTarget( ballObj );
This works because in every update call of the player, a call is made to the calculateForces() function in the SteeringBehavior. This function takes care of updating the velocity of the m_pOwner member, and as a result the playerObj's velocity gets updated.
The behaviors I have implemented are pursuit, arrive, and interpose.
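To make the owner/behavior wiring concrete, here is a minimal sketch of the setup described above. The Vec2 type, the exact member names, and the seek-only force calculation are my simplified stand-ins; the real project uses glm vectors and has pursuit, arrive, and interpose.

```cpp
#include <cassert>
#include <cmath>

// Tiny stand-in vector type (the project uses glm for this).
struct Vec2 {
    double x, y;
    Vec2(double x_ = 0, double y_ = 0) : x(x_), y(y_) {}
    Vec2 operator-(const Vec2& o) const { return {x - o.x, y - o.y}; }
    Vec2 operator+(const Vec2& o) const { return {x + o.x, y + o.y}; }
    Vec2 operator*(double s) const { return {x * s, y * s}; }
    double length() const { return std::sqrt(x * x + y * y); }
    Vec2 normalized() const {
        double l = length();
        return l > 0 ? Vec2(x / l, y / l) : Vec2();
    }
};

struct Player; // forward declaration so SteeringBehavior can hold a pointer

class SteeringBehavior {
public:
    explicit SteeringBehavior(Player* owner) : m_pOwner(owner) {}
    void setTarget(const Vec2* target) { m_pTarget = target; }
    Vec2 calculateForces() const; // defined below, once Player is complete
private:
    Player* m_pOwner;                // the player this behavior steers
    const Vec2* m_pTarget = nullptr; // e.g. the ball's position
};

struct Player {
    Vec2 position;
    Vec2 velocity;
    double maxSpeed = 5.0;
    SteeringBehavior steering{this}; // back-pointer set at construction

    // Each frame, the player asks its steering object for a new velocity.
    void update(double dt) {
        velocity = steering.calculateForces();
        position = position + velocity * dt;
    }
};

// Placeholder "seek the target at full speed" calculation, standing in for
// the real pursuit/arrive/interpose force blending.
Vec2 SteeringBehavior::calculateForces() const {
    if (!m_pTarget) return Vec2();
    return ((*m_pTarget) - m_pOwner->position).normalized() * m_pOwner->maxSpeed;
}
```

With this shape, `setTarget(&ball)` is all it takes to switch a player's behavior on, and the player's update loop picks up the new velocity automatically.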
State Machines -
So this one was a little bit tricky, because there are three classes with a state machine associated with them: FieldPlayers, GoalKeeper (both of which inherit from Players), and Teams. So what I did was make an abstract template class called State&lt;T&gt;, which has three functions: enter(), execute(), and exit(). Then I made another class called StateMachine, which holds State objects for the currentState and previousState. In the update function of StateMachine, I call the execute() function of the current state. This way, all I had to do was have a StateMachine object in whichever class needed one.
Now you may ask, how did I manage different states, and why is there a separate State class? Well, for that I made classes which inherit from the State class. So, for example, GoalKeeper has state classes such as GoalKeeperReturnHome : public State&lt;GoalKeeper&gt;.
What this allowed me to do was have a set of different states for each of my classes that required a StateMachine. This way, I managed to create a single system that could be used by more than one entity.
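A stripped-down sketch of the pattern described above. The toy GoalKeeper class, the `activity` string, and the ReturnHome state body are invented for illustration; the real states carry actual game logic in enter()/execute()/exit().

```cpp
#include <cassert>
#include <string>

// Abstract template state: one set of hooks, reusable for any owner type T.
template <typename T>
class State {
public:
    virtual ~State() = default;
    virtual void enter(T* owner) = 0;   // called once on entering the state
    virtual void execute(T* owner) = 0; // called every update
    virtual void exit(T* owner) = 0;    // called once on leaving the state
};

// The machine itself just tracks current/previous state and delegates.
template <typename T>
class StateMachine {
public:
    explicit StateMachine(T* owner) : m_pOwner(owner) {}
    void changeState(State<T>* next) {
        if (m_pCurrentState) m_pCurrentState->exit(m_pOwner);
        m_pPreviousState = m_pCurrentState;
        m_pCurrentState = next;
        m_pCurrentState->enter(m_pOwner);
    }
    void update() {
        if (m_pCurrentState) m_pCurrentState->execute(m_pOwner);
    }
private:
    T* m_pOwner;
    State<T>* m_pCurrentState = nullptr;
    State<T>* m_pPreviousState = nullptr;
};

// Hypothetical owner: any class that needs a machine just embeds one.
struct GoalKeeper {
    std::string activity;
    StateMachine<GoalKeeper> stateMachine{this};
};

// A concrete state inherits from State<GoalKeeper>, mirroring
// GoalKeeperReturnHome in the post (the bodies here are placeholders).
class ReturnHome : public State<GoalKeeper> {
public:
    void enter(GoalKeeper* k) override { k->activity = "heading home"; }
    void execute(GoalKeeper* k) override { k->activity = "walking home"; }
    void exit(GoalKeeper* k) override { k->activity = "left home state"; }
};
```

The same State&lt;T&gt;/StateMachine&lt;T&gt; pair then serves FieldPlayers and Teams by instantiating the template with a different owner type.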
Message Passing System
So, for this I had to create a data structure for messages. I created a class called Message, which has a sender and a receiver (both of type BaseGameEntity), a MessageType, a delay for the message, and extraInfo (of type void*, which can be used to send any object).
Next, I created a singleton class called MessageDispatcher, which has three functions: dispatchMessage, dispatchDelayedMessage, and sendMessage. sendMessage basically calls the handleMessage function of the receiver. dispatchMessage calls sendMessage immediately provided the delay is 0; otherwise it stores the message in a priority queue, and dispatchDelayedMessage sends it after the amount of delay specified in the message.
Since BaseGameEntity has a pure virtual function, HandleMessage, all the classes that inherit from it have the ability to handle messages. If a particular class receives a message it has no handler for, it simply returns false, which means there is something wrong in the way the game has been programmed, because that is an exception.
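Here is a rough sketch of that flow. The field names, the absolute-time units, and the toy Keeper receiver are my assumptions for illustration; the real project's Message and dispatcher carry more detail.

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <vector>

struct BaseGameEntity;

// Message record: who sent it, who gets it, what it means, and when.
struct Message {
    BaseGameEntity* sender = nullptr;
    BaseGameEntity* receiver = nullptr;
    int messageType = 0;
    double dispatchTime = 0;   // absolute time at which to deliver
    void* extraInfo = nullptr; // arbitrary payload
    // Later dispatch times sort lower, so the queue pops earliest-first.
    bool operator>(const Message& o) const { return dispatchTime > o.dispatchTime; }
};

// Every entity must decide for itself which messages it handles.
struct BaseGameEntity {
    virtual bool handleMessage(const Message& msg) = 0;
    virtual ~BaseGameEntity() = default;
};

class MessageDispatcher {
public:
    static MessageDispatcher& instance() {
        static MessageDispatcher d; // lazily-constructed singleton
        return d;
    }
    // Deliver now if there is no delay, otherwise queue for later.
    void dispatchMessage(double currentTime, Message msg, double delay) {
        if (delay <= 0) {
            sendMessage(msg);
        } else {
            msg.dispatchTime = currentTime + delay;
            m_queue.push(msg);
        }
    }
    // Called each tick: deliver every queued message whose time has come.
    void dispatchDelayedMessages(double currentTime) {
        while (!m_queue.empty() && m_queue.top().dispatchTime <= currentTime) {
            sendMessage(m_queue.top());
            m_queue.pop();
        }
    }
private:
    void sendMessage(const Message& msg) { msg.receiver->handleMessage(msg); }
    std::priority_queue<Message, std::vector<Message>, std::greater<Message>> m_queue;
};

// Hypothetical receiver used only to demonstrate the flow.
struct Keeper : BaseGameEntity {
    int received = 0;
    bool handleMessage(const Message&) override { ++received; return true; }
};
```

The priority queue keyed on dispatchTime is what makes delayed messages cheap: each game tick only has to peek at the front of the queue.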
So that describes the three main systems that make up my project. I should now finally be able to move on to coding the behavior and reactions of the agents. Fun!
PS: I was going through my code yesterday and realized that I have about 23 header files, about 19 source files, and almost 2500 lines of code.
|Posted by Heem on March 3, 2014 at 2:20 PM||comments (0)|
So one of the things I achieved this last weekend was setting up a Git repository for this project of mine. I created a GitHub account for myself (finally) and created the repository. It was much easier than I had expected it to be.
You can find my Git repository here - https://github.com/heem2990/SOCCER_GAME
So why use Git?
Even though I am the only one working on the project, I get a lot of advantages. The first and foremost is version control: I can go back to a snapshot of the project and revert the whole project to that point in time.
The second is that I can share the repository with recruiters, and it serves as a code sample for them. Also, if I ever do want someone to work on this project with me, I can easily pass the repository on to him/her and we can get started really easily.
Finally, I can keep track of my progress and see how much I have achieved each day.
|Posted by Heem on February 26, 2014 at 11:15 PM||comments (0)|
So after learning about steering behaviors, the book talks about how they can be implemented in a very simple soccer game. I am going to implement a similar soccer game on my own. I have actually finished reading through that chapter of the book, and it has given me great insight into how to pursue it.
I decided to use C++ to implement my demo, the reason being that I need more practice with C++. I decided to use two libraries. The first is glm, a basic header-only math library for OpenGL; I plan to use it mainly for vector math, so that I can save time and concentrate on developing the demo rather than on the framework I would need to build it. I also decided to use the Allegro library for rendering images on screen and easily setting up a game window. I successfully integrated both into my project and started coding the framework and class hierarchy.
My basic class hierarchy looks as shown below.
I was able to successfully create the class structure and have implemented the basic draw calls. My current game demo is able to render the following on the screen -
|Posted by Heem on February 26, 2014 at 11:05 PM||comments (0)|
So this post was supposed to come about a week ago, sorry about that! I had started looking into steering behaviors.
What are steering behaviors, one might ask? Steering behaviors are behaviors such as "chase", "flee", "avoid", etc. Think of steering behaviors as the decisions you make when playing any physical game, say the game of tag. When the person who is "it" is near you, you basically "flee" away; when you are "it", you chase the other players, thus showing the behavior of chase. The same concept applies to AI agents as well. The agents, depending on what state they are in, display different kinds of behaviors. For example, if there is a guard nearby and he spots you, the guard will start chasing after you, depicting the "chase"/"seek" behavior. Similarly, an enemy target might try to "flee" from you if they realize that you, as the player, are pursuing them. That is the concept of steering behaviors.
So how do they work? Most steering behaviors can be implemented in terms of basic vector mathematics. The seek behavior can be implemented easily as follows -
Say your position is P and your target's position is Q. Then the vector Q - P points from your position to the enemy's position. If you take a unit vector in the direction of Q - P, multiply it by the speed you can travel at, and assign that as your velocity vector, you will be traveling in the direction of your target. You will then be able to catch the enemy as long as their speed is less than yours.
Flee can be implemented really easily as well; it is basically the opposite of seek, with the velocity in the direction of -(Q - P).
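The seek/flee math above, written out directly. The Vec2 struct and function names here are stand-ins for whatever vector type the engine provides (glm in my case).

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { double x, y; };

// Seek: unit vector from our position P toward target Q, scaled by top speed.
Vec2 seekVelocity(Vec2 p, Vec2 q, double maxSpeed) {
    double dx = q.x - p.x, dy = q.y - p.y;  // the vector Q - P
    double len = std::sqrt(dx * dx + dy * dy);
    if (len == 0) return {0, 0};            // already on top of the target
    return {dx / len * maxSpeed, dy / len * maxSpeed};
}

// Flee: seek with the direction reversed, i.e. along -(Q - P).
Vec2 fleeVelocity(Vec2 p, Vec2 q, double maxSpeed) {
    Vec2 v = seekVelocity(p, q, maxSpeed);
    return {-v.x, -v.y};
}
```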
Some of the other steering behaviors that I looked into after that are obstacle avoidance and offset pursuit. These are a little bit more complicated. All of these behaviors can then be combined to get more complex behaviors such as group movement or flocking.
So that's an intro to steering behaviors! Keep an eye out for the next post, where I'll talk about how I am planning to implement them in a demo.
|Posted by Heem on February 24, 2014 at 11:55 PM||comments (0)|
So I finished this chapter about a week and a half ago, and I implemented the first part of the world and the messaging system. It looks sort of boring, mainly because it's all in the console and is not really interactive. BUT! What is it about?
It's about how agents have states associated with them. Agents? States? What? An agent is basically an entity in a game that can move on its own and has AI behavior. A state can be loosely defined as the behavior that entity is exhibiting at a particular snapshot in time. Think of it like your day-to-day life: in the morning, when you head to work, you are in a "Traveling" state; once you reach work, your state changes to "Working". That is essentially what a state means for agents as well.
In the chapter, the author takes the example of a miner and builds a state machine for the miner to follow. The miner has "Mine for Gold", "Quench Thirst", "Deposit Gold", and "Return Home" states.
In the second part, the author talks about implementing a messaging system and introduces the miner's wife as a new entity. The miner's wife has states such as "Work" and "Cook Food", and the miner sends a message to his wife when he returns home.
This was an essential chapter, basically because the state machine is fundamental to AI. Think about it this way: if there's a cop-and-robber game, the cop could have states such as "Patrol" and "Pursue Robber", whereas the robber could have states such as "Hide", "Shoot an Arrow", and "Run".
|Posted by Heem on February 4, 2014 at 8:30 AM||comments (0)|
Last week's milestone was pretty easy; I was actually able to finish it over the weekend and wrap it up by the previous Monday. It was a very basic refresher on the mathematics and physics required for gameplay programming.
Other than that, I also started and got through Chapter 2: State-Driven Agent Design. I read through it and understood it pretty clearly; I am now working on implementing the AI state system described in the book.
The state system is basically about decision making and is used to decide what state an agent will go into next. In the example, we are dealing with a miner named Bob, and our task is to make sure he does the following: go to the mine; go to the bank and deposit his gold when his pockets are full; if the amount of gold he has deposited in a day is enough for him, go home, otherwise go back to the mine. Also, if at any point he is thirsty, he should go to a bar and drink, and if at any point he feels fatigued, he should go home and sleep.
The more complex state system described in the book has an additional character, "Elsa", who is the miner's wife. We then use a messaging system to pass messages between Elsa and Bob so that their dependent states are triggered at the correct times. For example, when the miner enters the "going home" state, Elsa should enter the "cooking food" state. Also, once the food is cooked, i.e. after Elsa has been in the "cooking food" state for a certain period, she should message Bob, which changes his state from "sleeping" to "going to eat food".
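Bob's decision rules above can be sketched as one transition function. The enum names and the threshold values here are placeholders I made up; the book implements this with full state classes rather than a single function.

```cpp
#include <cassert>

enum class MinerState { GoToMine, DepositGold, GoHomeAndSleep, QuenchThirst };

struct Miner {
    int goldCarried = 0;   // gold in his pockets right now
    int goldDeposited = 0; // gold banked so far today
    int thirst = 0;
    int fatigue = 0;
};

// Decide Bob's next state from his current stats. The priority order
// (thirst and fatigue interrupt everything) follows the description above.
MinerState nextState(const Miner& bob) {
    if (bob.thirst >= 5) return MinerState::QuenchThirst;          // thirsty? hit the bar
    if (bob.fatigue >= 10) return MinerState::GoHomeAndSleep;      // tired? go sleep
    if (bob.goldCarried >= 3) return MinerState::DepositGold;      // pockets full? bank it
    if (bob.goldDeposited >= 9) return MinerState::GoHomeAndSleep; // enough for today
    return MinerState::GoToMine;                                   // otherwise, keep mining
}
```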
|Posted by Heem on February 4, 2014 at 8:25 AM||comments (0)|
So last week, I finally decided on my milestones; I will basically be working on finishing one chapter per milestone. Each milestone should last roughly around 1.5-2 weeks. There are some chapters that I am familiar with and that will take less time; for example, the math and physics primer should not take me more than 1-2 days to go through, mainly because it will be a revision for me.
The detailed milestones I am planning to follow are as below -
|Posted by Heem on January 23, 2014 at 3:10 AM||comments (0)|
I am Heem Patel, currently a student at the Silicon Valley campus of Carnegie Mellon University's Entertainment Technology Center. This semester I will be working on implementing various algorithms related to the field of artificial intelligence. I will be posting my dev diaries here, so keep coming back, and please, please give me your feedback.