I am interested in, and need to explore, how AI in games is constructed and how I can use existing approaches in my own games. Being relatively new to the field means that there are existing, well understood and popular ways to handle this problem, and I would benefit from fully understanding them. The AI that I have used in Serial Link, the only game project that I have created AI for, is very simple. In fact, it is very obviously too simple. Although I found much of the material from this week's content very interesting, and I talk a little about that below, I know that there are technologies like the Environmental Query System already present in Unreal, and I think that it's appropriate for me to focus on those at the moment. I have become increasingly guarded about the subjects that I agree to dig into, as it is very easy to find myself knowing a little surface information about a lot of topics, and I don't believe that gets me any closer to my goal of being a professional games programmer.
This week's course material was plentiful. There were so many interviews to watch and think about that, I must admit, I should probably have watched them over two, even three, sessions to get the full benefit. My eye was drawn most to the discussion around Artificial Intelligence, and in particular how it can be used to learn from human players. The purpose is to have agents that use this technique behave in a more human way, complete with not just moments of brilliant play but also the plentiful mistakes humans make. This makes the game experience much more immersive, particularly for adversarial games, where it is difficult for AI programmers to create opponents that are challenging, defeatable, and consistent yet a little unpredictable. On the other hand, nothing annoys players more than a stupid sidekick character who is constantly in the way and reduces the quality of the player's experience rather than enhancing it.
The particular flavour of this approach that I liked was Genetic Programming. The reason I like it is that the code generated by the learning process reminds me of Behaviour Tree nodes from Unreal. I have not delved deeply into this because I have already committed to more work than I think I can complete, but based on the presentation by Swen E. Gaudl, it seems that the developer is left with nodes or 'genes' that are interchangeable and customisable. I would very much like to break into Machine Learning in some way soon, and the thought of having AI agents who are appropriately skilled and feel human is very interesting. There is a barrier to me using the particular frameworks that he talks about, in that I don't know how to code in Java. However, I am sure that this is a paradigm at least as much as it is a framework, and I would not be surprised to find this approach in play in other languages. I will continue my learning of C++ and, once that is more up to date, I could consider Java if needed, although I have already expressed the need to move from C++ to C# in order to use the Unity game engine.
As for where I am in the course right now? I have posts covering the work on the Battery Collector Unreal tutorial and the creation of a SMART goal for understanding animation. I also cover a presentation that was suggested to me by Al at the Games Academy, which talks about what's going on behind the scenes with Git.
I will learn enough about animating 3D characters that I can animate characters for the games that I make, which will enable me to create much more specific and personalised content for my games. I will do this by first seeking advice on which application to learn, deciding on one, and then finding a beginner-level tutorial series on one of the learning platforms. I will also find something short that can inform me about the basics of the 12 principles of animation. I will complete this in 6 weeks.
I will track my progress by working through the course I choose, and the measurable outcome will be that I can take a character from Unreal or a third-party service, bring it into the software of choice, create animations for it, import those into Unreal and set them up there so that they can be used during the game.
This is achievable because I am already familiar with the concept of key frames and I can draw to a good standard. I also understand that using reference material for animating really would be the best place to start so that I can learn to make convincing movement. The skill level that I am looking for is beginner to intermediate as this is a skill that I would employ while prototyping and would not intend that my animation work would make its way into anything for production. I need to watch out for this taking too much time as the primary focus for me is still coding.
This goal is relevant to me because it improves my ability to make my games more interesting and would allow me to more clearly show the player what the mechanics do. I am the right person for this as I already understand form to the point where I can draw a little and I do not struggle to learn things of this nature. In the future I will need this skill if I want to be able to attract interest to something that I am working on. This is the right time for me to achieve this because I have time set aside for learning how to be a game developer and my life should only be getting busier over the next year.
This goal should be complete within 6 weeks. I will achieve this by spending 5 hours per week on this goal.
Some of these commit notes are from memory as I have found out that Git Desktop does not save the description that I created between launches of the program. Once the program is closed the information is lost. Very irritating! So, from now on I will work in Workflowy to take the notes that I need and just paste them into Git Desktop when I am ready to make a commit. Bah.
Macros are cool
This is my first macro. I don't mean the macros that you can set up in an actor, which are a little like functions except that at compile time the nodes themselves are pasted in wherever the macro is used. No. I mean that this is the first macro I have created that is part of a custom Macro Library. Yes, I feel very grown up right now. I know it's simple, but it turns out that this little guy is really useful. So, now I am looking out for logic that I am duplicating between Blueprints, as that's really the issue this solves. I also finally know what a 'wildcard' is. It's anything. Simple. More specifically, and as I understand it, a wildcard is used as a placeholder at the time of creating the logic so that when the macro is used, it will determine the type of what has been passed to it and work with that. Perfect for an array operation like this, as it does not matter what the array contains, only that it is an array.
Fixed the macro that was supposed to be fetching a random element from an array
Imported some new blood splatters for testing
Started to set up the gore profiles
Created a struct called Gore Profile Struct that contains a Name field and an array of what will be Gore Pieces. This should allow me to set up something like 'HeadShot' as a profile and then determine that eye and brain gore should be included in the list of gore.
Created Gore Pieces Base that I will use to make child Gore Pieces. The functionality, like audio on hit and spawning decals or drawing them from a pool, will happen in here.
Created Gore Piece Eyeball, just as a test really, as I have not changed the model to an eye although I do have them. It's just to start building the Profile idea out.
I have completed my very first Unreal and C++ tutorial! I am very pleased with this simple game, as it is a significant milestone for me and is the first fully functioning project that I have been able to put together. Now, before anyone shouts 'but it was just a tutorial, anyone with eyes and fingers could have done that!', let me say: that's true. They could have. But here's the good part (for me at least). I understand it.
Not much more than a little while ago, there would have been no way that I would have looked at something like this and thought, yeah, I see what's going on there. I didn't know basic C++, so following Unreal's 'version' of it was painful and not really possible. This problem came about mainly due to the learning curve I would have had to experience while on the BA Top Up course. I could not learn about the Unreal editor, Blueprint (which needs to be understood and used even when working in C++) and then a mighty language like C++ on top of all that. I just would not have been able to produce anything worth showing.
Since graduating from the course, I still felt that learning to code in C++ would be too much of a challenge, and so I resigned myself to working in Blueprint only while carrying on the development of Serial Link. It was not really until I was facing the reality of what would happen if Serial Link did not work out that I thought, oh dear, I still can't code in anything but a very specific tool for a very specific engine.
So, shortly before taking on this MA, I grabbed a course on Udemy that I have already covered here, and started the climb to coding stardom. Ok, that's a touch over the top, but you know what I mean. I thought about Unreal and C++ together and realised that I would save myself a ton of headaches if, first, I got used to the C++ language on its own and then stepped into the fray with Epic, the makers of Unreal. I expected that I would be able to break into coding for Unreal once I understood a little bit more about the language itself, and for the most part I've been right. I think that I am ready to start a C++ project in Unreal or, and this may be a better move, refactor some of the logic in Serial Link into C++ one small feature at a time. I would rather have something to show for it at the end that I could use professionally.
I have succeeded! Thank you, thank you… It's fine, honestly, sit down, please. No, please. Flowers? You shouldn't have. I'm allergic to them for a start…
I have succeeded in confusing people about what I want to do and where I want to be in my professional life! Not great. But repairable. So, let's lay it out and have a little chat about it.
Having had some feedback about the journal and talked its content through, it seems that a post giving an overview of where I am, and why I am looking at the things that I am, is in order. I can take you back to the video that I did for my personal case study, where I introduced myself and said '… and I want to be a generalist with a focus on coding so that I can make my own games…'
What does Generalist mean to me?
I have been thinking about this and it's pretty simple. It means two things to me. The first is 'being able to make a game from start to finish on your own'. That means being able to design a game, work out the mechanics and implement them. Then you need to be able to work with the animation, audio, particles, decals, post-processing, AI, navigation, UI and so on, until you have touched almost every feature in your engine of choice. Then you should be at a point where someone could sit and play something that you have created. This is very different to being a modeller. Very different to being a programmer. I want to be able to model, animate, maybe create some music. I want to understand, and be able to perform in, most areas to a good standard. However…
The T shape
I read a very interesting article while I was thinking about this, and you can find it here. It talks about seeing your skill set as a T shape, and it really resonated with me. It also says to try not to be a specialist or a generalist but some sort of hybrid, biological catastrophe of nature: sort of specialising in something but then being 'aware' of the other areas. I'm not sure I agree completely, but I do like the structure of what the author is saying.
Having had a shower and come back to this draft: he's right. That's exactly what I want to be. Very strong at logic and coding and 'alright to pretty good' at the other bits.
Since I started my young game development education (and hopefully, career), it has been obvious to me that programming is a key skill, even if you are interested in many things. The reason is that an audio engineer can make great audio, but they can't make a game. An animator can make things move beautifully, but they can't make a game. But before crowning programming as the almighty vocation: on its own, it is not enough to make a game either. That's when I found the term 'Generalist' and thought (in my brain, as my 4-year-old would say) yes, that's me, or at least that's what I want to be, with the little tweak I mentioned above.
The article I referenced talks about making sure that you are 'really' good at one thing and then starting to branch out and understand other related areas. That's kind of what happened to me when I was on the BA top up course. I had already started to look at Unreal Blueprinting and, out of the class I was in, I seemed to click with it the best. That naturally made me the logic guy in all of the projects I worked on and started me down the path of being good at that 'one thing'. I am happy with that and have not deviated much from that role at all. For now.
My Vertical axis (depth)
This is logic. To me, that means Unreal's Blueprint at the moment. I have decided that I want to take this to the next step, and that means learning to code in C++. There are two reasons for this: I want more access to the Unreal engine, and I want to learn something that's industry standard and portable.
But, having had enough experience of creating mechanics, bolting some more on, and some more, attaching this to that (and it shouldn't go there) and generally creating a Frankenstein's monster of a prototype in Serial Link, it's clear that learning a language is not enough. I am now just as concerned with learning how to think properly as I am with understanding the pointy end of pointers and a bit of bit shifting.
The solution to this problem is Design Patterns and the SOLID principles, studied alongside the C++ language itself. It's worth mentioning at this point that the development of Serial Link, a Blueprint-based project, is my stomping ground for exercising many of the goals that I have, particularly Version Control, Design Patterns and the SOLID principles. So I will be posting about that project and linking the work done there to those goals. For one thing, I want to take Serial Link forward professionally (I haven't decided what that means yet), and for another, it's the most developed project I have ever worked on and so is the place where I most often find the sort of challenges my SMART goals cover.
The SMART goals that are associated with this axis are:
This one is much less fleshed out on the MA, and I'm not sure that's a good thing, so maybe I need to have a think about that. The area in which I feel most limited and restricted when developing Serial Link is animation. I would really, really like to learn how to animate 3D models. I have some interest in modelling, but it is limited, as there is just so much out there in terms of assets and tools that create them. But I have found that although, yes, there are also animation assets available, and I already own a really good one, when it comes to the unique, special bits of the game I need custom animation more than anything else. I think I have just found the need for a SMART goal.
Audio is another area that I would like to learn a bit more about, but I am not finding at the moment that my lack of knowledge here is a serious limitation. I have some tasks on the Serial Link Kanban board to sort out some of the audio in Serial Link (ducking, occlusion and some other things), and once I get to those tasks I will learn, and be able to make immediate use of, those things. I think a SMART goal for that would be over the top, and I am confident that I will just pick it up in this case.
Particles are another area that I wish to understand more deeply, and on reflection, yes, it is a little limiting not knowing how to put them together properly. I think that it would be quite sensible to look at this, as what I do know about the system is that it works in a programmatic and logical way. That means it's not much of a jump from where I am at the moment in terms of what I am learning and how I am thinking. I do want to understand more about the artistic side of things too, and how to create, say, my own blood splatter or muzzle flashes and so on. I considered getting a tablet a little while ago, and I just might do that, as I can draw a little and I have used one before to draw this…
I think that I have just found another SMART goal for learning how to create basic particles.
I will be more sensitive to making sure that the content I post is linked explicitly to the goals that I have set up while on the course. I will set up SMART goals for learning about animation and Unreal's particle system. I think that this post clears up some things for the reader, and I am happy that it has allowed me the space to think about the other, less obvious areas of game development that I need to look at.
This talk was recommended to me by one of the fine chaps at the Games Academy, Al, as a way for me to get my feet wet with version control. Well, I can report that my feet are indeed wet. The problem I was facing was one that I have spoken about before, in that I did not use version control, aside from copying the project file of whatever game I was working on and then working from that copy. The main issue there is memory, and it cost me more space than my wife's shoe collection. No, I don't know why she likes army boots either; shall we move on? There was also the small issue of perhaps losing a whole day's worth of development should I introduce some issue that breaks the whole thing. The best I could do in that case would be to copy the project from the last time I knew it was 'good' and work from there. And then, of course, I learned the really hard way (I do that sometimes) and lost my game jam project. Time for some version control. To be honest, I don't know why I didn't do it earlier, as I had been introduced to the concept during a Udemy course I had been following. But I think that I got a little overwhelmed with having to learn C++, version control and Unreal all at the same time. I just don't think I was ready for all of it. Also, and I hate to point the finger but it's true, we were not advised to use source control during the BA top up course. Now that I have a little more experience, I think that is a mistake and it should be introduced in the first week, as I have discovered that it really is an industry standard practice.
So, I have a SMART goal for this that I have talked about recently, so I won't go into that here, and this talk is sort of outside of it really, but I really like to know how things work behind the scenes, and I know that I will feel more comfortable using Git in particular if I understand the how and why a little bit better. The rest of this post is very much straight from the presenter's mouth, and I am not passing it off as my own work (unless it's worth a ton of marks, then yeah, all me). I was just trying to keep up with her and document my own understanding of what she was talking about. It was very interesting and I have learned a lot about what goes on behind the scenes. Sorry about the bullet style; I don't write straight to the journal, I use WorkFlowy, as reordering things is so very easy!
A Little History
The Source Code Control System (SCCS)
A delta table
A data structure that held deltas between changes in files
A set of control and tracking flags
Set permissions and release control on specific files
A set of control records
Keep track of insertions and deletions at the file level
Revision Control System (RCS)
This was not distributed and was only used on a single machine
Concurrent Versions System (CVS)
This worked on the modern client/server model
However, this leads to the merge conflict, which happens when different developers edit the same file at the same time and the changes need to be reconciled. How did CVS handle it? CVS enforced that the developer had to pull the most recent commit from the server before they could commit their own work. This essentially stops merge conflicts from happening and, from the sound of it, was a first come, first served approach to development changes.
Git is decentralised. You can have multiple 'first class' copies of a repository in different places. Git also introduced a different way to resolve merge conflicts: you can work on an old version, pull the new version in, and reconcile the two versions at that point.
GitHub is separate software from Git
Under the Hood
What is an object?
It's a data type that has
A tree is a reference to multiple blobs or multiple trees
Commit objects have
A reference to a particular tree
A timestamp
A committer, the user that made the commit
Commit message that goes along with it so that the developer can explain the changes made
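As a quick way to see those fields for yourself, `git cat-file` will print any object in the database. This is a minimal sketch using a throwaway repository (the name and email are placeholders):

```shell
# Create a throwaway repo so we have a commit object to inspect
cd "$(mktemp -d)"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "first commit"

# Ask Git what type of object HEAD points at
git cat-file -t HEAD

# Print the commit object: its tree reference, author,
# committer (with timestamps) and the commit message
git cat-file -p HEAD
```

The `-p` output shows exactly the fields listed above: a reference to a particular tree, the committer with a timestamp, and the message.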
‘cut’ a release
create a tag
contains many of the same details that can be found in the commit object
use command ‘file’ to get more information about them
Clone the repository first
Create a branch, lets call it the Feature branch
What's in the HEAD file?
The 40-character nonsense that it shows is called a hash
You can print out the content of the file that is associated with that hash
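For example, a small sketch of reading HEAD and resolving it to a hash, again in a throwaway repository (the identity details are placeholders):

```shell
# Throwaway repo with one commit so HEAD has something to point at
cd "$(mktemp -d)"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "first commit"

# HEAD is just a text file holding a reference to the current branch,
# e.g. "ref: refs/heads/master"
cat .git/HEAD

# Resolve that reference to the 40-character hash it points at
git rev-parse HEAD

# Print the content of the object associated with that hash
git cat-file -p "$(git rev-parse HEAD)"
```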
When you stage a file, a new blob object is created that corresponds to that file; a blob is used to represent file data. You can fetch the hash associated with the blob object and print it. When you stage a whole project, a new blob object is added to the .git directory for every file that you have changed
When you make a commit the directory name is the first two characters of the hash and the file name the other 38
2 reasons for this
Some operating systems place a limit on the number of files that you can have in a directory
Because of the way that some operating systems search for files, it's faster to split the files up over multiple directories.
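You can watch that naming scheme in action by staging a file and then looking for its blob under `.git/objects`. A minimal sketch in a throwaway repository:

```shell
cd "$(mktemp -d)"
git init -q

# Staging a file writes a blob object into .git/objects
echo "hello" > notes.txt
git add notes.txt

# The blob's hash: the first 2 characters name the directory,
# the remaining 38 name the file inside it
hash=$(git hash-object notes.txt)
dir=$(printf '%s' "$hash" | cut -c1-2)
rest=$(printf '%s' "$hash" | cut -c3-)
ls ".git/objects/$dir/$rest"

# cat-file recovers the original content from the blob
git cat-file -p "$hash"
```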
The commit object, because of its connections to other blobs and trees, stores the state of the entire repository in one data structure.
Git has compressed the ‘loose objects’ that were created in the repository into a pack file.
The heuristic used to execute the compression is
The system goes through the directories and sorts files by type
Commit, blob, tree or tag
Then they are sorted by name
Then sorted by size
Because files tend to grow over time, with additions to the code base and refactoring, ordering by size is a good way to order by recency. You should have the most recent changes at the top so that the system can figure out the deltas from them.
Then the system uses a sliding window to compute the deltas between those adjacent objects. I don’t know why, but this is just really cool.
Linus’s Law – as time passes by the size of the file grows
If it detects only very small changes, instead of storing both objects, it will store one and a delta to the next
I think that they are compressed at this point? The presenter talked about compression and from what I can see this is the most obvious place for it to happen.
Then the index file is created, which is used to resolve a hash to the file it represents in the compressed pack file via a pointer in the index.
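You can trigger the packing yourself with `git gc` and inspect the result. A small sketch in a throwaway repository (identity details are placeholders):

```shell
cd "$(mktemp -d)"
git init -q
echo "some content" > file.txt
git add file.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "c1"

# Before packing: loose objects live in 2-character directories
find .git/objects -type f

# gc compresses the loose objects into a pack file
git gc --quiet

# The result: a .pack file (the compressed objects and deltas)
# and a matching .idx file (the hash-to-location index)
ls .git/objects/pack/

# verify-pack lists every object in the pack with its size, and shows
# which objects were stored as deltas against another object
git verify-pack -v .git/objects/pack/pack-*.idx
```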
If the Master branch has not changed, then the Feature branch created earlier can just be merged into Master using a fast-forward, which simply moves the Master branch pointer up to the most recent commit on the Feature branch.
Recursive Merge Strategy
You will end up with a merge commit at the head of the new branch
After merging, you can query the merge commit object and find that it has 2 parents: pointers to the commits at the tips of the Feature branch and the Master branch it was merged with.
Instead of creating a merge commit, the system reconciles the differences into a single linear history.
When you make a merge commit, you maintain the explicit branching, which means that the commit has an awareness of the two branches it came from. When you do a rebase, you favour having a linear history.
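Here is a small sketch of the fast-forward case in a throwaway repository: after the merge, the base branch and the feature branch point at the same commit, so no merge commit was created (passing `--no-ff` to `git merge` would force a merge commit with two parents instead):

```shell
cd "$(mktemp -d)"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "base"
base=$(git symbolic-ref --short HEAD)   # master or main, depending on config

# Do some work on a feature branch
git checkout -q -b feature
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "feature work"

# Fast-forward: the base branch pointer simply moves to feature's commit
git checkout -q "$base"
git merge -q feature

# Both branches now resolve to the same hash: no merge commit was made
git rev-parse "$base" feature
```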
Git represents key information as objects stored in the file system.
Git compresses loose objects into pack files to increase space efficiency using delta compression
Rebases and merges differ in whether they give preference to maintaining a linear history or explicit branches