Creating Effective User Stories

Very organised combat.

I have been interested in this sort of thing. I mean that I have been interested in finding some structure at the level that is just a step back from the Kanban board, just a bit further away. I find that when I think about planning a project, I think in a blend of ‘wouldn’t it be cool if’ and ‘we need to implement’. I have a good grasp of breaking down the implementation side of things, but I think that I could benefit from learning a bit more about structuring the part of the project that leads to how the user should interact with some feature, and how they should feel about it. That last one, ‘feel about it’, is very important to me, as I already understand that games, once boiled down, are really all about feelings.

The Pluralsight course can be found here.

The MA content requested that I watch the video on Personas from the Pluralsight course ‘Creating Effective Stories’ and, having watched it, I think that I could get a lot of benefit from watching the whole course, so that’s what I am going to do.

Thinking in Stories

  • A user story is a placeholder for a conversation. That’s a great quote and is really helping me to understand what this is about.
  • Design is deferred to the Last Responsible Moment, giving the team the time to find out as much as possible about the real requirements.
  • The key to the flexible, high-level story is that it encourages the team to interact with the customer and find out more about the requirements as the quickly implemented builds are delivered.
  • It’s important to remember that the Acceptance Criteria should be created and confirmed as the first step of starting development work on the story, before any coding begins. More on that later.
  • User Stories: The 3 Parts
    • 1. As a …
    • 2. I want to …
    • 3. So that …
    • This tends to reduce the frivolous requirements that a plain feature list can invite.
    • As a player, I want to be able to shoot bad guys, so that I can level up and get more abilities.

Types of Stories

  • What are Roles?
    • They represent groups of users rather than individuals
    • They are derived from the characteristics of the group
    • From the example
      • Users – Customers
        • Don’t use the word ‘users’; it encompasses everyone, really
      • Service People
      • Owners
  • How to evaluate the stories using INVEST
    • Independent
      • Independent: They are self-contained and don’t rely on other stories. This is very useful because User Stories may, and will, need to move around in the backlog as their prioritisation changes, and it helps if another set of stories doesn’t have to move with them. I can imagine that in game development this may be difficult, as one system builds on another, but I am keen to see how this can be resolved. It actually links in a little to the Single Responsibility principle in terms of how I should think about it.
    • Negotiable
      • Negotiable: This means that they should be available for change as the true requirements are honed in on, but that they should stabilise as the team gets closer to the Last Responsible Moment, before implementation begins.
    • Valuable
      • Valuable: This makes sure that the story has real value to the User defined in the story, removing superfluous requirements and wishful thinking from the backlog and therefore the application. This also prevents the occurrence of too many technical requirements and makes sure that there are ‘downstream’ benefits to the User defined in the story.
    • Estimate-able
      • Estimate-able: The team should know enough about the story and it should be small enough so that they have a good idea of the work involved in delivering it. Note that if it is not Estimate-able, a Spike story may be required. I talk about this later.
    • Small
      • Small: This is linked in with the ability to estimate the work required to implement the story and serves to reinforce that its scale is directly related to that outcome.
    • Testable
      • Testable: The story should be defined enough so that developers could define tests that could be used to prove that the value in the story is being delivered or be able to highlight where that is not the case.
  • Epics
    • This is a User story that is too large to be completed without further breakdown into separate stories and consideration of its dependencies. An Epic could be the story that describes the entire value proposition, overlaps several smaller stories or is just too vague due to the assumed, smaller activities that would be required to have it work properly.
    • What are the qualities of an Epic?
      • The Epic story captures a complete workflow towards a goal, and can be broken down into a beginning, middle and end.
      • It’s not deliverable until all of its stories are complete.
      • For a game that Epic might be something like: As a player, I want to experience a great story and complete the adventure using puzzle solving skills, so that I can experience satisfaction when I reach the end.
  • Themes
    • Themes are a way to think about stories that are related but don’t need to be completed together. This is the crucial difference from Epics, in which the Epic is not delivered until each story is complete. A good example the presenter gives is performance. In games that could relate neatly to delivering a high FPS, let’s say. There may be multiple areas of the game to consider while completing this work, but each part of the game can contribute to the improvement in performance individually, which would make the collection of stories a Theme and not an Epic.

Personas

  • What are they?
    • What is a persona? It’s a fictional character that can represent a person that is going to use the product we are developing. They are different from Roles, covered earlier, in that they represent individuals instead of groups, and they can give the roles depth. The development team should be able to relate to the invented persona more easily, with an aim to getting closer to what the customer wants from the product. Create personas for roles that are coming up in conversation frequently.
  • Relationship Mapping
    • This allows a visual representation of who is using the product. It can identify new users and their influence on the product. It also shows the interactions between the users to reveal insights about what each of the users, or roles, need from the system and that should lead to a better design.
  • Choosers Vs. Users
    • Choosers are the ones that are paying for the system.
    • Users are the people that will be, yep, you guessed it, using it.
    • In game development, this could be the publisher vs the player. The publisher would want financial return from the ‘system’ (the game in this case), and the player (user) wants the fun. It’s common that the needs of the choosers are placed above the needs of the user, as the choosers have greater influence on the system at the time of design and are the ones signing it off. This can be frustrating for the poor user/player and cause the product to fail, as without users, what’s the point?
  • How to create a Persona?
    • Give them a name, a photo (of a real person doing something that matches the feeling they should get from the product), the Role they belong to and a description of what they want to get out of the product. You can also include a ‘key quote’ from the user that explains what they want. If appropriate you could add a title and demographic information. Don’t use pictures of celebrities; the team will already have preconceived ideas about who that person is. Make sure that the Personas are visible to the team and keep them up to date should their needs change.

Splitting Stories

  • Why would you need to split it?
    • Larger stories are more difficult to estimate, can be too narrow in the expertise required, causing one person to have the bulk of the work, and can be much more noticeable if not completed on time.
    • Smaller stories offer more flexibility, are easier to negotiate with the product owner and can save a lot of work as the additions to the story that were planned may never actually be needed in the end.
  • When might you want to split a story?
    • When the team gets nervous about estimating completion times.
    • When the proposed completion times and the actual times don’t match.
    • When the number of Story Points reaches some number, say 3. I didn’t know anything about Story Points and have found another resource I hope to cover here soon.
  • How to split stories?
    • Vertically
      • Don’t split the task ‘horizontally’, meaning that you should not define a story like ‘build database’ when there is also a UI due. Better to define a story ‘vertically’, which allows some database and some UI to be built so that the team is left with an ‘end to end’ piece of work when it’s completed. This is also easier to test, as it’s a complete feature at that point, and allows the team to maintain the flexibility demanded by Agile. But it can lead to an initial drop in productivity as team members have to learn about areas of the project that they are unfamiliar with.
    • Finding Seams
      • A Seam is a logical separation within a larger story. There are a few different types presented here:
        • Workflow
          • This is good for when there are many steps in the workflow: move to weapon, select weapon, equip weapon, target with weapon, fire weapon and so on…
        • ‘illities’
          • These are the qualities of the product that are not so obvious. For instance
            • Security
            • Reliability
            • Scalability
        • Positive and negative cases
          • Splitting the story along whether some action in the system is successful or not. The example given is successful vs. failed login attempts. In a game it could be getting to the end of a level, having outstanding tasks and being able to notify the user.
        • Third party dependencies
          • This makes sense if there are other external parties involved who are not part of the core team or who are remote. The example given is that of a UI artist who will only be focused on that part of the product. This also makes sense as contractors will be less influenced by the core team’s day to day needs and changing priorities, and more concerned with delivering the work that was agreed within some previously agreed window or deadline.
        • Roles
          • This one is straightforward and is just splitting up the story based on the roles, or stakeholders, within it. The only consideration is that the new stories should not rely on each other.
        • Spikes
          • A Spike is a special case of story that is useful when the team is tackling a story that requires some skill or technology that the members of that team are unfamiliar with. Because of this lack of experience, it’s not possible for the team to give a good estimate of how long the work will take to complete. A Spike story can be created that uses time boxing and allows the developers a chance to explore the problem, with the specific goal of learning enough about how it will be solved that they can give a completion time estimate.

Getting to Done

  • Meeting the expectations
    • Customer
      • The customer will require qualities from the product such as it being easy to use, reliable, meeting the latest specification, clearly adding value and so on. This is implemented using Acceptance Criteria which are best extracted from the product owner using conversation and questioning. These can be created along with the story and can be written on the back of a sticky for instance.
      • Acceptance Criteria
        • This allows the team to go back to the high level user story and fill in the gaps to make it more detailed and specific. As mentioned above, they should be created as the first step in actually executing the story and should be fully accepted before development work begins. The format for this is
          • Given
            • What has to have happened in the product first, what is the prerequisite?
          • When
            • Some particular action happens
          • Then
            • Some expected result
        • Given that the Soldier is patrolling, when the Soldier sees the player, then the Soldier shoots at the player.
        • How to get it right
          • Be specific and make sure that everyone is clear about what the criteria means. They should be measurable so that the team and the customer are clear when a criteria has been met. It must be realistic and within the scope and constraints of what the team can deliver.
    • Developer
      • This allows the team to be happy that the features are robust and easy to maintain and develop in the future. The Acceptance Criteria are often called ‘Done Rules’ and are also common on Kanban boards; I have some of my own on the Serial Link board. These should be created by the team, allowed to evolve, be made public and kept informal, which encourages the feeling that they can change if needed.
      • Done Rules Common Examples
        • Testing
          • Has the code gone through automated testing and passed?
        • Peer reviewed
          • Has someone other than the original developer looked at the code, or was it pair programmed?
        • Deployed
          • Is it staged and checked in for deployment, and have any required changes to the core code base been made?
        • Has the Product Owner reviewed the feature and agreed that it meets their expectations?
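The Given/When/Then criterion from the Soldier story earlier could be expressed as a tiny executable acceptance test. This is a hypothetical sketch only: the `Soldier` type, its states and the `OnSeePlayer` event are all invented for illustration, not part of the actual project.

```cpp
#include <cassert>

// Hypothetical sketch: 'Given the Soldier is patrolling, when the Soldier
// sees the player, then the Soldier shoots at the player', as a test.
enum class SoldierState { Patrolling, Shooting };

struct Soldier {
    SoldierState state = SoldierState::Patrolling;

    // When: the soldier sees the player...
    void OnSeePlayer() {
        // Then: ...the soldier shoots at the player.
        if (state == SoldierState::Patrolling)
            state = SoldierState::Shooting;
    }
};

bool AcceptanceCriterionMet() {
    Soldier s;                                 // Given: the soldier is patrolling
    assert(s.state == SoldierState::Patrolling);
    s.OnSeePlayer();                           // When
    return s.state == SoldierState::Shooting;  // Then
}
```

The point is that a well-written criterion maps almost one-to-one onto a test, which is exactly what the Testable part of INVEST is asking for.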

In Conclusion

I really got a lot of value from this course and have a much clearer understanding of how to set up an Agile project from the perspective of the value that needs to be delivered rather than the tasks that need to be completed. Although I found the course a little dry, the content was worth the time. I will keep these things in mind for the next project that I find myself in.

Serial Link Commit 14: 16th March 2019

Line tracing for decal locations.

Continuing the work on making a component with a Single Responsibility, there have been some updates to the Bleeding Component. There is a link to the SOLID principles SMART goal here.

So that this component is as portable as possible, I have used variable types that are as high level as I can. Ultimately, I suppose that I could pass the Bleeding Component a location vector only, and that might be the way to go, thinking about it. However, this is an Unreal BP and I would only be putting this on an Actor, as it’s an Actor Component, so I really don’t have much of a choice there! My point is that the only things I am passing to the events in that component that are specific in any way are Actors. Actors are defined in Unreal as ‘anything that can be represented in the world’. I think in this case that is high level enough. It makes the component very flexible, and I’m happy that it would be able to be added to any type of character, robot, turret, vehicle or anything else, and work fine. The event prototypes are:

The ‘Projectile’ input is an Actor object reference and not an actual projectile blueprint class. That means that anything that’s in the game world should be able to be used as the initial location for the trace.

Again, the Actor is just an Actor, but I think that’s a bit more obvious here! This event just traces straight down and finds the first positive response on the Valid Blood Surface channel.

This one is a little more specific, but I don’t think that it’s a problem. It needs a hit result to break inside the event, and I am comfortable that a hit result is a standard UE4 data structure and would be valid in most situations. This could be changed to use the impact point and impact normal, which, now that I type this, might be worth considering. That’s what I like about journaling: it’s that moment when I am thinking something through in order to explain it and spot some obvious improvement that could be made. It’s worth mentioning that the Blood Spray is totally optional, and if the user does not specify one, one will be chosen from the array that should be populated when the component is used.

This event is the one that would be called by the actor using the component when that actor registered a hit event and determined in its own logic that a blood splat should be placed there. Again, this may be able to be slimmed down, but to be fair, the hit event that triggered it in the first place would produce a Hit Result, so it may as well just be passed right in, unless I find out that’s really expensive for some reason.
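The event prototypes above could be sketched in C++ roughly like this. Everything here is a simplified stand-in: `Actor`, `HitResult` and the method names are invented for illustration and are not the real UE4 types, but the shape of the API (generic Actor inputs, optional blood spray) matches what the Blueprint does.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Simplified stand-ins for the UE4 types, invented for illustration.
struct Vector3   { float x, y, z; };
struct Actor     { Vector3 location; };
struct HitResult { Vector3 impactPoint; Vector3 impactNormal; };

class BleedingComponent {
public:
    // Anything in the world can be the trace origin: only an Actor is
    // required, which is what keeps the component portable.
    Vector3 TraceDownFromActor(const Actor& actor) const {
        // Trace straight down; return the first blocking hit on the
        // 'Valid Blood Surface' channel (simplified to a ground plane here).
        return { actor.location.x, actor.location.y, 0.0f };
    }

    // The blood spray is optional: if none is specified, one is chosen
    // from the pool populated when the component is used.
    std::string PickBloodSpray(const std::string& requested = "") const {
        if (!requested.empty()) return requested;
        return sprayPool.empty() ? "" : sprayPool.front();
    }

    std::vector<std::string> sprayPool { "BloodSpray_Default" };
};
```

Because the component only ever sees Actors, the same logic could sit on a character, a robot or a vehicle without any casting.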

Commit

  • Drawing blood under the soldier when he is hit now happens from the bleeding component.
  • Drawing blood from projectile penetration from the bleeding component.
  • Enabled all of the physics bodies for the UE4 guy so that they generate hit events. I was only using the chest and head and then testing the velocity of the hit, but if he hit with his shoulder for instance, he would lose a lot of that force and not draw the blood when the chest did make contact with the surface.
  • Changed the penetration check so that if it does not draw blood, blood is drawn underneath the location of the projectile instead.
  • Added the suspended in the air effect to the weapons so that they can be plucked from the air.
  • Updated the weapon suspend in the air feature so that they can rotate a little; it looks more natural.

Serial Link Commit 13: 16th March 2019

Those red ticks mean I need to commit. Pffft.

This is the problem. The Soldier actor is a crazy mess of tons of functionality; some of it is right, some of it is wrong and some of it just needs the plain ol’ delete key. But I have a SMART goal regarding SOLID principles that is tackling this atrocity of a class. Ok, Blueprint. You know what I mean. I am applying what I am learning on the course, and my own research, to fixing as much of it as I can.

Single Responsibility Principle

This is the idea that each component, class, actor and so on is responsible for one, defined, activity. I understand this to mean that it should make common sense that a Weapon object could not reach into the Character object and make changes to the character directly, that sort of thing. If the weapon was heavy and needed to change the character’s speed, then there should be some other method by which that can happen, without the Weapon being made to have a hard reference to the character, or some logic that requires the Weapon to ‘cast’ to a very specific implementation of a character. Now, I may have some of this wrong at the moment, and that’s because I am still very much a beginner in thinking about these kinds of design choices, but I thought that I would start with something easily defined, not super complex, but that could be very portable if designed well.
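One common way to avoid that hard reference is to have the weapon depend only on a narrow interface, never the concrete character. This is a minimal sketch of that idea; the interface name, the multiplier value and the `HeavyWeapon` type are all hypothetical, invented to illustrate the decoupling rather than taken from the project.

```cpp
#include <cassert>

// Hypothetical narrow interface: the only thing a weapon is allowed to
// know about its owner.
struct IMovementSpeedModifier {
    virtual ~IMovementSpeedModifier() = default;
    virtual void ApplySpeedMultiplier(float multiplier) = 0;
};

struct Character : IMovementSpeedModifier {
    float speed = 600.0f;
    void ApplySpeedMultiplier(float m) override { speed *= m; }
};

struct HeavyWeapon {
    // The weapon slows down *whatever* implements the interface --
    // a character, a robot, a vehicle -- without casting to any of them.
    void OnEquipped(IMovementSpeedModifier& owner) {
        owner.ApplySpeedMultiplier(0.5f);
    }
};
```

In Blueprint terms this is roughly what a Blueprint Interface gives you: the weapon calls the interface message and never needs a reference to a specific character class.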

The Bleeding Component

I decided to remove the following features from the Soldier actor and implement them in a component whose ‘single responsibility’ is producing gore in the world. It’s a dark place to start, I know; let’s get on. The functionality that existed was:

  • Spray a blood particle from a location
  • Draw blood directly under a projectile hit location
  • Draw blood that would travel due to projectile penetration
  • Produce gore at a location, usually in response to a head shot or explosive going off.

It’s not quite finished, but the majority of this is now being handled in the Bleeding Component. This means that there is one place to edit the blood samples that are used for decals, one place to handle the tracing and other environment querying logic, and so on. The other benefit is that it does not have to be just red, human-like gore. It could just as easily be blue dragon gore in another game, and that’s the point. I have really tried to take the Single Responsibility principle to heart and think in those terms now. I am very keen to start the course I have found on Pluralsight. I talk about that in the SOLID principles post.

Commit

  • Added logic to be able to track if the bullet casings are suspended in the air or not.
  • Added a new sound set of light metal taps that can interact with the weapon when it taps the casings when the casings are suspended in the air.
  • Used the velocity that the casing is travelling with to determine how loud the audio should play. This worked well and I would like to apply it to all the other sounds that are ‘hit’ based.
  • Added a small delay to triggering the suspended in air effect so that the casing can leave the barrel a little further. It looks really cool and gives the player much more of a chance to notice the feature.
  • Came across a node called ‘get reflection vector’ and have used it on the kid’s bullet shield to properly deflect bullets; it’s better than the implementation I was using before.
  • Created a ‘BleedingComponent’ that I hope to use to centralise all the blood effects that are spread throughout the soldier actor. That way I could make anything ‘bleed’, like a robot that bleeds black oil and so on.
  • Have the soldier’s body impact blood working from the new bleeding component.
  • Have the soldier’s blood spray from wounds working from the bleeding component.
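The ‘get reflection vector’ node mentioned in the commit notes implements the standard reflection formula r = v − 2(v·n)n, where n is the unit surface normal. A minimal sketch with an invented vector type:

```cpp
#include <cassert>

// Minimal vector type for illustration.
struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b) {
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Reflect incoming direction v off a surface with unit normal n:
// r = v - 2(v.n)n
Vec3 Reflect(const Vec3& v, const Vec3& n) {
    float d = 2.0f * Dot(v, n);
    return { v.x - d*n.x, v.y - d*n.y, v.z - d*n.z };
}
```

For a bullet hitting a shield, v is the bullet’s travel direction and n is the shield surface normal at the impact point; the result is the deflected direction.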

Serial Link Commit 12: 15th March 2019

Hanging in the air…

Looks like everything is falling but it isn’t!

I am very, very happy with this session. There are two reasons for that. The first is that the feature, which I will talk about in a second, is just really cool. The second is that it’s a solid example of the learning I have been doing about programming patterns at play. I have a SMART goal about learning design patterns that you can see here.

The observer pattern

One of the most useful patterns that I have learned about is the observer pattern. When I first started learning Unreal and the blueprint scripting system I could not get my head around event dispatchers (or interfaces for that matter; both I am using prolifically now though, I’m pleased to say). Well, it turns out that event dispatchers enabled me to use the observer pattern before I knew that was what it was called. In short, this pattern is based on one object registering with another, so that upon some condition being evaluated or some event being triggered, that actor can let the registered actor know that ‘something’ has gone on. That something could be anything: a boss has been defeated, the player is at a certain location or, as in my case, the game is coming out of slow motion. Anyone not familiar with event dispatchers, read on and I will tell you how I am using them in the logic too.

How I have implemented it

I have implemented this feature on a few things now: Soldiers, weapons, the explosive mine and bullet casings. Although I think in this commit it’s only the Soldiers, as I am catching up a touch with the journal. I am going to talk about and show how I did the bullet casings. The first problem is that every single time a bullet casing is spawned (sorry Jamie, I will pool them, I promise) it, as the bullet casing actor, needs to know if it should hang in the air or just fall as normal. Here is the logic from the shell casing actor that happens as soon as the little blighter is born:

The first part of making that happen is to ask the game state if slow motion is active. This is a bool that I keep updated from the Slow Motion component I created a while back to handle all of the time dilation. If the game is in normal time, nothing happens. However, if it is in slow motion, then the first thing that happens (and this is the observer pattern at play) is that the shell casing registers itself with an event dispatcher in the Serial Link Game State using that Bind Event To node. From there I created a custom event that fires when the event dispatcher fires. You can see I called it Drop Shell Casings.

This is the event dispatcher set up in the game state:

The one that we are registered with is the one that will be called when the game comes out of slow motion. Each actor that has been ‘bound’ to that event will get a call from the game state at the right time to say ‘hey, that thing you were listening for? It’s just happened’. So, when it’s fired, my custom event will fire and the shell casing will be updated. When that happens, I am unbinding from the event, as that particular shell casing never needs to know about the state of the slow motion again.
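The bind/broadcast/unbind flow described above can be sketched in plain C++. This is a stripped-down stand-in for the Blueprint event dispatcher, with all names invented for illustration; UE4’s real delegates work differently under the hood, but the observer pattern is the same.

```cpp
#include <cassert>
#include <functional>
#include <map>

// Minimal stand-in for a Blueprint event dispatcher.
class EventDispatcher {
public:
    using Handle = int;

    // 'Bind Event To': register a callback, get a handle back.
    Handle Bind(std::function<void()> callback) {
        listeners[nextHandle] = std::move(callback);
        return nextHandle++;
    }

    void Unbind(Handle h) { listeners.erase(h); }

    // 'Call': notify every bound listener.
    void Broadcast() {
        // Copy first so listeners can safely unbind during the broadcast.
        auto snapshot = listeners;
        for (auto& [h, cb] : snapshot) cb();
    }

private:
    std::map<Handle, std::function<void()>> listeners;
    Handle nextHandle = 0;
};

struct ShellCasing {
    bool hangingInAir = true;

    void BindToSlowMotionEnd(EventDispatcher& onSlowMotionEnd) {
        handle = onSlowMotionEnd.Bind([this, &onSlowMotionEnd] {
            hangingInAir = false;            // drop the casing
            onSlowMotionEnd.Unbind(handle);  // never needs to know again
        });
    }

    EventDispatcher::Handle handle = -1;
};
```

The casing subscribes once at spawn, reacts once when slow motion ends, and unsubscribes itself, which is exactly the lifecycle described above.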

What I am using it for

This is quite simple really. I am only using this set up to determine what the values for linear and angular damping should be at the time that the shell casing is created and then I am using the observer pattern so that the shell casing can be updated with the normal values for those fields if needed. Very high numbers for those two properties lead to the effect of the air feeling like tar to those actors.
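As a back-of-envelope illustration of why very high damping reads as ‘tar’: with exponential damping, velocity decays per step as v *= exp(−damping · dt), so a large damping value kills almost all motion within a few frames while a normal value barely touches it. The constants here are illustrative only, not UE4’s actual integration.

```cpp
#include <cassert>
#include <cmath>

// Simulate a few fixed steps of exponential velocity damping.
float VelocityAfter(float v0, float damping, float dt, int steps) {
    float v = v0;
    for (int i = 0; i < steps; ++i)
        v *= std::exp(-damping * dt);
    return v;
}
```

Ten 16 ms frames at a damping of 50 leave almost no velocity, while a damping of 0.1 leaves it essentially untouched, which matches the ‘air feels like tar’ effect versus normal falling.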

What I intend to do with it

I think that one of the best ideas I have had with this, but have not had the chance to try out, is linking it up with the combat teleport feature for the player. I would really like a simple way to have the player just hang in the air when teleporting, so that should he teleport over a group of enemies, it’s not gravity that brings him down but choice, or running out of power for that ability. I think that being able to hover in the air like that would lead to some really cool and satisfying combat scenes, in which the player could teleport up, take out some guys, teleport back to the floor and carry on. I think that I would start by playing around with the player capsule, as I am not sure that updating the damping settings on the mesh would work, because the mesh is under the hand of the animation system at that point. I would also try disabling gravity for some period of time if it is confirmed that the player is in the air, although that could lead to the character being affected by other forces like radial explosions…

Commit

  • Used the logic that controls whether the ‘body hitting the floor’ foley gets played to limit how many times the blood decal can draw. It’s hacky but it works well enough for now.
  • Updated the damping on the gore pieces so that they don’t roll around so much.
  • Stumbled across something very cool. It’s a workaround for having physics bodies look like they are in slow motion when they are not.
  • Updated Ragdoll character event with the ability to set the linear damping and the angular damping to a very high value, this makes the air feel like tar to the skeletal mesh.
  • Bound to an event dispatcher in the slow motion component so that the ‘ragdoll character’ event can be notified when the player is out of slow motion, then, the normal values for the damping are re-instated. This makes the characters who were trapped in the tar effect, fall to the ground as normal.
  • Updated the explosive mine so that if it goes off when slow motion is being used, the victims hang in the air.
  • Updated the Inferno-mine so that if it goes off when time dilation is active, the victims hang in the air.
  • Updated the Fling ability so that if slow motion is enabled when the body hits the wall, the victim goes into ‘tar mode’ and hangs there until the slow motion ends. They are returned to normal when the time dilation goes back to normal.
  • Casings for the SMG now hang in the air in slow mo!!! When time comes back to normal they all drop to the floor at the same time and it looks epic.
  • Also noticed that the bullets casings can be moved in the air with the gun, as that has physics enabled. So the player can tap the shell casings as they hang in the air!

Serial Link Commit 11: 15th March 2019

Lets talk about blood, baby, lets talk about you and me…

Trace channels

This commit was a good one. I learned a ton and I was also able to notice that I think about this stuff more clearly now. I really enjoy recognising that my skill level has moved on; it’s very satisfying. I mention it because it’s one of the pleasures of being involved in something like game development, where the pit of available skills to learn is so deep that you could just go on forever. Sometimes, I think it’s very rewarding and motivating to tackle some problem that you tackled a while ago and find that your tool kit has grown in the meantime. That’s what happened to me in this development session. The main issue I was solving was that the line traces being used to work out where the blood decals should go were returning hits from an ever-increasing array of things that I had to declare as ‘actors to ignore’. Weapons, gore pieces, other soldiers; it was a right mess really. Then, upon coming back to this issue for the first time in a while, my first thought was ‘I should set up a custom trace channel that everything blocks by default, called something like Valid Blood Surface, then I could just go around the project setting what should be ignored in that trace’.

Well, that’s what I did.

Yep, custom trace channel. Hardcore.
Tracing for the brand new trace channel. Can I get a ‘Hell Yeah’!?

Now, there were quite a lot of actors to go through and set that they, and some of their components, ignore this channel, but that’s only because it’s late to the party. From here on, when I create something new, I can set it to whatever I need. Very pleased with that.

Tracking the head location

Tracking the head…

The next issue was that because I had been using the ‘head’ bone as the starting location of a 360 degree ‘what should I draw blood on’ test, if the poor fella that the test was coming from found that his head had been smashed into a wall in that frame, the return from that process was less than desirable. It looked as if all the decals had been drawn at various rotations from exactly the location in the world that the head had been at the time. Then there was a kind of ‘slicing’ effect happening, and I needed to sort it out.

So, I decided to track the head’s position every so often so that I could use it as a location delta (I hope that my terminology is right there). Then I could decide whether or not to carry out the scan from the head location or the location delta I had tracked. I decided to always use the delta (I think; I am writing this retrospectively). This worked the vast majority of the time. I was using the last known head location and the current head location to create a unit vector that could be used as the direction that the trace would go in. This led to the blood always being in about the right place. But…

… It turned out that I could have saved myself all of that work, as I decided to play with doing things a different way instead, and that worked out much better.
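Even though this approach was ultimately replaced, the ‘location delta’ maths is worth sketching: the last known head position and the current one give a direction vector, which becomes the trace direction once normalised. The vector type and fallback behaviour are invented for illustration.

```cpp
#include <cassert>
#include <cmath>

struct V3 { float x, y, z; };

// Build a unit-length trace direction from the last known head location
// to the current one. If the head barely moved, fall back to tracing
// straight down (an invented fallback for this sketch).
V3 TraceDirection(const V3& lastHead, const V3& currentHead) {
    V3 d { currentHead.x - lastHead.x,
           currentHead.y - lastHead.y,
           currentHead.z - lastHead.z };
    float len = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    if (len <= 0.0001f) return {0.f, 0.f, -1.f}; // no movement: trace down
    return { d.x/len, d.y/len, d.z/len };
}
```

Tracing along this direction, instead of scanning outward from the stale head position, is what kept the blood in roughly the right place.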

Setting up gore to work with hit events

So, having thrown all that work away (that’s not quite how it happened, but I like a little drama), I did this instead…

I know that Jamie’s toes just curled a little at the sight of the ‘Spawn Decal At Location’ node and he’s right. I shall refactor sir, I promise.

Well this lovely bit of logic caused this mess…

This looked much more natural, and the gore seemed like it was in the right place, although a touch over the top.

I am tracking the gore pieces’ velocity when the hit event is thrown, as in earlier testing they just painted the scene red, leaving decals at every single location that they came into contact with, regardless of how brief that contact was. Tracking the velocity really helped with that. I also intend to link the size of the decal to the velocity, seeing as that value is already being tracked for the gating side of things.

There are some other things that need to change with this though. I am only using one sample of blood, and I would also like to be able to turn the blood and think about some ways that I could make it fall in a more realistic pattern. Anyway, for now that takes care of the gore pieces and stops most of the slicing effect that I was getting. Onto the bodies themselves.
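The velocity gating described above, plus the planned idea of scaling the decal with impact speed, could look something like this. The threshold and scaling constants are made-up numbers for illustration; the real values would need tuning in-game.

```cpp
#include <algorithm>
#include <cassert>

// Illustrative threshold: below this impact speed, no decal at all.
constexpr float kMinBloodSpeed = 150.0f;

bool ShouldDrawBlood(float impactSpeed) {
    return impactSpeed >= kMinBloodSpeed;
}

// Planned idea from the text: map speed to a clamped decal size so that
// faster hits paint bigger splats, without ever going off the scale.
float DecalSizeForSpeed(float impactSpeed) {
    return std::clamp(impactSpeed * 0.05f, 8.0f, 64.0f);
}
```

Gating on speed stops slow, lingering contacts from painting the scene red, and the clamp keeps extreme impacts from producing absurdly large decals.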

Soldiers causing blood on hit event

I tried to implement this approach immediately on the soldiers but had a bit of a problem. I didn't know that hit events had to be enabled from the physics asset, and so spent plenty of time trying to get ‘I’ve been hit’ printing out from the hit event on the skeletal mesh from the soldier actor, silly boy. Anyway, this was the problem…

Can you see the little checkbox…
… and this is the logic it runs on at the moment. Pretend that the ‘Bleeding Component’ isn’t there though, that came in a later commit! This is like the timeline in Pulp Fiction.

In an effort not to have a million decals in the level thanks to all the hit events, and given that testing only for the velocity would still put blood on top of blood, I needed another way to control the ‘flow’ a little more. I came up with the idea of tracking the last location where blood was drawn, and then not allowing any more unless the requested location of the new blood was at least x units away. That worked quite well and was not too jarring, but I think there might be a better way that I will talk about when I try it out.
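The minimum-distance gate described above can be sketched as a tiny stateful class. Again this is illustrative standalone C++, not the Blueprint logic itself, and the names are hypothetical:

```cpp
#include <cmath>
#include <optional>

// Plain position struct; in-engine this would be an FVector.
struct Point { double X, Y, Z; };

// Remembers where the last blood decal was allowed, and refuses new
// requests that land within MinDist units of it.
class BloodGate {
public:
    explicit BloodGate(double MinDistance) : MinDist(MinDistance) {}

    // Returns true (and records the location) only if the requested spot
    // is far enough from the last decal we allowed.
    bool TrySpawnAt(const Point& P) {
        if (Last) {
            double DX = P.X - Last->X, DY = P.Y - Last->Y, DZ = P.Z - Last->Z;
            if (std::sqrt(DX * DX + DY * DY + DZ * DZ) < MinDist) return false;
        }
        Last = P;
        return true;
    }

private:
    double MinDist;
    std::optional<Point> Last;
};
```

One design point: only *allowed* spawns update the stored location, so a cluster of rapid hits in one spot produces exactly one decal rather than slowly creeping the gate along.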

Commit

  • Added a custom trace channel called ValidBloodSurface. Now when the line traces go out to find places to draw blood, they are only looking for this. That has stopped most of the blood decal slicing effects we were getting. Changed multiple actors and component responses to allow this to work.
  • Have solved the blood decal slicing issue by tracking the head bone location every so often, line tracing from that to the actual head location while looking for surfaces that block the ‘valid blood surface’ channel set up earlier. Then, step back 50 units from the surface normal and start the actual blood splatter tracing event. Seems to have solved all the slicing problems.
  • Changed how the head position is tracked so that we are not polling for it. When the player uses the push attack, the current head location is passed to the victim soldiers for use in the blood events.
  • Updated the blood events so that it can still be triggered using the head bone without first having provided its snapshot location so that the line trace can run.
  • Changed all the components collision settings in the Base Mine class so that they ignore the ‘blood surfaces’ channel too.
  • Made a separate event for spawning gore and blood for the explosive mine as there was a problem with deciding whether or not to track the current head location.
  • Fixed a problem with the math for the blood gen location vector. Works properly now.
  • Added functionality so that the gore that is generated can leave its own blood impact decals. Need to balance this, as it’s too easy to just paint the whole level red!
  • Added functionality to allow the soldier to draw blood when he hits a surface hard enough.
  • Work is still needed on the new approach to blood and gore, but I think that the general approach of creating decals on hit events looks more natural.

Serial Link Commit 10: 15th March 2019

Decaltastic! COME ON GRAMMERLY, IM RIGHT HERE!

There isn’t really much to say other than the mighty Kanban board now says that it’s time to work on the gore again… Mwahahaha! I intend to centralise the gore into an actor or an actor component so that I can bolt it onto anything and have it bleed or produce gore. So, I would like to be able to add it to, say, a robot whose version of blood would be globulous black liquid and whose gore would be metal shards and sparking circuits. I’m sure we will see how that goes in the next posts…

Commit

  • Fixed the blood decals not showing up when only in indirect lighting.
  • Enabled the features in project settings
  • Changed all the blood decals’ translucency settings
  • Updated the translucency settings on the bullet holes decal so that they now show up under static lighting like the blood.
  • Changed the ‘screen fade size’ when bullet decals are created to 0 so that they don’t fade away when you walk away from them. It would be better to have this set to something like 20 meters, but the only value that does anything is 0, and that means that they don’t fade out at all…
  • Added a fade to the bullet hole decals to clean them up after about a minute.
  • Added a fade to the blood decals to clean them up after about a minute.

Serial Link Commit 9: 14th March 2019

Is that a pistol in your pocket… ?

There is not too much to report in this commit, just some bug fixes and some balancing really. At the time of development I was on the task ‘Weapon Basics’, and the last part of that running through the Kanban board was to balance the weapons and make sure that they were distinct from one another. I do like the way that the Kanban board has kept me totally focused on making sure that the weapon features are all finished (at least to a certain standard) before moving onto the next major task, which is ‘Gore’. The ‘Weapon Basics’ task broke down into lots of sub-tasks including…

  • Having weapon clatter when dropped
  • Being able to collect ammo
  • Being able to exchange your weapon with one on the floor or on a body
  • Being able to hold an extra weapon
  • Tying up the levelling component so that improvements and skill gains were shown on the screen via some messaging system
  • Implementing Gaussian randomness to the bullet spread
  • Implementing the grip change from rifle to pistol in the animation BP
  • Balance SMG
  • Balance Assault Rifle
  • Balance Pistol
  • Implement UI for all features above
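The Gaussian bullet spread in that list is worth a quick sketch: instead of sampling the deviation uniformly, draw the pitch and yaw offsets from a normal distribution so most shots cluster near the centre. This is just one plausible way to do it, with the spread value treated as roughly three standard deviations, and none of it is the project's actual code:

```cpp
#include <random>

// Per-shot angular deviation, in degrees, applied to the aim direction.
struct Spread { double Pitch, Yaw; };

// Sample pitch and yaw offsets from a normal distribution. Treating the
// maximum spread as ~3 standard deviations keeps nearly all shots inside
// the stated spread while still clustering them near the centre.
Spread SampleSpread(std::mt19937& Rng, double MaxSpreadDegrees) {
    std::normal_distribution<double> Dist(0.0, MaxSpreadDegrees / 3.0);
    return Spread{Dist(Rng), Dist(Rng)};
}
```

With the commit's -5 to 5 range on the assault rifle, you would call this with a spread of 5 and get shots that feel tighter than a uniform -5..5 roll, which suits the "more accurate than the SMG" balance goal.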

This links in well with the Agile methodology: by focusing on just the weapons, they are now a complete feature and it feels right to move onto something else.

Commit

  • Fixed bug where the weapons would not show the right ammo pool count because another weapon had altered it. Now shows the right count when the weapon is drawn.
  • Changed the psychic push so that it no longer works on dead people, it was getting in the way of the ammo and weapon collection stuff and felt odd.
  • Changed the direction of the shell casings from the pistol as they were working on the world direction and not the sockets.
  • Decreased the rate of fire on the Assault rifle as the weapons need to be more different.
  • Decreased the ammo in the clip for the Assault rifle to 20, as its strength is that it’s more accurate than the SMG
  • Set the spread range for the Assault rifle to -5 and 5.
  • Set the spread range for the SMG to -10 and 10.
  • Created PistolProjectile for use with the pistol as it was using the SMG ammo which made it difficult to change the damage that it was doing.
  • Made the Pistol slow but causing a large amount of damage, 80 pts
  • AI now switches between the SMG, Assault rifle and the Pistol.
  • Added new fire sound for the pistol to make it stand out from the other weapons

Serial Link Commit 7 – 8: 14th March 2019

Shell casings now spin. They SPIN I TELL YOU!

This development session focused on having the bullet spread feature that was already built in represented to the player via a pretty standard (but a first for me) dynamic reticle. From there I wanted the shell casings to be a little more noticeable, so I updated the orientation of the ‘ejector’ socket on each of the weapons and added a little torque to the casings as they came out. I made sure that the player gets a much better look at the casings as they come up and out of the weapon, and it worked really well.

This is how the reticle ‘heats’ and expands
This is how it ‘cools’ and shrinks again.

There is information coming from the player’s Levelling Component (yes, you can level up in this game) that allows the user’s skill with that weapon to be represented by the growing and shrinking of the reticle. Pretty cool.
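One common way to get that heat-and-cool feel is to move the current reticle size toward a target size at a fixed rate each frame, with the "hot" target shrunk by the weapon skill. A sketch under those assumptions, with entirely invented rates and sizes:

```cpp
// Move Current toward Target by Rate units per second, without
// overshooting. Called every frame with the frame's DeltaTime, this gives
// the smooth expand ("heat") and shrink ("cool") behaviour.
double StepReticle(double Current, double Target, double Rate, double DeltaTime) {
    double Step = Rate * DeltaTime;
    if (Current < Target) return (Current + Step > Target) ? Target : Current + Step;
    return (Current - Step < Target) ? Target : Current - Step;
}

// A stand-in for the Levelling Component's influence: higher weapon skill
// (0..1) shrinks the fully "hot" reticle size.
double HotReticleSize(double BaseSize, double SkillFactor) {
    return BaseSize * (1.0 - 0.5 * SkillFactor);
}
```

While firing, the target is `HotReticleSize(...)`; once the player stops, the target drops back to the resting size and the same stepping function cools it down.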

Commit

  • Now have the dynamic reticle expanding with the bullet spread and it feels right.
  • The reticle shrinks properly now too.
  • Fixed the problem with the pistol animation although I am still not satisfied that the weapon feels good to use.
  • Changed the position of the main player cam so that the weapon is more prominent.
  • Changed the location of the ammo counter UI so that it can still be seen on the SMG with the new camera position
  • Changed the near plane clipping from 10 to 1 in the main project settings
  • The pistol has been changed to 1.4 in scale and looks much better
  • Added torque to the shell casings in the pistol
  • Added torque to the shell casings in the SMG and changed the location of the ejector socket on the mesh for a more dramatic feel to the shooting
  • Added torque to the shell casings in the Assault Rifle and changed the location and rotation of the socket that spawns them

Serial Link Commit 6: 13th March 2019

I know it’s too small but hey. You will have to take my word that it’s a line trace on tick looking for UI elements on things in the world.

I am pleased with how the information panel on the right of the UI works. I am trying hard to step back from things being too specific and I am trying to make things as abstract and flexible as I can. Sometimes it works and sometimes it doesn’t, but this time it was a success! So, the logic above is always looking for actors with the interface ‘IsContextSensitiveUIElement’ and then, should it find one, it throws the message ‘ShowUI’. I won’t give a detailed explanation of all the little things that stop it breaking, just the overview. Questions? Tough. I mean, ask.
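The interface-message pattern used here (trace hits something, ask "do you implement the UI interface?", and only then request its widgets) translates roughly to this in standalone C++. The class names mirror the Blueprint interface but are hypothetical, and a Blueprint interface message behaves like the checked interface query shown:

```cpp
#include <string>
#include <vector>

// The contract: any actor that wants context-sensitive UI implements this
// and decides for itself which widget labels to show.
class IContextSensitiveUIElement {
public:
    virtual ~IContextSensitiveUIElement() = default;
    virtual std::vector<std::string> GetUIElements() const = 0;
};

// Bare-bones stand-in for an Unreal actor.
class Actor { public: virtual ~Actor() = default; };

// An example implementer: a weapon offers swap and ammo prompts.
class WeaponActor : public Actor, public IContextSensitiveUIElement {
public:
    std::vector<std::string> GetUIElements() const override {
        return {"Press E: Swap Weapon", "Press F: Take Ammo"};
    }
};

// The "ShowUI message" step: works on whatever the trace hit, and quietly
// returns nothing if the interface is absent, so the caller never needs to
// know the concrete class.
std::vector<std::string> QueryUI(const Actor* Hit) {
    if (auto* UI = dynamic_cast<const IContextSensitiveUIElement*>(Hit)) {
        return UI->GetUIElements();
    }
    return {};
}
```

This is why different actors can drive the same panel: each implementer returns its own array of UI elements and the trace logic stays identical.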

The part I want to talk about though is that UI Info panel that appears to tell the player the options about how to interact with the thing that he is looking at.

This is the tail end of the event implemented for the interface I talked about earlier.

I like this as there is no casting, and should the player character change for any reason it would still work. Also, by passing in an array of UI elements, it’s easy to change what is displayed from different actors that implement the interface. I know that there is no multiplayer option to be built, so I know that the player is going to be player zero and I think that accessing them in this way is fine considering the project. On the Controller side…

On Dex’s Controller. Dex is the main character that the player is using.
The widgets are added to the UIInfoPanel widget and we can have a quick look at that too in the designer…
That little tiny box there? That’s the vertical box that expands and shrinks as needed. Pretty cool.

Commit

  • Updated the look of the ‘collect ammo’ and the ‘out of ammo’ UI widgets.
  • Updated the ‘out of ammo’ widget to say ‘empty’ instead
  • Found out that I cannot use an emissive in the UI without sorting some other things out. This is to do with the order that Unreal draws and applies post processing.
  • Added a UI info panel to the main HUD.
  • Making sure that the UI info panel is spawning its widgets and organising them so that I am free to add what I need in the future.
  • The info panel is now to the right of the crosshair, although I would like to have it in the bottom right corner; I didn’t work out how to have the vertical box’s position change to accommodate its children.
  • Can pass an array of UI elements to the info panel now and they are removed when the player looks away from the weapon.
  • Added ammo pool and ammo clip UI widgets to the base weapon class
  • Created widget assets for those components
  • Now have ammo in clip displayed on all weapons in the calculator font. Looks cool!
  • Ammo status UI now changes from ‘collect ammo’ to ‘empty’ when the ammo is collected while the player is still looking at the weapon. Completed
  • Made sure that only the player’s currently equipped weapon shows the UI for ammo. Complete
  • Made the UI on the assault rifle a bit smaller.
  • Added the ammo pool from the weapon component to the ammo in the clip counter so that it can be shown on the weapon too. Looks really good.

Serial Link Commit 5: 13th March 2019

Holding an extra weapon. Kinda greedy really.

This development session focused on making sure that the player knew that they had picked up an extra or opportunistic weapon. I always liked the idea that you could have a full complement of weapons and just pick up another as all of yours were slung. In our case the Weapon Component that I created defines the ability to carry 3 weapons and an extra in your hands. The characters all have a primary, close quarters, weapon on the chest, then a secondary, assault rifle style, weapon on the back. Finally, they all have a side arm on the leg. The player can pick up the extra weapon but I would like to see that on the AI too. As all the characters are using the same Weapon Component, that should not really be difficult to set up.

Commit

  • Set up a ‘Holding Extra Weapon’ UI element in the main HUD.
  • Used an event dispatcher from the Weapon Comp and bound to it in Dex Controller. That’s used to call an event on the ‘Main HUD’ variable on the Soldier. The event changes the opacity of the text on the HUD.
  • Player now takes the ammo that’s in the Opportunistic weapon as he drops it. That takes care of the case where you pick up a gun, fill it with ammo, drop it and either lose that ammo or have to re-collect it. Works well.
  • Imported a font called Calculator from dafont.com and used it for the ‘holding extra weapon’ message.