Custom Character Controller in Unity: Part 4 – First Draft

After nearly a month of silence, the wait is finally over! No longer will you have to endure the hardship of waking up every morning, immediately opening my illustrious blog only to suffer a crushing disappointment deep in your soul. …Anyways. In the time since the last post, I’ve built a first draft of the character controller, and I’ll be going over its implementation in this article. This post will concern itself solely with the main controller class—in the next article, I will go over an application of the controller in a demo character that I’ve built.

The controller itself is a single C# script, which can be downloaded here. As with the previous Pushback example (from the second part of this series), I am making use of some of fholm’s RPG classes, as well as a modified version of a class he uses to find the closest points on the surfaces of colliders, called SuperCollisions.cs. In addition, for debugging purposes I typically use my own DebugDraw.cs class to draw markers and vectors on the screen. Finally, I use many of the 3D math functions found in this class, by Bit Barrel Media. In the future, it’ll probably be simpler for me to just post a Unity project, but since we’re mostly focusing on a single class today, this is easier for now. I’ll go over the basic structure of the controller, and then iterate through each of its features in more detail.

[ EDIT: past Erik was spot on. You can now get the controller through the Downloads page. Note that the code linked to above is an earlier version of the character controller, which I am leaving posted here for clarity and learning purposes. If you are planning on using the controller in your project, please head over to the Downloads page for the up-to-date and complete code along with a sample project ]

The controller goes through three primary phases: Movement, Pushback and Resolution. In the Movement phase, we calculate all of our character’s movement logic and modify his position accordingly. We then run our Pushback function, ensuring that he is not intersecting any of our geometry. Finally, we run any necessary Resolution steps. These could include limiting the angle of slope our character could move up, clamping him to the ground, etc.
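
To make the structure concrete, here is a rough outline of the controller’s per-frame loop; the method names are illustrative placeholders of mine, not necessarily the ones used in the actual script.

// Hypothetical outline of the controller's per-frame loop; only the phase order
// comes from the post, the method names are placeholders.
using UnityEngine;

public class CharacterControllerOutline : MonoBehaviour
{
    [SerializeField] private Vector3 debugMove = Vector3.zero;

    private void Update()
    {
        // Phase 1: Movement - apply the character's movement logic.
        transform.position += debugMove * Time.deltaTime;

        // Phase 2: Pushback - resolve any intersections with world geometry.
        Pushback();

        // Phase 3: Resolution - reactionary logic such as slope limiting and ground clamping.
        SlopeLimit();
        ClampToGround();
    }

    private void Pushback() { /* push the controller out of any colliders it overlaps */ }
    private void SlopeLimit() { /* repel the character from slopes that are too steep */ }
    private void ClampToGround() { /* SphereCast down from the feet and snap to the surface */ }
}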

Figure showing the movement and pushback phases of the controller

Before we get started I should note that, unlike the previous controller, this one is built using three OverlapSpheres, stacked one above the other, to simulate the shape of a capsule. The controller is built to work with any number of spheres—tall, slim characters may require more than three, while short, squat ones may need fewer. Let’s take a look at the code now. The first phase within our controller is fairly simple for the time being, consisting of a single instruction:

transform.position += debugMove * Time.deltaTime;

This allows you to set in the inspector how much the controller will move each frame. When we build our actual character, this line will be replaced with all of our movement logic. For now, it serves as a handy debugging tool. Phase two is Pushback. Here, our goal is to check whether the controller is intersecting any colliders, and if so, push him to the nearest location on their surface. The basics of how we do this can be seen in the Implementation article I posted earlier. This time around, the algorithm is slightly more complicated. The first half of the method is more or less the same as before: we find the nearest point on the surface of each collider detected by the OverlapSphere. Next, we need to see which side of the surface normal the origin of the OverlapSphere is on. We do this by raycasting from the center of the sphere in the direction of the nearest point on the surface. Since a raycast only detects a surface if the normal is facing the cast, whether this cast returns true or false will tell us if the origin is outside or inside the surface, respectively. Note that in the code I use a SphereCast with a very small radius instead of a raycast; this avoids errors when raycasting directly at an edge of a mesh.
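
Here is a rough sketch of that pushback pass; the stacked sphere offsets, layer mask and the ClosestPointOnSurface() helper below are my own illustrative stand-ins (the actual controller uses the SuperCollisions.cs class to find points on a surface).

// Rough sketch of the pushback pass described above. All names and values here
// are illustrative assumptions, not the controller's actual code.
using UnityEngine;

public class PushbackSketch : MonoBehaviour
{
    [SerializeField] private float radius = 0.5f;
    [SerializeField] private LayerMask walkable = ~0;

    // Three spheres stacked one above the other to approximate a capsule.
    private readonly Vector3[] sphereOffsets =
    {
        new Vector3(0f, 0.5f, 0f),
        new Vector3(0f, 1.0f, 0f),
        new Vector3(0f, 1.5f, 0f)
    };

    private const float TinyRadius = 0.001f;   // thin SphereCast to dodge mesh edges

    private void Pushback()
    {
        foreach (Vector3 offset in sphereOffsets)
        {
            Vector3 origin = transform.position + offset;

            foreach (Collider col in Physics.OverlapSphere(origin, radius, walkable))
            {
                // Nearest point on the collider's surface to this sphere's origin.
                Vector3 contactPoint = ClosestPointOnSurface(col, origin);
                Vector3 toContact = contactPoint - origin;

                // Cast a very small sphere towards the contact point. Surfaces only
                // register when their normal faces the cast, so a hit means the origin
                // is outside the collider; no hit means it is inside.
                bool outside = Physics.SphereCast(origin, TinyRadius, toContact.normalized,
                                                  out RaycastHit hit, toContact.magnitude + radius,
                                                  walkable);

                // (The "are we still touching this collider?" check discussed below is omitted here.)

                // Rest the sphere on the surface: contact point plus radius along the
                // direction pointing away from the surface.
                Vector3 awayFromSurface = outside ? -toContact.normalized : toContact.normalized;
                Vector3 resolvedOrigin = contactPoint + awayFromSurface * radius;
                transform.position += resolvedOrigin - origin;
            }
        }
    }

    // Stand-in for the SuperCollisions.cs helper; Collider.ClosestPoint only behaves
    // like it for convex colliders queried from outside, so treat this as a sketch.
    private static Vector3 ClosestPointOnSurface(Collider col, Vector3 point)
    {
        return col.ClosestPoint(point);
    }
}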

The “feet” OverlapSphere of the controller detects a collision with the ramp, finds the nearest point and then raycasts towards it (shown as the red arrow)

Before applying the pushback vector, we do a final check to make sure we’re still colliding with the object. The OverlapSphere returns all the collisions first, and the pushbacks are then applied one by one. This makes it possible that, in the case of hitting multiple colliders, a previously applied pushback has the side effect of moving the controller far enough that it no longer overlaps a collider the OverlapSphere originally detected. We resolve this by checking the distance between the origin of our sphere and the nearest point on the surface of the collider; if the distance is greater than the radius and we are located “outside” the normal, we know that we are not touching it and were simply displaced by an earlier collision.
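
A minimal sketch of that check, reusing the same hypothetical ClosestPointOnSurface() stand-in as above, might look like this.

// Skips pushbacks against colliders we were already separated from by an earlier
// pushback. Names are illustrative, not the controller's actual API.
using UnityEngine;

public static class PushbackFilter
{
    // Returns true if the sphere at 'origin' still overlaps 'col' and the pushback
    // should actually be applied; false if an earlier pushback already separated us.
    public static bool StillTouching(Collider col, Vector3 origin, float radius, bool originIsOutside)
    {
        Vector3 contactPoint = ClosestPointOnSurface(col, origin);
        float distance = Vector3.Distance(origin, contactPoint);

        // Outside the surface and farther away than our radius: this collision is stale.
        if (originIsOutside && distance > radius)
            return false;

        return true;
    }

    // Stand-in for the SuperCollisions.cs helper, as in the previous sketch.
    private static Vector3 ClosestPointOnSurface(Collider col, Vector3 point)
    {
        return col.ClosestPoint(point);
    }
}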

The controller’s lowest OverlapSphere collides with both the green ramp and the blue ground, with the nearest points on their surfaces shown in teal and blue, respectively. The ramp collision pushback is resolved first, causing a side effect where the OverlapSphere is no longer colliding with the blue ground

The third phase is less clear-cut than the previous two. It can be defined as doing any “clean-up” or “reactionary” logic. There are two main methods that are executed here: slope limiting and ground clamping. Slope limiting should be familiar to anyone who has used Unity’s built-in controller: if a character attempts to move up a slope that is steeper than a specific angle, he is repelled by the slope as if it were a solid wall, instead of being pushed up it.

Ground clamping is not included in the Unity controller, and is fairly important. When moving horizontally over an uneven surface, the controller will not (by itself) follow the geometry of the ground. In the real world, we time our leg movements to allow for each slight increase or decrease in elevation, and gravity takes care of the rest. However, in a game world we need to handle this a bit more explicitly. Unlike the real world, gravity is not a constantly applied force in most controllers. When we are not standing on a surface, we apply acceleration downwards. When we are on a surface, we set our vertical velocity to zero, to represent the normal force exerted by the surface. Because our vertical velocity is zeroed out when standing on a surface, it takes time to build up downward speed again when we walk off said surface. This is fine when we are actually walking off a ledge, but when we’re walking down a slope or over uneven ground, it creates an unnatural bouncing effect. In addition to being a problem visually, this oscillation between grounded and not-grounded is a problem for our actual game logic, since a character’s behavior is typically very different when he is on a surface compared to when he is falling.
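
A minimal sketch of that grounded/falling velocity logic is shown below; the field names are illustrative, and the grounded flag would come from whatever ground detection your game uses (such as the ProbeGround() data discussed further down).

// Sketch of grounded vs. falling vertical velocity handling.
using UnityEngine;

public class GravitySketch : MonoBehaviour
{
    [SerializeField] private float gravity = 25f;    // downward acceleration, units/sec^2
    [SerializeField] private bool isGrounded;         // set by your own ground probing
    private Vector3 velocity;

    private void Update()
    {
        if (isGrounded)
        {
            // The surface's normal force cancels gravity: zero out vertical speed.
            velocity.y = 0f;
        }
        else
        {
            // In the air, accelerate downwards each frame.
            velocity.y -= gravity * Time.deltaTime;
        }

        transform.position += velocity * Time.deltaTime;
    }
}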

The left image shows how the character’s movement follows the uneven surface by ground clamping. On the right, we see how he “bounces” across the surface when clamping is not applied. Each red “X” represents when the downward force of gravity is zeroed out

This problem is solved in our reaction phase with the aforementioned “ground clamping,” which, as the name implies, adjusts our character’s position to be in line with the ground by SphereCasting directly downwards from our “feet.” Obviously, there are plenty of times when you do not want to clamp your character to the ground, such as when he is beginning a jump, or is far enough above the surface that he should not be counted as standing on it.

You’ll notice that I talk about whether the controller is “grounded” or not an awful lot. You’ll also see the method ProbeGround() called multiple times throughout the main loop of the controller. Knowing when your character is standing on a surface and when he is not is very important to building a proper controller. I don’t intend to provide the tools to check if a character is “grounded” or not within the main controller class, since this depends greatly on your game’s structure. However, I do provide a method that will detect what is below the player, as well as store it (and some additional useful information) in a variable that is easily accessible. How you use this is up to you, but in the next article in the series I’ll be providing an example character that uses this controller and the data from the ProbeGround() method.

The SlopeLimit method should be easy enough to understand, as its functionality is familiar and I’ve commented it fairly well. (I actually haven’t. But I plan to before I upload the file.) Speaking of familiar functionality…those who know the Unity character controller well have probably noticed that my custom controller seems to be lacking a feature: StepOffset. I do intend to tackle this method, but it’s either much more complex than I initially expected, or I’m missing a simple solution for it. It’s definitely a “need-to-have,” since it’s pretty essential for most applications.

That pretty much covers the Super Character Controller. Next time, I’ll go over an example character that I’ve built using the controller class detailed today, as well as provide the source code for it. If any of my code doesn’t seem to be working or compiling properly, please contact me so I can fix the error!
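
For reference, here is a minimal sketch of the downward SphereCast used for ground clamping; the field names, feet offset and clamp distance are assumptions of mine, not the controller’s actual code.

// Sketch of ground clamping: SphereCast down from the "feet" sphere and snap to
// whatever surface is within clamping range.
using UnityEngine;

public class GroundClampSketch : MonoBehaviour
{
    [SerializeField] private float radius = 0.5f;                      // feet sphere radius
    [SerializeField] private Vector3 feetOffset = new Vector3(0f, 0.5f, 0f);
    [SerializeField] private float clampDistance = 0.5f;               // how far below we still snap
    [SerializeField] private LayerMask walkable = ~0;

    private void ClampToGround()
    {
        Vector3 feet = transform.position + feetOffset;

        if (Physics.SphereCast(feet, radius, Vector3.down, out RaycastHit hit,
                               clampDistance, walkable))
        {
            // Move down so the feet sphere rests on the detected surface.
            transform.position += Vector3.down * hit.distance;
        }
    }
}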

Custom Character Controller in Unity: Part 1 – Collision Resolution

After using Unity over the years for various projects, I’ve come to two conclusions: overall it’s a terrific engine that I would recommend to anyone interested in getting into game development, and its built-in character controller sucks. I’ve been working on a custom character controller for a couple of weeks and noticed that finding any kind of reference or learning material on the subject is pretty difficult. So since I couldn’t find anything out there to read…I figured I’d write something instead! I intend to post a few pieces here outlining what I’ve learned so far and some issues I’ve encountered. For any actual implementation, I’ll be using the aforementioned Unity game engine. You can visit their website here and download their latest version here. I really dig Unity. It takes care of a lot of the low-level, under-the-hood side of game development but still gives you enough freedom to do just about anything. It also has a really great and active community that is the epitome of helpfulness.

Unfortunately, as previously stated, Unity is also home to the world’s worst character controller. I suppose I should loosely define what a “character controller” is before I build my case against Unity’s. More or less, it’s the code (or class, or whatever) that handles and resolves your character’s collisions in the world. Unlike boxes and barrels and whatnot that can be taken care of by the rest of the physics engine, characters require special code to behave differently. However, since we’re doing collision tests, we still need to pick a geometric shape to represent our character. Most 3D games use a capsule collider.

Capsule collider used to approximate a character’s form in Unity

Capsule colliders are great for a wide variety of reasons, but we’ll see that later. For now, I’m going to go over the basics of character controllers in just two dimensions, to keep things simple.

CCSetup1

You’ll notice I’ve labelled the axes z and x, instead of x and y. This is because I’m going to treat this as a top-down view of a three-dimensional world, which we’ll eventually move on to. The character here is seen as a blue circle, since a capsule seen from a top view is a circle! The green rectangle we will treat as a wall. Ideally, characters cannot walk through walls, so should the character intersect with it, we’ll want to detect the collision and properly resolve it. We’re going to pass over the actual collision detection, i.e., checking to see if the circle is intersecting the box, for two reasons. One is that Unity has a fair amount of resources (which we’ll go over later) to handle this, and two is that there is a pretty good selection of reference material out there for collision detection. We’ll focus on getting the colliding object to resolve its position properly based on expected behavior.

CCSetup2

Shown above we have the controller attempting to move into the wall. To prevent this, we run a function that performs a sweep test from the initial position of the controller to the desired position of the movement. The test detects that a wall is in front of us and returns the distance to it. Using this value, we move the controller in the direction of the test by the distance the sweep traveled, placing it directly beside the wall. (Aside: Unity has several built-in functions for this, including Rigidbody.SweepTest, Physics.SphereCast and Physics.CapsuleCast.)
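
As a rough illustration (using Physics.SphereCast in 3D; the radius and layer mask are assumptions), that sweep-and-stop movement might look something like this.

// Move the controller by 'moveDelta', but stop flush against the first wall the
// sweep detects.
using UnityEngine;

public class StopAtWallSketch : MonoBehaviour
{
    [SerializeField] private float radius = 0.5f;
    [SerializeField] private LayerMask walls = ~0;

    private void Move(Vector3 moveDelta)
    {
        Vector3 direction = moveDelta.normalized;
        float distance = moveDelta.magnitude;

        if (Physics.SphereCast(transform.position, radius, direction, out RaycastHit hit,
                               distance, walls))
        {
            // A wall is in the way: travel only as far as the sweep did.
            transform.position += direction * hit.distance;
        }
        else
        {
            transform.position += moveDelta;
        }
    }
}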

However, this really isn’t the kind of behavior we want. If we use this method, the character will be immediately halted in any movement he makes if he collides with an object, even if only slightly. This is undesirable as it doesn’t reflect the way real-world objects tend to bounce and slide off each other and, more importantly, it would be annoying as hell to play.

CCSetup3

This is a much more desirable behavior. The initial sweep test is performed in the movement direction for the movement distance. When the sweep test contacts the wall, the character is moved directly to it, just like before. However, this time around we further move the character upwards to make up for the lost movement, which allows it to slide along surfaces. This is a great example of the desired behavior of the controller, but it isn’t the best way to implement it. For one, it’s not very efficient: every time you want to move the controller, you need to run this function. This is fine if you just move it once per frame, but if you plan on doing something different–for whatever reason–you’ll need to rerun the function. Two, collision resolution is reliant on character movement direction and distance. If he just magically finds his way into a solid wall (as character controllers are wont to do) he’s not going to get automatically pushed out. In practice, I’ve found this method a massive headache.
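
For completeness, here is a sketch of that move-and-slide idea, using Vector3.ProjectOnPlane to spend the leftover movement along the wall’s surface; note that this is the approach being argued against, and the single-iteration version with a small skin offset is my own simplification.

// Move up to the wall, then slide the remaining distance along its surface.
using UnityEngine;

public class MoveAndSlideSketch : MonoBehaviour
{
    [SerializeField] private float radius = 0.5f;
    [SerializeField] private LayerMask walls = ~0;
    private const float Skin = 0.01f;   // small gap so the next cast doesn't start inside the wall

    private void MoveAndSlide(Vector3 moveDelta)
    {
        Vector3 direction = moveDelta.normalized;
        float distance = moveDelta.magnitude;

        if (Physics.SphereCast(transform.position, radius, direction, out RaycastHit hit,
                               distance, walls))
        {
            // Move up to the wall...
            transform.position += direction * (hit.distance - Skin);

            // ...then spend the leftover distance sliding along the wall's surface.
            Vector3 leftover = direction * (distance - hit.distance);
            transform.position += Vector3.ProjectOnPlane(leftover, hit.normal);
        }
        else
        {
            transform.position += moveDelta;
        }
    }
}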

CCSetup4

Here is that terrifying situation put on screen. We see our hero is currently within the walls of the object. Instead of looking at collision resolution as a response to a movement, like in the previous examples, we are going to treat it completely independently. We are no longer concerned with what direction the player moved or how far he moved. Instead, we will consider only where he is at this moment, and whether his location is a problem or not. In the above figure, we can see that currently the player is intersecting the wall (he’s inside it!), and therefore his current location is a problem and needs to be rectified. Since we are no longer resolving collision as a response to movement, we do not know where his previous position was or how far he moved. All we know is that currently he is stuck inside a wall, and we need to move him out of it. But where to put him? Just like in previous examples, we should only push him out so that he is just touching the edge of the wall. We have many locations that are candidates for this…

CCSetup7

Each of the transparent yellow circles indicates a possible position for the character controller that satisfies our goal–to push him out of the wall to a point somewhere on its surface. But which point do we choose? Simply put, we calculate the closest point on the surface of the wall with respect to the controller’s location.

Drawing these diagrams in Photoshop was a huge mistake; they take forever

Here we have calculated that the nearest point to our controller’s current location lies to the right of us.  We then move the controller to that point, plus the radius of the controller (shown in red).
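
To make this concrete, here is a self-contained 2D sketch of the pushback for a circle against an axis-aligned rectangular wall; the class and method names are mine, purely for illustration.

// Find the closest point on a rectangular wall to the circle's centre, then place
// the centre at that point plus the circle's radius along the outward direction.
using UnityEngine;

public static class CirclePushback2D
{
    // Returns the resolved centre of a circle that is touching (or inside) an
    // axis-aligned rectangle defined by 'min' and 'max'.
    public static Vector2 Resolve(Vector2 centre, float radius, Vector2 min, Vector2 max)
    {
        // Closest point on the rectangle to the circle's centre.
        Vector2 closest = new Vector2(Mathf.Clamp(centre.x, min.x, max.x),
                                      Mathf.Clamp(centre.y, min.y, max.y));
        Vector2 away = centre - closest;

        if (away.sqrMagnitude > radius * radius)
            return centre;                        // not touching: nothing to resolve

        if (away.sqrMagnitude < 1e-6f)
        {
            // The centre is inside the rectangle, so the clamp returns the centre
            // itself; push out through the nearest face instead.
            float dxMin = centre.x - min.x, dxMax = max.x - centre.x;
            float dyMin = centre.y - min.y, dyMax = max.y - centre.y;
            float smallest = Mathf.Min(Mathf.Min(dxMin, dxMax), Mathf.Min(dyMin, dyMax));

            if (smallest == dxMin)      { closest = new Vector2(min.x, centre.y); away = Vector2.left; }
            else if (smallest == dxMax) { closest = new Vector2(max.x, centre.y); away = Vector2.right; }
            else if (smallest == dyMin) { closest = new Vector2(centre.x, min.y); away = Vector2.down; }
            else                        { closest = new Vector2(centre.x, max.y); away = Vector2.up; }

            return closest + away * radius;
        }

        // Touching from outside: rest the circle on the surface.
        return closest + away.normalized * radius;
    }
}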

This concludes the first part of our epic adventure into the mysterious realm of character controllering.  Next time I’ll start talking about my implementation in Unity, and some of the more complex functions character controllers use.

Acknowledgements and References:

Most of the knowledge here I’ve acquired by exploring two main information sources: a Unity forums post by the user techmage, and reading through/reverse engineering a Unity custom character controller package by user fholm. I don’t know how to pronounce that either. Fuhholllme. Disgusting. Reminds me of phlegm. Anyways, you can download his package off his GitHub here, under the RPGController directory. This is an amazing project overall that I will be exploring over the next few sections, and it is really terrifically coded. Apparently he does Unity consulting too, if anyone needs that kind of service. Finally, if anyone has any good reference material for this subject, or anything related, sharing it here would be a really great way to expand on the topic!