Unity Tutorial: How to use Lerp

I’ve noticed when talking to various Unity users (and browsing the forums) that there seems to be a bit of confusion about the purpose of the Lerp methods, and how and when to use them. This isn’t too surprising, given that the documentation on Lerp is bizarre (it uses Time.time as the t value, something that by the end of this tutorial you’ll recognize as somewhat odd). I’ve also supplied a Unity webplayer below to visualize how Lerp works.

Lerp stands for Linear Interpolation. Unity has a family of Lerp methods that all function in pretty much the same way but have different applications: Mathf.Lerp, Vector3.Lerp, Color.Lerp, and Quaternion.Lerp.

Lerp essentially lets you pick two values, a from and a to value, and then select a t value that returns an interpolation between the from and the to. The t value is clamped between 0 and 1 (regardless of what you pass in as a parameter), so you can think of it like a percentage: 0.4 is 40%, 0.6 is 60%, and so on. In other words, when you run Lerp on a from and a to value, you get the value t percent of the way between the two. Let’s look at some examples.

Mathf.Lerp(0, 100, 0.5f)

This is trivial. We can read this as asking Lerp to give us the value 50% between 0 and 100. This will return 50.

Mathf.Lerp(50, 80, 0.3f)

This will return the value 30% of the way between 50 and 80, which is 59. To make Lerp’s inner workings less opaque, you can verify these values using simple arithmetic: take the range between the two values, multiply it by the t value, and then add the result to the from value. For the example above:

range : 80 - 50 = 30
distance by t value : 30 * 0.3 = 9
interpolated value : 50 + 9 = 59

(The above is in pseudocode and not C#)
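If you want to see this arithmetic in C#, a hand-rolled version of Mathf.Lerp might look like the sketch below (ManualLerp is my own illustrative name, not a Unity method):

```csharp
// A hand-rolled equivalent of Mathf.Lerp, for illustration only.
float ManualLerp(float from, float to, float t)
{
    t = Mathf.Clamp01(t);           // Mathf.Lerp clamps t to the range [0, 1]
    return from + (to - from) * t;  // from + (range * percentage)
}

// ManualLerp(50, 80, 0.3f) returns 59, matching Mathf.Lerp(50, 80, 0.3f)
```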

As stated above, the other Lerp methods work in pretty much the same way. Vector3.Lerp has you select two Vector3s for the from and to values, Color.Lerp uses Colors (duh), Quaternion.Lerp, rotations.

You don’t always have the luxury of having the value easily supplied for you. Often when using Lerp you are attempting to relate two different values to each other. An example of this would be simulating camera shake when a player is nearby a cannon firing: the further the player is away from the cannon, the less the camera shakes. Therefore there is a clear relation between the distance the player is from the cannon and the magnitude of the camera shake. (This example is based on the cannons from the Mario 64 HD project I built, specifically the FiringCannon class.)

Unity supplies a method named InverseLerp. InverseLerp takes a from and to parameter, and then a value parameter which should be somewhere between from and to. I’ll do an example before we try it out on the camera shake problem stated above.

Mathf.InverseLerp(30, 60, 45);

We can read this as “how far is 45 between 30 and 60?” This will return 0.5, as 45 is halfway between 30 and 60.
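InverseLerp is just the same arithmetic run in reverse; a sketch (ManualInverseLerp is my own illustrative name, not a Unity method):

```csharp
// A hand-rolled equivalent of Mathf.InverseLerp, for illustration only.
float ManualInverseLerp(float from, float to, float value)
{
    // How far along the range [from, to] the value sits, clamped to [0, 1].
    return Mathf.Clamp01((value - from) / (to - from));
}

// ManualInverseLerp(30, 60, 45) returns (45 - 30) / (60 - 30) = 0.5
```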

For our cannon, let’s assume we have some CameraShake(float magnitude) method that we will use. We will state that the maximum shake magnitude is 20, and the minimum is 0. We will also say that the maximum distance the player can be from the cannon and still feel the shake is 50, and the minimum is 0. Let’s solve this.

float t = Mathf.InverseLerp(0, 50, Vector3.Distance(cannon.position, player.position));

Where cannon and player are Transforms. This will give us a value that is 0 when the player is immediately beside the cannon, and 1 when the player’s distance is 50 or greater. We now need to get the proper shake magnitude.

Mathf.Lerp(20, 0, t);

You’ll notice that the from value is larger than the to value. This is perfectly acceptable, and is done because we want the shake magnitude to be at a maximum when the player’s distance is 0 (and t is 0) and at a minimum when the player’s distance is 50 or greater (and t is 1). If for whatever reason you need the from value to be the smaller of the two, you can always invert t by subtracting it from 1.
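Putting the two steps together, a cannon-fire handler might look something like this sketch (CameraShake, cannon and player are the stand-ins from the example above, not real Unity API):

```csharp
// Maps player distance in [0, 50] to a shake magnitude in [20, 0].
void OnCannonFired()
{
    float distance = Vector3.Distance(cannon.position, player.position);

    // t is 0 when the player is beside the cannon, 1 at distance 50 or more.
    float t = Mathf.InverseLerp(0, 50, distance);

    // Maximum shake (20) at t = 0, no shake at t = 1.
    CameraShake(Mathf.Lerp(20, 0, t));
}
```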

I’ve added a demo below showing Lerping between two values (Mathf.Lerp), positions (Vector3.Lerp) and colors (Color.Lerp), with the t value controlled by the slider. Hopefully this clears up how Lerp works and demonstrates how useful a method it is.

Click to open webplayer Lerp demo


Custom Character Controller in Unity: Part 4 – First Draft

After nearly a month of silence, the wait is finally over! No longer will you have to endure the hardship of waking up every morning, immediately opening my illustrious blog only to suffer a crushing disappointment deep in your soul. …Anyways. In the time since the last post, I’ve built a first draft of the character controller, and I’ll be going over its implementation in this article. This post will concern itself solely with the main controller class; in the next article, I will go over an application of the controller in a demo character that I’ve built.

The controller itself is a single C# script, which can be downloaded here. As with the previous Pushback example (from the second part of this series), I am making use of some of fholm’s RPG classes, as well as a modified version of a class he uses to find closest points on the surfaces of colliders, called SuperCollisions.cs. In addition, for debugging purposes I typically use my own DebugDraw.cs class to draw markers and vectors on the screen. Finally, I use lots of the 3D math functions found in this class, by Bit Barrel Media. In the future, it’ll probably be simpler for me to just post a Unity project, but since we’re mostly focusing on a single class today, this is easier. I’ll go over the basic structure of the controller, and then iterate through each of its features in more detail.

[ EDIT: past Erik was spot on. You can now get the controller through the Downloads page. Note that the code linked to above is an earlier version of the character controller, which I am leaving posted here for clarity and learning purposes. If you are planning on using the controller in your project, please head over to the Downloads page for the up-to-date and complete code along with a sample project ]

The controller goes through three primary phases: Movement, Pushback and Resolution. In the Movement phase, we calculate all of our character’s movement logic and modify his position accordingly. We then run our Pushback function, ensuring that he is not intersecting any of our geometry. Finally, we run any necessary Resolution steps. These could include limiting the angle of slope our character could move up, clamping him to the ground, etc.
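As a rough sketch, the controller’s per-frame loop runs the three phases in a fixed order (the method names here are illustrative, not necessarily those in the actual source):

```csharp
// The three phases of the controller, executed in order every frame.
void Update()
{
    HandleMovement();    // Phase 1: run the character's movement logic
    RecursivePushback(); // Phase 2: push the controller out of any intersecting geometry
    HandleResolution();  // Phase 3: reactionary steps like slope limiting and ground clamping
}
```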

Figure showing the movement and pushback phases of the controller

Before we get started I should note that, unlike the previous controller, this one is built using three OverlapSpheres, stacked one above the other, to simulate the shape of a capsule. The controller is built to work with any number of spheres; tall slim characters may require more than three, while short squat ones may need fewer. Let’s take a look at the code now. The first phase within our controller is fairly simple for the time being, consisting of a single instruction:

transform.position += debugMove * Time.deltaTime;

This allows you to set in the inspector how much the controller will move each frame. When we build our actual character, this line will be replaced with all of our movement logic. For now, it serves as a handy debugging tool.

Phase two is Pushback. Here, our goal is to check if the controller is intersecting any colliders, and if so to push him to the nearest location on their surface. The basics of how we do this can be seen in the Implementation article I posted earlier. This time around, the algorithm is slightly more complicated. The first half of the method is more or less the same as before; we find the nearest point on the surface of any collider within the OverlapSphere. Next, we need to see which side of the surface the origin of the OverlapSphere is on. We do this by raycasting from the center of the sphere in the direction of the nearest point on the surface. Since a raycast only detects a surface whose normal is facing the cast, whether this cast returns true or false tells us if the origin is outside or inside the surface, respectively. Note that in the code I use a SphereCast with a very small radius instead of a raycast; this avoids errors when raycasting directly at an edge of a mesh.
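That inside/outside test can be sketched roughly as follows, assuming nearestPoint and origin have already been computed (the variable names here are mine, not the actual source’s):

```csharp
// Cast a tiny sphere from the controller's origin towards the nearest point
// on the collider's surface. Casts do not detect backfaces, so a hit means
// the origin is outside the surface; a miss means it is inside.
Vector3 toSurface = nearestPoint - origin;
RaycastHit hit;

bool isOutside = Physics.SphereCast(
    origin,                       // start of the cast
    0.01f,                        // small radius avoids misses when aimed at a mesh edge
    toSurface.normalized,         // direction towards the nearest surface point
    out hit,
    toSurface.magnitude + 0.01f); // cast just far enough to reach the surface
```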

The "feet" OverlapSphere of the controller detects a collision with the ramp, finds the nearest point and then raycasts towards it (shown as the red arrow)

The “feet” OverlapSphere of the controller detects a collision with the ramp, finds the nearest point and then raycasts towards it (shown as the red arrow)

Before applying the pushback vector, we do a final check to make sure we’re still colliding with the object. The OverlapSphere returns all the collisions first, and the pushbacks are then applied one by one. This makes it possible that, in the case of hitting multiple colliders, a previously applied pushback moves the controller enough that it is no longer colliding with objects the OverlapSphere originally touched. We resolve this by checking the distance between the origin of our sphere and the nearest point on the surface of the collider; if the distance is greater than the radius and we are located “outside” the normal, we know that we are not touching it and were displaced by an earlier collision.

The controller’s lowest OverlapSphere collides with both the green ramp and the blue ground, with the nearest points on their surfaces shown in teal and blue, respectively. The ramp collision pushback is resolved first, causing a side effect where the OverlapSphere is no longer colliding with the blue ground

The third phase is less clear cut than the previous two. It can be defined as doing any “clean-up” or “reactionary” logic. There are two main methods that are executed here: slope limiting and ground clamping. Slope limiting should be familiar to anyone who has used Unity’s built-in controller: if a character attempts to move up a slope steeper than a specific angle, he is repelled by the slope as if it were a solid wall, instead of pushed up it.

Ground clamping is not included in the Unity controller, and is fairly important. When moving horizontally over an uneven surface, the controller will not (by itself) follow the geometry of the ground. In the real world, we time our leg movements to allow for each slight increase or decrease in elevation, and gravity takes care of the rest. In a game world, however, we need to handle this a bit more explicitly. Unlike the real world, gravity is not a constantly applied force in most controllers. When we are not standing on a surface, we apply acceleration downwards. When we are on a surface, we set our vertical velocity to zero, to represent the normal force exerted by the surface. Because our vertical velocity is zeroed out when standing on a surface, it takes time for our downward speed to build back up when we walk off said surface. This is fine when we are actually walking off a ledge, but when we’re walking down a slope or over uneven ground, it creates an unnatural bouncing effect. In addition to being a problem visually, this oscillation between grounded and not-grounded is a problem for our actual game logic, since a character’s behavior is typically very different when he is on a surface compared to when he is falling.

The left image shows how the character’s movement follows the uneven surface by ground clamping. On the right, we see how he “bounces” across the surface when clamping is not applied. Each red “X” represents when the downward force of gravity is zeroed out
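The clamping shown on the left of the figure can be sketched as a downward SphereCast from the controller’s feet (feetPosition, radius and clampDistance are illustrative names of my own, not the actual source’s):

```csharp
// Rough sketch of ground clamping: after movement and pushback, cast a sphere
// straight down from the "feet" and snap the controller onto any surface
// found within clampDistance.
void ClampToGround()
{
    RaycastHit hit;

    if (Physics.SphereCast(feetPosition, radius, Vector3.down, out hit, clampDistance))
    {
        // Move the controller down so the feet sphere rests flush on the surface.
        transform.position += Vector3.down * hit.distance;
    }
}
```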

This problem is solved in our reaction phase with the aforementioned “ground clamping,” which, as the name implies, adjusts our character’s position to be in line with the ground by SphereCasting directly downwards from our “feet.” Obviously, there are plenty of times when you do not want to clamp your character to the ground, such as when he is beginning a jump, or is far enough above the surface that he should not be counted as standing on it.

You’ll notice that I talk about whether the controller is “grounded” or not an awful lot. You’ll also see the method ProbeGround() called multiple times throughout the main loop of the controller. Knowing when your character is standing on a surface and when he is not is very important to building a proper controller. I don’t intend to provide the tools to check if a character is “grounded” within the main controller class, since this depends greatly on your game’s structure. However, I do provide a method that detects what is below the player and stores it (along with some additional useful information) in an easily accessible variable. How you use this is up to you, but in the next article in the series I’ll be providing an example character that uses this controller and the data from the ProbeGround() method.

The SlopeLimit method should be easy enough to understand, as its functionality is familiar and I’ve commented it fairly well. (I actually haven’t. But I plan to before I upload the file.) Speaking of familiar functionality…those who know the Unity character controller well have probably noticed that my custom controller seems to be lacking a feature: StepOffset. I do intend to tackle this method, but it seems much more complex than I initially expected, or I’m missing a simple solution for it. It’s definitely a “need-to-have,” since it’s pretty essential for most applications.

That pretty much covers the Super Character Controller.
Next time, I’ll go over an example character that I’ve built using the controller class detailed today, as well as provide the source code for it. If any of my code doesn’t seem to be working or compiling properly, please contact me so I can fix the error!

Custom Character Controller in Unity: Part 3 – Analysis of the Physics API

Up until this point I’ve made multiple references to some of the Unity Physics API, but we haven’t really explored it in detail. As an astute reader may have guessed from the title of this entry, that’s what we’ll be doing now. We’ll go through the functions available, analyze some of their associated issues, and ways to overcome them.

As per usual I’ve done my best to avoid doing original research and will be making heavy use of this post from fholm.

Fhhoollolllmm!!

The Physics API

With many of the functions simply being variations of each other, it shouldn’t take long to go over the details of the Physics Script Reference. I’m not going to bother talking about the methods that have an All variation available, since they are identical except that the cast stops immediately at the first contact.

Raycast: Fires a ray in a specified direction for a specified distance (or infinitely far). If an object is contacted, the RaycastHit structure provides useful information about it: where it was contacted, what the normal of the surface was at the contact point, and so on. Because it fires an infinitely thin ray, this method isn’t particularly useful for collision resolution.

CapsuleCastAll: At first glance this seems ideal for use with a character controller (due to its capsule shape), and for the most part it is. It is important to note that, as this is a cast, it will only detect a collision where the normal of the surface is facing the cast; no backfaces are detected. In addition, the cast does not detect any objects that are within the boundaries of the “capsule” at the origin of the cast, i.e., it doesn’t detect any objects touching its initial position. This is a drawback we will need to overcome if we want it to be a useful tool for our character controller.

CheckCapsule: Right away we have a candidate for solving the problem stated above. CheckCapsule seems to exactly complement CapsuleCastAll; it will detect all the objects at the initial position of the cast that the CapsuleCast cannot. Unfortunately, it only returns a bool, as opposed to an array of colliders, giving us no information about which objects we actually collided with.

CheckSphere: Same as above, except with a sphere shape.

Linecast: Identical in terms of function to Raycast. Simply a different way of defining the origin, direction, and magnitude of the ray.

OverlapSphere: Now we’re getting somewhere. As far as I can tell, OverlapSphere works exactly as advertised. Bear in mind this note does appear on the docs:

NOTE: Currently this only checks against the bounding volumes of the colliders not against the actual colliders.

…and I really don’t know what this means. I’ve tested it against Box Colliders, Sphere Colliders and Mesh Colliders, and it seems to check against the actual collider, not just the bounding volume. Note that I am taking bounding volume to mean the axis-aligned bounding box, though it may mean something different in this case. If it doesn’t, I’m going to assume this is a documentation error.

RaycastAll: Same as the Raycast method, except that it does not stop at the first object it contacts.

SphereCastAll: Functions the same as CapsuleCastAll, with the same primary drawback of not detecting objects contained in the sphere defined at the origin of the cast. SphereCast also (like CapsuleCast) does NOT always return the proper normal of the face it collides with. Because it is a sphere being cast (rather than an infinitely thin ray, as in Raycasting), it can collide with the edges of a mesh. When this happens, the hit.normal returned is the interpolated value of the normals of the two faces joined by the edge. Since CapsuleCasting is just casting with a swept sphere, it has the same issue.

In addition to the above tools to detect collisions, Unity also provides a Rigidbody.SweepTestAll method. After testing it, it seems to have identical behavior to the cast methods; faces contained within the collider are not detected by the sweep. I tend to prefer using CapsuleCastAll and SphereCastAll over SweepTestAll, as they offer more options (like being able to define your own origin); however, SweepTest is useful for box-shaped characters, as there is no BoxCast method.

Mesh Colliders

Before we go any further, I want to talk a little bit about mesh colliders. Up until now we’ve focused primarily on the primitive colliders (Box, Sphere, Capsule, etc.). However, in practice the overwhelming majority of your level’s collision geometry is going to be composed of mesh colliders.

Unlike the primitive colliders, which have their collision representation built from a variety of preset parameters (radius for spheres, height for capsules, and so on), a mesh collider’s collision data is unsurprisingly formed from a 3D mesh. Mesh colliders come in two flavors: Convex and Concave. This article does a terrific job explaining the difference between them.

Since convex hulls must be fully enclosed and Unity limits their size to 255 polygons, they are not ideal for representing intricate level geometry. Concave hulls can be of any size, but they come with the drawback of no longer being an enclosed object; instead of being a solid volume, they are essentially just a surface of triangles. This means we can no longer detect if an object is “inside” a concave mesh, since there is no “inside” to check against. This brings us to the problem of phasing. Phasing occurs when a character is moving fast enough (or a wall collider is thin enough) that in two frames he travels from one side of the wall to the other, effectively passing through it. Concave mesh colliders amplify this problem by no longer being able to detect player collisions occurring “inside” them, making it easy for the player to phase into the mesh.

Controller movement over one frame. His speed is great enough that neither his initial nor final position makes contact with the thin mesh collider wall in his way

Effectively, if we are directly beside a triangle on the surface of a mesh collider with its normal facing in the exact inverse direction of our movement vector, the furthest we can move in a single frame without phasing is exactly twice our radius. Considering collision resolution tends to place the character directly flush with the wall, this is a situation that is encountered fairly often. If your character controller represents a character about 2 meters high (represented as generic units in Unity), your radius is typically in the ballpark of 0.5 meters (units), which means your character can move at most 1 unit per frame. If your game runs at 30 frames per second, you can move at most 30 meters per second, or 108 kilometers per hour. This is pretty damn fast, but if you’re building the latest and greatest Sonic the Hedgehog title it may not be fast enough.

With the controller directly flush with the surface, it cannot move more than twice its radius or it will phase through the wall

One solution to this problem is to run your controller’s physics more than once per frame. Alternatively, we can use CapsuleCastAll to check if there are any colliders between our initial and final position every frame. We’ll explore both these options in future articles where we continue to implement the character controller.
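The second option could be sketched with CapsuleCastAll’s single-hit sibling, CapsuleCast (point1 and point2 are the centers of the capsule’s end spheres; the variable names are illustrative, not from the actual source):

```csharp
// Anti-tunneling sketch: before committing a move, sweep the capsule from its
// current position along the movement vector. If anything lies in between,
// stop flush at the hit instead of teleporting past it.
Vector3 move = velocity * Time.deltaTime;
RaycastHit hit;

if (Physics.CapsuleCast(point1, point2, radius, move.normalized, out hit, move.magnitude))
{
    move = move.normalized * hit.distance; // truncate the move at the obstruction
}

transform.position += move;
```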

Custom Character Controller in Unity: Part 2 – Implementation

Now that we’ve gone over the basics of character controller collision resolution, I’m going to demonstrate how to implement the last presented example (the pushback method) into Unity.

To start off, make sure you have Unity downloaded and installed. For this article I am using Unity 4.3.4f1. (To check your version of Unity, go to Help > About Unity…) Open an existing project or create a new one for this tutorial. Create a new scene, and create a Cube and a Sphere Game Object within it. Although we’ll eventually move on to using a Capsule shape for our controller, we’ll start with a Sphere to keep things simple. Rename the Sphere to Player and the Cube to Wall. Change the Wall’s scaling factor to 6 on each axis. To ease visualization, I also added a blue transparent material to the player and a green transparent material to the wall. Remove the Sphere Collider component from the player.

This sure beats making those dumb diagrams in Photoshop

Create a new C# script and name it SuperCharacterController.cs, to express our dominance as the alpha character controller. Assign this script to our player, and then copy and paste the following code into it:

using UnityEngine;
using System;
using System.Collections.Generic;

public class SuperCharacterController : MonoBehaviour {

 [SerializeField]
 float radius = 0.5f;

 private bool contact;

 // Update is called once per frame
 void Update () {

 contact = false;

 foreach (Collider col in Physics.OverlapSphere(transform.position, radius))
 {
 Vector3 contactPoint = col.ClosestPointOnBounds(transform.position);

 Vector3 v = transform.position - contactPoint;

 transform.position += Vector3.ClampMagnitude(v, Mathf.Clamp(radius - v.magnitude, 0, radius));

 contact = true;
 }
 }

 void OnDrawGizmos()
 {
 Gizmos.color = contact ? Color.cyan : Color.yellow;
 Gizmos.DrawWireSphere(transform.position, radius);
 }
}

…and that’s all, really. Run the project and open the Scene window while it’s still running. Drag the player around the edges of the wall and attempt to slowly push him into it. You’ll notice the wall resists, and keeps the player flush against its edge. So what are we actually doing here?

Physics.OverlapSphere returns an array of Colliders that are contacted by the sphere. It’s a great function in that it doesn’t come with any of the caveats of the other various methods in the Physics class (which we’ll inspect in more detail later). You define an origin and a radius and it gives you the colliders, no frills.

With any collisions detected, we now need to perform resolution. To retrieve the closest point on the surface of the box collider, we use the ClosestPointOnBounds method. We then take a vector that points from the contactPoint to our location. The vector’s magnitude is then clamped and our position is “pushed” out of the collider the proper amount.
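To make the clamping arithmetic less opaque, here is a worked example with hypothetical numbers:

```csharp
// Suppose radius = 0.5 and the nearest point on the wall is 0.3 units away,
// so v points from the wall towards us with magnitude 0.3.
//
// Mathf.Clamp(radius - v.magnitude, 0, radius) → Clamp(0.5 - 0.3, 0, 0.5) = 0.2
// Vector3.ClampMagnitude(v, 0.2)              → a vector of length 0.2 along v
//
// The player is therefore pushed 0.2 units away from the wall: exactly the
// penetration depth, leaving the sphere flush against the surface.
transform.position += Vector3.ClampMagnitude(v, Mathf.Clamp(radius - v.magnitude, 0, radius));
```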

You’ll also notice that I implement OnDrawGizmos so that it’s easy to see when the OverlapSphere is colliding with an object.

Two frames demonstrating the collision being detected, and then resolved

Fairly simple. Unfortunately our success up until this point has been…an illusion. Create a new class named DebugDraw.cs, and add in the following code.

using UnityEngine;
using System.Collections;

public static class DebugDraw {

 public static void DrawMarker(Vector3 position, float size, Color color, float duration, bool depthTest = true)
 {
 Vector3 line1PosA = position + Vector3.up * size * 0.5f;
 Vector3 line1PosB = position - Vector3.up * size * 0.5f;

 Vector3 line2PosA = position + Vector3.right * size * 0.5f;
 Vector3 line2PosB = position - Vector3.right * size * 0.5f;

 Vector3 line3PosA = position + Vector3.forward * size * 0.5f;
 Vector3 line3PosB = position - Vector3.forward * size * 0.5f;

 Debug.DrawLine(line1PosA, line1PosB, color, duration, depthTest);
 Debug.DrawLine(line2PosA, line2PosB, color, duration, depthTest);
 Debug.DrawLine(line3PosA, line3PosB, color, duration, depthTest);
 }
}

This is a useful helper function of mine that allows us to draw markers in the editor from anywhere in the code (as opposed to just the OnDrawGizmos function). Modify the foreach loop to look like this.

foreach (Collider col in Physics.OverlapSphere(transform.position, radius))
{
Vector3 contactPoint = col.ClosestPointOnBounds(transform.position);

DebugDraw.DrawMarker(contactPoint, 2.0f, Color.red, 0.0f, false);

Vector3 v = transform.position - contactPoint;

transform.position += Vector3.ClampMagnitude(v, Mathf.Clamp(radius - v.magnitude, 0, radius));

contact = true;
}

Run the code, and you’ll notice that when a collision happens a large red crosshair is drawn at its location. Now, drag the player inside the wall and observe that the marker follows the player. This isn’t necessarily wrong of the ClosestPointOnBounds function, but to match our pushback model from the previous section we really want a ClosestPointOnSurfaceOfBoundsOrSomething.

I can’t believe this free game engine doesn’t do exactly everything I want all the time

The main issue here is that we cannot properly resolve collisions when our character’s origin is inside a collider, as we do not have a function that will correctly find the nearest point on the surface. For now however, we’re going to move on to the next problem with our current implementation.

Rotate the wall about 20 degrees either way on its y axis and then run the scene. You’ll notice nothing seems to work properly anymore. This is because ClosestPointOnBounds returns the closest point on the axis-aligned bounding box, not the object-oriented bounding box.

Axis-aligned bounding box on the left, with object-oriented on the right

You can already imagine how this problem will extend beyond just Box Colliders. Since the function is only capable of returning the axis-aligned bounding box, it clearly will not give us the closest point on the surface if we’re colliding with any other collider type (Sphere, Capsule, Mesh, etc.). Unfortunately, there is no silver bullet for this issue (or not one I’m aware of); we’ll need to implement a separate algorithm for each collider type.

Let’s start with the easiest first: Sphere Colliders. Create a new Sphere game object in the scene. There are a few steps to finding the nearest point on the surface, none of which are too complicated. To know which direction to push the player, we calculate the direction from the Sphere’s centre to our position. Since every point on a Sphere’s surface is the same distance from its centre, we normalize this direction and then multiply it by the Sphere’s radius and its local scale factor.

The following code implements the above. You’ll notice that in addition to the new method I’ve also added in a conditional check to see what kind of collider our OverlapSphere has detected.

using UnityEngine;
using System;
using System.Collections.Generic;

public class SuperCharacterController : MonoBehaviour {

 [SerializeField]
 float radius = 0.5f;

 private bool contact;

 // Update is called once per frame
 void Update () {

 contact = false;

 foreach (Collider col in Physics.OverlapSphere(transform.position, radius))
 {
 Vector3 contactPoint = Vector3.zero;

 if (col is BoxCollider)
 {
 contactPoint = col.ClosestPointOnBounds(transform.position);
 }
 else if (col is SphereCollider)
 {
 contactPoint = ClosestPointOn((SphereCollider)col, transform.position);
 }

 DebugDraw.DrawMarker(contactPoint, 2.0f, Color.red, 0.0f, false);

 Vector3 v = transform.position - contactPoint;

 transform.position += Vector3.ClampMagnitude(v, Mathf.Clamp(radius - v.magnitude, 0, radius));

 contact = true;
 }
 }

 Vector3 ClosestPointOn(SphereCollider collider, Vector3 to)
 {
 Vector3 p;

 p = to - collider.transform.position;
 p.Normalize();

 p *= collider.radius * collider.transform.localScale.x;
 p += collider.transform.position;

 return p;
 }

 void OnDrawGizmos()
 {
 Gizmos.color = contact ? Color.cyan : Color.yellow;
 Gizmos.DrawWireSphere(transform.position, radius);
 }
}

The astute reader may have noticed that this ClosestPointOn method actually returns the closest point on the surface of the Sphere, unlike the ClosestPointOnBounds which returns the closest point within the bounds. This is handy, but we have a few hurdles to jump before we’re able to make much use of this. For now, let’s tackle the second (and final for today) type of Collider we’ll implement: object-oriented bounding boxes.

Image demonstrates how the vector direction between the origin of the sphere and the location of our controller is extrapolated to give us our nearest point

Our general approach to this algorithm will be to take the input point and clamp it within the extents of the box. This will give us the same behaviour as the built in ClosestPointOnBounds, except we’ll ensure that it works even if the box has a rotation other than the identity.

The extents of the Box Collider are defined as its local size in x, y, and z. In order to clamp our point to the local extents of the Box Collider, we need to transform its position from world coordinates to the local coordinates of the Box Collider’s transform. Once we do that, we can clamp the point’s position within the bounds. To get our final point, we then transform it back into world coordinates. The final code for the day looks like this.

using UnityEngine;
using System;
using System.Collections.Generic;

public class SuperCharacterController : MonoBehaviour {

 [SerializeField]
 float radius = 0.5f;

 private bool contact;

 // Update is called once per frame
 void Update () {

 contact = false;

 foreach (Collider col in Physics.OverlapSphere(transform.position, radius))
 {
 Vector3 contactPoint = Vector3.zero;

 if (col is BoxCollider)
 {
 contactPoint = ClosestPointOn((BoxCollider)col, transform.position);
 }
 else if (col is SphereCollider)
 {
 contactPoint = ClosestPointOn((SphereCollider)col, transform.position);
 }

 DebugDraw.DrawMarker(contactPoint, 2.0f, Color.red, 0.0f, false);

 Vector3 v = transform.position - contactPoint;

 transform.position += Vector3.ClampMagnitude(v, Mathf.Clamp(radius - v.magnitude, 0, radius));

 contact = true;
 }
 }

 Vector3 ClosestPointOn(BoxCollider collider, Vector3 to)
 {
 if (collider.transform.rotation == Quaternion.identity)
 {
 return collider.ClosestPointOnBounds(to);
 }

 return closestPointOnOBB(collider, to);
 }

 Vector3 ClosestPointOn(SphereCollider collider, Vector3 to)
 {
 Vector3 p;

 p = to - collider.transform.position;
 p.Normalize();

 p *= collider.radius * collider.transform.localScale.x;
 p += collider.transform.position;

 return p;
 }

 Vector3 closestPointOnOBB(BoxCollider collider, Vector3 to)
 {
 // Cache the collider transform
 var ct = collider.transform;

 // Firstly, transform the point into the space of the collider
 var local = ct.InverseTransformPoint(to);

 // Now, shift it to be in the center of the box
 local -= collider.center;

 // Clamp the local point within the box's extents
 var localNorm =
 new Vector3(
 Mathf.Clamp(local.x, -collider.size.x * 0.5f, collider.size.x * 0.5f),
 Mathf.Clamp(local.y, -collider.size.y * 0.5f, collider.size.y * 0.5f),
 Mathf.Clamp(local.z, -collider.size.z * 0.5f, collider.size.z * 0.5f)
 );

 // Now we undo our transformations
 localNorm += collider.center;

 // Return resulting point
 return ct.TransformPoint(localNorm);
 }

 void OnDrawGizmos()
 {
 Gizmos.color = contact ? Color.cyan : Color.yellow;
 Gizmos.DrawWireSphere(transform.position, radius);
 }
}

You’ll also notice that I made a few changes to the main collision loop, allowing us to call either the axis-aligned or object-oriented ClosestPointOn in the same line. I say “I made a few changes” in a fairly disingenuous sense in that I really mean “I slightly modified the code that I copied and pasted,” as most of the implementation here is taken from fholm’s RPGController package. You can open the RPGCollisions class within it to check out some of the other alterations I made: namely, updating some deprecated code and replacing the matrix multiplications with the more user friendly TransformPoint methods.

Cut me some slack I got like a C minus minus in Linear Algebra I need all the user friendliness I can get.

This wraps up the first part of our implementation. In future articles I’ll address some of the shortcomings with Unity’s physics API that I’ve alluded to, and begin to outline various components of our ideal Character Controller we will build.

Maybe. Who knows. I’m just making this stuff up as I go.

References

The majority of the code from this article comes from fholm’s RPGController package, specifically the PushBack method from RPGMotor.cs and the closest point methods from RPGCollisions.cs.