Custom Character Controller in Unity: GitHub!

Due to multiple requests, I’ve built a GitHub repository for the Super Character Controller project. It can be accessed here.

Octocat, GitHub's logo

This thing gives me nightmares

The project currently on the repo is extremely similar to the 2.0 release from several weeks ago, the main difference being that the demo models are now in the .FBX format rather than the .max format. This ensures that users without 3ds Max will be able to download and contribute to the project without any errors.

This is the first open source project I’ve created, so please bear with me while I iron out any details I’ve missed and get the hang of this. As always, any contributions, comments or tips and tricks are appreciated.

Custom Character Controller in Unity: Release 2.0.0

Immediately after completing the Mario 64 HD demo project I began working on the second version of the Super Character Controller. For over a month now I’ve had a download link up on the controller’s page for a Beta version of the project, which allowed users to see what I was working on at the time and contribute to the latest version.

As per usual, the Super Character Controller can be downloaded from its main page.

Just a blue man in a technicolour world

2.0.0 adds a number of features that are discussed in detail in the accompanying documentation, changelog, or previous blog posts, but below is an outline of the major changes.

New Grounding Techniques

The grounding algorithm in version 1 of the controller had numerous problems, most notably that it was inaccurate and inconsistent when dealing with several edge cases. Most of these have been resolved, with the solutions described in a previous post here.

Binary Space Partitioning Tree

Replacing the RPGMesh class is the BSPTree. BSPTree recursively partitions a mesh’s triangles into two sets using a series of partition planes. The component is complete, but not necessarily fully optimized or the best possible implementation of a BSPTree. More can be read in the Future Work section of the documentation.

Other updates can be viewed on the controller’s main page, as well as a list of contributors to the project. These changes are also shown in the README in the package, and all code and libraries used are sourced in the PDF document also included. If I missed anyone in the changelog or anyone’s code in the sources section, let me know so I can fix it.

There’s been a lot of interest in having a Git repo set up for this project, so now that version 2 is out it’s definitely on my radar, and hopefully something I can get set up soon. I’ve never set up and maintained an open-source project, so if anyone has any suggestions on best practices or tips n’ tricks, please leave them in the comments below.

Custom Character Controller in Unity: Part 6 – Ground Detection

Despite having written five posts about the Super Character Controller, up until now I’ve only briefly touched on the issue of ground detection. Knowing what your controller is standing on is a hugely important topic, since a great many of your player’s actions will depend on what kind of ground he is standing on, if any at all. Good ground detection can make all the difference between a smooth playing experience and a terrible one.

An example of bad ground detection

So what do we want to know about the ground beneath our character? We definitely need to know how far away it is. We’ll want to know if our character has his “feet” touching the surface of the ground or if he’s 6 meters above it. We also will want to know the location of the point on the surface of the ground directly below us, as this is important for ground clamping, highlighted in a previous post. Thirdly, we will want to know the normal of the surface directly below us, a direction represented by a Vector3. And lastly, we will need to know the GameObject that the ground belongs to, so we can retrieve any attached components that may be relevant to grounding. (In the Super Character Controller, we retrieve a SuperCollisionType component that describes properties on the object.)

Looking at an earlier post surveying the Unity Physics API, the most obvious solution to our problem comes in the form of Physics.Raycast. We can fire a ray directly downwards at the ground below us. Through the RaycastHit structure we can retrieve the contacted point, the distance traveled, the normal of the surface, and the object the ray collided with. This seems to fulfill all of our requirements at first glance, but looking more closely reveals a significant problem.
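
As a rough sketch of the raycast approach (the field names below are illustrative, not the controller’s actual API), a single downward ray already hands us most of the data listed above:

using UnityEngine;

public class RaycastGroundProbe : MonoBehaviour
{
    // Illustrative fields only; the Super Character Controller stores its ground data differently.
    public float groundDistance;
    public Vector3 groundPoint;
    public Vector3 groundNormal;
    public GameObject groundObject;

    void Update()
    {
        RaycastHit hit;

        // Fire a thin ray straight down from the controller's origin.
        if (Physics.Raycast(transform.position, Vector3.down, out hit, Mathf.Infinity))
        {
            groundDistance = hit.distance;
            groundPoint = hit.point;
            groundNormal = hit.normal;
            groundObject = hit.collider.gameObject;
        }
    }
}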

Our character controller is represented by a series of spheres to form a capsule, which means that when he is standing on perfectly flat ground, the nearest point on the surface of the ground will be exactly radius distance from the center of the lower sphere in the capsule. This is fine, but problems appear once the character is standing on a slope. If we now fire a raycast directly downwards (as before), the point we contact is no longer the closest point to our “feet.” This will cause issues with our ground clamping method (among other things).

Raycast

Ray is cast directly downwards from the center of the lowest sphere in the controller. As the controller is standing on a slope, the point directly below is NOT correct, and when the controller is clamped to that point it introduces an error where the controller is slightly clipping into the slope. For steeper slopes, this error will become more pronounced

Luckily, we have a savior in the form of Physics.SphereCast. Instead of casting a thin ray downwards, we cast a sphere (if it wasn’t already clear enough from the name). This solves the above issue by ensuring that our controller is properly aligned to the surface below us, regardless of its normal.
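
A minimal sketch of the same probe using a SphereCast, assuming the controller’s radius is exposed as a field (the names here are placeholders):

using UnityEngine;

public class SphereCastGroundProbe : MonoBehaviour
{
    // Placeholder value; in the actual controller this comes from the controller's own sphere definitions.
    public float controllerRadius = 0.5f;

    void Update()
    {
        RaycastHit hit;

        // Cast a sphere the size of the controller's "feet" straight down. Unlike a thin ray,
        // the contact point hugs the slope the same way the bottom of the capsule does.
        if (Physics.SphereCast(transform.position, controllerRadius, Vector3.down, out hit, Mathf.Infinity))
        {
            Debug.DrawLine(transform.position, hit.point, Color.green);
        }
    }
}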

While SphereCast works extremely well in representing our controller’s “feet,” it comes with a few issues. The first is one you will encounter regardless of what you’re using SphereCast for: when the SphereCast contacts the edge of a collider (rather than hitting a face directly), the hit.normal that is returned is the interpolation of the two normals of the faces that are joined at that edge. You can think of it as similar to the Vector3.Lerp method, with the normals of the two faces being your from and to values. The interpolation value is determined by where on the cast sphere the contact took place: if the edge hit dead center, the from and to would be equally weighted at 0.5, and the weighting shifts toward one face or the other as the contact point moves along the edge.

Animation demonstrating SphereCast hit.normal interpolation. Yellow wire sphere is the origin point; red is the target. The green vector represents the hit.normal. Notice how as the SphereCast moves over the edge of the box, the hit.normal is interpolated between the two normals of the joining faces

I discovered fairly early on that it was necessary to know the actual normal of the surface you are standing on, rather than just the interpolated normal. To solve the problem presented by SphereCast’s interpolation, I would follow up the SphereCast with two separate Raycasts aimed at each of the two joining faces which would retrieve for me the correct normals. (In the Super Character Controller’s ProbeGround method, these are called nearHit and farHit, representing the normals of the closest and furthest face from the center of the controller, respectively.)
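
One possible way to do this (a sketch of the idea, not the controller’s exact ProbeGround code) is to fire two short rays just to either side of the contact point, so that each one lands cleanly on one of the joining faces:

using UnityEngine;

public static class GroundNormals
{
    // Given a SphereCast hit whose normal may be an edge-interpolated value, fire two thin rays
    // just to either side of the contact point to recover the true normals of the two joining faces.
    public static void TrueNormals(Vector3 controllerCenter, RaycastHit sphereHit,
                                   out Vector3 nearNormal, out Vector3 farNormal)
    {
        const float nudge = 0.01f;      // small offset along the ground plane (illustrative value)
        const float castHeight = 0.25f; // start the rays a little above the surface (illustrative value)

        // Horizontal direction from the controller toward the contact point.
        Vector3 toHit = sphereHit.point - controllerCenter;
        toHit.y = 0.0f;
        toHit.Normalize();

        // Fall back to the interpolated normal if either ray misses.
        nearNormal = sphereHit.normal;
        farNormal = sphereHit.normal;

        RaycastHit hit;

        // Ray slightly on the near side of the edge (closer to the controller).
        Vector3 nearOrigin = sphereHit.point - toHit * nudge + Vector3.up * castHeight;
        if (Physics.Raycast(nearOrigin, Vector3.down, out hit, castHeight * 2.0f))
            nearNormal = hit.normal;

        // Ray slightly on the far side of the edge (further from the controller).
        Vector3 farOrigin = sphereHit.point + toHit * nudge + Vector3.up * castHeight;
        if (Physics.Raycast(farOrigin, Vector3.down, out hit, castHeight * 2.0f))
            farNormal = hit.normal;
    }
}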

The next issue with SphereCast is specific to using it for ground detection, but is highly essential to ensuring proper accuracy. We’ve been assuming that anything the SphereCast collides with is valid ground that our character can stand on (or slide on, in the case of some slopes in certain games). In practice, however, this is not always true! We can reasonably assume that the physical surfaces of a game world (that the controller collides with) can be divided into ground and walls, with the idea that only the ground surfaces should be detected by ground probing methods (like the SphereCast defined above). The simplest way to partition the world into ground and walls would be to say that any surface whose angle (relative to the world’s Vector3.up) is less than 90 degrees is ground, while surfaces angled at 90 and above are walls and cannot be detected as ground. Therefore we should ensure that we never mistakenly detect a wall in our ground probing method. Naively, it looks like this problem is implicitly solved by the nature of our ground probing: we are casting a sphere directly downwards, which means it should (in theory) never contact a 90 degree wall. However, very often the normals of the walls in a game world will only be near 90 degrees, or somewhere between 85 and 90. We want to treat these 85 degree surfaces as walls, meaning that they should be ignored by our SphereCast.

SphereCastWallAngle

Our controller is flush up against an 85 degree angle wall. Due to this slight angle in the wall, our SphereCast’s contact point is at the yellow X marker, rather than the surface directly below us. This will cause our controller to believe he is standing on a steep slope, rather than safely on flat ground

The most obvious solution would be to use Physics.SphereCastAll. It would ideally collide with both the steep wall and the flat ground, and we could iterate through all contact points to decide what we are standing on. Unfortunately, SphereCastAll only picks up a single contact point per object, so if the ground and wall are part of the same mesh collider, this solution will not work.

Trying something simpler, we could just reduce the radius of the SphereCast by a small amount to account for the above error. And for very slight errors, this does solve some problems, but not all. We still need to find a way to reliably retrieve the proper ground when flush up against a steep slope.

[ Note: In the Super Character Controller, the angle of a “steep slope” is defined as the value StandAngle in the SuperCollisionType component attached to each object the controller collides with. ]

To do this, let’s make the assumption that there is some sort of ground beneath us, and that if there was not a steep slope blocking us our SphereCast would have contacted it. To retrieve this ground, we can Raycast down the steep slope our SphereCast hit. This is primarily to verify that there is some sort of ground there, and (more importantly) to retrieve its normal.

RaycastDownSlope

Initial SphereCast contact point marked in yellow. Because we have contacted a steep slope, we Raycast (shown in red) down the slope to detect the ground that is actually beneath us (Raycast contact point marked in purple)
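
Here is a rough sketch of that probe, under the assumption that we already know the SphereCast hit a surface too steep to stand on (the offset and cast distance are made-up values):

using UnityEngine;

public static class SteepSlopeProbe
{
    // If the SphereCast hit a surface too steep to stand on, raycast down along the slope
    // to find the ground that is actually below the controller. A sketch of the approach
    // described above, not the controller's exact implementation.
    public static bool GroundBelowSteepSlope(RaycastHit steepHit, out RaycastHit groundHit)
    {
        // Direction pointing straight down the slope (Vector3.down projected onto the slope's plane).
        Vector3 n = steepHit.normal;
        Vector3 downSlope = (Vector3.down - Vector3.Dot(Vector3.down, n) * n).normalized;

        // Start just off the surface so the ray does not immediately re-hit the slope.
        Vector3 origin = steepHit.point + n * 0.05f;

        return Physics.Raycast(origin, downSlope, out groundHit, 5.0f);
    }
}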

This tells us what’s below us, but because we used a Raycast, and not a proper SphereCast, we once again run into the problem presented earlier, where the shape of our ground detection does not correctly align with the shape of the bottom of the controller. Given the information we have (the normal of the surface below us), can we somehow transform our Raycast data into an approximation of SphereCast data? Yessir.

When you SphereCast downwards from the bottom of the controller and contact a surface, there exists a relationship between the normal of the surface and the point on the sphere where the contact takes place. When we Raycast down the steep slope we have the normal of the proper ground surface, so it’s our job to find the point along the bottom of the controller that a SphereCast would connect with. We also have the planar direction (from top view) towards the point (given by the direction of the slope below us), allowing us to tackle this problem in 2-dimensional space (finding a point on a circle, as opposed to a sphere) and then convert our result to 3-dimensional space.

spherecast_point

Animation showing how the contact point of a SphereCast is directly related to the normal of the surface it collides with. SphereCast origin in yellow, contact in red, with the contact point marked in light blue. Notice how as the slope becomes steeper the point moves further up and along the edge of the red circle

Since we are attempting to find a point in 2d space, we are looking for two values, which we can call x and y. If calculated properly, x and y will describe a point on the edge of our circle. Referring to the diagram above as an example, we would be given the normal of the ground surface, and our job would be to calculate the position of the light blue point.

Luckily, this really isn’t all that difficult. Referring back to grade 8 math, we can use the Sine and Cosine functions to calculate the x and y positions, with the normal of the surface being the angle we pass in.

x = Mathf.Sin(groundAngle);
y = Mathf.Cos(groundAngle);

Pretty handy. Note that in Unity the Sine and Cosine methods require you to pass in the angle in radians, so ensure you convert your angles beforehand.

SinAndCosine

Calculating the contact point (light blue) from the angle of the green slope using the Sine and Cosine functions

We can now use these values to find our point by adding them to the position of our controller (multiplied by its radius). Effectively, we’ve now converted our Raycast data into SphereCast data, and nobody is the wiser.

[ Note: In the Super Character Controller, the method for approximating SphereCast data based off a surface normal is called SimulateSphereCast. ]
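
Putting the pieces together, a sketch of the idea behind SimulateSphereCast might look like the following (the method and parameter names are illustrative, not the controller’s actual signature):

using UnityEngine;

public static class SphereCastApproximation
{
    // Given the normal of the ground below (retrieved by the raycast down the slope) and the
    // bottom sphere of the controller, approximate the point on that sphere a SphereCast would
    // have contacted. A sketch of the idea only.
    public static Vector3 ContactPointOnSphere(Vector3 sphereCenter, float radius, Vector3 groundNormal)
    {
        // Angle of the slope relative to flat ground, in radians (Unity's trig functions expect radians).
        float groundAngle = Vector3.Angle(groundNormal, Vector3.up) * Mathf.Deg2Rad;

        // 2D offsets on the circle: x is the sideways offset, y the downward offset.
        float x = Mathf.Sin(groundAngle);
        float y = Mathf.Cos(groundAngle);

        // Planar (top-down) direction from the sphere's center toward the contact point,
        // i.e. horizontally into the slope.
        Vector3 planar = new Vector3(-groundNormal.x, 0.0f, -groundNormal.z).normalized;

        // Combine the offsets, scaled by the sphere's radius.
        return sphereCenter + planar * (x * radius) + Vector3.down * (y * radius);
    }
}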

This sums up the current technique I am using for ground detection (in the Beta controller). Unlike previous articles, the question of finding a solution to the problem of detecting ground is fairly open; the above is by no means the optimal solution, although I’ve found it works very well in practice. I plan to write a follow-up to this showing how to actually use the data we’ve worked so hard to retrieve, since it’s a somewhat involved process.

Super Mario 64 HD – Completed!

Due to requests from Nintendo, this project is no longer available. Head over to the Mario 64 HD page to read more. The original blog post is unaltered below. Below is a video of the mod being featured on cobanermani456‘s channel.

StartingArea1

Bob-ombs are my favourite Mario character, closely followed by the loyal Bob-omb Buddies

Given the amount of time this took to build, I’m going to do away with the usual preamble typically accompanying these posts (or is it too late already?). For the time being, I’m finished working on both the Super Character Controller as well as the demo for it, Mario 64’s Bob-Omb Battlefield. Since the Super Character Controller is (was?) the primary focus of this project, I’m planning to post a write-up and retrospective on it, including what could be done to improve it in the future. Both projects include a PDF fully documenting them.

In addition to using the Super Character Controller library (written by me, but adapted from code by fholm), Mario 64 HD also uses a couple other libraries from the community. All references to these (as well as sources for any art and sound assets I did not make myself) can be found on the downloads page, but since some people may never make the pilgrimage there from this post I figured it would be nice to include them here. Pixelplacement’s iTween was used to help animate the rolling metal balls as they path around the mountain. Since Unity doesn’t expose the InputManager to be modified at runtime (another one of its lovable quirks), a heavily modified version of cInput v1.4 was used. Past version 2, cInput is no longer free, but since it looks like the author has been working on it for a while I suspect there are tremendous improvements from 1.4, which wasn’t all that difficult to integrate anyways. Probably worth the $30 on the asset store!

BobOmbValley1

Most of the art assets that were not constructed by me are from Mario Galaxy, but I did all the animations for them. Sound assets likewise were primarily ripped from different Mario games, or acquired from freesound.org, a very useful (and free, in case you didn’t pick up on that from the name) online sound library. If anyone sees (or hears!) any assets in the project that they themselves made, please contact me so I can give the proper credit. I imported the project to Unity 5 and fixed all the errors that cropped up, but none of this project was developed using any of the new tools available, like Global Illumination or Render To Texture. For anyone wondering what tools I used, they were as follows: Unity (duh), Photoshop, 3ds Max, and Adobe Audition. I used Adobe Premiere and Fraps to make the trailer above, as well as to capture videos from the original Mario 64 to use as reference for animations. I used the N64 emulator Project64 for this, which I also used to capture sounds directly from the game. All of the mesh files are currently in the .max format, so while you will be able to open the project without issue, unless you have 3ds Max you’ll be unable to edit the files.

GoombaPlateau

get rekt m8

I probably am not going to expand on this project any further, since it would be insanely time consuming, and my primary motivation to build it was to drive development on the Super Character Controller and provide a demo project for it. That said, if anyone wants to continue working on this, feel free! The project is open source and can be used for anything you like, short of selling it. If anyone is interested in playing the original (and superior) version of Mario 64, it’s available on the Nintendo 64 as well as the Wii’s Virtual Console.

Custom Character Controller in Unity: Part 5 – Release 1.0.0

Since it seems like school is slowly but surely taking over my life (for the time being…) I figured it would be best to release the Super Character Controller package before I complete the Mario 64 demo project, since at this point an ETA for that is pretty far off.

I have added two ways to download the controller: you can either get a .zip file containing an example project or a .unitypackage of the essential scripts. The example project includes a demo scene with a (very basic) implementation of the Super Character Controller™.

Demo project example scene

Just a blue man in a green world…

Download the Super Character Controller

Regardless of whether you open the demo project or import the .unitypackage, you’ll have a folder containing all the essential code, named SuperCharacterController®. Inside this folder there are: the RPGController folder, containing classes written by fholm used to build mesh trees; a README file; the Math3d class (by BitBarrelMedia); the DebugDraw class (by me); and finally a Core folder that holds all the character controller classes.

You shouldn’t ever need to touch anything in the RPGController folder unless you plan on doing some rewriting of the mesh trees. Math3d is hugely useful and is widely used inside the controller classes. DebugDraw is a class I wrote to streamline drawing visual debugging cues to the screen, and has some helpful methods that will draw vectors, markers, etc.

Inside Core there are five classes. SuperMath is a static class where I dump any useful math functions I come up with that are not represented in the standard Mathf. SuperCollider is another static class, with its only current method being (three variations of) a tool to find the closest point on the surface of a (Sphere, Box or Mesh) collider. SuperCollisionType is meant to be attached to all objects the controller collides with, to allow the user to customize certain properties (at what angle on this surface can the controller stand, slide, fall, what type of ground is it, and so on). This is meant to have additional properties written in (although in the future using some sort of inheritance would probably be better). Currently, every object the controller collides with is required to have this attached.
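
As a sketch of the idea (not the component’s actual field list; only StandAngle is referenced elsewhere in these posts, the rest are illustrative), such a per-surface component might look like this:

using UnityEngine;

// Mirrors the role of SuperCollisionType: per-surface properties the controller reads during collision.
public class SurfaceProperties : MonoBehaviour
{
    // Maximum slope angle (in degrees) the controller can stand on.
    public float StandAngle = 80.0f;

    // Illustrative extras: the angle at which the controller starts sliding,
    // and a label describing what kind of ground this is.
    public float SlideAngle = 55.0f;
    public string GroundType = "Default";
}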

SuperStateMachine is a modified version of the state machine built in the Unity Gems Finite State Machine tutorial. Mine is a bit more stripped down for simplicity but is still hugely powerful. Having an easy to implement and use state machine is by far one of the most important components of game development, especially when it comes to prototyping character movement and actions. This state machine is written to function exclusively with the SuperCharacterController. Characters that use the state machine are implemented as a subclass of the SuperStateMachine. I typically use a naming scheme following the pattern “CharacterNameMachine.” For example, my Mario 64 project has classes named MarioMachine, GoombaMachine, BobombMachine, which all inherit from the SuperStateMachine.

[ EDIT: The above link previously pointed to unitygems.com, which as of Jan 1st, 2015 displays only “pageok” on its home page. As the page is clearly not okay despite its assurance, I changed the link to instead point to a web archive of Unity Gems ]
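
To illustrate the one-machine-per-character pattern without reproducing the SuperStateMachine’s actual API, here is a bare-bones, enum-driven machine in the same spirit (all names and inputs are illustrative):

using UnityEngine;

public class ExampleMachine : MonoBehaviour
{
    enum States { Idle, Walk, Jump }

    States currentState = States.Idle;

    void Update()
    {
        switch (currentState)
        {
            case States.Idle:
                // Transition out of Idle when movement input arrives.
                if (Input.GetAxis("Horizontal") != 0.0f)
                    currentState = States.Walk;
                break;
            case States.Walk:
                // Jump input moves us to the Jump state.
                if (Input.GetButtonDown("Jump"))
                    currentState = States.Jump;
                break;
            case States.Jump:
                // Return to Idle once we land (grounded check omitted in this sketch).
                currentState = States.Idle;
                break;
        }
    }
}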

Finally, we have the man of the hour, the SuperCharacterController©™®. In general (or at least initially) you won’t need to do much to this component other than add it to your objects. It broadcasts a message to all other scripts in your object to call a function named “SuperUpdate”, where you can perform all your character logic and movement. Much like the Unity character controller, you’ll probably want to cache a reference to the Super Character Controller in your controller scripts. This way, you can access certain public members (controller height, radius, etc.) and public methods (to disable and enable clamping, ignore colliders, and so on). For a more in depth description, see the example attached to the demo project.
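
As a quick, hedged sketch of what a script driven by the controller might look like (the movement logic is made up; only the cached reference and the SuperUpdate callback come from the description above):

using UnityEngine;

public class ExampleCharacter : MonoBehaviour
{
    // Cached reference to the SuperCharacterController on the same object, so we can
    // reach its public members and methods later.
    private SuperCharacterController controller;

    public float walkSpeed = 4.0f;

    void Start()
    {
        controller = GetComponent<SuperCharacterController>();
    }

    // Called via the controller's broadcast each frame; all character logic and movement
    // goes here instead of in Update().
    void SuperUpdate()
    {
        Vector3 input = new Vector3(Input.GetAxis("Horizontal"), 0.0f, Input.GetAxis("Vertical"));

        transform.position += input * walkSpeed * Time.deltaTime;
    }
}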

I tried to test this package as much as I possibly could, but this is the first time I’ve really distributed a project this large, so hopefully it will work fine on everyone’s different machines and software. If anyone runs into any errors, please post them below as a reply.

And no, SuperCharacterController isn’t actually trademarked/reserved/copytradereservemarked.

Super Mario 64 HD! – Custom Character Controller Update

It’s been awhile since I’ve posted on here, and for good reason! To ensure that my custom Unity character controller is able to meet a wide variety of use cases, I figured it would be best to create a demo character implementing the tool. I wanted to pick a character that would be reasonably complex, since I figured a simpler one wouldn’t highlight its features (or help me discover its problems) as well. To that end, I decided to base my implementation on…

Super Mario 64 cover art

The best

Super Mario 64! Aside from being one of the best games of all time and one of my personal favourites, Mario boasts a wide range of moves that would test the limits of my Character Controller. Initially, I planned to grab a Mario model from somewhere, do a couple quick animations and build a controller that would implement a fairly small subset of his moves. However, things quickly got out of hand…

Super Mario 64...now in HD

pls no c&d ninty

…and before I knew it, I had added in virtually all of the moves from the original game, a fully animated Goomba and Bob-Omb, and had begun to build an HD version of Bob-Omb Battlefield, the game’s first level.

Current progress on Bob-Omb Battlefield Redux. Got a ways to go…

In addition to being a huge time sink, this project is serving two purposes: demonstrating the character controller, and helping me learn the ins and outs of 3D art. The Mario model I’m using is borrowed from Mario Galaxy, but it didn’t come with bones, rigging or animations, leaving the task to me. This was made infinitely easier by this terrific tutorial I found on character rigging using 3ds Max.

Super Mario, master hula-hooper

I plan to release the project on this page sometime in the near future, and hopefully it will help others build their projects!

Editor shot of the same scene as above. SuperCharacterController with three debug spheres enabled on Mario.

Custom Character Controller in Unity: Part 4 – First Draft

After nearly a month of silence, the wait is finally over! No longer will you have to endure the hardship of waking up every morning, immediately opening my illustrious blog only to suffer a crushing disappointment deep in your soul. …Anyways. In the intermediate time since the last post, I’ve built a first draft of the character controller, and I’ll be going over its implementation in this article. This post will concern itself solely with the main controller class—in the next article, I will go over an application of the controller in a demo character that I’ve built.

The controller itself is a single C# script, which can be downloaded here. As with the previous Pushback example (from the second part of this series), I am making use of some of fholm’s RPG classes, as well as a modified version of a class he uses to find closest points on the surfaces of colliders, called SuperCollisions.cs. In addition, for debugging purposes I typically use my own DebugDraw.cs class to draw markers and vectors on the screen. Finally, I use lots of the 3D math functions found in this class, by Bit Barrel Media.

In the future, it’ll probably be simpler for me to just post a Unity project, but since we’re mostly focusing on a single class today, this is easier. I’ll go over the basic structure of the controller, and then iterate through each of its features in more detail.

[ EDIT: past Erik was spot on. You can now get the controller through the Downloads page. Note that the code linked to above is an earlier version of the character controller, which I am leaving posted here for clarity and learning purposes. If you are planning on using the controller in your project, please head over to the Downloads page for the up-to-date and complete code along with a sample project ]

The controller goes through three primary phases: Movement, Pushback and Resolution. In the Movement phase, we calculate all of our character’s movement logic and modify his position accordingly. We then run our Pushback function, ensuring that he is not intersecting any of our geometry. Finally, we run any necessary Resolution steps. These could include limiting the angle of slope our character could move up, clamping him to the ground, etc.

Figure showing the movement and pushback phases of the controller
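
As a structural sketch (the method names here are placeholders, not the controller’s real ones), the main loop amounts to:

using UnityEngine;

public class PhaseLoopSketch : MonoBehaviour
{
    public Vector3 debugMove;

    void Update()
    {
        // 1. Movement: run character logic and move the transform.
        transform.position += debugMove * Time.deltaTime;

        // 2. Pushback: push the controller out of any geometry it now intersects.
        Pushback();

        // 3. Resolution: reactionary steps such as slope limiting and ground clamping.
        Resolve();
    }

    void Pushback()
    {
        // Check the controller's spheres against nearby colliders and resolve penetrations
        // (detailed in the Pushback phase below).
    }

    void Resolve()
    {
        // Slope limiting, ground clamping, and any other clean-up logic.
    }
}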

Before we get started I should note that, unlike the previous controller, this one is built using three OverlapSpheres, placed one above the other, to simulate the shape of the capsule. The controller is built to work with any number of spheres—tall slim characters may require more than three, while short squat ones may need fewer. Let’s take a look at the code now. The first phase within our controller is fairly simple for the time being, consisting of a single instruction:

transform.position += debugMove * Time.deltaTime;

This allows you to set in the inspector how much the controller will move each frame. When we build our actual character, this line will be replaced with all of our movement logic. For now, it serves as a handy debugging tool.

Phase two is Pushback. Here, our goal is to check if the controller is intersecting any colliders, and if so to then push him to the nearest location on their surface. The basics of how we do this can be seen in the Implementation article I posted earlier. This time around, the algorithm is slightly more complicated. The first half of the method is more or less the same as before; we check the nearest point on the surface of any collider within the OverlapSphere. Next, we need to see which side of the normal the origin of the OverlapSphere is on. We do this by raycasting from the center of the sphere in the direction of the nearest point on the surface. Since a raycast only detects a surface if the normal is facing the cast, whether this cast returns true or false will tell us if the origin is outside or inside the surface, respectively. Note that in the code I use a SphereCast with a very small radius instead of a raycast; this avoids errors when raycasting directly at an edge of a mesh.

The “feet” OverlapSphere of the controller detects a collision with the ramp, finds the nearest point and then raycasts towards it (shown as the red arrow)
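
A sketch of that side-of-the-normal check in isolation (the tiny cast radius is an arbitrary value; the real method works on the controller’s own data):

using UnityEngine;

public static class PushbackChecks
{
    // Determine whether a sphere's origin lies outside or inside a surface by casting toward the
    // closest point on that surface. Backfaces are not hit by casts, so a miss means we are
    // behind the surface (i.e. inside the collider).
    public static bool OriginIsOutside(Vector3 origin, Vector3 closestPoint)
    {
        Vector3 toSurface = closestPoint - origin;
        RaycastHit hit;

        // A SphereCast with a very small radius is used instead of a raycast to avoid misses
        // when aiming directly at a mesh edge.
        return Physics.SphereCast(origin, 0.01f, toSurface.normalized, out hit,
                                  toSurface.magnitude + 0.01f);
    }
}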

Before applying the pushback vector, we do a final check to make sure we’re still colliding with the object. Because the OverlapSphere returns all the collisions first and the pushbacks are then applied one by one, it is possible that, in the case of hitting multiple colliders, a previously applied pushback has already moved the controller enough that it is no longer touching objects the OverlapSphere originally detected. We resolve this by checking the distance between the origin of our sphere and the nearest point on the surface of the collider; if the distance is greater than the radius and we are located “outside” the normal, we know that we are not touching it and were displaced by an earlier collision.

The controller’s lowest OverlapSphere collides with both the green ramp and the blue ground, with the nearest points on their surfaces shown in teal and blue, respectively. The ramp collision pushback is resolved first, causing a side effect where the OverlapSphere is no longer colliding with the blue ground

The third phase is less clear cut than the previous two. It can be defined as doing any “clean-up” or “reactionary” logic. There are two main methods that are executed here: slope limiting and ground clamping. Slope limiting should be familiar to anyone who has used Unity’s built-in controller: if a character attempts to move up a slope that is steeper than a specific angle, he is repelled by the slope as if it were a solid wall, instead of pushed up it.

Ground clamping is not included in the Unity controller, and is fairly important. When moving horizontally over an uneven surface, the controller will not (by itself) follow the geometry of the ground. In the real world, we time our leg movements to allow for each slight increase or decrease in elevation, and gravity takes care of the rest. However, in a game world we need to handle this a bit more explicitly. Unlike the real world, gravity is not a constantly applied force in most controllers. When we are not standing on a surface, we apply acceleration downwards. When we are on a surface, we set our vertical velocity to zero, to represent the normal force exerted by the surface. Because our vertical velocity is zeroed out when standing on a surface, it will take time to accelerate our downwards speed when we walk off said surface. This is fine when we are actually walking off a ledge, but when we’re walking down a slope or over uneven ground, it creates an unnatural bouncing effect. In addition to being a problem visually, this oscillation between grounded and not-grounded is a problem for our actual game logic, since a character’s behavior is typically very different when he is on a surface compared to when he is falling.

The left image shows how the character’s movement follows the uneven surface by ground clamping. On the right, we see how he “bounces” across the surface when clamping is not applied. Each red “X” represents when the downward force of gravity is zeroed out
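
In code, the clamping fix shown on the left of the figure boils down to something like the following (a sketch with illustrative values; when and whether to clamp is discussed next):

using UnityEngine;

public class GroundClampSketch : MonoBehaviour
{
    // Illustrative values; the real controller decides whether to clamp based on its grounding state.
    public float radius = 0.5f;
    public float clampDistance = 0.5f;

    public void ClampToGround()
    {
        RaycastHit hit;

        // SphereCast downwards from the center of the controller's lowest sphere.
        if (Physics.SphereCast(transform.position, radius, Vector3.down, out hit, clampDistance))
        {
            // Snap down so the bottom sphere rests on the contacted surface.
            transform.position -= Vector3.up * hit.distance;
        }
    }
}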

This problem is solved in our reaction phase with the aforementioned “ground clamping,” which, as the name implies, will adjust our character’s position to be in line with the ground by SphereCasting directly downwards from our “feet.” Obviously, there are plenty of times when you do not want to clamp your character to the ground, such as when he is beginning a jump, or far enough above the surface that he should not be counted as standing on it.

You’ll notice that I talk about whether the controller is “grounded” or not an awful lot. You’ll also see the method ProbeGround() be called multiple times throughout the main loop of the controller. Knowing when your character is standing on a surface and when he is not is very important to building a proper controller. I don’t intend to provide the tools to check if a character is “grounded” or not within the main controller class, since this depends greatly on your game’s structure. However, I do provide a method that will detect what is below the player, as well as store it (and some additional useful information) in a variable that is easily accessible. How you use this is up to you, but in the next article in the series I’ll be providing an example character that uses this controller and the data from the ProbeGround() method.

The SlopeLimit method should be easy enough to understand, as its functionality is familiar and I’ve commented it fairly well. (I actually haven’t. But I plan to before I upload the file.) Speaking of familiar functionality…those who know the Unity character controller well have probably identified that my custom controller seems to be lacking a feature: StepOffset. I do intend to tackle this method, but it seems much more complex than I initially expected, or I’m missing a simple solution for it. It’s definitely a “need-to-have,” since it’s pretty essential for most applications.

That pretty much covers the Super Character Controller. Next time, I’ll go over an example character that I’ve built using the controller class detailed today, as well as provide the source code for it. If any of my code doesn’t seem to be working or compiling properly, please contact me so I can fix the error!

Custom Character Controller in Unity: Part 3 – Analysis of the Physics API

Up until this point I’ve made multiple references to some of the Unity Physics API, but we haven’t really explored it in detail. As an astute reader may have guessed from the title of this entry, that’s what we’ll be doing now. We’ll go through the functions available, analyze some of their associated issues, and ways to overcome them.

As per usual I’ve done my best to avoid doing original research and will be making heavy use of this post from fhhoollm.

Fhhoollolllmm!!

The Physics API

With many of the functions simply being variations of each other, it shouldn’t take long to go over the details of the Physics Script Reference. I’m not going to bother talking separately about the methods that have an All variation available, since they are identical except that the non-All version stops immediately at the first contact.

Raycast: Fires a ray in a specified direction for a specified distance (or infinitely far). If an object is contacted, the RaycastHit structure provides useful information about it: where it was contacted, what the normal of the surface was at the contact point, and so on. Because it fires just an infinitely thin ray, this method isn’t particularly useful for collision resolution.

CapsuleCastAll: At first glance this seems ideal for usage with a character controller (due to its capsule shape), and for the most part it is. It is important to note that as this is a cast, it will only detect a collision where the normal of the surface is facing the cast; no backfaces are detected. In addition, the cast does not detect any objects that are within the boundaries of the “capsule” origin of the cast, i.e., it doesn’t detect any objects touching its initial position. This is a drawback we will need to overcome if we want it to be a useful tool for our character controller.

CheckCapsule: Right away we have a candidate for solving the problem stated above. CheckCapsule seems to exactly complement CapsuleCastAll; it will detect all the objects at the initial position of the cast that the CapsuleCast cannot. Unfortunately, it only returns a bool, as opposed to an array of colliders, giving us no information on what objects we actually collided with.

CheckSphere: Same as above, except with a sphere shape.

Linecast: Identical in terms of function to Raycast. Simply a different way of defining the origin, direction, and magnitude of the ray.

OverlapSphere: Now we’re getting somewhere. As far as I can tell, OverlapSphere works exactly as advertised. Bear in mind this note does appear on the docs:

NOTE: Currently this only checks against the bounding volumes of the colliders not against the actual colliders.

…and I really don’t know what this means. I’ve tested it against Box Colliders, Sphere Colliders and Mesh Colliders and it seems to be checking against the actual collider, not just the bounding volume. Note that I am taking bounding volume to mean axis aligned bounding box, and it may mean something different in this case. If not, I’m going to assume it’s a documentation error.

RaycastAll: Same as the Raycast method, except that it does not stop at the first object it contacts.

SphereCastAll: Functions the same as CapsuleCastAll, with the same primary drawback of not detecting objects contained in the sphere defined at the origin of the cast. SphereCast also (like CapsuleCast) does NOT always return the proper normal of the face it collides with. Because it is a sphere that is being cast (rather than an infinitely thin ray in Raycasting) it can collide with the edges of a mesh. When this happens, the hit.normal that is returned is the interpolated value of the normals of the two faces that are joined by the edge. Since CapsuleCasting is just casting with a swept-sphere, it also has the same issue.

In addition to the above tools to detect collisions, Unity also provides a Rigidbody.SweepTestAll method. After testing it, it seems to have identical behavior to the cast methods; faces contained within the collider are not detected by the sweep. I tend to prefer using CapsuleCastAll and SphereCastAll over SweepTestAll, as they offer more options (like being able to define your own origin); however, SweepTest is useful for box shaped characters, as there is no BoxCast method.

Mesh Colliders

Before we go any further, I want to talk a little bit about mesh colliders. Up until now we’ve focused primarily on the primitive colliders (Box, Sphere, Capsule, etc.). However, in practice the overwhelming majority of your level’s collision geometry is going to be composed of mesh colliders.

Unlike the primitive colliders, which have their collision representation built from a variety of preset parameters (radius for spheres, size for boxes, and so on), a mesh collider’s collision data is unsurprisingly formed from a 3D mesh. Mesh colliders come in two flavors: Convex and Concave. This article does a terrific job explaining the difference between them.

Since convex hulls must be fully enclosed and Unity limits their size to 255 polygons, they are unideal for representing intricate level geometry. Concave hulls can be of any size, but they come with the drawback of no longer being an enclosed object; instead of being a solid volume, they are essentially just a surface of triangles. This means we can no longer detect if an object is “inside” a concave mesh, since there is no “inside” to check against. This brings us to the problem of phasing. Phasing occurs when a character is moving fast enough (or a wall collider is thin enough) that between one frame and the next he travels from one side of the wall to the other, effectively passing through it. Concave mesh colliders amplify this problem by no longer having the ability to detect player collisions occurring “inside” them, making it easy for the player to phase into the mesh.

Controller movement over one frame. His speed is great enough that neither his initial or final position make contact with the thin mesh collider wall in his way

Effectively, if we are directly beside a triangle on the surface of a mesh collider with its normal facing in the exact inverse direction of our movement vector, the furthest we can move is exactly equal to twice our radius. Considering collision resolution tends to place the character directly flush with the wall, this is a situation that is encountered fairly often. If your character controller is representing a character of about 2 meters (represented as generic units in Unity) high, your radius is typically in the ballpark of 0.5 meters (units). Which means your character can move at most 1 unit per frame. If your game runs at 30 frames per second, you can move at most 30 meters per second, or 108 kilometers per hour. This is pretty damn fast, but if you’re building the latest and greatest Sonic the Hedgehog title it may not be fast enough.

With the controller directly flush with the surface, it cannot move more than twice its radius or it will phase through the wall

One solution to this problem is to run your controller’s physics more than once per frame. Alternatively, we can use CapsuleCastAll to check if there are any colliders between our initial and final position every frame. We’ll explore both these options in future articles where we continue to implement the character controller.
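
A rough sketch of the second option, assuming we describe the capsule by its two sphere centers and radius (the parameter names are illustrative; in practice these values would come from the controller itself):

using UnityEngine;

public static class PhasingCheck
{
    // Before committing to a move, cast the controller's capsule from its current position along
    // the movement vector and see whether anything lies in between.
    public static bool PathIsBlocked(Vector3 bottomSphere, Vector3 topSphere, float radius, Vector3 movement)
    {
        RaycastHit[] hits = Physics.CapsuleCastAll(bottomSphere, topSphere, radius,
                                                   movement.normalized, movement.magnitude);

        // Any hit means a collider sits between the initial and final positions.
        return hits.Length > 0;
    }
}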

Custom Character Controller in Unity: Part 2 – Implementation

Now that we’ve gone over the basics of character controller collision resolution, I’m going to demonstrate how to implement the last presented example (the pushback method) into Unity.

To start off, make sure you have Unity downloaded and installed. For this article I am using Unity 4.3.4f1. (To check your version of Unity, go to Help → About Unity…) Open an existing project or create a new one for this tutorial. Create a new scene and create a Cube and a Sphere Game Object within it. Although we’ll eventually move on to using a Capsule shape for our controller, we’ll start with a Sphere to keep it simple. Rename the Sphere to Player and the Cube to Wall. Change the Wall’s scaling factor to 6 on each axis. To ease visualization, I also added a blue transparent material to the player and a green transparent material to the wall. Remove the Sphere Collider component from the player.

This sure beats making those dumb diagrams in Photoshop

Create a new C# script and name it SuperCharacterController.cs, to express our dominance as the alpha character controller. Assign this script to our player, and then copy and paste the following code into it:

using UnityEngine;
using System;
using System.Collections.Generic;

public class SuperCharacterController : MonoBehaviour {

 [SerializeField]
 float radius = 0.5f;

 private bool contact;

 // Update is called once per frame
 void Update () {

 contact = false;

 foreach (Collider col in Physics.OverlapSphere(transform.position, radius))
 {
 Vector3 contactPoint = col.ClosestPointOnBounds(transform.position);

 Vector3 v = transform.position - contactPoint;

 transform.position += Vector3.ClampMagnitude(v, Mathf.Clamp(radius - v.magnitude, 0, radius));

 contact = true;
 }
 }

 void OnDrawGizmos()
 {
 Gizmos.color = contact ? Color.cyan : Color.yellow;
 Gizmos.DrawWireSphere(transform.position, radius);
 }
}

…and that’s all, really. Run the project and open the Scene window while it’s still running. Drag the player around the edges of the wall and attempt to slowly push him into it. You’ll notice the wall resists, and keeps the player flush against its edge. So what are we actually doing here?

Physics.OverlapSphere returns an array of Colliders that are contacted by the sphere. It’s a great function in that it doesn’t come with any of the caveats of the other various methods in the Physics class (which we’ll inspect in more detail later). You define an origin and a radius and it gives you the colliders, no frills.

With any collisions detected, we now need to perform resolution. To retrieve the closest point on the surface of the box collider, we use the ClosestPointOnBounds method. We then take a vector that points from the contactPoint to our location. The vector’s magnitude is then clamped and our position is “pushed” out of the collider the proper amount.

You’ll also notice that I implement OnDrawGizmos so that it’s easy to see when the OverlapSphere is colliding with an object.

Two frames demonstrating the collision being detected, and then resolved

Fairly simple. Unfortunately our success up until this point has been…an illusion. Create a new class named DebugDraw.cs, and add in the following code.

using UnityEngine;
using System.Collections;

public static class DebugDraw {

 public static void DrawMarker(Vector3 position, float size, Color color, float duration, bool depthTest = true)
 {
 Vector3 line1PosA = position + Vector3.up * size * 0.5f;
 Vector3 line1PosB = position - Vector3.up * size * 0.5f;

 Vector3 line2PosA = position + Vector3.right * size * 0.5f;
 Vector3 line2PosB = position - Vector3.right * size * 0.5f;

 Vector3 line3PosA = position + Vector3.forward * size * 0.5f;
 Vector3 line3PosB = position - Vector3.forward * size * 0.5f;

 Debug.DrawLine(line1PosA, line1PosB, color, duration, depthTest);
 Debug.DrawLine(line2PosA, line2PosB, color, duration, depthTest);
 Debug.DrawLine(line3PosA, line3PosB, color, duration, depthTest);
 }
}

This is a useful helper function of mine that allows us to draw markers in the editor from anywhere in the code (as opposed to just the OnDrawGizmos function). Modify the foreach loop to look like this.

foreach (Collider col in Physics.OverlapSphere(transform.position, radius))
{
Vector3 contactPoint = col.ClosestPointOnBounds(transform.position);

DebugDraw.DrawMarker(contactPoint, 2.0f, Color.red, 0.0f, false);

Vector3 v = transform.position - contactPoint;

transform.position += Vector3.ClampMagnitude(v, Mathf.Clamp(radius - v.magnitude, 0, radius));

contact = true;
}

Run the code, and you’ll notice that when a collision happens a large red cross hair is drawn at its location. Now, drag the player inside the wall and observe that the marker follows the player. This isn’t necessarily wrong of the ClosestPointOnBounds function, but to match our pushback model from the previous section we really wanted a ClosestPointOnSurfaceOfBoundsOrSomething.

I can’t believe this free game engine doesn’t do exactly everything I want all the time

The main issue here is that we cannot properly resolve collisions when our character’s origin is inside a collider, as we do not have a function that will correctly find the nearest point on the surface. For now however, we’re going to move on to the next problem with our current implementation.

Rotate the wall about 20 degrees either way on its y axis and then run the scene. You’ll notice nothing seems to work properly anymore. This is because ClosestPointOnBounds returns the closest point on the axis-aligned bounding box, not the object-oriented bounding box.

Axis-aligned bounding box on the left, with object-oriented on the right

You can already imagine how this problem will extend beyond just Box Colliders. Since the function is only capable of returning the axis-aligned bounding box, it clearly will not give us the closest point on the surface if we’re colliding with any other collider type (Sphere, Capsule, Mesh, etc.). Unfortunately, there is no silver bullet for this issue (or not one I’m aware of); we’ll need to implement a separate algorithm for each collider type.

Let’s start with the easiest first: Sphere Colliders. Create a new Sphere game object in the scene. There are a few steps to finding the nearest point on the surface, none of which are too complicated. To know which direction to push the player, we calculate the direction from the Sphere’s centre to our position. Since every point on a Sphere’s surface is the same distance from its centre, we normalize this direction and then multiply it by the sphere’s radius and its local scale factor.

The following code implements the above. You’ll notice that in addition to the new method I’ve also added in a conditional check to see what kind of collider our OverlapSphere has detected.

using UnityEngine;
using System;
using System.Collections.Generic;

public class SuperCharacterController : MonoBehaviour {

 [SerializeField]
 float radius = 0.5f;

 private bool contact;

 // Update is called once per frame
 void Update () {

 contact = false;

 foreach (Collider col in Physics.OverlapSphere(transform.position, radius))
 {
 Vector3 contactPoint = Vector3.zero;

 if (col is BoxCollider)
 {
 contactPoint = col.ClosestPointOnBounds(transform.position);
 }
 else if (col is SphereCollider)
 {
 contactPoint = ClosestPointOn((SphereCollider)col, transform.position);
 }

 DebugDraw.DrawMarker(contactPoint, 2.0f, Color.red, 0.0f, false);

 Vector3 v = transform.position - contactPoint;

 transform.position += Vector3.ClampMagnitude(v, Mathf.Clamp(radius - v.magnitude, 0, radius));

 contact = true;
 }
 }

 Vector3 ClosestPointOn(SphereCollider collider, Vector3 to)
 {
 Vector3 p;

 p = to - collider.transform.position;
 p.Normalize();

 p *= collider.radius * collider.transform.localScale.x;
 p += collider.transform.position;

 return p;
 }

 void OnDrawGizmos()
 {
 Gizmos.color = contact ? Color.cyan : Color.yellow;
 Gizmos.DrawWireSphere(transform.position, radius);
 }
}

The astute reader may have noticed that this ClosestPointOn method actually returns the closest point on the surface of the Sphere, unlike the ClosestPointOnBounds which returns the closest point within the bounds. This is handy, but we have a few hurdles to jump before we’re able to make much use of this. For now, let’s tackle the second (and final for today) type of Collider we’ll implement: object-oriented bounding boxes.

Image demonstrates how the vector direction between the origin of the sphere and the location of our controller is extrapolated to give us our nearest point

Our general approach to this algorithm will be to take the input point and clamp it within the extents of the box. This will give us the same behaviour as the built in ClosestPointOnBounds, except we’ll ensure that it works even if the box has a rotation other than the identity.

The extents of the Box Collider are defined as its local size in x, y, and z. In order to clamp our point to the local extents of the Box Collider, we need to transform its position from world coordinates to the local coordinates of the Box Collider’s transform. Once we do that, we can clamp the point’s position within the bounds. To get our final point we then transform it back into world coordinates. The final code for the day looks like this.

using UnityEngine;
using System;
using System.Collections.Generic;

public class SuperCharacterController : MonoBehaviour {

 [SerializeField]
 float radius = 0.5f;

 private bool contact;

 // Update is called once per frame
 void Update () {

 contact = false;

 foreach (Collider col in Physics.OverlapSphere(transform.position, radius))
 {
 Vector3 contactPoint = Vector3.zero;

 if (col is BoxCollider)
 {
 contactPoint = ClosestPointOn((BoxCollider)col, transform.position);
 }
 else if (col is SphereCollider)
 {
 contactPoint = ClosestPointOn((SphereCollider)col, transform.position);
 }

 DebugDraw.DrawMarker(contactPoint, 2.0f, Color.red, 0.0f, false);

 Vector3 v = transform.position - contactPoint;

 transform.position += Vector3.ClampMagnitude(v, Mathf.Clamp(radius - v.magnitude, 0, radius));

 contact = true;
 }
 }

 Vector3 ClosestPointOn(BoxCollider collider, Vector3 to)
 {
 if (collider.transform.rotation == Quaternion.identity)
 {
 return collider.ClosestPointOnBounds(to);
 }

 return closestPointOnOBB(collider, to);
 }

 Vector3 ClosestPointOn(SphereCollider collider, Vector3 to)
 {
 Vector3 p;

 p = to - collider.transform.position;
 p.Normalize();

 p *= collider.radius * collider.transform.localScale.x;
 p += collider.transform.position;

 return p;
 }

 Vector3 closestPointOnOBB(BoxCollider collider, Vector3 to)
 {
 // Cache the collider transform
 var ct = collider.transform;

 // Firstly, transform the point into the space of the collider
 var local = ct.InverseTransformPoint(to);

 // Now, shift it to be relative to the center of the box
 local -= collider.center;

 // Clamp the point to the extents of the box
 var localNorm =
 new Vector3(
 Mathf.Clamp(local.x, -collider.size.x * 0.5f, collider.size.x * 0.5f),
 Mathf.Clamp(local.y, -collider.size.y * 0.5f, collider.size.y * 0.5f),
 Mathf.Clamp(local.z, -collider.size.z * 0.5f, collider.size.z * 0.5f)
 );

 // Now we undo our transformations
 localNorm += collider.center;

 // Return resulting point
 return ct.TransformPoint(localNorm);
 }

 void OnDrawGizmos()
 {
 Gizmos.color = contact ? Color.cyan : Color.yellow;
 Gizmos.DrawWireSphere(transform.position, radius);
 }
}

You’ll also notice that I made a few changes to the main collision loop, allowing us to call either the axis-aligned or object-oriented ClosestPointOn in the same line. I say “I made a few changes” in a fairly disingenuous sense in that I really mean “I slightly modified the code that I copied and pasted,” as most of the implementation here is taken from fholm’s RPGController package. You can open the RPGCollisions class within it to check out some of the other alterations I made: namely, updating some deprecated code and replacing the matrix multiplications with the more user friendly TransformPoint methods.

Cut me some slack I got like a C minus minus in Linear Algebra I need all the user friendliness I can get

This wraps up the first part of our implementation. In future articles I’ll address some of the shortcomings with Unity’s physics API that I’ve alluded to, and begin to outline various components of our ideal Character Controller we will build.

Maybe. Who knows. I’m just making this stuff up as I go.

References

The majority of the code from this article comes from fholm’s RPGController package, specifically the PushBack method from RPGMotor.cs and the closest point methods from RPGCollisions.cs.

Custom Character Controller in Unity: Part 1 – Collision Resolution

After using Unity over the years for various projects, I’ve come to two conclusions: overall it’s a terrific engine that I would recommend to anyone interested in getting into game development, and that its built-in character controller sucks.  I’ve been working on a custom character controller for a couple weeks and noticed that finding any kind of reference or learning material on the subject is pretty difficult.  So since I couldn’t find anything out there to read…I figured I’d write something instead!  I intend to post a few pieces here outlining what I’ve learned so far and some issues I’ve encountered.  For any actual implementation, I’ll be using the aforementioned Unity game engine.  You can visit their website here and download their latest version here.  I really dig Unity.  It takes care of a lot of the low level, under the hood side of game development but still gives you enough freedom to do just about anything.  It also has a really great and active community that is the epitome of helpfulness.

Unfortunately, as previously stated, Unity is also home to the world’s worst character controller.  I suppose I should loosely define what a “character controller” is before I build my case against Unity’s.  More or less, it’s the code (or class, or whatever) that handles and resolves your character’s collisions in the world.  Unlike boxes and barrels and whatnot that can be taken care of by the rest of the physics engine, characters require special code to behave differently.  However, since we’re doing collision tests, we still need to pick a geometric shape to represent our character.  Most 3D games use a capsule collider.

Capsule collider used to approximate a character’s form in Unity

Capsule colliders are great for a wide variety of reasons, but we’ll get to that later.  For now, I’m going to go over the basics of character controllers in just 2 dimensions, to keep things simple.

CCSetup1

You’ll notice I’ve labelled the axes z and x, instead of x and y.  This is because I’m going to treat this as a top down view of a three dimensional world, which we’ll eventually move onto.  The character here is seen as a blue circle, as a capsule seen from top view is a circle!  The green rectangle we will treat as a wall.  Ideally, characters cannot walk through walls.  So should the character intersect with it, we’ll want to detect the collision, and properly resolve it.  We’re going to pass over the actual collision detection, i.e., checking to see if the circle is intersecting the box, for two reasons.  One is that Unity has a fair amount of resources (which we’ll go over later) to handle this, and two is that there is a pretty good selection of reference material out there for collision detection.  We’ll focus on getting the colliding object to resolve its position properly based on expected behavior.

CCSetup2

Shown above we have the controller attempting to move into the wall.  To prevent this we run a function that performs a sweep test, from the initial position of the controller to the desired position of the movement.  The test detects that a wall is in front of us, and returns the distance.  Using this value, we move the controller in the direction of the sweep by the distance the test traveled, placing it directly beside the wall.  (Aside: Unity has several built in functions for this, including Rigidbody.SweepTest, Physics.SphereCast and Physics.CapsuleCast.)
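
A sketch of that sweep-and-stop behaviour, treating our top-down circle as a sphere and using a SphereCast as the sweep (the radius is an illustrative value):

using UnityEngine;

public class SweepMoveSketch : MonoBehaviour
{
    public float radius = 0.5f;

    public void Move(Vector3 movement)
    {
        RaycastHit hit;

        // Sweep from the current position toward the desired position.
        if (Physics.SphereCast(transform.position, radius, movement.normalized, out hit, movement.magnitude))
        {
            // A wall is in the way: move only as far as the sweep traveled,
            // leaving the controller flush against the surface.
            transform.position += movement.normalized * hit.distance;
        }
        else
        {
            // Nothing in the way: apply the full movement.
            transform.position += movement;
        }
    }
}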

However, this really isn’t the kind of behavior we want.  If we use this method, the character will be immediately halted in any movement he makes if he collides with an object, even if only slightly.  This is undesirable as it doesn’t reflect the way real world objects tend to bounce and slide off each other and, more importantly, it would be annoying as hell to play.

CCSetup3

This is a much more desirable behavior.   The initial sweep test is performed in the movement direction for the movement distance.  When the sweep test contacts the wall, the character is moved directly to it, just like before.  However, this time around we further move the character upwards to make up for the lost movement, which allows it to slide along surfaces.  This is a great example of the desired behavior of the controller, but it isn’t the best way to implement it.  For one, it’s not very efficient: every time you want to move the controller, you need to run this function.  This is fine if you just move it once per frame, but if you plan on doing something different–for whatever reason–you’ll need to rerun the function.  Two, collision resolution is reliant on character movement direction and distance.  If he just magically finds his way into a solid wall (as character controllers are wont to do) he’s not going to get automatically pushed out.  In practice, I’ve found this method a massive headache.

CCSetup4

Here is that terrifying situation put onto the screen.  We see our hero is currently within the walls of the object.  Instead of looking at collision resolution as a response to a movement, like in the previous examples, we are going to treat it completely independently.  We are no longer concerned with what direction the player moved or how far he moved.  Instead, we will consider only where he is at this moment, and whether his location is a problem or not.  In the above figure, we can see that currently the player is intersecting the wall (he’s inside it!), and therefore his current location is a problem and needs to be rectified.  Since we are no longer resolving collision as a response to movement, we do not know where his previous position was or how far he moved.  All we know is that currently he is stuck inside a wall, and we need to move him out of the wall.  But where to put him?  Just like in previous examples, we should only push him out so that he is just touching the edge of the wall.  We have many locations that are candidates for this…

CCSetup7

Each of the transparent yellow circles indicates a possible position for the character controller that satisfies our goal–to push him out of the wall to a point somewhere on its surface.  But which point do we choose?  Simply put, we calculate the closest point on the surface of the wall with respect to the controller’s location.

Drawing these diagrams in Photoshop was a huge mistake they take forever

Here we have calculated that the nearest point to our controller’s current location lies to the right of us.  We then move the controller to that point, plus the radius of the controller (shown in red).

This concludes the first part of our epic adventure into the mysterious realm of character controllering.  Next time I’ll start talking about my implementation in Unity, and some of the more complex functions character controllers use.

Acknowledgements and References:

Most of the knowledge here I’ve acquired by exploring two main information sources: a Unity forums post by the user techmage, and reading through/reverse engineering a Unity custom character controller package by user fholm.  I don’t know how to pronounce that either.  Fuhholllme.  Disgusting.  Reminds me of phlegm.  Anyways, you can download his package off his github here, under the RPGController directory.  This is an amazing project overall that I will be exploring over the next few sections, and it is really terrifically coded.  Apparently he does Unity consulting too, if anyone needs that kind of service.  Finally, if anyone has any good reference material for this subject, or anything relating to it, sharing it here would be a really great way to expand on the topic!