Enemy health bars in 1 draw call in Unity

· by Steve · Read in about 11 min · (2264 Words)

Recently I needed to do something pretty common in many top-down games: render a whole bunch of health bars for enemies on the screen. Something like this:

Obviously, I want to do this as efficiently as possible, preferably all in a single draw call. As I always do before I start something, I did a quick search online of what other people are doing, and the results were…mixed.

I won’t code-shame anyone here, but suffice it to say that some of the solutions were not great, including attaching a Canvas object to each enemy. 😬 (that’s really inefficient)

The method I ended up using is a little different from what I’ve seen elsewhere, and doesn’t use any UI classes at all (including Canvas), so I figured I’d document it for others. I’ve also put the source up on GitHub for those who want to check it out.

Why not use Canvas?

One Canvas per enemy is clearly bad, but I could have used a shared Canvas for all enemies; a single Canvas would batch up the draw calls as well.

However, I didn’t really like the amount of per-frame work that involved. If you use a Canvas, it means you need to do this every frame:

  • Figure out which enemies are on screen, and allocate each of them a UI bar from a pool
  • Project the enemy’s position on to the camera to position it
  • Alter the size of the ‘fill’ part of a bar, probably an Image
  • Potentially resize the bars to match the enemy type; e.g. bigger enemies may have larger bars so they don’t look silly

In all cases this is going to dirty the Canvas' geometry buffers and cause a rebuild of all the vertex data on the CPU. I didn’t like that for something this simple.

My approach in a nutshell

A quick roadmap of what I did:

  • Attach health bar objects to enemies in 3D
    • This gets us positioning and culling of the bars automatically
    • The position / size of the bar can be customised to fit the enemy
    • We’ll point them towards the camera in code using the transform which is there anyway
    • The shader will ensure they’re always rendered on top
  • Use Instancing to render all of them in one draw call
  • Use simple procedural UVs to represent the fill level on the bar

Let’s get stuck into the detail.

What is Instancing?

A long-standing technique in graphics is to batch lots of objects together so that they share vertex data and materials, letting you render them all in one draw call. You want this because each draw call has an overhead which spans both the CPU and the GPU. Instead of issuing one draw call per object, you render them all in one go and use a shader to add variation to each copy.

You can do this manually by duplicating the vertex data for a mesh X times in a single buffer, where X is the maximum number of copies you ever want to render, then using an array of shader parameters to transform / colour / vary each copy. Each copy needs to know which numbered instance it is, so it can index this array. Then you use an indexed render call which says “only render up to N”, where N is the number you actually need this frame and can be at most X.

This has been codified in most APIs now, so you don’t have to do it manually any more. It’s called “Instancing” and basically automates the above with some predefined constraints.

Unity supports instancing too and has its own API and set of shader macros to help you. It makes specific assumptions too, including that you want a full 3D transform in every instance. Technically, we wouldn’t need all of this for 2D bars - we could get away with some simplifications - but since it’s there we’ll use it. It will make the shader simpler, and also opens up the option to use 3D indicators like circles or arcs later if you want.
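To make that concrete, here’s roughly what driving the instanced-draw API by hand looks like. This is just an illustrative sketch, not part of the health bar code (our bars use ordinary MeshRenderers and let Unity batch them automatically); the mesh and material fields are placeholders you’d assign yourself, and it assumes the shader declares _Fill as a per-instance property, which ours will do below.

using UnityEngine;

// Illustrative sketch only: issuing one instanced draw call by hand.
// The health bars in this article don't do this; they use normal MeshRenderers
// and rely on Unity's automatic instancing.
public class ManualInstancingSketch : MonoBehaviour {
    public Mesh quadMesh;        // placeholder: the mesh to repeat
    public Material barMaterial; // placeholder: needs "Enable GPU Instancing" ticked

    private const int MaxCopies = 256;                                // X: the most we'll ever draw
    private readonly Matrix4x4[] matrices = new Matrix4x4[MaxCopies]; // one transform per copy
    private readonly float[] fills = new float[MaxCopies];            // one _Fill value per copy
    private MaterialPropertyBlock props;

    private void Awake() {
        props = new MaterialPropertyBlock();
    }

    private void Update() {
        int count = 100; // N: how many copies we actually need this frame (<= MaxCopies)
        for (int i = 0; i < count; ++i) {
            matrices[i] = Matrix4x4.TRS(new Vector3(i * 1.5f, 0f, 0f), Quaternion.identity, Vector3.one);
            fills[i] = i / (float)count;
        }
        props.SetFloatArray("_Fill", fills);
        // One call renders all 'count' copies; the shader picks each copy's matrix
        // and _Fill value by its instance ID.
        Graphics.DrawMeshInstanced(quadMesh, 0, barMaterial, matrices, count, props);
    }
}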

The Damageable class

Our enemies will have a component called Damageable which just gives them health, and makes them take damage from collisions. It’s pretty simple for this example:

public class Damageable : MonoBehaviour {
    public int MaxHealth;
    public float DamageForceThreshold = 1f;
    public float DamageForceScale = 5f;

    public int CurrentHealth { get; private set; }

    private void Start() {
        CurrentHealth = MaxHealth;
    }

    private void OnCollisionEnter(Collision other) {
        // Collision would usually be on another component, putting it all here for simplicity
        float force = other.relativeVelocity.magnitude;
        if (force > DamageForceThreshold) {
            CurrentHealth -= (int)((force - DamageForceThreshold) * DamageForceScale);
            CurrentHealth = Mathf.Max(0, CurrentHealth);
        }
    }
}

The HealthBar object: position / rotation

Our health bar object is really simple: just a Quad attached to the enemy.

Healthbar Object

We’ll use the scale on this object to make the bar long and thin, and place it just above the enemy. Don’t worry about its orientation, we’ll fix that in the code attached to this object via HealthBar.cs:

    private void AlignCamera() {
        if (mainCamera != null) {
            var camXform = mainCamera.transform;
            // Point the bar along the camera-to-bar direction...
            var forward = transform.position - camXform.position;
            forward.Normalize();
            // ...and derive an up vector that keeps it upright relative to the camera
            var up = Vector3.Cross(forward, camXform.right);
            transform.rotation = Quaternion.LookRotation(forward, up);
        }
    }

This points the quad toward the camera in all cases. We could do sizing and orientation in the shader instead, but I’m doing it here for 2 reasons. Firstly, Unity instancing always uses a full transform for each object, so since we’re passing this data anyway, we might as well use it. Secondly, setting the scale/rotation here ensures that the bounding box for culling the bar is always correct. If we left the size and rotation of the bar entirely to the shader, then Unity might cull bars that should be visible when they’re close to the edge of the screen because their bounding box’s size and orientation don’t match what we eventually render. We could do our own culling instead of course, but it’s generally better to use what you’re given if possible (Unity code is native and has access to more spatial data than we do).

We’ll talk about how the bar is actually rendered once we’ve talked about the shader.

The HealthBar shader

For this version, we’re going to create a simple, classic red/green bar.

I use a 2x1 texture: one pixel of green on the left and one pixel of red on the right. I’ve turned off mipmaps (obvs), filtering and compression, and set the addressing mode to “Clamp” - this means our bar’s pixels will only ever be perfect green or red, and colour will bleed indefinitely off the edges. This lets us modify texture coordinates in the shader to shift the green/red dividing line back and forth along the bar.

(Because it’s just 2 colours, I could have used a step function in the shader to return one or the other at a given point. However the nice thing about this method is you can drop in a more fancy texture if you want and it’ll work the same way, so long as the transition is in the middle of the texture.)
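(As an aside, you don’t have to import that texture as an asset; the same 2x1 texture can be built in code. A quick sketch, just to show the settings involved:)

using UnityEngine;

// Aside / sketch: building the 2x1 green/red bar texture in code rather than
// importing an asset with the settings described above.
public static class BarTextureFactory {
    public static Texture2D Create() {
        var tex = new Texture2D(2, 1, TextureFormat.RGBA32, false) { // false = no mipmaps
            wrapMode = TextureWrapMode.Clamp, // colour bleeds indefinitely off the edges
            filterMode = FilterMode.Point     // no blending between the two pixels
        };
        tex.SetPixels(new[] { Color.green, Color.red }); // left pixel green, right pixel red
        tex.Apply(false, true); // upload to the GPU, no mipmaps, release the CPU copy
        return tex;
    }
}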

Firstly, we’re going to declare the properties we need:

Shader "UI/HealthBar" {
    Properties {
        _MainTex ("Texture", 2D) = "white" {}
        _Fill ("Fill", float) = 0
    }

_MainTex is the green/red texture, and _Fill is a value between 0 and 1 where 1 is full health.

Next, we need to tell the bar to render in the overlay queue, and for it to ignore all depth in the scene so it always renders on top:

    SubShader {
        Tags { "Queue"="Overlay" }

        Pass {
            ZTest Off

The next part is the shader code proper. We’re writing an unlit shader, so we don’t have to worry about integrating with Unity’s various surface shader models, this is a plain vertex/fragment shader pair. First, the bootstrapping:

    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    #pragma multi_compile_instancing
    #include "UnityCG.cginc"

This is mostly standard bootstrap; the #pragma multi_compile_instancing line tells Unity to compile instancing variants of this shader.

Our vertex struct needs to incorporate instance data, so we do that:

    struct appdata {
        float4 vertex : POSITION;
        float2 uv : TEXCOORD0;
        UNITY_VERTEX_INPUT_INSTANCE_ID
    };

We also need to define what exactly is in our instance data, above and beyond what Unity already handles for us (the transform):

    UNITY_INSTANCING_BUFFER_START(Props)
    UNITY_DEFINE_INSTANCED_PROP(float, _Fill)
    UNITY_INSTANCING_BUFFER_END(Props)

What this is saying is that Unity should create a buffer called “Props” to contain our per-instance data, and that within it we’re going to be using a single float per instance for a property called _Fill.

It’s possible to use more than one buffer. The main reason to do so is when you have properties that are updated at different frequencies: separating them lets you update one buffer while leaving another untouched, which is more efficient. We don’t need that here, though.
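(Purely for illustration, a split like that would look something like the following - the real shader keeps the single Props buffer above, and _TintColor is a made-up second per-instance property, not something the health bar has.)

    // Sketch only: two instance buffers, grouped by how often their data changes
    UNITY_INSTANCING_BUFFER_START(FrequentProps)    // updated every frame
    UNITY_DEFINE_INSTANCED_PROP(float, _Fill)
    UNITY_INSTANCING_BUFFER_END(FrequentProps)

    UNITY_INSTANCING_BUFFER_START(RareProps)        // set once and left alone
    UNITY_DEFINE_INSTANCED_PROP(float4, _TintColor)
    UNITY_INSTANCING_BUFFER_END(RareProps)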

Our vertex shader doesn’t do much beyond the standard work, because the size, position and orientation have already been handled by the transform. That’s applied with UnityObjectToClipPos, which automatically uses the per-instance transform. Without instancing that’s usually just a single matrix property; with instancing it’s an array of matrices under the hood, and Unity picks the right one for this instance for you.

The one thing we need to do is alter the UVs to change where the green/red transition point is on the bar, according to the _Fill property. Here are the pertinent bits:

    UNITY_SETUP_INSTANCE_ID(v);
    float fill = UNITY_ACCESS_INSTANCED_PROP(Props, _Fill);
    // generate UVs from fill level (assumed texture is clamped)
    o.uv = v.uv;
    o.uv.x += 0.5 - fill;

The UNITY_SETUP_INSTANCE_ID and UNITY_ACCESS_INSTANCED_PROP do the magic which accesses the right version of the _Fill property from the constant buffer for this instance.

We know that a quad’s UVs normally cover the whole 0-1 texture range, and that the dividing line for the bar is in the middle of the texture horizontally. So a quick bit of math on the horizontal UV shifts that dividing line left and right, with the texture clamping filling in whatever hangs off either end. For example, with _Fill at 0.25 the UVs are shifted by +0.25, so the transition lands a quarter of the way along the quad and everything to the right of it clamps to red.
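Putting those pieces together, the whole vertex shader ends up looking roughly like this (a sketch, assuming the usual v2f struct of clip-space position plus UV; the full, commented version is in the repo):

    struct v2f {
        float2 uv : TEXCOORD0;
        float4 vertex : SV_POSITION;
    };

    v2f vert (appdata v) {
        v2f o;
        UNITY_SETUP_INSTANCE_ID(v);
        // Position, rotation and scale come from the per-instance transform
        o.vertex = UnityObjectToClipPos(v.vertex);
        // Shift the UVs so the green/red transition sits at the fill level
        float fill = UNITY_ACCESS_INSTANCED_PROP(Props, _Fill);
        o.uv = v.uv;
        o.uv.x += 0.5 - fill;
        return o;
    }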

The fragment shader couldn’t be simpler since all the work has been done:

    return tex2D(_MainTex, i.uv);

The full shader with extra comments is available in the GitHub repo.

The Healthbar material

Simple one next - we just need a Material to assign to our bar which uses that shader. There’s not much to this, except to pick the right shader at the top, assign the green/red texture, and, perhaps most importantly, tick the “Enable GPU Instancing” box.

Healthbar Material
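(If you prefer to set the material up from script rather than in the inspector, the same checkbox is exposed as Material.enableInstancing. A small sketch, reusing the shader name declared earlier:)

using UnityEngine;

// Sketch: creating the bar material from code instead of as an asset.
// barTexture would be the 2x1 green/red texture described earlier.
public static class HealthBarMaterialFactory {
    public static Material Create(Texture2D barTexture) {
        return new Material(Shader.Find("UI/HealthBar")) {
            enableInstancing = true,   // same as ticking "Enable GPU Instancing"
            mainTexture = barTexture
        };
    }
}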

Updating the HealthBar Fill property

Right, so now we have a health bar object and a shader/material to render it with; all that’s left is to set the _Fill property on each instance. We do this inside HealthBar.cs like so:

    private void UpdateParams() {
        meshRenderer.GetPropertyBlock(matBlock);
        matBlock.SetFloat("_Fill", damageable.CurrentHealth / (float)damageable.MaxHealth);
        meshRenderer.SetPropertyBlock(matBlock);
    }

We’re turning the CurrentHealth of the Damageable into a value between 0 and 1, by dividing it by the MaxHealth. We’re then passing it to the _Fill property, via a MaterialPropertyBlock.

If you’re not already using MaterialPropertyBlock to pass data to shaders (even without instancing), you should get to know it. It’s not that well surfaced in the Unity documentation, but it’s the most efficient way to pass per-object data to shaders.

In our case, using instancing, the _Fill values for all the health bars get packed into a constant buffer so they can be uploaded all at once and every bar drawn in one shot.

There’s a bit more boilerplate for setting up the variables used there, but it’s quite boring; see the GitHub repo for details.
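(For completeness, here’s roughly what that plumbing needs to do - a sketch rather than the exact repo code: grab the sibling components once, then refresh the orientation and the fill each frame.)

using UnityEngine;

// Sketch of the surrounding HealthBar.cs plumbing; see the repo for the real thing.
public class HealthBar : MonoBehaviour {
    private MeshRenderer meshRenderer;
    private MaterialPropertyBlock matBlock;
    private Damageable damageable;
    private Camera mainCamera;

    private void Awake() {
        meshRenderer = GetComponent<MeshRenderer>();
        matBlock = new MaterialPropertyBlock();
        // Assumes the bar quad is a child of the enemy that owns the Damageable
        damageable = GetComponentInParent<Damageable>();
    }

    private void Start() {
        mainCamera = Camera.main;
    }

    private void Update() {
        AlignCamera();
        UpdateParams();
    }

    // As shown earlier
    private void AlignCamera() {
        if (mainCamera != null) {
            var camXform = mainCamera.transform;
            var forward = transform.position - camXform.position;
            forward.Normalize();
            var up = Vector3.Cross(forward, camXform.right);
            transform.rotation = Quaternion.LookRotation(forward, up);
        }
    }

    // As shown earlier
    private void UpdateParams() {
        meshRenderer.GetPropertyBlock(matBlock);
        matBlock.SetFloat("_Fill", damageable.CurrentHealth / (float)damageable.MaxHealth);
        meshRenderer.SetPropertyBlock(matBlock);
    }
}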

The Demo

The GitHub repo contains a toy demo where a bunch of evil blue cubes are pummelled by heroic red spheres (yay), taking damage in the process, which is displayed via the bars described here. It’s written with Unity 2018.3.6f1.

You can see the effect of instancing in 2 ways:

The Stats Panel

After hitting Play, click the “Stats” button above the Game panel. There, you can see how many draw calls were saved by instancing (batching):

Healthbar Stats

While you’re playing, if you click on the HealthBar material and uncheck the “Enable GPU Instancing” box, you’ll see this number drop to zero.

The Frame Debugger

While playing, go to the menu Window > Analysis > Frame Debugger, then click “Enable” in the window that pops up.

Down the left hand side you can see all of the render operations which are happening. Notice how right now there are lots of separate calls for all the enemies, and the bullets (we could instance those too if we wanted). If you scroll all the way to the bottom, you should see an entry for “Draw Mesh (instanced) Healthbar”.

This single call is what’s rendering all the bars. If you click on that operation, then the operation above it, you’ll see all the bars disappear, because they’re all happening in one call. If you toggle the “Enable GPU Instancing” box on the material while you’re in the Frame Debugger, you’ll see that one line turn into many, and back again as this feature is switched on/off.

Possible extensions

As I mentioned earlier, because these health bars are actually real objects, there’s nothing to stop you turning them into something other than just a 2D bar. They could be semi-circles under the enemies which deplete in an arc, or spinning diamonds above their heads. You could still render all of them in one call with exactly the same approach.

Thanks for reading, I hope you’ve found this useful!