Writing a SpriteLamp Shader in Unity

Posted on January 16, 2014 at 2:23 pm by Steve Karolewics
A few months ago, Justin and I began investigating ways to make our next game (in Unity3D) look great. We knew we wanted to stick with 2D art, but didn’t want to tacitly accept the “conventional” expectations associated with 2D art in games.
We discovered SpriteLamp, which essentially allows you to generate dynamic lighting on pixel art. It accomplishes this by producing normal maps, depth maps, and anisotropy maps for use in shaders. All you provide are 2-5 “lighting profiles” of what an object would look like lit from a specific direction (top, bottom, left, right, or front). This animation sort of sells itself:
Courtesy of the SpriteLamp Kickstarter
We encourage anyone interested to check out the Kickstarter or SnakeHillGames for more information. SpriteLamp was successfully funded, and we’ve received beta access. The tool, even in its beta state, is very usable, and has a UI that is easy enough to understand for now:
The UI for SpriteLamp
Although we’re not artists, even we could see how exciting this would be to get working in Unity. SpriteLamp’s developer, Finn Morgan, said that a shader for Unity will be provided later, but we decided that we couldn’t wait, so we wrote it ourselves.
Shaders in Unity
For those unfamiliar with how shaders work in Unity, here are some resources that helped us a lot:
Another important aspect to keep in mind is that if your shader has errors, it’s easiest to see the errors by viewing the shader file itself in Unity’s inspector window:
Sometimes Unity’s Console window will show all shader errors, but I’ve found the Inspector for the shader to be more reliable.
With all of that in mind, let’s get started. I figure it will be more valuable to talk through the various aspects of the shader, rather than just provide the shader in its entirety (though if you just want that, check the end of the article).
A Bare Bones Cg Shader
Let’s start with the minimal amount of work, to better understand the structure of a Unity shader, since it was pretty overwhelming for me at first, especially as it relates to lighting. If you’re familiar with the structure of shaders and how Unity handles multiple lights, then feel free to jump to the next section.
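A bare-bones version looks something like this (a sketch, not the final file; the shader name and struct names are our own choices):

```shaderlab
Shader "SpriteLamp/BareBones"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "Queue" = "Transparent" }
        AlphaTest Greater 0.0

        Pass
        {
            Tags { "LightMode" = "ForwardBase" }

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            uniform sampler2D _MainTex;

            struct vertexInput
            {
                float4 vertex : POSITION;
                float4 texcoord : TEXCOORD0;
            };

            struct fragmentInput
            {
                float4 pos : SV_POSITION;
                float2 uv : TEXCOORD0;
            };

            fragmentInput vert(vertexInput input)
            {
                fragmentInput output;
                // Object space to screen space.
                output.pos = mul(UNITY_MATRIX_MVP, input.vertex);
                output.uv = input.texcoord.xy;
                return output;
            }

            float4 frag(fragmentInput input) : COLOR
            {
                // Just sample the sprite texture.
                return tex2D(_MainTex, input.uv);
            }
            ENDCG
        }
    }
}
```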
This might seem a bit overwhelming already for someone new to shaders in Unity, but it’s a great starting point. Let’s talk about what is going on.
The first line specifies the name of the shader, as viewed when selecting a shader for a material. Using slashes gets you nested folders in the shader dropdown. The Properties block specifies what data you can set outside the shader that will be brought in. See this documentation for more information on shader properties. Since we’re using the new Unity 2D features, _MainTex is required if you’re going to use SpriteRenderer. The AlphaTest line specifies that pixels with an alpha of 0 should be ignored.
We’re now describing one pass of our shader, referred to as “ForwardBase”. This is where ambient lighting, the first directional light, per-vertex lights, and lights using spherical harmonics are handled. This Unity reference page explains the various Pass tags in more detail, and this page explains how Unity handles multiple lights in shaders.
Then we begin writing our Cg shader, which occurs between CGPROGRAM and ENDCG. We specify the function names that we’ll use for our vertex and fragment shaders. Then we state the data that we’ll bring in from outside the shader. These variables must be named the same as values specified in the Properties section. Next, structs are defined for the data that our vertex shader will receive, and what it will output. The output of the vertex shader is the same data received by the fragment shader (after interpolation of that data occurs). For now, we’re just using the vertex position and texture coordinates.
In our vertex shader, we simply pass the texture coordinates through, but we multiply the position by the model*view*projection matrix. This converts the vertex position from object space to screen space. In our fragment shader, we simply get the pixel from the main texture and return that color.
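The shader’s second pass is sketched below; for now it just duplicates the vertex and fragment code from the ForwardBase pass described above:

```shaderlab
Pass
{
    Tags { "LightMode" = "ForwardAdd" }

    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    #include "UnityCG.cginc"

    uniform sampler2D _MainTex;

    struct vertexInput
    {
        float4 vertex : POSITION;
        float4 texcoord : TEXCOORD0;
    };

    struct fragmentInput
    {
        float4 pos : SV_POSITION;
        float2 uv : TEXCOORD0;
    };

    fragmentInput vert(vertexInput input)
    {
        fragmentInput output;
        output.pos = mul(UNITY_MATRIX_MVP, input.vertex);
        output.uv = input.texcoord.xy;
        return output;
    }

    float4 frag(fragmentInput input) : COLOR
    {
        // Identical to the ForwardBase pass, for now.
        return tex2D(_MainTex, input.uv);
    }
    ENDCG
}
```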
This pass is used for per-pixel lights, and it should look really familiar. For now it’s the same, but that will quickly change.
Ambient Lighting in ForwardBase
For our uses, we only wanted to focus on ambient lighting in our ForwardBase pass. However, if you want to add directional lights or per-vertex lights, this Cg shader page will be helpful.
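Here is a sketch of what that ambient-only ForwardBase pass might look like; the vertex color arrives through the COLOR semantic:

```shaderlab
Pass
{
    Tags { "LightMode" = "ForwardBase" }

    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    #include "UnityCG.cginc"

    uniform sampler2D _MainTex;

    struct vertexInput
    {
        float4 vertex : POSITION;
        float4 texcoord : TEXCOORD0;
        float4 color : COLOR;
    };

    struct fragmentInput
    {
        float4 pos : SV_POSITION;
        float2 uv : TEXCOORD0;
        float4 color : COLOR;
    };

    fragmentInput vert(vertexInput input)
    {
        fragmentInput output;
        output.pos = mul(UNITY_MATRIX_MVP, input.vertex);
        output.uv = input.texcoord.xy;
        output.color = input.color;   // the color set via SpriteRenderer
        return output;
    }

    float4 frag(fragmentInput input) : COLOR
    {
        float4 texColor = tex2D(_MainTex, input.uv);
        // Ambient light from the render settings, tinted by the sprite color.
        return float4(UNITY_LIGHTMODEL_AMBIENT.rgb * input.color.rgb * texColor.rgb, texColor.a);
    }
    ENDCG
}
```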
With this shader pass, we’ve added color to the data we receive in the vertex shader and pass to the fragment shader. The field specified by COLOR correlates to the color set via SpriteRenderer. Factoring that in, along with UNITY_LIGHTMODEL_AMBIENT (which is the ambient light color specified in your project’s render settings), we get ambient-lit sprites that we can further color via SpriteRenderer:
We’re now done with the ForwardBase pass, so all code past this point is focusing on the ForwardAdd pass.
Phong illumination (your standard ambient+diffuse+specular lighting) has been described in many places (such as Wikipedia), but it’s an important step to getting SpriteLamp integrated properly in Unity. To prepare for this, we have to add some shader properties:
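For example (the names _SpecularColor and _Shininess are our choices, not anything SpriteLamp requires):

```shaderlab
Properties
{
    _MainTex ("Texture", 2D) = "white" {}
    _SpecularColor ("Specular Color", Color) = (1, 1, 1, 1)
    _Shininess ("Shininess", Float) = 10
}
```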
It’s easy to get the intention of “shininess” reversed. Smoother surfaces have larger values for this, which result in smaller specular highlights.
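A sketch of the ForwardAdd pass with Phong lighting for point lights follows; variable names are our own, while _LightColor0 and _WorldSpaceLightPos0 are Unity built-ins:

```shaderlab
Pass
{
    Tags { "LightMode" = "ForwardAdd" }
    Blend One One   // additive: each light adds to the lights before it

    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    #include "UnityCG.cginc"

    uniform sampler2D _MainTex;
    uniform float4 _SpecularColor;
    uniform float _Shininess;
    uniform float4 _LightColor0;   // provided by Unity

    struct vertexInput
    {
        float4 vertex : POSITION;
        float4 texcoord : TEXCOORD0;
        float4 color : COLOR;
    };

    struct fragmentInput
    {
        float4 pos : SV_POSITION;
        float2 uv : TEXCOORD0;
        float4 color : COLOR;
        float4 posWorld : TEXCOORD1;   // world position, interpolated per fragment
    };

    fragmentInput vert(vertexInput input)
    {
        fragmentInput output;
        output.pos = mul(UNITY_MATRIX_MVP, input.vertex);
        output.posWorld = mul(_Object2World, input.vertex);
        output.uv = input.texcoord.xy;
        output.color = input.color;
        return output;
    }

    float4 frag(fragmentInput input) : COLOR
    {
        // Flat sprite facing an orthographic camera: both the normal and
        // the fragment-to-camera direction point out of the screen.
        float3 normalDirection = float3(0.0, 0.0, -1.0);
        float3 viewDirection = float3(0.0, 0.0, -1.0);

        // Point lights only: inverse-linear attenuation.
        float3 vertexToLight = _WorldSpaceLightPos0.xyz - input.posWorld.xyz;
        float attenuation = 1.0 / length(vertexToLight);
        float3 lightDirection = normalize(vertexToLight);

        float3 diffuse = attenuation * _LightColor0.rgb
            * max(0.0, dot(normalDirection, lightDirection));
        float3 specular = attenuation * _LightColor0.rgb * _SpecularColor.rgb
            * pow(max(0.0, dot(reflect(-lightDirection, normalDirection), viewDirection)), _Shininess);

        float4 texColor = tex2D(_MainTex, input.uv);
        return float4((diffuse * input.color.rgb * texColor.rgb) + specular, texColor.a);
    }
    ENDCG
}
```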
Near the top of the pass, we specify a blend state for additive blending (Blend One One). By using additive blending, we allow multiple lights to contribute to the lighting, instead of each light overwriting the previous one.
Our vertex shader now also outputs “posWorld”, which is the vertex position in world coordinates (unlike output.pos, which is in screen coordinates). We’ll need this for computing light strength. Although it’s not a texture coordinate, we bind it to TEXCOORD1 because we choose how to interpret it in the fragment shader.
In order to compute Phong illumination, you need to know the normal for the surface (or else you don’t know how strong a light is hitting a surface). Since we’re working with Unity 2D sprites, the normal is always (0, 0, -1), pointing toward the screen. Additionally, the view direction (which points from the fragment to the camera) is needed to compute specular highlights. Since we have an orthographic camera, this is also a known value.
_LightColor0 is a built-in value provided by Unity, and is pretty self-explanatory. This is also the first time we encounter lighting attenuation, which specifies how light strength decreases with distance from the light. This typically follows an inverse quadratic curve, but for now we’re using inverse linear. There’s nothing wrong with either, but inverse linear provides more light.
It’s important to note that this code is intended for point lights only. If you want to use directional lights, then the attenuation and lightDirection are calculated differently. If you want to use spot lights, cookie attenuation needs to be added.
The rest of the new code in this shader is just following the Phong illumination model, so I won’t explain any further.
Now that we have standard lighting implemented, it’s time to take advantage of SpriteLamp! The first (and most important) aspect to integrate is the normal map. For the head above, our normal map looks like this:
Like with Phong illumination, for normal maps we have to add to our shader properties:
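The addition might look like this (the property name _NormalMap is our own):

```shaderlab
_NormalMap ("Normal Map", 2D) = "bump" {}
```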
For the normal map, “bump” refers to a default texture where the red and green channels (which correspond to the x and y components of the normals) are 128, and the blue channel (the normal’s z component) is 255. The range of values in a normal map is from -1 to 1, so by default a normal map specifies normals of (0, 0, 1). This is why most normal maps appear blue.
As you might expect, the only thing that changes when incorporating normals is the normalDirection variable. This still requires a few changes throughout the shader, though:
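In the fragment shader, the hard-coded normal gets replaced by something like this sketch (with a matching `uniform sampler2D _NormalMap;` declared alongside the other uniforms):

```shaderlab
// Sample the normal map and convert from color space [0, 1]
// to normal space [-1, 1].
float3 normalDirection = tex2D(_NormalMap, input.uv).xyz * 2.0 - 1.0;

// Multiplying by the world-to-object matrix on the right applies the
// inverse transpose, so rotated sprites light correctly.
normalDirection = mul(float4(normalDirection, 0.0), _World2Object).xyz;

// The normal map defaults to +Z; our convention points at the camera (-Z).
normalDirection.z *= -1.0;
normalDirection = normalize(normalDirection);
```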
There’s a lot going on in those few lines that compute the normal. Since we’re getting the normal from a texture, we have to convert from color coordinates to normal coordinates. Colors range from 0 to 1, while normals range from -1 to 1, so we scale the sampled color by 2 and subtract 1.
Next, we multiply the normal by the “world to object” matrix. This is necessary because that matrix contains the transform for things such as rotated sprites. Without this line, the lighting wouldn’t change as you rotate a sprite around a light!
As mentioned above, the default normal value is (0, 0, 1). However, as you’ll recall from the Phong illumination shader, we used (0, 0, -1), so we negate the z component. Finally, we normalize the normal.
Another feature that SpriteLamp provides is depth maps, which adjust the depth of the fragment. For sprites, this translates to adjusting the Z position, but more generally it adjusts the position along the normal that would be computed in the vertex shader. We’re just taking shortcuts because we’re using sprites (and also since shaders have an instruction limit). This depth map generated by SpriteLamp adds some definition to the ear and jawline, and adds rounding to the edges of the head and face:
As you probably expected, we added the depth map to the shader properties. We interpret the depth map as able to both add and subtract depth, so the color range 0 to 1 maps to depths -1 to 1. This means that our texture should be “gray” by default, and we do the same computation as we did for normals to convert into the -1 to 1 space.
After getting the depth adjustment, we then subtract it from our posWorld.z, factoring in the “Amplify Depth” setting. We’re subtracting because our camera is looking in the positive Z direction, and the brighter areas of the depth map are “closer”, which means moving in the negative Z direction.
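Sketched out, this amounts to two new properties and a small adjustment in the fragment shader (the names _DepthMap and _AmplifyDepth are our own):

```shaderlab
// In Properties:
_DepthMap ("Depth Map", 2D) = "gray" {}
_AmplifyDepth ("Amplify Depth", Float) = 1.0

// In the fragment shader, before computing the light direction:
// color [0, 1] -> depth offset [-1, 1]; brighter means closer (-Z).
float depthOffset = tex2D(_DepthMap, input.uv).x * 2.0 - 1.0;
float3 posWorld = input.posWorld.xyz;
posWorld.z -= depthOffset * _AmplifyDepth;

float3 vertexToLight = _WorldSpaceLightPos0.xyz - posWorld;
```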
The difference in “amplify depth” settings is very noticeable, but you have to be careful to not increase it so much that the sprite will be “within” a light:
The unlit parts in the center of the head have final depth values that are closer to the camera than the light.
A Future Improvement
Some of you may have noticed that we only use the x component of the depth map, and only the x, y, and z components of the normal map. While we haven’t implemented it, one improvement that we’re considering (and we hope you do too) is to combine the normal map and depth map, so that the alpha value of the bitmap is the depth value, reducing the number of texture lookups needed.
As Finn Morgan pointed out in his recent blog post, adding cel-shading is a pretty simple technique:
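A sketch of that change, assuming a _CelShadingLevels property (our name) controls the number of bands:

```shaderlab
// Compute the diffuse and specular levels before any color is applied.
float diffuseLevel = attenuation * max(0.0, dot(normalDirection, lightDirection));
float specularLevel = attenuation
    * pow(max(0.0, dot(reflect(-lightDirection, normalDirection), viewDirection)), _Shininess);

// Quantize the levels into bands (skip if cel-shading is disabled).
if (_CelShadingLevels >= 1.0)
{
    diffuseLevel = floor(diffuseLevel * _CelShadingLevels) / _CelShadingLevels;
    specularLevel = floor(specularLevel * _CelShadingLevels) / _CelShadingLevels;
}

// Only now factor in the light, sprite, and specular colors.
float4 texColor = tex2D(_MainTex, input.uv);
float3 diffuse = _LightColor0.rgb * diffuseLevel * input.color.rgb * texColor.rgb;
float3 specular = _LightColor0.rgb * _SpecularColor.rgb * specularLevel;
```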
Note that now we compute the diffuse and specular “level” (before color is factored in), apply cel-shading to that, and then add the color-based computations in. If you just added the cel-shading to the end of the previous shader, you would end up with per-component cel-shading which generally looks terrible:
However, with the changes to apply cel-shading before the color is added, we get a much better result:
The Finished Shader
Putting everything together, the final shader combines the ambient ForwardBase pass with a ForwardAdd pass that layers Phong point lighting, the normal map, the depth map, and cel-shading. You can download it directly here: http://indreams-studios.com/SpriteLamp.shader
There are still plenty of features that we haven’t implemented for our shader, such as ambient occlusion, anisotropy maps, self-shadowing, and wraparound lighting. We’re content with where we are at this point though, since we can continue making progress on our game visually, and there’s always time to improve the shader later. If you end up using (or improving) this shader, please let us know! We’d love to learn that we’re helping out other developers.
Edit: By request, we added the MIT license to the final shader file, so feel free to use it unrestricted!