Bump-maps, Normal-maps, Displacement and Vector Displacement. You have probably stumbled upon at least one of these by now. While there is a lot of information about them already, there still seems to be some confusion about their differences and the consequences of using a specific type of map. This article will focus more on what issues to look out for than on how things are done.
All four texture types have the same purpose: to add extra surface detail to a model. They just use different methods to reach that goal. Looking at these methods, they can be divided into two categories which I'm going to call True Displacements and Fake Displacements. The word 'displacing' is just a synonym of the word 'moving', which is right at the core of this difference. True Displacements move vertices around, while Fake Displacements strive to achieve the same look without actually changing the geometry. For now, I will focus on the differences between the two categories instead of breaking down each map individually.
Overall, True Displacements deliver cleaner results but have a heavy impact on render times.
Side note: Just in case you are wondering why Height-maps aren't mentioned: the term Height-map describes how information is stored, not how it is applied to the mesh. They can be applied either as Bump or as Displacement. Software such as Substance uses this term because it does not make a statement about whether you should use the Height-map as True or as Fake Displacement.
Before we go any further, I should mention that I'm a VFX artist and will thus look at this topic from a VFX point of view, using path-traced rendering. Your results will not be the same if you are using a different render method (e.g. game engines). True Displacements might even be fully supported by your engine.
To understand how Fake Displacements work, we first need to take a quick look at path-tracing. To simplify things: light rays are emitted from all light sources in the scene and bounce off the surfaces of the objects until they reach the camera. Alternatively, rays can be sent from the camera and bounce until they reach a light source. Which of these two methods is used has no impact on the effects I'm about to demonstrate.
The angle of the reflected ray is calculated by comparing the angle of the incoming/incident ray to the angle of the Surface Normal. Those of you who had vector math in school and still remember it might recognise the term Normal. In vector math, which is the core of rendering, a Normal is a vector that is perpendicular to a plane. Simplified, we can say that each face on our model is planar and we thus have one Normal per face.
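The relationship between incident ray, Surface Normal and reflected ray can be written out in a few lines. This is a minimal sketch of the standard mirror-reflection formula, not code from any particular render engine; the function name and tuple-based vectors are my own choices for illustration:

```python
def reflect(incident, normal):
    """Reflect an incident direction about a unit Surface Normal:
    r = d - 2 * (d . n) * n
    """
    d, n = incident, normal
    # dot product of incident direction and normal
    dot = d[0]*n[0] + d[1]*n[1] + d[2]*n[2]
    # flip the component of d that points into the surface
    return tuple(d[i] - 2.0 * dot * n[i] for i in range(3))

# A ray coming straight down onto an upward-facing surface bounces straight back up:
print(reflect((0.0, -1.0, 0.0), (0.0, 1.0, 0.0)))  # (0.0, 1.0, 0.0)
```

Note that only the Normal appears in the formula besides the incoming ray itself, which is exactly why storing a custom Normal per pixel is enough to change how light bounces.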
Side note: In reality, polygons aren't always planar. To work around this, render engines split all polygons, including quads, into tris and then calculate the average Normal of all the tris that the face is made of.
Normal-maps are called that because they allow you to store a custom Surface Normal on a per-pixel basis. This means you can freely manipulate the bounce direction of the light rays without altering the geo. Bump-maps convert their height information into Surface Normals at render time. This means that to light rays, which are basically the eyes of your render engine, Normal and Bump-maps look identical.
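The conversion a Bump-map goes through can be sketched with finite differences: the slope of the height field at each pixel tilts the Normal away from straight up. This is a simplified illustration, assuming a row-major 2D grid of heights; the `strength` parameter is a made-up stand-in for the bump-depth control a renderer would expose:

```python
import math

def height_to_normal(height, x, y, strength=1.0):
    # central differences give the slope of the height field,
    # clamped at the borders of the grid
    h, w = len(height), len(height[0])
    dx = height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]
    dy = height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]
    # a steeper slope tilts the normal further away from straight up (0,0,1)
    n = (0.0 - dx * strength, 0.0 - dy * strength, 1.0)
    length = math.sqrt(n[0]**2 + n[1]**2 + n[2]**2)
    return (n[0]/length, n[1]/length, n[2]/length)

# A constant height field stays flat: the normal points straight up
flat = [[0.5] * 4 for _ in range(4)]
print(height_to_normal(flat, 1, 1))  # (0.0, 0.0, 1.0)
```

Because this happens at render time, the engine ends up with the same kind of per-pixel Normals a Normal-map would have stored directly.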
We can mimic the results of True Displacement by matching the angle of the Surface Normal. In this simple example it might look fine, but not deforming the geometry comes at the cost of several artefacts. It doesn't make a difference which texture subtype you are using (Bump or Normal, Displacement or Vector Displacement) or whether your textures were baked or premade: the artefacts are only category-dependent, as they all result from the fact that Fake Displacements do not deform the geometry, while True Displacements do.
Impacts on Lighting
One of the first things you notice when looking at examples is a lack of shadows. Not only does True Displacement affect the shape of the shadows that the base-mesh casts, it can also create new shapes, which cast completely new shadows. Often this is most noticeable with self-shadowing, but it also occurs when casting shadows onto other objects.
Displacing the model will lead to self-shadowing, as the light ray is prevented from hitting the same spot on the surface. If the light source in this graph is the sun (parallel light rays), then using True Displacement would cast a shadow covering all of the area shown in red, which is missing when using Fake Displacement.
Looking at this simple render, you can see that the general direction of the lighting situation is captured with both methods, but the actual shadows of the geometry are missing when using Fake Displacement.
A great example of this effect taking place is creating a brick wall, from a flat base geometry. Using True Displacement, the shadows in the crevices of the wall give the wall a great sense of depth while the lack of these shadows causes the Fake Displacement version to look very flat.
But to be clear, the lack of shadows is not the only visible difference in the interaction with light. With irregular True Displacement, we are increasing the surface area of our object, which leads to a greater amount of bounce light. This causes dark areas to look unnaturally black when using Fake Displacement.
In the render with the little pyramid, demonstrating shadow casting, I increased the contrast of the render to mask this effect, so that it would be easier to focus on one aspect at a time. Looking at the unedited render, the lack of bounce light becomes visible.
After covering the lack of shadow casting and bounce light, there is one more effect that belongs to the impacts on lighting: how the geometry receives shadows.
Like in one of the previous examples, red represents shadows. Comparing the height of the shadows, cast onto our object by the sphere, demonstrates that the shape of the shadow will only be affected by True Displacement.
The True and Fake Displacement applied to the planes represents a simple wave, going up and down. Only the shadow cast onto the plane using True Displacement is being deformed, increasing the sense of depth.
All of the effects I’ve mentioned so far are affected by the angle of the incoming light in comparison to the angle of the camera. If your camera is aligned to the light source, then all these effects will seemingly disappear. The deformation of the shadows, for example, gets reduced as the same deformation also happens to the view of the camera.
Additionally, more and more of the shadow will disappear behind the geometry itself.
While this works pretty well, it's not really a solution. Shadows, and the shapes that they create and enhance, greatly contribute to making the scene look interesting. And if you are using more than one light source, you could only ever align your camera to one of them anyway. So don't let this affect the way you light your scene or place the camera.
Looking at the last render you might have noticed something else. While the Normal-map does recreate the same brightness values, the pyramid still looks very flat. This is due to the next effect that I’m going to cover.
Parallax
As one of the main principles of perspective, Parallax describes the effect which causes objects to seemingly move in relation to each other when moving the camera. It's not limited to separate objects though, it also happens with any two given points in 3D space. This means that the deformation of a mesh using True Displacement has an effect on it.
I used red lines to mark areas that are at the same depth on all three objects so that they can serve as a reference point. The blue lines, on the other hand, are pushed down by the Displacement. Looking straight down, True and Fake Displacement will be almost identical, but at grazing angles, the lines on the Lowpoly/Fake Displacement version will keep an even spacing, while the ones on the mesh using True Displacement will seem to move. If we tilted the camera just a little bit further, the blue line in the back part of the plane would completely disappear behind the centre line. If we pushed the blue lines upwards instead, this effect would be reversed, causing the front blue line to almost cover the centre red line.
Some game engines offer a way to fake Parallax, called Parallax-mapping, which works by applying 2D distortion to the rendered pixels instead of the geometry. This provides some nice extra fidelity but also causes a performance hit.
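The core of that 2D distortion is a per-pixel UV offset. This is a minimal sketch of the basic parallax-mapping idea, not any engine's actual shader: `view_ts` stands for the view direction in tangent space and `scale` for a user-chosen strength, both names being my own assumptions:

```python
def parallax_uv(uv, view_ts, height, scale=0.05):
    # Points that are 'higher' appear to shift towards the viewer,
    # so we sample the texture slightly offset against the view
    # direction, proportionally to the stored height.
    u, v = uv
    vx, vy, vz = view_ts
    return (u - vx / vz * height * scale,
            v - vy / vz * height * scale)

# Looking straight down (view direction (0, 0, 1)) there is no parallax:
print(parallax_uv((0.5, 0.5), (0.0, 0.0, 1.0), height=1.0))  # (0.5, 0.5)
```

The division by `vz` is also why this technique breaks down at grazing angles: as the view direction flattens out, the offset grows towards infinity, which is where the visible swimming artefacts of simple parallax mapping come from.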
Fun fact: Some artists use parallax in really interesting ways. One great example of this is Patrick Hughes; take a look at the video below, demonstrating one of his pieces.
While it is possible to create 90-degree vertical offsets using True Displacement, the same cannot be said about Fake Displacement. Vertical angles can theoretically be stored in Normal-maps, but it doesn’t make any sense to do so, as they logically wouldn’t have any correlating area on a low-poly mesh.
The green line in the image above represents the surface of our base mesh, while the red line represents a Highpoly which we are going to bake onto the mesh. The gradient emerging from our Lowpoly shows the values that the resulting Normal-map would have. Each surface angle of the Highpoly can be assigned to an area on our Lowpoly, except for the vertical angle between areas 1 & 2, as it does not have any width relative to the base mesh. Bump-maps share this issue, as they are converted to Surface Normals at render time. The first image below shows the side view of the Highpoly, which is used to bake Normals and Displacement, while the second image is a render using the baked textures.
As you can see, all the other steep angles can still be stored in a Normal-map, even though they are represented by tiny areas, but the 90-degree angles completely disappear. Note that creating any steep angles with textures should generally be avoided, even when using True Displacement, as this kind of displacement will stretch the pixels of your other textures.
Silhouettes
The fact that Fake Displacement does not change the shape of your geometry also becomes obvious when looking at the silhouette. In extreme cases, the same could even happen with internal overlaps, such as the nose covering a part of a character's cheek.
Generally, there are two ways this effect can become visible: silhouettes that should be soft become blocky, or silhouettes that should have a lot of high-frequency detail suddenly lack said detail.
Sub Surface Scattering
If the asset you are working on is using SSS, then you might also perceive another downside of Fake Displacement.
SSS calculations randomise the angles of the light rays as they travel through objects. This means that they depend far less on surface angles than on the thickness and the actual shape of a mesh. The only reason we see any effect in our example when using Fake Displacement is that the only light reaching the camera from the dark areas is the scattered rays, while the bright areas are a mix of all rays. This gives the darker areas a red tint but does not come anywhere close in terms of fidelity.
Unfortunately, I have to split this article into two parts. ArtStation has a maximum length for its blog posts, which is also heavily impacted by images. (Allegedly this half of the article is 8,000 characters longer than it actually is.) Luckily, I've already finished writing the whole article, so you can immediately continue reading here: