Intermediate Computer Graphics Week 9: Deferred Rendering

Up until now, all of these blogs have discussed graphics in the context of forward rendering. Forward rendering is the 'standard' linear rendering process in which each piece of geometry in the scene is passed down the rendering pipeline in turn and rendered directly to the screen.

The typical forward rendering process.

Deferred rendering, on the other hand, does not shade and draw the geometry to the screen immediately. Instead, it first stores several properties of the scene in separate buffers, collectively known as the G-Buffer, which typically includes depth, color, and normal information, among others. The lighting (and other effects) are then calculated in a later pass from this stored data, and everything is composited and rendered to the screen.

Rendering pipeline using deferred rendering.

Deferred rendering has advantages and disadvantages. The main advantage is that it makes a large number of dynamic lights in a scene practical, something that would be too computationally expensive with forward rendering, because the lighting is only computed for the fragments that actually end up on screen rather than for every fragment of every lit object. One disadvantage is the hardware requirement: the G-Buffer relies on multiple render targets and a fair amount of memory bandwidth, which older graphics cards may not support. In addition, deferred rendering on its own does not handle transparent objects. If transparent objects are required in the scene, they have to be drawn using a combination of deferred and forward rendering (typically a forward pass after the deferred lighting).

The various buffers that can be included in a G-Buffer. From left to right: color, depth, and normals.

In terms of the code required for deferred rendering, two fragment shaders are needed. The first takes the geometry and writes all of the required data out to separate render targets. The second is very similar to our regular Phong fragment shader, except that instead of getting normal and position data from the vertex shader, it samples that information from the G-Buffer textures passed in as uniforms. Since this is a rather simple change, I will only walk through the code for the first fragment shader (a rough sketch of the lighting pass appears further below):

#version 420

in vec3 Position;
in vec2 texCoord;
in vec3 Normal;

layout(location = 0) out vec4 color;
layout(location = 1) out vec3 normal;
layout(location = 2) out vec3 position;

uniform sampler2D tex;

void main()
{
	color.rgb = texture(tex, texCoord).rgb;
	color.a = 1.0;

	// Since values in the texture must range from 0 to 1, but normals are represented by values from -1 to 1,
	// the normals must be scaled and offset to conform to the texture requirement.
	normal = normalize(Normal) * 0.5 + 0.5;

	position = Position;
}

Once we have the information stored separately in the G-Buffer, it is rather easy to apply many different effects to the scene, since each effect can sample exactly the data it needs. Note that this is not the only information that can be stored in the G-Buffer; for example, you could also store material properties on a per-fragment basis.
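
To make that concrete, here is a minimal sketch of what the second (lighting) fragment shader might look like. The uniform names (uColor, uNormal, uPosition, uLightDirection, uLightColor, uCameraPosition) and the single directional light are assumptions made for this example, not code from the actual project:

#version 420

in vec2 texCoord;

layout(location = 0) out vec4 fragColor;

// G-Buffer textures written by the first fragment shader (uniform names are assumed).
uniform sampler2D uColor;
uniform sampler2D uNormal;
uniform sampler2D uPosition;

// A single directional light and the camera position, also assumed for this sketch.
uniform vec3 uLightDirection;
uniform vec3 uLightColor;
uniform vec3 uCameraPosition;

void main()
{
	vec3 albedo = texture(uColor, texCoord).rgb;
	vec3 position = texture(uPosition, texCoord).rgb;

	// Undo the scale and offset that was applied when the normal was written to the G-Buffer.
	vec3 normal = normalize(texture(uNormal, texCoord).rgb * 2.0 - 1.0);

	// Standard Phong terms, computed from the sampled G-Buffer data instead of vertex shader inputs.
	vec3 lightDir = normalize(uLightDirection);
	float diffuse = max(dot(normal, -lightDir), 0.0);

	vec3 viewDir = normalize(uCameraPosition - position);
	vec3 reflectDir = reflect(lightDir, normal);
	float specular = pow(max(dot(viewDir, reflectDir), 0.0), 32.0);

	fragColor = vec4(albedo * uLightColor * (diffuse + specular), 1.0);
}

With many lights, this pass would loop over them (or run once per light volume); since the expensive lighting math happens once per screen pixel instead of once per fragment of every object, this is where the performance benefit described earlier comes from.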
