Thursday, May 2, 2013

Deferred shading - Dealing with low precision normal artifacts

While the G-Buffer format I described in the previous post was quite workable, some artifacts DID pop up on careful examination of close-ups.


As in most "real" detective stories, the culprit is really the "obvious" candidate. Obvious?



Well, while the depth values do go through some transformations back and forth, 32 or even 24 bits of normalized precision seem to be quite enough, at least for the small world extents that my demo uses.

Color should be more than enough at 8 bits per channel (in some cases you can get away with even less, if you really must save those last bits).

Which leaves us with the normal.

Yes, R8G8B8A8 signed normalized looks OK at first, but it turns out it does not have enough resolution. In my case the error was only visible on very bright specular highlights, and only up close. Still, since using the alpha channel for the Gaussian exponent was a luxury rather than a necessity, the Gaussian exponent was simply expelled. Yes, the demo now officially supports only Phong and Blinn-Phong specular :).

So, how do we use those freed-up 8 bits to increase the precision of the normal buffer?

There are two ways to approach the problem, the Programmer way and the Mathematician way; they converge to the same result, but by different routes.

"Programmer mode" says: we have 8 unused bits, and we need to use them. Initially this pushes us towards "exotic" formats such as R10G10B10A2 or R11G11B10, which would allow our shaders to keep storing the normals in the same way, just with more precision.

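As a rough, hypothetical illustration of that packing (assuming a D3D-style R10G10B10A2_UNORM target, which is unsigned and therefore needs the usual n * 0.5 + 0.5 remap), the CPU-side equivalent might look like this; the function name is mine, not code from the demo:

#include <algorithm>
#include <cstdint>

// Illustrative sketch only: pack a unit normal into a 10:10:10:2 layout,
// the same way an R10G10B10A2_UNORM render target would store it.
uint32_t packNormal_10_10_10_2(float x, float y, float z)
{
    auto toUnorm10 = [](float v) -> uint32_t
    {
        v = v * 0.5f + 0.5f;                              // [-1,1] -> [0,1]
        v = std::min(std::max(v, 0.0f), 1.0f);            // clamp against drift
        return static_cast<uint32_t>(v * 1023.0f + 0.5f); // quantize to 10 bits
    };
    // x in bits 0-9, y in bits 10-19, z in bits 20-29; the 2 alpha bits stay unused
    return toUnorm10(x) | (toUnorm10(y) << 10) | (toUnorm10(z) << 20);
}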

And then "Mathematics" kicks in and says we are looking at the problem wrong. The problem is not getting more storage for the three components - we don't need three components. A normal may be written as 3 numbers, but it does NOT contain 3 components' worth of information: a normal is a unit-length vector, so x^2 + y^2 + z^2 = 1. This allows us all kinds of neat tricks, all of which exploit that constraint:

  • Store two components and reconstruct the third, using the formula z = sqrt(1 - x^2 - y^2) (see the sketch after the figure below). The caveat here is that we lose the sign of z. If you are working in view space this can be acceptable, as visible normals pointing along the view direction (away from the camera) are rare. You can still get artifacts, though, e.g. at silhouettes or with normal mapping
  • Use an entirely different two-component parameterization, like spherical (polar) coordinates or something else
(Figure: normal stored in an R16G16 signed normalized format, storing x and y and reconstructing z, with z assumed to always be positive. The tiling artifact is gone.)

I really recommend this excellent read, complete with the different cases, error metrics and profiling information, covering most of the usable ways to compress a normal into less space.


At the moment I have implemented the first option (R16G16_SNORM, storing x and y and reconstructing z), just for simplicity. In reality, if you check the link above, other techniques give much better results without much more fuss.
