Post-processing is basically any processing that happens after you have created a ready-to-display 2D image from your 3D model data. HLSL and GLSL give you a very convenient place to perform that post-processing: the pixel shader.
The Graphics pipeline's job is to take 3D model data, "somehow" convert it into a 2D image (in the back buffer), and then display it on-screen.
In general, this means that the final "product" of the pipeline is a "classic" 2D texture (which is nothing more than a block of memory containing colors for display on pixels on the screen).
So any processing done after you have this 2D texture is called "post"-processing.
Conceptually, post-processing is exactly what you would do to a photograph in Photoshop to make it better: adjust brightness/contrast/gamma/exposure, adjust colors, apply effects such as sepia or negative, or any number of more exotic effects such as lens, blur or distortion effects. But the take-home point here is that it is applied to a 2D image, not to a 3D model.
Thus, for GR-Graphics, the post-processing step is just another "program", or "pass", added to a "technique" batch: the previous step, whatever it was (forward or deferred, regardless of the techniques used), is set up to render to a single texture, and its output is hooked up as the input to this pass. The post-processing step then either draws a full-screen quad with the post-processing done in the pixel shader, or has a compute shader edit the texture before display (which is probably the smartest way to do it whenever you need complex calculations, want to write to the output image multiple times, or need some kind of per-pixel feedback).
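As a rough illustration of the compute-shader variant, a minimal kernel could read the rendered texture and write the processed result into a writable texture. This is only a sketch: the output binding (postBuffer at u0), the thread-group size and the entry point name are made up for the example, and the "effect" is just a color negative.
Texture2D renderBuffer : register(t10);
RWTexture2D<float4> postBuffer : register(u0);

[numthreads(8, 8, 1)]
void PostProcessCS(uint3 id : SV_DispatchThreadID)
{
    uint width, height;
    renderBuffer.GetDimensions(width, height);
    if (id.x >= width || id.y >= height)
        return;                                    //skip threads outside the image

    float4 color = renderBuffer[id.xy];            //read the rendered pixel directly, no sampler needed
    postBuffer[id.xy] = float4(1 - color.rgb, 1);  //write the post-processed result (here: a simple negative)
}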
Without further ado, some "all time classic" post-processing pixel shaders (i.e. what everyone seems to do for their first tries in order to understand post-processing):
The vertex shader just passes a full screen quad. Nothing to show, really.
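Still, for completeness, a pass-through vertex shader for such a quad might look roughly like this (the input semantics and the assumption that the quad vertices already come in normalized device coordinates are mine, not necessarily how GR-Graphics feeds it):
struct VSout {
    float4 pos : SV_POSITION;
    float2 texUv : VSOUT_TEXUV;
};

//The quad corners are already in normalized device coordinates, so just pass them through.
VSout FullScreenQuadVS(float2 pos : POSITION, float2 texUv : TEXCOORD0)
{
    VSout outp;
    outp.pos = float4(pos, 0, 1);
    outp.texUv = texUv;
    return outp;
}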
Input to the pixel shader is also trivial:
Texture2D renderBuffer: register(t10);
SamplerState textureSampler: register(s10);
struct VSin {
float2 texUv: VSOUT_TEXUV;
};
0) No effect
This is the base image, without any post-processing.
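Spelled out as a complete pixel shader (the entry point name is just for illustration), the pass-through would be something like:
float4 NoEffectPS(VSin inp) : SV_TARGET
{
    //Just copy the rendered image to the screen, untouched.
    return float4(renderBuffer.Sample(textureSampler, inp.texUv).rgb, 1);
}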
1) The classic "luminance" grayscale effect: this shader computes the luminance of a pixel (a weighted average of the pixel's color channels, where each weight reflects that channel's apparent brightness to the human eye) and uses this luminance in all three color channels of the output.
float3 sample = renderBuffer.Sample(textureSampler, inp.texUv).rgb;
float4 target;
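//Rec. 601 luma weights: green contributes most to perceived brightness, blue least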
target.xyz = dot(sample, float3(.3, 0.59, .11)).xxx;
target.w = 1;
return target;
2) Grey noise / film grain. This belongs to the family of "random" effects. Basically, you randomize all three components of your sample, weighted by some number that determines the maximum strength of the effect. Depending on your needs, this weighting might be additive, multiplicative, or whatever suits the particular need. In general, multiplicative noise is more apparent on bright parts, while additive noise is more apparent on dark parts.
You can find information on obtaining a random value in your pixel shader in many places on the internet. In my case, I am using a procedural noise generator, without too much emphasis on noise quality - just enough number-fiddling to look random.
Other people swear by random textures which they sample for the random number. I cannot say I am a huge fan of that technique, as GPUs tend to go the same way as CPUs: processing is becoming a lot faster than memory, and this seems to be getting even more pronounced as technology advances. An interesting read is the chapter on implementing improved Perlin noise in GPU Gems.
In any case, after getting a "randomize" function, the shader basically boils down to:
target.xyz = randomize3(renderBuffer.Sample(textureSampler, uv).rgb, uv * time, .5);
Randomize3 basically does: FinalColor = (1 + RandomValue(-.5, .5)) * Color;
UV * time is the random seed. Be careful here: this is not a very good seed, because it may quite quickly start showing patterns. A modulo function is probably safer, recycling time after a reasonable interval (a few seconds to a few minutes - anything that will not become apparent).
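Something along these lines, for instance (the wrap interval is arbitrary):
float2 seed = uv * fmod(time, 64.0); //recycle time after about a minute so the seed stays small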
This is basically multiplicative noise: as noted, it has a greater effect on bright pixels, and no effect at all on black pixels.
Additive noise, something like FinalColor = Color + RandomValue(-.5, .5), would be relatively more apparent on dark pixels while still having an effect on bright ones. In any case, the possibilities for noise are infinite, but keep in mind that this is not cryptography: the correct noise is the one that looks right and does not deteriorate over time or create unwanted patterns, not the one that gives you the most random results.
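For reference, a minimal sketch of what such a helper might look like - the sine-based hash is an arbitrary, commonly seen trick, not the actual generator used here:
//Cheap procedural "random" in [0, 1) from a 2D seed - just enough fiddling to look random.
float rand(float2 seed)
{
    return frac(sin(dot(seed, float2(12.9898, 78.233))) * 43758.5453);
}

//Multiplicative grey noise: one random value scales all three channels.
//weight is the maximum strength of the effect (e.g. .5).
float3 randomize3(float3 color, float2 seed, float weight)
{
    float r = (rand(seed) - 0.5) * 2 * weight; //RandomValue(-weight, weight)
    return (1 + r) * color;
}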
3) Colored noise. This is the same as before, but randomizes each color channel separately, effectively adding a colored noise that does NOT maintain the pixel's hue. In essence, the illusion of correct hue is preserved by the randomness of the noise, which lets the human eye recover the average hue by averaging neighboring pixels.
target.xyz = randomize33(renderBuffer.Sample(textureSampler, uv).rgb, uv * time, .5);
Randomize33 is the same as randomize3 with the only difference that it multiplies each component of the vector with a different random value.
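Continuing the sketch above, a hypothetical per-channel variant could be:
//Colored noise: each channel gets its own random value, so hue is not preserved.
float3 randomize33(float3 color, float2 seed, float weight)
{
    float3 r = float3(rand(seed), rand(seed + 17.0), rand(seed + 31.0)); //three decorrelated values
    return (1 + (r - 0.5) * 2 * weight) * color;
}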
4) Night vision. This is a very pretty effect that simulates single-color, brightness-amplifying night vision goggles. Basically, you take the luminance of a pixel, multiply it by a fixed "gain" value (the light amplification), and output it in a predetermined color, traditionally (but not necessarily) a hue of green. An HDR/tone-mapping shader would work even better here, allowing you to build an even more realistic, auto-gain pair of goggles, or to simulate overload - the possibilities are infinite.
static const float gain = 10;
static const float3 color = float3(0.15, 1, 0.3);
...
float3 sample = renderBuffer.Sample(textureSampler, inp.texUv).rgb;
float luminance = dot(float3(0.30, 0.59, 0.11), sample);
luminance *= gain;
float4 target;
target.xyz = luminance * color;
...
Generally night vision effects look very nice and realistic with some noise effect added:
luminance = randomize1(luminance, uv * time, .5);
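Here randomize1 would just be the scalar analogue of the helpers sketched earlier, e.g.:
//Scalar variant: randomize a single (luminance) value.
float randomize1(float value, float2 seed, float weight)
{
    return (1 + (rand(seed) - 0.5) * 2 * weight) * value;
}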
All these effects have to do with color distortion, where the only thing you change is the pixel output color.
Just as easily, you can apply geometric effects, where you modify the pixel position (or rather, where you sample from a different position than the one expected).
A simple example I just cooked up is what we could call a "cubic lens" (not sure if there is a physical analogue to it), which basically works like a strange distorting mirror. What you do here is sample the pixel further left, right, up or down depending on a home-cooked sine function of the texture coordinates (in other words, of the position on screen).
float2 uv = inp.texUv;
//Screen-width frequency, amplitude 0.1
//By increasing frequency you can get distorting
//glass effects (bath glass), waves, whatever.
float2 offset = sin(uv * 2 * 3.14159265359) * .1;
uv += offset;
float3 sample = renderBuffer.Sample(textureSampler, uv).rgb;
return float4(sample, 1);
This particular configuration adds a distortion that is:
- Zero at the edges and center of the screen (i.e. uv = 0, 0.5 or 1, hence uv * 2 * pi = 0, pi or 2pi, hence sin = 0),
- Maximum in magnitude at 1/4 and 3/4 of the screen,
- Of opposite sign in the left/right (or top/bottom) halves of the screen (where uv * 2 * pi = pi/2 or 3pi/2, hence sin = 1 or -1).
That's it for now. I hope I can move on to the prettier and more complex HDR lighting effects, such as bloom, soon.