HDR stands for "High Dynamic Range" and has become a popular technique in digital photography. There are many explanations of it scattered across the internet, but I've found many of them either excessively technical or lacking information entirely. Even Wikipedia, my usual go-to, doesn't have much to say (yet?). So, as Tom Lehrer put it, I have a modest example here:
The premise behind HDR is simple: the world simultaneously exhibits a greater range of luminosity than a camera can capture, and HDR is an attempt to embrace the entire spectrum. Thus an HDR image reveals detail in shadows and highlights which an ordinary image cannot. The human eye has an extremely high dynamic range -- with a little time it can adjust to pitch-dark conditions, extraordinarily bright light, or any combination of the two. A camera needs to be specially calibrated to work in either condition, and often a camera capable of resolving one is useless for the other. HDR, then, is an attempt to make a camera capture light in a way similar to the human eye. This allows scenes with high contrast -- anything including the sun, bright lights at night, strong shadows, etc. -- to be presented in a more realistic manner.
The classic example is a room lit only by a single window, with a chair in the corner. A human could stand in the room and simultaneously perceive the pattern on the chair as well as the outdoor scene through the window. A camera would have to be set to properly expose either the chair or the window. In the former case, the window and outdoor scene would appear as just a white burst on film, having been well over-exposed; the latter method would reveal the outdoors but underexpose the chair into darkness. A compromise -- setting the exposure for the average light level -- probably leads to a boring picture.
HDR photography as it stands today -- due to hardware limitations -- is a post-processing technique. The image is still captured with traditional cameras (this is arguably an advantage, as it is the reason HDR remains accessible to anyone) and software is used to piece together an HDR shot. Specifically, in the room example, all three shots -- one exposed for the chair, one exposed for the window, one exposed for the scene as a whole -- are combined into a single HDR image, containing all the luminosity information about the scene. The HDR image is then used to create a final image in which each pixel is properly exposed, so both the outdoor scene and the chair are visible. In essence, one cherry-picks the properly-exposed parts of each individual image, combining them into one photo.
Let's walk through the mechanics of this process. Consider each individual picture of a given exposure E to be a two-dimensional representation of a scene, having height and width:
Multiple pictures of the same scene, taken at different exposure settings, can be aligned along the exposure dimension. This is like flipping through three photos, all shot from the same position, taken at increasingly higher exposures:
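The stack can be modeled as a small NumPy array -- this is a hypothetical sketch with a made-up linear sensor response, just to fix the geometry of the idea:

```python
import numpy as np

# Three 4x4 grayscale "photos" of one scene, shot at -1, 0, and +1 stops.
# All numbers here are invented for illustration.
height, width = 4, 4
exposures = [-1.0, 0.0, 1.0]

rng = np.random.default_rng(0)
scene = rng.uniform(-2, 2, size=(height, width))   # true log-luminance of the scene

def capture(scene, exposure):
    """Simulate a camera: shift the scene by the exposure setting, then
    clip to the sensor's limited range -- detail outside [0, 1] is lost."""
    return np.clip(0.5 + 0.25 * (scene + exposure), 0.0, 1.0)

# Align the shots along a new exposure axis: shape (3, height, width).
stack = np.stack([capture(scene, e) for e in exposures])
print(stack.shape)  # (3, 4, 4)
```

The clipping in `capture` is the whole problem in miniature: each individual shot destroys detail at one end of the luminosity range, which is why several shots are needed.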
Software is then used to construct the HDR image by interpolating all other exposure levels of the scene from the three "low-dynamic range" images. The software basically takes the two "bookend" photos and the one middle photo and figures out what every photo in between would look like. The result is a continuous bank of images showing a single scene at every possible exposure:
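The interpolation step can be sketched per pixel with `np.interp` -- the pixel values below are invented, and real HDR software first linearizes the camera's response curve, but the idea is the same:

```python
import numpy as np

# Three tiny 2x2 "photos" of one scene, at exposures -1, 0, +1 (made-up values).
exposures = np.array([-1.0, 0.0, 1.0])
stack = np.array([
    [[0.1, 0.2], [0.3, 0.4]],   # under-exposed shot
    [[0.4, 0.5], [0.6, 0.7]],   # middle shot
    [[0.7, 0.8], [0.9, 1.0]],   # over-exposed shot
])

def interpolate_exposure(stack, exposures, e):
    """Estimate the photo at exposure e by interpolating each pixel
    linearly along the exposure axis of the stack."""
    height, width = stack.shape[1:]
    out = np.empty((height, width))
    for i in range(height):
        for j in range(width):
            out[i, j] = np.interp(e, exposures, stack[:, i, j])
    return out

# Pixel (0, 0) at exposure -0.2 lands 80% of the way from 0.1 toward 0.4.
img = interpolate_exposure(stack, exposures, -0.2)
```

Asking for exposure -1, 0, or 1 simply returns one of the original three shots, so nothing is lost by treating the bank as continuous.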
This is actually an HDR image -- this prism-like representation of the scene contains all of the physical description (height and width) as well as the entire spectrum of luminosity information. But it's not really an image at all -- it's a strange three-dimensional structure which simply can't be viewed directly. When we see HDR images in print, they are actually no longer HDR, strictly speaking, but rather two-dimensional, printable projections of the HDR prism. In other words, one has to compress the HDR image back into two dimensions before it can be printed or viewed. One way to do that is to take a simple cross-section:
In this example, the cross section corresponding to an interpolated exposure of -0.2 is extracted from the HDR prism. The cross section is two-dimensional (having only height and width, at a single fixed exposure) and can therefore be printed. By taking the cross sections at -1, 0 and 1, the original three images can be recovered. However, taking simple cross-sections like this doesn't solve the original problem -- it doesn't enhance the dynamic range of the cross section.
Suppose (in a highly simplified manner) that in the prior example of the room, the window is on the left of the image and the chair is on the right. A simple cross section won't grab the proper exposures for each side of the image, but a split cross section will:
This diagram gets a little confusing, but essentially the left half of the image is a cross section from the low-exposure end of the prism (so that the bright window is resolved properly) and the right half of the image is a cross section from the high-exposure end of the prism (so the chair is exposed properly).
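As a sketch -- with constant made-up values standing in for the two slices of the prism -- the split amounts to stitching two halves together:

```python
import numpy as np

# Stand-ins for two slices of the HDR prism: a low-exposure image that
# resolves the bright window, and a high-exposure one that resolves the
# dark chair. The pixel values are invented.
low_slice  = np.full((4, 8), 0.3)
high_slice = np.full((4, 8), 0.7)

# Left half (window) from the low-exposure slice,
# right half (chair) from the high-exposure slice.
split = np.hstack([low_slice[:, :4], high_slice[:, 4:]])
print(split[0])  # [0.3 0.3 0.3 0.3 0.7 0.7 0.7 0.7]
```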
HDR software performs this "split cross-sectioning" not by partitioning an image into two halves, but by optimizing every single pixel, so that each individual point in the resulting photo is at its optimal exposure level. This process of flattening the HDR image back into two dimensions is called tonemapping.
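A per-pixel version of that idea can be sketched like this -- the pixel values are invented, and "closest to middle gray" is a stand-in for whatever optimality criterion a real tonemapper actually uses:

```python
import numpy as np

# Three 2x2 slices of a toy exposure stack (values made up).
stack = np.array([
    [[0.05, 0.50], [0.20, 0.95]],   # exposure -1
    [[0.30, 0.80], [0.50, 1.00]],   # exposure  0
    [[0.60, 1.00], [0.85, 1.00]],   # exposure +1
])

# For every pixel, find the slice whose value lands closest to middle
# gray (0.5), then build the tonemapped image from those choices.
best = np.abs(stack - 0.5).argmin(axis=0)
tonemapped = np.take_along_axis(stack, best[None], axis=0)[0]
# Each pixel now comes from whichever exposure suited it best:
# [[0.6, 0.5], [0.5, 0.95]]
```

Note that neighboring pixels can be drawn from completely different exposures, which is exactly what the split cross-section did, just at the finest possible granularity.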
In practice, an image with every pixel perfectly exposed is boring -- contrast gives images interest. So, tonemapping is frequently used to enhance shadows and highlights that would otherwise be blown out or invisible, but the algorithm is usually suspended before it reaches its logical conclusion. A fully tonemapped photo would look flat and lifeless; a photo merely enhanced by tonemapping is exciting and dynamic.
And that's basically it. To sum:
- Multiple exposures of the same scene are taken
- The exposures are combined into a three-dimensional HDR image
- Software compresses the HDR image back into two dimensions by varying the exposure level of each pixel