One of the trending techniques in photography right now is HDR, or High Dynamic Range. The basic concept of HDR photography comes from the fact that today's camera technology, or the camera's sensor to be specific, cannot capture a range of exposure levels as wide as the human eye can. So the HDR technique combines a set of images taken at multiple exposure levels and creates a new one that merges all of those exposure levels. Trey Ratcliff is probably the king of HDR, and his work is shown at stuckincustoms.com; if you want to learn about HDR photography, that's where you should start. What I'm writing here isn't about how to make HDR, but rather a rant about what I find in today's cameras regarding HDR.
The truth is, I don't really do HDR photography. I have thought about trying it someday. Sometimes I shoot a bracket with the mindset that I might do HDR eventually, and yet I'm still too lazy to actually put those shots together. Nowadays, some cameras do support HDR in-camera, meaning the camera takes 3 shots, does all the technical work internally, and spits out the 3 images it originally captured plus a 4th that combines the highlight and shadow details of the 3 original shots, a.k.a. the HDR image. That said, the result usually doesn't come out as good as making an HDR manually on the computer with a specialized tool like Photomatix.
I'm not complaining about that, nor about why today's sensors still can't capture an HDR-like range of exposure levels. Considering how competitive the market for photography tools is nowadays (notice I didn't use the word camera here), I doubt the camera manufacturers are slacking off. Both old and trusted Canon and Nikon are very competitive in every market segment, from high-end dSLRs (digital Single Lens Reflex) to point-and-shoots and everything in between, including entry-level dSLRs and enthusiast P&S cameras. Then there is Sony, a giant electronics company with tons of cash and a fast-growing share in this area, and then all the other companies that make EVIL (Electronic Viewfinder, Interchangeable Lens) cameras, like Olympus, Panasonic, Samsung, and Fujifilm. Nowadays, almost all P&S cameras have to compete with smartphones as well, maybe with the exception of Samsung, which decided to merge the two together instead.
Let’s take a look at the timeline here.
Set the camera to Aperture Priority with bracketing, or manually set the exposures one after another, and let's assume the correct shutter speed for the normal exposure level is 1/30sec at an aperture of f/8 and ISO 100:
|Set the shutter speed to 1/120sec (-2EV)||capture the first image|
|Set the shutter speed to 1/30sec (0EV)||capture the second image|
|Set the shutter speed to 1/7.5sec (+2EV)||capture the third image|
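The shutter speeds above follow directly from EV arithmetic: each stop doubles or halves the exposure time. Here is a minimal sketch of that arithmetic; the function name and 2-stop bracket are my own illustration, not any camera's API.

```python
# Illustrative EV arithmetic for exposure bracketing.
# Each EV step doubles (+1) or halves (-1) the exposure time.

def bracket_shutter_speeds(base_seconds, ev_steps):
    """Return shutter times for the given EV offsets from the base exposure."""
    return [base_seconds * (2 ** ev) for ev in ev_steps]

base = 1 / 30                       # normal exposure: 1/30sec at f/8, ISO 100
steps = [-2, 0, +2]                 # a typical 3-shot, 2-stop bracket
for ev, t in zip(steps, bracket_shutter_speeds(base, steps)):
    print(f"{ev:+d}EV -> 1/{1 / t:g}sec")
# +2EV means 4x the base exposure time (1/7.5sec), -2EV means 1/4 (1/120sec)
```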
Now, here is my rant: why (insert your own choice word here for special impact) do we need to do it that way in the digital age?
Seriously, why haven't camera manufacturers improved things like this already? The camera doesn't need to expose the sensor 3 separate times to capture 3 images. It only needs to expose once to capture 3 or more images. First, let me explain that by expanding the timeline.
|clear the sensor||set the shutter speed to 1/120sec and start the capture||read the sensor out to memory||export the captured data to the first image|
|clear the sensor||set the shutter speed to 1/30sec and start the capture||read the sensor out to memory||export the captured data to the second image|
|clear the sensor||set the shutter speed to 1/7.5sec and start the capture||read the sensor out to memory||export the captured data to the third image|
This is what it should be instead:
|clear the sensor||camera sets the shutter speed to 1/120sec and starts the capture||pause the sensor and copy the sensor readout to memory||export the captured data to the first image|
|camera extends the exposure by another 3/120sec (1/40sec) and continues the capture||pause the sensor and copy the sensor readout to memory||export the captured data to the second image|
|camera extends the exposure by another 12/120sec (1/10sec) and continues the capture||pause the sensor and copy the sensor readout to memory||export the captured data to the third image|
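The timing advantage of the second timeline can be sketched with a bit of arithmetic: in the sequential approach the three exposure times add up, while in the cumulative approach the shorter exposures are just partial readouts of the longest one. This is an illustration under the simplifying assumption that readout and clear overhead is ignored; the function names are mine.

```python
# Illustrative comparison of total exposure time for the two timelines above.
# Readout/clear overhead is ignored; numbers come from the 1/30sec example.

def sequential_total(exposures):
    """Three separate exposures: the exposure times simply add up."""
    return sum(exposures)

def cumulative_total(exposures):
    """One continuous exposure paused for readouts: the total equals the
    longest exposure, since the shorter ones are partial reads of it."""
    return max(exposures)

exposures = [1 / 120, 1 / 30, 1 / 7.5]   # -2EV, 0EV, +2EV
print(f"sequential: {sequential_total(exposures):.4f}sec")
print(f"cumulative: {cumulative_total(exposures):.4f}sec")
# The cumulative total is exactly the +2EV exposure time (1/7.5sec)
```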
I think Canon, Sony, and Fujifilm produce their own image sensors for their own cameras, so this kind of thing should have been possible. The whole process would take about the same time as, or only slightly longer than, capturing a single image at +2EV alone.
The benefit of capturing the image this way for the purpose of HDR is that it reduces the total time it takes to capture all the necessary exposure levels, which reduces camera shake and thus improves image quality. And in the case of a dSLR, exposing this way would require the mirror to flip only once, reducing the camera vibration caused by mirror flipping. I know the latest and greatest gear like the Canon EOS 5D Mark III already has a feature to combine images from different exposure levels, so this isn't a stretch from what is already available. Considering that newer cameras now offer some sort of HDR mode, yet they still do it by capturing 3 images with 3 exposures, I don't believe there is a downside to the camera making, or assisting in making, the HDR this way. In an age where most cameras can shoot video, i.e. camera sensors are flexible enough to work differently from what they were originally designed to do, I don't see why this would be impossible.
I believe this is an area the camera manufacturers can improve upon. There is no reason for the digital camera to be stuck working around the way we captured images back in the film days. It can work differently, for the better.