Sensor Design

From a friend of mine who designs sensors for a company in California.


1) Technology type by itself does not define how sharp the sensor will be, given that you are talking about large pixels in high-performance camera systems (pixels >3 µm).


2) Pixel MTF has A LOT to do with image sharpness. Percent fill factor and crosstalk are the major drivers. Crosstalk is both electrical (a photon is converted to an electron that then ends up in the wrong pixel well) and optical (photons end up in the wrong conversion zone due to optical effects). MTF is about the same for CCD and CMOS given a large pixel. In CCDs, the type of device (full frame or interline) makes a difference for MTF: interline devices have higher MTF, but lower fill factor and dynamic range; full-frame devices have better dynamic range in general. (A small numeric sketch of the fill-factor effect follows point 4 below.)


3) The only reason medium format cameras use CCDs is that the manufacturers who make such large-area devices are CCD makers. Period. CMOS would work just as well at those device sizes. Kodak (Truesense) and Dalsa made large-area CCDs for that market and cornered it. The investment hurdle for CMOS devices in that market is quite high, so nobody jumped in to compete (Sony, for instance, went for the higher-volume 35mm market and stayed out of medium format - smart guys :-) )


4) Leica used CCDs in the past because the CCD makers were the only suppliers willing to make a custom device for them. (I happened to be the guy who made the first deal with Leica for their first digital camera that used a Kodak CCD.) At the time, Leica could have easily used a CMOS device, but they could not find anybody to build it (poor business case and high opportunity-cost issues). I think Leica now uses CMOS - they found a supplier in Europe willing to build the device.
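To put a rough number on the fill-factor point in 2) above, here is a small Python sketch using the simple geometric model of a square pixel aperture. The pixel pitch and fill-factor values are made up for illustration, and crosstalk is ignored entirely, so this only captures the fill-factor half of the story: a lower fill factor means a smaller aperture and therefore higher MTF at the Nyquist frequency, at the cost of collected light.

    import numpy as np

    def pixel_aperture_mtf(freq_cyc_per_mm, pitch_um, fill_factor):
        # Geometric MTF of a square pixel aperture; crosstalk and charge
        # diffusion are ignored, so this is only the fill-factor contribution.
        aperture_mm = pitch_um * 1e-3 * np.sqrt(fill_factor)   # effective aperture width
        return np.abs(np.sinc(freq_cyc_per_mm * aperture_mm))  # np.sinc(x) = sin(pi*x)/(pi*x)

    pitch = 6.0                            # hypothetical 6 um pixel
    nyquist = 1.0 / (2.0 * pitch * 1e-3)   # cycles/mm
    for ff in (1.00, 0.45):                # full-frame-like vs interline-like fill factor
        print(f"fill factor {ff:.2f}: MTF at Nyquist = {pixel_aperture_mtf(nyquist, pitch, ff):.3f}")

With these illustrative numbers the lower-fill-factor case comes out with the higher Nyquist MTF, which is the interline-vs-full-frame trade-off described above.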


Most of the reasons why CCDs are found in some camera systems and CMOS in others are business issues, not technical ones. In high-volume camera systems (cell phones, consumer digi-cams) that require low power and little space, CMOS immediately dominated because of cost and power, not quality. In 35mm systems CMOS is preferred due to system simplicity and power (and, these days, frame rate performance). Given the same frame rate, a 35mm-area CCD needs A LOT more power to operate the device plus system, and given the size constraints in a 35mm camera the design is very challenging. I was the guy who built the first 35mm full-frame sensor camera back in 2002 (Kodak Pro-14n). We looked carefully at CCD vs CMOS and were driven to CMOS mainly by power (and heat abatement). Frame rate is also a major reason why 35mm guys choose CMOS: CCDs are very challenging to run at high frame rates using multiple channels, while CMOS is much easier to deal with.

DSLR RAW and JPEG in-camera processing:


The ‘usual’ steps are (a rough code sketch of a few of these steps follows the list):


  1. RAW data is clocked off the sensor, including the black pixels on the periphery (these are pixels covered by a metal shield so no light can hit them)
  2. The black pixel value is interrogated and the black level is subtracted off the entire image; this establishes an ‘optical black floor’ for the image. This is a critical step and defines the image’s ability to have a neutral black floor (deep shadows that have equal RGB values, etc.)
  3. White balance scaling: the linear RAW data is scaled per channel by the WB scaler defined by either the preset or the AWB algorithm. In daylight, the GRN channel has the most responsivity and the RED and BLU are linearly scaled up. Neutral objects in the scene now have equal RGB values. In tungsten illumination the RED channel is usually the largest, and GRN/BLU are scaled.
  4. Noise reduction steps occur here (or earlier in the chain). These typically are 5x5 or 7x7 filters designed to remove the baseline Gaussian noise.
  5. Bayer pattern interpolation - creates separate RGB planes; the data at this point is still usually 16 bits linear.
  6. Color processing and tone scale processing; these steps are highly complex and highly non-linear. All linearity in the image will be lost at this point.
  7. Noise reduction (again); the chroma channels are usually noise processed to remove aliasing from the Bayer steps and general color noise. This also can be highly non-linear.
  8. Color space rotation to sRGB, drop to 8 bits/pixel, and JPEG compression.
  9. Done!
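Here is a toy Python sketch of steps 2, 3 and 5, the linear part of the chain. The RGGB layout, black level and white-balance gains are assumptions for the example, and the ‘interpolation’ is just 2x2 binning; real in-camera demosaicing, noise reduction and tone processing are far more sophisticated.

    import numpy as np

    def raw_to_linear_rgb(bayer, black_level, wb_gains):
        # Toy version of steps 2, 3 and 5 for an assumed RGGB mosaic.
        img = bayer.astype(np.float32)

        # Step 2: subtract the optical black floor measured from the shielded pixels
        img = np.clip(img - black_level, 0.0, None)

        # Step 3: per-channel white-balance scaling so neutral objects get equal RGB values
        r_gain, g_gain, b_gain = wb_gains
        img[0::2, 0::2] *= r_gain   # R sites
        img[0::2, 1::2] *= g_gain   # G sites (even rows)
        img[1::2, 0::2] *= g_gain   # G sites (odd rows)
        img[1::2, 1::2] *= b_gain   # B sites

        # Step 5 (naive): build RGB planes by 2x2 binning instead of true interpolation
        r = img[0::2, 0::2]
        g = 0.5 * (img[0::2, 1::2] + img[1::2, 0::2])
        b = img[1::2, 1::2]
        return np.stack([r, g, b], axis=-1)   # still linear data, as after step 5

    # Hypothetical 12-bit RGGB frame with a ~25-count black pedestal
    bayer = np.random.randint(25, 4096, size=(8, 8)).astype(np.uint16)
    linear_rgb = raw_to_linear_rgb(bayer, black_level=25.0, wb_gains=(1.9, 1.0, 1.6))

    # Steps 6 and 8, very crudely: a plain gamma curve standing in for the real
    # (highly non-linear) tone/color processing, then a drop to 8 bits per pixel
    display = (np.clip(linear_rgb / linear_rgb.max(), 0.0, 1.0) ** (1.0 / 2.2) * 255.0).astype(np.uint8)

Everything through step 5 is still linear, which is why the black-level and white-balance steps reduce to simple per-pixel arithmetic in the sketch; the heavy non-linear work happens in steps 6-8.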


The RAW file is ‘usually’ saved after step 2, and in my case, after step 1. We stored the black level in the file header for processing later. No clue how modern cameras do this, but I plan on experimenting with my D200 to find out. If you shoot with a lens cap on and the RAW data is all near zero, then the black subtract already happened. If the data has a small offset, usually about 10-30 counts in 12-bit space, then the data is pre-black-level. For accurate RAW measurements you need to subtract this offset, which is determined from a lens cap shot. Note that this offset is shutter speed and temperature dependent, so the lens cap shot may need repeating…
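A quick way to run that lens-cap check numerically, assuming you have already pulled the raw sensor values into an array (e.g. with dcraw or a similar RAW extractor); the ‘near zero’ threshold and the fake dark frame below are just placeholders for illustration:

    import numpy as np

    def check_black_offset(raw_values, bits=12):
        # Median of a lens-cap (dark) frame. Near zero => the camera already
        # subtracted the black level; a small pedestal (typically ~10-30 counts
        # in 12-bit space) => pre-black-level data, subtract it before measuring.
        offset = float(np.median(raw_values))
        done = offset < 2.0   # arbitrary "near zero" threshold
        print(f"median lens-cap level: {offset:.1f} / {2**bits - 1} counts "
              f"({'black subtract already applied' if done else 'pedestal still present'})")
        return offset

    # Hypothetical lens-cap frame: ~20-count pedestal plus a little read noise
    fake_dark = np.random.normal(loc=20.0, scale=2.5, size=(100, 100))
    check_black_offset(fake_dark)

Since the offset depends on shutter speed and temperature, the measured pedestal only applies to exposures shot under the same conditions.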