EDN has published a large two-part article on image sensors for embedded vision systems. The 6-page Part 1 is titled "Image sensors evolve to meet emerging embedded vision needs" and is written by Embedded Vision Alliance editor-in-chief Brian Dipert and his colleagues Eric Gregori and Shehrzad Qureshi. In an accessible style it covers automotive, mobile, and conventional cameras, 3D imaging, and light-field cameras.
The second part is devoted to HDR imaging and is written by Michael Tusch, CEO of Apical. Discussing various HDR implementations, the article describes AltaSens' 1080p/60 A3372 sensor, which uses a "checkerboard" pixel structure in which alternating Bayer-pattern (RGGB) quad-pixel clusters are set to long- and short-exposure configurations:
"Long exposure delivers improved signal-to-noise but results in the saturation of pixels corresponding to bright details; the short exposure pixels conversely capture the bright details properly. Dynamic range reaches ~100 dB. The cost of HDR in this case is the heavy processing required to convert the checkerboard pattern to a normal linear Bayer pattern. This reconstruction requires complex interpolation because, for example, in highlight regions of an HDR image, half of the pixels are missing (clipped). An algorithm must estimate these missing values.
While such interpolation can occur with remarkable effectiveness, some impact on effective resolution inevitably remains. However, this tradeoff is rather well controlled, since the sensor only needs to employ the dual-exposure mode when the scene demands it; the A3372 reverts to non-HDR mode when it's possible to capture the scene via the standard 12-bit single-exposure mode."
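To make the reconstruction step above concrete, here is a minimal NumPy sketch of a dual-exposure checkerboard merge. The 2x2-quad layout, exposure ratio, saturation threshold, and the crude nearest-neighbor fill are all assumptions for illustration; AltaSens' actual interpolation is far more sophisticated, as the quoted passage notes.

```python
import numpy as np

def merge_checkerboard_hdr(raw, ratio=16, sat=4095):
    """Sketch: reconstruct a linear HDR frame from a dual-exposure
    checkerboard capture. Hypothetical layout: alternating 2x2 quads
    hold long- and short-exposure samples; `ratio` is long/short.
    """
    h, w = raw.shape
    # Mark which 2x2 quads are long-exposure (assumed checkerboard layout).
    yy, xx = np.mgrid[0:h, 0:w]
    long_mask = ((yy // 2) + (xx // 2)) % 2 == 0

    # Bring short-exposure samples onto the long-exposure linear scale.
    linear = raw.astype(np.float64)
    linear[~long_mask] *= ratio

    # Clipped long-exposure pixels carry no highlight information;
    # estimate them from nearby short-exposure samples (a crude
    # stand-in for the sensor pipeline's complex interpolation).
    clipped = long_mask & (raw >= sat)
    short_vals = np.where(long_mask, np.nan, linear)
    for y, x in zip(*np.nonzero(clipped)):
        neigh = short_vals[max(0, y - 2):y + 3, max(0, x - 2):x + 3]
        linear[y, x] = np.nanmean(neigh)
    return linear
```

In a highlight region, half the samples (the long-exposure quads) clip at `sat` and are replaced by gain-scaled short-exposure estimates, which is exactly why the article describes the effective resolution cost in such regions.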
Update: Another version of this article was published by EDN-Europe on Oct. 1, 2012.