Thursday, April 26, 2018

4 Generations of Camera Module Testing

Pamtek publishes a video showing the four generations of its automated camera module testing machines:

Wednesday, April 25, 2018

High Speed SERDES Technology Enables High Frame Rates, Potentially

EETimes: With the emergence of 112 Gbps per lane SERDES technology and the wide adoption of 56 Gbps per lane links, 12.8 Tbps single-chip switches from several companies have reached the market. This creates a data infrastructure for high frame rate, high resolution imaging systems. For instance, 8K video at 16b per pixel can be transferred at more than 24,000 fps through such a data pipe. Now that the data transfer technology is ready and wafer stacking technology is mature, we could design image sensors supporting this speed and then find an application for them. Or, maybe, find the application first.
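
As a rough sanity check of that frame rate figure, a few lines of Python (assuming 7680 x 4320 for 8K and ignoring any protocol overhead, so this is an upper bound) reproduce the estimate:

    # Upper-bound frame rate for an 8K, 16-bit-per-pixel stream through a
    # 12.8 Tbps switch. Assumes 7680 x 4320 resolution and no protocol overhead.
    PIXELS_8K = 7680 * 4320          # ~33.2 Mpixel per frame
    BITS_PER_PIXEL = 16
    LINK_BPS = 12.8e12               # aggregate switch bandwidth

    bits_per_frame = PIXELS_8K * BITS_PER_PIXEL
    print(f"{LINK_BPS / bits_per_frame:,.0f} fps")   # ~24,113 fps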

Technavio Forecasts Automotive CIS Cost Reductions

BusinessWire: Technavio's global automotive image sensors market report talks about a number of recent trends:

"The global automotive image sensors market has witnessed a reduction in the cost of image sensors. The adoption of image sensors in consumer electronics and smartphones has allowed image sensor manufacturers to experience economies of scale, which further resulted in price reduction. The automotive industry did not benefit just from the reduction in cost but also by improved performance and picture quality.

One of the key trends impacting the growth of the market is the development of high-sensitivity CMOS image sensor with LED flicker mitigation.
"

ST Reports Weak Sales of Imaging Products for Smartphones

SeekingAlpha publishes the transcript of ST's Q1 2018 earnings call. The company does not sound happy about its smartphone imaging business:

"On a sequential basis, AMS [Analog, MEMS and Sensors Group] revenues decreased by 27.4%, principally reflecting the negative impact of smartphone applications to our Imaging business...

As we already anticipated, and now this is well-known by the industry, the second quarter is another quarter of weak sales in smartphones, particularly for our Imaging business.
"

Tuesday, April 24, 2018

Sony Stacked Vision Chip Paper

MDPI Special Issue on the 2017 International Image Sensor Workshop keeps publishing papers presented at the workshop. Sony paper "Design and Performance of a 1 ms High-Speed Vision Chip with 3D-Stacked 140 GOPS Column-Parallel PEs" by Atsushi Nose, Tomohiro Yamazaki, Hironobu Katayama, Shuji Uehara, Masatsugu Kobayashi, Sayaka Shida, Masaki Odahara, Kenichi Takamiya, Shizunori Matsumoto, Leo Miyashita, Yoshihiro Watanabe, Takashi Izawa, Yoshinori Muramatsu, Yoshikazu Nitta, and Masatoshi Ishikawa presents:

"We have developed a high-speed vision chip using 3D stacking technology to address the increasing demand for high-speed vision chips in diverse applications. The chip comprises a 1/3.2-inch, 1.27 Mpixel, 500 fps (0.31 Mpixel, 1000 fps, 2 × 2 binning) vision chip with 3D-stacked column-parallel Analog-to-Digital Converters (ADCs) and 140 Giga Operation per Second (GOPS) programmable Single Instruction Multiple Data (SIMD) column-parallel PEs for new sensing applications. The 3D-stacked structure and column parallel processing architecture achieve high sensitivity, high resolution, and high-accuracy object positioning."

Nondestructive Photon Detection

APS Physics publishes Washington University article "Viewpoint: Single Microwave Photons Spotted on the Rebound" by Kater W. Murch.

"Single optical photon detectors typically absorb an incoming photon and use that energy to generate an electrical signal, or “click,” that indicates the arrival of a single quantum of light. Such a high-precision measurement—at the quantum limit of detection—is a remarkable achievement, but the price of that click is in some cases too high, as the measurement completely destroys the photon. If the photon could be saved, then it could be measured by other detectors or entangled with other photons. Fortunately, there is a way to detect single photons without destroying them.

This quantum nondemolition photon detection was recently demonstrated in the optical domain, and now the feat has been repeated for microwaves. Two research groups—one based at the Swiss Federal Institute of Technology (ETH) in Zurich and the other at the University of Tokyo in Japan—have utilized a cavity-qubit combination to detect a single microwave photon through its reflection off the cavity.
"


The non-destructive optical photon detection paper was published in 2013 and described in Photonics magazine:

"Andreas Reiserer and colleagues at the Max Planck Institute of Quantum Optics have developed a device that leaves the photon untouched upon detection.

In their experiment, Reiserer, Dr. Stephan Ritter and professor Gerhard Rempe developed a cavity consisting of two highly reflecting mirrors closely facing each other. When a photon is put inside the cavity, it travels back and forth thousands of times before it is transmitted or lost, leading to strong interaction between the light particle and a rubidium atom trapped in the cavity. By reflecting the photon away from the device, the team was able to detect the photon by changing its phase rather than its energy.

The phase shift of the atomic state is detected using a well-known technique.
"

I'm not sure what the practical use of this is for image sensing. In theory, it opens the way to an invisible image sensor that detects and then releases all incoming photons without absorbing them.

Prophesee Event-Driven Reference Design

EETimes: Prophesee (formerly Chronocam) comes up with an event-driven sensor reference design for potential customers. The Onboard reference system contains a VGA event-driven camera integrated with Prophesee’s ASIC, a Qualcomm quad-core Snapdragon processor running at 1.5GHz, a 6-axis Inertial Measurement Unit, and interfaces including USB 3.0, Ethernet, micro-HDMI, WiFi (802.11ac), and MIPI CSI-2:

Monday, April 23, 2018

Image Sensor Market is Greater than Lamps

The IC Insights Optoelectronic, Sensor, and Discrete (O-S-D) report gives a nice comparison of the image sensor business with others. It turns out that the world spends more on image sensing than on scene illumination:

LiDAR Patents Review

EETimes publishes Junko Yoshida's article "Who’s the Lidar IP Leader?" A few quotes:

"Pierre Cambou, activity leader for imaging and sensors at market-research firm Yole Développement (Lyon, France), said he can’t imagine a robotic vehicle without lidars.

Qualcomm, LG Innotek, Ricoh and Texas Instruments.. contributions are “reducing the size of lidars” and “increasing the speed with high pulse rate” by using non-scanning technologies. Quanergy, Velodyne, Luminar and LeddarTech... focus on highly specific patented technology that leads to product assertion and its application. Active in the IP landscape are Google, Waymo, Uber, Zoox and Faraday Future. Chinese giants such as Baidu and Chery also have lidar IPs.

Notable is the emergence of lidar IP players in China. They include LeiShen, Robosense, Hesai, Bowei Sensor Tech.
"

Sunday, April 22, 2018

Trinamix and Andanta Company Presentations

Spectronet publishes presentations of two small German image sensor companies - Trinamix and Andanta:


As for 3D imaging, Trinamix complements its initial "chemical 3D imager" idea with a more traditional structured light approach:


Andanta too publishes some info about the company and its products:

Saturday, April 21, 2018

Stretchcam

Columbia University, Northwestern University, and the University of Tokyo publish the paper "Stretchcam: Zooming Using Thin, Elastic Optics" by Daniel C. Sims, Oliver Cossairt, Yonghao Yu, Shree K. Nayar:

"Stretchcam is a thin camera with a lens capable of zooming with small actuations. In our design, an elastic lens array is placed on top of a sparse, rigid array of pixels. This lens array is then stretched using a small mechanical motion in order to change the field of view of the system. We present in this paper the characterization of such a system and simulations which demonstrate the capabilities of stretchcam. We follow this with the presentation of images captured from a prototype device of the proposed design. Our prototype system is able to achieve 1.5 times zoom when the scene is only 300 mm away with only a 3% change of the lens array's original length."

Friday, April 20, 2018

Thursday, April 19, 2018

Leonardo DRS Launches 10um Pixel Thermal Camera

PRNewswire: The pixel race goes on in microbolometer sensors. Leonardo DRS launches its Tenum 640 thermal imager, the first uncooled 10um pixel thermal camera core for OEMs.

The Tenum 640 thermal camera module combines the small pixel structure with a sensitive vanadium oxide micro-bolometer in a 640 x 512 array, and provides LWIR imaging at up to 60fps. The high-resolution LWIR camera core features image contrast enhancement, called "ICE™", 24-bit RGB and YUV (4:2:2) output, and a sensitivity of less than 50 mK NETD.

"The Tenum 640 represents the most advanced, uncooled and cost-effective infrared sensor design available to OEM's today," said Shawn Black, VP and GM of the Leonardo DRS EO&IS business unit. "Our market-leading innovative technologies, such as the Tenum 640, continue to enable greater affordability while delivering uncompromising thermal imaging performance for our customers."

Face Recognition Startup Raises $600m on $3b Valuation

Bloomberg, Teslarati: The three-year-old Chinese startup SenseTime raises $600m from Alibaba Group, Singaporean state firm Temasek Holdings, retailer Suning.com, and other investors at a valuation of more than $3b ($4.5b, according to Reuters), becoming the world’s most valuable face recognition startup. By the way, the second largest Chinese facial-recognition startup, Megvii, raised $460m last year.

The Qualcomm-backed company specializes in systems that analyse faces and images on an enormous scale. SenseTime turned profitable in 2017 and wants to grow its workforce to 2,000 by the end of this year. With the latest financing deal, SenseTime has doubled its valuation in a few months.


Wednesday, April 18, 2018

Corephotonics Signs Broad Agreement with Oppo

OPPO, one of the largest smartphone manufacturers in China, signs a strategic licensing agreement with Corephotonics, the licensor of dual camera technologies. Under the agreement, OPPO will collaborate with Corephotonics on developing its smartphone camera roadmap – supporting high optical zoom factors, accurate depth mapping, digital bokeh and other advanced features, all involving innovations in optics, mechanics, computational photography, deep learning and other fields.

“Mobile photography is a key focus of OPPO, and we have always been eager to forge strong partnerships with leading suppliers like Corephotonics,” said Dr. King, OPPO’s Hardware Director. “Corephotonics’ dual cameras with wide-angled and telephoto lenses, along with the periscope-style construction, optical image stabilization and image fusion technology, edge mobile photography even closer to what digital cameras are capable of doing.”

“OPPO has the most impressive record of innovation in the field of smartphone imaging,” affirmed David Mendlovic, CEO of Corephotonics. “We are proud to be working closely with the OPPO teams on their next generation camera technologies. This strategic agreement is a major validation of the benefits that our camera designs and imaging algorithms have on the future of mobile photography.”

Tuesday, April 17, 2018

LargeSense Unveils 4K Video-Capable Large Format Camera

PRNewswire: LargeSense LLC launches the LS911, said to be the first 8x10 large format digital camera for sale. Everything in this announcement looks, well, impressively large:

  • 9 x 11 inch digital sensor. The company tends to refer to it as 8x10, as that is the closest format people search for.
  • 75 micron pixel size
  • 12MP resolution
  • Live view for easy focusing
  • Video modes (all progressive):
    - Up to 26fps: full sensor scan 3888 x 3072 and optional crop sizes: 3840 x 2160, 3840 x 1600, 1920 x 1080.
    - Up to 70fps with 2x2 pixel binning: full frame 1944 x 1536
  • 14 bit ADC
  • The US price is $106,000, available now
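
A quick sanity check of the numbers above (pure arithmetic on the published pixel count and 75um pitch) confirms the quoted sensor size and resolution:

    # LS911 spec check: 3888 x 3072 pixels at a 75 um pitch.
    PITCH_MM = 0.075
    MM_PER_INCH = 25.4

    width_in = 3888 * PITCH_MM / MM_PER_INCH    # ~11.5 in
    height_in = 3072 * PITCH_MM / MM_PER_INCH   # ~9.1 in
    print(f"{width_in:.1f} x {height_in:.1f} inch, {3888 * 3072 / 1e6:.1f} Mpixel")
    # -> 11.5 x 9.1 inch, 11.9 Mpixel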

The LS911 sensor uses a rolling shutter. Rolling shutters can produce distortions, but the LS911 is said to handle it well, as shown in the test video:



Anitoa Ultra-low Light Bio-optical Sensor in Volume Production

PRNewswire: Anitoa, a Menlo Park, CA startup founded in 2012, announces the volume production of its ultra-low-light CMOS bio-optical sensor, ULS24. Capable of 3×10⁻⁶ lux low-light detection, the Anitoa ULS24 is said to be "the world's most sensitive image sensor manufactured with proven low-cost CMOS image sensor (CIS) process technology."

Until now, molecular testing, such as DNA or RNA assays, and immunoassay testing (e.g. ELISA) have relied on traditional bulky and expensive photomultiplier tube (PMT) or cooled CCD technologies. "Following the trend of CMOS image sensors replacing CCDs in consumer cameras, many customers are exploring this CMOS Bio-optical sensor to replace CCD or PMT designs for new products," says Anitoa SVP Yuping Chung. With the Anitoa ULS24 now in volume production, its low-light sensitivity is said to rival the PMTs and CCDs used in molecular and immunoassay testing devices. The ULS24 achieves this level of sensitivity through a temperature-compensated dark current management algorithm.
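
Anitoa does not disclose how its temperature-compensated dark current management works, but the general idea behind such compensation can be sketched as follows. This is a hypothetical illustration only, assuming the common rule of thumb that dark current roughly doubles every ~6 degC; the array size, reference values, and function names are all made up:

    import numpy as np

    # Hypothetical dark-frame compensation sketch (not Anitoa's algorithm).
    DOUBLING_TEMP_C = 6.0   # assumed dark-current doubling interval

    def compensate_dark(raw, dark_ref, temp_c, temp_ref_c, exp_s, exp_ref_s):
        """Subtract a temperature- and exposure-scaled dark frame from a raw frame."""
        scale = (exp_s / exp_ref_s) * 2.0 ** ((temp_c - temp_ref_c) / DOUBLING_TEMP_C)
        return raw - dark_ref * scale

    # Dark frame calibrated at 25 degC / 1 s, applied to a 10 s exposure at 31 degC.
    dark_ref = np.full((24, 24), 5.0)
    raw = np.full((24, 24), 120.0)
    print(compensate_dark(raw, dark_ref, 31.0, 25.0, 10.0, 1.0)[0, 0])   # 120 - 5*10*2 = 20.0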

Cadence Presents Tensilica Vision Q6 DSP

Cadence announces the Tensilica Vision Q6 DSP, its latest DSP for embedded vision and AI built on a new, faster processor architecture. The fifth-generation Vision Q6 DSP offers 1.5X greater vision and AI performance than its predecessor, the Vision P6 DSP, and 1.25X better power efficiency at the Vision P6 DSP’s peak performance. The Vision Q6 DSP is targeted for embedded vision and on-device AI applications in the smartphone, surveillance camera, automotive, augmented reality (AR)/virtual reality (VR), drone and robotics markets.

NIT Announces HDR Sensor with LED Flicker Mitigation

NIT presents its NSC 1701 HDR Global Shutter CMOS sensor featuring a new Light Flicker Mitigation mode and a 12b digital output.
  • 1280 x 1024 Pixels Resolution 
  • 6.8µm Pixel Pitch 
  • On Chip 12 bits ADC 
  • 1.3 MP
  • Light Flicker Mitigation
  • Color or Monochrome
The NSC1701 sensor is aimed at industrial and emerging embedded applications, and at the automotive market with its new flicker mitigation mode. Engineering samples are available now, and mass production is planned for June 2018.
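
LED flicker is a problem because PWM-driven automotive and traffic-sign LEDs are on only for a fraction of each drive period, so a short exposure can fall entirely in the off phase and miss the light source. NIT does not describe how its LFM mode is implemented; the sketch below merely illustrates the underlying constraint, i.e. the worst-case exposure needed to guarantee overlap with at least one on-pulse (the PWM numbers are assumed for illustration):

    # Worst-case exposure that always overlaps an LED's on-pulse,
    # regardless of the phase between the exposure and the PWM drive.
    # Illustrative only; not a description of NIT's LFM implementation.
    def min_exposure_ms(pwm_freq_hz: float, duty_cycle: float) -> float:
        period_ms = 1000.0 / pwm_freq_hz
        on_ms = period_ms * duty_cycle
        return period_ms - on_ms   # must span the longest possible off-gap

    print(f"{min_exposure_ms(90.0, 0.10):.1f} ms")   # 90 Hz, 10% duty -> ~10.0 ms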

Monday, April 16, 2018

ABI Research: Face Recognition is 5x Easier to Spoof than Fingerprint Recognition

ABI Research: “Face recognition on smartphones is five times easier to spoof than fingerprint recognition,” stated ABI Research Industry Analyst Dimitrios Pavlakis. “Despite the decision to forgo its trademark sapphire sensor in the iPhone X in favor of face recognition (FaceID), Apple may be now forced to return to fingerprints in the next iPhone,” added Pavlakis.

ABI also comments on the Synaptics under-display fingerprint sensor in the Vivo X20+ smartphone:

“Vivo may have been cautious to fully commit to the new technology and left room to fall back to a traditional sensor below the display,” said Jim Mielke, ABI Research’s VP of the Teardowns service. “The performance of this first implementation does warrant some caution as the sensor seemed less responsive and required increased pressure to unlock the phone.”

Samsung 0.9um TetraCell Sensor Reverse Engineered

TechInsights publishes its reverse engineering of Samsung's first 0.9um pixel sensor, found in the 24MP selfie camera of the Vivo V7+ smartphone:

"We did not expect to see the Samsung S5K2X7SP image sensor until Q2, 2018…but here it is in the Vivo V7+."

Omnivision Announces 8MP 2um Nyxel Sensor

PRNewswire: OmniVision announces the 8MP OS08A20, equipped with a 2um pixel featuring Nyxel NIR technology. The OS08A20 is the first sensor to combine Nyxel technology with the PureCel pixel architecture.

"With 8 megapixels of resolution and our industry-leading Nyxel technology, the OS08A20 allows surveillance cameras to capture accurate and detailed images at night, without the need for high-power LEDs," said Brian Fang, Business Development Director at OmniVision. "With such capabilities, this sensor also fills a need in emerging applications, such as video analytics, where accurate object and facial recognition is aided by higher resolution and sensitivity."

Demand for surveillance cameras continues to grow, with well over 125 million such cameras expected to ship globally in 2018, according to IHS Markit. Other applications with similar requirements, such as body-worn cameras for law enforcement, represent an additional growth opportunity.

Nyxel technology delivers a QE improvement at 850nm and 940nm while maintaining a high modulation transfer function, allowing the OS08A20 to monitor a larger area compared with legacy technologies. Eliminating the need for external lighting sources reduces power consumption and enables covert surveillance for improved security. The OS08A20 is also a color CMOS image sensor, employing the PureCel pixel architecture with BSI to capture color images during the daytime.

The OS08A20 is currently sampling and is expected to start volume production in Q2 2018.

Friday, April 13, 2018

Omnivision Proposes Adding Shield Bumps to Pixel Level Interconnect

The Omnivision patent application US20180097030 "Stacked image sensor with shield bumps between interconnects" by Sohei Manabe, Keiji Mabuchi, Takayuki Goto, Vincent Venezia, Boyd Albert Fowler, and Eric A. G. Webster reduces coupling in a stacked sensor with pixel-level interconnects:

"One of the challenges presented with conventional stacked image sensors is the unwanted capacitive coupling that exists between the adjacent interconnection lines between the first and second dies of the stacked image sensors that connect the photodiodes to the pixel support circuits. The capacitive coupling between the adjacent interconnection lines can cause interference or result in other unwanted consequences between adjacent interconnection lines when reading out image data from the photodiodes."


"As such, there are also shield bumps 520 disposed between adjacent interconnection lines 518 along each of the diagonals A-A′ and/or B-B′ of the pixel array of stacked imaging system 500 in accordance with the teachings of the present invention. As such, when every other pixel cell in two rows of the pixel array included in stacked imaging system 500 are read out at a time, there is a shield bump 520 disposed the corresponding interconnect lines 518 in accordance with the teachings of the present invention. With a shield bump 520 disposed between adjacent interconnection lines 518, the coupling capacitance is eliminated to reduce unwanted interference, crosstalk, and the like, during readouts of stacked image sensor 500 in accordance with the teachings of the present invention."

Image Sensors at 2018 VLSI Symposia

The VLSI Symposia, to be held on June 18-22 in Honolulu, Hawaii, publish their official Circuits and Technology programs. In total, there are 8 image sensor papers:
  • C7‐1 A 252 × 144 SPAD Pixel FLASH LiDAR with 1728 Dual‐clock 48.8 ps TDCs, Integrated Histogramming and 14.9‐to‐1 Compression in 180nm CMOS Technology,
    S. Lindner, C. Zhang*, I. Antolovic*, M. Wolf**, E. Charbon***,
    EPFL/University of Zurich, *TUDelft, **University of Zurich, ***EPFL/TUDelft 
  • C7‐2 A 220 m‐Range Direct Time‐of‐Flight 688 × 384 CMOS Image Sensor with Sub‐Photon Signal Extraction (SPSE) Pixels Using Vertical Avalanche Photo Diodes and 6 kHz Light Pulse Counters,
    S. Koyama, M. Ishii, S. Saito, M. Takemoto, Y. Nose, A. Inoue, Y. Sakata, Y. Sugiura, M. Usuda, T. Kabe, S. Kasuga, M. Mori, Y. Hirose, A. Odagawa, T. Tanaka,
    Panasonic Corporation 
  • C7‐3 Multipurpose, Fully‐Integrated 128x128 Event‐Driven MD‐SiPM with 512 16‐bit TDCs with 45 ps LSB and 20 ns Gating,
    A. Carimatto, A. Ulku, S. Lindner*, E. D’Aillon, S. Pellegrini**, B. Rae**, E. Charbon*,
    TU Delft, *EPFL, **ST Microelectronics 
  • C7‐4 A Two‐Tap NIR Lock‐In Pixel CMOS Image Sensor with Background Light Cancelling Capability for Non‐Contact Heart Rate Detection,
    C. Cao, Y. Shirakawa, L. Tan, M. W. Seo, K. Kagawa, K. Yasutomi, T. Kosugi*, S. Aoyama*, N. Teranishi, N. Tsumura**, S. Kawahito,
    Shizuoka University, *Brookman Technology, **Chiba University
  • T7‐2 An Over 120 dB Wide‐Dynamic‐range 3.0 μm Pixel Image Sensor with In‐pixel Capacitor of 41.7 fF/µm2 and High Reliability Enabled by BEOL 3D Capacitor Process,
    M. Takase, S. Isono, Y. Tomekawa, T. Koyanagi, T. Tokuhara, M. Harada, Y. Inoue,
    Panasonic Corporation
  • T15‐4 Next‐generation Fundus Camera with Full Color Image Acquisition in 0‐lx Visible Light by 1.12‐micron Square Pixel, 4K, 30‐fps BSI CMOS Image Sensor with Advanced NIR Multi‐spectral Imaging System,
    H.Sumi, T.Takehara*,S.Miyazaki*, D.Shirahige*, K.Sasagawa*, T. Tokuda*, Y. Watanabe*, N.Kishi, J.Ohta*, M.Ishikawa,
    The University of Tokyo, *NAIST
  • T15‐2 A Near‐ & Short‐Wave IR Tunable InGaAs Nanomembrane PhotoFET on Flexible Substrate for Lightweight and Wide‐Angle Imaging Applications,
    Y.Li, A. Alian*, L.Huang, K. Ang, D. Lin*, D. Mocuta*, N. Collaert*, A.V‐Y Thean,
    National University of Singapore, *IMEC
  • C23‐2 A 2pJ/pixel/direction MIMO Processing based CMOS Image Sensor for Omnidirectional Local Binary Pattern Extraction and Edge Detection,
    X. Zhong, Q. Yu, A. Bermak**, C.‐Y. Tsui, M.‐K. Law*,
    Hong Kong University of Science and Technology, *University of Macau, **also with Hamad Bin Khalifa University

Thursday, April 12, 2018

Luminar Acquires Black Forest Engineering

Optics.org, Techcrunch: Colorado Springs-based image sensor and ROIC design house Black Forest Engineering has been acquired by the LiDAR startup Luminar:

"“This year for us is all about scale. Last year it took a whole day to build each unit — they were being hand assembled by optics PhDs,” said Luminar’s wunderkind founder Austin Russell. “Now we’ve got a 136,000 square foot manufacturing center and we’re down to 8 minutes a unit.”

...the production unit is about 30 percent lighter and more power efficient, can see a bit further (250 meters vs 200), and detect objects with lower reflectivity (think people wearing black clothes in the dark).

The secret is the sensor. Most photosensors in other lidar systems use a silicon-based photodetector. Luminar, however, decided to start from the ground up with InGaAs.

The problem is that indium gallium arsenide is like the Dom Perignon of sensor substrates. It’s expensive as hell and designing for it is a highly specialized field. Luminar only got away with it by minimizing the amount of InGaAs used: only a tiny sliver of it is used where it’s needed, and they engineered around that rather than use the arrays of photodetectors found in many other lidar products. (This restriction goes hand in glove with the “fewer moving parts” and single laser method.)

Last year Luminar was working with a company called Black Forest Engineering to design these chips, and finding their paths inextricably linked, Luminar bought them. The 30 employees at Black Forest, combined with the 200 hired since coming out of stealth, brings the company to 350 total.

By bringing the designers in house and building their own custom versions of not just the photodetector but also the various chips needed to parse and pass on the signals, they brought the cost of the receiver down from tens of thousands of dollars to… three dollars.

“We’ve been able to get rid of these expensive processing chips for timing and stuff,” said Russell. “We build our own ASIC. We only take like a speck of InGaAs and put it onto the chip. And we custom fab the chips.”

“This is something people have assumed there was no way you could ever scale it for production fleets,” he continued. “Well, it turns out it doesn’t actually have to be expensive!”
"


Update: IEEE Spectrum publishes a larger image of Luminar's InGaAs sensors:

Spectral Edge Raises $5.3M

Remember the times when ISP startups were popular - Nethra, Insilica, Alphamosaic, Nucore, Atsana, Mtekvision, etc.? With AI and machine learning in fashion, those times might come back. EETimes reports that UK-based Spectral Edge has raised $5.3m. The startup bets on fusion of IR and RGB images, claiming improvements in image quality:
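
Spectral Edge has not published its algorithm, but a deliberately naive sketch of RGB+NIR fusion gives a flavor of the idea: blend the NIR frame into the luminance of the color image and re-scale the RGB channels accordingly. This is a generic illustration only, not the company's method:

    import numpy as np

    # Naive RGB + NIR fusion: inject NIR detail into the luma channel.
    def fuse_rgb_nir(rgb: np.ndarray, nir: np.ndarray, alpha: float = 0.4) -> np.ndarray:
        """rgb: HxWx3 floats in [0,1]; nir: HxW floats in [0,1]."""
        y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
        y_fused = (1.0 - alpha) * y + alpha * nir
        gain = y_fused / np.maximum(y, 1e-6)        # per-pixel luma gain
        return np.clip(rgb * gain[..., None], 0.0, 1.0)

    fused = fuse_rgb_nir(np.random.rand(480, 640, 3), np.random.rand(480, 640))
    print(fused.shape)   # (480, 640, 3)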

SWIR Camera Market

Esticast Research and Consulting publishes its Shortwave Infra-Red Camera Market report. The key findings in the report:

  • North America held the largest chunk of market share in 2016 owing to rapid technical development and increasing applications.
  • China and other Asian countries are expected to grow the fastest during the forecast period.
  • Area cameras held more than 50% of the global market share. However, linear cameras are expected to grow with the fastest growth rate of 8.31% during the forecast period.
  • Optical communication dominated the global market in 2017, holding nearly 3/7th of the global market.
  • Aerial SWIR cameras are expected to witness the highest CAGR of 10.03% during the forecast period.
