CMOS Image Sensors Overview

Some questions about image size and image resolution.

The quality of any image depends partly on the number of pixels in the pixel array. Smaller pixels mean more pixels, and therefore more resolution, in the same space. Is that better, though? Well, it depends.

There are other factors that contribute to image quality: the electrical read noise (measured in dark conditions) and the shot noise (a function of the impinging light).

The most noticeable effect is the electrical dark read noise (which includes the readout electronics noise and the pixel noise). The smaller the pixel, the larger the dark read noise usually is.

What is the point of a high-resolution image if we cannot see the zoomed-in detail clearly because of noise? The answer is that high-resolution arrays do not always pay off, precisely because their pixels are noisier.
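
As a rough, hypothetical illustration of that trade-off (made-up numbers, not measurements of any real sensor), the signal-to-noise ratio of a pixel combines read noise and shot noise, and a smaller pixel collects less signal while its read noise does not shrink proportionally:

```python
import math

def pixel_snr_db(signal_e, read_noise_e):
    """SNR of a pixel in dB: shot noise is sqrt(signal) in electrons."""
    shot_noise_e = math.sqrt(signal_e)
    total_noise_e = math.sqrt(read_noise_e**2 + shot_noise_e**2)
    return 20 * math.log10(signal_e / total_noise_e)

# Hypothetical numbers: a large pixel collects more photo-electrons.
print(pixel_snr_db(signal_e=10000, read_noise_e=5))  # big pixel: ~40 dB
print(pixel_snr_db(signal_e=1000,  read_noise_e=4))  # small pixel: ~30 dB
```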


Why does everyone integrate more and more functionality into the sensor, if that raises the cost of the sensor?

It's true, the cost per sensor is higher, because it takes more silicon area to implement all that functionality. However, think of what you would have to do to achieve the same functionality at the PCB level. It would certainly be much more complex.

Your PCB head board would have to be much bigger, and you would have to add discrete components to it as well. In other words, you would just push the complexity off-chip.

The overall system price would probably be the same, or even higher, compared with having all features built in on-chip. On the other hand, if you do as much as you can on-chip, you get a complex system that behaves like a simple one from the user's point of view. The chip does everything for you. That is what everyone wants, right?


Why is the market evolving towards deeper and deeper on-chip ADC resolutions?

On the one hand, it brings the issue of moving huge amounts of data, a problem for the user, but that is a different subject. On the other hand, novel low-noise architectures have been developed, and in that context the ADC resolution matters.

Can you imagine a pixel architecture and a readout data path whose combined noise is below 1 DN of a 10-bit ADC? We would barely see any noise in the image besides shot noise, but that doesn't mean we cannot add one more bit of ADC resolution and still not see the dark read noise. In that case we would increase the dynamic range (DR), which is another important feature of an image sensor.

DR is the ability to see a low-light section of the image with good quality, or good perception, while surrounded by a bright section, without saturating that bright section. This is the case of a high-contrast image, which requires a high dynamic range (HDR) feature. In short, DR is the ability of an image sensor to capture the details of a scene.
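
The usual way to quantify DR (the formula is standard; the numbers below are hypothetical) is the ratio between the full well capacity and the read noise floor:

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """DR in dB: ratio of the largest to the smallest resolvable signal."""
    return 20 * math.log10(full_well_e / read_noise_e)

# Hypothetical sensor: 20 ke- full well, 3 e- read noise -> ~76 dB,
# already more than the ~60 dB spanned by a 10-bit ADC (20*log10(1024)).
print(dynamic_range_db(20000, 3))
```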


Why did LVDS drivers come up on CIS?

Parallel single-ended output taps are slower than serial LVDS output taps. The problem is mostly on the PCB side, not so much at the on-chip pad drivers. Parallel data not only consumes a lot of PCB space, it is also much slower than LVDS data on the PCB. High-speed serial data is not a problem for the FPGAs currently on the market. If the data rate needs to increase, more and more LVDS taps can be used, and a modern FPGA can still handle it.
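
To get a feel for the numbers, here is a minimal lane-count calculation with assumed figures (resolution, bit depth and per-lane rate are illustrative, not the spec of any particular sensor):

```python
def lvds_lanes(width, height, fps, bits_per_pixel, lane_bit_rate_hz):
    """Minimum number of LVDS lanes for a given pixel throughput."""
    pixel_rate = width * height * fps              # pixels per second
    total_bit_rate = pixel_rate * bits_per_pixel   # bits per second
    return -(-total_bit_rate // lane_bit_rate_hz)  # ceiling division

# Hypothetical: 2048x1088 @ 120 fps, 12 bit/pixel, 400 Mbit/s per lane
print(lvds_lanes(2048, 1088, 120, 12, 400_000_000))  # -> 9 lanes
```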


Why semi-autonomous or autonomous operation?

If the sensor can have this feature, so much the better. The user usually wants the sensor running with as little effort as possible. Preferably, the user just wants to supply power and receive data in a convenient way. Just that!

The user doesn't want to bother with complex things, just plug and play. To accomplish this, an on-chip configurable state machine and/or a RISC microcontroller is needed. Once configured, it does everything for you, the way you want.


Why a configurable CIS?

Users often want specific settings for their specific applications. Sometimes sensors need to be tuned as well: internal configuration registers are added to compensate for process variations. Don't forget, a CIS is still an analogue chip, and it might need tuning for proper operation.


Why on-chip references and temperature sensors?

A CIS cannot be allowed to suffer from temperature dependence. The user sees it as an annoying effect, and usually it is not tolerable. On-chip references and temperature sensors let the chip track and compensate for that drift internally.


Why on-chip data processing and data correction?

Sometimes a CIS comes with column or row artifacts. The row artifacts are the hardest to fix. Fixing the column artifacts requires silicon area and adds complexity to the system, but it makes it possible to remove them, or at least reduce their magnitude, and even correct some non-linearities.

Some customers don't want to take the risk of implementing these corrections in their custom designs, because it adds complexity and opens the door to post-production bugs that can compromise the sensor. So not every sensor has these features. But once it's done, the advantages are enormous.


Why electrical and optical black correction?

This feature is not so hard to include, and it pays off the implementation effort because it automatically removes any correlated row-to-row noise on the sensor, improving the sensor's noise performance.


What is TDI and why TDI operation?

TDI stands for time delay integration. This mode of operation allows the sensor to achieve much more DR than it would otherwise. TDI can be done on-chip or off-chip.

On-chip TDI requires silicon area and complex circuitry. Off-chip TDI requires mainly memory: the current line or frame is stored and added to the next one, shifted by one vertical position, assuming a vertical shutter direction.

This ends up having the same effect as taking several images and averaging them. The maximum DN we can get is still 2^(ADC_resolution), but the noise floor (electrical read noise) is reduced by sqrt(N_image_samples). That is why the DR increases.
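
A minimal simulation of that averaging effect, assuming Gaussian read noise with illustrative numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_pixels = 16, 100_000
read_noise_dn = 4.0

# Each TDI stage re-reads the same (static) signal with fresh read noise.
signal_dn = 200.0
samples = signal_dn + rng.normal(0, read_noise_dn, (n_samples, n_pixels))
averaged = samples.mean(axis=0)

print(samples[0].std())  # ~4.0 DN for a single read
print(averaged.std())    # ~1.0 DN, reduced by sqrt(16) = 4
```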


Why CDS and not only DS?

The image information depends on two samples: the reset level and the photo signal level. The scene information lies in the difference between these two measurements. By doing double sampling (DS), converting both samples and subtracting them, we not only get the pixel's scene information but also cancel, or minimise, the fixed pattern noise (FPN).

However, the pixel reset noise is still present, and it can contribute substantially to the total read noise. In some cases it can be acceptable, depending on the application, as with 3T pixels. For example, if the pixel is big, like 20 um x 20 um or 100 um x 100 um, where a large full well capacity (FWC) is needed.

In the cases where it is not acceptable, for example with 4T pixels, a correlated double sampling (CDS) operation is required, so the reset noise is not present in the total read noise floor.
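
A toy model of the difference between CDS and plain DS, with assumed noise figures (kTC reset noise modelled as a random offset per reset):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
signal = 150.0
fpn = rng.normal(0, 10.0, n)            # per-pixel fixed offset

# One exposure: the reset level carries kTC noise; the signal level
# is that same reset level plus the photo signal.
ktc = rng.normal(0, 5.0, n)
reset_level = fpn + ktc
signal_level = fpn + ktc + signal

# CDS: subtract the *same* reset sample -> FPN and kTC both cancel.
cds = signal_level - reset_level
print(cds.std())   # ~0: the reset noise is correlated, fully removed

# DS with an *uncorrelated* reset sample (e.g. next frame's reset, 3T case):
ds = signal_level - (fpn + rng.normal(0, 5.0, n))
print(ds.std())    # ~sqrt(2)*5 DN: the kTC noise remains, even grows
```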


Why column parallel ADC? Why not single global fast ADC?

On the one hand, a single global fast ADC has its benefits, but driving analogue signals across the sensor all the way to the ADC engine is very slow, and therefore not feasible. On the other hand, column-parallel ADCs consume more power and area, as you can imagine, but they make the high line rates and high frame rates demanded today possible.


Why a ramp ADC? Why not another type?

There is no absolute reason why it should be a ramp-type ADC and not another type. All of them have inconveniences, and all are suited for many things. It all depends on the specifications for the sensor, and also on what the designer is more comfortable with. But one reason comes to mind: a ramp ADC keeps the conditions of the signal-to-ramp comparison constant over the conversion time.
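
As a behavioural sketch of a single-slope (ramp) column ADC, with assumed reference voltage and resolution:

```python
def ramp_adc(v_signal, v_ref=1.0, bits=10):
    """Single-slope conversion: the output code is the counter value at
    which the shared ramp crosses the column's sampled voltage."""
    steps = 1 << bits
    lsb = v_ref / steps
    for code in range(steps):
        if code * lsb >= v_signal:   # column comparator fires
            return code
    return steps - 1                 # saturated

# One shared ramp and counter serve every column in parallel.
print([ramp_adc(v) for v in (0.0, 0.25, 0.5, 0.999)])
```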


What are the differences between front side illumination (FSI) and back side illumination (BSI)?

Typical CIS are made with FSI, meaning the pixel array and the readout electronics are drawn on the same side of the silicon. This constrains the pixel layout in a way that reduces the fill factor (FF). The FF is the portion of the pixel's active area exposed to light. In an FSI image sensor, each pixel carries some electronics and its wiring, which occupy space and therefore reduce the pixel FF. In a BSI image sensor, the pixel area is fully exposed, while its electronics are drawn on the opposite side of the silicon. The electrical connection between the matrix and the electronics is made by through-silicon vias (TSV).

This is a complex process, and not all design houses have been able to get it done correctly. Those who can, and take advantage of it, can exhibit pixels with a quantum efficiency (QE) of 0.9 in their CIS specs. This is a major advance and a key factor in putting a company ahead in the market.


Intelligent or smart CIS.

This is a category of CMOS image sensors that carries some deep, complicated image processing on-chip, not just data correction: real image processing on a row-by-row basis.

It can be as complicated as computing an auto-correlation function or a Fourier transform. These are quite complex algorithms and make the system slow. There are other types of image processing algorithms, such as erosion, expansion (dilation), shrinking, edge detection, etc.

These image processing functions are easier to implement. By building more elaborate functions on top of these, and executing them recursively, we can achieve complex algorithms, as if it really were a processor, with a data path, a control unit, an instruction decoding unit, and so on. Some companies have been doing this for quite a few years already.
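
For instance, a minimal sketch of binary erosion (one of the primitives named above), written here in NumPy for clarity rather than as hardware:

```python
import numpy as np

def erode(img):
    """3x3 binary erosion: a pixel survives only if its entire 3x3
    neighbourhood is set. It needs only a 3-row window of context,
    which is why it maps well onto row-by-row hardware."""
    out = np.zeros_like(img)
    out[1:-1, 1:-1] = (
        img[:-2, :-2] & img[:-2, 1:-1] & img[:-2, 2:] &
        img[1:-1, :-2] & img[1:-1, 1:-1] & img[1:-1, 2:] &
        img[2:, :-2] & img[2:, 1:-1] & img[2:, 2:]
    )
    return out

img = np.zeros((5, 5), dtype=np.uint8)
img[1:4, 1:4] = 1      # 3x3 block of ones
print(erode(img))      # only the centre pixel survives
```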


On-chip colour filters and optical cross-talk.

A colour image sensor is nothing more than a typical greyscale CIS with colour filters deposited on top of the matrix. The colour filter pattern is deposited at specific pixel locations.

The electronics are then covered first with the blue filter and then with the red filter, so light is blocked (the process can also be reversed, first red then blue). This way, light cannot interfere with the readout electronics.

Pixel cross-talk can still happen: a given red pixel may pick up influence from neighbouring pixels, for example from green or blue light. This means that, when reading a specific red pixel, we need to extract its green and blue components. The same must be done with the green and blue pixels.

We can accomplish this by implementing hardware algorithms on-chip, sparing the user from doing it off-chip. That is one more reason why integrating more and more features on-chip pays off: the chip does everything for you.
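
One common way to do such a correction (a sketch; the mixing coefficients are invented and would in practice come from characterisation) is a linear colour-correction matrix that inverts the channel mixing:

```python
import numpy as np

# Hypothetical measured mixing: each row says how much of each true
# channel leaks into the measured R, G and B values.
mixing = np.array([
    [0.90, 0.07, 0.03],
    [0.05, 0.90, 0.05],
    [0.03, 0.07, 0.90],
])
correction = np.linalg.inv(mixing)   # applied per pixel after demosaicing

measured = np.array([0.52, 0.33, 0.15])
print(correction @ measured)         # estimate of the true R, G, B
```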


Etalon-effect-free pixels.

Another effect that occurs in image sensors is the etalon effect. It shows up in the sensor's spectral response curve, and is due to the glass layer placed on top of the matrix for mechanical protection. The effect looks like a sine wave superimposed on the spectral response, vanishing towards the left and right edges of the curve.

This has to do with the angle of the incident light and with the light's wavelength. At some wavelengths the glass reflection interferes constructively, at others destructively. The same happens with the angle, for a given wavelength. In some cases this can be corrected, as long as the illumination wavelength is known.
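
The underlying relation is the classical thin-film interference condition 2·n·d·cos(θ) = m·λ; here is a sketch locating the constructive-interference wavelengths for an assumed glass layer:

```python
import math

def constructive_wavelengths_nm(thickness_um, n_glass=1.5, theta_deg=0.0,
                                band=(400, 1000)):
    """Wavelengths where 2*n*d*cos(theta) = m*lambda.
    theta_deg: propagation angle inside the glass layer."""
    theta = math.radians(theta_deg)
    path_nm = 2 * n_glass * thickness_um * 1e3 * math.cos(theta)
    lo, hi = band
    return [path_nm / m
            for m in range(int(path_nm // hi) + 1, int(path_nm // lo) + 1)
            if lo <= path_nm / m <= hi]

# Hypothetical 5 um protective glass layer at normal incidence
print(constructive_wavelengths_nm(5.0))
```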


Distance measurement setups and Time of Flight (TOF) sensors.

If a laser hits a non-ideal planar glass surface, the light passes through it, but it also creates a diffusely reflected light spot on it. The spot behaves like a source of spherical light waves. If the sensor is placed beside the laser, the diffusely reflected light hits the sensor back at a certain angle. With a lens, the spot on the glass is focused onto a specific location on the matrix.

If there is a second flat glass further away, the same effect occurs, but because of the distance between the two glass plates, each reflected spot is focused onto a different location on the matrix. The distance information is in the displacement between them.
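
This is essentially laser triangulation; here is a sketch of the similar-triangles geometry with assumed baseline and focal length:

```python
def triangulated_distance_mm(spot_offset_mm, baseline_mm=50.0,
                             focal_length_mm=16.0):
    """Similar triangles: z / baseline = focal_length / spot_offset."""
    return baseline_mm * focal_length_mm / spot_offset_mm

# Two glass plates -> two spots; their separation on the matrix
# encodes the separation of the plates in depth.
z1 = triangulated_distance_mm(0.80)
z2 = triangulated_distance_mm(0.64)
print(z1, z2, z2 - z1)   # 1000 mm, 1250 mm, 250 mm apart
```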

Time of Flight (TOF) sensors are a category of image sensors that capture, as an image, the depth distance from the focal plane array to the objects in the scene. TOF pixel addressing is different from that of a standard image sensor, but not much more complicated. The difficult part of TOF sensors is the pulsed light generation: it must be sharp, both when turning on and when turning off. The light for this application is usually a laser, or an LED-based illumination setup.

One idea to overcome this problem with the turn-on/turn-off speed of the illumination source is light modulation.
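
With continuous-wave modulation, for example, depth is recovered from the phase shift between emitted and received light; here is a sketch using the standard CW-TOF relation d = c·φ/(4π·f_mod), with an assumed modulation frequency:

```python
import math

C = 299_792_458.0   # speed of light, m/s

def cw_tof_distance_m(phase_rad, f_mod_hz):
    """CW time of flight: the round trip adds phase 2*pi*f*(2d/c)."""
    return C * phase_rad / (4 * math.pi * f_mod_hz)

# Hypothetical 20 MHz modulation; pi/2 phase shift -> ~1.87 m
print(cw_tof_distance_m(math.pi / 2, 20e6))
# The unambiguous range is c / (2*f_mod), ~7.5 m at 20 MHz.
```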


Long pixel layouts on line scan sensors, for spectrometry applications.

White light with a flat spectrum passing through a prism is split into its different wavelengths, each leaving at a specific angle. A sensor placed in front of this split spectrum can be used to detect which wavelengths the incoming light contains. Since the spectrum is dispersed along one axis only, the pixels can be made long in the perpendicular direction, collecting more light without sacrificing spectral resolution.


Regions of interest (ROI) features.

When it is useful to read just a portion of the matrix, or of the line (when dealing with line scan sensors), an ROI feature is needed on the sensor. This lets the sensor reach higher line rates or higher frame rates.

The limiting factor in terms of speed is the data readout. On the other hand, to make the line rate increase linearly with the ROI width, the ADC maximum conversion range must be adapted and reduced, so that it follows the shorter time required to read out a reduced segment of columns.
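
A back-of-the-envelope model of that scaling, with assumed per-column readout time and row overhead:

```python
def line_rate_hz(roi_columns, pixel_readout_ns=10.0, row_overhead_us=2.0):
    """Line period = fixed row overhead + per-column readout time."""
    line_period_s = row_overhead_us * 1e-6 + roi_columns * pixel_readout_ns * 1e-9
    return 1.0 / line_period_s

for cols in (4096, 1024, 256):
    print(cols, round(line_rate_hz(cols)))   # rate rises as the ROI shrinks
```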


Configurable full well capacity (FWC) sensors.

This is a major advantage for sensors that can offer it. Having a full well that is independent of the photodiode size has its benefits, as well as its inherent feasibility complications.

It allows the user to use the same sensor in different applications. It also allows the design house to sell the same sensor to different customers for different applications, making such a sensor a good and flexible standard product.


Multiple non-destructive readouts.

This is an important feature for capturing high-contrast scenes, where high dynamic range (HDR) is required. In this mode, the user can read several lines or frames (depending on sensor type, line scan or area scan) without destroying the photo signal already built up.

From line to line (or frame to frame), the user gets an image with some pixels already saturated and others not. By continuing to sample and convert without resetting the photodiode until the end of the exposure time, the user can select all non-saturated pixels from the captured frames and build a new image from those selected pixels.

This results in an HDR image in which dark pixels are effectively exposed longer than bright pixels, enhancing the photo signal from the dark pixels relative to the bright ones.
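
A sketch of the selection step (assumed 10-bit samples; the rule is the one described above: keep the latest non-saturated reading and normalise it by its effective exposure):

```python
import numpy as np

def compose_hdr(stack, sat_dn=1000):
    """stack: (n_reads, n_pixels) non-destructive samples of one exposure.
    For each pixel keep the latest still-unsaturated read, normalised by
    its read index (a proxy for the integration time it represents)."""
    ok = stack < sat_dn                          # usable samples
    last_ok = ok.cumsum(axis=0).argmax(axis=0)   # index of last True per pixel
    value = stack[last_ok, np.arange(stack.shape[1])]
    return value / (last_ok + 1)                 # DN per unit exposure time

# Two hypothetical pixels: one dim (never saturates), one bright.
stack = np.array([[100,  900],
                  [200, 1023],
                  [300, 1023],
                  [400, 1023]])
print(compose_hdr(stack))   # [100. 900.]: both scaled to a common exposure
```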

Another way to increase the DR, apart from multiple samples in non-destructive readout, is to provide an on-chip piecewise linear response (PWLR) feature, controlling the flatness of the ADC ramp at the PWL corner points.


Package types.

There are many package types for image sensors. The most important are the ceramic types, which are good at dissipating heat; however, they are expensive, whether used for a closed development product or for post-production debug tests and characterisation purposes.

One interesting type of packaging is the chip-on-board (COB) package. Here, the sensor is glued onto a piece of metal (Invar) with good thermal properties, using a glue that also has good insulation properties. The Invar is fixed to the PCB with screws. The glue conducts heat from the sensor silicon to the Invar, while electrically isolating the sensor silicon from the Invar and from anything on the PCB.

Another one is the BGA package. It's more complicated, but it protects your design implementation much better against those who might want to take ideas from it, starting with the pad signal assignment, which a BGA package makes much harder to probe.

Another interesting one is the LCC package. It looks like a very small PCB of basically the same size as the die.


Stitched sensors.

When a sensor product is a set of multiple identical layout cells, and each top cell is already at the edge of the foundry's reticle size limits, stitching is a good way to build very large sensors, especially line scan sensors.

It requires some modifications to the stitching border layout, but apart from that there are only benefits. Stitching has the drawback of creating artifacts at the stitching border, but nothing that can't easily be fixed by interpolation or some other correction measure.


3D stacked image sensors.

This is a solution that competes with BSI sensors, where an almost 100% fill factor can be achieved, along with nearly unity quantum efficiency (QE). 3D stacked sensors need TSVs, just as BSI sensors do. The matrix is made on one silicon substrate and the readout electronics on another; TSVs interconnect the two substrates.


Wafer level probing (WLP).

In order to increase the number of working sensors, WLP is usually performed. It ensures that no catastrophically failed sensor goes on to be assembled in a package. This can make the price per sensor drop and lead to a better market position, because the product becomes more competitive.


Yield.

The larger the sensor, the lower the yield. This has a major impact on the sensor price. Stitching, combined with wafer level probing, might be useful to increase yield.

There are some measures to keep yield from dropping as chip size increases. One of them is placing redundant contacts and vias where there would otherwise be only one or two, for example in digital circuitry. Yield is a problem the design house can contain by adopting proper measures, but it also depends on the stability and accuracy of the fab's process lithography.
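
The size dependence can be illustrated with the classical Poisson defect-density yield model (the defect density below is an assumed figure, not data from any fab):

```python
import math

def poisson_yield(die_area_cm2, defects_per_cm2):
    """Classical Poisson model: Y = exp(-A * D)."""
    return math.exp(-die_area_cm2 * defects_per_cm2)

# Hypothetical 0.1 defects/cm^2: yield falls fast as the die grows.
for area in (0.5, 2.0, 8.0):
    print(area, round(poisson_yield(area, 0.1), 3))
```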


On-chip PLL.

An on-chip PLL is needed when the sensor has a high data rate. Think of cases where the serial data bit period can be as short as 2.5 ns (a 400 MHz clock). That is quite a data rate, isn't it?

But let's take, for the example, a serial bit period of 5 ns (a 200 MHz serialization clock). It is not feasible to supply the chip with a 200 MHz clock, or a higher one, to serialize the pixel data, even as an LVDS input signal. It is simply not practical, neither on the PCB nor for whoever has to generate that clock.

A solution is to supply the sensor with a typical 50 MHz clock from an external quartz oscillator, and use an on-chip PLL to generate a 200 MHz clock synchronous with the incoming 50 MHz clock.

If the deserialization process on the FPGA uses the same 50 MHz input clock, and if it can synthesize from it the required 200 MHz deserialization clock, then everything matches and the data can be deserialized with no phase or clock issues.

However, special attention must be paid to the jitter of the on-chip PLL. With too much jitter, it becomes almost impossible for the FPGA to deserialize the incoming data from the sensor.
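
A quick jitter-budget check, with assumed figures, shows why: the eye of each bit must stay open after subtracting peak-to-peak jitter and the sampler's setup/hold margin:

```python
def eye_margin_ps(bit_rate_hz, pll_jitter_ps_rms, sigmas=7,
                  sampler_setup_hold_ps=150):
    """Remaining eye opening after jitter and sampler margin.
    Peak-to-peak jitter is approximated as `sigmas` * RMS jitter."""
    unit_interval_ps = 1e12 / bit_rate_hz
    jitter_pp_ps = sigmas * pll_jitter_ps_rms
    return unit_interval_ps - jitter_pp_ps - sampler_setup_hold_ps

# Hypothetical: 200 Mbit/s lane (5 ns UI), 30 ps RMS PLL jitter
print(eye_margin_ps(200e6, 30))   # ~4640 ps of margin left
print(eye_margin_ps(1e9, 30))    # only ~640 ps at 1 Gbit/s
```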

