photo reflex

PHOTOMODELREFLEX ®

GUIDE TO REFLEX PHOTOGRAPHY

Saturday, 9 August 2014

Canon's Dual Pixel CMOS AF Technology




The phase-difference AF method works by dividing the incoming subject image that passes through the photographic lens into two images and then detecting the difference in the focus point position between the two images. The original image is split into two images with two secondary microlens arrays in the AF sensor unit. Two line sensors measure each focus point.
The lens moves forward or backward depending on whether the focal point is in front of the subject (closer to the camera), or behind the subject (farther away from the camera). Because the camera can immediately figure out the direction and amount to move the lens, phase-difference AF can focus very quickly.
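The comparison between the two split images can be sketched as a cross-correlation between two line-sensor signals: the lag at which they best align gives both the direction and the amount of lens movement required. This is a minimal illustration in Python; the `phase_shift` function and the simulated Gaussian signals are hypothetical, not Canon's actual algorithm:

```python
import numpy as np

def phase_shift(sensor_a, sensor_b):
    """Estimate how far sensor_b's pattern is displaced from sensor_a's.

    The sign tells the camera which way the lens must move (front focus
    vs back focus) and the magnitude tells it how far. Illustrative only:
    real AF units use dedicated line sensors and proprietary algorithms.
    """
    a = sensor_a - sensor_a.mean()
    b = sensor_b - sensor_b.mean()
    corr = np.correlate(b, a, mode="full")        # cross-correlation
    return int(np.argmax(corr)) - (len(a) - 1)    # lag of best alignment

# Simulated line sensors: the same intensity peak, displaced by 3 samples.
x = np.arange(64)
left = np.exp(-0.5 * ((x - 30) / 4.0) ** 2)
right = np.exp(-0.5 * ((x - 33) / 4.0) ** 2)
print(phase_shift(left, right))  # 3
```

A positive result means the lens should be driven one way, a negative result the other, which is why a single reading is enough for the system to focus without hunting.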


Traditionally, two autofocusing (AF) systems have co-existed in digital cameras: phase-detection systems in DSLRs and contrast-detection systems in non-reflex cameras. For phase-detection, light entering the lens is split into pairs of images and the intensity patterns (peaks and valleys) that indicate the focus point are compared by the AF processing system.
The separation error is calculated to determine whether the subject is closer to (in front focus) or behind (back focus) the current position, enabling the camera to calculate which way and by how much the lens must be adjusted. Autofocusing is usually very fast with these systems.
Contrast-detection systems take advantage of the fact that contrast is highest when an image is in focus. The difference in contrast between adjacent pixels on the sensor is measured and the lens is adjusted until the maximum contrast is achieved.


Focusing can be very accurate with this system because the image will be at its sharpest when the sensor detects focus. But it’s not necessarily very fast. In addition, since no actual distance measurement is involved, the system can’t calculate whether the subject is in front focus or back focus and focus tracking isn’t possible.
Sometimes, the lens will rack backwards and forwards until the maximum contrast is reached, a phenomenon known as ‘hunting’. This hunting is noticeable when you’re recording movie clips. It also delays the triggering of the shutter for still shots, a phenomenon known as ‘AF lag’.
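The hill-climbing behaviour described above, including the need to step past the peak before the system knows focus has been found, can be sketched with a toy contrast metric (the function names and the metric itself are illustrative, not any camera's firmware):

```python
def contrast(position, best=5.0):
    """Toy contrast metric: peaks when the lens is at the in-focus position."""
    return 1.0 / (1.0 + (position - best) ** 2)

def contrast_detect_af(start=0.0, step=0.5):
    """Step the lens until contrast stops improving, then back up one step.

    This mimics 'hunting': the system only knows it has found peak
    contrast after overshooting it. Purely illustrative.
    """
    pos = start
    current = contrast(pos)
    while True:
        candidate = contrast(pos + step)
        if candidate <= current:      # contrast fell: we passed the peak
            return pos                # back up to the best position seen
        pos += step
        current = candidate

print(contrast_detect_af())  # 5.0, the in-focus position
```

Note that the loop cannot tell in advance which way to move or how far: it must probe, which is why contrast detection is accurate but comparatively slow.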
DSLR cameras can only record an image when the mirror is up, which creates a lag between the shutter press and the exposure. Because a raised mirror sends no light to the dedicated AF module, phase-detection AF can’t be used in movie mode, and DSLR cameras default to contrast AF when they are switched to live view mode.
Sony addressed this problem with its ‘translucent-mirror technology’ in its SLT (Single-Lens Translucent) cameras. The semi-transmissive mirrors in these cameras don’t need to flip up, allowing phase-detection AF to operate all the time. Sony’s system is better at tracking moving subjects and supports fast continuous shooting speeds as well as being usable in movie mode. But, because it relies on light rays hitting the sensor from different angles, it can only operate at apertures of f/5.6 or larger.
Some manufacturers have developed ‘hybrid’ systems that combine both phase- and contrast-detection technologies. Fujifilm was the first company to embed phase-detection detectors in the Super CCD EXR sensor in its F300EXR camera. Similar systems are provided in more recent FinePix cameras as well as recent releases from Canon, Nikon, Olympus, Ricoh, Samsung and Sony.
To make the phase detection system work, different sensing detectors must be able to capture light from different parts of the lens and form two separate images. To create these detectors, some of the photosites in the array are half covered by a black mask on one side, while others are masked on the opposite side. These left- and right-facing photosites register light coming from opposite sides of the lens, enabling the image processor to measure phase differences and determine how much adjustment the lens requires for sharp focus.
This diagram shows how much of the sensor is covered by the phase-detection sensors in a camera with a typical hybrid AF system. Designers must balance the area required to provide adequate tracking AF performance against the reduction in light reaching the sensor.
Typically, the phase-detection sensors cover a relatively small area of the image sensor. This is because the half-covered detectors reduce the amount of light reaching the sensor. In low light levels, the signal may not be strong enough to support autofocusing – and overall image quality could suffer through increased image noise.
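The noise penalty follows from photon shot noise: if a half-masked photosite collects roughly half the photons, its signal-to-noise ratio drops by a factor of about √2. A quick illustration (the photon count is an arbitrary example, not a real sensor specification):

```python
import math

# Photon shot noise: SNR ~ sqrt(N) for N collected photons.
full_pixel_photons = 10000                        # illustrative count
masked_pixel_photons = full_pixel_photons // 2    # half the light blocked

snr_full = math.sqrt(full_pixel_photons)          # 100.0
snr_masked = math.sqrt(masked_pixel_photons)      # ~70.7, sqrt(2) worse
print(snr_full, snr_masked)
```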
And Now for Something New....
At the beginning of July 2013, Canon announced its Dual Pixel CMOS AF system, which provides phase-detection AF without substantially reducing the light reaching the sensor. It’s achieved by using two photodiodes in each photosite on the sensor; one ‘looking’ to the left and the other to the right.
This means readouts are captured simultaneously from each side of the photosite and the difference between them can be measured. The signal differences are used to calculate the distance to the subject, allowing the lens to focus upon it, as in a normal phase-detection AF system.
Canon’s illustrations show the photodiodes in each photosite lying side-by-side, which suggests the array resembles a linear array of AF points with no cross-type points. If this is the case, the system would only be sensitive horizontally (or vertically, depending on the direction of the linear array). Other advantages aside, this could put it at a disadvantage when compared with AF sensor arrays with multiple cross-type points, which can detect in both directions.
One further benefit of the new Canon technology is that these signals can be combined and output as a single image pixel, which Canon claims suffers from ‘virtually no light loss’. In other words, rather than having embedded ‘pixels’ dedicated to AF, which are separate from the imaging photosites, the dual pixels have both AF and imaging functions.
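The dual role of each photosite can be sketched as follows: the left and right photodiode readouts are compared to estimate the defocus shift, then summed to form the ordinary image pixel. The simulated signals and the correlation approach are illustrative assumptions, not Canon's implementation:

```python
import numpy as np

# One row of Dual Pixel photosites: each has a left and a right photodiode.
# When the lens is defocused, the two sub-images are displaced copies of
# the same detail; here we simulate a displacement of 2 photosites.
n = np.arange(48)
left_diodes = np.exp(-0.5 * ((n - 20) / 3.0) ** 2)   # a small highlight
right_diodes = np.exp(-0.5 * ((n - 22) / 3.0) ** 2)  # same detail, shifted

# AF path: cross-correlate the two sub-images to find the defocus shift.
a = left_diodes - left_diodes.mean()
b = right_diodes - right_diodes.mean()
shift = int(np.argmax(np.correlate(b, a, mode="full"))) - (len(a) - 1)

# Imaging path: summing the two photodiodes of each photosite yields the
# ordinary image pixel, so the same photosites serve both AF and imaging.
image_pixels = left_diodes + right_diodes

print(shift)  # 2
```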
Canon’s literature says approximately 80% of the shooting area of the CMOS sensor horizontally and vertically is covered by the dual pixels in the EOS 70D, the first camera with the new technology. The remaining photosites bordering this area have the same dual pixel structure but aren’t used for autofocusing.
Because the new system collects almost twice the signal from the sensor, compared with traditional AF systems, it provides much more information than any Hybrid CMOS AF system delivers. Canon has had to develop a dedicated IC (integrated circuit) so the 70D can process the focusing data separately from the image processing in order to optimise focusing speeds.


The structure of the sensor in Canon’s Dual Pixel CMOS AF system uses two photodiodes in each photosite. The Bayer colour filter (which enables colour signals to be recorded) is shown overlaid upon the sensor. (Source: Canon.)
The diagram above shows how much of the sensor area in Canon’s EOS 70D is covered by the ‘Dual Pixels’ and how the system is used during focusing and shooting. (Source: Canon.)

Canon says its Dual Pixel CMOS AF technology is particularly effective for capturing moving subjects, where Canon claims it can achieve AF speeds ‘that approach’ those of optical viewfinders, even with live view shooting.
Interestingly, the 70D's system works slightly differently when recording still pictures and movie clips. In the still capture mode, it determines the degree of focus adjustment required before moving the lens. In movie mode, focusing and lens movement occur simultaneously to minimise any possible hunting.
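The difference between the two modes can be caricatured with a toy lens model. Everything here is hypothetical, meant to illustrate the behaviour described above rather than Canon's firmware:

```python
class ToyLens:
    """Toy model of a focusing lens (hypothetical, for illustration)."""
    def __init__(self, target=8.0):
        self.position = 0.0
        self.target = target
    def measure_shift(self):
        # Phase detection returns a signed defocus error in one reading.
        return self.target - self.position
    def move(self, amount):
        self.position += amount

# Still mode: determine the full adjustment first, then move in one step.
still_lens = ToyLens()
still_lens.move(still_lens.measure_shift())
print(still_lens.position)            # 8.0: in focus after a single move

# Movie mode: move a fraction of the error each frame, re-measuring as
# the lens travels, so focus glides smoothly instead of visibly jumping.
movie_lens = ToyLens()
for _ in range(6):
    movie_lens.move(0.5 * movie_lens.measure_shift())
print(round(movie_lens.position, 3))  # 7.875, closing smoothly on 8.0
```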
Canon claims the Dual Pixel CMOS AF system is 30% faster than the hybrid systems introduced in the EOS 650D and EOS M. Our tests showed it could match the speed and accuracy of the AF systems in most video camcorders.

 http://www.canon.com/technology/canon_tech/explanation/35mm.html