
Sense & Sensors in Digital Photography

In another incarnation I was a commercial photographer. At the end of that life I sold all of my studio equipment and all of my cameras save one, a Horseman 985, a contraption with a black bellows that resembles the Speed Graphic press cameras you see in pre-war movies. It uses roll film and allows the front and back of the camera to be twisted in every direction when it’s parked on a tripod. You can also hold it in your hands and pretend you’re acting in "Front Page." Never have I found a camera so useful. Nowadays, however, digital sensors are pushing the optical limits of lenses and software has become more pliable than leather bellows, not just for adjusting colour but for optical manipulations as well. This year a modestly priced (as such things go) digital SLR supplanted my Horseman. I can no longer see owning a camera that uses film.

In this article I am going to examine the technology of digital cameras, but in an unconventional way. I am going to approach it from basic principles. This approach may seem abstract and theoretical at first, but it won’t seem so for long. You will see that if you understand the scientific principles, you can ignore a lot of marketing hype and save significant sums of money.

Photocells — Imagine a small windowpane with bits of a special metal embedded in the glass and a wire touching those bits. Photons of light bang against the glass. The impact unsettles electrons in the metal. They bang into electrons within the wire, which bump into electrons further down the wire, which bump into still more electrons, so that a wave of moving electrons passes along the wire – an electrical current. The more photons that bang into the pane, the more electricity flows.

This is a photocell, a sensor that is sensitive to the intensity of light. Now imagine millions of cells like this assembled into a checkerboard and shrunk to the size of a postage stamp. Put this stamp-sized collection of photocells inside a camera where the film usually goes. The lens projects an image onto it. Each cell receives a tiny portion of the image and converts that portion into an electrical charge proportionate to the amount of light forming that portion of the picture. Now we have a photosensor.

The complete matrix of charges on this photosensor forms an electrical equivalent of the complete image – but only of the intensity of the image. Since the eye interprets the intensity of light as brightness, brightness devoid of colour, this photosensor provides the information of a colourless photograph, of a black-and-white photograph. If we feed the output of the photosensor to the input of a printer, and if we let the printer spray ink on paper in inverse proportion to the voltage (lower voltage, more ink), then we will see a black-and-white photograph appear. The output of the photosensor can be connected directly to the printer through an amplifier, or it can be converted into digital numbers and the digital numbers can be sent to the printer. The first approach is analog, the second is digital. The greater the range of digital numbers, the finer the steps from black to white. If there are enough steps, the printout will look like a continuous-tone photograph.
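
For readers who think in code, here is a rough sketch of that last step, the conversion of analog charges into digital numbers. It is written in Python with NumPy; the voltages and the 8-bit depth are invented purely for illustration, not taken from any real sensor.

    import numpy as np

    # A hypothetical grid of analog photosensor outputs: 0.0 means no
    # light fell on the cell, 1.0 means the cell reached full charge.
    voltages = np.array([[0.02, 0.40, 0.41],
                         [0.39, 0.95, 0.42],
                         [0.03, 0.38, 0.04]])

    # Digitize into 256 steps (8 bits). More bits would mean finer steps
    # from black to white, at the cost of more data for every cell.
    levels = 256
    digital = np.round(voltages * (levels - 1)).astype(np.uint8)
    print(digital)

    # A printer fed these numbers would spray ink in inverse proportion
    # to each value: low numbers print dark, high numbers print light.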

To make a photosensor record colour, we need to make it sensitive to wavelengths of light as the eye is sensitive to them. We see long wavelengths weakly as reds, short wavelengths very weakly as blues, and medium wavelengths strongly as greens. The easiest way to make a black-and-white photosensor record colour is to put filters over the cells so that alternate cells respond to short wavelengths, medium ones and long ones. Since the eye is most sensitive to medium wavelengths, it is practical to use twice as many of these as the others: one blue, one red, two greens. Such a set of filtered cells – red, green, blue, green – forms the Bayer photosensor (named after its inventor) that is used in nearly every digital camera.
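
To make the filtering concrete, here is a minimal sketch of how an RGGB mosaic throws away two of the three colours at each cell. It assumes a plain checkerboard layout and ignores everything else a real sensor does (microlenses, blurring filters, and so on).

    import numpy as np

    def bayer_mosaic(rgb):
        """Keep only the one colour each cell's filter passes, in the
        usual layout: R G on one row, G B on the next, repeated."""
        h, w, _ = rgb.shape
        mosaic = np.zeros((h, w), dtype=rgb.dtype)
        mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # red-filtered cells
        mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # green-filtered cells
        mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # green-filtered cells
        mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # blue-filtered cells
        return mosaic

    # A 4 x 4 patch of cells records 16 intensity samples but only four
    # complete red-green-blue-green picture elements.
    print(bayer_mosaic(np.random.rand(4, 4, 3)))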

Now consider what happens when a spot of light is smaller than a group of four cells, when it is small enough to strike only a single cell. Assume the spot to be white light, which includes every wavelength. If the white spot falls on a blue-filtered cell, then the picture will show the spot to be blue. If the white spot falls on a red-filtered cell, the picture will show the spot to be red. If it falls on a green-filtered cell, the spot will look green. This can cause so many errors in the image that manufacturers try to prevent it from happening by blurring the image, by putting a diffusing filter in front of the sensor to smear small spots of light over more than one cell.

Note that in a sensor like this, four cells form the smallest unit that can capture full information about some part of a picture. That is, four cells form the basic element of a picture, the basic "picture element" or "pixel". Unfortunately, to make their products sound more impressive, manufacturers count cells as pixels. That’s like saying a piano has 234 notes, not 88, because it is built with 234 strings. Since the sensors function differently at the level of the cell and the level of the pixel, it is important to ignore the advertising and to discriminate appropriately between pixel and cell. I shall do that in this article.

A simpler approach would be to design a sensor in which every cell is sensitive to every wavelength. Such a sensor was patented by Foveon, Inc., in 2002, and is currently in its second commercial generation. Foveon’s sensor uses no coloured filters but instead embeds photo-sensitive materials within the silicon at three depths. The longer the wavelength of the light, the farther it penetrates the semi-transparent silicon and the deeper the photo-sensitive material it stimulates. With a Foveon sensor, every cell records a complete pixel with all wavelengths. (Note, however, that Foveon have taken to multiplying the number of pixels by three, to sound competitive in their ads.)

How many pixels do you need? The smallest detail usable in a print is defined by the finest lines that a person can see. At a close reading distance (about 10 inches, or 25 cm), somebody with perfect vision can resolve lines slightly finer than those on the 20/20 (6/6) line of the eye chart, lines of about 8 line-pairs per millimetre (l-p/mm), which is the unit of optical resolution.

However, those are black-and-white lines. No ordinary photograph contains black-and-white lines so thin because no camera can produce them on photographic (as distinct from lithographic) film. No lens can create such fine lines without beginning to blur the blacks and whites into grey. Dark-grey-and-light-grey lines need to be thicker than black-and-white lines to be seen. In the perception of fine lines, a halving or a doubling of thickness is usually the smallest difference of any practical significance, so this pronouncement of Schneider-Kreuznach sounds perfectly reasonable to me: "A picture can be regarded as impeccably sharp if, when viewed from a distance of 25 cm, it has a resolution of about 4 l-p/mm." On an 8" x 12" photo, this is 1,600 by 2,400 pixels, or 3.8 megapixels. (8" x 12" is about the size of A4 paper. It isn’t quite a standard size of a photo but will prove more convenient for discussion than 8" x 10".)
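
The arithmetic behind that figure is worth spelling out. Four line-pairs per millimetre means eight pixels per millimetre, because each line pair needs one pixel for the line and one for the gap:

    lp_per_mm = 4                       # Schneider's "impeccably sharp"
    px_per_mm = lp_per_mm * 2           # two pixels per line pair
    mm_per_inch = 25.4

    width_px  = 8  * mm_per_inch * px_per_mm    # about 1,600
    height_px = 12 * mm_per_inch * px_per_mm    # about 2,400
    print(width_px * height_px / 1e6)           # about 3.8 to 4 megapixels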

In short, 4 million pixels carry all of the useful information that you can put into an 8" x 12" photograph. Finer detail than this will matter to technical aficionados making magnified comparisons, and it may matter for scientific or forensic tasks, but it will not matter for ordinary purposes. The same holds for larger prints because we don’t normally view larger photographs from only 10 inches away. It holds even for the gigantic images in first-run movie theatres. The digital processing used routinely for editing and special effects generates movies with no more than 2,048 pixels of information from left to right, no matter how wide the screen. The vertical dimension differs among cinematic formats but is typically around 1,500 pixels.

This, of course, presents quite a paradox: a frame of a Cinemascope print obviously contains a lot more than 4 million pixels. Even an 8" x 12" print from a 300-dpi printer contains 2,400 pixels by 3,600 pixels, or 8.6 million pixels. Large prints need those additional pixels to prevent our seeing jagged edges on diagonal lines, because the eye will see discontinuities in lines that are finer than the lines themselves.

Since no photograph of any size can contain more than 3 to 4 million elements of information, even when made from film, any substantial enlargement needs to be composed primarily of pixels that do not exist in the original. These pixels need to be interpolated: interpolated through continuous optical integration (film), interpolated mechanically (high-resolution scanner), or interpolated logically by software (digital photography). This need for interpolation in enlargements makes interpolating algorithms fundamentally important to digital photography. For most enlargements, the quality of the interpolating algorithm matters more than the resolution of the sensor or the quality of the lens. We shall come back to this.

For the moment – indeed, forevermore – it is essential to keep straight the distinction between (1) the information that is contained within an image and (2) the presentation of this information. Both are often measured by pixels but they are orthogonal dimensions. The information within a picture can be described by a certain number of pixels. That information may be interpolated into any number of additional pixels but doing so adds nothing to the information, it merely presents the information in smaller pieces.

To illustrate this, here are some examples:


  • A good 8" x 12" photograph and the same photo run full-page in a tabloid newspaper both contain about 1 megapixel of information.

  • A slightly better photograph and the same photo run full-page in a glossy magazine and a broadsheet newspaper all contain about 1.9 megapixels of information.

  • A slightly better photograph still – the best possible – and the same photo spread over two pages in a glossy magazine both contain about 3.8 megapixels of information.


If you have an 8" x 10" photo printer, you can compare those levels of information by printing out a set of pictures (linked below, about 30 MB) that I took at approximately those resolutions, keeping everything else the same. (The test pictures were shot at 3.4, 1.5 and 0.86 megapixels: I used a Foveon sensor and, to generate the lower resolutions, used its built-in facility to average cells electronically in pairs or in groups of four.) I enlarged the pictures to 3,140 by 2,093 pixels using the best interpolator I could find.

<http://www.tidbits.com/resources/751/HighMedLowResolution.zip>

The photos are JPEG 2000 files, saved in GraphicConverter at 100 percent quality using QuickTime lossless compression. To prepare them I adjusted the levels, cleaned up some dirt in the sky, then enlarged them in PhotoZoom Pro using the default settings for "Photo – Regular." Those settings include a modest and appropriate amount of sharpening.

What you will see, if you print them, is surprisingly small differences from one level of resolution to the next. Each of these photos looks sharp on its own, and at arm’s length they all look the same. You can see a difference only if you compare them up close. That, of course, is because the only information that’s missing from the lower-resolution pictures is information that is close to the limit of the eye’s acuity and thus is difficult to see.

Bayer vs. Foveon in Theory — Cameras today fall into two categories, those with a Bayer sensor and those with a Foveon sensor, of which at this writing there are only two: a theoretical Polaroid x530 and a very real Sigma SD-10.

<http://www.pdcameras.com/usa/catalog.php?itemname=x530>

<http://www.foveon.com/SD10_info.html>

In a Bayer sensor, a single cell records a single colour, but a pixel in the print can be any colour. Carl Zeiss explain this: "Each pixel of the CCD has exactly one filter color patch in front of it. It can sense the intensity for this color only. But how can the two remaining color intensities be sensed at the very location of this pixel? They cannot. They have to be generated instead through interpolation (averaging) by monitoring the signals from the surrounding pixels which have filters of these other two colors in front of them."

Since the cells provide a lot of partial information, the interpolation can be accurate, but it can be inaccurate as well. Patterns of coloured light can interact with the checkerboard pattern of filters over the cells to generate grotesque moire patterns. To avoid these, Bayer sensors are covered with a filter that blurs every spot of light over more than one cell. The net result proves to be interpolated resolution that varies with colour and peaks with black-and-white at about 50 percent more line-pairs/millimetre than the intrinsic resolution of the sensor. This sounds like a lot but cannot be seen unless you look closely.
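
Here is a deliberately naive sketch of that interpolation, in Python with NumPy, just to show the principle: each cell's two missing colours are filled in by averaging the nearest cells that did record them. Real demosaicing algorithms are far more elaborate, and nothing here reproduces any particular camera's firmware.

    import numpy as np

    def demosaic_naive(mosaic):
        """Fill in the two colours each cell did not record by averaging
        its 3 x 3 neighbourhood, counting only the cells whose filters
        passed that colour. Edges wrap around -- good enough for a sketch."""
        h, w = mosaic.shape
        r = np.zeros((h, w), bool); r[0::2, 0::2] = True   # red cells
        b = np.zeros((h, w), bool); b[1::2, 1::2] = True   # blue cells
        g = ~(r | b)                                       # green cells

        out = np.zeros((h, w, 3))
        for ch, mask in enumerate((r, g, b)):
            known = np.where(mask, mosaic, 0.0)
            count = mask.astype(float)
            total = sum(np.roll(np.roll(known, dy, 0), dx, 1)
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            hits  = sum(np.roll(np.roll(count, dy, 0), dx, 1)
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            estimate = total / np.maximum(hits, 1)
            out[..., ch] = np.where(mask, mosaic, estimate)  # keep what was measured
        return out

    full_colour = demosaic_naive(np.random.rand(6, 6))   # made-up mosaic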

More problematic is the fact that this filter does not merely prevent moire patterns, it also blurs edges. With a Bayer sensor, every edge of every line is blurred. You can see the interpolated resolution and the blurring in the magnified tests in the picture linked below. There I have compared cameras with a Foveon and a Bayer sensor containing the same number of pixels – pixels, not cells. Both have 3.4 million pixels (although the Bayer has 13.8 million cells).

<http://www.tidbits.com/resources/751/Resolution.jpg>

People make a big deal about resolution because it sounds important and is easy to test, but aside from special cases like astronomical observation, fine resolution actually matters little. By definition, at the limits of resolution, we can only just make out detail. Anything that is barely visible will not obtrude itself upon our attention or be badly missed if it is not there. What we see easily is what matters to us, what determines our impression of sharpness. Our impression of sharpness is determined by the abruptness and contrast at the edges of lines that are broad enough to be easily made out. You can see this with the two tortoises in this picture linked below. The sharper tortoise has less resolution but its edges are more clearly defined.

<http://www.tidbits.com/resources/751/Sharpness.jpg>

The Bayer sensor resolves finer black-and-white lines but a Bayer sensor will not reproduce any line so sharply as the Foveon. As a result, when comparing two top-quality images, I would expect the Bayer’s image to look slightly more impressive when large blow-ups are examined up close, but I would expect the Foveon’s to look slightly clearer when held a little farther away. Moreover, when detail is too fine for the sensor to resolve, the Bayer looks ugly or blank but the Foveon interpolates pseudo-detail. This means that in some areas, large enlargements examined closely might actually look better with the Foveon. In sum, I would expect the 3.4 megapixel Foveon and what is marketed as a 13.8-megapixel Bayer to be in the same league. I would expect photographs from them to be different but comparable overall, if they are enlarged with an appropriate algorithm.

Bayer vs. Foveon in Practice — "If they are enlarged with an appropriate algorithm…" – that statement is critical to a sensible comparison. Usually, if you magnify an object a little, it won’t change its appearance much. If you simply interpolate according to some kind of running average, you can increase its size to a certain extent and it will still look reasonable. This is how most enlargements are made. It is the basis of the bicubic algorithm used in most photo editors, including Photoshop and, apparently, Sigma’s PhotoPro. It is also the basis of most comparisons between Bayer and Foveon. However, a running average will widen transitions at the edges of lines, and it will destroy the Foveon’s sharp edges, softening them into the edges of a Bayer. A better class of algorithm will stop averaging at lines. Any form of averaging, though, tends to distort small regularities (wavelets) that occur in similar forms at different scales. Best of all are algorithms that look for wavelets, too. The only Macintosh application I know of in that class is PhotoZoom Pro. PhotoZoom Pro has a limited set of features and some annoying bugs – version 1.095 for the Mac feels like a beta release – but it creates superb enlargements.

<http://www.trulyphotomagic.com/>
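
For comparison's sake, this is what the ordinary running-average approach looks like in code, using Pillow's bicubic resampling in Python. It is only the garden-variety interpolation described above; PhotoZoom Pro's edge- and wavelet-aware algorithm is proprietary, and nothing here attempts to reproduce it. The file names are placeholders.

    from PIL import Image

    src = Image.open("low_res.jpg")              # hypothetical input
    w, h = src.size

    # Bicubic resampling is a running average over 4 x 4 neighbourhoods:
    # smooth, but it softens every edge it crosses.
    big_soft = src.resize((w * 3, h * 3), Image.BICUBIC)

    # Nearest-neighbour does no averaging at all: edges stay hard but
    # diagonals turn into staircases.
    big_jaggy = src.resize((w * 3, h * 3), Image.NEAREST)

    big_soft.save("enlarged_bicubic.png")
    big_jaggy.save("enlarged_nearest.png")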

An appropriate comparison of the Bayer and Foveon sensors would see how much information these sensors capture overall. (How much spatial information, that is: comparing colour would be comparing amoebas, as I explained in "Colour & Computers" in TidBITS-749.) To do this, I tested an SD-10 against an SLR that was based on a larger Bayer sensor, a sensor 70 percent larger than the Foveon that contained 13.8 million cells. Kodak were most helpful in supplying this camera once they heard Doctors Without Borders (Médecins Sans Frontières) was to benefit (see the PayBITS block at the bottom of this article to make a donation if you’ve found this article helpful). Also, Sigma sent me a matched pair of 50-mm macro lenses to use with the cameras.

<https://tidbits.com/getbits.acgi?tbart=07840>

I copied an oil painting with a wide variety of colours and a lot of fine textural detail. With each camera I photographed a large chunk of the painting, cropped out a small section from the centre, blew up that section to the same size as the original using PhotoZoom Pro (the defaults for "Photo – Regular"), and compared that blow-up to a gold standard, a close-up that had not seen any enlargement, interpolation, or blurring filter in front of the sensor. Before blowing them up I balanced all three photos to be as similar as I could, then, to prevent unavoidable differences in colour from confounding the spatial information, I converted all three images to black-and-white. I did this in ImageJ. First I split each image into its three channels, then I equalized the contrast of each channel across the histogram, then I combined the channels back into a colour picture, converted the new colour picture to 8-bit, and equalized the contrast of the 8-bit file. (See the second link below for an explanation of contrast-equalization.) I chose a painting in which most of the coloured brush strokes were outlined with black brush strokes, so that adjacent colours would not merge after conversion into similar shades of grey. With my 314-dpi printer, the two enlargements are the equivalent of chunks from a 14" x 21" print.

<http://rsb.info.nih.gov/ij/>

<http://homepages.inf.ed.ac.uk/rbf/HIPR2/histeq.htm#1>
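
For anyone who wants to reproduce roughly the same black-and-white standardisation without ImageJ, here is an approximate sketch in Python with NumPy and Pillow. The histogram-equalization details will not match ImageJ's exactly, and the file name is a placeholder.

    import numpy as np
    from PIL import Image

    def equalize(channel):
        """Spread an 8-bit channel's histogram evenly from 0 to 255."""
        hist, _ = np.histogram(channel.ravel(), bins=256, range=(0, 256))
        cdf = hist.cumsum().astype(float)
        cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
        return (cdf[channel] * 255).astype(np.uint8)

    img = np.asarray(Image.open("crop.tif").convert("RGB"))

    # 1. Equalize each colour channel separately, then recombine.
    eq_rgb = np.dstack([equalize(img[..., c]) for c in range(3)])

    # 2. Convert the recombined picture to 8-bit grey and equalize again.
    grey = np.asarray(Image.fromarray(eq_rgb).convert("L"))
    Image.fromarray(equalize(grey)).save("standardised_bw.png")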

The difference between the photos from the Bayer and Foveon is very slight. The two pictures are indistinguishable unless you compare them closely. Fine, contrasty lines on the standard are finer on the Bayer, more contrasty on the Foveon. The one that looks more like the standard depends upon the distance from the eye and the lighting but the differences are trivial. The two images do contain slightly different information, but they contain comparable amounts overall.

On the other hand, for efficiency of storage and speed of processing, the Foveon wins hands down. This is how two identical pictures compared:

              Foveon    Bayer
  RAW         7.8 MB    14.7 MB
  8-bit TIFF  9.8 MB    38.7 MB

If you would like to print out my test pictures, you can download them. However, for the comparison to be meaningful, you must specify a number of dots per inch for the pictures that your printer can resolve in both directions. I know that an Olympus P-440 can resolve 314 dpi, with no more than occasional one-pixel errors in one colour’s registration. I have not found any resolution that an Epson 9600 can handle cleanly in both directions, although I have not been able to test it exhaustively. Other printers I know nothing about. You will have to experiment with the test patterns in the Printer Sharpness Test file linked below. For this purpose, only the black-and-white stripes matter.

<http://www.tidbits.com/resources/748/PrinterSharpnessTest.zip>

Each picture in the 5.8 MB file below is 1,512 pixels by approximately 2,270. If a picture has been printed correctly, the width in inches will be 1,512 divided by the number of dots per inch. Print them from Photoshop or GraphicConverter, not from Preview, which will scale them to fit the paper.

<http://www.tidbits.com/resources/751/Bayer_vs_Foveon.zip>

Remember that the question to ask is not which picture looks better or which picture shows more detail but which picture looks more like the gold standard overall. I suggest that you compare the pictures upside down. Remember, too, that these are small sections from big enlargements that you would normally view framed and hanging on a wall. Also, although the contrast is equalized overall, the original colours were not quite identical and the equalization of contrast amplified some tonal differences. If you perceive the Bayer or Foveon to be better in one or another area, make sure that in this area the tonality is similar. If the tonality is different, the difference there is probably an artifact. An example of this is the shadow beneath the tape on the left side.

I have not been able to test this but I suspect that the most important optical difference between Bayer and Foveon sensors may be how clearly they reveal deficiencies in lenses. Since the Foveon sensor is sharper, I would expect blur and colour fringing to show up more clearly on a Foveon sensor than a Bayer.

Megapixels, Meganonsense — Megapixels sell cameras as horsepower sells cars and just as foolishly. To fit more cells in a sensor, the cells need to be smaller. It is possible to make cells smaller than a lens can resolve. Even if the lens can resolve the detail more finely, doubling the number of cells makes a difference that is only just noticeable in a direct comparison.

On the other hand, small pixels create problems. Electronic sensors pick up random fluctuations in light that we cannot see. These show up on enlargements like grain in film. Larger cells smooth out the fluctuations better than smaller cells. Also, larger cells can handle more light before they top out at their maximum voltage, so they can operate farther above the residual noise. For both reasons, images taken with larger cells are cleaner. Enlargements from my pocket-sized Minolta Xt begin to fall apart from too much noise, not from too few pixels.
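
A toy simulation shows why. Photons arrive at random (shot noise), and a cell that collects four times as many photons from the same patch of even grey shows only half the relative fluctuation. The photon counts below are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    small_cells = rng.poisson(lam=1_000, size=100_000)   # small cell: fewer photons
    large_cells = rng.poisson(lam=4_000, size=100_000)   # cell with four times the area

    for name, counts in (("small", small_cells), ("large", large_cells)):
        print(f"{name} cells: signal-to-noise about {counts.mean() / counts.std():.0f}")
    # Signal-to-noise grows with the square root of the photon count:
    # roughly 32 for the small cells, roughly 63 for the large ones.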

In contrast, enlargements from my Sigma SD-10 have so little noise that they can be enormous. A 30" x 44" test print looked as though it came from my 2-1/4" x 3-1/4" Horseman. The Sigma has less resolution than the Horseman – it’s probably less than can be extracted from scanning the finest 35-mm film – but its noise level can be reduced to something approaching 4" x 5" sheet film. Such a low level of noise leaves the detail that it contains, which is substantial, very clean. In perception, above a low threshold, the proportion of noise to signal matters far more to the brain than the absolute amount of signal. Indeed, if I look through a box of my old 11" x 14" enlargements, the only way I can distinguish the 35-mm photos from the 2-1/4" x 3-1/4" is to examine smooth tones for noise. I cannot tell them apart by looking at areas with detail.

In sum, with the range of sensors used in cameras today, there is no point to worrying about a few megapixels more or less. Shrinking cells to fit more of them in the sensor can lose more information than it gains. The size of the cells is likely to be more important than their number. For the same money, I would rather buy a larger sensor with fewer pixels than a smaller sensor with more pixels. If nothing else, the larger sensor is likely to be sharper because it will be less sensitive to movement of the camera. For a realistic comparison of sensors as they are marketed see this chart:

<http://www.tidbits.com/resources/751/SensorChart.png>

Tripod vs. Lens — Most people believe that the quality of the lens is of primary importance in digital photography. If you have stayed with me so far, you may not be surprised to hear me calculate otherwise. With 35mm cameras, an old rule of thumb holds that the slowest shutter speed that a competent, sober photographer can use without a tripod and still stand a good chance of having the picture look sharp is 1 divided by the focal length of the lens: 1/50" for a 50-mm lens, 1/100" for a 100-mm lens, etc. At these settings there will always be some slight blur but it will usually be too little to be noticed. This blur will mask any difference in sharpness between lenses. To see differences in sharpness requires speeds several times faster.

With digital cameras that use 35-mm-sized sensors, the same rule of thumb holds, but most digital cameras use smaller sensors. With smaller sensors, the same amount of movement will blur more of the picture. If you work out the trigonometry, you’ll find that you need shutter speeds roughly twice as fast for 4/3" sensors and four times faster for 2/3" and 1/1.8" sensors. (Digital sensors come in sizes like 4/3", 2/3" and 1/1.8". Those numbers are meaningless relics from the days of vacuum tubes; they are now just arbitrary numbers equivalent to dress sizes.) That means minimal speeds of 1/100" and 1/200" for a normal lens. Differences in sharpness among lenses would not be apparent until shutter speeds are several times higher again. Because of this, it strikes me that the weight of lenses matters more to image quality than the optics. The heavier a camera bag becomes, the more likely the tripod will be left at home.
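
Worked out as numbers, using 35-mm-equivalent focal lengths and the approximate factors by which each sensor is smaller than a 35-mm frame, the rule of thumb looks like this:

    # Hand-holding limits: roughly 1/(equivalent focal length x size factor)
    # of a second. The size factors are rounded; 2/3" and 1/1.8" are lumped
    # together, as in the text above.
    sensors = {"35-mm frame": 1, "4/3 inch": 2, "2/3 or 1/1.8 inch": 4}

    for focal_mm in (50, 100):
        for name, factor in sensors.items():
            print(f"{focal_mm}-mm equivalent, {name}: about 1/{focal_mm * factor} sec")
    # A normal lens: 1/50 sec on a 35-mm frame, 1/100 sec on a 4/3" sensor,
    # 1/200 sec on a 2/3" sensor -- before lens quality can even begin to show.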

(Note that this does not mean that 35-mm-sized sensors are best. Other optical problems increase with the size of the sensor. As an overall compromise, the industry is beginning to adopt a new standard, the 4/3", or four-thirds, which is approximately one-half the diameter of 35-mm. This is not unreasonable.)

Frankly, I should be astonished to find any lens manufactured today that does not have sufficient contrast and resolution to produce an impressive image in the hands of a competent photographer. I know that close comparisons of photos shot on a tripod will show differences from one lens to another, and I know that some lenses have weaknesses, but very few people will decorate a living room with test pictures. In the real world, nobody is likely to notice any optical deficiency unless the problem is movement of the camera, bad focus, distortion or colour fringing. It is certainly true that distortion and colour fringing can be objectionable but, although enough money and experimentation might find some lenses that evince less of these problems than others, as a practical matter, especially with zoom lenses, they seem to be inescapable. Fortunately, these can usually be corrected or hidden by software.

Indeed, even a certain amount of blur can be removed with software. Let’s say that half of the light that ought to fall on one pixel is spread over surrounding pixels. Knowing this, it is possible to move that much light back to the central pixel from the surrounding ones. That seems to be what Focus Magic does (see the discussion of Focus Magic in "Editing Photographs for the Perfectionist" in TidBITS-748).

<http://www.focusmagic.com/>

<https://tidbits.com/getbits.acgi?tbart=07832>
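
The principle can be sketched crudely in a few lines of Python with NumPy and SciPy: estimate how much light has leaked into the neighbouring pixels by blurring, then add that difference back to the centre. This is only a generic sharpening pass with made-up parameters; Focus Magic's actual deconvolution is proprietary and considerably more sophisticated.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def reclaim_spread_light(image, spread=1.0, amount=0.5):
        """Estimate the leaked light with a blur, then push that much
        back toward the pixels it came from. Values are assumed 0-1."""
        leaked = gaussian_filter(image, sigma=spread)
        return np.clip(image + amount * (image - leaked), 0.0, 1.0)

    # Usage on a made-up, slightly blurred test image:
    blurred = gaussian_filter(np.random.rand(64, 64), sigma=1.0)
    sharper = reclaim_spread_light(blurred)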

One More Myth — Finally, I would like to end this article by debunking a common myth. I have often read that Bayer sensors work well because half of their cells are green and the wavelengths that induce green provide most of the information used by the eye for visual acuity. This made no sense to me but I am not an expert on the eye so I asked an expert – three experts in fact, scientists known internationally for their work in visual perception. I happened to be having dinner with them. It made no sense to them, either, although I took care to ask them before they had much wine. Later I pestered one of them about it so much that eventually she got out of bed (this was my wife Daphne) and threw an old textbook at me, Human Color Vision by Robert Boynton. In it I found this explanation:

"To investigate ‘color,’" an experimenter puts a filter in front of a projector that is projecting an eye chart. "An observer, who formerly could read the 20/20 line, now finds that he or she can recognize only those letters corresponding to 20/60 acuity or worse. What can be legitimately concluded from this experiment? The answer is, nothing at all," because the filter reduced the amount of light. "A control experiment is needed, where the same reduction in luminance is achieved using a neutral filter…. When such controls are used, it is typically found that varying spectral distribution has remarkably little effect upon visual acuity."

In short, each cell in a Bayer sensor provides similar information about resolution. It is true that green light will provide a Bayer sensor with more information than red and blue light but that is only because the sensor has more green cells.

If you want to shop for a digital camera, this article will help you make the most important decision, what kind and size of sensor to buy, with how many pixels. Once you have decided that, a host of smaller decisions await you. My next article will walk you through these. It is also going to incorporate a review of the Sigma SD-10 and will appear shortly after one more lens arrives from Japan.

PayBITS: If Charles’s explanation of resolution and debunking of the megapixel myth were useful, please support Doctors Without Borders: <http://www.doctorswithoutborders-usa.org/donate/>

Read more about PayBITS: <http://www.tidbits.com/paybits/>

