Noise in Digital Cameras


Few subjects related to digital photography raise as many emotions as noise does. Also, few issues are as often misrepresented, misunderstood, and misinterpreted. This article is an attempt to present the basics of the problem, so that you can approach it in a more informed manner.

Noise 101

Noise visible in images from digital devices (including cameras) is due to the fact that the same amount of light on a pixel may cause a different signal response, varying stochastically around some average, nominal value.

This noise, by virtue of the tertium non datur principle, consists of two components:

  • Type I (Fixed Noise): Response variation between different pixels (some always show higher current than others at the same light input);
  • Type II (Random Noise): Response variation for the same pixel (fluctuations in sensitivity of the same pixel from one instance to another).

The fixed noise predominates at long exposures and high gain (ISO) settings. Under brighter light it practically disappears and the random component takes over. Because the two components differ in character (and in underlying causes), it is natural that camera manufacturers treat them as separate issues, addressed in different ways, although this is not always clear in the documentation. This is also why we will discuss both components one at a time.

The fixed noise

The static (fixed) noise results from differences in sensitivity to light (and this also includes any differences in signal collection, amplification, and digitization) between photosites: some are more responsive than others. The effect is most visible at longer exposures (for current sensors: one second or so) and higher gains (ISO settings), so it may become bothersome in low-light shooting.

The magnitude distribution of the fixed noise clearly shows, again, a superposition of two quite different sub-components: one with smaller amplitude and a fairly random look (like film grain), and one consisting of bright but sparse spikes, often referred to as "hot pixels". (These should not be confused with "dead" pixels, called "stuck" by some writers, which are always bright, not only at long exposures.)

Some cameras address the problem by recording another, "dark" frame at the same shutter speed and ISO immediately after taking the picture, and subtracting that dark frame from the actual image. This usually works well, reducing the fixed noise considerably (in particular, getting rid of hot pixels). The price you pay is an extra delay before the next picture can be taken.

This technique can also be applied in postprocessing if the user takes a "dark frame" right after the actual picture, using the same camera settings. The results, however, may not be as good as when the camera does the job: even if you save the images as raw files, they must be converted to RGB before the dark frame subtraction is applied. Still, using the raw format here is recommended, as the images will not be affected by a number of postprocessing effects (sharpening, dealiasing, JPEG compression, WB adjustment, and others), all of which would affect the two frames in different ways.
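If you want to experiment with this yourself, the subtraction step is simple; below is a minimal Python sketch, assuming both frames have already been converted to linear grayscale arrays of the same shape (the file names are placeholders of mine, not anything produced by a real camera):

```python
import numpy as np

def subtract_dark_frame(image, dark_frame):
    """Remove fixed-pattern noise by subtracting a dark frame.

    Assumes both arrays hold linear sensor data of the same shape,
    taken at the same shutter speed and ISO. The result is clipped
    so the subtraction cannot produce negative intensities.
    """
    diff = image.astype(np.float64) - dark_frame.astype(np.float64)
    return np.clip(diff, 0.0, None)

# Hypothetical usage; the .npy files stand for two exposures taken
# with identical settings, the second one with the lens cap on:
# clean = subtract_dark_frame(np.load("night_shot.npy"),
#                             np.load("dark_frame.npy"))
```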

It also has to be remembered that in modern cameras (2003 or later) the static noise really affects only exposures of one second or longer; at shorter times the random noise prevails, so the frame subtraction will do more harm than good (remember that it actually increases the random noise).

The random noise

The random noise occurs at all light levels; the same photosite may respond differently to the same amount of light from one instance to another.

Physics, at the micro scale at least, is statistical: the response to a given, fixed stimulus will be random (although distributed around a certain, well-defined value). No amount of engineering can change that; we can only try to work around this effect and hide it.

There are numerous stochastic effects contributing to this component at various stages of the process:

  • Fluctuations in the number of photons penetrating the photosite, Np
  • Fluctuations in number of electrons, Ne, generated, before amplification, per a given Np
  • Random effects in the amplification process, where Ne is converted into the recorded value of the photosite's R, G, or B value
  • Possibly more, with importance I have no clue about.

The first two of these effects follow the Poisson distribution (and so does, in a good approximation, their superposition), and this is why they are often referred to as the Poisson noise.

In the Poisson model, for a given mean value of Np or Ne, the width of the random fluctuations, measured as the root-mean-square (RMS) deviation, equals the square root of the mean. For example, if the photosite is illuminated to generate, on average, 1000 photoelectrons, the expected RMS fluctuation will be about 32 electrons, or 3.2%; at a light level ten times lower, corresponding to 100 electrons, it will be 10 electrons, or 10%.
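This square-root behavior is easy to verify numerically. A quick Python sketch (the sample size and seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate shot (Poisson) noise for two mean photoelectron counts.
for mean_electrons in (1000, 100):
    samples = rng.poisson(mean_electrons, size=1_000_000)
    rms = samples.std()
    print(f"mean={mean_electrons:5d}  RMS={rms:5.1f}  "
          f"relative={rms / mean_electrons:.1%}  "
          f"sqrt(mean)={np.sqrt(mean_electrons):5.1f}")

# Prints, approximately:
# mean= 1000  RMS= 31.6  relative=3.2%  sqrt(mean)= 31.6
# mean=  100  RMS= 10.0  relative=10.0%  sqrt(mean)= 10.0
```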

The random noise magnitude, after being translated into the RGB response, increases with the sensor gain, which makes sense: at higher gains we use less light, and therefore less photosite charge, to build a usable image.

Reducing the random noise is somewhat more complicated. Obviously, the dark-frame approach will not work here. Manufacturers use various image-processing algorithms to average neighboring pixels, thus smoothing out the differences. What we actually see in the final, recorded image is a superposition of two effects: the inherent sensor noise and the denoising job done in processing.

Here we come to the tricky part. Denoising (averaging) may often lead to loss of detail, therefore impacting the actual image resolution. Whatever we do, we can never be sure whether the difference between two neighboring pixels is due to a difference in the corresponding areas of the subject, or just to noise.

There are "smart" algorithms for doing that (many of them with elaborate and ornate names); they try to apply averaging with different degree along different directions in a particular point of the image: more aggressive and omni-directional in large areas of blue sky, less aggressive and directed along contours in other places. This often leads to quite pleasing results, but no cigar: any algorithm can be fooled. Areas with lots of detail without visible contour lines (for example, a rough concrete wall) will loose the detail to some degree, varying from camera to camera.

While you usually have no say in how the in-camera random noise filtering works, you may see the effect by applying one of the available denoising programs in postprocessing. Once I used the (perhaps) best denoising program on the market to clean up a night picture taken at ISO 400; the results seemed good: noise gone, contour sharpness seemingly unaffected, until I noticed that all the shingles were gone from some roofs. The algorithm, seeing no discernible detail, took the shingles for noise and averaged them out into oblivion.

Many camera makers, following the whims of the market, denoise their images in-camera too aggressively. This reduces the number of complaints and is cheaper than educating the general public. As a rule, some entry-level cameras show less visible noise than their higher-priced counterparts (in spite of cheaper and noisier sensors).

It is not difficult to "measure" the noise by computing the pixel brightness variance at various spatial frequencies (roughly speaking: with various widths of an averaging window). Such measurements, however, cannot separate the "real" noise generated by the sensor from the reduction applied by the in-camera denoising algorithms; even less can they measure the effect of denoising on the image resolution.
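For the curious, a rough sketch of such a measurement in Python, assuming patch holds a grayscale crop of a uniformly lit, featureless test target:

```python
import numpy as np

def noise_vs_window(patch, widths=(1, 2, 4, 8, 16)):
    """Residual RMS noise after box-averaging a flat test patch
    with non-overlapping windows of increasing width.

    For uncorrelated noise the RMS should drop roughly as 1/width;
    a slower drop hints at correlated noise or at denoising already
    applied in-camera.
    """
    patch = patch.astype(np.float64)
    results = {}
    for w in widths:
        h = (patch.shape[0] // w) * w
        v = (patch.shape[1] // w) * w
        blocks = patch[:h, :v].reshape(h // w, w, v // w, w)
        results[w] = blocks.mean(axis=(1, 3)).std()
    return results
```

Such numbers, as argued above, tell you nothing about where the smoothness came from: a clean sensor and an aggressive denoiser produce the same curve.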

Therefore I am skeptical about most of the evaluative statements about noise levels in various cameras. There is too much going on behind the scenes. (See also Banding and clustering below.)

The so-called full-frame transfer CCDs have one advantage here: all connecting circuitry runs in a layer beneath the photosites (the light-sensitive elements), not alongside them. This means that the photosites themselves may occupy almost the entire surface of the chip, increasing their light-collecting area. The image quality may potentially be improved this way, with less noise and fewer diffraction effects.

On the other hand, the CMOS sensors used in some cameras (among SLRs: all Canon models, the Olympus E-330 and E-410/510) show more raw noise from the photosites than CCDs do. The signal from CMOS sensors, however, seems to be easier to process, and this advantage allows for smarter noise filtering, which levels the playing field. Still, in my book any noise reduction, however smart it may be, is a necessary evil.

Banding and clustering

The worst issue with the random (non-static) noise is that it can be not so random: deviations from the expected mean value for individual pixels may be correlated. This can be seen as clustering (when a group of neighboring pixels deviates in the same direction, effectively increasing the "grain size") or banding (when pixels show similar deviations along a horizontal or vertical line).

Unfortunately, the most frequently used measures of image noise (related to the width of the pixel response distribution) disregard these correlations, and that is why I consider them (almost) useless.
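On a flat test patch the correlations are at least easy to expose: compare the variance of row (or column) averages with what uncorrelated noise would predict. A simple Python sketch:

```python
import numpy as np

def banding_score(patch, axis=1):
    """Ratio of the observed variance of line averages to the variance
    expected if the pixel noise were uncorrelated (pixel variance / N).

    A ratio near 1 means no detectable banding along that axis; values
    well above 1 indicate correlated noise along rows (axis=1) or
    columns (axis=0).
    """
    patch = patch.astype(np.float64)
    n = patch.shape[axis]
    observed = patch.mean(axis=axis).var()
    expected = patch.var() / n
    return observed / expected
```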

The banding problem kept being "discovered" at high ISO settings in cameras of various manufacturers over a number of years, causing a minor PR disaster for some Nikon models a few years ago, although I have had the impression that MOS sensors were more prone to it than CCDs.

Too much noise about noise

Many users (and some respected reviewers) will blow up an image file on their monitor to 1:1 pixel scale (or higher), see some noise, and complain loudly about it. They seem to forget that pixels viewed on a screen, with its inherent high contrast, are much larger and more visible than they will ever be in a print or in a full-frame view; many have also never seen a negative or transparency enlarged to the same size. Most of today's cameras show less noise than the grain of an ISO 100 film, and you still hear complaints.

There is nothing wrong with pixel-peeping, as long as you realize the fallacies of this way of analyzing samples, especially remembering that what you are viewing is magnified and exaggerated.

Remember also that this pixel-peeping effect is more significant at higher pixel counts. In a higher-megapixel camera the same amount of per-pixel noise will show less in the viewed or printed image. Therefore, yes, 10-MP cameras may have more pixel noise than 5-MP ones, but because the averaging in viewing reduces it by a factor of about 1.4 (the square root of two), this noise is less objectionable. This is another reason why the common noise metrics are close to useless.
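A quick numeric check of that square root, simulating the same per-pixel noise at twice the pixel count and then viewing both at the same output size:

```python
import numpy as np

rng = np.random.default_rng(0)

sigma = 5.0                                     # per-pixel RMS, both cameras
dense = rng.normal(0.0, sigma, size=2_000_000)  # "10 MP" worth of pixels
# At the same output size every viewed point now averages two pixels:
merged = dense.reshape(-1, 2).mean(axis=1)      # "5 MP" worth of points
print(f"{dense.std():.2f}  {merged.std():.2f}  "
      f"ratio={dense.std() / merged.std():.2f}")
# Prints roughly: 5.00  3.54  ratio=1.41 (the square root of two)
```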

My advice would be to treat all these complaints with a good dose of skepticism. Except for some bottom-of-the-market, fly-by-night operators, all manufacturers have the noise well under control, at least at low to moderate gains, up to, say, ISO 200. All digital SLRs I have tried (or seen samples from) behave nicely in this respect up to at least ISO 400.

Further reading

  • An article by Jeff Medkeff on photo.net: a general, but detailed, introduction to the noise issue and a description of the dark-frame technique of static noise removal.
  • For a discussion of the individual components of image noise and the reasons behind them, refer to a very educational article published by the Quantitative Imaging Group, Technical University of Delft, Holland. (Some background in physics will help.)

    Note of 2014: The article is no longer there, but I am not removing the dead link: copy the address to the clipboard (right-click menu) and use the Wayback Machine to find an archived copy.





Posted 2003/06/26; last updated 2014/01/28. Copyright © 2003-2014 by J. Andrzej Wrotniak.