Overview
Dither is an intentionally applied form of noise used to randomize quantization error, preventing large-scale patterns such as color banding in images. Dither is routinely used in the processing of both digital audio and digital video data, and is often one of the last stages of mastering audio to a compact disc.
Etymology
…one of the earliest [applications] of dither came in World War II. Airplane bombers used mechanical computers to perform navigation and bomb trajectory calculations. Curiously, these computers (boxes filled with hundreds of gears and cogs) performed more accurately when flying on board the aircraft, and less well on ground. Engineers realized that the vibration from the aircraft reduced the error from sticky moving parts. Instead of moving in short jerks, they moved more continuously. Small vibrating motors were built into the computers, and their vibration was called dither from the Middle English verb "didderen," meaning "to tremble." Today, when you tap a mechanical meter to increase its accuracy, you are applying dither, and modern dictionaries define dither as a highly nervous, confused, or agitated state. In minute quantities, dither successfully makes a digitization system a little more analog in the good sense of the word.—Ken Pohlmann, Principles of Digital Audio [1]
The term "dither" was published in books on analog computation and hydraulically controlled guns shortly after the war.[2][3] The concept of dithering to reduce quantization patterns was first applied by Lawrence G. Roberts[4] in his 1961 MIT master's thesis[5] and 1962 article[6] though he did not use the term dither. By 1964 dither was being used in the modern sense described in this article.[7]
In digital processing and waveform analysis
Dither is often used in digital audio and video processing, where it is typically applied at bit-depth transitions. It is useful in many fields where digital processing and analysis are used, especially waveform analysis, including digital audio, digital video, digital photography, seismology, radar, weather forecasting systems and many more.
The premise is that quantization and re-quantization of digital data yield error. If that error is correlated with the signal, the resulting error is repeating, cyclical, and mathematically determinable. In fields where the receptor is sensitive to such artifacts, these cyclical errors produce undesirable artifacts, and dither yields artifacts that are less determinable. Audio is a primary example: the human ear functions much like a Fourier transform, hearing individual frequencies (for details see Mathematics of hearing). The ear is therefore very sensitive to distortion, or additional frequency content that "colors" the sound, but far less sensitive to random noise at all frequencies.[citation needed]
Digital audio
In audio, dither can be useful to break up periodic limit cycles, which are a common problem in digital filters. Random noise is typically less objectionable than the harmonic tones produced by limit cycles.
In 1987, Lipshitz and Vanderkooy pointed out that different noise types, with different probability density functions, behave differently when used as dither signals, and suggested optimal levels of dither signals for audio.[8][9]
In an analog system, the signal is continuous, but in a PCM digital system, the amplitude of the signal out of the digital system is limited to one of a set of fixed values or numbers. This process is called quantization. Each coded value is a discrete step... if a signal is quantized without using dither, there will be quantization distortion related to the original input signal... In order to prevent this, the signal is "dithered", a process that mathematically removes the harmonics or other highly undesirable distortions entirely, and that replaces it with a constant, fixed noise level.[10]
The final version of audio that goes onto a compact disc contains only 16 bits per sample, but throughout the production process a greater number of bits is typically used to represent each sample. In the end, the digital data must be reduced to 16 bits for pressing onto a CD and distribution.
There are multiple ways to do this. One can, for example, simply discard the excess bits, called truncation. One can instead round each value to the nearest available value. Each of these methods, however, results in predictable and determinable errors. Take, for example, a waveform that consists of the following values:
- 1 2 3 4 5 6 7 8
If we reduce our waveform by, say, 20% then we end up with the following values:
- 0.8 1.6 2.4 3.2 4.0 4.8 5.6 6.4
If we truncate these values we end up with the following data:
- 0 1 2 3 4 4 5 6
If we instead round these values we end up with the following data:
- 1 2 2 3 4 5 6 6
For any original waveform, the process of reducing the waveform amplitude by 20% results in regular errors. Take for example a sine wave that, for some portion, matches the values above. Every time the sine wave's value hit "3.2," the truncated result would be off by 0.2, as in the sample data above. Every time the sine wave's value hit "4.0," there would be no error since the truncated result would be off by 0.0, also shown above. The magnitude of this error changes regularly and repeatedly throughout the sine wave's cycle. It is precisely this error which manifests itself as distortion. What the ear hears as distortion is the additional content at discrete frequencies created by the regular and repeated quantization error.
A plausible solution would be to take the in-between value (say, 4.8) and round it one direction or the other: for example, round it to 5 one time and to 4 the next time. This would make the long-term average 4.5 instead of 4, so that over the long term the value is closer to its actual value. This, however, still results in determinable (though more complicated) error. Every other time the value 4.8 comes up the result is an error of 0.2, and the other times it is −0.8. This still results in repeating, quantifiable error.
Another plausible solution would be to take 4.8 and round it so that the first four times out of five it rounded up to 5, and the fifth time it rounded to 4. This would average out to exactly 4.8 over the long term. Unfortunately, however, it still results in repeatable and determinable errors, and those errors still manifest themselves as distortion to the ear (though oversampling can reduce this).
This leads to the dither solution. Rather than predictably rounding up or down in a repeating pattern, what if we rounded up or down in a random pattern? If we came up with a way to randomly toggle our results between 4 and 5 so that 80% of the time it ended up on 5 then we would average 4.8 over the long run but would have random, non-repeating error in the result. This is done through dither.
We calculate a series of random numbers between 0.0 and 0.9 (ex: 0.6, 0.1, 0.3, 0.6, 0.9, etc.) and add these random numbers to the results of our equation. Two times out of ten the result will truncate back to 4 (if 0.0 or 0.1 is added to 4.8) and the rest of the time it will truncate to 5; each individual sample thus has a 20% chance of rounding to 4 and an 80% chance of rounding to 5. Over the long haul the results will average to 4.8, and the quantization error will be random, i.e. noise. This "noise" result is less offensive to the ear than the determinable distortion that would otherwise result.
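A minimal sketch of this idea in Python, adding uniform (rectangular) dither before truncation; the dither range, trial count and function names are chosen here purely for illustration:

```python
# Minimal sketch: add uniform (RPDF) dither before truncating, so that 4.8
# lands on 5 about 80% of the time and on 4 about 20% of the time.
import random

def truncate(x):
    return int(x)  # drop the fractional part, as in the truncation example above

def dithered_truncate(x):
    return int(x + random.uniform(0.0, 1.0))  # add random dither, then truncate

value = 4.8
trials = 100_000
print(sum(truncate(value) for _ in range(trials)) / trials)           # always 4.0
print(sum(dithered_truncate(value) for _ in range(trials)) / trials)  # approximately 4.8
```

The plain truncation produces a constant, signal-correlated error, while the dithered version averages to the true value and leaves only random noise behind.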
Audio samples: a 16-bit sine wave, and the same sine wave truncated to 6 bits.
Usage
Dither should be added to any low-amplitude or highly periodic signal before any quantization or re-quantization process, in order to de-correlate the quantization noise from the input signal and to prevent non-linear behavior (distortion); the smaller the bit depth, the greater the dither must be. The process still yields error, but because that error is random it behaves as noise that is effectively de-correlated from the intended signal. Any bit-reduction process should add dither to the waveform before the reduction is performed.
Different types
RPDF stands for "Rectangular Probability Density Function," equivalent to the roll of a single die: any value within the range has the same probability of occurring.
TPDF stands for "Triangular Probability Density Function," equivalent to a roll of two dice (the sum of two independent samples of RPDF).
Gaussian PDF is equivalent to a roll of a large number of dice. The probabilities of the results follow a bell-shaped, or Gaussian, curve, typical of dither generated by analog sources such as microphone preamplifiers. If the bit depth of a recording is sufficiently great, that preamp noise will be sufficient to dither the recording.
Colored Dither is sometimes mentioned as dither that has been filtered to be different from white noise. Some dither algorithms use noise that has more energy in the higher frequencies so as to lower the energy in the critical audio band.
Noise shaping is a filtering process that shapes the spectral energy of quantization error, typically to either de-emphasise frequencies to which the ear is most sensitive or separate the signal and noise bands completely. If dither is used, its final spectrum depends on whether it is added inside or outside the feedback loop of the noise shaper: if inside, the dither is treated as part of the error signal and shaped along with actual quantization error; if outside, the dither is treated as part of the original signal and linearises quantization without being shaped itself. In this case, the final noise floor is the sum of the flat dither spectrum and the shaped quantization noise. While real-world noise shaping usually includes in-loop dithering, it is also possible to use it without adding dither at all, in which case the usual harmonic-distortion effects still appear at low signal levels.
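A minimal sketch of the in-loop case, assuming a simple first-order error-feedback noise shaper with TPDF dither added inside the loop; the structure and names are illustrative, not a production mastering algorithm:

```python
import random

def tpdf():
    # Triangular-PDF dither spanning +/-1 quantization step
    # (the sum of two rectangular samples)
    return random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)

def noise_shaped_quantize(samples):
    """Quantize floating-point samples to integers with first-order noise shaping."""
    out = []
    error = 0.0
    for x in samples:
        shaped = x - error           # feed back the previous quantization error
        q = round(shaped + tpdf())   # dither inside the loop, then quantize
        error = q - shaped           # this error (dither included) gets shaped
        out.append(q)
    return out
```

Because the dither is added before the quantizer and its contribution is captured in the fed-back error, it is shaped along with the quantization error, which is the "inside the loop" behavior described above.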
Which types to use
If the signal being dithered is to undergo further processing, then it should be processed with TPDF dither that has an amplitude of two quantization steps (so that the dither values computed range from, say, −1 to +1, or 0 to 2).[9] This is the lowest-power ideal dither, in that it does not introduce noise modulation (the noise floor remains constant) and completely eliminates the harmonic distortion caused by quantization. If colored dither is used at these intermediate processing stages, its frequency content can "bleed" into other, more noticeable frequency ranges and become distractingly audible.
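For example, reducing higher-precision floating-point samples to 16 bits with 2-LSB peak-to-peak TPDF dither might look like the following sketch; the scaling, clamping and function names are assumptions made for illustration:

```python
import random

def tpdf_lsb():
    # Triangular dither spanning -1..+1 of the target quantization step
    return random.random() - random.random()

def requantize_to_16_bit(float_samples):
    """Reduce samples in the range [-1.0, 1.0) to 16-bit integers with TPDF dither."""
    out = []
    for x in float_samples:
        scaled = x * 32767.0                    # scale to 16-bit integer steps
        q = int(round(scaled + tpdf_lsb()))     # add dither, then round
        out.append(max(-32768, min(32767, q)))  # clamp to the 16-bit range
    return out
```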
If the signal being dithered is to undergo no further processing — it is being dithered to its final result for distribution — then colored dither or noise shaping is appropriate, and can effectively lower the audible noise level by putting most of that noise in a frequency range where it is less critical.
Digital photography and image processing
Dithering is a technique used in computer graphics to create the illusion of color depth in images with a limited color palette (color quantization). In a dithered image, colors not available in the palette are approximated by a diffusion of colored pixels from within the available palette. The human eye perceives the diffusion as a mixture of the colors within it (see color vision). Dithered images, particularly those with relatively few colors, can often be distinguished by a characteristic graininess, or speckled appearance.
By its nature, dithering introduces a pattern into the image; the intention is that the image will be viewed from a distance at which the pattern is not discernible to the human eye. In practice this is often not the case, and the patterning is visible. In these circumstances a blue-noise dither pattern has been shown to be the least unsightly and distracting.[11] Error diffusion techniques were some of the first methods to generate blue-noise dithering patterns; however, other techniques, such as ordered dithering, can also generate blue-noise dithering without a tendency to degenerate into areas with artifacts.
Examples
Reducing the color depth of an image can often have significant visual side-effects. If the original image is a photograph, it is likely to have thousands, or even millions of distinct colors. The process of constraining the available colors to a specific color palette effectively throws away a certain amount of color information.
A number of factors can affect the resulting quality of a color-reduced image. Perhaps most significant is the color palette that will be used in the reduced image. For example, an original image (Figure 1) may be reduced to the 216-color "web-safe" color palette. If the original pixel colors are simply translated into the closest available color from the palette, no dithering occurs (Figure 2). Typically, this approach results in flat areas (contours) and a loss of detail, and may produce patches of color that are significantly different from the original. Shaded or gradient areas may appear as color bands, which may be distracting. The application of dithering can help to minimize such visual artifacts, and usually results in a better representation of the original (Figure 3). Dithering helps to reduce color banding and flatness.
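The no-dithering case (Figure 2) corresponds to mapping each pixel to its nearest palette entry. A minimal sketch, assuming RGB tuples and a simple squared-distance color metric, both of which are simplifications made for illustration:

```python
# The 216-color "web-safe" palette uses six evenly spaced levels per channel.
WEB_SAFE = [(r, g, b)
            for r in range(0, 256, 51)
            for g in range(0, 256, 51)
            for b in range(0, 256, 51)]

def nearest_palette_color(pixel, palette=WEB_SAFE):
    """Return the palette color closest to the given (r, g, b) pixel."""
    r, g, b = pixel
    return min(palette,
               key=lambda c: (c[0] - r) ** 2 + (c[1] - g) ** 2 + (c[2] - b) ** 2)
```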
One of the problems associated with using a fixed color palette is that many of the needed colors may not be available in the palette, and many of the available colors may not be needed; a fixed palette containing mostly shades of green would not be well-suited for images that do not contain many shades of green, for instance. The use of an optimized color palette can be of benefit in such cases. An optimized color palette is one in which the available colors are chosen based on how frequently they are used in the original source image. If the image is reduced based on an optimized palette, the result is often much closer to the original (Figure 4).
The number of colors available in the palette is also a contributing factor. If, for example, the palette is limited to only 16 colors, the resulting image could suffer from additional loss of detail, and even more pronounced problems with flatness and color banding (Figure 5). Once again, dithering can help to minimize such artifacts (Figure 6).
Applications
Display hardware, including early computer video adapters and many modern LCDs used in mobile phones and inexpensive digital cameras, show a much smaller color range than more advanced displays. One common application of dithering is to more accurately display graphics containing a greater range of colors than the hardware is capable of showing. For example, dithering might be used in order to display a photographic image containing millions of colors on video hardware that is only capable of showing 256 colors at a time. The 256 available colors would be used to generate a dithered approximation of the original image. Without dithering, the colors in the original image might simply be "rounded off" to the closest available color, resulting in a new image that is a poor representation of the original. Dithering takes advantage of the human eye's tendency to "mix" two colors in close proximity to one another.
Some LCDs may use temporal dithering to achieve a similar effect. By alternating each pixel's color value rapidly between two approximate colors in the panel's color space (also known as Frame Rate Control), a display panel which natively supports 18-bit color (6 bits per channel) can represent a 24-bit "true" color image (8 bits per channel).[12]
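A minimal sketch of the idea, assuming a simple accumulator decides in which frames the higher of the two nearest 6-bit levels is shown; the actual frame pattern used by real panels is a hardware detail and may differ:

```python
def frc_levels(value_8bit, frames=4):
    """Approximate an 8-bit channel value on a 6-bit panel over several frames."""
    base = value_8bit >> 2        # nearest lower 6-bit level
    fraction = value_8bit & 0b11  # how often the next level up should be shown
    acc, out = 0, []
    for _ in range(frames):
        acc += fraction
        if acc >= 4:
            acc -= 4
            out.append(min(base + 1, 63))  # clamp at the top of the 6-bit range
        else:
            out.append(base)
    return out  # e.g. frc_levels(141) -> [35, 35, 35, 36], averaging 35.25 (= 141/4)
```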
Dithering such as this, in which the computer's display hardware is the primary limitation on color depth, is commonly employed in software such as web browsers. Since a web browser may be retrieving graphical elements from an external source, it may be necessary for the browser to perform dithering on images with too many colors for the available display. It was due to problems with dithering that a color palette known as the "web-safe color palette" was identified, for use in choosing colors that would not be dithered on displays with only 256 colors available.
Even when the total number of available colors in the display hardware is fairly high, as in the 15- and 16-bit RGB "Hicolor" modes with 32,768 or 65,536 colors, banding can be evident to the eye when rendering full-color digital photographs, especially in large areas of smooth shade transitions (even though the original image file has no banding at all). Dithering the 32 or 64 RGB levels per channel results in a fairly good "pseudo-truecolor" approximation that the eye cannot resolve as grainy. Furthermore, images displayed on 24-bit RGB hardware (8 bits per RGB primary) can be dithered to simulate somewhat higher bit depth, and/or to minimize the loss of hues available after gamma correction. High-end still-image processing software, such as Adobe Photoshop, commonly uses these techniques for improved display.
Another useful application of dithering is for situations in which the graphic file format is the limiting factor. In particular, the commonly-used GIF format is restricted to the use of 256 or fewer colors in many graphics editing programs. Images in other file formats, such as PNG, may also have such a restriction imposed on them for the sake of a reduction in file size. Images such as these have a fixed color palette defining all the colors that the image may use. For such situations, graphical editing software may be responsible for dithering images prior to saving them in such restrictive formats.
Dithering is analogous to the halftone technique used in printing. The recent widespread adoption of inkjet printers and their ability to print isolated dots has increased the use of dithering in printing. For this reason the term dithering is sometimes used interchangeably with the term halftone, particularly in association with digital printing.
A typical desktop inkjet printer can print just 15 colors (the combination of dot or no dot from the cyan, magenta, yellow and black print heads), and some of these ink combinations are not useful because when the black ink is used it typically obscures the other colors. To reproduce a large range of colors, dithering is used. In densely printed areas, where the color is dark, the dithering is often not visible because the dots of ink merge to produce a more uniform print. However, a close inspection of the light areas of a print, where the dithering has placed dots much further apart, reveals the tell-tale dots of dithering.
Algorithms
There are several algorithms designed to perform dithering. One of the earliest, and still one of the most popular, is the Floyd–Steinberg dithering algorithm, developed in 1975. One of the strengths of this algorithm is that it minimizes visual artifacts through an error-diffusion process; error-diffusion algorithms typically produce images that more closely represent the original than simpler dithering algorithms.[13]
Dithering methods include:
- Thresholding (also average dithering[14]): each pixel value is compared against a fixed threshold. This may be the simplest dithering algorithm there is, but it results in immense loss of detail and contouring.[13]
- Random dithering was the first attempt (at least as early as 1951) to remedy the drawbacks of thresholding. Each pixel value is compared against a random threshold, resulting in a staticky image. Although this method doesn't generate patterned artifacts, the noise tends to swamp the detail of the image. It is analogous to the practice of mezzotinting.[13]
- Patterning dithers using a fixed pattern. For each of the input values a fixed pattern is placed in the output image. The biggest disadvantage of this technique is that the output image is larger (by a factor of the fixed pattern size) than the input image.[13]
- Ordered dithering dithers using a "dither matrix". For every pixel in the image, the value of the pattern at the corresponding location is used as a threshold. Neighboring pixels do not affect each other, making this form of dithering suitable for use in animations. Different patterns can generate completely different dithering effects. Though simple to implement, this dithering algorithm is not easily changed to work with free-form, arbitrary palettes. (A sketch of ordered dithering with a Bayer matrix follows the example images below.)
- A Halftone dithering matrix produces a look similar to that of halftone screening in newspapers. This is a form of clustered dithering, in that dots tend to cluster together. This can help hide the adverse effects of blurry pixels found on some older output devices. The primary use for this method is in offset printing and laser printers; in both of these devices the ink or toner tends to clump together and will not form the isolated dots generated by the other dithering methods.
- A Bayer matrix[13] produces a very distinctive cross-hatch pattern.
- A matrix tuned for blue noise (such as those generated by the void-and-cluster method[15]) produces a look closer to that of an error diffusion dither method.
[Example images: original, threshold, random, halftone, and Bayer (ordered) dithering]
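As referenced in the ordered-dithering item above, the following is a minimal sketch of ordered dithering with a 4×4 Bayer matrix, reducing an 8-bit grayscale image to black and white; the list-of-rows image format is an assumption made for illustration:

```python
BAYER_4X4 = [[ 0,  8,  2, 10],
             [12,  4, 14,  6],
             [ 3, 11,  1,  9],
             [15,  7, 13,  5]]

def ordered_dither(gray_rows):
    """Threshold an 8-bit grayscale image (rows of 0-255 values) against a tiled Bayer matrix."""
    out = []
    for y, row in enumerate(gray_rows):
        out_row = []
        for x, value in enumerate(row):
            # Scale the matrix entry (0..15) to a threshold in the 0..255 range
            threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) * 255 / 16
            out_row.append(255 if value > threshold else 0)
        out.append(out_row)
    return out
```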
- Error-diffusion dithering is a feedback process that diffuses the quantization error to neighbouring pixels.
- Floyd–Steinberg dithering only diffuses the error to neighbouring pixels. This results in very fine-grained dithering. (A sketch follows the example images below.)
- Jarvis, Judice, and Ninke dithering diffuses the error also to pixels one step further away. The dithering is coarser, but has fewer visual artifacts. It is slower than Floyd–Steinberg dithering because it distributes errors among 12 nearby pixels instead of 4 nearby pixels for Floyd–Steinberg.
- Stucki dithering is based on the above, but is slightly faster. Its output tends to be clean and sharp.
- Burkes dithering is a simplified form of Stucki dithering that is faster, but less clean than Stucki dithering.
[Example images: Floyd–Steinberg; Jarvis, Judice & Ninke; Stucki; and Burkes dithering]
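As referenced above, here is a minimal sketch of Floyd–Steinberg error diffusion on an 8-bit grayscale image, quantizing each pixel to black or white and spreading the error to the four classic neighbours with weights 7/16, 3/16, 5/16 and 1/16; the row-of-lists image format is an assumption made for illustration:

```python
def floyd_steinberg(gray_rows):
    """Dither an 8-bit grayscale image (rows of 0-255 values) to black and white."""
    img = [list(map(float, row)) for row in gray_rows]  # working copy
    height, width = len(img), len(img[0])
    for y in range(height):
        for x in range(width):
            old = img[y][x]
            new = 255.0 if old >= 128 else 0.0
            img[y][x] = new
            err = old - new
            if x + 1 < width:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < height and x > 0:
                img[y + 1][x - 1] += err * 3 / 16
            if y + 1 < height:
                img[y + 1][x] += err * 5 / 16
            if y + 1 < height and x + 1 < width:
                img[y + 1][x + 1] += err * 1 / 16
    return [[int(v) for v in row] for row in img]
```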
- Error-diffusion dithering (continued):
- Sierra dithering is based on Jarvis dithering, but it's faster while giving similar results.
- Two-row Sierra is the above method modified by Sierra to improve its speed.
- Filter Lite is an algorithm by Sierra that is much simpler and faster than Floyd–Steinberg, while still yielding similar (according to Sierra, better) results.
- Atkinson dithering, developed by Apple programmer Bill Atkinson, resembles Jarvis dithering and Sierra dithering, but it's faster. Another difference is that it doesn't diffuse the entire quantization error, but only three quarters. It tends to preserve detail well, but very light and dark areas may appear blown out.
[Example images: Sierra, two-row Sierra, Sierra Lite, and Atkinson dithering]
In optical fiber systems
Stimulated Brillouin Scattering (SBS) is a nonlinear optical effect that limits the launched optical power in fiber optic systems. This power limit can be increased by dithering the transmit optical center frequency, typically implemented by modulating the laser's bias input. See also polarization scrambling.
See also
- Digital audio
- Quantization (signal processing)
- Anti-aliasing
- Lossy data compression
References
- ^ Ken C. Pohlmann (2005). Principles of Digital Audio. McGraw-Hill Professional. ISBN 0-07-144156-5. http://books.google.com/?id=VZw6z9a03ikC&pg=PA49&dq=didderen+dither+intitle:Principles+intitle:of+intitle:Digital+intitle:Audio.
- ^ William C. Farmer (1945). Ordnance Field Guide: Restricted. Military service publishing company. http://books.google.com/?id=15ffO4UVw8QC&q=dither.
- ^ Granino Arthur Korn and Theresa M. Korn (1952). Electronic Analog Computers: (d-c Analog Computers). McGraw-Hill. http://books.google.com/?id=dwsuAAAAIAAJ&q=dither.
- ^ Thomas J. Lynch (1985). Data Compression: Techniques and Applications. Lifetime Learning Publications. ISBN 978-0-534-03418-4. http://books.google.com/?id=E7EmAAAAMAAJ&q=first+suggested+by+Roberts+in+1962&dq=first+suggested+by+Roberts+in+1962.
- ^ Lawrence G. Roberts, Picture Coding Using Pseudo-Random Noise, MIT, S.M. thesis, 1961 online
- ^ Lawrence G. Roberts (February 1962). "Picture Coding Using Pseudo-Random Noise" (abstract). IEEE Trans. Information Theory 8 (2): 145–154. doi:10.1109/TIT.1962.1057702. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1057702.
- ^ L. Schuchman (December 1964). "Dither Signals and Their Effect on Quantization Noise" (abstract). IEEE Trans. Communications 12 (4): 162–165. doi:10.1109/TCOM.1964.1088973. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1088973.
- ^ Lipshitz, Stanley P; Vanderkooy, John; Wannamaker, Robert A. (November 1991). "Minimally Audible Noise Shaping". J. Audio Eng. Soc. 39 (11): 836–852. http://www.aes.org/e-lib/browse.cfm?elib=5956. Retrieved October 28, 2009.
- ^ a b Vanderkooy, John; Lipshitz, Stanley P (December 1987). "Dither in Digital Audio". J. Audio Eng. Soc. 35 (12): 966–975. http://www.aes.org/e-lib/browse.cfm?elib=5173. Retrieved October 28, 2009.
- ^ Mastering Audio: The Art and the Science by Bob Katz, pages 49-50, ISBN 978-0-240-80545-0
- ^ Ulichney, Robert A (1994). "Halftone Characterization in the Frequency Domain". http://ulichney.netfirms.com/CV/papers/1994-freq-characterization.pdf. Retrieved 2012-07-20.
- ^ 6-Bit vs. 8-Bit... PVA/MVA vs. TN+Film Are Things Changing? [1]
- ^ a b c d e Crocker, Lee Daniel; Boulay, Paul & Morra, Mike (1991-06-20). "Digital Halftoning". Computer Lab and Reference Library. http://www.efg2.com/Lab/Library/ImageProcessing/DHALF.TXT. Retrieved 2007-09-10. Note: this article contains a minor mistake: “(To fully reproduce our 256-level image, we would need to use an 8x8 pattern.)” The bold part should read “16x16”.
- ^ Silva, Aristófanes Correia; Lucena, Paula Salgado & Figuerola, Wilfredo Blanco (2000-12-13). "Average Dithering". Image Based Artistic Dithering. Visgraf Lab. http://www.visgraf.impa.br/Courses/ip00/proj/Dithering1/. Retrieved 2007-09-10.
- ^ Ulichney, Robert A (1993). "The void-and-cluster method for dither array generation". http://ulichney.netfirms.com/CV/papers/1993-void-cluster.pdf. Retrieved 2012-07-19.
External links
- "Dither - Not All Noise Is Bad"
- What is Dither? Article previously published in Australian HI-FI with visual examples of how audio dither sharply reduces high order harmonic distortion.
Other well-written papers on the subject, at a more elementary level, are available from:
- Aldrich, Nika. "Dither Explained"
Both Nika Aldrich and Bob Katz are esteemed experts in the field of digital audio and have books available as well, each of which is far more comprehensive in its explanations:
- Aldrich, Nika. "Digital Audio Explained"
- Dither Vibration Example
More recent research in the field of dither for audio was done by Lipshitz, Vanderkooy, and Wannamaker at the University of Waterloo:
- Stan Lipshitz