Downsampling (signal processing)
In digital signal processing, downsampling, compression, and decimation are terms associated with the process of resampling in a multi-rate digital signal processing system. Both downsampling and decimation can be synonymous with compression, or they can describe an entire process of bandwidth reduction (filtering) and sample-rate reduction.[1][2] When the process is performed on a sequence of samples of a signal or a continuous function, it produces an approximation of the sequence that would have been obtained by sampling the signal at a lower rate (or density, as in the case of a photograph).
Decimation is a term that historically means the removal of every tenth one.[a] But in signal processing, decimation by a factor of 10 actually means keeping only every tenth sample. This factor multiplies the sampling interval or, equivalently, divides the sampling rate. For example, if compact disc audio at 44,100 samples/second is decimated by a factor of 5/4, the resulting sample rate is 35,280. A system component that performs decimation is called a decimator. Decimation by an integer factor is also called compression.[3][4]
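A minimal illustration of this arithmetic (Python with NumPy; the signal is just a placeholder sequence):

```python
import numpy as np

x = np.arange(100)            # placeholder for 100 samples of some signal
x_dec = x[::10]               # "decimation by 10": keep only every tenth sample
# The sampling interval is multiplied by 10; equivalently, the rate is divided by 10.

# CD-audio example from the text: decimation by a factor of 5/4
print(44_100 / (5 / 4))       # 35280.0 samples/second
```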
Downsampling by an integer factor
Rate reduction by an integer factor M can be explained as a two-step process, with an equivalent implementation that is more efficient:[5]
- Reduce high-frequency signal components with a digital lowpass filter.
- Decimate the filtered signal by M; that is, keep only every Mth sample.
Step 2 alone creates undesirable aliasing (i.e. high-frequency signal components will copy into the lower frequency band and be mistaken for lower frequencies). Step 1, when necessary, suppresses aliasing to an acceptable level. In this application, the filter is called an anti-aliasing filter, and its design is discussed below. Also see undersampling for information about decimating bandpass functions and signals.
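A minimal sketch of the two-step process in Python (using NumPy and SciPy; the filter length and the use of firwin/lfilter are illustrative choices, not part of the definition):

```python
import numpy as np
from scipy import signal

def downsample(x, M, numtaps=101):
    """Rate reduction by an integer factor M as a two-step process."""
    # Step 1: anti-aliasing lowpass filter.  The cutoff is placed at the new
    # Nyquist frequency, 0.5/M cycles per input sample (firwin normalizes the
    # cutoff so that 1.0 corresponds to the input Nyquist frequency).
    h = signal.firwin(numtaps, 1.0 / M)
    x_filtered = signal.lfilter(h, 1.0, x)
    # Step 2: decimation by M -- keep only every Mth sample.
    return x_filtered[::M]
```

In practice the two steps are merged so that only the retained samples are ever computed, as described next.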
When the anti-aliasing filter is an IIR design, it relies on feedback from output to input prior to the second step, so every full-rate output sample must be computed even though most will be discarded. With FIR filtering, it is an easy matter to compute only every Mth output. The calculation performed by a decimating FIR filter for the nth output sample is the dot product:[b]

y[n] = \sum_{k=0}^{K-1} x[nM-k]\cdot h[k],

where the h[•] sequence is the impulse response and K is its length. x[•] represents the input sequence being downsampled. In a general-purpose processor, after computing y[n], the easiest way to compute y[n+1] is to advance the starting index in the x[•] array by M and recompute the dot product. In the case M = 2, h[•] can be designed as a half-band filter, where almost half of the coefficients are zero and need not be included in the dot products.
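A sketch of that computation in Python/NumPy (the function name and indexing convention are illustrative choices):

```python
import numpy as np

def fir_decimate(x, h, M):
    """Decimating FIR filter: compute only every Mth output as a dot product."""
    K = len(h)
    num_outputs = (len(x) - K) // M + 1
    y = np.empty(num_outputs)
    start = 0
    for n in range(num_outputs):
        # Dot product of the impulse response with K consecutive input samples,
        # newest sample first.  Each new output advances the start index by M.
        window = x[start:start + K][::-1]
        y[n] = np.dot(h, window)
        start += M
    return y
```

This produces the same samples as filtering at the full input rate and then discarding M−1 of every M outputs, but with only 1/M of the multiplications.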
Impulse response coefficients taken at intervals of M form a subsequence, and there are M such subsequences (phases) multiplexed together. The dot product is the sum of the dot products of each subsequence with the corresponding samples of the x[•] sequence. Furthermore, because of downsampling by M, the stream of x[•] samples involved in any one of the M dot products is never involved in the other dot products. Thus M low-order FIR filters are each filtering one of M multiplexed phases of the input stream, and the M outputs are being summed. This viewpoint offers a different implementation that might be advantageous in a multi-processor architecture. In other words, the input stream is demultiplexed and sent through a bank of M filters whose outputs are summed. When implemented that way, it is called a polyphase filter.
For completeness, we now mention that a possible, but unlikely, implementation of each phase is to replace the coefficients of the other phases with zeros in a copy of the h[•] array, process the original x[•] sequence at the input rate (which means multiplying by zeros), and decimate the output by a factor of M. The equivalence of this inefficient method and the implementation described above is known as the first Noble identity.[6][c] It is sometimes used in derivations of the polyphase method.
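The equivalence of the polyphase rearrangement and the inefficient full-rate method can be checked numerically. The following is a sketch (Python/NumPy; the function names and the zero-extension convention are illustrative):

```python
import numpy as np

def decimate_reference(x, h, M):
    """Inefficient reference: filter at the full input rate, keep every Mth output."""
    return np.convolve(x, h)[::M]

def decimate_polyphase(x, h, M):
    """M short subfilters, each filtering one demultiplexed phase; outputs summed."""
    Ny = -(-(len(x) + len(h) - 1) // M)           # number of decimated outputs (ceiling)
    y = np.zeros(Ny)
    for p in range(M):
        e = h[p::M]                               # phase-p subfilter: h[p], h[p+M], ...
        idx = np.arange(Ny) * M - p               # phase-p input is x[nM - p]
        u = np.where((idx >= 0) & (idx < len(x)),
                     x[np.clip(idx, 0, len(x) - 1)], 0.0)
        y += np.convolve(u, e)[:Ny]               # filter at the low rate and accumulate
    return y

# The two implementations produce the same outputs:
rng = np.random.default_rng(0)
x, h = rng.standard_normal(64), rng.standard_normal(12)
for M in (2, 3, 4):
    assert np.allclose(decimate_reference(x, h, M), decimate_polyphase(x, h, M))
```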
Anti-aliasing filter
Let X(f) be the Fourier transform of any function, x(t), whose samples at some interval, T, equal the x[n] sequence. Then the discrete-time Fourier transform (DTFT) is a Fourier series representation of a periodic summation of X(f):[d]

\sum_{n=-\infty}^{\infty} x(nT)\, e^{-i 2\pi f nT} = \frac{1}{T} \sum_{k=-\infty}^{\infty} X\left(f - \frac{k}{T}\right).

When T has units of seconds, f has units of hertz. Replacing T with MT in the formulas above gives the DTFT of the decimated sequence, x[nM]:

\sum_{n=-\infty}^{\infty} x(nMT)\, e^{-i 2\pi f nMT} = \frac{1}{MT} \sum_{k=-\infty}^{\infty} X\left(f - \frac{k}{MT}\right).

The periodic summation has been reduced in amplitude and periodicity by a factor of M. An example of both these distributions is depicted in the two traces of Fig 1.[e][f][g] Aliasing occurs when adjacent copies of X(f) overlap. The purpose of the anti-aliasing filter is to ensure that the reduced periodicity does not create overlap. The condition that ensures the copies of X(f) do not overlap each other is B < \frac{0.5}{MT}, where B is the bandwidth of x(t), so that \frac{0.5}{MT} is the maximum cutoff frequency of an ideal anti-aliasing filter.[A]
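As a quick numerical illustration of that condition (the sample rate, factor, and tone frequency below are arbitrary choices): a component above the reduced Nyquist frequency folds back into the retained band when no anti-aliasing filter is applied.

```python
import numpy as np

fs, M, f_tone = 48_000, 4, 7_000        # assumed values for illustration
t = np.arange(4096) / fs
x = np.cos(2 * np.pi * f_tone * t)      # 7 kHz tone, well below the input Nyquist (24 kHz)

y = x[::M]                              # decimate with no anti-aliasing filter
fs_out = fs / M                         # 12 kHz output rate; new Nyquist is 6 kHz
spectrum = np.abs(np.fft.rfft(y * np.hanning(len(y))))
f_peak = np.fft.rfftfreq(len(y), d=1 / fs_out)[spectrum.argmax()]
print(round(f_peak))                    # ~5000 Hz: the tone has aliased to fs_out - 7 kHz
```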
By a rational factor
Let M/L denote the decimation factor,[B] where: M, L ∈ ℤ+; M > L.
- Increase (resample) the sequence by a factor of L. This is called upsampling, or interpolation.
- Decimate by a factor of M.
Step 1 requires a lowpass filter after increasing (expanding) the data rate, and step 2 requires a lowpass filter before decimation. Therefore, both operations can be accomplished by a single filter with the lower of the two cutoff frequencies. For the M > L case, the anti-aliasing filter cutoff, 0.5/M cycles per intermediate sample, is the lower frequency.
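A sketch of the combined operation (Python with NumPy/SciPy; the FIR design and filter length are illustrative, and SciPy's signal.resample_poly implements the same idea more efficiently):

```python
import numpy as np
from scipy import signal

def resample_by_L_over_M(x, L, M, numtaps=201):
    """Change the rate by a factor of L/M: expand by L, filter once, decimate by M."""
    # Step 1 (expansion): insert L-1 zeros between input samples.
    expanded = np.zeros(len(x) * L)
    expanded[::L] = x
    # One lowpass filter serves as both the interpolation filter and the
    # anti-aliasing filter.  Its cutoff is the lower of the two requirements,
    # 0.5/max(L, M) cycles per intermediate sample.  The gain of L restores
    # the amplitude lost by zero insertion.
    h = L * signal.firwin(numtaps, 1.0 / max(L, M))
    filtered = signal.lfilter(h, 1.0, expanded)
    # Step 2: decimate by M.
    return filtered[::M]

# e.g. 48 kHz -> 44.1 kHz uses L = 147, M = 160 (a decimation factor of 160/147)
```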
Notes
- ^ Realizable low-pass filters have a "skirt", where the response diminishes from near one to near zero. In practice the cutoff frequency is placed far enough below the theoretical cutoff that the filter's skirt is contained below the theoretical cutoff.
- ^ General techniques for sample-rate conversion by factor R ∈ ℝ include polynomial interpolation and the Farrow structure.[7]
Page citations
- ^ Harris 2004. "6.1". p 128.
- ^ Crochiere and Rabiner "2". p 32. eq 2.55a.
- ^ Harris 2004. "2.2.1". p 25.
- ^ Oppenheim and Schafer. "4.2". p 143. eq 4.6.
- ^ Harris 2004. "2.2". p 22. fig 2.10.
- ^ Oppenheim and Schafer. "4.6". p 171. fig 4.22.
- ^ Tan 2008. "1.2.1". fig 12.2.
References
- ^ Oppenheim, Alan V.; Schafer, Ronald W.; Buck, John R. (1999). "4". Discrete-Time Signal Processing (2nd ed.). Upper Saddle River, N.J.: Prentice Hall. p. 168. ISBN 0-13-754920-2.
- ^ Tan, Li (2008-04-21). "Upsampling and downsampling". eetimes.com. EE Times. Retrieved 2017-04-10.
The process of reducing a sampling rate by an integer factor is referred to as downsampling of a data sequence. We also refer to downsampling as decimation. The term decimation used for the downsampling process has been accepted and used in many textbooks and fields.
- ^ Crochiere, R.E.; Rabiner, L.R. (1983). "2". Multirate Digital Signal Processing. Englewood Cliffs, NJ: Prentice-Hall. p. 32. ISBN 0136051626.
- ^ Poularikas, Alexander D. (September 1998). Handbook of Formulas and Tables for Signal Processing (1 ed.). CRC Press. pp. 42–48. ISBN 0849385792.
- ^ Harris, Frederic J. (2004-05-24). "2.2". Multirate Signal Processing for Communication Systems. Upper Saddle River, NJ: Prentice Hall PTR. pp. 20–21. ISBN 0131465112.
The process of down sampling can be visualized as a two-step progression. The process starts as an input series x(n) that is processed by a filter h(n) to obtain the output sequence y(n) with reduced bandwidth. The sample rate of the output sequence is then reduced Q-to-1 to a rate commensurate with the reduced signal bandwidth. In reality the processes of bandwidth reduction and sample rate reduction are merged in a single process called a multirate filter.
- ^ Strang, Gilbert; Nguyen, Truong (1996-10-01). Wavelets and Filter Banks (2nd ed.). Wellesley, MA: Wellesley-Cambridge Press. pp. 100–101. ISBN 0961408871.
No sensible engineer would do that.
- ^ Milić, Ljiljana (2009). Multirate Filtering for Digital Signal Processing. New York: Hershey. p. 192. ISBN 978-1-60566-178-0.
Generally, this approach is applicable when the ratio Fy/Fx is a rational, or an irrational number, and is suitable for the sampling rate increase and for the sampling rate decrease.
Further reading
- Proakis, John G. (2000). Digital Signal Processing: Principles, Algorithms and Applications (3rd ed.). India: Prentice-Hall. ISBN 8120311299.
- Lyons, Richard (2001). Understanding Digital Signal Processing. Prentice Hall. p. 304. ISBN 0-201-63467-8.
Decreasing the sampling rate is known as decimation.
- Antoniou, Andreas (2006). Digital Signal Processing. McGraw-Hill. p. 830. ISBN 0-07-145424-1.
Decimators can be used to reduce the sampling frequency, whereas interpolators can be used to increase it.
- Milic, Ljiljana (2009). Multirate Filtering for Digital Signal Processing. New York: Hershey. p. 35. ISBN 978-1-60566-178-0.
Sampling rate conversion systems are used to change the sampling rate of a signal. The process of sampling rate decrease is called decimation, and the process of sampling rate increase is called interpolation.
- Schilcher, T. "RF applications in digital signal processing". In: Digital Signal Processing. Proceedings, CERN Accelerator School, Sigtuna, Sweden, May 31 – June 9, 2007. Geneva, Switzerland: CERN (2008). p. 258. DOI: 10.5170/CERN-2008-003.
- Sliusar, I.I.; Slyusar, V.I.; Voloshko, S.V.; Smolyar, V.G. "Next Generation Optical Access based on N-OFDM with decimation". Third International Scientific-Practical Conference "Problems of Infocommunications. Science and Technology (PIC S&T'2016)". Kharkiv, October 3–6, 2016.
- Lindfors, Saska; Pärssinen, Aarno; Halonen, Kari A. I. "A 3-V 230-MHz CMOS Decimation Subsampler". IEEE Transactions on Circuits and Systems, Vol. 52, No. 2, February 2005. p. 110.