Most engineers know of the Nyquist-Shannon sampling theorem and frequently invoke it to justify design decisions. However, many of these decisions are misguided, owing to a poor understanding of how to apply the theorem in the real world, partly caused by the way it’s taught. Usually the claims come from recent graduates, but sometimes from seasoned engineers who should know better. They are generally something along the lines of:
These claims would all be true if ideal sampling were possible. In the real world, however, ideal sampling is never possible, as it would require all of the following, none of which is realisable in practice:
So, is the Nyquist theorem of any use in the real world? Yes, but rather than telling us what we can do, as the claims above imply, it actually tells us what we can’t do: we definitely can’t capture all the information in a signal by sampling at less than the Nyquist rate. That is all the theory tells us; it does not tell us how high the sampling frequency needs to be to capture all the information in the signal².
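As a small numerical sketch of what is lost below the Nyquist rate (illustrative values, not from the article): a 9 Hz cosine sampled at 10 Hz produces exactly the same samples as a 1 Hz cosine, so the information distinguishing them is irretrievably gone.

```python
import numpy as np

fs = 10.0          # sample rate in Hz
n = np.arange(50)  # sample indices
t = n / fs         # sample instants

# A 9 Hz tone sampled at 10 Hz aliases onto 1 Hz:
# cos(2*pi*9*n/10) = cos(2*pi*n - 2*pi*1*n/10) = cos(2*pi*1*n/10)
x_9hz = np.cos(2 * np.pi * 9 * t)
x_1hz = np.cos(2 * np.pi * 1 * t)

assert np.allclose(x_9hz, x_1hz)  # indistinguishable from the samples alone
```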
Even if ideal sampling could be achieved, many manipulations and analyses require the original signal to be reconstructed at a higher sample frequency (e.g. measuring the peak of a signal approaching the Nyquist frequency, which may occur far from any sample). This adds considerable processor overhead and latency before the reconstructed signal is ready for processing. It is often easier and cheaper simply to sample at a higher frequency in the first place.
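To make the peak-measurement point concrete, here is a sketch with illustrative numbers (not from the article): a 4.9 Hz sine sampled at 10 Hz over a short window has no sample anywhere near its true peak of 1.0, and a Whittaker-Shannon (sinc) reconstruction on a finer grid recovers a better estimate.

```python
import numpy as np

fs = 10.0  # sample rate, just below 2x the 4.9 Hz tone
n = np.arange(10)
x = np.sin(2 * np.pi * 4.9 * n / fs)  # true peak amplitude is 1.0

coarse_peak = np.abs(x).max()  # ~0.54: every sample misses the peak

# Whittaker-Shannon reconstruction evaluated on a 16x finer grid:
# x(t) = sum_n x[n] * sinc(fs*t - n)
t_fine = np.arange(10 * 16) / (fs * 16)
sinc_matrix = np.sinc(fs * t_fine[:, None] - n[None, :])
x_fine = sinc_matrix @ x

fine_peak = np.abs(x_fine).max()  # typically much closer to the true peak

assert coarse_peak < 0.6
assert fine_peak >= coarse_peak
```

The reconstruction here is truncated to ten samples, so it still undershoots the true peak slightly; the point is the extra computation needed just to estimate a peak that denser sampling would have captured directly.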
So, what sampling frequency should we use? The answer is application dependent and may require careful analysis for demanding applications, weighing the quality required, circuit cost (particularly of the ADC), processing requirements, power requirements, and development time and cost. However, where an increased sample rate doesn’t cause any significant problems, a reasonable rule of thumb is five times the Nyquist rate (i.e. ten times the maximum frequency of interest); this allows plenty of overhead for the anti-aliasing filter and adds enough additional information to compensate for most sampling imperfections. This sample frequency also captures the shape of the waveform well enough for most processing to be performed directly on the sampled data, without having to reconstruct the signal. If a lower sample-rate stream is needed later, a decimation filter can be used: a digital filter can have a sharper roll-off than an analogue filter. As with all rules of thumb, understanding the reasoning behind it means we can ignore it when it’s not appropriate (e.g. when a faster ADC would be exorbitantly expensive) and invest the extra design effort needed to ensure a lower sampling rate is sufficient.
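The decimation step can be sketched as follows (the rates and filter design are illustrative assumptions, not from the article): a windowed-sinc FIR low-pass removes out-of-band content before the rate reduction, with a roll-off an analogue filter could not cheaply match.

```python
import numpy as np

fs = 100.0  # original sample rate (illustrative)
M = 5       # decimation factor -> new rate of 20 Hz

# Test signal: a 2 Hz tone we want to keep plus a 40 Hz tone that
# would alias after downsampling if left in.
t = np.arange(2000) / fs
x = np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

# 101-tap Hamming-windowed-sinc low-pass, cutoff 8 Hz (below the new
# Nyquist frequency of 10 Hz).
taps = 101
m = np.arange(taps) - (taps - 1) // 2
fc = 8.0 / fs  # normalised cutoff (cycles/sample)
h = 2 * fc * np.sinc(2 * fc * m) * np.hamming(taps)
h /= h.sum()   # unity gain at DC

y = np.convolve(x, h, mode="same")  # zero-phase for a symmetric FIR
decimated = y[::M]                  # keep every Mth sample

# Away from the edges, the decimated stream should match the 2 Hz tone
# alone; the 40 Hz component is attenuated by roughly 50 dB.
ref = np.sin(2 * np.pi * 2 * t)[::M]
err = np.abs(decimated[40:-40] - ref[40:-40]).max()
assert err < 0.05
```

In practice a library routine (e.g. a polyphase decimator) would avoid computing the filter output for samples that are then discarded, but the filter-then-downsample structure is the same.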
This principle can be seen in many products on the market:
¹ To keep things simple, we’ll only consider baseband signals in this article.
² In fact, it’s easy to see that there is no sample frequency that allows us to capture all the information in the signal, as we would need the samples to be of infinite precision.