Is analog better than digital?
A question for the ages. It’s been asked and answered about a gazillion times and here’s our own take on it.
Short answer: no, it isn’t.
Here’s the long answer:
First, the word digital is simply the name for the quantization, or measurement, of a real-world, or analog, phenomenon.
In the analog world, an electrical signal has different properties and we act on those properties using electronic components laid out in circuits. The way the properties of a signal are influenced by these circuits is dictated by the laws of physics and the physical properties of the components used in the circuits.
An electrical signal can be described by an amplitude, measured in volts, and a frequency, the number of cycles per second, measured in hertz. A single measurement of these two represents a “snapshot” of the state of the signal at a single point in time. Similarly, if you were to take 30 photos per second of a person running and play them back at the same speed, you’d basically get a moving representation of the original person running - which is exactly what a movie is.
In the digital world, the analog signal is converted into data usable by a computer program, and it is this process that often causes a great deal of confusion, so let’s see what happens in this Analog-to-Digital Conversion - or ADC.
First, a few basics. Computers handle numbers, nothing else. These numbers are represented internally as 1s and 0s. Hardware-wise, these 1s and 0s are billions of tiny transistor switches that are either on or off. The higher the number, the more bits (1s and 0s) are required to represent it. For example, the highest possible number in an 8-bit representation is 255, or 11111111. The highest possible number in a 16-bit representation is 65535, or 1111111111111111.
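The relationship between bit count and highest representable value can be checked in a few lines of Python (used here purely for illustration):

```python
# The highest unsigned value representable with n bits is 2**n - 1.
for bits in (8, 16):
    top = 2**bits - 1
    print(f"{bits}-bit maximum: {top} (binary {top:b})")
# -> 8-bit maximum: 255 (binary 11111111)
# -> 16-bit maximum: 65535 (binary 1111111111111111)
```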
A digital measurement, made by an ADC, has two main defining attributes: resolution and sample rate. Resolution is the size, in bits, of the scale used to make a voltage measurement. The ADC measures the analog voltage against a reference voltage and maps it onto this scale. Consider these examples:
Suppose an 8-bit ADC with an analog reference of 5 volts. If it takes a measurement of 2.5 V and maps it onto its 8-bit scale (0 to 255), it uses a simple rule of three: 2.5 / 5 * 255 = 127.5, which truncates to 127.
Now, suppose a 16-bit ADC with the same analog reference of 5 volts. It also takes a measurement of 2.5 V but instead maps it onto its 16-bit scale (0 to 65535). Using the same rule of three, it gets 2.5 / 5 * 65535 = 32767.5, which truncates to 32767.
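The rule of three used in both examples can be sketched as a small Python function (the truncation to an integer matches the figures above; real ADCs may round differently):

```python
def adc_reading(voltage: float, vref: float, bits: int) -> int:
    """Map an analog voltage onto an n-bit scale, truncating the result."""
    return int(voltage / vref * (2**bits - 1))

print(adc_reading(2.5, 5.0, 8))   # -> 127
print(adc_reading(2.5, 5.0, 16))  # -> 32767
```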
You can clearly see that the 16-bit ADC has far more possible values for the same range of analog voltage - making the measurement much more precise.
In summary, the higher the resolution, the more accurate the measurement an ADC makes.
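One way to put a number on that accuracy is the size of a single quantization step - the smallest voltage difference each scale can distinguish. This is a rough sketch; real converters have additional error sources:

```python
def quant_step(vref: float, bits: int) -> float:
    """Voltage covered by one step of an n-bit scale."""
    return vref / (2**bits - 1)

for bits in (8, 16):
    step_mv = quant_step(5.0, bits) * 1000
    print(f"{bits}-bit step: {step_mv:.3f} mV")
# The 8-bit step is ~19.6 mV; the 16-bit step is ~0.076 mV.
```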
The other attribute, sample rate, is fairly simple. It represents the number of measurements made by the ADC every second. A sample rate of 44 kHz means that 44,000 voltage measurements are made every second. Returning to our running person, it’s the number of photos we take every second. The more photos we take in a given amount of time, the less uncertainty we have about what happened during that same time frame.
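Put concretely, the sample rate fixes how much time passes between two consecutive measurements:

```python
sample_rate = 44_000  # samples per second, as in the text
interval_us = 1_000_000 / sample_rate  # microseconds between samples
print(f"One measurement every {interval_us:.2f} microseconds")
# -> One measurement every 22.73 microseconds
```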
By now you should be able to picture a number of “points” representing amplitude over time, accumulating in order in the digital world. These points can be used to reconstruct the original waveform with the same accuracy as the ADC measurements, just as the photos of the running person can reproduce the scene the camera caught as it happened.
Now here’s the tricky part: the number of “points”, or “samples”, is finite. An analog signal is continuous, while the digital world only knows what happened at the moments of measurement and has no idea whatsoever what happened in between. That information is lost and causes uncertainty. To compensate for it, the reconstruction process uses approximation and interpolation, which are by definition prone to error. These errors include quantization error, from the finite resolution, and aliasing, when the signal contains frequencies too high for the sample rate to capture. A sufficiently accurate audio ADC should make both small enough that they become inaudible.
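To make “small enough” concrete, here is a quantize-and-reconstruct round trip over a sampled sine wave; the worst-case error stays within half a quantization step. This is a toy sketch - real converters add noise and filtering on top:

```python
import math

VREF, BITS = 5.0, 8
LEVELS = 2**BITS - 1

def round_trip(v: float) -> float:
    """Quantize a voltage to the nearest code, then convert back to volts."""
    code = round(v / VREF * LEVELS)
    return code / LEVELS * VREF

# Sample a sine wave swinging over the full 0-5 V range.
samples = [VREF * (0.5 + 0.5 * math.sin(2 * math.pi * t / 1000)) for t in range(1000)]
worst_error = max(abs(v - round_trip(v)) for v in samples)
print(f"Worst reconstruction error: {worst_error * 1000:.3f} mV")
# Bounded by half a step: 5 V / 255 / 2, about 9.8 mV
```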
Unfortunately, because it’s impossible to take infinitely precise measurements at every Planck time (the smallest meaningful unit of time), a 100% accurate reconstruction of the original signal following an ADC - via DAC, or Digital-to-Analog Conversion - is also impossible. A certain amount of error is inevitable, regardless of resolution or sample rate. That being said, human hearing is far from perfect, and a normal human adult shouldn’t be able to hear the artifacts of a good quality ADC/DAC. Music CDs introduced 16-bit resolution and a 44.1 kHz sample rate to the masses, and that’s pretty much the best quality normal human ears can perceive.
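The 44.1 kHz figure isn’t arbitrary: by the Nyquist-Shannon sampling theorem, a given sample rate can faithfully capture frequencies up to half that rate, and CD audio’s ceiling sits just above the roughly 20 kHz upper bound commonly cited for adult hearing (that 20 kHz figure is an assumption added here, not from the text):

```python
sample_rate = 44_100       # Hz, CD audio
nyquist_limit = sample_rate / 2
hearing_limit = 20_000     # Hz, commonly cited adult upper bound (assumption)

print(nyquist_limit)                   # -> 22050.0
print(nyquist_limit > hearing_limit)   # -> True
```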
So, is analog better than digital? Physics says yes; your ears, though, aren’t realistically sensitive enough to tell. Of course, a very bad digital circuit might take poor measurements or even introduce digital clock noise or artifacts from bad computation, but normally a signal going through a good digital circuit should be indistinguishable from the original to human ears, quality-wise - especially once it has been transformed.
Most people who claim to “hear the difference” are doing so out of bias and couldn’t tell the two apart in a blind test. Bottom line: always trust your own ears before everyone else’s.