
Chapter 2. From Analog Sound to Digital Audio

2.1. Signal Chain

What does analog mean?

A sound generated by an instrument takes the form of an analog pressure wave. The term analog refers to the fact that the pressure varies continuously with some variable, for example the motion of a string; this is in contrast with a digital signal, which varies in discrete steps, for example the binary codes representing the voltages that encode the air pressure.
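To make the continuous-versus-discrete distinction concrete, here is a minimal Python sketch (not from the text; the sample rate and bit depth are deliberately tiny, illustrative values) that samples a continuously varying pressure function at discrete instants and snaps each value to a binary code:

import math

SAMPLE_RATE = 8        # samples per second (deliberately tiny, so it prints)
BIT_DEPTH = 3          # 3 bits give 2**3 = 8 discrete steps
LEVELS = 2 ** BIT_DEPTH

def pressure(t):
    """Analog stand-in: pressure varies continuously with time t."""
    return math.sin(2 * math.pi * t)       # a 1 Hz wave, values in [-1, 1]

for n in range(SAMPLE_RATE):
    t = n / SAMPLE_RATE                    # sampling: pick discrete instants
    x = pressure(t)
    # Quantizing: snap the continuous value to one of LEVELS binary codes.
    code = round((x + 1) / 2 * (LEVELS - 1))
    print(f"t = {t:.3f}   analog = {x:+.4f}   digital code = {code:03b}")

Each printed line pairs a continuous (analog) value with the discrete (digital) code standing in for it; a real converter does the same thing, only at tens of thousands of samples per second and with 16 or 24 bits per sample.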

Right now, we are still in the analog domain. If we want to record (or track) this sound, this varying air pressure, we must use a device called a transducer: it captures one form of energy (for example mechanical, like air pressure) and transforms it into another (for example electrical, like voltages). That is why the device capturing the varying air pressure in Figure 5 is labelled with a “T”: it is a transducer.

Why do we need transducers?

Because we do not know how to encode information with air pressures, but we have found ways to do so with voltages. Since our best tool for processing information is a computer, we must transform the electrical (analog) signal into a digital signal through a converter. On the way out, the same thing happens: our ears are not very good at decoding electrical signals, so we transform the processed digital signal through another converter before transducing it into sound pressures for our ears’ pleasure.

Figure 5: Signal Chain. The sound travels from left to right: a pressure wave (analog signal) is generated, then transduced (T), then processed (P), then converted into a digital signal by an analog-to-digital converter (AD), then processed in the digital domain (P), then converted back into an analog signal by a digital-to-analog converter (DA), then processed (P), then transduced (T) back into a pressure wave.

To make the above figure more concrete, you could imagine that, from left to right, the first “T” is a microphone, the first “P” is a microphone pre-amplifier, the “AD” is found in an audio interface, the second “P” is the computer with which you will process the sound, the “DA” is found in the same audio interface, the third “P” is a speaker amplifier and the last “T” is a set of speakers: those are the devices you would use to record a sound with a microphone. If you do not know what all those terms mean, do not worry, each will be treated in its own section later.
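If it helps to see the chain as one flow, here is a hypothetical Python sketch that models the electrical portion of Figure 5 as a pipeline of functions, each taking a list of samples and returning a processed list. The stage behaviours, gains and sample values are all invented for illustration:

def preamp(signal, gain=100.0):
    """First P: a pre-amplifier boosting the microphone's weak voltages."""
    return [s * gain for s in signal]

def ad_convert(signal, levels=65536):
    """AD: quantize each voltage (assumed in [-1, 1]) to a discrete code."""
    return [round(s * (levels // 2 - 1)) for s in signal]

def digital_process(signal, gain=0.5):
    """Second P: whatever the computer does; here, just a volume change."""
    return [int(s * gain) for s in signal]

def da_convert(signal, levels=65536):
    """DA: map discrete codes back onto a continuous voltage range."""
    return [s / (levels // 2 - 1) for s in signal]

# The transducers (T) sit outside the electrical domain, so the sketch
# starts with tiny voltages coming out of the microphone.
mic_signal = [0.001, 0.002, -0.0015, 0.0005]
signal = mic_signal
for stage in (preamp, ad_convert, digital_process, da_convert):
    signal = stage(signal)
print(signal)

The numbers do not matter here; the shape does: each stage’s output feeds the next stage’s input, which is exactly what “chain” means.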

What is a signal chain?

The chain of devices and events depicted in Figure 5 is called a signal chain. Of course, the signal chain will vary greatly depending on the situation; what is presented in Figure 5 above is the most complete logical representation of a signal chain, with an analog source, an analog destination and some processing in between. What takes place in that middle processing box (labelled “P”) depends on what needs to be achieved: improving a recording’s volume so that it can be better mixed with other signals; mixing different signals so that they sound good together; or slightly modifying the balance of a sound so that it sounds better in a certain listening environment.
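As a small illustration of the first two of those jobs (the sample values and gain below are invented), raising a volume is a multiplication and mixing is an addition, sample by sample:

def apply_gain(signal, gain):
    """Scale every sample: gain > 1 raises the volume, gain < 1 lowers it."""
    return [s * gain for s in signal]

def mix(a, b):
    """Mix two equal-length signals by summing them sample by sample."""
    return [sa + sb for sa, sb in zip(a, b)]

voice = [0.10, 0.20, -0.15, 0.05]       # illustrative sample values
guitar = [0.30, -0.10, 0.25, -0.20]

# Bring the quiet voice up to a workable level, then mix it with the guitar.
balanced = mix(apply_gain(voice, 2.0), guitar)
print([round(s, 2) for s in balanced])  # [0.5, 0.3, -0.05, -0.1]

In practice the mixed sum has to stay within the available range, which is one reason levels are adjusted before signals are mixed together.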

Why do I need a signal chain?

But why have this processing at all? Isn’t the simplest signal chain just having the source and the destination close by, like in a live performance? You are right, it is; but what if they are separated in space, for example if I record the sound in one location and want to play it back 1,000 miles away? Or what if I want to listen to it again later? That is why we need to record sounds: to make sure they are available when and where we need them.

The issue is that as soon as you decide to record anything, you are in fact changing what you are recording: first by capturing only a part of the whole performance, and second by playing the recording back in an environment that is different from the one in which it was made. All of this might seem very logical and maybe a bit obvious, but those are precisely the reasons why audio engineering exists: to create or recreate a pleasant listening experience from recorded material.

What is audio engineering’s job?

One debate I will put to rest right now is this: shouldn’t audio engineering’s job be to reproduce the original performance with as much fidelity as possible? Since you have paid attention over the last couple of paragraphs, you already know the answer: no. It is simply not possible. Fidelity to what? An ephemeral performance that ceased to exist the moment it was over? Everything is different once you have recorded the performance, so yes, recorded music is a complete fabrication, much like movies are: I hear no one complaining about it, and gods know how much editing goes into making a movie. Can you imagine a director who had to shoot the whole movie in one go?

To simplify the signal chain displayed in Figure 5, let us imagine that the transducer in the left-hand analog domain is the membrane of a microphone picking up sound from the source, and that the transducer reproducing the sound in the right-hand analog domain is the driver of a speaker (driver is the technical term, but it is basically a membrane pushing air, see section 4.2). The transducer on the way in could also be an electric guitar pickup, or a telephone. On the way out, the transducer could be the tiny speakers in your earbuds or the gigantic sound system in your favorite arena, or a telephone, for that matter. These will not be discussed here, but references [1] and [2] are a good start if you are interested.

 
