What is Sound?

Sound is a vibration or a series of vibrations that move through the air. Anything that creates the vibrations, or waves, is referred to as the source. The source can be a string, a bell, a voice, or anything that generates a vibration within our hearing range.

Imagine dropping a stone in water. The stone (source) will create a series of ripples in the water. The ripples (waves) form as areas of densely packed molecules are pushed together while the sparser areas between them expand, creating flatter regions.

Sound travels just like this, by compression and rarefaction. Compression is the area where dense molecules are pushed together and rarefaction is the area where fewer molecules are pulled apart, or expanded, in the wave. The compression area is higher in pressure and the rarefaction area is lower in pressure.

Seven Characteristics of Sound

You may already know about amplitude and frequency. If you have ever adjusted the tone on your amp or stereo, then you have turned the amplitude of a frequency or range of frequencies up or down.

It is necessary to understand these two sound wave characteristics, as they are essential building blocks in audio engineering. Two other characteristics of sound help humans identify one sound from another: harmonics and envelope.

The remaining three characteristics of sound are velocity, wavelength, and phase. These identify how fast a sound wave travels, the physical length of a completed cycle, and a point along the wave's cycle, respectively.

Amplitude

Amplitude is associated with the height of a sound wave and is related to volume. When a stereo, amp, or television's volume is turned up or down, the amplitude of the sound being projected is increased or decreased.

Loud sounds have higher amplitudes while quiet sounds have lower amplitudes. The greater the amplitude of a sound, the greater the sound pressure level.

Amplitude is measured in decibels (dB). Most people can recognize about a 3 dB change in amplitude. A trained ear can recognize even smaller amplitude changes. An increase in amplitude is usually expressed as a “boost” and a decrease in amplitude is often expressed as a “cut.”

The word volume is often substituted for amplitude. An audio engineer may say, “boost that 3 dB” or “cut that 3 dB.” When amplitude is written out, it is expressed with a positive sign, such as +3 dB, or a negative sign, such as −3 dB.
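
Because decibels describe a ratio rather than an absolute amount, a given dB change always corresponds to the same amplitude ratio. Here is a minimal Python sketch of that relationship, assuming the standard 20 × log10 convention for amplitude:

    import math

    def db_change(ratio):
        """Convert an amplitude ratio to a change in decibels."""
        return 20 * math.log10(ratio)

    def ratio_from_db(db):
        """Convert a decibel change back to an amplitude ratio."""
        return 10 ** (db / 20)

    print(f"+3 dB boost -> amplitude x {ratio_from_db(3):.2f}")   # ~1.41
    print(f"-3 dB cut   -> amplitude x {ratio_from_db(-3):.2f}")  # ~0.71
    print(f"doubling amplitude -> {db_change(2):+.1f} dB")        # +6.0 dB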

Here are some common activities and their corresponding decibel levels:
  • 0 dB – near silence
  • 40–50 dB – room ambience
  • 50–60 dB – whisper
  • 60–75 dB – typical conversation
  • 80–85 dB – a blender, optimum level to monitor sound according to the Fletcher–Munson curve
  • 90 dB – factory noise, regular exposure can cause hearing damage
  • 100 dB – baby crying
  • 110 dB – leaf blower, car horn
  • 120 dB – threshold of pain, can cause hearing damage
  • 140 dB – snare drum played hard from about 1′
  • 150–160 dB – jet engine
As you can see, in our daily lives, we are constantly confronted with amplitude levels between 0 dB and about 160 dB. Most people listen to music between 70 dB (on the quiet side) and 100 dB (on the loud side).

Frequency

The number of cycles per second (cps) created by a sound wave is commonly referred to as the frequency. If you are a musician, you may have tuned your instrument to A/440. Here, “440” is the frequency of the sound wave. Unlike amplitude, which is measured in decibels, frequency is measured in hertz (Hz), named after the German physicist Heinrich Hertz.

The average human hearing range is from 20 to 20,000 Hz. Typically, frequencies of 1000 cycles per second and above are expressed in kilohertz (kHz), i.e., 1000 Hz = 1 kHz, 2000 Hz = 2 kHz, and 3000 Hz = 3 kHz.

Frequency is related to the pitch of a sound. A chart of instrument frequency ranges, and of how the keys of a piano relate to frequency, is a handy reference for identifying them.

The first note on a piano is A, which is 27.5 Hz. Have you ever turned up the bass or treble on your car stereo? If so, you are boosting or cutting the amplitude of a frequency or range of frequencies. This is known as equalization (EQ), a vital aspect of audio production.
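
Since each octave doubles the frequency and an equal-tempered octave spans 12 keys, every key's frequency can be computed from that low A. A quick Python sketch (assuming standard A440 equal temperament):

    def note_freq(semitones_above_A0):
        """Equal-tempered frequency, starting from A0 = 27.5 Hz."""
        return 27.5 * 2 ** (semitones_above_A0 / 12)

    print(note_freq(0))    # A0 = 27.5 Hz, the lowest A on a piano
    print(note_freq(48))   # A4 = 440.0 Hz, the common tuning reference
    print(note_freq(51))   # C5 ~= 523.3 Hz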

Each frequency range has distinct characteristics, and some common terms can help you to identify them. Frequency is often divided into three ranges:
  • Low or bass frequencies are generally between 20 and 200 Hz. These frequencies are omnidirectional, provide power, make things sound bigger, and can be destructive if too much is present in a mix.
  • Mid, or midrange, frequencies are generally between 200 Hz and 5 kHz. This is the range within which we hear the best. These frequencies are more directional than bass frequencies and can make a sound appear “in your face,” or add attack and edge. Less midrange can sound mellow, dark, or distant. Too much exposure can cause ear fatigue.
  • High or treble frequencies are generally between 5 and 20 kHz and are extremely directional. Boosting in this range makes sounds airy, bright, shiny, or thinner. This range contains the weakest energy of all the frequency ranges. High frequencies can add presence to a sound without the added ear fatigue. A lack of high frequencies will result in a darker, more distant, and possibly muddy mix or sound.
Midrange is the most heavily represented frequency range in music. It is often broken down into three additional areas:
  • Low-mids, from around 200 to 700 Hz: darker, hollow tones
  • Mid-mids, from 700 Hz to 2 kHz: more aggressive, “live” tones
  • High-mids or upper-mids, from 2 to 5 kHz: brighter, present tones

Phase

Phase designates a point in a sound wave's cycle and is also related to frequency. It is measured in degrees and is used to measure the time relationship between two or more sine waves. When two sound waves are in phase, the result is increased amplitude.

When they are 180 degrees out of phase, they can completely cancel each other resulting in little or no sound. This concept is used in many modern devices, such as noise-cancelling headphones or expensive car mufflers, to eliminate the outside sound or engine noise.

However, sound is not always completely in or out of phase. Sounds can be out of phase by any number of degrees, ranging from 1 to 359. Phase issues can make some frequencies louder and others quieter.
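
To see how the phase relationship affects the combined amplitude, here is a small Python sketch (using NumPy; the 440 Hz tone and 48 kHz sample rate are arbitrary choices) that sums two identical sine waves at different phase offsets:

    import numpy as np

    fs = 48000               # sample rate (Hz)
    t = np.arange(fs) / fs   # one second of time values
    a = np.sin(2 * np.pi * 440 * t)

    for deg in (0, 90, 180):
        b = np.sin(2 * np.pi * 440 * t + np.radians(deg))
        peak = np.max(np.abs(a + b))
        print(f"{deg:3d} degrees apart -> combined peak {peak:.2f}")
    #   0 degrees apart -> combined peak 2.00 (full reinforcement)
    #  90 degrees apart -> combined peak 1.41 (partial reinforcement)
    # 180 degrees apart -> combined peak 0.00 (cancellation)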

Often a room's acoustics create these areas of cuts and boosts in the frequency spectrum. These cancellations and amplitude increases influence the way a room is going to sound. Standing waves and comb filtering are often the result of these phase interferences.

Phase is also very important to keep in mind when stereo miking and when using multiple mics on an intended source. When listening in a typical stereo environment, a sound may be completely out of phase and go unnoticed unless the phase is checked.

Velocity

Velocity is the speed at which sound travels. Sound travels about 1130 ft per second at 68 degrees Fahrenheit (344 m/s at 20°C). The speed at which sound travels is dependent on temperature.

For example, sound travels faster at higher temperatures and slower at lower temperatures. Knowing the velocity of sound can come in handy when calculating a standing wave or working with live sound.
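
A common rule of thumb is that sound travels about 331.4 m/s in 0°C air and gains roughly 0.6 m/s per degree Celsius. A small Python sketch of that approximation:

    def speed_of_sound(celsius):
        """Approximate speed of sound in air, in m/s."""
        return 331.4 + 0.6 * celsius

    print(speed_of_sound(20))  # ~343.4 m/s, the ~344 m/s figure above
    print(speed_of_sound(0))   # 331.4 m/s: colder air, slower sound
    print(speed_of_sound(35))  # 352.4 m/s: warmer air, faster sound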

Wavelength

Wavelength is the length of the sound wave from one peak to the next. Consider the wavelength to be one complete compression and rarefaction of a sound wave. To determine the wavelength, divide the speed of sound by the frequency. This identifies the length between the two peaks.

The lower the frequency, the longer the wavelength. This demonstrates the power and energy that low end carries as a result of its longer wavelength. High-frequency waves are much shorter, resulting in a weaker form of energy that is highly directional.
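
A short Python sketch of the wavelength calculation, using the 1130 ft/s figure from above, shows how different low- and high-frequency wavelengths really are:

    SPEED_OF_SOUND_FT = 1130  # ft/s at 68 degrees Fahrenheit

    def wavelength_ft(freq_hz):
        """Wavelength = speed of sound / frequency."""
        return SPEED_OF_SOUND_FT / freq_hz

    print(f"{wavelength_ft(20):.1f} ft")     # 56.5 ft at 20 Hz: a huge wave
    print(f"{wavelength_ft(440):.2f} ft")    # ~2.57 ft for A440
    print(f"{wavelength_ft(20000):.3f} ft")  # ~0.057 ft (under an inch) at 20 kHz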

Harmonics

The richness and character of a musical note is often found within its harmonics. A sound's harmonic content is commonly referred to as its “timbre.” Every instrument has a fundamental frequency, referred to as the fundamental, and harmonics associated with it.

On an oscilloscope, the fundamental shows up as a pure sine wave, as seen in the Rubens tube video; however, sound is much more complex. Most sounds contain additional information beyond the fundamental.

In music, each instrument has its own makeup of a fundamental plus additional harmonics unique to that instrument. This is how we can distinguish a bass guitar from a tuba, a French horn from a violin, or any two instruments playing the same note at the same volume.

Instruments that sound smoother, like a flute, have less harmonic information, and the fundamental is more apparent in the sound. Instruments that sound edgier, like a trumpet, tend to have more harmonics in the sound, with decreased emphasis on the fundamental.

If you were to play a low E on the bass guitar, known as E1, the fundamental note would be about 41 Hz. You can figure out the harmonics by simply multiplying the fundamental by 2, 3, 4, and so on (see the sketch after this list).
  • The fundamental note E1 = 41 Hz.
  • The second harmonic would be 82 Hz (41 × 2).
  • The third harmonic would be 123 Hz (41 × 3).
  • The fourth harmonic would be 164 Hz (41 × 4).
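A minimal Python sketch of the same arithmetic (41 Hz is the approximate E1 fundamental used above):

    fundamental = 41  # approximate E1 on a bass guitar, in Hz

    for n in range(1, 6):
        label = "fundamental" if n == 1 else f"harmonic {n}"
        print(f"{label}: {fundamental * n} Hz")
    # fundamental: 41 Hz
    # harmonic 2: 82 Hz
    # harmonic 3: 123 Hz
    # harmonic 4: 164 Hz
    # harmonic 5: 205 Hz
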
It is a common practice among engineers to bring out a sound by boosting its harmonics instead of boosting the fundamental. For instance, if the goal is to hear more bass, boosting 900 Hz may bring out the neck, or fretboard, of the instrument and make the note pop out of the mix.

The result is more apparent bass, without the addition of destructive low end to the instrument. Additionally, harmonics are divided into evens and odds.

Even harmonics are smoother and can make the listener feel comfortable, whereas odd harmonics often make the listener feel edgy.

Many engineers and musicians use this knowledge when seeking out microphone preamps, amplifiers, and other musical equipment containing vacuum tubes. These tubes create even distortion harmonics that are pleasing to the ear and odd distortion harmonics that generate more edge and grit.
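
The even-versus-odd difference is easy to demonstrate in code. The Python sketch below is not a model of a real tube circuit; it simply applies a symmetric waveshaper (which adds only odd harmonics to a sine wave) and an asymmetric one (which adds even harmonics as well), then inspects the result with an FFT:

    import numpy as np

    fs = 48000
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 100 * t)  # a clean 100 Hz tone

    symmetric = np.tanh(3 * x)                          # odd harmonics only
    asymmetric = np.tanh(3 * (x + 0.3)) - np.tanh(0.9)  # offset adds even harmonics

    def harmonic_levels(y, count=5):
        spectrum = np.abs(np.fft.rfft(y)) / len(y)
        return [spectrum[100 * k] for k in range(1, count + 1)]  # 100 Hz multiples

    print("symmetric :", [f"{v:.4f}" for v in harmonic_levels(symmetric)])
    print("asymmetric:", [f"{v:.4f}" for v in harmonic_levels(asymmetric)])
    # The symmetric curve leaves energy only at 100, 300, and 500 Hz (odd);
    # the asymmetric curve adds 200 and 400 Hz (even) content as well.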

Taking a music fundamentals class or studying music theory can definitely benefit you as an audio engineer. These classes and concepts can help you develop a well-rounded background and better understanding of music. You can never know too much in this field!

The more you know, the easier time you will have communicating effectively with skilled musicians. If you are able to speak intelligently, they are more likely to be comfortable working with you and putting their trust in you. The more skills you possess the better your chance for success.

Envelope

Like harmonic content, the envelope helps the listener distinguish one instrument or voice from the other. The envelope contains four distinct characteristics: attack, decay, sustain, and release.
  • Attack is the first point of a note or sound's envelope. It is identified as the area that rises from silence to its peak volume.
  • Decay is the next area of the envelope, where the sound falls from its peak to a medium level.
  • Sustain identifies the portion of the envelope where the level holds relatively constant after the decay.
  • Release identifies the last point in the envelope, where the sound returns to silence.
A percussive instrument has a very quick attack, reaching its peak volume almost instantly when struck. With woodwinds, brass, and reed instruments, no matter how quickly a note is played, its attack will never be as fast as a drum strike.
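
A minimal Python sketch of a piecewise-linear ADSR envelope (the attack and decay times below are illustrative guesses, not measurements of real instruments):

    import numpy as np

    def adsr(attack, decay, sustain_level, sustain_time, release, fs=48000):
        """Build an ADSR envelope as an array of gain values from 0 to 1."""
        a = np.linspace(0, 1, int(attack * fs))               # rise to peak
        d = np.linspace(1, sustain_level, int(decay * fs))    # fall to sustain
        s = np.full(int(sustain_time * fs), sustain_level)    # hold steady
        r = np.linspace(sustain_level, 0, int(release * fs))  # back to silence
        return np.concatenate([a, d, s, r])

    drum = adsr(attack=0.001, decay=0.2, sustain_level=0.0, sustain_time=0.0, release=0.05)
    horn = adsr(attack=0.08, decay=0.1, sustain_level=0.7, sustain_time=0.5, release=0.3)
    # The drum envelope reaches its peak in about 1 ms; the horn-like envelope
    # takes 80 ms, which is why a horn can never attack as fast as a drum strike.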

Other Periodic Waveform Types

Waveform defines the size and shape of a sound wave. Up to this point, a simple sine wave has been used to illustrate sound, but sound comes in other waveforms as well.

Other common waveforms include triangle, square, and sawtooth waves. Each waveform has its own sound and characteristics and each may be used for different applications. A triangle wave looks like a triangle when viewed on an oscilloscope, a square wave appears as a square, and a sawtooth wave appears as a sawtooth.

A square wave is typically associated with digital audio. A square wave's sound is often described as hollow and contains the fundamental note plus the odd harmonics. These harmonics gradually decrease in amplitude as we go higher in the frequency range.

A triangle wave is similar to a square wave in that it also contains only the fundamental plus the odd harmonics. It is something of a cross between a sine wave and a square wave.

One main difference is that its higher-frequency harmonics are lower in amplitude than those of a square wave. The result is a less harsh sound, which is often used in synthesis.

A sawtooth wave contains both the even and the odd harmonics of the fundamental. Its sound is harsh and clear. Sawtooth waveforms are best known for their use in synthesizers and are often used for bowed string sounds.
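
These harmonic recipes can be demonstrated by building each waveform out of sine-wave partials, a technique known as additive synthesis. A Python sketch (using NumPy; the 220 Hz fundamental and partial counts are arbitrary, and amplitude-scaling constants are omitted):

    import numpy as np

    fs = 48000
    t = np.arange(fs) / fs
    f0 = 220  # fundamental, in Hz

    def partial(n):
        """One sine-wave harmonic at n times the fundamental."""
        return np.sin(2 * np.pi * f0 * n * t)

    # Square: odd harmonics only, falling off as 1/n.
    square = sum(partial(n) / n for n in range(1, 40, 2))

    # Triangle: odd harmonics only, falling off faster (1/n^2, alternating sign).
    triangle = sum((-1) ** k * partial(2 * k + 1) / (2 * k + 1) ** 2 for k in range(20))

    # Sawtooth: every harmonic, even and odd, falling off as 1/n.
    sawtooth = sum(partial(n) / n for n in range(1, 40))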

Noise

Noise is any unwanted sound that is usually non-repeating. Noise is a hum, a hiss, or a variety of extraneous sounds that accompany a sound wave when it is mixed or recorded. Noise comes from a variety of sources besides the instrument, such as an air conditioner, fluorescent lights, or outside traffic.

One way to express the quality of a sound is the signal-to-noise ratio, abbreviated S/N. This ratio compares the amount of desired signal with the amount of unwanted signal that accompanies it.
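
As a sketch of the arithmetic, S/N can be estimated from the RMS levels of the desired signal and the noise. The Python example below invents a 1% noise floor purely for illustration:

    import numpy as np

    def snr_db(signal, noise):
        """Signal-to-noise ratio in dB, from the RMS level of each part."""
        rms = lambda x: np.sqrt(np.mean(np.square(x)))
        return 20 * np.log10(rms(signal) / rms(noise))

    t = np.arange(48000) / 48000
    signal = np.sin(2 * np.pi * 440 * t)    # the wanted sound
    noise = 0.01 * np.random.randn(len(t))  # hiss at about 1% of full scale
    print(f"S/N ~= {snr_db(signal, noise):.0f} dB")  # roughly 37 dB here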

A high-quality sound will have significantly more signal (desired sound) than noise (undesired sound).

Distortion, unlike noise, is caused by setting or recording levels too hot, pushing vacuum tubes, or bad electronics. When desired, adding distortion can be an effective way to make a sound dirty, more aggressive, or in your face.

Headroom is the maximum amount a signal can be turned up or amplified without distortion. As an audio engineer you should be aware that audio devices have different amounts of headroom.

Make sure you allow for plenty of headroom when setting audio signal levels. If you don't, a loud spike of sound may ruin a take. Analog levels can exceed zero (0 VU) without immediately distorting, while digital levels cannot exceed zero (0 dBFS).
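
A tiny Python sketch of why digital headroom matters: any sample pushed past full scale is simply flattened at the ceiling, which is heard as distortion:

    import numpy as np

    t = np.arange(48000) / 48000
    safe = 0.5 * np.sin(2 * np.pi * 440 * t)  # peaks at 0.5, about 6 dB of headroom
    hot = 2.0 * np.sin(2 * np.pi * 440 * t)   # a spike pushes peaks past full scale

    clipped = np.clip(hot, -1.0, 1.0)  # the digital ceiling flattens the waveform
    print(np.max(np.abs(safe)), np.max(np.abs(clipped)))  # 0.5 vs. 1.0 (squared off)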