Composer’s Forum: Harmonic Theory Part 2 – Enter the Volt

Exploring harmonic relationships beyond chords and intervals.

Chords. What are chords? Oh, that’s right, they are collections of notes sounded at the same time. They have a certain effect, make you feel a certain way, lead you to expect some change, some new sound, some other chord. Right? Well, pretty much. You probably read something about that in this column last month. Well, what if I told you that it turns out every note, each single note that you play and hear, is also a chord?

What? Okay, so not always and, hmm…it’s complicated. Think harmonics instead of harmony.

Way back in the day, most music was made with the human voice. In sacred music, the style was pure tone; singers strove for fundamental-centric tones that suggested a kind of purity of spirit. The note combinations those polyphonic voices sounded together, chords, were the subject of last month’s article, “Harmonic Theory, Part 1.”

You can still hear this pure singing style in churches and cathedrals today. The tones are similar to that most basic electronic waveform, the sine wave, in that they lack the harmonics that add the brightness and timbral character of articulated singing and instrumental sounds. Those more diverse, more aggressive tones are used in secular music.

In addition to the fundamental pitch, those secular tones contain enough harmonic energy to alter the sound quality (timbre), but not enough for the individual harmonics to be perceived as separate pitches. These overtones sound above the fundamental and are members of the harmonic series. (Pythagoras famously divided a string into two, three, four, five, and more equal parts to demonstrate this phenomenon.) If each overtone carried sufficient energy, the “chord within the note” would be heard, and it would sound like a dominant ninth chord: the first nine harmonics spell out the root, fifth, major third, minor seventh, and ninth, plus octave doublings.
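If you want to check that claim with arithmetic, here is a minimal Python sketch; the 110 Hz fundamental (A2) and the note-naming helper are just illustrative choices, not anything tied to a particular instrument. Each harmonic is simply a whole-number multiple of the fundamental frequency:

```python
import math

NOTE_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def nearest_note(freq, a4=440.0):
    """Name the equal-tempered note closest to freq, with its offset in cents."""
    semitones = 12 * math.log2(freq / a4)   # distance from A4 in semitones
    nearest = round(semitones)
    cents = 100 * (semitones - nearest)     # how far the harmonic misses the note
    name = NOTE_NAMES[nearest % 12]
    octave = 4 + (nearest + 9) // 12        # octave number (A4 = 440 Hz)
    return f"{name}{octave}", cents

fundamental = 110.0  # A2, an arbitrary example
for n in range(1, 10):
    freq = n * fundamental                  # harmonic n is n times the fundamental
    note, cents = nearest_note(freq)
    print(f"harmonic {n}: {freq:7.1f} Hz ~ {note} ({cents:+.0f} cents)")
```

Run it and the printout spells A, C#, E, G, and B: an A dominant ninth chord hiding inside a single note (with the seventh harmonic famously landing about 31 cents flat of the tempered note).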

Exposing the Overtones

On a synth, an easy way to expose these harmonics is to play a bass note and sweep the cutoff frequency from low to high with a lot of resonance. Depending on the oscillator waveform, you will hear all or some of the harmonics quite clearly, because the filter’s resonance adds energy at each hidden harmonic as it passes (if not, try adding drive/distortion). With acoustic instruments, increasing the power of the attack can also reveal resident harmonics. A flute blown softly in its lower range produces a nearly pure tone with few harmonics; blown harder, it brings out the harmonics, even making a chordal sound when completely overblown.
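No synth handy? Here is a rough numpy sketch of the same trick: a sawtooth drone pushed through a simple Chamberlin state-variable low-pass filter whose cutoff sweeps upward with heavy resonance. It’s a sketch under assumed settings (sample rate, sweep range, and damping are arbitrary picks), not production DSP:

```python
import wave
import numpy as np

SR = 48000
DUR = 8.0
F0 = 110.0                                    # sawtooth fundamental (A2)

t = np.arange(int(SR * DUR)) / SR
saw = 2.0 * ((t * F0) % 1.0) - 1.0            # naive sawtooth: rich in harmonics

# Chamberlin state-variable filter, cutoff swept 50 Hz -> 2 kHz, high resonance.
cutoff = np.geomspace(50.0, 2000.0, saw.size)
q = 0.05                                      # damping; lower = more resonance
low = band = 0.0
out = np.empty_like(saw)
for i, x in enumerate(saw):
    f = 2.0 * np.sin(np.pi * cutoff[i] / SR)  # filter coefficient from cutoff
    low += f * band
    high = x - low - q * band
    band += f * high
    out[i] = low                              # take the low-pass output

out /= np.max(np.abs(out))                    # normalize to avoid clipping
pcm = (out * 32767).astype(np.int16)
with wave.open("sweep.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(SR)
    w.writeframes(pcm.tobytes())
```

As the cutoff climbs past 110, 220, 330 Hz and so on, each harmonic of the sawtooth rings out in turn, just as it does under a resonant sweep on hardware.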

In last month’s article, we discussed the contrapuntal conventions of polyphonic music that led to the rise of chord progressions. This article will cover the subsequent rise of timbre as an important expressive and structural element of electronic musical composition. We’ll give more consideration to harmonics than harmony, more to voltage modulation than chordal modulation. 

We have many ways to affect a synthesizer’s tone and, in a way, replace the passing tones and polyphonic conventions of former times so that we can experience an ever-evolving timbral soundscape. Modulation and continuous parameter control have, in a sense, superseded voice leading and chord progression as powerful linear forces that can move your music forward. Enter the volt!

The Technology of Music

Music tech started a really long time ago. Tired of the sound of the sticks they were banging, humans looked for better-sounding wood, perhaps also roughing up the ends for a better grip. Aboriginal Australians, scouring the land for straight fallen branches eaten hollow by termites, eventually learned that red gum trees furnished the best-sounding didgeridoos, and later applied beeswax to the blowing end as a first attempt at fashioning a mouthpiece. Log drums were first found and then carved, skins were stretched over shells and gourds, and so on. All are examples of music technology fueled by creative curiosity: find, create, improve, combine, play…

…repeat. Endless refinement and invention followed, producing a host of acoustic musical instruments: strings bowed and plucked, winds and brass blown through holes or reeds, percussion of every possible kind, and, of course, the grand pianoforte. All are vastly different in construction and sound, with wildly diverse tones and colors, but also remarkably similar in basic design. They all begin with an initial impulse: something is struck, bowed, plucked, or blown (and in the case of the trumpet, given the raspberry), imparting energy to a resonating body of some kind, a wooden box, a gourd, a metal tube.

The Song Remains the Same

The growth of electronic music in the twentieth century seemed to be a complete revolution in the technology of musical instruments, especially as it represented a domain shift from the acoustic world to the analog world, with an accompanying change in the physical principles governing those domains. And yet, basic instrumental design has remained the same.

Think of the most basic synthesizer patch: an oscillator routed through a filter. Consider the essential design principle of all acoustical instruments: an impulse through a resonator. The basics are the same. And just as a cello wouldn’t sound like a cello without its wooden body, a Moog synth wouldn’t sound like a Moog without its ladder filter. 

In 1968, as if to illustrate this technological affinity to us all, composer Wendy Carlos released her debut recording, Switched-On Bach (under her birth name, Walter Carlos), creating analog synthesizer sounds that imitated the period instruments Bach wrote for. While the acoustic instruments have infinitely more expression in general, the synth parts succeed in the context of the steadily percolating rhythms and terraced dynamics of the Baroque style. The recording was a huge success, winning multiple Grammy Awards in 1970 and eventually becoming just the second classical album ever to reach Platinum status.

Putting It All Together

Combining instruments is a similar challenge for both acoustic and electronic composers. The orchestra quadrupled in size between the eighteenth and twentieth centuries, from around thirty musicians to well over a hundred. Not only did it get louder as venues grew; new instruments were added, along with multiple players per part. The harmonic language also expanded, adding more and more chromaticism and modulating constantly to different tonal centers. Moreover, you’ve heard how each note played by a monophonic acoustic instrument is already harmonically complex.

As a result, it became necessary for composers to understand the harmonic consequences of doubled parts, inverted chord structures, and sections where the entire group played (tutti). Composers learned to voice and arrange all the parts so that complex compositions could be heard clearly by the listener. 

A well-orchestrated piece has so much more color and power. Check out the larger works of composers like Berlioz, Respighi, Rimsky-Korsakov, and Holst, and listen for the coordination of color palette and clarity of polyphonic parts. These composers were “mixing” their music to optimize its effect.

This ensemble of diverse instruments is much like the collection of synths and parts that make up an electronic music piece. Similarly, the electronic music composer must pay attention to the overall sound once many notes and parts are sounding together. Synthesizers are like acoustic instruments on steroids, so composing with them and mixing their inherent harmonic energies together is that much more difficult.

Electronic music pioneer Edgard Varèse predicted, “The role of color or timbre would be completely changed from being incidental, anecdotal, sensual, or picturesque; it would become an agent of delineation like the different colors on a map separating different areas, and an integral part of form.”

Let’s Get This Sorted

Varèse was correct: timbre has taken on a larger role in the structure of music. The emphasis on tonal color has driven the ongoing development of synth technology, offering astonishing possibilities for sound design. Instruments continue to appear with truly impressive factory sound sets.

Almost any one of those incredible patches that inspire you to write music in the first place can also gobble up all the available sonic space, and then, of course, you’ll probably want to put many of them together. Below are some ideas for creating the kinds of parts that can coexist and sound good together, along with ideas for sorting out a wall of sound that has already become…bulky.

Chord content – Use simple intervals rather than full chords on polysynth parts. Perhaps a track has gone unedited since the original sketch, and many of its notes have become redundant or simply unnecessary. Reserve full chords for moments that need thicker texture.

Doubling parts – This can sound great, even if just one of the chord voices is doubled, and especially if the envelopes are dissimilar (like pluck + pad). There are advantages to doing it quickly in your DAW (duplicate the MIDI track and assign a new sound) rather than programming a layered patch. For example, once you have two tracks, it’s easy to decide whether that doubling needs to happen a hundred percent of the time.

Frequency strata – Two great knobs on an SSL console’s channel strip are the high-pass and low-pass filters. Using them to set a min/max frequency range for any part is a quick and easy way to audition a frequency map for the tracks, especially if you are using some gloriously fat factory patches. Of course, lots of mixing consoles have this functionality, and it can also be done quickly with a bandpass filter. The idea is to rough it out with simple controls that make it easy to experiment and make a plan, and then head for your favorite equalizer, where you will have more control to process the tracks.

Patch effects – Unless you are modulating them as playable parameters (key velocity, pressure), add the effects at the mix stage. For example, if you’re routing keyboard velocity to a bit crusher to get a nice crunch on the hard attacks, keep it in the patch. Likewise, if the patch has delay that functions rhythmically (syncs with the BPM), leave it on. But if the patch effects are simply ambient delay or reverb, cut them and control the ambience in your DAW while mixing.

Let nothing sit still – Sweep a synth patch’s volume, filter, pitch, or drive just a bit and then cut the overall level. When sound is modulated, it attracts the ear more and needs less volume. Opera singers actually developed their tremolo and vibrato in order to be heard in the back row over the orchestra.

Spatial environment – Frequency space and the stereo image should be managed together. If you simply keep each synthesizer track’s static stereo image, you limit your control over the greater spatial environment. Try some mono placements, try modulating pan (CC#10) within the image (a sketch follows this list), and then try widening some of the stereo pairs and narrowing others.
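As promised, here is a small pan-modulation sketch using the mido MIDI library; the port name, rate, and depth are placeholders you would adapt to your own setup. It sweeps CC#10 in a slow sine so a part drifts across the stereo image instead of sitting still:

```python
import math
import time
import mido

PORT_NAME = "My Synth Port"  # placeholder; list ports with mido.get_output_names()
RATE_HZ = 0.25               # one full left-right-left cycle every 4 seconds
DEPTH = 32                   # pan excursion around center (0-63)

with mido.open_output(PORT_NAME) as port:
    start = time.time()
    while time.time() - start < 16.0:               # modulate for 16 seconds
        phase = 2 * math.pi * RATE_HZ * (time.time() - start)
        value = int(64 + DEPTH * math.sin(phase))   # 64 = center on CC#10
        value = max(0, min(127, value))
        port.send(mido.Message("control_change", control=10, value=value))
        time.sleep(0.02)                            # ~50 CC messages per second
```

Recorded into your DAW as automation, the same gesture can then be edited, scaled, or offset per section.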

We’ve Got It All

The sheer sonic variety and power of electronic instruments is compelling, and control over timbre is wonderfully expressive. But only so much sonic space is available, and only so much attention is available in human listeners. Less can be more when it comes to your music’s eventual impact. If you start with too much density, then cull and cut and mute the overages. If nothing else, clarity and separation make it easier for a sound system to handle the multiple tasks your mix asks of it.

Harmonic theory refers to many things, and the harmonic content of each moment can be complex. Composers are in charge of how that content progresses through the music and makes us feel the way we do. For example, a subtle filter sweep can escort a melodic idea through to its completion. Make it subliminal; that works.

Truth be told, the best music incorporates all that’s come before: multipart polyphony plus musical chord progressions plus dynamic timbral fluctuations and sonic diversity. We have indeed got it all; welcome to the party! 
