

Trevor Morris: Synth and Software Artist Interview

John Krogh



Hybrid scoring secrets and career advice from one of Hollywood’s leading composers, Trevor Morris.

Trevor Morris has firmly established himself as one of the industry’s first-call media composers. Having cut his teeth scoring TV commercials, Morris left his native Canada to pursue his film-scoring dreams and moved to Los Angeles. Once there, his keen production and composition skills quickly translated into stints working for two of Hollywood’s titans: James Newton Howard and Hans Zimmer. Eventually striking out on his own, Morris went on to receive recognition and acclaim for his scores to three massively popular TV series (The Tudors, The Borgias, and Vikings) and blockbuster movies such as Olympus Has Fallen and its sequel, London Has Fallen.

Known for his hybrid style and unique aesthetic of blending orchestra with electronic textures, Morris is a master of maximizing onscreen emotion. Synth and Software recently caught up with Morris shortly after the release of the final season of Vikings to discuss his approach. Through our discussion, he shared tips and techniques for producing hit soundtracks, as well as lessons he’s learned throughout his career that can help aspiring composers and producers get to the next level.

S&S: There are many stylistic trends in composing for media, whether the genre is action, sci-fi, etc., and these trends can become clichés. How do you approach creating music for a movie or TV show that serves the story, that’s fresh, avoids being cliché, and reflects your own musical style?

Trevor: I think most composers, we all have our little strategies and setups of things to kind of keep us creatively flowing. For example, I’ll go over to Vikings for a moment. If you listen to the score, I call it quietly electronic because what you see onscreen, it’s very rustic and rocky, and very real, leathery and all that. But we wanted the score to be from a mythical world. So there was a lot of taking organic subject matter, like let’s say, a bunch of tagelharpas—a Norse violin, which has an overtone series built into it—and we’ll take that and then we’ll kind of mangle it and we’ll put it into Omnisphere, or we’ll put it into Reaktor, and we’ll generate pads out of there and we’ll stretch it. So the origin of the idea was organic, but the outcome is very synthetic. 

What I find interesting about that is it gave it a point of view rather than picking a stock patch and modifying it—which is fine, I do a lot of that too—but taking some organic subject matter and manipulating it in a way that it came from a point of view that suited whatever project you’re working on. I do a lot of that now. I tend to record a lot of source material that I’d never use as it was recorded. I’ll maybe run it through the filter on my Moog Sub 37, which has a great filter input, just to get some tactile element to it, like turning knobs. I have a lot of synths and hardware in my room, so that approach inspires me, too. I’m not really a preset kind of guy, so I want everything to be a little bit custom to my point of view, and that’s definitely one way that I do it.

S&S: Given that you want to customize your sounds and reflect your point of view, what techniques do you use for making sounds from commercial collections your own?

Trevor: I have my own particular sonic thumbprint on the orchestra that I like and that makes it sound like me. I would say with the exception of the orchestra, which tends to be fairly “set it, forget it”, I tend to use a couple of things. I have a hardware insert that I can put on anything in Cubase, and we’ll go through a pedalboard. I have a couple of Strymon pedals, and I have this great Kemper box, which is basically a guitar amp-modeling box; you plug in, and it models all the great amps. I’ll take a pad or a bass and I’ll insert this hardware, and then I can just start turning knobs and wait for what a friend of mine calls “happy-stance,” waiting for moments of inspiration. Perhaps the software version of this would be the Soundtoys plug-ins. Their Decapitator is a fantastic distortion plug-in.

I also apply extreme filtering, both high and low. For example, the FabFilter Simplon is just a great, easy, go-to filter. Oftentimes, I’ll decide I want something like a “tweeter tickler,” which is what I call those tiny elements that I use to “dance” in the tweeters, and I’ll hear the rhythm in my head. I’ll go find drum loops that are super bottom-endy and thick, and I’ll literally shelf off the bottom end all the way to maybe 12kHz until there’s almost nothing left. But what’s left is unique, and it’s from my point of view. So you see the trend here about customizing my sounds [laughs].

One other technique: I’ve created plug-in channel presets for almost every solo live instrument I use. I have channel presets for violin, cello, voice, whatever. Each of these probably has seven, eight, or even nine plug-ins on there, and they’re all making subtle additions. The result is a sound that suits my ear, maybe moving around in space or maybe using a limiter to bring the sound closer to the speaker; that’s how I describe what limiting does.

So if I wanted to, say, focus more on a sound being in front of the cue, I’ll put an L2 on it, and it’ll move it forward spatially, to me. That’s something I do a lot of. I definitely spend a fair amount of time experimenting with plug-ins. Even with software instruments that I use all the time—for example, almost nothing comes out of Zebra that doesn’t end up with four plug-ins on it at some point [laughs].

Trevor Morris in studio

S&S: Let’s dig into this more. I know that you use commercial libraries, which can become very identifiable. When you’re working with something like, say, a Heavyocity library, what techniques do you use to make the sounds your own and not so recognizable?

Trevor: What I look for when I play those sorts of presets from those amazing libraries that I know are going to be very widely used is inspiration. I’m looking for motion, a sound that already has some motion in it, or I’m looking for something to behave a certain way. But once I’ve found it, I just can’t use it as it is. It’s just not my thing. So whether it’s using filtering or some other processing, that’s usually step one. And I’m generally reaching for distortion, depending on the project, just to give something grit or edge.

I find a lot of synths and libraries a little on the clean side, maybe because I grew up with analog gear that has a little bit of noise built into it. But if it’s uber-clean, I feel like I need to add a subtle touch. For example, I put 2% to 5% distortion on things, so subtle you would never necessarily hear it’s there, but it just gives it some teeth, some fur or character, I guess. Sometimes I’ll put Adaptiverb on a sound at 100% and make some sort of aquatic tone out of it. But it starts with me being inspired by the patch, and I kind of know what I’m fishing for. So my sonic aesthetic has developed to the point that when I’m visualizing a sound, I already know the plug-ins that will help me get there. So I’m looking at the sounds that I’m sending through [my effects] as source material.
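To put a number on that “2% to 5%” idea, here is a hypothetical parallel-saturation sketch in Python. The `amount` and `drive` parameters are invented for the example and aren’t any particular plug-in’s controls; the blend simply tucks a small amount of soft-clipped signal under the dry sound:

```python
import numpy as np

def subtle_saturation(x, amount=0.03, drive=4.0):
    """Blend a few percent of tanh waveshaping under the dry signal.
    amount ~ 0.02-0.05 mirrors the '2% to 5%' range discussed above."""
    wet = np.tanh(drive * x) / np.tanh(drive)  # normalized soft clip
    return (1.0 - amount) * x + amount * wet

sr = 44100
t = np.arange(sr) / sr
pad = 0.5 * np.sin(2 * np.pi * 220 * t)  # a clean pad tone
furry = subtle_saturation(pad, amount=0.03)
```

At 3% the waveform barely moves, which is the point: the change adds harmonics (“fur”) without being obvious as distortion.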

S&S: Creating sounds, especially for a TV series or film, can be time-intensive. What does your process of creating a palette involve? Is it a two-week period, a two-month period—depending on your time—and then you’re crafting sounds? Are you really organized about it? For example, do you think in terms of sound categories that you work to fill?

Trevor: That’s a good question. You know, the ethos of it is really dependent on time, like you said. If you watch any of [Tom Holkenborg] Junkie XL’s videos talking about working on a film score for three, six, nine months—my world moves a little quicker than that, so it really depends on how much lead time I have. For example, I have a project coming up in March that I’m really going to enjoy the lead time of doing exactly what we’re talking about. But sometimes, it just happens super fast. I will say this, I’m a decent synth programmer, but I’m not a deep-dive guy. I’m more a producer than I am a hardcore synth programmer. That’s why I don’t have a modular rig, because I would just spend a week exploring, and I typically need to move a little faster than that.

I do have a couple of select guys that I hire to create custom sounds for a project. I’ll ring them up and talk to them about what I’m interested in for this particular score, and I’ll have them make sounds for me, because that’s all they do. They’re specialists; it’s collaborative. Depending on the nature of the project, a lot of these guys will come in on a movie and stay on it from beginning to the end. So they may come in and be continually adding textures, because the composer is typically so busy just trying to get the job done.

I tend to hire a couple of specialists that I like, who understand my point of view, to make me a bank of arps or a bank of evolving pads or a bank of asymmetrical rhythmic patterns with direction such as, “Give me something that’s in 7/8 or whatever.” And then I’ll spend time customizing some patches. I’ll create some from scratch; I create a lot of bass patches from scratch. But luckily, knock on wood, I’ve always been so busy that I can’t take two months off to create sounds. It’s just never been the reality of my schedule.

Going through your sample libraries can almost be a full-time job because they come out so fast. But typically, I will go through and spend a couple of hours on one new virtual instrument and pick maybe only three things out of it and put those into a channel preset, so that by the time I’m ready to go, I have at least some sounds that are funneled through my point of view for that particular show or movie or game that I can call upon. So it’s a two-tiered approach: I’ll create some of my own stuff and curate from new libraries, and I’ll hire some specialists to do things for me and combine it all together.

S&S: Since you’re a big Omnisphere fan, I have to ask, do you use any sounds from third-party soundware developers like The Unfinished (Matt Bowdler)?

Trevor: I have almost everything he’s ever made, if not everything he’s ever made. I like it because it’s bespoke. It’s one person’s point of view, and he’s created sounds for some of my projects, I believe. I take a similar approach as with new libraries: I’ll go through his stuff, and I’ll look at it from my point of view.

To be honest, I’ve never admitted this before, but I tend to go through collections of sounds—loops, patches, whatever—and find what I think are the superstar patches, and then I ignore them because they’re the ones that everyone’s going to use [laughs].

Trevor Morris studio overview

S&S: How do you think about the different sound categories you work with? For example, do you think in terms of low arps, mid-range and high arps, high-frequency percussion, big booms, etc.? 

Trevor: It’s not too dissimilar from what you describe. For example, percussion tends to have two or three categories, each with subsets. Let’s take orchestral percussion, which is pretty much “set it and forget it” for me. I still have low, mid, high, and metal subgroups, with low being gran cassa, mid being timpani, high being snare, and metal being metals. Same with ethnic percussion, which has its own stem [stereo mix]. I’ll also break it out with low, mid, high, and they’ll have different kinds of compression and things like that on them. Then I have contemporary, which is modern hybrid percussion. Again, low, mid, high. And EQ-wise, if it’s a high percussion track, for example, I’m shelving off the bottom end at 80Hz or higher so there’s nothing down there, because all it’s doing is eating up headroom.

Part of my approach is really from an engineering perspective. I’m an engineering kind of guy; my brain is wired that way. So I have a lot of signal routing going on in my template. And so far what I’ve mentioned is just the virtual instruments, what’s under my fingers. I also have audio tracks set up in the same way, where I’ll have, say, low booms as audio files, and they’ll have nothing above 100Hz on them and a nice, deep reverb with maybe a subharmonic synthesizer to get those deep, low octaves. I use a bunch of those that I’ve done for myself, and they’re in my DAW on boot up. Same with metals and other stuff. So my sounds are highly organized into subgroups and treated as such, whether it be low, mid and high arps, or basses that are drones, basses that have movement, along those lines.

For example, I have a category called drones, which are sounds that tend to be wider. They can probably withstand a bit more reverb, some modulation. They tend to be heavier and take up more space in the cue. But I would never want to use a drone bass to, say, double the strings, because it would be washy and reverb-y. With drones I often have very subtle autopan at 30%, and I’ll have some modulation to give it that kind of wide treatment. So with that example, those are very different uses for basses, technically. Another example: There are almost no string parts I create where I don’t have a Minimoog-style patch underneath the basses. That’s something that I learned when I was first starting in the business when I was James Newton Howard’s assistant for a very short period of time.

And his mixer, Sean Murphy (who is one of our industry’s greats), was using one of my favorite hardware boxes of all time called the dbx 120, which is a subharmonic synthesizer box. There are lots of plug-ins like Lowender that emulate this and they do a fair job. But Sean was putting the dbx 120 on the basses from the live orchestra, which I thought was really interesting because they don’t, by nature, go down to those low octaves. And so I’m doing that in a different way with basically a super-compressed Minimoog bass patch that I’ve had for 10 years, and I do this on every score I do. I tuck that under the bass, and that goes to its own track because it’s part of the orchestra. This is another aspect of what I consider my point of view. For me, bottom end is a big part of the language of the film music that I enjoy listening to and writing, so I have a lot of subcategories for anything “bottom endy,” be it a bass, an arp, percussion, a boom or a drone.
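The octave-generating trick a subharmonic box performs can be sketched very crudely in code. This “flip-flop” octave divider is a generic textbook approach, not an emulation of the dbx 120 or of Morris’s Minimoog patch: it toggles a square wave on every rising zero crossing of the input, producing a tone one octave below the source that can be tucked underneath the basses:

```python
import numpy as np

def octave_down_square(x):
    """Flip-flop octave divider: toggle a square wave on each rising
    zero crossing, yielding a tone one octave below the input."""
    y = np.empty_like(x)
    state = 1.0
    prev = x[0]
    for n, s in enumerate(x):
        if prev <= 0.0 < s:   # rising zero crossing detected
            state = -state
        y[n] = state
        prev = s
    return y

sr = 44100
t = np.arange(sr) / sr
bass = np.sin(2 * np.pi * 110 * t)        # the orchestral bass fundamental
sub = 0.2 * octave_down_square(bass)      # tucked-under sub-octave layer
reinforced = bass + sub
```

A real unit filters and smooths this raw square, but the mechanism—deriving a lower octave the source instruments don’t naturally play—is the same idea.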

S&S: I want to clarify what you said earlier about “momentary musical elements” like booms or metals. Do you have those loaded in your DAW as audio so you can drag them into your timeline?

Trevor: No, I have all those booms built into a Kontakt instrument. I just find it quicker because I have eight or ten booms that I use. I’ll go a little geeky here: I have different booms and they all have different blooms to them, different attacks. For example, one’s like an impact, and one fades up—I took the attack off it so it literally emerges out of the downbeat rather than landing right on it—so it kind of blooms differently, and that one has a little bit more reverb on it. Other ones might be gothic; other ones are really impactful for action cues. They all behave differently. I find it easier just to look at the markers, drag the booms in, and quantize the audio, because it’s faster than playing it by hand.

S&S: Can you elaborate on how you created these sounds?

Trevor: They’ve been around for a while, man. I mean, some of them were definitely an instrument or sound that I found and processed. I might add subharmonics or I’d add a little Moog, a little tone to it or something, and resample it. I’ve also had a couple that were made for me that are pretty good. I’ll run across one once in a blue moon from some instrument that I like, and rather than note the patch for later, I’ll just record it in as audio and add it to my “boom bin.” 

Trevor Morris #3

S&S: Some software instruments—Omnisphere, for example—tend to have a lot of low end, and they can take up a lot of space in that part of the frequency spectrum. Do you find this to be the case, and if so, how do you deal with it?

Trevor: It’s a good question. If I ever did a master class, I would do one on how to listen to bottom end, because it’s one of those things that I think either people aren’t, excuse the pun, “in tune with” or don’t have speakers that allow them to hear it. Or they do, but they don’t pay attention to it. For example, my synth orchestra, say, my woodwinds, they all have a low shelf on them. They’re all shelved up at around 100Hz. You might think there’s nothing down there, but there is. It builds up over time as you add more instruments. So for me, if it’s not specifically a bass, it probably has a filter on it. 

Specifically to answer your question, the two or three plug-ins that I use all day long are, one, the FabFilter Simplon, and I’ll just put it on 48dB-per-octave [slope] and start sweeping from the bottom up until…you’d be surprised how high you can get into the frequency spectrum before the sound even changes. But there’s information down there that’s eating up headroom.
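One way to see the “information down there eating up headroom” is to measure it. This hypothetical Python snippet (the 40Hz threshold and test tones are made up for the example) uses an FFT to report what fraction of a signal’s energy sits below a rumble threshold you may never consciously hear:

```python
import numpy as np

def low_end_energy_fraction(x, sr, below_hz=40.0):
    """Fraction of total spectral energy below `below_hz` -- content
    you may not hear but that still eats up headroom."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    return float(spectrum[freqs < below_hz].sum() / spectrum.sum())

sr = 44100
t = np.arange(sr) / sr
# A pad with hidden 25 Hz rumble underneath the audible 440 Hz tone.
pad = np.sin(2 * np.pi * 440 * t) + 0.4 * np.sin(2 * np.pi * 25 * t)
frac = low_end_energy_fraction(pad, sr)
```

Here well over a tenth of the signal’s energy is inaudible rumble—exactly the kind of thing a spectrum display (or a sweep with a steep high-pass) reveals.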

Two, I use the FabFilter multiband compressor [Pro-MB], because it has a visual display, so you can find the woof tone pretty quick. I use that multiband compressor on almost every single thing. There’s always some tone that’s bugging me, and I can correct with the multiband compressor.

And three, I’ve been experimenting with this for the last couple of years, which is that I’m learning to not be afraid to “mono-ize” things, put them in mono. I write in surround with two subwoofers, and my rig sounds very big, very wide and three-dimensional. Once in a while, I’ll throw on just the simple stock Cubase spatial plug-in [on a particular track] and hit the mono button because I don’t need it to be five feet wide. I just need to hold down the middle.

For example, an 808 kick or a modern kick, they do the same job in mono as they do in stereo, and everything’s fighting for the same space. I used to be the guy where everything was always in stereo, and the mixes became messy and unfocused. I think one of the best composers, probably of all time in my opinion, who does electronic-based film scores and who understands spectral assignment, if you will, is Harry Gregson-Williams. If you listen to his scores, everything has a place, sonically, and it’s something that makes them efficient and easy to listen to. If you start stacking everything in stereo, eventually the parts are going to start masking each other and phasing and all sorts of nasty stuff.
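Folding just the low end to mono (sometimes called elliptical EQ) can be sketched in the frequency domain. This toy Python version is an illustration of the concept, not how any particular plug-in works: below a chosen crossover, both channels get the L+R average, while everything above stays untouched:

```python
import numpy as np

def mono_below(left, right, sr, cutoff_hz=150.0):
    """Fold the stereo image to mono below cutoff_hz: low-frequency
    FFT bins get the L+R average; everything above is left alone."""
    L, R = np.fft.rfft(left), np.fft.rfft(right)
    freqs = np.fft.rfftfreq(len(left), 1.0 / sr)
    low = freqs < cutoff_hz
    mid = 0.5 * (L + R)
    L[low], R[low] = mid[low], mid[low]
    return np.fft.irfft(L, len(left)), np.fft.irfft(R, len(right))

sr = 44100
t = np.arange(sr) / sr
kick = np.sin(2 * np.pi * 55 * t)              # 808-style fundamental
shimmer = 0.2 * np.sin(2 * np.pi * 6000 * t)   # high-end sparkle
left = kick + shimmer
right = -0.8 * kick + shimmer                  # wide, phasey low end
mono_l, mono_r = mono_below(left, right, sr)
```

After the fold, the kick holds down the middle instead of fighting for width, while the high-frequency content is unchanged.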

Back to your question, I’m not afraid to shelf off bottom end on a bass and not afraid to mono-ize it a little bit. And certainly the woof tone, if you don’t have the multiband compressor that lets you see the frequencies, I suggest you find something that has a spectrometer; it just makes life so much easier. I have Cubase 10.5 sitting on my desktop, and apparently their EQ and compressor upgrade has the spectrometer in it as well. It’s just such a useful tool.

S&S: Speaking of all these great plug-ins and the creative process, it can sometimes be tempting in the heat of making music to want to put more and more into a track, as you were saying. Does that ever happen to you where you maybe start to add too many parts and you start taking a kitchen-sink approach?

Trevor: It’s always an exercise in restraint for me, because my particular taste (which is not better or worse) comes from a “highly produced” perspective. I used to make records back in the day, and I also worked with Hans Zimmer for a time, and obviously his stuff is highly produced and polished. So I tend to lean into that camp. But once in a while I do have to step back and decide, “It’s enough.” You know, the music is serving the purpose for the story, and now you’re just adding more shit because you think it’s cool, not because it needs it [laughs]. So that is definitely a skill or sensibility to learn. The mindset can be that if a little is good, more must be better. It’s a matter of restraint, and I’m getting better at it. Putting a bass in mono is a great example of how I might be efficient about what’s speaking [audio-wise] in what environment.

I teach my assistants this, too, and this is where Gregson-Williams’s stuff comes in. I think of left-to-right as stereo, bottom-to-top as bass-to-treble, and front-to-back as the depth in the speakers. I think of it as a cube. So if I have solo instruments that I want to feature, that’s when I’ll put the L2 limiter on it, and it will bring it closer to my nose. It actually comes forward in the speakers and forces the listener to hear it first, because spatially it’s just closer. So if you think of it that way, you can start to organize things in a way so that some elements behave differently in a space relative to something else, and it makes the mix a little less cluttered.

S&S: To clarify, when you’re talking about L2, you mean putting it on individual tracks, correct? Some people might think because it’s a limiter you may mean the stereo buss.

Trevor: Yes, correct. For example, with a solo cello, I’ll put a plug-in stack on it with an EQ and the first thing I’ll do, because a cello doesn’t have anything at 50Hz, is I’ll shelve it off because there may be some noise in the microphone signal in the room that rumbled or whatever. Then there’ll be a nice soft compressor just to hold the dynamics in—I mean, we’re talking 3 or 4dB of compression—then I’ll have the L2 limiter. And that’s a spatial tool, that’s not maximizing like you think of trailer music; I’m using it to move the cello in space closer to me. Then after that, I’ll add subtle delays or reverbs, stuff like that. I play around with those three things a lot.
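As a back-of-the-envelope illustration of what “3 or 4dB of compression” means, here is a simplified static compressor in Python. The threshold and ratio are invented for the example, and a real compressor would add attack and release smoothing; this only shows how gain reduction on the peaks is computed:

```python
import numpy as np

def gentle_compress(x, threshold_db=-9.0, ratio=2.0):
    """Static compressor sketch: anything over the threshold is pulled
    back by `ratio`, aiming for just a few dB of gain reduction."""
    thr = 10 ** (threshold_db / 20)
    mag = np.abs(x)
    gain = np.ones_like(x)
    over = mag > thr
    # Map level above threshold through the ratio, then back to a gain.
    gain[over] = (thr * (mag[over] / thr) ** (1.0 / ratio)) / mag[over]
    return x * gain

sr = 44100
# A "cello" stand-in: low C fundamental peaking around -2 dBFS.
cello = 0.8 * np.sin(2 * np.pi * 65.4 * np.arange(sr) / sr)
squeezed = gentle_compress(cello)
```

With these made-up settings the peaks come down roughly 3.5dB—the gentle “hold the dynamics in” range described above, not trailer-music maximizing.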

I tend to not be too heavy on the busses with lots of limiting and compression. I tend to do it at the source, with some exceptions, but I do have some pump-it-up plug-ins on my percussion busses a little bit. Back in the day, if you remember [TC Electronic] Master X, everyone would put that on everything and turn it to 11. It was just so sonically cheap-sounding after a while [laughs]. I try to be more elegant with the orchestra and go with the right EQ and the right surround reverb, and basically that sparkles, and I’m good with that. And then the “tweaker” plug-ins are more for individual stuff.

S&S: You don’t monitor or write through any processing, software or hardware, on your main busses then?

Trevor: That’s a deeper question, a long conversation. I’ll condense as best I can. I have a final mix in 5.1 and a stereo fold-down, which has nothing on it. Before that are ten 5.1 stems—for example, strings, brass, woodwinds, orchestral percussion, contemporary percussion, ethnic percussion, synths high, synths low, arps, booms, sound design, things like this. Those ten stems are what goes to the dub stage, and that’s what they mix for broadcast—they never have any processing on them, ever. And the reason is because you want to make sure that your “6” mix—your 5.1 and your stereo mix—equals your stems. If you start processing things at the stem level, you can mess that up pretty quick. So the layer before that, which I call the synth master, is maybe 500 tracks wide. That’s where I start to stack up plug-ins and I’m committing to those. But I don’t do group-wide processing very much. If, say, all the woodwinds—flutes, oboes, and so on—get grouped to a woodwind track, that will have a little EQ on it or a little shelf, often bottom and a little sparkle [low- and high-frequency EQ]. But that’s about it.
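The “your mix must equal your stems” rule is easy to state numerically. This hypothetical Python check (the stem count and random audio are stand-ins) simply confirms that untouched stems still sum, sample for sample, to the printed mix—and shows how processing any one stem breaks that guarantee:

```python
import numpy as np

def stems_match_mix(stems, mix, tol=1e-6):
    """True when the stems sum back to the final mix within tolerance --
    the property that stem-level processing would destroy."""
    return bool(np.max(np.abs(sum(stems) - mix)) < tol)

n = 44100
rng = np.random.default_rng(0)
# Ten stand-in stems (strings, brass, woodwinds, percussion, ...).
stems = [0.1 * rng.standard_normal(n) for _ in range(10)]
mix = sum(stems)  # the printed 5.1/stereo mix, before any stem tweaks

ok = stems_match_mix(stems, mix)
```

Riding a limiter or EQ on one stem after the mix is printed would make the dub stage’s recombined stems no longer equal the reference mix.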

S&S: Let’s switch gears from tech talk. As a working, successful composer, how do you balance family life and work, especially when you have multiple projects going on at the same time?

Trevor: That’s the eternal question. I like to think I’ve got better at it over time. With a family your priorities change, they just do. I work from home, and I think you have to be okay with the idea that when you walk out of the studio (mine is in a separate building) and you walk into the house, no one knows that you’re thinking about a melody or you’re creating. You have to be able to multitask. That’s an incredibly hard transition for me that I still battle. But I will say I’m much better at what I call “being where your feet are.”

Instead of looking at it as work-life balance, I look at it as, when I get up in the morning, my priorities are my kids and getting them to school; that’s where my feet are. I’m in the kitchen making breakfast. I’m not writing music, I’m not dealing with clients. But when I’m in the studio, that’s where my feet are—then I’m there.

I’m really diligent about trying to make sure I maximize my time. I’m very disciplined. I don’t surf the web and mess around. I used to procrastinate for hours. Now, I get in there to work. And then I’ll come back to the house. Family dinner is very important to me, and if I have to go back to the studio, I will. I don’t think anyone has ever got this truly figured out, but for me, fortunately I have an understanding family, and I try to make sure that when I’m there, I’m there.

I tend to not answer the phone, which drives my agents crazy [laughs]. You can imagine, it’s 5:30 p.m., they’re driving home and they want to update me, but I’m making dinner at 5:30. I’m not answering the phone. And then I’ll call them back at 8:00 p.m., and they’ll say, “I tried to reach you for three hours!” I say, “Yeah, I’m not answering the phone at 5:30.” So I had to train them to not call me at 5:30. So now the phone rings a lot at 4:30. Everyone has their own version of this. I think work-life balance is a term that we need to get rid of. I will say I’ve been better about being where my feet are and being committed to wherever I am at that particular moment, and that’s the best I can do.

S&S: If you could go back and give your younger self any advice starting out in the industry, what would it be?

Trevor: For me personally, I was a really shy kid and I grew up in a quiet family, so communication wasn’t my strong suit. I’ve struggled with that my entire career, and I still do in a way. It’s funny, because now I can get up on stage and conduct an orchestra or talk in front of a panel of 500 people. But one-on-one, it’s always been a bit of a struggle for me to learn how to communicate with people. No one ever teaches this as a life lesson in anything, in a school of any kind. And forget about [teaching it in] our business, you know?

But if I was the chair of Berklee College of Music’s Film Scoring department, I’d have a psychology and communication skills course, because it’s so important, especially as you get into Hollywood, learning to communicate with people and relay ideas. I wasn’t great at it when I was young. I think I’m better at it now, but I would tell my younger self to just focus on learning how to get on with people as best you can. There are a lot of colorful personalities in this business; I get that. But I think you have to be able to communicate your ideas and get along with people. There are a lot of big composers I’ve spoken to who’ve expressed they thought they were hard to get along with early on, and eventually they realize that’s not really the optimal way to work. So for me, I would tell my younger self just to be more mindful of the people part of it, which is something that I struggle with. I’ve gotten better over time, but it’s something that I wish I could have done better earlier. 

S&S: Gratuitous desert-island gear question: If you could only choose one software instrument to use on a project, what would it be and why? What about that instrument would make it a must-have for you?

Trevor: For a software instrument, today I would have to say it’s Omnisphere. I find Omnisphere incredibly deep and super inspiring, and again, you can drop audio into it and process audio through it now. I find that between being able to layer things up and stack effects on there, it’s relatively easy to make things sound custom, which is really important for me. I want everything to sound a little bit unique in my point of view. I hate hearing when someone uses Preset 1 [from a popular synth] on their score; it makes me crazy [laughs]. Short of Kontakt, which is a sampler, but in terms of an instrument that generates its own sound, for me, it’s Omnisphere. If it’s not on every cue I write, it’s in most cues that I write.

S&S: Same question, but for hardware—do you have a desert-island synth? 

Trevor: It changes over time. There was a period when it was a Virus TI, which was something I couldn’t live without and I still have. Right now, I’m really into the MatrixBrute. It’s sort of my go-to at the moment. I haven’t bought a Moog One yet, but I’ve heard that’s a pretty deep dive as well. That’s showing up in about four days, though, so I imagine that will probably be the answer in four days. But right now the MatrixBrute is the apple of my eye [laughs].
