Contact Info
Guy Whitmore
Creative Director
Music Design Network, LLC
www.musicdesignnetwork.com
guy@musicdesignnetwork.com

Interview with Guy Whitmore - June 2002

by Alexander Brandon

Guy Whitmore has been composing game scores as a freelancer, company man, and entrepreneur since 1994. He recently co-founded a music production company called Music Design Network, LLC. Recent titles include Die Hard: Nakatomi Plaza, Russian Squares, and No One Lives Forever.

After studying music at Northwestern and Southern Methodist University, he began writing music for regional theater productions in Dallas, Los Angeles, and New York. His independent film music has found its way to Cannes, Digi-Dance (part of the Sundance Film Festival), and the Seattle International Film Festival.

Corporate clients have included Amazon.com, Microsoft, Corbis, The Bon Marché, Fisher Broadcasting, Sellen, Real Networks, and the Seattle Aquarium.

Guy is a founding member and board member of the Seattle Composers Alliance, an organization bringing awareness and community to professional composers in the Seattle area. www.seattlecomposers.org

1) Let's start with your first title with an adaptive soundtrack (which was it? Shivers? Claw? Blood? Shogo?) What techniques did you use and how?

Well, that depends on how you define adaptive, eh? Which brings up the idea of a 'spectrum of adaptability', a term I use to describe the depth of adaptability in game scores. Shivers (Sierra, 1995) used crossfading and location-based music, but would also cross to suspense music as the player approached a monster. Then a musical 'stinger' would play as the monster attacked. This is a good case of the music informing the player as to what may lie ahead, i.e. danger, and it worked very well in the context of this design. Shivers is also where I first dabbled in random variation. The code was as simple as: play a random wave 1-6, wait 2-6 seconds, play a random wave 1-6 (without repeating), and so on. Each wave was a short musical gesture of some sort, about 2 to 3 seconds long.
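
To illustrate, the random-variation logic described here might be sketched roughly as follows in C++. This is only an illustrative sketch, not the original Shivers code: playWave is a hypothetical stand-in for the engine's actual playback call, and the gesture count and wait range simply follow the description above.

    // Illustrative sketch only: pick one of six short gestures at random,
    // never repeating the previous one, with a random 2-6 second pause between.
    #include <chrono>
    #include <iostream>
    #include <random>
    #include <thread>

    void playWave(int index) {
        // Hypothetical stand-in for the engine's wave playback call.
        std::cout << "playing gesture " << index << "\n";
    }

    int main() {
        std::mt19937 rng(std::random_device{}());
        std::uniform_int_distribution<int> pickWave(1, 6);  // six short musical gestures
        std::uniform_int_distribution<int> pickWait(2, 6);  // pause of 2-6 seconds

        int last = 0;
        for (int i = 0; i < 10; ++i) {  // a real game would loop indefinitely
            int next = pickWave(rng);
            while (next == last) next = pickWave(rng);  // avoid immediate repeats
            playWave(next);
            last = next;
            std::this_thread::sleep_for(std::chrono::seconds(pickWait(rng)));
        }
        return 0;
    }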

Shogo (Monolith, 1997) was my first game score to incorporate seamless musical transitions between given intensities, so it was quite a leap. The idea was to create three differing intensities of music (plus silence) and write musical transitions for all possible cases (low to medium, low to high, low to silence, etc.). The idea of a transition matrix arose out of the need to map these various transitions.
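
As a hedged illustration of that transition-matrix idea, the mapping might be laid out like this in C++. The state and segment names are invented for the example; the point is simply that every ordered pair of intensity states has an authored transition.

    // Illustrative sketch: each ordered pair of intensity states maps to an
    // authored transition segment (names here are made up).
    #include <cstdio>

    enum Intensity { Silence = 0, Low, Med, High, IntensityCount };

    const char* transition[IntensityCount][IntensityCount] = {
        /* from Silence */ { nullptr,       "sil_to_low",  "sil_to_med",  "sil_to_high" },
        /* from Low     */ { "low_to_sil",  nullptr,       "low_to_med",  "low_to_high" },
        /* from Med     */ { "med_to_sil",  "med_to_low",  nullptr,       "med_to_high" },
        /* from High    */ { "high_to_sil", "high_to_low", "high_to_med", nullptr       },
    };

    int main() {
        Intensity current = Low, target = High;
        if (const char* seg = transition[current][target])
            std::printf("queue transition segment: %s, then the High-intensity loop\n", seg);
        return 0;
    }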

2) What is your opinion on the use of adaptive music in 3D first-person adventure games? For instance, SHOULD there be music at all?

As an audio community, we need to get past the idea of whether or not adaptive music is appropriate or inappropriate for a given genre, or for games in general. If music is called for in an 'interactive' game where specific timings are unknown, then adaptive music is appropriate. End of story. Where on the spectrum of adaptability you choose to go depends on the specific game design, and creative choices made between designer and composer.

It just kills me to see game composers making broad statements about the appropriateness of adaptive audio when we haven't even explored 1/100th of the possibilities! Try it first; then, if it really doesn't work, try a different approach to adaptability. But don't give up on adaptive music, because any game that is non-linear and interactive demands music that is flexible and malleable, so that it is appropriate to the situation on screen at any given time. Long linear music loops simply can't do that.

My favorite analogy: linear music is analogous to 2D pre-rendered art, just as adaptive music is analogous to 3D game-rendered art. What did games gain from game-rendered art assets? The ability to view objects from any side or distance, and the flexibility to create a truly interactive game environment, which put gamers in a more immersive, controllable world. Was there much of a debate over the appropriateness of game-rendered art? No. If there had been, we might still be viewing 'Myst'-style games. (Although I'm sure a hip sub-genre game could still be built with that level of tech.)

The analogy is very literal. Currently most game music is 'pre-rendered': it is mixed in fairly large sections prior to being put in a game. Music that is more adaptive is 'game-rendered', i.e. its components are assembled by the game as it is being played. Again, this is where the 'spectrum of adaptability' comes into play. There isn't a black-and-white line between pre-rendered and game-rendered music; it is a spectrum, and the game composer chooses the most appropriate place on that spectrum for a given score. For example, even in my most highly adaptive scores, I often use small and large pre-rendered chunks of music. Why? Production values. All the adaptability in the world means nothing if the music doesn't sound good. So I look for a balance between high production values and the flexibility of adaptive techniques.

Oh yeah, about first-person adventure games... I've scored a lot of them, and the best of them have dramatic flow, meaning they have an overall arc to the game as a whole, and individual levels ebb and flow as well. Scoring this general ebb and flow of intensity using adaptive techniques has been a very effective way to approach the genre. Even with this broad-stroke approach, the music becomes a more integral part of the game experience and not simply a backdrop.

In No One Lives Forever (NOLF, Monolith/Fox), I learned a valuable lesson in implementation. The best adaptive arrangements can fall flat if they're not implemented well. Prior to NOLF, we mainly used location-based triggers to change the music (set up in a level editor), which can be effective but are also time-consuming to implement one by one. Instead, we tied the various music 'intensities' directly to game-AI states. These AI-driven triggers worked on a global level, while specific location triggers could override them when necessary.
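
A minimal sketch of that kind of hookup, assuming invented state names rather than LithTech's actual API, might look like this: the global AI state determines the intensity, and a designer-placed location trigger, when present, takes precedence.

    // Illustrative sketch: global AI state drives music intensity; a level
    // designer's location trigger, when present, overrides it.
    #include <iostream>
    #include <optional>

    enum class AIState { Idle, Suspicious, Combat };
    enum class Intensity { Ambient, Tension, Action };

    Intensity intensityForAI(AIState s) {
        switch (s) {
            case AIState::Suspicious: return Intensity::Tension;
            case AIState::Combat:     return Intensity::Action;
            default:                  return Intensity::Ambient;
        }
    }

    Intensity currentIntensity(AIState ai, std::optional<Intensity> locationOverride) {
        return locationOverride.value_or(intensityForAI(ai));  // override wins if set
    }

    int main() {
        std::cout << static_cast<int>(currentIntensity(AIState::Combat, std::nullopt)) << "\n";
        std::cout << static_cast<int>(currentIntensity(AIState::Combat, Intensity::Ambient)) << "\n";
        return 0;
    }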

3) Same question as 2, except for puzzle games... using whatever example you'd like.

Puzzle games are a perfect opportunity for an adaptive score. There are usually lots of pacing changes and visual cues for the music to score. If a designer chooses, for aesthetic reasons, to have a simple background loop or no music, fine. But as a composer, I will present other, more adaptive solutions that the designer may not be aware of.

The score for Russian Squares for the Windows XP Plus Pack is adaptive from top to bottom. The score acts in tandem with most aspects of the puzzle play. Using DLS2 instruments allowed for great variation, even on the synth filter sweeps. Variation is one answer to 'endless loop' syndrome. I've found that even a few simple instrument-level variations make a section of music feel more organic and ongoing, rather than loopy. In the game, every time the player progresses or regresses (by clearing a puzzle row), the music subtly changes, i.e. a layer is added or subtracted, or the chord or rhythm changes. Most short visual cues (like time running low on the clock) are mirrored with an audio cue that syncs up with the underlying score. Did it all work well, and was it appropriate? Yes and yes; the music definitely makes the game much more fun to play. When I started with ideas for the score, I had no idea whether it would work technically or aesthetically (and some things didn't!), but we kept at it until we found what felt good in the context of the game.
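
A rough sketch of the layer add/subtract behavior described above, with a hypothetical setLayerActive call standing in for the real music engine's layer mute/unmute, might look like this:

    // Illustrative sketch: clearing a row adds a layer of the arrangement,
    // losing ground removes one, always staying within the authored layers.
    #include <algorithm>
    #include <iostream>

    class LayeredScore {
    public:
        explicit LayeredScore(int layerCount) : active_(1), total_(layerCount) {}

        void onRowCleared() { setActive(active_ + 1); }  // progress: add a layer
        void onRowLost()    { setActive(active_ - 1); }  // regress: drop a layer

    private:
        void setActive(int n) {
            active_ = std::clamp(n, 1, total_);
            for (int i = 0; i < total_; ++i)
                setLayerActive(i, i < active_);
        }
        void setLayerActive(int layer, bool on) {
            // Hypothetical stand-in for the music engine's layer mute/unmute call.
            std::cout << "layer " << layer << (on ? " on" : " off") << "\n";
        }
        int active_, total_;
    };

    int main() {
        LayeredScore score(4);
        score.onRowCleared();  // two layers playing
        score.onRowCleared();  // three layers
        score.onRowLost();     // back to two
        return 0;
    }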

4) What are some of your favorite examples of adaptive audio in games?

'Frequency' on the PS2 has done a great job of turning on-the-fly remixing into a game. Too much fun; I couldn't stop.

'Aliens vs. Predator 2' (PC): Nathan Grigg at Monolith has added some ideas to the concepts developed in NOLF/LithTech to get a great-sounding adaptive score. It has seamless transitions between intensities, but uses streaming waves instead of DLS instruments.

Alistair Hirst has helped create some very cool adaptive music technology and music tracks for EA's 'Need For Speed' racing games.

5) What tools have you used over the years to achieve adaptive audio? How effective have they been?

People know me as the DirectMusic Guy, and that's because it has been one of my main tools for many games. Other than DirectMusic, there are examples where we've coded the audio from the ground up (I don't personally write code, though!), ending up with a proprietary method and tools. The disadvantage of proprietary code (especially from the vantage point of a freelancer) is that you have to reinvent the wheel for every game.

DirectMusic has been the only commercially available adaptive audio tool for gaming, and certainly the only commercially available tool with its depth and capabilities. That said, it is no secret that DirectMusic Producer is a bear to learn and a bear to use once you've learned it. But I use it because, at this point in time, it's what I need to get from point A to point B. And I really want to get from point A to point B. We won't get better tools for adaptive scoring until composer/developers start producing more adaptive scores. That's the only way this chicken-and-egg cycle will be broken.

It's largely up to us, the composers, to push for game scores that are more adaptive, and to use whatever tools we have, or can make, to get the job done. Better and more efficient ways of doing the work will emerge, and better tools will follow.

6) How interested have the development teams you've worked with been in adaptive audio? Are they interested at first but not sure what it is? Are they on board with all the terminology? Somewhere in between?

I've experienced a variety, from development teams that love the adaptive audio concept and give me the resources necessary to create a highly adaptive score, to developers that just want a simple wave file. By and large, I find that once developers hear what adaptive techniques can add to their game, they are very receptive and even proactive.

The key to working with any developer is communication. I'm not a programmer, but I'm very good at explaining how I want my score to function in the game. In addition to explaining how the score will function, I try to demonstrate with audio (as best I can) how it will function, even if that means hacking together a demo in my sequencer or in Producer.

As for terminology, most developers don't know formal or informal adaptive audio terms (maybe because the terminology is still being formed?), but they always learn the terminology I use very fast, since on an abstract level the adaptive audio features I ask for are not that different from other aspects of the game (in fact they usually tap into the same global trigger calls and AI). Often, I overestimate what it will take to code up a given feature.

7) How receptive have you seen the public be to adaptive audio? Is it something that will never get its own spotlight in the public eye, but instead be a point of professionalism within the industry?

I've gotten endless positive feedback from gamers about the adaptive aspects of my scores. Especially with NOLF, the press almost unanimously mentioned the music's adaptiveness in a positive light. Eventually they may not have to mention the 'adaptive nature' of the music, because it will be a seamless underpinning of most games and will be more the norm than the exception, just as art assets moved from being largely pre-rendered to being mostly game-rendered.

8) Any chance adaptive audio functionality can find its way into hardware? Does it need more formation in software?

For PC gaming: only if it could be virtually universal. In a perfect world, the only thing such an audio card should do is accelerate your music engine, effects, and mixing, while the specific algorithms and data are determined by the software. That way it's a flexible 'author once' environment... sort of like a Pro Tools farm card, but for game audio. Some good standards would help this sort of pie-in-the-sky vision. By the way, I'm truly rooting for the XMS format; tons of great potential there. On the platform front, the Xbox has fantastic audio hardware that accelerates the DLS synth and effects, and I plan on using it to a good extent.

9) Give us a rundown of your latest adaptive audio techniques and how they worked. (I'm assuming this would be for Die Hard: Nakatomi Plaza, but correct that if it's wrong.)

I try to approach each game I score without any particular adaptive scoring technique in mind. Usually after seeing a prototype or reading the design doc, I start to get ideas of how the game could sound, and from there I look for ways to get from A to B, using any method available. This way, the creative leads to the technical solution.

Most games lead to some new idea or technique that I may apply to future games. These adaptive scoring techniques often build on one another, or are improvements on an earlier idea, e.g. the desire for smoother transitions led from cross-fade transitions to seamless transitions. I have to say, dealing with transitions may be the single most challenging aspect of adaptive scoring. How do I get from this music to that music when I don't know exactly when it's going to happen? It's just as much a musical question as it is a technical question. In fact, working with adaptive music is changing the way I think about music. I think in terms of music 'cells', and how to get from cell to cell. If a transition needs to happen within 3 seconds of when it's called, you have to start looking at every other measure as a possible transition point.
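
As a back-of-the-envelope illustration of that timing constraint (the tempo, meter, and positions here are made up for the example), the arithmetic works out like this:

    // Illustrative sketch: if transitions may only start every other measure,
    // check whether the next boundary falls inside the 3-second window.
    #include <cmath>
    #include <cstdio>

    int main() {
        const double bpm = 120.0;
        const double beatsPerMeasure = 4.0;
        const double secPerMeasure = beatsPerMeasure * 60.0 / bpm;  // 2.0 s at 120 bpm, 4/4
        const double grid = 2.0 * secPerMeasure;                    // every other measure = 4.0 s

        const double now = 5.3;      // seconds into the current cue when the call comes
        const double maxWait = 3.0;  // transition must begin within 3 seconds

        double nextBoundary = std::ceil(now / grid) * grid;  // 8.0 s
        double wait = nextBoundary - now;                    // 2.7 s

        std::printf("next boundary in %.1f s (%s the %.0f-second window)\n",
                    wait, wait <= maxWait ? "within" : "outside", maxWait);
        // Note: to guarantee the window is always met, the grid spacing itself
        // must be no larger than maxWait; here the worst case would be 4.0 s.
        return 0;
    }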

Die Hard: Nakatomi Plaza is an interesting example. Because of a tight schedule, we used established technology and techniques (LithTech/NOLF), which helped me turn that project around quickly. With that score I learned more about how to write adaptive music in an orchestral style, rather than any specific 'technique', and that was a very valuable experience.

When scoring a movie, the movie dictates the overall form of the music; when scoring a game, the flow of the gameplay (and its design) determines the overall form of an adaptive game score. We don't know ahead of time what the exact timing will be (hence the need for adaptive scoring), but we do know a lot about the general form and flow of the game by design, and we know the general range of intended timings, i.e. the different rates at which players are expected to move through various parts of the game. That knowledge is the basis for an adaptive game score implementation.

One thing I've learned on recent games is the importance of solid integration between the music engine and the game engine. They need to talk to one another effectively. That is the key to making a score that works well with the game, vs. one that feels disconnected from the game.

I seem to be moving in the direction of more flexibility in game scores so that I can score them more tightly. By that I mean having the ability to create a score that feels like it's scored accurately to picture, every time. To that end, I'll be experimenting with a combination of the adaptive concepts I've been using, and then some.

10) Anything I've left out?

While I'm at it...

Adaptive music is at the 'early adoption' stage in the game industry (even though there's already a rich legacy of adaptive scoring). The degree of adaptability in game scores will gradually increase; I don't think it's a matter of 'if' but of 'when'. Sometimes I sense a slight fear of adaptive music among the game audio community, and that fear isn't completely unfounded: new learning curves, new technology, and rethinking how we write music! But those who choose to be early adopters may find themselves with a competitive advantage, and those who don't may find themselves hiring someone to create adaptive arrangements of their music (actually a viable model, which I used to a good extent on NOLF). But aside from all that, creating adaptive scores is loads of fun and creatively rewarding. Think of the adaptive part of scoring as another integral component of your music.


© Copyright 2002 MIDI Manufacturers Association