A Spy’s Score: A Case Study for No One Lives Forever

This article is an excerpt from a paper originally written for the book DirectX Audio Exposed: Interactive Audio Development, published by Wordware and edited by Todd M. Fay (a.k.a. LAX). [ http://www.wordware.com/ ISBN 1-55622-288-2 ] This excerpt replaces DirectMusic terms with general adaptive audio terms that can be applied to any audio technology. If you're interested in learning about the specific DirectMusic techniques, process, and technology used in No One Lives Forever, read the full-length article from the book (due out late summer or early fall 2003). The full-length article is much longer than this excerpt and contains detailed information and instruction regarding the DirectMusic production process.

PART I: AN OVERVIEW OF THE SCORE

No One Lives Forever (NOLF) is a first-person action-adventure game. It is a spy story set in the 1960s, complete with gadgets, gizmos, and guns. The player assumes the role of Cate Archer, British spy extraordinaire! As shooters go, it is refreshingly lighthearted and sprinkled with kitsch and humor.
 
Monolith Productions developed, and Fox Interactive published, No One Lives Forever for the PC in 2000. I composed and produced the adaptive score, and Rich Ragsdale contributed the title theme. Eric Aho, Nathan Grigg, and Tobin Buttram created the DirectMusic (DirectX 7) arrangements and composed additional music. Bryan Bouwman programmed and integrated the game's DirectMusic code, and Sonic Network Inc. provided many of the DLS instruments.

For the soundtrack, I was asked to capture the flavor of the '50s and '60s spy genre without infringing on any existing copyrights. Believe it or not, at first I was told to limit my use of brass instruments (this directive came to me through the grapevine via the Bond franchise). That is like being asked to produce a blues album without guitars! The powers that be quickly got over the legal paranoia, however. I did have one theme refused because of a subtle P5, m6, M6 melodic progression (made famous by composer John Barry), even though I thought it was the least 'Bond-ish' of my themes. Actually, I drew more influence from German composer Peter Thomas, whose film scores have more of the lighthearted feel we were after. The Barbarella soundtrack was also required listening.
 
I began the pre-production process by writing five or six  themes and prototyping them. These themes became the backbone of the adaptive  score. The adaptive scoring techniques for NOLF came out of the concepts  and technology implemented for three previous Monolith games: Shogo: Mobile  Armor Division, Blood II: The Chosen and Sanity. Shogo:  Mobile Armor Division was the first game I scored that broke the music down  into separate music states, which matched the game action. Blood II: The  Chosen and Sanity each added to those concepts, creatively and  technically.
 
NOLF built upon my adaptive scoring foundation, improving on many aspects of my technique. This white paper will describe the adaptive scoring concepts, the production process, and the implementation process used to create this game score. I will describe my intentions and the actual outcome: what worked and what didn't. I will also describe what I'd like to achieve with future action scores.

The Adaptive Concept and Music States

NOLF gameplay has high points of action and ambient points; times when the pace is furious, and times filled with suspense. In many scenarios the player may direct Cate in with guns blazing or may sneak her through the situation at hand. Obviously the same music cue wouldn't be appropriate for both approaches. Also, there's no way of pre-determining how long a firefight might last or what might come immediately after it. These are the reasons that led me to break the music down into flexible music states.

After writing a thematic idea, I arrange it in a variety of music states using subjective naming conventions that reflect their functionality (or intensity) in the overall score. Some of the tags I've employed are 'ambient', 'suspenseful', 'action', etc. Each of these music states can play for an indeterminate amount of time. Typically each music state has about one and a half to three minutes of music composed for it. It is sometimes difficult to calculate the exact amount of time when considering variations. As a general rule, I think in terms of how long a particular music state can hold a listener's interest (more on this later). A music state can repeat as necessary until another music state is called. I could also define the number of repeats. The music engine supported automatic transitions from one music state to another, or even to silence. This prevents the music from repeating ad nauseam during moments when player interaction is limited.

Music Sets

Each musical theme in NOLF is arranged using six basic music  states that make up a single music set. At the start of a level, one music set  is loaded along with the rest of the level assets. The six standard music states  are:
  1. Silence  
  2. Super ambient  
  3. Ambient  
  4. Suspense/sneak  
  5. Action 1  
  6. Action 2

The key to composing music for any given music state is to give the music enough musical ebb and flow to keep things interesting, while staying within the intensity range prescribed by the music state. For example, the 'ambient' music state may rise and fall a bit in musical density but should not feel like it has moved up or down too dramatically. The goal is to maintain the musical state while holding interest. One way the music sets in NOLF achieve this level of sustained interest is through instrument-level variation.

Using variation on just a few instrument tracks of a given  music state was very effective and didn’t cut too deeply into the production  schedule. Instrument-level variation is used in the lower intensity music states  quite often. These music states start differently every time they’re called up,  giving the illusion of more music. In some cases a four to eight measure  repeating music state feels like three to five minutes of fresh music.

Transitions: Getting from A to B and Back Again

The ability to transition between the various music states provides the flexibility needed for the soundtrack to adapt to the game state. Seamless musical transitions facilitate this adaptability in a way that sounds intentional and musically satisfying. In NOLF, any of the six music states may be called at any time. This means that any given music state must be able to modulate to any of the other five states.

This required transitions between states that made  sense musically and which did not interrupt the flow of the score. Sometimes  simply starting the next music state on a logical musical boundary was all that  was needed. Often, quickly ending one state and starting the next was enough.  However, the most satisfying transitions were the ones that built up to a more  intense music state or resolved downward to a less intense music state, without  missing a beat, so to speak.

The Matrix

First conceived for the Shogo score, a transition matrix filled the need to keep track of the myriad possible movements between music states. By defining the matrix in a simple script (more on the script later), I was able to assign short sections of music to act as specific transitions between states. When the game calls for a change of music state, it knows which music state is currently playing and which one it needs to move to, and it plays the appropriate transition between them.

The transition matrix acts as a lookup chart for the music/game engine. With six music states there are thirty possible transitions. Needless to say, I didn't labor over thirty individual sections of music for each theme. Many transitions did not need transition Segments, as they sounded good cutting in on an appropriate boundary. Also, I found that some transition Segments could be used for multiple transitions between music states. Transitions were generally divided into two types, to help clarify my thinking: transitions that move to a higher or more intense music state, and transitions that move to a lower or less intense music state. Categorizing transitions in this way made reusing transition material easier. (For example, transitioning from music state three to music state two may be similar to '3 to 1', while '3 to 4' may be similar to '3 to 5'. But not always!)
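
To make the lookup concrete, here is a minimal C++ sketch of a transition matrix keyed by the current and target music states. The segment file names and the FindTransition helper are hypothetical illustrations of the idea, not Monolith's actual code; pairs with no entry simply cut to the new state on a musical boundary, as described above.

    #include <cstdio>
    #include <map>
    #include <optional>
    #include <string>
    #include <utility>

    // The six music states used by each NOLF music set.
    enum class MusicState { Silence, SuperAmbient, Ambient, Suspense, Action1, Action2 };

    // Hypothetical transition matrix: (from, to) -> transition Segment name.
    using TransitionMatrix = std::map<std::pair<MusicState, MusicState>, std::string>;

    // Look up the transition Segment for a state change, if one was authored.
    std::optional<std::string> FindTransition(const TransitionMatrix& matrix,
                                              MusicState from, MusicState to) {
        auto it = matrix.find({from, to});
        if (it == matrix.end()) return std::nullopt;  // no Segment: cut on a boundary
        return it->second;
    }

    int main() {
        TransitionMatrix matrix = {
            {{MusicState::Ambient, MusicState::Action1}, "amb_to_act1.sgt"},
            {{MusicState::Action1, MusicState::Ambient}, "act1_to_amb.sgt"},
            // One authored Segment reused for several "downward" transitions.
            {{MusicState::Action2, MusicState::Ambient}, "act1_to_amb.sgt"},
        };

        if (auto seg = FindTransition(matrix, MusicState::Ambient, MusicState::Action1))
            std::printf("play transition Segment: %s\n", seg->c_str());
        else
            std::printf("no transition Segment; cut on the next boundary\n");
        return 0;
    }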

Performance Boundaries

Key to making the transitions work musically were Performance Boundaries. Performance Boundaries defined the points along a music state where transitions can take place. Boundary types included 'immediate', 'grid', 'beat', 'measure', and 'segment'. Each of these boundary types proved useful for different situations in NOLF. When a music state was rhythmically ambiguous, immediate or grid boundaries worked fine, allowing for the quickest transitions. Beat and measure boundaries came in handy when the rhythmic pulse needed to stay constant, and segment boundaries allowed the currently playing melodic phrase or harmony to resolve before transitioning.

Maintaining a balance between coherent musical transitions and the need to move quickly between states challenged us as arrangers. As you will hear if you play the game, some transitions work better than others. When a new state is called, there is an acceptable window of time to move to the new state. We used a window of zero to six seconds (eight seconds tops). This meant that at a tempo of 120 BPM, a four-measure phrase (in 4/4) was the absolute maximum a current music state could play out before transitioning to the new state. One typical solution was to use two-measure boundaries (for quicker transitions) throughout most of a music state and four-measure boundaries in spots that called for them aesthetically.
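
The arithmetic behind that window is simple, and the short sketch below works it out for a few boundary sizes, using a hypothetical helper that computes the worst-case wait until the next boundary. At 120 BPM in 4/4, one measure lasts two seconds, so a four-measure boundary can take the full eight seconds mentioned above.

    #include <cstdio>

    // Seconds per measure at a given tempo and beats per measure.
    double SecondsPerMeasure(double bpm, int beatsPerMeasure) {
        return (60.0 / bpm) * beatsPerMeasure;
    }

    // Worst-case wait until a boundary, assuming the transition request
    // arrives the instant the current boundary region begins.
    double WorstCaseWait(double bpm, int beatsPerMeasure, int boundaryMeasures) {
        return SecondsPerMeasure(bpm, beatsPerMeasure) * boundaryMeasures;
    }

    int main() {
        const double kWindowSeconds = 6.0;  // target window from the article
        for (int measures : {1, 2, 4}) {
            double wait = WorstCaseWait(120.0, 4, measures);
            std::printf("%d-measure boundary: up to %.1f s (%s the 6 s window)\n",
                        measures, wait, wait <= kWindowSeconds ? "inside" : "beyond");
        }
        return 0;
    }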

Composing and arranging convincing musical transitions in an adaptive score twists your brain and forces you to think non-linearly. The interactive score used in NOLF only scratches the surface in this regard; there is plenty of room for future innovation. I can say for certain that, having written a number of non-linear scores, I'll never think about music the same way again. In a way, it's freed my thinking about how music is put together. Even when listening to linear music I sometimes think, 'Hmmm… this piece could just as easily start with this section instead of that one' or 'I bet they could've started that transition two measures earlier!' Music is malleable and is only frozen when we record it.

Stinger Motifs

Two or three of the music sets used in NOLF employed Motifs. The Motifs were applied as short musical accents, or stingers, that played over the top of the currently playing music state. Performance Boundaries were set so that the Motifs would be in sync with the underlying music, and Chord Tracks were used so that Motifs would play along with a functional harmony (this was the only use of DirectMusic's difficult-to-navigate harmonic features).

These Motifs were composed of brass riffs, quick guitar licks, and things that would easily fit over the music state Segments. More flexibility would have been nice, so that different Motifs could be assigned to specific music states (possible using DirectX 8 Audio scripting). Five or six Motifs were written for each music set. The engine called a Motif randomly when the player hit an enemy AI square in the head (ouch!). A silent Motif was employed to prevent a Motif from playing every single time.
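
A minimal sketch of that random selection might look like the following, with an empty entry standing in for the silent Motif. The motif names and the PickHeadshotMotif helper are hypothetical; in the actual game the chosen Motif was queued on a Performance Boundary so it stayed in sync with the underlying music.

    #include <cstdio>
    #include <random>
    #include <string>
    #include <vector>

    // Hypothetical Motif pool for one music set. An empty name stands in for
    // the silent Motif, so a headshot does not always trigger a stinger.
    const std::vector<std::string> kMotifs = {
        "brass_hit_1", "brass_hit_2", "gtr_lick_1", "gtr_lick_2", "",  // silent
    };

    std::string PickHeadshotMotif(std::mt19937& rng) {
        std::uniform_int_distribution<size_t> pick(0, kMotifs.size() - 1);
        return kMotifs[pick(rng)];
    }

    int main() {
        std::mt19937 rng(std::random_device{}());
        for (int shot = 0; shot < 5; ++shot) {
            std::string motif = PickHeadshotMotif(rng);
            if (motif.empty())
                std::printf("headshot %d: silent Motif, no stinger\n", shot);
            else
                std::printf("headshot %d: queue '%s' on the next boundary\n",
                            shot, motif.c_str());
        }
        return 0;
    }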

Sounds and DLS banks

All the music in NOLF uses Microsoft's software synthesizer in conjunction with DLS. DLS banks are loaded into the software synthesizer (using RAM) and played via the DirectMusic engine. Each music set uses up to 8MB of DLS instruments (as 22kHz samples), which are loaded as each game level is loaded. These DLS banks are selected, created, and optimized for each individual music set. This gives each music set its own timbral character that coincides with the aesthetic needs of each theme.

DirectX 7 didn’t  have wave tracks, as DirectX 8 and above does; as a result, premixed tracks  weren’t an option. The DLS+MIDI approach provided the flexibility and  practicality needed for features such as instrument level variation, and Motifs  that respond to harmonic information. My current projects use a combination of  wave/streaming tracks and DLS+MIDI tracks. This provides a balance of pre-mixed  waves, CD quality with the flexibility of MIDI. However, as processor speeds  continue to increase and better real-time DSP comes about, professional  production values will be easier to attain via MIDI+DLS. The two approaches will  likely merge and simply be two tools in the same toolbox.

Integration and Implementation

Even though we went with an off-the-shelf solution, namely DirectMusic, there was still a good amount of programming needed to successfully integrate the music engine with the LithTech engine (Monolith's game engine). Thankfully, much of the work was done on previous games, and we simply needed to update the code for NOLF. Perhaps the biggest leap in this area was in how the music states were tied to the game. In Shogo, Blood II, and Sanity, music states were called via location triggers placed strategically throughout the levels. This was a laborious task (done in the LithTech level editor) and was a pain when enemy/NPC placement inevitably changed as the ship date neared. Necessity was the mother of invention for NOLF, as we didn't have the production schedule to individually place music triggers.

Bryan Bouwman and the fabulous  programmers at Monolith came up with the bright idea of tying global game-states  and NPC-AI directly to music states. This approach made perfect sense as the  game knows when there is action on the screen; it knows when Cate is sneaking  around in ‘stealth mode’, and it knows when the player is simply exploring a  level. To add flexibility, the game-state to music state assignments are done  individually for each game level, so different levels could have different  assignments. In addition, more than one music state could be assigned to a  game-state. For example, music states 5 and 6 were both often assigned to the  ‘combat’ game-state and music states 1 and 2 to ‘exploration’, etc. The game  chooses randomly between them at runtime. This means that you could play through  the same level twice and have a somewhat different score each time, yet the  music would be appropriate to the action in both cases.
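
In code, that per-level assignment amounts to a table mapping a game state to one or more music states, with a random pick among them at runtime. The sketch below is a hypothetical illustration of the idea (the state names and the ChooseMusicState helper are mine, not LithTech's); it mirrors the combat and exploration examples above.

    #include <cstdio>
    #include <map>
    #include <random>
    #include <string>
    #include <vector>

    enum class MusicState { Silence = 1, SubAmbient, Ambient, Suspense, Action1, Action2 };

    // Hypothetical per-level assignment: one game state can map to several
    // music states, and the engine picks one at random when that state is entered.
    using GameStateMap = std::map<std::string, std::vector<MusicState>>;

    MusicState ChooseMusicState(const GameStateMap& assignments,
                                const std::string& gameState, std::mt19937& rng) {
        const auto& candidates = assignments.at(gameState);
        std::uniform_int_distribution<size_t> pick(0, candidates.size() - 1);
        return candidates[pick(rng)];
    }

    int main() {
        // Mirrors the article's example: combat shares states 5 and 6,
        // exploration shares states 1 and 2.
        GameStateMap level = {
            {"combat",      {MusicState::Action1, MusicState::Action2}},
            {"stealth",     {MusicState::Suspense}},
            {"exploration", {MusicState::Silence, MusicState::SubAmbient}},
        };

        std::mt19937 rng(std::random_device{}());
        MusicState next = ChooseMusicState(level, "combat", rng);
        std::printf("combat -> music state %d\n", static_cast<int>(next));
        return 0;
    }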

Scripting

Monolith created a simple scripting method that provided me with some control over music asset management and implementation. Because NOLF was a DirectX 7 game, we didn't have the DirectX 8 Audio scripting that now comes with DirectMusic. In NOLF there is a script for each music set, which is called when a game level is loaded. Each script's basic functions are as follows (a rough code sketch follows the list):
 
  • Load the music assets for the music set
    • DLS instruments    
    • DirectMusic Styles, Bands, Motifs, and Segments
  • Assign DirectMusic segments to the six music states  
  • Set up the transition matrix
    • Assign transition segments    
    • Assign transition boundaries
  • Set the basic reverb settings  
  • Set up the Motifs and their boundaries
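
The article does not show the scripts' actual syntax, so the sketch below simply gathers those functions into a hypothetical C++ structure. The field names and file names are illustrative only, but the contents follow the list above.

    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    // A hypothetical, simplified picture of what one per-music-set script configures.
    struct MusicSetScript {
        std::vector<std::string> dlsBanks;         // DLS instrument collections to load
        std::vector<std::string> contentFiles;     // Styles, Bands, Motifs, and Segments
        std::map<int, std::string> stateSegments;  // music state (1-6) -> primary Segment
        std::map<std::pair<int, int>, std::string> transitionMatrix;  // (from, to) -> Segment
        std::map<int, std::string> transitionBoundaries;  // music state -> boundary type
        float reverbLevel = 0.0f;                  // basic reverb setting
        std::vector<std::string> motifs;           // stinger Motif names
    };

    MusicSetScript MakeAmbushScript() {
        MusicSetScript s;
        s.dlsBanks = {"ambush.dls"};
        s.contentFiles = {"ambush.sty", "ambush_motifs.sgt"};
        s.stateSegments = {{1, ""},  // state 1 is silence, so no Segment
                           {2, "amb_subambient.sgt"}, {3, "amb_ambient.sgt"},
                           {4, "amb_suspense.sgt"},   {5, "amb_action1.sgt"},
                           {6, "amb_action2.sgt"}};
        s.transitionMatrix[{3, 5}] = "amb_3_to_5.sgt";
        s.transitionBoundaries = {{3, "measure"}, {5, "beat"}};
        s.reverbLevel = 0.35f;
        s.motifs = {"amb_brass_hit", "amb_gtr_lick"};
        return s;
    }

    int main() {
        MusicSetScript script = MakeAmbushScript();
        return script.stateSegments.size() == 6 ? 0 : 1;
    }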

The Test Player

Monolith also built a handy little LithTech DirectMusic player that contained the adaptive functionality used in the game. It loads a script and its music set, and then plays the various music states using the correct transitions between them (a simple selector allows you to choose music states). Motifs can even be tested over the music states. This player was a lifesaver, as it allowed me to debug the music content in a game-like setting before implementing the content.

PART II: THE PRODUCTION PROCESS

The Prototype and Pre-production

The first step in the whole production  process was to zero in on the musical direction and thematic material of the  score. This began with discussions about the style of the music with game  designer Craig Hubbard. Next, I was to bring these ideas to realization in the  studio in the form of a prototype. Each thematic prototype was created in my  MIDI studio using all appropriate synthesizer/sampler modules and sounds  available. The idea was to ignore the nonlinear aspects that the music would  take on, and to ignore the technical limitations the game machine would place on  the music. In this way the focus of the prototype was the thematic material  itself, the musical style, and an ideal set of production values.

Each prototype was mixed to a stereo wave file and presented to the designer and producer. Some themes were accepted on the first take, some were sent back to the drawing board, and others were rejected outright. By the end of the process we had five or six themes we were all happy with, and full production could begin.

Composing and Sequencing

The sequences created for the prototypes (using  Digital Performer on a Mac) served as a starting point for production in  DirectMusic Producer. Some of the sequences were fairly complete while others  required extensive work and additional sections once brought into Producer. From  Digital Performer I exported each prototype sequence in the form of a ‘Standard  MIDI’ file (.mid). This allowed Producer to import the sequences for editing and  arranging.

I did as much sequencing as practical in Digital Performer in conjunction with my samplers. One key piece of advice I can offer when using this approach is to have the instrument samples from your studio match, as closely as possible, the instrument samples of the DLS banks to be used in Producer. This is getting easier to do, as the DLS2 format can be easily translated to GigaStudio (.gig) and SoundFont formats. To create the game score for Die Hard: Nakatomi Plaza, I had an exactly mirrored instrument set, with DLS as the target format and GigaStudio as the production format.

DLS Creation

The DLS (Level 1) instruments for No One Lives Forever came from two sources. First, we licensed many instrument collections from Sonic Network Inc. (the sounds are called SonicImplants). The other samples were home-made, including some solo cello samples. Sounds are a composer's palette, and having a rich palette, despite the memory constraints of the game, was key to making the interactive score convincing.

There are some tricks to creating a rich instrument collection within tough memory constraints:
 
  1. Layering sounds and resampling: When creating music in a traditional MIDI  studio, layering and stacking instrument patches to create a thick sounding   timbre is commonplace. The drawback to this technique in a game is memory usage and limited polyphony. The cure is to layer your patches and resample   them into a single set of samples to be assembled into DLS instruments. For example, I created a brass staccato instrument by stacking about 5 or 6 brass   patches in unison (including French Horns, trombones, and trumpets). One sample from this instrument using one voice sounded like the entire orchestral   brass section playing triple forte!  
  2. The use of unique or interesting sounds: One interesting timbre in a piece of music can carry the piece and make it memorable. The low cello glissando in the 'H.A.R.M.' theme is one such example in NOLF. A generic cello patch and the pitch bend wheel would never have cut it. Instead, I brought in cellist Lori Goldston and had her record some short figures and motifs. The ponticello glissando figures were then sampled and pitched down about a perfect fourth. This became the central figure around which the rest of the piece was composed. One 'live'-sounding instrument can trick our perception into hearing other parts as performed live.  
  3. Each instrument must sound convincing when soloed: If an instrument sounds weak on its own, it most likely will not add anything to your music. I resampled many instruments with a bit of room or hall reverb on each sample. The real-time reverb in DirectMusic Audiopaths is very useful, but it certainly isn't your Lexicon-quality algorithm. Adding a bit of high-quality processing to the individual samples, be it reverb, compression, or EQ, can go a long way toward getting that 'professional' sound back into your interactive score.  
  4. The samples should match the individual composition: Even within different orchestral arrangements, different sets of samples are called for depending on   the pacing and mood of each piece. The ‘one size fits all’ mentality of General MIDI will fail to give your score anything but a generic quality. Each   theme of the NOLF score had some instrument sets that were built specifically for that theme: The vocal ‘BaDeDum’ sample for its theme, the   horn ‘blatts’ for the Ambush theme, in addition to the cello samples already mentioned for the H.A.R.M. theme.
    Creating looping samples posed a big challenge given the short length of the samples. For many samples, a Mac program called Infinity was used to create internal loops. The program has tools such as cross-fade looping which help smooth out harmonic thumps and clicks common to short loops. That said, short loops are never perfect, and compromises are always made for the sake of memory constraints.

Instrument-level Variation

Each instrument part in NOLF can have up to 32 variations. Each time a music state is played, one variation per part is randomly chosen. Multiple variations do not have to be utilized; in fact, many instrument parts in NOLF had one 'hardwired' variation to play. I found that four or five variations on two or three instrument parts was enough variety for most music states.
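
A minimal sketch of that selection step, assuming a hypothetical InstrumentPart structure: each part carries its authored variations, one is drawn at random when the music state starts, and a part with a single entry behaves like the 'hardwired' parts mentioned above.

    #include <cstdio>
    #include <random>
    #include <string>
    #include <vector>

    // Hypothetical instrument part: up to 32 authored variations, one picked
    // at random each time its music state begins playing.
    struct InstrumentPart {
        std::string name;
        std::vector<std::string> variations;  // authored pattern variations
    };

    const std::string& PickVariation(const InstrumentPart& part, std::mt19937& rng) {
        std::uniform_int_distribution<size_t> pick(0, part.variations.size() - 1);
        return part.variations[pick(rng)];
    }

    int main() {
        std::vector<InstrumentPart> parts = {
            {"vibes", {"vibes_var1", "vibes_var2", "vibes_var3", "vibes_var4"}},
            {"flute", {"flute_var1", "flute_var2", "flute_var3"}},
            {"bass",  {"bass_only"}},  // hardwired: always the same pattern
        };

        std::mt19937 rng(std::random_device{}());
        for (const auto& part : parts)
            std::printf("%s -> %s\n", part.name.c_str(),
                        PickVariation(part, rng).c_str());
        return 0;
    }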

The ‘ambient’ and ‘sub-ambient’ states tended to  get a deeper variation treatment, with most instrument parts having variations.  It may be that the moody, atmospheric, and arrhythmic nature of these music  states lent themselves to a truly non-linear treatment.

Variations in the 'suspense' or 'action' music states entailed two or three parts with variation. This allowed the foundation of the music to remain consistent while providing some variety. There are many instances where instrument-level variation is not used; this was mainly due to a limited production schedule. A simple technique for creating subtle variation is to copy the original track (variation 1), paste it into another variation, and then slightly alter it. The new variation could have the same contour with different embellishments. This way it remains consistent with the intent of the music yet adds interest. Another technique for using variations effectively is composing unique melodies for one pattern's variations while the other parts have only subtle variation. This technique creates a 'lead' instrument that riffs over a consistent, yet changing, bed of music. I ensure that each part has its own rhythmic space to play in. For instance, all of the vibes' variations may occur during measures three and five, while the flute variations are given measures two and six. This ensures that the parts won't step on each other, despite the music's non-linear nature.

Continuous Controllers

The use of continuous controllers is essential to breathing life into any MIDI score. In NOLF, controller 7 (volume) is used to change a part's general volume, while controller 11 (expression) creates the dynamic ebb and flow. Parts without dynamic expression can sound flat and monotonous, no matter how well the part is written. Inserting expression curves (CC 11) helps convey the dramatic intent of a musical phrase. Expression curves are used in the NOLF score as crescendos, quick swells, and fade ins/outs, and to emphasize certain phrases of a part.
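
As a simple illustration, the sketch below builds a linear CC 11 ramp across a two-measure phrase, the kind of curve that produces a crescendo. The ExpressionRamp helper and the tick resolution are hypothetical; in practice curves like these are usually drawn and shaped by ear in a sequencer rather than generated.

    #include <cstdio>
    #include <vector>

    struct CCEvent {
        int tick;   // position in MIDI ticks
        int value;  // controller value, 0-127
    };

    // Build a linear CC 11 (expression) ramp, e.g. a crescendo from 60 to 110.
    std::vector<CCEvent> ExpressionRamp(int startTick, int endTick,
                                        int startValue, int endValue, int step) {
        std::vector<CCEvent> events;
        for (int tick = startTick; tick <= endTick; tick += step) {
            double t = double(tick - startTick) / double(endTick - startTick);
            int value = int(startValue + t * (endValue - startValue) + 0.5);
            events.push_back({tick, value});
        }
        return events;
    }

    int main() {
        // 480 ticks per quarter note: two measures of 4/4 = 3840 ticks.
        for (const CCEvent& e : ExpressionRamp(0, 3840, 60, 110, 480))
            std::printf("tick %4d  CC11 = %d\n", e.tick, e.value);
        return 0;
    }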

Creating Transition Segments

Transitions link the six music states to  one another musically. It takes a puzzle-like logic to figure out how the  transitions should operate. NOLF uses specifically created Segment files  for transitions. The transition matrix architecture, set up in the script,  allows specific Segments to be assigned to each possible transition (between 6  music states there are 30 transitions). These transition Segments are one to  four measures in length. This duration allows enough time for convincing  transitions, while being short enough to keep up with the game action.

I ask myself two basic questions when composing a transition Segment: 'Which music state am I moving from?' and 'To which music state am I transitioning?' I write down all the possible transitions and methodically check them off as they are created. I begin by composing transition Segments to and from silence; in other words, the intros and ends of each music state. The music composed for these intros and ends provides the foundation for other transitions. This is because the material written for a music state's intro may become the basis for transitioning to that music state from other music states. Likewise, a music state's end may also function well when transitioning from that music state to other music states. (This is the puzzle logic I mentioned!) Sometimes I'll use the end transition of one music state and the intro of another music state as the transition between them. The end material brings the music out of the current music state and the intro brings the music into the next state. More often, this end/intro combination is the basis for that transition, and further editing and composing is done to make it work well.
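
That end/intro fallback can be expressed as a small piece of selection logic: look for a dedicated transition Segment first, and if none exists, queue the current state's end followed by the next state's intro. The sketch below is a hypothetical illustration, with made-up Segment names.

    #include <cstdio>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    // Intro and end material authored for one music state.
    struct StateMaterial {
        std::string intro;  // transition from silence into this state
        std::string end;    // transition from this state out to silence
    };

    std::vector<std::string> BuildTransition(
        const std::map<std::pair<int, int>, std::string>& dedicated,
        const std::map<int, StateMaterial>& material, int from, int to) {
        auto it = dedicated.find({from, to});
        if (it != dedicated.end()) return {it->second};   // authored Segment exists
        return {material.at(from).end, material.at(to).intro};  // end, then intro
    }

    int main() {
        std::map<std::pair<int, int>, std::string> dedicated = {{{3, 5}, "amb_3_to_5.sgt"}};
        std::map<int, StateMaterial> material = {
            {3, {"amb_3_intro.sgt", "amb_3_end.sgt"}},
            {4, {"amb_4_intro.sgt", "amb_4_end.sgt"}},
            {5, {"amb_5_intro.sgt", "amb_5_end.sgt"}},
        };
        for (const std::string& seg : BuildTransition(dedicated, material, 3, 4))
            std::printf("queue %s\n", seg.c_str());
        return 0;
    }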

Many  transition Segments function well in multiple transitions, and this saves  production time. For example, the transition Segment built for music states 2 to  3, may also work between music states 2 and 4, etc. I compose one music state’s  transitions at a time. Thus, when I’m working on music state 2, I sequentially  create transitions between states 2 to 1, 2 to 3, 2 to 4, 2 to 5, and 2 to 6.  Again, this is because there are bound to be similarities among these  transitions that I can reuse.

Simple transitions are often the most effective. If the music needs to stop or transition quickly, a large percussive accent brings the music to a halt. It's as if the music hits a wall: blunt and jarring, and that frequently works well within the game. There are also cases when no transition Segment is needed between music states. In these cases, the music flows directly from one music state to the next, and the release times of the DLS instruments create a natural blending or cross-fade between music states.

Sometimes, one transition Segment is not adequate for a  particular transition. If a music state contains a variety of musical sections,  more than one transition Segment may be needed. For instance, music state four  has an A section that is 16 measures in length, and a B section also 16 measures  long. If the instruments used in each section are different or the harmony and  tempo varies between them, then a single end Segment may not work from both  section A and section B. In a case such as this, two transition Segments are  created; one for transitioning from the A section and a second for transitioning  from the B section. During gameplay, the transition matrix calls the appropriate  transition Segment depending on the current playback point of the music state.
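
One hypothetical way to express that is to include the currently playing section in the lookup key, so each section of a music state can exit through its own transition Segment. The sketch below illustrates the idea with made-up Segment names.

    #include <cstdio>
    #include <map>
    #include <string>
    #include <tuple>

    // Lookup key carrying the source state, its currently playing section, and the target state.
    using SectionKey = std::tuple<int /*from*/, char /*section*/, int /*to*/>;

    std::string SectionTransition(const std::map<SectionKey, std::string>& matrix,
                                  int from, char section, int to) {
        auto it = matrix.find({from, section, to});
        return it != matrix.end() ? it->second : "";  // empty: cut on a boundary
    }

    int main() {
        std::map<SectionKey, std::string> matrix = {
            {{4, 'A', 5}, "state4A_to_5.sgt"},
            {{4, 'B', 5}, "state4B_to_5.sgt"},
        };
        std::printf("leaving 4B for 5: %s\n",
                    SectionTransition(matrix, 4, 'B', 5).c_str());
        return 0;
    }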
 
When creating music states and their transitions, keep their harmonic content in mind. Music states with disparate key centers make transitions difficult to execute, because the drastic harmonic modulation of the transition may sound unnatural or awkward. I recommend using key centers that are closely related (e.g., C major to G major) to create convincing transitions. I also used chromatic modulation in NOLF (e.g., B major to C major), and many transitions simply stay in the same key. It is easy to back yourself into a corner harmonically when creating an adaptive score. Be aware of the tonal centers of the music states as you create them, and try to think ahead about how they will transition harmonically.

PART III: INTEGRATION AND IMPLEMENTATION

Integration of Technology

DirectMusic functionality had already been a part of the LithTech engine, so integration of the technology was already complete. We innovated in the area of AI, however. NOLF already integrated an advanced state machine that calculates the player's state and enemy AI. The programmers simply made it possible to trigger music states via game states. These game-state-to-music-state associations were made in the LithTech level editor so that different game levels could have unique setups if desired. The level editor also allowed two or more music states to be assigned to one game state, one of which would be randomly chosen during runtime. The most intense music states, five and six, were both assigned to the 'combat' game state. Also, music states one (silence) and two (sub-ambient) often shared the quiet 'investigate' game state. Assigning multiple music states to a single game state cut down the repetition and predictability of music within a given level by adding variety to game scenarios.

Implementation of the Music Content

The adaptive music state machine described above makes implementing the music content easy. The first steps include checking the DirectMusic files into the game and properly setting up each game level. The LithTech level editor selects the music theme and script for a given game level. The music themes are thus assigned to the various levels, each theme being used across an average of three or four levels. Ninety percent of the music's adaptability is handled by the state machine. Location-based triggers account for the other ten percent, and override the state machine when triggered. Location-based triggers come into play when a specific theme or music state is desired regardless of the game state.

NOLF cinematics use the same themes and music sets as the adaptive game score. Music triggers are placed at key points in a cinematic, where music is needed, and transitions between music states occur automatically. Triggering music states from cinematics works surprisingly well but certainly does not sound as good as custom cinematic scores would have. Music scored specifically to a scene matches the events more precisely than music composed out of context. The adaptive music sets and triggers were used because the NOLF cinematics were not complete in time to score them individually. The lesson: reserve production time to custom-score game cinematics, and insist that the developer finalize their timing before scoring begins.

CONCLUSION

The score for No One Lives Forever was a challenge to produce but was also very rewarding. The first challenge was convincing the producers at Fox Interactive that an adaptive score could have high standards of production quality. The Monolith team helped make the case by presenting demos of previous scores, such as Sanity. Also, my prototype themes helped convince them of my abilities as a composer. Putting together convincing DLS banks within tight memory constraints also posed a big challenge. The optimization process was time consuming and tedious, but key to the sound of the game. Final mixing and editing called for great attention to detail: going through all the Patterns (and all their variations), Motifs, and Segments, making sure volume levels, panning, and instrumentation meshed well together. NOLF's game state/music state integration gave me the greatest reward. It was fantastic to simply drop music into the game and hear the interactivity immediately. It was also gratifying to collaborate with a strong team of arranger/composers. Having the help of three other musicians produced more content for the game, and sharing compositional ideas and techniques made us all better musicians. Finally, spy music was just plain fun to compose. The game's sense of humor made it a delight to create its music.

Overall, the adaptive design  functioned as planned or better. The transitions reacted quickly and smoothly to  the game calls, and the mood of each music state matched the on screen action  very well. The instrument variation, music state variation, and use of silence,  alleviated the repetitiveness common to many games, and the Motifs made direct  hits more satisfying to the player. At its best, the adaptive score draws the  player deeper into the game experience. My biggest criticism is that sometimes  the game states change faster than the music was intended to react. This makes  the music seesaw between music states unnaturally. Many of these instances are  only noticeable to me, but some are more obvious. I will be thinking of  solutions to this type of dilemma for my next adaptive score. Also, the sonic  quality of the music is limited due to the 22kHz DLS banks. A combination of  wave tracks and DLS banks would have allowed for longer samples, phrases, and  premixed sections, which can increase the overall fidelity of a score while  maintaining adaptability.

Each adaptive game score I produce gives me  ideas and concepts for the next. The biggest lesson learned from NOLF is that  global music integration is hugely important to a successful score. Good  integration creates the logical lines of communication between the music system  and the game engine. If these lines are weak or nonexistent, the music will not  respond well to gameplay, no matter how well the music functions out of context.  And context is everything.

ADDENDUM: A Guide to the NOLF Media Files

NOLF Quicktime Videos

1. EarthOrbit:  The Ambush theme starts in music state 5 (combat 1), transitions to music state  2 (ambient), then transitions to music state 6 (combat 2) with motifs.
2. HamburgClub: The BaDeDum theme starts with music state 3 (main theme), transitions  to music state 6 (combat 2), the main theme returns then transitions to music  state 5 (combat 1), and ends with the dialogue.
3+4. MorroccoAmbushA and MorroccoAmbushB:  Each of these movies runs the same scenario but with differing scores. AmbushA  transitions to the combat 1 music state, and AmbushB moves to the combat 2 music  state.
5. SniperB2: This scene exhibits the Motifs of the Ambush theme well, as the music transitions from music state 2 (sub-ambient) to music state 3 (suspense).

Ambush Music States Audio File Ambush_music_states.mp3

This  music clip moves through all six of the music states for the Ambush theme.
Time		Music State
0:00		6 (combat 2)
1:22		transition 
1:30		2 (sub-ambient)
2:14		transition
2:16		4 (suspense)
3:10		transition
3:12		3 (ambient)
4:08		transition
4:12		5 (combat 1)
5:33		transition
4:12		1 (silence)