6: Interactive Music Systems and Theoretical Approaches To Non-Diegetic ‘Embodied’ Content.
Non-diegetic sound worlds, i.e. music not originating from an obvious source in the game world, offer many challenges to the music programmer that are not present in equivalent, linear film sound environments. Because events are increasingly dynamic in modern game environments, particularly those of open-world games such as ‘Fallout 3’, ‘Mass Effect’ and ‘Assassin’s Creed’, the demands placed on interactive music increase. This is due largely to a simple truth of linear media formats: once a piece of music has been written and placed on a timeline, in which, for instance, a character walks down a particular street, that sequence will exist that way each time it is played back. However, “dynamic audio complicates the traditional diegetic/non-diegetic division of film sound” because of “the unique relationship in games posed by the fact that the audience is engaging directly in the sound-making process onscreen”. In the game world the player can quite conceivably choose not to walk down the street and may choose to enter a building instead. As this may require, for dramatic effect, a change in the non-diegetic musical content, it is reasonable to suggest that the linear approach cannot realistically be applied to interactivity. “In a game, the mixing happens at run-time in a software engine and is happening every time the player actually plays the game. Because video game mixing can only ever rely on the software effects and DSP (digital signal processing) that ship with the game, software-based auxiliary channels may be used to send the sound from a particular channel to a software reverb, also running in memory in real-time.”
This does not mean that music need necessarily be written for specific locations and actions all the time. For instance, the musical content of the leading role-playing game ‘Dragon Age: Origins’ [clip 24] seems largely to be based around a series of ambient playlists which abruptly cut into denser, more aggressive music when a fight breaks out.
Is the abruptness of the change a problem? Perhaps not. If there is an abrupt change in game state, perhaps the music should follow suit. Both Firelight Technologies’ FMOD and Audiokinetic’s Wwise audio middleware applications offer dedicated interactive music systems that utilize containers to hold cues, their associated audio assets and the arguments that allow progression to new sequences or the addition of complementary layers.
Figure seventeen below illustrates an experimental chain of Cues in a system developed during research for this paper. It could feasibly represent a system similar to that featured in ‘Far Cry 2’, where music reacts tightly to in-game action both sequentially and vertically, responding dynamically to danger in a fluid and musical manner.
This is particularly relevant as the game’s developers utilize FMOD for their sound implementation: each Cue sends and receives what are essentially IF, NOT, AND and OR arguments that determine the Cue’s behavior, allowing both sequential and concurrent mixing simultaneously.
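The boolean-argument behavior described above can be sketched in outline. This is a minimal illustration with hypothetical names, not the actual FMOD API: a Cue whose progression to the next Cue is gated by boolean game-state tests combining IF, NOT, AND and OR.

```python
class Cue:
    """Hypothetical sketch of a cue node in a chain (not the FMOD API)."""

    def __init__(self, name, asset):
        self.name = name          # label for the cue
        self.asset = asset        # audio asset this cue plays
        self.transitions = []     # (condition, next_cue) pairs

    def add_transition(self, condition, next_cue):
        """condition is a function of the game state returning True/False."""
        self.transitions.append((condition, next_cue))

    def next(self, state):
        """Return the first cue whose condition matches the game state."""
        for condition, next_cue in self.transitions:
            if condition(state):
                return next_cue
        return self               # no argument matched: keep looping this cue


# Build a tiny chain: explore -> tension when danger AND NOT in_safehouse.
explore = Cue("explore", "explore_loop.wav")
tension = Cue("tension", "tension_loop.wav")
explore.add_transition(
    lambda s: s["danger"] and not s["in_safehouse"], tension)
```

Evaluating the chain against a game-state dictionary each frame gives the sequential half of the behavior; concurrent mixing would run several such chains side by side.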
The recent Eidos game ‘Deus Ex’ uses this system to complement the role-playing style of the game: as the character can choose how to deal with situations in a variety of ways, so the music adapts to the player’s choices. Many situations can be approached anywhere from aggressively, with all guns blazing, at one extreme, to diplomatically, through conversation, at the other. The fact that the player can change approach at any time further complicates the problem, though this adaptability is necessary if the music is to fulfill its narrative function.
The developers on this project split the Cues into three layers, Ambient, Stress and Combat, that run concurrently, allowing the narrative changes referred to previously to be reflected in the music as it adapts to the player’s position in the story. The music system of ‘Dead Space’, which is integrated into the ‘Godfather Engine’, achieves much the same functionality for its highly adaptive soundtrack: four stereo streams of music play at any one time, with the game mixing between them depending on what Don Veca refers to as “fear emitters”. “These could be any aspect of the game itself – from a creature to a blind corner. The game is constantly calculating what fear emitters exist within a certain radius of the main character, what their fear strengths are, and coming up with a fear value for that moment. That fear value is what dictates the balance of the music in the game, as well as the balance of other content and mixing parameters.” For this purpose Veca wrote a scripting system that delivers essentially what both FMOD and Wwise can now do, sitting metaphorically on top of the middleware and triggering, filtering and mixing sounds without changing code.
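Veca’s fear-value idea can be illustrated with a short sketch. The data shapes and the simple linear crossfade are assumptions for illustration; the actual Dead Space scripting system is not public.

```python
import math

def fear_value(player_pos, emitters, radius):
    """Sum the strength of every emitter within `radius` of the player.
    Each emitter is an (x, y, strength) tuple. Result is clamped to 0..1."""
    total = 0.0
    for x, y, strength in emitters:
        if math.dist(player_pos, (x, y)) <= radius:
            total += strength
    return min(total, 1.0)

def stream_gains(fear, n_streams=4):
    """Crossfade n stereo streams: stream i is fully audible when
    fear == i / (n - 1), with linear fades between neighbours."""
    gains = []
    for i in range(n_streams):
        centre = i / (n_streams - 1)
        gains.append(max(0.0, 1.0 - abs(fear - centre) * (n_streams - 1)))
    return gains
```

A fear value of 0 leaves only the calmest stream audible; as the value rises, the mix crossfades stream by stream toward the densest material, which matches the four-stream balancing described above.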
Wwise uses a similar system of associating segments of music with in-game event triggers, allowing for sample-accurate transitions. The system of containers favored by Audiokinetic draws visual similarities with Firelight Technologies’ interactive music system; however, in the Wwise system each container delivers a specific function. A Segment relates directly to an individual WAV file, Music Switch containers handle transitions, while Music Playlist containers handle horizontal sequencing. [clip 25]
As in FMOD, Segments can also be layered ‘vertically’, allowing different stems in a mix to be triggered concurrently whilst their behavior is dictated by a different set of algorithms.
Each Segment contains user-entered data on tempo and time signature as well as user-defined transition points, essentially allowing assets to be subdivided without the need for further asset division at the DAW stage. Using the Segment editor the programmer can dictate the algorithms that determine how the music behaves in accordance with in-game contexts.
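The way per-segment tempo and time-signature data yields musically legal transition points can be sketched as follows. The helper names are hypothetical; no claim is made about Wwise’s internal model.

```python
import math

def beat_duration(tempo_bpm):
    """Seconds per beat at the given tempo."""
    return 60.0 / tempo_bpm

def bar_duration(tempo_bpm, beats_per_bar):
    """Seconds per bar, from tempo and time signature numerator."""
    return beat_duration(tempo_bpm) * beats_per_bar

def next_transition_time(playhead_s, tempo_bpm, beats_per_bar):
    """Earliest bar boundary at or after the current playhead position,
    i.e. the next musically legal exit point from this segment."""
    bar = bar_duration(tempo_bpm, beats_per_bar)
    return math.ceil(playhead_s / bar) * bar
```

At 120 bpm in 4/4, bars are two seconds long, so a transition requested at 3.1 seconds into the segment would be deferred to the bar line at 4.0 seconds — the subdivision happens in metadata rather than in the WAV file itself.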
By attaching these Segments to a Switch container we essentially create an interactive sequencer that can switch between different musical assets in response to changes in game state, in a manner that, dependent on the programmer, is musically intelligent. This allows a greater degree of narrative dynamics in the soundtrack than if we were playing back simple stereo files. For example, you could increase layers of instrumentation in accordance with the level of enemy threat or, more informatively, associate instrumentation with different classes of NPC opponent, perhaps reserving the brass section of an orchestral score for the arrival of a particularly menacing character or enemy class.
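The threat-based layering example might look like this in outline. The stem names, thresholds and the ‘elite’ enemy class are invented for illustration.

```python
# Hypothetical stem list: base stems added with rising threat, plus a
# brass stem reserved for a particular enemy class, as described above.
STEMS = ["strings", "percussion", "woodwind", "brass"]

def active_stems(threat_level, enemy_classes):
    """threat_level (0..3) enables the base stems progressively; the
    brass stem is only enabled when an 'elite' enemy is present."""
    stems = STEMS[:3][:threat_level]   # slice copies, so append is safe
    if "elite" in enemy_classes:
        stems.append("brass")
    return stems
```

The game-state-to-stem mapping is the ‘musically intelligent’ part: the middleware only needs to crossfade whatever stem set this function returns each time the state changes.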
In this way ‘Batman: Arkham City’ utilizes this system of Layers and Segments, employing different “branches” of the orchestral score to react kinetically to the player’s actions. This can result in a complete change to another section of music entirely, or a switch to different layers of the score that changes the harmony and instrumentation while maintaining melodic and/or textural continuity.
To enable the score to be broken down into the required segments, it is constructed, timing-wise, from segments four to eight bars long, harmonically sticking mainly to a few minor keys that modulate only through the minor third of the scale. This approach helps enable the maximum number of segment combinations whilst maintaining a natural musicality. However, layering can still be a compositional challenge, as highlighted by Nick Arundel, the key composer for the title: “To get the middle one ‘layer’ to match the fourth one but then know that you can never play the first two when the fourth one’s playing is tough to describe to someone – it literally feeds into the types of chords you use.” [clip 26]
This incorporation of composition with interactive mixing functionality is highlighted in ‘LA Noire’, where the background music at crime scenes works as a crucial structural gameplay device during the search for clues, concluding only once all the clues at a scene have been detected and so acting as a structural game signifier. The discovery of these clues is supported by incidental two-note stings, sympathetic to the harmony of the score, upon successful detection. By apparent use of FMOD’s music system, these notes are not just harmonically sympathetic to the existing soundtrack but also rhythmically so, meaning that although they are mixed louder than the rest of the score in order to fulfill their function, they detract as little as possible from the narrative immersion being attempted.
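The rhythmic placement of such a sting can be approximated by quantizing its trigger to the next beat of the underlying score. This is a sketch with assumed values, not LA Noire’s actual implementation.

```python
import math

def quantized_sting_time(event_time_s, tempo_bpm):
    """Defer a sting triggered at event_time_s (e.g. the moment a clue is
    found) to the next beat boundary of the score, so it lands in time
    with the music rather than wherever the player happened to click."""
    beat = 60.0 / tempo_bpm
    return math.ceil(event_time_s / beat) * beat
```

Quantizing to the beat (or, more conservatively, the bar) is what lets the sting sit louder in the mix without sounding like an interruption: it arrives where the listener’s ear already expects an event.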