3: Interactive Implementation: Mixing and Processing Systems

The realisation of immersive and narrative goals through the processing and dynamic control of audio assets illustrates a fundamental divergence between the implementation of linear and interactive audio experiences. The narrative stability of film, in which, once a final edit has been delivered, the door slammed at 3:37 will always slam at 3:37, contrasts sharply with the unpredictability of the interactive medium, and so the technical means by which we apply audio content must differ just as sharply. Typical digital audio workstations such as Digidesign's Pro Tools and Steinberg's Cubase are useful only for generating static assets; in the interactive field it falls instead to the 'audio engine' to fulfil implementation requirements. Though these programs rarely feature screens or layouts that resemble a recording-studio mixing desk, their essential function is to mix the assets relevant to the on-screen action interactively, often to an end identical to that of their linear counterparts. Because the unpredictability of interactive play makes it impossible to 'ride the faders' in response to the action, as a film sound mixer might choose to do, such graphic nods to the technology's ancestry might well seem irrelevant were they present; instead, much of the interface is designed to approximate that dynamic control by other means, particularly through RTPCs (Real-Time Parameter Controls) in Audiokinetic's Wwise and their counterparts in the Sound Definitions window of Firelight Technologies' FMOD. [fig 1]

fig 1
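Conceptually, an RTPC is little more than a curve mapping a game parameter onto an audio property, evaluated continuously at run time. The sketch below is plain C++ with hypothetical names rather than the actual Wwise or FMOD API; it shows a piecewise-linear curve translating a speed-style parameter into a fader level in decibels, which is roughly the evaluation the middleware performs each frame.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// One point on an RTPC-style curve: game-parameter value -> fader level (dB).
struct CurvePoint { float param; float gainDb; };

// Piecewise-linear evaluation, clamped at the curve's end points.
float evaluateCurve(const std::vector<CurvePoint>& curve, float param) {
    if (param <= curve.front().param) return curve.front().gainDb;
    if (param >= curve.back().param)  return curve.back().gainDb;
    for (size_t i = 1; i < curve.size(); ++i) {
        if (param <= curve[i].param) {
            const float t = (param - curve[i - 1].param) /
                            (curve[i].param - curve[i - 1].param);
            return curve[i - 1].gainDb + t * (curve[i].gainDb - curve[i - 1].gainDb);
        }
    }
    return curve.back().gainDb;
}

// Convert decibels to the linear gain actually applied by the mixer.
float dbToLinear(float db) { return std::pow(10.0f, db / 20.0f); }

int main() {
    // Hypothetical curve: an engine loop gets louder as "speed" rises from 0 to 1.
    const std::vector<CurvePoint> engineVolume = {
        {0.0f, -24.0f}, {0.5f, -6.0f}, {1.0f, 0.0f}
    };
    for (float speed : {0.0f, 0.25f, 0.75f, 1.0f}) {
        const float db = evaluateCurve(engineVolume, speed);
        std::printf("speed %.2f -> %.1f dB (gain %.3f)\n", speed, db, dbToLinear(db));
    }
    return 0;
}
```

In the middleware the same mapping is drawn graphically rather than coded, and the game simply feeds in the current parameter value; pitch, filter settings and other properties can be driven by the same mechanism.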

A Sound Definition thus operates not unlike a mixer channel, allowing the programmer to set the equivalent of mixer fader values in accordance with game states, in much the same way as drawing volume automation in a DAW. Other typical channel-strip functionality is of course also simulated: panning, for instance, can be set and automated, along with the parameters of any DSP acting on the sound or sounds.
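To make the channel-strip analogy concrete, here is a minimal sketch (again plain C++ with hypothetical names, not any particular engine's interface) in which a sound definition's pan and fader level are written from a game-state value each update, with the pan resolved to left/right gains by a standard equal-power law.

```cpp
#include <cmath>
#include <cstdio>

// A minimal "sound definition": the mixer-channel state the engine automates.
struct SoundDefinition {
    float gainDb = 0.0f;   // fader level
    float pan    = 0.0f;   // -1 = hard left, 0 = centre, +1 = hard right
};

// Equal-power pan law: distributes the signal across left/right so perceived
// loudness stays roughly constant as the pan position moves.
void panGains(float pan, float& left, float& right) {
    const float theta = (pan + 1.0f) * 0.25f * 3.14159265f; // 0 .. pi/2
    left  = std::cos(theta);
    right = std::sin(theta);
}

int main() {
    SoundDefinition footsteps;

    // Hypothetical game state: a character moving from the player's left to right.
    for (float sourceX = -1.0f; sourceX <= 1.01f; sourceX += 0.5f) {
        footsteps.pan    = sourceX;   // automated like a pan pot
        footsteps.gainDb = -6.0f;     // set like a fader write
        float l, r;
        panGains(footsteps.pan, l, r);
        const float gain = std::pow(10.0f, footsteps.gainDb / 20.0f);
        std::printf("x=%+.1f  L=%.3f  R=%.3f\n", sourceX, gain * l, gain * r);
    }
    return 0;
}
```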
For sound worlds to reflect the narrative progression of changing states, as well as the immersive environmental content, the game must be able to react in a manner similar to that of the film sound mixer. From a purely mix perspective we can think of composing on a vertical axis: presuming all the sounds are synced to the correct visual cues, a typical DAW layout shows our assets piled on top of one another in a vertical cascade. As the sound mixer we can then decide the relative volume levels and behaviour of these assets according to the requirements of the changing visual context along the horizontal axis of time.
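A minimal sketch of that vertical mix, assuming each asset is already a synced buffer of samples and the creative decisions have been reduced to per-stem gains: the mix bus simply sums the weighted stems.

```cpp
#include <cstdio>
#include <vector>

// One vertically stacked asset: its samples plus the current fader gain.
struct Stem {
    std::vector<float> samples;
    float gain; // linear gain decided by the (virtual) mixer
};

// Sum all stems into a single output buffer, applying each fader.
std::vector<float> mixDown(const std::vector<Stem>& stems, size_t length) {
    std::vector<float> out(length, 0.0f);
    for (const Stem& s : stems)
        for (size_t i = 0; i < length && i < s.samples.size(); ++i)
            out[i] += s.gain * s.samples[i];
    return out;
}

int main() {
    // Hypothetical stems: ambience held quiet under dialogue, with a louder accent.
    std::vector<Stem> stems = {
        {{0.2f, 0.2f, 0.2f, 0.2f}, 0.3f},  // room tone
        {{0.0f, 0.5f, 0.5f, 0.0f}, 1.0f},  // dialogue
        {{0.0f, 0.0f, 0.8f, 0.0f}, 0.6f},  // door slam
    };
    for (float v : mixDown(stems, 4)) std::printf("%.2f ", v);
    std::printf("\n");
    return 0;
}
```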
The Event systems in FMOD Designer and Wwise essentially act as our vertical channels, linking in-game events to audio stems: they not only trigger files but also alter their behaviour through panning, pitch shifting, equalisation and so on to reflect changing states such as player health. In 'Operation Flashpoint', for example, the sound heard by the player loses its upper frequencies while the character is severely injured. Manipulations of this kind can also serve grander narrative progressions, as in 'Fable 2', where, as the plot takes on a darker tone, so too does the audio content, through gradual equalisation and pitch alterations. [20]
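As an illustration of how such an event hook might tie a game state to a DSP change, the sketch below maps a health value onto the cutoff of a simple one-pole low-pass filter, so the mix loses its upper frequencies as health falls. The filter, the mapping and every name here are assumptions made for the sake of the example; they are not the actual processing used in 'Operation Flashpoint' or 'Fable 2'.

```cpp
#include <cmath>
#include <cstdio>

// One-pole low-pass filter: a higher cutoff lets more treble through.
struct OnePoleLowPass {
    float a = 1.0f, z = 0.0f;
    void setCutoff(float cutoffHz, float sampleRate) {
        // Standard one-pole coefficient; cutoffHz is the -3 dB point.
        a = 1.0f - std::exp(-2.0f * 3.14159265f * cutoffHz / sampleRate);
    }
    float process(float x) { z += a * (x - z); return z; }
};

// Hypothetical mapping from a game state (health 0..1) to a filter cutoff.
float cutoffForHealth(float health) {
    const float minHz = 400.0f, maxHz = 18000.0f;
    return minHz + health * (maxHz - minHz);
}

int main() {
    OnePoleLowPass filter;
    const float sampleRate = 48000.0f;

    // Simulate the player taking damage: health falling from 1.0 to 0.1.
    for (float health : {1.0f, 0.6f, 0.25f, 0.1f}) {
        filter.setCutoff(cutoffForHealth(health), sampleRate);
        filter.z = 0.0f;
        // Feed an impulse through the filter: a duller (lower-cutoff) setting
        // passes less of the impulse in the first sample and smears the rest.
        const float first = filter.process(1.0f);
        std::printf("health %.2f -> cutoff %.0f Hz, first sample %.3f\n",
                    health, cutoffForHealth(health), first);
    }
    return 0;
}
```

In practice the mapping would be authored in the middleware as an RTPC curve rather than hard-coded, and the filtering would be whatever DSP the engine exposes on that channel or bus.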
