1: Sound for Interactive Games: Theoretical Concepts and Implementation Practice

This section sets out to draw together the principal ideas and theories behind both interactive and linear audio mixing, with the aim of constructing a theoretical framework conducive to the practical application of sound in the context of video games.
By examining the ideas of sound designer Walter Murch on exploiting cognitive function to better arrange audio assets in simultaneous playback, it may be possible to synthesize his findings with those of leading theoreticians on the purpose and roles of game audio. The premise is that if we know what we need from interactive audio, and are also aware of the limitations and capabilities of our audience to make sense of the sound presented to them, then we can construct a framework for audio mixing solutions that incorporates both agendas.

Introduction

In assessing the relationship between theoretical conceptualization and the practical application of interactive computer game audio mixing and processing, it seems wise to investigate how established sound theory for the linear moving image relates to its interactive counterpart, and how it resonates with the technology available for interactive implementation. With this in mind, the approach this paper takes is to pull together some of the more prominent schemes for categorizing and allocating function to areas of the sound world in relation to interactivity, and to find useful correlations with ideas from film mixing and implementation.
There is also a correlation to be explored between these theoretical ideas and the tools that enable sound to be incorporated and mixed in these digital interactive worlds. Changing technologies have always influenced the creative process and auditory outcomes for media, going back to the introduction of sound to movies in the early part of the last century. Likewise, it could be argued that the purchase of a new K.E.M. Hamburg flatbed editing machine by American Zoetrope in 1969 launched a new era in sound design for motion pictures. Dispensing with the vertical Moviola editing systems still popular in Hollywood studios, the company founded by Francis Ford Coppola and George Lucas, with Walter Murch among its first collaborators, decided “we were going to be a modern company and so there were no Moviolas around” [1]. They were adamant about exploring the creative opportunities presented by new technologies. Murch described the workflow of the traditional Moviola as “sculptural in the sense of a clay sculpture that you’re building up from bits” [1], as opposed to the K.E.M. machine, which was “sculptural in the sense that there is a block of marble and you’re removing bits” [1]. The creative response to this change in technology precipitated not only new aspects of the language of sound for film but also the creation of a new role: that of Sound Designer, as first credited to Walter Murch for his work on Francis Ford Coppola’s ‘Apocalypse Now’. “We felt that there was now no reason — given the equipment that was becoming available in 1968 — that the person who designed the soundtrack shouldn’t also be able to mix it, and that the director would then be able to talk to one person, the sound designer, about the sound of the film the way he was able to talk to the production designer about the look of the film.” [1]
With this in mind, it makes sense to explore how the leading technologies in game audio implementation enable the realization of theoretical ideas, and how this changes the experience both for the sound designer and for the end consumer of these interactive experiences. At the heart of this process is the ‘middleware engine’. These applications allow game designers from a range of disciplines to build and implement content for games with little or no programming. For the sound designer this means that, instead of their job ending with the supply of audio assets and a list of instructions for a programmer to code, they can implement not only the assets but also the relationship those assets have to game events, their relationship to other sounds, and what digital signal processing acts on them and how; and, if a prototype of the game is up and running, they can trial and adapt these behaviors without changing code [2]. As such, this paper will investigate the two most prominent third-party dedicated audio engines, which have been brought on board for many prominent games of recent years, with particular attention paid to what they bring to the interactive audio mixing table and whether their systems can be related in a practical way to theoretical frameworks. These are Firelight Technologies’ FMOD, recently used on Eidos’ ‘Deus Ex’, Rockstar’s ‘L.A. Noire’, Warner Bros.’ ‘Arkham Asylum’/‘Arkham City’ Batman games and Codemasters’ ‘Operation Flashpoint’ [3], and Audiokinetic’s Wwise, used on BioWare’s ‘Dragon Age’ and ‘Mass Effect’ titles, Ubisoft’s ‘Assassin’s Creed’ series and LucasArts’ ‘The Force Unleashed’ [4].
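To make this division of labor concrete, the sketch below shows roughly how game code triggers a designer-authored event, assuming FMOD Studio’s C++ API (the current descendant of the FMOD tool chain discussed here). The bank filenames and the event path are hypothetical; everything about how the event sounds, its layering, randomization, DSP and mix routing, lives in the designer’s authoring project, so the programmer only posts the event.

```cpp
// Minimal sketch of triggering a designer-authored event from game code,
// assuming the FMOD Studio C++ API. Error checking is omitted for brevity;
// the bank names and event path below are hypothetical examples.
#include "fmod_studio.hpp"

int main()
{
    FMOD::Studio::System* system = nullptr;
    FMOD::Studio::System::create(&system);
    system->initialize(512, FMOD_STUDIO_INIT_NORMAL, FMOD_INIT_NORMAL, nullptr);

    // Banks are exported by the sound designer from the authoring tool and
    // contain both the audio assets and their mixing/DSP behavior. The
    // strings bank allows events to be looked up by path at runtime.
    FMOD::Studio::Bank* masterBank = nullptr;
    FMOD::Studio::Bank* stringsBank = nullptr;
    system->loadBankFile("Master.bank", FMOD_STUDIO_LOAD_BANK_NORMAL, &masterBank);
    system->loadBankFile("Master.strings.bank", FMOD_STUDIO_LOAD_BANK_NORMAL, &stringsBank);

    // The programmer refers to the event only by its path; its behavior can
    // be re-tuned in the authoring tool without recompiling the game.
    FMOD::Studio::EventDescription* description = nullptr;
    system->getEvent("event:/Weapons/Pistol", &description);

    FMOD::Studio::EventInstance* instance = nullptr;
    description->createInstance(&instance);
    instance->start();
    instance->release(); // instance is freed automatically once it stops

    // The studio system must be pumped once per game frame.
    system->update();

    return 0;
}
```

Wwise’s model is analogous: the designer builds events and the mixing hierarchy in the Wwise authoring application, and the programmer posts them at runtime with a call such as AK::SoundEngine::PostEvent, again leaving the sonic behavior entirely in the designer’s hands.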
