4: ‘Encoded-Embodied’: Mixing Across A Cognitive Audio Spectrum

In his essay “Dense Clarity – Clear Density” [21], Walter Murch breaks down the way the human brain perceives sound according to its content, and uses this to classify all the possible sounds in a film along the resulting spectrum, so that through the mixing process “the sound-track of a film will appear balanced and interesting if it is made up of a well-proportioned spread of elements from” a “spectrum of sound colours”. [21] In this article Murch illustrates the link between the sounds we hear and the physiology of our brain: how the brain’s cognitive architecture shapes the way we interpret, and differentiate between, the information carried by disparate audio content.
Murch puts forward the idea that the left and right hemispheres of the brain process different kinds of sound, and that if you devise your mix according to this ‘Encoded – Embodied’ spectrum, as illustrated below, you can accommodate far more audio content than if you were mixing to the ‘rule of two point five’, itself pioneered by Murch.
Balancing a full non-diegetic orchestral score against the huge amount of possible diegetic sound implicated in, for instance, a large battle scene presents the sound designer with a complex set of questions. “The challenge seemed to be to somehow find a balance point where there were enough interesting sounds to add meaning and help tell the story, but not so many that they overwhelmed each other. The question was: where was that balance point? Suddenly I remembered my experience ten years earlier …. and my first encounter with the mysterious Law of Two-and-a-Half.” [21] This law holds that there should be no more than two main sounds, plus a small element of something else, at any one moment in a film.
What Murch explores with his ‘Encoded – Embodied’ spectrum theory allows far greater sound density while maintaining the clarity apparent in two-point-five mixes. The approach was developed during the mixing of the helicopter attack scene in Apocalypse Now, where Murch pushed the scope of the mix a little wider and realized that, by carefully selecting the sounds present, he could push the number of layers of sound played simultaneously to five. “Why is this? Well, it probably has something to do with the areas of the brain in which this information is processed. It appears that Encoded sound (language) is dealt with mostly on the left side of the brain, and Embodied sound (music) is taken care of across the hall, on the right. There are exceptions, of course: for instance, it appears that the rhythmic elements of music are dealt with on the left, and the vowels of speech on the right. But generally speaking, the two departments seem to be able to operate simultaneously without getting in each other’s way. What this means is that by dividing up the work they can deal with a total number of layers that would be impossible for either side individually.” Murch continues: “In fact, it seems that the total number of layers, if the burden is evenly spread across the spectrum from Encoded to Embodied (from “violet” dialogue to “red” music) is double what it would be if the layers were stacked up in any one region (color) of the spectrum. In other words, you can manage five layers instead of two-and-a-half, thanks to the left-right duality of the human brain.” [21]
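As a rough, purely illustrative model of this capacity claim (the region boundaries, positions and budget numbers below are my own assumptions, not figures from Murch’s essay), one could score each layer by its position on the encoded–embodied spectrum and check that no single colour region is overloaded:

```python
# A minimal sketch of Murch's layer-budget idea: roughly two and a half
# layers per spectral "colour" region, about five in total when spread
# evenly. Region boundaries and budget numbers are illustrative guesses.

REGIONS = ["violet", "blue", "green", "yellow", "orange", "red"]

def region(position: float) -> str:
    """Map a 0.0 (fully encoded) .. 1.0 (fully embodied) position to a colour."""
    return REGIONS[min(int(position * len(REGIONS)), len(REGIONS) - 1)]

def mix_is_clear(layer_positions: list[float],
                 per_region_budget: float = 2.5,
                 total_budget: int = 5) -> bool:
    """True if no colour region is overloaded and the total count is in budget."""
    counts: dict[str, int] = {}
    for p in layer_positions:
        counts[region(p)] = counts.get(region(p), 0) + 1
    return (len(layer_positions) <= total_budget
            and all(n <= per_region_budget for n in counts.values()))

# Five layers stacked at the "violet" end overwhelm each other ...
print(mix_is_clear([0.10, 0.12, 0.15, 0.11, 0.13]))   # False
# ... while the same five spread across the spectrum stay clear.
print(mix_is_clear([0.05, 0.30, 0.50, 0.70, 0.95]))   # True
```

Five layers crammed into one colour fail the check, while the same number spread across the spectrum pass, mirroring Murch’s two-and-a-half versus five observation.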
The concepts on which these mixing solutions are based are not abstract artistic metaphors but grounded in scientific evidence: “Studies undertaken by (Brenda) Milner in 1962 led her to state that the right hemisphere is overwhelmingly concerned with musical ability.” [22] For example, her experiments with epilepsy sufferers who had undergone temporal lobotomies found that those who had the operation on their right lobe suffered the loss of “tonal memory and timbre perception.” [22] Likewise, experimental studies carried out on stroke victims by the Italian neuropsychologist Luigi A. Vignolo [23] found that right-hemisphere lesions commonly disrupted both environmental (‘orange’) sound and melody, though rhythmic perception remained intact. Left-hemisphere lesions also seem to confirm Murch’s position, disrupting the perception of rhythm while melody remained intact, in the context of the “semantic identification of environmental sounds”. [21]
Murch relates the rationale for allocating colours to this spectrum to the experience of white light, which, though it may look simple, “is in fact a tangled superimposure of every wavelength (that is to say, every color) of light simultaneously” [21]. Applying this idea to sound, Murch asks us to imagine a chaotic New York soundscape as ‘white sound’: if we could refract it as a prism refracts white light, we could see its colours individually, revealing “to us its hidden spectrum.” [21] His choice of colours is justified by the light spectrum being bracketed by ‘red’ at one extreme and ‘violet’ at the other. [fig 2]

fig 2

Now that we have our spectrum, we need to understand how to allocate sound in a way that suits an interactive audio implementation. To aid us in this purpose, Murch names the extremities of the spectrum: ‘Encoded’ sound sits at one end and ‘Embodied’ at the other.
Encoded sound, according to Murch, is that which conveys meaning according to a prescribed set of rules that, as literate humans, we can decode and derive meaning from. Here “Sound, in this case, is acting simply as a vehicle with which to deliver the code.” [21] In contrast, because the meaning a piece of music conveys is ‘embodied’ within the sound, the sound is no longer just a conveyance of meaning but an integral part of the meaning itself. Essentially, in terms of mixing for the moving image, Murch is saying that dialogue and music occupy the extremes of the spectrum and that all other sounds occupy some region between the two.
In accordance with these assumptions, it is fair to say that the sound effects we add to moving-image productions, in their various forms, fall somewhere between the two. Diegetic sounds, when associated with an image, communicate meaning that is to a greater or lesser degree codified, though the code is often far simpler and arguably more universally understood than language. For example, the creak of a staircase could be said to be ‘embodied’ in a sense, because it communicates the condition of the stairs as a foot is placed on them; yet through the semiotics of film sound language we may also perceive that sound as code for dread, say, or an expectation of something creepy about to occur, and thus as an ‘encoded’, more ‘bluish’ sound, illustrated well by the creaks of the old house in the Spanish ghost story ‘The Orphanage’. To this end Murch asserts that “the language of sound effects, if I may call it that, is more universally and immediately understood than any spoken language.” [21]
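To make this classification usable in a mix or an engine, each asset could simply be hand-tagged with a position on the spectrum. A minimal sketch, with positions that are illustrative guesses rather than values from Murch’s essay:

```python
# Hand-tagged spectrum positions: 0.0 = encoded (violet) .. 1.0 = embodied (red).
# All values are illustrative guesses a sound designer might assign.
SPECTRUM_POSITION = {
    "dialogue":         0.00,  # pure language: fully encoded
    "stair_creak":      0.45,  # embodied texture that also "codes" for dread
    "helicopter_drone": 0.70,  # music-like drone, mostly embodied
    "orchestral_score": 1.00,  # meaning embodied in the sound itself
}
```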
Of course, music bears a strong relationship to language, in that we naturally speak with a kind of musicality that lends emotion to what we say and can completely change its meaning: sarcasm, for example. In this way we can derive meaning from listening to voices speaking in a dialect different from our own, and we can certainly use that musicality in speech to understand the manipulated seal barks of the Star Wars character ‘Chewbacca’ or the synthesized chirps and whistles of ‘R2-D2’, as we relate them to the codes associated with the musicality of human dialects. We can quite clearly tell sad from happy, or anguish from worry, as depicted in the following clips. This idea of the left–right disparity is backed by the neuroscientist Feyza Sancar, who asserts: “Specifically, the right-brain auditory cortex specializes in determining hierarchies of harmonic relations and rich overtones whereas the left auditory hemisphere deciphers relationships between successions of sounds”. [24] The sequencing of sounds to form codes that the audience can decipher, the evidence would suggest, is therefore interpreted by the brain’s left hemisphere.

5.2 Creative Mixing According To Hemispheric Cognitive Functionality

Murch’s idea that music, or at least the way we experience it in film, is ‘embodied’ rests on the assumption that we are so accustomed to the codes that underpin the score that we do not need to decode it. He does concede that some music is ‘cooler’ than others, relative to the colour ‘red’ on the spectrum: the less complex and more familiar the underlying codes, the warmer the music. The more abstract and perhaps atonal the music, the more it drifts towards the ‘orange’ area of the spectrum. There is a surreal scene in David Lynch’s ‘Lost Highway’ where Bill Pullman’s character watches himself kill his wife on a videotape. The scene builds very slowly, and the instruments here are not only played inharmonically but further subverted by being recorded through microphones placed in vacuum-cleaner tubes, among other manipulations, casting this soundtrack in a very ‘orange’ light.

 

Similarly, this atonal ‘orange’ approach soundtracks the passages of ‘The Dark Knight’ in which the ‘Joker’ character is in the ascendancy in terms of plot trajectory. [clip 14]

This scene from ‘The Godfather’ pushes the ‘two point five’ rule to its limit, as all the sounds are ‘violet’ [clip 15], so any additional sounds would perhaps be unwise. The ‘Apocalypse Now’ example, however, illustrates the possibilities of increased textural depth when sounds are layered across this theoretical spectrum. [clip 16]

Murch indeed suggests that you could, in principle, have two and a half sounds for each colour of the spectrum. Though sustaining that amount of sound for long periods would probably not work in a dynamic sense, algorithms employed in a game engine that prioritized competing audio content according to parameters derived from this ‘Encoded – Embodied’ awareness could be advantageous in this respect. A sketch of such a rule follows.
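Here is a minimal sketch of what such an engine rule might look like, assuming each requested sound carries a hand-assigned spectrum position and a narrative priority. All names, caps and numbers are hypothetical, not an existing engine API:

```python
from dataclasses import dataclass

REGIONS = ["violet", "blue", "green", "yellow", "orange", "red"]

def region(position: float) -> str:
    """Map 0.0 (encoded) .. 1.0 (embodied) to a colour band, as in the earlier sketch."""
    return REGIONS[min(int(position * len(REGIONS)), len(REGIONS) - 1)]

@dataclass
class SoundRequest:
    name: str
    position: float  # 0.0 encoded .. 1.0 embodied
    priority: int    # higher = more important to the narrative right now

def select_layers(requests: list[SoundRequest],
                  per_region_budget: int = 2,
                  total_budget: int = 5) -> list[SoundRequest]:
    """Admit the highest-priority sounds while no colour band is overcrowded."""
    chosen: list[SoundRequest] = []
    counts: dict[str, int] = {}
    for req in sorted(requests, key=lambda r: r.priority, reverse=True):
        if len(chosen) == total_budget:
            break
        band = region(req.position)
        if counts.get(band, 0) < per_region_budget:
            chosen.append(req)
            counts[band] = counts.get(band, 0) + 1
    return chosen
```

Sorting by priority first means narratively critical sounds always win their band; the per-band cap is what keeps the mix from stacking up in any one colour.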

Murch’s impetus for developing this approach to mixing was his struggle with the helicopter attack scene in ‘Apocalypse Now’. He admits he stumbled upon the solution partly by luck and partly by design, helped by the fact that he happened to have grouped, or premixed, his individual sounds compatibly, “because my sounds were spread evenly across the conceptual spectrum.” [21]
Accordingly, the helicopter attack mix was broken down in the following way:

Dialogue (violet)
Small arms fire (blue-green ‘words’ which say “Shot! Shot! Shot!”)
Explosions (yellow “kettle drums” with content)
Footsteps and miscellaneous (blue to orange)
Helicopters (orange music-like drones)
Valkyries music (red) [21]

Referring to the breakdown above, we can infer that where Murch chose to run five simultaneous sounds, with all the layers playing concurrently, his choices would mainly have been between layers two and four, since these occupy much the same area of the spectrum. Decisions were also made according to narrative imperative. Later in the scene, one soldier refuses to get out of the helicopter, shouting “I’m not going! I’m not going!”, obviously terrified of joining the carnage outside. Already at five layers, Murch needed to remove something; he clearly had to keep the dialogue, the sounds of the chaotic war zone the soldier refused to enter, and the sounds of the helicopter, which represented relative safety. The choice was therefore made to drop the Wagner score for a few seconds. On one level this makes little sense, as the music is diegetically sourced from that very helicopter; but because other sounds rush in to fill the gap, and because of the temporary narrative focus, it is highly unlikely the audience would notice. Hence the concurrent layers for this segment (sketched in code after the list) are:

Dialogue (“I’m not going! I’m not going!”)
Other voices, shouts, etc.
Helicopters
AK-47’s and M-16s
Mortar fire. [21]
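Continuing the hypothetical select_layers() sketch from above, this moment can be expressed as the budget squeezing out the lowest-priority layer. The priorities below are invented for illustration:

```python
# The "I'm not going!" moment: six candidate layers, a budget of five.
candidates = [
    SoundRequest("dialogue_im_not_going", 0.00, priority=10),
    SoundRequest("other_voices_shouts",   0.10, priority=7),
    SoundRequest("helicopters",           0.70, priority=8),
    SoundRequest("ak47_m16_fire",         0.40, priority=6),
    SoundRequest("mortar_fire",           0.55, priority=5),
    SoundRequest("valkyries_music",       1.00, priority=4),  # lowest priority
]

print([s.name for s in select_layers(candidates)])
# The Wagner score is the layer that falls away, just as in Murch's mix.
```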

So Murch was fortunate that his sounds were evenly spaced out conceptually; in cases where density builds in one area of the spectrum, as Murch points out, the mixing options revert back to two point five. For example, even in the opening battle of ‘Saving Private Ryan’, which aims to be as realistic as possible [25], there is a high density of possible sound sources in the ‘blue-green’ area from weapons fire, and ‘yellow’ explosions, which if represented in their entirety would have created a ‘white noise’ effect: “this is the fine line that I think the film had to walk, visually and also sound wise, especially in the opening scene. You have to make it chaotic but also you still have to keep control of it so that you can have the audience hear what you want them to hear.” [21]

5.3 Possible Applications For Interactivity
If the sounds in a movie can be divided up according to this conceptual spectrum, then for the game audio programmer it perhaps offers a solution to the audio log jams that occur when multiple sound instances need to trigger at once. A battle from Dragon Age such as this [clip 17] illustrates the issue, featuring four playable characters and non-player characters triggering sound instances, as well as music.

If we were to apply this ‘Encoded – Embodied’ theory to this game snapshot, we could perhaps instruct the audio engine to allow only two pieces of combat dialogue, two footsteps, and a couple of armour/sword clashes. The mage character’s spell effects are almost musical, so we could allow a couple of those ‘orange’ sounds, and of course the non-diegetic music: many more simultaneous sounds than ‘two point five’ would allow, while hopefully still maintaining clarity.
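As a sketch, the same idea could be expressed as per-band voice caps in a mixer configuration. The category names and limits below are hypothetical, not Dragon Age’s actual audio settings:

```python
# Hypothetical per-band voice caps for the battle snapshot above.
VOICE_CAPS = {
    "violet_combat_dialogue":  2,  # encoded: barks, shouts
    "blue_green_weapon_foley": 2,  # footsteps, armour and sword clashes
    "orange_spell_effects":    2,  # the mage's near-musical spells
    "red_score":               1,  # the non-diegetic music
}

def can_play(category: str, active: dict[str, int]) -> bool:
    """Gate a new voice against its band's cap; unknown categories are refused."""
    return active.get(category, 0) < VOICE_CAPS.get(category, 0)

# Example: a third combat bark is refused while other bands stay in budget.
active = {"violet_combat_dialogue": 2, "orange_spell_effects": 1}
print(can_play("violet_combat_dialogue", active))  # False
print(can_play("orange_spell_effects", active))    # True
```

Seven concurrent voices are possible under these caps, yet because they are spread across the spectrum the mix should, on Murch’s account, remain legible.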
