THE MAGIC OF MISDIRECTION – Jesse Harlin (November 2009)

WORKING AROUND AUDIO ENGINE LIMITATIONS

AUDIO IMPLEMENTATION is like a magic show. With limited resources and a bag of clever tricks, audio implementation gurus can convince a willing audience that anything they hear is real. The more limited the resources are, the more clever the tricks need to be.

Like magicians, audio implementers rely on the act of misdirection. In magic, misdirection is commonly defined as making an audience look at the wrong thing at the right time. The same practice holds for audio implementation. Audio implementers frequently find themselves needing to misdirect a listener’s attention with flashy tricks and loud noises as they finesse smaller, more subtle changes in the game’s audio while the audience is distracted. When performed well, the audience has no idea they’ve been bamboozled.

SLEIGHT OF HAND »
Instance limiting, dynamic loading, and intelligent systemic stream management can go a long way toward saving memory budgets and making life easier on audio implementers. However, implementers can’t count on these luxury features being available in every game they tackle. The casual game market has exploded via Flash-based web sites and the Nintendo DS, and for both, audio file sizes remain a crippling limitation. Apple’s iPhone has indisputably become a major gaming device, but it carries constraints other consoles don’t, such as allowing only a single hardware-decoded stream. Meanwhile, some higher-end games are still driven by legacy audio engines whose limitations are leftovers from the last console cycle.
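Where an engine does offer such luxuries, instance limiting, for example, is conceptually simple: cap how many copies of a given sound may play at once and cull requests past the cap. The sketch below is a hypothetical illustration assuming nothing beyond the standard library; VoiceLimiter and its methods are invented names, not any shipping engine’s API.

    // Hedged sketch of instance limiting: refuse a new voice for a sound
    // once its per-sound cap is reached. All names here are illustrative.
    #include <cstdio>
    #include <map>
    #include <string>

    class VoiceLimiter {
        std::map<std::string, int> active_;  // live voice count per sound name
        int maxPerSound_;
    public:
        explicit VoiceLimiter(int maxPerSound) : maxPerSound_(maxPerSound) {}

        bool tryPlay(const std::string& sound) {
            int& count = active_[sound];
            if (count >= maxPerSound_) return false;  // culled: cap reached
            ++count;
            std::printf("play %s (voice %d)\n", sound.c_str(), count);
            return true;
        }
        void onVoiceFinished(const std::string& sound) {
            if (active_[sound] > 0) --active_[sound];
        }
    };

    int main() {
        VoiceLimiter limiter(2);     // at most two instances of any one sound
        limiter.tryPlay("gunshot");
        limiter.tryPlay("gunshot");
        limiter.tryPlay("gunshot");  // rejected, keeping voices and memory bounded
    }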

These platform constraints are just some of the limited frameworks within which audio implementers are forced to work. Indeed, part of the job of audio implementation is making these limitations seemingly disappear into thin air while still providing a rich audio experience under tight restrictions. This is where misdirection becomes most crucial.

Whether on the Wii, the iPhone, or the PlayStation 2, many games find themselves at the mercy of only a single available stream for audio. Though saddled with limited tech, design teams still expect rich audio worlds and dynamic sound experiences. Mixing ambiences and music together into a single streaming file is a start. However, problems arise when it comes time to change from one file to the next. Without a second stream, there’s no ability to crossfade. As such, the existing stream needs to fade out to silence (zero gain) before the new stream can start, creating an audible gap of silence between the two streams.

While the implementer has no control over the existence of the gap, they can control when the gap occurs. Changes in stream playback can be made during pre-scripted events that call the players’ attention to other elements of the game. Look to hide stream changes behind reliably scripted instances of explosions, elaborate animations, or dialogue-driven camera cuts. Pull attention away from the stream change by masking it under interesting or louder one-shot instances such as opening and closing doors, elaborate machinery sounds, or item and power-up pick-up sounds. The key is to be positive that your misdirection sounds will always play at the moment you need them, so dig into the gameplay scripts the level designers have created and look for reliable candidates for misdirection.
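As a concrete illustration, the swap logic can be wrapped so the masking one-shot always fires before the single stream is released. The sketch below uses invented names (AudioEngine, playOneShot, fadeOutStream, startStream) standing in for whatever an engine actually exposes; a real engine would defer the fade callback until the stream truly reaches silence.

    // Minimal sketch: hide a single-stream swap behind a scripted one-shot.
    // AudioEngine is a logging stand-in, not any particular engine's API.
    #include <cstdio>
    #include <functional>
    #include <string>

    class AudioEngine {
    public:
        void playOneShot(const std::string& sfx) {
            std::printf("one-shot: %s\n", sfx.c_str());
        }
        // Ramps the current stream to zero gain, then reports silence.
        void fadeOutStream(float seconds, std::function<void()> onSilent) {
            std::printf("fading stream out over %.2fs\n", seconds);
            onSilent();  // a real engine would defer this until the fade completes
        }
        void startStream(const std::string& file) {
            std::printf("stream start: %s\n", file.c_str());
        }
    };

    // Called from a level script at a reliably loud moment
    // (door slam, explosion, camera cut).
    void swapStreamBehindMask(AudioEngine& audio,
                              const std::string& maskSfx,
                              const std::string& nextStream) {
        audio.playOneShot(maskSfx);            // cover sound fires first
        audio.fadeOutStream(0.25f, [&audio, nextStream] {
            audio.startStream(nextStream);     // the gap hides under the one-shot
        });
    }

    int main() {
        AudioEngine audio;
        swapStreamBehindMask(audio, "door_slam", "amb_warehouse_music.str");
    }

The detail that matters is ordering: the cover sound starts before the fade, so the silent gap lands under the loudest part of the masking event.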

SMOKE AND MIRRORS »

In-engine cinematics present their own set of challenges. They frequently make use of dedicated suites of animations, characters, and story-critical dialogue. Rather than loading a slew of files into resident memory for only those moments, pre-scored streaming files can be an effective way to fake detailed audio implementation.

Unfortunately, these sequences are notorious for fluctuating frame rates, which cause the animations to drift out of sync with the pre-rendered audio. The ideal solution is to slave the visuals to the audio, but this isn’t always achievable for tech reasons. In these cases, implementers can work with the sound designers to break the single audio file down into smaller constituent chunks. These chunks can then be anchored to scripted events throughout the cinematic as a means of periodically reestablishing sync. By planning the splices at moments when single one-shot misdirection sounds can be triggered, the listener’s attention is drawn away from the joins, and they never sense the trick performed before their unsuspecting ears.
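One way to picture the chunk-anchoring is a small cue table mapping scripted events to the audio chunk spliced at each point. The CuePlayer below and its event names are hypothetical; in practice each trigger would also fire a masking one-shot to cover the splice, as described above.

    // Sketch of re-syncing a cinematic by splitting its audio into chunks
    // keyed to scripted events. Types and names are invented for illustration.
    #include <cstdio>
    #include <map>
    #include <string>

    class CuePlayer {
        // Maps a scripted event name to the audio chunk spliced at that point.
        std::map<std::string, std::string> chunks_;
    public:
        void addSplice(const std::string& event, const std::string& chunkFile) {
            chunks_[event] = chunkFile;
        }
        // Called by the cinematic script each time a trigger fires.
        void onScriptedEvent(const std::string& event) {
            auto it = chunks_.find(event);
            if (it == chunks_.end()) return;
            // Restarting the stream here re-anchors sync; any drift that
            // accumulated since the last splice is discarded.
            std::printf("splice at '%s': start %s\n",
                        event.c_str(), it->second.c_str());
        }
    };

    int main() {
        CuePlayer cues;
        cues.addSplice("door_breach",  "cine01_part2.str");
        cues.addSplice("camera_cut_3", "cine01_part3.str");
        cues.onScriptedEvent("door_breach");  // sync reset under the breach SFX
    }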

As in magic, audio misdirection takes careful planning and succeeds only if the audience never notices its inclusion in the act. The key to sneaking it past the audience’s detection is reliable audio scripting and a firm understanding of the specific audio engine’s limitations. With a few tricks and a knack for making the audience listen to the wrong thing at the right time, audio implementers can find clever solutions to their limited tech options, keep their development teams happy, and wow their listeners with magic.