For those unfamiliar with Brian Schmidt’s GameSoundCon, it is a two-day conference, packed from AM to PM with solid game audio information. Aimed at students as well as audio professionals in traditional media who want to learn more about the world of game audio, the conference is also entertaining, switching between single speakers, presentations, Q&A sessions and panels.
This year featured some new speakers, including Jay Weinland (Bungie), who showed off some of Bungie’s proprietary implementation tools. Much drooling commenced. Jay and Bungie have come up with a very simple but comprehensive audio engine. He showed off the audio team’s “big room,” a long corridor (the kind one might find in a Halo game) containing every type of surface material (wood, cement, metal, dirt, water, etc.) that would be impacted in the game. One could audition all sorts of guns, vehicles, footsteps and so on throughout the virtual sandbox to quickly understand what was and wasn’t working with the audio assets and their implementation.
Why is this important? With all of the different surfaces impacting each other in the game, it’s impossible to create an asset for every surface combination. Their system lets the team combine sounds with implementation to create audio asset formulas that cover the collision possibilities (instead of creating a hundred thousand different wav files!)
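To see why such a formula-driven approach scales, here is a minimal sketch (entirely hypothetical, not Bungie’s actual engine): each collision is composed at runtime from per-material layers, so N materials need only N layer assets instead of N×N pairwise wav files.

```python
from itertools import product

# Hypothetical material list; a real engine would reference wav banks.
MATERIALS = ["wood", "cement", "metal", "dirt", "water"]

def impact_event(a: str, b: str, velocity: float) -> list:
    """Compose an impact from each material's own layer plus a shared
    transient, instead of authoring one wav per material pair."""
    gain = min(1.0, velocity / 10.0)  # scale layers by impact velocity
    return [("transient", gain),
            (f"{a}_layer", gain),
            (f"{b}_layer", gain * 0.8)]  # struck surface slightly quieter

# 5 layer assets cover all 25 pairwise collision combinations
pairs = list(product(MATERIALS, MATERIALS))
print(len(MATERIALS), "assets cover", len(pairs), "combinations")
print(impact_event("metal", "wood", 6.0))
```

The point of the sketch is the asset math: authoring effort grows linearly with materials while coverage grows quadratically.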
While G.A.N.G. president Paul Lipson has spoken at GameSoundCon before, this year his coverage of creating interactive music scores was stretched over two hours to take the audience under the hood of the Iron Man 2 soundtrack.
In the actual game score (not including the cut-scene score), implementation is 50% of the challenge. Underscoring the idea that knowing how to write the music is only half the battle when creating an interactive score, Paul demonstrated all of the FMOD iterations his team went through during the process. For a film and/or TV composer, this is a lot of new information.
Why so many FMOD iterations? A lot changes during the cycle of a game, especially a large console title. Much of what changes is the memory, CPU and streaming allowance given to audio and music, so the audio team and composer(s) must be able to flex with those changes. Each time the team tried a new FMOD solution, they reduced its footprint by half, until the third version finally met the game specs.
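That halving cycle can be illustrated with a toy calculation (the numbers below are invented for illustration; the talk did not give specific figures):

```python
def versions_to_fit(start_mb: float, budget_mb: float) -> int:
    """Count bank revisions, halving the memory footprint each time,
    until the audio fits the budget the game team allows."""
    version, size = 1, start_mb
    while size > budget_mb:
        size /= 2.0
        version += 1
    return version

# e.g. a 16 MB music bank against a 4 MB budget fits on the 3rd version
print(versions_to_fit(16.0, 4.0))  # -> 3
```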
Leonard Paul gave a great talk on procedural audio (real-time synthesis, granulation, prototyping, etc.), and if you happen to be going to the London AES show, he will be giving a talk there as well. His talk, and many of his articles, can be found at his site, http://videogameaudio.com. He just finished work on a new title, Retro City Rampage, which has been getting a lot of positive attention.
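For readers new to the topic, granulation rebuilds a sound from many short, overlapping “grains” of a source sample. A minimal pure-Python sketch of the idea (not Leonard Paul’s code; grain sizes and counts are arbitrary):

```python
import math
import random

def granulate(source, grain_len=64, hop=32, n_grains=8, seed=1):
    """Layer short, Hann-windowed grains taken from random positions
    in the source -- the core move of granular synthesis."""
    rng = random.Random(seed)
    out = [0.0] * (hop * n_grains + grain_len)
    window = [0.5 - 0.5 * math.cos(2 * math.pi * i / (grain_len - 1))
              for i in range(grain_len)]
    for g in range(n_grains):
        start = rng.randrange(0, len(source) - grain_len)
        for i in range(grain_len):
            out[g * hop + i] += source[start + i] * window[i]
    return out

# granulate a short 440 Hz test tone (0.1 s at 44.1 kHz)
tone = [math.sin(2 * math.pi * 440 * t / 44100) for t in range(4410)]
grains = granulate(tone)
print(len(grains))
```

Varying grain length, hop and source position is what lets a single recording yield textures that stretch, freeze or scatter in real time.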
I’m only covering the tip of the iceberg with GameSoundCon, as there were fabulous talks given by Scott Selfon (Microsoft), Tom Salta, and an entertaining panel with Jesse Harlin (LucasArts), Jason Hayes (Carbine) and Paul Lipson. The crew from FMOD was also there, giving informative demos of the latest version of FMOD Designer. I still haven’t even mentioned AES!
At the 129th AES show in San Francisco, Moscone Center was filled with all of the latest audio gear, again causing random puddles of drool in most corners of the show floor. Gear lust was in full swing.
In addition to the show floor, there was a healthy number of game audio talks this year, and although many of us reserve the game audio talks for GDC every year, AES featured many wonderful speakers and networking opportunities.
There were five days of audio talks, and every day 4–6 of them were aimed at game audio. GANG’s own Jeff Schmidt joined the Harmonix team to explain Rock Band Network authoring to a packed room. Another GANG member, Jeff Essex, kicked off the AES game audio track with a very informative talk on iOS audio (also to a packed room).
Steve Horowitz refereed (literally, referee jersey and all) the Game Audio Cage Match between Peter Drescher and Larry the O, which featured spirited topics like “Does audio quality really matter to people?” and “Why do game soundtracks have to suck?” Horowitz also led the “Careers in Game Audio” discussion, with a panel that included GANG vet Lennie Moore, covering the rise of new game audio college programs across the world.
Damian Kastbauer (The Force Unleashed 1 & 2), Jay Weinland (Halo) and Stephen Hodde (Volition) chaired the Physics Psychosis panel, discussing the implementation of impacts and explosions in game audio, and both the approaches and tools they used.
For the technically oriented, there were the “Code Monkey” panels, each discussing different languages (C++, XML, Lua, etc.) and their specific uses within game audio applications.
Charles Deenan led a panel discussion about creating game audio reference standards, similar to what has been established in TV and film. More info on reference standards can be found (and contributed to) at http://iasig.org. Deenan also took part in the Game Industry Overview panel with Adam Levenson (Activision), Lance Brown and Marc Shaefgen, discussing the game development process, how game audio differs from audio in other media, and more.
Another fun session was “Audio Shorts”; in its third part, panelists discussed their favorite audio plug-ins. Jay Weinland revealed his extensive use of Speakerphone 2 on Halo, Kristoffer Larson (WB Games) demonstrated the wonders of the Synplant synth, and Charles Deenan explained his use of Waves C6 for matching various sound sources.
Again, this is only scratching the surface, but apart from seeing friends, making new friends and talking shop all week, the highlight was definitely the talk given by Ben Burtt. Even if you don’t know his name, you definitely know his work: Burtt created the sound design for the Star Wars films (Chewie’s growls, R2-D2’s expressive beeps, and the iconic lightsaber). In many ways, Burtt (along with contemporaries like Walter Murch) created the template for today’s sound designers. Hearing him speak about his process (“The successful sound designer injects his/her personality into every sound with their performance”), tell his stories and reveal some unearthed audio/video was a treat I won’t forget.
Written exclusively for audiogang.org by Dren McDonald