BREAKING NEW GROUND – Jesse Harlin (September 2008)


The STAR WARS: THE FORCE UNLEASHED team called me into their conference room to look at a tech demo that would later debut at E3. At the time, the demo consisted of a large hall containing panes of glass, a brick wall, and a wooden beam ceiling. The demo’s driver proceeded to throw Stormtroopers at everything in the room, and I watched bricks crack, wood splinter, and glass shatter, each time different from the one before. The technology at the heart of this new breakable system is known as Digital Molecular Matter, or DMM, a physics-based materials simulator developed by Pixelux Entertainment. Its goal is to remove last-generation art-swap breakables while allowing for new interactive materials such as bending metal, rubbery plants, or melting ice.

The technology looked impressive, but was completely silent. Each collision resulted in hundreds of thousands of variables and fragmented the original materials into everything from enormous hunks of matter down to invisible splinters. Facing endless variations, the audio team needed a solution that could sell DMM’s realism without exceeding memory budgets.

The first challenge was to decide how to generate the wide variety of possible sounds for DMM. One early thought was to approach sound for procedural matter from the data-driven realm of physical modeling synthesis. However, this was quickly ruled out: as of 2005 the field remained largely academic, meaning DMM would instead have to be tackled using thousands of unique audio recordings.

At first, we attempted a literal approach and scored DMM breakables with combinations of hundreds of tiny sounds. Splinters made splinter sounds, shards sounded like shards, and every chunk of material made its own chunk sounds. After spending a couple of weeks doing material source recordings and tweaking the implementation, the end result was a completely unrealistic mess. It sounded like what it was—hundreds of disparate pieces of wreckage knocking together in front of a microphone.

The solution was to simplify and edge towards a hyper-realistic sound representation akin to that of film post-production. When something shatters, the brain does not process every last shard hitting the floor. Instead, the brain experiences a cacophonous impression of chaos defined by the behavior of an undetermined number of non-uniform pieces of debris.

The DMM engine consisted of 350 different material types, which we were able to pare down to 20 DMM sound materials, such as “organic_hard” or “metal_strong_hollow.” At a macro level, DMM gave us three main behavior categories: collision, fracture, and bend. For instance, hitting a DMM wood prop might make it collide and fracture, while hitting a metal prop might make it collide and then bend. Additionally, all DMM materials came in a range of small, medium, and large sizes. Of all the materials, glass was the hardest to manage due to the thousands of small particles generated by each shattered pane.
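As a rough sketch, the pared-down material table might look something like the following. Only the sound-material names (“organic_hard,” “metal_strong_hollow”), the three behaviors, and the three sizes come from the article; the engine-side type names and the cue-naming scheme are invented for illustration.

```python
from enum import Enum

class Behavior(Enum):
    COLLISION = "collision"
    FRACTURE = "fracture"
    BEND = "bend"

class Size(Enum):
    SMALL = "small"
    MEDIUM = "medium"
    LARGE = "large"

# Hypothetical mapping: many engine material types fold into one sound material.
# In the shipped game, roughly 350 engine types mapped to about 20 sound materials.
ENGINE_TO_SOUND_MATERIAL = {
    "oak_plank": "organic_hard",
    "pine_beam": "organic_hard",
    "steel_girder": "metal_strong_hollow",
    "ventilation_duct": "metal_strong_hollow",
}

def sound_cue_name(engine_material: str, behavior: Behavior, size: Size) -> str:
    """Build a cue name like 'organic_hard_fracture_medium' (naming is illustrative)."""
    sound_material = ENGINE_TO_SOUND_MATERIAL.get(engine_material, "generic")
    return f"{sound_material}_{behavior.value}_{size.value}"
```

Folding hundreds of engine types into a few dozen sound materials is what keeps the cue count, and therefore the memory budget, manageable.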

When it came time for implementation, each game level had its own .xml file that detailed all potential collision permutations applicable to only that level. In practice, fractures and bending did not require that any data be kept for material-on-material relationships, since each dealt with only a single material. While collisions and fractures were essentially instantaneous sounds, bending necessitated the use of bending loops, on average 3–4 seconds long and variable depending upon the size of the bending object, which were then augmented by banks of up to 15 randomized sweeteners. With collision, fracture, and bending behaviors figured out individually, the next step towards the rich realism we wanted from DMM came when we began combining behaviors, allowing for instances where large bending doors could scrape or bang against the dirty ground.
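A minimal sketch of the bending-loop selection described above, under stated assumptions: the per-size loop lengths, the draw of one to three sweeteners, and all names below are illustrative, not LucasArts' actual engine API. Only the 3–4 second average and the banks of up to 15 sweeteners come from the article.

```python
import random

# Hypothetical loop lengths per object size; the article cites 3-4 seconds
# on average, varying with the size of the bending object.
BEND_LOOP_SECONDS = {"small": 3.0, "medium": 3.5, "large": 4.0}

def play_bend(size: str, sweetener_bank: list, rng=random):
    """Pick a bending loop for the object's size, then layer on a random
    handful of sweeteners drawn from a bank (banks held up to 15)."""
    loop_seconds = BEND_LOOP_SECONDS[size]
    count = rng.randint(1, min(3, len(sweetener_bank)))
    return {
        "loop_seconds": loop_seconds,
        "sweeteners": rng.sample(sweetener_bank, count),
    }
```

Randomizing which sweeteners ride on top of the loop is what keeps repeated bends from sounding identical without storing more loop recordings.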

With hundreds of sounds now triggering per DMM behavior, the last piece of the puzzle was a three-tiered system of voice instance limiting. The audio engine first let us limit at the cue level, setting a maximum number of times each cue could trigger per frame. Then, supplementary instance limiting was added to the DMM-specific sub-bus in the game’s main audio mixer. Lastly, because the PS3 had fewer available audio channels than the Xbox 360, we added an additional priority-based limiting system to help sort out the most important elements of each moment in-game.
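The three tiers above might be sketched like this. The caps, priorities, and class names are illustrative assumptions; only the structure (per-cue cap, then sub-bus cap, then priority-based channel culling) comes from the article.

```python
from collections import defaultdict

class VoiceLimiter:
    """Illustrative three-tier voice instance limiter: per-cue caps per
    frame, a cap on the DMM sub-bus, then priority-based channel culling."""

    def __init__(self, per_cue_cap: int, bus_cap: int, hw_channels: int):
        self.per_cue_cap = per_cue_cap   # tier 1: max triggers per cue per frame
        self.bus_cap = bus_cap           # tier 2: max voices on the DMM sub-bus
        self.hw_channels = hw_channels   # tier 3: platform channel budget

    def resolve(self, requests):
        """requests: list of (cue_name, priority) pairs for one frame.
        Returns the requests actually allowed to play."""
        # Tier 1: drop triggers beyond each cue's per-frame cap
        per_cue = defaultdict(int)
        survivors = []
        for cue, prio in requests:
            if per_cue[cue] < self.per_cue_cap:
                per_cue[cue] += 1
                survivors.append((cue, prio))
        # Tier 2: cap the sub-bus as a whole, first come first served
        survivors = survivors[: self.bus_cap]
        # Tier 3: keep only the highest-priority voices the hardware can afford
        survivors.sort(key=lambda r: r[1], reverse=True)
        return survivors[: self.hw_channels]
```

The priority tier is what makes a tighter channel budget (as on the PS3) degrade gracefully: the least important voices are the ones culled first.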

Not every object was infused with DMM, though. Anything that could be picked up and thrown with The Force, non-deformable objects like walls and floors, and enemy bodies all used the Havok physics engine. In conjunction with the Euphoria AI engine, Havok also fueled our foley and footstep systems.

Havok had its own matrix that dealt with Havok-to-Havok object material collisions. Like DMM, Havok also allowed for the inclusion of small, medium, and large sound categories plus three levels of hit sensitivity. With these three hit sensitivities, a thrown object might bang hard against a wall, fall to the floor with a medium intensity, and then settle itself with a soft sound. Lastly, our proprietary audio engine allowed for standard audio parameters such as volume and pitch randomization, distance-based fall-off, and another level of Havok-specific instance limiting.
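A rough sketch of how impact speed might map to the three hit sensitivities, with the standard volume and pitch randomization layered on top. The thresholds and variation ranges are invented for illustration; only the hard/medium/soft tiers and the volume/pitch randomization come from the article.

```python
import random

def hit_sensitivity(impact_speed: float) -> str:
    """Map an impact speed (hypothetical units and thresholds) to one of
    the three hit sensitivities: a hard bang, a medium fall, a soft settle."""
    if impact_speed > 8.0:
        return "hard"
    if impact_speed > 3.0:
        return "medium"
    return "soft"

def randomized_playback(base_volume: float = 1.0, base_pitch: float = 1.0, rng=random):
    """Apply standard per-trigger volume and pitch randomization
    (variation ranges are illustrative)."""
    return {
        "volume": base_volume * rng.uniform(0.85, 1.0),
        "pitch": base_pitch * rng.uniform(0.95, 1.05),
    }
```

A thrown object decelerating through these thresholds naturally produces the hard-bang, medium-fall, soft-settle sequence described above.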

In the end, by combining intelligent instance limiting with multiple interlocking systems of physics-based collision and materials behavior detection, the result is a richly detailed world full of breakable materials that never sound exactly the same way twice.