MAKE SOMETHING EPIC

WHAT IT’S LIKE TO DESIGN IMMERSIVE SOUND FOR LARGE SCALE VR

An earlier version of this interview appeared in OSSIC.

If you made it to either GDC or SVVR, chances are you’ve heard of Chanel Summers. She does it all in audio. In addition to visiting and meeting with students at USV, she is a lecturer at USC, co-founder of the celebrated audio design company Syndicate 17, and an accomplished drummer. She is a strong voice in the game audio scene, where she advocates that sound designers shouldn’t lose sight of great design, storytelling, and creativity in an age when technology and tools keep growing and evolving.

Needless to say, we are more than grateful that she took the time to speak with us. Before she took off for SVVR, we had a short conversation.

Where are you currently working?

My company, Syndicate 17, is a unique audio production, design, and implementation company that specializes in helping highly innovative companies build industry-changing products with a really fresh and distinctive style through interactive, dynamic, and immersive audio. We work on very unusual, abstract, and ground-breaking projects.

Can you tell me about any of the projects you worked on? 

One that we just completed was a large-scale VR project with VRstudios for Knott’s Berry Farm. It is called “VR Showdown in Ghost Town,” and it launched on April 1st. It is the first permanent free-roaming VR experience at a U.S. theme park.

In this interactive VR experience, players are transported to a futuristic version of Knott’s western town of Calico, where they have to defend the town from swarms of attacking robots. We brought on Hexany Audio to collaborate on this project and I think the results are exceptional.

Some of our other recent work includes audio for MediaMation’s “REACTIVR”, a really cool VR racing game that was first shown at IAAPA 2015, “Leviathan” for Intel, which was an Official Selection at Sundance New Frontiers Festival 2016, and the VR experience “The Repository” for Universal Studios Orlando’s 2016 Halloween Horror Nights.

What are some of the things you learned working on this most recent project?

One thing we learned more about was the choreography of the audio. If you are creating a fictional world, you need to build a really immersive environment where people feel like they are there, inside the story.

As I mentioned, in this project for Knott’s Berry Farm, players are transported to the future to defend a western town called Calico. Very early in the design process, we had to consider how we would approach building the space so that, throughout the process, it would remain coherent, consistent, and cohesive within the story and game space. We needed to be able to use audio to create a truly immersive environment and make the players believe they were actually transported to this future western town. But we also had to supply players with very strong audio cues so that they could play the game in this super sound-rich environment.

So, we had to strike a balance between creating effective audio that satisfied basic gameplay requirements and building a soundscape that worked well and was also cohesive within the world. We needed to supply the players with auditory cues so they could play the game while also making them feel like they’re truly in some futuristic world that was somehow transported from the Wild West.

What kind of challenges did you face?

A major challenge was setting the priorities for the audio tracks. We had to ask, “What are we missing in this environment, and what are the important sounds the players need to hear? What needs to be huge and more cinematic, and what can be pushed back in the audio mix?”

For example, in the Knott’s Berry Farm experience, these “melee robots” run up and storm the players while lots of other things are going on around them.

Getting the makeup of the sounds just right, so players could actually hear the robots sneaking up on them without being overwhelmed, was crucial. Their footsteps and the wind-up sounds for their laser batons weren’t cutting through the mix with all the sonic blaster fire between the four players. Those sounds were too subtle and were getting masked and overwhelmed when players were shooting rapidly. So the team had to think about what a better, more attention-getting sound would be. The requirement was that it needed to be very clearly positional, alerting the players to look in that direction. We went with these sort-of robotic growls, which did the trick!

Not all sounds work well as audio cues, and not all sounds spatialize really well. So it took some time going back to the drawing board to create the right kinds of sounds that could spatialize well and cut through this super-rich mix. That was a learning process, with a fair amount of experimentation.
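To make the idea of “pushing sounds back in the mix” a little more concrete, here is a minimal, purely illustrative sketch of priority-based ducking, where lower-priority buses are pulled down whenever a higher-priority cue is playing. The bus names, priority values, and ducking amount are assumptions for the example, not the actual setup used on the Knott’s project.

```python
# Illustrative sketch only (not the actual Knott's implementation): one simple
# way to keep high-priority positional cues audible is to duck lower-priority
# buses whenever a higher-priority cue is active.

from dataclasses import dataclass

@dataclass
class Bus:
    name: str
    priority: int      # higher = more important for the player to hear
    base_gain: float   # nominal mix level, 0.0-1.0

def mix_gains(buses, active_cue_priority, duck_amount=0.5):
    """Return per-bus gains, ducking anything below the active cue's priority."""
    gains = {}
    for bus in buses:
        gain = bus.base_gain
        if bus.priority < active_cue_priority:
            gain *= duck_amount  # push less critical sounds back in the mix
        gains[bus.name] = gain
    return gains

buses = [
    Bus("ambience", priority=1, base_gain=0.8),
    Bus("player_blasters", priority=2, base_gain=1.0),
    Bus("melee_robot_cues", priority=3, base_gain=1.0),
]

# While a melee-robot approach cue (priority 3) is playing, everything
# lower-priority is pulled back so the cue can cut through.
print(mix_gains(buses, active_cue_priority=3))
```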

It is very important to test and see where people are looking and how they respond to your audio cues, and then make adjustments based on that data. It is very difficult to theorize about and predict people’s responses in such a complex soundscape. Multiple iterations of the design–test–modify–test process are mandatory to achieve a high-quality, effective experience.

The result was a wonderful balance of environmental and gameplay immersion.

So this leads pretty well into my next question. What has it been like seeing the gaming industry change over the last 20 years?

I definitely have a lot of memories of technology from the early days, when most sounds were bleeps and bloops, and sound designers needed to be extra creative to make audio that sounded acceptable, given the technological limitations. But even with things improving, what should always remain is a constant focus on great design. Better technology will not cover up a lack of creativity.

Earlier in your career, you started as a game designer, not focused on audio. When did you make the switch to focus more on audio design?

To me, I still kind of do it all because sound design is design. We are all designers. Sound design and composition is all a part of game design and game production. There’s not this big separation. We are all creating a wonderful experience. When I first came to Microsoft, I started as a program manager on a game called “Fighter Ace.” This was the time when Microsoft was really starting its push into video games.

During that time, though, Microsoft had acquired a new technology, and I saw a lecture on it. That technology would become DirectMusic, which was the beginning of this era of interactive audio technology, where the music would change based on what was happening on screen. It was the most amazing thing I had ever seen. You know, I saw white lights and the angels sang to me, and at that point, I knew I had to work on this technology.

What was the reasoning behind the switch?

I had always put a lot of thought into the gameplay aspects of audio when I was doing game production and design. It was so important to me, and I couldn’t understand why other people never thought about it as much, or why it was just tacked on at the end, or even wallpapered into the game. So that was the pivotal moment when I knew I had to make my transition into audio.

After we shipped “Fighter Ace,” I transferred to work on DirectMusic. Microsoft actually created a brand-new role for me to step into, Audio Technical Evangelist, and that was when I started working with audio, helping to develop and promote DirectMusic as well as other DirectX Audio technologies like DirectSound3D. And then I went on to work on the original Xbox, helping to co-design its audio subsystem to be like a hardware version of the software technologies we’d been building.

What have you taken from visual game design and used in your audio work and vice-versa?

Sound design and composition are all about achieving the same things as game design and game production. It is the same thing, actually, in a way. Really good audio design is about storytelling. It is about conveying an experience — that has never gone away. That is always the most important role for any aspect of audio creation.

I’ve actually taken things from game audio and brought them back into game design and development: devices like how frequencies can be used to manipulate human physiology, how frequencies can communicate such emotion and such feeling, and how, with frequencies alone, you can create these wonderful experiences.

I’ve seen from your website that you love to play drums and have contributed to a number of albums. How has creating music affected your game audio work?

I have always tried to be a very musical drummer, or what they call a songwriter-drummer, who serves the needs of the song and doesn’t call attention to themselves by being too busy or throwing in drum fills everywhere. It is about supporting the song and doing things that are appropriate to make the song better. My work with game audio should be the same way. I should be able to create an audio experience that doesn’t call attention to itself, but instead, serves the game and the narrative.

Great game audio should reinforce or drive the experience. I bring that philosophy of being a songwriter-drummer to game audio in order to create immersive, effective, and aesthetic auditory environments.

The other thing is that to me, everything is about rhythm and how that can be applied to audio design. How can I create interesting textures from each sound element in order to build out more immersive environments?

That’s so true. Because even in music, the spaces where you don’t have a downbeat are just as important as the upbeat. The places where you don’t have sound are just as important as the spots with sound.

That is right! Silence is very important, and great audio experiences will employ silence and strive for a wide dynamic range. You want to have some beautiful wide spectrum frequency layers, you want textures that are mixed well, and you want moments of tension and release.

Definitely. And is there any particular title or experience that you can point to that does this well?

One that I am particularly proud of is this really amazing custom experience we did for Universal Studios Orlando in 2016 called “The Repository,” as part of their Halloween Horror Nights event. It was the 10th “secret” haunted house and this project combined custom VRstudios VR technology with immersive theater. It was an incredibly immersive attraction that seamlessly combined VR, highly themed physical environments and objects, live actors, 4D effects, and show control into an interactive paranormal experience where guests worked together in teams of four to uncover a paranormal mystery. Audio, both sound and music, played a major role in the experience, helping to convey the narrative, using both rhythms and silence to convey clues.

You were recently on a panel at SVVR about whether to spatialize or not to spatialize. Why is spatial audio important in VR and gameplay?

Since you are inside the experience and not just detached from it, you need some things to be spatialized in order to give the world depth, with individual ambient world sounds staying correctly positioned as you rotate your head, so you always feel a sense of direction and depth within the world. Also, if you are creating a VR game, it is crucial to have spatialized audio cues in order to play the game effectively. The effectiveness of the gameplay is greatly reduced if the players don’t look where they need to look at any point during play.

But I want to add that not all audio in a VR environment needs to be spatialized. I believe that in a VR experience, audio has to take kind of a hybrid approach, where some audio is spatialized while the rest can be in simple stereo. For instance, there may be sounds that are static or head-relative. If you are going to have user interface sounds, those most likely should be 2D. Same goes for a musical score; that would probably be best in 2D unless there is a physical source for the music in the game world. And with low-frequency sounds, it’s harder to tell where they are emanating from. Low end is good for the feeling of a sound, for affecting physiology, and for giving an object weight, presence, and size! Sounds that are primarily low frequency, like an energy pulse or a rumble, are well suited to being stereo sounds.

As an example of the hybrid approach, on Knott’s “VR Showdown in Ghost Town,” 2D looped stereo ambiences that we did not feel required spatialization, such as the general ambience, were mixed in with individual 3D mono spatialized sounds, and it worked quite well and felt very natural. The wind and general ambience were not designed as a quad array with emitters placed around the listener in all directions, as we didn’t find that necessary for this experience.
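As a rough illustration of that hybrid 2D/3D routing idea, the decision can be thought of as a simple classification step per sound. The sketch below uses made-up sound names, categories, and rules, not the project’s actual asset list or any real middleware API.

```python
# Illustrative sketch of the hybrid 2D/3D approach described above.
# Sound names, categories, and routing rules are assumptions for the example,
# not the actual asset list or middleware setup from the project.

from dataclasses import dataclass

@dataclass
class Sound:
    name: str
    category: str            # e.g. "ui", "music", "ambience_bed", "world"
    low_frequency: bool = False

def should_spatialize(sound: Sound) -> bool:
    """Return True if the sound should be a 3D mono spatialized emitter."""
    if sound.category in ("ui", "music", "ambience_bed"):
        return False          # head-relative or bed content stays 2D stereo
    if sound.low_frequency:
        return False          # low end localizes poorly; stereo is fine
    return True               # point-source world sounds get 3D spatialization

sounds = [
    Sound("score_loop", "music"),
    Sound("menu_confirm", "ui"),
    Sound("town_wind_bed", "ambience_bed"),           # looped stereo ambience
    Sound("energy_pulse", "world", low_frequency=True),
    Sound("melee_robot_growl", "world"),              # must be clearly positional
]

for s in sounds:
    mode = "3D mono spatialized" if should_spatialize(s) else "2D stereo"
    print(f"{s.name}: {mode}")
```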

Thank you, that is a really good answer. It goes back to this idea of creativity vs. technology. An audio designer should be very aware of what is needed and not let the technology override what they are creating.

Exactly.
