Audio Engineer, Producer & Songwriter…

AIA Final Project

Posted on Mar 31, 2013 in Full Sail University, Videos

by: Blake James Sumrall

Aesthetics: Creating the sound for the map.

This project is a continuation of work from my last blog post, “Sound Design for a First Person Shooter Weapon”. My goal was to maintain the same direction and vibe that I established when creating the sounds for the weapon: simple and as organic-sounding as possible. This thought process led me back to Darren Korb’s “Bastion”, my initial inspiration. I really wanted to follow some key concepts that Darren used in Bastion and apply them to my sound design for this map.

Narration. As I explored the map for the first time, I wondered if it would be possible to do some kind of interactive narration, like you might find in Bastion. Immediately, I got the vibe that the map was set in a Russian communist structure from the World War II era. I called up a voice actor friend of mine, DJ Souza, and asked if he would be willing to help me record some dialog to narrate my map in a Russian accent. Ideally, I would have preferred to record his dialog with a high-quality vocal mic. However, given the time constraints of the project and the fact that my voice actor was 2,000 miles away, I decided to run my iPhone directly into Logic and simply record our entire conversation as we went over the lines. This seemed to be the most efficient way to get what I needed, and I worked the phone-style EQ and distortion into the character of the implemented dialog.

The concept behind the narration was that I wanted to create an invisible character, a voice inside the user’s head. I wanted the voice to be more of a commentary than a narration, really, so that it offset the dark mood of the visuals in the map. The map seemed to mix a realistic design style with a bit of the unreal by including alien-like futuristic weapons. This initially felt odd to me, so I wanted to glue the map to the weapons by implementing some dry humor and sarcasm. Basically, this map could easily be mistaken for a realistic World War II style map, but futuristic weapons and robots don’t fit that impression at all. I believe the personality I created in the narrator balanced the contrast between realistic and unrealistic, and that the concept of the dialog added greatly to the entertainment value of the map.

Piano. Once the dialog was completed, I started to imagine what kind of atmospheric sounds would psychologically match the mood of the map and narration. Continuing down the path I had been creating, I wanted to follow the idea that the map was set in a World War II time period with an epic sci-fi twist. To connect the audio to the visuals, and psychologically to the time period, I decided to use a simple, clean piano. It just so happened that I had been mastering a live piano album for Randal Branham at the time. With his permission, I tested out my favorite of his songs, and it was an instant success. I decided to keep it looping consistently throughout the whole map so that it was a constant tie to the vibe and time period of Russia’s involvement in World War II. Even in the background at a low volume, the minor progressions and classic piano sound drastically solidified the emotion of the map. To push the emotion a little further, I wanted the music to be significantly louder inside a structure, compared to the lower background levels of the exterior. In my mind, this reestablished the piano as an important and active part of the map every time the user walked into a room or hallway. I felt this was much more effective than leaving the music as a static background element.

Other Sounds. After the dialog and piano parts were in place, the rest just fell into place. I browsed through some sample libraries in order to save time and deliver an effective health pick-up sound. I borrowed elements from the weapon recording session and manipulated and tweaked them to create the portal hum and the sound of the user traveling through the portal. I like how the sounds of the portal are different from the weapon, yet still feel like they were intended to fit together, since they both started from the same recordings. Subconsciously, I believe the user makes that connection and the map feels more fluid. The sounds for opening and closing the door to the portal room were also from a sample library, slightly tweaked to fit the feel of the map.

Overall, I wanted the sound to feel somewhat vintage, as though the user were in an old communist-era movie with a taste of futuristic influence. Tonally, I wanted the piano and dialog to be warm, so the sound effects could cut through with sharp, crisp tones.

Implementation: Adding the sound assets into UDK.

I used the Unreal Development Kit (UDK) to implement my audio assets into the map. I utilized techniques that I had learned throughout the month, and I had the opportunity to pick the brains of my lab instructors to add a few new techniques as well.

SoundCues. The sound for this map used a lot of SoundCues, which were essential for implementing and generating random dialog lines in certain areas of the map. As I discussed in my previous post, I had the option to create different types of nodes (like plugins) that allowed me to apply certain attributes to the SoundCue. For example, I used the “Attenuation” node when I needed a sound or group of sounds to be spatialized. I always added a modulation node, even if I didn’t need one right away, because this allowed me to go back into the SoundCue at any time and easily mix the overall levels of the SoundCues or the levels of individual audio assets within them. The modulation node offers min and max values for attributes like volume and lowpass filtering.
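The idea behind a modulation node can be sketched in a few lines. This is only a conceptual model in Python, not UDK code; the parameter names and ranges are my own illustration of how a modulator-style node rerolls a value between its min and max each time the cue plays:

```python
import random

def modulated_volume(base_volume, vol_min=0.8, vol_max=1.0):
    """Each time the cue fires, reroll a multiplier between the node's
    min and max so repeated plays don't sound identical.
    (Illustrative values; not actual UDK API.)"""
    return base_volume * random.uniform(vol_min, vol_max)

level = modulated_volume(1.0)  # somewhere between 0.8 and 1.0
```

Because the min/max live on the node rather than in the audio file, remixing a cue later means nudging two numbers instead of re-exporting assets, which is why adding the node up front pays off.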

Finally, the most important node for my dialog… the “Random” node. This node is responsible for selecting one sound from a group of dialog lines each time the SoundCue is played or triggered. This lets the user experience a different mix of dialog every time they move through the map, ultimately making the map feel more interactive and less predetermined. With the help of my instructors, I was able to use the Random node in unique ways to control certain dialog lines. For example, when I first implemented the health pickup SoundCue, the dialog line “Band-aids” was stacking up on itself. This was a problem because I wanted the narrator to be an invisible character in the game, so I couldn’t have him saying more than one thing at a time. So I used the Random node to make the dialog line play only every third time. Since the health packs were mostly set up in groups of three, and since the time it took to run across three packs was about the same time the dialog line took to complete, three turned out to be the most natural-feeling number. All of this was done within the SoundCue editor, which is basically a graphical user interface version of scripting.
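The every-third-trigger behavior can be modeled as a simple counter gate. This Python sketch is my own illustration of the logic, not how the node graph is actually wired inside the SoundCue editor:

```python
class EveryNthGate:
    """Lets a dialog line through only on every nth trigger, so rapid
    pickups don't stack the narrator's voice on top of itself.
    (Hypothetical helper, not a UDK class.)"""
    def __init__(self, n=3):
        self.n = n
        self.count = 0

    def should_play(self):
        self.count += 1
        return self.count % self.n == 0

gate = EveryNthGate(3)
results = [gate.should_play() for _ in range(6)]
# Only the 3rd and 6th pickups fire the "Band-aids" line.
```

The same pattern generalizes: matching n to the typical pickup-group size and to the length of the line is what makes the gating feel natural rather than arbitrary.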

Kismet. Much like the SoundCue editor, Kismet is a graphical representation of scripting. Within Kismet, I was able to tie certain sound assets and SoundCues to animations and triggers in the map. This is where I linked the sounds for the opening and closing of the doors, which I had to tweak a little further so I could use the input and output messages from the animations to control the start and stop of my sound files. I also used the animation output messages to trigger the opening dialog line once the sound of the opening door completed. When I created volumes within the map, I used Kismet to create a script sequence that would play a sound when the volume was touched.
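Conceptually, a Kismet sequence is just events wired to actions. The sketch below is a hypothetical Python stand-in for that idea (UDK does this graphically, and the event names here are mine): the door animation’s finished output triggers the opening dialog line.

```python
class EventSequence:
    """Toy model of a Kismet-style graph: actions subscribe to named
    output events and run when that output fires."""
    def __init__(self):
        self.handlers = {}

    def connect(self, event, action):
        self.handlers.setdefault(event, []).append(action)

    def fire(self, event):
        for action in self.handlers.get(event, []):
            action()

played = []
seq = EventSequence()
# When the door-opening animation reports it has finished...
seq.connect("DoorOpen.Finished", lambda: played.append("opening_dialog"))
seq.fire("DoorOpen.Finished")  # ...the opening dialog line plays.
```

Chaining outputs to inputs this way is what guarantees the dialog never talks over the door sound: the line simply cannot start until the upstream event fires.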

Volumes. These are not audio volumes measured in dB. Volumes are 3D spaces within the map, created using the builder brush. These volumes can be used to trigger sounds or SoundCues, and that is how I implemented the majority of dialog lines within the map. Another type is the Reverb Volume, which I used to control interior and exterior volume and lowpass filtering levels. For example, the piano music in the map was controlled using Reverb Volumes that I sized to the physical space of each room or hallway. I set up the Reverb Volumes so that the piano was very loud inside and very quiet outside, while the exterior ambiance was much quieter inside and much louder outside. The exterior ambiance used a lowpass filter to take away the bright frequencies while inside a Reverb Volume. I also controlled the time it took to fade between interior and exterior sounds separately, so that I could create a natural-feeling transition when moving in and out of a Reverb Volume. Reverb Volumes automatically identify sound actors inside the volume as interior sounds, and sound actors outside of it as exterior sounds. This made it quick and easy to create different sound environments within the map.
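The interior/exterior behavior boils down to a timed crossfade driven by which side of the Reverb Volume boundary the listener is on. This Python sketch uses made-up level numbers purely to illustrate the piano swelling indoors while the exterior ambiance drops away; none of these values come from the actual map:

```python
def mix_levels(inside, t, fade_time=1.0,
               piano=(0.2, 0.9), ambience=(1.0, 0.15)):
    """Return (piano_level, ambience_level) t seconds after crossing a
    Reverb Volume boundary. Each pair is (exterior, interior) level;
    all numbers are illustrative."""
    frac = min(t / fade_time, 1.0)  # fade progress, 0..1
    if not inside:
        frac = 1.0 - frac           # heading back outside reverses it
    piano_level = piano[0] + (piano[1] - piano[0]) * frac
    amb_level = ambience[0] + (ambience[1] - ambience[0]) * frac
    return piano_level, amb_level

indoors = mix_levels(inside=True, t=1.0)    # piano up, ambiance down
outdoors = mix_levels(inside=False, t=1.0)  # piano down, ambiance up
```

Keeping `fade_time` as its own knob, separate from the levels, mirrors the point in the paragraph above: the transition speed is what sells the move through a doorway as natural.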

I attempted to do a lot more than what was required within the short amount of time provided for this project. I believe I successfully accomplished my conceptual goals, but I wish I had more time to fully develop a finished product. However, I do believe this project demonstrates my capabilities within UDK. I felt I was able to clearly portray my audio concepts for the map and implement my assets effectively.
