
Sound Design for a First Person Shooter Weapon

Posted on Mar 24, 2013 in Full Sail University, Videos

It’s about time to share some sound design for video games. Here is the latest interactive sound design project that I worked on while attending Full Sail University. The purpose of the project was to create and implement layers of sounds for a weapon in a first-person shooter style video game. That means I was able to jump into a game development program and actually tie the sounds directly into the game in order to produce a functional example. Check out the video below (of my functional example), then continue reading if you are interested in the specifics of how I accomplished the sound design and implementation.

Aesthetics: Creating the sound of a futuristic weapon.

When I started developing the concept behind the sound design used in this project, I wanted to create something that I could learn from. Meaning, I wanted to do something I hadn’t done before, something a little more involved than just pulling sounds from a sound library and calling it a day.

I found inspiration in a keynote given by Darren Korb, the one-man sound team behind the award-winning game “Bastion”. During a section of his keynote, he discussed how he used “mouth sounds” to create some of the interactive sound effects in the game. I was fascinated by this idea, because in my experience, games that use mouth sounds are typically comical games, or games that use mouth sounds specifically so they’re easily identifiable as mouth sounds.

So, I set off on my adventure to create a project based completely on recordings of my mouth sounds. That’s right, all of these sounds were created from recordings of my mouth and voice. After reading the following details of how I created the sounds, go back and listen again; I’m sure you’ll be able to identify the different elements.

I created 3 to 4 layers of recordings for each individual asset. I was able to take a look at the weapon in action and analyze its visual properties. Based on what I saw, I included whisper sounds, electricity-type sounds, and some unique sci-fi kind of sounds. In order to retain some of the organic nature of each recording, I used a limited set of plugins to tweak and manipulate each sound. My main weapon of choice, no pun intended, was a pitch shifter, which helped keep the sounds organic but unrecognizable as a natural human noise. The rest of the plugins included: chorus, sub bass, and limiter/compressor. I really had no idea exactly how I was going to achieve each sound, so the process was very creative and experimental. My only goal was to create sounds that were unique, but still fit the purpose of the weapon in the game.
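If you want to experiment with a similar chain yourself, here’s a rough sketch in Python using the open-source pedalboard library. To be clear, this is not the plugin chain I actually used, just an illustration of the idea, and the file names are placeholders:

```python
# Rough sketch of a mouth-sound processing chain using the open-source
# "pedalboard" library (pip install pedalboard). Not the project's exact
# plugins, just the idea: pitch shift to disguise the voice, chorus for
# movement, a low shelf standing in for sub bass, a limiter for peaks.
from pedalboard import Pedalboard, PitchShift, Chorus, LowShelfFilter, Limiter
from pedalboard.io import AudioFile

board = Pedalboard([
    PitchShift(semitones=-7),                 # push the voice out of "human" range
    Chorus(rate_hz=0.8, depth=0.4, mix=0.3),  # subtle width and movement
    LowShelfFilter(cutoff_frequency_hz=120.0, gain_db=9.0),  # fake sub-bass weight
    Limiter(threshold_db=-1.0),               # keep the peaks in check
])

with AudioFile("mouth_take.wav") as f:        # placeholder input file
    audio = f.read(f.frames)
    samplerate = f.samplerate

processed = board(audio, samplerate)

with AudioFile("weapon_layer.wav", "w", samplerate, processed.shape[0]) as f:
    f.write(processed)
```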

For the main firing sound, I wanted to create a fast electrical attack with a slower decay to emulate the impression that static electricity lingered after the weapon discharge. Underneath the firing layers, I created a layer that dropped in pitch, volume, and lowpass frequency in order to achieve the perception that the weapon discharge was moving away from the shooter.
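To give a feel for that underlying layer, here’s a toy NumPy sketch of the volume fade and lowpass sweep (the pitch drop was done separately with the pitch shifter, and the curve values here are made up for illustration):

```python
# Toy sketch of the "moving away" tail: volume fades out while a one-pole
# low-pass filter sweeps downward, so the sound loses both level and high
# end over time. Curve values are illustrative, not the project settings.
import numpy as np

def moving_away(x: np.ndarray, sr: int) -> np.ndarray:
    n = len(x)
    t = np.linspace(0.0, 1.0, n)
    gain = 1.0 - t                             # linear fade to silence
    cutoff = 8000.0 * (1.0 - t) + 200.0 * t    # sweep 8 kHz down to 200 Hz
    # One-pole low-pass with a time-varying coefficient.
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / sr)
    y = np.zeros(n)
    state = 0.0
    for i in range(n):
        state += alpha[i] * (x[i] - state)
        y[i] = state * gain[i]
    return y
```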

I used similar techniques to achieve the sounds for the alternate firing option, which discharged a flying orb. The characteristic that really set the sounds of the two firing options apart was the use of a looping sound layer that played while the orb traveled through the game space. This, in my opinion, is probably the easiest to identify as a human vocal sound, but I really liked how it represented movement and still felt like it could have been electric or synthetic in some way. This is my favorite element because it adds more of an unrealistic sci-fi vibe, but still remains believable in the gameplay.

Explosion sounds needed to be created to represent the flying orb making contact with an object, or the orb being shot and exploding while still in flight. To achieve this, I made mouth sounds that represented explosions the way a six-year-old tells stories, which of course always includes sound effects. Then, I pitch shifted, modulated, and added some sub bass in order to make the sounds appear more realistic.

I was also required to create sounds to represent picking up the weapon and picking up ammo for that weapon. I designed the weapon pick-up sound to imply electricity with a sci-fi/alien twist. I wanted it to sound believable in the game, but still hold some mystery as to how the sound was created. The ammo pick-up sound was actually created completely by accident. During one of my recording takes, my dog walked by and jingled her collar. I noticed this during editing, and decided to throw it in as the main audio layer for the sound of picking up ammo. So, an accident became a blessing.

In order to tie all the sounds together and make them sound like they fired from the same weapon, I shared layers across the different sounds, keeping them mixed quieter as the glue in the background. I created 22 different layers of sound in total to achieve the sound design for 10 audio assets, including: two unique weapon fire sounds, an alternate weapon fire sound, the alternate fire impact, the alternate fire travel loop, a combo explosion, underlying fire layers for both the default and alternate weapon fire, a weapon pick-up sound, and finally the sound of picking up ammo. The assets were all 16-bit/32 kHz/mono WAV files.

Implementation: Adding the sound assets into the game engine.

As fun as the creation and sound design was, actually implementing the assets into the game is where the magic really comes to life. I completed all of the implementation using the Unreal Development Kit. The process started with importing my assets and adding them to pre-existing SoundCues. Inside the SoundCues, I had to create a signal path for each sound. This basically told the game how the sound was going to be played.

For example, the main weapon fire sound (triggered by the left mouse click) needed to alternate between the two main firing sounds that I created. So, I added a “random node” inside the SoundCue, which connects to the two main firing sounds and randomly chooses between them every time the weapon is fired. The idea behind this is that firing a weapon over and over would not get monotonous and would stay believable in the game (every shot doesn’t sound the same).

Within that same SoundCue, an underlying layer had to be played. This allowed the two main firing sounds to be unique, but added a common element to tie them together and keep them identifiable as the main weapon fire. This called for a mixer node, which takes the output of the random node and mixes it with the sound of the underlying layer.
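Conceptually, the graph at this point behaves something like the Python sketch below. These are my own stand-in classes, not actual UDK code, and the buffers are placeholders for the real recordings:

```python
# Conceptual sketch of the SoundCue graph so far: a random node picks one
# of the two main fire recordings, and a mixer node sums that pick with
# the shared underlying layer. Python stand-ins, not actual UDK code.
import random
import numpy as np

class SampleNode:
    """Plays back a single audio buffer."""
    def __init__(self, buffer):
        self.buffer = buffer

    def play(self):
        return self.buffer

class RandomNode:
    """Randomly picks one of its input nodes on each trigger."""
    def __init__(self, inputs):
        self.inputs = inputs

    def play(self):
        return random.choice(self.inputs).play()

class MixerNode:
    """Sums the output of all of its input nodes."""
    def __init__(self, *inputs):
        self.inputs = inputs

    def play(self):
        buffers = [node.play() for node in self.inputs]
        mix = np.zeros(max(len(b) for b in buffers))
        for b in buffers:
            mix[:len(b)] += b          # shorter layers are padded with silence
        return mix

# Placeholder buffers standing in for the real 32 kHz mono recordings.
sr = 32000
fire_a = np.random.randn(sr) * 0.1
fire_b = np.random.randn(sr) * 0.1
under_layer = np.random.randn(sr) * 0.05

fire_cue = MixerNode(
    RandomNode([SampleNode(fire_a), SampleNode(fire_b)]),
    SampleNode(under_layer),
)
shot = fire_cue.play()                 # a different variation each trigger pull
```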

Then, that signal would need to be sent through an attenuation node, which gives the gunfire directionality in the game. Meaning, if you were another player in the game, you could hear whether that gun fired from the left or the right, in front or behind, etc. This node also allowed for attenuation over distance. For example, the orb gradually lost volume and high frequencies as it traveled farther away from the shooter. This was adjusted using min and max distance values that control how the volume falls off.
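The distance part of that node boils down to simple math. Here’s a rough sketch with made-up radii; UDK offers several falloff curves, and this mimics the linear one:

```python
# Rough sketch of distance attenuation: full volume inside min_radius,
# silent beyond max_radius, linear falloff in between. My own stand-in
# formula with illustrative radii, not UDK's exact implementation.
def attenuate(distance: float, min_radius: float = 200.0,
              max_radius: float = 4000.0) -> float:
    if distance <= min_radius:
        return 1.0
    if distance >= max_radius:
        return 0.0
    return 1.0 - (distance - min_radius) / (max_radius - min_radius)

# The high-frequency rolloff can reuse the same curve to pick a cutoff:
def lowpass_cutoff(distance: float) -> float:
    return 500.0 + attenuate(distance) * (16000.0 - 500.0)  # 16 kHz near, 500 Hz far
```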

In addition to all of that, I added a modulation node, which allowed me to randomize the pitch and/or volume of each weapon fire, offering even more of a naturally varying sound within the game. When I originally implemented the sounds, they were very close but needed some volume tweaking to sound believable in the game. I was able to tweak these volume levels by adjusting min and max values within the modulation node.
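In spirit, the modulation node does something like this on every shot (the min/max values here are illustrative, not my project settings):

```python
# Sketch of the modulation node: each shot gets a random pitch and volume
# multiplier between configurable min/max bounds, so repeated fire never
# sounds identical. Values are illustrative, not the project settings.
import random

def modulate(pitch_min: float = 0.95, pitch_max: float = 1.05,
             vol_min: float = 0.85, vol_max: float = 1.0):
    pitch = random.uniform(pitch_min, pitch_max)   # playback-rate multiplier
    volume = random.uniform(vol_min, vol_max)      # linear gain multiplier
    return pitch, volume
```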

After the SoundCues were properly set up, I had to connect them to the functionality of the weapon in the game. This required the use of a part of the Unreal Development Kit called Kismet. Kismet is a graphical user interface (GUI) that allows you to easily add scripting to the game. This scripting basically tells the game engine how certain elements in the game need to function and interact with the player. Inside of Kismet, I replaced placeholder SoundCues with the new SoundCues I had created. This attached the sounds I created to their appropriate triggers in the game. (In this project, all of the triggers, actors, and Kismet scripting had already been set up. In other projects, I created these elements and got them to interact with the game successfully.)

Basically, the SoundCue was like a chain that started with the original sound and then connected all the different nodes together. Then, the SoundCue was attached to the actual gameplay through links within Kismet. From there, I exported the game and was able to play it while I recorded the video above.

Conclusion.

That was probably way too many details for you… but if you stuck around this far, then you hopefully got a taste of what it takes to implement audio into a game. It can be tedious, as well as rewarding. It was a pretty cool feeling to be able to run around and shoot with sounds that I had actually created and implemented into a first person shooter game.

