
Audio fiction, despite its long and storied past, is only now seeing a resurgence in popularity in tandem with that of the broader podcasting medium. From smaller independent production houses such as Fable and Folly, to larger industry mainstays like Gimlet, to 20th-century stalwarts like the BBC, audio fiction has found a rabid new fanbase in our smart-device-run, on-the-go, streaming-reliant lives.
With the affordability and proliferation of digital tools and equipment, the standards for quality within audio fiction are ever rising. Furthermore, with the continued distancing brought on by the pandemic, remotely recorded audio has demanded even closer attention to how audio is treated in order to convey a scene realistically.
Having studied sound design in the past, and currently pursuing a creative interest in audio fiction, I’ve come to learn a variety of methods by which to build an auditory scene in a way that communicates realism and promotes immersion for the listener.
The following are a few crucial techniques I employ while working within my DAW to build a realistic scene for audio fiction.
Method #1: Employing Ambience Tracks
We’ll start off with a tip that may be obvious to some. Ambience is one of the key elements in designing a scene with audio.
If your scene takes place on a beach, it makes sense that you would include a track of beach sounds such as waves, wind, and seagulls. Sometimes, ambience sound effects may include all of these sounds in one file; other times, you may need to combine several tracks of ambience in order to accurately portray an environment.
Sometimes, an ambience track may feel too “mono”; that is, it may feel too centered to be realistic for the listener. This happened while I was working on my audio drama short A Drink Before the Dark, in which I employed a couple of wind ambience tracks to portray the coming storm.
Rather than use a single wind ambience track, I copied it to a second track and panned the two tracks hard left and hard right. On its own, doing this with an exact copy does nothing to widen the sound: each side plays an identical waveform, so the result sounds just as centered as a single track would.
To remedy this, I chose a different starting position for each file and looped each at different points. This meant the two channels were always playing different parts of the file, so each ear would constantly be hearing different gusts of wind. It made the scene all the more immersive, even though the listener never consciously notices it.
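If you’d like to see the idea outside of a DAW, here’s a minimal sketch of the same offset-copy trick in Python using numpy and soundfile; the file name wind.wav and the 20-second offset are just placeholders for illustration.

```python
import numpy as np
import soundfile as sf

audio, sr = sf.read("wind.wav")            # hypothetical mono ambience file
if audio.ndim > 1:
    audio = audio.mean(axis=1)             # fold to mono if the file is stereo

# Assumes the file is comfortably longer than the offset.
offset = int(20.0 * sr)                    # start the second copy 20 s later
length = len(audio) - offset               # keep both channels the same length

left = audio[:length]                      # original start position
right = audio[offset:offset + length]      # same file, different start point

stereo = np.column_stack([left, right])    # hard-pan: one copy per side
sf.write("wind_wide.wav", stereo, sr)
```

Because each ear hears a different stretch of the same recording, the bed feels wide without introducing any new material.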
For scenes taking place indoors without any bleed from outside environments, it’s still important to include some form of ambience. If your scene takes place in an office, an air conditioner ambience would do wonders to put the listener in that scene, as would, say, a (low volume) ticking clock.
Method #2: Utilizing Reverb Effectively and Economically

Reverb is probably the single most effective way to place your characters in an environment. With the amount of control modern reverb plugins offer, it’s relatively easy to portray characters in environments ranging from great cathedrals, to home garages, to the Grand Canyon.
Note that reverb isn’t just for character voices; objects and foley within the scene must also feature some amount of reverb in order to accurately portray the environment. In fact, a scene in which the characters carry a noticeable amount of reverb will contrast heavily with sound design elements that are left dry (devoid of reverb), in a way that can easily be jarring to the listener.
Realism vs. Romanticism

In addition to the very real reverb that comes from being in a hall or a tunnel, I’d also like to draw attention to the ability to use reverb to portray environments where the actual reflection of sound is minuscule. We have the advantage nowadays of more than a hundred years of conditioning, through music, radio, and film, to sound design that is more representative than literal.
These “sonic spatial codes” have developed peculiar understandings of acoustic space as portrayed through recorded media. The Western genre is well known for its idealistic, heavily romanticized depictions of the West, and its sonic codes developed a tendency toward reverb and delay (echo). Despite the lack of reflective surfaces, and thus the lack of reverberation, in the real setting, the soft vocal whoops and slide guitar of Foy Willing’s “Blue Shadows on the Trail” are treated heavily with reverb in order to convey a nighttime scene in the open, mesa-studded West.
So too can the modern sound designer utilize reverberation in similar ways to convey open spaces, provided the listener understands the coded implications of reverb in such a context.
Using Minimal Reverb for Maximum Effect
Going back to realism, reverb can be utilized in a number of subtle ways. A reverb set to a short decay time of anywhere from 0.1 to 1 second can easily place a character within a small, somewhat reflective room. Play back the lines while easing up the reverb’s wet amount until you arrive at a point that feels realistic for the scene.
As you’ll find, even the smallest amount of reverb can do much to “place” a character just right in the mix. Fully dry vocals, while useful in certain circumstances, can come off as jarring if the scene is set somewhere the listener is not (consciously or unconsciously) expecting a dry voice. Small amounts of reverb are also helpful when narration is used: treating the characters with a touch of reverb while keeping the narrator fully dry helps the listener distinguish between the events of a scene and the narrator’s description of them.
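For the curious, here’s a rough sketch of what “easing in the wetness” amounts to under the hood, in Python with numpy, scipy, and soundfile. The decaying-noise impulse response is only a crude stand-in for a real reverb plugin, and the file names, decay time, and mix value are assumptions for illustration.

```python
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("voice_dry.wav")          # hypothetical mono dialogue take

decay = 0.3                                 # short decay: small, somewhat reflective room
n = int(decay * sr)
t = np.arange(n) / sr
ir = np.random.randn(n) * np.exp(-6.0 * t / decay)   # decaying noise as a crude "room"
ir /= np.max(np.abs(ir))

wet = fftconvolve(dry, ir)[: len(dry)]      # the reverberated copy of the line
wet /= np.max(np.abs(wet))                  # keep the wet level comparable to the dry

mix = 0.15                                  # ease this up while listening back
out = (1.0 - mix) * dry + mix * wet
sf.write("voice_small_room.wav", out, sr)
```

In a DAW you’d do the same thing by nudging the plugin’s dry/wet knob while the lines loop; the code just makes the blend explicit.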
Economy in Reverb Use
Computers can, of course, only handle a finite number of computations at once, and throwing a reverb on every single track that needs one quickly becomes burdensome for your CPU. To avoid this, use busses and sends to control reverb for multiple tracks with a single plugin.
Briefly, a “bus” is a track that takes in audio from multiple other tracks, while a “send” routes a copy of a track’s audio to such a bus, where plugins such as reverb are placed. You can then introduce however much reverb you want by increasing the send level on your original track until you reach the balance between the dry signal and the wet reverb signal you desire. Note that reverb (and any other effect, for that matter) placed on a send bus should be set to 100% wet, since the send level itself determines how much of the effect you hear.
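To make the routing concrete, here’s a minimal sketch of the send/bus idea in Python with numpy. The toy_reverb function is a crude stand-in for a real reverb plugin, not a recommendation; the point is that every track contributes a scaled copy to one shared bus, and only one “reverb” instance runs.

```python
import numpy as np

def toy_reverb(x, sr, delay=0.05, feedback=0.5, taps=8):
    """Crude stand-in reverb: a handful of decaying echoes, 100% wet."""
    out = np.zeros(len(x) + int(delay * sr * taps))
    for i in range(1, taps + 1):
        start = int(delay * sr * i)
        out[start:start + len(x)] += (feedback ** i) * x
    return out[: len(x)]

def mix_with_reverb_bus(tracks, send_levels, sr):
    """tracks: list of equal-length mono arrays; send_levels: one 0.0-1.0 value per track."""
    dry_sum = np.sum(tracks, axis=0)                       # the tracks stay dry
    bus_in = np.sum([lvl * trk for lvl, trk in zip(send_levels, tracks)], axis=0)
    wet = toy_reverb(bus_in, sr)                           # one reverb shared by all tracks
    return dry_sum + wet
```

Turning up a track’s send level adds more of that track into the shared reverb without touching its dry signal, which is exactly the behavior you get from a send knob in a DAW.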
Reverb as an Aid in Remote-Recorded Audio
The pandemic has shifted much of what we do even further online, and audio fiction is no exception. The lack of in-person recording means most actors need to record their lines at home, sometimes in conditions that are not ideal for a clean recording.
As a result, you may receive lines that, because they were recorded in different spaces, make your characters sound like they are in entirely different places, breaking the illusion of the scene.
Reverb, along with EQ, can be your friend in treating character lines so they sound like they share the same environment. This will take some trial and error, but applying reverbs with different parameters and adjusting the EQ curve of each voice track can be a lifesaver for projects whose lines are all recorded remotely.
Method #3: Panning
Panning, another very common technique, can easily be utilized to place your characters and other sound design elements in an environment. It’s important to balance your panning in a way that’s not uncomfortable for the listener. There is also a general rule of thumb within audio fiction not to pan voices more than about 20 degrees either way. Panning voices hard right or left can easily become jarring for the listener, and can even be confusing if they happen to be listening to your drama in one ear only.
Personally, I don’t shy away from hard panning so long as I don’t keep the character there for a long time. Certain elements that aren’t as crucial to the story, such as ambience, can also be hard panned in a way that’s not too uncomfortable for the listener.
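For reference, here’s a quick sketch of constant-power panning in Python with numpy, with the pan position expressed in degrees so the 20-degree rule of thumb is easy to apply. Mapping ±45 degrees to hard left/right is just one common convention, not a standard the article prescribes.

```python
import numpy as np

def pan(mono, degrees):
    """Constant-power pan: degrees in [-45, 45], negative = left, 0 = center."""
    theta = np.radians(degrees + 45.0)       # map [-45, 45] onto [0, 90] degrees
    left = np.cos(theta) * mono              # full left at -45, silent at +45
    right = np.sin(theta) * mono             # the mirror image
    return np.column_stack([left, right])    # stereo output

# e.g. pan(voice, -20.0) keeps a voice gently off to the left,
# while pan(seagulls, 45.0) hard-pans an ambience element right.
```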
Method #4: Employing Psychoacoustic Mixing
Psychoacoustics is the study of how humans perceive sound. Everything about how our bodies are built, from the way our ears convert sound into signals for the brain, to the shape of our outer ears, to our heads and bodies themselves, plays a role in how we perceive sounds in the real world.
There are a number of strategies you can employ in audio fiction to emulate these factors and increase realism. Some of this may sound like overkill, and indeed, some of it may very well be. But I would like to share a few techniques I sometimes use in ways that help the listener’s perception of the scene.
EQ

Equalization is one of the best ways to employ psychoacoustic mixing. Think of EQ as acting like your ears: a sound coming from behind you will sound different from a sound coming from directly in front. Your ears pick up less of the high frequencies of a sound source behind you, so along with panning, shaving off some high end will help replicate that illusion for the listener.
Likewise, a character directing his or her voice away from the listener will also lack a certain amount of high end, which EQ can help replicate as well.
In fact, the farther away a sound is, the less high end it will have. Loud noises off in the distance, such as an explosion, will feel more realistic when low-passed with EQ. Human voices yelling in the distance will likewise lack high end, but their relatively weak amplitude also means they lack low end, so a gentle band-pass filter will do the trick.
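Here’s a hedged sketch of those two moves in Python with scipy and soundfile; the cutoff frequencies and file names are illustrative starting points, not fixed rules.

```python
import soundfile as sf
from scipy.signal import butter, sosfilt

# Distant, loud source: roll off the highs that wouldn't survive the distance.
explosion, sr = sf.read("explosion.wav")                   # hypothetical file
lowpass = butter(2, 2000, btype="lowpass", fs=sr, output="sos")
distant_explosion = sosfilt(lowpass, explosion, axis=0)

# Distant voice: weak lows and lost highs, so a gentle band-pass instead.
shout, sr = sf.read("shout.wav")                           # hypothetical file
bandpass = butter(2, [300, 3000], btype="bandpass", fs=sr, output="sos")
distant_shout = sosfilt(bandpass, shout, axis=0)
```

Low filter orders (here, second order) keep the slopes gentle, which matches the “use EQ gently” advice below.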
EQ can also be used to place a sound on the height axis of the 3D space. A subtle high-pass can help place a sound higher in space, while a subtle carving out of the upper frequencies can help place it lower on the height axis.
EQ should (generally) be used gently; even the softest adjustment of EQ curves can help a sound to appear higher, lower, closer, or farther from the listener.
Reverb
Again with the reverb, I know.
But reverb, in its all-encompassing importance, can be used in more ways than simply placing a sound within a physical space. Automating reverb over time can also help to ground your sounds as they move through space.
For example, as a sound source moves away from the listener, automate the mix parameter of the reverb plugin; if the reverb sits on a send, automating the send level does the same thing. Increasing the reverb “wetness” helps push a sound farther back in space, which is useful for voices or other sounds moving away from the listener.
For sound sources such as a vehicle moving toward the listener and then away, automating a reverb’s dry/wet amount, in combination with panning and volume, can help create the illusion you’re looking for.
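Here’s a small sketch of what that automation might look like outside a DAW, in Python with numpy. The gain, wet-mix, and pan curves are arbitrary shapes chosen for illustration, and the wet signal is assumed to come from a 100%-wet reverb you’ve already rendered for the same source.

```python
import numpy as np

def automate_pass_by(dry, wet, sr):
    """dry, wet: equal-length mono arrays for the same source (wet = 100% reverb)."""
    n = len(dry)
    t = np.linspace(0.0, 1.0, n)

    # Closest at the midpoint: loud and mostly dry there, quieter and wetter at the edges.
    distance = np.abs(t - 0.5) * 2.0          # 1.0 far -> 0.0 near -> 1.0 far
    gain = 1.0 - 0.7 * distance               # simple volume automation
    wet_mix = 0.1 + 0.5 * distance            # more reverb as the source recedes

    mono = gain * ((1.0 - wet_mix) * dry + wet_mix * wet)

    # Crude pan automation: a left-to-right constant-power sweep over the clip.
    theta = t * (np.pi / 2.0)
    return np.column_stack([np.cos(theta) * mono, np.sin(theta) * mono])
```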
Compression
Compression is a useful tool for keeping your sounds “up front” for the listener. There will undoubtedly be a certain amount of compression on all of your voices and sounds; compressing some more than others, however, can help create the illusion of being closer to or farther from the listener. A more heavily compressed sound works well for sounds you want to appear “right there” for the listener, while sounds farther back in the distance should not feature such heavy compression.
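As a bare-bones illustration, here’s what downward compression boils down to in Python with numpy; a real compressor plugin adds attack, release, knee, and makeup gain, all omitted here for brevity, and the threshold and ratio values are only examples.

```python
import numpy as np

def simple_compress(x, threshold_db=-20.0, ratio=4.0):
    """Per-sample downward compression with no envelope smoothing (for illustration only)."""
    eps = 1e-10
    level_db = 20.0 * np.log10(np.abs(x) + eps)        # instantaneous level in dB
    over = np.maximum(level_db - threshold_db, 0.0)    # how far above the threshold
    gain_db = -over * (1.0 - 1.0 / ratio)              # shrink the overshoot by the ratio
    return x * (10.0 ** (gain_db / 20.0))
```

The more you squeeze the dynamic range this way (higher ratio, lower threshold), the more consistently loud and “close” the sound feels; leave distant elements closer to their natural dynamics.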
Simple Leveling (Volume)
The last method is the easiest and the biggest no-brainer: simple volume leveling! There is a reason a common saying in the world of music mix engineering is that leveling is 90% of the work, with the rest being a mix of panning, compression, EQ, reverb, and other effects.
It’s no different in audio fiction. Reaching for the fader is the first thing that anyone should do when mixing a scene for realism, before trying out any other psychoacoustic tricks. Leveling is a great way to “place” a sound in the 3D space, specifically on the axis of distance between the listener and the sound source.
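If you want a number to start from, here’s a tiny Python sketch of leveling for distance using the free-field rule of thumb of roughly -6 dB per doubling of distance from a point source; real rooms and real mixes will deviate from this, so treat it only as a starting point before you reach for the fader by ear.

```python
import numpy as np

def level_for_distance(x, distance_m, reference_m=1.0):
    """Attenuate a signal by ~6 dB per doubling of distance past the reference."""
    gain_db = -6.0 * np.log2(max(distance_m, reference_m) / reference_m)
    return x * (10.0 ** (gain_db / 20.0))

# e.g. level_for_distance(footsteps, 8.0) drops the footsteps about 18 dB,
# pushing them three doublings of distance away from the listener.
```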
Conclusion
Sound designing a scene in audio fiction requires a number of elements. While not all are always necessary, the combination of multiple techniques can greatly aid in listeners’ sense of immersion. Just by employing some basic techniques such as ambience, panning, and reverb, you can easily ground your Frankenstein-ed amalgamation of sound design and vocal lines in reality.
I hope that through this overview of these methods, along with tips in dealing with remote recording and psychoacoustics, I’ve shed some light on the process and offered practical advice that you can use in your own audio fiction.