Blog

Analogue Emulation Audio Plugins: Nostalgia Meets Modern Stress Relief
- #010

Intro

In the ever-evolving world of music production, there's a growing trend towards analogue emulation VST / VST3 / AAX / AU plugins. These plugins, designed to mimic the warmth and character of vintage gear, have become a staple in many producers' and engineers' workflows. So why have these plugins become so popular? Could it be that vintage equipment simply sounds better, is it merely a longing for the past, or is there perhaps something deeper at play? Beyond the nostalgia, these plugins offer tangible benefits in music and sound production, such as adding warmth, character, and, some might say, depth to a sound. Personally, I often lean on saturation to introduce high frequencies into my tracks, sometimes opting for this approach over the traditional EQ boost. By running a saturation plugin in parallel and employing high-pass and low-pass filters in minimum-phase mode, or high and low shelving filters, I can achieve a rich and nuanced sound texture. There are also so many flavours of saturation, all of which have their place in music's history and in today's modern and post-modern productions.
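
To make the idea concrete, here is a minimal sketch of that parallel-saturation trick in Python with NumPy and SciPy. The filter frequencies, drive, and mix amount are illustrative values of my own, not settings from any particular plugin.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def parallel_saturation(dry, sr, hp_hz=2000.0, lp_hz=12000.0, drive=4.0, mix=0.2):
    # Saturate a copy of the signal to generate new harmonics.
    wet = np.tanh(drive * dry) / np.tanh(drive)
    # Band-limit the saturated copy with minimum-phase (IIR) filters so that
    # mostly the newly generated top end is kept.
    sos_hp = butter(2, hp_hz, btype="highpass", fs=sr, output="sos")
    sos_lp = butter(2, lp_hz, btype="lowpass", fs=sr, output="sos")
    band = sosfilt(sos_lp, sosfilt(sos_hp, wet))
    # Blend the filtered copy back under the dry signal (parallel, not insert).
    return dry + mix * band

sr = 48000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)   # a dull 1 kHz tone
out = parallel_saturation(tone, sr)          # the same tone with gentle added highs
```

The point of the design is that, unlike a shelf boost, nothing already in the source is turned up; only the harmonics created by the saturation are folded in underneath.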

The Allure of Analogue

Analogue gear, with its tactile knobs and unique quirks, has always held a special place in the hearts of audio enthusiasts. The subtle imperfections, the warmth of tape saturation, and the character of vintage circuits evoke a sense of nostalgia. There are characteristics that announce that the music was made with certain gear. This music is often a reminder of when making music was a more physical, hands-on experience, a time before the internet and smartphones.

Analogue Nostalgia vs. Modern Precision

While analogue emulation plugins offer a warm and nostalgic sound, it's essential to recognise the prowess of modern, typically digital plugins like FabFilter's Pro-Q 3. Relatively few companies focus on purely modern designs rather than emulations of famous vintage gear. Plugins like Pro-Q 3 are designed with precision, flexibility, and a vast array of features that vintage gear simply cannot match. Pro-Q 3, for instance, offers dynamic EQ, up to 24 parametric bands, surround support, and an intuitive spectrum analyser, making it a go-to for surgical EQ tasks. A company like Slate Digital, on the other hand, provides a suite of tools that aims for the best of both worlds: analogue warmth with fewer built-in options per plugin, yet many emulations that offer more sound-sculpting capability than their analogue counterparts. The choice between nostalgia-driven and modern plugins isn't necessarily about one being better than the other; it's about the right tool for the job at hand, and in many cases for the workflow. Some tracks might benefit from the character and colour of analogue emulations, while others might require the surgical precision of a plugin like Pro-Q 3. As with many things in music production, it often comes down to personal preference and the specific needs of the project. For music production to remain something people enjoy returning to, though, the golden rule is that progress must be fun.

The Subtle Nuances of Analogue in a Digital Age

In the realm of music production, the debate often arises: can the average listener truly discern the subtle characteristics imparted by analogue equipment? And if not, what is the point? While audiophiles and producers might revel in the warmth and depth of an analogue sound, the general audience might not consciously identify these nuances, although deep down their subconscious minds might. However, this doesn't diminish the value of using analogue or its emulations. For instance, in one of my pieces, I turned to Acustica Audio's Jade2, a plugin that emulates a specific set of analogue tools. This choice was instrumental in achieving the desired sonic texture, especially given Jade2's affinity for the kinds of textures that suit contemporary classical music. The piece resonated with this modern tone, keeping listeners anchored in the present while keeping the sounds homogeneous.

Analogue Warmth: A Soothing Balm for Misophonia Sufferers?

Misophonia, a condition where specific sounds trigger emotional and physiological responses, can be profoundly distressing for those affected. In a world dominated by noise, by displeasing reverbs in our day-to-day environments, and by sharp digital sounds in our media, the soft, rounded edges of analogue might offer a call-back and a reprieve. The inherent warmth and gentle saturation of analogue recordings, reminiscent of a time when sound was made more organically outside of the box, when people tended to have only one job, and when times seemed simpler, might provide a sense of comfort and nostalgia. In my own research, I've found that integrating analogue equipment into the audio chain, even in unexpected scenarios like playing Apex Legends on PC, can make the experience more pleasant. The analogue touch not only enhanced the auditory experience but also gave me a competitive edge by deepening my immersion and heightening my auditory senses, making me a more effective opponent in the game. The immediacy of analogue signals, their dynamic behaviour at levels beyond where most digital plugins are designed to operate, and their complete freedom from aliasing made the game much better to play, and it made me a fan.

Conclusion

Analogue emulation VST / VST3 / AAX / AU plugins are more than just tools for music and sound production. They're a bridge to the past, a sonic comfort blanket, and a testament to the enduring appeal of analogue sound. As we navigate the complexities of modern life, these plugins offer a momentary respite, a chance to lose ourselves in the warm embrace of analogue nostalgia. Is there still a way to go before we get the best of analogue and digital? Some companies like to think so: Acustica Audio continue to say they are one step closer, while others, such as Slate Digital, UAD, and so many more, claim they have already got there. Let's continue this journey forward in this ever-so-exciting time to be alive.

Date: Monday, 18 September 2023

The Art of Listening: A Deep Dive into Aural/Hearing Diversity
- #009

Intro

This blog post is about how fascinating the different sounds we can hear are, and how complex and important our ability to hear them is. It goes into more detail about how these differences in hearing can affect different parts of our lives, such as making music or working with sound in a technical way.

Exploring Sound Interpretation

Sound perception tends to improve with time: not just because our ears continue to develop until we're at least 19 years of age, but also because we get better at deciphering what our ears are conveying to us. As one gains more life experience and practices conscious listening by actively interacting with audio in some way, one becomes better at interpreting sound in general. Just as a hunter-gatherer's hearing would have been highly attuned to the sounds of their environment by mature adulthood, so too can we improve our listening capabilities. However, even with extensive experience, some sonic intricacies may still be challenging to pick apart. For instance, the effectiveness of various spatial plugins, or HRTFs, in videogames varies: some work well for a portion of the population but not for others. Some people may have so-called 'good ears', able to hear frequencies up to 20 kHz with no signs of tinnitus, while others may have so-called 'golden ears', the kind legendary mixing and mastering engineers are known for, earned through extensive experience mixing and/or mastering music for particular audiences. All of this affects how effectively a listener can perceive sound and decipher what they are hearing.

Throughout our lives, we often experience shifts in how we perceive music. We might have listened to a piece of music when we were young, only to discover an entirely different interpretation upon a later listen because of how our ears have developed. This illustrates how our auditory perceptions can change and evolve over time. Have a listen to a recording of a song you haven't heard since you were a child and see if you can hear anything new. I've done this many times, and it is astounding how a song you thought you knew so well can have a layer of sonic elements you had never heard before.

Aural Diversity and Sensory Perception

Just as it's hard to understand what it's like to be colour-blind unless we experience it ourselves, it's difficult to grasp someone else's auditory perception. The typical view we have held since Fletcher and Munson ran their standardisation tests in the 1930s is that we all hear the same thing when presented with the same sound, but this is clearly not the case. Even Fletcher and Munson defined their typical listeners narrowly: people between the ages of 18 and 25, with no wriggle room either side, and with very healthy hearing. In the gaming world, various techniques are used to create a sense of spatial awareness within sound. However, there is no one-size-fits-all solution. Some of the most successful HRTFs are the most customisable, with Dolby's personalised HRTFs requiring images of the listener's ears, as well as various measurements of the head, in order to build an effective HRTF. Some streamers even use particular sound profile settings where they are available; in Rainbow Six Siege, for example, many esports players use Night Mode at all times of day, giving them a competitive edge by letting them control their game audio without risking an adverse reaction to it in a competitive game where the stakes are high.

The Role of the Mastering Engineer

In the realm of music production, mastering engineers play a vital role. They are responsible for the last few changes and tweaks to the music, making sure that it appeals to a wide audience, maintaining the integrity of the artistic vision, and aligning it with industry standards. They do this with extremely accurate monitoring and a variety of playback devices for listening back. Having said this, they are primarily there to impart their skills in the art of mastering, drawing on their taste in audio tools and their taste in music.

Successful mastering engineers are in high demand because of their previous sales and artistic successes, but they have largely remained somewhat vague about what actually separates them from a budding mastering engineer who has just started: instinct is mentioned more often than in other lines of work. Having said that, a mastering engineer worth their salt will be a huge fan of music, and they will use monitoring that covers the extremes of the audio range: clear low bass, and very clear, well-defined high mids and highs. This is so nothing gets past this quality-control stage of the process. These types of monitors can also be much more fatiguing on the ear, because of how live they sound at close range, which doesn't suit producers and mixing engineers so well. Producers need a monitoring solution that is pleasant to work with, whilst mastering engineers need to hear precisely what is going on before sending the audio out. Rooms are very important too, to make sure there is enough silence to hear the smallest sounds.

The exploration of aural diversity shines a light on the intricacies of our auditory senses and their effects on our perception of music and sound. By embracing aural diversity, we can truly celebrate the vast richness of our auditory world and strive for a better understanding of each other's listening experiences.

Date: Saturday, 20 May 2023

Red Dead Redemption Mysterious Stranger - A Fresh Take on Sound and Voice
- #008

Introduction

Red Dead Redemption is a game that's loved by many for its captivating storytelling and immersive world. Recently, my friend Laurie Hyde, a talented voice actor, and I decided to bring a fresh perspective to the Mysterious Stranger, a character from the game. This blog post will take you through our journey of reimagining the sound design for this iconic character.

Our Collaboration

After our successful collaboration on the voice replacement mod for Ready Or Not, where we replaced the voice of the commander character, Laurie and I decided to team up again. We both love storytelling and audiovisual art, and we wanted to challenge ourselves by reinterpreting a beloved character while respecting the original work.

The Sound Design

For this project, I used a new technique to enhance the audio experience. This technique added a subtle touch to the audio that might make a difference for those sensitive to certain sounds. Each sound goes through a process that adds a bit of colour and depth. This approach helps to create a sound that is reminiscent of classic films.

Our collaboration on the Mysterious Stranger in Red Dead Redemption showcases the combination of Laurie Hyde's voice acting talent and my know-how in sound design. We hope you enjoy this fresh interpretation of a beloved character and appreciate the attention to detail we've put into the project.

Date: Tuesday, 18 April 2023

Unleashing the Power of Modern Mixing Techniques: with Acustica Audio Nebula, Oversampling in Reaper, Plugin Hosts, and More
- #007

Welcome to this guide on a few advanced mixing techniques using Reaper, a DAW renowned for its extensive features despite its compact program size. We'll also delve into integrating hardware into your workflow in a way that is unique to Reaper.

1. Emulating a High-End Console with Acustica Audio Nebula and AlexB Programs

For a mix that radiates a certain amount of vibe, emulating analogue consoles is a technique that divides opinion. Some view it as a risk, while others consider it an indispensable tool. In our current endeavour, we'll harness the power of Acustica Audio's Nebula 4 using a specific patch, AlexB's American Console. This library emulates a renowned American console in the 1970s tradition, the API 1608, using capture software to infuse the audio with the signature warmth and character the console is celebrated for.

2. Improving Your Sound with Oversampling

To get slightly more out of this technique we can use oversampling. Oversampling can improve the quality of digital processing by running plugins at a higher internal sample rate, so that the distortion products of non-linear processing land above the audible range instead of folding back down as aliasing. By using Blue Cat Audio's PatchWork or DDMF Metaplugin with any DAW, you can achieve high-quality oversampling and noticeably enhance the sound of some of your plugins, especially algorithmic ones. To use this feature, just insert PatchWork or Metaplugin as an effect in your DAW project, and then load your desired processing plugins within its interface.
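
As a rough illustration of why this helps, here is a sketch in Python, with a simple tanh waveshaper standing in for whichever algorithmic plugin you host inside PatchWork or Metaplugin. The oversampling factor, drive, and test frequency are arbitrary choices for the example.

```python
import numpy as np
from scipy.signal import resample_poly

def saturate(x, drive=6.0):
    # Stand-in for a non-linear (algorithmic) plugin.
    return np.tanh(drive * x)

def process_oversampled(x, factor=4):
    up = resample_poly(x, factor, 1)     # raise the internal sample rate
    y = saturate(up)                     # harmonics now fit below the new Nyquist
    return resample_poly(y, 1, factor)   # anti-alias filter and return to the original rate

sr = 48000
t = np.arange(sr) / sr
sine = 0.9 * np.sin(2 * np.pi * 15000 * t)   # a high tone that aliases badly when distorted
plain = saturate(sine)                        # harmonics fold back into the audible range
clean = process_oversampled(sine)             # most of that aliasing is avoided
```

Saturating the 15 kHz tone directly at 48 kHz folds its harmonics back down into the audible range; doing the same work at four times the rate keeps them above the decimation filter, which is essentially what these plugin hosts do for you.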

Blue Cat's PatchWork

3. Using Real Analogue Gear with Access Analog's Matrix Plugin

If you're looking to incorporate the unique sound of real analogue gear into your mixes, Access Analog's Matrix plugin provides a solution. This innovative plugin lets you process your audio through actual hardware devices in the cloud, all within the familiar environment of Reaper. If you want to use your own analogue equipment you can do so with ReaInsert. To take note of the settings for future recall, you can create your own custom macro or recall sheet with all the settings recorded. We will go into this in a bit more detail in the future.
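
As a small, hypothetical example of what such a recall system might record, here is a sketch that writes the outboard settings to a JSON file stored next to the project. The unit name and knob values below are invented for illustration; in practice you might keep this in a project note or a track name instead.

```python
import json
import datetime

# Hypothetical recall sheet for hardware used via ReaInsert.
recall = {
    "project": "MixV3",
    "saved": datetime.datetime.now().isoformat(timespec="minutes"),
    "hardware": [
        {
            "unit": "example compressor",             # invented unit name
            "insert": "ReaInsert on Drum Bus",
            "settings": {"threshold": "-8 dB", "ratio": "4:1",
                         "attack": "30 ms", "release": "auto", "makeup": "+3 dB"},
        },
    ],
}

with open("hardware_recall.json", "w") as f:
    json.dump(recall, f, indent=2)   # re-dial these settings on the next session
```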

Analogue gear

By using Reaper and integrating it with plugins like Acustica Audio Nebula, Blue Cat's PatchWork, DDMF Metaplugin, and Access Analog's Matrix plugin, you can give your mix a more unusual, harder-to-obtain quality. Moreover, incorporating your own analogue gear and implementing a custom recall system within Reaper can further enhance your mixes and streamline your workflow.

With these tips and techniques, you'll be well on your way to creating great mixes that stand out in today's competitive music industry. Do, of course, use your ears and know the techniques, but use these to add a certain special quality to your music and sounds. If you're in a bit of a rut, try them out and see if they take your music production to new heights.

Date: Wednesday, 5 April 2023

Analogue vs. Digital Mixing and Mastering: Is it worth having analogue in the processing chain? - #006

In the world of music production, the debate between analogue and digital processing has been ongoing for a long time now. As more and more producers and mixing engineers have adopted digital plugins and methods, it's become essential to understand the unique benefits and drawbacks of each approach to find the perfect solution for your mixing and mastering needs. Let's explore the world of analogue mixing and mastering and compare it to digital alternatives like Acustica Audio plugins, Slate Digital, Waves, Plugin Alliance, Kush Audio, DMG Audio, FabFilter, and SO many others. We'll also have a look at remote analogue services such as Access Analog and Mix:Analog, the pros and cons of using analogue hardware in the studio, and some of the newer processes of using these in your workflow. Additionally, we'll explore the benefits of using mid-side processing in an analogue chain and how it can make a huge difference in productions.

Analogue vs. Digital: A Brief Overview

Analogue mixing and mastering involve the use of physical hardware, such as mixing consoles, compressors, equalisers, tape machines, and much more. These devices have been the foundation of music production for around 100 years and are revered for the character they impart, their saturation, and that mythical warmth they can add. Some artists, like Jack White of The White Stripes, prefer analogue processing over digital; he famously strove to use it as much as possible on one of his recent albums, so much so that he forbade computers from having anything to do with it.

Digital processing utilises software plugins that emulate the characteristics of analogue gear. Companies like Acustica Audio, Slate Digital, Waves, Plugin Alliance, Kush Audio, DMG Audio, FabFilter, et al, have developed algorithmic and convolution plugins that offer incredible flexibility and control while striving to replicate the sought-after analogue sound. They may leave a little to be desired sometimes, but there are ways of enhancing a plugin further.

The Power of Mid-Side Processing in Analogue Chains

Mid-side processing is a technique that separates audio signals into two components: the mid (centre) and the side (stereo width). This allows engineers to process and adjust these elements independently, providing greater control over the stereo image and tonal balance. Incorporating mid-side processing in an analogue chain can lead to enhanced spatial depth and clarity in your mixes and masters. One easy technique is to send the mid channel into the left and the side channel into the right. This can negate any subtle stereo differences of the hardware gear and keep the centre exactly in the middle.
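
For anyone who wants to try this outside of a plugin, here is a minimal sketch of the encode/decode maths, assuming the common (L + R)/2 and (L - R)/2 convention: the mid channel would be printed to the left input of the hardware and the side to the right, then decoded back to left/right afterwards.

```python
import numpy as np

def ms_encode(left, right):
    mid = (left + right) / 2.0    # centre content
    side = (left - right) / 2.0   # stereo-width content
    return mid, side

def ms_decode(mid, side):
    left = mid + side
    right = mid - side
    return left, right

# The round trip returns the original stereo signal exactly.
rng = np.random.default_rng(0)
L, R = rng.standard_normal(1000), rng.standard_normal(1000)
mid, side = ms_encode(L, R)
L2, R2 = ms_decode(mid, side)
assert np.allclose(L, L2) and np.allclose(R, R2)
```

Because both hardware channels carry mono signals (mid on one side, side on the other), any small left/right imbalance in the gear no longer smears the stereo image; it only nudges the mid/side balance, which is far less audible.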

Access Analog, a remote analogue service, provides mid-side processing on all of their analogue pieces, whereas Mix:Analog unfortunately does not. This added functionality can make a significant difference in achieving the desired sound in less time, giving Access Analog an edge for those looking to harness the benefits of mid-side processing in an analogue environment. It has to be said that Mix:Analog has some other advantages that may not be immediately apparent, and while the built-in option is a time saver, there is no reason why you cannot render your audio as mid-side yourself, although listening back to it then requires a mid-side decoder.

Analogue Sound in the Digital Age: Acustica Audio Plugins, and More

Acustica Audio is known for their advanced technology that closely resembles analogue processing. Their plugins use a technique called "dynamic convolution" to capture the nuanced behaviour of analogue gear. This approach results in a more authentic sound than traditional algorithmic plugins, giving users the best of both worlds: the convenience of digital and the warmth and character of analogue.

Slate Digital, Waves, Plugin Alliance, Kush Audio, and others also offer high-quality plugins that model analogue gear. While they take an algorithmic approach instead of Acustica Audio's VVKT technology, they remain popular choices among music producers for their accurate emulation and user-friendly interfaces. In fact, there is much debate over which sounds truer to its analogue counterpart, which is more practical for live mixing, and the drawback of the high CPU load that VVKT technology requires.

Remote Analogue Services: Access Analog and Mix:Analog

For those who want to experience the benefits of analogue processing without investing in expensive hardware, remote analogue services like Access Analog and Mix:Analog offer a compelling solution, as they essentially allow users to rent out their analogue equipment remotely, just as if they were loading a plugin. These platforms allow users to process their audio through a range of high-end analogue gear via the internet, giving them access to the unique characteristics of analogue hardware without the need for a physical studio setup. With so many productions hitting the charts that were made in a bedroom on only headphones it's almost a disservice to oneself not to check these out.

Analogue Hardware vs. Plugins: The Pros and Cons

While plugins have come a long way in replicating the sound of analogue gear, there are still some differences that may make using physical hardware a better choice for certain applications. Analogue hardware often provides a more hands-on experience, allowing engineers to manipulate the equipment in real-time and fine-tune settings more intuitively.

However, plugins have the advantage of being more accessible and affordable, offering users an extensive range of options at a fraction of the cost of hardware. They also provide the flexibility of working in-the-box, making it easier to recall and modify settings in future sessions.

Ultimately, deciding whether to use analogue or digital processing in your music production depends on your individual preferences and budget. While there is no one-size-fits-all answer, understanding the unique benefits and drawbacks of each approach can help you make an informed decision that best suits your needs. By exploring options like Acustica Audio plugins, Slate Digital and other algorithmic plugins, remote analogue services, and even incorporating analogue hardware into your workflow, you can find the perfect balance between the warmth of analogue and the flexibility of digital to create high-quality, professional mixes and masters.

Date: Monday, 20 March 2023

Misophonia in Movies  - #005

Misophonia is a condition in which individuals experience strong negative reactions, such as panic attacks, anger, and even depression, when exposed to specific sounds. The representation of misophonia in movies can help raise awareness and foster understanding about this lesser-known condition. In this blog post, we will explore the portrayal of misophonia in various films and discuss how these depictions might impact audiences.

1. Trainspotting (1996)

In the critically acclaimed film Trainspotting, there is a scene in which the character Begbie, played by Robert Carlyle, becomes intensely irritated by the sound of a stranger opening a packet of crisps in a pub. Unable to contain his frustration, Begbie reacts violently, demonstrating the extreme emotional response that can be triggered by certain sounds in individuals with misophonia.

2. How the Grinch Stole Christmas (1966)

In the 1966 animated television adaptation of Dr. Seuss's classic tale, How the Grinch Stole Christmas, the Grinch is initially shown to be highly sensitive to the cacophony of sounds from the Whos' musical instruments in Whoville. In one scene, he exclaims, "All the noise, noise, noise!" This portrayal highlights the extreme discomfort and irritation that certain sounds can cause in individuals with misophonia.

3. The Lion King (1994)

In Disney's animated classic, The Lion King, there is a scene where the antagonist Scar, Mufasa's brother, grinds his claws against a rock, producing a chalkboard-like sound. This unsettling sound not only adds tension to the scene but also serves as an example of the type of noise that might trigger a strong negative reaction in someone with misophonia, in this case Zazu, voiced by the British actor Rowan Atkinson.

The portrayal of misophonia in movies, such as Trainspotting, How the Grinch Stole Christmas, and The Lion King, sheds light on the complex emotions and reactions experienced by individuals with this condition. By showcasing these characters and their struggles with sound sensitivity, filmmakers can help to raise awareness and foster empathy for those living with misophonia. Additionally, these depictions can spark important conversations about the need for further research and support for individuals affected by this condition.

Date: Tuesday, 29 November 2022

Producing a game in Unreal using Blueprints and utilising AudioKinetic's Wwise for Sound implementation  - #004

This project allowed me to expand on my skills and focus more on the implementation side of games by utilising Unreal Engine 5's Blueprints and Audiokinetic's Wwise. The project could go further, but it was good to show it at this stage. Check it out below.

Stage 1: The first step was to code the behaviour of the game using Unreal's Blueprints. Visual block coding is something I have done in the past using programs such as ControllerMate and Construct 3, and having coded in Unity using C# for a number of years, the same programming principles apply, only this time it is very visual in nature and easy to manipulate. The other benefit is the syntax: it is hard to get it wrong. This is not quite the same in Unity, and there are still a few things that seem quicker and easier in C# than in Blueprints, but as the project scaled upwards it became apparent how pleasant it is to use Blueprints in Unreal 5.

Stage 2: Once the behaviour was coded, it was time to tweak the code and implement the audio events into it. This was done from scratch using Wwise AkEvents in animations and in the Blueprints themselves. It must be noted that some Blueprints need event dispatchers, which act like portals across Blueprints, triggering behaviour in other scripts that could not otherwise be reached. One example is the GameMode Blueprint, which by its nature lives in the background of the game and does not have a presence on stage, so to speak. An event dispatcher is needed in order to trigger any behaviour from the GameMode in other 'on-stage' scripts.
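
For readers more used to text-based scripting, here is a rough Python analogy for that dispatcher idea. It is not Unreal or Wwise code, just the same pattern: an off-stage object broadcasts an event, and any bound 'on-stage' listener reacts to it.

```python
class EventDispatcher:
    """Minimal broadcast/bind mechanism, analogous to a Blueprint event dispatcher."""
    def __init__(self):
        self._listeners = []

    def bind(self, callback):
        self._listeners.append(callback)

    def broadcast(self, *args):
        for callback in self._listeners:
            callback(*args)

class GameMode:
    """Lives 'in the background'; it never references the audio actor directly."""
    def __init__(self):
        self.on_wave_started = EventDispatcher()

    def start_wave(self, number):
        self.on_wave_started.broadcast(number)

class AudioActor:
    """An 'on-stage' listener that would post the Wwise event in the real project."""
    def handle_wave_started(self, number):
        print(f"Posting AkEvent for wave {number}")

game_mode, audio = GameMode(), AudioActor()
game_mode.on_wave_started.bind(audio.handle_wave_started)
game_mode.start_wave(1)
```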

Stage 3: Creating audio assets and mixing came next. This was done mostly in Reaper, using the WAAPI Transfer plugin to send the assets into Wwise. I also used Audio Damage synthesiser plugins in Logic Pro to create the synthesised elements. The Audio Damage plugins I used have all recently been made free, and most of the sounds came from the FM and subtractive synths Axon, Phosphor, and Basic. For sound FX I used sounds from freesound.org, or from Vege Violence and Swish 2 by Tim Prebble. For the rest of the sounds, I recorded myself creating them with a small selection of microphones.

With some fine-tuning towards the end, the project was in a good enough state to demonstrate. For only a few days' work, this was a great project to have done and to have under my belt to demonstrate my skills with Wwise and Unreal.

Date: Thursday, 1 September 2022

Misophonia in Escape from Tarkov - #003

Misophonia is a neurological condition in which the listener has an adverse reaction to particular sounds, a reaction that can seem unreasonable to people unaffected by the condition. Misophonia sits on a scale of severity, with mild misophonia at one end and severe, life-altering misophonia at the other. It can cause a lot of misunderstanding between people when a non-affected person does not understand or relate to the condition. Lots of people have misophonia and do not know that they have it. They are left to misunderstand and get frustrated with themselves. Not knowing what it is can drive their anger, disgust, and/or frustration at the situations they find themselves in.

If the sound of someone else eating near you causes you to have an adverse reaction, whether angry, frustrated, or malcontented in any way, this could be something you have to some degree. There are some sounds that seemingly affect most people with misophonia, and others that trigger certain individuals but not others.

There is a sonic commonality to the sounds that trigger misophonia. These sounds carry a high amount of transient energy above the 5 kHz mark: sounds like smacking lips and crunching crisp packets have the kind of transient energy that can give the sensation of creepiness, or of something running down your neck or arm. The closer to the listener's ear, the worse the effect: level rises sharply with proximity to the source, and the high frequencies, which the air would otherwise absorb over distance, arrive at the ear almost untouched.
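
One simple way to test this on a suspect sound is to high-pass it at 5 kHz and measure how much of its energy sits up there. The sketch below does exactly that in Python; the file name is a stand-in for whatever clip you want to analyse.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

sr, audio = wavfile.read("crisp_packet.wav")   # hypothetical clip to analyse
audio = audio.astype(np.float64)
if audio.ndim > 1:
    audio = audio.mean(axis=1)                 # fold stereo to mono

# Keep only the content above 5 kHz, the region associated with trigger sounds.
sos = butter(4, 5000.0, btype="highpass", fs=sr, output="sos")
highs = sosfilt(sos, audio)

share = np.sum(highs ** 2) / np.sum(audio ** 2)
print(f"{share:.1%} of the clip's energy lies above 5 kHz")
```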

Whilst Escape from Tarkov is a tremendously popular game, with huge success and a very large player base, there is plenty of backlash about the audio in the game and how unsuitable it is in its current state. Having suffered from misophonia, I have found that Escape from Tarkov's audio shares many similarities with sounds commonly associated with the condition. The way the footsteps and clothing rustles sound gives that creepy sensation down the neck or arm, as if a stranger were whispering in your ear. It feels intrusive and disconcerting. Most games do not have this problem, but FPS games with recording and mixing chains that capture sources too closely may be inadvertently causing issues for people with misophonia around the world.

Here is a video demonstrating some of the problematic audio heard in Escape from Tarkov. See what you think.

Date: Friday, 12 August 2022

Hyperacusis in FPS games - #002

Hyperacusis is a heightened sensitivity to sound: a fear of, or strong psychological and physical reaction to, sudden changes in perceived volume. It is frequently reported in pets and animals, as in the episode of The Simpsons below. Although the example is humorous, it is of its time, and the shock that loud noises cause pets and animals is not humorous at all.

It can be quite debilitating in humans, and I have also found it so in some gamers. As hyperacusis is not a well-known condition per se, I do wonder whether some people have it without realising. Here is a case in point.

No doubt this is quite an extreme stance on game audio, as the top comment underneath is "Footsteps in my headphones are literally how I win games". Then again, it was posted in 'unpopular opinion' because it is not the norm.

The case above uses a word commonly associated with shooter games: jarring. A lot of people report jolting whilst playing FPS games because of the audio, me included, and some streamers have even gained a following because of the way they jolt on camera and visibly lose control of their movements. According to some viewers, this is what gives them a sense of a genuine reaction from the streamer.

I have lost motor control when playing FPS games, although this is something I don't want to do or revisit. When playing Apex Legends one evening I set the volume to a comfortable level so the dialogue was clear and not too loud. I was immersed in the game and listened to the environment and the banter between my teammates. When engaging the enemy with the team I lost control. I ended up running away jolting from side to side each time the enemy shot at me. Soon after I was dead having tried to escape.

The reason for this loss of motor control was the audio. Normally I send the audio through a compressor to tame the loudest peaks, and by accident I had bypassed it without realising (an audio engineer's nightmare). Normally the compressor allows me to keep fighting with the volume at a level where it doesn't cause jolts, and I swear this has allowed me to win games on occasion. Many streamers alter their audio settings to best suit their style, with many competitive Rainbow Six Siege players using either the TV or Night Mode audio profile settings to give them a fighting edge. The question to ask here is: why are they having to do that? I have also noticed that iTzzTimmy, a popular Apex streamer, has altered his audio so there is no music and only sound effects.

After realising there was a huge difference in level between the dialogue and environmental sounds compared to the gunshots, I made some measurements and found that the gunshots' sound pressure was a full 10 times higher in pascals, which is 20 dB above the level at calm moments throughout the game.
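
For completeness, the arithmetic behind that figure: decibels for a pressure ratio use twenty times the base-10 logarithm, so a tenfold increase in pascals comes out at exactly 20 dB.

```python
import math

pressure_ratio = 10.0                                 # gunshots vs. calm moments, in pascals
level_difference_db = 20 * math.log10(pressure_ratio)
print(level_difference_db)                            # 20.0 dB
```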

Transients and low frequencies were particularly strong in the weapon sounds. The question that still needs answering is whether hyperacusis is frequency-dependent or simply a matter of volume difference. It is a very interesting topic in terms of Loudness Units relative to Full Scale (LUFS): what is considered too small a dynamic difference, and equally what is considered too large a sonic difference.

Date: Thursday, 4 August 2022

The Soundtrack of Factorio - #001

Factorio has a soundtrack that has garnered a lot of positive responses from the community since it was introduced to the game in 2016 by the game's composer and sound designer, Daniel James Taylor. The music lends itself to the game's setting and time period very well: the science-fiction setting is picked up by the synth elements, the loneliness by the ambient nature of the soundtrack, and the grand scale and trepidation of the world by the orchestral instruments.

There are occasions where the music feels tense and mysterious, with tremolo violin interjections whilst double basses and cellos play long notes in octaves in cenSeq's Discrepancy. Instruments flutter in and out with reverberant male choir ahhs and closer female choir ahhs. There are also occasions where the tone is reversed and a more pleasant, tranquil, and calm feeling is created. A sense of awe and wonder is felt at the use of choir and strings with a low heart-beat-like thud, such as in the track Sentient.

The soundtrack is very varied, and it is hard to know from a first listen while playing the game how long it lasts. The OST has a total runtime of 1 hour and 16 minutes, about the length of a long film, a double album, or a long stint at playing a video game. Across those 76 minutes there are relatively few rhythmic instruments in the soundtrack. There are a few moments where rhythmic instruments enter, but they are kept quite reverberant, such as in the track Efficiency Program, or in Are We Alone. Since release, the soundtrack has had some subtle changes: some reverberant elements have been added, and the mix has been altered. Throughout the game, the rhythmic drive of the soundtrack is mostly confined to pitched synth instruments that are arpeggiated or simply play ostinato patterns. This keeps the pace of the game and the tension of the soundtrack quite high without relying on rhythmic instrumentation; percussion is present in the mix, but it is the ostinato synths that drive the rhythm of each track.

In a recent video by Trupen on YouTube and Twitch, the streamer played version 0.6.4 of the game, from before the soundtrack had been created. Royalty-free and placeholder sounds and music were being used at the time, and the sound and music were almost the first things Trupen commented on as being much worse in the older version, saying "That walking sound, you hear that?" [proceeds to walk around] "Awesome /s", laughing at it and stressing the 's' for sarcasm. It's very interesting to consider what the differences between the sounds are, and what makes them more immersive in the later version once Daniel James Taylor had redone the sounds and music.

It's not quite enough to quote a single YouTuber to prove a point, but whilst the sounds and music are different from what they once were, it certainly shows they are tremendously successful at bringing the world of Factorio to life in their own way. Something noticeable about the sounds and music in Factorio is that they share a sound palette, and nothing appears out of kilter with the rest of the audio. The audio provides immersion for the player, an identity for the game, and a calm yet tense atmosphere that keeps the player coming back.

Date: Thursday, 4 August 2022