MUS3164 Foley and ADR Studies – Producing a Realistic Soundtrack for Film

A Background to Sound in Film

            When working with sound in film, there are five main elements to consider: atmosphere, special effects, dialogue, foley and music. These are referred to as stems and are worked on separately during the three stages of producing sound for film: pre-production, production, and post-production.

            Foley sound effects are captured off-set and may range from the small rustling of clothes to larger sounds such as doors slamming and car engines (Holman, 2002: 71). Recording foley exploits sight-on-sound localisation, as humans associate the locations of sounds with visual stimuli (Tabry, Zatorre and Voss, 2013). Foley sound effects are therefore used to produce a transparent and believable soundtrack for the listener, supplying sounds that correspond to visible events on-screen.

            Dialogue in film is often captured through a mix of production and post-production sound, where some dialogue is captured on-set and some is captured later using automated dialogue replacement (ADR). ADR involves recording lines of dialogue in a studio, as some production dialogue may not be usable due to environmental interferences (Hellerman, 2020).

            This blog intends to document the process of capturing foley and ADR in a group project for a short film clip, including preparations, equipment, microphone placement and editing. The clip chosen for this project was taken from the film Home Alone (1990).

Essential Preparations (Pre-Production)

            Prior to commencing audio recording, preparations must be made to ensure the production process runs smoothly and to prevent issues such as lost files or unusable audio. For our group project, I captured the appropriate clip from Home Alone using QuickTime, then used DaVinci Resolve to ensure the frame rate of the clip matched the film industry standard of 24fps. This was an essential step, as the Pro Tools session used for editing must match this frame rate to ensure accurate synchronisation between audio and visuals. DaVinci Resolve was also used to add the timecode to the clip, which was later used to locate points of audio when recording foley and dialogue.
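As a rough sketch of how timecode relates to frame counts at 24fps (the conversion logic is standard; the example timecode and function names are illustrative, not taken from our session):

```python
FPS = 24  # frames per second: the film-industry standard used for our clip

def timecode_to_frames(tc: str, fps: int = FPS) -> int:
    """Convert an 'HH:MM:SS:FF' timecode string to a total frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_timecode(frames: int, fps: int = FPS) -> str:
    """Convert a total frame count back to 'HH:MM:SS:FF'."""
    ff = frames % fps
    total_seconds = frames // fps
    hh, mm, ss = total_seconds // 3600, total_seconds // 60 % 60, total_seconds % 60
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(timecode_to_frames("00:01:30:12"))  # 90 s * 24 fps + 12 frames = 2172
print(frames_to_timecode(2172))           # "00:01:30:12"
```

If the session and clip frame rates disagree, the same timecode points at different frames, which is why matching them is essential for synchronisation.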

After preparing the clip to the appropriate film standard, it was saved to a named folder, alongside other named folders made in preparation for the foley and dialogue recordings. File management is essential when working on sound in film, as missing files can be detrimental to a project. The workflow must also be efficient due to the pace of the film industry, as regularly discussed in top-tip style websites such as Your Survival Guide to Working on a Hollywood Movie Set (Ponic, 2022).

Parent folder and files for our project

Lastly, a spotting list and cue sheet were prepared to identify locations and dialogue prior to recording foley and ADR. This ensured good time management and planning of hiring equipment, organising places to record audio, and making sure no on-screen cues were missed.

Spotting list

Capturing Foley (Production)

            The process of recording foley sound effects began with identifying the appropriate recording equipment to capture high-quality audio. This included a Sennheiser mini boom microphone, a Roland R-44 field recorder, and headphones for listening back to recorded clips. A low-cut filter was applied on the R-44 to eliminate low-frequency rumble from the recordings. Keeping the clips clean preserved the quality of the audio, making editing easier and the soundtrack more transparent, which in turn allows the viewer to become more immersed in the story.

            To further immerse the viewer, all audio was recorded in stereo, creating a sense of space and dimension, as we locate sounds according to visual stimuli (Tabry, Zatorre and Voss, 2013). Such localisation operates in three dimensions: horizontal, vertical and depth (Holman, 2014: 30). Stereo sound introduces the horizontal aspect of localisation, emphasising space and direction and reinforcing the character movements seen on screen.

            To create a further sense of space, buzz tracks were recorded for each scene. These are low-level audio tracks that represent room tone and atmosphere (Filmmakers Academy, 2021). Using them to create a sense of place can be beneficial to a character, as certain sounds become recognisable to the viewer, enhancing the storyline (Harrison, 2021). Buzz tracks are also useful for masking silences between dialogue clips, creating a seamless soundscape and an atmosphere that is believable to the viewer.

            When recording specific foley sound effects, a clear pick-up was ensured by pointing the directional mini boom microphone on-axis to the sound source. By exploiting the dead side of the microphone, soundwave reflections were prevented from entering the diaphragm; reflections arriving at the microphone at different times can otherwise cause phase cancellation (Farnell, 2010). As a result, clear audio of the desired sound source was captured that could be easily edited in post-production.
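The cancellation described above can be sketched numerically. Assuming (purely for illustration) a 1kHz tone whose reflection travels half a wavelength further than the direct path, the reflected wave arrives inverted and the two largely cancel at the diaphragm:

```python
import math

freq = 1000.0        # Hz: illustrative tone frequency
delay = 0.5 / freq   # reflection delayed by half a period -> inverted on arrival

peak = 0.0
for n in range(1000):
    t = n / 48000.0  # sample the summed waveform at 48 kHz
    direct = math.sin(2 * math.pi * freq * t)
    reflected = math.sin(2 * math.pi * freq * (t - delay))
    peak = max(peak, abs(direct + reflected))

print(f"peak of summed signal: {peak:.6f}")  # ~0: the two waves cancel
```

In practice the delay differs per frequency, so reflections cause partial, comb-like cancellation rather than total silence, which is why keeping them on the microphone's dead side matters.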

            Finally, our group made sure to announce takes verbally on each audio recording. This made file management and trialling different takes easier in the post-editing process, as each take could be easily located and identified.

Capturing ADR (Production)

            Like the foley recording process, parent folders were made prior to capturing ADR in the studio. This ensured that files could be properly organised and none went missing. Before recording, cues were added to the timeline of the clip in Pro Tools, alongside pips for the voice actors to follow when speaking their lines. Making these preparations before entering the studio made the recording process simple and time-efficient, essential when working in the fast-paced film business.

            When ready to enter the studio, acoustic panels were used to absorb unwanted reverberation. This created a dead sound that could later be manipulated in post-production, for example with reverb, to create a sense of space. Using a high-quality shotgun microphone and limiting background noise produced clean audio tracks by preventing variations in room tone. Such variations are harder to mask in dialogue than in foley, as dialogue is generally the focus of the narrative due to the information it conveys to the viewer and its role in character development (Kozloff, 2000).

ADR Recording set-up

            While recording, the voice actors were able to see the clip playing on a screen to make synchronising the dialogue easier. Each take was then organised into playlists in Pro Tools. This made auditioning the takes in post-production more efficient than searching through an abundance of tracks in the session. A preferences (prefs) track was also used to organise the favoured takes, though this was utilised more in post-production to limit time spent in the studio.

Editing (Post-Production)

            The prefs track allowed the organisation of all the best dialogue takes recorded in the studio, eventually forming the final dialogue stem. When placing sounds into the prefs tracks for both dialogue and foley, the attack transients of the sounds were used to align them with on-screen events. In film production, a clapperboard is used to create a prominent attack transient for synchronising audio and visuals (Leonard, 2019). Using attack transients in our project synchronised the audio appropriately, creating seamless transitions and transparency within the soundtrack. Fades were also applied to the dialogue and foley tracks to prevent harsh bursts of sound.
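The fades mentioned above can be sketched as simple linear gain ramps at the clip boundaries (the ramp shape and length here are illustrative assumptions, not the exact fades drawn in our session):

```python
def apply_fades(samples, fade_len):
    """Linearly ramp the first and last fade_len samples of a clip."""
    out = list(samples)
    n = len(out)
    for i in range(min(fade_len, n)):
        gain = i / fade_len
        out[i] *= gain          # fade-in: ramp up from silence
        out[n - 1 - i] *= gain  # fade-out: ramp down to silence
    return out

clip = [1.0] * 10               # a constant-level clip for illustration
faded = apply_fades(clip, 4)
print(faded[:4])   # [0.0, 0.25, 0.5, 0.75]: no abrupt onset
print(faded[-4:])  # [0.75, 0.5, 0.25, 0.0]: no abrupt cut-off
```

A clip that starts or ends mid-waveform otherwise produces a click, which is exactly the "harsh burst" the fades prevent.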

            The editing of the dialogue tracks included reverb, panning, and volume adjustments based on the setting and the characters’ positions on screen. This enhanced the narrative of the clip, as it follows the concept of locating sounds visually (Tabry, Zatorre and Voss, 2013), increasing realism for the viewer and creating a sense of space for the characters. Reverb was implemented using buses, giving greater control over how much reverb was applied to each track and allowing some of the dry signal to be retained to maintain the authenticity of the audio.

After editing the dialogue based on location, direction and narrative purpose, this stem was bounced out of Pro Tools and saved as an audio file that could be imported into a new project alongside the other stems.

            The foley sound effects required further editing beyond volume, reverb, and panning. This included pitch shifting and granular synthesis to alter the fundamental characteristics of the recorded sounds. Layering these edited tracks allowed the creation of detailed sounds, such as the gun-loading scene at 1:01, producing believable sound effects for the viewer and enhancing the narrative of the clip. This is a common method of producing sound effects in film, used by famous sound designers such as Ben Burtt, who used a similar technique to create the lightsaber sound for Star Wars (Star Wars, 2014).

VCA faders were also used as part of our project to efficiently control groups of faders, such as the atmosphere tracks, ensuring consistent levels throughout the clip. The stems were then imported into a final Pro Tools project, where they were appropriately mixed. Here, the group project was concluded, with the results available in the video below:

Summary

To summarise, the production process of sound design in film was followed accurately throughout our group project. The appropriate preparations were made, such as cue sheets and good file management, to avoid issues later on. The foley and dialogue were recorded using techniques to combat reverberation and background noise, resulting in a clean and professional soundtrack for the short Home Alone clip, attached below:

Bibliography

Farnell, A., 2010. Designing sound. Cambridge, Mass.: MIT Press.

Filmmakers Academy, 2021. Buzz Track. [online] Filmmakers Academy. Available from: https://www.filmmakersacademy.com/glossary/buzz-track/ [Accessed 11 April 2022].

Harrison, T., 2021. Sound Design for Film. Ramsbury, Malborough: Crowood Press, p.48.

Hellerman, J., 2020. What Is ADR and Why Is It Important?. [online] No Film School. Available from: https://nofilmschool.com/what-is-adr-in-film [Accessed 11 April 2022].

Holman, T., 2002. Sound for film and television. Boston, Mass.: Focal Press, p.71.

Kozloff, S., 2000. Overhearing film dialogue. Berkeley: University of California Press, p.33.

Leonard, K., 2019. The Film Slate Explained — An A-Z Guide for 2nd ACs. [online] StudioBinder. Available from: https://www.studiobinder.com/blog/how-to-use-a-film-slate/ [Accessed 13 April 2022].

Ponic, J., 2022. Your Survival Guide to Working on a Hollywood Movie Set. [online] ReelRundown. Available from: https://reelrundown.com/film-industry/Your-Survival-Guide-to-Working-on-a-Hollywood-Movie-Set [Accessed 11 April 2022].

Star Wars, 2014. Ben Burtt Interview: The Sound of Lightsabers. [online video] Available from: https://www.youtube.com/watch?v=TJQ3_tipGEY [Accessed 14 April 2022].

Tabry, V., Zatorre, R. and Voss, P., 2013. The influence of vision on sound localization abilities in both the horizontal and vertical planes. Frontiers in Psychology, 4.

Production and Mastering MUS2057

Introduction

This blog aims to discuss the processes and techniques involved in producing, mixing, and mastering a piece of music in the studio to a professional standard.

Studio Set Up and Organisation

Cables

To keep my workspace organised, I ensured that all cables were routed to a stage box (snake) and coiled next to microphone stands. This made it easier to pinpoint different channels when making changes during recording, and also made troubleshooting simpler.

Microphone Placements

I utilised two different microphone selections for guitar and vocals to capture the best sound from both in terms of dynamic and spectral aspects. 

For acoustic guitar, I used an AKG C414 condenser directed towards the bottom end of the instrument, along with a Neumann KM184 condenser pencil microphone pointing at the 12th fret. This method allowed me to capture all the necessary spectral qualities to produce a high-quality recording, as the AKG picked up the lower-end frequencies and the Neumann captured the higher frequencies from the neck.

Guitar Microphone Placement

Both microphones were placed around 5-6 inches away from the guitar itself to prevent proximity effect, which would reduce the quality of the captured audio. Proximity effect is a phenomenon in which a directional microphone’s low-frequency response is boosted when it is placed close to the source, which can lead to the bass end becoming muddy.

Neumann U87 and Shure SM58

For vocals, I used the Neumann U87 condenser and a Shure SM58 dynamic microphone, both placed behind a pop shield to prevent plosives from affecting the quality of the recording. The Neumann U87 is a classic condenser microphone, ideal for capturing the detailed spectral aspects of the human voice. The Shure SM58 was used to add warmth to the vocal track and reduce any artificial qualities, as dynamic microphones have a slightly less extended frequency response than condensers.

Keeping both microphones at the same distance from the sound source helped prevent phasing issues, a phenomenon where sound reaches the microphones at slightly different times, causing the captured sound waves to be out of sync with each other. This can lead to the reduction or complete loss of frequency bands as the sound waves cancel each other out.

Vocal Microphone Placement

Reverberation

Reverberation of sound waves within a recording space can significantly impact the quality of a recording by blurring transients and making tracks sound less clean. To prevent this, I used acoustic panels when recording all instruments, as well as a reflection filter for vocals. This equipment absorbed reflected sound waves, stopping them from reaching the microphones’ diaphragms at different times. The captured audio therefore remained clean and high quality, making the later mixing and mastering process much easier.

Reflection Filter

Multitrack Recording 

There are different recording methods to be utilised in the studio, each serving different purposes. Live recording refers to the capture of multiple sound sources simultaneously without overdubbing, whereas multitrack recording captures each source to its own track, with the tracks later combined into one project.

I used the multitrack method for my work, as it allowed me to isolate the instruments according to what processing and effects were required, resulting in a better produced sound overall. Furthermore, it meant that multiple takes of each instrument could be recorded, with the best ones chosen for the final product. 

Mixing and Mastering 

In the mixing and mastering process, there are four fundamental areas that contribute to a professionally finished project: dynamic, spectral, spatial, and temporal aspects. Before working on these elements, I made sure all my work was opened and saved into a parent folder, ensuring all audio files were routed correctly to prevent missing tracks when the project was saved onto a USB drive.

Pro Tools Session Set-Up

The Pro Tools project was created with a sample rate of 44.1kHz and a 24-bit bit depth. These settings control the quality of a recording: a 44.1kHz sample rate can represent frequencies up to half that rate (22.05kHz), covering the full range of human hearing (roughly 20Hz – 20kHz), while a 24-bit depth provides a sufficient number of amplitude steps for each sample.
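The implications of these settings can be checked with simple arithmetic, assuming the standard Nyquist relationship and the roughly 6dB-per-bit rule of thumb for linear PCM:

```python
sample_rate = 44_100   # Hz
bit_depth = 24         # bits per sample

nyquist = sample_rate / 2            # highest frequency the session can represent
levels = 2 ** bit_depth              # distinct amplitude steps per sample
dynamic_range_db = 6.02 * bit_depth  # theoretical dynamic range of linear PCM

print(nyquist)           # 22050.0 Hz, above the ~20 kHz limit of human hearing
print(levels)            # 16777216 amplitude levels
print(dynamic_range_db)  # ~144.5 dB
```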

To organise my session, I used markers and playlists to keep track of the song structure and takes to make the editing process more efficient. Markers allowed easier navigation throughout the project to revisit sections and work on specific parts. 

Dynamic Aspects

Dynamics refers to audio levels in terms of decibels as well as perceived loudness. Decibels represent sound levels on a relative logarithmic scale: each increase of 10dB corresponds to ten times the sound power (perceived as roughly twice as loud). Levels on a digital audio workstation are measured in dBFS (decibels relative to full scale) and sound pressure entering a microphone in dBSPL, but there are other forms of decibels, such as dBV and dBu, that apply to different circumstances.
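These relationships can be sketched with the standard decibel formulas: 10·log10 for power ratios and 20·log10 for amplitude ratios (such as the voltages underlying dBV and dBFS):

```python
import math

def power_ratio_to_db(ratio: float) -> float:
    """Convert a power ratio to decibels."""
    return 10 * math.log10(ratio)

def amplitude_ratio_to_db(ratio: float) -> float:
    """Convert an amplitude (e.g. voltage) ratio to decibels."""
    return 20 * math.log10(ratio)

print(power_ratio_to_db(10))       # 10.0 dB: ten times the power
print(power_ratio_to_db(2))        # ~3.01 dB: double the power
print(amplitude_ratio_to_db(0.5))  # ~-6.02 dB: halving a signal's amplitude
```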

When setting the levels at the desk for recording, I ensured that each track did not exceed –16dB to leave headroom for any sudden spikes in loudness. This prevented tracks from clipping and distorting, which can have a significant effect on the quality of a studio recording. 

Gain Adjusting

Perceived loudness (measured in LUFS, a K-weighted measurement) refers to how sound levels are perceived by the human ear. For example, low frequencies are often perceived as quieter than midrange frequencies at the same decibel level. When mixing my project, I took into consideration the destination medium, which would be online streaming platforms such as Spotify. Spotify normalises playback to -14 LUFS, so I ensured my recording’s integrated loudness sat around -14 LUFS so that it was at an appropriate level for the platform. It is also common to mix against reference tracks of the same genre or listening medium to ensure comparable loudness levels, which I did when mixing my project.
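Loudness normalisation can be thought of as a simple gain offset between a track's measured integrated loudness and the platform's target; the measured value below is hypothetical, and Spotify's published target of -14 LUFS is assumed:

```python
target_lufs = -14.0     # Spotify's published normalisation target
measured_lufs = -9.5    # hypothetical integrated loudness of a loud master

gain_db = target_lufs - measured_lufs  # playback gain the platform would apply
print(gain_db)  # -4.5: the track would simply be turned down by 4.5 dB
```

Mastering close to the target therefore avoids having the platform undo loudness that was gained at the expense of dynamics.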

To further manage audio levels, I applied a compressor to each track, tailoring the attack, release and threshold to each instrument to produce the desired effect. For acoustic guitar, I used a -18dB threshold, meaning that when the level exceeds this, the compressor reduces it slightly. I set an attack of 1.8ms so the compression was not too harsh, but rapid enough to catch the signal efficiently. For the lead and backing vocals, I used the “vocal levelor” pre-set, which set the attack to 14ms and the release to 25ms, producing a much softer compression when the level surpassed the threshold of -28dB.
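A hard-knee compressor's static behaviour can be sketched as follows, using the -18dB guitar threshold described above and an illustrative 4:1 ratio (the actual ratio used is not recorded here):

```python
def compressed_level(input_db: float, threshold_db: float = -18.0,
                     ratio: float = 4.0) -> float:
    """Output level (dB) of a hard-knee compressor for a given input level."""
    if input_db <= threshold_db:
        return input_db                   # below threshold: signal unchanged
    excess = input_db - threshold_db      # amount the signal exceeds the threshold
    return threshold_db + excess / ratio  # excess is divided by the ratio

print(compressed_level(-24.0))  # -24.0: below threshold, untouched
print(compressed_level(-10.0))  # -16.0: 8 dB over the threshold becomes 2 dB over
```

Attack and release then govern how quickly this gain reduction is applied and removed, which is why the slower 14ms/25ms vocal settings sound softer.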

Lastly, I used a VCA fader (voltage-controlled amplifier) to control the lead vocals so that I could change the overall level while maintaining the relative balance between the Neumann and SM58 microphones.

Spectral Aspects

Spectral aspects of recording involve the frequency spectrum, with the aim to balance all frequencies appropriately and achieve a desired effect by boosting or limiting certain bands. 

When mastering my project, I applied a high-pass EQ to each track to remove the bottom-end frequencies from each instrument (around 200Hz and below). This eradicated low rumbles that may have caused the recording to sound messy or muddy. I also boosted selected frequency bands in each instrument to give each its own spectral space in the mix. For example, I boosted the midrange frequencies (around 1kHz) in the acoustic guitar to add resonance, and the high frequencies (around 10kHz) in the lead vocals to add an airy quality.
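The low-cut can be sketched as a one-pole high-pass filter, assuming the 200Hz cutoff and the session's 44.1kHz sample rate; this minimal recursion is an illustration, not the EQ plug-in actually used:

```python
import math

def one_pole_highpass(samples, cutoff_hz=200.0, sample_rate=44100.0):
    """Attenuate content below cutoff_hz while passing higher frequencies."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        # each output tracks the *change* in input, rejecting slow low-frequency drift
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

def tone(freq, n=4410, sr=44100.0):
    """A 0.1-second test sine wave."""
    return [math.sin(2 * math.pi * freq * i / sr) for i in range(n)]

rumble = one_pole_highpass(tone(50.0))   # 50 Hz rumble, well below the cutoff
vocal = one_pole_highpass(tone(1000.0))  # 1 kHz content, above the cutoff

print(max(abs(s) for s in rumble))  # heavily attenuated (roughly a quarter of input)
print(max(abs(s) for s in vocal))   # passes almost unchanged
```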

Spatial Aspects 

Controlling spatial aspects during the mixing and mastering process focuses on the stereo image (left and right) and the Z axis (front and back) of a recording. Sound should be balanced between all these points in the mix.

In my project, I kept the guitar and vocal tracks in the centre of the stereo field, as these are the fundamental parts of the song and panning them to the left or right would cause the project to sound unbalanced. However, I panned the harmonies (introduced in the second verse) to a mixture of left and right to widen the stereo field.

To add a sense of space and front to back perspective to my recording, I introduced reverb in specific amounts to each instrument. For the acoustic guitar, I used a small reverb with a short decay of 1 second to further enhance the resonant quality I wanted to achieve as part of my final product. For the lead vocal, I layered two separate reverbs to add depth, with a longer decay of 3 seconds to enhance the stereo width of the vocal and expand its space within the mix. The backing vocals had an even longer decay time (4.5 seconds) to create space and add texture behind the main elements of the recording. This also contributed towards the expansion of the stereo image and the Z axis. 

Temporal Aspects

Structure is a fundamental aspect of recording a finished piece of music and is something that is worked on prior to entering the studio. I spent considerable time working on my song and deciding on the best structure to maintain its engaging elements and prevent it from becoming dull or repetitive. I did this by adding harmony parts throughout to create a more complex texture, as well as percussion in the second verse to maintain interest and variety in instrumentation. This ensures the listener’s attention is sustained throughout the entirety of the song.

Overall, the methods employed in live room set-up, as well as mixing and mastering techniques all had a significant impact on the quality of my studio recording. I utilised microphone placement to capture good quality audio from the instruments, then used resources in Pro Tools to mix these appropriately and produce a professional sounding project.

MUS2056 Recording and Mixing

Introduction

This blog will discuss recording techniques utilised in the recording of my originally written song for MUS2056. The aim of this project was to create two distinct mixes by applying a variety of studio techniques and microphone placements. 

1.0 Preparing for a Studio Session

1.1 The Song Writing Process

It is important prior to recording to have a finished and polished song. I began the song writing process weeks in advance to create well-rounded material to record. I implemented an interesting chord structure that included 7th chords to generate interest and add complexity throughout. I wrote the vocal melody to complement this and bring out the 7th qualities, as well as stacked harmonies introduced in stages to progressively build texture.

Something I found difficult during this process was constructing a musical structure that maintains the compelling aspects of the song and keeps the listener interested. To tackle this, I implemented an acoustic guitar solo after the second chorus. This provides an auditory break in which the listener can experience something new and regain attentiveness to the music. 

1.2 Practice

Another important aspect prior to entering the studio is preparation and practice. It is essential for musicians to have practiced their parts and to know the material well (Tips for Musicians Before Arriving at the Studio, 2017), as time within studios is often limited and costly.

Jack (guitarist) and I (vocalist) were both well practiced in our parts prior to recording. This made the process smooth and efficient, as no provisional work was required during studio time.

Another way I ensured sufficient preparation was assembling notes as guides for song structure. These were used as prompts for both me and Jack to prevent the need for retakes. This was beneficial as it ensured studio time was maximised and used efficiently. 

Song Structure and Lyrics

2.0 Live Room Set Up

A studio arrangement generally consists of a live room and a control room. The live room is where instruments and microphones are set up. There are specific methods and techniques followed in the live room to ensure all equipment remains organised and looked after. 

2.1 Attention to All Instruments

With multiple instruments to record (guitar and vocals), I made sure that each one had an equal amount of attention to detail when setting up and packing away. This made certain that the best possible sound could be achieved from both instruments, resulting in a high-quality recording. 

2.2 Cable Management

When working in the live room, a variety of cables may be needed to patch instruments through to the control room using the patch bay. Microphones generally require an XLR cable; I used three in total when setting up to record the acoustic guitar, including one for a talkback microphone. To keep these cables organised I made use of a stage box (or “snake”). This consists of multiple inputs and outputs, allowing cables to be consolidated in a single area. This makes troubleshooting easier should any problems arise and prevents cables from becoming a trip hazard.

Stage Box (Snake)

Cables were coiled next to microphone stands, and were coiled again when put away between instruments and after the recording session had finished. This further contributed to the organisation of the live room, allowing the recording process to run smoothly.

2.3 Preventing Reverberations

A common cause of unwanted noise on recordings is reverberation within the recording space. This occurs when a sound reflects off surfaces at differing distances, causing the reflections to reach the microphone’s diaphragm at varying time intervals. This can create a “muddy” effect as the transients of instruments become blurred, reducing the overall quality of a recording.

For vocals and acoustic guitar, I utilised acoustic panels to absorb reverberations and produce a cleaner sound. Also, I used a reflection filter for the vocals to further prevent reverberations being picked up by the microphone. Having clean recordings meant that unwanted noise was not increased as the signals passed through different gain stages during the recording and editing process. 

Acoustic Panels and Reflection Filter

I also used a pop shield when recording vocals to prevent plosives from affecting the quality of the recording by causing the audio to clip and distort. This technique is widely utilised in the recording industry and is specifically described by Jason de Wilde, who suggests it significantly preserves the quality of a recording and makes it sound more professional.

Pop Shield and Reflection Filter

3.0 Microphones and Their Placement

3.1 Microphones Overview

Microphones are transducers that convert changes in air pressure into a voltage so that sound can be received and manipulated via a digital audio workstation (DAW). Different types of microphones have varying characteristics, with two common types being condenser and dynamic microphones.

Dynamic microphones have a diaphragm attached to a coil of wire that sits within a magnetic field. Changes in air pressure (measured in dBSPL) cause the diaphragm to oscillate, moving the coil within the field and inducing an electrical signal through electromagnetic induction. This type of microphone produces a well-rounded sound (Dynamic Microphone, 2020) and is often used in live performances due to its durable design.

Condenser microphones have a diaphragm that sits parallel to a fixed metal plate, with a constant voltage held across the two. This type of microphone requires phantom power (typically +48V) provided via a mixing desk or audio interface. When changes in air pressure cause the diaphragm to oscillate, the capacitance between the plates changes, producing an electrical signal. Condensers usually employ a cardioid polar pattern but can often be switched to others, such as figure-8. They are commonly utilised in studio recordings due to their extensive frequency response.

3.2 Acoustic Guitar

With the goal of producing two contrasting mixes, I used two condenser microphones positioned differently to pick up separate elements of the guitar. I placed an AKG C414 in front of the sound hole to capture the lower frequencies of the instrument, and a Neumann KM184 pointed at the 12th fret to pick up the higher frequencies and the overall guitar sound. This positioning resulted in an accurate representation of the full frequency spectrum that I could use to produce different mixes later.

Having a small-diaphragm condenser directed at the 12th fret is a technique recommended and used by Rick Beato. This method prevents finger-picking sounds from being too audible on the track, while picking up a balance of frequencies from both ends of the guitar.

Rick Beato also recommends using a second, larger-diaphragm microphone on the bottom end of the guitar to catch bass frequencies, which I adapted slightly by pointing it directly at the sound hole, further enhancing the lower end and therefore providing versatile mixing opportunities.

Both microphones were directional and featured a cardioid polar pattern. This prevented the detection of sound from behind the microphone, limiting unwanted reverberations and noise that may not have been stopped by the acoustic panels. 

3.3 Vocals

For vocals, I used the AKG C414 along with a Shure SM58 dynamic microphone. The addition of a dynamic microphone to my vocal track contributed a warmer tone to the vocals, making them sound less artificial in the final product. 

AKG C414 and SM58

As recommended by Jason de Wilde, I sang into the microphone from a short distance, moving much closer for quieter and more intimate lines. De Wilde states that this technique allows the microphone to pick up small details such as sibilants and vocal tone. As the vocals were on the more intimate side for this recording, this worked well in capturing specific characteristics and features of the voice.

4.0 Control Room Set-Up

4.1 File Organisation

When recording on a DAW (digital audio workstation), it is important to have files organised and properly routed to prevent missing audio files when a session is opened on a different computer. Therefore, I created a “parent folder” before setting up a Pro Tools session and routed everything to be saved in this location.

4.2 Setting Up the Pro Tools Session

When setting up a Pro Tools session, the sample rate and bit depth must be set. The sample rate determines how many times per second the audio signal is sampled, while the bit depth determines the resolution with which each sample’s amplitude is represented. My project used a sample rate of 44.1kHz and a bit depth of 24-bit.

The condenser microphones I used for my project required phantom power to operate. This was switched on at the mixing desk and turned off whenever changes were made in the live room, as a precaution against damaging the microphones. The SM58 was placed in a separate bank of tracks with no phantom power, as dynamic microphones do not require an external voltage to function.

Phantom Power on the Mixing Desk

In my Pro Tools session, I ensured that each track was set up as a mono audio track and labelled according to the microphone and placement in the live room. This made it much easier to pinpoint controls for each microphone on the desk if adjustments were needed throughout. 

To further organise my session, I used playlists for each guitar and vocal take. This allowed me to store the takes in one place, rather than creating an abundance of new tracks for each one. 

4.3 Setting Up the Desk

Prior to recording, I ensured that the monitor speakers were switched on, the control room volume was turned up and the desk was unmuted. I then record-enabled each track to allow Pro Tools to record the takes.

The Mixing Desk

I also adjusted the gain for each microphone to the appropriate levels, ensuring they stayed consistently around -16dB to leave headroom for any spikes and prevent clipping or distortion. Popping sounds due to clipping can have a significant effect on the quality of a recording. Preventing this with adequate gain levels means all takes will remain clean and professional sounding. 

5.0 The Mixing Process

The goal of this project was to create two contrasting mixes by utilising studio set-up techniques and microphone positioning. As can be heard in my two final mixes, the guitar sounds significantly different in each. In “Mix 1”, a lighter and thinner sound is created by including more of the KM184 in the mix. In “Mix 2”, a fuller and more well-rounded sound can be heard due to incorporating more of the AKG C414 to bring in some lower frequencies. Overall, the techniques employed in “Mix 2” produced a more natural, higher-quality sound compared to “Mix 1”, which resulted in a more artificial effect.

All the levels in the session were mixed appropriately to give each track its own space in the mix. This is important when producing a professional sounding recording, as it must sound balanced to the human ear and not sound muddy. Also, using a low-cut EQ on each track prevented low frequencies from becoming too prominent in the mix, maintaining the clean, professional sound.

Playing Live – MUS2066

Introduction

Within the music industry, there are many practices and procedures behind live sound and performance that allow events to run smoothly and professionally. This blog post will discuss some of these methods and strategies, tailored to different audiences and venues, why they are important, and how I have applied them to performances of my own.

Technical Preparation

When dealing with a live set up, there are basic practices and precautions that can make the process easier, more organised and professional, as well as accounting for health and safety matters along the way.

Channel List and Stage Plan

Organisation and preparation are essential prior to delivering a professional performance of any size. Therefore, for our own performance for this module, we operated from a channel list and stage plan, as shown below.

Channel List
Stage Layout Plan

This made the assembly and arrangement of equipment quick and efficient as we were able to plan signal paths, cabling and stage set up accordingly. Consequently, we spent less valuable time on equipment set up, meaning we had more time available to sound check and create an appropriate monitor mix for our performance. Technical preparations like soundchecks are useful for any sized production, as they ensure that the sound quality is adequate and performers can hear themselves properly, thus allowing them to perform to the best of their ability.

Cable Organisation

Prior to our performance, all cables chosen were a suitable length for what was required, with any excess coiled and placed neatly next to the piece of equipment they were being used for. This prevented masses of spare cable from becoming tangled and taking up valuable floor space, which is especially important in a smaller venue where capacity may already be limited. As a result, the stage itself looks professional and organised, and the risk of trips and falls is reduced, which is a serious health and safety consideration when setting up this style of event in both small and large venues.

Secondly, throughout our performance arrangements, we ensured that cables had dedicated pathways around the stage area, away from fire exits, walkways and the stage. For example, cables on stage should always remain behind the wedge monitors, as shown in the photo below. The purpose of this is to reduce health and safety risks, preventing trips and falls, especially in an environment where there is often a lot of movement from performers and audience, or in an emergency. Furthermore, smaller venues are likely to be cramped and short of space, which intensifies the need for well-placed cable runs. This creates more room for the performers and the audience to move, providing a better experience overall.

Wedge Monitors and Cables

Time is an important factor in the assembly of a live performance setup, as everything must be done efficiently, correctly and with appropriate preparation. In larger venues, where there are a great deal of instruments and equipment, sufficiently organised cables that are easy to follow help to prevent confusion at the mixing desk when setting up signal paths, making the set-up process much simpler and faster. This is also crucial in smaller venues, however, as it is good general practice and more professional to keep cable runs organised and efficient in order to prevent technical issues and delays to a performance caused by basic practice mishaps.

Lastly, cables should always be plugged in prior to power being switched on, and power should always be turned off before unplugging of any equipment. This is done to prevent damage to hardware such as speakers and amps.

I carried out these basic systems and practices for all three of my performances, as it was important for each one to be professional and organised throughout.

Speaker System Set Up

For our final performance, we used a combination of the Yamaha DXR15 powered full range speakers and the Yamaha DXS15 powered sub-bass speakers. The term “powered” refers to the built-in amplifier in both models, meaning they do not require a separate amplifier. The purpose of the DXR15 full range speakers is to cover the majority of the frequency spectrum, whereas the sub-bass DXS15 speakers are designed to handle the lower bass frequencies only. The combination of the two provides a well-rounded sound with appropriate representation of all frequency bands. As a result, all tonal qualities and timbres of the instruments can be perceived accurately by the listener, providing a professional listening experience for the audience.

To ensure that the two sets of speakers weren’t actively operating on the same frequencies – which would cause them to clash – I turned on the hi-pass filter on the DXR15 speakers and the low-pass filter on the DXS15 speakers, both set to a crossover point of 120Hz. The hi-pass filter meant that any frequencies below 120Hz were filtered out and not projected from the full range speakers, while the low-pass filter meant that the sub-bass cabinets only dealt with frequencies below that point. This prevented both the full range and sub-bass speakers from outputting identical frequencies and becoming messy or muddy sounding, again creating a more professional sound experience for the audience.

DXR15 full range PA speaker settings
DXS15 bass speaker settings
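The speakers' built-in filters are steeper than this, but the complementary hi-pass/low-pass idea can be illustrated with simple first-order filters in Python. This is only a sketch of the principle, using an assumed 120Hz crossover point; the tone frequencies and sample rate are illustrative:

```python
import math

FS = 48_000   # sample rate in Hz (illustrative)
FC = 120      # assumed crossover frequency in Hz

def one_pole_lowpass(x, fc=FC, fs=FS):
    """First-order low-pass: attenuates content above fc (sub-bass behaviour)."""
    rc = 1 / (2 * math.pi * fc)
    alpha = (1 / fs) / (rc + 1 / fs)
    y, out = 0.0, []
    for s in x:
        y += alpha * (s - y)
        out.append(y)
    return out

def one_pole_highpass(x, fc=FC, fs=FS):
    """First-order high-pass: attenuates content below fc (full-range tops)."""
    rc = 1 / (2 * math.pi * fc)
    alpha = rc / (rc + 1 / fs)
    y, prev, out = 0.0, 0.0, []
    for s in x:
        y = alpha * (y + s - prev)
        prev = s
        out.append(y)
    return out

def rms(x):
    """Root-mean-square level of a signal."""
    return math.sqrt(sum(s * s for s in x) / len(x))

# One second each of a 40Hz bass tone and a 2kHz treble tone.
bass = [math.sin(2 * math.pi * 40 * n / FS) for n in range(FS)]
treble = [math.sin(2 * math.pi * 2000 * n / FS) for n in range(FS)]
```

Running the bass tone through both filters shows the low-pass keeping far more of it than the high-pass, and the reverse for the treble tone, which is exactly why the two cabinets no longer fight over the same frequencies.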

For positioning of the speakers on stage left and stage right, we situated the sub-bass speakers on the bottom and placed the DXR15s above them, using speaker stands. When placing the full range speakers on the stands, we ensured that they were angled slightly downwards and pointed marginally inwards towards the centre of the audience. This positioning reduced the amount of sound reflecting off the surfaces within the performance space. Such reflections are particularly common in smaller venues, as the boundaries are much closer together, leaving increased opportunity for reverberation. Reflected sound arriving out of step with the direct sound leads to phase cancellation, where certain frequencies are reduced or eliminated because the two versions of the wave partially cancel each other out. As a result, the audience would not receive the full frequency spectrum of the instruments from the speakers.

Audio Phasing Diagrams
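The cancellation effect itself is easy to demonstrate numerically: adding a wave to a polarity-inverted copy of itself (the extreme case of being completely out of phase) produces silence. A short Python illustration, with an arbitrary frequency and sample rate:

```python
import math

fs, f = 48_000, 220  # sample rate and tone frequency (illustrative)

# One second of a sine wave, its polarity-flipped copy, and their sum.
wave = [math.sin(2 * math.pi * f * n / fs) for n in range(fs)]
inverted = [-s for s in wave]            # 180 degrees out of phase
summed = [a + b for a, b in zip(wave, inverted)]
```

Real reflections are only partially out of phase, so rather than total silence the result is certain frequencies being thinned out of the mix.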

The Mixing Desk

For our final performance, we used the Behringer X32 compact digital mixing desk to control all input and output signals.

Behringer X32 Compact Mixing Desk

Gain

As part of preparing for our final recorded performance, we took part in a soundcheck to ensure that all equipment was set up correctly and to perfect what the audience and musicians on stage could hear. This is a major part of the live performance process as it allows the sound engineer to correct all of the levels ready for the final show, preventing any issues later on. We were able to ensure that each performer could hear the appropriate instrument parts back through their individual monitors. It is paramount that the artists can hear themselves as well as other instruments on stage to ensure the quality of the performance is clean, in time and in tune.

Gain Control

Part of obtaining sufficient levels for each of the channels involves using the gain function on the mixing desk. Gain is used to alter the strength of the signal coming into the desk. On a digital desk, this should sit at around -18dBFS, which is commonly treated as the equivalent of the nominal 0VU operating level on an analogue desk. As the Behringer mixing desk we used for our performance is digital, we ensured that the levels for each mic input didn’t peak past -18dB in order to prevent distortion. Audio distortion occurs when the input signal exceeds what the mixing desk is able to process, causing it to clip, meaning the sound becomes misrepresented when output through the speakers. This is an unpleasant experience for the audience, and it is vital to avoid it by leaving appropriate headroom during a professional live performance.
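The relationship between a dB figure and the actual signal amplitude is a simple logarithm: amplitude = 10^(dB/20), with 0dBFS defined as digital full scale. A quick Python sketch of the conversion:

```python
import math

def dbfs_to_linear(db):
    """Convert a dBFS value to linear amplitude (0dBFS = full scale = 1.0)."""
    return 10 ** (db / 20)

def linear_to_dbfs(amplitude):
    """Convert a linear amplitude back to dBFS."""
    return 20 * math.log10(amplitude)
```

So a signal peaking at -18dBFS uses only about 12.6% of the converter's range, and halving an amplitude corresponds to roughly a 6dB drop; that unused range is exactly the headroom that protects against clipping.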

Signal Routing

Setting up the desk for the performance, we had to route the signal paths of both the inputs and outputs appropriately to ensure that the sound from the microphones was output from the front of house speakers as well as the correct wedge monitors. To do this, we accessed the “routing” tab on the desk and changed the inputs from local to external, as we used a stage box to better organise our cables, which makes them external inputs. We kept the outputs set to local, as we ran the cables directly from the mixing desk to the five monitor wedges and the front of house speakers.

To ensure that the signal paths to the monitor wedges were correct, we selected the “bus 1-8”, “channel 9-16” and “sends on fader” buttons. This allowed us to use the faders to control how much of the signal from each input was sent to the wedge monitors and to ensure that the correct channels were assigned to the correct inputs on the stage box. This helps to avoid confusion later on when controlling monitor mixes, making the process more efficient, which is essential when working in a live performance setting. In larger venues, there is an abundance of inputs and outputs to maintain and keep track of, so organisation skills are paramount in these large-scale situations in order to prevent confusion.

Input Controls
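Conceptually, the “sends on fader” workflow builds a matrix of send levels: each monitor bus receives a weighted sum of the input channels. A tiny Python sketch of that idea (the channel counts, send levels and bus assignments here are hypothetical, not our actual desk settings):

```python
def mix_to_buses(inputs, sends):
    """Mix one sample from each input channel into each bus.

    sends[bus][channel] is the send level from that channel to that bus,
    i.e. what the fader sets in sends-on-fader mode.
    """
    return [sum(level * inputs[ch] for ch, level in enumerate(bus_sends))
            for bus_sends in sends]

# Hypothetical snapshot: one sample from each of 3 input channels.
samples = [0.5, 0.2, 0.1]
sends = [
    [1.0, 0.0, 0.5],   # bus 1: first wedge hears channel 1 plus half of channel 3
    [0.0, 1.0, 1.0],   # bus 2: second wedge hears channels 2 and 3
]
```

Each fader move in sends-on-fader mode is simply editing one entry of this matrix, which is why labelling channels carefully pays off once several wedge mixes are in play.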

Equalisation (EQ)

Equalisation is used to manipulate the various frequency bands that make up an audio signal. This is necessary when certain frequencies are over-active, which is common in live performance and can reduce sound quality. Controlling the levels of certain frequencies under the “Effects” tab on the Behringer mixing desk allowed us to obtain the best sound from the front of house speakers and wedge monitors.

EQ Controls

Although we did use a hi-pass filter on the front of house DXR15 speakers and wedge monitors, we found that the most over-active frequencies sat between 100Hz and 160Hz, all of which we reduced and adjusted according to each specific channel. This meant that the frequencies were balanced and represented equally for the final performance, providing a well-rounded and high quality sound for both the audience and musicians on stage.
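A cut like this is typically implemented as a peaking EQ, which attenuates a band around a centre frequency while leaving the rest of the spectrum largely untouched. Below is a Python sketch using the standard Audio EQ Cookbook biquad; the 130Hz centre, -6dB depth and test tones are illustrative, not the exact settings we used:

```python
import math

def peaking_eq(x, f0, gain_db, q=1.0, fs=48_000):
    """Biquad peaking EQ (Audio EQ Cookbook): boost/cut of gain_db centred on f0."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for s in x:  # direct form I difference equation
        y = (b0 * s + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x1, x2, y1, y2 = s, x1, y, y1
        out.append(y)
    return out

def rms(x):
    """Root-mean-square level of a signal."""
    return math.sqrt(sum(s * s for s in x) / len(x))

fs = 48_000
# A tone inside the cut band and one well outside it (both illustrative).
boomy = [math.sin(2 * math.pi * 130 * n / fs) for n in range(fs)]
clear = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(fs)]
cut_boomy = peaking_eq(boomy, f0=130, gain_db=-6)
cut_clear = peaking_eq(clear, f0=130, gain_db=-6)
```

The tone at the centre frequency comes out roughly 6dB quieter, while the 1kHz tone passes almost unchanged, which is the selective behaviour that lets an engineer tame a boomy band per channel.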

Performance Preparation

Performance One and Two

Due to the Covid-19 restrictions at the time, putting on two organised performances with an audience was not viable. Therefore, our first two performances took place in the Edge Hill University recording studio. In preparation for these performances, we compiled a playlist of music that we all enjoy, to work out which genres would be suitable for us to perform.

As a result, we decided to perform “Stay” by Rihanna featuring Mikky Ekko for our first recorded rehearsal. For this song, myself (middle) and Eleanor (right) doubled on the main melody. Ellie (left) followed with some main melody and included harmonies throughout the chorus. Sarah provided the piano part.

Stay – Performance 1

Something we found difficult as a group was listening to each other while keeping our individual vocal lines isolated and clean throughout. Therefore, when practising this piece, we worked out the dedicated vocal parts and ran through them individually first, then focused on singing them together precisely with no muddiness or confusion.

For our second performance, we decided on the song “Don’t Dream It’s Over” by Crowded House. We chose this song as it provided a great opportunity for each of the vocalists to have a solo part; as this would also be the case in the final performance, we felt it necessary to build confidence in this area beforehand. We also felt the song complemented our overall style as a group. Something we wanted to do with the recorded rehearsals was develop our own energy and aura so we could incorporate this into the writing of our original songs, hopefully resulting in a more calculated and put-together final performance.

Don’t Dream It’s Over – Performance 2

Songwriting – Final Performance

As part of our final performance, we used originally written material. Therefore, the songwriting process began well in advance to ensure that the songs could be perfected and rehearsed well before presenting them on stage. We have been introduced to many different songwriting techniques since the start of the module, some of which I adopted in writing my own song. One of these techniques is the use of personal experiences and connections in order to convey emotion. I incorporated internal ideas and sentiments to produce a meaningful piece of music, with the aim of delivering a purposeful final performance.

In the live industry, it is common for a set list to be well thought out and cohesive to prevent a detached and dispassionate performance. To create a complete set list for our stage presentation, I suggested the creative decision of naming our songs in a specific order, creating a story with the titles. This resulted in our set list being named “Sometimes I Hope We Could Fly to Mars”. The aim of this was to provide a connection and relationship between the songs and produce a finalised performance that made creative sense, as the topics of the songs themselves were slightly unrelated due to being written separately.

My song, named “Sometimes”, was written with harmony parts in mind throughout the process. As our group featured three vocalists, I wanted to ensure that this could be exploited to its full potential. As shown in the final recording below, harmonies were featured in the choruses as well as the outro to create a more in-depth sound and maintain variation and interest throughout.

I also spent a lot of time dedicated to Sarah’s guitar part, experimenting with chords to create something interesting. This resulted in the inclusion of an abundance of 7th chords in my chord progressions to add some complexity to my song and enhance the vocal melody and harmonies.

Sometimes – Original Song By Caitlin Cregg

For Ellie’s song (“We Could Fly”), Eleanor’s Song (“I Hope”) and Sarah’s song (“Mars”), I contributed harmony parts. During rehearsal, I spent a lot of time working out and writing specific harmonies that would enhance the main melody of each song. We then practiced run throughs with the harmonies included, recording audio each time to document the process and later decide which ones contributed the most creatively to each piece.

As heard on Sarah’s piece, “Mars”, there are phrases at the end of each chorus dedicated to a three part harmony, as well as an a cappella section to conclude the song. This is an example of my contribution, arranging the vocal parts to bring out the best of each person’s vocal range while maintaining the engaging qualities of the song.

Overall, as a group, we spent masses of time writing, arranging and rehearsing our original material to produce a well thought-out and captivating performance. Our concentration on good preparation prior to the final staging of our pieces meant that we were able to provide a professional quality show, while taking into account technical set up, health and safety and the value of good rehearsal practice.

New Emerging Talent MUS1013

Introduction

A&R (Artists and Repertoire) is a role within the music industry that involves finding new artists and songs to be used for a range of different purposes. Some of these may include finding an artist to be signed by a record label and release music within the industry, or looking for songs to be used in commercials, films or games. The way the A&R role is carried out has developed dramatically over the years, with some of the earliest instances of A&R taking place in the area known as “Tin Pan Alley” (Denmark Street, London). This is where many producers and publishers were based, searching for new talent that would be suitable to release music or provide songs for artists they already had.

Artists like Frank Sinatra, after his big break in 1935, used A&R representatives to find material to record and perform, highlighting the importance of talent scouting, as big artists rely on it to be able to continue their careers.

At a time where TV and internet didn’t exist, it was important for A&R reps to find ways to discover new talent with what was readily available. Due to this, early A&R techniques involved listening to the radio for new talent, as well as travelling around to watch live performances from new and upcoming artists. This emphasises the importance and effect that social media has since had on the way that the A&R role is carried out in the modern age.  

Currently, social media has a massive influence on the way we find new and emerging talent. This can range from posting cover videos on YouTube that can be watched by A&R scouts, to using platforms such as Spotify to release new original music. Social media sites such as Facebook can also be used to promote gigs and events featuring upcoming artists that A&R scouts may find worth attending. The possibilities have become endless, as talent scouting no longer revolves only around local artists like it once did; many different genres and styles from across the globe are now accessible through the internet.

One element of A&R that has remained the same throughout its history is the way scouts will follow the “buzz” or hype surrounding an upcoming artist as a way of leading them to new talent. It is important within this job (especially within the pop industry) to follow current trends, as this is how record labels make the most money. Around the time that Elvis was popular, he was known for taking inspiration from genres that weren’t typical at the time and adapting them to his own style. This meant that the A&R industry had to start encouraging people’s interest in genres they weren’t used to, both within the record labels and among the public at the time.

Within this blog post, I am going to use modern A&R techniques influenced by social media to find three new artists from three contrasting genres and explain why I believe they should be signed to a record label.

Section 1 – Contemporary R&B

The first genre that I will be scouting for talent within is contemporary R&B. This is a combination of typical R&B and pop. It also includes influences from soul and hip-hop but with a more electronic influence, using a wide variety of synthesized sounds and drum machines to produce a smooth and pop-like sound.  

This genre emerged in the late 1970s and 1980s, influenced by artists such as Michael Jackson, and thrived in the 1990s and 2000s, with artists including Beyoncé, Chris Brown, Lauryn Hill, Usher, Rihanna and Mariah Carey being among the most popular.

Keaira LaShae

I found the solo R&B artist Keaira LaShae on ReverbNation.com and discovered many more of her tracks and information via SoundCloud. This singer/songwriter is experienced within the industry, from posting covers on YouTube up to 9 years ago to still releasing her own music now in 2020. The dedication shown by this artist is one to be admired, as she has worked very hard to expand her empire across a variety of online sites over her 10-year career. She has managed to develop an impressive online following, accumulating 601,000 YouTube subscribers and over 1,000 followers on SoundCloud.

Her self-written and self-performed pieces have generated thousands of listens on SoundCloud, including her most popular, “Play The Side” (released in 2018), which has 138,000 plays. Many of her other songs are each generating over 30,000 plays, with another popular track being “Frankie”, which has over 52,000 listens on SoundCloud. I feel that both of these tracks demonstrate LaShae’s particular style within the R&B genre very well. “Play The Side” features the smoother and more relaxed side of her personality, whereas “Frankie” shows her more upbeat and confident style. The typical aspects of R&B are very clear in her tracks, with the use of drum machines and electronic influences as well as obvious features from pop, soul and hip-hop.

As well as LaShae’s obvious accomplishments on SoundCloud, her YouTube channel is definitely the highlight of her success. She has continued to upload a variety of content, from covers to fitness videos and music videos, consistently over the past 9 years, showing her amazing dedication to the industry and her determination to find success. She has also ventured into the world of dance, with each video racking up thousands (if not hundreds of thousands) of views, and she runs a second channel named “superherofitnesstv” that has accumulated 882,000 subscribers.

She brilliantly demonstrates her multitalented nature and versatility as an artist and performer, as she has taken a great interest in not only producing her own music but also creating music videos to accompany it. The quality and effort put into her music videos is clear to see, with an incredibly professional look, from the filming and media skills to the hair, makeup and outfits. She comfortably fits into the R&B genre with her overall look and star quality. Her incredibly smooth and easy-listening voice is brilliantly accompanied by her natural ability to entertain and perform, showing that she really is the full package.

Keaira LaShae has an astounding social media presence, racking up 238,000 followers on Instagram through her own dedication and perseverance over the years. Though she focuses mainly on fitness and dance on her Instagram, this already impressive following would be a brilliant opportunity for her to expand her music career into this area, as she already has a solid fanbase.

I believe that Keaira LaShae would be an amazing asset to any record label, as she has already built up a solid following through her own hard work. There is huge opportunity for this to be expanded much further, as she has the right personality and talent to become a hugely popular R&B artist.

Section 2 – Electronica

Electronica is a term used to describe quite a broad range of electronic music, including house, techno and ambient. These genres focus predominantly on the use of electronic instruments such as synthesisers, drum machines and effects added in the editing process. The first instances of electronic music originated in the early 20th century when the first electronic instruments were produced. Since this time, genres within electronica have continued to expand and develop along with technology and new capabilities to produce different sounds.

Some artists that are popular within the electronic music genre include: Skrillex, Tangerine Dream, Daft Punk, and The Chemical Brothers.

Payne Pepper Jam

After searching on SoundCloud, I found an artist who brilliantly represents the electronica genre under the name Payne Pepper Jam. Based in London, this artist is an electronic composer/songwriter and producer who completes all of his projects in a home studio. All of his compositions are professional sounding and well finished, as he has committed his spare time and money to gathering excellent equipment and producing the sound he desires. He uploads his work to online streaming services and currently holds 833 SoundCloud followers, 124 YouTube subscribers and over 6,000 monthly listeners on Spotify. His earliest release on SoundCloud was 2 years ago and his most recent was 1 month ago, with consistent uploads in between, demonstrating his commitment and dedication to his music. The development of his work can be seen throughout this timeline, as he enthusiastically works to provide music regularly for his online fans.

Payne Pepper Jam uses a wide variety of electronic instruments which is obviously a typical feature of the electronic genre. He uses these to create a diverse and varied portfolio of music, also using some experimental methods such as including classical instruments in his compositions, as well as many of the base techniques of the electronica genre. A great example of this would be the track “High In The Sky” which features violins and piano throughout, creating an interesting and contrasting sound. This piece is a good demonstration of Payne Pepper Jam’s experimental nature and ability to create music that falls under the electronica “umbrella” whilst using instruments and sounds that are not typical to the genre, keeping his music interesting for the listener.

His other music ranges from ambient soundtracks to upbeat dance pieces, demonstrating his ability to appeal to a wide audience by creating content that can be enjoyed by everyone. I believe this allows Payne Pepper Jam to stand out as an artist, as he has a great capability to adapt and is very versatile, which is a desirable quality within the music industry. Some of his most popular work so far includes the track “Ambient Symphony”, with over 76,000 plays on SoundCloud, and “Call me insane & Aimee”, with over 55,000 plays.

I feel that both of these tracks successfully embody the artist’s intended sound, each with its own effect and musicality. “Ambient Symphony” is described brilliantly by the name itself, as the track has an easy and calm impression. Comments posted by his online fans on SoundCloud about this song provide encouraging feedback, including “authentic”, “spectacular” and “worth the listen”. The use of edited vocals, synths and a steady electronic drum beat throughout gives the track a very captivating effect that is genuine and original to electronica itself.

In contrast, the track “Call Me Insane & Aimee” has the opposite effect, consisting of an upbeat sound with a faster tempo. This piece demonstrates the brighter and catchier side of Payne Pepper Jam’s compositions, with some heavier elements of electronic music brought into its making. These include distortion effects added to the synths and complex drum machine rhythms that can be heard in the background, giving the track a more evolved and groovy feel. Sampling is an extremely widely used technique within electronic music and features in many of his tracks. This shows his creativity when producing music and his ability to create a particular sound using a variety of different materials, embodying the electronica genre brilliantly.

Payne Pepper Jam is consistently active on social media, with over 1,000 followers on Instagram, posting daily about his new releases whilst also promoting older tracks. He is very committed to his craft and creates the impression that he would be a reliable and hardworking artist, as he works to provide new material for his fans while maintaining a professional and skilful attitude.

I predict that Payne Pepper Jam will become a very popular artist within the electronic music world, and he has great potential to expand his following even further than he already has. He has the dedication and work ethic required, is very talented in his area, and is willing to step out of his comfort zone to create material that appeals to a wider audience. I believe his flexibility and star quality would make him an amazing asset to any record label.

Section 3 – J-Rock

Japanese rock (abbreviated to J-Rock) is a distinct form of rock music originating from Japan. The genre began in the 1960s when Western rock, particularly from Britain, became popular in Japan, and bands there began to take influence from these artists as well as others from different strands of the rock genre. It has since developed to become widely recognised and is consistently in competition with other Japanese-originated genres such as J-Pop.

Some of the main artists recognised within J-Rock include B’z, Mr. Children and many others.

Metro-Ongen

I found this rock band, formed in Tokyo, on SoundCloud. The band began in 2002, meaning they have over 18 years’ experience of making and performing music together. This is a positive, as they appear to be a very close-knit group who are confident making music with each other and can cooperate well as a team. This is likely to make them easy to manage and work with, making them ideal for a record deal.

Compared to the other artists I have discussed, this band has the smallest online following, with 118 followers on SoundCloud and 244 subscribers on YouTube. However, they claim to have a solid fanbase throughout Japan and are known to draw a great turnout at their live performances. They have been releasing music on YouTube regularly for 9 years and still upload often. These regular uploads include new music releases, tutorials for fans to learn riffs on guitar, as well as covers of already popular bands.

Metro-Ongen are very dedicated to their craft and are great representatives for the J-Rock genre. It is obvious when looking at their work that they have put a lot of thought and effort into everything from their recognisable sound, to the artwork for their music. Just through the use of cover artwork for their music they have managed to create a clear picture for the listener of what kind of band they are and the kind of ideas they want to portray to their audience.

Metro-Ongen’s most popular track “Lily” has gained over 6,000 plays on SoundCloud and 4,600 views on YouTube with many words of encouragement from fans such as “great song” and “favourite song right now”. The influence from western rock can be heard in this song which is typical of J-Rock music as this is where the initial development of Japanese rock came from, allowing it to appeal to a wide audience across multiple genres. This shows the band’s ability to create and produce music that has the potential to sell in vast amounts, making them the ideal candidate to be signed by a suitable record label.

Metro-Ongen have posted multiple recordings of their live performances on their YouTube channel and have stated previously that their shows have been so busy that fans have had to be turned away due to overcapacity. This is a great demonstration of their popularity in their area, showing their potential to become a widely known act with the right management and promotion.

The talent this band possesses is clear to see after watching some of their live performances online. They demonstrate a brilliant on-stage presence and have a great ability to capture the attention of the audience and keep them entertained throughout the entirety of their performances with their upbeat energy. The lead singer in particular has an amazing on-stage personality which is very intriguing and fascinating to the listener. This further highlights the band’s capability of becoming extremely popular in the future, as the performance side of being an artist comes so naturally to them.

The band do have a social media presence, with an Instagram account where they regularly post updates on their musical activities, as well as updating their SoundCloud page often. These updates give a brilliant insight into the hard work that goes into their creations behind the scenes, showing their amazing commitment to their music. I believe this band would be well suited to a record label that is able to boost their social media presence and gain them more promotion, as they are clearly a great asset to the music industry.

Taking all of the points above into consideration, I believe Metro-Ongen would be a valued addition to a suitable record label. They have sufficient experience in creating music within their genre and appeal to a wide audience with the music they create. They have the right “vibe” and personality for the content they release, and are very likeable as a whole.

References

Wikipedia, 2020. Contemporary R&B [online]. Available from: https://en.wikipedia.org/wiki/Contemporary_R%26B

Wikipedia, 2020. Electronica [online]. Available from: https://en.wikipedia.org/wiki/Electronica

AllMusic. Electronic Music Artists [online]. Available from: https://www.allmusic.com/genre/electronic-ma0000002572/artists

Wikipedia, 2020. Japanese Rock [online]. Available from: https://en.wikipedia.org/wiki/Japanese_rock

The Finishing Touches

Now that the audio and video for my projects have been synced appropriately, there are a few final steps needed to clean up the projects and create polished finished products.

Fades and Automation

A fade refers to the volume of a track increasing from silence to the desired level over a set amount of time, rather than being introduced suddenly at full volume, or, conversely, decreasing at the end of the clip in the same manner instead of cutting off abruptly. It is important to use fades when editing projects such as the ones I have been working on, as they ensure that the audio sounds smooth and clean rather than sudden and jumpy. Therefore, I have applied fades of varying lengths to all of the audio clips in my sound dubbing project in order to achieve a professional finish and increase the overall quality of the project.
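Under the hood, a fade is just a gain ramp multiplied against the samples. As a rough sketch (in Pro Tools these are drawn graphically, so this is purely illustrative), a linear fade-in and fade-out might look like this in Python:

```python
import numpy as np

def apply_fades(clip, sample_rate, fade_in_s=0.05, fade_out_s=0.05):
    """Apply a linear fade-in and fade-out to a mono audio clip."""
    out = clip.astype(float).copy()
    n_in = int(fade_in_s * sample_rate)
    n_out = int(fade_out_s * sample_rate)
    if n_in:
        # Ramp the gain from 0 up to 1 over the first n_in samples.
        out[:n_in] *= np.linspace(0.0, 1.0, n_in)
    if n_out:
        # Ramp the gain from 1 back down to 0 over the last n_out samples.
        out[-n_out:] *= np.linspace(1.0, 0.0, n_out)
    return out
```

Longer ramps give a gentler fade; a few milliseconds is often enough just to remove a click at a clip boundary.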

Another important editing technique is automation, two very common types of which are volume automation and panning automation.

Volume automation has been essential when working on my sound dubbing project. This involves controlling the volume at different points throughout an audio clip without having to change the level for the entire duration of the clip. Volume automation is very useful when working in film, as volume often relates directly to what is happening on screen, for example a car driving away. There are multiple points throughout my sound dubbing project where I have used volume automation, one of these being where WALL-E is charging via solar panels on the roof and the camera zooms in and moves closer.

Panning automation is a similar concept, but refers to where a sound sits in the stereo field when working with stereo sound, i.e. how much of it is played in each ear and to what degree this changes over time. This has also been useful during my sound dubbing project, as WALL-E is often seen moving across the screen, which I am able to represent audibly by moving the sound from one ear to the other.

Both of these automation techniques have enhanced the narrative of my film scene clip massively by creating a more immersive experience whilst watching the video.
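For illustration, panning automation amounts to splitting a mono signal between the left and right channels according to a pan curve. A minimal sketch, assuming a mono NumPy array and a constant-power pan law (one common choice among several):

```python
import numpy as np

def pan_mono_to_stereo(clip, pan_curve):
    """Pan a mono clip across the stereo field.

    pan_curve: array the same length as clip, with values from
    -1.0 (hard left) through 0.0 (centre) to +1.0 (hard right).
    A constant-power pan law keeps the overall loudness even.
    """
    theta = (pan_curve + 1.0) * np.pi / 4.0  # map [-1, 1] -> [0, pi/2]
    left = clip * np.cos(theta)
    right = clip * np.sin(theta)
    return np.stack([left, right], axis=1)

# Example: sweep a sound from left to right over its duration,
# as a character moves across the screen.
# stereo = pan_mono_to_stereo(clip, np.linspace(-1.0, 1.0, len(clip)))
```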

Reverb, EQ and Pitch Shifting

I have already discussed reverb in the recording process section of my blog and how I tried to prevent reverb whilst recording. However, using reverb in the post production process is slightly different.

Reverberation occurs when sound reflects off its surroundings, creating an extended sound after the release transient of the original sound has finished. The amount and character of reverb depend on the number of parallel boundaries in a space and the types of material within it. More parallel boundaries mean more surfaces for the sound to reflect off, and different materials absorb and reflect certain frequencies differently.

Due to this, the reverberation on an audio clip can be very demonstrative of the acoustic space around it, and must match the scenario shown on screen. Therefore, for my sound dubbing assignment I have applied reverb to each clip that is appropriate to the on-screen action. For example, at the point in the film scene where WALL-E throws the fire extinguisher, I have used extensive reverb, as it represents what we would hear as it lands in an empty space surrounded by tall buildings, which create lots of parallel boundaries.
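The mechanics behind this can be illustrated with a toy model: a single feedback delay line, which repeats the sound more and more quietly, is the simplest building block of algorithmic reverbs (real plug-ins, such as the ones I used, combine many of these plus filtering). A hypothetical sketch:

```python
import numpy as np

def comb_reverb(clip, sample_rate, delay_s=0.05, decay=0.5, tail_s=1.0):
    """Very simplified reverb: a single feedback (comb) delay line.

    Each pass through the loop adds a quieter copy of the sound
    delay_s seconds later, like a reflection bouncing between two
    parallel boundaries and losing energy each time.
    """
    delay = int(delay_s * sample_rate)
    # Extend the buffer so the reverb tail has room to ring out.
    out = np.concatenate([clip.astype(float),
                          np.zeros(int(tail_s * sample_rate))])
    for i in range(delay, len(out)):
        out[i] += decay * out[i - delay]
    return out
```

A larger `delay_s` suggests a bigger space; a `decay` closer to 1 suggests harder, more reflective surfaces.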

In the dialogue recording interview, it was not appropriate to add any reverb in post-production, as I wanted the speech to be clear and crisp. Reverb would blur the transients together and would not suit an interview scenario.

Equalisation (EQ) is another editing technique I have used in my sound dubbing assignment. This is a method where different frequencies within each sound can be boosted or reduced depending on what kind of effect you are trying to achieve. An example of this in my project is the cricket sound, where I have reduced the mid frequencies and boosted the higher ones to achieve the high pitched scratching noise that a cricket might produce in real life.
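As a very rough illustration of the idea (not the actual Pro Tools EQ I used), boosting high frequencies can be sketched by high-pass filtering a copy of the signal and mixing it back in at a higher level. The one-pole filter here is a deliberately crude stand-in for a real shelf filter:

```python
import numpy as np

def boost_highs(clip, sample_rate, cutoff_hz=4000.0, gain_db=6.0):
    """Crude high-shelf style EQ: add a boosted high-passed copy of
    the signal back onto the original, lifting content above cutoff_hz."""
    # One-pole high-pass filter (simple RC model).
    rc = 1.0 / (2.0 * np.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    highs = np.zeros_like(clip, dtype=float)
    for n in range(1, len(clip)):
        highs[n] = alpha * (highs[n - 1] + clip[n] - clip[n - 1])
    gain = 10 ** (gain_db / 20.0) - 1.0  # extra level to mix in on top
    return clip + gain * highs
```

Cutting frequencies works the same way in reverse: subtract a filtered copy instead of adding it.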

Lastly, I have used pitch shift on a few of the sounds in my sound dubbing project (such as the cricket sound previously discussed). This refers to making the pitch of a sound lower or higher than its original pitch. A good example of something I altered the pitch of in my project is the beeping noise when WALL-E is charging his solar battery. I have edited this so that the pitch slowly gets higher, which I did to represent the battery slowly gaining charge.
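The simplest way to illustrate pitch shifting is resampling: reading the samples back faster raises the pitch and shortens the clip, like speeding up a tape. Real pitch-shift plug-ins preserve duration, so this is only a sketch of the principle:

```python
import numpy as np

def pitch_shift(clip, semitones):
    """Shift pitch by resampling (varispeed-style: duration changes too).

    Positive semitones raise the pitch and shorten the clip;
    negative values lower the pitch and lengthen it.
    """
    ratio = 2 ** (semitones / 12.0)  # frequency ratio per equal-tempered step
    old_idx = np.arange(len(clip))
    new_idx = np.arange(0, len(clip) - 1, ratio)  # read faster or slower
    return np.interp(new_idx, old_idx, clip)
```

A gradually rising pitch, like the charging beep, could be faked by shifting successive beeps by increasing amounts.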

After the pre-production, recording and post-production processes of both of my projects, I believe I have created two high-quality pieces of work with all the fundamental features that are important to sound for picture.

Organisation and Beginning the Editing Process

Organising My Work

While working on my two projects, I have discovered it is paramount to keep all of my work organised. To make the process easier, I have backed up my sound recordings on three different media and named them all descriptively in appropriately organised folders. This makes it much easier to find different sounds and prevents the long process of having to listen to each recording individually to find the correct one.

For my dialogue recording task, I purchased two SD cards to store my recordings and transfer them to a computer, and also made great use of USB sticks throughout the process.

The First Steps of Editing

For the editing of both of my projects, I have used Pro Tools as my DAW (Digital Audio Workstation). Initially, it is important to set up the Pro Tools session correctly to suit the needs of my assignments.

First of all, I decided the sample rate and bit depth that the DAW was going to work at.

  • Sample Rate and Nyquist – Using an analogue-to-digital converter (ADC), sound is sampled at regular intervals. The DAW that the sound is manipulated on then reconstructs the continuous waveform from these samples, in a sense filling in the gaps between each interval. The sample rate determines the Nyquist frequency (always half the sample rate), which is the maximum frequency that a system is able to reproduce. Human hearing extends up to around 20KHz at most. Therefore, the lowest standard sample rate that represents the full human hearing range is 44.1KHz, because half of this equals 22,050Hz, which is slightly above the limit of hearing. Any lower sample rate (e.g. 22,050Hz, meaning a Nyquist of 11,025Hz) would provide lower-quality audio, as there would not be enough measurements per second to sufficiently sample each cycle of the higher frequencies, so the waveform reconstructed by the DAW would not be representative of the original sound wave.
  • Bit Depth – When working digitally with audio, the amplitude has to be represented as closely to the original as possible. This is done using “bits”. Bit depth refers to how many possible values the amplitude of each sample can be quantized to. For example, CD quality is 16-bit, meaning there are 2^16 = 65,536 different values that the amplitude of each sample can be assigned to as the analogue audio is converted to digital data. This means that the higher the bit depth, the closer the amplitude of the sound can be represented to its original, improving the resolution.

As a result of the information above, for both projects I chose to use a 48KHz sample rate and a bit depth of 24. This was important, as sample rate and bit depth define the fidelity of sound when working digitally. The full human hearing range (plus extra) is represented by the 48KHz sample rate, and my audio can be quantized to over 16 million amplitude values thanks to the bit depth of 24, overall improving the quality of my projects.
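These two settings can be sanity-checked with a couple of one-line calculations:

```python
def nyquist(sample_rate_hz):
    """Highest frequency a given sample rate can faithfully represent."""
    return sample_rate_hz / 2.0

def quantisation_levels(bit_depth):
    """Number of discrete amplitude values available per sample."""
    return 2 ** bit_depth

print(nyquist(44_100))          # 22050.0 -> covers the ~20 kHz hearing limit
print(nyquist(48_000))          # 24000.0 -> my projects' usable bandwidth
print(quantisation_levels(16))  # 65536 levels (CD quality)
print(quantisation_levels(24))  # 16777216 levels (my projects)
```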

Auditioning and Syncing Sounds

Once I had renamed all of my sounds for my sound dubbing project, I added them to the Clip List in Pro Tools. This was helpful as it kept all of the sounds I had recorded in one easily accessible place. Starting from the beginning of my project, I auditioned different clips to see which suited each scene best. This allowed me to find the sounds that enhanced the narrative of the clip the most and created the best overall effect for the sound I wanted to achieve.

An essential part of applying sound to the film clip was syncing the audio to the video appropriately. Synchronisation is the process of ensuring the audio plays at the same time as the corresponding video. For example, in a film where a loud explosion takes place, the “bang” will happen at the same moment as the visual explosion seen on screen. This is important because sound is used to add emphasis to events we see in film; if the two are not in sync, that effect is lost, and the events on screen are not represented clearly for the viewer.

For this process within my own projects, making use of attack transients (the starting portion of a sound, from silence to its maximum amplitude) was very useful. For each sound that I added, I observed the video whilst listening to the audio, and lined up the audio’s attack transient with the moment the corresponding action appears on screen. For example, when WALL-E bumped into a metal shelf near the beginning of the scene, I added the sound of a wooden spoon hitting a pan and synced the transients by physically moving the audio clip around in Pro Tools.

For my interview project (dialogue recording), once we had started recording both audio and video, we stood in view of the camera and clapped. This created a visible event with a sudden attack transient in the audio, meaning the two could be synced in the editing process. Once the clap was synced, the rest of the audio for the three-minute interview was already appropriately lined up.
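The logic of the clap method can be sketched in code: find the first loud attack transient in the audio, then calculate how far to slide the clip so it lands on the video frame of the clap. This is a hypothetical illustration; in practice I lined the clap up by eye and ear in Pro Tools:

```python
import numpy as np

def find_clap(audio, threshold=0.5):
    """Return the sample index of the first loud attack transient,
    e.g. a clap used to line audio up with video. None if not found."""
    hits = np.flatnonzero(np.abs(audio) >= threshold)
    return int(hits[0]) if hits.size else None

def sync_offset_seconds(audio, video_clap_s, sample_rate):
    """How far to slide the audio so its clap lands on the video clap."""
    return video_clap_s - find_clap(audio) / sample_rate
```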

The Recording Process

Important Aspects of Recording

When recording sound, there are multiple precautions and measures that must be put in place to achieve high quality audio, as well as many things to be aware of. These include:

  • Room Tone, Atmosphere and Reverb
  • Headroom and distortion
  • Directionality and mic placement

In this post I will discuss the issues of each of these and how I have tackled them.

Room Tone, Atmosphere and Reverb

Room tone is the underlying ambient sound of the recording space itself, which is always present and is typically unobtrusive in a recording. Atmosphere is the presence of sound within the room or location being recorded in that may contribute a certain effect or background to a recording.

The use of room tone avoids harsh differences between silence and sound playing over a clip. Therefore, for my “WALL-E” project I decided to record some room tone in order to blend the different sound clips seamlessly together and disguise any changes from silence to sound. I feel this has made my project a lot cleaner, more polished and more professional.

In contrast, the dialogue recording interview task was one single long recording with no breaks or silences in between, so this was not necessary.

Regarding atmosphere noise in my recordings, I wanted this eliminated completely so that I had clean audio clips with no background noise to work with in post-production. This meant using certain equipment throughout the recording process, as well as aiming to record in particular environments to avoid these issues.

For the sound dubbing task, I avoided outside environments where possible. This eliminated the possibility of wind noise or distant cars featuring on the recordings. Recording most of the sounds inside gave a much more isolated environment, allowing me to capture the sounds more clearly with little to no background noise. Secondly, I used a boom pole with the shotgun mic, usually balanced on my foot rather than the floor, to act as a barrier against any low-frequency rumble coming through the ground from a nearby road. Eliminating these frequencies is important, as they may affect the fidelity and quality of a recording if not removed.

Reverberation refers to sound waves being reflected off the different surfaces in the recording environment. It consists of early and late reflections, both of which are affected by how many parallel boundaries there are in a room, as well as by any acoustic panelling or materials in the room that absorb different frequencies.

For the interview, we used a setup within a studio space that contained some acoustic panelling on the walls and large carpeted areas. This environment reduced reverberation, as the acoustic panels created a more even distribution of sound waves, while the density of the carpet absorbed certain frequencies, resulting in a cleaner and clearer recording.

Headroom and Distortion

The amplitude of sound in digital form is often measured in dBFS (decibels relative to full scale), where 0 on the dBFS scale is equal to an amplitude of 1, the loudest value the system can represent. Therefore, anything measured above 0dBFS will be clipped and become distorted, meaning sound quality suffers and fidelity is lost (fidelity referring to whether the sound is true to its original source).

To tackle this issue, headroom is left when recording sound. This is done by setting the gain at such a level that the sound does not peak anywhere above -6dBFS. This leaves literal headroom above the loudest peak in case of any small mistakes in recording, and also allows some room for post-editing, such as filters and other processes that may affect the level afterwards.

Therefore, in both my overdubbing project and my dialogue recording, I ensured that the levels stayed around the -12dBFS mark while recording, never allowing them to exceed -6dBFS, in order to prevent distortion. This kept my recordings much cleaner, as there is no compromise to the quality, and also meant there was less need for large volume edits in the post-production phase, as all of the clips are at a similar level.
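The relationship between linear amplitude and these dBFS values is a simple logarithm, which can be shown in a few lines (the -6dBFS ceiling and -12dBFS target are my own working levels from above):

```python
import math

def to_dbfs(amplitude):
    """Convert a linear amplitude (full scale = 1.0) to dBFS."""
    return 20.0 * math.log10(amplitude)

def peak_ok(peak_amplitude, ceiling_dbfs=-6.0):
    """Check that a recording's peak stays under the headroom ceiling."""
    return to_dbfs(peak_amplitude) <= ceiling_dbfs

print(round(to_dbfs(1.0), 1))   # 0.0  -> digital full scale
print(round(to_dbfs(0.5), 1))   # -6.0 -> the ceiling I stayed under
print(round(to_dbfs(0.25), 1))  # -12.0 -> my target recording level
```

Note that halving the amplitude costs roughly 6dB each time, which is why -6dBFS corresponds to half of full scale.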

Directionality

When recording sounds in different environments, it is important to take into account the directionality of different microphones, i.e. which areas around the microphone it will pick up sound from. This is described by a polar pattern. For example, a common polar pattern seen in a variety of microphones is the cardioid. This traces a heart-like shape around the microphone, showing that sound will be picked up mainly from the front region as well as slightly around the sides.
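An idealised cardioid pattern even has a simple formula: sensitivity is proportional to ½(1 + cos θ), where θ is the angle away from the direction the mic is pointing. A quick sketch (a textbook idealisation, not the measured response of any real microphone):

```python
import math

def cardioid_gain(angle_deg):
    """Relative sensitivity of an ideal cardioid mic at a given
    angle off-axis (0 degrees = pointing straight at the source)."""
    return 0.5 * (1.0 + math.cos(math.radians(angle_deg)))

print(cardioid_gain(0))    # 1.0 -> full pickup from the front
print(cardioid_gain(90))   # 0.5 -> half sensitivity from the sides
print(cardioid_gain(180))  # 0.0 -> nothing from directly behind
```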

The microphone I chose for both of my projects is the Sennheiser ME66 shotgun microphone. This has a highly directional, super-cardioid/lobar pickup pattern: almost all of the sound captured is from the front of the microphone, with a small amount coming from just behind. Due to this, I took extra care when positioning the microphone during recording in order to capture the most consistent and clear sound.

For example, when recording the chair scraping across a kitchen floor for the sound dubbing assignment, I followed the chair with the microphone, maintaining the same distance throughout and avoiding any deviation from the original position. This allowed me to capture consistent audio without large fluctuations in volume, as the microphone stayed the same distance from the sound source for the entire recording.

In the interview recording, we pointed the microphones directly at the nose and mouth area and reduced our movement whilst filming, in order to capture the voices clearly and prevent any fluctuations in volume. We also kept the microphones out of shot while still positioned correctly towards the interview participants.

Microphones and Their Uses

The Basics of Microphones

Microphones in their basic form are devices responsible for converting sound into an electrical signal; in other words, a microphone is a type of transducer. Every microphone utilises a diaphragm, the part of the device that sound typically hits first, beginning the process of converting sound into an electrical signal.

After this, each microphone differs in how it works depending on what type it is.

The Two Main Types

There are two common types of microphone that are used widely for a range of purposes: dynamic mics and condenser mics.

  • Dynamic – This type of microphone functions by electromagnetic induction. The diaphragm oscillates when sound waves reach it, moving an attached coil of wire within the field of a magnet. This creates an electrical signal, which can eventually be sent through to a DAW (digital audio workstation) to be manipulated digitally.

Because of the way dynamic microphones are built, they are a lot more robust and less likely to break than condenser microphones. However, they typically capture a slightly narrower frequency range.

  • Condenser – In a condenser microphone, the diaphragm forms one plate of a capacitor: two metal plates parallel to one another. One plate (the diaphragm) moves when sound waves cause it to vibrate, whilst the other remains still. The changing distance between the two plates changes the capacitance, and with a fixed electrical charge held across them this produces a fluctuating voltage, i.e. an electrical signal (this is why condensers need phantom power, a supply of electricity to charge the capsule and power the internal electronics).

These mics are capable of capturing a more extended frequency range than dynamic microphones, but are a lot more susceptible to damage from extremely loud noises or being dropped, as this may cause the plates inside to come into contact with each other.

Some Other Types of Microphones

  • Ribbon Microphones – A type of dynamic microphone. These create an electrical signal via a thin suspended metal ribbon moving between the two poles of a magnet, and are highly sensitive microphones.
  • Piezoelectric Microphones – Also referred to as “contact mics”. These detect vibrations through solid objects rather than through the air.
  • Pressure Zone Microphones (PZM) – Also referred to as “boundary mics”, these use a small omnidirectional condenser capsule mounted close to a boundary surface.

Transients and Transient Responses

An attack transient refers to the starting portion of a sound from silence to its loudest portion (or maximum amplitude). A release transient is the opposite of this, meaning the end portion of a sound and how long it takes from its maximum amplitude to reach silence again.

Every microphone has a specific transient response, which describes how quickly it responds to the transients of a sound. As a rule, dynamic microphones have a slower transient response, because the heavier diaphragm-and-coil assembly needs more energy to start vibrating. This can lead to a lack of definition in recordings such as fingerpicking on a guitar, as the transients can become smeared.

A condenser microphone has a lighter diaphragm meaning the definition of a recording from these is much clearer and detailed due to the improved transient response.

Which Mic to Use?

Taking all of the above information into consideration, for my “WALL-E” project I have planned to use a condenser mic (specifically the Sennheiser ME66 shotgun microphone).

I have not planned to record any extremely loud sounds such as guitar amps or drum kits, meaning the condenser microphone I have chosen will not be damaged by anything I plan to capture. Furthermore, I believe using a condenser microphone will improve the quality of my recordings due to its wide frequency response. This will in turn improve the overall quality of my sound dubbing project, as the microphone is able to pick up the full frequency content of the varying sounds I will capture, improving the fidelity (which I will discuss in further detail later).

For my dialogue recording project, I have once again decided to use a condenser microphone. As with my first task, nothing I will be capturing will cause any damage to the mic. Also, I will be recording the human voice, which I believe will be better captured by a condenser microphone, as more of the frequencies of the voice will be picked up thanks to its extensive frequency range. Lastly, the transient response of a condenser microphone is much faster than that of a dynamic. Speech contains lots of different transients, so using this microphone means my recordings will be more detailed and feature less smearing or blurring throughout. This is important when recording an interview, as clear speech is the main focus of the video, so it is essential that the detail is captured appropriately and represented clearly throughout.

Which Sounds Work?

Deciding On The Right Sounds

For my first recording of sound for video, I have decided to use a 3-minute clip from the film ‘WALL-E’, for which I will record my own sounds and dub them over the clip. My task is to recreate the sounds for the full film clip using my own foley sounds and editing techniques, whilst maintaining the narrative of the scene and achieving high-quality audio. (Day At Work: https://www.youtube.com/watch?v=WB8LrCWmGYw&t=3s) To begin this process, it is essential that I plan which sounds I am going to need in order to recreate the scene and enhance its narrative. Therefore, I have first written a list of the sounds featured in the 3-minute clip that I believe I can recreate, using objects at home or sources around me that sound similar or the same.

Any sounds that I do not think I will be able to recreate myself, I have instead downloaded. These include some wind sounds, two types of robotic movement sound, and a loud crash for the end of the scene where a fridge door breaks and falls over.

This list has helped me organise my work and make the recording process much easier as I already have the majority of the sounds I would like to record planned out beforehand.

For my second task, a dialogue recording project, I will be recording sound and video of an interview with myself and a musician on campus. My goal with this will be to synchronise the audio appropriately and achieve clean audio throughout the interview. I have planned out 9 questions to ask as part of the interview between me and the musician, as shown below.

  • What was your inspiration to start playing music?
  • When were you first introduced to live music?
  • What experience have you had?
  • Describe your sound
  • What is your favourite song or artist?
  • Do you have any upcoming gigs or events?
  • What artists would you compare yourself to or aspire to be like?
  • If you weren’t a musician, what would you be doing now?
  • What would you like to be remembered for?

I have also written a transcript as a brief guide to my interview in order to make the recording process slightly easier.

Diegetic or Non-Diegetic?

The terms “diegetic” and “non-diegetic” are often used when discussing audio in film or video. Both refer to whether a sound exists within the world of the film scene or not.

  • Diegetic – The sound presented would naturally be heard within that scene by the people in it. For example, dialogue from a person in a film that is talking directly to another character is diegetic sound, as both characters can hear it.
  • Non-diegetic – The sound shown in the scene does not represent what would naturally be present in it. For example, a voiceover of a movie that narrates the scenes but cannot be heard by the characters themselves.

For my first project, overdubbing a clip from the film “WALL-E”, I have planned for all of my sounds to be diegetic, meaning they would all naturally be present within the scene itself. I believe this will enhance the narrative of the clip, as it will represent what is happening truthfully; additionally, a non-diegetic sound such as a voiceover would not fit well with the clip I have chosen.

In my second project where I will be recording dialogue and film as part of an interview, the sounds will also be diegetic. The people within the interview will be able to hear themselves talking and there will be no extra sounds added outside of the interview itself.

Foley Sounds or Sound Effects?

For my recording and overdubbing task, it is necessary to use both sound effects and foley sounds. Sound effects are sounds that are artificially created or digitally enhanced in order to create a realistic effect, whereas foley sounds are created through physical interaction of some kind, whether with objects or through actions such as footsteps, in order to create a sound that mimics one seen on screen.

For my “WALL-E” project, the majority of my sounds will be foley sounds, as they will be created by myself using physical interaction with different objects to produce sounds that I feel represent the movements in the scene well. For example, as seen in my sounds list, I have stated that I need something to represent WALL-E’s movement down a bumpy ramp. For this, I planned to record the sound of a pencil brushing down a notebook binder: human interaction with an everyday object to create a sound that represents something entirely new when applied in a different context.

Furthermore, I decided I may need to include some digitally created sound effects to enhance my project, for example a computer start-up sound to emphasise the moment when WALL-E fully recharges after exposing his solar panels.
