Here are two videos we’ve prepared to help you study your Area of Study E work for EDUQAS A Level Music. Let us know if you find them useful!
Play was released by Moby in 1999. It uses a variety of samples from old blues and gospel recordings. Some of the samples contain background noise and are quite ‘grainy’ or ‘lo-fi’; Moby uses this as a creative effect. Play was both a critical success and a commercial phenomenon. The album introduced Moby to a worldwide mainstream audience, not only through a large number of hit singles that helped the album to dominate worldwide charts for two years, but also through unprecedented licensing of his music in films, television, and commercial advertisements. It eventually became the biggest-selling electronica album of all time, with over 12 million copies sold worldwide. The video below shows some of the samples that were used in the album, both in their original and new contexts.
What is Sampling?
On a basic level, sampling is when you take a part of a song, single note or sound and reuse it in another context. It is common to use a sampler to either record, manipulate or playback one of these pieces of audio material (or any combination of the three).
You can also sample single notes or hits. These can be used to form a sampler instrument, which can recreate the sound of a real instrument very accurately, or create a brand new instrument from a sample.
Originally, in the 1960s, samplers used tape. To change the pitch of a tape-based recording, the tape was played faster or slower. Unfortunately, tapes were subject to hiss, wow and flutter, and degradation. The Mellotron, used by The Beatles, was a tape-based sampler with a length of tape attached to each key. In the video below, Paul McCartney demonstrates the Mellotron famously used on Strawberry Fields Forever.
Sampling was later used extensively in hip hop (1970s and 1980s) using turntables. Drum machines used early sampling technology because the samples required were short and did not require pitch shifting. Modern samplers are digital, and are often triggered using MIDI as part of a DAW.
In modern samplers, it is noticeable when a sound is pitch shifted beyond a few tones (when pitch is mapped across the keys in this way, we call it ‘keyboard tracking’). Multisampling is used to overcome this, ensuring each sample is only used over a limited range of keys. Velocity layering is used to change the sample depending on how hard the key is played. It is important to ensure samples are edited at a zero-crossing point to avoid a click or a glitch; another solution is to use crossfade looping.
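To see what an edit at a zero-crossing point means in practice, here is a minimal Python sketch (the function name is my own invention, not a real sampler API) that searches for the nearest point where the waveform crosses zero, so a cut can be made there without a click:

```python
# Hypothetical sketch: finding a zero-crossing point to trim a sample cleanly.
def nearest_zero_crossing(samples, start):
    """Return the first index at/after `start` where the waveform crosses zero."""
    for i in range(start, len(samples) - 1):
        # A zero-crossing: an exact zero, or two consecutive samples with opposite signs.
        if samples[i] == 0 or (samples[i] < 0) != (samples[i + 1] < 0):
            return i
    return len(samples) - 1

# A toy waveform: one cycle of a triangle-ish shape.
wave = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
cut = nearest_zero_crossing(wave, 1)
print(cut)            # → 4
trimmed = wave[:cut]  # cutting here avoids a click at the edit point
```

A real sampler works on many thousands of samples per second, but the principle is the same: edits land where the waveform is at zero amplitude.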
Common sample manipulation techniques include:

- Loop – repeats the sample
- Normalise – increases the volume to the maximum without distorting
- Stutter – repeats small parts of the sample
- Gapping – adds spaces between small parts of the sample
- Time stretch – extends or reduces the length of the sample independently of its pitch
- Pitch shift – changes the pitch of a sample independently of its length
- Reverse – plays the audio data of the sample backwards
- Invert – flips the waveform so peaks become troughs and vice versa
Sample Rate and Bit Depth
Analogue to digital conversion and recording also involves sampling. As part of this process, we take amplitude measurements of a waveform, and assign each measurement a number or value.
The bit depth gives us the accuracy of the amplitude measurements taken.
The sample rate is the number of amplitude measurements taken per second. It is measured in hertz.
When an analogue signal is sampled, we end up with a stream of numbers which make up the digital signal. These are usually stored in binary (0s and 1s). A common bit depth for high quality sampling and CD audio is 16 bit. This means that there are 2¹⁶ possible amplitude values (65,536).
A common sample rate for high quality sampling and CD audio is 44,100Hz. This satisfies Nyquist’s Theorem, which states that the sample rate must be at least twice the highest frequency to be captured. Since the human hearing range is roughly 20Hz to 20,000Hz, high quality sampled audio and CD audio can capture all the frequencies we can hear, with a little to spare.
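To make these numbers concrete, here is a short illustrative Python sketch (the function name is my own) that ‘samples’ a sine wave at a given sample rate and quantises each amplitude measurement to one of the values a 16-bit system allows:

```python
import math

# Illustrative sketch of digital sampling: measure a sine wave's amplitude
# `sample_rate` times per second, and round each measurement to one of
# 2**bit_depth possible integer values.
def sample_and_quantise(freq_hz, duration_s, sample_rate=44100, bit_depth=16):
    levels = 2 ** bit_depth                 # 16 bit gives 65,536 values
    n_samples = int(duration_s * sample_rate)
    samples = []
    for n in range(n_samples):
        amplitude = math.sin(2 * math.pi * freq_hz * n / sample_rate)
        # Quantise: map the -1.0..1.0 range onto the available integer levels.
        samples.append(round(amplitude * (levels // 2 - 1)))
    return samples

print(2 ** 16)    # → 65536 possible amplitude values at 16 bit
print(44100 / 2)  # → 22050.0 – the highest frequency a 44,100Hz rate can capture
```

A 440Hz note sampled for one second at CD quality produces 44,100 of these integers; that is all a digital recording is.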
The Edge from U2 famously uses lots of effects to create his signature guitar sound. Watch him talk about how the effects he uses contribute to that sound in the video below.
One of the most commonly asked questions when starting to study Music Technology is how you can identify effects and processing in your listening. A knowledge of effects and processing also helps you at Music GCSE, particularly in the study of popular songs.
All of the audio samples below play the clean version of the synth riff, followed by the effect.
Compression – reduces the dynamic range of audio, creating a more consistent volume level. Often used to help a part, such as a lead vocal, cut through the mix. Sidechain compression is often used creatively in dance music to make synths duck in time with the kick drum.
In the audio sample below, the synth is controlled by a compressor with a sidechain linked to a four to the floor kick drum.
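The ducking behaviour can be sketched very simply in Python (a deliberately simplified illustration with made-up names, not how a real compressor is implemented – real compressors use a detected level with attack and release times):

```python
# Simplified sidechain "ducking" sketch: whenever the kick signal exceeds the
# threshold, the synth's level is reduced by the ratio, so the synth dips
# in time with the kick drum.
def sidechain_duck(synth, kick, threshold=0.5, ratio=4.0):
    out = []
    for s, k in zip(synth, kick):
        if abs(k) > threshold:
            out.append(s / ratio)  # kick is sounding: attenuate the synth
        else:
            out.append(s)          # kick is silent: leave the synth alone
    return out

synth = [0.8, 0.8, 0.8, 0.8]
kick  = [1.0, 0.0, 1.0, 0.0]  # a four to the floor kick on beats 1 and 3
print(sidechain_duck(synth, kick))  # → [0.2, 0.8, 0.2, 0.8]
```

Notice the synth level drops exactly when the kick sounds – the rhythmic ‘pumping’ effect you hear in the audio sample.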
Noise Gate – removes or silences quiet sounds below a threshold. Often used to remove noise or quieter sounds between musical phrases, or to remove hiss/hum from a distorted electric guitar between phrases, so the gate opens when the guitarist starts to play.
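In its most basic form, a gate is just a threshold test per sample. The Python sketch below (my own simplification – real gates also have attack, hold and release controls so they open and close smoothly) shows the idea:

```python
# Noise gate sketch: silence any sample whose level falls below the threshold.
def noise_gate(signal, threshold=0.1):
    return [s if abs(s) >= threshold else 0.0 for s in signal]

print(noise_gate([0.5, 0.02, -0.3, 0.05]))  # → [0.5, 0.0, -0.3, 0.0]
```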
Delay – creates a repeat of the sound. The feedback alters the number of repeats, and the delay time changes how long between each one. It is possible to synchronise delays so they play in time with the music. The picture below shows a plugin recreation of the Roland Space Echo Tape Delay.
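A feedback delay can be sketched in a few lines of Python (an illustrative fragment with made-up names, not a tape echo emulation): each output sample is the input plus a scaled-down copy of the output from some samples earlier.

```python
# Feedback delay sketch: `delay_samples` sets the time between repeats, and
# `feedback` controls how quickly the repeats die away.
def delay(signal, delay_samples=2, feedback=0.5):
    out = []
    for i, s in enumerate(signal):
        echo = out[i - delay_samples] * feedback if i >= delay_samples else 0.0
        out.append(s + echo)
    return out

print(delay([1.0, 0.0, 0.0, 0.0, 0.0, 0.0]))
# → [1.0, 0.0, 0.5, 0.0, 0.25, 0.0] – each repeat half as loud as the last
```

Turning `feedback` up makes more repeats audible; in a real unit the delay time would be set in milliseconds (or synchronised to the tempo) rather than in samples.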
Reverb – simulates the sound and ambience of a room. It can be used heavily to give the sense of being in a very large space. If there is no reverb, we say that a sound is very dry. In the 1980s, gated reverb was commonly used: a dense reverb combined with a noise gate that cuts off part of the reverb tail, giving greater clarity to the mix.
The photo below shows a NanoVerb digital reverb unit.
Modulation effects all depend on changing a parameter of a sound over time; for example, combining a signal and a delayed version, or changing the pitch according to another wave. We can use an LFO (low frequency oscillator) to do this. An easy way of identifying modulation effects is listening for a ‘wobble’ of something – does something change over time to create a sense of movement?
A chorus effect combines a wet and dry signal (with and without effect). The wet signal is slightly delayed and detuned, which creates a ‘shimmering’ effect that simulates multiple instruments or doubling, like in a choir or ensemble. Chorus is commonly used on guitars.
On a flanger, the dry signal is combined with a wet signal on which the delay time has been modulated. This gives a ‘whooshing’, ‘aeroplane’ effect, especially if the feedback is turned up high.
A phaser combines a dry signal with a wet signal on which all-pass filters have shifted the phase by different amounts at different frequencies, with the filters swept by an LFO. When the two signals are combined, phase cancellation takes place at moving points in the spectrum, altering the frequency content of the sound.
Vibrato is when the pitch of a sound is modulated. It is similar to the effect musicians use when playing a woodwind, stringed or brass instrument, or when singers sustain a long note. The vibrato effect in the audio sample below was created using a Rotary speaker emulator.
A tremolo effect is a modulation of volume. The volume is quickly ‘turned up and down’. This can be done in time with the music on a DAW.
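Tremolo is the simplest modulation effect to express in code, so here is a hedged Python sketch (names are my own) in which an LFO turns the volume up and down a few times per second:

```python
import math

# Tremolo sketch: an LFO (low frequency oscillator) modulates the volume.
# `lfo_hz` sets how many times per second the volume wobbles; `depth` sets
# how far the volume dips (0 = no effect, 1 = fully silent at the bottom).
def tremolo(signal, sample_rate=44100, lfo_hz=5.0, depth=0.5):
    out = []
    for n, s in enumerate(signal):
        lfo = math.sin(2 * math.pi * lfo_hz * n / sample_rate)  # -1..1 wobble
        gain = 1.0 - depth * (lfo + 1) / 2  # gain sweeps between 1 and 1-depth
        out.append(s * gain)
    return out
```

The other modulation effects follow the same pattern, except the LFO controls a delay time (flanger/chorus), phase (phaser) or pitch (vibrato) instead of the volume.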
EQ and Filtering
HPF – stands for high pass filter. It lets the high frequencies pass through, removing low frequencies below the cut off. Commonly used in dance music to give a ‘trebly’ sound.
LPF – stands for low pass filter. It lets the low frequencies pass through, removing high frequencies above the cut off. It is commonly used in dance music to give a ‘muffled’ sound.
It is common in dance music for producers to automate the cut off frequency and add some resonance to create a moving filter effect. The audio sample below has an automated cut off frequency that slowly opens a low pass filter. The filter is slightly resonant (it sounds whistly as it moves). The graph below shows a resonant low pass filter.
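For the curious, the low pass idea can be sketched in Python as a simple one-pole filter (an illustrative fragment under my own naming; resonance is left out for brevity). Sweeping `cutoff_hz` upward over time is exactly the ‘opening filter’ automation described above:

```python
import math

# One-pole low pass filter sketch: each output sample moves only a fraction
# of the way toward the input, so fast (high frequency) changes are smoothed
# away while slow (low frequency) changes pass through.
def one_pole_lpf(signal, cutoff_hz, sample_rate=44100):
    # Smoothing coefficient derived from the cutoff frequency:
    # higher cutoff -> larger steps -> less filtering.
    a = 1.0 - math.exp(-2 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for s in signal:
        y += a * (s - y)
        out.append(y)
    return out
```

A high pass filter can be made from the same building block by subtracting the low-passed signal from the original.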
Distortion occurs when the level of a signal is turned up further than the system can handle. We call this input level the gain, to differentiate it from the output volume.
Many digital distortion plugins hard clip the sound as in the diagram above. Guitarists sometimes prefer the sound of valve amps, which don’t clip the wave as harshly; we call this soft clipping, and it is often described as a ‘warmer’ sound.
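The difference between the two is easy to see in code. In this Python sketch (function names are my own; `tanh` is just one common choice of soft-clipping curve), hard clipping flattens the wave abruptly at the limit, while soft clipping rounds it off gradually:

```python
import math

# Hard clipping: anything beyond the limit is chopped flat.
def hard_clip(x, limit=1.0):
    return max(-limit, min(limit, x))

# Soft clipping: the wave is squashed gradually as it approaches the limit,
# similar to the 'warmer' overload of a valve amp.
def soft_clip(x):
    return math.tanh(x)

print(hard_clip(2.5))            # → 1.0 (flat top)
print(round(soft_clip(2.5), 3))  # → 0.987 (rounded top)
```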
Another form of distortion, which occurs when using lower bit depths and sample rates, is bit distortion (often called ‘bitcrushing’). This is audible on old samplers, and can be applied to synths and other instruments to create a ‘lo-fi’ sound.
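A bitcrusher simply re-quantises each sample to fewer bits, as in this Python sketch (my own minimal illustration):

```python
# Bitcrusher sketch: snap each sample (in the -1.0..1.0 range) to the nearest
# of the coarse steps available at the lower bit depth.
def bitcrush(signal, bits=4):
    levels = 2 ** bits       # only 16 amplitude steps at 4 bits
    step = 2.0 / levels
    return [round(s / step) * step for s in signal]

print(bitcrush([0.33, -0.71]))  # → [0.375, -0.75]
```

Compare this with the 65,536 steps of 16-bit audio discussed earlier: the fewer the steps, the more audible the quantisation error, and the more ‘lo-fi’ the result.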
Wah wah is an onomatopoeic effect that changes the frequency content of a guitar. It is essentially a band pass filter with a moving centre frequency or a resonant low pass filter with a moving cut off frequency. It sounds like it is said!
Can you identify all of the effects in the audio file below? They are played one after another; clean then effect each time. Use the link below to check your answers.
3. LPF / high cut EQ
7. HPF / low cut EQ
I started making music when I was quite young; I think I was about 9 years old when I first started taking piano lessons. I always wanted to play the saxophone as well as the piano, and mistakenly thought it was a brass instrument. At primary school I duly signed up for brass lessons and was handed a euphonium; there followed a career in brass playing, with a bit of piano on the side!
During secondary school, all of my spare time was filled with music making. I was involved in the local youth orchestra, brass band and jazz band, which meant I was doing a lot of playing. I knew fairly early on, probably around Year 8, that I wanted to make a career out of music. My instrumental teachers instilled in me the importance of practice, and I was pretty good at making sure I practised brass and piano every single day. I really wanted to be the best I could be and quickly worked through my grade exams. These were like some sort of holy grail whilst at school: I thought that when you passed your Grade 8 you reached some mythical level – I realise now that was a load of rubbish.
I never really expected or intended to be a music teacher, just like I didn’t expect to study Music at university, or even at A Level! I should explain: although I don’t really remember when it happened, Music was always my passion and it was the thing that I was most interested in. It’s just that in my head, having a ‘proper job’ or following a secure and structured career path was at odds with studying music and the arts. I suppose I managed to convince myself that Music wasn’t an ‘academic’ choice, or perhaps I was concerned about my future employment prospects. I need not have worried; in retrospect, my study of music has opened up more opportunities to me than anything else I’ve done in my life.
I’ve played the piano for as long as I can remember, but I vaguely remember being frustrated because I could play things by listening to them, yet found making the link between what I was seeing in front of me and what I was hearing much more difficult. I stuck with it, and music theory helped me to fill in the gaps (I still prefer hearing things rather than reading things though!). On starting secondary school, I saw one of the peripatetic teachers playing the saxophone and thought it looked amazing. I started to play, and worked my way up the grades during my time there. Through doing this, I got involved with everything to do with music I could. On GCSE results day, suspecting I might have made a mistake, I changed my course choices for A Level from French to Music, alongside Maths, Chemistry and Physics.
In search of my ‘good career’, I went to university as a Chemistry student. In the summer after sitting my A Levels, before I moved, I had a sneaking suspicion that I might have made the wrong choice (in retrospect, I probably should have realised sooner!). It wasn’t that I couldn’t do Chemistry, or that I didn’t like it; it was just that I enjoyed Music more, and I wondered whether there might be a better reason to choose a degree course than the search for a stable career. When I moved up to Durham, I loved the city and the collegiate feel of the university, but I still wasn’t convinced about my course choice. I chose two Music elective modules alongside Chemistry (when I was signing up, the tutor said they had never seen it before). Two weeks later, I made tentative enquiries to the senior tutor in college and she got in touch with the Music Department. They told me that if I wanted, and as long as I had my Grade 8 (which I’d taken on Saxophone the previous summer), I could simply change course!
Whilst at Durham, my eyes were opened to a variety of different types of music, performance opportunities and composers who I had never even heard of. However, what I valued the most was the chance to work with like-minded people, and to make lots of friends who were interested in the same things as me. I also had loads of opportunities and support in starting to play professionally, and began to do so for weddings and functions as a solo saxophonist and in clubs with DJs, and started my own soul band (‘Souled Out’ – I was very proud of the name!). I also found particular areas that interested me – Acoustics and Electroacoustics ‘bolted’ the theory onto the things to do with sound and computer music that had always interested me.
I always intended to study more, and was all set to stay on to do an MA, but an opportunity came up to train as a teacher. I’d always wondered whether I’d like to teach, so I moved to the North West to see if I’d enjoy it. I just wasn’t sure – but when I stepped into the classroom and the bustling environment of a school music department, I realised that I really enjoyed teaching, working with students, and running the ensembles that formed so much of a part of a school’s musical life.
When I qualified, I got the amazing chance to build up a new A Level Music and Music Technology department in a brand new sixth form college in Rochdale – and I loved it. A few years later I took on responsibility for the leadership of Art, Dance and Drama, along with Music and Music Technology, and saw clearly the value of students’ study in those subjects, along with the wider employability and life skills gained by those who studied them.
I passionately believe in the difference a sixth form college education can make. My career has focused on teaching across the breadth of Music and Music Technology, but I have ended up concentrating on the areas that interest me the most – namely Music Technology and Rock & Pop Music. This specialism has led to further freelance work as an examiner, a trainer of teachers across the UK and abroad, and an author of revision guides and study materials to help both students and teachers. I find this work really rewarding, and it complements my work as a teacher.