Me, Myself and MRI


Our session on audio recording and editing took place in the University of York Department of Electronics recording studio suite. We started with a tour of the studio rooms and the media editing suite to get an idea of the kind of work that goes on there. There were two studio control rooms - one based on a digital mixing desk and one on an analogue desk - and we paid particular attention to the sound-proofing of the rooms and the acoustic treatment used to make them sound as good as possible for recording (in practice, a whole lot of soft foam tiles stuck to the walls).

Throughout the session we used Audacity, the free sound editing software for PC and Mac, treating it as a word processor for audio: we can cut, paste and arrange sections of a recording into a new, different version, and apply specific effects to different sections to enhance them or change how they sound. Audacity is mainly a stereo audio editor - 'stereo' refers to the fact that we record with two microphones because, when we listen to the results, they are played back over two loudspeakers (or headphone speakers) for our two ears. We also looked at the waveform display on the screen and talked about how it represents an audio signal.
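Audacity itself is a graphical tool, but the 'word processor for audio' idea can be sketched in code: a digital recording is just a long list of amplitude samples, so cutting and pasting sections is slicing and joining arrays. The sample rate and tones below are invented for illustration.

```python
import numpy as np

RATE = 44100  # samples per second (CD quality)

def tone(freq_hz, seconds):
    """Generate a sine-wave 'recording' as an array of samples."""
    t = np.arange(int(RATE * seconds)) / RATE
    return np.sin(2 * np.pi * freq_hz * t)

a = tone(440, 1.0)   # one second of a low note
b = tone(880, 1.0)   # one second of a higher note

# Cut the first half-second of b and paste it onto the end of a,
# exactly like cutting and pasting words in a word processor.
clip = b[: RATE // 2]
edited = np.concatenate([a, clip])

print(len(edited) / RATE)  # the edited 'recording' is 1.5 seconds long
```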

The first audio sample we worked with was from the interview with Mr Nihill. There was lots to listen out for, but the main point was that the recording was very, very quiet and had a high level of noise, or hiss. Because the recording level was so low, the wanted sound sat only just above the recorder's own constant background noise, so turning it up to a comfortable listening level turned the hiss up with it.
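A minimal sketch (with invented numbers) of why a too-quiet recording stays hissy: the recorder contributes a roughly constant noise floor, and amplifying afterwards boosts the signal and the noise by the same amount, so the signal-to-noise ratio never improves.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(44100) / 44100

signal = 0.01 * np.sin(2 * np.pi * 440 * t)   # recorded far too quietly
noise = 0.002 * rng.standard_normal(t.size)   # recorder's constant hiss
recording = signal + noise

def snr_db(sig, noi):
    """Signal-to-noise ratio in decibels."""
    return 10 * np.log10(np.mean(sig**2) / np.mean(noi**2))

# Amplify to a comfortable listening level: gain scales signal and
# noise equally, so the SNR (how hissy it sounds) is unchanged.
print(round(snr_db(signal, noise), 1))
print(round(snr_db(50 * signal, 50 * noise), 1))
```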

The second example was a recording that was too loud - everyone agreed that it sounded harsh, and the waveform itself looked very different to the previous example: in places it went completely off the scale, flattening against the loudest level the recorder can store. This is called clipping distortion.
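A hedged sketch of what clipping looks like in the samples themselves: peaks that should go 'off the scale' get flattened at the maximum value the recorder can represent, and counting samples stuck at full scale is one simple (illustrative, not Audacity's) way to detect it. The signal below is invented.

```python
import numpy as np

FULL_SCALE = 1.0  # the loudest amplitude the recorder can store

t = np.arange(44100) / 44100
too_loud = 3.0 * np.sin(2 * np.pi * 440 * t)   # peaks at 3x full scale

# The recorder can't store values beyond full scale, so the tops and
# bottoms of the waveform are flattened - clipping distortion.
recorded = np.clip(too_loud, -FULL_SCALE, FULL_SCALE)

clipped = np.abs(recorded) >= FULL_SCALE
print(f"{100 * clipped.mean():.0f}% of samples are clipped")
```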

In the next case we returned to the interview with Mr Nihill - this time noting the background noises clearly evident in the form of other pupils playing outside the recording room. We then listened to another example where air conditioning could be heard 'underneath' the main recorded material. This became most evident when it was turned off - although not everyone could hear it.

The next example came from the interview with Mr Riley. It was really easy to spot the problem - there were many horrible spiky noises, most likely from the table the recorder was sitting on being knocked every now and then.

We also considered the quality of the sounds across the recordings. We noted what makes a sound quiet or loud (microphone placement, the input level set on the recorder itself), echoey or not, and we noticed that the interview with Mr Riley sounded quite 'boxy' - most likely due to sound reflecting off a wall or large surface and interfering with the original signal.

Finally we played a sound that was 'just right'. It was the school bell recorded up close and at just the right level. It sounded great and is available for download here.



We next introduced the idea of trying to keep the meaning or intention of our recordings when editing. It is easy to use an audio editor to change the order of words to make them mean something else - and it is our responsibility to make sure we represent our interviewees accurately and fairly.

We did a quick demonstration of the difference that near and distant microphone placement makes: Mr Riley and Kirsty spoke while we listened to them through both microphones. They sounded a lot quieter and more echoey through the microphone placed furthest away from them.
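A rough sketch of why the distant microphone picks up less: for a simple point source in open space, the level falls by about 6 dB every time the distance doubles (the inverse-square law). Real rooms add reflections that change this, and the distances below are invented.

```python
import math

def level_drop_db(near_m, far_m):
    """Level difference between two microphone distances (free field)."""
    return 20 * math.log10(far_m / near_m)

print(round(level_drop_db(0.3, 0.6), 1))  # doubling the distance: 6.0 dB quieter
print(round(level_drop_db(0.3, 2.4), 1))  # eight times the distance: 18.1 dB quieter
```

The quieter direct sound also explains the extra echoeyness: the room's reflections stay at much the same level, so they make up a bigger share of what the distant microphone hears.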

We next took a break to visit the anechoic chamber (shown to the left) - a completely isolated space at the back of the building that has been designed to give a really quiet and virtually reflection-free environment. We talked about whether the anechoic chamber was the ideal recording environment and decided that it might be ideal in terms of being able to control the sound, but we are so used to hearing background sounds in the world around us - with lots of reflections added from walls, ceilings, furniture and so on - that it sounds pretty odd, and feels even stranger the longer you stay in there!

We returned to the studios, where the group was split in two. We first had a go at editing some of the interviews and did a very good job: mistakes, pauses and stutters were removed, as were the original questions and comments from the interviewer. We also considered the difference between what the interviewee was telling us as fact and what they thought, which was much more interesting and appropriate for this project. The groups managed to trim away a lot of the original recording - but what was left was much better and often hinted at the thoughts and ideas the interviewees had rather than the facts they relayed.

The next part of the session explored how to use sound to tell a story - the groups had to edit together a soundtrack that told a story starting with someone waking up, turning the radio on and brushing their teeth - what happened next? Using some interesting sound effects (including a cow mooing and a racing car engine!) both groups created a sound story where, rather than a narrator explaining what was happening, sound effects were used to tell the listener what was going on.

We've created a lesson plan on how to create a sound story and it can be found at the top right of this page.

Next section: How we created the exhibition audio