10 Tips for Capturing e-Learning Audio

by: Al Lemieux

Using audio in your online course is an extremely important factor in engaging your audience. Studies have shown that courses without audio are less compelling and memorable than courses with audio. Whether it is used for narration or for directions, audio, done the right way, can greatly enhance your e-learning materials.

Help with Audio for Online Course Developers

Capturing, editing, and cleaning up audio well enough to achieve a quality output requires audio engineering knowledge that is typically beyond the skill set of most online course developers. The average course developer has little if any knowledge of sampling rates, frequencies, modulation, compression schemes, and other audio engineering concepts.

The goal of this article is to provide you with information on how to get your audio into a format suitable for an e-learning course-authoring tool. The article focuses on capturing and editing audio at a basic level, and the 10 tips listed below should help get you moving in the right direction.

1. Microphones

For narration, you’ll need to use a microphone to capture the voiceover. Sure, your laptop or PC might have a built-in microphone, but you probably don’t want to end up sounding like the broken drive-thru speaker at your local fast food restaurant. There are several types of microphone to choose from, and each is made for a different purpose.

Dynamic Microphones are the ones you commonly see being used by rock stars in concert. They have a ball-like grille as the head. Most handheld dynamic mics use a cardioid (uni-directional) pickup pattern, meaning they favor sound arriving from directly in front of the capsule and reject much of the surrounding room noise. The Shure SM58 is an example of a dynamic microphone and has a consistent quality and dynamic range that make it useful for all types of applications.

Condenser Microphones, unlike dynamic microphones, have a capacitor inside that requires them to be powered, either by a battery or by phantom power supplied from a mixer or audio interface. These microphones are often found in recording studios, are used in live concerts, and are the type commonly built into lavalier mics (the kind you attach to a shirt or lapel). Stage vocal condensers are usually tightly uni-directional, meaning they capture a narrower area of sound. The Neumann KMS 105 is an example of a condenser mic.

You may also find a Headset Microphone, which plugs directly into your computer’s input and output jacks so you can hear what you say as you are recording. Most of the microphones on these headsets have a noise-cancellation function built in, which causes unnatural-sounding silences between phrases. The audio from these microphones also tends to sound blown out because the capsule sits very close to the mouth: higher frequencies come across as noisy and lower frequencies sound boomy and muddy.

Built-In Microphones tend to pick up any noise generated by your computer during use: hard-drive activity, cooling fans, operating system sounds, and room ambience. These microphones are usually engineered to pick up the widest possible area of sound for situations like web conferencing and chat room sessions. The audio quality is usually poor, and the microphones do not have sophisticated features like noise canceling or balancing. If at all possible, avoid using the built-in microphone as your audio input source.

2. Distance from Microphone

I’m sure you’ve all seen the rock videos or American Idol, where the singers have the microphone jammed up against their mouths. So most people feel they need to do the same when recording narration. What the rock stars have to their advantage is a limiter, a device that caps the signal whenever it rises above a set level. The maximum output is policed by this device to prevent distortion and unwanted feedback. Most likely, your simple setup won’t have this capability.

One thing you can do to prevent unwanted sounds in your narration is to position your mouth about 6 to 12 inches away from the microphone and speak directly into it, not off to either side or past it. The best audio signal travels a direct path from your mouth to the microphone. If you start speaking into the microphone and then tilt your head downward to read from a script, you’ll notice the drop in the audio signal. If needed, hold your script up next to the microphone. Another tip: when reading from a script, don’t turn pages or pass them from one hand to the other as you read. Most microphones are sensitive enough to pick up all of that paper movement. If possible, keep the pages of the script separated rather than stapled or held together with paper clips.

3. Use a Windscreen

A consistent signal level helps to produce the best audio. If you are looking at an audio waveform for the first time, you won’t be able to decipher it, but the peaks and valleys can visually tell you a lot about that sound. One thing that often happens, especially during narration recording, is the pop that occurs when saying words that begin with P or B. These plosives push the signal past the available dynamic range, so they don’t sound anything like a P or a B, just a popping thump.

A simple solution to this problem is a windscreen. Some engineers will stretch a nylon stocking over a wire clothes hanger and place it in front of the mic to act as a low-budget pop filter, but you can also buy a foam windscreen for less than ten dollars at your local music store; it simply fits over your microphone.

4. Interfaces

So far, I’ve spent a lot of time talking about microphones, but most of the mics recommended here won’t even plug into your computer without an adapter. Professional mics use an XLR connector, which has three pins, while most computers have eighth-inch (3.5 mm) jacks. You can use a dynamic microphone connected to your computer through a simple adapter without much of a problem. Condenser mics, since they require power, won’t work with a passive adapter alone.

There are audio interfaces built specifically for this purpose, and they come in two flavors: USB and FireWire (IEEE 1394). Most PC manufacturers include either USB 1.0 or 2.0 ports on their hardware. FireWire is more commonly found on Apple computers; however, you can purchase FireWire cards for PCs. FireWire handles sustained data transfers more efficiently than USB, so there is generally less latency on a FireWire connection, which makes it more desirable for audio input.

FireWire itself comes in two flavors: FireWire 400, which can transfer data at a rate of 400 megabits per second, and FireWire 800, which doubles the speed to 800 megabits per second. There is a wide range of digital audio interfaces on the market today, and you can use either technology to connect one to a computer. M-Audio has a line of both types, including the FireWire-based ProFire 610 and the Fast Track USB.
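Either bus has far more bandwidth than a narration session needs, which is why latency, not raw speed, is usually the deciding factor. Here is a quick back-of-the-envelope check in Python (a sketch using nominal bus speeds, not measured throughput):

```python
# Rough bandwidth check: uncompressed stereo narration versus nominal bus speeds.
SAMPLE_RATE = 44_100   # samples per second (CD quality)
BIT_DEPTH = 16         # bits per sample
CHANNELS = 2           # stereo

audio_mbps = SAMPLE_RATE * BIT_DEPTH * CHANNELS / 1_000_000
print(f"Uncompressed stereo audio: {audio_mbps:.2f} Mbps")  # about 1.41 Mbps

# Nominal bus speeds in megabits per second
buses = {"USB 1.x": 12, "USB 2.0": 480, "FireWire 400": 400, "FireWire 800": 800}
for name, mbps in buses.items():
    print(f"{name}: {mbps} Mbps, roughly {mbps / audio_mbps:.0f}x the audio stream")
```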

These devices run off of their intended connections and act as an audio input/output source for your computer, providing a professional recording result. At SyberWorks, we use an M-Audio FireWire 410 audio interface connected to two Shure SM58s for all of our narration. The 410 is a powerful choice because it offers multiple inputs and all of the audio controls necessary for level/gain and limiting/compression. It also has XLR and quarter-inch inputs for microphones and instruments, and two headphone outputs. Connected to the 410 are two M-Audio BX8a monitors, which offer much higher quality output sound than any built-in computer speaker.

5. Software

There are so many options for audio editing software, from simple shareware/freeware to professional-level packages, that the decision of what to use may come down to your budget. The basic audio recording tools that come with any Windows-based machine do not generate quality audio. Any Apple computer comes with GarageBand, which is an excellent mid-level audio recording application. GarageBand is the consumer sibling of Apple’s Logic Studio and offers some fairly sophisticated tools for recording, editing, and delivering audio recordings on any platform.

Adobe has an audio recording/editing application called SoundBooth, which offers a variety of tools for cleaning up audio files and saving them in different formats. SoundBooth comes with the Creative Suite Production Premium or Master Collection. I recently used SoundBooth to record old cassette tape tracks as MP3 files so that I could burn them to CD. I was able to use SoundBooth to clean up all of the hissing on the tapes, and the audio quality was excellent.

Bias, Inc. has been in the audio production business for over a decade now, and its flagship audio editing software, Peak Pro, is an award-winning application. With a simple interface and a variety of effects and controls, Peak makes audio editing simple. I’m a longtime user of Peak Pro and can say that it’s a stable, professional application that offers all of the tools I need to edit the audio I record. Combined with SoundSoap Pro, an audio cleaning application, Peak Pro can reduce noise, hiss, rumble, cracks and pops, and other unwanted sounds in any audio recording.

Here at SyberWorks, we use Peak Pro to record narration for podcasts and courses, and GarageBand to stitch together podcasts and teasers. GarageBand comes with preset stingers and effects that are great for podcasts, and it’s ridiculously easy to use. Once the file has been put together, it’s output as an AIFF file to iTunes, which I then use to convert the sound to the MP3 format for delivery.
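If you’d rather script that last conversion step than run it through iTunes, the sketch below shows one way to do it. It assumes Python with the pydub package and an ffmpeg install that pydub can call; the file names are hypothetical.

```python
# A scripted stand-in for the iTunes AIFF-to-MP3 conversion step.
# Assumes pydub and ffmpeg are installed; file names are hypothetical.
from pydub import AudioSegment

podcast = AudioSegment.from_file("podcast_master.aiff", format="aiff")

# Export as MP3 at a spoken-word bitrate for delivery.
podcast.export("podcast_master.mp3", format="mp3", bitrate="96k")
```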

6. Normalize

During recording, audio levels can be mismatched, creating undesirable results during playback. For example, recording from two different sources might produce two different volume levels: when played back, one source sounds softer and the other louder, even though they were recorded in the same room on the same computer with the same hardware. This can be attributed to vocal style or to audio input levels not being properly monitored.

To adjust audio levels across the board so that the volume is relatively consistent, most audio editing software offers a normalization option, in which the audio is examined and the gain of the whole file is raised or lowered so that its level hits a target. Quieter recordings are brought up and overly loud ones are brought down, so the overall sound level across your sources is more consistent.

If there is a stark contrast between the two input sources, normalization might work against you: when the gain of a quiet recording is increased, its background noise is increased right along with it. Be sure to check your audio input levels before recording, and try to get both sources up to the same level. If possible, show the input meter in your audio software to your narrators and have them speak at a level that peaks at around -5 dB.
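If your audio editor doesn’t make the peak level obvious, a few lines of Python can check and correct it. This is a minimal sketch of peak normalization, assuming the numpy and soundfile packages and a hypothetical narration.wav file; most editors’ built-in Normalize command typically does essentially the same thing.

```python
# Minimal peak-normalization sketch; assumes numpy and soundfile are installed
# and that "narration.wav" is the (hypothetical) file to check.
import numpy as np
import soundfile as sf

data, rate = sf.read("narration.wav")        # float samples in the range -1.0..1.0

peak = max(np.max(np.abs(data)), 1e-9)       # guard against an all-silent file
peak_db = 20 * np.log10(peak)                # current peak level in dBFS
print(f"Current peak: {peak_db:.1f} dBFS")

TARGET_DB = -5.0                             # the level suggested above
gain = 10 ** ((TARGET_DB - peak_db) / 20)    # linear gain needed to hit the target
sf.write("narration_normalized.wav", data * gain, rate)
```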

7. Ahh’s and Uhm’s

For some people, speaking into a microphone can be a little intimidating, so you might hear a lot of Ahhh’s and Uhmm’s during a recording session. Some people naturally put these in their phrases because they are thinking about what they are going to say next. Others put them in out of nervousness. Others have lisps or emphasize their S’s and Z’s. Still others smack their lips or breathe heavily before talking.

When editing audio, the tendency might be to remove all instances of Ahh’s and Uhm’s. When separated from the rest of a passage, this is easy to do and is an effective way to make the entire sound file shorter. However, there are times when the Ahh’s and Uhm’s are rolled into other phrases and are difficult to separate.

The rule of thumb when editing audio is to remove whatever is bothersome, but keep the tempo of the original sound source and make it sound as natural as possible. Some people also take deep breaths between passages or have nasal sounds that are picked up by the microphones. Sometimes these can be removed and other times they can’t. Remove what you can, but try as much as possible to make the overall recording natural.
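One scripted shortcut that respects that rule of thumb is to tighten only the dead air between phrases and leave the phrasing itself alone. The sketch below does that with the Python pydub package (an assumption on my part, as are the file name and the threshold values); it won’t catch an Uhm that is rolled into a word, which still takes manual editing.

```python
# Tighten long pauses between phrases without touching the speech itself.
# Assumes pydub is installed; file name and thresholds are illustrative.
from pydub import AudioSegment
from pydub.silence import split_on_silence

narration = AudioSegment.from_wav("narration.wav")

# Split wherever the level stays below -40 dBFS for more than 700 ms,
# keeping 200 ms of natural room tone on each side of every phrase.
chunks = split_on_silence(narration, min_silence_len=700,
                          silence_thresh=-40, keep_silence=200)

pause = AudioSegment.silent(duration=300)    # a consistent 300 ms gap
tightened = chunks[0]
for chunk in chunks[1:]:
    tightened += pause + chunk

tightened.export("narration_tightened.wav", format="wav")
```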

8. Cleaning Audio

Depending on how clean your input source is, you may have an audio track that is laced with hum or noise coming from a variety of sources, like an overhead fluorescent light, A/C noise, and other ambient sounds. SoundBooth and Peak Pro both have tools for eliminating these types of sounds from your audio input sources. As mentioned above, the rule of thumb still applies.

Some of these tools can end up making your audio sound very metallic and unnatural, more like a computerized version of the original. When using SoundSoap Pro, for example, the default settings for removing noise keep the highs recognizable, while the mids and lows take on a dense, computerized quality. SoundBooth’s noise correction tools have the same issue, so if too much correction is applied, the result isn’t worth the effort.

There’s a balance to how much correction is applied to a sound versus the quality of the output. Sometimes, here at SyberWorks, we have to record voices over the phone. Sound quality from a phone line is always problematic, so invariably that sound will need to be cleaned up. Too much correction though, and the integrity of the voice is compromised. On the other hand, no correction will keep a lot of hiss and noise in the sound, which is undesirable.
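For simple cases, such as low-frequency rumble from fans or air handling, you don’t always need a dedicated cleanup tool. The sketch below applies a gentle high-pass filter in Python; it is not the SoundSoap or SoundBooth workflow, just a scripted analogue, and it assumes the scipy and soundfile packages plus a hypothetical file name.

```python
# Remove low-frequency rumble (fans, A/C, handling noise) with a high-pass
# filter, leaving the voice band alone. Assumes scipy and soundfile are
# installed; the file name is hypothetical.
import soundfile as sf
from scipy.signal import butter, filtfilt

data, rate = sf.read("phone_interview.wav")

# 4th-order Butterworth high-pass at 80 Hz; spoken voice sits above this.
b, a = butter(4, 80, btype="highpass", fs=rate)
filtered = filtfilt(b, a, data, axis=0)

sf.write("phone_interview_cleaned.wav", filtered, rate)
```

Hiss and broadband noise sit higher in the spectrum, so they still call for a dedicated noise-reduction tool and a careful ear.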

9. Audio Formats

Depending on which authoring tools you are using and which platform you are on, you’ll need to know which audio formats to use. The major audio format for the PC platform is .wav, on the Mac platform it’s .aiff. Adobe Captivate and Microsoft PowerPoint both use the .wav format.

The most popular internet audio format now is .mp3, which offers high compression with better sound quality at small file sizes than older internet formats. Captivate uses MP3 compression for the sound files in its final output. This makes the files smaller, but they can suffer in quality, depending on the settings in Captivate. Native .wav files embedded in PowerPoint files can make those files enormous. Using iSpring, a PowerPoint-to-Flash converter, you can significantly reduce the file size of presentations and course materials intended for internet delivery.
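To see just how enormous an uncompressed narration track really is before you embed it, a quick inspection along these lines helps. This sketch assumes the Python soundfile package and 16-bit audio; the file name is hypothetical.

```python
# Quick size check on an uncompressed narration file before embedding it.
# Assumes the soundfile package; "module1_narration.wav" is a hypothetical file.
import soundfile as sf

info = sf.info("module1_narration.wav")
uncompressed_mb = info.frames * info.channels * 2 / 1_000_000   # 2 bytes per 16-bit sample

print(f"{info.duration:.0f} s, {info.samplerate} Hz, {info.channels} channel(s)")
print(f"Roughly {uncompressed_mb:.1f} MB of raw 16-bit audio")
```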

SyberWorks Web Author has an additional tool called SyberWorks Web Audio which allows you to add streaming audio to courses that anyone can hear using just a web browser. Playback is accomplished through a small Sun Java applet that downloads automatically and quickly. It requires no additional plug-ins or server software. It has no firewall issues and can play in the background, without any visible controls on course pages, or with a small set of basic audio controls displayed.

The SyberWorks Web Audio tool takes an audio file in the .wav format and compresses it into the SyberWorks audio format (.sa). It is then easily inserted into the Word document by using the Add SyberWorks Audio template.

10. Compression

Depending on which tool you used during recording, you probably have an original audio source at or near CD quality (44.1 kHz, 16-bit). That quality is diminished as soon as the file is compressed. Too much compression and the sound quality is something like R2-D2 behind a large metal door. Too little compression and the sound files become too large to transfer and play.

The factors involved in compression are bit rate and quality. The bit rate is the data-stream target for your intended audience. The typical internet connection these days is at least DSL speed (128 Kbps or better), but there may still be users on 56 Kbps modems. Tools like Captivate will allow you to set the bit rate and quality for all the audio in your course. Therefore, when you save your files out of SoundBooth or Peak Pro, never add any compression; let Captivate or your other e-learning tool do the compression for you.

Quality settings can also alter the file size. Lower-quality files use more compression and are therefore smaller, but they sound worse. Higher-quality files use less compression and sound better, but are larger.
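A little arithmetic makes the trade-off concrete. The figures below are illustrative bit rates, not recommendations for any particular tool:

```python
# Rough file-size arithmetic for a five-minute narration at common MP3 bit rates.
DURATION_S = 5 * 60   # five minutes of narration, in seconds

for kbps in (32, 64, 96, 128):
    size_mb = kbps * 1000 / 8 * DURATION_S / 1_000_000   # bits/s -> bytes -> MB
    print(f"{kbps:>3} kbps is roughly {size_mb:.1f} MB")
# 32 kbps ~ 1.2 MB, 64 ~ 2.4 MB, 96 ~ 3.6 MB, 128 ~ 4.8 MB
```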

One tip is to try different compression levels and settings and listen to each output to find the one that is just right. It may be time consuming, but in the end, your e-learning product will be better for it.

Summary

We touched on a number of technical concepts in this article, and there’s a lot more to learn. Whatever tools you decide to use, incorporating better-sounding audio into your e-learning development is a great way to take your courses to the next level. Recording high-quality audio can be challenging and fun. It may take some time to get used to, but it’s a skill that’s worth developing.