It has never been easier to record music.
Only a few decades ago it would cost you an arm and a leg to buy the tape machines, mixing desks and outboard gear required to make a great recording; these days most of that equipment is integrated into your computer, and even tablets can be powerful creative tools.
Recently I met Mike Scuffham at a trade show. He is the founder of “Scuffham Amps” and, since 2008, the developer of the software amplifier plug-in “S-Gear”. Previously he was a product designer at Marshall Amplification back in the 90s and delivered the JMP-1 MIDI preamp (amongst other things).
At his booth I spotted two LEHLE SUNDAY DRIVER pedals used as preamps and we started talking shop. While nerding out, I thought it would be a great idea to have him as an interviewee at “The Lehle Files”. I mean, if there’s a competent expert in combining both worlds, analogue and digital, it’s him.
I brought him some questions that people had asked me at the LEHLE support desk.
And he revealed all the secrets you’ve always wanted to know…
For me, using the Sunday Driver is about achieving the ideal transparent input.
What’s gain staging?
In a recording environment this is the process of matching signal levels accordingly throughout your signal chain. It’s generally about achieving a healthy signal all the way through, minimising noise whilst allowing enough headroom to avoid clipping.
What’s the difference between instrument level and line level?
I think it is better to think of the difference between line and instrument in terms of the input requirements rather than absolute signal levels.
Instruments with passive pickups need to see a high-impedance input. A typical line input has a lower input impedance, which would load down the passive pickups of a guitar, and you would not achieve a good signal level into the equipment.
Line level is a concept that comes from analogue mixing desks and is equivalent to a signal peaking at +4dBu. Instrument level is not very well specified since instrument pickups have quite different output levels into the same load. Generally instrument level is a lot lower than line level and requires a high impedance input and some gain in order to achieve line level.
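As a quick sanity check on that +4 dBu figure, here is a small sketch of the conversion from dBu to RMS voltage (the function name is mine, not from the interview). dBu is referenced to 0.775 V RMS, the voltage that dissipates 1 mW into the 600-ohm loads of old analogue desks:

```python
def dbu_to_volts(dbu: float) -> float:
    """Convert a level in dBu to RMS voltage (0 dBu = 0.775 V RMS)."""
    return 0.775 * 10 ** (dbu / 20)

# Professional line level, +4 dBu, works out to roughly 1.23 V RMS:
print(round(dbu_to_volts(4.0), 3))  # → 1.228
```

Compare that with the few hundred millivolts a typical passive pickup delivers, and it is clear why an instrument signal needs both a high-impedance input and some gain to reach line level.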
How important is a preamp?
Very important, as are all elements in the guitar signal chain.
I’m a signal minimalist. For me, using the Sunday Driver is about achieving the ideal transparent input. It helps bring out the colour of the pickups, but it doesn’t add colour of its own. If you consider a classic tube amp like a Tweed Fender or a Marshall Plexi head, these designs are so simple and for that reason every element plays an important part in shaping the character of the amp. The first two tube stages of a Marshall Plexi lend a lot of character to the dynamics of the amp.
Should I record my (electric) guitar signal balanced?
A balanced connection to your recording equipment won’t stop your electric guitar from picking up noise. It will prevent additional noise and interference being picked up along the cable run. If you have a problem with noise in your recording environment, it’s better to identify and resolve the noise problem itself.
Can I use overdrive, modulation or any other pedals before recording?
Absolutely, if pedals are a key element of your signal chain then you probably want to keep that signal chain intact when recording. If you want to have the option to re-apply effects later, you can always split the guitar using a high quality splitter box and record a raw guitar DI track too. If there is a particular pedal that is first in line with your guitar and interacts with your guitar in a special way, then you might lose something by not keeping it.
Recording equipment is not designed to be overdriven.
At what volume level should I record into the DAW or play with the S-Gear?
The goal is to keep your audio interface input from clipping whilst still achieving a healthy signal level. Electric guitar pickups are hugely dynamic and it’s very hard to say what the maximum output is. A soft picking performance might result in just a few millivolts, whilst the hardest-hitting performance might peak at several volts. When we plug a guitar into a guitar amp we expect clipping, and that’s part of the sound.
I often hear people say that signal-to-noise ratio doesn’t matter in 24-bit digital systems and that you should reduce your input gain to avoid clipping at your hardest performance level with your highest-output guitar. It’s worth remembering that audio interfaces are not perfect: they still add noise at the input (which will get amplified by your favourite super-high-gain amp simulator), and ADCs are not perfectly linear across their dynamic range. In my experience it helps to adjust your audio interface input gain depending on the performance, the guitar used and the virtual amp.
What’s headroom? Is it important for me?
It’s the dynamic range between your normal performance level and the level at which your signal begins to clip. When recording a guitar signal it is important to have adequate headroom. Recording equipment is not designed to be overdriven.
Why doesn’t a digital amp simulation give me the same response/feeling like a real amp?
How are you listening to each? Is the audio environment the same and the SPL levels similar? The better amp emulations are extremely close to real amps and it is often factors in the listening environment which create the biggest differences. That said, I still believe that software based amps (and that includes all modelling amps, hardware and software) have some way to go before they achieve their ultimate potential.
Of course when the output of a guitar speaker couples with the guitar strings to create a feedback system this is something that can’t be emulated. You can’t physically resonate your strings with software. You can however achieve the same result with software amps during a recording session by taking a feed out to a guitar cabinet in your control room – this technique is often used by guitarists who don’t want to be in the same room as their dimed 100w Marshall stack.
What are the perfect settings for the audio interface (latency, buffer size, sample rate, bit depth)?
This is a broad question. It depends on your equipment and several other factors. Bit depth should be 24-bit, which is ideal for audio. Sample rate and buffer size will determine your latency and will also determine how much work your computer needs to do to process audio in the given time window. You need to find settings that provide low enough latency so that you can play comfortably, without pushing your computer to the point where it can’t keep up. Lower sample rates will put less stress on your computer. Higher sample rates will yield a lower latency for the same buffer size.
For example, a 128-sample buffer at a 44100 Hz sample rate gives your computer 2.9 milliseconds (128 × 1/44100) to process 128 audio samples. The latency would be 2.9 msec plus whatever additional latency is introduced by your audio interface; typically you might see 4.5–10 msec round-trip latency with 128 audio samples at a 44100 Hz sample rate.
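The buffer arithmetic above can be sketched in a few lines (the function name is mine); it also shows why a higher sample rate gives lower latency for the same buffer size:

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """Time the computer has to fill one audio buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000

# 128 samples at 44100 Hz: the 2.9 ms window from the example above.
print(round(buffer_latency_ms(128, 44100), 1))  # → 2.9

# Same buffer size at 96000 Hz: the window shrinks to about 1.33 ms.
print(round(buffer_latency_ms(128, 96000), 2))  # → 1.33
```

Remember this is only the plug-in processing window; the interface’s converters and driver add their own latency on top, which is where the 4.5–10 msec round-trip figures come from.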
After so many interesting opinions and so much good knowledge, I think it’s time for some easier questions, don’t you think?
Tape or DAW?
Strat or Les Paul?
6L6 or EL34?
Win or Mac?
9-42 or 10-46?
Oh. One last question.
Joki: “Any chance you will ever build a digital hardware amp?”
Mike: “Maybe one day ;)”