Is It Live Or Is It Memorex?
Playback systems come of age

TL;DR: When I first started seeing playback systems. Or rather hearing them. When I first started using playback systems. A brief description and history of a system I've been on for several years. An overview of some of the methodologies used for playback systems. I know the blown-away guy was Maxell, not Memorex, so work with me here. If you don't know what either Memorex or Maxell is, ask your grandparents.
Playback systems in concerts have been around longer than many realize. The first playback of tracks I heard at a concert was the first concert I went to, in the mid 70s. Pink Floyd were doing the In the Flesh tour (aka the Animals tour) at a stop at Anaheim Stadium, or Angel Stadium, the Big A. I saw many iconic acts over a couple of summers there. They played the entirety of Animals and Wish You Were Here along with a couple of Dark Side of the Moon tunes for the encore. The playback was the sound effects from the various records. I didn't think much of it. I was too impressed by the gigantic inflatable pig to wonder where the pig sounds came from.
The second time I heard playback was at Queen's News of the World tour in Long Beach later that year. This time I noticed the playback. It was during the operatic section of Bohemian Rhapsody. The stage went dark and it sounded like that part of the song from the record (as in vinyl LP) was being played. I'd bet it was a two-track tape deck somewhere. The stage burst to life at the end of that passage with Brian May's solo for the uptempo reentry. It was magic. We were stoked they were able to do the song just like the record. As in vinyl LP.

My journey from the late 70s into the 80s saw me working with punk, post punk, college indie rock and the hair metal bands of the Sunset Strip. There was not a playback rig to be had. It was a place where manly men played everything live on stage. Except of course for the backing vocal parts sampled into an E-mu. And a metric assload of SPX 90 (or a Lexicon 224 if you were a bad hombre) on the backing vocal inputs. Kids touring colleges and middle-aged men playing butt rock in spandex who couldn't really sing backup needed all the help they could get. There was no Auto-Tune.

I next used playback after I’d moved to Grungetown and started working for them local girls. When I joined they’d just finished the mondo-mega days of Diane Warren power ballads and were looking to get back to their harder rocking roots. They only had a handful of songs that required the thick keyboard parts from both the previous hits and new tracks. They didn’t want to hire a keyboard player for only a few tunes. It was settled we’d use Alesis ADATs for playback.
ADATs were one of the first self-contained 8-track digital recording devices. They recorded on S-VHS video tape. They were supposed to sync and did. Mostly. Sometimes. Sometimes not. They went to their studio, named after one of their previous records (where the animals were not good), and pulled the keyboard tracks from the masters of the new stuff and played the old stuff into the ADAT. Initially the red-clad guitar player was going to trigger the tracks. As it turned out his tech got nominated for the part after slipping up and telling us he owned a couple of ADATs. Plus the foot switch and sync thing didn't always work. The guitar player knew how to do it but the technology didn't always allow it to be seamless. We also put a click on it so everyone could sync. Through the wedges and sides. This was before ear monitors started to be a thing. At first it was a beep. Then it became the sound of two drum sticks. Drum sticks as big as baseball bats. Through the wedges and sides. The system worked but was clunky and unreliable enough that on future outings they hired a keyboard player who could sing and had a vintage Moog and a theremin. You ain't lived 'til you've mixed a theremin live in a rock band.
There were various implementations of playback rigs through the 90s, helped along by the rap and hip-hop world. Loops and sound effects drove the music. By the early 2000s bands like Evanescence and Linkin Park were making heavy use of playback rigs. They were controlled by someone in the band, usually a keyboard player, sometimes the drummer, or in some cases a DJ who was part of the band. It wasn't hidden. It was featured up front. No more being afraid of a Milli Vanilli disaster. You saw and knew they weren't playing some of the tracks but didn't care. Those parts were a big part of the tunes you liked to sing along with.
By then the recording guy turned premier prog-rock musician I was gigging with was using playback from a variety of sources. We needed 22 direct boxes, which at the time was quite a lot, particularly for the casino and corndog gigs we were doing. As thick as the songs were, with only six in the band you had to have tracks. What had been sequencers when the songs were originally recorded became tracks played from Digital Performer and an interface, as well as a couple of standalone samplers. It worked well. The eye in the sky was looking favorably upon us.
Not too long after that Apple released MainStage and the rush to the laptop band and tracks was at full tilt. It became easy to do. Coffeehouse gigs and busking became more layered and dynamic. Couple that with the boom in home/self recording (along with the decimation of recording studios) and more comprehensive and affordable tools became available to anyone with a credit card and a Sam Ash nearby.
Pop music influenced by R&B, hip-hop, rock and traditional yacht rock pop pushed writers and producers to make more complex, layered arrangements. It was no longer cost prohibitive for most to be as diverse as possible instrument-wise. Meanwhile the Broadway kats were writing more complex, often symphonic scores that needed more musicians than most budgets would allow, even on a big-name show. You'd need a hella big pit for a full symphony and a large choir.
Playback became the answer. After my mixed experience with rock playback, when I landed on the Strip it was playback central. Everybody used playback. And I mean everybody, whether it was an augmentation or a tracks-only show. Sinfonia was the go-to then. Marketed as an orchestral enhancement, it absolutely enhanced the orchestra. Because it was the orchestra. It was (and still is in many cases) controlled by the band leader, though the concert residencies often employ a playback operator. The systems we use are anywhere between 16 and 64 outputs.

Everyone I know has long since moved from Sinfonia to playback systems designed for the specific use case and built from a variety of components. I'm not going to detail the programming of our systems other than a general overview of what is used. Most of the secret sauce is in the coding of the user interfaces. We use Max (Max for Live on newer designs) as the primary human interface for visual feedback. The system is triggered by the band leader using a MIDI keyboard mounted above the keyboard they play for the gig. They're playing, calling/conducting the show and triggering playback. The UI separates each song into sections that correspond to bars and beats. Because we deal with complex automation and human circus elements, at times there needs to be a vamp or repeat. You can keep looping between sections, with granularity down to the measure or even the beat, until it's time to move on. Other types of shows don't use or need a separate visual interface as they use what's provided in the software playback package. For example, manually triggering the scenes from within Live.
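To make the section/vamp idea concrete, here's a minimal sketch of the logic in Python. It is not our Max patch; the song structure, names and method calls are illustrative assumptions only.

```python
# Minimal sketch of section-based playback with vamping. Not our actual
# Max patch; the song structure and names below are assumptions.

from dataclasses import dataclass

@dataclass
class Section:
    name: str
    bars: int           # length of the section in bars
    vamp: bool = False  # if True, loop this section until told to advance

SONG = [
    Section("Intro", 4),
    Section("Verse 1", 8),
    Section("Circus vamp", 2, vamp=True),  # loops until the act is set
    Section("Finale", 12),
]

class SectionPlayer:
    def __init__(self, sections):
        self.sections = sections
        self.index = 0
        self.advance_requested = False

    def request_advance(self):
        """Called when the band leader hits the trigger key."""
        self.advance_requested = True

    def on_section_end(self):
        """Called by the transport when the current section's last bar ends."""
        current = self.sections[self.index]
        if current.vamp and not self.advance_requested:
            return current  # stay in the vamp for another pass
        self.advance_requested = False
        self.index = min(self.index + 1, len(self.sections) - 1)
        return self.sections[self.index]

player = SectionPlayer(SONG)  # request_advance() would hang off the trigger-key handler
```

The real rigs do the same dance with prettier lights: the vamp keeps rolling until the trigger key says move on, and the granularity is wherever the section boundaries are drawn.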

This kind of show is difficult to lock to linear time code, though some do with varying degrees of effectiveness. The benefit is that the mechanical, visual and audio elements lock together with precision. The drawback with LTC is that when something does get out of kilter and you need some flexibility and the ability to improvise, it may throw the other elements off. That's why shows that are staged like ours have live bands. Many times the audience can't tell there is a short vamp because of the flexibility the band and playback rig have. Other times it's more obvious, but the hit to the creative is smaller because the band and playback can roll with it. At least they didn't stop to run a loop out of context or, worse yet, sit in silence.

The bulk of the systems I see use Ableton Live as the primary playback software. There are a few different ways to play the samples. One method is loading the audio samples directly into clips. Another is using MIDI in Live to trigger external samplers (in our case Kontakt). And yet another is to use either a sampler or a virtual instrument as an insert in Live and trigger them from within Live. The last two methods are used when you're trying to conserve machine resources. These days any modern computer, Mac or Windows, should be able to run a few dozen tracks using audio clips in Live. When we first migrated to Live from Giga Studio/Sinfonia way back when, there were memory limitations in 32-bit Windows that required we run a software sampler for audio playback and control it via Live. When Windows went 64-bit that issue largely disappeared.
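For the second method, the MIDI leaving Live and hitting the sampler is just note data on a port. A hedged sketch of that idea using Python's mido; the port name, note number and timing are assumptions, not anything from our rig.

```python
# Sketch: triggering an external sampler (e.g. Kontakt) over a virtual
# MIDI port, the same job Live's MIDI tracks do. Port name, note number
# and hold time are illustrative assumptions.

import time
import mido

PORT_NAME = "Playback Bus 1"  # assumed; check mido.get_output_names() for yours

with mido.open_output(PORT_NAME) as port:
    # Fire a sustained pad on MIDI channel 1 (mido channels are 0-based)
    port.send(mido.Message("note_on", channel=0, note=60, velocity=100))
    time.sleep(2.0)  # hold for roughly one bar of 4/4 at 120 BPM
    port.send(mido.Message("note_off", channel=0, note=60))
```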
While we're on the subject of platforms, most of the playback rigs I've seen are on Macs. Here they are Windows. We just upgraded the playback hardware and software to Windows 10 and current versions of Live and Kontakt. We were about to make the change in March of 2020 but the pandemic disrupted that. Either platform works, though I prefer a Mac. The integration of MIDI (IAC) and Core Audio makes it easier and more flexible than using something like LoopBe for internal MIDI routing and ASIO4ALL as a driver on a Windows box. The associate MD and I design and maintain the system. He does content and software, I do infrastructure and hardware. With the exception of Max runtimes, which compile as an app on Mac and a shell with a buttload of dependencies on Windows, working cross-platform isn't too big of a challenge. The system is reliable, works well and has good redundancy.
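One small place the platform difference shows up is simply finding the internal loopback port by name. A sketch, assuming the usual default names (IAC buses on macOS, LoopBe or loopMIDI on Windows); your ports may well be named something else.

```python
# Sketch: locating the internal MIDI loopback port on either platform.
# The name fragments are assumptions based on common defaults; adjust
# for whatever your loopback driver actually exposes.

import mido

def find_loopback_port() -> str:
    candidates = ("IAC", "LoopBe", "loopMIDI")
    for name in mido.get_output_names():
        if any(tag.lower() in name.lower() for tag in candidates):
            return name
    raise RuntimeError("No internal MIDI loopback port found")

print(find_loopback_port())
```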
The system interfaces from the computer with RME Hammerfall HDSPe cards. Each computer has a card that is fed into a MADI bridge that allows for switching between computers. The switch is triggered by the conductor/band leader. It's part of the legacy design using MIDI Solutions F8 (contact closure to MIDI) and R8 (MIDI to relay) boxes. It's a two-button switch box. The MIDI and sysex are programmed into the F8, which switches the MADI bridge (sysex) to the correct computer and changes the MIO XL MIDI patcher (MIDI) so the correct source MIDI is being used. The R8 sends a tally light to the button in use for visual feedback. We also send MIDI to the guitar rig that controls his patches and FX, a couple of different switchers and volume changes. He spends most of the show in the performance space so he's not down at his rig in the studio two floors below stage level.
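If you were doing the same changeover from software instead of the F8, the logic is just one button press fanning out a couple of MIDI messages. A rough sketch with made-up port names and placeholder sysex; nothing here is the actual MADI bridge or patcher command set, so check the device docs before trusting any byte of it.

```python
# Sketch of an A/B playback-machine changeover. The sysex bytes and port
# names are placeholders, not real MADI bridge or MIDI patcher commands.

import mido

PATCHER_PORT = "MIDI Patcher"   # assumed port name
BRIDGE_PORT = "MADI Bridge"     # assumed port name

# Placeholder sysex payloads for selecting computer A or B on the bridge
BRIDGE_SELECT = {
    "A": [0x7D, 0x01, 0x00],    # hypothetical bytes
    "B": [0x7D, 0x01, 0x01],
}

def switch_to(machine: str) -> None:
    """Route audio and MIDI to playback machine 'A' or 'B'."""
    with mido.open_output(BRIDGE_PORT) as bridge, \
         mido.open_output(PATCHER_PORT) as patcher:
        # Point the MADI bridge at the chosen computer (sysex)
        bridge.send(mido.Message("sysex", data=BRIDGE_SELECT[machine]))
        # Recall the matching preset on the MIDI patcher (program change)
        patcher.send(mido.Message("program_change",
                                  program=0 if machine == "A" else 1))
```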
One thing I'm seeing more of is Dante out of the computer instead of a conventional interface. I think eventually we'll see most interfaces (not only playback) replaced with networked audio. A design I saw for a new show going in uses Dante out via Dante Virtual Soundcard to a Ferrofish A32, using primary and secondary Dante networks not only for machine failover but for network failover as well. Another popular option is the Direct Out Prodigy line. It takes it a step further and can automatically switch inputs if it senses a disruption in the audio stream. There are myriad options available at all price points to implement a playback rig.
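The automatic switching is conceptually simple: watch the stream, and if it goes quiet for longer than it should, swap inputs. A toy sketch of that idea; the threshold and hold time are made-up numbers, and the real boxes do this in hardware/firmware rather than Python.

```python
# Sketch of the silence detection that drives automatic input failover.
# Threshold and hold time are illustrative values, not anything a
# Direct Out box actually uses.

import numpy as np

SILENCE_THRESHOLD = 1e-4    # linear RMS, roughly -80 dBFS
HOLD_BUFFERS = 50           # consecutive quiet buffers before switching

class FailoverDetector:
    def __init__(self):
        self.silent_count = 0
        self.active_input = "primary"

    def process(self, buffer: np.ndarray) -> str:
        """Feed one audio buffer; returns which input should be live."""
        rms = np.sqrt(np.mean(np.square(buffer)))
        if rms < SILENCE_THRESHOLD:
            self.silent_count += 1
        else:
            self.silent_count = 0
        if self.silent_count >= HOLD_BUFFERS and self.active_input == "primary":
            self.active_input = "backup"   # primary went quiet; fail over
        return self.active_input
```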
There has been specific marketing for playback systems and components as well as services. There are companies like Electronic Creatives where you can go for a soup to nuts integration for a playback system including content creation and operators. It’s like a sound company for playback. For acts with playback the implementation is as important as backline or audio. It’s the fifth Beatle. If you rent one or build one you’ll need some knowledge to interface the components and come up with a strategy/method to operate the system.
A lack of playback familiarity or configuration knowledge is common among many we interview. It's a part of the business that's only achieved scale in the last few years, hence the learning/adoption curve among many techs. Also, the fact that it's largely been operated by band members removes the audio techs from the picture. That's changing, though, as in some genres it's as prevalent as the monitor mixer, with as complex a mission.
Playback is no longer an outlier. In many genres it's expected and mandatory. When the kids ask what they should focus on as a specialty I offer playback in addition to my other go-tos, intercom and networking. It's not sexy like mixing (not yet anyway), but for every mixer working there are probably 10 techs working. The more you can differentiate yourself and offer value with a wide set of skills, the better you'll do in our craft. Playback is one way to do that.
Thanks for reading A Barking Dog! Subscribe for free to receive new posts and support my work.