There are purists out there who think computers ruined electronic music, made it cold and alien, removed the human element: the warm, warbling sounds of analog oscillators, the unpredictability of analog drum machines, synthesizers that go out of tune and have minds of their own. Musicians played those instruments, plugged and patched them together, tried their best to control them. They did not program them.
Then came digital samplers, MIDI, DAWs (digital audio workstations), pitch correction, time correction… every note, every arpeggio, every drum fill could be mapped in advance, executed perfectly, endlessly editable, and entirely played by machines.
All of this may have been true for a short period of time, when producers became so enamored of digital technology that it became a substitute for the old ways. But analog has come back in force, with both technologies now existing harmoniously in most electronic music, often within the same piece of gear.
Digital electronic music has virtues all its own, and the dizzying range of effects achievable with virtual components, when used judiciously, can lead to sublime results. But when it comes to another argument about the impact of computers on music made by humans, this conclusion isn’t so easy to draw. Rock and roll has always been powered by human error—indeed would never have existed without it. How can it be improved by digital tools designed to correct errors?
The ubiquitous sound of distortion, for example, first came from amplifiers and mixing boards pushed beyond their fragile limits. The best songs often seem to have mistakes built into their appeal. Take the opening bass notes of the Breeders' "Cannonball," mistakenly played in the wrong key: a zealous contemporary producer would not be able to resist running them through pitch-correction software.
John Bonham’s thundering drums, a force of nature caught on tape, feel “impatient, sterile and uninspired” when sliced up and snapped to a grid in Pro Tools, as producer and YouTuber Rick Beato has done (above) to prove his theory that computers ruined rock music. You could just write this off as an old man ranting about new sounds, but hear him out. Few people on the internet know more about recorded music or have more passion for sharing that knowledge.
In the video at the top, Beato makes his case for organic rock and roll: “human beings playing music that is not metronomic, or ‘quantized’”—the term for when computers splice and stretch acoustic sounds so that they align mathematically. Quantizing, Beato says, “is when you determine which rhythmic fluctuations in a particular instrument’s performance are imprecise or expressive, you cut them, and you snap them to the nearest grid point.” Overuse of the technology, which has become the norm, removes the “groove” or “feel” of the playing, the very imperfections that make it interesting and moving.
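Stripped of the studio jargon, the quantizing Beato describes is a simple snap-to-grid operation on note timings. The sketch below is purely illustrative (the function, BPM, and onset values are invented for this example, not any DAW's actual API), but it shows why the result is "metronomic": every subtly early or late hit gets rounded to the nearest grid point.

```python
def quantize(onsets, grid):
    """Snap each onset time (in seconds) to the nearest multiple of `grid`."""
    return [round(t / grid) * grid for t in onsets]

bpm = 120
sixteenth = 60 / bpm / 4           # one 16th note = 0.125 s at 120 BPM
played = [0.0, 0.13, 0.24, 0.38]   # a slightly "pushed" human performance

print(quantize(played, sixteenth))  # → [0.0, 0.125, 0.25, 0.375]
```

The tiny deviations (a few milliseconds ahead of or behind the beat) are exactly what players call "feel"; after the rounding step they are gone, and every hit lands at a mathematically exact position. Many DAWs soften this with a partial "strength" setting that only moves notes part of the way toward the grid, but as Beato argues, full-strength quantizing has become the norm.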
Beato’s thorough demonstration of how digital tools turn recorded music into modular furniture shows us how the production process has become a mental exercise, a design challenge, rather than the palpable, spontaneous output of living, breathing human bodies. The “present state of affairs,” as Nick Messitte puts it, is “keyboards triggering samples quantized to within an inch of their humanity by producers in the pre-production stages.” Anyone resisting this status quo becomes an acoustic musician by default, argues Messitte, standing on one side of the “acoustic versus synthetic” divide.
Whether the two modes of music can be harmoniously reconciled is up for debate, but at present, I’m inclined to agree with Beato: digital recording, processing, and editing technologies, for all their incredible convenience and unlimited capability, too easily turn rhythms made with the elastic timing of human hearts and hands into machinery. The effect is fatiguing and dull, and on the whole, rock records that lean on these techniques can't stand up to those made in previous decades or by the few holdouts who refuse to join the arms race for synthetic pop perfection.