Before Pandora’s personalized internet radio service launched in 2005, there was Savage Beast Technologies. From 2000 to 2004, the company focused on building out its music-recommendation technology, the Music Genome Project. Pioneered by founder Tim Westergren, the Music Genome Project had skilled human musicians listening to songs to uncover and annotate their musical attributes.
Remarkably, music analysts are still listening to songs today. Many aspects of how streaming music platforms provide song recommendations have shifted to computers and machine learning, but not all. There are still analysts doing the same work they have done for the last 20 years. We talked to several analysts, past and present, to get a sense of how they work today, 20 years after the Music Genome Project’s inception.
Human or Machine Recommendations?
“I think we fit in the way we always have, which is as a total anomaly,” says Pandora Music Analyst Scott Rosenberg. “As far as I know, we’re still the only service where the human component is still so central. At least at the scale we’re working at.
“Whenever we hear from people in the Music Science department—the folks working on machine learning—they are all so flipped out that they have this vast, multi-million song repository of human-generated analysis to draw on. There is no other set of musical analysis data that even remotely approaches what we’ve created.”