As easy and fun as it can be to romanticize our responses to art, we are not unique snowflakes. Streaming services such as Pandora and MOG already have sophisticated algorithms in place designed to predict what music we'll like, as the New Yorker's Sasha Frere-Jones described in a 2010 essay. And for years now, businesses from Amazon to UPS have been using algorithms for everything from purchase recommendations to scheduling deliveries, as the Economist reported way back in 2007. Increasingly, tech whizzes can predict what listeners will like, at least in theory.
A column over the weekend in the Guardian underscores just how far the process of turning human idiosyncrasies into data has already come. The story describes a contest called the Music Data Science Hackathon, where 138 different teams competed for a prize of £6,500 (about $10,000) to predict listeners' tastes based on data collected by music giant EMI. The goal: "Use this dataset to predict the rating someone would give a song based on their demographic, the artist and track ratings, their answers to questions about musical preferences and the words they use to describe EMI artists," as the Guardian reports.
Teams participating in the 24-hour competition, hosted by competitive data science firm Kaggle, had access to the results of 20-minute interviews with 800,000 music listeners from 25 countries. As the Guardian column puts it, listeners' "interests, attitudes, behaviors, and their familiarity and appreciation of music" all showed up in the data. Unfortunately, the artists were all kept anonymous, which (while understandable from EMI's point of view) seems like a missed opportunity, considering that whether you like, say, Lady Gaga or Arcade Fire, your opinions are probably based on more than simply what hits your eardrums.
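For the curious, the shape of the hackathon's task is easy to sketch: given survey-style features about a listener, fit a model that predicts the rating they'd give a track. The sketch below is a toy, not EMI's actual data or any competitor's method; the features, the "true" preference rule, and the synthetic ratings are all invented for illustration.

```python
# Toy sketch of the hackathon task: predict a listener's rating for a
# track from survey-style features. All data here is synthetic; EMI's
# real dataset included demographics, word associations, and more.
import numpy as np

rng = np.random.default_rng(0)

# Each row: [age, listening hours per week, likes this artist (0/1)]
X = rng.uniform([18, 0, 0], [70, 30, 1], size=(500, 3))
X[:, 2] = (X[:, 2] > 0.5).astype(float)

# A hidden "true" preference rule generates the ratings, plus noise --
# the model's job is to recover something close to it from data alone.
true_w = np.array([-0.2, 1.5, 30.0])
y = X @ true_w + 50 + rng.normal(0, 5, size=500)

# Fit a linear model by ordinary least squares (with an intercept)
Xb = np.hstack([X, np.ones((500, 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

# Root-mean-square error: how far predictions fall from actual ratings
rmse = np.sqrt(np.mean((Xb @ w - y) ** 2))
print(round(rmse, 1))
```

Real entries used far richer features (word associations, per-artist ratings) and fancier models, but the underlying setup is the same: features in, predicted rating out, scored by prediction error.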
Still, just crunching the data provided some intriguing results, according to the Guardian column. Women were, on average, more positive than men, and retired people were generally more positive than students and the unemployed. And a person's age or gender, despite being so crucial to music marketing, ended up not telling researchers much at all about what kind of music the person might like. What's more, people tended to use seemingly contradictory words, such as "noisy" and "uplifting," to describe the same song. Or a song that one person considered "superficial" would be "playful" to someone else.
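Findings like "women were, on average, more positive than men" come from straightforward aggregation over the survey responses. A minimal sketch of that kind of group-average computation, using invented ratings rather than anything from the EMI dataset:

```python
# Minimal sketch of the aggregation behind findings such as "women
# rated tracks more positively on average". The data is invented.
from collections import defaultdict

ratings = [
    ("female", 72), ("female", 65), ("female", 80),
    ("male",   60), ("male",   55), ("male",   70),
]

totals = defaultdict(lambda: [0, 0])   # group -> [sum, count]
for group, rating in ratings:
    totals[group][0] += rating
    totals[group][1] += 1

averages = {g: s / n for g, (s, n) in totals.items()}
print(averages)
```

The interesting twist in the Guardian's account is the negative result: slicing the data this way by age or gender told researchers surprisingly little about what music a person would actually like.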
It's also worth bearing in mind that people aren't always the best sources of information about their own habits. For instance, the market research firm Brain Juicer's blog recently responded to a widely publicized study on the effectiveness of Facebook advertisements by pointing out that most people tend to believe they're immune to advertising. Telling a researcher about our musical preferences is, clearly, different from talking about whether an ad has influenced us, but it's worth questioning how far to trust someone's opinion on something as changeable as musical taste. If I say I think a given track is "cool" today, will I necessarily still say the same thing by the time researchers get around to plugging in the numbers?
While the current study might only tell us that algorithms can correctly predict what people have previously said they like, it probably won't be long before highly accurate taste-predicting algorithms make it out of the lab. In fact, as Pandora and MOG show, to a certain extent they're already here. As with most technological innovations, the results will probably be neither good nor bad, but a combination of the two, in some way we're only now beginning to understand. (The steam drill may have killed John Henry, but it also helped usher in an industrial revolution.)