
How Spotify is working on deep learning to improve playlists

With its huge music library and recent acquisition of music-data specialist The Echo Nest, Spotify was already a force to be reckoned with in the streaming music space. Now, an intern at Spotify has published a blog post explaining his work to step up the company’s game even more by incorporating deep learning models to power better song recommendations.

The post’s author, Sander Dieleman, a Ph.D. student at Ghent University in Belgium, explains that the goal of his research was to make it easier for new or obscure songs to get included among listeners’ recommendations. Essentially, he wants to help listeners hear new songs by recommending tracks that sound like the songs they already like, instead of songs that other people with similar tastes also like.

Dieleman’s project at Spotify expands on a paper he and fellow Ghent researchers published in December.

The major problem with current recommendation systems is that they’re largely based on a technique called collaborative filtering. According to Dieleman:

The idea of collaborative filtering is to determine the users’ preferences from historical usage data. For example, if two users listen to largely the same set of songs, their tastes are probably similar. Conversely, if two songs are listened to by the same group of users, they probably sound similar. …

Unfortunately, this also turns out to be their biggest flaw. Because of their reliance on usage data, popular items will be much easier to recommend than unpopular items, as there is more usage data available for them.
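
To make that concrete, here is a minimal sketch of item-based collaborative filtering (a toy illustration, not Spotify’s system): songs are compared by how much their listener sets overlap, which is exactly why a song nobody has played yet can never be recommended.

```python
import numpy as np

# Toy play matrix: rows are users, columns are songs.
# plays[u, s] = 1 if user u has listened to song s. (Made-up data.)
plays = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
])

# Item-item cosine similarity: two songs are similar if they are
# listened to by largely the same set of users.
norms = np.linalg.norm(plays, axis=0)
similarity = (plays.T @ plays) / np.outer(norms, norms)

# Recommend the song most similar to song 0, excluding song 0 itself.
scores = similarity[0].copy()
scores[0] = -1.0
print("Most similar to song 0:", int(np.argmax(scores)))

# The flaw Dieleman points out: a brand-new song has an all-zero
# column, so its similarity to everything is zero and it never surfaces.
```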

If you were wondering what a visualization of a neural network layer looks like, this is it. Dieleman writes that the horizontal axis is time, while the vertical axis is frequency. “From this representation,” he adds, “we can see that a lot of the filters pick up harmonic content, which manifests itself as parallel red and blue bands at different frequencies.”
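
For the curious, images like this are typically made by plotting each first-layer filter’s weights as a small time-frequency heatmap with a diverging colormap, so positive weights show up red and negative ones blue. The sketch below uses random weights in place of a trained network, purely to show the mechanics.

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in for trained first-layer weights of an audio convnet:
# 16 filters, each spanning 40 frequency bins by 8 time frames.
rng = np.random.default_rng(0)
filters = rng.normal(size=(16, 40, 8))

fig, axes = plt.subplots(2, 8, figsize=(12, 4))
for ax, w in zip(axes.ravel(), filters):
    # Horizontal axis: time; vertical axis: frequency. In a trained
    # network, harmonic filters appear as parallel horizontal bands.
    ax.imshow(w, aspect="auto", origin="lower", cmap="bwr")
    ax.set_xticks([])
    ax.set_yticks([])
plt.tight_layout()
plt.show()
```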

Dieleman’s deep learning system, which he explains in some detail in the post, analyzed thirty-second samples from about 500,000 of the million most popular songs on Spotify in order to learn their acoustic features. He used the rest of the million songs to test the system.
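
The December paper spells out the recipe behind this: a convolutional network takes a mel-spectrogram of a short audio excerpt and learns to predict the song’s latent factors from collaborative filtering, so a brand-new song can be placed in the same space from audio alone. Here is a minimal PyTorch sketch of that setup; the layer sizes, the 40-factor output and the random tensors are illustrative assumptions, not Spotify’s actual architecture.

```python
import torch
import torch.nn as nn

class AudioToLatent(nn.Module):
    """Map a mel-spectrogram to collaborative-filtering latent factors.

    Illustrative sizes: 128 mel bands x 599 frames (roughly 30 seconds
    of audio) in, 40 latent factors out.
    """
    def __init__(self, n_mels=128, n_factors=40):
        super().__init__()
        # Convolve along time only, treating mel bands as input channels.
        self.conv = nn.Sequential(
            nn.Conv1d(n_mels, 256, kernel_size=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(256, 256, kernel_size=4), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),  # global max pooling over time
        )
        self.head = nn.Linear(256, n_factors)

    def forward(self, spec):             # spec: (batch, n_mels, frames)
        h = self.conv(spec).squeeze(-1)  # (batch, 256)
        return self.head(h)              # predicted latent factors

model = AudioToLatent()
spec = torch.randn(2, 128, 599)    # two fake 30-second excerpts
targets = torch.randn(2, 40)       # factors from collaborative filtering
loss = nn.functional.mse_loss(model(spec), targets)
loss.backward()                    # train to mimic usage-based factors
print(float(loss))
```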

What’s really cool about the blog post is that Dieleman includes sample playlists of songs that strongly activate particular filters among the neural network’s 256 low-level filters (e.g., bass drums, vocal thirds or chords). He also includes high-level-feature playlists (which combine results for all the low-level features and, in some cases, pretty much group songs by genre) and soundalike playlists for specific songs.

[Embedded Spotify playlist: spotify:user:sander_dieleman:playlist:6K9Df3nXsZVftKmYliUcIS]
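
One way to imagine how such a filter playlist is assembled (a sketch over made-up data, not Dieleman’s code): average each filter’s activation over every song in the catalog, then take the songs where one chosen filter fires hardest.

```python
import numpy as np

# Hypothetical data: the mean activation of each of the 256 low-level
# filters across each song's spectrogram, for a 10,000-song catalog.
rng = np.random.default_rng(1)
activations = rng.random((10_000, 256))

def playlist_for_filter(filter_idx, k=10):
    """Return the k songs that most strongly activate one filter,
    e.g. a hypothetical 'bass drum' or 'vocal thirds' filter."""
    return np.argsort(activations[:, filter_idx])[::-1][:k]

print(playlist_for_filter(42))
```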

Although his ultimate goal with this approach is to help listeners uncover new artists and songs by incorporating the results of the model into recommendation algorithms, Dieleman notes that it also has other uses. Those include filtering outliers (songs with little acoustic similarity, presumably) in existing recommendation algorithms, and filtering out intro and outro tracks and cover songs.
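
The outlier-filtering idea is straightforward to picture: if every song has a learned acoustic vector, recommendations whose vectors sit far from the rest of a list can be dropped. A minimal sketch; the embeddings and the threshold below are hypothetical.

```python
import numpy as np

def filter_outliers(embeddings, threshold=0.3):
    """Keep recommendations whose acoustic embedding is close (by
    cosine similarity) to the centroid of the list; drop the rest.
    The 0.3 cutoff is an arbitrary placeholder."""
    vecs = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    centroid = vecs.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    return vecs @ centroid >= threshold

# Hypothetical 40-dimensional embeddings for 20 recommended songs.
rng = np.random.default_rng(2)
embeddings = rng.normal(size=(20, 40))
keep = filter_outliers(embeddings)
print(f"kept {keep.sum()} of {len(keep)} recommendations")
```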

It seems likely that other media properties, including Netflix, are also hoping to achieve similar things with their own recommendation engines using deep learning. There is a lot of latent information in media content beyond just what users have liked, what genre things fall into and what other people are watching. Deep learning might not revolutionize recommendations, but in a digital world where they matter so much, every little bit helps.