Development/Summer of Code/2016/AcousticBrainz
Proposed mentor: ruaok or alastairp
Languages/skills: Python, Postgres, Flask
Forum for discussion
If you want to work on AcousticBrainz, you should show that you are able to set up the server software and that you understand how some of the infrastructure works. Here are some things that we might ask you about:
- Install the server on your computer or use the Vagrant setup scripts to build a virtual machine
- Download the AcousticBrainz submission tool and configure it to compute features for some of your audio files and submit them to the local server that you configured
- Use your preferred programming language to access the API to download the data that you submitted to your server, or other data from the main AcousticBrainz server
- Create an OAuth application on the MusicBrainz website and add the configuration information to your AcousticBrainz server. Use this to log in to your server with your MusicBrainz details
- Look at the system to build a dataset (accessible from your profile page on the AcousticBrainz server) and try to build a simple dataset
- Look at the list of tickets that we have open for AcousticBrainz and see if you understand what some of them mean. Feel free to ask questions - some ticket descriptions don't have much detail
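The API step above can be sketched with the standard library alone. The `/api/v1/<mbid>/low-level` endpoint is the documented public one; the MBID below is a placeholder, not a real recording, and the localhost port is an assumption about your local setup.

```python
import json
import urllib.request


def lowlevel_url(mbid, host="https://acousticbrainz.org"):
    """Build the URL for a recording's low-level data document."""
    return "%s/api/v1/%s/low-level" % (host, mbid)


def fetch_low_level(mbid, host="https://acousticbrainz.org"):
    """Download and parse the low-level JSON document for one MBID."""
    with urllib.request.urlopen(lowlevel_url(mbid, host)) as response:
        return json.load(response)


# Point host at your own server (e.g. http://localhost:8080) to query
# the data you submitted locally instead of the main database.
print(lowlevel_url("00000000-0000-0000-0000-000000000000"))
```

Call `fetch_low_level()` with a real MBID once your server (or the main one) is reachable.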
This page describes ideas that we've had for the AcousticBrainz project. If you are interested in working on them for Summer of Code, or as part of the MusicBrainz project, contact us through the MusicBrainz IRC channels. If you want to explore this data in an academic context, talk to the Music Technology Group.
An interactive system to explore the data that we already have in AcousticBrainz. For example: what are all of the songs that we say are in a certain key? Order these by tempo and then group them by mood.
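The filter/sort/group example above can be prototyped in a few lines. The records and field names here are toy stand-ins, not the real AcousticBrainz schema:

```python
from itertools import groupby

# Toy rows standing in for per-track AcousticBrainz summaries.
tracks = [
    {"mbid": "t1", "key": "C", "bpm": 128.0, "mood": "happy"},
    {"mbid": "t2", "key": "C", "bpm": 90.0,  "mood": "sad"},
    {"mbid": "t3", "key": "A", "bpm": 100.0, "mood": "happy"},
    {"mbid": "t4", "key": "C", "bpm": 140.0, "mood": "happy"},
]


def explore(tracks, key):
    """All tracks in `key`, ordered by tempo, then grouped by mood."""
    in_key = sorted((t for t in tracks if t["key"] == key),
                    key=lambda t: t["bpm"])
    # sorted() is stable, so tempo order survives within each mood group.
    by_mood = sorted(in_key, key=lambda t: t["mood"])
    return {mood: [t["mbid"] for t in group]
            for mood, group in groupby(by_mood, key=lambda t: t["mood"])}


print(explore(tracks, "C"))  # {'happy': ['t1', 't4'], 'sad': ['t2']}
```

A real version would run such queries against Postgres (or a search index) rather than in-memory lists.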
A search system (which could be part of the above task) that lets you search for tracks by their metadata or by extracted features. This could use an existing search technology (e.g. Solr), or something custom-written for the task. A similar task would be to be able to place songs in an n-dimensional similarity space to explore songs that are acoustically similar.
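The n-dimensional similarity-space idea reduces to a distance function over feature vectors. A minimal sketch, assuming each track is summarised as a small fixed-length vector (the three dimensions below are purely illustrative):

```python
import math


def distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def nearest(query, catalogue):
    """Return the id of the catalogue track closest to `query`."""
    return min(catalogue, key=lambda tid: distance(query, catalogue[tid]))


# Hypothetical normalised 3-dimensional feature vectors.
catalogue = {
    "t1": [0.10, 0.90, 0.40],
    "t2": [0.80, 0.20, 0.50],
    "t3": [0.15, 0.85, 0.35],
}
print(nearest([0.14, 0.86, 0.36], catalogue))  # prints "t3"
```

Brute force like this is fine for a prototype; at three million tracks you would need an index (Gaia, Solr, approximate nearest neighbours, etc.).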
An investigation of the accuracy of AcousticBrainz compared to other music databases. For example, MusicBrainz has many tags which represent genres. This information is also available from services like Last.fm. Lower-level information such as key and bpm is available from services such as the Echo Nest.
Investigate content-based similarity
In the Freesound project we use Essentia and Gaia, two of the main components of AcousticBrainz, to compute the acoustic similarity between sound samples. We want to do something similar with AcousticBrainz. Some questions to be answered in this project are:
- Can Gaia compute similarity across all 3 million tracks in the AB database, or do we need another technology such as Solr?
- Are duplicate submissions of the same song using different codecs very similar? If not, why not? Can we use this similarity to discover songs with incorrectly tagged MBIDs or the same song with two different MBIDs?
- Are there some songs which act as "hub songs" - that is, songs which are similar to many other songs?
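The hub-song question above can be made concrete by counting how often each track appears in other tracks' k-nearest-neighbour lists. A brute-force sketch over toy one-dimensional feature vectors (real features would be high-dimensional and need an index):

```python
import math
from collections import Counter


def knn(features, k=2):
    """k nearest neighbours of every track, by Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    out = {}
    for tid, vec in features.items():
        others = [o for o in features if o != tid]
        out[tid] = sorted(others, key=lambda o: dist(vec, features[o]))[:k]
    return out


def hubness(features, k=2):
    """Count how often each track appears in other tracks' k-NN lists.
    Tracks with unusually high counts are candidate "hub songs"."""
    counts = Counter()
    for neighbours in knn(features, k).values():
        counts.update(neighbours)
    return counts


features = {"a": [0.0], "b": [0.1], "c": [0.25], "d": [5.0]}
print(hubness(features, k=1))  # "b" is the nearest neighbour of both a and c
```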
Adding content with no MBIDs to AcousticBrainz
There is a lot of audio content around which doesn't have MusicBrainz IDs (MBIDs). While we would like people to add their audio to MusicBrainz and then tag the files, this isn't always possible, and as a result we end up missing a lot of data. One example of data which we might want to accept is the Live Music Archive; other projects have already analysed it (e.g. CALMA - computational analysis of the Live Music Archive). Many research projects use 30-second samples, and these datasets are easy to find. For completeness we could also accept these samples and build datasets for comparative analysis between AcousticBrainz and other research. We want to consider accepting data with just a minimum number of tags - perhaps an artist name and a track name. We could use MessyBrainz to generate temporary UUIDs for these items, and then try to match as many items as possible to MusicBrainz at a later stage.
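One possible shape for the temporary-ID idea - an assumption for illustration, not how MessyBrainz actually assigns IDs - is a deterministic UUID derived from normalised artist and track names, so the same untagged submission always maps to the same placeholder:

```python
import uuid

# Hypothetical namespace for unmatched submissions.
MESSY_NS = uuid.uuid5(uuid.NAMESPACE_URL, "https://messybrainz.org")


def temporary_id(artist, track):
    """Deterministic placeholder UUID for an (artist, track) pair.
    Normalising case and whitespace means trivially different tags
    still collapse to one temporary ID."""
    key = "%s\n%s" % (artist.strip().lower(), track.strip().lower())
    return uuid.uuid5(MESSY_NS, key)
```

Being deterministic, these IDs can later be batch-matched against MusicBrainz and replaced with real MBIDs.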
An automatically updating statistics page, containing data about our submissions:
- Formats, year, reported genre, other tags (mood)?
- Results of all classifier models
- BPM analysis
- Compare audio content MD5 hashes (md5_encoded) with MBIDs
- Descriptor search, perhaps using Elasticsearch
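The md5_encoded/MBID comparison above could start as simply as grouping submissions by audio hash and flagging hashes submitted under more than one MBID - likely mistagged files. The field names follow the low-level document; the rest is a sketch:

```python
from collections import defaultdict


def conflicting_mbids(submissions):
    """Map each audio MD5 that was submitted under more than one MBID
    to the set of MBIDs involved: candidates for mistagged files."""
    by_md5 = defaultdict(set)
    for sub in submissions:
        by_md5[sub["md5_encoded"]].add(sub["mbid"])
    return {md5: mbids for md5, mbids in by_md5.items() if len(mbids) > 1}


subs = [
    {"md5_encoded": "m1", "mbid": "id-a"},
    {"md5_encoded": "m1", "mbid": "id-b"},  # same audio, different MBID
    {"md5_encoded": "m2", "mbid": "id-c"},
]
print(conflicting_mbids(subs))  # flags only "m1"
```

In production this would be a single GROUP BY/HAVING query in Postgres rather than Python.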
Visualize AB data - either a sub-dataset/list or all data in AB
- distribution plots for all low-level descriptors
- functionality to find and browse outliers (or any other segment of the distribution) via Elasticsearch
- expectedness of features for each particular track (paper: "Corpus Analysis Tools for Computational Hook Discovery" by Jan Van Balen)
2D visual maps
- "Improving Visualization of High-Dimensional Music Similarity Spaces" (Flexer)
- can be used for visualizing AB datasets in 2d
- 2D maps with t-distributed Stochastic Neighbor Embedding (t-SNE; the paper also discusses other approaches) with shared nearest neighbor distance normalization (to counteract hubs)
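Shared-nearest-neighbor normalization replaces raw distances with the overlap between k-NN lists, which dampens the influence of hub tracks. A minimal sketch over precomputed neighbor lists (the lists below are toy data):

```python
def snn_similarity(knn_lists, a, b):
    """Shared-nearest-neighbor similarity: the overlap between the
    k-NN lists of tracks a and b, normalised to [0, 1]."""
    na, nb = set(knn_lists[a]), set(knn_lists[b])
    k = max(len(na), len(nb))
    return len(na & nb) / k if k else 0.0


# Toy precomputed neighbour lists for three tracks.
knn_lists = {
    "a": ["b", "c", "d"],
    "b": ["a", "c", "e"],
    "c": ["a", "b"],
}
print(snn_similarity(knn_lists, "a", "b"))  # they share one of three neighbours
```

Feeding 1 - SNN similarity into t-SNE as the distance, instead of raw Euclidean distance, is one way to apply the hub-reduction idea from the paper.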
Spot the odd song out
Spot the Odd Song Out is a research project by Daniel Wolff. In it, he gets people to listen to songs and say how similar they are. With this information, he is able to build models that predict how similar two other songs are. Part of this project is a web-based game which presents songs to people and asks them to say whether the songs are similar. We are interested in building a similar game to collect song similarity data.