Development/Summer of Code/2021/AcousticBrainz

From MusicBrainz Wiki

Proposed mentors: ruaok or alastairp
Languages/skills: Python, Postgres, Flask
Forum for discussion

Getting started

(see also: GSoC - Getting started)

If you want to work on AcousticBrainz you should show that you are able to set up the server software and understand how some of the infrastructure works. Here are some things that you could do to get familiar with the AcousticBrainz project and code:

  • Install the server on your computer or use the Vagrant setup scripts to build a virtual machine
  • Download the AcousticBrainz submission tool and configure it to compute features for some of your audio files and submit them to the local server that you configured
  • Use your preferred programming language to access the API to download the data that you submitted to your server, or other data from the main AcousticBrainz server
  • Create an oauth application on the MusicBrainz website and add the configuration information to your AcousticBrainz server. Use this to log in to your server with your MusicBrainz details
  • Look at the system to build a Dataset (accessible from your profile page on the AcousticBrainz server) and try to build a simple dataset
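As a starting point for the API step above, here is a minimal Python sketch of fetching a low-level feature document. It assumes the /api/v1/<mbid>/low-level endpoint documented on acousticbrainz.org; change API_ROOT to your local server (e.g. http://localhost:8080) to query your own submissions instead.

```python
import json
import urllib.request

API_ROOT = "https://acousticbrainz.org"  # or your local server, e.g. "http://localhost:8080"

def lowlevel_url(mbid: str, root: str = API_ROOT) -> str:
    """Build the URL of the low-level feature document for a recording MBID."""
    return f"{root}/api/v1/{mbid}/low-level"

def fetch_lowlevel(mbid: str, root: str = API_ROOT) -> dict:
    """Download and parse the low-level JSON document for one recording."""
    with urllib.request.urlopen(lowlevel_url(mbid, root)) as resp:
        return json.load(resp)

# Example (performs a network request when uncommented):
# data = fetch_lowlevel("<recording-mbid>")
# print(data["metadata"]["tags"])
```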

Join in on development

We like it when potential students show initiative and make contributions to code without asking us what to do next. We have tagged tickets that we think are suitable for new contributors with the "good-first-bug" label. Take a look at these tickets and see if any of them grab your interest. It's a good idea to talk to us before starting work on a ticket, to make sure that you understand what tasks are involved in finishing the ticket, and to make sure that you're not duplicating any work which has already been done. To talk to us, join our IRC channel or post a message in the forums or on a ticket.

Ideas

Here are some ideas for projects that we would like to complete in AcousticBrainz in the near future. They are a good size for a Summer of Code project, but are in no way a complete list of possible ideas. If you have other ideas that you think might be interesting for the project, join us in IRC and talk to us about them.

Statistics and data description

We have a lot of data in AcousticBrainz, but we don't know much about what this data looks like. This task involves looking at the data that we have and finding interesting ways to show it to visitors to the AB website. Part of the proposal for this task would be to look at and understand the data and come up with a list of recommended visualisations/descriptions. For many of the types of statistics that we want to show, it is infeasible to compute the data on every page load; part of this task is therefore to design an appropriate caching system.

Here are a few ideas for statistics that we have thought of so far:

Automatic updating statistics page, containing data about our submissions:

  • Formats, year, reported genre, other tags (mood)?
  • BPM analysis
  • Compare audio content md5_encoded with mbids
  • Use the musicbrainz mbid redirect tables to find more duplicates
  • Lists of artists + albums/recordings for each artist
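A minimal sketch of the caching idea behind these statistics: expensive site-wide aggregates (a BPM histogram, say) are recomputed at most once per interval rather than on every page load. compute_bpm_histogram and its numbers are hypothetical stand-ins for a real Postgres aggregate query, and a production deployment would more likely use an external cache than this in-process dict.

```python
import time

_cache: dict = {}
TTL = 3600  # recompute a statistic at most once per hour

def cached(key, compute, ttl=TTL, now=time.time):
    """Return the cached value for key, recomputing via compute() once ttl expires."""
    entry = _cache.get(key)
    t = now()
    if entry is None or t - entry[0] > ttl:
        _cache[key] = (t, compute())
    return _cache[key][1]

def compute_bpm_histogram():
    # Placeholder for an expensive query, e.g. a width_bucket() aggregate over bpm.
    return {"60-90": 1200, "90-120": 5400, "120-150": 3100}

hist = cached("bpm_histogram", compute_bpm_histogram)
```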

Visualize AB data - either a sub-dataset/list or all data in AB

  • distribution plots for all low-level descriptors
  • expectedness of features for each particular track (paper: Corpus Analysis Tools for Computational Hook Discovery by Jan Van Balen)

2D visual maps

  • Improving visualization of high-dimensional music similarity spaces (Flexter)
  • 2D maps with t-distributed Stochastic Neighbor Embedding (t-SNE, but there are other approaches in the paper) with shared nearest neighbor distance normalization (to guard against hubs)
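A minimal sketch of the t-SNE projection step, using scikit-learn (which AcousticBrainz already integrates). The shared nearest neighbor normalization mentioned above is omitted here, and the perplexity value is just a typical default.

```python
import numpy as np
from sklearn.manifold import TSNE

def project_2d(features: np.ndarray, perplexity: float = 30.0) -> np.ndarray:
    """Project high-dimensional feature vectors down to 2D for a visual map."""
    tsne = TSNE(n_components=2, perplexity=perplexity, init="pca", random_state=0)
    return tsne.fit_transform(features)
```

Each row of the result is an (x, y) point that can be plotted, coloured by genre, artist, or any other tag of interest.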


Machine learning feature temporary storage and evaluation

We have a machine learning process that takes new data submissions and combines them with a set of models to produce new features. We also have a system where we can produce new datasets and build models for new tasks. We currently don't have a way of promoting a new model into production, and we want to add this functionality. This has a few steps:

  • Use the new model on a significant subset of the AB database in order to verify that the results look good
  • Optionally: provide a way for an evaluation of this computation to ensure that the model works at a large scale
  • Once the model has been approved, integrate it into the production system, and compute features for all existing submissions, before computing data again as new items are submitted to AcousticBrainz
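The promotion gate in the steps above could look something like the following sketch. The model's predict method, the labelled submissions list, and the 0.8 threshold are all hypothetical; the real evaluation would run against a significant subset of the AB database.

```python
import random

def evaluate_candidate(model, submissions, sample_size=1000, seed=0):
    """Accuracy of model.predict on a random subset of (features, label) pairs."""
    rng = random.Random(seed)
    sample = rng.sample(submissions, min(sample_size, len(submissions)))
    correct = sum(model.predict(feats) == label for feats, label in sample)
    return correct / len(sample)

def promote_if_good(model, submissions, threshold=0.8):
    """Approve the candidate model for production only if it clears the threshold."""
    return evaluate_candidate(model, submissions) >= threshold
```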


Tensorflow-based transfer learning

In 2020 we had a Summer of Code project to integrate scikit-learn into AcousticBrainz. This gave us an up-to-date tool for performing machine learning, but there has also been a lot of work in music analysis using deep learning techniques. One very useful technique for performing analysis on content like what we have in AcousticBrainz is transfer learning, where a model is built using a large general dataset, and then it is refined a second time using a more specific dataset. An example of this type of process can be found at https://github.com/jordipons/sklearn-audio-transfer-learning. We would like to extend the existing machine learning system to support these transfer learning-based processes.
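The split that transfer learning relies on can be sketched as follows: a pretrained network turns audio into fixed-size embeddings, and a lightweight scikit-learn classifier is then trained on those embeddings for the task-specific dataset. Here fake_embed is a toy placeholder, not a real pretrained Tensorflow model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fake_embed(audio: np.ndarray) -> np.ndarray:
    """Toy stand-in for a pretrained embedding network (real one: a deep model)."""
    return np.array([audio.mean(), audio.std(), audio.max(), audio.min()])

def train_on_embeddings(audios, labels):
    """Train a lightweight classifier on the fixed-size embeddings."""
    X = np.stack([fake_embed(a) for a in audios])
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, labels)
    return clf
```

Only the small classifier is retrained per task; the expensive embedding network is reused as-is, which is what makes the approach attractive for AcousticBrainz-scale data.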


Identifying bad data submissions

AcousticBrainz accepts submissions from anyone. Submissions are identified by their recording MBID, but sometimes this value is incorrect. As a result, for some MBIDs we have hundreds of duplicate submissions, and we know that some of them have been tagged incorrectly. As part of our 2019 Summer of Code project we built a nearest neighbor system which allows us to group data in an n-dimensional space. We want to use this system to cluster the submissions which share an MBID and see if some submissions are consistently identified as outliers. These outliers can be marked so that they're not given to users or used in other machine learning tasks.
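A toy illustration of the outlier idea: among duplicate submissions for one MBID, flag feature vectors that sit far from the centroid of the group. The real system would use the nearest neighbor index from the 2019 project; this plain numpy stand-in and the 2.5 z-score threshold are illustrative only.

```python
import numpy as np

def flag_outliers(features: np.ndarray, z_threshold: float = 2.5) -> np.ndarray:
    """Boolean mask of submissions unusually far from their group's centroid."""
    centroid = features.mean(axis=0)
    dists = np.linalg.norm(features - centroid, axis=1)
    z = (dists - dists.mean()) / (dists.std() or 1.0)  # guard against zero spread
    return z > z_threshold
```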


Further analysis on the quality of descriptors in AcousticBrainz

At one of the top conferences on music data analysis, a paper was presented analysing the quality of the data in the AcousticBrainz dataset: https://program.ismir2020.net/static/final_papers/137.pdf. This is some really interesting preliminary work which would be great to continue. It could help us identify submissions, or categories of submissions, that don't provide good data in AcousticBrainz and should be removed.