AcousticBrainz/Ideas

From MusicBrainz Wiki
Revision as of 10:54, 22 February 2016

This page describes ideas we've had for the AcousticBrainz project. If you are interested in working on them for Summer of Code, or as part of the MusicBrainz project, contact us through the MusicBrainz IRC channels. If you want to explore this data in an academic context, talk to the Music Technology Group.

Data exploration

An interactive system to explore the data that we already have in AcousticBrainz. For example: find all of the songs that we say are in a certain key, order them by tempo, and then group them by mood.
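As a sketch of what such exploration could look like, here is a minimal Python example over hypothetical track records. The field names `key`, `bpm` and `mood`, and all the values, are placeholders standing in for AcousticBrainz low-level and high-level data, not the real document schema:

```python
# Hypothetical track records mirroring a few AcousticBrainz fields:
# tonal key from the low-level data, BPM, and a mood classifier label.
tracks = [
    {"mbid": "a1", "key": "C major", "bpm": 128.0, "mood": "happy"},
    {"mbid": "b2", "key": "C major", "bpm": 92.5,  "mood": "sad"},
    {"mbid": "c3", "key": "A minor", "bpm": 140.0, "mood": "happy"},
    {"mbid": "d4", "key": "C major", "bpm": 101.0, "mood": "happy"},
]

def explore(tracks, key):
    """All tracks in `key`, ordered by tempo, then grouped by mood."""
    in_key = sorted((t for t in tracks if t["key"] == key),
                    key=lambda t: t["bpm"])
    # group by mood, preserving the tempo ordering inside each group
    by_mood = {}
    for t in in_key:
        by_mood.setdefault(t["mood"], []).append(t["mbid"])
    return by_mood

print(explore(tracks, "C major"))  # {'sad': ['b2'], 'happy': ['d4', 'a1']}
```

A real system would run these filters against the database (or a search index) rather than in memory, but the query shape is the same.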

Search

A search system (which could be part of the above task) that lets you search for tracks by their metadata or by extracted features. This could use an existing search technology (e.g. Solr) or something custom-written for the task. A related task would be placing songs in an n-dimensional similarity space so that acoustically similar songs can be explored.
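A brute-force version of the "n-dimensional similarity space" idea can be sketched in a few lines. The feature vectors below are made-up placeholders; a real deployment over millions of tracks would need an index (Gaia, Solr, or an approximate-nearest-neighbor library) rather than a linear scan:

```python
import math

# Hypothetical 3-dimensional feature vectors (e.g. scaled BPM,
# danceability, average loudness) keyed by MBID -- placeholder
# values, not real AcousticBrainz data.
features = {
    "a1": [0.82, 0.64, 0.31],
    "b2": [0.80, 0.60, 0.35],
    "c3": [0.10, 0.95, 0.70],
    "d4": [0.79, 0.66, 0.30],
}

def nearest(query_mbid, k=2):
    """The k nearest tracks to `query_mbid` by Euclidean distance."""
    q = features[query_mbid]
    dists = [
        (math.dist(q, v), mbid)
        for mbid, v in features.items() if mbid != query_mbid
    ]
    return [mbid for _, mbid in sorted(dists)[:k]]

print(nearest("a1"))  # ['d4', 'b2']
```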

Data accuracy

An investigation of the accuracy of AcousticBrainz compared to other music databases. For example, MusicBrainz has many tags which represent genres. This information is also available from services like Last.fm. Lower-level information such as key and BPM is available from services such as the Echo Nest.
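For numeric descriptors like BPM, the comparison needs some care: tempo estimators from different services often disagree by exactly a factor of two. A sketch of an agreement check (the tolerance value is an arbitrary assumption, not a published figure):

```python
def bpm_agrees(a, b, tol=0.04):
    """True if two BPM estimates agree within `tol` relative error,
    also accepting the common double/half-tempo confusion."""
    for factor in (1.0, 2.0, 0.5):
        if abs(a * factor - b) / b <= tol:
            return True
    return False

print(bpm_agrees(120.0, 121.0))   # near-identical estimates: True
print(bpm_agrees(60.0, 120.0))    # half-tempo confusion, counts as agreement: True
print(bpm_agrees(100.0, 133.0))   # genuine disagreement: False
```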

Investigate content-based similarity

In the [http://freesound.org/ Freesound project] we use Essentia and Gaia, two of the main components of AcousticBrainz, to compute acoustic similarity between sound samples. We want to do something similar with AcousticBrainz. Some questions to be answered in this project are:

  • Can Gaia compute similarity between all 3 million tracks in the AB database, or do we need another technology such as Solr?
  • Are duplicate submissions of the same song using different codecs very similar? If not, why not? Can we use this similarity to discover songs with incorrectly tagged MBIDs, or the same song with two different MBIDs?
  • Are there some songs which act as "hub songs", that is, songs which are similar to many other songs?
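The hub-song question can be made concrete by counting k-occurrences: how often a track shows up in other tracks' nearest-neighbor lists. A toy sketch with made-up 2-D vectors (real AB descriptors are much higher-dimensional):

```python
import math

# Hypothetical 2-D feature vectors; the hub-detection logic, not the
# data, is the point of this sketch.
features = {
    "hub": [0.5, 0.5],
    "a":   [0.4, 0.5],
    "b":   [0.6, 0.5],
    "c":   [0.5, 0.6],
    "d":   [0.0, 0.0],
}

def k_occurrence(features, k=1):
    """How often each track appears in another track's k nearest
    neighbors; unusually high counts indicate hub songs."""
    counts = {m: 0 for m in features}
    for m, v in features.items():
        others = sorted(
            (math.dist(v, w), n) for n, w in features.items() if n != m
        )
        for _, n in others[:k]:
            counts[n] += 1
    return counts

print(k_occurrence(features))  # {'hub': 3, 'a': 2, 'b': 0, 'c': 0, 'd': 0}
```

Here "hub" is the nearest neighbor of three other tracks, while "b", "c" and "d" never appear in anyone's list, which is the skewed distribution that makes hubs a problem for similarity search.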

Adding content with no MBIDs to AcousticBrainz

There is a lot of audio content around which doesn't have MusicBrainz IDs. While we would like people to add their audio to MusicBrainz and then tag the files, this isn't always possible. As a result, we end up missing a lot of data.

One example of data we might want to accept is the [http://etree.linkedmusic.org/about/ Live music archive]. Other projects have analysed this (e.g. [http://etree.linkedmusic.org/about/calma.html CALMA - computational analysis of the live music archive]). Many research projects use 30-second samples. These datasets are easy to find. For completeness we could also accept these samples and build datasets for comparative analysis between AcousticBrainz and other research.

We want to consider accepting data with just a minimum number of tags, perhaps an artist name and a track name. We could use MessyBrainz to generate temporary UUIDs for these items, and then try to match as many items as possible to MusicBrainz at a later stage.
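One way temporary IDs could work is deriving a deterministic UUID from the minimal metadata, so repeated submissions of the same artist/track pair map to the same ID. This is only an illustration: the namespace URL and the normalisation rules below are assumptions, not MessyBrainz's actual scheme.

```python
import uuid

# Hypothetical namespace for temporary IDs; MessyBrainz's real ID
# scheme may differ -- this only shows how a stable UUID can be
# derived from minimal metadata.
TEMP_NAMESPACE = uuid.uuid5(uuid.NAMESPACE_URL, "https://messybrainz.org/temp")

def temporary_id(artist, track):
    """Stable UUID for an (artist, track) pair that has no MBID yet."""
    key = f"{artist.strip().lower()}\n{track.strip().lower()}"
    return uuid.uuid5(TEMP_NAMESPACE, key)

a = temporary_id("Grateful Dead", "Ripple")
b = temporary_id("grateful dead ", "RIPPLE")
print(a == b)  # same normalised metadata -> same temporary ID: True
```

Deterministic IDs make later matching easier: once an (artist, track) pair is resolved to a real MBID, every earlier submission with the same temporary UUID can be relinked in one step.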

Data description

An automatically updating statistics page, containing data about our submissions:

  • Formats, year, reported genre, other tags (mood)?
  • Results of all classifier models
  • BPM analysis
  • Compare audio content MD5 checksums (the md5_encoded field) with MBIDs
  • Descriptor search, perhaps using Elasticsearch
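The statistics and the md5_encoded/MBID comparison could both be simple aggregations. A sketch over hypothetical submission records (field names simplified from the real AcousticBrainz documents; the values are invented):

```python
from collections import Counter, defaultdict

# Hypothetical submission metadata standing in for the fields the
# real AcousticBrainz documents carry.
submissions = [
    {"mbid": "a1", "format": "mp3",  "md5": "f00d"},
    {"mbid": "a1", "format": "flac", "md5": "beef"},
    {"mbid": "b2", "format": "mp3",  "md5": "f00d"},
]

# Per-format submission counts for the statistics page.
format_counts = Counter(s["format"] for s in submissions)

# Group MBIDs by audio MD5: one checksum appearing under several
# MBIDs suggests the same audio file was tagged with different IDs.
mbids_by_md5 = defaultdict(set)
for s in submissions:
    mbids_by_md5[s["md5"]].add(s["mbid"])
suspect = {md5: ids for md5, ids in mbids_by_md5.items() if len(ids) > 1}

print(format_counts)  # Counter({'mp3': 2, 'flac': 1})
print(suspect)        # {'f00d': {'a1', 'b2'}}
```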

Visualize AB data, either a sub-dataset/list or all data in AB:

  • distribution plots for all low-level descriptors
  • functionality to find and browse outliers (or any other segment of the distribution), via Elasticsearch
  • expectedness of features for each particular track (paper: "Corpus Analysis Tools for Computational Hook Discovery" by Jan Van Balen)
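Browsing outliers presupposes an outlier definition; a simple one is a z-score threshold per descriptor. A sketch with made-up BPM values (the 1.5-standard-deviation threshold is an arbitrary assumption):

```python
import statistics

# Hypothetical BPM values for a handful of tracks; the same approach
# applies to any low-level descriptor.
bpms = {"a1": 118.0, "b2": 122.0, "c3": 120.0, "d4": 119.0, "e5": 240.0}

def outliers(values, threshold=1.5):
    """Keys whose value lies more than `threshold` standard
    deviations from the mean of the distribution."""
    mean = statistics.mean(values.values())
    sd = statistics.stdev(values.values())
    return [k for k, v in values.items() if abs(v - mean) / sd > threshold]

print(outliers(bpms))  # ['e5']
```

In production this filter would be expressed as a range query against the search index (e.g. an Elasticsearch range filter on the descriptor field) rather than computed in application code.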

2D visual maps:

  • "Improving visualization of high-dimensional music similarity spaces" (Flexer)
  • can be used for visualizing AB datasets in 2D
  • 2D maps with t-distributed Stochastic Neighbor Embedding (t-SNE, though there are other approaches in the paper), with shared nearest neighbor distance normalization (against hubs)
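The shared-nearest-neighbor idea can be sketched directly: instead of scoring a pair of tracks by raw distance, score them by how many of their k nearest neighbors they have in common, a secondary measure known to dampen the influence of hubs. The vectors and the choice of k below are toy assumptions:

```python
import math

# Two tight clusters of hypothetical 2-D feature vectors plus one
# point in between; real AB descriptors are much higher-dimensional.
features = {
    "a": [0.0, 0.0], "b": [0.1, 0.0], "f": [0.0, 0.1],
    "c": [1.0, 1.0], "d": [0.9, 1.0], "g": [1.0, 0.9],
    "e": [0.5, 0.5],
}

def knn(m, k):
    """The k nearest neighbors of track `m` by Euclidean distance."""
    d = sorted((math.dist(features[m], v), n)
               for n, v in features.items() if n != m)
    return {n for _, n in d[:k]}

def snn_similarity(x, y, k=2):
    """Shared-nearest-neighbor similarity: the size of the overlap
    between the two tracks' k-nearest-neighbor sets."""
    return len(knn(x, k) & knn(y, k))

print(snn_similarity("a", "b"))  # same cluster, shared neighbor: 1
print(snn_similarity("a", "c"))  # different clusters, no overlap: 0
```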

Spot the odd song out

[http://www.juppiemusic.com/research/phd-thesis Spot the odd song out] is a research project by Daniel Wolff. In it, he gets people to listen to songs and say how similar they are. With this information, he is able to build models predicting how similar two other songs are. Part of this project is a [https://smcse.city.ac.uk/doc/mirg/casimir/game/camir_gameClient/ web-based game] which presents songs to people and asks them to say if songs are similar. We are interested in building a similar game to collect song similarity data.