Information about how to reach the tutorials can be found on the directions page.
Tutorials take place on Monday 9 August. The tutorial fee (€50 per tutorial) is not included in the Conference fee. Please note that tutorials run in parallel pairs: two in the morning and two in the afternoon. The morning tutorials are:
- Tutorial 1: Emmanuel Vincent (INRIA) and Nobutaka Ono (University of Tokyo). Music Source Separation and its Applications to MIR.
(Download relevant software here; handout)
- Tutorial 2: Ian Knopke (BBC) and Eric Nichols (Indiana University). A Tutorial on Pattern Discovery and Search Methods in Symbolic Music Information Retrieval.
(for more information see http://aimusic.net; handout)
The afternoon tutorials are:
- Tutorial 3: Meinard Müller (Saarland University and MPI Informatik) and Anssi Klapuri (Queen Mary, University of London). A Music-oriented Approach To Music Signal Processing.
- Tutorial 4: Ben Fields (Goldsmiths, University of London) and Paul Lamere (The Echo Nest). Finding A Path Through The Jukebox – The Playlist Tutorial.
Source separation consists of extracting individual sound components (e.g. notes, instruments, groups of instruments) from a recording. It is a mainstream topic in music and audio processing, with applications ranging from speech enhancement and speaker diarization to 3D upmixing and post-production of music. The first part of this tutorial will provide a general introduction to music source separation based on auditory segregation cues and probabilistic sound models. We will illustrate the performance of current techniques via a number of sound examples from the SiSEC 2008 campaign. The second part will provide an in-depth analysis of the use of source separation for five MIR tasks: tempo estimation, singer or instrument identification, chord detection, melody extraction and genre classification. We will explain how to extract features describing individual sound components and quantify the benefit in terms of retrieval accuracy. We will conclude by suggesting future uses of source separation for other MIR tasks.
This tutorial is intended both for MIR researchers and developers aiming to extract more meaningful symbolic and audio features and music psychologists aiming to understand human perception of concurrent sounds. It is supported by the VERSAMUS Associate Team Program (http://versamus.inria.fr/).
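To make the underlying idea concrete, here is a toy sketch of one widely used template-based separation technique, non-negative matrix factorization (NMF) of a magnitude spectrogram. The spectrogram, templates, and Euclidean multiplicative update rule below are our own illustrative assumptions, not the tutorial's actual material.

```python
import numpy as np

# Toy NMF separation sketch: factor a magnitude "spectrogram" V into
# spectral templates W and activations H, so each column of W can be
# read as one source's spectrum. All data here is synthetic.
rng = np.random.default_rng(0)

w_true = np.array([[1.0, 0.0],
                   [0.8, 0.1],
                   [0.0, 1.0],
                   [0.1, 0.9]])          # 4 freq bins, 2 sources
h_true = rng.random((2, 50))             # activations over 50 frames
V = w_true @ h_true

# Random non-negative init, then standard Euclidean multiplicative updates.
W = rng.random((4, 2)) + 0.1
H = rng.random((2, 50)) + 0.1
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

# Relative reconstruction error should be small after convergence.
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative error: {err:.4f}")
```

The multiplicative form keeps W and H non-negative throughout, which is what lets the learned templates be interpreted as additive sound components.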
Recent discussion in the MIR community has led to a renewed interest in symbolic music resources. These representations, often created from traditional musical notation, incorporate a much higher degree of structured musical information than audio. This, combined with their discrete nature, makes it possible to discover a larger range of musically-relevant patterns that would otherwise be difficult to approach, especially if progress is to be made on musicologically-centered problems.
The intended audience for this tutorial is researchers who have some familiarity with MIR concepts and background but may be new to symbolic music research. The focus of the first part is primarily on finding musical patterns and motifs in symbolic music collections. Various exact and approximate search techniques in a musical context will be discussed, as well as their application to large collections and the scaling issues that can arise. The second part examines how results from music cognition research have provided insight into classic music IR problems, focusing on cognitive approaches to the problems of pattern discovery in melody and the modeling of melodic expectation. Examples, demonstrations and visualizations will be given using the PerlHumdrum and PerlLilypond toolkits for working with Humdrum scores, and the Musicat computer model of musical listening.
This tutorial discusses the extraction of semantically meaningful information from waveform-based audio data. When dealing with highly structured signals such as music, an understanding of their acoustic and musical properties is of foremost importance. In this tutorial, we discuss how music-specific aspects that refer to harmony, rhythm, timbre, or melody can be exploited for designing musically expressive feature representations and for carrying out meaningful music content analysis. To accommodate a general and interdisciplinary audience, we explain the design principles in a non-technical and intuitive way using many illustrative music examples. Furthermore, to highlight the practical and musical relevance, we discuss the various feature representations in the context of current MIR tasks including structure analysis, chord recognition, beat tracking, performance analysis, music synchronization, voice separation, and instrument recognition. Our general goal is to show how the development of music-specific signal processing techniques is of fundamental importance for tackling otherwise infeasible music analysis problems.
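As a small illustration of the kind of music-specific feature representation discussed here, the sketch below computes a crude chroma vector (spectral energy folded onto the 12 pitch classes) for a synthetic A4 tone. The frame length, sample rate, and bin-to-pitch-class mapping are our own illustrative choices, not the tutorial's code.

```python
import numpy as np

# Crude chroma-feature sketch: map FFT bins of one windowed frame
# onto the 12 pitch classes (class 0 = A in this convention).
sr = 22050
t = np.arange(2048) / sr
frame = np.sin(2 * np.pi * 440.0 * t)            # pure A4 tone

spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)

chroma = np.zeros(12)
for mag, f in zip(spectrum, freqs):
    if f < 27.5:                                  # skip bins below A0
        continue
    pitch_class = int(round(12 * np.log2(f / 440.0))) % 12
    chroma[pitch_class] += mag
chroma /= chroma.max()                            # normalize to [0, 1]

print(int(np.argmax(chroma)))                     # strongest class: 0 (= A)
```

Because octave information is discarded, chroma features of this kind are robust to changes in timbre and instrumentation, which is why they are a standard front end for tasks such as chord recognition and music synchronization.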
The simple playlist, in its many forms – from the radio show, to the album, to the mixtape – has long been a part of how people discover, listen to and share music. As the world of online music grows, the playlist is once again becoming a central tool in helping listeners successfully experience music. Further, the playlist is increasingly a vehicle for recommendation and discovery of new or unknown music. More and more, commercial music services such as Pandora, Last.fm, iTunes and Spotify rely on the playlist to improve the listening experience. In this tutorial we look at the state of the art in playlisting. We present a brief history of the playlist, provide an overview of the different types of playlists, and take an in-depth look at the state of the art in automatic playlist generation, including commercial and academic systems. We explore methods of evaluating playlists and ways that MIR techniques can be used to improve playlists. Our tutorial concludes with a discussion of what the future may hold for playlists and playlist generation/construction.
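As a toy illustration of the similarity-based ordering that many automatic playlisters build on, the sketch below greedily chains each song to its nearest unplayed neighbor in a hypothetical 2-D feature space (say, normalized tempo and energy). The songs, features, and greedy strategy are our own simplified assumptions, not any system described in the tutorial.

```python
# Greedy nearest-neighbor playlist sketch over hypothetical song features.
songs = {
    "Seed":   (0.20, 0.30),
    "Song A": (0.25, 0.35),
    "Song B": (0.80, 0.90),
    "Song C": (0.30, 0.40),
    "Song D": (0.75, 0.85),
}

def dist(a, b):
    """Euclidean distance in the 2-D feature space."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

playlist = ["Seed"]
remaining = set(songs) - {"Seed"}
while remaining:
    last = songs[playlist[-1]]
    nxt = min(remaining, key=lambda s: dist(songs[s], last))
    playlist.append(nxt)
    remaining.remove(nxt)

print(playlist)  # ['Seed', 'Song A', 'Song C', 'Song D', 'Song B']
```

The greedy chain keeps adjacent songs similar, giving smooth transitions; real systems layer collaborative data, editorial rules, and diversity constraints on top of this basic idea.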