MCP

Music Computing and Psychology Lab
Frost School of Music, University of Miami

Research

This page summarises active research projects, which often extend beyond individual publications. If you are interested in using or developing code from any of the projects, please fill out the registration form below and I will grant you access. (This helps me write better funding proposals, as I can indicate how many and what type of people find the material useful.)

Marine Music

3rd March, 2024. Commencing later this month, the UMVerse-funded Marine Music project aims to investigate the effectiveness of using music to enhance immersion, recall, and comprehension in XR experiences focused on marine life. A collaboration between the Frost School of Music and Rosenstiel School of Marine and Atmospheric Science, the team plans to create an XR experience where marine flora and fauna are paired with specific musical elements like melodies and chord sequences. Participants will explore the virtual underwater environment, prompted with fact profiles of specimens and rewarded for answering questions about them correctly.

Our main hypothesis is that meaningful music-specimen pairings will increase immersion, recall, and comprehension compared with unpaired but otherwise similar music. This hypothesis is grounded in psychological research showing that context aids memorization; the project seeks to test its applicability to XR and to extend beyond recall to immersion and comprehension. Evaluation will employ both quantitative and qualitative methods to assess participants’ experiences.

SAMPA (Synchronized Action in Music-ensemble Playing and Athletics)

6th May, 2024. Commencing in June 2024, the PRA-funded SAMPA project investigates coordination in music ensembles and athletics. Despite widespread participation in both domains, relatively little is understood about the patterns of events that underlie enjoyment and successful coordination.

Leveraging pattern discovery algorithms, SAMPA aims to:

  1. Collect and analyze multivariate time series data from music and soccer events.
  2. Utilize data alongside interviews to understand synchronized action’s impact.
  3. Summarize findings in a peer-reviewed journal article and conference paper.
  4. Release anonymized datasets post-embargo.

Pattern discovery, integral to the project, entails identifying recurring subsets within multidimensional data. SAMPA seeks to identify recurrence of such subsets, under various transformations, within music and soccer performance datasets. By analyzing event sequences preceding success, SAMPA aims to reveal salient patterns, shedding light on synchronization and coordination across both domains.
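
By way of illustration only, the sketch below (TypeScript, operating on made-up event data; it is not SAMPA's actual code) shows the core idea behind one family of such algorithms: represent events as points, compute the translation vector between every pair of points, and group points that share a vector, so that each group is a set of events recurring elsewhere under the same shift.

```typescript
// Minimal sketch of translational pattern discovery over 2-D event points,
// in the spirit of SIA-style algorithms: for every pair of points, compute
// the translation vector between them, then group source points by vector.
// Each group is a set of events that recurs elsewhere under the same shift.
type Point = [number, number]; // [time, value], e.g. onset time and pitch

function maximalTranslatablePatterns(points: Point[]): Map<string, Point[]> {
  // Sort lexicographically so each vector is computed in one direction only.
  const sorted = [...points].sort((a, b) => a[0] - b[0] || a[1] - b[1]);
  const patterns = new Map<string, Point[]>();
  for (let i = 0; i < sorted.length; i++) {
    for (let j = i + 1; j < sorted.length; j++) {
      const vector = `${sorted[j][0] - sorted[i][0]},${sorted[j][1] - sorted[i][1]}`;
      if (!patterns.has(vector)) patterns.set(vector, []);
      patterns.get(vector)!.push(sorted[i]);
    }
  }
  // Keep only groups containing at least two events.
  return new Map([...patterns].filter(([, pts]) => pts.length >= 2));
}

// Made-up data: a three-event motif at times 0-2, repeated eight units later.
const events: Point[] = [[0, 60], [1, 62], [2, 64], [8, 60], [9, 62], [10, 64]];
console.log(maximalTranslatablePatterns(events).get("8,0")); // [[0,60],[1,62],[2,64]]
```

Real systems allow looser forms of repetition (e.g., inexact or time-scaled matches) and rank the discovered groups, since only a minority of them are perceptually or analytically salient.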


GIF indicating player positions from Kinexon

CHAI (Concerts with Humans and Artificial Intelligence)

6th May, 2024. Commencing in June 2024, the $100K, U-LINK-funded CHAI project addresses the increasing prevalence of AI tools in music creation, focusing on musicians' perceptions and self-efficacy when collaborating with AI. Acknowledging AI's growing impact on the music industry, the project aims to investigate the acceptability and influence of AI in music-making processes, and how such uses might be regulated.

Led by an interdisciplinary team, including Modern Artistry Development and Entrepreneurship professor Raina Murnak and Music Engineering Technology professors Tom Collins and Chris Bennett, the project proposes a workshop series involving MusicReach students, Frost students in a dedicated ensemble, and external artists. The team envisions producing and releasing music, culminating in a finale concert in spring 2025.

The project seeks to contribute to STEM education through innovative AI-infused music-making experiences, and aims to further promote the University of Miami as a pioneer in cutting-edge music research and creative practice.

Renaissance madrigal meets spatial audio

30th August, 2022. This video demonstrates a web-based interface (recommended browser: Google Chrome) designed and developed by Jemily Rime and Tom Collins, which enables the user to switch surroundings and move voices around their head as they listen to “Weep, o mine eyes”, written by John Bennet and performed by I Fagiolini.

AI collaboration with Imogen Heap

10th April, 2020. "It's doing what I hoped it would do, which is what I believe AI will do for musicians, which is to push us to the next level of our own creativity" (Imogen Heap, Grammy Award-winning musician and technologist, on working with music generation algorithms built in the lab).


Video courtesy of BBC Click

Music exploration system

20th May, 2021. This video introduces a prototype exploration system that we developed in the last year.


Video courtesy of Laura Stark, Brilliant Red Digital

MAIA, Inc.: A music cooperative

Many of the lab's current research projects relate to a music cooperative that Tom founded with Christian Coulon in 2015, called Music Artificial Intelligence Algorithms, Inc. (or MAIA for short). MAIA provides online spaces for users to create, share, and appreciate music.

An example interface is Jam!, which supports listening to and creating music in collaboration with other human users and AI. Projects in the MAIA sphere make use of the Web Audio API and associated packages.
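
As a rough illustration of the building blocks involved (the function name, element id, and sample URL below are hypothetical, not Jam!'s actual code), a browser interface of this kind typically creates an AudioContext, decodes audio data into buffers, and routes source nodes through gain nodes to the output:

```typescript
// Minimal Web Audio sketch: fetch a sample, decode it, and play it through a
// gain node. Illustrative only - the URL and element id are placeholders.
const audioContext = new AudioContext();

async function playSample(url: string, volume = 0.8): Promise<void> {
  const response = await fetch(url);
  const arrayBuffer = await response.arrayBuffer();
  const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);

  const source = audioContext.createBufferSource(); // one-shot playback node
  source.buffer = audioBuffer;

  const gain = audioContext.createGain(); // simple per-sample volume control
  gain.gain.value = volume;

  source.connect(gain).connect(audioContext.destination);
  source.start();
}

// Browsers require a user gesture before audio can start.
document.querySelector("#play")?.addEventListener("click", () => {
  void audioContext.resume();
  void playSample("/sounds/placeholder-loop.wav");
});
```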

Beehive activity


Photo and video by Christian Coulon

Since 2018, the lab has worked with the organizations Raw Power Apiary and Circle of Bees. In Spring 2019, as part of teaching CS 205 Software Engineering, we (the students, these organizations, and Tom) embarked upon a project that involved IoT monitoring, visualization, and sonification (that's where the music comes in!) of beehive activity.

The photo and video above give you some sense of what's going on. If you'd like to see some live entry-exit data for a hive in Davis, CA, click here (you might have to wait 10 sec to see stuff happening – the sensors switch on and off periodically to avoid overheating). If you'd like more information about the software that the students produced, and/or the types of users it serves, please fill out the registration form below.
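
For a flavour of what the sonification side can involve, here is a minimal sketch (not the software the students produced; the scale, note duration, and count range are arbitrary choices for illustration) that maps a window of entry-exit counts onto a pentatonic scale and schedules one short oscillator note per reading:

```typescript
// Minimal sonification sketch: busier hive -> higher note. Each count in the
// window becomes one short note drawn from a C pentatonic scale.
const PENTATONIC_MIDI = [60, 62, 64, 67, 69, 72, 74, 76];

function midiToFrequency(midi: number): number {
  return 440 * Math.pow(2, (midi - 69) / 12);
}

function sonifyCounts(context: AudioContext, counts: number[], maxCount = 50): void {
  const noteDuration = 0.25; // seconds per reading
  counts.forEach((count, i) => {
    // Map the count onto a scale degree, clamped to the top of the scale.
    const degree = Math.min(
      PENTATONIC_MIDI.length - 1,
      Math.floor((count / maxCount) * PENTATONIC_MIDI.length)
    );
    const startTime = context.currentTime + i * noteDuration;

    const oscillator = context.createOscillator();
    oscillator.frequency.value = midiToFrequency(PENTATONIC_MIDI[degree]);

    // Short decay envelope so consecutive notes stay distinct.
    const envelope = context.createGain();
    envelope.gain.setValueAtTime(0.2, startTime);
    envelope.gain.exponentialRampToValueAtTime(0.001, startTime + noteDuration);

    oscillator.connect(envelope).connect(context.destination);
    oscillator.start(startTime);
    oscillator.stop(startTime + noteDuration);
  });
}

// Hypothetical ten-minute window of per-minute entry-exit counts.
sonifyCounts(new AudioContext(), [3, 8, 15, 22, 40, 35, 18, 9, 4, 2]);
```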

Musical schemata

This project is an example of the lab's general interest in computational modeling of high-level music-theoretic concepts. Musical schemata comprise characteristic scale-degree movements in melody and bass, plus harmonies, metric weights, and other contextual information. This combination of attributes makes musical schemata difficult to identify and discover computationally, but that is the aim of our research on the topic. Click here for more details, code, data, etc.
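
To make that combination of attributes concrete, here is one way such a schema might be encoded (an illustrative sketch only, not the lab's actual representation), using the Prinner, whose 6-5-4-3 melody moves over a 4-3-2-1 bass, as an example:

```typescript
// Illustrative encoding of a musical schema as a stage-by-stage list of
// melody/bass scale-degree dyads with optional harmony labels and metric weights.
interface SchemaEvent {
  melodyDegree: number;  // scale degree of the melodic tone (1-7)
  bassDegree: number;    // scale degree of the bass tone (1-7)
  harmony?: string;      // Roman-numeral label, where one is typical
  metricWeight: number;  // relative metric strength (higher = stronger beat)
}

interface MusicalSchema {
  name: string;
  events: SchemaEvent[]; // the characteristic stages, in order
  context?: string;      // other contextual information, e.g. typical placement
}

// The harmony labels and metric weights below are one plausible realization,
// not a fixed requirement of the schema.
const prinner: MusicalSchema = {
  name: "Prinner",
  events: [
    { melodyDegree: 6, bassDegree: 4, harmony: "IV", metricWeight: 2 },
    { melodyDegree: 5, bassDegree: 3, harmony: "I6", metricWeight: 1 },
    { melodyDegree: 4, bassDegree: 2, harmony: "viio6", metricWeight: 2 },
    { melodyDegree: 3, bassDegree: 1, harmony: "I", metricWeight: 1 },
  ],
  context: "often appears mid-phrase, as a riposte to an opening gambit",
};
```

Identifying instances of such a schema computationally then means finding note collections whose melody and bass reduce to these dyads in order, while tolerating intervening notes and varied surface rhythm, which is part of what makes the task difficult.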

MCStylistic

MCStylistic supports research in Music theory, music Cognition, and Stylistic composition. It contains an algorithm called Stravinsqi (STaff Representation Analysed VIa Natural language Query Input), which is a step towards the first easy-to-use, high-level music-theoretic search engine. It also contains algorithms for functional harmonic analysis (including basic neo-Riemannian operations), metric analysis, tonal analysis, pattern discovery, and the generation of stylistic compositions. MCStylistic is free, cross-platform, and written in Common Lisp.

PattDisc

PattDisc supports research into the discovery of repeated patterns within and across pieces of music. It contains an algorithm called SIARCT-CFP, which takes a symbolic representation of a piece as input, and returns collections of notes that are subject to exact/inexact repetition throughout the piece. Output note collections ought to correspond to what music theorists would identify as motifs, themes, or repetitive elements. PattDisc also contains algorithms for pattern matching, and estimating the perceived salience of a note collection within a piece. PattDisc is free and cross-platform, although unfortunately the language in which it is written, Matlab, is not free!

Janata Lab Music Toolbox (JLMT)

JLMT (developed and hosted by the Janata Lab at University of California, Davis) is a package of Matlab functions for transforming audio signals through various physiologically and psychologically plausible representations. I used the package to build a model of tonal expectancy that predicted response times for over three hundred audio stimuli from seven different tonal priming experiments.

JLMT continues to be developed, and has had major input from Stefan Tomic, Petr Janata, Fred Barrett, and Tom Collins.

PatternViewer


Designed by Ali Nikrang, Tom Collins, and Gerhard Widmer, the PatternViewer application plays an audio file of a piece synchronized to a point-set representation, where the colour of the points represents an estimate of the local key. The pendular graph in the top-left corner represents the piece’s repetitive structure, and can be clicked to find out about motifs, themes, and other repetitive elements. PatternViewer Version 1.0 is available now for download from here.

Registration

By submitting this form, you agree that your use of the above-mentioned projects will be within the terms of the GNU General Public Licence (copied below).





Contact and credits

I hope you enjoyed visiting this site.
Feel free to get in touch (tom.collins@miami.edu) if you have any questions or suggestions.

Credits

The code for this site was written by Tom Collins and others as specified. Reuse of the code is welcomed, and governed by the GNU General Public License Version 3 or later.
