Description of the annotations
The dataset consists of 400 song excerpts (1 minute long) in 4 genres (rock, classical, pop, electronic). The annotations were collected using the GEMS scale (Geneva Emotional Music Scales). Each participant could select at most three items from the scale (the emotions that they felt strongly while listening to the song). Below is the description of the emotional categories as found in the game.
| Emotion | Description |
|---|---|
| Amazement | Feeling of wonder and happiness |
| Solemnity | Feeling of transcendence, inspiration; thrills |
| Tenderness | Sensuality, affect, feeling of love |
| Nostalgia | Dreamy, melancholic, sentimental feelings |
| Calmness | Relaxation, serenity, meditativeness |
| Power | Feeling strong, heroic, triumphant, energetic |
| Joyful activation | Feels like dancing, bouncy feeling, animated, amused |
| Tension | Nervous, impatient, irritated |
The annotations produced by the game are spread unevenly among the songs, which is caused both by the design of the experiment and the design of the game. Participants could skip songs and switch between genres, and they were encouraged to do so, because an induced emotional response does not automatically occur on every music listening occasion. Therefore, genres that were less popular among our particular sample of participants received fewer annotations, and the same happened to less popular songs. Moreover, for the purposes of analysis we split our songs into two subsets. On average, each song in one of the subsets is annotated by 48 people, and each song in the other by 16 people.
Each line in the file corresponds to one participant's annotation of one song (i.e., annotations are not averaged per song). The file contains the following information:
- Id of the music file
- Genre of the music file
- 9 annotations by the participant (whether each emotion was strongly felt for this song or not; 1 means the emotion was felt).
- Participant's mood prior to playing the game.
- Liking (1 if the participant reported liking the song).
- Disliking (1 if the participant reported disliking the song).
- Age, gender and mother tongue of the participant (self-reported).
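The per-participant format above can be aggregated into per-song emotion profiles. Below is a minimal sketch of that step, assuming a CSV layout with the columns described in the list; the column names (`track_id`, `genre`, and one column per emotion) are hypothetical and should be checked against the actual file header.

```python
import csv
import io
from collections import defaultdict

# Emotion categories as listed in the table above. The column names used
# here are an assumption for illustration, not the dataset's real header.
EMOTIONS = ["amazement", "solemnity", "tenderness", "nostalgia",
            "calmness", "power", "joyful_activation", "tension"]

def per_song_emotion_rates(csv_text):
    """Turn one-hot per-participant annotations into per-song
    proportions (fraction of annotators who felt each emotion)."""
    counts = defaultdict(lambda: {"n": 0, **{e: 0 for e in EMOTIONS}})
    for row in csv.DictReader(io.StringIO(csv_text)):
        song = counts[row["track_id"]]  # hypothetical column name
        song["n"] += 1
        for e in EMOTIONS:
            song[e] += int(row[e])
    return {sid: {e: c[e] / c["n"] for e in EMOTIONS}
            for sid, c in counts.items()}

# Toy example: two participants annotating the same song.
sample = (
    "track_id,genre,amazement,solemnity,tenderness,nostalgia,"
    "calmness,power,joyful_activation,tension\n"
    "1,rock,0,0,0,0,0,1,1,0\n"
    "1,rock,0,0,0,0,0,1,0,0\n"
)
rates = per_song_emotion_rates(sample)
print(rates["1"]["power"])              # 1.0: both annotators felt it
print(rates["1"]["joyful_activation"])  # 0.5: one of two annotators
```

Because annotation counts differ between songs (see above), proportions computed this way are more reliable for the densely annotated subset than for the sparse one.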
For more information, please consult the references below.
Description of the game
We collected the data through an online game with a purpose. There are two versions of the game: a Facebook application and a stand-alone version; most data came through the stand-alone version. Emotify was launched on the 1st of March 2013, and 1778 people have participated so far. Prior to the game, participants were instructed to report their felt (induced) emotion and were asked some personal questions. As an incentive, participants received psychological questionnaire-style feedback and could compare themselves to their friends on Facebook. Participants could choose genres and skip songs whenever they liked.
It is still possible to play the game.
More information about the game design can be found in the references below.
Data usage agreement
If you decide to use this dataset in your work, we kindly ask you to cite the corresponding paper listed below.
- M. Zentner, D. Grandjean, and K. R. Scherer. Emotions evoked by the sound of music: Characterization, classification, and measurement. Emotion, 8(4):494–521, 2008.
- A. Aljanaki, D. Bountouridis, J. A. Burgoyne, J. van Balen, F. Wiering, H. Honing, and R. C. Veltkamp. Designing games with a purpose for data collection in music research. Emotify and Hooked: Two case studies. Lecture Notes in Computer Science, pages 29–44, 2014.
- A. Aljanaki, F. Wiering, and R. C. Veltkamp. Collecting annotations for induced musical emotion via an online game with a purpose, Emotify. Technical report.
- A. Aljanaki, F. Wiering, and R. C. Veltkamp. Studying emotion induced by music through a crowdsourcing game. Information Processing & Management, 2015.