Evolving in-game mood-expressive music with MetaCompose

Published: 12 September 2018
Abstract

MetaCompose is a music generator based on a hybrid evolutionary technique that combines FI-2POP with multi-objective optimization. In this paper we employ the MetaCompose music generator to create music in real time that expresses different mood-states in a game-playing environment (Checkers). In particular, the paper focuses on determining whether differences in player experience can be observed when (i) affective-dynamic music is used instead of static music, and (ii) the music supports the game's internal narrative/state. Participants were asked to play two games of Checkers while listening to two (out of three) different set-ups of game-related generated music: static expression, consistent affective expression, or random affective expression. During game-play, players wore an E4 wristband, allowing various physiological measures to be recorded, such as blood volume pulse (BVP) and electrodermal activity (EDA). On three of the four criteria examined (engagement, music quality, coherency with game excitement, and coherency with performance), the collected data supports the hypothesis that players prefer dynamic affective music when asked to reflect on the current game-state. In the future this system could allow designers and composers to easily create affective, dynamic soundtracks for interactive applications.
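
To make the hybrid technique named in the abstract concrete: FI-2POP maintains two co-evolving populations, one of feasible and one of infeasible individuals, while multi-objective optimization ranks the feasible individuals along several fitness dimensions at once. The Python sketch below shows the general shape of such a hybrid under stated assumptions; the genome, objectives, and constraint are placeholders, not MetaCompose's actual music representation, which the paper and its companion publications describe in full.

```python
import random

# Hypothetical stand-in genome: MetaCompose evolves chord sequences and
# melodic abstractions; here an individual is just a list of floats.
GENOME_LEN = 16

def random_genome():
    return [random.uniform(0.0, 1.0) for _ in range(GENOME_LEN)]

def constraint_violation(genome):
    # Placeholder feasibility measure (0.0 means feasible). MetaCompose
    # instead penalizes violations of music-theoretic constraints.
    return max(0.0, 0.5 - sum(genome) / len(genome))

def objectives(genome):
    # Placeholder objective vector to maximize; the real system scores
    # several musical criteria simultaneously.
    return (sum(genome[::2]), -abs(sum(genome[1::2]) - 4.0))

def dominates(a, b):
    # Pareto dominance for maximization.
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_rank(population):
    # Sort so that individuals dominated by fewer others come first,
    # a cheap proxy for NSGA-II-style non-dominated sorting.
    scores = [objectives(g) for g in population]
    return sorted(population,
                  key=lambda g: sum(dominates(s, objectives(g)) for s in scores))

def mutate(genome):
    child = list(genome)
    i = random.randrange(len(child))
    child[i] = min(1.0, max(0.0, child[i] + random.gauss(0.0, 0.1)))
    return child

def evolve(pop_size=50, generations=100):
    feasible, infeasible = [], []
    for g in (random_genome() for _ in range(pop_size)):
        (feasible if constraint_violation(g) == 0.0 else infeasible).append(g)
    for _ in range(generations):
        # Feasible population: multi-objective (Pareto) ranking.
        feasible = pareto_rank(feasible)[:pop_size]
        # Infeasible population: evolve toward feasibility.
        infeasible = sorted(infeasible, key=constraint_violation)[:pop_size]
        for child in map(mutate, feasible[:10] + infeasible[:10]):
            # Offspring migrate to whichever population they belong to:
            # the defining move of FI-2POP.
            (feasible if constraint_violation(child) == 0.0
             else infeasible).append(child)
    return feasible

best = evolve()
print(objectives(best[0]) if best else "no feasible individual found")
```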
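The three experimental set-ups differ only in how the generator's mood parameters are driven. A minimal sketch of that control logic, assuming a hypothetical scalar `game_excitement` summarizing the Checkers state and a valence/arousal mood model (the function and parameter names are illustrative, not taken from the paper):

```python
import random
from enum import Enum

class Condition(Enum):
    STATIC = "static expression"
    CONSISTENT = "consistent affective expression"
    RANDOM = "random affective expression"

def mood_for_state(condition, game_excitement):
    """Return a (valence, arousal) pair in [-1, 1] to hand to the generator.

    `game_excitement` is a hypothetical scalar in [0, 1] summarizing the
    current Checkers position (e.g. pending captures, material balance);
    the paper's actual game-state features are not reproduced here.
    """
    if condition is Condition.STATIC:
        return (0.0, 0.0)  # fixed, neutral expression throughout the game
    if condition is Condition.CONSISTENT:
        return (0.0, 2.0 * game_excitement - 1.0)  # music tracks the game
    return (random.uniform(-1, 1), random.uniform(-1, 1))  # decoupled moods

# Example: a tense mid-game position under the consistent condition.
print(mood_for_state(Condition.CONSISTENT, game_excitement=0.8))
```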

    Published In

    AM '18: Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion
    September 2018
    252 pages
ISBN: 9781450366090
DOI: 10.1145/3243274

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

1. Affective expression
2. Music generation
3. Evolutionary algorithms

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

AM'18: Sound in Immersion and Emotion
September 12-14, 2018
Wrexham, United Kingdom

    Acceptance Rates

    Overall Acceptance Rate 177 of 275 submissions, 64%

    Cited By

• (2023) Visual Recognition for ZELDA Content Generation via Generative Adversarial Network. 2023 3rd International Conference on Artificial Intelligence (ICAI), 76-81. DOI: 10.1109/ICAI58407.2023.10136680. Online publication date: 22-Feb-2023
• (2023) Generating Music for Video Games with Real-Time Adaptation to Gameplay Pace. Intelligent Information and Database Systems, 261-272. DOI: 10.1007/978-981-99-5834-4_21. Online publication date: 24-Jul-2023
• (2022) PreGLAM-MMM. Proceedings of the 17th International Conference on the Foundations of Digital Games, 1-11. DOI: 10.1145/3555858.3555947. Online publication date: 5-Sep-2022
• (2022) Adaptive Game Soundtrack Tempo Based on Players' Actions. 2022 IEEE Conference on Games (CoG), 441-448. DOI: 10.1109/CoG51982.2022.9893604. Online publication date: 21-Aug-2022
• (2021) WITHDRAWN: Music Composition Feasibility using a Quality Classification Model based on Artificial Intelligence. Aggression and Violent Behavior, 101632. DOI: 10.1016/j.avb.2021.101632. Online publication date: Jun-2021
• (2021) Deep learning for procedural content generation. Neural Computing and Applications 33:1, 19-37. DOI: 10.1007/s00521-020-05383-8. Online publication date: 1-Jan-2021
• (2020) Computational Creativity and Music Generation Systems: An Introduction to the State of the Art. Frontiers in Artificial Intelligence 3. DOI: 10.3389/frai.2020.00014. Online publication date: 3-Apr-2020
• (2020) Dynamic Procedural Music Generation from NPC Attributes. Proceedings of the 15th International Conference on the Foundations of Digital Games, 1-4. DOI: 10.1145/3402942.3409785. Online publication date: 15-Sep-2020
• (2020) Evolutionary music: applying evolutionary computation to the art of creating music. Genetic Programming and Evolvable Machines 21:1-2, 55-85. DOI: 10.1007/s10710-020-09380-7. Online publication date: 1-Jun-2020
• (2019) Procedurally generating a digital math game's levels: Does it impact players' in-game behavior? Entertainment Computing 32, 100325. DOI: 10.1016/j.entcom.2019.100325. Online publication date: Dec-2019
