Vol. 39 Issue 4 Reviews

Study Day on Computer Simulation of Musical Creativity

University of Huddersfield, United Kingdom, 27 June 2015. Information about the study day and video recordings of the presentations given during the event can be found at https://simulationofmusicalcreativity.wordpress.com/.

Reviewed by Valerio Velardo and Steven Jan
Huddersfield, United Kingdom

The Study Day on Computer Simulation of Musical Creativity was held at the University of Huddersfield, United Kingdom, on 27 June 2015. The event was an opportunity to explore new lines of research in generative music, simulation of musical societies, systems to enhance the creativity of human users, and evolutionary and cognitive-based models for creative musical systems. Both theoretical and practical research were presented, each complementing the other. The main goal of the study day was to provide a multidisciplinary platform for researchers from different backgrounds who are interested in computer simulation of musical creativity (CSMC) to discuss and promote their work and to foster cross-fertilization. The study day occupied the area of intersection between music, artificial intelligence (AI), cognitive science, and philosophy.

Of the 36 delegates, who came from all over the world, 15 gave presentations on their work. The program consisted of two oral paper sessions, three workshops, one keynote lecture, and a poster session. The works presented covered numerous topics, such as the generation of rhythm and melodies, simulation of pop and jazz music, music programming frameworks, and computer-assisted systems for music analysis.
Research presented at this conference can be roughly classified into three categories: computers as composers, societies of musical agents, and computer-assisted music systems. In the remainder of this review we summarize the main contributions in each of these categories.
  
Computers as Composers

The central topic of CSMC has traditionally been the automated creation of music. The systems in this category presented at the study day can be divided into two subcategories: approaches that generate only rhythm, and systems that produce fully formed music.

Andrew Lambert, Tillman Weyde, and Newton Armstrong presented a connectionist, machine-learning approach to expressive rhythm generation. Their framework is based on cognitive models implemented as a multilayered recurrent neural network. The first layer, a gradient frequency neural network (GFNN), provides input to a second layer, a long short-term memory (LSTM) network. Once the combined network has been trained on a dataset of music containing expressive timings, it is able to predict rhythmic events. These predictions can in turn be used to generate new rhythms. The system's predictions are robust, but some participants raised questions regarding its generative behavior: machine-learning techniques effectively produce music within a given musical style, but they cannot escape the stylistic boundaries of the corpus used to train them.
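Although the authors' model pairs the GFNN with an LSTM, the intuition of the first layer can be conveyed by a toy sketch: a bank of damped linear resonators, with frequencies spaced along a gradient, is driven by an onset train, and the frequency that resonates most strongly suggests when the next onset is due. The minimal Python sketch below is our own illustration; the frequency range, the decay constant, and the omission of the nonlinear oscillator dynamics and of the LSTM layer are all simplifications.

```python
import numpy as np

def resonator_bank(onsets, freqs, dt=0.01, decay=2.0):
    """Drive a bank of damped linear oscillators (one per candidate
    frequency) with an onset train; return their final amplitudes."""
    z = np.zeros(len(freqs), dtype=complex)
    rot = np.exp((-decay + 2j * np.pi * freqs) * dt)  # per-sample update
    for x in onsets:
        z = z * rot + x  # decay/rotate every oscillator, then add input
    return np.abs(z)

dt = 0.01                        # 100-Hz sampling of the onset train
onsets = np.zeros(1000)          # 10 seconds of silence ...
onsets[::50] = 1.0               # ... with an onset every 0.5 s (2-Hz pulse)

freqs = np.linspace(0.5, 3.0, 64)    # candidate pulse frequencies (Hz)
amps = resonator_bank(onsets, freqs, dt)

pulse = freqs[np.argmax(amps)]
print(f"dominant pulse ~ {pulse:.2f} Hz; next onset expected in {1/pulse:.2f} s")
```

In this toy setting the 2-Hz resonator accumulates the in-phase onsets while detuned oscillators partially cancel, so the bank recovers the underlying pulse without any symbolic beat tracking.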

To overcome this issue, Rafael Valle and Adrian Freed proposed Batera, a drum agent able to learn different styles and to interpolate heterogeneous musical features between them. The system exploits probabilistic finite-state automata and considers rhythmic expressivity, musical structure, and drum patterns learnt from a training set. The musical output is a stream of drum-based music that can mix the styles learnt, both in terms of rhythmic patterns and of instrumentation. Batera can be seen as an instance of a system exhibiting what Margaret Boden terms “combinational creativity”: it generates novel musical artefacts by combining uncorrelated, pre-existing ideas.
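A deliberately simple stand-in for this mechanism is sketched below: two first-order transition tables over a three-piece drum vocabulary, blended by a single interpolation weight. The tables and the weight are hypothetical, not Batera's learned automata.

```python
import random

# Hypothetical transition tables for two drum "styles".
ROCK = {"kick":  {"snare": 0.7, "hat": 0.3},
        "snare": {"kick": 0.6, "hat": 0.4},
        "hat":   {"kick": 0.5, "snare": 0.5}}
SAMBA = {"kick":  {"hat": 0.8, "snare": 0.2},
         "snare": {"hat": 0.7, "kick": 0.3},
         "hat":   {"hat": 0.4, "kick": 0.6}}

def blend(a, b, w):
    """Interpolate two styles: P = (1 - w) * Pa + w * Pb per transition."""
    return {s: {t: (1 - w) * a[s].get(t, 0.0) + w * b[s].get(t, 0.0)
                for t in set(a[s]) | set(b[s])}
            for s in a}

def generate(table, start="kick", length=16):
    out, state = [start], start
    for _ in range(length - 1):
        nxt, probs = zip(*table[state].items())
        state = random.choices(nxt, weights=probs)[0]
        out.append(state)
    return out

# Halfway between the two styles.
print(generate(blend(ROCK, SAMBA, 0.5)))
```

Sliding the weight from 0 to 1 moves the output continuously from one style to the other, which is the combinational move the system exploits.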

Although very sophisticated, the two approaches discussed so far focus on rhythm only. Music is, of course, an extremely complex phenomenon comprising many dimensions at once (e.g., melody, harmony, counterpoint, form, instrumentation). A few approaches presented at the study day were able to produce fully shaped music. The system proposed by Tom Parkinson can create a potentially infinite stream of slow-moving jazz music in real time. The ensemble is limited to piano, trumpet, and cymbals. Behind the scenes, probabilistic choices dictated by Markov chains determine the chord sequences and which instruments play together at any one time. The approach generates interesting musical results within a specific subset of jazz music, although it ignores high-level musical form.
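A minimal sketch of such an endless, Markov-driven stream follows; the chord vocabulary, transition probabilities, and instrument-activation rule are hypothetical stand-ins rather than Parkinson's actual tables.

```python
import random

# Hypothetical chord transitions and instrument-activation rule.
CHORDS = {"Dm9":    {"G13": 0.5, "Ebmaj7": 0.3, "Dm9": 0.2},
          "G13":    {"Cmaj9": 0.6, "Dm9": 0.4},
          "Ebmaj7": {"Dm9": 0.7, "G13": 0.3},
          "Cmaj9":  {"Dm9": 0.8, "Ebmaj7": 0.2}}
INSTRUMENTS = ["piano", "trumpet", "cymbals"]

def stream(start="Dm9"):
    """Endless generator: a Markov move picks each chord, and an
    independent coin per instrument decides who plays it."""
    chord = start
    while True:
        players = [i for i in INSTRUMENTS if random.random() < 0.6] or ["piano"]
        yield chord, players
        nxt, w = zip(*CHORDS[chord].items())
        chord = random.choices(nxt, weights=w)[0]

events = stream()
for _ in range(4):
    print(next(events))
```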
PopSketcher, a framework proposed by Valerio Velardo and Mauro Vallati, tackles the issue of musical form by producing sketches of pop songs. Sketches are blueprints that contain fundamental information about the harmony, melody, and form of a song. An interesting feature of this approach is that it employs a range of diverse AI techniques for different compositional tasks. For example, the generation of form is carried out with a probabilistic generative grammar, whereas a dynamic naive Bayes classifier is responsible for the selection of chords. This strategy is in line with the divide-and-conquer approach used in much AI research when dealing with complex problems.
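To give a flavor of the grammar-based generation of form (the naive Bayes chord-selection stage is omitted), the sketch below samples productions from a toy probabilistic grammar; the rules and their weights are our own, not PopSketcher's.

```python
import random

# Toy probabilistic grammar for pop-song form: each nonterminal maps to
# a list of (production, probability) pairs.
RULES = {
    "SONG":   [(["INTRO", "BODY", "OUTRO"], 1.0)],
    "BODY":   [(["VERSE", "CHORUS", "BODY"], 0.6),
               (["VERSE", "CHORUS"], 0.4)],
    "INTRO":  [(["intro"], 1.0)],
    "VERSE":  [(["verse"], 1.0)],
    "CHORUS": [(["chorus"], 0.8), (["chorus", "chorus"], 0.2)],
    "OUTRO":  [(["outro"], 1.0)],
}

def expand(symbol):
    """Recursively expand a nonterminal by sampling one production."""
    if symbol not in RULES:
        return [symbol]  # terminal section label
    bodies, weights = zip(*RULES[symbol])
    body = random.choices(bodies, weights=weights)[0]
    return [s for part in body for s in expand(part)]

print(expand("SONG"))
# e.g. ['intro', 'verse', 'chorus', 'verse', 'chorus', 'outro']
```

The recursive BODY rule is what lets the grammar produce forms of varying length while keeping the verse-chorus alternation intact.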

A general point that emerges from the comparison of all the systems presented at the study day is that, at least for the moment, none of them is able to show what Margaret Boden terms “transformational creativity”. For Boden, a system is transformationally creative if it is able to transcend the boundaries of its given conceptual space and to create its own new rules of generation. The frameworks proposed at the study day, on the other hand, are only able to explore their given conceptual space. The future line of research for systems belonging to the “computers as composers” category is therefore clear. Not only is it necessary to improve the overall compositional results of these approaches, but it is also important to design new, flexible architectures that may lead to the emergence of transformational creativity.

Societies of Musical Agents

There were two main contributions in the category of societies of musical agents, one practical, the other theoretical. Marcelo Gimenes introduced CoMA (Communities of Musical Agents), a system designed to simulate musical evolution in virtual environments. CoMA comprises a number of artificial agents that compose and exchange melodies with each other. The musical style of an agent is determined by a set of musical patterns containing information about pitch and rhythm. During their life cycle, agents accumulate new sets of patterns based on the music of the other agents with which they interact. This process leads to the stylistic evolution of the agents. CoMA can run without the need for human intervention, and the agents can make “motivated decisions”. This is made possible by global perceptual, cognitive, and decision-making models, which simulate fundamental cognitive elements found in humans.
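The core dynamic of pattern exchange can be captured in a few lines. The sketch below is a drastic simplification in which patterns are opaque tokens, pairing is random, and adoption is unconditional; CoMA's perceptual, cognitive, and decision-making models are far richer.

```python
import random

class Agent:
    """An agent whose 'style' is just the set of patterns it knows."""
    def __init__(self, name, patterns):
        self.name, self.patterns = name, set(patterns)

    def listen(self, other):
        # Adopt one pattern heard from the interlocutor.
        self.patterns.add(random.choice(sorted(other.patterns)))

# Five agents, each starting from a distinct motif.
agents = [Agent(f"a{i}", {f"motif{i}"}) for i in range(5)]

# Random pairwise interactions drive the stylistic evolution.
for step in range(50):
    listener, speaker = random.sample(agents, 2)
    listener.listen(speaker)

for ag in agents:
    print(ag.name, sorted(ag.patterns))  # repertoires mix as patterns spread
```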

Steven Jan presented a provocative paper that challenged the boundaries of musical creativity, claiming that whale song can be deemed creative. Jan argued that whale song is analogous to human music in that it follows a similar cultural-evolutionary process that continually redefines it, and it emerges from the interactions between members of a society. The main difference between these two instances of music is that whale song is still at an early stage of evolution. Jan suggested that sociality, physicality, and embodiment, elements common to both human music and whale song, should all be considered when developing algorithms for generating music, with the aim of emulating creative musical processes in societies of biological agents.

Computer-Assisted Music Systems

Computer music systems can also be used to enhance human creativity. The works presented at the study day in this category can be split into two subcategories: computer-assisted composition and computer-assisted music education. The boundary between these two subcategories is often blurred, so it is not always easy to assign a system clearly to one or the other.

Computer-assisted composition systems can intervene at different stages of the creative compositional process. For example, the Abjad Python API, presented by Trevor Bača, Josiah Oberholtzer, and Jeffrey Treviño, helps composers to visualize sketches, compositional materials, and complete scores within an integrated interactive environment. The aim of the framework is to act as a medium between musical thought, formal models, and musical notation.
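As a small taste of the API (assuming a recent Abjad release, whose surface may differ from the version presented in 2015), a score fragment can be built from LilyPond-style note names and rendered:

```python
import abjad  # requires the Abjad package plus a LilyPond installation

# Parse four quarter notes into a staff and engrave them.
staff = abjad.Staff("c'4 d'4 e'4 f'4")
abjad.show(staff)  # renders the score with LilyPond and opens the PDF
```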
Computers can also revolutionize the performance and the structure of a musical piece, as in the case of Multidimensional Interstice. In this composition, Alannah Halay explores the use of an iPhone app to let the audience interactively decide the form of the piece in real time. Multidimensional Interstice treats members of the audience as additional composers. The use of technology and the participatory element of the piece result in the roles of composer, performer, and audience becoming enmeshed.

A highlight of the study day was the keynote lecture on computer-assisted musical creativity given by Eduardo Miranda. Miranda proposed another way of using computational systems to enhance the creative compositional process. Throughout his career as a composer, Miranda has developed algorithms that generate raw musical materials, which he then digests and reinterprets to compose fully shaped pieces. The large-scale orchestral composition Mind Pieces (2011) is an instance of this compositional practice. For this piece, Miranda developed an “artificial life” algorithm, mapping some of the features of virtual biological agents onto pitch and duration values. The musical results produced by the program were then used by Miranda as the basis for one of the movements of the piece.
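The general idea of mapping agent behavior onto musical parameters can be sketched as follows; the random-walk agents and the particular pitch and duration mappings are hypothetical illustrations, not Miranda's actual algorithm.

```python
import random

class Walker:
    """A bounded random-walk agent whose state is read off as music:
    vertical position -> MIDI pitch, speed -> note duration."""
    def __init__(self):
        self.y = random.randint(48, 72)          # start near middle C
        self.speed = random.uniform(0.2, 1.0)

    def step(self):
        self.y = min(84, max(36, self.y + random.choice([-2, -1, 1, 2])))
        self.speed = min(1.5, max(0.1, self.speed + random.uniform(-0.1, 0.1)))
        return self.y, round(self.speed, 2)      # (MIDI pitch, duration in s)

walker = Walker()
print([walker.step() for _ in range(8)])  # raw material for a human composer
```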
Computer systems can also be used to enhance the music-learning process. Torsten Anders and Örjan Sandred presented a music constraint programming system that can be used to explore the rules of harmony and counterpoint in a visual environment. Users can select a number of compositional rules and observe the musical passages that the system generates to satisfy the specified constraints.
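The underlying generate-and-test idea can be demonstrated with a toy solver that filters candidate melodies through user-selected rules; the two rules below are our own simple examples and do not reflect the richness of the presented system.

```python
import itertools

PITCHES = range(60, 68)           # MIDI pitches C4 to G4

def stepwise(m):                  # rule: move by at most 2 semitones
    return all(abs(a - b) <= 2 for a, b in zip(m, m[1:]))

def no_repeat(m):                 # rule: no immediate note repetition
    return all(a != b for a, b in zip(m, m[1:]))

rules = [stepwise, no_repeat]     # the "user-selected" constraints

# Enumerate all four-note melodies and keep those satisfying every rule.
solutions = [m for m in itertools.product(PITCHES, repeat=4)
             if all(rule(m) for rule in rules)]
print(len(solutions), "melodies satisfy the rules; e.g.", solutions[0])
```

Toggling rules on and off and watching the solution set change is, in miniature, the pedagogical interaction the system offers.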

Michael Clarke, Frédéric Dufeu, and Peter Manning proposed an interactive environment that allows beginners and experts alike to revisit creatively John Chowning's well-known composition Stria (1977). This standalone application enables users to engage aurally and visually with the compositional techniques employed by the composer. Users can also experiment with changing the time, pitch, and spatialization parameters of the algorithm Chowning used to compose Stria, in order to generate their own musical variations.

Conclusion

Research on algorithmic music and computer-assisted composition is almost as old as computers themselves: Lejaren Hiller and Leonard Isaacson composed the first computer-generated score, the Illiac Suite, in 1957. Since then, the number of generative systems has grown considerably, and today research in CSMC, in all of its forms, is widespread. There are, however, still few conferences specifically devoted to CSMC. The Study Day on Computer Simulation of Musical Creativity helped to fill this gap as an initial attempt to create a common forum for researchers from different backgrounds involved in CSMC.

As emerged from the study day, CSMC still poses many challenges. Regarding music generation, the literature describes countless systems that perform relatively small compositional tasks (e.g., melodic generation, harmonization), but few frameworks have been developed that can create fully shaped musical pieces. During discussions the question arose as to whether these systems should try to emulate human cognitive structures. Some delegates agreed that the final aim of CSMC should be to understand human musical creative processes by means of computer simulation, whereas others thought that, regardless of the type of creative process employed, the focus should be on the output of the system alone. In our view, these approaches do not have to be mutually exclusive; rather, they complement each other, providing insights into both human and machine creativity.

Some participants suggested that researchers should develop societies of artificial musical agents. In this scenario, there is a shift from the level of the single system to the level of a network of systems communicating with each other. This shift is similar to the change of perspective that occurs when human beings are studied first through the lens of psychology and then through that of sociology. By simulating societies of virtual musical agents, it would be possible to understand how music evolves in human societies and how different musical styles arise. This aspiration expresses in a nutshell both the underlying spirit of the study day and the essence of CSMC: computer simulation is a powerful tool for gaining insights into real-life musical phenomena.