Call for Submissions for a Special Issue of Computer Music Journal,  "Is there an Aesthetic of AI Music?"

Editors: Ted Moore (tmoore97@jh.edu), Constantin Basica (cobasica@ccrma.stanford.edu), and Julie Zhu (zhujulie@umich.edu)

Historically, each new technology in computer music has brought new aesthetic paradigms. For example, the late 1990s glitch movement used sonic “errors” and artifacts as musical material (Cascone 2000). In a parallel to glitch’s celebration of error, many contemporary AI music practitioners find inspiration in the surprises, mistakes, and even “glitches” of working with AI (Loor Paredes 2025).

Are there common threads in the emerging aesthetics of AI music? Are these aesthetic tendencies artifacts of the mediation inherent in popular tools (Snape and Born 2022) such as FluCoMa (Tremblay et al. 2022), RAVE (Caillon and Esling 2021), SampleRNN (Mehri et al. 2016), and others, or are they emergent properties of machine learning more generally, such as a tendency toward continuous latent-space interpolations, “timbre transfer” using concatenative synthesis (Schwarz 2000) or other tools, and multilayer perceptron mapping (Lee, Freed, and Wessel 1992)?

Practitioners frequently train models on datasets they have personally curated, or even entirely created, using machine learning tools not as general-purpose engines but as idiosyncratic instruments. When using small personal datasets, where does the “art” reside: in the choice of training data, the design of the algorithm, or the emergent behavior of the system? Is there an Aesthetic of AI Music, or do aesthetics remain as varied as the artists and data involved?

Large Music Models (LMMs) such as Suno and Udio threaten to transform cultural perceptions of music and creativity. How do corporate goals influence the aesthetics of frontier models? What impacts might this have on experimental practices? Can LMMs be used to resist or repurpose the aesthetic biases they encode? Is it possible to use LMMs in the pursuit of idiosyncratic artistic expression or are they only capable of pastiche?

Do AI-based practices merely re-inscribe familiar styles such as glitch, generative music, noise, conceptual art, improvisation, live coding, and data sonification? How do artists working with machine learning understand their practice in relation to other stylistic lineages in computer music? When will the fact that art was made with AI become the least salient aspect of its artistic merit?

Possible topics include (but are not limited to):

- Aesthetics of bias, intention, and noise in training datasets
- Comparative aesthetics of frontier models vs. bespoke models
- Artist-curated or artist-generated training data as aesthetic control over AI tools
- Creative resistance to canonical datasets and/or models
- The sonic signatures of algorithms, data, and software
- Algorithm/software design as composition
- When does experiencing AI music rely on knowledge of AI?
- When is a piece of music’s use of AI more important than the music itself?
- How does AI music change our aesthetics or literacy of listening?
- Are we hearing the algorithm, the dataset, or the artist?

Submissions should follow all CMJ author guidelines (https://direct.mit.edu/comj/pages/submission-guidelines), except that manuscripts should not be submitted online at cmjdb.com. Instead, submissions and queries should be sent to the guest editors with the subject line [CMJ | Aesthetic of AI Music].


Schedule

Call for Submissions: April 1, 2026

Optional Abstract Submission*: February 1, 2027

Feedback on Abstracts from Editors*: March 1, 2027 

Full Article Submission: August 1, 2027 

Peer Reviews: November 1, 2027

Author Final Version: February 1, 2028

Expected Publication: Fall 2028


*Submitting an abstract by February 1, 2027 is optional and offers an opportunity to receive feedback from the editors; abstract feedback does not guarantee acceptance of the full article. Full articles may be submitted by August 1, 2027 regardless of whether an abstract was previously submitted.


Works Cited:

Caillon, Antoine, and Philippe Esling. “RAVE: A Variational Autoencoder for Fast and High-Quality Neural Audio Synthesis.” arXiv preprint arXiv:2111.05011 (2021).

Cascone, Kim. “The Aesthetics of Failure: ‘Post-digital’ Tendencies in Contemporary Computer Music.” Computer Music Journal 24/4 (2000): 12–18.

Lee, Michael, Adrian Freed, and David Wessel. “Neural Networks for Simultaneous Classification and Parameter Estimation in Musical Instrument Control.” Adaptive and Learning Systems, vol. 1706 (1992): 244–55.

Loor Paredes, M. “Emerging Paradigms in Music Technology: Valuing Mistakes, Glitches and Uncertainty in the Age of Generative AI and Automation.” AI & Society (2025). https://doi.org/10.1007/s00146-025-02209-w

Mehri, Soroush, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, and Yoshua Bengio. “SampleRNN: An Unconditional End-to-End Neural Audio Generation Model.” arXiv preprint arXiv:1612.07837 (2016).

Schwarz, Diemo. “A System for Data-Driven Concatenative Sound Synthesis.” In Digital Audio Effects (DAFx), 97–102. 2000.

Snape, Joe, and Georgina Born. “Max, music software and the mutual mediation of aesthetics and digital technologies.” In Music and Digital Media: A Planetary Anthropology, edited by Georgina Born. UCL Press, 2022.

Tremblay, P.A., O. Green, G. Roma, J. Bradbury, T. Moore, J. Hart, and A. Harker. The Fluid Corpus Manipulation Toolbox (v.1). Zenodo, 2022. doi:10.5281/zenodo.6834643