Vol. 43 Issue 2-3 Reviews
New Interfaces for Musical Expression (NIME) Conference

NIME took place 3-6 June 2019 at the Universidade Federal do Rio Grande do Sul in Porto Alegre, Brazil. For more information on NIME visit: https://www.ufrgs.br/nime2019/

Reviewed by Margaret Schedel, Stony Brook University; Sonya Yuditskaya, New York University; and Susan E. Green, Studio 7

The New Interfaces for Musical Expression (NIME) Conference was held in the beautiful city of Porto Alegre, the southernmost capital of Brazil. The organizers, including Rodrigo Schramm, Anna Xambó Sedó, Isabel Nogueira, and Federico Visi, did a fantastic job preparing for the conference and communicating complex logistics to the participants. During the conference, numerous volunteers in easily identifiable blue shirts made sure that participants felt taken care of. The opening reception was a celebration of Brazilian culture and music, including an electric berimbau player who was also one of the organizers of the Open Jack Night at the end of the conference. The atmosphere was infused with tropicália tunes and the sounds of the electric berimbau, marinated in three flavors of caipirinha and the ever-present Aperol spritz.

Within the broader scope of NIME’s gesture-(re)action-perception themes, this year’s trends explored virtual instruments, machine learning, mobile apps, XR (an umbrella term covering VR, AR, and other realities yet to come), and a commitment to accessibility. Overall, regarding paper presentations, we would encourage presenters to deliver examples with video and to front-load demonstrations, inviting audience engagement early in the experience.

Concerts were held in a beautiful hall on campus and were immaculately produced. The timing between sets was kept to a minimum, and the lighting helped define areas on the stage. The first night the sound was a bit harsh in the high and mid frequencies; by the second night the system was balanced, and the sound was clean and powerful. The MC introduced each piece in Portuguese and English while making concertgoers feel welcome with sharp wit and gregarious humor. It was nice to see the wider community involvement, which spoke to the robustness of NIME community support at UFRGS in Porto Alegre. The usual vast array of nations was represented by attendees this year, with a significant bump in conference participants from South American countries as compared to previous years. The International Computer Music Conference is in Chile next year; hopefully it will also reflect the local community, continuing the precedent set by NIME 2019. In times of constrained budgets, conference organizers should think of creative ways to encourage local engagement as well as remote participation for international populations.

There was an incredible amount of innovation coming out of the NIME community this year, reflected in the many papers, performances, demos, and installations at the conference; we are sorry that we could not include them all here. Our review is organized by themes and trends observed at the conference.

Beyond Accessibility
A great model to follow for active participation can be found in this year’s paper “NIME Prototyping in Teams” by Anna Xambó, Sigurd Saue, Alexander Refsum Jensenius, Robin Støckert, and Øyvind Brandtsegg. This was a four-day workshop in which participants worked in remote teams seven hours a day. Sounds were sent between locations, and there were observers and facilitators at each site. Issues arose with the portal used to connect the spaces, but the students’ confidence in prototyping improved despite the technical problems. It would be wonderful to have some of the NIME workshops support remote participation.

One of the observed trends was the idea of openness and access with inclusivity in education and performance. There was a commitment to human diversity as well as device and OS diversity. Many programs were designed to be device agnostic, letting users customize their own way of interacting with sound. There were wide walls to enable all kinds of users to be involved with NIMEs, and high ceilings to promote performance virtuosity.

This year, assistive NIMEs were categorized based on the materials or techniques they used rather than put into a session set aside specifically for accessibility. Assistive-tech sessions are often not as well attended because only a subset of NIME attendees have the interest, funding, or support to work with humans in a medical context. The program schedulers’ care in distributing these NIMEs across other sessions was much appreciated, and future organizers are encouraged to do the same. A particularly successful paper was Lucas, Ortiz, and Schroeder’s “Bespoke Design for Inclusive Music: The Challenges of Evaluation.” The paper tackled the question of how to measure the success of one’s assistive technology in a highly specialized situation. They concluded that technology has to be highly effective and enabling, but from whose perspective should the evaluation be made? They decided that the subject should make the evaluation in their own voice: “nothing about us, without us.” The study involved making more accessible pots and knobs for an individual with multiple sclerosis, and addressed the ethics of working with disabled populations on assistive technology papers. It was nice to see an inclusivity of diverse backgrounds and abilities peppered throughout the presentations. As usual, however, this NIME was not gender balanced, though the organizers took the ethical step of making sure that the session chairs were.

A workshop on the ethics of NIME, and how the NIME community should work with people, animals, and even single-celled organisms, was scheduled on the first day. Eduardo Miranda’s keynote address focused on his current research on bio-computing using Physarum polycephalum (a kind of slime mold) to create music. These creatures have non-linear memory and can be used as memristors (i.e., non-linear two-terminal electrical components relating electric charge and magnetic flux linkage). Miranda argued that “bio-tech is the new digital.” He illustrated how Physarum polycephalum can serve as a voltage-control mechanism, process information, and then give feedback to the system in a musical environment. Miranda called this “biocomputer music” and showed an excerpt of a BBC performance at the Peninsula Arts Contemporary Music Festival in Plymouth, UK, in 2015. He raised questions about the ethics behind such bio-art and bio-labor (these little bio-memristors die very quickly): should we treat them just as “electronic components,” as simple organisms, or as creatures that have lives, or even more? As with the current discourse on robotic and AI labor, this is worth contemplating. These questions of ethics tie back into the conversation surrounding accessibility: do research subjects have access to the choices that determine their lived world?

The human body itself became a new interface for musical expression in the installation Somacoustics by Marcos Suran Bomba and Palle Dahlstedt. Audiences were encouraged to “play” an artist, blindfolded by virtual reality goggles (a powerful image, perhaps commenting on the veil of blindness caused by our addiction to technology), by moving his compliant body around like a puppet’s. The artist’s physical body was made vulnerable, a controller for a NIME, its motions tracked and translated into synthesized four-channel sound surrounding the participants. Through social-physical and aural interactions, participants played his instrument-body in a mutual dance of trust mediated by machines.

Trust in the body and the intuitive, embodied practice of “making” featured centrally in “Material Embodiments of Electroacoustic Music: An Experimental Workshop Study” by Enrique Tomas and Thomas Gorbach of the Tangible Music Lab at the University of Art and Design in Linz, Austria. The authors held a workshop in which participants played with clay to make mock-ups of imaginary musical instruments expressing their cognitive mappings of sound to form. Clay is a low-cost, easily accessible, and malleable material, perhaps one of the most versatile for prototyping. It was refreshing to be reminded of the simple power of this material at a conference dealing with so many high-tech concepts. The resulting objects were analyzed anthropologically. An insight from the paper concerned the role of materiality in the design process, and our ability as researchers to leverage our understanding of material engagement while practicing quantitative research.

Device accessibility was showcased in a number of papers, including “Practical Considerations for MIDI Over Bluetooth Low Energy (BLE) as a Wireless Interface” by Johnty Wang, Axel Mulder, and Marcelo Wanderley. This was primarily a technical paper testing BLE’s performance with MIDI as a wireless interface for sensor or controller data and inter-module communication in the context of building interactive digital systems. The comparative experimental results showed that BLE MIDI is comparable in performance to Wi-Fi implementations, with end-to-end (sensor input to audio output) latencies under 10 ms under certain conditions. The authors believe this is a big step for BLE MIDI, though other parameters remain to be tested, such as bandwidth, multiple devices, range, stability, and power consumption.
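As a back-of-the-envelope companion to the latency discussion, the Python sketch below (ours, not the authors’ test harness) shows how one might summarize a batch of per-message end-to-end latency measurements: mean, jitter, worst case, and the fraction under the 10 ms threshold. The distribution parameters are invented for illustration; a real evaluation would log timestamps from actual sensor-to-audio round trips.

```python
import random
import statistics

def simulate_latency_trial(n_messages=1000, base_ms=7.0, jitter_ms=2.0):
    """Simulate per-message end-to-end latencies (sensor input to audio
    output) for a wireless MIDI link; values are illustrative, not measured."""
    return [max(0.0, random.gauss(base_ms, jitter_ms)) for _ in range(n_messages)]

def summarize(latencies_ms):
    """Report the statistics one would quote when comparing transports."""
    return {
        "mean_ms": statistics.mean(latencies_ms),
        "stdev_ms": statistics.stdev(latencies_ms),
        "worst_ms": max(latencies_ms),
        "under_10ms_pct": 100.0 * sum(l < 10.0 for l in latencies_ms) / len(latencies_ms),
    }
```

The “under 10 ms” percentage matters as much as the mean: a link whose average latency is low but whose tail is long will still feel unreliable to a performer.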

Vesa Petri Norilo presented “Veneer: Visual and Touch-Based Programming for Audio,” a music DSP language designed to be relatively easy to use. It allows the user to grow as a programmer, which can be seen as accessibility in terms of learning to program music. Featuring multi-rate DSP, the language is deterministic at runtime, has no dynamic memory, and offers zero-cost abstractions. As such it seems like a promising language for writing music. Veneer is built in Clojure and is compiled and run in the browser, which makes it usable on low-cost computers like Chromebooks and tablets, making it financially accessible. The programming environment has a multi-touch UI that is hyper-adaptable for a world where internet usage happens primarily on smartphones and tablets. The nodes in this graphical programming language can be disconnected with gestures such as a shake, menus flower off objects and touches, and sub-patches open in browser tabs, creating an intuitive and expressive language.

Bertrand Petit and Manuel Serrano added a focus on user testing with school groups to their script-based language in “Composing and Executing Interactive Music.” Using the hiphop.js language, “Skini” is a platform for composing and producing live performances in which the audience participates using connected devices (smartphones, tablets, PCs, etc.), facilitating a score to be performed by the audience. It is simple in that instruments play one pattern at a time, while the multiplier comes in the form of group use. The platform is implemented in hop.js, with the user interface and automation in hiphop.js. The authors found this language to be popular with kids in the south of France, where they are based. The power of the system lies in the management of group accessibility to portions of a live score or performance, thereby allowing many people to play while maintaining the artistic integrity of a cohesive composition.

Machine Learning
The main group of reviewers for this article met while taking Charles Patrick Martin’s machine learning workshop. The workshop was very well attended, comprehensive, and accessible. It used the online platform Google Colab for machine learning and running Python scripts, and participants were able to get things running quickly thanks to demo scripts and Martin’s clear, effective, and efficient planning. “Generating Convincing Harmony Parts With BLSTM Network,” a paper by Andrei Faitas, Synne Engdahl Baumann, Torgrim Rudland Næss, Jim Torresen, and Charles Patrick Martin, applied the same technology at a much higher level. The authors described how they created a long short-term memory neural network that takes a melody as input and produces harmonic output in the style of a Bach chorale. The chorales were quite pleasant to listen to, and the surveyed audience enjoyed them, especially the edge cases. The paper presented a search to “generate convincing music via deep neural networks... One part of this challenge is the problem of generating convincing accompaniment parts to a given melody, as could be used in an automatic accompaniment system. Despite much progress in this area, systems that can automatically learn to generate interesting sounding, as well as harmonically plausible, accompanying melodies remain somewhat elusive.” To generate the chorales they compared an old standard, the unidirectional long short-term memory (LSTM) architecture, with a bidirectional LSTM, both successfully trained to produce a sequence based on the given input. Study participants preferred the bidirectional model by a significant margin.

Another paper using NNs was “Small Dynamic Neural Networks For Gesture Classification With The Rulers (A Digital Musical Instrument)” by Vanessa Yaremchuk, Carolina Brum Medeiros, and Marcelo Wanderley. This was an experiment in determining best practices with NNs for gesture classification. It demonstrated that: 1) dynamic networks outperform feedforward networks for sensor-based gesture classification; 2) a small network can handle a problem of this level of complexity, and recurrent networks of this size are fast enough for real-time applications of this type; and 3) it is important to train multiple instances of each network architecture and select the best-performing one from within that set. It was a thorough paper with ramifications for future research on how to train neural networks.
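The paper’s first finding can be illustrated with a minimal Python example (ours, not the authors’ code): two mirror-image sensor sweeps are indistinguishable to an order-blind, feedforward-style summary of the input window, but trivially separated by a stateful, recurrent-style accumulator that remembers direction.

```python
# Two "gestures": mirror-image sensor sweeps with identical value
# histograms, so any model that only sees summary statistics fails.
swipe_right = [0.0, 0.25, 0.5, 0.75, 1.0]
swipe_left = list(reversed(swipe_right))

def static_features(seq):
    """Feedforward-style input: order-blind summary of the window."""
    return (sum(seq) / len(seq), min(seq), max(seq))

def dynamic_feature(seq):
    """Recurrent-style state: accumulate signed first differences,
    so the direction of motion survives in the running state."""
    state = 0.0
    for prev, cur in zip(seq, seq[1:]):
        state += cur - prev
    return state
```

The static summaries of the two sweeps are identical, while the dynamic feature lands at +1.0 for one and -1.0 for the other; that is the essence of why temporal state helps with sensor-based gesture data.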

Perhaps one of the most evocative implementations of machine learning was “T-Voks: Controlling Singing and Speaking Synthesis With the Theremin” by Xiao Xiao, Grégoire Locqueville, Christophe d’Alessandro, and Boris Doval. Using a theremin for voice synthesis, similar to vocoding, T-Voks can control pitch, duration, vocal effect, timbre, and whether speech or song is voiced or unvoiced. The result is very funny and novel, drawing on the love and history that the NIME community has with the theremin. With machine learning used for vocal synthesis, we heard an eerie yet familiar machine voice emerging from one of the oldest electronic musical instruments in existence. While some aspects of the sound were pre-synthesized and sequenced (notably the consonants), the system produced a very expressive output. This project was particularly successful because it had also been presented as a concert the day before, so the audience already had experiential familiarity with it. After the paper presentation, the authors walked the project down to the demo room, where participants could ask further questions in an expanded, interactive question-and-answer session. It may be too much to expect that every paper also come with a demo or a concert, but the practical demonstrations, especially when front-loaded, did a lot toward cementing the reality of the presentation.

“From Mondrian to Modular Synth: Rendering NIME using Generative Adversarial Networks” by Akito van Troyer and Rébecca Kleinberger discussed how the research team used machine learning to teach their software to generate plausible interfaces based on Eurorack, Moog, Korg, and other popular synthesizers. They trained the software on images of both musical instruments and stylistically distinctive art from the MIT image library, combining symbolic and sub-symbolic mappings. While some of the generated instrument images were unsuccessful, a number were completely beautiful, and many participants said they wanted to buy these hybrid, chimeric art/synthesizers.

Another paper regarding conceptual design for an instrument from MIT was “Grain Prism: Hieroglyphic Interface for Granular Sampling” by Gabriela Bila Advincula, Don Derek Haddad, and Kent Larson. The design is still in development, but they showed a gorgeous small handheld black pyramid with strange, golden hieroglyphic-type markings that somehow referenced circuitry and music without being overtly understandable. We were enticed and want to know more.

“The Slowqin: An Interdisciplinary Approach To Reinventing The Guqin” was an instrument-focused paper by Echo Ho and Alberto de Campo that showcased an augmented guqin with electronic complements and a microprocessor that can input, control, and map many sound synthesis and processing effects through the performer’s gestures. We found one of the most fascinating parts of the talk to be the pairing of ancient Chinese philosophy and the history of the guqin with cutting-edge engineering. The first author explained how the ancient guqin notation system and finger techniques were designed to be seamlessly aligned with the phenomenology of Mother Nature. During lunch, the author revealed that her next step in this research would be using deep learning to develop a neural network that can learn and interpret the connections between the music’s phenomenology and the finger techniques’ meaning-making. Ho showed a video of herself performing in environments ranging from busy traffic circles to quiet forests, highlighting the versatility of the instrument.

The above systems were built to enable virtuosic performance of complex electronic systems, while “Adaptive Multimodal Music Learning via Interactive Haptic Instrument” by Yian Zhang, Yinmiao Li, Daniel Chin, and Gus Xia presented the design of an interactive haptic flute that aims to accelerate the learning process of beginner flute students. A “clutch mechanism” can turn haptic feedback on or off for advanced learners. This was presented to great comedic effect: a human-machine performance of a robot forcing a human to learn how to play music through physical force.

“Women’s Labor: Creating NIMEs from Domestic Tools” by Margaret Schedel, Jocelyn Ho, and Matthew Blessing showcased a coal iron embedded with sensors that uses machine learning to make live music. During the Victorian era, feminine instruments were traditionally smaller and made to be played in living rooms instead of concert halls; the very portable violin was considered too coarse and grotesque for a lady to pursue. In this project, the authors made visible and audible the tools of ordinary household work. Oftentimes the tools of household labor are portable but secreted within the home, rendered invisible, as is the labor made with them. The project drew on material engagement theory and used machine learning, via the Wekinator, to map the pressure points, both physical and psychosomatic, on traditional tools of women’s labor. This paper was a factor in determining the Pamela Z innovation award, given to Margaret Schedel at the end of the conference.
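Wekinator-style mapping is, at its core, supervised learning over recorded (sensor, sound-parameter) example pairs. The Python sketch below is our schematic of that idea using a simple inverse-distance-weighted nearest-neighbor regressor; the example readings and parameter values are invented, and the project’s actual trained models may differ.

```python
import math

# Schematic of a Wekinator-style supervised mapping: record example pairs
# of (sensor reading, synthesis parameter), then map new readings by
# interpolating between the nearest recorded examples.
examples = [
    # (pressure-sensor reading pair, synthesis parameter, e.g. cutoff in Hz)
    ((0.1, 0.0), 200.0),
    ((0.5, 0.2), 800.0),
    ((0.9, 0.8), 2400.0),
]

def map_gesture(sensor, k=2):
    """Inverse-distance-weighted average of the k nearest training examples."""
    nearest = sorted((math.dist(sensor, x), y) for x, y in examples)[:k]
    if nearest[0][0] == 0.0:                # exact match to a training example
        return nearest[0][1]
    weights = [(1.0 / d, y) for d, y in nearest]
    total = sum(w for w, _ in weights)
    return sum(w * y for w, y in weights) / total
```

The appeal of this approach for NIME builders is that the mapping is defined by demonstration, not by code: pressing the iron in a new way and recording the desired sound adds a training pair rather than a branch in a program.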

We offer future organizers the following observation: it is very rewarding to see a single NIME in multiple contexts. T-Voks in particular touched upon the most categories: we first saw it in a concert, the following day it was presented as a paper, and finally we got to jam with it at Open Jack Night, which showed that the NIME works in casual settings as well as in exalted intellectual discussions. Natacha Lamounier’s dress was presented as a demo and in a concert. It was great to see the dress up close, interact with the sensor, see the fabric react, and have a chance to talk with its developer. Seeing a NIME as an audience member is totally different from seeing it up close, trying it, touching it, and interacting with it; experiencing the technology in person also deepens appreciation for the virtuosity of the person who made or learned it. In this same way the Open Jack session was one of the most rewarding performances, because we got to experience NIMEs up close in an improvisational setting after seeing them presented in concerts and installations throughout the week.

Gesture-(Re)Action-Perception
The final category that stood out during the conference was what we will call Gesture-(Re)Action-Perception, specifically when expressed by connecting things to other things, which can be thought of as a meta-category of NIME. Common themes included: a performer in a wearable or a performer on a controller, self-standing objects—almost sculptural in nature that were played by their creators, and objects that needed to be activated with the body of the performer to take shape in an embodied space.

“Bendit_I/O: A System for Networked Performance of Circuit-Bent Devices” by Anthony T. Marasco and Edgar Berdahl focused on designing an innovative input and control system, “Bendit_I/O,” that wirelessly enables circuit bending in distributed musical practice. The system contains a board, a server, and a custom-made application that interfaces with the board. The latency is significant, but the concept is innovative. We were pleased that the custom electronics of circuit-bent portable CD players with dangling wires made it through TSA security. This exploration of the “ready-made” using accessible tools seemed related to Ausynthar, by Pedro Pablo Lucas, an Android app that used computer vision to create a lightweight, portable augmented reality setup.

The demo and paper “Separating Sound From Source: Sonic Transformation of The Violin Through Electrodynamic Pickups and Acoustic Actuation,” presented by Laurel Pardue (and written with Kurijn Buys, Dan Overholt, Andrew P. McPherson, and Michael Edinger), had great appeal in terms of the excellent sound-world it conjured as well as its level of craftsmanship. The already evocative gestures of playing a violin were combined with clever technological augmentation to bring a strong and novel instrument into being. Unlike most actuated acoustic instruments, the physical inputs of this instrument are acoustically separated from the resonating body. The team uses the string itself as the wire carrying the induced voltage, allowing any variety of samples to be processed through the strings and manipulated through traditional violin techniques.

The most prominent celebration of Gesture-(Re)Action-Perception occurred during the concerts. The first piece of the conference, Gira by João Nogueira Tragtenberg and Filipe Calegario, was the perfect choice for a dramatic opening to the series of concerts. The lights went out on the stage and a spot slowly came on at the side of it. A sole player wearing a long canvas wrap skirt walked into the light and sat down in the style of a flamenco guitar player. His NIME had the form factor of an oud covered in buttons and knobs. He began to play arpeggios on the instrument, which was linked to a Prophet synthesizer. From the description on the NIME website: “Pandivá is an instrument inspired by the gestures of a trombone and a Brazilian tambourine from piston-like controllers and 12 buttons grouped in three sections of a circle. The pistons select a set of notes, and each button plays each of the notes from the set. It was designed in a similar way to a guitar, where one hand selects the chords, and the other excites each note of the chord in a rhythmic pattern. Instead of complex guitar finger dispositions, the 4 piston controllers allow 16 different combinations and buttons afford a tambourine rhythmic gesture to play them.” It was clear that the performer had complete control over his instrument. Like Yuditskaya’s circles, this performance linked light, sound, and dance. Eight sodium-orange PAR cans arranged in a circle slowly cycled in tune with the music. In the middle a dancer spun. He wore a gourd on his chest and forearm and was dressed in the same wrap skirt as the musician. He turned clockwise, like a Sufi dancer, and there was something reminiscent of Hapkido in his movements. The lights created a zoetrope effect behind the dancer, throwing time dilation into his spin; sometimes the patterns changed and clashed with the dancer’s position and the arpeggiation, and sometimes it felt perfectly in tune. Towards the end of the dance the dancer emitted a guttural scream as the lights clicked faster and faster, or maybe it just seemed that way in the magical vaporwave Sufi world the performance created.

VERSE N 1 by Luiz Naveda and Natacha Lamounier, performed on the first night of concerts, featured servomotors pushing the dancer like actuators. It ended as it began, in synchronized PAR-can glory, with a dancer wearing a wearable in the middle of the stage and an audio-controlling performer at the side, this time on the other side of the stage. The dancer wore a costume equipped with two servomotors, one at her solar plexus and one at her back. A single spotlight on the controlling performer cast a huge shadow of a figure hunched over a control board. The dancer emerged from that shadow, invisible but for a very faint red indicator LED glowing through her flesh-colored tunic. The performance was a stunning interplay of light and dark, live dance and automation, humans and their shadows.

While the first night celebrated dance and the body, the second night of concerts went smaller, focusing on gesture. Colligation by James Dooley used an armband sensor with synthesized sound and consistent mapping. The sound was noticeably noisier towards the back, and the left hand made purer tones. The piece had welcome silence between phrases, and utilized panning that was not directly related to the Cartesian position of the hands. The third performance, featuring a self-built instrument by Jiyun Park, had a cellist enclosed in architecture, with a noisy bow, low frequencies, and pedals: gestures turned into architecture and a resonating hull. The third night moved to a new, more casual venue with a multichannel system. Romano Gomez’s performance set the tone for the rest of the closing night, exploring a great variety of technical and aesthetic approaches. All three concert pieces that evening seemed to prepare the audience for the more informal Open Jack session at the end of the night.

Herstory
Marcelo Wanderley’s keynote address surveyed past and proto-NIME conferences and technologies, such as the early ICMCs of the 1970s and the MIDI boom of the 1980s. He cited a paper by J. Piringer that categorized 100 interfaces at the first NIME conference. Early NIME workshops marshaled the 2008 transition from instruments to papers, and brought ICMC and CHI together. Wanderley’s conclusion was that NIME is a dynamic field for research because it integrates instrument design, art, science, and engineering in a truly interdisciplinary fashion. However, he warned that while human-computer interaction models can be useful for defining musical interaction, contexts and instruments need to be responsible and reliable. Wanderley was a coauthor on a paper revisiting an older NIME: “Rebuilding and Reinterpreting a Digital Musical Instrument - The Sponge” by Ajin Jiji Tom, Harish Jayanth Venkatesan, Ivan Franco, and Marcelo Wanderley. This research provided an invaluable perspective on rebuilding old digital musical instruments, and on re-evaluating and re-interpreting the design of an original digital musical instrument with new materials. The older material was not flexible enough to be twisted, stretched, and pressed, and the original was a bit clumsy in size and fragile in reliability. It is worth revisiting the ideas of the past and updating them.

“Reanimating the Readymade” by Peter Bussigel and Stephan Moore was a paper that stood out because it engaged with a longer history of art. They started out by mentioning objects with hidden noise, and then re-framed the history of readymades as part of the legacy of sound art. Their paper began with Marcel Duchamp’s Fountain (it is important to also note the contributions of the Baroness Elsa von Freytag-Loringhoven), cycled through John Cage’s Water Walk, Carolee Schneemann’s Noise Bodies, David Tudor’s Rainforest, Transmogrifier (a workshop of improvisation), and Moore’s own Chorus for Untrained Operator, and ended with a quote from Alex Galloway’s The Interface Effect: “Offering a counter-aesthetic in the face of such systemic efficiency is the first step towards building a poetics for it.” A useful phrase indeed in the context of a very scientific conference about music, which is, after all, an art form.

Sofy Yuditskaya’s installation Markov Magic Circles was a celebration of female power. The magic circles in this project are a digital interpretation of the magic circles in Gogol’s “Viy,” a fable about a witch getting the best of a seminary student who abused his place in society. The piece used a Markov prediction algorithm to flip virtual coins that activate three large LED rings. Based on the three salt rings in Gogol’s “Viy,” the installation creates the atmosphere of an invocation, summoning unknown presences with pattern and repetition, all while trapping the listener in a brutal soundscape, an extended stay in self-determining probability cycles. The conceptual strength of directly amplifying the electromagnetic transduction of the LEDs as sound sources made the link between image and sound very clear, and while the curatorial choice of placing it in a building other than the conference venue was at first surprising, walking over with fellow attendees created a sense of adventure and camaraderie.
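As described, the piece’s control logic amounts to a small Markov process gating each ring. The Python sketch below is our conceptual reconstruction, not the artist’s code, and the transition probabilities are invented; it shows how “flipping virtual coins” whose bias depends on the current state produces the patterned, self-determining cycles the installation traps the listener in.

```python
import random

# Conceptual sketch: a two-state Markov chain per LED ring, advanced by
# biased virtual coin flips. Probabilities are invented for illustration.
TRANSITIONS = {       # P(ring is "on" next step), given its current state
    "on": 0.6,        # an active ring tends to stay lit
    "off": 0.3,       # an inactive ring occasionally ignites
}

def step(rings):
    """Advance all three rings one Markov step with one coin flip each."""
    return ["on" if random.random() < TRANSITIONS[s] else "off" for s in rings]

def run(steps=8, seed=None):
    """Run the chain from all-dark and return the full state history."""
    if seed is not None:
        random.seed(seed)
    rings = ["off", "off", "off"]
    history = [rings]
    for _ in range(steps):
        rings = step(rings)
        history.append(rings)
    return history
```

Because the next flip’s bias depends only on the present state, the rings settle into statistically stable but never exactly repeating cycles, which is what gives pattern-and-repetition pieces like this their invocatory quality.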

Ana Maria Romano Gomez delivered her keynote address as a concert presentation, combining prose statements with sampling that was poetically, affectively, and effectively used, alongside her compositional techniques. As the final keynote of the conference it was fitting that it was a concert; as musicians we speak a language that is not the language of words. It felt important to have a keynote speaker who came from a minority within the NIME community, but we are sad that she was the only one. The conference closed on her music and lyrics, delivered in a warehouse-like building evoking an underground space. The music was highly danceable and complex while also taking us on a journey through the history of women’s rights.

We ask ourselves whether this choice of keynote represents a self-confirming bias. The performance let us come to our own conclusions, whereas some studies presented in papers rested on sample sizes too small to support strong conclusions. Spatialization mattered most in Romano Gomez’s piece, since it was the only actively multichannel piece in the whole concert. Hers was an interactive, psychoacoustically informed, spatially rich piece using four channels: two were located where the stage began, and the other two were in the rear corners, asymmetrically placed near the curtains separating the stage from the main bar area. There was a decent amount of noise in the background, but it did not mask or greatly affect the performances. The sound synthesis in her piece was rich and high quality. Each part of the 28-minute performance blended into the next, and the transitions were very smooth. She used elements from natural sounds, like a bass turning into heartbeats and then transforming into train rumblings, women’s speech, protest slogans, and moaning sounds. There was a great balance between the use of natural, synthesized, and processed sounds. The piece was musically and politically powerful, and her presence on the stage was interesting to watch. She did not move or gesture expressively to any great degree, but her interaction with the interface and the sound was observable; you could see the tension in her posture behind the laptop as she played sirens, alarms, and drone sounds, adding a compelling visual component to an altogether captivating performance.

There was camaraderie on the final night, but the authors also experienced some cognitive dissonance. There was more interaction that night because of the casual party atmosphere, and the conference cohort had an opportunity to feel part of the town instead of the more formal atmosphere of the university. In the future we suggest that conference organizers try to integrate these more informal community moments into the conference as a whole, so participants can meet each other in settings undifferentiated by the role they play in the conference hierarchy. We often relegate these more colloquial “off” concerts to late-night events, marking the difference between the 9 a.m. paper crowd and the late-night crew. It was really great to see the blue-shirted students who had been helping all week present in their street clothes, with their friends and partners. We had a similar moment at the beginning of the conference, listening to electric-berimbau-infused tropicália jazz while enjoying drinks in the beautiful lobby of the engineering building. As a counterpoint to the structured proceedings, this open-ended time was a gift: a chance to explore new interactions with colleagues from other lands we didn’t know we had.

Even though the concert hall was packed for the final night, particularly during Romano Gomez’s keynote performance, the setting of Agulha’s stage gave the audience opportunities to move around, interact, and experience her performance in different ways. The formal concert was followed by the traditional NIME closing, the Open Jack Session, which as usual was one of the most compelling and fun performances of the conference. Seeing all forms of DMIs performed and improvised together for hours was very motivating and fit this year’s theme. Keeping track of who performed with which instrument was challenging, but to the extent that we could follow individual performances, Xiao Xiao’s theremin solo improvisations, along with Pandemonium Trio’s synthesizers and Federico Visi’s Myo band, were among the highlights of the jam session. Later on, Laurel Pardue joined the session with her violin controller and bubble physical model, while Sofy Yuditskaya and Stephan Moore played along on laptops. It was a unique mix of instruments, performers, and sounds, not only on the stage but also in other corners of the bar, where conference attendees were jamming on the piano and singing along. It was the perfect ending to a well organized, perfectly produced, intellectually stimulating NIME conference.

Edited with contributions from Doga Cavdir, Hannah Wolfe, and Jiayue Cecilia Wu