Organised by the Sound and Music Computing Group, Aalborg University Copenhagen, Denmark, 15 – 19 May 2017.

Attended by Dr Gareth W. Young.

New Interfaces for Musical Expression (NIME) is a highly respected international conference dedicated to academic research on the development of creative technologies and the role of these technologies in artistic practice. NIME began in 2001 as a workshop at the ACM (Association for Computing Machinery) Conference on Human Factors in Computing Systems (CHI). The workshop subsequently expanded its interests and, since 2002, has been an annual conference that gathers people from all over the world who are interested in developing new technologies and in deliberating, constructing, demonstrating, and investigating the role of those technologies in musical expression and artistic performance. With this global reach, researchers and musicians come together to share their knowledge and most recent work on new interface design and to discuss the various techniques that can be applied in this field. One of the most interesting things about NIME is that it is a big tent: the conference is attended by technical and engineering researchers alongside contemporary artists, all with a common interest in capturing human movement and applying these data to the generation of music. With this diversity, the discussions and presentations span multiple areas, and the research topics cover many aspects of academic enquiry. The keynotes at NIME 2017 were delivered by Chris Chafe (Director of Stanford University’s Center for Computer Research in Music and Acoustics), Dorit Chrysler (New York-based composer, producer, and singer), and Ge Wang (Associate Professor at Stanford University, in the Center for Computer Research in Music and Acoustics).


The 2017 edition of the NIME conference took place in Copenhagen from the 15th to the 19th of May. Several of the papers presented explored concepts and ideas that relate directly to new qualitative techniques for evaluating multi-modal data interactions. Of particular interest to the Building City Dashboard (BCD) project were the conference topics of:

  • Performance experience reports on live performance and composition using novel controllers.
  • Perceptual & cognitive issues in the design of musical controllers and interfaces.
  • Artistic, cultural, and social impact of new performance interfaces.
  • Novel interfaces for sound and music interaction.

These sub-fields of NIME focus on the design and implementation of computer interfaces for sound and music manipulation, a topic directly related to Human-Computer Interaction (HCI) and the various evaluation techniques that can be applied in creative contexts. Through direct interaction with seasoned researchers during workshops, paper and poster sessions, keynotes, and coffee breaks, discussions arose around multi-modal content development and its evaluation across multiple creative platforms. Of the many interesting research projects presented during the conference, a small number have specific relevance to the BCD project.


The first scientific paper of relevance to the project was “Cross-Modal Terrains: Navigating Sonic Space through Haptic Feedback,” by G. Isaac et al. (Arizona State University). The paper describes the iterative development of a haptic Digital Musical Instrument (DMI) through two different approaches to the creation of multi-modal, interactive data. Primarily, the researchers presented an exploration of multi-modal data representations and the transferal of these findings into the musical interaction domain. The paper explores the concept of virtual, textural terrains as a control source for haptic feedback, specifically force feedback. This approach breaks from common practice in audio-haptic research, where physical models within virtual environments are designed to transduce input gestures into sonic outputs. The paper outlines current methodologies with examples from existing projects, describing how terrains are generated from basic software functions and rendered as monochromatic visual representations of data. With techniques such as these, virtual terrains can be generated quickly, allowing rapid re-mapping. The visual terrains presented were navigated by the user with a force-feedback controller (the NovInt Falcon), which received data based on the grayscale value of its location on the terrain. As the user moved the cursor, the level of resistance to movement varied across the terrain, so the virtual terrain was presented as a physical surface for the user to explore. The authors discuss the potential of this multi-modal approach to create an engaging user experience from both the performer’s and the audience’s perspectives, with examples provided throughout.
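The terrain-to-force mapping described above can be sketched in a few lines. The following is a hypothetical illustration (not the authors’ code): it assumes a terrain generated from a simple trigonometric function, whose grayscale value at the cursor’s position is scaled into a resistive force magnitude for a device such as the Falcon.

```python
import math

def terrain(x, y, scale=8.0):
    # Toy textural terrain: a sinusoidal grid rendered as a grayscale
    # value in [0, 1] -- a stand-in for the monochromatic terrain
    # images described in the paper.
    return 0.5 + 0.5 * math.sin(scale * x) * math.sin(scale * y)

def resistance(x, y, max_force=4.0):
    # Map the grayscale value under the cursor to a resistive force
    # magnitude: brighter regions resist movement more than darker ones.
    # max_force is an arbitrary illustrative ceiling.
    return max_force * terrain(x, y)
```

In a real system, `resistance` would be called inside the device’s haptic update loop, with `(x, y)` read from the controller’s handle position on each tick; re-mapping then amounts to swapping in a different `terrain` function or image.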
Future work for the BCD project may involve evaluating these concepts by creating force-feedback profiles that relate to city-specific profiles or visual data representations, and by exploring the surface terrains of city maps. The node-based model may also be re-purposed as an audio-visual illustration of any given city area, presenting multi-modal data to a wider audience. We may also experiment with projecting the terrains onto larger visualization platforms for wider spectator inclusion, so that other audience members can watch a user navigate a virtual space physically, visually, aurally, and, most importantly, digitally. Since the conference, I have been in touch with the paper’s research team (specifically Lauren Hayes, in the School of Arts, Media and Engineering at Arizona State University) and have been directed to Edgar Berdahl’s HSP work, which will be useful for getting started with the Falcon:


The second paper I’d like to talk about is “Exploring Pitch and Timbre through 3D Spaces: Embodied Models in Virtual Reality as a Basis for Performance Systems Design,” by R. Graham et al., who presented and demonstrated interesting collaborative works involving a number of interdisciplinary theorists and practitioners. The paper focuses mainly on three-dimensional environments as an incubator for performance system design. Of particular interest to BCD, however, is that the project makes extensive use of LIDAR data to create immersive performance spaces. The project specifically outlines how tracked gestures can be connected to concepts of embodied cognition, expanding on performative models for pitch and timbre spaces. Although this creative approach to applying geospatial data is perhaps beyond the requirements of the BCD project, it offers insight into the use of large-scale data sets in three-dimensional practices, outlining the potential to present multi-modal data in virtual and augmented reality environments. Since the conference, I have been in contact with the research group, and a methodology for using LIDAR data in Unity is being explored. A video of the performance can be seen at the link below, where the following quote was also taken:

“Disrupt/Construct (2017) is a mixed-reality performance regarding the origin of object and place. An improvising musician explores assumptions about personal memory and the disruption of automatic trust placed in paramnesia. A history of gestural data is recorded from an augmented instrument and mapped to determine a variety of interactions between sounds within a complex mixed-reality scene. The performer has a degree of control over the unfolding virtual environment, modulating the appearance of the scene relative to chosen performance gestures. The audience views a projection of the virtual environment in addition to viewing the performer in the physical performance space. This piece seeks to reposition or re-contextualize performance systems design within the context of virtual reality environments while exploring where a music performance and by extension the human performer may be situated along the Reality-Virtuality Continuum.” – Christopher Manzione.
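On the LIDAR-in-Unity methodology mentioned above, the group’s actual pipeline is still being explored; one plausible pre-processing step (a sketch under assumptions, with a hypothetical function name) is converting raw ASCII point data into the well-documented PLY format, which third-party point-cloud importers for engines such as Unity can read:

```python
def xyz_to_ply(xyz_text):
    # Convert ASCII LIDAR points ("x y z" per line, extra columns
    # ignored) into a minimal ASCII PLY file.
    points = [tuple(float(v) for v in line.split()[:3])
              for line in xyz_text.strip().splitlines() if line.strip()]
    header = ["ply", "format ascii 1.0",
              f"element vertex {len(points)}",
              "property float x", "property float y", "property float z",
              "end_header"]
    body = [f"{x} {y} {z}" for x, y, z in points]
    return "\n".join(header + body) + "\n"
```

The PLY header declares the vertex count and per-vertex properties, so a mesh or point-cloud importer can allocate storage before reading the coordinate lines that follow.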


Overall, the NIME conference presented many different and insightful concepts that inspire the exploration of large-scale data and how it may be presented using a diverse set of creative multimedia platforms. This notion was made even more tangible by the conference’s evening concerts. The live performances were an interesting opportunity to see and hear how the ideas and concepts explored theoretically in the paper sessions sound in action. They included a wide array of artistic practices, including curated robot orchestras, live electroacoustic concerts, and NIME-specific compositions. What was apparent from this variety of applied research was that, through the application and adaptation of new interfaces, emergent domestic technologies can be explored for use in many different contexts. This variety can also inform leading concepts of creativity and the conceptualization of real-world applications. For example, with the recent resurgence of VR technologies, new platforms for engaging with data creatively can now be explored with relative ease, as was demonstrated multiple times throughout the NIME demonstration sessions. Additionally, the adaptation of existing technology should now be considered a common approach to designing new interactive interfaces, as was demonstrated in the scientific research sessions. The visual affordances of VR give researchers new methods for augmenting data visualizations, extending not only the appearance of data but also the capabilities of the technology for multi-modal data exploration. In this way, we may present data via a mobile Mixed Reality (MR) application that augments the physical world, or through an immersive Virtual Environment (VE) such as those created through LIDAR data visualization. These approaches can be combined to exploit the different affordances of each and move towards multi-modal, real-time data interactions.
Furthermore, by augmenting our physical surroundings with 3D visual cues that indicate correct geospatial positioning, the physical world can act as a domain-specific surface for the resulting MR environment. The effectiveness of each platform will be measured by analyzing users’ progress and experiences while training with or without full immersion. We may also evaluate the usability of the interface through heuristic evaluations, which may be based on a newly proposed set of guidelines designed for VR data visualization environments.


It must be acknowledged that, at first, BCD-specific concepts and ideas cannot always be directly translated from the content and practices presented at NIME, as the conference represents a community engaged in self-reflection and growth within a relatively new and quite specific academic domain. However, there is ample precedent for the exploration and discussion of the multiple design approaches that can be borrowed and applied in building city dashboards and exploring smart-city technologies: for example, the exploration of user performance, input metaphors and their capture with new technologies, and the large body of peer-reviewed scientific research papers (included in the NIME Reader) that the project may explore for other shared practices. Further common interests include the permanency of specific technologies applied in practice-based research, how practical research can be effectively disseminated in the broader academic community, the organology of new interfaces, and the importance of aesthetics in design processes. In the context of these crossover discussions, the keynotes also provided a considerable source of reflection on the liminal spaces that exist between academic topics, starting with Ge Wang’s “The ME in NIME” and ending with Chris Chafe’s “The Modeling Shift.”

Ge Wang – “The ME in NIME”

This keynote (and “anti-keynote”) is about a lot of things. Music. Expression. Design. Technology. The Future. Art. Aristotle. HCI. Food. Tenure Adventures. Purpose. And above all else, NIME — and its search for itself.

Chris Chafe – “The Modeling Shift”

Music from deep time to nowadays with deep networks charts the growth of artificial tools for sound and music creation. Some milestone examples illustrate how models in the digital age are more than that: they become the things we use to make and to appreciate music. Instruments, venue acoustics, composers, and listening preferences are modelling targets and have experienced this shift. The art of music has always been an early adopter of new tools in a given technological age and is often a domain which pushes the extremes of complexity. Today’s digital instruments, automata, and assistants are no exception. That complexity can reduce to simplicity through application and practice, a premise which will be explored by touching on the existence of characters in music.

The next NIME conference will be co-hosted by Virginia Tech and the University of Virginia in 2018.

