Research
Research publications, presentations, and dissertations are categorised into the following research areas:
- collaborative research, trans-disciplinary research, and artistic practice as research
- digital musicology and music analysis
- sound synthesis
- sound spatialisation
- digital music notation
- teaching and learning
Collaborative Research, Trans-Disciplinary Research, and Artistic Practice as Research
James, S. (2020). From Concept to Realisation: trans-disciplinary and practice-based reflections on the development and management of electronics in an electroacoustic ensemble. [in progress]

Devenish, L. & James, S. (2019). Composer-Performer Collaboration in the Development of Kinabuhi | Kamatayon for Percussion and Electronics. Soundscripts 6(1). Retrieved from https://ro.ecu.edu.au/soundscripts/vol6/iss1/9/
This paper provides an outline of the collaborative approach taken in the creation of the electroacoustic percussion work Kinabuhi | Kamatayon (2015), written by Stuart James for performance by Louise Devenish. Scored for eleven Indonesian bossed gongs and electronics, the work involved creative and systematic exploration of various percussive and electronic techniques with the primary aim of re-contextualising these instruments. The paper offers an overview of the collaboration process with percussionist Louise Devenish and how these techniques were used in the work, including discussion of the performance practices developed and a notation system suited to effectively executing these compositional ideas.
Vickery, L., Devenish, L., James, S. & Hope, C. (2017). Expanded Percussion Notation in recent works by Cat Hope, Stuart James and Lindsay Vickery. Contemporary Music Review 36(1-2), 15-35. Retrieved from https://www.tandfonline.com/doi/full/10.1080/07494467.2017.1371879
The timbral palette employed by composers and performers has expanded exponentially since the end of the common practice period. Ideological shifts and technological advances, including the embrace of noise in Futurism, and of all sounds in Indeterminacy, Musique Concrète, Musique Concrète Instrumentale, Free Improvisation and Spectral instrumental synthesis, have driven the art towards increasingly detailed sonic exploration. At the same time, the adoption of these practices has undermined the domination of the rigid pitch and rhythmic grids of equal temperament and metrical subdivision found in traditional notation. These transformations have particularly influenced the notation of percussion music, where the definitions of 'instrument' and 'performance technique' have become uniquely broad. This paper discusses the percussion notation of two prominent Western Australian composers and the diverse approaches taken to the specification of timbral, improvisatory and coordinative characteristics. The compositional and performative aspects of their notational innovations, ideologies and technologies are examined using examples from works by Lindsay Vickery (The Miracle of the Rose [2015], InterXection [2002], The Semantics of Redaction [2014] and Lyrebird [2014]) and works by Stuart James (Particle I [2011] and Kinabuhi | Kamatayon [2015]).
Devenish, L., & James, S. (2016). Greater Than The Sum of Its Parts: Balinese Gamelan Instruments, Electronics and Composer-Performer Collaboration. Presentation at the Musicology Australia MSA Conference, Adelaide, Australia.
Composer-performer collaboration has played a vital role in the development of solo percussion works since the 1950s. The definition of a 'solo percussion work' is very broad, encompassing a wide range of instruments and performance practices. The point of departure in creating a new solo percussion work is highly variable (unlike the point of departure in creating a new solo flute work). Thus, in the composition of a solo percussion work, composers are often embarking on a work for a particular percussionist and their unique instrumentarium, bringing composer-performer collaboration to the fore from the outset. Such collaboration is particularly fruitful in the development of work exploring instruments or objects not commonly included in every contemporary or orchestral percussion studio, and in the development of new performance practices and notational systems. Kinabuhi | Kamatayon (2015) is a solo percussion work that brings traditional Balinese instruments together with contemporary percussion techniques and experimental electronics. This combination is an extension of the performer's and the composer's respective research interests in Balinese gamelan instruments and spectral music, and collaboration was required in order to effectively unify elements of each. Key areas of interest included exploration of the pitch relationships between different instruments, development of new acoustic sounds through extended techniques, and further expansion of the sonic palette by incorporating electronics. This presentation explores the nature of this collaborative research and its manifestation in a new solo percussion work.
Hope, C., James, S., & Vickery, L. (2014). Sogno 102: Working with the Compositional Techniques of Giacinto Scelsi. Proceedings of the 2014 Australasian Computer Music Conference.
In 2013, the Western Australian ensemble Decibel presented a program of late works by the Italian composer Giacinto Scelsi (1905-1988) at the Goethe Institute in Palermo, Italy, entitled Inner Space: The Giacinto Scelsi Project. Amongst the pieces were two original works by members of the ensemble, offered as a kind of homage to the composer. One of these compositions was by the director of the ensemble, Cat Hope, entitled Sogno 102, a piece influenced by Scelsi's composition methodologies and by a book of Scelsi's entitled Sogno 101, published by the Italian publisher Quodlibet in 2010. The book includes a range of articles derived from autobiographical audio recordings made by Scelsi himself and transcribed for the book after his death. This article discusses the processes involved in conceptualising and realising the work, in particular the way electronics are used to bring some of the integral elements of Scelsi's own ideas about music into this work.
Hope, C., James, S., & Tan, K. (2010). When Lines Become Bits: Engaging Digital Technology to Perform works by Alvin Lucier. Proceedings of the 2010 Australasian Computer Music Conference, Canberra, Australia.
New music ensemble Decibel is a group of musicians, composers and improvisers who pursue music that combines acoustic and electronic instruments. In May 2010, they presented a concert of works by American composer Alvin Lucier (b. 1931), applying a range of new approaches to the reproduction of this important artist's works. Lucier has made clear that he sees technology as a tool, a means to an end [8, 122]. Different possibilities for his works - such as alternative instrumentation, lengths, and live or recorded versions of pieces - are suggested in many of his scores. The adaptation of certain analogue electronic components in the works using digital software and hardware has facilitated many of these suggestions. In addition, Lucier's compositions provide an opportunity to demonstrate how vital the performance of electronic sound generation and manipulation is when combined with live instruments. This is articulated in the consideration given to the spatialisation of sound reproduction, the placement and assignment of performers to electronic sound generators, and the use of software to facilitate performance requirements. Whilst Decibel may not be the first to attempt these adaptations and approaches, they have presented a number of Australian premieres of Lucier's work and have carefully documented the process of their realisation in this paper. The curatorial rationale, the methodologies applied to the proof of concept and live performance of the works, and the results achieved are discussed.
Digital Musicology and Music Analysis
Vickery, L. & James, S. (2020). Temporal Dissonance in Charles Ives' Putnam's Camp. [in progress]
James, S. (2020). Pushing the Envelope: multichannel tape, analog synthesis, and quadraphonic sound in Smalley's 'Dijeridu' (1974). [book chapter in progress]
Roger Smalley's 'Didgeridoo' (also referred to as 'Dijeridu') (1974) was composed whilst Smalley was composer-in-residence at the University of Western Australia, using the electronic studio designed and maintained by Associate Professor John Albert Exton (1933-2009). This studio had several tape machines, a custom-built console with 4 quadraphonic joysticks, several analog synthesizers including the EMS VCS3, and various other analog modules including a frequency-to-CV converter. All of this equipment was used in conceiving the work.
The work is an electronic piece whose definitive version is for 4-channel tape; however, the work also exists on several other tapes in various stages of development: tapes that contain source sound materials used for resynthesis, a multichannel tape containing all of the 'composed' sounds in separation, and a number of diffused performances. With source material based on LP recordings made at Mornington Island, the piece has no tape splices, and was 'performed live' to tape to create the final version of Smalley's only solo tape work. This paper studies the form-building structures of the work by applying a layer and dynamic form analysis, and also documents the style of diffusion explored. Spectrogram analysis is also used to draw clear temporal links between the source materials and the resynthesized sounds created. These processes allow us to trace more clearly the development of the work.
James, S. (2020). A Reconsideration of John Cage's Variations I-VIII (1958-1967) in the 21st Century. [book chapter, in the editorial stage]

Seah, T., James, S., & Sleptsova, A. (2020). Lost in Time: revealing approaches to rubato and dynamics in recordings of Russian pianists at the turn of the century. HPI Conference.
Russian piano music and pianism flourished in the late nineteenth century and the early twentieth century through two main institutions: the Moscow and St Petersburg Conservatories. The Moscow Conservatory arguably played a pivotal role in producing great pianists and composers, and scholars tend to suggest that John Field, Franz Liszt, Anton Rubinstein, and the Russian Nationalist composers are among the main influences on what is understood as the Moscow piano style. Many Russian pianists of the time made recordings utilizing technologies such as the wax cylinder, the gramophone, and reproducing piano rolls, the earliest of which date back to the 1880s. Performances of pianists whose traditions are well rooted in the styles of the late Romantic era are captured in these early recordings. The analysis of rubato and dynamics in early recordings therefore provides an insight into the performance practices of the time. This research intends to identify some key elements of the Moscow piano style of the late nineteenth century and early twentieth century. Recordings selected satisfy two selection criteria: they must feature a Russian pianist trained at the Moscow Conservatory, and the work performed must be Russian. Recordings of three contrasting Russian pianists - Sergei Lyapunov, Sergei Rachmaninov, and Josef Lhévinne - are analysed using computational means, revealing a number of aspects of style such as tempo, rubato, notes inégales, and dynamics. These aspects of style are then discussed alongside historiographical and pedagogical sources.
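The kind of computational tempo analysis described above can be illustrated with a minimal sketch. This is not the authors' analysis code; the onset times and the rubato measure are illustrative assumptions — beat onsets would in practice be extracted from the recordings themselves.

```python
# Illustrative sketch: derive a beat-by-beat tempo curve and a simple rubato
# measure from a list of beat-onset times (in seconds), such as might be
# extracted from an early recording. Onsets here are hypothetical.

def tempo_curve(onsets):
    """Tempo in BPM for each successive inter-onset interval."""
    return [60.0 / (b - a) for a, b in zip(onsets, onsets[1:])]

def rubato_index(onsets):
    """Mean absolute deviation of inter-onset intervals from their average,
    normalised by that average: 0.0 corresponds to metronomic playing."""
    iois = [b - a for a, b in zip(onsets, onsets[1:])]
    mean = sum(iois) / len(iois)
    return sum(abs(i - mean) for i in iois) / (len(iois) * mean)

# A performer lingering on the third beat of a nominal 120 BPM pulse.
onsets = [0.0, 0.5, 1.0, 1.7, 2.2]
print([round(t, 1) for t in tempo_curve(onsets)])  # [120.0, 120.0, 85.7, 120.0]
print(round(rubato_index(onsets), 3))              # 0.136
```

A dynamics curve could be handled analogously, with amplitude values taken at each detected beat in place of inter-onset intervals.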
James, S. and Vickery, L. (2018). Representations of Decay in the Works of Cat Hope. Tempo 73(287).
This article considers the 'representation of decay' in selected concert works by the Australian composer Cat Hope. It draws on a mixed-method research methodology, comparing the conceptual aspects of Hope's oeuvre with analyses of studio and live recordings of Hope's work and discussing how such ideas of 'decay' may play out in the sonic world. Two forms of spectral analysis are employed: firstly, the analysis of the spectral parameters roughness, noisiness, brightness, pitch, and centroid; secondly, a visualisation of the music as a spectrogram. The data for the spectral analyses are derived from Alexander Harker's spectral descriptor tools for MaxMSP, which record a value for each parameter every 25 milliseconds. At times, values are normalised within a range of 0 to 1, as representative of how listeners experience parametrical changes (i.e. dynamics, in relative terms rather than as absolutes in relation to other sounds in the work). Importantly, perception of noisiness is more acute at frequencies in which the auditory critical bands are wider, below 250 Hz (roughly below middle C), precisely the upper range specified by Hope to define instruments suitable for the Australian Bass Orchestra.
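The 0-1 normalisation of descriptor streams described above can be sketched as follows. This is an illustrative reconstruction, not the analysis pipeline itself; the descriptor values are hypothetical stand-ins for one value per 25 ms frame.

```python
# Illustrative sketch: min-max normalising a stream of spectral-descriptor
# frames (one value per 25 ms) into the 0-1 range, so that a parameter such
# as noisiness is read relative to the work rather than in absolute terms.

def normalise(frames):
    """Scale a list of descriptor values into [0, 1] over the whole stream."""
    lo, hi = min(frames), max(frames)
    if hi == lo:                       # constant stream: no relative change
        return [0.0 for _ in frames]
    return [(v - lo) / (hi - lo) for v in frames]

noisiness = [0.12, 0.30, 0.48, 0.30, 0.12]   # hypothetical 25 ms frames
print([round(v, 2) for v in normalise(noisiness)])  # [0.0, 0.5, 1.0, 0.5, 0.0]
```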
Vickery, L. & James, S. (2016). The Enduring Temporal Mystery of Ornette Coleman's Lonely Woman. Proceedings of the 2015 WA Chapter of MSA Symposium on Music Performance and Analysis.
Lonely Woman, from Ornette Coleman's groundbreaking 1959 album The Shape of Jazz to Come, brought a completely fresh form of musical texture to jazz. The texture, in which a fast-paced but unmetered rhythm section sits under, and in stark contrast to, a slow-moving and highly rubato hymn-like melody, has led to descriptions of the music as "completely free of meter", "rhythmically elastic" or having "freely pulsating time". In Coleman's musical output, Lonely Woman became the template for a set of works, such as Broken Shadows (1971) and What Reason Could I Give? (1971), that Peter Wilson has categorized as the "Coleman Ballad" group. There have been numerous transcriptions of the work; however, representing the complexities of its unusual two-stream rhythmic and melodic material is always avoided in favour of "an approximation of reality" (often in 4/4).
This discussion seeks to unravel precisely the relationship between the rhythm section and the melodic instruments in Lonely Woman using digital analysis tools. It aims to uncover the role of synchrony and asynchrony in forming what has been aptly described as a "unified performing aesthetic" of "seemingly opposing musical elements in juxtaposition against one another."

Hope, C., & James, S. (2016). Zeitebenen: Realising Roger Smalley's European past in Australia. Presentation at the Musicology Australia MSA Conference, Adelaide, Australia.
Composer and concert pianist Roger Smalley (1943-2015) spent the majority of his adult life in Perth, Western Australia, where he taught composition at the University of Western Australia for close to thirty years. Before he came to live in Perth, he formed part of an important English electro-acoustic quartet, Intermodulation, founded in 1967, which he called a "four-person live electronic improvisation type ensemble" (in Ford, 2003). Smalley composed a work for this ensemble in 1973, 'Zeitebenen', which was performed around Europe and at the BBC Proms. Smalley added revisions in 1975 that were never performed, as the group ceased when Smalley came to work in Australia in 1976. This paper discusses the archaeological adventure of performing 'Zeitebenen', which began as an idea without a printed score or recording. A revised performance copy has been created, the tape part sourced and the electronics remodelled. With the assistance of surviving members of Intermodulation and Smalley's friends and family, the piece was performed in June 2016. The undertaking involved meticulous musicological approaches to find the missing pieces of the score, as well as appropriate judgment in recreating 'structured improvisation' and the way performers interact with technology in the work. In addition, systematic music analysis uncovering relationships between 'Zeitebenen' and Smalley's work 'Monody' (1971-72) provides insight into the compositional procedures explored therein.
James, S. (2001). An analysis of Jennifer Fowler's 'Echoes from an Antique Land.' Australian Society for Music Education (Analysis requested as a teaching aid for WA Tertiary Entrance Exam in Music).
James, S. (2000). The Technique of Isorhythm and its use in the Music of the Middle Ages and the 20th Century (Honours Dissertation, University of Western Australia).
Sound Synthesis
James, S. (2020). Constraining and Navigating Chaos: a morphological approach to controlling Nonlinear Dynamical Systems for Sound Synthesis. [in progress]
James, S. (2017). A Novel Approach to Timbre Morphology: modulating timbre spaces using low dimensional audio rate control. Proceedings of the 2017 International Computer Music Conference. |
This research began with a novel method of controlling large parameter sets using vectors of samples de-interleaved from audio signals, a method the author has termed low-dimensional audio-rate control of sound synthesis. This research investigates the suitability of the method for controlling additive synthesis by coupling it with an interface for morphing between existing timbre sets. The temporal nature of such a control interface allows for a rapid and precise choreography of control, also allowing for the independent application of ring modulation, amplitude modulation, frequency modulation and spatial modulation to independent oscillators. Such effects are also intended to reproduce the amplitude and frequency perturbation evident in complex sound sources, as well as providing an extended palette of timbres through the process of modulation.
James, S. (2016). A Multi-Point 2D Interface: Audio-Rate Signals for Controlling Complex Multi-Parametric Sound Synthesis. Proceedings of the 2016 New Interfaces for Musical Expression, Brisbane, Australia. |
This paper documents a method of controlling complex sound synthesis processes such as granular synthesis, additive synthesis, timbre morphology, swarm-based spatialisation, spectral spatialisation, and timbre spatialisation via a multi-parametric 2D interface. The paper evaluates the use of audio-rate control signals for sound synthesis, and discusses approaches to de-interleaving, synchronization, and mapping. It also outlines a number of ways of extending the expressivity of such a control interface by coupling it with another 2D multi-parametric nodes interface and audio-rate 2D table lookup. The paper proceeds to review methods of navigating multi-parameter sets via interpolation and transformation, and concludes with some case studies. The author has used this method to control complex sound synthesis processes that require control data for more than a thousand parameters.
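The de-interleaving step mentioned above — splitting a single interleaved audio-rate stream into per-parameter control vectors — can be sketched minimally. The parameter names and sample values are hypothetical; this only illustrates the routing principle, not the paper's implementation.

```python
# Illustrative sketch: every Nth sample of an interleaved audio-rate control
# stream is routed to one synthesis parameter, yielding N control vectors.

def deinterleave(signal, n_params):
    """Split an interleaved stream into n_params per-parameter vectors."""
    return [signal[i::n_params] for i in range(n_params)]

# A hypothetical interleaved stream carrying 3 parameters over 4 frames.
stream = [0.1, 0.5, 0.9,   0.2, 0.6, 1.0,   0.3, 0.7, 1.1,   0.4, 0.8, 1.2]
amp, freq, pan = deinterleave(stream, 3)
print(amp)   # [0.1, 0.2, 0.3, 0.4]
print(freq)  # [0.5, 0.6, 0.7, 0.8]
print(pan)   # [0.9, 1.0, 1.1, 1.2]
```

Because the control vectors are themselves sample streams, they update at audio rate rather than at a slower control rate, which is what permits the "rapid and precise choreography of control" the abstracts describe.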
James, S. & Hope, C. (2011). Multidimensional Data Sets: Traversing Sound Synthesis, Sound Sculpture, and Scored Composition. Proceedings of the 2011 Australasian Computer Music Conference, Auckland, New Zealand.
This article documents some of the conceptual developments of various approaches to using multidimensional data sets as a means of propagating sound, manipulating and sculpting sound, and generating compositional scores. This is achieved not only through a methodology reminiscent of some of the systematic matrix procedures employed by composer Peter Maxwell Davies, but also through a generative signal path method conventionally termed Wave Terrain Synthesis. Both methodologies follow in essence the same paradigm - the notion of extracting information through a process of traversing multidimensional topography. In this article we look at four documented examples. The first example is concerned with the organic morphology of modulation synthesis. The second example documents a dynamical Wave Terrain Synthesis model that responds and adapts in realtime to live audio input. The third example addresses the use of Wave Terrain Synthesis as a method of controlling another signal processing technique - in this case the independent spatial distribution of 1024 different spectral bands over a multichannel speaker array. The fourth example reflects on the use of matrices in some of the systematic compositional processes of Peter Maxwell Davies, briefly shows how pitch, rhythm, and articulation matrices can be extended into higher-dimensional structures, and proposes how gesture can be used to create realtime generative scores. The underlying intent is to find an effective and unified methodology for simultaneously controlling the complex parameter sets of synthesis, spatialisation, and scored composition in live realtime laptop performance.
James, S. (2005). Developing a Flexible and Expressive Realtime Polyphonic Wave Terrain Synthesis Instrument based on a Visual and Multidimensional Methodology (Masters Thesis, Edith Cowan University). |
The Jitter extended library for Max/MSP is distributed with a gamut of tools for the generation, processing, storage, and visual display of multidimensional data structures. With additional support for a wide range of media types, and the interaction between these mediums, the environment presents a perfect working ground for Wave Terrain Synthesis. This research details the practical development of a realtime Wave Terrain Synthesis instrument within the Max/MSP programming environment utilizing the Jitter extended library. Various graphical processing routines are explored in relation to their potential use for Wave Terrain Synthesis.
Relevant problematic issues and their solutions are discussed with an overall intent to maintain both flexible and expressive parameter control. It is initially shown, due to the multidimensional nature of Wave Terrain Synthesis, that any multi-parameter system can be mapped out, including existing sound synthesis techniques such as wavetable, waveshaping, modulation synthesis, scanned synthesis, additive synthesis, et cetera. While the research initially makes some general assessments of the relationship between the topographical features of terrain functions and their resulting sound spectra, the thesis proceeds to cover more practical and useful examples for developing further control over terrain structures. Processes useful for Wave Terrain Synthesis include convolution, spatial remapping, video feedback, recurrence plotting, and OpenGL NURBS functions. The research also deals with the issue of micro- to macro-temporal evolution, and the use of complex networks of quasi-synchronous and asynchronous parameter modulations in order to create the effect of complex timbral evolution in the system. These approaches draw from various methodologies, including low frequency oscillation, break point functions, random number generators, and Dynamical Systems Theory. Furthermore, the research proposes solutions to a number of problems arising from the frequent introduction of undesirable audio artifacts. Methods of controlling the extent of these problems are discussed, and classified as either pre- or post-Wave Terrain Synthesis procedures.
James, S. (2003). Possibilities for Dynamical Wave Terrain Synthesis. Proceedings of the 2003 Australasian Computer Music Conference 'Converging Technologies,' Perth, Australia. |
For the most part, Wave Terrain Synthesis has remained within a conceptual domain defined by linear topographical structures deriving essentially from Euclidean and Cartesian geometry. Consequently, the technique has been characterized by simple oscillator and modulator types due to an inflexible process of modifying the phase state within the system. After addressing the limitations inherent in existing methodology, this paper discusses some possible alternatives to linearity by describing various ways of introducing dynamical and pseudo-dynamical systems into the Wave Terrain Synthesis model.
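The basic Wave Terrain Synthesis paradigm that these papers extend can be sketched briefly: a trajectory traverses a 2D terrain function z = f(x, y), and the heights read off along the path become the output waveform. The terrain function, trajectory frequencies, and sample rate below are illustrative assumptions, not taken from any of the works listed.

```python
# Illustrative sketch of basic Wave Terrain Synthesis: an elliptical orbit
# x(t), y(t) traverses a bounded 2D terrain, and the terrain heights sampled
# along the orbit form the output signal.
import math

def terrain(x, y):
    """A smooth, bounded terrain over [-1, 1] x [-1, 1]."""
    return math.sin(math.pi * x) * math.cos(math.pi * y)

def render(n_samples, fx=3.0, fy=2.0, sr=8000):
    """Sample the terrain along a Lissajous-like trajectory."""
    out = []
    for n in range(n_samples):
        t = n / sr
        x = math.sin(2 * math.pi * fx * t)   # trajectory x(t)
        y = math.cos(2 * math.pi * fy * t)   # trajectory y(t)
        out.append(terrain(x, y))
    return out

samples = render(64)   # output stays bounded within [-1, 1]
```

The "dynamical" alternatives the 2003 paper discusses would replace the fixed orbit above with a system whose state evolves (and can be perturbed) over time, rather than repeating a closed curve.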
Sound Spatialisation
James, S. (2020). A Perceptual Evaluation of 3D Sonification Techniques for Computer-Assisted Navigation. [in progress]

James, S. (2016). A Classification of Multi-Point Spectral Sound Shapes. Proceedings of the 2016 Australasian Computer Music Conference, Brisbane, Australia.
Previous research by the author has involved the investigation of sound shapes produced by the multi-point spatial diffusion of independent spectral bands. Fundamentally, two implementations emerged through this research: one that dealt primarily with the diffusion of spectra (i.e. spectral spatialisation), and a further extension of this approach that accounted for unique frequency-space distributions unfolding through time (i.e. timbre spatialisation implemented in the frequency domain). Through the process of exploring these possible sound shapes, a range of multi-point distributions emerged, making it possible to form a categorical set of distinct multi-point distributions. The classifications were informed by the writings of Gary Kendall, Francis Rumsey, Robert Normandeau, Ewan Stefani, and Karen Lauke on spatiality; Albert Bregman on auditory scene analysis (ASA); the psychoacoustics literature on directionality and immersion; Denis Smalley on spectromorphology, spatiomorphology, spatial texture, contiguous space, and non-contiguous space (i.e. zones); Gary Kendall on spectral correlation and decorrelation; and Trevor Wishart on spatial motion.
James, S. (2016). Multi-Point Nonlinear Spatial Distribution of Effects across the Soundfield. Proceedings of the 2016 International Computer Music Conference, Utrecht, the Netherlands.
This paper outlines a method of applying non-linear processing and effects to multi-point spatial distributions of sound spectra. The technique is based on previous research by the author on non-linear spatial distributions of spectra, that is, timbre spatialisation in the frequency domain. One of the primary applications here is the further elaboration of timbre spatialisation in the frequency domain to account for distance cues incorporating loudness attenuation, reverb, and filtration. Further to this, the same approach may also give rise to more non-linear distributions of processing and effects across multi-point spatial distributions such as audio distortions and harmonic exciters, delays, and other such parallel processes used within a spatial context.
James, S. (2015). Spectromorphology and Spatiomorphology: Wave Terrain Synthesis as a Framework for Controlling Timbre Spatialisation in the Frequency-Domain (PhD Exegesis, Edith Cowan University). |
This research project examines the scope of the technique of timbre spatialisation in the frequency domain that can be realised and controlled in live performance by a single performer. Existing implementations of timbre spatialisation take either a psychoacoustical approach – employing control rate signals for determining azimuth and distance cues – or an adoption of abstract structures for determining frequency-space modulations. This research project aims to overcome the logistical constraints of real-time multi-parameter mapping by developing an overarching multi-signal framework for control: wave terrain synthesis, an interactive control rate and audio rate system. Due to the precise timing requirements of vector-based FFT processes, spectral control data are generated in frames. Performed in MaxMSP, the project addresses notions of space and immersion using a practice-led methodology contributing to the creation of a number of compositions, performance software and an accompanying exegesis. In addition, the development and evaluation of timbre spatialisation software by the author is accompanied by a categorical definition of the spatial sound shapes generated.
James, S. (2015). Spectromorphology and Spatiomorphology of Sound Shapes: audio-rate AEP and DBAP panning of spectra. Proceedings of the 2015 International Computer Music Conference, Denton, Texas. |
Explorations of a new mapping strategy for spectral spatialisation demonstrate a concise and flexible control of both spatiomorphology and spectromorphology. With the creation of customized software by the author for audio-rate histograms, spectral processing function smoothing, spectral centroid width modulation, audio-rate distance-based amplitude panning, audio-rate ambisonic equivalent panning, a growing library of audio trajectory functions, and an assortment of spectral transformation functions, this article aims to explain the rationale of this process.
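The distance-based amplitude panning (DBAP) named above can be sketched in a simplified form. This is not the author's implementation: the speaker layout, the blur term, and the constant-power normalisation are illustrative assumptions chosen to show the principle — each speaker's gain falls off with its distance from a virtual source, with no listener "sweet spot" assumed.

```python
# Simplified distance-based panner in the spirit of DBAP: gains are inversely
# proportional to each speaker's distance from the virtual source (a small
# "blur" term avoids division by zero at a speaker position), then normalised
# so the total power is constant regardless of source position.
import math

def dbap_gains(source, speakers, blur=0.1):
    dists = [math.hypot(source[0] - sx, source[1] - sy) for sx, sy in speakers]
    raw = [1.0 / math.sqrt(d * d + blur * blur) for d in dists]
    k = 1.0 / math.sqrt(sum(g * g for g in raw))   # unit total power
    return [k * g for g in raw]

speakers = [(-1, 1), (1, 1), (1, -1), (-1, -1)]    # hypothetical quad array
gains = dbap_gains((0.0, 0.0), speakers)           # source at the centre
print([round(g, 3) for g in gains])                # equidistant: all 0.5
```

In the audio-rate variant the abstract describes, the source coordinates would themselves be sample streams, so the gain set is recomputed at signal rate for each spectral band.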
James, S. (2014). Sound Shapes and Spatial Texture: Frequency-Space Morphology. Proceedings of the 2014 International Computer Music Conference, Athens, Greece. |
Wave Terrain Synthesis, used as a control mechanism, provides a governing system that allows the performer to create complex and coordinated change across an existing complex parametric system. This research has focused largely on the application of Wave Terrain Synthesis to the control of Timbral Spatialisation. Various mappings of the Wave Terrain mechanism are discussed to highlight ways in which frequency-space morphology may be approached with such a model. The means of smoothly interpolating between various terrain and trajectory states allows the performer to control the evolving nature of the sound shapes and spatial texture generated by the model.
James, S. & Hope, C. (2013). 2D and 3D Timbral Spatialisation: spatial motion, immersiveness, and notions of space. Proceedings of the 2013 International Computer Music Conference, Perth, Australia. |
Timbral spatialisation is a signal processing technique that involves the spatial treatment of all individual spectral bands extracted from a source sound. Previous research proposed that Wave Terrain Synthesis can be used as an effective bridging control structure for timbral spatialisation, enabling gestural control of the thousands of panning parameters required. This paper considers some possibilities and challenges of firstly establishing a spatial language for timbral spatialisation in live computer music, and follows by addressing problems and ideas in pertinent writings on the notion of space, spectromorphology, spatial motion, and immersiveness by Smalley, Wishart, Normandeau, Rumsey, Kendall, and Sazdov. This finally leads to a discussion of some possible immersive states created through timbral spatialisation, as well as the spatial movement generated by Wave Terrain Synthesis.
James, S. & Hope, C. (2012). From Autonomous to Performative Control of Timbral Spatialisation. Proceedings of the 2012 Australasian Computer Music Conference, Brisbane, Australia. |
Timbral spatialisation is a process that requires the independent control of potentially thousands of parameters (Torchia, et al., 2003). Current research on controlling timbral spatialisation has either focussed on automated generative systems, or suggested that to design trajectories in software is to write every movement line by line (Normandeau, 2009). This research proposes that Wave Terrain Synthesis may be used as an effective bridging control structure for timbral spatialisation, enabling the performative control of the large parameter sets associated with such software. This methodology also allows for compact interactive mapping possibilities for a physical controller, and may be mapped gesturally.
Digital Music Notation
Maujean, J-M. & James, S. (2020). 3D Spectrogram Notation: analysis, composition and performance. TENOR Conference, Hamburg, Germany. [in review]
In this paper, a novel set of animated scoring techniques is presented involving the use of 3D spectrograms for new music composition and performance. This strategy aims to provide a prescribed yet intuitive representation of sound through the visualisation of three sonic variables: frequency, amplitude and time. Using the animation software Adobe After Effects, the paper discusses the implementation of such techniques for scoring music, and documents some further refinements, including: the superimposition of multiple 3D spectrograms of various sounds into the same score (representing separate musical 'parts' for an ensemble), the superimposition of a prescribed tuning system or pitch framework onto the animated spectrogram, and a logarithmic scaling of spectrogram layers through time so as to encode a longer history of future events in the visualised animated score. Various iterations of this technique are demonstrated through the creative development of new music.
|
Wyatt, A., Vickery, L. & James, S. (2019). Unlocking the Decibel ScorePlayer. TENOR Conference, Melbourne, Australia. |
|
This paper discusses recent developments in the Decibel ScorePlayer project, including the introduction of a canvas scoring mode, python ScorePlayer externals, and enhancements to the ScoreCreator application. Firstly, the canvas scoring mode of the Decibel ScorePlayer app allows other applications, such as Max, to send drawing commands to the ScorePlayer via OSC. Several example implementations of generative and animated notation scores are discussed and evaluated. An object model has been developed allowing for the creation of hierarchies of drawn elements; it defines a framework of commands that can be used to create and control these objects, and supporting examples describe the way in which scores can be developed to take advantage of this new scoring mode. Secondly, a python scoreplayer-external library has been developed, defining two python classes: scorePlayerExternal, which makes a connection to the iPad, opening a UDP listening socket and letting the iPad know which port to send its replies to; and scoreObject, which is responsible for creating and drawing objects populated on the canvas display window of the Decibel ScorePlayer. The library acts as a wrapper to the raw OSC commands so that programming can be done using object-oriented paradigms. Thirdly, the ScoreCreator, an application developed for Mac OS X to automate the process of making scores for the Decibel ScorePlayer, has been expanded to allow a range of score types and functionalities to be defined.
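The idea of an object-oriented wrapper over raw OSC drawing commands can be sketched as follows. This is a stdlib-only illustration of the pattern, not the scoreplayer-external API: the address scheme, port and command names are hypothetical placeholders, not the ScorePlayer's actual command namespace.

```python
import socket
import struct

def _osc_string(s):
    """OSC strings are NUL-terminated and padded to a 4-byte boundary."""
    b = s.encode("utf-8") + b"\x00"
    return b + b"\x00" * ((4 - len(b) % 4) % 4)

def osc_message(address, *args):
    """Encode a minimal OSC message supporting int32 arguments only."""
    tags = "," + "i" * len(args)
    payload = b"".join(struct.pack(">i", a) for a in args)
    return _osc_string(address) + _osc_string(tags) + payload

class CanvasObject:
    """A thin object-oriented wrapper over raw OSC drawing commands,
    in the spirit of the scoreObject class described above. Addresses
    and commands here are illustrative, not the real namespace."""

    def __init__(self, sock, host, port, name):
        self.sock, self.dest, self.name = sock, (host, port), name

    def send(self, command, *args):
        msg = osc_message(f"/{self.name}/{command}", *args)
        self.sock.sendto(msg, self.dest)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
line = CanvasObject(sock, "127.0.0.1", 8000, "line1")  # hypothetical port
line.send("setColour", 255, 0, 0)
line.send("move", 120, 240)
```

The wrapper keeps score code readable (`line.send("move", …)`) while everything on the wire remains plain OSC over UDP, which is what lets external environments such as Max drive the same canvas.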
|
Wyatt, A., Vickery, L. & James, S. (2018). The Canvas Mode: Rapid Prototyping for the Decibel ScorePlayer. The Australasian Computer Music Conference. |
|
This paper discusses recent developments in the Decibel ScorePlayer project, notably the introduction of a canvas mode that allows other applications to send drawing commands to the player via OSC. It outlines the object model developed to allow the creation of hierarchies of drawn objects, the commands that can be used to create and control these, and the way in which scores can be developed to take advantage of this new mode. It is hoped that this addition will give composers the flexibility to experiment with new score paradigms while leveraging the existing strengths of the platform.
|
James, S., Hope, C., Vickery, L., Wyatt, A., Carey, B., Fu, X., & Hajdu, G. (2017). Establishing connectivity between the existing networked music notation packages Quintet.net, Decibel ScorePlayer, and MaxScore. TENOR Conference, Spain. |
|
In this paper we outline a collaboration in which live internet-based and local collaboration between research groups/musicians from Decibel New Music Ensemble (Perth, Australia) and ZM4 (Hamburg, Germany) was facilitated by novel customised software solutions employed by both groups. The exchange was funded by the Deutscher Akademischer Austauschdienst and Universities Australia. Both groups were previously engaged in the research and performance of similar musical repertoire, such as John Cage’s ‘Five’ (1988) and ‘Variations I-VIII’ (1958-67) among others, the performances of which utilise graphic, animated and extended traditional Western music notation. Preliminary steps were taken to achieve communication between the three existing network music notation packages, the Decibel ScorePlayer, MaxScore and Quintet.net, facilitating a merging – and ultimately an extension – of the notational approaches previously prescribed by each package. In addition to the technical innovations required to achieve such a project, we consider the outcomes and future directions of the project, as well as their relevance for the wider contemporary music community.
|
Hope, C., James, S., & Wyatt, A. (2016). Headline grabs for music: The development of an iPad score generator. Proceedings of the 2016 New Interfaces for Musical Expression, Brisbane, Australia. |
|
This paper-demonstration provides an overview of a generative music score adapted for the iPad by the Decibel new music ensemble. The original score “Loaded (NSFW)” (2015) is by Western Australian composer Laura Jane Lowther, and is scored for ensemble and electronics, commissioned for a performance in April 2015 at the Perth Institute of Contemporary Arts. It engages and develops the Decibel ScorePlayer application, a score reader and generator for the iPad, as a tool for displaying an interactive score that requires performers to react to news headlines through musical means. The paper introduces the concept for the player, how it was developed, and how it was used in the premiere performance. The associated demonstration shows how the score appears on the iPads.
|
Vickery, L. & James, S. (2016). Tectonic: a Networked, Generative and Interactive, Conducting Environment for iPad. Proceedings of the 2016 International Computer Music Conference, Utrecht, the Netherlands. |
|
This paper describes the concepts, implementation and context of Tectonic: Rodinia, for four realtime composer-conductors and ensemble. In this work, an addition to the repertoire of the Decibel ScorePlayer, iPads are networked together using the Bonjour protocol to manage connectivity over the network. Unlike previous ScorePlayer works, Rodinia combines “conductor view” control interfaces, “performer view” notation interfaces and an “audience view” overview interface, separately identified by manual connection and yet mutually interactive. Notation is communicated to an ensemble via scores independently generated in realtime in each “performer view” and amalgamated schematically in the “audience view” interface. Interaction in the work is enacted through a collision-avoidant algorithm that modifies the choices of each conductor by deflecting the streams of notation according to an evaluation of their “Mass” and proximity to other streams, reflecting the concept of shifting tectonic plates that crush and reform each other’s placement.
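The deflection behaviour described above can be caricatured in a few lines. This is a hypothetical one-dimensional sketch of a mass-weighted avoidance rule, not Decibel's implementation; the function name and force law are assumptions.

```python
import numpy as np

def deflect(positions, masses, strength=1.0, min_dist=1e-3):
    """One toy step of a collision-avoidant update: each notation
    stream is pushed away from every other stream with a force that
    falls off with distance, and heavier streams resist deflection."""
    positions = np.asarray(positions, dtype=float)
    masses = np.asarray(masses, dtype=float)
    new = positions.copy()
    for i in range(len(positions)):
        force = 0.0
        for j in range(len(positions)):
            if i == j:
                continue
            d = positions[i] - positions[j]
            dist = max(abs(d), min_dist)
            # Push away from stream j: harder when j is close and heavy.
            force += np.sign(d) * strength * masses[j] / dist**2
        new[i] += force / masses[i]  # heavy streams deflect less
    return new

# Three streams: the middle one sits close to a heavy neighbour.
streams = deflect([0.0, 0.5, 0.6], masses=[1.0, 1.0, 4.0])
```

Even this crude rule shows the described effect: a light stream near a heavy one is strongly deflected away, while the heavy stream largely holds its course.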
|
Hope, C., Vickery, L., Wyatt, A., & James, S. (2013). Mobilising John Cage: The Design and Creation of Score Generators for the Complete John Cage Variations I–VIII. Malaysian Music Journal 2(2). |
|
The John Cage Variations provide a useful snapshot of a range of score writing techniques employed by Cage throughout his career. From the very complex preparations and realisation of parts required in Variations I and II, to the almost non-existent scores of VII and VIII, the complete Variations provide a range of opportunities and challenges. In 2011 Western Australian new music ensemble Decibel developed a software-based score maker and player for the works and presented a series of concerts of the complete eight Variations. The performances have led to the development of the John Cage Complete Variations App for the iPad tablet computer, developed in conjunction with Peters Edition. Drawing on the ensemble’s experiments with real-time and scrolling computer score generation and performance, and their unique make-up of performers, composers, sound artists and programmers, the group have made the realisation of these works more accurate and, for the first time, possible in real time.
This paper discusses the approach taken by the group for the concept, design, creation and eventual performance of the scores for John Cage’s Variations I – VIII, including the packaging of the works into an application. It also covers the challenges that the range of different score formats presented to the packaging of the collection as a whole. |
Hope, C., Vickery, L. & James, S. (2012). Digital Adaptions of the Scores for Cage Variations I, II, and III. Proceedings of the 2012 International Computer Music Conference, Ljubljana, Slovenia. |
|
Western Australian new music ensemble Decibel have devised a software-based tool for creating realisations of the scores for John Cage’s Variations I and II. In these works Cage used multiple transparent plastic sheets bearing various forms of graphical notation, which could be positioned independently of one another, to create specifications for multiple unique instantiations of the works. The digital versions allow for real-time generation of the specifications of each work, quasi-infinite exploration of diverse realisations of the works, and transcription of the data created using Cage’s methodologies into proportionally notated scrolling graphical scores.
|
Hope, C., James, S., & Vickery, L. (2012). New Digital Interactions with John Cage's Variations IV, V and VI. Proceedings of the 2012 Australasian Computer Music Conference, Brisbane, Australia. |
|
To celebrate the centenary of John Cage’s birth in 1912, Western Australian new music ensemble Decibel undertook the realisation of the American composer’s (1912 – 1992) complete Variations I – VIII. The works offer a unique insight into the development of Cage’s approach to composition practice, aleatoric approaches, spatial arrangements and the use of electronics. In the resulting “John Cage Complete Variations Project”, Decibel created a performance of the eight pieces lasting around an hour. The preparation and reading of the scores that make use of transparent sheets (Variations I, II, III, IV and VI) has been adapted using digital score creators and readers. This permits real-time generation of measurements and graphics, as well as the assemblage of performance symbols, during the actual performance of the works. This paper examines the approach to Variations IV (1963) and VI (1966) from the perspectives of digital adaption and the context of the program as a whole.
|
Teaching and Learning
James, S. (2016). In the Mirror: a metaphor for research practice and supervision. Presentation for the ECU Principal Supervisor Accreditation Program 2016. |
This presentation focussed on four processes of reflexive practice.
First, the use of free-writing and audio recording as a means of sketching out ideas, followed by critical reflection. Emphasis is placed on ‘in the moment’ techniques in order to counter writer’s block and academic stagnation, and to encourage inquiry. Second, the use of analysis as a means of understanding, interrogating, clarifying and validating ideas. This may include techniques of qualitative critical analysis as well as the statistical analysis of data. Third, the use of graphical representations of information and ideas in order to gain new perspectives on research. Fourth, the perspective on and validation of one’s own work that emerge through familiarity with the writing conventions of the field, familiarity with the academic conventions of the university, and personal acquaintance with peers in the field (i.e. through conferences). |
Hope, C., Riddoch, M., & James, S. (2009). Music Technology / Technological Music. Media Art Scoping Study Symposium, 80-89, Perth, Australia. |
|
In 2007 WAAPA began a new music course that combined thorough traditional music training with computer programming and new media arts. The Music Technology Major in the three-year Bachelor of Music aims to produce students who can not only program interactive or compositional projects but also have full capability in the more traditional musical areas of aural training, harmony, theory, history and performance. After initial learning in composition, acousmatics, spatial music, recording, mixing and mastering music, students are introduced to programming through composition projects using MaxMSP and Jitter, moving on to Csound and the programming of Arduinos, as well as real-time internet performances. The project-based teaching and assessment structure encourages collaboration and performance in the public arena, creating a foundation for a performance/research ethic beginning at undergraduate level. This course is developing exciting outcomes that may finally resolve the sound art versus music debate while developing learning strategies that combine musical and scientific approaches for a range of artworks with sound as a foundation. The paper discusses the design of the course, how it differs from others, the way programming is taught within a music framework, and some of the outcomes to date.
|
Other
James, S. (2003). IRCAM: Institut de Recherche et Coordination Acoustique/Musique, Paris, France. EARWAX Magazine. |