ICAD 2007, Day 4

The final day of ICAD brought two sessions on Sonification, including the following interesting work.

Another sonification environment, SoniPy, is built in Python. The paper, titled THE DESIGN OF A HETEROGENEOUS SOFTWARE ENVIRONMENT FOR DATA SONIFICATION RESEARCH AND AUDITORY DISPLAY: SONIPY, by David Worrall, Michael Bylstra, Stephen Barrass and Roger Dean, was presented by Somaya Langley.

A paper by Florian Grond, from ZKM, titled ORGANIZED DATA FOR ORGANIZED SOUND: SPACEFILLING CURVES IN SONIFICATION, showed an interesting way of tracing trajectories through multidimensional spaces using “space filling curves”: scan patterns that keep clusters of proximate data together, so the transformation can be more spatially pertinent than a linear raster scan, for example. A visual example used video from the Pure Data GEM OpenGL/video library and produced some Daniel Crooks-esque images. Maybe the space filling curves could be used as a new take on Crooks’ effects to produce time-based moving image output, rather than the stills from Grond’s paper?
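To make the idea concrete (this is my own minimal sketch, not Grond’s implementation), the snippet below maps 2-D grid coordinates onto a Hilbert curve, one common space-filling curve; the function name and grid size are just for illustration. Sorting data points by this index gives a scan order in which neighbouring points tend to stay close together, unlike a row-by-row raster scan.

```python
def xy_to_hilbert(x, y, side):
    """Map (x, y) on a side-by-side grid (side a power of two) to its
    position along a Hilbert space-filling curve."""
    d = 0
    s = side // 2
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/reflect the quadrant so the next level is oriented correctly.
        if ry == 0:
            if rx == 1:
                x = side - 1 - x
                y = side - 1 - y
            x, y = y, x
        s //= 2
    return d

# Scan an 8x8 grid along the curve instead of row by row; feeding data to a
# sonification in this order keeps clusters of nearby cells close in time.
side = 8
scan_order = sorted(((x, y) for x in range(side) for y in range(side)),
                    key=lambda p: xy_to_hilbert(p[0], p[1], side))
```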

Of particular interest to me is a suite of Max/MSP applications (available here) that illustrate and allow manipulation of various psychoacoustic phenomena. The paper was titled, SONIFICATION OF SOUND: TOOLS FOR TEACHING ACOUSTICS AND AUDIO by Densil Cabrera and Sam Ferguson.

Here’s the list of sonification examples:

  • Auditory graphs of absorption coefficient spectra
  • Sonification of rectangular room normal modes
  • Sonification of room impulse responses
  • Sonification of rectangular room reflections
  • Using the Hilbert transform for sonification
  • Sonification of the complex spectrum
  • Sonification of head-related transfer functions
  • Sonification of vowel formants
  • Sonification of spectral moments

All of which sound well worth investigating, and some are even potentially useful as sound/music performance tools, such as the vowel formant sonification.
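I haven’t looked inside Cabrera and Ferguson’s Max/MSP patches, but as a rough sketch of what a vowel formant sonification might involve, here is a small Python/NumPy example that weights the harmonics of a tone by resonance peaks at given formant frequencies; the function, formant values and 100 Hz bandwidth are my own assumptions for illustration.

```python
import numpy as np

def vowel_tone(formants, f0=110.0, dur=1.0, sr=44100):
    """Additive sketch: harmonics of f0 are weighted by Gaussian peaks
    centred on the given formant frequencies (Hz)."""
    t = np.arange(int(dur * sr)) / sr
    out = np.zeros_like(t)
    for k in range(1, int((sr / 2) // f0) + 1):
        freq = k * f0
        # Harmonic amplitude from proximity to each formant.
        amp = sum(np.exp(-0.5 * ((freq - f) / 100.0) ** 2) for f in formants)
        out += amp * np.sin(2 * np.pi * freq * t)
    return out / np.max(np.abs(out))

# e.g. formants around 700, 1200 and 2600 Hz give a rough "ah"-like timbre
tone = vowel_tone([700.0, 1200.0, 2600.0])
```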
