Physical modelling synthesis

Physical modelling synthesis refers to sound synthesis methods in which the waveform of the sound to be generated is computed using a mathematical model: a set of equations and algorithms that simulate a physical source of sound, usually a musical instrument.

General methodology

Modelling attempts to replicate the laws of physics that govern sound production. A model typically has several parameters: some are constants that describe the physical materials and dimensions of the instrument, while others are time-dependent functions describing the player's interaction with it, such as plucking a string or covering toneholes.
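
As a rough illustration of this split (the parameter names, types and values below are hypothetical, not taken from any particular model), a plucked-string model might separate the fixed physical constants from a time-dependent excitation gesture:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class StringModelParams:
        # Constants describing the instrument's materials and dimensions (hypothetical names)
        length_m: float            # speaking length of the string
        tension_n: float           # string tension
        lin_density_kg_m: float    # linear mass density
        damping: float             # simple, frequency-independent loss factor

    # Time-dependent player interaction: excitation force applied at the pluck point
    Gesture = Callable[[float], float]   # maps time in seconds to force in newtons

    def pluck(t: float, peak_n: float = 1.0, release_s: float = 0.005) -> float:
        """Hypothetical pluck gesture: the force ramps up and is released after a few milliseconds."""
        return peak_n * (t / release_s) if t < release_s else 0.0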

For example, to model the sound of a drum, there would be a mathematical model of how striking the drumhead injects energy into a two-dimensional membrane. Incorporating this, a larger model would simulate the properties of the membrane (mass density, stiffness, etc.), its coupling with the resonance of the cylindrical body of the drum, and the conditions at its boundaries (a rigid termination to the drum's body), describing its movement over time and thus its generation of sound.
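
As a concrete, idealized illustration (standard acoustics, not taken from the article's sources), the lossless drumhead at the core of such a model obeys the two-dimensional wave equation with a fixed rim:

    \frac{\partial^2 u}{\partial t^2} = c^2 \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right), \qquad c = \sqrt{T / \sigma}, \qquad u = 0 \ \text{on the rim},

where u(x, y, t) is the transverse displacement of the membrane, T its tension per unit length and σ its surface mass density; the strike enters as an initial velocity distribution or a forcing term, while stiffness, air loading and the coupling to the drum's body add further terms to this basic equation.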

Similar stages can be modelled in instruments such as the violin: the energy excitation, in this case provided by the slip-stick behavior of the bow against the string and the width of the bow; the resonance and damping behavior of the strings; the transfer of string vibrations through the bridge; and finally the resonance of the soundboard in response to those vibrations.
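
One common idealization of the slip-stick excitation (a simplification; practical bowed-string models vary) treats the bowing point as alternating between two regimes:

    \text{stick:}\ v_{\mathrm{string}} = v_{\mathrm{bow}} \ \text{while}\ |f| \le \mu_s F_b, \qquad
    \text{slip:}\ f = \mu_d(\Delta v)\, F_b \,\operatorname{sgn}(\Delta v), \quad \Delta v = v_{\mathrm{bow}} - v_{\mathrm{string}},

where F_b is the bow force, μ_s the static friction coefficient and μ_d a kinetic friction characteristic that falls off with the relative velocity Δv; this friction force is what drives the travelling waves on the string.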

In addition, the same concept has been applied to simulate voice and speech sounds.[1] In this case, the synthesizer includes mathematical models of the vocal fold oscillation and associated laryngeal airflow, and the consequent acoustic wave propagation along the vocal tract. Further, it may also contain an articulatory model to control the vocal tract shape in terms of the position of the lips, tongue and other organs.
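
As one example of such a vocal tract model (the article does not single out a particular method), the classic Kelly–Lochbaum formulation approximates the tract as a chain of short cylindrical tube sections with cross-sectional areas A_i; at each junction between sections, a travelling pressure wave is partially reflected with coefficient

    k_i = \frac{A_i - A_{i+1}}{A_i + A_{i+1}},

and the articulatory model's role is then to turn lip, tongue and jaw positions into a time-varying area function A_i(t).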

Physical modelling was not a new concept in acoustics and synthesis; it had already been implemented using finite difference approximations of the wave equation by Hiller and Ruiz in 1971.[citation needed] However, it was not until the development of the Karplus–Strong algorithm, its subsequent refinement and generalization into the highly efficient digital waveguide synthesis by Julius O. Smith III and others,[citation needed] and the increase in DSP power in the late 1980s[2] that commercial implementations became feasible.
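
For orientation, a minimal sketch of the basic Karplus–Strong plucked-string algorithm (the decay factor, buffer handling and sample rate here are illustrative choices, not a definitive implementation): a delay line is filled with noise and recirculated through a two-point averaging lowpass filter.

    import numpy as np

    def karplus_strong(freq_hz: float, duration_s: float, sample_rate: int = 44100) -> np.ndarray:
        """Basic Karplus-Strong: a noise burst recirculated through a delay line
        with a two-point averaging (lowpass) filter in the feedback path."""
        n_samples = int(duration_s * sample_rate)
        delay = int(sample_rate / freq_hz)            # delay-line length sets the pitch
        buf = np.random.uniform(-1.0, 1.0, delay)     # the "pluck": broadband noise
        out = np.empty(n_samples)
        for i in range(n_samples):
            out[i] = buf[i % delay]
            # feedback: average the two oldest samples, with a little extra attenuation
            buf[i % delay] = 0.996 * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
        return out

    # e.g. samples = karplus_strong(220.0, 2.0)  # two seconds of a plucked A3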

Yamaha contracted with Stanford University in 1989[3] to jointly develop digital waveguide synthesis; as a result, most patents related to the technology are owned by Stanford or Yamaha.

The first commercially available physical modelling synthesizer made using waveguide synthesis was the Yamaha VL1 in 1994.[4][5]

While the efficiency of digital waveguide synthesis made physical modelling feasible on common DSP hardware and native processors, the convincing emulation of physical instruments often requires the introduction of non-linear elements, scattering junctions, etc. In these cases, digital waveguides are often combined with finite-difference time-domain (FDTD),[6] finite element or wave digital filter methods, increasing the computational demands of the model.[7]
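
For context, a scattering junction of the kind mentioned here follows the standard transmission-line relations: where two waveguide sections with characteristic impedances Z_1 and Z_2 meet, an incident pressure (or force) wave p^+ is partially reflected and partially transmitted,

    r = \frac{Z_2 - Z_1}{Z_2 + Z_1}, \qquad p^{-}_{\text{reflected}} = r\, p^{+}, \qquad p^{+}_{\text{transmitted}} = (1 + r)\, p^{+},

where the transmission coefficient 1 + r follows from continuity of pressure at the junction.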

Technologies associated with physical modelling

Examples of physical modelling synthesis:

  • Karplus–Strong string synthesis
  • Digital waveguide synthesis
  • Finite-difference time-domain (FDTD) and finite element methods
  • Wave digital filter methods
  • CORDIS-ANIMA (mass–interaction modelling)

References

  • Hiller, L.; Ruiz, P. (1971). "Synthesizing Musical Sounds by Solving the Wave Equation for Vibrating Objects". Journal of the Audio Engineering Society.
  • Karplus, K.; Strong, A. (1983). "Digital synthesis of plucked string and drum timbres". Computer Music Journal. 7 (2): 43–55. doi:10.2307/3680062. JSTOR 3680062.
  • Cadoz, C.; Luciani, A.; Florens, J. L. (1993). "CORDIS-ANIMA: a Modeling and Simulation System for Sound and Image Synthesis: The General Formalism". Computer Music Journal. 17 (1). MIT Press.

Footnotes

  1. ^ Englert, Marina; Madazio, Glaucya; Gielow, Ingrid; Lucero, Jorge; Behlau, Mara (2017). "Perceptual Error Analysis of Human and Synthesized Voices". Journal of Voice. 31 (4): 516.e5–516.e18. doi:10.1016/j.jvoice.2016.12.015. PMID 28089485.
  2. ^ Vicinanza, D. (2007). "ASTRA Project on the Grid". Archived from the original on 2013-11-04. Retrieved 2013-10-23.
  3. ^ Johnstone, B. (1993). "Wave of the Future". http://www.harmony-central.com/Computer/synth-history.html Archived 2012-04-18 at the Wayback Machine.
  4. ^ Wood, S. G. (2007). Objective Test Methods for Waveguide Audio Synthesis. Master's thesis, Brigham Young University. http://contentdm.lib.byu.edu/cdm4/item_viewer.php?CISOROOT=/ETD&CISOPTR=976&CISOBOX=1&REC=19 Archived 2011-06-11 at the Wayback Machine.
  5. ^ "Yamaha VL1". Sound On Sound. July 1994. Archived from the original on 8 June 2015.
  6. ^ The NESS project http://www.ness.music.ed.ac.uk
  7. ^ C. Webb and S. Bilbao, "On the limits of real-time physical modelling synthesis with a modular environment" http://www.physicalaudio.co.uk
