Michaël Dzjaparidze is a sound artist and software developer from Amsterdam, The Netherlands.


More About Me.

March 12th, 2015

Currently living in Amsterdam, The Netherlands. I make electronic music using various digital sound synthesis methods. Most of this I do with SuperCollider: a pretty awesome programming language for real-time audio synthesis and algorithmic composition. Go to the portfolio section of this website to get an impression of my creative work.

I also develop software and research numerical algorithms for the generation of sound, music and visuals. Oh yeah, I do web and app development/hacking from time to time as well.


  • Ambient & Drone

    Physically Inspired Sound Synthesis.

    February 23rd, 2014

    These works are part of my PhD portfolio which is concerned with the compositional application of physically inspired sound synthesis methods. This includes both abstract methods which are loosely based upon physical principles as well as physical modelling.

    Most sounds are generated with my custom-built physical modelling library PMLib, implemented in SuperCollider and Python, which simulates systems of interconnected 1D and 2D objects in the form of strings, bars, membranes and plates. For each work a different system was designed, each imparting its own characteristic identity onto the harmonic dimension of the work. In addition, various excitation models were designed to excite a given system of interconnected objects in different ways. Some of these simulate a physical interaction such as plucking, bowing, scraping, bouncing or rolling; others simulate environmental sound sources such as water drops and sea waves.
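As an illustration of the underlying technique (this is a minimal sketch, not PMLib's actual API; all names and parameter choices are illustrative), a struck ideal string can be simulated with a basic finite-difference scheme:

```python
import numpy as np

def struck_string(n_samples, n_points=64, courant=0.9, damping=0.9999):
    """Finite-difference simulation of a damped ideal string with
    fixed ends, struck near one end; output is read at a fixed point."""
    u_prev = np.zeros(n_points)
    u = np.zeros(n_points)
    # raised-cosine "strike" as initial displacement
    strike, width = n_points // 5, 5
    for i in range(-width, width + 1):
        u[strike + i] = 0.5 * (1 + np.cos(np.pi * i / width))
    u_prev[:] = u                       # zero initial velocity
    out = np.empty(n_samples)
    read = int(n_points * 0.7)
    c2 = courant ** 2                   # Courant number <= 1 for stability
    for n in range(n_samples):
        lap = np.zeros(n_points)
        lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]   # discrete Laplacian
        u_next = damping * (2 * u - u_prev + c2 * lap)
        u_next[0] = u_next[-1] = 0.0               # fixed boundary ends
        u_prev, u = u, u_next
        out[n] = u[read]
    return out

out = struck_string(2000)
```

Interconnecting several such objects then amounts to exchanging forces at shared connection points each time step.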

    Musically speaking, the portfolio explores the unification of harmony and timbre through different ways of deriving harmonic form and progression from the spectral description of the sound itself. It plays on a constant interplay between what a listener may perceive as physically plausible and as wholly abstract, and focuses on the slow development of texture over time in order to create movement and flow in the music. The main idea is that the music is in constant flux without being consciously directed from a beginning to an end in a strict linear sense; instead, the listener is encouraged to appreciate the inner timbral details of the sound textures and how they develop and transform over time.

    Algorithmic Composition.

    August 6th, 2010

    A selection of older works which make use of algorithmic composition techniques. Some of these algorithms are targeted towards the structuring of musical materials, others for generating the sound material itself. However, most works were realised using a combination of both.

    The work H is the artistic end result of my MA research, involving the derivation of musical form and function from the various solutions of the Schrödinger equation for the hydrogen atom. Curves obtained from cross-sections of the probability density functions corresponding to the different energy levels of the atom are used as window functions for grains of sound and as envelopes for different grain clouds, but also in their original interpretation: as probability densities used to generate individual grain and larger-scale musical parameters on a semi-random basis.
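The probability-density idea can be sketched with inverse-transform sampling: draw values from a tabulated hydrogen-like radial density and map them onto grain parameters. The density, mapping and ranges below are stand-ins, not the values used in H:

```python
import numpy as np

def sample_from_density(density, grid, n, seed=0):
    """Inverse-transform sampling from a tabulated 1-D density:
    build the discrete CDF, then invert it with linear interpolation."""
    rng = np.random.default_rng(seed)
    cdf = np.cumsum(density, dtype=float)
    cdf /= cdf[-1]                      # normalize to a proper CDF
    return np.interp(rng.random(n), cdf, grid)

# hydrogen 1s radial probability density, r in Bohr radii
r = np.linspace(0.0, 10.0, 1000)
p = r**2 * np.exp(-2 * r)

# draw grain "pitches": map sampled radii onto three octaves above 110 Hz
radii = sample_from_density(p, r, 500)
pitches = 110.0 * 2 ** (radii / 10.0 * 3)
```

The same sampler works for any of the cross-section curves: durations, densities and onset times can be drawn the same way with a different mapping.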

    t.01.07-01-08 is the first algorithmic work I realised. It uses data obtained from the Hénon map (a discrete-time dynamical system) to determine various additive synthesis parameters; the same data also shapes the musical structure of larger-scale events. Similarly, the works GR01, GS02 and GS03 map data obtained from both the Hénon map and the Lorenz system (a continuous-time dynamical system) onto various granular synthesis parameters, using an audio-rate-triggered granular synthesis engine built in Max/MSP.
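A minimal sketch of the mapping idea: iterate the Hénon map and rescale its chaotic x-values onto a range of partial frequencies for additive synthesis. The scaling choices here are illustrative, not those of the original works:

```python
import numpy as np

def henon(n, a=1.4, b=0.3, x0=0.0, y0=0.0, discard=100):
    """Iterate the Hénon map and return the x-sequence,
    discarding an initial transient so values lie on the attractor."""
    x, y = x0, y0
    xs = []
    for i in range(n + discard):
        x, y = 1 - a * x * x + y, b * x
        if i >= discard:
            xs.append(x)
    return np.array(xs)

xs = henon(16)
# for the classic parameters x stays roughly within [-1.3, 1.3];
# rescale to [0, 1] and spread exponentially over 200-2000 Hz
norm = (xs + 1.3) / 2.6
freqs = 200.0 * (2000.0 / 200.0) ** norm
```

The same sequence, read at a slower rate, can drive larger-scale structure (e.g. section durations) so micro and macro levels share one generative source.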

    Resting Bell Release.

    August 25th, 2007

    A release I did for Christian Roth's netlabel Resting Bell back in 2007, exploring the creation of rich, organic-sounding textures using FM synthesis only. All three works are based on a simple, repeating harmonic progression; the musical diversity of each work comes mainly from slowly modulating FM indices to dynamically vary the timbral properties of the composite sound continuum. Although quite dated by now, I still consider this one of my personal favourites from a musical point of view: complexity through simplicity.
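The core technique can be sketched as a single FM carrier/modulator pair whose index ramps slowly over the note, brightening the timbre as it unfolds; the carrier frequency, ratio and index range below are arbitrary stand-ins:

```python
import numpy as np

def fm_tone(dur=2.0, sr=44100, fc=220.0, ratio=1.5, index_max=4.0):
    """Simple FM pair: carrier fc, modulator fc*ratio, with the
    modulation index sweeping linearly from 0 to index_max."""
    t = np.arange(int(dur * sr)) / sr
    index = index_max * t / dur              # slow linear index sweep
    fm = fc * ratio                          # modulator frequency
    mod = index * np.sin(2 * np.pi * fm * t)
    return np.sin(2 * np.pi * fc * t + mod)  # phase modulation form

sig = fm_tone()
```

A non-integer ratio like 1.5 yields inharmonic-leaning spectra; slower, curved index envelopes give the gradual timbral drift described above.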

  • Techno

    FM Synthesis Only.

    November 11th, 2014

    Apart from some post-processing such as EQ and a touch of reverb and delay, all sounds for these works were generated using FM synthesis only.

  • Audiovisual

    The Schrödinger Equation.

    May 4th, 2009

    First experiment with the Schrödinger equation. In quantum mechanics, the Schrödinger equation describes how the state of a physical system changes with time.

    I experimented with using the data from the equation both to generate images and to control certain sound parameters. The curves obtained from the equation are, for instance, used as envelopes for the different sound layers, but also as probability functions which determine the duration, (quantized) pitch, density etc. of the grains and the additive ‘waves’.
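One way to use such a curve as a probability function for quantized pitch is to treat it as a weight vector over a discrete note set; the Gaussian stand-in curve and note range below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

# a curve standing in for one obtained from the equation
curve = np.exp(-0.5 * ((np.arange(13) - 6) / 3.0) ** 2)

# quantized pitch: use the curve as weights over a chromatic octave
midi_notes = np.arange(60, 73)            # C4..C5
weights = curve / curve.sum()
grains = rng.choice(midi_notes, size=200, p=weights)
freqs = 440.0 * 2 ** ((grains - 69) / 12.0)   # MIDI note -> Hz
```

Continuous parameters such as grain duration or density can be drawn the same way over a finer grid.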

    Most of the sounds for this piece are a combination of granular and additive synthesis, so as to be conceptually in accordance with the wave-particle duality of all matter and radiation. The frequencies for all sounds are based on the Lyman series. For this piece I also used a single base image, which I processed over time in various ways.
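The Lyman-series frequencies themselves lie in the ultraviolet, so only their interval ratios can be carried over to an audible register; a possible mapping (the 220 Hz anchor is my assumption, not necessarily the one used in the piece):

```python
import numpy as np

R = 1.0973731568e7   # Rydberg constant, 1/m
c = 2.99792458e8     # speed of light, m/s

# Lyman series: transitions n -> 1, giving f = c * R * (1 - 1/n^2)
n = np.arange(2, 10)
f_phys = c * R * (1 - 1 / n**2)   # physical frequencies in Hz (UV range)

# keep the interval ratios and anchor them to an audible base pitch
ratios = f_phys / f_phys[0]
audible = 220.0 * ratios
```

The ratios converge towards 4/3 (the series limit over Lyman-alpha), so the resulting pitch set compresses towards a perfect fourth above the anchor.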

    Audio synthesis done in SuperCollider, images made with Processing and processing of the images done with Jitter.

    Bothering Heights Remix.

    September 18th, 2008

    A remix made for the Rosa Ensemble as part of an assignment for my BA at the School of the Arts Utrecht.

    I reinterpreted their work ‘Bothering Heights’ by processing samples of the original with a custom granular application built in Max/MSP to create different granular textures. The visuals are generated and sequenced in Jitter, primarily using the Voronoi noise algorithm.
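Voronoi (also called Worley or cellular) noise can be sketched in a few lines: each pixel takes the distance to its nearest random feature point. This is a generic implementation for illustration, not the Jitter patch used for the visuals:

```python
import numpy as np

def worley(width, height, n_points=12, seed=0):
    """Worley/Voronoi cellular noise: each pixel's value is the
    distance to its nearest feature point, normalized to [0, 1]."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n_points, 2)) * [width, height]  # feature points
    ys, xs = np.mgrid[0:height, 0:width]
    d = np.min(np.hypot(xs[..., None] - pts[:, 0],
                        ys[..., None] - pts[:, 1]), axis=-1)
    return d / d.max()

img = worley(64, 64)
```

Animating the feature points over time yields the shifting cellular patterns typical of Voronoi-based visuals.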