DISSCO

DISSCO, a Digital Instrument for Sound Synthesis and Composition, offers a unified approach to music composition and sound synthesis, bringing both disciplines together in a seamless process. Presently, DISSCO has three main modules: LASS, a Library for Additive Sound Synthesis; CMOD, a Composition Module; and LASSIE, a Graphic User Interface (GUI). Two more modules are currently under consideration: one for the production of visual events, the other for sonification experiments.

Although DISSCO can be used to generate music in any desired style, it exhibits a strong bias towards the use of controlled randomness and encourages the user to plan the composition ahead of time and in detail (pre-composition work). DISSCO is a “black box”: once the data is fed in, the user does not intervene during the computations, and the output does not require post-processing. All three components (CMOD, LASS, and LASSIE) are written in C++; the Bison/Lex (yacc/lex) parser used to read and interpret input files in the DISSCO-1.0 version has been replaced in DISSCO-2.0 by the Xerces-C++ XML parser and muParser, a math parser. Older files can be updated using the UpgradeProjectFormat command.

LASS

LASS, the Library for Additive Sound Synthesis, is based on theoretical contributions by Hans G. Kaper, Senior Mathematician Emeritus at Argonne National Laboratory, and Sever Tipei, Professor of Music at the University of Illinois at Urbana-Champaign and Manager of the Computer Music Project of the UIUC Experimental Music Studios. LASS has also benefited from their experience with two earlier additive synthesis systems, DIASS_M4C and DISCO.
Unlike its predecessors, LASS uses function evaluations instead of table look-ups and does not require a "score." For this reason, LASS is not a program of the MusicN type. LASS can generate an arbitrary number of sounds, each of them containing an arbitrary number of partials, and provides the user with detailed control over each partial. LASS is also unique in the way it allows musicians to specify the loudness of a sound. Loudness is a nonlinear function of amplitude; to achieve an assigned perceived loudness, the amplitude of the sound is adjusted using the ISO equal-loudness curves and a number of critical bands.
Three design goals have guided this project: expandability, ease of use, and efficiency. The architecture of LASS is very modular. No doubt, new features will need to be added, and future developers must be able to easily expand the system. The library was also designed to be user-friendly. The interfaces to classes were made as clear as possible and kept consistent across objects. Extensive use of references instead of pointers helps ensure good memory management. Finally, LASS must also be efficient since sound synthesis is computationally intensive.
The general framework of the library and many of its features were written by Braden Kowitz. A number of students enrolled in the "Advanced Computer Music" seminar at UIUC have also contributed to the project, and Mert Bay wrote the BiQuad Filter.

CMOD

The central component of the composition module, CMOD, is the Event class. An event can have children events that, in turn, can become parent events and have their own children, in an arrangement reminiscent of Russian dolls (matryoshkas). There is only one Top Event (the entire piece), but there can be an arbitrary number of High, Medium, Low, and Bottom Events corresponding to various structural levels (e.g., sections, themes, chords, or any other subdivisions of the piece) and to the individual sounds or notes generated by the Bottom level. This framework reflects the realization that similar tools are used at different time scales to select values for the parameters of various event types.

Not all types of events need to be present, new categories can be added, and the hierarchy of events is flexible: a Low event may have a Medium event as a parent while, at the same time, it can have High or Medium events as its own children.

Events are defined by start time, type, and duration; they can also be assigned environmental attributes such as spatialization and reverberation as well as other parameters such as frequency, loudness, vibrato, etc. These features are inherited by the event's children, ensuring uniformity of the offspring, unless overridden by the child event.

The Top Event and the Bottom Event are different from the other events: the Top Event is unique, while the Bottom Event creates synthesized sounds, notes in a score, or both, and actually assigns to them various attributes such as vibrato, tremolo, glissando, location in space, and reverberation or user-defined score symbols.

An initial version of the composition module was written by Sever Tipei, modified by students of the “Advanced Computer Music” seminar, and greatly improved by Ryan Cavis, Andrew Burnson and Ming-Ching Chiu.

In 2012-2013, CMOD underwent intensive refactoring and improvement. The newest version was launched in May 2013. The improvements include:

  • Integrating LASS in a tighter manner.
  • Adopting XML as its input format.
  • Adopting muParser as its math parser.

LASSIE

A graphic user interface, LASSIE provides easy access to DISSCO on Linux machines. Without changing the format of the original text-based (XML) CMOD input files, LASSIE offers users an alternative way of managing and editing the files in an integrated graphic environment. LASSIE is implemented using gtkmm, the official C++ interface for the popular GUI library GTK+. The main window of LASSIE contains three parts. The first part is the drop-down menu and the toolbar at the top of the window. The left half of the main window is the Objects List, which groups the objects, including events, spectrums, notes, envelopes, etc., in separate folders. The right side of the window shows the attributes of the object selected in the Objects List. Users can inspect and edit the attributes of any object by double-clicking the object in the Objects List.

Besides the main window, the Envelope Library Window is also part of LASSIE. Users can click the “Envelope Library” button at the bottom of the Objects List to open it. This window provides a visual representation of the original text-based envelope specification, and users can modify the envelopes directly in it.

LASSIE also provides instructions in real time while users edit the attributes of objects. In addition, LASSIE checks the syntax of user input and shows warnings if the input files contain illegal syntax.

LASSIE was written by Ming-ching Chiu.

*******

Event objects

There are a number of event categories corresponding to various structural levels: Top, High, Medium, Low, and Bottom. An event object may have a number of layers, and each layer may have a number of types. In an analogy with traditional music, a layer could be seen as an acoustic instrument (e.g., violin) able to produce various types of sound (e.g., arco, pizzicato, col legno, etc.). Each event object has a start time, duration, and type, as well as a specific method of producing all its children events. The Continuum method distributes children events within the duration of the parent according to a stochastic distribution, in an unordered time sequence. Sweep also uses a stochastic distribution but ensures that the start time of the first child object is the same as the start time of the parent and that the remaining children's start times form an ordered time sequence. Discrete uses a three-dimensional matrix (start time/type/duration) to distribute children events so that they do not collide with each other within the same layer.

CMOD Files

Each event has a text file associated with it which contains basic information about the event as well as directions on how to create children of the event. While the start time, type, and duration have already been determined by the parent event, the text file specifies the number of children events, the names of their associated text files, and the method used to determine their start times, types, and durations. The Bottom events have a more complex text file associated with them, since information about the frequency, loudness, etc. of synthesized sounds or printed notes is needed. Different categories of event files are stored in separate directories, which are named after the first letter of the event category: T (Top), H (High), M (Medium), L (Low), and B (Bottom). Bottom files used to generate synthesized sounds must start with “s” (e.g., B/sfilename), while those used to generate printed notes must start with “n” (e.g., B/nfilename). The letters THMLBsn are used as flags identifying different categories of text files.

Various auxiliary text files are used to build envelopes (ENV), sieves (SIEV), and patterns (PAT), or to control the spectrum (S), the spatialization (SPA), and the reverberation (REV). A file (projectname.dat) defines the main properties of the project and precedes the creation of the Top event from the T/file.

Project Properties (projectname.dat):

  • Title of the project; a directory with that name needs to be created before accessing this window.
  • File Flags: the default values are THMLBsn (see above) and need to be preserved in this particular order, with no spaces between letters, even if not all structural levels are present in the project. The user may introduce more levels only by modifying the code.
  • Duration of the project in seconds may be a number or the output of a function (see Functions).

For sound synthesis only:

◦ Number Of Channels may vary between two and an arbitrary number (see Spatialization)

◦ Sampling Rate, in samples per second

◦ Sample Size, in bits

  • Number Of Threads should be left at 1 unless more than one processor is used.
  • Top Event: the name of the file associated with it; the name needs to start with T.
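Purely for orientation, the properties above might appear in a project file along the following lines. This fragment is hypothetical: the element names and layout are illustrative assumptions, not the exact DISSCO schema:

```xml
<!-- Hypothetical sketch of a project properties file; tag names are assumed -->
<ProjectProperties>
  <Title>myPiece</Title>
  <FileFlags>THMLBsn</FileFlags>
  <Duration>300</Duration>
  <NumberOfChannels>2</NumberOfChannels>
  <SampleRate>44100</SampleRate>
  <SampleSize>16</SampleSize>
  <NumberOfThreads>1</NumberOfThreads>
  <TopEvent>T/myPiece</TopEvent>
</ProjectProperties>
```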

Event file:

All categories of events inherit from the Event class.

  • Event Name
  • maxChildDur: the longest duration a child event may have, in seconds, as a fraction of the parent's total duration (percentage), or in EDUs; it needs to be expressed in the same units as the Duration Type.
  • EDU (Elementary Displacement Unit) Per Beat is given by the least common multiple (LCM) of the different tuplet subdivisions used. If eighth-note triplets, sixteenths, and quintuplets are present, then LCM(3, 4, 5) = 60, and the beat will have 60 EDUs in order to accommodate divisions by 4, 3, and 5 of the quarter note, the beat. (Note that, in this case, sextuplets, or 1/6 of the beat, will also be allowed, since 6 divides 60.)
  • Time Signature: any fraction whose denominator is a power of 2 is allowed (2/4, 7/8, 5/16, etc).
  • Tempo may be expressed as
  • a note value (e.g. quarter note = 60)
  • a fraction where the numerator is the number of beats in the section * EDU and the denominator is the length of the fragment in seconds. Ex: if the section is 50 beats long, the EDU = 60, and the length of the section is 28 seconds, then 50 * 60 / 28 = 3000/28 = 750/7 (the reduction of the fraction is done automatically).
  • Number of Children can be
  • determined from a given Density, which denotes the number of sounds per second desired (see SoundsPerSec)
  • entered as a Fixed number corresponding to the total number of children events
  • entered as individual numbers for each Layer.
  • Add New Layer allows the user to add new children names by Creating a New Object and dragging it into the window. The type, class, and name are displayed.
  • Children Events Definition: a choice between the three available methods of producing children events: Continuum, Sweep, or Discrete. Children of the same parent are produced with the same method.
  • If Discrete is chosen:
  • Attack Sieve: sieve defining all possible discrete start times for children events within the duration of the parent.
  • Duration Sieve: sieve defining all possible discrete durations for children objects (have to be ≤ maxChildDur).

(NOTE: When specifying children attributes in layers [see below] make sure you click outside the line, in the window, after each entry to secure it.)

  • If Continuum is chosen:
  • Start Time could be a floating point number or a function.
  • If Sweep is chosen:
  • use PREVIOUS_CHILD_DURATION for an uninterrupted sequence of sounds
  • for various intervals between sounds, use a list together with PREVIOUS_CHILD_DURATION
  • Value Type for Start Time and Duration:
  • Percentage: a floating point number between 0 and 1 (not available for Discrete) – refers to the duration of the parent event
  • EDUs (Elementary Displacement Units): an integer; EDU = 1/LCM (Least Common Multiple) of all beat subdivisions
  • Seconds: a float (not available for Discrete)
  • Type of the child/event: an integer starting with 0
  • Duration could be a floating point number or a function.
  • Value Type for Duration:
  • see above
  • Layers: Independent streams of events containing types of sound. Ex. a layer representing “violin” contains a type “arco” and a type “pizz.”; inside a layer, sounds of different types will not overlap, sounds in different layers may overlap.

Bottom file:

More information is required by a Bottom object in order to create sounds or notes. A Bottom file is therefore the same as the generic Event file but also includes:

  • Frequency, a floating point number, can be defined using:
  • Equal tempered: an integer, the number of semitones from C0 = 0 (e.g., C4 = 48).
  • Fundamental:
  • Fundamental Frequency in Hertz, chosen by the user (min. fundamental frequency = 1, not 0).
  • Partial number: the number of the partial of the specified Fundamental Frequency above
  • Continuum:
  • frequency in Hertz
  • Power of Two: a value that controls the exponent of 2, to be multiplied with 20 Hz, the lowest frequency; the result does not exceed 15,000 Hz, the highest frequency listed by ISO. 0 < value < 1.

frequency = 20 * 2**(value * log2(15000 / 20)), where 15,000 Hz is the max. frequency allowed

  • Loudness: values are expressed in sone units, on a logarithmic scale of 0 – 256, where a two-fold increase corresponds to a step in the traditional scale: pp, p, mf, f, ff, etc.

Modifiers

Modifiers are alterations of the fundamental parameters of sound: frequency and amplitude (and phase). They may apply to the SOUND as a whole or to each PARTIAL individually.

  • TREMOLO: amplitude modulation
  • Probability Envelope: probability that the modifier will be applied.
  • Amp Value Envelope: size (magnitude) of the distortion; 0 → no modulation; 1 → 0.1 of amplitude
  • Rate Value Envelope: rate of the modulation; 6 Hz is considered “normal”.
  • VIBRATO: frequency modulation
  • Probability Envelope: probability that the modifier will be applied.
  • Amp Value Envelope: size (magnitude) of the distortion; 0 → no modulation; 1 → 0.1 of amplitude; however, to create an FM type of sound, larger numbers are useful: 2, 4, 25, etc. (see Dodge's or Chowning's writings)
  • Rate Value Envelope: rate of the modulation; 6 Hz is considered “normal”.

A frequency envelope is used at the sound level to create glissandi and sound bends; it can also be applied to individual partials.

  • GLISSANDO: the frequency is multiplied by the value of the envelope at tn. Frequency * 2 → octave above; frequency * 0.5 → octave below; a value of 0 will produce a segmentation fault.
  • BEND: small, irregular modification of the frequency; use an envelope

There are default values for:

  • DETUNE_SPREAD
  • DETUNE_DIRECTION
  • DETUNE_VELOCITY
  • DETUNE_FUNDAMENTAL

Transients are narrow spikes/distortions in the amplitude domain or in the frequency domain. In the case of acoustic instruments, they occur briefly at the onset of the vibration.

  • AMPTRANS:
  • Probability Envelope: probability that the modifier will be applied.
  • Amp Value Envelope: size (magnitude) of the distortion; 0 → no modulation; 1 → 0.1 of amplitude
  • Rate Value Envelope: probability of occurrence of a spike
  • Width Envelope: width of the spike, default value is 1103.
  • FREQTRANS:
  • Probability Envelope: probability that the modifier will be applied.
  • Amp Value Envelope: size (magnitude) of the distortion; 0 → no modulation; 1 → 0.1 of amplitude
  • Rate Value Envelope: probability of occurrence of a spike
  • Width Envelope: width of the spike, default value is 1103.
  • WAVE_TYPE: a choice between sine wave (0) or white noise (1). More wave types to be added later. Filters can be used to select various frequency bands for white noise.

Environment

  • Spatialization: can be applied to the entire sound or to individual partials
  • SPA:
  • Stereo: a float representing the amount of sound produced by the left speaker
  • ReadSPAFile: name of the file stored in the directory SPA.

The following can be used for multi-channel playback with an arbitrary number of channels:

  • Multi_Pan: amount of sound played by each speaker
  • Polar: using polar coordinates to specify the location in space; assumes an arbitrary number of channels arranged in a circle, on a plane
  • Theta: the angle measured clockwise from 0 (straight ahead) to ±π (behind), going through negative values on the right side and positive values on the left side.

(Diagram: angle values around the circle: 0 straight ahead; -0.25 π, -0.50 π, -0.75 π on the right; 0.25 π, 0.50 π, 0.75 π on the left; ±π behind.)

  • Radius: distance from the center of the circle

Two examples of choosing between multiple files:

======

<Fun><Name>ReadSPAFile</Name>
  <File><Fun><Name>Select</Name>
    <List>
      polAshriek0c,
      polAshriek0e
    </List>
    <Index><Fun><Name>RandomInt</Name>
      <LowBound>0</LowBound>
      <HighBound>1</HighBound>
    </Fun></Index>
  </Fun></File>
</Fun>

======

<Fun><Name>ReadSPAFile</Name>
  <File><Fun><Name>Select</Name>
    <List>
      polApercText2.0,
      polApercText2.1,
      polApercText2.2,
      polApercText2.3
    </List>
    <Index><Fun><Name>RandomInt</Name>
      <LowBound>0</LowBound>
      <HighBound>3</HighBound>
    </Fun></Index>
  </Fun></File>
</Fun>

======

  • Reverberation: The Reverb class implements an artificial reverberator, built on the model in Moore's "Elements of Computer Music" book. That model, in turn, is based on Schroeder's and Moorer's work. More about reverberation in a special section below.

There are three ways in which reverberation may be applied; all three apply only to the entire sound.