Composite Voices – New Instrument Sounds

Note: Though you are welcome to read this earlier article, the topic is much better explained in the newer article, which you can read here.

Using only synthesizers with General-MIDI sounds (and a MIDI router), you can create amazing instrument sounds otherwise available only on expensive synthesizers, or with VST instruments and a number of chained effects.

I do this on Linux, using Qsynth (a GUI front-end for the Fluidsynth software synthesizer) with the FluidR3_GM soundfont, along with a MIDI router and the JACK Audio Connection Kit.

I like those sounds so much that I use them every time I perform, even though it takes a little more effort and time to set things up.

So, what are composite voices, and why would you want to use them?

A composite voice is an instrument sound (voice) made up of more than one instrument sound. My favorite example of a composite voice is a piano sound in the foreground, with a string ensemble in the background.

The reason I like using it is that when you play high notes with just a piano sound, the sound gets thin and dies away quickly. But with the composite voice (piano + strings), the sound is sustained – even on the very highest notes. It gives the illusion of a piano playing with string-orchestra backing.

Here is an example of my piano + strings composite voice. Listen especially to the highest notes, and the way they continue to sound long after the piano portion of the sound has faded away:

Piano-Strings Composite Voice Example MP3 File

I create this sound by sending the performance data (MIDI signals) from my keyboard to two different synthesizers (a foreground synthesizer and a background synthesizer) simultaneously, on the same MIDI channel. The instruments in the background synthesizer are set at a lower volume level than those of the foreground synthesizer.
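If you want to see the idea in code rather than in a patch-bay, here is a minimal sketch of that fan-out using the Python mido library. The port names are hypothetical placeholders (list your own with mido.get_output_names()); it simply copies every incoming keyboard event to both synthesizers:

# Minimal sketch: fan out one MIDI input to two synthesizers.
# Port names are hypothetical -- substitute what your system reports.
import mido

with mido.open_input('KMK-Output') as keyboard, \
     mido.open_output('Qsynth1 MIDI In') as foreground, \
     mido.open_output('Qsynth2 MIDI In') as background:
    for msg in keyboard:          # blocks, yielding each incoming event
        foreground.send(msg)      # foreground synthesizer gets everything
        background.send(msg)      # background synthesizer gets a copy too

In my own setup this duplication is done by the JACK connections and qmidiroute described below, not by a script, but the effect is the same.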

I have a short MIDI file, played to the background synthesizer, that sets up its instruments and volume levels. Along with that, the MIDI data sent to the background synthesizer is filtered, so that volume, expression, sustain-pedal, and program-change signals are not passed on to it.

With that filter in place, instrument changes and volume-level changes made in the foreground synthesizer don't change the settings of the background synthesizer.

Also, the sustain-pedal, which works great on a piano sound (which fades away quickly), doesn't get passed on to the background instrument. Using the sustain-pedal on a sustained sound (such as a string ensemble) smears the notes together in an unpleasant way, so filtering out the sustain-pedal signals is a must.
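The filtering rule itself is simple. Expressed against the same mido message objects as the sketch above, it is just a test that lets only note events through:

# Sketch of the filtering rule in front of the background synthesizer:
# only note events (which carry their loudness as velocity) are passed on,
# so volume, expression, sustain-pedal, and program-change messages never
# reach the background synthesizer.
def passes_to_background(msg):
    return msg.type in ('note_on', 'note_off')

Dropping that test into the fan-out loop above (if passes_to_background(msg): background.send(msg)) gives the same behavior that qmidiroute provides for me.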

Here is how I accomplish this on Linux:

In this case, for sound synthesis I am using Qsynth, which is a GUI front-end for the Fluidsynth software synthesizer. It is configured to provide three separate synthesizers ('engines'), of which two (Qsynth1 and Qsynth2) are used in this example. The Qsynth window is shown in the screen-shot below:


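If you prefer to skip the Qsynth GUI, a similar result can be had by starting two stand-alone Fluidsynth instances – this is an alternative to Qsynth's engines, not what I actually use. A rough sketch, assuming JACK for audio, the ALSA sequencer for MIDI, and the usual Debian/Ubuntu soundfont path (adjust as needed):

# Rough sketch: two stand-alone Fluidsynth instances instead of Qsynth engines.
# Assumes JACK audio, ALSA-sequencer MIDI, and the Debian/Ubuntu location of
# the FluidR3_GM soundfont. You still wire them up in QjackCtl afterwards.
import subprocess

SOUNDFONT = '/usr/share/sounds/sf2/FluidR3_GM.sf2'

foreground = subprocess.Popen(
    ['fluidsynth', '-i', '-a', 'jack', '-m', 'alsa_seq', SOUNDFONT])
background = subprocess.Popen(
    ['fluidsynth', '-i', '-a', 'jack', '-m', 'alsa_seq', SOUNDFONT])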
Using QjackCtl (the GUI front-end for the JACK Audio Connection Kit), I connect the MIDI instruments as shown in the screen-shot below. You can click on the “Connect” button and make the connections manually, but to make it easier, I have saved those connections as a patch-bay, which gets applied automatically:


In the picture above, notice that the KeyMusician-Keyboard “KMK-Output” sends to both Qsynth1 (the foreground synthesizer) and the qmidiroute input port (the MIDI filter). Also, qmidiroute sends to Qsynth2 (the background synthesizer).

So everything I play on my keyboard goes to Qsynth1, and also to Qsynth2 (after passing through the qmidiroute MIDI filter).

Also, I have set my MIDI player (KMK-Player-Output) to send to Qsynth2 (the background synthesizer), though I did that in the MIDI player itself, as shown in the screen-shot below:


In this case, the MIDI file to be played is my composite-voice setup file. That file (when I play it) sets up the background voices for MIDI channels 1 through 9, and 11 & 12, as shown in the screen-shot below:


Each of my keyboard's performance-panes (accessed by the function-keys) uses the corresponding MIDI channel. The voices named above are the sounds used in the background synthesizer.
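A setup file like this is easy to generate programmatically. The sketch below (using the Python mido library) writes a one-track MIDI file that sends a program change and a reduced channel volume (CC 7) on each background channel. The program number and volume value are only placeholders – the voices I actually use are the ones listed in the screen-shot above:

# Sketch of generating a composite-voice setup MIDI file with mido.
# The program number is a placeholder (48 = String Ensemble 1 in General MIDI);
# substitute the voices you actually want on each channel.
import mido

# MIDI channels 1-9, 11 and 12 (0-based in mido: 0-8, 10 and 11).
BACKGROUND_CHANNELS = list(range(0, 9)) + [10, 11]
BACKGROUND_PROGRAM = 48      # hypothetical: String Ensemble 1 everywhere
BACKGROUND_VOLUME = 70       # quieter than the foreground synthesizer

track = mido.MidiTrack()
for channel in BACKGROUND_CHANNELS:
    track.append(mido.Message('program_change', channel=channel,
                              program=BACKGROUND_PROGRAM, time=0))
    track.append(mido.Message('control_change', channel=channel,
                              control=7, value=BACKGROUND_VOLUME, time=0))
track.append(mido.MetaMessage('end_of_track', time=0))

setup = mido.MidiFile()
setup.tracks.append(track)
setup.save('composite-voice-setup.mid')

Playing the resulting file once to the background synthesizer (as described above) leaves it configured for the rest of the performance.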

Finally, there is the set-up required for the MIDI filter. Its configuration file is specified in the desktop launcher that activates it. The filtering is shown in the following two screen-shots:


In the “Map 1” tab, I specify that all note events (note-on and note-off) received on the input are passed to the output channel. The note-on events include loudness information via the velocity value.

All other events are set to be discarded, as shown in the “Unmatched” tab in the screen-shot below:




Give it a try. You can get some really amazing sounds this way, using just a General-MIDI sound-font and ordinary synthesizers.

On Windows & Mac, similar things can be done by chaining VST instruments and effects together, and/or by having VST instruments listen on different MIDI channels.

Index Of All Newsletter Articles