
Tutorial: Build a multi-polyphonic synthesiser

Learn the basics of the MPE standard and how to implement an MPE-compatible synthesiser. Connect your application to a ROLI Seaboard RISE!

Level: Intermediate

Platforms: Windows, macOS, Linux

Classes: MPESynthesiser, MPEInstrument, MPENote, MPEValue, SmoothedValue

Getting started

Download the demo project for this tutorial here: PIP | ZIP. Unzip the project and open the first header file in the Projucer.

If you need help with this step, see Tutorial: Projucer Part 1: Getting started with the Projucer.

Note

It would be helpful to read Tutorial: Build a MIDI synthesiser first, as this is used as a reference point in a number of places.

The demo project

The demo project is a simplified version of the MPEDemo project in the JUCE/examples directory. In order to get the most out of this tutorial you will need an MPE-compatible controller. MPE stands for MIDI Polyphonic Expression, which is a new specification to allow multidimensional data to be communicated between audio products.

Some examples of such MPE-compatible devices are ROLI's own Seaboard range (such as the Seaboard RISE).

Warning

The synthesiser may appear very quiet unless your controller transmits MIDI channel pressure and continuous controller 74 (timbre) in the way that the Seaboard RISE does.

With a Seaboard RISE connected to your computer, the window of the demo application should look something like the following screenshot:

The demo application

You will need to enable one of the MIDI inputs (here you can see that a Seaboard RISE is shown as an option).

The visualiser

Any notes played on your MPE-compatible device will be visualised in the lower portion of the window. This is shown in the following screenshot:

The visualiser

One key feature of MPE is that each new MIDI note event is assigned its own MIDI channel, rather than all notes from a particular controller keyboard being assigned to the same MIDI channel. This allows each individual note to be controlled independently by control change messages, pitch bend messages, and so on. In the JUCE implementation of MPE, a playing note is represented by an MPENote object. An MPENote object encapsulates the following data (the sketch after this list shows how these fields can be read):

  • The note's MIDI channel;
  • The initial MIDI note value;
  • The note-on velocity (or strike);
  • The pitch bend value for the note: derived from any MIDI pitch bend messages received on this note's MIDI channel;
  • The pressure for the note: derived from any MIDI channel pressure messages received on this note's MIDI channel;
  • The timbre for the note: typically derived from any controller messages on this note's MIDI channel for controller 74;
  • The note-off velocity (or lift): this is only valid after the note-off event has been received and until the playing sound has stopped.
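
For illustration, here is a minimal sketch of reading these fields from an MPENote (the describeNote() helper is hypothetical, not part of the demo project):

void describeNote (const juce::MPENote& note)
{
    auto channel  = note.midiChannel;                       // per-note MIDI channel
    auto number   = note.initialNote;                       // initial MIDI note number
    auto strike   = note.noteOnVelocity.asUnsignedFloat();  // note-on velocity, 0..1
    auto bend     = note.pitchbend.asSignedFloat();         // pitch bend, -1..+1
    auto pressure = note.pressure.asUnsignedFloat();        // channel pressure, 0..1
    auto timbre   = note.timbre.asUnsignedFloat();          // controller 74, 0..1
    auto freqHz   = note.getFrequencyInHertz();             // note number plus pitch bend, in Hz

    DBG ("ch " << (int) channel << ", note " << (int) number << ": " << freqHz << " Hz"
         << ", strike " << strike << ", pressure " << pressure
         << ", bend " << bend << ", timbre " << timbre);
}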

With no notes playing, you can see that the visualiser represents a conventional MIDI keyboard layout. In the demo application's visualiser, each note is represented as follows:

  • A grey filled circle represents the note-on velocity (the larger the circle, the higher the velocity);
  • The note's MIDI channel is displayed above a "+" symbol within this circle;
  • The initial MIDI note name is displayed below the "+" symbol;
  • An overlaid white circle represents the current pressure for this note (again, the higher the pressure, the larger the circle);
  • The horizontal position of the note is derived from the original note and any pitch bend applied to this note;
  • The vertical position of the note is derived from the timbre parameter for the note (from MIDI controller 74 on this note's MIDI channel).

Other setting up

Before delving further into other aspects of the MPE specification, which are demonstrated by this application, let's look at some of the other things our application uses.

First of all, our MainComponent class inherits from the AudioIODeviceCallback [1] and MidiInputCallback [2] classes:

class MainComponent : public juce::Component,
                      private juce::AudioIODeviceCallback,  // [1]
                      private juce::MidiInputCallback       // [2]
{
public:

We also have some important class members in our MainComponent class:

    juce::AudioDeviceManager audioDeviceManager;         // [3]
    juce::AudioDeviceSelectorComponent audioSetupComp;   // [4]

    Visualiser visualiserComp;
    juce::Viewport visualiserViewport;

    juce::MPEInstrument visualiserInstrument;
    juce::MPESynthesiser synth;
    juce::MidiMessageCollector midiCollector;            // [5]

    JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR (MainComponent)
};

The AudioDeviceManager [3] class handles the audio and MIDI configuration on our computer, while the AudioDeviceSelectorComponent [4] class gives us a means of configuring this from the graphical user interface (see Tutorial: The AudioDeviceManager class). The MidiMessageCollector [5] class allows us to easily collect messages into blocks of timestamped MIDI messages in our audio callback (see Tutorial: Build a MIDI synthesiser).

It is important that the AudioDeviceManager object is listed first, since we pass it to the constructor of the AudioDeviceSelectorComponent object:

MainComponent()
    : audioSetupComp (audioDeviceManager, 0, 0, 256,
                      true, // showMidiInputOptions must be true
                      true, true, false)

Notice another important argument that is passed to the AudioDeviceSelectorComponent constructor: the showMidiInputOptions argument must be true to show our available MIDI inputs.

We set up our AudioDeviceManager object in a similar way to Tutorial: The AudioDeviceManager class, but we also need to add a MIDI input callback [6]:

audioDeviceManager.initialise (0, 2, nullptr, true, {}, nullptr);
audioDeviceManager.addMidiInputDeviceCallback ({}, this); // [6]
audioDeviceManager.addAudioCallback (this);

The MIDI input callback

The handleIncomingMidiMessage() function is called whenever a MIDI message is received from any of the MIDI inputs enabled in the user interface:

void handleIncomingMidiMessage (juce::MidiInput* /*source*/,
                                const juce::MidiMessage& message) override
{
    visualiserInstrument.processNextMidiEvent (message);
    midiCollector.addMessageToQueue (message);
}

Here we pass each MIDI message to both:

  • our visualiserInstrument member, which is used to drive the visualiser display; and
  • the midiCollector member, which in turn passes the messages on to the synthesiser in the audio callback.

The audio callback

Before any audio callbacks are made, we need to inform the synth and midiCollector members of the device sample rate, in the audioDeviceAboutToStart() function:

void audioDeviceAboutToStart (juce::AudioIODevice* device) override
{
    auto sampleRate = device->getCurrentSampleRate();
    midiCollector.reset (sampleRate);
    synth.setCurrentPlaybackSampleRate (sampleRate);
}

The audioDeviceIOCallbackWithContext() function appears to do nothing MPE-specific:

void audioDeviceIOCallbackWithContext (const float* const* /*inputChannelData*/,
                                       int /*numInputChannels*/,
                                       float* const* outputChannelData,
                                       int numOutputChannels,
                                       int numSamples,
                                       const juce::AudioIODeviceCallbackContext& /*context*/) override
{
    // wrap the output channel data in an AudioBuffer
    juce::AudioBuffer<float> buffer (outputChannelData, numOutputChannels, numSamples);

    // clear it to silence
    buffer.clear();

    juce::MidiBuffer incomingMidi;

    // get the MIDI messages for this audio block
    midiCollector.removeNextBlockOfMessages (incomingMidi, numSamples);

    // synthesise the block
    synth.renderNextBlock (buffer, incomingMidi, 0, numSamples);
}
Note

In fact, this is rather similar to the SynthAudioSource::getNextAudioBlock() function in Tutorial: Build a MIDI synthesiser.

Core MPE classes

All of the MPE-specific processing is handled by the MPE classes: MPEInstrument, MPESynthesiser, MPESynthesiserVoice, MPEValue, and MPENote (which we mentioned earlier).

The MPEInstrument class

The MPEInstrument class maintains the state of the currently playing notes according to the MPE specification. An MPEInstrument object can have one or more listeners attached, and it broadcasts changes to notes as they occur. All you need to do is feed the MPEInstrument object the MIDI data and it handles the rest.
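
As an aside, a minimal sketch of this listener mechanism might look like the following (the NoteLogger class is hypothetical; the demo's Visualiser component is driven in a similar way):

struct NoteLogger : public juce::MPEInstrument::Listener
{
    void noteAdded (juce::MPENote note) override
    {
        DBG ("note on: channel " << (int) note.midiChannel);
    }

    void noteReleased (juce::MPENote note) override
    {
        DBG ("note off: channel " << (int) note.midiChannel);
    }
};

// usage: attach the listener, then simply feed the instrument raw MIDI
// instrument.addListener (&someNoteLogger);
// instrument.processNextMidiEvent (someMidiMessage);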

In the MainComponent constructor we configure the MPEInstrument in legacy mode and set the default pitch bend range to 24 semitones:
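
visualiserInstrument.enableLegacyMode (24); // same configuration as the synth, shown below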

This special mode is for backwards compatibility with non-MPE MIDI devices; in this mode, the instrument ignores the current MPE zone layout.

Note

See Tutorial: Understanding MPE zones for an introduction to more flexible approaches using zones and zone layouts.

In the MainComponent::handleIncomingMidiMessage() function we pass the MIDI messages on to our visualiserInstrument object:
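
visualiserInstrument.processNextMidiEvent (message);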

In this example we are using an MPEInstrument object directly because we need it to update our visualiser display. For the purposes of audio synthesis we don't need to create a separate MPEInstrument object: the MPESynthesiser class contains an MPEInstrument object that it uses to drive the synthesiser.

The MPESynthesiser class

We set up our MPESynthesiser with the same configuration as our visualiserInstrument object (in legacy mode with a pitch bend range of 24 semitones):

synth.enableLegacyMode (24);
synth.setVoiceStealingEnabled (false);

The MPESynthesiser class can also handle voice stealing for us, but as you can see here, we turn this off. When voice stealing is enabled, the synth will try to take over an existing voice if it runs out of voices and needs to play another note.
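
For context, voices are added to the synthesiser elsewhere in the MainComponent constructor. A sketch of that setup might look like this (the voice count of 15 is an assumption here, one voice per MPE member channel):

// add one voice per MPE member channel (15 is an assumed count)
for (auto i = 0; i < 15; ++i)
    synth.addVoice (new MPEDemoSynthVoice());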

As we have already seen in the MainComponent::audioDeviceAboutToStart() function, we need to set the MPESynthesiser object's sample rate for it to work correctly:
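
synth.setCurrentPlaybackSampleRate (sampleRate);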

And as we have also already seen in the MainComponent::audioDeviceIOCallbackWithContext() function, we simply pass it a MidiBuffer object containing the messages that we want it to use to perform its synthesis operation:
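
synth.renderNextBlock (buffer, incomingMidi, 0, numSamples);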

The MPESynthesiserVoice class

You can generally use the MPESynthesiser and MPEInstrument classes as they are (although both classes can be used as base classes if you need to override some behaviours). The most important class you must override in order to use the MPESynthesiser class is the MPESynthesiserVoice class. This is what actually generates the audio signals from your synthesiser's voices.

注記

This is similar to the SynthesiserVoice class that is used with the Synthesiser class, but it is customised to implement the MPE specification. See Tutorial: Build a MIDI synthesiser.

The code for our voice class is in the MPEDemoSynthVoice class of the demo project. Here we implement the MPEDemoSynthVoice class to inherit from the MPESynthesiserVoice class:

class MPEDemoSynthVoice : public juce::MPESynthesiserVoice
{

We have some member variables to keep track of values to control the level, timbre, and frequency of the tone that we generate. In particular, we use the SmoothedValue class, which is really useful for smoothing out discontinuities in the signal that would otherwise be caused by value changes (see Tutorial: Build a sine wave synthesiser).

    juce::SmoothedValue<double> level, timbre, frequency;

    double phase = 0.0;
    double phaseDelta = 0.0;
    double tailOff = 0.0;

    // some useful constants
    static constexpr auto maxLevel = 0.05;
    static constexpr auto maxLevelDb = 31.0;
    static constexpr auto smoothingLengthInSeconds = 0.01;
};
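
As a reminder of how SmoothedValue behaves, here is a minimal standalone sketch (independent of the demo code): the value is given a sample rate and a ramp length, and it then interpolates towards each new target rather than jumping:

juce::SmoothedValue<double> gain;
gain.reset (44100.0, 0.01); // sample rate, then ramp length in seconds
gain.setTargetValue (1.0);  // request a jump from 0 to 1...

double current = 0.0;
for (int i = 0; i < 64; ++i)
    current = gain.getNextValue(); // ...delivered as a smooth per-sample ramp

DBG ("value after 64 samples: " << current);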

Starting and stopping voices

The key to using the MPESynthesiserVoice class is to access its (protected) MPESynthesiserVoice::currentlyPlayingNote member, an MPENote, to obtain the control information about the note during the various callbacks. For example, we override the MPESynthesiserVoice::noteStarted() function like this:

void noteStarted() override
{
    jassert (currentlyPlayingNote.isValid());
    jassert (currentlyPlayingNote.keyState == juce::MPENote::keyDown
          || currentlyPlayingNote.keyState == juce::MPENote::keyDownAndSustained);

    // get data from the current MPENote
    level    .setTargetValue (currentlyPlayingNote.pressure.asUnsignedFloat());
    frequency.setTargetValue (currentlyPlayingNote.getFrequencyInHertz());
    timbre   .setTargetValue (currentlyPlayingNote.timbre.asUnsignedFloat());

    phase = 0.0;
    auto cyclesPerSample = frequency.getNextValue() / currentSampleRate;
    phaseDelta = 2.0 * juce::MathConstants<double>::pi * cyclesPerSample;

    tailOff = 0.0;
}

The following "five dimensions" are stored in the MPENote object as MPEValue objects:

  • The note-on velocity (strike): in the MPENote::noteOnVelocity member;
  • The pitch bend: in the MPENote::pitchbend member;
  • The pressure: in the MPENote::pressure member;
  • The timbre: in the MPENote::timbre member;
  • The note-off velocity (lift): in the MPENote::noteOffVelocity member.

MPEValue objects make it easy to create values from 7-bit or 14-bit MIDI value sources, and to obtain these values as floating-point values in the range 0..1 or -1..+1.
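
For example, here is a minimal sketch of these conversions (the input values are arbitrary):

auto pressure = juce::MPEValue::from7BitInt (64);    // e.g. from a channel pressure byte
auto bend     = juce::MPEValue::from14BitInt (8192); // e.g. from a pitch-wheel message (centred)

auto asUnsigned = pressure.asUnsignedFloat(); // roughly 0.5, in the range 0..1
auto asSigned   = bend.asSignedFloat();       // 0.0, in the range -1..+1

DBG ("pressure " << asUnsigned << ", bend " << asSigned);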

注記

The MPEValue class stores the value internally using the 14-bit range.

The MPEDemoSynthVoice::noteStopped() function triggers the "release" of the note envelope (or stops the note immediately, if requested):

void noteStopped (bool allowTailOff) override
{
    jassert (currentlyPlayingNote.keyState == juce::MPENote::off);

    if (allowTailOff)
    {
        // start a tail-off by setting this flag. The render callback will pick up
        // on this and do a fade out, calling clearCurrentNote() when it's finished.

        if (tailOff == 0.0) // we only need to begin a tail-off if it isn't already underway;
                            // the stopNote method could be called more than once.
            tailOff = 1.0;
    }
    else
    {
        // we're being told to stop playing immediately, so reset everything.
        clearCurrentNote();
        phaseDelta = 0.0;
    }
}
Note

This is very similar to the SineWaveVoice::stopNote() function in Tutorial: Build a MIDI synthesiser. There isn't anything MPE-specific here.

Exercise

Modify the MPEDemoSynthVoice::noteStopped() function to allow the note-off velocity (lift) to modify the rate of release of the note. Faster lifts should result in a shorter release time.

Parameter changes

There are callbacks to tell us when the pressure, pitch bend, or timbre has changed for this note:

void notePressureChanged() override
{
    level.setTargetValue (currentlyPlayingNote.pressure.asUnsignedFloat());
}

void notePitchbendChanged() override
{
    frequency.setTargetValue (currentlyPlayingNote.getFrequencyInHertz());
}

void noteTimbreChanged() override
{
    timbre.setTargetValue (currentlyPlayingNote.timbre.asUnsignedFloat());
}

Again, we access the MPESynthesiserVoice::currentlyPlayingNote member to obtain the current value for each of these parameters.

Generating the audio

The MPEDemoSynthVoice::renderNextBlock() function actually generates the audio signal, mixing this voice's signal into the buffer that is passed in:

void renderNextBlock (juce::AudioBuffer<float>& outputBuffer,
                      int startSample,
                      int numSamples) override
{
    if (phaseDelta != 0.0)
    {
        if (tailOff > 0.0)
        {
            while (--numSamples >= 0)
            {
                auto currentSample = getNextSample() * (float) tailOff;

                for (auto i = outputBuffer.getNumChannels(); --i >= 0;)
                    outputBuffer.addSample (i, startSample, currentSample);

                ++startSample;

                tailOff *= 0.99;

                if (tailOff <= 0.005)
                {
                    clearCurrentNote();

                    phaseDelta = 0.0;
                    break;
                }
            }
        }
        else
        {
            while (--numSamples >= 0)
            {
                auto currentSample = getNextSample();

                for (auto i = outputBuffer.getNumChannels(); --i >= 0;)
                    outputBuffer.addSample (i, startSample, currentSample);

                ++startSample;
            }
        }
    }
}

It calls the MPEDemoSynthVoice::getNextSample() function to generate the waveform:

float getNextSample() noexcept
{
    auto levelDb = (level.getNextValue() - 1.0) * maxLevelDb;
    auto amplitude = std::pow (10.0f, 0.05f * levelDb) * maxLevel;

    // timbre is used to blend between a sine and a square.
    auto f1 = std::sin (phase);
    auto f2 = std::copysign (1.0, f1);
    auto a2 = timbre.getNextValue();
    auto a1 = 1.0 - a2;

    auto nextSample = float (amplitude * ((a1 * f1) + (a2 * f2)));

    auto cyclesPerSample = frequency.getNextValue() / currentSampleRate;
    phaseDelta = 2.0 * juce::MathConstants<double>::pi * cyclesPerSample;
    phase = std::fmod (phase + phaseDelta, 2.0 * juce::MathConstants<double>::pi);

    return nextSample;
}


This simply crossfades between a sine wave and a (non-bandlimited) square wave, based on the value of the timbre parameter.

Exercise

Modify the MPEDemoSynthVoice class to crossfade between two sine waves, one octave apart, in response to the timbre parameter.

Summary

In this tutorial we have introduced some of the MPE-based classes in JUCE. You should now know:

  • What MPE is.
  • That MPE-compatible devices allocate each new note its own MIDI channel.
  • How the MPENote class stores information about a note including its MIDI channel, the original note number, velocity, pitch bend, and so on.
  • That the MPEInstrument class maintains the state of the currently playing notes.
  • That the MPESynthesiser class contains an MPEInstrument object that it uses to drive the synthesiser.
  • That you must implement a class that inherits from the MPESynthesiserVoice class to implement your synthesiser's audio code.

See also