The heart of the code. Here is where
we have resamplers, filters, mixing, tuning,
voice/channel management, control routing,
and MIDI message parsers.
Here is the call chain showing how audio flows from the wavetable to the audio device (a sketch of the push loop follows the chain).
AudioFloatInputStream.read(float[], int, int) | Here sample data is converted from 16-bit format to float arrays. |
ModelAbstractResamplerStream.nextBuffer() | Sample data is read into the resampler buffer. |
ModelAbstractResamplerStream.read(float[], int, int) | Here we call the resampler and return the resampled sample data. |
SoftVoice.processAudioLogic(SoftAudioBuffer[]) | Here is where the sample data is mixed and filtered. |
SoftMainMixer.processAudioBuffers() | Here the final mixing is done and the reverb and chorus effects are applied. |
SoftMainMixer.fillBuffer() | Audio is rendered into buffers of constant size. |
SoftMainMixer.read(byte[], int, int) | Here we return the audio buffers. |
AudioInputStream.read(byte[], int, int) | Here the stream is wrapped into an AudioInputStream. |
SoftAudioPusher.run() | This is where the audio is pushed into the Mixer's SourceDataLine. |
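The sketch below shows roughly what the push loop does. It is not the
actual SoftAudioPusher implementation, just an illustration: one buffer
is pulled from the synthesizer's AudioInputStream (which drives the
whole call chain above) and written to the SourceDataLine.

    import javax.sound.sampled.AudioInputStream;
    import javax.sound.sampled.SourceDataLine;

    final class PusherSketch implements Runnable {
        private final AudioInputStream ais; // stream rendered by the synthesizer
        private final SourceDataLine line;  // open and started output line
        private final byte[] buffer;        // one render buffer of constant size

        PusherSketch(AudioInputStream ais, SourceDataLine line, int bufferSize) {
            this.ais = ais;
            this.line = line;
            this.buffer = new byte[bufferSize];
        }

        public void run() {
            try {
                while (true) {
                    // This read ends up in SoftMainMixer.read(byte[], int, int),
                    // which triggers the rendering described in the chain above.
                    int count = ais.read(buffer, 0, buffer.length);
                    if (count < 0)
                        break;
                    line.write(buffer, 0, count);
                }
            } catch (java.io.IOException e) {
                // rendering stopped; a real implementation would handle this
            }
        }
    }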
Model instruments can't be used directly by the synthesizer.
They must first be pre-processed into a SoftInstrument, which is
done in the "loadInstrument" method on the synthesizer.
A SoftInstrument has everything ready to use, and the default connections have already been added.
This means that nothing, or very little, has to be done when the user sends a program change.
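The sketch below shows the intended usage through the standard
javax.sound.midi API: the instrument is loaded up front so that the
later program change is cheap. The soundbank file name and the choice
of instrument are placeholders.

    import java.io.File;
    import javax.sound.midi.Instrument;
    import javax.sound.midi.MidiSystem;
    import javax.sound.midi.Soundbank;
    import javax.sound.midi.Synthesizer;

    public class LoadInstrumentSketch {
        public static void main(String[] args) throws Exception {
            Synthesizer synth = MidiSystem.getSynthesizer();
            synth.open();

            // Model instruments come from a soundbank (e.g. SF2 or DLS).
            Soundbank bank = MidiSystem.getSoundbank(new File("example.sf2"));
            Instrument ins = bank.getInstruments()[0];

            // loadInstrument pre-processes the model instrument into a
            // SoftInstrument with the default connections already added.
            synth.loadInstrument(ins);

            // Because the heavy work is already done, this program change
            // only has to look the loaded instrument up.
            synth.getChannels()[0].programChange(ins.getPatch().getProgram());

            synth.close();
        }
    }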
When the synthesizer is in push mode, we have two kinds of threads:
one or more threads that feed the synthesizer with MIDI data,
and another one that renders audio into the audio device.
The synthesizer is synchronized against the control mutex (the synthesizer object itself)
when MIDI is fed into the synthesizer.
When audio is rendered, we first compute the control logic
while synchronized against the control mutex, and
then we render the audio.
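The sketch below illustrates that locking scheme. The class and method
names are made up, and a plain Object stands in for the control mutex
(which in the real synthesizer is the synthesizer object itself); it is
only meant to show which work happens under the lock.

    final class RenderLoopSketch {
        private final Object controlMutex = new Object();

        // Called by the MIDI feeder thread(s).
        void processMidiMessage(byte[] message) {
            synchronized (controlMutex) {
                // update voices, channels and control routing
            }
        }

        // Called by the audio render thread for every buffer.
        void renderBuffer(float[] buffer) {
            synchronized (controlMutex) {
                // first compute the control logic under the mutex,
                // so it cannot race with incoming MIDI
            }
            // then render the audio for this buffer
        }
    }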