Spectrally Controlled Parallel Multi-FX Plugin · JUCE / C++17 · VST3 · AU
ToneLab is an audio effects plugin that decouples what effect is applied from where in the frequency spectrum it acts. Each of the five effect lanes has its own multi-band spectral shaping layer that defines which frequencies are routed into that lane's effect engine. All five lanes process in parallel and are recombined into a single stereo output.
The project was developed as a capstone for ATLAS 4010, exploring the intersection of real-time DSP constraints, JUCE plugin architecture, and interaction design for audio tooling. The core engineering challenge was building a parallel processing architecture that satisfies the hard real-time requirements of the audio thread (zero blocking, sample-accurate parameter automation, perceptually transparent wet/dry blending) while remaining musically useful.
Standard insert effects chains apply processing to the full audio spectrum. This is acoustically appropriate for some effects (a compressor shaping dynamics, for instance) but creates unwanted interactions for others. A reverb applied to a full mix introduces low-frequency wash. A distortion applied to a vocal bus blends harmonic saturation of the fundamental frequencies with the high-frequency content in ways that are often undesirable.
The conventional workaround is a parallel send architecture: duplicate the signal to multiple buses, insert an EQ on each to isolate a frequency range, apply the desired effect, and sum back to the main channel. This works, but requires significant manual routing overhead: a separate bus per effect, careful gain staging, and session complexity that grows with each added process.
ToneLab asks: what if frequency-aware routing was native to the plugin itself?
The primary design goal was to internalize the parallel send architecture into a single VST3/AU insert that behaves like any other plugin in a DAW: five independent effect lanes, each with its own spectral shaping, no additional routing, no auxiliary buses. Insert ToneLab, shape the bands, mix the result.
The secondary goal was to do this without violating real-time audio constraints: all processing must complete deterministically within the audio buffer period, with no blocking operations that could cause dropouts.
ToneLab was developed within the capstone framework over one academic year at CU Boulder. The project was scoped as a solo engineering effort focused on DSP architecture and plugin development, with visual design polish and feature comprehensiveness as secondary goals. The five effect types (Distortion, Chorus, Reverb, Delay, Saturation) were chosen for their range of spectral interaction characteristics, not to be exhaustive.
The implementation deliberately avoids third-party DSP libraries beyond JUCE's built-in dsp:: module, keeping the signal processing logic legible and the codebase self-contained.
| Stage | Mechanism | Detail |
|---|---|---|
| Input · Raw signal from host | Input gain, peak detection | Stereo 32-bit float. Input gain applied before lanes. Peak levels written atomically for the metering display. |
| Spectral Send · Per-lane shaping | SendMask5: 5-band IIR EQ | Each lane shapes a copy of the dry signal through its own 5-band EQ (3-cascade IIR per band plus a notch). The shaped signal is what feeds the effect engine, not the raw input. Band types: peak, low/high shelf, bandpass, highpass, lowpass, notch. |
| Lane Processing · 5 parallel effect engines | Chorus, Delay, Saturation, Reverb, Distortion | Each lane processes its shaped send independently. Lanes support solo and mute. A silence gate skips all processing after 50 consecutive silent blocks. |
| Recombination · Wet bus sum | Additive mix to wet buffer | Each active lane's output is accumulated into the wet buffer. Lane wet level is a scalar gain applied before accumulation. |
| Output · Dry/wet blend | Equal-power crossfade | Global mix uses a quarter-cosine crossfade: cos(angle) dry, sin(angle) wet. Keeps output level consistent and prevents stereo image narrowing at intermediate mix positions. Output gain applied last. |
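The quarter-cosine blend in the Output stage can be sketched as a standalone helper. This is a minimal illustration of the technique, not ToneLab's actual API; the name `equalPowerMix` and its signature are assumptions for the example:

```cpp
#include <cmath>

// Equal-power dry/wet blend, mix in [0, 1].
// Because dry^2 + wet^2 = cos^2 + sin^2 = 1 at every mix position,
// perceived level stays constant instead of dipping at mix = 0.5
// the way a linear crossfade does.
inline float equalPowerMix(float dry, float wet, float mix)
{
    const float angle  = mix * 1.57079632679f; // mix * pi/2
    const float dryAmt = std::cos(angle);
    const float wetAmt = std::sin(angle);
    return dry * dryAmt + wet * wetAmt;
}
```

At mix = 0.5 both gains are cos(pi/4) ≈ 0.707, so two equal signals sum to sqrt(2) in amplitude but constant unity in power, which is what keeps the level and stereo width steady through the sweep.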
Each lane owns its own spectral send mask (SendMask5) to shape what it receives. This gives the user musical control: a band can be a narrow peak, a shelf, a bandpass, or fully open. The bands overlap freely. This is more flexible than a fixed crossover and avoids the phase-sum constraints of a multiband splitter.

The reverb lane hosts both an algorithmic engine (juce::dsp::Reverb) and a convolution engine (juce::dsp::Convolution) and switches between them based on whether an IR is loaded. A smoothed EMA of each path's RMS output is maintained, and irOutputGain converges toward the ratio each block, so switching between algorithmic and IR modes does not cause a perceived level jump.

The chorus lane is built on a MicroPitch class with up to 6 independently phased voices. Each voice has its own LFO phase offset, rate multiplier, and depth scale. The LFO uses a fast polynomial sine approximation to avoid std::sin on the audio thread. Voice weights are normalized by count so output gain is consistent regardless of voice count.

All automatable parameters are read through std::atomic<float>* pointers populated once at startup in cacheParamPointers(). Each processBlock() call loads values from these pointers once and operates on the cached scalars. Last-value caches on filter coefficients (tone, reverb parameters, delay mode) skip redundant coefficient recalculation when parameters have not changed.

```cpp
// Each lane: build its spectral send, apply the FX engine, accumulate to wet bus.
// All buffers pre-allocated. No heap allocation on the audio thread.
void ToneLabAudioProcessor::processBlock (juce::AudioBuffer<float>& buffer,
                                          juce::MidiBuffer&)
{
    juce::ScopedNoDenormals noDenormals;

    // Silence gate: skip CHO, SAT, DST after 50 silent blocks.
    // Delay and Reverb keep processing so tails ring out.
    const bool inputIsSilent = (silentBlockCount > kSilenceThresholdBlocks);

    // Chorus lane: spectral send is shaped per-band inside applyChorusLane
    if (!inputIsSilent && laneActive(choSolo, choMute))
        applyChorusLane(dryBuffer, wetBuffer, 0, numS);

    // Delay lane: SendMask5 shapes the send, then applyDelayLane processes it
    if (wet > 0.0005f && !inputIsSilent && laneActive(dlySolo, dlyMute))
    {
        const bool any = sendDLY.buildSend(dryBuffer, dlySendBuf, 0, numS, nodePtrs[1], sr);
        if (any)
        {
            applyDelayLane(dlySendBuf, numS, sr);
            wetBuffer.addFrom(...);
        }
    }

    // ... Saturation, Reverb, Distortion lanes follow the same pattern

    // Equal-power crossfade: cos(angle) dry + sin(angle) wet
    const float angle  = mix * juce::MathConstants<float>::halfPi;
    const float dryAmt = std::cos(angle);
    const float wetAmt = std::sin(angle);
}
```
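The polynomial sine used by the chorus LFO is not shown in the source. One common choice that fits the description is the parabolic (Bhaskara-style) approximation sketched below; the coefficients here are the standard published ones and are an assumption about the approach, not ToneLab's exact implementation:

```cpp
#include <cmath>

// Fast parabolic sine approximation for x in [-pi, pi].
// First pass fits a parabola to sin(x); the 0.225 refinement pass
// pulls max absolute error down to roughly 0.001 -- far more than
// accurate enough for an LFO, at a fraction of the cost of std::sin.
inline float fastSin(float x)
{
    const float B = 4.0f / 3.14159265f;                    //  4/pi
    const float C = -4.0f / (3.14159265f * 3.14159265f);   // -4/pi^2
    float y = B * x + C * x * std::fabs(x);                // parabola
    return 0.225f * (y * std::fabs(y) - y) + y;            // refinement
}
```

Branchless and multiply-only, so it vectorizes well and has deterministic cost per sample, which is the property that matters on the audio thread.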
```cpp
// Tape mode: 3 kHz LPF on feedback path + slow pitch modulation on delay time
// Wow: ±1% depth at 0.5 Hz, implemented as an LFO on delaySamps
const float wowDepth = (mode == 1) ? delaySamps * 0.01f : 0.0f;
const float wowInc   = (mode == 1)
                     ? (2.0f * juce::MathConstants<float>::pi * 0.5f) / (float) sr
                     : 0.0f;

for (int i = 0; i < n; ++i)
{
    float wowOffset = wowDepth * std::sin(wowPhase);
    const float ds  = jlimit(1.0f, 192000.0f, delaySamps + wowOffset);

    float dL = delayL.popSample(0, ds, true);

    // Feedback through 3 kHz LPF for tape mode
    if (mode > 0)
        dL = dlyFbFiltL.processSample(dL);

    delayL.pushSample(0, L[i] + dL * fb * fbGainL);
}
```
A per-block RMS check against a threshold (kSilenceThresholdRMS = 1e-6f) increments a counter on silent blocks and resets it on non-silent ones. After 50 consecutive silent blocks, the Chorus, Saturation, and Distortion lanes are skipped. Delay and Reverb continue processing silence so their tails can ring out naturally.

The equal-power crossfade (cos dry, sin wet) maintains constant power across the mix range, keeping the output level and stereo width perceptually consistent.

State shared with the editor is published through a std::atomic<bool> ready flag. Input and output peak levels are written as std::atomic<float> each block and polled by the editor timer. No locks, no contention on the audio path.

The AU build is copied to ~/Library/Audio/Plug-Ins/Components/ and validated with auval before rescanning in Logic. Logic's AU validation gave useful error output when the plugin metadata or parameter tree was malformed.
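The silence-gate counter described above can be sketched as a small standalone class. This is a minimal version under the stated thresholds; the name `SilenceGate` and the per-block interface are illustrative, not ToneLab's actual types:

```cpp
#include <cmath>
#include <cstddef>

// Counts consecutive near-silent blocks. Once the count crosses the
// threshold, tail-free lanes (Chorus, Saturation, Distortion) can be
// skipped; lanes with tails (Delay, Reverb) keep processing regardless.
class SilenceGate
{
public:
    // Call once per block. Returns true when tail-free lanes may be skipped.
    bool shouldSkip(const float* samples, std::size_t n)
    {
        double sum = 0.0;
        for (std::size_t i = 0; i < n; ++i)
            sum += static_cast<double>(samples[i]) * samples[i];
        const double rms = std::sqrt(sum / static_cast<double>(n > 0 ? n : 1));

        if (rms < kSilenceThresholdRMS)
            ++silentBlockCount;      // another silent block in a row
        else
            silentBlockCount = 0;    // any signal resets the gate

        return silentBlockCount > kSilenceThresholdBlocks;
    }

private:
    static constexpr double kSilenceThresholdRMS    = 1e-6;
    static constexpr int    kSilenceThresholdBlocks = 50;
    int silentBlockCount = 0;
};
```

Resetting (rather than decrementing) on any non-silent block means the gate never opens mid-phrase: the plugin pays the full 50-block delay again after every burst of signal, trading a little wasted work for zero risk of clipping a quiet attack.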