ATLAS 4010 · University of Colorado Boulder · 2025–2026

ToneLab

Spectrally Controlled Parallel Multi-FX Plugin · JUCE / C++17 · VST3 · AU

ToneLab is an audio effects plugin that decouples what effect is applied from where in the frequency spectrum it acts. Each of the five effect lanes has its own multi-band spectral shaping layer that defines which frequencies are routed into that lane's effect engine. All five lanes process in parallel and are recombined into a single stereo output.

The project was developed as a capstone for ATLAS 4010, exploring the intersection of real-time DSP constraints, JUCE plugin architecture, and interaction design for audio tooling. The core engineering challenge was building a parallel processing architecture that satisfies the hard real-time requirements of the audio thread: zero blocking, sample-accurate parameter automation, and perceptually transparent wet/dry blending, while remaining musically useful.

Language C++17
Framework JUCE 7
Build Projucer / Xcode
Formats VST3 · AU
Platforms macOS · Windows
Course ATLAS 4010
00 — Demo

Plugin walkthrough &
feature overview.

01 — Motivation

Why build this?
The problem space.

Background
Frequency-aware effects processing

Standard insert effects chains apply processing to the full audio spectrum. This is acoustically appropriate for some effects (a compressor shaping dynamics, for instance) but creates unwanted interactions for others. A reverb applied to a full mix introduces low-frequency wash. A distortion applied to a vocal bus blends harmonic saturation of the fundamental frequencies with the high-frequency content in ways that are often undesirable.

The conventional workaround is a parallel send architecture: duplicate the signal to multiple buses, insert an EQ on each to isolate a frequency range, apply the desired effect, and sum back to the main channel. This works, but requires significant manual routing overhead: a separate bus per effect, careful gain staging, and session complexity that grows with each added process.

ToneLab asks: what if frequency-aware routing was native to the plugin itself?

Design Goal
Collapse the routing into a single insert

The primary design goal was to internalize the parallel send architecture into a single VST3/AU insert that behaves like any other plugin in a DAW: five independent effect lanes, each with its own spectral shaping, no additional routing, no auxiliary buses. Insert ToneLab, shape the bands, mix the result.

The secondary goal was to do this without violating real-time audio constraints: all processing must complete deterministically within the audio buffer period, with no blocking operations that could cause dropouts.

Scope
Academic context and technical boundaries

ToneLab was developed within the capstone framework over one academic year at CU Boulder. The project was scoped as a solo engineering effort focused on DSP architecture and plugin development, with visual design polish and feature comprehensiveness as secondary goals. The five effect types (Distortion, Chorus, Reverb, Delay, Saturation) were chosen for their range of spectral interaction characteristics rather than for exhaustive coverage.

The implementation deliberately avoids third-party DSP libraries beyond JUCE's built-in dsp:: module, keeping the signal processing logic legible and the codebase self-contained.

02 — Architecture

Signal flow &
spectral send design.

Stage · Mechanism · Detail

Input · Raw signal from host · Input gain, peak detection
Stereo 32-bit float. Input gain is applied before the lanes. Peak levels are written atomically for the metering display.

Spectral Send · Per-lane shaping · SendMask5 (5-band IIR EQ)
Each lane shapes a copy of the dry signal through its own 5-band EQ (3-cascade IIR per band plus a notch). The shaped signal, not the raw input, is what feeds the effect engine. Band types: peak, low/high shelf, bandpass, highpass, lowpass, notch.

Lane Processing · 5 parallel effect engines · Chorus, Delay, Saturation, Reverb, Distortion
Each lane processes its shaped send independently. Lanes support solo and mute. A silence gate skips the Chorus, Saturation, and Distortion lanes after 50 consecutive silent blocks; Delay and Reverb keep running so their tails ring out.

Recombination · Wet bus sum · Additive mix to wet buffer
Each active lane's output is accumulated into the wet buffer. Lane wet level is a scalar gain applied before accumulation.

Output · Dry/wet blend · Equal-power crossfade
The global mix uses a quarter-cosine crossfade: cos(angle) dry, sin(angle) wet. This keeps the output level consistent and prevents stereo image narrowing at intermediate mix positions. Output gain is applied last.
03 — Design Decisions

Key technical choices
and their rationale.

Spectral Routing
Send shaping over crossover splitting
Rather than splitting the signal into hard frequency bands via crossover filters, each lane uses a parametric EQ send (SendMask5) to shape what it receives. This gives the user musical control: a band can be a narrow peak, a shelf, a bandpass, or fully open. The bands overlap freely. This is more flexible than a fixed crossover and avoids the phase-sum constraints of a multiband splitter.
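As a sketch of the idea, one band of such a send can be implemented as an RBJ-style peaking biquad applied to a copy of the dry signal. The `SendMask5` class, its 3-cascade structure, and its band types are the project's; everything below (names, plain `std::vector` buffers, single-band scope) is illustrative:

```cpp
#include <cmath>
#include <vector>

// One peaking band of a hypothetical spectral send.
// RBJ "Audio EQ Cookbook" peak filter; gainDb = 0 leaves the signal
// untouched, so a fully "open" band is a true pass-through.
struct PeakBand
{
    float b0 = 1, b1 = 0, b2 = 0, a1 = 0, a2 = 0; // normalized coefficients
    float z1 = 0, z2 = 0;                          // transposed direct form II state

    void set(float sampleRate, float freqHz, float Q, float gainDb)
    {
        const float A     = std::pow(10.0f, gainDb / 40.0f);
        const float w0    = 2.0f * 3.14159265f * freqHz / sampleRate;
        const float alpha = std::sin(w0) / (2.0f * Q);
        const float cw    = std::cos(w0);
        const float a0    = 1.0f + alpha / A;
        b0 = (1.0f + alpha * A) / a0;
        b1 = (-2.0f * cw) / a0;
        b2 = (1.0f - alpha * A) / a0;
        a1 = (-2.0f * cw) / a0;
        a2 = (1.0f - alpha / A) / a0;
    }

    float processSample(float x)
    {
        const float y = b0 * x + z1;
        z1 = b1 * x - a1 * y + z2;
        z2 = b2 * x - a2 * y;
        return y;
    }
};

// Shape a copy of the dry block; the caller would feed the
// result to the lane's effect engine, never the raw input.
std::vector<float> buildSend(const std::vector<float>& dry, PeakBand& band)
{
    std::vector<float> send(dry.size());
    for (size_t i = 0; i < dry.size(); ++i)
        send[i] = band.processSample(dry[i]);
    return send;
}
```

Because overlapping parametric bands only shape gain (no hard band edges), two lanes can both receive the same frequency region at different weights, which a crossover splitter cannot express.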
Reverb
Dual-engine with runtime level matching
The reverb lane runs both an algorithmic engine (juce::dsp::Reverb) and a convolution engine (juce::dsp::Convolution) and switches between them based on whether an IR is loaded. A smoothed EMA of each path's RMS output is maintained, and irOutputGain converges toward the ratio each block, so switching between algorithmic and IR modes does not cause a perceived level jump.
Chorus
Custom multi-voice micro-pitch engine
Rather than a standard delay-line chorus, the chorus lane uses a custom MicroPitch class with up to 6 independently phased voices. Each voice has its own LFO phase offset, rate multiplier, and depth scale. The LFO uses a fast polynomial sine approximation to avoid std::sin on the audio thread. Voice weights are normalized by count so output gain is consistent regardless of voice number.
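The exact polynomial used by `MicroPitch` isn't reproduced here; a common choice for an audio-thread LFO is the parabolic sine approximation with one refinement pass, sketched below with illustrative coefficients. The per-voice phase offset scheme is likewise a plausible sketch, not the project's code:

```cpp
#include <cmath>

// Parabolic sine approximation with one refinement pass.
// Valid for phase in [-pi, pi]; max error is on the order of 1e-3,
// inaudible for an LFO, and avoids std::sin on the audio thread.
inline float fastSin(float x)
{
    const float pi = 3.14159265358979f;
    const float B  =  4.0f / pi;
    const float C  = -4.0f / (pi * pi);

    float y = B * x + C * x * std::fabs(x);   // rough parabola
    y = 0.225f * (y * std::fabs(y) - y) + y;  // refinement pass
    return y;
}

// Per-voice LFO phase: offset each voice evenly around the circle,
// then wrap into [-pi, pi] before calling fastSin.
inline float voicePhase(float basePhase, int voice, int numVoices)
{
    const float twoPi = 6.28318530717959f;
    float p = basePhase + (twoPi * (float)voice) / (float)numVoices;
    while (p >  3.14159265f) p -= twoPi;
    while (p < -3.14159265f) p += twoPi;
    return p;
}
```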
Concurrency
Cached atomic parameter pointers
All APVTS parameters are accessed via pre-cached std::atomic<float>* pointers populated once at startup in cacheParamPointers(). Each processBlock() call loads values from these pointers once and operates on the cached scalars. Last-value caches on filter coefficients (tone, reverb parameters, delay mode) skip redundant coefficient recalculation when parameters have not changed.
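The pattern looks roughly like the sketch below. In real JUCE, `getRawParameterValue()` on the APVTS does return `std::atomic<float>*`; the `FakeParamTree` stand-in, the parameter IDs, and the recalc counter here are illustrative:

```cpp
#include <atomic>
#include <map>
#include <string>

// Stand-in for an APVTS: juce::AudioProcessorValueTreeState::
// getRawParameterValue() returns std::atomic<float>* the same way.
struct FakeParamTree
{
    std::map<std::string, std::atomic<float>> params;
    std::atomic<float>* getRawParameterValue(const std::string& id)
    {
        return &params[id];
    }
};

struct Processor
{
    // Cached once at startup; no string lookups on the audio thread.
    std::atomic<float>* pMix  = nullptr;
    std::atomic<float>* pGain = nullptr;

    float lastGain    = -1.0f; // last-value cache
    int   recalcCount = 0;     // stands in for coefficient recomputation

    void cacheParamPointers(FakeParamTree& tree)
    {
        pMix  = tree.getRawParameterValue("mix");
        pGain = tree.getRawParameterValue("gain");
    }

    void processBlock()
    {
        // Load each parameter exactly once per block, then use the scalars.
        const float mix  = pMix->load(std::memory_order_relaxed);
        const float gain = pGain->load(std::memory_order_relaxed);
        (void)mix;

        if (gain != lastGain) // skip redundant coefficient work
        {
            lastGain = gain;
            ++recalcCount;    // e.g. recompute filter coefficients here
        }
    }
};
```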
04 — Implementation

Core code,
annotated.

PluginProcessor.cpp — processBlock() Audio thread
// Each lane: build its spectral send, apply the FX engine, accumulate to wet bus.
// All buffers pre-allocated. No heap allocation on the audio thread.

void ToneLabAudioProcessor::processBlock(
    juce::AudioBuffer<float>& buffer, juce::MidiBuffer&)
{
    juce::ScopedNoDenormals noDenormals;

    // Silence gate: skip CHO, SAT, DST after 50 silent blocks;
    // Delay and Reverb keep processing so tails ring out
    const bool inputIsSilent = (silentBlockCount > kSilenceThresholdBlocks);

    // Chorus lane: spectral send is shaped per-band inside applyChorusLane
    if (!inputIsSilent && laneActive(choSolo, choMute))
        applyChorusLane(dryBuffer, wetBuffer, 0, numS);

    // Delay lane: SendMask5 shapes the send, then applyDelayLane processes it
    if (wet > 0.0005f && !inputIsSilent && laneActive(dlySolo, dlyMute))
    {
        const bool any = sendDLY.buildSend(dryBuffer, dlySendBuf, 0, numS, nodePtrs[1], sr);
        if (any) { applyDelayLane(dlySendBuf, numS, sr); wetBuffer.addFrom(...); }
    }

    // ... Saturation, Reverb, Distortion lanes follow the same pattern

    // Equal-power crossfade: cos(angle) dry + sin(angle) wet
    const float angle  = mix * juce::MathConstants<float>::halfPi;
    const float dryAmt = std::cos(angle);
    const float wetAmt = std::sin(angle);
}
PluginProcessor.cpp — Delay lane (Tape mode detail) Wow / flutter
// Tape mode: 3 kHz LPF on feedback path + slow pitch modulation on delay time
// Wow: ±1% depth at 0.5 Hz — implemented as an LFO on delaySamps

const float wowDepth = (mode == 1) ? delaySamps * 0.01f : 0.0f;
const float wowInc   = (mode == 1)
    ? (2.0f * juce::MathConstants<float>::pi * 0.5f) / (float)sr
    : 0.0f;

for (int i = 0; i < n; ++i)
{
    const float wowOffset = wowDepth * std::sin(wowPhase);
    wowPhase += wowInc;                           // advance the 0.5 Hz wow LFO
    if (wowPhase > juce::MathConstants<float>::twoPi)
        wowPhase -= juce::MathConstants<float>::twoPi;

    const float ds = juce::jlimit(1.0f, 192000.0f, delaySamps + wowOffset);
    float dL = delayL.popSample(0, ds, true);

    // Feedback through 3 kHz LPF for tape mode
    if (mode > 0) dL = dlyFbFiltL.processSample(dL);
    delayL.pushSample(0, L[i] + dL * fb * fbGainL);
}
// Note 01
Silence gate
A block-level RMS check (kSilenceThresholdRMS = 1e-6f) increments a counter on silent blocks and resets it on non-silent ones. After 50 consecutive silent blocks, the Chorus, Saturation, and Distortion lanes are skipped. Delay and Reverb continue processing silence so their tails can ring out naturally.
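The gate reduces to a counter over block RMS. The threshold constants below match the ones described above; the container and method names are illustrative:

```cpp
#include <cmath>
#include <vector>

// Silence gate: counts consecutive silent blocks and reports when the
// non-tail lanes (Chorus, Saturation, Distortion) can be skipped.
struct SilenceGate
{
    static constexpr float kSilenceThresholdRMS    = 1.0e-6f;
    static constexpr int   kSilenceThresholdBlocks = 50;

    int silentBlockCount = 0;

    void push(const std::vector<float>& block)
    {
        float sum = 0.0f;
        for (float s : block) sum += s * s;
        const float rms = std::sqrt(sum / (float)block.size());

        if (rms < kSilenceThresholdRMS) ++silentBlockCount;
        else                            silentBlockCount = 0; // reset on signal
    }

    bool skipNonTailLanes() const
    {
        return silentBlockCount > kSilenceThresholdBlocks;
    }
};
```

Delay and Reverb never consult `skipNonTailLanes()`, so their feedback paths keep decaying even when the input is dead silent.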
// Note 02
Equal-power crossfade
A linear dry/wet blend reduces the combined level at mid-mix positions, which makes the stereo image appear to narrow. The quarter-cosine crossfade (cos dry, sin wet) maintains constant power across the mix range, keeping the output level and stereo width perceptually consistent.
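The difference is easy to check numerically. For uncorrelated dry and wet signals, perceived level tracks the sum of squared gains; the linear blend dips to half power at mid-mix while the quarter-cosine law holds it at exactly 1 (function names here are illustrative):

```cpp
#include <cmath>

struct CrossfadeGains { float dry, wet; };

// Naive linear blend: gains sum to 1, but power does not.
inline CrossfadeGains linearGains(float mix)
{
    return { 1.0f - mix, mix };
}

// Quarter-cosine (equal-power) blend: dry^2 + wet^2 == 1 for all mix.
inline CrossfadeGains equalPowerGains(float mix)
{
    const float halfPi = 1.57079632679f;
    const float angle  = mix * halfPi;
    return { std::cos(angle), std::sin(angle) };
}
```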
// Note 03
RTA and peak metering
A 2048-point stereo FIFO feeds a lock-free block handoff to the editor's spectrum display via an std::atomic<bool> ready flag. Input and output peak levels are written as std::atomic<float> each block and polled by the editor timer. No locks, no contention on the audio path.
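The handoff reduces to a single-producer / single-consumer exchange guarded by one `std::atomic<bool>`. A minimal sketch, with release/acquire ordering and illustrative names (the project's FIFO is 2048-point stereo; a small mono block is used here for brevity):

```cpp
#include <array>
#include <atomic>
#include <cstddef>

// Single-producer / single-consumer block handoff for a spectrum display.
// Audio thread publishes with a release store; the editor timer claims
// the block with an acquire load. No locks on either side.
template <std::size_t N>
struct SpectrumHandoff
{
    std::array<float, N> block {};
    std::atomic<bool> ready { false };

    // Audio thread: drop the block if the editor hasn't consumed the last one.
    bool publish(const std::array<float, N>& samples)
    {
        if (ready.load(std::memory_order_acquire))
            return false;                         // editor still reading; skip
        block = samples;
        ready.store(true, std::memory_order_release);
        return true;
    }

    // Editor timer: copy out if a fresh block is available.
    bool consume(std::array<float, N>& out)
    {
        if (!ready.load(std::memory_order_acquire))
            return false;
        out = block;
        ready.store(false, std::memory_order_release);
        return true;
    }
};
```

Dropping stale blocks on the audio side is the key design choice: the display can afford to miss a frame, while the audio thread can never afford to wait.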
05 — Dev Environment

Getting it running
on my machine.

01
Machine
MacBook Pro 15-inch, 2019. 2.4 GHz 8-Core Intel Core i9, 32 GB 2400 MHz DDR4, Radeon Pro Vega 16. Running macOS Sequoia 15.7.4.
02
Build toolchain
The project was configured in Projucer and built through Xcode. Projucer handles JUCE module linking and generates the Xcode project; from there the plugin was built and run locally as a VST3 and AU without any additional packaging step.
03
DAW testing
All testing was done in Logic Pro. After each build, the AU component was copied to ~/Library/Audio/Plug-Ins/Components/ and validated with auval before rescanning in Logic. Logic's AU validation gave useful error output when the plugin metadata or parameter tree was malformed.
04
Windows build
A Windows version was built later in development. The same Projucer project was used to generate a Visual Studio solution, which required adjusting a few platform-specific JUCE component flags. Cross-platform differences were minor but real, mostly around font rendering and component sizing in the editor.