A Stateful Algorithmic MIDI Sequencer for Python. Subsequence is an algorithmic composition framework that gives you a palette of mathematical building blocks - Euclidean rhythms, cellular automata, L-systems, Markov chains, cognitive melody generation - and a stateful engine that lets them interact and evolve over time. It is designed for the musician who wants to build compositions that surprise them - where patterns combine, react to context, and develop in ways that reward exploration.
Unlike tools that loop a fixed pattern forever, Subsequence rebuilds every pattern fresh before each cycle. Each rebuild has full context - the current chord, the composition section, the cycle count, shared data from other patterns - so a Euclidean rhythm can thin itself as tension builds, a cellular automaton can seed from the harmony, and a Markov chain can shift behaviour between sections. The result is music that develops over time, not music that repeats.
It is a compositional platform for your studio - generating pure MIDI to control hardware synths, modular systems, or software instruments, with no fixed limits on complexity or length.
What you need: Basic Python knowledge and any MIDI-controllable instrument - hardware synths, drum machines, modular gear, or software VSTs/DAWs. Subsequence generates MIDI data; it does not produce sound itself.
- Plain Python, no custom language. Write patterns in a standard, popular language - no domain-specific syntax to learn. Your music is versionable, shareable, and lives in standard .py files.
- A rich algorithmic palette. Euclidean and Bresenham rhythm generators, cellular automata (1D and 2D), L-system string rewriting, Markov chains, cognitive melody via the Narmour model, probability-weighted ghost notes, position-aware thinning, Perlin and pink noise, logistic chaos maps - plus groove templates, velocity shaping, and pitch-bend automation to shape how they sound. These aren't isolated features - they combine freely inside the stateful rebuild loop, feeding into each other, so compositions emerge that no single algorithm could produce alone.
- Infinite, evolving compositions. Patterns rebuild each cycle with full context - chord, section, history, external data - so music can grow and develop indefinitely, or run to a fixed structure. Or both.
- Multiple APIs and notation styles. Start with a one-line mini-notation drum pattern. Graduate to per-step control, harmonic injection, or the full Direct Pattern API - without changing tools.
- Built-in harmonic intelligence. Optional chord graphs with weighted transitions, gravity, voice leading, and Narmour-based melodic cognition. The cognitive engine writes melodies that sound human because it models deep listener expectations.
- Turn data into music. Schedule any Python function on a beat cycle. Feed in APIs, sensors, files, weather, ISS telemetry - anything Python can reach becomes a musical parameter.
- Pure MIDI, zero sound engine. No audio synthesis, no heavyweight dependencies. Route MIDI to your existing hardware or software instruments.
- Controlled randomness, not chaos. Every generative decision can be bounded by musical theory and constraints. Set a seed and every "random" decision - chords, form, note choices - becomes repeatable, intentional, and tweakable.
- From sketch to studio. Subsequence is a fast way to explore ideas - try rhythms, test harmonies, let the algorithms surprise you. When something clicks, record the session as a standard multi-channel MIDI file and bring it straight into your DAW to arrange, edit, and polish. The generative process feeds the finished product.
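The Euclidean rhythms in the palette above distribute a number of pulses as evenly as possible over a step grid. A minimal standalone sketch of the idea using a Bresenham-style error accumulator (illustrative only - Subsequence's own generators also offer rotation and multi-voice variants):

```python
def euclidean(pulses: int, steps: int) -> list[int]:
    """Spread `pulses` onsets as evenly as possible over `steps` slots,
    accumulating error the way Bresenham's line algorithm does."""
    pattern = []
    error = 0
    for _ in range(steps):
        error += pulses
        if error >= steps:
            error -= steps
            pattern.append(1)  # onset
        else:
            pattern.append(0)  # rest
    return pattern

# 3 pulses over 8 steps - a rotation of the classic tresillo.
print(euclidean(3, 8))  # → [0, 0, 1, 0, 0, 1, 0, 1]
```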
In this simplest example, using mini-notation, we create and play a drum pattern. The Composition API and Direct Pattern API are covered in more detail further down.
import subsequence
import subsequence.constants.instruments.gm_drums as gm_drums
composition = subsequence.Composition(bpm=120)
@composition.pattern(channel=9, length=4, drum_note_map=gm_drums.GM_DRUM_MAP)
def drums (p):
    p.seq("x ~ x ~", pitch="kick_1", velocity=100)
    p.seq("~ x ~ x", pitch="snare_1", velocity=90)
    p.seq("[x x] [x x] [x x] [x x]", pitch="hi_hat_closed", velocity=70)

composition.play()
Most live-coding environments are stateless: the passage of time alone determines each event. This excels at cyclic, rhythmic music (techno, polyrhythms) but struggles with narrative. Subsequence is stateful: it remembers history.
This means a pattern can look back at the previous cycle to decide its next move ("if I played a C last bar, play an E this bar"). It allows for motivic development - ideas that evolve over time rather than just repeating. It also supports traditional linear composition: because the system tracks "Global Position" and "Section", you can write a piece with a distinct Intro, Verse, and Chorus, where specific notes play at specific times, just like in a DAW.
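The "look back at last bar" idea can be sketched outside the framework in plain Python - here a dict stands in for the cross-cycle memory that Subsequence provides via closures or shared data (illustrative only, not library API):

```python
# `state` stands in for cross-cycle memory (in Subsequence, pattern
# closures or shared composition data play this role).
state = {"last_pitch": None}

def melody_cycle(state: dict) -> int:
    """Pick this bar's pitch based on what played last bar."""
    if state["last_pitch"] == 60:  # played a C last bar...
        pitch = 64                 # ...answer with an E this bar
    else:
        pitch = 60                 # otherwise restate the C
    state["last_pitch"] = pitch
    return pitch

bars = [melody_cycle(state) for _ in range(4)]
print(bars)  # → [60, 64, 60, 64] - the line alternates C and E
```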
Algorithms should sound like musicians, not random number generators. Standard generative tools often rely on "scale masking" (picking random notes from a scale), which ensures no "wrong" notes but often results in aimless melodies.
Subsequence integrates the Narmour Implication-Realization model, a theory of music cognition that predicts what listeners expect to hear. It models melodic inertia:
- Implication: A series of small steps in one direction implies continuation.
- Gap-Fill: A large leap implies a reversal to fill the gap.
By encoding these principles, Subsequence generates melodies that feel structured and intentional, satisfying the listener's innate expectations of musical grammar.
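A minimal sketch of those two principles - illustrative only, with an arbitrary 4-semitone step/leap threshold; Subsequence's Narmour engine additionally weighs chord tones, range gravity, and pitch diversity:

```python
def expected_direction(prev_interval: int) -> int:
    """Narmour-style inertia: small steps imply continuation in the
    same direction; large leaps imply a reversal (gap-fill).
    `prev_interval` is in signed semitones (+up / -down).
    Returns +1 (expect up), -1 (expect down), or 0 (no expectation)."""
    if prev_interval == 0:
        return 0
    direction = 1 if prev_interval > 0 else -1
    if abs(prev_interval) <= 4:  # a step or small skip: keep going
        return direction
    return -direction            # a leap: reverse to fill the gap

print(expected_direction(+2))  # small upward step → continue up (1)
print(expected_direction(+7))  # upward leap → reverse down (-1)
```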
Subsequence is a framework for algorithmic composition - the practice of defining musical processes and letting them run. You select from a toolkit of rhythm generators, cellular automata, Markov models, melodic algorithms, and noise functions, combine them inside patterns that rebuild with full musical context, and listen to what emerges.
- Compose the process, not just the notes. You define the rules; the framework follows them. Every run can produce something new - or, with a seed, something exactly repeatable.
- Algorithms that interact. A Euclidean kick pattern shares its root note with a Markov bassline via p.data. A cellular automaton seeds its next generation from the chord change. An L-system rhythm thins itself during the bridge. The stateful rebuild loop is what makes these connections possible.
- Code is the score. Your composition is a .py file - versionable, shareable, diffable. No custom language, no GUI, no audio engine. Python is the interface.
Subsequence connects to your existing studio. Sync to your DAW's clock, or let it drive your Eurorack system. It provides the logic; you provide the sound.
- Introduction
- What it does
- Quick start
- Composition API
- Direct Pattern API
- Mini-notation
- Form and sections
- The Conductor
- Chord inversions and voice leading
- Harmony and chord graphs
- Frozen progressions
- Seed and deterministic randomness
- Terminal display
- Web UI Dashboard (Beta)
- MIDI recording and rendering
- Live coding
- Clock accuracy
- MIDI input and external clock
- Hotkeys
- Pattern tools and hardware control
- OSC integration
- Examples
- Extra utilities
- Feature Roadmap
- Tests
- About the Author
- License
- Stateful patterns that evolve. Each pattern is a Python function rebuilt fresh every cycle with full context - current chord, section, cycle count, external data. Patterns can remember what happened last bar and decide what to do next. This is not a loop; it's a composition that develops.
- Cognitive harmony engine. Chord progressions evolve via weighted transition graphs with adjustable gravity and Narmour-based melodic inertia - a model of how listeners expect music to move. Eleven built-in palettes. Automatic voice leading. Freeze progressions to lock some sections while others evolve freely.
- Microsecond-accurate clock. A hybrid sleep+spin timing strategy achieves typical pulse jitter of < 5 μs on Linux (measured), with zero long-term drift. Pattern logic runs ahead of time and never blocks MIDI output.
- Turn anything into music. composition.schedule() runs any Python function on a beat cycle - APIs, sensors, files, ISS telemetry. Anything Python can reach becomes a musical parameter, smoothed over time with built-in EasedValue interpolation.
- Pure MIDI, zero sound engine. No audio synthesis, no heavyweight dependencies. Route to your existing hardware synths, drum machines, Eurorack, or software instruments. You provide the sound; Subsequence provides the logic.
- Rhythm and feel. Euclidean and Bresenham generators, groove templates (swing, shuffle, MPC pocket, or custom - including Ableton .agr import), randomize, velocity shaping, dropout, per-step probability, polyrhythms via independent pattern lengths, multi-voice weighted Bresenham distribution (bresenham_poly()) with no_overlap collision avoidance, ghost_fill() for probability-biased ghost note layers, thin() for position-aware per-instrument note removal (the musical inverse of ghost_fill), cellular_1d() for evolving cellular-automaton rhythms, cellular_2d() for polyphonic Life-like 2D CA patterns, logistic_map() for a deterministic chaos modulation source (dial from stable → periodic → chaos with one r parameter), pink_noise() for 1/f ("pink") noise sequences with natural multi-scale variation, p.lsystem() for self-similar pattern generation via L-system string rewriting (Fibonacci rhythms, fractal melodic contours), and p.markov() for Markov-chain melody and bassline generation.
- Melody generation. p.melody() with MelodicState applies the Narmour Implication-Realization model to single-note melodic lines: continuation after small steps, direction reversal after large leaps, chord-tone weighting, range gravity, and pitch-diversity penalty. History persists across bar rebuilds for natural phrase continuity.
- Expression. CC messages and ramps, pitch bend, note-correlated bend/portamento/slide, program changes, SysEx, and OSC output - all from within patterns.
- Form and structure. Define musical form as a weighted graph, ordered list, or generator. Patterns read p.section to adapt. Conductor signals (LFOs, ramps) shape intensity over time.
- Mini-notation. Write p.seq("x x [x x] x", pitch="kick") - subdivisions, rests, sustains, per-step probability suffixes. Quick ideas in one line.
- Scales and quantization. p.quantize("G", "dorian") snaps notes to any named scale. Built-in western and non-western scales (Hirajōshi, In-Sen, Iwato, Yo, Egyptian, pentatonics), plus register_scale() for your own.
- Randomness tools. Weighted choice, no-repeat shuffle, random walk, probability gates - controlled randomness via subsequence.sequence_utils. Deterministic seeding (seed=42) makes every decision repeatable.
- Pattern transforms. Legato, staccato, reverse, double/half-time, shift, transpose, invert, randomize, p.every(), and composition.layer().
- Two API levels. Composition API for most musicians; Direct Pattern API for power users who need persistent state or custom scheduling.
- MIDI clock. Master (clock_output()) or follower (clock_follow=True). Sync to a DAW, drive a Eurorack system, or both.
- Hardware control. CC input mapping from knobs and faders to composition.data. OSC for bidirectional communication with mixers, lighting, visuals.
- Live coding. Hot-swap patterns, change tempo, mute/unmute, tweak parameters - all during playback.
- Hotkeys. Single keystrokes to jump sections, tweak patterns, or trigger actions - with optional bar-boundary quantization.
- Web UI Dashboard (Beta). Broadcast live composition state and pattern grids to your browser via standard WebSockets.
- Recording and rendering. Record to standard MIDI file. Render to file without waiting for real-time playback.
- Install dependencies:
pip install -e .
- Run the demo (drums, bass, and arp over evolving aeolian minor harmony in E):
python examples/demo.py
For the complete API reference, see the documentation. The sections below are a quick overview.
The Composition class is the main entry point. Define your MIDI setup, create a composition, add patterns, and play:
import subsequence
import subsequence.constants.instruments.gm_drums as gm_drums
DRUMS_CHANNEL = 9
BASS_CHANNEL = 5
SYNTH_CHANNEL = 0
composition = subsequence.Composition(bpm=120, key="E")
composition.harmony(style="aeolian_minor", cycle_beats=4, gravity=0.8)
@composition.pattern(channel=DRUMS_CHANNEL, length=4, drum_note_map=gm_drums.GM_DRUM_MAP)
def drums (p):
    p.hit_steps("kick_1", [0, 4, 8, 12], velocity=100)
    p.hit_steps("snare_1", [4, 12], velocity=100)
    p.hit_steps("hi_hat_closed", range(16), velocity=80)
    p.velocity_shape(low=60, high=100)

@composition.pattern(channel=BASS_CHANNEL, length=4)
def bass (p, chord):
    root = chord.root_note(40)
    p.sequence(steps=[0, 4, 8, 12], pitches=root)
    p.legato(0.9)

@composition.pattern(channel=SYNTH_CHANNEL, length=4)
def arp (p, chord):
    pitches = chord.tones(root=60, count=4)
    p.arpeggio(pitches, step=0.25, velocity=90, direction="up")

if __name__ == "__main__":
    composition.play()

When output_device is omitted, Subsequence auto-discovers available MIDI devices. If only one device is connected it is used automatically; if several are found you are prompted to choose. To skip the prompt, pass the device name directly: Composition(output_device="Your Device:Port", ...).
MIDI channels and drum note mappings are defined by the musician in their composition file - the module does not ship studio-specific constants. Channels default to 0-based numbering (0-15, matching the MIDI protocol). To use 1-based numbering (1-16, matching instrument panels - channel 10 is drums), pass zero_indexed_channels=False:
composition = subsequence.Composition(bpm=120, key="E", zero_indexed_channels=False)
@composition.pattern(channel=10, length=4, drum_note_map=gm_drums.GM_DRUM_MAP)
def drums (p):
    ...

Patterns are plain Python functions, so anything you can express in Python is fair game. A few more features:
# Per-step pitch, velocity, and duration control.
@composition.pattern(channel=0, length=4)
def melody (p):
    p.sequence(
        steps=[0, 4, 8, 12],
        pitches=[60, 64, 67, 72],
        velocities=[127, 100, 110, 100],
        durations=[0.5, 0.25, 0.25, 0.5],
    )
# Non-quarter-note grid: 6 sixteenth notes (reads like "6/16" in a score).
# hit_steps() and sequence() automatically use 6 grid slots.
import subsequence.constants.durations as dur
@composition.pattern(channel=0, length=6, unit=dur.SIXTEENTH)
def riff (p, chord):
    root = chord.root_note(64)
    p.sequence(steps=[0, 1, 3, 5], pitches=[root+12, root, root, root])
# Per-step probability - each hi-hat has a 70% chance of playing.
@composition.pattern(channel=DRUMS_CHANNEL, length=4, drum_note_map=DRUM_NOTE_MAP)
def hats (p):
    p.hit_steps("hh", list(range(16)), velocity=80, probability=0.7)
# Schedule a repeating background task (runs in a thread pool).
# wait_for_initial=True blocks until the first run completes before playback starts.
# Optionally declare a `p` parameter to receive a ScheduleContext with p.cycle (0-indexed).
def fetch_data (p):
    if p.cycle == 0:
        composition.data["value"] = initial_api_call()
    else:
        composition.data["value"] = some_external_api()

composition.schedule(fetch_data, cycle_beats=32, wait_for_initial=True)

The Direct Pattern API gives you full control over the sequencer, harmony, and scheduling. Patterns are classes instead of decorated functions - you manage the event loop yourself.
Full example - same music as the Composition API demo above:
import asyncio
import subsequence.composition
import subsequence.constants
import subsequence.constants.instruments.gm_drums as gm_drums
import subsequence.harmonic_state
import subsequence.pattern
import subsequence.pattern_builder
import subsequence.sequencer
DRUMS_CHANNEL = 9
BASS_CHANNEL = 5
SYNTH_CHANNEL = 0
class DrumPattern (subsequence.pattern.Pattern):
    """Kick, snare, and hi-hats - built using the PatternBuilder bridge."""

    def __init__ (self) -> None:
        super().__init__(channel=DRUMS_CHANNEL, length=4)
        self._build()

    def _build (self) -> None:
        self.steps = {}
        p = subsequence.pattern_builder.PatternBuilder(
            self, cycle=0, drum_note_map=gm_drums.GM_DRUM_MAP
        )
        p.hit_steps("kick_1", [0, 4, 8, 12], velocity=100)
        p.hit_steps("snare_1", [4, 12], velocity=100)
        p.hit_steps("hi_hat_closed", range(16), velocity=80)
        p.velocity_shape(low=60, high=100)

    def on_reschedule (self) -> None:
        self._build()

class BassPattern (subsequence.pattern.Pattern):
    """Quarter-note bass following the harmony engine's current chord."""

    def __init__ (self, harmonic_state: subsequence.harmonic_state.HarmonicState) -> None:
        super().__init__(channel=BASS_CHANNEL, length=4)
        self.harmonic_state = harmonic_state
        self._build()

    def _build (self) -> None:
        self.steps = {}
        chord = self.harmonic_state.get_current_chord()
        root = chord.root_note(40)
        for beat in range(4):
            self.add_note_beats(beat, pitch=root, velocity=100, duration_beats=0.9)

    def on_reschedule (self) -> None:
        self._build()

class ArpPattern (subsequence.pattern.Pattern):
    """Ascending arpeggio cycling through the current chord's tones."""

    def __init__ (self, harmonic_state: subsequence.harmonic_state.HarmonicState) -> None:
        super().__init__(channel=SYNTH_CHANNEL, length=4)
        self.harmonic_state = harmonic_state
        self._build()

    def _build (self) -> None:
        self.steps = {}
        chord = self.harmonic_state.get_current_chord()
        pitches = chord.tones(root=60, count=4)
        self.add_arpeggio_beats(pitches, step_beats=0.25, velocity=90)

    def on_reschedule (self) -> None:
        self._build()

async def main () -> None:
    seq = subsequence.sequencer.Sequencer(initial_bpm=120)
    harmonic_state = subsequence.harmonic_state.HarmonicState(
        key_name="E", graph_style="aeolian_minor", key_gravity_blend=0.8
    )
    await subsequence.composition.schedule_harmonic_clock(
        seq, lambda: harmonic_state, cycle_beats=4
    )
    drums = DrumPattern()
    bass = BassPattern(harmonic_state)
    arp = ArpPattern(harmonic_state)
    await subsequence.composition.schedule_patterns(seq, [drums, bass, arp])
    await subsequence.composition.run_until_stopped(seq)

if __name__ == "__main__":
    asyncio.run(main())

For a larger example with form sections and five patterns, see examples/arpeggiator.py.
| Feature | Composition API | Direct Pattern API |
|---|---|---|
| Primary Paradigm | Declarative / Functional | Object-Oriented (OO) |
| User Code | Decorated functions | Pattern subclasses |
| Complexity | Low (Musician-friendly) | Medium (Developer-friendly) |
| Lifecycle | Automated (play()) | Manual (asyncio.run()) |
| State | Stateless builders | Persistent instance variables |
1. Composition API (composition.py)
This is the recommended starting point for most musicians. It handles the infrastructure (async loop, MIDI device discovery, clock management) so you can focus on writing patterns.
- Best for: Rapid prototyping, standard musical forms, live coding.
- Limitation: Patterns are stateless functions that get rebuilt from scratch every cycle. To keep state (like a counter), you need global variables or closures.
2. Direct Pattern API (pattern.py)
This gives you full control by letting you subclass Pattern directly. It's for power users who need features the Composition API abstracts away.
- Unique Capabilities:
  - Persistent State: Store variables in self that persist across cycles (e.g., an evolving density counter).
  - Incremental Updates: In on_reschedule(), you can modify existing notes instead of clearing self.steps.
  - Custom Scheduling: Launch async tasks that don't align with the pattern's cycle.
  - Multiple Contexts: Run multiple independent sequencers or harmonic states.
- Example: A pattern that gets denser every cycle:

class EvolvingPattern(subsequence.pattern.Pattern):
    def __init__(self):
        super().__init__(channel=0, length=4)
        self.density = 0.5

    def on_reschedule(self):
        self.density += 0.05  # State persists!
        self.steps = {}       # Clear old notes
        self.euclidean(pulses=int(16 * self.density))
The Composition object stores its harmonic and form state internally. After calling harmony() and form(), three read-only properties expose them:
- composition.harmonic_state - the HarmonicState object (same one patterns read from)
- composition.form_state - the FormState object (same one p.section reads from)
- composition.sequencer - the underlying Sequencer instance
If you need Pattern subclasses alongside decorated patterns, the simplest approach is to use the Direct Pattern API for the entire composition - create a HarmonicState and FormState manually, then pass them to both simple helper patterns and complex Pattern subclasses. examples/demo.py and examples/demo_advanced.py produce the same music using each API.
For quick rhythmic or melodic entry, Subsequence offers a concise string syntax inspired by live-coding environments. This allows you to express complex rhythms and subdivisions without verbose list definitions.
When you provide a pitch argument, the string defines the rhythm. Any symbol (except special characters) is treated as a hit.
@composition.pattern(channel=DRUMS_CHANNEL, length=4)
def drums(p):
    # Kick on beats 0, 2, 3
    p.seq("x . x x", pitch="kick")
    # Hi-hats with subdivisions:
    # [x x] puts two hits in the space of one
    p.seq("x [x x] x x", pitch="hh", velocity=80)

When pitch is omitted, the symbols in the string are interpreted as pitches (MIDI note numbers or drum names).
@composition.pattern(channel=SYNTH_CHANNEL, length=4)
def melody(p):
    # Play 60, 62, hold 62, then 64
    # "_" sustains the previous note
    p.seq("60 62 _ 64")

| Symbol | Description |
|---|---|
| x | Event (note/hit) |
| . or ~ | Rest |
| _ | Sustain (legato) |
| [ ... ] | Subdivision |
| x?0.6 | Probability suffix - fires with the given probability (0.0–1.0) |
While designed for the Composition API, you can use mini-notation in Pattern subclasses by wrapping self in a PatternBuilder:
def _build_pattern(self):
    # Create a transient builder to access high-level features
    p = subsequence.pattern_builder.PatternBuilder(self, cycle=0)
    p.seq("x . x [x x]", pitch=36)

Define the large-scale structure of your composition with composition.form(). Patterns read p.section to decide what to play.
A dict defines a weighted transition graph. Each section has a bar count and a list of (next_section, weight) transitions. Weights control probability - 3:1 means 75%/25%. Sections with an empty list [] self-loop forever. Sections with None are terminal - the form ends after they complete.
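The weighted pick itself is ordinary weighted random choice. A standalone sketch of how 3:1 weights yield roughly 75%/25% outcomes (illustrative only, not the library's form engine):

```python
import random

def pick_next(transitions: list[tuple[str, int]], rng: random.Random) -> str:
    """Weighted choice over (next_section, weight) pairs."""
    names = [name for name, _ in transitions]
    weights = [weight for _, weight in transitions]
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(42)  # a seed makes the form decisions repeatable
picks = [pick_next([("chorus", 3), ("bridge", 1)], rng) for _ in range(1000)]
print(picks.count("chorus") / len(picks))  # close to 0.75 with 3:1 weights
```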
# Intro plays once, then never returns. The outro ends the piece.
composition.form({
    "intro": (4, [("verse", 1)]),
    "verse": (8, [("chorus", 3), ("bridge", 1)]),
    "chorus": (8, [("breakdown", 2), ("verse", 1), ("outro", 1)]),
    "bridge": (4, [("chorus", 1)]),
    "breakdown": (4, [("verse", 1)]),
    "outro": (4, None),
}, start="intro")

@composition.pattern(channel=9, length=4, drum_note_map=DRUM_NOTE_MAP)
def drums (p):
    p.hit_steps("kick", [0, 4, 8, 12], velocity=127)
    # Mute snare outside the chorus - the pattern keeps cycling silently.
    if not p.section or p.section.name != "chorus":
        return
    # Build intensity through the section (0.0 → ~1.0).
    vel = int(80 + 20 * p.section.progress)
    p.hit_steps("snare", [4, 12], velocity=vel)

A list of (name, bars) tuples plays sections in order. With loop=True, it cycles back to the start:
composition.form([("intro", 4), ("verse", 8), ("chorus", 8)], loop=True)

Generators support stochastic or evolving structures:
def my_form ():
    yield ("intro", 4)
    while True:
        yield ("verse", random.choice([8, 16]))
        yield ("chorus", 8)

composition.form(my_form())

p.section is a SectionInfo object (or None when no form is configured):
| Property | Type | Description |
|---|---|---|
| name | str | Current section name |
| bar | int | Bar within section (0-indexed) |
| bars | int | Total bars in this section |
| progress | float | bar / bars (0.0 → ~1.0) |
| first_bar | bool | True on the first bar of the section |
| last_bar | bool | True on the last bar of the section |
| next_section | str? | Name of the upcoming section, or None at the end |
next_section is pre-decided when the current section begins (graph mode picks probabilistically; list mode peeks the iterator). Use it for lead-ins:
if p.section and p.section.last_bar and p.section.next_section == "chorus":
    p.hit_steps("snare", range(0, 16, 2), velocity=100)  # Snare roll into chorus

A performer or code can override the pre-decided next section with composition.form_next("chorus") - see Hotkeys.
p.bar is always available (regardless of form) and tracks the global bar count since playback started.
To replay the same chords every time a section recurs, see Frozen progressions.
Patterns often feel static when they just loop. The Conductor provides global signals (LFOs and automation lines) that patterns can read to modulate parameters over time.
Create signals in your composition setup:
# A sine wave LFO that cycles every 16 bars
composition.conductor.lfo("swell", shape="sine", cycle_beats=16*4)
# A ramp that builds from 0.0 to 1.0 over 32 bars, then stays at 1.0
composition.conductor.line("intensity", start_val=0.0, end_val=1.0, duration_beats=32*4)

Use p.signal(name) to read a conductor signal at the current bar:
@composition.pattern(channel=0, length=4)
def pads(p, chord):
    dynamics = p.signal("swell")
    p.chord(chord, root=60, velocity=int(60 + 60 * dynamics))

For explicit beat control, use p.c.get(name, beat) directly.
By default, all ramps are linear. Pass shape= to any ramp to change how the value moves:
# Slow build that accelerates - good for intensity lines
composition.conductor.line("build", start_val=0.0, end_val=1.0, duration_beats=64, shape="ease_in")
# S-curve BPM shift - the tempo barely moves at first, rushes through the middle, then settles gently
composition.target_bpm(140, bars=16, shape="ease_in_out")
# Filter sweep - cubic response approximates how we hear filter changes
@composition.pattern(channel=0, length=4)
def sweep (p):
    p.cc_ramp(74, 0, 127, shape="exponential")

Available shapes: "linear" (default), "ease_in", "ease_out", "ease_in_out", "exponential", "logarithmic", "s_curve". You can also pass any callable that maps a float in [0, 1] to a float in [0, 1] for a custom curve. See subsequence.easing for details.
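Because any such [0, 1] → [0, 1] callable is accepted, a custom curve is just a function. Two common curves as plain Python (these particular formulas are illustrative stand-ins; Subsequence's named built-ins may use different definitions):

```python
def ease_in(t: float) -> float:
    """Quadratic ease-in: starts slow, accelerates."""
    return t * t

def ease_in_out(t: float) -> float:
    """Smoothstep S-curve: gentle at both ends, fast in the middle."""
    return t * t * (3 - 2 * t)

# Both keep the endpoints fixed and stay inside [0, 1].
print(ease_in(0.5), ease_in_out(0.5))  # → 0.25 0.5
```

A function like either of these could then be passed as the shape= argument in place of a built-in name.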
Subsequence offers two complementary ways to store values: Data (state) and Conductor (signals). Use whichever fits:
| | composition.data / p.data | composition.conductor |
|---|---|---|
| Question it answers | "What is the value RIGHT NOW?" | "What was the value at beat 40?" |
| Nature | Static snapshots - no concept of time | Time-variant signals (LFOs, ramps) |
| Best for | External inputs (sensors, API data), mode switches, inter-pattern state | Musical evolution (fades, swells, modulation) that must be smooth and continuous |
Inside a pattern, p.data is a direct reference to composition.data - the same dictionary object. You can read it, write to it, and use it to pass values between patterns.
Patterns always rebuild in definition order (top-to-bottom in your source file). When two patterns share the same length, they reschedule at the same moment and the earlier pattern rebuilds first - so the writer's value is already in p.data when the reader runs:
@composition.pattern(channel=1, length=4)  # defined first - rebuilds first
def bass(p):
    root = 36 + (p.cycle % 12)
    p.data["bass_root"] = root  # visible to patterns that follow this cycle
    p.note(root, velocity=100)

@composition.pattern(channel=2, length=4)  # same length - guaranteed same-cycle read
def pad(p):
    root = p.data.get("bass_root", 48)  # current-cycle value, because bass ran first
    p.chord(root=root, velocity=60)

If the two patterns have different lengths they reschedule at different moments, so the reader sees the writer's value from its most recent rebuild - at most one bar old. This one-bar latency is musically natural for slowly-changing state (like a 4-bar bass phrase influencing a 1-bar arp), but use matching lengths when you need immediate reaction.
External data written by composition.schedule(), CC input, OSC, or hotkeys flows through the same dict:
@composition.schedule(cycle_beats=16, wait_for_initial=True)
def fetch_iss():
    data = requests.get("https://api.wheretheiss.at/v1/satellites/25544").json()
    composition.data["iss_lat"] = data["latitude"]

@composition.pattern(channel=1)
def iss_melody(p):
    lat = p.data.get("iss_lat", 0.0)  # same dict as composition.data
    root = int(48 + (lat / 90) * 24)
    p.note(root, velocity=80)

If you use composition.schedule() to poll external data and want to ease between each new reading, use subsequence.easing.EasedValue. Create one instance per field at module level, call .update(value) in your scheduled task, and .get(progress) in your pattern - no manual prev/current bookkeeping required. See subsequence.easing for details.
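The bookkeeping EasedValue removes can be sketched as a small class - a self-contained, linear-interpolation stand-in for illustration, not the library implementation:

```python
class EasedValueSketch:
    """Smoothly interpolate from the previous reading to the latest one.
    update() stores a new target; get(progress) eases toward it, with
    progress in [0, 1] (e.g. position within the current cycle)."""

    def __init__(self, initial: float = 0.0) -> None:
        self._prev = initial
        self._current = initial

    def update(self, value: float) -> None:
        self._prev = self._current  # remember where we were
        self._current = value       # new target to ease toward

    def get(self, progress: float) -> float:
        return self._prev + (self._current - self._prev) * progress

lat = EasedValueSketch(0.0)
lat.update(45.0)        # a new sensor reading arrives
print(lat.get(0.5))     # halfway through the cycle → 22.5
```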
Most generative tools leave voicing to the user. Subsequence provides automatic voice leading - each chord picks the inversion with the smallest total pitch movement from the previous one, keeping parts smooth without manual effort.
By default, chords are played in root position. You can request a specific inversion, or enable voice leading per pattern.
Pass inversion to p.chord(), p.strum(), or chord.tones():
@composition.pattern(channel=0, length=4)
def chords (p, chord):
    p.chord(chord, root=52, velocity=90, sustain=True, inversion=1)  # first inversion

Inversion 0 is root position, 1 is first inversion, 2 is second, and so on. Values wrap around for chords with fewer notes.
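Inversion by rotation can be sketched in a few lines - an illustration of the concept, not subsequence.voicings.invert_chord() itself:

```python
def invert(notes: list[int], inversion: int) -> list[int]:
    """Rotate the lowest note up an octave, once per inversion step.
    The count wraps for inversions beyond the chord's size."""
    notes = sorted(notes)
    for _ in range(inversion % len(notes)):
        notes = notes[1:] + [notes[0] + 12]
    return notes

e_minor = [52, 55, 59]     # E3, G3, B3
print(invert(e_minor, 1))  # first inversion → [55, 59, 64]
print(invert(e_minor, 3))  # wraps back to root position → [52, 55, 59]
```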
p.strum() works exactly like p.chord() but staggers the notes with a small time offset between each one - like strumming a guitar. The first note always lands on the beat; subsequent notes are delayed by offset beats each.
@composition.pattern(channel=0, length=4)
def guitar (p, chord):
    # Gentle upward strum (low to high)
    p.strum(chord, root=52, velocity=85, offset=0.06)
    # Fast downward strum (high to low)
    p.strum(chord, root=52, direction="down", offset=0.03)

Pass legato= directly to chord() or strum() to collapse the two-step pattern into one call. The value is passed straight to p.legato(), stretching each note to fill the given fraction of the gap to the next note:
@composition.pattern(channel=0, length=4)
def pad (p, chord):
    # Equivalent to: p.chord(...) then p.legato(0.9)
    p.chord(chord, root=52, velocity=90, legato=0.9)

@composition.pattern(channel=0, length=4)
def guitar (p, chord):
    p.strum(chord, root=52, velocity=85, offset=0.06, legato=0.95)

sustain=True and legato= are mutually exclusive - passing both raises a ValueError.
Add voice_leading=True to the pattern decorator. The injected chord will automatically choose the inversion with the smallest total pitch movement from the previous chord:
@composition.pattern(channel=0, length=4, voice_leading=True)
def chords (p, chord):
    p.chord(chord, root=52, velocity=90, sustain=True)

Each pattern tracks voice leading independently - a bass line and a pad can voice-lead at their own pace.
ChordPattern accepts voice_leading=True:
chords = subsequence.harmony.ChordPattern(
    harmonic_state=harmonic_state, root_midi=52, velocity=90, channel=0, voice_leading=True
)

For standalone use, subsequence.voicings provides invert_chord(), voice_lead(), and VoiceLeadingState.
By default, chord.tones() and p.chord() return one note per chord tone (3 for triads, 4 for sevenths). Pass count to cycle the intervals into higher octaves:
@composition.pattern(channel=0, length=4)
def pad (p, chord):
    p.chord(chord, root=52, velocity=90, sustain=True, count=4)  # always 4 notes

@composition.pattern(channel=0, length=4)
def arp (p, chord):
    tones = chord.tones(root=64, count=5)  # 5 notes cycling upward
    p.arpeggio(tones, step=0.25, velocity=90)

count works with inversion - the extended notes continue upward from the inverted voicing.
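The cycling behaviour of count can be sketched standalone - the function below is illustrative only (chord.tones() is the real API):

```python
def cycled_tones(root: int, intervals: list[int], count: int) -> list[int]:
    """Cycle chord intervals upward, adding an octave per full pass -
    e.g. a triad extended to 5 notes keeps climbing."""
    out = []
    for i in range(count):
        octave = i // len(intervals)             # completed passes so far
        out.append(root + intervals[i % len(intervals)] + 12 * octave)
    return out

# C major triad (0, 4, 7 semitones) cycled to 5 notes.
print(cycled_tones(60, [0, 4, 7], 5))  # → [60, 64, 67, 72, 76]
```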
Subsequence generates chord progressions using weighted transition graphs. Each chord has weighted edges to its possible successors, so the progression is probabilistic but musically constrained. On top of the base graph weights, two gravity systems shape which chord is chosen next.
composition.harmony(
    style="aeolian_minor",
    cycle_beats=4,
    gravity=0.8,
    nir_strength=0.5,
)

| Parameter | Type | Default | Description |
|---|---|---|---|
| style | str or ChordGraph | "functional_major" | Built-in name or custom ChordGraph instance |
| cycle_beats | int | 4 | Beats per chord change |
| dominant_7th | bool | True | Include dominant 7th chords |
| gravity | float | 1.0 | Key gravity blend (0.0 = functional chords only, 1.0 = full diatonic set) |
| nir_strength | float | 0.5 | Melodic inertia (0.0 = off, 1.0 = full). Controls how strongly transitions follow Narmour's Implication-Realization model |
| minor_turnaround_weight | float | 0.0 | Minor turnaround weight (turnaround graph only) |
| root_diversity | float | 0.4 | Root-repetition damping (0.0 = maximum, 1.0 = off). Each recent same-root chord multiplies the weight by this factor |
| Style | Character |
|---|---|
| "diatonic_major" / "functional_major" | Standard major key (I-ii-iii-IV-V-vi-vii) |
| "turnaround" | Jazz turnaround with optional modulation to relative minor |
| "aeolian_minor" | Natural minor with Phrygian cadence option |
| "phrygian_minor" | Dark, minimal palette (i-bII-iv-v) |
| "lydian_major" | Bright, floating (#IV colour) |
| "dorian_minor" | Minor with major IV (soul, funk) |
| "chromatic_mediant" | Film-score style third-relation shifts |
| "suspended" | Ambiguous sus2/sus4 palette |
| "mixolydian" | Major with flat 7th; open, unresolved (EDM, synthwave) |
| "whole_tone" | Symmetrical augmented palette; dreamlike drift (IDM, ambient) |
| "diminished" | Minor-third symmetry; angular, disorienting (dark, experimental) |
Four layers influence which chord comes next:
- Graph weights - the base transition probabilities defined by the chord graph. A strong cadence (e.g. V-I) has a higher weight than a deceptive resolution (e.g. V-vi).
- Key gravity - blends between functional pull (tonic, subdominant, dominant) and full diatonic pull. It ensures the progression retains a sense of home.
- Melodic inertia (Narmour) - applies 'cognitive expectation' principles to the root motion of the chords. It models the listener's innate sense of musical grammar:
- Process (A+A): A series of small steps in one direction implies a continuation in that same direction. The melody gathers momentum.
- Gap-Fill (Reversal): A large leap (> 4 semitones) stretches the "elastic" of pitch space, implying a change of direction to fill the gap.
- Proximity: Small intervals (1-3 semitones) are generally preferred over large leaps.
- Closure: Return to tonic gets a gentle boost.
- Root diversity - an automatic penalty that discourages the same chord root from appearing repeatedly. Each recent chord sharing a candidate's root multiplies the transition weight by root_diversity (default 0.4). This prevents progressions from getting stuck on one root, even in graphs with same-root voicing changes (e.g., sus2/sus4 pairs) or strong resolution weights. Set root_diversity=1.0 to disable.
At nir_strength=0.0 the system is purely probabilistic (Markov). At 1.0 it is heavily driven by these cognitive rules. The default 0.5 balances structural surprise with melodic coherence.
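To make the root-diversity layer concrete, here is an illustrative sketch - not the library's actual scoring code - of how a repetition penalty reshapes base graph weights before the probabilistic draw:

```python
def damp_weights(candidates, recent_roots, root_diversity=0.4):
    """Multiply each candidate's weight by root_diversity once per recent
    chord that shares its root - the more recently a root appeared, the
    less likely it is chosen again.
    """
    damped = {}
    for root, weight in candidates.items():
        repeats = recent_roots.count(root)
        damped[root] = weight * (root_diversity ** repeats)
    return damped

# V->I has weight 6; a same-root V voicing change has weight 4. After two
# bars on V, the same-root transition is damped to 4 * 0.4^2 = 0.64.
print(damp_weights({"I": 6, "V": 4}, recent_roots=["V", "V"]))
```

The other layers (graph weights, key gravity, NIR) would combine with this multiplicatively or additively before the final weighted choice; the exact blend is internal to the engine.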
Subclass ChordGraph and implement two methods: build() returns a weighted transition graph and the tonic chord, gravity_sets() returns the diatonic and functional chord sets for key gravity weighting.
import subsequence
class PowerChords(subsequence.chord_graphs.ChordGraph):
    def build(self, key_name):
        key_pc = subsequence.chord_graphs.validate_key_name(key_name)
        I = subsequence.chords.Chord(root_pc=key_pc, quality="major")
        IV = subsequence.chords.Chord(root_pc=(key_pc + 5) % 12, quality="major")
        V = subsequence.chords.Chord(root_pc=(key_pc + 7) % 12, quality="major")
        graph = subsequence.weighted_graph.WeightedGraph()
        graph.add_transition(I, IV, 4)
        graph.add_transition(I, V, 3)
        graph.add_transition(IV, V, 5)
        graph.add_transition(IV, I, 2)
        graph.add_transition(V, I, 6)  # Strong resolution
        graph.add_transition(V, IV, 2)
        return graph, I

    def gravity_sets(self, key_name):
        key_pc = subsequence.chord_graphs.validate_key_name(key_name)
        I = subsequence.chords.Chord(root_pc=key_pc, quality="major")
        IV = subsequence.chords.Chord(root_pc=(key_pc + 5) % 12, quality="major")
        V = subsequence.chords.Chord(root_pc=(key_pc + 7) % 12, quality="major")
        all_chords = {I, IV, V}
        return all_chords, {I, V}  # (diatonic, functional)

composition.harmony(style=PowerChords(), cycle_beats=4, gravity=0.8)

Higher edge weights mean a transition is more likely. Use the constants WEIGHT_STRONG (6), WEIGHT_MEDIUM (4), WEIGHT_COMMON (3), WEIGHT_DECEPTIVE (2), WEIGHT_WEAK (1) from subsequence.chord_graphs for consistency with the built-in graphs.
Use diatonic_chords() to get the 7 diatonic triads for any key and mode - plain Chord objects with no probabilistic machinery:
from subsequence.harmony import diatonic_chords
# Seven triads of Eb Major: Eb, Fm, Gm, Ab, Bb, Cm, Ddim
chords = diatonic_chords("Eb")
# Natural minor
chords = diatonic_chords("A", mode="minor")
# Supported modes: "ionian" ("major"), "dorian", "phrygian", "lydian",
#                  "mixolydian", "aeolian" ("minor"), "locrian",
#                  "harmonic_minor", "melodic_minor"

Each entry is a Chord object - pass it directly to p.chord(), p.strum(), or chord.tones():
@composition.pattern(channel=0, length=4)
def rising(p):
    current = diatonic_chords("D", mode="dorian")[p.cycle % 7]
    p.chord(current, root=50, sustain=True)

For a stepped sequence with explicit MIDI roots - for example, mapping a sensor value to a chord - use diatonic_chord_sequence(). It returns (Chord, midi_root) tuples stepping diatonically upward from a starting note, wrapping into higher octaves automatically:
from subsequence.harmony import diatonic_chord_sequence
# 12-step D Major ladder from D3 (MIDI 50) up through D4 and beyond
sequence = diatonic_chord_sequence("D", root_midi=50, count=12)
# Map a 0-1 value directly to a chord
altitude_ratio = 0.7 # e.g. from ISS data
chord, root = sequence[int(altitude_ratio * (len(sequence) - 1))]
p.chord(chord, root=root, sustain=True)
# Falling sequence
sequence = list(reversed(diatonic_chord_sequence("A", root_midi=45, count=7, mode="minor")))

The root_midi must be a note that falls on a scale degree of the chosen key and mode. A ValueError is raised otherwise.
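The triad qualities behind diatonic_chords() follow mechanically from stacking scale thirds. A self-contained sketch, using the standard major/natural-minor interval sets (this illustrates the theory, not the library's code):

```python
MODES = {
    "major": [0, 2, 4, 5, 7, 9, 11],
    "minor": [0, 2, 3, 5, 7, 8, 10],
}

def triad_qualities(mode):
    """Quality of each diatonic triad, from stacking two scale thirds."""
    scale = MODES[mode]
    qualities = []
    for degree in range(7):
        third = (scale[(degree + 2) % 7] - scale[degree]) % 12
        fifth = (scale[(degree + 4) % 7] - scale[degree]) % 12
        if (third, fifth) == (4, 7):
            qualities.append("major")
        elif (third, fifth) == (3, 7):
            qualities.append("minor")
        elif (third, fifth) == (3, 6):
            qualities.append("diminished")
        else:
            qualities.append("other")
    return qualities

# Major key pattern: I ii iii IV V vi vii°
print(triad_qualities("major"))
```

This is why diatonic_chords("Eb") yields Eb, Fm, Gm, Ab, Bb, Cm, Ddim - major/minor/minor/major/major/minor/diminished, transposed to Eb.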
The harmony engine generates chords live via the weighted graph - great for evolving, exploratory compositions. But sometimes you want structural repetition: the verse should always feel like the verse, with the same harmonic journey each time it plays.
composition.freeze(bars) captures the current engine output into a Progression object. composition.section_chords(section_name, progression) then binds it to a form section. Every time that section plays, the harmonic clock replays the frozen chords instead of calling the live engine. Sections without a binding keep generating freely.
Successive freeze() calls continue the engine's journey - so verse, chorus, and bridge progressions feel like parts of a whole rather than isolated islands.
composition = subsequence.Composition(bpm=120, key="C")
composition.harmony(style="functional_major", cycle_beats=4)
# Generate progressions before playback. Each call advances the engine,
# so the sections feel harmonically connected.
verse = composition.freeze(8) # 8 chords for the verse
chorus = composition.freeze(4) # next 4 chords for the chorus
composition.form({
    "verse": (8, [("chorus", 1)]),
    "chorus": (4, [("verse", 2), ("bridge", 1)]),
    "bridge": (8, [("verse", 1)]),
}, start="verse")
composition.section_chords("verse", verse)
composition.section_chords("chorus", chorus)
# "bridge" is not bound - it generates live chords each time
composition.play()

Patterns receive the current chord via the normal chord parameter - no changes needed in pattern code:
@composition.pattern(channel=BASS_CHANNEL, length=4)
def bass(p, chord):
    root = chord.root_note(40)
    p.sequence(steps=[0, 4, 8, 12], pitches=root)
    p.legato(0.9)

Key behaviours:
- Each time a frozen section is re-entered, playback restarts from chord 0.
- If a section is longer than its progression (more bars than chords), the extra bars fall through to live generation.
- NIR history is restored at the start of each frozen replay so every re-entry begins with the same harmonic context as when the progression was originally generated.
- freeze() can be called before or after form().
Set a seed to make all random behavior repeatable:
composition = subsequence.Composition(bpm=125, key="E", seed=42)
# OR
composition.seed(42)

When a seed is set, chord progressions, form transitions, and all pattern randomness produce identical output on every run. Pattern builders access the seeded RNG via p.rng:
@composition.pattern(channel=9, length=4, drum_note_map=DRUM_NOTE_MAP)
def drums(p):
    # p.rng replaces random.randint/random.choice - deterministic when seeded.
    density = p.rng.choice([3, 5, 7])
    p.euclidean("kick", pulses=density)
    # Per-step probability also uses p.rng by default.
    p.hit_steps("hh_closed", list(range(16)), velocity=80, probability=0.7)

p.rng is always available, even without a seed - in that case it's a fresh unseeded random.Random.
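The determinism guarantee is ordinary random.Random behaviour: every consumer draws from a generator seeded the same way, so identical seeds replay identical draws. A quick standalone illustration of the principle:

```python
import random

def build_density_sequence(seed, bars=8):
    """Draw one density per bar from a seeded RNG, as a pattern might via p.rng."""
    rng = random.Random(seed)
    return [rng.choice([3, 5, 7]) for _ in range(bars)]

# Same seed, same choices - run to run, machine to machine
print(build_density_sequence(42) == build_density_sequence(42))  # → True
```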
subsequence.sequence_utils provides structured randomness primitives:
| Function | Description |
|---|---|
| weighted_choice(options, rng) | Pick from (value, weight) pairs - biased selection |
| shuffled_choices(pool, n, rng) | N items with no adjacent repeats (classic urn algorithm) |
| random_walk(n, low, high, step, rng) | Values that drift by small steps (classic drunk algorithm) |
| probability_gate(sequence, probability, rng) | Filter a binary sequence by probability |
All require an explicit rng parameter - use p.rng in pattern builders:
# Wandering hi-hat velocity
walk = subsequence.sequence_utils.random_walk(16, low=50, high=110, step=15, rng=p.rng)
for i, vel in enumerate(walk):
    p.hit_steps("hh_closed", [i], velocity=vel)
# Weighted density choice
density = subsequence.sequence_utils.weighted_choice([(3, 0.5), (5, 0.3), (7, 0.2)], p.rng)
p.euclidean("snare", pulses=density)

p.melody() generates a single-note melodic line guided by the Narmour Implication-Realization (NIR) model - the same cognitive framework used by the chord engine, now adapted for absolute pitch. It expects a MelodicState instance created once at module level, which persists history across bar rebuilds so melodic continuity is maintained automatically.
# Create once at module level - history persists across bars.
melody_state = subsequence.MelodicState(
    key="A",
    mode="aeolian",
    low=57,   # A3
    high=84,  # C6
    nir_strength=0.6,      # How strongly NIR rules shape pitch choice (0-1)
    chord_weight=0.4,      # Bonus for chord tones
    rest_probability=0.1,  # 10% chance of silence per step
    pitch_diversity=0.6,   # Penalise recently-repeated pitches
)

@composition.pattern(channel=3, length=4, chord=True)
def lead(p, chord):
    tones = chord.tones(72) if chord else None
    p.melody(melody_state, step=0.5, velocity=(70, 100), chord_tones=tones)

NIR rules in melody:
| Rule | Trigger | Effect |
|---|---|---|
| Reversal | Previous interval > 4 semitones (large leap) | Favours direction change (+0.5) and a smaller gap-fill interval (+0.3) |
| Process | Previous interval 1–2 semitones (small step) | Favours same direction (+0.4) and similar interval size (+0.2) |
| Closure | Candidate is the tonic | +0.2 boost - the tonic feels like a natural landing |
| Proximity | Candidate is ≤ 3 semitones from last note | +0.3 boost - small intervals are generally preferred |
Unlike chord NIR, melody NIR operates on absolute MIDI pitch differences (not pitch-class modular arithmetic), so it correctly distinguishes an upward leap from a downward one even across octaves.
Additional factors:
- Chord tone boost. If chord_tones is provided, pitch classes that match receive a multiplicative bonus of 1 + chord_weight. This keeps melodies harmonically grounded without locking them to arpeggios.
- Range gravity. A soft quadratic penalty pulls notes toward the centre of [low, high], preventing the melody from drifting to register extremes.
- Pitch diversity. Each time a pitch appears in the recent history, its score is multiplied by pitch_diversity. Low values (e.g. 0.3) strongly suppress repetition; 1.0 disables the penalty entirely.
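A simplified sketch shows how the additive NIR boosts and the multiplicative factors might combine into a single candidate score. This is illustrative only - it omits range gravity and the Process similar-interval bonus, and uses the boost values from the table above as assumptions:

```python
def score_candidate(candidate, last, prev_interval, recent,
                    chord_tones=None, chord_weight=0.4, pitch_diversity=0.6):
    """Illustrative NIR-style score for one candidate pitch (not library code)."""
    interval = candidate - last
    score = 1.0
    if abs(prev_interval) > 4:                   # Reversal: after a large leap...
        if interval * prev_interval < 0:
            score += 0.5                         # ...favour a direction change
        if abs(interval) < abs(prev_interval):
            score += 0.3                         # ...and a smaller gap-fill step
    elif 1 <= abs(prev_interval) <= 2:           # Process: small steps continue
        if interval * prev_interval > 0:
            score += 0.4
    if abs(interval) <= 3:                       # Proximity
        score += 0.3
    if chord_tones and candidate % 12 in {t % 12 for t in chord_tones}:
        score *= 1 + chord_weight                # Chord tone boost
    score *= pitch_diversity ** recent.count(candidate)  # Diversity penalty
    return score

# After an upward leap of 7 semitones, a small downward step scores well:
# 1.0 + 0.5 (reversal) + 0.3 (gap-fill) + 0.3 (proximity) = 2.1
print(score_candidate(65, last=67, prev_interval=7, recent=[]))
```

The engine would evaluate every in-range scale note this way and draw from the resulting weighted distribution.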
p.melody() parameters:
| Parameter | Default | Description |
|---|---|---|
| state | - | A MelodicState instance (required) |
| step | 0.25 | Time between note onsets in beats (0.25 = 16th note) |
| velocity | 90 | Fixed int or (low, high) tuple for random range per step |
| duration | 0.2 | Note duration in beats |
| chord_tones | None | MIDI notes that are chord tones this bar |
Enable a live status line showing the current bar, section, chord, BPM, and key with a single call:
composition.display()
composition.play()

The status line updates every beat and looks like:
125.00 BPM Key: E Bar: 17.1 [chorus 1/8] Chord: Em7 Swell: 0.42 Tide: 0.78
Components adapt to what's configured - the section is omitted if no form is set, the chord is omitted if no harmony is configured, and conductor signals only appear when registered. Log messages scroll cleanly above the status line without disruption.
Add grid=True to also render an ASCII grid above the status line showing what each pattern is doing - which steps have notes, at what velocity, and for how long:
composition.display(grid=True)
composition.play()

The grid updates once per bar and looks like:
drums
kick |█ ▒ · · █ · · · █ · · · █ · · ▒|
snare |· · · · █ · · · · · · · █ · · ·|
bass |█ > > █ > > > > █ > > > █ > > ·|
125.00 BPM Key: E Bar: 3.1 [intro 3/8 → section_1]
Each column is one grid step (16th notes by default). Velocity and duration are encoded in the character at each position:
| Glyph | Meaning |
|---|---|
| · | Empty step on the grid |
| (space) | Empty step between grid lines |
| ░ | Ghost attack (velocity < 25%) |
| ▒ | Soft attack (velocity 25% to < 50%) |
| ▓ | Medium attack (velocity 50% to < 75%) |
| █ | Loud attack (velocity >= 75%) |
| > | Sustain (note still sounding from a previous step) |
The sustain marker makes legato and staccato patterns visually distinct - a legato bass line fills its steps with > between attacks; drum hits are short and show no sustain. Drum patterns show one row per distinct drum sound, labelled from the drum note map. Pitched patterns show a single summary row.
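The velocity buckets are plain quartile thresholds of the 0-127 MIDI range. A sketch of the mapping as the glyph table describes it (assumed to match; not the renderer's actual code):

```python
def attack_glyph(velocity):
    """Map a MIDI velocity (0-127) to a grid glyph by quartile."""
    ratio = velocity / 127
    if ratio < 0.25:
        return "░"   # ghost
    if ratio < 0.50:
        return "▒"   # soft
    if ratio < 0.75:
        return "▓"   # medium
    return "█"       # loud

print(attack_glyph(80))   # 80/127 ≈ 0.63 → medium
print(attack_glyph(127))  # → loud
```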
Add grid_scale to zoom in horizontally and reveal micro-timing from swing and groove:
composition.display(grid=True, grid_scale=2)
composition.play()

At grid_scale=2 each base grid step expands to 2 visual columns. Empty columns between the original grid steps show as spaces; notes shifted by swing or groove appear at those in-between positions. Sustain markers fill all occupied columns:
grid_scale=1 (default):
bass |O - - O - - - - O - - - O - - .|
grid_scale=2:
bass |O - - O - - - - O - - - O - - . |
grid_scale accepts any float and snaps to the nearest integer columns-per-step, guaranteeing all on-grid markers are exactly the same distance apart. Values below 1.5 are equivalent to the default.
To disable:
composition.display(enabled=False)

For an improved visual experience without relying on the terminal, you can enable the experimental Web UI Dashboard. This spins up an HTTP server and a WebSocket connection in the background to serve a reactive web interface that mirrors your composition state:
composition.web_ui()
composition.play()

Opening http://localhost:8080/ in your browser will display:
- A live transport bar showing the current BPM, Key, Bar, and Form Section.
- Gauge bars representing all active Conductor signals (e.g., LFOs and automation lines).
- High-precision visual playheads and piano-roll style Pattern Grids for every pattern currently rendering.
Note: The Web UI is an optional beta feature. If you never call web_ui(), it starts no threads and adds no CPU overhead. It currently relies on an active internet connection to load the Preact frontend dependencies from a CDN.
Capture a session to a standard MIDI file. Pass record=True to Composition and Subsequence saves everything it plays to disk when you stop:
composition = subsequence.Composition(bpm=120, key="E", record=True)
composition.play()
# Press Ctrl+C - the recording is saved automatically

By default the filename is generated from the timestamp (session_YYYYMMDD_HHMMSS.mid). Pass record_filename to choose your own:
composition = subsequence.Composition(
    bpm=120, key="E",
    record=True,
    record_filename="my_session.mid"
)

The output is a standard Type 1 MIDI file at 480 PPQN - import it directly into any DAW. All note events are recorded on their original MIDI channels. Tempo is embedded as a set_tempo meta event, including any mid-session set_bpm() calls.
composition.render() runs the sequencer as fast as possible - no waiting for wall-clock time - and saves the result immediately. A default 60-minute safety cap (max_minutes=60.0) stops infinite compositions from filling your disk:
composition = subsequence.Composition(bpm=120, key="C")

@composition.pattern(channel=0, length=4)
def melody(p):
    p.seq("60 [62 64] 67 60")

# Render exactly 8 bars (default 60-min cap still active as a backstop)
composition.render(bars=8, filename="melody.mid")
# Render an infinite composition - stops automatically at 5 minutes of MIDI
composition.render(max_minutes=5, filename="generative.mid")
# Remove the time cap - must supply an explicit bars= count instead
composition.render(bars=128, max_minutes=None, filename="long.mid")

If the time cap fires, a warning is logged explaining how to remove it. All patterns, scheduled callbacks, probabilistic gates, and BPM transitions work identically in render mode. The only difference is that simulated time replaces wall-clock time.
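Because the cap is measured in musical (simulated) time, the number of bars it permits follows directly from the BPM and bar length. A quick sanity check of what the default cap allows (assuming 4 beats per bar):

```python
def bars_within_cap(max_minutes, bpm, beats_per_bar=4):
    """How many whole bars fit inside max_minutes of musical time."""
    total_beats = max_minutes * bpm
    return int(total_beats // beats_per_bar)

print(bars_within_cap(60.0, 120))  # default 60-min cap at 120 BPM → 1800 bars
print(bars_within_cap(5, 120))     # → 150 bars
```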
Modify a running composition without stopping playback. Subsequence includes a TCP eval server that accepts Python code from any source - the bundled REPL client, an editor plugin, or a raw socket. Change tempo, mute patterns, hot-swap pattern logic, and query state - all while the music plays.
One line before play():
composition.live() # start on localhost:5555
composition.display()
composition.play()

In another terminal:
$ python -m subsequence.live_client
Connected to Subsequence on 127.0.0.1:5555
{'bpm': 120, 'key': 'C', 'bar': 12, 'section': {'name': 'verse', ...}, 'chord': 'Em7', ...}
>>>
Query state - see what's playing right now:
>>> composition.live_info()
{'bpm': 120, 'key': 'E', 'bar': 34, 'section': {'name': 'chorus', 'bar': 2, 'bars': 8, 'progress': 0.25}, 'chord': 'Em7', 'patterns': [{'name': 'drums', 'channel': 9, 'length': 4.0, 'cycle': 17, 'muted': False, 'tweaks': {}}, ...], 'data': {}}

Change tempo - hear the difference immediately:
>>> composition.set_bpm(140) # instant jump
OK
>>> composition.target_bpm(140, bars=8) # smooth ramp over 8 bars
OK
>>> composition.target_bpm(140, bars=8, shape="ease_in_out") # S-curve for a more natural feel
OK

Mute and unmute patterns - patterns keep cycling silently, so they stay in sync:
>>> composition.mute("hats")
OK
>>> composition.unmute("hats")
OK

Modify shared data - any value patterns read from composition.data:
>>> composition.data["intensity"] = 0.8
OK

Hot-swap a pattern - redefine the builder function and it takes effect on the next cycle:
>>> @composition.pattern(channel=9, length=4, drum_note_map=DRUM_NOTE_MAP)
... def drums(p):
... p.hit_steps("kick", [0, 8], velocity=127)
... p.hit_steps("snare", [4, 12], velocity=100)
...
OK

The running pattern keeps its cycle count, RNG state, and scheduling position - only the builder logic changes.
Tweak a single parameter - change one value without replacing the whole pattern:
>>> composition.tweak("bass", pitches=[48, 52, 55, 60])
OK
>>> composition.get_tweaks("bass")
{'pitches': [48, 52, 55, 60]}
>>> composition.clear_tweak("bass", "pitches")
OK

The pattern builder reads tweakable values via p.param(), which returns the tweaked value if set or a default otherwise:
@composition.pattern(channel=0, length=4)
def bass(p):
    pitches = p.param("pitches", [60, 64, 67, 72])
    vel = p.param("velocity", 100)
    p.sequence(steps=[0, 4, 8, 12], pitches=pitches, velocities=vel)

Changes take effect on the next rebuild cycle. Call clear_tweak("bass") with no parameter name to clear all tweaks.
The server speaks a simple text protocol (messages delimited by \x04). You can send code from anything that opens a TCP socket:
# From another terminal with netcat
echo -ne 'composition.set_bpm(140)\x04' | nc localhost 5555
# Or Python one-liner
python -c "import socket; s=socket.socket(); s.connect(('127.0.0.1',5555)); s.send(b'composition.live_info()\x04'); print(s.recv(4096).decode())"

All code is validated as syntactically correct Python before execution. If you send a typo or malformed code, the server returns a SyntaxError traceback - nothing is executed, and the running composition is never affected.
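The validate-before-execute guarantee can be reproduced with Python's built-in compile(): parse the received code first, and only exec it if parsing succeeds. A sketch of that guard (an assumed shape, not the server's actual code):

```python
def try_execute(code, namespace):
    """Compile first; on a SyntaxError nothing runs and state is untouched."""
    try:
        compiled = compile(code, "<live>", "exec")
    except SyntaxError as exc:
        return f"SyntaxError: {exc.msg}"
    exec(compiled, namespace)
    return "OK"

ns = {"bpm": 120}
print(try_execute("bpm = 140", ns))    # valid code runs → OK
print(try_execute("bpm = = 140", ns))  # parse fails, ns is untouched
print(ns["bpm"])                       # → 140
```

Because compilation happens before execution, a malformed message can never leave the namespace half-modified.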
Subsequence uses a hybrid sleep+spin timing strategy for its internal master clock. Rather than relying on asyncio.sleep() alone (which is subject to OS scheduler granularity), the loop sleeps to within ~1 ms of the target pulse time, then busy-waits on time.perf_counter() for the remaining sub-millisecond interval. Pulse times are calculated as absolute offsets from the session start time, so timing errors never accumulate.
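The hybrid strategy is easy to sketch: a coarse sleep that deliberately undershoots the deadline, then a busy-wait on the high-resolution counter for the final stretch. This illustration uses blocking time.sleep rather than asyncio, but the shape is the same:

```python
import time

def wait_until(deadline, spin_threshold=0.001):
    """Sleep to within spin_threshold of the deadline, then busy-wait."""
    while True:
        remaining = deadline - time.perf_counter()
        if remaining <= 0:
            return
        if remaining > spin_threshold:
            # Coarse sleep - accuracy limited by OS scheduler granularity
            time.sleep(remaining - spin_threshold)
        # else: spin on perf_counter for sub-millisecond precision

start = time.perf_counter()
wait_until(start + 0.02)
elapsed = time.perf_counter() - start
print(elapsed >= 0.02)  # → True
```

Computing each deadline as an absolute offset from the session start (rather than "now + interval") is what keeps per-pulse jitter from accumulating into drift.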
Measured jitter on Linux at 120 BPM (64 bars, 6144 pulses):
| Mode | Mean | P99 | Max | Long-term drift |
|---|---|---|---|---|
| Spin-wait ON (default) | 3 μs | 4 μs | ~150 μs* | 0 |
| asyncio.sleep only | 853 μs | 1.37 ms | 1.72 ms | negligible |

* Occasional spikes are GC pauses in the Python runtime, not clock instability.
To measure jitter on your own system:
python benchmarks/clock_jitter.py # default (spin-wait on, 32 bars)
python benchmarks/clock_jitter.py --compare # side-by-side with spin-wait off
python benchmarks/clock_jitter.py --bpm 140 --bars 128

To disable spin-wait (lower CPU use, ~1 ms jitter):
composition = subsequence.Composition(bpm=120, key="C")
composition.sequencer.disable_spin_wait()

Or at construction time: Sequencer(spin_wait=False).
Subsequence can follow an external MIDI clock instead of running its own. This lets you sync with a DAW, drum machine, or any device that sends MIDI clock. Transport messages (start, stop, continue) are respected automatically.
MIDI_INPUT_DEVICE = "Your MIDI Device:Port"
composition = subsequence.Composition(bpm=120, key="E")
# Follow external clock and respect transport (start/stop/continue)
composition.midi_input(device=MIDI_INPUT_DEVICE, clock_follow=True)
composition.play()

When clock_follow=True:
- The sequencer waits for a MIDI start or continue message before playing
- Each incoming MIDI clock tick advances the sequencer by one pulse (24 ticks = 1 beat, matching the MIDI standard)
- A MIDI stop message halts the sequencer
- A MIDI start message resets to pulse 0 and begins counting
- A MIDI continue message resumes from the current position
- BPM is estimated from incoming tick intervals (for display only)
- set_bpm() has no effect - tempo is determined by the external clock
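The BPM estimate follows directly from the MIDI clock standard: 24 ticks per beat, so the tick spacing determines the tempo. A sketch of the arithmetic (the sequencer may additionally smooth over several ticks):

```python
def estimate_bpm(tick_interval_seconds, ppqn=24):
    """Tempo implied by the spacing of incoming MIDI clock ticks."""
    seconds_per_beat = tick_interval_seconds * ppqn
    return 60.0 / seconds_per_beat

# 120 BPM -> one beat every 0.5 s -> one tick every 0.5/24 s
print(round(estimate_bpm(0.5 / 24), 2))  # → 120.0
```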
Without clock_follow (the default), midi_input() opens the input port but does not act on clock or transport messages - it can still receive CC input for mapping (see below).
Map hardware knobs, faders, and expression pedals directly to composition.data - no callback code required:
composition.midi_input("Arturia KeyStep")                # open the input port
composition.cc_map(74, "filter_cutoff")                  # CC 74 → 0.0-1.0 in composition.data
composition.cc_map(7, "volume", min_val=0, max_val=127)  # custom range
composition.cc_map(1, "density", channel=0)              # channel-filtered

@composition.pattern(channel=0, length=4)
def arps(p):
    cutoff = composition.data.get("filter_cutoff", 0.5)
    velocity = int(60 + 67 * cutoff)
    p.arpeggio([60, 64, 67], step=0.25, velocity=velocity)

CC values are scaled from 0-127 to the min_val/max_val range and written to composition.data[key] on every incoming message. Thread safety is provided by Python's GIL for single dict writes.
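The scaling applied by cc_map is a linear map from the 7-bit CC range into [min_val, max_val]. A sketch matching the defaults described above (illustrative, not the library's handler):

```python
def scale_cc(value, min_val=0.0, max_val=1.0):
    """Scale a 7-bit CC value (0-127) into [min_val, max_val]."""
    return min_val + (max_val - min_val) * (value / 127)

print(scale_cc(127))          # full knob → 1.0
print(scale_cc(0))            # → 0.0
print(scale_cc(64, 0, 127))   # custom range passes the raw value through
```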
Make Subsequence the MIDI clock master so hardware can lock to its tempo:
composition = subsequence.Composition(bpm=120, output_device="...")
composition.clock_output() # send Start, Clock ticks, Stop to the output port
composition.play()

Subsequence sends a Start message (0xFA) at the beginning of playback, one Clock tick (0xF8) per pulse (24 PPQN, matching the MIDI standard), and a Stop message (0xFC) when playback ends. This is automatically disabled when midi_input(clock_follow=True) is active, to prevent a feedback loop.
Switch instrument patches mid-pattern with p.program_change():
@composition.pattern(channel=1, length=4)
def strings(p, chord):
    p.program_change(48)  # String Ensemble 1 (GM #49)
    p.chord(chord, root=60, velocity=75, sustain=True)

Program numbers follow General MIDI (0-127). The message fires at the beat position given by the optional beat argument (default 0.0 - the start of the pattern).
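Bank numbers, covered next, are split into two 7-bit bytes, so converting a single 14-bit bank number from a synth manual is base-128 arithmetic. A sketch of that split (illustrative; the library ships its own helper for this):

```python
def bank_to_msb_lsb(bank):
    """Split a 14-bit bank number (0-16383) into (MSB, LSB) 7-bit bytes."""
    if not 0 <= bank <= 16383:
        raise ValueError("bank out of 14-bit range")
    return bank // 128, bank % 128

print(bank_to_msb_lsb(128))       # → (1, 0)
print(bank_to_msb_lsb(81 * 128))  # a manual listing "bank 10368" → (81, 0)
```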
For multi-bank hardware synths (Roland, Yamaha, Korg, etc.), pass bank_msb and/or bank_lsb to select the bank before the patch change. The two CC messages (CC 0 and CC 32) are sent automatically at the same beat position, in the correct order (CC 0 → CC 32 → program change):
@composition.pattern(channel=1, length=4)
def synth(p, chord):
    # Roland JV-1080 - bank MSB 81, LSB 0, patch 48
    p.program_change(48, bank_msb=81, bank_lsb=0)
    p.chord(chord, root=60, velocity=70, sustain=True)

Omit either parameter if your synth only uses one bank byte:
p.program_change(12, bank_msb=1)  # MSB only
p.program_change(12, bank_lsb=3)  # LSB only

subsequence.bank_select(bank) converts an integer bank number (0-16,383) to the (msb, lsb) pair - useful when a synth manual lists a single bank number:
msb, lsb = subsequence.bank_select(128)  # → (1, 0)
p.program_change(48, bank_msb=msb, bank_lsb=lsb)

To send a patch change only at the start of a section (not every bar), guard with p.section.bar:
@composition.pattern(channel=1)
def synth(p, chord):
    if p.section.bar == 0:  # first bar of this section
        p.program_change(48, bank_msb=81, bank_lsb=0)
    p.chord(chord, root=60, velocity=70, sustain=True)

Or switch patch depending on which section is playing:
@composition.pattern(channel=1)
def lead(p, chord):
    if p.section.name == "verse":
        p.program_change(80)  # Square Lead
    elif p.section.name == "chorus":
        p.program_change(88)  # Fantasia Pad
    p.note(chord.root_note(60), velocity=90)

Send System Exclusive messages for deep hardware integration - Elektron parameter locking, patch dumps, vendor-specific commands:
@composition.pattern(channel=0, length=4)
def init(p):
    # GM System On - resets all GM-compatible devices to defaults
    p.sysex([0x7E, 0x7F, 0x09, 0x01])

Pass data as a bytes object or a list of integers (0-127). The surrounding 0xF0/0xF7 framing is added automatically by mido. beat sets the position within the pattern (default 0.0).
Three post-build transforms generate correctly-timed pitch bend events by reading actual note positions and durations - no manual beat arithmetic required. Call them after legato() / staccato() so durations are final.
p.bend() - bend a specific note by index:
p.sequence(steps=[0, 4, 8, 12], pitches=midi_notes.E1)
p.legato(0.95)
# Bend the last note up 1 semitone (±2 st range), easing in over its full duration
p.bend(note=-1, amount=0.5, shape="ease_in")
# Bend the 2nd note down, starting halfway through
p.bend(note=1, amount=-0.3, start=0.5, shape="ease_out")

amount is normalised to -1.0..1.0. With a standard ±2-semitone pitch wheel range, 0.5 = 1 semitone up. start and end are fractions of the note's duration (defaults: 0.0 and 1.0). A pitch bend reset is inserted automatically at the next note's onset.
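Behind amount= sits the 14-bit MIDI pitch wheel, whose raw range is -8192..+8191 around centre. A sketch of converting a normalised amount and an easing shape into wheel values - an illustration of the arithmetic, not the library's event generator, and the asymmetric scaling is an assumption:

```python
def bend_value(amount):
    """Map -1.0..1.0 to the 14-bit pitch wheel range -8192..+8191."""
    return round(amount * (8191 if amount >= 0 else 8192))

def ease_in(t):
    """Quadratic ease-in: slow start, fast finish (one plausible shape)."""
    return t * t

# A half-range bend eased over 4 points: starts at 0, ends at half range
curve = [bend_value(0.5 * ease_in(t / 3)) for t in range(4)]
print(curve)
```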
p.portamento() - glide between all consecutive notes:
p.sequence(steps=[0, 4, 8, 12], pitches=[40, 42, 40, 43])
p.legato(0.95)
# Gentle glide using the last 15% of each note
p.portamento(time=0.15, shape="ease_in_out")
# Wide bend range (synth set to ±12 semitones)
p.portamento(time=0.2, bend_range=12)
# No range limit - let the instrument decide
p.portamento(time=0.1, bend_range=None)

bend_range tells Subsequence the instrument's pitch wheel range in semitones (default 2.0). Pairs with a larger interval are skipped. Pass None to disable range checking. wrap=True (default) also glides from the last note toward the first note of the next cycle.
p.slide() - TB-303-style selective slide:
p.sequence(steps=[0, 4, 8, 12], pitches=[40, 42, 40, 43])
p.legato(0.95)
# Slide into the 2nd and 4th notes (by note index)
p.slide(notes=[1, 3], time=0.2, shape="ease_in")
# Same thing using step grid indices
p.slide(steps=[4, 12], time=0.2, shape="ease_in")
# Without extending the preceding note
p.slide(notes=[1, 3], extend=False)

slide() is like portamento() but only applies to flagged destination notes. With extend=True (default) the preceding note is extended to reach the slide target's onset - matching the 303's behaviour where slide notes do not retrigger.
| Method | Key parameters |
|---|---|
| p.bend(note, amount, start=0.0, end=1.0, shape, resolution) | note: index; amount: -1.0..1.0 |
| p.portamento(time=0.15, shape, resolution, bend_range=2.0, wrap=True) | bend_range=None disables range check |
| p.slide(notes=None, steps=None, time=0.15, shape, resolution, bend_range=2.0, wrap=True, extend=True) | notes or steps required |
Snap all notes in a pattern to a named scale - essential after generative or sensor-driven pitch work:
@composition.pattern(channel=0, length=4)
def walk(p):
    for beat in range(16):
        pitch = 60 + p.rng.randint(-7, 7)  # random walk around middle C
        p.note(pitch, beat=beat * 0.25)
    p.quantize("G", "dorian")  # snap everything to G Dorian

quantize(key, mode) accepts any key name ("C", "F#", "Bb", etc.) and any registered scale. Equidistant notes prefer the upward direction.
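Scale quantisation is a nearest-pitch search with an upward tie-break. A self-contained sketch of that search (illustrating the behaviour described above, not the library's implementation):

```python
def quantize_pitch(pitch, key_pc, scale_intervals):
    """Snap a MIDI pitch to the nearest scale note, preferring upward on ties."""
    allowed = {(key_pc + i) % 12 for i in scale_intervals}
    for distance in range(12):
        up, down = pitch + distance, pitch - distance
        if up % 12 in allowed:   # upward is checked first, so it wins ties
            return up
        if down % 12 in allowed:
            return down
    return pitch

MAJOR = [0, 2, 4, 5, 7, 9, 11]
# C# (61) is equidistant between C and D in C major -> snaps up to D
print(quantize_pitch(61, 0, MAJOR))  # → 62
```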
Built-in modes include western diatonic ("ionian", "dorian", "minor", "harmonic_minor", etc.) and non-western scales ("hirajoshi", "in_sen", "iwato", "yo", "egyptian", "major_pentatonic", "minor_pentatonic").
Register your own scales at any time:
subsequence.register_scale("raga_bhairav", [0, 1, 4, 5, 7, 8, 11])
# then in patterns:
p.quantize("C", "raga_bhairav")

chord.root_note(midi) and chord.bass_note(midi, octave_offset=-1) make register-aware root extraction self-documenting:
@composition.pattern(channel=BASS_CHANNEL, length=4)
def bass(p, chord):
    bass_root = chord.bass_note(40, octave_offset=-1)  # one octave below chord voicing
    p.sequence(steps=range(0, 16, 2), pitches=bass_root)
    p.legato(0.9)

p.arpeggio() now supports four playback directions:
p.arpeggio([60, 64, 67], step=0.25)                       # "up" (default)
p.arpeggio([60, 64, 67], step=0.25, direction="down")     # descend
p.arpeggio([60, 64, 67], step=0.25, direction="up_down")  # ping-pong: C E G E C E ...
p.arpeggio([60, 64, 67], step=0.25, direction="random")   # shuffled once per call

The "random" direction uses p.rng by default (deterministic when a seed is set). Pass a custom rng for independent streams.
Subsequence includes Open Sound Control (OSC) support for remote control and state broadcasting. This is useful for connecting to modular synth environments, custom touch interfaces, or other creative coding tools.
Start the OSC server before calling play():
composition.osc(receive_port=9000, send_port=9001, send_host="127.0.0.1")
composition.play()

The server listens for incoming UDP messages (default port 9000) and maps them to composition actions:
| Address | Argument | Action |
|---|---|---|
| /bpm | int | Set composition tempo |
| /mute/&lt;name&gt; | (none) | Mute a pattern by its function name |
| /unmute/&lt;name&gt; | (none) | Unmute a pattern |
| /data/&lt;key&gt; | any | Update a value in composition.data |
Custom handlers can be registered via composition._osc_server.map(address, handler).
Subsequence automatically broadcasts its state via OSC (default port 9001) at the start of every bar:
| Address | Type | Description |
|---|---|---|
| /bar | int | Current global bar count |
| /bpm | int | Current tempo |
| /chord | str | Current chord name (e.g. "Em7") |
| /section | str | Current section name (if form is configured) |
Use p.osc() and p.osc_ramp() to send arbitrary OSC messages at precise beat positions - useful for automating mixer faders, toggling effects, or controlling any OSC-capable device on the network.
```python
composition.osc(send_port=9001, send_host="192.168.1.100")  # remote mixer on LAN

@composition.pattern(channel=0, length=4)
def mixer_automation(p, chord):
    # Ramp a fader from 0.0 to 1.0 over the full pattern
    p.osc_ramp("/mixer/fader/1", start=0.0, end=1.0)
    # Ease in a reverb send over the second half
    p.osc_ramp("/fx/reverb/wet", 0.0, 0.8, beat_start=2, beat_end=4, shape="ease_in")
    # Trigger an effect toggle at beat 2
    p.osc("/fx/chorus/enable", 1, beat=2.0)
    # Address-only message (no arguments) as a trigger
    p.osc("/scene/next", beat=3.0)
```

| Method | Parameters | Notes |
|---|---|---|
| `p.osc(address, *args, beat=0.0)` | `address`: OSC path; `args`: float/int/str; `beat`: position | Single message at a beat |
| `p.osc_ramp(address, start, end, beat_start=0, beat_end=None, resolution=4, shape="linear")` | `start`/`end`: arbitrary floats | Smooth ramp; `resolution=4` ≈ 6 msgs/beat at 120 BPM |
The same easing shapes available for cc_ramp (e.g. "ease_in", "ease_out", "exponential") work with osc_ramp.
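One way to picture what `osc_ramp` emits is a list of (beat, value) pairs. The sketch below assumes `resolution` counts messages per beat and uses `t * t` as a stand-in for `"ease_in"` - both assumptions about the library's internals, not its source:

```python
def ramp_points(start, end, beat_start, beat_end, resolution=4, shape=lambda t: t):
    """Sketch: discretise an eased ramp into (beat, value) pairs."""
    n = max(1, int(round((beat_end - beat_start) * resolution)))
    points = []
    for i in range(n + 1):
        t = i / n  # normalised progress 0..1
        beat = beat_start + t * (beat_end - beat_start)
        points.append((beat, start + shape(t) * (end - start)))
    return points

# "ease_in" stand-in: slow start, fast finish over beats 2-4
ramp_points(0.0, 0.8, beat_start=2, beat_end=4, shape=lambda t: t * t)
```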
```python
osc_server = subsequence.osc.OscServer(
    composition, receive_port=9000, send_port=9001
)
await osc_server.start()
```

```python
seq = subsequence.sequencer.Sequencer(
    initial_bpm=120,
    input_device_name="Your MIDI Device:Port",
    clock_follow=True
)
```

Assign single keystrokes to trigger actions during live playback - section jumps, tweaks, data updates, mutes, or any zero-argument callable. Linux / macOS only: raw single-character terminal input relies on the POSIX `tty` and `termios` modules, which Windows does not provide.
```python
composition.hotkeys()  # enable before play()

# Most actions are immediate; their musical effect lands at the next
# pattern rebuild cycle, which provides natural musical quantization.
composition.hotkey("a", lambda: composition.form_next("chorus"))
composition.hotkey("A", lambda: composition.form_jump("chorus"))
composition.hotkey("1", lambda: composition.data.update({"density": 0.3}))
composition.hotkey("2", lambda: composition.data.update({"density": 0.9}))

# Use quantize=N for explicit bar-boundary control (next bar divisible by N).
composition.hotkey("s", lambda: composition.mute("drums"), quantize=4)
composition.hotkey("d", lambda: composition.unmute("drums"), quantize=4)

# Named functions get their name as the label automatically.
def drop_to_breakdown():
    composition.form_jump("breakdown")
    composition.mute("lead")

composition.hotkey("x", drop_to_breakdown)

# Override the label explicitly.
composition.hotkey("q", my_action, label="reset mood")

composition.play()
```

`?` is always reserved - press it during playback to log all active bindings.
Globally enable or disable the keystroke listener. Call before play(). When disabled, no keys are read and no actions fire - zero impact on playback.
| Parameter | Default | Description |
|---|---|---|
| `key` | - | Single character trigger |
| `action` | - | Zero-argument callable |
| `quantize` | `0` | `0` = immediate; `N` = next bar divisible by N |
| `label` | `None` | Display name for `?` listing; auto-derived if omitted |
Labels are auto-derived: named functions use `fn.__name__`; lambdas in `.py` files use the lambda body extracted from source; fallback is `<action>`.
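The quantize rule reduces to simple integer arithmetic. A sketch of our understanding of it - `quantized_fire_bar` is a made-up name, not part of the API:

```python
def quantized_fire_bar(current_bar: int, quantize: int) -> int:
    """Bar at which a hotkey action fires: immediately, or at the next bar divisible by N."""
    if quantize <= 0:
        return current_bar  # quantize=0: fire immediately
    return ((current_bar // quantize) + 1) * quantize

quantized_fire_bar(5, 4)  # waits until bar 8
```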
Queue a section to play after the current one finishes (graph mode only). Overrides the auto-decided next section. The transition happens at the natural section boundary, keeping the music intact. Call it again to change your mind - last call wins.
Force the form to a named section immediately (graph mode only). Resets the bar count within the section. Effect is heard at the next pattern rebuild. Use form_next for gentle transitions and form_jump for urgent ones.
A groove is a repeating pattern of per-step timing offsets and optional velocity adjustments that gives a pattern its characteristic rhythmic feel. Swing is a type of groove - the simplest one, where every other grid note is delayed. More complex grooves shift and accent every step independently, giving you MPC-style pocket, jazz brush feel, or any custom texture.
For the common case of uniform eighth- or sixteenth-note swing, use the shortcut:
```python
@composition.pattern(channel=9, length=4)
def drums(p):
    p.hit_steps("kick", [0, 8], velocity=100)
    p.hit_steps("hi_hat", range(16), velocity=80)
    p.swing(57)  # 57% = gentle 16th-note shuffle (Ableton default)
```

| Amount | Feel |
|---|---|
| 50 | Perfectly straight - no swing |
| 57 | Gentle shuffle (Ableton default) |
| 67 | Classic triplet swing |
| 75 | Heavy, almost dotted-eighth feel |
The optional second argument sets the grid: `p.swing(57, grid=0.5)` swings 8th notes instead of 16ths.
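Under the common convention - which the table's values match - the swing percentage is where the off-beat note of each pair lands within the pair's duration. A sketch of that arithmetic, not Subsequence's internals:

```python
def swing_offset(percent: float, grid: float = 0.25) -> float:
    """Delay (in beats) applied to every second grid note under percent% swing."""
    # 50% = halfway through the pair (straight); 67% ~ triplet; 75% = dotted feel
    return (percent / 100.0 * 2.0 - 1.0) * grid

swing_offset(50)             # 0.0 - straight
swing_offset(75, grid=0.25)  # 0.125 beats - dotted-eighth feel
```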
For full control - different timing per step, per-step velocity accents, or a shape loaded from a file - construct a Groove and apply it:
```python
import subsequence

# Swing from a percentage (identical to p.swing(57), exposed as a Groove object)
groove = subsequence.Groove.swing(percent=57)

# Import from an Ableton .agr file
groove = subsequence.Groove.from_agr("path/to/Swing 16ths 57.agr")

# Fully custom groove - per-step timing and velocity accents
groove = subsequence.Groove(
    grid=0.25,                         # 16th-note grid
    offsets=[0.0, +0.02, 0.0, -0.01],  # timing shift per slot (beats)
    velocities=[1.0, 0.7, 0.9, 0.6],   # velocity scale per slot
)

@composition.pattern(channel=9, length=4)
def drums(p):
    p.hit_steps("kick", [0, 8], velocity=100)
    p.hit_steps("hi_hat", range(16), velocity=80)
    p.groove(groove)
```

`p.groove()` is a post-build transform - call it at the end of your builder function after all notes are placed. The offset list repeats cyclically, so a 2-slot swing pattern covers the whole bar. Groove and `p.randomize()` pair well: apply the groove first for structured feel, then randomize on top for micro-variation.
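The cyclic-slot behaviour can be sketched in a few lines. This models the transform as we understand it - `apply_groove` is a hypothetical stand-in, and indexing slots by grid position is an assumption:

```python
def apply_groove(notes, grid, offsets, velocities, strength=1.0):
    """notes: list of (beat, velocity) pairs; offsets/velocities repeat cyclically per grid slot."""
    grooved = []
    for beat, vel in notes:
        slot = int(round(beat / grid)) % len(offsets)
        new_beat = beat + offsets[slot] * strength         # timing shift, scaled by strength
        scale = 1.0 + (velocities[slot] - 1.0) * strength  # velocity blend toward the groove
        grooved.append((new_beat, max(1, min(127, round(vel * scale)))))
    return grooved

# Two-slot swing-like groove covering a whole bar of 16ths
apply_groove([(0.0, 100), (0.25, 100), (0.5, 100), (0.75, 100)],
             grid=0.25, offsets=[0.0, 0.02], velocities=[1.0, 0.7])
```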
Both p.groove() and p.swing() accept a strength parameter (0.0-1.0, default 1.0) that blends the groove's timing offsets and velocity deviation proportionally - equivalent to Ableton's TimingAmount and VelocityAmount dials:
```python
p.groove(groove)                # full groove
p.groove(groove, strength=0.5)  # half-strength - subtler feel
p.swing(57, strength=0.7)       # 70% of 57% swing
```

`Groove.from_agr(path)` reads the note timing and velocity data from the embedded MIDI clip inside the `.agr` file:
- Extracted: note start positions → per-step timing offsets; note velocities → velocity scaling (normalised to the loudest note in the file); `TimingAmount` and `VelocityAmount` from the Groove Pool section → pre-scale offsets and velocity deviation so the returned `Groove` reflects the file author's intended strength.
- Not imported: `RandomAmount` (use `p.randomize()` separately for random timing jitter) and `QuantizationAmount` (not applicable - Subsequence notes are already grid-quantized by construction). Other per-note fields (`Duration`, `VelocityDeviation`, `OffVelocity`, `Probability`) are also ignored.
For a simple swing file like Ableton's built-in "Swing 16ths 57", `from_agr()` and `Groove.swing(57)` produce equivalent results. Use `strength=` when applying to dial the effect back further.
The examples/ directory contains self-documenting compositions, each demonstrating a different style and set of features. Because Subsequence generates pure MIDI, what you hear depends on your choice of instruments - the same code can drive a hardware monosynth, a VST orchestra, or anything in between.
```
python examples/demo.py
```
Drums, bass, and an ascending arpeggio over evolving aeolian minor harmony in E. demo.py uses the Composition API (decorated functions); demo_advanced.py uses the Direct Pattern API (Pattern subclasses with async lifecycle). Compare them side by side to see how the two APIs relate.
A more complete composition with form sections (intro → section_1 ↔ section_2), five patterns (drums, bass, arp, lead), cycle-dependent variation, and Phrygian minor harmony. Demonstrates velocity_shape, legato, non-quarter-note grids, and section-aware muting.
Dense generative drums demonstrating composition.form() with a weighted graph that cycles through four sections infinitely (pulse → emerge → peak → dissolve). Each section has its own sonic character from bar one. Uses ghost_fill() for probability-biased ghost note layers, cellular_1d() for evolving cellular-automaton rhythms (Rules 30 and 90), bresenham_poly() for interlocking multi-voice hat patterns, and perlin_1d() / perlin_2d() for smooth organic parameter wandering. The weighted graph means the journey through sections is never quite the same twice. Channel 10 (MIDI 11), GM drum map.
A six-section generative drum composition that breathes, builds, and breaks. Six independent Perlin noise fields wander at prime-ish speeds - because p.cycle increments forever, every pass through the form samples a fresh region of each field, so no two bars are ever the same. The weighted graph usually flows void → pulse → swarm → fury → dissolve → void, but a rare "fracture" section can erupt from swarm or fury, lasting only 4 bars of controlled rhythmic chaos before scattering to dissolve, pulse, or void. A "lightning" event fires when the chaos Perlin peaks above 0.92 - roughly once every 70–80 bars. Uses the full rhythm toolkit: cellular_1d() (Rules 30, 90, and 110), ghost_fill() with multiple bias modes ('sixteenths', 'offbeat', 'before', 'uniform'), bresenham_poly(), euclidean() with Perlin-driven pulse counts in fracture, and transition-aware fills when p.section.next_section is known. Channel 10 (MIDI 11), GM drum map, 132 BPM.
Demonstrates freeze() and section_chords() - pre-baking chord progressions with different gravity and nir_strength settings for verse, chorus, and bridge. The verse and chorus replay their frozen chords on every re-entry; the bridge generates live chords each time. Shows how to combine structural repetition with generative freedom in a single composition.
Turns real-time International Space Station telemetry into an evolving composition. Fetches live ISS data every ~32 seconds and uses EasedValue instances to smoothly map orbital parameters to musical decisions.
- Latitude drives BPM, kick density, snare probabilities, and chord transition "gravity". Dense, fast beats near the poles; sparse groove over the equator.
- Heading (Latitude delta) dictates arpeggio direction - ascending while heading North, descending while heading South.
- Visibility (Day/Night) switches the harmonic mode - bright functional major in daylight, darker Dorian minor during eclipse.
- Altitude, Longitude, and Footprint influence chord voicings and ride cymbal pulse counts.
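The `EasedValue` pattern behind these mappings is easy to picture: each telemetry update starts a new glide from wherever the previous glide was heading. `EasedValueSketch` below is our simplified stand-in, not the real class from `subsequence.easing`:

```python
class EasedValueSketch:
    """Simplified model of an eased value: each update() starts a new glide toward the new target."""
    def __init__(self, initial: float = 0.0):
        self._start = self._target = initial

    def update(self, value: float) -> None:
        self._start = self._target  # glide begins where the previous one ended
        self._target = value

    def get(self, progress: float) -> float:
        t = max(0.0, min(1.0, progress))
        t = t * t * (3.0 - 2.0 * t)  # smoothstep: no hard jump at either end
        return self._start + (self._target - self._start) * t

latitude = EasedValueSketch(0.0)
latitude.update(51.5)  # new telemetry arrives
latitude.get(0.5)      # halfway through the fetch cycle: partway to 51.5
```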
How to run it:

- Install the `requests` library: `pip install requests`.
- Connect your MIDI port to a multitimbral synth or DAW (channels: 10=Drums, 6=Bass, 1=Chords, 4=Arp). [Note: MIDI channels are zero-indexed in the code, i.e. 9, 5, 0, 3.]
- Run: `python examples/iss.py`.
- `subsequence.pattern_builder` provides the `PatternBuilder` with high-level musical methods.
- `subsequence.motif` provides a small Motif helper that can render into a Pattern.
- `subsequence.groove` provides `Groove` templates (per-step timing/velocity feel). Swing is a type of groove - `p.swing(amount)` is a shortcut for the common case. For full control: `Groove.swing(percent)` for percentage-based swing; `Groove.from_agr(path)` to import timing and velocity from an Ableton `.agr` file (note: the Groove Pool blend controls in the file are not imported - use `strength=` when applying to partially blend the effect); or construct `Groove(offsets=..., velocities=...)` directly for a custom feel. Applied via `p.groove(template, strength=1.0)` - `strength` (0.0-1.0) blends the groove's timing and velocity proportionally, equivalent to Ableton's TimingAmount and VelocityAmount dials.
- `subsequence.sequence_utils` provides rhythm generation (Euclidean, Bresenham, weighted multi-voice Bresenham, van der Corput), probability gating, random walk, scale/clamp helpers, smooth 1D/2D Perlin noise (`perlin_1d(x, seed)`, `perlin_2d(x, y, seed)`), 1D elementary cellular automaton sequences (`generate_cellular_automaton_1d(steps, rule, generation, seed)`) and 2D Life-like CA grids (`generate_cellular_automaton_2d(rows, cols, rule, generation, seed, density)` - Birth/Survival notation, toroidal wrapping; maps rows to pitches/instruments and columns to time steps), deterministic chaos sequences (`logistic_map(r, steps, x0=0.5)` - a single `r` parameter controls behaviour from stable fixed point through period-doubling to full chaos), pink 1/f noise (`pink_noise(steps, sources=16, seed=0)` - Voss-McCartney algorithm producing natural multi-scale variation with both slow drift and fast jitter, matching the statistical character of real musical parameter fluctuations), and L-system string rewriting (`lsystem_expand(axiom, rules, generations, rng=None)` - parallel symbol rewriting producing self-similar sequences; deterministic rules use a plain string replacement, stochastic rules use weighted `[(str, float)]` lists; the expanded string is consumed by `p.lsystem()`). The weighted Bresenham function (`generate_bresenham_sequence_weighted`) underlies `p.bresenham_poly()` - pass a `parts` dict mapping pitch names to density weights (0.0–1.0) and the voices are distributed across the step grid in interlocking Bresenham patterns. Add `no_overlap=True` to any of `euclidean()`, `bresenham()`, or `bresenham_poly()` to skip placement when the same MIDI pitch already occupies that step - prevents double-triggers when layering an anchor pattern with a ghost-note layer.
- `subsequence.mini_notation` parses a compact string syntax for step-sequencer patterns.
- `subsequence.easing` provides easing/transition curve functions used by `conductor.line()`, `target_bpm()`, `cc_ramp()`, and `pitch_bend_ramp()`. Pass `shape=` to any of these to control how a value moves over time. Built-in shapes: `"linear"` (default), `"ease_in"`, `"ease_out"`, `"ease_in_out"` (Hermite smoothstep), `"exponential"` (cubic, good for filter sweeps), `"logarithmic"` (cubic, good for volume fades), `"s_curve"` (Perlin smootherstep - smoother than `"ease_in_out"` for long transitions). Callable shapes are also accepted for custom curves. Also provides `EasedValue` - a lightweight stateful helper that smoothly interpolates between discrete data updates (e.g. API poll results, sensor readings) so patterns hear a continuous eased value rather than a hard jump on each fetch cycle. Create one instance per field at module level, call `.update(value)` in your scheduled task, and call `.get(progress)` in your pattern.
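As one example from the list above, the logistic map is a one-line recurrence. A sketch matching the documented signature - iteration details such as whether `x0` itself is emitted are our assumption:

```python
def logistic_map(r: float, steps: int, x0: float = 0.5):
    """x_{n+1} = r * x_n * (1 - x_n): fixed point, period-doubling, or chaos depending on r."""
    values, x = [], x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        values.append(x)
    return values

logistic_map(2.0, 4)  # settles on the fixed point 0.5
logistic_map(3.9, 4)  # r near 4: chaotic wandering within (0, 1)
```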
- `subsequence.intervals` contains interval and scale definitions (`INTERVAL_DEFINITIONS`) for harmonic work, including non-western scales (Hirajōshi, In-Sen, Iwato, Yo, Egyptian) and pentatonics. `SCALE_MODE_MAP` (aliased as `DIATONIC_MODE_MAP`) maps mode/scale names to interval sets and optional chord qualities. `register_scale(name, intervals, qualities=None)` adds custom scales at runtime. `scale_pitch_classes(key_pc, mode)` returns the pitch classes for any key and mode; `quantize_pitch(pitch, scale_pcs)` snaps a MIDI note to the nearest scale degree.
- `subsequence.harmony` provides `diatonic_chords(key, mode)` and `diatonic_chord_sequence(key, root_midi, count, mode)` for working with diatonic chord progressions without the chord graph engine, plus `ChordPattern` for a repeating chord driven by harmonic state.
- `subsequence.chord_graphs` contains chord transition graphs. Each is a `ChordGraph` subclass with `build()` and `gravity_sets()` methods. Built-in styles: `"diatonic_major"`, `"turnaround"`, `"aeolian_minor"`, `"phrygian_minor"`, `"lydian_major"`, `"dorian_minor"`, `"suspended"`, `"chromatic_mediant"`, `"mixolydian"`, `"whole_tone"`, `"diminished"`.
- `subsequence.harmonic_state` holds the shared chord/key state for multiple patterns.
- `subsequence.voicings` provides chord inversions and voice leading. `invert_chord()` rotates intervals; `VoiceLeadingState` picks the closest inversion to the previous chord automatically.
- `subsequence.markov_chain` provides a generic weighted Markov chain utility.
- `subsequence.melodic_state` provides `MelodicState` - the persistent melodic context for `p.melody()`. Tracks pitch history across bar rebuilds and applies NIR scoring (Reversal, Process, Closure, Proximity) to absolute MIDI pitches. Constructor params: `key`, `mode`, `low`, `high`, `nir_strength`, `chord_weight`, `rest_probability`, `pitch_diversity`. Exported from the top-level package as `subsequence.MelodicState`.
- `subsequence.weighted_graph` provides a generic weighted directed graph used for transitions. Used internally by `composition.form()` (section transitions), the harmony engine (chord progressions), and `p.markov()` (Markov-chain melody/bassline generation).
- `subsequence.constants.durations` provides beat-based duration constants. Import as `import subsequence.constants.durations as dur` and write `length = 9 * dur.SIXTEENTH` or `step = dur.DOTTED_EIGHTH` instead of raw floats. Constants: `THIRTYSECOND`, `SIXTEENTH`, `DOTTED_SIXTEENTH`, `TRIPLET_EIGHTH`, `EIGHTH`, `DOTTED_EIGHTH`, `TRIPLET_QUARTER`, `QUARTER`, `DOTTED_QUARTER`, `HALF`, `DOTTED_HALF`, `WHOLE`.
- `subsequence.constants.velocity` provides MIDI velocity constants. `DEFAULT_VELOCITY = 100` (most notes), `DEFAULT_CHORD_VELOCITY = 90` (harmonic content), `VELOCITY_SHAPE_LOW = 64` and `VELOCITY_SHAPE_HIGH = 127` (velocity shaping boundaries), `MIN_VELOCITY = 0`, `MAX_VELOCITY = 127`.
- `subsequence.constants.gm_drums` provides the General MIDI Level 1 drum note map. `GM_DRUM_MAP` can be passed as `drum_note_map`; individual constants like `KICK_1` are also available.
- `subsequence.constants.midi_notes` provides named MIDI note constants C0–G9 (MIDI 12–127). Import as `import subsequence.constants.midi_notes as notes`. Convention: `C4 = 60` (Middle C, MMA standard). Naturals: `C4`, `D4`, … `B4`. Sharps: `CS4` (C♯4), `DS4`, `FS4`, `GS4`, `AS4`. Use instead of raw integers: `root = notes.E2` (40), `p.note(notes.A4)` (69).
- `subsequence.constants.pulses` provides pulse-based MIDI timing constants used internally by the engine.
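If the duration constants follow the usual one-beat-per-quarter-note convention - an assumption; check the module for the actual values - they would reduce to:

```python
# Hypothetical values, assuming 1 beat = 1 quarter note
THIRTYSECOND     = 0.125
SIXTEENTH        = 0.25
DOTTED_SIXTEENTH = 0.375
TRIPLET_EIGHTH   = 1 / 3
EIGHTH           = 0.5
DOTTED_EIGHTH    = 0.75
TRIPLET_QUARTER  = 2 / 3
QUARTER          = 1.0
DOTTED_QUARTER   = 1.5
HALF             = 2.0
DOTTED_HALF      = 3.0
WHOLE            = 4.0

length = 9 * SIXTEENTH  # 2.25 beats - a bar and a bit of 16ths
step = DOTTED_EIGHTH    # 0.75 beats
```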
- `subsequence.composition` provides the `Composition` class and internal scheduling helpers.
- `subsequence.event_emitter` supports sync/async events used by the sequencer.
- `subsequence.osc` provides the OSC server/client for bi-directional communication. Receiving: `/bpm`, `/mute`, `/unmute`, `/data`. Status broadcasting: `/bar`, `/bpm`, `/chord`, `/section`. Pattern output: `p.osc()`, `p.osc_ramp()`.
- `subsequence.live_server` provides the TCP eval server for live coding. Started internally by `composition.live()`.
- `subsequence.live_client` provides the interactive REPL client. Run with `python -m subsequence.live_client`.
Planned features, roughly in order of priority.
- Comprehensive Cookbook and Tutorials. A guided, progressive walkthrough that takes a new user from zero to an evolving composition in 15 minutes, alongside bite-sized, copy-paste recipes for common musical requests (e.g., "generative techno kick", "functional bassline"). The README is a reference; the tutorial and cookbook are the on-ramp.
- Example library. More short, self-contained compositions in different styles - minimal techno, ambient generative, polyrhythmic exploration, data-driven - so musicians can hear what the tool does before investing time. Each example should demonstrate 2-3 features and fit on one screen.
- Visual Dashboard / Web UI. A lightweight local web dashboard to provide real-time visual feedback of the current Chord Graph, global Conductor signals, and active patterns, making the generative process more observable.
- Ableton Link support. The de facto standard for wireless tempo sync between devices in a modern studio.
- Starter templates. Lower the blank-page barrier with ready-made starting points for common genres. Musicians tweak from a working composition rather than building from scratch.
- Network sync. Peer-to-peer sync with DAWs and other Link-enabled devices, covering multi-machine Subsequence setups or custom sync protocols.
- Standalone Raspberry Pi mode. Run Subsequence headlessly on a Raspberry Pi with a small touchscreen - turning it into a dedicated hardware sequencer with no desktop environment required.
- Performance profiling. Optional debug mode logging timing per `on_reschedule()` call, helping identify pattern logic that may cause jitter under load.
- Live coding UX improvements. Richer feedback in the live client - syntax highlighting, error display, undo/history for hot-swapped patterns. Explore integration with editors (VS Code extension, Jupyter notebooks).
- CV/Gate output. Direct control voltage output for modular synthesisers via supported hardware interfaces, bridging the gap between code and analogue.
This project uses pytest.

```
pytest
```

Async tests use pytest-asyncio. Install test dependencies with:

```
pip install -e .[test]
```

This project uses mypy for static type checking. Run locally with:

```
pip install -e .[dev]
mypy subsequence/
```

Type checking runs automatically in CI on all pull requests.
All feedback and suggestions will be gratefully received! Please use these channels:
- Discussions: The best place to ask questions and share ideas.
- Issues: If you ran into a bug, or have a specific feature request for the codebase, please open an Issue here.
Subsequence was created by me, Simon Holliday (https://simonholliday.com/), a senior technologist and a junior (but trying) musician. From running an electronic music label in the 2000s to prototyping new passive SONAR techniques for defence research, my work has often explored the intersection of code and sound.
Subsequence was iterated over a series of separate proof-of-concept projects during 2025, and pulled together into this new codebase in Spring 2026.
Subsequence is released under the GNU Affero General Public License v3.0 (AGPLv3).
You are free to use, modify, and distribute this software under the terms of the AGPL. If you run a modified version of Subsequence as part of a network service, you must make the source code available to its users.
If you wish to use Subsequence in a proprietary or closed-source product without the obligations of the AGPL, please contact simon.holliday@protonmail.com to discuss a commercial license.
Footnotes
1. A Markov chain is a system where each state (here, a chord) transitions to the next based on weighted probabilities. Subsequence adds "gravity" - a configurable pull that draws progressions back toward the home key, so harmony drifts but never gets lost. ↩
2. Velocity values are spread using a van der Corput sequence - a low-discrepancy series that distributes values more evenly than pure randomness, producing a more natural, musical feel. ↩
3. "Stochastic" means governed by probability. These tools give you controlled randomness - results that sound intentional rather than arbitrary. ↩