🚧 Work In Progress: Active Engineering Sprint
This project is currently under active development. Core features are functional but APIs and data structures are subject to rapid iteration. Not yet ready for stable deployment.
Real-Time BCI Decoder Visualization Platform
A research-grade dashboard for visualizing and validating BCI decoder performance in real-time.
- 🔌 Universal Stream Architecture - Connect to any multichannel data source (EEG, spikes, simulated)
- 🧠 25+ Device Support - OpenBCI, Muse, Emotiv, NeuroSky, PiEEG, LSL (130+ devices), and more
- ⚡ Real-Time Performance - 40Hz streaming with <50ms end-to-end latency
- 🤖 AI-Powered Decoders - TensorFlow.js models with WebGPU/WebGL acceleration
- 📝 Monaco Code Editor - Write custom decoders with VS Code-quality IntelliSense
- 📊 Rich Visualizations - Center-out arena, neural waterfall, spectral analysis, and more
- 🔧 Brainflow Integration - Export configurations for any Brainflow-compatible device
PhantomLoop supports any multichannel time-series source through a unified adapter pattern (sketched just after the device table below).
| Manufacturer | Device | Channels | Sample Rate | Protocol |
|---|---|---|---|---|
| PhantomLink | MC_Maze Dataset | 142 | 40 Hz | WebSocket (MessagePack) |
| OpenBCI | Cyton | 8 | 250 Hz | Serial/WiFi |
| OpenBCI | Cyton + Daisy | 16 | 125 Hz | Serial/WiFi |
| OpenBCI | Ganglion | 4 | 200 Hz | BLE |
| Muse | Muse 2 / Muse S | 4 | 256 Hz | BLE |
| Emotiv | Insight | 5 | 128 Hz | BLE |
| Emotiv | EPOC X | 14 | 128/256 Hz | BLE |
| NeuroSky | MindWave | 1 | 512 Hz | Bluetooth |
| PiEEG | PiEEG | 8 | 250-16000 Hz | SPI (Raspberry Pi) |
| PiEEG | PiEEG-16 | 16 | 250-8000 Hz | SPI (Raspberry Pi) |
| PiEEG | IronBCI | 8 | 250 Hz | BLE/WiFi |
| PiEEG | IronBCI-32 | 32 | 250 Hz | WiFi |
| PiEEG | JNEEG | 8 | 250-2000 Hz | SPI (Jetson Nano) |
| PiEEG | ardEEG | 8 | 250 Hz | Serial (Arduino) |
| PiEEG | MicroBCI | 8 | 250 Hz | BLE (STM32) |
| Cerelog | ESP-EEG | 8 | 250 Hz | WiFi (TCP) |
| LSL | Generic (8-64ch) | 8-64 | Variable | Lab Streaming Layer |
| LSL | Brain Products | 32+ | Up to 25kHz | LSL (via Connector) |
| LSL | BioSemi ActiveTwo | 32+ | Up to 16kHz | LSL |
| LSL | g.tec | 16+ | Up to 38kHz | LSL (g.NEEDaccess) |
| LSL | Cognionics | 20-30 | 500 Hz | LSL |
| LSL | ANT Neuro | 32+ | 2048 Hz | LSL |
| LSL | NIRx fNIRS | 16+ | 10 Hz | LSL |
| Brainflow | Synthetic | 8 | 250 Hz | Virtual |
⚠️ Note: Browsers cannot connect directly to TCP/Serial/BLE. Hardware devices require a WebSocket bridge (Python scripts included).
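The unified adapter pattern means every source in this table is consumed the same way once connected. A minimal sketch using the StreamSource interface shown later in this README (the pushSample hand-off is illustrative):

```ts
// Any device, whether a 142-channel PhantomLink session or a 4-channel Muse,
// flows through the same two calls. `pushSample` stands in for the real
// store hand-off.
import type { StreamSource, StreamConfig, StreamSample } from './types/stream';

async function attach(
  source: StreamSource,
  url: string,
  pushSample: (sample: StreamSample) => void,
  config?: StreamConfig,
): Promise<() => void> {
  await source.connect(url, config);
  // onSample returns an unsubscribe function; keep it for teardown.
  const unsubscribe = source.onSample(pushSample);
  return () => {
    unsubscribe();
    source.disconnect();
  };
}
```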
PiEEG is a low-cost, open-source EEG shield for Raspberry Pi using the ADS1299 ADC. PhantomLoop provides full support for the PiEEG device family:
| Device | Channels | Use Case | Link |
|---|---|---|---|
| PiEEG | 8 | Raspberry Pi 3/4/5, research & learning | pieeg.com/pieeg |
| PiEEG-16 | 16 | Extended coverage, dual ADS1299 | pieeg.com/pieeg-16 |
| IronBCI | 8 | Wearable, BLE, mobile SDK | pieeg.com/ironbci |
| IronBCI-32 | 32 | High-density research | pieeg.com/ironbci-32 |
| JNEEG | 8 | Jetson Nano, GPU-accelerated DL | pieeg.com/jneeg |
| ardEEG | 8 | Arduino shield, beginner-friendly | pieeg.com/ardeeg |
| MicroBCI | 8 | STM32 NUCLEO-WB55, ultra-compact | pieeg.com/microbci |
Key Specs:
- 24-bit resolution (ADS1299)
- Programmable gain: 1, 2, 4, 6, 8, 12, 24
- Configurable sample rates: 250-16000 SPS
- Supports EEG, EMG, and ECG signals
- BrainFlow compatible (board ID: 46)
Lab Streaming Layer (LSL) is the de facto standard for streaming EEG and biosignal data in research settings. PhantomLoop supports 130+ LSL-compatible devices through the included WebSocket bridge.
Key Features:
- Real-time stream discovery on local network
- Sub-millisecond time synchronization
- Multi-stream support (EEG, markers, motion)
- Automatic reconnection on stream loss
LSL-Compatible Devices:
| Manufacturer | Devices | Notes |
|---|---|---|
| Brain Products | actiCHamp, LiveAmp, BrainVision | Via LSL Connector app |
| BioSemi | ActiveTwo 32-256ch | Research gold standard |
| g.tec | g.USBamp, g.Nautilus, g.HIamp | Via g.NEEDaccess |
| ANT Neuro | eego sport, eego mylab | Mobile & lab EEG |
| Cognionics | Quick-20, Quick-30, Mobile-72 | Dry electrode systems |
| OpenBCI | All models | Via OpenBCI GUI LSL |
| Muse | Muse 1/2/S | Via muse-lsl |
| Emotiv | EPOC, Insight, EPOC Flex | Via EmotivPRO LSL |
| NIRx | NIRSport, NIRScout | fNIRS devices |
| Tobii | Pro Glasses, Screen-based | Eye tracking |
| Neurosity | Notion, Crown | Consumer EEG |
| BrainAccess | HALO, MINI, MIDI | Affordable research EEG |
📋 Full device list: labstreaminglayer.org
# Clone the repository
git clone https://github.com/yelabb/PhantomLoop.git
cd PhantomLoop
# Install dependencies
npm install
# Start development server
npm run dev
# Open http://localhost:5173 in your browser

Option 1: PhantomLink (No hardware needed)
# Use the default PhantomLink server
# wss://phantomlink.fly.dev

Option 2: PiEEG (Raspberry Pi)
# 1. Connect PiEEG shield to Raspberry Pi GPIO
# 2. Enable SPI: sudo raspi-config → Interface Options → SPI
# 3. Run the WebSocket bridge on the Pi
pip install websockets spidev RPi.GPIO numpy
python scripts/pieeg_ws_bridge.py --rate 250 --gain 24
# 4. In PhantomLoop, connect to ws://<raspberry-pi-ip>:8766
# 5. Select "PiEEG" in the device selector

Option 3: Lab Streaming Layer (130+ Devices)
# 1. Start your LSL source (OpenBCI GUI, muse-lsl, BrainVision, etc.)
# 2. Run the LSL WebSocket bridge
pip install websockets pylsl numpy
python scripts/lsl_ws_bridge.py
# 3. In PhantomLoop, connect to ws://localhost:8767
# 4. Select "LSL Stream" in the device selector
# Advanced: Connect to specific stream by name
python scripts/lsl_ws_bridge.py --stream "OpenBCI_EEG"
# List available LSL streams on your network
python scripts/lsl_ws_bridge.py --list

Option 4: Cerelog ESP-EEG (WiFi)
# 1. Connect to ESP-EEG WiFi: SSID: CERELOG_EEG, Password: cerelog123
# 2. Run the WebSocket bridge
pip install websockets
python scripts/cerelog_ws_bridge.py
# 3. Select "Cerelog ESP-EEG" in PhantomLoop

Since browsers cannot directly access hardware (SPI, Serial, BLE, TCP), PhantomLoop includes Python bridge scripts that expose devices via WebSocket:
| Script | Device | Port | Mode |
|---|---|---|---|
| `lsl_ws_bridge.py` | Any LSL source (130+ devices) | 8767 | LSL Inlet → WebSocket |
| `pieeg_ws_bridge.py` | PiEEG (Raspberry Pi) | 8766 | SPI / BrainFlow / Simulation |
| `pieeg_ws_bridge_dsp.py` | PiEEG + Signal Hygiene | 8766 | SPI + Real-Time DSP |
| `cerelog_ws_bridge.py` | Cerelog ESP-EEG | 8765 | TCP-to-WebSocket |
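From the browser's point of view, all four bridges behave alike: a WebSocket that accepts JSON commands and replies with metadata plus binary sample frames. A minimal sketch against the LSL bridge, using the command and metadata shapes documented below (the onMetadata callback and the hard-coded stream name are illustrative):

```ts
// Browser-side client for the LSL bridge on port 8767.
type StreamMetadata = {
  type: 'metadata';
  stream: {
    name: string;
    stream_type: string;
    channel_count: number;
    sampling_rate: number;
    channel_labels: string[];
  };
};

function connectToBridge(url: string, onMetadata: (m: StreamMetadata) => void): WebSocket {
  const ws = new WebSocket(url);
  ws.binaryType = 'arraybuffer'; // sample payloads arrive as binary frames

  ws.onopen = () => {
    // Discover visible streams, then attach to one by name.
    ws.send(JSON.stringify({ command: 'discover' }));
    ws.send(JSON.stringify({ command: 'connect', name: 'OpenBCI_EEG', stream_type: 'EEG' }));
  };

  ws.onmessage = (event) => {
    if (typeof event.data === 'string') {
      const msg = JSON.parse(event.data);
      if (msg.type === 'metadata') onMetadata(msg as StreamMetadata);
    }
    // Binary frames carry samples (see the packet format section below).
  };

  return ws;
}

const ws = connectToBridge('ws://localhost:8767', (m) => console.log(m.stream.channel_labels));
```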
The LSL bridge connects to any Lab Streaming Layer source and forwards data to the browser:
# Auto-discover and connect to first EEG stream
python scripts/lsl_ws_bridge.py
# Connect to specific stream by name
python scripts/lsl_ws_bridge.py --stream "OpenBCI_EEG"
# Connect to Muse via muse-lsl
python scripts/lsl_ws_bridge.py --stream "Muse" --type EEG
# List available streams on network
python scripts/lsl_ws_bridge.py --list
# Run with simulated data for testing (no hardware)
python scripts/lsl_ws_bridge.py --simulate
# Custom port
python scripts/lsl_ws_bridge.py --port 8768

WebSocket Commands:
{"command": "discover"}
{"command": "connect", "name": "OpenBCI_EEG", "stream_type": "EEG"}
{"command": "disconnect"}
{"command": "ping"}Response: Stream Metadata
{
"type": "metadata",
"stream": {
"name": "OpenBCI_EEG",
"stream_type": "EEG",
"channel_count": 8,
"sampling_rate": 250.0,
"channel_labels": ["Fp1", "Fp2", "C3", "C4", "P3", "P4", "O1", "O2"]
}
}

# Basic usage (on Raspberry Pi)
python scripts/pieeg_ws_bridge.py
# With options
python scripts/pieeg_ws_bridge.py \
--rate 500 \ # Sample rate: 250, 500, 1000, 2000, 4000, 8000, 16000
--gain 24 \ # PGA gain: 1, 2, 4, 6, 8, 12, 24
--channels 16 \ # 8 or 16 (PiEEG-16)
--port 8766 \ # WebSocket port
--brainflow # Use BrainFlow instead of direct SPI
# Development mode (no hardware - generates synthetic alpha waves)
python scripts/pieeg_ws_bridge.py # Auto-detects non-Pi systems

WebSocket Commands:
{"command": "connect"}
{"command": "disconnect"}
{"command": "set_gain", "gain": 24}
{"command": "set_sample_rate", "rate": 500}The DSP-enhanced bridge applies real-time digital signal processing before streaming, removing common EEG artifacts at the source:
Signal Hygiene Pipeline:
Raw ADS1299 → DC Block → Notch (50/60 Hz) → Bandpass (0.5-45 Hz) → Artifact Reject → CAR → WebSocket
| Filter | Purpose | Default |
|---|---|---|
| DC Blocker | Removes electrode drift | α=0.995 (~0.8 Hz) |
| Notch Filter | Removes powerline + harmonics | 60 Hz (3 harmonics) |
| Bandpass | Isolates EEG band | 0.5-45 Hz (order 4) |
| Artifact Rejection | Blanks amplitude spikes | ±150 µV threshold |
| CAR | Common Average Reference | Disabled by default |
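The bridge implements this chain in Python; as an illustration (TypeScript here only for consistency with the rest of this README), the two simplest stages look like this. The DC blocker is the standard first-order filter y[n] = x[n] - x[n-1] + α·y[n-1] with the α = 0.995 default from the table; zeroing a sample on artifact rejection is an assumed blanking policy.

```ts
// First-order DC blocker: removes slow electrode drift while passing EEG.
function makeDcBlocker(alpha = 0.995): (x: number) => number {
  let prevX = 0;
  let prevY = 0;
  return (x) => {
    const y = x - prevX + alpha * prevY;
    prevX = x;
    prevY = y;
    return y;
  };
}

// Artifact rejection: blank any sample beyond the ±150 µV default threshold.
function blankArtifact(uV: number, thresholdUv = 150): { value: number; blanked: boolean } {
  const blanked = Math.abs(uV) > thresholdUv;
  return { value: blanked ? 0 : uV, blanked };
}
```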
# Basic usage with 60 Hz notch (Americas, Asia)
python scripts/pieeg_ws_bridge_dsp.py --notch 60
# European 50 Hz with custom bandpass
python scripts/pieeg_ws_bridge_dsp.py --notch 50 --highpass 1.0 --lowpass 40
# Full signal hygiene with artifact rejection and CAR
python scripts/pieeg_ws_bridge_dsp.py --notch 60 --car --artifact-threshold 150
# Minimal processing (DC block only)
python scripts/pieeg_ws_bridge_dsp.py --no-notch --no-bandpass
# High sample rate with adjusted filters
python scripts/pieeg_ws_bridge_dsp.py --sample-rate 500 --notch 60 --lowpass 100

All DSP Options:
# Network
--host 0.0.0.0 # WebSocket bind address
--port 8766 # WebSocket port
# Hardware
--sample-rate 250 # 250, 500, 1000, 2000 Hz
--gain 24 # PGA gain: 1, 2, 4, 6, 8, 12, 24
--channels 8 # Number of channels
# Notch Filter
--notch 60 # Powerline frequency (50 or 60 Hz)
--notch-harmonics 3 # Filter fundamental + N harmonics
--notch-q 30 # Quality factor (higher = narrower)
--no-notch # Disable notch filter
# Bandpass Filter
--highpass 0.5 # High-pass cutoff (Hz)
--lowpass 45 # Low-pass cutoff (Hz)
--filter-order 4 # Butterworth order
--no-bandpass # Disable bandpass filter
# DC Blocking
--dc-alpha 0.995 # DC blocker pole (0.99-0.999)
--no-dc-block # Disable DC blocking
# Artifact Rejection
--artifact-threshold 150 # Threshold in µV
--no-artifact # Disable artifact rejection
# Common Average Reference
--car # Enable CAR
--car-exclude "0,7" # Exclude channels from CAR
# Smoothing
--smooth 0.3 # Exponential smoothing alpha (0 = disabled)

Extended Packet Format:
DSP packets include artifact flags per sample:
Header: magic(2) + type(1) + samples(2) + channels(1) + timestamp(8)
Data: [float32 × channels + artifact_byte] × samples
- `type = 0x02` indicates DSP-processed data
- `artifact_byte` is a bitmask of channels with blanked artifacts
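A sketch of parsing that layout in the browser with a DataView; the field order follows the header spec above, and little-endian byte order is an assumption:

```ts
interface DspPacket {
  timestamp: number;
  samples: Float32Array[]; // one Float32Array of `channels` values per sample
  artifacts: Uint8Array;   // per-sample bitmask of blanked channels
}

function parseDspPacket(buf: ArrayBuffer): DspPacket | null {
  const view = new DataView(buf);
  const type = view.getUint8(2);             // after the 2-byte magic
  if (type !== 0x02) return null;            // not a DSP-processed packet
  const sampleCount = view.getUint16(3, true);
  const channels = view.getUint8(5);
  const timestamp = view.getFloat64(6, true);

  const samples: Float32Array[] = [];
  const artifacts = new Uint8Array(sampleCount);
  let offset = 14;                           // header size: 2 + 1 + 2 + 1 + 8
  for (let s = 0; s < sampleCount; s++) {
    const frame = new Float32Array(channels);
    for (let c = 0; c < channels; c++) {
      frame[c] = view.getFloat32(offset, true);
      offset += 4;
    }
    samples[s] = frame;
    artifacts[s] = view.getUint8(offset++);  // artifact bitmask for this sample
  }
  return { timestamp, samples, artifacts };
}
```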
python scripts/cerelog_ws_bridge.py --esp-ip 192.168.4.1 --esp-port 1112

PhantomLoop is a single-page React application with modular state management:
┌─────────────────────────┐   ┌─────────────────────────┐
│       PhantomLink       │   │       EEG Devices       │
│     (142ch @ 40Hz)      │   │    (1-16ch @ 250Hz)     │
└────────────┬────────────┘   └────────────┬────────────┘
             │                             │
             └──────────────┬──────────────┘
                            │
                ┌───────────▼─────────────┐
                │    StreamSource API     │
                │ connect() / onSample()  │
                └───────────┬─────────────┘
                            │
                ┌───────────▼─────────────┐
                │      Zustand Store      │
                │ (5 specialized slices)  │
                └───────────┬─────────────┘
                            │
        ┌───────────────────┼───────────────────┐
        │                   │                   │
┌───────▼─────────┐ ┌───────▼─────────┐ ┌───────▼─────────┐
│ Dynamic Decoders│ │ Visualizations  │ │ Metrics Engine  │
│   (JS / TFJS)   │ │ (15+ components)│ │ (accuracy, FPS) │
└─────────────────┘ └─────────────────┘ └─────────────────┘
- Stream Adapters - Unified interface for any multichannel data source
- WebSocket Client - Binary MessagePack protocol (40Hz streaming; see the decoding sketch after this list)
- State Management - Zustand with 5 specialized slices
- Decoder Engine - Supports JavaScript and TensorFlow.js models (dynamic input shapes)
- Visualization Suite - 2D arena, neural activity, performance metrics
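A sketch of that MessagePack hand-off with the @msgpack/msgpack package; the packet fields shown are assumptions based on the decoder inputs listed later in this README:

```ts
import { decode } from '@msgpack/msgpack';

const ws = new WebSocket('wss://phantomlink.fly.dev');
ws.binaryType = 'arraybuffer';

ws.onmessage = (event: MessageEvent) => {
  if (!(event.data instanceof ArrayBuffer)) return;
  // One binary frame per 40 Hz tick; decode() returns a plain JS object.
  const packet = decode(new Uint8Array(event.data)) as {
    timestamp: number;
    kinematics: { x: number; y: number; vx: number; vy: number };
    spikes: number[]; // N channels
  };
  // forward `packet` into the stream slice from here
};
```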
- connectionSlice: WebSocket lifecycle, session management
- streamSlice: Packet buffering with throttled updates (20Hz UI refresh)
- decoderSlice: Decoder registry, execution, loading states
- metricsSlice: Accuracy tracking, latency monitoring, error statistics
- unifiedStreamSlice: Stream source selection, adapter state, N-channel buffer
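A minimal sketch of the slice pattern; the fields shown are illustrative, not the real slice definitions:

```ts
import { create, type StateCreator } from 'zustand';

interface MetricsSlice {
  meanError: number;
  samplesProcessed: number;
  recordError: (err: number) => void;
}

const createMetricsSlice: StateCreator<MetricsSlice> = (set) => ({
  meanError: 0,
  samplesProcessed: 0,
  recordError: (err) =>
    set((s) => ({
      // incremental running mean over all processed samples
      meanError: (s.meanError * s.samplesProcessed + err) / (s.samplesProcessed + 1),
      samplesProcessed: s.samplesProcessed + 1,
    })),
});

// The real store composes five slices this way; one is shown for brevity.
export const useStore = create<MetricsSlice>()((...args) => ({
  ...createMetricsSlice(...args),
}));
```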
- JavaScript: Direct execution in main thread (<1ms)
- TensorFlow.js: Web Worker execution (5-10ms)
- Dynamic Models: Input shape adapts to stream channel count
- Timeout protection: 10ms limit per inference
- Error handling: Falls back to passthrough on failure
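The timeout and fallback rules can be sketched with Promise.race (the types and wiring here are illustrative; the real engine dispatches TFJS work to a Web Worker):

```ts
type Kinematics = { x: number; y: number; vx: number; vy: number };
type DecodeFn = (input: Kinematics) => Promise<Kinematics>;

async function runDecoder(
  decode: DecodeFn,
  input: Kinematics,
  timeoutMs = 10, // matches the 10ms inference limit above
): Promise<Kinematics> {
  const timeout = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new Error('decoder timeout')), timeoutMs),
  );
  try {
    // Whichever settles first wins: the prediction or the timeout.
    return await Promise.race([decode(input), timeout]);
  } catch {
    return input; // passthrough fallback on timeout or decoder error
  }
}
```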
The dashboard provides synchronized visualization panels:
Real-time cursor tracking in a center-out reaching task (8 targets):
- 🟢 Ground Truth Cursor (Green): Actual cursor position from dataset kinematics
- 🔵 Decoder Output (Blue): Your algorithm's predicted position
- 🟣 Active Target (Purple): Current reach target with hit detection
- Error line and color-coded error bar visualization
- Neural Waterfall: Scrolling heatmap of all channels over time
- Neuron Activity Grid: Individual channel firing patterns
- Spike Raster Plot: Precise spike timing visualization
- Population Dynamics: Dimensionality-reduced neural trajectories
- Spectral Power Panel: Real-time frequency analysis
- Interactive 10-20/10-10 montage configuration
- Real-time signal quality indicators
- Device-specific default layouts
- Accuracy Gauge: Circular gauge (0-100%)
- Quick Stats: Mean error, samples processed
- Stream Monitor: Latency, FPS, packet statistics
- Correlation Matrix: Inter-channel relationships
Goal: Achieve >90% accuracy with <15mm mean error.
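One plausible way to compute those two headline numbers, treating a sample as "accurate" when the decoded cursor lands within the 15mm bound (that accuracy definition is an assumption):

```ts
type Point = { x: number; y: number };

function evaluate(truth: Point[], decoded: Point[], hitRadiusMm = 15) {
  let errorSum = 0;
  let hits = 0;
  for (let i = 0; i < truth.length; i++) {
    // Euclidean distance between ground truth and prediction, in mm
    const err = Math.hypot(truth[i].x - decoded[i].x, truth[i].y - decoded[i].y);
    errorSum += err;
    if (err < hitRadiusMm) hits++;
  }
  return {
    meanErrorMm: errorSum / truth.length,
    accuracyPct: (100 * hits) / truth.length,
  };
}
```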
| Decoder | Description |
|---|---|
| Passthrough | Perfect tracking baseline (copies ground truth) |
| Delayed | 100ms lag test for desync detection |
| Velocity Predictor | Linear prediction using velocity |
| Spike-Based Simple | Naive spike-rate modulated decoder |
| Model | Architecture | Parameters |
|---|---|---|
| Linear (OLE) | Optimal Linear Estimator (N→2) | ~290 |
| MLP | Multi-layer perceptron (N→128→64→2) | ~27K |
| LSTM | Temporal decoder with 10-step history | ~110K |
| BiGRU Attention | Bidirectional GRU with max pooling | ~85K |
| Kalman-Neural Hybrid | MLP + Kalman fusion (α=0.6) | ~27K |
All TensorFlow.js models support dynamic input shapes: they auto-adapt to the stream's channel count!
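A sketch of what that means in practice, following the MLP row above (activations and other hyperparameters are assumptions):

```ts
import * as tf from '@tensorflow/tfjs';

// Input width follows whatever channel count the active stream reports.
function buildMlp(channelCount: number): tf.LayersModel {
  const model = tf.sequential();
  model.add(tf.layers.dense({ inputShape: [channelCount], units: 128, activation: 'relu' }));
  model.add(tf.layers.dense({ units: 64, activation: 'relu' }));
  model.add(tf.layers.dense({ units: 2 })); // (x, y) cursor prediction
  return model;
}

// Rebuilt whenever the adapter reports a different channel count:
const model = buildMlp(8);
const prediction = model.predict(tf.zeros([1, 8])) as tf.Tensor;
```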
Write custom decoders with a VS Code-quality editing experience:
- Monaco Editor with full IntelliSense
- TensorFlow.js autocomplete for `tf.layers`, `tf.sequential()`, etc.
- AI-powered code generation via Groq (natural language → code)
- Quick templates for MLP, LSTM, CNN, Attention architectures
- Real-time validation with syntax checking and best practices
See docs/CODE_EDITOR.md for full documentation.
Vercel:

vercel --prod

- Build command: `npm run build`
- Publish directory: `dist`

Cloudflare Pages:

npm run build
npx wrangler pages deploy dist

| Category | Technology |
|---|---|
| Framework | React 19 + TypeScript 5.9 |
| Build Tool | Vite 7 |
| State | Zustand (slice pattern) |
| Styling | Tailwind CSS |
| Animations | Framer Motion |
| ML Runtime | TensorFlow.js (WebGPU/WebGL) |
| Code Editor | Monaco Editor |
| Protocol | MessagePack (binary) |
| Testing | Vitest + Cypress |
| Metric | Target | Notes |
|---|---|---|
| Data Rate | 40 Hz | From PhantomLink backend |
| UI Refresh | 60 FPS | Throttled to 20Hz for React |
| Total Latency | <50ms | Network + Decoder + Render |
| Decoder Timeout | 10ms | Auto-fallback on timeout |
- Desync detection when latency exceeds threshold
- Web Worker decoders for non-blocking computation
- Draggable/resizable panels with localStorage persistence
Create .env.local:
VITE_PHANTOMLINK_URL=wss://phantomlink.fly.dev

Edit src/utils/constants.ts:
// Color scheme
export const COLORS = {
BIOLINK: '#00FF00', // Green - Ground Truth
LOOPBACK: '#0080FF', // Blue - Decoder Output
TARGET: '#FF00FF', // Magenta - Active Target
};
// Performance monitoring
export const PERFORMANCE_THRESHOLDS = {
TARGET_FPS: 60,
MAX_TOTAL_LATENCY_MS: 50,
DESYNC_THRESHOLD_MS: 50,
DECODER_TIMEOUT_MS: 10,
};

src/
├── components/            # React components (15+ visualization components)
│   ├── visualization/     # Arena, gauges, charts, spectral analysis
│   ├── CodeEditor.tsx     # Monaco editor with AI assistance
│   ├── DecoderSelector.tsx
│   ├── ElectrodePlacementScreen.tsx
│   ├── MetricsPanel.tsx
│   ├── ResearchDashboard.tsx
│   ├── SessionManager.tsx
│   ├── StreamSelector.tsx
│   └── TemporalInspector.tsx
├── hooks/                 # Custom React hooks
│   ├── useDecoder.ts      # Decoder execution
│   ├── useESPEEG.ts       # Cerelog ESP-EEG device hook
│   ├── useMessagePack.ts  # Binary protocol parsing
│   ├── usePerformance.ts  # FPS and latency monitoring
│   ├── useStream.ts       # Unified stream handling
│   └── useWebSocket.ts    # WebSocket connection
├── store/                 # Zustand state (slice pattern)
│   └── slices/            # connectionSlice, streamSlice, decoderSlice, etc.
├── streams/               # Stream adapters
│   ├── ESPEEGAdapter.ts   # Cerelog ESP-EEG
│   ├── PhantomLinkAdapter.ts
│   └── UniversalEEGAdapter.ts
├── decoders/              # BCI decoder implementations
│   ├── baselines.ts       # JS decoders
│   ├── dynamicModels.ts   # Dynamic input shape models
│   ├── tfjsDecoders.ts    # TFJS model definitions
│   └── tfjsModels.ts      # Model architectures
├── devices/               # Device profiles and configurations
├── types/                 # TypeScript definitions
└── utils/                 # Constants, helpers, export utilities

scripts/
└── cerelog_ws_bridge.py   # WebSocket-TCP bridge for ESP-EEG

cypress/                   # E2E tests
// src/decoders/baselines.ts
export const myDecoder: Decoder = {
id: 'my-decoder',
name: 'My Custom Decoder',
type: 'javascript',
description: 'Custom decoder description',
code: `
// Available inputs:
// - input.kinematics: { x, y, vx, vy }
// - input.spikes: number[] (N channels)
// - input.intention: { target_id, target_x, target_y }
// - input.timestamp: number
const { x, y, vx, vy } = input.kinematics;
const predictedX = x + vx * 0.025;
const predictedY = y + vy * 0.025;
return { x: predictedX, y: predictedY, vx, vy };
`
};

// src/decoders/tfjsDecoders.ts
export const myTfjsDecoder: Decoder = {
id: 'tfjs-custom',
name: 'Custom TFJS Model',
type: 'tfjs',
tfjsModelType: 'mlp',
description: 'My custom neural decoder',
architecture: 'Dense(N → 64 → 2)',
};

// Implement the StreamSource interface
import type { StreamSource, StreamConfig, StreamSample } from './types/stream';
class MyCustomAdapter implements StreamSource {
readonly id = 'my-source';
readonly name = 'My Custom Source';
async connect(url: string, config?: StreamConfig): Promise<void> {
// Connect to your data source
}
disconnect(): void {
// Clean up connection
}
onSample(callback: (sample: StreamSample) => void): () => void {
// Subscribe to incoming samples, return unsubscribe function
}
}

# Unit tests (Vitest)
npm run test # Watch mode
npm run test:run # Single run
npm run test:coverage # With coverage report
# E2E tests (Cypress)
npm run cy:open # Interactive mode
npm run cy:run # Headless mode
npm run test:e2e # Full E2E with dev server

PhantomLoop auto-detects the best available backend:
| Backend | Performance | Availability |
|---|---|---|
| WebGPU | Fastest | Chrome/Edge (experimental) |
| WebGL | Good | All modern browsers |
| CPU | Fallback | Universal |
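A sketch of that detection order using the real tf.setBackend API (the WebGPU backend ships as a separate package; PhantomLoop's exact initialization may differ):

```ts
import * as tf from '@tensorflow/tfjs';
// WebGPU support requires: import '@tensorflow/tfjs-backend-webgpu';

async function initBackend(): Promise<string> {
  for (const backend of ['webgpu', 'webgl', 'cpu']) {
    // setBackend resolves to true when the backend initializes successfully.
    if (await tf.setBackend(backend)) {
      await tf.ready();
      return tf.getBackend();
    }
  }
  throw new Error('No TensorFlow.js backend available');
}

initBackend().then((b) => console.log(`[PhantomLoop] TensorFlow.js initialized with ${b} backend`));
```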
Check console on startup:
[PhantomLoop] TensorFlow.js initialized with WebGPU backend
- Verify PhantomLink is running at `wss://phantomlink.fly.dev`
- Create a session via `/api/sessions/create`
- Check that your network/firewall allows WebSocket connections
- Use production server for lower RTT
- Simplify decoder logic
- Close other browser tabs
- Check browser console for errors
- Wait for TensorFlow.js initialization (5-10s)
- Verify Web Worker is loading correctly
- Normal when total latency exceeds 50ms
- Usually occurs during TFJS model initialization or network congestion
- Cerelog ESP-EEG Integration - Full EEG device setup guide
- Code Editor Guide - Monaco editor usage and AI features
| Resource | Description |
|---|---|
| PhantomLink Backend | Neural data streaming server |
| Neural Latents Benchmark | MC_Maze dataset |
| DANDI Archive #000140 | Dataset download |
| TensorFlow.js | ML framework |
| Brainflow | Universal EEG interface |
MIT License - see LICENSE for details.
This project was developed with assistance from AI coding assistants:
- Claude Opus 4.5 & Sonnet 4.5 (Anthropic)
- Grok Code Fast 1 (xAI)
- Gemini 3.0 Pro (Google)
All code was tested and validated by the author.
Built for the BCI research community 🧠⚡
"Visualize. Decode. Validate. Iterate."