
docs: Clarify diarization pipeline version differences#511

Merged
Alex-Wengg merged 3 commits into main from docs/clarify-diarization-pipeline-versions
Apr 11, 2026
Conversation

@Alex-Wengg (Member) commented Apr 11, 2026

Summary

Addresses feedback from FluidInference/docs.fluidinference.com#6 (comment)

Clarifies the distinction between online and offline diarization pipeline versions:

  • Online/streaming (DiarizerManager): Based on Pyannote 3.1
  • Offline batch (OfflineDiarizerManager): Based on Pyannote Community-1

Changes

Updated documentation in four files to clearly distinguish between the two pipelines:

  1. CLAUDE.md - Model Sources section now lists both versions
  2. README.md - Added version info to Streaming/Online Speaker Diarization section
  3. Documentation/Models.md - Updated Pyannote CoreML Pipeline description in table
  4. Documentation/Diarization/GettingStarted.md - Added version to WeSpeaker/Pyannote Streaming section

Context

PR #6 updated references from 3.1 to community-1, but Brandon's review comment clarified that both versions are correct; they just apply to different pipelines. This PR makes that distinction clear throughout the documentation.



Alex-Wengg and others added 2 commits April 10, 2026 22:41
- Update code comment in SegmentationProcessor.swift
- Update CLAUDE.md model source reference
- Update Documentation/Benchmarks.md to clarify both online/offline use community-1

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Distinguish between online and offline diarization pipelines:
- Online/streaming (DiarizerManager): Pyannote 3.1
- Offline batch (OfflineDiarizerManager): Pyannote Community-1

Updated documentation in:
- CLAUDE.md Model Sources section
- README.md Streaming/Online Speaker Diarization section
- Documentation/Models.md Diarization Models table
- Documentation/Diarization/GettingStarted.md WeSpeaker/Pyannote Streaming section

Addresses feedback from PR #6 review comment:
FluidInference/docs.fluidinference.com#6 (comment)
@devin-ai-integration bot (Contributor) left a comment:
Devin Review found 2 potential issues.

View 2 additional findings in Devin Review.


```diff
     binarizedSegments: [[[Float]]], chunkOffset: Double = 0.0
 ) -> SlidingWindowFeature {
-    // These values come from the pyannote/speaker-diarization-3.1 model configuration
+    // These values come from the pyannote/speaker-diarization-community-1 model configuration
```
🟡 Comment incorrectly attributes sliding window parameters to community-1 instead of 3.1

SegmentationProcessor is used exclusively by the online DiarizerManager (Sources/FluidAudio/Diarizer/Core/DiarizerManager.swift:16), which runs the pyannote 3.1 segmentation model. The powerset in this file has 7 entries (lines 114-122, no triple-overlap [0,1,2]), matching pyannote 3.1's output classes. In contrast, the offline OfflineSegmentationProcessor (Sources/FluidAudio/Diarizer/Offline/Segmentation/OfflineSegmentationProcessor.swift:15-24) uses 8 powerset entries (including [0,1,2]), matching community-1. The comment was changed from "3.1" to "community-1" but the code clearly operates on the 3.1 model. This also contradicts the PR's own updates to CLAUDE.md:184, Documentation/Models.md:46, README.md:375, and Documentation/Diarization/GettingStarted.md:343 which all state that the online pipeline is based on pyannote 3.1.

Suggested change:

```diff
-// These values come from the pyannote/speaker-diarization-community-1 model configuration
+// These values come from the pyannote/speaker-diarization-3.1 model configuration
```
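The powerset distinction cited in this finding is the key fingerprint, and can be sketched as follows (an illustrative Python sketch; the class lists follow the reviewer's description of 7 vs. 8 output classes, not the actual FluidAudio Swift source):

```python
# Powerset output classes for up to 3 local speakers, as described above.
# The online (pyannote 3.1) segmentation head emits 7 classes; the offline
# (community-1) head adds the triple-overlap class [0, 1, 2], for 8 total.
POWERSET_3_1 = [
    [],                       # silence
    [0], [1], [2],            # single speakers
    [0, 1], [0, 2], [1, 2],   # pairwise overlap
]
POWERSET_COMMUNITY_1 = POWERSET_3_1 + [[0, 1, 2]]  # adds triple overlap


def active_speakers(frame_logits, powerset):
    """Decode one frame: argmax over class scores, then map to speaker set."""
    assert len(frame_logits) == len(powerset)
    best = max(range(len(frame_logits)), key=frame_logits.__getitem__)
    return powerset[best]
```

Because the two heads have different class counts, a decoder wired to the wrong table would mis-handle overlap frames, which is why the 7-vs-8 entry count reliably identifies which model version a processor actually targets.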

```diff
 ## Speaker Diarization

-The offline version uses the community-1 model, the online version uses the legacy speaker-diarization-3.1 model.
+Both offline and online versions use the community-1 model (via FluidInference/speaker-diarization-coreml).
```
🟡 Benchmarks.md incorrectly states both pipelines use community-1 model

Documentation/Benchmarks.md:463 states "Both offline and online versions use the community-1 model" but this directly contradicts the PR's own changes to CLAUDE.md:184 ("Online/Streaming (DiarizerManager): based on pyannote/speaker-diarization-3.1"), Documentation/Models.md:46, README.md:375, and Documentation/Diarization/GettingStarted.md:343, all of which identify the online pipeline as pyannote 3.1. The code confirms this: SegmentationProcessor uses a 7-class powerset (3.1), while OfflineSegmentationProcessor uses an 8-class powerset (community-1). Since CLAUDE.md is a special rule file that provides authoritative project documentation, having contradictory information in Benchmarks.md is a documentation integrity issue.

Suggested change:

```diff
-Both offline and online versions use the community-1 model (via FluidInference/speaker-diarization-coreml).
+Both offline and online versions use models from FluidInference/speaker-diarization-coreml. The offline pipeline uses the community-1 model; the online pipeline uses the legacy speaker-diarization-3.1 model.
```

@github-actions

Kokoro TTS Smoke Test ✅

| Check | Result |
| --- | --- |
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Synthesis pipeline | ✅ |
| Output WAV | ✅ (634.8 KB) |

Runtime: 0m36s

Note: Kokoro TTS uses CoreML flow matching + Vocos vocoder. CI VM lacks physical ANE — performance may differ from Apple Silicon.

@github-actions
Copy link
Copy Markdown

PocketTTS Smoke Test ✅

| Check | Result |
| --- | --- |
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Synthesis pipeline | ✅ |
| Output WAV | ✅ (180.0 KB) |

Runtime: 0m34s

Note: PocketTTS uses CoreML MLState (macOS 15) KV cache + Mimi streaming state. CI VM lacks physical GPU — audio quality and performance may differ from Apple Silicon.

@Alex-Wengg Alex-Wengg merged commit 39df91d into main Apr 11, 2026
12 checks passed
@devin-ai-integration bot (Contributor) left a comment:
Devin Review found 4 new potential issues.

View 4 additional findings in Devin Review.



```diff
-- **Diarization**: [FluidInference/speaker-diarization-coreml](https://huggingface.co/FluidInference/speaker-diarization-coreml) (based on pyannote/speaker-diarization-community-1)
+- **Diarization**:
+  - Online/Streaming (DiarizerManager): [FluidInference/speaker-diarization-coreml](https://huggingface.co/FluidInference/speaker-diarization-coreml) (based on pyannote/speaker-diarization-3.1)
```
🔴 CLAUDE.md incorrectly claims online DiarizerManager is based on pyannote 3.1

This PR re-introduces the incorrect claim that the online/streaming DiarizerManager is based on pyannote/speaker-diarization-3.1. The immediately preceding PR #510 (commit 421313a) explicitly corrected this, stating: "The actual CoreML model at FluidInference/speaker-diarization-coreml has always been based on community-1, but some documentation incorrectly referenced 3.1." The code itself at Sources/FluidAudio/Diarizer/Segmentation/SegmentationProcessor.swift:227 says values come from pyannote/speaker-diarization-community-1, and Documentation/Benchmarks.md:463 (unchanged by this PR) states "Both offline and online versions use the community-1 model." This creates a factual inconsistency in the CLAUDE.md special rule file, which will mislead AI assistants and developers about which model the online pipeline actually uses.

Suggested change:

```diff
-- Online/Streaming (DiarizerManager): [FluidInference/speaker-diarization-coreml](https://huggingface.co/FluidInference/speaker-diarization-coreml) (based on pyannote/speaker-diarization-3.1)
+- Online/Streaming (DiarizerManager): [FluidInference/speaker-diarization-coreml](https://huggingface.co/FluidInference/speaker-diarization-coreml) (based on pyannote/speaker-diarization-community-1)
```

```diff
 ### Streaming/Online Speaker Diarization (Pyannote)

-This pipeline uses segmentation plus speaker embeddings and is the third choice behind LS-EEND and Sortformer. It can be useful if you specifically want the classic multi-stage pipeline, but it is much slower than LS-EEND or Sortformer for live diarization.
+Pyannote 3.1 pipeline (segmentation + WeSpeaker embeddings) for online/streaming diarization. This is the third choice behind LS-EEND and Sortformer. It can be useful if you specifically want the classic multi-stage pipeline, but it is much slower than LS-EEND or Sortformer for live diarization.
```
🔴 README.md incorrectly labels online diarization as "Pyannote 3.1 pipeline"

The README now says "Pyannote 3.1 pipeline (segmentation + WeSpeaker embeddings) for online/streaming diarization" but both pipelines use community-1 as established by PR #510 and confirmed by SegmentationProcessor.swift:227 and Documentation/Benchmarks.md:463.

Suggested change:

```diff
-Pyannote 3.1 pipeline (segmentation + WeSpeaker embeddings) for online/streaming diarization. This is the third choice behind LS-EEND and Sortformer. It can be useful if you specifically want the classic multi-stage pipeline, but it is much slower than LS-EEND or Sortformer for live diarization.
+Pyannote community-1 pipeline (segmentation + WeSpeaker embeddings) for online/streaming diarization. This is the third choice behind LS-EEND and Sortformer. It can be useful if you specifically want the classic multi-stage pipeline, but it is much slower than LS-EEND or Sortformer for live diarization.
```

```diff
 | **LS-EEND** | Research prototype end-to-end streaming diarization model from Westlake University. Supports both streaming and complete-buffer inference for up to 10 speakers. Uses frame-in, frame-out processing, requiring 900ms of warmup audio and 100ms per update. | Added after Sortformer to support largers speaker counts. |
 | **Sortformer** | NVIDIA's enterprise-grade end-to-end streaming diarization model. Supports both streaming and complete-buffer inference for up to 4 speakers. More stable than LS-EEND, but sometimes misses speech. Processes audio in chunks, requiring 1040ms of warmup audio and 480ms per update for the low latency versions. | Added after Pyannote to support low-latency streaming diarization. |
-| **Pyannote CoreML Pipeline** | Speaker diarization. Segmentation model + WeSpeaker embeddings for clustering. Best offline diarization pipeline, but also support online use | First diarizer model added. Converted from Pyannote with custom made batching mode |
+| **Pyannote CoreML Pipeline** | Speaker diarization. Segmentation model + WeSpeaker embeddings for clustering. Online/streaming pipeline (DiarizerManager) based on pyannote/speaker-diarization-3.1. Offline batch pipeline (OfflineDiarizerManager) based on pyannote/speaker-diarization-community-1. | First diarizer model added. Converted from Pyannote with custom made batching mode |
```
🟡 Models.md incorrectly claims online pipeline is based on pyannote 3.1

The Pyannote CoreML Pipeline description now says "Online/streaming pipeline (DiarizerManager) based on pyannote/speaker-diarization-3.1" but the actual model is based on community-1 as established by PR #510 (421313a), code comments in SegmentationProcessor.swift:227, and Documentation/Benchmarks.md:463.

Suggested change:

```diff
-| **Pyannote CoreML Pipeline** | Speaker diarization. Segmentation model + WeSpeaker embeddings for clustering. Online/streaming pipeline (DiarizerManager) based on pyannote/speaker-diarization-3.1. Offline batch pipeline (OfflineDiarizerManager) based on pyannote/speaker-diarization-community-1. | First diarizer model added. Converted from Pyannote with custom made batching mode |
+| **Pyannote CoreML Pipeline** | Speaker diarization. Segmentation model + WeSpeaker embeddings for clustering. Online/streaming pipeline (DiarizerManager) based on pyannote/speaker-diarization-community-1. Offline batch pipeline (OfflineDiarizerManager) based on pyannote/speaker-diarization-community-1. | First diarizer model added. Converted from Pyannote with custom made batching mode |
```

```diff
 ### WeSpeaker/Pyannote Streaming

-Use `DiarizerManager` when you need the classic segmentation + embedding + speaker-database pipeline. This is the slowest streaming option and works best with larger chunks.
+Pyannote 3.1 pipeline for online/streaming use. Use `DiarizerManager` when you need the classic segmentation + embedding + speaker-database pipeline. This is the slowest streaming option and works best with larger chunks.
```
🟡 GettingStarted.md incorrectly labels streaming section as "Pyannote 3.1 pipeline"

The WeSpeaker/Pyannote Streaming section now says "Pyannote 3.1 pipeline for online/streaming use" but the online pipeline uses community-1, as established by PR #510 and confirmed by SegmentationProcessor.swift:227 and Documentation/Benchmarks.md:463.

Suggested change:

```diff
-Pyannote 3.1 pipeline for online/streaming use. Use `DiarizerManager` when you need the classic segmentation + embedding + speaker-database pipeline. This is the slowest streaming option and works best with larger chunks.
+Pyannote community-1 pipeline for online/streaming use. Use `DiarizerManager` when you need the classic segmentation + embedding + speaker-database pipeline. This is the slowest streaming option and works best with larger chunks.
```

@Alex-Wengg Alex-Wengg deleted the docs/clarify-diarization-pipeline-versions branch April 11, 2026 15:24
@github-actions

Sortformer High-Latency Benchmark Results

ES2004a Performance (30.4s latency config)

| Metric | Value | Target | Status |
| --- | --- | --- | --- |
| DER | 33.4% | <35% | ✅ |
| Miss Rate | 24.4% | - | - |
| False Alarm | 0.2% | - | - |
| Speaker Error | 8.8% | - | - |
| RTFx | 13.3x | >1.0x | ✅ |
| Speakers | 4/4 | - | - |
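For reference, DER decomposes into the miss, false-alarm, and speaker-error rates, so the components above should (and do) sum to the reported DER. A minimal Python sketch (hypothetical helper, not part of the benchmark harness):

```python
def der(miss_rate, false_alarm, speaker_error):
    """Diarization Error Rate (%) as the sum of its three components."""
    return miss_rate + false_alarm + speaker_error

# ES2004a run above: 24.4 + 0.2 + 8.8 = 33.4, under the 35% target
print(round(der(24.4, 0.2, 8.8), 1))  # 33.4
```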

Sortformer High-Latency • ES2004a • Runtime: 2m 41s • 2026-04-11T15:25:57.260Z

@github-actions

Qwen3-ASR int8 Smoke Test ✅

| Check | Result |
| --- | --- |
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Transcription pipeline | ✅ |
| Decoder size | 571 MB (vs 1.1 GB f32) |

Performance Metrics

| Metric | CI Value | Expected on Apple Silicon |
| --- | --- | --- |
| Median RTFx | 0.06x | ~2.5x |
| Overall RTFx | 0.06x | ~2.5x |

Runtime: 4m15s

Note: CI VM lacks physical GPU — CoreML MLState (macOS 15) KV cache produces degraded results on virtualized runners. On Apple Silicon: ~1.3% WER / 2.5x RTFx.

@github-actions

ASR Benchmark Results ✅

Status: All benchmarks passed

Parakeet v3 (multilingual)

| Dataset | WER Avg | WER Med | RTFx | Status |
| --- | --- | --- | --- | --- |
| test-clean | 0.57% | 0.00% | 5.36x | ✅ |
| test-other | 1.19% | 0.00% | 3.51x | ✅ |

Parakeet v2 (English-optimized)

| Dataset | WER Avg | WER Med | RTFx | Status |
| --- | --- | --- | --- | --- |
| test-clean | 0.80% | 0.00% | 5.26x | ✅ |
| test-other | 1.22% | 0.00% | 3.51x | ✅ |

Streaming (v3)

| Metric | Value | Description |
| --- | --- | --- |
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.61x | Streaming real-time factor |
| Avg Chunk Time | 1.471s | Average time to process each chunk |
| Max Chunk Time | 1.615s | Maximum chunk processing time |
| First Token | 1.736s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming (v2)

| Metric | Value | Description |
| --- | --- | --- |
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.62x | Streaming real-time factor |
| Avg Chunk Time | 1.471s | Average time to process each chunk |
| Max Chunk Time | 1.638s | Maximum chunk processing time |
| First Token | 1.504s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming

25 files per dataset • Test runtime: 5m56s • 04/11/2026, 11:29 AM EST

RTFx = Real-Time Factor (higher is better) • Calculated as: Total audio duration ÷ Total processing time
Processing time includes: Model inference on Apple Neural Engine, audio preprocessing, state resets between files, token-to-text conversion, and file I/O
Example: RTFx of 2.0x means 10 seconds of audio processed in 5 seconds (2x faster than real-time)
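The RTFx arithmetic above can be sketched directly (hypothetical helper, purely illustrative):

```python
def rtfx(audio_seconds, processing_seconds):
    """Real-Time Factor: total audio duration divided by processing time."""
    return audio_seconds / processing_seconds

# The worked example above: 10 s of audio in 5 s of processing -> 2.0x
print(rtfx(10.0, 5.0))  # 2.0
```

Values below 1.0x (as on virtualized CI runners) mean processing is slower than real time.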

Expected RTFx Performance on Physical M1 Hardware:

• M1 Mac: ~28x (clean), ~25x (other)
• CI shows ~0.5-3x due to virtualization limitations

Testing methodology follows HuggingFace Open ASR Leaderboard

@github-actions

Parakeet EOU Benchmark Results ✅

Status: Benchmark passed
Chunk Size: 320ms
Files Tested: 100/100

Performance Metrics

| Metric | Value | Description |
| --- | --- | --- |
| WER (Avg) | 7.03% | Average Word Error Rate |
| WER (Med) | 4.17% | Median Word Error Rate |
| RTFx | 10.11x | Real-time factor (higher = faster) |
| Total Audio | 470.6s | Total audio duration processed |
| Total Time | 51.5s | Total processing time |

Streaming Metrics

| Metric | Value | Description |
| --- | --- | --- |
| Avg Chunk Time | 0.052s | Average chunk processing time |
| Max Chunk Time | 0.103s | Maximum chunk processing time |
| EOU Detections | 0 | Total End-of-Utterance detections |

Test runtime: 1m13s • 04/11/2026, 11:33 AM EST

RTFx = Real-Time Factor (higher is better) • Processing includes: Model inference, audio preprocessing, state management, and file I/O

@github-actions

Offline VBx Pipeline Results

Speaker Diarization Performance (VBx Batch Mode)

Optimal clustering with Hungarian algorithm for maximum accuracy

| Metric | Value | Target | Status | Description |
| --- | --- | --- | --- | --- |
| DER | 14.5% | <20% | ✅ | Diarization Error Rate (lower is better) |
| RTFx | 5.06x | >1.0x | ✅ | Real-Time Factor (higher is faster) |

Offline VBx Pipeline Timing Breakdown

Time spent in each stage of batch diarization

| Stage | Time (s) | % | Description |
| --- | --- | --- | --- |
| Model Download | 11.052 | 5.3 | Fetching diarization models |
| Model Compile | 4.737 | 2.3 | CoreML compilation |
| Audio Load | 0.092 | 0.0 | Loading audio file |
| Segmentation | 24.668 | 11.9 | VAD + speech detection |
| Embedding | 206.569 | 99.6 | Speaker embedding extraction |
| Clustering (VBx) | 0.744 | 0.4 | Hungarian algorithm + VBx clustering |
| Total | 207.478 | 100 | Full VBx pipeline |

Speaker Diarization Research Comparison

Offline VBx achieves competitive accuracy with batch processing

| Method | DER | Mode | Description |
| --- | --- | --- | --- |
| FluidAudio (Offline) | 14.5% | VBx Batch | On-device CoreML with optimal clustering |
| FluidAudio (Streaming) | 17.7% | Chunk-based | First-occurrence speaker mapping |
| Research baseline | 18-30% | Various | Standard dataset performance |

Pipeline Details:

  • Mode: Offline VBx with Hungarian algorithm for optimal speaker-to-cluster assignment
  • Segmentation: VAD-based voice activity detection
  • Embeddings: WeSpeaker-compatible speaker embeddings
  • Clustering: PowerSet with VBx refinement
  • Accuracy: Higher than streaming due to optimal post-hoc mapping

🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 232.0s processing • Test runtime: 3m 52s • 04/11/2026, 11:34 AM EST

@github-actions

VAD Benchmark Results

Performance Comparison

| Dataset | Accuracy | Precision | Recall | F1-Score | RTFx | Files |
| --- | --- | --- | --- | --- | --- | --- |
| MUSAN | 92.0% | 86.2% | 100.0% | 92.6% | 739.0x faster | 50 |
| VOiCES | 92.0% | 86.2% | 100.0% | 92.6% | 746.9x faster | 50 |

Dataset Details

  • MUSAN: Music, Speech, and Noise dataset - standard VAD evaluation
  • VOiCES: Voices Obscured in Complex Environmental Settings - tests robustness in real-world conditions

✅: Average F1-Score above 70%

@github-actions

Speaker Diarization Benchmark Results

Speaker Diarization Performance

Evaluating "who spoke when" detection accuracy

| Metric | Value | Target | Status | Description |
| --- | --- | --- | --- | --- |
| DER | 15.1% | <30% | ✅ | Diarization Error Rate (lower is better) |
| JER | 24.9% | <25% | ✅ | Jaccard Error Rate |
| RTFx | 27.50x | >1.0x | ✅ | Real-Time Factor (higher is faster) |

Diarization Pipeline Timing Breakdown

Time spent in each stage of speaker diarization

| Stage | Time (s) | % | Description |
| --- | --- | --- | --- |
| Model Download | 8.984 | 23.5 | Fetching diarization models |
| Model Compile | 3.850 | 10.1 | CoreML compilation |
| Audio Load | 0.046 | 0.1 | Loading audio file |
| Segmentation | 11.443 | 30.0 | Detecting speech regions |
| Embedding | 19.071 | 50.0 | Extracting speaker voices |
| Clustering | 7.628 | 20.0 | Grouping same speakers |
| Total | 38.155 | 100 | Full pipeline |

Speaker Diarization Research Comparison

Research baselines typically achieve 18-30% DER on standard datasets

| Method | DER | Notes |
| --- | --- | --- |
| FluidAudio | 15.1% | On-device CoreML |
| Research baseline | 18-30% | Standard dataset performance |

Note: RTFx shown above is from GitHub Actions runner. On Apple Silicon with ANE:

  • M2 MacBook Air (2022): runs at roughly 150x real-time (RTFx 150)
  • Performance scales with Apple Neural Engine capabilities

🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 38.1s diarization time • Test runtime: 2m 20s • 04/11/2026, 11:34 AM EST
