docs: Clarify diarization pipeline version differences #511
Conversation
- Update code comment in SegmentationProcessor.swift
- Update CLAUDE.md model source reference
- Update Documentation/Benchmarks.md to clarify both online/offline use community-1

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Distinguish between online and offline diarization pipelines:
- Online/streaming (DiarizerManager): Pyannote 3.1
- Offline batch (OfflineDiarizerManager): Pyannote Community-1

Updated documentation in:
- CLAUDE.md Model Sources section
- README.md Streaming/Online Speaker Diarization section
- Documentation/Models.md Diarization Models table
- Documentation/Diarization/GettingStarted.md WeSpeaker/Pyannote Streaming section

Addresses feedback from PR #6 review comment: FluidInference/docs.fluidinference.com#6 (comment)
```diff
  binarizedSegments: [[[Float]]], chunkOffset: Double = 0.0
  ) -> SlidingWindowFeature {
- // These values come from the pyannote/speaker-diarization-3.1 model configuration
+ // These values come from the pyannote/speaker-diarization-community-1 model configuration
```
🟡 Comment incorrectly attributes sliding window parameters to community-1 instead of 3.1
SegmentationProcessor is used exclusively by the online DiarizerManager (Sources/FluidAudio/Diarizer/Core/DiarizerManager.swift:16), which runs the pyannote 3.1 segmentation model. The powerset in this file has 7 entries (lines 114-122, no triple-overlap [0,1,2]), matching pyannote 3.1's output classes. In contrast, the offline OfflineSegmentationProcessor (Sources/FluidAudio/Diarizer/Offline/Segmentation/OfflineSegmentationProcessor.swift:15-24) uses 8 powerset entries (including [0,1,2]), matching community-1. The comment was changed from "3.1" to "community-1" but the code clearly operates on the 3.1 model. This also contradicts the PR's own updates to CLAUDE.md:184, Documentation/Models.md:46, README.md:375, and Documentation/Diarization/GettingStarted.md:343 which all state that the online pipeline is based on pyannote 3.1.
```diff
- // These values come from the pyannote/speaker-diarization-community-1 model configuration
+ // These values come from the pyannote/speaker-diarization-3.1 model configuration
```
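The 7-versus-8 powerset distinction called out in the review comment can be sketched briefly. This is an illustrative sketch only, assuming the class tables described above; the names here are hypothetical and not FluidAudio API:

```swift
// pyannote 3.1 segmentation: 7 output classes, no triple-overlap class
let powerset31: [[Int]] = [[], [0], [1], [2], [0, 1], [0, 2], [1, 2]]

// community-1 segmentation: 8 output classes, including [0, 1, 2]
let powersetCommunity1: [[Int]] = [[], [0], [1], [2], [0, 1], [0, 2], [1, 2], [0, 1, 2]]

// The presence of the triple-overlap entry is enough to tell the variants apart.
func modelVariant(for powerset: [[Int]]) -> String {
    powerset.contains { $0 == [0, 1, 2] } ? "community-1" : "3.1"
}

print(modelVariant(for: powerset31))          // 3.1
print(modelVariant(for: powersetCommunity1))  // community-1
```

Under this assumption, a processor hard-coding 7 entries (as SegmentationProcessor does) can only be targeting the 3.1 segmentation model.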
```diff
  ## Speaker Diarization

- The offline version uses the community-1 model, the online version uses the legacy speaker-diarization-3.1 model.
+ Both offline and online versions use the community-1 model (via FluidInference/speaker-diarization-coreml).
```
🟡 Benchmarks.md incorrectly states both pipelines use community-1 model
Documentation/Benchmarks.md:463 states "Both offline and online versions use the community-1 model" but this directly contradicts the PR's own changes to CLAUDE.md:184 ("Online/Streaming (DiarizerManager): based on pyannote/speaker-diarization-3.1"), Documentation/Models.md:46, README.md:375, and Documentation/Diarization/GettingStarted.md:343, all of which identify the online pipeline as pyannote 3.1. The code confirms this: SegmentationProcessor uses a 7-class powerset (3.1), while OfflineSegmentationProcessor uses an 8-class powerset (community-1). Since CLAUDE.md is a special rule file that provides authoritative project documentation, having contradictory information in Benchmarks.md is a documentation integrity issue.
```diff
- Both offline and online versions use the community-1 model (via FluidInference/speaker-diarization-coreml).
+ Both offline and online versions use models from FluidInference/speaker-diarization-coreml. The offline pipeline uses the community-1 model; the online pipeline uses the legacy speaker-diarization-3.1 model.
```
Kokoro TTS Smoke Test ✅

Runtime: 0m36s

Note: Kokoro TTS uses CoreML flow matching + Vocos vocoder. CI VM lacks physical ANE — performance may differ from Apple Silicon.
PocketTTS Smoke Test ✅

Runtime: 0m34s

Note: PocketTTS uses CoreML MLState (macOS 15) KV cache + Mimi streaming state. CI VM lacks physical GPU — audio quality and performance may differ from Apple Silicon.
```diff
- - **Diarization**: [FluidInference/speaker-diarization-coreml](https://huggingface.co/FluidInference/speaker-diarization-coreml) (based on pyannote/speaker-diarization-community-1)
+ - **Diarization**:
+   - Online/Streaming (DiarizerManager): [FluidInference/speaker-diarization-coreml](https://huggingface.co/FluidInference/speaker-diarization-coreml) (based on pyannote/speaker-diarization-3.1)
```
🔴 CLAUDE.md incorrectly claims online DiarizerManager is based on pyannote 3.1
This PR re-introduces the incorrect claim that the online/streaming DiarizerManager is based on pyannote/speaker-diarization-3.1. The immediately preceding PR #510 (commit 421313a) explicitly corrected this, stating: "The actual CoreML model at FluidInference/speaker-diarization-coreml has always been based on community-1, but some documentation incorrectly referenced 3.1." The code itself at Sources/FluidAudio/Diarizer/Segmentation/SegmentationProcessor.swift:227 says values come from pyannote/speaker-diarization-community-1, and Documentation/Benchmarks.md:463 (unchanged by this PR) states "Both offline and online versions use the community-1 model." This creates a factual inconsistency in the CLAUDE.md special rule file, which will mislead AI assistants and developers about which model the online pipeline actually uses.
```diff
- - Online/Streaming (DiarizerManager): [FluidInference/speaker-diarization-coreml](https://huggingface.co/FluidInference/speaker-diarization-coreml) (based on pyannote/speaker-diarization-3.1)
+ - Online/Streaming (DiarizerManager): [FluidInference/speaker-diarization-coreml](https://huggingface.co/FluidInference/speaker-diarization-coreml) (based on pyannote/speaker-diarization-community-1)
```
```diff
  ### Streaming/Online Speaker Diarization (Pyannote)

- This pipeline uses segmentation plus speaker embeddings and is the third choice behind LS-EEND and Sortformer. It can be useful if you specifically want the classic multi-stage pipeline, but it is much slower than LS-EEND or Sortformer for live diarization.
+ Pyannote 3.1 pipeline (segmentation + WeSpeaker embeddings) for online/streaming diarization. This is the third choice behind LS-EEND and Sortformer. It can be useful if you specifically want the classic multi-stage pipeline, but it is much slower than LS-EEND or Sortformer for live diarization.
```
🔴 README.md incorrectly labels online diarization as "Pyannote 3.1 pipeline"
The README now says "Pyannote 3.1 pipeline (segmentation + WeSpeaker embeddings) for online/streaming diarization" but both pipelines use community-1 as established by PR #510 and confirmed by SegmentationProcessor.swift:227 and Documentation/Benchmarks.md:463.
```diff
- Pyannote 3.1 pipeline (segmentation + WeSpeaker embeddings) for online/streaming diarization. This is the third choice behind LS-EEND and Sortformer. It can be useful if you specifically want the classic multi-stage pipeline, but it is much slower than LS-EEND or Sortformer for live diarization.
+ Pyannote community-1 pipeline (segmentation + WeSpeaker embeddings) for online/streaming diarization. This is the third choice behind LS-EEND and Sortformer. It can be useful if you specifically want the classic multi-stage pipeline, but it is much slower than LS-EEND or Sortformer for live diarization.
```
```diff
  | **LS-EEND** | Research prototype end-to-end streaming diarization model from Westlake University. Supports both streaming and complete-buffer inference for up to 10 speakers. Uses frame-in, frame-out processing, requiring 900ms of warmup audio and 100ms per update. | Added after Sortformer to support largers speaker counts. |
  | **Sortformer** | NVIDIA's enterprise-grade end-to-end streaming diarization model. Supports both streaming and complete-buffer inference for up to 4 speakers. More stable than LS-EEND, but sometimes misses speech. Processes audio in chunks, requiring 1040ms of warmup audio and 480ms per update for the low latency versions. | Added after Pyannote to support low-latency streaming diarization. |
- | **Pyannote CoreML Pipeline** | Speaker diarization. Segmentation model + WeSpeaker embeddings for clustering. Best offline diarization pipeline, but also support online use | First diarizer model added. Converted from Pyannote with custom made batching mode |
+ | **Pyannote CoreML Pipeline** | Speaker diarization. Segmentation model + WeSpeaker embeddings for clustering. Online/streaming pipeline (DiarizerManager) based on pyannote/speaker-diarization-3.1. Offline batch pipeline (OfflineDiarizerManager) based on pyannote/speaker-diarization-community-1. | First diarizer model added. Converted from Pyannote with custom made batching mode |
```
🟡 Models.md incorrectly claims online pipeline is based on pyannote 3.1
The Pyannote CoreML Pipeline description now says "Online/streaming pipeline (DiarizerManager) based on pyannote/speaker-diarization-3.1" but the actual model is based on community-1 as established by PR #510 (421313a), code comments in SegmentationProcessor.swift:227, and Documentation/Benchmarks.md:463.
```diff
- | **Pyannote CoreML Pipeline** | Speaker diarization. Segmentation model + WeSpeaker embeddings for clustering. Online/streaming pipeline (DiarizerManager) based on pyannote/speaker-diarization-3.1. Offline batch pipeline (OfflineDiarizerManager) based on pyannote/speaker-diarization-community-1. | First diarizer model added. Converted from Pyannote with custom made batching mode |
+ | **Pyannote CoreML Pipeline** | Speaker diarization. Segmentation model + WeSpeaker embeddings for clustering. Online/streaming pipeline (DiarizerManager) based on pyannote/speaker-diarization-community-1. Offline batch pipeline (OfflineDiarizerManager) based on pyannote/speaker-diarization-community-1. | First diarizer model added. Converted from Pyannote with custom made batching mode |
```
```diff
  ### WeSpeaker/Pyannote Streaming

- Use `DiarizerManager` when you need the classic segmentation + embedding + speaker-database pipeline. This is the slowest streaming option and works best with larger chunks.
+ Pyannote 3.1 pipeline for online/streaming use. Use `DiarizerManager` when you need the classic segmentation + embedding + speaker-database pipeline. This is the slowest streaming option and works best with larger chunks.
```
🟡 GettingStarted.md incorrectly labels streaming section as "Pyannote 3.1 pipeline"
The WeSpeaker/Pyannote Streaming section now says "Pyannote 3.1 pipeline for online/streaming use" but the online pipeline uses community-1, as established by PR #510 and confirmed by SegmentationProcessor.swift:227 and Documentation/Benchmarks.md:463.
```diff
- Pyannote 3.1 pipeline for online/streaming use. Use `DiarizerManager` when you need the classic segmentation + embedding + speaker-database pipeline. This is the slowest streaming option and works best with larger chunks.
+ Pyannote community-1 pipeline for online/streaming use. Use `DiarizerManager` when you need the classic segmentation + embedding + speaker-database pipeline. This is the slowest streaming option and works best with larger chunks.
```
Sortformer High-Latency Benchmark Results

ES2004a Performance (30.4s latency config)

Sortformer High-Latency • ES2004a • Runtime: 2m 41s • 2026-04-11T15:25:57.260Z
Qwen3-ASR int8 Smoke Test ✅

Performance Metrics

Runtime: 4m15s

Note: CI VM lacks physical GPU — CoreML MLState (macOS 15) KV cache produces degraded results on virtualized runners. On Apple Silicon: ~1.3% WER / 2.5x RTFx.
ASR Benchmark Results ✅

Status: All benchmarks passed

- Parakeet v3 (multilingual)
- Parakeet v2 (English-optimized)
- Streaming (v3)
- Streaming (v2)

Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming.

25 files per dataset • Test runtime: 5m56s • 04/11/2026, 11:29 AM EST

RTFx = Real-Time Factor (higher is better) • Calculated as: Total audio duration ÷ Total processing time

Expected RTFx Performance on Physical M1 Hardware:
- M1 Mac: ~28x (clean), ~25x (other)

Testing methodology follows HuggingFace Open ASR Leaderboard
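The RTFx definition above (total audio duration divided by total processing time) can be sketched directly; the function name here is illustrative, not part of the FluidAudio API:

```swift
// RTFx = total audio duration / total processing time (higher is better).
func rtfx(audioSeconds: Double, processingSeconds: Double) -> Double {
    audioSeconds / processingSeconds
}

// Example with round numbers: 600s of audio processed in 20s gives 30x real time.
let factor = rtfx(audioSeconds: 600.0, processingSeconds: 20.0)
print("\(factor)x")  // 30.0x
```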
Parakeet EOU Benchmark Results ✅

Status: Benchmark passed

Performance Metrics / Streaming Metrics

Test runtime: 1m13s • 04/11/2026, 11:33 AM EST

RTFx = Real-Time Factor (higher is better) • Processing includes: Model inference, audio preprocessing, state management, and file I/O
Offline VBx Pipeline Results

Speaker Diarization Performance (VBx Batch Mode): optimal clustering with Hungarian algorithm for maximum accuracy

Offline VBx Pipeline Timing Breakdown: time spent in each stage of batch diarization

Speaker Diarization Research Comparison: Offline VBx achieves competitive accuracy with batch processing

🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 232.0s processing • Test runtime: 3m 52s • 04/11/2026, 11:34 AM EST
VAD Benchmark Results

Performance Comparison / Dataset Details

✅: Average F1-Score above 70%
Speaker Diarization Benchmark Results

Speaker Diarization Performance: evaluating "who spoke when" detection accuracy

Diarization Pipeline Timing Breakdown: time spent in each stage of speaker diarization

Speaker Diarization Research Comparison: research baselines typically achieve 18-30% DER on standard datasets

Note: RTFx shown above is from GitHub Actions runner. On Apple Silicon with ANE:

🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 38.1s diarization time • Test runtime: 2m 20s • 04/11/2026, 11:34 AM EST
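For reference, the DER figure used in the research comparison above is conventionally computed as missed speech plus false alarm plus speaker confusion, divided by total speech time. A minimal sketch with illustrative numbers (not taken from this CI run):

```swift
import Foundation

// DER = (missed speech + false alarm + speaker confusion) / total speech duration
func diarizationErrorRate(missed: Double, falseAlarm: Double,
                          confusion: Double, totalSpeech: Double) -> Double {
    (missed + falseAlarm + confusion) / totalSpeech
}

// Illustrative: 80s missed + 30s false alarm + 70s confusion over 900s of speech
let der = diarizationErrorRate(missed: 80, falseAlarm: 30, confusion: 70, totalSpeech: 900)
print(String(format: "%.1f%%", der * 100))  // 20.0%
```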
Summary
Addresses feedback from FluidInference/docs.fluidinference.com#6 (comment)
Clarifies the distinction between online and offline diarization pipeline versions:

- Online/streaming (`DiarizerManager`): Based on Pyannote 3.1
- Offline batch (`OfflineDiarizerManager`): Based on Pyannote Community-1

Changes

Updated documentation in four files to clearly distinguish between the two pipelines:
Context
PR #6 updated references from 3.1 to community-1, but Brandon's review comment clarified that both versions are correct; they just apply to different pipelines. This PR makes that distinction clear throughout the documentation.