
feat: YouTube/yt-dlp parallel streaming downloads #26

Open
siq0o wants to merge 2 commits into masterking32:python_testing from siq0o:python_testing

Conversation

@siq0o

@siq0o siq0o commented Apr 25, 2026

I have limited networking/Python knowledge so feel free to close this and implement your own version — but it's been working well in testing.

What this does:

Adds a YouTube fast path inside stream_parallel_download that activates when a googlevideo.com URL contains the clen= query parameter (total file size, always present in yt-dlp's direct video URLs). This allows:

  • Skipping the Content-Range probe (file size is known upfront from clen)
  • Using 4 MiB chunks instead of 512 KiB (better throughput per Apps Script call)
  • Capping parallelism at 4 concurrent chunks to avoid quota exhaustion
  • A robust 5-attempt/15s probe to avoid racing yt-dlp's socket timeout
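
The size detection and chunk planning described above can be sketched as follows. This is a minimal illustration, not the PR's actual code; the helper names (`size_from_clen`, `chunk_ranges`) are hypothetical:

```python
from urllib.parse import urlparse, parse_qs

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB chunks on the fast path
MAX_PARALLEL = 4              # cap on concurrent chunk fetches

def size_from_clen(url: str):
    """Read the total file size from the clen= query parameter, if present."""
    qs = parse_qs(urlparse(url).query)
    return int(qs["clen"][0]) if "clen" in qs else None

def chunk_ranges(total: int, chunk_size: int = CHUNK_SIZE):
    """Yield inclusive (start, end) byte ranges covering the whole file."""
    for start in range(0, total, chunk_size):
        yield start, min(start + chunk_size, total) - 1
```

Because `clen` gives the total size up front, the Content-Range probe request can be skipped entirely and the ranges fetched concurrently (bounded by `MAX_PARALLEL`).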

Required yt-dlp flags:
--downloader native --http-chunk-size 0 --no-continue --socket-timeout 60

--no-continue is needed because resume support conflicts with our buffered streaming model — yt-dlp's resume validation expects specific Content-Length semantics we can't cleanly satisfy.
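
For illustration, a caller might assemble those flags like this. The function name and output template are hypothetical; the flags themselves are standard yt-dlp options:

```python
def ytdlp_args(url: str, output_template: str) -> list[str]:
    """Build a yt-dlp command line with the flags the fast path requires (sketch)."""
    return [
        "yt-dlp",
        "--downloader", "native",   # use yt-dlp's built-in HTTP downloader
        "--http-chunk-size", "0",   # disable yt-dlp's own chunking
        "--no-continue",            # no resume: conflicts with buffered streaming
        "--socket-timeout", "60",   # outlast the 5-attempt/15s probe window
        "-o", output_template,
        url,
    ]
```

The resulting list could then be passed to `subprocess.run(..., check=True)`.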

What it doesn't affect:

The is_yt guard ("googlevideo.com" in url + clen= in query string) means browser video playback and all other download types go through the original unmodified code path. Browser range requests use bounded bytes=X-Y headers and don't carry clen, so they never trigger this path.
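
The guard can be sketched as a small predicate (hypothetical name, matching the two checks described above):

```python
from urllib.parse import urlparse, parse_qs

def is_yt_fast_path(url: str) -> bool:
    """True only for googlevideo.com URLs carrying clen= (yt-dlp direct video URLs)."""
    return "googlevideo.com" in url and "clen" in parse_qs(urlparse(url).query)
```

A browser range request to the same host without `clen` returns False, so it stays on the original code path.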

Developed with Claude Sonnet as a collaborator.

@BOplaid
Contributor

BOplaid commented Apr 25, 2026

May not be fully related but why don't we always use 4 MiB chunks when we encounter googlevideo.com? It's incredibly unlikely for a file coming from there to be less than that. Or maybe I have no idea what I'm talking about.

@abolix
Collaborator

abolix commented Apr 25, 2026

I'm not saying this is a bad idea, but the code quality is not really good; I can clearly see some code duplication, and the chunk_size and max_parallel values are hardcoded.

Maybe double-check it?

@siq0o
Author

siq0o commented Apr 25, 2026

Sorry for the low quality code 😬 feel free to take over, since this is all still vibe coded.
