Conversation
pyperf/_hooks.py
Outdated
PYPERF_TACHYON_ALL_THREADS: Set to "1" to profile all threads
PYPERF_TACHYON_NATIVE: Set to "1" to include the native frames
PYPERF_TACHYON_ASYNC_AWARE: Set to "1" for async-aware profiling
PYPERF_TACHYON_EXTRA_OPTS: Extra arguments passed to
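Roughly, the hook would read these variables and turn them into profiler options. A minimal sketch of that mapping, assuming hypothetical option spellings (the helper name and the option strings are illustrative, not this PR's actual code):

```python
import os
import shlex

def _tachyon_args_from_env():
    """Hypothetical helper: map the documented PYPERF_TACHYON_*
    environment variables onto command-line style profiler options."""
    args = []
    if os.environ.get("PYPERF_TACHYON_ALL_THREADS") == "1":
        args.append("--all-threads")   # option name is an assumption
    if os.environ.get("PYPERF_TACHYON_NATIVE") == "1":
        args.append("--native")        # option name is an assumption
    if os.environ.get("PYPERF_TACHYON_ASYNC_AWARE") == "1":
        args.append("--async-aware")   # option name is an assumption
    # Extra free-form options, split the way a shell would split them.
    args.extend(shlex.split(os.environ.get("PYPERF_TACHYON_EXTRA_OPTS", "")))
    return args
```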
Why not just call it PYPERF_TACHYON_OPTS?
To be honest, I just copied the convention from perf_record:
We can perhaps add a signal handler to the tachyon CLI invocation so you can request stopping. I think we already react to SIGINT, so perhaps that works.
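A minimal sketch of the kind of handler this suggests, assuming the sampling loop can check a flag between samples (the loop body here is a stand-in, not profiling.sampling's actual code):

```python
import signal
import time

_stop_requested = False

def _on_sigint(signum, frame):
    # Record the request; the sampling loop checks the flag so it can
    # flush collected samples instead of dying mid-write.
    global _stop_requested
    _stop_requested = True

signal.signal(signal.SIGINT, _on_sigint)

while not _stop_requested:
    time.sleep(0.01)  # stand-in for taking one sample
print("stop requested: flushing samples and exiting cleanly")
```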
I don't know hooks. How am I supposed to use them? I tried the …

Same issue when running a benchmark script:
I didn't change the GitHub description example to use …

Attaching the flame graphs:

The obvious problem is that it includes …
vstinner left a comment
LGTM.
My example works with the PYPERF_TACHYON_OPTS env var.
Thanks for the PR @maurycy
The PR adds support for profiling benchmarks with a Tachyon hook.
It's enabled only during benchmark execution. It supports nearly all flags as environment variables, and there's a guard for Python versions below 3.15. It's disabled on Windows, since I don't have access to a Windows machine.
I'm not sure if this should be merged before October 2026, but it's a good way to verify Tachyon itself and offers a new way to profile the whole pyperformance suite.
Basic way to test:
pyperformance
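A guess at what such an invocation could look like, written as a Python driver; the `--hook tachyon` spelling follows pyperf's existing perf_record hook and the PYPERF_TACHYON_OPTS value is an assumption, neither is taken from this PR:

```python
# Hypothetical invocation: run one benchmark with the Tachyon hook enabled.
import os
import subprocess

env = dict(os.environ, PYPERF_TACHYON_OPTS="--all-threads")  # assumed flags
subprocess.run(
    ["pyperformance", "run", "--benchmarks", "nbody", "--hook", "tachyon"],
    env=env,
    check=True,
)
```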
I was able to run the whole pyperformance suite with it. Notes:
- WorkerTask etc. wouldn't be included,
- profiling.sampling does not seem to offer a way to stop, except killing the process (not an urgent priority),