Add new feature for Kimi K2.5 MTP support #1115
Draft: haic0 wants to merge 2 commits into main from haichen/kimi-k2.5-int4-mtp
Changes from all commits (2 commits)
New file, +86 lines:

#!/usr/bin/env bash

source "$(dirname "$0")/../benchmark_lib.sh"

check_env_vars \
  MODEL \
  TP \
  CONC \
  ISL \
  OSL \
  MAX_MODEL_LEN \
  RANDOM_RANGE_RATIO \
  RESULT_FILENAME

if [[ -n "$SLURM_JOB_ID" ]]; then
  echo "JOB $SLURM_JOB_ID running on $SLURMD_NODENAME"
fi

hf download "$MODEL"

# Set HIP_VISIBLE_DEVICES to match ROCR_VISIBLE_DEVICES for Ray compatibility in vLLM 0.14+
if [ -n "$ROCR_VISIBLE_DEVICES" ]; then
  export HIP_VISIBLE_DEVICES="$ROCR_VISIBLE_DEVICES"
fi

SERVER_LOG=/workspace/server.log
PORT=${PORT:-8888}

if [ "${EVAL_ONLY}" = "true" ]; then
  setup_eval_context
  MAX_MODEL_LEN="$EVAL_MAX_MODEL_LEN"
fi

# Start GPU monitoring (power, temperature, clocks every second)
start_gpu_monitor

echo "Ensuring host benchmark dependencies (aiohttp/transformers/numpy/tqdm/huggingface-hub/tiktoken)..."
python3 -m pip install --no-cache-dir aiohttp transformers numpy tqdm huggingface-hub tiktoken

set -x
export VLLM_ROCM_USE_AITER=1
export VLLM_ROCM_QUICK_REDUCE_QUANTIZATION=INT4
export VLLM_ROCM_USE_AITER_RMSNORM=0
export VLLM_SPEC_CONFIG="${VLLM_SPEC_CONFIG:-{\"model\": \"nvidia/Kimi-K2.5-Thinking-Eagle3\", \"method\": \"eagle3\", \"num_speculative_tokens\": 3}}"

vllm serve $MODEL --port $PORT \
  --tensor-parallel-size=$TP \
  --gpu-memory-utilization 0.8 \
  --max-model-len $MAX_MODEL_LEN \
  --block-size=64 \
  --trust-remote-code \
  --no-enable-prefix-caching \
  --max-num-seqs 256 \
  --speculative-config "$VLLM_SPEC_CONFIG" \
  --mm-encoder-tp-mode data > $SERVER_LOG 2>&1 &

SERVER_PID=$!

# Wait for server to be ready
wait_for_server_ready --port "$PORT" --server-log "$SERVER_LOG" --server-pid "$SERVER_PID"

run_benchmark_serving \
  --model "$MODEL" \
  --port "$PORT" \
  --backend vllm \
  --input-len "$ISL" \
  --output-len "$OSL" \
  --random-range-ratio "$RANDOM_RANGE_RATIO" \
  --num-prompts "$((CONC * 10))" \
  --max-concurrency "$CONC" \
  --result-filename "$RESULT_FILENAME" \
  --result-dir /workspace/ \
  --trust-remote-code

# After throughput, run evaluation only if RUN_EVAL is true
if [ "${RUN_EVAL}" = "true" ]; then
  run_eval --framework lm-eval --port "$PORT"
  append_lm_eval_summary
fi

# Stop GPU monitoring
stop_gpu_monitor
set +x
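For orientation, a minimal invocation of the new script could look like the sketch below. The model ID, tensor-parallel size, concurrency, sequence lengths, and result filename are illustrative placeholders rather than values defined by this PR; the script path is the one named in the review comment further down.

```bash
# Hypothetical invocation sketch: all values below are illustrative assumptions.
export MODEL="moonshotai/Kimi-K2.5"        # placeholder model ID, not from this PR
export TP=8                                 # tensor-parallel size
export CONC=32                              # benchmark concurrency
export ISL=1024                             # input sequence length
export OSL=1024                             # output sequence length
export MAX_MODEL_LEN=8192
export RANDOM_RANGE_RATIO=0.8
export RESULT_FILENAME="kimik2.5_mtp_result.json"
bash benchmarks/single_node/kimik2.5_int4_mi300x_mtp.sh   # script path per the review comment below
```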
Per the review comment below, the PR adds two benchmark scripts, kimik2.5_int4_mi300x_mtp.sh and kimik2.5_int4_mi325x_mtp.sh; the second script's contents in this diff are identical to the listing above.
🔴 The MI300X and MI325X runners hardcode the benchmark script path without any spec suffix, so the newly added `kimik2.5_int4_mi300x_mtp.sh` and `kimik2.5_int4_mi325x_mtp.sh` scripts are dead code: CI jobs for `kimik2.5-int4-mi300x-vllm-mtp` and `kimik2.5-int4-mi325x-vllm-mtp` will silently execute the old non-MTP scripts instead. Both runners need the same `SPEC_SUFFIX` logic already present in `launch_mi355x-amds.sh` (lines 182 and 225).

Extended reasoning
What the bug is and how it manifests
This PR adds two new YAML benchmark entries, `kimik2.5-int4-mi300x-vllm-mtp` and `kimik2.5-int4-mi325x-vllm-mtp`, each with `spec-decoding: mtp` in its search space, plus corresponding benchmark scripts (`kimik2.5_int4_mi300x_mtp.sh`, `kimik2.5_int4_mi325x_mtp.sh`). When these CI jobs run, they dispatch through the MI300X and MI325X runner scripts, which are responsible for selecting the correct benchmark script. However, neither runner has logic to append the `_mtp` suffix to the script name, so the wrong script gets executed.

The specific code path that triggers it
`runners/launch_mi300x-amds.sh` line 38 hardcodes the benchmark script path; similarly, `runners/launch_mi325x-amds.sh` line 38 hardcodes the equivalent path for its GPU. Neither runner inspects `SPEC_DECODING` or computes a `SPEC_SUFFIX`.
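Based on the step-by-step proof at the end of this review, the hardcoded lines presumably have roughly the following shape; this is a sketch under that assumption, not the repository's actual code.

```bash
# Sketch only: inferred from the proof below, not the repository's actual code.
# launch_mi300x-amds.sh, around line 38 (assumed form):
BENCH_SCRIPT="benchmarks/single_node/${EXP_NAME%%_*}_int4_mi300x.sh"
# launch_mi325x-amds.sh, around line 38 (assumed form):
BENCH_SCRIPT="benchmarks/single_node/${EXP_NAME%%_*}_int4_mi325x.sh"
```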
Why existing code doesn't prevent it
The MI355X runner (`launch_mi355x-amds.sh`, lines 182–225) was already updated with the correct pattern, which appends a spec suffix to the script path (see the sketch under "How to fix it" below). The MI300X and MI325X runners were simply not updated to match, leaving them permanently dispatching to the base (non-MTP) script regardless of the `spec-decoding` setting passed via the YAML.

What the impact would be
When the `kimik2.5-int4-mi300x-vllm-mtp` or `kimik2.5-int4-mi325x-vllm-mtp` CI jobs run, the benchmark will silently execute the old `kimik2.5_int4_mi300x.sh` / `kimik2.5_int4_mi325x.sh` scripts. Those scripts use vLLM image `v0.18.0` instead of the required `v0.19.0`, and they do not set `VLLM_SPEC_CONFIG` (the EAGLE3 speculative decoding config). Benchmark results will therefore reflect plain inference without MTP/speculative decoding, defeating the entire purpose of these new CI entries. The two newly added `_mtp.sh` scripts are effectively dead code.

How to fix it
Both `launch_mi300x-amds.sh` and `launch_mi325x-amds.sh` need the same `SPEC_SUFFIX` computation that already exists in `launch_mi355x-amds.sh`; the change for mi300x is sketched below, and the mi325x runner needs the analogous edit.
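A minimal sketch of that computation, reusing the names this review already mentions (`SPEC_DECODING`, `SPEC_SUFFIX`, `EXP_NAME`); the exact code in `launch_mi355x-amds.sh` lines 182 and 225 may differ.

```bash
# Sketch of the suggested SPEC_SUFFIX logic for launch_mi300x-amds.sh.
# Variable names and layout are assumptions based on this review, not the
# repository's actual code.
SPEC_SUFFIX=""
if [ "${SPEC_DECODING}" = "mtp" ]; then
  SPEC_SUFFIX="_mtp"
fi
BENCH_SCRIPT="benchmarks/single_node/${EXP_NAME%%_*}_int4_mi300x${SPEC_SUFFIX}.sh"
# launch_mi325x-amds.sh needs the same logic with _mi325x in the path.
```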
Step-by-step proof
1. CI triggers the `kimik2.5-int4-mi300x-vllm-mtp` job; the workflow sets `SPEC_DECODING=mtp` and calls `launch_mi300x-amds.sh`.
2. `EXP_NAME` is set to something like `kimik2.5_isl1024osl1024` (from the `generate_sweep_configs.py` pattern `{model_code}_{seq_len_str}`).
3. `${EXP_NAME%%_*}` strips everything from the first underscore onward, yielding `kimik2.5`.
4. The hardcoded path therefore resolves to `benchmarks/single_node/kimik2.5_int4_mi300x.sh`, the old non-MTP script.
5. `benchmarks/single_node/kimik2.5_int4_mi300x_mtp.sh` is never reached.
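For reference, the parameter expansion in step 3 can be checked on its own; the sample value is the one quoted in step 2.

```bash
# Standalone check of the ${VAR%%_*} expansion used by the runner scripts.
EXP_NAME="kimik2.5_isl1024osl1024"
echo "${EXP_NAME%%_*}"   # prints: kimik2.5  (longest "_*" suffix removed)
```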