54 changes: 54 additions & 0 deletions .github/configs/amd-master.yaml
Expand Up @@ -411,6 +411,60 @@ kimik2.5-int4-mi300x-vllm:
      search-space:
        - { tp: 8, conc-start: 4, conc-end: 64 }

kimik2.5-int4-mi355x-vllm-mtp:
  image: vllm/vllm-openai-rocm:v0.19.0
  model: moonshotai/Kimi-K2.5
  model-prefix: kimik2.5
  runner: mi355x
  precision: int4
  framework: vllm
  multinode: false
  seq-len-configs:
    - isl: 1024
      osl: 1024
      search-space:
        - { tp: 4, conc-start: 4, conc-end: 64, spec-decoding: mtp }
    - isl: 8192
      osl: 1024
      search-space:
        - { tp: 4, conc-start: 4, conc-end: 64, spec-decoding: mtp }

kimik2.5-int4-mi300x-vllm-mtp:
  image: vllm/vllm-openai-rocm:v0.19.0
  model: moonshotai/Kimi-K2.5
  model-prefix: kimik2.5
  runner: mi300x
  precision: int4
  framework: vllm
  multinode: false
  seq-len-configs:
    - isl: 1024
      osl: 1024
      search-space:
        - { tp: 4, conc-start: 4, conc-end: 64, spec-decoding: mtp }
    - isl: 8192
      osl: 1024
      search-space:
        - { tp: 4, conc-start: 4, conc-end: 64, spec-decoding: mtp }

kimik2.5-int4-mi325x-vllm-mtp:
  image: vllm/vllm-openai-rocm:v0.19.0
  model: moonshotai/Kimi-K2.5
  model-prefix: kimik2.5
  runner: mi325x
  precision: int4
Comment on lines +432 to +455
Contributor

🔴 The MI300X and MI325X runners hardcode the benchmark script path without any spec suffix, so the newly added kimik2.5_int4_mi300x_mtp.sh and kimik2.5_int4_mi325x_mtp.sh scripts are dead code — CI jobs for kimik2.5-int4-mi300x-vllm-mtp and kimik2.5-int4-mi325x-vllm-mtp will silently execute the old non-MTP scripts instead. Both runners need the same SPEC_SUFFIX logic already present in launch_mi355x-amds.sh (lines 182, 225).

Extended reasoning...

What the bug is and how it manifests

This PR adds three new MTP YAML benchmark entries and three matching scripts. Two of the entries, kimik2.5-int4-mi300x-vllm-mtp and kimik2.5-int4-mi325x-vllm-mtp, each carry spec-decoding: mtp in their search-space and are paired with the new scripts kimik2.5_int4_mi300x_mtp.sh and kimik2.5_int4_mi325x_mtp.sh. When these CI jobs run, they dispatch through the MI300X and MI325X runner scripts, which are responsible for selecting the correct benchmark script. However, neither runner has logic to append the _mtp suffix to the script name, so the wrong script is executed.

The specific code path that triggers it

runners/launch_mi300x-amds.sh line 38 hardcodes:

bash benchmarks/single_node/${EXP_NAME%%_*}_${PRECISION}_mi300x.sh

Similarly, runners/launch_mi325x-amds.sh line 38 hardcodes:

bash benchmarks/single_node/${EXP_NAME%%_*}_${PRECISION}_mi325x.sh

Neither runner inspects SPEC_DECODING or computes a SPEC_SUFFIX.

Why existing code doesn't prevent it

The MI355X runner (launch_mi355x-amds.sh lines 182–225) was already updated with the correct pattern:

SPEC_SUFFIX=$([[ "$SPEC_DECODING" == "mtp" ]] && printf '_mtp' || printf '')
bash benchmarks/single_node/${EXP_NAME%%_*}_${PRECISION}_mi355x${SPEC_SUFFIX}.sh

The MI300X and MI325X runners were simply not updated to match, leaving them permanently dispatching to the base (non-MTP) script regardless of the spec-decoding setting passed via the YAML.

What the impact would be

When kimik2.5-int4-mi300x-vllm-mtp or kimik2.5-int4-mi325x-vllm-mtp CI jobs run, the benchmark will silently execute the old kimik2.5_int4_mi300x.sh / kimik2.5_int4_mi325x.sh scripts. Those scripts use vLLM image v0.18.0 instead of the required v0.19.0, and they do not set VLLM_SPEC_CONFIG (the EAGLE3 speculative decoding config). Benchmark results will therefore reflect plain inference without MTP/speculative decoding, defeating the entire purpose of these new CI entries. The newly added kimik2.5_int4_mi300x_mtp.sh and kimik2.5_int4_mi325x_mtp.sh scripts are effectively dead code.

How to fix it

Both launch_mi300x-amds.sh and launch_mi325x-amds.sh need the same SPEC_SUFFIX computation that already exists in launch_mi355x-amds.sh:

SPEC_SUFFIX=$([[ "$SPEC_DECODING" == "mtp" ]] && printf '_mtp' || printf '')
bash benchmarks/single_node/${EXP_NAME%%_*}_${PRECISION}_mi300x${SPEC_SUFFIX}.sh

(and analogously for mi325x).

Step-by-step proof

  1. CI triggers the kimik2.5-int4-mi300x-vllm-mtp job; the workflow sets SPEC_DECODING=mtp and calls launch_mi300x-amds.sh.
  2. Inside the runner, EXP_NAME is set to something like kimik2.5_isl1024osl1024 (from generate_sweep_configs.py pattern {model_code}_{seq_len_str}).
  3. The expression ${EXP_NAME%%_*} strips everything from the first underscore onward, yielding kimik2.5.
  4. Line 38 constructs the path: benchmarks/single_node/kimik2.5_int4_mi300x.sh — the old non-MTP script.
  5. The new benchmarks/single_node/kimik2.5_int4_mi300x_mtp.sh is never reached (see the sketch below).
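
To make the expansion concrete, here is a minimal sketch of the path construction, assuming EXP_NAME, PRECISION and SPEC_DECODING are exported by the workflow exactly as quoted above (the variable names come from the runner snippets in this comment, not from re-reading the workflow itself):

# Hypothetical values matching the kimik2.5-int4-mi300x-vllm-mtp job
EXP_NAME=kimik2.5_isl1024osl1024
PRECISION=int4
SPEC_DECODING=mtp
SPEC_SUFFIX=$([[ "$SPEC_DECODING" == "mtp" ]] && printf '_mtp' || printf '')
# ${EXP_NAME%%_*} strips everything from the first underscore onward, leaving "kimik2.5"
echo "benchmarks/single_node/${EXP_NAME%%_*}_${PRECISION}_mi300x${SPEC_SUFFIX}.sh"
# prints: benchmarks/single_node/kimik2.5_int4_mi300x_mtp.sh
# With SPEC_DECODING unset (or not "mtp"), SPEC_SUFFIX stays empty and the old
# kimik2.5_int4_mi300x.sh path is produced, which is the current (buggy) behavior.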

  framework: vllm
  multinode: false
  seq-len-configs:
    - isl: 1024
      osl: 1024
      search-space:
        - { tp: 4, conc-start: 4, conc-end: 64, spec-decoding: mtp }
    - isl: 8192
      osl: 1024
      search-space:
        - { tp: 4, conc-start: 4, conc-end: 64, spec-decoding: mtp }

kimik2.5-fp4-mi355x-vllm:
  image: vllm/vllm-openai-rocm:v0.18.0
  model: amd/Kimi-K2.5-MXFP4
86 changes: 86 additions & 0 deletions benchmarks/single_node/kimik2.5_int4_mi300x_mtp.sh
@@ -0,0 +1,86 @@
#!/usr/bin/env bash

source "$(dirname "$0")/../benchmark_lib.sh"

check_env_vars \
  MODEL \
  TP \
  CONC \
  ISL \
  OSL \
  MAX_MODEL_LEN \
  RANDOM_RANGE_RATIO \
  RESULT_FILENAME

if [[ -n "$SLURM_JOB_ID" ]]; then
  echo "JOB $SLURM_JOB_ID running on $SLURMD_NODENAME"
fi

hf download "$MODEL"

# Set HIP_VISIBLE_DEVICES to match ROCR_VISIBLE_DEVICES for Ray compatibility in vLLM 0.14+
if [ -n "$ROCR_VISIBLE_DEVICES" ]; then
  export HIP_VISIBLE_DEVICES="$ROCR_VISIBLE_DEVICES"
fi

SERVER_LOG=/workspace/server.log
PORT=${PORT:-8888}

if [ "${EVAL_ONLY}" = "true" ]; then
setup_eval_context
MAX_MODEL_LEN="$EVAL_MAX_MODEL_LEN"
fi
# Start GPU monitoring (power, temperature, clocks every second)
start_gpu_monitor

echo "Ensuring host benchmark dependencies (aiohttp/transformers/numpy/tqdm/huggingface-hub/tiktoken)..."
python3 -m pip install --no-cache-dir aiohttp transformers numpy tqdm huggingface-hub tiktoken



set -x
export VLLM_ROCM_USE_AITER=1
export VLLM_ROCM_QUICK_REDUCE_QUANTIZATION=INT4
export VLLM_ROCM_USE_AITER_RMSNORM=0
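# Default speculative-decoding config: EAGLE3 draft model with 3 speculative tokens;
# override by exporting VLLM_SPEC_CONFIG before launching this script.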
export VLLM_SPEC_CONFIG="${VLLM_SPEC_CONFIG:-{\"model\": \"nvidia/Kimi-K2.5-Thinking-Eagle3\", \"method\": \"eagle3\", \"num_speculative_tokens\": 3}}"

vllm serve $MODEL --port $PORT \
  --tensor-parallel-size=$TP \
  --gpu-memory-utilization 0.8 \
  --max-model-len $MAX_MODEL_LEN \
  --block-size=64 \
  --trust-remote-code \
  --no-enable-prefix-caching \
  --max-num-seqs 256 \
  --speculative-config "$VLLM_SPEC_CONFIG" \
  --mm-encoder-tp-mode data > $SERVER_LOG 2>&1 &

SERVER_PID=$!

# Wait for server to be ready
wait_for_server_ready --port "$PORT" --server-log "$SERVER_LOG" --server-pid "$SERVER_PID"

run_benchmark_serving \
  --model "$MODEL" \
  --port "$PORT" \
  --backend vllm \
  --input-len "$ISL" \
  --output-len "$OSL" \
  --random-range-ratio "$RANDOM_RANGE_RATIO" \
  --num-prompts "$((CONC * 10))" \
  --max-concurrency "$CONC" \
  --result-filename "$RESULT_FILENAME" \
  --result-dir /workspace/ \
  --trust-remote-code

# After throughput, run evaluation only if RUN_EVAL is true
if [ "${RUN_EVAL}" = "true" ]; then
  run_eval --framework lm-eval --port "$PORT"
  append_lm_eval_summary
fi

# Stop GPU monitoring
stop_gpu_monitor
set +x


86 changes: 86 additions & 0 deletions benchmarks/single_node/kimik2.5_int4_mi325x_mtp.sh
@@ -0,0 +1,86 @@
#!/usr/bin/env bash

source "$(dirname "$0")/../benchmark_lib.sh"

check_env_vars \
  MODEL \
  TP \
  CONC \
  ISL \
  OSL \
  MAX_MODEL_LEN \
  RANDOM_RANGE_RATIO \
  RESULT_FILENAME

if [[ -n "$SLURM_JOB_ID" ]]; then
  echo "JOB $SLURM_JOB_ID running on $SLURMD_NODENAME"
fi

hf download "$MODEL"

# Set HIP_VISIBLE_DEVICES to match ROCR_VISIBLE_DEVICES for Ray compatibility in vLLM 0.14+
if [ -n "$ROCR_VISIBLE_DEVICES" ]; then
  export HIP_VISIBLE_DEVICES="$ROCR_VISIBLE_DEVICES"
fi

SERVER_LOG=/workspace/server.log
PORT=${PORT:-8888}

if [ "${EVAL_ONLY}" = "true" ]; then
setup_eval_context
MAX_MODEL_LEN="$EVAL_MAX_MODEL_LEN"
fi
# Start GPU monitoring (power, temperature, clocks every second)
start_gpu_monitor

echo "Ensuring host benchmark dependencies (aiohttp/transformers/numpy/tqdm/huggingface-hub/tiktoken)..."
python3 -m pip install --no-cache-dir aiohttp transformers numpy tqdm huggingface-hub tiktoken



set -x
export VLLM_ROCM_USE_AITER=1
export VLLM_ROCM_QUICK_REDUCE_QUANTIZATION=INT4
export VLLM_ROCM_USE_AITER_RMSNORM=0
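# Default speculative-decoding config: EAGLE3 draft model with 3 speculative tokens;
# override by exporting VLLM_SPEC_CONFIG before launching this script.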
export VLLM_SPEC_CONFIG="${VLLM_SPEC_CONFIG:-{\"model\": \"nvidia/Kimi-K2.5-Thinking-Eagle3\", \"method\": \"eagle3\", \"num_speculative_tokens\": 3}}"

vllm serve $MODEL --port $PORT \
  --tensor-parallel-size=$TP \
  --gpu-memory-utilization 0.8 \
  --max-model-len $MAX_MODEL_LEN \
  --block-size=64 \
  --trust-remote-code \
  --no-enable-prefix-caching \
  --max-num-seqs 256 \
  --speculative-config "$VLLM_SPEC_CONFIG" \
  --mm-encoder-tp-mode data > $SERVER_LOG 2>&1 &

SERVER_PID=$!

# Wait for server to be ready
wait_for_server_ready --port "$PORT" --server-log "$SERVER_LOG" --server-pid "$SERVER_PID"

run_benchmark_serving \
  --model "$MODEL" \
  --port "$PORT" \
  --backend vllm \
  --input-len "$ISL" \
  --output-len "$OSL" \
  --random-range-ratio "$RANDOM_RANGE_RATIO" \
  --num-prompts "$((CONC * 10))" \
  --max-concurrency "$CONC" \
  --result-filename "$RESULT_FILENAME" \
  --result-dir /workspace/ \
  --trust-remote-code

# After throughput, run evaluation only if RUN_EVAL is true
if [ "${RUN_EVAL}" = "true" ]; then
  run_eval --framework lm-eval --port "$PORT"
  append_lm_eval_summary
fi

# Stop GPU monitoring
stop_gpu_monitor
set +x


86 changes: 86 additions & 0 deletions benchmarks/single_node/kimik2.5_int4_mi355x_mtp.sh
@@ -0,0 +1,86 @@
#!/usr/bin/env bash

source "$(dirname "$0")/../benchmark_lib.sh"

check_env_vars \
  MODEL \
  TP \
  CONC \
  ISL \
  OSL \
  MAX_MODEL_LEN \
  RANDOM_RANGE_RATIO \
  RESULT_FILENAME

if [[ -n "$SLURM_JOB_ID" ]]; then
  echo "JOB $SLURM_JOB_ID running on $SLURMD_NODENAME"
fi

hf download "$MODEL"

# Set HIP_VISIBLE_DEVICES to match ROCR_VISIBLE_DEVICES for Ray compatibility in vLLM 0.14+
if [ -n "$ROCR_VISIBLE_DEVICES" ]; then
  export HIP_VISIBLE_DEVICES="$ROCR_VISIBLE_DEVICES"
fi

SERVER_LOG=/workspace/server.log
PORT=${PORT:-8888}

if [ "${EVAL_ONLY}" = "true" ]; then
setup_eval_context
MAX_MODEL_LEN="$EVAL_MAX_MODEL_LEN"
fi
# Start GPU monitoring (power, temperature, clocks every second)
start_gpu_monitor

echo "Ensuring host benchmark dependencies (aiohttp/transformers/numpy/tqdm/huggingface-hub/tiktoken)..."
python3 -m pip install --no-cache-dir aiohttp transformers numpy tqdm huggingface-hub tiktoken



set -x
export VLLM_ROCM_USE_AITER=1
export VLLM_ROCM_QUICK_REDUCE_QUANTIZATION=INT4
export VLLM_ROCM_USE_AITER_RMSNORM=0
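# Default speculative-decoding config: EAGLE3 draft model with 3 speculative tokens;
# override by exporting VLLM_SPEC_CONFIG before launching this script.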
export VLLM_SPEC_CONFIG="${VLLM_SPEC_CONFIG:-{\"model\": \"nvidia/Kimi-K2.5-Thinking-Eagle3\", \"method\": \"eagle3\", \"num_speculative_tokens\": 3}}"

vllm serve $MODEL --port $PORT \
  --tensor-parallel-size=$TP \
  --gpu-memory-utilization 0.8 \
  --max-model-len $MAX_MODEL_LEN \
  --block-size=64 \
  --trust-remote-code \
  --no-enable-prefix-caching \
  --max-num-seqs 256 \
  --speculative-config "$VLLM_SPEC_CONFIG" \
  --mm-encoder-tp-mode data > $SERVER_LOG 2>&1 &

SERVER_PID=$!

# Wait for server to be ready
wait_for_server_ready --port "$PORT" --server-log "$SERVER_LOG" --server-pid "$SERVER_PID"

run_benchmark_serving \
  --model "$MODEL" \
  --port "$PORT" \
  --backend vllm \
  --input-len "$ISL" \
  --output-len "$OSL" \
  --random-range-ratio "$RANDOM_RANGE_RATIO" \
  --num-prompts "$((CONC * 10))" \
  --max-concurrency "$CONC" \
  --result-filename "$RESULT_FILENAME" \
  --result-dir /workspace/ \
  --trust-remote-code

# After throughput, run evaluation only if RUN_EVAL is true
if [ "${RUN_EVAL}" = "true" ]; then
  run_eval --framework lm-eval --port "$PORT"
  append_lm_eval_summary
fi

# Stop GPU monitoring
stop_gpu_monitor
set +x

