
Add new feature for kimi k2.5 mtp support #1115

Draft

haic0 wants to merge 2 commits into main from haichen/kimi-k2.5-int4-mtp

Conversation

@haic0 (Collaborator) commented Apr 22, 2026

No description provided.

Signed-off-by: haic0 <haichzha@gbt350-odcdh5-wbb3.png-odc.dcgpu>
@github-actions (Contributor)

Thanks for the contribution! For vLLM & SGLang, please ensure that your recipes are similar to the official vLLM recipes and/or the SGLang cookbook.

If they are not, please create a PR first before we can merge your PR into the master branch. Let's ensure that the documentation is first class so that the entire ML community can benefit from your hard work! Thank you.

PR authors are responsible for ensuring that all GitHub Actions jobs fully pass after merging. A lot of the time, failures are just flakes, and simply re-running the failed jobs will fix them. If re-running failed jobs is attempted, PR authors are responsible for ensuring they pass. See GitHub's docs on re-running failed jobs: https://docs.github.com/en/actions/how-tos/manage-workflow-runs/re-run-workflows-and-jobs#re-running-failed-jobs-in-a-workflow

As a rule of thumb, PR authors should request a review and get PR approval from the respective companies' CODEOWNERS before requesting a review from core maintainers.

If additional help is needed, PR authors can reach out to core maintainers over Slack.

1 similar comment

@functionstackx marked this pull request as a draft on April 22, 2026 at 08:38
@functionstackx (Contributor)

thanks for the PR @haic0

We haven't thought much about the guidelines for MTP for models that don't natively ship with it, as we didn't think we would include it for inferencex v3 and have previously rejected submissions for it (#1026 (review)). Do you think we should include it for inferencex v3? The questions that need some thought are:

  1. What speculator weights and speculator layer count / architecture should be used?
  2. Can different vendors submit different speculator weights?
  3. Can different vendors submit different speculators? If so, how do you maintain a similar acceptance rate between vendors? See, e.g., [Model Runner V2] Enable forcing a specific acceptance rate during rejection sampling (vllm-project/vllm#38045): --speculative-config '{"method": "eagle", "model": <eagle_model>, "num_speculative_tokens": 3, "rejection_sample_method": "synthetic", "synthetic_acceptance_rate": <test-value>}'. If you are doing this, how do we ensure that the acceptance-rate (AR) distribution implementation is similar across different frameworks? (An invocation sketch follows this list.)
  4. Should we be using lightseekorg/kimi-k2.6-eagle3 (https://github.com/vllm-project/recipes/pull/347/changes)?
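
For context, a minimal sketch of how the --speculative-config JSON quoted in question 3 would be passed to a vLLM server. This is illustrative only: vllm serve and --tensor-parallel-size are standard, but the rejection_sample_method and synthetic_acceptance_rate keys come from the vllm-project/vllm#38045 proposal and are not assumed to exist in released vLLM; the placeholders are left unresolved.

# Hypothetical invocation sketch (not a benchmark recipe); <eagle_model> and
# <test-value> are placeholders carried over from the quoted flag, and the
# synthetic acceptance-rate keys exist only in the vllm-project/vllm#38045 proposal.
vllm serve moonshotai/Kimi-K2.5 \
  --tensor-parallel-size 4 \
  --speculative-config '{"method": "eagle", "model": "<eagle_model>",
                         "num_speculative_tokens": 3,
                         "rejection_sample_method": "synthetic",
                         "synthetic_acceptance_rate": "<test-value>"}'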

We already have 3 different models that natively ship with MTP (deepseek, glm5, qwen3.5). Is it worth spending time thinking about MTP for models that don't come with it natively?

Comment on lines +432 to +455
kimik2.5-int4-mi300x-vllm-mtp:
  image: vllm/vllm-openai-rocm:v0.19.0
  model: moonshotai/Kimi-K2.5
  model-prefix: kimik2.5
  runner: mi300x
  precision: int4
  framework: vllm
  multinode: false
  seq-len-configs:
    - isl: 1024
      osl: 1024
      search-space:
        - { tp: 4, conc-start: 4, conc-end: 64, spec-decoding: mtp }
    - isl: 8192
      osl: 1024
      search-space:
        - { tp: 4, conc-start: 4, conc-end: 64, spec-decoding: mtp }

kimik2.5-int4-mi325x-vllm-mtp:
  image: vllm/vllm-openai-rocm:v0.19.0
  model: moonshotai/Kimi-K2.5
  model-prefix: kimik2.5
  runner: mi325x
  precision: int4

🔴 The MI300X and MI325X runners hardcode the benchmark script path without any spec suffix, so the newly added kimik2.5_int4_mi300x_mtp.sh and kimik2.5_int4_mi325x_mtp.sh scripts are dead code — CI jobs for kimik2.5-int4-mi300x-vllm-mtp and kimik2.5-int4-mi325x-vllm-mtp will silently execute the old non-MTP scripts instead. Both runners need the same SPEC_SUFFIX logic already present in launch_mi355x-amds.sh (lines 182, 225).

Extended reasoning...

What the bug is and how it manifests

This PR adds two new YAML benchmark entries — kimik2.5-int4-mi300x-vllm-mtp and kimik2.5-int4-mi325x-vllm-mtp — each with spec-decoding: mtp in their search-space, plus corresponding benchmark scripts (kimik2.5_int4_mi300x_mtp.sh, kimik2.5_int4_mi325x_mtp.sh). When these CI jobs run, they dispatch through the MI300X and MI325X runner scripts, which are responsible for selecting the correct benchmark script. However, neither runner has logic to append the _mtp suffix to the script name, causing the wrong script to be executed.

The specific code path that triggers it

runners/launch_mi300x-amds.sh line 38 hardcodes:

bash benchmarks/single_node/${EXP_NAME%%_*}_${PRECISION}_mi300x.sh

Similarly, runners/launch_mi325x-amds.sh line 38 hardcodes:

bash benchmarks/single_node/${EXP_NAME%%_*}_${PRECISION}_mi325x.sh

Neither runner inspects SPEC_DECODING or computes a SPEC_SUFFIX.

Why existing code doesn't prevent it

The MI355X runner (launch_mi355x-amds.sh lines 182–225) was already updated with the correct pattern:

SPEC_SUFFIX=$([[ "$SPEC_DECODING" == "mtp" ]] && printf '_mtp' || printf '')
bash benchmarks/single_node/${EXP_NAME%%_*}_${PRECISION}_mi355x${SPEC_SUFFIX}.sh

The MI300X and MI325X runners were simply not updated to match, leaving them permanently dispatching to the base (non-MTP) script regardless of the spec-decoding setting passed via the YAML.

What the impact would be

When kimik2.5-int4-mi300x-vllm-mtp or kimik2.5-int4-mi325x-vllm-mtp CI jobs run, the benchmark will silently execute the old kimik2.5_int4_mi300x.sh / kimik2.5_int4_mi325x.sh scripts. Those scripts use vLLM image v0.18.0 instead of the required v0.19.0, and they do not set VLLM_SPEC_CONFIG (the EAGLE3 speculative decoding config). Benchmark results will therefore reflect plain inference without MTP/speculative decoding, defeating the entire purpose of these new CI entries. The two newly added _mtp.sh scripts are effectively dead code.
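
As a quick sanity check (an illustrative command, assuming all three AMD runner scripts live under runners/ as referenced above), listing the dispatch lines shows which runners already compute the suffix:

# Only launch_mi355x-amds.sh is expected to mention SPEC_SUFFIX today; the
# mi300x/mi325x scripts should show the hardcoded, suffix-less dispatch path.
grep -n 'SPEC_SUFFIX\|benchmarks/single_node' \
    runners/launch_mi300x-amds.sh \
    runners/launch_mi325x-amds.sh \
    runners/launch_mi355x-amds.sh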

How to fix it

Both launch_mi300x-amds.sh and launch_mi325x-amds.sh need the same SPEC_SUFFIX computation that already exists in launch_mi355x-amds.sh:

SPEC_SUFFIX=$([[ "$SPEC_DECODING" == "mtp" ]] && printf '_mtp' || printf '')
bash benchmarks/single_node/${EXP_NAME%%_*}_${PRECISION}_mi300x${SPEC_SUFFIX}.sh

(and analogously for mi325x).
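
Spelled out for both runners, a minimal sketch of the patched dispatch lines, assuming the mi300x/mi325x scripts expose the same EXP_NAME, PRECISION, and SPEC_DECODING variables as launch_mi355x-amds.sh:

# Shared suffix computation: '_mtp' only when the YAML entry requests
# spec-decoding: mtp, empty otherwise.
SPEC_SUFFIX=$([[ "$SPEC_DECODING" == "mtp" ]] && printf '_mtp' || printf '')

# runners/launch_mi300x-amds.sh (line 38)
bash benchmarks/single_node/${EXP_NAME%%_*}_${PRECISION}_mi300x${SPEC_SUFFIX}.sh

# runners/launch_mi325x-amds.sh (line 38)
bash benchmarks/single_node/${EXP_NAME%%_*}_${PRECISION}_mi325x${SPEC_SUFFIX}.sh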

Step-by-step proof

  1. CI triggers the kimik2.5-int4-mi300x-vllm-mtp job; the workflow sets SPEC_DECODING=mtp and calls launch_mi300x-amds.sh.
  2. Inside the runner, EXP_NAME is set to something like kimik2.5_isl1024osl1024 (from generate_sweep_configs.py pattern {model_code}_{seq_len_str}).
  3. The expression ${EXP_NAME%%_*} strips everything from the first underscore onward, yielding kimik2.5 (demonstrated in the sketch after this list).
  4. Line 38 constructs the path: benchmarks/single_node/kimik2.5_int4_mi300x.sh — the old non-MTP script.
  5. The new benchmarks/single_node/kimik2.5_int4_mi300x_mtp.sh is never reached.
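
A runnable illustration of steps 2 to 4, using the hypothetical values named above to show which script path each dispatch form produces:

# Hypothetical values, matching the example in the proof above.
EXP_NAME="kimik2.5_isl1024osl1024"
PRECISION="int4"
SPEC_DECODING="mtp"

# Step 3: ${EXP_NAME%%_*} keeps only the text before the first underscore.
echo "${EXP_NAME%%_*}"    # prints: kimik2.5

# Step 4: the current MI300X dispatch, with no suffix, always picks the
# non-MTP script.
echo "benchmarks/single_node/${EXP_NAME%%_*}_${PRECISION}_mi300x.sh"
# prints: benchmarks/single_node/kimik2.5_int4_mi300x.sh

# With the proposed SPEC_SUFFIX fix, the _mtp script is selected instead.
SPEC_SUFFIX=$([[ "$SPEC_DECODING" == "mtp" ]] && printf '_mtp' || printf '')
echo "benchmarks/single_node/${EXP_NAME%%_*}_${PRECISION}_mi300x${SPEC_SUFFIX}.sh"
# prints: benchmarks/single_node/kimik2.5_int4_mi300x_mtp.sh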

@haic0 (Collaborator, Author) commented Apr 22, 2026

@functionstackx

Thanks. I'm thinking along the same lines as you, and I agree we should discuss this further. Personally, when defining new rules for SD/MTP, we should prioritize fairness and comparability. In general, we could allow speculative decoding more broadly, but it would be better to place it in a separate track rather than mixing it with models that natively ship with MTP. Thanks!

What speculator weights / layers?
They must be fixed by the benchmark (a canonical speculator); otherwise results are not comparable.

Can different vendors submit different speculator weights?
Not allowed for the main track; possibly only allowed in a separate "customization" track.

Can different vendors submit different speculator architectures?
Not for the main track; possibly only allowed in a separate "customization" track.

How do we maintain a similar acceptance rate between vendors?
You can't do so reliably unless you fix both the speculator and the decoding algorithm. A forced synthetic AR is not enough unless it is standardized.

Should we use lightseekorg/kimi-k2.6-eagle3?
We can provide several choices for vendors to select from, which is more flexible. For example, for this model, both lightseekorg/kimi-k2.6-eagle3 and nvidia/Kimi-K2.5-Thinking-Eagle3 would be allowed.

Is it worth it?
Yes. In real deployment scenarios, any approach that improves overall performance is valuable. It’s important to demonstrate the model’s capabilities with SD/MTP enabled, since this reflects real-world serving optimizations.

@functionstackx (Contributor)

Thanks for your detailed insights @haic0

@chunfangamd can you add him to Slack so we can discuss further?
