fix: GQA kernel relax seqlens_k shape check to element count #1067
Open

ankitm3k wants to merge 2 commits into ovep-develop
Conversation

> Might not be needed if microsoft#28259 is merged
Summary
PR microsoft#28031 tightened the `seqlens_k` shape check in `GroupQueryAttention` to require rank 1 AND `dim[0] == batch_size`. While the intent was sound, the rank-1 requirement is stricter than necessary for memory safety and collaterally rejects valid `{batch_size, 1}` tensors produced by common GQA exports, including Microsoft's own Phi-3 / Phi-3.5 ONNX models from HuggingFace (microsoft/Phi-3.5-mini-instruct-onnx), `onnxruntime-genai` model-builder output, and `optimum-onnx`-exported LLMs.

This PR replaces the rank check with an element-count check using `TensorShape::Size()`. The per-element bounds loop in `group_query_attention.cc` (the actual CVE fix for MSRC-108962) is unchanged, so all security guarantees from microsoft#28031 are preserved.

Reproducer before this fix
or any Python app running the same model via `CPUExecutionProvider` fails with:

The `seqlens_k` tensor flowing into the op is `{1, 1}` (a legitimate export shape: `ReduceSum(attention_mask, axis=-1)` of a `[batch, seq]` mask keeps the trailing singleton). Pre-microsoft#28031, the buggy `&&` predicate silently accepted it. Post-microsoft#28031, the `||` predicate rejects it even though it is memory-safe.

Change
This mirrors the existing `IsScalarOr1ElementVector` pattern used in the same helper for `total_seqlen` (L273), acknowledging that ONNX exporters legitimately emit singleton tensors with varying rank.

Security equivalence to microsoft#28031
microsoft#28031 defends against MSRC-108962, where crafted models supplied negative or oversized per-element `seqlens_k` values that, cast to `size_t`, produced OOB reads against the K/V cache. Its four parts:

- `group_query_attention.cc:88-100`
- `gqa_attention_base.h`
- `attention_helper.h`
- `group_query_attention_helper.h`

The CPU kernel only ever reads `seqlens_k` as a contiguous `int32_t*` in a `for (b = 0; b < batch_size; ...)` loop (group_query_attention.cc:86-101, gqa_attention_base.h:112-135). Shape is never read. The necessary-and-sufficient memory-safety invariant is therefore "the buffer holds exactly `batch_size` int32 elements", which is precisely what `Shape().Size() == batch_size` enforces. CUDA and WebGPU kernels follow the same pattern: they use `parameters.batch_size` (derived from `query.Shape()[0]`) to size device-side reads, never `seqlens_k`'s own shape, so this relaxation is safe across all EPs sharing this helper.

Edge-case audit
- `{B}`: accepted before and after
- `{B, 1}`, `{1, B}`: now accepted (`Size() == B`)
- `{}` (scalar): `Size() == 1`, so accepted only when `batch_size == 1`
- `{2}` (with `batch_size != 2`), `{B, 2}`: rejected (element-count mismatch)
- dimension overflow: `SafeInt` in `SizeHelper` throws synchronously

The `SafeInt` overflow trap inside `TensorShape::Size()` is a minor upgrade over the old code, which never called `Size()` for this validation path.

Test changes
- `SeqlensKWrongRank`: renamed to `SeqlensKRank2SingletonAccepted` and flipped from expect-failure to expect-success. Serves as a regression guard for Phi-3/3.5-style exports with `{batch_size, 1}` `seqlens_k`.
- `SeqlensKWrongLength`: kept; the expected error string is updated to match the new message ("seqlens_k must contain batch_size elements").
- The OOB tests (`NegativeSeqlensK_OOB`, `OversizedSeqlensK_OOB`, `MultiBatchOneBadSeqlensK_OOB`, `Int32MaxSeqlensK_OOB`, `NonPromptSeqlensKUnderflow_OOB`, etc.) are unaffected; all still expect failure against the per-element bounds loop.

Verified with a local app (`phi_muffin_app`) on PSU_MF (Phi-3-mini derived, 28-layer, int4-AWQ QDQ) via the OpenVINO EP with device=CPU: the GQA node falls through to the stock CPU kernel, runs correctly, and the generated output is coherent.

Related