feat: Qwen 3.5 GDN support with hybrid model fixes#2133
Open
r-dh wants to merge 4 commits into abetlen:main from
Conversation
Updates the llama.cpp submodule to da348c9df, which includes support for
the Qwen 3.5 model architecture (hybrid SSM + attention).
Changes to Python bindings:
1. llama_cpp.py: Sync llama_context_params struct with upstream C API
- flash_attn (bool) → flash_attn_type (enum llama_flash_attn_type)
- Add samplers (void*) and n_samplers (size_t) fields
- Add LLAMA_FLASH_ATTN_TYPE_* enum constants
2. llama.py: Update flash_attn parameter handling
- Map flash_attn=True/False to flash_attn_type=1/0
3. _ctypes_extensions.py: Graceful handling of deprecated symbols
- ctypes_function decorator returns stub instead of crashing
when a symbol is not found in the shared library
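A minimal sketch of changes 2 and 3 above. The enum values follow the True/False → 1/0 mapping stated in this description, and the decorator below is illustrative of the stub behavior, not the PR's exact code; all names other than the C symbol conventions are assumptions.

```python
# Assumed values for the new flash_attn_type enum; the PR maps
# flash_attn=False -> 0 and flash_attn=True -> 1.
LLAMA_FLASH_ATTN_TYPE_DISABLED = 0
LLAMA_FLASH_ATTN_TYPE_ENABLED = 1


def flash_attn_to_type(flash_attn: bool) -> int:
    """Map the legacy boolean flag onto the new enum field."""
    return LLAMA_FLASH_ATTN_TYPE_ENABLED if flash_attn else LLAMA_FLASH_ATTN_TYPE_DISABLED


def ctypes_function(lib, name, argtypes, restype):
    """Bind a C symbol from `lib`, but return a stub instead of crashing
    at import time when the symbol is missing (illustrative sketch)."""
    def decorator(py_func):
        try:
            c_func = getattr(lib, name)
        except AttributeError:
            # Symbol was removed upstream: fail only if the stub is called.
            def stub(*args, **kwargs):
                raise NotImplementedError(
                    f"{name} is not present in this build of the shared library"
                )
            return stub
        c_func.argtypes = argtypes
        c_func.restype = restype
        return c_func
    return decorator
```

With this shape, importing the bindings against an older or newer shared library succeeds, and only actually invoking a missing function raises.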
Tested with Qwen3.5-0.8B-Q4_K_M.gguf on Apple Silicon (M1 Pro):
- Cold start: ~4s (vs ~40s with mlx-vlm)
- Inference: ~0.6s per chat completion
- Model loads and runs correctly on Metal GPU
Summary
Adds Qwen 3.5 (Gated Delta Network) support, building on the work in #2132 by @codavidgarcia, with additional fixes needed to make hybrid GDN models work correctly:
- BUILD_NUMBER and LLAMA_INSTALL_VERSION for mtmd
- kv_cache_seq_rm so callers can detect when partial memory removal fails
- generate(): GDN hybrid models reject partial memory removal via llama_memory_seq_rm(), but the return value was being ignored, causing `llama_decode returned -1` on subsequent calls. Now falls back to full prompt re-evaluation when partial removal is not supported.

The GDN prefix-caching fix affects all hybrid architecture models (not just Qwen 3.5), since any model mixing KV-cache attention with recurrent state will return False from llama_memory_seq_rm() for partial ranges.

Test plan
- llama_memory_clear API
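The generate() fallback described above can be sketched as follows. `ctx` stands in for a thin wrapper over the llama.cpp C API; the method and parameter names here are assumptions for illustration, not the PR's exact code.

```python
def trim_prefix_cache(ctx, seq_id: int, n_keep: int, n_past: int) -> int:
    """Try to keep the first n_keep cached tokens of a sequence.

    Returns the number of tokens still valid in the cache, so the caller
    knows how much of the prompt must be re-evaluated.
    """
    if ctx.memory_seq_rm(seq_id, n_keep, n_past):
        # Partial removal supported: only the stale suffix was dropped.
        return n_keep
    # Hybrid SSM + attention models reject partial ranges. Ignoring this
    # False return is what produced "llama_decode returned -1"; instead,
    # clear the whole sequence and re-evaluate the prompt from scratch.
    ctx.memory_seq_rm(seq_id, 0, -1)
    return 0
```

The key design point is checking the boolean result of llama_memory_seq_rm() rather than assuming partial removal succeeded.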