
Conversation

dxqb commented Dec 6, 2025

Qwen already supports the "transformer." LoRA key prefix; this PR adds the same support to Z-Image. (For some reason the Lumina2 code path is used for Z-Image.)

Please try to be consistent in your LoRA loading code. Thank you.
@comfyanonymous
Owner

Can you give me an example LoRA of this format and put which tool produces it as a comment in the code?


dxqb commented Dec 6, 2025

Can you give me an example LoRA of this format and put which tool produces it as a comment in the code?

It will be produced by upcoming OneTrainer support, and yes, I could do that, but my thinking is that this is a prefix that works for everybody and, more importantly, is the same for all upcoming models, so a comment saying it is for OneTrainer might not be a good idea.

Why I think this is a good prefix for Qwen, Z-Image and more:

Those models use diffusers keys because they originally started out as diffusers-format models. It is likely that future model releases will also do this, because they want day-0 diffusers support when they release the model.

Diffusers format follows a directory structure for the components. Using that directory name as the prefix in the LoRA, to denote which component a key belongs to, seems logical, and it isn't new: it's already implemented in Comfy for Qwen:

[screenshot: existing "transformer." prefix handling for Qwen LoRAs in ComfyUI]
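
As a rough illustration only (this is not the actual ComfyUI code, and the function and variable names are made up), handling the diffusers-style "transformer." prefix the same way as the existing "diffusion_model." prefix could look like this:

```python
# Illustrative sketch, not taken from ComfyUI: map LoRA keys that use the
# diffusers-style "transformer." prefix onto the "diffusion_model." prefix
# that the existing loading path already expects.
def remap_transformer_prefix(lora_sd):
    remapped = {}
    for key, value in lora_sd.items():
        if key.startswith("transformer."):
            remapped["diffusion_model." + key[len("transformer."):]] = value
        else:
            remapped[key] = value
    return remapped
```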


dxqb commented Dec 6, 2025

If you don't like this solution, no prefix is also an option, i.e. the same keys as in the transformer files: https://huggingface.co/Tongyi-MAI/Z-Image-Turbo/blob/main/transformer/diffusion_pytorch_model-00001-of-00003.safetensors

but currently a `diffusion_model.` or LyCORIS prefix is expected.
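
A hypothetical sketch of how the loader could decide which variant a file uses (prefixed or prefix-less); none of these names come from the actual ComfyUI source, and the prefix strings are placeholders:

```python
# Hypothetical helper, not taken from ComfyUI: detect which key prefix a LoRA
# state dict uses, so prefixed and prefix-less files can share one code path.
KNOWN_PREFIXES = ("transformer.", "diffusion_model.", "lycoris_")

def detect_prefix(lora_sd):
    for prefix in KNOWN_PREFIXES:
        if any(key.startswith(prefix) for key in lora_sd):
            return prefix
    # No known prefix: keys match the transformer checkpoint directly.
    return ""
```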
