BBR multi lora guide #1940
Conversation
…serve multiple LoRAs (many LoRAs per one model while having multiple models)
- The BBR guide is aligned with Getting Started (Main/Latest)
- Only two models are deployed, with the second one being a simulator
- Formatting and style issues fixed
- Typos and dangling sentences fixed
- The LoRA names are completely different
- The routing example simplified: one HTTPRoute with matchers (see the sketch below)
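As a rough illustration of the simplified routing example: a single HTTPRoute can carry one match rule per model, keying on the header that Body-Based Routing (BBR) populates from the model name in the request body. This is only a sketch, not an excerpt from the guide; the header name, API group, pool name, and model name below are assumptions.

```bash
# Hedged sketch: assumes BBR copies the model name from the request body into
# the X-Gateway-Model-Name header, and that an InferencePool named
# vllm-llama3-8b-instruct exists (both are assumptions, not the guide's manifest).
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: llm-route
spec:
  parentRefs:
  - name: inference-gateway
  rules:
  - matches:
    - headers:
      - name: X-Gateway-Model-Name
        value: meta-llama/Llama-3.1-8B-Instruct
    backendRefs:
    - group: inference.networking.k8s.io
      kind: InferencePool
      name: vllm-llama3-8b-instruct
EOF
```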
✅ Deploy Preview for gateway-api-inference-extension ready!
To edit notification comments on pull requests, go to your Netlify project configuration.
Hi @davidbreitgand. Thanks for your PR. I'm waiting for a github.com member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/ok-to-test
1. Send a few requests to Llama model to test that it works as before, as follows:

    ```bash
    curl -X POST -i ${IP}:${PORT}/v1/chat/completions \
    ```
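(For context, the quoted command is truncated in this view; a complete request of that shape might look like the following, where the model name is illustrative rather than taken from the guide.)

```bash
# Hedged example of the full request shape; the model name is an assumption.
curl -X POST -i ${IP}:${PORT}/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```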
Where are $IP and $PORT set?
Users are shown how to set IP and PORT in the Getting Started guide.
However, it's a good idea to add this here for completeness. I accepted your rephrasing suggestions and pushed another commit that sets PORT and IP explicitly (see the sketch below). Good catch. Thanks!
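(For readers following along, the Getting Started guide derives these roughly as follows; the gateway name and port here are assumptions based on that guide, not an excerpt from this PR.)

```bash
# Sketch, assuming a Gateway named "inference-gateway" exposing port 80;
# adjust both to match your deployment.
IP=$(kubectl get gateway/inference-gateway \
  -o jsonpath='{.status.addresses[0].value}')
PORT=80
```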
Co-authored-by: Shmuel Kallner <kallner@il.ibm.com>
…of PORT and IP when trying out multiple LLM setup
…ulti-lora-guide Addressing comments by the reviewer (shmuelk)
/lgtm

/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ahg-g, davidbreitgand

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/kind documentation
What this PR does / why we need it:
This PR extends the Serve Multiple GenAI Models guide with a detailed example of how to configure multiple LoRAs per base model. It also addresses the comments and feedback previously received on PR #1859.
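(As background on what serving multiple LoRAs per base model means at the model-server layer: vLLM can load several adapters alongside a single base model. The following is a minimal sketch under assumed adapter names and paths, not the guide's actual deployment.)

```bash
# Hedged sketch: base model, adapter names, and paths are illustrative
# assumptions; each adapter becomes a separately addressable model name.
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --enable-lora \
  --lora-modules food-review-1=/adapters/food-review-1 \
                 movie-critic-2=/adapters/movie-critic-2
```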
Which issue(s) this PR fixes:
Fixes #1858
Does this PR introduce a user-facing change?:
NONE