Using this local RAG, and hopefully improving it as new models come out, or as better quantization strategies let me run >7B-parameter models on my 3080. The notebook is annotated with everything one needs to reproduce it and refine it on their own.
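The retrieval step at the heart of any RAG pipeline is independent of which local model or quantization scheme generates the answer. As a rough sketch (toy hand-written vectors standing in for real embeddings; `cosine` and `retrieve` are illustrative helper names, not functions from this notebook), it might look like:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, doc_vecs, k=2):
    # Rank stored chunk embeddings by similarity to the query embedding
    # and return the indices of the top-k chunks.
    scored = sorted(
        enumerate(doc_vecs),
        key=lambda iv: cosine(query_vec, iv[1]),
        reverse=True,
    )
    return [i for i, _ in scored[:k]]

# Toy 3-dimensional embeddings; a real pipeline would use an
# embedding model over the document chunks instead.
docs = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 1.0, 0.0]]
query = [1.0, 0.05, 0.0]
print(retrieve(query, docs))  # → [0, 1]
```

The retrieved chunks are then prepended to the prompt before it is sent to the (quantized) local model, which is why swapping in a better or larger model improves answers without changing this part.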
seyeint/Local_LLM_RAG
