A simple GUI written in Python for chatting with large language models locally using Ollama.
Below is an example of a short chat with the 8B variant of Meta Llama 3:
- Install Ollama and download Llama 3 by running `ollama pull llama3` in your terminal
- Install `uv`
- Clone/download the repository and run `ollama run llama3 "" && uv run llm_gui.py` in your terminal
Optional:
- Download and install the Inter font (otherwise the default "Courier" font is used)
Write into the bottom text box and click Send (or press Ctrl + Enter). The first prompt may take a while since the model loads in the background. After that, you should see the LLM's answer streamed piece by piece.
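Streaming works roughly as sketched below. This is a minimal illustration assuming the shape of Ollama's streamed chat chunks (`{"message": {"content": ...}}`); the network call is replaced by a stubbed generator (`fake_stream` is hypothetical) so the sketch stays self-contained.

```python
from typing import Dict, Iterator


def fake_stream() -> Iterator[Dict]:
    # Stand-in for ollama.chat(model="llama3", messages=..., stream=True),
    # which yields chunks shaped like {"message": {"content": "..."}}.
    for piece in ["Hello", ", ", "world", "!"]:
        yield {"message": {"content": piece}}


def consume_stream(chunks: Iterator[Dict]) -> str:
    """Accumulate streamed chunks into the full answer, much as the GUI
    appends each piece to the chat window as it arrives."""
    answer = ""
    for chunk in chunks:
        answer += chunk["message"]["content"]
    return answer


print(consume_stream(fake_stream()))
```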
The Previous button restores the last message/query sent by the user.
The Cancel button interrupts the response of the LLM.
The New Chat button resets history and starts a brand new chat session.
The Exit button exits the GUI (or simply close out the window).
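The session behaviour behind the Previous and New Chat buttons can be sketched as follows; the `ChatSession` class and its method names are hypothetical illustrations, not taken from `llm_gui.py`.

```python
class ChatSession:
    """Minimal chat-history model: a list of role/content messages."""

    def __init__(self):
        self.history = []  # e.g. [{"role": "user", "content": "hi"}, ...]

    def send(self, text: str) -> None:
        # Record the user's message; the GUI would then stream the
        # model's reply and append it with role "assistant".
        self.history.append({"role": "user", "content": text})

    def previous(self) -> str:
        # "Previous" restores the last message sent by the user.
        for msg in reversed(self.history):
            if msg["role"] == "user":
                return msg["content"]
        return ""

    def new_chat(self) -> None:
        # "New Chat" discards all context and starts a fresh session.
        self.history.clear()


session = ChatSession()
session.send("What is Ollama?")
print(session.previous())
```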
- Cancel / interrupt the LLM while an answer is being generated
- Start a new chat / reset
- Improve stability
- Add more fallback fonts
- Monospaced font for code
- Custom button design
This project is licensed under the MIT License.
This project builds upon the concepts and ideas from several sources listed in ATTRIBUTION.md.
