This project uses OpenDeepSearch with Ollama and SearXNG to search, rank, and summarize web search results. Ollama makes it easy to run various LLMs locally. A custom system prompt was added to control how search results are presented (e.g. "concise", "detailed", "summarized", or any other style you specify). Several issues were fixed, and logging was added to inspect search results.

Requirements:
- Git
- Python 3.11 (tested)
- Docker
- Ollama
- Internet connection
Clone this Repository

```bash
git clone https://github.com/Bensake/OpenDeepSearch-Bensake.git
cd OpenDeepSearch-Bensake
```
Create and Activate a Virtual Environment

```bash
python -m venv venv
source venv/bin/activate
```
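On Windows, activate with `venv\Scripts\activate` instead of the `source` command.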
Install Python Requirements

```bash
pip install -e .
```
Install Torch

For NVIDIA GPUs, run:

```bash
# Check your CUDA version first (install CUDA if not present); cu121 stands for CUDA 12.1
pip install torch==2.3.0+cu121 -f https://download.pytorch.org/whl/torch_stable.html
```
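If you are unsure which CUDA version your driver supports, `nvidia-smi` reports it:

```bash
nvidia-smi   # the "CUDA Version" field in the header shows the highest supported version
```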
Install Docker

Download and install Docker from docker.com.
Run Docker

Start Docker Desktop and make sure it is running.
Set Up SearXNG with Docker

```bash
docker pull searxng/searxng
```
Run the SearXNG container

```bash
docker run -d --name searxng -p 8080:8080 searxng/searxng
```
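To confirm the container is serving, you can probe it with curl; a 200 response means SearXNG is up:

```bash
curl -I http://localhost:8080   # expect an HTTP 200 status line
```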
Copy Custom SearXNG Settings

```bash
docker cp settings.yml searxng:/etc/searxng/settings.yml
# You might need to restart the container afterwards
```
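If the new settings do not take effect, restart the container:

```bash
docker restart searxng
```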
Install Ollama

Download and install Ollama from ollama.ai.
Download Ollama Model

Browse ollama.ai for a model that fits into your GPU memory. For example, if your GPU has 8 GB of VRAM, make sure the model size is under 8 GB.

```bash
ollama pull llama3.1:8b-instruct-q5_K_M   # change the model name to your desired model
```
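You can verify the download by listing the locally installed models:

```bash
ollama list   # the pulled model should appear with its size
```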
Set Model Name, System Prompt, Max Sources

Open search_web.py and set:

- `model_name`: the Ollama model you downloaded
- `system_prompt`: what you want the LLM to do with the search results
- `max_sources`: how many search sources to analyze
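For illustration, the settings might look like this (hypothetical values; the actual variable layout in search_web.py may differ):

```python
model_name = "llama3.1:8b-instruct-q5_K_M"  # the Ollama model pulled earlier
system_prompt = "Summarize the search results concisely."  # controls how results are presented
max_sources = 5  # number of search sources to analyze
```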
Once you have adjusted search_web.py to your needs, you can run searches from the command prompt:

```bash
python search_web.py "your search query here"
```

Ensure Docker and Ollama are running before executing the script.
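A quick sanity check that both services are up (each command should succeed without errors):

```bash
docker ps --filter name=searxng   # the searxng container should show as Up
ollama list                       # fails if the Ollama server is not running
```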