An intelligent code review assistant that runs locally using LangChain, Ollama, and ChromaDB — optimized for real pull requests and real codebases.
- 🔍 Semantic code search with line-level references (file + line number)
- 💬 Local inference using Ollama + Mistral or Qwen
- 📦 RAG-powered insights using LangChain + ChromaDB
- 🧠 Memory support via LangGraph for contextual conversations
- ✅ PR-aware architecture ready for GitHub integration
- 🧼 Clean codebase following service-oriented architecture
```
LocalRAG/
├── .env
├── .venv/
├── .gitignore
├── Makefile
├── README.md
├── requirements.txt
├── client.py
├── app/
│   ├── __init__.py
│   ├── api/
│   │   ├── __init__.py
│   │   ├── routers/
│   │   │   ├── __init__.py
│   │   │   ├── chat.py
│   │   │   └── github.py
│   │   └── main.py
│   ├── core/
│   │   ├── __init__.py
│   │   └── config.py
│   ├── services/
│   │   ├── __init__.py
│   │   ├── github/
│   │   │   ├── __init__.py
│   │   │   └── pr_loader.py
│   │   ├── langchain/
│   │   │   ├── __init__.py
│   │   │   ├── chains.py
│   │   │   └── prompts.py
│   │   └── vectorstore/
│   │       ├── __init__.py
│   │       └── chroma.py
│   └── models/
│       ├── __init__.py
│       └── schemas.py
├── data/
│   └── vectorstore/
```
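`app/models/schemas.py` holds the request/response shapes used by the API. As a rough illustration of what that module might contain, here is a sketch using plain dataclasses (the field names are illustrative, not the actual schema; a FastAPI project like this one would more likely use Pydantic models):

```python
from dataclasses import dataclass, field


@dataclass
class QuestionRequest:
    """Hypothetical request body for the /ask endpoint."""
    question: str
    session_id: str = "default"  # used by the memory layer to track a conversation


@dataclass
class SourceChunk:
    """A retrieved code chunk plus the metadata needed for line-level references."""
    file: str
    line: int
    content: str


@dataclass
class AnswerResponse:
    """Hypothetical response: the model's answer and the chunks it was grounded on."""
    answer: str
    sources: list[SourceChunk] = field(default_factory=list)
```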
- VectorStoreService: Handles indexing and retrieval
- OllamaService: Local LLM client
- GitHubService: Pull request integration
- Memory: LangGraph nodes to manage chat history
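To illustrate the VectorStoreService contract, here is a stripped-down, in-memory stand-in. The real service delegates to ChromaDB with embedding-based similarity; the token-overlap scoring below is a deliberately naive placeholder, and all names are illustrative:

```python
import re


def _tokens(text: str) -> set[str]:
    # Crude tokenizer: lowercase alphanumeric runs, so "ask_question" matches "ask question".
    return set(re.findall(r"[a-z0-9]+", text.lower()))


class VectorStoreService:
    """In-memory stand-in for the ChromaDB-backed service (illustrative only)."""

    def __init__(self) -> None:
        self._chunks: list[dict] = []

    def index(self, content: str, file: str, line: int) -> None:
        # The real service would embed `content` and store the vector in ChromaDB,
        # keeping file/line as metadata for line-level references.
        self._chunks.append({"content": content, "file": file, "line": line})

    def retrieve(self, query: str, k: int = 3) -> list[dict]:
        # Naive token-overlap scoring in place of embedding similarity.
        q = _tokens(query)
        ranked = sorted(
            self._chunks,
            key=lambda c: len(q & _tokens(c["content"])),
            reverse=True,
        )
        return ranked[:k]
```

The point of the sketch is the interface: indexing always records file and line alongside the content, so every retrieval result can be cited precisely in the review output.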
- LangChain — RAG, memory, embedding abstraction
- ChromaDB — Vector database
- Ollama — Local LLM inference (Mistral, Qwen, etc.)
- LangGraph — Memory-aware multi-step conversations
- FastAPI — Lightweight API server
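Conceptually, these pieces combine into one request path: retrieve grounded chunks, assemble a prompt, and send it to the local model. A hand-rolled sketch of the prompt-assembly step (the real project wires this through LangChain chains; the function name and prompt wording are illustrative):

```python
def build_review_prompt(question: str, chunks: list[dict]) -> str:
    """Assemble a grounded prompt in which each retrieved chunk is cited with file + line."""
    context = "\n\n".join(
        f"# File: {c['file']} - Line: {c['line']}\n{c['content']}" for c in chunks
    )
    return (
        "You are a code review assistant. Answer using only the context below,\n"
        "and cite the file and line each point is based on.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\n"
    )
```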
- Python 3.10+
```bash
git clone https://github.com/yourusername/local-ai-code-reviewer.git
cd local-ai-code-reviewer
pip install -r requirements.txt
```

Pull a model with Ollama:

```bash
ollama run mistral
```

Or use:

```bash
ollama run qwen:7b
```

Ingest your codebase (make sure your repo is inside the repos/ folder):

```bash
make ingest
```

Start the API server:

```bash
make run
```

Then query it from the client:

```bash
python client.py
```

Each code chunk is enriched with metadata during ingestion:
```python
# File: chat_api.py - Line: 47
@app.post("/ask")
async def ask_question(request: QuestionRequest):
    ...
```
This allows the agent to ground its feedback in the exact location in the code.
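The ingestion step behind this can be sketched as follows: each source file is split into overlapping line windows, and every chunk carries its file path and 1-indexed starting line so the prompt can cite it. The chunk size, overlap, and function name below are illustrative choices, not the project's actual parameters:

```python
def chunk_file(path: str, source: str, size: int = 20, overlap: int = 5) -> list[dict]:
    """Split `source` into overlapping line windows tagged with file + start line."""
    lines = source.splitlines()
    chunks = []
    step = size - overlap  # how far each window advances
    for start in range(0, max(len(lines), 1), step):
        window = lines[start:start + size]
        if not window:
            break
        chunks.append({
            "content": "\n".join(window),
            # Metadata later rendered as "# File: <path> - Line: <n>" in the prompt.
            "file": path,
            "line": start + 1,  # 1-indexed, matching editor line numbers
        })
    return chunks
```

Overlapping windows are a common RAG choice: a function that straddles a chunk boundary still appears whole in at least one chunk.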
- Web frontend (Streamlit or FastAPI + React)
- Auto-suggested review comments
- LLM self-evaluation for feedback quality
- Multi-agent code review (naming, perf, security, etc.)
Contributions welcome! Feel free to open issues, PRs, or reach out on LinkedIn.
This project is licensed under the MIT License.