An interactive interview practice app built with Streamlit, Gemini API, and Docker.
The app generates role-specific interview questions and provides real-time AI feedback on your answers.
- 🎯 Role-based interview question generation
- 📈 Adjustable difficulty level (Easy, Medium, Hard)
- 💬 Real-time AI feedback on answers
- 📦 Dockerized for easy deployment
- 🔄 Option to run with the Gemini API (cloud) or Ollama (local LLM)
- Frontend/UI → Streamlit
- LLM → Google Gemini API (default) or Ollama (local option)
- Containerization → Docker + Docker Compose
- Language → Python 3.10+
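The Gemini-or-Ollama choice can be expressed as a small dispatch over the environment. A minimal sketch — the function names and fallback logic here are illustrative, not the app's actual code; the Ollama call uses its standard `/api/generate` endpoint:

```python
import json
import os
import urllib.request


def pick_backend() -> str:
    """Prefer Gemini when an API key is configured, else fall back to local Ollama."""
    return "gemini" if os.getenv("GEMINI_API_KEY") else "ollama"


def ask_ollama(prompt: str, model: str = "llama3",
               host: str = "http://localhost:11434") -> str:
    """Send a single non-streaming generation request to a local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The same dispatch idea lets the rest of the app stay backend-agnostic: only the transport function changes between cloud and local modes.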
Clone the repo:

```bash
git clone https://github.com/izudada/interview_prep.git
cd interview_prep
```

Create and activate a virtual environment:

```bash
python3 -m venv venv
source venv/bin/activate
```

Install dependencies:

```bash
pip install -r requirements.txt
```
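A typical `requirements.txt` for this stack might look like the following — package names are assumptions based on the tech stack above, not the repo's actual pinned list:

```text
streamlit
google-generativeai   # Gemini client
python-dotenv         # loads the .env file
requests              # talks to the Ollama HTTP API
```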
Create a `.env` file in the root directory and add:

```env
# For Gemini
GEMINI_API_KEY=your_gemini_api_key_here

# For Ollama (optional local model setup)
OLLAMA_HOST=http://localhost:11434
```
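At startup the app can read these values from the environment. A stdlib-only sketch of that idea (the app itself may well use `python-dotenv` instead; the helper names here are illustrative):

```python
import os
from pathlib import Path


def load_env(path: str = ".env") -> dict:
    """Parse simple KEY=value lines from a .env file; comments and blanks skipped."""
    config = {}
    env_file = Path(path)
    if env_file.exists():
        for line in env_file.read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            config[key.strip()] = value.strip()
    return config


def llm_settings(config: dict) -> dict:
    """Pick out the values the app cares about, with the default Ollama port."""
    return {
        "gemini_api_key": config.get("GEMINI_API_KEY", os.getenv("GEMINI_API_KEY")),
        "ollama_host": config.get("OLLAMA_HOST", "http://localhost:11434"),
    }
```

Note that `OLLAMA_HOST` falls back to Ollama's default port 11434 when the variable is unset, so the local option works without extra configuration.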
Run the Docker container using:

```bash
make up
```

The app will be live at http://localhost:8502.
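The `up` target presumably wraps Docker Compose; a hypothetical Makefile sketch of what it might contain (the `down` target and exact flags are assumptions, not the repo's actual Makefile):

```make
up:
	docker compose up --build -d

down:
	docker compose down
```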
Switch to the containerization_branch:

```bash
git pull
git checkout containerization_branch
```

Then run the Docker container using:

```bash
make up
```
- Select a difficulty level and role in the sidebar.
- Click Start Interview.
- Answer questions in the text box.
- Get personalized feedback in real time.
- Reset anytime to start fresh.
- Streamlit app (UI + logic)
- Gemini API or Ollama (LLM backend)
- Docker (containerization for deployment consistency)
- Gemini API → Google handles scaling; subject to rate limits.
- Ollama (local) → Performance depends on your machine's CPU/RAM.
- Load testing → You can use Locust or ApacheBench (`ab`) to simulate 50+ concurrent users.
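For a quick smoke test without installing Locust, concurrent requests can be simulated from the standard library. A sketch — the URL and user count are placeholders, and the pluggable `fetch` parameter exists only so the helper can be exercised without a live server:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen


def hit(url: str) -> int:
    """Fetch a URL once and return the HTTP status code."""
    with urlopen(url, timeout=10) as resp:
        return resp.status


def smoke_test(url: str, users: int = 50, fetch=hit) -> dict:
    """Fire `users` concurrent requests and summarize status codes and wall time."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        statuses = list(pool.map(fetch, [url] * users))
    return {
        "ok": sum(1 for s in statuses if s == 200),
        "total": len(statuses),
        "seconds": round(time.perf_counter() - start, 2),
    }
```

Against a running instance you would call something like `smoke_test("http://localhost:8502", users=50)`; for sustained load patterns and latency percentiles, Locust remains the better tool.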
- Add voice input/output
- Save interview history
- Export feedback to PDF/CSV
- Multi-user support with database integration