This project implements a production-style Agentic AI pipeline on top of Google Gemini 2.5. The agent orchestrates multi-step reasoning with a visible "thinking" phase, dynamic model routing between Pro and Flash-Lite based on task complexity, and a secondary LLM observer pass for contextual follow-up generation. Token-by-token streaming delivers low-latency responses via async Dart streams, deployed as a cross-platform Flutter app (Web + Android) with CI/CD via GitHub Actions.
This project was born out of a desire to move beyond simple "GPT-wrappers" and explore the potential of Agentic AI in a specialized, high-stakes environment: Higher Education.
Students at the University of Exeter face a fragmented data landscape—weather, bus schedules, and academic concepts are spread across different platforms. This assistant was built to:
- Consolidate campus-specific data into a single, intelligent interface.
- Implement Production-Grade AI patterns like streaming and multi-stage thought processes in a cross-platform (Flutter) environment.
- Demonstrate Agentic workflows where the AI doesn't just answer but actively helps the student "think" through their next steps.
Try the agentic experience directly in your browser: sandycompetent.github.io/exeter_academic_agent/
This isn't just a "wrapper app." It implements several advanced LLM patterns:
- Agentic Orchestration: The assistant uses a multi-step "Thinking" phase to determine the best response strategy before execution.
- Token-by-Token Streaming: Low-latency UI updates using asynchronous Dart streams for a responsive "real-time" feel.
- Dynamic Tool Use & Context Injection: Integrated system instructions that ground the agent in the University of Exeter's academic context.
- Automated Follow-up Generation: A secondary "observer" LLM pass (using `gemini-2.5-flash-lite`) to generate contextual suggestions and maintain conversation flow.
- Model Routing: Dynamic selection between specialized models (Pro vs Flash-Lite) based on task complexity and cost/latency requirements.
- Thinking State Indicator: Visualizes the agent's internal state transitions before response delivery.
- Smart Follow-ups: Zero-tap deeper exploration via 3 AI-generated relevant questions.
- Academic Specialist: System-level prompt engineering for high-quality study plans and summaries.
- Real-time Environment Data: Integration with Open-Meteo for hyper-local campus weather.
- Predictive Transit: AI-assisted analysis of Stagecoach bus routes (4/4A) to Exeter campus.
- Library Occupancy Modeling: Time-aware heuristic estimates for Forum and St Luke's library capacity.
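The model-routing and streaming patterns above can be sketched in Dart. This is a minimal illustration, not the app's actual implementation: the `AgentRouter` class and the `_isComplex` heuristic are hypothetical, while `GenerativeModel`, `Content.text`, and `generateContentStream` come from the `google_generative_ai` SDK the project uses.

```dart
import 'package:google_generative_ai/google_generative_ai.dart';

/// Illustrative sketch of dynamic model routing plus token-by-token
/// streaming. Only the SDK calls are real; the routing heuristic is
/// a stand-in for whatever complexity signal the app actually uses.
class AgentRouter {
  AgentRouter(String apiKey)
      : _pro = GenerativeModel(model: 'gemini-2.5-pro', apiKey: apiKey),
        _flashLite =
            GenerativeModel(model: 'gemini-2.5-flash-lite', apiKey: apiKey);

  final GenerativeModel _pro;
  final GenerativeModel _flashLite;

  /// Hypothetical heuristic: long or reasoning-heavy prompts go to Pro;
  /// everything else takes the cheaper, lower-latency Flash-Lite path.
  bool _isComplex(String prompt) =>
      prompt.length > 200 || prompt.contains('study plan');

  /// Streams the answer chunk-by-chunk so the UI can render partial text
  /// as it arrives, rather than waiting for the full response.
  Stream<String> ask(String prompt) async* {
    final model = _isComplex(prompt) ? _pro : _flashLite;
    await for (final chunk
        in model.generateContentStream([Content.text(prompt)])) {
      final text = chunk.text;
      if (text != null) yield text;
    }
  }
}
```

A Flutter widget can then consume `ask(...)` with a `StreamBuilder` to get the incremental "real-time" feel described above.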
- Framework: Flutter (v3.11+)
- AI Engine: Google Gemini SDK (v0.4.7)
- Models: `gemini-2.5-pro` (Reasoning), `gemini-2.5-flash-lite` (Suggestions)
- State Management: Provider for clean architecture and reactive state.
- External APIs: Open-Meteo API for real-time campus weather data.
- CI/CD: GitHub Actions for automated Web deployment and Android APK generation.
- Security: Secure API key management via `--dart-define` env variables.
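On the Dart side, a `--dart-define` value is read with the standard `String.fromEnvironment` constructor. A minimal sketch (the `API_KEY` name matches the build commands in the setup instructions; the startup check is illustrative):

```dart
// String.fromEnvironment is the standard Dart mechanism for reading
// values supplied at build time via --dart-define.
const String apiKey = String.fromEnvironment('API_KEY');

void main() {
  // Illustrative fail-fast check for builds missing a key.
  assert(apiKey.isNotEmpty, 'Run with --dart-define=API_KEY=<your key>');
}
```

Because the value is a compile-time constant, it never appears in source control, which is the security property the project relies on.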
```shell
git clone https://github.com/sandycompetent/exeter_academic_agent.git
cd exeter_academic_agent
flutter pub get
```

This project follows security best practices and does not hardcode API keys. You must provide your Google AI Studio key at build time:
Debug/Run:

```shell
flutter run --dart-define=API_KEY=your_gemini_api_key_here
```

Release Build:

```shell
flutter build apk --release --dart-define=API_KEY=your_gemini_api_key_here
```

Note: You can also manage your key dynamically within the app's Settings tab; it is persisted locally and never shared.
- `lib/services/`: LLM orchestration and API communication (Gemini, Weather).
- `lib/providers/`: Global state, model configuration, and reactive logic.
- `lib/models/`: Domain-specific data structures.
- `lib/screens/`: High-level feature modules (Chat Agent, Dashboard).
While built with a production-grade architecture, the project has a clear roadmap for future engineering improvements:
- RAG Integration (Planned): Moving from system-prompt grounding to retrieval over a vector store of local embeddings, indexing official Exeter course handbooks and academic policy documents.
- Deeper API Integration: Expanding from transit/weather to include real-time library seat availability and personalized timetable syncing.
- Reflection Loops: Implementing a "Self-Critique" step in the agentic flow where the model verifies its own output for academic accuracy before streaming.
- Automated Evals: Building a test suite for LLM-response evaluation to measure coherence and grounding over time.
Distributed under the MIT License. See LICENSE for more information.
