EventBoost is an events-marketing platform built for students, campus leaders, and staff. For students, it surfaces relevant events based on their affinities; for leaders and staff, it turns one event description into a full multi-channel campaign (Instagram, WhatsApp, email, Campus Engage) plus an AI-generated flyer.
Live app: https://app.eventboost.dev/ Video walkthrough: https://youtu.be/UIQeAEmvgb8
Student organizations and student affairs offices pour time and money into campus events, yet roughly 1 in 3 students who RSVP never show up (primary interviews, Babson College staff/student leaders and Connecticut College career services). Schools spend $50-100k/year on event platforms, marketing tools, and staff - Babson alone spends ~$30k/year just on Engage (Campus Groups). Despite the investment, the incumbent tools (Cvent, Mailchimp, Engage) solve registration and bulk email, not attendance.
Who's most affected. Student club leaders and student affairs staff running 5-20 events a year. They have limited time, little marketing training, and fragmented comms across Instagram, WhatsApp, email, and campus portals. Producing one event campaign takes hours of copy-paste-and-rewrite per channel.
Why it matters. Co-curricular engagement correlates with 53.7% higher persistence, with top institutions hitting 94% retention (ModernCampus). Better-attended events are a retention lever, not a nice-to-have. The global event-management software market is projected to grow from $15.5B (2024) → $34.7B (2029) at 17.4% CAGR (MarketsAndMarkets), with education as a fast-rising vertical underserved by existing tools.
What success looks like.
- A leader turns one event description into publish-ready content for 5 channels in under 2 minutes (vs. ~1 hour manually).
- Iteration happens channel-by-channel through natural language, not full rewrites.
- Every shared link is tracked, giving orgs channel-level attribution the incumbents don't surface.
- Students get a personalized feed of campus events matched to their interests, not every RSO's firehose.
How I'd know it worked. (a) Active clubs run campaigns end-to-end without falling back to manual editing, and (b) tracking links show non-trivial click-through on the AI-generated copy. 10+ Babson grad clubs, collectively reaching 700+ graduate students, are currently testing the live prototype.
A leader fills out one form - title, date, audience, tone, perks, RSVP link - and EventBoost produces:
- Five channels of copy in one generation - Instagram caption, WhatsApp broadcast, email subject + body, Campus Engage post - each tuned to channel length and voice (a sketch of the single-call fan-out follows this list).
- AI-generated flyers - either DALL-E background + canvas text overlay, or full AI flyer edits via `gpt-image-1.5`, in Instagram post, story, or general flyer dimensions.
- Context-aware RAG suggestions - every saved asset is embedded (pgvector). On "Suggest with Agent," the system retrieves the top-k similar past posts per channel and injects them as few-shot context, so the voice gets more on-brand over time.
- Per-channel natural-language iteration - "make the Instagram one punchier" regenerates only that channel; the other four stay untouched.
- Trackable short links - every event auto-generates a slug (`go.eventboost.dev/e/{slug}`). The LLM is prompted to use the tracking URL, so click attribution happens for free. Hashed IP + UTM breakdown per org.
- Student mode - the same codebase flips a navbar toggle from Leader → Student, surfacing a personalized feed ingested from the Babson Engage iCal and a chat assistant ("what's happening Friday with free food?").
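As referenced above, a minimal sketch of the single-call fan-out, assuming simplified channel fields and a hypothetical `generate_campaign` helper (the production call sits behind the `llm.py` service):

```python
# Minimal sketch of the single-call, five-channel generation (field and
# function names here are illustrative, not the production service code).
from openai import OpenAI
from pydantic import BaseModel

class ChannelContent(BaseModel):
    instagram: str
    whatsapp: str
    email_subject: str
    email_body: str
    engage: str

client = OpenAI()

def generate_campaign(event_description: str) -> ChannelContent:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},  # strict JSON mode
        messages=[
            {
                "role": "system",
                "content": (
                    "Write event marketing copy. Return a JSON object with "
                    "keys: instagram, whatsapp, email_subject, email_body, "
                    "engage, each tuned to that channel's length and voice. "
                    "Always use the event's tracking URL, never the raw RSVP link."
                ),
            },
            {"role": "user", "content": event_description},
        ],
    )
    # Malformed output raises a ValidationError and is retried upstream,
    # never string-parsed into broken content.
    return ChannelContent.model_validate_json(resp.choices[0].message.content)
```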
Without LLMs this product is a form + a template engine - Cvent/Mailchimp already own that. The moat comes from:
- Channel-aware voice adaptation. The same event needs a different register for Instagram vs. formal email. Templates can't adapt; LLMs can.
- RAG-based brand continuity. Embeddings encode "how this specific club has written before," so generations stay on-brand without human prompt tuning.
- Natural-language edits on individual channels. No non-AI alternative short of manual rewriting.
- Flyer generation + edits. No non-AI alternative short of Canva + hours of work.
| Layer | Model | Why |
|---|---|---|
| Content generation | gpt-4o-mini via Azure OpenAI (fallback: OpenAI) | Strong JSON mode, cheap enough for 5-channel fan-out, handles tone instructions well |
| Embeddings | text-embedding-3-small (1536d) | Cheap, fast, sufficient fidelity for short-form marketing copy |
| Image generation | dall-e-3 for backgrounds, gpt-image-1.5 for flyer edits | DALL-E for clean abstract backgrounds; gpt-image-1.5 for images.edit with format-aware sizing |
- RAG over the org's own history. Every generated asset is embedded and indexed. On suggest/iterate, the agent pulls top-k similar assets per channel and weaves them into the system prompt - implemented in `backend/app/services/agent.py` and `backend/app/services/embeddings.py` (see the sketch after this list).
- Multi-step per-channel refinement. `POST /agent/content/iterate` takes a natural-language instruction for a single channel, loads the current version + similar examples, and regenerates that channel only.
- Flyer-concept agent. `POST /agent/flyer/suggest` recommends format, template, and palette based on similar past events, with a user-visible reasoning string that can be overridden.
- Strict JSON mode + Pydantic parsing. All content generation uses `response_format={"type": "json_object"}` with a Pydantic schema. Malformed output fails hard and is retried rather than string-parsed.
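A minimal sketch of that retrieval step, assuming assets arrive as rows with precomputed `content_embedding` vectors (simplified from what's described for `embeddings.py` and `agent.py`; the helper names are illustrative):

```python
# Sketch of the retrieval step: embed the query, rank the org's saved
# assets by cosine similarity in Python, and weave the top-k into the
# system prompt as few-shot context.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

def top_k_similar(query: str, assets: list[dict], k: int = 3) -> list[dict]:
    """`assets` are rows with 'content' and a precomputed 'content_embedding'."""
    q = embed(query)

    def cosine(v: np.ndarray) -> float:
        return float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))

    ranked = sorted(assets,
                    key=lambda a: cosine(np.array(a["content_embedding"])),
                    reverse=True)
    return ranked[:k]

def build_system_prompt(base: str, examples: list[dict]) -> str:
    # Injecting past posts keeps generations on-brand for this specific org.
    shots = "\n\n".join(f"Past post:\n{e['content']}" for e in examples)
    return f"{base}\n\nExamples of this org's past writing:\n{shots}"
```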
- Cost vs. quality. `gpt-4o-mini` over `gpt-4o`. A five-channel generation costs ~$0.001 instead of ~$0.02. Users regenerate frequently while dialing in tone; the quality delta is imperceptible on short-form copy.
- Latency vs. reliability. All five channels generated in one call with a single JSON object, not five parallel calls. One round-trip, one retry path, consistent voice across channels. Downside: a malformed single field kills the whole response - acceptable given how rare it is with mini + JSON mode.
- pgvector vs. dedicated vector DB. Staying on Supabase's pgvector, with similarity computed in Python for now. At ~hundreds of assets per org this is fine; at 500 campuses I'll move to a tuned ivfflat/HNSW index (the migration already creates an ivfflat index; I'm not yet using it at query time - see the query sketch after this list).
- DALL-E text reliability. Asking DALL-E to render legible text is a dead end - it still garbles letters. The system prompt explicitly forbids text in the background, and text is overlaid client-side via `html2canvas`. Uglier pipeline, reliable output.
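For reference, a sketch of what using that index at query time could look like (illustrative SQL held in a Python constant; `generated_assets`, `content_embedding`, and `org_id` come from the schema described elsewhere in this doc):

```python
# Illustrative query for pushing similarity into Postgres once the ivfflat
# index is used at query time. pgvector's <=> operator is cosine distance,
# and an ORDER BY ... LIMIT over it can use an ivfflat index built with
# vector_cosine_ops instead of a sequential scan.
TOP_K_SQL = """
SELECT id, content,
       content_embedding <=> %(query_vec)s::vector AS cosine_distance
FROM generated_assets
WHERE org_id = %(org_id)s
ORDER BY content_embedding <=> %(query_vec)s::vector
LIMIT %(k)s;
"""
```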
- Tone transfer. One short instruction ("less corporate, more Gen-Z") produces genuinely good results with `gpt-4o-mini`. Users hit iterate 2-3 times max before landing.
- JSON mode. Near-zero parse failures in practice. I budgeted time for repair logic and didn't need to write it.
- DALL-E spatial control. "Leave the bottom third blank for text" is ignored ~40% of the time. The canvas-overlay workaround is the fix.
- Cold-start orgs. New organizations with no prior content get generic output because RAG has nothing to retrieve. Planned mitigation: seed each new org with a small set of exemplar assets at creation time.
Full design doc: docs/DESIGN.md.
- Backend: FastAPI (Python 3.12), Pydantic v2, Supabase (Postgres + Auth + Storage), pgvector, OpenAI / Azure OpenAI.
- Frontend: React 19 + TypeScript, Vite, Tailwind v4, TanStack Query, React Router, html2canvas.
- Infra: Vercel (backend + frontend), Supabase managed Postgres + Storage.
- Layered backend - endpoints → services → db. Every LLM/image call lives behind a service (`llm.py`, `agent.py`, `embeddings.py`, `flyer_generator.py`), so endpoints stay thin and the OpenAI ↔ Azure swap is a config change.
- JWT + RBAC via FastAPI dependencies. `get_current_user → get_current_user_with_role → require_member / require_admin / require_super_admin` composes cleanly; no inline role checks in endpoints (see the sketch after this list).
- Link tracking as a first-class primitive. Every event auto-generates a `tracking_slug` on creation, and the LLM system prompt instructs the model to use the tracking URL rather than the raw RSVP link. That's how the product earns its attribution story without asking users to do anything extra.
- Multi-tenant isolation at the org level. Every query is filtered by `org_id` derived from the authenticated user's memberships. Basic Postgres RLS is in place (`migrations/002_basic_rls.sql`); tightening RLS to full coverage is on the roadmap.
- Two personas, one codebase. `user_mode` on `app_roles` flips the navbar and routes between Leader (event creation) and Student (feed + chat). Keeps the repo unified while the product surface diverges.
- Engage iCal ingest as an idempotent CLI job. Babson's Engage has no webhooks, so `backend/scripts/sync_engage.py` polls the iCal feed, dedupes by `engage_uid → rsvp_link`, and fuzzy-matches orgs by name. Simple, re-runnable, easy to reason about.
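A minimal sketch of how that dependency chain composes, with the JWT verification and membership lookups stubbed out:

```python
# Minimal sketch of the RBAC dependency chain (stubs stand in for the
# real Supabase JWT verification and Postgres membership lookups).
from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()

def get_current_user(authorization: str = Header(...)) -> dict:
    # Real version verifies the Supabase JWT; stubbed here.
    return {"id": "user-1"}

def get_current_user_with_role(user: dict = Depends(get_current_user)) -> dict:
    # Real version loads the org membership + role from Postgres.
    return {**user, "role": "member"}

def require_role(*allowed: str):
    def checker(user: dict = Depends(get_current_user_with_role)) -> dict:
        if user["role"] not in allowed:
            raise HTTPException(status_code=403, detail="Insufficient role")
        return user
    return checker

require_member = require_role("member", "admin", "super_admin")
require_admin = require_role("admin", "super_admin")
require_super_admin = require_role("super_admin")

@app.get("/events")
def list_events(user: dict = Depends(require_member)):
    # Endpoints stay thin: no inline role checks.
    return {"events_for": user["id"]}
```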
- Per-org asset volume stays in the hundreds short-term → in-Python cosine similarity is fine at this scale.
- Leader edits are canonical - manually edited content is what gets re-embedded for future RAG.
- Babson Engage's iCal is the Babson-specific event source; other campuses will need their own adapters.
OpenAI + Azure OpenAI SDKs, Supabase, FastAPI, Pydantic, React, Tailwind, TanStack Query, html2canvas, pgvector, Vercel, icalendar, qrcode, pillow. No proprietary code from any employer. No Anthropic API in the product itself - Claude Code was used as a development tool only (see next section).
AI had a big hand in this project: only ~5-10% of the code was written by hand the traditional way, going back and forth with documentation and making changes as I went. The main risk is that it's easy to rely on AI too much, which can take longer than necessary or produce over-engineered code from the start; I had to catch myself doing this a few times. Below are some specific examples.
- Specific wins: Claude scaffolded the RAG pipeline end-to-end, generating the pgvector migration + embedding service + agent orchestration in one pass; the FastAPI dependency chain for RBAC was another big accelerator; Tailwind + React component scaffolding was near-instant.
- Specific frictions: the initial authentication setup was a bit over-engineered, and the first-pass UI was unintuitive and created friction for onboarding users; it took multiple passes to smooth out. Running out of tokens and having to wait mid-session, just as I was making good progress, was also disruptive; I've learned to budget tokens more carefully as a result.
- Using AI, I spent more time on architecture decisions up front rather than exploratory coding; I was mostly reviewing, revising, or rejecting suggestions rather than writing from scratch, and I validated AI-written code by testing it end-to-end on staging before shipping.
```bash
git clone <repo-url>
cd eventboost

# --- Backend ---
cd backend
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
# Create .env with the variables listed below, then:
python run.py   # or: uvicorn app.main:app --reload

# --- Frontend (second terminal) ---
cd ../frontend
npm install
# Create .env with the VITE_ variables listed below, then:
npm run dev     # http://localhost:5173
```

Apply the migrations in `backend/migrations/` in order against your Supabase project. They enable the pgvector extension and create the `clicks`, `student_profiles`, and `student_actions` tables plus the `generated_assets.content_embedding` column.
Backend `.env` (minimum):

```
SUPABASE_URL=https://xxx.supabase.co
SUPABASE_SERVICE_KEY=...   # admin operations
SUPABASE_ANON_KEY=...      # client auth
OPENAI_API_KEY=sk-...      # or the three AZURE_OPENAI_* equivalents
TRACKING_BASE_URL=http://localhost:8000
FRONTEND_URL=http://localhost:5173
ENVIRONMENT=development
```

Frontend `.env`:

```
VITE_API_URL=http://localhost:8000/api/v1
VITE_SUPABASE_URL=https://xxx.supabase.co
VITE_SUPABASE_ANON_KEY=...
```

Full env reference (including Azure DALL-E and storage bucket setup) in docs/DESIGN.md.
Live app: https://app.eventboost.dev/
Five-minute tour:
- Sign up → get approved into a test org (or create one).
- Hit New Event, fill in title / date / audience / tone / RSVP link.
- On the event detail page, click Suggest with Agent - content for all five channels appears, citing the count of similar past assets used as RAG context.
- On any channel card, click Refine and type an instruction ("make it punchier"). Only that channel regenerates.
- Open the Flyer tab - pick a template, click Generate AI Background, drop in text, export.
- Share the tracking link. Clicks appear under Analytics grouped by UTM source.
Student mode: toggle the navbar from Leader → Student to see the personalized feed ingested from Babson Engage, with an LLM chat panel.
Current state. No automated unit tests yet. I've been iterating fast with live users - 10+ active Babson grad clubs and 20+ users - plus my own end-to-end testing on staging before shipping to the live app. For a product of this maturity I should have written tests already; this is the single biggest technical-debt item and a hard prerequisite for an SSO/FERPA-compliant, production-grade system. As a one-person team, I've spent my time launching the product, testing it with real users, and collecting data and metrics to validate traction.
- Pydantic validation at every endpoint boundary - malformed payloads 422 cleanly before hitting business logic.
- LLM JSON mode + strict schema parsing. Any malformed generation is rejected rather than silently stored as broken content.
- Provider fallback. `LLMService` picks Azure when configured, falls back to OpenAI; an Azure outage doesn't take the product down.
- Graceful flyer-edit degradation. `edit_flyer()` falls back to a full regeneration if `images.edit` fails.
- Image format normalization. DALL-E outputs are converted to PNG RGBA before upload, avoiding mime-type mismatches across browsers.
- Redirect hardening. Tracking links never accept a destination as a query param (no open redirect); client IPs are salted + SHA-256 hashed before storage (see the sketch after this list).
- Auth guardrails. `ProtectedRoute` / `ApprovedRoute` / `AdminRoute` wrappers enforce access at the router level; the backend re-checks on every call.
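A minimal sketch of the redirect + hashing shape described in the redirect-hardening item, assuming a hypothetical `IP_HASH_SALT` env var (not part of the env list above) and stubbed persistence:

```python
# Sketch of the tracking redirect with salted IP hashing (illustrative;
# IP_HASH_SALT is an assumed env var, and persistence is stubbed out).
import hashlib
import os

from fastapi import FastAPI, Request
from fastapi.responses import RedirectResponse

app = FastAPI()
IP_SALT = os.environ.get("IP_HASH_SALT", "dev-salt")

def hash_ip(ip: str) -> str:
    # Salted SHA-256 so raw client IPs are never stored.
    return hashlib.sha256((IP_SALT + ip).encode()).hexdigest()

def lookup_destination(slug: str) -> str:
    # Real version resolves the slug server-side from Postgres; stubbed here.
    # The destination is never taken from a query param (no open redirect).
    return "https://example.com/rsvp"

def record_click(slug: str, ip_hash: str) -> None:
    # Real version inserts into the clicks table with the UTM breakdown.
    pass

@app.get("/e/{slug}")
def follow_tracking_link(slug: str, request: Request):
    record_click(slug, hash_ip(request.client.host))
    return RedirectResponse(lookup_destination(slug))
```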
- LLM returns valid JSON but the wrong channel gets the wrong content → caught by Pydantic field constraints and the consolidated schema.
- DALL-E produces a background with visible garbled text → the canvas-overlay pipeline makes this survivable; mitigation is a retry button.
- Engage iCal sync duplicating an event whose URL changed → the dedup chain (`engage_uid → rsvp_link → insert`) handles the common cases idempotently (sketched below).
- Cold-start org with no RAG context → currently outputs generic copy; acknowledged quality gap.
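A sketch of that dedup chain, with `db` standing in for a thin Postgres wrapper (illustrative; the real logic is in `backend/scripts/sync_engage.py`):

```python
# Sketch of the idempotent dedup chain (engage_uid → rsvp_link → insert).
# `db` is a hypothetical thin wrapper around the events table.
def upsert_event(db, event: dict) -> None:
    # 1. Prefer the stable Engage UID, which survives URL changes.
    existing = db.find_event(engage_uid=event["engage_uid"])
    # 2. Fall back to the RSVP link for events whose UID changed.
    if existing is None:
        existing = db.find_event(rsvp_link=event["rsvp_link"])
    if existing is None:
        # 3. Genuinely new event: insert.
        db.insert_event(event)
    else:
        # Update in place so repeated runs converge (idempotent).
        db.update_event(existing["id"], event)
```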
- Contract tests for every LLM-backed endpoint (mock OpenAI, assert schema compliance + Pydantic round-trip) - one possible shape is sketched after this list.
- Integration tests for the RBAC dependency chain - easy to subtly break.
- A property test on `generate_tracking_slug` to catch collision edge cases.
- A smoke test for the Engage sync against a checked-in fixture iCal.
- Frontend component tests for the Flyer canvas - `html2canvas` output drift is the most likely regression.
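One possible shape for the contract test called out in the first item, with the OpenAI client monkeypatched and the module path assumed:

```python
# Pytest sketch of a contract test for the content-generation service.
# The module path and function name are assumed for illustration.
import json
from unittest.mock import MagicMock

def test_generate_campaign_round_trips_schema(monkeypatch):
    canned = {
        "instagram": "x", "whatsapp": "x",
        "email_subject": "x", "email_body": "x", "engage": "x",
    }
    # Fake the OpenAI client so no network call happens.
    fake = MagicMock()
    fake.chat.completions.create.return_value.choices = [
        MagicMock(message=MagicMock(content=json.dumps(canned)))
    ]
    monkeypatch.setattr("app.services.llm.client", fake)  # assumed path

    from app.services.llm import generate_campaign  # assumed entry point
    result = generate_campaign("Trivia night, Friday 7pm, free pizza")

    # Pydantic round-trip: every channel field present and typed.
    assert set(result.model_dump()) == set(canned)
```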
- SSO + FERPA compliance. Prerequisite for campus contracts in the summer window.
- Real test suite - as above.
- Full-coverage RLS - basic RLS exists; every table should be locked down at the Postgres layer, not just the app layer.
- Scheduled posting integrations - push generated content to Instagram / Slack / Twitter on a cadence rather than copy-paste.
- Attendance feedback loop - close the loop from tracking click → RSVP → actual attendance (QR check-in or campus-card integration) and feed that signal back into RAG context. This is the data moat the executive summary calls out.
- Proper vector index usage at scale. Move cosine similarity into Postgres via the existing ivfflat index once per-org corpora pass ~10k assets.
- Cold-start seeding - generate exemplar assets at org creation so RAG isn't useless on day one.
- Multi-language generation - campus diversity demand is real.
- Template marketplace - leaders share flyer templates across orgs.
- Mobile app - student feed is the obvious first surface.
See also docs/DESIGN.md.
Product direction informed by primary interviews with Babson College staff and student leaders, Connecticut College career services, and 10+ actively testing Babson grad clubs. Market data from MarketsAndMarkets (2024). Retention data from ModernCampus. Campus event ingest targets Babson Engage (Campus Groups); other campuses will require their own adapters.