An intelligent webhook processing pipeline that classifies incoming events with Claude AI and routes them to the right destination — automatically.
Receives webhooks from any source, classifies them using Anthropic Claude (with a rule-based fallback), and dispatches them to configurable destinations like Slack, HTTP endpoints, or console logs.
```
Stripe / GitHub / Shopify ──► webhook-ai-router ──► Slack, HTTP, Console
                              (classifies with AI,
                               matches rules, retries)
```
Modern teams don't lack notifications — they drown in them. Payment providers, source control, e-commerce platforms, and internal services all fire webhooks constantly, but most arrive as raw JSON with no context, no priority, and no routing logic beyond whatever brittle if/else chain someone hardcoded last quarter.
webhook-ai-router replaces that with an AI-powered classification layer. Instead of maintaining keyword maps for every source, Claude reads the payload and determines the category, priority, and confidence — then routing rules dispatch it automatically. The result is cleaner operational signal, faster reactions, and less manual triage. A rule-based fallback runs when no API key is configured, so the project works out of the box without any external dependencies.
```mermaid
flowchart LR
    A["POST /webhooks/:source"] --> B["IngestionService<br/>enqueue"]
    B --> C[(In-memory queue)]
    C --> D["ProcessorService<br/>poll every 500 ms"]
    D --> E{ClassificationService}
    E -->|ANTHROPIC_API_KEY set| F["LlmClassifier<br/>Claude API"]
    E -->|no API key| G["RuleBasedClassifier<br/>keyword match"]
    F --> H["ClassificationResult<br/>category · priority · confidence"]
    G --> H
    H --> I["RoutingService<br/>match rules"]
    I --> J[ConsoleDestination]
    I --> K["WebhookForwarder<br/>HTTP POST"]
    I --> L["SlackDestination<br/>Block Kit"]
```
Pipeline: Ingest → Classify (LLM or rules) → Route → Retry (up to 3× with exponential backoff) → Dead-letter on exhaustion.
Each stage is behind an interface, so swapping the classifier, queue, or destination requires changing one implementation — not rewiring the pipeline.
Design patterns in use: Strategy pattern for classifiers and destinations, factory pattern for destination creation, repository pattern for rule storage, and dependency inversion through domain interfaces. The architecture is intentionally modular so the system extends without rewriting the core workflow.
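Those seams might look roughly like the following sketch. This is illustrative only: beyond the strategy-pattern idea stated above, the type names (`IClassifier`), method shapes, and `selectClassifier` helper are assumptions, not the project's actual code.

```typescript
// Illustrative sketch of the classifier seam. Only the strategy-pattern idea
// comes from the README; these exact names and shapes are assumptions.
interface ClassificationResult {
  category: string;
  priority: "low" | "medium" | "high" | "critical";
  confidence: number; // 0..1
}

interface IClassifier {
  classify(source: string, payload: unknown): Promise<ClassificationResult>;
}

// Strategy pattern: the classifier is chosen once at startup; the rest of
// the pipeline depends only on the IClassifier interface.
function selectClassifier(
  llm: IClassifier,
  ruleBased: IClassifier,
  anthropicApiKey?: string,
): IClassifier {
  return anthropicApiKey ? llm : ruleBased;
}
```

Because callers only ever see `IClassifier`, swapping in a different model provider (or a cached classifier) touches one implementation, not the pipeline.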
- **AI-powered classification** — Claude analyzes each payload and returns a structured category, priority level, and confidence score. Payloads are truncated to 2,000 chars and sensitive headers are redacted before reaching the API. On API errors, the classifier degrades gracefully to a safe default — never blocks the pipeline.
- **Rule-based fallback** — Keyword matching against serialized payloads. Runs automatically when no Anthropic API key is configured, so anyone can run the project without external dependencies.
- **Configurable routing** — Rules live in a JSON file and are loaded per request (no restart needed). Each rule filters by source, category, and minimum priority, then dispatches to console, HTTP webhook, or Slack (Block Kit formatted).
- **Retry with backoff** — Failed deliveries retry up to 3 times with exponential backoff. Exhausted events are dead-lettered and logged.
- **Production infrastructure** — API key auth, per-IP rate limiting (100 req / 60 s), health endpoint, processing stats, structured Pino logging, and Docker support.
- **236 unit tests** — Full coverage across all layers, Anthropic API fully mocked.
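The retry-with-backoff behavior can be sketched as follows. The 3-attempt limit, exponential backoff, and dead-lettering match the description above; the base delay, function names, and signatures are illustrative assumptions.

```typescript
// Sketch of retry with exponential backoff and dead-lettering on exhaustion.
// maxAttempts = 3 matches the README; baseDelayMs = 100 is an assumed value.
async function deliverWithRetry(
  deliver: () => Promise<void>,
  deadLetter: (err: unknown) => void,
  maxAttempts = 3,
  baseDelayMs = 100,
): Promise<boolean> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await deliver();
      return true; // delivered successfully
    } catch (err) {
      if (attempt === maxAttempts) {
        deadLetter(err); // retries exhausted: dead-letter the event
        return false;
      }
      // Exponential backoff between attempts: 100 ms, 200 ms, 400 ms, ...
      const delay = baseDelayMs * 2 ** (attempt - 1);
      await new Promise<void>((resolve) => setTimeout(resolve, delay));
    }
  }
  return false;
}
```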
- **Payment failure alerting** — Stripe webhook fires a `payment_intent.payment_failed` event. Claude classifies it as `payment_failure` / high priority. A routing rule dispatches it to a Slack channel within seconds — no manual mapping of Stripe event types required.
- **Security incident escalation** — GitHub or a custom source sends a webhook mentioning unauthorized access. Claude flags it as `security_alert` / critical. The router forwards it to both a Slack channel and an incident response HTTP endpoint simultaneously.
- **Multi-source event triage** — A team receives webhooks from Stripe, GitHub, and Shopify. Instead of building three separate integrations, all events flow through one pipeline where AI handles classification and rules handle routing.
- **Prototype for event-driven AI** — Use as a starting point for any system that needs intelligent event processing — support ticket routing, notification prioritization, alert deduplication.
- **Alert fatigue reduction** — A team is getting hundreds of webhook notifications daily across multiple tools. Instead of forwarding everything to Slack, the router filters by priority and category — only high-value events reach humans, the rest are logged for audit.
Works with or without Docker. Works with or without an Anthropic API key.
```shell
git clone https://github.com/vveresh/webhook-ai-router.git
cd webhook-ai-router
cp .env.example .env
# Set API_KEY (required). Optionally set ANTHROPIC_API_KEY for AI classification.

# Without Docker
pnpm install
pnpm dev

# With Docker
docker-compose up -d
```

Send a test webhook:
```shell
curl -X POST http://localhost:3200/webhooks/stripe \
  -H "X-API-Key: your-api-key" \
  -H "Content-Type: application/json" \
  -d '{"type":"payment_intent.payment_failed","data":{"amount":2500}}'
```

Response:
```json
{
  "data": { "eventId": "550e8400-...", "status": "queued" },
  "error": null
}
```

| Endpoint | Auth | Description |
|---|---|---|
| `POST /webhooks/:source` | API key | Accepts and queues a webhook payload. Returns `202`. |
| `GET /health` | None | Liveness probe — queue size, uptime |
| `GET /stats` | API key | Processing counters — processed, failed, dead-lettered |
Supported sources: `stripe`, `github`, `shopify`, `generic`. Unknown sources normalize to `generic`.
The service selects a classifier automatically at startup.
- **LLM classifier** — Active when `ANTHROPIC_API_KEY` is set. Sends a truncated, redacted payload to Claude and receives structured classification. Falls back to `GENERAL` / `MEDIUM` / `confidence=0` on API errors.
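That graceful-degradation behavior can be sketched like this. The safe-default values come from the description above; the wrapper shape and names are illustrative assumptions.

```typescript
// Sketch of graceful degradation around the LLM call: any API error yields
// a safe default instead of failing the pipeline. The default values match
// the README; function and type names are assumptions.
type Classification = { category: string; priority: string; confidence: number };

const SAFE_DEFAULT: Classification = { category: "general", priority: "medium", confidence: 0 };

async function classifySafely(
  callClaude: (payload: string) => Promise<Classification>,
  payload: string,
): Promise<Classification> {
  try {
    return await callClaude(payload);
  } catch {
    return SAFE_DEFAULT; // never block the pipeline on API errors
  }
}
```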
- **Rule-based classifier** — Active when no API key is set. Keyword matching with sensible defaults:
| Keywords | Category | Priority |
|---|---|---|
| fail, error, declined | `payment_failure` | high |
| breach, unauthorized, security | `security_alert` | critical |
| success, paid, completed | `payment_success` | low |
| signup, register, created | `new_signup` | medium |
| deploy, release, build | `deployment` | low |
| (no match) | `general` | medium |
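The keyword table above can be implemented with a few lines of matching over the serialized payload. This is a sketch: the keywords, categories, and priorities come from the table, while the rule precedence (security checked before payments) and all names are assumptions.

```typescript
// Sketch of the rule-based fallback: serialize the payload and return the
// first keyword rule that matches. Precedence between overlapping rules is
// an assumption; keywords/categories/priorities come from the table above.
type RuleResult = { category: string; priority: string };

const KEYWORD_RULES: Array<{ keywords: string[] } & RuleResult> = [
  { keywords: ["breach", "unauthorized", "security"], category: "security_alert", priority: "critical" },
  { keywords: ["fail", "error", "declined"], category: "payment_failure", priority: "high" },
  { keywords: ["success", "paid", "completed"], category: "payment_success", priority: "low" },
  { keywords: ["signup", "register", "created"], category: "new_signup", priority: "medium" },
  { keywords: ["deploy", "release", "build"], category: "deployment", priority: "low" },
];

function classifyByRules(payload: unknown): RuleResult {
  const text = JSON.stringify(payload).toLowerCase(); // serialized payload
  for (const rule of KEYWORD_RULES) {
    if (rule.keywords.some((k) => text.includes(k))) {
      return { category: rule.category, priority: rule.priority };
    }
  }
  return { category: "general", priority: "medium" }; // no match
}
```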
Rules live in `data/rules.json` and reload on every request — no restart needed.
```json
{
  "id": "rule-001",
  "name": "Critical payments to Slack",
  "sourceFilter": "stripe",
  "categoryFilter": "payment_failure",
  "priorityThreshold": "high",
  "destinationType": "slack",
  "destinationConfig": {
    "url": "https://hooks.slack.com/services/T.../B.../xxx",
    "channel": "#payment-alerts"
  },
  "isEnabled": true
}
```

Priority thresholds work as minimum filters: `low` → `medium` → `high` → `critical`. A rule set to `high` matches events classified as `high` or `critical`.
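The minimum-filter semantics reduce to an ordered comparison, sketched below. The ordering comes from the text above; the function name is an assumption.

```typescript
// Sketch of the minimum-priority filter: a threshold matches any event
// priority at or above it in the ordering low < medium < high < critical.
const PRIORITY_ORDER = ["low", "medium", "high", "critical"] as const;
type Priority = (typeof PRIORITY_ORDER)[number];

function meetsThreshold(eventPriority: Priority, threshold: Priority): boolean {
  return PRIORITY_ORDER.indexOf(eventPriority) >= PRIORITY_ORDER.indexOf(threshold);
}
```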
Three destination types are built in: `console`, `webhook` (HTTP POST), and `slack` (Block Kit). Adding a new destination means implementing the `IDestination` interface and registering it in the factory — no pipeline changes required.
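A registration-based factory for that extension point might look like this sketch. Only the `IDestination` name and the factory-pattern idea come from the README; the method shape, registry, and helper names are assumptions.

```typescript
// Sketch of the destination factory: implementations register by type name,
// and the pipeline resolves destinations by name only. Shapes are assumed.
interface IDestination {
  deliver(payload: unknown): Promise<void>;
}

type DestinationFactory = (config: Record<string, unknown>) => IDestination;

const registry = new Map<string, DestinationFactory>();

function registerDestination(type: string, factory: DestinationFactory): void {
  registry.set(type, factory);
}

function createDestination(type: string, config: Record<string, unknown>): IDestination {
  const factory = registry.get(type);
  if (!factory) throw new Error(`Unknown destination type: ${type}`);
  return factory(config);
}

// Example built-in: a console destination.
registerDestination("console", () => ({
  deliver: async (payload) => console.log("[console destination]", JSON.stringify(payload)),
}));
```

A new destination (say, PagerDuty) would be one `registerDestination` call plus one `IDestination` implementation.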
| Variable | Required | Default | Description |
|---|---|---|---|
| `API_KEY` | yes | — | Secret for `X-API-Key` header |
| `ANTHROPIC_API_KEY` | no | — | Enables LLM classifier; omit for rule-based fallback |
| `PORT` | no | `3200` | HTTP port |
| `LOG_LEVEL` | no | `info` | Pino level: trace / debug / info / warn / error / fatal |
| `NODE_ENV` | no | `development` | development / production / test |
| `RULES_FILE_PATH` | no | `./data/rules.json` | Path to routing rules JSON |
| Layer | What It Does |
|---|---|
| Authentication | API key validated on all mutating/sensitive endpoints |
| Rate limiting | 100 requests / 60s per IP |
| Header redaction | Sensitive headers (authorization, cookie, token, signature) stripped before LLM classification |
| Payload truncation | Payloads capped at 2,000 chars before reaching Claude |
| Graceful degradation | LLM errors never block the pipeline — falls back to safe defaults |
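The redaction and truncation steps can be sketched as below. The header names and the 2,000-char cap come from the table above; the exact matching logic (substring match on header names) and function names are assumptions.

```typescript
// Sketch of pre-LLM sanitization: redact sensitive headers and cap the
// serialized payload at 2,000 chars. Header list and cap come from the
// README; the substring-match behavior is an assumption.
const SENSITIVE_HEADERS = ["authorization", "cookie", "token", "signature"];
const MAX_PAYLOAD_CHARS = 2000;

function redactHeaders(headers: Record<string, string>): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [name, value] of Object.entries(headers)) {
    const sensitive = SENSITIVE_HEADERS.some((s) => name.toLowerCase().includes(s));
    out[name] = sensitive ? "[REDACTED]" : value;
  }
  return out;
}

function truncatePayload(payload: unknown): string {
  return JSON.stringify(payload).slice(0, MAX_PAYLOAD_CHARS);
}
```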
```shell
pnpm test           # run all 236 tests
pnpm test:watch     # watch mode
pnpm test:coverage  # coverage report
```

All Anthropic API calls are mocked — the test suite runs without any external dependencies.
| Status | Item |
|---|---|
| Known | In-memory queue — events lost on restart. Replace with a Redis-backed `IEventQueue` |
| Known | Dead-lettered events are only logged — no store or alerting |
| Known | JSON rule file has no file locking for concurrent writes |
| Known | No webhook signature verification (Stripe HMAC, GitHub X-Hub-Signature-256) |
| Known | LLM classifier has no caching — identical payloads re-classified every time |
| Planned | Persistent queue with Redis or SQS |
| Planned | Webhook signature verification per source |
| Planned | Classification caching layer |
| Planned | Dead-letter store with retry/replay |
| Planned | Admin API for rule management |
| Planned | Database-backed rule storage |
| Planned | Observability and audit history |
| Planned | Tenant / workspace isolation |
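The persistence limitations above hinge on the `IEventQueue` seam. A sketch of what that contract and the current in-memory behavior might look like (only the `IEventQueue` name comes from the README; the method shapes are assumptions):

```typescript
// Sketch of the queue seam: the pipeline depends only on this contract, so a
// Redis- or SQS-backed implementation could replace the in-memory one without
// pipeline changes. Method shapes are assumed, not the project's actual code.
interface IEventQueue<T> {
  enqueue(item: T): Promise<void>;
  dequeue(): Promise<T | undefined>;
  size(): Promise<number>;
}

// Current behavior per the README: items held in process memory, lost on restart.
class InMemoryEventQueue<T> implements IEventQueue<T> {
  private items: T[] = [];
  async enqueue(item: T): Promise<void> {
    this.items.push(item);
  }
  async dequeue(): Promise<T | undefined> {
    return this.items.shift(); // FIFO
  }
  async size(): Promise<number> {
    return this.items.length;
  }
}
```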
TypeScript · Node.js · Express · Anthropic Claude API · Pino · Docker · Vitest
Copyright 2026 Viktor Veresh
This product is part of the AI Ops Platform project. https://github.com/viktor-veresh-dev
Licensed under the Apache License, Version 2.0.