AI-powered conversation analysis pipeline and visualization dashboard for the Poe AI Chat product.
This project analyzes AI chat conversations to understand user behavior patterns, conversation types, complexity, and topics. It consists of two main components:
- CLI Analysis Pipeline - Processes conversations using LLM-powered analysis
- Web Dashboard - Visualizes analysis results with interactive charts and filtering
## Features

### CLI Analysis Pipeline

- Load and validate conversations from JSON
- LLM-powered conversation classification
- Batch processing for cost efficiency
- Generate comprehensive analysis reports with:
  - Conversation type classification
  - Complexity scoring
  - Topic extraction
  - Summary statistics
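The report fields above map onto a per-conversation record. The project keeps its real models in `app/models.py` (Pydantic, per the project layout); the following is a dependency-free dataclass sketch of the implied shape, with field names taken from the documented output format:

```python
# Sketch of the per-conversation analysis record implied by the report
# fields above. The project's real models live in app/models.py (Pydantic);
# a plain dataclass is used here to stay dependency-free.
from dataclasses import dataclass, field

@dataclass
class ConversationAnalysis:
    conversation_id: str
    user_id: str
    message_count: int
    conversation_type: str               # e.g. "Q&A", "Tutorial"
    complexity_score: str                # e.g. "Simple", "Medium"
    primary_topics: list[str] = field(default_factory=list)
    duration_seconds: int = 0
```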
### Web Dashboard

- FastAPI backend with RESTful API endpoints
- Interactive visualization dashboard
- Real-time data filtering and search
- Chart.js visualizations:
  - Pie chart for conversation type distribution
  - Bar chart for complexity distribution
  - Horizontal bar chart for top topics
- Sortable data table with all conversation details
## Prerequisites

- Python 3.11 or higher
- uv package manager
- Poe API key (from https://poe.com)
## Installation

1. Clone the repository and enter it:

   ```bash
   cd demo
   ```

2. Install dependencies:

   ```bash
   uv sync
   ```

3. Configure the environment:

   ```bash
   cp .env.example .env
   # Edit .env and add your POE_API_KEY
   ```

## Usage

### Run the Analysis Pipeline

```bash
uv run python -m app.analyze
```

- Input: `conversations.json` (should be in the project root)
- Output: `analysis_results.json` with complete analysis
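The pipeline's control flow is load, batch, analyze, write. A minimal sketch, where `analyze_batch` is a stand-in for the real LLM analyzer and the function signature is illustrative:

```python
# Minimal sketch of the analyze step's control flow: load conversations,
# process them in batches, collect results. `analyze_batch` is a stand-in
# for the real analyzer in app/processors/llm_analyzer.py.
import json
from pathlib import Path
from typing import Callable

def run_pipeline(input_path: str,
                 analyze_batch: Callable[[list], list],
                 batch_size: int = 10) -> list:
    conversations = json.loads(Path(input_path).read_text())
    total_batches = -(-len(conversations) // batch_size)  # ceiling division
    results = []
    for i in range(0, len(conversations), batch_size):
        print(f"Analyzing batch {i // batch_size + 1}/{total_batches}...")
        results.extend(analyze_batch(conversations[i:i + batch_size]))
    return results
```

With 60 conversations and the default `BATCH_SIZE=10`, this produces six batches, matching the log output.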
Expected output:

```text
Loading configuration...
Loading conversations from conversations.json...
Loaded 60 conversations
Initializing LLM analyzer (model: Claude-Sonnet-4.5)...
Processing 60 conversations...
Analyzing batch 1/6...
Analyzing batch 2/6...
...
Analyzing batch 6/6...
Analysis complete! Results saved to analysis_results.json
Total conversations analyzed: 60
Conversation types: {'Q&A': 25, 'Tutorial': 15, ...}
Average messages per conversation: 4.5
```
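A quick way to sanity-check a finished run is to recap the generated file. This helper assumes the documented `summary_stats` keys from the output format:

```python
# Quick recap of a generated analysis_results.json. Assumes the documented
# schema: top-level total_conversations plus summary_stats containing
# conversation_types and avg_messages_per_conversation.
import json
from pathlib import Path

def summarize(data: dict) -> str:
    stats = data["summary_stats"]
    types = ", ".join(f"{k}: {v}" for k, v in stats["conversation_types"].items())
    return (f"{data['total_conversations']} conversations ({types}); "
            f"avg {stats['avg_messages_per_conversation']} messages each")

def summarize_file(path: str = "analysis_results.json") -> str:
    return summarize(json.loads(Path(path).read_text()))
```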
### Run the Web Dashboard

```bash
uv run uvicorn app.main:app --reload
```

Then open your browser to: http://localhost:8000
Features:
- 📊 Interactive visualizations (pie charts, bar charts)
- 🔍 Real-time search and filtering
- 📋 Sortable conversation table
- 📈 Summary statistics cards
API Endpoints:

- `GET /` - Dashboard UI
- `GET /api/analytics/summary` - Summary statistics
- `GET /api/analytics/conversations` - All conversation analyses
- `GET /api/analytics/info` - Analysis metadata
- `GET /health` - Health check
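The endpoints can be exercised from scripts as well as the browser. A stdlib-only client sketch against a locally running server, where the summary payload is assumed (not guaranteed) to mirror the `summary_stats` shape of the output file:

```python
# Stdlib-only client sketch for the analytics API, assuming the server is
# running locally on port 8000. The summary payload is assumed to mirror
# the summary_stats shape from analysis_results.json.
import json
from urllib.request import urlopen

BASE_URL = "http://localhost:8000"

def fetch_json(path: str, base: str = BASE_URL) -> dict:
    """GET a JSON endpoint and decode the body."""
    with urlopen(f"{base}{path}") as resp:
        return json.load(resp)

def most_common_type(summary: dict) -> str:
    """Pick the dominant conversation type out of a summary payload."""
    types = summary["conversation_types"]
    return max(types, key=types.get)

# Usage (with the dashboard running):
#   print(most_common_type(fetch_json("/api/analytics/summary")))
```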
## Project Structure

```
demo/
├── app/
│   ├── __init__.py
│   ├── main.py                    # FastAPI application
│   ├── analyze.py                 # Main CLI script
│   ├── config.py                  # Configuration management
│   ├── models.py                  # Pydantic data models
│   ├── processors/
│   │   ├── __init__.py
│   │   ├── conversation_loader.py # JSON parsing
│   │   ├── llm_analyzer.py        # LLM-powered analysis
│   │   └── result_writer.py       # Output generation
│   ├── routers/
│   │   ├── __init__.py
│   │   └── analytics.py           # Analytics API endpoints
│   └── static/
│       ├── index.html             # Dashboard UI
│       ├── styles.css             # Styling
│       └── dashboard.js           # Visualization logic
├── conversations.json             # Input data
├── analysis_results.json          # Generated output
├── pyproject.toml                 # Project configuration
├── .env.example                   # Environment template
└── README.md
```
## Configuration

Configuration is managed through environment variables in `.env`:

```bash
# Required
POE_API_KEY=your-poe-api-key-here
POE_BASE_URL=https://api.poe.com/v1
POE_MODEL=Claude-Sonnet-4.5

# Optional
BATCH_SIZE=10  # Conversations per batch
```

## Output Format

The generated `analysis_results.json` contains:

```json
{
  "total_conversations": 60,
  "analysis_timestamp": "2025-01-15T10:30:00Z",
  "summary_stats": {
    "conversation_types": {"Q&A": 25, "Tutorial": 15, ...},
    "complexity_distribution": {"Simple": 20, "Medium": 30, ...},
    "top_topics": [["technology", 35], ["career", 28], ...],
    "avg_messages_per_conversation": 4.5
  },
  "conversations": [
    {
      "conversation_id": "...",
      "user_id": "...",
      "message_count": 3,
      "conversation_type": "Q&A",
      "complexity_score": "Simple",
      "primary_topics": ["technology", "advice"],
      "duration_seconds": 194
    }
  ]
}
```

## Technical Notes

- OpenAI Library: Uses the official `openai` Python library
- Poe Compatibility: Poe provides an OpenAI-compatible API at `https://api.poe.com/v1`
- Model: Claude Sonnet 4.5 via the Poe platform
- Batch Processing: Groups conversations to minimize API overhead
- Efficient Prompts: Structured JSON responses reduce token usage
- Error Handling: Robust exception handling with detailed error messages
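The integration described above can be sketched with the official client pointed at Poe's endpoint. `build_messages` and `analyze` are illustrative names, and the exact prompt used by `llm_analyzer.py` is not shown in this README:

```python
# Sketch of the Poe integration: the official `openai` client pointed at
# Poe's OpenAI-compatible endpoint. build_messages/analyze are illustrative
# names; the real prompt in app/processors/llm_analyzer.py may differ.
import os

def build_messages(conversation_text: str) -> list[dict]:
    """Structured-JSON prompt asking for the documented analysis fields."""
    system = ("Classify the conversation. Respond with JSON containing "
              "conversation_type, complexity_score, and primary_topics.")
    return [{"role": "system", "content": system},
            {"role": "user", "content": conversation_text}]

def analyze(conversation_text: str) -> str:
    # Imported lazily so the prompt helper works without the SDK installed.
    from openai import OpenAI
    client = OpenAI(api_key=os.environ["POE_API_KEY"],
                    base_url=os.environ.get("POE_BASE_URL", "https://api.poe.com/v1"))
    resp = client.chat.completions.create(
        model=os.environ.get("POE_MODEL", "Claude-Sonnet-4.5"),
        messages=build_messages(conversation_text),
    )
    return resp.choices[0].message.content
```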
## Development

```bash
# Run tests
uv run pytest tests/

# Linting
uv run pylint app/

# Type checking
uv run mypy app/

# Formatting
uv run black app/
```

## Roadmap

- Phase 1: CLI Analysis Pipeline
  - Data models
  - Conversation loader
  - LLM analyzer
  - Result writer
  - Configuration management
  - Main CLI script
- Phase 2: Web Dashboard
  - FastAPI backend
  - Analytics API endpoints
  - Frontend visualization
  - Chart.js integration
  - Real-time filtering and search
  - Responsive design
## License

MIT

## Support

For issues or questions, please open an issue on GitHub.