A powerful tool to test how well AI agents can navigate and interact with your website. Get detailed performance metrics, execution times, and actionable insights to improve your website's AI-friendliness.
DEMO VIDEO: https://youtu.be/fbh4esnyE4Y
- 🚀 Lightning Fast: Get benchmark results in seconds with optimized AI agent testing
- 📊 Detailed Analytics: Comprehensive metrics including execution time, success rates, and error logs
- 📱 Modern UI: Beautiful, responsive design built with Tailwind CSS
- 🔒 Secure Authentication: User accounts and data protection with Supabase
- 📸 Screenshot Capture: Visual evidence of AI agent interactions
- 🌐 Cross-Platform: Works with any website URL
- 📈 Progress Tracking: Historical data and performance trends
- Frontend: Next.js 14, TypeScript, Tailwind CSS
- Backend: FastAPI (Python) with the real BrowserUse library
- Database: Supabase (PostgreSQL)
- Authentication: Supabase Auth
- Storage: Supabase Storage (for screenshots)
- AI/Automation: BrowserUse with LangChain & Playwright
- Node.js 18+ and npm
- Python 3.11+ (for BrowserUse)
- A Supabase project
- An OpenAI or Anthropic API key (for BrowserUse)
```bash
git clone
cd benchmark-my-website
npm install
npx playwright install
```
Create a `.env.local` file in the root directory:
```env
NEXT_PUBLIC_SUPABASE_URL=your_supabase_project_url
NEXT_PUBLIC_SUPABASE_ANON_KEY=your_supabase_anon_key
SUPABASE_SERVICE_ROLE_KEY=your_supabase_service_role_key
OPENAI_API_KEY=your_openai_api_key
NEXT_PUBLIC_APP_URL=http://localhost:3000
```
- Go to your Supabase project dashboard
- Navigate to the SQL Editor
- Run the SQL commands from `supabase-schema.sql` to set up the database schema
```bash
cd python-backend
pip install -r requirements.txt
playwright install chromium --with-deps --no-shell
```
Create a `.env` file in the `python-backend` directory with your API keys.
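For example (the variable names below are assumptions; use whichever key your LLM provider requires):

```env
OPENAI_API_KEY=your_openai_api_key
# or, if you use Anthropic instead:
# ANTHROPIC_API_KEY=your_anthropic_api_key
```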
Terminal 1 - Python Backend:

```bash
cd python-backend
python main.py
```
Terminal 2 - NextJS Frontend:

```bash
npm run dev
```
Visit http://localhost:3000 to see the application. The Python API runs on http://localhost:8000.
- Sign Up/Sign In: Create an account or log in to get started
- Enter Website URL: Input the website you want to test
- Describe the Task: Tell the AI what it should try to accomplish
- Run the Test: Click "Run Benchmark Test" and wait for results
- View Results: See detailed metrics, screenshots, and logs
- Navigation Tests: "Find the contact page", "Navigate to pricing"
- Search Functionality: "Search for a specific product"
- Form Interactions: "Fill out the contact form"
- Account Operations: "Find login page", "Locate account settings"
- E-commerce: "Add item to cart", "Find checkout process"
The AI agent behavior can be customized by modifying the `BrowserUseService` class in `src/lib/browser-use.ts`. You can:
- Add new task patterns
- Modify element selectors
- Adjust timeout values
- Customize success criteria
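As a rough sketch of how such customization might look (the actual `BrowserUseService` API in `src/lib/browser-use.ts` may expose different fields; this options shape is hypothetical):

```typescript
// Hypothetical options for a benchmark run; field names are illustrative,
// not taken from the real BrowserUseService implementation.
interface BenchmarkOptions {
  timeoutMs: number;         // hard limit for the whole task
  maxSteps: number;          // agent actions allowed before giving up
  successKeywords: string[]; // page text that signals task success
}

const defaultOptions: BenchmarkOptions = {
  timeoutMs: 60_000,
  maxSteps: 25,
  successKeywords: ["thank you", "success"],
};

// Merge per-test overrides over the defaults.
function withOverrides(overrides: Partial<BenchmarkOptions>): BenchmarkOptions {
  return { ...defaultOptions, ...overrides };
}
```

A pattern like this keeps the defaults in one place while letting individual tests raise the timeout for slow sites or tighten success criteria.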
The application uses the following main tables:
- `profiles`: User profile information
- `benchmarks`: Test results and metrics
- `benchmark-screenshots` (storage): Screenshot images
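For orientation, a `benchmarks` row might be typed roughly as below. The column names are assumptions inferred from the fields the API returns; the authoritative definitions live in `supabase-schema.sql`:

```typescript
// Assumed shape of a `benchmarks` row; check supabase-schema.sql for
// the actual columns.
interface BenchmarkRow {
  id: string;
  user_id: string;
  website_url: string;
  task_description: string;
  success: boolean;
  execution_time_ms: number;
  error_message: string | null;
  screenshot_url: string | null;
  created_at: string; // ISO 8601 timestamp
}

// Map a snake_case database row to the camelCase shape the API returns.
function toApiResult(row: BenchmarkRow) {
  return {
    id: row.id,
    success: row.success,
    executionTimeMs: row.execution_time_ms,
    screenshotUrl: row.screenshot_url,
    createdAt: row.created_at,
  };
}
```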
- Success Rate: Whether the AI agent completed the task
- Execution Time: Total time from start to completion
- Error Messages: Detailed error information if the test failed
- Browser Logs: Console logs, network errors, and other browser events
- Screenshots: Visual evidence of the final state
- ✅ Success + Fast: Your website is AI-friendly
- ⚠️ Success + Slow: Task completed but could be optimized
- ❌ Failure + Error: Specific issues need to be addressed
- ⏱️ Timeout: Website may be too complex or slow
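The four outcomes above can be captured in a small classifier. The 10-second "fast" threshold and 60-second timeout here are illustrative defaults, not values taken from the app:

```typescript
type Verdict = "ai-friendly" | "needs-optimization" | "has-errors" | "timed-out";

// Classify a benchmark run by outcome and duration.
// Thresholds are illustrative, not the app's actual cutoffs.
function classifyResult(
  success: boolean,
  executionTimeMs: number,
  fastThresholdMs = 10_000,
  timeoutMs = 60_000,
): Verdict {
  if (success) {
    return executionTimeMs <= fastThresholdMs
      ? "ai-friendly"
      : "needs-optimization";
  }
  return executionTimeMs >= timeoutMs ? "timed-out" : "has-errors";
}
```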
- Push your code to GitHub
- Connect your repository to Vercel
- Add environment variables in Vercel dashboard
- Deploy!
```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
# Install all dependencies: devDependencies are needed for `next build`
RUN npm ci
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]
```
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Commit your changes: `git commit -m 'Add amazing feature'`
- Push to the branch: `git push origin feature/amazing-feature`
- Open a Pull Request
Run a new benchmark test.
Request Body:

```json
{
  "websiteUrl": "https://example.com",
  "taskDescription": "Find the contact form",
  "userId": "user-uuid"
}
```
Response:

```json
{
  "success": true,
  "data": {
    "id": "benchmark-uuid",
    "success": true,
    "executionTimeMs": 5432,
    "screenshotUrl": "https://...",
    "createdAt": "2024-01-01T00:00:00.000Z"
  }
}
```
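A client can validate inputs before posting to this endpoint. The field names below follow the example request above; the validation rules (a parseable http/https URL, a non-empty task) are our own assumptions, not checks the API documents:

```typescript
// Build and sanity-check a request body for the benchmark endpoint.
interface BenchmarkRequest {
  websiteUrl: string;
  taskDescription: string;
  userId: string;
}

function buildBenchmarkRequest(
  websiteUrl: string,
  taskDescription: string,
  userId: string,
): BenchmarkRequest {
  const url = new URL(websiteUrl); // throws on malformed URLs
  if (url.protocol !== "http:" && url.protocol !== "https:") {
    throw new Error("websiteUrl must use http or https");
  }
  if (!taskDescription.trim()) {
    throw new Error("taskDescription must not be empty");
  }
  return { websiteUrl, taskDescription, userId };
}
```

The returned object can then be passed as the JSON body of the POST request.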
Fetch benchmark history for a user.
- Playwright Installation: Run `npx playwright install` if you see browser errors
- Environment Variables: Double-check all required env vars are set
- Supabase Setup: Ensure RLS policies are properly configured
- Port Conflicts: Change the port if 3000 is already in use
- Some websites may block automated browsers
- Consider using a different browser engine (Chromium, Firefox, WebKit)
- Adjust timeout values for slow websites
This project is licensed under the MIT License - see the LICENSE file for details.
- BrowserUse for AI browser automation
- Supabase for backend services
- Playwright for browser automation
- Next.js for the amazing framework
- Tailwind CSS for beautiful styling
If you have any questions or need help:
- Check the FAQ
- Search existing GitHub Issues
- Create a new issue if needed
- Join our Discord Community
Made with ❤️ for better AI-website compatibility