A blazingly fast, GPU-accelerated native LLM chat client built with GPUI — the same rendering engine that powers Zed.
Features • Installation • Configuration • Development • License
- GPU-Accelerated Rendering - Leverages Metal/Vulkan for a buttery-smooth 120 fps UI with minimal CPU usage
- Streaming Responses - Real-time streaming output from LLM providers
- Multi-Provider Support - Connect to various LLM providers:
- Anthropic (Claude)
- OpenAI (GPT-4, GPT-4o)
- Azure OpenAI
- DeepSeek
- Google AI (Gemini)
- Groq
- Mistral
- Ollama (Local)
- OpenRouter
- Modern UI - Clean, macOS-native interface with smooth animations
- Collapsible Sidebar - Chat history sidebar with animated toggle
- Persistent Settings - Configuration saved locally
- i18n Support - Internationalization-ready, with locale-aware UI strings
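Provider credentials and preferences are persisted in a local settings file. The exact schema is not shown in this README; as a purely hypothetical illustration of what a multi-provider configuration could look like (all field names and values are invented, not the project's actual format):

```json
{
  "active_provider": "anthropic",
  "providers": {
    "anthropic": { "api_key": "sk-ant-...", "model": "claude-3-5-sonnet" },
    "openai":    { "api_key": "sk-...",     "model": "gpt-4o" },
    "ollama":    { "base_url": "http://localhost:11434", "model": "llama3" }
  },
  "locale": "en"
}
```

Local providers such as Ollama typically need only a base URL rather than an API key, which is why the entries differ in shape.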
- GPUI - GPU-accelerated UI framework
- gpui-component - UI component library
- Tokio - Async runtime
- reqwest - HTTP client
- serde - Serialization
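The HTTP stack above (reqwest running on Tokio) is commonly used to consume provider streams delivered as server-sent events, where each event arrives on a `data:` line. A minimal, dependency-free sketch of that parsing step, assuming the common `data: ` prefix and OpenAI-style `[DONE]` terminator (the function name and payload shape are illustrative, not this project's actual code):

```rust
/// Extract the payload of each `data:` line from one SSE chunk.
/// Assumes the widely used "data: <payload>" line format and skips
/// the "[DONE]" sentinel that some providers send at end of stream.
fn parse_sse_chunk(chunk: &str) -> Vec<String> {
    chunk
        .lines()
        .filter_map(|line| line.strip_prefix("data: "))
        .filter(|payload| *payload != "[DONE]")
        .map(str::to_owned)
        .collect()
}

fn main() {
    // Two delta events followed by the end-of-stream sentinel.
    let chunk = "data: {\"delta\":\"Hel\"}\n\ndata: {\"delta\":\"lo\"}\n\ndata: [DONE]\n";
    let events = parse_sse_chunk(chunk);
    assert_eq!(events.len(), 2);
    println!("{:?}", events);
}
```

In a real client, each extracted payload would then be deserialized with serde and appended to the in-progress message as it streams in.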
This project is dual-licensed:
- Open Source: AGPL-3.0 - free for use in open-source projects
- Commercial: Commercial License - required for proprietary use
For commercial licensing inquiries, please contact license@aprilnea.com.
Made with ❤️ by AprilNEA