diff --git a/docs/blog/why-aiscript.mdx b/docs/blog/why-aiscript.mdx
index 82479da..8210816 100644
--- a/docs/blog/why-aiscript.mdx
+++ b/docs/blog/why-aiscript.mdx
@@ -191,6 +191,12 @@ api_key = "YOUR_API_KEY"
[ai.anthropic]
api_key = "YOUR_API_KEY"
+
+# Local models with Ollama (no API key needed)
+[ai.ollama]
+api_endpoint = "http://localhost:11434/v1"
+model = "llama3.2"
+api_key = ""
```
diff --git a/docs/guide/ai/_meta.json b/docs/guide/ai/_meta.json
index df9ee2b..bea683d 100644
--- a/docs/guide/ai/_meta.json
+++ b/docs/guide/ai/_meta.json
@@ -1 +1 @@
-["prompt", "function", "agent"]
\ No newline at end of file
+["prompt", "function", "agent", "local-models"]
\ No newline at end of file
diff --git a/docs/guide/ai/local-models.mdx b/docs/guide/ai/local-models.mdx
new file mode 100644
index 0000000..c0d52f1
--- /dev/null
+++ b/docs/guide/ai/local-models.mdx
@@ -0,0 +1,120 @@
+import { Tab, Tabs } from "rspress/theme";
+
+# Local Models with Ollama
+
+AIScript supports running AI models locally through [Ollama](https://ollama.ai/), providing privacy, cost-effectiveness, and offline capabilities.
+
+## Setting Up Ollama
+
+### Installation
+
+1. **Install Ollama** from [ollama.ai](https://ollama.ai/)
+2. **Start Ollama** (the service listens on port 11434 by default)
+3. **Pull models** you want to use:
+
+```bash
+# A popular general-purpose model
+ollama pull llama3.2
+
+# Coding models
+ollama pull codellama
+ollama pull deepseek-coder
+
+# Specialized models
+ollama pull deepseek-r1
+ollama pull gemma2
+ollama pull qwen2.5
+```
+
+### Verification
+
+Check that Ollama is running and models are available:
+
+```bash
+# List installed models
+ollama list
+
+# Test a model
+ollama run llama3.2 "Hello, how are you?"
+```
+
+## Configuration
+
+Configure AIScript to use Ollama models:
+
+<Tabs>
+<Tab label="Configuration file">
+
+```toml
+[ai.ollama]
+api_endpoint = "http://localhost:11434/v1"
+model = "llama3.2" # Default model
+```
+
+</Tab>
+<Tab label="Environment variable">
+
+```bash
+export OLLAMA_API_ENDPOINT="http://localhost:11434/v1"
+```
+
+</Tab>
+</Tabs>
+
+## Basic Usage
+
+### Simple Prompts
+
+```rust
+let response = prompt {
+ input: "Explain how neural networks work",
+ model: "llama3.2"
+};
+```
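+
+### Chaining Prompts
+
+You can also feed the output of one local prompt into the next and keep the whole pipeline on your own hardware. The sketch below reuses the `prompt { input, model }` form and f-string interpolation shown above; the variable names are illustrative only.
+
+```rust
+// Draft an outline with a general-purpose local model.
+let outline = prompt {
+    input: "Outline a short blog post about Rust's ownership model",
+    model: "llama3.2"
+};
+
+// Interpolate the first response into the second request.
+let post = prompt {
+    input: f"Write the blog post following this outline: {outline}",
+    model: "llama3.2"
+};
+
+print(post);
+```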
+
+## Troubleshooting
+
+### Common Issues
+
+**Ollama not responding:**
+```bash
+# Check if Ollama is running
+curl http://localhost:11434/api/tags
+
+# Start the Ollama server if it is not already running
+ollama serve
+```
+
+**Model not found:**
+```bash
+# List available models
+ollama list
+
+# Pull missing model
+ollama pull llama3.2
+```
+
+**Out of memory:**
+- Try a smaller model (e.g., `phi3` instead of `llama3.1`)
+- Close other applications
+- Use quantized models
+
+### Performance Issues
+
+- **Slow responses**: Try smaller models or increase hardware resources
+- **High memory usage**: Monitor with `htop` or Activity Monitor
+- **GPU not utilized**: Ensure GPU drivers are properly installed
+
+## Comparison: Local vs Cloud Models
+
+| Aspect | Local Models | Cloud Models |
+|--------|-------------|--------------|
+| **Privacy** | ✅ Complete | ❌ Data sent externally |
+| **Cost** | ✅ One-time setup | ❌ Per-token billing |
+| **Internet** | ✅ Works offline | ❌ Requires connection |
+| **Latency** | ✅ Low (local) | ❌ Network dependent |
+| **Quality** | ⚠️ Model dependent | ✅ Usually higher |
+| **Maintenance** | ❌ Self-managed | ✅ Fully managed |
+| **Scaling** | ❌ Hardware limited | ✅ Unlimited |
+
+Local models with Ollama provide a powerful way to integrate AI capabilities into your AIScript applications while maintaining full control over your data and costs.
diff --git a/docs/guide/ai/prompt.mdx b/docs/guide/ai/prompt.mdx
index d6c30d8..cf16498 100644
--- a/docs/guide/ai/prompt.mdx
+++ b/docs/guide/ai/prompt.mdx
@@ -51,7 +51,7 @@ let response = prompt f"Explain {topic} at a {detail_level} level";
## API Keys and Configuration
-To use the `prompt` feature, you need to configure your API keys. AIScript supports multiple LLM providers.
+To use the `prompt` feature, you need to configure your API keys. AIScript supports multiple LLM providers including cloud-based services and local models.
### Environment Variables
@@ -63,6 +63,9 @@ export OPENAI_API_KEY="your_openai_key"
# For Anthropic Claude
export CLAUDE_API_KEY="your_claude_key"
+
+# For Ollama (local models)
+export OLLAMA_API_ENDPOINT="http://localhost:11434/v1"
```
### Configuration File
@@ -78,10 +81,47 @@ api_key = "YOUR_OPENAI_API_KEY"
[ai.anthropic]
api_key = "YOUR_CLAUDE_API_KEY"
+
+# For Ollama (local models)
+[ai.ollama]
+api_endpoint = "http://localhost:11434/v1"
+model = "llama3.2" # or any other model installed in your Ollama instance
```
+## Local Models with Ollama
+
+AIScript supports running local models through [Ollama](https://ollama.ai/), which allows you to run AI models on your own hardware without sending data to external services.
+
+### Setting Up Ollama
+
+1. **Install Ollama** from [ollama.ai](https://ollama.ai/)
+2. **Pull models** you want to use:
+ ```bash
+ ollama pull llama3.2
+ ollama pull deepseek-r1
+ ollama pull codellama
+ ```
+3. **Start Ollama** (it runs on port 11434 by default)
+4. **Configure AIScript** to use Ollama
+
+### Example with Local Models
+
+```rust
+// Using a local Llama model
+let response = prompt {
+ input: "What is rust?",
+ model: "llama3.2"
+};
+
+// Using a local coding model
+let code_review = prompt {
+    input: f"Review this Python function for potential issues: {python_code}",
+    model: "codellama"
+};
+```
+
## Error Handling
Like other AIScript operations, you can use error handling with the `prompt` keyword:
@@ -135,6 +175,12 @@ let claude_response = prompt {
input: f"Summarize this article: {article_text}" ,
model: "claude-3.7"
};
+
+// Using Ollama (local models)
+let local_response = prompt {
+ input: f"Summarize this article: {article_text}",
+ model: "llama3.2"
+};
```
### Chaining Prompts
diff --git a/docs/guide/getting-started/introduction.md b/docs/guide/getting-started/introduction.md
index b99bd65..cf04fd3 100644
--- a/docs/guide/getting-started/introduction.md
+++ b/docs/guide/getting-started/introduction.md
@@ -78,6 +78,21 @@ You can open [http://localhost:8080/redoc](http://localhost:8080/redoc) to see t

+### Local Models Support
+
+AIScript also supports local models through [Ollama](https://ollama.ai/), allowing you to run AI models on your own hardware:
+
+```bash
+$ ollama pull llama3.2
+$ export OLLAMA_API_ENDPOINT=http://localhost:11434/v1
+$ cat local.ai
+let a = prompt {
+ input: "What is rust?",
+ model: "llama3.2"
+};
+print(a);
+```
+
## Use Cases
AIScript excels in these scenarios:
diff --git a/docs/reference/configuration/ai.md b/docs/reference/configuration/ai.md
index 82aec26..3bda1ef 100644
--- a/docs/reference/configuration/ai.md
+++ b/docs/reference/configuration/ai.md
@@ -2,6 +2,15 @@
> For AI observability, please refer to [observability](./observability.md).
+AIScript supports multiple AI providers:
+
+- **OpenAI**: Cloud-based models including GPT-4 and GPT-3.5
+- **DeepSeek**: Cost-efficient chat and reasoning models (e.g. DeepSeek-V3, DeepSeek-R1)
+- **Anthropic**: Claude models for advanced reasoning
+- **Ollama**: 100+ local models from various providers running on your own hardware
+
+## Configuration
+
```toml
[ai.openai]
api_key = "YOUR_API_KEY"
@@ -20,4 +29,35 @@ api_key = "YOUR_API_KEY"
# optional, default is the official Anthropic API endpoint
api_endpoint = "YOUR_API_ENDPOINT"
model = "claude-3-5-sonnet-latest"
+
+# Ollama - Local models
+[ai.ollama]
+api_endpoint = "http://localhost:11434/v1" # Default Ollama endpoint
+model = "llama3.2" # or any other model installed in your Ollama instance
+```
+
+## Using Ollama
+
+[Ollama](https://ollama.ai/) allows you to run local AI models on your own hardware. To use Ollama with AIScript:
+
+1. **Install Ollama** from [ollama.ai](https://ollama.ai/)
+2. **Pull your desired models** (e.g., `ollama pull llama3.2`)
+3. **Make sure Ollama is running** locally
+4. **Configure AIScript** to use Ollama as shown above or by setting the `OLLAMA_API_ENDPOINT` environment variable
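+
+Once the endpoint is configured, a `prompt` call can target any model installed in your Ollama instance by name. A minimal sketch (the model name here is just an example of one you may have pulled):
+
+```rust
+// Assumes [ai.ollama] is configured as above and llama3.2 has been pulled.
+let summary = prompt {
+    input: "Summarize the trade-offs of running models locally",
+    model: "llama3.2"
+};
+print(summary);
+```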
+
+### Available Models
+
+Ollama provides access to 100+ models ranging from small 135M parameter models to massive 671B parameter models, including:
+
+- **Llama family**: `llama3.2`, `llama3.1`, `codellama`
+- **DeepSeek models**: `deepseek-r1`, `deepseek-v3`, `deepseek-coder`
+- **Other popular models**: `phi3`, `gemma2`, `qwen2.5`, `mistral`
+- **And many more**: See the [full list of available models](https://ollama.com/search)
+
+### Environment Variables
+
+You can also configure Ollama using environment variables:
+
+```bash
+export OLLAMA_API_ENDPOINT="http://localhost:11434/v1"
```
\ No newline at end of file