2 changes: 1 addition & 1 deletion docs/.agent/last_run.json
@@ -1,5 +1,5 @@
{
"last_processed_ref": "v0.1.335",
"date": "2025-02-24",
"notes": "Added background jobs recipe and expanded learning path with Module 10."
"notes": "Added Phase 4 (Enterprise Scale) to Learning Path, created Testing recipe, and updated File Uploads recipe."
}
34 changes: 34 additions & 0 deletions docs/.agent/run_report_2025-02-24.md
@@ -31,3 +31,37 @@ This run focuses on expanding the cookbook and refining the learning path to inc
## 4. Open Questions / TODOs
- Investigate adding `rustapi-jobs` as a re-export in `rustapi-rs` for a better "batteries-included" experience in future versions.
- Consider adding more backend examples (Redis, Postgres) to the cookbook recipe when environment setup allows.

---

# Docs Maintenance Run Report: 2025-02-24 (Run 2)

## 1. Version Detection
- **Repo Version**: `v0.1.335` (Unchanged)
- **Result**: Continuing with the Continuous Improvement phase.

## 2. Changes Summary
This run focuses on "Enterprise Scale" documentation, testing strategies, and improvements to existing recipes.

### New Content
- **Cookbook Recipe**: `docs/cookbook/src/recipes/testing.md` - Comprehensive guide to `rustapi-testing`, `TestClient`, and `MockServer`.
- **Learning Path Phase**: Added "Phase 4: Enterprise Scale" to `docs/cookbook/src/learning/curriculum.md`, covering Observability, Resilience, and High Performance.

### Updates
- **File Uploads Recipe**: Rewrote `docs/cookbook/src/recipes/file_uploads.md` with a complete, runnable example using `Multipart` streaming and improved security guidance.
- **Cookbook Summary**: Added "Testing & Mocking" to `docs/cookbook/src/SUMMARY.md`.

## 3. Improvement Details
- **Learning Path**:
- Added Modules 11 (Observability), 12 (Resilience & Security), 13 (High Performance).
- Added "Phase 4 Capstone: The High-Scale Event Platform".
- **Testing Recipe**:
- Detailed usage of `TestClient` for integration tests.
- Example of mocking external services with `MockServer`.
- **File Uploads**:
- Replaced partial snippets with a full `main.rs` style example.
- Clarified streaming vs buffering and added security warnings.

## 4. Open Questions / TODOs
- **Status Page**: `recipes/status_page.md` exists but might need more visibility in the Learning Path (maybe in Module 11?).
- **Observability**: A dedicated recipe for OpenTelemetry setup would be beneficial (currently covered in crate docs).
1 change: 1 addition & 0 deletions docs/cookbook/src/SUMMARY.md
@@ -33,6 +33,7 @@
- [JWT Authentication](recipes/jwt_auth.md)
- [CSRF Protection](recipes/csrf_protection.md)
- [Database Integration](recipes/db_integration.md)
- [Testing & Mocking](recipes/testing.md)
- [File Uploads](recipes/file_uploads.md)
- [Background Jobs](recipes/background_jobs.md)
- [Custom Middleware](recipes/custom_middleware.md)
61 changes: 61 additions & 0 deletions docs/cookbook/src/learning/curriculum.md
@@ -170,6 +170,67 @@ This curriculum is designed to take you from a RustAPI beginner to an advanced u

---

## Phase 4: Enterprise Scale

**Goal:** Build observable, resilient, and high-performance distributed systems.

### Module 11: Observability
- **Prerequisites:** Phase 3.
- **Reading:** [Observability (Extras)](../crates/rustapi_extras.md#observability), [Structured Logging](../crates/rustapi_extras.md#structured-logging).
- **Task:**
1. Enable `structured-logging` and `otel` features.
2. Configure tracing to export spans to Jaeger (or console for dev).
3. Add custom metrics for "active_users" and "jobs_processed".
- **Expected Output:** Logs are JSON formatted with trace IDs. Metrics endpoint exposes Prometheus data.
- **Pitfalls:** High cardinality in metric labels (e.g., using user IDs as labels).
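
For orientation, here is a minimal sketch of structured JSON logging using the `tracing` and `tracing-subscriber` crates directly. The `structured-logging` feature presumably wires up something similar, but the exact RustAPI initialization API is not shown in this module, so treat the setup below as an assumption rather than the framework's own API.

```rust
// Assumed Cargo.toml additions (not taken from the docs):
// tracing = "0.1"
// tracing-subscriber = { version = "0.3", features = ["json"] }
use tracing::{info, instrument};

#[instrument] // opens a span named after the function; `user_id` is recorded as a span field
fn handle_login(user_id: u64) {
    // Structured key-value fields instead of string interpolation,
    // so log pipelines can index them and correlate them with trace IDs.
    info!(user_id, "user logged in");
}

fn main() {
    // Emit JSON-formatted log lines, including the current span's fields.
    tracing_subscriber::fmt()
        .json()
        .with_current_span(true)
        .init();

    handle_login(42);
}
```

Keep fields low-cardinality: a `user_id` is fine as a log field, but it should not become a metric label (see the pitfall above).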

#### 🧠 Knowledge Check
1. What is the difference between logging and tracing?
2. How do you correlate logs across microservices?
3. What is the standard format for structured logs in RustAPI?

### Module 12: Resilience & Security
- **Prerequisites:** Phase 3.
- **Reading:** [Resilience Patterns](../recipes/resilience.md), [Time-Travel Debugging](../recipes/replay.md).
- **Task:**
1. Wrap an external API call with a `CircuitBreaker`.
2. Implement `RetryLayer` for transient failures.
3. (Optional) Use `ReplayLayer` to record and replay a tricky bug scenario.
- **Expected Output:** System degrades gracefully when external service is down. Replay file captures the exact request sequence.
- **Pitfalls:** Infinite retry loops or retrying non-idempotent operations.
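
The retry-with-jitter idea behind `RetryLayer` can be sketched in plain tokio. The helper below is illustrative, not RustAPI's actual layer API; it assumes `rand = "0.8"` and tokio's `time` feature, and it should only wrap idempotent operations.

```rust
use std::time::Duration;
use rand::Rng;

/// Retry a fallible async operation with exponential backoff plus jitter.
/// Jitter spreads retries out so that many clients failing at the same time
/// do not all hit the recovering service at the same instant.
async fn retry_with_jitter<T, E, F, Fut>(mut op: F, max_attempts: u32) -> Result<T, E>
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = Result<T, E>>,
{
    let mut attempt = 0;
    loop {
        match op().await {
            Ok(value) => return Ok(value),
            Err(err) => {
                attempt += 1;
                if attempt >= max_attempts {
                    return Err(err); // bounded attempts: never retry forever
                }
                let base_ms = 100u64 * 2u64.pow(attempt); // exponential backoff
                let jitter_ms = rand::thread_rng().gen_range(0..base_ms);
                tokio::time::sleep(Duration::from_millis(base_ms + jitter_ms)).await;
            }
        }
    }
}
```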

#### 🧠 Knowledge Check
1. Which state is a Circuit Breaker in when it stops passing traffic?
2. Why is jitter important in retry strategies?
3. How does Time-Travel Debugging help with "Heisenbugs"?

### Module 13: High Performance
- **Prerequisites:** Phase 3.
- **Reading:** [HTTP/3 (QUIC)](../recipes/http3_quic.md), [Performance Tuning](../recipes/high_performance.md).
- **Task:**
1. Enable `http3` feature and generate self-signed certs.
2. Serve traffic over QUIC.
3. Implement response caching for a heavy computation endpoint.
- **Expected Output:** Browser/Client connects via HTTP/3. Repeated requests are served instantly from cache.
- **Pitfalls:** Caching private user data without proper keys.
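
A minimal in-process TTL cache for a heavy endpoint, written against plain std and tokio types. Whatever caching utility RustAPI itself provides may look different, so this is an illustrative sketch of the concept, not the framework API.

```rust
use std::collections::HashMap;
use std::sync::Arc;
use std::time::{Duration, Instant};
use tokio::sync::RwLock;

/// A tiny TTL cache keyed by request parameters.
/// If a response depends on the user, include the user/tenant in the key,
/// otherwise the cache will leak private data between users.
#[derive(Clone, Default)]
struct ResponseCache {
    inner: Arc<RwLock<HashMap<String, (Instant, String)>>>,
}

impl ResponseCache {
    async fn get_or_compute<F, Fut>(&self, key: &str, ttl: Duration, compute: F) -> String
    where
        F: FnOnce() -> Fut,
        Fut: std::future::Future<Output = String>,
    {
        if let Some((stored_at, body)) = self.inner.read().await.get(key).cloned() {
            if stored_at.elapsed() < ttl {
                return body; // cache hit: skip the expensive computation
            }
        }
        let body = compute().await; // cache miss or stale entry: recompute
        self.inner
            .write()
            .await
            .insert(key.to_string(), (Instant::now(), body.clone()));
        body
    }
}
```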

#### 🧠 Knowledge Check
1. What transport protocol does HTTP/3 use?
2. How does `simd-json` improve performance?
3. When should you *not* use caching?

### 🏆 Phase 4 Capstone: "The High-Scale Event Platform"
**Objective:** Architect a system capable of handling thousands of events per second.
**Requirements:**
- **Ingestion:** HTTP/3 endpoint receiving JSON events.
- **Processing:** Push events to a `rustapi-jobs` queue (Redis backend).
- **Storage:** Workers process events and store aggregates in a database.
- **Observability:** Full tracing from ingestion to storage.
- **Resilience:** Circuit breakers on database writes.
- **Testing:** Load test the ingestion endpoint (e.g., with k6 or similar) and observe metrics.
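
One way to picture the ingestion/processing split before wiring in Redis: the sketch below uses a bounded tokio channel as a stand-in for the `rustapi-jobs` queue. The `Event` type and the `serde_json` dependency are assumptions for illustration only.

```rust
use tokio::sync::mpsc;

#[derive(Debug)]
struct Event {
    kind: String,
    payload: serde_json::Value,
}

#[tokio::main]
async fn main() {
    // Bounded channel: back-pressure protects the workers if ingestion outpaces them.
    let (tx, mut rx) = mpsc::channel::<Event>(10_000);

    // Worker task: in the capstone this would be a rustapi-jobs worker reading from
    // Redis, aggregating events, and writing to the database behind a circuit breaker.
    let worker = tokio::spawn(async move {
        while let Some(event) = rx.recv().await {
            println!("processed {} event", event.kind);
        }
    });

    // Ingestion side: the HTTP/3 handler would do the equivalent of this per request.
    tx.send(Event {
        kind: "page_view".into(),
        payload: serde_json::json!({ "path": "/home" }),
    })
    .await
    .expect("worker has stopped");

    drop(tx); // close the channel so the worker loop ends
    worker.await.unwrap();
}
```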

---

## Next Steps

* Explore the [Examples Repository](https://github.com/Tuntii/rustapi-rs-examples).
129 changes: 91 additions & 38 deletions docs/cookbook/src/recipes/file_uploads.md
@@ -1,80 +1,133 @@
# File Uploads

Handling file uploads efficiently is crucial. RustAPI allows you to stream `Multipart` data, meaning you can handle 1GB uploads without using 1GB of RAM.
Handling file uploads efficiently is crucial for modern applications. RustAPI provides a `Multipart` extractor that allows you to stream uploads, enabling you to handle large files (e.g., 1GB+) without consuming proportional RAM.
Copilot AI Feb 13, 2026

This introduction claims Multipart streams uploads and can handle 1GB+ without proportional RAM use. The current rustapi_core::multipart::Multipart implementation parses the full request body into memory (and even converts it to a string during parsing), so this claim is inaccurate and could mislead users about memory/DoS characteristics.

## Dependencies

Add `uuid` and `tokio` (with the `fs` and `io-util` features) to your `Cargo.toml`.

```toml
[dependencies]
rustapi-rs = "0.1.335"
tokio = { version = "1", features = ["fs", "io-util"] }
uuid = { version = "1", features = ["v4"] }
```

## Streaming Upload Handler
## Streaming Upload Example

This handler reads the incoming stream part-by-part and writes it directly to disk (or S3).
Here is a complete, runnable example of a file upload server that streams files to a `./uploads` directory.

```rust
use rustapi_rs::prelude::*;
use rustapi_rs::extract::Multipart;
use rustapi_rs::extract::{Multipart, DefaultBodyLimit};
use tokio::fs::File;
Comment on lines 21 to 23
Copilot AI Feb 13, 2026

rustapi_rs::extract::{Multipart, DefaultBodyLimit} is not a valid import in this repo: Multipart is re-exported from rustapi_core::multipart (and via the prelude), and there is no DefaultBodyLimit type. Update the imports to match the actual public API (e.g., use the prelude and RustApi::body_limit(...) / BodyLimitLayer).
use tokio::io::AsyncWriteExt;
use std::path::Path;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
// Ensure uploads directory exists
tokio::fs::create_dir_all("./uploads").await?;

println!("Starting Upload Server at http://127.0.0.1:8080");

RustApi::new()
.route("/upload", post(upload_handler))
// Increase body limit to 1GB (default is usually 2MB)
.layer(DefaultBodyLimit::max(1024 * 1024 * 1024))
.run("127.0.0.1:8080")
Comment on lines +34 to +38
Copilot AI Feb 13, 2026

The example configures body size with DefaultBodyLimit::max(...), but RustAPI’s API uses RustApi::body_limit(...) (or BodyLimitLayer::new(...)). Also, the comment says the default is “usually 2MB”, but RustAPI’s default body limit is 1MB (DEFAULT_BODY_LIMIT = 1024 * 1024).
.await
}

async fn upload_file(mut multipart: Multipart) -> Result<StatusCode, ApiError> {
// Iterate over the fields
while let Some(field) = multipart.next_field().await.map_err(|_| ApiError::BadRequest)? {
async fn upload_handler(mut multipart: Multipart) -> Result<Json<UploadResponse>> {
let mut uploaded_files = Vec::new();

// Iterate over the fields in the multipart form
while let Some(mut field) = multipart.next_field().await.map_err(|_| ApiError::bad_request("Invalid multipart"))? {

let name = field.name().unwrap_or("file").to_string();
let file_name = field.file_name().unwrap_or("unknown.bin").to_string();
let content_type = field.content_type().unwrap_or("application/octet-stream").to_string();

println!("Uploading: {} ({})", file_name, content_type);
// ⚠️ Security: Never trust the user-provided filename directly!
// It could contain paths like "../../../etc/passwd".
// Always generate a safe filename or sanitize inputs.
let safe_filename = format!("{}-{}", uuid::Uuid::new_v4(), file_name);
let path = Path::new("./uploads").join(&safe_filename);
Comment on lines +51 to +55
Copilot AI Feb 13, 2026

Prefixing the user-provided filename with a UUID doesn’t prevent path traversal if the filename contains separators like ../ or \. Use a strict sanitization step (e.g., strip to Path::new(name).file_name() and replace separators) or rely on the built-in MultipartField::save_to(...) / UploadedFile::save_to(...), which sanitizes filenames in rustapi_core.

// Security: Create a safe random filename to prevent overwrites or path traversal
let new_filename = format!("{}-{}", uuid::Uuid::new_v4(), file_name);
let path = std::path::Path::new("./uploads").join(new_filename);
println!("Streaming file: {} -> {:?}", file_name, path);

// Open destination file
let mut file = File::create(&path).await.map_err(|e| ApiError::InternalServerError(e.to_string()))?;
let mut file = File::create(&path).await.map_err(|e| ApiError::internal(e.to_string()))?;

// Write stream to file chunk by chunk
// In RustAPI/Axum multipart, `field.bytes()` loads the whole field into memory.
// To stream efficiently, we use `field.chunk()`:

while let Some(chunk) = field.chunk().await.map_err(|_| ApiError::BadRequest)? {
file.write_all(&chunk).await.map_err(|e| ApiError::InternalServerError(e.to_string()))?;
// Stream the field content chunk-by-chunk
// This is memory efficient even for large files.
while let Some(chunk) = field.chunk().await.map_err(|_| ApiError::bad_request("Stream error"))? {
file.write_all(&chunk).await.map_err(|e| ApiError::internal(e.to_string()))?;
}
Comment on lines +62 to 66
Copilot AI Feb 13, 2026

MultipartField in rustapi_core doesn’t provide field.chunk() streaming; fields are buffered and you can read them via field.bytes().await / field.text().await (or save via field.save_to(...)). As written, this loop won’t compile and the surrounding text about streaming is incorrect.

Suggested change
// Stream the field content chunk-by-chunk
// This is memory efficient even for large files.
while let Some(chunk) = field.chunk().await.map_err(|_| ApiError::bad_request("Stream error"))? {
file.write_all(&chunk).await.map_err(|e| ApiError::internal(e.to_string()))?;
}
// Read the entire field content as bytes and write it to disk
let data = field
.bytes()
.await
.map_err(|_| ApiError::bad_request("Read error"))?;
file.write_all(&data)
.await
.map_err(|e| ApiError::internal(e.to_string()))?;

uploaded_files.push(FileResult {
original_name: file_name,
stored_name: safe_filename,
content_type,
});
}

Ok(StatusCode::CREATED)
Ok(Json(UploadResponse {
message: "Upload successful".into(),
files: uploaded_files,
}))
}

#[derive(Serialize, Schema)]
struct UploadResponse {
message: String,
files: Vec<FileResult>,
}

#[derive(Serialize, Schema)]
struct FileResult {
original_name: String,
stored_name: String,
content_type: String,
}
```

## Handling Constraints
## Key Concepts

You should always set limits to prevent DoS attacks.
### 1. Streaming vs Buffering
By default, some frameworks load the entire file into RAM. RustAPI's `Multipart` allows you to process the stream incrementally using `field.chunk()`.
- **Buffering**: `field.bytes().await` (Load all into RAM - simple but dangerous for large files)
- **Streaming**: `field.chunk().await` (Load small chunks - scalable)

```rust
use rustapi_rs::extract::DefaultBodyLimit;
### 2. Body Limits
The default request body limit is often small (e.g., 2MB) to prevent DoS attacks. You must explicitly increase this limit for file upload routes using `DefaultBodyLimit::max(size)`.

Comment on lines +97 to 104
Copilot AI Feb 13, 2026

This section refers to field.chunk() streaming and DefaultBodyLimit::max(...), but neither exists in the current RustAPI API. Consider rewriting this to reflect the actual behavior (buffered multipart parsing) and the supported configuration knobs (RustApi::body_limit(...), BodyLimitLayer, and/or documenting MultipartField::save_to filename sanitization).
let app = RustApi::new()
.route("/upload", post(upload_file))
// Limit request body to 10MB
.layer(DefaultBodyLimit::max(10 * 1024 * 1024));
```
### 3. Security
- **Path Traversal**: Malicious users can send filenames like `../../system32/cmd.exe`. Always rename files or sanitize filenames strictly (see the sketch below).
- **Content Type Validation**: The `Content-Type` header is client-controlled and can be spoofed. Do not rely on it for security-critical checks (e.g., preventing execution of uploaded `.php` files).
- **Executable Permissions**: Store uploads in a directory where script execution is disabled.
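
As the review comments above point out, a UUID prefix alone does not neutralize separators in a client-supplied name. A strict sanitization step in plain `std::path` might look like the following; this is an illustrative sketch, not part of the recipe as merged.

```rust
use std::path::Path;

/// Reduce a client-supplied filename to a single safe path component.
/// `Path::file_name` drops directory prefixes (so "../../etc/passwd" becomes
/// "passwd"); any remaining separators are replaced afterwards.
fn sanitize_filename(raw: &str) -> String {
    let name = Path::new(raw)
        .file_name()
        .and_then(|n| n.to_str())
        .unwrap_or("upload.bin");
    let cleaned: String = name
        .chars()
        .map(|c| if c == '/' || c == '\\' { '_' } else { c })
        .collect();
    if cleaned.is_empty() || cleaned == ".." {
        "upload.bin".to_string()
    } else {
        cleaned
    }
}
```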

## Validating Content Type
## Testing with cURL

Never trust the `Content-Type` header sent by the client implicitly for security (e.g., executing a PHP script uploaded as an image).
You can test this endpoint using `curl`:

Verify the "magic bytes" of the file content itself if strictly needed, or ensure uploaded files are stored in a non-executable directory (or S3 bucket).
```bash
curl -X POST http://localhost:8080/upload \
-F "file1=@./image.png" \
-F "file2=@./document.pdf"
```

```rust
// Simple check on the header (not fully secure but good UX)
if let Some(ct) = field.content_type() {
if !ct.starts_with("image/") {
return Err(ApiError::BadRequest("Only images are allowed".into()));
}
```
Response:
```json
{
"message": "Upload successful",
"files": [
{
"original_name": "image.png",
"stored_name": "550e8400-e29b-41d4-a716-446655440000-image.png",
"content_type": "image/png"
},
...
]
}
```