diff --git a/runpodctl/reference/runpodctl-pod.mdx b/runpodctl/reference/runpodctl-pod.mdx
index ca9428ca..3adbcec6 100644
--- a/runpodctl/reference/runpodctl-pod.mdx
+++ b/runpodctl/reference/runpodctl-pod.mdx
@@ -168,6 +168,34 @@ Enable SSH on the Pod.
Network volume ID to attach. Use [`runpodctl network-volume list`](/runpodctl/reference/runpodctl-network-volume) to see available network volumes.
+
+Minimum CUDA version required (e.g., `11.8`, `12.4`). The Pod will only be scheduled on machines that meet this CUDA version requirement.
+
+
+
+Docker arguments passed to the container at runtime (e.g., `"sleep infinity"`).
+
+
+
+Container registry authentication ID for pulling private images. Use [`runpodctl registry list`](/runpodctl/reference/runpodctl-registry) to see available registry credentials.
+
+
+
+Country code for regional deployment (e.g., `US`, `CA`). Restricts Pod placement to machines in the specified country.
+
+
+
+Automatically stop the Pod after the specified duration (e.g., `1h`, `24h`, `7d`).
+
+
+
+Automatically terminate the Pod after the specified duration (e.g., `1h`, `24h`, `7d`). Unlike `--stop-after`, this permanently deletes the Pod.
+
+
+
+Compliance requirements for the Pod's host machine (e.g., regulatory standards for data handling).
+
+
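+A hypothetical invocation combining several of the options above (the subcommand and flag spellings here are illustrative, not confirmed; run `runpodctl pod --help` for the exact names):
+
+```sh
+# Sketch: create a Pod that requires CUDA 12.4, pulls from a private
+# registry, deploys in the US, and terminates itself after 24 hours.
+# Flag names below are assumptions for illustration only.
+runpodctl pod create \
+  --cuda-version 12.4 \
+  --registry-auth-id <registry-auth-id> \
+  --country-code US \
+  --terminate-after 24h
+```
+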
### Start a Pod
Start a stopped Pod:
diff --git a/runpodctl/reference/runpodctl-serverless.mdx b/runpodctl/reference/runpodctl-serverless.mdx
index 4dd22e0e..d306b908 100644
--- a/runpodctl/reference/runpodctl-serverless.mdx
+++ b/runpodctl/reference/runpodctl-serverless.mdx
@@ -122,6 +122,34 @@ Comma-separated list of preferred datacenter IDs. Use [`runpodctl datacenter lis
Network volume ID to attach. Use [`runpodctl network-volume list`](/runpodctl/reference/runpodctl-network-volume) to see available network volumes.
+
+Comma-separated list of network volume IDs to attach. Use this when attaching multiple network volumes to an endpoint.
+
+
+
+Minimum CUDA version required for workers (e.g., `12.4`). Workers will only be scheduled on machines that meet this CUDA version requirement.
+
+
+
+Autoscaler type (`QUEUE_DELAY` or `REQUEST_COUNT`). `QUEUE_DELAY` scales based on queue wait time; `REQUEST_COUNT` scales based on concurrent requests.
+
+
+
+Scaler threshold value. For `QUEUE_DELAY`, this is the target delay in seconds. For `REQUEST_COUNT`, this is the number of concurrent requests per worker before scaling.
+
+
+
+Idle timeout in seconds. Workers shut down after being idle for this duration. Valid range: 5-3600 seconds.
+
+
+
+Enable or disable flash boot for faster worker startup. When enabled, workers start from cached container images.
+
+
+
+Execution timeout in seconds. Jobs that exceed this duration are terminated. The CLI accepts seconds but converts to milliseconds internally.
+
+
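+A hypothetical endpoint creation combining several of these options (the subcommand and flag spellings here are illustrative, not confirmed; run `runpodctl serverless --help` for the exact names):
+
+```sh
+# Sketch: create an endpoint that scales when queue delay exceeds
+# 4 seconds, idles workers out after 60 seconds, enables flash boot,
+# and kills jobs that run longer than 600 seconds.
+# Flag names below are assumptions for illustration only.
+runpodctl serverless create \
+  --scaler-type QUEUE_DELAY \
+  --scaler-value 4 \
+  --idle-timeout 60 \
+  --flash-boot \
+  --execution-timeout 600
+```
+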
### Update an endpoint
Update endpoint configuration:
@@ -156,6 +184,14 @@ Scaler type (`QUEUE_DELAY` or `REQUEST_COUNT`).
Scaler value.
+
+Enable or disable flash boot for faster worker startup.
+
+
+
+Execution timeout in seconds. Jobs that exceed this duration are terminated.
+
+
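+A hypothetical update using these options (flag spellings are illustrative, not confirmed; run `runpodctl serverless --help` for the exact names):
+
+```sh
+# Sketch: enable flash boot and raise the execution timeout on an
+# existing endpoint. Flag names are assumptions for illustration only.
+runpodctl serverless update <endpoint-id> \
+  --flash-boot \
+  --execution-timeout 900
+```
+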
### Delete an endpoint
Delete an endpoint: