diff --git a/calico/getting-started/kubernetes/helm.mdx b/calico/getting-started/kubernetes/helm.mdx index 061711acd2..1be33d0bb6 100644 --- a/calico/getting-started/kubernetes/helm.mdx +++ b/calico/getting-started/kubernetes/helm.mdx @@ -85,6 +85,18 @@ For more information about configurable options via `values.yaml` please see [He helm install calico-crds projectcalico/crd.projectcalico.org.v1 --version $[releaseTitle] --namespace tigera-operator ``` + :::tip + + To install with [native v3 CRDs](../../operations/native-v3-crds.mdx) (tech preview) instead, use the v3 CRD chart: + + ```bash + helm install calico-crds projectcalico/projectcalico.org.v3 --version $[releaseTitle] --namespace tigera-operator + ``` + + Native v3 CRDs eliminate the need for the aggregation API server and allow `kubectl` to manage `projectcalico.org/v3` resources directly. + + ::: + 1. Install the Tigera Operator using the Helm chart: ```bash diff --git a/calico/getting-started/kubernetes/k3s/multi-node-install.mdx b/calico/getting-started/kubernetes/k3s/multi-node-install.mdx index 308aff7404..ebc2a60d63 100644 --- a/calico/getting-started/kubernetes/k3s/multi-node-install.mdx +++ b/calico/getting-started/kubernetes/k3s/multi-node-install.mdx @@ -97,7 +97,7 @@ curl -sfL https://get.k3s.io | K3S_URL=https://serverip:6443 K3S_TOKEN=mytoken s Install the $[prodname] operator and custom resource definitions. ```bash -kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml +kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml ``` diff --git a/calico/getting-started/kubernetes/k3s/quickstart.mdx b/calico/getting-started/kubernetes/k3s/quickstart.mdx index 728301299e..c476900a18 100644 --- a/calico/getting-started/kubernetes/k3s/quickstart.mdx +++ b/calico/getting-started/kubernetes/k3s/quickstart.mdx @@ -68,7 +68,7 @@ you are assigning necessary permissions to the file and make it accessible for o 1. 
Install the $[prodname] operator and custom resource definitions. ```bash -kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml +kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml ``` diff --git a/calico/getting-started/kubernetes/k8s-single-node.mdx b/calico/getting-started/kubernetes/k8s-single-node.mdx index 221e022bb3..0afb6c1178 100644 --- a/calico/getting-started/kubernetes/k8s-single-node.mdx +++ b/calico/getting-started/kubernetes/k8s-single-node.mdx @@ -92,7 +92,7 @@ The geeky details of what you get: 1. Install the Tigera Operator and custom resource definitions. ``` - kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml + kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml ``` diff --git a/calico/getting-started/kubernetes/kind.mdx b/calico/getting-started/kubernetes/kind.mdx index b2d360a4eb..3402628775 100644 --- a/calico/getting-started/kubernetes/kind.mdx +++ b/calico/getting-started/kubernetes/kind.mdx @@ -82,7 +82,7 @@ dev-worker2 NotReady 4m v1.25.0 172.18.0.3 < 1. Install the $[prodname] operator and custom resource definitions. ```bash -kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml +kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml ``` diff --git a/calico/getting-started/kubernetes/managed-public-cloud/aks.mdx b/calico/getting-started/kubernetes/managed-public-cloud/aks.mdx index d6d4a6089c..18cd6cbc65 100644 --- a/calico/getting-started/kubernetes/managed-public-cloud/aks.mdx +++ b/calico/getting-started/kubernetes/managed-public-cloud/aks.mdx @@ -115,7 +115,7 @@ The geeky details of what you get: 1. 
Install the Tigera Operator and custom resource definitions: ```bash - kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml + kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml ``` @@ -216,7 +216,7 @@ The geeky details of what you get: 1. Install the Tigera Operator and custom resource definitions. ```bash - kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml + kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml ``` diff --git a/calico/getting-started/kubernetes/managed-public-cloud/eks.mdx b/calico/getting-started/kubernetes/managed-public-cloud/eks.mdx index 4dd7155c40..c3b80e4952 100644 --- a/calico/getting-started/kubernetes/managed-public-cloud/eks.mdx +++ b/calico/getting-started/kubernetes/managed-public-cloud/eks.mdx @@ -64,7 +64,7 @@ When using the Amazon VPC CNI plugin, $[prodname] does not support enforcement o 1. Install the Tigera Operator and custom resource definitions. ```bash - kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml + kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml ``` @@ -167,7 +167,7 @@ Before you get started, make sure you have downloaded and configured the [necess 1. Install the Tigera Operator and custom resource definitions. ```bash - kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml + kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml ``` diff --git a/calico/getting-started/kubernetes/minikube.mdx b/calico/getting-started/kubernetes/minikube.mdx index 31e852908c..517cdde9e8 100644 --- a/calico/getting-started/kubernetes/minikube.mdx +++ b/calico/getting-started/kubernetes/minikube.mdx @@ -61,7 +61,7 @@ minikube start --cni=calico 2. 
Install the Tigera Operator and custom resource definitions. ```bash - kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml + kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml ``` diff --git a/calico/getting-started/kubernetes/nftables.mdx b/calico/getting-started/kubernetes/nftables.mdx index f9621a2b65..62d5d3a785 100644 --- a/calico/getting-started/kubernetes/nftables.mdx +++ b/calico/getting-started/kubernetes/nftables.mdx @@ -102,7 +102,7 @@ mode: nftables 1. Install the Tigera Operator and custom resource definitions. ```bash - kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml + kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml ``` diff --git a/calico/getting-started/kubernetes/rancher.mdx b/calico/getting-started/kubernetes/rancher.mdx index 7545ae9a91..f9e2bf22cf 100644 --- a/calico/getting-started/kubernetes/rancher.mdx +++ b/calico/getting-started/kubernetes/rancher.mdx @@ -46,7 +46,7 @@ The geeky details of what you get: 1. Install the Tigera Operator and custom resource definitions. ``` - kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml + kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml ``` diff --git a/calico/getting-started/kubernetes/self-managed-onprem/onpremises.mdx b/calico/getting-started/kubernetes/self-managed-onprem/onpremises.mdx index cf186ab4a7..55f1051c2d 100644 --- a/calico/getting-started/kubernetes/self-managed-onprem/onpremises.mdx +++ b/calico/getting-started/kubernetes/self-managed-onprem/onpremises.mdx @@ -44,7 +44,7 @@ $[prodname] can also be installed using raw manifests as an alternative to the o 1. Install the Tigera Operator and custom resource definitions. 
``` - kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml + kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml ``` diff --git a/calico/getting-started/kubernetes/self-managed-public-cloud/gce.mdx b/calico/getting-started/kubernetes/self-managed-public-cloud/gce.mdx index 4f8c6759e2..ab98eda449 100644 --- a/calico/getting-started/kubernetes/self-managed-public-cloud/gce.mdx +++ b/calico/getting-started/kubernetes/self-managed-public-cloud/gce.mdx @@ -193,7 +193,7 @@ worker-2 NotReady 5s v1.17.2 1. On the controller, install the Tigera Operator and custom resource definitions: ```bash - kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml + kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml ``` diff --git a/calico/getting-started/kubernetes/vpp/getting-started.mdx b/calico/getting-started/kubernetes/vpp/getting-started.mdx index e5cd8fdab9..35342d920f 100644 --- a/calico/getting-started/kubernetes/vpp/getting-started.mdx +++ b/calico/getting-started/kubernetes/vpp/getting-started.mdx @@ -89,7 +89,7 @@ Before you get started, make sure you have downloaded and configured the [necess 1. Now that you have an empty cluster configured, you can install the Tigera Operator and custom resource definitions. ```bash - kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml + kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml ``` @@ -158,7 +158,7 @@ DPDK provides better performance compared to the standard install but it require 1. Now that you have an empty cluster configured, you can install the Tigera Operator and custom resource definitions. 
```bash - kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml + kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml ``` @@ -259,7 +259,7 @@ For some hardware, the following hugepages configuration may enable VPP to use m 1. Install the Tigera Operator and custom resource definitions. ```bash - kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml + kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml ``` diff --git a/calico/getting-started/kubernetes/windows-calico/rancher.mdx b/calico/getting-started/kubernetes/windows-calico/rancher.mdx index c0a86e8c17..34f63b08ac 100644 --- a/calico/getting-started/kubernetes/windows-calico/rancher.mdx +++ b/calico/getting-started/kubernetes/windows-calico/rancher.mdx @@ -36,7 +36,7 @@ The following steps will outline the installation of $[prodname] networking on t 1. Install the Tigera Operator and custom resource definitions. ``` - kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml + kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml ``` diff --git a/calico/network-policy/policy-tiers/rbac-tiered-policies.mdx b/calico/network-policy/policy-tiers/rbac-tiered-policies.mdx index 3403cf158d..fc8998080c 100644 --- a/calico/network-policy/policy-tiers/rbac-tiered-policies.mdx +++ b/calico/network-policy/policy-tiers/rbac-tiered-policies.mdx @@ -19,7 +19,13 @@ Self-service is an important part of CI/CD processes for containerization and mi ### Standard Kubernetes RBAC -$[prodname] implements the standard **Kubernetes RBAC Authorization APIs** with `Role` and `ClusterRole` types. The $[prodname] API server integrates with Kubernetes RBAC Authorization APIs as an extension API server. 
+$[prodname] implements the standard **Kubernetes RBAC Authorization APIs** with `Role` and `ClusterRole` types. In the default installation, the $[prodname] API server integrates with Kubernetes RBAC Authorization APIs as an extension API server. When using [native v3 CRDs](../../operations/native-v3-crds.mdx), tier RBAC is enforced via an admission webhook instead. + +:::note + +When using native `projectcalico.org/v3` CRDs, tier RBAC is enforced for **create, update, and delete** operations via the admission webhook. However, **GET, LIST, and WATCH** operations on tiered policies are not enforced because admission webhooks cannot intercept read requests. This is a known limitation. + +::: ### RBAC for policies and tiers diff --git a/calico/networking/ingress-gateway/tutorial-ingress-gateway-canary.mdx b/calico/networking/ingress-gateway/tutorial-ingress-gateway-canary.mdx index 78a38e477f..802675f066 100644 --- a/calico/networking/ingress-gateway/tutorial-ingress-gateway-canary.mdx +++ b/calico/networking/ingress-gateway/tutorial-ingress-gateway-canary.mdx @@ -107,7 +107,7 @@ If you've already got a suitable environment, start the tutorial at [Step 2: Cre 1. Install Calico Open Source by adding custom resource definitions, the Tigera Operator, and the custom resources. - {`kubectl create -f ${files}/manifests/operator-crds.yaml + {`kubectl create -f ${files}/manifests/v1_crd_projectcalico_org.yaml kubectl create -f ${files}/manifests/tigera-operator.yaml kubectl create -f ${files}/manifests/custom-resources.yaml`} diff --git a/calico/operations/calicoctl/install.mdx b/calico/operations/calicoctl/install.mdx index 289930d82a..3a0a2cf019 100644 --- a/calico/operations/calicoctl/install.mdx +++ b/calico/operations/calicoctl/install.mdx @@ -48,7 +48,7 @@ should still be used to manage other Kubernetes resources. :::note If you would like to use `kubectl` to manage `projectcalico.org/v3` API resources, you can use the -[Calico API server](../install-apiserver.mdx). 
+[Calico API server](../install-apiserver.mdx). Alternatively, when using [native v3 CRDs](../native-v3-crds.mdx), `kubectl` can manage `projectcalico.org/v3` resources directly as native CRDs without needing the API server or calicoctl for resource management. ::: diff --git a/calico/operations/install-apiserver.mdx b/calico/operations/install-apiserver.mdx index 08a1c4a359..e86e1606c2 100644 --- a/calico/operations/install-apiserver.mdx +++ b/calico/operations/install-apiserver.mdx @@ -13,6 +13,12 @@ import TabItem from '@theme/TabItem'; Install the Calico API server on an existing cluster to enable management of Calico APIs using kubectl. +:::tip + +Starting in Calico v3.32.0, you can use [native v3 CRDs](native-v3-crds.mdx) to manage `projectcalico.org/v3` resources directly with `kubectl` without installing the aggregation API server. If you are setting up a new cluster and want a simpler architecture, consider native v3 CRDs instead. + +::: + ## Value The API server provides a REST API for Calico, and allows management of `projectcalico.org/v3` APIs using kubectl without the need for calicoctl. @@ -39,6 +45,8 @@ in this document are not required. In previous releases, calicoctl has been required to manage Calico API resources in the `projectcalico.org/v3` API group. The calicoctl CLI tool provides important validation and defaulting on these APIs. The Calico API server performs that defaulting and validation server-side, exposing the same API semantics without a dependency on calicoctl. +Alternatively, when using [native v3 CRDs](native-v3-crds.mdx), `projectcalico.org/v3` resources are native CRDs, so `kubectl` works directly without needing either the API server or calicoctl for resource management. + calicoctl is still required for the following subcommands: - [calicoctl node](../reference/calicoctl/node/index.mdx) @@ -55,14 +63,16 @@ Select the method below based on your installation method. -1. 
Create an instance of an `operator.tigera.io/APIServer` with the following contents. +1. Create an instance of an `operator.tigera.io/APIServer` with the following command. - ```yaml + ```bash + kubectl create -f - < -Once removed, you will need to use calicoctl to manage projectcalico.org/v3 APIs. +Once removed, you will need to use calicoctl to manage projectcalico.org/v3 APIs, unless you are using [native v3 CRDs](native-v3-crds.mdx) where `kubectl` works directly. ## Next steps diff --git a/calico/operations/native-v3-crds.mdx b/calico/operations/native-v3-crds.mdx new file mode 100644 index 0000000000..fb1776a5ab --- /dev/null +++ b/calico/operations/native-v3-crds.mdx @@ -0,0 +1,190 @@ +--- +description: Enable native projectcalico.org/v3 CRDs to use Calico resources directly as CRDs without the aggregation API server. +--- + +# Enable native v3 CRDs + +:::note + +This feature is tech preview. Tech preview features may be subject to significant changes before they become GA. + +::: + +## Big picture + +Enable native `projectcalico.org/v3` CRDs so that Calico resources are backed directly by CRDs, eliminating the need for the Calico aggregation API server. + +## Value + +By default, $[prodname] uses an aggregation API server to serve `projectcalico.org/v3` APIs, storing resources internally as `crd.projectcalico.org/v1` CRDs. 
When using native `projectcalico.org/v3` CRDs, Calico resources are CRDs themselves, which provides several benefits: + +- **Simpler architecture** — no aggregation API server to deploy and manage +- **GitOps-friendly** — no ordering dependencies between CRDs and the API server, so tools like ArgoCD and Flux can apply resources in any order +- **Less platform friction** — removes the need for host-network pods and other requirements of the aggregation API server +- **kubectl works directly** — manage `projectcalico.org/v3` resources with `kubectl` without installing the API server separately +- **Native Kubernetes validation and defaulting** — uses CEL validation rules embedded in the CRD schemas and MutatingAdmissionPolicies for defaulting, leveraging built-in Kubernetes mechanisms instead of a custom API server + +## Concepts + +### How native `projectcalico.org/v3` CRDs work + +When using native `projectcalico.org/v3` CRDs: + +- $[prodname] resources use the `projectcalico.org/v3` API group and are registered as native Kubernetes CRDs. +- The `APIServer` custom resource is still created, but instead of running the aggregation API server, it deploys a webhooks pod that handles validation and defaulting via admission policies. +- $[prodname] auto-detects the mode at startup based on which CRDs are installed on the cluster. If the `projectcalico.org/v3` CRDs are present, it uses them natively; if the `crd.projectcalico.org/v1` CRDs are present, it runs in API server mode. + +### Validation and defaulting + +When using native `projectcalico.org/v3` CRDs, resource validation and defaulting are handled by the CRD schemas themselves, supplemented by ValidatingAdmissionPolicies and MutatingAdmissionPolicies. $[prodname] uses [MutatingAdmissionPolicies](https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/) for defaulting, which are currently a **beta** Kubernetes feature. 
You must ensure that the `MutatingAdmissionPolicy` feature gate is enabled on your Kubernetes API server before using native `projectcalico.org/v3` CRDs. + +## Before you begin + +- A Kubernetes cluster **without** $[prodname] installed, or a cluster where you are performing a fresh install. There is no automated migration tooling from an existing API server mode cluster to native `projectcalico.org/v3` CRDs at this time. +- The `MutatingAdmissionPolicy` [feature gate](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/) must be enabled on the Kubernetes API server. This feature is beta in Kubernetes and is not enabled by default. + +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +## How to + +### Install $[prodname] with native `projectcalico.org/v3` CRDs + +Select the method below based on your preferred installation method. + + + + +1. Add the $[prodname] Helm repo: + + ```bash + helm repo add projectcalico https://docs.tigera.io/calico/charts + ``` + +1. Create the `tigera-operator` namespace: + + ```bash + kubectl create namespace tigera-operator + ``` + +1. Install the v3 CRD chart instead of the default v1 CRD chart: + + ```bash + helm install calico-crds projectcalico/projectcalico.org.v3 --version $[releaseTitle] --namespace tigera-operator + ``` + + :::note + + This replaces the `crd.projectcalico.org.v1` chart used in the default installation. Do not install both CRD charts. + + ::: + +1. Install the Tigera Operator: + + ```bash + helm install $[prodnamedash] projectcalico/tigera-operator --version $[releaseTitle] --namespace tigera-operator + ``` + + If you have a `values.yaml` with custom configuration: + + ```bash + helm install $[prodnamedash] projectcalico/tigera-operator --version $[releaseTitle] -f values.yaml --namespace tigera-operator + ``` + + + + +1. 
Install the v3 CRDs: + + ```bash + kubectl create -f $[manifestsUrl]/manifests/v3_projectcalico_org.yaml + ``` + + :::note + + This replaces the `v1_crd_projectcalico_org.yaml` manifest used in the default installation. Do not install both CRD manifests. + + ::: + +1. Install the Tigera Operator: + + ```bash + kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml + ``` + + + + +After installing, complete the following steps: + +1. Create the `APIServer` CR to deploy the webhooks pod. This does **not** run the aggregation API server — instead it deploys admission webhooks that handle validation and defaulting. + + ```bash + kubectl create -f - < -o yaml +``` + +### Tier RBAC enforcement + +In both modes, tier-based RBAC uses the same `ClusterRole` and `RoleBinding` definitions with pseudo-resources like `tier.networkpolicies` and `tier.globalnetworkpolicies`. + +In API server mode, tier RBAC is enforced for all operations (create, update, delete, get, list, watch) by the aggregation API server. + +When using native `projectcalico.org/v3` CRDs, tier RBAC is enforced via the admission webhook for **create, update, and delete** operations. However, **GET, LIST, and WATCH** operations on tiered policies are **not enforced** because admission webhooks cannot intercept read operations. This is a known limitation. + +### calicoctl + +`calicoctl` continues to work when using native `projectcalico.org/v3` CRDs but is less necessary since `kubectl` handles Calico resources natively. 
`calicoctl` is still useful for: + +- [calicoctl node](../reference/calicoctl/node/index.mdx) subcommands +- [calicoctl ipam](../reference/calicoctl/ipam/index.mdx) subcommands +- [calicoctl convert](../reference/calicoctl/convert.mdx) +- [calicoctl version](../reference/calicoctl/version.mdx) + +## Known limitations + +- **No automated migration** — There is no automated migration tooling for converting an existing cluster from API server mode to native `projectcalico.org/v3` CRDs. This is planned for a follow-on release. +- **GET/LIST/WATCH tier RBAC not enforced** — Admission webhooks cannot intercept read operations, so tier-based RBAC for GET, LIST, and WATCH is not enforced when using native `projectcalico.org/v3` CRDs. diff --git a/calico/operations/operator-migration.mdx b/calico/operations/operator-migration.mdx index 47653fbed2..343b2577bc 100644 --- a/calico/operations/operator-migration.mdx +++ b/calico/operations/operator-migration.mdx @@ -57,7 +57,7 @@ Do not edit or delete any resources in the `kube-system` Namespace during the fo 1. Install the Tigera Operator and custom resource definitions. ```bash - kubectl apply --server-side --force-conflicts -f $[manifestsUrl]/manifests/operator-crds.yaml + kubectl apply --server-side --force-conflicts -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml kubectl apply --server-side --force-conflicts -f $[manifestsUrl]/manifests/tigera-operator.yaml ``` diff --git a/calico/operations/upgrading/kubernetes-upgrade.mdx b/calico/operations/upgrading/kubernetes-upgrade.mdx index 4eefcb3fe2..1f9424d8a0 100644 --- a/calico/operations/upgrading/kubernetes-upgrade.mdx +++ b/calico/operations/upgrading/kubernetes-upgrade.mdx @@ -47,7 +47,7 @@ owned resources being garbage collected by Kubernetes. 1. 
Apply the $[version] CRDs: ```bash - kubectl apply --server-side --force-conflicts -f $[manifestsUrl]/manifests/operator-crds.yaml + kubectl apply --server-side --force-conflicts -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml ``` 1. Run the helm upgrade: @@ -61,14 +61,14 @@ owned resources being garbage collected by Kubernetes. 1. Download the Tigera Operator manifest and custom resource definitions. ```bash - curl $[manifestsUrl]/manifests/operator-crds.yaml -O + curl $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml -O curl $[manifestsUrl]/manifests/tigera-operator.yaml -O ``` 1. Use the following command to initiate an upgrade. ```bash - kubectl apply --server-side --force-conflicts -f operator-crds.yaml + kubectl apply --server-side --force-conflicts -f v1_crd_projectcalico_org.yaml kubectl apply --server-side --force-conflicts -f tigera-operator.yaml ``` diff --git a/calico/reference/architecture/overview.mdx b/calico/reference/architecture/overview.mdx index 34324bf5f3..7ace03d949 100644 --- a/calico/reference/architecture/overview.mdx +++ b/calico/reference/architecture/overview.mdx @@ -32,6 +32,8 @@ The following diagram shows the required and optional $[prodname] components for **Main task**: Lets you manage $[prodname] resources directly with `kubectl`. +In the default installation, this is an aggregation API server that translates between `projectcalico.org/v3` and internal CRD representations. When using [native v3 CRDs](../../operations/native-v3-crds.mdx), this component is not used — `kubectl` works directly with `projectcalico.org/v3` CRDs, and validation and defaulting are handled by admission policies instead. + ## Felix **Main task**: Programs routes and ACLs, and anything else required on the host to provide desired connectivity for the endpoints on that host. Runs on each machine that hosts endpoints. Runs as an agent daemon. [Felix resource](../resources/felixconfig.mdx). 
diff --git a/calico/reference/installation/helm_customization.mdx b/calico/reference/installation/helm_customization.mdx index e05eebc80c..eb11ea05e9 100644 --- a/calico/reference/installation/helm_customization.mdx +++ b/calico/reference/installation/helm_customization.mdx @@ -10,8 +10,9 @@ You can customize the following resources and settings during $[prodname] Helm-b - [Default felix configuration](../resources/felixconfig.mdx#spec) :::note -If you customize felix configuration when you install $[prodname], the `v1 apiVersion` is used. However, when you apply -felix configuration customization after installation (when the tigera-apiserver is running), use the `v3 apiVersion`. +If you customize felix configuration when you install $[prodname], the `crd.projectcalico.org/v1` API group is used. However, when you apply +felix configuration customization after installation (when the tigera-apiserver is running), use the `projectcalico.org/v3` API group. +When using [native v3 CRDs](../../operations/native-v3-crds.mdx), only the `projectcalico.org/v3` API group is installed and should always be used. ::: ### Sample values.yaml diff --git a/calico/reference/resources/ippool.mdx b/calico/reference/resources/ippool.mdx index 215188859a..734d614bcc 100644 --- a/calico/reference/resources/ippool.mdx +++ b/calico/reference/resources/ippool.mdx @@ -40,7 +40,7 @@ spec: | Field | Description | Accepted Values | Schema | Default | | ---------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------- | --------------------------------------------- | -| cidr | IP range to use for this pool. 
| A valid IPv4 or IPv6 CIDR. Subnet length must be at least big enough to fit a single block (by default `/26` for IPv4 or `/122` for IPv6). Must not overlap with the Link Local range `169.254.0.0/16` or `fe80::/10`. | string | | +| cidr | IP range to use for this pool. See [CIDR overlap validation](#cidr-overlap-validation) for details on overlap behavior. | A valid IPv4 or IPv6 CIDR. Subnet length must be at least big enough to fit a single block (by default `/26` for IPv4 or `/122` for IPv6). Must not overlap with the Link Local range `169.254.0.0/16` or `fe80::/10`. | string | | | blockSize | The CIDR size of allocation blocks used by this pool. Blocks are allocated on demand to hosts and are used to aggregate routes. The value can only be set when the pool is created. | 20 to 32 (inclusive) for IPv4 and 116 to 128 (inclusive) for IPv6 | int | `26` for IPv4 pools and `122` for IPv6 pools. | | ipipMode | The mode defining when IPIP will be used. Cannot be set at the same time as `vxlanMode`. | Always, CrossSubnet, Never | string | `Never` | | vxlanMode | The mode defining when VXLAN will be used. Cannot be set at the same time as `ipipMode`. | Always, CrossSubnet, Never | string | `Never` | @@ -78,6 +78,12 @@ addresses. $[prodname] supports Kubernetes [annotations that force the use of specific IP addresses](../configure-cni-plugins.mdx#requesting-a-specific-ip-address). These annotations take precedence over the `allowedUses` field. +### CIDR overlap validation + +By default (API server mode), creating an IPPool with a CIDR that overlaps an existing pool is rejected synchronously at creation time. + +When using [native v3 CRDs](../../operations/native-v3-crds.mdx), CIDR overlap validation is **asynchronous**. Pools with overlapping CIDRs are created successfully but receive a `Disabled` status condition. IPAM does not allocate addresses from disabled pools. Check the IPPool status to identify any pools that have been disabled due to CIDR overlap. 
+ ### IPIP Routing of packets using IP-in-IP will be used when the destination IP address diff --git a/sidebars-calico.js b/sidebars-calico.js index f4845c6b89..5b6dabd920 100644 --- a/sidebars-calico.js +++ b/sidebars-calico.js @@ -536,6 +536,7 @@ module.exports = { 'operations/datastore-migration', 'operations/operator-migration', 'operations/install-apiserver', + 'operations/native-v3-crds', { type: 'category', label: 'Monitor',