12 changes: 12 additions & 0 deletions calico/getting-started/kubernetes/helm.mdx
@@ -85,6 +85,18 @@
helm install calico-crds projectcalico/crd.projectcalico.org.v1 --version $[releaseTitle] --namespace tigera-operator
```

:::tip

To install with [native v3 CRDs](../../operations/native-v3-crds.mdx) (tech preview) instead, use the v3 CRD chart:

```bash
helm install calico-crds projectcalico/projectcalico.org.v3 --version $[releaseTitle] --namespace tigera-operator
```

Native v3 CRDs eliminate the need for the aggregation API server and allow `kubectl` to manage `projectcalico.org/v3` resources directly.

:::

1. Install the Tigera Operator using the Helm chart:

```bash
…
```
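The tip above says native v3 CRDs let `kubectl` manage `projectcalico.org/v3` resources directly. As a minimal sketch of what that looks like once the v3 CRD chart is installed — the tier name and order here are illustrative assumptions, not part of this PR:

```yaml
# Hypothetical example: with native v3 CRDs installed, kubectl can
# create projectcalico.org/v3 resources directly, with no aggregated
# API server and no calicoctl in the path.
apiVersion: projectcalico.org/v3
kind: Tier
metadata:
  name: security # example tier name (assumption)
spec:
  order: 100 # lower order means the tier is evaluated earlier
```

Applying this with `kubectl create -f tier.yaml` would fail on a cluster without the v3 CRDs (or the aggregated API server), because the `projectcalico.org/v3` group would not be served.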
@@ -97,7 +97,7 @@ curl -sfL https://get.k3s.io | K3S_URL=https://serverip:6443 K3S_TOKEN=mytoken s
Install the $[prodname] operator and custom resource definitions.

```bash
-kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml
+kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml
# Reviewer note (Member Author): FYI the old URL will continue to exist for a while for compatibility reasons, but this new URL is more explicit and is the counterpart to the v3 version.

kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml
```

2 changes: 1 addition & 1 deletion calico/getting-started/kubernetes/k3s/quickstart.mdx
@@ -68,7 +68,7 @@ you are assigning necessary permissions to the file and make it accessible for o
1. Install the $[prodname] operator and custom resource definitions.

```bash
-kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml
+kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml
kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml
```

2 changes: 1 addition & 1 deletion calico/getting-started/kubernetes/k8s-single-node.mdx
@@ -92,7 +92,7 @@ The geeky details of what you get:
1. Install the Tigera Operator and custom resource definitions.

```
-kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml
+kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml
kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml
```

2 changes: 1 addition & 1 deletion calico/getting-started/kubernetes/kind.mdx
@@ -82,7 +82,7 @@ dev-worker2 NotReady <none> 4m v1.25.0 172.18.0.3 <
1. Install the $[prodname] operator and custom resource definitions.

```bash
-kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml
+kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml
kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml
```

@@ -115,7 +115,7 @@ The geeky details of what you get:
1. Install the Tigera Operator and custom resource definitions:

```bash
-kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml
+kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml
kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml
```

@@ -216,7 +216,7 @@ The geeky details of what you get:
1. Install the Tigera Operator and custom resource definitions.

```bash
-kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml
+kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml
kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml
```

@@ -64,7 +64,7 @@ When using the Amazon VPC CNI plugin, $[prodname] does not support enforcement o
1. Install the Tigera Operator and custom resource definitions.

```bash
-kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml
+kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml
kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml
```

@@ -167,7 +167,7 @@ Before you get started, make sure you have downloaded and configured the [necess
1. Install the Tigera Operator and custom resource definitions.

```bash
-kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml
+kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml
kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml
```

2 changes: 1 addition & 1 deletion calico/getting-started/kubernetes/minikube.mdx
@@ -61,7 +61,7 @@ minikube start --cni=calico
2. Install the Tigera Operator and custom resource definitions.

```bash
-kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml
+kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml
kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml
```

2 changes: 1 addition & 1 deletion calico/getting-started/kubernetes/nftables.mdx
@@ -102,7 +102,7 @@ mode: nftables
1. Install the Tigera Operator and custom resource definitions.

```bash
-kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml
+kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml
kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml
```

2 changes: 1 addition & 1 deletion calico/getting-started/kubernetes/rancher.mdx
@@ -46,7 +46,7 @@ The geeky details of what you get:
1. Install the Tigera Operator and custom resource definitions.

```
-kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml
+kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml
kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml
```

@@ -44,7 +44,7 @@ $[prodname] can also be installed using raw manifests as an alternative to the o
1. Install the Tigera Operator and custom resource definitions.

```
-kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml
+kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml
kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml
```

@@ -193,7 +193,7 @@ worker-2 NotReady <none> 5s v1.17.2
1. On the controller, install the Tigera Operator and custom resource definitions:

```bash
-kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml
+kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml
kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml
```

6 changes: 3 additions & 3 deletions calico/getting-started/kubernetes/vpp/getting-started.mdx
@@ -89,7 +89,7 @@ Before you get started, make sure you have downloaded and configured the [necess
1. Now that you have an empty cluster configured, you can install the Tigera Operator and custom resource definitions.

```bash
-kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml
+kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml
kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml
```

@@ -158,7 +158,7 @@ DPDK provides better performance compared to the standard install but it require
1. Now that you have an empty cluster configured, you can install the Tigera Operator and custom resource definitions.

```bash
-kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml
+kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml
kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml
```

@@ -259,7 +259,7 @@ For some hardware, the following hugepages configuration may enable VPP to use m
1. Install the Tigera Operator and custom resource definitions.

```bash
-kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml
+kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml
kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml
```

@@ -36,7 +36,7 @@ The following steps will outline the installation of $[prodname] networking on t
1. Install the Tigera Operator and custom resource definitions.

```
-kubectl create -f $[manifestsUrl]/manifests/operator-crds.yaml
+kubectl create -f $[manifestsUrl]/manifests/v1_crd_projectcalico_org.yaml
kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml
```

@@ -19,7 +19,13 @@

### Standard Kubernetes RBAC

-$[prodname] implements the standard **Kubernetes RBAC Authorization APIs** with `Role` and `ClusterRole` types. The $[prodname] API server integrates with Kubernetes RBAC Authorization APIs as an extension API server.
+$[prodname] implements the standard **Kubernetes RBAC Authorization APIs** with `Role` and `ClusterRole` types. In the default installation, the $[prodname] API server integrates with Kubernetes RBAC Authorization APIs as an extension API server. When using [native v3 CRDs](../../operations/native-v3-crds.mdx), tier RBAC is enforced via an admission webhook instead.

:::note

When using native `projectcalico.org/v3` CRDs, tier RBAC is enforced for **create**, **update**, and **delete** operations via the admission webhook. However, **get**, **list**, and **watch** operations on tiered policies are not enforced, because admission webhooks cannot intercept read requests. This is a known limitation.

:::
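For context, tiered-policy RBAC combines access to the tier object itself with access to the policies inside it. A minimal sketch, assuming a tier named `security` (the role name and tier name are illustrative; the `tier.networkpolicies` pseudo-resource and `<tier>.*` resource-name pattern follow the tiered-policy RBAC convention documented for $[prodname] — verify against the examples below):

```yaml
# Illustrative ClusterRole sketch (names are assumptions, not from this PR):
# read access to the "security" tier, plus full access to the
# NetworkPolicies that live inside that tier.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tier-security-policy-editor
rules:
  - apiGroups: ['projectcalico.org']
    resources: ['tiers']
    resourceNames: ['security']
    verbs: ['get']
  - apiGroups: ['projectcalico.org']
    resources: ['tier.networkpolicies']
    resourceNames: ['security.*']
    verbs: ['*']
```

Note that under the admission-webhook enforcement described in the note above, only the write verbs in such a role are actually enforced for tiered policies; the read verbs are not.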

### RBAC for policies and tiers

@@ -107,7 +107,7 @@ If you've already got a suitable environment, start the tutorial at [Step 2: Cre

1. Install Calico Open Source by adding custom resource definitions, the Tigera Operator, and the custom resources.
<CodeBlock>
-{`kubectl create -f ${files}/manifests/operator-crds.yaml
+{`kubectl create -f ${files}/manifests/v1_crd_projectcalico_org.yaml
kubectl create -f ${files}/manifests/tigera-operator.yaml
kubectl create -f ${files}/manifests/custom-resources.yaml`}
</CodeBlock>
2 changes: 1 addition & 1 deletion calico/operations/calicoctl/install.mdx
@@ -48,7 +48,7 @@
:::note

If you would like to use `kubectl` to manage `projectcalico.org/v3` API resources, you can use the
-[Calico API server](../install-apiserver.mdx).
+[Calico API server](../install-apiserver.mdx). Alternatively, when using [native v3 CRDs](../native-v3-crds.mdx), `kubectl` can manage `projectcalico.org/v3` resources directly, without the API server or calicoctl.

:::

16 changes: 13 additions & 3 deletions calico/operations/install-apiserver.mdx
@@ -13,6 +13,12 @@

Install the Calico API server on an existing cluster to enable management of Calico APIs using kubectl.

:::tip

Starting in Calico v3.32.0, you can use [native v3 CRDs](native-v3-crds.mdx) to manage `projectcalico.org/v3` resources directly with `kubectl` without installing the aggregation API server. If you are setting up a new cluster and want a simpler architecture, consider native v3 CRDs instead.

:::

## Value

The API server provides a REST API for Calico, and allows management of `projectcalico.org/v3` APIs using kubectl without the need for calicoctl.
@@ -39,6 +45,8 @@
In previous releases, calicoctl has been required to manage Calico API resources in the `projectcalico.org/v3` API group. The calicoctl CLI tool provides important validation and defaulting on these APIs. The Calico API server performs
that defaulting and validation server-side, exposing the same API semantics without a dependency on calicoctl.

Alternatively, when using [native v3 CRDs](native-v3-crds.mdx), `projectcalico.org/v3` resources are native CRDs, so `kubectl` works directly without needing either the API server or calicoctl for resource management.
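To make the difference concrete, here is an illustrative `projectcalico.org/v3` resource that `kubectl` can manage directly under native v3 CRDs (the policy name, selector, and rules are assumptions for the sketch, not from this PR):

```yaml
# Illustrative only: a minimal projectcalico.org/v3 GlobalNetworkPolicy.
# With native v3 CRDs this round-trips through kubectl as an ordinary
# CRD; with the aggregated API server it goes through the extension
# API instead.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-metadata-egress # example name (assumption)
spec:
  selector: all()
  types:
    - Egress
  egress:
    - action: Deny
      destination:
        nets:
          - 169.254.169.254/32 # block cloud metadata endpoint (example)
    - action: Allow
```

In both setups the validation and defaulting happen server-side rather than in calicoctl.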

calicoctl is still required for the following subcommands:

- [calicoctl node](../reference/calicoctl/node/index.mdx)
@@ -55,14 +63,16 @@
<Tabs>
<TabItem label="Operator install" value="Operator install-0">

-1. Create an instance of an `operator.tigera.io/APIServer` with the following contents.
+1. Create an instance of an `operator.tigera.io/APIServer` with the following command.

-```yaml
+```bash
kubectl create -f - <<EOF
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
name: default
spec: {}
EOF
```

1. Confirm it appears as `Available` with the following command.
@@ -183,7 +193,7 @@
</TabItem>
</Tabs>

-Once removed, you will need to use calicoctl to manage projectcalico.org/v3 APIs.
+Once removed, you will need to use calicoctl to manage projectcalico.org/v3 APIs, unless you are using [native v3 CRDs](native-v3-crds.mdx), in which case `kubectl` continues to work directly.

## Next steps
