Merged

NOP-7 #106087

53 changes: 30 additions & 23 deletions modules/configuring-dns-forwarding-with-tls.adoc
@@ -6,6 +6,9 @@
[id="configuring-dns-forwarding-with-tls_{context}"]
= Configuring DNS forwarding with TLS

[role="_abstract"]
Configure DNS forwarding with TLS to secure queries to upstream resolvers.

When working in a highly regulated environment, you might need to secure DNS traffic when forwarding requests to upstream resolvers so that you can ensure additional DNS traffic security and data privacy.

Be aware that CoreDNS caches forwarded connections for 10 seconds. CoreDNS holds a TCP connection open for those 10 seconds if no request is issued.
@@ -42,42 +45,44 @@ metadata:
name: default
spec:
servers:
- name: example-server <1>
  - name: example-server
zones:
- example.com <2>
- example.com
forwardPlugin:
transportConfig:
transport: TLS <3>
transport: TLS
tls:
caBundle:
name: mycacert
serverName: dnstls.example.com <4>
policy: Random <5>
upstreams: <6>
serverName: dnstls.example.com
policy: Random
upstreams:
- 1.1.1.1
- 2.2.2.2:5353
upstreamResolvers: <7>
upstreamResolvers:
transportConfig:
transport: TLS
tls:
caBundle:
name: mycacert
serverName: dnstls.example.com
upstreams:
- type: Network <8>
address: 1.2.3.4 <9>
port: 53 <10>
- type: Network
address: 1.2.3.4
port: 53
----
<1> Must comply with the `rfc6335` service name syntax.
<2> Must conform to the definition of a subdomain in the `rfc1123` service name syntax. The cluster domain, `cluster.local`, is an invalid subdomain for the `zones` field.
<3> When configuring TLS for forwarded DNS queries, set the `transport` field to have the value `TLS`.
<4> When configuring TLS for forwarded DNS queries, this is a mandatory server name used as part of the server name indication (SNI) to validate the upstream TLS server certificate.
<5> Defines the policy to select upstream resolvers. Default value is `Random`. You can also use the values `RoundRobin`, and `Sequential`.
<6> Required. Use it to provide upstream resolvers. A maximum of 15 `upstreams` entries are allowed per `forwardPlugin` entry.
<7> Optional. You can use it to override the default policy and forward DNS resolution to the specified DNS resolvers (upstream resolvers) for the default domain. If you do not provide any upstream resolvers, the DNS name queries go to the servers in `/etc/resolv.conf`.
<8> Only the `Network` type is allowed when using TLS and you must provide an IP address. `Network` type indicates that this upstream resolver should handle forwarded requests separately from the upstream resolvers listed in `/etc/resolv.conf`.
<9> The `address` field must be a valid IPv4 or IPv6 address.
<10> You can optionally provide a port. The `port` must have a value between `1` and `65535`. If you do not specify a port for the upstream, the default port is 853.
+
where:

`spec.servers.name`:: Must comply with the `rfc6335` service name syntax.
`spec.servers.zones`:: Must conform to the `rfc1123` subdomain syntax. The cluster domain, `cluster.local`, is invalid for `zones`.
`spec.servers.forwardPlugin.transportConfig.transport`:: Must be set to `TLS` when configuring TLS forwarding.
`spec.servers.forwardPlugin.transportConfig.tls.serverName`:: Must be set to the server name indication (SNI) server name used to validate the upstream TLS certificate.
`spec.servers.forwardPlugin.policy`:: Specifies the upstream selection policy. Defaults to `Random`; valid values are `RoundRobin` and `Sequential`.
`spec.servers.forwardPlugin.upstreams`:: Must provide upstream resolvers; maximum 15 entries per `forwardPlugin`.
`spec.upstreamResolvers.upstreams`:: Specifies an optional field to override the default policy for the default domain. Use the `Network` type only when TLS is enabled and provide an IP address. If omitted, queries use `/etc/resolv.conf`.
`spec.upstreamResolvers.upstreams.address`:: Must be a valid IPv4 or IPv6 address.
`spec.upstreamResolvers.upstreams.port`:: Specifies an optional field to provide a port number. Valid values are between `1` and `65535`; defaults to 853 when omitted.
+
[NOTE]
====
@@ -103,7 +108,7 @@ data:
forward . 1.1.1.1 2.2.2.2:5353
}
bar.com:5353 example.com:5353 {
forward . 3.3.3.3 4.4.4.4:5454 <1>
forward . 3.3.3.3 4.4.4.4:5454
}
.:5353 {
errors
@@ -127,9 +132,11 @@ metadata:
name: dns-default
namespace: openshift-dns
----
<1> Changes to the `forwardPlugin` trigger a rolling update of the CoreDNS daemon set.
+

** The `data.Corefile` key contains the Corefile configuration for the DNS server. Changes to the `forwardPlugin` trigger a rolling update of the CoreDNS daemon set.
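When `transport: TLS` is set, the rendered forward stanza in the Corefile typically uses the `tls://` scheme together with a `tls_servername` option. The following is a sketch based on the CoreDNS forward plugin syntax, reusing the names from the example above; the exact Corefile that the DNS Operator generates may differ:

[source,text]
----
example.com:5353 {
    forward . tls://1.1.1.1 tls://2.2.2.2:5353 {
        tls_servername dnstls.example.com
        policy random
    }
}
----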

[role="_additional-resources"]
.Additional resources

* For more information on DNS forwarding, see the link:https://coredns.io/plugins/forward/[CoreDNS forward documentation].
* link:https://coredns.io/plugins/forward/[CoreDNS forward documentation]
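The `Random`, `RoundRobin`, and `Sequential` policy values can be sketched as follows. This is an illustrative model of the upstream-selection behavior, not CoreDNS source code:

```python
import itertools
import random


def make_selector(policy, upstreams):
    """Return a function that yields upstreams in the order a resolver would try them."""
    if policy == "Sequential":
        # Always try upstreams in the order they are listed.
        return lambda: list(upstreams)
    if policy == "RoundRobin":
        counter = itertools.count()

        def rotate():
            # Start one position later on each query, wrapping around.
            i = next(counter) % len(upstreams)
            return upstreams[i:] + upstreams[:i]

        return rotate
    if policy == "Random":
        # Shuffle the full list for every query.
        return lambda: random.sample(upstreams, len(upstreams))
    raise ValueError(f"unknown policy: {policy}")
```

For example, a `RoundRobin` selector over `["1.1.1.1", "2.2.2.2"]` returns the list rotated by one position on each successive query.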
3 changes: 2 additions & 1 deletion modules/k8s-nmstate-deploying-nmstate-CLI.adoc
@@ -6,7 +6,8 @@
[id="installing-the-kubernetes-nmstate-operator-CLI_{context}"]
= Installing the Kubernetes NMState Operator by using the CLI

You can install the Kubernetes NMState Operator by using the OpenShift CLI (`oc`). After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes.
[role="_abstract"]
You can install the Kubernetes NMState Operator by using the OpenShift CLI (`oc`). After it is installed, the Operator deploys the NMState State Controller as a daemon set across all of the cluster nodes to manage the node network state and configuration.

.Prerequisites

@@ -7,6 +7,7 @@
[id="installing-the-kubernetes-nmstate-operator-web-console_{context}"]
= Installing the Kubernetes NMState Operator by using the web console

[role="_abstract"]
You can install the Kubernetes NMState Operator by using the web console. After you install the Kubernetes NMState Operator, the Operator deploys the NMState State Controller as a daemon set across all of the cluster nodes.

.Prerequisites
9 changes: 7 additions & 2 deletions modules/k8s-nmstate-uninstall-operator.adoc
@@ -6,6 +6,9 @@
[id="k8s-nmstate-uninstall-operator_{context}"]
= Uninstalling the Kubernetes NMState Operator

[role="_abstract"]
Remove the Kubernetes NMState Operator and related resources when they are no longer needed.

You can use the {olm-first} to uninstall the Kubernetes NMState Operator, but by design {olm} does not delete any associated custom resource definitions (CRDs), custom resources (CRs), or API Services.

Before you uninstall the Kubernetes NMState Operator from the `Subscription` resource used by {olm}, identify the Kubernetes NMState Operator resources to delete. This identification ensures that you can delete resources without impacting your running cluster.
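For example, you might inventory the NMState resources and CRDs before deleting anything. The following commands are illustrative; the exact set of CRD names on your cluster may vary:

[source,terminal]
----
$ oc get nmstates.nmstate.io
$ oc get crd -o name | grep nmstate.io
----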
@@ -73,9 +76,11 @@ INDEX=$(oc get console.operator.openshift.io cluster -o json | jq -r '.spec.plug
+
[source,terminal]
----
$ oc patch console.operator.openshift.io cluster --type=json -p "[{\"op\": \"remove\", \"path\": \"/spec/plugins/$INDEX\"}]" <1>
$ oc patch console.operator.openshift.io cluster --type=json -p "[{\"op\": \"remove\", \"path\": \"/spec/plugins/$INDEX\"}]"
----
<1> `INDEX` is an auxiliary variable. You can specify a different name for this variable.
+

** `INDEX` is an auxiliary variable. You can specify a different name for this variable.

. Delete all the custom resource definitions (CRDs), such as `nmstates.nmstate.io`, by running the following commands:
+
5 changes: 4 additions & 1 deletion modules/nw-bpfman-infw-about.adoc
@@ -2,10 +2,13 @@
//
// * networking/network_security/ingress-node-firewall-operator.adoc

:_mod-docs-content-type: PROCEDURE
:_mod-docs-content-type: CONCEPT
[id="ingress-node-firewall-operator_{context}"]
= Ingress Node Firewall Operator integration

[role="_abstract"]
Learn when to use eBPF Manager to load and manage Ingress Node Firewall programs.

The Ingress Node Firewall uses link:https://www.kernel.org/doc/html/latest/bpf/index.html[eBPF] programs to implement some of its key firewall functionality. By default these eBPF programs are loaded into the kernel using a mechanism specific to the Ingress Node Firewall. You can configure the Ingress Node Firewall Operator to use the eBPF Manager Operator for loading and managing these programs instead.

When this integration is enabled, the following limitations apply:
3 changes: 3 additions & 0 deletions modules/nw-bpfman-infw-configure.adoc
@@ -6,6 +6,9 @@
[id="bpfman-infw-configure_{context}"]
= Configuring Ingress Node Firewall Operator to use the eBPF Manager Operator

[role="_abstract"]
Configure the Ingress Node Firewall to use eBPF Manager for program lifecycle control.

The Ingress Node Firewall uses link:https://www.kernel.org/doc/html/latest/bpf/index.html[eBPF] programs to implement some of its key firewall functionality. By default these eBPF programs are loaded into the kernel using a mechanism specific to the Ingress Node Firewall.

As a cluster administrator, you can configure the Ingress Node Firewall Operator to use the eBPF Manager Operator for loading and managing these programs instead, adding additional security and observability functionality.
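As a sketch, enabling the integration might look like the following `IngressNodeFirewallConfig`. The `ebpfProgramManagerMode` field name and the resource name and namespace are assumptions based on the Operator's API; verify them against the CRD on your cluster:

[source,yaml]
----
apiVersion: ingressnodefirewall.openshift.io/v1alpha1
kind: IngressNodeFirewallConfig
metadata:
  name: ingressnodefirewallconfig
  namespace: openshift-ingress-node-firewall
spec:
  # Assumed field: delegates eBPF program loading to the eBPF Manager Operator.
  ebpfProgramManagerMode: true
----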
51 changes: 32 additions & 19 deletions modules/nw-controlling-dns-pod-placement.adoc
@@ -6,14 +6,17 @@
[id="nw-controlling-dns-pod-placement_{context}"]
= Controlling DNS pod placement

[role="_abstract"]
Control where CoreDNS and node-resolver pods run by using taints, tolerations, and selectors.

The DNS Operator has two daemon sets: one for CoreDNS called `dns-default` and one for managing the `/etc/hosts` file called `node-resolver`.

You can assign and run CoreDNS pods on specified nodes. For example, if the cluster administrator has configured security policies that prohibit communication between pairs of nodes, you can configure CoreDNS pods to run on a restricted set of nodes.

DNS service is available to all pods if the following circumstances are true:

* DNS pods are running on some nodes in the cluster.
* The nodes on which DNS pods are not running have network connectivity to nodes on which DNS pods are running.

The `node-resolver` daemon set must run on every node host because it adds an entry for the cluster image registry to support pulling images. The `node-resolver` pods have only one job: to look up the `image-registry.openshift-image-registry.svc` service's cluster IP address and add it to `/etc/hosts` on the node host so that the container runtime can resolve the service name.
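After the `node-resolver` pod runs, a node's `/etc/hosts` file might contain an entry similar to the following. The cluster IP address shown is illustrative only:

[source,text]
----
# Added by the node-resolver pod (illustrative entry)
172.30.5.15 image-registry.openshift-image-registry.svc image-registry.openshift-image-registry.svc.cluster.local
----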

@@ -33,12 +36,12 @@ As a cluster administrator, you can use a custom node selector to configure the
+
[source,terminal]
----
$ oc adm taint nodes <node_name> dns-only=abc:NoExecute <1>
$ oc adm taint nodes <node_name> dns-only=abc:NoExecute
----
+
<1> Replace `<node_name>` with the actual name of the node.
** Replace `<node_name>` with the actual name of the node.

. Modify the DNS Operator object named `default` to include the corresponding toleration by entering the following command:
+
[source,terminal]
----
@@ -49,28 +52,38 @@ $ oc edit dns.operator/default
+
[source,yaml]
----
spec:
nodePlacement:
tolerations:
- effect: NoExecute
key: "dns-only" <1>
operator: Equal
value: abc
tolerationSeconds: 3600 <2>
apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
name: default
spec:
nodePlacement:
tolerations:
- effect: NoExecute
key: "dns-only"
operator: Equal
value: abc
tolerationSeconds: 3600
----
<1> If the `key` field is set to `dns-only`, it can be tolerated indefinitely.
<2> The `tolerationSeconds` field is optional.
+

** If the `key` field is set to `dns-only`, it can be tolerated indefinitely.
** The `tolerationSeconds` field is optional.

. Optional: To specify node placement using a node selector, modify the default DNS Operator:

.. Edit the DNS Operator object named `default` to include a node selector:
+
[source,yaml]
----
spec:
nodePlacement:
nodeSelector: <1>
node-role.kubernetes.io/control-plane: ""
apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
name: default
spec:
nodePlacement:
nodeSelector:
node-role.kubernetes.io/control-plane: ""
----
+
<1> This node selector ensures that the CoreDNS pods run only on control plane nodes.
** The `nodeSelector` field in the example ensures that the CoreDNS pods run only on control plane nodes.
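To confirm where the CoreDNS pods landed after either change, you might list the pods together with their assigned nodes (standard `oc` usage; output columns vary by cluster):

[source,terminal]
----
$ oc get pods -n openshift-dns -o wide
----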
19 changes: 9 additions & 10 deletions modules/nw-dns-cache-tuning.adoc
@@ -6,11 +6,7 @@
= Tuning the CoreDNS cache

[role="_abstract"]
To reduce the load on upstream DNS resolvers, you can tune the CoreDNS cache by adjusting the duration of positive and negative caching. This process involves modifying the time-to-live (TTL) values within the DNS Operator object to control how long query responses are stored.

For CoreDNS, you can configure the maximum duration of both successful and unsuccessful caching, also known respectively as positive and negative caching. Tuning the cache duration of DNS query responses can reduce the load for any upstream DNS resolvers.

You can shorten the TTL of the DNS record by setting a lower positive cache. You cannot increase the TTL on the DNS record by setting a higher positive cache. The maximum cache is the lower of the TTL of the DNS record or the positive cache.

[WARNING]
====
@@ -37,12 +33,15 @@ metadata:
name: default
spec:
cache:
positiveTTL: 1h <1>
negativeTTL: 0.5h10m <2>
positiveTTL: 1h
negativeTTL: 0.5h10m
----
+
<1> The string value `1h` is converted to its respective number of seconds by CoreDNS. If this field is omitted, the value is assumed to be `0s` and the cluster uses the internal default value of `900s` as a fallback.
<2> The string value can be a combination of units such as `0.5h10m` and is converted to its respective number of seconds by CoreDNS. If this field is omitted, the value is assumed to be `0s` and the cluster uses the internal default value of `30s` as a fallback.

where:

`spec.cache.positiveTTL`:: Specifies a string value that is converted to its respective number of seconds by CoreDNS. If this field is omitted, the value is assumed to be `0s` and the cluster uses the internal default value of `900s` as a fallback.
`spec.cache.negativeTTL`:: Specifies a string value that is converted to its respective number of seconds by CoreDNS. If this field is omitted, the value is assumed to be `0s` and the cluster uses the internal default value of `30s` as a fallback.
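The TTL strings follow Go-style duration syntax, so a value such as `0.5h10m` combines fractional and whole units. The following Python sketch illustrates how such a string maps to seconds; it is an illustration of the conversion, not CoreDNS source code:

```python
import re


def duration_to_seconds(value):
    """Convert a Go-style duration string such as '1h' or '0.5h10m' to whole seconds."""
    units = {"h": 3600, "m": 60, "s": 1}
    # Each component is a (possibly fractional) number followed by a unit letter.
    parts = re.findall(r"(\d+(?:\.\d+)?)([hms])", value)
    if not parts:
        raise ValueError(f"unrecognized duration: {value!r}")
    return int(sum(float(number) * units[unit] for number, unit in parts))
```

For example, `0.5h10m` resolves to 0.5 × 3600 + 10 × 60 = 2400 seconds.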

.Verification

@@ -65,4 +64,4 @@ $ oc get configmap/dns-default -n openshift-dns -o yaml
[role="_additional-resources"]
.Additional resources

For more information on caching, see link:https://coredns.io/plugins/cache/[CoreDNS cache].
* link:https://coredns.io/plugins/cache/[CoreDNS cache]