Conversation
Would you mind sharing some artifacts that verify the changes have been tested and work as expected? These changes are a bit trickier to test since this data is templated by the launcher at runtime and can't be easily validated with […]. Specifically, it would be helpful to see a launcher job successfully running with nested […].
Working with the customer, I was able to create an example that should be reproducible. We found that the following values.yaml reproduces the issue:

# Controls how many instances of Posit Connect are created.
replicas: 1
# Mounts the license file appropriately from the Secret
license:
file:
# Replace with the name of your license file secret and key
secret: rstudio-connect-license
secretKey: rstudio-connect.lic
# Configures shared storage for the Posit Connect pod.
sharedStorage:
create: true
mount: true
# The name of the PVC created for Connect's data directory.
# Also specified by `Launcher.DataDirPVCName` below.
name: rsc-pvc
# The storageClass to use for Connect's data directory. Must
# support RWX.
# Replace with your storage class name if different.
storageClassName: longhorn
requests:
storage: 30G
# Adds an environment variable containing the PostgreSQL password from a Secret
pod:
env:
- name: CONNECT_POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
# Replace with the name of your database password secret and key
name: postgres-db-postgresql
key: postgres-password
securityContext:
appArmorProfile:
type: RuntimeDefault
seccompProfile:
type: RuntimeDefault
runAsUser: 999
runAsGroup: 999
fsGroup: 999
runAsNonRoot: true
volumeMounts:
- mountPath: /etc/rstudio-connect/launcher
name: launcher-configuration
- mountPath: /var/lib/rstudio-connect-launcher
name: launcher-data
volumes:
- name: launcher-configuration
emptyDir:
sizeLimit: 50Mi
- name: launcher-data
emptyDir:
sizeLimit: 100Mi
# Enables building and executing content in isolated Kubernetes pods.
launcher:
enabled: true
templateValues:
pod:
# Pod-level securityContext with AppArmor and Seccomp profiles
# Using custom template v2.5.0 which properly renders complex YAML objects
securityContext:
appArmorProfile:
type: RuntimeDefault
seccompProfile:
type: RuntimeDefault
defaultSecurityContext:
runAsNonRoot: true
appArmorProfile:
type: RuntimeDefault
seccompProfile:
type: RuntimeDefault
# Container-level securityContext for the main runtime container
# These are properly rendered and provide comprehensive security restrictions
containerSecurityContext:
allowPrivilegeEscalation: false
privileged: false
runAsNonRoot: true
capabilities:
drop: ["ALL"]
seccompProfile:
type: RuntimeDefault
appArmorProfile:
type: RuntimeDefault
securityContext:
appArmorProfile:
type: RuntimeDefault
seccompProfile:
type: RuntimeDefault
capabilities:
drop:
- ALL
allowPrivilegeEscalation: false
privileged: false
service:
nodePort: 31911
type: "NodePort"
# The config section overwrites values in Posit Connect's main
# .gcfg configuration file.
config:
# Configures the Postgres connection for Posit Connect.
Database:
Provider: "Postgres"
Postgres:
# The URL syntax below is to utilize a PostgreSQL database installed
# in the cluster as described in the Kubernetes Cluster Preparation
# page of this guide. Change this URL if your PostgreSQL database is
# setup externally or in a different location.
URL: "postgres://postgres@postgres-db-postgresql.default.svc.cluster.local:5432/posit?sslmode=disable"
# While it is possible to set a Postgres password here in the
# values file, we recommend adding it from a Secret as an environment variable as shown in pod.env
Launcher:
# Configures the job launcher to use Connect's data dir PVC when launching content jobs
# This has the same value as `sharedStorage.name` above
    DataDirPVCName: rsc-pvc

This error only appeared when the […]:

$ kubectl get $(kubectl get pods -o name | grep "run-shiny-application" | head -n 1) -o jsonpath='{.spec.securityContext}'; echo
{"runAsGroup":999,"runAsNonRoot":true,"runAsUser":999}
$ kubectl get $(kubectl get pods -o name | grep "run-shiny-application" | head -n 1) -o jsonpath='{.spec.containers[0].securityContext}}'; echo
{"allowPrivilegeEscalation":false,"appArmorProfile":{"type":"RuntimeDefault"},"capabilities":{"drop":["ALL"]},"privileged":false,"runAsNonRoot":true,"seccompProfile":{"type":"RuntimeDefault"}}}

This is where moving helm/charts/rstudio-connect/files/job.tpl line 139 (in a1605fb) out of the if statement {{- if .Job.container.supplementalGroupIds }} becomes important. Essentially, $templateData.pod.securityContext is ignored whenever {{- if .Job.container.supplementalGroupIds }} evaluates to false, as it does in this example. This bug lets everything run without error, but the pod-level securityContext was not actually being applied.
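For readers following along, here is a simplified sketch of that first problem. The variable names are illustrative and the surrounding code is abbreviated; this is not the chart's exact job.tpl:

```yaml
{{- /* Illustrative sketch only, not the chart's actual job.tpl code. */}}
{{- $securityContext := dict }}
{{- if .Job.container.supplementalGroupIds }}
  {{- $_ := set $securityContext "supplementalGroups" .Job.container.supplementalGroupIds }}
  {{- /* BUG: the merge only happens inside this branch, so the
         templateValues pod.securityContext is silently dropped
         whenever supplementalGroupIds is empty. */}}
  {{- $securityContext = mergeOverwrite $securityContext $templateData.pod.securityContext }}
{{- end }}
```

Moving the mergeOverwrite call out of this branch makes the merge unconditional, which is the first half of the fix.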
However, this led to the second bug, which actually produced the launcher error. It was caused by this code (helm/charts/rstudio-connect/files/job.tpl, lines 141 to 146 in c6c1b08) and by the template not being able to work with more complex mappings. It is directly related to:

time="2026-01-29T15:49:39.016Z" level=info msg="2026-01-29T15:49:39.015849Z [rstudio-kubernetes-launcher, log-source: Kubernetes] ERROR system error 71 (Protocol error) [description: Invalid status returned from create resource at /apis/batch/v1/namespaces/default/jobs: Bad Request - {\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"Job in version \\\"v1\\\" cannot be handled as a Job: json: cannot unmarshal string into Go struct field PodSecurityContext.spec.template.spec.securityContext.appArmorProfile of type v1.AppArmorProfile\",\"reason\":\"BadRequest\",\"code\":400}" stream=stderr subprocess=rstudio-launcher

This was resolved by adding the following to the values.yaml and attaching the suggested fix to the job.tpl file (saved as […]):

launcher:
enabled: true
includeDefaultTemplates: true
useTemplates: true
extraTemplates:
    job.tpl: ""

In addition, because of the fix […], the full values.yaml used was:

# Controls how many instances of Posit Connect are created.
replicas: 1
# Mounts the license file appropriately from the Secret
license:
file:
# Replace with the name of your license file secret and key
secret: rstudio-connect-license
secretKey: rstudio-connect.lic
# Configures shared storage for the Posit Connect pod.
sharedStorage:
create: true
mount: true
# The name of the PVC created for Connect's data directory.
# Also specified by `Launcher.DataDirPVCName` below.
name: rsc-pvc
# The storageClass to use for Connect's data directory. Must
# support RWX.
# Replace with your storage class name if different.
storageClassName: longhorn
requests:
storage: 30G
# Adds an environment variable containing the PostgreSQL password from a Secret
pod:
env:
- name: CONNECT_POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
# Replace with the name of your database password secret and key
name: postgres-db-postgresql
key: postgres-password
securityContext:
appArmorProfile:
type: RuntimeDefault
seccompProfile:
type: RuntimeDefault
runAsUser: 999
runAsGroup: 999
fsGroup: 999
runAsNonRoot: true
volumeMounts:
- mountPath: /etc/rstudio-connect/launcher
name: launcher-configuration
- mountPath: /var/lib/rstudio-connect-launcher
name: launcher-data
volumes:
- name: launcher-configuration
emptyDir:
sizeLimit: 50Mi
- name: launcher-data
emptyDir:
sizeLimit: 100Mi
# Enables building and executing content in isolated Kubernetes pods.
launcher:
enabled: true
includeDefaultTemplates: true
useTemplates: true
extraTemplates:
job.tpl: ""
templateValues:
pod:
# Pod-level securityContext with AppArmor and Seccomp profiles
# Using custom template v2.5.0 which properly renders complex YAML objects
securityContext:
runAsNonRoot: true
appArmorProfile:
type: RuntimeDefault
seccompProfile:
type: RuntimeDefault
# defaultSecurityContext:
# runAsNonRoot: true
# appArmorProfile:
# type: RuntimeDefault
# seccompProfile:
# type: RuntimeDefault
# Container-level securityContext for the main runtime container
# These are properly rendered and provide comprehensive security restrictions
containerSecurityContext:
allowPrivilegeEscalation: false
privileged: false
runAsNonRoot: true
capabilities:
drop: ["ALL"]
seccompProfile:
type: RuntimeDefault
appArmorProfile:
type: RuntimeDefault
securityContext:
appArmorProfile:
type: RuntimeDefault
seccompProfile:
type: RuntimeDefault
capabilities:
drop:
- ALL
allowPrivilegeEscalation: false
privileged: false
service:
nodePort: 31911
type: "NodePort"
# The config section overwrites values in Posit Connect's main
# .gcfg configuration file.
config:
# Configures the Postgres connection for Posit Connect.
Database:
Provider: "Postgres"
Postgres:
# The URL syntax below is to utilize a PostgreSQL database installed
# in the cluster as described in the Kubernetes Cluster Preparation
# page of this guide. Change this URL if your PostgreSQL database is
# setup externally or in a different location.
URL: "postgres://postgres@postgres-db-postgresql.default.svc.cluster.local:5432/posit?sslmode=disable"
# While it is possible to set a Postgres password here in the
# values file, we recommend adding it from a Secret as an environment variable as shown in pod.env
Launcher:
# Configures the job launcher to use Connect's data dir PVC when launching content jobs
# This has the same value as `sharedStorage.name` above
    DataDirPVCName: rsc-pvc

When launched with the following command:

helm upgrade --install rstudio-connect-prod rstudio/rstudio-connect \
--values values.yaml \
  --set-file launcher.extraTemplates."job\.tpl"=./custom-job.tpl

The launcher sessions began to load again and the specs are now correct:

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-db-postgresql-0 1/1 Running 0 12h
rstudio-connect-prod-6767f7f87c-67lhp 1/1 Running 0 13m
run-shiny-application-4mnsp-dzbxh       1/1     Running   0          12m

$ kubectl get $(kubectl get pods -o name | grep "rstudio-connect-prod" | head -n 1) -o jsonpath='{.spec.securityContext}'; echo
{"appArmorProfile":{"type":"RuntimeDefault"},"fsGroup":999,"runAsGroup":999,"runAsNonRoot":true,"runAsUser":999,"seccompProfile":{"type":"RuntimeDefault"}}
$ kubectl get $(kubectl get pods -o name | grep "rstudio-connect-prod" | head -n 1) -o jsonpath='{.spec.containers[0].securityContext}}'; echo
{"allowPrivilegeEscalation":false,"appArmorProfile":{"type":"RuntimeDefault"},"capabilities":{"drop":["ALL"]},"privileged":false,"seccompProfile":{"type":"RuntimeDefault"}}}
$ kubectl get $(kubectl get pods -o name | grep "run-shiny-application" | head -n 1) -o jsonpath='{.spec.securityContext}'; echo
{"appArmorProfile":{"type":"RuntimeDefault"},"runAsGroup":999,"runAsNonRoot":true,"runAsUser":999,"seccompProfile":{"type":"RuntimeDefault"}}
$ kubectl get $(kubectl get pods -o name | grep "run-shiny-application" | head -n 1) -o jsonpath='{.spec.containers[0].securityContext}}'; echo
{"allowPrivilegeEscalation":false,"appArmorProfile":{"type":"RuntimeDefault"},"capabilities":{"drop":["ALL"]},"privileged":false,"runAsNonRoot":true,"seccompProfile":{"type":"RuntimeDefault"}}}
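Rather than eyeballing the JSON, the captured securityContext can also be sanity-checked mechanically. A minimal sketch, assuming only POSIX shell and grep; the sample string is pasted from the run-shiny-application output above, and the required settings mirror the values used in this guide:

```shell
# Verify a captured container securityContext contains the hardened settings.
# The JSON below is pasted from the kubectl jsonpath output above.
sc='{"allowPrivilegeEscalation":false,"appArmorProfile":{"type":"RuntimeDefault"},"capabilities":{"drop":["ALL"]},"privileged":false,"runAsNonRoot":true,"seccompProfile":{"type":"RuntimeDefault"}}'
for want in '"allowPrivilegeEscalation":false' \
            '"privileged":false' \
            '"runAsNonRoot":true' \
            '"capabilities":{"drop":["ALL"]}' \
            '"appArmorProfile":{"type":"RuntimeDefault"}' \
            '"seccompProfile":{"type":"RuntimeDefault"}'; do
  # -F treats the pattern as a fixed string, -q suppresses output
  echo "$sc" | grep -qF "$want" && echo "ok: $want" || echo "MISSING: $want"
done
```

The same loop can be pointed at live output by replacing the hard-coded string with a `kubectl get ... -o jsonpath` call.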
lucasrod16 left a comment:
@jacpete thanks for sharing the testing notes! The template changes look good to me. You will probably need to rebase your branch against main, bump the chart version, and add an entry to the changelog for this fix: https://github.com/rstudio/helm/blob/main/charts/rstudio-connect/NEWS.md
Force-pushed ce8b7fd to 9721a4b, then 0b29b1a to 82ae933.
@lucasrod16 Thanks! Looks like all the checks passed now.

@jacpete do you have permissions to merge? If not, I can merge this.
@kfeinauer @dmortondev These template changes might be a good thing to pull into the launcher. The job templates in the helm chart have started to diverge.
This is a pull request for the fix suggested in […] and for issue #753.
I believe the following replacement for that section of code could fix both of these issues: first, by changing the range function to use toYaml instead when expanding the $securityContext mapping; and second, by moving the mergeOverwrite line out of the .Job.container.supplementalGroupIds if statement and into the $securityContext if statement instead. Note that this may also need to be applied to other job.tpl files as well, if approved. As an example:
helm/charts/rstudio-workbench/files/job.tpl
Lines 133 to 146 in c6c1b08
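A minimal sketch of what that replacement could look like. This is illustrative only; the variable names follow the description above rather than the chart's exact code:

```yaml
{{- /* Illustrative sketch only, not the chart's actual job.tpl code. */}}
{{- if .Job.container.supplementalGroupIds }}
  {{- $_ := set $securityContext "supplementalGroups" .Job.container.supplementalGroupIds }}
{{- end }}
{{- if $securityContext }}
  {{- /* Merge the templateValues pod.securityContext unconditionally,
         not only when supplementalGroupIds is set. */}}
  {{- $securityContext = mergeOverwrite $securityContext ($templateData.pod.securityContext | default dict) }}
securityContext:
  {{- /* toYaml preserves nested mappings such as appArmorProfile and
         seccompProfile, which a scalar-oriented range loop flattens
         into strings the Kubernetes API rejects. */}}
  {{- toYaml $securityContext | nindent 2 }}
{{- end }}
```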