When using the 'no-volume' implementation, we notice that the mechanism that copies data to and from the job pod frequently fails.
I cannot reproduce it on demand, but it became very noticeable once our developers started using these runners.
I suspect, but cannot prove, that the combination of tar and the Kubernetes exec API is responsible: the 'Unexpected end of data' error in the log below comes from the tar extractor, which suggests the exec stream closes before the archive has been fully transferred.
This instability currently prevents us from adopting the no-volume implementation in production.
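For context, here is a rough sketch of how I understand the copy-from-pod path to work: the hook runs `tar` inside the job pod, relays its stdout over the Kubernetes exec websocket, and untars the stream on the runner. Everything in this snippet (function name, namespace, directory layout) is illustrative rather than the hook's actual code; it only shows where a truncated stream would surface as the error in the log below.

```typescript
import * as fs from 'fs'
import * as path from 'path'
import * as k8s from '@kubernetes/client-node'
import * as tar from 'tar-stream'

// Illustrative only: copy a directory out of the job pod by streaming
// `tar cf -` over the Kubernetes exec API and extracting it locally.
async function copyFromPod(
  podName: string,
  containerName: string,
  remoteDir: string,   // e.g. /__w/_temp inside the job pod
  localDir: string     // e.g. /home/runner/_work/_temp on the runner
): Promise<void> {
  const kc = new k8s.KubeConfig()
  kc.loadFromDefault()
  const exec = new k8s.Exec(kc)

  // tar-stream's extractor is a Writable; each archive entry is written
  // to the corresponding path under localDir.
  const extract = tar.extract()
  extract.on('entry', (header, stream, next) => {
    const dest = path.join(localDir, header.name)
    if (header.type === 'directory') {
      fs.mkdirSync(dest, { recursive: true })
      stream.resume()
      next()
    } else {
      fs.mkdirSync(path.dirname(dest), { recursive: true })
      stream.pipe(fs.createWriteStream(dest)).on('finish', next)
    }
  })

  const done = new Promise<void>((resolve, reject) => {
    extract.on('finish', resolve)
    // If the exec stream ends before the end-of-archive blocks arrive,
    // tar-stream emits "Unexpected end of data" here.
    extract.on('error', reject)
  })

  // `tar cf -` writes the archive to stdout; the exec API relays it over
  // a websocket and we feed it straight into the extractor.
  await exec.exec(
    'default',                                  // namespace (assumed)
    podName,
    containerName,
    ['tar', 'cf', '-', '-C', remoteDir, '.'],
    extract,                                    // stdout -> tar extractor
    process.stderr,                             // stderr
    null,                                       // stdin
    false                                       // tty
  )

  await done
}
```

With this structure, anything that ends the exec stream early leaves the extractor without the tar end-of-archive marker, which matches the failure we actually see in the runner logs: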
##[debug]execPodStep response: {"metadata":{},"status":"Success"}
##[debug]Copying from job pod 'default-staging-2rrxt-runner-hl8ht-workflow' /__w/_temp to /home/runner/_work/_temp
##[debug]Copying from pod default-staging-2rrxt-runner-hl8ht-workflow /__w/_temp to /home/runner/_work/_temp
/home/runner/k8s/index.js:20421
cb(this._finished ? null : new Error('Unexpected end of data'))
^
Error: Unexpected end of data
at Extract._final (/home/runner/k8s/index.js:20421:32)
at WritableState.updateNonPrimary (/home/runner/k8s/index.js:18588:14)
at WritableState.update (/home/runner/k8s/index.js:18577:72)
at WritableState.updateWriteNT (/home/runner/k8s/index.js:18946:10)
at node:internal/process/task_queues:140:7
at AsyncResource.runInAsyncScope (node:async_hooks:206:9)
at AsyncResource.runMicrotask (node:internal/process/task_queues:137:8)
Emitted 'error' event on Extract instance at:
at WritableState.afterDestroy (/home/runner/k8s/index.js:18896:19)
at Extract._destroy (/home/runner/k8s/index.js:20430:5)
at WritableState.updateNonPrimary (/home/runner/k8s/index.js:18595:16)
at WritableState.update (/home/runner/k8s/index.js:18577:72)
[... lines matching original stack trace ...]
at AsyncResource.runMicrotask (node:internal/process/task_queues:137:8)
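The same error is easy to provoke with tar-stream alone by cutting an archive short, which is why I read this as the exec stream being truncated rather than a bug in the extractor itself. This reproduction is mine, not taken from the runner code:

```typescript
import * as tar from 'tar-stream'

// Build a tiny, valid archive in memory...
const chunks: Buffer[] = []
const pack = tar.pack()
pack.entry({ name: 'hello.txt' }, 'hello world')
pack.finalize()
pack.on('data', (c: Buffer) => chunks.push(c))

pack.on('end', () => {
  const archive = Buffer.concat(chunks)
  const extract = tar.extract()
  extract.on('entry', (_header, stream, next) => {
    stream.resume()
    stream.on('end', next)
  })
  extract.on('error', (err) => console.error(err.message)) // "Unexpected end of data"
  extract.on('finish', () => console.log('archive extracted cleanly'))
  // ...then drop the trailing end-of-archive blocks to simulate a stream
  // that is cut short, as the exec websocket appears to be.
  extract.end(archive.subarray(0, archive.length - 1024))
})
```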