100% CPU usage when used in kubectl exec and connection is terminated #1717
Comments
Either that, or maybe cri-o should detect the connection is closed and kill crun.
Thanks for the report. What does runc do in this case? @kolyshkin FYI
As I said, it is executing an interactive shell in the k8s container, triggered with a command like this: kubectl exec -it podname -- /bin/bash. The stdin, stdout, and stderr of the shell are piped from/to the terminal executing kubectl. The kubectl connection to the API server is abruptly broken (VPN interface down). I suspect this manifests on the cri-o/crun side a little differently from when the connection is terminated in a regular way from either side (triggered by EOF or exit).
Doesn't CRI-O see any error when the connection is dropped? Wouldn't it be enough to close the terminal to/from crun in this case?
...It might be that the SSL connection to the API server is left in a "lingering" state for a long time, unless the k8s API server has an active way of detecting such hung connections (with TCP keep-alive and timeouts). So in this case, kubelet, and by extension cri-o, and by extension crun, also doesn't get notified about the lost session for a long time. Maybe the situation would resolve itself after a longer time. The problem is that all this time, one CPU core is being burnt. Is it possible to locate the code where the following system calls are being made in a loop:
...I'm speculating now, as I don't know the code... epoll_wait asks the epoll descriptor about the readiness of other descriptors and gets back 1. I think this means the code may read from or write to one descriptor. It looks like that descriptor is STDOUT and the code wants to write to it, but when attempting to do so, it gets back EAGAIN, which is returned when the descriptor is non-blocking and the call would block. This looks like a pipe with a full buffer. I speculate further that this happens because the kubelet doesn't read from the other end, because it waits for the data to be sent to the API server, which doesn't read it since it waits for data to be sent back to kubectl, but the SSL connection is hung in a lingering state. What can crun do in this case? Is there a bug in crun? Maybe its interpretation of the epoll_wait result is wrong. If the pipe buffer is full, wouldn't epoll_wait also block? If it returns 1, it is maybe signaling some other information and not that STDOUT may be written to without blocking...
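For illustration only, here is a minimal, hypothetical sketch (not crun's actual code) of how such a spin can arise with a level-triggered epoll: the read side (standing in for the terminal master fed by watch) stays readable, the relay write to a full non-blocking pipe (standing in for the STDOUT pipe that kubelet/conmon has stopped draining) keeps failing with EAGAIN, and nothing in the loop waits for the write side to drain, so epoll_wait returns immediately on every iteration:

```c
/* Hypothetical illustration, not crun's code: a relay loop that busy-spins
 * when the non-blocking output pipe is full and nobody drains it.
 * Compile and run it with plenty of input, e.g.  yes | ./spin  -- once the
 * pipe buffer fills, the process sits at 100% CPU in epoll_wait()/write(). */
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <sys/epoll.h>
#include <unistd.h>

int main(void)
{
    /* dst stands in for the STDOUT pipe whose other end has stopped
     * reading; we simply never read from dst[0]. */
    int dst[2];
    if (pipe(dst) < 0)
        return 1;
    fcntl(dst[1], F_SETFL, O_NONBLOCK);

    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = STDIN_FILENO };
    epoll_ctl(epfd, EPOLL_CTL_ADD, STDIN_FILENO, &ev);

    char buf[8192];
    size_t pending = 0;

    for (;;) {
        struct epoll_event ready;
        /* Level-triggered: returns 1 immediately as long as stdin keeps
         * producing data (like watch(1) writing to the terminal master). */
        if (epoll_wait(epfd, &ready, 1, -1) <= 0)
            continue;

        if (pending == 0) {
            ssize_t n = read(STDIN_FILENO, buf, sizeof(buf));
            if (n <= 0)
                break;              /* EOF or error on the source side */
            pending = (size_t) n;
        }

        /* Once dst's pipe buffer is full, this fails with EAGAIN forever.
         * Looping straight back to epoll_wait() without ever waiting for
         * EPOLLOUT on dst[1] is what burns a full CPU core. */
        ssize_t w = write(dst[1], buf, pending);
        if (w < 0) {
            if (errno == EAGAIN || errno == EINTR)
                continue;
            break;
        }
        memmove(buf, buf + w, pending - (size_t) w);
        pending -= (size_t) w;
    }
    return 0;
}
```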
I wonder if we are hitting containers/conmon#551
In my 3-node k8s cluster using:
...I noticed one crun process constantly consuming 100% CPU. strace revealed that it is spinning in a loop trying to write to STDOUT:
Investigating further, I found this hierarchy of processes (1567488 is the CPU-consuming crun):
I had to kill -KILL 1567490 (the bash process) for the whole hierarchy to go away.
What I did to make this happen is the following:
I did kubectl exec -it into a container, where I ran watch <some command> to constantly watch the output of some command. Then the networking from my laptop (VPN connection) broke and the kubectl process hung, so I terminated it on the client. The interactive bash in the container continued to run, though, with crun trying to write to its STDOUT but instead spinning in the loop.
Should crun detect that the connection has been closed and kill the command itself?
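As a thought experiment only (not a crun patch, and not necessarily what crun should do), here is a hypothetical sketch of one way the spin in the earlier sketch could be avoided: when write() on the non-blocking destination pipe returns EAGAIN, stop polling the source for input and instead wait for EPOLLOUT on the destination, sleeping in epoll_wait() until the consumer drains the pipe; EPOLLERR/EPOLLHUP would indicate the consumer is gone, so the session could be torn down instead of spinning. The function name and fd parameters are invented for the example.

```c
/* Hypothetical helper: called from a relay loop when write() to dst_fd
 * returned EAGAIN.  Returns 0 when dst_fd became writable again, -1 when a
 * peer has gone away.  Assumes src_fd is already registered with EPOLLIN. */
#include <sys/epoll.h>

static int wait_for_drain(int epfd, int src_fd, int dst_fd)
{
    struct epoll_event ev = { .events = 0, .data.fd = src_fd };
    /* Drop interest in the source; its data can wait in kernel buffers. */
    epoll_ctl(epfd, EPOLL_CTL_MOD, src_fd, &ev);

    /* Watch the destination for writability (or error/hangup). */
    ev.events = EPOLLOUT;
    ev.data.fd = dst_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, dst_fd, &ev);

    int rc = 0;
    for (;;) {
        struct epoll_event ready;
        if (epoll_wait(epfd, &ready, 1, -1) != 1)
            continue;
        if (ready.events & (EPOLLERR | EPOLLHUP)) {
            rc = -1;        /* a peer went away: stop relaying */
            break;
        }
        if (ready.data.fd == dst_fd)
            break;          /* EPOLLOUT: destination writable again */
    }

    /* Restore the original interest set and go back to relaying. */
    epoll_ctl(epfd, EPOLL_CTL_DEL, dst_fd, &ev);
    ev.events = EPOLLIN;
    ev.data.fd = src_fd;
    epoll_ctl(epfd, EPOLL_CTL_MOD, src_fd, &ev);
    return rc;
}
```

Note that in the scenario described here the TLS connection is only lingering, so no EPOLLHUP would necessarily arrive on the crun side; blocking on EPOLLOUT would at least stop the CPU burn, while actually tearing the session down would presumably still need a higher-level timeout in kubelet or CRI-O.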