Sporadic docker container cpu and memory updates #50
What version of Docker are you on? I had this happen as well on a machine that was using version 23 or 24. Upgrading to 27 fixed it in my case. I'm not sure if it was the upgrade that fixed it or just restarting Docker itself, but I'd try doing one of those.
They're all Ubuntu 22.04 installs with Docker 24.0.7. I bounced Docker (and made my Uptime Kuma instance very upset, lmao) and that appears to have resolved it; at least the graphs are showing all the containers and I'm not seeing the endless errors. No idea if that helps point out a root cause (if it even matters, since it sounds like newer Docker may resolve it anyway), but if there's any more data I can provide, let me know.
Yeah, it's strange. Very small sample size, but it seems to happen only on v24 or older, and not consistently on all machines running v24. Upgrading seems to fix it. I have the timeout set to 1 second, which should be plenty of time to get a response on a local unix socket, so I'm not sure what the problem is. From quick googling, it seems like a common issue and is usually fixed by restarting the Docker service. Not sure if it's something I can fix on this end.
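For context, the failing request in the log is a one-shot call to Docker's `/containers/{id}/stats` endpoint over the local unix socket. A minimal Go sketch of that pattern, assuming a 1-second client timeout as described above (this is not the actual beszel agent code), looks roughly like this:

```go
// Minimal sketch (not the actual beszel agent code) of a one-shot container
// stats request against the local Docker unix socket with a 1-second timeout,
// matching the request shape seen in the error log below.
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: time.Second, // the 1-second timeout mentioned above
		Transport: &http.Transport{
			// Route every request to the local Docker unix socket.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
			},
		},
	}

	// Container ID is just the example from the log; an agent would iterate
	// over all running containers.
	resp, err := client.Get("http://localhost/containers/c2e66e2fa7af/stats?stream=0&one-shot=1")
	if err != nil {
		// A slow dockerd surfaces here as "context deadline exceeded".
		fmt.Println("Error getting container stats:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```

When dockerd takes longer than the client timeout to produce the stats snapshot, the request fails with the same `context deadline exceeded (Client.Timeout exceeded while awaiting headers)` error shown in the report below.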
Oh, I didn't mean I expected a fix; I'm happy to drag all these systems to Docker 27 over the next week or two, since I probably should have done that a while ago anyway.
Cool, I'll go ahead and close the issue. Let me know if you see it happen again. I might use your screenshot in the readme as an example of this issue.
I've got beszel on 5 Ubuntu 22.04 systems, but one is an ARM-based system and is the one exhibiting the issue.
2024/07/27 22:19:40 Error getting container stats: Get "http://localhost/containers/c2e66e2fa7af/stats?stream=0&one-shot=1": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Errors all over the logs. Different container IDs, but always the same fault. It does update occasionally, but it's not consistent. See the attached graphs, which show random gaps for various containers.

Not sure if it's a bug, a configuration issue, or some ARM-weirdness, but figured I'd report it.
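For anyone hitting the same gaps, a rough way to check whether dockerd itself is the slow side is to time the same one-shot stats request directly against the socket. This is a hypothetical sketch, not part of beszel:

```go
// Hypothetical diagnostic (not part of beszel): time a one-shot stats request
// against the Docker socket for a given container ID to see how long dockerd
// takes to answer.
package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
	"os"
	"time"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: statscheck <container-id>")
		return
	}
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
			},
		},
	}

	url := fmt.Sprintf("http://localhost/containers/%s/stats?stream=0&one-shot=1", os.Args[1])
	start := time.Now()
	resp, err := client.Get(url)
	elapsed := time.Since(start)
	if err != nil {
		fmt.Printf("request failed after %v: %v\n", elapsed, err)
		return
	}
	resp.Body.Close()
	// Responses approaching a second would explain the agent's timeouts.
	fmt.Printf("dockerd answered %s in %v\n", resp.Status, elapsed)
}
```

If that regularly takes close to or over a second for some containers, the agent's 1-second timeout is just surfacing a slow dockerd, which matches the "restart or upgrade Docker" fix discussed above.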