For some time now (I think since October/November 2025) one of my scan jobs always gets interrupted and also seems to kill the Redis container. There were no problems before, and no changes on our side regarding the resources the server can use. Here are the logs related to that specific scan job and the error:
All my other jobs run just fine. The job scans a /24 network with 3 hosts inside (.101/.102/.103).
Could you please have a look and tell me if you need more information? Thank you very much in advance!
| 11:C 28 Jan 2026 12:49:31.708 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
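If that warning is the culprit, the setting has to be applied on the Docker host, not inside the container. A minimal sketch (run as root; the sysctl file location may differ on your distribution):

```shell
# Apply immediately on the Docker host (no reboot needed)
sysctl -w vm.overcommit_memory=1

# Persist the setting across reboots
echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf

# Verify the current value (should print 1)
cat /proc/sys/vm/overcommit_memory
```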
Yes, I saw that already. My point in sharing it was that this is a really small job, and I don't get this error with any other job in my environment. So perhaps there is some kind of memory leak in one of the scans executed against one of the three hosts.
Also, the scan is interrupted three minutes before the log line you mentioned: 12:46:57,122: INFO: (ospd.ospd) a2d3ac16-8e9c-45dc-b3e7-0308087e26c6: Scan interrupted.
That is standard Redis knowledge, so this is the wrong place for discussions like that. If you run authenticated scans over large filesystems, even small jobs can produce huge amounts of results.
Thanks for your reply. The scan, however, is unauthenticated… I'll try the overcommit setting and see if the error persists. If it does, is there a way to identify which particular scan may be causing the problem?
You can run a Redis MONITOR, but it looks like a RAM issue with your setup. Try disabling sub-domains or hostnames on that one; those can lead to many results as well.
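To watch what the scanner writes into Redis while the job runs, you could attach a monitor to the container. A rough sketch, assuming the container is named redis (adjust the name to your deployment; if your Redis listens on a Unix socket rather than TCP, point redis-cli at it with -s):

```shell
# Find the actual container name/ID first
docker ps --filter ancestor=redis

# Stream every command Redis receives; Ctrl-C to stop.
# Note: MONITOR itself adds load, so only run it while reproducing the issue.
docker exec -it redis redis-cli monitor

# Memory statistics can also hint at growth during the scan
docker exec -it redis redis-cli info memory
```

Start the monitor, launch only the problematic scan, and watch which keys grow just before the container dies.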