Here's an odd thing: openvas: testing is visible in the process list, but the scan is in the Interrupted state in the web UI, and the logs show:
event task:MESSAGE:2024-02-02 15h47.42 utc:1385303: Status of task task_20240129 (ee1b2831-71fe-4f8d-b0c2-595d71667efb) has changed to Interrupted
Also, the feed update seems to be stuck, repeating the same message in the logs:
md manage: INFO:2024-02-02 15h55.09 UTC:1385990: OSP service has different VT status (version 202402020650) from database (version 202402010857, 135427 VTs). Starting update …
It was started by greenbone-feed-sync from cron, but now ps aux | grep no longer finds that process, so I don't even know how to stop it.
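For what it's worth, greenbone-feed-sync itself tends to exit quickly; the long-running work (rsync transfers and the gvmd database update) happens in other processes, so grepping for those is more telling. A sketch, assuming the default lock-file location used by greenbone-feed-sync; both the path and the process names may differ on your installation:

```shell
#!/bin/sh
# Hypothetical default; the lock file location used by greenbone-feed-sync
# is configurable, so adjust this path for your installation.
LOCKFILE=/var/lib/openvas/feed-update.lock

# The wrapper exits early, so look for the processes doing the actual work.
pgrep -fa 'gvmd|rsync' || echo "no gvmd/rsync processes found"

# flock exits non-zero if another process still holds the lock
# (or if we lack permission to open the file).
if flock --nonblock "$LOCKFILE" true 2>/dev/null; then
    echo "feed lock is free"
else
    echo "feed lock appears to be held (or the file is not accessible)"
fi
```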
Should I update something, or maybe tune some settings to prevent scan interruptions? If I'm missing something, please let me know how to investigate further.
I think the inner workings of this process are worth a look in the source code, to determine whether greenbone-feed-sync takes some sort of lock that logically interrupts the scan task and then releases it once the feed data has finished populating the database. Other than a source-code review, though, I don't know how else you could determine the answer.
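The suspected behaviour can be modelled in miniature with flock, entirely independent of GVM. This only demonstrates the hypothesis (one process holding an advisory file lock blocks another), not what greenbone-feed-sync actually does; confirming that still requires the source-code review:

```shell
#!/bin/sh
# Hypothetical lock path for this demo only.
LOCK=/tmp/demo-feed.lock

# Holder: take the lock and sleep, simulating a long feed sync.
flock "$LOCK" sleep 5 &

sleep 1   # give the holder time to acquire the lock

# Contender: a non-blocking attempt fails while the "sync" holds the lock.
if flock --nonblock "$LOCK" true; then
    echo "acquired"
else
    echo "lock busy"   # expected while the background sleep holds the lock
fi
wait
```

If gvmd or the sync wrapper uses a similar advisory lock, a scan task that needs the locked resource would have to wait (or abort) until the update releases it.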
Something is definitely wrong with the update mechanism. Over the weekend it exhausted all free space by creating /tmp/gvmd-split-xml-file-... files. I fixed it by:
After that, the feed statuses changed from Update in progress... to Current and a new scan started successfully. But now the scan is interrupted again, the feeds are back in Update in progress..., and the previous fix no longer works. At least new files in /tmp are no longer piling up: the process is stuck in a cycle of creating a temp file, filling it with 1.1 GB of data, deleting it, and starting over.
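To catch that create/fill/delete cycle in the act, a plain df/ls sampling loop is enough; nothing GVM-specific is needed. The file pattern below is copied from the names observed above, and the sample count and interval are arbitrary:

```shell
#!/bin/sh
# Sample /tmp a few times to log the create/fill/delete cycle.
# SAMPLES and the sleep interval are arbitrary; raise them to watch longer.
SAMPLES=5
for i in $(seq "$SAMPLES"); do
    avail=$(df --output=avail /tmp | tail -n 1)
    count=$(ls /tmp/gvmd-split-xml-file-* 2>/dev/null | wc -l)
    printf '%s  avail_kb=%s  split_files=%s\n' "$(date -Is)" "$avail" "$count"
    sleep 2
done
```

If the available space oscillates by roughly the size of one temp file while split_files flips between 0 and 1, that matches the loop described above.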
8 GB of RAM is also on the low side for running our stack, especially when a feed update and vulnerability scans run in parallel; both can be very resource-intensive. See this post for further information. Is it possible to assign more RAM to the VM and see whether the problem persists?
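Before or after resizing the VM, it's worth confirming that memory pressure really is the culprit; an interrupted task is a classic symptom of the kernel OOM killer terminating a scanner process. A quick, generic check (dmesg may need root, hence the fallback message):

```shell
#!/bin/sh
# Current memory headroom.
free -h

# Look for OOM-killer activity in the kernel ring buffer.
dmesg 2>/dev/null | grep -iE 'out of memory|oom-killer|killed process' \
    || echo "no OOM events visible (or dmesg requires root)"
```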
I thought the low end was 4 GB, as the documentation claims. But anyway, we doubled the RAM, and 16 GB looks sufficient for now to run a scan and update the feeds simultaneously. I'll keep an eye on it for several days and report back here. Thank you for the help!