Scan stops - seemingly not a resource issue

Dear all,

I successfully set up the Docker images from your how-to and they are running fine.
I added 277 IPs to a single scan, and after 21% it stopped. I can resume it, but I would like to automate this process, and with this behaviour I cannot.

The virtual machine has:
30 vCPUs
21 GB of RAM
Here is a graph of the load during the scan:

And in the logs I have:

It seems that it crashed for a reason unknown to me.

How can I fix this issue?
Thank you!

Update: after resuming, it stopped again:

The cause of the issue is the following line:

redis-server_1         | Killed

The redis-server crashed for an unknown reason. Maybe there are some memory restrictions for the container?
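Two things worth checking here: whether Docker imposes a memory cap on the container, and whether the bare "Killed" in the compose log came from the kernel OOM killer. A small sketch, assuming the container is named redis-server_1 as in the log excerpt above (adjust the name to your setup):

```shell
#!/bin/bash
# Sketch: inspect the redis container's memory limit and OOM status.
# The container name "redis-server_1" is taken from the log line above
# and may differ in your compose project.
check_redis_mem() {
    local name="${1:-redis-server_1}"
    if command -v docker >/dev/null 2>&1; then
        # .HostConfig.Memory is 0 when no limit is configured;
        # .State.OOMKilled is true when the kernel OOM killer stopped it.
        docker inspect --format \
            'MemLimit: {{.HostConfig.Memory}}  OOMKilled: {{.State.OOMKilled}}' \
            "$name" 2>/dev/null || echo "container $name not found"
    else
        echo "docker CLI not found on this host"
    fi
}
check_redis_mem
```

A bare "Killed" with no redis error message usually points at the host OOM killer rather than a redis crash; `dmesg | grep -i 'killed process'` on the host would confirm that.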


OK, it was a RAM issue after all; on every crash it “eats” all available RAM.
I applied the “fix” mentioned in the crash output: vm.max_map_count growing steadily when vm.overcommit_memory is 2 · Issue #1328 · jemalloc/jemalloc · GitHub
But nothing changed…

What causes this leakage?
How can I troubleshoot it…

thank you

Just turn off overcommitment … and only hold the Redis data in memory.
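The relevant knobs can be read straight from /proc, so no extra tooling is needed. A minimal check, assuming a Linux host: 0 = heuristic overcommit (the default), 1 = always allow, 2 = strict accounting; the Redis documentation recommends 1 so that background saves (fork plus copy-on-write) are not refused on a loaded host:

```shell
#!/bin/bash
# Read the current kernel overcommit settings directly from /proc.
echo "vm.overcommit_memory = $(cat /proc/sys/vm/overcommit_memory)"
# overcommit_ratio is only consulted when overcommit_memory is 2.
echo "vm.overcommit_ratio  = $(cat /proc/sys/vm/overcommit_ratio)"
# To disable strict accounting (needs root), for example:
#   sudo sysctl -w vm.overcommit_memory=1
```

Note that with overcommit_memory=2 the kernel refuses allocations up front, which matches the "with 2 it gets interrupted" behaviour described later in this thread.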

Same… I do not understand what is happening.
Could one particular IP be the problem? Maybe it has many services on it and requires too many resources?
I successfully finished the scan for ±20 IPs …

OK, I moved to another installation, standalone (without Docker) on a Kali distribution.
I got the same issue: I cannot run a scan bigger than about 20 IPs (not a precise number); it runs out of RAM every time. I tried overcommit set to 1 and 2 and it is the same (with 2 it gets interrupted); it cannot finish.
Is it possible that there is something on the IPs I am scanning?
I cannot believe that I cannot scan a subnet out of the box…

I suppose your installation has a memory leak or another issue. I would try a Greenbone Trial Container on VirtualBox first. You can use the community feed out of the box with the Trial. If this works, you have a setup / build / machine issue.

If not, enable debug and check why you are running out of memory.


I have Proxmox as my virtualization environment and do not have VMware or VirtualBox :frowning:

I just tried compiling from source and it is the same!
It seems the community version is limited to ~20 IPs per scan to finish successfully (without interruption) …

OK, taught by this experience, I will automate the process with bulk scanning (20 IPs per scan).
For a single scan I used:

#!/bin/bash
# Scan all hosts in ips.txt in one task, wait two hours, then export the
# report as a PDF named after today's date.
IPs=$(cat ips.txt)
output=$(gvm-script --gmp-username admin --gmp-password password socket scripts/scan-new-system.gmp.py "$IPs" 33d0cd82-57c6-11e1-8ed1-406186ea4fc5)
scan_id=$(echo "$output" | grep -oP "(?<=Corresponding report ID is )[a-f0-9-]+")
echo "IPs: $IPs"
echo "output: $output"
echo "scan_id: $scan_id"
sleep 120m  # fixed wait; polling the task status would be more reliable
gvm-script --gmp-username admin --gmp-password password socket /auto/export-pdf-report.gmp.py "$scan_id" "/auto/results/$(date +%Y-%m-%d).pdf"

How can I accomplish this with batches, running the scans in sequence? Or is there a way for OpenVAS to work with batches natively (I did not find anything in the documentation)?
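One way to run the batches sequentially without native support is to split ips.txt into chunks of 20 in the wrapper script itself. A sketch, assuming one IP per line in ips.txt; the gvm-script invocation is the one from the single-scan script above (left commented out so the sketch runs standalone), and the demo ips.txt creation exists only to make the example self-contained:

```shell
#!/bin/bash
# Sequential batch scanning: at most BATCH_SIZE hosts per scan task.
set -u
BATCH_SIZE=20

# Demo data so the sketch runs standalone; remove this line in real use.
[ -f ips.txt ] || printf '10.0.0.%s\n' $(seq 1 45) > ips.txt

mapfile -t ALL_IPS < ips.txt
total=${#ALL_IPS[@]}
batches_run=0

for ((i = 0; i < total; i += BATCH_SIZE)); do
    batch=("${ALL_IPS[@]:i:BATCH_SIZE}")
    IPs="${batch[*]}"
    echo "Batch $((batches_run + 1)): $IPs"
    # Same invocation as the single-scan script above, one batch at a
    # time so a crash only loses the current batch:
    # output=$(gvm-script --gmp-username admin --gmp-password password \
    #     socket scripts/scan-new-system.gmp.py "$IPs" \
    #     33d0cd82-57c6-11e1-8ed1-406186ea4fc5)
    # ...then extract scan_id, wait, and export the PDF as before...
    batches_run=$((batches_run + 1))
done
echo "Ran $batches_run batches of up to $BATCH_SIZE IPs"
```

The fixed `sleep 120m` from the original script would go inside the loop; replacing it with a poll of the task status would let each batch start as soon as the previous one finishes.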

FYI, we found what the problem was.
When OpenVAS starts scanning

https://developer.hashicorp.com/boundary/docs 
https://developer.hashicorp.com/vault

we get a memory leak caused by this NASL script:

gb_log4j_CVE-2021-44228_http_web_dirs_active.nasl

This seems to be some kind of bug.
If you can, please try to replicate it; all the software involved is free.

Hello,

and welcome to these community forums. If there is a memory leak in the scanner, you could open a new issue over here with detailed instructions on how to reproduce it:


For tracking / reference purposes, the relevant / related GitHub issue:

For me this happens only when testing a server running ManageEngine Password Manager Pro 12.33.0.