Max simultaneous IP scan limited to 30 on Docker version

Hello, I need to scan a really big network (more than 8000 IPs), and I have installed the Docker version of Greenbone GVM. I tried to scan 40 IPs simultaneously by setting the maximum number of concurrently scanned hosts in the scan configuration.

However, I consistently encounter the same issue: only 30 hosts are scanned, and the same thing happens if I set the limit to 35.

When I change the limit to 29 in a clone of the same scan, it scans 29 hosts.

So, is there a way to change this maximum number of IPs scanned simultaneously? My setup isn’t using all of its RAM, so I know it can handle more simultaneous IPs per scan.

I looked into why this is so. The answer to your question is likely that the limit is being enforced by one of the underlying components. Here is what I found:

The max_hosts setting can be set via the /etc/openvas/openvas.conf file according to the manual. However, openvas-scanner also pulls its preferences from the gvm-libs module, where the prefs_init () function sets the defaults.

In that function, the max_hosts variable has a default of 30.
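
For reference, here is a condensed sketch of the relevant part of gvm-libs/base/prefs.c (paraphrased, not verbatim; the exact contents vary between versions):

    /* Paraphrased sketch of gvm-libs/base/prefs.c: prefs_init () builds
     * the global preference table and seeds it with hardcoded defaults. */
    #include <glib.h>

    static GHashTable *global_prefs = NULL;

    static void
    prefs_set (const gchar *key, const gchar *value)
    {
      g_hash_table_insert (global_prefs, g_strdup (key), g_strdup (value));
    }

    static void
    prefs_init (void)
    {
      global_prefs = g_hash_table_new_full (g_str_hash, g_str_equal,
                                            g_free, g_free);
      /* This hardcoded default is where the 30-host ceiling comes from. */
      prefs_set ("max_hosts", "30");
      prefs_set ("max_checks", "10");
    }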

In several places in openvas-scanner, such as attack_network_init () in openvas.c, the configuration settings are loaded in this order (see the sketch after the list):

  • set_default_openvas_prefs (); // Sets the prefs from the hardcoded openvas_defaults array, not the config file, and does not include the scanner settings.
  • prefs_config (config_file); // Applies the settings from the given file as preferences. This would allow the /etc/openvas/openvas.conf file to set max_hosts.
  • set_globals_from_preferences (); // However, here the config file settings are overridden with the hardcoded gvm-libs defaults.
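
Putting those calls together, the effective order looks roughly like this (a simplified sketch with the bodies omitted, not the actual openvas-scanner code; init_scan_prefs is a hypothetical name):

    /* Declarations as in openvas-scanner; bodies omitted here. */
    void set_default_openvas_prefs (void);
    void prefs_config (const char *config_file);
    void set_globals_from_preferences (void);

    /* Hypothetical wrapper showing the problematic order. */
    static void
    init_scan_prefs (const char *config_file)
    {
      set_default_openvas_prefs ();    /* 1. hardcoded scanner defaults */
      prefs_config (config_file);      /* 2. e.g. max_hosts = 50 from
                                             /etc/openvas/openvas.conf  */
      set_globals_from_preferences (); /* 3. clobbers step 2 with the
                                             hardcoded gvm-libs defaults */
    }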

So, it seems that the hardcoded gvm-libs setting is what prevents you from going higher than 30. I therefore don’t think a setting such as max_hosts = 50 in the /etc/openvas/openvas.conf configuration file will supersede the hardcoded limit in the gvm-libs/base/prefs.c file.

Also, as far as I can tell, there is no way to set a global preference via the web interface, only the per-scan limit, which (again, as far as I can tell) does not override the hardcoded values in gvm-libs/base/prefs.c.

IMHO, this could be changed to allow the openvas.conf file to override the gvm-libs default setting, by moving the prefs_config (config_file); call after set_globals_from_preferences (); :thinking:
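
Concretely, the body of the hypothetical wrapper sketched above would become (untested):

    /* Hypothetical, untested reordering: read the config file last so
     * /etc/openvas/openvas.conf wins over the hardcoded defaults. */
    set_default_openvas_prefs ();
    set_globals_from_preferences ();
    prefs_config (config_file); /* now max_hosts from the file survives */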

Also, although gvmd does provide a --modify-setting startup flag, I don’t think max_hosts is available in the database to alter in this way.

Furthermore, another configuration option, the max_sysload setting, might help you if you can find a way to overcome the global max_hosts limit. Combined with a very high max_hosts value, setting max_sysload = 90 or maybe higher :thinking: would let the scanner use most of your CPU capacity.
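
For example, in /etc/openvas/openvas.conf (the values here are purely illustrative, not recommendations):

    max_hosts = 200
    max_sysload = 90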

Thank you for your reply; it was really helpful. I managed to solve the problem by adding max_hosts = 50 to the file /etc/openvas/openvas.conf. Here is a detailed explanation of how I did it, to assist anyone else facing the same issue:

To access the file /etc/openvas/openvas.conf inside the container, I had to open a shell in the ospd-openvas container:

    docker compose -f $DOWNLOAD_DIR/docker-compose.yml -p greenbone-community-edition exec ospd-openvas /bin/bash

Since I couldn’t modify it with nano (the container has limited tooling), I had to proceed as follows:

echo "max_hosts = 50" >> /etc/openvas/openvas.conf

Currently, I’m not entirely sure whether this modification persists after restarting the container. I’ll update here if I have more information.

No, the config will not persist after the containers are recreated. You can, however, modify the docker compose file to mount the file from the host into the ospd-openvas container when you bring them up.

Put your full openvas.conf file in the same directory as the docker-compose.yml file and add this to the volumes: section of the ospd-openvas: container:

    volumes:
      - ./openvas.conf:/etc/openvas/openvas.conf
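
After editing docker-compose.yml, recreate the containers so the bind mount takes effect, e.g. by re-running the usual docker compose -f $DOWNLOAD_DIR/docker-compose.yml -p greenbone-community-edition up -d command from the initial setup.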

Also, thanks for confirming that the openvas.conf file worked. I took a quick swing at the source code, but wasn’t 100% sure of all the places the configs were being loaded.