Starting many Tasks at once does not work

Hello everyone,

I’m using the immauss/openvas container to scan many targets. Each target has its own task, so there are more than 100 tasks (I have a script that adds the targets to Greenbone).
If I schedule all tasks to start at the same time, about 30 of them are Queued, while the rest are still listed as New.

The only suspicious log entries in gvmd.log are those after the schedule starts (these messages appear about 50 times in sequence):

md manage:WARNING:2023-01-23 16h12.10 utc:118083: init_manage_open_db: sql_open failed
md manage:WARNING:2023-01-23 16h12.10 utc:118086: sql_open: PQconnectPoll failed
md manage:WARNING:2023-01-23 16h12.10 utc:118086: sql_open: PQerrorMessage (conn): connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL:  remaining connection slots are reserved for non-replication superuser connections

After these messages, the tasks that were still New log the following, but are scanned afterwards:

event task:MESSAGE:2023-01-23 16h12.15 UTC:117810: Task TASKNAME (ID) could not be resumed by admin

Does anybody have an idea how to fix this?

Hello and welcome to the community forums.

It seems the PostgreSQL database currently doesn’t allow the large number of simultaneous connections required to start that many tasks at the same time, because the configured max_connections limit is being reached.
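If you want to confirm this, you can inspect the limit and the current connection count inside the container. This is a diagnostic sketch; the exact user, socket path, and config location depend on how the immauss/openvas image is set up:

```shell
# Show the configured connection limit (the PostgreSQL default is 100)
psql -U postgres -c "SHOW max_connections;"

# Count the connections currently in use
psql -U postgres -c "SELECT count(*) FROM pg_stat_activity;"

# To raise the limit, set max_connections in postgresql.conf and
# restart PostgreSQL, e.g.:
#   max_connections = 200
# Note: every allowed connection reserves memory, so raise it cautiously.
```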


Thanks for the explanation.
I didn’t change the database settings; at the moment max_connections is 100, which seems to be the default for this container. Is there a recommended value for this setting?

Or is the problem that this number of tasks is not really supported? Even though it would be a bit more complicated in our use case, I could also put multiple hosts into one target and thereby use fewer tasks.

No, not really. This depends on your setup, and 99% of users are fine with the default settings of their distribution package.

Let’s just say that starting that many tasks at the same time is not a common use case. Please be aware that every running task consumes extra resources on the gvmd and openvas scanner side. You need to find a good balance between the target size of a task (the number of hosts scanned in a task) and the number of tasks started at the same time.
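One way to keep the number of simultaneous connections bounded is to start the tasks in small batches from your script rather than all at once. A minimal sketch, assuming a hypothetical `start_task` callable (e.g. a wrapper around whatever GMP client you already use to create the targets):

```python
import time


def start_in_batches(task_ids, start_task, batch_size=20, delay=5.0):
    """Start tasks in batches of batch_size, pausing between batches
    so gvmd/PostgreSQL never has to open all connections at once.

    start_task is whatever callable kicks off a single task; it is a
    placeholder here, not part of any real API.
    """
    started = []
    for i in range(0, len(task_ids), batch_size):
        for task_id in task_ids[i:i + batch_size]:
            start_task(task_id)
            started.append(task_id)
        # Give the queued batch time to drain before starting the next one.
        if i + batch_size < len(task_ids):
            time.sleep(delay)
    return started
```

The batch size and delay are guesses to tune against your own hardware; the point is simply that the burst of connections stays well under max_connections.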


Okay, I see. It probably makes sense to consolidate the hosts into fewer targets/tasks.
Thanks for the clarifications.
