Scan limit on the schedule

Hello there,

I have more than 10 networks to scan, and I would rather not add them all to a single target.
For my usage, one target per network is simpler to administer.

But when I schedule all the tasks at the same time, GVM crashes, which is understandable given how many resources are needed.
So I schedule groups of tasks at different periods, but without any guarantee that all tasks finish before the new ones start.

I found this post with a good idea: Extended Schedules
It describes a feature to limit the number of scans per schedule, but that feature doesn’t exist.

Do you have an idea for limiting the number of tasks per schedule to preserve the host’s resources?
Thanks for your help

You can try reading this section from the docs. However, this is for the Enterprise appliances, and I don’t know whether resources are managed in the same way for the Community Edition as described here.

Also, you can use alerts to start one scan after the previous one finishes. Set the “Method” to “Start Task”.
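If you prefer to set this up via the API rather than the web UI, here is a minimal sketch using python-gvm. Treat it as an outline, not a drop-in script: the socket path, credentials, and UUID are placeholders, the import path differs between python-gvm versions (this assumes a 23.x-era layout), and the `start_task_task` data key is my assumption about how gvmd names the follow-up task in the alert method data.

```python
# Sketch: create a "Start Task" alert with python-gvm.
# Placeholders: socket path, credentials, and the task UUID.
from gvm.connections import UnixSocketConnection
from gvm.protocols.gmpv224 import Gmp, AlertCondition, AlertEvent, AlertMethod

connection = UnixSocketConnection(path="/run/gvmd/gvmd.sock")

with Gmp(connection) as gmp:
    gmp.authenticate("admin", "<password>")

    # Fire whenever the watched task's status changes to Done,
    # and react by starting the follow-up task.
    gmp.create_alert(
        name="Chain: start next network scan",
        condition=AlertCondition.ALWAYS,
        event=AlertEvent.TASK_RUN_STATUS_CHANGED,
        event_data={"status": "Done"},
        method=AlertMethod.START_TASK,
        # Assumption: "start_task_task" is the gvmd data key naming the
        # task to start; replace the UUID with the follow-up task's ID.
        method_data={"start_task_task": "<follow-up-task-uuid>"},
    )
```

You then attach this alert to the task that should trigger it; when that task reaches the Done status, gvmd starts the follow-up task.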

Thanks for this answer

It seems that resource management is not done in the Community Edition, or perhaps I don’t understand how it works.

Regarding the solution of chaining an alert to a task, this is exactly what was proposed in the old post.
But I don’t understand how to filter results to know that a specific task is done.

Help will be appreciated

What does that mean exactly? The scanner is killed by the OOM? Redis is shut down? You get a segfault from gvmd? What’s the error output?

When I launch more than 3 tasks, openvas crashes because it consumes all the memory of the server.

Here are the logs:

nov. 14 10:42:02 Pentesting-01 kernel: Out of memory: Killed process 2337524 (ospd-openvas) total-vm:883956kB, anon-rss:199668kB, file-rss:256kB, shmem-rss:0kB, UID:110 pgtables:1192kB oom_score_adj:0
nov. 14 10:42:02 Pentesting-01 systemd-journald[283]: /dev/kmsg buffer overrun, some messages lost.
nov. 14 10:41:57 Pentesting-01 systemd[1]: ospd-openvas.service: A process of this unit has been killed by the OOM killer.
nov. 14 10:41:57 Pentesting-01 systemd[1]: user@0.service: A process of this unit has been killed by the OOM killer.
nov. 14 10:41:57 Pentesting-01 systemd[1]: user@0.service: Main process exited, code=killed, status=9/KILL
nov. 14 10:41:57 Pentesting-01 systemd[1]: user@0.service: Failed with result 'signal'.
nov. 14 10:41:57 Pentesting-01 systemd[1]: user-0.slice: A process of this unit has been killed by the OOM killer.

The server has 4 GB of RAM and 2 vCPUs.

In my mind, this is expected because there is no per-task resource limitation.

4 GB of RAM is the minimum requirement to run a scan task. It is not enough to run several scans, each with numerous hosts. If you don’t have a system with enough RAM, you can try the Greenbone Cloud Service (GCS), which will relieve you of all the responsibility of managing the Greenbone software stack itself. With the cloud service, you can just focus on finding the vulnerabilities in your infrastructure.

You can register for a free trial here. :wink:


Thanks for the proposal :smiley:
I need an on-premises solution to scan my internal network.

Even if 4 GB is the minimum, the system should not crash.
Do you think that, if I had more RAM, it would no longer crash?
(Does it check the memory consumption?)

Yes, there are real limitations when using low-resource systems. The Greenbone Cloud Service (GCS) has a Layer 2 VPN gateway and agent for proxying scans of internal networks.

However, for low-resource scanners, you can reduce the default values for the following when creating a scan task:

  • The number of concurrently scanned hosts
  • The number of concurrent VTs per host

You can set these as low as possible. This will dramatically increase the time it takes to conduct a scan, but will use less RAM.
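For reference, these two settings correspond to the scanner preferences `max_hosts` (concurrently scanned hosts) and `max_checks` (concurrent VTs per host), which you can also set when creating a task through the API. Here is a hedged sketch with python-gvm; all UUIDs, the socket path, and the credentials are placeholders, and the import path may differ across python-gvm versions.

```python
# Sketch: create a task with low concurrency settings via python-gvm.
from gvm.connections import UnixSocketConnection
from gvm.protocols.gmpv224 import Gmp

connection = UnixSocketConnection(path="/run/gvmd/gvmd.sock")

with Gmp(connection) as gmp:
    gmp.authenticate("admin", "<password>")

    gmp.create_task(
        name="Network A (low-memory settings)",
        config_id="<scan-config-uuid>",   # e.g. the "Full and fast" config
        target_id="<target-uuid>",
        scanner_id="<scanner-uuid>",      # the default OpenVAS scanner
        preferences={
            "max_hosts": "2",   # hosts scanned concurrently
            "max_checks": "2",  # VTs run concurrently per host
        },
    )
```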

Thanks for the information, it sounds good.

Currently, I am looking for information about filters and how to schedule tasks: how do I know that a specific task is done?

I suggest reading the documentation to educate yourself on Greenbone’s features.

Some important sections include:

You may also peruse the Greenbone YouTube channel for some helpful tutorials. Specifically, a new one addresses learning Filters.

You can create an alert to notify you when a scan task has finished and then connect that alert to each scan task you want it applied to.
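As a sketch of that last step, again assuming python-gvm with placeholder UUIDs: attaching one existing alert to several tasks could look like this.

```python
# Sketch: attach one existing alert to several scan tasks.
from gvm.connections import UnixSocketConnection
from gvm.protocols.gmpv224 import Gmp

connection = UnixSocketConnection(path="/run/gvmd/gvmd.sock")

with Gmp(connection) as gmp:
    gmp.authenticate("admin", "<password>")

    alert_id = "<your-alert-uuid>"
    for task_id in ["<task-uuid-1>", "<task-uuid-2>"]:
        gmp.modify_task(task_id, alert_ids=[alert_id])
```

Note that, as far as I know, passing alert_ids replaces the task’s existing alert list, so include every alert the task should keep.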


It is not the system that is crashing; it’s Linux and the out-of-memory (OOM) killer that kills just the process.
There is NO way around that. You could add 12 GB of swap, but then the scan would run on the hard disk instead of in RAM.