Best way to install/scale OpenVAS?

I would like some advice from the community about installing OpenVAS at a supported software level and scaling its performance to scan a large network.

I want to scan a large network (~20.000 hosts). To do this efficiently I plan to make use of the primary/secondary construction. Please share your experiences with this: does it really help with scaling? Are there any neat tweaks for improving scaling or designing the cluster?
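For context, in the GVM stack a secondary scanner is registered on the primary with `gvmd --create-scanner`. A hedged sketch, assuming a GVM 20.08-era `gvmd`; the scanner name, hostname and certificate paths below are illustrative placeholders, not values from this thread:

```shell
# On the primary: register a remote (secondary) scanner.
# Hostname and certificate paths are placeholders; adjust to your deployment.
gvmd --create-scanner="Secondary 1" \
     --scanner-host=sensor1.example.net \
     --scanner-port=9391 \
     --scanner-type=OpenVAS \
     --scanner-ca-pub=/var/lib/gvm/CA/cacert.pem \
     --scanner-key-pub=/var/lib/gvm/CA/clientcert.pem \
     --scanner-key-priv=/var/lib/gvm/private/CA/clientkey.pem

# List scanners to find the new UUID, then verify the connection:
gvmd --get-scanners
gvmd --verify-scanner=<uuid-from-previous-command>
```

Once verified, the secondary can be selected as the scanner when creating a task, which is how the scan load gets spread across sensors.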

Furthermore, I have some other questions. Debian 10 is the recommended distribution, and it has the following versions available:

ii  libopenvas9:i386       9.0.3-1+b1   i386         remote network security auditor - shared libraries
ii  openvas                9.0.3        all          remote network security auditor - dummy package
ii  openvas-cli            1.4.5-2      i386         Command Line Tools for OpenVAS
ii  openvas-manager        7.0.3-1      i386         Manager Module of OpenVAS
ii  openvas-manager-common 7.0.3-1      all          architecture independent files for openvas-manager
ii  openvas-scanner        5.1.3-2      i386         remote network security auditor - scanner

And when I check blogs about this installation, many people need to change system-level systemd unit files to make OpenVAS listen on all IPs, which seems quite wrong to me. Or perhaps the Debian maintainer is making poor packaging decisions, of course. A better way would be to copy the OpenVAS unit files from /usr/lib/systemd/system/ to /etc/systemd/system/ to prevent package updates from overriding the changes. But even then, it seems wrong that these --listen command arguments are statically configured to localhost at the systemd level. Is there no ‘normal’ way to override these --listen switches in systemd? Normally these things are defined in e.g. /etc/openvas, /etc/default or /etc/sysconfig. Like in /etc/default/openvas-manager, which seems the correct place to do this. However… the packager seems to disagree:

# NOTE: This file is not used if you are using systemd. The options are
# hardcoded in the openvas-manager.service file. If you want to change
# them you should override the service file by creating a file
# /etc/systemd/system/openvas-manager.service.d/local.conf like this:
# [Service]
# ExecStart=
# ExecStart=/usr/sbin/openvasmd <your desired options>
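Following the packager's note, the drop-in can be created like this. A sketch, assuming openvasmd 7.x with its `--listen`/`--port` options; binding to 0.0.0.0 exposes the manager to the network, so restrict access with a firewall:

```shell
# Create the drop-in directory and override file per the packager's note
sudo mkdir -p /etc/systemd/system/openvas-manager.service.d
sudo tee /etc/systemd/system/openvas-manager.service.d/local.conf > /dev/null <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/sbin/openvasmd --listen=0.0.0.0 --port=9390
EOF

# Reload unit definitions and restart the service
sudo systemctl daemon-reload
sudo systemctl restart openvas-manager
```

The advantage over copying the whole unit file is that the drop-in only overrides `ExecStart`, so other packaging fixes to the unit still apply on upgrade.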

Anyway, these OpenVAS packages in Debian are already outdated according to OpenVAS's own local scan:


This script checks and reports an outdated or end-of-life scan engine for the following environments:

- Greenbone Source Edition (GSE)

- Greenbone Community Edition (GCE)

used for this scan.

NOTE: While this is not, in and of itself, a security vulnerability, a severity is reported to make you aware of a possible decreased scan coverage or missing detection of vulnerabilities on the target due to e.g.:

- missing functionalities

- missing bugfixes

- incompatibilities within the feed.

Vulnerability Detection Result

Installed GVM Libraries (gvm-libs) version:        9.0.3
Latest available GVM Libraries (gvm-libs) version: 10.0.2
Reference URL(s) for the latest available version: /

So I guess installing from source is highly recommended? Maintaining your own build is usually something to be frowned upon, because upgrade paths may not be possible due to outdated libraries, or upgrades in general might break compatibility with Redis (or whatever DB is used). Containers would of course also be a valid consideration, and they would make these upgrades more manageable. Would anyone like to share some experience with that? There are no official containers as far as I have seen, though building one myself is of course possible.

How do you scale and install OpenVAS? Do you keep OpenVAS up to date by compiling each update yourself? Are major upgrades painless (e.g. gvm-libs 9 --> gvm-libs 10)? How do you make sure everything stays fast and scalable?

One fact at the beginning: everything that interferes with your raw network packets ruins your installation.

You want raw, direct network access as root for the OpenVAS scanner; for the rest of GVM it depends on your needs, setup and environment. Almost all pre-compiled packages are outdated, broken or missing some features. Some come with really insecure default setups introduced by the maintainers. I suggest you read a bit here in the forum; many users maintain and run a GSE with a lot of success. Additionally, please check our FAQ here:

First, if you regularly scan large networks (> 20.000 hosts), I recommend buying a professional appliance. You should not start with a GSM smaller than the GSM-650 (10.000 IPs). The GSM-5400 is for 40.000 IPs.

If you would like to start without an appliance, you should have proper hardware available. I recommend using the latest version, GVM 20.08. You should install it from source and compile it step by step. I installed everything on a virtual machine first (using Ubuntu 20.04 LTS) and learned how to fix problems.

After that, I installed everything on the server. You will find my documentation (in German only) here:


To scale “OpenVAS” (now GVM :wink: ) I would go for a containerized setup. At the moment my approach is not production-ready, but it may be deployable in a Kubernetes setup in the future. You can refer to my Dockerfile ( how to build the needed binaries.
This single monolithic GVM container can be scaled using Ansible like this ( The four parallel containers can be scaled up; this depends on the hardware / VM resources you run the containers on. If you have multiple containers / VMs with containers, you can create targets and tasks inside each container using python-gvm (I use something like this Also this can be scaled, depending on the resources you have available. Afterwards you can collect the reports using python-gvm.

My experience so far is to run only one scan per container at a time, and maybe to reduce the number of parallel scanned targets per task. You can automate the check for running tasks etc. using python-gvm and systemd services / timers (example to sync the feeds Be aware that my containerized design is more of a “one shot solution”: it is meant to always produce a new report, with no delta reports.
The best advice I can give you is to think about how to break down the “big number” of 20k hosts into smaller pieces and run those pieces in parallel. A small piece can even be a single host / IP :wink: In that case you would have 20k targets :slight_smile: but with automation / parallelism that is not that much.
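The "break the big number into pieces" idea above can be sketched in plain Python. The batching logic below is self-contained; the python-gvm calls in the comments are a template based on its GMP API (the port list UUID and credentials are placeholders), not working automation:

```python
import ipaddress

def chunk_hosts(hosts, batch_size):
    """Split a host list into batches of at most batch_size entries."""
    return [hosts[i:i + batch_size] for i in range(0, len(hosts), batch_size)]

# Example: expand a network to its usable host addresses and batch them.
# 192.0.2.0/24 is a documentation range standing in for a real target net.
network = ipaddress.ip_network("192.0.2.0/24")
hosts = [str(h) for h in network.hosts()]
batches = chunk_hosts(hosts, 64)

# One target and task per batch would then be created via python-gvm, roughly:
#   from gvm.connections import UnixSocketConnection
#   from gvm.protocols.gmp import Gmp
#   with Gmp(UnixSocketConnection()) as gmp:
#       gmp.authenticate("admin", "password")          # placeholder credentials
#       for n, batch in enumerate(batches):
#           gmp.create_target(name=f"batch-{n}", hosts=batch,
#                             port_list_id=PORT_LIST_ID)  # placeholder UUID
#           # ...create and start a task per target, then poll and fetch reports

print(len(hosts), "hosts in", len(batches), "batches")
```

Scaled up to a /22 or a 20k host list, only the input list changes; the same loop drives one target/task pair per batch across however many containers or sensors are available.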


Containers and the container network stack might ruin your results. You want to run OpenVAS directly on the interface or an SDN, not scan from inside a container. Alternatively, you can run a full kernel with a hardware Ethernet driver instead of container-based virtualization.

Might be; I have not experienced this yet. If that were the case, I would debug the issue and fix it properly :wink: Until now my containerized approach has “worked for me” for several years. Running nmap inside a container also works fine and reveals results (my approach to build a tiny nmap version using docker But as I always say, this is not meant for production; it is “research” / education / playing with technology.


Try starting to scan in heterogeneous routed environments, all protocols and all ports (1-65535), especially UDP plus IPsec, and you will see the limits pretty fast.

You will exceed the internal kernel table limits pretty fast. Do the math: 1022 x 65535 UDP plus 1022 x 65535 TCP sessions just to scan a /22. That can't scale …
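The back-of-the-envelope math here is easy to check; the conntrack limit used for comparison below is a commonly seen default on Linux boxes, not a fixed constant:

```python
hosts = 1022                 # usable addresses in a /22 (traditional networking)
ports = 65535                # full port range
udp_sessions = hosts * ports # one tracked session per UDP probe
tcp_sessions = hosts * ports # one tracked session per TCP connection
total = udp_sessions + tcp_sessions

print(f"{total:,} potential tracked sessions")

# Compare against a conntrack table size often seen as a default (illustrative):
conntrack_max = 262144
print(f"roughly {total // conntrack_max}x over that table size")
```

Even with aggressive timeouts, a full-port scan of a /22 through any stateful device offers orders of magnitude more flows than the tracking table holds, which is the overflow being described.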


…well, that is a general “problem” you need to solve in your architecture phase. And by the way, in an SDN your math would be incorrect, as you need to include all available IPs from the /22: the network and broadcast addresses could also be assigned as floating IPs. Your report would miss those addresses if you only take the 1022.
As for the “that can't scale” part: it can scale :wink: Basically this depends, as always, on your network throughput limits, and the point you mentioned can also hit you with a physical deployment. There is no difference. What matters is your overall uplink packet / connection throughput; you could deploy the biggest enterprise GVM box and still run into limits :man_shrugging:
Maybe this “recon phase” needs to be narrowed down, so that you don't have to run a full scan inside a GVM container.

Anyways good point you mentioned :+1:

Tell me what is incorrect here? I counted all “hosts” with all TCP and all UDP ports :wink:

The difference is that a flow and / or session is normally established inside a kernel structure: for UDP it lives until a timeout, and for TCP until a FIN-ACK terminates the session, or a timeout as well.

If you run NAT you need two sessions: one outgoing, and one incoming to map the packet back to the correct Docker container.

Docker is great for web services; for mass scans it is totally counter-productive.

Sorry, your hardware assumption is incorrect. In GOS we run an optimized kernel on our hardware appliances exactly to avoid the overflow of kernel structures and to ensure that we can cover the scan capabilities.

We at Greenbone do everything to avoid a single TCP or UDP packet allocating a limited kernel structure, to avoid the typical limits you hit on NAT/Docker/home routers … if you run a mass scan.

Stateless packet forwarding is what you need in enterprise environments.
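On a self-built Linux scan host, one common way to approximate this is to exempt the scanner's traffic from connection tracking with the `NOTRACK` target in the raw table. A sketch, assuming iptables with the conntrack modules loaded; `eth1` is a placeholder for a dedicated scanning interface:

```shell
# Keep scan traffic out of the conntrack table so kernel session
# structures don't fill up during mass scans.
# eth1 is a placeholder for the dedicated scanning interface.
iptables -t raw -A PREROUTING -i eth1 -j NOTRACK
iptables -t raw -A OUTPUT     -o eth1 -j NOTRACK

# If conntrack cannot be bypassed entirely, at least raise its limits:
sysctl -w net.netfilter.nf_conntrack_max=1048576
```

This only helps on the scan host itself; any stateful NAT or router on the path still tracks the flows, which is the point being made about the uplink.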

For example, on our real hardware we have separate IP stacks for scanning and for user services (administration).

If your target /22 is a cloud network (SDN) you will miss the .0 and .255 for that network. In an SDN these are also valid targets for a “host” and can be associated with VMs inside your cloud environment. For “traditional networking” everything is fine. :wink:

I was not talking about the kernel you run inside your GVM boxes. I meant the case where you run the box inside your infrastructure behind a NAT / router (maybe your internet uplink infrastructure for public IPs). Your router(s) need to be properly configured as well; this is the limit I was talking about.

Maybe you simply don't need to do the “recon phase” (mass scanning) inside Docker :wink:
I use this setup only for “narrowed down” scans, as I don't see a benefit in “bombarding” ports / protocols which are clearly not open / available (depending on the network monitoring / tapping points you can use to collect network information).

:sunglasses: Yeah, “masscan” could maybe help with the TCP narrowing, but I would not run it inside the GVM setup; I would only take the results from masscan and put the discovered ports into the target definition. Maybe something like Shodan or other already-available sources also tells you what you need to scan / verify.
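Feeding masscan results into a target definition can be as simple as parsing its list output (`-oL`). A sketch, assuming masscan's `open <proto> <port> <ip> <timestamp>` record format and the `T:...` port-range syntax used by GVM targets:

```python
def ports_from_masscan(lines):
    """Collect open TCP ports from masscan -oL output into a GVM port string."""
    ports = set()
    for line in lines:
        fields = line.split()
        # masscan -oL records look like: "open tcp 443 192.0.2.10 1617181920";
        # comment lines start with "#".
        if len(fields) >= 4 and fields[0] == "open" and fields[1] == "tcp":
            ports.add(int(fields[2]))
    return "T:" + ",".join(str(p) for p in sorted(ports))

# Illustrative sample output; in practice read the -oL file from disk.
sample = [
    "#masscan",
    "open tcp 80 192.0.2.10 1617181920",
    "open tcp 443 192.0.2.10 1617181921",
    "open tcp 80 192.0.2.11 1617181922",
    "# end",
]
print(ports_from_masscan(sample))
```

The resulting string can then go into the port range of a target created via python-gvm, so the full GVM scan only touches ports masscan already found open.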

Good advice :+1: :clap:


If you have a /22 network and you subtract the network address (“.0”) and the broadcast address (highest bits), you still get 1022 hosts :wink: Normally you don't scan “.0” or the broadcast address; that makes no sense at all.

You still don't get it right: GOS runs on our appliances, GVM is the free (as in freedom) stack, and the kernel is part of GOS.

Then running a scanner inside docker is pointless :wink:

Not on IPv6; we are developing our own solution for that at the moment.

I don't think this is beneficial for the community at this point. We should take away from this post:

  • Docker is not useful for big networks without customization. You need to understand exactly the underlying network and kernel configuration.
  • Do not use stateful devices/mechanisms within your scan system (Docker, DSL router, NAT, kernel …) if you want to scale OpenVAS for a big network.
  • Try to run the uplink and connectivity stateless.

:sunglasses: You're welcome. We can have the off-topic discussion about these points over a beer or another drink once meeting in person at a conference is possible again.

Cool that you are working on the IPv6 part :+1: Are you going to share this solution with the community too, or will it stay a feature of the “paid” solutions?