I’m trying to figure out how to install default scan configs and run updates during the installation of openvas.
I am installing openvas in the following manner:
1. Running `apt install openvas -y`
2. Configuring openvas with `gvm-setup`
The installation is pretty much unattended and straightforward. However, when I start GSA for the first time, it goes through a good 10-15 minute “updating” process and doesn’t have any default scan configs, which seems pretty redundant since it just did the same thing during installation.
As an example, let’s say I build a Docker image that installs openvas using steps 1 and 2 above, and I fire up a container immediately after the image is successfully built. Why does it not have any default scan configs, and why does it have to run through an entire update process again? Is there a way to take care of this during gvm-setup, or should I just run gvm-setup twice or something?
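For concreteness, the Dockerfile I have in mind is essentially just those two steps. This is a rough sketch only; the base image and whether `gvm-setup` can run non-interactively inside `docker build` (no systemd, no already-running services) are assumptions on my part:

```dockerfile
# Rough sketch of the build described above; assumes a Kali base image.
FROM kalilinux/kali-rolling

RUN apt update && \
    DEBIAN_FRONTEND=noninteractive apt install -y openvas

# gvm-setup creates the database and runs the initial feed sync.
# Whether the state it produces at build time survives into a running
# container is exactly what I'm unsure about.
RUN gvm-setup
```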
First of all, you are talking about GVM, not OpenVAS. openvas is only the scanner component nowadays. Therefore you should use `apt install gvm` instead of `apt install openvas`. Technically both commands are the same, because the nice Kali packagers added a `Provides: openvas` statement to the `gvm` meta package.
After a file sync of the feed, the data needs to be loaded by gvmd into the postgres DB. The feed sync itself takes some time, depending on the network connection and available bandwidth. After the data is synced, gvmd needs to parse it and put it into the database. With the next feed sync only a diff is loaded, so the whole process is much faster. The initial sync, however, just needs some time.
Thanks @bricks for the clarification and the reference to the architecture – definitely a good number of moving components.
It seems that gvm is already installed, so perhaps `apt install openvas` installed gvm as well?
Quick question regarding the syncing process into the postgres DB: does that not happen when running gvm-setup? I’m asking because my Dockerfile runs gvm-setup and completes successfully, but when I start GVM, it seems to go through the process of updating/syncing (or maybe just syncing) to the database then. I’m wondering if I could somehow accomplish this during the Docker build rather than waiting until I’m on-prem to run my first scan.
As I wrote above, the Debian/Kali packages have a package named gvm that includes gvm-start, gvm-check-setup and other additional non-Greenbone scripts. This package acts as a so-called meta package that has dependencies on all required packages for the GVM stack. The package also has a `Provides: openvas` statement, which acts like an alias: if you run `apt install openvas`, `apt install gvm` is actually called.
The sync process is more complex. First of all there is the file sync with the feed, using the rsync tool. Then we have to differentiate between the NVTs and the other data. For the NVT update, ospd-openvas needs to be running. It checks whether there is a new feed version and, if so, loads the changes into redis. Afterwards gvmd also needs to become aware of the NVT changes, so it asks ospd-openvas for updates periodically.
The other data (SCAP, CERT and GVMD_DATA) is loaded by gvmd. After the file sync (again using rsync), gvmd checks whether there is new data and then loads the changes. Because there are dependencies between the data (for example, the scan configs from the GVMD_DATA need the NVTs), some data is only loaded after other data is already available. gvmd stores all of its loaded information in the PostgreSQL database.
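If you want to trigger these phases by hand (for example during an image build), the rough sequence looks like the following. Treat the script and service names as assumptions to verify on your system, since they have changed across gvmd/ospd-openvas package versions:

```
# Hypothetical manual sequence mirroring the phases described above.
# Script names vary between package versions; check your installation.

# 1. File sync of the feed data via rsync:
greenbone-feed-sync

# 2. NVT loading requires ospd-openvas to be running so it can fill redis:
systemctl start ospd-openvas   # or however your image starts services

# 3. gvmd then picks up SCAP, CERT and GVMD_DATA and, once the NVTs are
#    available, the scan configs that depend on them. This happens in the
#    background while gvmd runs, not as a one-shot command.
systemctl start gvmd
```

The important point is step 3: the database load is done by the running gvmd daemon, which is why simply finishing the file sync is not the same as being "ready".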
And again, I’ve no clue about gvm-setup. That’s not a script provided by Greenbone; I’ve just had a quick look at its source.
Thanks @bricks. So I guess there’s really no simple way of knowing when all of the updates/syncing are actually complete, other than monitoring `ps aux` to see if updates are still running? That’s pretty much the only thing I’m struggling with.
In other words, there’s not really anything I can do to make sure the Docker image is ready to go without it running through another series of updating/syncing 5 minutes after deployment?
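One way I could imagine scripting that check, rather than eyeballing `ps aux`, is a generic polling helper that waits until some readiness command succeeds. The helper itself is plain shell; the actual readiness command (here, gvmd answering on the command line) is a guess on my part and would need adapting:

```shell
#!/bin/sh
# Generic helper: poll a shell command until it succeeds or a timeout hits.
# Returns 0 once the command succeeds, 1 if the timeout elapses first.
wait_for() {
  cmd=$1
  timeout=$2
  interval=${3:-10}
  elapsed=0
  while ! sh -c "$cmd" >/dev/null 2>&1; do
    elapsed=$((elapsed + interval))
    if [ "$elapsed" -ge "$timeout" ]; then
      return 1
    fi
    sleep "$interval"
  done
  return 0
}

# Hypothetical usage: treat GVM as "ready" once gvmd responds; the exact
# command is an assumption, adapt it to whatever signals readiness for you:
# wait_for "gvmd --get-scanners" 1800 30
```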