Gvmd and ospd-openvas cannot communicate via socket after upgrade

Hi,

After upgrading to 22.4.0, gvmd and ospd-openvas can no longer communicate with each other via the Unix socket.
The socket file is created when the services start, but the two daemons still do not seem to be able to talk to each other through it.
For the upgrade I went through the release notes of all modules on GitHub and made the required adjustments to build flags, arguments, configs, etc.
Everything worked fine on the 21.x release.

Here is some information that might be relevant:

gvmd:
I can see this message being written to the log repeatedly:
WARNING:2023-02-01 13h23.02 UTC:586570: osp_scanner_feed_version: failed to connect to /opt/gvm/var/run/ospd.sock

I am using the following arguments in systemd unit:
ExecStart=/opt/gvm/sbin/gvmd --osp-vt-update=/opt/gvm/var/run/ospd.sock --max-email-attachment-size=100000000 --max-email-include-size=100000000 --max-email-message-size=100000000 --listen-group=gvm --listen-mode=770

I used the corresponding build flag:
-DOPENVAS_DEFAULT_SOCKET="/opt/gvm/var/run/ospd.sock"

ospd-openvas:
I can see the following log messages being written to the log repeatedly:

OSPD[588958] 2023-02-01 14:09:37,739: INFO: (ospd.main) Shutting-down server ...
OSPD[589063] 2023-02-01 14:11:38,154: INFO: (ospd.main) Starting OSPd OpenVAS version 22.4.4.
OSPD[589063] 2023-02-01 14:11:38,160: INFO: (ospd_openvas.messaging.mqtt) Successfully connected to MQTT broker
OSPD[589063] 2023-02-01 14:11:48,194: INFO: (ospd_openvas.daemon) Loading VTs. Scans will be [requested|queued] until VTs are loaded. This may take a few minutes, please wait...
OSPD[589063] 2023-02-01 14:11:48,238: INFO: (ospd.main) Shutting-down server ...

I use the following arguments for starting ospd-openvas:
ExecStart=/opt/gvm/bin/ospd-scanner/bin/python /opt/gvm/bin/ospd-scanner/bin/ospd-openvas --pid-file /opt/gvm/var/run/ospd-openvas.pid --unix-socket /opt/gvm/var/run/ospd.sock --socket-mode 0o770 --log-file /opt/gvm/var/log/gvm/ospd-scanner.log --lock-file-dir /opt/gvm/var/run/ospd/ --mqtt-broker-address localhost --mqtt-broker-port 1883 --notus-feed-dir /opt/gvm/var/lib/notus/advisories

Output of “ls -alh” for the Greenbone run directory:

total 28K
drwxr-xr-x 3 gvm  gvm  4.0K Feb  1 15:13 .
drwxrwxr-x 5 gvm  gvm  4.0K Mar 23  2020 ..
-rw-rw-r-- 1 gvm  gvm    25 Feb  1 03:30 feed-update.lock
-rw-r--r-- 1 root root    5 Jan 24 17:49 gsad.pid
-rw------- 1 gvm  gvm     0 Mar 23  2020 gvm-checking
-rw------- 1 gvm  gvm     0 Mar 23  2020 gvm-create-functions
-rw------- 1 gvm  gvm     5 Jan 24 17:49 gvmd.pid
srwxrwx--- 1 gvm  gvm     0 Jan 24 17:49 gvmd.sock
-rw------- 1 gvm  gvm     0 Mar 23  2020 gvm-helping
-rw------- 1 gvm  gvm     0 Mar 23  2020 gvm-migrating
-rw------- 1 gvm  gvm     0 Mar 23  2020 gvm-serving
-rw------- 1 gvm  gvm     0 Apr  6  2020 gvm-syncing-nvts
-rw-r--r-- 1 gvm  gvm     4 Jan 24 17:49 notus-scanner.pid
drwxrwxr-x 2 gvm  gvm  4.0K Jan 24 11:21 ospd

Both services are started as the gvm user and group.
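
For completeness, both unit files contain roughly the following directives in their [Service] sections (the rest of the units is specific to my setup):

[Service]
User=gvm
Group=gvm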

Any hints or advice for debugging this issue would be greatly appreciated.

Best regards
ri-pa

Hi,

I would first check with “ss -a” whether the software is actually listening on the socket.
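
For example, something along these lines, using the socket path from your ExecStart (adjust the pattern if your path differs):

ss -x -a | grep ospd.sock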

Hi,
thanks for your message.

The socket is listed by the “ss” command:

root@gvm:~# ss -x -a | grep gvmd
u_str LISTEN 0      512                              /opt/gvm/var/run/gvmd.sock 33482                                    * 0                                    
u_str LISTEN 0      64              /opt/gvm/var/lib/gvm/gvmd/gnupg/S.gpg-agent 1151327                                  * 0                                    
u_str LISTEN 0      64        /opt/gvm/var/lib/gvm/gvmd/gnupg/S.gpg-agent.extra 1151328                                  * 0                                    
u_str LISTEN 0      64      /opt/gvm/var/lib/gvm/gvmd/gnupg/S.gpg-agent.browser 1151329                                  * 0                                    
u_str LISTEN 0      64          /opt/gvm/var/lib/gvm/gvmd/gnupg/S.gpg-agent.ssh 1151330                                  * 0

Any other things I can check?

Hi,

It seems ospd-openvas is shut down before it even gets to load the VTs, for unknown reasons. I suppose systemd is responsible for that behavior. Could you try running ospd-openvas with --foreground and changing the Type setting in the service file to Type=exec?
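
A minimal sketch of that change, reusing the paths from your ExecStart above (keep your remaining options as they are):

[Service]
Type=exec
ExecStart=/opt/gvm/bin/ospd-scanner/bin/python /opt/gvm/bin/ospd-scanner/bin/ospd-openvas --foreground --unix-socket /opt/gvm/var/run/ospd.sock --socket-mode 0o770 --log-file /opt/gvm/var/log/gvm/ospd-scanner.log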

Just as a hint: it doesn’t make much sense to move the socket and PID file locations anywhere other than /run/$SERVICE. That directory lives on the tmpfs mounted at /run and is managed by systemd; it is specified via the RuntimeDirectory service file setting, and systemd ensures the directory is removed when the service is stopped. Because /run is a tmpfs, its contents are also always gone after a reboot. The Unix socket and PID files are only temporary and should not survive a daemon shutdown, so always using /run/$SERVICE/ is the best choice.
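
For illustration, a unit using it could look roughly like this; the runtime directory name ospd-openvas is only an example, and gvmd’s --osp-vt-update would then have to point at /run/ospd-openvas/ospd.sock as well:

[Service]
Type=exec
RuntimeDirectory=ospd-openvas
ExecStart=/opt/gvm/bin/ospd-scanner/bin/ospd-openvas --foreground --unix-socket /run/ospd-openvas/ospd.sock --pid-file /run/ospd-openvas/ospd-openvas.pid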

Hi @bricks

Thanks for your message.

This pointed me in the right direction indeed.
Executing ospd-openvas in the shell with the --foreground option revealed the actual problem (the exact command I used is shown after the traceback):

Traceback (most recent call last):
  File "/opt/gvm/bin/ospd-scanner/bin/ospd-openvas", line 8, in <module>
    sys.exit(main())
  File "/opt/gvm/bin/ospd-scanner/lib/python3.8/site-packages/ospd_openvas/daemon.py", line 1268, in main
    daemon_main('OSPD - openvas', OSPDopenvas, NotusParser())
  File "/opt/gvm/bin/ospd-scanner/lib/python3.8/site-packages/ospd/main.py", line 164, in main
    daemon.init(server)
  File "/opt/gvm/bin/ospd-scanner/lib/python3.8/site-packages/ospd_openvas/daemon.py", line 549, in init
    self.update_vts()
  File "/opt/gvm/bin/ospd-scanner/lib/python3.8/site-packages/ospd_openvas/daemon.py", line 674, in update_vts
    self.notus.reload_cache()
  File "/opt/gvm/bin/ospd-scanner/lib/python3.8/site-packages/ospd_openvas/notus.py", line 143, in reload_cache
    self._verifier = hashsum_verificator(
  File "/opt/gvm/bin/ospd-scanner/lib/python3.8/site-packages/ospd_openvas/notus.py", line 58, in hashsum_verificator
    sums = reload_sha256sums(sha_sum_reload_config)
  File "/opt/gvm/bin/ospd-scanner/lib/python3.8/site-packages/ospd_openvas/gpg_sha_verifier.py", line 40, in reload_sha256sums
    config.gpg = __default_gpg_home()
  File "/opt/gvm/bin/ospd-scanner/lib/python3.8/site-packages/ospd_openvas/gpg_sha_verifier.py", line 19, in __default_gpg_home
    return GPG(gnupghome=f"{home.absolute()}")
  File "/opt/gvm/bin/ospd-scanner/lib/python3.8/site-packages/gnupg.py", line 900, in __init__
    raise ValueError('gnupghome should be a directory (it isn\'t): %s' % gnupghome)
ValueError: gnupghome should be a directory (it isn't): /etc/openvas/gnupg
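
For anyone hitting the same thing: I just ran the ExecStart command from the unit by hand as the gvm user, with --foreground added, roughly like this (the remaining options are the same as in the unit file above):

sudo -u gvm /opt/gvm/bin/ospd-scanner/bin/python /opt/gvm/bin/ospd-scanner/bin/ospd-openvas --foreground --unix-socket /opt/gvm/var/run/ospd.sock --lock-file-dir /opt/gvm/var/run/ospd/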

I was then able to fix this issue by doing two things:

  1. Updating to the ospd-openvas release you uploaded a few days ago.
     The new version contains this new commit and is therefore able to pick up the value of the GNUPGHOME environment variable.
  2. Setting the GNUPGHOME environment variable in the systemd units for both ospd-openvas and notus-scanner (see the snippet below).
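
A sketch of what the second step looks like as a systemd drop-in; the GNUPGHOME value has to point at the directory that actually holds your feed signing keyring, so treat the path below as a placeholder from my setup:

# /etc/systemd/system/ospd-openvas.service.d/override.conf (same idea for notus-scanner.service)
[Service]
Environment=GNUPGHOME=/opt/gvm/etc/openvas/gnupg

After adding the drop-ins, a “systemctl daemon-reload” and a restart of both services picked up the variable.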

It’s now working without any issues, thanks a lot!

Best regards
