Missing socket file, possibly after latest system change

After our earlier trials with HTTPS, we have still been unable to run any scans successfully.

In the openvasd-1 log, it reports:

WARN openvasd: OSPD socket /var/run/ospd/ospd.sock does not exist. Some commands will not work until the socket is created!

Reviewing the filesystem, we see an ospd-openvas.sock file where ospd.sock appears to be expected. I’m aware there were recent changes to the system, including this changelog item:

Drop notus-scanner in favor of the new OpenVAS Daemon (openvasd). This made the Mosquitto MQTT broker obsolete too.

I wonder whether this is related to that change. Also, if the MQTT broker is no longer in use, why do I constantly see errors about an inability to connect to it? I’m not sure where it is being referenced.
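For reference, this is roughly how I’ve been checking where ospd-openvas actually listens, and the two workarounds I’m considering (a sketch; the paths below are assumptions from my source build):

```sh
# List listening unix sockets to see where ospd-openvas actually lives
ss -xl | grep ospd

# Option 1: tell ospd-openvas to create its socket where openvasd looks for it
ospd-openvas --unix-socket /var/run/ospd/ospd.sock

# Option 2: stopgap symlink from the expected path to the existing socket
mkdir -p /var/run/ospd
ln -s /run/ospd/ospd-openvas.sock /var/run/ospd/ospd.sock
```

On most systemd distributions /var/run is itself a symlink to /run, so the two directories are the same and only the socket filename differs.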

Thanks for reading. Hoping someone can point me in the right direction.

Interestingly, the logs continue to report that the socket file is missing and that MQTT is refusing connections, yet when I set the target to “Consider Alive” (because the alive test is apparently failing), the scan appears to complete successfully. In the logs I also see:

pluginlaunch_wait_for_free_process. Number of running processes >= maximum running processes (1 >= 1). Waiting for free slot for processes.

which makes me wonder about the “maximum running processes” configuration. This is a dedicated server, so if I overcommit resources to Greenbone, it only hurts Greenbone. Is there a guideline for what a reasonable number of processes would be? Can this value be modified?
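From what I can tell, the (1 >= 1) in that message suggests the per-host plugin limit is currently 1. If the classic max_checks/max_hosts scanner preferences still apply to this build (I’m not certain they do; the file location and defaults below are assumptions, and the same knobs appear in the web UI on the task as “Maximum concurrently executed NVTs per host” and “Maximum concurrently scanned hosts”), the relevant bits would look something like:

```sh
# /etc/openvas/openvas.conf (location is an assumption)
max_checks = 4     # plugin processes run in parallel per host
max_hosts = 20     # hosts scanned in parallel

# Verify what the scanner currently sees (assuming this build supports -s):
openvas -s | grep -E 'max_checks|max_hosts'
```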

libgvm util:  DEBUG:2024-06-06 18h17.37 utc:2459: get_redis_ctx: connected to redis:///run/redis/redis.sock/3
sd   main:MESSAGE:2024-06-06 18h17.37 utc:2459: Vulnerability scan 6febd0d5-6f9a-49bf-95a1-bb08c29ceedd finished for host <redacted> in 6587.43 seconds
sd   main:  DEBUG:2024-06-06 18h17.37 utc:2459: post_fn_call: called
sd   main:  DEBUG:2024-06-06 18h17.37 utc:2417: waitpid() failed. No child processes)
libgvm util:  DEBUG:2024-06-06 18h17.37 utc:2417: redis_delete_all: deleting all elements from KB #4
sd   main:  DEBUG:2024-06-06 18h17.37 utc:2417: Test complete
sd   main:  DEBUG:2024-06-06 18h17.37 utc:2417: attack_network: free alive detection data 
sd   main:  DEBUG:2024-06-06 18h17.37 utc:2417: attack_network: waiting for alive detection thread to be finished...
sd   main:  DEBUG:2024-06-06 18h17.37 utc:2417: attack_network: Finished waiting for alive detection thread.
sd   main:MESSAGE:2024-06-06 18h17.37 utc:2417: Vulnerability scan 6febd0d5-6f9a-49bf-95a1-bb08c29ceedd finished in 6621 seconds: 1 alive hosts of 1

This is why I’m asking about the number of processes allocated: roughly 110 minutes to scan a single target on the same network. Surely that’s not the new normal. I had a tail on the logs while the scan was supposedly running, and literally nothing changed in the log from 16:28 UTC to 18:09 UTC.
I don’t know the architecture, but I’m wondering if whatever is trying to call MQTT and failing is slowing everything else down.
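To try to find where MQTT is still referenced, I’m planning something like this (a sketch; the config locations and the mqtt_server_uri preference name are assumptions based on older scanner builds):

```sh
# Search the usual config locations for MQTT references (paths are assumptions)
grep -ri mqtt /etc/openvas /etc/gvm 2>/dev/null

# Older scanner builds read the broker address from openvas.conf, e.g.
#   mqtt_server_uri = localhost:1883
# If that line is present, commenting it out may stop the connection attempts.
```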

Reviewing the results, I can see that the scan is effectively failing: I know this target has unresolved vulnerabilities, and Greenbone found nothing.

I applied a change to /root/source/openvas-scanner-23.0.1/rust/openvasd/src/config.rs: I changed all related paths from /var/run/ospd/ospd.sock to /run/ospd/ospd-openvas.sock.
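The edit amounts to something like the following (a sketch; the surrounding Rust code is illustrative, not the actual source, and only the path literal matters):

```diff
 // rust/openvasd/src/config.rs (context is illustrative)
-        socket: PathBuf::from("/var/run/ospd/ospd.sock"),
+        socket: PathBuf::from("/run/ospd/ospd-openvas.sock"),
```

It may also be worth checking openvasd --help for a config-file or socket option, which would avoid patching the source on future upgrades; I haven’t verified which options this version supports.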

Then I went back to the documentation and rebuilt the OpenVAS scanner:
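Roughly these steps, assuming the cargo-based build described in the source-build documentation (the install destination and service unit name below are assumptions; match your original install):

```sh
cd /root/source/openvas-scanner-23.0.1/rust
cargo build --release
# Install the rebuilt daemon (destination is an assumption)
sudo cp target/release/openvasd /usr/local/bin/openvasd
sudo systemctl restart openvasd   # unit name is an assumption
```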

This worked for me.
