PostgreSQL crash/recovery breaks ospd-openvas socket → gvmd loses scanner forever, tasks stuck in Interrupted

Hello everyone,

I am running the official Greenbone Community Edition via docker-compose (greenbone images, GVM 22.9 / gvmd 26.6.0).

Problem: every time the PostgreSQL container restarts (or even just recovers after a crash), gvmd loses the connection to ospd-openvas and never recovers it automatically. After that:

  • The Unix socket /run/ospd/ospd-openvas.sock disappears.

  • All running tasks switch to status Interrupted.

  • gvmd starts spamming warnings forever:

    md manage:WARNING:... osp_scanner_feed_version: failed to connect to /run/ospd/ospd-openvas.sock
    
  • Even when ospd-openvas comes back up completely and successfully loads all VTs (logs show “Finished loading VTs. The VT cache has been updated …”), gvmd still considers the scanner dead and does not reconnect.
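
For reference, the state can be checked from inside the containers. This is only a sketch: the service names ospd-openvas and gvmd are the ones from the community docker-compose.yml, and the scanner UUID is a placeholder taken from the --get-scanners output:

    # Is the socket present where gvmd expects it?
    docker compose -f docker-compose.yml -p greenbone-community-edition \
        exec gvmd ls -l /run/ospd/ospd-openvas.sock

    # What does gvmd itself think about the scanner?
    docker compose -f docker-compose.yml -p greenbone-community-edition \
        exec -u gvmd gvmd gvmd --get-scanners
    docker compose -f docker-compose.yml -p greenbone-community-edition \
        exec -u gvmd gvmd gvmd --verify-scanner=<scanner-uuid>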

PostgreSQL logs show the typical recovery messages:

database system was not properly shut down; automatic recovery in progress
invalid record length at ... wanted 24, got 0

Is this a known issue? Is there an official, up-to-date, “working” docker-compose.yml for the Community Edition (22.9+) that already solves these problems?

I would be very grateful for a working docker-compose example or any definitive recommendations.

Thank you in advance!

Just a first-impression guess, but how much RAM do you have on the base system? Is PostgreSQL getting killed off by the OOM killer?

every time the PostgreSQL container restarts (or even just recovers after a crash)

Ideally, this should not be happening in the first place.
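
One quick way to rule that out, as a rough sketch (the PostgreSQL container name is a placeholder, use whatever your compose project actually created):

    # Did the PostgreSQL container exit because of an OOM kill? (reflects the most recent run)
    docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}} {{.State.FinishedAt}}' \
        <postgresql-container-name>

    # Any kernel OOM killer activity on the host?
    dmesg -T | grep -iE 'out of memory|oom-kill'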

There is enough RAM. I looked at the logs and it definitely wasn't the OOM killer. Docker also does not limit the RAM for the container in any way.
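
In case anyone wants to double-check the same thing on their own setup, something like this should show both points (just a sketch; the container name is a placeholder):

    # 0 means Docker imposes no memory limit on the container
    docker inspect --format '{{.HostConfig.Memory}}' <postgresql-container-name>

    # Overall memory headroom on the host
    free -h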

Is this normal for volumes?
VOLUME NAME                                          LINKS   SIZE
greenbone-community-edition_gvmd_socket_vol          3       3B
greenbone-community-edition_openvas_data_vol         4       1.116kB
greenbone-community-edition_psql_socket_vol          2       97.9kB
greenbone-community-edition_scap_data_vol            2       6.519GB
greenbone-community-edition_vt_data_vol              3       338.1MB
greenbone-community-edition_gpg_data_vol             3       4.479kB
greenbone-community-edition_notus_data_vol           3       524.9MB
greenbone-community-edition_openvas_log_data_vol     4       7.181GB
greenbone-community-edition_ospd_openvas_socket_vol  3       1B
greenbone-community-edition_psql_data_vol            2       17.36GB
greenbone-community-edition_redis_socket_vol         2       2B
greenbone-community-edition_cert_data_vol            3       134.5MB
greenbone-community-edition_data_objects_vol         3       25.85MB
greenbone-community-edition_gvmd_data_vol            1       153kB

Yes, that is basically the "normal" list of volumes. I can't verify the exact expected sizes at the moment because I don't have access to a system with the containers, but they look roughly correct. TBH, I haven't installed the containers in a few weeks, so I don't know whether there has been a component upgrade that is problematic.
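
If you want to rule out a recent image change, it might help to list the image versions your project is actually running. A sketch, assuming the compose file and the project name that your volume listing suggests:

    docker compose -f docker-compose.yml -p greenbone-community-edition images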