Sql_open: PQerrorMessage (conn): FATAL: remaining connection slots are reserved for non-replication superuser connections

Hello,

I am currently using the dockerized Greenbone Community Edition (Greenbone Vulnerability Manager version 22.5.2, DB revision 255).

This morning, when downloading the images and booting up the containers via docker-compose, I got the following errors in the service logs:

gvmd:

greenbone-community-edition-gvmd-1  | md manage:   INFO:2023-07-06 07h20.18 UTC:79: Updating /var/lib/gvm/cert-data/CB-K14.xml
greenbone-community-edition-gvmd-1  | md manage:   INFO:2023-07-06 07h20.20 UTC:79: Updating /var/lib/gvm/cert-data/CB-K21.xml
greenbone-community-edition-gvmd-1  | md manage:   INFO:2023-07-06 07h20.23 UTC:79: Updating /var/lib/gvm/cert-data/CB-K22.xml
greenbone-community-edition-gvmd-1  | md manage:   INFO:2023-07-06 07h20.30 UTC:79: SCAP database does not exist (yet), skipping CERT severity score update
greenbone-community-edition-gvmd-1  | md manage:   INFO:2023-07-06 07h20.30 UTC:79: sync_cert: Updating CERT info succeeded.
greenbone-community-edition-gvmd-1  | md manage:WARNING:2023-07-06 07h26.48 utc:324: sql_open: PQconnectPoll failed
greenbone-community-edition-gvmd-1  | md manage:WARNING:2023-07-06 07h26.48 utc:324: sql_open: PQerrorMessage (conn): FATAL:  remaining connection slots are reserved for non-replication superuser connections
greenbone-community-edition-gvmd-1  | md manage:WARNING:2023-07-06 07h26.48 utc:324: init_manage_open_db: sql_open failed
greenbone-community-edition-gvmd-1  | md manage:WARNING:2023-07-06 07h26.48 utc:325: sql_open: PQconnectPoll failed
greenbone-community-edition-gvmd-1  | md manage:WARNING:2023-07-06 07h26.48 utc:325: sql_open: PQerrorMessage (conn): FATAL:  remaining connection slots are reserved for non-replication superuser connections
greenbone-community-edition-gvmd-1  | md manage:WARNING:2023-07-06 07h26.48 utc:325: init_manage_open_db: sql_open failed

pg-gvm:

greenbone-community-edition-pg-gvm-1  | 2023-07-06 07:19:06.257 UTC [83] LOG:  starting PostgreSQL 13.11 (Debian 13.11-0+deb11u1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
greenbone-community-edition-pg-gvm-1  | 2023-07-06 07:19:06.258 UTC [83] LOG:  listening on IPv4 address "127.0.0.1", port 5432
greenbone-community-edition-pg-gvm-1  | 2023-07-06 07:19:06.258 UTC [83] LOG:  could not bind IPv6 address "::1": Cannot assign requested address
greenbone-community-edition-pg-gvm-1  | 2023-07-06 07:19:06.258 UTC [83] HINT:  Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
greenbone-community-edition-pg-gvm-1  | 2023-07-06 07:19:06.267 UTC [83] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
greenbone-community-edition-pg-gvm-1  | 2023-07-06 07:19:06.274 UTC [84] LOG:  database system was shut down at 2023-07-06 07:19:06 UTC
greenbone-community-edition-pg-gvm-1  | 2023-07-06 07:19:06.281 UTC [83] LOG:  database system is ready to accept connections
greenbone-community-edition-pg-gvm-1  | 2023-07-06 07:19:06.978 UTC [94] gvmd@gvmd ERROR:  relation "public.meta" does not exist at character 19
greenbone-community-edition-pg-gvm-1  | 2023-07-06 07:19:06.978 UTC [94] gvmd@gvmd STATEMENT:  SELECT value FROM public.meta WHERE name = 'database_version';
greenbone-community-edition-pg-gvm-1  | 2023-07-06 07:19:08.207 UTC [95] gvmd@gvmd ERROR:  relation "public.meta" does not exist at character 19
greenbone-community-edition-pg-gvm-1  | 2023-07-06 07:19:08.207 UTC [95] gvmd@gvmd STATEMENT:  SELECT value FROM public.meta WHERE name = 'database_version';
greenbone-community-edition-pg-gvm-1  | 2023-07-06 07:19:08.262 UTC [95] gvmd@gvmd ERROR:  tuple concurrently updated
greenbone-community-edition-pg-gvm-1  | 2023-07-06 07:19:08.262 UTC [95] gvmd@gvmd STATEMENT:  CREATE OR REPLACE FUNCTION level_max_severity (lvl text, cls text)RETURNS double precision AS $$DECLARE  v double precision;BEGIN  CASE    WHEN lower (lvl) = 'log' THEN      v := 0.0;    WHEN lower (lvl) = 'false positive' THEN      v := -1.0;    WHEN lower (lvl) = 'error' THEN      v :=  -3.0;    ELSE      CASE        WHEN lower (lvl) = 'high' THEN          v := 10.0;        WHEN lower (lvl) = 'medium' THEN          v := 6.9;        WHEN lower (lvl) = 'low' THEN          v := 3.9;        ELSE          v := -98.0;        END CASE;    END CASE;  return v;END;$$ LANGUAGE plpgsql;
greenbone-community-edition-pg-gvm-1  | 2023-07-06 07:19:09.064 UTC [101] gvmd@gvmd WARNING:  there is already a transaction in progress
greenbone-community-edition-pg-gvm-1  | 2023-07-06 07:19:09.867 UTC [101] gvmd@gvmd WARNING:  there is no transaction in progress
greenbone-community-edition-pg-gvm-1  | 2023-07-06 07:26:48.593 UTC [356] gvmd@gvmd FATAL:  remaining connection slots are reserved for non-replication superuser connections
greenbone-community-edition-pg-gvm-1  | 2023-07-06 07:26:48.892 UTC [357] gvmd@gvmd FATAL:  remaining connection slots are reserved for non-replication superuser connections
greenbone-community-edition-pg-gvm-1  | 2023-07-06 07:27:00.031 UTC [359] gvmd@gvmd FATAL:  remaining connection slots are reserved for non-replication superuser connections

Has anyone experienced the same situation today?

Thank you in advance for your answers.

Kind regards.


Dear all,

I modified the docker-compose file and changed the gvmd image tag from stable to 22.5.1 (the previous version):

  gvmd:
    image: greenbone/gvmd:22.5.1

Now it seems to be loading the SCAP data successfully:

greenbone-community-edition-gvmd-1  | md manage:   INFO:2023-07-06 09h34.01 UTC:80: Updating /var/lib/gvm/cert-data/CB-K21.xml
greenbone-community-edition-gvmd-1  | md manage:   INFO:2023-07-06 09h34.02 UTC:80: Updating /var/lib/gvm/cert-data/CB-K22.xml
greenbone-community-edition-gvmd-1  | md manage:   INFO:2023-07-06 09h34.06 UTC:80: SCAP database does not exist (yet), skipping CERT severity score update
greenbone-community-edition-gvmd-1  | md manage:   INFO:2023-07-06 09h34.06 UTC:80: sync_cert: Updating CERT info succeeded.
greenbone-community-edition-gvmd-1  | md manage:   INFO:2023-07-06 09h36.39 UTC:79: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2013.xml
greenbone-community-edition-gvmd-1  | md manage:   INFO:2023-07-06 09h36.46 UTC:79: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2012.xml

Additionally, something I did not mention before: when I ran the top command while booting up, there were a few dozen gvmd processes, whereas now there is only one:

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
2209324 104       20   0  244728 178068 150144 R  56.1   2.2   3:33.78 postgres
2209321 rse0001   20   0 1814824   1.7g   7040 S  10.0  21.3   0:51.34 gvmd

Kind regards.


Same problem

It seems the new gvmd version 22.5.3 has not fixed the issue:

Again, multiple gvmd processes listed with top:

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
2177951 rse0001   20   0  169616   7328   4096 R   7.0   0.1   0:28.17 gvmd
2177870 rse0001   20   0   20864  17792   5248 R   6.6   0.2   0:31.18 xml_split
2177927 rse0001   20   0  169616   7328   4096 R   6.6   0.1   0:29.18 gvmd
2177948 rse0001   20   0  169616   7840   4608 R   6.6   0.1   0:27.86 gvmd
...
2179656 rse0001   20   0  169616   7200   3968 R   6.6   0.1   0:07.50 gvmd
2179710 rse0001   20   0  169616   7840   4608 R   6.6   0.1   0:04.07 gvmd

gvmd:

greenbone-community-edition-gvmd-1  | md manage:   INFO:2023-07-06 15h39.56 UTC:84: Updating /var/lib/gvm/cert-data/CB-K14.xml
greenbone-community-edition-gvmd-1  | md manage:   INFO:2023-07-06 15h39.57 UTC:84: Updating /var/lib/gvm/cert-data/CB-K21.xml
greenbone-community-edition-gvmd-1  | md manage:   INFO:2023-07-06 15h39.58 UTC:84: Updating /var/lib/gvm/cert-data/CB-K22.xml
greenbone-community-edition-gvmd-1  | md manage:   INFO:2023-07-06 15h40.01 UTC:84: SCAP database does not exist (yet), skipping CERT severity score update
greenbone-community-edition-gvmd-1  | md manage:   INFO:2023-07-06 15h40.01 UTC:84: sync_cert: Updating CERT info succeeded.
greenbone-community-edition-gvmd-1  | md manage:WARNING:2023-07-06 15h46.28 utc:398: sql_open: PQconnectPoll failed
greenbone-community-edition-gvmd-1  | md manage:WARNING:2023-07-06 15h46.28 utc:398: sql_open: PQerrorMessage (conn): FATAL:  remaining connection slots are reserved for non-replication superuser connections
greenbone-community-edition-gvmd-1  | md manage:WARNING:2023-07-06 15h46.28 utc:398: init_manage_open_db: sql_open failed

Is there a temporary fix, e.g. modifying the docker-compose.yml file to pull a previous version or tag rather than image: greenbone/<service-name>:stable for each service? This would allow the Docker containers to be rebuilt from the previous, functional version.

So, for example, here is the Docker Hub tags page for gvmd: Docker

You can set all the images in the docker-compose.yml file to tags from before the bug started, such as:

gvmd:
    image: greenbone/gvmd:22.5.1

Which was pushed about 10 days ago.

or

gvmd:
    image: greenbone/gvmd:oldstable

Which was pushed about 7 months ago.
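
If it helps, once a tag is pinned in docker-compose.yml, the change can be applied with the usual Compose commands (a sketch; the service name gvmd matches the snippets above):

    # pull the pinned image and recreate only the gvmd container
    docker compose -f docker-compose.yml pull gvmd
    docker compose -f docker-compose.yml up -d gvmd

If you started the stack with a project name (e.g. -p greenbone-community-edition, as the container names in the logs suggest), pass the same -p flag to both commands.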

:thinking:

You can get around the gvmd issue by pinning the version in the .yml file. However, if you set the gvmd version to 22.5.1 in order to get scans to work, you may see severity 10 findings in the scans for each host found, complaining that the openvas-scanner version is old.


I’ve raised a bug report for this.
I’m having the same issue.


FWIW, with the PostgreSQL max_connections and shared_buffers increased from 100/128MB to 400/512MB, gvmd 22.5.3 still fails when a scan is started due to a lack of DB connections.
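
For reference, that tuning corresponds to lines like these in postgresql.conf (a sketch using the values above; both settings require a PostgreSQL restart):

    max_connections = 400      # default is 100; change requires restart
    shared_buffers = 512MB     # default is 128MB; change requires restart

Even with those values the connection slots were exhausted once a scan started, so raising the limits alone did not fix it here.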

I’ve returned to gvmd 22.5.1 and created an override for the internal sev 10s for now.

I’m just starting to learn the source code, but I see that in gvmd/src/gvmd.c an infinite loop is started in the serve_and_schedule function, which may spawn forked gvmd processes.

Although I’m not so familiar with the source code yet, it seems plausible that this loop could be receiving multiple signals (from the feed sync?) and spawning too many processes.
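
To make that suspicion concrete, here is a minimal, purely illustrative sketch (not the actual gvmd code, and the helper names are made up) of a fork-per-event serve loop; if each child opens its own database connection and events arrive faster than the children finish, the PostgreSQL connection slots eventually run out:

    /* Hypothetical sketch, not serve_and_schedule() itself: it only shows
     * how an unthrottled fork-per-event loop can exhaust DB connections. */
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    static int event_pending (void) { return 1; }   /* stand-in for the signal/feed-sync checks */
    static void handle_event (void) { sleep (5); }  /* stand-in: child opens a DB connection and works */

    int
    main (void)
    {
      for (;;)                        /* infinite serve loop */
        {
          if (event_pending ())
            {
              pid_t pid = fork ();    /* one child per event */
              if (pid == 0)
                {
                  handle_event ();
                  _exit (EXIT_SUCCESS);
                }
            }
          /* Finished children are reaped, but nothing limits how many run
             at once; if events keep arriving faster than handle_event()
             completes, the number of live children (and their database
             connections) keeps growing. */
          while (waitpid (-1, NULL, WNOHANG) > 0)
            ;
          usleep (100000);
        }
    }

If the real loop behaves anything like this when it is flooded with signals during the feed sync, that would match the dozens of gvmd processes visible in top.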

Again, I’m just learning the source code here and so if anyone can point me in the right direction of what could be causing this issue please let me know! :slight_smile:

A possible fix currently seems to be in preparation here:


I got the same errors when installing from source.

I have successfully bypassed this error by following these steps in my Docker environment:

  • I connected directly to the pg-gvm container.
  • I copied the content of “/etc/postgresql/13/main/postgresql.conf” from my pg-gvm container to a host file (“./postgresql/postgresql.conf”); see the copy command sketch after this list.
  • I made changes to the local file content:
max_connections = 1024 # (change requires restart) 
superuser_reserved_connections = 5 # (change requires restart)
  • I made changes to the “docker-compose.yml” file:
  pg-gvm:
    image: greenbone/pg-gvm:stable
    restart: on-failure
    volumes:
      - ./postgresql/postgresql.conf:/etc/postgresql/13/main/postgresql.conf:ro
      - psql_data_vol:/var/lib/postgresql
      - psql_socket_vol:/var/run/postgresql
  • I restarted everything, and now it is running smoothly with 2 concurrent scans.
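
For the copy step above, one way to get the file out of the running container is docker cp (a sketch; the container name is taken from the logs in this thread, adjust it to your setup):

    mkdir -p ./postgresql
    docker cp greenbone-community-edition-pg-gvm-1:/etc/postgresql/13/main/postgresql.conf ./postgresql/postgresql.conf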

There is no need to increase the maximum number of connections nor to modify the docker-compose.yml file.

Docker images edge, stable, etc. have been updated with the latest patch.

I have already booted up the edge image and the scans seem to run flawlessly.
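
For anyone following along, picking up the patched images is the usual pull and recreate (a sketch, assuming the default docker-compose.yml; add your -p project name if you use one):

    docker compose -f docker-compose.yml pull
    docker compose -f docker-compose.yml up -d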

Thank you @Matt for the patch.

Kind regards,


You mean, there is no longer any need…right? :wink:

Cool, thanks for the fix.


The gvmd release 22.5.4 includes the fix and new stable container images have been uploaded. This issue should be fixed now.


Thank you very much for the quick bugfix!
