gvmd CPU usage

Hi,

I built Greenbone from source (and also tried the docker-compose deployment) and hit the same issue in both cases.

Everything goes well until I open the web interface; at that point gvmd spawns up to ~100 copies of itself and uses all available CPU.

The host OS is Ubuntu 22.04 LTS (x86_64) with the latest updates.

```
 296514 gvm       20   0  446024 175732   4040 R  51.8   2.2   0:45.62 gvmd: gvmd: Serving  --foreground --osp-vt-update=/run/ospd/ospd-openvas.sock --listen-group=gvm
 296143 gvm       20   0  446024 175732   4040 R  36.6   2.2   1:15.82 gvmd: gvmd: Serving  --foreground --osp-vt-update=/run/ospd/ospd-openvas.sock --listen-group=gvm
 296387 gvm       20   0  446024 175732   4040 R  33.3   2.2   0:50.65 gvmd: gvmd: Serving  --foreground --osp-vt-update=/run/ospd/ospd-openvas.sock --listen-group=gvm
 296148 gvm       20   0  446024 175732   4040 R  33.0   2.2   1:12.02 gvmd: gvmd: Serving  --foreground --osp-vt-update=/run/ospd/ospd-openvas.sock --listen-group=gvm
 296153 gvm       20   0  446024 177140   5416 R  33.0   2.2   1:13.60 gvmd: gvmd: Serving  --foreground --osp-vt-update=/run/ospd/ospd-openvas.sock --listen-group=gvm
 296281 gvm       20   0  446024 175732   4040 R  32.0   2.2   0:57.77 gvmd: gvmd: Serving  --foreground --osp-vt-update=/run/ospd/ospd-openvas.sock --listen-group=gvm
 296489 gvm       20   0  446024 175732   4040 R  32.0   2.2   0:44.34 gvmd: gvmd: Serving  --foreground --osp-vt-update=/run/ospd/ospd-openvas.sock --listen-group=gvm
 296492 gvm       20   0  446024 175732   4040 R  30.7   2.2   0:43.66 gvmd: gvmd: Serving  --foreground --osp-vt-update=/run/ospd/ospd-openvas.sock --listen-group=gvm
 296335 gvm       20   0  446024 175732   4040 R  30.4   2.2   0:50.10 gvmd: gvmd: Serving  --foreground --osp-vt-update=/run/ospd/ospd-openvas.sock --listen-group=gvm
 296598 gvm       20   0  446024 175732   4040 R  30.0   2.2   0:41.05 gvmd: gvmd: Serving  --foreground --osp-vt-update=/run/ospd/ospd-openvas.sock --listen-group=gvm
 296253 gvm       20   0  446024 175732   4040 R  29.7   2.2   1:07.94 gvmd: gvmd: Serving  --foreground --osp-vt-update=/run/ospd/ospd-openvas.sock --listen-group=gvm
 296462 gvm       20   0  446024 175732   4040 R  29.0   2.2   0:44.44 gvmd: gvmd: Serving  --foreground --osp-vt-update=/run/ospd/ospd-openvas.sock --listen-group=gvm
 296086 gvm       20   0  446024 175732   4040 R  28.4   2.2   1:19.28 gvmd: gvmd: Serving  --foreground --osp-vt-update=/run/ospd/ospd-openvas.sock --listen-group=gvm
 296608 gvm       20   0  446024 175732   4040 R  26.1   2.2   0:39.45 gvmd: gvmd: Serving  --foreground --osp-vt-update=/run/ospd/ospd-openvas.sock --listen-group=gvm
 296218 gvm       20   0  446024 174944   3252 R  25.4   2.2   1:03.47 gvmd: gvmd: Serving  --foreground --osp-vt-update=/run/ospd/ospd-openvas.sock --listen-group=gvm
 296094 gvm       20   0  446024 175732   4040 R  25.1   2.2   1:19.84 gvmd: gvmd: Serving  --foreground --osp-vt-update=/run/ospd/ospd-openvas.sock --listen-group=gvm
 296106 gvm       20   0  446024 175732   4040 R  25.1   2.2   1:19.29 gvmd: gvmd: Serving  --foreground --osp-vt-update=/run/ospd/ospd-openvas.sock --listen-group=gvm
 296220 gvm       20   0  446024 176488   4764 R  25.1   2.2   1:07.67 gvmd: gvmd: Serving  --foreground --osp-vt-update=/run/ospd/ospd-openvas.sock --listen-group=gvm
 296230 gvm       20   0  446024 175732   4040 R  25.1   2.2   1:08.60 gvmd: gvmd: Serving  --foreground --osp-vt-update=/run/ospd/ospd-openvas.sock --listen-group=gvm
```
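
For reference, while reproducing this I counted the spawned processes with something like the snippet below (a rough check: pgrep here matches any process whose name contains gvmd, so the number is approximate):

```
# count running gvmd processes once per second; Ctrl-C to stop
watch -n 1 'pgrep -c gvmd'
```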

I ran strace on one of the processes to see what might be going on. It seems to be busy-looping: pselect6 keeps reporting FD 17 as readable, but read() on it returns 0 (EOF), so the process just spins. I followed that descriptor.

```
# strace -p <pid> -s 2000
...
pselect6(18, [17], [], NULL, NULL, NULL) = 1 (in [17])
read(17, "", 1048576)                   = 0
pselect6(18, [17], [], NULL, NULL, NULL) = 1 (in [17])
read(17, "", 1048576)                   = 0
pselect6(18, [17], [], NULL, NULL, NULL) = 1 (in [17])
read(17, "", 1048576)                   = 0
pselect6(18, [17], [], NULL, NULL, NULL) = 1 (in [17])
read(17, "", 1048576)                   = 0
pselect6(18, [17], [], NULL, NULL, NULL) = 1 (in [17])
read(17, "", 1048576)                   = 0
pselect6(18, [17], [], NULL, NULL, NULL) = 1 (in [17])
read(17, "", 1048576)                   = 0
pselect6(18, [17], [], NULL, NULL, NULL) = 1 (in [17])
...
```
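
To get more than an eyeball impression of the loop, a per-syscall summary of one of the busy processes can be captured like this (strace prints the table when you detach with Ctrl-C; substitute a real PID for <pid>):

```
# attach, follow children, and count syscalls instead of printing each one
strace -f -c -p <pid>
```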

FD 17 is a socket:

```
root@tomassrnka-greenbone:~# ls -la /proc/294896/fd/17
lrwx------ 1 gvm gvm 64 Jul  9 11:12 /proc/294896/fd/17 -> 'socket:[3206582]'
```

That inode belongs to an established connection on the gvmd Unix socket:

```
# ss -a -e  |grep 3206582
u_str ESTAB  0      0                                                   /run/gvmd/gvmd.sock 3206582                            * 0       --- ino:11020 dev:0/26
```
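
To double-check which processes sit on the two ends of that connection, adding -p to ss should show the owners (run as root; the inode is the one from the ls output above):

```
# list Unix sockets with their owning processes, filtered by the socket inode
ss -x -a -p | grep 3206582
```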

I looked into the gvmd logs, but all they complain about is running out of PostgreSQL connections, so I increased max_connections from 100 to 1000. That helped somewhat, but it only delayed the problem: I then ended up with hundreds of gvmd processes using all available CPU.

```
# tail -f /var/log/gvm/gvmd.log
... 
md manage:WARNING:2023-07-09 08h38.35 utc:294048: init_manage_open_db: sql_open failed
md manage:WARNING:2023-07-09 08h38.45 utc:294053: sql_open: PQconnectPoll failed
md manage:WARNING:2023-07-09 08h38.45 utc:294053: sql_open: PQerrorMessage (conn): connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL:  remaining connection slots are reserved for non-replication superuser connections
md manage:WARNING:2023-07-09 08h38.45 utc:294053: init_manage_open_db: sql_open failed
md manage:WARNING:2023-07-09 08h38.50 utc:294057: sql_open: PQconnectPoll failed
md manage:WARNING:2023-07-09 08h38.50 utc:294057: sql_open: PQerrorMessage (conn): connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL:  remaining connection slots are reserved for non-replication superuser connections
md manage:WARNING:2023-07-09 08h38.50 utc:294057: init_manage_open_db: sql_open failed
md manage:WARNING:2023-07-09 08h39.00 utc:294061: sql_open: PQconnectPoll failed
md manage:WARNING:2023-07-09 08h39.00 utc:294061: sql_open: PQerrorMessage (conn): connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL:  remaining connection slots are reserved for non-replication superuser connections
md manage:WARNING:2023-07-09 08h39.00 utc:294061: init_manage_open_db: sql_open failed
```
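
For reference, this is roughly how I raised the connection limit from 100 to 1000 (via ALTER SYSTEM instead of editing postgresql.conf by hand; the service name assumes Ubuntu's packaged PostgreSQL):

```
sudo -u postgres psql -c "SHOW max_connections;"                      # check the current limit
sudo -u postgres psql -c "ALTER SYSTEM SET max_connections = 1000;"   # persist the new limit
sudo systemctl restart postgresql                                      # change requires a restart
```

As described above, though, this only pushed the problem further out rather than fixing it.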

Any ideas what might be going on here please?

Hi @tomassrnka, we are already aware of this issue (see here) and are working on it!
