Environment
Operating system: Ubuntu 22.04.3 LTS
Kernel: Linux 5.15.0-107-generic
GVM Versions
gsa: Greenbone Security Assistant 22.06.0
gvm: Greenbone Vulnerability Manager 22.9.0
PostgreSQL Version
PostgreSQL: PostgreSQL 16.0 (Ubuntu 16.0-1.pgdg22.04+1)
Problem
We have observed a problem with result fetch times from GVM. Whether we fetch results directly with the gvm-cli `get_results` command, e.g.:

`gvm-cli --gmp-username $GmpUsername --gmp-password $GmpPassword --timeout $current_timeout socket --xml "<get_results filter='rows=$rows first=$offset' task_id=$taskId />"`

or use the GSA, the results page takes a long time to load and sometimes does not load at all, resulting in session timeouts. To measure the fetch time, we wrote scripts that page through the 3378 records of one scan, 50 records per page, with a timeout of 1800 seconds. Initially, fetching all the records took several hours.
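For context, our test script simply walks the result pages using the GMP `rows`/`first` filter keywords. A minimal sketch of that paging scheme (the helper name is ours; the totals are the ones quoted above):

```python
# Paging scheme used by our test: 3378 results, 50 rows per page.
# GMP's "first" filter keyword is 1-based, so pages start at 1, 51, 101, ...
ROWS = 50
TOTAL = 3378

def page_filters(total=TOTAL, rows=ROWS):
    """Yield the GMP filter string for each page of results.

    Each string is substituted into <get_results filter='...' task_id=... />
    for one gvm-cli invocation, and we time each call.
    """
    for first in range(1, total + 1, rows):
        yield f"rows={rows} first={first}"

filters = list(page_filters())
print(len(filters), filters[0], filters[-1])  # 68 pages, from first=1 to first=3351
```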
After researching the forum for answers, we came across a discussion here which mentioned a potential fix: editing the postgresql.conf file and changing the parameter from `#jit = on` (commented out, i.e. the default) to `jit = off`. After making this change and restarting the PostgreSQL service, the result fetch was very fast, and the GSA UI loaded the results page much more quickly.
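Concretely, the change we made in postgresql.conf was:

```
# was: #jit = on    (commented out, i.e. the default of on)
jit = off
```

As an aside, the same setting can also be changed without editing the file, via `ALTER SYSTEM SET jit = off;` followed by `SELECT pg_reload_conf();` (jit does not require a full restart to take effect). In hindsight, doing it that way would have separated the effect of the setting change from the effect of the service restart in our test.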
However, after leaving the system idle for a day, we conducted the same test again, and the results fetch time reverted to being slow. This led us to believe that it wasn’t the configuration change that improved the fetch time, but rather the restart of the PostgreSQL service. To test this theory, we restarted the PostgreSQL service and conducted the same test again. This confirmed our hypothesis, as the results fetch was once again very fast.
Below are the logs of both test scenarios (slow before the PostgreSQL restart, fast after it), showing the execution time for each of the first 10 pages (50 records per page).
Slow Execution Logs (Before PostgreSQL restart):
Completed execution for page 1 in 14 seconds with timeout 1800 seconds.
Completed execution for page 2 in 27 seconds with timeout 1800 seconds.
Completed execution for page 3 in 39 seconds with timeout 1800 seconds.
Completed execution for page 4 in 51 seconds with timeout 1800 seconds.
Completed execution for page 5 in 63 seconds with timeout 1800 seconds.
Completed execution for page 6 in 76 seconds with timeout 1800 seconds.
Completed execution for page 7 in 89 seconds with timeout 1800 seconds.
Completed execution for page 8 in 100 seconds with timeout 1800 seconds.
Completed execution for page 9 in 112 seconds with timeout 1800 seconds.
Completed execution for page 10 in 123 seconds with timeout 1800 seconds.
Fast Execution Logs (After PostgreSQL restart):
Completed execution for page 1 in 1 seconds with timeout 1800 seconds.
Completed execution for page 2 in 0 seconds with timeout 1800 seconds.
Completed execution for page 3 in 1 seconds with timeout 1800 seconds.
Completed execution for page 4 in 1 seconds with timeout 1800 seconds.
Completed execution for page 5 in 0 seconds with timeout 1800 seconds.
Completed execution for page 6 in 1 seconds with timeout 1800 seconds.
Completed execution for page 7 in 1 seconds with timeout 1800 seconds.
Completed execution for page 8 in 1 seconds with timeout 1800 seconds.
Completed execution for page 9 in 0 seconds with timeout 1800 seconds.
Completed execution for page 10 in 1 seconds with timeout 1800 seconds.
This raises the question: what could be going wrong with PostgreSQL after it has been idle for some time? Any suggestions or recommendations are highly appreciated!
Thank you!