Currently our DB is at around 20 GB. Is this normal?

Hi :slight_smile:

I just found this topic. Our installation is also growing over time. Currently our DB is at around 20 GB. Is this a normal size for a medium-sized installation? We have around 40 scan jobs, each covering a /24 network, though not all IPs within those networks are in use.

Here’s the disk usage of our docker volumes:

12K     ./greenbone-community-edition_openvas_log_data_vol
24K     ./greenbone-community-edition_gpg_data_vol
6.5G    ./greenbone-community-edition_scap_data_vol
140K    ./greenbone-community-edition_psql_socket_vol
12K     ./greenbone-community-edition_ospd_openvas_socket_vol
4.3M    ./greenbone-community-edition_data_objects_vol
20G     ./greenbone-community-edition_psql_data_vol
12K     ./greenbone-community-edition_redis_socket_vol
512M    ./greenbone-community-edition_notus_data_vol
20K     ./greenbone-community-edition_openvas_data_vol
639M    ./greenbone-community-edition_vt_data_vol
280K    ./greenbone-community-edition_gvmd_data_vol
12K     ./greenbone-community-edition_gvmd_socket_vol
148M    ./greenbone-community-edition_cert_data_vol
27G     .

The biggest volumes are the DB with 20 GB and the SCAP data with 6.5 GB.
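
For reference, a listing like the one above can be produced with something along these lines on the Docker host (on a default setup the named volumes live under /var/lib/docker/volumes, but the path may differ on your system):

# run as root on the Docker host; adjust the path if your volumes live elsewhere
cd /var/lib/docker/volumes
du -h -d 1 .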

Best regards

Chris

To reduce disk usage, I only keep the last 5-20 reports of each scan task. If you must keep the full history for audit purposes, you can export the reports to CSV and store them out-of-band. :slight_smile:

Yes, that’s what we do, too. Only the last 5 reports for each scan are kept in the system…

I've now logged in to the Postgres DB and queried the table sizes. Here are the top tables by size (including indices):

 table_schema |            table_name             | size_pretty | size_bytes 
--------------+-----------------------------------+-------------+------------
 scap         | cpe_matches                       | 7851 MB     | 8232288256
 scap         | affected_products                 | 4953 MB     | 5193285632
 scap         | cpes                              | 1609 MB     | 1686986752
 public       | report_host_details               | 785 MB      |  823353344
 public       | nvts                              | 599 MB      |  628277248
 scap         | cves                              | 585 MB      |  612966400
 scap         | cpe_refs                          | 366 MB      |  383401984
 public       | vt_refs                           | 346 MB      |  363290624
 scap         | cpe_nodes_match_criteria          | 340 MB      |  356425728
 public       | results                           | 275 MB      |  288399360
 scap         | cve_references                    | 258 MB      |  270245888
 scap         | cpe_match_nodes                   | 136 MB      |  142360576
 scap         | cpe_match_strings                 | 126 MB      |  132366336

So it looks like the biggest tables are the scap tables. Is this a normal size for them?
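
For reference, a query roughly along these lines, run inside psql against the gvmd database (the database name may differ depending on your setup), reproduces that listing; pg_total_relation_size includes indices and TOAST data:

-- top tables by total size (table + indices + TOAST)
SELECT table_schema,
       table_name,
       pg_size_pretty(pg_total_relation_size(quote_ident(table_schema) || '.' || quote_ident(table_name))) AS size_pretty,
       pg_total_relation_size(quote_ident(table_schema) || '.' || quote_ident(table_name)) AS size_bytes
  FROM information_schema.tables
 WHERE table_type = 'BASE TABLE'
   AND table_schema NOT IN ('pg_catalog', 'information_schema')
 ORDER BY size_bytes DESC
 LIMIT 13;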

Hi Chartman123. Can you please start a new thread when you have a distinct issue instead of appending to a somewhat related topic from over a month ago? Thank you! :slight_smile:

As for your question: you can compare your installation against a fresh container installation as a baseline. Just some troubleshooting advice for the future. :slight_smile:

What would make you believe that your installation would be abnormal? :thinking:

Just that the disk usage has grown a lot over the last few months without there being more jobs/reports in the system. :slight_smile:

Good idea, thanks!

Do you do database maintenance from time to time? It doesn't look like it…
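
If not, even a plain VACUUM run on the gvmd database already reclaims space from dead rows. A minimal sketch, run inside psql (check the database maintenance chapter of the docs for the full recommended procedure):

-- reclaim dead tuples and refresh planner statistics (does not block normal reads/writes)
VACUUM (ANALYZE, VERBOSE);

-- VACUUM FULL rewrites the tables and returns disk space to the OS,
-- but it takes exclusive locks, so only run it while no scans or feed
-- syncs are active:
-- VACUUM FULL;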


Indeed: no :wink:

Wasn’t aware of this chapter of the docs. I’ll have a closer look at this, thanks again!