Cannot export PDFs (No generate script found)

GVM versions

ENV gvm_libs_version="21.4.3"
ENV openvas_scanner_version="21.4.3"
ENV ospd_openvas_version="21.4.3"
ENV gvmd_version="21.4.4"
ENV gsa_version="21.4.3"

Environment

Operating system: Debian 11
Kernel: 5.16.8-200.fc35.x86_64
Installation method / source: From source and running inside container

Problem

For some reason my GVM instance cannot export PDF reports; they all end up with a zero byte size. I believe I have all the required dependencies installed.

This is what I install:

RUN apt-get install --assume-yes \
        imagemagick \
        texlive-fonts-recommended \
        texlive-latex-base \
        xml-twig-tools \
        xsltproc
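
As a quick sanity check of the PDF toolchain inside the container, I can run something like this (a sketch under my assumptions; as far as I know the PDF report format runs xsltproc to produce a .tex file and then compiles it with pdflatex):

command -v xsltproc pdflatex
cat > /tmp/test.tex <<'EOF'
\documentclass{article}
\begin{document}
PDF toolchain test
\end{document}
EOF
pdflatex -output-directory=/tmp /tmp/test.tex   # should leave a non-empty /tmp/test.pdf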

Full Dockerfile can be seen here.

The logs state the following:

sd   main:MESSAGE:2022-02-13 07h10.24 utc:1348: Vulnerability scan 7aca96b7-535c-4bd9-98b2-1046d86b4703 finished for host 45.33.32.156 in 1209.10 seconds
sd   main:MESSAGE:2022-02-13 07h10.24 utc:1291: Vulnerability scan 7aca96b7-535c-4bd9-98b2-1046d86b4703 finished in 1217 seconds: 1 alive hosts of 2

==> /var/log/gvm/ospd-openvas.log <==
OSPD[228] 2022-02-13 08:10:25,184: INFO: (ospd.ospd) 7aca96b7-535c-4bd9-98b2-1046d86b4703: Host scan finished.
OSPD[228] 2022-02-13 08:10:25,186: INFO: (ospd.ospd) 7aca96b7-535c-4bd9-98b2-1046d86b4703: Scan finished.

==> /var/log/gvm/gvmd.log <==
event task:MESSAGE:2022-02-13 07h10.26 UTC:1058: Status of task Immediate scan of IP scanme.nmap.org (9fa66b81-97e7-49b1-89a9-7e52e5997c13) has changed to Done
md manage:WARNING:2022-02-13 07h10.59 UTC:10998: run_report_format_script: No generate script found at /var/lib/gvm/gvmd/report_formats/8692cf71-d874-46f4-b128-e46f9e494933/c402cc3e-b531-11e1-9163-406186ea4fc5/generate

Which kinda makes sense, since that directory does not exist:

# tree /var/lib/gvm/gvmd/
/var/lib/gvm/gvmd/
└── gnupg
    ├── S.gpg-agent
    ├── S.gpg-agent.browser
    ├── S.gpg-agent.extra
    ├── S.gpg-agent.ssh
    ├── openpgp-revocs.d
    │   └── AA033CDC8692B2E455B6D4C631434C17AE5F7F3F.rev
    ├── private-keys-v1.d
    │   └── 862012B00C63734A8DF2B05A0824A1EC20182BD1.key
    ├── pubring.kbx
    ├── pubring.kbx~
    ├── random_seed
    └── trustdb.gpg

3 directories, 10 files

The only matches I can find are:

root@968e5607deb4:/var/lib/gvm/gvmd# find / -name *generate
/sys/devices/pci0000:00/0000:00:04.0/virtio1/block/vda/integrity/write_generate
/sys/devices/pci0000:00/0000:00:01.1/ata1/host0/target0:0:0/0:0:0:0/block/sr0/integrity/write_generate
/sys/devices/virtual/block/zram0/integrity/write_generate
/usr/local/share/.cache/yarn/v6/npm-escodegen-1.14.3-4e7b81fba61581dc97582ed78cab7f0e8d63f503-integrity/node_modules/escodegen/.bin/esgenerate
/usr/local/share/.cache/yarn/v6/npm-regenerate-1.4.0-4a856ec4b56e4077c557589cae85e7a4c8869a11-integrity/node_modules/regenerate
/usr/local/share/gvm/gvmd/global_schema_formats/02052818-dab6-11df-9be4-002264764cea/generate
/usr/local/share/gvm/gvmd/global_schema_formats/18e826fc-dab6-11df-b913-002264764cea/generate
/usr/local/share/gvm/gvmd/global_schema_formats/787a4a18-dabc-11df-9486-002264764cea/generate
/usr/local/share/gvm/gvmd/global_schema_formats/d6cf255e-947c-11e1-829a-406186ea4fc5/generate

But nothing in /var/lib/gvm/gvmd/report_formats. Can someone reproduce this?

Did you sync the feed, especially the gvmd data? I'm not sure how you installed our stack, so the user in the following command may need adjusting: sudo -u gvm greenbone-feed-sync --type GVMD_DATA
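
For example, with a source install and the default paths that would look roughly like this (a sketch; adjust the user and paths to your setup):

sudo -u gvm greenbone-feed-sync --type GVMD_DATA
# after the sync, the gvmd data objects (including the report formats) should show up under:
ls /var/lib/gvm/data-objects/gvmd/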

There is a daily cron job that does this, and it also runs when the container starts.

# bash -x /etc/cron.daily/greenbone-feed-sync 
+ su --command 'greenbone-nvt-sync >> /var/log/gvm/greenbone-feed-sync.log' gvm
<28>Feb 15 11:43:07 greenbone-nvt-sync: The log facility is not working as expected. All messages will be written to the standard error stream.
<29>Feb 15 11:43:07 greenbone-nvt-sync: No Greenbone Security Feed access key found, falling back to Greenbone Community Feed
<29>Feb 15 11:43:12 greenbone-nvt-sync: Configured NVT rsync feed: rsync://feed.community.greenbone.net:/nvt-feed
+ su --command 'greenbone-feed-sync --type GVMD_DATA >>/var/log/gvm/greenbone-feed-sync.log' gvm
+ su --command 'greenbone-feed-sync --type SCAP >> /var/log/gvm/greenbone-feed-sync.log' gvm
+ su --command 'greenbone-feed-sync --type CERT >> /var/log/gvm/greenbone-feed-sync.log' gvm
+ openvas --update-vt-info

However, the directory remains empty.

# ls /var/lib/gvm/gvmd/
gnupg

The logs do state this, however:

==> /var/log/gvm/gvmd.log <==
md manage:WARNING:2022-02-15 11h45.48 CET:14350: secinfo_feed_version_status: last scap database update later than last feed update
md manage:WARNING:2022-02-15 11h46.01 CET:14431: secinfo_feed_version_status: last scap database update later than last feed update
md manage:WARNING:2022-02-15 11h46.11 CET:14437: secinfo_feed_version_status: last scap database update later than last feed update
md manage:WARNING:2022-02-15 11h46.21 CET:14441: secinfo_feed_version_status: last scap database update later than last feed update
md manage:WARNING:2022-02-15 11h46.31 CET:14444: secinfo_feed_version_status: last scap database update later than last feed update
md manage:WARNING:2022-02-15 11h46.41 CET:14448: secinfo_feed_version_status: last scap database update later than last feed update
md manage:WARNING:2022-02-15 11h46.51 CET:14451: secinfo_feed_version_status: last scap database update later than last feed update

I'm not sure if this can cause issues; I think I've seen it more than once.

Never mind, I restarted the container and now the error is gone:

md manage:   INFO:2022-02-15 12h19.03 CET:617: Updating user OVAL definitions.
md manage:   INFO:2022-02-15 12h19.03 CET:617: Updating CVSS scores and CVE counts for CPEs
md manage:   INFO:2022-02-15 12h20.04 CET:617: Updating CVSS scores for OVAL definitions
md manage:   INFO:2022-02-15 12h20.06 CET:617: Updating placeholder CPEs
md manage:   INFO:2022-02-15 12h20.16 CET:617: Updating Max CVSS for DFN-CERT
md manage:   INFO:2022-02-15 12h20.18 CET:617: Updating DFN-CERT CVSS max succeeded.
md manage:   INFO:2022-02-15 12h20.18 CET:617: Updating Max CVSS for CERT-Bund
md manage:   INFO:2022-02-15 12h20.19 CET:617: Updating CERT-Bund CVSS max succeeded.
md manage:   INFO:2022-02-15 12h20.21 CET:617: update_scap_end: Updating SCAP info succeeded
md manage:   INFO:2022-02-15 12h20.23 CET:860: OSP service has different VT status (version 202202151109) from database (version 202202141109, 92408 VTs). Starting update ...
event alert:MESSAGE:2022-02-15 11h20.58 utc:860: The alert Red alert was triggered (Event: New SecInfo arrived, Condition: Always)
md manage:   INFO:2022-02-15 11h20.58 utc:860: Updating VTs in database ... 4 new VTs, 841 changed VTs
md manage:WARNING:2022-02-15 11h20.59 utc:860: update_nvts_from_vts: SHA-256 hash of the VTs in the database (2235b7302fa0d2ca5e0002039d4dfecd2304c413757298c219279c5ac5d7d1b0) does not match the one from the scanner (d0c2f003262c54e6da3af44f053e3b66639ece567ed466c2e1192d41d50335c2).
md   main:MESSAGE:2022-02-15 11h20.59 utc:860: Rebuilding all NVTs because of a hash value mismatch
event alert:MESSAGE:2022-02-15 11h25.52 utc:860: The alert Red alert was triggered (Event: New SecInfo arrived, Condition: Always)
md manage:   INFO:2022-02-15 11h25.54 utc:860: Updating VTs in database ... 92530 new VTs, 0 changed VTs
md manage:   INFO:2022-02-15 11h25.55 utc:860: Updating VTs in database ... done (92530 VTs).
md   main:MESSAGE:2022-02-15 11h26.28 utc:860: update_nvt_cache_retry: rebuild successful

These are the only report_formats directories I could find:

# find / -name *report_formats* 2>/dev/null
/usr/local/lib/python3.9/dist-packages/gvm/protocols/gmpv208/entities/__pycache__/report_formats.cpython-39.pyc
/usr/local/lib/python3.9/dist-packages/gvm/protocols/gmpv208/entities/report_formats.py
/var/lib/gvm/data-objects/gvmd/20.08/report_formats
/var/lib/gvm/data-objects/gvmd/21.04/report_formats
/var/lib/gvm/data-objects/gvmd/21.10/report_formats
/var/lib/openvas/plugins/report_formats

You may also test out the container with this command:

podman run --name=gvm -it --rm \
    -p 9392:9392 \
    -v gvm-sync:/var/lib/openvas \
    -v gvm-data:/var/lib/gvm \
    -v gvm-postgres:/var/lib/postgresql \
    -v /etc/localtime:/etc/localtime:ro \
    --cap-add=NET_RAW \
    quay.io/keesdejong/gvm:latest

You can then check the logs with podman logs -f gvm and log in interactively with podman exec -it gvm bash.

You can probably replace podman with docker, but I'm not sure.

The dashboard does seem to know about these report formats, though.

I have seen such messages when a manager (gvmd) database was at a database schema version < 20.08, a migration to >= 20.08 was done, but the "old" report formats at $prefix/var/lib/gvm/gvmd/ were missing, so they could not be migrated to the new location $prefix/var/lib/gvm/data-objects/gvmd/.
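
In other words (a rough sketch, assuming a source install with the default paths; if I remember correctly, gvmd --version also reports the Manager DB revision), you can compare the two locations:

gvmd --version                                            # version plus the manager DB revision
ls /var/lib/gvm/gvmd/report_formats/                      # "old" pre-20.08 location
ls /var/lib/gvm/data-objects/gvmd/21.04/report_formats/   # new feed-synced location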

The following existing topic has some more background info:


Hi, thanks for your reply and suggestion. I do set the feed import owner here, and I have also already removed the database as a whole, so there shouldn't be any schema conflicts.
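
For completeness, this is what I mean by setting the import owner (a sketch based on the Greenbone source build docs; the long UUID is the Feed Import Owner setting, and <admin-uuid> stands for the UUID of my admin user):

gvmd --get-users --verbose   # lists the users together with their UUIDs
gvmd --modify-setting 78eceaec-3385-11ea-b237-28d24461215b --value <admin-uuid>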

So is it safe to say that it all works fine for others, with the same version numbers? Do you perhaps use a container? Then I can compare what people do differently.

Can no one reproduce this? That would be odd, since I compile from source and I've already done that from scratch in my container many times.


Apparently I have to create the folder myself? I also had to do that for some log files already. Is this expected of the user? Shouldn't the software be creating these directories?
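
Creating it by hand would be something like this (a sketch; the gvm user and group are assumptions from my setup):

mkdir -p /var/lib/gvm/gvmd/report_formats
chown gvm:gvm /var/lib/gvm/gvmd/report_formats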

Still run into an issue though:

==> /var/log/gvm/gvmd.log <==
md manage:WARNING:2022-02-19 18h17.01 UTC:27383: run_report_format_script: system failed with ret 256, 1, /var/lib/gvm/gvmd/report_formats/f3e54c46-1930-4608-be32-8715e42aee42/c402cc3e-b531-11e1-9163-406186ea4fc5/generate /tmp/gvmd_nXAmyN/report.xml '<files><basedir>/tmp/gvmd_nXAmyN</basedir></files>' > /tmp/gvmd_nXAmyN/c402cc3e-b531-11e1-9163-406186ea4fc5-9gTqXN.pdf 2> /dev/null
^C
root@cd3ffdec0b4e:/# ls /var/lib/gvm/gvmd/report_formats/
f3e54c46-1930-4608-be32-8715e42aee42

But I'll experiment more and try to find the magic requirements by trial and error.
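
One way to see what actually fails (a sketch; gvmd's /tmp/gvmd_* working directory is temporary and already gone, so the report.xml path below is only illustrative) is to run the generate script by hand without the 2> /dev/null, so the error output stays visible:

cd /var/lib/gvm/gvmd/report_formats/f3e54c46-1930-4608-be32-8715e42aee42/c402cc3e-b531-11e1-9163-406186ea4fc5
./generate /path/to/report.xml '<files><basedir>/path/to</basedir></files>' > /tmp/report.pdf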

No one? I'm following the installation instructions to the letter. If someone can confirm this is really the issue, then I'll raise an issue on GitHub.

Hi @AquaL1te,

It doesn’t look like anyone can recreate it so far, but I’m bumping the thread.


Strange. For those who can't reproduce it: how do you install it? Maybe an RPM/DEB package creates these directories, or maybe the Docker container does. My container does exactly what the docs say I should do, so I find it a bit strange that no one can reproduce this. I would like to check what I'm missing.

I’ll try the recently released version soon, once I’ve compiled all components.


Oops, sorry I wasn't super clear in that last post. I'm not sure how common this specific setup is (and within that setup, it doesn't look like anyone has recreated it so far). My own install is from Greenbone GitHub source, not in a container.

It might be best to check with the maintainer of the repository https://github.com/kees-closed/gvm, since there might be differences in their build that are not apparent when installing from the Greenbone docs, or try it using all Greenbone sources.

I am the maintainer :nerd_face: The steps in my Dockerfile and entrypoint.sh script are taken directly from the README.md docs on GitHub. I guess I'll just have to check some DEB source packages myself and see what other container setups do. Once I know what is done differently, I'll report back here.

Oh my gosh, I feel kind of silly. :rofl:

Yes, please let us know what you find! :smiley:

I've built the latest version, and with it exporting PDFs and such seems to work fine: https://github.com/kees-closed/gvm/releases/tag/21.4.4
