Custom VTs on Community Containers

Hi all,

Thought I’d post here, as it’s more likely that people here develop custom VTs and also use the community containers than the other way around.

I’ve read through some previous forum posts, and the recommendation for custom VTs seems to be to place them in ‘plugins/public/’. However, when doing that with the community containers, the whole folder appears to be overwritten on every vulnerability-tests update.
I assume that happens when the vulnerability-tests image runs and empties the MOUNT_PATH directory:

rm -rf "${MOUNT_PATH}/"*

Does anyone have any advice on how to implement my own VTs alongside these containers?

Thank you.

Hi @MDee,

I don’t have a /var/lib/openvas/plugins/public/ folder in the vt_data_vol volume of the Greenbone Docker containers. Did you create that folder? As they are, the containers do not persist changes to the vt_data_vol volume even after simply restarting them, let alone after a feed sync.

However, without much effort, you can identify the folder on your local host where the vt_data_vol volume is stored and use a simple command to copy your public folder into that location. The catch is that you will need to run that copy each time you restart the containers, including when you do a feed sync.

$ docker volume inspect greenbone-community-edition_vt_data_vol
[
    {
        "CreatedAt": "2023-09-05T08:28:34-04:00",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "greenbone-community-edition",
            "com.docker.compose.version": "1.29.2",
            "com.docker.compose.volume": "vt_data_vol"
        },
        "Mountpoint": "/var/lib/docker/volumes/greenbone-community-edition_vt_data_vol/_data",
        "Name": "greenbone-community-edition_vt_data_vol",
        "Options": null,
        "Scope": "local"
    }
]

The mountpoint is /var/lib/docker/volumes/greenbone-community-edition_vt_data_vol/_data.
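For example, a minimal sketch of that copy, assuming your custom VTs are in a local ./public folder (adjust the source path to your setup):

# Copy custom VTs into the volume's backing directory on the host.
# This needs re-running after every container restart / feed sync, because the
# feed container wipes the mount before repopulating it.
sudo cp -r ./public "/var/lib/docker/volumes/greenbone-community-edition_vt_data_vol/_data/"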

Or, you could modify the openvas-scanner's prod.Dockerfile to automatically import your custom VTs and use that file instead of the one referenced in the default docker-compose.yml.

Here is a post in another discussion about making the containers extensible which should work for you: Docker-compose: set GSA listen address and GVM admin password - #2 by rippledj

Also, I’m curious, where do you see the rm -rf "${MOUNT_PATH}/"* command?


Thanks for the response!

I was hoping that someone had done this before and had a method for developing/testing with the community containers. Maybe someone will still pop in with a tested solution 🙂

I’ll run my responses in reverse a little…

The rm -rf "${MOUNT_PATH}/"* command is in the contents of the VT container’s /bin/init.sh.
Full contents:

#!/bin/sh
set -e

rm -f "${STATE_FILE}"

license_file=${STORAGE_PATH}/LICENSE

if [ -e "${license_file}" ]; then
    cat "${license_file}"
fi

echo -n -e "\nCopying vulnerability tests data... "

if [ -d "${MOUNT_PATH}" ]; then
    rm -rf "${MOUNT_PATH}/"*
    cp -r "${STORAGE_PATH}/"* "${MOUNT_PATH}"

    state_dir=$(dirname ${STATE_FILE})
    mkdir -p "${state_dir}"
    touch "${STATE_FILE}"

    echo "files copied."
else
    echo "nothing to do."
fi

if [ -n "${KEEP_ALIVE}" ]; then
    sleep infinity
fi

The recommendation to put the VTs in a plugins/public folder came from the post below which, in hindsight, is obviously not related to the community containers:

Also in hindsight, that link suggests putting them in a folder called ‘private’, not ‘public’. Private is what I actually tried, despite what I wrote above.

I had the same thought as you about dropping files into the mountpoint, and I see the same challenge of re-running the copy each time the VTs are downloaded.

The options I can see right now are:

  1. Create another container that runs a copy step after the VTs are downloaded (a rough sketch is below this list). The snag is that the VT container seems to signal completion via a state-file line in its init.sh, which I assume kicks off the import into ospd-openvas / pg:
touch "${STATE_FILE}"
  2. The most elegant solution I can come up with (without resorting to a hack using wrapper scripts, inotify, or some other wizardry) is to submit a pull request to the VTs container that allows for a custom VTs mount.
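Roughly what I mean by option 1, as a sketch only. It reuses the MOUNT_PATH and STATE_FILE variables from the feed container's init.sh above, which a sidecar would need passed in (with vt_data_vol mounted at MOUNT_PATH and the custom VTs mounted at a path like /custom-vts), and it still doesn't solve the timing problem with the state file:

#!/bin/sh
# Hypothetical sidecar entrypoint: wait until the feed container has finished
# populating the mount (signalled by the state file), then add the custom VTs.
# MOUNT_PATH, STATE_FILE and /custom-vts are assumptions for illustration.
while [ ! -e "${STATE_FILE}" ]; do
    sleep 5
done

mkdir -p "${MOUNT_PATH}/private"
cp -r /custom-vts/. "${MOUNT_PATH}/private/"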

Regarding option 2, the VTs container includes this label:

"org.opencontainers.image.url": "https://github.com/greenbone/vulnerability-tests",

However, that link returns a 404.

I suspect that, since I can’t locate it on GitHub, the vulnerability-tests Dockerfile and other components are now hidden away with the community feed.

So, if a container developer comes across this thread and you see value in baking something like this into the VTs container, can I suggest the following:
Change:

rm -rf "${MOUNT_PATH}/"*

to:

find "${MOUNT_PATH}" ! -path "${MOUNT_PATH}/private" -exec rm -rf {} +

That is assuming the link I posted about using the /private folder for custom VTs is consistent with Greenbone’s intent.

I’ve managed to get custom vulns into the VTs container by building an image FROM the VTs container:

FROM greenbone/vulnerability-tests:latest

# Add vulns in from local to the container
COPY vulns/ /var/lib/openvas/22.04/vt-data/nasl/private/
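For reference, I build that locally and point the vulnerability-tests service in docker-compose.yml at the resulting tag instead of greenbone/vulnerability-tests:latest (the tag name below is arbitrary):

# Build the wrapper image; then swap it in as the image for the
# vulnerability-tests service in the compose file.
docker build -t custom-vulnerability-tests:latest .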

But from here, I can see that the new test vuln isn’t making it into my NVT list, and when I make changes to the custom vuln, I get the following message:

greenbone-community-edition-ospd-openvas-1         | OSPD[7] 2023-09-07 04:53:07,544: INFO: (ospd_openvas.daemon) VTs were up to date. Feed version is 202309060551.

So, do I need to manually increment the feed version so it updates?
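My guess is that this version comes from the PLUGIN_SET value in plugin_feed_info.inc at the root of the VT data, though that part is an assumption on my end:

# Where I assume ospd-openvas gets the feed version from; path as seen from
# the scanner side of the vt_data_vol mount. Unverified.
grep PLUGIN_SET /var/lib/openvas/plugins/plugin_feed_info.inc
# PLUGIN_SET = "202309060551";

If that’s right, bumping PLUGIN_SET in the wrapper image might be one way to force it, but I haven’t tested that.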

There are some references around the forum about needing to disable signature checking for custom vulns. Is that still a thing? If so, can anyone advise how you go about doing that with the community containers?
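For context, the scanner setting I’ve seen referenced elsewhere is nasl_no_signature_check; on a non-container install it goes into the scanner’s openvas.conf, but I haven’t confirmed how or where to set it for the containers:

# My understanding (unverified for the containers): unsigned custom scripts
# are only loaded when NASL signature checking is disabled in the scanner config.
echo "nasl_no_signature_check = yes" >> /etc/openvas/openvas.conf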

I’ve managed to get VTs into my community containers using the wrapper container above.
I’ve still got another issue with getting the VTs to update. I’ve raised that on the VTs page, as it’s not a container-specific discussion, and I’ve found a way to achieve what I asked in this thread.

New thread:
