Updating feeds in Portainer

I have built out my Greenbone Community Edition stack in Portainer and it works great. I can see all the containers, log into the web interface and run scans. All that data is persistent and functioning as it should. The only thing I can’t figure out is how to get the feeds updated. I have tried deleting the volumes and letting the stack re-create them, but that doesn’t seem to actually update any of the feeds. I have also tried recreating the containers and pulling new versions, and still no feed updates. Is there something I would need to add to the .yml file to get the feeds to update? I have run this from a dedicated VM running Docker Compose and can use those commands to update feeds, but I’m not sure where I can use them to update the feeds in the Portainer environment.

Thanks for your post @Disrupt2471 . Unfortunately, for me, this description doesn’t tell the whole story of what you have done, and those details are critical to solving the problem. Others, however, may intuitively know what steps you have taken.

So, have you created your own container registry with your own repositories built from source code, or have you simply ported Greenbone’s publicly provided containers to Portainer? Since you used the words “my Greenbone Community Edition stack”, I interpret that to mean you have built your own registry.

Otherwise, following our Greenbone Community Containers feed update workflow, you can see that the process is rather simple:

docker compose -f $DOWNLOAD_DIR/docker-compose.yml -p greenbone-community-edition pull notus-data vulnerability-tests scap-data dfn-cert-data cert-bund-data report-formats data-objects
docker compose -f $DOWNLOAD_DIR/docker-compose.yml -p greenbone-community-edition up -d notus-data vulnerability-tests scap-data dfn-cert-data cert-bund-data report-formats data-objects

You can re-pull only the container images that hold the feed data. Once the updated container images have been pulled, I believe the gvmd container will recognize the changes and import the new data into the PostgreSQL database. If that isn’t happening, maybe gvmd is unable to see the newly updated data volumes. :thinking:

If this doesn’t help, maybe providing more details will provide clarity, or someone else will intuitively understand your issue better.

Good day and thank you for responding. To build in Portainer, you can either create a stack, which adds all the containers to an “application stack”, or build off of pre-built containers, which basically just get imported. Please see below a screenshot of where you would add your Compose file text. Once that is in place, you can create the stack, and based on what is in the Compose file text it will build out the containers.

It wouldn’t let me do a second embedded image so here is the continuation of the first post.

Once the containers are built, they look something like this. It just goes out and pulls them from the Greenbone containers. If needed, I can provide the .yml file that it was built from.


I am not sure why so many of them are stopped, but if you start them, they run for a minute or so and then stop with exit code 0, like they are supposed to?

It’s very interesting to hear about your use case! :slight_smile: I doubt that I will personally have time to contribute much to your issue, since I’m not familiar with Portainer. However, for anyone who is going to pitch in here, I guess that the .yml file will be a prerequisite.

I wonder if it would be the same type of situation if someone built it out in Docker Desktop, since that’s similar to how Portainer would run it. Here is my .yml file below. Just paste it into the web editor on the Portainer stack creation page and it will create the containers. The only thing I changed was the IP for the GSA web interface, which I omitted from the file below.

services:
  vulnerability-tests:
    image: greenbone/vulnerability-tests
    environment:
      STORAGE_PATH: /var/lib/openvas/22.04/vt-data/nasl
    volumes:
      - vt_data_vol:/mnt

  notus-data:
    image: greenbone/notus-data
    volumes:
      - notus_data_vol:/mnt

  scap-data:
    image: greenbone/scap-data
    volumes:
      - scap_data_vol:/mnt

  cert-bund-data:
    image: greenbone/cert-bund-data
    volumes:
      - cert_data_vol:/mnt

  dfn-cert-data:
    image: greenbone/dfn-cert-data
    volumes:
      - cert_data_vol:/mnt
    depends_on:
      - cert-bund-data

  data-objects:
    image: greenbone/data-objects
    volumes:
      - data_objects_vol:/mnt

  report-formats:
    image: greenbone/report-formats
    volumes:
      - data_objects_vol:/mnt
    depends_on:
      - data-objects

  gpg-data:
    image: greenbone/gpg-data
    volumes:
      - gpg_data_vol:/mnt

  redis-server:
    image: greenbone/redis-server
    restart: on-failure
    volumes:
      - redis_socket_vol:/run/redis/

  pg-gvm:
    image: greenbone/pg-gvm:stable
    restart: on-failure
    volumes:
      - psql_data_vol:/var/lib/postgresql
      - psql_socket_vol:/var/run/postgresql

  gvmd:
    image: greenbone/gvmd:stable
    restart: on-failure
    volumes:
      - gvmd_data_vol:/var/lib/gvm
      - scap_data_vol:/var/lib/gvm/scap-data/
      - cert_data_vol:/var/lib/gvm/cert-data
      - data_objects_vol:/var/lib/gvm/data-objects/gvmd
      - vt_data_vol:/var/lib/openvas/plugins
      - psql_data_vol:/var/lib/postgresql
      - gvmd_socket_vol:/run/gvmd
      - ospd_openvas_socket_vol:/run/ospd
      - psql_socket_vol:/var/run/postgresql
    depends_on:
      pg-gvm:
        condition: service_started
      scap-data:
        condition: service_completed_successfully
      cert-bund-data:
        condition: service_completed_successfully
      dfn-cert-data:
        condition: service_completed_successfully
      data-objects:
        condition: service_completed_successfully
      report-formats:
        condition: service_completed_successfully

  gsa:
    image: greenbone/gsa:stable
    restart: on-failure
    ports:
      - x.x.x.x:9392:80
    volumes:
      - gvmd_socket_vol:/run/gvmd
    depends_on:
      - gvmd

  # Sets log level of openvas to the set LOG_LEVEL within the env
  # and changes log output to /var/log/openvas instead of /var/log/gvm
  # to reduce likelihood of unwanted log interferences
  configure-openvas:
    image: greenbone/openvas-scanner:stable
    volumes:
      - openvas_data_vol:/mnt
      - openvas_log_data_vol:/var/log/openvas
    command:
      - /bin/sh
      - -c
      - |
        printf "table_driven_lsc = yes\nopenvasd_server = http://openvasd:80\n" > /mnt/openvas.conf
        sed "s/127/128/" /etc/openvas/openvas_log.conf | sed 's/gvm/openvas/' > /mnt/openvas_log.conf
        chmod 644 /mnt/openvas.conf
        chmod 644 /mnt/openvas_log.conf
        touch /var/log/openvas/openvas.log
        chmod 666 /var/log/openvas/openvas.log

  # shows logs of openvas
  openvas:
    image: greenbone/openvas-scanner:stable
    restart: on-failure
    volumes:
      - openvas_data_vol:/etc/openvas
      - openvas_log_data_vol:/var/log/openvas
    command:
      - /bin/sh
      - -c
      - |
        cat /etc/openvas/openvas.conf
        tail -f /var/log/openvas/openvas.log
    depends_on:
      configure-openvas:
        condition: service_completed_successfully

  openvasd:
    image: greenbone/openvas-scanner:stable
    restart: on-failure
    environment:
      # service_notus is set to disable everything but notus,
      # if you want to utilize openvasd directly remove OPENVASD_MODE
      OPENVASD_MODE: service_notus
      GNUPGHOME: /etc/openvas/gnupg
      LISTENING: 0.0.0.0:80
    volumes:
      - openvas_data_vol:/etc/openvas
      - openvas_log_data_vol:/var/log/openvas
      - gpg_data_vol:/etc/openvas/gnupg
      - notus_data_vol:/var/lib/notus
    # enable port forwarding when you want to use the http api from your host machine
    # ports:
    #   - 127.0.0.1:3000:80
    depends_on:
      vulnerability-tests:
        condition: service_completed_successfully
      configure-openvas:
        condition: service_completed_successfully
      gpg-data:
        condition: service_completed_successfully
    networks:
      default:
        aliases:
          - openvasd

  ospd-openvas:
    image: greenbone/ospd-openvas:stable
    restart: on-failure
    hostname: ospd-openvas.local
    cap_add:
      - NET_ADMIN # for capturing packages in promiscuous mode
      - NET_RAW # for raw sockets e.g. used for the boreas alive detection
    security_opt:
      - seccomp=unconfined
      - apparmor=unconfined
    command:
      [
        "ospd-openvas",
        "-f",
        "--config",
        "/etc/gvm/ospd-openvas.conf",
        "--notus-feed-dir",
        "/var/lib/notus/advisories",
        "-m",
        "666"
      ]
    volumes:
      - gpg_data_vol:/etc/openvas/gnupg
      - vt_data_vol:/var/lib/openvas/plugins
      - notus_data_vol:/var/lib/notus
      - ospd_openvas_socket_vol:/run/ospd
      - redis_socket_vol:/run/redis/
      - openvas_data_vol:/etc/openvas/
      - openvas_log_data_vol:/var/log/openvas
    depends_on:
      redis-server:
        condition: service_started
      gpg-data:
        condition: service_completed_successfully
      vulnerability-tests:
        condition: service_completed_successfully
      configure-openvas:
        condition: service_completed_successfully

  gvm-tools:
    image: greenbone/gvm-tools
    volumes:
      - gvmd_socket_vol:/run/gvmd
      - ospd_openvas_socket_vol:/run/ospd
    depends_on:
      - gvmd
      - ospd-openvas

volumes:
  gpg_data_vol:
  scap_data_vol:
  cert_data_vol:
  data_objects_vol:
  gvmd_data_vol:
  psql_data_vol:
  vt_data_vol:
  notus_data_vol:
  psql_socket_vol:
  gvmd_socket_vol:
  ospd_openvas_socket_vol:
  redis_socket_vol:
  openvas_data_vol:
  openvas_log_data_vol:

Following this. I’m having the same problem: I couldn’t “force” an update of the feeds.

My implementation is practically the same as Disrupt2471’s:

I tried following the workflow to update the feeds using the CLI commands:

docker compose -f $DOWNLOAD_DIR/docker-compose.yml -p greenbone-community-edition pull notus-data vulnerability-tests scap-data dfn-cert-data cert-bund-data report-formats data-objects
docker compose -f $DOWNLOAD_DIR/docker-compose.yml -p greenbone-community-edition up -d notus-data vulnerability-tests scap-data dfn-cert-data cert-bund-data report-formats data-objects

but this ended up creating a new “unmanageable” stack in Portainer.
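A hedged guess at why that happened: Portainer appears to identify a Compose stack by its project name, so running the commands with `-p greenbone-community-edition` on the CLI creates a separate project unless that value matches the stack name Portainer assigned. Reusing the exact name shown in the Portainer UI should make the commands act on the existing containers instead of spawning a parallel stack. A sketch, where both the stack name and the file path are assumptions to replace with your own values:

```shell
# Hypothetical values: set STACK_NAME to the name shown under "Stacks"
# in the Portainer UI, and COMPOSE_FILE to the path of the compose file
# Portainer deployed the stack from.
STACK_NAME=greenbone
COMPOSE_FILE=/path/to/docker-compose.yml

# Re-pull and re-run only the feed data containers under the SAME
# project name, so the existing volumes and containers are reused.
docker compose -f "$COMPOSE_FILE" -p "$STACK_NAME" pull \
  notus-data vulnerability-tests scap-data dfn-cert-data cert-bund-data report-formats data-objects
docker compose -f "$COMPOSE_FILE" -p "$STACK_NAME" up -d \
  notus-data vulnerability-tests scap-data dfn-cert-data cert-bund-data report-formats data-objects
```

If the project name matches, `docker compose` should report the feed containers as recreated rather than creating a second set.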

Exchanging ideas here. :slight_smile:

I’m noticing that you’re still using the greenbone registry (which doesn’t receive new feeds anymore; all info is in the announcement thread) instead of the new registry.community.greenbone.net/community registry. I’d advise that you check out the updated compose file in the documentation.

Also, I’ve had the issue of feed updates not being picked up on a few occasions, but restarting ospd-openvas as per the troubleshooting documentation did the trick for me every time: docker compose -f $DOWNLOAD_DIR/docker-compose.yml -p greenbone-community-edition restart ospd-openvas

Feel free to give it a shot and report back :slight_smile:


Thanks for pointing that out. Can you confirm I have the syntax correct to pull the new containers with the code below?

vulnerability-tests:
    image: registry.community.greenbone.net/community/vulnerability-tests
    environment:
        STORAGE_PATH: /var/lib/openvas/22.04/vt-data/nasl
    volumes:
        - vt_data_vol:/mnt

Is that how it should look for the container images? Looking to get it correct first before downing the entire system.

Syntax looks good in the snippet. Of course this is only an excerpt, and all entries should be updated to the new registry accordingly. If you want to avoid typos, you can download the current Docker Compose file here, as @M3iSt3RSL4D3 already mentioned.

If you’re afraid of downing the system, download the new compose file into a separate file and check the diff against the currently running file. If only the image references have changed to the new registry, you should be good to go. If you want to test the new compose file without touching your current project, you can also change the -p switch in the commands from the documentation to a different project name, so Docker will create a separate project. :slight_smile:
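If you would rather not hand-edit every image line, the substitution is mechanical enough to script. A minimal sketch, demonstrated on a single sample line so nothing is modified here; the old image prefix is taken from the compose file earlier in the thread and the new registry from the documentation. To apply it for real, run the same sed expression against a backed-up copy of your compose file:

```shell
# Map an old Docker Hub image reference to the new community registry.
# Shown on one sample line; apply the same expression to a backup copy
# of your real compose file, then diff it before deploying.
echo 'image: greenbone/vulnerability-tests' \
  | sed 's|image: greenbone/|image: registry.community.greenbone.net/community/|'
# prints: image: registry.community.greenbone.net/community/vulnerability-tests
```

Note that this only rewrites the plain `greenbone/` images; entries that already carry a tag (e.g. `:stable`) keep their tag, since the expression touches only the prefix.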


I updated all of my images to the new registry string and it looks like all is well. The UI is different now, but that makes sense since it’s using the new containers. Here is my current feed status.

Thanks @TreAtW for your help!
