gsad main:CRITICAL:2024-05-02 21h18.40 utc:13: main: Could not load private SSL key from /var/lib/gvm/private/CA/serverkey.pem: Failed to open file “/var/lib/gvm/private/CA/serverkey.pem”: Permission denied
Given that the source file is /root/.ssl/privkey.pem, it looks like Docker may not have permission to copy it into the container, or else it is being copied with permissions that GSA cannot read. You can get a shell on the container and inspect the /var/lib/gvm/private/CA/serverkey.pem file to determine which it is.
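For example (assuming the default project name, so the GSA container is named greenbone-community-edition-gsa-1, and assuming the image ships the usual shell utilities), something along these lines should show the ownership and mode the key ends up with inside the container:

docker exec -it greenbone-community-edition-gsa-1 ls -l /var/lib/gvm/private/CA/serverkey.pem
docker exec -it greenbone-community-edition-gsa-1 sh

The first command lists the file as the container sees it; the second drops you into an interactive shell if you want to poke around further.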
In fact, here are the workflow instructions for that process. The permission issue is mentioned there. Last I checked these instructions were effective. If you have trouble with these instructions linked above, feel free to relay the issue you are having.
Those are the instructions we followed to get as far as we have. The key and cert are owned by root, which is the user running the containers. When we start the containers, we get the error in the original post. We did chmod the privkey file to 644 to see if it made a difference, but it did not. We still see the permissions error when we restart the containers. I am not seeing the reference to permissions in the instructions you mention - I must be reading right over it.
I just reviewed the docs on how to enable SSL/TLS for GSAD in the Docker containers. It seems there isn’t an instruction to change the certificate file permissions after they are generated and before the containers are started. I can take the blame for that misstep.
There are really two main options for setting the permissions of the certificate files before they are loaded into the container via the docker-compose.yml file.
The first option simply gives the certs global read permissions. GSAD will pick them up fine this way, but the permissions are more permissive than they need to be.
sudo chmod 444 *.pem
The second option is to give the cert files minimal permissions and set their ownership to the user ID that gsad has inside the greenbone-community-edition-gsa-1 container.
sudo chown 1001:1001 *.pem
sudo chmod 400 *.pem
GSAD will be able to find the certificates once these changes have been made.
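If you would rather confirm the UID/GID than trust that it is 1001, and assuming the container image includes the standard id utility, you can check it from the running container:

docker exec greenbone-community-edition-gsa-1 id gsad

The numeric uid and gid in the output are what the *.pem files on the host need to be owned by.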
Also, I noted that when testing various changes to these file permissions, the containers had to be completely stopped and recreated for the changes to take effect. I have no idea why that is, but I could not simply issue docker compose -f docker-compose-ssl.yml -p greenbone-community-edition up -d again; the containers must be taken down and brought back up, i.e.:
docker compose -f docker-compose-ssl.yml -p greenbone-community-edition down
docker compose -f docker-compose-ssl.yml -p greenbone-community-edition up -d
Well, the Monday-morning dust took until Thursday to settle. The first option you offered was something we had already done repeatedly. We tried the second option and now the web page simply times out. Better, but not quite there.
The only difference we can see between the instructions plus your comments above and what we’ve done is that we sourced our certificate through Let’s Encrypt rather than the openssl command in the instructions. Is that possibly a problem?
In the gsa container, the logs no longer indicate any error.
In the docker-compose.yml gsa section, we have the following:
Your logs will indicate whether that configuration is successful. Unfortunately, I can’t see them from here.
Please note I have modified my workaround above to also set the group ID for the certificate files.
I would suggest setting up the containers according to the official instructions and workflows (including the modifications I suggested above) with a self-signed certificate first, to verify that everything works that way before switching to another certificate.
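For a quick self-signed test certificate, something roughly like the following works (this is a sketch, not necessarily the exact command from the documentation; the output filenames serverkey.pem and servercert.pem are placeholders for whatever your docker-compose.yml actually mounts):

sudo openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes -keyout serverkey.pem -out servercert.pem -subj "/CN=localhost"

Then apply the ownership and permission changes above before bringing the containers up.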
Fair. The logs for gsa are not displaying an error, only that GSAD is starting. They are not displaying much of anything, though. Should I be looking in a different container?
EDIT: This might be a problem. When I assigned the group by number as indicated in your change above, the system appears to be translating 1001 to the name of one of my users…
If you already have a user on the local system with that user ID, Linux will simply display that existing username as the file owner. Also, if the gsad user does not exist on the underlying Linux system that the containers run on, then you cannot assign the gsad user and group to the files by name.
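You can check which name (if any) your host maps to that ID, and the raw numeric ownership of the files, with standard tools, for example:

getent passwd 1001
getent group 1001
ls -ln *.pem

ls -ln prints the numeric uid/gid instead of names, which is what the container actually compares against; the name shown by a plain ls -l is only the host’s translation of that number.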
In the end, it would be easier if the certificate files were generated and placed in the correct location automatically each time the GSAD container is started.
Thank you. Ever feel like you were born to find the gaps in documentation? I support another system where, every time they released an update, I spent weeks editing their documentation because it was unusable when it came time to actually perform the upgrades. Thank heavens this documentation is better than that.
New questions: Isn’t 1001 often the first ID used when a new user is added to a system? Why hasn’t anyone else reported running into this issue? Also, we set the chmod for the certificates to 644, so even if it was the wrong owner, anyone could read it. That didn’t appear to change anything.
Correcting myself - there is no user currently on our server with uid 1001. The system assigned uid 2001 to the first user we added. But it gave that user gid 1001.
Did the documentation instruct us to create gsad as a user on the docker host/base system? If it does, I am still not seeing it.
Your last comment about it being easier if the certificates were automatically generated - that’s not currently an option, correct? To ask a different way, are you saying that’s what I should be doing or are you saying that ideally that would be the approach the system would take?
Thank you for your patience and assistance, but we have wasted an inordinate amount of time on this aspect of the system. It was working perfectly otherwise, so we are going to cut losses and deploy a proxy in front of the system and move on to other pressing issues.
An update for anyone that finds this issue in the future. In the process of securing the server originally, we copied the firewall configuration from another (non-Greenbone) server that was being managed similarly and using the same proxy.
It was not until we were troubleshooting why the proxy would not work even though we had a matching configuration right in front of us that we realized that the firewall configuration had an interface from the old server rather than the value we needed for the new server.
So, if I understand the expert correctly: while Docker was able to listen on all zones, everything worked, but once the firewall was deployed and restricted Docker to the interfaces it was supposed to be using, the interface it actually needed was not available.
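If anyone else hits this: assuming the firewall in question is firewalld (the mention of zones suggests it, but that is a guess on my part), these commands show which interfaces are bound to which zones, which is where our mismatch would have been visible (replace public with whichever zone your configuration uses):

sudo firewall-cmd --get-active-zones
sudo firewall-cmd --zone=public --list-interfaces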