VM Filesystem is getting corrupted post system restart

Hi Team,

I've installed the Greenbone open-source tool OpenVAS, which is quite good, but I am facing an issue.

I have done two types of installation:

  1. On VM (base image Ubuntu 22.04)
  2. On docker container inside a VM

In both setups, restarting the VM (to test the case where the machine goes down) leaves us unable to start the Greenbone services again. It also makes the /etc/hosts file read-only.
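As a first check, it helps to distinguish a root filesystem that was remounted read-only after the corruption from a file locked with an immutable attribute. A minimal sketch; the temp-file default is only so the check runs without root, and on the affected VM you would point FILE at /etc/hosts:

```shell
#!/bin/sh
# Sketch: is the file itself locked, or is the whole filesystem read-only?
# FILE defaults to a temp file so the check runs anywhere without root;
# on the affected VM you would run it as: FILE=/etc/hosts sh check.sh
FILE="${FILE:-$(mktemp)}"

if [ -w "$FILE" ]; then
    echo "$FILE is writable"
else
    echo "$FILE is not writable"
    # Case 1: the root filesystem was remounted read-only after an error
    # (the errors=remount-ro mount option in /etc/fstab does this); then:
    echo "  try: sudo mount -o remount,rw /"
    # Case 2: only this file carries the ext4 immutable attribute; then:
    echo "  try: lsattr $FILE && sudo chattr -i $FILE"
fi
```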

```
sudo fsck -fy /dev/vda1

sudo: unable to resolve host openvas-docker: Temporary failure in name resolution
fsck from util-linux 2.37.2
e2fsck 1.46.5 (30-Dec-2021)
cloudimg-rootfs: recovering journal
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Block bitmap differences:  +(1064883--1064892) +(1064897--1064900) +(1064904--1064908) -(5268398--5268399) +5268400 -(5268410--5268415) -(5272363--5272364)
Fix? yes

Free blocks count wrong for group #32 (9386, counted=9367).
Fix? yes

Free blocks count wrong for group #160 (11476, counted=11485).
Fix? yes

Free blocks count wrong (16238212, counted=16236154).
Fix? yes

Inode bitmap differences:  -517673
Fix? yes

Free inodes count wrong for group #32 (20, counted=1).
Fix? yes

Directories count wrong for group #32 (2206, counted=2225).
Fix? yes

Free inodes count wrong for group #162 (12540, counted=12549).
Fix? yes

Directories count wrong for group #162 (102, counted=93).
Fix? yes

Free inodes count wrong (9760448, counted=9760437).
Fix? yes

cloudimg-rootfs: ***** FILE SYSTEM WAS MODIFIED *****
cloudimg-rootfs: ***** REBOOT SYSTEM *****
cloudimg-rootfs: 561483/10321920 files (0.1% non-contiguous), 4706945/20943099 blocks
```

```
systemctl status docker

× docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since ; 25min ago
TriggeredBy: × docker.socket
    Process: 1013 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
   Main PID: 1013 (code=exited, status=1/FAILURE)
        CPU: 44ms

openvas-docker systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
openvas-docker systemd[1]: Stopped Docker Application Container Engine.
openvas-docker systemd[1]: docker.service: Start request repeated too quickly.
openvas-docker systemd[1]: docker.service: Failed with result 'exit-code'.
openvas-docker systemd[1]: Failed to start Docker Application Container Engine.
```

journalctl -xe:

```
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ An ExecStart= process belonging to unit apt-daily.service has exited.
░░ The process' exit code is 'exited' and its exit status is 2.
openvas-docker systemd[1]: apt-daily.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ The unit apt-daily.service has entered the 'failed' state with result 'exit-code'.
openvas-docker systemd[1]: Failed to start Daily apt download activities.
░░ Subject: A start job for unit apt-daily.service has failed
░░ Defined-By: systemd
░░ A start job for unit apt-daily.service has finished with a failure.
░░ The job identifier is 1392 and the job result is failed.
```
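For the Docker failure above, note that "Start request repeated too quickly" only means systemd hit its restart rate limit; the actual dockerd error is earlier in the journal. A hedged sketch of the usual diagnosis steps (standard systemd/Docker commands; with DRY_RUN=1, the default here, they are only printed, not executed):

```shell
#!/bin/sh
# Sketch: find out why dockerd itself exits with status=1/FAILURE.
# DRY_RUN=1 (default) prints each command instead of running it.
DRY_RUN="${DRY_RUN:-1}"
run() {
    if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

# 1. Read dockerd's own error message from the current boot's journal.
run journalctl -u docker.service -b --no-pager

# 2. Clear systemd's "repeated too quickly" rate-limit state.
run sudo systemctl reset-failed docker.service

# 3. Run the daemon in the foreground to see the failure directly.
run sudo dockerd --debug

# 4. Once the underlying cause is fixed, start it via systemd again.
run sudo systemctl restart docker.service
```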

From looking at the output you posted, I guess you are not using the official Greenbone Docker containers; to my knowledge they do not use an openvas-docker systemd file. My suggestion is to use either:

Your Linux operations practices are broken. You need to shut down the services and unmount the volumes before turning the machines off. That has NOTHING to do with GVM; it is pure system operations.

If Docker is using the host filesystem, you need to shut down the host cleanly as well.
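A clean shutdown for the Docker setup could look like the sketch below. The compose-file path is an assumption (point COMPOSE_FILE at wherever your Greenbone docker-compose.yml actually lives), and with DRY_RUN=1 (the default) the commands are only printed:

```shell
#!/bin/sh
# Sketch: clean shutdown order for a Greenbone-in-Docker VM.
# COMPOSE_FILE is an assumed path -- adjust it to your own setup.
DRY_RUN="${DRY_RUN:-1}"
COMPOSE_FILE="${COMPOSE_FILE:-$HOME/greenbone/docker-compose.yml}"
run() {
    if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

# 1. Stop the containers so gvmd and the scanner finish writing their data.
run docker compose -f "$COMPOSE_FILE" down

# 2. Stop the Docker daemon (and its socket, so it is not re-activated).
run sudo systemctl stop docker.service docker.socket

# 3. Flush dirty pages to disk before powering off.
run sync

# 4. Halt through the init system so filesystems are unmounted cleanly.
run sudo shutdown -h now
```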

Thanks for replying. I used the official docs and was able to run Greenbone on a VM as well as in Docker. The problem occurs when we restart the VM.

Yeah, it is purely system operations. I just wanted to know why it is happening on systems (VMs) that have GVM installed. I tried on two Ubuntu machines hosted on Canonical Ubuntu OpenStack and got the same results.
If you can help me figure it out, I can retry the setup, get OpenVAS up and running, and then simply restart the VM without touching anything else to reproduce the issue.
@rippledj @Lukas

Links used for installation:
First setup on Ubuntu 22.04 LTS

Second setup on Ubuntu 22.04 LTS

It will happen with ANY software that is writing to and operating on a filesystem while you turn it OFF! This is not GVM related …
