Can GVMD Resume Tasks After Restart?

After restarting the gvmd service, all active tasks are stopped. The logs show entries like:
Status of task <name> (<guid>) has changed to Interrupted

Is it possible to configure GVMD to continue active tasks after a restart?

@Alexey How about starting the task from the GUI after the restart?

Why restart gvmd during scans?

Eero

When a task is resumed from the GUI, it starts again from the beginning and the scanning progress is lost. I don’t really use the GUI; I manage gvmd with GMP. I have several scanners in k8s and one gvmd service, and I am testing what happens when my service loses its connection to gvmd.

We’re running OpenVAS in Kubernetes: gvmd is one service, and the scanners run as agents in the cluster. When gvmd restarts, all running scans are interrupted and don’t continue; we have to start them again, which is a problem because scans can take up to 90 minutes.

  • Is there a way to make gvmd resume scans automatically after a restart, or somehow avoid losing scan progress?
  • Also, is it possible to scale gvmd (run multiple instances with a shared database) to avoid this single point of failure? Any tricks or best practices for scaling gvmd in distributed setups?
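One partial workaround, sketched here on the assumption that you drive gvmd over GMP: after a restart, poll get_tasks for tasks in the Interrupted state and issue resume_task for each one. The selection step can be shown self-contained; the response shape below is abridged and the UUIDs are placeholders:

```python
import xml.etree.ElementTree as ET

def interrupted_task_ids(get_tasks_response: str) -> list[str]:
    """Return the UUIDs of tasks reported as Interrupted in a GMP
    <get_tasks_response> document."""
    root = ET.fromstring(get_tasks_response)
    return [
        task.get("id")
        for task in root.findall("task")
        if task.findtext("status") == "Interrupted"
    ]

# Abridged response shape; real responses carry many more fields.
sample = """<get_tasks_response status="200">
  <task id="11111111-1111-1111-1111-111111111111">
    <status>Interrupted</status>
  </task>
  <task id="22222222-2222-2222-2222-222222222222">
    <status>Done</status>
  </task>
</get_tasks_response>"""

print(interrupted_task_ids(sample))  # ['11111111-1111-1111-1111-111111111111']
```

With python-gvm you would authenticate, call gmp.get_tasks(), feed the raw XML into a selector like this, and call gmp.resume_task(task_id=...) for each hit. Keep in mind that a resumed task may still restart from the beginning, depending on what scanner state survived.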

Would love to hear about any solutions, workarounds, or tips!

Hi, you can try the resume_task command, but I am not aware of all the details of its implementation. I only know that it has shortcomings and may not work as expected.
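For reference, resume_task in GMP is a single XML command addressed by task UUID. A minimal sketch of building it (the UUID below is a placeholder):

```python
import xml.etree.ElementTree as ET

def build_resume_task(task_id: str) -> str:
    """Serialize the GMP <resume_task> command for a given task UUID."""
    cmd = ET.Element("resume_task", attrib={"task_id": task_id})
    return ET.tostring(cmd, encoding="unicode")

# Placeholder UUID for illustration.
print(build_resume_task("00000000-0000-0000-0000-000000000000"))
```

In practice you would not hand-build the XML: gvm-cli can send the command directly, and python-gvm exposes it as Gmp.resume_task(task_id=...). The point is just that resume is addressed per task UUID.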

As a general rule, you should not restart gvmd while scans are running. Also, it’s not possible to run several instances of gvmd on the same database. YOU MUST NOT DO THAT: it may corrupt the database. gvmd is simply not intended to work that way; it is developed as a monolithic daemon with exclusive access to the database.

Hey bricks, thanks for your reply!

Yeah, we already got burned by trying to run several gvmd instances on the same DB — learned that lesson the hard way! :smiley:

About restarts, we totally get that we shouldn’t restart gvmd during scans, but in k8s things just happen: nodes can die, pods get evicted, or gvmd restarts because of a lack of resources.
Right now, our only option is to restart the scan if it fails, but honestly, that’s not great for long scans: we lose all progress and waste a lot of time and resources.

If anyone has a better idea, we’d really appreciate it!


By the way, I found something in the gvmd source code. There’s a function called init_manage that calls init_manage_internal with a hardcoded argument stop_tasks = 1.

Maybe this is why all tasks are stopped on restart?

@butschster You could try changing it to 0 and recompiling?

Eero

@butschster

It might be more sensible to split the scan into smaller chunks and run the scans through the CLI interface. I’m not sure whether any data is lost during a restart; that might be the cause of the problem.

Eero
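To make the chunking idea concrete, here is a minimal sketch (example network ranges only) that splits one large CIDR target into smaller per-task chunks using only the Python standard library:

```python
import ipaddress

def split_target(cidr: str, new_prefix: int) -> list[str]:
    """Split one large CIDR target into smaller subnets, each of which
    can become its own scan target/task."""
    net = ipaddress.ip_network(cidr)
    return [str(sub) for sub in net.subnets(new_prefix=new_prefix)]

print(split_target("10.0.0.0/22", 24))
# ['10.0.0.0/24', '10.0.1.0/24', '10.0.2.0/24', '10.0.3.0/24']
```

Each chunk finishes quickly, so a gvmd restart only costs you the chunk that was in flight rather than a 90-minute scan.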

A disclaimer from Greenbone and from my side: doing so is at your own risk! As gvmd is the main component, changes might have unwanted side effects and can result in data loss. You will be on your own.

Of course, you can try it at your own risk. The most sensible approach is to break the scans into small chunks and try to stabilize the environment so that gvmd doesn’t need to be restarted in the middle of a scan.

Eero

This topic was automatically closed after 90 days. New replies are no longer allowed.