Another “it’s being slow” thread - I’ve read a number of them but not found any obvious fixes.
I’m running the latest community Docker containers, using a compose file pretty much identical to the one in the documentation.
The host it is running on has a quad-core Xeon E3-1220 3GHz, 32 GB RAM and local SSD storage. Not much else running on it.
First scan of one Windows+IIS server and one Ubuntu+Apache+WordPress server took 5h30m. I subsequently split that up: the Windows+IIS machine took 19m to scan, and a second Linux box with no web services also took about 19m. But scanning the first Linux webserver again ran for more than 3h before I aborted it. CPU on both the scanning host and the Linux host was maxed out on all cores. It got through about 80% reasonably quickly but then bogged down scanning web vulnerabilities.
Using the ‘Full and fast’ scan config with otherwise standard options, hosts scanned in sequence. The Linux VM wasn’t running an on-host firewall, and both it and the scanning host were on the same network with no firewall in between.
The scan hadn’t hung: watching with htop I could see checks ending and new checks starting, but each one took minutes to run. The report contained a number of error messages: “NVT timeout after 320 seconds”, “NVT timeout after 900 seconds”, etc. If individual tests are timing out after that long, it’s not surprising the whole scan took hours; if I scaled up to scanning all the servers it would take weeks…
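For what it’s worth, that 320-second figure matches what I understand to be OpenVAS’s default per-NVT timeout, which can be set in the scanner config file. A sketch, assuming the containers read the usual `/etc/openvas/openvas.conf` (the path inside the community images may differ, and these values are my understanding of the defaults rather than anything I’ve verified in this setup):

```
# /etc/openvas/openvas.conf (path is an assumption for the container images)
# plugins_timeout: per-NVT timeout in seconds; 320 is, I believe, the default,
# which would explain the "NVT timeout after 320 seconds" messages.
plugins_timeout = 320

# scanner_plugins_timeout: separate, much longer timeout for ACT_SCANNER
# plugins such as port scanners.
scanner_plugins_timeout = 36000
```

Lowering `plugins_timeout` would cap how long any one stuck check can drag the scan out, at the cost of some checks being cut short on slow targets.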
Could this be because it’s running as a container? Would host networking speed it up? Are there any other tweaks?
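To answer my own question partly: host networking is at least a cheap experiment to rule out Docker’s bridge/NAT overhead. A minimal compose sketch, assuming a scanner service roughly like the documented one (the service name here is a placeholder, not taken from the official file):

```yaml
# docker-compose.yml fragment -- "openvas-scanner" is a placeholder name
services:
  openvas-scanner:
    network_mode: host   # bypass the Docker bridge/NAT for scan traffic
    # note: any "ports:" mappings must be removed; they don't apply with
    # host networking
```

That said, bridge NAT normally adds very little latency on a local LAN, so I’d treat this as a rule-out step rather than a likely fix.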
No, no swapping on either scanning host or webserver host. No other issues that I can see. Other non-web hosts scanned fine. I do have one other Linux/Apache/Wordpress host that I’ll try to scan and see how that compares…
From the documentation:

> Maximum concurrently executed NVTs per host / Maximum concurrently scanned hosts
> Select the speed of the scan on one host. The default values are chosen sensibly. If more VTs run simultaneously on a system, or more systems are scanned at the same time, the scan may have a negative impact on the performance of the scanned systems, the network, or the appliance itself. These values (“max_hosts” and “max_checks”) may be tweaked.
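The max_hosts/max_checks values can be edited per scan config in the web UI (default configs are read-only, so clone first), or via GMP. A sketch of the XML I believe `modify_config` expects — the config UUID is a placeholder, and my understanding is that preference values are base64-encoded, so “2” becomes “Mg==”:

```xml
<!-- Sketch only: lower per-host concurrency for a fragile target.
     CONFIG-UUID is a placeholder for the cloned scan config's id. -->
<modify_config config_id="CONFIG-UUID">
  <preference>
    <name>max_checks</name>  <!-- maximum concurrently executed NVTs per host -->
    <value>Mg==</value>      <!-- base64("2"); GMP expects base64 here, I believe -->
  </preference>
</modify_config>
```

Dropping max_checks would slow the wall-clock scan further, of course, but it might stop the target’s CPU being pinned, which could be what’s making each individual check crawl.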
I think I’ve pinned the cause on the target machine, although I haven’t resolved it. Even with all the ‘web server’ and ‘web application abuse’ categories disabled, as well as OS-specific categories not in use, that machine took about 2h30m to scan. Other Ubuntu 18.04 servers took similar amounts of time, whereas Ubuntu 20.04 servers took around 20m for the ‘Full and fast’ default, or about 13m for my reduced scan config.
Hardware-wise the target machines were very similar: VMs running on the same vSphere 7 hosts with similar resources, all running from pure SSD SAN storage, and all network connectivity between scanning host and target hosts 1 Gb/s or faster. No local firewalling, but maybe AppArmor is messing things up.
It’s a mystery, but it doesn’t appear to be an issue with the scanning server. If anyone has any suggestions I’ll give them a whirl (although I’m midway through phasing out the Ubuntu 18.04 machines before that release goes EOL).