Slow scanning of Linux webservers

Another “it’s being slow” thread - I’ve read a number of them but not found any obvious fixes.

I’m running the latest community Docker containers, using a compose file pretty much identical to the one in the documentation.

The host it is running on has a quad-core Xeon E3-1220 @ 3 GHz, 32 GB RAM and local SSD storage. Not much else is running on it.

The first scan, covering one Windows+IIS server and one Ubuntu+Apache+WordPress server together, took 5h30. I then split them up: the Windows+IIS machine took 19m to scan, and a second Linux box with no web services also took about 19m. But rescanning the first Linux webserver ran for more than 3h before I aborted it. CPU on both the scanning host and the Linux target was maxed out on all cores. It got through to about 80% reasonably quickly but then bogged down on the web vulnerability checks.

I’m using the ‘full and fast’ scan config with standard options, scanning hosts in sequence. The Linux VM wasn’t running an on-host firewall, and both it and the scanning host were on the same network with no firewall in between.

The scan hadn’t hung: watching with htop I could see checks finishing and new ones starting, but each took minutes to run. The report contained a number of error messages: “NVT timeout after 320 seconds”, “NVT timeout after 900 seconds”, etc. With individual tests timing out after that long, it’s not surprising the whole scan took so long; if I scaled up to scanning all our servers it would take weeks…
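The timeout arithmetic alone explains a lot of the duration. A rough back-of-envelope calculation (the counts here are hypothetical, just to illustrate how quickly per-NVT timeouts add up per host):

```shell
# Suppose 10 NVTs hit the 320 s timeout and 5 hit the 900 s timeout on one host
echo $(( (10 * 320 + 5 * 900) / 60 )) minutes   # -> 128 minutes
```

So even a modest number of timing-out checks per host pushes a single-host scan into hours.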

Could this be because it’s running as a container? Would host networking speed it up? Are there any other tweaks?
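(For anyone wanting to try it: switching a container to host networking is a one-line Compose change. This is only a sketch with a hypothetical service name — the service names in the real community compose file will differ, and `network_mode: host` cannot be combined with `ports:` mappings.)

```yaml
services:
  openvas-scanner:        # hypothetical service name -- match your compose file
    network_mode: host    # bypass Docker's bridge/NAT; container shares the host's network stack
```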

Hi,

Did your machine swap during the scan?
There is something broken in your setup.
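A quick way to check for swap activity during a scan, using only standard Linux `/proc` interfaces (a sketch; tools like `vmstat` or `free -h` show the same data):

```shell
# Current swap usage in kB; SwapTotal == SwapFree means nothing is swapped out
grep -E 'SwapTotal|SwapFree' /proc/meminfo

# Cumulative swap-in/swap-out page counters; sample these twice while the
# scan runs -- if the numbers grow between samples, the host is actively swapping
grep -E '^pswpin|^pswpout' /proc/vmstat
```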

No, no swapping on either scanning host or webserver host. No other issues that I can see. Other non-web hosts scanned fine. I do have one other Linux/Apache/Wordpress host that I’ll try to scan and see how that compares…

Just a hint, disable web-checks and see how fast that is.

This is a sign for various problems on the target host or the connection towards it:

  • Network congestion / issues (low bandwidth, target not reachable at all together with an unscanned_closed = no scan config, …)
  • Overloaded target host / service (generally slow to respond, not able to handle the current connections, …)
  • IDS/IPS, WAF or similar slowing down probes / connections or blocking them completely
  • A host / service that was reachable during the port scan phase but then either stopped responding entirely or started “flapping”

Overall debugging would need to be done on either the target host itself or the network connection in between.
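A couple of the causes above (slow responses, flapping services) can be checked from the scanning host with standard tools. A sketch — the `TARGET` address is a placeholder to substitute with the slow host:

```shell
# Placeholder address -- substitute the slow target's IP/hostname
TARGET=${TARGET:-127.0.0.1}

# Per-request latency to the web service: a healthy server answers in
# milliseconds, a struggling one in seconds (timings print even on failure)
curl -o /dev/null -s --max-time 5 \
    -w "connect: %{time_connect}s  total: %{time_total}s\n" "http://$TARGET/" || true

# Does port 80 keep answering, or does it flap under repeated connections?
for i in 1 2 3; do
    if timeout 2 bash -c "exec 3<>/dev/tcp/$TARGET/80" 2>/dev/null; then
        echo "attempt $i: port 80 open"
    else
        echo "attempt $i: port 80 closed or filtered"
    fi
done
```

If the latency numbers are already high for a single plain request, the problem is on the target or the path to it, not in the scanner.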

Some additional related resources:


Thanks for that, some reading for me to do later!

I think I’ve pinned the cause on the target machine, although I haven’t resolved it. Even with all the ‘web server’ and ‘web application abuse’ categories disabled, as well as the OS-specific categories not in use, that machine took about 2h30 to scan. Other Ubuntu 18.04 servers took similar amounts of time, whereas Ubuntu 20.04 servers took around 20m with the ‘full and fast’ default, or about 13m with my reduced scan config.

Hardware-wise the target machines were very similar: VMs running on the same vSphere 7 hosts with similar resources, all running from pure SSD SAN storage, and all network connectivity between scanning host and targets at 1 Gb/s or greater. No local firewalling, but maybe AppArmor is interfering.
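If AppArmor is a suspect, its state on the target can be checked quickly (a sketch assuming standard Ubuntu tooling):

```shell
# Is AppArmor enabled on this kernel? Prints Y (or N), or a fallback
# message when the module is not present at all.
cat /sys/module/apparmor/parameters/enabled 2>/dev/null \
    || echo "AppArmor module not present"

# With root, `aa-status` lists loaded profiles and their enforce/complain
# modes, and `journalctl -k | grep -i "apparmor.*DENIED"` surfaces recent
# denials worth correlating with the slow scan windows.
```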

It’s a mystery, but it doesn’t appear to be an issue with the scanning server. If anyone has any suggestions I’ll give them a whirl (although I’m midway through phasing out the Ubuntu 18.04 machines anyway before the release goes EOL).
