I have Greenbone Community Edition set up on Oracle Linux 9.2, currently running Greenbone Security Assistant v22.5.3 and OpenVAS Scanner v22.7.3.
I have configured my scan config with only one item from the SSL/TLS family:
SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection
However, my scan result reported zero issues, even though I deliberately introduced these issues for my initial test.
I have confirmed the issues are real using the testssl tool.
Here is my VirtualHost config, which purposely enables all SSL protocols and cipher suites:
SSLProtocol +ALL
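For reference, a deliberately weakened VirtualHost along these lines might look like the following sketch (the ServerName and certificate paths are placeholders, not my actual config):

```apache
<VirtualHost *:443>
    # Placeholder name and paths -- adjust for your own lab host
    ServerName weak.example.com
    SSLEngine on
    # Deliberately re-enable every protocol the build supports,
    # including the deprecated TLSv1.0 / TLSv1.1
    SSLProtocol +ALL
    # Deliberately permissive cipher list for testing
    SSLCipherSuite ALL
    SSLCertificateFile /etc/pki/tls/certs/weak.example.com.crt
    SSLCertificateKeyFile /etc/pki/tls/private/weak.example.com.key
</VirtualHost>
```

Note that on Oracle Linux / RHEL 9 the system-wide crypto policy must typically be relaxed (e.g. `update-crypto-policies --set LEGACY`) before OpenSSL will actually offer TLSv1.0/1.1, regardless of what SSLProtocol says.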
And here is the report from testssl confirming that weak protocols such as TLSv1.0 and TLSv1.1 are supported, along with weak cipher suites:
Testing protocols via sockets except NPN+ALPN
SSLv2 not offered (OK)
SSLv3 not offered (OK)
TLS 1 offered (deprecated)
TLS 1.1 offered (deprecated)
TLS 1.2 offered (OK)
TLS 1.3 offered (OK): final
NPN/SPDY not offered
ALPN/HTTP2 h2, http/1.1 (offered)
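As a rough illustration of the kind of per-version probing testssl performs here, a minimal Python sketch (the host name is a placeholder, and this is not how testssl or Greenbone's NASL scripts are actually implemented):

```python
import socket
import ssl

# Map display names to Python's TLSVersion enum
VERSIONS = {
    "TLSv1.0": ssl.TLSVersion.TLSv1,
    "TLSv1.1": ssl.TLSVersion.TLSv1_1,
    "TLSv1.2": ssl.TLSVersion.TLSv1_2,
    "TLSv1.3": ssl.TLSVersion.TLSv1_3,
}

def probe(host, port, version, timeout=5.0):
    """Return True if the server completes a handshake at exactly `version`."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False       # lab probing only; no cert validation
    ctx.verify_mode = ssl.CERT_NONE
    ctx.minimum_version = version    # pin the offer to a single version
    ctx.maximum_version = version
    try:
        with socket.create_connection((host, port), timeout=timeout) as raw:
            # server_hostname also puts the SNI extension into the ClientHello
            with ctx.wrap_socket(raw, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False

# Usage against your own lab host, e.g.:
#   for name, ver in VERSIONS.items():
#       print(name, "offered" if probe("weak.example.com", 443, ver) else "not offered")
```

The probing client itself is subject to the local OpenSSL crypto policy, so on a hardened scanner host the TLSv1.0/1.1 handshakes may fail client-side rather than server-side.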
Could you please enlighten me as to where I am going wrong?
How did you create the custom scan configuration? If you created a scan configuration with only a single VT (SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection in your case), it will not work, because I believe there are some base-level VTs that need to be included for a vulnerability scan to work properly.
You should create your config by cloning the Base scan config (Basic configuration template with a minimum set of NVTs required for a scan) and then adding your desired VTs on top of those.
These are the Greenbone reports.
The testssl command is able to detect those TLS vulnerabilities, and I can also see from my web server access log that Greenbone does test the server, but Greenbone does not report them as issues.
That is because the web server (Apache HTTPD) uses name-based virtual hosts.
When Greenbone scans for vulnerabilities, the domain has the problem, but the IP does not:
when accessed by IP, the request falls into another virtual host.
Hence Greenbone does not report it as a problem, because the IP was fine.
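To make the fallback concrete, here is a sketch with placeholder names and paths: when the client sends no SNI (e.g. a scanner connecting to the bare IP), Apache falls back to the first matching VirtualHost for that address:port, so the scanner is served that vhost's TLS settings instead of the weak ones:

```apache
# First vhost = default for *:443; used when the client sends no SNI
# or an unknown name (e.g. a scan against the bare IP)
<VirtualHost *:443>
    ServerName default.example.com
    SSLEngine on
    SSLProtocol -ALL +TLSv1.2 +TLSv1.3   # hardened: what the IP scan sees
    SSLCertificateFile /etc/pki/tls/certs/default.crt
    SSLCertificateKeyFile /etc/pki/tls/private/default.key
</VirtualHost>

# Only reached when the client's SNI matches this ServerName
<VirtualHost *:443>
    ServerName weak.example.com
    SSLEngine on
    SSLProtocol +ALL                     # deliberately weak test vhost
    SSLCertificateFile /etc/pki/tls/certs/weak.crt
    SSLCertificateKeyFile /etc/pki/tls/private/weak.key
</VirtualHost>
```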
Just a precaution: some may consider it unethical to scan IT infrastructure that you do not own or have explicit authorization to scan. I see that your reports are scanning the greenbone.ddns.net host.
After some scans and after adding more VTs to the scan config, I found that TLSv1.0 and TLSv1.1 are enabled with renegotiation support.
The script is SSL/TLS: Safe/Secure Renegotiation Support Status, OID: 1.3.6.1.4.1.25623.1.0.117757
Summary
Checks and reports if a remote SSL/TLS service supports
safe/secure renegotiation.
Detection Result
Protocol Version | Safe/Secure Renegotiation Support Status
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SSLv3 | Unknown, Reason: Scanner failed to negotiate an SSL/TLS connection (Either the scanner or the remote host is probably not supporting / accepting this SSL/TLS protocol version).
TLSv1.0 | Enabled, Note: While the remote service announces the support of safe/secure renegotiation it still might not support / accept renegotiation at all.
TLSv1.1 | Enabled, Note: While the remote service announces the support of safe/secure renegotiation it still might not support / accept renegotiation at all.
TLSv1.2 | Enabled, Note: While the remote service announces the support of safe/secure renegotiation it still might not support / accept renegotiation at all.
TLSv1.3 | Disabled (The TLSv1.3 protocol generally doesn't support renegotiation so this is always reported as 'Disabled')
The SSL/TLS cipher and version determination is done purely via NASL-based scripts, and that code path uses the IP rather than the hostname(s) of the target to query this information (technically: no SNI extension with the host name is sent).
There is an internal task to improve / extend this functionality to use / send the SNI extension in SSL/TLS requests. But as this is a hugely invasive task, which can have various negative side effects that are not really testable, there is no timeline available for it.
Are there still no plans to address this issue? This seems like a HUGE problem: not being able to assess assets by virtual host headers, nor having the SNI field injected. Especially since these are common practice now; Apache, NGINX, Cloudflare, Akamai, you name it, all expect Host headers and Server Name Indication to present the actual site and TLS data.
SNI: Nothing has changed since November 2024, and I don't think anything will change anytime soon. If you deem this important, feel free to provide patches to the relevant code (not sure where / how, though).
I did confirm in a network capture that the target name in the scan is properly injected as the Host header, so my problem is purely the SNI issue.
Well, I did a pure SSL capture in my lab this morning and can confirm that the SNI is set properly in the Client Hello. My issue with the specific host being tested must be something else, so I will keep digging. Thanks for pointing me in the right direction.
Only a short note to avoid confusion in the future. There are two “ways” of SSL/TLS-relevant communication involved:
1. The “usual” communication with the remote target done by the scanner. This uses GnuTLS as described previously and, AFAICT, passes an SNI in the Client Hello accordingly.
2. The “enumeration” of SSL/TLS cipher suites, which is done purely in NASL without any GnuTLS involved. This part currently doesn't pass an SNI in the Client Hello.
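In Python terms, the difference between the two paths can be sketched as follows (a hypothetical helper; neither GnuTLS nor NASL uses Python's ssl module, this only illustrates what “no SNI in the Client Hello” means):

```python
import socket
import ssl

def tls_client(sock, sni_hostname=None):
    """Wrap a connected socket for a TLS client handshake.

    With sni_hostname set, that name is sent as the SNI extension in the
    ClientHello (like the scanner's GnuTLS-based communication).
    With sni_hostname=None, no SNI extension is sent at all (like the
    current NASL-based cipher-suite enumeration).
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    if sni_hostname is None:
        # Without a target name, certificate name checking is impossible
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return ctx.wrap_socket(sock, server_hostname=sni_hostname,
                           do_handshake_on_connect=False)
```

On a name-based virtual host, only the first variant lets the server pick the intended vhost; the second is answered by the default vhost for that address, which is exactly the mismatch discussed above.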