If you secure your entire infrastructure at the transport layer, with end-to-end SSL for both internal and external traffic, then you likely have a large number of endpoints, each with its own SSL software stack and a wide array of certificates, some CA-signed, others not. Both the number of these endpoints and the rate at which they change over time can grow dramatically, especially when applications are deployed in a container architecture where new builds ship frequently.
SSL is important, and when developers are responsible for image builds, they are often the ones installing lower-level SSL libraries into their Docker images, not necessarily the resident DevOps security expert. This can lead to a myriad of SSL implementations running in your infrastructure. In short, it's critical to consistently test these endpoints, evaluating both their SSL capabilities and the validity of their certificates. Failing to do so can lead to vulnerabilities, as well as hard-to-triage connectivity issues when an SSL library upgrade suddenly disables older TLS versions or feature sets without warning.
One fantastic tool that already exists is testssl.sh (https://github.com/drwetter/testssl.sh). It is an actively maintained and widely used command-line tool that you can point at any TLS endpoint to interrogate it for supported TLS versions, cipher suites, known vulnerabilities, and certificate validity. It reports to STDOUT and can also write CSV, JSON, and HTML file output.
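As a quick illustration of what such a command looks like (the hostname and output paths here are hypothetical), a single invocation that writes both JSON and HTML reports might be:

```shell
# myapp.example.com is a placeholder endpoint; testssl.sh must be on your PATH.
./testssl.sh \
  --jsonfile results/myapp.json \
  --htmlfile results/myapp.html \
  myapp.example.com:443
```

It is exactly this kind of full, ordinary invocation, multiplied by hundreds of endpoints, that the rest of this post is about automating.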
My use case
My particular needs centered around being able to test hundreds of endpoints exposed as containers on various clusters. The information about what containers were running, and what ports were exposed at various layers of the stacks was readily available via orchestrator APIs. Using these APIs I was able to collect a unique list of HTTPS endpoints which I could then use to generate a long list of testssl.sh commands that needed to be executed.
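As a sketch of that generation step (the endpoint tuples, output layout, and flag choices below are my own illustrative assumptions, not the exact commands my script emits), turning a list of discovered HTTPS endpoints into testssl.sh command lines can look like:

```python
def build_testssl_commands(endpoints, output_root="testssl-output"):
    """Build one testssl.sh command line per (host, port) endpoint.

    `endpoints` is an iterable of (host, port) tuples collected from the
    orchestrator APIs; the output paths mirror a simple host/port hierarchy.
    """
    commands = []
    for host, port in endpoints:
        out_base = f"{output_root}/{host}/{port}"
        commands.append(
            f"testssl.sh --jsonfile {out_base}/result.json "
            f"--htmlfile {out_base}/result.html {host}:{port}"
        )
    return commands

# One command per endpoint, ready to drop into a command file, one per line:
cmds = build_testssl_commands([("myapp.example.com", 443)])
```

Each generated line is a complete, standalone testssl.sh command, which is the key property for the next step.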
The testssl.sh project already has some parallel command-file execution built in, but I found it a bit confusing to use and had some issues with it. In short, I wanted to be able to generate and execute a full-featured, normal testssl.sh command just as a user would (rather than some special subset with differing output behavior), and do it at larger scale, in parallel.
I already had another script I had written that could programmatically generate the appropriate testssl.sh commands, replete with generated output paths matching the service hierarchy within the target container cluster. I just wanted to be able to take the output from this (or any other file that a DevOps engineer could write in any editor they want), drop it into a directory, and have it automatically processed.
Written in Python, the script is intended to serve as part of a larger pipeline for mass concurrent invocations of testssl.sh. I’m currently using it in production to evaluate hundreds of endpoints that change daily on a continual (configurable) interval.
The daemon provides a long-lived watchdog process that monitors a directory (via watchdog) for testssl.sh command files. As new files matching the --filename-filter appear within the --input-dir, they are consumed and evaluated for testssl.sh commands, one per line. Each testssl.sh command is processed in a separate thread, and processing results are logged to a YAML or JSON result file under the --output-dir. The actual output from each testssl.sh invocation (i.e. via its --*file arguments) is also written to disk, scoped within a timestamped output directory under the --output-dir.
I hope this may be of use to others reading this. I also wrote a separate project, https://github.com/bitsofinfo/testssl.sh-alerts, which can consume testssl.sh JSON result files and then "react" to their contents, such as by sending Slack alerts about the results.