
While the German government has yet to implement the necessary adjustments for the NIS2 directive, organizations shouldn’t lose momentum. Although enforcement is now expected in spring 2025 instead of October 2024, the core requirements remain unchanged. Much work remains for companies, especially operators of critical infrastructure, but most of it is clear and well-defined. Organizations must still focus on robust vulnerability management, such as that offered by Greenbone.

Missed Deadlines and the Need for Action

Initially, Germany was supposed to introduce the NIS2 compliance law by October 17, 2024, but the latest drafts failed to gain approval, and even the Ministry of the Interior does not anticipate a timely implementation. If the parliamentary process proceeds swiftly, the law could take effect by Q1 2025, the Ministry announced.

A recent study by techconsult (only in German), commissioned by Plusnet, reveals that while 67% of companies expect cyberattacks to increase, many of them still lack full compliance. NIS2 mandates robust security measures, regular risk assessments and rapid response to incidents. Organizations must report security breaches within 24 hours and deploy advanced detection systems, especially those already covered under the previous NIS1 framework.

Increased Security Budgets and Challenges

84% of organizations plan to increase their security spending, with larger enterprises projecting up to a 12% rise. Yet only 29% have fully implemented the necessary measures, citing workforce shortages and lack of awareness as key obstacles. The upcoming NIS2 directive presents not only a compliance challenge but also an opportunity to strengthen cyber resilience and gain customer trust. Therefore, 34% of organizations will invest in vulnerability management in the future.

Despite clear directives from the EU, political delays are undermining the urgency. The Bundesrechnungshof and other institutions have criticized the proposed exemptions for government agencies, which could weaken overall cybersecurity efforts. Meanwhile, the healthcare sector faces its own set of challenges, with some facilities granted extended transition periods until 2030.

Invest now to Stay Ahead

At the latest since the NIS2 regulations became imminent, businesses have been aware of the risks and willing to invest in their security infrastructure. As government action lags, companies must take proactive measures. Effective vulnerability management solutions, like those provided by Greenbone, are critical to maintaining compliance and security.

A 2023 World Economic Forum report surveyed 151 global organizational leaders and found that 93% of cyber leaders and 86% of business leaders believe a catastrophic cyber event is likely within the next two years. Still, many software vendors prioritize rapid development and product innovation above security. This month, CISA’s Director Jen Easterly stated software vendors “are building problems that open the doors for villains” and that “we don’t have a cyber security problem – we have a software quality problem”. Downstream, customers benefit from innovative software solutions but are also exposed to the risks of poorly written software applications: financially motivated ransomware attacks, wiper malware, nation-state espionage and data theft, costly downtime, reputational damage and even insolvency.

However astute, the Director’s position glosses over the true cyber risk landscape. For example, as Bruce Schneier identified back in 1999, IT complexity increases the probability of human error leading to misconfigurations [1][2][3]. Greenbone identifies both known software vulnerabilities and misconfigurations, with industry-leading vulnerability test coverage and compliance tests attesting to CIS controls and other standards such as the BSI basic controls for Microsoft Office.

At the end of the day, organizations hold responsibility to their stakeholders, customers and the general public. They need to stay focused and protect themselves with fundamental IT security activities including Vulnerability Management. In September 2024’s Threat Tracking blog post, we review the most pressing new developments in the enterprise cybersecurity landscape threatening SMEs and large organizations alike.

SonicOS Exploited in Akira Ransomware Campaigns

CVE-2024-40766 (CVSS 10 Critical), impacting SonicWall’s flagship OS SonicOS, has been identified as a known vector for campaigns distributing Akira ransomware. Akira, originally written in C++, has been active since early 2023. A second, Rust-based version became the dominant strain in the second half of 2023. The primary group behind Akira is believed to stem from the dissolved Conti ransomware gang. Akira is now operated as Ransomware as a Service (RaaS), leveraging a double extortion tactic against targets in Germany and across the EU, North America, and Australia. As of January 2024, Akira had compromised over 250 businesses and critical infrastructure entities, extorting over 42 million US dollars.

Akira’s tactics include exploiting known vulnerabilities for initial access.

Greenbone includes tests to identify SonicWall devices vulnerable to CVE-2024-40766 [1][2] and all other vulnerabilities exploited by the Akira ransomware gang for initial access.

Urgent Patch for Veeam Backup & Replication

Ransomware is the apex cyber threat, especially in healthcare. The US Department of Health and Human Services (HHS) reports that large breaches increased by 256% and ransomware incidents by 264% over the past five years. Organizations have responded with more proactive cybersecurity measures to prevent initial access and with more robust incident response and recovery, including stronger backup solutions. Backup systems are thus a prime target for ransomware operators.

Veeam is a leading vendor of enterprise backup solutions globally and promotes its products as a viable safeguard against ransomware attacks. CVE-2024-40711 (CVSS 10 Critical), a recently disclosed vulnerability in Veeam Backup & Replication, is especially perilous since it could allow hackers to target the last line of protection against ransomware – backups. The vulnerability was discovered and responsibly reported by Florian Hauser of CODE WHITE GmbH, a German cybersecurity research company. Unauthenticated Remote Code Execution (RCE) via CVE-2024-40711 was verified by security researchers within 24 hours of the disclosure, and proof-of-concept code is now publicly available online, compounding the risk.

Veeam Backup & Replication version 12.1.2.172 and all earlier v12 builds are vulnerable, and customers need to patch affected instances with urgency. Greenbone can detect CVE-2024-40711 in Veeam Backup & Replication, allowing IT security teams to stay one step ahead of ransomware gangs.

Blast-RADIUS Highlights a 20-Year-Old MD5 Collision Attack

RADIUS is a powerful and flexible authentication, authorization, and accounting (AAA) protocol used in enterprise environments to validate user-supplied credentials against a central authentication service such as Active Directory (AD), LDAP, or VPN services. Dubbed Blast-RADIUS, CVE-2024-3596 is a newly disclosed attack against the UDP implementation of RADIUS, accompanied by a dedicated website, research paper, and attack details. Proof-of-concept code is also available from a secondary source.

Blast-RADIUS is an Adversary-in-the-Middle (AiTM) attack that exploits a chosen-prefix collision weakness in MD5 originally identified in 2004 and improved in 2009. The researchers exponentially reduced the time required to compute MD5 collisions and released their improved version of hashclash. The attack can allow an active AiTM positioned between a RADIUS client and a RADIUS server to trick the client into honoring a forged Access-Accept response despite the RADIUS server issuing an Access-Reject response. This is accomplished by computing an MD5 collision between the expected Access-Reject and a forged Access-Accept response, allowing an attacker to approve login requests.
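To see why MD5 matters here, consider how a RADIUS server computes the Response Authenticator per RFC 2865: it is an MD5 digest over the response header, the Request Authenticator, the attributes and the shared secret. The sketch below is purely illustrative (the secret and field values are made up); it is this MD5-based construction that the chosen-prefix collision subverts.

```python
import hashlib
import struct

def response_authenticator(code: int, identifier: int, length: int,
                           request_auth: bytes, attributes: bytes,
                           shared_secret: bytes) -> bytes:
    """RFC 2865: ResponseAuth = MD5(Code + ID + Length + RequestAuth
                                    + Attributes + Secret)."""
    header = struct.pack("!BBH", code, identifier, length)
    return hashlib.md5(header + request_auth + attributes + shared_secret).digest()

# Illustrative values only: a forged Access-Accept (code 2) would pass client
# validation if its digest collided with that of the genuine Access-Reject
# (code 3) -- precisely what the Blast-RADIUS chosen-prefix collision achieves.
req_auth = bytes(16)        # 16-byte Request Authenticator (dummy)
attrs = b""                 # no attributes in this minimal sketch
secret = b"example-secret"  # hypothetical shared secret
print(response_authenticator(2, 1, 20, req_auth, attrs, secret).hex())
```

Because the client checks only this 16-byte MD5 digest, an attacker who can force a collision between the two response types controls the authentication outcome.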

Greenbone can detect a wide array of vulnerable RADIUS implementations in enterprise networking devices such as F5 BIG-IP [1], Fortinet FortiAuthenticator [2] and FortiOS [3], Palo Alto PAN-OS [4], Aruba CX Switches [5] and ClearPass Policy Manager [6], and on the OS level in Oracle Linux [7][8], SUSE [9][10][11], OpenSUSE [12][13], Red Hat [14][15], Fedora [16][17], Amazon Linux [18], Alma [19][20], and Rocky Linux [21][22] among others.

Urgent: CVE-2024-27348 in Apache HugeGraph-Server

CVE-2024-27348 (CVSS 9.8 Critical) is an RCE vulnerability in the open-source Apache HugeGraph-Server that affects all 1.0 versions before 1.3.0 running Java 8 or Java 11. HugeGraph-Server provides an API interface used to store, query and analyze complex relationships between data points and is commonly used for analyzing data from social networks, for recommendation systems and for fraud detection.

CVE-2024-27348 allows attackers to bypass the sandbox restrictions within the Gremlin query language by exploiting inadequate Java reflection filtering. An attacker can leverage the vulnerability by crafting malicious Gremlin scripts and submitting them via API to the HugeGraph /gremlin endpoint to execute arbitrary commands. The vulnerability can be exploited via remote, adjacent, or local access to the API and can enable privilege escalation.

CVE-2024-27348 is being actively exploited in hacking campaigns. Proof-of-concept exploit code [1][2][3] and an in-depth technical analysis are publicly available, giving cyber criminals a head start in developing attacks. Greenbone includes an active check and a version detection test to identify vulnerable instances of Apache HugeGraph-Server. Users are advised to update to the latest version.

Ivanti has Been an Open Door for Attackers in 2024

Our blog has covered vulnerabilities in Ivanti products several times this year [1][2][3]. September 2024 was another hot month for weaknesses in Ivanti products. Ivanti finally patched CVE-2024-29847 (CVSS 9.8 Critical), an RCE vulnerability impacting Ivanti Endpoint Manager (EPM) that was first reported in May 2024. Proof-of-concept exploit code and a technical description are now publicly available, increasing the threat. Although there is no evidence of active exploitation yet, this CVE should be considered high priority and patched with urgency.

However, in September 2024, CISA also identified a staggering four new vulnerabilities in Ivanti products being actively exploited in the wild. Greenbone can detect all of these new additions to CISA KEV, as well as previously known vulnerabilities in Ivanti products.

Summary

In this month’s Threat Tracking blog, we highlighted major cybersecurity developments including critical vulnerabilities such as CVE-2024-40766, exploited by Akira ransomware, CVE-2024-40711, impacting Veeam Backup & Replication, and the newly disclosed Blast-RADIUS attack that could impact enterprise AAA. Proactive cybersecurity activities such as continuous vulnerability management and compliance attestation help to mitigate risks from ransomware, wiper malware and espionage campaigns, allowing defenders to close security gaps before adversaries can exploit them.

The cybersecurity risk environment has been red hot through the first half of 2024. Critical vulnerabilities in even the most essential technologies are perpetually open to cyber attacks, and defenders face a continuous struggle to identify and remediate relentlessly emerging security gaps. Large organizations are being targeted by sophisticated “big game hunting” campaigns run by ransomware gangs seeking to hit the ransomware jackpot. The largest ransomware payout ever was reported in August: 75 million US dollars to the Dark Angels gang. Small and medium-sized enterprises are targeted on a daily basis by automated “mass exploitation” attacks, also often seeking to deliver ransomware [1][2][3].

A quick look at CISA’s Top Routinely Exploited Vulnerabilities shows us that even though cyber criminals can turn new CVE (Common Vulnerabilities and Exposures) information into exploit code in a matter of days or even hours, older vulnerabilities from years past are still on their radar.

In this month’s Threat Tracking blog post, we will point out some of the top cybersecurity risks to enterprise cybersecurity, highlighting vulnerabilities recently reported as actively exploited and other critical vulnerabilities in enterprise IT products.

The BSI Improves LibreOffice’s Mitigation of Human Error

OpenSource Security, on behalf of the German Federal Office for Information Security (BSI), recently identified a secure-by-design flaw in LibreOffice. Tracked as CVE-2024-6472 (CVSS 7.8 High), it was found that users could enable unsigned macros embedded in LibreOffice documents, overriding the “high security mode” setting. While exploitation requires human interaction, the weakness created a false sense of security: users could assume that unsigned macros cannot be executed when “high security mode” is enabled.

KeyTrap: DoS Attack Against DNSSEC

In February 2024, academics at the German National Research Center for Applied Cybersecurity (ATHENE) in Darmstadt disclosed “the worst attack on DNS ever discovered”. According to the researchers, a single packet can cause a Denial of Service (DoS) by exhausting a DNSSEC-validating DNS resolver. Dubbed “KeyTrap”, attackers can exploit the weakness to prevent clients using a compromised DNS server from accessing the internet or local network resources. The culprit is a design flaw in the current DNSSEC specification [RFC-9364] that dates back more than 20 years [RFC-3833].

Published in February 2024 and tracked as CVE-2023-50387 (CVSS 7.5 High), the vulnerability is considered trivial to exploit, and proof-of-concept code is available on GitHub. The availability of exploit code means that low-skilled criminals can easily launch attacks. Greenbone can identify systems with vulnerable DNS applications impacted by CVE-2023-50387 using local security checks (LSC) for all operating systems.

CVE-2024-23897 in Jenkins Used to Breach Indian Bank

CVE-2024-23897 (CVSS 9.8 Critical) in Jenkins (versions 2.441 and LTS 2.426.2 and earlier) is being actively exploited and used in ransomware campaigns including one against the National Payments Corporation of India (NPCI). Jenkins is an open-source automation server used primarily for continuous integration (CI) and continuous delivery (CD) in software development operations (DevOps).

The Command Line Interface (CLI) in affected versions of Jenkins contains a path traversal vulnerability [CWE-35] caused by a feature that replaces an @-character followed by a file path with the file’s actual contents. This allows attackers to read the contents of sensitive files, including those that can provide unauthorized access and enable subsequent code execution. CVE-2024-23897 and its use in ransomware attacks follows a joint CISA and FBI alert for software vendors to address path traversal vulnerabilities [CWE-35] in their products. Greenbone includes an active check [1] and two version detection tests [2][3] for identifying vulnerable versions of Jenkins on Windows and Linux.
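The expansion behavior behind CVE-2024-23897 can be simulated in a few lines. This is a hedged sketch, not Jenkins code: it mimics how the underlying argument parser replaces any CLI argument beginning with “@” with the lines of the named file, which is why a crafted argument can leak file contents back to the attacker.

```python
import os
import tempfile
from pathlib import Path

def expand_at_args(args):
    """Simulates the args4j-style behavior behind CVE-2024-23897: any CLI
    argument beginning with '@' is replaced by the lines of the named file."""
    expanded = []
    for arg in args:
        if arg.startswith("@"):
            # The file path after '@' is read and its lines become arguments,
            # so an attacker-controlled path discloses that file's contents.
            expanded.extend(Path(arg[1:]).read_text().splitlines())
        else:
            expanded.append(arg)
    return expanded

# Demo with a temporary file standing in for a sensitive path
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("top-secret-value\n")
    secret_path = f.name
print(expand_at_args(["connect-node", "@" + secret_path]))
os.unlink(secret_path)
```

In Jenkins itself, fragments of the expanded file are reflected in error messages, which is how attackers retrieve the leaked content.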

2 New Actively Exploited CVEs in String of Apache OFBiz Flaws

Apache OFBiz (Open For Business) is a popular open-source enterprise resource planning (ERP) and e-commerce software suite developed by the Apache Software Foundation. In August 2024, CISA alerted the cybersecurity community to active exploitation of Apache OFBiz via CVE-2024-38856 (CVSS 9.8 Critical), affecting versions before 18.12.13. CVE-2024-38856 is a path traversal vulnerability [CWE-35] in OFBiz’s “override view” functionality that allows unauthenticated attackers to achieve Remote Code Execution (RCE) on the affected system.

CVE-2024-38856 is a bypass of a previously patched vulnerability, CVE-2024-36104, published just in June 2024, indicating that the initial fix did not fully remediate the problem. It also builds upon another 2024 vulnerability in OFBiz, CVE-2024-32113 (CVSS 9.8 Critical), which was being actively exploited to distribute the Mirai botnet. Finally, in early September 2024, two new critical severity CVEs, CVE-2024-45507 and CVE-2024-45195 (both CVSS 9.8 Critical), were added to the list of threats impacting current versions of OFBiz.

Due to the notice of active exploitation and readily available Proof-of-Concept (PoC) exploits for CVE-2024-38856 [1][2] and CVE-2024-32113 [1][2], affected users need to patch urgently. Greenbone can detect all aforementioned CVEs in Apache OFBiz with both active and version checks.

CVE-2022-0185 in the Linux Kernel Actively Exploited

CVE-2022-0185 (CVSS 8.4 High), a heap-based buffer overflow vulnerability in the Linux kernel, was added to CISA KEV in August 2024. Publicly available PoC exploit code and detailed technical descriptions of the vulnerability have contributed to the increase in cyber attacks exploiting CVE-2022-0185.

In CVE-2022-0185, the length of supplied parameters is not properly verified in the Linux kernel’s “legacy_parse_param()” function within the Filesystem Context functionality. This flaw allows an unprivileged local user to escalate their privileges to the root user.
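The root cause is an unsigned integer underflow in the remaining-space check. As a hedged illustration of the bug class (the constants mirror the kernel’s check, but this is a Python simulation of `size_t` arithmetic, not kernel code), note what happens when the amount already used exceeds the buffer size:

```python
PAGE_SIZE = 4096
SIZE_T_MASK = (1 << 64) - 1  # emulate 64-bit unsigned size_t wraparound

def remaining_space(used: int) -> int:
    # Analogous to the flawed bound "PAGE_SIZE - 2 - size" in
    # legacy_parse_param(): when 'used' exceeds PAGE_SIZE - 2, the unsigned
    # subtraction wraps around instead of going negative, yielding a huge
    # bogus limit that lets subsequent writes overflow the heap buffer.
    return (PAGE_SIZE - 2 - used) & SIZE_T_MASK

print(remaining_space(100))   # sane: 3994 bytes left
print(remaining_space(4095))  # underflow: an enormous bogus limit
```

In the kernel, that wrapped value is compared against the supplied length, so an oversized parameter passes the check and overflows the allocation.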

Greenbone has been able to detect CVE-2022-0185 since it was disclosed in early 2022, via vulnerability test modules covering a wide set of Linux distributions including Red Hat, Ubuntu, SUSE, Amazon Linux, Rocky Linux, Fedora and Oracle Linux, as well as enterprise products such as IBM Spectrum Protect Plus.

New VoIP and PBX Vulnerabilities

A handful of CVEs were published in August 2024 impacting enterprise voice communication systems. The vulnerabilities were disclosed in Cisco’s small business VoIP systems and Asterisk, a popular open-source PBX system. Let’s dig into the specifics:

Cisco Small Business IP Phones Offer RCE and DoS

Three high severity vulnerabilities were disclosed that impact the web management console of Cisco Small Business SPA300 Series and SPA500 Series IP Phones. While underscoring the importance of not exposing management consoles to the internet, these vulnerabilities also represent a vector for an insider or a dormant attacker who has already gained access to an organization’s network to pivot to higher value assets and disrupt business operations.

Greenbone includes detection for all newly disclosed CVEs in Cisco Small Business IP Phones. Here is a brief technical description of each:

  • CVE-2024-20454 and CVE-2024-20450 (CVSS 9.8 Critical): An unauthenticated, remote attacker could execute arbitrary commands on the underlying operating system with root privileges because incoming HTTP packets are not properly checked for size, which could result in a buffer overflow.
  • CVE-2024-20451 (CVSS 7.5 High): An unauthenticated, remote attacker could cause an affected device to reload unexpectedly causing a Denial of Service because HTTP packets are not properly checked for size.

CVE-2024-42365 in Asterisk PBX Telephony Toolkit

Asterisk is an open-source private branch exchange (PBX) and telephony toolkit. A PBX is a system used to manage internal and external call routing and can use traditional phone lines (analog or digital) or VoIP (IP PBX). CVE-2024-42365, published in August 2024, impacts versions of Asterisk before 18.24.2, 20.9.2 and 21.4.2 and certified-asterisk versions 18.9-cert11 and 20.7-cert2. An exploit module has also been published for the Metasploit attack framework, adding to the risk; however, active exploitation in the wild has not yet been observed.

Greenbone can detect CVE-2024-42365 via network scans. Here is a brief technical description of the vulnerability:

  • CVE-2024-42365 (CVSS 8.8 High): An AMI user with “write=originate” may change all configuration files in the “/etc/asterisk/” directory. This occurs because they are able to curl remote files and write them to disk but are also able to append to existing files using the FILE function inside the SET application. This issue may result in privilege escalation, Remote Code Execution or blind server-side request forgery with arbitrary protocols.

Browsers: Perpetual Cybersecurity Threats

CVE-2024-7971 and CVE-2024-7965, two new CVSS 8.8 High severity vulnerabilities in the Chrome browser, are being actively exploited for RCE. Either CVE can be triggered when victims are tricked into simply visiting a malicious web page. Google acknowledges that exploit code is publicly available, giving even low-skilled cyber criminals the ability to launch attacks. Google Chrome has seen a steady stream of new vulnerabilities and active exploitation in recent years. A quick inspection of Mozilla Firefox shows a similarly continuous stream of critical and high severity CVEs: seven Critical and six High severity vulnerabilities were disclosed in Firefox during August 2024, although active exploitation of these has not been reported.

The continuous onslaught of vulnerabilities in major browsers underscores the need for diligence to ensure that updates are applied as soon as they become available. Due to Chrome’s high market share of over 65% (over 70% considering Chromium-based Microsoft Edge) its vulnerabilities receive increased attention from cyber criminals. Considering the high number of severe vulnerabilities impacting Chromium’s V8 engine (more than 40 so far in 2024), Google Workspace admins might consider disabling V8 for all users in their organization to increase security. Other options for hardening browser security in high-risk scenarios include using remote browser isolation, network segmentation and booting from secure baseline images to ensure endpoints are not compromised.

Greenbone includes active authenticated vulnerability tests to identify vulnerable versions of browsers for Linux, Windows and macOS.

Summary

New critical and remotely exploitable vulnerabilities are being disclosed at record-shattering rates amidst a red-hot cyber risk environment. Asking IT security teams to manually track newly exposed vulnerabilities in addition to applying patches imposes an impossible burden and risks leaving critical vulnerabilities undetected and exposed. Vulnerability management is considered a fundamental cybersecurity activity; defenders of large, medium and small organizations need to employ tools such as Greenbone to automatically seek out and report vulnerabilities across an organization’s IT infrastructure.

Conducting automated network vulnerability scans and authenticated scans of each system’s host attack surface can dramatically reduce the workload on defenders, automatically providing them with a list of remediation tasks that is sortable according to threat severity.

OpenVAS began in 2005 when Nessus transitioned from open source to a proprietary license. Two companies, Intevation and DN Systems, adopted the existing project and began evolving and maintaining it under a GPL v2.0 license. Since then, OpenVAS has evolved into Greenbone, the most widely-used and applauded open-source vulnerability scanner and vulnerability management solution in the world. We are proud to offer Greenbone as both a free Community Edition for developers and also as a range of enterprise products featuring our Greenbone Enterprise Feed to serve the public sector and private enterprises alike.

As the “old dog” on the block, Greenbone is hip to the marketing games that cybersecurity vendors like to play. However, our own goals remain steadfast – to share the truth about our product and industry-leading vulnerability test coverage. So, when we reviewed a recent 2024 network vulnerability scanner benchmark report published by a competitor, we were a little shocked to say the least.

As the most recognized open-source vulnerability scanner, it makes sense that Greenbone was included in the competition for top dog. However, while we are honored to be part of the test, some facts made us scratch our heads. You might say we have a “bone to pick” about the results. Let’s jump into the details.

What the 2024 Benchmark Results Found

The 2024 benchmark test conducted by Pentest-Tools ranked leading vulnerability scanners according to two factors: Detection Availability (the CVEs each scanner has detection tests for) and Detection Accuracy (how effective their detection tests are).

The benchmark pitted our free Community Edition of Greenbone and the Greenbone Community Feed against the products of other vendors: Qualys, Rapid7, Tenable, Nuclei, Nmap and Pentest-Tools’ own product. The report ranked Greenbone 5th in Detection Availability and roughly tied for 4th place in Detection Accuracy. Not bad for going up against the titans of the cybersecurity industry.

The only problem is, as mentioned above, Greenbone has an enterprise product too, and when the results are recalculated using our Greenbone Enterprise Feed, the findings are starkly different – Greenbone wins hands down.

Here is What we Found

Bar chart from the 2024 benchmark for network vulnerability scanners: Greenbone Enterprise achieves the highest values with 78% availability and 61% accuracy

Our Enterprise Feed Detection Availability Leads the Pack

According to our own internal findings, which can be verified using our SecInfo Portal, the Greenbone Enterprise Feed has detection tests for 129 of the 164 CVEs included in the test. This means our Enterprise product’s Detection Availability is a staggering 70.5% higher than reported, placing us head and shoulders above the rest.

To be clear, the Greenbone Enterprise Feed tests aren’t something we added on after the fact. Greenbone updates both our Community and Enterprise Feeds on a daily basis and we are often the first to release vulnerability tests when a CVE is published. A review of our vulnerability test coverage shows they have been available from day one.

Our Detection Accuracy was far Underrated

And another thing: Greenbone isn’t like those other scanners. The way Greenbone is designed gives it strong, industry-leading advantages. For example, our scanner can be controlled via API, allowing users to develop their own custom tools and control all the features of Greenbone in any way they like. Secondly, our Quality of Detection (QoD) ranking doesn’t even exist on most other vulnerability scanners.

The report author made it clear they simply used the default configuration for each scanner. However, without applying Greenbone’s QoD filter properly, the benchmark failed to fairly assess Greenbone’s true CVE detection rate. Applying these findings, Greenbone again comes out ahead of the pack, detecting an estimated 112 out of the 164 CVEs.

Summary

While we were honored that our Greenbone Community Edition ranked 5th in Detection Availability and tied for 4th in Detection Accuracy in a recently published network vulnerability scanner benchmark, these results fail to consider the true power of the Greenbone Enterprise Feed. It stands to reason that our Enterprise product should be in the running. After all, the benchmark included enterprise offerings from other vendors.

When recalculated using the Enterprise Feed, Greenbone’s Detection Availability leaps to 129 of the 164 CVEs on the test, 70.5% above what was reported. Also, using the default settings fails to account for Greenbone’s Quality of Detection (QoD) feature. When adjusted for these oversights, Greenbone ranks at the forefront of the competition. As the most used open-source vulnerability scanner in the world, Greenbone continues to lead in vulnerability coverage, timely publication of vulnerability tests, and truly enterprise grade features such as a flexible API architecture, advanced filtering, and Quality of Detection scores.

Every business has mission-critical activities. Security controls are meant to protect those critical activities so that business operations and strategic goals can be sustained indefinitely. An “install and forget” approach to security provides few assurances for achieving these objectives. An ever-changing digital landscape means a security gap could lead to a high-stakes data breach. Privilege creep, server sprawl and configuration errors tend to pop up like weeds. Security teams who don’t continuously monitor don’t catch them – attackers do. For this reason, cyber security frameworks tend to be iterative processes that include monitoring, auditing and continuous improvement.

Security officers should be asking: What does our organization need to measure to gain strong assurances and enable continuous improvement? In this article we will take you through a rationale for Key Performance Indicators (KPI) in cyber security outlined by industry leaders such as NIST and The SANS Institute and define a core set of vulnerability management specific KPIs. The most fundamental KPIs covered here can serve as a starting point for organizations implementing a vulnerability management program from scratch, while the more advanced measures can provide depth of visibility for organizations with mature vulnerability management programs already in place.

Cyber Security KPI Support Core Strategic Business Goals

KPI are generated by collecting and analyzing relevant performance data and mainly serve two strategic goals. The first is to facilitate evidence-based decision making. For example, KPI can help managers benchmark how vulnerability management programs are performing in order to assess the overall level of risk mitigation and decide whether to allocate more resources or accept the status quo. The second core strategic goal is accountability for security activities. KPI can help identify causes of poor performance and provide an early warning of insufficient or poorly implemented security controls. With proper monitoring of vulnerability management performance, the effectiveness of existing procedures can be evaluated, allowing them to be adjusted or supplemented with additional controls. The evidence collected while generating KPI can also be used to demonstrate compliance with internal policies, mandatory or voluntary cyber security standards, or any applicable laws and regulations by evidencing cyber security program activities.

The scope of measuring KPI can be enterprise-wide or focused on departments or infrastructure that is critical to business operations. This scope can also be adjusted as a cybersecurity program matures. During the initial stages of a vulnerability management program, only basic information may be available from which to build KPI metrics. However, as the program matures, data collection becomes more robust, supporting more complex KPI metrics. More advanced measures may also be justified to gain higher visibility in organizations with increased risk.

Types of Cyber Security Measures

NIST SP 800-55 V1 (and its predecessor NIST SP 800-55 r2) focuses on the development and collection of three types of measures:

  • Implementation Measures: These measure the execution of security policy and gauge the progress of implementation. Examples include: the total number of information systems scanned and the percentage of critical systems scanned for vulnerabilities.
  • Effectiveness/Efficiency Measures: These measure the results of security activities and monitor program-level and system-level processes. This can help gauge if security controls are implemented correctly, operating as intended, and producing a desirable outcome. For example, the percentage of all identified critical severity vulnerabilities that have been mitigated across all operationally critical infrastructure.
  • Impact Measures: These measure the business consequences of security activities such as cost savings, costs incurred by addressing security vulnerabilities, or other business related impacts of information security.

Important Indicators for Vulnerability Management

Since vulnerability management is fundamentally the process of identifying and remediating known vulnerabilities, KPIs that provide insight into the detection and remediation of known threats are most appropriate. In addition to these two key areas, assessing a particular vulnerability management tool’s effectiveness at detecting vulnerabilities can help compare different products. Since these are the most logical ways to evaluate vulnerability management activities, our list groups KPIs into these three categories. Tags are also added to each item indicating which purpose specified in NIST SP 800-55 the metric satisfies.

While not an exhaustive list, here are some key KPIs for vulnerability management:

Detection Performance Metrics

  • Scan Coverage (Implementation): This measures the percentage of an organization’s total assets that are being scanned for vulnerabilities. Scan coverage is especially relevant at the early stages of program implementation for setting targets and measuring the evolving maturity of the program. Scan coverage can also be used to identify gaps in an organization’s IT infrastructure that are not being scanned, putting those assets at increased risk.
  • Mean Time to Detect (MTTD) (Efficiency): This measures the average time between when vulnerability information is first published and when a security control is able to identify the vulnerability. MTTD may be improved by updating a vulnerability scanner’s modules more frequently or by conducting scans more often.
  • Unidentified Vulnerabilities Ratio (Effectiveness): The ratio of vulnerabilities identified proactively through scans versus those discovered through breach or incident post-mortem analyses. A higher ratio suggests better proactive detection capabilities.
  • Automated Discovery Rate (Efficiency): This metric measures the percentage of vulnerabilities identified by automated tools versus manual discovery methods. Higher automation can lead to more consistent and faster detection.
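As a rough illustration, the first two detection metrics above can be derived from ordinary scan records. The following Python sketch uses invented asset and detection data; the field names and dates are assumptions for illustration only, not a Greenbone data model.

```python
from datetime import datetime

# Hypothetical asset inventory; "scanned" marks assets covered by regular scans.
assets = [
    {"id": "srv-01", "scanned": True},
    {"id": "srv-02", "scanned": True},
    {"id": "ws-117", "scanned": False},
    {"id": "db-01", "scanned": True},
]

# (date the vulnerability information was first published, date our scanner first flagged it)
detections = [
    ("2024-03-01", "2024-03-04"),
    ("2024-03-10", "2024-03-11"),
    ("2024-04-02", "2024-04-07"),
]

# Scan Coverage: percentage of total assets that are being scanned
scan_coverage = 100 * sum(a["scanned"] for a in assets) / len(assets)

# MTTD: mean of (first detection date - publication date) in days
days = [
    (datetime.fromisoformat(found) - datetime.fromisoformat(published)).days
    for published, found in detections
]
mttd_days = sum(days) / len(days)

print(f"Scan coverage: {scan_coverage:.0f}%")  # 75%
print(f"MTTD: {mttd_days:.1f} days")           # 3.0 days
```

In practice these figures would come from a scanner’s asset database and scan logs rather than hand-written lists, but the arithmetic is the same.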

Remediation Performance Metrics

  • Mean Time to Remediate (MTTR; Efficiency): This measures the average time taken to fix vulnerabilities after they are detected. By tracking remediation times, organizations can gauge their responsiveness to security threats and evaluate the risk posed by exposure time. A shorter MTTR generally indicates a more agile security operation.
  • Remediation Coverage (Effectiveness): This metric represents the proportion of detected vulnerabilities that have been successfully remediated and serves as a critical indicator of effectiveness in addressing identified security risks. Remediation coverage can be adjusted to specifically reflect the rate of closing critical or high severity security gaps. By focusing on the most dangerous vulnerabilities first, security teams can more effectively minimize risk exposure.
  • Risk Score Reduction (Impact): This metric reflects the overall impact that vulnerability management activities are having on risk. By monitoring changes in the risk score, managers can evaluate how well the threat posed by exposed vulnerabilities is being managed. Risk Score Reduction is typically calculated using risk assessment tools that provide a contextual view of each organization’s unique IT infrastructure and risk profile.
  • Rate of Compliance (Impact): This metric represents the percentage of systems that comply with specific cyber security regulations, standards, or internal policies. It serves as an essential measure for gauging compliance status and provides evidence of this status to various stakeholders. It also serves as a warning if compliance requirements are not being satisfied, thereby reducing the risk of penalties and helping maintain the security posture intended by the compliance target.
  • Vulnerability Reopen Rate (Efficiency): This metric measures the percentage of vulnerabilities that are reopened after being marked as resolved. The reopen rate indicates the efficiency of remediation efforts. Ideally, once a remediation ticket has been closed, the same vulnerability should not resurface and trigger another ticket.
  • Cost of Remediation (Impact): This metric measures the total cost associated with fixing detected vulnerabilities, encompassing both direct and indirect expenses. Cost analysis can aid decisions for budgeting and resource allocation by tracking the amount of time and resources required to detect and apply remediation.
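MTTR and remediation coverage fall out of the same ticket data. The sketch below is a minimal illustration; the ticket structure, dates, and severity labels are hypothetical.

```python
from datetime import date

# Hypothetical remediation tickets; "remediated" is None while a ticket is open.
tickets = [
    {"detected": date(2024, 5, 1), "remediated": date(2024, 5, 8), "severity": "critical"},
    {"detected": date(2024, 5, 3), "remediated": date(2024, 5, 6), "severity": "high"},
    {"detected": date(2024, 5, 10), "remediated": None, "severity": "critical"},
]

# MTTR: mean days from detection to remediation over closed tickets
closed = [t for t in tickets if t["remediated"]]
mttr_days = sum((t["remediated"] - t["detected"]).days for t in closed) / len(closed)

# Remediation coverage: share of detected vulnerabilities already fixed
remediation_coverage = 100 * len(closed) / len(tickets)

# The same coverage, restricted to critical severity findings
crit = [t for t in tickets if t["severity"] == "critical"]
crit_coverage = 100 * sum(1 for t in crit if t["remediated"]) / len(crit)

print(f"MTTR: {mttr_days:.1f} days")                         # 5.0 days
print(f"Remediation coverage: {remediation_coverage:.0f}%")  # 67%
print(f"Critical remediation coverage: {crit_coverage:.0f}%")  # 50%
```

Filtering by severity, as in the last metric, is how coverage can be "adjusted to specifically reflect the rate of closing critical or high severity security gaps."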

Vulnerability Scanner Effectiveness Metrics

  • True Positive Detection Rate (Effectiveness): This measures the percentage of vulnerabilities that can be accurately detected by a particular tool. True positive detection rate measures the effective coverage of a vulnerability scanning tool and allows two vulnerability scanning products to be compared according to their relative value.
  • False Positive Detection Rate (Effectiveness): This metric measures the frequency at which a tool incorrectly identifies non-existent vulnerabilities as being present. This can lead to wasted resources and effort. False positive detection rate can gauge the reliability of a vulnerability scanning tool to ensure it aligns with operational requirements.
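Where a benchmark set of known vulnerabilities is available (for example, a deliberately vulnerable test environment), both rates reduce to simple set arithmetic. The CVE IDs below are invented for illustration.

```python
# Ground truth: vulnerabilities known to exist in the benchmark environment
known_vulns = {"CVE-2024-0001", "CVE-2024-0002", "CVE-2024-0003", "CVE-2024-0004"}

# Findings reported by the scanner under evaluation
reported = {"CVE-2024-0001", "CVE-2024-0002", "CVE-2024-0003", "CVE-2024-9999"}

true_positives = reported & known_vulns   # correctly detected
false_positives = reported - known_vulns  # reported but not actually present

# True positive rate: share of known vulnerabilities the tool detected
tp_rate = 100 * len(true_positives) / len(known_vulns)

# False positive rate: share of the tool's reports that were incorrect
fp_rate = 100 * len(false_positives) / len(reported)

print(f"True positive detection rate: {tp_rate:.0f}%")   # 75%
print(f"False positive detection rate: {fp_rate:.0f}%")  # 25%
```

Running the same benchmark against two scanners yields directly comparable rates.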

Key Takeaways

By generating and analyzing Key Performance Indicators (KPIs), organizations can satisfy fundamental cybersecurity requirements for continuous monitoring and improvement. KPIs also support core business strategies such as evidence-based decision making and accountability.

With quantitative insight into vulnerability management processes, organizations can better gauge their progress and more accurately evaluate their cyber security risk posture. By aggregating an appropriate set of KPIs, organizations can track the maturity of their vulnerability management activities, identify gaps in the controls, policies, and procedures that limit the effectiveness and efficiency of their vulnerability remediation, and ensure compliance with internal risk requirements as well as relevant security standards, laws and regulations.

References

National Institute of Standards and Technology. Measurement Guide for Information Security: Volume 1 — Identifying and Selecting Measures. NIST, January 2024, https://csrc.nist.gov/pubs/sp/800/55/v1/ipd

National Institute of Standards and Technology. Performance Measurement Guide for Information Security, Revision 2. NIST, November 2022, https://csrc.nist.gov/pubs/sp/800/55/r2/iwd

National Institute of Standards and Technology. Assessing Security and Privacy Controls in Information Systems and Organizations Revision 5. NIST, January 2022, https://csrc.nist.gov/pubs/sp/800/53/a/r5/final

National Institute of Standards and Technology. Guide for Conducting Risk Assessments Revision 1. NIST, September 2012, https://csrc.nist.gov/pubs/sp/800/30/r1/final

National Institute of Standards and Technology. Guide to Enterprise Patch Management Planning: Preventive Maintenance for Technology Revision 4. NIST, April 2022, https://csrc.nist.gov/pubs/sp/800/40/r4/final

SANS Institute. A SANS 2021 Report: Making Visibility Definable and Measurable. SANS Institute, June 2021, https://www.sans.org/webcasts/2021-report-making-visibility-definable-measurable-119120/

SANS Institute. A Guide to Security Metrics. SANS Institute, June 2006, https://www.sans.org/white-papers/55/

The German implementation of the EU’s NIS2 directive is taking ever clearer shape: at the end of July, the NIS2 Implementation Act passed the German government’s cabinet, and a final decision in the Bundestag is imminent. For all companies and authorities wondering whether this concerns them, the BSI has now launched a comprehensive website with an impact assessment and valuable information under the catchy hashtag #nis2know.

Even though the Bundestag resolution has not yet been passed, and the originally planned October date may therefore no longer be feasible, companies must prepare now, the Federal Office for Information Security (BSI) insists. The BSI is therefore providing companies and organizations of all kinds with an eight-part questionnaire (in German only) to help IT managers and executives find out whether the strict NIS2 regulations also apply to them. For all companies and organizations that fall under the NIS2 regulation, the BSI also provides further assistance and answers to the question of what they can do now, in advance of NIS2 coming into force.

High need, high demand

Demand appears to be high, with both BSI head Claudia Plattner and Federal CIO Markus Richter reporting success in the form of several thousand hits in the first few days (for example on LinkedIn: Plattner, Richter). The NIS2 impact check can be found directly on the BSI website. There you will find “specific questions based on the directive to classify your company”. The questions are “kept short and precise and are explained in more detail in small print if necessary”. Anyone filling out the BSI’s questionnaire will know within minutes whether their company or organization is affected by NIS2.

In the questions, the respondent must address whether their company is the operator of a critical facility, a provider of publicly accessible telecommunications services or public telecommunications networks, a qualified trust service provider, a top-level domain name registry or a DNS service provider. Even if the company is a non-qualified trust service provider or offers goods and services that fall under one of the types of facilities specified in Annex 1 or 2 of the NIS 2 Directive, it is affected by the NIS 2 regulations.

Anybody who can answer all questions with “No” is not affected by NIS2. For everyone else, however, the BSI offers extensive help and research options on what to do next. An FAQ list of nine questions explains the current status in detail, including whether you should wait or start preparing already. Links to sources and contacts can be found there, as well as further information on the impact checks and explanations of terms (for example: what do “important”, “essential” and “particularly important” mean in the context of NIS2?). Also very important are the sections that explain which obligations and evidence affected companies must provide, when and where, as well as the still open question of when NIS2 becomes binding.

The BSI’s wealth of information also includes support services for businesses, as well as clear instructions for the next steps and basic explanations on critical infrastructures (KRITIS) in general.

Take action now, despite waiting for the Bundestag

The national implementation of the European NIS2 Directive, heatedly debated in some quarters, was recently delayed due to major differences of opinion between the parties involved, so the previously expected date had to be postponed. The Federal Ministry of the Interior had already confirmed weeks ago that the law would not come into force in October.

Irrespective of the wait for the Bundestag, those affected should take action now, writes the BSI: responsible persons and teams must be appointed, roles and tasks must be defined, but also an inventory is to be taken and processes are to be set up for continuous improvement. Preparing for the upcoming reporting obligation should be a top priority.

Extensive information also from Greenbone

Greenbone has also devoted numerous blog posts and guides to the topic of NIS2 in recent months, from the Cyber Resilience Act and the threat situation for municipalities to effective measures and essentially everything you need to know about NIS2 right now.

Ransomware, phishing, denial of service attacks: according to a recent study, 84 per cent of the companies surveyed are concerned about the security of their IT systems and see a further increase in the threat situation. For good reason, as companies are also concerned about outdated code, data theft by employees, inadequate protection of company […]

IT security teams don’t necessarily need to know what CSAF is, but familiarity with what’s happening “under the hood” of a vulnerability management platform can give context to how next-gen vulnerability management is evolving and to the advantages of automated vulnerability management. In this article, we take an introductory journey through CSAF 2.0: what it is and how it seeks to benefit enterprise vulnerability management.

Greenbone AG is an official partner of the German Federal Office for Information Security (BSI) to integrate technologies that leverage the CSAF 2.0 standard for automated cybersecurity advisories.

What is CSAF?

The Common Security Advisory Framework (CSAF) 2.0 is a standardized, machine-readable vulnerability advisory format. CSAF 2.0 enables the upstream cybersecurity intelligence community, including software and hardware vendors, governments, and independent researchers to provide information about vulnerabilities. Downstream, CSAF allows vulnerability information consumers to aggregate security advisories from a decentralized group of providers and automate risk assessment with more reliable information and less resource overhead.

By providing a standardized machine readable format, CSAF represents an evolution towards “next-gen” automated vulnerability management which can reduce the burden on IT security teams facing an ever increasing number of CVE disclosures, and improve risk-based decision making in the face of an “ad-hoc” approach to vulnerability intelligence sharing.

CSAF 2.0 is the replacement for the Common Vulnerability Reporting Framework (CVRF) v1.2 and extends its predecessor’s capabilities to offer greater flexibility.

Here are the key takeaways:

  • CSAF is an international open standard for machine readable vulnerability advisory documents that uses the JSON format.
  • CSAF aggregation is a decentralized model of distributing vulnerability information.
  • CSAF 2.0 is designed to enable next-gen automated enterprise vulnerability management.

The Traditional Process of Vulnerability Management

Traditional vulnerability management is difficult for large organizations with complex IT environments. The number of CVEs published each patch cycle has been increasing at an unmanageable pace [1][2]. In a traditional vulnerability management process, IT security teams collect vulnerability information manually via Internet searches. The process therefore involves extensive manual effort to collect, analyze, and organize information from a variety of sources and ad-hoc document formats.

These sources typically include:

  • Vulnerability tracking databases such as NIST NVD
  • Product vendor security advisories
  • National and international CERT advisories
  • CVE numbering authority (CNA) assessments
  • Independent security research
  • Security intelligence platforms
  • Exploit code databases

The ultimate goal of conducting a well-informed risk assessment can be confounded during this process in several ways. Advisories, even those provided by the product vendor themselves, are often incomplete and come in a variety of non-standardized formats. This lack of cohesion makes data-driven decision making difficult and increases the probability of error.

Let’s briefly review the existing vulnerability information pipeline from both the creator and consumer perspectives:

The Vulnerability Disclosure Process

Common Vulnerability and Exposure (CVE) records published in the National Vulnerability Database (NVD) of the NIST (National Institute of Standards and Technology) represent the world’s most centralized global repository of vulnerability information. Here is an overview of how the vulnerability disclosure process works:

  1. Product vendors become aware of a security vulnerability through their own security testing or from independent security researchers, triggering their internal vulnerability disclosure policy. In other cases, independent security researchers may interact directly with a CVE Numbering Authority (CNA) to publish the vulnerability without prior consultation with the product vendor.
  2. Vulnerability aggregators such as NIST NVD and national CERTs create unique tracking IDs (such as a CVE ID) and add the disclosed vulnerability to a centralized database where product users and vulnerability management platforms such as Greenbone can become aware and track progress.
  3. Various stakeholders such as the product vendor, NIST NVD and independent researchers publish advisories that may or may not include remediation information, expected dates for official patches, a list of affected products, CVSS impact assessment and severity ratings, Common Platform Enumeration (CPE) or Common Weakness Enumeration (CWE).
  4. Other cyber-threat intelligence providers such as CISA’s Known Exploited Vulnerabilities (KEV) and First.org’s Exploit Prediction Scoring System (EPSS) provide additional risk context.

The Vulnerability Management Process

Product users are responsible for ingesting vulnerability information and applying it to mitigate the risk of exploitation. Here is an overview of the traditional enterprise vulnerability management process:

  1. Product users need to manually search CVE databases and monitor security advisories that pertain to their software and hardware assets, or utilize a vulnerability management platform such as Greenbone, which automatically aggregates the available ad-hoc threat advisories.
  2. Product users must match the available information to their IT asset inventory. This typically involves maintaining an asset inventory and conducting manual matching, or using a vulnerability scanning product to automate the process of building an asset inventory and executing vulnerability tests.
  3. IT security teams prioritize the discovered vulnerabilities according to the contextual risk presented to critical IT systems, business operations, and in some cases public safety.
  4. Remediation tasks are assigned according to the final risk assessment and available resources.

What is Wrong with Traditional Vulnerability Management?

Traditional or manual vulnerability management processes are operationally complex and lack efficiency. Aside from the operational difficulties of implementing software patches, the lack of accessible and reliable information bogs down efforts to effectively triage and remediate vulnerabilities. Using CVSS alone to assess risk has also been criticized [1][2] for lacking sufficient context to satisfy robust risk-based decision making. Although vulnerability management platforms such as Greenbone greatly reduce the burden on IT security teams, the overall process is still often plagued by time-consuming manual aggregation of ad-hoc vulnerability advisories that can often result in incomplete information.

Especially in the face of an ever increasing number of vulnerabilities, aggregating ad-hoc security information risks being too slow and introduces more human error, increasing vulnerability exposure time and confounding risk-based vulnerability prioritization.

Lack of Standardization Results in Ad-hoc Intelligence

The current vulnerability disclosure process lacks a formal method of distinguishing between reliable vendor-provided information and information provided by arbitrary independent security researchers such as partner CNAs. In fact, the official CVE website itself promotes the low requirements for becoming a CNA. This results in a large number of CVEs being issued without detailed context, forcing extensive manual enrichment downstream.

Which information is included depends on the CNA’s discretion and there is no way to classify the reliability of the information. As a simple example of the problem, the affected products in an ad-hoc advisory are often provided using a wide range of descriptors that need to be manually interpreted. For example:

  • Version 8.0.0 – 8.0.1
  • Version 8.1.5 and later
  • Version <= 8.1.5
  • Versions prior to 8.1.5
  • All versions < V8.1.5
  • 0, V8.1, V8.1.1, V8.1.2, V8.1.3, V8.1.4, V8.1.5
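All of these ad-hoc descriptors can be replaced by an explicit, machine-checkable range. The sketch below mimics the spirit of a half-open version range (in the style of the `vers` version range specifier) for simple dotted version numbers; it is not an implementation of the actual `vers` grammar, and the version strings are invented.

```python
def parse(v):
    """Turn a dotted version like '8.1.5' into (8, 1, 5) for component-wise comparison."""
    return tuple(int(part) for part in v.split("."))

def affected(version, lower=None, upper=None):
    """Half-open range check: lower <= version < upper (either bound may be open)."""
    v = parse(version)
    if lower is not None and v < parse(lower):
        return False
    if upper is not None and v >= parse(upper):
        return False
    return True

# "All versions < V8.1.5" expressed as an explicit range:
print(affected("8.1.4", upper="8.1.5"))  # True
print(affected("8.1.5", upper="8.1.5"))  # False

# "Version 8.0.0 - 8.0.1" as a half-open range [8.0.0, 8.0.2):
print(affected("8.0.1", lower="8.0.0", upper="8.0.2"))  # True
```

The point is that a machine can evaluate such a range in microseconds, whereas the prose variants listed above each need a human to interpret them.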

Scalability

Because vendors, assessors (CNAs), and aggregators utilize various distribution methods and formats for their advisories, the challenge of efficiently tracking and managing vulnerabilities becomes operationally complex and difficult to scale. Furthermore, the increasing rate of vulnerability disclosure exacerbates manual processes, overwhelms security teams, and increases the risk of error or delay in remediation efforts.

Difficult to Assess Risk Context

NIST SP 800-40r4 “Guide to Enterprise Patch Management Planning” Section 3 advises the application of enterprise level vulnerability metrics. Because risk ultimately depends on each vulnerability’s context – factors such as affected systems, potential impact, and exploitability – the current environment of ad-hoc security intelligence presents a significant barrier to robust risk-based vulnerability management.

How Does CSAF 2.0 Solve These Problems?

CSAF documents are essential cyber threat advisories designed to optimize the vulnerability information supply chain. Instead of manually aggregating ad-hoc vulnerability data, product users can automatically aggregate machine-readable CSAF advisories from trusted sources into an Advisory Management System that combines core vulnerability management functions of asset matching and risk assessment. In this way, security content automation with CSAF aims to address the challenges of traditional vulnerability management by providing more reliable and efficient security intelligence, creating the potential for next-gen vulnerability management.

Here are some specific ways that CSAF 2.0 solves the problems of traditional vulnerability management:

More Reliable Security Information

CSAF 2.0 remedies the crux of ad-hoc security intelligence by standardizing several aspects of a vulnerability disclosure. For example, the affected version specifier fields allow standardized data such as Version Range Specifier (vers), Common Platform Enumeration (CPE), Package URL specification, CycloneDX SBOM as well as the product’s common name, serial number, model number, SKU or file hash to identify affected product versions.

In addition to standardizing product versions, CSAF 2.0 also supports Vulnerability Exploitability eXchange (VEX) for product vendors, trusted CSAF providers, or independent security researchers to explicitly declare product remediation status. VEX provides product users with recommendations for remedial actions.

The explicit VEX status declarations are:

  • Not affected: No remediation is required for this vulnerability.
  • Affected: Actions are recommended to remediate or address this vulnerability.
  • Fixed: These product versions contain a fix for this vulnerability.
  • Under investigation: It is not yet known whether these product versions are affected by this vulnerability; an update will be provided in a later release.
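Because CSAF 2.0 documents are plain JSON, matching these status declarations against an asset inventory can be automated in a few lines. The fragment below is modeled on the `product_status` object of a CSAF 2.0 advisory; the CVE number and product IDs are invented for illustration.

```python
import json

# Minimal fragment modeled on the CSAF 2.0 VEX "product_status" structure.
advisory = json.loads("""
{
  "vulnerabilities": [{
    "cve": "CVE-2024-12345",
    "product_status": {
      "fixed": ["CSAFPID-0002"],
      "known_affected": ["CSAFPID-0001"],
      "known_not_affected": ["CSAFPID-0003"]
    }
  }]
}
""")

# Products present in our (hypothetical) asset inventory
my_products = {"CSAFPID-0001", "CSAFPID-0003"}

# Map each of our products to the VEX status declared for it
statuses = {}
for vuln in advisory["vulnerabilities"]:
    for status, products in vuln["product_status"].items():
        for product in my_products.intersection(products):
            statuses[product] = status

print(statuses)
# {'CSAFPID-0001': 'known_affected', 'CSAFPID-0003': 'known_not_affected'}
```

A real asset matching system would resolve the `CSAFPID` identifiers through the advisory’s product tree and a CPE or SBOM inventory, but the core lookup is this simple because the format is standardized.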

More Effective Use of Resources

CSAF enables several upstream and downstream optimizations to the traditional vulnerability management process. The OASIS CSAF 2.0 documentation includes descriptions of several compliance goals that enable cybersecurity administrators to automate their security operations for more efficient use of resources.

Here are some compliance targets referenced in the CSAF 2.0 documentation that support more effective use of resources above and beyond the traditional vulnerability management process:

  • Advisory Management System: A software system that consumes data and produces CSAF 2.0 compliant advisory documents. This allows CSAF producing teams to assess the quality of data being ingested at a point in time, verify, convert, and publish it as a valid CSAF 2.0 security advisory. This allows CSAF producers to optimize the efficiency of their information pipeline while verifying accurate advisories are published.
  • CSAF Management System: A program that can manage CSAF documents and is able to display their details as required by CSAF viewer. At the most fundamental level, this allows both upstream producers and downstream consumers of security advisories to view their content in a human readable format.
  • CSAF Asset Matching System / SBOM Matching System: A program that integrates with a database of IT assets including Software Bill of Materials (SBOM) and can match assets to any CSAF advisories. An asset matching system serves to provide a CSAF consuming organization with visibility into their IT infrastructure, identify where vulnerable products exist, and optimally provide automated risk assessment and remediation information.
  • Engineering System: A software analysis environment within which analysis tools execute. An engineering system might include a build system, a source control system, a result management system, a bug tracking system, a test execution system and so on.

Decentralized Cybersecurity Information

A recent outage of the NIST National Vulnerability Database (NVD) CVE enrichment process demonstrates how reliance on a single source of vulnerability information can be risky. CSAF is decentralized, allowing downstream vulnerability consumers to source and integrate information from a variety of sources. This decentralized model of intelligence sharing is more resilient to an outage by one information provider, while sharing the burden of vulnerability enrichment more effectively distributes the workload across a wider set of stakeholders.

Enterprise IT product vendors such as RedHat and Cisco have already created their own CSAF and VEX feeds, while government cybersecurity agencies and national CERT programs such as the German Federal Office for Information Security (BSI) and the US Cybersecurity & Infrastructure Security Agency (CISA) have also developed CSAF 2.0 sharing capabilities.

The decentralized model also allows for multiple stakeholders to weigh in on a particular vulnerability providing downstream consumers with more context about a vulnerability. In other words, an information gap in one advisory may be filled by an alternative producer that provides the most accurate assessment or specialized analysis.

Improved Risk Assessment and Vulnerability Prioritization

Overall, the benefits of CSAF 2.0 contribute to more accurate and efficient risk assessment, prioritization and remediation efforts. Product vendors can directly publish reliable VEX advisories giving cybersecurity decision makers more timely and trustworthy remediation information. Also, the aggregate severity (aggregate_severity) object in CSAF 2.0 acts as a vehicle to convey reliable urgency and criticality information for a group of vulnerabilities, enabling a more unified risk analysis, and more data driven prioritization of remediation efforts, reducing the exposure time of critical vulnerabilities.

Summary

Traditional vulnerability management processes are plagued by lack of standardization resulting in reliability and scalability issues and increasing the difficulty of assessing risk context and the likelihood of error.

The Common Security Advisory Framework (CSAF) 2.0 seeks to revolutionize the existing process of vulnerability management by enabling more reliable, automated vulnerability intelligence gathering. By providing a standardized machine-readable format for sharing cybersecurity vulnerability information, and decentralizing its source, CSAF 2.0 empowers organizations to harness more reliable security information to achieve more accurate, efficient, and consistent vulnerability management operations.


“Your company can be ruined in just 62 minutes”: This is how the security provider CrowdStrike advertises. Now the US manufacturer has itself caused an estimated multi-billion-dollar loss due to a faulty product update – at breakneck speed.

On 19 July at 04:09 (UTC), the security specialist CrowdStrike distributed a driver update for its Falcon software for Windows PCs and servers. Just 159 minutes later, at 06:48 UTC, Google Compute Engine reported the problem, which “only” affected certain Windows computers and servers running CrowdStrike Falcon software.

Almost five per cent of global air traffic was unceremoniously paralysed as a result, and 5,000 flights had to be cancelled. Supermarkets from Germany to New Zealand had to close because the checkout systems failed. A third of all Japanese McDonald’s branches closed their doors at short notice. Among the US authorities affected were the Department of Homeland Security, NASA, the Federal Trade Commission, the National Nuclear Security Administration and the Department of Justice. In the UK, even most doctors’ surgeries were affected.

The problem

The incident points to a burning problem: the centralisation of services and the increasing networking of the IT systems behind them make us vulnerable. If one service provider in the digital supply chain is affected, the entire chain can break, leading to large-scale outages. As a result, the Microsoft Azure cloud was also affected, with thousands of virtual servers unsuccessfully attempting to restart. Prominent victims reacted quite clearly: Elon Musk, for example, wants to ban CrowdStrike products from all his systems.

More alarming, however, is the fact that security software is being used in areas for which it is not intended. Although the manufacturer advertises quite drastically about the threat posed by third parties, it accepts no responsibility for the problems that its own products can cause and their consequential damage. CrowdStrike expressly advises against using the solutions in critical areas in its terms and conditions. It literally states – and in capital letters: “THE OFFERINGS AND CROWDSTRIKE TOOLS ARE NOT FAULT-TOLERANT AND ARE NOT DESIGNED OR INTENDED FOR USE IN ANY HAZARDOUS ENVIRONMENT.”

The question of liability

Not suitable for critical infrastructures, but often used there: How can this happen? Negligent errors with major damage, but no liability on the part of the manufacturer: How can this be?

In the context of open source, it is often incorrectly argued that the question of liability in the event of malfunctions and risks is unresolved, even though most manufacturers who place open source on the market with their products do provide a warranty.

We can do a lot to make things better by tackling the problems caused by poor quality and dependence on individual large manufacturers. Of course, an open source supply chain is viewed critically, and that’s a good thing. But it has clear advantages over a proprietary supply chain, and this incident is a striking example. With appropriate toolchains, it is easy to prevent an open source company from rolling out a scheduled update in which basic components simply do not work – and in practice, that is exactly what happens.

The consequences

So what can we learn from this disaster and what are the next steps to take? Here are some suggestions:

  1. Improve quality: The best lever for putting pressure on manufacturers is to increase the motivation for quality via stricter liability. The Cyber Resilience Act (CRA) offers initial approaches here.
  2. Safety first: In this case, this rule relates primarily to the technical approach to product development. Intervening deeply in customer systems is controversial in terms of security. Many customers reject it, but those affected obviously did not (yet), and they have now suffered the damage. There are alternatives, which are also based on open source.
  3. Use software only as intended: If a manufacturer advises against use in a critical environment, this is not just a phrase in the general terms and conditions, but a reason for exclusion.
  4. Centralisation with a sense of proportion: The advantages and disadvantages of centralising the digital supply chain need to be weighed against each other. When dependency meets a lack of trustworthiness, risks and damage arise. User authorities and companies then stand helplessly in the queue, without alternatives and without their own sovereignty.

Why is Greenbone not a security provider like any other? How did Greenbone come about, and what impact does Greenbone’s long history have on the quality of its vulnerability scanners and the security of its customers? The new video “Demystify Greenbone” provides answers to these questions in a twelve-minute overview. It shows why experts need […]