
OpenVAS began in 2005 when Nessus transitioned from open source to a proprietary license. Two companies, Intevation and DN Systems, adopted the existing project and began evolving and maintaining it under a GPL v2.0 license. Since then, OpenVAS has evolved into Greenbone, the most widely used and most highly regarded open-source vulnerability scanner and vulnerability management solution in the world. We are proud to offer Greenbone both as a free Community Edition for developers and as a range of enterprise products featuring our Greenbone Enterprise Feed, serving the public sector and private enterprises alike.

As the “old dog” on the block, Greenbone is hip to the marketing games that cybersecurity vendors like to play. However, our own goals remain steadfast: to share the truth about our product and our industry-leading vulnerability test coverage. So, when we reviewed a recent 2024 network vulnerability scanner benchmark report published by a competitor, we were a little shocked, to say the least.

As the most recognized open-source vulnerability scanner, it makes sense that Greenbone was included in the competition for top dog. However, while we are honored to be part of the test, some facts made us scratch our heads. You might say we have a “bone to pick” about the results. Let’s jump into the details.

What the 2024 Benchmark Results Found

The 2024 benchmark test conducted by Pentest-Tools ranked leading vulnerability scanners according to two factors: Detection Availability (the CVEs each scanner has detection tests for) and Detection Accuracy (how effective their detection tests are).

The benchmark pitted our free Community Edition of Greenbone and the Greenbone Community Feed against the products of other vendors: Qualys, Rapid7, Tenable, Nuclei, Nmap, and Pentest-Tools’ own product. The report ranked Greenbone 5th in Detection Availability and roughly tied for 4th place in Detection Accuracy. Not bad for going up against titans of the cybersecurity industry.

The only problem is, as mentioned above, Greenbone has an enterprise product too, and when the results are recalculated using our Greenbone Enterprise Feed, the findings are starkly different – Greenbone wins hands down.

Here Is What We Found

[Image: Bar chart from the 2024 benchmark for network vulnerability scanners – Greenbone Enterprise achieves the highest values with 78% Detection Availability and 61% Detection Accuracy]

Our Enterprise Feed Detection Availability Leads the Pack

According to our own internal findings, which can be verified using our SecInfo Portal, the Greenbone Enterprise Feed has detection tests for 129 of the 164 CVEs included in the test. This means our Enterprise product’s Detection Availability is a staggering 70.5% higher than reported, placing us head and shoulders above the rest.

To be clear, the Greenbone Enterprise Feed tests aren’t something we added on after the fact. Greenbone updates both our Community and Enterprise Feeds on a daily basis and we are often the first to release vulnerability tests when a CVE is published. A review of our vulnerability test coverage shows they have been available from day one.

Our Detection Accuracy Was Far Underrated

And another thing: Greenbone isn’t like those other scanners. The way Greenbone is designed gives it strong, industry-leading advantages. For one, our scanner can be controlled via API, allowing users to develop their own custom tools and control all the features of Greenbone in any way they like. Secondly, our Quality of Detection (QoD) ranking doesn’t even exist on most other vulnerability scanners.

The report author made it clear they simply used the default configuration for each scanner. However, without applying Greenbone’s QoD filter properly, the benchmark test failed to fairly assess Greenbone’s true CVE detection rate. Applying these findings, Greenbone again comes out ahead of the pack, detecting an estimated 112 out of the 164 CVEs.

Summary

While we were honored that our Greenbone Community Edition ranked 5th in Detection Availability and tied for 4th in Detection Accuracy in a recently published network vulnerability scanner benchmark, these results fail to consider the true power of the Greenbone Enterprise Feed. It stands to reason that our Enterprise product should be in the running. After all, the benchmark included enterprise offerings from other vendors.

When recalculated using the Enterprise Feed, Greenbone’s Detection Availability leaps to 129 of the 164 CVEs on the test, 70.5% above what was reported. Also, using the default settings fails to account for Greenbone’s Quality of Detection (QoD) feature. When adjusted for these oversights, Greenbone ranks at the forefront of the competition. As the most widely used open-source vulnerability scanner in the world, Greenbone continues to lead in vulnerability coverage, timely publication of vulnerability tests, and truly enterprise-grade features such as a flexible API architecture, advanced filtering, and Quality of Detection scores.

Every business has mission-critical activities. Security controls are meant to protect those critical activities so that business operations and strategic goals can be sustained indefinitely. An “install and forget” approach to security provides few assurances for achieving these objectives. An ever-changing digital landscape means a security gap could lead to a high-stakes data breach. Things like privilege creep, server sprawl, and configuration errors tend to pop up like weeds. Security teams who don’t continuously monitor don’t catch them – attackers do. For this reason, cyber security frameworks tend to be iterative processes that include monitoring, auditing, and continuous improvement.

Security officers should be asking: What does our organization need to measure to gain strong assurances and enable continuous improvement? In this article we will take you through a rationale for Key Performance Indicators (KPIs) in cyber security as outlined by industry leaders such as NIST and The SANS Institute, and define a core set of vulnerability-management-specific KPIs. The most fundamental KPIs covered here can serve as a starting point for organizations implementing a vulnerability management program from scratch, while the more advanced measures can provide depth of visibility for organizations with mature vulnerability management programs already in place.

Cyber Security KPIs Support Core Strategic Business Goals

KPIs are generated by collecting and analyzing relevant performance data and mainly serve two strategic goals. The first is to facilitate evidence-based decision making. For example, KPIs can help managers benchmark how vulnerability management programs are performing in order to assess the overall level of risk mitigation and decide whether to allocate more resources or accept the status quo. The second core strategic goal that KPIs support is accountability for security activities. KPIs can help identify causes of poor performance and provide an early warning of insufficient or poorly implemented security controls. With proper monitoring of vulnerability management performance, the effectiveness of existing procedures can be evaluated, allowing them to be adjusted or supplemented with additional controls. The evidence collected while generating KPIs can also be used to demonstrate compliance with internal policies, mandatory or voluntary cyber security standards, or any applicable laws and regulations by evidencing cyber security program activities.

The scope of measuring KPIs can be enterprise-wide or focused on departments or infrastructure that is critical to business operations. This scope can also be adjusted as a cybersecurity program matures. During the initial stages of a vulnerability management program, only basic information may be available to build KPI metrics from. However, as a program matures, data collection will become more robust, supporting more complex KPI metrics. More advanced measures may also be justified to gain higher visibility in organizations with increased risk.

Types of Cyber Security Measures

NIST SP 800-55 V1 (and its predecessor NIST SP 800-55 r2) focuses on the development and collection of three types of measures:

  • Implementation Measures: These measure the execution of security policy and gauge the progress of implementation. Examples include the total number of information systems scanned and the percentage of critical systems scanned for vulnerabilities.
  • Effectiveness/Efficiency Measures: These measure the results of security activities and monitor program-level and system-level processes. This can help gauge if security controls are implemented correctly, operating as intended, and producing a desirable outcome. For example, the percentage of all identified critical severity vulnerabilities that have been mitigated across all operationally critical infrastructure.
  • Impact Measures: These measure the business consequences of security activities such as cost savings, costs incurred by addressing security vulnerabilities, or other business related impacts of information security.

Important Indicators for Vulnerability Management

Since vulnerability management is fundamentally the process of identifying and remediating known vulnerabilities, KPIs that provide insight into the detection and remediation of known threats are most appropriate. In addition to these two key areas, assessing a particular vulnerability management tool’s effectiveness for detecting vulnerabilities can help compare different products. Since these are the most logical ways to evaluate vulnerability management activities, our list groups KPIs into these three categories. Tags are also added to each item indicating which purpose specified in NIST SP 800-55 the metric satisfies.

While not an exhaustive list, here are some key KPIs for vulnerability management:

Detection Performance Metrics

  • Scan Coverage (Implementation): This measures the percentage of an organization’s total assets that are being scanned for vulnerabilities. Scan coverage is especially relevant at the early stages of program implementation for setting targets and measuring the evolving maturity of the program. Scan coverage can also be used to identify gaps in an organization’s IT infrastructure that are not being scanned, putting them at increased risk.
  • Mean Time to Detect (MTTD) (Efficiency): This measures the average time between when vulnerability information is first published and when a security control identifies it. MTTD may be improved by updating a vulnerability scanner’s modules more frequently or by conducting scans more often. A minimal calculation sketch follows this list.
  • Unidentified Vulnerabilities Ratio (Effectiveness): The ratio of vulnerabilities identified proactively through scans versus those discovered through breach or incident post-mortem analyses. A higher ratio suggests better proactive detection capabilities.
  • Automated Discovery Rate (Efficiency): This metric measures the percentage of vulnerabilities identified by automated tools versus manual discovery methods. Higher automation can lead to more consistent and faster detection.
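
To make the detection metrics concrete, here is a minimal calculation sketch in Python. The asset counts, record structure, and dates are illustrative assumptions, not values from any standard or specific product API.

```python
# Minimal sketch: computing Scan Coverage and Mean Time to Detect (MTTD).
# All numbers and record structures are illustrative assumptions.
from datetime import datetime

assets_total = 500        # assets in the inventory
assets_scanned = 430      # assets covered by vulnerability scans

scan_coverage = assets_scanned / assets_total * 100
print(f"Scan Coverage: {scan_coverage:.1f}%")        # 86.0%

# Each record: (date vulnerability info was published, date a scan detected it)
detections = [
    (datetime(2024, 3, 1), datetime(2024, 3, 4)),
    (datetime(2024, 3, 10), datetime(2024, 3, 11)),
    (datetime(2024, 3, 15), datetime(2024, 3, 22)),
]

mttd_days = sum((found - published).days for published, found in detections) / len(detections)
print(f"MTTD: {mttd_days:.1f} days")                 # 3.7 days
```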

Remediation Performance Metrics

  • Mean Time to Remediate (MTTR; Efficiency): This measures the average time taken to fix vulnerabilities after they are detected. By tracking remediation times, organizations can gauge their responsiveness to security threats and evaluate the risk posed by exposure time. A shorter MTTR generally indicates a more agile security operation (see the calculation sketch after this list).
  • Remediation Coverage (Effectiveness): This metric represents the proportion of detected vulnerabilities that have been successfully remediated and serves as a critical indicator of effectiveness in addressing identified security risks. Remediation coverage can be adjusted to specifically reflect the rate of closing critical or high severity security gaps. By focusing on the most dangerous vulnerabilities first, security teams can more effectively minimize risk exposure.
  • Risk Score Reduction (Impact): This metric reflects the overall impact that vulnerability management activities are having on risk. By monitoring changes in the risk score, managers can evaluate how well the threat posed by exposed vulnerabilities is being managed. Risk Score Reduction is typically calculated using risk assessment tools that provide a contextual view of each organization’s unique IT infrastructure and risk profile.
  • Rate of Compliance (Impact): This metric represents the percentage of systems that comply with specific cyber security regulations, standards, or internal policies. It serves as an essential measure for gauging compliance status and provides evidence of this status to various stakeholders. It also serves as a warning if compliance requirements are not being satisfied, reducing the risk of penalties and helping to achieve the security posture the compliance target intends.
  • Vulnerability Reopen Rate (Efficiency): This metric measures the percentage of vulnerabilities that are reopened after being marked as resolved. The reopen rate indicates the efficiency of remediation efforts. Ideally, once a remediation ticket has been closed, the vulnerability should not resurface and trigger another ticket.
  • Cost of Remediation (Impact): This metric measures the total cost associated with fixing detected vulnerabilities, encompassing both direct and indirect expenses. Cost analysis can aid decisions for budgeting and resource allocation by tracking the amount of time and resources required to detect and apply remediation.
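
The remediation metrics reduce to similar ratios and averages over ticket data. This sketch assumes a hypothetical list of ticket records with invented field names; a real implementation would pull these values from a ticketing or vulnerability management system.

```python
# Minimal sketch: MTTR, Remediation Coverage, and Reopen Rate from tickets.
# The ticket structure and values are illustrative assumptions.
tickets = [
    {"detected_day": 0, "remediated_day": 5,    "reopened": False},
    {"detected_day": 2, "remediated_day": 9,    "reopened": True},
    {"detected_day": 4, "remediated_day": None, "reopened": False},  # still open
]

closed = [t for t in tickets if t["remediated_day"] is not None]

mttr_days = sum(t["remediated_day"] - t["detected_day"] for t in closed) / len(closed)
remediation_coverage = len(closed) / len(tickets) * 100
reopen_rate = sum(t["reopened"] for t in closed) / len(closed) * 100

print(f"MTTR: {mttr_days:.1f} days")                          # 6.0 days
print(f"Remediation Coverage: {remediation_coverage:.0f}%")   # 67%
print(f"Vulnerability Reopen Rate: {reopen_rate:.0f}%")       # 50%
```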

Vulnerability Scanner Effectiveness Metrics

  • True Positive Detection Rate (Effectiveness): This measures the percentage of vulnerabilities that can be accurately detected by a particular tool. True positive detection rate measures the effective coverage of a vulnerability scanning tool and allows two vulnerability scanning products to be compared according to their relative value.
  • False Positive Detection Rate (Effectiveness): This metric measures the frequency at which a tool incorrectly identifies non-existent vulnerabilities as being present. This can lead to wasted resources and effort. False positive detection rate can gauge the reliability of a vulnerability scanning tool to ensure it aligns with operational requirements.

Key Takeaways

By generating and analyzing Key Performance Indicators (KPIs), organizations can satisfy fundamental cybersecurity requirements for continuous monitoring and improvement. KPIs also support core business strategies such as evidence-based decision making and accountability.

With quantitative insight into vulnerability management processes, organizations can better gauge their progress and more accurately evaluate their cyber security risk posture. By aggregating an appropriate set of KPIs, organizations can track the maturity of their vulnerability management activities, identify gaps in controls, policies, and procedures that limit the effectiveness and efficiency of their vulnerability remediation, and ensure compliance with internal risk requirements and relevant security standards, laws, and regulations.

References

National Institute of Standards and Technology. Measurement Guide for Information Security: Volume 1 — Identifying and Selecting Measures. NIST, January 2024, https://csrc.nist.gov/pubs/sp/800/55/v1/ipd

National Institute of Standards and Technology. Performance Measurement Guide for Information Security, Revision 2. NIST, November 2022, https://csrc.nist.gov/pubs/sp/800/55/r2/iwd

National Institute of Standards and Technology. Assessing Security and Privacy Controls in Information Systems and Organizations, Revision 5. NIST, January 2022, https://csrc.nist.gov/pubs/sp/800/53/a/r5/final

National Institute of Standards and Technology. Guide for Conducting Risk Assessments, Revision 1. NIST, September 2012, https://csrc.nist.gov/pubs/sp/800/30/r1/final

National Institute of Standards and Technology. Guide to Enterprise Patch Management Planning: Preventive Maintenance for Technology, Revision 4. NIST, April 2022, https://csrc.nist.gov/pubs/sp/800/40/r4/final

SANS Institute. A SANS 2021 Report: Making Visibility Definable and Measurable. SANS Institute, June 2021, https://www.sans.org/webcasts/2021-report-making-visibility-definable-measurable-119120/

SANS Institute. A Guide to Security Metrics. SANS Institute, June 2006, https://www.sans.org/white-papers/55/

Getting NIS2 Implementation on Track!

The deadline for the implementation of NIS2 is approaching – by October 17, 2024, stricter cybersecurity measures are to be transposed into law in Germany via the NIS2 Implementation Act. Other member states will develop their own legislation based on EU Directive 2022/2555. We have taken a close look at this directive and compiled the most important pointers and signposts for the entry into force of NIS2 in a short video. In the video, you will find out whether your company is affected, what measures you should definitely take, which cybersecurity topics you need to pay particular attention to, whom you can consult in this regard, and what the consequences of non-compliance are.

[Video preview image: “What you need to know about NIS2”, with European star circle and NIS2 lettering – links to YouTube]

Learn about the Cyber Resilience Act, which provides a solid framework to strengthen your organization’s resilience against cyberattacks. The ENISA Common Criteria will help you assess the security of your IT products and systems and take a risk-minimizing approach right from the development stage. Also prioritize the introduction of an information management system, for example by implementing ISO 27001 certification for your company. Seek advice about IT baseline protection from specialists recommended by the BSI or your local responsible office.

In addition to the BSI as a point of contact for matters relating to NIS2, we are happy to assist you and offer certified solutions in the areas of vulnerability management and penetration testing. By taking a proactive approach, you can identify security gaps in your systems at an early stage and secure them before they can be used for an attack. Our vulnerability management solution automatically scans your system for weaknesses and reports back to you regularly. During penetration testing, a human tester attempts to penetrate your system to give you final assurance about the attack surface of your systems.

You should also make it a habit to stay up to date with regular cybersecurity training and establish a lively exchange with other companies affected by NIS2. This is the only way for NIS2 to lead to a sustainable increase in the level of cyber security in Europe.

To track down the office responsible for you, follow the respective link for your state.

Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden

IT security teams don’t necessarily need to know what CSAF is, but familiarity with what’s happening “under the hood” of a vulnerability management platform can give context to how next-gen vulnerability management is evolving and to the advantages of automated vulnerability management. In this article, we take an introductory journey through CSAF 2.0: what it is and how it seeks to benefit enterprise vulnerability management.

Greenbone AG is an official partner of the German Federal Office for Information Security (BSI) to integrate technologies that leverage the CSAF 2.0 standard for automated cybersecurity advisories.

What is CSAF?

The Common Security Advisory Framework (CSAF) 2.0 is a standardized, machine-readable vulnerability advisory format. CSAF 2.0 enables the upstream cybersecurity intelligence community, including software and hardware vendors, governments, and independent researchers to provide information about vulnerabilities. Downstream, CSAF allows vulnerability information consumers to aggregate security advisories from a decentralized group of providers and automate risk assessment with more reliable information and less resource overhead.

By providing a standardized, machine-readable format, CSAF represents an evolution towards “next-gen” automated vulnerability management, which can reduce the burden on IT security teams facing an ever-increasing number of CVE disclosures and improve risk-based decision making in the face of an “ad-hoc” approach to vulnerability intelligence sharing.

CSAF 2.0 is the replacement for the Common Vulnerability Reporting Framework (CVRF) v1.2 and extends its predecessor’s capabilities to offer greater flexibility.

Here are the key takeaways:

  • CSAF is an international open standard for machine-readable vulnerability advisory documents based on the JSON format (a trimmed example follows this list).
  • CSAF aggregation is a decentralized model of distributing vulnerability information.
  • CSAF 2.0 is designed to enable next-gen automated enterprise vulnerability management.
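
To make the format tangible, here is a heavily trimmed sketch of what a CSAF 2.0 advisory can look like, built as a Python dictionary and serialized to JSON. Real documents carry many more mandatory fields (publisher, dates, revision history); every value below is an invented placeholder.

```python
# Heavily trimmed sketch of a CSAF 2.0 advisory; all values are placeholders.
import json

advisory = {
    "document": {
        "category": "csaf_security_advisory",
        "csaf_version": "2.0",
        "title": "Example advisory for ExampleProduct",
        "tracking": {"id": "EXMP-2024-0001", "status": "final", "version": "1"},
    },
    "product_tree": {
        "full_product_names": [
            {"product_id": "EXMP-8.0", "name": "ExampleProduct 8.0.0"},
        ],
    },
    "vulnerabilities": [
        {
            "cve": "CVE-2024-0000",
            "product_status": {"known_affected": ["EXMP-8.0"]},
        },
    ],
}

print(json.dumps(advisory, indent=2))
```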

The Traditional Process of Vulnerability Management

Traditional vulnerability management is difficult for large organizations with complex IT environments. The number of CVEs published each patch cycle has been increasing at an unmanageable pace [1][2]. In a traditional vulnerability management process, IT security teams collect vulnerability information manually via Internet searches. The process therefore involves extensive manual effort to collect, analyze, and organize information from a variety of sources and ad-hoc document formats.

These sources typically include:

  • Vulnerability tracking databases such as NIST NVD
  • Product vendor security advisories
  • National and international CERT advisories
  • CVE numbering authority (CNA) assessments
  • Independent security research
  • Security intelligence platforms
  • Exploit code databases

The ultimate goal of conducting a well-informed risk assessment can be confounded during this process in several ways. Advisories, even those provided by the product vendor themselves, are often incomplete and come in a variety of non-standardized formats. This lack of cohesion makes data-driven decision making difficult and increases the probability of error.

Let’s briefly review the existing vulnerability information pipeline from both the creator and consumer perspectives:

The Vulnerability Disclosure Process

Common Vulnerabilities and Exposures (CVE) records published in the National Vulnerability Database (NVD) of NIST (the National Institute of Standards and Technology) represent the world’s most centralized global repository of vulnerability information. Here is an overview of how the vulnerability disclosure process works:

  1. Product vendors become aware of a security vulnerability through their own security testing or from independent security researchers, triggering their internal vulnerability disclosure policy. In other cases, independent security researchers may interact directly with a CVE Numbering Authority (CNA) to publish the vulnerability without prior consultation with the product vendor.
  2. Vulnerability aggregators such as NIST NVD and national CERTs create unique tracking IDs (such as a CVE ID) and add the disclosed vulnerability to a centralized database where product users and vulnerability management platforms such as Greenbone can become aware and track progress.
  3. Various stakeholders such as the product vendor, NIST NVD and independent researchers publish advisories that may or may not include remediation information, expected dates for official patches, a list of affected products, CVSS impact assessment and severity ratings, Common Platform Enumeration (CPE) or Common Weakness Enumeration (CWE).
  4. Other cyber-threat intelligence providers such as CISA’s Known Exploited Vulnerabilities (KEV) and First.org’s Exploit Prediction Scoring System (EPSS) provide additional risk context.

The Vulnerability Management Process

Product users are responsible for ingesting vulnerability information and applying it to mitigate the risk of exploitation. Here is an overview of the traditional enterprise vulnerability management process:

  1. Product users need to manually search CVE databases and monitor security advisories that pertain to their software and hardware assets, or utilize a vulnerability management platform such as Greenbone, which automatically aggregates the available ad-hoc threat advisories.
  2. Product users must match the available information to their IT asset inventory. This typically involves maintaining an asset inventory and conducting manual matching, or using a vulnerability scanning product to automate the process of building an asset inventory and executing vulnerability tests.
  3. IT security teams prioritize the discovered vulnerabilities according to the contextual risk presented to critical IT systems, business operations, and in some cases public safety.
  4. Remediation tasks are assigned according to the final risk assessment and available resources.

What is Wrong with Traditional Vulnerability Management?

Traditional or manual vulnerability management processes are operationally complex and lack efficiency. Aside from the operational difficulties of implementing software patches, the lack of accessible and reliable information bogs down efforts to effectively triage and remediate vulnerabilities. Using CVSS alone to assess risk has also been criticized [1][2] for lacking sufficient context to support robust risk-based decision making. Although vulnerability management platforms such as Greenbone greatly reduce the burden on IT security teams, the overall process is still plagued by time-consuming manual aggregation of ad-hoc vulnerability advisories that often results in incomplete information.

Especially in the face of an ever increasing number of vulnerabilities, aggregating ad-hoc security information risks being too slow and introduces more human error, increasing vulnerability exposure time and confounding risk-based vulnerability prioritization.

Lack of Standardization Results in Ad-hoc Intelligence

The current vulnerability disclosure process lacks a formal method of distinguishing between reliable vendor-provided information and information provided by arbitrary independent security researchers, such as partner CNAs. In fact, the official CVE website itself advertises the low requirements for becoming a CNA. This results in a large number of CVEs being issued without detailed context, forcing extensive manual enrichment downstream.

Which information is included depends on the CNA’s discretion, and there is no way to classify the reliability of the information. As a simple example of the problem, the affected products in an ad-hoc advisory are often described using a wide range of descriptors that need to be manually interpreted (a sketch of the machine-readable alternative follows the list below). For example:

  • Version 8.0.0 – 8.0.1
  • Version 8.1.5 and later
  • Version <= 8.1.5
  • Versions prior to 8.1.5
  • All versions < V8.1.5
  • 0, V8.1, V8.1.1, V8.1.2, V8.1.3, V8.1.4, V8.1.5
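
A machine-readable range specifier, by contrast, can be evaluated without human interpretation. The sketch below applies a deliberately simplified reading of a vers-style range (treating all constraints as ANDed) and uses the third-party packaging library for comparisons; it is an illustration of the idea, not a conformant vers parser.

```python
# Simplified sketch of evaluating a vers-style version range; not a
# conformant parser for the vers specification.
from packaging.version import Version

def version_in_range(installed: str, vers_uri: str) -> bool:
    """Check an installed version against a simple vers-style range."""
    _, _, constraints = vers_uri.partition("/")   # drop the "vers:<scheme>/" prefix
    for constraint in constraints.split("|"):
        if constraint.startswith(">="):
            if Version(installed) < Version(constraint[2:]):
                return False
        elif constraint.startswith("<"):
            if Version(installed) >= Version(constraint[1:]):
                return False
    return True

print(version_in_range("8.0.1", "vers:generic/>=8.0.0|<8.1.5"))  # True
print(version_in_range("8.1.5", "vers:generic/>=8.0.0|<8.1.5"))  # False
```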

Scalability

Because vendors, assessors (CNAs), and aggregators utilize various distribution methods and formats for their advisories, the challenge of efficiently tracking and managing vulnerabilities becomes operationally complex and difficult to scale. Furthermore, the increasing rate of vulnerability disclosure exacerbates manual processes, overwhelms security teams, and increases the risk of error or delay in remediation efforts.

Difficult to Assess Risk Context

NIST SP 800-40r4 “Guide to Enterprise Patch Management Planning” Section 3 advises the application of enterprise level vulnerability metrics. Because risk ultimately depends on each vulnerability’s context – factors such as affected systems, potential impact, and exploitability – the current environment of ad-hoc security intelligence presents a significant barrier to robust risk-based vulnerability management.

How Does CSAF 2.0 Solve These Problems?

CSAF documents are, in essence, cyber threat advisories designed to optimize the vulnerability information supply chain. Instead of manually aggregating ad-hoc vulnerability data, product users can automatically aggregate machine-readable CSAF advisories from trusted sources into an Advisory Management System that combines the core vulnerability management functions of asset matching and risk assessment. In this way, security content automation with CSAF aims to address the challenges of traditional vulnerability management by providing more reliable and efficient security intelligence, creating the potential for next-gen vulnerability management.

Here are some specific ways that CSAF 2.0 solves the problems of traditional vulnerability management:

More Reliable Security Information

CSAF 2.0 remedies the crux of ad-hoc security intelligence by standardizing several aspects of a vulnerability disclosure. For example, the affected-version specifier fields allow standardized data such as the Version Range Specifier (vers), Common Platform Enumeration (CPE), the Package URL specification, and CycloneDX SBOMs, as well as the product’s common name, serial number, model number, SKU, or file hash, to identify affected product versions.

In addition to standardizing product versions, CSAF 2.0 also supports Vulnerability Exploitability eXchange (VEX) for product vendors, trusted CSAF providers, or independent security researchers to explicitly declare product remediation status. VEX provides product users with recommendations for remedial actions.

The explicit VEX status declarations are listed below; a short sketch of how a consumer might act on them follows the list:

  • Not affected: No remediation is required regarding a vulnerability.
  • Affected: Actions are recommended to remediate or address a vulnerability.
  • Fixed: Represents that these product versions contain a fix for a vulnerability.
  • Under Investigation: It is not yet known whether these product versions are affected by a vulnerability. An update will be provided in a later release.
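
As a sketch of how these declarations could drive automation downstream, the following snippet walks a parsed CSAF document (the advisory dictionary from the earlier sketch) and flags products whose status calls for action. The product_status keys follow CSAF 2.0 naming; everything else is an illustrative assumption.

```python
# Minimal sketch: deriving action items from CSAF 2.0 product_status entries.
# "vex" is assumed to be an already parsed CSAF document (a Python dict).
ACTION_NEEDED = {"known_affected"}   # the CSAF key behind the "Affected" status

def action_items(vex: dict) -> list[str]:
    items = []
    for vuln in vex.get("vulnerabilities", []):
        for status, product_ids in vuln.get("product_status", {}).items():
            if status in ACTION_NEEDED:
                for pid in product_ids:
                    items.append(f"{vuln.get('cve', 'unknown CVE')}: remediate {pid}")
    return items

# With the trimmed advisory from the earlier sketch:
# action_items(advisory) -> ["CVE-2024-0000: remediate EXMP-8.0"]
```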

More Effective Use of Resources

CSAF enables several upstream and downstream optimizations to the traditional vulnerability management process. The OASIS CSAF 2.0 documentation includes descriptions of several compliance goals that enable cybersecurity administrators to automate their security operations for more efficient use of resources.

Here are some compliance targets referenced in the CSAF 2.0 documentation that support more effective use of resources above and beyond the traditional vulnerability management process:

  • Advisory Management System: A software system that consumes data and produces CSAF 2.0 compliant advisory documents. It allows CSAF-producing teams to assess the quality of ingested data at a point in time, then verify, convert, and publish it as a valid CSAF 2.0 security advisory, optimizing the efficiency of their information pipeline while verifying that accurate advisories are published.
  • CSAF Management System: A program that can manage CSAF documents and is able to display their details as required by the CSAF viewer role. At the most fundamental level, this allows both upstream producers and downstream consumers of security advisories to view their content in a human-readable format.
  • CSAF Asset Matching System / SBOM Matching System: A program that integrates with a database of IT assets, including Software Bills of Materials (SBOM), and can match assets to any CSAF advisories. An asset matching system gives a CSAF-consuming organization visibility into its IT infrastructure, identifies where vulnerable products exist, and optimally provides automated risk assessment and remediation information (a matching sketch follows this list).
  • Engineering System: A software analysis environment within which analysis tools execute. An engineering system might include a build system, a source control system, a result management system, a bug tracking system, a test execution system and so on.
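
Reduced to its essence, an asset matching system intersects an asset inventory with the product identifiers found in incoming advisories. The sketch below assumes both sides already share a common identifier (the invented product IDs from the earlier sketches); real systems match on CPEs, package URLs, or SBOM contents.

```python
# Minimal sketch of asset matching: intersect inventory product IDs with the
# products an advisory marks as affected. All identifiers are invented.
inventory = {
    "EXMP-8.0": ["server-01", "server-07"],   # product ID -> hosts running it
    "OTHER-1.2": ["db-03"],
}

def match_assets(vex: dict, inventory: dict) -> dict:
    hits = {}
    for vuln in vex.get("vulnerabilities", []):
        for pid in vuln.get("product_status", {}).get("known_affected", []):
            if pid in inventory:
                hits.setdefault(vuln.get("cve", "unknown"), []).extend(inventory[pid])
    return hits

# With the trimmed advisory from the earlier sketch:
# match_assets(advisory, inventory) -> {"CVE-2024-0000": ["server-01", "server-07"]}
```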

Decentralized Cybersecurity Information

A recent outage of the NIST National Vulnerability Database (NVD) CVE enrichment process demonstrates how reliance on a single source of vulnerability information can be risky. CSAF is decentralized, allowing downstream vulnerability consumers to source and integrate information from a variety of sources. This decentralized model of intelligence sharing is more resilient to an outage by one information provider, while sharing the burden of vulnerability enrichment more effectively distributes the workload across a wider set of stakeholders.

Enterprise IT product vendors such as RedHat and Cisco have already created their own CSAF and VEX feeds, while government cybersecurity agencies and national CERT programs such as the German Federal Office for Information Security (BSI) and the US Cybersecurity & Infrastructure Security Agency (CISA) have also developed CSAF 2.0 sharing capabilities.

The decentralized model also allows for multiple stakeholders to weigh in on a particular vulnerability providing downstream consumers with more context about a vulnerability. In other words, an information gap in one advisory may be filled by an alternative producer that provides the most accurate assessment or specialized analysis.

Improved Risk Assessment and Vulnerability Prioritization

Overall, the benefits of CSAF 2.0 contribute to more accurate and efficient risk assessment, prioritization, and remediation efforts. Product vendors can directly publish reliable VEX advisories, giving cybersecurity decision makers more timely and trustworthy remediation information. Also, the aggregate severity (aggregate_severity) object in CSAF 2.0 acts as a vehicle to convey reliable urgency and criticality information for a group of vulnerabilities, enabling a more unified risk analysis and more data-driven prioritization of remediation efforts, reducing the exposure time of critical vulnerabilities.

Summary

Traditional vulnerability management processes are plagued by a lack of standardization, which creates reliability and scalability issues and increases both the difficulty of assessing risk context and the likelihood of error.

The Common Security Advisory Framework (CSAF) 2.0 seeks to revolutionize the existing process of vulnerability management by enabling more reliable, automated vulnerability intelligence gathering. By providing a standardized machine-readable format for sharing cybersecurity vulnerability information, and decentralizing its source, CSAF 2.0 empowers organizations to harness more reliable security information to achieve more accurate, efficient, and consistent vulnerability management operations.


Public-key cryptography underpins enterprise network security, and thus securing the confidentiality of private keys is one of the most critical IT security challenges for preventing unauthorized access and maintaining the confidentiality of data. While Quantum Safe Cryptography (QSC) has emerged as a top concern for the future, recent critical vulnerabilities like CVE-2024-3094 (CVSS 10) in XZ Utils and the newly disclosed CVE-2024-31497 (CVSS 8.8) in PuTTY are here and now – real and present dangers.

Luckily, the XZ Utils vulnerability was caught before widespread deployment into stable Linux release branches. CVE-2024-31497 in PuTTY, by comparison, represents a much bigger threat despite its lower CVSS score. Let’s examine the details to understand why, and review Greenbone’s capabilities for detecting known cryptographic vulnerabilities.

A Primer On Public Key Authentication

Public-key infrastructure (PKI) is fundamental to a wide array of digital trust services such as Internet and enterprise LAN authentication, authorization, privacy, and application security. For public-key authentication, the client and server each need a pair of interconnected cryptographic keys: a private key and a public key. The public keys are openly shared between the two connecting parties, while the private keys are used to digitally sign messages sent between them, and the associated public keys are used to verify those signatures. This is fundamentally how each party verifies the other’s identity and how a single symmetric session key is agreed upon for continued encrypted communication at an optimal connection speed.
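
The sign-and-verify exchange at the heart of public-key authentication can be illustrated in a few lines. This sketch uses Python’s third-party cryptography library with an Ed25519 key pair purely as an illustration; SSH and TLS wrap the same primitive in considerably more protocol machinery.

```python
# Illustration of public-key authentication primitives with Ed25519.
# Requires the third-party "cryptography" package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # kept secret by its owner
public_key = private_key.public_key()        # shared openly with the peer

challenge = b"session-unique challenge data"
signature = private_key.sign(challenge)      # only the private key can produce this

try:
    public_key.verify(signature, challenge)  # anyone with the public key can check
    print("Identity verified")
except InvalidSignature:
    print("Verification failed")
```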

In the client-server model of communication, if the client’s private key is compromised, an attacker can potentially authenticate to any resources that honor it. If the server’s private key is compromised, an attacker can potentially spoof the server’s identity and conduct Adversary-in-the-Middle (AitM) attacks.

CVE-2024-31497 Affects Seven Years of PuTTY Releases

CVE-2024-31497 in the popular Windows SSH client PuTTY allows an attacker to recover a client’s NIST P-521 secret key by capturing and analyzing approximately 60 digital signatures, due to biased ECDSA nonce generation. As of NIST SP 800-186 (2023), NIST ECDSA P-521 keys are still classified among those offering the highest cryptographic resilience and are recommended for use in various applications, including SSL/TLS and Secure Shell (SSH). So, a vulnerability in an application’s implementation of ECDSA P-521 authentication is a serious disservice to IT teams who have otherwise applied appropriately strong encryption standards.

In the case of CVE-2024-31497, the client’s digital signatures are subject to cryptanalysis attacks that can reveal the private key. While developing an exploit for CVE-2024-31497 is a highly skilled endeavor requiring expert cryptographers and computer engineers, proof-of-concept (PoC) code has been released publicly, indicating a high risk that CVE-2024-31497 may be actively exploited even by low-skilled attackers in the near future.

Adversaries could capture a victim’s signatures by monitoring network traffic, but signatures may already be publicly available if PuTTY was used for signing commits of public GitHub repositories using NIST ECDSA P-521 keys. In other words, adversaries may be able to find enough information to compromise a private key from publicly accessible data, enabling supply-chain attacks on a victim’s software.

CVE-2024-31497 affects PuTTY versions 0.68 (early 2017) through 0.80, as well as FileZilla before 3.67.0, WinSCP before 6.3.3, TortoiseGit before 2.15.0.1, TortoiseSVN through 1.14.6, and potentially other products.

On the bright side, Greenbone is able to detect the various vulnerable versions of PuTTY with multiple Vulnerability Tests (VTs). Greenbone can identify Windows Registry Keys that indicate a vulnerable version of PuTTY is present on a scan target, and has additional tests for PuTTY for Linux [1][2][3], FileZilla [4][5], and versions of Citrix Hypervisor/XenServer [6] susceptible to CVE-2024-31497.

Greenbone Protects Against Known Encryption Flaws

Encryption flaws can be caused by weak cryptographic algorithms, misconfigurations, and flawed implementations of an otherwise strong encryption algorithm, as in the case of CVE-2024-31497. Greenbone includes over 6,500 separate Network Vulnerability Tests (NVTs) and Local Security Checks (LSCs) that can identify all types of cryptographic flaws. Some examples of cryptographic flaws that Greenbone can detect include:

  • Application-Specific Vulnerabilities: Greenbone can detect over 6,500 OS- and application-specific encryption vulnerabilities for which CVEs have been published.
  • Lack of Encryption: Unencrypted remote authentication or other data transfers, and even unencrypted local services, pose a significant risk to sensitive data when attackers have gained an advantageous position such as the ability to monitor network traffic.
  • Support for Weak Encryption Algorithms: Weak encryption algorithms or cipher suites no longer provide strong assurances against cryptanalysis attacks. When they are in use, communications are at higher risk of data theft, and an attacker may be able to forge communication to execute arbitrary commands on a victim’s system. Greenbone includes more than 1,000 NVTs to detect remote services using weak encryption algorithms.
  • Non-Compliant TLS Settings and HTTPS Security Headers: Greenbone has NVTs to detect when HTTP Strict Transport Security (HSTS) is not configured and to verify web-server TLS policy (a minimal HSTS check is sketched after this list).
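
As a small illustration of the last point, checking whether a web server sends an HSTS header takes only a few lines. The sketch uses the third-party requests library and a placeholder URL; it is a spot check, not a substitute for a full NVT.

```python
# Minimal sketch: checking a server's HTTP Strict Transport Security header.
# Uses the third-party "requests" library; the URL is a placeholder.
import requests

response = requests.get("https://example.com", timeout=10)
hsts = response.headers.get("Strict-Transport-Security")

if hsts is None:
    print("No HSTS header set - clients may accept plain-HTTP downgrades")
else:
    print(f"HSTS policy: {hsts}")
```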

Summary

SSH public-key authentication is widely considered one of the most secure – if not the most secure – remote access protocols, but two recent vulnerabilities have put this critical service in the spotlight. CVE-2024-3094, a trojan planted in XZ Utils, found its way into some experimental Linux repositories before its discovery, and CVE-2024-31497 in PuTTY allows a cryptographic attack to extract a client’s private key if an attacker can obtain roughly 60 digital signatures.

Greenbone can detect emerging threats to encryption such as CVE-2024-31497 and includes over 6,500 other vulnerability tests to identify a range of encryption vulnerabilities.

How is artificial intelligence (AI) changing the cybersecurity landscape? Will AI make the cyber world more secure or less secure? I was able to explore these questions in the panel discussion at the “Potsdam Conference for National Cybersecurity 2024” together with Prof. Dr. Sandra Wachter, Dr. Kim Nguyen, and Dr. Sven Herpig. Does AI deliver what it promises today? And what does the future look like with AI?

[Image: Four experts discuss the opportunities and risks of artificial intelligence in cybersecurity during a panel at the 2024 Potsdam Conference on National Cybersecurity at the Hasso Plattner Institute]

Cybersecurity is already difficult enough for many companies and institutions. Will the addition of artificial intelligence (AI) now make it even more dangerous for them or will AI help to better protect IT systems? What do we know? And what risks are we looking at here? Economic opportunities and social risks are the focus of both public attention and currently planned legislation. The EU law on artificial intelligence expresses many of the hopes and fears associated with AI.

Hopes and fears

We hope that many previously unresolved technical challenges can be overcome. Business and production processes should be accelerated, and machines should be able to handle increasingly complex tasks autonomously. AI can also offer unique protection in the military sector, saving many lives, for example in the form of AI-supported defense systems such as the Iron Dome.

On the other, darker side of AI are threats such as mass manipulation through deepfakes, sophisticated phishing attacks or simply the fear of job losses that goes hand in hand with any technical innovation. More and more chatbots are replacing service employees, image generators are replacing photographers and graphic designers, text generators are replacing journalists and authors, and generated music is replacing musicians and composers. In almost every profession, there is a fear of being affected sooner or later. This even applies to the IT sector, where a rich choice of jobs was previously perceived as a certainty. These fears are often very justified, but sometimes they are not.

In the area of cyber security, however, it is not yet clear to what extent autonomous AI can create more security and replace the urgently needed security experts or existing solutions. This applies to both attackers and defenders. Of course, the unfair distribution of tasks remains: While defenders want (and need) to close as many security gaps as possible, a single vulnerability is enough for the attackers to launch a successful attack. Fortunately, defenders can fall back on tools and mechanisms that automate a lot of work, even today. Without this automation, the defenders are lost. Unfortunately, AI does not yet help well enough. This is demonstrated by the ever-increasing damage caused by conventional cyber attacks, even though there are supposedly already plenty of AI defenses. On the other hand, there is the assumption that attackers are becoming ever more powerful and threatening thanks to AI.

For more cyber security, we need to take a closer look. We need a clearer view of the facts.

Where do we stand today?

So far, we know nothing about technical cyber attacks generated by artificial intelligence. There are currently no relevant, verifiable cases, only theoretically constructed scenarios. This may change, but as things stand today, this is the case. We don’t know of any AI that could currently generate sufficiently sophisticated attacks. What we do know is that phishing is very easy to implement with generative language models and that these spam and phishing emails appear to us to be more skillful, at least anecdotally. Whether this causes more damage than the already considerable damage, on the other hand, is not known. It is already terrible enough today, even without AI. However, we know that phishing is only ever the first step in accessing a vulnerability.

[Image: Greenbone board member Elmar Geese speaking about the opportunities and risks of artificial intelligence in cybersecurity at the Potsdam Conference for National Cybersecurity at the Hasso Plattner Institute (HPI); picture: Nicole Krüger]

How can we protect ourselves?

The good news is that an exploited vulnerability can almost always be found and fixed beforehand. Then even the best attack created with generative AI would come to nothing. And that’s how it has to be done. Because whether I am under threat from a conventional attack today or an AI in my network the day after tomorrow, a vulnerability in the software or in the security configuration will always be necessary for an attack to succeed. Two strategies then offer the best protection: firstly, being prepared for the worst-case scenario, for example through backups together with the ability to restore systems in a timely manner. The second is to look for the gaps yourself every day and close them before they can be exploited. Simple rule of thumb: every gap that exists can and will be exploited. 

Role and characteristics of AI

AI systems are themselves very good targets for attacks. Just like the internet, they were not designed with “security by design” in mind. AI systems are just software and hardware, like any other target. In contrast to AI systems, however, conventional IT systems – whose functionality can be more or less understood with sufficient effort – can be repaired in a manner comparable to surgical intervention. They can be “patched”. This does not work with AI. If a language model does not know what to do, it does not produce a status report or even an error message; it “hallucinates”. However, hallucinating is just a fancy term for lying, guessing, inventing something or doing strange things. Such an error cannot be patched; it requires, for example, retraining the system, without being able to clearly identify the cause of the error.

If it is very obvious and an AI thinks dogs are fish, for example, it is easy to at least recognize the error. However, if it has to state a probability as to whether it has detected a dangerous or harmless anomaly on an X-ray image, for example, it becomes more difficult. It is not uncommon for AI products to be discontinued because the error cannot be corrected. A prominent first example was Tay, a chatbot launched unsuccessfully twice by Microsoft, which was discontinued even faster the second time than the first.

What we can learn from this: lower the bar, focus on trivial AI functions and then it will work. That’s why many AI applications that are coming onto the market today are here to stay. They are useful little helpers that speed up processes and provide convenience. Perhaps they will soon be able to drive cars really well and safely. Or maybe not.

The future with AI

Many AI applications today are anecdotally impressive. However, they can only be created for use in critical fields with a great deal of effort and specialization. The Iron Dome only works because it is the result of well over ten years of development work. Today, it recognizes missiles with a probability of 99% and can shoot them down – and not inadvertently civilian objects – before they cause any damage. For this reason, AI is mostly used to support existing systems and not autonomously. Even if, as the advertising promises, they can formulate emails better than we can or want to ourselves, nobody today wants to hand over their own emails, chat inboxes and other communication channels to an AI that takes care of the correspondence and only informs us of important matters with summaries.

Will that happen in the near future? Probably not. Will it happen at some point? We don’t know. When the time perhaps comes, our bots will be writing messages to each other, our combat robots will be fighting our wars against each other, and AI cyber attackers and defenders will be competing against each other. When they realize that what they are doing is pointless, they might ask themselves what kind of beings they are hiring to do it. Then perhaps they will simply stop, set up communication lines, leave our galaxy and leave us helpless. At least we’ll still have our AI act and can continue to regulate “weak AI” that hasn’t made it away.

Why is Greenbone not a security provider like any other? How did Greenbone come about, and what impact does Greenbone’s long history have on the quality of its vulnerability scanners and the security of its customers? The new video “Demystify Greenbone” provides answers to these questions in a twelve-minute overview. It shows why experts need […]

“Support for early crisis detection” was the topic of a high-profile panel on the second day of this year’s PITS Congress. On stage: Greenbone CEO Jan-Oliver Wagner together with other experts from the Federal Criminal Police Office, the German Armed Forces, the Association of Municipal IT Service Providers VITAKO and the Federal Office for Information Security.

[Image: Panel discussion at the PITS Congress 2024 on early crisis detection, with Greenbone CEO Dr. Jan-Oliver Wagner and representatives from the BSI, Bundeswehr, BKA, and VITAKO]

Once again this year, Behörden Spiegel organized its popular conference on Public IT Security (PITS). Hundreds of security experts gathered at the renowned Hotel Adlon in Berlin for two days of forums, presentations, and an exhibition of IT security companies. In 2024, the motto of the event was “Security Performance Management” – so it was only natural that Greenbone, as a leading provider of vulnerability management, was again invited (as in 2023), for example to the panel on early crisis detection, which Greenbone CEO Dr. Jan-Oliver Wagner opened with a keynote speech.

In his presentation, Jan-Oliver Wagner explained his view on strategic crisis detection, talking about the typical “earthquakes” and the two most important components: Knowing where vulnerabilities are, and providing technologies to address them.

Greenbone has built up this expertise over many years, also making it available to the public as open source, always working together with important players on the market. Contacts with the German Federal Office for Information Security (BSI), for example, were there right from the start: “The BSI already had the topic of vulnerability management on its radar when IT security was still limited to firewalls and antivirus software,” said Wagner, praising the BSI, the German government’s central authority for IT security.

Today, the importance of two factors is clear: “Every organization must know how and where it is vulnerable, know its own response capabilities, and keep working to improve them continuously. Cyber threats are like earthquakes. We can’t prevent them; we can only prepare for them and respond to them in the best possible way.”

“A crisis has often happened long before the news breaks”

According to Jan-Oliver Wagner’s definition, the constant cyber threat evolves into a veritable “crisis” when, for example, a threat “hits a society, economy or nation where many organizations have a lot of vulnerabilities and a low ability to react quickly. Speed is very important. You have to be faster than the attack happens.” The other participants on the panel also addressed this and used the term “getting ahead of the wave”.

The crisis is often already there long before it is mentioned in the news; individual organizations need to protect themselves and prepare so that they can react to unknown situations on a daily basis. “A cyber nation supports organizations and the nation by providing the means to achieve this state,” says Jan-Oliver Wagner.

Differences between the military and local authorities

Major General Dr Michael Färber, Head of Planning and Digitalization, Cyber & Information Space Command, explained the Bundeswehr’s perspective: According to him, a crisis occurs when the measures and options for responding are no longer sufficient. “Then something develops into a crisis.”

From the perspective of small cities and similar local authorities, however, the picture is different, according to Katrin Giebel, head of VITAKO, the Federal Association of Municipal IT Service Providers. “80 percent of administrative services take place at the municipal level. There would already be riots if vehicle registration were unavailable.” Cities and municipalities keep being hit hard by cyber attacks, and crises start much earlier here: “For us, threats are almost the same as a crisis.”

Massive negligence in organizations is frightening, says BSI

The BSI, on the other hand, defines a “crisis” as a situation in which an individual organization is unable, or no longer able, to solve a problem on its own. Dr Dirk Häger, Head of the Operational Cyber Security Department at the BSI: “As soon as two departments are affected, the crisis team convenes. For us, a crisis exists as soon as we cannot solve a problem with the standard organization.” This gives a crucial role to those employees who decide whether or not to call a meeting. “You just reach a point where you agree: now we need the crisis team.”

Something that Häger finds very frightening, however, is how long successful attacks continue to take place after crises have actually already been resolved, for example in view of the events surrounding the Log4j vulnerability. “We put a lot of effort into this, especially at the beginning. The Log4j crisis was over, but many organizations were still vulnerable and had inadequate response capabilities. But nobody investigates it anymore,” complains the head of department from the BSI.

How to increase the speed of response?

Asked by moderator Dr. Eva-Charlotte Proll, editor-in-chief and publisher at Behörden Spiegel, what would help in view of these insights, he described the typical procedure and decision-making process using the current Check Point incident as an example: “Whether something is a crisis or not is expert knowledge. In this case, it was a flaw that was initiated and exploited by state actors.” Action was needed at the latest when the Check Point backdoor began to be exploited by other (non-state) attackers. Knowledge of this specific threat situation is also of key importance for those affected.

Jan-Oliver Wagner also once again emphasized the importance of the knowledge factor. Often the threat situation is not discussed appropriately. At the beginning of 2024, for example, an important US authority (NIST) reduced the amount of information in its vulnerability database – a critical situation for every vulnerability management provider and their customers. Furthermore, the fact that NIST is still not defined as critical infrastructure shows that action is needed.

The information provided by NIST is central to the National Cyber Defense Center’s ability to create a situational picture as well, agrees Färber. This also applies to cooperation with the industry: several large companies “boast that they can deliver exploit lists to their customers within five minutes. We can improve on that, too.”

Carsten Meywirth, Head of Department at the BKA, emphasized the differences between state and criminal attacks, also using the example of the supply chain attack on SolarWinds. Criminal attackers often have little interest in causing a crisis because too much media attention might jeopardize their potential financial returns. And security authorities need to stay ahead of the wave – which requires intelligence and the potential to disrupt the attackers’ infrastructure.

BKA: International cooperation

According to Major General Färber, Germany is consistently among the top four countries in terms of attacks. The USA is always in first place, but countries like Germany end up in the attackers' dragnets simply because of the size of their economies. This is what makes outstanding international cooperation in investigating and hunting down perpetrators so important. "Especially the cooperation between Germany, the USA and the Netherlands is indeed very successful, but the data sprints with the Five Eyes countries (USA, UK, Australia, Canada and New Zealand) are also of fundamental importance, because that is where intelligence findings come to the table and are shared and compared. Successful identification of perpetrators is usually impossible without such alliances," says Michael Färber. Germany, he adds, is well positioned with its relevant organizations: "We have significantly greater redundancy than others, and that is a major asset in this fight." In the exemplary "Operation Endgame", a cooperation between security authorities and the private sector launched by the FBI, the full power of these structures is now becoming apparent. "We must and will continue to expand this."

“We need an emergency number for local authorities in IT crises”

Getting ahead of the situation in this way remains a dream of the future for the municipalities. They depend heavily on support across the federal levels and on a culture of cooperation in general. An up-to-date picture of the situation is "absolutely important" for them, reports Katrin Giebel of VITAKO. As a representative of the municipal IT service providers, she is familiar with many critical situations and with the needs of the municipalities – from staff shortages and a lack of expertise to the emergency number for IT crises that is still missing today. Such a hotline would not only be helpful; it would also correspond to the definition from Wagner's introductory presentation: "A cyber nation protects itself by helping companies to protect themselves."

BSI: prevention is the most important thing

Even if the BSI does not see itself in a position to fulfil such a requirement on its own, it has long internalized this decentralized way of thinking. Whether the BSI should be developed into a central office in this sense, however, is something that would first need to be discussed, explains Dirk Häger. "But prevention is much more important. Anyone who puts an unsecured system online today will quickly be hacked. The threat is there. We must be able to fend it off. And that is exactly what prevention is."

Wagner adds that information is key here. Distributing information is clearly a task for the state, and he sees the existing organizations as ideally placed to take on this role.

Sponsor wall of the PITS Congress 2024 with logos of leading IT security companies such as Greenbone, Cisco, HP and other partners from government and industry.

Winter is coming: the motto of House Stark from the series "Game of Thrones" heralds the approach of an undefined disaster. One could suspect something similar when reading the many articles meant to set the mood for the upcoming NIS2 Implementation Act (NIS2UmsuCG). Is NIS2 a steamroller of ice and fire that will bury the entire European IT landscape, from which only those who attend one of the countless webinars and follow all the advice can save themselves?

NIS2 as such is merely a directive issued by the EU. It is intended to ensure the IT security of operators of important and critical infrastructures – which today may well not be optimal – and to increase their cyber resilience. Based on this directive, the member states are now called upon to create a corresponding law that transposes it into national law.

What is to be protected?

The NIS Directive was introduced by the EU back in 2016 to protect industries and service providers relevant to society from attacks in the cybersphere. The directive contains binding requirements for the protection of IT structures in companies that operate as critical infrastructure (KRITIS) operators. These are companies that play an indispensable role within society because they operate in areas such as healthcare, energy supply and transport – in other words, areas where deliberately caused disruptions or failures can lead to catastrophic situations. Raise your hand if your household is equipped to survive a power outage lasting several days with all its consequences…

As digitalisation continues to advance, the EU had to create a successor (NIS2), which on the one hand places stricter requirements on information security and on the other covers a larger group of companies that are "important" or "particularly important" for society. These companies are now required to meet certain information security standards.

Although the NIS2 Directive was already adopted in December 2022, the member states have until 17 October 2024 to pass a corresponding implementing law. Germany will probably not make it by then. Nevertheless, there is no reason to sit back. The NIS2UmsuCG is coming, and with it increased demands on the IT security of many companies and institutions.

Who needs to act now?

Companies from four groups are affected. First, there are the particularly important organisations: companies with 250 or more employees, or with an annual turnover of at least 50 million euros and a balance sheet total of at least 43 million euros. A company that meets these size criteria counts as particularly important if it is active in one of the following sectors: energy, transport, finance/insurance, health, water/sewage, IT and telecommunications, or space.

In addition, there are the important organisations: companies with 50 or more employees, or with a turnover of at least 10 million euros and a balance sheet total of at least 10 million euros. A company that meets these size criteria counts as important if it is active in one of the following sectors: postal/courier services, chemicals, research, manufacturing (medical/diagnostics, IT, electrical, optical, mechanical engineering, automotive/parts, vehicle construction), digital services (marketplaces, search engines, social networks), food (wholesale, production, processing) or waste disposal (waste management).

In addition to particularly important and important facilities, there are also critical facilities, which continue to be defined by the KRITIS methodology. Federal facilities are also regulated.
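
Whether an organisation crosses the size thresholds of the first two groups can be checked mechanically. Below is a minimal, hypothetical sketch in Python based solely on the figures quoted above; the sector labels are our own shorthand, and the final NIS2UmsuCG may define the criteria with more nuance, so treat this as an illustration rather than legal advice.

# Simplified NIS2 self-check based on the thresholds quoted above.
# Sector labels are shorthand; the actual law contains further nuances.

PARTICULARLY_IMPORTANT_SECTORS = {
    "energy", "transport", "finance/insurance", "health",
    "water/sewage", "it-telecom", "space",
}
IMPORTANT_SECTORS = {
    "postal/courier", "chemicals", "research", "manufacturing",
    "digital-services", "food", "waste-disposal",
}

def classify(sector: str, employees: int,
             turnover_meur: float, balance_meur: float) -> str:
    """Return a rough NIS2 category for an organisation."""
    if sector in PARTICULARLY_IMPORTANT_SECTORS and (
            employees >= 250 or (turnover_meur >= 50 and balance_meur >= 43)):
        return "particularly important"
    if sector in IMPORTANT_SECTORS and (
            employees >= 50 or (turnover_meur >= 10 and balance_meur >= 10)):
        return "important"
    return "not covered by these two size-based groups"

# Example: a hospital operator with 300 employees.
print(classify("health", employees=300, turnover_meur=60, balance_meur=50))
# -> particularly important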

What needs to be done?

In concrete terms, this means that all affected companies and institutions, regardless of whether they are “particularly important” or “important”, must fulfil a series of requirements and obligations that leave little room for interpretation and must therefore be strictly observed. Action must be taken in the following areas:

Risk management

Affected companies are obliged to introduce comprehensive risk management. In addition to access control, multi-factor authentication and single sign-on (SSO), this includes training and incident management as well as an information security management system (ISMS) and risk analyses. Vulnerability management, including regular vulnerability and compliance scans, is part of this as well.

Reporting obligations

All affected companies are obliged to report "significant security incidents" to the BSI reporting centre immediately, at the latest within 24 hours. Further reports must follow within 72 hours and within 30 days.
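
Because these staggered deadlines are easy to get wrong under pressure, it can help to derive them mechanically from the time of detection. A minimal sketch in Python, under the simplifying assumption that all three intervals run from the moment the incident is detected:

from datetime import datetime, timedelta

def nis2_reporting_deadlines(detected_at: datetime) -> dict:
    """Derive the staggered NIS2 reporting deadlines from the moment a
    significant incident is detected (simplified: all intervals are
    assumed to run from the detection time)."""
    return {
        "initial report": detected_at + timedelta(hours=24),
        "follow-up report": detected_at + timedelta(hours=72),
        "final report": detected_at + timedelta(days=30),
    }

detected = datetime(2024, 10, 17, 9, 0)
for name, due in nis2_reporting_deadlines(detected).items():
    print(f"{name}: due by {due:%Y-%m-%d %H:%M}")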

Registration

Companies are obliged to determine for themselves whether they are affected by the NIS2 legislation and to register themselves within a period of three months. Important: Nobody tells a company that it falls under the NIS2 regulation and must register. The responsibility lies solely with the individual companies and their directors.

Evidence

It is not enough to simply take the specified precautions; appropriate evidence must also be provided. Important and particularly important facilities will be inspected by the BSI on a random basis, and appropriate documentation must be submitted. KRITIS facilities will be inspected on a regular basis every three years.

Duty to inform

In future, it will no longer be possible to sweep security incidents under the carpet. The BSI will be authorised to instruct affected companies to inform their customers about security incidents, and likewise to instruct them on informing the public.

Governance

Managing directors are obliged to approve risk management measures. Training on the topic will also become mandatory. Particularly serious: Managing directors are personally liable with their private assets for breaches of duty.

Sanctions

In the past, companies occasionally preferred to accept the vague possibility of a fine rather than make concrete investments in cyber security, because the potential fine seemed quite bearable. NIS2 counters this with new offences and, in some cases, drastically increased fines – further sharpened by the personal liability of managing directors.

As can be seen, the expected NIS2 implementation law is a complex structure that covers many areas and whose requirements can rarely be covered by a single solution.

What measures should be taken as soon as possible?

Continuously scan your IT systems for vulnerabilities. This uncovers, prioritises and documents security gaps as quickly as possible. Regular scans and detailed reports create the basis for documenting how the security of your IT infrastructure develops over time. At the same time, you fulfil your obligation to provide evidence and are well prepared in the event of an audit.
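
Such recurring scans can also be automated. As a rough illustration only: the following sketch uses the open-source python-gvm library to connect to a running Greenbone installation and list the configured scan tasks with their status – the raw material for the documentation described above. The socket path and credentials are placeholders that depend on your installation.

# Sketch: list scan tasks of a running Greenbone installation via GMP.
# Requires the python-gvm package; socket path and credentials below
# are placeholders for illustration.
from gvm.connections import UnixSocketConnection
from gvm.protocols.gmp import Gmp
from gvm.transforms import EtreeTransform

connection = UnixSocketConnection(path="/run/gvmd/gvmd.sock")

with Gmp(connection, transform=EtreeTransform()) as gmp:
    gmp.authenticate("admin", "my-password")  # placeholder credentials

    # Fetch all configured scan tasks and print name and last status.
    response = gmp.get_tasks()
    for task in response.xpath("task"):
        print(task.findtext("name"), "->", task.findtext("status"))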

On request, experts can take over the complete operation of vulnerability management in your company. This can also include services such as web application pentesting, which specifically targets vulnerabilities in web applications. This covers an important area of the NIS2 catalogue of requirements and helps fulfil § 30 (risk management measures).

Conclusion

There is no single, all-encompassing measure that will immediately make you fully NIS2-compliant. Rather, there are a number of different measures that, taken together, provide a good basis. One component of this is vulnerability management with Greenbone. If you keep this in mind and put the right building blocks in place in good time, you will be on the safe side as an IT manager. And winter can come.

The IT-Grundschutz-Compendium of the Federal Office for Information Security (BSI) has, in recent years, provided clear guidelines for users of Microsoft Office. Since April 2024, Greenbone’s enterprise products have integrated tests to verify whether a company is implementing these instructions. The BSI guidelines are aligned with the Center for Internet Security (CIS) guidelines.

In the section "APP.1.1 Office Products" of the "APP: Applications" layer, the BSI specifies the "requirements for the functionality of Office product components." The goal is to protect the data processed and used by the Office software. While Microsoft Office is likely the primary reference due to its widespread market penetration, the model behind the BSI guidelines aims to apply to any office product "that is locally installed and used to view, edit, or create documents, excluding email applications."

BSI Guidelines

The module explicitly builds on the requirements of the “APP.6 General Software” component and refers to the modules “APP.5.3 General Email Client,” “APP.4.3 Relational Databases,” and “OPS.2.2 Cloud Usage,” although it expressly does not consider these.

The BSI identifies three main threats to Office suites:

  • Lack of customization of Office products to the institution’s needs
  • Malicious content in Office documents
  • Loss of integrity of Office documents

The components listed in the BSI IT-Grundschutz-Compendium comprise 16 points, some of which have since been removed. Greenbone has developed several hundred tests, primarily addressing five of the basic requirements, including "Secure opening of documents from external sources" (APP.1.1 A3) and "Use of encryption and digital signatures" (APP.1.1 A15). The BSI specifies:

"All documents obtained from external sources MUST be checked for malware before being opened. All file formats deemed problematic and not needed within the institution MUST be banned. If possible, they SHOULD be blocked. Technical measures SHOULD enforce that documents from external sources are checked."

Regarding encryption, it states: “Data with increased protection requirements SHOULD only be stored or transmitted in encrypted form. Before using an encryption method integrated into an Office product, it SHOULD be checked whether it offers sufficient protection. Additionally, a method SHOULD be used that allows macros and documents to be digitally signed.”

CIS Guidelines Enhance Basic Protection

In addition to the requirements listed in the BSI IT-Grundschutz-Compendium, the CIS Benchmark from the Center for Internet Security (CIS) for Microsoft Office includes further, more specific suggestions for securing Microsoft products. The CIS guidelines are developed by a community of security experts and represent a consensus-based collection of best practices for Microsoft Office.

As one of the first, and so far few, vulnerability management providers, Greenbone now offers tests for security-relevant settings named in the CIS guidelines, uniting CIS and BSI requirements in numerous, sometimes in-depth tests – for example on ActiveX Control Initialization in Microsoft Office. Greenbone Vulnerability Management checks whether this switch is set to "Enabled", along with many other settings, such as whether "Always prevent untrusted Microsoft Query files from opening" is set to "Enabled".
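
To give a rough idea of what such a configuration check does conceptually, here is a simplified Python sketch that reads a Windows registry policy value and compares it with the expected "Enabled" state. This is not Greenbone's actual test code; the registry path and value name are invented placeholders, since the real keys depend on the Office version and the specific policy.

# Simplified illustration of a policy compliance check on Windows.
# The registry path and value name are invented placeholders; real
# CIS/BSI checks depend on the Office version and the exact policy.
import winreg

POLICY_KEY = r"Software\Policies\Microsoft\Office\16.0\Common\Security"
POLICY_VALUE = "ExamplePolicySetting"  # hypothetical setting name

def policy_enabled() -> bool:
    """Return True if the policy value exists and is set to 1 ('Enabled')."""
    try:
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, POLICY_KEY) as key:
            value, _ = winreg.QueryValueEx(key, POLICY_VALUE)
            return value == 1
    except FileNotFoundError:
        return False

print("compliant" if policy_enabled() else "not compliant")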

Many tests focus on external content and embedded macros: whether and how such content is signed, verifiable and therefore trustworthy, and whether administrators have done their homework when configuring Microsoft Office. According to the BSI, one of the most significant threats (and the first one it names) is the lack of adaptation of Office products to the realities and business processes of the company. Greenbone's new tests make compliance with these regulations efficient, making it harder for attackers and malware to gain a foothold and cause damage in the company.