Getting NIS2 Implementation Off the Ground!

The deadline for the implementation of NIS2 is approaching – by October 17, 2024, stricter cybersecurity measures are to be transposed into law in Germany via the NIS2 Implementation Act. Other member states will develop their own legislation based on EU Directive 2022/2555. We have taken a close look at this directive to provide you with the most important pointers and signposts for the entry into force of NIS2 in this short video. You will find out whether your company is affected, what measures you should definitely take, which cybersecurity topics you need to pay particular attention to, who you can consult in this regard and what the consequences of non-compliance are.

Learn about the Cyber Resilience Act, which provides a solid framework to strengthen your organization’s resilience against cyberattacks. The ENISA Common Criteria will help you assess the security of your IT products and systems and take a risk-minimizing approach right from the development stage. Also prioritize the introduction of an information security management system (ISMS), for example by pursuing ISO 27001 certification for your company. Seek advice about IT baseline protection from specialists recommended by the BSI or the office responsible for you.

In addition to the BSI as a point of contact for matters relating to NIS2, we are happy to assist you and offer certified solutions in the areas of vulnerability management and penetration testing. By taking a proactive approach, you can identify security gaps in your systems at an early stage and secure them before they can be used for an attack. Our vulnerability management solution automatically scans your system for weaknesses and reports back to you regularly. During penetration testing, a human tester attempts to penetrate your system to give you final assurance about the attack surface of your systems.

You should also make it a habit to stay up to date with regular cybersecurity training and establish a lively exchange with other NIS2 companies. This is the only way for NIS2 to lead to a sustainable increase in the level of cyber security in Europe.

To track down the office responsible for you, follow the respective link for your state.

Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden

IT security teams don’t strictly need to know what CSAF is, but familiarity with what’s happening “under the hood” of a vulnerability management platform gives useful context for how next-gen vulnerability management is evolving and for the advantages of automated vulnerability management. In this article, we take an introductory journey through CSAF 2.0: what it is and how it seeks to benefit enterprise vulnerability management.

Greenbone AG is an official partner of the German Federal Office for Information Security (BSI) to integrate technologies that leverage the CSAF 2.0 standard for automated cybersecurity advisories.

What is CSAF?

The Common Security Advisory Framework (CSAF) 2.0 is a standardized, machine-readable vulnerability advisory format. CSAF 2.0 enables the upstream cybersecurity intelligence community, including software and hardware vendors, governments, and independent researchers to provide information about vulnerabilities. Downstream, CSAF allows vulnerability information consumers to aggregate security advisories from a decentralized group of providers and automate risk assessment with more reliable information and less resource overhead.

By providing a standardized machine-readable format, CSAF represents an evolution towards “next-gen” automated vulnerability management which can reduce the burden on IT security teams facing an ever-increasing number of CVE disclosures, and improve risk-based decision making in the face of an “ad-hoc” approach to vulnerability intelligence sharing.

CSAF 2.0 is the replacement for the Common Vulnerability Reporting Framework (CVRF) v1.2 and extends its predecessor’s capabilities to offer greater flexibility.

Here are the key takeaways:

  • CSAF is an international open standard for machine-readable vulnerability advisory documents that uses the JSON data format.
  • CSAF aggregation is a decentralized model of distributing vulnerability information.
  • CSAF 2.0 is designed to enable next-gen automated enterprise vulnerability management.
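
Since a CSAF advisory is just structured JSON, its overall shape is easy to sketch. The following is a minimal illustration of the document skeleton; the field names follow the OASIS CSAF 2.0 specification, but the identifiers, dates, and CVE number are invented placeholders, not a real advisory:

```python
import json

# Minimal, illustrative CSAF 2.0 document skeleton. Field names follow the
# OASIS CSAF 2.0 spec; all values here are placeholder examples.
advisory = {
    "document": {
        "category": "csaf_security_advisory",
        "csaf_version": "2.0",
        "publisher": {
            "category": "vendor",
            "name": "Example Vendor",
            "namespace": "https://example.com",
        },
        "title": "Example advisory for CVE-2024-0000",
        "tracking": {
            "id": "EXAMPLE-2024-0001",
            "status": "final",
            "version": "1.0.0",
            "initial_release_date": "2024-01-01T00:00:00Z",
            "current_release_date": "2024-01-01T00:00:00Z",
            "revision_history": [
                {"date": "2024-01-01T00:00:00Z", "number": "1.0.0",
                 "summary": "Initial release"}
            ],
        },
    },
    "vulnerabilities": [
        {"cve": "CVE-2024-0000"}
    ],
}

# Because CSAF is plain JSON, any consumer can parse and route it mechanically.
print(json.dumps(advisory["document"]["tracking"], indent=2))
```

The point of the fixed skeleton is that a consumer never has to guess where the tracking ID, release status, or affected CVEs live, regardless of which vendor published the advisory.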

The Traditional Process of Vulnerability Management

Traditional vulnerability management is difficult for large organizations with complex IT environments. The number of CVEs published each patch cycle has been increasing at an unmanageable pace [1][2]. In a traditional vulnerability management process, IT security teams collect vulnerability information manually via Internet searches, which involves extensive manual effort to collect, analyze, and organize information from a variety of sources and ad-hoc document formats.

These sources typically include:

  • Vulnerability tracking databases such as NIST NVD
  • Product vendor security advisories
  • National and international CERT advisories
  • CVE Numbering Authority (CNA) assessments
  • Independent security research
  • Security intelligence platforms
  • Exploit code databases

The ultimate goal of conducting a well-informed risk assessment can be confounded during this process in several ways. Advisories, even those provided by the product vendor themselves, are often incomplete and come in a variety of non-standardized formats. This lack of cohesion makes data-driven decision making difficult and increases the probability of error.

Let’s briefly review the existing vulnerability information pipeline from both the creator and consumer perspectives:

The Vulnerability Disclosure Process

Common Vulnerabilities and Exposures (CVE) records published in the National Vulnerability Database (NVD) of NIST (the National Institute of Standards and Technology) represent the world’s most centralized global repository of vulnerability information. Here is an overview of how the vulnerability disclosure process works:

  1. Product vendors become aware of a security vulnerability through their own security testing or from independent security researchers, triggering their internal vulnerability disclosure policy. In other cases, independent security researchers may interact directly with a CVE Numbering Authority (CNA) to publish the vulnerability without prior consultation with the product vendor.
  2. Vulnerability aggregators such as NIST NVD and national CERTs create unique tracking IDs (such as a CVE ID) and add the disclosed vulnerability to a centralized database where product users and vulnerability management platforms such as Greenbone can become aware and track progress.
  3. Various stakeholders such as the product vendor, NIST NVD and independent researchers publish advisories that may or may not include remediation information, expected dates for official patches, a list of affected products, CVSS impact assessment and severity ratings, Common Platform Enumeration (CPE) or Common Weakness Enumeration (CWE).
  4. Other cyber-threat intelligence providers such as CISA’s Known Exploited Vulnerabilities (KEV) and First.org’s Exploit Prediction Scoring System (EPSS) provide additional risk context.

The Vulnerability Management Process

Product users are responsible for ingesting vulnerability information and applying it to mitigate the risk of exploitation. Here is an overview of the traditional enterprise vulnerability management process:

  1. Product users need to manually search CVE databases and monitor security advisories that pertain to their software and hardware assets, or utilize a vulnerability management platform such as Greenbone which automatically aggregates the available ad-hoc threat advisories.
  2. Product users must match the available information to their IT asset inventory. This typically involves maintaining an asset inventory and conducting manual matching, or using a vulnerability scanning product to automate the process of building an asset inventory and executing vulnerability tests.
  3. IT security teams prioritize the discovered vulnerabilities according to the contextual risk presented to critical IT systems, business operations, and in some cases public safety.
  4. Remediation tasks are assigned according to the final risk assessment and available resources.

What is Wrong with Traditional Vulnerability Management?

Traditional or manual vulnerability management processes are operationally complex and lack efficiency. Aside from the operational difficulties of implementing software patches, the lack of accessible and reliable information bogs down efforts to effectively triage and remediate vulnerabilities. Using CVSS alone to assess risk has also been criticized [1][2] for lacking sufficient context to satisfy robust risk-based decision making. Although vulnerability management platforms such as Greenbone greatly reduce the burden on IT security teams, the overall process is still often plagued by time-consuming manual aggregation of ad-hoc vulnerability advisories that can often result in incomplete information.

Especially in the face of an ever-increasing number of vulnerabilities, aggregating ad-hoc security information risks being too slow and introduces more human error, increasing vulnerability exposure time and confounding risk-based vulnerability prioritization.

Lack of Standardization Results in Ad-hoc Intelligence

The current vulnerability disclosure process lacks a formal method of distinguishing between reliable vendor-provided information and information provided by arbitrary independent security researchers, such as partner CNAs. In fact, the official CVE website itself promotes the low requirements for becoming a CNA. This results in a large number of CVEs being issued without detailed context, forcing extensive manual enrichment downstream.

Which information is included depends on the CNA’s discretion and there is no way to classify the reliability of the information. As a simple example of the problem, the affected products in an ad-hoc advisory are often provided using a wide range of descriptors that need to be manually interpreted. For example:

  • Version 8.0.0 – 8.0.1
  • Version 8.1.5 and later
  • Version <= 8.1.5
  • Versions prior to 8.1.5
  • All versions < V8.1.5
  • 0, V8.1, V8.1.1, V8.1.2, V8.1.3, V8.1.4, V8.1.5
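
To see why such free-form descriptors are a problem, consider what a consumer must do today: write a separate interpretation rule for each phrasing. A minimal sketch with hypothetical helper functions (simple dotted version strings only) shows the kind of per-format logic that CSAF’s standardized version ranges are meant to eliminate:

```python
# Hypothetical helpers: each ad-hoc phrasing ("prior to", "X - Y", "<=")
# needs its own hand-written rule -- exactly the manual interpretation that
# a standardized range specifier removes.

def version_tuple(v):
    """'8.1.5' -> (8, 1, 5) for simple dotted version strings."""
    return tuple(int(p) for p in v.split("."))

def affected_before(installed, fixed):
    """Matches descriptors like 'Versions prior to 8.1.5'."""
    return version_tuple(installed) < version_tuple(fixed)

def affected_between(installed, low, high):
    """Matches descriptors like 'Version 8.0.0 - 8.0.1' (inclusive)."""
    return version_tuple(low) <= version_tuple(installed) <= version_tuple(high)

# The same installed version must be checked against differently-phrased ranges:
print(affected_before("8.1.4", "8.1.5"))            # True
print(affected_between("8.0.1", "8.0.0", "8.0.1"))  # True
```

Multiply this by every advisory format in the wild and the scaling problem becomes obvious; a single machine-readable range specifier needs exactly one parser.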

Scalability

Because vendors, assessors (CNAs), and aggregators utilize various distribution methods and formats for their advisories, the challenge of efficiently tracking and managing vulnerabilities becomes operationally complex and difficult to scale. Furthermore, the increasing rate of vulnerability disclosure exacerbates manual processes, overwhelms security teams, and increases the risk of error or delay in remediation efforts.

Difficult to Assess Risk Context

NIST SP 800-40r4 “Guide to Enterprise Patch Management Planning” Section 3 advises the application of enterprise level vulnerability metrics. Because risk ultimately depends on each vulnerability’s context – factors such as affected systems, potential impact, and exploitability – the current environment of ad-hoc security intelligence presents a significant barrier to robust risk-based vulnerability management.

How Does CSAF 2.0 Solve These Problems?

CSAF documents are essentially cyber threat advisories designed to optimize the vulnerability information supply chain. Instead of manually aggregating ad-hoc vulnerability data, product users can automatically aggregate machine-readable CSAF advisories from trusted sources into an Advisory Management System that combines core vulnerability management functions of asset matching and risk assessment. In this way, security content automation with CSAF aims to address the challenges of traditional vulnerability management by providing more reliable and efficient security intelligence, creating the potential for next-gen vulnerability management.

Here are some specific ways that CSAF 2.0 solves the problems of traditional vulnerability management:

More Reliable Security Information

CSAF 2.0 addresses the core problem of ad-hoc security intelligence by standardizing several aspects of a vulnerability disclosure. For example, the affected-version specifier fields allow standardized data such as the Version Range Specifier (vers), Common Platform Enumeration (CPE), Package URL (purl) specification, or a CycloneDX SBOM, as well as the product’s common name, serial number, model number, SKU or file hash, to identify affected product versions.

In addition to standardizing product versions, CSAF 2.0 also supports Vulnerability Exploitability eXchange (VEX) for product vendors, trusted CSAF providers, or independent security researchers to explicitly declare product remediation status. VEX provides product users with recommendations for remedial actions.

The explicit VEX status declarations are:

  • Not affected: No remediation is required regarding a vulnerability.
  • Affected: Actions are recommended to remediate or address a vulnerability.
  • Fixed: These product versions contain a fix for a vulnerability.
  • Under Investigation: It is not yet known whether these product versions are affected by a vulnerability. An update will be provided in a later release.
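
A downstream consumer can turn these machine-readable statuses directly into a remediation plan. The sketch below uses CSAF 2.0’s actual product_status field names, but the advisory content and the status-to-action mapping are illustrative assumptions:

```python
# Sketch of a consumer mapping CSAF/VEX product_status entries to actions.
# Field names follow CSAF 2.0's "product_status" object; product IDs and the
# ACTIONS mapping are invented for illustration.

vulnerability = {
    "cve": "CVE-2024-0000",
    "product_status": {
        "known_affected": ["CSAFPID-0001"],
        "fixed": ["CSAFPID-0002"],
        "known_not_affected": ["CSAFPID-0003"],
        "under_investigation": ["CSAFPID-0004"],
    },
}

ACTIONS = {
    "known_affected": "remediate or mitigate",
    "fixed": "update to the fixed version",
    "known_not_affected": "no action required",
    "under_investigation": "monitor for updates",
}

def triage(vuln):
    """Return {product_id: recommended action} for one vulnerability entry."""
    plan = {}
    for status, product_ids in vuln.get("product_status", {}).items():
        for pid in product_ids:
            plan[pid] = ACTIONS.get(status, "review manually")
    return plan

print(triage(vulnerability))
```

Because the statuses are explicit rather than buried in prose, this kind of triage can run automatically on every new advisory.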

More Effective Use of Resources

CSAF enables several upstream and downstream optimizations to the traditional vulnerability management process. The OASIS CSAF 2.0 documentation includes descriptions of several compliance goals that enable cybersecurity administrators to automate their security operations for more efficient use of resources.

Here are some compliance targets referenced in the CSAF 2.0 documentation that support more effective use of resources above and beyond the traditional vulnerability management process:

  • Advisory Management System: A software system that consumes data and produces CSAF 2.0 compliant advisory documents. This allows CSAF producing teams to assess the quality of data being ingested at a point in time, verify, convert, and publish it as a valid CSAF 2.0 security advisory. This allows CSAF producers to optimize the efficiency of their information pipeline while verifying accurate advisories are published.
  • CSAF Management System: A program that can manage CSAF documents and is able to display their details as required by CSAF viewer. At the most fundamental level, this allows both upstream producers and downstream consumers of security advisories to view their content in a human readable format.
  • CSAF Asset Matching System / SBOM Matching System: A program that integrates with a database of IT assets including Software Bill of Materials (SBOM) and can match assets to any CSAF advisories. An asset matching system serves to provide a CSAF consuming organization with visibility into their IT infrastructure, identify where vulnerable products exist, and optimally provide automated risk assessment and remediation information.
  • Engineering System: A software analysis environment within which analysis tools execute. An engineering system might include a build system, a source control system, a result management system, a bug tracking system, a test execution system and so on.
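
The asset-matching idea above can be sketched in a few lines. This toy pass intersects an asset inventory with the product IDs an advisory marks as known affected; real matching systems resolve products via the advisory’s product_tree (CPE, purl, SBOM references), which this simplified flat lookup table skips:

```python
# Toy CSAF asset-matching pass. The inventory, product IDs, and product names
# are invented; a real system resolves products from the advisory's
# product_tree rather than a hand-built lookup table.

inventory = {
    "web-01": {"product": "examplesoft", "version": "8.1.4"},
    "web-02": {"product": "examplesoft", "version": "8.1.5"},
    "db-01":  {"product": "otherapp",    "version": "2.0"},
}

# product_id -> (product, version), as it might be resolved from a product_tree
advisory_products = {
    "CSAFPID-0001": ("examplesoft", "8.1.4"),
}
known_affected = ["CSAFPID-0001"]

def match_assets(inventory, advisory_products, known_affected):
    """Return asset names whose (product, version) pair is known affected."""
    affected_pairs = {advisory_products[pid] for pid in known_affected}
    return sorted(
        name for name, asset in inventory.items()
        if (asset["product"], asset["version"]) in affected_pairs
    )

print(match_assets(inventory, advisory_products, known_affected))  # ['web-01']
```

The output is exactly what an operator needs from an asset matching system: which machines, concretely, this advisory applies to.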

Decentralized Cybersecurity Information

A recent outage of the NIST National Vulnerability Database (NVD) CVE enrichment process demonstrates how reliance on a single source of vulnerability information can be risky. CSAF is decentralized, allowing downstream vulnerability consumers to source and integrate information from a variety of sources. This decentralized model of intelligence sharing is more resilient to an outage by one information provider, while sharing the burden of vulnerability enrichment more effectively distributes the workload across a wider set of stakeholders.

Enterprise IT product vendors such as RedHat and Cisco have already created their own CSAF and VEX feeds, while government cybersecurity agencies and national CERT programs such as the German Federal Office for Information Security (BSI) and the US Cybersecurity & Infrastructure Security Agency (CISA) have also developed CSAF 2.0 sharing capabilities.

The decentralized model also allows for multiple stakeholders to weigh in on a particular vulnerability providing downstream consumers with more context about a vulnerability. In other words, an information gap in one advisory may be filled by an alternative producer that provides the most accurate assessment or specialized analysis.

Improved Risk Assessment and Vulnerability Prioritization

Overall, the benefits of CSAF 2.0 contribute to more accurate and efficient risk assessment, prioritization and remediation efforts. Product vendors can directly publish reliable VEX advisories giving cybersecurity decision makers more timely and trustworthy remediation information. Also, the aggregate severity (aggregate_severity) object in CSAF 2.0 acts as a vehicle to convey reliable urgency and criticality information for a group of vulnerabilities, enabling a more unified risk analysis, and more data driven prioritization of remediation efforts, reducing the exposure time of critical vulnerabilities.

Summary

Traditional vulnerability management processes are plagued by lack of standardization resulting in reliability and scalability issues and increasing the difficulty of assessing risk context and the likelihood of error.

The Common Security Advisory Framework (CSAF) 2.0 seeks to revolutionize the existing process of vulnerability management by enabling more reliable, automated vulnerability intelligence gathering. By providing a standardized machine-readable format for sharing cybersecurity vulnerability information, and decentralizing its source, CSAF 2.0 empowers organizations to harness more reliable security information to achieve more accurate, efficient, and consistent vulnerability management operations.


“Your company can be ruined in just 62 minutes”: this is how the security provider CrowdStrike advertises. Now the US manufacturer has itself caused an estimated multi-billion-dollar loss due to a faulty product update – at breakneck speed.

On 19 July at 04:09 (UTC), the security specialist CrowdStrike distributed a driver update for its Falcon software for Windows PCs and servers. Just 159 minutes later, at 06:48 UTC, Google Compute Engine reported the problem, which “only” affected certain Windows computers and servers running CrowdStrike Falcon software.

Almost five per cent of global air traffic was unceremoniously paralysed as a result, and 5,000 flights had to be cancelled. Supermarkets from Germany to New Zealand had to close because the checkout systems failed. A third of all Japanese McDonald’s branches closed their doors at short notice. Among the US authorities affected were the Department of Homeland Security, NASA, the Federal Trade Commission, the National Nuclear Security Administration and the Department of Justice. In the UK, even most doctors’ surgeries were affected.

The problem

The incident points to a burning problem: the centralisation of services and the increasing networking of the IT systems behind them make us vulnerable. If one service provider in the digital supply chain is affected, the entire chain can break, leading to large-scale outages. As a result, the Microsoft Azure cloud was also affected, with thousands of virtual servers unsuccessfully attempting to restart. Prominent people affected reacted quite clearly: Elon Musk, for example, wants to ban CrowdStrike products from all his systems.

More alarming, however, is the fact that security software is being used in areas for which it is not intended. Although the manufacturer advertises quite drastically about the threat posed by third parties, it accepts no responsibility for the problems that its own products can cause and their consequential damage. CrowdStrike expressly advises against using the solutions in critical areas in its terms and conditions. It literally states – and in capital letters: “THE OFFERINGS AND CROWDSTRIKE TOOLS ARE NOT FAULT-TOLERANT AND ARE NOT DESIGNED OR INTENDED FOR USE IN ANY HAZARDOUS ENVIRONMENT.”

The question of liability

Not suitable for critical infrastructures, but often used there: How can this happen? Negligent errors with major damage, but no liability on the part of the manufacturer: How can this be?

In the context of open source, it is often incorrectly argued that the question of liability in the event of malfunctions and risks is unresolved, even though most manufacturers who place open source on the market with their products do provide a warranty.

We can do a lot to make things better by tackling the problems caused by poor quality and dependence on individual large manufacturers. Of course, an open source supply chain is viewed critically, and that’s a good thing. But it has clear advantages over a proprietary supply chain, and the incident is a striking example of this. With appropriate toolchains, an open source company can easily be prevented from rolling out a scheduled update in which basic components simply do not work, and in practice this is exactly what happens.

The consequences

So what can we learn from this disaster and what are the next steps to take? Here are some suggestions:

  1. Improve quality: The best lever to put pressure on manufacturers is to increase the motivation for quality via stricter liability. The Cyber Resilience Act (CRA) offers initial approaches here.
  2. Safety first: In this case, this rule relates primarily to the technical approach to product development. Deeply intervening in customer systems is controversial in terms of security. Many customers reject this, but those affected obviously do not (yet); they have now suffered the damage. There are alternatives, which are also based on open source.
  3. Use software only as intended: If a manufacturer advises against use in a critical environment, then this is not just a phrase in the general terms and conditions, but a reason for exclusion.
  4. Centralisation with a sense of proportion: There are advantages and disadvantages to centralising the digital supply chain that need to be weighed up against each other. When dependency meets a lack of trustworthiness, risks and damage arise. User authorities and companies then stand helplessly in the queue, without alternatives and without their own sovereignty.

Most virtual servers in the Amazon Elastic Compute Cloud EC2 run a version of Linux that has been specially customised for the needs of the cloud. The latest generation of scanners from Greenbone has also been available for the Amazon Web Services operating system for a few weeks now. Over 1,900 additional, customised tests for the latest versions of Amazon Linux (Linux 2 and Linux 2023) have been integrated in recent months, explains Julio Saldana, Product Owner at Greenbone.

Significantly better performance thanks to Notus

Greenbone has been supplementing its vulnerability management with the Notus scan engine since 2022. The innovations in the architecture are primarily aimed at significantly increasing the performance of the security checks. Described as a “milestone” by Greenbone CIO Elmar Geese, the new scanner generation works in two parts: a generator queries the extensive software version data from the company’s servers and saves it in a handy JSON format. Because this no longer happens at runtime, but in the background, the actual scanner (the second part of Notus) can simply read and synchronise the data from the JSON files in parallel. Waiting times are eliminated. “This is much more efficient, requires fewer processes, less overhead and less memory,” explain the Greenbone developers.
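
The two-part design can be illustrated with a rough sketch: the generator’s output is a plain JSON feed of fixed versions, and the scan-time work reduces to a lookup and comparison. The package names, versions, and feed layout below are invented for illustration, not actual Notus data:

```python
import json

# Pre-generated by the "generator" half and shipped as a JSON feed
# (invented example data, not a real Notus feed):
feed = json.loads("""
{
  "openssl": {"fixed_version": "3.0.13"},
  "curl":    {"fixed_version": "8.5.0"}
}
""")

# Package versions found on the scan target (also invented):
installed = {"openssl": "3.0.11", "curl": "8.5.0"}

def v(s):
    """'3.0.13' -> (3, 0, 13) for simple dotted version comparison."""
    return tuple(int(p) for p in s.split("."))

# The "scanner" half: a pure lookup-and-compare against the prebuilt feed,
# with no network queries at runtime -- cheap enough to run in parallel.
vulnerable = [pkg for pkg, ver in installed.items()
              if pkg in feed and v(ver) < v(feed[pkg]["fixed_version"])]

print(vulnerable)  # ['openssl']
```

Moving the expensive data collection out of the scan loop is what eliminates the waiting times the developers describe.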

Amazon Linux

Amazon Linux is a fork of Red Hat Linux sources that Amazon has been using and customising since 2011 to meet the needs of its cloud customers. It is largely binary-compatible with Red Hat, initially based on Fedora and later on CentOS. Amazon Linux was followed by Amazon Linux 2, and the latest version is now available as Amazon Linux 2023. The manufacturer plans to release a new version every two years. The version history in the official documentation also includes a feature comparison, as the differences are significant: Amazon Linux 2023 is the first version to use systemd, for example. Greenbone’s vulnerability scan has been available on Amazon Linux from the very beginning.

Public-key cryptography underpins enterprise network security and thus, securing the confidentiality of private keys is one of the most critical IT security challenges for preventing unauthorized access and maintaining the confidentiality of data. While Quantum Safe Cryptography (QSC) has emerged as a top concern for the future, recent critical vulnerabilities like CVE-2024-3094 (CVSS 10) in XZ Utils and the newly disclosed CVE-2024-31497 (CVSS 8.8) in PuTTY are here and now – real and present dangers.

Luckily, the XZ Utils vulnerability was caught before widespread deployment into Linux stable release branches. However, by comparison, CVE-2024-31497 in PuTTY represents a much bigger threat than the aforementioned vulnerability in XZ Utils despite its lower CVSS score. Let’s examine the details to understand why and review Greenbone’s capabilities for detecting known cryptographic vulnerabilities.

A Primer On Public Key Authentication

Public-key infrastructure (PKI) is fundamental to a wide array of digital trust services such as Internet and enterprise LAN authentication, authorization, privacy, and application security. For public-key authentication, the client and server each need a pair of interconnected cryptographic keys: a private key and a public key. The public keys are openly shared between the two connecting parties, while the private keys are used to digitally sign messages sent between them; the associated public keys are then used to verify those signatures. This is how each party verifies the other’s identity and how a single symmetric session key is agreed upon for continued encrypted communication at an optimal connection speed.

In the client-server model of communication, if the client’s private key is compromised, an attacker can potentially authenticate to any resources that honor it. If the server’s private key is compromised, an attacker can potentially spoof the server’s identity and conduct Adversary-in-the-Middle (AitM) attacks.

CVE-2024-31497 Affects All Versions of PuTTY

CVE-2024-31497 in the popular Windows SSH client PuTTY allows an attacker to recover a client’s NIST P-521 secret key by capturing and analyzing approximately 60 digital signatures, due to biased ECDSA nonce generation. As of NIST SP 800-186 (2023), NIST ECDSA P-521 keys are still classified among those offering the highest cryptographic resilience and are recommended for use in various applications, including SSL/TLS and Secure Shell (SSH). So, a vulnerability in an application’s implementation of ECDSA P-521 authentication is a serious disservice to IT teams who have otherwise applied appropriately strong encryption standards.

In the case of CVE-2024-31497, the client’s digital signatures are subject to cryptanalysis attacks that can reveal the private key. While developing an exploit for CVE-2024-31497 is a highly skilled endeavor requiring expert cryptographers and computer engineers, proof-of-concept (PoC) code has been released publicly, indicating a high risk that CVE-2024-31497 may be actively exploited even by low-skilled attackers in the near future.
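
To get a feel for why nonce quality matters, consider the extreme case of a fully reused nonce, where just two signatures leak the key. The toy below works through the ECDSA signing algebra with plain modular arithmetic: there is no real curve here (the r value is a stand-in), and CVE-2024-31497 involves biased rather than reused nonces attacked with lattice techniques, but the underlying algebra is closely related:

```python
# Toy demonstration: if the ECDSA nonce k is reused, two signatures reveal
# the private key. Uses the P-256 group order as modulus but no actual
# elliptic-curve math; key, nonce, and hashes are illustrative values.

n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

d = 0x1CAFED00D   # "secret" signing key
k = 0xBADBADBAD   # nonce, wrongly reused for two messages
r = pow(7, k, n)  # stand-in for the curve-derived r (same k -> same r)

def sign(z):
    """ECDSA signing equation: s = k^-1 * (z + r*d) mod n."""
    return (pow(k, -1, n) * (z + r * d)) % n

z1, z2 = 0x1111, 0x2222       # two message hashes
s1, s2 = sign(z1), sign(z2)   # signatures sharing the same (k, r)

# Attacker sees (r, s1, z1) and (r, s2, z2), solves for k, then d:
#   s1 - s2 = k^-1 * (z1 - z2)  =>  k = (z1 - z2) / (s1 - s2)
#   s1 * k  = z1 + r*d          =>  d = (s1*k - z1) / r
k_rec = ((z1 - z2) * pow(s1 - s2, -1, n)) % n
d_rec = ((s1 * k_rec - z1) * pow(r, -1, n)) % n

print(d_rec == d)  # True: the private key is fully recovered
```

With merely *biased* nonces, each signature leaks only a few bits, which is why the PuTTY attack needs on the order of 60 signatures rather than two.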

Adversaries could capture a victim’s signatures by monitoring network traffic, but signatures may already be publicly available if PuTTY was used for signing commits of public GitHub repositories using NIST ECDSA P-521 keys. In other words, adversaries may be able to find enough information to compromise a private key from publicly accessible data, enabling supply-chain attacks on a victim’s software.

CVE-2024-31497 affects all versions of PuTTY from 0.68 (early 2017) up to but not including 0.81, and also affects FileZilla before 3.67.0, WinSCP before 6.3.3, TortoiseGit before 2.15.0.1, TortoiseSVN through 1.14.6, and potentially other products.

On the bright side, Greenbone is able to detect the various vulnerable versions of PuTTY with multiple Vulnerability Tests (VTs). Greenbone can identify Windows Registry Keys that indicate a vulnerable version of PuTTY is present on a scan target, and has additional tests for PuTTY for Linux [1][2][3], FileZilla [4][5], and versions of Citrix Hypervisor/XenServer [6] susceptible to CVE-2024-31497.

Greenbone Protects Against Known Encryption Flaws

Encryption flaws can be caused by weak cryptographic algorithms, misconfigurations, and flawed implementations of an otherwise strong encryption algorithm, such as the case of CVE-2024-31497. Greenbone includes over 6,500 separate Network Vulnerability Tests (NVTs) and Local Security Checks (LSCs) that can identify all types of cryptographic flaws. Some examples of cryptographic flaws that Greenbone can detect include:

  • Application Specific Vulnerabilities: Greenbone can detect over 6500 OS and application specific encryption vulnerabilities for which CVEs have been published.
  • Lack Of Encryption: Unencrypted remote authentication or other data transfers, and even unencrypted local services pose a significant risk to sensitive data when attackers have gained an advantageous position such as the ability to monitor network traffic.
  • Support For Weak Encryption Algorithms: Weak encryption algorithms or cipher suites no longer provide strong assurances against cryptanalysis attacks. When they are in use, communications are at higher risk of data theft and an attacker may be able to forge communication to execute arbitrary commands on a victim’s system. Greenbone includes more than 1000 NVTs to detect remote services using weak encryption algorithms.
  • Non-Compliant TLS Settings And HTTPS Security Headers: Greenbone has NVTs to detect when HTTP Strict Transport Security (HSTS) is not configured and verify web-server TLS policy.
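
As a simplified illustration of one such configuration check (not Greenbone’s actual NVT logic), a missing or disabled HSTS header can be flagged from response headers alone:

```python
# Simplified HSTS configuration check, operating on a dict of lower-cased
# HTTP response headers. Illustrative only; real NVTs are more thorough.

def check_hsts(headers):
    """Return (ok, finding) for a dict of lower-cased response headers."""
    hsts = headers.get("strict-transport-security")
    if hsts is None:
        return False, "HSTS header missing"
    if "max-age=0" in hsts.replace(" ", ""):
        return False, "HSTS disabled via max-age=0"
    return True, "HSTS configured: " + hsts

# A well-configured server vs. one that omits the header entirely:
print(check_hsts({"strict-transport-security": "max-age=31536000; includeSubDomains"}))
print(check_hsts({"content-type": "text/html"}))
```

The same pattern of fetching a response, normalizing it, and testing a policy generalizes to the TLS settings and other security headers mentioned above.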

Summary

SSH public-key authentication is widely considered one of the most secure – if not the most secure – remote access protocols, but two recent vulnerabilities have put this critical service in the spotlight. CVE-2024-3094, a trojan planted in XZ Utils, found its way into some experimental Linux repositories before its discovery, and CVE-2024-31497 in PuTTY allows a cryptographic attack to extract a client’s private key if an attacker can obtain roughly 60 digital signatures.

Greenbone can detect emerging threats to encryption such as CVE-2024-31497 and includes over 6,500 other vulnerability tests to identify a range of encryption vulnerabilities.

How is artificial intelligence (AI) changing the cybersecurity landscape? Will AI make the cyber world more secure or less secure? I was able to explore these questions at the panel discussion during the “Potsdam Conference for National Cybersecurity 2024” together with Prof. Dr. Sandra Wachter, Dr. Kim Nguyen and Dr. Sven Herpig. Does AI deliver what it promises today? And what does the future look like with AI?

HPI Security Panel

Cybersecurity is already difficult enough for many companies and institutions. Will the addition of artificial intelligence (AI) now make it even more dangerous for them or will AI help to better protect IT systems? What do we know? And what risks are we looking at here? Economic opportunities and social risks are the focus of both public attention and currently planned legislation. The EU law on artificial intelligence expresses many of the hopes and fears associated with AI.

Hopes and fears

We hope that many previously unresolved technical challenges can be overcome. Business and production processes should be accelerated, and machines should be able to handle increasingly complex tasks autonomously. AI can also offer unique protection in the military sector, saving many lives, for example in the form of AI-supported defense systems such as the Iron Dome.

On the other, darker side of AI are threats such as mass manipulation through deepfakes, sophisticated phishing attacks or simply the fear of job losses that goes hand in hand with any technical innovation. More and more chatbots are replacing service employees, image generators are replacing photographers and graphic designers, text generators are replacing journalists and authors, and generated music is replacing musicians and composers. In almost every profession, there is a fear of being affected sooner or later. This even applies to the IT sector, where a rich choice of jobs was previously perceived as a certainty. These fears are often very justified, but sometimes they are not.

In the area of cyber security, however, it is not yet clear to what extent autonomous AI can create more security and replace the urgently needed security experts or existing solutions. This applies to both attackers and defenders. Of course, the unfair distribution of tasks remains: While defenders want (and need) to close as many security gaps as possible, a single vulnerability is enough for the attackers to launch a successful attack. Fortunately, defenders can fall back on tools and mechanisms that automate a lot of work, even today. Without this automation, the defenders are lost. Unfortunately, AI does not yet help well enough. This is demonstrated by the ever-increasing damage caused by conventional cyber attacks, even though there are supposedly already plenty of AI defenses. On the other hand, there is the assumption that attackers are becoming ever more powerful and threatening thanks to AI.

For more cyber security, we need to take a closer look. We need a clearer view of the facts.

Where do we stand today?

So far, we know nothing about technical cyber attacks generated by artificial intelligence. There are currently no relevant, verifiable cases, only theoretically constructed scenarios. This may change, but as things stand today, this is the case. We don’t know of any AI that could currently generate sufficiently sophisticated attacks. What we do know is that phishing is very easy to implement with generative language models and that these spam and phishing emails appear to us to be more skillful, at least anecdotally. Whether this causes more damage than the already considerable damage, on the other hand, is not known. It is already terrible enough today, even without AI. However, we know that phishing is only ever the first step in accessing a vulnerability.

Member of the Greenbone Board Elmar Geese at the Potsdam Conference for national cybersecurity at Hasso-Plattner-Institute (HPI), picture: Nicole Krüger

How can we protect ourselves?

The good news is that an exploited vulnerability can almost always be found and fixed beforehand. Then even the best attack created with generative AI would come to nothing. And that’s how it has to be done. Because whether I am under threat from a conventional attack today or an AI in my network the day after tomorrow, a vulnerability in the software or in the security configuration will always be necessary for an attack to succeed. Two strategies then offer the best protection: first, being prepared for the worst-case scenario, for example through backups together with the ability to restore systems in a timely manner; second, looking for the gaps yourself every day and closing them before they can be exploited. Simple rule of thumb: every gap that exists can and will be exploited.

Role and characteristics of AI

AI systems are themselves very good targets for attacks. Just like the internet, they were not designed with “security by design” in mind. AI systems are just software and hardware, just like any other target. Unlike AI systems, however, conventional IT systems, whose functionality can be more or less understood with sufficient effort, can be repaired in a manner comparable to surgical interventions: they can be “patched”. This does not work with AI. If a language model does not know what to do, it does not produce a status or even an error message; it “hallucinates”. However, hallucinating is just a fancy term for lying, guessing, inventing something or doing strange things. Such an error cannot be patched; instead, the system has to be retrained, for example, without the cause of the error ever being clearly identified.

If it is very obvious and an AI thinks dogs are fish, for example, it is easy to at least recognize the error. However, if it has to state a probability as to whether it has detected a dangerous or harmless anomaly on an X-ray image, for example, it becomes more difficult. It is not uncommon for AI products to be discontinued because the error cannot be corrected. A prominent first example was Tay, a chatbot launched unsuccessfully twice by Microsoft, which was discontinued even faster the second time than the first.

What we can learn from this: lower the bar, focus on trivial AI functions and then it will work. That’s why many AI applications that are coming onto the market today are here to stay. They are useful little helpers that speed up processes and provide convenience. Perhaps they will soon be able to drive cars really well and safely. Or maybe not.

The future with AI

Many AI applications today are anecdotally impressive. However, they can only be created for use in critical fields with a great deal of effort and specialization. The Iron Dome only works because it is the result of well over ten years of development work. Today, it recognizes missiles with a probability of 99% and can shoot them down – and not inadvertently civilian objects – before they cause any damage. For this reason, AI is mostly used to support existing systems and not autonomously. Even if, as the advertising promises, they can formulate emails better than we can or want to ourselves, nobody today wants to hand over their own emails, chat inboxes and other communication channels to an AI that takes care of the correspondence and only informs us of important matters with summaries.

Will that happen in the near future? Probably not. Will it happen at some point? We don’t know. When the time perhaps comes, our bots will be writing messages to each other, our combat robots will be fighting our wars against each other, and AI cyber attackers and defenders will be competing against each other. When they realize that what they are doing is pointless, they might ask themselves what kind of beings they are hiring to do it. Then perhaps they will simply stop, set up communication lines, leave our galaxy and leave us helpless. At least we’ll still have our AI act and can continue to regulate “weak AI” that hasn’t made it away.

Why is Greenbone not a security provider like any other? How did Greenbone come about and what impact does Greenbone’s long history have on the quality of its vulnerability scanners and the security of its customers? The new video “Demystify Greenbone” provides answers to these questions in a twelve-minute overview. It shows why experts need their own specialised vocabulary for detecting vulnerabilities and what it means.

Greenbone is a technology-focussed company that promotes the open source idea to achieve maximum security for companies and institutions. In the video you will learn how Greenbone uses open source code to create a customised portfolio and which solutions are best suited to optimally secure your network. How do the feeds affect the solutions? What deployment models does Greenbone offer? Discover it. Discover Greenbone. Demystify Greenbone!

“Support for early crisis detection” was the topic of a high-profile panel on the second day of this year’s PITS Congress. On stage: Greenbone CEO Jan-Oliver Wagner together with other experts from the Federal Criminal Police Office, the German Armed Forces, the Association of Municipal IT Service Providers VITAKO and the Federal Office for Information Security.

On security crises f.l.t.r.: Dr. Jan-Oliver Wagner, CEO (Greenbone), Dr. Dirk Häger, Head of Operational Cyber Security Department (Federal Office for Information Security), Katrin Giebel, Head of Office (VITAKO Federal Association of Municipal IT Service Providers), Major General Dr. Michael Färber, Head of Planning and Digitization Department (Cyber & Information Command) and Carsten Meywirth, Head of Cybercrime Department (Federal Criminal Police Office).

Once again this year, Behörden Spiegel organized its popular conference on Public IT Security (PITS). Hundreds of security experts gathered at the renowned Hotel Adlon in Berlin for two days of forums, presentations and an exhibition of IT security companies. In 2024, the motto of the event was “Security Performance Management” – and so it was only natural that Greenbone, as a leading provider of vulnerability management, was also invited (as in 2023), for example in the panel on early crisis detection, which Greenbone CEO Dr. Jan-Oliver Wagner opened with a keynote speech.

In his presentation, Jan-Oliver Wagner explained his view on strategic crisis detection, talking about the typical “earthquakes” and the two most important components: Knowing where vulnerabilities are, and providing technologies to address them.

Greenbone has built up this expertise over many years, also making it available to the public as open source, always working together with important players on the market. For example, contacts with the German Federal Office for Information Security (BSI) were there right from the start: “The BSI already had the topic of vulnerability management on its radar when IT security was still limited to firewalls and antivirus software,” said Wagner, praising the BSI, the German government’s central authority for IT security.

Today, the importance of two factors is clear: “Every organization must know how and where it is vulnerable, know its own response capabilities and keep working on improving them continuously. Cyber threats are like earthquakes. We can’t prevent them, we can only prepare for them and respond to them in the best possible way.”

“A crisis has often happened long before the news breaks”

According to Jan-Oliver Wagner’s definition, the constant cyber threat evolves into a veritable “crisis” when, for example, a threat “hits a society, economy or nation where many organizations have a lot of vulnerabilities and a low ability to react quickly. Speed is very important. You have to be faster than the attack happens.” The other participants on the panel also addressed this and used the term “getting ahead of the wave”.

The crisis is often already there long before it is mentioned in the news; individual organizations need to protect themselves and prepare so that they can react to unknown situations on a daily basis. “A cyber nation supports organizations and the nation by providing the means to achieve this state,” says Jan-Oliver Wagner.

Differences between the military and local authorities

Major General Dr Michael Färber, Head of Planning and Digitalization, Cyber & Information Space Command, explained the Bundeswehr’s perspective: According to him, a crisis occurs when the measures and options for responding are no longer sufficient. “Then something develops into a crisis.”

From the perspective of small cities and similar local authorities, however, the picture is different, according to Katrin Giebel, Head of VITAKO, the Federal Association of Municipal IT Service Providers. “80 percent of administrative services take place at the municipal level. Riots would already break out if vehicle registration alone were unavailable.” Cities and municipalities keep being hit hard by cyber attacks, and crises start much earlier here: “For us, threats are almost the same as a crisis.”

Massive negligence in organizations is frightening, says BSI

The BSI, on the other hand, defines a “crisis” as a situation in which an individual organization is unable or no longer able to solve a problem on its own. Dr Dirk Häger, Head of the Operational Cyber Security Department at the BSI: “As soon as two departments are affected, the crisis team convenes. For us, a crisis exists as soon as we cannot solve a problem with the standard organization.” This gives a crucial role to those employees who decide whether or not to convene a meeting. “You just reach a point where you agree: now we need the crisis team.”

Something that Häger finds very frightening, however, is how long successful attacks continue to take place after crises have actually already been resolved, for example in view of the events surrounding the Log4j vulnerability. “We put a lot of effort into this, especially at the beginning. The Log4j crisis was over, but many organizations were still vulnerable and had inadequate response capabilities. But nobody investigates it anymore,” complains the head of department from the BSI.

How to increase the speed of response?

Asked by moderator Dr. Eva-Charlotte Proll, editor-in-chief and publisher at Behörden Spiegel, what would help in view of these insights, he described the typical procedure and decision-making process using the recent Check Point incident as an example: “Whether something is a crisis or not is expert knowledge. In this case, it was a flaw that was initiated and exploited by state actors.” Action was needed at the latest when the Check Point backdoor began to be exploited by other (non-state) attackers. Knowledge of this specific threat situation is also of key importance for those affected.

Jan-Oliver Wagner also once again emphasized the importance of the knowledge factor. Often, the threat situation is not discussed appropriately. At the beginning of 2024, for example, an important US authority (NIST) reduced the amount of information in its vulnerability database – a critical situation for every vulnerability management provider and their customers. Furthermore, the fact that NIST is still not classified as critical infrastructure shows that action is needed.

The information provided by NIST is central to the National Cyber Defense Center’s ability to create a situational picture as well, agrees Färber. This also applies to cooperation with the industry: several large companies “boast that they can deliver exploit lists to their customers within five minutes. We can improve on that, too.”

Carsten Meywirth, Head of Department at the BKA, emphasized the differences between state and criminal attacks, also using the example of the supply chain attack on SolarWinds. Criminal attackers often have little interest in causing a crisis because too much media attention might jeopardize their potential financial returns. And security authorities need to stay ahead of the wave – which requires intelligence and the potential to disrupt the attackers’ infrastructure.

BKA: International cooperation

According to Major General Färber, Germany is always among the top 4 countries in terms of attacks. The USA is always in first place, but states like Germany end up in the attackers’ dragnets so massively simply because of their economy’s size. This is what makes outstanding international cooperation in investigating and hunting down perpetrators so important. “Especially the cooperation of Germany, the USA and the Netherlands is indeed very successful, but the data sprints with the Five Eyes countries (USA, UK, Australia, Canada and New Zealand) are also of fundamental importance, because that is where intelligence findings come to the table and are shared and compared. Successful identification of perpetrators is usually impossible without such alliances,” says Michael Färber. But Germany is well positioned with its relevant organizations: “We have significantly greater redundancy than others, and that is a major asset in this fight.” In the exemplary “Operation Endgame“, a cooperation between the security authorities and the private sector launched by the FBI, the full power of these structures is now becoming apparent. “We must and will continue to expand this.”

“We need an emergency number for local authorities in IT crises”

Getting ahead of the situation like this is still a dream of the future for the municipalities. They are heavily reliant on inter-federal support and a culture of cooperation in general. An up-to-date picture of the situation is “absolutely important” for them, Katrin Giebel from VITAKO reports. As a representative of the municipal IT service providers, she is very familiar with many critical situations and the needs of the municipalities – from staff shortages to a lack of expertise or an emergency number for IT crises that is still missing today. Such a hotline would not only be helpful, but it would also correspond to the definition from Wagner’s introductory presentation: “A cyber nation protects itself by helping companies to protect themselves.”

BSI: prevention is the most important thing

Even if the BSI does not see itself in a position to fulfil such a requirement on its own, this decentralized way of thinking has always been internalized. But whether the BSI should be developed into a central office in this sense is something that needs to be discussed first, explains Dirk Häger from the BSI. “But prevention is much more important. Anyone who puts an unsecured system online today will quickly be hacked. The threat is there. We must be able to fend it off. And that is exactly what prevention is.”

Wagner adds that information is key to this. And distributing information is definitely a task for the state, which is where he sees the existing organizations in the perfect role.

Winter is coming: The motto of House Stark from the series “Game of Thrones” indicates the approach of an undefined disaster. One could also surmise something similar when reading many articles that are intended to set the mood for the upcoming NIS2 Implementation Act (NIS2UmsuCG). Is NIS2 a roller of ice and fire that will bury the entire European IT landscape and from which only those who attend one of the countless webinars and follow all the advice can save themselves?

NIS2 as such is merely a directive issued by the EU. It is intended to ensure the IT security of operators of important and critical infrastructures, which may not yet be optimal, and to increase cyber resilience. Based on this directive, the member states are now called upon to create a corresponding law that transposes this directive into national law.

What is to be protected?

The NIS Directive was introduced by the EU back in 2016 to protect industries and service providers relevant to society from attacks in the cybersphere. This regulation contains binding requirements for the protection of IT structures in companies that operate as critical infrastructure (KRITIS) operators. These are companies that play an indispensable role within society because they operate in areas such as healthcare services, energy supply and transport. In other words, areas where deliberately caused disruptions or failures can lead to catastrophic situations – raise your hand if your household is equipped to survive a power outage lasting several days with all its consequences…

As digitalisation continues to advance, the EU had to create a follow-up regulation (NIS2), which on the one hand places stricter requirements on information security, but on the other hand also covers a larger group of companies that are “important” or “particularly important” for society. These companies are now required to fulfil certain standards in information security.

Although the NIS2 Directive was already adopted in December 2022, the member states have until 17 October 2024 to pass a corresponding implementing law. Germany will probably not make it by then. Nevertheless, there is no reason to sit back. The NIS2UmsuCG is coming, and with it increased demands on the IT security of many companies and institutions.

Who needs to act now?

Companies from four groups are affected. Firstly, there are the particularly important organisations with 250 or more employees, or an annual turnover of at least 50 million euros and a balance sheet total of at least 43 million euros. A company is considered particularly important if it fulfils these criteria and is active in one of the following sectors: energy, transport, finance/insurance, health, water/sewage, IT and telecommunications, or space.

In addition, there are the important organisations with 50 or more employees, or a turnover of at least 10 million euros and a balance sheet total of at least 10 million euros. A company is considered important if it fulfils these criteria and is active in one of the following sectors: postal/courier, chemicals, research, manufacturing (medical/diagnostics, IT, electrical, optical, mechanical engineering, automotive/parts, vehicle construction), digital services (marketplaces, search engines, social networks), food (wholesale, production, processing) or waste disposal (waste management).
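The size thresholds described above can be sketched as a small decision function. This is only an illustration of the criteria as summarised here (sector membership must be checked separately against the respective lists, and the final law may define details differently):

```python
# Sketch of the NIS2 size thresholds from the text above (figures in euros).
# Sector membership is a separate check and is not modelled here.
def nis2_size_class(employees: int, turnover: float, balance_total: float) -> str:
    """Classify a company by the NIS2 size criteria alone."""
    if employees >= 250 or (turnover >= 50e6 and balance_total >= 43e6):
        return "particularly important"
    if employees >= 50 or (turnover >= 10e6 and balance_total >= 10e6):
        return "important"
    return "not covered by size criteria"
```

For example, a 60-person company with modest turnover already lands in the "important" category by headcount alone.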

In addition to particularly important and important facilities, there are also critical facilities, which continue to be defined by the KRITIS methodology. Federal facilities are also regulated.

What needs to be done?

In concrete terms, this means that all affected companies and institutions, regardless of whether they are “particularly important” or “important”, must fulfil a series of requirements and obligations that leave little room for interpretation and must therefore be strictly observed. Action must be taken in the following areas:

Risk management

Affected companies are obliged to introduce comprehensive risk management. In addition to access control, multi-factor authentication and single sign-on (SSO), this also includes training and incident management as well as an ISMS and risk analyses. This also includes vulnerability management and the use of vulnerability and compliance scans.

Reporting obligations

All companies are obliged to report “significant security incidents”: these must be reported to the BSI reporting centre immediately, but within 24 hours at the latest. Further updates must be made within 72 hours and 30 days.
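The reporting timeline above (initial report within 24 hours, update within 72 hours, final report within 30 days) translates into fixed deadlines relative to the moment an incident is detected. A minimal sketch of that calculation:

```python
from datetime import datetime, timedelta

# Derive the NIS2 reporting deadlines from the detection time, per the
# 24-hour / 72-hour / 30-day scheme described above.
def nis2_deadlines(detected_at: datetime) -> dict:
    return {
        "initial_report": detected_at + timedelta(hours=24),
        "incident_update": detected_at + timedelta(hours=72),
        "final_report": detected_at + timedelta(days=30),
    }
```

An incident detected at noon on 17 October thus requires the initial report by noon on 18 October at the latest.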

Registration

Companies are obliged to determine for themselves whether they are affected by the NIS2 legislation and to register themselves within a period of three months. Important: Nobody tells a company that it falls under the NIS2 regulation and must register. The responsibility lies solely with the individual companies and their directors.

Evidence

It is not enough to simply take the specified precautions; appropriate evidence must also be provided. Important and particularly important facilities will be inspected by the BSI on a random basis, and appropriate documentation must be submitted. KRITIS facilities will be inspected on a regular basis every three years.

Duty to inform

In future, it will no longer be possible to sweep security incidents under the carpet. The BSI will be authorised to instruct companies to inform both their customers and the public about security incidents.

Governance

Managing directors are obliged to approve risk management measures. Training on the topic will also become mandatory. Particularly serious: Managing directors are personally liable with their private assets for breaches of duty.

Sanctions

In the past, companies occasionally preferred to accept the vague possibility of a fine rather than making concrete investments in cyber security measures, as the fine seemed quite acceptable. NIS2 now counters this with new offences and in some cases drastically increased fines. This is further exacerbated by the personal liability of managing directors.

As can be seen, the expected NIS2 implementation law is a complex structure that covers many areas and whose requirements can rarely be covered by a single solution.

What measures should be taken as soon as possible?

Continuously scan your IT systems for vulnerabilities. This will uncover, prioritise and document security gaps as quickly as possible. Thanks to regular scans and detailed reports, you create the basis for documenting the development of the security of your IT infrastructure. At the same time, you fulfil your obligation to provide evidence and are well prepared in the event of an audit.

On request, experts can take over the complete operation of vulnerability management in your company. This also includes services such as web application pentesting, which specifically identifies vulnerabilities in web applications. This covers an important area in the NIS2 catalogue of requirements and fulfils the requirements of § 30 (risk management measures).

Conclusion

There is no single, all-encompassing measure that will immediately make you fully NIS2-compliant. Rather, there are a number of different measures that, taken together, provide a good basis. One component of this is vulnerability management with Greenbone. If you keep this in mind and put the right building blocks in place in good time, you will be on the safe side as an IT manager. And winter can come.