Greenbone is stepping up its commitment to open source and the community edition of its vulnerability management software. In addition to the open source code on GitHub, Greenbone now also provides pre-configured and tested Docker containers.

Official containers from the manufacturer itself

The Greenbone Community Containers are regularly built automatically and are also available for ARM and Raspberry Pi.

Björn Ricks, Senior Software Developer at Greenbone, sees this as a “big improvement for admins who just want to give Greenbone a try. Our official containers replace the many different Docker images that exist on the web with an official, always up-to-date, always-maintained version of Greenbone.”

Official Docker Container for Greenbone Community Edition

Hi Björn, what is your role at Greenbone?

Björn Ricks: One of my current tasks is to provide community container builds at Greenbone. Taking care of the community has always been important to me, and for a long time I have wanted to make sure that we also provide “official” Docker images of Greenbone. I’m very pleased that this has now worked out.

What is the benefit of the images for the community?

Björn Ricks: We make it much easier for administrators and users who want to test Greenbone. The installation now works completely independently of the operating system used: just download the Docker Compose file that describes the services, run it, open the browser and scan the local network. I think that’s a much lower barrier to entry, ideal even for anyone who doesn’t yet know the details and capabilities of our products.
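In practice, this quick start boils down to a few commands. The following is a sketch only: it assumes the Docker Compose file from the Greenbone documentation has already been saved as docker-compose.yml in the current directory, and the project name and port shown are typical defaults rather than guaranteed values.

```shell
# Pull the current signed images referenced by the Compose file
docker compose -f docker-compose.yml -p greenbone-community-edition pull

# Start all services in the background
docker compose -f docker-compose.yml -p greenbone-community-edition up -d

# The web interface is then usually reachable in the browser,
# e.g. at http://127.0.0.1:9392
```

The `-p` flag groups all services under one project name, so the whole stack can later be stopped or removed with a single `docker compose -p greenbone-community-edition down`.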

Why does Greenbone now provide containers itself? There were already some on the net, weren’t there?

Björn Ricks: Yes, that’s right, but we found out that some people were unsure about the content, legitimacy and maintenance of these images. That’s why we decided to offer Docker images signed by us with verified and secured content.
The container images already circulating online are at different version levels and, even more so, of varying quality. It is often impossible to tell from the outside whether an image is “any good” or not. You also have to trust the external authors and maintainers to know what they are doing and not to introduce additional security vulnerabilities into their images. Only we, as the producers of our own software, can guarantee that the published container images are up to date and meet the desired quality standard.

Does Greenbone also plan to provide Docker images for its commercial product line, Greenbone Enterprise Appliances?

Björn Ricks: That depends on requests from our commercial customers. The Greenbone Community Edition includes access to the community feed with around 100,000 vulnerability tests. Our commercial feed contains even more tests, including those for many proprietary products that our customers use.

We have found that our customers are happy with our appliances, our virtual appliances, and our cloud solution – all of which qualify for use of the commercial feed subscription. However, this could change, and if it does, we will consider offering Docker containers to commercial customers.

How often are the images updated and what feed is included?

Björn Ricks: The images are built and published directly from the source code repositories. So they are always up to date and contain all patches. At the moment only the community feed is available for the images, but this might change in the future.

Where can I get the images and the documentation?

Björn Ricks: The Docker Compose file for orchestrating the services is linked in the documentation. The Dockerfiles for building the Docker images can also be found on GitHub in the corresponding repositories and are quite easy to download, for example: here.

Greenbone, the global leader in open source vulnerability management solutions, has launched a community portal for its user and developer community, making the extensive information available for community editions clearer and easier to access.

Community Portal Greenbone

Who is the portal for?

There, vulnerability management experts invite users, developers and all IT professionals who deal professionally with security and protection against hackers to browse forums, blogs, news and documentation and to help shape the pages.

Central point of contact
“Our new Community Portal is the central place where users, experts, Greenbone employees and anyone else interested can meet and get up-to-the-minute information about the products, the company or new features,” explains Greenbone’s Community Manager DeeAnn Little: “We want the portal to be a home for the large, worldwide Greenbone community, with all the links and information anyone who works with our vulnerability management tools needs.”

What the new portal offers
For both Greenbone OpenVAS and the Greenbone Community Edition, numerous instructions on installing and configuring the community versions can be found under “Getting started”. In addition, there are news and updates, for example about the recently released Docker containers of the Community Edition, current figures on Greenbone installations on a world map, and a completely revised forum with new categories and a blog.

For the community, with the community
“All this would not be possible without the numerous contributions from the Greenbone community, but at the same time this is only the first step,” explains Little: “In the future, we will also have our experts explain technical details and present new features here.”

Greenbone invites the large community to provide input and suggestions on which topics are relevant and of interest to them. Little explains:

“We welcome all input and all suggestions, ideas and proposals for improvement – that is exactly what the portal is here for. Send us your questions, any questions! What have we missed? What would you like to see? How can we make the portal, the forum and the new pages even better? What topics would you like to see – what should we report on?” You can leave your feedback here; we will be glad to receive it.

Greenbone Community Forum in a new look

Greenbone has also integrated the popular User Forum into the Community Portal. With the new look, it will continue to provide users of Greenbone’s software – regardless of their technical background – with a platform for ideas, mutual help, but also feedback.

“The forum is a place where users can meet and help each other as equals – it’s a place of exchange where we can always learn, too,” Little explains. “Whether it’s a beginner’s question, more in-depth howtos, or getting started guides, many a user will find help from experienced users in the forum, even in exotic setups.”

Greenbone, a world leader in open source vulnerability management software, has released its latest scanner, Notus.

“With Notus, a milestone for performing extensive software version comparisons has been created in recent years,” explains CIO Elmar Geese.

With Notus, Greenbone is also responding to customer requests for better performance in version checks. Whether a security vulnerability is dangerous for a company depends mainly on the installed software versions and their patch level. In many cases, a vulnerability scanner must therefore match a large number of software versions and detect combinations of them. As the complexity of the setups increases, this test becomes more and more extensive. Because the overall result of the scan depends heavily on this data collection, Notus performs such scans much faster than any of its predecessors.

Faster thanks to JSON

“The scanner works through the relevant servers and captures the software running there. For the actual scan, it essentially only needs the information about affected and fixed packages,” explains Björn Ricks, Senior Software Developer at Greenbone. “With the previously used scanner and its predecessors, we usually had to start a separate process per version check, meaning a separate, manually created script. Generating these scripts automatically is time-consuming.” Notus, on the other hand, only loads the data it needs from JSON files. Ricks sums it up: “Notus is significantly more efficient, requires fewer processes, less overhead, less memory, …”

CIO Geese also declares the Notus scanner to be a “milestone for our users; it improves performance significantly. Our well-known high detection quality as well as performance – central goals of our product strategy – are optimally supported by the new scanner.”

Notus, Greenbone and OpenVAS

The Notus project consists of two parts: a Notus generator, which creates the JSON files containing information about vulnerable RPM/Debian packages, and the Notus scanner, which loads these JSON files and interprets the information from them.
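The principle of the scanner part can be sketched in a few lines of Python. Note that the JSON layout, the field names and the naive version comparison below are invented for illustration; the actual Notus data format differs, and real package checks use RPM- and Debian-specific version comparison rules.

```python
import json

# Hypothetical advisory data as a generator might emit it: for each
# advisory, the packages affected and the version that fixes the issue.
ADVISORIES_JSON = """
{
  "advisories": [
    {
      "id": "EXAMPLE-0001",
      "fixed_packages": [
        {"name": "openssl", "fixed_version": "1.1.1k"}
      ]
    }
  ]
}
"""

def version_key(version: str):
    # Naive version comparison for this sketch only; real package checks
    # follow RPM/Debian comparison semantics.
    return [int(part) if part.isdigit() else part
            for part in version.replace("-", ".").split(".")]

def vulnerable_packages(installed: dict, advisories: dict) -> list:
    """Return (package, advisory id) pairs where the installed version
    is older than the fixed version listed in an advisory."""
    findings = []
    for advisory in advisories["advisories"]:
        for package in advisory["fixed_packages"]:
            current = installed.get(package["name"])
            if current is not None and \
                    version_key(current) < version_key(package["fixed_version"]):
                findings.append((package["name"], advisory["id"]))
    return findings

# Package list as captured from a scanned host (invented data)
installed = {"openssl": "1.1.1f", "bash": "5.0"}
print(vulnerable_packages(installed, json.loads(ADVISORIES_JSON)))
```

Because the advisory data is plain JSON, no per-check script or separate process is needed: one scanner process matches the whole package list against all advisories, which is the efficiency gain described above.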

OpenVAS, the Open Vulnerability Assessment System, was created in 2005, when the development team of the Nessus vulnerability scanner decided to stop working under open source licenses and move to a proprietary business model.

Since 2008, Greenbone has been providing professional vulnerability scanning support. For this purpose, Greenbone took over the further development of OpenVAS, added several software components and thus transformed OpenVAS into a comprehensive vulnerability management solution that still carries the values of free software. The first appliances came onto the market in spring 2010.

Greenbone is now a TISAX participant, and its Information Security Management System (ISMS) and data protection processes are certified within the German automotive industry’s TISAX scheme. “We have taken this step as an effort to provide the best possible protection of sensitive and confidential information for our customers – the next logical step after being successfully certified for worldwide accepted international industry standards like ISO 27001 and ISO 9001,” says Dr. Jan-Oliver Wagner, CEO of Greenbone. The results are available on the ENX portal using the Scope ID S3LW9L and the Assessment ID A1P7V9. TISAX and TISAX results are not intended for the general public.

TISAX, the “Trusted Information Security Assessment Exchange”, is a mechanism for checking and exchanging test results according to industry-specific standards. Originally created as a system for the exchange of standardized test results in the automotive industry, it is optimized for the risk assessment of suppliers. TISAX is developed and governed by the ENX Association and published by the German Association of the Automotive Industry (VDA). Its focus lies on secure information processing between business partners, protection of prototypes and data protection in accordance with the EU’s General Data Protection Regulation (GDPR) for potential deals between car manufacturers and their service providers or suppliers.

As a crucial part of a secure supply chain, TISAX is a standard for Information Security Management Systems (ISMS), originally derived from the ISO/IEC 27001 standard in 2017, but it has since diverged. For the automotive industry, TISAX brings standardization and quality assurance, and it guarantees that information security measures are assessed by audit providers in accordance with the VDA standards. Audits according to TISAX, especially for service providers and suppliers, are carried out by so-called “TISAX audit service providers” and come with three maturity levels, an overview of which can be found in the TISAX Participant Handbook and on the websites of certification providers such as Adacor (German only).

Greenbone’s certifications increase our products’ value for our customers, not just by saving time and money, but also by proving our outstanding security level and high standards. Elmar Geese, CIO at Greenbone: “With TISAX, we document our independently audited security status. Customers do not need to do individual assessments, work with lengthy questionnaires or all the other things needed in a bottom-up audit. We guarantee that we meet their security requirements.”

Therefore, Greenbone follows the question catalogue of information security of the German Association of the Automotive Industry (VDA ISA). The assessment was conducted by an audit provider. The result is exclusively retrievable via the ENX portal (Scope ID: S3LW9L, Assessment ID: A1P7V9).

In networked production, IT and OT are growing closer and closer together. Where once a security gap “only” caused a data leak, today the entire production can collapse. Those who carry out regular active and passive vulnerability scans can protect themselves.

What seems somewhat strange in the case of physical infrastructure – who would stage a break-in to test their alarm system? – is a tried and tested method in IT for identifying vulnerabilities. This so-called active scanning can be performed daily and automatically. Passive scanning, on the other hand, detects an intrusion in progress, because every cyber intrusion also leaves traces, albeit often hidden ones.

Controlling the Traffic

Firewalls and antivirus programs, for example, use passive scanning to check traffic reaching a system. This data is then checked against a database. Information about malware, unsafe requests and other anomalies is stored there. For example, if the firewall receives a request from an insecure sender that wants to read out users’ profile data, it rejects the request. The system itself is unaware of this because the passive scan does not access the system but only the data traffic.
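The matching step can be illustrated with a minimal sketch; the signature database and the request format here are invented for illustration only, and real products use far richer rule sets:

```python
# Minimal sketch of passive scanning: traffic is inspected out-of-band
# and matched against a signature database. The monitored system itself
# is never contacted and spends no extra computing power.
SIGNATURE_DB = {
    "sql_injection": "' OR 1=1",
    "path_traversal": "../../",
}

def inspect(request: str) -> list:
    """Return the names of all signatures that match the request."""
    return [name for name, pattern in SIGNATURE_DB.items()
            if pattern in request]

print(inspect("GET /profile?id=' OR 1=1 --"))  # flags the SQL injection attempt
print(inspect("GET /index.html"))              # clean request: no findings
```

A firewall built on this idea would reject any request for which `inspect` returns a non-empty list, exactly as in the profile-data example above.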

The advantage is that the system itself does not have to expend any additional computing power. Despite the scan, the full bandwidth remains available. This is particularly useful for critical components, which should have the highest possible availability: the fewer additional activities they perform, the better.

The disadvantage of passive scanning is that it only sees systems that actively communicate by themselves. This does not include office software or PDF readers, for example. And even services that do communicate do so primarily via their main functions. Functions with vulnerabilities that are rarely or never used in normal operation remain invisible, or only become visible once an attack is already in progress.

Checking the Infrastructure

Active scans work differently and simulate attacks. They make requests to the system and thereby try to trigger different reactions. For example, the active scanner sends a request for data transfer to various programs in the system. If one of the programs responds and forwards the data to the simulated unauthorized location, the scanner has found a security hole.
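The principle can be sketched as follows; the “service” is simulated by a plain function so the example runs without a network, and all names and messages are invented. A real scanner would speak the actual protocol over a socket.

```python
# Sketch of an active check: the scanner sends a crafted request and
# classifies the target by its response.
def simulated_service(request: str) -> str:
    # A deliberately vulnerable service: it hands out data without
    # checking who is asking.
    if request.startswith("EXPORT"):
        return "DATA: user records ..."
    return "DENIED"

def active_check(send) -> bool:
    """Return True if the probe caused the target to expose data to an
    unauthorized requester, i.e. a security hole was found."""
    response = send("EXPORT profile-data FOR unauthorized-host")
    return response.startswith("DATA:")

print("vulnerable" if active_check(simulated_service) else "ok")
```

The key design point is the interaction itself: unlike the passive approach, the scanner generates requests of its own, which is why it can also find flaws in functions that never appear in normal traffic.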

Differences between active and passive vulnerability scans

Left: Active scans send queries to the system in an attempt to trigger different responses. Right: Passive scans check the traffic reaching a system and match this data against a database.

The advantage: the data quality that can be achieved with active scanning is higher than with passive scanning. Since interaction takes place directly with software and interfaces, problems can be identified in programs that do not normally communicate directly with the network. This is also how vulnerabilities are discovered in programs such as Office applications.

However, with direct interaction, systems have to handle extra requests, which may affect the basic functions of a program. Operational technology such as machine control systems, for example, is not necessarily designed to perform secondary tasks. Here, scanning under supervision, supplemented by continuous passive scanning, is recommended.

Scanning Actively, but Minimally Invasive

Nevertheless, active scanning is essential for operational cyber security. This is because the risk posed by the short-term overuse of a system component is small compared to a production outage or data leak. Moreover, active scans not only uncover vulnerabilities, they can also enhance passive scans. For example, the vulnerabilities that are detected can be added to firewall databases. This also helps other companies that use similar systems.

Active and Passive Scanning Work Hand in Hand

Since the passive scanner can also provide the active scanner with helpful information, such as details about mobile devices or properties of network services, the two security tools can be considered complementary. What they have in common is that they always automatically get the best out of the given situation in the network. For both scanning techniques, it does not matter which or how many components and programs make up the network: both technologies recognize this by themselves and adjust to it. Optimized tuning of network and scanners only comes into play at higher security levels.

So it is not a question of whether to use one or the other. Both methods are necessary to ensure a secure network environment. A purely passive approach will not help in many cases. Proactive vulnerability management requires active scans and tools to manage them. This is what Greenbone’s vulnerability management products provide.

Open source continues its rise among the vast majority of companies, software manufacturers and providers. However, this advance also increases the importance of monitoring the supply chain of software developed by third parties in line with open-source traditions. Not everyone using open-source software follows all the tried and true rules, however. Greenbone can help track down such mistakes. This blog post explains the problem and how to avoid it.

Supply Chains in Open-Source Software


Vulnerabilities in Log4j, Docker or NPM

At the end of 2021, the German Federal Office for Information Security (BSI) officially sounded the alarm about a remotely exploitable vulnerability in the logging library Log4j. At the time, critics of open-source software promptly spoke out: open-source software like Log4j was implicitly insecure and a practically incalculable risk in the supply chain of other programs.

Although the open-source developers themselves fixed the problem within a few hours, countless commercial products still contain outdated versions of Log4j – with no chance of ever being replaced. This is not an isolated case: recently, the story of a developer for NPM (Node Package Manager, a software package format for the web server Node.js) caused a stir; with his actually well-meant protest against the war in Ukraine, he massively shook trust in the open-source supply chain and the development community.

Open Source Under Criticism

It was not the first time that NPM became a target. The package manager was already affected by attacks at the end of 2021. At that time, developer Oleg Drapeza published a bug report on GitHub after finding malicious code to harvest passwords in the UAParser.js library. Piece by piece, the original author of the software, Faisal Salman, was able to reconstruct that someone had hacked into his account in NPM’s package system and placed malicious code there. The problem: UAParser.js is a module for Node.js and is used in millions of setups worldwide. Accordingly, the circle of affected users was enormous.

Again, the open-source critics said that open-source software like UAParser.js is implicitly insecure and a practically incalculable risk in the supply chain of other programs. Even more: open-source developers, according to the explicit accusation, incorporate external components such as libraries or container images far too carelessly and hardly give a thought to the associated security implications. For this reason, their work is inherently vulnerable to security attacks, especially in the open-source supply chain. Alyssa Shames discusses the problem using the example of containers and describes the dangers in detail.

The Dark Side of the Bazaar

DevOps and Cloud Native have indeed had a major impact on the way development teams work in recent years. Integrating components that already exist in the community into one’s own application, instead of programming comparable functionality from scratch, is part of the self-conception of the entire open-source scene. This community and its offerings can be compared to a bazaar, with all its advantages and disadvantages. Many developers place their programs under an open license precisely because they value the contributions of the other “bazaar visitors”. In this way, others who face similar problems can benefit – under the same conditions – and do not have to reinvent the wheel. In the past, this applied more or less only to individual components of software, but cloud and containers have led to developers no longer adopting just individual components, but entire images: software packages, possibly even including the operating system, which in the worst case end up running untested on the developer’s own infrastructure.

A Growing Risk?

In fact, the potential attack surface is significantly larger than before and is being actively exploited. According to a study by the vendor Sonatype, cited by Dev-Insider, the number of attacks on open-source components of software supply chains increased by 430 percent in the past year. This is confirmed by Synopsys’ risk-analysis report, which also notes that commercial code today consists mostly of open-source software. As early as 2020, the cloud-native expert Aquasec reported attacks on the Docker API, which cyber criminals used to run cryptomining inside Docker images.

However, developers who rely on open-source components or come from the open-source community are not nearly as inattentive as such reports suggest. Unlike in proprietary products, where only a company’s employees can keep an eye on the code, many people review the managed source code of open-source projects. It is therefore no surprise that security vulnerabilities regularly come to light, as in the cases of Log4j, Docker and NPM. This proves that the open-source scene works well – not that its software is fundamentally (more) insecure.

Not Left Unprotected

A major problem, on the other hand – regardless of whether open-source or proprietary software is used – is the lack of foresight in the update and patch strategy of some providers. This is the only reason why so many devices run outdated, often vulnerable software versions, which can leave the door wide open for attackers. The Greenbone Enterprise Appliance, Greenbone’s professional product line, helps to find such gaps and close them.

In addition, complex security leaks like those described above in Log4j or UAParser.js are the exception rather than the rule. Most attacks use much simpler methods: malware is regularly found in the ready-made images for Docker containers on Docker Hub, for example, turning a database into the cryptominer described above. Developers who integrate open-source components are by no means unprotected against these activities. Standards have long been in place to prevent attacks of the kind described: for example, obtaining ready-made container images only directly from the manufacturer of a solution or, better still, building them yourself in the CI/CD pipeline. On top of that, a healthy dose of mistrust is always a good thing for developers, for example when software comes from a source that is clearly not the manufacturer’s.

Supply-Chain Monitoring at Greenbone

Greenbone demonstrates that open-source software is not an incalculable risk in its own program with its products, the Greenbone Enterprise Appliances. The company has a set of guidelines that integrate the supply chain issue in software development into the entire development cycle. In addition to extensive functional tests, Greenbone subjects its products to automated tests with common security tools, for example. Anyone who buys from Greenbone is rightly relying on the strong combination of open-source transparency and the manufacturer’s meticulous quality assurance, an effort that not all open-source projects can generally afford.

Apache, IIS, NGINX, MongoDB, Oracle, PostgreSQL, Windows, Linux: one year after launch, Greenbone brings numerous new compliance policies for CIS Benchmarks in its products. CIS Benchmarks are used by enterprises, organizations or government agencies to verify that all software products, applications, operating systems and other components in use meet secure specifications. Similar to the IT-Grundschutz compendium of the German Federal Office for Information Security (BSI), the Center for Internet Security (CIS), a non-profit organization founded in 2000, provides comprehensive IT security best practices for governments, industry and academia. Greenbone developed its first compliance policies for CIS Benchmarks back in 2021. Now, 18 additional compliance policies are being added.

Compliance policies for CIS Benchmarks

Benchmarks for Corporate Security

The CIS Benchmarks map corporate and government guidelines that serve as benchmarks for compliance. The benchmarks describe configurations, conditions, audits and tests for various setups and systems in detail. After a successful scan, IT admins receive a comprehensive report with a percentage figure that provides information about the compliance of the systems, but also immediate recommendations for further hardening measures.
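The percentage figure in such a report can be thought of as a simple aggregation of pass/fail results, sketched here with invented check names and results; real compliance reports include far more detail per requirement:

```python
# Sketch: aggregate individual benchmark check results into the
# compliance percentage reported after a scan (data invented).
results = {
    "Ensure telnet client is not installed": True,
    "Ensure SSH root login is disabled": False,
    "Ensure password expiration is 365 days or less": True,
    "Ensure auditd service is enabled": True,
}

def compliance_percentage(results: dict) -> float:
    """Share of passed checks across all benchmark requirements."""
    return 100.0 * sum(results.values()) / len(results)

print(f"{compliance_percentage(results):.0f}% compliant")  # 75% compliant
```

Every failed check also corresponds to a concrete hardening recommendation, which is why the report can list immediate remediation steps alongside the percentage.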

Compared to the requirements of IT-Grundschutz, the CIS Benchmarks often prove to be significantly more detailed and therefore also more comprehensive. Unlike the many tests in the Greenbone Enterprise Feed, which look for security gaps and vulnerabilities to help defend against attacks, the CIS Benchmarks serve to prove that a company or an authority complies with the applicable compliance regulations at all times and has always done so.

CIS Benchmarks at Greenbone

Greenbone has been integrating numerous compliance policies for CIS Benchmarks since 2021. These policies are sets of tests that a Greenbone product runs on a target system. In simple terms, for each individual requirement or recommendation from a CIS Benchmark, a vulnerability test is developed to verify compliance with that requirement or recommendation. Greenbone combines all tests into scan configurations and adds them to the Greenbone Enterprise Feed. Since the scan configurations in this case map enterprise or government policies, they are referred to as “compliance policies”.

In 2022, Greenbone is significantly expanding the set of CIS compliance policies included in the Greenbone Enterprise Feed: 18 additional compliance policies for CIS Benchmarks covering diverse product families have been added. In addition to a compliance policy for Docker containers, tests are now available for Windows 10 Enterprise, Windows Server 2019, CentOS and distribution-independent Linux benchmarks. Web masters running servers such as Apache (2.2 and 2.4), NGINX, Tomcat and Microsoft IIS 10, as well as database administrators (MongoDB 3.2 and 3.6, Oracle MySQL Community Server 5.6 and 5.7, and PostgreSQL 9.6, 10, 11 and 12), can now also access compliance policies for CIS Benchmarks.

CIS Benchmarks: Level 1, 2 and STIG

The CIS Benchmarks are divided into several levels (Level 1, 2 and STIG) and usually include several configuration profiles to be tested. Level 1 provides basic recommendations for reducing an organization’s attack surface, while Level 2 addresses users with special security needs. STIG – the former Level 3 – on the other hand is mainly used in military or government environments. STIG stands for Security Technical Implementation Guide. The US Department of Defense maintains a web page with all the details. The DISA STIGs (Defense Information Systems Agency Security Technical Implementation Guides) described there are a requirement of the US Department of Defense.

Certified by CIS

Greenbone is a member of the CIS consortium and is continuously expanding its CIS Benchmark scan configurations. Like all compliance policies developed by Greenbone on the basis of CIS Benchmarks, the latest ones are certified by CIS – this means maximum assurance when auditing a system according to CIS hardening recommendations. This not only simplifies the preparation of audits: important criteria can be checked in advance with a scan by a Greenbone product and, if necessary, any weaknesses found can be remedied before problems arise.

The German Federal Office for Information Security (BSI) warns about the use of antivirus software from the Russian manufacturer Kaspersky. Hardly surprising, since security is a matter of trust – and security software even more so.

In the course of the war in Ukraine, a closed-source provider like Kaspersky is hit at its weakest point: its customers must take on trust something that they want to know – and in critical areas of use even have to know – namely that using the software does not involve any risks that cannot be audited.

German Federal Office for Information Security warns about manufacturer Kaspersky

The vendor tried to meet this requirement without opening its source code, through so-called transparency centers where the code may be viewed. For various reasons, this is no longer enough for customers.

The current cause is the war in Ukraine and ultimately the fact that it is a Russian company, but the reasons and causes lie deeper. Ultimately, not only Russian providers are affected by the fundamental problem. Software (and hardware), just like the data it processes, can only be trusted if the sources are open and the production process is transparent.

We already know the problem from other contexts – whether a construct is called “Transparency Center”, “Safe Harbour” or “Privacy Shield” – in the end these are marketing terms that cannot disguise the fact that they cannot provide the transparency and trust that we need for secure digital infrastructures. Only open source can do that.

Cyber security to defend against cyber attacks

Hardly any other topic is currently as present as the war in Ukraine, which is claiming numerous civilian and military victims. But in today’s interconnected and digitized world, the threat is not only military attacks, but is also expanding into cyber space. According to the Institute for Economics and Peace (IEP), cyber attacks on Ukraine and other countries are already on the rise [1]. Critical national infrastructures (CNI) are particularly at risk.

According to the federal government’s definition, this includes “organizations or facilities of vital importance to the state polity, the failure or impairment of which would result in sustained supply shortages, significant disruptions to public safety, or other dramatic consequences.” Thus, components of CNI in most countries include healthcare, energy, water, transportation, and information and communications technology sectors.

But it is not just CNI organizations that must be particularly well protected against cyber attacks and reduce their attack surface. The entire IT infrastructure faces a fundamental vulnerability that must be countered: defensively and sustainably. Eliminating vulnerabilities in IT infrastructures has a crucial impact here. Since most vulnerabilities have been known for a long time, they can be detected and subsequently removed with the help of vulnerability management. Ultimately, this means staying one step ahead of cyber criminals.

“We consistently focus on strengthening the defensive rather than the offensive,” says Elmar Geese, CIO/CMO of Greenbone. In doing so, Geese follows the view of internationally renowned experts such as Manuel Atug from AG KRITIS: “Taking the offensive is never expedient, especially not in a war. Because then you become a combatant in a war and risk a lot, which many people obviously don’t realize.” According to Atug, it is not possible to foresee what the consequences might be for attackers [2].

Therefore, our goal is to have a strong defense. We are happy to see that our open-source technology is also helping to fend off Russian cyber attacks in Ukraine.



Jennifer Außendorf, project lead for Predictive Vulnerability Management

Identifying tomorrow’s vulnerabilities today with Predictive Vulnerability Management: Together with international partners from across Europe, Greenbone’s cyber security experts are developing a novel cyber resilience platform that uses artificial intelligence and machine learning to detect vulnerabilities before they can be exploited, helping to prevent attacks.

Greenbone is strengthening its internal research in the field of “Predictive Vulnerability Management” and will additionally participate in publicly funded research and development projects in 2022. The security experts are currently preparing a funding application for a European Union project. Until the first phase of the application is submitted, Greenbone is working within an international consortium on a joint cyber resilience platform. The focus is on preventing attacks in advance so that remedial action can be taken more quickly in an acute emergency. Methods for detecting anomalies, by combining and analyzing a wide variety of network monitoring and network analysis data sources, will help to achieve this. The research also covers active defense against cyber attacks, including penetration tests and their automation and improvement through machine learning.
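To make the idea of anomaly detection on monitoring data concrete, here is a deliberately minimal sketch: a plain z-score outlier test over per-host connection counts. The data, threshold, and function names are invented for illustration; the project described above combines far richer data sources and machine learning models than this.

```python
from statistics import mean, stdev

def anomalies(samples, threshold=2.5):
    """Flag values whose z-score exceeds the threshold.

    A stand-in for the far richer models the research explores;
    the threshold of 2.5 is purely illustrative.
    """
    mu = mean(samples)
    sigma = stdev(samples)
    if sigma == 0:
        return []  # no variation, nothing can be an outlier
    return [x for x in samples if abs(x - mu) / sigma > threshold]

# hypothetical hourly connection counts for one host; the spike is the anomaly
traffic = [120, 130, 125, 118, 122, 127, 950, 121, 124, 119]
print(anomalies(traffic))
```

Note that a single extreme spike inflates the standard deviation and can mask itself; robust statistics (e.g. median-based measures) or learned models handle this better, which is one reason such research goes well beyond simple thresholds.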

In an interview, project manager Jennifer Außendorf explains what the term “Predictive Vulnerability Management” means.

Jennifer, what is cyber resilience all about? Predictive Vulnerability Management sounds so much like Minority Report, where the police unit “Precrime” hunted down criminals who would only commit crimes in the future.

Jennifer Außendorf: Predicting attacks is the only overlap, I think. The linchpin here is our Greenbone Cloud Service. It allows us to access very large amounts of data. We analyze it to enable prediction and remediation, providing both warnings for imminent threats and effective measures to address the vulnerabilities.

For example, we can also identify future threats earlier because we are constantly improving Predictive Vulnerability Management with machine learning. In the area of “Remediation”, we create a “reasoned action” capability for users: they are often overwhelmed by the number of vulnerabilities and find it difficult to assess which threats are acute and urgent based purely on CVSS scores.

One solution would be to provide a short list of the most critical current vulnerabilities – based on the results of artificial intelligence. This should consider even more influencing variables than the CVSS value, which tends to assess the technical severity. Such a solution should be user-friendly and accessible on a platform – of course strictly anonymized and GDPR-compliant.
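The idea of ranking vulnerabilities by more signals than the CVSS base score alone can be illustrated with a simple sketch. Everything here is hypothetical: the field names, the weights, and the linear scoring rule are invented for illustration and are not Greenbone’s actual model, which would be learned from data rather than hand-weighted.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    cvss: float              # CVSS base score, 0.0 to 10.0
    exploit_available: bool  # is a public exploit known?
    asset_exposure: float    # 0.0 (internal only) to 1.0 (internet-facing)

def priority(v: Vulnerability) -> float:
    """Combine CVSS with context signals into one ranking score in [0, 1].

    The weights are illustrative only; a real system would learn them.
    """
    score = v.cvss / 10.0
    if v.exploit_available:
        score += 0.3
    score += 0.2 * v.asset_exposure
    return min(score, 1.0)

def shortlist(vulns, top_n=3):
    """Return the top_n vulnerabilities to remediate first."""
    return sorted(vulns, key=priority, reverse=True)[:top_n]

vulns = [
    Vulnerability("CVE-A", cvss=9.8, exploit_available=False, asset_exposure=0.0),
    Vulnerability("CVE-B", cvss=7.5, exploit_available=True, asset_exposure=1.0),
    Vulnerability("CVE-C", cvss=5.0, exploit_available=False, asset_exposure=0.2),
]
# CVE-B ranks first despite its lower CVSS score, because it is
# internet-facing and actively exploitable
for v in shortlist(vulns, top_n=2):
    print(v.cve_id, round(priority(v), 2))
```

Even this toy ranking shows the point made above: a 9.8 CVSS score on an unreachable internal host can be less urgent than a 7.5 on an exposed system with a known exploit.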

Why is Greenbone going public with this now?

Jennifer Außendorf: On the one hand, this is an incredibly exciting topic for research, for which we provide the appropriate real-life data. The large amounts of data generated by the scans can be used in a variety of ways to protect customers. Figuring out what is possible with the data and how we can use that to add value for users and customers is a big challenge.

On the other hand, Greenbone wants to use the project to strengthen cyber security in the EU. For one thing, this is a hot topic right now: customers often end up with American companies when looking for cyber defenses, which usually doesn’t sit well with the GDPR. Greenbone has decided to launch a project consortium and will seek project funding in parallel.

Who will or should participate in the consortium?

Jennifer Außendorf: The consortium will consist of a handful of companies as the core of the group and will be complemented by research partners, technical partners for development and a user group of other partners and testers.

Because the project will take place at EU level, it is important for us to involve as many different member states as possible. We hope that the different backgrounds of the partners will generate creative ideas and approaches to solutions, from which the project can only benefit. This applies equally to the phase of building up the consortium.

Are there other players in the field of Predictive Vulnerability Management so far or has no one tried this yet?

Jennifer Außendorf: At the moment, we don’t see any competitors – Greenbone also deliberately wants to be an innovation driver here. Yes, the buzzwords “thought leadership”, “cloud repurpose” and “cyber resilience” are certainly floating around, but there is one thing that only we (and our customers) have: the anonymized data that is essential for the research results, and above all in quantities large enough to make machine learning and other artificial intelligence methods possible in the first place.

What is the current status, and what is on the roadmap?

Jennifer Außendorf: We are currently in the process of specifying the individual topics in more detail with the first research partners. They have many years of experience in cyber security and machine learning and provide very valuable input. We are also currently working on expanding the consortium and recruiting additional partners. Work on the actual application should start soon.

Our goal is to feed the results of the project directly into our products and thus make them available to our customers and users. They should benefit from the results and increase the cyber resilience of their companies. That is the ultimate goal.