In networked production, IT and OT are growing ever closer together. Where a security gap once caused “only” a data leak, today it can bring entire production lines to a standstill. Those who carry out regular active and passive vulnerability scans can protect themselves.

What seems somewhat strange for physical infrastructure – who would stage a break-in to test their own alarm system? – is a tried and tested method in IT for identifying vulnerabilities. This so-called active scanning can be performed daily and automatically. Passive scanning, on the other hand, detects an intrusion in progress, because every cyber intrusion leaves traces, albeit often hidden ones.

Controlling the Traffic

Firewalls and antivirus programs, for example, use passive scanning to check traffic reaching a system. This data is then checked against a database. Information about malware, unsafe requests and other anomalies is stored there. For example, if the firewall receives a request from an insecure sender that wants to read out users’ profile data, it rejects the request. The system itself is unaware of this because the passive scan does not access the system but only the data traffic.
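The matching step can be sketched in a few lines of Python; the signature database below is a toy stand-in for the real malware and anomaly databases such tools use:

```python
# Minimal sketch of passive signature matching. The patterns and
# descriptions are invented for illustration.
SIGNATURE_DB = {
    "sqlmap": "SQL injection tool user agent",
    "../..": "path traversal attempt",
    "/etc/passwd": "sensitive file access attempt",
}

def inspect_request(raw_request: str) -> list[str]:
    """Match captured traffic against known bad patterns.

    The system under observation is never contacted: only the
    recorded request data is examined.
    """
    return [desc for pattern, desc in SIGNATURE_DB.items()
            if pattern in raw_request]

hits = inspect_request("GET /profile?file=../../etc/passwd HTTP/1.1")
# Both the path traversal and the sensitive file pattern match.
```

A real firewall would then reject the request, while the system behind it remains unaware that any inspection took place.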

The advantage: the system itself does not have to expend any additional computing power. Despite the scan, the full bandwidth remains available. This is particularly useful for critical components, which should have the highest possible availability – the fewer additional tasks they perform, the better.

The disadvantage of passive scanning is that it only sees systems that actively communicate on their own. This does not include office software or PDF readers, for example. And even services that do communicate do so primarily through their main functions. Functions with vulnerabilities that are rarely or never used in regular operation remain invisible, or become visible only once an attack is already in progress.

Checking the Infrastructure

Active scans work differently and simulate attacks. They make requests to the system and thereby try to trigger different reactions. For example, the active scanner sends a request for data transfer to various programs in the system. If one of the programs responds and forwards the data to the simulated unauthorized location, the scanner has found a security hole.
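As a rough illustration (not Greenbone’s actual implementation), an active probe can be as simple as contacting a service and inspecting its response; the request sent and the version string checked here are assumptions for the sketch:

```python
# Hypothetical sketch of an active probe: connect to a service,
# send a harmless request and examine what comes back. Real
# scanners run thousands of such tests per target.
import socket

def probe_banner(host: str, port: int, timeout: float = 3.0) -> str:
    """Actively contact a service and return its response banner."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
        return sock.recv(1024).decode(errors="replace")

def looks_vulnerable(banner: str) -> bool:
    # Assumption for illustration: this server line is outdated.
    return "Apache/2.2" in banner
```

Unlike a passive scan, the target system processes these extra requests itself, which is exactly why the data quality is higher.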

Differences between active and passive vulnerability scans

Left: Active scans send queries to the system in an attempt to trigger different responses. Right: Passive scans check the traffic reaching a system and match this data against a database.

The advantage: the data quality that can be achieved with active scanning is higher than with passive scanning. Since interaction takes place directly with software and interfaces, problems can be identified in programs that do not normally communicate directly with the network. This is also how vulnerabilities are discovered in programs such as Office applications.

However, with direct interaction, systems have to handle extra requests, which may affect the basic functions of a program. Operational technology such as machine control systems is not necessarily designed to perform secondary tasks. Here, scanning under supervision, supplemented by continuous passive scanning, is recommended.

Scanning Actively, but Minimally Invasive

Nevertheless, active scanning is essential for operational cyber security. This is because the risk posed by the short-term overuse of a system component is small compared to a production outage or data leak. Moreover, active scans not only uncover vulnerabilities, they can also enhance passive scans. For example, the vulnerabilities that are detected can be added to firewall databases. This also helps other companies that use similar systems.

Active and Passive Scanning Work Hand in Hand

Since the passive scanner can also provide the active scanner with helpful information, such as details about mobile devices or the properties of network services, the two security tools complement each other. What they have in common is that they automatically get the best out of the given network situation. For both scanning techniques, it does not matter which or how many components and programs the network consists of: both technologies detect this by themselves and adjust to it. Only at a higher security level does the fine-tuning of network and scanners begin.

So it is not a question of whether to use one or the other. Both methods are necessary to ensure a secure network environment. A purely passive approach will not help in many cases. Proactive vulnerability management requires active scans and tools to manage them. This is what Greenbone’s vulnerability management products provide.


Open source is steadily on the rise among the vast majority of companies, software manufacturers and providers. However, this triumphant advance also increases the importance of monitoring the supply chain of the software used, which third parties have developed in open-source tradition. But not everyone using open-source software follows all the tried and true rules. Greenbone can help track down such mistakes. This blog post explains the problem and how to avoid it.

Supply Chains in Open-Source Software


Vulnerabilities in Log4j, Docker or NPM

At the end of 2021, the German Federal Office for Information Security (BSI) officially sounded the alarm about a remotely exploitable vulnerability in the logging library Log4j. At the time, critics of open-source software promptly spoke out: open-source software like Log4j was implicitly insecure and a practically incalculable risk in the supply chain of other programs.

Although the open-source developers themselves fixed the problem within a few hours, countless commercial products still contain outdated versions of Log4j – with no chance of ever being replaced. This is not an isolated case: recently, the story of an NPM developer (Node Package Manager, a software package format for the web server Node.js) caused a stir: with his actually well-meant protest against the war in Ukraine, he massively shook trust in the open-source supply chain and the development community.

Open Source Under Criticism

It was not the first time that NPM became a target. The package manager was already affected by attacks at the end of 2021. At that time, developer Oleg Drapeza published a bug report on GitHub after finding malicious code to harvest passwords in the UAParser.js library. Piece by piece, the original author of the software, Faisal Salman, was able to reconstruct that someone had hacked into his account in NPM’s package system and placed malicious code there. The problem: UAParser.js is a module for Node.js and is used in millions of setups worldwide. Accordingly, the circle of affected users was enormous.

Again, the open-source critics claimed that open-source software like UAParser.js is implicitly insecure and a practically incalculable risk in the supply chain of other programs. What is more, open-source developers, according to the explicit accusation, incorporate external components such as libraries or container images far too carelessly and hardly give a thought to the associated security implications. For this reason, their work is inherently vulnerable to attacks, especially in the open-source supply chain. Alyssa Shames discusses the problem on docker.com using the example of containers and describes the dangers in detail.

The Dark Side of the Bazaar

DevOps and Cloud Native have indeed had a major impact on the way we work in development in recent years. Integrating components that exist in the community into one’s own application, instead of programming comparable functionality from scratch, is part of the self-image of the entire open-source scene. This community and what it offers can be compared to a bazaar, with all its advantages and disadvantages. Many developers place their programs under an open license precisely because they value the contributions of the other “bazaar visitors”. In this way, others who have similar problems can benefit – under the same conditions – and do not have to reinvent the wheel. In the past, this applied more or less only to individual components of software, but cloud and containers have now led to developers no longer adopting just individual components, but entire images. These are software packages, possibly even including the operating system, which in the worst case are started untested on the developer’s own infrastructure.

A Growing Risk?

In fact, the potential attack surface is significantly larger than before and is being actively exploited. According to Dev-Insider, for example, the number of attacks on open-source components of software supply chains increased by 430 percent in the past year, according to a study by the vendor Sonatype. This is confirmed by Synopsys’ risk-analysis report, which also notes that commercial code today consists mostly of open-source software. As early as 2020, cloud-native expert Aquasec reported attacks on the Docker API, which cyber criminals used to hijack Docker images for cryptomining.

However, developers who rely on open-source components or come from the open-source community are not nearly as careless as such reports suggest. Unlike in proprietary products, where only a company’s employees can keep an eye on the code, many people examine the managed source code in open-source projects. It is therefore no surprise that security vulnerabilities regularly come to light, as in the case of Log4j, Docker or NPM. Here the open-source scene proves that it works well, not that its software is fundamentally (more) insecure.

Not Left Unprotected

A major problem, on the other hand – regardless of whether open-source or proprietary software is used – is the lack of foresight in the update and patch strategy of some providers. This is the only reason why so many devices are found running outdated, often vulnerable software versions, which can leave a barn door open for attackers. The Greenbone Enterprise Appliance, Greenbone’s professional product line, helps to find such gaps and close them.

In addition, complex security leaks like the ones described above in Log4j or UAParser.js are the exception rather than the rule. Most attacks are carried out using much simpler methods: malware is regularly found in the ready-made images for Docker containers on Docker Hub, for example – malware that turns a database into a Bitcoin miner, as described above. Developers who integrate open-source components are by no means unprotected against such activities. Standards have long been in place to prevent attacks of the kind described: for example, obtaining ready-made container images only directly from the manufacturer of a solution or, better still, building them in one’s own CI/CD pipeline. On top of that, a healthy dose of mistrust never hurts, for example when software comes from a source that is clearly not the manufacturer’s.
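In practice, the “obtain images only from the manufacturer” rule usually comes down to verifying published checksums. A minimal Python sketch, assuming the vendor publishes a SHA-256 digest alongside the artifact:

```python
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Compare a file's SHA-256 digest with the vendor-published value."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        # Read in chunks so large images do not need to fit in memory.
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Usage (file name and digest are placeholders, not a real image):
# verify_artifact("solution-image.tar", "<digest published by the vendor>")
```

Container tooling offers the same idea natively, e.g. pulling images pinned to a digest instead of a mutable tag.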

Supply-Chain Monitoring at Greenbone

Greenbone demonstrates that open-source software is not an incalculable risk in its own program with its products, the Greenbone Enterprise Appliances. The company has a set of guidelines that integrate the supply chain issue in software development into the entire development cycle. In addition to extensive functional tests, Greenbone subjects its products to automated tests with common security tools, for example. Anyone who buys from Greenbone is rightly relying on the strong combination of open-source transparency and the manufacturer’s meticulous quality assurance, an effort that not all open-source projects can generally afford.

Apache, IIS, NGINX, MongoDB, Oracle, PostgreSQL, Windows, Linux: one year after launch, Greenbone brings numerous new compliance policies for CIS Benchmarks to its products. CIS Benchmarks are used by enterprises, organizations and government agencies to verify that all software products, applications, operating systems and other components in use meet secure specifications. Similar to the IT-Grundschutz compendium of the German Federal Office for Information Security (BSI), the Center for Internet Security (CIS), a non-profit organization founded in 2000, provides comprehensive IT security best practices for governments, industry and academia. Greenbone developed its first compliance policies for CIS Benchmarks back in 2021. Now, 18 additional compliance policies are being added.

Compliance policies for CIS Benchmarks

Benchmarks for Corporate Security

The CIS Benchmarks map corporate and government guidelines that serve as benchmarks for compliance. The benchmarks describe configurations, conditions, audits and tests for various setups and systems in detail. After a successful scan, IT admins receive a comprehensive report with a percentage figure that provides information about the compliance of the systems, but also immediate recommendations for further hardening measures.

Compared to the requirements of IT-Grundschutz, the CIS Benchmarks often prove to be significantly more detailed and therefore also more comprehensive. Unlike the many tests in the Greenbone Enterprise Feed, which look for security gaps and vulnerabilities to help defend against attacks, the CIS Benchmarks serve to prove that a company or authority complies with the applicable compliance regulations at all times and has always done so.

CIS Benchmarks at Greenbone

Greenbone has been integrating numerous compliance policies for CIS Benchmarks since 2021. These policies are sets of tests that a Greenbone product runs on a target system. In simple terms, for each individual requirement or recommendation from a CIS Benchmark, a vulnerability test is developed to verify compliance with that requirement or recommendation. Greenbone combines all tests into scan configurations and adds them to the Greenbone Enterprise Feed. Since the scan configurations in this case map enterprise or government policies, they are referred to as “compliance policies”.
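Conceptually, such a policy can be pictured as one check per benchmark recommendation, bundled and reported as a percentage. The check names and configuration keys below are invented for illustration and do not reflect the actual test format in the Greenbone Enterprise Feed:

```python
# Toy model of a compliance policy: one boolean check per
# benchmark recommendation, aggregated into a percentage.
def check_ssh_root_login(config: dict) -> bool:
    # Recommendation: direct root login over SSH must be disabled.
    return config.get("PermitRootLogin") == "no"

def check_password_min_length(config: dict) -> bool:
    # Recommendation: require a minimum password length of 14.
    return int(config.get("minlen", 0)) >= 14

POLICY = [check_ssh_root_login, check_password_min_length]

def run_policy(config: dict) -> float:
    """Return the compliance percentage reported after a scan."""
    passed = sum(check(config) for check in POLICY)
    return 100.0 * passed / len(POLICY)

score = run_policy({"PermitRootLogin": "no", "minlen": "8"})
# One of two checks passed, so the report would show 50 % compliance.
```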

In 2022, Greenbone is significantly expanding the set of CIS compliance policies included in the Greenbone Enterprise Feed: 18 additional compliance policies for CIS Benchmarks across diverse product families have been added. In addition to a compliance policy for Docker containers, tests are now available for Windows 10 Enterprise, Windows Server 2019, CentOS and distribution-independent Linux benchmarks. Furthermore, webmasters running servers such as Apache (2.2 and 2.4), NGINX, Tomcat and Microsoft IIS 10, as well as database administrators (MongoDB 3.2 and 3.6, Oracle Community Server 5.6 and 5.7, and PostgreSQL 9.6, 10, 11 and 12), can now access compliance policies for CIS Benchmarks.

CIS Benchmarks: Level 1, 2 and STIG

The CIS Benchmarks are divided into several levels (Level 1, 2 and STIG) and usually include several configuration profiles to be tested. Level 1 provides basic recommendations for reducing an organization’s attack surface, while Level 2 addresses users with special security needs. STIG – the former Level 3 – on the other hand is mainly used in military or government environments. STIG stands for Security Technical Implementation Guide. The US Department of Defense maintains a web page with all the details. The DISA STIGs (Defense Information Systems Agency Security Technical Implementation Guides) described there are a requirement of the US Department of Defense.

Certified by CIS

Greenbone is a member of the CIS consortium and is continuously expanding its CIS Benchmark scan configurations. Like all compliance policies developed by Greenbone on the basis of CIS Benchmarks, the latest ones are certified by CIS – this means maximum assurance when auditing a system according to CIS hardening recommendations. This not only simplifies the preparation of audits: important criteria can be checked in advance with a scan by a Greenbone product and, if necessary, any weaknesses found can be remedied before problems arise.

The German Federal Office for Information Security warns against the use of antivirus software from the Russian manufacturer Kaspersky. Hardly surprising, since security is a matter of trust – and security software even more so.

In the course of the war in Ukraine, a closed-source provider like Kaspersky is hit at its weakest point: its customers must take on faith something they would rather know for certain – and in critical areas of use must know: that using the software does not involve any risks that cannot be audited.

German Federal Office for Information Security warns about manufacturer Kaspersky

The vendor tried to meet this requirement without open-sourcing its code, through so-called transparency centers where the source code may be viewed. For various reasons, this is no longer enough for customers.

The current cause is the war in Ukraine and ultimately the fact that it is a Russian company, but the reasons and causes lie deeper. Ultimately, not only Russian providers are affected by the fundamental problem. Software (and hardware), just like the data it processes, can only be trusted if the sources are open and the production process is transparent.

We already know the problem from other contexts – whether a construct is called “Transparency Center”, “Safe Harbour” or “Privacy Shield” – in the end these are marketing terms that cannot disguise the fact that they cannot provide the transparency and trust that we need for secure digital infrastructures. Only open source can do that.


Cyber security to defend against cyber attacks

Hardly any other topic is currently as present as the war in Ukraine, which is claiming numerous civilian and military victims. But in today’s interconnected and digitized world, the threat is not only military attacks, but is also expanding into cyber space. According to the Institute for Economics and Peace (IEP), cyber attacks on Ukraine and other countries are already on the rise [1]. Critical national infrastructures (CNI) are particularly at risk.

According to the federal government’s definition, this includes “organizations or facilities of vital importance to the state polity, the failure or impairment of which would result in sustained supply shortages, significant disruptions to public safety, or other dramatic consequences.” Thus, components of CNI in most countries include healthcare, energy, water, transportation, and information and communications technology sectors.

But it is not just CNI organizations that must be particularly well protected against cyber attacks and reduce their attack surface. The entire IT infrastructure faces a fundamental danger, a fundamental vulnerability, that must be countered defensively and sustainably. Eliminating vulnerabilities in IT infrastructures has a crucial impact here. Since most vulnerabilities have been known for a long time, they can be detected and subsequently removed with the help of vulnerability management. Ultimately, this means staying one step ahead of cyber criminals.

“We consistently focus on strengthening the defensive rather than the offensive,” says Elmar Geese, CIO/CMO of Greenbone. In doing so, Geese follows the view of internationally renowned experts such as Manuel Atug from AG KRITIS: “Taking the offensive is never expedient, especially not in a war. Because then you become a combatant in a war and risk a lot, which many people obviously don’t realize.” According to Atug, it is not possible to foresee what the consequences might be for attackers [2].

Therefore, our goal is to have a strong defense. We are happy to see that our open-source technology is also helping to fend off Russian cyber attacks in Ukraine.

[1] https://www.zeit.de/news/2022-03/02/experten-warnen-vor-cyberterrorismus-im-ukraine-konflikt

[2] https://background.tagesspiegel.de/cybersecurity/putin-wird-sich-nicht-wegcybern-lassen


Jennifer Außendorf, project lead for Predictive Vulnerability Management


Identifying tomorrow’s vulnerabilities today with Predictive Vulnerability Management: Together with international partners from across Europe, Greenbone’s cyber security experts are developing a novel cyber resilience platform that uses artificial intelligence and machine learning to detect vulnerabilities before they can be exploited, helping to prevent attacks.

Greenbone is strengthening its internal research in the field of “Predictive Vulnerability Management” and will additionally participate in publicly funded research and development projects in 2022. Currently, the security experts are working on a funding application for a European Union project. Until the first phase of the application submission is completed, Greenbone is involved within an international consortium and is working on a joint cyber resilience platform. The focus here is on preventing attacks in advance so that remedial action can be taken more quickly in an acute emergency. Methods for detecting anomalies by combining and analyzing a wide variety of sources from network monitoring and network analysis data will help to achieve this. The research area focuses on active defense against cyber attacks and includes penetration tests and their automation and improvement through machine learning.

In an interview, project manager Jennifer Außendorf explains what the term “Predictive Vulnerability Management” means.

Jennifer, what is cyber resilience all about? Predictive Vulnerability Management sounds so much like Minority Report, where the police unit “Precrime” hunted down criminals who would only commit crimes in the future.

Jennifer Außendorf: Predicting attacks is the only overlap, I think. The linchpin here is our Greenbone Cloud Service. It allows us to access very large amounts of data. We analyze it to enable prediction and remediation, providing both warnings for imminent threats and effective measures to address the vulnerabilities.

For example, we can also identify future threats earlier because we are constantly improving Predictive Vulnerability Management with machine learning. In the area of “Remediation”, we create a “reasoned action” capability for users: they are often overwhelmed by the number of vulnerabilities and find it difficult to assess which threats are acute and urgent based purely on CVSS scores.

One solution would be to provide a short list of the most critical current vulnerabilities – based on the results of artificial intelligence. This should consider even more influencing variables than the CVSS value, which tends to assess the technical severity. Such a solution should be user-friendly and accessible on a platform – of course strictly anonymized and GDPR-compliant.
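A toy Python sketch of such prioritization; the weighting factors here are invented for illustration, whereas the envisioned platform would learn them from data:

```python
# Toy prioritization beyond the raw CVSS score: weight in context
# factors such as exploit availability and network exposure.
def priority(cvss: float, exploit_available: bool,
             internet_facing: bool) -> float:
    """Toy score: start from CVSS and apply contextual weights."""
    score = cvss
    if exploit_available:   # a public exploit makes attacks far more likely
        score *= 1.5
    if internet_facing:     # exposed systems are easier to reach
        score *= 1.2
    return min(score, 10.0)

vulns = [
    {"id": "CVE-A", "cvss": 9.8, "exploit": False, "exposed": False},
    {"id": "CVE-B", "cvss": 7.5, "exploit": True,  "exposed": True},
]
shortlist = sorted(vulns, reverse=True,
                   key=lambda v: priority(v["cvss"], v["exploit"], v["exposed"]))
# CVE-B tops the shortlist despite its lower raw CVSS value.
```

The point of the sketch is only the ranking behavior: contextual factors can overturn a pure CVSS ordering.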

Why is Greenbone going public with this now?

Jennifer Außendorf: On the one hand, this is an incredibly exciting topic for research, for which we provide the appropriate real-life data. The large amounts of data generated by the scans can be used in a variety of ways to protect customers. Figuring out what is possible with the data and how we can use that to add value for users and customers is a big challenge.

On the other hand, Greenbone wants to use the project to strengthen cyber security in the EU. For one thing, this is a hot topic right now: customers often end up with American companies when looking for cyber defenses, which usually doesn’t sit well with the GDPR. Greenbone has decided to launch a project consortium and will seek project funding in parallel.

Who will or should participate in the consortium?

Jennifer Außendorf: The consortium will consist of a handful of companies as the core of the group and will be complemented by research partners, technical partners for development and a user group of other partners and testers.

Because the project will take place at EU level, it is important for us to involve as many different member states as possible. We hope that the different backgrounds of the partners will generate creative ideas and approaches to solutions, from which the project can only benefit. This applies equally to the phase of building up the consortium.

Are there other players in the field of Predictive Vulnerability Management so far or has no one tried this yet?

Jennifer Außendorf: At the moment, we don’t see any competitors – Greenbone also deliberately wants to be an innovation driver here. Yes, buzzwords like “thought leadership”, “cloud repurpose” and “cyber resilience” are certainly floating around, but there is one thing that only we (and our customers) have: the anonymized data that is essential for the research results – and above all, enough of it to make machine learning and other artificial-intelligence methods applicable in the first place.

What is the current status there, what is on the roadmap?

Jennifer Außendorf: We are currently in the process of specifying the individual topics in more detail with the first research partners. They have many years of experience in cyber security and machine learning and provide very valuable input. We are also currently working on expanding the consortium and recruiting additional partners. Work on the actual application should start soon.

Our goal is to incorporate the results of the project directly into our products and thus make them available to our customers and users. Ultimately, they should benefit from the results and thus increase cyber resilience in their companies. That is the ultimate goal.


Both the cryptocurrency Bitcoin and the darknet have a dubious reputation. The media like to portray both as opaque, criminal parallel worlds. For Ransomware as a Service, Bitcoin and the darknet are welcome tools. Organized crime has been using them for a long time to disguise its business, even if it by no means makes the criminals anonymous and safe from prosecution.

Ransomware became the world’s biggest threat to IT systems in 2021. If you want to successfully protect yourself against it, you also need to understand how the parties involved proceed. Part one of this series of articles focused on the business model of Ransomware as a Service. Part two showed why this “professionalization” also leads to a changed mindset among attackers. Part three now explains why the IT tools that organized crime uses to order and transfer money are far from secure.

Ransomware as a Service: abstract image of Bitcoin logo

Anonymous and Secure?

Bitcoin as a means of payment and the darknet are proving to be practical, helpful and attractive for attackers. Under the cloak of supposed anonymity, they think they are protected from prosecution and shielded from consequences. But this is a common misconception: neither Bitcoin nor the darknet are anonymous in practice.

While the cryptocurrency was never designed for anonymity – but explicitly for traceability of transactions even without a trusted central authority – the darknet turns out to be not nearly as anonymous as its creators would have liked. This is also shown by reports such as the recent ones about KAX17’s “de-anonymization attacks” on the Tor network. Almost always, classic investigative methods are enough for law enforcement to track down even ransomware actors like the REvil group, which, according to Heise [German only], had collected half a million euros in ransoms across more than 5,000 infections.

Never a Good Idea: Cooperating With Criminals

Whether online or offline, anyone who gets involved with blackmailers is on their own. As in real life, the best advice is never to pay a ransom. No matter how professional the hotline on the other end seems, trust is not warranted. The operators of REvil’s Ransomware as a Service, for example, even stole the extorted ransoms from their own clients via a backdoor in the malware.

It all started out so friendly and idealistic. Roger Dingledine and Nick Mathewson laid the foundations for the Tor network in the early 2000s. Based on the idea of onion layers, numerous cryptographically secured layers on top of each other were supposed to ensure reliable anonymity on the web – in their view a fundamental right, analogous to the definition of privacy in Eric Hughes’ “A Cypherpunk’s Manifesto”. Then in 2009, Bitcoin saw the light of day, first described by the almost mythical figure of Satoshi Nakamoto.

Darknet and Bitcoin Are Not “Criminal”

Neither the darknet nor Bitcoin were designed to conceal or enable dark schemes. The goal was to create free, independent, supposedly uncontrollable and largely secure structures for information exchange and payment. Like a knife, however, the services can be instrumentalized for both good and evil – and, of course, organized crime knows how to use this to its advantage. The focus is not always on leaving no traces. Most often, the focus is on the simplicity and availability of the means. Bitcoin and the darknet are simply the tools of choice because they are there.

But as in the real world, the easiest way to catch extortionists is during the money transfer: a blockchain like Bitcoin’s documents all transactions ever made, including the wallet information (i.e., the Bitcoin owner), and makes them available for viewing at any time. The same applies to the darknet: even if anonymity is technically possible, people regularly fail to meet the simplest requirements. GPS metadata can be found in photos, or UPS codes in the illegal store. The legendary drug marketplace Silk Road was busted because employees made mistakes and confessed.
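A minimal Python sketch of why such a ledger helps investigators: each block stores the hash of its predecessor, so the transaction history can be traced and any tampering is evident (real Bitcoin blocks are, of course, far more complex):

```python
# Toy hash chain: every block commits to the previous one, so the
# full transaction history is permanently linked and auditable.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()

chain, prev = [], "0" * 64
for tx in [{"from": "wallet_a", "to": "wallet_b", "btc": 1.5},
           {"from": "wallet_b", "to": "wallet_c", "btc": 1.4}]:
    block = {"tx": tx, "prev": prev}
    prev = block_hash(block)
    chain.append(block)

# The second block stores the hash of the first; rewriting history
# (e.g. changing an amount) is therefore immediately detectable:
chain[0]["tx"]["btc"] = 0.1
assert block_hash(chain[0]) != chain[1]["prev"]
```

The same linkage that makes the ledger trustworthy also means every wallet’s transaction trail stays visible forever.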

Digitized, Organized Crime

The darknet and cryptocurrencies are helpful tools for organized crime and thus accelerants for the rapidly growing number of serious ransomware attacks. But they are by no means essential, nor are they to blame. Such cyber crime is just the modern IT variant of what we can also experience on the streets of any major city. Ransomware is, so to speak, the modern protection racket, Bitcoin is the garbage can for the handover, and the darknet is the dark bar where deals are made.

The perfidy lies not in the tools, but in the methods and long experience in the “business”. Trend Micro, for example, describes the “double extortion” ransomware approach: attackers first exfiltrate a copy of the data and threaten to publish it if the ransom for decryption is not paid. Organized crime has been in the extortion business since long before Bitcoin or the darknet existed. Even though the two technologies now enable cyber criminals to extort large sums of money undetected at first, conventional methods are almost always sufficient for detection. The most important prerequisite is that enough law enforcement personnel are available – not primarily their technical equipment.

Take Precautions

But at this point, in the company, the horse has already bolted. If you are faced with encrypted data and a ransom demand, the darknet, Bitcoin and the detection rate are probably of secondary importance. Much more important is the question of how to get out of the unfortunate situation. And you can only do that if you were prepared. This includes backups, restore tests and the immediate disconnection of all affected machines (network split) – in other words, proactive risk management, disaster recovery tests and constant maintenance of your own systems. Another important component is multi-factor authentication, which prevents attackers from shimmying from one system to the next using acquired passwords alone.

The most important thing, however, is to avoid critical situations in the first place and to identify vulnerabilities in your own systems and close them quickly. Modern vulnerability management like Greenbone’s does just that: it gives you the ability to close gaps in your systems, making the corporate network unattractive, costly, and thus a deterrent to professional cyber criminals, not just from the Ransomware-as-a-Service world.

Greenbone’s products monitor the corporate network or external IT resources for potential vulnerabilities, examining them continuously and fully automatically. As Greenbone Enterprise Appliances or as the Greenbone Cloud Service (software as a service hosted in German data centers), they guarantee security through always up-to-date scans and tests.

How this works is described by Elmar Geese, CIO/CMO at Greenbone, here in the blog in a post on the Log4j vulnerability. Geese also explains how quickly and reliably administrators and management are informed of the latest vulnerabilities, and how exactly the scan for vulnerabilities such as Log4Shell is carried out.
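As a toy illustration of one common detection approach – comparing a detected product version against the known-vulnerable range – consider Log4Shell (CVE-2021-44228): Log4j 2.x releases up to and including 2.14.1 are affected, while 2.15.0 and later, as well as the 2.12.2 backport for Java 7, are fixed. The sketch below is our own simplification, not Greenbone’s actual vulnerability test:

```python
def parse_version(version: str) -> tuple:
    """Turn a dotted release string like '2.14.1' into (2, 14, 1)
    so versions can be compared as tuples."""
    return tuple(int(part) for part in version.split("."))


def is_log4shell_vulnerable(version: str) -> bool:
    """Version-based check for CVE-2021-44228: Log4j 2.0 up to and
    including 2.14.1 is affected; 2.15.0 and later are fixed, as is
    the patched 2.12.2 branch. Handles plain release numbers only."""
    v = parse_version(version)
    if v < (2, 0) or v >= (2, 15):
        return False
    if (2, 12, 2) <= v < (2, 13):  # patched 2.12.x backport branch
        return False
    return True
```

A production-grade scanner does far more than this – it must first locate the library (including bundled copies inside JAR files), handle pre-release versions, and cross-check the follow-up CVEs – but the version comparison above is the core idea behind an unauthenticated, non-intrusive check.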


We are proud to have received ISO certification of our management systems for the aspects of quality (ISO 9001) and information security (ISO 27001) at the end of 2021.

[Image: logos of the ISO 9001 and ISO 27001 certifications of our management systems]

Our success makes us grow, and our growth calls for structure and processes. That is why we now shape the creation of structures and processes even more actively than in the past. In doing so, we are guided by the following goals:

  • Create value for our clientele
  • Provide great products and services
  • Continuously increase the satisfaction of our employees
  • Promote and manage our growth

When we decided to certify information security and quality in our company according to ISO 27001 and ISO 9001 standards, we took the specifics of an agile company into account from the very beginning.

ISO-certified management systems and agile management seem to be a contradiction, but they are not. In this article, we will briefly explain how these two worlds complement each other perfectly and how we combine the respective advantages in one company.

Although agility is not a goal in itself, we knew from the start that we wanted to run our company in an agile way. We understand that as follows:

  • We have a common goal.
  • Clarity and explicitness in communication are prerequisites for results-oriented action.
  • Hierarchies are tools, not status functions.
  • Processes are paths to the goal, not goals themselves.

We recognized that, ideally, we need a toolbox that is as universal as possible across the different areas of our organization: one that helps us organize our processes in the best possible way while leaving enough room for the varying needs of different teams and areas.

The concepts from worlds as different as “ISO” and “Agile” have helped us and continue to help us. What they have in common is that both require management systems – and these are more similar in their basic structure than one might think.

It is always about:

  • Focus on sufficiently clearly defined objectives
  • Reliable and appropriate guidelines
  • Comprehensibly defined and helpful processes
  • Measuring points to evaluate, adjust and change as necessary
  • Supportive team members and servant leaders who operate within this structure
  • A continuous improvement process

This is what we call a management system; its inherent agility is determined by the context and purpose in which it is applied. It allows us to measure the results and the quality of our processes through a system of objectives and performance indicators.

We are proud and happy that we have now very successfully certified our management systems for the aspects of “Quality” (ISO 9001) and “Information Security” (ISO 27001). This helps us, and it also helps you, our clientele. It measurably documents two very important characteristics that you expect from us and our products and services, and that you ultimately want to ensure by using our products in your own organization, namely:

  • Security, and
  • Quality of information technology systems.

It’s our mission at Greenbone to ensure this through one of the leading vulnerability management products. We do it every day, in over 100,000 organizations around the world.

