Kim Nguyen (German Federal Printing Office) on AI and Cybersecurity: “Trust Is the Locational Advantage of the EU.”

Starting August 2025, businesses and administrative bodies must implement the initial provisions of the EU AI Act – a new era of responsibility in dealing with artificial intelligence begins. The AI Act demands not only technical adjustments but a fundamental rethinking: going forward, AI will be evaluated in a more nuanced way, taking risk and use case into account. This is especially true for AI that encroaches on sensitive areas of life or processes personally identifiable data.

For organizations, this means: they have to grapple intensively with the ecosystem surrounding their AI systems, detect risks early, and address them deliberately. Transparency about underlying data, comprehensible models, and human supervision are no longer optional; they are mandatory. At the same time, the AI Act offers a valuable framework for building trust and, in the long run, using AI safely and responsibly. Vulnerability management and cybersecurity are not exempt from this.

An Interview with Cybersecurity Experts on AI

We spoke with Kim Nguyen, Senior Vice President of Innovation at the German Federal Printing Office (Bundesdruckerei) and longtime leader and public face of its Trusted Services, about AI, regulation, and their impact on cybersecurity. In addition, Greenbone CMO Elmar Geese gives a forecast on the future of vulnerability management.

Kim Nguyen, Senior Vice President of Innovation at the Bundesdruckerei

Greenbone: Kim, the topic of AI is on everyone’s lips right now, especially at events like the recent Potsdam Conference on National Cybersecurity (Potsdamer Konferenz für Nationale Cybersecurity). And you are in the thick of the public discourse.

Nguyen: Yes, I cannot deny that the topic of AI is very dear to my heart, as you can tell from my publications and keynotes on the subject. But my approach is a bit different from that of others. It places the emphasis on trust, which has many dimensions, one of which is benevolence. That means the well-being of individual users needs to be in focus at all times. Users assume the system operates in their best interest, not in pursuit of an unknown agenda.

Greenbone: What do you think: will AI make cybersecurity as a whole more secure, or less?

Nguyen: Of course, artificial intelligence reached cybersecurity long ago – as a risk and as an opportunity. On the one hand, it expands the attack surface, as cyber-criminals can accelerate, automate, and target their attacks more precisely. On the other hand, it can help harden defenses, for example by analyzing real-time data from different security sources to automatically identify security incidents and react accordingly.

“A Cat-and-Mouse Game”

To keep up in this cat-and-mouse game between attackers and defenders, you have to rely on AI, especially for defense. Government regulation is crucial here: without appropriate legislation and technical standards, no one would know what is permitted and trustworthy and what is not.

Moreover, lawmakers must continue to intervene actively in these highly dynamic technical developments to ensure legal certainty and clear guidance. Finding the right measures while leaving enough room to encourage innovation and allow AI to be an enabler is not easy, but it is immensely important.

Greenbone: What do you regard as the most important questions and requirements in the EU AI Act that organizations have to face? What else is coming our way? What are big institutions like the Federal Printing Office doing to prepare?

Nguyen: With the AI Act, organizations must classify their AI systems in a risk-based manner and, depending on the classification, fulfill different requirements regarding transparency, data quality, governance, and security – especially for high-risk applications.

However, it is not just about ensuring compliance, but about using the regulatory framework as a strategic lever for trustworthy innovation and sustainable competitiveness. It is not sufficient to focus strictly on choosing an appropriate AI model. Integration, training of the model, and educating users are just as important. Comprehensive security guidelines – so-called “guardrails” – must be set up to ensure the system does not perform any unauthorized operations.

“Well-Practiced Processes Bring Replicability, Robustness, and Transparency to the Foreground”

The Printing Office, as a federal technology company, has been active in the high-security sector for years. We rely on well-established processes and structures to bring replicability, robustness, and transparency to the foreground and to build trust in AI solutions within public administration. With the AI Competence Center, we support federal agencies and ministries in developing AI applications. Together with the Federal Foreign Office, we have built the platform PLAIN, which offers a shared infrastructure for data and AI applications, and we developed an AI assistant, Assistant.iQ, that meets the administration’s requirements for data security, traceability, and flexibility.

Greenbone: Open Source is a minimum requirement for trust in software, IT, and cybersecurity – is that even possible for AI, and if so, to what degree?

Nguyen: Open Source is an important topic in AI, as it can provide the necessary trust through the ability to review code and models. This requires results to be examinable and verifiable, which in turn requires a community that actively cares and participates.

The Open Source approach of many projects is ambitious and admirable, but many projects are not sufficiently maintained over time or come to a standstill altogether. In any case, you have to look closely when it comes to the topic of Open Source and AI. In other words: not all Open Source is created equal. When AI developers publish under an open-source license, that does not mean you get an open-source AI.

For a start, the numerical parameters of an AI model, its so-called weights, are very important, as they determine how it processes input and makes decisions. Then you have to consider the training data – which is often not disclosed to customers and users. Only with both can one assess how transparent, trustworthy, and reproducible an open-source model really is. Only when the complete knowledge behind a model is freely available can viable ideas be built on that foundation and lead to innovation.
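A small illustration of this point (not from the interview, just a minimal sketch assuming the Hugging Face transformers library and PyTorch are installed): openly published weights can be downloaded and inspected by anyone, yet nothing in that download includes the training data behind them. "gpt2" serves here merely as one well-known example of a model with openly published weights.

```python
# Minimal sketch: what "open weights" actually give you. Assumes the
# Hugging Face transformers library and PyTorch; "gpt2" is just an
# example of a model whose weights are openly published.
from transformers import AutoModelForCausalLM

# Downloading the model fetches its weights and architecture config --
# the numerical parameters Nguyen refers to.
model = AutoModelForCausalLM.from_pretrained("gpt2")

# The weights themselves are fully inspectable ...
total = sum(p.numel() for p in model.parameters())
print(f"{total:,} weights available for inspection")

# ... but nothing here includes the training data or the training
# recipe that produced those weights. Without them, transparency and
# reproducibility remain limited -- exactly the distinction between
# "open weights" and a genuinely open-source AI.
```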

Greenbone: What is missing to enable the safe deployment of AI? What do we have to change?

Nguyen: Safe deployment of AI requires, in addition to technical excellence, an appropriate mindset for development, governance, and responsibility. Concretely, we have to keep the principle of “security by design” in mind from the very start. What this means: developers must always systematically examine what could go wrong and address those risks early on in the blueprint and architecture of the model.

Equally important is transparency about the limits of AI systems: language models currently function reliably only within certain contexts – outside their training domain, they may deliver plausible but potentially erroneous results. Developers should therefore clearly communicate where their model works reliably and where it fails.

Mindset, Context, and Copyright

If we do not want to experience major trust and compliance issues, we must not neglect questions about copyright and training data. Beyond that, you need clear test data, an appropriate evaluation infrastructure, and ongoing monitoring for bias and fairness.

A balanced combination of legal regulation, technical self-commitment, and responsive governance is the key to an AI that allows us to protect democratic values and take technological responsibility.

Greenbone: Do you believe the EU has a competitive advantage?

Nguyen: Yes, the EU has a real advantage in the global AI competition – and it is rooted in trust. Other regions primarily bet on speed and market dominance – and in doing so, as recently happened in the U.S., largely absolve tech giants of responsibility for societal risks. By contrast, Europe has established a downright exemplary model with the AI Act, relying on security, data protection, and a human-centered approach to development.

Precisely because AI is increasingly entering sensitive areas of life, the protection of personal data and the enforcement of democratic values are becoming ever more crucial. With its governance structure, the EU is building binding standards that many countries and organizations around the world look toward. This focus on values will pay off for Europe in the long run – specifically in the export of technology and in strengthening societal trust in democracy and in the systems people rely on locally.

Especially in the development of human-centered AI, Europe is a trailblazer. However, regulation must not become a hindrance to innovation: trust and security must go hand in hand with a readiness to invest, technological openness, and speed of implementation. Europe can set standards – and build a unique, competitive AI identity.

Greenbone CMO Elmar Geese on AI in Vulnerability Management

Greenbone: Mr. Geese, AI is on everyone’s lips – what changes does AI bring to vulnerability management?

Geese: I think AI is going to support us a lot, but it can never fully replace vulnerability management. While AI can take care of time-intensive routine tasks – for example, evaluating large quantities of data, finding patterns, and making suggestions for prioritization – security teams must stay in charge of final decisions and remain in control, especially in complex and critical cases where human understanding of context is invaluable.

Used purposefully – with careful judgment and planning – AI brings numerous advantages to vulnerability management without requiring us to relinquish control. We are already using AI today to provide a better product to our customers, entirely without passing customer data on to big AI service providers. Our “trustworthy AI” works completely without the transfer and central collection of data.
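To illustrate the division of labor Geese describes, here is a minimal, self-contained Python sketch of human-in-the-loop prioritization. All names, data structures, weightings, and thresholds are hypothetical, invented purely for illustration – this is not Greenbone's actual implementation. Note that everything runs locally; no data leaves the machine.

```python
# Hypothetical sketch of AI-assisted, human-in-the-loop vulnerability
# triage: the machine ranks and suggests, the analyst decides the
# critical cases. Runs entirely locally -- no data is sent anywhere.
from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    cve: str
    cvss: float               # CVSS base score, 0.0-10.0
    asset_criticality: float  # 0.0-1.0, importance of the affected asset

def ai_suggest_priority(f: Finding) -> float:
    """Stand-in for a locally run model: blends severity and asset
    criticality into a suggested priority score (weights are made up)."""
    return 0.6 * (f.cvss / 10.0) + 0.4 * f.asset_criticality

def triage(findings: list[Finding], review_threshold: float = 0.7) -> None:
    """AI ranks all findings; anything above the threshold is routed
    to a human analyst for the final decision instead of being
    handled automatically."""
    for f in sorted(findings, key=ai_suggest_priority, reverse=True):
        score = ai_suggest_priority(f)
        action = "escalate to analyst" if score >= review_threshold else "auto-queue"
        print(f"{f.host}  {f.cve}  score={score:.2f}  -> {action}")

triage([
    Finding("db01", "CVE-2024-0001", 9.8, 0.9),
    Finding("kiosk7", "CVE-2023-1234", 5.3, 0.2),
])
```

The design choice mirrors the interview: routine, low-impact findings are queued automatically, while sensitive, high-impact decisions are explicitly escalated so that the security team stays in control.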

Greenbone: What risks do you have to consider?

Geese: Given today’s state of the art, the use of AI in security-critical areas carries several risks that need to be contained. Automation creates many opportunities, but also risks such as flawed decision-making, new attack vectors, or unintended system effects. An AI with “measured judgment” combines human and machine strengths, so that technological advantages like speed and scalability can be harnessed without disempowering technical staff or incurring security risks.

Greenbone and AI

Greenbone relies on the purposeful use of artificial intelligence to detect vulnerabilities in IT environments efficiently and to support prioritization. At the same time, security teams remain responsible and in control at all times, especially when it comes to sensitive and complex decisions. Data protection always takes top priority for us: customer data is never transferred to external AI companies.

Our approach combines the advantages of modern technology with human reasoning – for contemporary and responsible cybersecurity.

Contact us for further information.