Event recap: «ChatGPT zieht in den Krieg?»
As tech giants like OpenAI forge ties with military institutions, the line between civilian and military AI is blurring – with profound implications for law, ethics, and democracy. On April 14, the DSI convened a panel of experts to examine these developments from multiple perspectives.
On April 14, 2026, the DSI hosted a panel discussion titled «ChatGPT zieht in den Krieg?» (Is ChatGPT Heading to War?) on cooperation between private-sector AI companies and military institutions. The event brought together experts from various disciplines to examine current developments and their implications from ethical, legal, and security policy perspectives.
The panel featured Prof. Dr. Markus Christen, Managing Director of the DSI and ethicist; Dr. Angela Müller, Director of AlgorithmWatch CH with a focus on legal and ethical aspects; and Dr. Myriam Dunn Cavelty, security expert and researcher at ETH Zurich. The discussion was moderated by Dr. Elif Askin from the Faculty of Law at the University of Zurich, who specializes in public law and human rights.
The discussion began with current media debates, particularly those surrounding collaborations between technology companies such as OpenAI and government military institutions in the United States. These developments raise fundamental questions: What ethical and legal challenges arise from the use of everyday AI applications for military purposes? To what extent are automated decisions or autonomous weapons systems conceivable or permissible? And what impact does this have on society and democracy?
Several key issues were highlighted during the discussion. One important point was the increasing concentration of power in the tech industry. Since core AI technologies are developed and controlled by a few large companies, dependencies arise that pose both political and societal risks. Many critical decisions ultimately depend on the actions and interests of these actors.
At the same time, the complexity of the issue was emphasized. While dystopian scenarios such as autonomous «killer robots» or widespread surveillance often take center stage, it was also pointed out that AI technologies enable legitimate and socially valuable applications, for example in medicine or research. This ambivalence makes it difficult to establish clear regulatory guidelines, as AI, as a technology, offers both positive and problematic applications.
Another point of discussion was the existing use of AI in military contexts. Such use is by no means new; earlier systems were already employed for decision support. The central question therefore shifts toward a normative assessment: Which forms of use are socially acceptable, and where should clear boundaries be drawn?
The lack of transparency in many AI systems was also critically examined. The so-called «black box» problem makes it difficult to clearly assign responsibility, for instance in the case of erroneous or problematic decisions. In this context, the question arises as to whether responsibility lies more in the design of the systems or in their specific application.
In addition, the discussion focused on how governments are using AI technologies in various fields, ranging from human resources and logistics to image generation and research. It is often difficult to distinguish between real-world applications and exaggerated portrayals in media reports.
A topic that sparked intense discussion among the audience was the concept of «cognitive warfare». Here, the focus is less on the technical capabilities of AI and more on its influence on perception, opinion formation, and social trust. Disinformation and psychological effects can have significant impacts on democratic processes and social cohesion, even without highly developed AI systems, as examples such as Cambridge Analytica demonstrate.
In conclusion, it became clear that the development and application of AI are heavily influenced by experts who often work in private companies. This raises the question of how research and academic institutions can be more closely involved and supported to ensure a broader and independent knowledge base.
The panel discussion clearly demonstrated how multifaceted and interdisciplinary the debate surrounding AI in a military context is. It provided a space for nuanced perspectives and an open exchange between experts and the audience. We would like to thank our panelists and all participants for the engaging discussion, and the DSI Communities Ethics and AI & Law for the organization.