
AI as a dual-use technology – a cautionary tale

  • Many technologies we embrace today have a military lineage.
  • These are dual-use technologies.
  • Knowledge engineering and robotics specialist Professor Emeritus Haruki Ueno, National Institute of Informatics, Japan, points to artificial intelligence (AI) as such a technology.
  • While such technologies benefit society, they can have a horrifying shadow.
  • Concerning AI, Ueno urges caution.

Few countries were scarred more by a quantum leap in military technology during the Second World War than Japan; the atomic explosions that devastated Hiroshima and Nagasaki nearly 80 years ago are still seared into the nation’s psyche. So, it is unsurprising that the country is wary of academia lending research impetus and energy to military technological development, even when that development yields dual-use technologies with broader benefits to society. Artificial intelligence (AI) is too attractive a game-changing technology for powerful countries not to consider its use in military conflict, especially if it has spinoffs during times of peace. One of Japan’s most respected computer science and engineering researchers is urging caution.

Dr Haruki Ueno is Professor Emeritus at the National Institute of Informatics and The Graduate University for Advanced Studies (SOKENDAI) in Japan. As a knowledge engineering and robotics specialist, he is part of the corps of academics at the forefront of his country’s research into machine learning, autonomous systems, and artificial intelligence. Writing in Fusion of Machine Learning Paradigms, Ueno presents AI as a quintessential example of what is called dual-use technology – crucial for both civilian and military aims. He notes that outside of Japan, academics in his field are working with organisations that encourage the development of such emerging technologies for military use, technologies that may one day spin off into the civilian realm. One such organisation stands head and shoulders above the others.

DARPA’s muscle

The Defense Advanced Research Projects Agency (DARPA) is an agency of the United States Department of Defense that drives research and development. Since its establishment in 1958 in response to the Soviet Union’s launch of Sputnik, DARPA has played a critical role in maintaining the US military’s technological superiority. However, the influence of DARPA’s research extends far beyond the military sphere, offering numerous benefits to civilian life. The internet, for example, emerged from DARPA research and development, as did GPS and numerous other everyday technologies we take for granted, along with significant leaps in medicine, such as prosthetics, vaccine technology, and biodefence, that have benefitted humanity.

Ueno presents artificial intelligence as a quintessential example of what is called dual-use technology – crucial for both civilian and military aims.

Where DARPA stands out is in its ‘muscle’. Although it is an agency of the US Department of Defense, its influence on global emerging technology is significant. DARPA’s projects often set the pace for similar research and development efforts worldwide. The agency’s work encourages other countries to invest in their own research and development initiatives to keep up or collaborate in areas of mutual interest.

The headquarters of the Defense Advanced Research Projects Agency (DARPA) in Ballston, Virginia.

As a result, other countries have adopted or adapted the DARPA model and developed DARPA-like organisations to foster innovation in dual-use technologies through close collaboration between academia, industry, the military, and the government. However, as Ueno points out, while such collaborations might accelerate dual-use technologies, they invite circumspection, especially when those technologies can do serious harm.

Numerous chilling scenarios

In Fusion of Machine Learning Paradigms, Ueno explains that AI is a broad term encompassing different approaches and technologies. He points specifically to agent-based AI, which models AI entities as ‘agents’ – autonomous decision-makers that perceive their environment, make decisions based on their perceptions, and then act to achieve specific goals. As a dual-use technology, agent-based AI has found its purpose in multiple peaceful applications, including autonomous vehicles and ‘smart’ factory automation.
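To make the perceive-decide-act loop concrete, the minimal Python sketch below shows a toy goal-seeking agent. The Environment and Agent classes, and the one-dimensional corridor they operate in, are invented purely for illustration; they are not drawn from Ueno’s chapter or from any particular agent framework.

```python
# A minimal, hypothetical sketch of an agent-based AI loop: the agent perceives
# its environment, decides on an action, and acts towards a goal.
# All names here are illustrative only.

from dataclasses import dataclass


@dataclass
class Environment:
    """A toy world: a one-dimensional corridor with a goal cell."""
    goal: int = 5
    position: int = 0

    def observe(self) -> int:
        # The agent's perception: signed distance from its position to the goal.
        return self.goal - self.position

    def apply(self, action: int) -> None:
        # Actions move the agent one cell left (-1) or right (+1).
        self.position += action


class Agent:
    """An autonomous decision-maker: perceive, decide, act."""

    def decide(self, perception: int) -> int:
        # A simple goal-directed policy: always step towards the goal.
        return 1 if perception > 0 else -1


def run(steps: int = 10) -> None:
    env, agent = Environment(), Agent()
    for t in range(steps):
        perception = env.observe()         # perceive
        if perception == 0:
            print(f"step {t}: goal reached at position {env.position}")
            break
        action = agent.decide(perception)  # decide
        env.apply(action)                  # act
        print(f"step {t}: moved to position {env.position}")


if __name__ == "__main__":
    run()
```

Real agent-based systems replace this hand-written policy with learned models and far richer perception and action spaces, but the underlying loop is the same, whether the agent is steering an autonomous vehicle or a weapons platform.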

Ueno highlights the urgent need for robust ethical standards, international cooperation, and stringent control measures to prevent misuse of AI technology.

However, agent-based AI is also at the forefront of a significant shift in weapons development; lethal autonomous weapons systems (LAWS) are a case in point. LAWS are a class of military technologies designed to independently identify, track, and attack targets based on pre-programmed criteria and algorithms without requiring real-time human oversight or control. Such developments raise multiple ethical issues and present numerous chilling scenarios.

Ethical considerations should be central to the research and development of highly advanced AI technologies.

In this vein, Ueno underscores the dual-use dilemma as typified in AI technologies. While these technologies promise significant benefits in healthcare, environmental protection, and disaster management, they also pose risks related to surveillance, autonomous weaponry, and cybersecurity. The last point is particularly pertinent. What distinguishes AI from much of the technology designed and championed by DARPA and similar organisations is its software nature, which enables its easy dissemination and potential theft, thereby intensifying the dual-use dilemma. Ueno, therefore, highlights the urgent need for robust ethical standards, international cooperation, and stringent control measures to prevent misuse.

Revolutionary and horrifying

Other issues surrounding DARPA-like organisations driving AI technology include the influence military funding might have on the direction of academic and industry research. Critics argue it might prioritise defence-related projects over other socially beneficial research areas. The DARPA model also operates with a level of secrecy and confidentiality necessary for national security, but this raises concerns about transparency and public accountability, especially in democratic societies. The lines are finely drawn.

The blockbuster film Oppenheimer told the story of what’s possible when a significant military power directs the world’s leading academics in a nascent technology. The outcome was both revolutionary and horrifying. Yet the technology unleashed on Hiroshima and Nagasaki, once harnessed, now powers homes and industries, and bodies are healed today by technologies kindred to those that still kill.

Such is the dual-use dilemma.

What fundamental ethical questions should AI researchers consider about the direction of their research?

AI researchers must always remember the fundamental principle that AI technology should be developed for the benefit of human society. This means that ethical considerations should be central to the research and development of highly advanced AI technologies. Additionally, academic freedom is a fundamental right in democratic societies and must be upheld. Therefore, individual AI researchers should demonstrate a strong sense of ethics and responsibility to gain societal acceptance, with a focus on maintaining transparency and accountability in their research activities.

Given that AI technology is a double-edged sword, its progress always carries the potential to harm human society. As such, it is essential to approach research within a framework that safeguards human dignity and promotes the pursuit of happiness, ensuring that AI development aligns with ethical principles and societal wellbeing.

What fundamental questions should AI researchers working with DARPA-like organisations ask themselves?

DARPA-like organisations primarily focus on advancing cutting-edge dual-use AI technology for national security purposes. AI researchers collaborating with these organisations play a crucial role in developing AI technologies that are not only innovative for military applications but also ethically sound within academia and society. In this context, AI researchers are expected to uphold high ethical standards, especially when engaging in the development of lethal autonomous weapons systems (LAWS), carefully considering the ethical implications as individual researchers.

What benefits could DARPA-like organisations worldwide bring to AI research?

The advantages of DARPA-like organisations in AI development are substantial, resulting in significant progress at the university level, boosting industry competitiveness, and fortifying military capabilities, all while maintaining a reasonable national budget. This is a primary factor driving many major countries to establish or consider establishing DARPA-like entities with adjustments tailored to each country’s circumstances, policies, and cultural context.

Multinational defence cooperation is progressing, with academia taking on a growing role, exemplified by the participation of universities from numerous major countries. AI technology research is shaped by cultural influences, underscoring the importance of international collaboration facilitated by DARPA-like organisations for advancing AI development. This collaboration is anticipated to strengthen academic, industrial, and societal dimensions, as well as the collective defence capabilities of participating nations. Addressing the dual-use dilemma and safeguarding academic freedom remain key priorities.

What excites you most about the direction of AI research as a dual-use technology?

In the history of AI, I understand that tackling suitable and ambitious topics, recognised as high-risk, high-return objectives, is crucial for developing innovative AI technology. This is because the mechanism of human intelligence is complex, while AI systems are essentially simulation models of human intelligence. Moreover, the research and development of such AI systems require top-tier researchers and substantial financial support. It is widely acknowledged that a simple model can be achieved relatively easily with low-level ideas and technologies.

In general, most dual-use technologies for military applications involve a high level of complexity, aligning with the high-risk, high-return paradigm. By engaging with such challenging themes, significant advancements in AI can be achieved, as demonstrated by DARPA. Rapid progress across universities, industry, and the military, in other words across the whole of national power, is expected within this framework. Universities contribute as key players with the support of the public.

What is your biggest concern about the direction of AI research as a dual-use technology?

There is concern that researchers, driven by curiosity, may overlook research ethics and inadvertently contribute to the research and development of LAWS that could bring about the destruction of humanity. There is also concern that misapplications of AI technology could lead to job displacement or to the neglect of compassionate, human-delivered services. There is, too, the threat that artificial general intelligence (AGI) may be achieved imminently, leading to social unrest.

International collaboration among universities is a trend of the times, with most major countries actively promoting it. Meanwhile, many research universities are actively cooperating with DARPA-like organisations. This underscores the urgent international issue of preventing the leakage and theft of research results. It is necessary for research universities worldwide to collaborate and promptly finalise a document outlining strategies for addressing the dual-use dilemma that they can all agree upon. Japanese universities should join the community.


Further reading

Ueno, H, (2023) Artificial Intelligence as Dual-Use Technology. In: Hatzilygeroudis, IK, Tsihrintzis, GA, Jain, LC, (eds) Fusion of Machine Learning Paradigms. Intelligent Systems Reference Library, 236, Springer, Cham, doi.org/10.1007/978-3-031-22371-6_2

Dr Haruki Ueno

Dr Haruki Ueno is Professor Emeritus, National Institute of Informatics, Japan; Professor Emeritus, Graduate University for Advanced Studies, Japan; and Member of the Engineering Academy of Japan. He received a BE from the National Defense Academy in 1964, and a PhD from Tokyo Denki University in 1977. Ueno was previously Research Associate, Institute of Medical Informatics, University of Missouri, and Professor, Graduate School of Information Science and Technology, The University of Tokyo.

Contact Details

e: [email protected]
w: www.nii.ac.jp/en

Collaborators

  • Dr Yoshiaki Shirai, Professor Emeritus, Osaka University
  • Dr Hiroshi Suzuki, Member of Engineering Academy of Japan

Cite this Article

Ueno, H, (2024) AI as a dual-use technology – a cautionary tale, Research Features, 152. DOI: 10.26904/RF-152-6479672419

Creative Commons Licence

(CC BY-NC-ND 4.0) This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
