Defence Minister Khawaja Asif has cautioned that future wars could become far more dangerous as the use of artificial intelligence in military systems accelerates, posing serious risks to international peace and stability. Speaking at a high-level discussion on artificial intelligence (AI) on Wednesday, he said, “We must ensure that harnessing AI is only for promoting peace and development, not conflict and instability.” Asif urged all states to ensure that technological advancements are harnessed under the UN Charter for the sake of humanity, warning that while AI has simplified decision-making processes, it has also created conditions in which future battles may be substantially more dangerous.
The defence minister added that Pakistan unveiled its first national AI strategy earlier this year, demonstrating the government’s commitment to responsible innovation while guaranteeing safeguards against abuse. “To avoid an unchecked arms race in this area, [the] world must take collective action,” he continued. In closing, the minister reaffirmed Pakistan’s commitment to backing international initiatives that balance scientific advancement with the demands of world peace and stability.
Opening the discussion earlier, UN Secretary-General Antonio Guterres said AI is “no longer a distant horizon — it is here, transforming daily life, the information space, and the global economy at breathtaking speed.” “When used responsibly, AI can strengthen prevention and protection in a myriad of ways, including anticipating food insecurity, supporting de-mining, and helping identify potential outbreaks of violence,” he stated. He emphasized that “innovation must serve humanity – not undermine it,” citing risks to information integrity and AI-enabled hacks that can impair essential infrastructure in minutes. The UN chief noted that the UN General Assembly created an annual Global Dialogue on AI Governance and an Independent International Scientific Panel on AI last month.
Turning to priorities, Guterres said that humans must always have the final say in life-or-death decisions and that an algorithm must never determine humanity’s destiny. He renewed his call for a legally binding instrument by 2026 prohibiting lethal autonomous weapons systems that operate without human control, urging the council and member states to ensure that human control and judgment are maintained in every use of force. “Similarly, humans, not machines, must make all decisions regarding the use of nuclear weapons,” he emphasized.


















