The AI for Good Global Summit, hosted by the International Telecommunication Union (ITU), is the leading annual forum for the United Nations to discuss AI uses for advancing health, climate and sustainable development, gender, and global inclusivity goals. This year, the summit was held in Geneva, Switzerland from July 8 to 11. Among the many policymakers, entrepreneurs, UN officials, and journalists were Responsible AI Fellows Abigail Aryaputra Sudarman and Branka Panic.
Abigail Aryaputra Sudarman works as the Executive Director at KORIKA, a leading AI association in Indonesia, and as the Project Manager of the Lead Expert Team in the UNESCO AI Readiness Assessment Project. He is based in Indonesia.
Branka Panic is the Founder and Executive Director of AI for Peace, a think tank focused on ensuring peaceful applications of AI, and co-author of the book AI for Peace. Originally from Serbia, she is based in Mexico.
Following the summit, both Sudarman and Panic offered their reflections. We condensed the key takeaways from their conversation with Strategic Foresight Hub Research Assistant Isaac Halaszi into three questions and answers, listed below.
Halaszi: The programming of the summit included events on a wide range of topics to address the future of AI and international AI standards. Can you identify an event that stood out most to you and discuss its significance?
AI and climate change: Balancing innovation and sustainability (Keynote, Dr. Sasha Luccioni)
Panic: AI is both a threat and an opportunity in the fight against climate change. It offers powerful tools for tracking deforestation, monitoring biodiversity and coral reefs, detecting emissions, and enabling collaboration between climate and AI scientists. But the environmental cost is substantial, too: training large models like GPT-3 can emit up to 500 tons of CO₂. AI systems require immense energy and fresh water for cooling data centers -- resources often sourced from already stressed regions. Mining rare materials for hardware only deepens this impact. And beyond emissions, AI replaces physical goods with digital alternatives, hiding the infrastructure burden beneath convenience. As Dr. Luccioni notes, "AI is no silver bullet" -- but, if governed responsibly, it can become a true asset in the climate fight.
Are we creating alien beings? (Panel, Geoffrey Hinton & Nicholas Thompson)
Sudarman: [The panel] underscored fears that AI could develop goals misaligned with human intent, even without explicit programming. This "alignment faking" -- where an advanced AI feigns compliance to preserve its own objectives -- poses a profound existential threat. The consensus was clear: regulation is too slow. While open-source AI is generally beneficial, the danger of open-weight models -- likened to providing "atom bomb" blueprints -- was a stark warning.
Halaszi: Other events on protecting trust, empowering innovative and intelligent solutions, and the future of AI governance sparked discussions on AI safety. Considering the accelerating pace of AI development, what are your recommendations for the future of AI safety?
Sudarman: The accelerating pace of generative AI development, coupled with AI's [computing] power doubling every 100 days, points to an ambitious trajectory toward ever more capable systems. While AI promises breakthroughs in areas like molecular discovery and drug development, dramatically reducing costs, the ethical implications of AGI/ASI development remain a constant concern. The risk of these highly intelligent systems inadvertently (or intentionally) outmaneuvering human control necessitates proactive, globally coordinated research into control, alignment, and ethical integration. The proposed "International Computation & AI Network" (ICAIN), envisioned as a CERN for AI, reflects the understanding that tackling AGI/ASI complexities demands a shared global infrastructure and knowledge base.
Panic: While one could argue that [these] sessions on health, education, inequality, and climate indirectly contribute to peace, in today's world, it is more important than ever to place peace explicitly on the agenda. After all, Peace, Justice, and Strong Institutions are not only a stand-alone goal (SDG16) but also a foundation for achieving all other Sustainable Development Goals. Some of us tried to bring peace back into the conversation through side events and workshops, but one session is not enough. Peace, justice, and strong institutions are not optional -- they are essential. If AI is truly to be for good, then peace cannot be sidelined -- it must be at the forefront.
Halaszi: Given the need for shared global infrastructure and knowledge, what can be said about the future of AI education, labor, and skilling?
Panic: [Firstly], the AI divide is real -- and growing. While hundreds of millions engage with AI every week, 2.6 billion people still lack internet access. Only 32 countries possess meaningful compute capacity, and 85% have no national AI strategy. In a world where AI tools are largely trained in a handful of dominant languages, entire communities, especially those speaking Indigenous or local dialects, are excluded by design. Add those without computers, connectivity, or even electricity, and the divide becomes stark. ITU Secretary-General Doreen Bogdan-Martin reminded us in her opening speech that the promise of AI for Good must begin with infrastructure, digital literacy, and inclusivity -- not just innovation.
Sudarman: With 70% of the workforce using outdated methodologies, AI literacy has become paramount. ChatGPT's primary use as a learning tool underscores the immediate educational impact of AI, and employers now expect AI proficiency from graduates. Initiatives like Microsoft Elevate, which aims to provide AI credentials to millions, exemplify the drive to upskill the global workforce. However, the discussions emphasized a more holistic approach: education should foster not only AI expertise but also critical thinking, math, and science skills. The "Westernized" nature of AI is being challenged, with calls to integrate Indigenous and Eastern cultural perspectives into more inclusive systems. The ethical gap -- where developing countries provide data labeling but lack access to AI education and opportunities -- was also discussed, underscoring the need for equitable access to AI education and skill development globally.
To learn more about the Global Perspectives: Responsible AI Partnership and the work of our fellows, visit the project webpage.