Highlights from the Kyoto Conference 5: Business leaders and civil society take the stage as cross-sector dialogue gains momentum: Lunch sessions
By the time the two morning panel discussions concluded, it was already past noon. The program then moved to Part II, the lunch session titled “Redesigning Society in the Age of AI.” Participants were free to choose among three venues, listening to discussions while enjoying traditional Japanese boxed lunches. As a relaxed atmosphere settled over the rooms without displacing the sense of anticipation, panelists from the business sector and civil society took the stage, and the cross-sector dialogue grew more engaged.
Details
The business world engages with AI — A test of executive leadership
In Room A, the main venue of the Kyoto Conference, the lunch-session panel centered on how businesses should leverage AI.
“Until a little over a decade ago, Japanese companies often relied on a single president to make all decisions without using AI. Uniform decision-making prevailed, and I believe this was a major reason for Japan’s prolonged economic stagnation. While carefully considering the reliability of data, companies should incorporate AI to diversify their decision-making,” remarked Toshiaki Higashihara, Executive Chairman of Hitachi, Ltd. and a director of the Kyoto Institute of Philosophy. He predicted that more companies would begin integrating AI into management decisions. Noting that roughly 60 percent of Hitachi’s 282,000 employees are non-Japanese, he emphasized the importance of working under a shared set of values that transcends national borders, adding, “AI may be introduced into purpose-driven management,” referring to corporate efforts to articulate organizational meaning and social value.
Robert Thomson, CEO of News Corp, publisher of The Wall Street Journal and other major media outlets, stressed that top executives must understand AI themselves. “If AI is just a technics thing, like a tyranny of the technics, then you're going to make a mistake,” he warned, arguing that social oversight of AI developers is essential to counter the biases and prejudices associated with AI risks. From the viewpoint of journalism, he further noted, “It’s important for all Japanese companies that intellectual property is protected and not taken for granted. Open source shouldn't be a synonym for theft.”
The discussion continued with R. Edward Freeman, Distinguished Professor at the University of Virginia, widely regarded as the originator of stakeholder theory, and Anton Rupert, Board Member of the Geneva Science and Diplomacy Anticipator Foundation (GESDA) in Switzerland. Professor Freeman underscored the need “to continue to be almost vigilant both for benefits and risks for what this technology is and keep the conversation alive.” Rupert cautioned that “AI is something that is inherently us. It is designed on us, it is based on us, and the good and bad that comes with us.”
Can AI be trusted? — A debate on regulation
Room B-1 hosted another AI-focused discussion, this time centered on the keyword “trust.”
Among the five speakers, Börje Ekholm, CEO of Ericsson (Sweden), opened with an observation: “I think structurally you can't trust AI because you don't know what has gone into the models.” He continued, “We need to start to debate how we interact with something that may have superior capabilities to us in many ways but that we can't trust,” arguing that the business world must approach AI on the premise of its inherent imperfections.
Jin Roy Ryu, Chairman of the Federation of Korean Industries and Chairman & CEO of Poongsan Group, focused on the difference between humans and AI when measured through the lens of trust. “When it comes to human relationship, they can't take care of that ... Can AI help people? Can AI be a cupid factor trying to put people together? I don't think that's possible,” he remarked.
The conversation also turned to regulatory approaches. The European Union has strengthened legal regulations on AI development, while the United States and China lean toward innovation-first, industry-led governance. Against this backdrop, Audrey Yamamoto, President & CEO of the U.S.-Japan Council, commented, “You don't want to overregulate before any innovation has happened. So, I think we're in a place of trying to figure out what is the proper amount of governance. Given some of the lessons learned from how it's unfolded in terms of its usage and lack of governance in the U.S., perhaps other countries can benefit from that by walking the fine line of allowing innovation to continue to flourish but with some guardrails to protect human beings and human life.”
Democracy in distress — A call for action
Room D addressed the broader theme of “Society in the Age of AI,” focusing specifically on “Reconstructing Democracy.”
“In the past one or two years, divisions have deepened in Japan. In the online world, messages are becoming shorter and more sensational. Extreme claims are fueling polarization,” noted Koji Matsui, Mayor of Kyoto City and the only panelist representing a local government. Responding to this, Fritz Breithaupt, Professor at the University of Pennsylvania, remarked, “[In the U.S.] we now run a national experiment of polarization where the country is fully divided.”
The Kyoto Institute of Philosophy had framed the challenges of our time as driven by two forces—“fragmentation” and “transformation”—and provided conference participants with philosophical prompts in advance. In this panel, fragmentation was addressed head-on. As polarization expands and intensifies, what actions should humans—the very agents responsible for creating such divisions—take? Panelists highlighted the importance of “rational emotions” and face-to-face communication before the conversation shifted to questions such as whether machines can possess emotions and whether AI or robots should be granted a form of human status.
The session concluded with a playful final question from moderator Thomas Beschorner, Professor at the University of St. Gallen (Switzerland) and director of the Institute for Business Ethics: “What would you like to see from the Kyoto Institute of Philosophy after this conference? ... Let's give them some homework.”
Professor Breithaupt called on the Institute “to take this task of being human in the age of AI seriously ... To start to define it. And then think about the policies that can help that.” Manuel Gustavo Isaac, Science Anticipation Philosophy Lead at GESDA, emphasized the need for careful dialogue while expanding networks of collaboration. Meanwhile, Kaori Karasawa, Professor at the University of Tokyo, reflected, “I believe that this Kyoto Conference will help articulate the underlying principles and the scenarios for translating them into practice. The next step is action.” Mayor Matsui, the final respondent, encouraged the Institute to propose multilayered values that move beyond binaries of good and evil, and also to consider “how [AI and robots] can be involved in society.” With this, he offered a clear direction for the Institute’s future work.