Highlights from the Kyoto Conference: Reconsidering ‘value’ in the age of AI — Launching the discussion from fundamental questions: Panel discussions Part I
At the inaugural two-day Kyoto Conference, two keynote speeches were followed by nine panel discussions — including lunch sessions — and a roundtable, at which participants exchanged ideas freely around shared tables. The Kyoto Institute of Philosophy organized these sessions into six sequential parts, each designed to deepen philosophical reflection step by step. The theme of Part I was “The Fundamental Questions of Value.”
Details
From the ‘Ivory Tower’ to dialogue — Ten scholars present philosophical perspectives to industry
Part I served as a starting point for reexamining the question “What is value?” from a philosophical perspective. Ten leading scholars representing the academic community were divided into two panels for this discussion.
The first panel featured Professor Maurizio Ferraris of the University of Turin (Italy), President Teruo Fujii of the University of Tokyo, Professor Emerita Heisook Kim of Ewha Womans University (Korea), and Emeritus Professor Michael Neocosmos of Rhodes University (South Africa). The discussion was moderated by Professor Noburu Notomi of the University of Tokyo.
Professor Notomi opened the session by inviting each panelist to share what they considered the most crucial aspect of philosophical debate on value. He led off with his own view: “In Japanese society, whenever we talk about value (kachi), it means economic value: How much? ... This reduction to economic value is prevailing and dangerous.”
Next to speak was Professor Ferraris, who said: “I believe that the solution to the most compelling problems of contemporary society is not to leave the administration of data to five American capitalists and one Bolshevik chief in China ... If we do not create this digital welfare, the alternative, as we are seeing, will be warfare.”
Ferraris’ remark was a veiled reference to the fierce competition between the United States and China over AI supremacy.
President Fujii then raised the issue of bias embedded in the large language models that power interactive AI services such as ChatGPT: “There are, so to say, 7,000 languages throughout the world, but 40% of them don’t have a written form. How could we involve all these languages and cultures into our models to be shared, if possible?”
The remaining panelists, Professors Kim and Neocosmos, both warned against the growing neglect of “universality.”
Professor Kim stated: “With regard to values, the problem our era faces is, paradoxically, that it is becoming more and more an age without values ... The crisis lies in the disappearance of universalism and the spread of relativism, skepticism, and nihilism.”
Professor Neocosmos added: “In the world today, there is a crisis of thought ... I mean to say there is a crisis of universal humanity, of the idea of the universal.”
As president of the International Federation of Philosophical Societies, Professor Kim then referred to Professor Deguchi’s keynote proposal of a “WE-turn,” emphasizing that now is precisely the time to rethink how people relate to one another and how they stay connected. Professor Ferraris agreed, underscoring the importance of philosophy stepping beyond conventional boundaries: “Academia should go outside of academia, especially philosophy, which was classically the ivory tower. We should speak with entrepreneurs, common people, and the media.”
President Fujii responded by expressing hope for broader public participation: “I hope we can bring a broader public into this meeting next time.”
AI’s hidden errors and biases — Corrected by discerning human judgment
The second panel examined the issues surrounding AI in greater depth. A distinctive feature of this session was the dynamic exchange between Tokyo University of the Arts President Katsuhiko Hibino — also an acclaimed contemporary artist — and three experts in AI and robotics.
President Hibino explained that he had rediscovered the value of art amid the social transformations brought about by AI. Imagine placing an apple on a table and having everyone draw it, he suggested. “My drawing would not be the correct one, nor would yours be wrong ... Art exists precisely in the differences of value and evaluation that arise between them,” Hibino said.
He continued: “When something differs from your own sense of value, you may tolerate it, but you cannot fully agree with it — and there will always be things you do not understand ... But art has the power to accept not understanding.”
Addressing the audience, he said, “When we consider the challenges posed by AI, we may find answers to questions like ‘What defines humanity?’ and ‘What can humans accomplish?’ by approaching them through the lens of art.”
Professor Yutaka Matsuo of the University of Tokyo, who serves as chairperson of the Japanese government’s AI Strategy Council, picked up the thread of “value” and extended the discussion. He noted that value judgments can enter AI development at multiple points: in “pre-training,” when AI systems absorb data, and in “post-training,” when humans correct AI’s mistaken outputs.
“AI can be used for education, military purposes, or for job judgment and recruitment. So, whether the purpose itself is good or not is, of course, a very important part where value comes in,” Matsuo said. He added that how we decide to use AI inevitably introduces further layers of value.
Taking a social perspective, Dr. Karoliina Snell of the University of Helsinki expressed concern about the concentration of AI development in Silicon Valley. As a scholar of AI in the medical and healthcare fields, she said emphatically: “How to transform these ideas [shaped in specific areas like USA] into practice, and ethical things, and values into a welfare state like Finland, is something I find very difficult.”
After a period of discussion, moderator Professor Mathias Risse of Harvard University posed a central question to the four panelists: “To be on the optimistic path for artificial intelligence to make our lives good, for evolution to go the right way, what does that take?”
Dr. Snell replied: “We also need courage to sometimes say no ... We can also put the brakes on it.”
Professor Matsuo followed: “We have to be very careful about adopting new technologies at a very rapid speed. Maybe we have to slow down a little bit and take more time to think about what is going on and what the future could be.”
Distinguished Professor Hiroshi Ishiguro of Osaka University, known for creating teleoperated androids modeled after himself, added: “[W]e need to consider what kind of rules we need for using technology to develop better societies.”
President Hibino then returned to the realm of art in response to Professor Risse’s question. If AI is asked to generate images mimicking his own artistic style, he noted, some results will inevitably be “wrong.” Identifying these differences is precisely the role of human mekiki — the discerning eye — and that is what makes the process of correcting AI’s errors essential.
“AI shows us incorrect answers, and that is precisely why the correct ones come into view,” Hibino said. “With the discerning eye that evaluates value ... there is the possibility that entirely new forms of society can emerge.”