
Hitomi Yanaka (D.Eng.)
Title
Team Director
Members
- Team director: Hitomi Yanaka
- Visiting scientist: Takuto Asakura
- Visiting scientist: Namgi Han
- Visiting scientist: Masaru Isonuma
- Visiting scientist: Yusuke Sakai
- Visiting scientist: Hiroyuki Katsura
- Student trainee: Taisei Yamamoto
- Student trainee: Gouki Minegishi
- Student trainee: Shota Kizawa
- Student trainee: Yosuke Mikami
- Student trainee: Hirohane Takagi
- Student trainee: Daiki Matsuoka
- Student trainee: Koki Ryu
- Student trainee: Anirudh Kondapally
- Student trainee: Eiji Iimori
- Student trainee: Ryo Yoshida
- Part-time worker I: Tomoki Doi
- Part-time worker I: Yusuke Ide
- Part-time worker II: Ryoma Kumon
Introduction
Humans draw various inferences from given information and make decisions in everyday life. Recently, research on large language models built on massive data and deep learning has accelerated, and interactive decision-making support using AI has become a reality. However, it remains difficult to explain how current AI understands the meaning of its input and performs inference. To achieve truly reliable AI, the problems of explainable AI must be addressed from multifaceted perspectives. Through interdisciplinary approaches spanning the humanities and sciences, our team aims to elucidate the meaning-acquisition and inference processes of AI and to realize explainable AI that provides explanations supporting human decision-making.
Main Research Field
Informatics
Research Field
Complex Systems / Humanities / Intelligent Informatics
Research Subjects
Computational Linguistics
Natural Language Processing
Inference
Explainability
Interpretability