
Hitomi Yanaka (D.Eng.)
Title
Team Director
Members
- Team director: Hitomi Yanaka
Introduction
Humans perform various inferences from given information and make decisions in everyday life. Recently, research on large language models using massive data and deep learning has accelerated, and interactive decision-making support using AI has become a reality. However, it is difficult to explain how current AI understands the meaning of its input and performs inference. Toward truly reliable AI, the problems of explainable AI must be addressed from multifaceted perspectives. Based on interdisciplinary approaches spanning the humanities and sciences, our team aims to elucidate the meaning-acquisition and inference processes of AI and to realize explainable AI that provides explanations supporting humans.
Main Research Field
Informatics
Research Field
Complex Systems
Humanities
Intelligent Informatics
Research Subjects
Computational Linguistics
Natural Language Processing
Inference
Explainability
Interpretability
RIKEN Website URL