Hitomi Yanaka (D.Eng.)
Title
Team Director

Members

  • Team Director
    Hitomi Yanaka
  • Student trainee
    Taisei Yamamoto
  • Student trainee
    Gouki Minegishi
  • Student trainee
    Shota Kizawa
  • Student trainee
    Hirohane Takagi
  • Student trainee
    Daiki Matsuoka
  • Student trainee
    Anirudh Kondapally
  • Student trainee
    Eiji Iimori
  • Part-time worker I
    Tomoki Doi
  • Part-time worker I
    Yusuke Ide
  • Part-time worker II
    Ryoma Kumon

Introduction

Humans make various inferences from given information and use them to make decisions in everyday life. Recently, research on large language models built with massive data and deep learning has accelerated, and interactive AI-based decision-making support has become a reality. However, it remains difficult to explain how current AI systems understand the meaning of their input and perform inference. To achieve truly reliable AI, the problems of explainable AI must be addressed from multiple perspectives. Based on an interdisciplinary approach spanning the humanities and sciences, our team aims to elucidate how AI acquires meaning and performs inference, and to realize explainable AI that provides explanations supporting human decision-making.

Main Research Field
Informatics
Research Field
Complex Systems / Humanities / Intelligent Informatics
Research Subjects
Computational Linguistics
Natural Language Processing
Inference
Explainability
Interpretability
RIKEN Website URL