
Abstract

Natural Language Understanding Team (https://aip.riken.jp/labs/goalorient_tech/nat_lang_understand/) at RIKEN AIP

The seminar consists of five talks, as follows:

  1. On Position Representations of Transformer Models (25 min)
  2. Gradient-based End-to-end Training of Deep Neural Networks with a Symbolic Module Layer (25 min)
  3. Language Models as Knowledge Bases: On Entity Representations, Storage Capacity, and Paraphrased Queries (30 min)
  4. Teaching what’s between the lines: ongoing challenges of commonsense reasoning (15 min)
  5. Insights from Organizing the First Japanese Quiz AI Championship (15 min)

*The first four talks will be delivered in English. The fifth talk will be given in Japanese. (Simultaneous interpretation will not be available.)
If you want to join the Japanese session, please join the seminar at around 4:30 pm.

Speaker 1: Shun Kiyono (25 min)
Title: On Position Representations of Transformer Models
Abstract: Transformer models have become a ubiquitous architecture across research fields, not only in NLP but also in vision and speech.
Position representation is an important component of the Transformer: it is responsible for building sequence-order-aware representations.
I will introduce some recent developments in position representations, including our own work.
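For readers unfamiliar with the topic, the sketch below shows the fixed sinusoidal position encoding from the original Transformer paper ("Attention Is All You Need", Vaswani et al., 2017), one common baseline among position representations; the function name is ours, chosen for illustration.

    import numpy as np

    def sinusoidal_position_encoding(max_len: int, d_model: int) -> np.ndarray:
        """Fixed sinusoidal position encodings (Vaswani et al., 2017).

        Returns an array of shape (max_len, d_model) whose row p is added to
        the token embedding at position p, making the representation aware
        of sequence order.
        """
        positions = np.arange(max_len)[:, None]         # (max_len, 1)
        dims = np.arange(0, d_model, 2)[None, :]        # (1, d_model / 2)
        angles = positions / np.power(10000.0, dims / d_model)
        pe = np.zeros((max_len, d_model))
        pe[:, 0::2] = np.sin(angles)  # even dimensions
        pe[:, 1::2] = np.cos(angles)  # odd dimensions
        return pe

    # Example: encodings for a 128-token sequence in a 512-dimensional model.
    pe = sinusoidal_position_encoding(128, 512)
    print(pe.shape)  # (128, 512)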

Speaker 2: Masashi Yoshikawa (25 min)
Title: Gradient-based End-to-end Training of Deep Neural Networks with a Symbolic Module Layer
Abstract: I believe the key to further advancing the state of deep learning (DL)-based natural language processing (NLP) is combining its technologies with those of symbolic reasoning. Such a combination would make it possible to develop AI systems that are more data-efficient, more robust to variations in input texts (e.g., text domains), and equipped with the more transparent inference process that symbols afford. However, since a major key to the success of DL is end-to-end training using backpropagation, it is challenging to incorporate discrete symbolic functions within a neural network. In this presentation, I will talk about my recent efforts to overcome this problem and achieve such a fusion. In particular, we tackle numerical reasoning, an unsolved problem in DL-based NLP, by incorporating an arithmetic calculator layer within a DL-based reasoning model.
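As background, one generic way to let gradients pass through a discrete choice of arithmetic operation is to relax it into a softmax-weighted mixture over all candidate operations. The toy layer below is a minimal sketch of this idea; SoftCalculatorLayer and all of its names are hypothetical, and this is not the speaker's actual method.

    import torch
    import torch.nn as nn

    class SoftCalculatorLayer(nn.Module):
        """Hypothetical differentiable calculator layer (illustration only).

        The discrete choice among arithmetic operations is relaxed into a
        softmax-weighted mixture, so the layer can sit inside a network
        trained end-to-end with backpropagation.
        """

        def __init__(self, hidden_size: int):
            super().__init__()
            # Predict a distribution over the four candidate operations.
            self.op_scores = nn.Linear(hidden_size, 4)

        def forward(self, h, a, b):
            # h: context vector; a, b: operand values predicted upstream.
            weights = torch.softmax(self.op_scores(h), dim=-1)       # (batch, 4)
            candidates = torch.stack(
                [a + b, a - b, a * b, a / (b + 1e-6)], dim=-1)       # (batch, 4)
            # Soft selection keeps the whole computation differentiable.
            return (weights * candidates).sum(dim=-1)

    # Toy usage: gradients flow through the choice of operation.
    layer = SoftCalculatorLayer(hidden_size=8)
    h = torch.randn(2, 8)
    a, b = torch.tensor([3.0, 10.0]), torch.tensor([4.0, 2.0])
    print(layer(h, a, b))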

Speaker 3: Benjamin Heinzerling (30 min)
Title: Language Models as Knowledge Bases: On Entity Representations, Storage Capacity, and Paraphrased Queries
Abstract: Language models (LMs) appear to memorize world knowledge facts during training. For example, BERT correctly answers the query “Paris is the capital of [MASK]” with “France”.
This observation suggests that LMs can serve as an alternative or complement to structured knowledge bases like Wikidata: during training, an LM encounters world knowledge facts expressed in its training data; some of these facts are stored in some form in the LM's parameters and can later be retrieved with a suitable query, similar to the one given above.
However, this emerging LM-as-KB paradigm faces several foundational questions. Is masked language modeling a good way to represent entities in an LM? How much knowledge can an LM store? How can stored knowledge be queried? In this talk, we give initial answers to these questions.
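The "Paris is the capital of [MASK]" probe mentioned above can be reproduced with the Hugging Face transformers library; a minimal sketch follows (the choice of bert-base-cased is our assumption).

    from transformers import pipeline

    # Query BERT as if it were a knowledge base, reproducing the example above.
    fill_mask = pipeline("fill-mask", model="bert-base-cased")

    for prediction in fill_mask("Paris is the capital of [MASK].", top_k=3):
        print(f"{prediction['token_str']}\t{prediction['score']:.3f}")
    # Expected top answer: "France" (exact scores depend on the model version).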

Speaker 4: Ana Brassard (15 min)
Title: Teaching what’s between the lines: ongoing challenges of commonsense reasoning
Abstract: Ambiguous sentences are understood correctly by knowing which interpretation is more likely, and a speaker's intention can be grasped without being explicitly stated. Common sense, consisting of everyday knowledge and basic reasoning skills, is a fundamental part of communication and a long-standing challenge in NLP. In this presentation, I will briefly introduce what is missing from current commonsense resources and our team's efforts towards solving some of the field's critical issues.

Speaker 5: Jun Suzuki (15 min)
Title:
[English] Insights from Organizing the First Japanese Quiz AI Championship
[Japanese] AI王 (AI King), the First Japanese Quiz AI Championship: Report and Lessons Learned
Abstract:
[English] Question answering (QA) is a long-standing major research topic in Natural Language Processing (NLP). While this topic has seen significant technological advances and has attracted many researchers in the NLP community, QA research in Japan is much less active than it is overseas. To promote QA research in Japan, we organized the First Japanese Quiz AI Championship. This event provided an opportunity to acquire and adapt recent techniques and knowledge from the wider QA research community by building Japanese QA systems.
In this talk, I will briefly introduce the championship format and the strategies taken by the participants. I will also describe some findings of this championship.
This is a joint project of the Natural Language Understanding Team and the Language Information Access Technology Team.
[Japanese] Question answering (QA) is a major, long-standing research topic in natural language processing; with the breakthrough advances in DNN technology, it is once again being studied intensively. However, QA research in Japan is currently far less active than it is overseas. To promote QA research in Japan, we held a question-answering competition in March, built around Japanese-language quiz questions, to provide a venue for acquiring the latest techniques and knowledge.
In this talk, I will briefly introduce the competition and the strategies of the participating systems, and also present several findings obtained from the championship.
(The talk will be given in Japanese.)
This is a joint project of the Natural Language Understanding Team and the Language Information Access Technology Team.


All participants are required to agree with the AIP Open Seminar Series Code of Conduct.
Please see the URL below.
https://aip.riken.jp/event-list/termsofparticipation/?lang=en

RIKEN AIP will expect adherence to this code throughout the event. We expect cooperation from all participants to help ensure a safe environment for everybody.


More Information

Date: September 1, 2021 (Wed) 15:00–17:00
URL: https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/119510
