
Abstract

**Recent advancements in large language models (LLMs) have significantly enhanced their capabilities, leading to a paradigm shift in how AI can be integrated into our society.** This is particularly evident in the case of ChatGPT, including its latest iteration, GPT-4. It is intriguing that the straightforward approach of training Transformer models on massive datasets has achieved such remarkable progress, which should pique our interest on both conceptual and technical levels.

This mini-workshop aims to provide tutorial-style talks introducing the technology behind GPT-4 and discussing its implications for society. The technical talks are tailored for researchers in the field of AI.

**Program:**

**Day 1 – June 30th**

– 14:30 – 14:50 *Matthias Weissenbacher* Opening Remarks: Transformers and the Power of GPT-4
– 14:50 – 15:15 (5 min Q&A) *Noriki Nishida* Introductory Talk: Standard supervision vs. in-context learning in NLP
– 15:15 – 15:40 (5 min Q&A) *Benjamin Heinzerling* Why can LMs learn in-context? An overview of current theories
– 15:40 – 16:10 (10 min Q&A) *Koiti Hasida* Future R&D directions of generative AI & social impacts

**Day 2 – July 3rd**

– 13:30 – 13:35 Opening Remarks – Day 2
– 13:35 – 14:00 (5 min Q&A) *Hiroshi Nakagawa* How to live together with ChatGPT?
– 14:00 – 14:40 (10 min Q&A) *Hiroki Teranishi* How to train large language models, using the example of GPT-3 (LLaMA)
– 14:40 – 15:20 (10 min Q&A) *Yuji Matsumoto* InstructGPT and the role of expert feedback (via RL) – example: ChatGPT/GPT-4
– 15:20 – 15:30 *Koiti Hasida* (Moderator) Open Discussion on the Future of AI (AGI) and Humanity

More Information

Date: June 30, 2023 (Fri) – July 3, 2023 (Mon), 14:30 – 15:30
URL: https://c5dc59ed978213830355fc8978.doorkeeper.jp/events/158694
