May 19, 2021 13:23


Search and Parallel Computing Unit at RIKEN AIP

Speaker 1: (10 min.) Kazuki Yoshizoe
Title: Overview of Search and Parallel Computing Unit

Speaker 2: (25 min.) Vidal Alcázar
Title: A Brief Overview of Heuristic Search and Recent Developments in Bidirectional Search
At the very first AI conference, held at Dartmouth in 1956, the main paradigm used to solve problems such as winning a game or proving a theorem was called “reasoning as search”. This consists of advancing toward the goal as if through a maze. To avoid getting lost in the maze, computers use heuristics: aids that help them make informed decisions when navigating the search space.
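As a concrete illustration (not part of the talk itself), here is a minimal A* search on a toy grid maze, showing how a heuristic guides the search toward the goal. The grid size, unit step costs, and Manhattan-distance heuristic are illustrative assumptions:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: expand nodes in increasing order of g(n) + h(n),
    where g is the cost so far and h estimates the remaining cost."""
    frontier = [(h(start), 0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in best_g and best_g[node] <= g:
            continue  # already reached this node more cheaply
        best_g[node] = g
        for nxt, cost in neighbors(node):
            heapq.heappush(frontier,
                           (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None

# Toy 5x5 grid, 4-connected, with a Manhattan-distance heuristic.
def grid_neighbors(p):
    x, y = p
    return [((x + dx, y + dy), 1)
            for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

goal = (4, 4)
path = a_star((0, 0), goal, grid_neighbors,
              lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1]))
print(len(path) - 1)  # shortest path length: 8 moves
```

Because the Manhattan distance never overestimates the true cost on this grid, A* is guaranteed to return a shortest path here.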

Despite how old and how general heuristic search is as an approach, it remains a relatively small and somewhat obscure field. Indeed, most people wouldn’t think of heuristic search when discussing milestones in AI such as Deep Blue and AlphaZero, and yet heuristic search is at the core of both.

In this talk, we introduce the field of heuristic search and its relationship to general intelligence, and show how, even after more than 50 years, important advances are still being made, such as recent breakthroughs in bidirectional search.

Speaker 3: (25 min.) Kazuki Yoshizoe
Title: Massively Parallel Monte-Carlo Tree Search and Application to Molecular Design
It is common practice to use large computational resources to train neural networks, as seen in many reinforcement learning applications. However, while massively parallel computing is often used to train models, it is rarely used to search for solutions to combinatorial optimization problems.
It is often mistakenly believed that large-scale parallel search is impossible. In this talk, we describe our past work on parallel Monte-Carlo Tree Search (MCTS) at the 1,000-worker scale and propose an improved massively parallel Monte-Carlo Tree Search (MP-MCTS) algorithm. MP-MCTS works efficiently at the 1,000-worker scale in a distributed-memory environment spanning multiple compute nodes, and we apply it to molecular design.

This is the first work to apply distributed MCTS to a real-world, non-game problem. Existing work on large-scale parallel MCTS shows efficient scalability in terms of the number of rollouts up to 100 workers, but suffers from degradation in the quality of the solutions. MP-MCTS maintains search quality at larger scales. By running MP-MCTS on 256 CPU cores for only 10 minutes, we obtained candidate molecules with scores similar to those found by non-parallel MCTS running for 42 hours. Moreover, our results based on parallel MCTS (combined with a simple RNN model) significantly outperform existing state-of-the-art work. Our method is generic and is expected to speed up other applications of MCTS.

(The main part of this talk was presented at ICLR 2021.)
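For readers unfamiliar with MCTS, the following is a minimal sequential UCT sketch on a made-up toy problem (choosing a string of L/R moves whose reward is the fraction of R’s). It is an illustrative simplification only; MP-MCTS distributes these iterations across many workers using techniques not shown here:

```python
import math, random

class Node:
    def __init__(self, state):
        self.state = state      # sequence of moves chosen so far
        self.children = {}      # move -> Node
        self.visits = 0
        self.value = 0.0        # sum of rollout rewards

MOVES = "LR"
DEPTH = 6

def rollout(state):
    # Random playout to a terminal state; reward = fraction of 'R' moves.
    while len(state) < DEPTH:
        state += random.choice(MOVES)
    return state.count("R") / DEPTH

def select(node, c=1.4):
    # UCB1 rule: balance exploitation (mean value) and exploration.
    return max(node.children.values(),
               key=lambda ch: ch.value / ch.visits
                   + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(iterations=3000):
    root = Node("")
    for _ in range(iterations):
        node, path = root, [root]
        # 1. Selection: descend while the node is fully expanded.
        while len(node.state) < DEPTH and len(node.children) == len(MOVES):
            node = select(node)
            path.append(node)
        # 2. Expansion: add one untried child.
        if len(node.state) < DEPTH:
            move = random.choice([m for m in MOVES if m not in node.children])
            node.children[move] = node = Node(node.state + move)
            path.append(node)
        # 3. Simulation and 4. Backpropagation.
        r = rollout(node.state)
        for n in path:
            n.visits += 1
            n.value += r
    # Best first move = most-visited child of the root.
    return max(root.children, key=lambda m: root.children[m].visits)

random.seed(0)
print(mcts())
```

With enough iterations, the most-visited root child is the better first move (‘R’ in this toy setting).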

Speaker 4: (25 min.) Kazuki Yoshizoe
Title: Massively Parallel Statistical Pattern Mining
Discovering significant feature combinations among a large number of features is an important problem in data mining with applications in many domains. Naive algorithms fail because the complexity of the problem is exponential in the number of features. We efficiently enumerate statistically significant combinations of features using a massively parallel depth-first search algorithm.

Our statistical pattern mining builds on frequent itemset mining (also known as the “beer and diapers” problem) with pruning based on statistical significance (such as the Fisher p-value). We describe how a depth-first search realizes this and how we parallelize it at massive scale. The proposed algorithm, Massively Parallel Limitless Arity Multiple-testing Procedure (MP-LAMP), works efficiently on clusters at the 10,000-core scale or larger.
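As a simplified, hypothetical sketch of the depth-first enumeration underlying such mining, the code below uses plain minimum-support pruning in place of the LAMP significance bound; the transactions and threshold are invented toy data:

```python
def frequent_itemsets(transactions, min_support):
    """DFS enumeration of frequent itemsets. The anti-monotone property
    (a superset is never more frequent than any of its subsets) lets us
    prune whole subtrees; LAMP-style mining replaces min_support with a
    bound derived from the best achievable Fisher p-value."""
    items = sorted({i for t in transactions for i in t})
    results = {}

    def dfs(prefix, covered, start):
        for idx in range(start, len(items)):
            item = items[idx]
            new_cov = [t for t in covered if item in t]
            if len(new_cov) < min_support:
                continue  # prune: no superset can regain support
            itemset = prefix + (item,)
            results[itemset] = len(new_cov)
            dfs(itemset, new_cov, idx + 1)

    dfs((), transactions, 0)
    return results

# Toy "market basket" data.
T = [{"beer", "diapers"}, {"beer", "diapers", "milk"},
     {"diapers", "milk"}, {"beer"}]
fi = frequent_itemsets(T, min_support=2)
print(fi[("beer", "diapers")])  # support 2
```

The depth-first order means each branch carries only its own covered transactions, which is what makes the search easy to split across many workers.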

We also show two applications of MP-LAMP to real-world problems. One is a Genome-Wide Association Study (GWAS), which analyzes DNA variants associated with a trait (such as a genetic disease). The other is the discovery of hidden risk combinations among posttraumatic stress disorder symptoms.

Speaker 5: (25 min.) Ryuichiro Hataya
Title: Faster AutoAugment: differentiable data augmentation search for image
Data augmentation methods are indispensable heuristics to improve the performance of deep neural networks, especially in image recognition tasks. Recently, several studies, such as AutoAugment [Cubuk et al. 2019], have
shown that augmentation strategies found by search algorithms outperform hand-made strategies. Such methods employ black-box search algorithms over image transformations with continuous or discrete parameters and require a long time to obtain better strategies.

In this talk, we introduce our proposed method, Faster AutoAugment [Hataya et al. 2020], which uses a differentiable policy search pipeline for data augmentation and is much faster than previous methods. For this purpose, we use approximate gradients for several transformation operations with discrete parameters and a differentiable mechanism for selecting operations. As the training objective, we propose minimizing the distance between the distributions of the original and augmented data, which can also be differentiated. Image classification experiments show that our method searches significantly faster than prior methods without a performance drop.
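The idea of making operation selection differentiable can be sketched as follows. This toy example, with scalar “pixels”, two made-up operations, and a NumPy softmax, is an illustrative simplification and not the actual Faster AutoAugment implementation:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Two toy "augmentation operations" on a scalar pixel value.
ops = [lambda x, m: x + m,         # e.g. a brightness shift of magnitude m
       lambda x, m: x * (1 + m)]   # e.g. a contrast scaling by magnitude m

def relaxed_apply(x, logits, magnitude):
    """Instead of picking one op with a non-differentiable argmax, apply a
    softmax-weighted mixture of all ops, so gradients can flow into
    `logits` (the operation-selection parameters)."""
    w = softmax(logits)
    return sum(wi * op(x, magnitude) for wi, op in zip(w, ops))

# Equal logits -> equal mixture of the two operations.
out = relaxed_apply(0.5, np.array([0.0, 0.0]), 0.2)
print(round(float(out), 2))  # 0.5*(0.5+0.2) + 0.5*(0.5*1.2) = 0.65
```

Training then adjusts the logits by gradient descent; at the end, the selection can be discretized by taking the highest-weight operation.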

last updated on June 2, 2021 14:34