BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//https://techplay.jp//JP
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALDESC:[24th AIP Open Seminar] Talks by Search and Parallel Computing
  Unit 
X-WR-CALNAME:[24th AIP Open Seminar] Talks by Search and Parallel Computing
  Unit 
X-WR-TIMEZONE:Asia/Tokyo
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
BEGIN:STANDARD
DTSTART:19700101T000000
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:815206@techplay.jp
SUMMARY:[24th AIP Open Seminar] Talks by Search and Parallel Computing Unit
  
DTSTART;TZID=Asia/Tokyo:20210512T150000
DTEND;TZID=Asia/Tokyo:20210512T170000
DTSTAMP:20260407T224944Z
CREATED:20210419T060011Z
DESCRIPTION:Event details:\nhttps://techplay.jp/event/81520
 6?utm_medium=referral&utm_source=ics&utm_campaign=ics\n\nSearch and Paral
 lel Computing Unit  (https://aip.riken.jp/labs/generic_tech/search_parall
 elcomput/?lang=en) at RIKEN AIP\n\nSpeaker 1: (10 min.) Kazuki Yoshizoe \
 nTitle: Overview of Search and Parallel Computing Unit\n\nSpeaker 2: (25 
 min.) Vidal Alcázar\nTitle: A Small Overview about Heuristic Search and 
 Recent Developments in Bidirectional Search.\nAbstract: \nAlready at the
  first AI conference ever\, at Dartmouth in 1956\, the main paradigm use
 d to solve problems like winning a game or proving a theorem was called
  "re
 asoning as search". This consists of advancing towards the goals as if th
 rough a maze. In order to avoid getting lost in the maze\, computers will
  use heuristics\, aids that help the computer make informed decisions whe
 n navigating the search space.\n\nDespite how old and how general heurist
 ic search is as an approach\, it remains a relatively small and somewhat u
 nknown field. Indeed\, most people wouldn't think of heuristic search whe
 n discussing milestones in AI such as Deep Blue and AlphaZero\, and yet h
 euristic search is at the core of both of them.\n\nIn this talk\, we intr
 oduce the field of heuristic search and its relationship with general int
 elligence\, and show how\, even after more than 50 years\, important adva
 ncements are still being made\, such as the recent breakthroughs in bidir
 ectional search.\n\nSpeaker 3: (25 min.) Kazuki Yoshizoe\nTitle: Massivel
 y Paral
 lel Monte-Carlo Tree Search and Application to Molecular Design\nAbstract
 : \nIt is common practice to use large computational resources to train
  neural networks\, as seen in many examples such as reinforcement learni
 ng applications. However\, while massively parallel computing is often u
 sed for training models\, it is rarely used to search for solutions to c
 ombinatorial optimization problems.\nIt is often mistakenly assumed that
  large-scale parallel search is impossible. In this talk\, we describe o
 ur past work on 1\,000-worker-scale parallel Monte-Carlo Tree Search (MC
 TS) and propose an improved massively parallel Monte-Carlo Tree Search (
 MP-MCTS) algorithm. MP-MCTS works efficiently at a 1\,000-worker scale i
 n a distributed-memory environment spanning multiple compute nodes\, and
  we apply it to molec
 ular design.\n\nThis is the first work that applies distributed MCTS to a
 real-world\, non-game problem. Existing works on large-scale parallel MC
 TS show efficient scalability in terms of the number of rollouts up to 1
 00 workers\, but they suffer from degradation in the quality of the solu
 tions. MP-MCTS maintains the search quality at a larger scale. By runnin
 g MP-MCTS on 256 CPU cores for only 10 minutes\, we obtained candid
 ate molecules with similar scores to non-parallel MCTS running for 42 hou
 rs. Moreover\, our results based on parallel MCTS (combined with a simple
  RNN model) significantly outperform existing state-of-the-art work. Our 
 method is generic and is expected to speed up other applications of MCTS.
 \n\n(The main part of this talk was presented at ICLR 2021.)\n\nSpeaker 4: (2
 5 min.) Kazuki Yoshizoe\nTitle: Massively Parallel Statistical Pattern Mi
 ning\nAbstract: \nDiscovering significant feature combinations from a lar
 ge number of features is an important problem in data mining and has vari
 ous applications in many domains. Naive algorithms fail because the compl
 exity of the problem is exponential in the number of features. We efficie
 ntly enumerate statistically significant combinations of features using a
  massively parallel depth-first search algorithm.\n\nOur statistical patt
 ern mining uses frequent itemset mining (also known as the "beer diaper p
 roblem") with pruning based on statistical significance (such as Fisher P
 -value). We describe how a depth-first search realizes this and how we pa
 rallelize it at a massive scale. The proposed algorithm\, Massively Paral
 lel Limitless Arity Multiple-testing Procedure (MP-LAMP)\, works efficien
 tly on 10\,000-core-scale (or larger) clusters.\n\nWe also show two appli
 cations of MP-LAMP to real-world problems. One is a Genome-Wide Associati
 on Study (GWAS)\, which analyzes DNA variants associated with a trait (s
 uch as a genetic disease). The other is the discovery of hidden risk combi
 nations of posttraumatic stress disorder symptoms.\n\nSpeaker 5: (25 min
 .) Ryuichiro Hataya\nTitle: Faster AutoAugment: differentiable data augme
 ntation search for image recognition\nAbstract: \nData augmentation meth
 ods are indispensable heuristics to improve the performance of deep neura
 l networks\, especially in image recognition tasks. Recently\, several st
 udies\, such as AutoAugment [Cubuk et al. 2019]\, have shown that augmen
 tation strategies found by search algorithms outperform hand-crafted strateg
 ies. Such methods employ black-box search algorithms over image transform
 ations with continuous or discrete parameters and require a long time to 
 obtain better strategies.\n\nIn this talk\, we introduce our proposed met
 hod\, Faster AutoAugment [Hataya et al. 2020]. This method uses a differe
 ntiable policy search pipeline for data augmentation\, which is much fast
 er than previous methods. For this purpose\, we used approximate gradient
 s for several transformation operations with discrete parameters and a di
 fferentiable mechanism for selecting operations. As the objective of trai
 ning\, we proposed to minimize the distance between the distributions of 
 augmented and original data\, which can also be differentiated. Image cla
 ssification experiments show that our method achieves significantly faste
 r search than prior methods without a performance drop.\n\nAll participa
 nts are required to agree to the AIP Open Seminar Series Code of Conduct
 .\nPlease see the URL below.\nhttps://aip.riken.jp/event-list/termsofpar
 ticipation/?lang=en\n\nRIKEN AIP expects adherence to this code througho
 ut the event. We expect cooperation from all participants to help ensure
  a safe environment for everybody.\n\n
LOCATION:Online
URL:https://techplay.jp/event/815206?utm_medium=referral&utm_source=ics&utm
 _campaign=ics
END:VEVENT
END:VCALENDAR
