BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//https://techplay.jp//JP
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALDESC:[23rd AIP Open Seminar] Talks by Nonconvex Learning Theory Tea
 m
X-WR-CALNAME:[23rd AIP Open Seminar] Talks by Nonconvex Learning Theory Tea
 m
X-WR-TIMEZONE:Asia/Tokyo
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
BEGIN:STANDARD
DTSTART:19700101T000000
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:812787@techplay.jp
SUMMARY:[23rd AIP Open Seminar] Talks by Nonconvex Learning Theory Team
DTSTART;TZID=Asia/Tokyo:20210428T150000
DTEND;TZID=Asia/Tokyo:20210428T170000
DTSTAMP:20260421T063011Z
CREATED:20210326T060019Z
DESCRIPTION:Event details here\nhttps://techplay.jp/event/81278
 7?utm_medium=referral&utm_source=ics&utm_campaign=ics\n\nNonconvex Learni
 ng Theory Team (https://aip.riken.jp/labs/generic_tech/nonconvex_learn_th
 eory/?lang=en) at RIKEN AIP\n\nSpeaker 1: Takafumi Kanamori (25 min)\nTit
 le: Overview and Recent Developments in the Research Activity of the Non
 -Convex Learning Theory Team\nAbstract: The main target of our team is t
 o develop learning algorithms that deal with complex data and to establi
 sh theoretical foundations of statistical learning. In this talk\, we wi
 ll introduce recent developments\, including statistical inference of pr
 obability distributions on complex domains\, kernel-based feature extrac
 tion methods for complex data\, the fundamental theory of transfer learn
 ing\, and more.\n\nSpeaker 2: Kosaku Takanashi (25 min)\nTitle: Nonlinea
 r Ensemble Methods for Time Series Data\nAbstract: We propose a class o
 f ensemble methods that nonlinearly synthesize multiple sources of infor
 mation\, such as predictive distributions\, in a sequential\, time-serie
 s context. To understand their finite-sample properties\, we develop a t
 heoretical strategy based on stochastic processes\, in which the ensembl
 ed processes are expressed as stochastic differential equations and eval
 uated using Itô’s lemma. We determine the conditions and the mechanism u
 nder which this class of nonlinear synthesis outperforms linear ensembl
 e methods. Further\, we identify a specific form of nonlinear synthesi
 s that produces exact minimax predictive distributions for Kullback-Leib
 ler risk and\, under certain conditions\, quadratic risk. A finite-sampl
 e simulation study is presented to illustrate our results. This is join
 t work with Kenichiro McAlinn from Temple University.\n\nSpeaker 3: Yuic
 hiro Wada (25 min)\nTitle: Spectral Embedded Deep Clustering\nAbstract
 : We propose a clustering method based on a deep neural network. Given a
 n unlabeled dataset and the number of clusters\, our method groups the d
 ataset into the given number of clusters in the original space. We us
 e a conditional discrete probability distribution defined by a deep neur
 al network as a statistical model. Our strategy is first to estimate th
 e cluster labels of unlabeled data points selected from a high-density r
 egion. Second\, using the estimated labels and the remaining unlabeled d
 ata points\, we conduct semi-supervised learning to train the model. Las
 tly\, using the trained model\, we estimate the cluster labels of the re
 maining unlabeled data points. We conduct numerical experiments on fiv
 e commonly used datasets to confirm the effectiveness of the proposed me
 thod. This talk is based on a paper in Entropy 2019\, 21(8)\, 795\, wit
 h Shugo Miyamoto\, Takumi Nakagawa\, Leo Andeol\, Wataru Kumagai\, and T
 akafumi Kanamori.\n\nSpeaker 4: Hironori Fujisawa (25 min)\nTitle: Trans
 fer Learning via L1 Regularization\nAbstract: Machine learning algorith
 ms typically require abundant data under a stationary environment. Howev
 er\, environments are nonstationary in many real-world applications. A c
 ritical issue is how to effectively adapt models in an ever-changing env
 ironment. We propose a method for transferring knowledge from a source d
 omain to a target domain via L1 regularization in high dimensions. We in
 corporate L1 regularization of the differences between source parameter
 s and target parameters\, in addition to an ordinary L1 regularization
 . Hence\, our method yields sparsity for both the estimates themselves a
 nd the changes in the estimates. The proposed method has a tight estimat
 ion error bound under a stationary environment\, and the estimate remain
 s unchanged from the source estimate under small residuals. Moreover\, t
 he estimate is consistent with the underlying function even when the sou
 rce estimate is mistaken due to nonstationarity. Empirical results demo
 nstrate that the proposed method effectively balances stability and pla
 sticity. This is joint work with Masaaki Takada (Toshiba Corporation). T
 his talk is based on a paper accepted at NeurIPS 2020.\n\nAll participan
 ts are required to agree with the AIP Open Seminar Series Code of Condu
 ct.\nPlease see the URL below.\nhttps://aip.riken.jp/event-list/termsofp
 articipation/?lang=en\n\nRIKEN AIP expects adherence to this code throug
 hout the event. We expect cooperation from all participants to help ensu
 re a safe environment for everybody.\n\n
LOCATION:Online
URL:https://techplay.jp/event/812787?utm_medium=referral&utm_source=ics&utm
 _campaign=ics
END:VEVENT
END:VCALENDAR
