BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//https://techplay.jp//JP
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALDESC:Non-convex Learning Theory Team Seminar (Mr. Dmitry KOPITKOV)
X-WR-CALNAME:Non-convex Learning Theory Team Seminar (Mr. Dmitry KOPITKOV)
X-WR-TIMEZONE:Asia/Tokyo
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
BEGIN:STANDARD
DTSTART:19700101T000000
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:789528@techplay.jp
SUMMARY:Non-convex Learning Theory Team Seminar (Mr. Dmitry KOPITKOV)
DTSTART;TZID=Asia/Tokyo:20200901T150000
DTEND;TZID=Asia/Tokyo:20200901T160000
DTSTAMP:20260405T135720Z
CREATED:20200813T060006Z
DESCRIPTION:Event details:\nhttps://techplay.jp/event/789528?utm_medi
 um=referral&utm_source=ics&utm_campaign=ics\n\n【Non-convex Learning T
 heory Team】\n  This is an online seminar. Registration is required.
 \n  We will send instructions for attending the online seminar.\n\n
 【Date】 2020-09-01 15:00-16:00\n【Speaker】 Mr. Dmitry KOPITKOV\n
 【Title】 General Probabilistic Surface Optimization\n\n【Abstract】 Pr
 obabilistic inference\, such as density (ratio) estimation\, is a fun
 damental and highly important problem that needs to be solved in many d
 omains\, including robotics and computer science. Recently\, much res
 earch has been done to solve it by producing various objective functi
 ons optimized over neural network (NN) models. Such Deep Learning (DL
 ) based approaches include unnormalized and energy models\, as well a
 s critics of Generative Adversarial Networks\, where DL has shown top a
 pproximation performance. In this research we contribute a novel algo
 rithm family that generalizes all of the above and allows us to infer d
 ifferent statistical modalities (e.g. the data likelihood and the rat
 io between densities) from data samples. The proposed unsupervised te
 chnique\, named Probabilistic Surface Optimization (PSO)\, views a mo
 del as a flexible surface that can be pushed according to loss-specif
 ic virtual stochastic forces\, with a dynamical equilibrium achieved w
 hen the pointwise forces on the surface become equal. Concretely\, th
 e surface is pushed up and down at points sampled from two different d
 istributions\, with the overall up and down forces becoming functions o
 f these two distribution densities and of the force intensity magnitu
 des defined by the loss of a particular PSO instance. Upon convergenc
 e\, the force equilibrium associated with the Euler-Lagrange equation o
 f the loss constrains the optimized model to equal various statistica
 l functions\, such as the data density\, depending on the magnitude f
 unctions used. Furthermore\, this dynamical-statistical equilibrium i
 s extremely intuitive and useful\, providing many implications and po
 ssible usages in probabilistic inference. We connect PSO to numerous e
 xisting statistical works that are also PSO instances\, and derive ne
 w PSO-based inference methods as a demonstration of PSO's exceptional u
 sability. Additionally\, we investigate the impact of the Neural Tang
 ent Kernel (NTK) on the PSO equilibrium. Our study of NTK dynamics du
 ring the learning process emphasizes the importance of adapting the m
 odel kernel to the specific target function for a good learning appro
 ximation.
LOCATION:Online
URL:https://techplay.jp/event/789528?utm_medium=referral&utm_source=ics&utm
 _campaign=ics
END:VEVENT
END:VCALENDAR
