Non-convex Learning Theory Team Seminar (Mr. Dmitry KOPITKOV)
Event Details
【Non-convex Learning Theory Team】
This is an online seminar. Registration is required.
We will send instructions for attending the online seminar.
【Date】 2020-09-01 15:00-16:00
【Speaker】 Mr. Dmitry KOPITKOV
【Title】 General Probabilistic Surface Optimization
【Abstract】 Probabilistic inference, such as density (ratio) estimation, is a fundamental and highly important problem arising in many domains, including robotics and computer science. Recently, much research has addressed it by constructing various objective functions optimized over neural network (NN) models. Such Deep Learning (DL) based approaches include unnormalized and energy models, as well as the critics of Generative Adversarial Networks, where DL has shown top approximation performance. In this research we contribute a novel algorithm family that generalizes all of the above and allows us to infer different statistical modalities (e.g. data likelihood and the ratio between densities) from data samples.

The proposed unsupervised technique, named Probabilistic Surface Optimization (PSO), views a model as a flexible surface that can be pushed according to loss-specific virtual stochastic forces; a dynamical equilibrium is achieved when the pointwise forces on the surface become equal. Concretely, the surface is pushed up and down at points sampled from two different distributions, with the overall up and down forces becoming functions of these two distribution densities and of the force-intensity magnitudes defined by the loss of a particular PSO instance. Upon convergence, the force equilibrium associated with the Euler-Lagrange equation of the loss enforces the optimized model to equal various statistical functions, such as the data density, depending on the magnitude functions used.

Furthermore, this dynamical-statistical equilibrium is extremely intuitive and useful, with many implications and possible usages in probabilistic inference. We connect PSO to numerous existing statistical works that are also PSO instances, and derive new PSO-based inference methods as a demonstration of PSO's exceptional usability. Additionally, we investigate the impact of the Neural Tangent Kernel (NTK) on the PSO equilibrium. Our study of NTK dynamics during the learning process emphasizes the importance of adapting the model kernel to the specific target function for a good learning approximation.
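As an illustrative sketch (not code from the talk), the up/down-force picture above can be demonstrated with the classic logistic loss, one of the existing objectives the abstract says PSO generalizes: samples from a distribution P push the surface f up, samples from Q push it down, and at equilibrium f equals the log density ratio log p(x)/q(x). The toy setup below (two Gaussians, a linear surface, plain gradient descent) is entirely my own choice for illustration; with P = N(0,1) and Q = N(1,1) the true log-ratio is 0.5 - x, so the fitted line should approach slope -1 and intercept 0.5.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two sampled distributions: P = N(0,1) supplies "up" pushes, Q = N(1,1) "down" pushes.
xp = rng.normal(0.0, 1.0, 20000)
xq = rng.normal(1.0, 1.0, 20000)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Flexible "surface" f(x) = a*x + b (linear suffices here, since the true
# log-ratio log p(x)/q(x) = 0.5 - x is itself linear).
a, b = 0.0, 0.0
lr = 0.5
for _ in range(3000):
    fp, fq = a * xp + b, a * xq + b
    # Gradient of the logistic loss E_P[log(1+e^{-f})] + E_Q[log(1+e^{f})]:
    gp = -(1.0 - sigmoid(fp))   # pointwise "up" force magnitude at P samples
    gq = sigmoid(fq)            # pointwise "down" force magnitude at Q samples
    grad_a = np.mean(gp * xp) + np.mean(gq * xq)
    grad_b = np.mean(gp) + np.mean(gq)
    a -= lr * grad_a
    b -= lr * grad_b

# At the force equilibrium, f(x) estimates log p(x)/q(x) = 0.5 - x.
print(a, b)  # should approach -1.0 and 0.5 (up to sampling noise)
```

Note how the two force magnitudes `gp` and `gq` depend on the current surface height: as f rises at a point, the up force weakens and the down force strengthens, which is exactly the self-balancing mechanism that fixes the equilibrium surface.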