BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//https://techplay.jp//JP
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALDESC:Talk by Prof. Philippe Esling (IRCAM\, Sorbonne université) o
 n June 12 at 4:00 p.m.
X-WR-CALNAME:Talk by Prof. Philippe Esling (IRCAM\, Sorbonne université) o
 n June 12 at 4:00 p.m.
X-WR-TIMEZONE:Asia/Tokyo
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
BEGIN:STANDARD
DTSTART:19700101T000000
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:735526@techplay.jp
SUMMARY:Talk by Prof. Philippe Esling (IRCAM\, Sorbonne université) on Jun
 e 12 at 4:00 p.m.
DTSTART;TZID=Asia/Tokyo:20190612T160000
DTEND;TZID=Asia/Tokyo:20190612T173000
DTSTAMP:20260405T141313Z
CREATED:20190604T140051Z
DESCRIPTION:Event details here\nhttps://techplay.jp/event/735526?utm_me
 dium=referral&utm_source=ics&utm_campaign=ics\n\nTitle:\nModeling musi
 cal creativity with variational inference and probabilistic generative
  models\n\nPhilippe Esling\nACIDS team at IRCAM\, Sorbonne Université
  (Paris\, France)\n\nAbstract:\nThe research project carried out by the
  ACIDS team at IRCAM seeks to model musical creativity by extending var
 iational learning approaches to multivariate and multimodal time serie
 s. Our main object of study is the properties and perception of musica
 l orchestration. Orchestration is the subtle art of writing musical pi
 eces for orchestra by combining the spectral properties of each instru
 ment to achieve a particular sonic goal. In this context\, multivariat
 e analysis of temporal processes is required given the inherently mult
 idimensional nature of instrumental mixtures. Furthermore\, time serie
 s need to be scrutinized at variable time scales (termed here granular
 ities)\, as a wealth of time scales co-exist in music (from the identi
 ty of single notes up to the structure of entire pieces). Moreover\, o
 rchestration lies at the exact intersection between the symbolic (musi
 cal writing) and signal (audio recording) representations.\n\nAfter in
 troducing the general framework and several state-of-the-art creative
  applications developed in recent years\, we will focus on various app
 lications of the variational learning framework to disentangle factors
  of audio variation. We will detail several recent papers produced by
  our team that regularize the topology of the latent space based on pe
 rceptual criteria\, work with both audio waveforms and spectral transf
 orms\, and perform timbre style transfer between instruments. Finally
 \, we discuss the development of these approaches as creative tools th
 at can enhance musical creativity in contemporary music\, and present
  case studies of recent pieces performed at renowned venues.\n\nWe wil
 l open the discussion to the question of creative intelligence through
  the analysis of orchestration\, and how this could give rise to a who
 le new category of generic creative learning syst
 ems.\n\nShort bio:\nPhilippe Esling received a B.Sc. in mathematics an
 d computer science in 2007\, an M.Sc. in acoustics and signal processi
 ng in 2009\, and a PhD in data mining and machine learning in 2012. He
  was a postdoctoral fellow in the Department of Genetics and Evolution
  at the University of Geneva in 2012. He has been a tenured associate
  professor at the IRCAM laboratory and Sorbonne Université since 2013.
  In this short time span\, he has authored and co-authored over 20 pee
 r-reviewed papers in prestigious journals such as ACM Computing Survey
 s\, Proceedings of the National Academy of Sciences\, IEEE TASLP\, and
  Nucleic Acids Research. He received a young researcher award for his
  work on audio querying in 2011\, a PhD award for his work on multiobj
 ective time series data mining in 2013\, and several best paper awards
 . In applied research\, he developed the first computer-aided orchestr
 ation software\, Orchids\, commercially released in fall 2014\, which
  already has a worldwide community of thousands of users and has led t
 o musical pieces by renowned composers performed at international venu
 es. He is the lead investigator of machine learning applied to music g
 eneration and orchestration\, and directs the recently created Artific
 ial Creative Intelligence and Data Science (ACIDS) team at IRCAM.
LOCATION:RIKEN AIP (Nihombashi) Open area
URL:https://techplay.jp/event/735526?utm_medium=referral&utm_source=ics&utm
 _campaign=ics
END:VEVENT
END:VCALENDAR
