Talk by Kyle Gorman (Google Inc.) and Yohei Oseki (Waseda University & New York University)

Held on 2018/05/17 (Thu), 10:45–12:15

Event Details

Talk 1: 10:45–11:30
Speaker: Kyle Gorman (Google Inc.)

Title: Exploiting linguistic structure in two computational learning problems

Abstract:
In the first half of this talk, I consider a problem where computational and quantitative thinking contributes to our understanding of linguistic patterns. In the phonology literature, it is often assumed that speakers’ phonotactic knowledge can be inferred relatively directly from lexical statistics. I review some empirical and theoretical challenges to this hypothesis, and then consider the case of English syllable contact clusters (i.e., coda-onset clusters). Previous authors have proposed a number of parochial constraints holding over such sequences, but I argue that the only constraints with substantial quantitative support are those that derive from independently motivated phonological alternations.
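By way of illustration, here is a toy sketch (my own, not code from the talk) of the kind of direct inference from lexical statistics that this half of the talk questions: scoring two-segment sequences by their observed/expected frequency in a lexicon. The mini-lexicon is hypothetical, and spelling is used as a crude stand-in for phonemic transcription.

```python
# Toy observed/expected scoring of two-character sequences in a lexicon.
# A sketch only: real phonotactic work uses phonemic transcriptions and
# a full lexicon, not five orthographic words.
from collections import Counter

lexicon = ["camera", "atlas", "magnet", "signal", "vodka"]  # hypothetical mini-lexicon

pair_counts = Counter()
char_counts = Counter()
for word in lexicon:
    char_counts.update(word)
    pair_counts.update(zip(word, word[1:]))

total_pairs = sum(pair_counts.values())
total_chars = sum(char_counts.values())

def oe_ratio(c1, c2):
    """Observed/expected ratio for the sequence c1 + c2; a value above 1
    means the sequence occurs more often than chance co-occurrence predicts."""
    observed = pair_counts[(c1, c2)] / total_pairs
    expected = (char_counts[c1] / total_chars) * (char_counts[c2] / total_chars)
    return observed / expected if expected else 0.0

print(oe_ratio("t", "l"))  # the "tl" syllable contact in "atlas"
```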
In the second half, I consider a problem where linguistic representations and insights contribute to an engineering problem. Many speech and language technologies require mappings between written and spoken forms of entities such as cardinal and ordinal numbers, dates, and times (e.g., “328” → “three hundred twenty eight”). Such transductions are normally accomplished with hand-written, language-specific grammars, and as such pose a substantial barrier to internationalization. We focus on the case of number names and consider two computational models. The first uses end-to-end recurrent neural networks. The second, inspired by the literature on cross-linguistic variation in number systems, uses finite-state transducers constructed from a minimal amount of training data. While both models achieve near-perfect performance, the latter can be trained on several orders of magnitude less data than the former, making it particularly useful for low-resource languages. (This portion of the talk describes work performed in collaboration with Richard Sproat.)
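For a concrete sense of what a finite-state verbalizer looks like, here is a minimal hand-written sketch in Pynini (the library Gorman maintains; see the biography below). It covers only the fragment of English needed for the “328” example, and it is emphatically not the data-induced model described in the talk.

```python
# A hand-written toy number-name transducer in Pynini. The model in the
# talk is induced from training data; this sketch just shows the mechanics.
import pynini

# Digit-to-word fragments for the slice of English needed for "328".
units = pynini.union(
    pynini.cross("1", "one"), pynini.cross("2", "two"),
    pynini.cross("3", "three"), pynini.cross("8", "eight"),
).optimize()
tens = pynini.union(
    pynini.cross("2", "twenty"), pynini.cross("3", "thirty"),
).optimize()

space = pynini.cross("", " ")  # insert a space on the output side
# Hundreds digit, then tens digit, then units digit.
verbalizer = (units + pynini.cross("", " hundred")
              + space + tens + space + units).optimize()

lattice = pynini.compose("328", verbalizer)
print(pynini.shortestpath(lattice).string())  # three hundred twenty eight
```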

Biography:
Kyle Gorman is a computational linguist working on speech and language at Google. Before joining Google, he was a postdoctoral researcher and then an assistant professor at the Center for Spoken Language Understanding at the Oregon Health & Science University. He received a Ph.D. in linguistics from the University of Pennsylvania in 2013, where he was advised by Charles Yang. He is a maintainer of the OpenFst and OpenGrm libraries, and the principal author of Pynini. He lives in New York City.

Talk 2: 11:30–12:15
Speaker: Yohei Oseki (Waseda University & New York University)

Title: Integrating natural language processing with cognitive science of language

Abstract:
In natural language processing, language models have been evaluated either independently of human data or only indirectly, as components embedded within NLP applications. Consequently, while accuracy improves overall, these models sometimes behave in ways that are not "human-like" (e.g., performing better precisely where humans make errors) and/or uninterpretable (e.g., offering no explanation of why accuracy increases or decreases). In this talk, extending the computational cognitive science approach to natural language processing, I present computational simulation experiments based on the English Lexicon Project (Balota et al., 2007), a "shared task" in lexical processing where model predictions and human behavior are compared directly. Specifically, probability estimates from different models (e.g., character/syllable/morpheme n-gram models, HMMs, PCFGs) are transformed into an information-theoretic complexity measure called surprisal (Shannon, 1948) and evaluated against human reaction time data from visual lexical decision experiments via Monte Carlo cross-validation. Furthermore, I propose a new statistical evaluation metric for error analysis, the residual score, to enhance the interpretability of language models and to identify where they underperform or make errors.
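To make the surprisal linking hypothesis concrete, here is a small sketch (mine, not the talk's code): estimate word probabilities with a stand-in character bigram model, convert them to surprisal in bits, and treat the result as a predictor of lexical decision reaction times. The corpus and smoothing constant are placeholders.

```python
# Surprisal = -log2 P(word); lower surprisal should predict faster RTs.
# The bigram model below is a stand-in, not one of the models in the talk.
import math
from collections import Counter

def char_bigram_model(corpus):
    """Estimate P(c2 | c1) with add-one smoothing from boundary-marked words."""
    bigrams, unigrams = Counter(), Counter()
    for word in corpus:
        chars = "#" + word + "#"
        unigrams.update(chars[:-1])
        bigrams.update(zip(chars, chars[1:]))
    # 27 = 26 letters plus the boundary symbol; a placeholder alphabet size.
    return lambda c1, c2: (bigrams[(c1, c2)] + 1) / (unigrams[c1] + 27)

def surprisal(word, prob):
    """Total surprisal of a word in bits under the bigram model."""
    chars = "#" + word + "#"
    return -sum(math.log2(prob(c1, c2)) for c1, c2 in zip(chars, chars[1:]))

prob = char_bigram_model(["cat", "cab", "bat"])  # placeholder corpus
print(surprisal("cat", prob))
```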

Biography:
Yohei Oseki is an Assistant Professor in the Faculty of Science and Engineering at Waseda University. Before joining Waseda University, he received a Ph.D. from the Department of Linguistics at New York University in 2018, and was a visiting scholar at the Department of Linguistics at the University of Massachusetts Amherst and the Cold Spring Harbor Laboratory. His research integrates natural language processing with cognitive science of language and attempts to reverse-engineer the most "human-like" language model.

Notes

※ This event listing reproduces information obtained from an external site.
※ Depending on when the listing was posted or last updated, it may differ from the content of the source page; please be aware of this in advance.
※ For the latest information, registration procedures, and any inquiries about the event, please consult the source page.
