From AAAI 2020, we discussed EMU, a multilingual sentence embedding framework.
Notes on the paper covered in this episode are in this issue: https://github.com/jojonki/arXivNotes/issues/371
We are also looking for supporters: https://www.patreon.com/jojonki
--- Support this podcast: https://anchor.fm/lnlp-ninja/support
ep32: We need to talk about standard splits
ep31: CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge
ep30: Probing the Need for Visual Context in Multimodal Machine Translation
ep29: What's in a Name? Reducing Bias in Bios without Access to Protected Attributes
ep28: Attention is not Explanation
ep27: Looking back on this year and plans ahead
ep26: How far can large-scale automatically analyzed data shrink a morphological analyzer?
ep25: Compact modeling of subword-based word embeddings
ep24: BERT for Joint Intent Classification and Slot Filling
ep23: End-to-End Knowledge-Routed Relational Dialogue System for Automatic Diagnosis
ep22: What are the biases in my data?
ep21: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
ep20: Comprehensive evaluation of statistical speech waveform synthesis
ep19: SentencePiece: A simple and language independent subword tokenizer and detokenizer for NLP
ep18: PyText: A Seamless Path from NLP research to production
ep17: User Modeling for Task Oriented Dialogues
ep16: Contextual Topic Modeling For Dialog Systems
ep15: Another Diversity-Promoting Objective Function for Neural Dialogue Generation
ep14: XNLI: Evaluating Cross-lingual Sentence Representations
ep13: You May Not Need Attention