Efficient One Pass End to End Entity Linking for Questions (Paper Explained)
Deep Learning Explainer
End-to-end entity linking has long been a challenging problem in NLP. The typical approach uses one model to detect entity mentions and then a second model to perform entity disambiguation. This paper elegantly folds both steps into a single neural network.
0:00 - Ya ya ya
0:56 - What's special about this paper
2:10 - System overview
3:29 - Question & entities
6:19 - Mention detection
9:06 - Entity disambiguation
11:46 - Mention detection loss
14:09 - Entity disambiguation loss
15:45 - Datasets
16:24 - Results & discussion
22:55 - Runtime comparison
23:10 - Proof of concept
25:10 - Summary
Connect
Twitter https://twitter.com/home
Email edwindeeplearning@gmail.com
Related videos:
REALM: Retrieval-Augmented Language Model https://youtu.be/JQ-bxQT5Qsw
Question and Answer Test-Train Overlap in Open Domain QA https://youtu.be/Cb5sj4_Ztfo
Paper
Efficient One-Pass End-to-End Entity Linking for Questions https://arxiv.org/abs/2010.02413
Code https://github.com/facebookresearch/BLINK/tree/master/elq
Abstract
We present ELQ, a fast end-to-end entity linking model for questions, which uses a biencoder to jointly perform mention detection and linking in one pass. Evaluated on WebQSP and GraphQuestions with extended annotations that cover multiple entities per question, ELQ outperforms the previous state of the art by a large margin of +12.7% and +19.6% F1, respectively. With a very fast inference time (1.57 examples/s on a single CPU), ELQ can be useful for downstream question answering systems. In a proof-of-concept experiment, we demonstrate that using ELQ significantly improves the downstream QA performance of GraphRetriever. ...
https://www.youtube.com/watch?v=eXN7Bu06RjI
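The biencoder idea from the abstract can be illustrated with a minimal NumPy sketch: question token embeddings score candidate mention spans (additive start/end/inside scores, following the paper's span-scoring scheme), and each detected span is disambiguated by a dot product against precomputed entity embeddings, so both steps share one encoder pass. All shapes, weight vectors, and the example span here are hypothetical placeholders, not the actual ELQ model.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, dim, num_entities = 8, 16, 100
token_emb = rng.standard_normal((seq_len, dim))    # question token embeddings (stand-in for BERT)
entity_emb = rng.standard_normal((num_entities, dim))  # cached entity encoder outputs

# Mention detection: score every span [i, j] as start(i) + end(j) + sum of
# inside-token scores, using learned projection vectors (randomized here).
w_start, w_end, w_inside = rng.standard_normal((3, dim))
start = token_emb @ w_start
end = token_emb @ w_end
inside = token_emb @ w_inside

span_score = np.full((seq_len, seq_len), -np.inf)  # -inf marks invalid spans (j < i)
for i in range(seq_len):
    for j in range(i, seq_len):
        span_score[i, j] = start[i] + end[j] + inside[i:j + 1].sum()
mention_prob = 1.0 / (1.0 + np.exp(-span_score))   # sigmoid: independent mention probabilities

# Entity disambiguation: average the span's token embeddings and compare
# against every entity embedding with a dot product -- no cross-encoder,
# so entity embeddings can be precomputed for fast one-pass inference.
i, j = 2, 4  # an arbitrary example span
span_repr = token_emb[i:j + 1].mean(axis=0)
entity_scores = span_repr @ entity_emb.T
exp_scores = np.exp(entity_scores - entity_scores.max())
entity_dist = exp_scores / exp_scores.sum()        # softmax over candidate entities
best_entity = int(entity_dist.argmax())
```

Because the entity side is a simple dot product against a fixed table, linking all spans reduces to one matrix multiplication, which is what makes the single-CPU inference speed quoted in the abstract plausible.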