Retrieval-Augmented Generation (RAG)

This video explains the Retrieval-Augmented Generation (RAG) model. RAG combines Dense Passage Retrieval with a seq2seq BART generator and is evaluated on knowledge-intensive tasks such as open-domain QA, Jeopardy question generation, and FEVER fact verification. This looks like a really interesting paradigm for building language models that produce factually accurate generations. Thanks for watching! Please subscribe!

Paper Links:

Time Stamps
0:00 Introduction
2:05 Limitations of Language Models
4:10 Algorithm Walkthrough
5:48 Dense Passage Retrieval
7:44 RAG-Token vs. RAG-Sequence
10:47 Off-the-Shelf Models
11:54 Experiment Datasets
15:03 Results vs. T5
16:16 BART vs. RAG - Jeopardy Questions
17:20 Impact of Retrieved Documents
18:53 Ablation Study
20:25 Retrieval Collapse
21:10 Knowledge Graphs as Non-Parametric Memory
21:45 Can we learn better representations for the Document Index?
22:12 How will Efficient Transformers impact this?
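The RAG-Token vs. RAG-Sequence distinction covered in the video comes down to where the model marginalizes over retrieved documents: RAG-Sequence uses one document distribution for the whole output, while RAG-Token re-marginalizes at every generated token. A minimal sketch with toy probabilities (the document prior and token probabilities below are made-up numbers, not outputs of the actual DPR retriever or BART generator):

```python
from math import prod

# Toy retrieval scores p(z|x) for two hypothetical retrieved documents.
doc_prior = [0.7, 0.3]

# Toy per-token generation probabilities p(y_t | x, z, y_<t) for a
# 3-token target sequence, one row per retrieved document.
token_probs = [
    [0.9, 0.8, 0.7],  # conditioned on document 0
    [0.2, 0.6, 0.5],  # conditioned on document 1
]

def rag_sequence(prior, probs):
    """RAG-Sequence: marginalize over documents once for the whole sequence.

    p(y|x) = sum_z p(z|x) * prod_t p(y_t | x, z, y_<t)
    """
    return sum(p_z * prod(row) for p_z, row in zip(prior, probs))

def rag_token(prior, probs):
    """RAG-Token: marginalize over documents at every token position.

    p(y|x) = prod_t sum_z p(z|x) * p(y_t | x, z, y_<t)
    """
    n_tokens = len(probs[0])
    per_token = [
        sum(prior[z] * probs[z][t] for z in range(len(prior)))
        for t in range(n_tokens)
    ]
    return prod(per_token)

print(rag_sequence(doc_prior, token_probs))
print(rag_token(doc_prior, token_probs))
```

RAG-Token lets different tokens draw on different documents, which is why it tends to help on outputs that must stitch together facts from multiple passages.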
