RAG (Retrieval-Augmented Generation)

Retrieval-Augmented Generation, or RAG, is a language generation model that combines pre-trained parametric and non-parametric memory. Specifically, the parametric memory is a pre-trained seq2seq model, and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. For a query $x$, Maximum Inner Product Search (MIPS) is used to find the top-$K$ documents $z_i$. For the final prediction $y$, the retrieved document $z$ is treated as a latent variable, and the model marginalizes over seq2seq predictions conditioned on the different documents, as written out below.
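Following the source paper's notation, the RAG-Sequence variant uses one retrieved document for the whole output sequence and marginalizes over the top-$K$ documents (a RAG-Token variant instead marginalizes per generated token):

$$
p_{\text{RAG-Sequence}}(y \mid x) \approx \sum_{z \,\in\, \text{top-}K(p_\eta(\cdot \mid x))} p_\eta(z \mid x) \prod_{i=1}^{N} p_\theta(y_i \mid x, z, y_{1:i-1})
$$

where $p_\eta(z \mid x)$ is the retriever's distribution over documents, with $p_\eta(z \mid x) \propto \exp(\mathbf{d}(z)^{\top}\mathbf{q}(x))$ for document and query embeddings $\mathbf{d}(z)$ and $\mathbf{q}(x)$, and $p_\theta$ is the seq2seq generator.

Below is a minimal sketch of the retrieve-then-marginalize step, assuming document embeddings are precomputed and using NumPy in place of a real MIPS index such as the FAISS index used in the paper. The function and argument names (`rag_sequence_prob`, `gen_seq_prob`, etc.) are illustrative, not taken from the paper's codebase:

```python
import numpy as np

def rag_sequence_prob(query_vec, doc_embeds, gen_seq_prob, k=5):
    """Approximate p(y | x) by marginalizing over the top-K retrieved docs.

    gen_seq_prob(doc_id) is a stand-in for the seq2seq likelihood
    p(y | x, z), which the paper computes with a BART generator.
    """
    scores = doc_embeds @ query_vec          # inner-product (MIPS) scores
    top = np.argpartition(-scores, k)[:k]    # indices of the K best documents
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                 # softmax over top-K -> p(z | x)
    # Marginalize: sum_z p(z | x) * p(y | x, z)
    return float(sum(w * gen_seq_prob(z) for w, z in zip(weights, top)))

# Toy usage with random embeddings and a constant dummy likelihood:
rng = np.random.default_rng(0)
docs = rng.normal(size=(100, 8))             # 100 documents, 8-dim embeddings
print(rag_sequence_prob(rng.normal(size=8), docs, lambda z: 0.01))  # -> 0.01
```

The softmax over the top-$K$ inner products mirrors the retriever distribution $p_\eta(z \mid x)$ above; in the paper the embeddings come from a DPR-style bi-encoder rather than the random vectors used in this toy example.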

Source: Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks (Lewis et al., 2020)

Tasks


| Task | Papers | Share |
|---|---:|---:|
| Retrieval | 161 | 31.88% |
| Question Answering | 64 | 12.67% |
| Language Modelling | 37 | 7.33% |
| Large Language Model | 27 | 5.35% |
| Information Retrieval | 24 | 4.75% |
| Text Generation | 15 | 2.97% |
| Open-Domain Question Answering | 12 | 2.38% |
| Benchmarking | 9 | 1.78% |
| Sentence | 8 | 1.58% |
