LangChain Multi-Query Retriever for RAG
James Briggs
In this video, we'll learn about an advanced technique for RAG in LangChain called "Multi-Query". Multi-query uses an LLM to turn one query into several, broadening our search scope across the vector space and returning a wider variety of results. In this example, we use OpenAI's text-embedding-ada-002 and gpt-3.5-turbo, the Pinecone vector database, and of course the LangChain library.
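The core idea above can be sketched in a few lines of plain Python. This is a minimal, self-contained illustration of the multi-query pattern only: `generate_queries` is a hypothetical stand-in for the gpt-3.5-turbo rewriting step, and `retrieve` is a toy word-overlap ranker standing in for a Pinecone vector search — neither is the actual LangChain API shown in the video.

```python
# Minimal sketch of the multi-query retrieval pattern.
# In a real pipeline an LLM (e.g. gpt-3.5-turbo) rewrites the query and a
# vector DB (e.g. Pinecone) does the search; both are stubbed here so the
# control flow is runnable on its own.

def generate_queries(question: str) -> list[str]:
    # Stub for the LLM step: produce a few paraphrases of the question.
    topic = question.rstrip("?")
    return [
        question,
        f"What is meant by {topic}?",
        f"Explain {topic} in simple terms",
    ]

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    # Stub for the vector search: rank docs by word overlap with the query.
    q_words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

def multi_query_retrieve(question: str, corpus: dict[str, str]) -> list[str]:
    # Core of the technique: fan the question out into several queries,
    # then return the deduplicated union of everything they retrieve.
    seen: set[str] = set()
    results: list[str] = []
    for q in generate_queries(question):
        for doc_id in retrieve(q, corpus):
            if doc_id not in seen:
                seen.add(doc_id)
                results.append(doc_id)
    return results

corpus = {
    "d1": "multi query retrieval rewrites one question into several",
    "d2": "pinecone is a vector database for similarity search",
    "d3": "llama 2 is a large language model released by meta",
}
print(multi_query_retrieve("What is multi query retrieval?", corpus))
```

Because each paraphrase can land in a different region of the embedding space, the union typically covers more of the corpus than any single query alone — which is exactly the "higher variety of results" the video describes.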
Subscribe for Latest Articles and Videos: https://www.pinecone.io/newsletter-signup/
AI Consulting: https://aurelio.ai
Discord: https://discord.gg/c5QtDB9RAP
Twitter: https://twitter.com/jamescalam
LinkedIn: https://www.linkedin.com/in/jamescalam/
Thumbnail credit: @LaCarnevali
00:00 LangChain Multi-Query
00:31 What is Multi-Query in RAG?
01:50 RAG Index Code
02:56 Creating a LangChain MultiQueryRetriever
07:16 Adding Generation to Multi-Query
08:51 RAG in LangChain using Sequential Chain
11:18 Customizing LangChain Multi Query
13:41 Reducing Multi Query Hallucination
16:56 Multi Query in a Larger RAG Pipeline
#artificialintelligence #nlp #ai #openai #chatbot #langchain #vectordb ... https://www.youtube.com/watch?v=VFf8XJUIHnU