The point about synthetic query generation is good. We found users had very poor queries, so we initially had the LLM generate synthetic queries. But then we found that the results could vary widely based on the specific synthetic query it generated, so we had it create three variants (all in one LLM call, so that you can prompt it to generate a wide variety, instead of getting three very similar ones back), do parallel search, and then use reciprocal rank fusion to combine the list into a set of broadly strong performers. For the searches we use hybrid dense + sparse bm25, since dense doesn't work well for technical words.
This, combined with a subsequent reranker, basically eliminated any of our issues on search.
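For reference, the reciprocal rank fusion step is only a few lines. A minimal sketch, assuming each of the three query variants has already produced its own ranked list of chunk IDs (the k=60 constant is the commonly used default):

```python
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k=60):
    """Combine several ranked lists of chunk IDs into one fused ranking.

    Each input list is ordered best-first; k=60 is the constant from the
    original RRF paper and usually works fine without tuning.
    """
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for rank, chunk_id in enumerate(ranking, start=1):
            scores[chunk_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# One ranked list per synthetic query variant (IDs are made up):
fused = reciprocal_rank_fusion([
    ["doc3", "doc1", "doc7"],   # results for variant 1
    ["doc1", "doc4", "doc3"],   # results for variant 2
    ["doc7", "doc1", "doc9"],   # results for variant 3
])
print(fused)  # broadly strong performers bubble to the top (doc1 first here)
```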
> For the searches we use hybrid dense + sparse bm25, since dense doesn't work well for technical words.
One thing I’m always curious about is if you could simplify this and get good/better results using SPLADE. The v3 models look really good and seem to provide a good balance of semantic and lexical retrieval.
Yep- that's all best practice. I want to know if we could push performance further- routing the query to different embedding models or scoring strategies, or using multiple re-rankers- still feels like the process is missing something.
OP. The way you improve it is move away from single shot semantic/keyword search and have an agentic system that can evaluate results and do follow-up queries.
Final tip: also feed the LLM's interpretation of the search back to the user on the other side, so they can check whether its understanding was correct.
Boy, that should not be the concern of the end user (developer) but those implementing RAG solutions as a service at Amazon, Microsoft, Openai and so on.
At Microsoft, that's all baked into Azure AI Search - hybrid search does BM25, vector search, and re-ranking, just by setting booleans to true. It also has a new agentic retrieval feature that does the query rewriting and parallel search execution.
Disclosure: I work at MS and help maintain our most popular open-source RAG template, so I follow the best practices closely: https://github.com/Azure-Samples/azure-search-openai-demo/
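For a concrete picture, a hybrid (BM25 + vector) query with the semantic re-ranker enabled looks roughly like this with the azure-search-documents Python SDK; the index name, vector field name, and semantic configuration name are placeholders I'm assuming, not values from the template above:

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery

user_query = "how do I rotate my API keys?"
query_embedding = [0.0] * 1536        # stand-in: produce this with your embedding model

client = SearchClient(
    endpoint="https://<your-service>.search.windows.net",
    index_name="docs-index",          # assumed index name
    credential=AzureKeyCredential("<api-key>"),
)

results = client.search(
    search_text=user_query,           # keyword/BM25 leg of the hybrid query
    vector_queries=[VectorizedQuery(
        vector=query_embedding,       # vector leg, same query embedded
        k_nearest_neighbors=50,
        fields="embedding",           # assumed vector field name
    )],
    query_type="semantic",            # turn on the semantic re-ranker
    semantic_configuration_name="default",  # assumed config name
    top=10,
)
for doc in results:
    print(doc["@search.score"])
```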
So few developers realize that you need more than just vector search, so I still spend many of my talks emphasizing the FULL retrieval stack for RAG. It's also possible to do it on top of other DBs like Postgres, but takes more effort.
"It's also possible to do it on top of other DBs like Postgres, but takes more effort."
Shameless plug: plpgsql_bm25: BM25 search implemented in PL/pgSQL (The Unlicense / PUBLIC DOMAIN)
https://github.com/jankovicsandras/plpgsql_bm25
There's an example Postgres_hybrid_search_RRF.ipynb in the repo which shows hybrid search with Reciprocal Rank Fusion ( plpgsql_bm25 + pgvector ).
I am working on search but rather for text-to-image retrieval, nevertheless, I am curious if by that's all baked into Azure AI search you also meant synthetic query generation from the grandparent comment. If so, what's your latency for this? And do you extract structured data from the query? If so, do you use LLMs for that?
Moreover I am curious why you guys use bm25 over SPLADE?
Yes, AI Search has a new agentic retrieval feature that includes synthetic query generation: https://techcommunity.microsoft.com/blog/azure-ai-foundry-bl... You can customize the model used and the max # of queries to generate, so latency depends on those factors, plus the length of the conversation history passed in. The model is usually gpt-4o or gpt-4.1 or the -mini of those, so it's the standard latency for those. A more recent version of that feature also uses the LLM to dynamically decide which of several indices to query, and executes the searches in parallel.
That query generation approach does not extract structured data. I do maintain another RAG template for PostgreSQL that uses function calling to turn the query into a structured query, such that I can construct SQL filters dynamically. Docs here: https://github.com/Azure-Samples/rag-postgres-openai-python/...
I'll ask the search team about SPLADE, not sure.
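A minimal sketch of that function-calling pattern with the OpenAI Python SDK; the filter fields and model name here are illustrative assumptions, not the template's actual schema:

```python
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "search_documents",
        "description": "Search the document store with optional structured filters.",
        "parameters": {
            "type": "object",
            "properties": {
                "search_query": {"type": "string"},
                "author": {"type": "string"},
                "published_after": {"type": "string", "description": "ISO date"},
            },
            "required": ["search_query"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "papers by Smith on chunking since 2024"}],
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "search_documents"}},
)

args = json.loads(resp.choices[0].message.tool_calls[0].function.arguments)
# args might look like {"search_query": "chunking", "author": "Smith",
#                       "published_after": "2024-01-01"} -> build SQL WHERE clauses from it
```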
Got it, I think this might make sense for a "conversation" type of search, not for an instant search feature, because even the lowest latency is gonna be too high IMO.
Fair point on latency, we (Azure AI Search) target both scenarios with different features. For instant search you can just do the usual hybrid + rerank combo, or if you want query rewriting to improve user queries, you can enable QR at a moderate latency hit. We evaluated this approach at length here: https://techcommunity.microsoft.com/blog/azure-ai-foundry-bl...
Of course, agentic retrieval is just better quality-wise for a broader set of scenarios, usual quality-latency trade-off.
We don't do SPLADE today. We've explored it and may get back to it at some point, but we ended up investing more on reranking to boost precision, we've found we have fewer challenges on the recall side.
I'd love to work with Azure AI Search, but Copilot with external items has been made so cheap that it's hard to justify...
Do you mean that you're using the Copilot indexer for Sharepoint docs? https://learn.microsoft.com/en-us/microsoftsearch/semantic-i...
AI Search team's been working with the Sharepoint team to offer more options, so that devs can get best of both worlds. Might have some stuff ready for Ignite (mid November).
That is concerning given that pure vector search is terrible outside of abstractions
I know :( But I think vector DBs and vector search got so hyped that people thought you could switch entirely over to them. Lots of APIs and frameworks also used "vector store" as the shorthand for "retrieval data source", which didn't help.
That's why I write blog posts like https://blog.pamelafox.org/2024/06/vector-search-is-not-enou...
It is almost like embeddings are a technology from the olden days.
Are you using Elasticsearch behind the scenes?
I believe that Azure AI Search currently uses Lucene for BM25, hnswlib for vector search, and the Bing re-ranking model for semantic ranking. (So, no, it does not, though the features are similar)
I must be missing something, this says it can be self-hosted. But the first page of the self-hosting docs say you need accounts with no less than 6 (!) other third-party hosted services.
We have very different ideas about the meaning of self-hosted.
That was my observation as well. To be fair their business is to sell a hosted version, they’re under no obligation to release a truly self hosted version.
I’ve never worked in such a space where the deployed environment had unfettered internet access, no access at all actually.
I’ve probably missed a huge wave of programming technology because of this, and I’ve figured out a way to make it work for a consistent paycheck over these past 20 years.
I’m also not a great example, I think I’ve watched 7 whole hours of YouTube videos ever, and those were all for car repair help.
I shy away from tech that needs to be online/connected/whatever.
You can self-host their code. I don't think there is any official definition of "self hosted" that this violates.
For example - if a "self hosted" service supports off-site backups is it self hosted or just well designed?
> For example - if a "self hosted" service supports off-site backups is it self hosted or just well designed?
There is a big difference between communicating with external services (your example) vs REQUIRING external services (what parent is complaining about).
If in your example the system can run correctly with just local backups I would consider it self-hosted.
In that case I’m self hosting every web page on the internet because I installed Firefox.
I consider this to be good open source and I'm a happy user of their OSS offering. Want no hosted dependencies? Then go write it all in Rust.
that's a stupid take and shows lack of engineering experience
I concur:
The big LLM-based rerankers (e.g. Qwen3-reranker) are what you always wanted your cross-encoder to be, and I highly recommend giving them a try. Unfortunately they're also quite computationally expensive.
Your metadata/tabular data often contains basic facts that a human takes for granted, but which aren't repeated in every text chunk - injecting it can help a lot in making the end model seem less clueless.
The point about queries that don't work with simple RAG (like "summarize the most recent twenty documents") is very important to keep in mind. We made our UI very search-oriented and deemphasized the chat, to try to communicate to users that search is what's happening under the hood - the model only sees what you see.
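On the reranker point above: if you want to try the LLM-as-reranker idea before committing to a dedicated model, a rough sketch looks like this (the prompt and model name are my own assumptions, not how Qwen3-reranker is actually invoked):

```python
from openai import OpenAI

client = OpenAI()

def llm_rerank(query: str, chunks: list[str], top_k: int = 15) -> list[str]:
    """Score each chunk's relevance to the query with an LLM, keep the best."""
    scored = []
    for chunk in chunks:
        resp = client.chat.completions.create(
            model="gpt-4.1-mini",  # any small, cheap model works here
            messages=[{
                "role": "user",
                "content": (
                    "On a scale of 0-10, how useful is this passage for "
                    f"answering the question?\nQuestion: {query}\n"
                    f"Passage: {chunk}\nReply with a single number."
                ),
            }],
        )
        try:
            score = float(resp.choices[0].message.content.strip())
        except ValueError:
            score = 0.0
        scored.append((score, chunk))
    return [c for _, c in sorted(scored, reverse=True)[:top_k]]
```

The per-chunk calls are exactly why this gets computationally expensive; purpose-built rerankers batch and score query-document pairs far more cheaply.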
I agree completely with your point, especially the difficulty of developing the user's mental model for what's going on with context and the need to move away from chat UX. It's interesting that there are still few public examples of non-chat UIs that make context management explicit. It's possible that the big names tried this and decided it wasn't worth it -- but from comments here it seems like everyone that has built a production RAG system has come to the opposite conclusion. I'm guessing the real reason is otherwise: likely for the consumer apps controlling context (especially for free users) and inference time is one of the main levers for cost management at scale. Private RAGs, on the other hand, are more concerned with maximizing result quality and minimizing time spent by employee on a particular problem with cost per query much less of a concern --- that's been my experience at least.
I wish there was more info on the article about actual customer usage - particularly whether it improved process efficiency. It's great to focus on the technical aspects of system optimization but unless this translates to tangible business value it's all just hype.
Similar writeup I did about 1.5 years ago for processing millions of (technical) pages for RAG. Lots has stayed the same it seems
https://jakobs.dev/learnings-ingesting-millions-pages-rag-az...
I also built a RAG system about a year back for technical search, everything seems the same!
My biggest RAG learning is to use agentic RAG. (Sorry for buzzword dropping)
- Classic RAG: `User -> Search -> LLM -> User`
- Agentic RAG: `User <-> LLM <-> Search`
Essentially instead of having a fixed loop, you provide the search as a tool to the LLM, which does three things:
- The LLM can search multiple times
- The LLM can adjust the search query
- The LLM can use multiple tools
The combination of these three things has solved a majority of classic RAG problems. It improves user queries, it can map abbreviations, it can correct bad results on its own, you can also let it list directories and load files directly.
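A minimal sketch of that agentic loop with the OpenAI Python SDK; `search_index` is a stand-in for whatever hybrid search backend you already have:

```python
import json
from openai import OpenAI

client = OpenAI()

def search_index(query: str) -> str:
    """Stand-in for your hybrid search; returns the top chunks as text."""
    return "...top chunks for: " + query

tools = [{
    "type": "function",
    "function": {
        "name": "search",
        "description": "Search the knowledge base. Call as many times as needed.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "What does SLA mean in our contracts?"}]
while True:
    resp = client.chat.completions.create(model="gpt-4.1", messages=messages, tools=tools)
    msg = resp.choices[0].message
    messages.append(msg)
    if not msg.tool_calls:       # the model decided it has enough context
        print(msg.content)
        break
    for call in msg.tool_calls:  # the model may search several times, rewording as it goes
        query = json.loads(call.function.arguments)["query"]
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": search_index(query),
        })
```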
I fully support this approach! When I first started experimenting—rather naively—with using tool-enabled LLMs to generate documents (such as reports or ADRs) from the extensive knowledge base in Confluence, I built a few tools to help the LLM search Confluence using CQL (Confluence Query Language) and store the retrieved pages in a dedicated folder. The LLM could then search within that folder with simple filesystem tools and pull entire files into its context as needed. The results were quite good, as long as the context didn’t become overloaded. However, when I later tried to switch to a 'Classic RAG' setup, the output quality dropped significantly and I refrained from switching.
Yes, but the assistant often doesn't search when it should and very rarely does multiple search rounds (both on GPT-5 and on Claude Sonnet 4.5; weaker models are even worse at tool calling).
Does anyone know how to do versioning for embeddings? Let’s say I want to update/upsert my data and deliver v6 of domain data instead of v1 or filter for data within a specified date range. I am thinking of exploring context prepending to chunks.
Your vector store should let you store the original text as well as metadata, where you can store the version. For example, turbopuffer lets you filter on attributes: https://turbopuffer.com/docs/query#filtering
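Generic sketch of the metadata idea (not turbopuffer-specific): keep a version and date on every chunk and filter before, or during, the vector query. Shown in-memory here purely for illustration:

```python
from datetime import date

chunks = [
    {"id": 1, "text": "...", "embedding": [0.1, 0.2, 0.3],  # embeddings truncated
     "version": "v6", "updated": date(2025, 9, 1)},
    {"id": 2, "text": "...", "embedding": [0.4, 0.5, 0.6],
     "version": "v1", "updated": date(2024, 1, 15)},
]

def filter_chunks(chunks, version=None, after=None, before=None):
    """Keep only chunks matching the requested version / date range, then run
    vector similarity on the survivors (or push the same predicate down into
    your vector store's metadata filter)."""
    out = []
    for c in chunks:
        if version and c["version"] != version:
            continue
        if after and c["updated"] < after:
            continue
        if before and c["updated"] > before:
            continue
        out.append(c)
    return out

v6_only = filter_chunks(chunks, version="v6")
recent = filter_chunks(chunks, after=date(2025, 1, 1))
```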
This is a great question
Embedding based RAG will always just be OK at best. It is useful for little parts of a chain or tech demos, but in real life use it will always falter.
Not necessarily? It's been the basis of one of the major ways people would query their data since 2023 on a product I worked on: https://www.honeycomb.io/blog/introducing-query-assistant
The difference is this feature explicitly isn't designed to do a whole lot, which is still the best way to build most LLM-based products: sandwich the LLM between non-LLM stuff.
Super useful for grounding which is often the only way to robustly protect against hallucinations.
RAG will be pronounced dead again and again; it has its use cases. We moved to agentic search with RAG as a tool, while other retrieval strategies we added use real-time search in the sources, often skipping ingested and chunked sources. Large context windows allow for putting almost whole documents into one request.
What do you recommend? Query generation?
Most of my ChatGPT queries use RAG (based on the query, ChatGPT will decide if it needs to search the web) to get up-to-date information about the world. In real life it's effective, and it's why every large provider supports it.
Compared with what?
Full text agentic retrieval. Instead of cosine similarity on vectors, parsing metadata through an agentic loop.
To give a real world example, the way Claude Code works versus how Cursor's embedded database works.
How do you do that on 5 million documents?
People are usually not querying across 5 million documents in a single scope.
If you want something as simple as "suggest similar tweets" or something across millions of things then embeddings still work.
But if you want something like "compare the documents across these three projects" then you would use full text metadata extraction. Keywords, summaries, table of contents, etc to determine data about each document and each chunk.
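A rough sketch of that per-document metadata extraction; the model name and field set are assumptions:

```python
import json
from openai import OpenAI

client = OpenAI()

def extract_metadata(document_text: str) -> dict:
    """Ask an LLM for keywords, a summary, and a table of contents as JSON."""
    resp = client.chat.completions.create(
        model="gpt-4.1-mini",
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": (
                "Return JSON with keys 'keywords' (list of strings), "
                "'summary' (2-3 sentences), and 'table_of_contents' "
                f"(list of section titles) for this document:\n\n{document_text[:20000]}"
            ),
        }],
    )
    return json.loads(resp.choices[0].message.content)

# Store the result alongside each document/chunk so an agent can reason over the
# collection ("compare the documents across these three projects") without
# relying on embeddings alone.
```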
I run a few production RAG systems, some as old as end of 2023, and arrived at the same conclusions.
Query expansion and non-naive chunking give the biggest bang for the buck, with chunking being the most resource-intensive task if the input data is chunky (pun intended).
Not here to schlep for AWS but S3 Vectors is hands down the SOTA here. That combined with a Bedrock Knowledge Base to handle Discovery/Rebalance tasks makes for the simplest implementation on the Market.
Once Bedrock KB backed by S3 Vectors is released from Beta it'll eat everybody's lunch.
Shill, not schlep.
I'm correcting you less out of pedantry, and more because I find the correct term to be funny.
I feel like I'm schelpin' through these comments, it's all mishigas
You feel like a schlemiel, perhaps?
more a schlimazel, Charles Schultzie, Lucy's everywhere
Especially now that if you google the word schlep, the first result is now something totally different than what you'd expect.
> S3 Vectors is hands down the SOTA here
SOTA for what? Isn't it just a vector store?
I think he just means it should be assumed to be standard practice and considered baseline at this point.
Assuming that's what he meant, why would it be considered baseline versus anything else? I am genuinely curious because I'd like to know more about issues people face with this or that vector store in general.
They say the chunker is the most important part, but theirs looks rudimentary: https://github.com/agentset-ai/agentset/blob/main/packages/e...
That is, there is nothing here that one could not easily write without a library.
OP here. We've been working on agentset.ai full-time for 2 months. The product now gets you something working quite well out of the box. Better than most people with no experience in RAG (I'd say so with confidence).
Ingestion + Agentic Search are two areas that we're focused on in the short term.
I'm not sure there is a chunker in this repo. The file you linked certainly doesn't seem to perform any chunking, it just defines a data model for chunks.
The only place I see that actually operates on chunks does so by fetching them from Redis, and AFAICT nothing in the repo actually writes to Redis, so I assume the chunker is elsewhere.
https://github.com/agentset-ai/agentset/blob/main/packages/j...
> Reranking: the highest value 5 lines of code you'll add. The chunk ranking shifted a lot. More than you'd expect. Reranking can many times make up for a bad setup if you pass in enough chunks. We found the ideal reranker set-up to be 50 chunk input -> 15 output.
What is re-ranking in the context of RAG? Why not just show the code if it’s only 5 lines?
OP. Reranking is done by a specialized LLM that takes the user query and a list of candidate results, then resets the order based on which ones are more relevant to the query.
Here's sample code: https://docs.cohere.com/reference/rerank
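Reranking really is only a handful of lines. A sketch with the Cohere SDK linked above (the article's actual provider and model may differ):

```python
import cohere

co = cohere.Client("<api-key>")
docs = ["chunk one ...", "chunk two ...", "chunk three ..."]  # e.g. 50 candidates from hybrid search

resp = co.rerank(model="rerank-english-v3.0", query="how do refunds work?",
                 documents=docs, top_n=15)
top_chunks = [docs[r.index] for r in resp.results]  # reordered by relevance score
```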
What is the difference between reranking versus generating text embeddings and comparing with cosine similarity?
My understanding:
If you generate embeddings (of the query, and of the candidate documents) and compare them for similarity, you're essentially asking whether the documents "look like the question."
If you get an LLM to evaluate how well each candidate document follows from the query, you're asking whether the documents "look like an answer to the question."
An ideal candidate chunk/document from a cosine-similarity perspective, would be one that perfectly restates what the user said — whether or not that document actually helps the user. Which can be made to work, if you're e.g. indexing a knowledge base where every KB document is SEO-optimized to embed all pertinent questions a user might ask that "should lead" to that KB document. But for such documents, even matching the user's query text against a "dumb" tf-idf index will surface them. LLMs aren't gaining you any ground here. (As is evident by the fact that webpages SEO-optimized in this way could already be easily surfaced by old-school search engines if you typed such a query into them.)
An ideal candidate chunk/document from a re-ranking LLM's perspective, would be one that an instruction-following LLM (with the whole corpus in its context) would spit out as a response, if it were prompted with the user's query. E.g. if the user asks a question that could be answered with data, a document containing that data would rank highly. And that's exactly the kind of documents we'd like "semantic search" to surface.
I've been thinking about the problem of what to do if the answer to a question is very different to the question itself in embedding space. The KB method sounds interesting and not something I thought about, you sort work on the "document side" I guess. I've also heard of HYDE, the works on the query side, you generate hypothetical answers instead to the user query and look for documents that are similar to the answer, if I've understood it correctly.
The main point didn't get hit on by the responses. Re-ranking is just a mini-LLM (for latency/cost reasons) that does a double check. The embedding model finds the closest M documents in R^N space; the re-ranker picks the top K documents from those M. In theory, if we just used Gemini 2.5 Pro or GPT-5 as the re-ranker, the performance would be even better than whatever small re-ranker people choose to use.
Text similarity finds items that closely match. Reranking may select items that are less semantically "similar" but more relevant to the query.
Because LLMs are a lot smarter than embeddings and basic math. Think of the vector / lexical search as the first approximation.
The reranker is a cross-encoder that sees the docs and the query at the same time. What you normally do is generate embeddings ahead of time, independent of the prompt, calculate cosine similarity with the prompt, select the top-k chunks that best match the prompt, and only then use the reranker to sort them.
Embeddings are a lossy compression, so if you feed the chunks together with the prompt, the results are better. But you can't do this for your whole DB, which is why you filter with cosine similarity at the beginning.
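A compact sketch of that two-stage pipeline with sentence-transformers; the model names are common defaults, not a recommendation:

```python
from sentence_transformers import SentenceTransformer, CrossEncoder, util

bi_encoder = SentenceTransformer("all-MiniLM-L6-v2")                   # stage 1: cheap, precomputable
cross_encoder = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")   # stage 2: sees query + doc together

corpus = ["chunk about refunds", "chunk about shipping", "chunk about the returns policy"]
corpus_emb = bi_encoder.encode(corpus, convert_to_tensor=True)         # done once, ahead of time

query = "how do I return an item?"
query_emb = bi_encoder.encode(query, convert_to_tensor=True)

# Stage 1: cosine similarity narrows the whole DB down to top-k candidates.
hits = util.semantic_search(query_emb, corpus_emb, top_k=2)[0]
candidates = [corpus[h["corpus_id"]] for h in hits]

# Stage 2: the cross-encoder re-scores only those candidates, query and chunk together.
scores = cross_encoder.predict([(query, c) for c in candidates])
reranked = [c for _, c in sorted(zip(scores, candidates), reverse=True)]
print(reranked)
```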
> Chunking Strategy: this takes a lot of effort, you'll probably be spending most of your time on it
Could you share more about chunking strategies you used?
We have been trying to make it so that people don't have to reinvent the wheel over and over again, and have a very straightforward, batteries-included solution that can scale to many millions of documents, combining the best of RAG with traditional search and parametric search: https://docs.mindsdb.com/mindsdb_sql/knowledge_bases/overvie... Would love your feedback.
To somebody thinking of building or paying for such a RAG system, would a workable solution be:
* Upload documents via API into a Google Workspace folder
* Use some sort of Google AI search API on those documents in that folder
…placing documents for different customers into different folders.
Or the Azure equivalent whatever that is.
Great read. But how do people land opportunities to work on exciting projects as the author did? I've been trying to get into legal tech in the LLM space but I've been unsuccessful.
Has anyone here successfully transitioned into the legal space? My gut has always been that legal is the space where LLMs can really be useful, second only to programming.
> LLM: GPT 4.1 -> GPT 5 -> GPT 4.1, covered by Azure credits
What's this roundtrip? Also, the chronology of the LLM (4.1) doesn't match the rest of the stack (text-embedding-3-large), feels weird.
OP. We migrated to GPT-5 when it came out but found that it performs worse than 4.1 when you pass lots of context (up to 100K tokens in some cases). We found that it:
a) has worse instruction following; doesn't follow the system prompt
b) produces very long answers, which resulted in a bad UX
c) has a 125K context window, so extreme cases resulted in an error
Again, these were only observed in RAG when you pass lots of chunks; GPT-5 is probably a better model for other tasks.
love the share, ty
I find it interesting that so many services and tools were investigated except for embedding models. I would have thought that's one of the biggest levers.
I'd go with Qwen3 Embedding, Gemini embeddings, or something from Mixedbread.
They just grabbed the better one (3-large) right off the bat. 6x the cost of 3-small, but it's still tiny.
But the model is like 18 months old, and recently we've seen big leaps on MTEB. Not sure how well those translate to reality, but I'm a little surprised this wasn't worth looking into.
I have a RAG setup that doesn't work on documents but on other data points that we use for generation (the original data is call recordings, but it is heavily processed down to just a few text chunks). Instead of a reranker model we do vector search and then simply ask GPT-5 in an extra call which of the results is the most relevant to the input question. Is there an advantage to actual reranker models rather than using a generic LLM?
OP here. rerankers are finetuned small models, they're cheap and very fast compared to an additional GPT-5 call.
It's an async process in my case (custom deep research like) so speed is not that critical
I think you should do both in parallel, rather than sequentially. Main reason is vector scoring could cut off something that an LLM will score as relevant
Thanks for sharing. TIL about rerankers.
Chunking strategy is a big issue. I found acceptable results by shoving large texts to Gemini Flash and having it summarize and extract chunks instead of whatever text splitter I tried. I use the method published by Anthropic https://www.anthropic.com/engineering/contextual-retrieval i.e. include the full summary along with the chunks for each embedding.
I also created a tool to enable the LLM to do vector search on its own.
I do not use LangChain or Python. I use Clojure + the LLMs' REST APIs.
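A sketch of that contextual-retrieval step; the commenter calls Gemini Flash over REST from Clojure, so this OpenAI-SDK Python version is purely illustrative:

```python
from openai import OpenAI

client = OpenAI()

def contextualize_chunk(full_document: str, chunk: str) -> str:
    """Prepend a short, document-aware context blurb to a chunk before embedding it
    (the Anthropic 'contextual retrieval' recipe)."""
    resp = client.chat.completions.create(
        model="gpt-4.1-mini",   # the commenter uses Gemini Flash; any cheap model works
        messages=[{
            "role": "user",
            "content": (
                "Here is a document:\n" + full_document[:50000] +
                "\n\nHere is a chunk from it:\n" + chunk +
                "\n\nWrite 1-2 sentences situating this chunk within the document, "
                "to improve search retrieval. Reply with only the context."
            ),
        }],
    )
    return resp.choices[0].message.content.strip() + "\n\n" + chunk

# Embed contextualize_chunk(doc, chunk) instead of the raw chunk.
```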
Have you measured your latency, and how sensitive are you to it?
>> Have you measured your latency, and how sensitive are you to it?
Not sensitive to latency at all. My users would rather have well researched answers than poor answers.
Also, I use batch mode APIs for chunking .. it is so much cheaper.
How much of a hit would you take on quality if you moved the processing local? Have you experimented with it? Don't think LlamaIndex has local, sadly.
Quite a decent hit. Local models don't perform very well in long contexts. We're planning to support a local-only offline set-up for people to host w/o additional dependencies
Really solid write-up — it’s rare to see someone break down the real tradeoffs of scaling RAG beyond the toy examples. The bit about reranking and chunking actually saving more than fancy LLM tricks hits home to me.
Great post, gonna be super useful for me :)
Speaking of embedding models, OpenAI's are getting a little long in the tooth at this stage.
Anybody know what is meant by 'injecting relevant metadata'? Where is it injected?
You typically add a lot of metadata alongside each chunk's text, both to be able to filter on it and to include it in the citations. Injecting metadata means you figure out which metadata adds helpful context for the LLM, and when you pass the results to the LLM you pass them in a format like this:
Title: ...
Author: ...
Text: ...
for each chunk, instead of just passing the text
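In code that injection is just string formatting before the prompt is assembled; a minimal sketch:

```python
def render_chunk(chunk: dict) -> str:
    """Format a retrieved chunk with its metadata so the LLM sees both."""
    return (
        f"Title: {chunk.get('title', 'unknown')}\n"
        f"Author: {chunk.get('author', 'unknown')}\n"
        f"Date: {chunk.get('date', 'unknown')}\n"
        f"Text: {chunk['text']}"
    )

retrieved_chunks = [
    {"title": "Refund policy", "author": "Legal", "date": "2024-03-01", "text": "Refunds are ..."},
]
context = "\n\n---\n\n".join(render_chunk(c) for c in retrieved_chunks)
# 'context' then goes into the system or user prompt alongside the question.
```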
> What moved the needle: Query Generation
What does query generation mean in this context, it’s probably not SQL queries right?
I think they mean query expansion: https://en.wikipedia.org/wiki/Query_expansion
It's described in the remainder of the point - they use an LLM to generate additional search queries, either rephrasings of the user's query or bringing additional context from the chat history.
Here's an interesting read on the evolution beyond RAG: https://www.nicolasbustamante.com/p/the-rag-obituary-killed-...
One of the key features in Claude Code is "Agentic Search" aka using (rip)grep/ls to search a codebase without any of the overhead of RAG.
Sounds like even RAG approaches use a similar approach (Query Generation).
The article raises several interesting points, but I find its claim that Claude Code relies primarily on grep for code search unconvincing. It's clear that Claude Code can parse and reason about code structure, employing techniques far beyond simple regex matching. Since this assumption underpins much of the article's argument, it makes me question the overall reliability of its conclusions a bit.
Or am I completely misunderstanding how Claude Code works?
They should've tested other embedding models, there are better ones than openai's (and cheaper)
Which do you suggest?
https://huggingface.co/spaces/mteb/leaderboard
You should use RTEB instead. See here for why: https://huggingface.co/blog/rteb
Here is that leaderboard https://huggingface.co/spaces/mteb/leaderboard?benchmark_nam...
Voyage-3-large seems like SOTA right now
yep
The Qwen3 600M and 4B embedding models are near state of the art and aren't too computationally intensive.
do you still use langchain/llamaindex for other agents/AI use cases?
Nice app bro https://usul.ai/chat/VgnzXjlRdljIDMBVCfqiy
Exactly what kind of processing was done? Your pipeline is a function of the use case, lest you overengineer…