Notebooks as an experiment protocol
The paradigm treats each RAG technique as a runnable experiment: inputs, steps, and metrics live together, with tunable knobs exposed for reproducibility and regression testing.
RAG_Techniques turns RAG from a concept checklist into a reproducible engineering lab. Each technique lives in its own folder with runnable notebooks and explanations, so you can vary chunking, query transforms, hybrid retrieval, reranking, and evaluation, then compare results and catch regressions safely. The core value is not another framework wrapper but explicit, controllable levers: you can run A/B-style comparisons on the same corpus with the same metrics, then standardize what works for your team. In practice it serves as a design ledger for RAG systems, optimized for iteration speed and clarity.
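To make the "same corpus, same metrics" idea concrete, here is a minimal, self-contained sketch of an A/B-style comparison between two chunking variants. The helper names (`chunk_text`, `hit_rate`), the toy corpus, and the metric are illustrative assumptions, not code from the repository:

```python
# Hypothetical A/B-style comparison: two chunking variants evaluated on
# the same corpus with the same metric, so only the knobs differ.

def chunk_text(text: str, size: int, overlap: int = 0) -> list[str]:
    """Split text into fixed-size character chunks with optional overlap."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def hit_rate(chunks: list[str], queries: dict[str, str]) -> float:
    """Fraction of queries whose answer substring survives intact in some chunk."""
    hits = sum(any(ans in c for c in chunks) for ans in queries.values())
    return hits / len(queries)

corpus = "RAG pipelines retrieve context before generation. " * 20
queries = {"q1": "retrieve context", "q2": "before generation"}

# Same corpus, same metric, two chunking settings: a controlled comparison.
for size, overlap in [(64, 0), (64, 16)]:
    chunks = chunk_text(corpus, size, overlap)
    print(f"size={size} overlap={overlap} hit_rate={hit_rate(chunks, queries):.2f}")
```

Holding the corpus and metric fixed while varying one lever at a time is what turns "this chunker feels better" into a number you can pin and regress against.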
| ✕ Traditional Pain Points | ✓ Innovative Solutions |
|---|---|
| RAG projects often devolve into tool-stacking: swapping vector DBs or models without a regression-ready variable breakdown, so results are hard to reproduce. | RAG_Techniques decomposes core levers (chunking, query transforms, retrieval mixes, reranking, evaluation) into foldered runnable notebooks, ideal for A/B comparisons and regressions. |
| Team knowledge lives as scattered notes and snippets, making it difficult to turn learnings into repeatable experiments. | Runnable examples connect intent → implementation → metrics, helping teams standardize RAG experimentation and reuse templates. |
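One lightweight way to connect intent → implementation → metrics is to keep a single record per experiment run. The sketch below is an illustrative convention, not an API from the repository; all field names are assumptions:

```python
# Illustrative experiment record linking intent, implementation knobs,
# and measured metrics, so notebook results stay comparable and reusable.

from dataclasses import dataclass, field

@dataclass
class ExperimentRecord:
    intent: str                                   # the question this run answers
    knobs: dict                                   # e.g. chunk size, retriever, reranker
    metrics: dict = field(default_factory=dict)   # measured outcomes, filled in later

run = ExperimentRecord(
    intent="Does reranking improve answer relevance on our FAQ corpus?",
    knobs={"chunk_size": 512, "retriever": "hybrid", "reranker": "cross-encoder"},
)
run.metrics["relevance@5"] = 0.83  # hypothetical value recorded after evaluation
print(run)
```

A shared record shape like this is what lets scattered notebook learnings accumulate into a team-wide template library.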
```shell
git clone https://github.com/NirDiamant/RAG_Techniques.git && cd RAG_Techniques
python -m venv .venv && . .venv/bin/activate && pip install -U pip jupyterlab
jupyter lab
pip install -U langchain llama-index
export OPENAI_API_KEY='your_key_here'
```

| Core Scene | Target Audience | Solution | Outcome |
|---|---|---|---|
| Enterprise RAG design review and selection | product/architecture leads | run multiple chunking/retrieval/rerank variants on the same corpus and metrics | turn opinions into reproducible evidence and reduce decision churn |
| A baseline library for RAG engineering teams | ML/backend teams | standardize runnable notebook templates and pin regression sets | safer iteration with traceable performance deltas |
| Education and internal enablement | AI enablement owners | use technique folders as labs and walkthroughs | align teams on RAG levers and evaluation standards quickly |
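The "pin regression sets" workflow above can be sketched as a small check that flags any variant whose score drops below its pinned baseline. The names, scores, and tolerance here are illustrative assumptions, not values from the repository:

```python
# Hypothetical regression check: baseline metric values are pinned once,
# and each new run is compared against them with a tolerance.

BASELINE = {"chunk_512_rerank": 0.81, "chunk_256_hybrid": 0.77}  # pinned scores
TOLERANCE = 0.02  # allowed drop before a variant is flagged as a regression

def check_regressions(run: dict[str, float]) -> list[str]:
    """Return variant names whose score dropped more than TOLERANCE."""
    flagged = []
    for name, pinned in BASELINE.items():
        score = run.get(name)
        if score is not None and pinned - score > TOLERANCE:
            flagged.append(name)
    return flagged

new_run = {"chunk_512_rerank": 0.78, "chunk_256_hybrid": 0.78}
print("regressions:", check_regressions(new_run))
```

Wired into CI or a pre-merge notebook run, a check like this turns "performance deltas" from anecdotes into traceable, blockable events.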