What an LLM returns as an answer depends a lot on what you ask it to do! That’s why prompting is super important.
For today’s challenge, we are again stepping in to finish a task that our elves didn’t quite complete.
Elf Julian wanted to provide you with a referenced documentation search pipeline. For that, he already created an indexing pipeline for you, which indexes some of the Haystack 2.0 Beta documentation pages into an InMemoryDocumentStore.
He also started to create a RAG pipeline. However, the PromptBuilder is missing. Without it, the LLM doesn’t know what to do. 😢
Your task is to complete the starter colab with a PromptBuilder that achieves referenced documentation search!
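If you want a feel for what such a template could look like before opening the colab, here is a minimal sketch. It is not the official solution: the `meta["url"]` key assumes the indexing pipeline stored each page’s URL in the document metadata, the `<|system|>`/`<|user|>`/`<|assistant|>` tags assume you are prompting Zephyr 7B Beta, and the import path may differ slightly between the 2.0 Beta and later releases.

```python
# A minimal sketch, not the official solution.
# Assumptions: documents carry their source page in meta["url"], the generator is
# Zephyr 7B Beta (hence the chat tags), and the import path matches the 2.0 Beta.
from haystack.components.builders.prompt_builder import PromptBuilder

template = """
<|system|>
Answer the question using only the documentation below.
After the answer, list the URLs of the documents you used as references.</s>
<|user|>
Documentation:
{% for doc in documents %}
{{ doc.content }}
URL: {{ doc.meta["url"] }}
{% endfor %}

Question: {{ question }}</s>
<|assistant|>
"""

prompt_builder = PromptBuilder(template=template)
```

The key idea is that the references come from the prompt itself: because each document’s URL is rendered next to its content, the model can quote those URLs back in its answer.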
🧡 Some Hints:
- The starter colab gives you the option of using the HuggingFaceLocalGenerator with Zephyr 7B Beta. If you choose to do that, this article might be quite useful! (There’s also a rough end-to-end sketch right after these hints.)
- Haystack 2.0 will be using Jinja2 for prompt templating!
- The challenges from previous days also have lots of code that will be useful for this challenge!
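To tie the hints together, below is a hedged sketch of what the finished pipeline could look like end to end. The component names, the BM25 retriever, the `meta["url"]` field, and the generator arguments (model id, generation settings) are assumptions, and import paths or argument names may differ between the 2.0 Beta and later releases, so treat it as a starting point rather than Julian’s exact setup.

```python
# A minimal sketch of the overall RAG pipeline, not the official solution.
# Assumptions: the document store is filled by Julian's indexing pipeline, documents
# carry a meta["url"] field, and import paths / the generator's model argument may
# differ slightly between the 2.0 Beta and later Haystack releases.
from haystack import Pipeline
from haystack.components.builders.prompt_builder import PromptBuilder
from haystack.components.generators import HuggingFaceLocalGenerator
from haystack.components.retrievers import InMemoryBM25Retriever
from haystack.document_stores.in_memory import InMemoryDocumentStore

document_store = InMemoryDocumentStore()  # in the challenge this comes pre-filled

template = (
    "Answer the question using only the documentation below and "
    "list the URL of every document you used.\n"
    "{% for doc in documents %}{{ doc.content }}\nURL: {{ doc.meta['url'] }}\n{% endfor %}\n"
    "Question: {{ question }}"
)

rag = Pipeline()
rag.add_component("retriever", InMemoryBM25Retriever(document_store=document_store))
rag.add_component("prompt_builder", PromptBuilder(template=template))
rag.add_component("llm", HuggingFaceLocalGenerator(
    model="HuggingFaceH4/zephyr-7b-beta",        # assumed model id
    generation_kwargs={"max_new_tokens": 350},   # assumed generation settings
))

# Retrieved documents fill the template's loop; the rendered prompt feeds the generator.
rag.connect("retriever.documents", "prompt_builder.documents")
rag.connect("prompt_builder.prompt", "llm.prompt")

question = "How do I create a custom component in Haystack 2.0?"
result = rag.run({
    "retriever": {"query": question},
    "prompt_builder": {"question": question},
})
print(result["llm"]["replies"][0])
```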
🩵 Here is the Starter Colab
📚 Useful Docs
PromptBuilder: Used to create templates for prompts you’d like to send to LLMs
HuggingFaceLocalGenerator: Used to generate responses using local open-source models from Hugging Face
GPTGenerator: Used to generate responses using OpenAI’s GPT models
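If you’d rather not run Zephyr locally, the generator is the only piece you need to swap. Here is a small, hedged example; it assumes your OpenAI key is exported as OPENAI_API_KEY, and note that releases after the 2.0 Beta rename the class to OpenAIGenerator.

```python
# A minimal swap, assuming the OPENAI_API_KEY environment variable is set.
# GPTGenerator is the 2.0 Beta name; later releases rename it to OpenAIGenerator.
from haystack.components.generators import GPTGenerator

llm = GPTGenerator()  # uses the default OpenAI chat model
# ...then add `llm` to the pipeline in place of the HuggingFaceLocalGenerator above.
```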