Get Started
Have a look at this page to learn how to quickly get up and running with Haystack. It contains instructions for installing, building a basic pipeline, preparing your files, and running a search.
Quick Installation via Pip
To install Haystack via Pip, run:
pip install farm-haystack[inference]
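If your shell treats square brackets specially (zsh does, for example), quote the package spec:
pip install 'farm-haystack[inference]'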
Try Haystack
Let's build your first Retrieval Augmented Generation (RAG) Pipeline and see how Haystack answers questions.
First, install the minimal form of Haystack:
pip install farm-haystack
The following code will load your data to the DocumentStore, build a RAG pipeline, and ask a question based on the data:
from haystack.document_stores import InMemoryDocumentStore
from haystack.utils import build_pipeline, add_example_data, print_answers
# We are model agnostic :) Here, you can choose from: "anthropic", "cohere", "huggingface", and "openai".
provider = "openai"
API_KEY = "sk-..." # ADD YOUR KEY HERE
# We support many different databases. Here we load a simple and lightweight in-memory database.
document_store = InMemoryDocumentStore(use_bm25=True)
# Download and add Game of Thrones TXT articles to Haystack DocumentStore.
# You can also provide a folder with your local documents.
add_example_data(document_store, "data/GoT_getting_started")
# Build a pipeline with a Retriever to get relevant documents to the query and a PromptNode interacting with LLMs using a custom prompt.
pipeline = build_pipeline(provider, API_KEY, document_store)
# Ask a question on the data you just added.
result = pipeline.run(query="Who is the father of Arya Stark?")
# For details, like which documents were used to generate the answer, look into the <result> object
print_answers(result, details="medium")
The output of the pipeline will reference the documents used to generate the answer:
'Query: Who is the father of Arya Stark?'
'Answers:'
[{'answer': 'The father of Arya Stark is Lord Eddard Stark of '
'Winterfell. [Document 1, Document 4, Document 5]'}]
Congratulations, you have just tried your first Haystack pipeline!
Installation
For a complete guide on how to install Haystack, see Installation.
The Building Blocks of Haystack
Here’s a sample of Haystack code showing a question answering system using a Retriever and a Reader. For a working code example, check out our starter tutorial.
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import BM25Retriever, FARMReader
from haystack.pipelines import ExtractiveQAPipeline
from haystack.utils import convert_files_to_dicts, clean_wiki_text, print_answers

# DocumentStore: holds all your data
document_store = InMemoryDocumentStore(use_bm25=True)
# Clean & load your documents into the DocumentStore
doc_dir = "data/my_documents"  # replace with the folder that holds your own text files
dicts = convert_files_to_dicts(doc_dir, clean_func=clean_wiki_text)
document_store.write_documents(dicts)
# Retriever: a fast and simple algorithm to identify the most promising candidate documents
retriever = BM25Retriever(document_store)
# Reader: Powerful but slower neural network trained for QA
model_name = "deepset/roberta-base-squad2"
reader = FARMReader(model_name)
# Pipeline: Combines all the components
pipe = ExtractiveQAPipeline(reader, retriever)
# Voilà! Ask a question!
question = "Who is the father of Sansa Stark?"
prediction = pipe.run(query=question)
print_answers(prediction)
Loading Documents into the DocumentStore
In Haystack, DocumentStores expect Documents in a dictionary format. They are loaded as follows:
document_store = InMemoryDocumentStore()
dicts = [
    {
        'content': DOCUMENT_TEXT_HERE,
        'meta': {'name': DOCUMENT_NAME, ...}
    }, ...
]
document_store.write_documents(dicts)
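For example, a minimal filled-in version with a single Document might look like this (the text and file name are just illustrative placeholders):
from haystack.document_stores import InMemoryDocumentStore

document_store = InMemoryDocumentStore()
dicts = [
    {
        'content': "Arya Stark is the third child of Lord Eddard Stark of Winterfell.",
        'meta': {'name': "arya_stark.txt"}
    }
]
document_store.write_documents(dicts)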
When we talk about Documents in Haystack, we are referring specifically to the individual blocks of text held in the DocumentStore. You might want to use all the text in one file as a Document or split it into multiple Documents. This splitting can have a significant impact on speed and performance.
Tip
If Haystack runs very slowly, you might want to try splitting your text into smaller Documents. If you want to improve performance, you might want to try concatenating text to make larger Documents. See Optimization for more details.
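If your files are large, you can let Haystack do the splitting for you before indexing. Here is a minimal sketch using the PreProcessor node; the split settings are illustrative values you would tune for your own data:
from haystack.nodes import PreProcessor

preprocessor = PreProcessor(
    clean_empty_lines=True,
    clean_whitespace=True,
    split_by="word",                       # split on word count; "sentence" and "passage" also work
    split_length=200,                      # rough target size of each resulting Document
    split_respect_sentence_boundary=True,  # avoid cutting sentences in half
)
docs = preprocessor.process(dicts)  # `dicts` is the list of document dictionaries shown above
document_store.write_documents(docs)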
Running Search Queries
You can create many different flavors of search with Haystack. To give just one example of what can be achieved, let's look more closely at an open-domain question answering (ODQA) Pipeline.
Querying in an ODQA system involves searching for an answer to a given question within the entire document store. This process will:
- Make the Retriever filter for a small set of relevant candidate documents.
- Get the Reader to process this set of candidate documents.
- Return potential answers to the given question.
Usually, there are tight time constraints on querying, so it needs to be a lightweight operation. When documents are loaded, Haystack precomputes any results that might be useful at query time.
In Haystack, querying is performed with a Pipeline object that connects the Reader to the Retriever.
# Pipeline: Combines all the components
pipe = ExtractiveQAPipeline(reader, retriever)
# Voilà! Ask a question!
question = "Who is the father of Sansa Stark?"
prediction = pipe.run(query=question)
print_answers(prediction)
When the query is complete, you can expect to see results that look something like this:
[
{ 'answer': 'Eddard',
'context': 's Nymeria after a legendary warrior queen. She travels '
"with her father, Eddard, to King's Landing when he is made "
'Hand of the King. Before she leaves,'
}, ...
]
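At query time, you can also pass parameters to the individual nodes, for example to control how many documents the Retriever fetches and how many answers the Reader returns. Here is a minimal sketch (the top_k values are illustrative):
prediction = pipe.run(
    query="Who is the father of Sansa Stark?",
    params={"Retriever": {"top_k": 10}, "Reader": {"top_k": 5}}
)
print_answers(prediction, details="minimum")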
Custom Search Pipelines
Haystack provides many different building blocks for you to mix and match. They include:
- Readers
- Retrievers (sparse and dense)
- DocumentStores
- Summarizers
- Generators
- Translators
These can all be combined in the configuration that you want. Look at our Pipelines page to see what's possible!
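As a rough sketch of what a custom pipeline can look like, you can also wire nodes together yourself with the Pipeline class instead of using a ready-made pipeline. This reuses the retriever and reader from above; the node names are arbitrary labels:
from haystack import Pipeline

pipe = Pipeline()
pipe.add_node(component=retriever, name="Retriever", inputs=["Query"])
pipe.add_node(component=reader, name="Reader", inputs=["Retriever"])
prediction = pipe.run(query="Who is the father of Sansa Stark?")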