
HuggingFaceLocalChatGenerator

This component provides a chat completion interface using a Hugging Face model that runs locally.

Name: HuggingFaceLocalChatGenerator
Folder Path: /generators/chat
Most common Position in a Pipeline: After a DynamicChatPromptBuilder
Mandatory Input variables: "messages": a list of ChatMessage objects representing the chat
Output variables: "replies": a list of ChatMessage objects containing all the replies generated by the LLM

Overview

Keep in mind that running LLMs locally may require a powerful machine. How powerful depends strongly on the model you select and its parameter count.

πŸ“˜

This component is designed for chat completion, not for text generation. If you want to use Hugging Face LLMs for text generation, use HuggingFaceLocalGenerator instead.

To authorize downloads of remote files from the Hugging Face Hub, this component uses an HF_API_TOKEN environment variable by default. Otherwise, you can pass a Hugging Face API token at initialization with token:

from haystack.utils import Secret

local_generator = HuggingFaceLocalChatGenerator(token=Secret.from_token("<your-api-key>"))
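
Alternatively, here is a minimal sketch of reading the token from the environment explicitly (it assumes you have exported HF_API_TOKEN beforehand, which is the variable the component checks by default):

from haystack.utils import Secret
from haystack.components.generators.chat import HuggingFaceLocalChatGenerator

# Resolves the token from the HF_API_TOKEN environment variable at runtime
local_generator = HuggingFaceLocalChatGenerator(token=Secret.from_env_var("HF_API_TOKEN"))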

Usage

On its own

from haystack.components.generators.chat import HuggingFaceLocalChatGenerator
from haystack.dataclasses import ChatMessage

generator = HuggingFaceLocalChatGenerator(model="HuggingFaceH4/zephyr-7b-beta")
# Load the model into memory before the first call to run()
generator.warm_up()
messages = [ChatMessage.from_user("What's Natural Language Processing? Be brief.")]
print(generator.run(messages))
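
You can also control generation through the generation_kwargs init parameter, which accepts the usual Hugging Face generation parameters. A short sketch (the values here are illustrative, not recommendations):

generator = HuggingFaceLocalChatGenerator(
    model="HuggingFaceH4/zephyr-7b-beta",
    # Standard Hugging Face generation parameters passed to the underlying pipeline
    generation_kwargs={"max_new_tokens": 350, "temperature": 0.7},
)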

In a Pipeline

from haystack import Pipeline
from haystack.components.builders import DynamicChatPromptBuilder
from haystack.components.generators.chat import HuggingFaceLocalChatGenerator
from haystack.dataclasses import ChatMessage
from haystack.utils import Secret

prompt_builder = DynamicChatPromptBuilder()
llm = HuggingFaceLocalChatGenerator(model="HuggingFaceH4/zephyr-7b-beta", token=Secret.from_token("<your-api-key>"))

pipe = Pipeline()
pipe.add_component("prompt_builder", prompt_builder)
pipe.add_component("llm", llm)
# The builder's rendered prompt feeds the generator's messages input
pipe.connect("prompt_builder.prompt", "llm.messages")

location = "Berlin"
messages = [ChatMessage.from_system("Always respond in German even if some input data is in other languages."),
            ChatMessage.from_user("Tell me about {{location}}")]
result = pipe.run(data={"prompt_builder": {"template_variables": {"location": location}, "prompt_source": messages}})
print(result["llm"]["replies"])
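
To watch tokens arrive as the model generates them, you can set the streaming_callback init parameter. A minimal sketch using Haystack's built-in print_streaming_chunk helper:

from haystack.components.generators.chat import HuggingFaceLocalChatGenerator
from haystack.components.generators.utils import print_streaming_chunk
from haystack.dataclasses import ChatMessage

generator = HuggingFaceLocalChatGenerator(
    model="HuggingFaceH4/zephyr-7b-beta",
    streaming_callback=print_streaming_chunk,  # prints each chunk as it is generated
)
generator.warm_up()
generator.run([ChatMessage.from_user("What's Natural Language Processing? Be brief.")])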
