OpenAIChatGenerator
OpenAIChatGenerator enables chat completion using OpenAI's large language models (LLMs).
Name | OpenAIChatGenerator |
Folder path | /generators/chat/ |
Most common position in a pipeline | After a ChatPromptBuilder |
Mandatory input variables | "messages": A list of ChatMessage objects representing the chat |
Output variables | "replies": A list of alternative replies of the LLM to the input chat |
Overview
OpenAIChatGenerator supports OpenAI models starting from gpt-3.5-turbo and later (gpt-4, gpt-4-turbo, and so on).
OpenAIChatGenerator needs an OpenAI API key to work. It uses the OPENAI_API_KEY environment variable by default. Otherwise, you can pass an API key at initialization with api_key:
generator = OpenAIChatGenerator(model="gpt-3.5-turbo")
Then, the component needs a list of ChatMessage objects to operate. ChatMessage is a data class that contains a message, a role (who generated the message, such as user, assistant, system, function), and optional metadata. See the usage section for an example.
You can pass any chat completion parameters valid for the openai.ChatCompletion.create method directly to OpenAIChatGenerator using the generation_kwargs parameter, both at initialization and to the run() method. For more details on the parameters supported by the OpenAI API, refer to the OpenAI documentation.
OpenAIChatGenerator supports custom deployments of your OpenAI models through the api_base_url init parameter.
Streaming
OpenAIChatGenerator supports streaming the tokens from the LLM directly in output. To do so, pass a function to the streaming_callback init parameter. Note that streaming the tokens is only compatible with generating a single response, so n must be set to 1 for streaming to work.
This component is designed for chat completion, so it expects a list of messages, not a single string. If you want to use OpenAI LLMs for text generation (such as translation or summarization tasks) or don't want to use the ChatMessage object, use OpenAIGenerator instead.
Usage
On its own
Basic usage:
from haystack.dataclasses import ChatMessage
from haystack.components.generators.chat import OpenAIChatGenerator
client = OpenAIChatGenerator()
response = client.run(
[ChatMessage.from_user("What's Natural Language Processing? Be brief.")]
)
print(response)
>> {'replies': [ChatMessage(content='Natural Language Processing (NLP) is a
>> subfield of artificial intelligence (AI) that focuses on the interaction
>> between computers and humans through natural language. It involves enabling
>> computers to understand, interpret, and generate human language, enabling
>> various applications such as translation, sentiment analysis, chatbots, and
>> voice assistants.', role=<ChatRole.ASSISTANT: 'assistant'>, name=None,
>> metadata={'model': 'gpt-3.5-turbo-0613', 'index': 0, 'finish_reason':
>> 'stop', 'usage': {'prompt_tokens': 16, 'completion_tokens': 61,
>> 'total_tokens': 77}})]}
With streaming:
from haystack.dataclasses import ChatMessage
from haystack.components.generators.chat import OpenAIChatGenerator
client = OpenAIChatGenerator(streaming_callback=lambda chunk: print(chunk.content, end="", flush=True))
response = client.run(
[ChatMessage.from_user("What's Natural Language Processing? Be brief.")]
)
print(response)
>> Natural Language Processing (NLP) is a
>> subfield of artificial intelligence (AI) that focuses on the interaction
>> between computers and humans through natural language. It involves enabling
>> computers to understand, interpret, and generate human language, enabling
>> various applications such as translation, sentiment analysis, chatbots, and
>> voice assistants.
>> {'replies': [ChatMessage(content='Natural Language Processing (NLP) is a
>> subfield of artificial intelligence (AI) that focuses on the interaction
>> between computers and humans through natural language. It involves enabling
>> computers to understand, interpret, and generate human language, enabling
>> various applications such as translation, sentiment analysis, chatbots, and
>> voice assistants.', role=<ChatRole.ASSISTANT: 'assistant'>, name=None,
>> metadata={'model': 'gpt-3.5-turbo-0613', 'index': 0, 'finish_reason':
>> 'stop', 'usage': {'prompt_tokens': 16, 'completion_tokens': 61,
>> 'total_tokens': 77}})]}
In a Pipeline
from haystack.components.builders import ChatPromptBuilder
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.dataclasses import ChatMessage
from haystack import Pipeline
from haystack.utils import Secret
# no parameter init, we don't use any runtime template variables
prompt_builder = ChatPromptBuilder()
llm = OpenAIChatGenerator(api_key=Secret.from_env_var("OPENAI_API_KEY"), model="gpt-3.5-turbo")
pipe = Pipeline()
pipe.add_component("prompt_builder", prompt_builder)
pipe.add_component("llm", llm)
pipe.connect("prompt_builder.prompt", "llm.messages")
location = "Berlin"
messages = [ChatMessage.from_system("Always respond in German even if some input data is in other languages."),
ChatMessage.from_user("Tell me about {{location}}")]
pipe.run(data={"prompt_builder": {"template_variables":{"location": location}, "template": messages}})
>> {'llm': {'replies': [ChatMessage(content='Berlin ist die Hauptstadt Deutschlands und die größte Stadt des Landes.
>> Es ist eine lebhafte Metropole, die für ihre Geschichte, Kultur und einzigartigen Sehenswürdigkeiten bekannt ist.
>> Berlin bietet eine vielfältige Kulturszene, beeindruckende architektonische Meisterwerke wie den Berliner Dom
>> und das Brandenburger Tor, sowie weltberühmte Museen wie das Pergamonmuseum. Die Stadt hat auch eine pulsierende
>> Clubszene und ist für ihr aufregendes Nachtleben berühmt. Berlin ist ein Schmelztiegel verschiedener Kulturen und
>> zieht jedes Jahr Millionen von Touristen an.', role=<ChatRole.ASSISTANT: 'assistant'>, name=None,
>> metadata={'model': 'gpt-3.5-turbo-0613', 'index': 0, 'finish_reason': 'stop', 'usage': {'prompt_tokens': 32,
>> 'completion_tokens': 153, 'total_tokens': 185}})]}}
See parameter details in our API reference.