
AzureOpenAIGenerator

This component enables text generation using OpenAI's large language models (LLMs) through Azure services.

Most common position in a pipeline: After a PromptBuilder

Mandatory init variables: "api_key": The Azure OpenAI API key. Can be set with the AZURE_OPENAI_API_KEY env var.
"azure_ad_token": The Microsoft Entra ID token. Can be set with the AZURE_OPENAI_AD_TOKEN env var.

Mandatory run variables: "prompt": A string containing the prompt for the LLM

Output variables: "replies": A list of strings with all the replies generated by the LLM
"meta": A list of dictionaries with the metadata associated with each reply, such as token count, finish reason, and so on

API reference: Generators

GitHub link: https://github.com/deepset-ai/haystack/blob/main/haystack/components/generators/azure.py

Overview

AzureOpenAIGenerator supports OpenAI models deployed through Azure services. To see the list of supported models, head over to the Azure documentation. The default model used with this component is gpt-4o-mini.

To work with Azure components, you will need an Azure OpenAI API key, as well as an Azure OpenAI endpoint. You can learn more about both in the Azure documentation.

The component uses AZURE_OPENAI_API_KEY and AZURE_OPENAI_AD_TOKEN environment variables by default. Otherwise, you can pass api_key and azure_ad_token at initialization:

from haystack.components.generators import AzureOpenAIGenerator
from haystack.utils import Secret

client = AzureOpenAIGenerator(
    azure_endpoint="<your Azure endpoint, e.g. https://your-company.openai.azure.com/>",
    api_key=Secret.from_token("<your-api-key>"),
    azure_deployment="<your deployment name, e.g. gpt-4o-mini>",
)

📘 We recommend using environment variables instead of initialization parameters.

To operate, the component needs a prompt. In addition, you can pass any text generation parameters valid for the openai.ChatCompletion.create method directly to this component using the generation_kwargs parameter, both at initialization and in the run() method. For more details on the supported parameters, refer to the Azure documentation.
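For example, a minimal sketch (the specific parameters and values here, such as temperature and max_tokens, are illustrative):

from haystack.components.generators import AzureOpenAIGenerator

# Default generation parameters set at initialization
client = AzureOpenAIGenerator(generation_kwargs={"temperature": 0.2})

# Parameters passed to run() are merged with, and override, the defaults
response = client.run(
    "What's Natural Language Processing? Be brief.",
    generation_kwargs={"max_tokens": 100},
)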

You can also specify a model for this component through the azure_deployment init parameter.

Streaming

AzureOpenAIGenerator supports streaming the tokens from the LLM directly in output. To do so, pass a function to the streaming_callback init parameter. Note that streaming the tokens is only compatible with generating a single response, so n must be set to 1 for streaming to work.
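For example, a minimal sketch with a named callback (print_chunk is a hypothetical name; any callable accepting a streaming chunk works):

from haystack.components.generators import AzureOpenAIGenerator

def print_chunk(chunk):
    # Each chunk carries the newly generated tokens in its content attribute
    print(chunk.content, end="", flush=True)

client = AzureOpenAIGenerator(
    streaming_callback=print_chunk,
    generation_kwargs={"n": 1},  # streaming works only with a single response
)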

📘 This component is designed for text generation, not for chat. If you want to use LLMs for chat, use AzureOpenAIChatGenerator instead.

Usage

On its own

Basic usage:

from haystack.components.generators import AzureOpenAIGenerator
client = AzureOpenAIGenerator()
response = client.run("What's Natural Language Processing? Be brief.")
print(response)

>> {'replies': ['Natural Language Processing (NLP) is a branch of artificial intelligence that focuses on
>> the interaction between computers and human language. It involves enabling computers to understand, interpret,
>> and respond to natural human language in a way that is both meaningful and useful.'], 'meta': [{'model':
>> 'gpt-4o-mini', 'index': 0, 'finish_reason': 'stop', 'usage': {'prompt_tokens': 16,
>> 'completion_tokens': 49, 'total_tokens': 65}}]}
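
Since run() returns a dictionary, you can pick out the generated text directly:

print(response["replies"][0])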

With streaming:

from haystack.components.generators import AzureOpenAIGenerator

client = AzureOpenAIGenerator(streaming_callback=lambda chunk: print(chunk.content, end="", flush=True))
response = client.run("What's Natural Language Processing? Be brief.")
print(response)

>>> Natural Language Processing (NLP) is a branch of artificial
>>> intelligence that focuses on the interaction between computers and human
>>> language. It involves enabling computers to understand, interpret, and respond
>>> to natural human language in a way that is both meaningful and useful.
>>> {'replies': ['Natural Language Processing (NLP) is a branch of artificial
>>> intelligence that focuses on the interaction between computers and human
>>> language. It involves enabling computers to understand, interpret, and respond
>>> to natural human language in a way that is both meaningful and useful.'],
>>> 'meta': [{'model': 'gpt-4o-mini', 'index': 0, 'finish_reason':
>>> 'stop', 'usage': {'prompt_tokens': 16, 'completion_tokens': 49,
>>> 'total_tokens': 65}}]}

In a Pipeline

from haystack import Pipeline
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.components.builders.prompt_builder import PromptBuilder
from haystack.components.generators import AzureOpenAIGenerator
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack import Document

docstore = InMemoryDocumentStore()
docstore.write_documents([Document(content="Rome is the capital of Italy"), Document(content="Paris is the capital of France")])

query = "What is the capital of France?"

template = """
Given the following information, answer the question.

Context: 
{% for document in documents %}
    {{ document.content }}
{% endfor %}

Question: {{ query }}
"""
pipe = Pipeline()

pipe.add_component("retriever", InMemoryBM25Retriever(document_store=docstore))
pipe.add_component("prompt_builder", PromptBuilder(template=template))
pipe.add_component("llm", AzureOpenAIGenerator())
pipe.connect("retriever", "prompt_builder.documents")
pipe.connect("prompt_builder", "llm")

res = pipe.run({
    "prompt_builder": {
        "query": query
    },
    "retriever": {
        "query": query
    }
})

print(res)
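
The pipeline output is keyed by component name, so the generated answer is available under the llm key, matching the replies structure shown above:

print(res["llm"]["replies"][0])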

Related Links

See the parameter details in our Generators API reference.