API Reference

OpenRouter

OpenRouter integration for Haystack

Module haystack_integrations.components.generators.openrouter.chat.chat_generator

OpenRouterChatGenerator

Enables text generation using OpenRouter generative models.
For supported models, see OpenRouter docs.

Users can pass any text generation parameters valid for the OpenRouter chat completion API
directly to this component through the generation_kwargs parameter, either in __init__ or in the run method.
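For example, a minimal sketch of both ways to pass generation parameters (the parameter values below are
illustrative, and OPENROUTER_API_KEY is assumed to be set in the environment):

from haystack_integrations.components.generators.openrouter import OpenRouterChatGenerator
from haystack.dataclasses import ChatMessage

# Defaults applied to every call, set at construction time
client = OpenRouterChatGenerator(generation_kwargs={"max_tokens": 256, "temperature": 0.7})

# A per-call override passed to run(); it takes precedence over the constructor defaults
response = client.run(
    [ChatMessage.from_user("Hello!")],
    generation_kwargs={"temperature": 0.0},
)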

Key Features and Compatibility:

  • Primary Compatibility: Designed to work seamlessly with the OpenRouter chat completion endpoint.
  • Streaming Support: Supports streaming responses from the OpenRouter chat completion endpoint.
  • Customizability: Supports all parameters supported by the OpenRouter chat completion endpoint.

This component uses the ChatMessage format for structuring both input and output,
ensuring coherent and contextually relevant responses in chat-based text generation scenarios.
Details on the ChatMessage format can be found in the
Haystack docs.

For more details on the parameters supported by the OpenRouter API, refer to the
OpenRouter API Docs.

Usage example:

from haystack_integrations.components.generators.openrouter import OpenRouterChatGenerator
from haystack.dataclasses import ChatMessage

messages = [ChatMessage.from_user("What's Natural Language Processing?")]

client = OpenRouterChatGenerator()
response = client.run(messages)
print(response)

>>{'replies': [ChatMessage(_content='Natural Language Processing (NLP) is a branch of artificial intelligence
>>that focuses on enabling computers to understand, interpret, and generate human language in a way that is
>>meaningful and useful.', _role=<ChatRole.ASSISTANT: 'assistant'>, _name=None,
>>_meta={'model': 'openai/gpt-4o-mini', 'index': 0, 'finish_reason': 'stop',
>>'usage': {'prompt_tokens': 15, 'completion_tokens': 36, 'total_tokens': 51}})]}
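Streaming can be enabled by supplying a streaming_callback. A minimal sketch using Haystack's
print_streaming_chunk utility (again assuming OPENROUTER_API_KEY is set in the environment):

from haystack.components.generators.utils import print_streaming_chunk
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.openrouter import OpenRouterChatGenerator

client = OpenRouterChatGenerator(streaming_callback=print_streaming_chunk)
# Chunks are printed as they arrive; the complete reply is still returned when the stream ends
response = client.run([ChatMessage.from_user("Summarize NLP in one sentence.")])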

OpenRouterChatGenerator.__init__

def __init__(*,
             api_key: Secret = Secret.from_env_var("OPENROUTER_API_KEY"),
             model: str = "openai/gpt-4o-mini",
             streaming_callback: Optional[StreamingCallbackT] = None,
             api_base_url: Optional[str] = "https://openrouter.ai/api/v1",
             generation_kwargs: Optional[Dict[str, Any]] = None,
             tools: Optional[ToolsType] = None,
             timeout: Optional[float] = None,
             extra_headers: Optional[Dict[str, Any]] = None,
             max_retries: Optional[int] = None,
             http_client_kwargs: Optional[Dict[str, Any]] = None)

Creates an instance of OpenRouterChatGenerator. Unless specified otherwise, the default model is
openai/gpt-4o-mini.

Arguments:

  • api_key: The OpenRouter API key.
  • model: The name of the OpenRouter chat completion model to use.
  • streaming_callback: A callback function that is called when a new token is received from the stream.
    The callback function accepts StreamingChunk as an argument.
  • api_base_url: The OpenRouter API base URL.
    For more details, see OpenRouter docs.
  • generation_kwargs: Other parameters to use for the model. These parameters are all sent directly to
    the OpenRouter endpoint. See OpenRouter API docs for more details.
    Some of the supported parameters:
      • max_tokens: The maximum number of tokens the output text can have.
      • temperature: The sampling temperature to use. Higher values mean the model will take more risks.
        Try 0.9 for more creative applications and 0 (argmax sampling) for ones with a well-defined answer.
      • top_p: An alternative to sampling with temperature, called nucleus sampling, where the model
        considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens
        comprising the top 10% probability mass are considered.
      • stream: Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent
        events as they become available, with the stream terminated by a data: [DONE] message.
      • safe_prompt: Whether to inject a safety prompt before all conversations.
      • random_seed: The seed to use for random sampling.
      • response_format: A JSON schema or a Pydantic model that enforces the structure of the model's response.
        If provided, the output will always be validated against this format (unless the model returns a tool call).
        For details, see the OpenAI Structured Outputs documentation; a sketch is shown after this list.
        Notes:
          • This parameter accepts Pydantic models and JSON schemas for the latest models, starting from GPT-4o.
          • For structured outputs with streaming, the response_format must be a JSON schema and not a Pydantic model.
  • tools: A list of tools or a Toolset for which the model can prepare calls. This parameter can accept either a
    list of Tool objects or a Toolset instance.
  • timeout: The timeout for the OpenRouter API call.
  • extra_headers: Additional HTTP headers to include in requests to the OpenRouter API.
    This can be useful for adding a site URL or title for rankings on openrouter.ai.
    For more details, see OpenRouter docs.
  • max_retries: Maximum number of retries to contact the OpenRouter API after an internal error.
    If not set, it defaults to the OPENAI_MAX_RETRIES environment variable, or to 5 if that is not set.
  • http_client_kwargs: A dictionary of keyword arguments to configure a custom httpx.Client or httpx.AsyncClient.
    For more information, see the HTTPX documentation.
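As referenced in the response_format entry above, here is a hedged sketch of structured output using a
Pydantic model. The CityInfo schema is hypothetical, OPENROUTER_API_KEY is assumed to be set, and the
chosen model must support structured outputs:

from pydantic import BaseModel
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.openrouter import OpenRouterChatGenerator

class CityInfo(BaseModel):
    # Hypothetical schema used only for illustration
    city: str
    country: str

client = OpenRouterChatGenerator(generation_kwargs={"response_format": CityInfo})
response = client.run([ChatMessage.from_user("Tell me about Paris.")])
# The reply text is expected to be JSON conforming to CityInfo (unless the model returns a tool call)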

OpenRouterChatGenerator.to_dict

def to_dict() -> Dict[str, Any]

Serialize this component to a dictionary.

Returns:

The serialized component as a dictionary.
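A minimal sketch of serializing the component; the exact dictionary keys shown in the comments follow
Haystack's standard component serialization and are noted here as an assumption:

client = OpenRouterChatGenerator()
data = client.to_dict()
# data is typically expected to contain a "type" entry identifying the component class
# and an "init_parameters" entry holding the constructor arguments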