
TransformersTextRouter

Use this component to route text input to various output connections based on a model-defined categorization label.

Name: TransformersTextRouter
Folder path: /routers/
Most common position in a pipeline: Flexible
Mandatory input variables: "text": The text to route; it is sent to the output that matches its predicted label
Output variables: One output per class label, each carrying the input text when it is categorized under that label

Overview

TransformersTextRouter routes text input to various output connections based on its categorization label. This is useful for routing queries to different models in a pipeline depending on their categorization.

First, specify the model with the model parameter when initializing the component. The selected model then provides the set of labels used for categorization.

You can additionally provide the labels parameter – a list of strings of possible class labels to classify each sequence into. If not provided, the component fetches the labels from the model configuration file hosted on the HuggingFace Hub using transformers.AutoConfig.from_pretrained.
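
For illustration, here's a minimal sketch of both options, using the same language-detection model as the pipeline example below; the exact contents of the label mapping depend on the model you choose:

from transformers import AutoConfig
from haystack.components.routers import TransformersTextRouter

# Labels the router falls back to: the id2label mapping from the model's
# configuration on the Hugging Face Hub
config = AutoConfig.from_pretrained("papluca/xlm-roberta-base-language-detection")
print(config.id2label)  # class ids mapped to language codes such as "en" and "de"

# Alternatively, pass the class labels explicitly; they should correspond to the
# labels the model can actually predict
router = TransformersTextRouter(
    model="papluca/xlm-roberta-base-language-detection",
    labels=list(config.id2label.values()),
)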

To see the full list of parameters, check out our API reference.

Usage

On its own

TransformersTextRouter isn't very effective on its own: its main strength lies in working within a pipeline, where it routes text to the most appropriate components. You can still run it in isolation to inspect its output, as in the sketch below, and the following section shows a complete pipeline example.
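
Here is a minimal standalone sketch, assuming the same language-detection model as in the pipeline example below; the component returns a dictionary whose only key is the predicted label and whose value is the input text:

from haystack.components.routers import TransformersTextRouter

text_router = TransformersTextRouter(model="papluca/xlm-roberta-base-language-detection")
text_router.warm_up()  # load the underlying model before running the component outside a pipeline

result = text_router.run(text="What is the capital of Germany?")
print(result)  # expected shape: {"en": "What is the capital of Germany?"}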

In a pipeline

Below is an example of a simple pipeline that routes English queries to a Text Generator optimized for English text and German queries to a Text Generator optimized for German text.

from haystack.core.pipeline import Pipeline
from haystack.components.routers import TransformersTextRouter
from haystack.components.builders import PromptBuilder
from haystack.components.generators import HuggingFaceLocalGenerator

p = Pipeline()

# Language detection model whose labels ("en", "de", ...) become the router's outputs
p.add_component(
  instance=TransformersTextRouter(model="papluca/xlm-roberta-base-language-detection"),
  name="text_router"
)
# Prompt builder for English queries
p.add_component(
  instance=PromptBuilder(template="Answer the question: {{query}}\nAnswer:"),
  name="english_prompt_builder"
)
# Prompt builder for German queries
p.add_component(
  instance=PromptBuilder(template="Beantworte die Frage: {{query}}\nAntwort:"),
  name="german_prompt_builder"
)
# Generator optimized for German text
p.add_component(
  instance=HuggingFaceLocalGenerator(model="DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1"),
  name="german_llm"
)
# Generator optimized for English text
p.add_component(
  instance=HuggingFaceLocalGenerator(model="microsoft/Phi-3-mini-4k-instruct"),
  name="english_llm"
)

# Route each query to the prompt builder matching its detected language
p.connect("text_router.en", "english_prompt_builder.query")
p.connect("text_router.de", "german_prompt_builder.query")
p.connect("english_prompt_builder.prompt", "english_llm.prompt")
p.connect("german_prompt_builder.prompt", "german_llm.prompt")

# English Example
print(p.run({"text_router": {"text": "What is the capital of Germany?"}}))

# German Example
print(p.run({"text_router": {"text": "Was ist die Hauptstadt von Deutschland?"}}))
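
Note that only the branch matching the detected language receives input: the first call returns replies from english_llm only, and the second from german_llm only.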

Related Links

See the parameter details in our API reference: