Documentation

WeaveConnector

Learn how to use the Weights & Biases Weave framework to trace and monitor your pipeline components.

Most common position in a pipeline: Anywhere, as it's not connected to other components
Mandatory init variables: "pipeline_name": The name of your pipeline, which will also show up in the Weave dashboard
Output variables: "pipeline_name": The name of the pipeline that just ran
API reference: Weights & Biases
GitHub link: https://github.com/deepset-ai/haystack-core-integrations/tree/main/integrations/weights_bias

Overview

This integration allows you to trace and visualize your pipeline execution in Weights & Biases.

Information captured by the Haystack tracing tool, such as API calls, context data, and prompts, is sent to Weights & Biases, where you can see the complete trace of your pipeline execution.

Prerequisites

You need a Weave account to use this feature. You can sign up for free on the Weights & Biases website.

You will then need to set the WANDB_API_KEY environment variable with your Weights & Biases API key. Once logged in, you can find your API key on your home page.


You will also need to set the HAYSTACK_CONTENT_TRACING_ENABLED environment variable to true.
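As a minimal sketch, both variables can also be set programmatically at the top of your script, before the pipeline runs (the API key value below is a placeholder, not a real key):

```python
import os

# Placeholder value: substitute your real Weights & Biases API key
os.environ["WANDB_API_KEY"] = "your-wandb-api-key"

# Must be set before the pipeline runs so prompts and context data are traced
os.environ["HAYSTACK_CONTENT_TRACING_ENABLED"] = "true"
```

Setting these in your shell profile or a .env file works equally well; the snippet above is just the programmatic equivalent.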

Usage

First, install the weights_biases-haystack package to use this connector:

pip install weights_biases-haystack

Then, add it to your pipeline without any connections, and it will automatically start sending traces to Weights & Biases:

from haystack import Pipeline
from haystack.components.builders import ChatPromptBuilder
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.dataclasses import ChatMessage

from haystack_integrations.components.connectors import WeaveConnector

pipe = Pipeline()
pipe.add_component("prompt_builder", ChatPromptBuilder())
# OpenAIChatGenerator reads the OPENAI_API_KEY environment variable
pipe.add_component("llm", OpenAIChatGenerator(model="gpt-3.5-turbo"))
pipe.connect("prompt_builder.prompt", "llm.messages")

# The connector needs no connections to other components
connector = WeaveConnector(pipeline_name="test_pipeline")
pipe.add_component("weave", connector)

messages = [
    ChatMessage.from_system(
        "Always respond in German even if some input data is in other languages."
    ),
    ChatMessage.from_user("Tell me about {{location}}"),
]

response = pipe.run(
    data={
        "prompt_builder": {
            "template_variables": {"location": "Berlin"},
            "template": messages,
        }
    }
)

You can then see the complete trace for your pipeline at https://wandb.ai/<user_name>/projects under the pipeline name you specified when creating the WeaveConnector.