
Transcribes audio files.

Module whisper_transcriber


class WhisperTranscriber(BaseComponent)

Transcribes audio files using OpenAI's Whisper. This class supports two underlying implementations: a local Whisper model and the hosted OpenAI API.

To use Whisper locally, install it following the instructions on the Whisper GitHub repo and omit the api_key parameter. Note that openai-whisper pins an older tiktoken version than Haystack requires; you can work around this dependency conflict by installing with pip install --no-deps numba llvmlite 'openai-whisper>=20230918'.

To use the API implementation, provide an api_key. You can get one by signing up for an OpenAI account.

For the supported audio formats, languages, and other parameters, see the Whisper API documentation and the official Whisper GitHub repo.


def __init__(api_key: Optional[str] = None,
             model_name_or_path: WhisperModel = "medium",
             device: Optional[Union[str, "torch.device"]] = None,
             api_base: str = "") -> None

Creates a WhisperTranscriber instance.


  • api_key: OpenAI API key. If None, a local installation of Whisper is used.
  • model_name_or_path: Name of the model to use. If using a local installation of Whisper, set this to one of the following values: "tiny", "small", "medium", "large", "large-v2". If using the API, set this value to: "whisper-1" (default).
  • device: Device to use for inference. Only used if you're using a local installation of Whisper. If None, the device is automatically selected.
  • api_base: The OpenAI API base URL.


def transcribe(audio_file: Union[str, BinaryIO],
               language: Optional[str] = None,
               return_segments: bool = False,
               translate: bool = False,
               **kwargs) -> Dict[str, Any]

Transcribe an audio file.


  • audio_file: Path to the audio file or a binary file-like object.
  • language: Language of the audio file. If None, the language is automatically detected.
  • return_segments: If True, returns the transcription for each segment of the audio file. Supported only with a local installation of Whisper.
  • translate: If True, translates the transcription to English.


A dictionary containing the transcription text and metadata such as timings and segments.
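The field names below follow Whisper's own output format; the sample dictionary is illustrative, not real transcriber output:

```python
# Illustrative result shape: "text" holds the full transcript, and (with
# return_segments=True on a local installation) "segments" holds per-chunk
# timings. The values here are made up for demonstration.
result = {
    "text": "Hello world.",
    "language": "en",
    "segments": [
        {"start": 0.0, "end": 1.2, "text": "Hello world."},
    ],
}

transcript = result["text"]
for seg in result.get("segments", []):
    print(f"[{seg['start']:.1f}s-{seg['end']:.1f}s] {seg['text']}")
```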

def run(query: Optional[str] = None,
        file_paths: Optional[List[str]] = None,
        labels: Optional[MultiLabel] = None,
        documents: Optional[List[Document]] = None,
        meta: Optional[dict] = None)

Transcribe audio files.


  • query: Ignored
  • file_paths: List of paths to audio files.
  • labels: Ignored
  • documents: Ignored
  • meta: Ignored


A dictionary containing a list of Document objects, one for each input file.


def run_batch(queries: Optional[Union[str, List[str]]] = None,
              file_paths: Optional[List[str]] = None,
              labels: Optional[Union[MultiLabel, List[MultiLabel]]] = None,
              documents: Optional[Union[List[Document],
                                        List[List[Document]]]] = None,
              meta: Optional[Union[Dict[str, Any], List[Dict[str,
                                                             Any]]]] = None,
              params: Optional[dict] = None,
              debug: Optional[bool] = None)

Transcribe audio files.


  • queries: Ignored
  • file_paths: List of paths to audio files.
  • labels: Ignored
  • documents: Ignored
  • meta: Ignored
  • params: Ignored
  • debug: Ignored