LLM

class pipecat.services.openpipe.llm.OpenPipeLLMService(*, model='gpt-4.1', api_key=None, base_url=None, openpipe_api_key=None, openpipe_base_url='https://app.openpipe.ai/api/v1', tags=None, **kwargs)[source]

Bases: OpenAILLMService

Parameters:
  • model (str) – OpenAI model name to use. Defaults to 'gpt-4.1'.

  • api_key (str | None) – OpenAI API key.

  • base_url (str | None) – Custom base URL for the OpenAI API.

  • openpipe_api_key (str | None) – OpenPipe API key used to report requests to OpenPipe.

  • openpipe_base_url (str) – Base URL for the OpenPipe API. Defaults to 'https://app.openpipe.ai/api/v1'.

  • tags (Dict[str, str] | None) – Optional tags attached to requests logged in OpenPipe.
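A minimal construction sketch, assuming the environment-variable names shown; the tag keys and values are illustrative:

    import os

    from pipecat.services.openpipe.llm import OpenPipeLLMService

    # Service that talks to OpenAI while reporting each request to OpenPipe.
    llm = OpenPipeLLMService(
        model="gpt-4.1",
        api_key=os.environ["OPENAI_API_KEY"],             # OpenAI credentials
        openpipe_api_key=os.environ["OPENPIPE_API_KEY"],  # OpenPipe credentials
        tags={"env": "staging", "bot": "voice-demo"},     # illustrative tag values
    )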

create_client(api_key=None, base_url=None, **kwargs)[source]

Create an AsyncOpenAI client instance configured for OpenPipe request logging.

Parameters:
  • api_key – OpenAI API key.

  • base_url – Custom base URL for the API.

  • organization – OpenAI organization ID.

  • project – OpenAI project ID.

  • default_headers – Additional HTTP headers.

  • **kwargs – Additional client configuration arguments.

Returns:

Configured AsyncOpenAI client instance.
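A sketch of a direct call. The service normally creates its own client internally; the default_headers value here is illustrative and is forwarded through **kwargs:

    # Hypothetical direct call, reusing the `llm` instance from above.
    client = llm.create_client(
        api_key=os.environ["OPENAI_API_KEY"],
        base_url="https://api.openai.com/v1",
        default_headers={"X-Source": "pipecat"},  # illustrative header
    )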

async get_chat_completions(context, messages)[source]

Get streaming chat completions from the OpenAI API.

Parameters:
  • context (OpenAILLMContext) – The LLM context containing tools and configuration.

  • messages (List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) – List of chat completion messages to send.

Returns:

Async stream of chat completion chunks.

Return type:

openpipe.AsyncStream[openai.types.chat.chat_completion_chunk.ChatCompletionChunk]
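A sketch of consuming the stream, assuming an existing OpenAILLMContext named `context` whose get_messages() returns the message list to send:

    # Hypothetical consumer: accumulate streamed text from the chunk stream.
    async def collect_text(llm, context):
        stream = await llm.get_chat_completions(context, context.get_messages())
        parts = []
        async for chunk in stream:
            # Some chunks carry no choices or content, so guard before reading.
            if chunk.choices and chunk.choices[0].delta.content:
                parts.append(chunk.choices[0].delta.content)
        return "".join(parts)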