LLM

OpenAI LLM service implementation with context aggregators.

class pipecat.services.openai.llm.OpenAIContextAggregatorPair(_user, _assistant)[source]

Bases: object

Pair of OpenAI context aggregators for user and assistant messages.

Parameters:
  • _user (OpenAIUserContextAggregator) – User context aggregator for processing user messages.

  • _assistant (OpenAIAssistantContextAggregator) – Assistant context aggregator for processing assistant messages.

user()[source]

Get the user context aggregator.

Returns:

The user context aggregator instance.

Return type:

OpenAIUserContextAggregator

assistant()[source]

Get the assistant context aggregator.

Returns:

The assistant context aggregator instance.

Return type:

OpenAIAssistantContextAggregator
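
A minimal usage sketch of the pair: user() is placed upstream of the LLM and assistant() downstream of the transport output in a pipeline. The llm, context, tts, and transport objects are assumed to be set up elsewhere, and Pipeline is assumed to come from pipecat.pipeline.pipeline.

from pipecat.pipeline.pipeline import Pipeline

# `llm`, `context`, `tts`, and `transport` are assumed to exist already.
context_aggregator = llm.create_context_aggregator(context)

pipeline = Pipeline(
    [
        transport.input(),               # user audio/text in
        context_aggregator.user(),       # OpenAIUserContextAggregator
        llm,                             # OpenAILLMService
        tts,                             # text-to-speech service
        transport.output(),              # assistant audio/text out
        context_aggregator.assistant(),  # OpenAIAssistantContextAggregator
    ]
)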

class pipecat.services.openai.llm.OpenAILLMService(*, model='gpt-4.1', params=None, **kwargs)[source]

Bases: BaseOpenAILLMService

OpenAI LLM service implementation.

Provides a complete OpenAI LLM service with context aggregation support. Builds on BaseOpenAILLMService for core functionality and adds OpenAI-specific context aggregator creation.

Parameters:
  • model (str) – The OpenAI model name to use. Defaults to "gpt-4.1".

  • params (InputParams | None) – Input parameters for model configuration.

  • **kwargs – Additional arguments passed to the parent BaseOpenAILLMService.
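
A construction sketch: api_key is assumed to be one of the keyword arguments forwarded to BaseOpenAILLMService, and InputParams (with a temperature field) is assumed to be the parameter class nested on BaseOpenAILLMService.

import os

from pipecat.services.openai.llm import OpenAILLMService

llm = OpenAILLMService(
    api_key=os.getenv("OPENAI_API_KEY"),  # forwarded to BaseOpenAILLMService via **kwargs (assumed)
    model="gpt-4.1",
    params=OpenAILLMService.InputParams(temperature=0.7),  # optional tuning (assumed field name)
)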

create_context_aggregator(context, *, user_params=LLMUserAggregatorParams(aggregation_timeout=0.5), assistant_params=LLMAssistantAggregatorParams(expect_stripped_words=True))[source]

Create OpenAI-specific context aggregators.

Creates a pair of context aggregators optimized for OpenAI’s message format, including support for function calls, tool usage, and image handling.

Parameters:
  • context (OpenAILLMContext) – The LLM context to create aggregators for.

  • user_params (LLMUserAggregatorParams) – Parameters for user message aggregation.

  • assistant_params (LLMAssistantAggregatorParams) – Parameters for assistant message aggregation.

Returns:

A pair of context aggregators, one for the user and one for the assistant, encapsulated in an OpenAIContextAggregatorPair.

Return type:

OpenAIContextAggregatorPair
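
A sketch overriding the default aggregation parameters. LLMUserAggregatorParams and LLMAssistantAggregatorParams are assumed to be importable from pipecat.processors.aggregators.llm_response, and OpenAILLMContext from pipecat.processors.aggregators.openai_llm_context; the field names match the defaults shown in the signature above.

from pipecat.processors.aggregators.llm_response import (
    LLMAssistantAggregatorParams,
    LLMUserAggregatorParams,
)
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext

messages = [{"role": "system", "content": "You are a helpful assistant."}]
context = OpenAILLMContext(messages)

# Wait a little longer for user transcription fragments before emitting a context frame.
aggregators = llm.create_context_aggregator(
    context,
    user_params=LLMUserAggregatorParams(aggregation_timeout=1.0),
    assistant_params=LLMAssistantAggregatorParams(expect_stripped_words=True),
)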

class pipecat.services.openai.llm.OpenAIUserContextAggregator(context, *, params=None, **kwargs)[source]

Bases: LLMUserContextAggregator

OpenAI-specific user context aggregator.

Handles aggregation of user messages for OpenAI LLM services. Inherits all functionality from the base LLMUserContextAggregator.

Parameters:
  • context (OpenAILLMContext)

  • params (LLMUserAggregatorParams | None)

class pipecat.services.openai.llm.OpenAIAssistantContextAggregator(context, *, params=None, **kwargs)[source]

Bases: LLMAssistantContextAggregator

OpenAI-specific assistant context aggregator.

Handles aggregation of assistant messages for OpenAI LLM services, with specialized support for OpenAI’s function calling format, tool usage tracking, and image message handling.

Parameters:
  • context (OpenAILLMContext)

  • params (LLMAssistantAggregatorParams | None)

async handle_function_call_in_progress(frame)[source]

Handle a function call in progress.

Adds the function call to the context with an IN_PROGRESS status to track ongoing function execution.

Parameters:

frame (FunctionCallInProgressFrame) – Frame containing function call progress information.

async handle_function_call_result(frame)[source]

Handle the result of a function call.

Updates the context with the function call result, replacing any previous IN_PROGRESS status.

Parameters:

frame (FunctionCallResultFrame) – Frame containing the function call result.

async handle_function_call_cancel(frame)[source]

Handle a cancelled function call.

Updates the context to mark the function call as cancelled.

Parameters:

frame (FunctionCallCancelFrame) – Frame containing the function call cancellation information.

async handle_user_image_frame(frame)[source]

Handle a user image frame from a function call request.

Marks the associated function call as completed and adds the image to the context for processing.

Parameters:

frame (UserImageRawFrame) – Frame containing the user image and request context.
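
The handlers above are driven by the service's function-calling lifecycle. A hedged sketch of how they come into play, assuming register_function as the registration hook and the FunctionCallParams-style handler signature used in recent Pipecat releases (both vary by version); fetch_weather is a hypothetical tool name.

async def fetch_weather(params):
    # While this coroutine runs, the assistant aggregator tracks the call as
    # IN_PROGRESS via handle_function_call_in_progress(). Delivering a result
    # through result_callback leads to a FunctionCallResultFrame, which
    # handle_function_call_result() uses to replace the IN_PROGRESS entry.
    await params.result_callback({"conditions": "sunny", "temperature_f": 72})

llm.register_function("fetch_weather", fetch_weather)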