Context

class pipecat.services.openai_realtime_beta.context.OpenAIRealtimeLLMContext(messages=None, tools=None, **kwargs)[source]

Bases: OpenAILLMContext

static upgrade_to_realtime(obj)[source]
Parameters:

obj (OpenAILLMContext)

Return type:

OpenAIRealtimeLLMContext
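A common way to implement this kind of in-place upgrade is to reassign the instance's `__class__` so existing state (messages, tools) is preserved. The sketch below illustrates that pattern with hypothetical stand-in classes (`BaseContext`, `RealtimeContext`); it is not pipecat's actual implementation.

```python
# Hypothetical sketch of an "upgrade in place" static method: convert an
# existing base-class instance to the subclass without copying its state.
class BaseContext:
    def __init__(self, messages=None):
        self.messages = messages or []


class RealtimeContext(BaseContext):
    @staticmethod
    def upgrade(obj):
        # Only retag instances that are not already the subclass.
        if isinstance(obj, BaseContext) and not isinstance(obj, RealtimeContext):
            obj.__class__ = RealtimeContext
        return obj


ctx = BaseContext(messages=[{"role": "user", "content": "Hello"}])
upgraded = RealtimeContext.upgrade(ctx)
# upgraded is the same object, now typed as RealtimeContext, with its
# message history intact.
```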

from_standard_message(message)[source]

Convert a message from the standard format to OpenAI message format. Since the standard format is OpenAI's format, this is a passthrough.

OpenAI's format allows both simple string content and structured content:

  • Simple: {"role": "user", "content": "Hello"}

  • Structured: {"role": "user", "content": [{"type": "text", "text": "Hello"}]}

Since OpenAI is our standard format, this is a passthrough function.

Parameters:

message (dict) – Message in OpenAI format

Returns:

Same message, unchanged

Return type:

dict
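A minimal, self-contained sketch of the behavior described above, using plain dicts to show both accepted content shapes (the `from_standard_message` stand-in here is illustrative, not pipecat's code):

```python
# Both message shapes OpenAI accepts: simple string content and a list of
# structured content parts.
simple = {"role": "user", "content": "Hello"}
structured = {"role": "user", "content": [{"type": "text", "text": "Hello"}]}


def from_standard_message(message):
    # Passthrough: the standard format is already OpenAI's format.
    return message


assert from_standard_message(simple) is simple
assert from_standard_message(structured) is structured
```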

get_messages_for_initializing_history()[source]
add_user_content_item_as_message(item)[source]
class pipecat.services.openai_realtime_beta.context.OpenAIRealtimeUserContextAggregator(context, *, params=None, **kwargs)[source]

Bases: OpenAIUserContextAggregator

Parameters:
  • context (OpenAILLMContext)

  • params (LLMUserAggregatorParams | None)

async process_frame(frame, direction=FrameDirection.DOWNSTREAM)[source]
Parameters:
  • frame (Frame)

  • direction (FrameDirection)

async push_aggregation()[source]

Pushes the current aggregation based on interruption strategies and conditions.

class pipecat.services.openai_realtime_beta.context.OpenAIRealtimeAssistantContextAggregator(context, *, params=None, **kwargs)[source]

Bases: OpenAIAssistantContextAggregator

Parameters:
  • context (OpenAILLMContext)

  • params (LLMAssistantAggregatorParams | None)

async process_frame(frame, direction)[source]
Parameters:
  • frame (Frame)

  • direction (FrameDirection)

async handle_function_call_result(frame)[source]

Handle the result of a function call.

Updates the context with the function call result, replacing any previous IN_PROGRESS status.

Parameters:

frame (FunctionCallResultFrame) – Frame containing the function call result.
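The replacement behavior described above can be sketched with a simple list-of-messages model. The helper and message shape below are hypothetical illustrations of the pattern (placeholder replaced by the real result), not pipecat's internal representation:

```python
# Hypothetical sketch: a function-call entry starts as an IN_PROGRESS
# placeholder and is replaced in place once the result frame arrives.
messages = [
    {"role": "assistant", "tool_call_id": "call_1", "output": "IN_PROGRESS"},
]


def apply_function_call_result(messages, tool_call_id, result):
    # Find the matching in-progress entry and swap in the actual result.
    for msg in messages:
        if msg.get("tool_call_id") == tool_call_id and msg.get("output") == "IN_PROGRESS":
            msg["output"] = result
    return messages


apply_function_call_result(messages, "call_1", '{"temperature": 72}')
# messages[0]["output"] now holds the function result instead of the
# IN_PROGRESS placeholder.
```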