OpenAI

class pipecat.services.openai_realtime_beta.openai.CurrentAudioResponse(item_id: str, content_index: int, start_time_ms: int, total_size: int = 0)[source]

Bases: object

Tracks the audio response currently being streamed back from the server.

Parameters:
  • item_id (str)

  • content_index (int)

  • start_time_ms (int)

  • total_size (int)

item_id: str
content_index: int
start_time_ms: int
total_size: int = 0
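
The fields above map directly onto a plain dataclass. A minimal self-contained sketch (the `duration_ms` helper and the 24 kHz/16-bit audio assumptions are illustrative additions, not part of the pipecat API):

```python
from dataclasses import dataclass


@dataclass
class CurrentAudioResponse:
    """Tracks the audio response currently being streamed from the server."""

    item_id: str
    content_index: int
    start_time_ms: int
    total_size: int = 0  # bytes of audio received so far

    def duration_ms(self, sample_rate: int = 24000, bytes_per_sample: int = 2) -> int:
        # Hypothetical helper: converts received PCM bytes to milliseconds,
        # assuming 24 kHz, 16-bit mono audio.
        return int(self.total_size / (sample_rate * bytes_per_sample) * 1000)


resp = CurrentAudioResponse(item_id="item_1", content_index=0, start_time_ms=0)
resp.total_size += 48000  # one second of 24 kHz, 16-bit mono audio
print(resp.duration_ms())  # → 1000
```

Since `total_size` defaults to 0, callers can create the tracker as soon as the first audio delta arrives and accumulate bytes as they stream in.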
class pipecat.services.openai_realtime_beta.openai.OpenAIRealtimeBetaLLMService(*, api_key, model='gpt-4o-realtime-preview-2025-06-03', base_url='wss://api.openai.com/v1/realtime', session_properties=None, start_audio_paused=False, send_transcription_frames=True, **kwargs)[source]

Bases: LLMService

An LLM service that connects to the OpenAI Realtime API over a WebSocket connection, supporting speech-to-speech conversations.

Parameters:
  • api_key (str)

  • model (str)

  • base_url (str)

  • session_properties (SessionProperties | None)

  • start_audio_paused (bool)

  • send_transcription_frames (bool)

adapter_class

alias of OpenAIRealtimeLLMAdapter

can_generate_metrics()[source]

Check whether this service can generate processing metrics.

Return type:

bool

set_audio_input_paused(paused)[source]

Pause or resume audio input to the service.

Parameters:

paused (bool) – Whether audio input should be paused.

async retrieve_conversation_item(item_id)[source]

Retrieve a conversation item from the server by its ID.

Parameters:

item_id (str) – The ID of the conversation item to retrieve.

async start(frame)[source]

Start the LLM service.

Parameters:

frame (StartFrame) – The start frame.

async stop(frame)[source]

Stop the LLM service.

Parameters:

frame (EndFrame) – The end frame.

async cancel(frame)[source]

Cancel the LLM service.

Parameters:

frame (CancelFrame) – The cancel frame.

async process_frame(frame, direction)[source]

Process a frame.

Parameters:
  • frame (Frame) – The frame to process.

  • direction (FrameDirection) – The direction of frame processing.

async send_client_event(event)[source]

Send a client event to the Realtime API over the WebSocket connection.

Parameters:

event (ClientEvent)

async handle_evt_input_audio_transcription_completed(evt)[source]

Handle a completed input audio transcription event from the server.

async reset_conversation()[source]

Reset the current conversation state.
create_context_aggregator(context, *, user_params=LLMUserAggregatorParams(aggregation_timeout=0.5), assistant_params=LLMAssistantAggregatorParams(expect_stripped_words=True))[source]

Create an instance of OpenAIContextAggregatorPair from an OpenAILLMContext. Constructor keyword arguments for both the user and assistant aggregators can be provided.

Parameters:
  • context (OpenAILLMContext) – The LLM context.

  • user_params (LLMUserAggregatorParams, optional) – User aggregator parameters.

  • assistant_params (LLMAssistantAggregatorParams, optional) – Assistant aggregator parameters.

Returns:

A pair of context aggregators, one for the user and one for the assistant, encapsulated in an OpenAIContextAggregatorPair.

Return type:

OpenAIContextAggregatorPair