OpenAI LLM Context

class pipecat.processors.aggregators.openai_llm_context.CustomEncoder(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, sort_keys=False, indent=None, separators=None, default=None)[source]

Bases: JSONEncoder

default(obj)[source]

Implement this method in a subclass such that it returns a serializable object for obj, or calls the base implementation (to raise a TypeError).

For example, to support arbitrary iterators, you could implement default like this:

def default(self, o):
    try:
        iterable = iter(o)
    except TypeError:
        pass
    else:
        return list(iterable)
    # Let the base class default method raise the TypeError
    return super().default(o)
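
A quick way to exercise the example above is to pass the subclass to json.dumps via its cls argument; this is a minimal sketch of that pattern, not Pipecat-specific behavior:

    import json

    class IterEncoder(json.JSONEncoder):
        def default(self, o):
            try:
                iterable = iter(o)
            except TypeError:
                pass
            else:
                return list(iterable)
            # Let the base class default method raise the TypeError
            return super().default(o)

    # Generators are not JSON-serializable by default; the encoder converts them.
    print(json.dumps({"evens": (n for n in range(6) if n % 2 == 0)}, cls=IterEncoder))
    # -> {"evens": [0, 2, 4]}
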
class pipecat.processors.aggregators.openai_llm_context.OpenAILLMContext(messages=None, tools=NOT_GIVEN, tool_choice=NOT_GIVEN)[source]

Bases: object

Parameters:
  • messages (List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam] | None)

  • tools (List[ChatCompletionToolParam] | NotGiven | ToolsSchema)

  • tool_choice (Literal['none', 'auto', 'required'] | ChatCompletionNamedToolChoiceParam | NotGiven)
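
A minimal construction sketch, assuming plain OpenAI-style message dicts (which satisfy the ChatCompletion*MessageParam typed dicts above):

    from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext

    context = OpenAILLMContext(
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Hello"},
        ]
    )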

get_llm_adapter()[source]
Return type:

BaseLLMAdapter | None

set_llm_adapter(llm_adapter)[source]
Parameters:

llm_adapter (BaseLLMAdapter)
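
The adapter is normally installed by an LLM service rather than by hand; a hedged sketch of the accessor pair:

    adapter = context.get_llm_adapter()  # None until a service installs one
    # An LLM service typically calls context.set_llm_adapter(...) with its own
    # BaseLLMAdapter; doing this manually is unusual outside of tests.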

static from_messages(messages)[source]
Parameters:

messages (List[dict])

Return type:

OpenAILLMContext
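
A hedged usage sketch, building a context from plain dicts:

    context = OpenAILLMContext.from_messages([
        {"role": "system", "content": "Answer briefly."},
        {"role": "user", "content": "What is Pipecat?"},
    ])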

property messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]

property tools: List[ChatCompletionToolParam] | NotGiven | List[Any]

property tool_choice: Literal['none', 'auto', 'required'] | ChatCompletionNamedToolChoiceParam | NotGiven
add_message(message)[source]
Parameters:

message (ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam)

add_messages(messages)[source]
Parameters:

messages (List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam])

set_messages(messages)[source]
Parameters:

messages (List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam])
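
Taken together, these mutators cover both appending and wholesale replacement; a short sketch:

    context.add_message({"role": "user", "content": "First question"})
    context.add_messages([
        {"role": "assistant", "content": "First answer"},
        {"role": "user", "content": "A follow-up"},
    ])
    context.set_messages([{"role": "system", "content": "Start over."}])  # replaces all history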

get_messages()[source]
Return type:

List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]

get_messages_json()[source]
Return type:

str

get_messages_for_logging()[source]
Return type:

str
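
A sketch of the two serializers; treat the comment on the logging variant as an assumption, since its exact format is an implementation detail:

    json_str = context.get_messages_json()  # full JSON of the message list
    # Assumption: the logging form is a compact/redacted variant suited to log lines.
    log_str = context.get_messages_for_logging()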

from_standard_message(message)[source]

Convert from OpenAI message format to OpenAI message format (passthrough).

OpenAI's format allows both simple string content and structured content:

  • Simple: {"role": "user", "content": "Hello"}

  • Structured: {"role": "user", "content": [{"type": "text", "text": "Hello"}]}

Since OpenAI is our standard format, this is a passthrough function.

Parameters:

message (dict) – Message in OpenAI format

Returns:

Same message, unchanged

Return type:

dict

to_standard_messages(obj)[source]

Convert from OpenAI message format to OpenAI message format (passthrough).

OpenAI’s format is our standard format throughout Pipecat. This function returns a list containing the original message to maintain consistency with other LLM services that may need to return multiple messages.

Parameters:

obj (dict) – Message in OpenAI format with either:

  • Simple content: {"role": "user", "content": "Hello"}

  • List content: {"role": "user", "content": [{"type": "text", "text": "Hello"}]}

Returns:

List containing the original message, preserving whether the content was in simple string or structured list format

Return type:

list
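
Because OpenAI's format is the standard format, both converters are trivial; a sketch of the documented behavior:

    msg = {"role": "user", "content": "Hello"}
    assert context.from_standard_message(msg) == msg   # unchanged passthrough
    assert context.to_standard_messages(msg) == [msg]  # wrapped in a single-item list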

get_messages_for_initializing_history()[source]

get_messages_for_persistent_storage()[source]

set_tool_choice(tool_choice)[source]
Parameters:

tool_choice (Literal['none', 'auto', 'required'] | ChatCompletionNamedToolChoiceParam | NotGiven)

set_tools(tools=NOT_GIVEN)[source]
Parameters:

tools (List[ChatCompletionToolParam] | NotGiven | ToolsSchema)
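
A hedged sketch using the raw OpenAI tool dict shape (get_weather is a hypothetical function, purely for illustration):

    context.set_tools([
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ])
    context.set_tool_choice("auto")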

add_image_frame_message(*, format, size, image, text=None)[source]
Parameters:
  • format (str)

  • size (tuple[int, int])

  • image (bytes)

  • text (str | None)
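
A hedged sketch; it assumes format is a raw pixel mode (e.g. "RGB") and image is the matching raw buffer, as with Pipecat's image frames:

    # Assumption: format is a raw pixel mode and image is the raw pixel buffer.
    context.add_image_frame_message(
        format="RGB",
        size=(640, 480),
        image=b"\x00" * (640 * 480 * 3),  # placeholder all-black frame
        text="What do you see in this image?",
    )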

add_audio_frames_message(*, audio_frames, text=None)[source]
Parameters:
  • audio_frames (list[AudioRawFrame])

  • text (str | None)
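
A hedged sketch, assuming AudioRawFrame carries raw PCM plus its sample rate and channel count, as elsewhere in Pipecat:

    from pipecat.frames.frames import AudioRawFrame

    # One second of silent 16 kHz mono 16-bit PCM.
    frame = AudioRawFrame(audio=b"\x00" * 32000, sample_rate=16000, num_channels=1)
    context.add_audio_frames_message(audio_frames=[frame], text="[User audio]")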

create_wav_header(sample_rate, num_channels, bits_per_sample, data_size)[source]
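
No further documentation is generated for this helper; a hedged sketch assuming it returns a standard RIFF/WAV header for the given PCM parameters:

    pcm = b"\x00" * 32000  # hypothetical raw PCM payload
    header = context.create_wav_header(16000, 1, 16, len(pcm))
    wav_bytes = bytes(header) + pcm  # assumption: header is a standard 44-byte RIFF header
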
class pipecat.processors.aggregators.openai_llm_context.OpenAILLMContextFrame(context)[source]

Bases: Frame

Like an LLMMessagesFrame, but with extra context specific to the OpenAI API. The context in this frame is mutable and will be changed by the OpenAIContextAggregator frame processor.

Parameters:

context (OpenAILLMContext)

context: OpenAILLMContext
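
A hedged end-to-end sketch; the push_frame call in the comment assumes the usual FrameProcessor method for sending frames downstream:

    from pipecat.processors.aggregators.openai_llm_context import (
        OpenAILLMContext,
        OpenAILLMContextFrame,
    )

    context = OpenAILLMContext.from_messages(
        [{"role": "user", "content": "Hello"}]
    )
    frame = OpenAILLMContextFrame(context=context)
    # Inside a FrameProcessor this would be pushed downstream, e.g.:
    #     await self.push_frame(frame)
    # Downstream services read (and mutate) frame.context in place.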