LlmResponse
- class pipecat.processors.aggregators.llm_response.LLMUserAggregatorParams(aggregation_timeout: float = 0.5)[source]
Bases:
object
- Parameters:
aggregation_timeout (float)
- aggregation_timeout: float = 0.5
- class pipecat.processors.aggregators.llm_response.LLMAssistantAggregatorParams(expect_stripped_words: bool = True)[source]
Bases:
object
- Parameters:
expect_stripped_words (bool)
- expect_stripped_words: bool = True
- class pipecat.processors.aggregators.llm_response.LLMFullResponseAggregator(**kwargs)[source]
Bases:
FrameProcessor
This is an LLM aggregator that aggregates a full LLM completion. It aggregates LLM text frames (tokens) received between LLMFullResponseStartFrame and LLMFullResponseEndFrame. Every full completion is returned via the "on_completion" event handler:

@aggregator.event_handler("on_completion")
async def on_completion(
    aggregator: LLMFullResponseAggregator,
    completion: str,
    completed: bool,
)
- async process_frame(frame, direction)[source]
- Parameters:
frame (Frame)
direction (FrameDirection)
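The aggregation behavior described above can be illustrated with a minimal, self-contained sketch. This is not pipecat's actual implementation; the frame classes below are simplified stand-ins for the real ones in pipecat, and the real aggregator fires its "on_completion" event handler instead of storing completions in a list:

```python
# Simplified stand-ins for the frames involved (the real classes live
# in pipecat; these are illustrative only).
class LLMFullResponseStartFrame: ...
class LLMFullResponseEndFrame: ...

class TextFrame:
    def __init__(self, text: str):
        self.text = text

class FullResponseSketch:
    """Collects TextFrame tokens between start and end frames."""

    def __init__(self):
        self._aggregating = False
        self._parts: list[str] = []
        self.completions: list[str] = []

    def process(self, frame):
        if isinstance(frame, LLMFullResponseStartFrame):
            self._aggregating = True
            self._parts = []
        elif isinstance(frame, LLMFullResponseEndFrame):
            self._aggregating = False
            # A full completion is ready; the real class invokes the
            # "on_completion" event handler at this point.
            self.completions.append("".join(self._parts))
        elif isinstance(frame, TextFrame) and self._aggregating:
            self._parts.append(frame.text)

agg = FullResponseSketch()
for f in [
    LLMFullResponseStartFrame(),
    TextFrame("Hello"),
    TextFrame(", world"),
    LLMFullResponseEndFrame(),
]:
    agg.process(f)
print(agg.completions)  # ['Hello, world']
```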
- class pipecat.processors.aggregators.llm_response.BaseLLMResponseAggregator(**kwargs)[source]
Bases:
FrameProcessor
This is the base class for all LLM response aggregators. These aggregators process incoming frames and aggregate content until they are ready to push the aggregation. In the case of a user aggregator, an aggregation might be a full transcription received from the STT service.
The LLM response aggregators also keep a store (e.g. a message list or an LLM context) of the current conversation; that is, they store the messages said by the user or by the bot.
- abstract property messages: List[dict]
Returns the messages from the current conversation.
- abstract property role: str
Returns the role (e.g. user, assistant…) for this aggregator.
- abstractmethod add_messages(messages)[source]
Add the given messages to the conversation.
- abstractmethod set_messages(messages)[source]
Reset the conversation with the given messages.
- abstractmethod set_tools(tools)[source]
Set LLM tools to be used in the current conversation.
- abstractmethod set_tool_choice(tool_choice)[source]
Set the tool choice. This should modify the LLM context.
- abstractmethod async reset()[source]
Reset the internals of this aggregator. This should not modify the internal messages.
- abstractmethod async handle_aggregation(aggregation)[source]
Adds the given aggregation to the aggregator. The aggregator can use a simple list of messages or a context. It does not push any frames.
- Parameters:
aggregation (str)
- abstractmethod async push_aggregation()[source]
Pushes the current aggregation. For example, in the case of context aggregation this might push a new context frame.
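To make the abstract contract concrete, here is a hedged sketch of a subclass backed by a plain message list. It is illustrative only: it omits the FrameProcessor machinery, tools handling, and the frame pushing that the real aggregators perform:

```python
import asyncio

# Stripped-down illustration of the BaseLLMResponseAggregator contract
# using a plain list of message dicts as the store.
class ListBackedAggregator:
    def __init__(self, role: str):
        self._role = role
        self._messages: list[dict] = []
        self._aggregation = ""

    @property
    def messages(self) -> list[dict]:
        return self._messages

    @property
    def role(self) -> str:
        return self._role

    def add_messages(self, messages):
        self._messages.extend(messages)

    def set_messages(self, messages):
        self._messages = list(messages)

    async def reset(self):
        # Clears in-flight aggregation state; stored messages stay intact.
        self._aggregation = ""

    async def handle_aggregation(self, aggregation: str):
        # Accumulates content; does not push any frames.
        self._aggregation += aggregation

    async def push_aggregation(self):
        # The real aggregators would also push a frame downstream here.
        if self._aggregation:
            self._messages.append(
                {"role": self._role, "content": self._aggregation}
            )
            self._aggregation = ""

async def demo():
    agg = ListBackedAggregator("user")
    await agg.handle_aggregation("Hello")
    await agg.handle_aggregation(" there")
    await agg.push_aggregation()
    return agg.messages

messages = asyncio.run(demo())
print(messages)  # [{'role': 'user', 'content': 'Hello there'}]
```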
- class pipecat.processors.aggregators.llm_response.LLMContextResponseAggregator(*, context, role, **kwargs)[source]
Bases:
BaseLLMResponseAggregator
This is a base LLM aggregator that uses an LLM context to store the conversation. It pushes OpenAILLMContextFrame as an aggregation frame.
- Parameters:
context (OpenAILLMContext)
role (str)
- property messages: List[dict]
Returns the messages from the current conversation.
- property role: str
Returns the role (e.g. user, assistant…) for this aggregator.
- property context
- get_context_frame()[source]
- Return type:
OpenAILLMContextFrame
- async push_context_frame(direction=FrameDirection.DOWNSTREAM)[source]
- Parameters:
direction (FrameDirection)
- add_messages(messages)[source]
Add the given messages to the conversation.
- set_messages(messages)[source]
Reset the conversation with the given messages.
- set_tools(tools)[source]
Set LLM tools to be used in the current conversation.
- Parameters:
tools (List)
- set_tool_choice(tool_choice)[source]
Set the tool choice. This should modify the LLM context.
- Parameters:
tool_choice (Literal['none', 'auto', 'required'] | dict)
- async reset()[source]
Reset the internals of this aggregator. This does not modify the internal messages.
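The set_tools and set_tool_choice setters above mutate the underlying LLM context. The following is a hedged sketch of a minimal context store mirroring those setters; MiniLLMContext is a hypothetical stand-in, not pipecat's OpenAILLMContext:

```python
from typing import Literal, Union

ToolChoice = Union[Literal["none", "auto", "required"], dict]

# Hypothetical, minimal "LLM context" holding messages, tools, and the
# tool choice, mirroring the setters documented above.
class MiniLLMContext:
    def __init__(self):
        self.messages: list[dict] = []
        self.tools: list = []
        self.tool_choice: ToolChoice = "auto"

    def set_tools(self, tools: list):
        self.tools = tools

    def set_tool_choice(self, tool_choice: ToolChoice):
        self.tool_choice = tool_choice

ctx = MiniLLMContext()
ctx.set_tools([{"type": "function", "function": {"name": "get_weather"}}])
ctx.set_tool_choice("required")
print(ctx.tool_choice)  # required
```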
- class pipecat.processors.aggregators.llm_response.LLMUserContextAggregator(context, *, params=None, **kwargs)[source]
Bases:
LLMContextResponseAggregator
This is a user LLM aggregator that uses an LLM context to store the conversation. It aggregates transcriptions from the STT service and has logic to handle multiple scenarios in which transcriptions are received: between VAD events (UserStartedSpeakingFrame and UserStoppedSpeakingFrame), outside of them, or with no VAD events at all.
- Parameters:
context (OpenAILLMContext)
params (LLMUserAggregatorParams | None)
- async reset()[source]
Reset the internals of this aggregator. This should not modify the internal messages.
- async handle_aggregation(aggregation)[source]
Adds the given aggregation to the aggregator. The aggregator can use a simple list of messages or a context. It does not push any frames.
- Parameters:
aggregation (str)
- async process_frame(frame, direction)[source]
- Parameters:
frame (Frame)
direction (FrameDirection)
- async push_aggregation()[source]
Pushes the current aggregation based on interruption strategies and conditions.
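The basic VAD-gated path can be sketched as follows. This toy model is illustrative only: the frame classes are stand-ins, and the real aggregator handles many more cases, such as transcriptions that arrive after UserStoppedSpeakingFrame within the aggregation_timeout, or with no VAD events at all:

```python
# Hypothetical stand-ins for the VAD and transcription frames.
class UserStartedSpeakingFrame: ...
class UserStoppedSpeakingFrame: ...

class TranscriptionFrame:
    def __init__(self, text: str):
        self.text = text

class UserAggregationSketch:
    """Aggregates transcriptions while the user is speaking and pushes
    the aggregation when speech stops."""

    def __init__(self):
        self._speaking = False
        self._parts: list[str] = []
        self.pushed: list[str] = []

    def process(self, frame):
        if isinstance(frame, UserStartedSpeakingFrame):
            self._speaking = True
        elif isinstance(frame, UserStoppedSpeakingFrame):
            self._speaking = False
            if self._parts:
                # The real aggregator pushes a context frame here.
                self.pushed.append(" ".join(self._parts))
                self._parts = []
        elif isinstance(frame, TranscriptionFrame):
            self._parts.append(frame.text)

agg = UserAggregationSketch()
for f in [
    UserStartedSpeakingFrame(),
    TranscriptionFrame("Hi"),
    TranscriptionFrame("there"),
    UserStoppedSpeakingFrame(),
]:
    agg.process(f)
print(agg.pushed)  # ['Hi there']
```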
- class pipecat.processors.aggregators.llm_response.LLMAssistantContextAggregator(context, *, params=None, **kwargs)[source]
Bases:
LLMContextResponseAggregator
This is an assistant LLM aggregator that uses an LLM context to store the conversation. It aggregates text frames received between LLMFullResponseStartFrame and LLMFullResponseEndFrame.
- Parameters:
context (OpenAILLMContext)
params (LLMAssistantAggregatorParams | None)
- property has_function_calls_in_progress: bool
Check if there are any function calls currently in progress.
- Returns:
True if function calls are in progress, False otherwise
- Return type:
bool
- async handle_aggregation(aggregation)[source]
Adds the given aggregation to the aggregator. The aggregator can use a simple list of messages or a context. It does not push any frames.
- Parameters:
aggregation (str)
- async handle_function_call_in_progress(frame)[source]
- Parameters:
frame (FunctionCallInProgressFrame)
- async handle_function_call_result(frame)[source]
- Parameters:
frame (FunctionCallResultFrame)
- async handle_function_call_cancel(frame)[source]
- Parameters:
frame (FunctionCallCancelFrame)
- async handle_user_image_frame(frame)[source]
- Parameters:
frame (UserImageRawFrame)
- async process_frame(frame, direction)[source]
- Parameters:
frame (Frame)
direction (FrameDirection)
- async push_aggregation()[source]
Pushes the current aggregation. For example, in the case of context aggregation this might push a new context frame.
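The function-call lifecycle that has_function_calls_in_progress reflects can be sketched like this. The frame classes below are simplified stand-ins (the real ones carry more fields), and the tracker only models the in-progress bookkeeping, not the context updates:

```python
# Simplified stand-ins for the function-call frames.
class FunctionCallInProgressFrame:
    def __init__(self, call_id: str):
        self.call_id = call_id

class FunctionCallResultFrame:
    def __init__(self, call_id: str, result):
        self.call_id = call_id
        self.result = result

class FunctionCallCancelFrame:
    def __init__(self, call_id: str):
        self.call_id = call_id

class CallTrackerSketch:
    """Tracks which function calls are in progress; a call leaves the
    set when its result arrives or it is cancelled."""

    def __init__(self):
        self._in_progress: set[str] = set()

    @property
    def has_function_calls_in_progress(self) -> bool:
        return bool(self._in_progress)

    def handle(self, frame):
        if isinstance(frame, FunctionCallInProgressFrame):
            self._in_progress.add(frame.call_id)
        elif isinstance(frame, (FunctionCallResultFrame, FunctionCallCancelFrame)):
            self._in_progress.discard(frame.call_id)

tracker = CallTrackerSketch()
tracker.handle(FunctionCallInProgressFrame("call_1"))
print(tracker.has_function_calls_in_progress)  # True
tracker.handle(FunctionCallResultFrame("call_1", {"ok": True}))
print(tracker.has_function_calls_in_progress)  # False
```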
- class pipecat.processors.aggregators.llm_response.LLMUserResponseAggregator(messages=None, *, params=None, **kwargs)[source]
Bases:
LLMUserContextAggregator
- Parameters:
messages (List[dict] | None)
params (LLMUserAggregatorParams | None)
- async push_aggregation()[source]
Pushes the current aggregation based on interruption strategies and conditions.
- class pipecat.processors.aggregators.llm_response.LLMAssistantResponseAggregator(messages=None, *, params=None, **kwargs)[source]
Bases:
LLMAssistantContextAggregator
- Parameters:
messages (List[dict] | None)
params (LLMAssistantAggregatorParams | None)
- async push_aggregation()[source]
Pushes the current aggregation. For example, in the case of context aggregation this might push a new context frame.