Frames
- class pipecat.frames.frames.KeypadEntry(*values)[source]
Bases:
str, Enum
DTMF entries.
- ONE = '1'
- TWO = '2'
- THREE = '3'
- FOUR = '4'
- FIVE = '5'
- SIX = '6'
- SEVEN = '7'
- EIGHT = '8'
- NINE = '9'
- ZERO = '0'
- POUND = '#'
- STAR = '*'
- pipecat.frames.frames.format_pts(pts)[source]
- Parameters:
pts (int | None)
- class pipecat.frames.frames.Frame[source]
Bases:
object
Base frame class.
- id: int
- name: str
- pts: int | None
- metadata: Dict[str, Any]
- transport_source: str | None
- transport_destination: str | None
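A quick sketch of how these fields look on a concrete frame subclass; id and name are assigned automatically, and the exact name format is an assumption:

    from pipecat.frames.frames import TextFrame

    frame = TextFrame(text="hello")
    print(frame.id)    # unique integer id, assigned at creation
    print(frame.name)  # readable name, e.g. "TextFrame#1" (format assumed)
    print(frame.pts)   # presentation timestamp; None until a processor sets it
    frame.metadata["trace_id"] = "abc-123"  # free-form per-frame metadata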
- class pipecat.frames.frames.SystemFrame[source]
Bases:
Frame
System frames are frames that are not internally queued by any of the frame processors and should be processed immediately.
- class pipecat.frames.frames.DataFrame[source]
Bases:
Frame
Data frames are frames that will be processed in order and usually contain data such as LLM context, text, audio or images.
- class pipecat.frames.frames.ControlFrame[source]
Bases:
Frame
Control frames are frames that, similar to data frames, will be processed in order and usually contain control information such as frames to update settings or to end the pipeline.
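A minimal sketch of a processor that distinguishes the three categories, assuming the standard FrameProcessor base class from pipecat.processors.frame_processor:

    from pipecat.frames.frames import ControlFrame, DataFrame, Frame, SystemFrame
    from pipecat.processors.frame_processor import FrameDirection, FrameProcessor

    class CategoryLogger(FrameProcessor):
        async def process_frame(self, frame: Frame, direction: FrameDirection):
            await super().process_frame(frame, direction)
            if isinstance(frame, SystemFrame):
                print(f"{frame.name}: system frame, handled immediately")
            elif isinstance(frame, (DataFrame, ControlFrame)):
                print(f"{frame.name}: queued, processed in order")
            await self.push_frame(frame, direction)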
- class pipecat.frames.frames.AudioRawFrame(audio, sample_rate, num_channels)[source]
Bases:
object
A chunk of audio.
- Parameters:
audio (bytes)
sample_rate (int)
num_channels (int)
- audio: bytes
- sample_rate: int
- num_channels: int
- num_frames: int = 0
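A hedged example of building an audio chunk from raw PCM; it assumes 16-bit little-endian samples, from which num_frames is presumably derived:

    import math
    import struct

    from pipecat.frames.frames import AudioRawFrame

    SAMPLE_RATE = 16000
    # 100 ms of a 440 Hz sine tone as 16-bit PCM.
    samples = [
        int(32767 * math.sin(2 * math.pi * 440 * i / SAMPLE_RATE))
        for i in range(SAMPLE_RATE // 10)
    ]
    audio = struct.pack(f"<{len(samples)}h", *samples)

    chunk = AudioRawFrame(audio=audio, sample_rate=SAMPLE_RATE, num_channels=1)
    print(chunk.num_frames)  # expected 1600 for 16-bit mono (assumption)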
- class pipecat.frames.frames.ImageRawFrame(image, size, format)[source]
Bases:
object
A raw image.
- Parameters:
image (bytes)
size (Tuple[int, int])
format (str | None)
- image: bytes
- size: Tuple[int, int]
- format: str | None
- class pipecat.frames.frames.OutputAudioRawFrame(audio, sample_rate, num_channels)[source]
Bases:
DataFrame, AudioRawFrame
A chunk of audio. Will be played by the output transport. If the transport supports multiple audio destinations (e.g. multiple audio tracks) the destination name can be specified.
- Parameters:
audio (bytes)
sample_rate (int)
num_channels (int)
- class pipecat.frames.frames.OutputImageRawFrame(image, size, format)[source]
Bases:
DataFrame, ImageRawFrame
An image that will be shown by the transport. If the transport supports multiple video destinations (e.g. multiple video tracks) the destination name can be specified.
- Parameters:
image (bytes)
size (Tuple[int, int])
format (str | None)
- class pipecat.frames.frames.TTSAudioRawFrame(audio, sample_rate, num_channels)[source]
Bases:
OutputAudioRawFrame
A chunk of output audio generated by a TTS service.
- Parameters:
audio (bytes)
sample_rate (int)
num_channels (int)
- class pipecat.frames.frames.URLImageRawFrame(image, size, format, url=None)[source]
Bases:
OutputImageRawFrame
An output image with an associated URL. These images are usually generated by third-party services that provide a URL to download the image.
- Parameters:
image (bytes)
size (Tuple[int, int])
format (str | None)
url (str | None)
- url: str | None = None
- class pipecat.frames.frames.SpriteFrame(images)[source]
Bases:
DataFrame
An animated sprite. Will be shown by the transport if the transport’s camera is enabled. Will play at the framerate specified in the transport’s camera_out_framerate constructor parameter.
- Parameters:
images (List[OutputImageRawFrame])
- images: List[OutputImageRawFrame]
- class pipecat.frames.frames.TextFrame(text)[source]
Bases:
DataFrame
A chunk of text. Emitted by LLM services, consumed by TTS services, can be used to send text through processors.
- Parameters:
text (str)
- text: str
- class pipecat.frames.frames.LLMTextFrame(text)[source]
Bases:
TextFrame
A text frame generated by LLM services.
- Parameters:
text (str)
- class pipecat.frames.frames.TTSTextFrame(text)[source]
Bases:
TextFrame
A text frame generated by TTS services.
- Parameters:
text (str)
- class pipecat.frames.frames.TranscriptionFrame(text, user_id, timestamp, language=None, result=None)[source]
Bases:
TextFrame
A text frame with transcription-specific data. The result field contains the result from the STT service if available.
- Parameters:
text (str)
user_id (str)
timestamp (str)
language (Language | None)
result (Any | None)
- user_id: str
- timestamp: str
- language: Language | None = None
- result: Any | None = None
- class pipecat.frames.frames.InterimTranscriptionFrame(text, user_id, timestamp, language=None, result=None)[source]
Bases:
TextFrame
A text frame with interim transcription-specific data. The result field contains the result from the STT service if available.
- Parameters:
text (str)
user_id (str)
timestamp (str)
language (Language | None)
result (Any | None)
- text: str
- user_id: str
- timestamp: str
- language: Language | None = None
- result: Any | None = None
- class pipecat.frames.frames.TranslationFrame(text, user_id, timestamp, language=None)[source]
Bases:
TextFrame
A text frame with translated transcription data.
Will be placed in the transport’s receive queue when a participant speaks.
- Parameters:
text (str)
user_id (str)
timestamp (str)
language (Language | None)
- user_id: str
- timestamp: str
- language: Language | None = None
- class pipecat.frames.frames.OpenAILLMContextAssistantTimestampFrame(timestamp)[source]
Bases:
DataFrame
Timestamp information for assistant message in LLM context.
- Parameters:
timestamp (str)
- timestamp: str
- class pipecat.frames.frames.TranscriptionMessage(role, content, user_id=None, timestamp=None)[source]
Bases:
object
A message in a conversation transcript containing the role and content.
Messages are in standard format with roles normalized to user/assistant.
- Parameters:
role (Literal['user', 'assistant'])
content (str)
user_id (str | None)
timestamp (str | None)
- role: Literal['user', 'assistant']
- content: str
- user_id: str | None = None
- timestamp: str | None = None
- class pipecat.frames.frames.TranscriptionUpdateFrame(messages)[source]
Bases:
DataFrame
A frame containing new messages added to the conversation transcript.
This frame is emitted when new messages are added to the conversation history, containing only the newly added messages rather than the full transcript. Messages have normalized roles (user/assistant) regardless of the LLM service used. Messages are always in the OpenAI standard message format, which supports both:
Simple format:

    [
        {"role": "user", "content": "Hi, how are you?"},
        {"role": "assistant", "content": "Great! And you?"}
    ]

Content list format:

    [
        {"role": "user", "content": [{"type": "text", "text": "Hi, how are you?"}]},
        {"role": "assistant", "content": [{"type": "text", "text": "Great! And you?"}]}
    ]
OpenAI supports both formats. Anthropic and Google messages are converted to the content list format.
- Parameters:
messages (List[TranscriptionMessage])
- messages: List[TranscriptionMessage]
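A minimal sketch of consuming these updates in a custom processor (standard FrameProcessor base class assumed):

    from pipecat.frames.frames import Frame, TranscriptionUpdateFrame
    from pipecat.processors.frame_processor import FrameDirection, FrameProcessor

    class TranscriptLogger(FrameProcessor):
        async def process_frame(self, frame: Frame, direction: FrameDirection):
            await super().process_frame(frame, direction)
            if isinstance(frame, TranscriptionUpdateFrame):
                for msg in frame.messages:  # only the newly added messages
                    print(f"[{msg.timestamp}] {msg.role}: {msg.content}")
            await self.push_frame(frame, direction)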
- class pipecat.frames.frames.LLMMessagesFrame(messages)[source]
Bases:
DataFrame
A frame containing a list of LLM messages. Used to signal that an LLM service should run a chat completion and emit an LLMFullResponseStartFrame, TextFrames and an LLMFullResponseEndFrame. Note that the messages property in this class is mutable and will be updated by various aggregators.
- Parameters:
messages (List[dict])
- messages: List[dict]
- class pipecat.frames.frames.LLMMessagesAppendFrame(messages)[source]
Bases:
DataFrame
A frame containing a list of LLM messages that need to be added to the current context.
- Parameters:
messages (List[dict])
- messages: List[dict]
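For example, to inject a message into the running context, a sketch assuming task is a running PipelineTask:

    from pipecat.frames.frames import LLMMessagesAppendFrame

    async def inject_summary_request(task):  # task: a running PipelineTask (assumed)
        await task.queue_frames([
            LLMMessagesAppendFrame(
                messages=[{"role": "user", "content": "Please summarize so far."}]
            )
        ])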
- class pipecat.frames.frames.LLMMessagesUpdateFrame(messages)[source]
Bases:
DataFrame
A frame containing a list of new LLM messages. These messages will replace the current context LLM messages and should generate a new LLMMessagesFrame.
- Parameters:
messages (List[dict])
- messages: List[dict]
- class pipecat.frames.frames.LLMSetToolsFrame(tools)[source]
Bases:
DataFrame
A frame containing a list of tools for an LLM to use for function calling. The specific format depends on the LLM being used, but it should typically contain JSON Schema objects.
- Parameters:
tools (List[dict])
- tools: List[dict]
- class pipecat.frames.frames.LLMSetToolChoiceFrame(tool_choice)[source]
Bases:
DataFrame
A frame containing a tool choice for an LLM to use for function calling.
- Parameters:
tool_choice (Literal['none', 'auto', 'required'] | dict)
- tool_choice: Literal['none', 'auto', 'required'] | dict
- class pipecat.frames.frames.LLMEnablePromptCachingFrame(enable)[source]
Bases:
DataFrame
A frame to enable/disable prompt caching in certain LLMs.
- Parameters:
enable (bool)
- enable: bool
- class pipecat.frames.frames.TTSSpeakFrame(text)[source]
Bases:
DataFrame
A frame that contains a text that should be spoken by the TTS in the pipeline (if any).
- Parameters:
text (str)
- text: str
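For example, to have the bot speak a fixed sentence, a sketch assuming task is a PipelineTask over a pipeline that includes a TTS service:

    from pipecat.frames.frames import TTSSpeakFrame

    async def greet(task):  # task: a running PipelineTask (assumed)
        await task.queue_frame(TTSSpeakFrame("Hello! How can I help you today?"))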
- class pipecat.frames.frames.TransportMessageFrame(message: Any)[source]
Bases:
DataFrame
- Parameters:
message (Any)
- message: Any
- class pipecat.frames.frames.DTMFFrame(button)[source]
Bases:
object
A DTMF button frame.
- Parameters:
button (KeypadEntry)
- button: KeypadEntry
- class pipecat.frames.frames.OutputDTMFFrame(button)[source]
Bases:
DTMFFrame, DataFrame
A DTMF keypress output that will be queued. If your transport supports multiple dial-out destinations, use the transport_destination field to specify where the DTMF keypress should be sent.
- Parameters:
button (KeypadEntry)
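A short sketch of queueing a keypress, assuming task is a running PipelineTask and a telephony transport that supports DTMF output:

    from pipecat.frames.frames import KeypadEntry, OutputDTMFFrame

    async def press_one(task):  # task: a running PipelineTask (assumed)
        await task.queue_frame(OutputDTMFFrame(button=KeypadEntry.ONE))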
- class pipecat.frames.frames.StartFrame(audio_in_sample_rate=16000, audio_out_sample_rate=24000, allow_interruptions=False, enable_metrics=False, enable_usage_metrics=False, report_only_initial_ttfb=False, interruption_strategies=<factory>)[source]
Bases:
SystemFrame
This is the first frame that should be pushed down a pipeline.
- Parameters:
audio_in_sample_rate (int)
audio_out_sample_rate (int)
allow_interruptions (bool)
enable_metrics (bool)
enable_usage_metrics (bool)
report_only_initial_ttfb (bool)
interruption_strategies (List[BaseInterruptionStrategy])
- audio_in_sample_rate: int = 16000
- audio_out_sample_rate: int = 24000
- allow_interruptions: bool = False
- enable_metrics: bool = False
- enable_usage_metrics: bool = False
- report_only_initial_ttfb: bool = False
- interruption_strategies: List[BaseInterruptionStrategy]
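In practice a StartFrame is usually created by the pipeline task rather than by hand; a sketch, assuming the PipelineParams fields mirror the StartFrame fields above:

    from pipecat.pipeline.task import PipelineParams, PipelineTask

    def make_task(pipeline):  # pipeline: an existing pipecat Pipeline (assumed)
        return PipelineTask(
            pipeline,
            params=PipelineParams(
                audio_out_sample_rate=24000,
                allow_interruptions=True,
                enable_metrics=True,
            ),
        )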
- class pipecat.frames.frames.CancelFrame[source]
Bases:
SystemFrame
Indicates that a pipeline needs to stop right away.
- class pipecat.frames.frames.ErrorFrame(error, fatal=False)[source]
Bases:
SystemFrame
This is used to notify upstream that an error has occurred downstream in the pipeline. A fatal error indicates the error is unrecoverable and that the bot should exit.
- Parameters:
error (str)
fatal (bool)
- error: str
- fatal: bool = False
- class pipecat.frames.frames.FatalErrorFrame(error)[source]
Bases:
ErrorFrame
This is used to notify upstream that an unrecoverable error has occurred and that the bot should exit.
- Parameters:
error (str)
- fatal: bool = True
- class pipecat.frames.frames.EndTaskFrame[source]
Bases:
SystemFrame
This is used to notify the pipeline task that the pipeline should be closed nicely (flushing all the queued frames) by pushing an EndFrame downstream.
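A sketch of ending the session gracefully from inside a processor; the EndTaskFrame travels upstream to the task, which then pushes an EndFrame downstream:

    from pipecat.frames.frames import EndTaskFrame, Frame, TextFrame
    from pipecat.processors.frame_processor import FrameDirection, FrameProcessor

    class GoodbyeWatcher(FrameProcessor):
        async def process_frame(self, frame: Frame, direction: FrameDirection):
            await super().process_frame(frame, direction)
            if isinstance(frame, TextFrame) and "goodbye" in frame.text.lower():
                await self.push_frame(EndTaskFrame(), FrameDirection.UPSTREAM)
            await self.push_frame(frame, direction)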
- class pipecat.frames.frames.CancelTaskFrame[source]
Bases:
SystemFrame
This is used to notify the pipeline task that the pipeline should be stopped immediately by pushing a CancelFrame downstream.
- class pipecat.frames.frames.StopTaskFrame[source]
Bases:
SystemFrame
This is used to notify the pipeline task that it should be stopped as soon as possible (flushing all the queued frames) but that the pipeline processors should be kept in a running state.
- class pipecat.frames.frames.FrameProcessorPauseUrgentFrame(processor)[source]
Bases:
SystemFrame
This frame is used to pause frame processing for the given processor as fast as possible. Pausing frame processing will keep frames in the internal queue which will then be processed when frame processing is resumed with FrameProcessorResumeUrgentFrame.
- Parameters:
processor (FrameProcessor)
- processor: FrameProcessor
- class pipecat.frames.frames.FrameProcessorResumeUrgentFrame(processor)[source]
Bases:
SystemFrame
This frame is used to resume frame processing as fast as possible for the given processor if it was previously paused. After resuming frame processing, all queued frames will be processed in the order received.
- Parameters:
processor (FrameProcessor)
- processor: FrameProcessor
- class pipecat.frames.frames.StartInterruptionFrame[source]
Bases:
SystemFrame
Emitted by the VAD to indicate that a user has started speaking (i.e. an interruption). This is similar to UserStartedSpeakingFrame except that it should be pushed concurrently with other frames (so the order is not guaranteed).
- class pipecat.frames.frames.StopInterruptionFrame[source]
Bases:
SystemFrame
Emitted by the VAD to indicate that a user has stopped speaking (i.e. no more interruptions). This is similar to UserStoppedSpeakingFrame except that it should be pushed concurrently with other frames (so the order is not guaranteed).
- class pipecat.frames.frames.UserStartedSpeakingFrame(emulated=False)[source]
Bases:
SystemFrame
Emitted by the VAD to indicate that a user has started speaking. This can be used for interruptions or other times when detecting that someone is speaking is more important than knowing what they’re saying (as you would with a TranscriptionFrame).
- Parameters:
emulated (bool)
- emulated: bool = False
- class pipecat.frames.frames.UserStoppedSpeakingFrame(emulated=False)[source]
Bases:
SystemFrame
Emitted by the VAD to indicate that a user stopped speaking.
- Parameters:
emulated (bool)
- emulated: bool = False
- class pipecat.frames.frames.EmulateUserStartedSpeakingFrame[source]
Bases:
SystemFrame
Emitted by internal processors upstream to emulate VAD behavior when a user starts speaking.
- class pipecat.frames.frames.EmulateUserStoppedSpeakingFrame[source]
Bases:
SystemFrame
Emitted by internal processors upstream to emulate VAD behavior when a user stops speaking.
- class pipecat.frames.frames.VADUserStartedSpeakingFrame[source]
Bases:
SystemFrame
Frame emitted when VAD detects the user has definitively started speaking.
- class pipecat.frames.frames.VADUserStoppedSpeakingFrame[source]
Bases:
SystemFrame
Frame emitted when VAD detects the user has definitively stopped speaking.
- class pipecat.frames.frames.BotInterruptionFrame[source]
Bases:
SystemFrame
Emitted when the bot should be interrupted. This will mainly cause the same actions as if the user interrupted, except that the UserStartedSpeakingFrame and UserStoppedSpeakingFrame won’t be generated.
- class pipecat.frames.frames.BotStartedSpeakingFrame[source]
Bases:
SystemFrame
Emitted upstream by transport outputs to indicate the bot started speaking.
- class pipecat.frames.frames.BotStoppedSpeakingFrame[source]
Bases:
SystemFrame
Emitted upstream by transport outputs to indicate the bot stopped speaking.
- class pipecat.frames.frames.BotSpeakingFrame[source]
Bases:
SystemFrame
Emitted upstream by transport outputs while the bot is still speaking. This can be used, for example, to detect when a user is idle. That is, while the bot is speaking we don’t want to trigger any user idle timeout since the user might be listening.
- class pipecat.frames.frames.MetricsFrame(data)[source]
Bases:
SystemFrame
Emitted by processors that can compute metrics, like latencies.
- Parameters:
data (List[MetricsData])
- data: List[MetricsData]
- class pipecat.frames.frames.FunctionCallFromLLM(function_name, tool_call_id, arguments, context)[source]
Bases:
object
Represents a function call returned by the LLM to be registered for execution.
- Parameters:
function_name (str)
tool_call_id (str)
arguments (Mapping[str, Any])
context (Any)
- function_name
The name of the function.
- Type:
str
- tool_call_id
A unique identifier for the function call.
- Type:
str
- arguments
The arguments for the function.
- Type:
Mapping[str, Any]
- context
The LLM context.
- Type:
OpenAILLMContext
- function_name: str
- tool_call_id: str
- arguments: Mapping[str, Any]
- context: Any
- class pipecat.frames.frames.FunctionCallsStartedFrame(function_calls)[source]
Bases:
SystemFrame
A frame signaling that the execution of one or more function calls is about to start.
- Parameters:
function_calls (Sequence[FunctionCallFromLLM])
- function_calls: Sequence[FunctionCallFromLLM]
- class pipecat.frames.frames.FunctionCallInProgressFrame(function_name, tool_call_id, arguments, cancel_on_interruption=False)[source]
Bases:
SystemFrame
A frame signaling that a function call is in progress.
- Parameters:
function_name (str)
tool_call_id (str)
arguments (Any)
cancel_on_interruption (bool)
- function_name: str
- tool_call_id: str
- arguments: Any
- cancel_on_interruption: bool = False
- class pipecat.frames.frames.FunctionCallCancelFrame(function_name, tool_call_id)[source]
Bases:
SystemFrame
A frame to signal that a function call has been cancelled.
- Parameters:
function_name (str)
tool_call_id (str)
- function_name: str
- tool_call_id: str
- class pipecat.frames.frames.FunctionCallResultProperties(run_llm=None, on_context_updated=None)[source]
Bases:
object
Properties for a function call result frame.
- Parameters:
run_llm (bool | None)
on_context_updated (Callable[[], Awaitable[None]] | None)
- run_llm: bool | None = None
- on_context_updated: Callable[[], Awaitable[None]] | None = None
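A hedged sketch of returning these properties from an LLM function call handler; the params object and the result_callback properties keyword follow common Pipecat examples and may differ across versions:

    from pipecat.frames.frames import FunctionCallResultProperties

    async def save_note(params):  # params: Pipecat function-call params (assumed)
        properties = FunctionCallResultProperties(
            run_llm=False,  # don't kick off another completion for this result
        )
        await params.result_callback({"status": "saved"}, properties=properties)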
- class pipecat.frames.frames.FunctionCallResultFrame(function_name, tool_call_id, arguments, result, run_llm=None, properties=None)[source]
Bases:
SystemFrame
A frame containing the result of an LLM function (tool) call.
- Parameters:
function_name (str)
tool_call_id (str)
arguments (Any)
result (Any)
run_llm (bool | None)
properties (FunctionCallResultProperties | None)
- function_name: str
- tool_call_id: str
- arguments: Any
- result: Any
- run_llm: bool | None = None
- properties: FunctionCallResultProperties | None = None
- class pipecat.frames.frames.STTMuteFrame(mute)[source]
Bases:
SystemFrame
System frame to mute/unmute the STT service.
- Parameters:
mute (bool)
- mute: bool
- class pipecat.frames.frames.TransportMessageUrgentFrame(message: Any)[source]
Bases:
SystemFrame
- Parameters:
message (Any)
- message: Any
- class pipecat.frames.frames.UserImageRequestFrame(user_id, context=None, function_name=None, tool_call_id=None, video_source=None)[source]
Bases:
SystemFrame
A frame to request an image from the given user. The frame might be generated by a function call, in which case the corresponding fields will be properly set.
- Parameters:
user_id (str)
context (Any | None)
function_name (str | None)
tool_call_id (str | None)
video_source (str | None)
- user_id: str
- context: Any | None = None
- function_name: str | None = None
- tool_call_id: str | None = None
- video_source: str | None = None
- class pipecat.frames.frames.InputAudioRawFrame(audio, sample_rate, num_channels)[source]
Bases:
SystemFrame, AudioRawFrame
A chunk of audio usually coming from an input transport. If the transport supports multiple audio sources (e.g. multiple audio tracks) the source name will be specified.
- Parameters:
audio (bytes)
sample_rate (int)
num_channels (int)
- class pipecat.frames.frames.InputImageRawFrame(image, size, format)[source]
Bases:
SystemFrame, ImageRawFrame
An image usually coming from an input transport. If the transport supports multiple video sources (e.g. multiple video tracks) the source name will be specified.
- Parameters:
image (bytes)
size (Tuple[int, int])
format (str | None)
- class pipecat.frames.frames.UserAudioRawFrame(audio, sample_rate, num_channels, user_id='')[source]
Bases:
InputAudioRawFrame
A chunk of audio, usually coming from an input transport, associated with a user.
- Parameters:
audio (bytes)
sample_rate (int)
num_channels (int)
user_id (str)
- user_id: str = ''
- class pipecat.frames.frames.UserImageRawFrame(image, size, format, user_id='', request=None)[source]
Bases:
InputImageRawFrame
An image associated with a user.
- Parameters:
image (bytes)
size (Tuple[int, int])
format (str | None)
user_id (str)
request (UserImageRequestFrame | None)
- user_id: str = ''
- request: UserImageRequestFrame | None = None
- class pipecat.frames.frames.VisionImageRawFrame(image, size, format, text=None)[source]
Bases:
InputImageRawFrame
An image with associated text used to request a description of the image.
- Parameters:
image (bytes)
size (Tuple[int, int])
format (str | None)
text (str | None)
- text: str | None = None
- class pipecat.frames.frames.InputDTMFFrame(button)[source]
Bases:
DTMFFrame, SystemFrame
A DTMF keypress input.
- Parameters:
button (KeypadEntry)
- class pipecat.frames.frames.OutputDTMFUrgentFrame(button)[source]
Bases:
DTMFFrame, SystemFrame
A DTMF keypress output that will be sent right away. If your transport supports multiple dial-out destinations, use the transport_destination field to specify where the DTMF keypress should be sent.
- Parameters:
button (KeypadEntry)
- class pipecat.frames.frames.EndFrame[source]
Bases:
ControlFrame
Indicates that a pipeline has ended and frame processors and pipelines should be shut down. If the transport receives this frame, it will stop sending frames to its output channel(s) and close all its threads. Note that this is a control frame, which means it will be received in the order it was sent (unlike system frames).
- class pipecat.frames.frames.StopFrame[source]
Bases:
ControlFrame
Indicates that a pipeline should be stopped but that the pipeline processors should be kept in a running state. This is normally queued from the pipeline task.
- class pipecat.frames.frames.HeartbeatFrame(timestamp)[source]
Bases:
ControlFrame
This frame is used by the pipeline task as a mechanism to know if the pipeline is running properly.
- Parameters:
timestamp (int)
- timestamp: int
- class pipecat.frames.frames.FrameProcessorPauseFrame(processor)[source]
Bases:
ControlFrame
This frame is used to pause frame processing for the given processor. Pausing frame processing will keep frames in the internal queue which will then be processed when frame processing is resumed with FrameProcessorResumeFrame.
- Parameters:
processor (FrameProcessor)
- processor: FrameProcessor
- class pipecat.frames.frames.FrameProcessorResumeFrame(processor)[source]
Bases:
ControlFrame
This frame is used to resume frame processing for the given processor if it was previously paused. After resuming frame processing all queued frames will be processed in the order received.
- Parameters:
processor (FrameProcessor)
- processor: FrameProcessor
- class pipecat.frames.frames.LLMFullResponseStartFrame[source]
Bases:
ControlFrame
Used to indicate the beginning of an LLM response. Followed by one or more TextFrames and a final LLMFullResponseEndFrame.
- class pipecat.frames.frames.LLMFullResponseEndFrame[source]
Bases:
ControlFrame
Indicates the end of an LLM response.
- class pipecat.frames.frames.TTSStartedFrame[source]
Bases:
ControlFrame
Used to indicate the beginning of a TTS response. The TTSAudioRawFrames that follow are part of the TTS response until a TTSStoppedFrame. These frames can be used for aggregating audio frames in a transport to optimize the size of frames sent to the session, without needing to control this in the TTS service.
- class pipecat.frames.frames.TTSStoppedFrame[source]
Bases:
ControlFrame
Indicates the end of a TTS response.
- class pipecat.frames.frames.ServiceUpdateSettingsFrame(settings)[source]
Bases:
ControlFrame
A control frame containing a request to update service settings.
- Parameters:
settings (Mapping[str, Any])
- settings: Mapping[str, Any]
- class pipecat.frames.frames.LLMUpdateSettingsFrame(settings: Mapping[str, Any])[source]
Bases:
ServiceUpdateSettingsFrame
- Parameters:
settings (Mapping[str, Any])
- class pipecat.frames.frames.TTSUpdateSettingsFrame(settings: Mapping[str, Any])[source]
Bases:
ServiceUpdateSettingsFrame
- Parameters:
settings (Mapping[str, Any])
- class pipecat.frames.frames.STTUpdateSettingsFrame(settings: Mapping[str, Any])[source]
Bases:
ServiceUpdateSettingsFrame
- Parameters:
settings (Mapping[str, Any])
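For example, to change TTS settings at runtime (assuming task is a running PipelineTask; the available keys depend on the TTS service, and "voice_id" here is illustrative):

    from pipecat.frames.frames import TTSUpdateSettingsFrame

    async def switch_voice(task):  # task: a running PipelineTask (assumed)
        await task.queue_frame(
            TTSUpdateSettingsFrame(settings={"voice_id": "my-voice"})
        )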
- class pipecat.frames.frames.VADParamsUpdateFrame(params)[source]
Bases:
ControlFrame
A control frame containing a request to update VAD params. Intended to be pushed upstream from the RTVI processor.
- Parameters:
params (VADParams)
- params: VADParams
- class pipecat.frames.frames.FilterControlFrame[source]
Bases:
ControlFrame
Base control frame for other audio filter frames.
- class pipecat.frames.frames.FilterUpdateSettingsFrame(settings)[source]
Bases:
FilterControlFrame
Control frame to update filter settings.
- Parameters:
settings (Mapping[str, Any])
- settings: Mapping[str, Any]
- class pipecat.frames.frames.FilterEnableFrame(enable)[source]
Bases:
FilterControlFrame
Control frame to enable or disable the filter at runtime.
- Parameters:
enable (bool)
- enable: bool
- class pipecat.frames.frames.MixerControlFrame[source]
Bases:
ControlFrame
Base control frame for other audio mixer frames.
- class pipecat.frames.frames.MixerUpdateSettingsFrame(settings)[source]
Bases:
MixerControlFrame
Control frame to update mixer settings.
- Parameters:
settings (Mapping[str, Any])
- settings: Mapping[str, Any]
- class pipecat.frames.frames.MixerEnableFrame(enable)[source]
Bases:
MixerControlFrame
Control frame to enable or disable the mixer at runtime.
- Parameters:
enable (bool)
- enable: bool