BaseTransport
- class pipecat.transports.base_transport.TransportParams(*, camera_in_enabled=False, camera_out_enabled=False, camera_out_is_live=False, camera_out_width=1024, camera_out_height=768, camera_out_bitrate=800000, camera_out_framerate=30, camera_out_color_format='RGB', audio_out_enabled=False, audio_out_sample_rate=None, audio_out_channels=1, audio_out_bitrate=96000, audio_out_10ms_chunks=4, audio_out_mixer=None, audio_out_destinations=<factory>, audio_in_enabled=False, audio_in_sample_rate=None, audio_in_channels=1, audio_in_filter=None, audio_in_stream_on_start=True, audio_in_passthrough=True, video_in_enabled=False, video_out_enabled=False, video_out_is_live=False, video_out_width=1024, video_out_height=768, video_out_bitrate=800000, video_out_framerate=30, video_out_color_format='RGB', video_out_destinations=<factory>, vad_enabled=False, vad_audio_passthrough=False, vad_analyzer=None, turn_analyzer=None)[source]
Bases:
BaseModel

Configuration parameters for a transport's audio and video input and output.
- Parameters:
camera_in_enabled (bool)
camera_out_enabled (bool)
camera_out_is_live (bool)
camera_out_width (int)
camera_out_height (int)
camera_out_bitrate (int)
camera_out_framerate (int)
camera_out_color_format (str)
audio_out_enabled (bool)
audio_out_sample_rate (int | None)
audio_out_channels (int)
audio_out_bitrate (int)
audio_out_10ms_chunks (int)
audio_out_mixer (BaseAudioMixer | Mapping[str | None, BaseAudioMixer] | None)
audio_out_destinations (List[str])
audio_in_enabled (bool)
audio_in_sample_rate (int | None)
audio_in_channels (int)
audio_in_filter (BaseAudioFilter | None)
audio_in_stream_on_start (bool)
audio_in_passthrough (bool)
video_in_enabled (bool)
video_out_enabled (bool)
video_out_is_live (bool)
video_out_width (int)
video_out_height (int)
video_out_bitrate (int)
video_out_framerate (int)
video_out_color_format (str)
video_out_destinations (List[str])
vad_enabled (bool)
vad_audio_passthrough (bool)
vad_analyzer (VADAnalyzer | None)
turn_analyzer (BaseTurnAnalyzer | None)
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}
Configuration for the model; should be a dictionary conforming to Pydantic's ConfigDict.
- camera_in_enabled: bool
- camera_out_enabled: bool
- camera_out_is_live: bool
- camera_out_width: int
- camera_out_height: int
- camera_out_bitrate: int
- camera_out_framerate: int
- camera_out_color_format: str
- audio_out_enabled: bool
- audio_out_sample_rate: int | None
- audio_out_channels: int
- audio_out_bitrate: int
- audio_out_10ms_chunks: int
- audio_out_mixer: BaseAudioMixer | Mapping[str | None, BaseAudioMixer] | None
- audio_out_destinations: List[str]
- audio_in_enabled: bool
- audio_in_sample_rate: int | None
- audio_in_channels: int
- audio_in_filter: BaseAudioFilter | None
- audio_in_stream_on_start: bool
- audio_in_passthrough: bool
- video_in_enabled: bool
- video_out_enabled: bool
- video_out_is_live: bool
- video_out_width: int
- video_out_height: int
- video_out_bitrate: int
- video_out_framerate: int
- video_out_color_format: str
- video_out_destinations: List[str]
- vad_enabled: bool
- vad_audio_passthrough: bool
- vad_analyzer: VADAnalyzer | None
- turn_analyzer: BaseTurnAnalyzer | None
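A minimal sketch of constructing TransportParams for a voice-only transport; the field names and defaults are taken from the class signature above, and the chosen sample rate is only illustrative.

```python
from pipecat.transports.base_transport import TransportParams

# Voice-only configuration: enable microphone input and speaker output
# and override the output sample rate. All other fields keep the
# defaults shown in the class signature above.
params = TransportParams(
    audio_in_enabled=True,
    audio_out_enabled=True,
    audio_out_sample_rate=24000,
)
```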
- class pipecat.transports.base_transport.BaseTransport(*, name=None, input_name=None, output_name=None)[source]
Bases:
BaseObject

Abstract base class for transports; concrete subclasses expose their input and output frame processors via input() and output().
- Parameters:
name (str | None)
input_name (str | None)
output_name (str | None)
- abstractmethod input()[source]
Return the transport's input frame processor.
- Return type:
FrameProcessor
- abstractmethod output()[source]
Return the transport's output frame processor.
- Return type:
FrameProcessor
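For orientation, a minimal sketch of a concrete transport, assuming bare FrameProcessor instances can stand in for real I/O processors; the class name LoopbackTransport and the processor names are purely illustrative, and a real transport would return processors wired to an actual media backend.

```python
from pipecat.processors.frame_processor import FrameProcessor
from pipecat.transports.base_transport import BaseTransport, TransportParams


class LoopbackTransport(BaseTransport):
    """Illustrative transport whose input and output are plain FrameProcessors."""

    def __init__(self, params: TransportParams, **kwargs):
        super().__init__(**kwargs)
        self._params = params
        # Hypothetical stand-ins for processors tied to a real I/O backend.
        self._input = FrameProcessor(name="loopback_input")
        self._output = FrameProcessor(name="loopback_output")

    def input(self) -> FrameProcessor:
        return self._input

    def output(self) -> FrameProcessor:
        return self._output
```

In typical usage, input() supplies the first processor of a pipeline and output() the last, so frames arriving from the client flow through the intermediate processors and back out through the transport.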