LLM
- class pipecat.services.aws.llm.AWSBedrockContextAggregatorPair(_user: 'AWSBedrockUserContextAggregator', _assistant: 'AWSBedrockAssistantContextAggregator')[source]
Bases:
object
- Parameters:
_user (AWSBedrockUserContextAggregator)
_assistant (AWSBedrockAssistantContextAggregator)
- user()[source]
- Return type:
AWSBedrockUserContextAggregator
- assistant()[source]
- Return type:
AWSBedrockAssistantContextAggregator
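The pair type above is a small container that holds both aggregators and exposes them through `user()` and `assistant()` accessors. A minimal sketch of that pattern, with stand-in classes (names here are illustrative, not pipecat's):

```python
from dataclasses import dataclass


# Illustrative stand-ins for the real aggregator classes.
class UserAggregator:
    pass


class AssistantAggregator:
    pass


@dataclass
class ContextAggregatorPair:
    """Hold a user and an assistant aggregator; expose each via an accessor."""

    _user: UserAggregator
    _assistant: AssistantAggregator

    def user(self) -> UserAggregator:
        return self._user

    def assistant(self) -> AssistantAggregator:
        return self._assistant


pair = ContextAggregatorPair(UserAggregator(), AssistantAggregator())
```

Both aggregators are typically placed in the same pipeline: the user aggregator before the LLM service, the assistant aggregator after it.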
- class pipecat.services.aws.llm.AWSBedrockLLMContext(messages=None, tools=None, tool_choice=None, *, system=None)[source]
Bases:
OpenAILLMContext
- Parameters:
messages (List[dict] | None)
tools (List[dict] | None)
tool_choice (dict | None)
system (str | None)
- static upgrade_to_bedrock(obj)[source]
- Parameters:
obj (OpenAILLMContext)
- Return type:
AWSBedrockLLMContext
- classmethod from_openai_context(openai_context)[source]
- Parameters:
openai_context (OpenAILLMContext)
- classmethod from_messages(messages)[source]
- Parameters:
messages (List[dict])
- Return type:
AWSBedrockLLMContext
- classmethod from_image_frame(frame)[source]
- Parameters:
frame (VisionImageRawFrame)
- Return type:
AWSBedrockLLMContext
- set_messages(messages)[source]
- Parameters:
messages (List)
- to_standard_messages(obj)[source]
Convert AWS Bedrock message format to standard structured format.
Handles text content and function calls for both user and assistant messages.
- Parameters:
obj –
Message in AWS Bedrock format: {
    "role": "user" | "assistant",
    "content": [{"text": str} | {"toolUse": {…}} | {"toolResult": {…}}]
}
- Returns:
- [
    {
        "role": "user" | "assistant" | "tool",
        "content": [{"type": "text", "text": str}]
    }
]
- Return type:
List of messages in standard format
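A minimal sketch of the text-only part of this conversion, following the documented schemas (the function name is illustrative; the real method is `to_standard_messages`, and tool-use blocks are omitted for brevity):

```python
def bedrock_to_standard(msg):
    """Convert a Bedrock-format message's text blocks ({"text": ...})
    into standard structured parts ({"type": "text", "text": ...})."""
    content = []
    for block in msg.get("content", []):
        if "text" in block:
            content.append({"type": "text", "text": block["text"]})
    return [{"role": msg["role"], "content": content}]


out = bedrock_to_standard({"role": "user", "content": [{"text": "Hello"}]})
# out == [{"role": "user", "content": [{"type": "text", "text": "Hello"}]}]
```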
- from_standard_message(message)[source]
Convert standard format message to AWS Bedrock format.
Handles conversion of text content, tool calls, and tool results. Empty text content is converted to "(empty)".
- Parameters:
message –
Message in standard format: {
    "role": "user" | "assistant" | "tool",
    "content": str | [{"type": "text", …}],
    "tool_calls": [{"id": str, "function": {"name": str, "arguments": str}}]
}
- Returns:
- {
    "role": "user" | "assistant",
    "content": [
        {"text": str}
        | {"toolUse": {"toolUseId": str, "name": str, "input": dict}}
        | {"toolResult": {"toolUseId": str, "content": […], "status": str}}
    ]
}
- Return type:
Message in AWS Bedrock format
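The reverse mapping can be sketched as follows, covering the documented "(empty)" rule and the `tool_calls` → `toolUse` translation (the function name is illustrative; the real method is `from_standard_message`):

```python
import json


def standard_to_bedrock(message):
    """Convert a standard-format message into Bedrock format.
    Per the docs, empty text is replaced with "(empty)", and each
    tool_call becomes a toolUse block with its JSON arguments parsed."""
    content = []
    body = message.get("content")
    if isinstance(body, str):
        content.append({"text": body or "(empty)"})
    elif isinstance(body, list):
        for part in body:
            if part.get("type") == "text":
                content.append({"text": part["text"] or "(empty)"})
    for call in message.get("tool_calls", []):
        content.append({
            "toolUse": {
                "toolUseId": call["id"],
                "name": call["function"]["name"],
                # Standard format carries arguments as a JSON string;
                # Bedrock expects a dict.
                "input": json.loads(call["function"]["arguments"]),
            }
        })
    role = "assistant" if message["role"] == "assistant" else "user"
    return {"role": role, "content": content}


msg = standard_to_bedrock({
    "role": "assistant",
    "content": "",
    "tool_calls": [{
        "id": "t1",
        "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'},
    }],
})
```

Note how the empty assistant content becomes `{"text": "(empty)"}` rather than being dropped, which keeps the Bedrock content list non-degenerate.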
- add_image_frame_message(*, format, size, image, text=None)[source]
- Parameters:
format (str)
size (tuple[int, int])
image (bytes)
text (str | None)
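For orientation, the Bedrock Converse API represents an image as a content block of the form `{"image": {"format": …, "source": {"bytes": …}}}`. A hedged sketch of what a message built from these parameters could look like (this is an illustration of the content-block shape, not the library's implementation; `size` is accepted only for parity with the documented signature):

```python
def build_image_message(*, format, size, image, text=None):
    """Build a user message carrying an image in Bedrock Converse
    content-block form, with an optional trailing text block."""
    # size (width, height) is part of the documented signature but is
    # not needed to construct the content block in this sketch.
    content = [{"image": {"format": format, "source": {"bytes": image}}}]
    if text:
        content.append({"text": text})
    return {"role": "user", "content": content}


msg = build_image_message(
    format="jpeg",
    size=(640, 480),
    image=b"\xff\xd8",  # placeholder bytes, not a real JPEG
    text="Describe this image",
)
```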
- add_message(message)[source]
- get_messages_for_persistent_storage()[source]
- get_messages_for_logging()[source]
- Return type:
str
- class pipecat.services.aws.llm.AWSBedrockUserContextAggregator(context, *, params=None, **kwargs)[source]
Bases:
LLMUserContextAggregator
- Parameters:
context (OpenAILLMContext)
params (LLMUserAggregatorParams | None)
- class pipecat.services.aws.llm.AWSBedrockAssistantContextAggregator(context, *, params=None, **kwargs)[source]
Bases:
LLMAssistantContextAggregator
- Parameters:
context (OpenAILLMContext)
params (LLMAssistantAggregatorParams | None)
- async handle_function_call_in_progress(frame)[source]
- Parameters:
frame (FunctionCallInProgressFrame)
- async handle_function_call_result(frame)[source]
- Parameters:
frame (FunctionCallResultFrame)
- async handle_function_call_cancel(frame)[source]
- Parameters:
frame (FunctionCallCancelFrame)
- async handle_user_image_frame(frame)[source]
- Parameters:
frame (UserImageRawFrame)
- class pipecat.services.aws.llm.AWSBedrockLLMService(*, model, aws_access_key=None, aws_secret_key=None, aws_session_token=None, aws_region='us-east-1', params=None, client_config=None, **kwargs)[source]
Bases:
LLMService
This class implements inference with AWS Bedrock models, including Amazon Nova and Anthropic Claude.
Requires AWS credentials to be configured in the environment or through boto3 configuration.
- Parameters:
model (str)
aws_access_key (str | None)
aws_secret_key (str | None)
aws_session_token (str | None)
aws_region (str)
params (InputParams | None)
client_config (Config | None)
- adapter_class
alias of
AWSBedrockLLMAdapter
- class InputParams(*, max_tokens=<factory>, temperature=<factory>, top_p=<factory>, stop_sequences=<factory>, latency=<factory>, additional_model_request_fields=<factory>)[source]
Bases:
BaseModel
- Parameters:
max_tokens (int | None)
temperature (float | None)
top_p (float | None)
stop_sequences (List[str] | None)
latency (str | None)
additional_model_request_fields (Dict[str, Any] | None)
- max_tokens: int | None
- temperature: float | None
- top_p: float | None
- stop_sequences: List[str] | None
- latency: str | None
- additional_model_request_fields: Dict[str, Any] | None
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
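Fields like these typically map onto the Bedrock Converse API's `inferenceConfig` (camelCase field names per the AWS API; the library's exact mapping may differ). A sketch of that translation, dropping unset fields so the runtime client receives only explicit values:

```python
def to_inference_config(params):
    """Map snake_case sampling parameters to the Converse API's
    inferenceConfig field names, omitting any that are unset."""
    mapping = {
        "maxTokens": params.get("max_tokens"),
        "temperature": params.get("temperature"),
        "topP": params.get("top_p"),
        "stopSequences": params.get("stop_sequences"),
    }
    # Drop None values so defaults are left to the service.
    return {k: v for k, v in mapping.items() if v is not None}


cfg = to_inference_config({"max_tokens": 1024, "temperature": 0.7, "top_p": None})
# cfg == {"maxTokens": 1024, "temperature": 0.7}
```

`latency` and `additional_model_request_fields` are not sampling parameters; they correspond to the separate `performanceConfig` and `additionalModelRequestFields` request fields of the Converse API.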
- can_generate_metrics()[source]
- Return type:
bool
- create_context_aggregator(context, *, user_params=LLMUserAggregatorParams(aggregation_timeout=0.5), assistant_params=LLMAssistantAggregatorParams(expect_stripped_words=True))[source]
Create an instance of AWSBedrockContextAggregatorPair from an OpenAILLMContext. Constructor keyword arguments for both the user and assistant aggregators can be provided.
- Parameters:
context (OpenAILLMContext) – The LLM context.
user_params (LLMUserAggregatorParams, optional) – User aggregator parameters.
assistant_params (LLMAssistantAggregatorParams, optional) – Assistant aggregator parameters.
- Returns:
A pair of context aggregators, one for the user and one for the assistant, encapsulated in an AWSBedrockContextAggregatorPair.
- Return type:
AWSBedrockContextAggregatorPair
- async process_frame(frame, direction)[source]
Process a frame.
- Parameters:
frame (Frame) – The frame to process.
direction (FrameDirection) – The direction of frame processing.