LLM

class pipecat.services.anthropic.llm.AnthropicContextAggregatorPair(_user: 'AnthropicUserContextAggregator', _assistant: 'AnthropicAssistantContextAggregator')[source]

Bases: object

Parameters:
  • _user (AnthropicUserContextAggregator)

  • _assistant (AnthropicAssistantContextAggregator)

user()[source]
Return type:

AnthropicUserContextAggregator

assistant()[source]
Return type:

AnthropicAssistantContextAggregator

class pipecat.services.anthropic.llm.AnthropicLLMService(*, api_key, model='claude-sonnet-4-20250514', params=None, client=None, **kwargs)[source]

Bases: LLMService

This class implements inference with Anthropic’s AI models.

A custom client can be provided via the client kwarg, which allows using AsyncAnthropicBedrock and AsyncAnthropicVertex clients.

Parameters:
  • api_key (str)

  • model (str)

  • params (InputParams | None)

adapter_class

alias of AnthropicLLMAdapter
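
A minimal construction sketch (assuming the ANTHROPIC_API_KEY environment variable is set; the model shown is the documented default):

import os

from pipecat.services.anthropic.llm import AnthropicLLMService

# api_key is keyword-only; model defaults to "claude-sonnet-4-20250514".
llm = AnthropicLLMService(
    api_key=os.environ["ANTHROPIC_API_KEY"],
    model="claude-sonnet-4-20250514",
)

# To route through Bedrock or Vertex instead, construct an
# AsyncAnthropicBedrock or AsyncAnthropicVertex client from the
# anthropic package and pass it via the client kwarg.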

class InputParams(*, enable_prompt_caching_beta=False, max_tokens=<factory>, temperature=<factory>, top_k=<factory>, top_p=<factory>, extra=<factory>)[source]

Bases: BaseModel

Parameters:
  • enable_prompt_caching_beta (bool | None)

  • max_tokens (int | None)

  • temperature (float | None)

  • top_k (int | None)

  • top_p (float | None)

  • extra (Dict[str, Any] | None)

enable_prompt_caching_beta: bool | None
max_tokens: int | None
temperature: float | None
top_k: int | None
top_p: float | None
extra: Dict[str, Any] | None
model_config: ClassVar[ConfigDict] = {}

Configuration for the model; it should be a dictionary conforming to Pydantic's ConfigDict.
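
A sketch of tuning generation parameters via InputParams (every field is optional; the values below are illustrative, not recommendations):

from pipecat.services.anthropic.llm import AnthropicLLMService

params = AnthropicLLMService.InputParams(
    enable_prompt_caching_beta=True,
    max_tokens=1024,
    temperature=0.7,
    top_k=40,
    top_p=0.9,
)
llm = AnthropicLLMService(api_key="sk-ant-...", params=params)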

can_generate_metrics()[source]
Return type:

bool

property enable_prompt_caching_beta: bool
create_context_aggregator(context, *, user_params=LLMUserAggregatorParams(aggregation_timeout=0.5), assistant_params=LLMAssistantAggregatorParams(expect_stripped_words=True))[source]

Create an instance of AnthropicContextAggregatorPair from an OpenAILLMContext. Constructor keyword arguments for both the user and assistant aggregators can be provided.

Parameters:
  • context (OpenAILLMContext) – The LLM context.

  • user_params (LLMUserAggregatorParams, optional) – User aggregator parameters.

  • assistant_params (LLMAssistantAggregatorParams, optional) – Assistant aggregator parameters.

Returns:

A pair of context aggregators, one for the user and one for the assistant, encapsulated in an AnthropicContextAggregatorPair.

Return type:

AnthropicContextAggregatorPair
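
A usage sketch (assuming OpenAILLMContext is imported from pipecat.processors.aggregators.openai_llm_context, its usual location, and llm is an AnthropicLLMService instance):

from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext

context = OpenAILLMContext(
    messages=[{"role": "system", "content": "You are a helpful assistant."}]
)
aggregators = llm.create_context_aggregator(context)

# The pair is typically placed around the LLM in a pipeline, e.g.
# [transport.input(), aggregators.user(), llm, tts, transport.output(),
#  aggregators.assistant()]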

async process_frame(frame, direction)[source]

Process a frame.

Parameters:
  • frame (Frame) – The frame to process.

  • direction (FrameDirection) – The direction of frame processing.

class pipecat.services.anthropic.llm.AnthropicLLMContext(messages=None, tools=None, tool_choice=None, *, system=anthropic.NOT_GIVEN)[source]

Bases: OpenAILLMContext

Parameters:
  • messages (List[dict] | None)

  • tools (List[dict] | None)

  • tool_choice (dict | None)

  • system (str | anthropic.NotGiven)
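
A construction sketch (system is keyword-only and defaults to anthropic.NOT_GIVEN):

from pipecat.services.anthropic.llm import AnthropicLLMContext

context = AnthropicLLMContext(
    messages=[{"role": "user", "content": "Hello!"}],
    system="You are a concise assistant.",
)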

static upgrade_to_anthropic(obj)[source]
Parameters:

obj (OpenAILLMContext)

Return type:

AnthropicLLMContext

classmethod from_openai_context(openai_context)[source]
Parameters:

openai_context (OpenAILLMContext)

classmethod from_messages(messages)[source]
Parameters:

messages (List[dict])

Return type:

AnthropicLLMContext
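
A sketch of the two common construction paths (openai_context stands in for an existing OpenAILLMContext):

# From a plain message list:
context = AnthropicLLMContext.from_messages(
    [{"role": "user", "content": "What is the weather like?"}]
)

# Or by converting an existing OpenAI-format context:
context = AnthropicLLMContext.upgrade_to_anthropic(openai_context)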

classmethod from_image_frame(frame)[source]
Parameters:

frame (VisionImageRawFrame)

Return type:

AnthropicLLMContext

set_messages(messages)[source]
Parameters:

messages (List)

to_standard_messages(obj)[source]

Convert Anthropic message format to standard structured format.

Handles text content and function calls for both user and assistant messages.

Parameters:

obj – Message in Anthropic format:

{
    "role": "user/assistant",
    "content": str | [{"type": "text/tool_use/tool_result", …}]
}

Returns:

[
    {
        "role": "user/assistant/tool",
        "content": [{"type": "text", "text": str}]
    }
]

Return type:

List of messages in standard format
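
An illustrative call (the output shown follows the shape documented above; exact values are an assumption):

anthropic_msg = {
    "role": "assistant",
    "content": [{"type": "text", "text": "Hi there!"}],
}
standard = context.to_standard_messages(anthropic_msg)
# Expected shape:
# [{"role": "assistant", "content": [{"type": "text", "text": "Hi there!"}]}]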

from_standard_message(message)[source]

Convert standard format message to Anthropic format.

Handles conversion of text content, tool calls, and tool results. Empty text content is converted to “(empty)”.

Parameters:

message – Message in standard format:

{
    "role": "user/assistant/tool",
    "content": str | [{"type": "text", …}],
    "tool_calls": [{"id": str, "function": {"name": str, "arguments": str}}]
}

Returns:

{
    "role": "user/assistant",
    "content": str | [
        {"type": "text", "text": str} |
        {"type": "tool_use", "id": str, "name": str, "input": dict} |
        {"type": "tool_result", "tool_use_id": str, "content": str}
    ]
}

Return type:

Message in Anthropic format
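
An illustrative call (the function name and arguments are hypothetical; the expected output follows the docstring above):

standard_msg = {
    "role": "assistant",
    "content": "Checking the weather now.",
    "tool_calls": [
        {
            "id": "call_1",
            "function": {"name": "get_weather", "arguments": '{"city": "Oslo"}'},
        }
    ],
}
anthropic_msg = context.from_standard_message(standard_msg)
# Per the documented format, the tool call should come back as a
# "tool_use" block with parsed input:
# {"type": "tool_use", "id": "call_1", "name": "get_weather",
#  "input": {"city": "Oslo"}}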

add_image_frame_message(*, format, size, image, text=None)[source]
Parameters:
  • format (str)

  • size (tuple[int, int])

  • image (bytes)

  • text (str | None)
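
A sketch (assuming raw RGB pixel bytes, as used by pipecat's image frames; the PIL-style raw mode for format is an assumption):

raw_rgb_bytes = b"..."  # hypothetical raw pixel data for a 640x480 frame

context.add_image_frame_message(
    format="RGB",
    size=(640, 480),
    image=raw_rgb_bytes,
    text="What do you see in this image?",
)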

add_message(message)[source]
get_messages_with_cache_control_markers()[source]
Return type:

List[dict]

get_messages_for_persistent_storage()[source]
get_messages_for_logging()[source]
Return type:

str

class pipecat.services.anthropic.llm.AnthropicUserContextAggregator(context, *, params=None, **kwargs)[source]

Bases: LLMUserContextAggregator

Parameters:
  • context (OpenAILLMContext)

  • params (LLMUserAggregatorParams | None)

class pipecat.services.anthropic.llm.AnthropicAssistantContextAggregator(context, *, params=None, **kwargs)[source]

Bases: LLMAssistantContextAggregator

Parameters:
  • context (OpenAILLMContext)

  • params (LLMAssistantAggregatorParams | None)

async handle_function_call_in_progress(frame)[source]
Parameters:

frame (FunctionCallInProgressFrame)

async handle_function_call_result(frame)[source]
Parameters:

frame (FunctionCallResultFrame)

async handle_function_call_cancel(frame)[source]
Parameters:

frame (FunctionCallCancelFrame)

async handle_user_image_frame(frame)[source]
Parameters:

frame (UserImageRawFrame)