LlmService
Base classes for Large Language Model services with function calling support.
- class pipecat.services.llm_service.FunctionCallResultCallback(*args, **kwargs)[source]
Bases:
Protocol
Protocol for function call result callbacks.
Handles the result of an LLM function call execution.
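Since this is a typing Protocol, any awaitable callable with a matching shape satisfies it. A minimal self-contained sketch of the pattern, using `typing.Protocol` with an assumed single-result signature (the real callback signature is not shown in this reference):

```python
import asyncio
from typing import Any, Protocol


class FunctionCallResultCallback(Protocol):
    """Protocol sketch: an awaitable callable that receives the function result."""

    async def __call__(self, result: Any) -> None: ...


# A hypothetical concrete implementation that satisfies the protocol.
class CollectingCallback:
    def __init__(self) -> None:
        self.results: list[Any] = []

    async def __call__(self, result: Any) -> None:
        self.results.append(result)


async def main() -> None:
    cb: FunctionCallResultCallback = CollectingCallback()
    await cb({"temperature": 72})


asyncio.run(main())
```

Because the protocol only constrains the call shape, handlers can pass results to any compatible callable without subclassing.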
- class pipecat.services.llm_service.FunctionCallParams(function_name, tool_call_id, arguments, llm, context, result_callback)[source]
Bases:
object
Parameters for a function call.
- Parameters:
function_name (str) – The name of the function being called.
tool_call_id (str) – A unique identifier for the function call.
arguments (Mapping[str, Any]) – The arguments for the function.
llm (LLMService) – The LLMService instance being used.
context (OpenAILLMContext) – The LLM context.
result_callback (FunctionCallResultCallback) – Callback to handle the result of the function call.
- function_name: str
- tool_call_id: str
- arguments: Mapping[str, Any]
- llm: LLMService
- context: OpenAILLMContext
- result_callback: FunctionCallResultCallback
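A function call handler receives one of these parameter objects and reports its outcome through `result_callback`. A hedged sketch of that flow, using a stand-in dataclass with the fields documented above (the real class also carries `llm` and `context` references, omitted here for a runnable example):

```python
import asyncio
from dataclasses import dataclass
from typing import Any, Awaitable, Callable, Mapping


@dataclass
class FunctionCallParams:
    """Stand-in mirroring the documented fields (llm/context omitted)."""

    function_name: str
    tool_call_id: str
    arguments: Mapping[str, Any]
    result_callback: Callable[[Any], Awaitable[None]]


async def fetch_weather(params: FunctionCallParams) -> None:
    # A handler reads its arguments, does its work, then reports the result
    # back through the callback instead of returning a value.
    city = params.arguments.get("city", "unknown")
    await params.result_callback({"city": city, "conditions": "sunny"})


async def main() -> None:
    results: list[Any] = []

    async def on_result(result: Any) -> None:
        results.append(result)

    params = FunctionCallParams(
        function_name="fetch_weather",
        tool_call_id="call-1",
        arguments={"city": "Austin"},
        result_callback=on_result,
    )
    await fetch_weather(params)


asyncio.run(main())
```

The callback-based result path lets the service decide what to do with the result (e.g., append it to the context) independently of the handler.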
- class pipecat.services.llm_service.FunctionCallRegistryItem(function_name, handler, cancel_on_interruption)[source]
Bases:
object
Represents an entry in the function call registry.
This is what the user registers when calling register_function.
- Parameters:
function_name (str | None) – The name of the function (None for catch-all handler).
handler (Callable[[FunctionCallParams], Awaitable[None]]) – The handler for processing function call parameters.
cancel_on_interruption (bool) – Whether to cancel the call on interruption.
- function_name: str | None
- handler: Callable[[FunctionCallParams], Awaitable[None]]
- cancel_on_interruption: bool
- class pipecat.services.llm_service.FunctionCallRunnerItem(registry_item, function_name, tool_call_id, arguments, context, run_llm=None)[source]
Bases:
object
Internal function call entry for the function call runner.
The runner executes function calls in order.
- Parameters:
registry_item (FunctionCallRegistryItem) – The registry item containing handler information.
function_name (str) – The name of the function.
tool_call_id (str) – A unique identifier for the function call.
arguments (Mapping[str, Any]) – The arguments for the function.
context (OpenAILLMContext) – The LLM context.
run_llm (bool | None) – Optional flag to control LLM execution after function call.
- registry_item: FunctionCallRegistryItem
- function_name: str
- tool_call_id: str
- arguments: Mapping[str, Any]
- context: OpenAILLMContext
- run_llm: bool | None = None
- class pipecat.services.llm_service.LLMService(run_in_parallel=True, **kwargs)[source]
Bases:
AIService
Base class for all LLM services.
Handles function calling registration and execution with support for both parallel and sequential execution modes. Provides event handlers for completion timeouts and function call lifecycle events.
- Parameters:
run_in_parallel (bool) – Whether to run function calls in parallel or sequentially. Defaults to True.
**kwargs – Additional arguments passed to the parent AIService.
- Event handlers:
on_completion_timeout: Called when an LLM completion timeout occurs.
on_function_calls_started: Called when function calls are received and execution is about to start.
Example
```python
@task.event_handler("on_completion_timeout")
async def on_completion_timeout(service):
    logger.warning("LLM completion timed out")

@task.event_handler("on_function_calls_started")
async def on_function_calls_started(service, function_calls):
    logger.info(f"Starting {len(function_calls)} function calls")
```
- adapter_class
alias of
OpenAILLMAdapter
- get_llm_adapter()[source]
Get the LLM adapter instance.
- Returns:
The adapter instance used for LLM communication.
- Return type:
BaseLLMAdapter
- create_context_aggregator(context, *, user_params=LLMUserAggregatorParams(aggregation_timeout=0.5), assistant_params=LLMAssistantAggregatorParams(expect_stripped_words=True))[source]
Create a context aggregator for managing LLM conversation context.
Must be implemented by subclasses.
- Parameters:
context (OpenAILLMContext) – The LLM context to create an aggregator for.
user_params (LLMUserAggregatorParams) – Parameters for user message aggregation.
assistant_params (LLMAssistantAggregatorParams) – Parameters for assistant message aggregation.
- Returns:
A context aggregator instance.
- Return type:
Any
- async start(frame)[source]
Start the LLM service.
- Parameters:
frame (StartFrame) – The start frame.
- async stop(frame)[source]
Stop the LLM service.
- Parameters:
frame (EndFrame) – The end frame.
- async cancel(frame)[source]
Cancel the LLM service.
- Parameters:
frame (CancelFrame) – The cancel frame.
- async process_frame(frame, direction)[source]
Process a frame.
- Parameters:
frame (Frame) – The frame to process.
direction (FrameDirection) – The direction of frame processing.
- register_function(function_name, handler, start_callback=None, *, cancel_on_interruption=True)[source]
Register a function handler for LLM function calls.
- Parameters:
function_name (str | None) – The name of the function to handle. Use None to handle all function calls with a catch-all handler.
handler (Any) – The function handler. Should accept a single FunctionCallParams parameter.
start_callback – Legacy callback function (deprecated). Put initialization code at the top of your handler instead.
cancel_on_interruption (bool) – Whether to cancel this function call when an interruption occurs. Defaults to True.
- unregister_function(function_name)[source]
Remove a registered function handler.
- Parameters:
function_name (str | None) – The name of the function handler to remove.
- has_function(function_name)[source]
Check if a function handler is registered.
- Parameters:
function_name (str) – The name of the function to check.
- Returns:
True if the function is registered or if a catch-all handler (None) is registered.
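The register/unregister/has_function trio and the catch-all `None` key can be sketched with a plain dict. This is an illustrative stand-in for the registry semantics described above, not the actual pipecat implementation:

```python
from dataclasses import dataclass
from typing import Awaitable, Callable, Optional


@dataclass
class RegistryItem:
    """Stand-in mirroring FunctionCallRegistryItem."""

    function_name: Optional[str]
    handler: Callable[..., Awaitable[None]]
    cancel_on_interruption: bool


class MiniRegistry:
    """Sketch of the registration semantics: None registers a catch-all handler."""

    def __init__(self) -> None:
        self._functions: dict[Optional[str], RegistryItem] = {}

    def register_function(self, function_name, handler, *, cancel_on_interruption=True):
        self._functions[function_name] = RegistryItem(
            function_name, handler, cancel_on_interruption
        )

    def unregister_function(self, function_name) -> None:
        self._functions.pop(function_name, None)

    def has_function(self, function_name: str) -> bool:
        # True if the exact name is registered, or a catch-all (None) exists.
        return None in self._functions or function_name in self._functions
```

The catch-all entry means `has_function` can return True for names that were never registered individually, which matters when deciding whether to dispatch an unknown tool call.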
- async run_function_calls(function_calls)[source]
Execute a sequence of function calls from the LLM.
Triggers the on_function_calls_started event and executes functions either in parallel or sequentially based on the run_in_parallel setting.
- Parameters:
function_calls (Sequence[FunctionCallFromLLM]) – The function calls to execute.
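The `run_in_parallel` setting maps naturally onto `asyncio.gather` versus awaiting each call in turn. A simplified sketch of that dispatch choice (an assumption about the shape of the logic, not the actual implementation):

```python
import asyncio
from typing import Awaitable, Callable, Sequence

Handler = Callable[[], Awaitable[str]]


async def run_function_calls(
    handlers: Sequence[Handler], *, run_in_parallel: bool = True
) -> list[str]:
    """Run handlers concurrently or one at a time; results keep input order."""
    if run_in_parallel:
        # gather preserves the order of its arguments in the result list.
        return list(await asyncio.gather(*(h() for h in handlers)))
    results: list[str] = []
    for h in handlers:
        results.append(await h())
    return results


async def slow(name: str, delay: float) -> str:
    await asyncio.sleep(delay)
    return name


async def main() -> None:
    handlers = [lambda: slow("a", 0.02), lambda: slow("b", 0.01)]
    parallel = await run_function_calls(handlers, run_in_parallel=True)
    sequential = await run_function_calls(handlers, run_in_parallel=False)
    assert parallel == sequential == ["a", "b"]


asyncio.run(main())
```

Sequential mode trades latency for ordering guarantees: each call sees the side effects of the previous one, which is why the runner described above executes calls in order.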
- async request_image_frame(user_id, *, function_name=None, tool_call_id=None, text_content=None, video_source=None)[source]
Request an image from a user.
Pushes a UserImageRequestFrame upstream to request an image from the specified user.
- Parameters:
user_id (str) – The ID of the user to request an image from.
function_name (str | None) – Optional function name associated with the request.
tool_call_id (str | None) – Optional tool call ID associated with the request.
text_content (str | None) – Optional text content/context for the image request.
video_source (str | None) – Optional video source identifier.