LLM
- class pipecat.services.grok.llm.GrokContextAggregatorPair(_user: pipecat.services.openai.llm.OpenAIUserContextAggregator, _assistant: pipecat.services.openai.llm.OpenAIAssistantContextAggregator)[source]
Bases:
object
- Parameters:
_user (OpenAIUserContextAggregator)
_assistant (OpenAIAssistantContextAggregator)
- user()[source]
Return the user context aggregator from the pair.
- Return type:
OpenAIUserContextAggregator
- assistant()[source]
Return the assistant context aggregator from the pair.
- Return type:
OpenAIAssistantContextAggregator
- class pipecat.services.grok.llm.GrokLLMService(*, api_key, base_url='https://api.x.ai/v1', model='grok-3-beta', **kwargs)[source]
Bases:
OpenAILLMService
A service for interacting with Grok’s API using the OpenAI-compatible interface.
This service extends OpenAILLMService to connect to Grok’s API endpoint while maintaining full compatibility with OpenAI’s interface and functionality.
- Parameters:
api_key (str) – The API key for accessing Grok’s API.
base_url (str, optional) – The base URL for Grok’s API. Defaults to “https://api.x.ai/v1”.
model (str, optional) – The model identifier to use. Defaults to “grok-3-beta”.
**kwargs – Additional keyword arguments passed to OpenAILLMService.
- create_client(api_key=None, base_url=None, **kwargs)[source]
Create an OpenAI-compatible client configured for the Grok API endpoint.
- async start_llm_usage_metrics(tokens)[source]
Accumulate token usage metrics during processing.
This method intercepts the incremental token updates from Grok’s API and accumulates them instead of passing each update to the metrics system. The final accumulated totals are reported at the end of processing.
- Parameters:
tokens (LLMTokenUsage) – The token usage metrics for the current chunk of processing, containing prompt_tokens and completion_tokens counts.
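The accumulate-then-report pattern described above can be sketched with a simplified, self-contained stand-in for `LLMTokenUsage` (illustrative only; the real class lives in pipecat's metrics module and the real method reports through the metrics system rather than returning a total):

```python
from dataclasses import dataclass


@dataclass
class LLMTokenUsage:
    """Simplified stand-in for pipecat's LLMTokenUsage (illustrative only)."""
    prompt_tokens: int
    completion_tokens: int


def accumulate(chunks):
    """Sum per-chunk usage into running totals, the way the service
    accumulates incremental updates instead of forwarding each one."""
    prompt = completion = 0
    for usage in chunks:
        prompt += usage.prompt_tokens
        completion += usage.completion_tokens
    # Only the final totals would be reported to the metrics system.
    return LLMTokenUsage(prompt_tokens=prompt, completion_tokens=completion)


chunks = [
    LLMTokenUsage(prompt_tokens=12, completion_tokens=0),
    LLMTokenUsage(prompt_tokens=0, completion_tokens=5),
    LLMTokenUsage(prompt_tokens=0, completion_tokens=7),
]
total = accumulate(chunks)
print(total.prompt_tokens, total.completion_tokens)  # 12 12
```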
- create_context_aggregator(context, *, user_params=LLMUserAggregatorParams(aggregation_timeout=0.5), assistant_params=LLMAssistantAggregatorParams(expect_stripped_words=True))[source]
Create an instance of GrokContextAggregatorPair from an OpenAILLMContext. Constructor keyword arguments for both the user and assistant aggregators can be provided.
- Parameters:
context (OpenAILLMContext) – The LLM context.
user_params (LLMUserAggregatorParams, optional) – User aggregator parameters.
assistant_params (LLMAssistantAggregatorParams, optional) – Assistant aggregator parameters.
- Returns:
A pair of context aggregators, one for the user and one for the assistant, encapsulated in a GrokContextAggregatorPair.
- Return type:
GrokContextAggregatorPair