LLM

class pipecat.services.fireworks.llm.FireworksLLMService(*, api_key, model='accounts/fireworks/models/firefunction-v2', base_url='https://api.fireworks.ai/inference/v1', **kwargs)[source]

Bases: OpenAILLMService

A service for interacting with Fireworks AI using the OpenAI-compatible interface.

This service extends OpenAILLMService to connect to Fireworks’ API endpoint while maintaining full compatibility with OpenAI’s interface and functionality.

Parameters:
  • api_key (str) – The API key for accessing Fireworks AI

  • model (str, optional) – The model identifier to use. Defaults to “accounts/fireworks/models/firefunction-v2”

  • base_url (str, optional) – The base URL for the Fireworks API. Defaults to “https://api.fireworks.ai/inference/v1”

  • **kwargs – Additional keyword arguments passed to OpenAILLMService
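As a usage sketch: the environment variable name and the fallback key below are assumptions for illustration, not part of this reference.

```python
import os

# Hedged sketch: configuring the service. "FIREWORKS_API_KEY" is an
# assumed environment variable name, not taken from this reference.
config = {
    "api_key": os.getenv("FIREWORKS_API_KEY", "fw-test-key"),
    # The documented default model identifier:
    "model": "accounts/fireworks/models/firefunction-v2",
}

# In a real pipeline, the config would be passed to the service:
# from pipecat.services.fireworks.llm import FireworksLLMService
# llm = FireworksLLMService(**config)
```

Because the service extends OpenAILLMService, any extra keyword arguments accepted there can be forwarded through **kwargs in the same way.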

create_client(api_key=None, base_url=None, **kwargs)[source]

Create an OpenAI-compatible client for the Fireworks API endpoint.

async get_chat_completions(context, messages)[source]

Get chat completions from Fireworks API.

Removes OpenAI-specific parameters not supported by Fireworks.

Parameters:
  • context (OpenAILLMContext)

  • messages (List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam])
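Since get_chat_completions drops OpenAI-specific parameters before forwarding the request, the general filtering pattern can be sketched as follows. The parameter names in UNSUPPORTED_PARAMS are illustrative assumptions, not the actual list this service removes:

```python
# Illustrative sketch of the "remove unsupported parameters" pattern.
# The set below is an assumption; the real service defines its own list.
UNSUPPORTED_PARAMS = {"seed", "stream_options"}

def filter_request_params(params: dict) -> dict:
    """Return a copy of the request kwargs with unsupported keys dropped."""
    return {k: v for k, v in params.items() if k not in UNSUPPORTED_PARAMS}

request = {
    "model": "accounts/fireworks/models/firefunction-v2",
    "temperature": 0.7,
    "seed": 42,  # hypothetical OpenAI-only parameter
}
cleaned = filter_request_params(request)
# "seed" is removed; supported keys pass through unchanged
```

The cleaned dictionary is then what an OpenAI-compatible client would send to the Fireworks endpoint.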