
Connect CrewAI to LLMs

CrewAI connects to LLMs through native SDK integrations for the most popular providers (OpenAI, Anthropic, Google Gemini, Azure, and AWS Bedrock), and uses LiteLLM as a flexible fallback for all other providers.
By default, CrewAI uses the gpt-4o-mini model. This is determined by the OPENAI_MODEL_NAME environment variable, which defaults to “gpt-4o-mini” if not set. You can easily configure your agents to use a different model or provider as described in this guide.
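For example, you can override the default for every agent that does not set its own llm by exporting the variable before creating any agents (the model name below is illustrative):

import os

# Override the default model for all agents without an explicit llm.
# "gpt-4o" is just an example; use any model your API key can access.
os.environ["OPENAI_MODEL_NAME"] = "gpt-4o"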

Supported Providers

LiteLLM supports a wide range of providers, including but not limited to:
  • OpenAI
  • Anthropic
  • Google (Vertex AI, Gemini)
  • Azure OpenAI
  • AWS (Bedrock, SageMaker)
  • Cohere
  • VoyageAI
  • Hugging Face
  • Ollama
  • Mistral AI
  • Replicate
  • Together AI
  • AI21
  • Cloudflare Workers AI
  • DeepInfra
  • Groq
  • SambaNova
  • Nebius AI Studio
  • NVIDIA NIMs
  • And many more!
For a complete and up-to-date list of supported providers, please refer to the LiteLLM Providers documentation.
To use any provider not covered by a native integration, add LiteLLM as a dependency to your project:
uv add 'crewai[litellm]'
Native providers (OpenAI, Anthropic, Google Gemini, Azure, AWS Bedrock) use their own SDK extras — see the Provider Configuration Examples.

Changing the LLM

To use a different LLM with your CrewAI agents, you have several options. The simplest is to pass the model name as a string when initializing the agent:
from crewai import Agent

# Using OpenAI's GPT-4
openai_agent = Agent(
    role='OpenAI Expert',
    goal='Provide insights using GPT-4',
    backstory="An AI assistant powered by OpenAI's latest model.",
    llm='gpt-4'
)

# Using Anthropic's Claude
claude_agent = Agent(
    role='Anthropic Expert',
    goal='Analyze data using Claude',
    backstory="An AI assistant leveraging Anthropic's language model.",
    llm='claude-2'
)
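You can also pass an LLM instance instead of a plain string, which lets you attach configuration such as temperature up front. A minimal sketch; the model name and settings are illustrative:

from crewai import Agent, LLM

# An LLM instance bundles the model name with extra settings.
gpt4_llm = LLM(model="gpt-4", temperature=0.7)

configured_agent = Agent(
    role='Configured Expert',
    goal='Provide insights with tuned sampling settings',
    backstory="An AI assistant with explicit LLM configuration.",
    llm=gpt4_llm
)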

Configuration Options

When configuring an LLM for your agent, you have access to a wide range of parameters:
Parameter           Type             Description
model               str              The name of the model to use (e.g., "gpt-4", "claude-2")
temperature         float            Controls randomness in output (0.0 to 1.0)
max_tokens          int              Maximum number of tokens to generate
top_p               float            Controls diversity of output (0.0 to 1.0)
frequency_penalty   float            Penalizes new tokens based on their frequency in the text so far
presence_penalty    float            Penalizes new tokens based on their presence in the text so far
stop                str, List[str]   Sequence(s) to stop generation
base_url            str              The base URL for the API endpoint
api_key             str              Your API key for authentication
For a complete list of parameters and their descriptions, refer to the LLM class documentation.
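As a sketch, here is how several of these parameters combine on a single LLM instance (the values are illustrative, not recommendations):

from crewai import LLM

llm = LLM(
    model="gpt-4",
    temperature=0.3,        # lower = more deterministic output
    max_tokens=1024,        # cap on generated tokens
    top_p=0.9,              # nucleus sampling threshold
    frequency_penalty=0.1,  # discourage verbatim repetition
    stop=["END"]            # stop generation at this sequence
)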

Connecting to OpenAI-Compatible LLMs

You can connect to OpenAI-compatible LLMs either through environment variables, as shown below, or by setting attributes directly on the LLM class (see Changing the Base API URL below):
import os

os.environ["OPENAI_API_KEY"] = "your-api-key"
os.environ["OPENAI_API_BASE"] = "https://api.your-provider.com/v1"
os.environ["OPENAI_MODEL_NAME"] = "your-model-name"

Using Local Models with Ollama

For local models like those provided by Ollama:
1. Install Ollama

Make sure Ollama is installed and running on your machine.

2. Pull the desired model

For example, run ollama pull llama3.2 to download the model.

3. Configure your agent
from crewai import Agent, LLM

agent = Agent(
    role='Local AI Expert',
    goal='Process information using a local model',
    backstory="An AI assistant running on local hardware.",
    llm=LLM(model="ollama/llama3.2", base_url="http://localhost:11434")
)
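To verify the local setup end to end, you can run the agent in a minimal crew. A sketch assuming Ollama is serving llama3.2 on its default port:

from crewai import Agent, Task, Crew, LLM

local_llm = LLM(model="ollama/llama3.2", base_url="http://localhost:11434")

agent = Agent(
    role='Local AI Expert',
    goal='Process information using a local model',
    backstory="An AI assistant running on local hardware.",
    llm=local_llm
)

task = Task(
    description='Summarize the advantages of running LLMs locally.',
    expected_output='A short bulleted summary.',
    agent=agent
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
print(result)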

Changing the Base API URL

You can change the base API URL for any LLM provider by setting the base_url parameter:
from crewai import Agent, LLM

llm = LLM(
    model="custom-model-name",
    base_url="https://api.your-provider.com/v1",
    api_key="your-api-key"
)
agent = Agent(llm=llm, ...)  # plus role, goal, and backstory as usual
This is particularly useful when working with OpenAI-compatible APIs or when you need to specify a different endpoint for your chosen provider.

Conclusion

Through its native SDK integrations and the LiteLLM fallback, CrewAI offers seamless integration with a vast array of LLMs. This flexibility allows you to choose the most suitable model for your specific needs, whether you prioritize performance, cost-efficiency, or local deployment. Consult the LiteLLM documentation for the most up-to-date information on supported models and configuration options.