
Multi-Provider LLM Support

HAI RapidUI integrates with multiple Large Language Model (LLM) providers, giving you the flexibility to choose the best model for your needs. You can switch between providers at any time and leverage the strengths of each platform.

Note

Based on our evaluation, the models below are ranked by how well they perform in each use case.

| Rank | Model | Provider | Recommended Use Case |
| --- | --- | --- | --- |
| 1 | Claude 4 | Anthropic | Great for Modernize UI mode |
| 2 | Claude 3.7 | Anthropic | Stable for both Modernize and Replica UI modes |
| 3 | Claude 3.5 | Anthropic | Decent results in both modes |

Supported Providers

OpenAI GPT-4

OpenAI's flagship model for code generation and problem-solving.

Configuration

```env
# OpenAI Configuration
OPENAI_API_KEY=your_openai_api_key_here
OPENAI_MODEL=gpt-4o
```

Default Model: gpt-4o

Anthropic Claude

Anthropic's Claude models for code quality and safety.

Configuration

```env
# Anthropic Configuration
ANTHROPIC_API_KEY=your_anthropic_api_key_here
CLAUDE_MODEL=claude-3-5-sonnet-20241022
```

Default Model: claude-3-5-sonnet-20241022

AWS Bedrock (Claude)

Amazon's managed AI service providing access to Claude models.

Configuration

```env
# AWS Bedrock Configuration
AWS_ACCESS_KEY_ID=your_aws_access_key_id_here
AWS_SECRET_ACCESS_KEY=your_aws_secret_access_key_here
AWS_REGION=us-east-1
BEDROCK_CLAUDE_MODEL=anthropic.claude-3-5-sonnet-20241022-v2:0
# Only needed when using temporary credentials (e.g. an assumed IAM role)
AWS_SESSION_TOKEN=
```

Default Model: anthropic.claude-3-5-sonnet-20241022-v2:0

Google Vertex AI (Gemini)

Google's AI platform offering access to Gemini models.

Configuration

```env
# Google Cloud Configuration
GOOGLE_APPLICATION_CREDENTIALS=path_to_your_service_account_json
VERTEX_AI_MODEL=gemini-1.5-flash
VERTEX_AI_LOCATION=us-central1
VERTEX_AI_PROJECT_ID=your_project_id_here
```

Default Model: gemini-1.5-flash
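Taken together, each provider pairs one model-selection variable with a documented default. As an illustrative sketch (not HAI RapidUI's actual implementation), model resolution from these variables might look like:

```typescript
// Illustrative sketch only: resolve the active model from the environment
// variables documented above, falling back to each provider's default.
// Variable names and default models come from this page; the resolver
// function itself is an assumption for illustration.
type Provider = "openai" | "anthropic" | "bedrockanthropic" | "vertexai";

const DEFAULT_MODELS: Record<Provider, string> = {
  openai: "gpt-4o",
  anthropic: "claude-3-5-sonnet-20241022",
  bedrockanthropic: "anthropic.claude-3-5-sonnet-20241022-v2:0",
  vertexai: "gemini-1.5-flash",
};

const MODEL_ENV_VARS: Record<Provider, string> = {
  openai: "OPENAI_MODEL",
  anthropic: "CLAUDE_MODEL",
  bedrockanthropic: "BEDROCK_CLAUDE_MODEL",
  vertexai: "VERTEX_AI_MODEL",
};

function resolveModel(
  provider: Provider,
  env: Record<string, string | undefined>,
): string {
  // Use the explicit env var when set, otherwise the documented default.
  return env[MODEL_ENV_VARS[provider]] ?? DEFAULT_MODELS[provider];
}
```

With no overrides set, each provider falls back to its documented default model.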

Provider Switching

Runtime Switching

Switch between providers seamlessly during your development session:

```bash
# In the CLI, use these commands:
? Current project: /your/project/path/input
Select an action: (Use arrow keys)
❯ Convert as Modules
  Switch Providers (openai, anthropic, bedrockanthropic, vertexai)
  Exit
```

Provider Selection During Initialization

When initializing a new project with npx @hai/rapidui init, you'll be prompted to select your preferred LLM provider. This selection is stored in your project configuration and can be changed later.
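The stored selection can be modeled as a small piece of project state. The shape below is hypothetical (the actual file name and schema HAI RapidUI uses may differ); it only illustrates why switching later is a simple, non-destructive update:

```typescript
// Hypothetical shape of the stored provider selection; HAI RapidUI's real
// project configuration schema is not documented here and may differ.
interface ProjectConfig {
  provider: "openai" | "anthropic" | "bedrockanthropic" | "vertexai";
}

// Return an updated copy rather than mutating in place, so the previous
// selection can be restored if the switch fails.
function switchProvider(
  config: ProjectConfig,
  next: ProjectConfig["provider"],
): ProjectConfig {
  return { ...config, provider: next };
}
```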

Provider-Specific Considerations

Token Limits

Each provider has different token limits that affect conversation history and context window:

| Provider | Model | Max Input Tokens | Max Output Tokens |
| --- | --- | --- | --- |
| OpenAI | gpt-4o | 128,000 | 4,096 |
| Anthropic | claude-3-5-sonnet | 200,000 | 4,096 |
| AWS Bedrock | claude-3-5-sonnet | 200,000 | 4,096 |
| Google Vertex AI | gemini-1.5-flash | 1,000,000 | 8,192 |
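These limits matter as conversation history accumulates: once prompt plus history approach the input window, older turns must be trimmed or summarized. A minimal sketch of such a check (limit values are taken from the table above; the helper itself is hypothetical, not part of HAI RapidUI):

```typescript
// Token limits per provider, as documented in the table above.
interface TokenLimits {
  maxInput: number;
  maxOutput: number;
}

const PROVIDER_LIMITS: Record<string, TokenLimits> = {
  openai: { maxInput: 128_000, maxOutput: 4_096 },
  anthropic: { maxInput: 200_000, maxOutput: 4_096 },
  bedrockanthropic: { maxInput: 200_000, maxOutput: 4_096 },
  vertexai: { maxInput: 1_000_000, maxOutput: 8_192 },
};

// Hypothetical helper: true when the prompt plus conversation history
// still fit within the provider's input window.
function fitsContext(
  provider: string,
  promptTokens: number,
  historyTokens: number,
): boolean {
  const limits = PROVIDER_LIMITS[provider];
  if (!limits) throw new Error(`Unknown provider: ${provider}`);
  return promptTokens + historyTokens <= limits.maxInput;
}
```

For example, a 120,000-token history that fits comfortably on Anthropic or Vertex AI would exceed gpt-4o's input window once the prompt is added.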