Multi-Provider LLM Support
HAI RapidUI provides seamless integration with multiple Large Language Model (LLM) providers, giving you the flexibility to choose the best AI model for your specific needs. Switch between providers effortlessly and leverage the unique strengths of each platform.
Recommended LLM Models
Note
Based on our evaluation, these models are best suited to the use cases listed below.
| Rank | Model | Provider | Recommended Use Case |
|---|---|---|---|
| 1 | Claude 4 | Anthropic | Great for Modernize UI mode |
| 2 | Claude 3.7 | Anthropic | Stable for both Modernize and Replica UI modes |
| 3 | Claude 3.5 | Anthropic | Decent results in both modes |
Supported Providers
OpenAI GPT-4o
OpenAI's flagship model for code generation and problem-solving.
Configuration
```
# OpenAI Configuration
OPENAI_API_KEY=your_openai_api_key_here
OPENAI_MODEL=gpt-4o
```
Default Model: `gpt-4o`
Anthropic Claude
Anthropic's Claude models for code quality and safety.
Configuration
```
# Anthropic Configuration
ANTHROPIC_API_KEY=your_anthropic_api_key_here
CLAUDE_MODEL=claude-3-5-sonnet-20241022
```
Default Model: `claude-3-5-sonnet-20241022`
AWS Bedrock (Claude)
Amazon's managed AI service providing access to Claude models.
Configuration
```
# AWS Bedrock Configuration
AWS_ACCESS_KEY_ID=your_aws_access_key_id_here
AWS_SECRET_ACCESS_KEY=your_aws_secret_access_key_here
AWS_REGION=us-east-1
BEDROCK_CLAUDE_MODEL=anthropic.claude-3-5-sonnet-20241022-v2:0
# Optional: only needed when using temporary credentials
AWS_SESSION_TOKEN=
```
Default Model: `anthropic.claude-3-5-sonnet-20241022-v2:0`
Google Vertex AI (Gemini)
Google's AI platform offering access to Gemini models.
Configuration
```
# Google Cloud Configuration
GOOGLE_APPLICATION_CREDENTIALS=path_to_your_service_account_json
VERTEX_AI_MODEL=gemini-1.5-flash
VERTEX_AI_LOCATION=us-central1
VERTEX_AI_PROJECT_ID=your_project_id_here
```
Default Model: `gemini-1.5-flash`
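The environment variables above differ per provider, so it is useful to verify that the required ones are set before starting a session. The sketch below is illustrative only (the variable names come from the configuration sections above, but the helper function itself is hypothetical and not part of the HAI RapidUI API):

```typescript
// Required environment variables per provider key.
// Names are taken from the configuration sections above;
// the map and function are an illustrative sketch, not HAI RapidUI's API.
const REQUIRED_ENV: Record<string, string[]> = {
  openai: ["OPENAI_API_KEY"],
  anthropic: ["ANTHROPIC_API_KEY"],
  bedrockanthropic: ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_REGION"],
  vertexai: ["GOOGLE_APPLICATION_CREDENTIALS", "VERTEX_AI_PROJECT_ID"],
};

// Returns the required variables that are missing or empty for a provider.
function missingEnv(
  provider: string,
  env: Record<string, string | undefined>
): string[] {
  const required = REQUIRED_ENV[provider] ?? [];
  return required.filter((name) => !env[name]);
}
```

A startup check could call `missingEnv("openai", process.env)` and abort with a clear message if anything is returned.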
Provider Switching
Runtime Switching
Switch between providers seamlessly during your development session:
```
# In the CLI, use these commands:
? Current project: /your/project/path/input
Select an action: (Use arrow keys)
❯ Convert as Modules
  Switch Providers (openai, anthropic, bedrockanthropic, vertexai)
  Exit
```
Provider Selection During Initialization
When initializing a new project with `npx @hai/rapidui init`, you'll be prompted to select your preferred LLM provider. This selection is stored in your project configuration and can be changed later.
Provider-Specific Considerations
Token Limits
Each provider enforces different token limits, which determine how much conversation history and context can fit in a single request:
| Provider | Model | Max Input Tokens | Max Output Tokens |
|---|---|---|---|
| OpenAI | gpt-4o | 128,000 | 4,096 |
| Anthropic | claude-3-5-sonnet | 200,000 | 4,096 |
| AWS Bedrock | claude-3-5-sonnet | 200,000 | 4,096 |
| Google Vertex AI | gemini-1.5-flash | 1,000,000 | 8,192 |