State and Memory Handling

HAI RapidUI implements a state and memory management system that keeps the UI modernization process efficient, context-aware, and resumable.

Key Components

The state and memory handling system consists of three integrated components:

1. Batch Processing

Batch processing enables efficient handling of multiple wireframes by processing them in logical groups.

Configuration

BATCH_SIZE=5  # Default batch size, configurable

Key Features

  • Processes wireframes in configurable batch sizes
  • Optimizes resource usage and LLM token consumption
  • Enables parallel development of different modules
  • Intelligently groups related UI components
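As a sketch, the batching step amounts to slicing the wireframe list into groups of `BATCH_SIZE`. The wireframe filenames below are hypothetical, and the real system also groups related UI components rather than slicing purely by position:

```python
import os

# Read the batch size from the environment, falling back to the documented default of 5.
BATCH_SIZE = int(os.environ.get("BATCH_SIZE", 5))

def batch(wireframes, size=BATCH_SIZE):
    """Yield consecutive groups of at most `size` wireframes."""
    for i in range(0, len(wireframes), size):
        yield wireframes[i:i + size]

# Hypothetical wireframe list for illustration.
frames = [f"screen_{n}.png" for n in range(12)]
batches = list(batch(frames, size=5))  # groups of 5, 5, and 2
```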

2. Conversation History Management

Maintains context throughout the UI modernization process for more coherent and consistent code generation.

Storage Location

.hai-rapidui/conversationhistory.json

Key Features

  • Preserves context between interactions
  • Intelligently manages token limits through summarization
  • Supports module-specific conversation histories
  • Enables resumable sessions after interruptions
  • Optimizes token usage based on provider limits
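One way to picture the summarization strategy is as a threshold check before each interaction. The names, the 4-characters-per-token estimate, and the "keep the latest exchange verbatim" policy below are illustrative assumptions, not the actual implementation:

```python
MAX_TOKENS = 128_000       # provider-specific limit (illustrative)
SUMMARY_THRESHOLD = 0.8    # summarize once history reaches 80% of the limit

def estimate_tokens(history):
    """Very rough token estimate: about 4 characters per token."""
    return sum(len(m["content"]) for m in history) // 4

def maybe_summarize(history, summarize, limit=MAX_TOKENS, threshold=SUMMARY_THRESHOLD):
    """Collapse older messages into a summary once the history nears the token limit."""
    if estimate_tokens(history) >= limit * threshold:
        summary = summarize(history[:-2])  # keep the latest exchange verbatim
        return [{"role": "system", "content": summary}] + history[-2:]
    return history
```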

3. Module State Tracking

Tracks the progress, status, and metadata of each module throughout the modernization process.

Storage Location

.hai-rapidui/module_state.json

Key Features

  • Tracks module status (PENDING, IN_PROGRESS, COMPLETED)
  • Records which wireframes have been processed
  • Stores timing information and performance metrics
  • Enables resumable operations if interrupted
  • Provides detailed progress reporting
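A minimal sketch of what such a state file might track follows; the field names are assumptions for illustration, not the actual `module_state.json` schema:

```python
import json
import time
from enum import Enum

class ModuleStatus(str, Enum):
    PENDING = "PENDING"
    IN_PROGRESS = "IN_PROGRESS"
    COMPLETED = "COMPLETED"

def mark_processed(state, module, wireframe):
    """Record a processed wireframe and update the module's status and timestamp."""
    entry = state.setdefault(module, {
        "status": ModuleStatus.PENDING.value,
        "processed_wireframes": [],
        "updated_at": None,
    })
    entry["processed_wireframes"].append(wireframe)
    entry["status"] = ModuleStatus.IN_PROGRESS.value
    entry["updated_at"] = time.time()
    return state

# The state dict serializes directly to JSON for persistence.
state = mark_processed({}, "auth", "login.png")
snapshot = json.dumps(state)
```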

Integrated Workflow

The three components work together to create a seamless workflow:

  1. Project Initialization

    • System analyzes input directory structure
    • Modules are identified (manually or via auto-modularization)
    • Module states are initialized
    • Configuration is stored for future sessions
  2. Processing Cycle

    • Wireframes are processed in configurable batches
    • Conversation history maintains context between interactions
    • Module state tracks progress and enables resumability
    • Error handling mechanisms monitor for issues
  3. Error Handling & Recovery

    • If processing is interrupted, state is preserved
    • Operations can be resumed from the last completed batch
    • Error information is recorded for troubleshooting
    • Automatic retry mechanisms attempt to resolve issues
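Resuming from the last completed batch can be sketched as finding the first batch containing unprocessed wireframes, assuming processed wireframes are recorded in the module state as described above:

```python
def resume_point(state, batches):
    """Return the index of the first batch that is not fully processed."""
    done = set(state.get("processed_wireframes", []))
    for i, group in enumerate(batches):
        if any(w not in done for w in group):
            return i
    return len(batches)  # everything is done

# After an interruption, processing restarts at batches[resume_point(state, batches)].
```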

Error Handling System

HAI RapidUI includes a robust error handling system:

Storage Location

.hai-rapidui/errorhandler.json

Key Features

  • Automatic Error Detection: Monitors for coding errors during implementation
  • Error Classification: Categorizes errors by type and severity
  • Automatic Retry: Failed operations are automatically retried with different approaches
  • Error Escalation: Persistent errors are escalated with detailed logging
  • Recovery Strategies: Implements different strategies based on error type
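One plausible shape for the retry-and-escalate loop is sketched below. The function names and the exact interplay of the two thresholds are assumptions; the documented knobs are ERROR_RETRY_ATTEMPTS and ERROR_ESCALATION_THRESHOLD:

```python
ERROR_RETRY_ATTEMPTS = 3         # documented default
ERROR_ESCALATION_THRESHOLD = 2   # documented default

def run_with_retry(operation, attempts=ERROR_RETRY_ATTEMPTS,
                   escalate_after=ERROR_ESCALATION_THRESHOLD,
                   on_escalate=lambda msg: None):
    """Retry a failing operation, escalating with details after repeated failures."""
    failures = []
    for attempt in range(1, attempts + 1):
        try:
            # Passing the attempt number lets the operation vary its approach per retry.
            return operation(attempt)
        except Exception as exc:
            failures.append(f"attempt {attempt}: {exc}")
            if len(failures) == escalate_after:
                on_escalate(f"escalating after {len(failures)} failures: {failures}")
    raise RuntimeError(f"operation failed after {attempts} attempts: {failures}")
```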

Comprehensive Logging

Detailed execution tracking provides visibility into the modernization process:

Storage Locations

.hai-rapidui/execution.log
.hai-rapidui/error.log

Key Features

  • Execution Tracking: Detailed logs of all operations
  • Error Logging: Comprehensive error information
  • Performance Metrics: Timing and resource usage statistics
  • Debugging Information: Detailed context for troubleshooting
  • Session Resumption Data: Information needed to resume interrupted sessions
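A dual-log setup like this can be sketched with Python's standard logging module; the logger name and message format below are assumptions, and only the two file names come from the documentation:

```python
import logging

def configure_logging(base_dir=".hai-rapidui"):
    """Route all operations to execution.log and errors additionally to error.log."""
    logger = logging.getLogger("hai-rapidui")
    logger.setLevel(logging.DEBUG)

    exec_handler = logging.FileHandler(f"{base_dir}/execution.log")
    exec_handler.setLevel(logging.DEBUG)   # everything, for execution tracking

    err_handler = logging.FileHandler(f"{base_dir}/error.log")
    err_handler.setLevel(logging.ERROR)    # errors only

    fmt = logging.Formatter("%(asctime)s %(levelname)s %(message)s")
    for handler in (exec_handler, err_handler):
        handler.setFormatter(fmt)
        logger.addHandler(handler)
    return logger
```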

Benefits

  • Efficiency: Optimized resource usage through batch processing
  • Consistency: Maintained context through conversation history
  • Reliability: Resumable operations through module state tracking
  • Visibility: Clear progress tracking and reporting
  • Flexibility: Support for parallel development and team collaboration
  • Resilience: Robust error handling and recovery mechanisms

Configuration Options

| Setting | Description | Default |
|---|---|---|
| BATCH_SIZE | Number of wireframes to process in each batch | 5 |
| MAX_TOKENS | Maximum tokens for conversation history | Provider-specific |
| SUMMARY_THRESHOLD | Percentage of the token limit that triggers summarization | 80% |
| ERROR_RETRY_ATTEMPTS | Number of automatic retry attempts for errors | 3 |
| ERROR_ESCALATION_THRESHOLD | Number of failures before error escalation | 2 |
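Loading these settings might look like the following sketch. MAX_TOKENS is omitted because its default is provider-specific, and the helper name is hypothetical:

```python
import os

# Defaults mirror the documented settings; each can be overridden via an environment variable.
DEFAULTS = {
    "BATCH_SIZE": 5,
    "SUMMARY_THRESHOLD": 0.8,
    "ERROR_RETRY_ATTEMPTS": 3,
    "ERROR_ESCALATION_THRESHOLD": 2,
}

def load_config(env=os.environ):
    """Merge environment overrides onto the defaults, preserving each default's type."""
    return {
        key: type(default)(env[key]) if key in env else default
        for key, default in DEFAULTS.items()
    }
```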

Provider-Specific Memory Management

HAI RapidUI optimizes memory usage based on the selected LLM provider:

| Provider | Max Input Tokens | Conversation History Strategy |
|---|---|---|
| OpenAI (gpt-4o) | 128,000 | Aggressive summarization at 80% threshold |
| Anthropic (claude-3-5-sonnet) | 200,000 | Standard summarization at 80% threshold |
| AWS Bedrock (claude-3-5-sonnet) | 200,000 | Standard summarization at 80% threshold |
| Google Vertex AI (gemini-1.5-flash) | 1,000,000 | Minimal summarization at 90% threshold |
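These limits reduce to a simple lookup; for example, the absolute token count at which summarization triggers can be derived per model. The mapping keys echo the model names above, and the helper itself is illustrative:

```python
# (max input tokens, summarization threshold) per model, per the documented limits.
PROVIDER_LIMITS = {
    "gpt-4o": (128_000, 0.80),
    "claude-3-5-sonnet": (200_000, 0.80),   # Anthropic and AWS Bedrock share limits
    "gemini-1.5-flash": (1_000_000, 0.90),
}

def summarization_trigger(model):
    """Token count at which conversation-history summarization kicks in."""
    max_tokens, threshold = PROVIDER_LIMITS[model]
    return int(max_tokens * threshold)
```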