Performance Optimization Guide

Last updated: February 16, 2026

Welcome to the HarborDB performance optimization guide. This comprehensive resource will help you optimize every aspect of your PostgreSQL workflow, from writing efficient queries to configuring HarborDB and macOS for maximum performance.

Understanding Performance Factors

Several factors affect your database performance in HarborDB:

| Factor | Impact | Optimization Level |
| ------------------------ | ---------- | ------------------ |
| Query Design | High | Application |
| Database Indexes | High | Database |
| Network Connection | Medium | Infrastructure |
| System Resources | Medium | macOS |
| HarborDB Settings | Low-Medium | Application |
| PostgreSQL Configuration | High | Database |

Query Optimization

Writing Efficient Queries

1. Select Only What You Need

-- ❌ Inefficient
SELECT * FROM customers;

-- ✅ Efficient
SELECT customer_id, name, email FROM customers;

2. Use WHERE Clauses Effectively

-- ❌ Inefficient (full table scan)
SELECT * FROM orders
WHERE EXTRACT(YEAR FROM order_date) = 2024;

-- ✅ Efficient (uses index)
SELECT * FROM orders
WHERE order_date >= '2024-01-01'
  AND order_date < '2025-01-01';

3. Limit Result Sets

-- Always limit when exploring
SELECT * FROM large_table LIMIT 100;

-- Use pagination for large results
SELECT * FROM products
ORDER BY created_at DESC
LIMIT 50 OFFSET 0;

4. Avoid N+1 Query Problems

-- ❌ Inefficient (multiple queries)
-- First query: SELECT * FROM orders WHERE status = 'pending'
-- Then for each order: SELECT * FROM customers WHERE customer_id = ?

-- ✅ Efficient (single query with JOIN)
SELECT o.*, c.name, c.email
FROM orders o
JOIN customers c ON o.customer_id = c.customer_id
WHERE o.status = 'pending';

Using EXPLAIN to Analyze Queries

HarborDB makes it easy to analyze query performance:

  1. Write your query in the editor
  2. Click "Explain" (⚡) in the toolbar
  3. Review the execution plan in the results pane

Understanding EXPLAIN Output

  • Seq Scan: Full table scan (often slow for large tables)
  • Index Scan: Using an index (usually faster)
  • Nested Loop: Joining tables (check if appropriate)
  • Cost estimates: Higher numbers mean more work
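
For reference, the plan for an indexed lookup might look roughly like the output below. The exact text, costs, and row estimates vary with your PostgreSQL version and table statistics; the numbers here are purely illustrative.

```sql
EXPLAIN SELECT * FROM orders WHERE customer_id = 1234;

-- Illustrative output:
--                                  QUERY PLAN
-- -------------------------------------------------------------------------
--  Index Scan using idx_orders_customer_id on orders
--      (cost=0.43..8.45 rows=1 width=64)
--    Index Cond: (customer_id = 1234)
```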

Common Optimization Patterns

-- Add missing index suggestion
EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 1234;

-- Result might show: "Seq Scan on orders"
-- Solution: CREATE INDEX idx_orders_customer_id ON orders(customer_id);

Database Index Optimization

When to Add Indexes

Add indexes for:

  • Primary keys (automatically indexed)
  • Foreign keys (often need manual indexing)
  • Frequently filtered columns
  • Frequently sorted columns
  • Columns used in JOIN conditions
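
Because PostgreSQL does not index foreign keys automatically, it can be worth scanning the catalogs for foreign-key columns with no supporting index. The query below is a simplified sketch: it only checks whether the FK column is the leading column of some index, which covers the common single-column case.

```sql
-- Foreign-key columns not covered by any index (single-column sketch)
SELECT c.conrelid::regclass AS table_name,
       a.attname            AS fk_column
FROM pg_constraint c
JOIN pg_attribute a
  ON a.attrelid = c.conrelid
 AND a.attnum   = ANY (c.conkey)
WHERE c.contype = 'f'
  AND NOT EXISTS (
        SELECT 1
        FROM pg_index i
        WHERE i.indrelid  = c.conrelid
          AND i.indkey[0] = a.attnum
      );
```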

Creating Effective Indexes

-- Single column index
CREATE INDEX idx_orders_status ON orders(status);

-- Multi-column index (order matters!)
CREATE INDEX idx_orders_date_status ON orders(order_date, status);

-- Partial index (for specific subsets)
CREATE INDEX idx_active_users ON users(email) WHERE active = true;

-- Unique index for constraint
CREATE UNIQUE INDEX idx_unique_email ON users(email);

Index Maintenance

-- Check index usage
SELECT schemaname, relname, indexrelname
FROM pg_stat_user_indexes
WHERE idx_scan = 0;  -- Never-used indexes

-- Rebuild fragmented indexes
REINDEX INDEX idx_orders_status;

-- Update statistics for query planner
ANALYZE orders;

Connection Optimization

Connection Pool Settings

Optimize your HarborDB connections:

  1. Open Preferences → Connections
  2. Adjust settings based on your workflow:

| Setting | Recommended Value | Purpose |
| ------------------ | ----------------- | --------------------------- |
| Connection Timeout | 30 seconds | Prevent hanging connections |
| Keep-Alive | Enabled | Maintain idle connections |
| Max Connections | 5-10 | Balance performance/memory |
| SSL Mode | Prefer | Secure without overhead |

Network Optimization

For remote databases:

  1. Use SSH tunneling for secure, optimized connections
  2. Enable compression for large result sets
  3. Schedule heavy queries during off-peak hours
  4. Monitor network latency with built-in tools

-- Test network latency
SELECT 1 AS test;  -- Simple query to measure round-trip time

HarborDB Performance Settings

Memory Management

Adjust memory usage in Preferences → Performance:

| Setting | Recommendation | Impact |
| ----------------- | -------------- | ------------------------------ |
| Query Cache | 256 MB | Faster repeated queries |
| Result Set Memory | 512 MB | Handle larger datasets |
| Streaming Mode | Enabled | Better for large results |
| Auto-Refresh | 30 seconds | Balance freshness/performance |

Interface Optimization

  1. Disable animations in Preferences → Appearance
  2. Reduce auto-complete delay to 100ms
  3. Limit sidebar auto-expansion depth
  4. Use dark mode for OLED display efficiency

Keyboard Shortcuts for Speed

Master these productivity boosters:

| Shortcut | Action | Time Saved |
| -------- | -------------- | -------------- |
| ⌘ + R | Execute query | 2-3 seconds |
| ⌘ + . | Cancel query | Prevents waits |
| ⌘ + T | New query tab | 1-2 seconds |
| ⌘ + S | Save query | 1 second |
| ⌘ + E | Export results | 2 seconds |
| ⌘ + F | Find in query | 3-5 seconds |

macOS System Optimization

Hardware Considerations

Memory (RAM)

  • Minimum: 8 GB for basic use
  • Recommended: 16 GB for professional work
  • Optimal: 32 GB for large datasets

Storage (SSD)

  • Minimum: 256 GB free space
  • Recommended: 512 GB+ for database files and exports
  • Use external SSD for large database storage

Processor

  • Apple Silicon: M1/M2/M3 optimized
  • Intel: i5 minimum, i7/i9 recommended

macOS Settings

  1. Energy Saver (System Preferences):

    • Disable "Put hard disks to sleep"
    • Keep display on during long queries
  2. Activity Monitor:

    • Monitor HarborDB memory usage
    • Check PostgreSQL process resources
  3. Disk Utility:

    • Ensure SSD TRIM is enabled
    • Monitor disk health and free space

Terminal Performance Commands

# Check system resources
top -o cpu  # CPU usage
vm_stat     # Memory usage
iostat      # Disk I/O

# Monitor HarborDB specifically
ps aux | grep HarborDB
lsof -p $(pgrep HarborDB) | wc -l  # Open files

Large Dataset Strategies

Working with Millions of Rows

1. Server-Side Pagination

-- Instead of loading all rows
SELECT * FROM huge_table;

-- Use pagination
SELECT * FROM huge_table
ORDER BY id
LIMIT 1000 OFFSET 0;

-- Next page
SELECT * FROM huge_table
ORDER BY id
LIMIT 1000 OFFSET 1000;
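
Note that OFFSET still scans and discards all the skipped rows, so deep pages get progressively slower. Keyset (cursor) pagination avoids this by filtering on the last value seen; it assumes an indexed, unique ordering column such as `id`:

```sql
-- First page
SELECT * FROM huge_table
ORDER BY id
LIMIT 1000;

-- Next page: filter on the last id returned by the previous page
SELECT * FROM huge_table
WHERE id > 1000  -- last id seen (example value)
ORDER BY id
LIMIT 1000;
```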

2. Streaming Results

Enable in Preferences → Performance:

  • ✅ "Enable streaming for results > 10,000 rows"
  • ✅ "Stream chunk size: 1000 rows"
  • ✅ "Background streaming: Enabled"

3. Materialized Views

-- Pre-compute expensive queries
CREATE MATERIALIZED VIEW daily_sales_summary AS
SELECT
    DATE(order_date) as day,
    COUNT(*) as orders,
    SUM(total) as revenue
FROM orders
GROUP BY DATE(order_date);

-- Refresh periodically
REFRESH MATERIALIZED VIEW daily_sales_summary;
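
A plain REFRESH locks the view against reads while it runs. If you add a unique index to the view, PostgreSQL can rebuild it without blocking readers:

```sql
-- A unique index is required for concurrent refresh
CREATE UNIQUE INDEX idx_daily_sales_day ON daily_sales_summary(day);

-- Readers are not blocked while this runs
REFRESH MATERIALIZED VIEW CONCURRENTLY daily_sales_summary;
```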

Export Optimization

For large exports:

  1. Export to CSV (more efficient than JSON)
  2. Use chunked exports (Preferences → Export)
  3. Compress exports automatically
  4. Export directly to external storage
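
If you prefer the command line for very large exports, psql's `\copy` streams the result straight to a client-side file. The query and file name below are placeholder examples; note that `\copy` must be written on a single line:

```sql
-- Runs inside psql; writes a CSV on the client machine
\copy (SELECT order_id, total FROM orders WHERE status = 'completed') TO 'orders_completed.csv' WITH (FORMAT csv, HEADER)
```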

Monitoring Performance

Built-in HarborDB Tools

  1. Query Timer: Shows execution time in status bar
  2. Memory Monitor: View in Window → Performance Monitor
  3. Connection Status: Real-time network metrics
  4. Query History: Review past query performance

PostgreSQL Monitoring Queries

-- Active queries
SELECT
    pid,
    usename,
    query_start,
    state,
    query
FROM pg_stat_activity
WHERE state != 'idle'
ORDER BY query_start DESC;

-- Slow queries (requires the pg_stat_statements extension)
SELECT
    query,
    calls,
    total_exec_time,  -- total_time on PostgreSQL 12 and earlier
    mean_exec_time,   -- mean_time on PostgreSQL 12 and earlier
    rows
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

-- Table statistics
SELECT
    schemaname,
    relname,
    seq_scan,
    idx_scan,
    n_tup_ins,
    n_tup_upd,
    n_tup_del
FROM pg_stat_user_tables
ORDER BY seq_scan DESC;

Common Performance Problems & Solutions

Problem: "Query takes too long"

Solution Checklist:

  1. [ ] Add appropriate indexes
  2. [ ] Rewrite query to be more efficient
  3. [ ] Check PostgreSQL configuration
  4. [ ] Verify network connectivity
  5. [ ] Increase timeout settings if needed
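
While working through the checklist, a session-level timeout keeps a runaway query from hanging your whole session; the `30s` value here is just an example:

```sql
-- Abort any statement in this session that runs longer than 30 seconds
SET statement_timeout = '30s';

-- Restore the server default when finished
RESET statement_timeout;
```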

Problem: "HarborDB uses too much memory"

Solution Steps:

  1. Reduce query cache size (Preferences → Performance)
  2. Close unused query tabs
  3. Restart HarborDB periodically
  4. Check for memory leaks (Activity Monitor)
  5. Increase system RAM if consistently hitting limits

Problem: "Export/Import very slow"

Optimization Tips:

  1. Use CSV format instead of JSON
  2. Disable antivirus scanning on target folder
  3. Export to SSD instead of HDD
  4. Use PostgreSQL native tools (pg_dump/pg_restore) for very large operations
  5. Split large operations into smaller batches

Problem: "Interface feels sluggish"

Quick Fixes:

  1. Disable syntax highlighting for very large queries
  2. Reduce sidebar auto-expansion
  3. Use simpler UI theme
  4. Close other resource-intensive applications
  5. Restart HarborDB and macOS

Advanced Optimization Techniques

Partitioning Large Tables

-- Create partitioned table
CREATE TABLE orders_partitioned (
    order_id BIGSERIAL,
    order_date DATE NOT NULL,
    customer_id INT,
    total DECIMAL(10,2)
) PARTITION BY RANGE (order_date);

-- Create partitions
CREATE TABLE orders_2024_q1 PARTITION OF orders_partitioned
    FOR VALUES FROM ('2024-01-01') TO ('2024-04-01');

CREATE TABLE orders_2024_q2 PARTITION OF orders_partitioned
    FOR VALUES FROM ('2024-04-01') TO ('2024-07-01');

Query Plan Hints

-- Force index usage (use cautiously; these settings are session-local)
SET enable_seqscan = off;

-- Temporarily increase work memory
SET work_mem = '64MB';

-- Use a specific join method
SET enable_nestloop = off;
SET enable_hashjoin = on;

-- Restore planner defaults when you are done experimenting
RESET ALL;

Connection Pooling with PgBouncer

For high-concurrency applications:

  1. Install PgBouncer (Homebrew: brew install pgbouncer)
  2. Configure for transaction pooling
  3. Connect HarborDB through PgBouncer
  4. Monitor connection reuse statistics
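
A minimal `pgbouncer.ini` for transaction pooling might look like the sketch below; the database name, listen port, pool sizes, and auth file path are placeholders to adapt to your setup:

```ini
[databases]
; forward "mydb" to the local PostgreSQL server
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = transaction
max_client_conn = 100
default_pool_size = 20
auth_type = md5
auth_file = /usr/local/etc/pgbouncer/userlist.txt
```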

Performance Testing

Creating Benchmark Queries

-- Test query performance
EXPLAIN ANALYZE
SELECT * FROM orders
WHERE order_date BETWEEN '2024-01-01' AND '2024-12-31'
  AND status = 'completed'
ORDER BY total DESC
LIMIT 100;

-- Compare with different approaches
EXPLAIN ANALYZE
SELECT * FROM orders
WHERE status = 'completed'
  AND order_date >= '2024-01-01'
  AND order_date <= '2024-12-31'
ORDER BY total DESC
LIMIT 100;

Monitoring Over Time

Create a performance log table:

CREATE TABLE query_performance_log (
    log_id SERIAL PRIMARY KEY,
    query_hash TEXT,
    execution_time_ms INTEGER,
    row_count INTEGER,
    timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    notes TEXT
);

-- Log slow queries (pg_stat_activity does not expose row counts,
-- so row_count is left NULL here)
INSERT INTO query_performance_log
    (query_hash, execution_time_ms, row_count, notes)
SELECT
    md5(query),
    (EXTRACT(EPOCH FROM NOW() - query_start) * 1000)::integer,
    NULL,
    'Slow query detected'
FROM pg_stat_activity
WHERE state = 'active'
  AND NOW() - query_start > INTERVAL '5 seconds';

Best Practices Summary

Daily Workflow

  1. Start with EXPLAIN for new complex queries
  2. Use LIMIT when exploring data
  3. Close unused connections
  4. Regularly restart HarborDB (weekly)
  5. Monitor system resources during heavy work

Weekly Maintenance

  1. Review query performance logs
  2. Check for unused indexes
  3. Update PostgreSQL statistics (ANALYZE)
  4. Clean up temporary files
  5. Backup performance configurations

Monthly Review

  1. Analyze slow query patterns
  2. Consider table partitioning for growth
  3. Review hardware needs
  4. Update HarborDB and PostgreSQL
  5. Document performance improvements

Getting Help

Performance Tuning Support

If you need additional help:

  1. Share query plans with our support team
  2. Export performance logs (Preferences → Export Logs)
  3. Include system specifications (macOS version, RAM, storage)
  4. Describe your workflow and typical dataset sizes
