MemoryGraph supports migrating memories between different backend types with full validation, verification, and rollback capabilities. Your data is never locked in.

Supported Backends

Backend        Type               Best For
SQLite         Local file         Development, single-user, getting started
FalkorDBLite   Embedded graph     Local graph queries without a server
FalkorDB       Redis-based graph  Production, high performance, cloud sync
Neo4j          Enterprise graph   Enterprise deployments, complex queries
Memgraph       In-memory graph    Real-time analytics, streaming data

Quick Start

1. Validate First (Recommended)

Always start with a dry-run to ensure the migration will succeed:

memorygraph migrate --to falkordb --to-uri redis://prod.example.com --dry-run

The output shows the dry-run validation result:

Dry-run successful - migration would proceed safely
Source memories to migrate - 150

2. Execute Migration

Once validation succeeds, run the actual migration:

memorygraph migrate --to falkordb --to-uri redis://prod.example.com --verbose

The migration runs through 6 phases with progress output:

Phase 1 - Pre-flight validation
Phase 2 - Exporting from source (150 memories)
Phase 3 - Validating export
Phase 4 - Importing to target
Phase 5 - Verifying migration
Phase 6 - Cleanup

Migration completed successfully!
Migrated 150 memories, 342 relationships
Duration - 3.2 seconds

CLI Options

Option                  Description                                 Example
--to <backend>          Target backend type (required)              --to falkordb
--to-uri <uri>          Target database URI                         --to-uri redis://localhost
--to-path <path>        Target database path (SQLite/FalkorDBLite)  --to-path /data/prod.db
--dry-run               Validate without changes                    --dry-run
--verbose               Show detailed progress                      --verbose
--from <backend>        Source backend (defaults to current)        --from sqlite
--from-uri <uri>        Source database URI                         --from-uri bolt://localhost
--from-username <user>  Source database username                    --from-username neo4j
--from-password <pass>  Source database password                    --from-password password

Common Migration Scenarios

Development to Production

Migrate from local SQLite to cloud FalkorDB:

# Current backend is SQLite
memorygraph migrate --to falkordb --to-uri redis://prod.example.com --verbose

Between Graph Databases

Migrate from Neo4j to FalkorDB:

memorygraph migrate \
  --from neo4j --from-uri bolt://localhost \
  --from-username neo4j --from-password password \
  --to falkordb --to-uri redis://localhost

Local Testing

Test migration to a temporary database:

memorygraph migrate \
  --to sqlite \
  --to-path /tmp/test-migration.db \
  --dry-run

MCP Tools

For use within Claude Desktop or other MCP clients:

migrate_database

Parameters for the migrate_database MCP tool:

  • target_backend - The destination backend (e.g., "falkordb")
  • target_config.uri - Connection URI for the target
  • dry_run - Set to true for validation only
  • verify - Enable post-migration verification

validate_migration

Takes the same parameters as migrate_database but only performs dry-run validation, without making any changes.
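The parameter lists above can be sketched as request payloads. This is a minimal illustration in Python; the exact key names inside target_config are assumptions based on the parameter list, not a confirmed schema:

```python
import json

# Hypothetical payload for the migrate_database MCP tool
migrate_request = {
    "target_backend": "falkordb",                       # destination backend
    "target_config": {"uri": "redis://prod.example.com"},  # assumed config shape
    "dry_run": False,                                   # actually perform the migration
    "verify": True,                                     # post-migration verification
}

# validate_migration takes the same shape but never writes to the target,
# so it behaves as if dry_run were forced on:
validate_request = {**migrate_request, "dry_run": True}

print(json.dumps(validate_request, indent=2))
```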

6-Phase Migration Pipeline

  1. Pre-flight Validation - Verify source and target backends are accessible
  2. Export - Create temporary export file with all memories and relationships
  3. Validation - Verify export file integrity and structure
  4. Import - Import data to target backend (skipped in dry-run)
  5. Verification - Compare counts and sample random memories
  6. Cleanup - Delete temporary files and report statistics
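The six phases above can be condensed into a single control-flow sketch. This is illustrative Python, not the actual implementation; the dict-based "backends" and function shape are assumptions:

```python
import json
import os
import tempfile

def migrate(source, target, dry_run=False):
    # Phase 1: pre-flight - both backends must be reachable (here: non-None)
    assert source is not None and target is not None
    # Phase 2: export all memories and relationships to a temporary file
    export = {"memories": list(source["memories"]),
              "relationships": list(source["relationships"])}
    fd, path = tempfile.mkstemp(suffix=".json")
    with os.fdopen(fd, "w") as f:
        json.dump(export, f)
    # Phase 3: re-read the export and validate its structure
    with open(path) as f:
        data = json.load(f)
    assert set(data) == {"memories", "relationships"}
    # Phase 4: import (skipped entirely in dry-run mode)
    if not dry_run:
        target["memories"].extend(data["memories"])
        target["relationships"].extend(data["relationships"])
        # Phase 5: verify by comparing counts
        assert len(target["memories"]) >= len(data["memories"])
    # Phase 6: cleanup - delete the temporary export file, report statistics
    os.remove(path)
    return len(data["memories"]), len(data["relationships"])

source = {"memories": [{"id": i} for i in range(150)],
          "relationships": [{"id": i} for i in range(342)]}
target = {"memories": [], "relationships": []}
counts = migrate(source, target)
print(counts)  # (150, 342)
```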

Safety Features

  • ✓ Dry-Run Mode - Validates migration without making changes
  • ✓ Verification - Compares source and target data after migration
  • ✓ Automatic Rollback - Reverts target backend on failure
  • ✓ Duplicate Detection - Skips memories that already exist in target
  • ✓ Progress Reporting - Shows real-time progress for large migrations
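Duplicate detection in particular is easy to illustrate: memories whose IDs already exist in the target are skipped rather than written twice. The function name and "id" field below are hypothetical:

```python
def import_skipping_duplicates(exported, target):
    """Append exported memories to target, skipping IDs already present."""
    existing = {m["id"] for m in target}
    imported, skipped = 0, 0
    for memory in exported:
        if memory["id"] in existing:
            skipped += 1           # duplicate: leave the target copy alone
        else:
            target.append(memory)
            existing.add(memory["id"])
            imported += 1
    return imported, skipped

target = [{"id": "a"}, {"id": "b"}]
result = import_skipping_duplicates([{"id": "a"}, {"id": "c"}], target)
print(result)  # (1, 1): "c" imported, "a" skipped as a duplicate
```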

Export/Import

You can also manually export and import data:

# Export to JSON
memorygraph export --format json --output backup.json

# Import from JSON
memorygraph import --format json --input backup.json
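Before importing a backup elsewhere, you can sanity-check the JSON yourself. This sketch assumes the export contains top-level "memories" and "relationships" arrays; the real export schema may differ:

```python
import json

def summarize_backup(path):
    """Return the number of memories and relationships in a JSON export."""
    with open(path) as f:
        data = json.load(f)
    return {key: len(data.get(key, [])) for key in ("memories", "relationships")}

# Example with a tiny synthetic backup file:
with open("backup.json", "w") as f:
    json.dump({"memories": [{"id": 1}], "relationships": []}, f)
print(summarize_backup("backup.json"))  # {'memories': 1, 'relationships': 0}
```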

Performance

Dataset Size            Export    Import     Total
Small (<100 memories)   <1 sec    <2 sec     ~3-5 sec
Medium (100-1000)       1-5 sec   2-10 sec   ~5-20 sec
Large (1000-10000)      5-30 sec  10-50 sec  ~20-120 sec

Performance varies by backend type and network latency.

Troubleshooting

"Source backend not accessible"

# Check current backend
memorygraph health

# Verify environment variables
echo $MEMORY_BACKEND
echo $MEMORY_SQLITE_PATH

"Verification failed - Memory count mismatch"

  1. Check target backend logs
  2. Try migration again with --verbose
  3. Check disk space on target
  4. No manual cleanup is needed - rollback happens automatically on failure

Enable Debug Logging

export MEMORYGRAPH_LOG_LEVEL=DEBUG
memorygraph migrate --to <target> --verbose

Best Practices

  1. Always dry-run first - --dry-run validates without changes
  2. Use verification - Adds minimal overhead but ensures integrity
  3. Backup before migrating - Export current state before migration
  4. Test with small datasets - Rehearse in staging before migrating production data

# Recommended workflow
memorygraph export --format json --output backup-$(date +%Y%m%d).json
memorygraph migrate --to <target> --dry-run
memorygraph migrate --to <target> --verbose
memorygraph health

After Migration

Update your environment to point to the new backend:

# Switch to new backend
export MEMORY_BACKEND=falkordb
export MEMORY_FALKORDB_URI=redis://prod

# Verify data and create backup
memorygraph health
memorygraph export --format json --output prod-backup.json