
This document outlines backup strategies, restore procedures, and disaster recovery plans for TendSocial AI.


Overview

A comprehensive backup strategy is essential for:

  • Data Protection: Prevent data loss from hardware failure, bugs, or attacks
  • Compliance: Meet data retention requirements
  • Business Continuity: Recover from disasters with minimal downtime
  • Version Control: Restore to previous states if needed

Backup Principles

  1. 3-2-1 Rule:

    • 3 copies of data
    • 2 different storage types
    • 1 off-site backup
  2. Regular Schedule:

    • Daily automated backups (minimum)
    • Weekly full backups
    • Monthly archives for long-term retention
  3. Test Regularly:

    • Monthly restore drills
    • Verify backup integrity
    • Document recovery time
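
Integrity verification can be automated by recording a checksum alongside each backup and checking it before any restore. A minimal, self-contained sketch (the `backup_demo.sql.gz` filename is illustrative):

```bash
#!/bin/bash
set -euo pipefail

# Illustrative: create a sample "backup" so the sketch is self-contained.
BACKUP_FILE="backup_demo.sql.gz"
printf 'sample backup payload' > "$BACKUP_FILE"

# At backup time: store a SHA-256 checksum next to the archive.
sha256sum "$BACKUP_FILE" > "$BACKUP_FILE.sha256"

# Before restoring: verify; sha256sum -c exits non-zero on mismatch.
if sha256sum -c --quiet "$BACKUP_FILE.sha256"; then
    echo "integrity OK: $BACKUP_FILE"
else
    echo "integrity FAILED: $BACKUP_FILE" >&2
    exit 1
fi
```

Run the check as part of every restore drill so corruption is caught before it matters.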

What to Back Up

Critical Data

  1. Database

    • User accounts and profiles
    • Companies and teams
    • Brand profiles and settings
    • Content (blogs, videos, social posts)
    • Campaigns and calendars
    • Analytics data
  2. User-Generated Files

    • Uploaded images and media
    • Generated cover images
    • Thumbnails and social graphics
    • Logos and brand assets
  3. Configuration

    • Environment variables (encrypted)
    • Integration settings
    • API configurations
    • Git repository connections
  4. Application Code

    • Already in Git
    • Ensure proper branching strategy
    • Tag stable releases

What NOT to Back Up

  • node_modules/ - Reproducible from package.json
  • Build artifacts (dist/) - Regenerated on deploy
  • Logs older than 30 days - Archive separately if needed
  • Temporary files and caches
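
The exclusions above can be applied directly when archiving a project directory with tar. A sketch using an illustrative layout (directory and file names are examples, not the real project tree):

```bash
#!/bin/bash
set -euo pipefail

# Illustrative layout: content worth keeping plus content to skip.
mkdir -p demo-project/src demo-project/node_modules demo-project/dist
echo 'source' > demo-project/src/index.ts
echo 'dep'    > demo-project/node_modules/pkg.js
echo 'build'  > demo-project/dist/bundle.js

# Archive the project, skipping reproducible and temporary content.
tar -czf project_backup.tar.gz \
    --exclude='node_modules' \
    --exclude='dist' \
    --exclude='*.log' \
    --exclude='.cache' \
    demo-project

# The archive contains src/ but not node_modules/ or dist/.
tar -tzf project_backup.tar.gz
```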

Database Backup

PostgreSQL Backup (Production)

Manual Backup

bash
# Full database dump
pg_dump -h localhost -U tendsocial -d tendsocial > backup_$(date +%Y%m%d_%H%M%S).sql

# Compressed backup (recommended)
pg_dump -h localhost -U tendsocial -d tendsocial | gzip > backup_$(date +%Y%m%d_%H%M%S).sql.gz

# Backup to custom format (faster restore, parallel)
pg_dump -h localhost -U tendsocial -d tendsocial -Fc -f backup_$(date +%Y%m%d_%H%M%S).dump

Automated Daily Backup Script

Create scripts/backup-db.sh:

bash
#!/bin/bash
set -o pipefail  # without this, the exit check below sees only gzip's status, not pg_dump's

# Configuration
DB_HOST="${DB_HOST:-localhost}"
DB_USER="${DB_USER:-tendsocial}"
DB_NAME="${DB_NAME:-tendsocial}"
BACKUP_DIR="${BACKUP_DIR:-/var/backups/tendsocial}"
RETENTION_DAYS=30

# Create backup directory
mkdir -p "$BACKUP_DIR"

# Generate backup filename with timestamp
BACKUP_FILE="$BACKUP_DIR/db_backup_$(date +%Y%m%d_%H%M%S).sql.gz"

# Perform backup
echo "Starting backup at $(date)"
pg_dump -h "$DB_HOST" -U "$DB_USER" -d "$DB_NAME" | gzip > "$BACKUP_FILE"

if [ $? -eq 0 ]; then
    echo "Backup successful: $BACKUP_FILE"
    
    # Calculate size
    SIZE=$(du -h "$BACKUP_FILE" | cut -f1)
    echo "Backup size: $SIZE"
    
    # Delete old backups
    find "$BACKUP_DIR" -name "db_backup_*.sql.gz" -type f -mtime +$RETENTION_DAYS -delete
    echo "Deleted backups older than $RETENTION_DAYS days"
else
    echo "Backup failed!"
    exit 1
fi

Make it executable:

bash
chmod +x scripts/backup-db.sh

Add to crontab for daily backups at 2 AM:

bash
crontab -e
# Add this line:
0 2 * * * /path/to/tendsocial-ai/scripts/backup-db.sh >> /var/log/tendsocial-backup.log 2>&1

Cloud-Managed Backups

Google Cloud SQL

bash
# Create automated backup policy
gcloud sql instances patch tendsocial-db \
  --backup-start-time=02:00 \
  --retained-backups-count=30 \
  --retained-transaction-log-days=7

# Manual on-demand backup
gcloud sql backups create \
  --instance=tendsocial-db \
  --description="Pre-migration backup"

# List backups
gcloud sql backups list --instance=tendsocial-db

# Restore from backup
gcloud sql backups restore BACKUP_ID \
  --restore-instance=tendsocial-db

AWS RDS

bash
# Automated backups (via AWS Console or CloudFormation)
# - Retention: 30 days
# - Backup window: 02:00-04:00 UTC
# - Point-in-time recovery enabled

# Manual snapshot
aws rds create-db-snapshot \
  --db-instance-identifier tendsocial-db \
  --db-snapshot-identifier tendsocial-manual-$(date +%Y%m%d)

# List snapshots
aws rds describe-db-snapshots \
  --db-instance-identifier tendsocial-db

# Restore from snapshot
aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier tendsocial-db-restored \
  --db-snapshot-identifier SNAPSHOT_ID

Supabase

  • Automatic daily backups included in paid plans
  • Point-in-time recovery (PITR) available
  • Manual backups via Dashboard → Database → Backups

Prisma-Specific Considerations

Export Schema

bash
# Refresh the schema from the live database, then save a copy
# (prisma db pull writes to prisma/schema.prisma; it does not print the schema)
pnpm exec prisma db pull
cp prisma/schema.prisma backup/schema_$(date +%Y%m%d).prisma

# Export migration history
cp -r prisma/migrations backup/migrations_$(date +%Y%m%d)

Seed Data Export

If you have important seed data:

typescript
// scripts/export-seed-data.ts
import { PrismaClient } from '@prisma/client';
import fs from 'fs';

const prisma = new PrismaClient();

async function exportData() {
  const data = {
    users: await prisma.user.findMany(),
    companies: await prisma.company.findMany(),
    // ... other models
  };

  // Colons in ISO timestamps are awkward in filenames; replace them
  const stamp = new Date().toISOString().replace(/[:.]/g, '-');
  fs.writeFileSync(
    `backup/seed_data_${stamp}.json`,
    JSON.stringify(data, null, 2)
  );
}

exportData()
  .catch((err) => {
    console.error(err);
    process.exitCode = 1;
  })
  .finally(() => prisma.$disconnect());

Run with:

bash
pnpm exec ts-node scripts/export-seed-data.ts

File Storage Backup

AWS S3 Backup

Cross-Region Replication

bash
# Set up replication rule (versioning must be enabled on both buckets)
aws s3api put-bucket-replication \
  --bucket tendsocial-assets \
  --replication-configuration file://replication.json

# replication.json
{
  "Role": "arn:aws:iam::ACCOUNT_ID:role/s3-replication-role",
  "Rules": [{
    "Status": "Enabled",
    "Priority": 1,
    "Filter": {},
    "DeleteMarkerReplication": { "Status": "Disabled" },
    "Destination": {
      "Bucket": "arn:aws:s3:::tendsocial-assets-backup",
      "ReplicationTime": {
        "Status": "Enabled",
        "Time": { "Minutes": 15 }
      },
      "Metrics": {
        "Status": "Enabled",
        "EventThreshold": { "Minutes": 15 }
      }
    }
  }]
}

Versioning

bash
# Enable versioning (prevents accidental deletes)
aws s3api put-bucket-versioning \
  --bucket tendsocial-assets \
  --versioning-configuration Status=Enabled

# Lifecycle policy to move old versions to Glacier
aws s3api put-bucket-lifecycle-configuration \
  --bucket tendsocial-assets \
  --lifecycle-configuration file://lifecycle.json

Manual S3 Backup

bash
# Sync to backup bucket
aws s3 sync s3://tendsocial-assets s3://tendsocial-assets-backup \
  --storage-class STANDARD_IA

# Download locally
aws s3 sync s3://tendsocial-assets ./local-backup/s3 \
  --exclude "*.tmp" \
  --exclude "thumbnails/*"

Cloudflare R2 Backup

bash
# Using rclone (install from rclone.org)
rclone sync r2:tendsocial-assets /backup/r2 \
  --progress \
  --transfers 10

# Or sync to another cloud provider
rclone sync r2:tendsocial-assets s3:backup-bucket/r2

Local Development Backup

bash
# Backup public/uploads directory
tar -czf backup/uploads_$(date +%Y%m%d).tar.gz public/uploads

# Rsync to external drive
rsync -avz --delete public/uploads/ /mnt/external/tendsocial-uploads/

Configuration Backup

Environment Variables

DO NOT commit .env files to Git!

Secure Backup Method

bash
# Encrypt environment files
gpg --symmetric --cipher-algo AES256 backend/.env
# Creates backend/.env.gpg

# Upload encrypted file to secure storage
aws s3 cp backend/.env.gpg s3://tendsocial-secrets/env/backend.env.gpg

# To restore:
aws s3 cp s3://tendsocial-secrets/env/backend.env.gpg backend/.env.gpg
gpg --decrypt backend/.env.gpg > backend/.env
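
Where GPG is unavailable, OpenSSL offers comparable symmetric encryption. This is an alternative to the GPG workflow above, not the document's standard method; the file name and passphrase are illustrative, and in practice the passphrase would come from a password manager or secret store:

```bash
#!/bin/bash
set -euo pipefail

# Illustrative env file and passphrase.
echo 'JWT_SECRET=example-value' > demo.env
export BACKUP_PASSPHRASE='correct horse battery staple'

# Encrypt (AES-256-CBC with PBKDF2 key derivation)
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -in demo.env -out demo.env.enc \
    -pass env:BACKUP_PASSPHRASE

# Decrypt to verify the round trip
openssl enc -d -aes-256-cbc -pbkdf2 \
    -in demo.env.enc -out demo.env.restored \
    -pass env:BACKUP_PASSPHRASE

diff demo.env demo.env.restored && echo 'round trip OK'
```

Passing the passphrase via `env:` keeps it out of the process argument list and shell history.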

Google Secret Manager:

bash
# Store secret
echo -n "your-jwt-secret" | gcloud secrets create jwt-secret --data-file=-

# Access in production
gcloud secrets versions access latest --secret="jwt-secret"

AWS Secrets Manager:

bash
# Store secret
aws secretsmanager create-secret \
  --name tendsocial/jwt-secret \
  --secret-string "your-jwt-secret"

# Retrieve secret
aws secretsmanager get-secret-value \
  --secret-id tendsocial/jwt-secret \
  --query SecretString --output text

Git Configuration Backup

bash
# Export Git integration settings from database
# (Assuming stored in `integrations` table)

# Create backup script
cat > scripts/backup-integrations.sh << 'EOF'
#!/bin/bash
mkdir -p backup
psql "$DATABASE_URL" -c "COPY (SELECT * FROM integrations) TO STDOUT WITH CSV HEADER" > backup/integrations_$(date +%Y%m%d).csv
EOF

Restore Procedures

Database Restore

PostgreSQL Restore

bash
# From SQL dump
gunzip < backup_20250124.sql.gz | psql -h localhost -U tendsocial -d tendsocial

# From custom format (parallel restore; add --clean --if-exists to drop existing objects first)
pg_restore -h localhost -U tendsocial -d tendsocial -j 4 backup_20250124.dump

# Restore specific table only
pg_restore -h localhost -U tendsocial -d tendsocial -t users backup_20250124.dump

Cloud SQL Restore (Google Cloud)

bash
# List available backups
gcloud sql backups list --instance=tendsocial-db

# Restore (BACKUP_ID comes from the list above)
gcloud sql backups restore BACKUP_ID \
  --restore-instance=tendsocial-db

AWS RDS Point-in-Time Restore

bash
# Restore to specific time
aws rds restore-db-instance-to-point-in-time \
  --source-db-instance-identifier tendsocial-db \
  --target-db-instance-identifier tendsocial-db-restored \
  --restore-time 2025-01-24T02:00:00Z

File Storage Restore

bash
# Restore from S3 backup
aws s3 sync s3://tendsocial-assets-backup s3://tendsocial-assets

# Restore locally
aws s3 sync s3://tendsocial-assets ./public/uploads

Full System Restore

Step-by-Step Recovery

  1. Provision Infrastructure

    bash
    # Deploy backend (Docker)
    docker pull gcr.io/PROJECT_ID/tendsocial-backend:latest
    
    # Or redeploy to Cloud Run
    gcloud run deploy tendsocial-backend --image=gcr.io/PROJECT_ID/tendsocial-backend:latest
  2. Restore Database

    bash
    # Restore from most recent backup
    gcloud sql backups restore BACKUP_ID --restore-instance=tendsocial-db
  3. Restore Files

    bash
    # Verify S3 replication is active or sync from backup
    aws s3 sync s3://tendsocial-assets-backup s3://tendsocial-assets
  4. Restore Configuration

    bash
    # Decrypt and restore environment variables
    gpg --decrypt backend/.env.gpg > backend/.env
    
    # Or pull from Secret Manager
    gcloud secrets versions access latest --secret="jwt-secret" > .secrets/jwt
  5. Verify Application

    bash
    # Check health endpoint
    curl https://api.tendsocial.com/health
    
    # Verify database connection
    pnpm exec prisma db pull
    
    # Test critical flows
    # - User login
    # - Content creation
    # - File uploads

Disaster Recovery

Recovery Time Objective (RTO) & Recovery Point Objective (RPO)

| Scenario | RTO | RPO | Strategy |
| --- | --- | --- | --- |
| Database corruption | < 1 hour | < 15 minutes | Automated backups + PITR |
| File storage loss | < 30 minutes | < 1 hour | Cross-region replication |
| Full system failure | < 2 hours | < 1 hour | Infrastructure as Code + backups |
| Regional outage | < 4 hours | < 1 hour | Multi-region deployment |
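
An RPO target is only met if the newest backup is never older than the objective, which a freshness check can enforce. A sketch using the 15-minute database RPO; the directory and file names are illustrative, and it assumes GNU `stat`/`touch`:

```bash
#!/bin/bash
set -euo pipefail

# RPO threshold in seconds (15 minutes, matching the database-corruption row).
RPO_SECONDS=$((15 * 60))

# Illustrative: one fresh and one stale "backup" to check.
mkdir -p demo-backups
touch demo-backups/fresh.sql.gz
touch -d '2 hours ago' demo-backups/stale.sql.gz

check_freshness() {
    local file="$1"
    local age=$(( $(date +%s) - $(stat -c %Y "$file") ))
    if [ "$age" -le "$RPO_SECONDS" ]; then
        echo "WITHIN RPO: $file (${age}s old)"
    else
        echo "RPO BREACH: $file (${age}s old)"
    fi
}

check_freshness demo-backups/fresh.sql.gz
check_freshness demo-backups/stale.sql.gz
```

Wired into monitoring, the breach branch would page the on-call engineer instead of printing.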

Emergency Contacts

Maintain a list of:

  • Database admin credentials (stored in password manager)
  • Cloud provider support contacts
  • On-call engineer rotation
  • Escalation procedures

Disaster Recovery Plan

1. Detection

  • Automated monitoring alerts
  • User reports
  • System health checks

2. Assessment

  • Determine scope of failure
  • Identify affected systems
  • Estimate impact (users, data)

3. Communication

  • Notify team via Slack/Discord
  • Post status page update
  • Email affected customers (if applicable)

4. Recovery

  • Execute restore procedures (see above)
  • Verify data integrity
  • Test critical functionality

5. Post-Mortem

  • Document incident timeline
  • Identify root cause
  • Implement preventive measures
  • Update DR plan

Backup Automation

Complete Backup Script

Create scripts/full-backup.sh:

bash
#!/bin/bash
set -euo pipefail

BACKUP_DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_ROOT="/var/backups/tendsocial/$BACKUP_DATE"
LOG_FILE="/var/log/tendsocial-backup.log"

log() {
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}

log "=== Starting full backup ==="

# Create backup directory
mkdir -p "$BACKUP_ROOT"

# 1. Database backup
log "Backing up database..."
pg_dump -h "$DB_HOST" -U "$DB_USER" -d "$DB_NAME" | gzip > "$BACKUP_ROOT/database.sql.gz"

# 2. Prisma schema
log "Backing up Prisma schema..."
cp prisma/schema.prisma "$BACKUP_ROOT/schema.prisma"
cp -r prisma/migrations "$BACKUP_ROOT/migrations"

# 3. Configuration (encrypted)
log "Backing up configuration..."
gpg --symmetric --cipher-algo AES256 --batch --yes --pinentry-mode loopback --passphrase="$BACKUP_PASSPHRASE" backend/.env
mv backend/.env.gpg "$BACKUP_ROOT/env.gpg"

# 4. Upload to S3
log "Uploading to S3..."
aws s3 sync "$BACKUP_ROOT" "s3://tendsocial-backups/$BACKUP_DATE/" --storage-class STANDARD_IA

# 5. Cleanup old local backups (keep 7 days)
log "Cleaning up old backups..."
find /var/backups/tendsocial -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} +

# 6. Verify backup
log "Verifying backup..."
aws s3 ls "s3://tendsocial-backups/$BACKUP_DATE/" --recursive

log "=== Backup complete ==="

Scheduling with Cron

bash
# Edit crontab
crontab -e

# Daily full backup at 2 AM
0 2 * * * /path/to/scripts/full-backup.sh

# Hourly incremental backups (if supported)
0 * * * * /path/to/scripts/incremental-backup.sh

# Weekly cleanup of old S3 backups
0 3 * * 0 /path/to/scripts/cleanup-old-backups.sh
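
The `cleanup-old-backups.sh` script referenced above is not shown; one possible sketch of its date-filtering core follows. It assumes the `YYYYMMDD_HHMMSS/` prefix naming used by `full-backup.sh`, uses an illustrative prefix list in place of live `aws s3 ls` output, and leaves the actual deletion commented out:

```bash
#!/bin/bash
set -euo pipefail

# Keep this many days of S3 backups (matches the 30-day retention used above).
RETENTION_DAYS=30
CUTOFF=$(date -d "$RETENTION_DAYS days ago" +%Y%m%d)

# In production these prefixes would come from:
#   aws s3 ls s3://tendsocial-backups/ | awk '{print $2}'
# Here an illustrative list stands in for that output.
PREFIXES="20240101_020000/
$(date +%Y%m%d)_020000/"

echo "$PREFIXES" | while read -r prefix; do
    day="${prefix%%_*}"          # strip _HHMMSS/ to get YYYYMMDD
    if [ "$day" -lt "$CUTOFF" ]; then
        echo "would delete: $prefix"
        # aws s3 rm "s3://tendsocial-backups/$prefix" --recursive
    else
        echo "keeping: $prefix"
    fi
done
```

Running it in this dry-run form first, and only then uncommenting the `aws s3 rm`, avoids deleting backups on a date-math mistake.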

Kubernetes CronJob (if using K8s)

yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: tendsocial-backup
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup
            image: postgres:14  # NOTE: the stock postgres image lacks the AWS CLI; use a custom image bundling pg_dump and aws
            env:
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password
            command:
            - /bin/sh
            - -c
            - |
              pg_dump -h $DB_HOST -U $DB_USER $DB_NAME | gzip | aws s3 cp - s3://backups/db_$(date +%Y%m%d).sql.gz
          restartPolicy: OnFailure

Testing Backups

Monthly Backup Verification Checklist

  • [ ] Download latest backup
  • [ ] Restore to test environment
  • [ ] Verify database integrity (row counts, key records)
  • [ ] Test application functionality
  • [ ] Verify file storage accessibility
  • [ ] Check backup size (ensure not corrupted)
  • [ ] Review backup logs for errors
  • [ ] Document restore time
  • [ ] Update DR plan if needed

Automated Backup Testing

bash
#!/bin/bash
# scripts/test-backup.sh
set -euo pipefail

LATEST_BACKUP=$(aws s3 ls s3://tendsocial-backups/ | sort | tail -n 1 | awk '{print $2}')

echo "Testing backup: $LATEST_BACKUP"

# Download backup
aws s3 sync "s3://tendsocial-backups/$LATEST_BACKUP" ./test-restore/

# Restore to test database
gunzip < ./test-restore/database.sql.gz | psql -h localhost -U test_user -d test_db

# Run verification queries
psql -h localhost -U test_user -d test_db -c "SELECT COUNT(*) FROM users;"
psql -h localhost -U test_user -d test_db -c "SELECT COUNT(*) FROM companies;"

echo "Backup test complete"

Best Practices

  1. Automate Everything

    • Manual backups are forgotten
    • Use cron, GitHub Actions, or cloud-native schedulers
  2. Encrypt Sensitive Backups

    • Database dumps contain user data
    • Use GPG, cloud encryption, or vault services
  3. Monitor Backup Jobs

    • Alert on failures
    • Track backup sizes (sudden changes indicate issues)
  4. Test Restores Regularly

    • A backup you can't restore is useless
    • Schedule quarterly DR drills
  5. Document Everything

    • Update this guide as infrastructure changes
    • Include access credentials locations (password manager)
  6. Version Backups

    • Keep multiple generations (daily, weekly, monthly)
    • Balance retention vs. storage costs
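
The daily/weekly/monthly generations can be expressed as a simple classification rule over a backup's date. A sketch assuming GNU `date`; the choice of the 1st of the month for monthly archives and Sunday for weekly fulls is illustrative:

```bash
#!/bin/bash
set -euo pipefail

# Classify a backup date (YYYY-MM-DD) into the generation that retains it:
# monthly archives (1st of month), weekly fulls (Sundays), daily otherwise.
classify() {
    local d="$1"
    if [ "$(date -d "$d" +%d)" = "01" ]; then
        echo monthly
    elif [ "$(date -d "$d" +%u)" = "7" ]; then   # ISO weekday: 7 = Sunday
        echo weekly
    else
        echo daily
    fi
}

classify 2025-06-01   # monthly (1st of the month)
classify 2025-06-08   # weekly  (a Sunday)
classify 2025-06-10   # daily
```

A retention job would then prune daily backups after a week, weekly after a month, and monthly after a year, trading restore granularity against storage cost.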


Last Updated: November 2025
Next Review: February 2026

TendSocial Documentation