This document outlines backup strategies, restore procedures, and disaster recovery plans for TendSocial AI.
Table of Contents
- Overview
- What to Back Up
- Database Backup
- File Storage Backup
- Configuration Backup
- Restore Procedures
- Disaster Recovery
- Backup Automation
- Testing Backups
Overview
A comprehensive backup strategy is essential for:
- Data Protection: Prevent data loss from hardware failure, bugs, or attacks
- Compliance: Meet data retention requirements
- Peace of Mind: Recover from disasters with minimal downtime
- Version Control: Restore to previous states if needed
Backup Principles
3-2-1 Rule:
- 3 copies of data
- 2 different storage types
- 1 off-site backup
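The rule can be exercised end to end with a throwaway sketch; every path below is a placeholder (in production the second copy would live on an external disk and the third in an off-site bucket such as S3):

```shell
# Minimal 3-2-1 walkthrough with placeholder directories.
set -e
WORK=$(mktemp -d)
PRIMARY="$WORK/primary"     # copy 1: the live data
SECOND="$WORK/second-disk"  # copy 2: a different storage type (e.g. external drive)
OFFSITE="$WORK/offsite"     # copy 3: stand-in for an off-site bucket
mkdir -p "$PRIMARY" "$SECOND" "$OFFSITE"

echo "critical data" > "$PRIMARY/db_backup.sql"

cp "$PRIMARY/db_backup.sql" "$SECOND/"   # in production: rsync to an external drive
cp "$PRIMARY/db_backup.sql" "$OFFSITE/"  # in production: aws s3 cp to a backup bucket

# All three copies must exist and match byte for byte
cmp -s "$PRIMARY/db_backup.sql" "$SECOND/db_backup.sql"
cmp -s "$PRIMARY/db_backup.sql" "$OFFSITE/db_backup.sql"
echo "3-2-1 check passed"
```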
Regular Schedule:
- Daily automated backups (minimum)
- Weekly full backups
- Monthly archives for long-term retention
Test Regularly:
- Monthly restore drills
- Verify backup integrity
- Document recovery time
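Integrity verification is easy to script; a minimal sketch (filenames illustrative) pairs gzip's structural check with a checksum recorded at backup time:

```shell
# Sketch: verify a compressed backup both structurally and byte-for-byte.
set -e
WORK=$(mktemp -d)
echo "SELECT 1;" > "$WORK/dump.sql"
gzip -c "$WORK/dump.sql" > "$WORK/dump.sql.gz"

# Record a checksum at backup time...
sha256sum "$WORK/dump.sql.gz" > "$WORK/dump.sql.gz.sha256"

# ...then check it during the drill
gzip -t "$WORK/dump.sql.gz"              # archive is well-formed
sha256sum -c "$WORK/dump.sql.gz.sha256"  # bytes are unchanged
echo "integrity OK"
```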
What to Back Up
Critical Data
Database
- User accounts and profiles
- Companies and teams
- Brand profiles and settings
- Content (blogs, videos, social posts)
- Campaigns and calendars
- Analytics data
User-Generated Files
- Uploaded images and media
- Generated cover images
- Thumbnails and social graphics
- Logos and brand assets
Configuration
- Environment variables (encrypted)
- Integration settings
- API configurations
- Git repository connections
Application Code
- Already in Git
- Ensure proper branching strategy
- Tag stable releases
What NOT to Back Up
- node_modules/: reproducible from package.json
- Build artifacts (dist/): regenerated on deploy
- Logs older than 30 days: archive separately if needed
- Temporary files and caches
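The exclusion list translates directly into tar exclude patterns; a sketch against a synthetic project tree:

```shell
# Sketch: archive a project tree while honoring the exclusion list above.
set -e
WORK=$(mktemp -d)
mkdir -p "$WORK/app/node_modules" "$WORK/app/dist" "$WORK/app/src"
echo "code" > "$WORK/app/src/index.ts"
echo "dep"  > "$WORK/app/node_modules/pkg.js"
echo "out"  > "$WORK/app/dist/bundle.js"

tar --exclude='app/node_modules' \
    --exclude='app/dist' \
    --exclude='*.log' \
    -czf "$WORK/app.tar.gz" -C "$WORK" app

tar -tzf "$WORK/app.tar.gz"   # source survives; node_modules and dist do not
```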
Database Backup
PostgreSQL Backup (Production)
Manual Backup
# Full database dump
pg_dump -h localhost -U tendsocial -d tendsocial > backup_$(date +%Y%m%d_%H%M%S).sql
# Compressed backup (recommended)
pg_dump -h localhost -U tendsocial -d tendsocial | gzip > backup_$(date +%Y%m%d_%H%M%S).sql.gz
# Backup to custom format (faster restore, parallel)
pg_dump -h localhost -U tendsocial -d tendsocial -Fc -f backup_$(date +%Y%m%d_%H%M%S).dump
Automated Daily Backup Script
Create scripts/backup-db.sh:
#!/bin/bash
set -o pipefail  # without this, a failed pg_dump is masked by gzip succeeding
# Configuration
DB_HOST="${DB_HOST:-localhost}"
DB_USER="${DB_USER:-tendsocial}"
DB_NAME="${DB_NAME:-tendsocial}"
BACKUP_DIR="${BACKUP_DIR:-/var/backups/tendsocial}"
RETENTION_DAYS=30
# Create backup directory
mkdir -p "$BACKUP_DIR"
# Generate backup filename with timestamp
BACKUP_FILE="$BACKUP_DIR/db_backup_$(date +%Y%m%d_%H%M%S).sql.gz"
# Perform backup
echo "Starting backup at $(date)"
pg_dump -h "$DB_HOST" -U "$DB_USER" -d "$DB_NAME" | gzip > "$BACKUP_FILE"
if [ $? -eq 0 ]; then
echo "Backup successful: $BACKUP_FILE"
# Calculate size
SIZE=$(du -h "$BACKUP_FILE" | cut -f1)
echo "Backup size: $SIZE"
# Delete old backups
find "$BACKUP_DIR" -name "db_backup_*.sql.gz" -type f -mtime +$RETENTION_DAYS -delete
echo "Deleted backups older than $RETENTION_DAYS days"
else
echo "Backup failed!"
exit 1
fi
Make it executable:
chmod +x scripts/backup-db.sh
Add to crontab for daily backups at 2 AM:
crontab -e
# Add this line:
0 2 * * * /path/to/tendsocial-ai/scripts/backup-db.sh >> /var/log/tendsocial-backup.log 2>&1
Cloud-Managed Backups
Google Cloud SQL
# Create automated backup policy
gcloud sql instances patch tendsocial-db \
--backup-start-time=02:00 \
--retained-backups-count=30 \
--retained-transaction-log-days=7
# Manual on-demand backup
gcloud sql backups create \
--instance=tendsocial-db \
--description="Pre-migration backup"
# List backups
gcloud sql backups list --instance=tendsocial-db
# Restore from backup
gcloud sql backups restore BACKUP_ID \
--restore-instance=tendsocial-db
AWS RDS
# Automated backups (via AWS Console or CloudFormation)
# - Retention: 30 days
# - Backup window: 02:00-04:00 UTC
# - Point-in-time recovery enabled
# Manual snapshot
aws rds create-db-snapshot \
--db-instance-identifier tendsocial-db \
--db-snapshot-identifier tendsocial-manual-$(date +%Y%m%d)
# List snapshots
aws rds describe-db-snapshots \
--db-instance-identifier tendsocial-db
# Restore from snapshot
aws rds restore-db-instance-from-db-snapshot \
--db-instance-identifier tendsocial-db-restored \
--db-snapshot-identifier SNAPSHOT_ID
Supabase
- Automatic daily backups included in paid plans
- Point-in-time recovery (PITR) available
- Manual backups via Dashboard → Database → Backups
Prisma-Specific Considerations
Export Schema
# Save current schema (--print writes the introspected schema to stdout)
pnpm exec prisma db pull --print > backup/schema_$(date +%Y%m%d).prisma
# Export migration history
cp -r prisma/migrations backup/migrations_$(date +%Y%m%d)
Seed Data Export
If you have important seed data:
// scripts/export-seed-data.ts
import { PrismaClient } from '@prisma/client';
import fs from 'fs';
const prisma = new PrismaClient();
async function exportData() {
const data = {
users: await prisma.user.findMany(),
companies: await prisma.company.findMany(),
// ... other models
};
fs.writeFileSync(
`backup/seed_data_${new Date().toISOString()}.json`,
JSON.stringify(data, null, 2)
);
}
exportData().finally(() => prisma.$disconnect());
Run with:
pnpm exec ts-node scripts/export-seed-data.ts
File Storage Backup
AWS S3 Backup
Cross-Region Replication
# Set up replication rule (via AWS Console or CLI)
aws s3api put-bucket-replication \
--bucket tendsocial-assets \
--replication-configuration file://replication.json
# replication.json
{
"Role": "arn:aws:iam::ACCOUNT_ID:role/s3-replication-role",
"Rules": [{
"Status": "Enabled",
"Priority": 1,
"Destination": {
"Bucket": "arn:aws:s3:::tendsocial-assets-backup",
"ReplicationTime": {
"Status": "Enabled",
"Time": { "Minutes": 15 }
}
}
}]
}
Versioning
# Enable versioning (prevents accidental deletes)
aws s3api put-bucket-versioning \
--bucket tendsocial-assets \
--versioning-configuration Status=Enabled
# Lifecycle policy to move old versions to Glacier
aws s3api put-bucket-lifecycle-configuration \
--bucket tendsocial-assets \
--lifecycle-configuration file://lifecycle.json
Manual S3 Backup
# Sync to backup bucket
aws s3 sync s3://tendsocial-assets s3://tendsocial-assets-backup \
--storage-class STANDARD_IA
# Download locally
aws s3 sync s3://tendsocial-assets ./local-backup/s3 \
--exclude "*.tmp" \
--exclude "thumbnails/*"
Cloudflare R2 Backup
# Using rclone (install from rclone.org)
rclone sync r2:tendsocial-assets /backup/r2 \
--progress \
--transfers 10
# Or sync to another cloud provider
rclone sync r2:tendsocial-assets s3:backup-bucket/r2
Local Development Backup
# Backup public/uploads directory
tar -czf backup/uploads_$(date +%Y%m%d).tar.gz public/uploads
# Rsync to external drive
rsync -avz --delete public/uploads/ /mnt/external/tendsocial-uploads/
Configuration Backup
Environment Variables
DO NOT commit .env files to Git!
Secure Backup Method
# Encrypt environment files
gpg --symmetric --cipher-algo AES256 backend/.env
# Creates backend/.env.gpg
# Upload encrypted file to secure storage
aws s3 cp backend/.env.gpg s3://tendsocial-secrets/env/backend.env.gpg
# To restore:
aws s3 cp s3://tendsocial-secrets/env/backend.env.gpg backend/.env.gpg
gpg --decrypt backend/.env.gpg > backend/.env
Use Secret Managers (Recommended)
Google Secret Manager:
# Store secret
echo -n "your-jwt-secret" | gcloud secrets create jwt-secret --data-file=-
# Access in production
gcloud secrets versions access latest --secret="jwt-secret"
AWS Secrets Manager:
# Store secret
aws secretsmanager create-secret \
--name tendsocial/jwt-secret \
--secret-string "your-jwt-secret"
# Retrieve secret
aws secretsmanager get-secret-value \
--secret-id tendsocial/jwt-secret \
--query SecretString --output text
Git Configuration Backup
# Export Git integration settings from database
# (Assuming stored in `integrations` table)
# Create backup script
cat > scripts/backup-integrations.sh << 'EOF'
#!/bin/bash
psql "$DATABASE_URL" -c "COPY (SELECT * FROM integrations) TO STDOUT CSV HEADER" > backup/integrations_$(date +%Y%m%d).csv
EOF
Restore Procedures
Database Restore
PostgreSQL Restore
# From SQL dump
gunzip < backup_20250124.sql.gz | psql -h localhost -U tendsocial -d tendsocial
# From custom format (parallel restore)
pg_restore -h localhost -U tendsocial -d tendsocial -j 4 backup_20250124.dump
# Restore specific table only
pg_restore -h localhost -U tendsocial -d tendsocial -t users backup_20250124.dump
Cloud SQL Restore (Google Cloud)
# List available backups
gcloud sql backups list --instance=tendsocial-db
# Restore
gcloud sql backups restore BACKUP_ID \
--restore-instance=tendsocial-db
AWS RDS Point-in-Time Restore
# Restore to specific time
aws rds restore-db-instance-to-point-in-time \
--source-db-instance-identifier tendsocial-db \
--target-db-instance-identifier tendsocial-db-restored \
--restore-time 2025-01-24T02:00:00Z
File Storage Restore
# Restore from S3 backup
aws s3 sync s3://tendsocial-assets-backup s3://tendsocial-assets
# Restore locally
aws s3 sync s3://tendsocial-assets ./public/uploads
Full System Restore
Step-by-Step Recovery
Provision Infrastructure
# Deploy backend (Docker)
docker pull gcr.io/PROJECT_ID/tendsocial-backend:latest
# Or redeploy to Cloud Run
gcloud run deploy tendsocial-backend --image=gcr.io/PROJECT_ID/tendsocial-backend:latest
Restore Database
# Restore from most recent backup
gcloud sql backups restore BACKUP_ID --restore-instance=tendsocial-db
Restore Files
# Verify S3 replication is active or sync from backup
aws s3 sync s3://tendsocial-assets-backup s3://tendsocial-assets
Restore Configuration
# Decrypt and restore environment variables
gpg --decrypt backend/.env.gpg > backend/.env
# Or pull from Secret Manager
gcloud secrets versions access latest --secret="jwt-secret" > .secrets/jwt
Verify Application
# Check health endpoint
curl https://api.tendsocial.com/health
# Verify database connection
pnpm exec prisma db pull
# Test critical flows
# - User login
# - Content creation
# - File uploads
Disaster Recovery
Recovery Time Objective (RTO) & Recovery Point Objective (RPO)
| Scenario | RTO | RPO | Strategy |
|---|---|---|---|
| Database corruption | < 1 hour | < 15 minutes | Automated backups + PITR |
| File storage loss | < 30 minutes | < 1 hour | Cross-region replication |
| Full system failure | < 2 hours | < 1 hour | Infrastructure as Code + Backups |
| Regional outage | < 4 hours | < 1 hour | Multi-region deployment |
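The RPO column translates directly into a freshness check on the newest backup; a sketch of such a monitor (GNU stat assumed, paths are placeholders):

```shell
# Sketch: fail loudly when the newest backup is older than the RPO target.
set -e
BACKUP_DIR=$(mktemp -d)               # placeholder for /var/backups/tendsocial
RPO_SECONDS=$((15 * 60))              # 15-minute target from the table above

touch "$BACKUP_DIR/db_backup_now.sql.gz"   # pretend a backup just completed

NEWEST=$(ls -t "$BACKUP_DIR" | head -n 1)
AGE=$(( $(date +%s) - $(stat -c %Y "$BACKUP_DIR/$NEWEST") ))  # GNU stat

if [ "$AGE" -gt "$RPO_SECONDS" ]; then
  echo "ALERT: newest backup is ${AGE}s old (RPO target ${RPO_SECONDS}s)" >&2
  exit 1
fi
echo "RPO OK: newest backup is ${AGE}s old"
```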
Emergency Contacts
Maintain a list of:
- Database admin credentials (stored in password manager)
- Cloud provider support contacts
- On-call engineer rotation
- Escalation procedures
Disaster Recovery Plan
1. Detection
- Automated monitoring alerts
- User reports
- System health checks
2. Assessment
- Determine scope of failure
- Identify affected systems
- Estimate impact (users, data)
3. Communication
- Notify team via Slack/Discord
- Post status page update
- Email affected customers (if applicable)
4. Recovery
- Execute restore procedures (see above)
- Verify data integrity
- Test critical functionality
5. Post-Mortem
- Document incident timeline
- Identify root cause
- Implement preventive measures
- Update DR plan
Backup Automation
Complete Backup Script
Create scripts/full-backup.sh:
#!/bin/bash
set -eo pipefail
BACKUP_DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_ROOT="/var/backups/tendsocial/$BACKUP_DATE"
LOG_FILE="/var/log/tendsocial-backup.log"
log() {
echo "[$(date +'%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
log "=== Starting full backup ==="
# Create backup directory
mkdir -p "$BACKUP_ROOT"
# 1. Database backup
log "Backing up database..."
pg_dump -h "$DB_HOST" -U "$DB_USER" -d "$DB_NAME" | gzip > "$BACKUP_ROOT/database.sql.gz"
# 2. Prisma schema
log "Backing up Prisma schema..."
cp prisma/schema.prisma "$BACKUP_ROOT/schema.prisma"
cp -r prisma/migrations "$BACKUP_ROOT/migrations"
# 3. Configuration (encrypted)
log "Backing up configuration..."
gpg --symmetric --cipher-algo AES256 --batch --yes --pinentry-mode loopback --passphrase "$BACKUP_PASSPHRASE" backend/.env
mv backend/.env.gpg "$BACKUP_ROOT/env.gpg"
# 4. Upload to S3
log "Uploading to S3..."
aws s3 sync "$BACKUP_ROOT" "s3://tendsocial-backups/$BACKUP_DATE/" --storage-class STANDARD_IA
# 5. Cleanup old local backups (keep 7 days)
log "Cleaning up old backups..."
find /var/backups/tendsocial -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} +
# 6. Verify backup
log "Verifying backup..."
aws s3 ls "s3://tendsocial-backups/$BACKUP_DATE/" --recursive
log "=== Backup complete ==="
Scheduling with Cron
# Edit crontab
crontab -e
# Daily full backup at 2 AM
0 2 * * * /path/to/scripts/full-backup.sh
# Hourly incremental backups (if supported)
0 * * * * /path/to/scripts/incremental-backup.sh
# Weekly cleanup of old S3 backups
0 3 * * 0 /path/to/scripts/cleanup-old-backups.sh
Kubernetes CronJob (if using K8s)
apiVersion: batch/v1
kind: CronJob
metadata:
  name: tendsocial-backup
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: backup
              # NOTE: the stock postgres image has no AWS CLI; use an image
              # that bundles both pg_dump and the aws command
              image: postgres:14
              env:
                - name: PGPASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: db-credentials
                      key: password
              command:
                - /bin/sh
                - -c
                - |
                  pg_dump -h $DB_HOST -U $DB_USER $DB_NAME | gzip | aws s3 cp - s3://backups/db_$(date +%Y%m%d).sql.gz
          restartPolicy: OnFailure
Testing Backups
Monthly Backup Verification Checklist
- [ ] Download latest backup
- [ ] Restore to test environment
- [ ] Verify database integrity (row counts, key records)
- [ ] Test application functionality
- [ ] Verify file storage accessibility
- [ ] Check backup size (ensure not corrupted)
- [ ] Review backup logs for errors
- [ ] Document restore time
- [ ] Update DR plan if needed
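The "check backup size" item can be automated; a sketch that flags a sudden shrink between the two newest dumps (GNU stat assumed, sizes and the 50% threshold are arbitrary):

```shell
# Sketch: compare the two newest backups; a sudden shrink usually means a
# truncated or failed dump. Sizes here are synthetic.
set -e
WORK=$(mktemp -d)
head -c 100000 /dev/zero > "$WORK/db_backup_old.sql.gz"
head -c 1000   /dev/zero > "$WORK/db_backup_new.sql.gz"

OLD=$(stat -c %s "$WORK/db_backup_old.sql.gz")  # GNU stat
NEW=$(stat -c %s "$WORK/db_backup_new.sql.gz")

STATUS=ok
if [ "$NEW" -lt $((OLD / 2)) ]; then
  STATUS=suspect
  echo "WARNING: backup shrank from ${OLD} to ${NEW} bytes" >&2
fi
echo "size check: $STATUS"
```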
Automated Backup Testing
#!/bin/bash
# scripts/test-backup.sh
LATEST_BACKUP=$(aws s3 ls s3://tendsocial-backups/ | sort | tail -n 1 | awk '{print $2}')
echo "Testing backup: $LATEST_BACKUP"
# Download backup
aws s3 sync "s3://tendsocial-backups/$LATEST_BACKUP" ./test-restore/
# Restore to test database
gunzip < ./test-restore/database.sql.gz | psql -h localhost -U test_user -d test_db
# Run verification queries
psql -h localhost -U test_user -d test_db -c "SELECT COUNT(*) FROM users;"
psql -h localhost -U test_user -d test_db -c "SELECT COUNT(*) FROM companies;"
echo "Backup test complete"
Best Practices
Automate Everything
- Manual backups are forgotten
- Use cron, GitHub Actions, or cloud-native schedulers
Encrypt Sensitive Backups
- Database dumps contain user data
- Use GPG, cloud encryption, or vault services
Monitor Backup Jobs
- Alert on failures
- Track backup sizes (sudden changes indicate issues)
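A minimal alert wrapper illustrates the point; the echo stands in for whatever webhook, email, or pager integration is actually in use:

```shell
# Sketch: run any backup command through a wrapper that always surfaces failures.
run_backup() {
  if "$@"; then
    echo "backup OK: $*"
  else
    # In production, replace this echo with a webhook or pager call
    echo "ALERT: backup failed: $*" >&2
    return 1
  fi
}

run_backup true                              # stands in for a real pg_dump run
run_backup false && RESULT=ok || RESULT=alerted
echo "$RESULT"
```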
Test Restores Regularly
- A backup you can't restore is useless
- Schedule quarterly DR drills
Document Everything
- Update this guide as infrastructure changes
- Include access credentials locations (password manager)
Version Backups
- Keep multiple generations (daily, weekly, monthly)
- Balance retention vs. storage costs
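A local retention sweep under these principles might look like the following (GNU touch/find assumed; the 30-day window and paths are placeholders; `-mindepth 1` keeps the root directory itself from ever matching):

```shell
# Sketch: prune backup directories older than the daily retention window.
set -e
BACKUP_ROOT=$(mktemp -d)
RETENTION_DAYS=30

mkdir -p "$BACKUP_ROOT/20250101_020000" "$BACKUP_ROOT/recent"
touch -d "40 days ago" "$BACKUP_ROOT/20250101_020000"  # simulate an expired backup

# -mindepth 1 ensures the root itself is never deleted
find "$BACKUP_ROOT" -mindepth 1 -maxdepth 1 -type d \
  -mtime +"$RETENTION_DAYS" -exec rm -rf {} +

ls "$BACKUP_ROOT"   # only directories inside the window remain
```

Weekly and monthly generations would keep their own directories with longer windows, swept by the same pattern.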
Related Documentation
- Deployment Guide - Deployment procedures
- Tech Stack - Infrastructure details
- Architecture Overview - System design
Last Updated: November 2025
Next Review: February 2026