Flow - Deployment Guide
Complete deployment guide for the Flow Event Discovery Platform across all environments.
Last Updated: March 10, 2026
Version: 1.0.0
Pre-Supabase-migration document
This guide covers deployment of the legacy microservices stack. As of 2026-03-29, Flow is Supabase-only and deployment is drastically simpler:
- Admin portal: push to GitHub → Vercel auto-deploy
- Edge Functions: `npx supabase functions deploy <name>`
- Database: `npx supabase db push` to apply migrations
- Mobile: `flutter build ipa` / `flutter build appbundle` + standard store upload
The sections on Docker Compose, microservice orchestration, MongoDB/Redis setup, and Node.js CI pipelines no longer apply.
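Under the Supabase-only setup, the whole release flow fits in one small script. A minimal sketch (the helper names and the `DRY_RUN` switch are illustrative, not part of the repo):

```shell
#!/usr/bin/env bash
# Sketch of the Supabase-only release flow described above.
# DRY_RUN=1 prints commands instead of executing them.
set -euo pipefail

run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "+ $*"        # print instead of execute
  else
    "$@"
  fi
}

deploy_edge_function() {   # e.g. deploy_edge_function sync-events
  run npx supabase functions deploy "$1"
}

apply_migrations() {
  run npx supabase db push
}

build_mobile() {
  run flutter build ipa
  run flutter build appbundle
}
```

Run with `DRY_RUN=1` first to review what would be executed.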
Full setup is covered in the Architecture Overview.
Table of Contents
- Overview
- Prerequisites
- Environment Configuration
- Local Development Deployment
- Supabase Setup
- Database Setup and Migrations
- Staging Deployment
- Production Deployment
- Admin Portal Deployment (Vercel)
- Backend Services Deployment
- Mobile App Deployment
- Database Backup and Recovery
- Monitoring and Logging
- CI/CD Pipeline
- Security Considerations
- Scaling Strategies
- Troubleshooting
Overview
The Flow platform consists of multiple components:
- Mobile App: Flutter application (iOS & Android)
- Admin Portal: Next.js application deployed on Vercel
- Backend Services: Node.js microservices
- AI Services: Python FastAPI services
- Databases: MongoDB (primary), PostgreSQL (Supabase for admin)
- Cache & Real-time: Redis
- Search: Elasticsearch
- Notifications: Firebase Cloud Messaging
Architecture Deployment Overview
┌─────────────────────────────────────────────────────────────┐
│ Production Layer │
├─────────────────────────────────────────────────────────────┤
│ Mobile Apps Admin Portal (Vercel) Web App (Future) │
└───────┬──────────────┬──────────────────────────┬───────────┘
│ │ │
┌───────┴──────────────┴──────────────────────────┴───────────┐
│ API Gateway / Load Balancer │
│ (GCP Load Balancer or NGINX/Kong) │
└───────┬──────────────────────────────────────────────────────┘
│
┌───────┴──────────────────────────────────────────────────────┐
│ Backend Services Layer │
├─────────┬─────────┬─────────┬─────────┬─────────┬───────────┤
│ User │ Event │ Social │ Notify │Realtime │ AI │
│ Service │ Service │ Service │ Service │ Service │ Services │
└─────┬───┴─────┬───┴─────┬───┴─────┬───┴─────┬───┴─────┬─────┘
│ │ │ │ │ │
┌─────┴─────────┴─────────┴─────────┴─────────┴─────────┴─────┐
│ Data Layer │
├─────────┬──────────┬─────────┬──────────┬────────────────────┤
│ MongoDB │ Redis │ Elastic│ Firebase │ Supabase/Postgres │
│(Primary)│ (Cache) │ Search │ (FCM) │ (Admin Data) │
└─────────┴──────────┴─────────┴──────────┴────────────────────┘
Prerequisites
Required Tools
- Docker 20.10+ and Docker Compose 2.x
- Node.js 18+ and npm 8+
- Python 3.9+ and pip
- Flutter 3.x SDK (for mobile deployment)
- Git 2.x
- kubectl (for Kubernetes deployments)
- gcloud CLI (for GCP deployments)
- vercel CLI (for admin portal)
Cloud Accounts
- Google Cloud Platform (primary hosting)
- Project with billing enabled
- APIs enabled: Compute, Container Registry, Kubernetes Engine, Cloud Storage
- Vercel account (for admin portal)
- Supabase project (for admin authentication)
- Firebase project (for mobile push notifications)
- SendGrid account (for email notifications)
Domain and SSL
- Domain name configured
- SSL certificates (managed by cloud provider or Let’s Encrypt)
- DNS access for subdomain configuration
Environment Configuration
Environment Types
- Development (dev): Local development with Docker Compose
- Staging (stage): Cloud-hosted pre-production environment
- Production (prod): Live production environment
Environment Variables by Service
API Gateway
File: backend/api-gateway/.env
# Development
NODE_ENV=development
PORT=3000
MONGODB_URI=mongodb://admin:password123@mongodb:27017/flow?authSource=admin
REDIS_URL=redis://:redis123@redis:6379
JWT_SECRET=your-super-secret-jwt-key-change-in-production
JWT_REFRESH_SECRET=your-refresh-token-secret
JWT_EXPIRES_IN=15m
JWT_REFRESH_EXPIRES_IN=30d
# Service URLs (internal)
USER_SERVICE_URL=http://user-service:3001
EVENT_SERVICE_URL=http://event-service:3002
SOCIAL_SERVICE_URL=http://social-service:3003
NOTIFICATION_SERVICE_URL=http://notification-service:3004
REALTIME_SERVICE_URL=http://realtime-service:3005
# CORS
ALLOWED_ORIGINS=http://localhost:3000,http://localhost:8080
# Rate Limiting
RATE_LIMIT_WINDOW_MS=900000
RATE_LIMIT_MAX_REQUESTS=100
# Logging
LOG_LEVEL=debug
Staging:
NODE_ENV=staging
PORT=3000
MONGODB_URI=mongodb+srv://user:password@cluster.mongodb.net/flow-staging?retryWrites=true&w=majority
REDIS_URL=redis://:[password]@redis-staging.example.com:6379
JWT_SECRET=[STRONG_SECRET_FROM_SECRET_MANAGER]
JWT_REFRESH_SECRET=[STRONG_REFRESH_SECRET]
ALLOWED_ORIGINS=https://staging.flowapp.com,https://admin-staging.flowapp.com
LOG_LEVEL=info
Production:
NODE_ENV=production
PORT=3000
MONGODB_URI=mongodb+srv://user:password@cluster.mongodb.net/flow-prod?retryWrites=true&w=majority
REDIS_URL=redis://:[password]@redis-prod.example.com:6379
JWT_SECRET=[SECRET_FROM_GCP_SECRET_MANAGER]
JWT_REFRESH_SECRET=[SECRET_FROM_GCP_SECRET_MANAGER]
ALLOWED_ORIGINS=https://flowapp.com,https://admin.flowapp.com
LOG_LEVEL=warn
SENTRY_DSN=[SENTRY_DSN_FOR_ERROR_TRACKING]
User Service
File: backend/user-service/.env
NODE_ENV=development
PORT=3001
MONGODB_URI=mongodb://admin:password123@mongodb:27017/flow?authSource=admin
REDIS_URL=redis://:redis123@redis:6379
JWT_SECRET=your-super-secret-jwt-key-change-in-production
# Email Service
SENDGRID_API_KEY=your-sendgrid-api-key
FROM_EMAIL=noreply@flowapp.com
# Upload limits
MAX_FILE_SIZE=5242880
UPLOAD_ALLOWED_TYPES=image/jpeg,image/png,image/webp
# Storage (Cloud Storage)
STORAGE_PROVIDER=gcs
GCS_BUCKET_NAME=flow-uploads-dev
GCS_PROJECT_ID=flow-project-dev
Event Service
File: backend/event-service/.env
NODE_ENV=development
PORT=3002
MONGODB_URI=mongodb://admin:password123@mongodb:27017/flow?authSource=admin
REDIS_URL=redis://:redis123@redis:6379
ELASTICSEARCH_URL=http://elasticsearch:9200
# External APIs (for event aggregation)
FACEBOOK_EVENTS_API_KEY=optional
EVENTBRITE_API_KEY=optional
Social Service
File: backend/social-service/.env
NODE_ENV=development
PORT=3003
MONGODB_URI=mongodb://admin:password123@mongodb:27017/flow?authSource=admin
REDIS_URL=redis://:redis123@redis:6379
Notification Service
File: backend/notification-service/.env
NODE_ENV=development
PORT=3004
MONGODB_URI=mongodb://admin:password123@mongodb:27017/flow?authSource=admin
REDIS_URL=redis://:redis123@redis:6379
# Firebase (for push notifications)
FIREBASE_PROJECT_ID=your-firebase-project-id
FIREBASE_CLIENT_EMAIL=firebase-adminsdk@your-project.iam.gserviceaccount.com
FIREBASE_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n"
# SendGrid (for email)
SENDGRID_API_KEY=your-sendgrid-api-key
FROM_EMAIL=notifications@flowapp.com
Realtime Service
File: backend/realtime-service/.env
NODE_ENV=development
PORT=3005
MONGODB_URI=mongodb://admin:password123@mongodb:27017/flow?authSource=admin
REDIS_URL=redis://:redis123@redis:6379
# Socket.IO Configuration
SOCKETIO_CORS_ORIGINS=http://localhost:3000,http://localhost:8080
SOCKETIO_ADAPTER=redis
# Redis Adapter (REQUIRED for production horizontal scaling)
REDIS_ADAPTER_ENABLED=true
Admin Portal
File: admin-portal/.env.local
# Supabase
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=your-anon-key
SUPABASE_SERVICE_ROLE_KEY=your-service-role-key
# Backend API
NEXT_PUBLIC_API_URL=http://localhost:3000
NEXT_PUBLIC_WS_URL=http://localhost:3005
# Feature Flags
NEXT_PUBLIC_ENABLE_ANALYTICS=true
NEXT_PUBLIC_ENABLE_AUDIT_LOGS=true
Staging:
NEXT_PUBLIC_SUPABASE_URL=https://staging-project.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=[STAGING_ANON_KEY]
SUPABASE_SERVICE_ROLE_KEY=[STAGING_SERVICE_KEY]
NEXT_PUBLIC_API_URL=https://api-staging.flowapp.com
NEXT_PUBLIC_WS_URL=https://ws-staging.flowapp.com
Production:
NEXT_PUBLIC_SUPABASE_URL=https://prod-project.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=[PROD_ANON_KEY]
SUPABASE_SERVICE_ROLE_KEY=[PROD_SERVICE_KEY]
NEXT_PUBLIC_API_URL=https://api.flowapp.com
NEXT_PUBLIC_WS_URL=https://ws.flowapp.com
SENTRY_DSN=[SENTRY_DSN]
AI Services
File: ai-services/recommendation-engine/.env
MONGODB_URI=mongodb://admin:password123@mongodb:27017/flow?authSource=admin
REDIS_URL=redis://:redis123@redis:6379
MODEL_PATH=/app/models
PYTHON_ENV=development
# Model configuration
MODEL_UPDATE_INTERVAL_HOURS=24
BATCH_SIZE=32
Local Development Deployment
Step 1: Clone and Setup
# Clone repository
git clone [repository-url]
cd Flow
# Create environment files
cp .env.example .env
cp backend/api-gateway/.env.example backend/api-gateway/.env
cp backend/user-service/.env.example backend/user-service/.env
cp backend/event-service/.env.example backend/event-service/.env
cp backend/social-service/.env.example backend/social-service/.env
cp backend/notification-service/.env.example backend/notification-service/.env
cp backend/realtime-service/.env.example backend/realtime-service/.env
cp admin-portal/.env.example admin-portal/.env.local
Step 2: Start Infrastructure Services
# Start databases and cache
docker-compose up -d mongodb redis elasticsearch
# Wait for services to be ready
docker-compose ps
# Verify MongoDB is ready
docker exec flow-mongodb mongosh --eval "db.adminCommand('ping')"
# Verify Redis is ready
docker exec flow-redis redis-cli -a redis123 ping
# Verify Elasticsearch is ready
curl http://localhost:9200/_cluster/health
Step 3: Initialize Database
# Run MongoDB initialization script
docker exec -i flow-mongodb mongosh -u admin -p password123 --authenticationDatabase admin < scripts/mongo-init.js
# Seed sample data (optional)
cd backend
npm run seed:dev
Step 4: Start Backend Services
Option A: Docker Compose (Recommended for full-stack testing)
# Start all backend services
docker-compose up -d api-gateway user-service event-service social-service notification-service realtime-service
# View logs
docker-compose logs -f
Option B: Local Development (Recommended for active development)
# Install dependencies
cd backend
npm run install:all
# Start all services in development mode
npm run dev:all
# Or start individual services
npm run dev:api-gateway # Port 3000
npm run dev:user-service # Port 3001
npm run dev:event-service # Port 3002
npm run dev:social-service # Port 3003
npm run dev:notification-service # Port 3004
npm run dev:realtime-service # Port 3005
Step 5: Start AI Services
# Install Python dependencies
cd ai-services
pip install -r requirements.txt
# Start recommendation engine
cd recommendation-engine
uvicorn main:app --reload --port 8001
# Start matchmaking service (in another terminal)
cd ../matchmaking-service
uvicorn main:app --reload --port 8002
Step 6: Start Admin Portal
cd admin-portal
# Install dependencies
npm install
# Run development server
npm run dev
# Admin portal available at: http://localhost:3000
Step 7: Start Mobile App
cd mobile/flow_app
# Install Flutter dependencies
flutter pub get
# Run code generation
flutter pub run build_runner build --delete-conflicting-outputs
# Run on iOS simulator
flutter run -d ios
# Run on Android emulator
flutter run -d android
# Or run on physical device
flutter devices
flutter run -d [device-id]
Verification
Test the deployment:
# Check API Gateway health
curl http://localhost:3000/health
# Check individual services
curl http://localhost:3001/health # User Service
curl http://localhost:3002/health # Event Service
curl http://localhost:3003/health # Social Service
curl http://localhost:3004/health # Notification Service
curl http://localhost:3005/health # Realtime Service
# Check AI services
curl http://localhost:8001/health # Recommendation Engine
curl http://localhost:8002/health # Matchmaking Service
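These one-off checks can be wrapped in a small retry loop so scripts wait for services to come up before continuing. `wait_for` is an illustrative helper, not an existing script in the repo:

```shell
#!/usr/bin/env bash
# Poll a health endpoint until it responds successfully, or time out.
# Usage (hypothetical): wait_for http://localhost:3000/health
set -u

wait_for() {
  local url="$1" retries="${2:-30}" i=0
  until curl -fsS "$url" >/dev/null 2>&1; do
    i=$((i + 1))
    if [ "$i" -ge "$retries" ]; then
      echo "timeout waiting for $url" >&2
      return 1
    fi
    sleep 2   # back off briefly between attempts
  done
  echo "ok: $url"
}
```

Calling it for each of the URLs above makes `docker-compose up` scripts deterministic instead of racy.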
# Test authentication
curl -X POST http://localhost:3000/api/auth/register \
-H "Content-Type: application/json" \
-d '{"email":"test@example.com","password":"Test123!","firstName":"Test","lastName":"User"}'
Supabase Setup
The admin portal uses Supabase for authentication and data management.
Step 1: Create Supabase Project
- Go to https://supabase.com
- Create new project
- Note your project URL and anon key
- Save service role key (from Settings > API)
Step 2: Configure Database Schema
-- Enable UUID generation (uuid-ossp ships with Supabase)
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
-- Create admin users table
CREATE TABLE admin_users (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
email TEXT UNIQUE NOT NULL,
role TEXT NOT NULL CHECK (role IN ('super_admin', 'admin', 'vendor', 'moderator')),
first_name TEXT,
last_name TEXT,
avatar_url TEXT,
is_active BOOLEAN DEFAULT true,
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
-- Create audit logs table
CREATE TABLE audit_logs (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
user_id UUID REFERENCES admin_users(id),
action TEXT NOT NULL,
resource_type TEXT NOT NULL,
resource_id TEXT,
details JSONB,
ip_address INET,
user_agent TEXT,
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
-- Create vendor profiles table
CREATE TABLE vendor_profiles (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
admin_user_id UUID REFERENCES admin_users(id) UNIQUE,
business_name TEXT NOT NULL,
business_type TEXT,
description TEXT,
website TEXT,
phone TEXT,
address JSONB,
is_verified BOOLEAN DEFAULT false,
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
-- Enable Row Level Security
ALTER TABLE admin_users ENABLE ROW LEVEL SECURITY;
ALTER TABLE audit_logs ENABLE ROW LEVEL SECURITY;
ALTER TABLE vendor_profiles ENABLE ROW LEVEL SECURITY;
-- Create policies
CREATE POLICY "Admin users can read all admin users"
ON admin_users FOR SELECT
TO authenticated
USING (true);
CREATE POLICY "Users can update their own profile"
ON admin_users FOR UPDATE
TO authenticated
USING (auth.uid() = id);
CREATE POLICY "Audit logs are readable by admins"
ON audit_logs FOR SELECT
TO authenticated
USING (
EXISTS (
SELECT 1 FROM admin_users
WHERE id = auth.uid() AND role IN ('super_admin', 'admin')
)
);
CREATE POLICY "Vendors can read own profile"
ON vendor_profiles FOR SELECT
TO authenticated
USING (admin_user_id = auth.uid());
CREATE POLICY "Vendors can update own profile"
ON vendor_profiles FOR UPDATE
TO authenticated
USING (admin_user_id = auth.uid());
Step 3: Configure Authentication
- Enable Email authentication in Supabase Dashboard
- Configure email templates
- Set up OAuth providers (optional): Google, Facebook
- Configure redirect URLs for each environment
Development:
- Site URL: `http://localhost:3000`
- Redirect URLs: `http://localhost:3000/auth/callback`
Staging:
- Site URL: `https://admin-staging.flowapp.com`
- Redirect URLs: `https://admin-staging.flowapp.com/auth/callback`
Production:
- Site URL: `https://admin.flowapp.com`
- Redirect URLs: `https://admin.flowapp.com/auth/callback`
Step 4: Create First Admin User
-- Insert first super admin (run in Supabase SQL Editor)
INSERT INTO admin_users (email, role, first_name, last_name)
VALUES ('admin@flowapp.com', 'super_admin', 'Super', 'Admin');
-- Then create auth user via Supabase Dashboard or API
Database Setup and Migrations
MongoDB Migrations
We use migrate-mongo for MongoDB migrations.
Setup
cd backend
npm install -g migrate-mongo
# Initialize (if not already done)
migrate-mongo init
Configuration
File: backend/migrate-mongo-config.js
module.exports = {
mongodb: {
url: process.env.MONGODB_URI || "mongodb://admin:password123@localhost:27017/?authSource=admin",
databaseName: "flow",
options: {
// No-ops on MongoDB Node driver 4+; harmless on older drivers
useNewUrlParser: true,
useUnifiedTopology: true,
}
},
migrationsDir: "migrations",
changelogCollectionName: "changelog",
migrationFileExtension: ".js",
useFileHash: false,
moduleSystem: 'commonjs',
};
Create Migration
# Create new migration
migrate-mongo create add-indexes-to-events
# Edit the created file in the migrations/ directory
Example Migration: migrations/[timestamp]-add-indexes-to-events.js
module.exports = {
async up(db, client) {
// Create indexes
await db.collection('events').createIndex({ slug: 1 }, { unique: true });
await db.collection('events').createIndex({ 'organizer.id': 1 });
await db.collection('events').createIndex({ 'datetime.start': 1 });
await db.collection('events').createIndex({ 'location.coordinates': '2dsphere' });
await db.collection('events').createIndex({ category: 1, 'datetime.start': 1 });
await db.collection('events').createIndex({ status: 1, featured: 1 });
},
async down(db, client) {
// Rollback
await db.collection('events').dropIndex('slug_1');
await db.collection('events').dropIndex('organizer.id_1');
await db.collection('events').dropIndex('datetime.start_1');
await db.collection('events').dropIndex('location.coordinates_2dsphere');
await db.collection('events').dropIndex('category_1_datetime.start_1');
await db.collection('events').dropIndex('status_1_featured_1');
}
};
Run Migrations
# Development
export MONGODB_URI=mongodb://admin:password123@localhost:27017/flow?authSource=admin
migrate-mongo up
# Staging
export MONGODB_URI=[STAGING_MONGODB_URI]
migrate-mongo up
# Production (dry-run first)
export MONGODB_URI=[PROD_MONGODB_URI]
migrate-mongo up --dry-run
migrate-mongo up
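The dry-run-then-apply sequence can be enforced with a small guard so production migrations are never applied by accident. `migrate_prod` and the `CONFIRM` variable are illustrative names, not part of the repo:

```shell
#!/usr/bin/env bash
# Guard for production migrations: always dry-run first,
# apply only when CONFIRM=yes is set explicitly.
set -euo pipefail

migrate_prod() {
  migrate-mongo up --dry-run
  if [ "${CONFIRM:-no}" != "yes" ]; then
    echo "dry-run only; re-run with CONFIRM=yes to apply"
    return 0
  fi
  migrate-mongo up
}
```

Wired into CI, this makes the dry-run a mandatory step rather than a convention.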
# Rollback
migrate-mongo down
# Status
migrate-mongo status
Supabase Migrations
Supabase uses PostgreSQL migrations managed through their CLI or Dashboard.
Using Supabase CLI
# Install Supabase CLI (the CLI blocks global npm installs;
# add it as a dev dependency, or install via Homebrew)
npm install supabase --save-dev
# Login
npx supabase login
# Link to project
cd admin-portal
npx supabase link --project-ref [your-project-ref]
# Create migration
npx supabase migration new add_user_preferences
# Edit the migration file in supabase/migrations/
Example: supabase/migrations/[timestamp]_add_user_preferences.sql
-- Create user preferences table
CREATE TABLE user_preferences (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
admin_user_id UUID REFERENCES admin_users(id) UNIQUE,
theme TEXT DEFAULT 'light' CHECK (theme IN ('light', 'dark', 'auto')),
notifications_enabled BOOLEAN DEFAULT true,
email_digest TEXT DEFAULT 'daily' CHECK (email_digest IN ('none', 'daily', 'weekly')),
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
-- Enable RLS
ALTER TABLE user_preferences ENABLE ROW LEVEL SECURITY;
-- Create policy
CREATE POLICY "Users can manage own preferences"
ON user_preferences
FOR ALL
TO authenticated
USING (admin_user_id = auth.uid())
WITH CHECK (admin_user_id = auth.uid());
Apply Migrations
# Development (local)
npx supabase db push
# Staging/Production
npx supabase db push --linked
# Or use Supabase Dashboard > Database > Migrations
Staging Deployment
Staging environment mirrors production but with test data and lower resource allocation.
Infrastructure Setup (Google Cloud Platform)
Step 1: Create GCP Project
# Set project
gcloud config set project flow-staging
# Enable required APIs
gcloud services enable \
compute.googleapis.com \
container.googleapis.com \
containerregistry.googleapis.com \
cloudbuild.googleapis.com \
secretmanager.googleapis.com
Step 2: Create Kubernetes Cluster
# Create GKE cluster (staging - smaller size)
gcloud container clusters create flow-staging \
--region=europe-west1 \
--num-nodes=2 \
--machine-type=n1-standard-2 \
--enable-autoscaling \
--min-nodes=2 \
--max-nodes=5 \
--enable-autorepair \
--enable-autoupgrade \
--disk-size=50GB
# Get credentials
gcloud container clusters get-credentials flow-staging --region=europe-west1
Step 3: Setup MongoDB Atlas (Staging)
- Create MongoDB Atlas account
- Create new cluster (M10 or M20 for staging)
- Configure network access (add GKE cluster IPs)
- Create database user
- Get connection string
Step 4: Setup Redis (Cloud Memorystore)
# Create Redis instance
gcloud redis instances create flow-redis-staging \
--size=1 \
--region=europe-west1 \
--tier=basic \
--redis-version=redis_7_0
# Get connection info
gcloud redis instances describe flow-redis-staging --region=europe-west1
Step 5: Setup Secrets Manager
# Create secrets
echo -n "[STRONG_JWT_SECRET]" | gcloud secrets create jwt-secret-staging --data-file=-
echo -n "[REFRESH_SECRET]" | gcloud secrets create jwt-refresh-secret-staging --data-file=-
echo -n "[MONGODB_URI]" | gcloud secrets create mongodb-uri-staging --data-file=-
echo -n "[REDIS_URL]" | gcloud secrets create redis-url-staging --data-file=-
echo -n "[SENDGRID_KEY]" | gcloud secrets create sendgrid-key-staging --data-file=-
echo -n "[FIREBASE_CREDS]" | gcloud secrets create firebase-creds-staging --data-file=-
# Grant access to GKE service account
PROJECT_ID=$(gcloud config get-value project)
PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format='value(projectNumber)')
gcloud secrets add-iam-policy-binding jwt-secret-staging \
--member="serviceAccount:$PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
--role="roles/secretmanager.secretAccessor"
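Repeating this binding for every secret is tedious and error-prone; the grant can be looped instead. The `grant_secret_access` helper is an illustrative sketch, not an existing script:

```shell
#!/usr/bin/env bash
# Grant one IAM member secretAccessor on each named secret.
# Usage (hypothetical):
#   grant_secret_access "serviceAccount:$PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
#     jwt-secret-staging jwt-refresh-secret-staging mongodb-uri-staging \
#     redis-url-staging sendgrid-key-staging firebase-creds-staging
set -euo pipefail

grant_secret_access() {
  local member="$1"; shift
  local s
  for s in "$@"; do
    gcloud secrets add-iam-policy-binding "$s" \
      --member="$member" \
      --role="roles/secretmanager.secretAccessor"
  done
}
```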
# Repeat for other secrets
Step 6: Build and Push Docker Images
# Configure Docker for GCR
gcloud auth configure-docker
# Build and push images
cd backend/api-gateway
docker build -t gcr.io/flow-staging/api-gateway:latest .
docker push gcr.io/flow-staging/api-gateway:latest
cd ../user-service
docker build -t gcr.io/flow-staging/user-service:latest .
docker push gcr.io/flow-staging/user-service:latest
# Repeat for all services...
# Or use automated script
cd ../../scripts
./build-and-push-staging.sh
Step 7: Deploy to Kubernetes
Create Kubernetes secrets from GCP Secret Manager:
# Create secrets in Kubernetes
kubectl create secret generic app-secrets \
--from-literal=jwt-secret=$(gcloud secrets versions access latest --secret=jwt-secret-staging) \
--from-literal=jwt-refresh-secret=$(gcloud secrets versions access latest --secret=jwt-refresh-secret-staging) \
--from-literal=mongodb-uri=$(gcloud secrets versions access latest --secret=mongodb-uri-staging) \
--from-literal=redis-url=$(gcloud secrets versions access latest --secret=redis-url-staging) \
--from-literal=sendgrid-key=$(gcloud secrets versions access latest --secret=sendgrid-key-staging) \
--namespace=flow-staging
Deploy services:
# Create namespace
kubectl create namespace flow-staging
# Apply configurations
kubectl apply -f infrastructure/k8s/staging/
# Or using Helm
helm install flow-backend ./infrastructure/helm/flow-backend \
--namespace flow-staging \
--values infrastructure/helm/values-staging.yaml
# Check deployment status
kubectl get pods -n flow-staging
kubectl get services -n flow-staging
Step 8: Configure Load Balancer and Ingress
File: infrastructure/k8s/staging/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: flow-ingress
namespace: flow-staging
annotations:
kubernetes.io/ingress.class: "nginx"
cert-manager.io/cluster-issuer: "letsencrypt-staging"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/rate-limit: "100"
spec:
tls:
- hosts:
- api-staging.flowapp.com
- ws-staging.flowapp.com
secretName: flow-staging-tls
rules:
- host: api-staging.flowapp.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: api-gateway
port:
number: 3000
- host: ws-staging.flowapp.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: realtime-service
port:
number: 3005
Apply ingress:
kubectl apply -f infrastructure/k8s/staging/ingress.yaml
Step 9: Configure DNS
Point DNS records to the Load Balancer IP:
# Get Load Balancer IP
kubectl get ingress -n flow-staging
# Add DNS A records:
# api-staging.flowapp.com -> [LB_IP]
# ws-staging.flowapp.com -> [LB_IP]
Production Deployment
Production deployment follows similar steps to staging but with:
- Higher resource allocation
- Production-grade monitoring
- Automated backups
- Multi-region redundancy (future)
Differences from Staging
- Cluster Size: Larger nodes and more replicas
- Database: MongoDB Atlas M30+ with replica sets
- Redis: Standard tier with HA
- Monitoring: Full Prometheus + Grafana + AlertManager
- Backups: Automated daily backups with point-in-time recovery
- SSL: Production Let’s Encrypt certificates
- Secrets: Stricter access controls
Production Cluster Setup
# Set production project
gcloud config set project flow-production
# Create production GKE cluster
gcloud container clusters create flow-production \
--region=europe-west1 \
--num-nodes=3 \
--machine-type=n1-standard-4 \
--enable-autoscaling \
--min-nodes=3 \
--max-nodes=10 \
--enable-autorepair \
--enable-autoupgrade \
--disk-size=100GB \
--enable-stackdriver-kubernetes \
--maintenance-window-start=2024-01-01T02:00:00Z \
--maintenance-window-duration=4h
# Get credentials
gcloud container clusters get-credentials flow-production --region=europe-west1
Production MongoDB Setup
- Create MongoDB Atlas M30 cluster (or higher)
- Enable automated backups
- Configure replica set with 3+ nodes
- Set up monitoring and alerts
- Enable encryption at rest
- Configure VPC peering or PrivateLink
Production Redis Setup
# Create Redis instance (Standard tier with HA)
gcloud redis instances create flow-redis-production \
--size=5 \
--region=europe-west1 \
--tier=standard-ha \
--redis-version=redis_7_0 \
--replica-count=1
# Get connection info
gcloud redis instances describe flow-redis-production --region=europe-west1
Production Deployment
# Build production images with version tags
VERSION=$(git rev-parse --short HEAD)
docker build -t gcr.io/flow-production/api-gateway:$VERSION -t gcr.io/flow-production/api-gateway:latest ./backend/api-gateway
docker push gcr.io/flow-production/api-gateway:$VERSION
docker push gcr.io/flow-production/api-gateway:latest
# Deploy using Helm with production values
helm upgrade --install flow-backend ./infrastructure/helm/flow-backend \
--namespace flow-production \
--values infrastructure/helm/values-production.yaml \
--set image.tag=$VERSION \
--wait
# Verify deployment
kubectl get pods -n flow-production
kubectl rollout status deployment/api-gateway -n flow-production
Zero-Downtime Deployment Strategy
# Deployment with rolling update strategy
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-gateway
spec:
replicas: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
template:
spec:
containers:
- name: api-gateway
image: gcr.io/flow-production/api-gateway:latest
readinessProbe:
httpGet:
path: /health
port: 3000
initialDelaySeconds: 10
periodSeconds: 5
livenessProbe:
httpGet:
path: /health
port: 3000
initialDelaySeconds: 30
periodSeconds: 10
Admin Portal Deployment (Vercel)
The admin portal is deployed to Vercel for optimal Next.js hosting.
Prerequisites
# Install Vercel CLI
npm install -g vercel
# Login to Vercel
vercel login
Development Deployment
cd admin-portal
# Deploy to preview
vercel
# The deployment will be available at: https://flow-admin-[random].vercel.app
Production Deployment
Step 1: Configure Project
# Link to Vercel project
vercel link
# Set environment variables
vercel env add NEXT_PUBLIC_SUPABASE_URL production
vercel env add NEXT_PUBLIC_SUPABASE_ANON_KEY production
vercel env add SUPABASE_SERVICE_ROLE_KEY production
vercel env add NEXT_PUBLIC_API_URL production
vercel env add NEXT_PUBLIC_WS_URL production
vercel env add SENTRY_DSN production
Step 2: Configure vercel.json
File: admin-portal/vercel.json
{
"buildCommand": "npm run build",
"devCommand": "npm run dev",
"installCommand": "npm install",
"framework": "nextjs",
"regions": ["fra1"],
"env": {
"NEXT_PUBLIC_SUPABASE_URL": "@supabase-url-production",
"NEXT_PUBLIC_SUPABASE_ANON_KEY": "@supabase-anon-key-production"
},
"headers": [
{
"source": "/(.*)",
"headers": [
{
"key": "X-Frame-Options",
"value": "DENY"
},
{
"key": "X-Content-Type-Options",
"value": "nosniff"
},
{
"key": "Referrer-Policy",
"value": "strict-origin-when-cross-origin"
},
{
"key": "Permissions-Policy",
"value": "camera=(), microphone=(), geolocation=()"
}
]
}
]
}
Step 3: Deploy to Production
# Deploy to production
vercel --prod
# Configure custom domain
vercel domains add admin.flowapp.com
# The admin portal will be available at: https://admin.flowapp.com
Step 4: Configure Vercel Project Settings
- Go to Vercel Dashboard
- Select project
- Configure:
- Build & Development: Auto-detect (Next.js)
- Environment Variables: Add all production variables
- Domains: Add custom domain
- Git: Configure production branch (main)
- Deploy Hooks: Optional webhooks
Automatic Deployments
Vercel automatically deploys on git push:
- Main branch: Production deployment
- Feature branches: Preview deployments
Staging Deployment
# Deploy staging (note: --scope selects a Vercel team/account, not an
# environment; a separate staging project or team is assumed here)
vercel --prod --scope staging
# Configure staging domain
vercel domains add admin-staging.flowapp.com
Backend Services Deployment
Deployment Options
- Google Kubernetes Engine (Recommended for production)
- Docker Swarm (Alternative for smaller deployments)
- Cloud Run (Serverless alternative)
- Traditional VMs (Not recommended)
Option 1: Kubernetes (GKE) - Recommended
See Staging Deployment and Production Deployment sections above.
Option 2: Docker Swarm
# Initialize Swarm
docker swarm init
# Deploy stack
docker stack deploy -c docker-compose.prod.yml flow
# Check services
docker service ls
# Scale services
docker service scale flow_api-gateway=3
docker service scale flow_event-service=2
# View logs
docker service logs -f flow_api-gateway
Option 3: Cloud Run (Serverless)
# Deploy individual service to Cloud Run
gcloud run deploy api-gateway \
--image gcr.io/flow-production/api-gateway:latest \
--platform managed \
--region europe-west1 \
--allow-unauthenticated \
--set-env-vars NODE_ENV=production \
--set-secrets JWT_SECRET=jwt-secret-production:latest \
--min-instances 1 \
--max-instances 10 \
--cpu 1 \
--memory 512Mi
# Get service URL
gcloud run services describe api-gateway --region europe-west1 --format 'value(status.url)'
Note: Cloud Run is suitable for stateless services (API Gateway, User Service, Event Service) but NOT recommended for Realtime Service (WebSocket requirements).
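After deploying, each service URL can be fetched and smoke-tested in one pass. The `smoke_test` helper and the `/health` convention are assumptions based on the health endpoints used earlier in this guide:

```shell
#!/usr/bin/env bash
# Fetch each Cloud Run service URL and hit its /health endpoint.
# Fails fast (set -e) on the first unhealthy service.
# Usage (hypothetical): smoke_test europe-west1 api-gateway user-service
set -euo pipefail

smoke_test() {
  local region="$1"; shift
  local svc url
  for svc in "$@"; do
    url=$(gcloud run services describe "$svc" --region "$region" --format 'value(status.url)')
    curl -fsS "$url/health" >/dev/null && echo "ok: $svc"
  done
}
```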
Mobile App Deployment
iOS Deployment
Prerequisites
- Apple Developer Account ($99/year)
- Xcode 15+
- iOS Distribution Certificate
- Provisioning Profile
Step 1: Configure App
cd mobile/flow_app
# Update version in pubspec.yaml
# version: 1.0.0+1 (format: major.minor.patch+buildNumber)
Step 2: Configure Firebase
- Download `GoogleService-Info.plist` from Firebase Console
- Place it in `ios/Runner/`
- Update `ios/Runner/Info.plist` with required permissions
Step 3: Build for Release
# Build iOS app
flutter build ios --release
# Or build IPA
flutter build ipa --release
Step 4: Upload to App Store Connect
- Open `ios/Runner.xcworkspace` in Xcode
- Select “Any iOS Device”
- Product > Archive
- Distribute App > App Store Connect
- Upload
Step 5: TestFlight (Beta Testing)
- Go to App Store Connect
- Select app > TestFlight
- Add internal/external testers
- Distribute build
Android Deployment
Prerequisites
- Google Play Console Account ($25 one-time)
- Android Studio
- Keystore file for signing
Step 1: Create Keystore
keytool -genkey -v -keystore flow-release-key.jks -keyalg RSA -keysize 2048 -validity 10000 -alias flow
Step 2: Configure Signing
File: android/key.properties
storePassword=[KEYSTORE_PASSWORD]
keyPassword=[KEY_PASSWORD]
keyAlias=flow
storeFile=/path/to/flow-release-key.jks
File: android/app/build.gradle
def keystoreProperties = new Properties()
def keystorePropertiesFile = rootProject.file('key.properties')
if (keystorePropertiesFile.exists()) {
keystoreProperties.load(new FileInputStream(keystorePropertiesFile))
}
android {
signingConfigs {
release {
keyAlias keystoreProperties['keyAlias']
keyPassword keystoreProperties['keyPassword']
storeFile keystoreProperties['storeFile'] ? file(keystoreProperties['storeFile']) : null
storePassword keystoreProperties['storePassword']
}
}
buildTypes {
release {
signingConfig signingConfigs.release
}
}
}
Step 3: Configure Firebase
- Download `google-services.json` from Firebase Console
- Place it in `android/app/`
Step 4: Build APK/AAB
# Build APK
flutter build apk --release
# Build App Bundle (recommended for Play Store)
flutter build appbundle --release
Step 5: Upload to Google Play Console
- Go to Google Play Console
- Create new release (Internal/Closed/Open Testing or Production)
- Upload AAB file
- Complete store listing
- Submit for review
Continuous Delivery for Mobile
GitHub Actions workflow (.github/workflows/mobile-cd.yml):
name: Mobile CD
on:
push:
tags:
- 'v*'
jobs:
build-ios:
runs-on: macos-latest
steps:
- uses: actions/checkout@v4
- uses: subosito/flutter-action@v2
with:
flutter-version: '3.x'
- name: Install dependencies
working-directory: mobile/flow_app
run: flutter pub get
- name: Build iOS
working-directory: mobile/flow_app
run: flutter build ios --release --no-codesign
# Add fastlane for automatic upload to TestFlight
build-android:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-java@v3
with:
distribution: 'zulu'
java-version: '17'
- uses: subosito/flutter-action@v2
with:
flutter-version: '3.x'
- name: Install dependencies
working-directory: mobile/flow_app
run: flutter pub get
- name: Build Android
working-directory: mobile/flow_app
run: flutter build appbundle --release
# Add upload to Play Console
Database Backup and Recovery
MongoDB Backup Strategy
Automated Backups (Atlas)
MongoDB Atlas provides automated backups:
- Go to Atlas Dashboard
- Select Cluster > Backup
- Configure backup policy:
- Snapshot frequency: Every 6-12 hours
- Retention: 7 days continuous, 4 weekly, 12 monthly
- Point-in-time restore: Enable (last 24-48 hours)
Manual Backups
# Full database backup
mongodump --uri="mongodb+srv://user:password@cluster.mongodb.net/flow" --out=/backups/flow-$(date +%Y%m%d)
# Backup specific collection
mongodump --uri="mongodb+srv://user:password@cluster.mongodb.net/flow" --collection=events --out=/backups/events-$(date +%Y%m%d)
# Compress backup
tar -czf flow-backup-$(date +%Y%m%d).tar.gz /backups/flow-$(date +%Y%m%d)
# Upload to Cloud Storage
gsutil cp flow-backup-$(date +%Y%m%d).tar.gz gs://flow-backups/mongodb/
Automated Backup Script
File: scripts/backup-mongodb.sh
#!/bin/bash
set -e
BACKUP_DIR="/backups/mongodb"
DATE=$(date +%Y%m%d-%H%M%S)
MONGODB_URI="$1"
GCS_BUCKET="gs://flow-backups"
echo "Starting MongoDB backup: $DATE"
# Create backup directory
mkdir -p $BACKUP_DIR/$DATE
# Dump database
mongodump --uri="$MONGODB_URI" --out=$BACKUP_DIR/$DATE
# Compress
tar -czf $BACKUP_DIR/flow-$DATE.tar.gz -C $BACKUP_DIR $DATE
# Upload to GCS
gsutil cp $BACKUP_DIR/flow-$DATE.tar.gz $GCS_BUCKET/mongodb/
# Clean up local files older than 7 days
find $BACKUP_DIR -name "*.tar.gz" -mtime +7 -delete
find $BACKUP_DIR -type d -mtime +7 -exec rm -rf {} +
echo "Backup completed: flow-$DATE.tar.gz"Schedule with Cron
# Edit crontab
crontab -e
# Add daily backup at 2 AM
0 2 * * * /path/to/scripts/backup-mongodb.sh "mongodb+srv://user:pass@cluster.mongodb.net/flow" >> /var/log/mongodb-backup.log 2>&1
Restore from Backup
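Before restoring, it helps to confirm the archive is actually intact; `tar -tzf` fails on a truncated or corrupt gzip, which catches most bad downloads early. The sketch below creates a demo archive so it runs standalone; in practice, point `BACKUP_FILE` at a real flow-*.tar.gz pulled from GCS.

```shell
#!/bin/sh
# Sketch: sanity-check a backup archive before restoring it.
# A demo archive is created here so the snippet is runnable on its own;
# BACKUP_FILE is a placeholder for the real downloaded archive.
workdir=$(mktemp -d)
echo "demo" > "$workdir/doc.bson"
tar -czf "$workdir/flow-demo.tar.gz" -C "$workdir" doc.bson
BACKUP_FILE="$workdir/flow-demo.tar.gz"

# tar -tzf lists the archive without extracting; a corrupt file makes it fail
if tar -tzf "$BACKUP_FILE" > /dev/null 2>&1; then
  result="OK: $BACKUP_FILE is readable"
else
  result="CORRUPT: $BACKUP_FILE"
fi
echo "$result"
```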
# Download backup from GCS
gsutil cp gs://flow-backups/mongodb/flow-20260310-020000.tar.gz .
# Extract
tar -xzf flow-20260310-020000.tar.gz
# Restore entire database
mongorestore --uri="mongodb+srv://user:password@cluster.mongodb.net/flow" ./20260310-020000
# Restore specific collection
mongorestore --uri="mongodb+srv://user:password@cluster.mongodb.net/flow" --collection=events ./20260310-020000/flow/events.bson
# Point-in-time restore (Atlas only)
# Use Atlas Dashboard > Backup > Restore
Redis Backup
Redis persistence is configured with AOF (Append-Only File):
# In redis.conf or docker-compose
appendonly yes
appendfsync everysec
# Manual snapshot
redis-cli BGSAVE
# Backup RDB file
cp /data/dump.rdb /backups/redis-$(date +%Y%m%d).rdb
# Restore
# Stop Redis
# Replace dump.rdb with backup
# Start Redis
Supabase Backup
Supabase provides automated PostgreSQL backups:
- Daily automated backups (retained for 7 days on free tier, 30 days on Pro)
- Point-in-time recovery (Pro plan)
- Manual backups via Dashboard
Manual backup:
# Using pg_dump
pg_dump "postgresql://postgres:[password]@db.[project].supabase.co:5432/postgres" > backup-$(date +%Y%m%d).sql
# Restore
psql "postgresql://postgres:[password]@db.[project].supabase.co:5432/postgres" < backup-20260310.sqlMonitoring and Logging
Health Checks
All services expose /health and /ready endpoints:
// Health endpoint
app.get('/health', (req, res) => {
res.status(200).json({
status: 'healthy',
service: 'api-gateway',
version: process.env.VERSION || '1.0.0',
uptime: process.uptime(),
timestamp: new Date().toISOString()
});
});
// Readiness endpoint (checks dependencies)
app.get('/ready', async (req, res) => {
try {
// Check MongoDB
await mongoose.connection.db.admin().ping();
// Check Redis
await redisClient.ping();
res.status(200).json({
status: 'ready',
dependencies: {
mongodb: 'connected',
redis: 'connected'
}
});
} catch (error) {
res.status(503).json({
status: 'not ready',
error: error.message
});
}
});
Prometheus Metrics
Setup Prometheus
File: infrastructure/k8s/monitoring/prometheus-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus-config
namespace: monitoring
data:
prometheus.yml: |
global:
scrape_interval: 15s
evaluation_interval: 15s
scrape_configs:
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
Instrument Services
const promClient = require('prom-client');
// Create metrics
const httpRequestDuration = new promClient.Histogram({
name: 'http_request_duration_seconds',
help: 'Duration of HTTP requests in seconds',
labelNames: ['method', 'route', 'status_code']
});
const httpRequestTotal = new promClient.Counter({
name: 'http_requests_total',
help: 'Total number of HTTP requests',
labelNames: ['method', 'route', 'status_code']
});
// Middleware to collect metrics
app.use((req, res, next) => {
const start = Date.now();
res.on('finish', () => {
const duration = (Date.now() - start) / 1000;
httpRequestDuration.observe(
{ method: req.method, route: req.route?.path || req.path, status_code: res.statusCode },
duration
);
httpRequestTotal.inc({ method: req.method, route: req.route?.path || req.path, status_code: res.statusCode });
});
next();
});
// Expose metrics endpoint
app.get('/metrics', async (req, res) => {
res.set('Content-Type', promClient.register.contentType);
res.end(await promClient.register.metrics());
});
Grafana Dashboards
Install Grafana
# Using Helm
helm repo add grafana https://grafana.github.io/helm-charts
helm install grafana grafana/grafana \
--namespace monitoring \
--set persistence.enabled=true \
--set adminPassword=admin
# Get admin password
kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode
# Port forward to access
kubectl port-forward --namespace monitoring svc/grafana 3000:80
Configure Datasource
- Access Grafana at http://localhost:3000
- Configuration > Data Sources > Add Prometheus
- URL: http://prometheus:9090
- Save & Test
Import Dashboards
Pre-built dashboards for Node.js services:
- Node.js Application Dashboard: ID 11159
- MongoDB Dashboard: ID 2583
- Redis Dashboard: ID 763
Centralized Logging (ELK Stack)
Elasticsearch, Logstash, Kibana
File: infrastructure/k8s/logging/elasticsearch.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elasticsearch
namespace: logging
spec:
serviceName: elasticsearch
replicas: 3
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:8.8.0
env:
- name: discovery.seed_hosts
value: "elasticsearch"
- name: cluster.initial_master_nodes
value: "elasticsearch-0,elasticsearch-1,elasticsearch-2"
- name: ES_JAVA_OPTS
value: "-Xms1g -Xmx1g"
ports:
- containerPort: 9200
name: http
- containerPort: 9300
name: transport
volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 50Gi
Application Logging
const winston = require('winston');
const { ElasticsearchTransport } = require('winston-elasticsearch');
const logger = winston.createLogger({
level: process.env.LOG_LEVEL || 'info',
format: winston.format.combine(
winston.format.timestamp(),
winston.format.errors({ stack: true }),
winston.format.json()
),
defaultMeta: {
service: 'api-gateway',
environment: process.env.NODE_ENV
},
transports: [
new winston.transports.Console({
format: winston.format.combine(
winston.format.colorize(),
winston.format.simple()
)
}),
new ElasticsearchTransport({
level: 'info',
clientOpts: { node: process.env.ELASTICSEARCH_URL || 'http://elasticsearch:9200' },
index: 'flow-logs'
})
]
});
module.exports = logger;
Alerting
AlertManager Configuration
File: infrastructure/k8s/monitoring/alertmanager-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: alertmanager-config
namespace: monitoring
data:
alertmanager.yml: |
global:
slack_api_url: '[SLACK_WEBHOOK_URL]'
route:
group_by: ['alertname', 'cluster', 'service']
group_wait: 10s
group_interval: 10s
repeat_interval: 12h
receiver: 'slack-notifications'
receivers:
- name: 'slack-notifications'
slack_configs:
- channel: '#flow-alerts'
title: 'Flow Alert: {{ .GroupLabels.alertname }}'
text: '{{ range .Alerts }}{{ .Annotations.description }}{{ end }}'
Alert Rules
File: infrastructure/k8s/monitoring/alert-rules.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus-rules
namespace: monitoring
data:
alerts.yml: |
groups:
- name: flow_alerts
interval: 30s
rules:
- alert: HighErrorRate
expr: rate(http_requests_total{status_code=~"5.."}[5m]) > 0.05
for: 5m
labels:
severity: critical
annotations:
description: "High error rate detected: {{ $value }} errors/sec"
- alert: HighLatency
expr: histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le)) > 1
for: 5m
labels:
severity: warning
annotations:
description: "High latency detected: {{ $value }}s at p95"
- alert: ServiceDown
expr: up{job="kubernetes-pods"} == 0
for: 2m
labels:
severity: critical
annotations:
description: "Service {{ $labels.pod }} is down"
- alert: HighMemoryUsage
expr: container_memory_usage_bytes / container_spec_memory_limit_bytes > 0.9
for: 5m
labels:
severity: warning
annotations:
description: "High memory usage: {{ $value | humanizePercentage }}"CI/CD Pipeline
GitHub Actions Workflows
Backend CI/CD
File: .github/workflows/backend-ci-cd.yml
name: Backend CI/CD
on:
push:
branches: [main, develop]
paths:
- 'backend/**'
pull_request:
branches: [main, develop]
paths:
- 'backend/**'
jobs:
test:
runs-on: ubuntu-latest
services:
mongodb:
image: mongo:6.0
env:
MONGO_INITDB_ROOT_USERNAME: admin
MONGO_INITDB_ROOT_PASSWORD: password123
ports:
- 27017:27017
redis:
image: redis:7.0-alpine
ports:
- 6379:6379
steps:
- uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '18'
cache: 'npm'
cache-dependency-path: backend/package-lock.json
- name: Install dependencies
working-directory: backend
run: npm run install:all
- name: Run linting
working-directory: backend
run: npm run lint:all
- name: Run tests
working-directory: backend
run: npm run test:all
env:
MONGODB_URI: mongodb://admin:password123@localhost:27017/flow-test?authSource=admin
REDIS_URL: redis://localhost:6379
- name: Run integration tests
working-directory: backend
run: npm run test:integration
build-and-deploy:
needs: test
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main'
steps:
- uses: actions/checkout@v4
- name: Authenticate to Google Cloud
uses: google-github-actions/auth@v1
with:
credentials_json: ${{ secrets.GCP_SA_KEY }}
- name: Set up Cloud SDK
uses: google-github-actions/setup-gcloud@v1
with:
project_id: ${{ secrets.GCP_PROJECT_ID }}
- name: Configure Docker for GCR
run: gcloud auth configure-docker
- name: Build and push Docker images
working-directory: backend
run: |
VERSION=${{ github.sha }}
# Build all services
for service in api-gateway user-service event-service social-service notification-service realtime-service; do
docker build -t gcr.io/${{ secrets.GCP_PROJECT_ID }}/$service:$VERSION \
-t gcr.io/${{ secrets.GCP_PROJECT_ID }}/$service:latest \
./$service
docker push gcr.io/${{ secrets.GCP_PROJECT_ID }}/$service:$VERSION
docker push gcr.io/${{ secrets.GCP_PROJECT_ID }}/$service:latest
done
- name: Deploy to GKE
run: |
gcloud container clusters get-credentials flow-production --region europe-west1
kubectl set image deployment/api-gateway \
api-gateway=gcr.io/${{ secrets.GCP_PROJECT_ID }}/api-gateway:${{ github.sha }} \
--namespace flow-production
kubectl set image deployment/user-service \
user-service=gcr.io/${{ secrets.GCP_PROJECT_ID }}/user-service:${{ github.sha }} \
--namespace flow-production
# Repeat for other services...
kubectl rollout status deployment/api-gateway --namespace flow-production
Admin Portal CI/CD
File: .github/workflows/admin-portal-ci-cd.yml
name: Admin Portal CI/CD
on:
push:
branches: [main, develop]
paths:
- 'admin-portal/**'
pull_request:
branches: [main, develop]
paths:
- 'admin-portal/**'
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '18'
cache: 'npm'
cache-dependency-path: admin-portal/package-lock.json
- name: Install dependencies
working-directory: admin-portal
run: npm ci
- name: Run linting
working-directory: admin-portal
run: npm run lint
- name: Run type check
working-directory: admin-portal
run: npm run type-check
- name: Run tests
working-directory: admin-portal
run: npm run test
- name: Build
working-directory: admin-portal
run: npm run build
env:
NEXT_PUBLIC_SUPABASE_URL: ${{ secrets.NEXT_PUBLIC_SUPABASE_URL }}
NEXT_PUBLIC_SUPABASE_ANON_KEY: ${{ secrets.NEXT_PUBLIC_SUPABASE_ANON_KEY }}
deploy:
needs: test
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main'
steps:
- uses: actions/checkout@v4
- name: Deploy to Vercel
uses: amondnet/vercel-action@v25
with:
vercel-token: ${{ secrets.VERCEL_TOKEN }}
vercel-org-id: ${{ secrets.VERCEL_ORG_ID }}
vercel-project-id: ${{ secrets.VERCEL_PROJECT_ID }}
vercel-args: '--prod'
working-directory: admin-portal
Mobile CI (Flutter)
An extended version of the existing .github/workflows/flutter_ci.yml:
name: Flutter CI/CD
on:
push:
branches: [main, develop]
paths:
- 'mobile/flow_app/**'
pull_request:
branches: [main, develop]
paths:
- 'mobile/flow_app/**'
jobs:
analyze-and-test:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Set up Flutter
uses: subosito/flutter-action@v2
with:
flutter-version: '3.x'
channel: 'stable'
- name: Install dependencies
working-directory: mobile/flow_app
run: flutter pub get
- name: Format check
working-directory: mobile/flow_app
run: dart format --output=none --set-exit-if-changed .
- name: Analyze
working-directory: mobile/flow_app
run: flutter analyze --no-preamble
- name: Run tests
working-directory: mobile/flow_app
run: flutter test --reporter=compact --coverage
- name: Upload coverage
uses: codecov/codecov-action@v3
with:
files: mobile/flow_app/coverage/lcov.info
build-android:
needs: analyze-and-test
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main'
steps:
- uses: actions/checkout@v4
- uses: actions/setup-java@v3
with:
distribution: 'zulu'
java-version: '17'
- uses: subosito/flutter-action@v2
with:
flutter-version: '3.x'
- name: Install dependencies
working-directory: mobile/flow_app
run: flutter pub get
- name: Build APK
working-directory: mobile/flow_app
run: flutter build apk --release
- name: Upload APK artifact
uses: actions/upload-artifact@v3
with:
name: release-apk
path: mobile/flow_app/build/app/outputs/flutter-apk/app-release.apk
Security Considerations
Security Checklist for Production
Infrastructure Security
- Enable VPC/Firewall rules to restrict access
- Use private subnets for databases and internal services
- Enable DDoS protection on load balancer
- Configure SSL/TLS for all public endpoints
- Use strong SSL ciphers (TLS 1.2+)
- Implement rate limiting on API Gateway
- Enable Web Application Firewall (WAF)
- Set up intrusion detection system (IDS)
Application Security
- Use environment variables for all secrets (never commit)
- Implement JWT token expiration and refresh mechanism
- Enable CORS with specific origins (no wildcards in production)
- Validate and sanitize all user inputs
- Use parameterized queries to prevent SQL/NoSQL injection
- Implement proper password hashing (bcrypt with salt rounds >= 10)
- Enable HTTPS-only cookies with Secure and HttpOnly flags
- Implement CSRF protection for admin portal
- Use Content Security Policy (CSP) headers
- Sanitize HTML output to prevent XSS attacks
Database Security
- Enable authentication on MongoDB and Redis
- Use strong passwords (16+ characters, alphanumeric + symbols)
- Restrict database access to specific IPs/VPCs
- Enable encryption at rest for MongoDB
- Enable encryption in transit (TLS/SSL)
- Implement least privilege principle for database users
- Regular security audits and updates
- Enable audit logging for database access
Secrets Management
Using Google Secret Manager:
# Store secrets
echo -n "super-secret-jwt-key" | gcloud secrets create jwt-secret --data-file=-
# Access in application
const { SecretManagerServiceClient } = require('@google-cloud/secret-manager');
const client = new SecretManagerServiceClient();
const projectId = process.env.GCP_PROJECT_ID; // project ID used in secret resource names
async function getSecret(secretName) {
const [version] = await client.accessSecretVersion({
name: `projects/${projectId}/secrets/${secretName}/versions/latest`,
});
return version.payload.data.toString('utf8');
}
API Security
// Rate limiting
const rateLimit = require('express-rate-limit');
const limiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100, // limit each IP to 100 requests per windowMs
standardHeaders: true,
legacyHeaders: false,
handler: (req, res) => {
res.status(429).json({
error: 'Too many requests, please try again later.'
});
}
});
app.use('/api/', limiter);
// Helmet for security headers
const helmet = require('helmet');
app.use(helmet({
contentSecurityPolicy: {
directives: {
defaultSrc: ["'self'"],
styleSrc: ["'self'", "'unsafe-inline'"],
scriptSrc: ["'self'"],
imgSrc: ["'self'", 'data:', 'https:'],
},
},
hsts: {
maxAge: 31536000,
includeSubDomains: true,
preload: true
}
}));
// Input validation
const { body, validationResult } = require('express-validator');
app.post('/api/users/register',
body('email').isEmail().normalizeEmail(),
body('password').isLength({ min: 8 }).matches(/^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)/),
(req, res) => {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
// Process registration
}
);
Security Updates and Patches
# Regular dependency updates
npm audit
npm audit fix
# Update all packages
npm update
# Check for outdated packages
npm outdated
Scaling Strategies
Horizontal Scaling
Backend Services (Kubernetes)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: api-gateway-hpa
namespace: flow-production
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: api-gateway
minReplicas: 3
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 80
behavior:
scaleDown:
stabilizationWindowSeconds: 300
policies:
- type: Percent
value: 50
periodSeconds: 60
scaleUp:
stabilizationWindowSeconds: 0
policies:
- type: Percent
value: 100
periodSeconds: 30
- type: Pods
value: 2
periodSeconds: 30
selectPolicy: Max
Database Scaling
MongoDB:
- Use MongoDB Atlas auto-scaling
- Configure replica sets (minimum 3 nodes)
- Implement sharding for large collections
- Use read replicas for read-heavy workloads
Redis:
- Use Redis Cluster for horizontal scaling
- Implement client-side sharding
- Use read replicas for caching
Vertical Scaling
# Increase node pool size in GKE
gcloud container node-pools update default-pool \
--cluster=flow-production \
--region=europe-west1 \
--machine-type=n1-standard-8
# Increase MongoDB instance size in Atlas
# Go to Atlas Dashboard > Edit Configuration > Select larger tier
Caching Strategy
Multi-layer caching:
- Application-level: In-memory cache (Node.js)
- Distributed cache: Redis
- CDN: CloudFlare or Cloud CDN for static assets
- Database: Query result caching
const NodeCache = require('node-cache');
const memoryCache = new NodeCache({ stdTTL: 300 });
async function getCachedData(key, fetchFunction, ttl = 300) {
// Check memory cache
let data = memoryCache.get(key);
if (data) return data;
// Check Redis
data = await redisClient.get(key);
if (data) {
const parsed = JSON.parse(data);
memoryCache.set(key, parsed, ttl);
return parsed;
}
// Fetch from database
data = await fetchFunction();
// Store in Redis and memory
await redisClient.setex(key, ttl, JSON.stringify(data));
memoryCache.set(key, data, ttl);
return data;
}
Load Balancing
Global Load Balancer (GCP):
# Create backend service
gcloud compute backend-services create flow-backend \
--protocol=HTTP \
--health-checks=flow-health-check \
--global
# Add instance group
gcloud compute backend-services add-backend flow-backend \
--instance-group=flow-instances \
--instance-group-zone=europe-west1-b \
--global
# Create URL map
gcloud compute url-maps create flow-lb \
--default-service=flow-backend
# Create target HTTP proxy
gcloud compute target-http-proxies create flow-http-proxy \
--url-map=flow-lb
# Create forwarding rule
gcloud compute forwarding-rules create flow-forwarding-rule \
--global \
--target-http-proxy=flow-http-proxy \
--ports=80
Database Optimization
Indexing strategy:
// Create indexes for frequently queried fields
db.events.createIndex({ slug: 1 }, { unique: true });
db.events.createIndex({ 'organizer.id': 1 });
db.events.createIndex({ 'datetime.start': 1 });
db.events.createIndex({ 'location.coordinates': '2dsphere' });
db.events.createIndex({ category: 1, 'datetime.start': 1 });
db.events.createIndex({ status: 1, featured: 1, trending: -1 });
// Compound indexes for common queries
db.users.createIndex({ email: 1 });
db.users.createIndex({ 'gamification.points': -1 });
db.users.createIndex({ 'social.groups': 1 });
Troubleshooting
Common Issues and Solutions
Issue: Services Can’t Connect to MongoDB
Symptoms: Connection refused or authentication failed
Solutions:
# Check MongoDB is running
docker ps | grep mongodb
kubectl get pods -n flow-production | grep mongodb
# Verify connection string
echo $MONGODB_URI
# Test connection
mongosh "$MONGODB_URI"
# Check firewall rules (GCP)
gcloud compute firewall-rules list
# Check network policies (K8s)
kubectl get networkpolicies -n flow-production
Issue: High Latency on API Calls
Symptoms: Slow response times, timeouts
Solutions:
- Check database indexes
- Enable query profiling
- Review cache hit rates
- Scale up services
- Optimize N+1 queries
# Enable MongoDB profiling
db.setProfilingLevel(2)
db.system.profile.find().limit(10).sort({ ts: -1 }).pretty()
# Check Redis cache hit rate
redis-cli INFO stats | grep keyspace
# Review Prometheus metrics
# Query: rate(http_request_duration_seconds_sum[5m]) / rate(http_request_duration_seconds_count[5m])
Issue: WebSocket Connections Not Working
Symptoms: Real-time features not updating, socket disconnect
Solutions:
# Check if Redis adapter is enabled
kubectl logs -n flow-production realtime-service-xxx | grep "redis adapter"
# Verify sticky sessions on load balancer
kubectl get service realtime-service -n flow-production -o yaml
# Test WebSocket connection
wscat -c ws://localhost:3005
The Kubernetes Service should set sessionAffinity:
apiVersion: v1
kind: Service
metadata:
name: realtime-service
spec:
sessionAffinity: ClientIP
sessionAffinityConfig:
clientIP:
timeoutSeconds: 10800
Issue: Out of Memory (OOM) Errors
Symptoms: Pods restarting, memory errors in logs
Solutions:
# Check memory usage
kubectl top pods -n flow-production
# Increase memory limits
kubectl set resources deployment/api-gateway \
--limits=memory=1Gi \
--requests=memory=512Mi \
-n flow-production
# Review memory leaks
# Use Node.js heap snapshots
node --inspect index.js
Issue: Database Migration Failed
Symptoms: Migration errors, schema inconsistencies
Solutions:
# Check migration status
migrate-mongo status
# Rollback last migration
migrate-mongo down
# Fix migration file and re-run
migrate-mongo up
# Manual fix (if needed)
mongosh "$MONGODB_URI" --eval "db.changelog.find()"Issue: CI/CD Pipeline Failing
Symptoms: Build or deploy failures in GitHub Actions
Solutions:
- Check GitHub Actions logs
- Verify secrets are set correctly
- Test build locally
- Check resource quotas in GCP
# Test build locally
docker build -t test-build ./backend/api-gateway
# Check GCP quotas
gcloud compute project-info describe --project=flow-production
# Verify secrets
gcloud secrets versions access latest --secret=jwt-secret-production
Debugging Tools
# Kubernetes debugging
kubectl describe pod [pod-name] -n flow-production
kubectl logs -f [pod-name] -n flow-production
kubectl exec -it [pod-name] -n flow-production -- /bin/sh
# Port forwarding for local debugging
kubectl port-forward svc/api-gateway 3000:3000 -n flow-production
# MongoDB debugging
mongosh "$MONGODB_URI"
db.currentOp()
db.serverStatus()
# Redis debugging
redis-cli -h [redis-host] -a [password]
INFO
MONITOR
# Network debugging
kubectl run -it --rm debug --image=nicolaka/netshoot -n flow-production -- /bin/bash
Performance Profiling
# Node.js profiling
node --prof index.js
node --prof-process isolate-0xnnnnnnnnnnnn-v8.log > processed.txt
# Load testing
npm install -g artillery
artillery quick --count 100 --num 10 http://localhost:3000/api/events
Appendix
Useful Commands Reference
# Docker Compose
docker-compose up -d # Start services
docker-compose down # Stop services
docker-compose logs -f [service] # View logs
docker-compose restart [service] # Restart service
docker-compose ps # List services
# Kubernetes
kubectl get pods -n [namespace] # List pods
kubectl describe pod [pod] -n [ns] # Pod details
kubectl logs -f [pod] -n [ns] # Stream logs
kubectl exec -it [pod] -n [ns] -- sh # Shell into pod
kubectl apply -f [file.yaml] # Apply config
kubectl delete -f [file.yaml] # Delete config
# GCloud
gcloud auth login # Login
gcloud config set project [id] # Set project
gcloud container clusters list # List clusters
gcloud compute instances list # List VMs
gcloud secrets list # List secrets
# MongoDB
mongosh [uri] # Connect
db.stats() # Database stats
db.collection.find() # Query
db.collection.createIndex() # Create index
# Redis
redis-cli -h [host] -a [pass] # Connect
KEYS * # List keys
GET [key] # Get value
FLUSHALL # Clear all (careful!)
Environment URLs
Development:
- API Gateway: http://localhost:3000
- Admin Portal: http://localhost:3000 (Next.js; this clashes with the API Gateway's port, so run one of them elsewhere, e.g. next dev -p 3001)
- WebSocket: http://localhost:3005
Staging:
- API: https://api-staging.flowapp.com
- Admin: https://admin-staging.flowapp.com
- WebSocket: https://ws-staging.flowapp.com
Production:
- API: https://api.flowapp.com
- Admin: https://admin.flowapp.com
- WebSocket: https://ws.flowapp.com
- Mobile App: iOS App Store / Google Play Store
Support and Documentation
- Technical Documentation: /docs/TECHNICAL_ARCHITECTURE.md
- API Documentation: /docs/API_DOCUMENTATION.md
- Database Schema: /docs/DATABASE_SCHEMA.md
- Development Guide: /docs/DEVELOPMENT_GUIDE.md
- Architecture Improvements: /docs/architecture-improvements.md
- Issues Tracker: /docs/issues/
Document Version: 1.0.0 Last Updated: March 10, 2026 Maintained by: Flow DevOps Team