Flow Platform — Supabase-Only Architecture Migration

Date: 2026-03-29
Status: Approved
Author: Claude Opus 4.6 + Elia
Revision: 3 (all review issues resolved)

1. Context & Motivation

The Problem

Flow’s current architecture uses a dual-database strategy:

  • Supabase PostgreSQL — used by the Flutter mobile app and Next.js web portal for auth, profiles, events, venues, social, chat, tickets, gamification, and more (43+ tables with full RLS, triggers, and functions)
  • MongoDB — used by 3 Node.js microservices (Event Service, Social Service, Notification Service) for operational data that duplicates Supabase
  • Redis — caching and sessions across all microservices
  • Elasticsearch — planned for full-text search, not actively used

This creates:

  1. Data duplication — Events, users, and social data exist in both PostgreSQL and MongoDB, causing inconsistency
  2. Infrastructure cost — Running MongoDB, Redis, and Elasticsearch requires Docker in dev and paid hosting in production
  3. Operational complexity — 6 microservices (4 implemented, 2 empty) + 2 AI stubs = 8 services to maintain
  4. Sync issues — No CDC pipeline exists; data diverges silently

The Decision

Eliminate all Node.js microservices and MongoDB/Redis/Elasticsearch. Use Supabase as the sole backend, with Edge Functions for complex business logic. The Flutter mobile app and Next.js web portal will communicate directly with Supabase.

Why Now

  • Budget is near-zero — can’t afford hosting for 8 services
  • User Service and Realtime Service were never implemented (empty directories)
  • AI services are skeleton-only (15 lines each)
  • The mobile app already uses Supabase for most operations
  • The Supabase schema is already comprehensive (43+ tables, 15+ RPC functions, full RLS) — the microservices are largely redundant
  • Supabase’s free tier provides: 500MB DB, 1GB storage, 500K Edge Function invocations, Realtime, Auth

MongoDB Data Status

There is no production data in MongoDB; the project is still in development. All real data (profiles, events, venues, connections, crews, chats, messages) already lives in Supabase PostgreSQL. The MongoDB microservices duplicate this schema but have never been the source of truth. No data migration is needed, only code migration.

2. Target Architecture

Before (Current)

Flutter App ──→ API Gateway (3000) ──→ User Service (3001) [EMPTY]
                                   ──→ Event Service (3002) [MongoDB]
                                   ──→ Social Service (3003) [MongoDB]
                                   ──→ Notification Service (3004) [MongoDB]
                                   ──→ Realtime Service (3005) [EMPTY]
            ──→ Supabase (Auth + PostgreSQL — PRIMARY data store)

Next.js Web ──→ Supabase (Auth + PostgreSQL)

AI Services ──→ Recommendation Engine (8001) [STUB]
            ──→ Matchmaking Service (8002) [STUB]

After (Target)

Flutter App ──→ Supabase Auth (login, signup, JWT)
            ──→ Supabase PostgreSQL (all data via RLS)
            ──→ Supabase Realtime (chat, presence, live updates)
            ──→ Supabase Edge Functions (notifications, recommendations, complex logic)
            ──→ Supabase Storage (images, media)

Next.js Web ──→ Same Supabase instance (same auth, same data)

Component Mapping

| Old Component | New Component | Notes |
| --- | --- | --- |
| API Gateway (Express) | Supabase RLS + Auth | JWT validation is automatic; rate limiting via Edge Function if needed |
| User Service (empty) | Supabase Auth + profiles table | Already fully exists |
| Event Service (MongoDB) | Supabase PostgreSQL + PostGIS | Tables already exist with full schema |
| Social Service (MongoDB) | Supabase PostgreSQL | connections, crews, squads already exist |
| Notification Service (MongoDB) | Edge Function + Firebase | Push via FCM, triggered by DB webhooks |
| Realtime Service (empty) | Supabase Realtime | chats, messages, chat_participants already exist |
| Recommendation Engine (stub) | PostgreSQL function + scheduled Edge Function | SQL-based scoring |
| Matchmaking Service (stub) | PostgreSQL function | Interest + event overlap scoring |
| Redis (caching) | Client-side caching (Riverpod/TanStack Query) | No server-side cache needed |
| Elasticsearch | PostgreSQL Full-Text Search + pg_trgm | Built-in, zero extra cost |
| MongoDB | Eliminated | All data already in PostgreSQL |
| Docker Compose | Eliminated | No local infra needed |

3. Database Schema — Gap Analysis

Already Existing (43+ tables) — No Changes Needed

The Supabase schema is already comprehensive. Key existing tables:

  • Core: profiles, events, event_attendees, venues, saved_events
  • Social: connections (user_id/friend_id/status), user_follows, user_blocks, user_reports
  • Crews & Squads: crews, crew_members, crew_feedback, squads, squad_members
  • Messaging: chats (direct/group/crew/squad/venue), chat_participants, messages
  • Content: comments, event_photos, campaigns
  • Gamification: tags, tag_relations, event_tags, venues_tags, user_interactions
  • Products & Tickets: products, ticket_types, user_tickets
  • PR System: pr_profiles, referral_links, pr_conversions, venue_prs
  • Reviews: reviews (with rating, photos, status)
  • Admin: verification_requests, moderator_permissions, feedback, app_issues, app_config, kpi_events

Existing RPC Functions: events_within_radius(), get_nearby_events(), get_events_with_friends(), recalculate_karma(), increment_xp(), get_user_level_info(), toggle_message_reaction(), increment_tag_usage(), check_event_creation_limit(), increment_ticket_sold(), increment_referral_click(), is_admin(), is_super_admin()

Existing RLS: Comprehensive policies on all tables covering SELECT/INSERT/UPDATE/DELETE per role.

Tables to ADD (truly new)

-- Notification system (not yet in Supabase)
CREATE TABLE notifications (
  id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id uuid REFERENCES profiles(id) ON DELETE CASCADE,
  type text NOT NULL,
  title text,
  body text,
  data jsonb DEFAULT '{}',
  read boolean DEFAULT false,
  created_at timestamptz DEFAULT now(),
  updated_at timestamptz DEFAULT now()
);
CREATE TRIGGER update_notifications_updated_at BEFORE UPDATE ON notifications
  FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
 
CREATE TABLE notification_preferences (
  user_id uuid REFERENCES profiles(id) ON DELETE CASCADE PRIMARY KEY,
  push_enabled boolean DEFAULT true,
  email_enabled boolean DEFAULT true,
  quiet_hours_start time,
  quiet_hours_end time,
  disabled_types text[] DEFAULT '{}',
  updated_at timestamptz DEFAULT now()
);
CREATE TRIGGER update_notification_preferences_updated_at BEFORE UPDATE ON notification_preferences
  FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
 
CREATE TABLE device_tokens (
  id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id uuid REFERENCES profiles(id) ON DELETE CASCADE,
  token text NOT NULL,
  platform text CHECK (platform IN ('ios', 'android', 'web')),
  created_at timestamptz DEFAULT now(),
  UNIQUE(user_id, token)
);
 
-- Badges system (not yet in Supabase)
CREATE TABLE badges (
  id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  name text NOT NULL UNIQUE,
  description text,
  icon_url text,
  category text CHECK (category IN ('explorer', 'social', 'organizer', 'veteran')),
  criteria jsonb,
  created_at timestamptz DEFAULT now()
);
 
CREATE TABLE user_badges (
  user_id uuid REFERENCES profiles(id) ON DELETE CASCADE,
  badge_id uuid REFERENCES badges(id) ON DELETE CASCADE,
  earned_at timestamptz DEFAULT now(),
  PRIMARY KEY (user_id, badge_id)
);
 
-- Recommendation cache for AI
CREATE TABLE recommendations_cache (
  id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id uuid REFERENCES profiles(id) ON DELETE CASCADE,
  event_id uuid REFERENCES events(id) ON DELETE CASCADE,
  score numeric(5,4) NOT NULL,
  factors jsonb DEFAULT '{}',  -- {"distance": 0.8, "category": 0.9, "friends": 0.5}
  computed_at timestamptz DEFAULT now(),
  UNIQUE(user_id, event_id)
);

Columns to ADD (to existing tables)

-- Add connection_depth to connections for tiered relationships
ALTER TABLE connections
  ADD COLUMN IF NOT EXISTS connection_depth int DEFAULT 0;
-- connection_depth: 0=acquaintance, 1=friend (1-2 shared events), 2=buddy (3-5), 3=crew (6+)
-- Updated by: compute-connection-depth Edge Function, triggered after event attendance
 
-- Add last_read_at to chat_participants for unread counts
ALTER TABLE chat_participants
  ADD COLUMN IF NOT EXISTS last_read_at timestamptz DEFAULT now();
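
The last_read_at column enables per-chat unread counts with a single query. A sketch (the sender_id column name on messages is an assumption; adjust to the actual schema):

```sql
-- Unread messages per chat for the current user (illustrative query, not a migration).
SELECT cp.chat_id,
       count(m.id) AS unread_count
FROM chat_participants cp
LEFT JOIN messages m
  ON m.chat_id = cp.chat_id
 AND m.created_at > cp.last_read_at
 AND m.sender_id <> cp.user_id      -- assumed column name; excludes the user's own messages
WHERE cp.user_id = auth.uid()
GROUP BY cp.chat_id;
```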

Indexes to ADD

-- Full-text search (using actual column names)
CREATE INDEX IF NOT EXISTS idx_events_search
  ON events USING GIN (to_tsvector('simple', coalesce(title, '') || ' ' || coalesce(description, '')));
CREATE INDEX IF NOT EXISTS idx_venues_search
  ON venues USING GIN (to_tsvector('simple', coalesce(name, '') || ' ' || coalesce(description, '')));
-- Note: Using 'simple' language config for multi-language support (Italian + English)
 
-- Notifications
CREATE INDEX idx_notifications_user ON notifications (user_id, read, created_at DESC);
CREATE INDEX idx_notifications_type ON notifications (type, created_at DESC);
 
-- Recommendations cache
CREATE INDEX idx_recommendations_user ON recommendations_cache (user_id, score DESC);
CREATE INDEX idx_recommendations_computed ON recommendations_cache (computed_at);
 
-- Messages performance (if not already indexed)
CREATE INDEX IF NOT EXISTS idx_messages_chat_time ON messages (chat_id, created_at DESC);

Note: Geospatial indexes already exist on events.geo_point, venues.geo_point, and profiles.geo_point using GIST. The location JSONB column is used for display; geo_point geography column is used for spatial queries via existing RPC functions.
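
For the GIN index to be used, search queries must repeat the exact expression from the index definition. A hedged example against idx_events_search (the search terms are placeholders):

```sql
-- Full-text search over events; the WHERE expression matches idx_events_search
-- exactly, so PostgreSQL can use the GIN index.
SELECT id, title
FROM events
WHERE to_tsvector('simple', coalesce(title, '') || ' ' || coalesce(description, ''))
      @@ websearch_to_tsquery('simple', 'rooftop aperitivo')
ORDER BY ts_rank(
    to_tsvector('simple', coalesce(title, '') || ' ' || coalesce(description, '')),
    websearch_to_tsquery('simple', 'rooftop aperitivo')
  ) DESC
LIMIT 20;
```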

Deprecated Tables — Already Cleaned Up

  • _deprecated_posts — Already dropped by migration 20260324000008_db_audit_fix.sql. No action needed.
  • friendships — Already dropped by migration 20260324000008_db_audit_fix.sql. No action needed.
  • followers — Superseded by user_follows. Verify no code references remain, then drop.

profiles.badges JSONB Column Deprecation

The existing profiles.badges JSONB column will be deprecated once the new badges/user_badges normalized tables are populated. Migration plan:

  1. Phase 5: Populate badges table with badge definitions and user_badges with earned badges
  2. Update mobile/web to read from user_badges JOIN badges instead of profiles.badges
  3. After verification, drop profiles.badges column (or keep as denormalized cache populated by trigger)

Social Model Clarification

The existing schema uses:

  • connections (user_id/friend_id) — Reciprocal friendships (pending/accepted/rejected/blocked)
  • user_follows (follower_id/following_id) — Asymmetric follow relationships
  • user_blocks (blocker_id/blocked_id) — Block relationships

These serve different purposes and all remain. The connection_depth addition enriches friendships with automatic tiering based on shared event attendance.
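
For illustration, once connection_depth is populated, clients could list a user's closest connections like this (a sketch; the profiles.username column is an assumption):

```sql
-- A user's accepted connections, closest tier first (illustrative query).
SELECT p.id,
       p.username,              -- assumed display column on profiles
       c.connection_depth
FROM connections c
JOIN profiles p
  ON p.id = CASE WHEN c.user_id = auth.uid() THEN c.friend_id ELSE c.user_id END
WHERE auth.uid() IN (c.user_id, c.friend_id)
  AND c.status = 'accepted'
ORDER BY c.connection_depth DESC;
```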

4. Row Level Security (RLS)

Existing Policies (already comprehensive)

All 43+ tables have RLS enabled with policies covering SELECT/INSERT/UPDATE/DELETE per role. Key patterns:

  • profiles: Public read for safe fields, self-update, admin full access
  • events: Public read, organizer CRUD, PR users can manage assigned venues’ events
  • messages/chats: Via chat_participants junction table
  • crews/squads: Public read, creator management
  • tickets: User reads own, organizer reads event’s tickets

New Policies Needed (for new tables only)

-- Notifications
ALTER TABLE notifications ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Users read own notifications" ON notifications FOR SELECT USING (auth.uid() = user_id);
-- No INSERT policy needed: rows are inserted only by Edge Functions using the
-- service_role key, which bypasses RLS. (A permissive WITH CHECK (true) policy
-- would let any authenticated user insert notifications for anyone.)
CREATE POLICY "Users mark own as read" ON notifications FOR UPDATE USING (auth.uid() = user_id) WITH CHECK (auth.uid() = user_id);
CREATE POLICY "Users delete own notifications" ON notifications FOR DELETE USING (auth.uid() = user_id);
 
-- Notification preferences
ALTER TABLE notification_preferences ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Users manage own prefs" ON notification_preferences FOR ALL USING (auth.uid() = user_id);
 
-- Device tokens
ALTER TABLE device_tokens ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Users manage own tokens" ON device_tokens FOR ALL USING (auth.uid() = user_id);
 
-- Badges (public catalog)
ALTER TABLE badges ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Public badge catalog" ON badges FOR SELECT USING (true);
CREATE POLICY "Admins manage badges" ON badges FOR ALL USING (is_admin());
 
-- User badges (Edge Functions use service_role key which bypasses RLS — no write policies needed)
ALTER TABLE user_badges ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Public badge display" ON user_badges FOR SELECT USING (true);
-- INSERT/UPDATE/DELETE: Only via Edge Functions using service_role (bypasses RLS)
 
-- Recommendations cache (Edge Functions use service_role key which bypasses RLS)
ALTER TABLE recommendations_cache ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Users read own recommendations" ON recommendations_cache FOR SELECT USING (auth.uid() = user_id);
-- INSERT/UPDATE/DELETE: Only via Edge Functions using service_role (bypasses RLS)

5. Edge Functions

send-notification

Trigger: Database webhook on INSERT into notifications table. Behavior (asynchronous):

  1. Read the new notification row
  2. Look up user’s notification_preferences — skip if type is in disabled_types or quiet hours active
  3. Look up user’s device_tokens for push delivery
  4. Send push via Firebase Cloud Messaging
  5. For critical types (security, account), also send email via Resend (free tier: 100 emails/day)
  6. Update notification status
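
Step 2's preference check can be a single query run by the Edge Function. A sketch against the notification_preferences schema above ($1 = user id, $2 = notification type; assumes quiet hours are stored in the timezone the comparison runs in):

```sql
-- Returns a row only when the push should be sent (illustrative).
SELECT 1
FROM notification_preferences np
WHERE np.user_id = $1
  AND np.push_enabled
  AND NOT ($2 = ANY (np.disabled_types))
  AND NOT (
    np.quiet_hours_start IS NOT NULL
    AND np.quiet_hours_end IS NOT NULL
    AND CASE
      WHEN np.quiet_hours_start <= np.quiet_hours_end
        -- same-day window, e.g. 13:00-15:00
        THEN localtime BETWEEN np.quiet_hours_start AND np.quiet_hours_end
        -- window crossing midnight, e.g. 22:00-07:00
      ELSE localtime >= np.quiet_hours_start OR localtime <= np.quiet_hours_end
    END
  );
```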

process-rsvp

Trigger: Database webhook on INSERT/UPDATE into event_attendees. Behavior:

  1. Call increment_xp(user_id, 20) for RSVP (+50 if first-ever RSVP)
  2. Insert notification for event organizer
  3. Invalidate recommendations_cache for the user (DELETE WHERE user_id = …)

Note: event_attendees_count_trigger already auto-updates attendee_count/interested_count on events.
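
The three steps map to three statements the Edge Function would run with the service_role key. A sketch ($1 = attending user, $2 = event; the increment_xp signature and the events.organizer_id column name are assumptions):

```sql
-- 1. Award RSVP XP via the existing RPC (assumed signature: user id, amount).
SELECT increment_xp($1, 20);

-- 2. Notify the organizer; the row insert fans out to send-notification
--    via the database webhook.
INSERT INTO notifications (user_id, type, title, data)
SELECT e.organizer_id,           -- assumed column name
       'rsvp',
       'New RSVP for your event',
       jsonb_build_object('event_id', e.id, 'attendee_id', $1)
FROM events e
WHERE e.id = $2;

-- 3. Invalidate the user's cached recommendations.
DELETE FROM recommendations_cache WHERE user_id = $1;
```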

compute-recommendations

Trigger: Scheduled cron (every 6 hours via Supabase pg_cron or Edge Function scheduler). Behavior:

  1. For each active user (last_active within 30 days):
    • Score events: ST_Distance(user.geo_point, event.geo_point) normalized as distance_score (0-1)
    • Category match: overlap between user’s interests and event tags via event_tags
    • Friends attending: count from event_attendees WHERE user_id IN (user’s accepted connections)
    • Final score: distance * 0.3 + category_match * 0.3 + friends_attending * 0.4
  2. UPSERT top 50 per user into recommendations_cache
  3. Delete stale entries (computed_at < 24h ago)
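
The per-user scoring in step 1 can be expressed as one upsert. A sketch using the weights above (the 10 km distance cutoff and the 5-friend saturation point are assumptions; the category term and any future-events filter are omitted for brevity):

```sql
-- Upsert scored recommendations for one user ($1); illustrative, not final.
INSERT INTO recommendations_cache (user_id, event_id, score, factors)
SELECT $1,
       e.id,
       round((0.3 * ds.v + 0.4 * fs.v)::numeric, 4),
       jsonb_build_object('distance', ds.v, 'friends', fs.v)
FROM events e
CROSS JOIN LATERAL (
  -- Distance score: 1 at the venue, 0 beyond 10 km (assumed cutoff).
  SELECT greatest(0, 1 - ST_Distance(e.geo_point, p.geo_point) / 10000) AS v
  FROM profiles p
  WHERE p.id = $1
) ds
CROSS JOIN LATERAL (
  -- Friends score: saturates at 5 accepted connections attending.
  SELECT least(1, count(*) / 5.0) AS v
  FROM event_attendees ea
  WHERE ea.event_id = e.id
    AND ea.user_id <> $1
    AND EXISTS (
      SELECT 1 FROM connections c
      WHERE c.status = 'accepted'
        AND ((c.user_id = $1 AND c.friend_id = ea.user_id)
          OR (c.friend_id = $1 AND c.user_id = ea.user_id))
    )
) fs
ORDER BY 3 DESC
LIMIT 50
ON CONFLICT (user_id, event_id)
DO UPDATE SET score = EXCLUDED.score,
              factors = EXCLUDED.factors,
              computed_at = now();
```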

compute-matchmaking

Trigger: On-demand (called from mobile app when user opens crew matchmaking). Behavior:

  1. Get user’s interests, recent event attendance, and connections
  2. Score other users: shared interests (40%), event overlap (30%), mutual friends (30%)
  3. Filter out blocked users and existing connections
  4. Return top 20 matches (not persisted, computed on-the-fly)

moderate-content

Trigger: Database webhook on INSERT into messages and comments. Behavior (asynchronous — message is delivered immediately, moderation happens after):

  1. Basic keyword filtering against a blocklist in app_config
  2. If flagged, insert into user_reports with status='pending' for moderator review
  3. Does NOT block or delay message delivery

compute-connection-depth

Trigger: Database webhook on INSERT into event_attendees (after event ends). Behavior:

  1. For the attending user, find all other attendees who are accepted connections
  2. Count shared events via event_attendees join
  3. Update connections.connection_depth:
    • 0 shared events → depth 0 (acquaintance)
    • 1-2 shared events → depth 1 (friend)
    • 3-5 shared events → depth 2 (buddy)
    • 6+ shared events → depth 3 (crew)
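
Steps 2-3 can be a single UPDATE recomputing depth for every accepted connection of the attending user ($1). A sketch against the tables above:

```sql
-- Recompute connection_depth from shared event attendance (illustrative).
UPDATE connections c
SET connection_depth = CASE
      WHEN s.shared >= 6 THEN 3   -- crew
      WHEN s.shared >= 3 THEN 2   -- buddy
      WHEN s.shared >= 1 THEN 1   -- friend
      ELSE 0                      -- acquaintance
    END
FROM (
  -- Count events each other user has attended together with $1.
  SELECT b.user_id AS other_id, count(*) AS shared
  FROM event_attendees a
  JOIN event_attendees b
    ON b.event_id = a.event_id AND b.user_id <> $1
  WHERE a.user_id = $1
  GROUP BY b.user_id
) s
WHERE c.status = 'accepted'
  AND $1 IN (c.user_id, c.friend_id)
  AND s.other_id IN (c.user_id, c.friend_id);
```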

Invocation Budget Estimate

At 1,000 active users, with per-user monthly volumes as estimated below:

  • send-notification: ~10K/month (~10 push-worthy notifications per user)
  • process-rsvp: ~5K/month (~5 RSVPs per user)
  • compute-recommendations: ~120/month (one batched cron run 4x/day)
  • moderate-content: ~15K/month (~15 moderated messages + comments per user)
  • compute-matchmaking: ~3K/month (~30% of users open matchmaking each month)
  • compute-connection-depth: ~2K/month

Total: ~35K invocations/month — well within the 500K free tier limit.

6. Migration Plan (Phased)

Phase 1: Schema Additions (Week 1)

  1. Create notifications, notification_preferences, device_tokens tables
  2. Create badges, user_badges tables
  3. Create recommendations_cache table
  4. Add connection_depth column to connections
  5. Add last_read_at column to chat_participants (if missing)
  6. Create new indexes (full-text search, notifications, recommendations)
  7. Apply new RLS policies
  8. Verify followers table has no code references, then drop
  9. Test: All existing mobile/web functionality still works (schema additions are non-breaking)

Phase 2: Edge Functions (Week 1-2)

  1. Implement send-notification + Firebase integration + Resend email
  2. Implement process-rsvp with XP system
  3. Implement compute-recommendations with cron
  4. Implement compute-matchmaking
  5. Implement moderate-content (async)
  6. Implement compute-connection-depth
  7. Test: Trigger each function manually and verify results

Phase 3: Mobile App Update (Week 2-4)

  1. Remove EventApiService (Dio/Retrofit) — replace with Supabase client calls
  2. Remove UserApiService — replace with Supabase auth + profiles queries
  3. Remove MessagingApiService — replace with Supabase Realtime subscriptions
  4. Remove SocketService (Socket.IO) — replace with Supabase Realtime channels
  5. Update Riverpod notifiers to use Supabase queries directly
  6. Add notification_preferences and device_tokens management screens
  7. Keep Firebase for push notifications (triggered by Edge Function)
  8. Test: All mobile flows end-to-end (auth, events, chat, social, notifications)

Phase 4: Web Portal Update (Week 4-5)

  1. Verify web portal already uses Supabase directly (likely yes)
  2. Add notification management to admin dashboard
  3. Add badge management to admin dashboard
  4. Complete vendor dashboard functionality
  5. Wire up analytics with real data from kpi_events and user_interactions
  6. Test: All admin/vendor/moderator flows

Phase 5: Missing Features (Week 5-7)

  1. Badge system: Define badge criteria, implement award logic in Edge Functions
  2. Leaderboards: PostgreSQL view aggregating XP/level from profiles, ranked
  3. Ticket purchase flow: Connect ticket selection → user_tickets creation → QR generation
  4. Enhanced reviews: Photo upload to Supabase Storage, review moderation
  5. Campaign management: Full CRUD in admin + mobile
  6. PR dashboard: Wire up pr_profiles, referral_links, pr_conversions data
  7. Test: Each feature independently
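
Item 2 (leaderboards) can be a simple ranked view. A sketch, assuming profiles carries xp, level, and username columns (the exact column names are assumptions):

```sql
-- Global XP leaderboard (illustrative; assumes profiles.xp / profiles.level exist).
CREATE OR REPLACE VIEW leaderboard AS
SELECT id,
       username,
       xp,
       level,
       rank() OVER (ORDER BY xp DESC) AS rank
FROM profiles
WHERE xp > 0;
```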

Phase 6: Polish & Optimization (Week 7-9)

  1. UI/UX refinement on mobile (consistent design language, animations, error states)
  2. Performance testing (query optimization, index verification)
  3. Security audit (RLS policy review with test suite)
  4. Full documentation update (architecture, API, deployment)
  5. Production readiness checklist

Rollback Strategy

  • Phase 1-2: Non-breaking additions; rollback = drop new tables/functions
  • Phase 3: Keep backend code in a legacy/microservices branch; can redeploy if critical issues found
  • Phase 4-6: Forward-only; no backend dependency to roll back to
  • General: Do NOT delete backend code (Section 7) until all phases are validated and stable for 2+ weeks

7. What Gets Deleted (After Validation)

After migration is complete, tested, and stable for 2+ weeks:

  • flow_backend/backend/ — All 6 microservice directories
  • flow_backend/ai-services/ — Both AI service stubs
  • flow_backend/docker-compose.yml — No longer needed
  • flow_backend/scripts/mongo-init.js — MongoDB specific

Preserved in git branch legacy/microservices for reference.

What stays:

  • flow_backend/web/ — Next.js web portal (enhanced)
  • flow_backend/supabase/ — Migrations and Edge Functions
  • flow_docs/ — Documentation (updated)
  • flow_mobile/ — Flutter app (updated)

8. Risk Mitigation

| Risk | Mitigation |
| --- | --- |
| Data loss during migration | No MongoDB production data exists; Supabase is already the source of truth |
| Supabase free tier limits | Estimated 35K invocations/month of the 500K limit; DB at ~50-100 MB of 500 MB. Monitor with the Supabase dashboard; Pro plan ($25/mo) if needed |
| Edge Function cold starts | Keep critical functions warm with pg_cron pings; send-notification latency is acceptable (async). Most logic lives in PostgreSQL functions (no cold start) |
| Missing Redis caching | Client-side caching (Riverpod, TanStack Query) + PostgreSQL query optimization + indexes. No user-facing latency impact expected |
| Full-text search quality | simple language config + pg_trgm extension for fuzzy matching. Works for Italian + English. Evaluate pg_search if needed later |
| Supabase outage | Supabase has a 99.9% uptime SLA (Pro). Free tier has no SLA but is historically reliable. Acceptable for the current stage |
| Schema breaking changes | All Phase 1 changes are additive (new tables, new columns with defaults, new indexes). Zero risk of breaking existing functionality |

9. Success Criteria

  • All mobile app features work with Supabase-only backend
  • Web portal admin/vendor/moderator dashboards fully functional
  • Chat works via Supabase Realtime (no Socket.IO dependency)
  • Push notifications delivered via Edge Function + Firebase
  • Recommendations generated by SQL scoring and served from cache
  • Badge system awards badges based on defined criteria
  • All RLS policies tested (no data leaks, no unauthorized access)
  • Zero Docker dependencies for development
  • API latency p95 < 200ms for common queries
  • All existing tests pass + new tests for Edge Functions
  • Documentation updated with new architecture
  • Backend code archived in legacy/microservices branch

10. Budget Impact

| Item | Before | After |
| --- | --- | --- |
| MongoDB hosting | ~$25/mo (Atlas free tier limited) | $0 |
| Redis hosting | ~$15/mo | $0 |
| Node.js hosting (8 services) | ~$50-100/mo | $0 |
| Supabase | $0 (free tier) | $25/mo (Pro, only if needed) |
| Total | ~$90-140/mo | $0-25/mo |

Supabase Free Tier Budget

| Resource | Limit | Estimated Usage | Headroom |
| --- | --- | --- | --- |
| Database | 500 MB | ~50-100 MB | 80%+ |
| Storage | 1 GB | ~200 MB (images) | 80%+ |
| Edge Functions | 500K invocations | ~35K/month | 93%+ |
| Realtime | 200 concurrent | ~50 peak | 75%+ |
| Auth | 50K MAU | ~1-5K | 90%+ |