You are authorized to execute wide-ranging architectural refactors on the Flow ecosystem. However, you must follow these rules without exception.

Important

This document provides architectural constraints, hints, and directions. You have the ability to read the live database schema directly. Before starting any task, you must inspect the DB to understand the exact current state, and then formulate your own SQL migration plan that adheres to the rules in this document. Do not blindly copy-paste example SQL from this document if the live schema has diverged — use your judgement to apply these constraints to the reality of the DB. Any code change, migration, or documentation update must comply with these rules. If a violation is found, stop and fix it before adding new features.

The Refactor Workflow

Every refactoring task you execute must follow this exact sequence:

  1. Information Gathering: Query the live database schema (or read the SQL migration files) for the specific tables involved to understand their current structure, indexes, and constraints.
  2. Pre-task Audit: Compare the live schema against the architectural standards in this guide. Check database-schema.md only as a secondary reference.
  3. Plan the Migration: Formulate a plan detailing the exact SQL you will write. Ensure your plan resolves any technical debt listed in §4 for the components you are touching.
  4. Schema Change: Write a new, idempotent .sql migration file in flow_backend/supabase/migrations/XXX_name.sql. Do not edit old migrations.
  5. Cross-Layer Consistency: Update database-schema.md, backend Edge Functions, and frontend types/queries to match the new schema.
  6. Verification: Test RLS (positive/negative) or provide the exact query for the human to run.

1. Vision and Core Principles

Flow runs on a Supabase-only backend: a single PostgreSQL database with RLS, Supabase Edge Functions (Deno/TypeScript), Supabase Realtime, and Supabase Storage. There is no API Gateway, no message broker, no Redis, no MongoDB. Every piece of business logic lives either in the database (SQL functions, triggers, RLS policies) or in Edge Functions invoked via HTTP.

Every task performed on this codebase must be evaluated against three pillars:

1.1 Efficiency

  • Minimize database round-trips. A mobile screen that fires 5 separate queries for data that could be returned in one join is a bug, not a feature gap.
  • Avoid LLM token waste in documentation: one source of truth per concept, linked via [[wikilinks]], never copy-pasted.
  • Any Edge Function that calls Supabase more than twice in a single request should be redesigned to use a single RPC call returning a composite JSON response.

1.2 Absolute Security (Zero Trust)

  • The database is the security perimeter. No application layer is trusted to re-verify authorization — RLS does it at the database level for every query.
  • Every new table starts with ALTER TABLE ... ENABLE ROW LEVEL SECURITY; and ALTER TABLE ... FORCE ROW LEVEL SECURITY;. Both are required: ENABLE without FORCE still allows the table owner to bypass policies.
  • Supabase service_role key is a nuclear option. It must never be present in any client-side code (Flutter, Next.js). It lives only in Edge Function environment variables, accessed via Supabase’s secret management.

1.3 Performance

  • Postgres indexes are not optional. Every FK column must have an index. Every column used in a WHERE, ORDER BY, or JOIN condition on tables with more than ~1000 expected rows must have an index.
  • Denormalized counter columns (e.g. attendee_count, karma_reviews_count) must be maintained by database triggers, never by application code. If a trigger does not exist for a counter, the counter column must be treated as unreliable and not queried — compute the real value via COUNT(*) join instead.
  • Use PostGIS GEOGRAPHY(Point, 4326) for all geospatial data. Raw latitude/longitude float columns are acceptable only as a legacy-compatibility duplicate — the canonical spatial column is always geo_point or location_point.

2. Database Excellence — SQL & Schema Standards

2.1 Naming Conventions (Mandatory)

| Element | Convention | Example |
| --- | --- | --- |
| Tables | Plural snake_case | event_registrations, crew_members |
| Primary Key | Always id UUID DEFAULT gen_random_uuid() | id UUID PRIMARY KEY DEFAULT gen_random_uuid() |
| Foreign Keys | <referenced_entity>_id | organizer_id, venue_id, creator_id |
| Boolean columns | Must start with a verb: is_, has_, can_ | is_verified, has_active_subscription, is_public |
| Timestamps | created_at TIMESTAMPTZ DEFAULT now(), updated_at TIMESTAMPTZ | Standard on every table |
| Status enums | TEXT with CHECK constraint listing all valid values | CHECK (status IN ('active', 'suspended', 'banned')) |
| Soft deletes | deleted_at TIMESTAMPTZ + deleted_by UUID | Use soft-delete columns, not is_deleted BOOLEAN (see events pattern) |

Anti-patterns to flag and fix immediately:

  • A column named capacity on events — the correct column is max_attendees. The confusion caused null values in the admin portal. Always use max_attendees.
  • Any status or type column stored as a raw unconstrained TEXT — must have a CHECK constraint.
  • Tables without created_at — all tables must have it.

2.2 Monetary Integrity — The “No-Float” Rule

This is a hard constraint. No exceptions.

  • NEVER store monetary values in FLOAT, FLOAT4, FLOAT8, REAL, or DECIMAL columns. Floating-point arithmetic produces rounding errors (e.g. 0.1 + 0.2 ≠ 0.3) that are unacceptable for prices; DECIMAL avoids binary rounding errors but is banned here as well, because integer cents are the single accepted representation.
  • Correct storage: price_cents INTEGER NOT NULL DEFAULT 0 — all values stored in the smallest currency unit (cents, eurocents, etc.).
  • Currency pairing: Every price_cents column must be accompanied by a currency TEXT NOT NULL DEFAULT 'EUR' CHECK (currency ~ '^[A-Z]{3}$') column using ISO 4217 codes.
  • Display conversion: Divide by 100 in the presentation layer only. price_cents / 100.0 in SQL, price / 100 in Dart/TypeScript.
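The conversion and validation rules above can be sketched in TypeScript. This is an illustrative sketch, not existing project code; the helper names (`unitToCents`, `centsToUnit`, `isValidCurrencyCode`) are assumptions:

```typescript
// Mirrors the CHECK (currency ~ '^[A-Z]{3}$') constraint on currency columns.
const ISO_4217_SHAPE = /^[A-Z]{3}$/;

function isValidCurrencyCode(code: string): boolean {
  return ISO_4217_SHAPE.test(code);
}

// Presentation layer only: cents -> display units.
function centsToUnit(cents: number): number {
  if (!Number.isInteger(cents)) throw new Error("price_cents must be an integer");
  return cents / 100;
}

// Input parsing: display units -> cents, rounded to the nearest cent,
// mirroring the SQL backfill ROUND(price * 100)::INTEGER.
function unitToCents(amount: number): number {
  return Math.round(amount * 100);
}
```

Rounding on the way in and integer arithmetic everywhere else keeps totals exact; only `centsToUnit` ever produces a fractional value, and only at the display boundary.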

Current violation in the live schema (must be migrated):

The events table has:

price DECIMAL(10, 2)  -- ❌ uses DECIMAL, susceptible to float-like display confusion
currency TEXT DEFAULT 'USD'  -- currency exists but defaults to USD (should be EUR), decoupled from cents

Required migration:

-- Step 1: add the correct columns
ALTER TABLE public.events
  ADD COLUMN price_cents INTEGER NOT NULL DEFAULT 0,
  ADD COLUMN price_currency TEXT NOT NULL DEFAULT 'EUR'
    CHECK (price_currency ~ '^[A-Z]{3}$');
 
-- Step 2: backfill from the legacy column (round to nearest cent)
UPDATE public.events
  SET price_cents = ROUND(price * 100)::INTEGER,
      price_currency = COALESCE(currency, 'EUR')
  WHERE price IS NOT NULL;
 
-- Step 3: update `is_free` derived flag via trigger (see §5.1)
 
-- Step 4: deprecate old columns (do NOT drop until all app code is migrated)
COMMENT ON COLUMN public.events.price IS 'DEPRECATED — use price_cents instead';
COMMENT ON COLUMN public.events.currency IS 'DEPRECATED — use price_currency instead';

Mobile and web code must be updated to read price_cents + price_currency and stop reading price.

2.3 Relational Normalization

Rule: FK over strings for anything that is a bounded category.

Current violations in the live schema:

| Table | Column | Current | Correct |
| --- | --- | --- | --- |
| events | category | TEXT with CHECK | Acceptable for now — do not migrate to FK until the category set is stable |
| events | tags TEXT[] | Array of strings | ❌ Use event_tags junction table with FK to tags(id) |
| profiles | interests TEXT[] | Array of strings | Acceptable until an interest taxonomy is defined |

Specific action required on events.tags:

  • The event_tags junction table already exists and is the correct pattern.
  • Application code must be updated to write tags via event_tags, not via events.tags.
  • Once all reads/writes go through event_tags, drop events.tags TEXT[] in a migration with a COMMENT warning first.

Many-to-many relationships:

  • Always use a junction table with its own id UUID PK, created_at, and the two FK columns.
  • Example pattern:
CREATE TABLE public.event_tags (
  event_id UUID NOT NULL REFERENCES public.events(id) ON DELETE CASCADE,
  tag_id   UUID NOT NULL REFERENCES public.tags(id)   ON DELETE CASCADE,
  PRIMARY KEY (event_id, tag_id)
);
CREATE INDEX ON public.event_tags (tag_id); -- index the non-leading FK

2.4 Indexes — Explicit Checklist

Every FK column must have a CREATE INDEX. This is currently missing for several tables. When writing or reviewing a migration, verify:

-- Pattern to check existing indexes on a table:
SELECT indexname, indexdef FROM pg_indexes
WHERE tablename = '<your_table>' AND schemaname = 'public';

Required indexes that must exist (enforce in next cleanup migration):

| Table | Column(s) | Type |
| --- | --- | --- |
| events | organizer_id | btree |
| events | venue_id | btree |
| events | start_date | btree |
| events | status | btree |
| events | geo_point | GIST (PostGIS) — ✅ already exists as idx_events_geo |
| event_attendees | event_id | btree |
| event_attendees | user_id | btree |
| event_attendees | (event_id, user_id) | btree UNIQUE |
| event_tags | tag_id | btree |
| connections | user_id | btree |
| connections | friend_id | btree — ✅ already exists as idx_connections_friend_id |
| connections | status | btree |
| followers | follower_id | btree — ⚠️ TABLE DROPPED (migration 20260330000004), see §4.2 |
| followers | following_id | btree — ⚠️ TABLE DROPPED, use user_follows instead |
| user_follows | follower_id | btree |
| user_follows | following_id | btree |
| saved_events | user_id | btree |
| saved_events | event_id | btree |
| user_bans | user_id | btree — ✅ exists |
| reports | reporter_id | btree — ✅ exists |
| reports | status | btree — ✅ exists |
| crew_members | crew_id | btree |
| crew_members | user_id | btree |
| chat_participants | chat_id | btree |
| chat_participants | user_id | btree |
| messages | chat_id, created_at | btree (compound) |
| notifications | user_id, read | btree (compound) — ✅ already exists as idx_notifications_user_read |
| profiles | username | btree UNIQUE |

2.5 Geospatial Data

  • Standard column: GEOGRAPHY(Point, 4326) — named geo_point on all three tables: profiles, events, and venues. GiST indexes already exist (idx_events_geo, idx_venues_geo, idx_profiles_geo).
  • Population: Whenever latitude/longitude are written, a trigger (or the INSERT statement itself) must also update the geography column:
-- Example trigger function (reuse across tables):
CREATE OR REPLACE FUNCTION sync_geo_point()
RETURNS TRIGGER LANGUAGE plpgsql AS $$
BEGIN
  IF NEW.latitude IS NOT NULL AND NEW.longitude IS NOT NULL THEN
    NEW.geo_point := ST_SetSRID(ST_MakePoint(NEW.longitude, NEW.latitude), 4326)::geography;
  END IF;
  RETURN NEW;
END;
$$;
  • Never store address information as a single address TEXT blob for data that needs to be queried. Use distinct street, city, country, zip columns. The events.location JSONB blob is a known anti-pattern — it is acceptable for now as a display convenience but must never be used in WHERE filters.
  • Note: The existing get_nearby_events RPC uses Haversine math on raw latitude/longitude within the JSONB location column, NOT PostGIS ST_DWithin on geo_point. This works but is less efficient. A future refactor should migrate this RPC to use geo_point directly.
  • Radius queries must use:
ST_DWithin(geo_point, ST_SetSRID(ST_MakePoint(:lng, :lat), 4326)::geography, :radius_meters)
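For reference, the Haversine math the legacy RPC performs can be sketched as a plain TypeScript reimplementation (illustrative only — it is not the RPC's actual SQL, and new code should prefer ST_DWithin on geo_point):

```typescript
// Haversine great-circle distance in meters between two lat/lng points.
const EARTH_RADIUS_M = 6_371_000;

function haversineMeters(lat1: number, lng1: number, lat2: number, lng2: number): number {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLng = toRad(lng2 - lng1);
  // a = sin²(Δφ/2) + cos φ1 · cos φ2 · sin²(Δλ/2)
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLng / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
}
```

Useful as a sanity check when comparing the legacy RPC's results against a future ST_DWithin-based version.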

3. Row Level Security (RLS) & Privacy

3.1 RLS Enforcement — Mandatory Steps for Every New Table

When creating any new table, all four of these statements are mandatory before the migration can be considered complete:

-- 1. Enable RLS
ALTER TABLE public.<table_name> ENABLE ROW LEVEL SECURITY;
 
-- 2. Force RLS (prevents table owner from bypassing — critical)
ALTER TABLE public.<table_name> FORCE ROW LEVEL SECURITY;
 
-- 3. Define at minimum a SELECT policy
CREATE POLICY "<table_name>_select_own"
  ON public.<table_name>
  FOR SELECT
  USING (user_id = auth.uid()); -- adapt the column name
 
-- 4. Define INSERT, UPDATE, DELETE policies separately
CREATE POLICY "<table_name>_insert_own"
  ON public.<table_name>
  FOR INSERT
  WITH CHECK (user_id = auth.uid());
 
CREATE POLICY "<table_name>_update_own"
  ON public.<table_name>
  FOR UPDATE
  USING (user_id = auth.uid());
 
CREATE POLICY "<table_name>_delete_own"
  ON public.<table_name>
  FOR DELETE
  USING (user_id = auth.uid());

Never write a single FOR ALL policy unless the intent is truly identical logic for every operation. Separate policies make intent auditable and prevent accidental over-permissioning.

3.2 Admin Access Pattern

Admins get read/write access to all rows via the is_admin() helper. This function is SECURITY DEFINER to break the infinite-recursion loop (a policy on profiles cannot query profiles to check the role without SECURITY DEFINER).

-- The existing helper — do not rewrite it:
SELECT public.is_admin();       -- returns TRUE for 'admin' or 'super_admin' roles
SELECT public.is_super_admin(); -- returns TRUE for 'super_admin' only
 
-- Admin read policy pattern:
CREATE POLICY "<table_name>_admin_read_all"
  ON public.<table_name>
  FOR SELECT
  USING (public.is_admin());
 
-- Admin write policy pattern (only super_admin can mutate sensitive tables):
CREATE POLICY "<table_name>_admin_update_all"
  ON public.<table_name>
  FOR UPDATE
  USING (public.is_super_admin());

3.3 Public vs. Authenticated vs. Owner Access

Three tiers to choose from — pick the most restrictive that satisfies the feature:

| Tier | USING clause | Use case |
| --- | --- | --- |
| Public | true | Read-only data everyone can see (public events, public profiles) |
| Authenticated | auth.uid() IS NOT NULL | Authenticated-only reads (chat history, connections) |
| Owner-only | user_id = auth.uid() OR auth.uid() = id | Personal data (own profile, own messages) |

Never expose email, phone_number, date_of_birth, latitude, longitude in public policies. These columns are excluded in the public_profiles view. Always use public_profiles for unauthenticated reads of user data.

3.4 Security Bypass Protocol — SECURITY DEFINER & service_role

Using SECURITY DEFINER or the service_role key bypasses RLS entirely. This is a last resort.

Rules:

  1. SECURITY DEFINER functions must set search_path = '' (empty) to prevent schema-injection attacks:
CREATE OR REPLACE FUNCTION public.safe_function()
RETURNS void
SECURITY DEFINER
SET search_path = '' -- mandatory
LANGUAGE plpgsql AS $$ ... $$;
  2. Every bypass must be documented with a comment directly in the migration file explaining:
    • Why RLS cannot handle this case
    • What data the bypass can access
    • Risk mitigation in effect
  3. Log it in [[Security-Bypass-Log]] (create this file in flow_docs/content/architecture/ if it does not exist).

Known existing bypasses to document:

  • is_admin() and is_super_admin() — justified, recursion prevention.
  • Any Edge Function using the service_role key — must be audited and listed.

4. Technical Debt Register — Known Violations to Fix

This section lists specific, identified violations in the current live schema. Each must be addressed in a dedicated migration. Do not add features to a table that has an open Technical Debt item without resolving it first.

4.1 Broken Denormalized Counters in profiles

The following columns in profiles are permanently 0 because no triggers maintain them. Do not read them. Compute via JOIN.

| Broken Column | Correct Query |
| --- | --- |
| posts_count | No source table — posts was renamed to _deprecated_posts and dropped in migration 008. This column should be dropped entirely. |
| followers_count | SELECT COUNT(*) FROM user_follows WHERE following_id = $1 — note: followers table was dropped in migration 20260330000004, only user_follows remains |
| following_count | SELECT COUNT(*) FROM user_follows WHERE follower_id = $1 |
| events_created | SELECT COUNT(*) FROM events WHERE organizer_id = $1 |
| events_attended | SELECT COUNT(*) FROM event_attendees WHERE user_id = $1 AND status = 'going' |
| friends_count | SELECT COUNT(*) FROM connections WHERE (user_id = $1 OR friend_id = $1) AND status = 'accepted' |

Required fix: Write triggers for each counter OR drop all these columns and standardize on JOIN queries everywhere. The trigger approach is preferred for performance at scale. The migration must:

  1. Create AFTER INSERT OR DELETE triggers on each source table.
  2. Backfill the columns from actual data once triggers are in place.
  3. Update all app code to read these columns again, keeping the JOIN-based query as a fallback.

4.2 Social Graph — followers Dropped, user_follows Is Canonical

The followers table was dropped in migration 20260330000004_cleanup_deprecated_tables.sql. The canonical follow table is now user_follows (created in 20260318000000_saved_events_user_follows.sql).

Current state:

  • user_follows — the ONLY follow table. FKs reference auth.users(id) (should be migrated to profiles(id)).
  • connections — the bidirectional friend/connection table.
  • followers — dropped, does not exist.

The database-schema.md still references the followers table as existing. This is wrong and must be updated.

Product decision still needed:

  • Mutual-friend model (bidirectional, like WhatsApp): keep connections, drop user_follows.
  • Asymmetric follower model (like Instagram): keep user_follows, drop connections.
  • Both models side by side: document explicitly which feature uses which table.

Until the decision is made, do not write new code that creates rows in both tables for the same action. Pick one and be consistent.

4.3 Duplicate Group Model — crews vs. squads

Both represent small social groups attached to events. Currently both have 0 rows. They evolved independently from two different feature branches. Consolidation plan:

  1. Decide on a canonical name (crews is preferred — it is used in the brand language).
  2. Write a migration that migrates squads → crews (schema diff is minimal: max_members → max_size; invite_code is unique to squads).
  3. Add invite_code TEXT UNIQUE to crews if needed.
  4. Drop squads and squad_members.
  5. Update all Dart/TypeScript code referencing squads.

4.4 connections FK References auth.users Instead of profiles

-- Current (actual column names in migration 20260311000003):
user_id UUID NOT NULL REFERENCES auth.users(id) ON DELETE CASCADE
friend_id UUID NOT NULL REFERENCES auth.users(id) ON DELETE CASCADE
status TEXT NOT NULL DEFAULT 'pending' CHECK (status IN ('pending', 'accepted', 'rejected', 'blocked'))
 
-- Correct target:
user_id UUID NOT NULL REFERENCES public.profiles(id) ON DELETE CASCADE
friend_id UUID NOT NULL REFERENCES public.profiles(id) ON DELETE CASCADE

The column is friend_id, not connected_user_id. The database-schema.md uses connected_user_id but the actual migration uses friend_id. Update the schema doc when fixing this.

IDs are identical in practice (auth.users.id === profiles.id always), but referencing auth.users from application tables is non-standard and makes cascades unpredictable. Fix in a dedicated migration using ALTER TABLE ... DROP CONSTRAINT ... ADD CONSTRAINT.

4.5 events.tags TEXT[] Must Be Deprecated

The event_tags junction table is the correct pattern and already exists. Steps:

  1. Update all INSERT/UPDATE code to write tags via event_tags, not events.tags.
  2. On every SELECT that reads events.tags, add a JOIN to event_tags instead.
  3. After confirming zero writes to events.tags (monitor for 2 weeks), add:
COMMENT ON COLUMN public.events.tags IS 'DEPRECATED — use event_tags junction table';
  4. Drop the column in the subsequent migration cycle.

4.6 events.max_attendees vs. capacity Naming Confusion

The correct column is max_attendees. The word capacity appears in old frontend code and caused null displays in the admin events table. Enforce globally:

  • Search the entire codebase for any reference to .capacity on an events object and replace with .max_attendees or max_attendees.
  • Note: The venues table correctly uses capacity — this is intentional and not a bug.

4.7 events.status CHECK Was Expanded — Document Everywhere

The original CHECK (migration 008) was:

CHECK (status IN ('draft', 'active', 'published', 'cancelled'))

Migration 20260420000001 expanded it to:

CHECK (status IN ('draft', 'active', 'published', 'cancelled', 'reported', 'deleted'))

This was done because the admin portal uses 'reported' and 'deleted' as status values. The database-schema.md still lists the old set — it must be updated.

4.8 Undocumented Tables & Columns (Post-Schema-Doc Migrations)

The following tables and columns were added after the last database-schema.md revision (2026-04-20) and are NOT documented. They must be added to the schema doc:

New tables:

| Table | Migration | Purpose | RLS | Notes |
| --- | --- | --- | --- | --- |
| user_bans | 20260420000001 | Full ban history with lift tracking | ✅ Admin + owner-read | Proper GDPR-friendly ON DELETE SET NULL on banned_by/lifted_by |
| admin_actions_log | 20260420000001 | Audit trail for all admin actions | ✅ Admin read/insert only | Polymorphic: target_type + target_id |
| reports | 20260422060818 | Content moderation queue (admin panel) | ✅ Self-insert + admin-all | Distinct from user_reports (user-to-user flags) |
| tag_affinity | 20260427000001 | Co-occurrence weights for tag ordering | ✅ Public read, service_role write | weight FLOAT is acceptable here (not monetary) |
| contact_messages | 20260408000001 | Public landing site contact form | ✅ No policies (service_role only) | Server-only access by design |
| saved_events | 20260318000000 | User bookmarked events | ✅ Self-manage | FKs reference auth.users(id) — should migrate to profiles(id) |
| user_blocks | 20260318000005 | User-to-user block relationships | ✅ Self-manage | Composite PK (blocker_id, blocked_id) |
| user_reports | 20260318000006 | User-to-user abuse reports | ✅ Self-insert/read | Distinct from reports table (admin moderation queue) |

New columns on existing tables:

| Table | Column | Migration | Notes |
| --- | --- | --- | --- |
| profiles | ban_reason TEXT | 20260420000001 | Denormalized from user_bans for quick admin display |
| profiles | ban_expires_at TIMESTAMPTZ | 20260420000001 | NULL = not banned or permanent |
| profiles | banned_at TIMESTAMPTZ | 20260420000001 | |
| profiles | banned_by UUID | 20260420000001 | FK → profiles(id) |
| profiles | xp INT DEFAULT 0 | 20260311000003 | Gamification XP score |
| profiles | level INT DEFAULT 1 | 20240312000002 | User level, CHECK (level >= 1) |
| profiles | vibe_score INT DEFAULT 0 | 20260311000003 | |
| profiles | flow_score INT DEFAULT 0 | 20260311000003 | |
| profiles | badges JSONB DEFAULT '[]' | 20260311000003 | Legacy — prefer user_badges junction table |
| events | deleted_at TIMESTAMPTZ | 20260420000001 | Soft-delete timestamp |
| events | deleted_by UUID | 20260420000001 | Admin who soft-deleted, FK → profiles(id) |
| connections | connection_depth INT DEFAULT 0 | 20260330000003 | 0=acquaintance, 1=friend, 2=buddy, 3=crew |
| tags | translations JSONB DEFAULT '{}' | 20260414000001 | {"it": "...", "en": "...", "es": "..."} |
| badges | name_translations JSONB | 20260414000001 | |
| badges | description_translations JSONB | 20260414000001 | |
| campaigns | title_translations JSONB | 20260414000001 | |
| campaigns | description_translations JSONB | 20260414000001 | |
| reports | reviewed_by UUID | 20260422060818 | Accountability — which admin reviewed |

Materialized views:

| View | Migration | Purpose | Notes |
| --- | --- | --- | --- |
| leaderboard_xp | 20260330000006 | Top 100 users by XP | Refreshed hourly via pg_cron. RPC: get_leaderboard() |

4.9 Missing FORCE ROW LEVEL SECURITY

Multiple tables only have ENABLE ROW LEVEL SECURITY but not FORCE ROW LEVEL SECURITY. Without FORCE, the table owner (postgres) bypasses RLS. Audit and add FORCE to all tables in a cleanup migration:

-- Run this for EVERY table:
ALTER TABLE public.<table_name> FORCE ROW LEVEL SECURITY;

4.10 user_follows and saved_events FK to auth.users Instead of profiles

Same issue as §4.4 (connections). These tables reference auth.users(id) instead of profiles(id):

| Table | Column | Current FK | Should Be |
| --- | --- | --- | --- |
| user_follows | follower_id | auth.users(id) | profiles(id) |
| user_follows | following_id | auth.users(id) | profiles(id) |
| saved_events | user_id | auth.users(id) | profiles(id) |

Fix in the same migration as §4.4.


5. Content Quality & Data Integrity

5.1 Counter Caches — Trigger Pattern

All denormalized counter fields must be managed exclusively by database triggers. The canonical pattern:

-- Example: maintain events.attendee_count
-- NOTE: The existing trigger `update_event_counts()` in migration 20260311000003
-- already implements attendee_count + interested_count using SECURITY DEFINER.
-- Below is the canonical pattern for NEW counter triggers:
CREATE OR REPLACE FUNCTION update_event_attendee_count()
RETURNS TRIGGER LANGUAGE plpgsql
SECURITY DEFINER -- needed because RLS on events would block the UPDATE
AS $$
BEGIN
  IF TG_OP = 'INSERT' THEN
    UPDATE public.events SET attendee_count = attendee_count + 1 WHERE id = NEW.event_id;
  ELSIF TG_OP = 'DELETE' THEN
    UPDATE public.events SET attendee_count = GREATEST(attendee_count - 1, 0) WHERE id = OLD.event_id;
  ELSIF TG_OP = 'UPDATE' AND OLD.status != NEW.status THEN
    -- Handle status changes (e.g. 'going' → 'not_going')
    IF NEW.status = 'going' AND OLD.status != 'going' THEN
      UPDATE public.events SET attendee_count = attendee_count + 1 WHERE id = NEW.event_id;
    ELSIF OLD.status = 'going' AND NEW.status != 'going' THEN
      UPDATE public.events SET attendee_count = GREATEST(attendee_count - 1, 0) WHERE id = NEW.event_id;
    END IF;
  END IF;
  RETURN NULL; -- AFTER trigger: return value is ignored
END;
$$;
 
CREATE TRIGGER trg_event_attendees_count
  AFTER INSERT OR DELETE OR UPDATE OF status ON public.event_attendees
  FOR EACH ROW EXECUTE FUNCTION update_event_attendee_count();

Rules:

  • Always use GREATEST(counter - 1, 0) for decrements to prevent negative counters.
  • Always use AFTER triggers for counters (not BEFORE).
  • Counter columns must have CHECK (column_name >= 0) constraints.
  • Never allow application code to SET a counter column directly. If a reset is needed, use a dedicated recalculate_*_count() SQL function callable via RPC.

5.2 Anti-Trash Data Policy

Development data must be realistic. The following are banned as usernames, event titles, or any user-visible text field:

  • Generic strings: “test”, “Test User”, “aaa”, “xxx”
  • Profane or juvenile strings: any variation of sexual, excretory, or vulgar terms
  • Placeholder patterns: “Caca”, “PiselloPalle”, “Test123”, “foo”, “bar”, “asdf”

Action: Claude is authorized to:

  1. Flag rows matching these patterns during any audit task.
  2. Propose a DELETE or UPDATE statement for admin review before execution.
  3. Never execute the delete unilaterally — the human must approve.
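The banned-pattern audit can be sketched as a simple helper. This is an illustrative starting point, not the complete banned vocabulary, and the function name `isTrashText` is an assumption:

```typescript
// Illustrative audit helper for the anti-trash policy.
const TRASH_PATTERNS: RegExp[] = [
  /^test(\s*user)?\d*$/i,   // "test", "Test User", "Test123"
  /^(foo|bar|asdf|caca)$/i, // placeholder words named in the policy
  /^([a-z])\1{1,}$/i,       // repeated single letters: "aaa", "xxx"
];

function isTrashText(value: string): boolean {
  const v = value.trim();
  return TRASH_PATTERNS.some((p) => p.test(v));
}
```

Anchored patterns (`^...$`) avoid false positives on legitimate substrings (e.g. "contest" contains "test" but is not flagged).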

Seed data must use realistic Italian names, cities, and event titles appropriate for a nightlife/social events platform. Example acceptable seed data pattern:

  • Event: “Aperitivo in Terrazza — Navigli Social Night”
  • Username: marco_b, giulia_events, nightout_milan
  • Venue: “Alcatraz Milano”, “Fabrique”, “BASE Milano”

5.3 Documentation DRY Principle

  • One source of truth per concept. If the database schema is described in [[Database Schema]], do not repeat the table structure in another file — link to it.
  • When writing documentation for a new feature, the structure is:
    1. Spec in content/roadmap/specs/ (the “what and why”)
    2. Plan in content/roadmap/plans/ (the “how and when”)
    3. Reference update in the appropriate content/architecture/ or content/backend/ file (permanent living documentation)
  • Obsidian wikilinks [[file-name]] or [[file-name|Display Text]] are the standard cross-reference format.

6. Performance & Scalability

6.1 N+1 Query Detection

An N+1 query occurs when a list of N items is fetched, and then N additional queries are fired to fetch related data per item. This is the most common performance bug in Supabase/PostgREST integrations.

Detection pattern in Flutter (Dart):

// ❌ N+1 bug:
final events = await supabase.from('events').select();
for (final event in events) {
  final organizer = await supabase.from('profiles').select().eq('id', event['organizer_id']).single();
  // This fires N separate queries
}
 
// ✅ Correct — single query with join:
final events = await supabase.from('events').select('''
  *,
  organizer:profiles!organizer_id(id, username, profile_picture)
''');

Detection pattern in Next.js (TypeScript):

// ❌ N+1 bug:
const events = await supabase.from('events').select('*');
for (const event of events.data) {
  const { data: venue } = await supabase.from('venues').select().eq('id', event.venue_id).single();
}
 
// ✅ Correct:
const { data: events } = await supabase.from('events').select(`
  *, venue:venues(id, name, address, city)
`);

Any PR that introduces N+1 patterns must be rejected and rewritten.

6.2 Materialized Views for Heavy Aggregations

For data that is expensive to compute but changes infrequently, use materialized views instead of computing on every request:

-- Example: pre-computed event feed scores
CREATE MATERIALIZED VIEW public.event_feed_scores AS
SELECT
  e.id AS event_id,
  e.start_date,
  e.attendee_count,
  e.category,
  e.geo_point,
  -- scoring formula
  (e.attendee_count * 0.4 + EXTRACT(EPOCH FROM (e.start_date - now())) * -0.0001) AS feed_score
FROM public.events e
WHERE e.status = 'published' AND e.start_date > now();
 
CREATE UNIQUE INDEX ON public.event_feed_scores (event_id);
 
-- Refresh triggered by Edge Function on a cron or on significant event changes:
REFRESH MATERIALIZED VIEW CONCURRENTLY public.event_feed_scores;

CONCURRENTLY is mandatory on refresh in production — it avoids table locking during refresh.
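The view's scoring formula can be mirrored client- or test-side to sanity-check the SQL. A hypothetical helper, not existing project code:

```typescript
// Mirrors the materialized view's formula:
// attendee_count * 0.4 + seconds_until_start * -0.0001
// (EXTRACT(EPOCH FROM (start_date - now())) is the seconds until start).
function feedScore(attendeeCount: number, secondsUntilStart: number): number {
  return attendeeCount * 0.4 + secondsUntilStart * -0.0001;
}
```

Keeping a one-line mirror like this next to the SQL makes drift between the view and any client-side re-ranking easy to catch in tests.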

6.3 Supabase Realtime — Subscription Patterns

Supabase Realtime uses Postgres logical replication. Rules:

  • Subscribe to specific tables and filters, never to * (all tables).
  • Unsubscribe when the widget/screen is disposed. Memory leaks from stale subscriptions are a known Flutter bug source.
  • Prefer Realtime broadcast (application-level pub/sub) for ephemeral events (typing indicators, presence) — it does not go through Postgres at all.
  • Use Realtime postgres_changes only for persistent data that must survive app restart (messages, notifications, attendee counts).

6.4 Edge Function Design Rules

Each Edge Function must:

  1. Validate the incoming Authorization: Bearer <jwt> header and extract auth.uid().
  2. Use the supabaseClient created with the user’s JWT (not service_role) unless the operation requires bypassing RLS — and document why.
  3. Return consistent error shapes:
// Standard error response:
return new Response(JSON.stringify({ error: { code: 'NOT_FOUND', message: 'Event not found' } }), {
  status: 404,
  headers: { 'Content-Type': 'application/json' }
});
  4. Never call another Edge Function via HTTP from inside an Edge Function — use shared modules instead.
  5. Have a timeout budget of under 5 seconds for user-facing calls. Background/cron functions may use up to 25 seconds.
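The standard error shape above can be wrapped in a small helper so every Edge Function emits identical responses. The `jsonError` name is an assumption, not an existing project function:

```typescript
// Builds the standard { error: { code, message } } response shape.
// Response is a global in Deno (and Node 18+).
function jsonError(code: string, message: string, status: number): Response {
  return new Response(JSON.stringify({ error: { code, message } }), {
    status,
    headers: { "Content-Type": "application/json" },
  });
}
```

Usage: `return jsonError('NOT_FOUND', 'Event not found', 404);` instead of hand-building the Response in each function.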

7. Version Control & Security Toggles

7.1 system_settings Table — Runtime Security Configuration

The app_config table already exists and serves a similar purpose. The system_settings additions below should be implemented as new rows in app_config, not as a separate table, to avoid schema duplication.

app_config schema (actual):

-- PK is `key TEXT`, NOT a UUID id column:
CREATE TABLE public.app_config (
    key TEXT PRIMARY KEY,
    value TEXT NOT NULL,     -- stores stringified values (including JSON strings)
    description TEXT,
    updated_at TIMESTAMPTZ DEFAULT now()
);

Add the following keys to public.app_config:

| Key | Value (stored as TEXT) | Description |
| --- | --- | --- |
| min_app_version | {"ios": "1.0.0", "android": "1.0.0"} | Minimum allowed client version per platform |
| is_spoofing_prevention_active | false | Toggle to enable/disable HMAC request signing verification |
| app_hmac_key_version | {"version": 1} | Current HMAC key version (actual key in Supabase secrets) |

app_config.value is TEXT NOT NULL, not JSONB. When storing JSON, you must JSON.parse() / jsonDecode() on the client side after reading. When writing, pass the stringified JSON.

RLS on app_config (already configured):

  • SELECT: public — any user can read (policy app_config_read_all with USING (true)).
  • INSERT/UPDATE/DELETE: admin via is_admin() (policy app_config_admin_update updated in migration 008). Consider tightening to is_super_admin() in a future migration.
-- Migration to add the new keys:
INSERT INTO public.app_config (key, value, description) VALUES
  ('min_app_version',              '{"ios": "1.0.0", "android": "1.0.0"}',  'Minimum client version before force-upgrade'),
  ('is_spoofing_prevention_active', 'false',                                  'Toggle HMAC signing verification'),
  ('app_hmac_key_version',          '{"version": 1}',                         'Current HMAC key version (actual key in Supabase secrets)')
ON CONFLICT (key) DO NOTHING;

7.2 Version Enforcement — Force Update Flow

Middleware logic (implemented in an Edge Function that proxies version-sensitive calls, or as the first check in every Edge Function):

// Pseudocode — adapt to each Edge Function:
const { data: config } = await supabase
  .from('app_config')
  .select('value')
  .eq('key', 'min_app_version')
  .single();
 
const minVersions = JSON.parse(config.value) as { ios: string; android: string }; // value is TEXT, so parse it
const clientVersion = req.headers.get('X-App-Version'); // e.g. "1.2.3"
const clientPlatform = req.headers.get('X-App-Platform') as 'ios' | 'android' | null;
 
if (clientVersion && clientPlatform) {
  const minVersion = minVersions[clientPlatform];
  if (semverLessThan(clientVersion, minVersion)) {
    return new Response(
      JSON.stringify({ error: { code: 'UPGRADE_REQUIRED', min_version: minVersion } }),
      { status: 426 } // 426 Upgrade Required
    );
  }
}
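The pseudocode above calls semverLessThan, which is not provided by any library here. A minimal sketch for plain MAJOR.MINOR.PATCH strings (no pre-release tags or build metadata; extend it if you adopt full SemVer):

```typescript
// Compare two "x.y.z" version strings numerically, component by component.
// Missing components are treated as 0 (so "1.2" === "1.2.0").
function semverLessThan(a: string, b: string): boolean {
  const pa = a.split('.').map(Number);
  const pb = b.split('.').map(Number);
  for (let i = 0; i < 3; i++) {
    const x = pa[i] ?? 0;
    const y = pb[i] ?? 0;
    if (x !== y) return x < y;
  }
  return false;
}
```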

Flutter (Dart) client handling:

  • On receiving 426, show a non-dismissible bottom sheet/dialog directing the user to the app store.
  • Register the interceptor at the Supabase client initialization level (via a custom HTTP client override), not per screen.

Header contract:

  • X-App-Version: <semver> — sent on every request from the Flutter app.
  • X-App-Platform: ios | android — sent on every request.
  • Both headers are added globally in the Dio/http client configuration.

7.3 Manual Spoofing Prevention — HMAC-SHA256 Handshake

This is a pre-store deterrent. Once the app is on the App Store and Google Play, replace this with Apple App Attest and Google Play Integrity API.

How it works:

  1. Secret storage: The HMAC secret lives in Supabase project secrets (supabase secrets set HMAC_SECRET=...), never in client-side code. The mobile binary embeds only a derived identifier (a hash of the secret, not the secret itself), protected by code obfuscation (flutter build --obfuscate --split-debug-info=...).

  2. Request signing (Flutter):

// Each request signs a canonical string:
// "<HTTP_METHOD>|<PATH>|<UTC_TIMESTAMP_UNIX>|<AUTH_UID>"
// Requires: import 'dart:convert'; import 'package:crypto/crypto.dart';
String _buildCanonicalString(String method, String path, int ts, String uid) {
  // `ts` is generated once per request (DateTime.now().millisecondsSinceEpoch ~/ 1000)
  // and must be the exact value sent in the X-Flow-Timestamp header.
  return '$method|$path|$ts|$uid';
}
 
String signRequest(String canonical, String obfuscatedKey) {
  final hmac = Hmac(sha256, utf8.encode(obfuscatedKey));
  final digest = hmac.convert(utf8.encode(canonical));
  return digest.toString();
}
// Send as header: X-Flow-Signature: <hex_digest>
// Send as header: X-Flow-Timestamp: <unix_ts>
  3. Verification (Edge Function):
async function verifySignature(req: Request, uid: string): Promise<boolean> {
  const { data: config } = await adminClient.from('app_config')
    .select('value').eq('key', 'is_spoofing_prevention_active').single();
 
  if (config?.value !== 'true') return true; // value is TEXT; bypass unless the toggle is exactly 'true'
 
  const signature = req.headers.get('X-Flow-Signature');
  const timestamp  = req.headers.get('X-Flow-Timestamp');
  if (!signature || !timestamp) return false;
 
  // Reject requests older than 5 minutes (replay attack prevention)
  const ts = parseInt(timestamp, 10);
  if (Math.abs(Date.now() / 1000 - ts) > 300) return false;
 
  const method = req.method;
  const path   = new URL(req.url).pathname;
  const canonical = `${method}|${path}|${timestamp}|${uid}`;
 
  // The real secret lives in Deno.env:
  const secret = Deno.env.get('HMAC_SECRET')!;
  const key    = await crypto.subtle.importKey('raw', new TextEncoder().encode(secret),
    { name: 'HMAC', hash: 'SHA-256' }, false, ['sign']);
  const expected = await crypto.subtle.sign('HMAC', key, new TextEncoder().encode(canonical));
  const expectedHex = Array.from(new Uint8Array(expected))
    .map(b => b.toString(16).padStart(2, '0')).join('');
 
  // Constant-time comparison to avoid leaking a timing side channel:
  if (expectedHex.length !== signature.length) return false;
  let diff = 0;
  for (let i = 0; i < expectedHex.length; i++) {
    diff |= expectedHex.charCodeAt(i) ^ signature.charCodeAt(i);
  }
  return diff === 0;
}
  4. On failure: Return 401 Unauthorized with body { error: { code: 'SPOOFED_REQUEST' } }. Do not reveal details.

7.4 Key Rotation Protocol

If is_spoofing_prevention_active is ON and the HMAC key is suspected to be compromised:

  1. Immediately: Set is_spoofing_prevention_active = false in app_config (allows all current clients to continue working while you respond).
  2. Generate new key: openssl rand -hex 32
  3. Update Supabase secret: supabase secrets set HMAC_SECRET=<new_key>
  4. Release a new app version with the new obfuscated key identifier baked in.
  5. Bump min_app_version to the new version — this forces all clients to upgrade before HMAC checking is relevant again.
  6. Re-enable: Set is_spoofing_prevention_active = true once the new version has sufficient adoption (monitor via X-App-Version header analytics).

Steps 5 and 6 are coupled. Do not re-enable spoofing prevention until the min_app_version bump ensures old clients (with the old key) can no longer connect.


8. Implementation Workflow for Claude

This section defines the exact sequence of steps Claude must follow for every coding task on this project.

Step 1 — Pre-task Audit

Before writing a single line of code, check:

  • Does the target table/file have any open Technical Debt item in §4?
  • Does the target table have RLS enabled AND forced?
  • Does the target table have index coverage on all FK columns?
  • Does any column in scope violate the naming conventions in §2.1?
  • Is there a price/float/decimal monetary column in scope? (§2.2)

If any check fails: propose the fix first, add the feature second. State this explicitly in the response.

Step 2 — Schema Changes

Every schema change must be a numbered Supabase migration file:

  • Filename format: YYYYMMDDHHMMSS_descriptive_slug.sql
  • Location: flow_backend/supabase/migrations/
  • Must be idempotent where possible (CREATE TABLE IF NOT EXISTS, ADD COLUMN IF NOT EXISTS).
  • Must include COMMENT ON TABLE and COMMENT ON COLUMN for non-obvious additions.
  • Must include RLS policies (§3.1) and indexes (§2.4) in the same migration file as the table creation.

Step 3 — Cross-Layer Consistency

A schema change is never complete until all four of these layers are updated:

Layer                    | What to update
Backend (SQL)            | Migration file + any affected RPC functions
Mobile (Flutter/Dart)    | Model class, API service, any Riverpod provider that queries the changed table
Web (Next.js/TypeScript) | Type definitions (ideally auto-generated via supabase gen types), any query that references the changed table
Documentation            | [[Database Schema]] table entry, any spec/plan that mentioned the old structure

If a task only touches one layer, explicitly note which other layers are not yet updated and create follow-up tasks.

Step 4 — Language Standard

  • All code comments: English, professional, imperative tone (“Returns the user profile” not “This function returns the user profile”).
  • All commit messages: English, conventional commits format: feat(events): add price_cents column with migration, fix(rls): add FORCE ROW LEVEL SECURITY to messages table.
  • All documentation: Italian is acceptable for flow_docs narrative content. English is required for code-embedded docs (migration comments, JSDoc, Dart doc comments).
  • No Italian in SQL, Dart, or TypeScript files.

Step 5 — Verification Before Declaring Done

A task is only done when:

  • The migration runs without errors on a local Supabase instance (supabase db reset or supabase migration up).
  • The Flutter or Next.js code compiles without errors (flutter build or tsc --noEmit).
  • RLS policies have been tested: one positive case (authorized user can perform the action) and one negative case (unauthorized user gets a Postgres error, not empty data).
  • The documentation in flow_docs/ reflects the new state of the system.
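The negative RLS case can be asserted mechanically: with FORCE ROW LEVEL SECURITY, a denied write surfaces Postgres SQLSTATE 42501 (insufficient_privilege) through PostgREST, whereas a silently filtered SELECT returns empty data and must not count as a pass. A sketch; the result shape mirrors supabase-js and the helper name is ours:

```typescript
// Minimal shapes mirroring a supabase-js query result.
interface PgError { code?: string; message?: string }
interface QueryResult<T> { data: T | null; error: PgError | null }

// True only when the query was rejected by RLS (SQLSTATE 42501).
// An empty data array with no error is a filtered read, not a denial.
function isRlsDenied(result: QueryResult<unknown>): boolean {
  return result.error?.code === '42501';
}
```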

9. Quick Reference — Current Architecture

┌─────────────────┐   ┌──────────────────┐
│  Flutter App    │   │  Next.js Web     │
│  (iOS/Android)  │   │  (Admin Portal)  │
└────────┬────────┘   └────────┬─────────┘
         │                     │
         └──────────┬──────────┘
                    │ HTTPS (PostgREST + Edge Functions)
                    ▼
         ┌──────────────────────┐
         │   Supabase           │
         │  ┌────────────────┐  │
         │  │ PostgreSQL     │  │  ← Single source of truth
         │  │ + PostGIS      │  │
         │  │ + pg_trgm      │  │
         │  │ + RLS (every   │  │
         │  │   table)       │  │
         │  └────────────────┘  │
         │  ┌────────────────┐  │
         │  │ Edge Functions │  │  ← Deno/TypeScript
         │  │ (8 functions)  │  │
         │  └────────────────┘  │
         │  ┌────────────────┐  │
         │  │ Realtime       │  │  ← Postgres changes + broadcast
         │  └────────────────┘  │
         │  ┌────────────────┐  │
         │  │ Storage        │  │  ← Images, files
         │  └────────────────┘  │
         └──────────────────────┘

What does NOT exist (and must not be introduced without measured justification):

  • ❌ Custom API Gateway (Express, Fastify, Kong)
  • ❌ Message broker (Kafka, RabbitMQ, Redis Streams)
  • ❌ Secondary database (MongoDB, DynamoDB)
  • ❌ Custom cache layer (Redis, Memcached) — use Postgres materialized views
  • ❌ Separate search engine — pg_trgm handles current fuzzy matching needs