You are authorized to execute wide-ranging architectural refactors on the Flow ecosystem. However, you must obey these rules implicitly.
Important
This document provides architectural constraints, hints, and directions. You have the ability to read the live database schema directly. Before starting any task, you must inspect the DB to understand the exact current state, and then formulate your own SQL migration plan that adheres to the rules in this document. Do not blindly copy-paste example SQL from this document if the live schema has diverged — use your judgement to apply these constraints to the reality of the DB. Any code change, migration, or documentation update must comply with these rules. If a violation is found, stop and fix it before adding new features.
The Refactor Workflow
Every refactoring task you execute must follow this exact sequence:
- Information Gathering: Query the live database schema (or read the SQL migration files) for the specific tables involved to understand their current structure, indexes, and constraints.
- Pre-task Audit: Compare the live schema against the architectural standards in this guide. Check `database-schema.md` only as a secondary reference.
- Plan the Migration: Formulate a plan detailing the exact SQL you will write. Ensure your plan resolves any technical debt listed in §4 for the components you are touching.
- Schema Change: Write a new, idempotent `.sql` migration file in `flow_backend/supabase/migrations/XXX_name.sql`. Do not edit old migrations.
- Cross-Layer Consistency: Update `database-schema.md`, backend Edge Functions, and frontend types/queries to match the new schema.
- Verification: Test RLS (positive/negative) or provide the exact query for the human to run.
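The Information Gathering step can use standard Postgres catalogs directly — a sketch (replace the placeholder table name; do not guess column lists from this document):

```sql
-- Columns, types, nullability, and defaults for one table:
SELECT column_name, data_type, is_nullable, column_default
FROM information_schema.columns
WHERE table_schema = 'public' AND table_name = '<your_table>'
ORDER BY ordinal_position;

-- PK/FK/CHECK constraints on the same table:
SELECT conname, pg_get_constraintdef(oid) AS definition
FROM pg_constraint
WHERE conrelid = 'public.<your_table>'::regclass;
```

Run these before writing any migration SQL so the plan reflects the live schema, not this document's examples.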
1. Vision and Core Principles
Flow runs on a Supabase-only backend: a single PostgreSQL database with RLS, Supabase Edge Functions (Deno/TypeScript), Supabase Realtime, and Supabase Storage. There is no API Gateway, no message broker, no Redis, no MongoDB. Every piece of business logic lives either in the database (SQL functions, triggers, RLS policies) or in Edge Functions invoked via HTTP.
Every task performed on this codebase must be evaluated against three pillars:
1.1 Efficiency
- Minimize database round-trips. A mobile screen that fires 5 separate queries for data that could be returned in one join is a bug, not a feature gap.
- Avoid LLM token waste in documentation: one source of truth per concept, linked via `[[wikilinks]]`, never copy-pasted.
- Any Edge Function that calls Supabase more than twice in a single request should be redesigned to use a single RPC call returning a composite JSON response.
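The composite-RPC rule can be sketched as follows — a non-authoritative sketch; `get_event_screen` is a hypothetical function name, not an existing RPC in this codebase:

```sql
-- One round-trip returning everything an event screen needs as a single JSON value.
CREATE OR REPLACE FUNCTION public.get_event_screen(p_event_id UUID)
RETURNS JSONB LANGUAGE sql STABLE AS $$
  SELECT jsonb_build_object(
    'event',     to_jsonb(e),
    'organizer', to_jsonb(p),
    'going_count', (SELECT COUNT(*) FROM public.event_attendees a
                    WHERE a.event_id = e.id AND a.status = 'going')
  )
  FROM public.events e
  JOIN public.profiles p ON p.id = e.organizer_id
  WHERE e.id = p_event_id;
$$;
```

The Edge Function then makes a single `supabase.rpc('get_event_screen', ...)` call instead of three separate queries.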
1.2 Absolute Security (Zero Trust)
- The database is the security perimeter. No application layer is trusted to re-verify authorization — RLS does it at the database level for every query.
- Every new table starts with `ALTER TABLE ... ENABLE ROW LEVEL SECURITY;` AND `ALTER TABLE ... FORCE ROW LEVEL SECURITY;`. Both are required: `ENABLE` without `FORCE` allows table owners to bypass policies.
- The Supabase `service_role` key is a nuclear option. It must never be present in any client-side code (Flutter, Next.js). It lives only in Edge Function environment variables, accessed via Supabase's secret management.
1.3 Performance
- Postgres indexes are not optional. Every FK column must have an index. Every column used in a `WHERE`, `ORDER BY`, or `JOIN` condition on tables with more than ~1,000 expected rows must have an index.
- Denormalized counter columns (e.g. `attendee_count`, `karma_reviews_count`) must be maintained by database triggers, never by application code. If a trigger does not exist for a counter, the counter column must be treated as unreliable and not queried — compute the real value via a `COUNT(*)` join instead.
- Use PostGIS `GEOGRAPHY(Point, 4326)` for all geospatial data. Raw `latitude`/`longitude` float columns are acceptable only as a legacy-compatibility duplicate — the canonical spatial column is always `geo_point` or `location_point`.
2. Database Excellence — SQL & Schema Standards
2.1 Naming Conventions (Mandatory)
| Element | Convention | Example |
|---|---|---|
| Tables | Plural snake_case | event_registrations, crew_members |
| Primary Key | Always id UUID DEFAULT gen_random_uuid() | id UUID PRIMARY KEY DEFAULT gen_random_uuid() |
| Foreign Keys | <referenced_entity>_id | organizer_id, venue_id, creator_id |
| Boolean columns | Must start with a verb: is_, has_, can_ | is_verified, has_active_subscription, is_public |
| Timestamps | created_at TIMESTAMPTZ DEFAULT now(), updated_at TIMESTAMPTZ | Standard on every table |
| Status enums | TEXT with CHECK constraint listing all valid values | CHECK (status IN ('active', 'suspended', 'banned')) |
| Soft deletes | deleted_at TIMESTAMPTZ + deleted_by UUID | Use soft-delete columns, not is_deleted BOOLEAN (see events pattern) |
Anti-patterns to flag and fix immediately:
- A column named `capacity` on `events` — the correct column is `max_attendees`. The confusion caused null values in the admin portal. Always use `max_attendees`.
- Any status or type column stored as raw, unconstrained `TEXT` — it must have a `CHECK` constraint.
- Tables without `created_at` — all tables must have it.
2.2 Monetary Integrity — The “No-Float” Rule
This is a hard constraint. No exceptions.
- NEVER store monetary values in `FLOAT`, `FLOAT4`, `FLOAT8`, `REAL`, or `DECIMAL` columns. Floating-point arithmetic produces rounding errors (e.g. `0.1 + 0.2 ≠ 0.3`) that are unacceptable for prices; `DECIMAL` is exact, but it is banned here too so that integer cents remain the single representation.
- Correct storage: `price_cents INTEGER NOT NULL DEFAULT 0` — all values stored in the smallest currency unit (cents, eurocents, etc.).
- Currency pairing: every `price_cents` column must be accompanied by a `currency TEXT NOT NULL DEFAULT 'EUR' CHECK (currency ~ '^[A-Z]{3}$')` column using ISO 4217 codes.
- Display conversion: divide by 100 in the presentation layer only. `price_cents / 100.0` in SQL, `price / 100` in Dart/TypeScript.
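The display-conversion rule can be sketched as a small TypeScript helper — the function name `formatPrice` is illustrative, not an existing utility in this codebase:

```typescript
// Convert integer cents + ISO 4217 code into a display string.
// Division by 100 happens here, in the presentation layer only — never in storage.
export function formatPrice(priceCents: number, currency: string): string {
  if (!Number.isInteger(priceCents) || priceCents < 0) {
    throw new Error(`invalid price_cents: ${priceCents}`);
  }
  const units = Math.floor(priceCents / 100);
  const cents = (priceCents % 100).toString().padStart(2, '0');
  return `${units}.${cents} ${currency}`;
}
```

Real apps would likely use `Intl.NumberFormat` for locale-aware output; the point is that the database value stays an integer until this last step.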
Current violation in the live schema (must be migrated):
The events table has:
price DECIMAL(10, 2) -- ❌ DECIMAL violates the integer-cents rule (§2.2)
currency TEXT DEFAULT 'USD' -- currency exists but defaults to USD (should be EUR) and is decoupled from cents

Required migration:
-- Step 1: add the correct columns
ALTER TABLE public.events
ADD COLUMN price_cents INTEGER NOT NULL DEFAULT 0,
ADD COLUMN price_currency TEXT NOT NULL DEFAULT 'EUR'
CHECK (price_currency ~ '^[A-Z]{3}$');
-- Step 2: backfill from the legacy column (round to nearest cent)
UPDATE public.events
SET price_cents = ROUND(price * 100)::INTEGER,
price_currency = COALESCE(currency, 'EUR')
WHERE price IS NOT NULL;
-- Step 3: update `is_free` derived flag via trigger (see §5.1)
-- Step 4: deprecate old columns (do NOT drop until all app code is migrated)
COMMENT ON COLUMN public.events.price IS 'DEPRECATED — use price_cents instead';
COMMENT ON COLUMN public.events.currency IS 'DEPRECATED — use price_currency instead';

Mobile and web code must be updated to read `price_cents` + `price_currency` and stop reading `price`.
2.3 Relational Normalization
Rule: FK over strings for anything that is a bounded category.
Current violations in the live schema:
| Table | Column | Current | Correct |
|---|---|---|---|
| events | category | TEXT with CHECK | Acceptable for now — do not migrate to FK until the category set is stable |
| events | tags TEXT[] | Array of strings | ❌ Use event_tags junction table with FK to tags(id) |
| profiles | interests TEXT[] | Array of strings | Acceptable until an interest taxonomy is defined |
Specific action required on events.tags:
- The `event_tags` junction table already exists and is the correct pattern.
- Application code must be updated to write tags via `event_tags`, not via `events.tags`.
- Once all reads/writes go through `event_tags`, drop `events.tags TEXT[]` in a migration, with a `COMMENT` warning first.
Many-to-many relationships:
- Always use a junction table with `created_at` and the two FK columns. A composite primary key over the two FKs (as in the example below) is acceptable in place of a surrogate `id UUID` PK.
- Example pattern:
CREATE TABLE public.event_tags (
event_id UUID NOT NULL REFERENCES public.events(id) ON DELETE CASCADE,
tag_id UUID NOT NULL REFERENCES public.tags(id) ON DELETE CASCADE,
PRIMARY KEY (event_id, tag_id)
);
CREATE INDEX ON public.event_tags (tag_id); -- index the non-leading FK

2.4 Indexes — Explicit Checklist
Every FK column must have a CREATE INDEX. This is currently missing for several tables. When writing or reviewing a migration, verify:
-- Pattern to check existing indexes on a table:
SELECT indexname, indexdef FROM pg_indexes
WHERE tablename = '<your_table>' AND schemaname = 'public';

Required indexes that must exist (enforce in the next cleanup migration):
| Table | Column(s) | Type |
|---|---|---|
| events | organizer_id | btree |
| events | venue_id | btree |
| events | start_date | btree |
| events | status | btree |
| events | geo_point | GIST (PostGIS) — ✅ already exists as idx_events_geo |
| event_attendees | event_id | btree |
| event_attendees | user_id | btree |
| event_attendees | (event_id, user_id) | btree UNIQUE |
| event_tags | tag_id | btree |
| connections | user_id | btree |
| connections | friend_id | btree — ✅ already exists as idx_connections_friend_id |
| connections | status | btree |
| followers | follower_id | btree — ⚠️ TABLE DROPPED (migration 20260330000004), see §4.2 |
| followers | following_id | btree — ⚠️ TABLE DROPPED, use user_follows instead |
| user_follows | follower_id | btree |
| user_follows | following_id | btree |
| saved_events | user_id | btree |
| saved_events | event_id | btree |
| user_bans | user_id | btree — ✅ exists |
| reports | reporter_id | btree — ✅ exists |
| reports | status | btree — ✅ exists |
| crew_members | crew_id | btree |
| crew_members | user_id | btree |
| chat_participants | chat_id | btree |
| chat_participants | user_id | btree |
| messages | chat_id, created_at | btree (compound) |
| notifications | user_id, read | btree (compound) — ✅ already exists as idx_notifications_user_read |
| profiles | username | btree UNIQUE |
2.5 Geospatial Data
- Standard column: `GEOGRAPHY(Point, 4326)` — named `geo_point` on all three tables: `profiles`, `events`, and `venues`. GiST indexes already exist (`idx_events_geo`, `idx_venues_geo`, `idx_profiles_geo`).
- Population: whenever `latitude`/`longitude` are written, a trigger (or the INSERT statement itself) must also update the geography column:
-- Example trigger function (reuse across tables):
CREATE OR REPLACE FUNCTION sync_geo_point()
RETURNS TRIGGER LANGUAGE plpgsql AS $$
BEGIN
IF NEW.latitude IS NOT NULL AND NEW.longitude IS NOT NULL THEN
NEW.geo_point := ST_SetSRID(ST_MakePoint(NEW.longitude, NEW.latitude), 4326)::geography;
END IF;
RETURN NEW;
END;
$$;

- Never store address information as a single `address TEXT` blob for data that needs to be queried. Use distinct `street`, `city`, `country`, `zip` columns. The `events.location JSONB` blob is a known anti-pattern — it is acceptable for now as a display convenience but must never be used in `WHERE` filters.
- Note: the existing `get_nearby_events` RPC uses Haversine math on raw `latitude`/`longitude` inside the JSONB `location` column, NOT PostGIS `ST_DWithin` on `geo_point`. This works but is less efficient. A future refactor should migrate this RPC to use `geo_point` directly.
- Radius queries must use: `ST_DWithin(geo_point, ST_SetSRID(ST_MakePoint(:lng, :lat), 4326)::geography, :radius_meters)`

3. Row Level Security (RLS) & Privacy
3.1 RLS Enforcement — Mandatory Steps for Every New Table
When creating any new table, all four of these statements are mandatory before the migration can be considered complete:
-- 1. Enable RLS
ALTER TABLE public.<table_name> ENABLE ROW LEVEL SECURITY;
-- 2. Force RLS (prevents table owner from bypassing — critical)
ALTER TABLE public.<table_name> FORCE ROW LEVEL SECURITY;
-- 3. Define at minimum a SELECT policy
CREATE POLICY "<table_name>_select_own"
ON public.<table_name>
FOR SELECT
USING (user_id = auth.uid()); -- adapt the column name
-- 4. Define INSERT, UPDATE, DELETE policies separately
CREATE POLICY "<table_name>_insert_own"
ON public.<table_name>
FOR INSERT
WITH CHECK (user_id = auth.uid());
CREATE POLICY "<table_name>_update_own"
ON public.<table_name>
FOR UPDATE
USING (user_id = auth.uid());
CREATE POLICY "<table_name>_delete_own"
ON public.<table_name>
FOR DELETE
USING (user_id = auth.uid());

Never write a single FOR ALL policy unless the intent is truly identical logic for every operation. Separate policies make intent auditable and prevent accidental over-permissioning.
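The positive/negative RLS verification required by the workflow can be done by impersonating a user in a SQL session — a sketch, assuming the standard Supabase setup where `auth.uid()` reads the `sub` claim from the `request.jwt.claims` setting (the UUIDs and table are placeholders):

```sql
BEGIN;
-- Impersonate an authenticated user:
SET LOCAL ROLE authenticated;
SET LOCAL request.jwt.claims TO '{"sub": "00000000-0000-0000-0000-000000000001"}';

-- Positive test: the user must see their own rows.
SELECT count(*) FROM public.saved_events WHERE user_id = auth.uid();

-- Negative test: rows owned by another user must NOT appear (expect 0).
SELECT count(*) FROM public.saved_events
WHERE user_id = '00000000-0000-0000-0000-000000000002';

ROLLBACK;
```

Running both directions catches the common failure mode where a policy is accidentally permissive rather than restrictive.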
3.2 Admin Access Pattern
Admins get read/write access to all rows via the is_admin() helper. This function is SECURITY DEFINER to break the infinite-recursion loop (a policy on profiles cannot query profiles to check the role without SECURITY DEFINER).
-- The existing helper — do not rewrite it:
SELECT public.is_admin(); -- returns TRUE for 'admin' or 'super_admin' roles
SELECT public.is_super_admin(); -- returns TRUE for 'super_admin' only
-- Admin read policy pattern:
CREATE POLICY "<table_name>_admin_read_all"
ON public.<table_name>
FOR SELECT
USING (public.is_admin());
-- Admin write policy pattern (only super_admin can mutate sensitive tables):
CREATE POLICY "<table_name>_admin_update_all"
ON public.<table_name>
FOR UPDATE
USING (public.is_super_admin());

3.3 Public vs. Authenticated vs. Owner Access
Three tiers to choose from — pick the most restrictive that satisfies the feature:
| Tier | USING clause | Use case |
|---|---|---|
| Public | true | Read-only data everyone can see (public events, public profiles) |
| Authenticated | auth.uid() IS NOT NULL | Authenticated-only reads (chat history, connections) |
| Owner-only | user_id = auth.uid() OR auth.uid() = id | Personal data (own profile, own messages) |
Never expose email, phone_number, date_of_birth, latitude, longitude in public policies. These columns are excluded in the public_profiles view. Always use public_profiles for unauthenticated reads of user data.
3.4 Security Bypass Protocol — SECURITY DEFINER & service_role
Using SECURITY DEFINER or the service_role key bypasses RLS entirely. This is a last resort.
Rules:
- `SECURITY DEFINER` functions must set `search_path = ''` (empty) to prevent schema-injection attacks:
CREATE OR REPLACE FUNCTION public.safe_function()
RETURNS void
SECURITY DEFINER
SET search_path = '' -- mandatory
LANGUAGE plpgsql AS $$ ... $$;

- Every bypass must be documented with a comment directly in the migration file explaining:
- Why RLS cannot handle this case
- What data the bypass can access
- Risk mitigation in effect
- Log it in `[[Security-Bypass-Log]]` (create this file in `flow_docs/content/architecture/` if it does not exist).
Known existing bypasses to document:
- `is_admin()` and `is_super_admin()` — justified, recursion prevention.
- Any Edge Function using the `service_role` key — must be audited and listed.
4. Technical Debt Register — Known Violations to Fix
This section lists specific, identified violations in the current live schema. Each must be addressed in a dedicated migration. Do not add features to a table that has an open Technical Debt item without resolving it first.
4.1 Broken Denormalized Counters in profiles
The following columns in profiles are permanently 0 because no triggers maintain them. Do not read them. Compute via JOIN.
| Broken Column | Correct Query |
|---|---|
| posts_count | No source table — posts was renamed to _deprecated_posts and dropped in migration 008. This column should be dropped entirely. |
| followers_count | SELECT COUNT(*) FROM user_follows WHERE following_id = $1 — note: the followers table was dropped in migration 20260330000004; only user_follows remains |
| following_count | SELECT COUNT(*) FROM user_follows WHERE follower_id = $1 |
| events_created | SELECT COUNT(*) FROM events WHERE organizer_id = $1 |
| events_attended | SELECT COUNT(*) FROM event_attendees WHERE user_id = $1 AND status = 'going' |
| friends_count | SELECT COUNT(*) FROM connections WHERE (user_id = $1 OR friend_id = $1) AND status = 'accepted' |
Required fix: Write triggers for each counter OR drop all these columns and standardize on JOIN queries everywhere. The trigger approach is preferred for performance at scale. The migration must:
- Create `AFTER INSERT OR DELETE` triggers on each source table.
- Backfill the columns from actual data once the triggers are in place.
- Update all app code that reads these columns as a fallback.
4.2 Social Graph — followers Dropped, user_follows Is Canonical
The followers table was dropped in migration 20260330000004_cleanup_deprecated_tables.sql. The canonical follow table is now user_follows (created in 20260318000000_saved_events_user_follows.sql).
Current state:
- ✅ `user_follows` — the ONLY follow table. FKs reference `auth.users(id)` (should be migrated to `profiles(id)`).
- ✅ `connections` — the bidirectional friend/connection table.
- ❌ `followers` — dropped, does not exist.
The database-schema.md still references the followers table as existing. This is wrong and must be updated.
Product decision still needed:
- Mutual-friend model (bidirectional, like WhatsApp): keep `connections`, drop `user_follows`.
- Asymmetric follower model (like Instagram): keep `user_follows`, drop `connections`.
- Both models side by side: document explicitly which feature uses which table.
Until the decision is made, do not write new code that creates rows in both tables for the same action. Pick one and be consistent.
4.3 Duplicate Group Model — crews vs. squads
Both represent small social groups attached to events. Currently both have 0 rows. They evolved independently from two different feature branches. Consolidation plan:
- Decide on a canonical name (`crews` is preferred — it is used in the brand language).
- Write a migration that migrates `squads` → `crews` (the schema diff is minimal: `max_members` → `max_size`; `invite_code` is unique to `squads`).
- Add `invite_code TEXT UNIQUE` to `crews` if needed.
- Drop `squads` and `squad_members`.
- Update all Dart/TypeScript code referencing `squads`.
4.4 connections FK References auth.users Instead of profiles
-- Current (actual column names in migration 20260311000003):
user_id UUID NOT NULL REFERENCES auth.users(id) ON DELETE CASCADE
friend_id UUID NOT NULL REFERENCES auth.users(id) ON DELETE CASCADE
status TEXT NOT NULL DEFAULT 'pending' CHECK (status IN ('pending', 'accepted', 'rejected', 'blocked'))
-- Correct target:
user_id UUID NOT NULL REFERENCES public.profiles(id) ON DELETE CASCADE
friend_id UUID NOT NULL REFERENCES public.profiles(id) ON DELETE CASCADE

The column is `friend_id`, not `connected_user_id`. The `database-schema.md` uses `connected_user_id`, but the actual migration uses `friend_id`. Update the schema doc when fixing this.
IDs are identical in practice (auth.users.id === profiles.id always), but referencing auth.users from application tables is non-standard and makes cascades unpredictable. Fix in a dedicated migration using ALTER TABLE ... DROP CONSTRAINT ... ADD CONSTRAINT.
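A sketch of that dedicated migration — the constraint names follow Postgres's default `<table>_<column>_fkey` convention and are assumptions; look up the real names in `pg_constraint` first:

```sql
-- Retarget connections.user_id / friend_id from auth.users(id) to public.profiles(id).
ALTER TABLE public.connections
  DROP CONSTRAINT IF EXISTS connections_user_id_fkey,
  ADD CONSTRAINT connections_user_id_fkey
    FOREIGN KEY (user_id) REFERENCES public.profiles(id) ON DELETE CASCADE;

ALTER TABLE public.connections
  DROP CONSTRAINT IF EXISTS connections_friend_id_fkey,
  ADD CONSTRAINT connections_friend_id_fkey
    FOREIGN KEY (friend_id) REFERENCES public.profiles(id) ON DELETE CASCADE;
```

Because `auth.users.id === profiles.id` in practice, no data backfill is needed; only the constraint targets change.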
4.5 events.tags TEXT[] Must Be Deprecated
The event_tags junction table is the correct pattern and already exists. Steps:
- Update all INSERT/UPDATE code to write tags via `event_tags`, not `events.tags`.
- On every SELECT that reads `events.tags`, add a JOIN to `event_tags` instead.
- After confirming zero writes to `events.tags` (monitor for 2 weeks), add: `COMMENT ON COLUMN public.events.tags IS 'DEPRECATED — use event_tags junction table';`
- Drop the column in the subsequent migration cycle.
4.6 events.max_attendees vs. capacity Naming Confusion
The correct column is max_attendees. The word capacity appears in old frontend code and caused null displays in the admin events table. Enforce globally:
- Search the entire codebase for any reference to `.capacity` on an events object and replace it with `.max_attendees` / `max_attendees`.
- Note: the `venues` table correctly uses `capacity` — this is intentional and not a bug.
4.7 events.status CHECK Was Expanded — Document Everywhere
The original CHECK (migration 008) was:
CHECK (status IN ('draft', 'active', 'published', 'cancelled'))

Migration 20260420000001 expanded it to:

CHECK (status IN ('draft', 'active', 'published', 'cancelled', 'reported', 'deleted'))

This was done because the admin portal uses 'reported' and 'deleted' as status values. The database-schema.md still lists the old set — it must be updated.
4.8 Undocumented Tables & Columns (Post-Schema-Doc Migrations)
The following tables and columns were added after the last database-schema.md revision (2026-04-20) and are NOT documented. They must be added to the schema doc:
New tables:
| Table | Migration | Purpose | RLS | Notes |
|---|---|---|---|---|
| user_bans | 20260420000001 | Full ban history with lift tracking | ✅ Admin + owner-read | GDPR-friendly ON DELETE SET NULL on banned_by/lifted_by |
| admin_actions_log | 20260420000001 | Audit trail for all admin actions | ✅ Admin read/insert only | Polymorphic: target_type + target_id |
| reports | 20260422060818 | Content moderation queue (admin panel) | ✅ Self-insert + admin-all | Distinct from user_reports (user-to-user flags) |
| tag_affinity | 20260427000001 | Co-occurrence weights for tag ordering | ✅ Public read, service_role write | weight FLOAT is acceptable here (not monetary) |
| contact_messages | 20260408000001 | Public landing site contact form | ✅ No policies (service_role only) | Server-only access by design |
| saved_events | 20260318000000 | User-bookmarked events | ✅ Self-manage | FKs reference auth.users(id) — should migrate to profiles(id) |
| user_blocks | 20260318000005 | User-to-user block relationships | ✅ Self-manage | Composite PK (blocker_id, blocked_id) |
| user_reports | 20260318000006 | User-to-user abuse reports | ✅ Self-insert/read | Distinct from reports table (admin moderation queue) |
New columns on existing tables:
| Table | Column | Migration | Notes |
|---|---|---|---|
| profiles | ban_reason TEXT | 20260420000001 | Denormalized from user_bans for quick admin display |
| profiles | ban_expires_at TIMESTAMPTZ | 20260420000001 | NULL = not banned or permanent |
| profiles | banned_at TIMESTAMPTZ | 20260420000001 | |
| profiles | banned_by UUID | 20260420000001 | FK → profiles(id) |
| profiles | xp INT DEFAULT 0 | 20260311000003 | Gamification XP score |
| profiles | level INT DEFAULT 1 | 20240312000002 | User level, CHECK (level >= 1) |
| profiles | vibe_score INT DEFAULT 0 | 20260311000003 | |
| profiles | flow_score INT DEFAULT 0 | 20260311000003 | |
| profiles | badges JSONB DEFAULT '[]' | 20260311000003 | Legacy — prefer user_badges junction table |
| events | deleted_at TIMESTAMPTZ | 20260420000001 | Soft-delete timestamp |
| events | deleted_by UUID | 20260420000001 | Admin who soft-deleted, FK → profiles(id) |
| connections | connection_depth INT DEFAULT 0 | 20260330000003 | 0=acquaintance, 1=friend, 2=buddy, 3=crew |
| tags | translations JSONB DEFAULT '{}' | 20260414000001 | {"it": "...", "en": "...", "es": "..."} |
| badges | name_translations JSONB | 20260414000001 | |
| badges | description_translations JSONB | 20260414000001 | |
| campaigns | title_translations JSONB | 20260414000001 | |
| campaigns | description_translations JSONB | 20260414000001 | |
| reports | reviewed_by UUID | 20260422060818 | Accountability — which admin reviewed |
Materialized views:
| View | Migration | Purpose | Notes |
|---|---|---|---|
| leaderboard_xp | 20260330000006 | Top 100 users by XP | Refreshed hourly via pg_cron. RPC: get_leaderboard() |
4.9 Missing FORCE ROW LEVEL SECURITY
Multiple tables only have ENABLE ROW LEVEL SECURITY but not FORCE ROW LEVEL SECURITY. Without FORCE, the table owner (postgres) bypasses RLS. Audit and add FORCE to all tables in a cleanup migration:
-- Run this for EVERY table:
ALTER TABLE public.<table_name> FORCE ROW LEVEL SECURITY;

4.10 user_follows and saved_events FK to auth.users Instead of profiles
Same issue as §4.4 (connections). These tables reference auth.users(id) instead of profiles(id):
| Table | Column | Current FK | Should Be |
|---|---|---|---|
| user_follows | follower_id | auth.users(id) | profiles(id) |
| user_follows | following_id | auth.users(id) | profiles(id) |
| saved_events | user_id | auth.users(id) | profiles(id) |
Fix in the same migration as §4.4.
5. Content Quality & Data Integrity
5.1 Counter Caches — Trigger Pattern
All denormalized counter fields must be managed exclusively by database triggers. The canonical pattern:
-- Example: maintain events.attendee_count
-- NOTE: The existing trigger `update_event_counts()` in migration 20260311000003
-- already implements attendee_count + interested_count using SECURITY DEFINER.
-- Below is the canonical pattern for NEW counter triggers:
CREATE OR REPLACE FUNCTION update_event_attendee_count()
RETURNS TRIGGER LANGUAGE plpgsql
SECURITY DEFINER -- needed because RLS on events would block the UPDATE
AS $$
BEGIN
IF TG_OP = 'INSERT' THEN
UPDATE public.events SET attendee_count = attendee_count + 1 WHERE id = NEW.event_id;
ELSIF TG_OP = 'DELETE' THEN
UPDATE public.events SET attendee_count = GREATEST(attendee_count - 1, 0) WHERE id = OLD.event_id;
ELSIF TG_OP = 'UPDATE' AND OLD.status != NEW.status THEN
-- Handle status changes (e.g. 'going' → 'not_going')
IF NEW.status = 'going' AND OLD.status != 'going' THEN
UPDATE public.events SET attendee_count = attendee_count + 1 WHERE id = NEW.event_id;
ELSIF OLD.status = 'going' AND NEW.status != 'going' THEN
UPDATE public.events SET attendee_count = GREATEST(attendee_count - 1, 0) WHERE id = NEW.event_id;
END IF;
END IF;
RETURN NULL; -- AFTER trigger: return value is ignored
END;
$$;
CREATE TRIGGER trg_event_attendees_count
AFTER INSERT OR DELETE OR UPDATE OF status ON public.event_attendees
FOR EACH ROW EXECUTE FUNCTION update_event_attendee_count();

Rules:
- Always use
GREATEST(counter - 1, 0)for decrements to prevent negative counters. - Always use
AFTERtriggers for counters (notBEFORE). - Counter columns must have
CHECK (column_name >= 0)constraints. - Never allow application code to SET a counter column directly. If a reset is needed, use a dedicated
recalculate_*_count()SQL function callable via RPC.
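A sketch of such a reset helper, following the `recalculate_*_count()` convention named above (the function does not exist yet; adapt per counter):

```sql
-- Recompute events.attendee_count from the source of truth.
CREATE OR REPLACE FUNCTION public.recalculate_attendee_count(p_event_id UUID)
RETURNS void
SECURITY DEFINER
SET search_path = ''  -- mandatory on SECURITY DEFINER (§3.4)
LANGUAGE sql AS $$
  UPDATE public.events e
  SET attendee_count = (
    SELECT COUNT(*) FROM public.event_attendees a
    WHERE a.event_id = e.id AND a.status = 'going'
  )
  WHERE e.id = p_event_id;
$$;
```

Exposing this via RPC gives admins a sanctioned reset path without ever letting application code SET the counter directly.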
5.2 Anti-Trash Data Policy
Development data must be realistic. The following are banned as usernames, event titles, or any user-visible text field:
- Generic strings: “test”, “Test User”, “aaa”, “xxx”
- Profane or juvenile strings: any variation of sexual, excretory, or vulgar terms
- Placeholder patterns: “Caca”, “PiselloPalle”, “Test123”, “foo”, “bar”, “asdf”
Action: Claude is authorized to:
- Flag rows matching these patterns during any audit task.
- Propose a `DELETE` or `UPDATE` statement for admin review before execution.
- Never execute the delete unilaterally — the human must approve.
Seed data must use realistic Italian names, cities, and event titles appropriate for a nightlife/social events platform. Example acceptable seed data pattern:
- Event: “Aperitivo in Terrazza — Navigli Social Night”
- Username: `marco_b`, `giulia_events`, `nightout_milan`
- Venue: "Alcatraz Milano", "Fabrique", "BASE Milano"
5.3 Documentation DRY Principle
- One source of truth per concept. If the database schema is described in `[[Database Schema]]`, do not repeat the table structure in another file — link to it.
- When writing documentation for a new feature, the structure is:
  - Spec in `content/roadmap/specs/` (the "what and why")
  - Plan in `content/roadmap/plans/` (the "how and when")
  - Reference update in the appropriate `content/architecture/` or `content/backend/` file (permanent living documentation)
- Obsidian wikilinks `[[file-name]]` or `[[file-name|Display Text]]` are the standard cross-reference format.
6. Performance & Scalability
6.1 N+1 Query Detection
An N+1 query occurs when a list of N items is fetched, and then N additional queries are fired to fetch related data per item. This is the most common performance bug in Supabase/PostgREST integrations.
Detection pattern in Flutter (Dart):
// ❌ N+1 bug:
final events = await supabase.from('events').select();
for (final event in events) {
final organizer = await supabase.from('profiles').select().eq('id', event['organizer_id']).single();
// This fires N separate queries
}
// ✅ Correct — single query with join:
final events = await supabase.from('events').select('''
*,
organizer:profiles!organizer_id(id, username, profile_picture)
''');

Detection pattern in Next.js (TypeScript):
// ❌ N+1 bug:
const events = await supabase.from('events').select('*');
for (const event of events.data) {
const { data: venue } = await supabase.from('venues').select().eq('id', event.venue_id).single();
}
// ✅ Correct:
const { data: events } = await supabase.from('events').select(`
*, venue:venues(id, name, address, city)
`);

Any PR that introduces N+1 patterns must be rejected and rewritten.
6.2 Materialized Views for Heavy Aggregations
For data that is expensive to compute but changes infrequently, use materialized views instead of computing on every request:
-- Example: pre-computed event feed scores
CREATE MATERIALIZED VIEW public.event_feed_scores AS
SELECT
e.id AS event_id,
e.start_date,
e.attendee_count,
e.category,
e.geo_point,
-- scoring formula
(e.attendee_count * 0.4 + EXTRACT(EPOCH FROM (e.start_date - now())) * -0.0001) AS feed_score
FROM public.events e
WHERE e.status = 'published' AND e.start_date > now();
CREATE UNIQUE INDEX ON public.event_feed_scores (event_id);
-- Refresh triggered by Edge Function on a cron or on significant event changes:
REFRESH MATERIALIZED VIEW CONCURRENTLY public.event_feed_scores;

CONCURRENTLY is mandatory on refresh in production — it avoids blocking reads on the view during the refresh (it requires the unique index defined above).
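The refresh can also be scheduled with pg_cron, the same mechanism the `leaderboard_xp` view already uses — a sketch; the job name and interval are illustrative:

```sql
-- Refresh the feed-score view every 15 minutes via pg_cron.
SELECT cron.schedule(
  'refresh_event_feed_scores',   -- job name
  '*/15 * * * *',                -- every 15 minutes
  $$ REFRESH MATERIALIZED VIEW CONCURRENTLY public.event_feed_scores; $$
);
```

An in-database schedule avoids an Edge Function round-trip purely to trigger the refresh.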
6.3 Supabase Realtime — Subscription Patterns
Supabase Realtime uses Postgres logical replication. Rules:
- Subscribe to specific tables and filters, never to `*` (all tables).
- Unsubscribe when the widget/screen is disposed. Memory leaks from stale subscriptions are a known Flutter bug source.
- Prefer Realtime `broadcast` (application-level pub/sub) for ephemeral events (typing indicators, presence) — it does not go through Postgres at all.
- Use Realtime `postgres_changes` only for persistent data that must survive app restart (messages, notifications, attendee counts).
6.4 Edge Function Design Rules
Each Edge Function must:
- Validate the incoming `Authorization: Bearer <jwt>` header and extract `auth.uid()`.
- Use the `supabaseClient` created with the user's JWT (not `service_role`) unless the operation requires bypassing RLS — and document why.
- Return consistent error shapes:
// Standard error response:
return new Response(JSON.stringify({ error: { code: 'NOT_FOUND', message: 'Event not found' } }), {
status: 404,
headers: { 'Content-Type': 'application/json' }
});

- Never call another Edge Function via HTTP from inside an Edge Function — use shared modules instead.
- Have a timeout budget of under 5 seconds for user-facing calls. Background/cron functions may use up to 25 seconds.
7. Version Control & Security Toggles
7.1 system_settings Table — Runtime Security Configuration
The `app_config` table already exists and serves a similar purpose. The `system_settings` additions below should be implemented as new rows in `app_config`, not as a separate table, to avoid schema duplication.
app_config schema (actual):
-- PK is `key TEXT`, NOT a UUID id column:
CREATE TABLE public.app_config (
key TEXT PRIMARY KEY,
value TEXT NOT NULL, -- stores stringified values (including JSON strings)
description TEXT,
updated_at TIMESTAMPTZ DEFAULT now()
);

Add the following keys to public.app_config:
| Key | Value (stored as TEXT) | Description |
|---|---|---|
| min_app_version | {"ios": "1.0.0", "android": "1.0.0"} | Minimum allowed client version per platform |
| is_spoofing_prevention_active | false | Toggle to enable/disable HMAC request signing verification |
| app_hmac_key_version | {"version": 1} | Current HMAC key version (actual key in Supabase secrets) |
`app_config.value` is `TEXT NOT NULL`, not JSONB. When storing JSON, you must `JSON.parse()` / `jsonDecode()` on the client side after reading. When writing, pass the stringified JSON.
RLS on app_config (already configured):
- `SELECT`: public — any user can read (policy `app_config_read_all` with `USING (true)`).
- `INSERT`/`UPDATE`/`DELETE`: admin via `is_admin()` (policy `app_config_admin_update`, updated in migration 008). Consider tightening to `is_super_admin()` in a future migration.
```sql
-- Migration to add the new keys:
INSERT INTO public.app_config (key, value, description) VALUES
  ('min_app_version', '{"ios": "1.0.0", "android": "1.0.0"}', 'Minimum client version before force-upgrade'),
  ('is_spoofing_prevention_active', 'false', 'Toggle HMAC signing verification'),
  ('app_hmac_key_version', '{"version": 1}', 'Current HMAC key version (actual key in Supabase secrets)')
ON CONFLICT (key) DO NOTHING;
```

7.2 Version Enforcement — Force Update Flow
Middleware logic (implemented in an Edge Function that proxies version-sensitive calls, or as the first check in every Edge Function):
```typescript
// Pseudocode — adapt to each Edge Function:
const { data: config } = await supabase
  .from('app_config')
  .select('value')
  .eq('key', 'min_app_version')
  .single();

// `value` is TEXT, not JSONB, so parse it first (see §7.1):
const minVersions = JSON.parse(config.value) as { ios: string; android: string };

const clientVersion = req.headers.get('X-App-Version');   // e.g. "1.2.3"
const clientPlatform = req.headers.get('X-App-Platform'); // 'ios' | 'android'

if (clientVersion && clientPlatform) {
  const minVersion = minVersions[clientPlatform as 'ios' | 'android'];
  if (semverLessThan(clientVersion, minVersion)) {
    return new Response(
      JSON.stringify({ error: { code: 'UPGRADE_REQUIRED', min_version: minVersion } }),
      { status: 426 } // 426 Upgrade Required
    );
  }
}
```

Flutter (Dart) client handling:
- On receiving `426`, show a non-dismissible bottom sheet/dialog directing to the app store.
- Store the interceptor at the Supabase client initialization level (via a custom HTTP client override), not per-screen.
Header contract:
- `X-App-Version: <semver>` — sent on every request from the Flutter app.
- `X-App-Platform: ios | android` — sent on every request.
- Both headers are added globally in the Dio/http client configuration.
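The `semverLessThan` helper used in the server-side check above is not defined anywhere in this document; a minimal sketch for plain `MAJOR.MINOR.PATCH` strings (no pre-release or build metadata, which the version headers here do not use) could be:

```typescript
// Compare two plain MAJOR.MINOR.PATCH versions numerically.
// Sketch only: does not handle pre-release tags or build metadata.
function semverLessThan(a: string, b: string): boolean {
  const pa = a.split('.').map(Number);
  const pb = b.split('.').map(Number);
  for (let i = 0; i < 3; i++) {
    const x = pa[i] ?? 0;
    const y = pb[i] ?? 0;
    if (x !== y) return x < y;
  }
  return false; // equal versions are not "less than"
}

// semverLessThan('1.2.3', '1.3.0') → true
// semverLessThan('1.3.0', '1.3.0') → false
```

Note the numeric comparison: a naive string comparison would order `'1.10.0'` before `'1.9.0'`.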
7.3 Manual Spoofing Prevention — HMAC-SHA256 Handshake
This is a pre-store deterrent. Once the app is on the App Store and Google Play, replace this with Apple App Attest and Google Play Integrity API.
How it works:
- Secret storage: The HMAC secret lives in Supabase project secrets (`supabase secrets set HMAC_SECRET=...`), never in client-side code. In the mobile binary, a derived identifier (a hash of the secret, not the secret itself) is obfuscated using code obfuscation (`flutter build --obfuscate --split-debug-info=...`).
- Request signing (Flutter):
```dart
import 'dart:convert';
import 'package:crypto/crypto.dart';

// Each request signs a canonical string:
// "<HTTP_METHOD>|<PATH>|<UTC_TIMESTAMP_UNIX>|<AUTH_UID>"
String _buildCanonicalString(String method, String path, String uid) {
  final ts = DateTime.now().millisecondsSinceEpoch ~/ 1000;
  return '$method|$path|$ts|$uid';
}

String signRequest(String canonical, String obfuscatedKey) {
  final hmac = Hmac(sha256, utf8.encode(obfuscatedKey));
  final digest = hmac.convert(utf8.encode(canonical));
  return digest.toString();
}

// Send as header: X-Flow-Signature: <hex_digest>
// Send as header: X-Flow-Timestamp: <unix_ts>
```

- Verification (Edge Function):
```typescript
async function verifySignature(req: Request, uid: string): Promise<boolean> {
  const { data: config } = await adminClient.from('app_config')
    .select('value').eq('key', 'is_spoofing_prevention_active').single();
  // `value` is TEXT: the toggle is ON only when it is exactly the string 'true'.
  // (A truthiness check would treat the stored string 'false' as enabled.)
  if (config?.value !== 'true') return true; // bypass if toggle is OFF

  const signature = req.headers.get('X-Flow-Signature');
  const timestamp = req.headers.get('X-Flow-Timestamp');
  if (!signature || !timestamp) return false;

  // Reject requests older than 5 minutes (replay attack prevention)
  const ts = parseInt(timestamp, 10);
  if (Math.abs(Date.now() / 1000 - ts) > 300) return false;

  const method = req.method;
  const path = new URL(req.url).pathname;
  const canonical = `${method}|${path}|${timestamp}|${uid}`;

  // The real secret lives in Deno.env:
  const secret = Deno.env.get('HMAC_SECRET')!;
  const key = await crypto.subtle.importKey('raw', new TextEncoder().encode(secret),
    { name: 'HMAC', hash: 'SHA-256' }, false, ['sign']);
  const expected = await crypto.subtle.sign('HMAC', key, new TextEncoder().encode(canonical));
  const expectedHex = Array.from(new Uint8Array(expected))
    .map(b => b.toString(16).padStart(2, '0')).join('');
  return expectedHex === signature;
}
```

- On failure: Return `401 Unauthorized` with body `{ error: { code: 'SPOOFED_REQUEST' } }`. Do not reveal details.
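One hardening note: comparing the hex digests with `===` short-circuits on the first differing character, which leaks timing information an attacker could probe. A constant-time comparison is a cheap improvement (a sketch; the helper name is illustrative):

```typescript
// Constant-time comparison of two hex digest strings.
// Always scans the full string so runtime does not depend on where they differ.
function timingSafeEqualHex(a: string, b: string): boolean {
  if (a.length !== b.length) return false;
  let diff = 0;
  for (let i = 0; i < a.length; i++) {
    diff |= a.charCodeAt(i) ^ b.charCodeAt(i);
  }
  return diff === 0;
}

// In verifySignature, replace the final comparison with:
// return timingSafeEqualHex(expectedHex, signature);
```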
7.4 Key Rotation Protocol
If `is_spoofing_prevention_active` is ON and the HMAC key is suspected to be compromised:

1. Immediately: Set `is_spoofing_prevention_active = false` in `app_config` (allows all current clients to continue working while you respond).
2. Generate a new key: `openssl rand -hex 32`
3. Update the Supabase secret: `supabase secrets set HMAC_SECRET=<new_key>`
4. Release a new app version with the new obfuscated key identifier baked in.
5. Bump `min_app_version` to the new version — this forces all clients to upgrade before HMAC checking is relevant again.
6. Re-enable: Set `is_spoofing_prevention_active = true` once the new version has sufficient adoption (monitor via `X-App-Version` header analytics).
Steps 5 and 6 are coupled. Do not re-enable spoofing prevention until the `min_app_version` bump ensures old clients (with the old key) can no longer connect.
8. Implementation Workflow for Claude
This section defines the exact sequence of steps Claude must follow for every coding task on this project.
Step 1 — Pre-task Audit
Before writing a single line of code, check:
- Does the target table/file have any open Technical Debt item in §4?
- Does the target table have RLS enabled AND forced?
- Does the target table have index coverage on all FK columns?
- Does any column in scope violate the naming conventions in §2.1?
- Is there a `price` / `float` / `decimal` monetary column in scope? (§2.2)
If any check fails: propose the fix first, add the feature second. State this explicitly in the response.
Step 2 — Schema Changes
Every schema change must be a numbered Supabase migration file:
- Filename format: `YYYYMMDDHHMMSS_descriptive_slug.sql`
- Location: `flow_backend/supabase/migrations/`
- Must be idempotent where possible (`CREATE TABLE IF NOT EXISTS`, `ADD COLUMN IF NOT EXISTS`).
- Must include `COMMENT ON TABLE` and `COMMENT ON COLUMN` for non-obvious additions.
- Must include RLS policies (§3.1) and indexes (§2.4) in the same migration file as the table creation.
Step 3 — Cross-Layer Consistency
A schema change is never complete until all of the following layers are updated:
| Layer | What to update |
|---|---|
| Backend (SQL) | Migration file + any affected RPC functions |
| Mobile (Flutter/Dart) | Model class, API service, any Riverpod provider that queries the changed table |
| Web (Next.js/TypeScript) | Type definitions (ideally auto-generated via `supabase gen types`), any query that references the changed table |
| Documentation | [[Database Schema]] table entry, any spec/plan that mentioned the old structure |
If a task only touches one layer, explicitly note which other layers are not yet updated and create follow-up tasks.
Step 4 — Language Standard
- All code comments: English, professional, imperative tone (“Returns the user profile” not “This function returns the user profile”).
- All commit messages: English, conventional commits format: `feat(events): add price_cents column with migration`, `fix(rls): add FORCE ROW LEVEL SECURITY to messages table`.
- No Italian in SQL, Dart, or TypeScript files.
Step 5 — Verification Before Declaring Done
A task is only done when:
- The migration runs without errors on a local Supabase instance (`supabase db reset` or `supabase migration up`).
- The Flutter or Next.js code compiles without errors (`flutter build` or `tsc --noEmit`).
- RLS policies have been tested: one positive case (authorized user can perform the action) and one negative case (unauthorized user gets a Postgres error, not empty data).
- The documentation in `flow_docs/` reflects the new state of the system.
9. Quick Reference — Current Architecture
```
┌─────────────────┐       ┌──────────────────┐
│   Flutter App   │       │   Next.js Web    │
│  (iOS/Android)  │       │  (Admin Portal)  │
└────────┬────────┘       └────────┬─────────┘
         │                         │
         └──────────┬──────────────┘
                    │  HTTPS (PostgREST + Edge Functions)
                    ▼
        ┌──────────────────────┐
        │      Supabase        │
        │  ┌────────────────┐  │
        │  │   PostgreSQL   │  │  ← Single source of truth
        │  │   + PostGIS    │  │
        │  │   + pg_trgm    │  │
        │  │   + RLS (every │  │
        │  │     table)     │  │
        │  └────────────────┘  │
        │  ┌────────────────┐  │
        │  │ Edge Functions │  │  ← Deno/TypeScript
        │  │ (8 functions)  │  │
        │  └────────────────┘  │
        │  ┌────────────────┐  │
        │  │    Realtime    │  │  ← Postgres changes + broadcast
        │  └────────────────┘  │
        │  ┌────────────────┐  │
        │  │    Storage     │  │  ← Images, files
        │  └────────────────┘  │
        └──────────────────────┘
```
What does NOT exist (and must not be introduced without measured justification):
- ❌ Custom API Gateway (Express, Fastify, Kong)
- ❌ Message broker (Kafka, RabbitMQ, Redis Streams)
- ❌ Secondary database (MongoDB, DynamoDB)
- ❌ Custom cache layer (Redis, Memcached) — use Postgres materialized views
- ❌ Separate search engine — `pg_trgm` handles current fuzzy matching needs
Related Documents
- database-schema — live schema reference
- overview — architecture overview
- api-reference — Edge Function API contracts
- database-audit-2026-03-24 — migration 008 full details
- technical-architecture — pre-migration legacy reference (historical)