Supabase + Cloudflare Pages: Zero-Cost Full-Stack Deployment
Build production-ready full-stack applications with Supabase (free PostgreSQL) and Cloudflare Pages (free hosting). Complete guide covering database setup, edge functions, security patterns, and deployment.
Community Pattern Notice: This guide documents the actual implementation pattern used to build claude-world.com. The patterns shown here are production-tested and can be adapted for your own projects.
Building a full-stack application typically requires paying for hosting, database, and infrastructure. But with the right combination of services, you can deploy a production-ready application for essentially $0/month.
This guide covers the complete Supabase + Cloudflare Pages integration pattern, including database setup, edge functions, security hardening, and deployment.
Why This Stack?
| Component | Purpose | Cost |
|---|---|---|
| Cloudflare Pages | Static hosting + Edge Functions | Free |
| Supabase | PostgreSQL database + Auth | Free tier |
| Astro | Static site generator with SSR | Free |
Total cost for MVP: $0/month (domain ~$10/year)
The combination provides:
- Global CDN - Cloudflare’s edge network for fast content delivery
- Edge Functions - Server-side logic without managing servers
- PostgreSQL - Full relational database with Supabase
- Row Level Security - Database-level access control
- Auto-scaling - Handles traffic spikes without configuration
Project Structure
Understanding the directory structure helps you organize code logically:
project/
├── src/
│ ├── lib/
│ │ ├── supabase.ts # Database client
│ │ ├── rate-limit.ts # Rate limiting
│ │ └── security.ts # CSRF protection
│ └── pages/
│ └── api/ # Edge functions
├── supabase/
│ └── migrations/ # SQL migrations
├── wrangler.toml # Cloudflare config
├── .env.example # Environment template
└── .dev.vars # Local dev secrets (gitignored)
This separation keeps concerns organized:
- src/lib/ - Reusable utilities
- src/pages/api/ - API endpoints
- supabase/ - Database migrations and schema
1. Cloudflare Configuration
wrangler.toml Setup
The wrangler.toml file configures how Cloudflare builds and deploys your project:
# Cloudflare Pages configuration
name = "your-project"
compatibility_date = "2025-01-01"
compatibility_flags = ["nodejs_compat"]
# Build output directory
pages_build_output_dir = "./dist"
# Environment variables
[vars]
PUBLIC_SUPABASE_URL = "https://your-project.supabase.co"
# IMPORTANT: For production, set the anon key as a secret:
# wrangler pages secret put PUBLIC_SUPABASE_ANON_KEY
# Optional: KV Namespace for sessions
# [[kv_namespaces]]
# binding = "SESSION"
# id = "your-kv-namespace-id"
Key points:
- nodejs_compat flag enables Node.js APIs in Workers
- pages_build_output_dir points to your Astro build output
- Non-sensitive variables can go in [vars], but secrets should use wrangler secret
Environment Variables
Create two files for managing secrets:
# .env.example (committed to git - shows required variables)
PUBLIC_SUPABASE_URL=https://your-project.supabase.co
PUBLIC_SUPABASE_ANON_KEY=your-anon-key
# .dev.vars (gitignored - actual secrets for local development)
PUBLIC_SUPABASE_URL=https://your-project.supabase.co
PUBLIC_SUPABASE_ANON_KEY=your-anon-key
Setting Production Secrets:
# Set secrets via the Cloudflare CLI (Pages projects use the pages subcommand)
wrangler pages secret put PUBLIC_SUPABASE_ANON_KEY
# Or via Cloudflare Dashboard:
# Pages > Your Project > Settings > Environment variables
Why two files? The .env.example documents what variables are needed without exposing actual values. The .dev.vars file is Cloudflare’s convention for local development secrets.
2. Supabase Database Integration
Creating the Supabase Client
The client should be created as a singleton to avoid multiple connections:
// src/lib/supabase.ts
import { createClient, type SupabaseClient } from '@supabase/supabase-js';
const supabaseUrl = import.meta.env.PUBLIC_SUPABASE_URL || '';
const supabaseAnonKey = import.meta.env.PUBLIC_SUPABASE_ANON_KEY || '';
// Singleton pattern - create client only when needed
let supabase: SupabaseClient | null = null;
export function getSupabaseClient(): SupabaseClient | null {
if (!supabaseUrl || !supabaseAnonKey) {
console.warn('Supabase credentials not configured');
return null;
}
if (!supabase) {
supabase = createClient(supabaseUrl, supabaseAnonKey);
}
return supabase;
}
Why graceful fallback? By returning null when credentials aren’t configured, you can develop locally without a database connection. The application degrades gracefully instead of crashing.
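A minimal, dependency-free sketch of the same pattern (the Client type here is a stand-in for SupabaseClient, purely for illustration):

```typescript
// Stand-in client type -- NOT the real SupabaseClient, just enough to
// demonstrate the singleton + graceful-null pattern without the library.
type Client = { url: string };

function makeClientGetter(url: string | undefined) {
  let cached: Client | null = null;
  return function getClient(): Client | null {
    if (!url) {
      console.warn('credentials not configured'); // degrade, do not crash
      return null;
    }
    if (!cached) cached = { url }; // created once, on first use
    return cached;
  };
}

const getClient = makeClientGetter('https://your-project.supabase.co');
const a = getClient();
const b = getClient();
console.log(a !== null && a === b); // every call returns the same instance
```

Calling code then checks for null before querying, exactly as subscribeToNewsletter does below.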
Type-Safe Database Functions
Define interfaces for your data and create type-safe functions:
// Type-safe interfaces
export interface Subscriber {
id?: string;
email: string;
source: string;
preferred_language: 'en' | 'zh-tw' | 'ja';
subscribed_at?: string;
confirmed?: boolean;
unsubscribed_at?: string | null;
}
// Example function with graceful fallback
export async function subscribeToNewsletter(
email: string,
source: string = 'website',
language: 'en' | 'zh-tw' | 'ja' = 'en'
): Promise<{ success: boolean; error?: string }> {
// Validate email
if (!email || !email.includes('@')) {
return { success: false, error: 'Invalid email address' };
}
const client = getSupabaseClient();
// Graceful fallback when Supabase is not configured
if (!client) {
console.log('Newsletter subscription (no Supabase):', { email, source, language });
return { success: true }; // Allow development without database
}
try {
// Check if already subscribed
const { data: existing } = await client
.from('newsletter_subscribers')
.select('id, unsubscribed_at')
.eq('email', email.toLowerCase())
.maybeSingle(); // maybeSingle() returns null (not an error) when no row matches
if (existing) {
if (existing.unsubscribed_at) {
// Re-subscribe
await client
.from('newsletter_subscribers')
.update({ unsubscribed_at: null, source, preferred_language: language })
.eq('id', existing.id);
}
// Generic success to prevent email enumeration
return { success: true };
}
// New subscription
await client
.from('newsletter_subscribers')
.insert({
email: email.toLowerCase(),
source,
preferred_language: language,
confirmed: true, // Single opt-in
});
return { success: true };
} catch (err) {
console.error('Newsletter subscription error:', err);
return { success: false, error: 'Failed to subscribe. Please try again.' };
}
}
Security considerations:
- Email normalization - Always lowercase emails to prevent duplicates
- Generic responses - Return the same response whether the email exists or not, preventing enumeration attacks
- Re-subscription handling - Allow users to re-subscribe if they previously unsubscribed
- RLS interaction - The duplicate-check SELECT only returns rows if the executing role can read the table; with the deny-anonymous-reads policy shown later, either run this server-side logic with a service-role key or rely on the UNIQUE constraint (for example, an upsert with ignoreDuplicates)
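The includes('@') check above is deliberately minimal. A slightly stricter helper might look like this (normalizeEmail and EMAIL_RE are illustrative, not from the source; the regex stays intentionally loose because real validation happens when a confirmation email is actually delivered):

```typescript
// Illustrative helpers: normalize first, then apply a loose structural check.
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

function normalizeEmail(raw: string): string | null {
  const email = raw.trim().toLowerCase(); // normalize before the uniqueness check
  return EMAIL_RE.test(email) ? email : null;
}

console.log(normalizeEmail('  Alice@Example.COM ')); // alice@example.com
console.log(normalizeEmail('not-an-email'));         // null
```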
Database Migrations
Create your table with proper constraints and security:
-- supabase/migrations/001_create_newsletter.sql
CREATE TABLE IF NOT EXISTS newsletter_subscribers (
id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
email TEXT UNIQUE NOT NULL,
source TEXT DEFAULT 'website',
preferred_language VARCHAR(10) DEFAULT 'en' CHECK (preferred_language IN ('en', 'zh-tw', 'ja')),
subscribed_at TIMESTAMPTZ DEFAULT NOW(),
confirmed BOOLEAN DEFAULT FALSE,
unsubscribed_at TIMESTAMPTZ
);
-- Indexes for faster queries
CREATE INDEX IF NOT EXISTS idx_newsletter_email ON newsletter_subscribers(email);
CREATE INDEX IF NOT EXISTS idx_newsletter_source ON newsletter_subscribers(source);
-- Enable Row Level Security (CRITICAL for production)
ALTER TABLE newsletter_subscribers ENABLE ROW LEVEL SECURITY;
-- Policy: Allow anonymous inserts (for signup)
CREATE POLICY "Allow anonymous inserts" ON newsletter_subscribers
FOR INSERT TO anon
WITH CHECK (true);
-- Policy: Deny reads to anonymous users (protect email list)
-- Only authenticated/service role can read
CREATE POLICY "Deny anonymous reads" ON newsletter_subscribers
FOR SELECT TO anon
USING (false);
Why Row Level Security (RLS)?
Without RLS, anyone with your anon key could read your entire subscriber list. RLS ensures:
- Anonymous users can only insert (sign up)
- Anonymous users cannot read the table (protecting your email list)
- Only authenticated users or service role can query data
Running Migrations:
# Via Supabase Dashboard
# Go to: SQL Editor > New Query > Paste and Run
# Or via Supabase CLI
supabase db push
3. Security Patterns
Rate Limiting
Protect your API endpoints from abuse with in-memory rate limiting:
// src/lib/rate-limit.ts
interface RateLimitEntry {
timestamps: number[];
blockedUntil?: number;
}
// In-memory store (per-instance)
const rateLimitStore = new Map<string, RateLimitEntry>();
export interface RateLimitConfig {
windowMs: number;
maxRequests: number;
blockDurationMs?: number;
}
// Preset configurations
export const RATE_LIMIT_CONFIGS = {
strict: {
windowMs: 60 * 1000, // 1 minute
maxRequests: 5, // 5 requests
blockDurationMs: 5 * 60 * 1000, // Block 5 minutes
},
standard: {
windowMs: 60 * 1000,
maxRequests: 30,
blockDurationMs: 60 * 1000,
},
lenient: {
windowMs: 60 * 1000,
maxRequests: 100,
},
} as const;
// Get client IP (Cloudflare provides this)
export function getClientId(request: Request): string {
// CF-Connecting-IP is the real client IP on Cloudflare
const cfIp = request.headers.get('CF-Connecting-IP');
if (cfIp) return cfIp;
const xForwardedFor = request.headers.get('X-Forwarded-For');
if (xForwardedFor) return xForwardedFor.split(',')[0].trim();
return 'unknown';
}
Why sliding window? A simple counter resets at fixed intervals, allowing bursts at the boundary. Sliding window tracks individual request timestamps for smoother rate limiting.
export interface RateLimitResult {
allowed: boolean;
remaining: number;
resetAt: number;
retryAfter?: number;
}
export function checkRateLimit(
clientId: string,
endpoint: string,
config: RateLimitConfig
): RateLimitResult {
const now = Date.now();
const key = `${clientId}:${endpoint}`;
let entry = rateLimitStore.get(key);
// Check if blocked
if (entry?.blockedUntil && entry.blockedUntil > now) {
return {
allowed: false,
remaining: 0,
resetAt: entry.blockedUntil,
retryAfter: Math.ceil((entry.blockedUntil - now) / 1000),
};
}
if (!entry) {
entry = { timestamps: [] };
rateLimitStore.set(key, entry);
}
// Clean old timestamps
const windowStart = now - config.windowMs;
entry.timestamps = entry.timestamps.filter(ts => ts > windowStart);
// Check limit
if (entry.timestamps.length >= config.maxRequests) {
if (config.blockDurationMs) {
entry.blockedUntil = now + config.blockDurationMs;
}
return {
allowed: false,
remaining: 0,
resetAt: entry.blockedUntil || (entry.timestamps[0] + config.windowMs),
retryAfter: Math.ceil(config.windowMs / 1000),
};
}
entry.timestamps.push(now);
return {
allowed: true,
remaining: config.maxRequests - entry.timestamps.length,
resetAt: now + config.windowMs,
};
}
export function rateLimitResponse(result: RateLimitResult): Response {
return new Response(
JSON.stringify({ error: 'Too many requests', retryAfter: result.retryAfter }),
{
status: 429,
headers: {
'Content-Type': 'application/json',
'Retry-After': (result.retryAfter || 60).toString(),
},
}
);
}
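To make the sliding-window claim concrete, here is a standalone toy comparison (slidingAllowed is illustrative and independent of the code above):

```typescript
// With a fixed 5-req/min counter, a client can burst 10 requests straddling
// the minute boundary (5 at the end of minute 0, 5 at the start of minute 1).
// A sliding window caps ANY 60-second span at 5.
function slidingAllowed(times: number[], now: number, windowMs: number, max: number): boolean {
  return times.filter(t => t > now - windowMs).length < max;
}

const windowMs = 60_000;
const hits: number[] = [];
let allowedCount = 0;
// 10 requests: 5 just before t=60s, 5 just after
for (const t of [59_990, 59_992, 59_994, 59_996, 59_998, 60_002, 60_004, 60_006, 60_008, 60_010]) {
  if (slidingAllowed(hits, t, windowMs, 5)) {
    hits.push(t);
    allowedCount++;
  }
}
console.log(allowedCount); // 5 -- a fixed per-minute counter would have allowed all 10
```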
CSRF Protection
Cross-Site Request Forgery protection validates that requests come from your domain:
// src/lib/security.ts
const ALLOWED_ORIGINS = [
'https://your-domain.com',
'https://www.your-domain.com',
];
// Add localhost for development
if (import.meta.env.DEV) {
ALLOWED_ORIGINS.push(
'http://localhost:4321',
'http://localhost:3000'
);
}
export function validateOrigin(request: Request): boolean {
const origin = request.headers.get('Origin');
const referer = request.headers.get('Referer');
// Permissive in development
if (import.meta.env.DEV) return true;
// Check Origin header
if (origin) {
return ALLOWED_ORIGINS.includes(origin);
}
// Fall back to Referer
if (referer) {
try {
const refererUrl = new URL(referer);
return ALLOWED_ORIGINS.includes(refererUrl.origin);
} catch {
return false;
}
}
return false; // Reject if no origin info
}
export function csrfForbiddenResponse(): Response {
return new Response(
JSON.stringify({ error: 'Forbidden' }),
{ status: 403, headers: { 'Content-Type': 'application/json' } }
);
}
export const SECURITY_HEADERS = {
'X-Content-Type-Options': 'nosniff',
'X-Frame-Options': 'DENY',
'Cache-Control': 'no-store, max-age=0',
};
Why both Origin and Referer? Some browsers or configurations may not send the Origin header for same-origin requests. The Referer header provides a fallback.
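The same check can be exercised standalone. This sketch inlines the allowed list and drops the import.meta.env.DEV branch so it runs anywhere:

```typescript
// Standalone version of the Origin/Referer fallback logic.
const ALLOWED = ['https://your-domain.com', 'https://www.your-domain.com'];

function originAllowed(origin: string | null, referer: string | null): boolean {
  if (origin) return ALLOWED.includes(origin);
  if (referer) {
    try {
      return ALLOWED.includes(new URL(referer).origin);
    } catch {
      return false; // malformed Referer
    }
  }
  return false; // no origin information at all: reject
}

console.log(originAllowed('https://your-domain.com', null));        // true
console.log(originAllowed(null, 'https://your-domain.com/signup')); // true (Referer fallback)
console.log(originAllowed('https://evil.example', null));           // false
```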
4. API Routes (Edge Functions)
Complete API Route Example
Putting it all together in a secure API endpoint:
// src/pages/api/newsletter.ts
import type { APIRoute } from 'astro';
import { subscribeToNewsletter } from '../../lib/supabase';
import { checkRateLimit, getClientId, rateLimitResponse, RATE_LIMIT_CONFIGS } from '../../lib/rate-limit';
import { validateOrigin, csrfForbiddenResponse, SECURITY_HEADERS } from '../../lib/security';
// Ensure server-side rendering (not static)
export const prerender = false;
export const POST: APIRoute = async ({ request }) => {
// 1. CSRF protection
if (!validateOrigin(request)) {
return csrfForbiddenResponse();
}
// 2. Rate limiting
const clientId = getClientId(request);
const rateLimitResult = checkRateLimit(clientId, 'newsletter', RATE_LIMIT_CONFIGS.strict);
if (!rateLimitResult.allowed) {
return rateLimitResponse(rateLimitResult);
}
// 3. Process request
try {
const { email, source = 'website', language = 'en' } = await request.json();
if (!email) {
return new Response(
JSON.stringify({ error: 'Email is required' }),
{ status: 400, headers: { 'Content-Type': 'application/json', ...SECURITY_HEADERS } }
);
}
const result = await subscribeToNewsletter(email, source, language);
if (result.success) {
return new Response(
JSON.stringify({ message: 'Successfully subscribed!' }),
{ status: 200, headers: { 'Content-Type': 'application/json', ...SECURITY_HEADERS } }
);
} else {
return new Response(
JSON.stringify({ error: result.error }),
{ status: 400, headers: { 'Content-Type': 'application/json', ...SECURITY_HEADERS } }
);
}
} catch (error) {
console.error('Newsletter API error:', error);
return new Response(
JSON.stringify({ error: 'Internal server error' }),
{ status: 500, headers: { 'Content-Type': 'application/json', ...SECURITY_HEADERS } }
);
}
};
Security layers in order:
- CSRF Protection - Validates request origin
- Rate Limiting - Prevents abuse
- Input Validation - Ensures required fields
- Error Handling - Never leaks internal errors
The prerender = false export is critical - it tells Astro to run this as a server-side function, not generate a static page.
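On the client side, the endpoint expects a JSON POST. A hedged sketch (buildSubscribeRequest is a hypothetical helper, not from the source; the actual fetch is commented out so the snippet stays self-contained):

```typescript
// Assemble the request shape the endpoint above accepts.
function buildSubscribeRequest(email: string, language: 'en' | 'zh-tw' | 'ja' = 'en') {
  return {
    url: '/api/newsletter',
    init: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ email, source: 'website', language }),
    },
  };
}

const req = buildSubscribeRequest('user@example.com');
console.log(req.init.body); // {"email":"user@example.com","source":"website","language":"en"}

// In the browser:
// const res = await fetch(req.url, req.init);
// if (res.ok) { /* show success */ } else { /* show (await res.json()).error */ }
```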
5. Advanced Patterns
Event Registration with QR Codes
For more complex use cases like event registration:
// Generate cryptographically secure QR token
function generateQRToken(): string {
return crypto.randomUUID();
}
// Registration with duplicate handling
export async function registerForEvent(
eventSlug: string,
email: string,
name: string
): Promise<{ success: boolean; registration?: EventRegistration }> {
const client = getSupabaseClient();
if (!client) return { success: false };
// Check for existing registration
const { data: existing } = await client
.from('event_registrations')
.select('*')
.eq('event_slug', eventSlug)
.eq('email', email.toLowerCase())
.single();
if (existing) {
// Return existing (good UX, prevents enumeration)
return { success: true, registration: existing };
}
// New registration
const { data, error } = await client
.from('event_registrations')
.insert({
event_slug: eventSlug,
email: email.toLowerCase(),
name,
qr_code_token: generateQRToken(),
status: 'registered',
})
.select()
.single();
if (error) {
console.error('Event registration error:', error);
return { success: false }; // keep the declared return shape; never throw to the caller
}
return { success: true, registration: data };
}
Privacy-Conscious Analytics
Track usage without storing personal data:
-- Analytics table with privacy-conscious design
CREATE TABLE analytics_events (
id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
event_type TEXT NOT NULL,
event_data JSONB DEFAULT '{}',
client_ip_hash TEXT, -- Hashed, not raw IP
user_agent TEXT,
country TEXT DEFAULT 'unknown', -- From CF-IPCountry header
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Enable RLS
ALTER TABLE analytics_events ENABLE ROW LEVEL SECURITY;
-- Allow anonymous inserts (tracking), deny reads
CREATE POLICY "Allow anonymous tracking" ON analytics_events
FOR INSERT TO anon WITH CHECK (true);
// Get country from Cloudflare header
function getCountry(request: Request): string {
return request.headers.get('CF-IPCountry') || 'unknown';
}
// Hash IP for privacy
async function hashIP(ip: string): Promise<string> {
const encoder = new TextEncoder();
const data = encoder.encode(ip + 'your-salt');
const hashBuffer = await crypto.subtle.digest('SHA-256', data);
const hashArray = Array.from(new Uint8Array(hashBuffer));
return hashArray.map(b => b.toString(16).padStart(2, '0')).join('').slice(0, 16);
}
Why hash IPs? Storing raw IPs creates privacy concerns and potential GDPR issues. Hashing with a salt allows you to track unique visitors without storing identifiable information.
6. Deployment
Astro Configuration
Configure Astro for Cloudflare deployment:
// astro.config.mjs
import { defineConfig } from 'astro/config';
import cloudflare from '@astrojs/cloudflare';
import react from '@astrojs/react';
import tailwind from '@astrojs/tailwind';
export default defineConfig({
output: 'hybrid', // Static by default, SSR opt-in
adapter: cloudflare({
platformProxy: {
enabled: true,
},
}),
integrations: [react(), tailwind()],
});
Why hybrid mode? Most pages can be static (faster, cheaper), while API routes and dynamic pages use SSR. This gives you the best of both worlds. Note that Astro 5 removed the separate hybrid option: the default static output now allows per-route prerender = false, so on newer versions you can simply omit the output line.
Deployment Commands
# 1. Install dependencies
pnpm install
# 2. Build
pnpm build
# 3. Preview locally (with Cloudflare environment)
pnpm preview # or: wrangler pages dev ./dist
# 4. Deploy
# Option A: Connect GitHub repo in Cloudflare Dashboard (recommended)
# Option B: Direct deploy
wrangler pages deploy ./dist
Production Checklist
Before going live, verify:
- Set Supabase secrets via wrangler pages secret put
- Enable RLS on all tables
- Configure allowed origins in security.ts
- Set up Cloudflare Rate Limiting Rules (Dashboard)
- Enable Cloudflare WAF rules
- Configure custom domain with SSL
7. Troubleshooting
"Supabase credentials not configured"
# Check if env vars are set
wrangler pages dev ./dist --local
# Set secrets for production (Pages projects use the pages subcommand)
wrangler pages secret put PUBLIC_SUPABASE_URL
wrangler pages secret put PUBLIC_SUPABASE_ANON_KEY
RLS Policy Blocking Queries
-- Debug: Check current policies
SELECT * FROM pg_policies WHERE tablename = 'your_table';
-- Temporarily disable RLS (development only!)
ALTER TABLE your_table DISABLE ROW LEVEL SECURITY;
-- Remember to re-enable before deploying:
-- ALTER TABLE your_table ENABLE ROW LEVEL SECURITY;
Rate Limit Not Working in Production
The in-memory rate limiter works per-instance. For multi-instance deployments:
- Use Cloudflare Rate Limiting Rules (Dashboard) - Recommended for production
- Or use Cloudflare KV for shared state across instances
- Or use Supabase for rate limit tracking
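The KV option can be sketched as follows. This is a hypothetical illustration: KVLike mirrors the shape of a Cloudflare KV binding (get/put with expirationTtl), stubbed in-memory here so the logic runs anywhere; on Cloudflare you would pass the real binding from env. For simplicity it uses a fixed one-minute bucket rather than the sliding window from earlier:

```typescript
// Shared-state rate limiting backed by a KV-shaped store.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}

async function kvRateLimit(kv: KVLike, clientId: string, max: number): Promise<boolean> {
  const bucket = Math.floor(Date.now() / 60_000); // current minute
  const key = `rl:${clientId}:${bucket}`;
  const count = parseInt((await kv.get(key)) ?? '0', 10);
  if (count >= max) return false;
  await kv.put(key, String(count + 1), { expirationTtl: 120 }); // stale buckets expire
  return true;
}

// In-memory stand-in for the real env binding
const store = new Map<string, string>();
const kv: KVLike = {
  async get(k) { return store.get(k) ?? null; },
  async put(k, v) { store.set(k, v); },
};

async function demo() {
  let allowed = 0;
  for (let i = 0; i < 8; i++) {
    if (await kvRateLimit(kv, '1.2.3.4', 5)) allowed++;
  }
  console.log(allowed); // 5 of 8 requests pass
}
demo();
```

Bear in mind that KV writes are eventually consistent and the read-then-write is not atomic, so concurrent requests can slip through; treat this as approximate and prefer Cloudflare's dashboard Rate Limiting Rules when accuracy matters.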
Security Best Practices Summary
- Row Level Security (RLS) - Always enable on Supabase tables
- CSRF Protection - Validate Origin/Referer headers on all POST endpoints
- Rate Limiting - Protect signup/auth endpoints strictly
- Input Validation - Validate and sanitize all inputs
- Error Handling - Never leak internal errors to clients
- Secrets Management - Use wrangler pages secret put for production; never commit secrets
Getting Started
Today:
- Create a Supabase project (free tier)
- Set up Cloudflare Pages (connect your GitHub repo)
- Configure environment variables
This week:
- Implement your first API endpoint with security layers
- Create database tables with RLS
- Test locally with wrangler pages dev
This month:
- Add advanced features (auth, file upload, etc.)
- Configure Cloudflare WAF and rate limiting
- Set up monitoring and analytics
This guide is based on the actual implementation of claude-world.com, built from zero to production in 48 hours using Claude Code.