The obvious path
If you’re building a Next.js web application in 2025, the most obvious path is:
- Deploy: Vercel
- Database: Neon, PlanetScale, or another managed PostgreSQL
- Storage: AWS S3 or Cloudflare R2
- ORM: Prisma or Drizzle
- Auth: NextAuth.js or Clerk
It’s the “one-developer studio” stack that works, has documentation, has Stack Overflow. I seriously considered this path.
And went somewhere else.
The cost factor
The main reason was cost — not today’s cost, but the cost at scale.
JoBoEco is a system for academic university events. The usage pattern is interesting: massive spikes in very specific windows (when registrations open, when the submission deadline closes, on the day of the event) and essentially zero traffic the rest of the time. An event-driven application in the most literal sense.
On Vercel, you pay per function execution. The bill for an event with a thousand registrations in an 8-hour window could be negligible — or it could explode depending on what’s happening in the application. Serverless pricing models have this unpredictability.
On Cloudflare Workers, the free plan includes 100,000 requests per day. The paid plan ($5/month) covers 10 million requests per month. For JoBoEco’s volume, this is essentially free.
D1 (Cloudflare’s SQLite database) offers 5 million reads and 100k writes per day on the free plan. Compared to Neon or PlanetScale — whose free plans are much more restrictive on concurrent connections and branches — D1 proved far more generous for the project’s usage pattern.
R2 doesn’t charge for egress (data transfer out of the bucket). For submission documents, event logos, enrollment proofs, and downloadable templates, this avoids an entire category of variable cost.
The bottom line: Cloudflare Workers + D1 + R2 = a stack that runs on the free tier for typical JoBoEco volume. This isn’t a marginal advantage — it’s the difference between a sustainable project and one that needs sponsorship to pay its infrastructure bills.
The real risk: D1 is distributed SQLite
Here I need to be honest about the tradeoff, because it’s real.
D1 is SQLite running on Cloudflare’s infrastructure, distributed at the edge. In practical terms, this means a few limitations:
- A simpler type system — SQLite has fewer native types than PostgreSQL. No ENUM, no arrays, no JSONB with GIN indexes. I simulated ENUMs with TEXT columns validated at the application layer.
- No concurrent connections in the traditional sense — each Worker invocation creates its own connection to D1. This has implications for read-modify-write atomicity in high-concurrency scenarios.
- No extensions — no PostGIS, pg_trgm, or any other extension. For simple text search I used LIKE, which works for the project’s volume.
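The ENUM workaround is simple enough to sketch in plain TypeScript. This is an illustrative sketch, not JoBoEco’s actual code — the status values and function names are hypothetical:

```typescript
// Illustrative sketch: simulating a Postgres ENUM with a TEXT column
// validated at the application layer. Values and names are hypothetical.
const EVENT_STATUSES = ['draft', 'open', 'closed'] as const
type EventStatus = (typeof EVENT_STATUSES)[number]

// Validate a raw TEXT value read from (or about to be written to) D1.
function parseEventStatus(value: string): EventStatus {
  if (!(EVENT_STATUSES as readonly string[]).includes(value)) {
    throw new Error(`Invalid event status: ${value}`)
  }
  return value as EventStatus
}
```

The upside of doing this at the application layer is that the TypeScript union type catches invalid writes at compile time; the downside is that nothing in the database itself enforces the constraint.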
For JoBoEco, none of these limitations were a real blocker. The schema is relational but not extremely complex — queries are practically all SELECT with direct JOINs, with no need for recursive CTEs or advanced window functions.
What positively surprised me: the latency. Edge Workers + D1 at the edge = responses that feel local even for users in different regions. For an application like this, where most users are in Brazil and the deploy is in a South American region, the benefit is concrete.
What I missed: native observability. D1’s tooling for inspecting the production database is still quite rudimentary compared to PgAdmin or the Neon console. Drizzle Studio fills part of that gap, but it’s not the same thing.
Why Drizzle and not Prisma
When the project started, Prisma didn’t yet have stable D1 support. Drizzle did. End of story from a technical standpoint.
But even if Prisma had support, I’d probably have chosen Drizzle anyway. The philosophy is different: Drizzle assumes you know SQL, and uses TypeScript to type the operations. Prisma abstracts the database so much that when you need to optimize a specific query, you end up fighting the abstraction.
With Drizzle, what you write is essentially typed SQL. No surprises between “what I wrote” and “what went to the database”.
One thing I caught early: Drizzle creates a prepared statement per query. In Workers, where the connection is ephemeral, this has a minimal but consistent cost. For very frequent queries, it’s better to use db.select().from() directly rather than creating separate prepared statements.
Next.js App Router: the bet that paid off
This was another decision that could have gone badly. The Next.js App Router, when I started JoBoEco, was in an… interesting phase. Good in theory, active bugs in practice, incomplete documentation in certain areas.
Why did I go with the App Router anyway?
Because OpenNext — the library that makes Next.js run on Cloudflare Workers — was being developed primarily for the App Router. The Pages Router had better support at some point, but community investment was clearly in the App Router. Going against that direction would be deliberately buying technical debt.
The bet paid off. The App Router is now stable, and the integration with OpenNext is mature enough for production deployment without major surprises. The generated bundle is still larger than I’d like (the deploy exceeds 12MB compressed), but it works.
OpenNext: the missing link
It’s impossible to talk about Next.js on Cloudflare without talking about OpenNext. It transforms the next build output into a Worker bundle that Wrangler can deploy. The configuration ended up quite lean:
```ts
import type { OpenNextConfig } from '@opennextjs/cloudflare'

const config: OpenNextConfig = {
  default: {
    override: {
      wrapper: 'cloudflare-node',
      converter: 'edge',
      incrementalCache: async () => {
        const { KVIncrementalCache } = await import(
          '@opennextjs/cloudflare/kv-cache'
        )
        return new KVIncrementalCache()
      },
      queue: 'dummy',
      tagCache: 'dummy',
    },
  },
  middleware: {
    external: true,
  },
}

export default config
```

The important detail: middleware: { external: true }. This makes Next.js middleware run as a separate Worker, improving latency for protected routes — which in JoBoEco are the majority.
There’s a gotcha that cost me a few hours: Next.js Image Optimization doesn’t work in Workers by default. The solution was to disable the native optimizer and serve images directly via R2. For the project’s use cases this wasn’t an issue.
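In practice, disabling the native optimizer is a one-line config change. A minimal sketch of the relevant next.config.ts fragment (assuming image URLs are then served straight from R2):

```typescript
// next.config.ts — sketch: turn off Next.js Image Optimization,
// since the default optimizer does not run on Cloudflare Workers.
import type { NextConfig } from 'next'

const nextConfig: NextConfig = {
  images: {
    // Serve images as-is; the files live in R2 and are delivered directly.
    unoptimized: true,
  },
}

export default nextConfig
```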
The authentication stack
NextAuth.js v4 with CredentialsProvider was the choice, but not without caveats. It works, but the DX for customizing flows — like password reset with an expired token, or resending a confirmation email — is bureaucratic. You end up writing a lot of boilerplate for cases that any SaaS auth solution handles with a single line of config.
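To make the “bureaucratic” point concrete, here is the flavor of boilerplate involved. This is a hypothetical sketch, not JoBoEco’s code — the token shape and helper name are invented for illustration:

```typescript
// Hypothetical sketch of the token-expiry boilerplate a password-reset
// flow needs around NextAuth v4. Names and shapes are illustrative.
interface ResetToken {
  token: string
  expiresAt: number // Unix epoch, milliseconds
}

// Returns true only when the presented token matches and has not expired.
function isResetTokenValid(
  stored: ResetToken,
  presented: string,
  now: number = Date.now(),
): boolean {
  return stored.token === presented && now < stored.expiresAt
}
```

And this is just one helper — the full flow also needs token generation, storage, invalidation after use, and the email send. (In real code the string comparison should also be timing-safe.)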
If I were starting today, I’d look more seriously at Auth.js v5 (the new version of NextAuth) or Clerk. Clerk has far superior DX, but has a cost above a certain volume. For a project that needs to be 100% viable on the free tier, NextAuth still makes sense — but with v5, not v4.
The stack summary
| Layer | Technology | Main motivation |
|---|---|---|
| Framework | Next.js 16 (App Router) | Ecosystem + OpenNext support |
| Runtime | Cloudflare Workers | Cost, edge latency |
| Database | Cloudflare D1 | Cost, native integration with Workers |
| Storage | Cloudflare R2 | No egress cost |
| ORM | Drizzle ORM | D1 support, transparent SQL |
| Auth | NextAuth.js v4 | Available, works |
| Deploy | OpenNext + Wrangler | Only viable path for CF Workers |
| Email | Resend | Clean API, good DX |
| Payments | MercadoPago | Only one with a good Pix API in Brazil |
What I would change
Very little, honestly. The stack proved solid for the problem at hand. The only area where I have genuine doubts is auth — but that’s a DX detail, not a functionality issue.
D1 as a production database was the riskiest decision on paper and the one that most positively surprised me in practice. If the project grows to larger volumes, an eventual migration to PostgreSQL via Hyperdrive (also from Cloudflare) would be a relatively smooth path.
In the next part, I get into what was probably the most labor-intensive part of the system: Pix payments with MercadoPago. Webhooks that may never arrive, polling that can’t be naive, and the external_reference scheme that saved me from having to build a separate lookup system.