Next.js Static Export + Member Area on Azure Serverless: Split Without Breaking UX
On my freelance engagement, we built a website factory serving dozens of brands and tens of thousands of pages. The marketing surface must stay static for performance and cost, while a new member area needs authentication, personal data, favorites, and subscriptions. Payments were out of scope at the time, which simplified PCI concerns but not the seams.
The decision was to keep the public site as a pure static export and to add a member SPA backed by Azure Functions, all within the same Next.js platform so the seam stayed invisible.
The objective was simple to state and subtle to execute. We wanted a clean cut between anonymous SEO traffic and personalized flows, yet with one cohesive UX and a shared design system. The hard constraint from customers was strict domain ownership, so we had to get routing, caching, and identity exactly right. That is where the work became interesting, and sometimes a bit stubborn to get right in production environments.
Split strategy inside one Next.js instance
The marketing website runs as a static export. Content is prebuilt and served from CDN with aggressive TTLs and tag based invalidation. The only runtime dependency is search through Elasticsearch, called from isolated handlers that do not alter the cache model.
The member area is a client app that boots behind an authenticated boundary and talks to Azure Functions over HTTP. It shares the design system and a small shell with the public site, but its data model and caching policies are independent. We avoid SSR for the member area to keep the static guarantees and decouple elastic capacity from page delivery.
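To make the split concrete, here is a minimal sketch of the static half, assuming a Next.js version that supports `output: "export"` and a TypeScript config file; the comments describe the intent, not our exact production settings. The member subpath ships only the SPA shell, which hydrates and fetches personalized data from Azure Functions after authentication.

```ts
// next.config.ts — minimal sketch; option values are illustrative, not the real config
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  // Every public page is prerendered to plain HTML/CSS/JS and pushed to the CDN.
  output: "export",
  // Hashed asset filenames let the CDN cache them immutably.
  // The member subpath exports only an empty shell; all personalized data is
  // fetched client side from Azure Functions once the user is authenticated.
};

export default nextConfig;
```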
This split preserves fast paths for crawlers and first loads, while allowing application style interactions where users expect them. It also reduces blast radius when we iterate on member features, because we do not risk invalidating public caches.
Identity and security contracts
Two options were on the table: token based auth with in-memory storage and a refresh-token dance, or cookie based auth with httpOnly cookies scoped to the domain. Since we stayed on a single domain and wanted a simple CSRF story, we chose signed cookies with SameSite=Lax for the member area and explicit CSRF tokens on unsafe methods. That avoided fragile storage logic and reduced cross tab inconsistencies.
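A minimal sketch of that contract, assuming the Azure Functions v4 Node programming model and a double-submit style token; the cookie name, header name, and key source are illustrative, not the production code:

```ts
// csrf-guard.ts — sketch of the signed cookie + CSRF contract described above.
import { HttpRequest, HttpResponseInit } from "@azure/functions";
import { createHmac, timingSafeEqual } from "node:crypto";

// Assumption: the signing key comes from Functions app settings.
const SIGNING_KEY = process.env.SESSION_SIGNING_KEY ?? "dev-only-key";

// Sign the session id so the cookie cannot be forged client side.
export function sessionCookie(sessionId: string): string {
  const sig = createHmac("sha256", SIGNING_KEY).update(sessionId).digest("hex");
  return [
    `session=${sessionId}.${sig}`,
    "Path=/",
    "HttpOnly",
    "Secure",
    "SameSite=Lax",
    "Max-Age=86400",
  ].join("; ");
}

// Reject unsafe methods unless the CSRF header matches the token tied to the
// session (looked up from the session store by the caller).
export function csrfGuard(req: HttpRequest, expectedToken: string): HttpResponseInit | null {
  if (["GET", "HEAD", "OPTIONS"].includes(req.method)) return null;
  const sent = req.headers.get("x-csrf-token") ?? "";
  const ok =
    sent.length === expectedToken.length &&
    timingSafeEqual(Buffer.from(sent), Buffer.from(expectedToken));
  return ok ? null : { status: 403, jsonBody: { error: "invalid csrf token" } };
}
```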
CORS was kept as narrow as possible. The public surface does not need cross origin. Member calls target the Functions domain behind the same apex, which avoids preflight storms. For SEO we made sure that public HTML and edge redirects never depend on cookies, otherwise caches get poisoned and TTFB suffers.
On logout we aggressively cleared cookies and local state and forced cache bust on the SPA shell to avoid ghosts of previous sessions. The boring details here made the system calmer and easier to reason about.
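For illustration, a logout handler in that spirit might look like the sketch below; the function name, status code, and header values are assumptions, and the client-side shell cache bust happens after this call returns.

```ts
// logout.ts — sketch: expire the session cookie and keep the response uncacheable.
import { app, HttpResponseInit } from "@azure/functions";

app.http("logout", {
  methods: ["POST"],
  authLevel: "anonymous",
  handler: async (): Promise<HttpResponseInit> => ({
    status: 204,
    headers: {
      // Expire the session cookie immediately.
      "Set-Cookie": "session=; Path=/; HttpOnly; Secure; SameSite=Lax; Max-Age=0",
      // Never let intermediaries cache the logout response.
      "Cache-Control": "no-store",
    },
  }),
});
```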
Routing, subpaths, and cache keys
Subdomain isolation would have been ideal for infrastructure cleanliness, yet customers insisted on one domain for brand and legal reasons. We therefore carved the member area under a subpath and used edge redirects to route traffic to the right handlers without leaking auth state into the static cache.
Cache keys varied on a small set of headers and never on cookies for public routes. We used tag based invalidation for content updates and left search results uncached on the CDN to avoid staleness combined with personalization hints. Error routes and 404 were normalized so both halves of the platform behaved the same under failure. It sounds minor, but consistent failure paths make debugging faster 🕵️‍♂️
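One way to express that cache contract is a small helper that maps route classes to headers; the TTL values and the `Cache-Tag` header are assumptions standing in for whatever the CDN in front actually supports.

```ts
// cache-policy.ts — sketch of the cache contract: long-lived public HTML with
// tag-based purges and nothing cookie-dependent; search and member responses
// stay out of the shared cache entirely.
type RouteClass = "public-html" | "public-asset" | "search" | "member";

export function cacheHeaders(route: RouteClass): Record<string, string> {
  switch (route) {
    case "public-asset":
      // Hashed filenames: safe to cache forever.
      return { "Cache-Control": "public, max-age=31536000, immutable" };
    case "public-html":
      // Long TTL at the edge, purged by content tag on publish.
      return {
        "Cache-Control": "public, s-maxage=86400, stale-while-revalidate=300",
        "Cache-Tag": "content", // assumption: the CDN supports tag-based purge
      };
    case "search":
      // Runtime results bypass the CDN so staleness never mixes with personalization.
      return { "Cache-Control": "no-store" };
    case "member":
      // Authenticated responses are private to the browser only.
      return { "Cache-Control": "private, no-store" };
  }
}
```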
Data and caching policies that do not fight each other
The public site uses immutable asset hashing and long TTL HTML with targeted purges. The member SPA uses fine grained client caches with SWR style policies and optimistic updates for low risk writes. We monitored revalidation storms and added jitter to throttle bursts after deploys.
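As a sketch of the member-side policy, assuming SWR and a hypothetical `/api/favorites` endpoint; the data shape, paths, and jitter window are illustrative:

```ts
// member-favorites.ts — sketch: SWR reads, an optimistic write for a low-risk
// mutation, and a little jitter so post-deploy revalidation does not hit the
// Functions app as one synchronized burst.
import useSWR, { useSWRConfig } from "swr";

type Favorite = { id: string; label: string };

// Spread revalidation over a small random window instead of an instant stampede.
const jitteredFetch = async (url: string): Promise<Favorite[]> => {
  await new Promise((r) => setTimeout(r, Math.random() * 500));
  const res = await fetch(url, { credentials: "include" });
  if (!res.ok) throw new Error(`request failed: ${res.status}`);
  return res.json();
};

export function useFavorites() {
  const { data, error } = useSWR<Favorite[]>("/api/favorites", jitteredFetch);
  const { mutate } = useSWRConfig();

  // Optimistic add: update the local cache immediately, roll back if the write fails.
  // The CSRF header from the auth contract is omitted here for brevity.
  const addFavorite = (fav: Favorite) =>
    mutate(
      "/api/favorites",
      fetch("/api/favorites", {
        method: "POST",
        credentials: "include",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(fav),
      }).then(() => [...(data ?? []), fav]),
      { optimisticData: [...(data ?? []), fav], rollbackOnError: true, revalidate: false }
    );

  return { favorites: data, error, addFavorite };
}
```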
Search remained the only runtime on the public surface. We bounded query complexity and guarded facets with allowlists to keep CPU use predictable. For the member area, we allowed richer queries but wrapped them with server side rate limits and timeouts to keep tail latencies in check.
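A guard in that spirit could look like the following sketch; the facet names, limits, and field mapping are illustrative, not our real index.

```ts
// search-guard.ts — sketch: only allowlisted facets become aggregations, and
// free text is capped, so public search CPU stays predictable.
const ALLOWED_FACETS = new Set(["brand", "category", "color"]);
const MAX_QUERY_LENGTH = 128;
const MAX_FACETS = 3;

export function buildSearchBody(q: string, facets: string[]) {
  const safeFacets = facets.filter((f) => ALLOWED_FACETS.has(f)).slice(0, MAX_FACETS);
  return {
    size: 20,
    query: {
      simple_query_string: {
        query: q.slice(0, MAX_QUERY_LENGTH),
        default_operator: "and",
      },
    },
    aggs: Object.fromEntries(
      safeFacets.map((f) => [f, { terms: { field: `${f}.keyword`, size: 20 } }])
    ),
  };
}
```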
These choices preserved fast paths for anonymous users and gave us flexibility where people expect interactivity. The relief on incident calls was real when caches stopped fighting each other during high traffic windows.
Observability and cold starts
We propagated a correlation id from the edge to the Functions runtime and back to the browser so traces stitched across the whole request path. Logs carried the id, the route, and a minimal user scope without PII. We sampled aggressively on the public surface and kept a much higher sampling rate on member flows, where tail latency mattered.
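A stripped-down version of that plumbing on the Functions side, assuming the v4 Node model and an `x-correlation-id` header convention (the route name is illustrative):

```ts
// correlation.ts — sketch: reuse the id the edge forwarded, create one if it is
// missing, log it without PII, and echo it back to the browser.
import { app, HttpRequest, HttpResponseInit, InvocationContext } from "@azure/functions";
import { randomUUID } from "node:crypto";

app.http("profile", {
  methods: ["GET"],
  authLevel: "anonymous",
  handler: async (req: HttpRequest, ctx: InvocationContext): Promise<HttpResponseInit> => {
    const correlationId = req.headers.get("x-correlation-id") ?? randomUUID();

    // Structured log line carrying the id and the route, nothing personal.
    ctx.log(JSON.stringify({ correlationId, route: "profile" }));

    return {
      status: 200,
      headers: { "x-correlation-id": correlationId },
      jsonBody: { ok: true },
    };
  },
});
```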
Cold starts were visible in the early days. We reduced package size, trimmed dependencies, and moved a few hot endpoints to a higher memory tier where the CPU boost paid off. Prewarming helped a bit, but the big wins came from cutting dynamic imports in the handler path and warming connections to dependent services. Seeing p95 cold start dips after those changes felt good 🚀
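The pattern that mattered most is easy to show: build heavy clients once at module scope and keep dynamic imports out of the handler path. A sketch, with an illustrative Elasticsearch endpoint and index name:

```ts
// hot-path.ts — sketch of the cold-start hygiene: the client is created once per
// worker process and reused (with warm connections) across invocations.
import { app, HttpResponseInit } from "@azure/functions";
import { Client } from "@elastic/elasticsearch";

// Module-level construction: paid once at cold start, free on warm invocations.
const es = new Client({ node: process.env.ELASTIC_URL ?? "http://localhost:9200" });

app.http("suggest", {
  methods: ["GET"],
  authLevel: "anonymous",
  handler: async (req): Promise<HttpResponseInit> => {
    // No await import(...) here: everything the hot path needs is already loaded.
    const q = req.query.get("q") ?? "";
    const result = await es.search({ index: "products", q, size: 5 });
    return { status: 200, jsonBody: result.hits.hits };
  },
});
```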
Quotas were enforced per route family and per tenant where relevant. When we tripped a quota, the UI degraded to cached projections and offered a soft retry rather than blocking the user with hard failures. That small detail kept support tickets sane.
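On the client, the degraded path can be as simple as this sketch, assuming quota trips surface as 429s and that a cached projection is available; names and retry windows are illustrative.

```ts
// quota-fallback.ts — sketch: on a quota trip, serve the last cached projection
// marked as stale and schedule a jittered soft retry instead of a hard failure.
type Projection<T> = { data: T; stale: boolean };

export async function fetchWithFallback<T>(
  url: string,
  cached: T | undefined,
  retry: (ms: number) => void
): Promise<Projection<T> | undefined> {
  const res = await fetch(url, { credentials: "include" });
  if (res.status === 429 && cached !== undefined) {
    retry(5_000 + Math.random() * 5_000); // soft retry with jitter
    return { data: cached, stale: true };
  }
  if (!res.ok) return cached !== undefined ? { data: cached, stale: true } : undefined;
  return { data: (await res.json()) as T, stale: false };
}
```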
Serverless Framework on Azure
The Serverless Framework worked smoothly on AWS. On Azure, it was usable but required a few pragmatic tweaks. Function naming and resource grouping needed custom conventions to avoid conflicts. The default webpack bundling had to be adjusted for Functions v4 and Node runtime parity. Some bindings demanded explicit configuration that the generic tooling hides on AWS.
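For flavor, the service definition ended up shaped roughly like the sketch below; treat the keys as assumptions to check against the pinned serverless-azure-functions version rather than a reference, since event and naming options shift between plugin releases, which is exactly the kind of delta we documented.

```ts
// serverless.ts — rough sketch of the Azure service shape; keys and values are
// illustrative and depend on the pinned plugin versions.
const serverlessConfiguration = {
  service: "member-api",
  frameworkVersion: "3",
  plugins: ["serverless-azure-functions", "serverless-webpack"],
  provider: {
    name: "azure",
    region: "West Europe",   // assumption: region label format per plugin docs
    runtime: "nodejs18",     // keep Node parity with the Functions v4 runtime
    prefix: "brandx",        // custom naming convention to avoid resource conflicts
  },
  functions: {
    favorites: {
      handler: "src/handlers/favorites.handler",
      // Event shape varies across plugin versions — check against the pinned one.
      events: [{ http: true, methods: ["GET", "POST"], authLevel: "function" }],
    },
  },
};

module.exports = serverlessConfiguration;
```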
It was not a deal breaker. The framework remained valuable as a shared language across teams. But we learned to document the Azure specific deltas and to pin versions more carefully. After that, deployments became predictable again, and the team stopped losing time on tooling friction.
KPIs and what moved the needle
We tracked public TTFB, member TTI, error rates for Azure 5xx, cold start distributions, and the cache hit ratio on the CDN. The biggest gains came from removing cookie dependence on public cache keys, right sizing the Functions memory tier for hot paths, and keeping search out of the CDN layer so staleness did not linger.
An underrated win was consistency. Normalized error pages, unified redirects, and shared telemetry cut our MTTR even when nothing else changed. It made the platform feel less capricious over time.
Closing
Keeping static export and a member SPA in one Next instance is viable when contracts are explicit and cache semantics are tight. The seam disappears for users, operations stay boring, and the product can evolve without dragging the marketing surface into every change. It takes discipline more than heroics, and the payoff is compounding.
I would make the same trade again. The separation sharpened our mental model, and the platform became steadier and less buggy where it actually counts.