A 200 ms faster response time is no longer a luxury in 2026 - it is a hard revenue lever (web.dev). Edge caching reduces the TTFB (time to first byte) of a Shopware store by 60 to 80 % (Cloudflare/Vercel 2026) - from a typical 200 to 800 ms down to 20 to 50 ms from the user's point of view. Just 100 ms of additional load time can cost up to 1 % of revenue (Amazon/Walmart via Conductor), while good Core Web Vitals boost conversion by 15 to 30 % (web.dev/totalcommerce 2026). This guide shows how Shopware stores can hit sub-200 ms globally using a reverse HTTP cache, ESI and cache tag invalidation - in a technically sound, compliant and measurable way.
Why TTFB is decisive in e-commerce
TTFB is the time between the HTTP request and the first received byte. It underpins all Core Web Vitals metrics - especially LCP (Largest Contentful Paint). The 2026 gold standard is below 200 ms (web.dev); yet only 62 % of mobile pages achieve a 'Good' LCP under 2.5 s (Web Almanac 2025). Studies consistently link load time to conversion: pages with a 1 s load time reach up to a 39 % conversion rate, pages at 5 s fall to 22 % (ALM Corp 2026). A 2 s delay increases bounce rate by 103 % (Akamai/Ringly 2026), and every further 100 ms costs around 7 % conversion (Akamai via Ringly). For a Shopware store with 500,000 EUR in monthly revenue, 300 ms of saved TTFB translates into up to 10,500 EUR of additional monthly revenue on paper - a figure that justifies edge investments in the vast majority of cases.
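For transparency, here is the arithmetic behind that figure as a small sketch. The per-100 ms revenue impact is the decisive assumption: the Amazon figure cited above suggests roughly 1 % of revenue per 100 ms, while the 10,500 EUR example implies a more conservative 0.7 %.

```python
# Back-of-envelope revenue model; the %-per-100ms parameter is an assumption.
def monthly_uplift_eur(monthly_revenue: float, saved_ms: float, pct_per_100ms: float) -> float:
    """Estimated additional monthly revenue from a TTFB reduction."""
    return monthly_revenue * (saved_ms / 100) * (pct_per_100ms / 100)

print(monthly_uplift_eur(500_000, 300, 0.7))  # 10500.0 - the conservative figure above
print(monthly_uplift_eur(500_000, 300, 1.0))  # 15000.0 - with the cited Amazon number
```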
Walmart reports: every additional second of speed lifts conversion by around 2 % (Walmart SlideShare). A 31 % LCP reduction yields +8 % revenue (techcognate). Speed is therefore not just an engineering topic but a direct driver of e-commerce KPIs - see also the article on Core Web Vitals and PageSpeed.
The clearest effects appear in international stores served from a single European data centre. An order from Singapore triggers at least one 300 ms roundtrip per page load - aggregated over homepage, category view, product detail and checkout this quickly adds up to double-digit seconds before purchase. Edge caching breaks this chain: the first contact with the store hits a nearby edge node, and only the truly personalised part (cart, login) actually reaches the origin. Edge caching stays relevant even for DACH-only stores without an international audience: peak traffic spikes (Black Friday, TV advertising, newsletter sends) hit the edge instead of the origin, and scaling becomes predictable.
Edge vs. origin: the latency math
A Shopware origin in Frankfurt serves a user in Sydney with a base latency of roughly 300 to 400 ms - purely from physical distance (a rough physics sketch follows the table below). On top come the TLS handshake, Shopware rendering and database roundtrips. An edge node in Sydney cuts distance latency to under 50 ms - provided the response is already cached there. In practice, projects with full-page caching report 50 to 80 % lower load times and TTFB below 200 ms globally (FatLab). Edge middleware with stale-while-revalidate drops origin requests by 85 to 95 % (digitalapplied); origin bandwidth costs fall by 30 to 40 % (Cloudflare). A fashion retailer case shows -70 % page load and -60 % TTFB after an edge rollout (Harper.fast). The crucial distinction is between asset-based CDNs (images, CSS, JS) and true edge caching of the HTML response: only the latter relieves the Shopware origin of compute-heavy rendering - and that is exactly what turns 'fast' into 'sub-200 ms worldwide' in the latency math.
| Metric | Origin-only | Single-region CDN | Global edge |
|---|---|---|---|
| TTFB Europe | 200-400 ms | 80-150 ms | 20-50 ms |
| TTFB North America | 400-700 ms | 200-300 ms | 30-60 ms |
| TTFB APAC | 600-900 ms | 400-600 ms | 40-80 ms |
| Peak origin load | 100 % | 60-70 % | 5-15 % |
| Cache hit rate | n/a | 60-75 % | 85-95 % |
| Invalidation complexity | low | medium | high - but manageable |
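The 'purely from physical distance' part of these numbers follows from signal speed in fibre, roughly two thirds of the speed of light. A minimal sketch, with an assumed route length for Frankfurt-Sydney:

```python
# Rough lower bound on network latency from geography alone.
FIBRE_KM_PER_MS = 200  # signal speed in fibre: ~200,000 km/s, i.e. ~200 km per ms

def min_rtt_ms(route_km: float) -> float:
    """Best-case round-trip time over a fibre route of the given length."""
    return 2 * route_km / FIBRE_KM_PER_MS

print(min_rtt_ms(16_500))  # Frankfurt-Sydney (assumed route length): >= 165 ms
print(min_rtt_ms(50))      # nearby edge node: well under 1 ms
```

Real routes are longer than the great-circle distance and add routing, TLS and queueing overhead - which is how observed round trips reach the 300 to 400 ms in the table.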
Shopware reverse HTTP cache - architecture
Shopware 6 supports a reverse HTTP cache as a proxy between the user and the Shopware application (Shopware Dev Docs). The proxy stores full HTML responses including headers and serves them on the next request without a backend roundtrip. Since version 6.4.11.0 a Fastly integration is available in storefront.yaml (Shopware Docs); likewise Varnish with the XKey module for stable BAN invalidation (Shopware hosting guide) or other reverse proxies can be connected. The new Shopware caching engine relies on standard Cache-Control and Vary headers (Shopware news), which makes it compatible with a wide range of proxies - in our Shopware hosting we tune the layers to the store topology. A typical three-layer setup looks like this: Shopware renders on the origin, PHP-FPM with OPcache and preloading reduces render time, Redis holds object cache and sessions, and the reverse cache with tag support sits in front. The soft purge function is decisive here: on purge the entry stays in the cache flagged as stale, users receive the old response within the TTL window while a refresh runs in the background. This prevents a cache miss avalanche when hundreds of products are updated simultaneously.
```yaml
storefront:
    reverse_proxy:
        enabled: true
        ban_method: BAN
        hosts:
            - 'http://edge-node-01.internal:8080'
            - 'http://edge-node-02.internal:8080'
        max_parallel_invalidations: 3
        redis_url: 'redis://cache.internal:6379/2'

framework:
    http_cache:
        enabled: true
        default_ttl: 7200
        stale_while_revalidate: 60
        stale_if_error: 3600
```

ESI: static page, dynamic fragments
Edge Side Includes (ESI) separate static page sections from dynamic fragments (Fastly ESI guide). The product page is served as a shell with a cache TTL of 2 hours; the cart badge, login state or personalised blocks load as separate fragments that are not cached or cached only briefly. This keeps hit rates high without leaking security-relevant content to third parties. The concept connects directly to Edge Computing and Edge Side Rendering and pairs well with Shopware frontends using Vue/Nuxt. A clean separation matters: what is clearly personalised (user name, cart quantity) and must never land in the shared cache? What is customer-group specific (B2B net prices) and belongs in a segmented cache key? What is globally cacheable (product description, category navigation)? This three-way split determines both hit rate and security.
```html
<!-- Cached: TTL 7200 s -->
<main>
    <h1>{{ product.name }}</h1>
    <div class="product-details">...</div>

    <!-- Dynamic: do not cache -->
    <esi:include src="/widgets/checkout/cart-widget" />

    <!-- Dynamic: personalised price band -->
    <esi:include src="/widgets/pricing/b2b-price/{{ product.id }}" />

    <!-- Stale-while-revalidate: 60 s -->
    <esi:include src="/widgets/cross-selling/{{ product.id }}"
                 ttl="300" stale-while-revalidate="60" />
</main>
```

Clean cache tag invalidation
The critical piece of edge caching is not the caching itself, but targeted invalidation. Shopware tags cache entries with objects such as `product-{id}`, `category-{id}` or `cms-page-{id}`; a product change invalidates the object cache and HTTP cache synchronously (Shopware HTTP cache docs). Without tag-based invalidation only time-based expiration is left - with windows during which outdated prices and stock are served. Details on monitoring the cache layers are covered in the article on shop monitoring for uptime and performance. Tag granularity is a balancing act: tags that are too broad (`catalog`) invalidate the entire product catalog cache on every change - hit rate collapses. Tags that are too fine-grained (`product-1234-attribute-color-red`) inflate the edge tag index, making purge lookups slow. The sweet spot usually sits at entity IDs plus two or three aggregating tags (category, manufacturer, brand).
- Tag-based invalidation: `product-1234` purges all cache entries referencing this product - detail page, listings, search (a minimal purge sketch follows this list).
- BAN invalidation via XKey (Varnish) or surrogate-key headers (Fastly) as the technical options.
- Purge throttling: `max_parallel_invalidations` prevents catalog imports from overloading the origin via purge waves.
- Event-driven: Shopware events trigger purges asynchronously via the message queue, not inside the request cycle.
- Time-based fallback: TTL values (2 h product page, 24 h CMS page) as a safety net.
- Price and stock fragments cached separately via ESI with a shorter TTL.
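For illustration, a minimal Python sketch of what a tag-based soft purge against the edge nodes could look like. The `xkey-softpurge` header name and the node list are assumptions that depend on the concrete Varnish/XKey VCL - Shopware's reverse-proxy integration handles this internally, so this is a sketch of the mechanism, not a replacement for it:

```python
import requests

# Assumed setup: Varnish nodes with the xkey vmod and a VCL that maps the
# "xkey-softpurge" request header to a soft purge (names depend on your VCL).
EDGE_NODES = [
    "http://edge-node-01.internal:8080",
    "http://edge-node-02.internal:8080",
]

def soft_purge(tags: list[str]) -> None:
    """Soft-purge all cache entries carrying any of the given cache tags."""
    for node in EDGE_NODES:
        # One request per node; xkey accepts space-separated keys.
        response = requests.request(
            "PURGE", node, headers={"xkey-softpurge": " ".join(tags)}, timeout=5
        )
        response.raise_for_status()

# A product update invalidates the entity tag plus its aggregating tags.
soft_purge(["product-1234", "category-42", "manufacturer-7"])
```

Because this is a soft purge, the stale entry keeps being served until the background refresh completes - exactly the behaviour described in the architecture section above.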
Stale-while-revalidate: UX during refresh
Stale-while-revalidate (SWR) is the ingredient that turns fast into really fast. After TTL expiry the edge node first serves the old response (stale) and triggers a background refresh in parallel. The user never sees a wait screen on a cache miss. Measurements show 85 to 95 % fewer origin requests with active SWR compared to a hard TTL refresh (digitalapplied) - while data freshness stays within seconds. As a complement, `stale-if-error` is worth enabling: if the origin briefly drops or returns 5xx, the edge keeps serving the last cached version. This reduces the conversion impact of deploys, database maintenance windows or short-term network issues to nearly zero - provided the content is not time-critical (flash sale countdown, live stock).
Static assets: `stale-while-revalidate=86400`. Product listings: `stale-while-revalidate=60`. Price fragments: `stale-while-revalidate=0` (never serve stale). This staggered strategy prevents time-critical data (prices, stock) from being served outdated while catalog structures stay maximally fast. More performance levers in the Shopware 6 performance optimisation article.
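Expressed as code, such a staggering could look like the following sketch - the route patterns and the fallback are illustrative assumptions aligned with the TTLs used in this article:

```python
# Illustrative Cache-Control staggering per URL cluster (patterns are assumptions).
CACHE_POLICY = {
    "/static/":          "public, max-age=31536000, stale-while-revalidate=86400",
    "/navigation/":      "public, max-age=7200, stale-while-revalidate=60",
    "/detail/":          "public, max-age=7200, stale-while-revalidate=60",
    "/widgets/pricing/": "public, max-age=60",   # never serve stale prices
    "/checkout/":        "private, no-store",    # personalised, never shared-cached
}

def cache_header(path: str) -> str:
    """Pick the most specific matching policy; default to not caching at all."""
    matches = [prefix for prefix in CACHE_POLICY if path.startswith(prefix)]
    return CACHE_POLICY[max(matches, key=len)] if matches else "private, no-store"
```

Defaulting unknown routes to `private, no-store` is the safe choice: a route that is accidentally uncached costs performance, while a route that is accidentally shared-cached can leak personalised data.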
Cache-Control and Vary headers: the new Shopware engine
In the new caching engine Shopware relies on standard HTTP headers (Shopware news): Cache-Control drives TTL and SWR, Vary signals which request headers (Accept-Language, currency, customer group) feed into the cache key. As a result, edge caching works with almost any HTTP proxy and is compatible with HTTP/2, HTTP/3 and Brotli compression. A deeper technical view is also provided by the article on managed hosting for online stores. The Vary header deserves particular attention: a wrong `Vary: User-Agent` fragments the cache into thousands of variants (every browser string is its own variant) and the hit rate collapses. The right approach is semantic variants like `Accept-Language` (language), `sw-currency` (currency) or `sw-context-hash` (customer group). The specific selection depends on the shop setup - single-currency stores need no currency vary, monolingual stores no language vary.
```http
HTTP/2 200
content-type: text/html; charset=UTF-8
cache-control: public, max-age=7200, stale-while-revalidate=60, stale-if-error=3600
vary: Accept-Language, Accept-Encoding, sw-currency, sw-context-hash
surrogate-control: max-age=86400
surrogate-key: product-1234 category-42 manufacturer-7
x-cache: HIT
x-cache-age: 328
x-served-by: edge-fra-03
etag: "a1b2c3d4e5f6"
x-shopware-cache-state: fresh
```

Measurement setup: TTFB, LCP, CWV
Without continuous measurement edge caching remains wishful thinking. A robust setup combines synthetic tests from multiple regions with real-user monitoring - details on infrastructure in the XICTRON cloud. Segmentation by region, device and edge node is critical: an aggregated TTFB median of 180 ms can be misleading if APAC users sit at 600 ms while DACH is at 40 ms. Only region-specific analysis reveals whether the edge strategy truly scales globally or merely optimises the home region. The same holds for cache hit rate: an 80 % average across all routes can mean product pages hit at 95 % while category listings sit at only 40 % - the latter would be a critical sign that filter parameters or sort options unnecessarily bloat the cache key.
- Synthetic monitoring: TTFB checks from Frankfurt, New York, Singapore, Sydney - every minute (a minimal probe sketch follows this list).
- Real-user monitoring (RUM): LCP, INP, CLS from real user sessions, segmented by region.
- Cache hit rate per URL cluster: product detail pages should typically reach above 85 %.
- Purge latency: time between product change and cache invalidation - target usually below 10 s.
- Origin requests per minute: should drop by 85 to 95 % after rollout (digitalapplied).
- Edge vs. origin error rate: high 5xx at the edge with low 5xx at the origin indicates config issues.
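Such a synthetic probe can start very small. The sketch below approximates TTFB as time-to-first-body-byte using the `requests` library; the URL is a placeholder, and a dedicated monitoring service with exact DNS/TLS/first-byte phase timings is more precise in production:

```python
import time
import requests

def probe_ttfb(url: str) -> tuple[float, str]:
    """Approximate TTFB in ms, plus the edge cache state if the header is exposed."""
    start = time.monotonic()
    with requests.get(url, stream=True, timeout=10) as response:
        next(response.iter_content(chunk_size=1))  # block until the first body byte
        ttfb_ms = (time.monotonic() - start) * 1000
        cache_state = response.headers.get("x-cache", "n/a")  # HIT/MISS diagnostics
    return ttfb_ms, cache_state

# Run this from probes in several regions (Frankfurt, New York, Singapore, Sydney)
# and store the results segmented by region, device class and URL cluster.
print(probe_ttfb("https://shop.example.com/"))
```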
Typical edge caching mistakes
- Personalised data lands in the shared cache: customer-group prices, cart or login state without a `Vary: Cookie` or `sw-context-hash` header is a critical leak.
- No invalidation chain for catalog imports: a batch import of 50,000 products without throttling creates purge waves and overloads the origin.
- TTL too aggressive: 7 days on product pages without tag invalidation leads to outdated prices and stock.
- Caching ESI fragments that should stay dynamic: caching the cart badge for 60 s means a wrong quantity in the header.
- Missing compression at the edge: Brotli/Gzip active at the origin but disabled at the edge - performance gains evaporate.
- No cache busting for assets: `style.css` without a hashed filename leads to mixed versions at the edge after a deploy.
- Soft purge not used: hard purges produce cache miss peaks; soft purge serves stale and refreshes in the background.
- Monitoring only the origin: edge issues stay invisible when only origin logs are evaluated - edge node metrics and RUM from the target regions are mandatory.
Edge caching inside the broader shop stack
Edge caching does not live in isolation. A modern Shopware stack consists of several layers that must be aligned. ERP integrations such as the Dynamics 365 Business Central integration feed in stock and prices - and every change must invalidate cache entries at the edge via clean cache tags. Compliance processes like ZUGFeRD cancellation invoices and credit notes or the extended information duties from September 2026 run server-side in the Shopware origin and cannot be cached - checkout, invoicing and legal fragments must therefore be clearly excluded from the edge cache. The interplay between edge layer, ERP synchronisation and compliance logic determines whether edge caching holds up in production or creates inconsistencies. In practice this means: a clear URL policy defines which routes are cached (/detail/, /navigation/, /, CMS pages) and which are not (/checkout/, /account/, /api/, /admin/). A misconfigured path can cause consequential bugs - for instance if a customer checkout lands in the shared cache for a few seconds. Every rollout therefore deserves a cacheability review at route level, ideally with automated tests that catch problematic cache headers in CI/CD.
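Such a cacheability review is easy to automate. A minimal CI smoke test could look like this sketch, where the staging host and route lists are placeholders for the store's actual URL policy:

```python
import requests

BASE = "https://staging.shop.example.com"  # placeholder staging host

# Placeholder route policy mirroring the URL policy described above.
PUBLIC_ROUTES = ["/", "/navigation/living-room/", "/detail/sample-product/"]
PRIVATE_ROUTES = ["/checkout/cart", "/account/login"]

def test_public_routes_are_cacheable():
    for path in PUBLIC_ROUTES:
        cc = requests.get(BASE + path, timeout=10).headers.get("cache-control", "")
        assert "public" in cc, f"{path} should be shared-cacheable, got: {cc!r}"

def test_private_routes_never_hit_shared_cache():
    for path in PRIVATE_ROUTES:
        cc = requests.get(BASE + path, timeout=10).headers.get("cache-control", "")
        assert "public" not in cc and ("private" in cc or "no-store" in cc), \
            f"{path} must never land in the shared cache, got: {cc!r}"
```

Run in CI/CD, a failing assertion blocks the deploy before a personalised route can ever reach the shared cache.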
Cost model and rollout phases
| Phase | Scope | Expectation |
|---|---|---|
| 1. Single-region reverse cache | Reverse proxy (Varnish or equivalent) in front of the Shopware origin, cache tags active | TTFB DACH -40 to -60 %, origin load -60 % |
| 2. Multi-region edge | Edge nodes in 3-4 regions, ESI for dynamic fragments, SWR active | Global TTFB -60 to -80 %, origin requests -85 % |
| 3. Global edge + personalisation | 8+ regions, personalised ESI fragments, edge middleware for A/B tests | Global TTFB < 200 ms, cache hit rate > 90 % |
Monthly costs depend heavily on the traffic profile: edge bandwidth, request pricing and storage for cache tags. As a rule, efficiency gains at the origin (fewer servers, less database load) offset part of the edge cost - this can be modelled properly in a consulting workshop per store. A typical rollout plan spans 4 to 8 weeks: weeks 1-2 baseline and cacheability audit, weeks 3-4 reverse cache in staging with load tests, weeks 5-6 ESI fragments and tag invalidation, weeks 7-8 multi-region rollout with traffic splitting. For complex catalogs (B2B with customer-group prices, multiple currencies, regional assortments) project duration extends accordingly.
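The cost model itself is simple enough to sketch. Every number below is a placeholder assumption - actual request and egress pricing varies by provider and contract:

```python
# Placeholder cost model: edge fees vs. savings at the origin (numbers assumed).
def net_monthly_edge_cost(req_millions: float, egress_gb: float,
                          origin_savings_eur: float,
                          eur_per_million_req: float = 0.50,
                          eur_per_gb: float = 0.08) -> float:
    edge_fees = req_millions * eur_per_million_req + egress_gb * eur_per_gb
    return edge_fees - origin_savings_eur  # negative = the edge pays for itself

# Example: 120M requests and 4 TB egress per month, 800 EUR/month saved on
# origin infrastructure after the 85-95 % drop in origin requests.
print(net_monthly_edge_cost(120, 4_000, 800))  # -420.0 EUR/month
```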
Rollout roadmap in 5 steps
1. Measure the baseline: TTFB, LCP, INP from 4 regions over 7 days; origin requests per minute; cache hit rate (if available).
2. Cacheability audit: Which URL clusters are personalisation-free? Which fragments must stay dynamic? Clarify the store topology.
3. Reverse cache in staging: connect Varnish or an alternative reverse proxy to Shopware, test tag-based invalidation, run load tests.
4. Define ESI fragments: cart, login state, personalised prices and recommendations as ESI; tune TTLs.
5. Rollout with traffic splitting: route 10 %, 50 %, then 100 % of a region via the edge, compare RUM metrics, then roll out to further regions. Programming and Shopware hosting services accompany all phases.
Edge caching as a conversion driver in 2026
Low-AOV stores benefit in particular from speed: below 60 EUR AOV the median conversion is 4.63 %, above 200 EUR only 0.95 % (dtcpages) - here every millisecond counts. A typical edge implementation cuts load time from 2.4 s to below 0.5 s, moving the conversion curve clearly above the 1.9 % measured at 2.4 s (ringly.io). Edge caching is no longer a nice-to-have in 2026 but a must - especially for stores with an international target audience and a mobile traffic share above 60 %. Combined with clean product data sovereignty (PIM), semantic product search and generative engine optimization it forms a resilient base for sustainable e-commerce growth. ROI typically materialises within 2 to 4 months: conversion uplift, lower origin costs, better Google rankings through Core Web Vitals improvements and lower bounce rates add up to a stable business case. One caveat remains: edge caching does not replace backend optimisation. A slowly rendering Shopware origin stays slow behind the edge as soon as cache misses occur - the clean interplay of origin performance, cache strategy and measurability makes the difference.
This article draws on data from: Cloudflare, web.dev, Akamai, Shopware Docs, Ringly.io, ALM Corp, Conductor, Walmart, Harper.fast, dtcpages, Web Almanac, Fastly ESI Guide, FatLab, digitalapplied. The figures cited may vary depending on time, industry and measurement method - benchmark studies typically measure best-case scenarios, the real implementation depends heavily on shop topology, traffic profile and tag strategy.
In summary: in 2026 edge caching for Shopware is a plannable, measurable and calculably justified investment. The technical building blocks - reverse HTTP cache, ESI fragments, cache tag invalidation, stale-while-revalidate - are mature and part of the Shopware open-source base. The challenge lies less in technology than in clean orchestration: which routes are cached? Which tags slice the catalog correctly? Which fragments remain dynamic? How high are TTL and SWR windows per URL cluster? Those who answer these questions up front and measure the setup per region achieve sub-200 ms TTFB globally while noticeably relieving the origin. The direct effect on conversion, bounce rate and Core Web Vitals ranking makes edge caching one of the economically strongest single measures in the Shopware performance playbook.
Frequent questions on edge caching for Shopware
From what store size does edge caching pay off?
In our experience a reverse cache typically pays off from 500 to 1,000 orders per month onwards when load times clearly exceed 500 ms TTFB. Global edge caching is usually relevant for international audiences or peak traffic above 100 requests per second - the individual assessment will vary depending on the store.

Does edge caching work with Shopware Community Edition?
Yes, Shopware Community Edition supports the reverse HTTP cache via the same configuration (`storefront.yaml`) as other editions (Shopware Dev Docs). The HTTP cache layer is part of the open-source base and uses Symfony HttpCache standards. At our Shopware agency we typically rely on CE plus custom plugins.

How is personalised content such as B2B prices handled?
Personalised content is typically served through ESI fragments with a dedicated cache key (for example `sw-context-hash` in Vary). The shell HTML stays in the shared cache, while B2B prices are cached only per customer group or even per customer. This avoids leaks of personalised data to other sessions.

What happens to cached pages when a product changes?
Shopware invalidates cache entries based on tags: a change to `product-1234` typically invalidates all edge entries that reference this tag - product detail, category listings, search, CMS blocks (Shopware HTTP cache docs). As a rule purge latency stays below 10 s, depending on message queue load and edge configuration.

What distinguishes edge caching from a classic CDN?
A classic CDN caches primarily static assets (images, CSS, JS). Edge caching for Shopware additionally caches dynamically rendered HTML responses including personalisation fragments and performs tag-based invalidation. The relief shifts from asset traffic towards actual store rendering - the main cost driver at the origin.

Which metrics show whether edge caching works?
Four metrics are typically decisive: TTFB median per region (target usually below 200 ms), cache hit rate (target usually above 85 %), origin requests per minute (should drop by 85 to 95 %) and LCP improvement in real-user measurement. A solid monitoring setup delivers these metrics segmented by region, device and URL cluster.