The median Lighthouse score across 25,000 measured e-commerce sites sits at 67/100 (Reboot Online). 70.5% (Reboot Online) of stores land in the 50-89 range — the needs improvement bucket. Only in individual verticals such as beauty does a notable share of stores — 27% (Reboot Online) — reach the green 90-100 zone. At the same time, industry data shows: a load-time improvement of 0.1 seconds can lift conversion by up to 8.4% and average order value by 9.2% (Google/Deloitte/WIRO) — while one extra second of load time typically costs around 7% (Google/Deloitte) of conversion. This article shows how Shopware stores close the gap from 67 to Lighthouse 100/100/100/100 across all four categories — and how a perfect-score focus differs from pure Core Web Vitals work.

[Infographic: Lighthouse 100/100/100/100 for Shopware stores — all four categories (Performance, Accessibility, Best Practices, SEO) at 100; performance thresholds LCP < 2.5s, INP < 200ms, CLS < 0.1; a11y audits including WCAG 2.2 AA, color contrast and ARIA roles; best practices including HTTPS + CSP; 14 SEO audits; e-commerce median 67/100 vs. the XICTRON 100/100/100/100 standard. Sources: Web Almanac 2024 (HTTP Archive), Reboot Online 2025, web.dev/Lighthouse documentation.]

Four categories: Performance, Accessibility, Best Practices, SEO

Lighthouse rates a page across four equally weighted categories — and only a perfect score in all four delivers what marketing calls a full Lighthouse score. Unlike pure page-speed work, no single lever is enough; discipline is required across every area.

Performance

FCP, Speed Index, LCP, TBT and CLS — a weighted average of five lab metrics. INP itself is a field-only metric with no Lighthouse lab equivalent; TBT serves as its lab proxy. E-commerce median: 67/100 (Reboot Online).

Accessibility

57 axe-core audits cover roughly 30-40% of WCAG criteria automatically (Deque/DebugBear). Median score: 84/100 (Web Almanac 2024).

Best Practices

Security and code-quality audits: HTTPS, valid CSP, no mixed content, no deprecated APIs, no console.error (web.dev/Niteco).

SEO

14 equally weighted audits — a single fail already drops the score to 92/100 (DebugBear). The median sits around 92.

Performance 100: LCP, INP, CLS green

Performance is the toughest category — it combines lab data with the three Core Web Vitals as a de-facto requirement. Google's Good thresholds are LCP < 2.5s, INP < 200ms, CLS < 0.1 for at least 75% (web.dev) of CrUX visits. Web Almanac data shows: only 43% (Web Almanac 2024) of mobile sites meet all three CWV thresholds. INP is improving — 55% pass-rate in 2022 vs. 74% in 2024 (Web Almanac/Mewa 2026) — but 43% (Web Almanac) of sites still miss the 200 ms mark. 47% (Mewa Studio 2026) of all sites pass all three Good thresholds in 2026; the remaining 53% (Mewa) typically lose between 8% and 35% in conversion or traffic.

Metric      | Threshold (Good) | Typical Shopware value | Lighthouse-100 target
LCP         | < 2.5s           | 3.5-5.2s (mobile)      | < 1.5s
INP         | < 200ms          | 220-380ms              | < 100ms
CLS         | < 0.1            | 0.12-0.28              | < 0.05
FCP         | < 1.8s           | 2.1-3.4s               | < 1.2s
TBT         | < 200ms          | 350-720ms              | < 100ms
Speed Index | < 3.4s           | 4.2-6.8s               | < 2.8s
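Whether real visits actually hit these Good thresholds can be verified per page view with Google's web-vitals library. A minimal sketch — the /rum collection endpoint is a placeholder, not part of any Shopware API:

```html
<script type="module">
  // Field measurement of the three Core Web Vitals with the
  // web-vitals library (import pattern from its README).
  import { onLCP, onINP, onCLS } from 'https://unpkg.com/web-vitals@4?module';

  function report(metric) {
    // metric.value is milliseconds for LCP/INP, unitless for CLS.
    // '/rum' is a placeholder endpoint for your own collector.
    navigator.sendBeacon('/rum', JSON.stringify({
      name: metric.name,
      value: metric.value,
      id: metric.id
    }));
  }

  onLCP(report);
  onINP(report); // reported on page hide, covers the whole visit
  onCLS(report);
</script>
```

Because INP and CLS are only final when the page is hidden, sendBeacon is the right transport — it survives the page unload.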

Brandcrock and Magespark report typical Shopware shops at desktop ~90, mobile 45-60 (Brandcrock/Magespark) — the mobile delta is the biggest lever. Moving from 67 to 100 takes not one but twelve to sixteen parallel measures; the most important among them:

  • Vite build with tree-shaking — Shopware 6.7 cuts JS/CSS by 25% (Shopware Performance Report/itdelight) and lifts the score from 76 to 78 through the build switch alone.
  • Preload the LCP image — <img loading="eager" fetchpriority="high"> for the hero image, plus a WebP/AVIF pipeline for 30-60% less byte volume.
  • Inline critical CSS, defer the rest — through the storefront theme build pipeline.
  • INP via long-task budget — no main-thread JS task longer than 50 ms; third-party scripts via requestIdleCallback.
  • CLS via reserved containers — aspect-ratio for every product image, banner and ad slot.
  • HTTP/3 + edge cache — see edge caching for Shopware for global delivery.
  • Self-hosted fonts with font-display: optional — no foreign-domain round trips.
  • Kill third-party scripts or proxy them server-side — see server-side tracking.
  • Service worker for repeat visits — secures consistent 100s, not just first-load scores.
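Two of the levers above — LCP image preload and pushing third-party scripts off the critical path — can be sketched in a few lines. URLs and file names are placeholders, not Shopware defaults:

```html
<!-- Preload the LCP hero image with high fetch priority;
     modern formats via <picture> with a JPEG fallback. -->
<link rel="preload" as="image" href="/media/hero.avif" fetchpriority="high">

<picture>
  <source srcset="/media/hero.avif" type="image/avif">
  <source srcset="/media/hero.webp" type="image/webp">
  <img src="/media/hero.jpg" alt="Seasonal sale hero banner"
       width="1200" height="600" loading="eager" fetchpriority="high">
</picture>

<script>
  // Load non-critical third-party scripts only when the main
  // thread is idle; fall back to a timeout where
  // requestIdleCallback is unavailable (e.g. Safari).
  const whenIdle = window.requestIdleCallback
    ? (cb) => window.requestIdleCallback(cb)
    : (cb) => setTimeout(cb, 2000);

  whenIdle(() => {
    const s = document.createElement('script');
    s.src = 'https://analytics.example.com/tag.js'; // placeholder origin
    document.head.appendChild(s);
  });
</script>
```

Explicit width/height on the image also reserves its box and so helps CLS at the same time.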
Pro tip: optimise mobile first

Lighthouse Mobile simulates a throttled mid-tier device (Moto G Power) at 1.6 Mbit/s with 150 ms RTT (web.dev). Get mobile to 100 and desktop tends to follow almost for free — the other way round rarely works. Amazon documents internally that every additional 100 ms of latency costs roughly 1% (Cloudflare) of sales.

Accessibility 100: automated audits cover only 30-40% of WCAG

Important: Lighthouse a11y covers only part of WCAG

The accessibility score is based on 57 axe-core audits and covers only about 30-40% (Deque/DebugBear) of WCAG success criteria automatically. A Lighthouse a11y 100 is therefore not proof of accessibility law conformance — it is the floor, not the ceiling. Manual testing against WCAG 2.2 and a complete accessibility audit remain mandatory.

  • Color contrast — 71% (Web Almanac) of sites fail here (the most common a11y issue). Tailwind grey defaults rarely suffice; check storefront theme variables.
  • Form labels and ARIA — every input needs a label, every custom button a role. Shopware storefront templates sometimes misuse aria-hidden.
  • Heading hierarchy — no H4 without H3, no duplicated H1. Audit listings, detail pages and checkout separately.
  • Language via lang attribute — for multi-lingual setups, set dynamically per storefront sales channel.
  • Focus management — visible outline on every interactive element; no outline: none without a replacement.
  • Images with alt text — product images often inherit the file name. Extend the Shopware media library with a mandatory alt-text field.
  • Bypass mechanism — a Skip to content link as the first focusable element.
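Several of these items — skip link, visible focus, programmatic labels — fit into one small pattern. A sketch; class names and the #content anchor are illustrative, not Shopware storefront conventions:

```html
<style>
  /* Skip link: visually hidden until it receives keyboard focus */
  .skip-link {
    position: absolute;
    left: -9999px;
  }
  .skip-link:focus {
    left: 1rem;
    top: 1rem;
    z-index: 1000;
  }
  /* Never outline: none without a replacement — give keyboard
     users a clearly visible focus indicator instead */
  :focus-visible {
    outline: 3px solid #005fcc;
    outline-offset: 2px;
  }
</style>

<a class="skip-link" href="#content">Skip to content</a>

<main id="content">
  <!-- Every input needs a programmatically associated label -->
  <label for="qty">Quantity</label>
  <input id="qty" name="qty" type="number" min="1" value="1">
</main>
```

The skip link must be the first focusable element in the DOM, before header and navigation, or Lighthouse's bypass audit will not credit it.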

Best Practices 100: security as a score factor

Best practices looks unspectacular but is often the fastest score gain — provided the hosting plays along. Audits cover security and code-quality criteria per web.dev and Niteco. Unlike performance, this category has hardly any grey zones: each audit is binary pass or fail.

Audit                         | What is checked                            | Status on XICTRON hosting
HTTPS everywhere              | Whole site on TLS, no mixed content        | pass
Content Security Policy       | Valid CSP against XSS                      | pass
No mixed content              | No HTTP resources on HTTPS pages           | pass
No deprecated APIs            | No `document.write`, no legacy `<applet>`  | pass
No console.error              | Clean browser console in lab run           | pass
Valid source maps             | Sourcemaps available or absent (no 404)    | pass
Image aspect ratio            | Element ratio = natural ratio              | pass
Geo / notification permission | No unwanted permission prompt on page load | pass

SEO 100: 14 audits without weak spots

Lighthouse SEO consists of 14 (DebugBear) equally weighted audits. A single fail already lands you at 92/100 — two fails at 86/100. The median is around 92 (DebugBear). Reaching 100 means turning all 14 green. Our SEO consulting treats those 14 items as a baseline and then layers content factors on top that Lighthouse cannot measure.

  • Document has a <title> — Shopware listing pages are often empty when the SEO plug-in is silent.
  • Meta description — individual per category and product page, never auto-generated from the description.
  • HTTP status 2xx — no soft 404s for empty filter listings.
  • <a href> with text — no image-only links without alt or aria-label.
  • Crawlable (robots.txt + meta robots) — no noindex on live paths.
  • Correct hreflang — DE/EN parity, every language channel self-references.
  • Canonical — unique per product, no duplicate canonicals through filter parameters.
  • Structured data — see the next section on JSON-LD.
  • Tap targets — buttons at least 48×48 px with 8 px spacing.
  • Legible font sizes — no font-size < 12px for body copy.
  • Viewport meta — width=device-width, initial-scale=1.
  • Plugin audit — no Flash or Java embeds.
  • Image aspect ratio — same as in best practices.
  • Charset declared — <meta charset="utf-8"> in <head>.
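Most of the markup-level audits in this list live in one place: the document head. A sketch of a product-page head that passes them — domain, slugs and copy are placeholders:

```html
<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">

  <title>Trail Running Shoe X1 | Example Store</title>
  <meta name="description" content="Lightweight trail running shoe with grippy outsole. Free shipping from 50 EUR.">

  <!-- One unique canonical per product, never filter-parameter URLs -->
  <link rel="canonical" href="https://shop.example/en/trail-shoe-x1">

  <!-- hreflang: every language version lists the others AND itself -->
  <link rel="alternate" hreflang="de" href="https://shop.example/de/trail-schuh-x1">
  <link rel="alternate" hreflang="en" href="https://shop.example/en/trail-shoe-x1">
  <link rel="alternate" hreflang="x-default" href="https://shop.example/en/trail-shoe-x1">
</head>
<body>
  <!-- page content -->
</body>
</html>
```

In Shopware the canonical and hreflang tags should be generated per sales channel rather than hard-coded; the sketch only shows the output Lighthouse expects.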

Shopware-specific pitfalls

  • Storefront CMS blocks with own JS — slider plug-ins often block the main thread for over 500 ms. Defer-load custom plug-ins or replace them through bespoke development.
  • Twig cache cold — the first Lighthouse run after a deployment measures a cold cache; CI should always warm up with two prior requests.
  • Admin watcher active in front-end — dev-mode leftovers delay hydration. Before any Lighthouse run, set APP_ENV=prod and run bin/console cache:clear.
  • Cookie-consent layer — many third-party layers ship 80-200 KB of JS and break the performance score. Evaluate an in-house or server-side variant.
  • Theme compilation without tree-shaking — older 6.5 themes produce 800 KB+ JS bundles. A PHP 8.5 migration step belongs hand in hand with a theme rebuild.
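The cold-cache and dev-mode pitfalls can be neutralised with a short pre-measurement routine in CI. A sketch — the shop URL is a placeholder; bin/console is Shopware's standard console entry point:

```shell
# Measure against a production build, never a dev environment
export APP_ENV=prod
bin/console cache:clear

# Warm the Twig/HTTP caches with two prior requests so the
# Lighthouse run does not measure a cold cache
curl -s -o /dev/null https://shop.example/
curl -s -o /dev/null https://shop.example/

# Then run the actual measurement (mobile is Lighthouse's default profile)
npx lighthouse https://shop.example/ --output=json --output-path=./lighthouse-report.json
```

Without the warm-up requests, the first post-deployment run regularly reports scores 10-20 points below steady state and produces noisy regression diffs.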

JSON-LD schema in 6.7.9.0

Lighthouse SEO checks whether structured data exists — not whether it is complete or rich-result eligible. The recent extension of Shopware schemas in 6.7.9.0 ships product, offer, organisation and breadcrumbList schemas out of the box. Details and extension patterns are covered in our article on JSON-LD schemas in Shopware 6.7.9. For a Lighthouse SEO 100 a single valid JSON-LD block is enough — for Google rich results, product reviews, AggregateRating and inventory status should follow, which becomes the next lever once inventory comes from a multi-channel sync.
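For illustration, a rich-result-eligible Product block of the kind described here might look as follows — values are placeholders, not Shopware's exact 6.7.9.0 output:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Trail Running Shoe X1",
  "image": "https://shop.example/media/trail-shoe-x1.webp",
  "offers": {
    "@type": "Offer",
    "price": "129.90",
    "priceCurrency": "EUR",
    "availability": "https://schema.org/InStock",
    "url": "https://shop.example/en/trail-shoe-x1"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.7",
    "reviewCount": "132"
  }
}
</script>
```

Lighthouse only checks that the block parses; eligibility for Google rich results additionally requires the offer, rating and availability fields shown here to be present and truthful.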

CSP and best practices

A valid Content Security Policy not only pushes the best-practices score to 100, it also reduces the attack surface for XSS and third-party scripts. The following CSP template works for a Shopware 6.7 storefront with its own tracking infrastructure — it blocks external JS except for explicitly allowlisted origins:

nginx.conf (excerpt)
add_header Content-Security-Policy "\
  default-src 'self'; \
  script-src 'self' 'nonce-$request_id' https://static.example.com; \
  style-src 'self' 'unsafe-inline'; \
  img-src 'self' data: https://cdn.example.com; \
  font-src 'self' data:; \
  connect-src 'self' https://analytics.example.com; \
  frame-ancestors 'none'; \
  base-uri 'self'; \
  form-action 'self'; \
  upgrade-insecure-requests;\
" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
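For the nonce-based script-src above to take effect, every script tag the page ships must carry the same per-response nonce value that went into the header — otherwise the browser blocks the script. A sketch of what the rendered page must contain (the nonce value is illustrative; paths are placeholders):

```html
<!-- The nonce in the CSP header and on every script tag must match
     for each individual response; it must never be reused. -->
<script nonce="7f3a9c2e51b04d7f" src="/bundles/storefront/main.js"></script>

<script nonce="7f3a9c2e51b04d7f">
  // Inline scripts without a matching nonce are blocked by the CSP
  window.dataLayer = window.dataLayer || [];
</script>
```

How the template layer obtains the value is deployment-specific; the essential invariant is header nonce = tag nonce, fresh per response.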

Mobile vs desktop: explaining the score gap

A typical Shopware shop is reported by Brandcrock and Magespark at desktop ~90 and mobile 45-60 (Brandcrock/Magespark). The spread is not a measurement glitch but a Lighthouse design decision: mobile assumes throttled CPU (4× slowdown), throttled network (1.6 Mbit/s) and a mid-tier device. To reach a perfect score, optimise consistently against the mobile profile — desktop 100 then drops out almost automatically. A 0.1 second improvement can mean +8.4% in conversion and +9.2% in AOV (Google/Deloitte/WIRO) — and the mobile bounce probability climbs +90% (Google think with Google) when LCP rises from 1 to 5 seconds. Going from mobile-50 to mobile-100 is therefore not cosmetic but directly revenue-relevant.

Measurement setup: CrUX, Lighthouse CI, RUM

A single Lighthouse run only shows one lab value — what matters to Google is CrUX (Chrome User Experience Report), based on real users. To hold 100/100/100/100 over time, three measurement layers are needed in parallel. The layers answer different questions and complement each other:

  • Lab tests via Lighthouse CI — automated per pull request, fail-on-budget thresholds for LCP, INP, CLS, bundle size.
  • Field data via CrUX — the only source Google actually uses for ranking; 75th percentile over 28 days.
  • RUM (real user monitoring) — captured per sales channel, ideally cookieless and server-side for privacy-friendly high-quality data.
  • Performance budget — written down: max 200 KB JS, max 80 KB CSS, max 600 KB images per initial page.
  • A/B Lighthouse — snapshot before each theme update, regression diff after.
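The lab-test layer and the written-down performance budget can be enforced in one Lighthouse CI configuration. A sketch of a lighthouserc.json — URLs are placeholders; the size limits translate the budget above (200 KB JS, 80 KB CSS, 600 KB images) into bytes:

```json
{
  "ci": {
    "collect": {
      "numberOfRuns": 3,
      "url": [
        "https://shop.example/",
        "https://shop.example/listing",
        "https://shop.example/detail/example-product",
        "https://shop.example/checkout/cart"
      ]
    },
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 1 }],
        "categories:accessibility": ["error", { "minScore": 1 }],
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }],
        "resource-summary:script:size": ["error", { "maxNumericValue": 204800 }],
        "resource-summary:stylesheet:size": ["error", { "maxNumericValue": 81920 }],
        "resource-summary:image:size": ["error", { "maxNumericValue": 614400 }]
      }
    }
  }
}
```

With minScore set to 1, any pull request that drops a category below 100 fails the pipeline — which is exactly the regression protection phase 7 of the roadmap relies on.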

Roadmap to all four full scores

  1. Phase 1 — audit baseline (week 1): Lighthouse CI on mobile for home, listing, detail, checkout. Capture current scores and define a performance budget.
  2. Phase 2 — harden the hosting stack (week 1-2): HTTP/3, Brotli, edge caching, CSP headers, Strict-Transport-Security. Delivers best-practices 100 immediately.
  3. Phase 3 — mobile-first performance levers (week 2-4): LCP image preload, WebP/AVIF, critical CSS, long-task budgets, kill third-party scripts.
  4. Phase 4 — a11y pass on the storefront theme (week 4-5): color contrast, form labels, heading hierarchy, focus outlines. Manual WCAG 2.2 review in parallel.
  5. Phase 5 — close the SEO 14 audits (week 5): title, description, canonical, hreflang, robots, structured data, tap targets, font sizes.
  6. Phase 6 — JSON-LD extension (week 5-6): product, offer, review and AggregateRating schemas beyond the JSON-LD extensions in 6.7.9 for rich-result eligibility.
  7. Phase 7 — RUM and regression protection (ongoing): Lighthouse CI pipeline, CrUX monitoring, monthly regression report. Without phase 7, 100/100/100/100 typically slips back within 2-3 releases.
Sources and studies

This article draws on data from: Web Almanac 2024 (HTTP Archive), Reboot Online 2025 (e-commerce Lighthouse study, 25,000 sites), Google/Deloitte/WIRO 2026 (mobile speed conversion impact), web.dev/Lighthouse documentation (thresholds, audit definitions), Sky SEO Digital 2026 (CWV conversion impact), Mewa Studio 2026 (CWV pass-rate analysis), Deque/axe-core (a11y audit coverage), DebugBear (Lighthouse score decomposition), Niteco (best-practices audit list), Shopware Performance Report / itdelight (6.7 build report), Brandcrock and Magespark (typical Shopware score range), Cloudflare (latency-sales correlation at Amazon), Google think with Google (mobile bounce probability). Figures may vary depending on time of measurement, sample and vertical.

FAQ

Is a full 100/100/100/100 realistic for a Shopware store?

In our experience yes — provided hosting, theme, third-party scripts and content are tackled together. For pure marketing sites, 100/100/100/100 is typically achievable with reasonable effort. Fully featured Shopware stores with personalisation and tracking need more work but are usually feasible — see our hosting solutions and Shopware performance work.

Does a Lighthouse accessibility score of 100 prove legal conformance?

No — Lighthouse typically covers about 30-40% of WCAG success criteria automatically (Deque/DebugBear). For accessibility law conformance you usually also need manual testing against WCAG 2.2 and a full accessibility audit.

Why is the mobile score so much lower than desktop?

Lighthouse mobile simulates a throttled mid-tier device on reduced bandwidth with 4× CPU slowdown. Performance issues weigh more heavily there. Mobile optimisation is typically the actual bottleneck — desktop usually follows automatically.

How often should scores be measured?

From experience, we recommend Lighthouse CI on every pull request plus weekly smoke tests against the live environment. CrUX data is rolling over 28 days, so a monthly review of field data alongside lab tests is worthwhile.

What is the revenue impact of better scores?

Industry data typically shows: 0.1 seconds of improvement can mean +8.4% in conversion and +9.2% in order value (Google/Deloitte/WIRO). Three Good CWV thresholds are typically associated with +15-30% in conversion (Sky SEO Digital). Actual impact depends on vertical, audience and starting point.

What role does hosting play?

A central one: HTTPS, valid CSP, HTTP/3, Brotli and edge caching show up directly in best practices and performance. Without dedicated Shopware hosting, holding 100/100/100/100 over time is generally hard.

Tags: #Lighthouse, #Performance, #Core Web Vitals, #Shopware