[Frontend Architecture] SEO Design - Integrated Operation of Rendering, Meta, and Structured Data

About this article

As the eighth installment of the “Frontend Architecture” category in the series “Architecture Crash Course for the Generative-AI Era,” this article explains SEO (Search Engine Optimization).

The rendering method, URL design, and metadata strategy are decided at the architecture-selection stage and are an area you cannot bolt on later. This article presents guidelines for building these in at initial design - covering the relationship between rendering and SEO, meta tags, OG images, structured data (JSON-LD), sitemaps, URL design, i18n, and Core Web Vitals.

What is SEO design in the first place

SEO design is, roughly speaking, “building the technical foundation so search engines correctly understand your site’s content and rank it higher in search results.”

Imagine a restaurant's signage. No matter how good the food, if the sign is unreadable, the place isn't on the map, and the entrance is unclear, customers won't come. SEO design is the work of preparing a website's "sign, map, and entrance" for search engines, and elements like the rendering method, meta tags, structured data, and URL design are decided at the architecture-selection stage and cannot be bolted on later.

Why SEO design is needed

Rendering method choice determines SEO

A CSR-only SPA may prevent search engines from correctly capturing content. The choice among SSR, SSG, and ISR directly impacts SEO, and changing it later is a major undertaking.

Technical SEO is decided at the architecture stage

URL design, canonical settings, structured data, OG tags: no matter how good the content, mistakes in the technical design lower search rankings. The technical foundation must be in place before content SEO can pay off.

Retrofitting SEO costs 10x more

Even if you notice “SEO is weak” after development is done, changing the rendering method means redesigning from scratch. Building it in from the start is overwhelmingly cheaper.

This article’s coverage

Frontend-related security has been split off into separate articles, so this article focuses purely on SEO. Frontend-specific vulnerabilities, CSP, and supply-chain topics that used to live alongside SEO have moved to their respective owner articles.

| Topic | Owning article |
|---|---|
| Rendering and SEO / meta tags / OG / structured data / sitemap / URL / i18n / Core Web Vitals | This article |
| Frontend-specific XSS / CSRF / CSP | 30/07 Auth |
| Network-layer defenses (HSTS, WAF) | 50/04 Network security |
| Dependency / supply-chain monitoring | 50/07 Vulnerability assessment |

This article's question is how to build a frontend architecture that wins on both search and user experience. Defensive topics live in separate articles.

SEO on the frontend

Google’s algorithm evolves year over year toward favoring sites with good user experience. Slow, mobile-unfriendly, accessibility-deficient sites won’t rank highly even with good content. SEO is a combined battle of content, performance, accessibility, and structured data - the foundation for advertising-free traffic acquisition.

SEO cannot be bolted on later. It’s decided at architecture-selection time.

SEO and rendering

The choice of rendering method directly affects SEO. Whether the content is readable in the delivered HTML determines how well crawlers understand it.

flowchart LR
    BOT([Google Bot])
    SSG[SSG<br/>static HTML]
    SSR[SSR<br/>server-generated HTML]
    DYN[Dynamic<br/>Rendering]
    CSR[CSR<br/>empty HTML+JS]
    BOT -->|read instantly| SSG
    BOT -->|read instantly| SSR
    BOT -->|SSR for bots only| DYN
    BOT -->|after JS exec, days-weeks delay| CSR
    classDef bot fill:#fef3c7,stroke:#d97706;
    classDef good fill:#dcfce7,stroke:#16a34a;
    classDef mid fill:#dbeafe,stroke:#2563eb;
    classDef bad fill:#fee2e2,stroke:#dc2626;
    class BOT bot;
    class SSG,SSR good;
    class DYN mid;
    class CSR bad;

| Method | SEO | Reason |
|---|---|---|
| SSR / SSG | Excellent | Search engines can read HTML directly |
| CSR | Marginal | JS execution required; interpreted but delayed |
| Dynamic Rendering | Good | Compromise that SSRs only for bots |

Even with CSR, the Google crawler reads the HTML after executing JS, so the content is technically recognized. But because that execution can lag by days to weeks, it is a major disadvantage for sites where freshness matters. The conclusion: for serious SEO, SSG / SSR is the rule.
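
A minimal sketch of what "readable in the HTML state" looks like in practice, assuming the Next.js App Router and hypothetical getAllPosts / getPost data loaders: enumerating slugs at build time pre-renders every post as complete HTML.

// app/blog/[slug]/page.tsx - SSG sketch: every post ships as finished HTML
import { notFound } from "next/navigation";
import { getAllPosts, getPost } from "@/lib/posts"; // hypothetical data loaders

// Enumerating all slugs at build time makes each post a static page,
// so crawlers receive complete HTML with no JS execution needed.
export async function generateStaticParams() {
  const posts = await getAllPosts();
  return posts.map((post) => ({ slug: post.slug }));
}

export default async function PostPage({ params }: { params: { slug: string } }) {
  const post = await getPost(params.slug);
  if (!post) notFound();
  return (
    <article>
      <h1>{post.title}</h1>
      <div>{post.body}</div>
    </article>
  );
}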

If SEO is required, choose SSG or SSR. Fighting with CSR starts you at a disadvantage.

Meta tag basics

Meta tags inside <head> are the first step in conveying page information to search engines. No matter how high-performance the framework, if these are sloppy, SEO doesn't even start.

<head>
  <title>Article title | Site name</title>
  <meta name="description" content="Page summary within 120 chars">
  <link rel="canonical" href="https://example.com/post/1">
  <meta name="robots" content="index, follow">
</head>

Design principles:

  • title and description must be unique per page (same across all pages is fatal)
  • Unify duplicate URLs with canonical (trailing-slash issues etc.)
  • Make index/noindex explicit with robots

Mechanisms for consistently managing meta tags as components are standard in frameworks - Next.js's metadata API, Astro's <SEO> component. There's no reason not to use them.
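
For example, with the Next.js metadata API the principles above become a per-page convention rather than hand-maintained tags. A minimal sketch, with getPost() as a hypothetical data loader:

// app/blog/[slug]/page.tsx - unique metadata generated per page
import type { Metadata } from "next";
import { getPost } from "@/lib/posts"; // hypothetical data loader

export async function generateMetadata(
  { params }: { params: { slug: string } },
): Promise<Metadata> {
  const post = await getPost(params.slug);
  return {
    title: `${post.title} | Site name`, // unique per page
    description: post.summary, // unique per page, kept within ~120 chars
    alternates: { canonical: `https://example.com/blog/${post.slug}` },
    robots: { index: true, follow: true },
  };
}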

Open Graph / Twitter Cards

Open Graph (OG, the standard for controlling SNS share previews) and Twitter Cards are meta tags that control the preview image, title, and description when a page is shared on SNS. On sites with high SNS traffic, OG quality can move CTR severalfold.

<meta property="og:title" content="Article title">
<meta property="og:description" content="Summary">
<meta property="og:image" content="https://example.com/og.png">
<meta property="og:type" content="article">
<meta name="twitter:card" content="summary_large_image">

The standard image size is 1200 x 630 px, which displays appropriately on almost every major SNS. Manually creating an image each time is unrealistic, so building in a mechanism that auto-generates OG images from titles (Next.js ImageResponse, Vercel OG Image, CDN transform APIs) makes operations dramatically easier.
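
As one concrete shape, in the Next.js App Router a route can export an opengraph-image file that builds the image from the post title via ImageResponse. A sketch, again with getPost() as a hypothetical loader:

// app/blog/[slug]/opengraph-image.tsx - 1200x630 OG image generated from the title
import { ImageResponse } from "next/og";
import { getPost } from "@/lib/posts"; // hypothetical data loader

export const size = { width: 1200, height: 630 };
export const contentType = "image/png";

export default async function OgImage({ params }: { params: { slug: string } }) {
  const post = await getPost(params.slug);
  return new ImageResponse(
    (
      // Flexbox styles render the title onto a branded background.
      <div
        style={{
          width: "100%",
          height: "100%",
          display: "flex",
          alignItems: "center",
          justifyContent: "center",
          padding: 80,
          background: "#0f172a",
          color: "#ffffff",
          fontSize: 64,
        }}
      >
        {post.title}
      </div>
    ),
    size,
  );
}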

OG images should be auto-generated. Manual production becomes a bottleneck and a missed SNS-traffic opportunity.

Structured data (JSON-LD)

Structured data is the meta information needed for Google to show rich snippets (star ratings, prices, FAQs, event info) in search results. Written in <script type="application/ld+json"> using schema.org vocabulary.

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Article title",
  "author": { "@type": "Person", "name": "Author" },
  "datePublished": "2026-04-18"
}
</script>

Representative types are Article / Product / FAQ / BreadcrumbList / Organization / Recipe. Adding them makes your result stand out visually in search listings, lifting CTR to a level comparable to ranking one position higher.
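
Like meta tags, structured data is best generated from page data rather than pasted by hand. A minimal sketch of a reusable component (the Article shape mirrors the snippet above; field names are illustrative):

// components/JsonLd.tsx - emit typed structured data as a JSON-LD script tag
type ArticleJsonLd = {
  "@context": "https://schema.org";
  "@type": "Article";
  headline: string;
  author: { "@type": "Person"; name: string };
  datePublished: string;
};

export function JsonLd({ data }: { data: ArticleJsonLd }) {
  // JSON-LD must be emitted as raw JSON text, hence the string injection.
  return (
    <script
      type="application/ld+json"
      dangerouslySetInnerHTML={{ __html: JSON.stringify(data) }}
    />
  );
}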

sitemap.xml and robots.txt

sitemap.xml and robots.txt are files telling search engines “which pages to read and which not to read.” Both are placed at the site root.

| File | Role |
|---|---|
| sitemap.xml | All-URL list; crawl hint to search engines |
| robots.txt | Crawl-permission instructions; exclusion targets |

# robots.txt
User-agent: *
Allow: /
Disallow: /admin/
Sitemap: https://example.com/sitemap.xml

Next.js and Astro have auto-generation plugins. A hand-written sitemap matches reality only on day one and quickly diverges, so plugin automation is required; manual operation is guaranteed to break down.
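
In the Next.js App Router, for instance, the sitemap is just code: the file convention below generates it from the same data source as the pages, so it cannot drift. getAllPosts() is a hypothetical loader; a corresponding app/robots.ts convention generates robots.txt the same way.

// app/sitemap.ts - generated from the same data as the pages, so it never drifts
import type { MetadataRoute } from "next";
import { getAllPosts } from "@/lib/posts"; // hypothetical data loader

export default async function sitemap(): Promise<MetadataRoute.Sitemap> {
  const posts = await getAllPosts();
  return [
    { url: "https://example.com/", lastModified: new Date() },
    ...posts.map((post) => ({
      url: `https://example.com/blog/${post.slug}`,
      lastModified: post.updatedAt, // hypothetical per-post timestamp
    })),
  ];
}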

For sitemap/robots, leave it to the framework’s auto-generation. Hand-writing is a landmine.

URL design

URLs matter for both SEO and UX. Human-readable, understandable URLs are the rule, and search engines also use the words contained in URLs as a ranking factor.

✅ /blog/how-to-use-astro
✅ /products/thinkpad-x1-carbon
❌ /blog.php?id=42
❌ /products?itemId=sku123&var=x

Design principles:

  • Include meaningful words (content can be inferred from URL)
  • Lowercase, hyphen-separated (hyphens over underscores)
  • Encoded Japanese URLs are tolerated, but alphanumeric is preferred
  • Hierarchy up to 2-3 levels (too deep is unfavorable for both SEO and UX)

URLs are hard to change once published, so it is worth taking time over the initial design. When a change is unavoidable, guide from old URLs with 301 redirects, as sketched below.
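
Frameworks let you declare such redirects centrally. A sketch for next.config.js (the paths are illustrative; Next.js emits a 308 for permanent redirects, which search engines treat like a 301):

// next.config.js - permanent redirects from retired URLs
module.exports = {
  async redirects() {
    return [
      {
        source: "/old-blog/:slug", // retired URL pattern (illustrative)
        destination: "/blog/:slug", // new canonical location
        permanent: true, // permanent redirect; passes SEO weight like a 301
      },
    ];
  },
};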

Internationalization (i18n) and SEO

For multilingual sites, the per-language URL structure and the hreflang design decide SEO success. Get them wrong and Google treats the pages as duplicate content, and rankings fall.

| Method | Example | Characteristics |
|---|---|---|
| Subdirectory | /en/about, /ja/about | Easy to manage on one domain |
| Subdomain | en.example.com / ja.example.com | Independent management per language |
| Separate domain | example.com / example.jp | Separated by country/brand |

hreflang makes the correspondence between language versions explicit. Without it, the pages get misjudged as "the same content on multiple pages."

<link rel="alternate" hreflang="en" href="https://example.com/en">
<link rel="alternate" hreflang="ja" href="https://example.com/ja">

This project (senkohome.com) adopts the subdomain method (senkohome.com / en.senkohome.com).
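
With the Next.js metadata API, the hreflang pairs can be declared once per page via alternates.languages, which renders exactly the link tags shown above. A minimal sketch with illustrative URLs:

// app/about/page.tsx (excerpt) - emits <link rel="alternate" hreflang="..."> tags
import type { Metadata } from "next";

export const metadata: Metadata = {
  alternates: {
    canonical: "https://example.com/en/about",
    languages: {
      en: "https://example.com/en/about",
      ja: "https://example.com/ja/about",
    },
  },
};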

Core Web Vitals

Core Web Vitals are Google's defined trio of user-experience metrics, with direct impact on search rankings. Operations must include periodic checks with measurement tools (PageSpeed Insights / Lighthouse / Search Console).

| Metric | Meaning | Target |
|---|---|---|
| LCP (Largest Contentful Paint) | Time until the largest content element is rendered | < 2.5 sec |
| INP (Interaction to Next Paint) | Interaction responsiveness | < 200 ms |
| CLS (Cumulative Layout Shift) | Layout shift | < 0.1 |

Improvements: image optimization (WebP/AVIF), font optimization (preload/subset), JS bundle reduction (code splitting, drop unneeded deps), specifying image size attributes to prevent CLS. Not a one-shot fix - continuous monitoring is the operational premise.
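
Two of those fixes in concrete form, sketched with next/image (the asset path is illustrative): explicit dimensions prevent CLS, and priority preloads the likely LCP element.

// components/Hero.tsx - CLS/LCP-conscious image handling
import Image from "next/image";

export function Hero() {
  return (
    // width/height let the browser reserve layout space before the image
    // loads (no CLS); priority preloads the likely LCP element.
    <Image
      src="/hero.avif" // modern format (AVIF/WebP) for a smaller payload
      alt="Product hero shot" // meaningful alt: accessibility + SEO
      width={1200}
      height={630}
      priority
    />
  );
}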

Real-world measurement data is available in Google Search Console. Build periodic checks into operations.

Author’s note - the case of “every page had the same title” sinking the site

There’s a story about a small media site where 200+ articles all had <title> set to just the site name. Checking Search Console showed only a dozen articles indexed. With no per-article identifier, Google was treating them as duplicate pages.

After giving each article a unique title/description and tidying up the canonicals, search traffic grew more than 10x within a few months. Put differently, until then the site had been losing over 90% of its traffic opportunity by missing the most basic of basics.

I made similar mistakes on my personal blog, and from the month after I started setting unique per-page metadata via Astro's <SEO> component, click-through rates clearly changed. SEO is an area evaluated where you can't see it, so any corner-cutting silently piles up losses. Tracking the numbers makes vague debate disappear and gets the improvement cycle turning.

For SEO, don’t fight by “feel.” Measurement and unique metadata are 90%.

SEO numerical gates

Note: industry baseline values as of April 2026. They will become outdated as technology and search-engine guidelines shift, so periodic updates are required.

SEO is measurable. Following numbers makes vague debates disappear. Below are industry-standard targets.

| Metric | Recommended | Verification tool |
|---|---|---|
| Lighthouse Performance | 90 or more | Lighthouse / PageSpeed Insights |
| Lighthouse SEO | 90 or more | Lighthouse |
| Lighthouse Accessibility | 90 or more | Lighthouse / axe-core |
| LCP | < 2.5 sec | PageSpeed Insights |
| INP | < 200 ms | Search Console |
| CLS | < 0.1 | PageSpeed Insights |
| Image alt attributes | 100% set | axe-core |
| Structured data errors | 0 | Google Rich Results Test |
| title/description uniqueness rate | 100% | Search Console coverage |
| canonical tag setting rate | 100% | Search Console |

Pages that still score below 90 on Lighthouse SEO are evidence of basic omissions. Ensure every page has a unique title/description/canonical, and make weekly Search Console check-ins part of operations.
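
One way to keep these gates from silently regressing is to assert them in CI with Lighthouse CI (@lhci/cli). A sketch of lighthouserc.js with thresholds mirroring the table:

// lighthouserc.js - fail the build when a gate slips below 90
module.exports = {
  ci: {
    collect: {
      url: ["https://example.com/"], // representative pages to audit
      numberOfRuns: 3, // median across runs smooths variance
    },
    assert: {
      assertions: {
        "categories:performance": ["error", { minScore: 0.9 }],
        "categories:seo": ["error", { minScore: 0.9 }],
        "categories:accessibility": ["error", { minScore: 0.9 }],
      },
    },
  },
};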

Measure weekly with Lighthouse / Search Console. Numbers-driven operation is the modern norm.

SEO pitfalls and forbidden moves

Here are the typical accidents in SEO. All of them are causes of drastic search-traffic drops.

| Forbidden move | Why it's bad |
|---|---|
| Same title/description across all pages | Google judges them "duplicate"; indexing drops sharply. Per-article uniqueness required |
| No canonical tag | Trailing-slash variants treated as separate pages; SEO weight diluted |
| Building SEO-required sites with CSR only | Google crawler's JS execution delayed by days to weeks |
| Hand-writing sitemap.xml | Diverges from reality. Framework auto-generation required |
| Manually creating OG images every time | Becomes a bottleneck. Move to auto-generation (Vercel OG Image etc.) |
| No structured data | Missed rich-snippet opportunities. Article / FAQ / BreadcrumbList is the minimum |
| Publishing with empty alt attributes | Accessibility violation plus SEO penalty. All images need meaningful alt text |
| Making URLs variable via query params (?id=42) | Not human-readable; weakens SEO. Design path-based URLs |
| No hreflang on i18n sites | Treated as duplicate content. Make per-language correspondence explicit |
| Changing URLs without 301 redirects | SEO weight resets. Always guide from old URLs with 301 on changes |
| Depending only on AI-mass-produced content | Google's policy does not reward AI mass production. Without first-hand info and E-E-A-T, it backfires |
| No Core Web Vitals measurement in operations | Degradation goes unnoticed; rankings gradually slip. Weekly measurement is the baseline |

“Even good content doesn’t reach search if settings are sloppy.” Conversely, sites strong in E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) gain relative value precisely in an era awash with AI-mass-produced content.

SEO sinks 90% from missing basics. Unique metadata and measurement matter more than techniques.

AI decision axes

| Favored in the AI era | Disfavored in the AI era |
|---|---|
| Next.js metadata API / Astro SEO (conventionalized) | Hand-written meta tags scattered |
| Auto-generated structured data (JSON-LD) | No structured data |
| Content based on first-hand info and original experience | Articles produced only by AI |
| Real-data loop with Search Console | SEO judgment by gut and feel |

  1. SSG / SSR as the base (CSR is at a disadvantage for serious SEO)
  2. Metadata and structured data go through framework standards (Next.js metadata API / Astro SEO)
  3. Measure Core Web Vitals weekly (Lighthouse / Search Console)
  4. Humans guarantee first-hand info and E-E-A-T (don’t depend on AI mass production)

What to decide - what is your project’s answer?

For each of the following, try to articulate your project's answer in 1-2 sentences. Starting work while these are still vague always invites later questions like "why did we decide this again?"

  • Rendering method (from an SEO standpoint)
  • Metadata standard (title/description/OG)
  • sitemap/robots generation method (auto-plugin)
  • Scope of structured-data adoption
  • URL design convention
  • i18n method (subdirectory / subdomain / separate domain)
  • Core Web Vitals measurement loop (weekly Lighthouse)

For frontend security settings (CSP, dependency monitoring, etc.), refer to the previous “Auth” article and the security chapter.

Summary

This article covered SEO, including rendering, meta tags, OG image auto-generation, structured data, sitemaps, URL design, i18n, and Core Web Vitals.

SEO is mostly decided at initial design. Base on SSG/SSR, automate the mechanical parts via framework standards, and have humans guarantee originality. That is the practical answer for frontend SEO design in 2026.

Next time we’ll start a new category (Data Architecture).

Back to series TOC -> ‘Architecture Crash Course for the Generative-AI Era’: How to Read This Book

I hope you’ll read the next article as well.