Appendix

Anti-Pattern Catalog — Reverse Lookup Before You're Stuck

About this article

This article is the first installment of the “Appendix” category in the Architecture Crash Course for the Generative-AI Era series, covering the anti-pattern catalog.

The “common misconceptions” and “common failures” from each article are reorganized here as a cross-domain reverse-lookup catalog. The architect’s job is more about avoiding fatal mistakes than picking the perfect option, so this article is designed as a “notice it before you’re stuck” early-warning device. When something feels off, look up the matching category here.

```mermaid
flowchart TB
    SYM([Feel the symptom?<br/>Reverse lookup below])
    A[Architecture overall<br/>Over-engineering / Big-bang rewrite]
    I[Infra<br/>Manual deploy / Single AZ]
    D[Data<br/>NoSQL-everywhere / Broken normalization]
    APP[App<br/>God class / Anemic model]
    F[Frontend<br/>SPA-everywhere / CSS-in-JS abuse]
    S[Security<br/>JWT in localStorage]
    O[Monitoring & ops<br/>Flying blind]
    P[Process<br/>Big PRs / Long-lived branches]
    AI[AI era<br/>Custom languages / Generation dump]
    SYM --> A
    SYM --> I
    SYM --> D
    SYM --> APP
    SYM --> F
    SYM --> S
    SYM --> O
    SYM --> P
    SYM --> AI
    classDef root fill:#fef3c7,stroke:#d97706;
    classDef bad fill:#fee2e2,stroke:#dc2626;
    class SYM root;
    class A,I,D,APP,F,S,O,P,AI bad;
```

Architecture-wide traps

Decision-making mistakes common across all domains. Less about specific tech selection, more about posture toward design. These show up in nearly every failed project.

| Anti-pattern | Symptom | Prescription |
| --- | --- | --- |
| Over-engineering (YAGNI violation) | Massive unused features, layers, abstractions | Build only what current requirements need |
| Jumping on the latest tech | Scarce information; easy to get stuck | Prefer options with 2-3 years of operational track record |
| Reinventing the wheel | Building what an existing product covers | Lean on standard libraries / SaaS |
| No documentation of decisions | Nobody remembers “why we picked this” | Write ADRs (Architecture Decision Records) |
| Conclusions without benchmarks | Decided by “feels faster” | Always measure first |
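“Always measure first” is cheap to practice. A minimal sketch using only the standard library’s `timeit` module — the two functions being compared are hypothetical stand-ins for whatever claim you are about to make without numbers:

```python
import timeit

# Hypothetical micro-benchmark: two ways to build the same list.
# Before declaring one "faster", measure both under identical conditions.

def with_loop(n: int) -> list[int]:
    out = []
    for i in range(n):
        out.append(i * i)
    return out

def with_comprehension(n: int) -> list[int]:
    return [i * i for i in range(n)]

# number=200 keeps the run short; raise it for more stable figures.
loop_time = timeit.timeit(lambda: with_loop(10_000), number=200)
comp_time = timeit.timeit(lambda: with_comprehension(10_000), number=200)
print(f"loop: {loop_time:.3f}s  comprehension: {comp_time:.3f}s")
```

The point is not which variant wins here, but that the conclusion comes from a number you can reproduce, not from “feels faster.”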

“Documented rationale” beats “technically correct” in long-term operations.

Infrastructure / deployment traps

Typical mis-selections around cloud, servers, and networking. The root cause is often “just copying a megacorp.” Misjudging your scale and phase is fatal.

| Anti-pattern | Symptom | Prescription |
| --- | --- | --- |
| Pointless multi-cloud | IAM, monitoring, IaC duplicated; ops workload doubled | Lean on a single cloud |
| Adopting K8s without need | Ops time eats dev time | ECS Fargate / Cloud Run is enough |
| Premature microservices | Bogged down in distributed transactions and network boundaries | Start with a monolith, split when actually needed |
| Manual setup without IaC | Environment drift, not reproducible | Manage all resources via Terraform / CDK |
| RDS not in a private subnet | DB exposed to the public network | Always use a private subnet |
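The “environment drift” symptom reduces to a diff between declared and actual state. A toy sketch of that idea in Python — the dicts are illustrative stand-ins for a real IaC definition and a real cloud API response, not any actual tool’s data model:

```python
# Hypothetical desired state (what the IaC code declares) vs. actual state
# (what a cloud API reports). Real tools like Terraform compute exactly
# this kind of diff during `plan`.
desired = {"instance_type": "t3.small", "subnet": "private-a", "public": False}
actual = {"instance_type": "t3.medium", "subnet": "private-a", "public": False}

# Drift = every key whose declared value no longer matches reality.
drift = {k: (desired[k], actual.get(k)) for k in desired if desired[k] != actual.get(k)}
print(drift)  # {'instance_type': ('t3.small', 't3.medium')}
```

With manual setup there is no `desired` to diff against, which is why drift goes unnoticed until something breaks.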

Doing “AWS Certified-grade” design at a startup melts away product-development time.

Data traps

Typical mis-selections in datastore selection and database operations. Unlike applications, data cannot be rebuilt, so failures here echo for five years.

| Anti-pattern | Symptom | Prescription |
| --- | --- | --- |
| Running analytical queries on the production DB | Customer-perceived speed degrades | Separate OLTP (operational) and OLAP (analytical) early; build a DWH (Data Warehouse) |
| Escaping into schemaless DBs | Stuck later for AI use and analytics | PostgreSQL + schema definitions |
| Master data fragmented per department | Duplicate customers; integration impossible | Build MDM (Master Data Management) |
| Overwrite-updates without history | No audit trail; ML modeling stalls | History tables or Event Sourcing |
| No data quality checks | Inconsistent data poisons analytics | Automate with dbt tests / Great Expectations |
| Backup restore never validated | Restore fails in an actual disaster | Quarterly restore drills |
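The history-table prescription is simple to demonstrate: append new rows instead of updating old ones. A minimal sketch using an in-memory SQLite database — table and column names are illustrative, not a recommended production schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE price_history (
        product_id INTEGER NOT NULL,
        price      INTEGER NOT NULL,
        valid_from TEXT    NOT NULL DEFAULT (datetime('now'))
    )
""")

def set_price(product_id: int, price: int) -> None:
    # Append a new row; never UPDATE the old one, so the audit trail
    # and the training data for future ML work both survive.
    conn.execute(
        "INSERT INTO price_history (product_id, price) VALUES (?, ?)",
        (product_id, price),
    )

def current_price(product_id: int) -> int:
    row = conn.execute(
        "SELECT price FROM price_history WHERE product_id = ? "
        "ORDER BY rowid DESC LIMIT 1",
        (product_id,),
    ).fetchone()
    return row[0]

set_price(1, 100)
set_price(1, 120)
print(current_price(1))  # latest price; the old value is still queryable
```

The overwrite-update version would answer “what is the price?” just as well — but could never answer “what was the price last quarter?”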

Failures in data architecture are about 5x heavier than failures in application architecture — because rebuilding is not an option.

Application traps

Typical errors in code design, modularization, and error handling. The code still runs in the short term, so these are easy to miss — and they slowly suffocate maintainability over months and years.

| Anti-pattern | Symptom | Prescription |
| --- | --- | --- |
| God classes / God functions | Thousands of lines per file; unclear responsibility | Single Responsibility Principle |
| Business logic in stored procedures | DB migration impossible; hard to test | Push logic into the application |
| Swallowing errors (empty catch {}) | Failures don’t surface and quietly worsen | Log and rethrow, or handle explicitly |
| Meaningless naming (data, util, manager) | Code intent unreadable | Name in domain terms |
| All-static methods | Untestable; no DI possible | DI container or constructor injection |
| Methods returning null | Missed null checks at call sites | Optional / Result types |
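A minimal sketch of the Result-type prescription in Python, assuming nothing beyond the standard library. The `parse_port` function and its error messages are illustrative — the point is that failure is part of the return type, so the caller cannot forget to handle it:

```python
from dataclasses import dataclass
from typing import Generic, TypeVar, Union

T = TypeVar("T")

@dataclass
class Ok(Generic[T]):
    value: T

@dataclass
class Err:
    message: str

Result = Union[Ok[T], Err]

def parse_port(raw: str) -> "Result[int]":
    # Instead of returning None (and hoping every call site checks),
    # make failure explicit in the return type.
    try:
        port = int(raw)
    except ValueError:
        return Err(f"not a number: {raw!r}")
    if not (0 < port < 65536):
        return Err(f"out of range: {port}")
    return Ok(port)

result = parse_port("8080")
if isinstance(result, Ok):
    print(f"listening on {result.value}")
else:
    print(f"config error: {result.message}")
```

Languages with built-in `Optional`/`Result` types (Rust, Kotlin, Swift) enforce this at compile time; in Python the type checker does the enforcing, but the design principle is the same.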

“Working but unmaintainable code” is the act of forcing technical debt onto everyone but yourself.

Frontend traps

Frontend-specific mis-selections and implementation errors. Many of these directly hit security and performance, affecting user trust.

| Anti-pattern | Symptom | Prescription |
| --- | --- | --- |
| Storing a JWT (JSON Web Token: a signed token carrying auth state) in localStorage | A single XSS leaks the token | HttpOnly cookie + BFF (Backend for Frontend) |
| CSR for an SEO-critical site | Pages don’t get indexed by search | Use SSR / SSG / ISR |
| Hand-rolled auth cookies | A breeding ground for vulnerabilities | Delegate to Clerk / Auth.js / Auth0 |
| In-house CSS design system | Hard to hire for; learning cost wasted | Tailwind + shadcn/ui |
| Serving images unoptimized | LCP destroyed; Core Web Vitals tank | CDN transforms + WebP / AVIF |
| Raw React without a meta-framework | Hand-rolled routing, SSR, and build | Use Next.js / Astro |

Frontend anti-patterns are directly visible to users, so the cost is high.

Security traps

The defining feature of typical security architecture errors is that they are not recoverable. Once data leaks, no after-the-fact mitigation puts it back.

| Anti-pattern | Symptom | Prescription |
| --- | --- | --- |
| Building auth in-house | Holes in password reset, MFA, etc. | Delegate to Auth0 / Cognito / Clerk |
| Optional MFA | A single password leak means instant compromise | Mandatory for all users |
| Password-only auth | One phishing hit and you’re done | Standardize on Passkey support |
| Logging PII in plaintext | Violates data-protection law | PII masking + audit |
| Committing secrets to Git | Leak -> huge bills and legal risk | Vault / Secret Manager + pre-commit hook |
| Still allowing TLS 1.0 / 1.1 | Fails security audits | TLS 1.3 mandatory, 1.2 minimum |
| Trusting “inside the VPN” (perimeter model) | Internal compromise propagates everywhere | Zero Trust: authenticate every request |
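The pre-commit-hook prescription boils down to pattern-scanning staged text before it reaches Git. A toy sketch — the patterns and the sample string are illustrative and deliberately incomplete; real projects should use a dedicated scanner such as gitleaks or detect-secrets:

```python
import re

# Hypothetical, non-exhaustive secret patterns. A real scanner ships
# hundreds of these plus entropy checks.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                 # AWS access-key-ID shape
    re.compile(r"-----BEGIN (RSA )?PRIVATE KEY-----"),
]

def find_secrets(text: str) -> list[str]:
    """Return every substring that matches a known secret pattern."""
    return [m.group(0) for p in SECRET_PATTERNS for m in p.finditer(text)]

# Simulated staged diff containing a fake key of the right shape.
staged = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\n'
hits = find_secrets(staged)
if hits:
    print(f"refusing commit, possible secrets: {hits}")
```

Wired into a pre-commit hook that exits non-zero on any hit, this catches the mistake before it becomes a leaked-key incident and a rotation scramble.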

Security costs 100x more to fix after an incident than to set up correctly at the start. Make the standard protections day-one equipment.

Monitoring / operations traps

Anti-patterns that surface in operations. Things “appear to be working” for a long time, then erupt all at once during an incident.

| Anti-pattern | Symptom | Prescription |
| --- | --- | --- |
| Alerts fire but nobody watches | Major incidents discovered late | PagerDuty + on-call rotation |
| Operating without SLOs | Quality cannot be discussed numerically | Declare targets such as 99.9% availability |
| Unstructured logs | Search and aggregation impossible | Standardize on JSON structured logs |
| No runbook | New hires panic during night incidents | Document procedures for common failures |
| Skipping postmortems | The same incident keeps recurring | Always record cause and countermeasure after each incident |
| Direct SSH fixes in production | Changes not reproducible, not auditable | Changes only via the CI/CD pipeline |
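Structured logging needs nothing exotic. A minimal sketch with the Python standard library — the field names are illustrative, and real setups often reach for structlog or python-json-logger instead:

```python
import json
import logging
import sys
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line instead of free-form text."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order created")  # emits a machine-parseable JSON line
```

The payoff comes at query time: `grep` and log aggregators can filter on `level` or `logger` as fields rather than guessing at text layouts.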

“Running” and “observable” are not the same thing. What’s not visualized may as well not exist.

Process / organization traps

Anti-patterns at the decision-making and org-running level — upstream of any tech choice. Trying to solve these with technology fails. They have to be treated as people problems.

| Anti-pattern | Symptom | Prescription |
| --- | --- | --- |
| No documentation of architectural decisions | Nobody can answer “why?” | Write ADRs |
| Architect operates in isolation | Idealistic decisions disconnected from reality | Decide together with implementers |
| Ignoring Conway’s Law | System boundaries don’t match org structure | Reverse-engineer the design from team structure |
| Vendor handoff with no understanding | Nobody internally understands the system | The buyer must understand the architecture |
| Big-bang rewrite project | Years and millions wasted on failure | Strangler Fig phased migration |
| “We won’t know until we try” drift | Cannot estimate, cannot evaluate risk | Enforce the PoC -> implementation order |

The most common failure mode, oddly, is getting the tech right but losing on process.

AI-era specific traps

Anti-patterns that have surfaced recently, around AI-driven development and AI-as-default operations. These need a viewpoint that classical design doesn’t have, and accumulate as debt if you don’t recognize them.

| Anti-pattern | Symptom | Prescription |
| --- | --- | --- |
| Adopting an in-house framework AI can’t write | AI productivity gains don’t apply | Lean on mainstream frameworks (Next.js, Django, etc.) |
| Schemaless JSON storage | AI cannot infer the data structure | Make types and schemas explicit |
| GUI-operation-dependent tools | Operations cannot be delegated to AI | Choose tools operable via CLI / API / IaC |
| No data catalog | RAG and AI agents cannot infer semantics | Maintain metadata and descriptions |
| Deploying AI-generated code without review | Vulnerabilities and bugs reach production | Don’t skip the regular review/test cycle |
| No vector DB support | RAG cannot be bolted on later | Design assuming pgvector / Pinecone |
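“Make types and schemas explicit” is concrete enough to sketch. The `Invoice` type and its fields below are hypothetical; the contrast is what matters — a reader, a type checker, or an AI agent can discover the explicit structure, while the schemaless dict tells them nothing:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Invoice:
    """Explicit schema: field names, types, and intent are discoverable."""
    invoice_id: str
    amount_cents: int
    issued_on: date

# Schemaless version: nothing says what keys exist, what types they hold,
# or whether "amt" is cents or dollars. Humans and AI alike must guess.
schemaless = {"id": "INV-1", "amt": 1200, "d": "2024-01-31"}

# Explicit version: structure is machine-readable from the type alone.
typed = Invoice(invoice_id="INV-1", amount_cents=1200, issued_on=date(2024, 1, 31))
print(typed.amount_cents)
```

The same reasoning applies a level down: a PostgreSQL table with typed columns is to a JSON blob what `Invoice` is to `schemaless`.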

The AI-era design principle places “can AI fluently write and read this?” at the center of selection.

Major incident damages by anti-pattern

Measured in dollars, the cost of anti-patterns reaches a scale that cannot be ignored. The legendary cases:

| Case | Year | Anti-pattern | Damage / impact |
| --- | --- | --- | --- |
| Knight Capital | 2012 | Deploy procedure error (one machine got old code) | $440M lost in 45 minutes; company dissolved |
| Adobe data leak | 2013 | ECB mode + shared password hints | 150M passwords effectively cracked |
| Dyn DNS attack | 2016 | IoT botnet (Mirai) | Twitter / GitHub / Netflix offline for hours; 1.2 Tbps DDoS |
| GitLab DB deletion | 2017 | rm -rf during on-call + all 5 backup methods broken | Restored from a 6-hour-old snapshot; 300GB lost |
| Equifax data leak | 2017 | Struts 2 patch left unapplied for 2 months | $700M settlement; data on 147M people leaked |
| Capital One | 2019 | IAM over-privilege + WAF misconfiguration | $80M fine; 100M records leaked |
| AWS Kinesis outage | 2020 | Streaming foundation as a SPOF | Long us-east-1 outage that took out the AWS Console |
| SolarWinds | 2020 | Compromise via a trusted vendor | 18,000 organizations compromised through legitimate update channels |
| Log4Shell | 2021 | No SBOM | Zero-day hit Java apps worldwide; many couldn’t even assess their exposure |
| Facebook 6-hour outage | 2021 | BGP misconfiguration + monitoring colocated with production | Ad-revenue loss over $60M |
| Meta GDPR fine | 2023 | EU citizen data transferred to the US | EUR 1.2B (~$1.3B) |
| CrowdStrike outage | 2024 | Faulty update pushed globally | Windows endpoints worldwide BSODed; airports, banks, hospitals halted |

A single anti-pattern triggering hundreds of millions to billions of dollars in damage is real, recurring history. “Boring operations are the strongest defense” and “design luxury earned only after the hypothesis pays off” are the historical refrains shaping the industry’s intuition.

Common root causes

These anti-patterns share common root causes. Surface symptoms differ but trace back to the same handful of starting points, so naming the root makes prevention easier.

| Root cause | Typical phrase | Countermeasure |
| --- | --- | --- |
| Misjudging scale and phase | “Google does microservices, so…” | Judge by your own scale |
| Blind faith in newness | “It’s the latest, so it must be better” | Weight information density and operational track record |
| “Build it ourselves to learn” | “It’ll be educational, so we’ll DIY” | Don’t learn in production |
| Short-term view | “It just needs to work right now” | Evaluate the 5-year debt |
| Abandoning documentation | “Read the code” | Leave the reasoning in ADRs |

90% of anti-patterns reduce to getting scale and phase wrong or skipping documentation.

Looking back at burning projects, the failure causes are remarkably similar. “We wanted to use new tech,” “We wanted it to look cool,” “We wanted to learn” — designs born from these three motives almost always break down somewhere. By contrast, the “boring but it works” designs frequently keep humming for five years. Half of the architect’s job is fighting your own desires, and the other half is finding the courage to accept being boring.

Author’s note — winners who picked “boring tech”

Stack Overflow is a Q&A site with over 100M monthly hits, and its 2016 published numbers showed it ran on just 9 servers at the time. The stack was extremely conservative — .NET + SQL Server + Redis. No microservices, no NoSQL. CTO David Fullerton has publicly stated “we deliberately choose boring technology”, and it remains a frequent live example of Etsy engineer Dan McKinley’s 2015 essay “Choose Boring Technology.”

By contrast, Uber split into 2,200+ microservices in the mid-2010s and then ran into incident isolation becoming impossible, deteriorating developer experience, and proliferating duplicate functionality. In the 2020s they announced “DOMA” (Domain-Oriented Microservice Architecture), a domain-based reconsolidation. It is frequently cited as “the post-mortem on going to maximum decomposition.” Those who chose boring tech are still humming five years later; those who chased trends are still busy with consolidation projects five years later. This contrast suggests many anti-patterns are the result of paying for personal desires with organizational assets.

Self-check checklist

Use this list to confirm whether your project has stepped into anti-pattern territory. Hitting even one is reason to revisit the relevant reference.

  • Switched to microservices, but the team is < 30 people.
  • Using K8s, but no dedicated SRE.
  • Multi-cloud setup with no clear rationale.
  • Running analytical queries on the production DB.
  • Hand-rolled auth and session management.
  • Storing JWT in localStorage.
  • MFA not enforced for all users.
  • Logs not structured.
  • No SLO defined.
  • No ADRs being written.

How to make the final call

The core of the anti-pattern catalog is the mindset of first eliminating fatal mistakes, then optimizing. The gap between merely-good options is usually small, but stepping on a landmine is not recoverable. Over-engineering, chasing the latest, DIY temptation, scale-mismatched architecture, abandoning documentation — these five make up 90% of all incidents. Stack Overflow’s 9-server operation and Basecamp’s monolith persistence demonstrate that the courage to stay boring produces systems that hum for five years.

The other decisive lens is the realization that the industry’s intuitions are formed by hundred-million-dollar failures. Knight Capital’s $440M in 45 minutes, Equifax’s $700M settlement, Meta’s EUR 1.2B GDPR fine — these are how the industry has paid to learn “how much an anti-pattern costs.” Leaning on standards looks unflashy, but it is the strongest defense.

Selection priority

  1. Defuse the fatal landmines first — security, data, and operations are unrecoverable.
  2. Match scale and phase — copying megacorps and copying startups are both dangerous.
  3. The courage to choose boring — prefer options with 5 years of operational track record.
  4. Document the rationale — leave why-we-chose in ADRs.

“Avoid the landmines, stay boringly consistent.” That is the iron rule for not stepping on anti-patterns.

Summary

This article covered the anti-pattern catalog end-to-end — cross-domain, 9 categories, major-incident damages, common root causes, and a self-check.

Defuse fatal landmines first, match scale, stay boringly consistent, document the rationale. That is the realistic answer to anti-pattern avoidance in 2026.

The next article covers the “best-practice catalog” — the mirror image: the iron-clad first-pick options when in doubt, organized by domain.

Back to series TOC -> ‘Architecture Crash Course for the Generative-AI Era’: How to Read This Book

I hope you’ll read the next article as well.