Case Studies

Large-Enterprise Core - Design That Holds Up Organizationally over New Tech

About this article

As the fourth and final installment of the “Case Studies” category in the series “Architecture Crash Course for the Generative-AI Era,” this article covers the large-enterprise core case.

For ERP, HR, accounting, and sales management at companies with 1,000+ employees, or in regulated areas such as finance, healthcare, and the public sector, stability, auditability, and governance take priority over speed. Before any technology is selected, “who approves what” is itself a design target. This article covers auditability, ease of change, long-term support, legacy integration, vendor selection, and SoR/SoE separation.

Basic selection policy

Selection in this area centers on three points: auditability, ease of change, and long-term support, with a strong tendency to choose mature technology over the latest. Vendor maintenance periods, SLAs, compatibility with existing assets, and the breadth of the talent market are weighted more heavily than technical excellence.

| Prioritize | Postpone |
| --- | --- |
| Audit logs, trails, approval flows | Time-to-market |
| Long-term support (LTS 5+ years) | Chasing the latest versions |
| ACID transactions | Complex eventual-consistency handling |
| Integration with existing systems | Greenfield design |
| Vendor maintenance / SLA contracts | Operating on OSS alone |

Representative profiles are ERP, accounting systems, HR/payroll, sales management, financial core systems, and medical-information systems. “Choosing it because it's new” is the most dangerous selection rationale in large-enterprise core.

Large-enterprise core is premised on package adoption (SAP, Oracle, Salesforce, etc.); the modern standard approach is to limit custom development to the operations the off-the-shelf product cannot absorb. Full-scratch development is almost never chosen anymore.

```mermaid
flowchart TB
    USER([Employee / customer])
    AD[Azure AD / Okta<br/>SAML SSO]
    subgraph CORE["Core packages (80%)"]
        SAP[SAP S/4HANA<br/>accounting/production]
        SF[Salesforce<br/>customers]
        ORACLE[Oracle EBS<br/>HR]
    end
    subgraph CUSTOM["Custom (20-30%)"]
        APP[Java/C#/TypeScript<br/>differentiation only]
    end
    subgraph DATA["Data layer"]
        ORADB[(Oracle / SQL Server<br/>ACID guaranteed)]
    end
    USER --> AD
    AD --> CORE
    AD --> CUSTOM
    CORE --> ORADB
    CUSTOM --> ORADB
    GOV[Governance<br/>TOGAF / ArchiMate / ADR]
    GOV -.- CORE
    GOV -.- CUSTOM
    classDef user fill:#fef3c7,stroke:#d97706;
    classDef auth fill:#dbeafe,stroke:#2563eb;
    classDef pkg fill:#fae8ff,stroke:#a21caf;
    classDef custom fill:#dcfce7,stroke:#16a34a;
    classDef data fill:#f0f9ff,stroke:#0369a1;
    classDef gov fill:#fee2e2,stroke:#dc2626;
    class USER user;
    class AD auth;
    class CORE,SAP,SF,ORACLE pkg;
    class CUSTOM,APP custom;
    class DATA,ORADB data;
    class GOV gov;
```

| Area | Recommended | Reason |
| --- | --- | --- |
| Core packages | SAP S/4HANA, Oracle EBS, Salesforce, etc. | Business knowledge baked into the product |
| Custom-part languages | Java / C# / TypeScript | Long LTS, easy talent acquisition |
| DB | Oracle / SQL Server / Postgres | ACID, vendor maintenance |
| Infrastructure | AWS / Azure (EU region in the EU; hybrid possible) | Compliance, data sovereignty |
| Auth | Azure AD (Entra ID) / Okta + SAML | Integration with the existing internal AD |
| Monitoring | Datadog / Dynatrace / Splunk | Enterprise SLAs |
| Governance | TOGAF / ArchiMate / ADR | Decision documentation |

Custom development should cover only the differentiating elements the packages cannot absorb, ideally kept to 20-30% of the total.

System / deploy choices

Even now that public-cloud migration is well advanced, hybrid cloud is in many cases the realistic answer for large-enterprise core. Existing on-premises assets, data-sovereignty requirements (e.g., personal information that cannot leave the country), and contracts with the incumbent ERP vendor are practical barriers that prevent a full cloud migration.

The standard for phased migration is to build new components on public cloud (AWS / Azure) with containers and IaC, and connect them to existing systems via API and event integration. Big-bang migrations that replace everything at once have high failure rates; proceed instead with the Strangler Fig pattern (wrapping the existing system with the new implementation and replacing it piece by piece), as sketched below.
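
As a concrete illustration, here is a minimal Java (Spring) sketch of a Strangler Fig facade: each request is routed either to the legacy system or to the new implementation, per business capability. `OrderFacadeController`, `MigrationFlags`, and the legacy URL are hypothetical names for illustration, not a prescribed design.

```java
import org.springframework.web.bind.annotation.*;
import org.springframework.web.client.RestTemplate;

// Facade in the new system: callers see one API while capabilities are
// cut over from the legacy ERP one at a time via a migration flag.
@RestController
@RequestMapping("/orders")
public class OrderFacadeController {

    private final RestTemplate legacyClient = new RestTemplate();
    private final NewOrderService newOrderService;
    private final MigrationFlags flags; // e.g., backed by config or a flag service

    public OrderFacadeController(NewOrderService newOrderService, MigrationFlags flags) {
        this.newOrderService = newOrderService;
        this.flags = flags;
    }

    @GetMapping("/{id}")
    public OrderDto getOrder(@PathVariable String id) {
        if (flags.isMigrated("orders.read")) {
            return newOrderService.findOrder(id); // new implementation
        }
        // Fall back to the legacy system until this capability is cut over.
        return legacyClient.getForObject(
                "https://legacy-erp.internal/orders/" + id, OrderDto.class);
    }
}

// Minimal supporting types so the sketch is self-contained.
record OrderDto(String id, String status) {}
interface NewOrderService { OrderDto findOrder(String id); }
interface MigrationFlags { boolean isMigrated(String capability); }
```

Once every capability reads and writes through the new implementation, the legacy branch (and eventually the legacy system) can be retired without a big-bang cutover.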

| Choose | Avoid |
| --- | --- |
| Hybrid composition (existing on-prem + new cloud) | Big-bang full-cloud migration |
| Azure (if already MS-integrated) / AWS | Running three clouds: AWS + Azure + GCP |
| Codifying infrastructure with Terraform / Bicep | Staying GUI-/manually-built |
| Phased migration via Strangler Fig | Full-reform projects |

In large-enterprise core, tens-of-billions-of-yen full-reform megaprojects have extremely high failure rates; phased migration is the standard.

Software / data choices

For the custom parts, Java and C# remain mainstream, thanks to the talent pool, LTS periods, and the depth of the enterprise frameworks (Spring / .NET). TypeScript + Node.js is also an option, but Java tends to stay dominant in business-logic-heavy areas.

For the database, any of Oracle / SQL Server / PostgreSQL will do; ACID consistency is an absolute requirement. Accounting, inventory, and finance cannot tolerate eventual consistency and need strongly consistent transactions (see the sketch below). Limit NoSQL, event sourcing, and CQRS to areas where they are truly needed; they do not belong in the core of the core.
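
To make the consistency requirement concrete, here is a minimal sketch of a double-entry posting using Spring's declarative transactions against a relational DB. Table and column names are hypothetical; the point is that both legs commit or roll back atomically, which eventual consistency cannot guarantee.

```java
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

import java.math.BigDecimal;

@Service
public class JournalPostingService {

    private final JdbcTemplate jdbc;

    public JournalPostingService(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    // Debit, credit, and the journal record move as one atomic unit:
    // any failure anywhere rolls back all three statements.
    @Transactional(isolation = Isolation.SERIALIZABLE)
    public void post(long debitAccount, long creditAccount, BigDecimal amount) {
        jdbc.update("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                amount, debitAccount);
        jdbc.update("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                amount, creditAccount);
        jdbc.update("INSERT INTO journal (debit_id, credit_id, amount) VALUES (?, ?, ?)",
                debitAccount, creditAccount, amount);
    }
}
```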

| Choose | Avoid |
| --- | --- |
| Java (Spring Boot) / C# (.NET) | Minor languages, homegrown frameworks |
| Oracle / SQL Server / Postgres | Running core systems on NoSQL alone |
| Modular monolith | Premature microservices |
| Package + custom integration | Full-scratch development |

Choosing NoSQL for accounting or inventory consistency requirements is the wrong call in nearly all cases.

Frontend / auth choices

For internal business-app screens, any of React / Vue / Angular is fine; choose whichever matches the organization's existing skill set. Angular remains persistently popular in large enterprises, suited to large applications thanks to TypeScript by default, DI, and integrated routing. When choosing React, an internally accessible SPA is often enough (rather than Next.js), with authentication integrated into the internal IdP.

The large-enterprise standard for auth is Azure AD (Entra ID) / Okta with SAML, integrated with the internal AD so users sign in with their employee ID. Mandatory multi-factor authentication (MFA) for all employees is now the compliance baseline, and passkey support is increasingly being rolled out as well. A wiring sketch follows.
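
A hedged sketch of the above using Spring Security's SAML 2.0 support (`spring-security-saml2-service-provider`). The registration id and the Azure AD metadata URL are placeholders; real values come from your tenant and IdP configuration.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.saml2.provider.service.registration.InMemoryRelyingPartyRegistrationRepository;
import org.springframework.security.saml2.provider.service.registration.RelyingPartyRegistration;
import org.springframework.security.saml2.provider.service.registration.RelyingPartyRegistrationRepository;
import org.springframework.security.saml2.provider.service.registration.RelyingPartyRegistrations;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class SamlSsoConfig {

    @Bean
    RelyingPartyRegistrationRepository relyingPartyRegistrations() {
        // Pull the IdP metadata (certificates, SSO endpoint) published by Azure AD / Okta.
        RelyingPartyRegistration registration = RelyingPartyRegistrations
                .fromMetadataLocation("https://login.microsoftonline.com/{tenant-id}/federationmetadata/2007-06/federationmetadata.xml")
                .registrationId("azure-ad")
                .build();
        return new InMemoryRelyingPartyRegistrationRepository(registration);
    }

    @Bean
    SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        // Every request must be authenticated; login itself is delegated to the internal IdP.
        http.authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
            .saml2Login(Customizer.withDefaults());
        return http.build();
    }
}
```

No password table, no homegrown login form: the application holds no credentials, which is exactly why IdP integration beats a standalone auth DB in the table below.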

| Choose | Avoid |
| --- | --- |
| Angular / React (depending on internal skills) | Adopting the newest frameworks (Svelte, Qwik, etc.) |
| Internal IdP integration (Azure AD / Okta + SAML) | A standalone auth DB |
| MFA required (with passkey support) | Passwords only |
| Complying with existing UI guidelines | Building a custom design system |

Building custom auth into internal systems is a hotbed of security incidents. Always lean on the internal IdP.

Data / governance choices

Core-system data is treated as the authoritative record (SoR, System of Record) and serves as the source for company-wide analytics, BI, and AI utilization. Master Data Management (MDM), data catalogs (Collibra / DataHub), and data lineage are required equipment at this scale. PII masking, compliance with GDPR and Japan's Act on the Protection of Personal Information, and audit trails are built in from the design stage; a small sketch follows.
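
As one illustration of building this in at design time, here is a minimal Java sketch of an append-only audit event plus simple PII masking applied before records leave the system of record. The field names and the masking rule are assumptions for illustration only.

```java
import java.time.Instant;

public final class PiiSafeAudit {

    /** Append-only audit record: who did what to which entity, and when. */
    public record AuditEvent(Instant at, String actorId, String action, String entityId) {}

    /** Mask an email so analytics/BI sees structure but not identity, e.g. t***@example.com. */
    public static String maskEmail(String email) {
        int at = email.indexOf('@');
        if (at <= 1) return "***"; // too short to keep anything meaningful
        return email.charAt(0) + "***" + email.substring(at);
    }

    public static void main(String[] args) {
        System.out.println(maskEmail("taro.yamada@example.com")); // t***@example.com
        System.out.println(new AuditEvent(Instant.now(), "emp-1024", "READ", "customer-42"));
    }
}
```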

Aggregate analytics in a DWH (Snowflake / BigQuery / Redshift / Azure Synapse), with fine-grained control via row-level security, column-level encryption, and access logs. Standardizing the business-to-analytics ETL/ELT on dbt and automating data-quality tests is the modern standard.

| Choose | Avoid |
| --- | --- |
| DWH aggregation + ELT (dbt) | Per-department ad-hoc extracts |
| Data catalog (Collibra / DataHub) | Excel-managed masters |
| PII masking + audit logs | Unmanaged personal information |
| Customer / product masters integrated via MDM | Separate masters per department |

A customer master scattered across departments is the typical large-enterprise antipattern. Solve it with MDM.

Security / monitoring choices

Large-enterprise core security is premised on zero trust plus audit and compliance readiness. The old perimeter model of network inside vs. outside has collapsed; a design that authenticates and authorizes every request is required (illustrated below). Audit requirements such as SOC 2 (the US standard for service-organization security), ISO 27001, and PCI DSS (the card industry's data-protection standard) determine log-retention periods, access controls, and data-encryption levels.
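
A minimal sketch of the per-request stance: a servlet filter that trusts no network location and rejects any request lacking an authenticated identity with a completed MFA step. The request attributes are hypothetical, assumed to be populated by an upstream token-validation layer.

```java
import jakarta.servlet.*;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import java.io.IOException;

public class ZeroTrustFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // Hypothetical claims set by an upstream token-validation layer.
        String subject = (String) request.getAttribute("auth.subject");
        Boolean mfaDone = (Boolean) request.getAttribute("auth.mfa");

        if (subject == null || !Boolean.TRUE.equals(mfaDone)) {
            // Being inside the VPN is not a trust boundary: reject the request.
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED,
                    "authentication and MFA required");
            return;
        }
        chain.doFilter(req, res);
    }
}
```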

For monitoring, enterprise products such as Datadog / Dynatrace / Splunk are mainstream, on the premise of SLA-backed support contracts and 24/365 vendor response. Also run annual drills covering the incident-response process, RTO/RPO, and BCP.

| Choose | Avoid |
| --- | --- |
| Zero trust + company-wide mandatory MFA | The old model of trusting everything inside the VPN |
| Datadog / Dynatrace / Splunk | OSS only (no SLA contract) |
| SOC 2 / ISO 27001 audit readiness | Postponing compliance requirements |
| Annual BCP drills | Merely having plan documents |

For large-enterprise core, audit readiness is a design premise; bolting it on later distorts the design.

Large-enterprise core numerical gates

Note: industry baseline values as of April 2026. They will go stale as technology and the talent market shift, so revisit them periodically.

Large-enterprise core is an area run on numbers of a different order of magnitude. The industry baselines are below.

| Metric | Recommended | Reason |
| --- | --- | --- |
| Availability SLO | 99.99% (4.3 min/month) | Finance/payments require 99.999%+ |
| Project duration | 2-5 years | Big-bang reforms fail at high rates |
| Project budget | Tens of billions of yen | Budget on the basis of phased migration |
| Audit-log retention | 7 years (J-SOX compliant) | PCI DSS 1 year; medical 6 years |
| MFA coverage | All employees | No exceptions |
| LTS requirement | 5+ years | Premised on 10-year operation |
| Package customization rate | 20-30% or less | Fit-to-Standard principle |
| Strangler Fig migration period | 5-10 years | Big bang forbidden |
| RTO / RPO | Set by business impact | 1 hour / 5 min as a guideline |
| Procurement process | 6 months to 1 year | Executive approval and bidding required |
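
As a sanity check on the availability row above, a few lines converting an SLO percentage into a monthly downtime budget (99.99% over a 30-day month leaves roughly 4.3 minutes).

```java
public class SloBudget {
    public static void main(String[] args) {
        double[] slos = {0.999, 0.9999, 0.99999};
        double monthMinutes = 30 * 24 * 60; // 43,200 minutes in a 30-day month
        for (double slo : slos) {
            System.out.printf("SLO %.3f%% -> %.1f min/month downtime budget%n",
                    slo * 100, monthMinutes * (1 - slo));
        }
    }
}
```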

Failures like Lidl's SAP eLWIS project (EUR 500M loss) and the Hershey Halloween incident ($100M loss) are what set these industry going rates. The iron rule for large-enterprise core is mature tech, phased migration, and Fit to Standard.

Large-enterprise core pitfalls and forbidden moves

These are the typical accident patterns in large-enterprise core. Every one of them has enough destructive power to shake management.

| Forbidden move | Why it's bad |
| --- | --- |
| Big-bang reform renewing every system in 3 years | The pattern behind Hershey's 1999 Halloween incident ($100M loss) and Healthcare.gov 2013 ($2B added) |
| Fully customizing the package | Lidl's SAP eLWIS (EUR 500M loss, project canceled); a Fit-to-Standard violation |
| Premature microservices | Breaks down under transaction-consistency and ops load; start from a modular monolith |
| Adopting the latest (short-LTS) tech for business systems | Maintenance talent dries up over a 10-year run; becomes tech debt |
| Trying to meet audit requirements with OSS alone | Without SLA-backed support, audits and incident response become untenable |
| Leaving Excel masters and person-dependent management in place | Departmental local optima block the company-wide optimum; MDM is required |
| Going public-cloud-only despite data-sovereignty requirements | Putting personal data abroad risks regulatory violations; hybrid is the realistic answer |
| Writing a BCP plan but never drilling it | Won't function in a real disaster; annual drills are required |
| Building proposals on tech logic alone | Fails the talent, procurement, and audit processes and gets rejected |
| Running VPN-centric without ZTNA (zero trust) | An internal breach spreads company-wide; the 2021 Pulse Secure incident pattern |

Lidl's SAP eLWIS (2011-2018, EUR 500M loss), the Hershey Halloween incident (1999, $100M loss), and the Healthcare.gov 2013 launch failure ($2B added) are textbook failures showing that design that holds up organizationally matters more than tech selection.

In large-enterprise core, organizational logic beats tech logic: a design that cannot pass the executive-approval process does not get approved.

AI-era perspective

The first step toward AI utilization in large-enterprise core is turning existing data into an asset; RDBs with explicit schemas and DWHs with organized metadata are the prerequisite. Unlike a startup, a large enterprise sits on 10-20 years of accumulated data, so organizing it yields huge AI value, while leaving it unorganized produces the state of “we have the data but can't use it.”

The custom-development parts can reap the benefits of AI-driven development through IaC and standard protocols, while the SAP / Oracle package side remains less AI-friendly. The rational split is to keep the package side minimal and build the integration parts in the AI-era standard composition.

| Favored in the AI era | Disfavored in the AI era |
| --- | --- |
| Data catalog + organized metadata | Departmental Excel, person-dependent knowledge |
| MDM-integrated masters | Separate masters per department |
| Standard protocols (OIDC / OpenAPI) | Custom API gateways |
| IaC-managed new builds | GUI-operated legacy |

Large-enterprise AI utilization starts with data organization. Even AI cannot save unorganized data.

Common failures

Here are the failure patterns that recur in large-enterprise core. All of them stem from underestimating the constraints of organizational scale, compliance, and existing assets; naively importing startup or mid-size-SaaS success stories breaks down.

Big-bang reform

Projects that aim to “reform all systems in 3 years” have extremely high failure rates - phased migration via Strangler Fig is the standard.

Adopting the latest tech

Frameworks with short LTS periods and minor languages dry up maintenance talent and turn into tech debt five years later.

Fully customizing the package

Heavily customizing the standard features of SAP and the like makes version upgrades impossible. Fit to Standard (matching operations to the package) is the principle.

Premature microservices

Splitting a core system into dozens of services from day one breaks down under transaction-consistency and ops load. Start from a modular monolith.

Trying to meet audit requirements with OSS alone

Without SLA-backed support, audits and incident response run into trouble.

Neglecting Excel masters and person-dependent management

Without MDM and a data catalog, departmental local optima obstruct the company-wide optimum.

In large enterprises, the technically excellent choice and the choice that holds up organizationally are often not the same.

In large-enterprise core reform, you often hear of technically impeccable proposals being rejected at the end of the meeting, from the executive floor, for reasons like “this vendor's track record with us is too short” or “our training center can't support this language.” A large enterprise approves not on tech logic but on designs that pass every organ of the organization: talent, training, procurement, and audit. This is the typical case suggesting that proposals to large enterprises need slides on talent-acquisition strategy and alignment with existing procurement processes; tech-only proposals end up as paper airplanes.

Author's note - the cost of “bending the package”

The German retail giant Lidl ran its SAP-based core-system reform project “eLWIS” from 2011, but in 2018 booked a EUR 500M (roughly JPY 65B) special loss and canceled it. The cause: in trying to heavily customize SAP to fit Lidl's in-house product-coding convention (an internal habit of basing codes on purchase price), the divergence from the package standard ballooned until maintenance became impossible. It is still told in SAP circles as the cost of ignoring the principle of matching operations to the package (Fit to Standard).

The US confectionery giant Hershey forced a big-bang SAP migration in 1999, right before its peak-demand Halloween season: the order system stopped functioning, roughly $100M worth of chocolate could not ship, and quarterly revenue fell 12% year over year. Known in the industry as the “Hershey Halloween incident,” it is the standard case for the danger of big-bang migration and the necessity of phased migration (Strangler Fig). That large-enterprise core has settled on mature tech, phased migration, and Fit to Standard as its baseline can be read as industry common sense accumulated from failures of this scale.

What to decide - what is your project’s answer?

For each of the following, try to articulate your project's answer in one or two sentences. Starting work while these are still vague invariably invites the later question, “why did we decide this again?”

Large-enterprise core selection is discussed in units of months and decided through formal approval processes. Below are the items that require consensus among management, legal, security, and IT.

  • Package vs custom split (Fit to Standard principle)
  • Cloud strategy (hybrid / public / data sovereignty)
  • Auth foundation (internal IdP integration / MFA / Passkey support)
  • Data governance (MDM / data catalog / PII management)
  • Audit / compliance level (SOC 2 / ISO 27001 etc.)
  • BCP / DR (numerical RTO / RPO definition)
  • EA framework (TOGAF / ArchiMate etc.)
  • Migration strategy (Strangler Fig / big-bang)

How to make the final call

The heart of large-enterprise core selection is choosing what holds up organizationally over what is technically excellent. However good the latest tech, it cannot be adopted without alignment on talent acquisition, vendor maintenance, existing-asset compatibility, and audit readiness. Topics a startup never considers - Fit to Standard, mature tech, long-term support, existing-IdP integration - take center stage here.

The other decisive axis is the two pillars of phased migration and data asset-ization. Full-reform megaprojects have extremely high failure rates; the standard is to replace existing assets in phases via Strangler Fig. In parallel, organize the accumulated business data with a DWH, data catalog, and MDM so it becomes usable as an AI-era asset.

Selection priorities

  1. Organizational viability first - talent, vendor maintenance, audit readiness
  2. Fit to Standard - center on packages, keep custom to a minimum
  3. Phased migration - leverage existing assets via Strangler Fig
  4. Data asset-ization - organize via DWH / catalog / MDM

“New tech < design that holds up organizationally” is the iron rule of large-enterprise core.

Summary

This article covered the large-enterprise core case: Fit to Standard, phased migration via Strangler Fig, hybrid cloud, data asset-ization, and design that holds up organizationally.

Choose mature tech, migrate in phases, get it through the organization with Fit to Standard, and turn data into an asset. That is the practical answer for large-enterprise core design in 2026.

This was the final installment of the “Case Studies” category. Next time we start a new category, the Appendix, where we plan to organize the judgment axes discussed throughout this series into practical references: an antipattern collection, a best-practice collection, and a critical-incident collection.

Back to series TOC -> ‘Architecture Crash Course for the Generative-AI Era’: How to Read This Book

I hope you’ll read the next article as well.