About this article
As the fifteenth and final installment of the "DevOps Architecture" category in the series "Architecture Crash Course for the Generative-AI Era," this article explains ticket and project management.
Tickets are not "to-do lists" but the organization's memory device. Beyond 10 people, individual memory becomes unmanageable, and sloppy design here runs the organization straight into its cognitive limits. This article covers choosing between Jira, Linear, and GitHub Issues, splitting epics, designing sprints, and linking tickets to release notes, as mechanisms that do not break down as headcount grows.
What is ticket management in the first place
Picture a hospital's reception numbers. When a patient arrives, a reception number is issued, and symptoms, attending physician, test results, and prescriptions are all recorded against that number. Without numbers, confusion like "how far along is that patient's examination?" arises, and missed responses and duplicate handling become frequent.
Ticket management is the reception-number system for software development. By assigning a unique number to each task and recording who did what, when, and how far it has progressed, everyone on the team shares the same situational awareness.
Why ticket management matters
Memory canât manage beyond a certain headcount
A 3-person team can get by verbally, but beyond 10 people, "who was handling that again?" becomes a daily occurrence. Without tickets, duplicated work, omissions, and priority confusion are unavoidable.
Tracking past context
"Why was this spec chosen?" "When was this bug reported?" If discussion and decisions are recorded in tickets, the context is traceable even 6 months later. Slack conversations scroll away, but tickets remain.
Visualizing and predicting progress
Measuring ticket throughput (velocity) lets you objectively estimate how many weeks remain until release. Judging by numbers rather than gut feel is ticket management's strength.
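As a minimal sketch of that estimate: divide the remaining work by the average of recent sprint velocities and round up to whole sprints. All names and numbers here are illustrative, not from any particular tool.

```python
import math

def weeks_to_release(remaining_points: float, recent_velocities: list[float],
                     sprint_length_weeks: int = 2) -> int:
    """Estimate weeks until release from the average of recent sprint velocities."""
    avg_velocity = sum(recent_velocities) / len(recent_velocities)
    # Round up: a partially used sprint still costs a full sprint of calendar time.
    sprints_needed = math.ceil(remaining_points / avg_velocity)
    return sprints_needed * sprint_length_weeks

# e.g. 120 points left; the last three sprints completed 28, 32, and 30 points
print(weeks_to_release(120, [28, 32, 30]))  # 4 sprints -> 8 weeks
```

The point is that the inputs are measured, not guessed: the only judgment call left is what counts as "remaining."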
Main tools for ticket operations
Today, ticket-management tools roughly fall into a few lineages. The basic approach is choosing by the tradeoff between source-code integration and customizability.
| Tool | Characteristics | Suited for |
|---|---|---|
| GitHub Issues / Projects | Tightly coupled with repos / PRs, free | OSS, web services, new SaaS |
| GitLab Issues | Native integration if using GitLab | GitLab env, enterprise |
| Linear | Overwhelmingly fast UI, modern | Startups, studio-style |
| Jira | High-feature, de facto standard for large enterprises | Large scale, regulated industries, complex workflows |
| Notion | Unified with documentation | Lightweight ops, early-stage startup |
| Asana / Trello | Task-management-leaning | Mixed with non-engineer departments |
For a new SaaS or web service, GitHub Projects or Linear is the front-runner. GitHub Projects was significantly revamped in 2022, and its usability now approaches Linear's. Jira remains strong in large enterprises but easily sinks into a swamp of custom fields, so it is safer to avoid for new adoption.
3-tier ticket hierarchy - Epic / Story / Task
Lining up tickets in a flat list breaks down at a few hundred items. The industry rule is to organize them in a 3-tier hierarchy; Jira, GitHub Projects, and Linear all assume this structure.
```mermaid
flowchart TB
EPIC["Epic<br/>1-3 months<br/>large feature chunk<br/>e.g.: payment redesign"]
S1["Story #1<br/>1 day-2 weeks<br/>user pays via Apple Pay"]
S2["Story #2<br/>credit-card payment"]
S3["Story #3<br/>view payment history"]
T1["Task: integrate Stripe SDK<br/>(hours-2 days)"]
T2["Task: payment UI implementation"]
T3["Task: receipt email"]
T4["Task: history API"]
EPIC --> S1
EPIC --> S2
EPIC --> S3
S1 --> T1
S1 --> T2
S2 --> T3
S3 --> T4
classDef epic fill:#fef3c7,stroke:#d97706,stroke-width:2px;
classDef story fill:#dbeafe,stroke:#2563eb;
classDef task fill:#dcfce7,stroke:#16a34a;
class EPIC epic;
class S1,S2,S3 story;
class T1,T2,T3,T4 task;
```
| Tier | Granularity | Period | Example |
|---|---|---|---|
| Epic | Large feature chunk | 1-3 months | "Payment-feature redesign" / "Passkey-auth introduction" |
| Story | Unit that delivers user value | 1 day-2 weeks | "User can pay with Apple Pay" |
| Task | Technical work unit | Hours-2 days | "Stripe SDK integration" / "Payment UI implementation" |
The iron rule for stories is to write them as units of completed user value, from the user's perspective. "Integrate Stripe SDK" is a task, not a story, because SDK integration alone delivers no value to users. Writing to the template "As a [user], I want [behavior], so that [value]" keeps granularity aligned.
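The template above can be rendered mechanically, which is useful when generating story titles in bulk. A minimal sketch; the function and example values are illustrative:

```python
def story_title(user: str, behavior: str, value: str) -> str:
    """Render the standard user-story template from its three parts."""
    return f"As a {user}, I want {behavior}, so that {value}"

print(story_title(
    "shopper",
    "to pay with Apple Pay",
    "I can check out without entering card details",
))
```

If you cannot fill in all three slots, that is usually the signal the item is a task, not a story.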
Ticket-granularity pitfall - "completable in 1 day" is the front-runner
Misjudging ticket granularity breaks operations. With 1 ticket = completable in 1 day as the goal, progress stays visible, PRs are easy to keep small, and reviews go fast. Conversely, teams running "1 ticket = 1 week" fall into the state of reporting "80% done" for 3 weeks straight.
| Granularity | State |
|---|---|
| Hours | Too fine, task-management overhead bloats |
| 1 day (4-8 hours) | Front-runner |
| 2-3 days | Tolerable as mid-size Story |
| 1 week | Consider splitting |
| 2+ weeks | Always split. Promote to Epic and divide into Stories |
The appearance of "80% complete" is a warning that ticket-granularity design is broken. The remaining 20% is often really 40-60% of the work, and it becomes the major cause of sprint delays. Splitting tickets into sizes answerable with the binary "done / not done" is the front-runner operation.
When you are told "80% complete," that ticket is too big. It is the signal to split.
Story points vs time estimation
Task estimation has two main schools: time estimation (X hours / X days) and story points (relative size on the Fibonacci scale 1/2/3/5/8/13). This is a long-debated topic, and which is superior depends on the situation.
| Viewpoint | Time estimation | Story points |
|---|---|---|
| Learning cost | Low | Mid (need team alignment on number meaning) |
| Individual-difference absorption | Weak (implementation-speed gaps emerge) | Strong (relative comparison, individual differences disappear) |
| Customer reporting | Easy | Hard (needs time conversion) |
| Velocity measurement | Inaccurate (time and completion diverge) | Accurate (stable team-unit indicator) |
| AI-generated estimation | Relatively accurate | Bad for AI (needs team context) |
Today, the field's intuitive balance is time estimation for startups and small teams, story points for mid-to-large scale and multi-team setups. Linear and GitHub Projects also support story points. Hybrid operation (estimate in points, report in time conversion) is another realistic answer that many teams use.
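The hybrid conversion is simple arithmetic once velocity is measured: the team's own velocity is the exchange rate between points and calendar time. A hedged sketch with illustrative numbers:

```python
def points_to_workdays(points: float, velocity_per_sprint: float,
                       workdays_per_sprint: int = 10) -> float:
    """Convert story points to workdays using the team's measured velocity.

    Assumes a 2-week sprint (10 workdays) by default.
    """
    return points / velocity_per_sprint * workdays_per_sprint

# An 8-point story on a team that completes 32 points per 2-week sprint
print(points_to_workdays(8, 32))  # 2.5 workdays
```

Estimating happens in points; only the customer-facing report passes through this conversion, so individual speed differences never enter the estimates themselves.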
Sprint design - 1 week / 2 weeks / Kanban
Whether to introduce "sprints" (a planning-execution-review cycle delimited by fixed periods) or use Kanban (no fixed period; work flows continuously) as the development unit is also a decision item.
| Method | Period | Suited for |
|---|---|---|
| 1-week sprint | 7 days | Fast iteration, startups |
| 2-week sprint | 14 days | Front-runner (mid-size, SaaS) |
| 3-4 week sprint | 21-28 days | Large scale, regulated industries |
| Kanban (no period) | - | Operations-centric, inquiry-based |
A 2-week sprint is the front-runner for mid-size SaaS today. A 1-week sprint feels heavy with planning overhead, while 3+ weeks lets plans go stale midway. Kanban is "flow management over planning," suited to SRE teams and operations-centric organizations, but for feature-development-centric teams, no period tends to mean no rhythm, so caution is required.
Backlog priority - operate in 5 tiers
Operating priorities in the 3 tiers of "high / mid / low" quickly results in everything becoming "high" and the system not functioning. With 5 tiers plus numbers, an ordering forcibly emerges.
| Priority | Meaning | Upper-bound guideline |
|---|---|---|
| P0 | Incident response, immediate | Always 0 ideal |
| P1 | Definitely finish in this sprint | 60% of sprint capacity |
| P2 | This sprint if margin available | 30% of sprint capacity |
| P3 | Next sprint or later | No upper bound |
| P4 | Idea pool (may not be done) | No upper bound |
The state of "10 P1 tickets" is a warning that priorities are not functioning. The iron rule: define P1 as "what definitely finishes this sprint," and demote anything exceeding velocity (the average completion of past sprints) to P2. Priorities are not "prayer" but numbers reverse-derived from capacity constraints.
A backlog where everything is P1 is the same as having no priorities.
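The capacity rule can be made mechanical. A minimal sketch of reverse-deriving the P1 budget from velocity and demoting the overflow; ticket names, point values, and the ordering-by-importance assumption are all illustrative:

```python
def triage_p1(p1_tickets: list[tuple[str, float]], avg_velocity: float,
              p1_ratio: float = 0.6) -> tuple[list[str], list[str]]:
    """Keep P1s within 60% of sprint capacity; demote the overflow to P2.

    Assumes p1_tickets is already sorted by importance (most important first).
    """
    budget = avg_velocity * p1_ratio  # P1 upper bound from the article's rule
    keep, demote, used, over = [], [], 0.0, False
    for name, points in p1_tickets:
        if not over and used + points <= budget:
            keep.append(name)
            used += points
        else:
            over = True  # once the budget is hit, everything after is demoted
            demote.append(name)
    return keep, demote

keep, demote = triage_p1(
    [("checkout bug", 8), ("passkey login", 13), ("dark mode", 8)],
    avg_velocity=30,
)
print(keep, demote)  # budget is 18 points, so only the first ticket stays P1
```

Everything demoted here is still in the backlog; the only thing that changed is that "P1" now means a promise the team can actually keep.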
What to write in tickets - phased practice
Ticket bodies tend to polarize into "title only" or "so detailed no one reads them." It is practical to vary the content by stage, deciding what to write and who writes it at each stage.
| Stage | When to write | What to write | Who writes |
|---|---|---|---|
| 1. Submission | When thought of | Title and a 1-paragraph "what I want to do" | Anyone |
| 2. Refinement | Before sprint planning | Acceptance criteria, rough estimate | PM + assignee |
| 3. Start | Sprint start | Tech-design overview, subtask split | Assignee |
| 4. Completion | After PR merge | How verification was done, screenshots | Assignee |
| 5. Review | End of sprint | Learnings, stumbling points | Assignee |
Acceptance criteria (concrete conditions for judging "done") are the most important part. "User can log in" is weak; writing externally verifiable behavior such as "can sign in with Apple ID, the first sign-in requires email confirmation, and subsequent sign-ins complete with a passkey" is the front-runner.
Velocity measurement - stability is top priority
Velocity (the total story points or ticket count completed in one sprint) is the indicator that measures "what the team can promise in this period." It degrades immediately when management sets it as a KPI, but it is extremely useful for raising the team's own planning accuracy.
| Viewpoint | Content |
|---|---|
| Stability is top priority | Suddenly-rising velocity is a sign of point-padding |
| See average of past 3 sprints | Single numbers have no meaning |
| Donât make management-reporting KPI | Point / ticket-count inflation occurs |
| Donât show individual velocity | Tying to individual evaluation stops cooperation |
If velocity suddenly rises by 20% or more, point-padding, concealed incident response, or ticket-granularity collapse is likely happening. Velocity is a prediction tool, not an evaluation tool; crossing that line makes all of the organization's numbers untrustworthy.
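The ±20% stability check from the table can be sketched as a comparison of the latest sprint against the rolling average of the three before it. A hedged sketch; the function name and sample numbers are illustrative:

```python
def velocity_warning(history: list[float], tolerance: float = 0.20) -> bool:
    """True if the latest sprint's velocity deviates more than `tolerance`
    from the average of the three sprints before it."""
    *prior, latest = history[-4:]  # three baseline sprints + the latest
    baseline = sum(prior) / len(prior)
    return abs(latest - baseline) / baseline > tolerance

print(velocity_warning([30, 28, 32, 31]))  # steady team -> False
print(velocity_warning([30, 28, 32, 45]))  # +50% jump -> True, investigate
```

A warning here does not say *what* went wrong, only that the number stopped being a trustworthy predictor and someone should look at the underlying tickets.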
Ticket-operation antipatterns
Ticket operations rapidly degrade when the culture collapses, even with the structure in place.
| Antipattern | Why itâs bad |
|---|---|
| All requests via Slack DM | Unsearchable, others canât see, forgotten |
| Permanent neglect after submission | Backlog turns into a "list of dead hopes" |
| All PR = 1 ticket required, no exceptions | Operational load bloats for emergency hotfixes / minor fixes |
| Management makes ticket count / points KPI | Point-padding, ticket-fragmentation |
| Vague definition of done (no spec) | Disputes over "done / not done" |
| Backlog-triage neglect | Hundreds of tickets from a year ago remain |
Running backlog triage (periodically reviewing the backlog and closing unneeded tickets) monthly is the front-runner. P3/P4 tickets neglected for over a year can, in principle, be closed. Leaving them piles up "lists of dreams no one is motivated to pursue" and buries the truly important tickets.
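The triage rule itself is simple enough to automate: select P3/P4 tickets untouched for over a year as close candidates. A minimal sketch; the ticket fields here are illustrative dictionaries, not any real tool's API:

```python
from datetime import date, timedelta

def stale_tickets(tickets: list[dict], today: date,
                  max_age_days: int = 365) -> list[dict]:
    """Return P3/P4 tickets untouched for more than max_age_days."""
    return [t for t in tickets
            if t["priority"] in ("P3", "P4")
            and (today - t["updated"]) > timedelta(days=max_age_days)]

backlog = [
    {"id": 101, "priority": "P3", "updated": date(2024, 1, 10)},
    {"id": 102, "priority": "P1", "updated": date(2024, 1, 10)},  # old but P1
    {"id": 103, "priority": "P4", "updated": date(2025, 3, 1)},   # recent
]
print([t["id"] for t in stale_tickets(backlog, date(2025, 4, 1))])  # [101]
```

In practice you would feed this from your tracker's API and close with a comment ("closed by triage; reopen if still relevant") rather than deleting silently.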
Ticket-operation numerical gates / sprint metrics
Note: Industry baseline values as of April 2026. Will become outdated as technology and the talent market shift, so requires periodic updates.
Ticket operations are standardly disciplined by numbers. Below are mid-size SaaS standards.
| Metric | Recommended | What to do if exceeded |
|---|---|---|
| 1-ticket granularity | 1 day (4-8 hours) | Force-split over 2 weeks |
| 1 PR lines | ~400 | Split over 1000 |
| Sprint period | 2 weeks | 1-week excessive planning, 3-week stale plans |
| P1 ticket count | 60% or less of sprint capacity | Capacity over: demote to P2 |
| Sprint completion rate | 80%+ | Under 50% three times in a row suggests granularity collapse |
| Velocity variation | Within ±20% | A rapid increase signals point-padding |
| Backlog stagnation | 1 year or less | P3/P4 over 1 year close in principle |
| Triage frequency | Monthly | Unimplemented becomes graveyard backlog |
| Acceptance-criteria-stated rate | 100% | Leaving them unstated leads to "done" disputes |
| Ticket submission to PR merge | 1-3 days | Over 1 week, review granularity |
"80% complete for 3 weeks straight" is the certain sign of ticket-granularity collapse. An all-P1 backlog equals zero priorities - the standard cause of mid-size SaaS achievement rates dropping below 50%.
The ticket-granularity goal is completable in 1 day. Split into sizes answerable with the binary "done / not done."
Ticket-operation pitfalls and forbidden moves
Typical accident patterns in ticket operations. All result in org memory accumulating in individual heads.
| Forbidden move | Why itâs bad |
|---|---|
| No tickets in 3+ person teams | Beyond individual-memory limits, frequent forgetting |
| Chat operations with Slack DM requests | Unsearchable, others canât see, forever forgotten |
| All-P1 backlog operation | Same as no priorities, 30+ carryover per sprint |
| Ticket granularity over 2 weeks | The hell of "80% complete" continuing for 3 weeks |
| No acceptance criteria | "Done" disputes, quality varies |
| Management makes ticket count / points KPI | Point-padding, ticket-fragmentation, numerical degradation |
| Show individual velocity for evaluation | Cooperation stops, point-grabbing |
| No backlog triage | Hundreds of dead hopes from a year ago remain |
| Adopt Jira for mid-size startups | Custom-field graveyard, operational-cost bloat |
| Just install tools without operational design | Tools first, operation later always fails |
| Line up Epic / Story / Task flat | Breakdown at 100+ items, 3-tier hierarchy required |
| Assume "small orgs don't need ticket ops" | Beyond 3 people, memory limits are exceeded and forgetting incidents become frequent |
| Think "putting in Jira or Linear improves ops" | Dropping a high-feature tool into a team without settled granularity, priorities, and acceptance criteria just adds a custom-field graveyard |
Cases of "achievement rate dropping below 50% with 100+ P1 tickets" occur frequently in mid-size SaaS. Merely redefining P1 as "what we promise to complete this sprint" and force-limiting it from velocity recovers the achievement rate to over 90% - the power of operational design.
Priorities are not prayers but reverse-derived from capacity constraints. Decide by math.
AI decision axes
| AI-favored | AI-disfavored |
|---|---|
| Tools with API / CLI / MCP support like GitHub Issues | Tools with no API, GUI only |
| Structured acceptance criteria | Free-form long requirements |
| Granularity passable to AI per Story | 10 features mixed in 1 ticket |
| Linkage with Conventional Commits | Commits unrelated to tickets |
- Organize in 3-tier Epic/Story/Task - hierarchical, not flat
- 1-day granularity as the goal - "80% complete for 3 weeks straight" is granularity collapse
- Reverse-derive from velocity in P0-P4 - P1 to 60% of capacity
- State acceptance criteria 100% - they double as instructions to AI
What to decide - what is your projectâs answer?
For each of the following, try to articulate your project's answer in 1-2 sentences. Starting work while these are vague always invites later questions like "why did we decide this again?"
- Ticket-management tool (GitHub Projects / Linear / Jira / Notion)
- Ticket-hierarchy operation (Epic / Story / Task)
- Ticket-granularity goal (1-day completion as goal)
- Estimation method (time / story points)
- Sprint period (1 week / 2 weeks / Kanban)
- Priority hierarchy (P0-P4)
- Acceptance-criteria template
- Backlog-triage frequency (monthly recommended)
Author's note - the "all P1" that killed a release plan
There is a case at a mid-size SaaS where the backlog filled up with 100+ P1 tickets, and quarterly plans dropped below a 50% achievement rate three times in a row. The cause was the vague rule "everything important is P1," which meant deciding what to actually start only at sprint start, with 30-40 P1 carryovers left untouched every sprint.
This team redefined P1 as "what we promise to complete this sprint" and force-limited the P1 count from velocity, recovering the achievement rate to over 90%. Just laying down the rule "the P1 upper bound is 60% of sprint capacity" makes priority design work. Priorities are not prayers but numbers reverse-derived from capacity constraints - this paradigm shift restores organizational predictability.
Priorities are math. Deciding by feel makes everything "high."
Summary
This article covered ticket and project management, including tool selection, 3-tier hierarchy, granularity, sprints, priorities, velocity, and AI-era MCP integration.
Organize in 3-tier Epic/Story/Task, 1-day granularity as goal, reverse-derive from velocity in P0-P4, state acceptance criteria 100%. That is the practical answer for ticket operations in 2026.
And this was the final installment of the "DevOps Architecture" category. Next time we'll start a new category (Enterprise Architecture).
I hope youâll read the next article as well.
Series: Architecture Crash Course for the Generative-AI Era (68/89)