Gating the Quiet Decay of Healthy Domains
Why quality gates should help engineers think—not just comply
Most architectural failures don’t announce themselves.
They don’t arrive as outages or broken builds. They arrive quietly, through small local decisions that make sense in isolation and slowly compound into something harder to change.
I was reminded of this recently while building a learning series on distributed systems. After covering orchestration, domain boundaries, and failure ownership, I turned attention inward, to the structure inside a single domain.
Even when boundaries are “correct,” the internal structure of a domain can still decay. Features accumulate. Modules import each other “just this once.” Coordination logic spreads into places it doesn’t belong. Nothing is broken—but everything is harder.
This is not a new observation. What is interesting is how rarely teams create effective feedback loops around it.
Architecture Fails Locally
Large architectural decisions (service boundaries, messaging strategies, consistency models) are important. But they are not where most systems fail.
They fail locally:
When a module takes on one more responsibility
When orchestration logic leaks into a domain service
When “common” code quietly depends on half the system
When refactors feel risky long before they should
These are not testing failures. They are structural failures. And they usually surface only when a seemingly small feature suddenly feels expensive.
That moment—why is this change so hard?—is where architecture becomes real to engineers.
When a Chat Room Becomes the Center of the World
Consider a fairly ordinary capability: chat rooms.
At first, the module is simple and cohesive. It owns the room lifecycle and basic rules:
A room has members
Only members can post messages
A room can be archived
Membership changes follow policy (e.g., invite required)
ChatRoomsModule
├── ChatRoomService
├── MembershipPolicy
└── ChatRoomRepository
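In NestJS terms, the rules above might look something like this. This is a sketch; the method names and the ChatRoom shape are illustrative, not taken from a real codebase:

// membership-policy.ts: a sketch of the rules this module owns
import { Injectable, ForbiddenException } from '@nestjs/common';

interface ChatRoom {
  memberIds: string[];
  inviteRequired: boolean;
  archived: boolean;
}

@Injectable()
export class MembershipPolicy {
  // Only members can post messages, and only in live rooms
  assertCanPost(room: ChatRoom, userId: string): void {
    if (room.archived) {
      throw new ForbiddenException('Room is archived');
    }
    if (!room.memberIds.includes(userId)) {
      throw new ForbiddenException('Only members can post');
    }
  }

  // Membership changes follow policy (e.g., invite required)
  assertCanJoin(room: ChatRoom, invitedBy?: string): void {
    if (room.inviteRequired && !invitedBy) {
      throw new ForbiddenException('An invite is required to join');
    }
  }
}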
Then the product grows, as it always does.
When a room is created, we now also want to:
Create a welcome message
Notify invited members
Record an audit entry
Update a user’s “recent activity”
Emit an analytics event
Kick off a “first reply” reminder workflow
None of these requests is unreasonable, and each one looks like a small “just add it” change.
So the easiest path is to add them where the action already happens—inside the chat room service.
After a few iterations, the module looks like this:
ChatRoomsModule
├── ChatRoomService
├── NotificationService
├── AuditLogService
├── ActivityService
├── AnalyticsService
├── ReminderService
└── ChatRoomRepository
And the createRoom() method begins to tell a familiar story:
createRoom(...) {
  // create room + enforce basic rules
  const room = this.repo.create(...);

  // side effects begin
  this.repo.addMembers(room.id, ...);
  this.repo.addSystemMessage(room.id, "Welcome");
  this.notifications.sendInvites(...);
  this.audit.record("ROOM_CREATED", ...);
  this.activity.markRecent(...);
  this.analytics.track("room_created", ...);
  this.reminders.scheduleFirstReply(...);

  return room;
}
Nothing here is “bad code.” It may be tested. It may work reliably.
But the room lifecycle is no longer just domain work. It has become a coordination hub.
Now a small feature request arrives:
“Allow guests to participate temporarily, but with limited permissions.”
This is where the design cost shows up.
The engineer expects to adjust membership rules and permissions. Instead, they discover that “create room” is now the place where half the system gets wired together.
They touch code that doesn’t feel related:
notifications (who gets invited?)
audit (what is recorded?)
analytics (what counts as a room creation?)
reminders (does a guest affect the workflow?)
activity (should guests show up in recent rooms?)
The feature isn’t hard. The structure makes it feel hard.
A Small Structural Change
The issue isn’t that side effects exist. Real systems have side effects.
The issue is where we put the coordination.
A useful way to reframe it is:
The chat room domain should own rules and state transitions.
A separate orchestrator should own cross-capability collaboration.
After the change, we keep a real domain:
ChatRoomsDomainModule
├── ChatRoomService
├── MembershipPolicy
├── PermissionPolicy
└── ChatRoomRepository
The domain still does meaningful work. For example:
Creating a room applies naming rules, initial state, and membership invariants
Adding a member enforces invite rules and prevents duplicates
Posting a message enforces membership and room state
So createRoom() still has content — it just stops being the place where everything else happens:
createRoom(command) {
  const room = ChatRoom.create({
    createdBy: command.creatorId,
    type: command.type,
    initialMembers: command.memberIds,
  });

  // enforce invariants via policy
  this.membershipPolicy.assertCanCreate(room, command.creatorId);

  // persist domain state
  this.repo.save(room);

  return room;
}
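Here ChatRoom.create and the membership policy carry the invariants. Their exact shape is my assumption rather than something the refactor dictates; one plausible sketch of the entity:

// chat-room.ts: an assumed shape for the domain entity
import { randomUUID } from 'node:crypto';

export class ChatRoom {
  private constructor(
    public readonly id: string,
    public readonly type: string,
    public readonly createdBy: string,
    public readonly memberIds: string[],
  ) {}

  // Applies initial state and membership invariants at creation time
  static create(props: { createdBy: string; type: string; initialMembers: string[] }): ChatRoom {
    // The creator is always a member; duplicate invites collapse
    const members = [...new Set([props.createdBy, ...props.initialMembers])];
    return new ChatRoom(randomUUID(), props.type, props.createdBy, members);
  }
}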
Then we introduce an orchestration module that coordinates side effects:
ChatRoomOrchestrationsModule
└── CreateChatRoomOrchestrator
The orchestrator calls the domain, then triggers cross-cutting work:
run(command) {
  const room = this.chatRooms.createRoom(command);

  // cross-capability coordination
  this.notifications.sendInvites(room.id, command.memberIds);
  this.audit.recordRoomCreated(room.id, command.creatorId);
  this.activity.markRecent(room.id, command.creatorId);
  this.analytics.trackRoomCreated(room.id);
  this.reminders.scheduleFirstReply(room.id);

  return room;
}
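In NestJS, this dependency direction can be made explicit in the module wiring. A sketch, where the collaborator module names are placeholders:

// chat-room-orchestrations.module.ts: illustrative wiring
import { Module } from '@nestjs/common';
import { ChatRoomsDomainModule } from '../chat-rooms/chat-rooms-domain.module';
import { CreateChatRoomOrchestrator } from './create-chat-room.orchestrator';
// (imports of the collaborator modules elided; the names below are placeholders)

@Module({
  imports: [
    ChatRoomsDomainModule, // exports ChatRoomService and nothing else
    NotificationsModule,
    AuditModule,
    ActivityModule,
    AnalyticsModule,
    RemindersModule,
  ],
  providers: [CreateChatRoomOrchestrator],
  exports: [CreateChatRoomOrchestrator],
})
export class ChatRoomOrchestrationsModule {}

The important property is the direction of the arrow: the domain module never imports the orchestration module.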
The system still does all the same things. But now the responsibilities are explicit.
Why This Small Change Matters
This refactor pays off in three practical ways.
1) The domain stays meaningful
The chat room module still owns real business rules: membership, permissions, state transitions. It isn’t a repository wrapper.
2) Coordination becomes visible and movable
Cross-cutting side effects live in one place. When they grow, you can split orchestrators, change sequencing, add transactions, or introduce events—without smearing logic across the domain.
3) Features become cheaper to add
“Temporary guests” is now mostly a domain change:
update membership + permission policy
adjust orchestrator behavior if needed (e.g., no reminders for guests)
You don’t have to untangle a long chain inside a single method that quietly became critical infrastructure.
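A sketch of what that might look like, with the guest semantics assumed purely for illustration:

// permission-policy.ts: illustrative guest handling
import { Injectable } from '@nestjs/common';

type Role = 'member' | 'guest';

@Injectable()
export class PermissionPolicy {
  canPost(role: Role): boolean {
    return true; // guests can participate
  }

  canInvite(role: Role): boolean {
    return role === 'member'; // but with limited permissions
  }
}

// ...and in the orchestrator, a single conditional:
// if (!command.isGuest) this.reminders.scheduleFirstReply(room.id);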
The problem wasn’t that the system had side effects (and in this example we’re deliberately not touching failure modes). It was that the domain became responsible for coordinating them.
Coupling and Cohesion, Without the Lecture
There is no shortage of material explaining coupling and cohesion. Most engineers already know the definitions.
What’s missing in many organizations is not knowledge—it’s early signal.
Cohesion and coupling matter because they directly affect:
Cognitive load
Change amplification
Refactor safety
Team autonomy (assuming teams own domains)
High cohesion means a module has a clear purpose and localized change. Low cohesion means responsibilities have blurred.
Coupling is more subtle. Some coupling is necessary—even desirable. The problem is not coupling itself, but unmanaged and accidental coupling.
When that happens, domains don’t break. They stiffen.
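If you want one concrete number behind “stiffen,” Robert Martin’s instability metric is a reasonable start. A minimal sketch:

// instability.ts: Martin's instability metric, I = Ce / (Ca + Ce)
//   Ca (afferent): how many modules depend on this one
//   Ce (efferent): how many modules this one depends on
// I near 0: many dependents, few dependencies. Change here is expensive.
// I near 1: many dependencies, few dependents. Change here is cheap.
function instability(afferent: number, efferent: number): number {
  const total = afferent + efferent;
  return total === 0 ? 0 : efferent / total;
}

The troubling pattern is the mismatch: a module with many dependents that is also highly unstable. Everyone leans on it, and it keeps moving.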
Why We Had to Build This Ourselves
In more constrained ecosystems, static analysis and SAST tools often provide structural guardrails by default.
In the Java world, for example, there are mature tools that can:
Measure coupling and instability
Enforce architectural layering
Detect dependency direction problems
Integrate into CI as first-class quality gates
In the Node.js ecosystem, those tools are far less mature, especially at the framework level.
With NestJS in particular:
Modules are flexible by design
Dependency injection is easy across boundaries
Structural drift is hard to detect automatically
There simply aren’t widely adopted SAST tools that measure coupling and cohesion at the module level for Node or NestJS.
So, using Claude Code, I built a small one myself.
It wasn’t a large project. In a couple of days, I had a tool that:
Understood our NestJS conventions
Measured coupling and cohesion
Produced a simple structural report
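The core of the analysis is not exotic. Here is a toy version of the idea, assuming a conventional src/<module> layout; the real tool also understands NestJS module metadata:

// structural-scan.ts: a toy version of the core idea
import { readdirSync, readFileSync } from 'node:fs';
import { join } from 'node:path';

function* walk(dir: string): Generator<string> {
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    const path = join(dir, entry.name);
    if (entry.isDirectory()) yield* walk(path);
    else if (path.endsWith('.ts')) yield path;
  }
}

// For each module under src/, collect which *other* modules its files import from.
function moduleCoupling(srcDir: string): Map<string, Set<string>> {
  const coupling = new Map<string, Set<string>>();
  for (const entry of readdirSync(srcDir, { withFileTypes: true })) {
    if (!entry.isDirectory()) continue;
    const deps = new Set<string>();
    for (const file of walk(join(srcDir, entry.name))) {
      const source = readFileSync(file, 'utf8');
      for (const match of source.matchAll(/from '\.\.\/([\w-]+)/g)) {
        if (match[1] !== entry.name) deps.add(match[1]);
      }
    }
    coupling.set(entry.name, deps);
  }
  return coupling;
}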
The more interesting part came next.
Turning Architecture Into a Feedback Loop
I wired the analysis into a GitHub Action and passed the results to a Claude Code runner.
Claude reviews the structural output and provides contextual feedback on the pull request.
If a change introduces a meaningful regression, the system:
Explains what changed (e.g., increased coupling or reduced cohesion)
Recommends a likely structural seam or refactor
Links directly to the relevant educational material
Suggests patterns already used successfully in the codebase
This creates a tight feedback loop:
Engineer makes a change
Structural analysis runs
Claude explains the signal in plain language
Engineer decides whether to act now or later
The goal is not “simple” automation. The goal is clarity at the moment of change, with contextual, actionable next steps.
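The handoff to Claude is deliberately plain: the analyzer’s findings plus the diff, framed as a review task. Something close to this, with the wording illustrative (the Claude Code invocation itself is elided):

// feedback.ts: the shape of the handoff to the reviewer
function buildReviewPrompt(findings: string[], diff: string): string {
  return [
    'Structural analysis flagged these regressions in this pull request:',
    ...findings.map((finding) => `- ${finding}`),
    '',
    'For each one: explain what changed in plain language,',
    'suggest a likely structural seam or refactor,',
    'and point to patterns already used in this codebase.',
    '',
    'Diff under review:',
    diff,
  ].join('\n');
}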
A Small but Telling Result
Within the first hour of running this in CI, the system caught its first structural regression.
A pull request added a new dependency to a class that already had high coupling. The change extended a long orchestration chain inside a create method—exactly the kind of local structural drift that is easy to miss in review.
The action surfaced the regression. Claude explained the issue and suggested moving the coordination into a dedicated orchestrator.
The engineer refactored the code in the same pull request.
No meeting. No architectural review. No long-term cleanup ticket.
Metrics as Mismatch Detectors
Used correctly, architectural metrics are not scorecards. They are mismatch detectors.
They highlight situations like:
A “common” module that depends on half the system
A foundational module that is highly unstable
An entry-point module accumulating orchestration pressure
A feature module that has become a grab bag
None of these imply bad code. They imply drift.
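Detecting a mismatch is mostly a matter of comparing a module’s measured position against its intended role. A sketch, with the thresholds as assumptions you would tune:

// mismatch.ts: metrics as mismatch detectors, not scorecards
interface ModuleSignal {
  dependents: number; // afferent coupling
  instability: number; // from the earlier formula
}

function flagMismatches(metrics: Map<string, ModuleSignal>): string[] {
  const flags: string[] = [];
  for (const [name, signal] of metrics) {
    // A module many others rely on should itself be stable
    if (signal.dependents > 5 && signal.instability > 0.7) {
      flags.push(`${name}: widely depended on, yet highly unstable`);
    }
  }
  return flags;
}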
Quality Gates That Help, Not Police
One of the mistakes organizations make with quality gates is turning them into enforcement mechanisms too early.
A helpful quality gate:
Surfaces trends, not absolutes (thresholds are configurable)
Encourages discussion and helps articulate tradeoffs
Prevents silent regressions
Instead of dumping raw metrics into a PR, the system:
Filters out insignificant changes
Highlights only meaningful regressions
Explains them in context
Points to concrete examples or patterns
If a change is neutral or improves structure, it stays quiet. A noisy quality gate, after all, is worse than no gate at all.
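The gate logic itself can stay small. A sketch, with the threshold as the configurable knob mentioned above:

// gate.ts: only meaningful regressions produce feedback
interface ModuleMetrics {
  coupling: number;
  cohesion: number;
}

const THRESHOLD = 0.1; // configurable per codebase

function meaningfulRegressions(
  base: Map<string, ModuleMetrics>,
  head: Map<string, ModuleMetrics>,
): string[] {
  const findings: string[] = [];
  for (const [name, after] of head) {
    const before = base.get(name);
    if (!before) continue; // a new module has no baseline to regress against
    if (after.coupling - before.coupling > THRESHOLD) {
      findings.push(`${name}: coupling ${before.coupling.toFixed(2)} -> ${after.coupling.toFixed(2)}`);
    }
    if (before.cohesion - after.cohesion > THRESHOLD) {
      findings.push(`${name}: cohesion ${before.cohesion.toFixed(2)} -> ${after.cohesion.toFixed(2)}`);
    }
  }
  return findings; // empty: the gate stays quiet
}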
Architecture as Continuous Quality
Good architecture is not something you finish.
It is something you maintain through:
Feedback loops
Intentional seams
Lightweight quality gates
And a shared language for discussing tradeoffs
Cohesion and coupling are not academic concepts. They are leading indicators of whether a system will remain easy to change.
The earlier you can see them drifting, the cheaper the correction. And, as a teaser for a future blog post: I’m seeing this hold true regardless of whether the author is a human or an AI.