Optimizing Engineering Documents for the Reader
In any engineering organization that has grown past the founding engineering team, writing becomes architecture. As teams scale, it’s no longer enough to have the right answer in your head - others must be able to read, understand, and act on it. Yet too often, authors treat all engineering documents the same, filing everything under the generic label of "RFC" and hoping that the reader will sort out the intent for themselves.
They often don't.
At Luxury Presence, the R&D org has grown past 100 people and we’re spinning up an 11th team. Through the growing pains, I’ve found it helpful to recognize that while many documents are technically “requests for comments,” their success - buy-in, review, onboarding, exploration, and so on - depends on optimizing for the audience’s role in the process, not the author’s. Below, I’ll walk through three document types in use and the common failure modes I’ve seen - especially when writers fail to align their content to the needs of their audience.
1. Engineering Standards
Audience Intent: “What do we support, and why?”
Author Goal: Establish defaults and minimize divergence.
It’s important to document Engineering Standards - such as what programming languages we use and why. These documents are primarily written for internal reference and onboarding. Their job is not to explore the space or propose a new direction, but to clarify the current standard, the rationale behind it, and how to adopt it.
A good standard is one you can point to during a pull request and say: “This is why we don’t use that here.” It should be short enough to be read, clear enough to be followed, and backed by rationale grounded in organizational goals - like minimizing cost of ownership or ensuring teams default to mature (and boring) solutions.
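To make that concrete, here’s a sketch of what a single standard entry might look like. The specific rationale, scope, and adoption notes are invented for illustration:

```
Standard: TypeScript for application code

Status:     Adopted
Rationale:  We optimize for full-stack feature development; one language
            across client and server minimizes cost of ownership.
Scope:      All new services and frontends.
Adoption:   New code only; existing services migrate when touched.
Exceptions: Data/ML workloads (covered by a separate standard).
```

Short, pointable, and grounded in an organizational goal - that’s the whole job.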
Common pitfalls:
Debate disguised as documentation. Standards should record decisions, not reopen them.
Rationale missing or vague. “We use TypeScript” is not as useful as “We use and invest in TypeScript as part of our goal to optimize for full-stack feature development.”
Too aspirational. A document listing ten standards with no adoption plan is not a standard; it’s a wish list.
AI Rules - Beyond Standards for Engineers
As the tooling landscape rapidly evolves - particularly with the rise of AI copilots and context-aware IDEs like Cursor - our standards are no longer just for human engineers.
Going forward, each engineering standard can - and should - be written with machine-enforceable artifacts in mind. These are companion files or structured outputs that help AI assistants apply standards automatically, not just reference them.
The simplest and most immediate example of this is a .cursor/rules/*.mdc file accompanying a standard. Cursor’s rules system allows us to encode best practices as contextual guidance that appears directly in the editor during development. By aligning our standards with this format and its best practices, we allow our tools, not just our teammates, to help enforce consistency, catch violations early, and promote reuse of established patterns.
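As a sketch, a companion rule file for a TypeScript standard might look like the following. The rule content here is hypothetical, and the frontmatter keys reflect Cursor’s rules format as of this writing:

```
---
description: TypeScript conventions for application code
globs: **/*.ts,**/*.tsx
alwaysApply: false
---

- All new application code is TypeScript; we optimize for full-stack
  feature development.
- Prefer explicit return types on exported functions.
- Avoid `any`; use `unknown` plus narrowing when a type is genuinely dynamic.
```

Because the rule is scoped by globs, the guidance surfaces only when an engineer (or their AI assistant) is actually working in matching files.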
Whether it's coding conventions, architectural preferences, or migration plans, the goal of these artifacts is to codify expectations in ways AI can act on. This doesn't replace critical thinking, but it does raise the floor.
2. Design Docs
Audience Intent: “Can I help improve this design before it’s built?”
Author Goal: Facilitate effective peer review and early feedback.
Design Docs are living documents in the software development lifecycle (SDLC). Their primary purpose is to expose your ideas to the scrutiny of your peers before you write the code, not after. They help reviewers get ahead of problems such as bad abstractions, slow queries, or misaligned interfaces.
Great design docs focus on clarity. They define the problem in business terms, give enough architectural detail for a senior engineer to reason about implications, and offer multiple solution paths - complete with tradeoffs and interfaces.
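As a starting point, a skeleton for such a doc might look like this - the section names are a suggestion, not a mandate:

```
1. Problem and business context
2. Requirements (functional and non-functional)
3. Proposed design (interfaces, data flow, key decisions)
4. Tradeoffs and risks
5. Alternatives considered (and why they were rejected)
6. Rollout / migration plan
```

Note that the proposed design comes before the alternatives - lead with the destination, not the journey.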
Common pitfalls:
Copy-paste documentation. Explaining Kafka Streams by pasting from its website doesn’t help your team understand why it’s the right fit here.
Bad LLM drafting. AI-generated text is fast, but can be irrelevant or misleading in context. Use AI, but if a design doc reads like ChatGPT wrote it, it probably hasn’t been thought through.
Unclear requirements. Vague goals lead to vague solutions. If the requirements aren't clear, the reviewer can't help you find the best path.
Not being an active participant. Design docs should be treated like code review. When a new version is published, the author needs to solicit feedback and be present to respond to questions.
Telling a story. More junior engineers often feel an impulse to walk through everything they tried before presenting the actual solution. This does not facilitate effective review. Lead with the solution, and use an “alternatives” section to record what was tried and why it didn’t work.
3. Technology Evaluations & Spikes
Audience Intent: “Are we picking the right tool for the job?”
Author Goal: Remove unknowns and make a recommendation.
When exploring new technologies, the deliverable is different: a Tech Evaluation Doc. These are not design docs; they’re research artifacts. Their purpose is to clarify the problem, define requirements, and then systematically evaluate multiple solutions - usually by scoring them against those requirements and organizational constraints.
These docs are essential for buy-in. When an engineer wants to adopt a new queue, ORM, or runtime, the strength of their argument shouldn’t depend on enthusiasm or gut feeling—it should rest on clearly defined criteria, working prototypes, and thoughtful comparisons.
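For example, a weighted scorecard for a hypothetical queue evaluation might look like this (the criteria, weights, and scores are invented for illustration):

```
Criterion (weight)           SQS   Kafka   RabbitMQ
Operational burden    (x3)     5       2          3
Throughput headroom   (x2)     3       5          3
Team familiarity      (x1)     4       2          4
---------------------------------------------------
Weighted total                25      18         19
```

The weights do the real work: this doc is explicitly saying that operational burden matters more than raw throughput, and the recommendation follows from that.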
Common pitfalls:
Lack of alignment on the problem. A compelling comparison is meaningless if no one agrees what problem you’re solving. The process can start here: have peers review the problem statement and requirements before any solutions are introduced.
Scorecards without weight. If everything scores 3/5 across the board, your doc is signaling indecision. Scores may start out arbitrary, but they’re ultimately relative - weight the criteria that matter most so the comparison actually discriminates.
Standards mismatch. Recommending a tool that conflicts with an existing standard (e.g. using Rust in a TypeScript shop) requires extra rigor—and often, an accompanying Standards RFC.
Not All RFCs Are Created Equal
It’s tempting to treat every document as an RFC—Request for Comments is broad enough to cover nearly any type of written proposal. In practice, however, reader expectations differ. An engineer skimming a standards doc wants a quick answer. A reviewer reading a design doc wants to catch subtle issues. A director reading a tech spike wants confidence that due diligence was done.
One of my favorite examples of well-optimized documentation is RFC 6749, the original OAuth2 specification. While I’ve heard many developers complain about its length, I still point engineers new to OAuth2 to that document (alongside a helpful YouTube explainer). Why? Because there is no fluff in RFC 6749. It names every actor in a complex flow, defines every mechanism precisely, and draws sharp boundaries between grant types, flows, and responsibilities. It is optimized not for sales, not for excitement—but for understanding. As a technical document, it’s enduring.
That said, I don’t advocate for every design doc to be as exhaustive as RFC 6749. Design docs should be pragmatic, sized to the scope and complexity of the problem. But when it comes to level of detail and clarity per word, RFC 6749 is a masterclass. It doesn’t waste time—but it also doesn’t assume knowledge. That’s the bar. Every reader walks away smarter.
Final Thought
Writing well in engineering isn’t about prose. It’s about clarity. It’s about matching the form and content of your document to the needs of the reader. Standards are for alignment. Designs are for scrutiny. Evaluations are for decision-making. Knowing which you’re writing—and who you’re writing for—is half the battle.