
How to Write a Comprehensive Audit Report: Lessons from the Trenches

Over years of auditing smart contracts and complex decentralized systems, we’ve learned that spotting vulnerabilities is only half the job. What you do with those findings, how you communicate them, and how you guide teams through remediation: that’s where the real impact lies.

A good audit report doesn’t just list bugs. It tells a story of how the system works, where it breaks, and how it can be fixed. It becomes a record of diligence, a reference for developers, and often the foundation for trust between a project and its community.

In this blog, we want to share the practical lessons we’ve learned from writing, revising, and rethinking audit reports: some from experience, others from hard mistakes.

Understanding the Purpose

The audit report is the only permanent trace of the audit.

Code may change. Teams may rotate. But the report stays. An audit report is more than a deliverable. It is a technical document that needs to speak clearly to several types of readers:

  • the core development team that will implement fixes
  • stakeholders and investors who need to understand the risk landscape
  • and sometimes, other auditors who may build on your work in the future

We’ve seen audit reports used in everything from fundraising decks to emergency incident reviews. This means the report has to be clear, honest, and self-contained. It should capture not only the issues found but also the methodology, scope, design assumptions, and any areas where risk was deferred or acknowledged.

The best reports we’ve written are the ones we can hand off months later and still feel confident that someone reading them for the first time would fully understand the system, the vulnerabilities, and the logic behind each severity label.

The report has to do a lot: inform, persuade, enable, and persist.

So, how do we write something that hits all four?

Start with Structure: The Skeleton of a Good Report

A messy or disorganized report dilutes even the sharpest findings. Over time, we’ve settled into a structure that balances clarity with depth:

  1. Executive Summary
  2. Scope and Methodology
  3. System Overview
  4. Findings and Recommendations
    • Categorized by severity
    • Linked to specific contracts/lines
  5. Best Practices & Suggestions
  6. Post-Audit Developer Response (Optional)
  7. Appendix (Tools, Hashes, Commit IDs, etc.)

Let’s break these down.

The Flow of a Good Report

A comprehensive audit report tells a story. It starts with what was reviewed, explains how the system works, walks through the methodology, and lays out the vulnerabilities discovered. It also reflects your thought process and diligence. A client should be able to read it and feel confident not just in the state of their code, but in the rigour of your analysis.

Begin with a short executive summary. This isn’t fluff; it’s an honest snapshot. If you found multiple high-severity bugs in a relatively mature codebase, just say so. If the protocol is reasonably secure but could benefit from gas optimizations and design cleanups, that goes here too.

Next, outline the scope and methodology. Detail exactly what you reviewed (contracts, commit hashes, deployed addresses) and what you didn’t. This protects both you and the client from future misunderstandings. We’ve learned to be painfully explicit here. Once, a client assumed we’d audited a proxy implementation that was entirely out of scope. The lack of clarity led to confusion during a production incident. Never again!
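A scope section doesn’t need to be elaborate; an explicit listing goes a long way. A hypothetical illustration (the file names and commit hash are made up, not from any real engagement):

    In scope:     contracts/Staking.sol, contracts/RewardDistributor.sol
    Commit:       3f2a9c1 (hypothetical)
    Out of scope: proxy implementation contracts, off-chain keeper scripts, deployment tooling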

When describing methodology, don’t just name-drop tools. Explain what you used (manual review, Slither, fuzzing with Echidna, etc.) and why. Each technique has its strengths. If you manually traced re-entrancy paths across multiple contracts or simulated edge-case interactions on a forked mainnet environment, note that. The idea is to help readers understand the depth and breadth of the review.
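For instance, if property-based fuzzing was part of the review, you can show the kind of invariant that was exercised. Below is a minimal, hypothetical Solidity sketch (not from any real audit); Echidna treats boolean functions prefixed with echidna_ as invariants it tries to falsify:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.19;

    // A toy vault used only to illustrate an Echidna property test.
    contract ToyVault {
        mapping(address => uint256) public deposits;
        uint256 public totalDeposits;

        function deposit() external payable {
            deposits[msg.sender] += msg.value;
            totalDeposits += msg.value;
        }

        function withdraw(uint256 amount) external {
            require(deposits[msg.sender] >= amount, "insufficient balance");
            deposits[msg.sender] -= amount;
            totalDeposits -= amount;
            payable(msg.sender).transfer(amount);
        }
    }

    // Echidna convention: functions prefixed with `echidna_` return true as
    // long as the invariant holds; the fuzzer searches for a call sequence
    // that makes one of them return false.
    contract ToyVaultInvariants is ToyVault {
        function echidna_deposits_backed() public view returns (bool) {
            // Internal accounting should never exceed the ETH actually held.
            return totalDeposits <= address(this).balance;
        }
    }

Even a short snippet like this tells the reader what the fuzzing actually checked, rather than just that a fuzzer was run.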

Capturing the System’s Essence

A good audit report proves you understand the system you’ve just audited. That means going beyond code syntax and diving into the protocol logic.

This section is often the most intellectually demanding. It requires you to abstract the implementation into a mental model: how contracts interact, where trust boundaries exist, what the assumptions are, and how state transitions happen.

If the project has innovative mechanics, like rebase tokens, L2 message relays, or upgradable proxies, call those out. Use diagrams. Even ASCII ones help.

“The protocol comprises a staking contract, a reward distributor, and a cross-chain bridge. Funds are locked on L1 and released on L2 via an oracle quorum of 5 signers.”
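Even a rough ASCII sketch of that flow makes the trust boundaries visible at a glance:

    User --stake--> [Staking (L1)] --lock funds--> [Bridge escrow (L1)]
                                                          |
                                               5-signer oracle quorum
                                                          |
                                                          v
                         [Reward Distributor (L2)] <--release-- [Bridge endpoint (L2)]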

We’ve found that writing this section forces you to double-check your understanding, and sometimes, that’s where hidden risks emerge.

A client once told us: “I felt more confident in your audit because it was clear you understood our business logic.” That stuck with us.

Documenting Findings with Precision

This is the beating heart of the audit report: the findings.

Each issue needs more than a title and a severity label. It should include a clear explanation of what the issue is, why it matters, how it can be reproduced, and what should be done about it.

We learned early on not to rush this part. For every vulnerability, ask yourself: Would someone reading this six months later understand it without you on a call? That’s the bar.

Severity Ratings

  • Critical – Can lead to loss of funds, permanent locking, or unauthorized access.
  • High – Serious bugs that are limited in impact or require specific conditions to exploit.
  • Medium – May lead to unexpected behaviours or moderate financial risks.
  • Low – Doesn’t directly impact security, but indicates poor design.
  • Informational – Gas optimizations, code clarity, or best practice deviations.

Each issue should contain:

  • Title (e.g., “Incorrect Access Control in EmergencyWithdraw”)
  • Severity (with justification)
  • Description (what the issue is and why it matters)
  • Reproduction steps or PoC
  • Remediation advice
  • Status (Open / Resolved / Acknowledged)

Example:

Title: Unbounded Loop May Cause Out-of-Gas Reverts

Severity: Medium

Description: The withdrawAll() function iterates over all user deposits without a cap. In stress testing, this resulted in reverts due to block gas limits.

Remediation: Consider batching withdrawals or limiting iterations per call.
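To make the finding above concrete, here is a hypothetical sketch of the vulnerable pattern and one possible batched remediation (only the withdrawAll() name comes from the finding; the storage layout and other names are illustrative):

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.19;

    contract Withdrawals {
        address[] public depositors;
        mapping(address => uint256) public balances;

        function deposit() external payable {
            if (balances[msg.sender] == 0) depositors.push(msg.sender);
            balances[msg.sender] += msg.value;
        }

        // Vulnerable pattern: iterates over every depositor in one transaction.
        // With enough depositors, the loop exceeds the block gas limit and reverts.
        function withdrawAll() external {
            for (uint256 i = 0; i < depositors.length; i++) {
                address user = depositors[i];
                uint256 amount = balances[user];
                if (amount > 0) {
                    balances[user] = 0;
                    payable(user).transfer(amount);
                }
            }
        }

        // One possible remediation: process a bounded batch per call.
        function withdrawBatch(uint256 start, uint256 count) external {
            uint256 end = start + count;
            if (end > depositors.length) end = depositors.length;
            for (uint256 i = start; i < end; i++) {
                address user = depositors[i];
                uint256 amount = balances[user];
                if (amount > 0) {
                    balances[user] = 0;
                    payable(user).transfer(amount);
                }
            }
        }
    }

Including a snippet like this alongside the PoC makes remediation discussions far shorter.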

Don’t Assume, Explain

Clients appreciate it when you explain why something is a vulnerability, not just that it is.

Context makes all the difference.

A storage collision in a proxy pattern isn’t just a bug; it’s a design flaw with downstream impact. A minor re-entrancy vector might only be exploitable under specific liquidity conditions. That nuance matters. Without it, you risk overstating or understating real-world risk.

Severity grading also requires judgment. Don’t just use a checklist. Think about exploitability, impact, and ease of remediation. A medium-severity bug in a time-sensitive pre-sale contract might be far more urgent than a high-severity issue in a paused or deprecated module.

Equally important is the tone. Avoid fear-mongering or arrogance. The goal is to inform, not to shame. Developers are more receptive when the report feels like collaboration, not indictment.

Beyond Vulnerabilities: Adding Value

Even when a codebase is secure, there are almost always things that can be improved: gas inefficiencies, redundant checks, non-standard patterns, or under-documented sections. This section isn’t mandatory, but we try to include it in every report. It’s where you can mention:

  • Misuse of tx.origin
  • Poor use of visibility modifiers
  • Code duplication
  • Unused variables or events
  • Missing NatSpec documentation

It shows that you care about code quality beyond just finding bugs.
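For example, the tx.origin misuse in the list above usually boils down to something like this hypothetical sketch:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.19;

    contract OwnerOnly {
        address public owner;

        constructor() {
            owner = msg.sender;
        }

        // Anti-pattern: tx.origin is the externally owned account that started
        // the transaction, so a malicious contract the owner interacts with can
        // relay a call here and still pass this check.
        function sweepUnsafe(address payable to) external {
            require(tx.origin == owner, "not owner");
            to.transfer(address(this).balance);
        }

        // Safer: msg.sender is the immediate caller.
        function sweep(address payable to) external {
            require(msg.sender == owner, "not owner");
            to.transfer(address(this).balance);
        }

        receive() external payable {}
    }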

When possible, we include a follow-up section that tracks whether issues were addressed. Mapping each finding to a client response (fixed, acknowledged, or won’t-fix) adds transparency and shows the audit wasn’t just a box-ticking exercise.

Writing with Clarity

Technical accuracy is non-negotiable, but clarity is what makes a report truly useful.

There’s a difference between writing for PhDs and writing for practitioners. The best audit reports are those that a senior engineer can act on without needing to cross-reference three academic papers.

Avoid jargon for the sake of jargon. You’re not impressing other auditors; you’re helping developers fix their code and giving users peace of mind.

Instead of writing:

“The access control logic is vulnerable to a race condition arising from non-atomic updates to the mutable state due to asynchronous function resolution in the context of reentrancy.”

Just say:

“The function can be re-entered before state is fully updated, allowing attackers to exploit inconsistent logic.”

Keep it clear, concise, and correct.
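If it helps the reader, back the plain-language description with a minimal, hypothetical sketch of the pattern and the usual checks-effects-interactions fix:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.19;

    contract Rewards {
        mapping(address => uint256) public owed;

        function fund(address user) external payable {
            owed[user] += msg.value;
        }

        // Vulnerable: the external call happens before state is updated,
        // so a malicious receiver can re-enter and claim repeatedly.
        function claimUnsafe() external {
            uint256 amount = owed[msg.sender];
            (bool ok, ) = msg.sender.call{value: amount}("");
            require(ok, "transfer failed");
            owed[msg.sender] = 0;
        }

        // Checks-effects-interactions: zero the balance first, then transfer.
        function claim() external {
            uint256 amount = owed[msg.sender];
            owed[msg.sender] = 0;
            (bool ok, ) = msg.sender.call{value: amount}("");
            require(ok, "transfer failed");
        }

        receive() external payable {}
    }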

Also, format the report for readability. Consistent headings, issue numbering, inline code snippets, and hyperlinked references all make the reader’s life easier. And always include an appendix (tool versions, commit hashes, custom test scripts) so the work can be verified or replicated.
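An appendix entry can be as terse as the following (all values are placeholders, not real project data):

    Repository:      github.com/example/protocol (hypothetical)
    Commit audited:  3f2a9c1
    Tools:           Slither, Echidna, manual review (exact versions pinned in the report)
    Artifacts:       fuzzing corpora, PoC scripts, and coverage notes shared with the client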

Common Mistakes to Avoid

  • Overloading the report with low-impact issues – Prioritise what matters.
  • Vague recommendations – “Use a more secure pattern” helps no one.
  • Ignoring gas usage or test coverage insights – These are part of audit quality.
  • Neglecting to mention what wasn’t in scope – Always call this out.
  • Not versioning the report – Always include a report date and version. We’ve had reports come back months later with changes and no way to track deltas.

Final Thoughts

Writing audit reports isn’t glamorous. But it’s arguably the most impactful part of the work. A clear, thorough report can prevent millions in losses. A sloppy one can create a false sense of safety, or worse, lead to missed vulnerabilities.

In this space, where code is law and exploits move fast, the audit report is often the only written record of diligence. It’s a reflection of your thought process, your standards, and your credibility.

So take the time. Write like it matters, because it does.
