Precursor Security · Intelligence Library

How to read a penetration testing report

9 April 2026 · Precursor Security

A penetration testing report documents confirmed vulnerabilities found during a security engagement. It contains an executive summary, methodology, findings ordered by severity, proof-of-concept evidence, and specific remediation guidance for each issue. Reading it correctly means understanding CVSS scores, distinguishing exploited vulnerabilities from theoretical ones, and translating technical findings into a prioritised remediation plan.

Most IT managers receive a penetration testing report and open the findings list first. That is the wrong starting point. The executive summary tells you the scope, the overall risk rating, and the handful of issues that need immediate attention. Start there, then work through the findings in severity order. This guide explains what each section contains and what you should do with it.


[INSERT VISUAL: Anatomy diagram showing the 6 report sections (executive summary, methodology, findings, evidence, remediation, retest) as horizontal labelled blocks on a dark navy (#0e0e42) background with blue (#2c9eff) section headers. Each block includes a one-line description of its purpose.]


What does a penetration testing report contain?

A professional penetration testing report contains six sections. Each has a distinct purpose, and a weak or missing section is a meaningful quality signal.

Executive summary. Written for senior stakeholders who need the strategic picture without the technical detail. It covers the scope of the engagement (which systems were tested and over what period), the overall risk posture, a count of findings by severity, and the highest-priority issues requiring immediate attention. A well-written executive summary should be readable by a board member with no technical background. If the executive summary is dense with CVE references and protocol names, that is a red flag about the quality of the engagement overall.

Methodology. Explains how the test was conducted: the testing approach (black-box, grey-box, or white-box), the frameworks used (OWASP Testing Guide, PTES, CREST guidelines), tools employed, and any constraints applied. This section gives the findings credibility by showing the work was systematic. It also records the rules of engagement: start and end dates, authorised IP ranges, and any systems explicitly excluded from scope.

Findings by severity. The operational core of the report. Each finding is a documented vulnerability or security weakness, ranked from Critical down to Informational. A structured finding entry contains: a title, severity rating, CVSS score, affected asset, description, step-by-step reproduction evidence, business impact statement, and remediation guidance.

Evidence of exploitation. Professional reports include proof-of-concept (PoC) evidence for each confirmed finding: typically annotated screenshots, HTTP request and response captures, command output, or extracted data (sanitised where necessary). The evidence demonstrates that the vulnerability was actually exploited, not just theoretically identified.

Remediation guidance. Every finding carries specific, actionable remediation guidance. Generic advice such as "apply patches" or "harden configuration" is not sufficient. Guidance should name the specific configuration change, patch version, code fix, or architectural modification required.

Retest scope. Better reports include a retest summary section, or reference to a separate retest engagement once fixes are applied. A retest confirms that each remediation was implemented correctly and the vulnerability is no longer exploitable.


How is a penetration testing report structured?

The structure depends on the provider, but all CREST-accredited firms produce reports aligned to the CREST Penetration Testing Guide, which mandates scope definition, methodology documentation, evidence-backed findings, risk ratings, and remediation guidance. Reports may also reference PTES (Penetration Testing Execution Standard) phases or OWASP Testing Guide test case identifiers.

Where you see WSTG (Web Security Testing Guide) test case codes alongside findings, that is a positive signal: it means findings are mapped to a recognised methodology, not generated ad hoc. A report citing WSTG-INPV-05 for SQL injection or WSTG-ATHN-07 for a weak password policy is easier to verify and cross-reference than one that simply names the vulnerability without a methodology anchor.
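As an illustration of how methodology anchors help, you can keep a small local index of the WSTG test-case codes you see most often and look findings up against it while reviewing. The codes and titles below are from OWASP WSTG v4.2; the helper function itself is a sketch, not a standard tool:

```python
# A few common WSTG v4.2 test-case codes for cross-referencing findings.
WSTG_INDEX = {
    "WSTG-INPV-05": "Testing for SQL Injection",
    "WSTG-INPV-01": "Testing for Reflected Cross Site Scripting",
    "WSTG-ATHN-07": "Testing for Weak Password Policy",
}

def methodology_anchor(code: str) -> str:
    """Look up the WSTG test case a finding claims to map to."""
    return WSTG_INDEX.get(code, "not in local index; verify against the WSTG")

print(methodology_anchor("WSTG-INPV-05"))  # Testing for SQL Injection
```

A finding whose code is not in the index is not necessarily wrong, but it is worth checking against the published WSTG before accepting the mapping.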


How do you interpret CVSS scores and severity ratings?

CVSS (Common Vulnerability Scoring System) is the standardised framework for rating vulnerability severity on a scale of 0 to 10. Most penetration testing reports use CVSS v3.1 Base Scores as the primary rating. FIRST (Forum of Incident Response and Security Teams) published CVSS v4.0 in November 2023; some providers are beginning to transition, though v3.1 remains the norm in commercial reports as of 2025.

CVSS scores are calculated from three metric groups: Base (inherent characteristics of the vulnerability), Temporal (how exploitability changes over time, for example as public exploit code becomes available), and Environmental (how your specific infrastructure context affects impact). Most reports present the Base Score only. If yours includes Temporal or Environmental scores, those represent a more accurate picture of risk in your specific environment.
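The three metric groups are visible in the CVSS vector string that accompanies each score, where every metric appears as an abbreviation/value pair. A minimal Python sketch of splitting a v3.1 vector by metric group (the example vector is made up for illustration, not taken from any real finding):

```python
# CVSS v3.1 metric abbreviations, grouped. Anything not Base or Temporal
# (CR/IR/AR and the M-prefixed "Modified" metrics) is Environmental.
BASE = {"AV", "AC", "PR", "UI", "S", "C", "I", "A"}
TEMPORAL = {"E", "RL", "RC"}

def split_vector(vector: str) -> dict:
    """Group the metrics of a CVSS v3.1 vector string by metric group."""
    parts = vector.split("/")
    assert parts[0] == "CVSS:3.1", "expected a v3.1 vector"
    groups = {"base": {}, "temporal": {}, "environmental": {}}
    for part in parts[1:]:
        key, value = part.split(":")
        if key in BASE:
            groups["base"][key] = value
        elif key in TEMPORAL:
            groups["temporal"][key] = value
        else:
            groups["environmental"][key] = value
    return groups

# Example: network-reachable, low-complexity, with a functional public exploit.
groups = split_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/E:F/RL:O")
print(groups["base"])      # the vulnerability's inherent characteristics
print(groups["temporal"])  # exploit maturity and remediation level
```

If a report includes Temporal or Environmental metrics in its vectors, that is usually a sign the tester scored findings against your environment rather than quoting generic values.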

Here is how the CVSS v3.1 bands map to severity and remediation priority:

CVSS score    Severity        Remediation window
9.0 - 10.0    Critical        Within 24-72 hours; emergency patching if required
7.0 - 8.9     High            Within 5-10 business days
4.0 - 6.9     Medium          Within 30 days
0.1 - 3.9     Low             Within 90 days or next planned maintenance cycle
0.0           Informational   Reviewed at next quarterly security review
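The banding above is mechanical enough to encode directly, which is useful if you track findings in a spreadsheet or ticketing system. A minimal Python sketch (the function name and window strings are ours, taken from the table, not from any standard library):

```python
def severity_band(score: float) -> tuple[str, str]:
    """Map a CVSS v3.1 base score to its severity label and remediation window."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores run from 0.0 to 10.0")
    if score >= 9.0:
        return "Critical", "within 24-72 hours; emergency patching if required"
    if score >= 7.0:
        return "High", "within 5-10 business days"
    if score >= 4.0:
        return "Medium", "within 30 days"
    if score > 0.0:
        return "Low", "within 90 days or next planned maintenance cycle"
    return "Informational", "reviewed at next quarterly security review"

print(severity_band(9.1))  # ('Critical', 'within 24-72 hours; emergency patching if required')
print(severity_band(5.4))  # ('Medium', 'within 30 days')
```

The band boundaries here are the standard CVSS v3.1 qualitative ratings; the remediation windows are the typical targets from the table and should be adjusted to your own policy.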

CVSS scores are not the complete picture. A Medium finding (CVSS 5.4) on an isolated development server with no internet access carries different business weight than the same score on your payment processing application. Always read the business impact statement alongside the score. The score measures the vulnerability's inherent characteristics; the impact statement tells you what an attacker could achieve in your environment.

[INSERT VISUAL: CVSS severity spectrum as a horizontal band from 0.0 (green/Informational) through yellow (Medium 4.0-6.9) to deep red (Critical 9.0-10.0). Each band shows the severity label, CVSS range, and remediation window. White/ceramic background (#F5F5F7), accent #2c9eff for labels.]

A note on CVSS v4.0: the new version removes the "Scope" metric, introduces four distinct score types (CVSS-B, CVSS-BT, CVSS-BE, CVSS-BTE), and adds a Supplemental metric group with contextual labels such as Automatable and Recovery. If your provider has adopted v4.0, ask them to walk you through the score type being used, as CVSS-BTE (all three groups combined) gives the most accurate picture of risk for your specific environment.
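The v4.0 nomenclature reduces to a simple lookup of which metric groups each score type combines. A sketch for reference (note that v4.0 renames the Temporal group to Threat; the helper function is illustrative):

```python
# CVSS v4.0 score types and the metric groups each one combines.
V4_SCORE_TYPES = {
    "CVSS-B":   ["Base"],
    "CVSS-BT":  ["Base", "Threat"],  # Threat replaces v3.1's Temporal group
    "CVSS-BE":  ["Base", "Environmental"],
    "CVSS-BTE": ["Base", "Threat", "Environmental"],
}

def describe(score_type: str) -> str:
    """Spell out which metric groups a v4.0 score type includes."""
    return score_type + " = " + " + ".join(V4_SCORE_TYPES[score_type])

print(describe("CVSS-BTE"))  # CVSS-BTE = Base + Threat + Environmental
```

When a provider quotes a bare number under v4.0, this is the first question to ask: which of the four types is it?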


What makes a good penetration testing report versus a weak one?

Report quality varies significantly between providers. The table below shows the difference between a strong report and a weak one across the elements that matter most to an IT manager or IT director using the output for remediation planning and compliance.

Executive summary
  Strong: Readable by a non-technical board member; covers risk posture, scope, and top findings.
  Weak: Dense with CVE references and protocol names; no business context.

Finding severity
  Strong: CVSS score plus qualitative label plus business impact statement.
  Weak: CVSS score only, or qualitative label only.

Evidence
  Strong: Annotated screenshots, HTTP request/response captures, extracted data.
  Weak: No screenshots; findings listed without proof they were exploited.

Remediation guidance
  Strong: Specific to the technology, version, and configuration in question.
  Weak: Generic ("apply patches", "harden configuration").

Attack chains
  Strong: Documents how low or medium findings combine to create high-impact paths.
  Weak: Treats every finding in isolation.

Finding count
  Strong: Proportionate to scope; Critical/High/Medium findings dominate.
  Weak: Padded with 50+ Informational items about HTTP headers.

Retest scope
  Strong: Includes retest guidance and confirms vulnerability closure.
  Weak: No retest provision; client cannot verify fixes worked.

Methodology
  Strong: Defines black/grey/white box approach, frameworks (OWASP/CREST), constraints.
  Weak: Vague or absent; no framework reference.

Red flags to look for. Scanner-only output is the most common quality problem. If a report reads like a formatted export from Nessus or Qualys with no manual testing evidence, it was produced from automated scanning. Automated scanners miss business logic flaws, chained attack paths, and anything requiring contextual judgement. A scan is not a penetration test. The Cobalt State of Pentesting 2024 report found that Critical and High findings together account for roughly 35-40% of all findings across web application engagements. A report where the majority of findings are Informational HTTP header observations warrants scrutiny.

Green flags. Step-by-step reproduction steps that your development team can follow in a test environment. Attack chain documentation that explains how a Medium information disclosure finding combines with a weak credential policy to enable full account takeover. Business impact statements phrased in terms of data exposure, regulatory consequence, or financial risk, not just technical vulnerability descriptions.


How do you read the findings section of a penetration testing report?

Each finding entry in a professional report contains: a title, severity badge, CVSS score, affected asset, description of the vulnerability, step-by-step reproduction evidence, business impact statement, and remediation guidance. Findings appear in descending severity order. Read them in that order.
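The field list above maps naturally onto a record structure, which is handy when loading findings into a tracking system. A hypothetical Python sketch (the field names are ours, taken from the list above; there is no standard finding schema implied):

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """One entry from the findings section of a penetration testing report."""
    title: str
    severity: str                 # Critical / High / Medium / Low / Informational
    cvss_score: float             # CVSS v3.1 Base Score
    affected_asset: str           # URL, hostname, or system identifier
    description: str
    reproduction_steps: list = field(default_factory=list)  # PoC evidence
    business_impact: str = ""
    remediation: str = ""

# Illustrative entry, not from a real report.
finding = Finding(
    title="SQL injection in login form",
    severity="Critical",
    cvss_score=9.1,
    affected_asset="https://app.example.com/login",
    description="User-supplied input reaches a SQL query unsanitised.",
)
print(finding.severity, finding.cvss_score)
```

If a report's findings cannot be mapped onto a structure like this because fields are routinely missing, that is itself a quality signal.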

[INSERT VISUAL: Annotated mock-up of a single finding entry. Shows labelled fields: severity badge (rose/red for Critical), CVSS score (9.1), affected asset (application URL), description text, evidence label (with thumbnail screenshot placeholder), business impact paragraph, and remediation guidance block. Navy background (#0e0e42) or ceramic (#F5F5F7). Alt text: "Anatomy of a single penetration test finding showing title, CVSS score, evidence, impact, and remediation fields."]

What "Proof of Concept" means. A Proof of Concept (PoC) is a demonstration that a vulnerability is genuinely exploitable in your environment. It is the difference between a scanner flagging a version number as potentially vulnerable and a tester actually running an exploit and capturing the output. A report without PoC evidence for High and Critical findings is not a penetration testing report. It is a vulnerability scan report. The two are not equivalent, and the distinction matters for compliance purposes.

Interpreting evidence screenshots. Evidence in a penetration testing report is annotated to show what happened and why it matters. An SQL injection screenshot shows the injected payload in the request, the database error or extracted data in the response, and a note explaining what was accessed. When reviewing evidence, focus on the data visible in the response: database table names, usernames, file contents, session tokens. That is the impact evidence. The payload in the request is the mechanism; the data in the response is the consequence.

Vulnerability versus confirmed exploit. A vulnerability is a weakness that could, under certain conditions, be exploited. A confirmed exploit is a vulnerability that was exploited during the engagement, with evidence recorded. Good reports are explicit about which category each finding falls into. If a tester identified a vulnerability but did not exploit it (due to scope restrictions or business-hours constraints), that should be stated alongside an explanation of the potential impact.

Attack chains. Some vulnerabilities only become critical when chained with others. An information disclosure finding combined with a weak credential policy may enable full account takeover. A read-only SSRF vulnerability in an internal service might allow access to cloud metadata credentials that grant administrative access to the entire infrastructure. Good reports identify these chains explicitly and rate the combined risk, not just the individual components.
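One practical way to keep chained risk visible during triage is to record each chain as its own entry, referencing the individual findings it combines. A sketch of such a structure (the field names and example data are ours):

```python
from dataclasses import dataclass

@dataclass
class AttackChain:
    """Two or more findings that combine into a higher-impact attack path."""
    finding_refs: list      # references to the individual finding IDs
    combined_severity: str  # rated for the chain as a whole, not the max part
    narrative: str          # how the steps link together

# Illustrative: two Medium findings that chain into account takeover.
chain = AttackChain(
    finding_refs=["F-07 (information disclosure, Medium)",
                  "F-12 (weak credential policy, Medium)"],
    combined_severity="High",
    narrative="Disclosed usernames plus guessable passwords enable account takeover.",
)
print(chain.combined_severity)  # the chain is triaged at High, not Medium
```

Triaging the chain at its combined rating prevents two individually tolerable findings from both sitting in the 30-day queue while together they represent an urgent path.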


What should you do after receiving a penetration testing report?

The Equifax breach of 2017, which exposed data belonging to approximately 147 million individuals, is the most cited example of what happens when security assessment output is not acted upon. A US Government Accountability Office report (GAO-18-559, 2018) found that the specific Apache Struts vulnerability exploited in the breach had been identified in vulnerability assessment output months earlier. The failure was not in the report. It was in the process that should have followed it.

Step 1: Triage by severity. Read the executive summary and the full findings list before assigning any remediation tasks. Confirm your team understands the scope, the overall risk rating, and the most critical issues. Create a triage register: a prioritised list of findings mapped to the systems they affect.

Step 2: Assign named owners. Every finding needs a named owner responsible for remediation. Map findings to development team leads, infrastructure managers, or cloud platform owners depending on the affected system. Without named ownership, findings sit unresolved.
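Steps 1 and 2 can be combined into a simple triage register: sort findings by severity and attach a named owner per affected system. A minimal Python sketch (the severity ranking and example data are ours):

```python
SEVERITY_RANK = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3, "Informational": 4}

def build_triage_register(findings, owners):
    """Order findings by severity and attach the owner for each affected system."""
    register = []
    for f in sorted(findings, key=lambda f: SEVERITY_RANK[f["severity"]]):
        register.append({**f, "owner": owners.get(f["system"], "UNASSIGNED")})
    return register

# Illustrative findings and ownership map.
findings = [
    {"ref": "F-03", "severity": "Medium", "system": "internal-network"},
    {"ref": "F-01", "severity": "Critical", "system": "web-app"},
    {"ref": "F-02", "severity": "High", "system": "web-app"},
]
owners = {"web-app": "dev team lead", "internal-network": "infrastructure manager"}

for entry in build_triage_register(findings, owners):
    print(entry["ref"], entry["severity"], "->", entry["owner"])
```

Any entry that comes out as UNASSIGNED is a gap in your ownership map and should be resolved before deadlines are set.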

Step 3: Set remediation timelines. Establish clear deadlines based on severity, using the table below as a starting point. Adjust based on compensating controls and exposure level.

Severity        Target remediation window
Critical        Within 24-72 hours (emergency patching if required)
High            Within 5-10 business days
Medium          Within 30 days
Low             Within 90 days or next planned maintenance cycle
Informational   Reviewed at next quarterly security review

Step 4: Document accepted risks. Not every finding can be remediated immediately. Some require significant architectural changes or vendor patches not yet available. For these, document an accepted risk entry: the finding reference, the reason remediation is deferred, the compensating control in place, and the review date. For compliance purposes, accepted risk documentation demonstrates that findings were reviewed and a conscious business decision was made, not overlooked.
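The accepted risk entry described above is structured enough to keep as a formal record. A hypothetical sketch of those fields (no standard schema implied; the example data is illustrative):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AcceptedRisk:
    """A finding whose remediation is consciously deferred, with an audit trail."""
    finding_ref: str
    reason_deferred: str       # why remediation cannot happen now
    compensating_control: str  # what limits the exposure in the meantime
    review_date: date          # when the decision is revisited

entry = AcceptedRisk(
    finding_ref="F-09",
    reason_deferred="Vendor patch not yet available for the affected appliance.",
    compensating_control="Management interface restricted to VPN-only access.",
    review_date=date(2026, 7, 1),
)
print(entry.finding_ref, "review due", entry.review_date.isoformat())
```

Making the review date a required field is the point: an accepted risk without a revisit date quietly becomes a permanent one.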

Step 5: Schedule a retest. Once remediation is complete on Critical and High findings, schedule a retest. A retest is a targeted engagement where the tester attempts to re-exploit each remediated vulnerability using the same steps documented in the original report. Retests are shorter and less expensive than the original engagement. They produce a retest report confirming which findings are resolved, partially resolved, or still open. The Cobalt 2023 State of Pentesting Report found that 37% of pentest findings are not remediated within 90 days of delivery. Scheduling the retest creates a forcing function.

Step 6: Report to stakeholders. Summarise key findings and remediation status for your board or senior leadership. Use the executive summary as a foundation. Frame the discussion in terms of business risk and remediation progress, not technical vulnerability details. Regular reporting keeps security visible at leadership level and builds the case for continued investment.

For web application assessments, the web application penetration testing guide covers what to expect during the engagement and how reports are structured for application-layer findings specifically.


How does a penetration testing report support compliance requirements?

A penetration testing report is evidence. For most compliance frameworks, it demonstrates that your organisation has conducted independent security assessment and is managing identified risks.

ISO 27001 (Annex A.8.8). ISO 27001:2022 Annex A.8.8 requires organisations to manage vulnerabilities in information systems in a timely manner. Penetration testing reports, combined with remediation records and retest evidence, directly satisfy this control. Auditors expect testing at defined intervals (typically annually or after significant changes), documented findings, and tracked remediation.

PCI DSS v4.0 (Requirement 11.4). Requirement 11.4 of PCI DSS v4.0, mandatory from March 2025, requires both external and internal penetration tests annually and after significant infrastructure or application changes. The standard mandates a methodology aligned to OWASP, NIST, or PTES, with specific documentation covering scope, approach, results, and remediation sign-off. Segmentation testing for cardholder data environment boundaries must also be documented.

Cyber Essentials Plus. Cyber Essentials Plus includes a technical verification assessment by a CREST-accredited certification body. A clean penetration testing report from earlier in the year identifies issues before the certification assessment and provides evidence of proactive security management. The five CE controls (firewalls, secure configuration, access control, malware protection, patch management) map directly to common finding categories in penetration testing reports.

NHS DSPT. The NHS Data Security and Protection Toolkit requires organisations handling NHS patient data to demonstrate compliance with the National Data Guardian's standards. DSPT Assertion 4 covers network and system security, including requirements for regular vulnerability testing. A formal penetration testing report with findings and remediation records provides direct evidence for DSPT submission, and NHS trusts increasingly require this evidence from their supply chain.


Common questions about penetration testing reports

How long should a penetration testing report be?

There is no fixed length. A web application penetration test on a medium-complexity application typically produces a 30 to 60 page report. An internal network test across a large Active Directory environment might produce 80 to 120 pages. Length matters less than completeness. Every finding needs a description, evidence, impact statement, and remediation guidance. A short report with incomplete findings is worse than a long report with thorough documentation.

Can I share my penetration testing report with third parties?

This depends on your engagement contract and information security policy. Most organisations treat penetration testing reports as highly confidential: they document exactly how your systems can be compromised. If a third party (insurer, regulator, enterprise client) requires evidence of testing, consider sharing the executive summary only, or requesting a letter of attestation from your testing provider rather than the full technical report.

What is a sample penetration testing report?

A sample penetration testing report is a redacted or anonymised example of a real report, sometimes published by testing providers to illustrate their reporting format. CREST-accredited providers sometimes share sample reports on request. If you are evaluating a provider and want to understand the quality of their reporting before commissioning, ask to see a redacted sample. Report quality is a direct indicator of engagement quality.

Do I need a penetration testing report template?

Your testing provider should have their own validated template aligned to CREST or CHECK standards. You do not need to supply one. If a prospective provider asks you for a template to populate, that is a red flag: it suggests they lack established reporting processes of their own.

What is the difference between a penetration testing report and a vulnerability assessment report?

A vulnerability assessment report documents identified vulnerabilities, typically generated from automated scanning tools. A penetration testing report documents exploited vulnerabilities, with evidence that an attacker was able to leverage each issue to gain access, escalate privileges, or extract data. A penetration test is manual, contextual, and adversarial. A vulnerability assessment is automated and observational. The two reports look superficially similar, but a penetration testing report carries significantly more evidential weight for compliance and risk management. For a comparison of the two approaches, see vulnerability assessment vs penetration testing.

How often should I commission a penetration test?

Most compliance frameworks require annual testing as a minimum. PCI DSS v4.0 requires annual external and internal penetration tests, plus testing after significant changes. ISO 27001 auditors expect a defined testing frequency aligned with your risk assessment. For organisations with active development pipelines, cloud migrations, or recent acquisitions, annual testing is a floor, not a ceiling. Cobalt's 2024 research found that organisations conducting more than one penetration test per year remediate findings 2.4 times faster than those testing annually. Aligning test frequency with your change cadence produces better remediation outcomes than treating it as a fixed annual event. See also the web application penetration testing checklist for guidance on what to prepare before each engagement.

Expert Guidance

Put this guide into practice

Our CREST-certified penetration testers can validate your configuration, identify gaps, and provide an independent audit report.