01
Chapter one
What regulators actually expect
An auditor's question isn't "do you have a log?" It's "produce the approval for this specific document, in this specific role, at this specific version, right now." A log that passes the first test and fails the second is a decoration, not evidence.
Regulators don’t evaluate document-management systems against a feature checklist. They evaluate them by asking questions and watching how the answers materialize. The questions are specific. They’re about individual documents, individual approvers, individual dates. The auditor wants to see evidence surface, not be assembled.
Most document-management programs that fail audits have audit trails. The trails exist. Events are captured. The failure is almost always at one of three layers: the events don’t attribute to specific people, the log is editable in ways that compromise integrity, or retrieval requires preparation that the auditor didn’t give you. None of these failures are dramatic individually — but they’re exactly what experienced auditors test for.
About this document
Not "in general." The auditor picks a sample — often the first SOP they see, or one they select because of its role in a recent incident — and asks for that document's full history.
By a named person
"The quality team approved it" is not an answer. The auditor expects a specific person, a specific role, and enough attribution metadata to trace the approval to an individual.
At a precise time
The timestamp must be system-generated, not user-declared. Accurate to at least the minute. Tied to a server clock the organization can't adjust locally.
In under a minute
The evidence must be produced on the spot, not after an email to IT, not after a report generation, not after three days of preparation. Retrieval is the test.
An auditor with twenty years of experience can tell the difference between a system that produces evidence and a system that reconstructs it within the first two questions.
The rest of this guide walks through the specific properties that separate the two kinds of systems — and what ISO 9001 and FDA Part 11 each demand in writing.
02
Chapter two
Anatomy of an audit-worthy trail
Eight structural properties an audit trail must satisfy. Each addresses a specific failure mode regulators know how to probe for.
An audit trail isn’t a single feature; it’s a set of structural properties that together make the evidence defensible. The audit log in an active document lifecycle is built to satisfy all eight. Most alternative systems satisfy some but not all, and that gap is exactly where regulators write their findings.
Every event captured
Creation, edit, approval, publication, archive, reminder, signature, revert. No gaps. No "this event type is not logged."
Named Entra identity
Every event tied to a specific person in the organization's identity system. Not a group, not an automation account.
System clock, not user
The timestamp comes from the server, not from the user's machine. Time-zone handled consistently. UTC for the stored value, localized for display.
Tied to document state
Every event references the exact minor or major version of the document at that moment. "Approved v2.3" not just "approved."
No edits, no deletes
Once written, an entry cannot be modified or removed by anyone — including administrators. Structural, not policy-enforced.
Role + comment + scope
Beyond who-did-what-when: in what role, with what comment, against which document (protocol code + type).
From the document, directly
Not a separate admin console. The Quality Manager opens the document, clicks the menu, views the audit log. No IT ticket required.
For sharing with auditors
The log can be exported as CSV or PDF for inclusion in audit reports. With a hash or signature that proves it wasn't tampered with between export and review.
These eight properties are what the following sections unpack, mapped to specific regulatory expectations.
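Taken together, the eight properties imply a minimal shape for a single audit entry. The sketch below is illustrative only: the field names are assumptions, not the product's actual schema, and a real implementation would persist entries in append-only storage rather than in memory.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: an entry can't be modified after creation
class AuditEntry:
    event_type: str   # creation, edit, approval, publication, archive, ...
    actor_upn: str    # named identity, e.g. "jane.smith@company.com"
    actor_role: str   # role the actor held at the moment of the event
    document_id: str  # protocol code, e.g. "SOP-CLIN-0047"
    version: str      # exact version the event applies to, e.g. "3.0"
    comment: str = ""
    # Stored in UTC from the server clock; localized only for display.
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

entry = AuditEntry(
    event_type="approval",
    actor_upn="jane.smith@company.com",
    actor_role="Quality Manager",
    document_id="SOP-CLIN-0047",
    version="3.0",
    comment="Approved after regulatory review",
)
```

Note the version binding and role fields: an entry that says only "approved" without them cannot answer the auditor's "which version, in what role" questions.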
03
Chapter three
ISO 9001 — clause 7.5.3 and documented evidence
ISO 9001 doesn't prescribe an "audit log" as such. It requires evidence that the documented-information control process was followed. That evidence has to come from somewhere — and for regulated organizations, the audit log is the only defensible source.
ISO 9001 clause 7.5.3 (“Control of documented information”) requires that documented information the QMS relies on is:
Properly identified and described
Title, author, reference number, date. Captured in the audit log every time a document event occurs.
Reviewed and approved for suitability and adequacy
The approval event captures who reviewed, in what role, against which version, with what comments.
Controlled for changes
Every edit event captured. Version history shows the evolution. No changes "just appear" without attribution.
Obsolete documents controlled
Archive events show when a version was superseded. End-users can't see archived documents. Compliance can retrieve them for inquiries.
During a surveillance audit, the Quality Manager doesn’t produce an “ISO 9001 report” as such. They open a specific document — the one the auditor asked about — and show its audit log. The log answers the clause 7.5.3 question in context: who, what, when, with what evidence. The audit moves on. The standard is met because the evidence is available.
ISO 9001 doesn't ask for the audit log by name. It asks for the evidence that clause 7.5.3 has been followed. In modern document-management programs, the audit log is the evidence.
04
Chapter four
21 CFR Part 11 — the specifically regulated case
Where ISO 9001 asks for evidence in general, Part 11 is specific. It names the audit trail explicitly and says what it must do. And it says what happens when it doesn't.
FDA 21 CFR Part 11 is different from ISO 9001 in kind, not just in degree. ISO is a management-system standard; Part 11 is a regulation. Compliance with Part 11 is binary — either your electronic records are Part 11-compliant or they aren’t. Failing Part 11 has direct consequences: FDA inspection findings, Form 483 observations, Warning Letters, in extreme cases consent decrees.
The most-cited Part 11 clause for document-management systems is §11.10(e):
| Property | What §11.10(e) says | How the active lifecycle satisfies it |
| --- | --- | --- |
| Secure | Trails must be protected against unauthorized access and modification. | Append-only, integrated with Entra authentication, inherits tenant-level access controls. |
| Computer-generated | The trail is produced by the system, not by users manually entering log entries. | Events fire automatically on every lifecycle action. Users can't add or omit entries. |
| Time-stamped | Each entry has an independently generated date and time. | Timestamps come from the SharePoint platform, not from user submissions. |
| Captures create/modify/delete | All operator actions that create, modify, or delete electronic records. | Every such event is captured — creation, revision, approval, archive. Plus the events that satisfy §11.10(d) and §11.10(k) by extension. |
| Does not obscure | The trail cannot obscure previously recorded information. | Append-only design means earlier entries remain visible regardless of later edits to the document itself. |
A critical nuance. The product provides the capability customers use in their Part 11 compliance program. It doesn’t provide a “Part 11 validated system”: that’s a different assertion, one that requires a validation scope (IQ/OQ/PQ) documented and maintained by the customer’s QA team. The customer’s validation posture remains their responsibility. What we provide is the audit-trail capability that §11.10(e) explicitly requires.
§11.10(e)
Audit trail requirements — the clause customers cite most often when evaluating document-management systems.
§11.10(d)
Limiting system access to authorized individuals. Satisfied via Entra authentication integrated with the tenant's identity posture.
§11.50
Signature manifestations — printed name, date, meaning of signature. PAdES via DocuSign plus the audit log's signature events produce this evidence.
05
Chapter five
Named-user attribution
The single most-cited audit-trail failure. "Approved by the QA team" isn't attribution. Part 11 requires attribution to a named individual.
Every event in the audit log resolves to a specific Microsoft Entra (Azure AD) identity — a person with a name, a role, and an authenticated session that produced the event. This is structural. It can’t be bypassed by the user, by admins, or by the product itself.
The contrast with older document-management systems is sharp. In legacy systems, shared service accounts were common: “qualityapproval@company.com” as a shared mailbox that multiple quality-team members used. In those systems, audit entries attribute to the shared account, not to the person who actually acted. Under Part 11’s named-person requirement, every such entry is an attribution failure waiting to be flagged.
Shared service account
"qualityapproval@company.com" used by multiple people. Audit entries can't identify who actually acted. Non-defensible under Part 11 §11.10(g).
Group approval
"QA Department" as approver rather than a specific person. Any team member can approve; the log can't tell which.
Named Entra identity
"Jane Smith, Quality Manager, jane.smith@company.com" — attribution to a specific person authenticated through Entra with their individual credentials.
Named identity with role
Attribution to a specific person acting in a specific role at a specific moment. "Jane Smith, in the role of Quality Manager, approved at 2:15 PM."
The MFA question. Part 11 §11.10(g) also requires that access to electronic records uses “authority checks to ensure that only authorized individuals can use the system.” Most customers implement this via multi-factor authentication at the tenant level — MFA on the Entra identity that underlies every audit event. The active-lifecycle product inherits the tenant’s MFA posture; if MFA is required for SharePoint access, every audit event comes from an MFA-authenticated session.
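One way to operationalize the shared-account problem before an auditor finds it is a pre-audit lint over the log: flag every entry whose actor is a known shared mailbox or a group rather than an individual. A minimal sketch follows; the account lists and field names are illustrative (a real check would query the directory, not hardcoded sets).

```python
# Hypothetical pre-audit lint: flag entries attributed to shared or
# group accounts instead of a named individual. The account and group
# lists below are illustrative examples, not real configuration.
SHARED_ACCOUNTS = {"qualityapproval@company.com", "docs-admin@company.com"}
GROUP_NAMES = {"QA Department", "Quality Team"}

def attribution_findings(entries):
    """Return (event_id, reason) for every non-individual attribution."""
    findings = []
    for e in entries:
        actor = e["actor"]
        if actor in SHARED_ACCOUNTS:
            findings.append((e["event_id"], "shared service account"))
        elif actor in GROUP_NAMES:
            findings.append((e["event_id"], "group, not a person"))
    return findings

log = [
    {"event_id": 1, "actor": "jane.smith@company.com"},
    {"event_id": 2, "actor": "qualityapproval@company.com"},
    {"event_id": 3, "actor": "QA Department"},
]
print(attribution_findings(log))
# → [(2, 'shared service account'), (3, 'group, not a person')]
```

An empty result means every event resolves to a person, which is the posture Part 11 expects.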
06
Chapter six
Append-only by design
A log that can be edited by administrators isn't append-only, regardless of what the policy says. Structural integrity beats policy every time.
“The audit log is append-only” is a common claim. The honest question is: at what level? Some systems make the log append-only at the user interface — users can’t edit entries through the app — but admins have backend access that allows modifications. Others enforce it at the permission layer, where administrators can override it. A few enforce it at the architectural level, where no interface exists to modify the log at all.
The last category is the only one that passes serious scrutiny. Even if administrators never edit the log, the mere capability invites a regulator to ask what compensating controls prevent modification. Those controls rest on trust and process, not on architecture.
The integrity of the audit evidence should depend on architecture, not on administrators behaving. Any system where admins could modify the log has a structural weakness, regardless of organizational discipline.
The audit log in the active lifecycle is architecturally append-only. The underlying SharePoint audit storage does not expose modification APIs. There is no administrative override. Even the product owner — including us — has no mechanism to modify entries. The integrity holds whether or not the organization operates with perfect process discipline.
What admins can do. They can’t modify entries, but they can:
- Read the full log (with appropriate permissions)
- Export the log for audit-report inclusion
- Configure who can access which documents (and therefore which logs)
- Set retention policies that determine how long entries persist
None of these compromise the append-only property. They govern access and lifecycle of the log, not its contents.
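A common way to make "append-only" a structural property rather than a policy is hash chaining: each entry embeds a hash of its predecessor, so any retroactive edit invalidates every later link. The sketch below illustrates the idea generically; it is not a description of how SharePoint's audit storage is implemented.

```python
import hashlib
import json

def entry_hash(entry: dict, prev_hash: str) -> str:
    # Canonical serialization so the hash is reproducible.
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, entry: dict) -> None:
    """Add an entry, linking it to the hash of the previous one."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"entry": entry, "prev": prev,
                  "hash": entry_hash(entry, prev)})

def verify(chain: list) -> bool:
    """Recompute every link; any retroactive edit breaks the chain."""
    prev = "0" * 64
    for link in chain:
        if link["prev"] != prev or link["hash"] != entry_hash(link["entry"], prev):
            return False
        prev = link["hash"]
    return True

chain = []
append(chain, {"event": "approval", "actor": "jane.smith@company.com"})
append(chain, {"event": "publication", "actor": "system"})
assert verify(chain)

chain[0]["entry"]["actor"] = "someone.else@company.com"  # retroactive edit
assert not verify(chain)  # the tamper is detectable
```

The same chaining also answers the export-integrity question from chapter two: a hash over the exported file proves nothing changed between export and review.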
07
Chapter seven
The 30-second retrieval test
The difference between a decorative audit trail and a real one reveals itself under time pressure. Evidence that needs preparation isn't evidence.
The practical test of an audit trail isn’t whether events are captured in principle. It’s whether the Quality Manager can retrieve a specific piece of evidence, under audit conditions, in under a minute — without having prepared for the specific question the auditor asked.
0s
The auditor asks
"Who approved SOP-CLIN-0047 version 3.0?"
Specific document, specific version, specific question. The Quality Manager has about 60 seconds before the auditor's attention shifts or the question becomes "never mind, we'll come back to that."
10s
Open the library
Find the document by protocol code
Search by SOP-CLIN-0047. Open the document. No "let me go back to my desk" — the whole flow happens on the Quality Manager's laptop in the audit room.
20s
Open the audit log
Three-dot menu → Audit
The audit log opens. Filter by event type = approval. Filter by version = 3.0. The display shows the approval events in order.
30s
Read the evidence
"Jane Smith, Quality Manager, approved at 2:15 PM on March 12"
Plus the chain of earlier approvals — the department lead, Regulatory Affairs, the Medical Director. Plus the signed PAdES signature from the Chief Compliance Officer. All visible in the log, in chronological order, with names and roles.
✓
Question answered
The auditor moves on
Total elapsed time: under a minute. No preparation, no IT ticket, no evidence-gathering project. The next question comes. The pattern repeats.
This is the pattern to rehearse before every audit: can any Quality team member, on any day, retrieve any document’s complete audit history in under a minute? If the answer is yes, the program is operationally ready. If it requires preparation, the program has gaps that will surface under pressure.
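Mechanically, the retrieval above amounts to two filters over the log: by document and version, then by event type. A minimal sketch, with illustrative field names rather than the product's actual schema:

```python
def approval_evidence(log, protocol_code, version):
    """Return approval events for one document at one exact version."""
    return [
        e for e in log
        if e["document"] == protocol_code
        and e["version"] == version
        and e["event_type"] == "approval"
    ]

log = [
    {"document": "SOP-CLIN-0047", "version": "3.0", "event_type": "edit",
     "actor": "lee.wong@company.com", "role": "Author"},
    {"document": "SOP-CLIN-0047", "version": "3.0", "event_type": "approval",
     "actor": "jane.smith@company.com", "role": "Quality Manager"},
    {"document": "SOP-CLIN-0048", "version": "1.0", "event_type": "approval",
     "actor": "amir.khan@company.com", "role": "Medical Director"},
]

for e in approval_evidence(log, "SOP-CLIN-0047", "3.0"):
    print(f'{e["actor"]} ({e["role"]}) approved {e["document"]} v{e["version"]}')
```

If the log lacks version binding, this query cannot be expressed at all, which is why that property appears on the chapter-two list.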
08
Chapter eight
Common audit-trail failures
Patterns that look fine until an auditor probes them, and that repeatedly show up in findings.
Gaps in event coverage
Some events get logged, others don't. "We log approvals but not edits" or "we log publication but not archival." Regulators probe for consistency — gaps are findings.
User-reported timestamps
"The user entered the approval date manually." Under Part 11, timestamps must be system-generated. User-entered dates can be falsified after the fact.
Admin edit capability
"Only admins can edit the log." Even without evidence of misuse, the existence of the capability is a structural weakness that auditors probe.
No version binding
Events say "approved" but don't specify which version was approved. Cannot reconstruct "what was the approved version at time T" without inference.
Slow retrieval
"We can produce it — we just need a couple of days." If retrieval requires preparation, the trail is not operational evidence.
Inaccessible to compliance
Audit log exists but only IT can access it. Compliance has to submit a ticket. The operational reality is that the log might as well not exist for day-to-day audit readiness.
Each of these failures traces to a structural property the system either has or lacks. They can't be remediated through policy or training; they require architectural change. That's why evaluating the audit trail during system selection matters more than evaluating it after adoption.
09
Chapter nine
Dashboard-level evidence
Per-document evidence answers specific questions. Aggregate evidence answers program-level ones — and that's increasingly what regulators want to see alongside the sample audits.
Modern auditors don’t just sample individual documents. They also evaluate program-level metrics: Are approvals happening on cadence? Is the review cadence adhered to across departments? Are the roles that should sign off actually signing off? For these questions, per-document retrieval isn’t sufficient; the aggregate view is what satisfies them.
Power BI reporting on top of the audit log provides exactly this view:
Review cadence adherence
What percentage of documents are within their review window? Broken down by department and document type. The evidence that the review policy is actually followed.
Approval coverage
Are all required approvers signing off on every document? Specifically: is the fixed approver (Quality, Medical Director) actually present in every approval flow of the corresponding type?
Average approval cycle time
How long from submission to publication? Broken down by document type. Trending over time. Unusual outliers flagged for follow-up.
Rejection rates and reasons
What percentage of submissions are rejected? At which step? What reasons are cited? Useful for process improvement and for showing auditors the approval process is genuinely reviewing, not rubber-stamping.
The dashboard is what a mature Quality program surfaces in monthly management review. It’s also what compliance shows to auditors before they start sampling documents — “here’s our program-level posture” as the context for the detail retrievals.
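The arithmetic behind the review-cadence tile is simple enough to sanity-check by hand. The sketch below assumes an annual review policy and illustrative document records; in practice the computation runs in Power BI over the audit log, not in application code.

```python
from datetime import date, timedelta

REVIEW_WINDOW = timedelta(days=365)  # assumed annual review policy

docs = [
    {"id": "SOP-CLIN-0047", "dept": "Clinical", "last_review": date(2025, 3, 12)},
    {"id": "SOP-CLIN-0048", "dept": "Clinical", "last_review": date(2023, 1, 5)},
    {"id": "SOP-QA-0012",   "dept": "Quality",  "last_review": date(2025, 6, 1)},
]

def cadence_adherence(docs, today):
    """Percentage of documents whose last review falls within the window."""
    within = [d for d in docs if today - d["last_review"] <= REVIEW_WINDOW]
    return 100.0 * len(within) / len(docs)

print(round(cadence_adherence(docs, date(2025, 9, 1)), 1))  # → 66.7
```

Grouping the same computation by department or document type yields the breakdowns the dashboard surfaces.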
10
Chapter ten
Implementation — making your trail work
Three practical steps that move an audit trail from "exists" to "operationally ready."
Step 1
Eliminate shared accounts
Every person who makes approval decisions has their own Entra identity. No shared service accounts touching document workflows. MFA enforced at the tenant level. This is the largest single lift for organizations moving from legacy systems.
Step 2
Run the retrieval rehearsal
Pick a document from two years ago. Ask the Quality team to produce its full approval history in under a minute. If they succeed, the program is audit-ready. If they don't, you've found the gap before the regulator does.
Step 3
Wire up the dashboard
Power BI dashboard goes into monthly management review. Cadence, approval coverage, cycle time surfaced to leadership. Gaps become visible at the program level, not just at the document level.
The audit trail that passes a real audit isn't the one with the most features. It's the one where the Quality Manager, on any random Tuesday, can produce any specific document's complete history in 30 seconds without preparation.
If your current audit trail depends on IT support to retrieve, shared-account attribution, user-entered timestamps, or “we’ll prepare that for Monday” answers under audit conditions, the gap is not cosmetic — it’s structural. A 30-minute conversation with our team is usually enough to map your current practice against the eight properties this guide describes, and to identify where the failure modes are most likely to surface.