Audit Logging

If something goes wrong on Dashify (a setting was changed that nobody remembers changing, a user appears who should not exist, a permission was granted by mistake), the audit log answers the question "who did this?"

It is the truthful, append-only record of every meaningful action on the platform. This page explains what gets logged, where it lives, and how it is queried.

What gets logged

Anything that changes state or grants access is audit-logged. A non-exhaustive list:

  • Login attempts (successful and failed).
  • Password changes.
  • 2FA enable/disable.
  • Passkey registrations and removals.
  • API token creation and revocation.
  • Session revocation.
  • User creation, role change, deactivation.
  • Organisation settings changes.
  • Package upgrades/downgrades.
  • SSO config changes.
  • SCIM operations.
  • Project / work item / sprint create/update/delete.
  • Knowledge base article publish.
  • Announcement creation.
  • File upload and delete.
  • Chat channel rename or delete.
  • Cross-tenant operations (SuperAdmin only).

Reads are generally not audit-logged: the volume would be enormous and the value low, since knowing every time someone viewed a page is rarely useful. Some sensitive reads (downloading data exports, viewing the audit logs themselves) are logged.

What an audit entry contains

Each entry is a document in the auditlogs collection with this rough shape:

Who: actor user id, actor email, actor role, IP address, user agent.
What: event name (user.role.changed), target type (user), target id.
When: timestamp.
Where: tenant id, request id (so it can be correlated with the original request log).
Details: a JSON object specific to the event (old role, new role, who initiated, etc.).
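Put together, an entry might look like the sketch below. The field names here are illustrative assumptions based on the shape described above, not the exact schema:

```javascript
// Illustrative audit entry; field names are assumptions, not the exact schema.
const entry = {
  // Who
  actorId: "64f1c0ffee0000000000abcd",
  actorEmail: "bob@example.com",
  actorRole: "org_admin",
  ip: "203.0.113.7",
  userAgent: "Mozilla/5.0 (X11; Linux x86_64)",
  // What
  event: "user.role.changed",
  targetType: "user",
  targetId: "64f1c0ffee0000000000beef",
  // When
  timestamp: new Date("2024-05-01T15:42:00Z"),
  // Where
  tenantId: "acme",
  requestId: "req_8f3a",
  // Details: event-specific payload
  details: { oldRole: "user", newRole: "admin" },
};
```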

The event names follow a <resource>.<action> convention: auth.login.success, user.role.changed, project.deleted, chat.channel.renamed. Consistent naming makes the log greppable and the UI filterable.
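A minimal check for that convention might look like this (a sketch; Dashify may or may not validate event names this way): lowercase, dot-separated segments, at least two.

```javascript
// <resource>.<action> convention: lowercase dot-separated segments,
// at least two, e.g. "project.deleted" or "auth.login.success".
const EVENT_NAME = /^[a-z]+(\.[a-z_]+)+$/;

function isValidEventName(name) {
  return EVENT_NAME.test(name);
}
```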

Where it lives

Audit logs go into MongoDB in the auditlogs collection. The collection is append-only by convention: there are no API routes that update or delete entries, only inserts. The auditLog model is the only place that writes them, and the only writes it performs are inserts.
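An insert-only model can be sketched as below. The function and method names are illustrative, not Dashify's actual model API; the point is that update and delete operations simply do not exist on it.

```javascript
// Sketch of an insert-only audit model (names are illustrative).
// There is deliberately no update() or delete() here.
function makeAuditLogModel(collection) {
  return {
    // The only write path: a single insert.
    record(entry) {
      return collection.insertOne({ ...entry, timestamp: new Date() });
    },
    // Reads are fine; mutation methods are never exposed.
    find(filter) {
      return collection.find(filter).toArray();
    },
  };
}
```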

In production, audit data is also typically streamed to an external long-term store (S3, a SIEM) for retention beyond what MongoDB keeps. Dashify exposes the firehose as a hook; the wiring to a specific destination is operator-configurable.
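A hook of that kind could look like the following sketch (the function names are hypothetical): the platform calls every registered subscriber with each new entry, and the operator decides what each subscriber does with it.

```javascript
// Hypothetical firehose hook. Subscribers decide where entries go
// (S3, a SIEM, stdout); the platform only fans entries out.
const subscribers = [];

function onAuditEntry(fn) {
  subscribers.push(fn);
}

function emitAuditEntry(entry) {
  for (const fn of subscribers) {
    try {
      fn(entry);
    } catch (_) {
      // One failing sink must not break the others or the request path.
    }
  }
}
```

An operator would then wire it up with something like `onAuditEntry((e) => shipToSiem(e))`, where `shipToSiem` is their own code.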

Performance considerations

Audit logging happens on the request path: every login and every settings change writes a row before the response is returned. Two things keep this from becoming a bottleneck.

Async writes. The audit-log insert is fire-and-forget. The main request returns once the primary operation succeeds; the audit insert continues in the background. If the audit write fails for some reason (e.g. MongoDB briefly unavailable), the platform logs a Pino warning but does not fail the user-facing request.
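The pattern can be sketched as follows. `db`, `logger`, and the method names are illustrative stand-ins, not Dashify's actual API; the key detail is that the primary operation is awaited and the audit insert is not.

```javascript
// Fire-and-forget sketch: await the primary operation, start the audit
// insert without awaiting it, and downgrade an audit failure to a warning.
async function changeRole(db, logger, userId, newRole) {
  // Primary operation: the response depends on this succeeding.
  await db.users.updateRole(userId, newRole);

  // Audit insert: started, not awaited. Failure is logged, not surfaced.
  db.audit
    .insert({ event: "user.role.changed", targetId: userId, details: { newRole } })
    .catch((err) => logger.warn({ err }, "audit write failed"));

  return { ok: true };
}
```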

Indexed access. The collection has compound indexes on (tenantId, timestamp), (tenantId, actorId, timestamp), and (tenantId, event, timestamp). Every common query ("show me everything in this tenant in the last day," "show me everything user X did," "show me every auth.login.failed") uses an index.
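Expressed as index specifications (a sketch of what a startup or migration script might pass to the MongoDB driver's `createIndex`; descending timestamps so results come back newest-first):

```javascript
// The three compound indexes from above, as driver-style key specs.
const auditIndexes = [
  { tenantId: 1, timestamp: -1 },             // everything in this tenant, newest first
  { tenantId: 1, actorId: 1, timestamp: -1 }, // everything user X did
  { tenantId: 1, event: 1, timestamp: -1 },   // every auth.login.failed
];

async function ensureAuditIndexes(collection) {
  for (const keys of auditIndexes) {
    await collection.createIndex(keys);
  }
}
```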

Querying the log

Org Admins (with the organization.view_audit_log permission) can browse their own tenant's audit log from /organization/audit-log. The UI supports:

  • Filtering by event name (or category).
  • Filtering by actor.
  • Filtering by date range.
  • Cursor-based pagination so even a multi-year log loads quickly.
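The cursor-based pagination can be sketched in memory as below: rather than skip/limit (which degrades on deep pages), each page carries the timestamp of its last entry, and the next request asks only for entries older than that. The function name and shape are illustrative.

```javascript
// Cursor pagination over a list sorted newest-first.
// `cursor` is the timestamp of the last entry the client has seen.
function pageOf(entries, limit, cursor) {
  const slice = cursor == null
    ? entries
    : entries.filter((e) => e.timestamp < cursor);
  const items = slice.slice(0, limit);
  // A full page means there may be more; a short page means we are done.
  const nextCursor = items.length === limit ? items[items.length - 1].timestamp : null;
  return { items, nextCursor };
}
```

The first request passes `cursor = null`; each later request passes the `nextCursor` from the previous page, so every page is an indexed range scan regardless of depth.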

SuperAdmins can browse audit logs across all tenants from the SuperAdmin console. Cross-tenant queries use the explicit escape hatch and are themselves audit-logged.

Integrity

The audit log is the most sensitive collection on the platform. If an attacker could rewrite it, they could erase evidence of their actions. Dashify has no API for editing audit entries, but a determined attacker with database-level access could still tamper with them.

In production, the recommendation is to ship audit data to an external write-only store (something like AWS CloudWatch Logs or Datadog Audit) where deletions are forbidden by the storage layer itself. The MongoDB collection is then the fast index for the UI; the external store is the legal-grade record.

What the audit log does not do

  • It does not enforce anything. It records what happened. The enforcement lives in RBAC and validation.
  • It does not prevent quiet attacks: an attacker who has compromised an account and is legitimately authorised to perform an action will pass RBAC and write a normal-looking audit entry. The protection there is anomaly detection (next bullet).
  • Detecting abnormal patterns (a user logging in from a new country, a sudden burst of write activity) is a separate layer, not currently implemented but on the roadmap.

The audit log is the forensic layer. Anomaly detection on top would be the behavioural layer.

A worked example

Suppose an Org Admin notices that a user's role has been silently elevated to "admin" without anyone remembering doing it. The admin opens /organization/audit-log, filters by the user.role.changed event, and narrows the date range to the last few days.

Within seconds the admin sees: "Bob promoted Alice from User to Admin yesterday at 3:42pm from IP X." Now the admin can have the right conversation with Bob, or revoke the change, or escalate.
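The lookup behind that answer is, in essence, a filter over the entry fields described earlier. A sketch (field names match the illustrative entry shape above, not necessarily the real schema):

```javascript
// What the UI's filters compile down to for the worked example:
// role changes for one target user, scoped to the admin's tenant.
function findRoleChanges(entries, tenantId, targetId) {
  return entries.filter(
    (e) => e.tenantId === tenantId
      && e.event === "user.role.changed"
      && e.targetId === targetId
  );
}
```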

That is the daily value of an audit log.

Key takeaways

  • Audit logging records every state-changing or access-granting action: who, what, when, where.
  • Entries live in MongoDB's auditlogs collection, append-only by convention.
  • Writes are non-blocking: a failed audit write does not fail the user-facing request, but is logged separately.
  • Org Admins query their own tenant's log; SuperAdmins query across tenants.
  • For tamper-proofing, the recommendation is to also stream to an external write-only store.