Known Gaps

A real product has rough edges. Pretending it does not is worse than naming them. This page is the list of things that, today, Dashify either does not do well or does not do at all. They are in no particular order; pick the ones that affect your situation.

Authentication & identity

Account recovery without an admin. If a user loses their phone and their recovery codes and cannot reach an admin, they cannot self-recover. Some products solve this with email-only recovery; we deliberately do not, because it weakens the security guarantee. The right answer probably involves passkey-based recovery via a second registered device, but the UX is unfinished.

Step-up auth for sensitive actions. Today, a sensitive action (deleting a project, exporting all data) requires only the user's normal session. It does not re-prompt for the password or a fresh passkey gesture. Adding step-up auth on a per-action basis would close a real gap.
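
What this could look like: a small middleware that refuses a sensitive route unless the session completed a full authentication recently. This is a sketch only; it assumes an Express-style stack, and requireRecentAuth and session.lastAuthAt are our names, not the platform's.

```typescript
import type { Request, Response, NextFunction } from 'express';

// How fresh the last password / passkey check has to be for a sensitive action.
const MAX_AUTH_AGE_MS = 5 * 60 * 1000;

// Hypothetical middleware: reject with a machine-readable hint so the client
// can prompt for a password or passkey gesture and then retry the action.
export function requireRecentAuth(req: Request, res: Response, next: NextFunction) {
  // Assumed session shape: lastAuthAt is set whenever the user fully authenticates.
  const lastAuthAt: number | undefined = (req as any).session?.lastAuthAt;
  if (!lastAuthAt || Date.now() - lastAuthAt > MAX_AUTH_AGE_MS) {
    return res.status(403).json({ error: 'step_up_required' });
  }
  next();
}

// Usage on a hypothetical route:
// router.delete('/projects/:id', requireRecentAuth, deleteProject);
```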

Password breach checking. When a user sets a new password, we do not currently check it against the Have I Been Pwned k-anonymous API for known-breached passwords. Doing so would be a few hours of work and would catch a lot of weak choices.
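
The check itself is small. A sketch of the k-anonymous lookup: only the first five characters of the SHA-1 hash are sent to the API, which is how the Pwned Passwords range endpoint works; the surrounding function name is ours.

```typescript
import { createHash } from 'node:crypto';

// Returns how many times a password appears in known breaches (0 = not found).
// Requires Node 18+ for the global fetch.
export async function pwnedCount(password: string): Promise<number> {
  const sha1 = createHash('sha1').update(password).digest('hex').toUpperCase();
  const prefix = sha1.slice(0, 5); // only this leaves the server
  const suffix = sha1.slice(5);

  const res = await fetch(`https://api.pwnedpasswords.com/range/${prefix}`);
  if (!res.ok) throw new Error(`HIBP range query failed: ${res.status}`);

  // The response body is lines of "SUFFIX:COUNT".
  for (const line of (await res.text()).split('\n')) {
    const [candidate, count] = line.trim().split(':');
    if (candidate === suffix) return Number(count);
  }
  return 0;
}
```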

Account-takeover detection. Logins from a new country, a new device, or a new user-agent currently trigger no heightened response. The audit log captures them; nothing acts on them.

Multi-tenancy edge cases

Cross-tenant references. A user can mention another user in chat or assign a work item to them only within their own tenant. There is no concept of cross-tenant collaboration, even when both tenants would want it. Fixing this is non-trivial because the entire isolation model assumes a single tenant per request.

Tenant-level data export. Users can export their personal data; admins can export some tenant-level data piecemeal; but a clean "give me everything for my tenant in a single archive" flow does not exist. GDPR-style portability requirements would expect this.

Tenant deletion. Deactivating a tenant works. Hard-deleting one (wiping every document, every audit log, every Cloudinary file, every Qdrant vector) is a manual operator process. It could be a self-serve "delete my organisation" flow with a 30-day cool-down.

Database

Migration story for breaking changes. Mongoose's flexibility means we have rarely needed a migration tool, but when a real schema change comes, our story is "deploy the new code, let it tolerate old documents, fix them in a one-shot script." A real migration tool (migrate-mongo is installed but lightly used) should be the default.
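
For reference, a migrate-mongo migration is just an up/down pair against the raw driver. A sketch, with illustrative collection and field names; running it in TypeScript assumes the repo's usual transpile step.

```typescript
// migrations/20250101000000-backfill-workitem-priority.ts (illustrative)
import type { Db } from 'mongodb';

export async function up(db: Db): Promise<void> {
  // Backfill a field the new code expects on every document.
  await db.collection('workitems').updateMany(
    { priority: { $exists: false } },
    { $set: { priority: 'medium' } },
  );
}

export async function down(db: Db): Promise<void> {
  await db.collection('workitems').updateMany({}, { $unset: { priority: '' } });
}
```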

Soft-delete leakage in some aggregations. A few aggregation pipelines were written before the soft-delete plugin existed and do not filter on deletedAt. They should be audited. Most of these are SuperAdmin-only views that are rarely used.
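
The audit is tedious but the fix is mechanical: every pipeline needs a deletedAt filter as its first stage, which a small helper can enforce. A sketch; the model and fields in the usage comment are illustrative.

```typescript
import type { Model, PipelineStage } from 'mongoose';

// Prepend the soft-delete filter so older pipelines stop leaking deleted
// documents, mirroring what the plugin already does for find() queries.
export function aggregateLive<T>(model: Model<T>, pipeline: PipelineStage[]) {
  return model.aggregate([{ $match: { deletedAt: null } }, ...pipeline]);
}

// Usage (hypothetical model and fields):
// const counts = await aggregateLive(WorkItem, [
//   { $group: { _id: '$status', count: { $sum: 1 } } },
// ]);
```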

Reference integrity. Mongoose does not enforce foreign keys. A work item references a project by id; if the project is hard-deleted, the work item points at nothing. Soft deletes hide most of this, but a thorough integrity check on hard deletes would prevent dangling references.
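
A one-off integrity check is essentially a $lookup that keeps only the documents whose referenced parent no longer exists. A sketch with assumed collection and field names:

```typescript
import type { Db } from 'mongodb';

// Find work items whose `project` id points at a project that no longer
// exists, i.e. dangling references left behind by a hard delete.
export async function findDanglingWorkItems(db: Db) {
  return db
    .collection('workitems')
    .aggregate([
      {
        $lookup: {
          from: 'projects',
          localField: 'project',
          foreignField: '_id',
          as: 'projectDoc',
        },
      },
      { $match: { projectDoc: { $size: 0 } } }, // no matching project found
      { $project: { _id: 1, project: 1, title: 1 } },
    ])
    .toArray();
}
```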

Performance

Slow queries in audit log search. With many millions of audit rows, the search UI gets slower. The compound indexes are there, but the UI lets users filter on combinations the indexes do not perfectly cover. Either add indexes, or push the audit log into a search-optimised store.
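
If the answer is more indexes, each missing combination is an ordinary compound index on the audit schema, most-selective field first. A sketch with assumed field names:

```typescript
import { Schema } from 'mongoose';

// Hypothetical audit log schema fragment: one compound index per filter
// combination the search UI actually offers.
const auditLogSchema = new Schema({
  tenantId: { type: Schema.Types.ObjectId, required: true },
  actorId: Schema.Types.ObjectId,
  action: String,
  createdAt: { type: Date, default: Date.now },
});

auditLogSchema.index({ tenantId: 1, createdAt: -1 });             // default listing
auditLogSchema.index({ tenantId: 1, actorId: 1, createdAt: -1 }); // "by user" filter
auditLogSchema.index({ tenantId: 1, action: 1, createdAt: -1 });  // "by action" filter
```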

Initial page load on cold cache. First visit to the platform downloads the JS bundle (smaller after the recent splits, but still a few MB). Pre-warming a CDN edge or shipping a smaller "shell" bundle would help.

AI assistant timeout on heavy questions. Questions that retrieve a lot of context can occasionally exceed the response budget. The fix is mostly tuning (retrievalTopK and the model's context window), but there is no automatic graceful fallback ("the question is too big, please narrow it").
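
The fallback could be as simple as fitting the retrieved chunks to a token budget before calling the model, and refusing politely when nothing fits. A sketch; every name here is hypothetical and the token estimate is deliberately rough.

```typescript
// Rough estimate: ~4 characters per token for English text.
const estimateTokens = (text: string) => Math.ceil(text.length / 4);

interface RetrievedChunk {
  text: string;
  score: number;
}

// Keep the best-scoring chunks that fit the budget; if nothing fits,
// return a "please narrow the question" answer instead of timing out.
export function fitContext(chunks: RetrievedChunk[], budgetTokens: number) {
  const kept: RetrievedChunk[] = [];
  let used = 0;
  for (const chunk of [...chunks].sort((a, b) => b.score - a.score)) {
    const cost = estimateTokens(chunk.text);
    if (used + cost > budgetTokens) break;
    kept.push(chunk);
    used += cost;
  }
  if (kept.length === 0) {
    return { ok: false as const, message: 'That question needs too much context; please narrow it.' };
  }
  return { ok: true as const, chunks: kept };
}
```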

Real-time

No "presence" beyond connection state. We know who is connected; we do not know who is typing, idle, or actively viewing a particular page. Slack-style presence is a reasonable next step.

No "edit lock" on shared documents. Two users editing the same KB article today get a last write wins. A real collaborative editor (or at least a soft lock with a "someone else is editing" warning) would prevent silent overwrites.

AI

Single model per tenant. The tenant chooses one chat model and one embedding model. Different features might benefit from different models (embeddings for search vs. for clustering, for example). The data model already supports this; the UI does not yet.

No prompt versioning. System prompts are baked into the AI service code. Changing them requires a deploy. Moving them to the database (with versioning) would let admins iterate on prompt phrasing without engineering involvement.
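
Moving prompts out of code is mostly a schema question. A sketch of what a versioned prompt collection could look like; the model and field names are ours, not the platform's.

```typescript
import { Schema, model } from 'mongoose';

// Hypothetical versioned prompt store: one document per (name, version),
// with at most one active version per name.
const promptSchema = new Schema(
  {
    name: { type: String, required: true },    // e.g. "assistant-system"
    version: { type: Number, required: true },
    body: { type: String, required: true },
    active: { type: Boolean, default: false },
    createdBy: Schema.Types.ObjectId,
  },
  { timestamps: true },
);

promptSchema.index({ name: 1, version: 1 }, { unique: true });
promptSchema.index({ name: 1, active: 1 });

export const Prompt = model('Prompt', promptSchema);

// The AI service would then load the active version at request time instead of
// using a constant baked into the code:
// const prompt = await Prompt.findOne({ name: 'assistant-system', active: true });
```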

No conversational memory. Each AI question is independent. There is no "continue the conversation" mode where the assistant remembers what you just asked. This is a deliberate simplification for v1; adding it well requires careful UX and isolation work.

Observability & operations

No first-party SLO dashboard. Grafana dashboards exist for individual metrics; an aggregated "is the platform healthy?" dashboard does not. Easy to build, valuable for operators.

No automatic chaos testing. The platform has Testcontainers for integration tests but no scheduled "kill a worker, see what happens" chaos jobs. Worth adding once the platform reaches the scale where one instance dying is realistic.

No first-class support for log retention beyond the hosting provider's defaults. Operators with long-retention compliance requirements have to wire that up themselves.

Documentation

The API has Swagger docs at /api-docs. They are autogenerated and reasonably complete, but they are not yet the main documentation surface for integrators. A polished, searchable, versioned public API reference would matter for adopters.

The Storybook for the design system is not exposed. Components are documented in code; a hosted Storybook would help future-Nauman and any contributor.

Internationalisation is wired (react-i18next) but only English is shipped. Adding even a partial second language would expose any places where strings are hardcoded.
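
Adding that second language is mostly a resources entry plus translated JSON. A minimal react-i18next sketch; the German strings are placeholders.

```typescript
import i18n from 'i18next';
import { initReactI18next } from 'react-i18next';

// Any string that does not go through t('key') stays English and stands out,
// which is exactly what this exercise is meant to expose.
i18n.use(initReactI18next).init({
  fallbackLng: 'en',
  resources: {
    en: { translation: { 'workItems.title': 'Work items' } },
    de: { translation: { 'workItems.title': 'Arbeitselemente' } },
  },
});

export default i18n;
```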

Testing

End-to-end coverage is selective. Critical flows (login, project create, work item edit, chat send) are covered. Others (KB editing, OKR check-ins, file uploads) are tested unit-by-unit but not E2E. Filling the matrix would catch a class of regressions earlier.

No automated browser compatibility testing. Playwright runs in Chromium. Firefox and Safari are checked manually. Adding Playwright's cross-browser mode would catch real bugs.
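
Cross-browser mode is a config change rather than new tests. A sketch of the projects block in playwright.config.ts:

```typescript
import { defineConfig, devices } from '@playwright/test';

// Run the existing suite against all three engines instead of Chromium only.
export default defineConfig({
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
  ],
});
```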

Deployment

No first-party hosting recipes. The Docker setup runs anywhere, but specific recipes for AWS ECS, Fly.io, Railway, DigitalOcean App Platform would shorten the path for adopters. Each one is a few hours of testing and writing.

No "blue green" or canary deploy pattern. Today, pnpm deploy:backend does a hard cut-over. A canary or blue green approach (route a fraction of traffic to the new version, watch for errors, roll forward or back) would reduce deploy risk.

How to read this list

These are opportunities, not blockers. Dashify is genuinely usable today; the gaps are where the platform could be sharper, not where it is broken. If you adopt Dashify and one of these gaps is in your path, that is exactly the kind of contribution that would be welcome.

Key takeaways

  • Dashify has real, named gaps in authentication, multi-tenancy edge cases, database migrations, performance, real-time presence, AI sophistication, observability, docs, testing, and deployment.
  • None of them break the core promise; all of them are realistic next-phase work.
  • The honest list is more useful than a polished pretence; adopt with eyes open.