A practical local checker for clear asks, audience fit, risk language, decision framing, missing evidence, unnecessary heat, and next-step ownership. Use it with synthetic examples and keep company-private material out of the workflow.
This local checklist runs in the browser and works best with synthetic or sanitized examples.
Use a role or stakeholder group, not a real private name.
Name the decision, follow-through, or action this draft should move.
Keep examples synthetic. Do not paste private workplace details, client data, or company-private text.
The score is a simple review prompt, not an automated approval.
Can Maya confirm option B by Wednesday at 3pm?
The draft names an ask with timing.
The draft names the ask, the choice or action, and the date or decision point.
The draft has enough audience and goal context to compress the message.
The draft gives enough context for the named audience without forcing them through raw notes.
The draft labels risk with impact or conditional timing.
Risk is labeled with impact, timing, and the next mitigation step.
The draft frames a recommendation, option, or decision with rationale.
The reader sees the decision path instead of a loose preference.
The draft includes inspectable evidence or a receipt.
The draft includes a receipt, source, date, metric, link, example, or evidence note.
The draft stays focused on observable facts, risk, and next steps.
The draft describes observable facts, risk, choices, and next steps without motive reads.
The draft closes with owner and timing.
The draft names who does what by when, including what the sender will do next.
Drafts fail when the reader has to reconstruct the ask, decision, risk, owner, or proof. The fix is not longer prose. The fix is a repeatable review pass that turns a draft into a clear work object.
Use the checker before important updates, follow-through notes, alignment messages, and escalation drafts. It keeps the message pointed at action while preserving calm language and evidence.
Run these checks in the same order. They move from intent to audience, risk, decision logic, proof, tone, and final ownership.
Can the reader tell what action, answer, or approval is needed?
The draft names the ask, the choice or action, and the date or decision point.
Add one sentence that starts with "Ask:" and names the action, owner, and date.
Is the message compressed for what this audience can decide or unblock?
The draft gives enough context for the named audience without forcing them through raw notes.
Lead with the audience, the workstream, the status, and why this message is coming to them.
Does the draft state risk, impact, and mitigation without blame?
Risk is labeled with impact, timing, and the next mitigation step.
Name risk as a condition: "Risk is yellow because if X happens, Y slips; mitigation is Z."
Does the draft explain the decision, options, recommendation, and rationale?
The reader sees the decision path instead of a loose preference.
State the recommendation, the option set, the evidence behind it, and the fallback.
What proof, receipt, metric, source, or example would make the message credible?
The draft includes a receipt, source, date, metric, link, example, or evidence note.
Attach the smallest useful receipt below the summary and point to it in the draft.
Does the language stay calm enough to make action easier?
The draft describes observable facts, risk, choices, and next steps without motive reads.
Replace heat with facts: what happened, what changed, what risk grew, and what decision is needed.
Does every requested next step have an owner and timing?
The draft names who does what by when, including what the sender will do next.
Close with owner, action, date, and the sender follow-through.
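The ordered checks above can be sketched as a deterministic local pass over the draft. The sketch below is illustrative only: the dimension names follow the checks above, but every detection pattern is an assumption for demonstration, not the page's actual rules.

```python
import re

# Seven review dimensions, in the order above: ask, audience, risk,
# decision logic, proof, tone, ownership. Each pairs a name with a
# crude keyword heuristic; real rules would be richer than these.
CHECKS = [
    ("clear ask", lambda d: bool(re.search(r"\bask:|confirm|approve|decide\b", d, re.I))),
    ("audience fit", lambda d: bool(re.search(r"\bfor (the )?\w+ team|audience|status\b", d, re.I))),
    ("risk language", lambda d: bool(re.search(r"\brisk\b", d, re.I))),
    ("decision framing", lambda d: bool(re.search(r"\brecommend|option\b", d, re.I))),
    ("evidence", lambda d: bool(re.search(r"\bmetric|link|source|data|example\b", d, re.I))),
    ("calm wording", lambda d: not re.search(r"\balways|never|unacceptable\b", d, re.I)),
    ("ownership", lambda d: bool(re.search(r"\bby (mon|tue|wed|thu|fri|\d)", d, re.I))),
]

def review(draft: str) -> list[str]:
    """Return the names of dimensions the draft fails, in review order."""
    return [name for name, passes in CHECKS if not passes(draft)]
```

A failing dimension names the sentence to rewrite; the order matters because an unclear ask makes the later checks moot.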
These examples are generalized public patterns. They show the shape of a strong draft without relying on real internal messages.
The synthetic example names the ask, audience context, risk, recommendation, evidence, owner, deadline, and follow-through.
The synthetic example turns a vague blocker into a decision-ready update with public-source fixture language and a next owner.
The synthetic example removes unnecessary heat while keeping the risk visible and actionable.
Put this checklist next to your stakeholder map, decision log, or manager and IC operating system so important communication gets the same review pass every time.
Write the reader, workstream, and goal before editing the draft.
Add a clear ask with owner, decision, and date.
Tune the detail level for audience fit instead of forwarding raw notes.
State risk as impact and mitigation, not frustration.
Frame the decision with option, recommendation, rationale, and fallback.
Attach the smallest evidence receipt that makes the claim inspectable.
Remove unnecessary heat, blame, absolutes, and motive reads.
Close with next-step ownership and update the decision log after the answer.
Use synthetic examples and public-source patterns only when practicing with AI.
AI can help with review passes, but the useful habit is the rubric. Keep source context synthetic or sanitized, then review and rewrite the final message yourself.
Check a draft against the workplace communication review tool dimensions.
Do not paste private notes, confidential workplace details, client data, or company-private messages into the model.
Turn an emotionally loaded draft into observable facts, risk, and next steps.
Use public-source or synthetic examples; final workplace messages require human review before sending.
Identify what proof would make a draft more credible.
Do not ask the model to infer private facts. Use placeholders until the sender verifies the evidence.
Draft review works best when it sits beside a stakeholder update system, an operating review, and lightweight AI-at-work habits that materialize evidence before messages are written.
It checks whether a draft has clear asks, audience fit, risk language, decision framing, evidence, calm wording, and next-step ownership.
Use synthetic, sanitized, or public-source examples. Do not paste private workplace details, client data, confidential notes, or company-private text.
The page includes AI prompt patterns, but the on-page checker is a deterministic local review checklist. Use AI only as a private drafting aid with safe examples.
Treat the score as a prompt for revision, not approval. The useful output is the list of missing asks, evidence, risk framing, tone issues, and owners.
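That idea can be made concrete with a small sketch, assuming a checker that returns the failed dimensions as a list: the useful output is a revision prompt, never a pass/fail approval.

```python
def revision_prompt(missing: list[str]) -> str:
    """Turn failed dimensions into a revision prompt, not an approval."""
    if not missing:
        return "All dimensions present. Re-read once, then send."
    return "Revise before sending. Missing: " + ", ".join(missing)
```

For example, `revision_prompt(["evidence", "ownership"])` points the writer at the two sentences to add, rather than rating the draft.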
Paste a synthetic draft into the checker, find the weakest dimension, and rewrite one sentence around the ask, evidence, risk, or owner.
Explore AI-at-work systems. Continue building your career toolkit with these in-depth guides.
Build local dashboards, batch pipelines, retrieval outputs, labeling queues, and prompt playbooks for practical workplace AI.
Map stakeholders, incentives, decision logs, alignment messages, escalation paths, and visibility loops with safe AI support.
Collect weekly evidence, tailor audience-specific summaries, separate facts from asks, track decisions, and surface blockers early.
Use daily capture, weekly review, a priority queue, decision log, evidence log, risk register, stakeholder map, and lightweight AI prompts.
Model source items, model jobs, runs, events, artifacts, approvals, handoffs, notifications, and human gates for safe workplace AI assistants.
Combine a React control center, local API, SQLite assistant state, DuckDB over Parquet analytics, job runs, approvals, artifacts, and source freshness.
Separate heavy analysis rebuilds from lightweight daily inspection over precomputed workplace AI snapshots.
Split local AI analytics into batch ingest, cached analysis, and lightweight dashboard serving on constrained office laptops.
Precompute overview, root cause, resolution, account-risk, prevention, and similar-item tables for fast AI work dashboards.
Declare each report audience, cadence, decision, visuals, drilldowns, required marts, freshness source, API endpoint, owner, status, and cutover gate.
Store top-N similar items with scores, snippets, timestamps, and index versions so dashboards read retrieval results instead of recalculating them.
Parse Markdown notes into provenance-rich chunks, combine FTS5 or BM25 with local embeddings and RRF, and show fallback-aware match reasons.
Schedule label batches outside active office hours, store outputs, version prompts, retry failures, and serve completed labels read-only.
Review ten concrete AI SaaS and side-hustle attempts with validation, distribution, manual-first paths, and reusable assets.
Choose channels before building, define the first 50 reachable users, create proof assets, and avoid cloneable AI wrappers.
Model LLM cost, retries, rate limits, abuse, data retention, secrets, observability, payments, email, support, migrations, backups, CI, smoke tests, and rollback.
Pick developer failure modes, keep sensitive code local, show exact evidence, integrate with GitHub and CI, and prove reliability first.
Decide when full product plumbing is worth it and when it hides weak validation, distribution, or cost control.
Map dependencies, auth sessions, quotas, blockers, retries, queues, approvals, health checks, resumability, and fallback paths.
Track real user signal, conversations, activation, repeat usage, revenue, burden, costs, blockers, distribution, and validation thresholds.
Use proof gates, scripts, scorecards, and failure thresholds before adding login, billing, dashboards, or automation.