Morphism Automation Playbook — Complete Task/Command Library + Innovative Extensions

Provenance: Extracted from morphism-automation-playbook.md (Downloads, 2026-03-15). All named automation tasks preserved verbatim.


BLOCK 1: STATUS & STANDUP

STANDUP-DAILY Gather all git activity from the past 24 hours. List commits by author, branch changes, files modified, PRs opened or merged, and any CI events. Format output as a tight standup brief: what shipped, what's in flight, any blockers surfaced by failing tests or stalled PRs.
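The author-grouping step of this task can be sketched as a pure function, assuming commit data has already been collected from `git log`; the helper name and sample commits below are illustrative, not part of the playbook:

```python
from collections import defaultdict

# In practice the commit list would come from something like:
#   git log --since=24.hours --pretty=format:%an%x09%s
def group_commits_by_author(commits):
    """Group (author, subject) commit pairs into a standup-style summary."""
    summary = defaultdict(list)
    for author, subject in commits:
        summary[author].append(subject)
    return dict(summary)

sample = [
    ("alice", "fix: handle null session token"),
    ("bob", "feat: add export endpoint"),
    ("alice", "test: cover session expiry path"),
]
brief = group_commits_by_author(sample)
```

The same shape extends naturally to branches, PRs, and CI events: collect records, group by author, render one line per item in the brief.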

WEEKLY-SYNTHESIS Pull all PRs merged this week, production deployments, incident tickets, and review threads. Synthesize into a structured weekly update with sections: Shipped, In Review, Incidents, Metrics Delta, Blockers, Action Items. Flag anything that appears in more than one category as a cross-cutting risk.

TEAM-CONTRIBUTION-MAP Summarize last week's PRs grouped first by teammate, then by theme (feature, fix, refactor, infra, docs, test). For each group, surface any dependency chains, review bottlenecks, or risk signals such as large diffs with low review coverage or changes to shared utilities with no corresponding test updates.


BLOCK 2: RELEASE PREP

RELEASE-NOTES-DRAFT Scan all PRs merged since the last release tag. Extract title, author, linked issue, and merge commit. Group by category: Features, Bug Fixes, Performance, Breaking Changes, Deprecations, Internal. Output a formatted release notes document with PR links and short commit hashes. Flag any PR missing a description or linked issue as requiring manual review before publishing.
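One way the category-grouping step could work, assuming PR titles follow Conventional Commits; the prefix table below is an illustrative subset, not the full mapping:

```python
import re

# Illustrative mapping from conventional-commit prefixes to release-note sections.
CATEGORIES = {"feat": "Features", "fix": "Bug Fixes", "perf": "Performance"}

def categorize(title):
    """Map a PR title to a release-notes section; a '!' marks a breaking change."""
    m = re.match(r"(\w+)(\([^)]*\))?(!)?:", title)
    if m and m.group(3):
        return "Breaking Changes"
    if m and m.group(1) in CATEGORIES:
        return CATEGORIES[m.group(1)]
    return "Internal"
```

Titles that match no known prefix fall through to Internal, which is also where the "missing description or linked issue" flag would attach for manual review.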

PRE-TAG-VERIFY Before approving a release tag, run the following checklist and report pass/fail on each item:

  • Changelog entry exists and matches the version being tagged
  • All database migrations are documented and reversible steps are confirmed
  • Feature flags introduced in this release are listed with their default states
  • All required CI checks have passed on the release branch
  • No open P0 or P1 issues are linked to PRs in this release
  • AGENTS.md reflects any new commands or workflows introduced
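The checklist above could be driven by a small pass/fail runner; the check names and lambdas below are placeholders standing in for real verification logic against the repo, CI API, or issue tracker:

```python
def run_checklist(checks):
    """Evaluate named check callables and report pass/fail for each."""
    return {name: bool(fn()) for name, fn in checks.items()}

# Placeholder checks; each would query the repo, CI, or tracker in practice.
checks = {
    "changelog entry matches tag": lambda: True,
    "migrations documented and reversible": lambda: True,
    "no open P0/P1 issues linked": lambda: False,
}
report = run_checklist(checks)
failed = [name for name, ok in report.items() if not ok]
```

Any nonempty `failed` list blocks tag approval and becomes the body of the report back to the release owner.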

CHANGELOG-UPDATE Update CHANGELOG.md with this week's highlights. Pull from merged PR titles and bodies. Group under the correct semantic version heading. Ensure entries are in reverse chronological order. Validate that the version number follows semver conventions given what changed. Add PR links inline. Commit the update on a dedicated branch and open a draft PR for review.
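The semver validation step reduces to a pure check, assuming versions are plain `MAJOR.MINOR.PATCH` strings with no pre-release tags (a simplifying assumption):

```python
def bump_is_valid(old, new, has_breaking, has_features):
    """Check that the old-to-new version bump matches the kinds of changes merged."""
    o = [int(x) for x in old.split(".")]
    n = [int(x) for x in new.split(".")]
    if has_breaking:
        return n == [o[0] + 1, 0, 0]   # breaking change: bump major, reset rest
    if has_features:
        return n == [o[0], o[1] + 1, 0]  # new feature: bump minor, reset patch
    return n == [o[0], o[1], o[2] + 1]   # fixes only: bump patch
```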


BLOCK 3: CI & INCIDENTS

CI-TRIAGE Fetch the most recent CI run results across all active branches. Group failures by likely root cause: environment flakiness, dependency version mismatch, logic regression, timeout, or infrastructure issue. For each group, propose the minimal fix. Rank by blast radius. Identify any test that has failed more than twice in the last five runs and label it a flaky test candidate.
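The flaky-candidate rule ("failed more than twice in the last five runs") is simple to express once run history is available; the history shape below is an assumption about how results would be stored:

```python
def flaky_candidates(history, window=5, max_failures=2):
    """history maps test name to a pass/fail list, oldest run first.

    A test is a flaky candidate if it failed more than max_failures
    times within the most recent window of runs.
    """
    return sorted(
        name for name, runs in history.items()
        if runs[-window:].count(False) > max_failures
    )

history = {
    "test_checkout_total": [True, False, True, False, False],  # 3 failures
    "test_login_redirect": [True, True, False, True, True],    # 1 failure
}
```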

CI-ROOT-CAUSE-GROUP For the current set of CI failures, cluster them using error message similarity and file path overlap. Output one proposed fix per cluster, not per individual failure. Prefer fixes that resolve multiple failures simultaneously.
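A minimal clustering pass could use greedy string-similarity matching; `difflib` is standard-library, and the 0.7 threshold is an arbitrary starting point to tune, not a recommendation from the playbook:

```python
from difflib import SequenceMatcher

def cluster_failures(messages, threshold=0.7):
    """Greedily cluster error messages by similarity to each cluster's first member."""
    clusters = []
    for msg in messages:
        for cluster in clusters:
            if SequenceMatcher(None, msg, cluster[0]).ratio() >= threshold:
                cluster.append(msg)
                break
        else:
            clusters.append([msg])
    return clusters

failures = [
    "TimeoutError in test_api_fetch after 30s",
    "TimeoutError in test_api_fetch after 60s",
    "AssertionError: expected 200, got 500",
]
```

A production version would also weight file-path overlap, as the task specifies; the one-fix-per-cluster output then follows directly from the cluster list.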

ISSUE-TRIAGE For all issues opened in the last 48 hours, assign a suggested owner based on file path ownership and recent contributor history. Suggest a priority label: P0, P1, P2, or P3, using severity signals from the issue body. Suggest additional labels from the repo's existing label set. Add a one-sentence triage rationale to each suggestion.


BLOCK 4: CODE QUALITY

COMMIT-BUG-SCAN Scan commits from the last 24 hours or since the last run, whichever is more recent. Flag patterns that commonly indicate bugs: unchecked null returns, error values that are silently discarded, missing await on async calls, boundary conditions in loops, schema changes without migration, and hardcoded config values that were previously dynamic. For each finding, propose a minimal targeted fix with a confidence rating of high, medium, or low.
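A real scanner would operate on diffs and ideally an AST; as a toy illustration, a regex pattern table can catch the cruder smells (both patterns below are illustrative, not exhaustive):

```python
import re

# Illustrative bug-smell patterns; an AST-based scanner would be more robust.
PATTERNS = {
    "silently discarded error": re.compile(r"except\s+\w*Error\s*:\s*pass"),
    "hardcoded connection value": re.compile(
        r"\b(host|port|url)\s*=\s*[\"'][^\"']+[\"']", re.I
    ),
}

def scan_lines(lines):
    """Return (line_number, finding) pairs for lines matching a bug-smell pattern."""
    return [
        (lineno, label)
        for lineno, line in enumerate(lines, start=1)
        for label, pattern in PATTERNS.items()
        if pattern.search(line)
    ]

snippet = [
    "try:",
    "    result = fetch(url)",
    "except ConnectionError: pass",
    "port = '8080'",
]
```

Each `(line, finding)` pair would then be paired with a minimal fix proposal and a confidence rating, per the task description.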

TEST-GAP-FINDER Compare the set of files changed in recent PRs against the test files that exist. Identify changed functions, exported types, or API routes that have no corresponding test coverage. Write focused unit or integration tests for the highest-risk gaps. Open draft PRs using the configured PR creation tool.

PERF-REGRESSION-WATCH Compare performance-sensitive code paths changed in recent commits against available benchmarks, traces, or profiling baselines. Flag any change that increases time complexity, adds unbounded loops, removes caching, or introduces synchronous blocking in an async context. Output a ranked list of regressions with proposed fixes.


BLOCK 5: REPO MAINTENANCE

DEPENDENCY-DRIFT-SCAN Compare the current dependency manifest against the latest stable releases of each package. Identify packages that are more than one major version behind, packages with known CVEs, and packages where the team's pinned version diverges from what other internal services use. Propose a minimal upgrade plan that batches low-risk patch updates and isolates high-risk major version changes.
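The "more than one major version behind" rule reduces to integer comparison on the major component, again assuming plain semver strings; the package names and versions below are made-up sample data:

```python
def majors_behind(pinned, latest):
    """Number of major versions a pinned dependency trails its latest release."""
    return max(0, int(latest.split(".")[0]) - int(pinned.split(".")[0]))

def drift_report(pins, latest):
    """Flag packages more than one major version behind the latest release."""
    return {
        name: (version, latest[name])
        for name, version in pins.items()
        if majors_behind(version, latest[name]) > 1
    }

pins = {"requests": "2.31.0", "olddep": "1.4.2"}
latest = {"requests": "2.32.3", "olddep": "4.0.1"}
```

CVE checks and cross-service pin comparison would layer on top of this report rather than replace it.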

SAFE-DEPENDENCY-UPGRADE For dependencies flagged as outdated, filter to those where the upgrade path is patch-level or minor-level with no breaking API changes. Propose the upgrade with a before/after diff of the manifest. Review the new version's changelog against the existing test suite to flag any likely breakage. Output a single PR proposal per upgrade batch.

AGENTS-MD-UPDATE Review recent commits, merged PRs, and any new scripts or Makefiles added to the repo. Identify commands, workflows, or processes that are not yet documented in AGENTS.md. Draft the new entries in the existing format. Open a PR to add them.


BLOCK 6: GROWTH & SKILLS

SKILL-SURFACE Analyze the last two weeks of PRs and review comments the team has written and received. Identify recurring patterns of confusion, repeated review feedback on the same issues, or areas where PRs required multiple revision rounds. Translate these into a ranked list of skills or knowledge areas worth deepening, with one concrete learning resource or internal example linked for each.

PERF-AUDIT Audit the codebase for performance regressions introduced in the last sprint. Prioritize by user-facing impact. For each regression, propose the highest-leverage fix, meaning the smallest code change that produces the largest measurable improvement. Where a fix requires a tradeoff, state the tradeoff explicitly.


BLOCK 7: VIRAL / HIGH-LEVERAGE FEATURE

VIRAL-FEATURE-PROPOSE-AND-SHIP Analyze the application's core user flow, retention data if available, and any usage telemetry. Identify the single highest-leverage feature that would increase sharing, word-of-mouth, or return visits. Propose the feature with a one-paragraph rationale, a minimal implementation plan, and a measurable success metric. Then implement it. Open a PR with the full implementation, tests, and a feature flag so it can be rolled out incrementally.


BLOCK 8: ARCHITECTURE FAILURE MODE ANALYSIS

ARCH-FAILURE-MAP Analyze the codebase architecture and identify the top failure modes. For each failure mode, state the mechanism, the blast radius if it occurs, the current detection capability, and the proposed mitigation. Categories to cover: single points of failure, cascading dependency failures, data consistency gaps, auth boundary weaknesses, observability blind spots, and scalability cliffs.


BLOCK 9: DOCUMENTATION ARTIFACTS

ONE-PAGE-PDF-SUMMARY Generate a one-page PDF that summarizes the application. Include: purpose and core value proposition, architecture overview, key technical decisions, current state of the codebase, team structure, active workstreams, and known risks. Format for an engineering or investor audience depending on the configured context.

SIX-WEEK-ROADMAP-DOC Create a structured document containing a six-week roadmap. Weeks one and two: foundation and cleanup items. Weeks three and four: core feature delivery. Week five: hardening, performance, and test coverage. Week six: release prep and documentation. Each week should list specific deliverables, owners, dependencies, and success criteria. Output as a shareable document.


BLOCK 10: INNOVATIVE GOVERNANCE & ANTI-DRIFT COMMANDS

POLICY-COMPLIANCE-AUDIT Define or load the team's code policies from a POLICY.md or equivalent source. Scan recent changes against those policies. Flag violations with the specific policy rule, the file and line, and a proposed remediation. This enforces code-by-policy rather than code-by-convention-drift.

ANTI-HALLUCINATION-VERIFY After any agent-generated output that includes factual claims about the codebase, run a verification pass. Check that every file path mentioned exists, every function name cited is real, every PR number referenced is valid, and every claim about behavior is backed by actual code. Output a verification report and retract or correct any unsupported claims before the output is used.

META-PROMPT-REVIEW Periodically audit the prompts and instructions used to drive automation in this repo. Check whether the prompts are producing outputs that drift from their stated intent. Flag any prompt that has accumulated exceptions, workarounds, or contradictions. Propose a revised, cleaner version of the prompt. This treats prompt quality as a first-class engineering concern.

SELF-REFUTATION-CHECK When an agent proposes a change, a fix, or an architectural recommendation, run a second pass that attempts to argue against the proposal. List the strongest counterarguments. If the counterarguments are stronger than the original proposal, revise or withdraw it. Output both the original proposal and the refutation so the human reviewer can make an informed decision.

EXPERT-PANEL-REVIEW For high-stakes decisions such as architecture changes, security modifications, or data model alterations, simulate review from multiple expert perspectives: a security reviewer, a performance engineer, a product engineer, and a new-contributor perspective. Each perspective should independently assess the change and raise concerns. Synthesize the panel output into a single recommendation with minority opinions preserved.

DOCS-DRIFT-DETECT Compare the current state of documentation files against the actual code they describe. Flag any documented API endpoint, function signature, configuration option, or behavior that no longer matches the implementation. Rank by how frequently the outdated documentation is likely to be read or relied upon. Propose minimal doc patches.
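For Python code, one concrete signature check can lean on the standard `inspect` module; the `connect` function below is a made-up stand-in for real code under audit:

```python
import inspect

def signature_matches(fn, documented):
    """Compare a callable's actual signature string to the documented one."""
    return str(inspect.signature(fn)) == documented

def connect(host, port=5432):
    """Made-up function standing in for real code under audit."""
    return (host, port)
```

Endpoints and configuration options need format-specific comparisons, but the pattern is the same: extract the real contract, diff it against the documented one, and emit a minimal patch for each mismatch.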

SDK-CONTRACT-GUARD Monitor changes to any public or internal SDK interfaces. When a change would break a documented contract, surface that break explicitly before the PR can be merged. List all known consumers of the changed interface, internal or external. Require explicit acknowledgment of the break before proceeding.

ANTI-DRIFT-BASELINE Establish a baseline snapshot of the repo's key architectural invariants: directory structure conventions, naming conventions, test coverage thresholds, dependency count, API surface size, and documentation completeness score. On each run, compare the current state against the baseline and report drift. Flag when any metric has degraded beyond a configured threshold and open an issue automatically.

GOVERNANCE-GATE Before any automated change is committed to main or a release branch, run a governance gate check. Verify that the change has a human-readable rationale, is within the scope of the automation's defined permissions, does not modify files outside its designated scope, and does not introduce a new external dependency without explicit approval. Block and report any change that fails the gate.

INCIDENT-POSTMORTEM-DRAFT When an incident is detected or closed, automatically draft a postmortem document. Include timeline, contributing factors, detection method, resolution steps, and proposed action items with suggested owners. Format follows the team's postmortem template if one exists in the repo.

KNOWLEDGE-DECAY-ALERT Track which areas of the codebase have had no commits, reviews, or documentation updates in more than 90 days and are also depended upon by active code. Flag these as knowledge decay risks, meaning areas where institutional knowledge may be eroding. Suggest either documentation updates, owner reassignment, or a scheduled review session.

REVIEW-QUALITY-SCORE Analyze recent code review threads. Score each review on dimensions: specificity of feedback, ratio of blocking to non-blocking comments, time to review, and whether the reviewer caught issues that later became bugs. Surface patterns where review quality is declining or where certain types of changes are consistently under-reviewed.

FEATURE-FLAG-AUDIT Scan the codebase for all feature flags. For each flag, report its name, current default state, the date it was introduced based on git history, and whether it has an associated ticket or removal plan. Flag any feature flag that is more than 60 days old and still conditional as a cleanup candidate. Propose a removal PR for flags that are fully rolled out.
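The 60-day staleness rule is a date comparison once each flag's introduction date has been recovered from git history; the flag names and dates below are hypothetical sample data:

```python
from datetime import date

def stale_flags(flags, today, max_age_days=60):
    """flags maps flag name to introduction date; return names past the cutoff."""
    return sorted(
        name for name, introduced in flags.items()
        if (today - introduced).days > max_age_days
    )

flags = {
    "new_checkout_flow": date(2026, 1, 2),
    "beta_export": date(2026, 3, 1),
}
```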

SECURITY-SURFACE-SCAN On each PR that touches authentication, authorization, data access, or external API calls, run a targeted security surface scan. Check for common patterns: missing input validation, overly broad permissions, secrets in code, unsafe deserialization, and missing rate limiting. Report findings before merge with a severity rating.

CROSS-SERVICE-IMPACT-MAP When a change modifies a shared library, a data schema, or a core utility, automatically map which other services or modules consume that code. Output a dependency impact map and require that the PR author or reviewer has confirmed compatibility with each downstream consumer before the change is merged.