
Process · 7 stages · continuous improvement · ~9 min walkthrough

From brief to audit-defensible course, and the refine loop that keeps it current.

Each stage has a human checkpoint where you can approve, edit, or regenerate before the next stage runs. No all-or-nothing black box, no surprise spend.

Brief

Tell Mill what you're teaching, to whom, and in which languages. 90 seconds.

  • Topic, audience, duration target, tone, language(s). No forms. A single free-text field with a few optional toggles.
  • Optional: upload a source document (.docx / .pptx / legacy SCORM .zip) to pin the content. Strict-source mode forbids any claim not anchored in your document: the procurement-defensible path.
  • Optional: pick a starter template if you want a structural hint. Collapsed by default; Mill infers perfectly well without one.

Mechanism · The brief is stored as a Prisma `Course.brief` field with a content-hash used later in the audit trail so a regulator can prove the published content came from this exact input.
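The hash step can be sketched in TypeScript. The `Course.brief` field name comes from the text above; the helper itself is an illustrative assumption, not Mill's actual code:

```typescript
import { createHash } from "node:crypto";

// Illustrative sketch: a stable SHA-256 digest over the stored brief text.
// Re-hashing the brief at audit time and comparing against the recorded
// value proves the published course came from exactly this input.
function briefContentHash(brief: string): string {
  return createHash("sha256").update(brief, "utf8").digest("hex");
}
```

Any edit to the brief, even a single character, yields a different digest, which is what makes the audit-trail comparison meaningful.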

Architect

A reasoning-capable LLM proposes a full chapter + section outline with learning objectives. You review before any expensive content-writer pass fires. The architect model is configurable per organisation: plug in the LLM your team already trusts.

  • Chapter structure, per-chapter objectives, section types, and a cost estimate, all before a single cue is written.
  • Regenerate the outline, tweak an objective, swap a section type, or lock a chapter title. The writer will honour every edit.
  • Credits are reserved, not consumed, at this stage. If you reject the outline the reservation releases; you haven't paid for a draft you'd throw away.

Mechanism · Architect output is a typed `ArchitectPlan` record gated by an approval step. The writer won't fire without a matching `approvedAt` timestamp.
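A minimal sketch of that gate, assuming a plan shape like the text describes (the interface is illustrative, not the actual Prisma model):

```typescript
// Illustrative approval gate: the writer pass refuses to start unless the
// plan record carries an approvedAt timestamp. Field names follow the text
// above; the types are assumptions, not Mill's schema.
interface ArchitectPlan {
  id: string;
  approvedAt: Date | null; // set only when a human approves the outline
}

function canStartWriter(plan: ArchitectPlan): boolean {
  return plan.approvedAt !== null;
}
```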

Write

The writer pass produces every section in parallel. 10-20 minutes end-to-end for a 60-minute course.

  • Each section is generated with a type-specific prompt (flashcards need different guidance than a decision table). 33+ section types, each tuned.
  • Section-level retries on validator failure. One bad section doesn't sink the whole course.
  • AI narration and image generation dispatch as the text lands. Mill ships with sensible defaults; each organisation can plug in its own TTS engine and its own image model (anything from a hosted vendor to a self-hosted Stable Diffusion) via the provider settings. The course is fully assembled when the spinner clears.

Mechanism · Writer runs on a per-section worker queue with `WriteJob` rows. Each job logs tokens + cost + model version, giving a verifiable per-section receipt.
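Summing those rows gives the course-level receipt. A sketch under assumed field names (the `WriteJob` shape below is illustrative):

```typescript
// Illustrative WriteJob receipt row (field names assumed from the text above).
interface WriteJob {
  sectionId: string;
  tokens: number;
  costCredits: number;
  modelVersion: string;
}

// Aggregating per-section receipts yields a verifiable course-level total.
function courseReceipt(jobs: WriteJob[]): { tokens: number; credits: number } {
  return jobs.reduce(
    (acc, job) => ({
      tokens: acc.tokens + job.tokens,
      credits: acc.credits + job.costCredits,
    }),
    { tokens: 0, credits: 0 },
  );
}
```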

Translate

Every additional language runs the same outline through the configured translation model with your per-org glossary applied. The translation engine is configurable; orgs can point Mill at their own LLM, their existing translation memory, or a standalone provider.

  • Your glossary carries DO_NOT_TRANSLATE entries (product names, brand terms) and PREFERRED_TERM entries (e.g. 'employee' → 'collaborateur').
  • Per-field drift check runs automatically. Numeric / unit / regulatoryId fields are flagged as hard-fail on divergence; the '5 mg → 5 ml' translation disaster is designed out.
  • Parallel language generation means 10 languages cost ~1.3× one language, not 10×.

Mechanism · Translator output is stored alongside the source in the `Section.i18n` JSON with a `DriftReport` sibling. Reviewer sign-off requires the drift report to be clean.
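The hard-fail check on numeric/unit fields could look roughly like this. The regex and unit list are illustrative; Mill's actual validator is not shown in this text:

```typescript
// Illustrative drift check: extract number+unit pairs from both sides and
// hard-fail if they diverge, so '5 mg' mistranslated as '5 ml' is caught.
const NUM_UNIT = /(\d+(?:[.,]\d+)?)\s*(mg|ml|g|kg|l|%)/gi;

function numericDrift(source: string, translated: string): boolean {
  const pairs = (text: string) =>
    [...text.matchAll(NUM_UNIT)]
      .map((m) => `${m[1]}${m[2].toLowerCase()}`)
      .sort();
  const src = pairs(source);
  const dst = pairs(translated);
  return src.length !== dst.length || src.some((p, i) => p !== dst[i]);
}
```

Sorting the extracted pairs makes the comparison order-independent, since a faithful translation may legitimately reorder the sentence around the figures.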

Review

Assigned reviewers approve the content per language. Two-person sign-off for compliance-grade.

  • SME_REVIEW + COMPLIANCE_SIGNOFF roles required for regulator-ready courses. The same user can't hold both; separation of duties is enforced at the schema level.
  • Reason codes (REVIEWED / APPROVED / AUTHORED / RESPONSIBILITY), IP-hash, and server timestamp captured on every signature.
  • Status chips gate the publish button. You can't accidentally ship an unreviewed translation.

Mechanism · Each signature writes a `ReviewerSignature` row. The schema is modelled on 21 CFR Part 11 §11.50/§11.70, the same shape the FDA expects for electronic records.
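Separation of duties can be sketched as a check over signature rows. Role and reason-code names come from the text above; the types themselves are illustrative:

```typescript
type ReviewRole = "SME_REVIEW" | "COMPLIANCE_SIGNOFF";
type ReasonCode = "REVIEWED" | "APPROVED" | "AUTHORED" | "RESPONSIBILITY";

// Illustrative ReviewerSignature shape; the real schema is not shown here.
interface ReviewerSignature {
  userId: string;
  role: ReviewRole;
  reasonCode: ReasonCode;
  ipHash: string;
  signedAt: Date;
}

// Regulator-ready means both roles have signed, and by different people.
function twoPersonSignoff(sigs: ReviewerSignature[]): boolean {
  const sme = sigs.find((s) => s.role === "SME_REVIEW");
  const compliance = sigs.find((s) => s.role === "COMPLIANCE_SIGNOFF");
  return sme !== undefined && compliance !== undefined && sme.userId !== compliance.userId;
}
```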

Publish

Click Publish. Mill writes the SCORM/xAPI package(s), signs the manifest, and records an immutable version row.

  • SCORM 1.2, SCORM 2004, and xAPI outputs from the same source. Download all three.
  • Each publish creates a `PublishVersion` row with a SHA-256 manifest hash + the model versions used. The regulator's question 'what did learner Y see on date X?' is answered in one query.
  • Republishing is supported. Edit, click Republish, and a new version row lands; the learner URL stays stable, and the next session serves the new bytes.
  • Hall integration (Team Pro): the new version auto-imports into Hall's LMS via per-org Bearer token. No re-upload ritual.

Mechanism · The manifest hash is stored as `PublishVersion.manifestHash` and is content-addressable. The package is identifiable by its hash alone, so a 'show me exactly what you shipped' request needs no trust in our storage layer.
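Verification on the receiving end is then a one-liner. The `manifestHash` field name comes from the text above; the helper is an assumed sketch:

```typescript
import { createHash } from "node:crypto";

// Illustrative check: anyone holding the package bytes can recompute the
// digest and compare it with the recorded PublishVersion.manifestHash,
// so the proof does not depend on trusting the storage layer.
function verifyPackage(packageBytes: Buffer, manifestHash: string): boolean {
  const recomputed = createHash("sha256").update(packageBytes).digest("hex");
  return recomputed === manifestHash;
}
```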

Refine

Edit any section, regenerate a single cue, re-translate a field, and republish. The learner URL stays stable; the audit trail records each improvement.

  • Section-level editor: tweak any field of any section without touching the rest. Per-language tabs for multilingual edits; drift re-check runs on every edited translated field.
  • Regenerate a single cue when one bullet is stale or one translated figure has drifted; no full-course rebuild required, no credits spent on content you already approved.
  • Each republish writes a fresh `PublishVersion` row. Regulators can diff any two versions of the same course and see exactly what changed and when.
  • Annual refresh / corrigenda / policy-update flows all land here. Republishing is deliberate continuous improvement, not a one-way ship.

Mechanism · The editor writes back to `Section.fields` JSON and re-triggers the writer / translator / drift pipeline on the edited sections only. The immutable PublishVersion ledger preserves every republish: content-addressable proof of what learners saw at any past date.
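Edit-scoped retriggering can be sketched as a diff over section fields. The shapes below are assumptions built on the `Section.fields` mention above, not Mill's code:

```typescript
// Illustrative sketch: compare each section's fields before and after the
// edit, and return only the ids that changed. Only those sections re-enter
// the writer / translator / drift pipeline.
interface SectionSnapshot {
  id: string;
  fields: Record<string, string>;
}

function sectionsToRetrigger(
  before: SectionSnapshot[],
  after: SectionSnapshot[],
): string[] {
  const previous = new Map(before.map((s) => [s.id, JSON.stringify(s.fields)]));
  return after
    .filter((s) => previous.get(s.id) !== JSON.stringify(s.fields))
    .map((s) => s.id);
}
```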

Questions that come up

How long does a course take end-to-end?

Brief: ~90 seconds. Architect: ~1 minute. Writer: 10-20 minutes for a 60-minute course. Translate (10 languages): +5-8 minutes. Review: depends on your people, not ours. Publish: a few seconds. A solo author with a clean brief ships a single-language course the first afternoon.

What if the architect's outline is wrong?

Regenerate, or edit individual objectives / section types directly. The writer won't run until the outline is approved. No wasted credits on a draft you'd reject.

Can I edit after publish?

Yes. The section-level editor lets you tweak any field of any section, regenerate a single cue, or rewrite whole sections. Click Republish and a new SCORM/xAPI package + a new PublishVersion row are written. The learner URL doesn't change.

What happens if I hit my credit limit mid-course?

Writer jobs pause rather than fail. You top up, jobs resume. Nothing half-generated is lost; partial courses remain drafts you can complete later.

Is there an API?

Yes. Per-org Bearer tokens let you kick off courses and fetch publish artefacts programmatically. Used internally by Hall for direct-import and by Zapier customers for LMS sync.
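A hypothetical call might look like the sketch below. Only the per-org Bearer token pattern comes from the text above; the base URL, endpoint path, and payload shape are invented for illustration and are not documented Mill API.

```typescript
// Hypothetical sketch: the endpoint path and payload are placeholders, not
// a documented API. Only the Bearer-token auth pattern is from the text.
function authHeaders(orgToken: string): Record<string, string> {
  return {
    Authorization: `Bearer ${orgToken}`,
    "Content-Type": "application/json",
  };
}

async function kickOffCourse(baseUrl: string, orgToken: string, brief: string) {
  // POST /courses is a placeholder path chosen for this sketch.
  return fetch(`${baseUrl}/courses`, {
    method: "POST",
    headers: authHeaders(orgToken),
    body: JSON.stringify({ brief }),
  });
}
```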

Run it yourself.

130 free credits. Generate a real course, the whole 7-stage path, on your content.
