
# Examples

A curated path through the OpenProse examples.


Read these examples in order. They move from the smallest possible service to a program with explicit phases, review, and testing.

The point is not to memorize syntax. Look for the contract shape:

- what the program requires
- what it ensures
- where judgment belongs
- when a workflow stays declarative
- when order is important enough for Execution

## Hello world

Start here to see the minimum surface area. It is a service, not a program, and it has no required inputs.

`hello-world.md`

```markdown
---
name: hello-world
kind: service
---

### Requires

- (nothing -- this service has no required inputs)

### Ensures

- `greeting`: a warm hello and brief self-introduction
```

Notice how little structure is needed. The service promises only `greeting`. That is enough for the VM to know what successful completion means.

Use this shape when you are testing whether the OpenProse loop is installed and the agent knows how to read a contract.
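To see it run inside a program, a minimal caller might look like this. This is a sketch, assuming the `let ... = call ...` form shown in the captain's chair example later on this page:

```prose
let greeting = call hello-world
return greeting
```

Because the service requires nothing, the call takes no arguments; the returned `greeting` is the only obligation in the contract.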

## Research and summarize

This example adds an input, a more specific output, and strategies for judgment calls.

`research-and-summarize.md`

```markdown
---
name: research-and-summarize
kind: service
---

### Requires

- `topic`: a research question or area to investigate (default: "latest developments in AI agents and multi-agent systems")

### Ensures

- `summary`: 5 bullet points covering key findings with practical implications for developers
- each bullet point is: grounded in specific papers or announcements from the past 6 months

### Strategies

- when few sources found: broaden search terms and check adjacent fields
- when findings are too technical: translate to practical developer implications
```

The useful detail is the constraint on the bullets. The service does not just promise a summary. It promises five bullets, practical implications, and grounding in recent sources.

The Strategies section gives the agent room to reason without leaving the workflow vague. If sources are thin, broaden the search. If the findings are too technical, translate them for developers.
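As a sketch of how a caller might override the default topic (again borrowing the `call` syntax from the Execution example later on this page; the topic string here is just an illustration):

```prose
let summary = call research-and-summarize
  topic: "structured concurrency in async runtimes"
return summary
```

If `topic` were omitted, the default stated in the Requires section would presumably apply.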

## Code review

This is still a single service, but the contract is closer to real work.

`code-review.md`

```markdown
---
name: code-review
kind: service
---

### Requires

- `code`: source code or directory to review

### Ensures

- `report`: a unified code review covering security, performance, and maintainability
- each issue has: a severity rating (critical, high, medium, low) and actionable recommendation
- issues are prioritized by severity

### Strategies

- when reviewing large codebases: focus on files with recent changes first
- when many issues found: group by category and highlight the top 5
```

The output has shape:

- a unified report
- coverage across security, performance, and maintainability
- severity on each issue
- actionable recommendations
- priority by severity

That is the difference between "review this" and a contract. The service has room to use judgment, but the result has obligations.

## Parallel reviews

This is the first example where OpenProse starts to pay for itself.

`parallel-reviews.md`

```markdown
---
name: parallel-reviews
kind: program
---

### Services

- `security-reviewer`
- `perf-reviewer`
- `style-reviewer`
- `synthesizer`

### Requires

- `code`: the code to review

### Ensures

- `report`: a unified code review report covering security, performance, and style
```

The services are separate points of view:

- `security-reviewer`: the security perspective
- `perf-reviewer`: the performance perspective
- `style-reviewer`: the style perspective
- `synthesizer`: merges the three reviews into the unified report

The program does not need a long script. The roles and the final report are clear enough for a first pass. If you need sharper boundaries later, give the services their own contracts or add explicit execution.
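If you did add explicit execution later, one sketch is a fan-out to the three reviewers followed by synthesis. The input names passed to `synthesizer` are assumptions, not part of the contract above:

```prose
let security = call security-reviewer
  code: code

let perf = call perf-reviewer
  code: code

let style = call style-reviewer
  code: code

let report = call synthesizer
  security: security
  perf: perf
  style: style

return report
```

The declarative version leaves this wiring to the agent's judgment; write it out only when the boundaries need to be pinned.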

For a guided run, see your first useful workflow.

## Captain's chair

This example is larger because order matters. It plans, researches, reviews the plan, implements, reviews the implementation, tests, and then summarizes.

`captains-chair.md`

````markdown
---
name: captains-chair
kind: program
---

### Services

- `captain`
- `researcher`
- `coder`
- `critic`
- `tester`

### Requires

- `task`: the feature or task to implement
- `codebase-context`: brief description of the codebase and relevant files

### Ensures

- `result`: completed, reviewed, and tested implementation with summary of changes

### Execution

```prose
# Phase 1: Strategic planning
let plan = call captain
  task: task
  codebase-context: codebase-context

# Phase 2: Parallel research sweep
let docs = call researcher
  topic: task
  focus: "documentation and README files"

let code-patterns = call researcher
  topic: task
  focus: "existing code patterns and implementations"

let existing-tests = call researcher
  topic: task
  focus: "existing tests covering similar functionality"

# Phase 3: Plan synthesis with critic review
let implementation-plan = call captain
  task: "synthesize research into implementation plan"
  plan: plan
  docs: docs
  code-patterns: code-patterns
  existing-tests: existing-tests

let plan-review = call critic
  artifact: implementation-plan
  focus: "architectural concerns, missing edge cases, testability"

if plan-review has critical concerns:
  let implementation-plan = call captain
    task: "revise plan based on critic feedback"
    plan: implementation-plan
    review: plan-review

# Phase 4: Implementation with review
let implementation = call coder
  plan: implementation-plan

let code-review = call critic
  artifact: implementation
  focus: "security, correctness, style, performance"

if code-review has critical issues:
  let implementation = call coder
    plan: implementation-plan
    feedback: code-review

# Phase 5: Testing
let tests = call tester
  plan: implementation-plan
  implementation: implementation

# Phase 6: Final integration
let result = call captain
  task: "final review and summary"
  implementation: implementation
  tests: tests
  code-review: code-review

return result
```
````

The important change is the Execution block. Declarative contracts are still present, but the program also pins the phases:

- plan first
- run research sweeps
- synthesize an implementation plan
- let a critic review the plan
- implement
- review the implementation
- test
- produce the final result

Use this shape when the workflow has real order, branching, or retry points. Do not start here for a small task. Start declarative, add wiring when relationships matter, and use Execution when order matters.

## Choosing the right example

| Example | Use it to study |
| --- | --- |
| Hello world | the minimum service contract |
| Research and summarize | inputs, output constraints, and strategies |
| Code review | a practical single-service contract |
| Parallel reviews | fan out, then synthesize |
| Captain's chair | explicit order with Execution |

Plain prompts are still the right tool for one-off work. These examples are useful when you want a workflow to be inspectable, reusable, and worth improving.

Next: study common patterns.
