Onnai Cookbook

Recipes and tools for AI-powered Unix workflows.


Table of Contents

Part 1: Foundations

  1. Getting Started
  2. Chat Mode
  3. API
  4. When to Generate Code
  5. Document Conversion Tools
  6. Web Interaction Tools

Part 2: Content Creation

  1. Writing a Book
  2. Content Creation Pipeline
  3. Building Websites

Part 3: Automation & Integration

  1. Building AI Agents
  2. Integrations

Part 4: Industry Workflows

  1. Legal Professionals
  2. Creative & Ad Agency
  3. Education & Learning
  4. Interviewer Workflows

Part 5: Extras

  1. Fun with Audio
  2. More Ideas
  3. Contributing

Part 1: Foundations

Getting Started

curl -sSL onnai.ai/install | sh
onnai --add Volcanara
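
A quick smoke test after installing: pipe any text into onnai with a short instruction (the prompt here is only an example).

echo "2 + 2" | onnai "answer with just the number"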

The .context File

Drop a .context file in any project directory to give onnai persistent context:

echo "Python 3.11 project using FastAPI" > .context
echo "Follow PEP 8. Prefer type hints." >> .context

Every onnai command in that directory now understands your project.
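
For example, with that .context in place a short prompt is enough; onnai already knows the stack and the style rules (main.py is a hypothetical file):

cat main.py | onnai "review this module"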

Chat Mode

Start an interactive conversation by running onnai or a Mimic without arguments:

Example session:

$ Volcanara
Entering chat mode. Type Ctrl+D or /exit to quit.
Chat history is stored in .chat
---
> What is the capital of France?
The capital of France is Paris.
> Tell me more about Paris
Paris is the largest city in France and serves as its cultural...

Key features:

- History stored in .chat file (JSON) in current directory
- Each directory has its own conversation history
- Includes .context files automatically
- Works with all Mimics (coder, writer, etc.)

Exit chat:

- Type /exit or /quit
- Press Ctrl+D

Tips:

- Use different directories for different project conversations
- Delete .chat to start fresh
- Long histories increase token usage
- Chat with a Mimic the same way: run it with no arguments, e.g. Volcanara, then press Enter

Managing Chat History

# View current chat
cat .chat | jq

# Clear history
rm .chat

# Backup conversation
cp .chat conversation_backup.json

# Search past conversations
cat .chat | jq -r '.[].content' | grep -i "topic"
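
Since long histories increase token usage, you can also trim the file instead of deleting it. A sketch, assuming .chat is a JSON array of messages as the commands above imply:

# Keep only the 10 most recent messages
jq '.[-10:]' .chat > .chat.tmp && mv .chat.tmp .chat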

API

In addition to the onnai command-line client, users and applications can access Mimics directly through the onnai.ai API.

Command Line vs API

Method        Best For                                            Example
onnai CLI     Terminal workflows, shell scripts, Unix pipelines   cat file.txt | onnai "summarize"
onnai.ai API  Web apps, mobile apps, backend services             HTTP POST to onnai.ai/api/chat

Using the API

# Basic API call
curl -X POST https://onnai.ai/api/chat \
  -H "Authorization: Bearer $ONNAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "coder", "message": "explain recursion"}'
# Python example
import requests

response = requests.post(
    "https://onnai.ai/api/chat",
    headers={"Authorization": f"Bearer {api_key}"},
    json={"model": "writer", "message": "draft a tagline"}
)
// JavaScript/Node example
const response = await fetch("https://onnai.ai/api/chat", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${apiKey}`,
    "Content-Type": "application/json"
  },
  body: JSON.stringify({ model: "coder", message: "review this code" })
});

Use the CLI for terminal tasks and pipelines. Use the API when building applications that need programmatic access to mimics.


When to Generate Code

Sometimes the smartest move is having onnai write the code, not process the data.

The Principle

❌ Slow & Expensive:  cat huge.csv | onnai "sum column 3"
✅ Fast & Cheap:      onnai "write awk to sum column 3" | bash

Use AI to process data when:

- Data is small (< 100 lines)
- Task requires understanding or judgment
- Output needs natural language

Use AI to generate code when:

- Data is large
- Task is mechanical or repetitive
- Transformation is well-defined
- You’ll run it multiple times

CSV Processing

# ❌ Expensive: sending all data through AI
cat sales.csv | onnai "calculate total revenue"

# ✅ Cheap: AI writes the command, awk does the work
SCRIPT=$(echo "sum the 'revenue' column (column 4) in a CSV with headers" | \
  onnai "write an awk one-liner")
cat sales.csv | eval "$SCRIPT"

# Even better: save for reuse
echo "sum column 4, skip header" | onnai "awk one-liner" > sum_revenue.awk
cat sales.csv | awk -f sum_revenue.awk

Generate Once, Run Forever

#!/bin/bash
# generate-transformer.sh - Create a reusable data transformer

TASK="$1"
INPUT_SAMPLE="$2"
OUTSCRIPT="${3:-transform.sh}"

{
  echo "Task: $TASK"
  echo "Sample input:"
  head -5 "$INPUT_SAMPLE"
} | onnai "write a bash script to do this transformation. use standard unix tools." > "$OUTSCRIPT"

chmod +x "$OUTSCRIPT"
echo "Generated: $OUTSCRIPT"
echo "Usage: cat data.csv | ./$OUTSCRIPT"

Usage:

./generate-transformer.sh "extract emails from column 2, dedupe, sort" data.csv
# Creates transform.sh - runs instantly on any size file

Image Processing

# ❌ Can't do: AI can't actually manipulate images
cat photo.jpg | onnai "make it grayscale"  # Won't work

# ✅ Generate the ImageMagick command
TASK="convert to grayscale, resize to 800px wide, add 5px white border"
CMD=$(echo "$TASK" | onnai "write ImageMagick convert command for input.jpg output.jpg")
echo "$CMD"  # convert input.jpg -colorspace Gray -resize 800x -border 5 -bordercolor white output.jpg
eval "$CMD"

# Batch process
for img in *.jpg; do
  convert "$img" -colorspace Gray -resize 800x "gray_$img"
done

Bulk Image Operations

#!/bin/bash
# image-batch.sh - Generate and run image processing

TASK="$1"
PATTERN="${2:-*.jpg}"

# Generate the command template
CMD_TEMPLATE=$(echo "$TASK" | \
  onnai "write ImageMagick command. use INPUT and OUTPUT as placeholders.")

echo "Generated command: $CMD_TEMPLATE"
echo "Processing files matching: $PATTERN"

for input in $PATTERN; do
  output="processed_$(basename "$input")"
  CMD=$(echo "$CMD_TEMPLATE" | sed "s/INPUT/$input/g; s/OUTPUT/$output/g")
  eval "$CMD"
  echo "✓ $input$output"
done

Regex Generation

# Let AI write the regex, then grep does the work
PATTERN=$(echo "match US phone numbers in any format" | \
  onnai "write a grep -E regex pattern only, no explanation")

grep -E "$PATTERN" contacts.txt

# Save patterns for reuse
while read -r description; do
  pattern=$(echo "$description" | onnai "grep -E pattern only")
  echo "$description: $pattern"
done << 'EOF' > patterns.txt
emails
US phone numbers
URLs
IP addresses
EOF

SQL Query Generation

# Describe what you want, get SQL
QUERY=$(cat << 'EOF' | onnai "write PostgreSQL query only"
From the 'orders' table:
- Get total revenue by month for 2024
- Include order count
- Sort by month
- Only include months with revenue > 10000
EOF
)

echo "$QUERY"
psql -d mydb -c "$QUERY"

jq Filter Generation

# Complex JSON transformations
FILTER=$(echo "extract all user emails where status is active" | \
  onnai "write jq filter only")

cat users.json | jq "$FILTER"

# Multi-step transforms
cat << 'EOF' | onnai "write jq filter" > transform.jq
- Flatten nested 'orders' arrays
- Sum 'amount' field per 'customer_id'  
- Sort by total descending
- Take top 10
EOF

cat data.json | jq -f transform.jq

FFmpeg Commands

# Video/audio processing
CMD=$(echo "convert to mp4, 720p, 30fps, compress for web" | \
  onnai "write ffmpeg command for input.mov to output.mp4")

eval "$CMD"

# Batch convert
TEMPLATE=$(echo "extract audio as mp3, 128kbps" | \
  onnai "ffmpeg command, use INPUT and OUTPUT placeholders")

for video in *.mp4; do
  CMD=$(echo "$TEMPLATE" | sed "s/INPUT/$video/; s/OUTPUT/${video%.mp4}.mp3/")
  eval "$CMD"
done

Sed/Awk Scripts

# Generate sed script for complex find-replace
SCRIPT=$(cat << 'EOF' | onnai "write sed script"
- Replace "colour" with "color" (all variations)
- Remove lines starting with #
- Convert dates from DD/MM/YYYY to YYYY-MM-DD
EOF
)

echo "$SCRIPT" > transform.sed
sed -f transform.sed input.txt > output.txt

The Hybrid Pattern

AI designs, code executes:

#!/bin/bash
# smart-process.sh - AI designs, code executes

INPUT="$1"
TASK="$2"

# Step 1: AI analyzes a sample
SAMPLE=$(head -20 "$INPUT")
ANALYSIS=$(echo "$SAMPLE" | onnai "analyze this data format")

# Step 2: AI generates processing code
SCRIPT=$({
  echo "Data format: $ANALYSIS"
  echo "Task: $TASK"
} | onnai "write bash script using awk/sed")

# Step 3: Execute on full dataset
echo "$SCRIPT" > /tmp/process.sh
chmod +x /tmp/process.sh
cat "$INPUT" | /tmp/process.sh

When AI Processing IS Better

# Sentiment analysis - needs understanding
cat reviews.txt | onnai "classify each line as positive/negative/neutral"

# Summarization - needs judgment  
cat article.txt | onnai "summarize in 3 bullet points"

# Translation - needs language knowledge
cat english.txt | onnai "translate to Spanish"

# Classification
cat tickets.txt | onnai "categorize as: billing, technical, feature-request"

Decision Guide

Task                 Approach              Why
Sum CSV column       Generate awk          Mechanical, any size
Find regex matches   Generate pattern      Pattern is reusable
Resize images        Generate ImageMagick  AI can’t process binary
Convert video        Generate ffmpeg       AI can’t process binary
Classify text        Process with AI       Needs understanding
Summarize            Process with AI       Needs judgment
Transform JSON       Generate jq           Faster, reusable
Parse logs           Generate grep/awk     Mechanical, fast
Extract entities     Process with AI       Needs NLP

Pro Tips

  1. Cache generated code: Save scripts for repeated tasks
  2. Validate first: echo "$CMD" before eval "$CMD"
  3. Include samples: Show AI a few lines of input for better code
  4. Iterate: If generated code fails, feed the error back to the AI (see the sketch after this list)
  5. Combine approaches: Generate code for transform, use AI for classification
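
A minimal sketch of tips 2 and 4 together: preview the generated command, and if it fails, feed the error back for a corrected version (assumes $CMD holds a previously generated command, as in the examples above).

echo "Generated: $CMD"
if ! OUTPUT=$(eval "$CMD" 2>&1); then
  # Feed the failing command and its error back for a fix, then retry once
  CMD=$(printf 'Command: %s\nError: %s\n' "$CMD" "$OUTPUT" | \
    onnai "fix this command, output only the corrected command")
  eval "$CMD"
fi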

Document Conversion Tools

Onnai works with text streams. These tools convert common document formats for piping.

macOS

textutil (built-in)

# Word docs to text
textutil -convert txt contract.docx -stdout | legal "review"

# RTF to text
textutil -convert txt letter.rtf -stdout | onnai "summarize"

# Batch convert
find . -name "*.docx" -exec textutil -convert txt {} \;

Supports: .doc, .docx, .rtf, .odt, .html, .webarchive

pdftotext (poppler)

brew install poppler

# PDF to text
pdftotext contract.pdf - | legal "flag risky clauses"

# Preserve layout (good for tables)
pdftotext -layout invoice.pdf - | onnai "extract line items"

# Specific pages
pdftotext -f 1 -l 5 longdoc.pdf - | onnai "summarize first 5 pages"

Linux

pdftotext (poppler-utils)

apt install poppler-utils
pdftotext document.pdf - | legal "review"

catdoc (Word docs)

apt install catdoc
catdoc contract.doc | legal "summarize"

pandoc (universal converter)

apt install pandoc

# Word to text
pandoc contract.docx -t plain | legal "review"

# Any format to any format
pandoc brief.docx -o brief.md
pandoc notes.md -o notes.pdf

Cross-Platform

LibreOffice (headless)

libreoffice --headless --convert-to txt contract.docx
cat contract.txt | legal "review"

Apache Tika (Java, heavy but thorough)

java -jar tika-app.jar --text contract.pdf | legal "review"

Handles: PDF, Office docs, emails, images (OCR), archives

OCR for Scanned Documents

tesseract

# macOS
brew install tesseract

# Linux
apt install tesseract-ocr

# Image to text
tesseract scan.png stdout | legal "extract key terms"

# PDF with scanned pages
pdftoppm -png scan.pdf scan
tesseract scan-1.png stdout | legal "review"

ocrmypdf (PDFs with OCR)

pip install ocrmypdf

ocrmypdf scanned.pdf searchable.pdf
pdftotext searchable.pdf - | legal "summarize"

Document Conversion Quick Reference

Format   macOS      Linux           Cross-Platform
.docx    textutil   catdoc, pandoc  LibreOffice
.doc     textutil   catdoc          LibreOffice
.pdf     pdftotext  pdftotext       Tika
.rtf     textutil   pandoc          LibreOffice
Scanned  tesseract  tesseract       ocrmypdf
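
A small helper can pick the right converter by extension, so the rest of a pipeline stays format-agnostic. A sketch assuming the tools above are installed (to_text is a made-up name):

to_text() {
  case "${1##*.}" in
    pdf)           pdftotext "$1" - ;;
    docx|rtf|odt)  pandoc "$1" -t plain ;;
    doc)           catdoc "$1" ;;
    png|jpg|tiff)  tesseract "$1" stdout ;;
    *)             cat "$1" ;;
  esac
}

to_text contract.pdf | legal "flag risky clauses"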

Web Interaction Tools

Fetch content from the web and pipe it into onnai.

curl

# Fetch and analyze
curl -s https://example.com/api/status | onnai "summarize"

# API responses
curl -s https://api.github.com/repos/owner/repo/issues | onnai "categorize these issues"

# With authentication
curl -s -H "Authorization: Bearer $TOKEN" https://api.example.com/data | onnai "analyze"

# POST and analyze response
curl -s -X POST -d '{"query":"test"}' https://api.example.com | onnai "explain"

# Follow redirects
curl -sL https://short.url/abc | onnai "summarize"

Useful curl flags

Flag  Purpose
-s    Silent (no progress bar)
-L    Follow redirects
-H    Add header
-d    POST data
-I    Headers only

wget

# Fetch page
wget -qO- https://example.com | onnai "summarize"

# Download then process
wget -q https://example.com/report.pdf
pdftotext report.pdf - | legal "review"

httpie (human-friendly alternative)

brew install httpie

http -b https://api.example.com/users | onnai "summarize"
http -b POST https://api.example.com/query data=test | onnai "explain"

HTML Processing

Extracting text from HTML

# lynx (text browser)
lynx -dump https://example.com | onnai "summarize"

# w3m
w3m -dump https://example.com | onnai "summarize"

# pandoc
curl -s https://example.com | pandoc -f html -t plain | onnai "summarize"

Parsing HTML (extract specific content)

# pup (like jq for HTML)
brew install pup

curl -s https://news.site | pup 'article text{}' | onnai "summarize"
curl -s https://example.com | pup 'table tr td text{}' | onnai "analyze"

JSON APIs

# jq for preprocessing
curl -s https://api.github.com/repos/torvalds/linux/commits | \
  jq '.[].commit.message' | \
  onnai "summarize recent changes"

# Extract then analyze
curl -s https://api.example.com/users | \
  jq '.[] | {name, email}' | \
  onnai "find duplicates"

RSS/Atom Feeds

curl -s https://blog.example.com/feed.xml | \
  xmllint --xpath '//item/title/text()' - 2>/dev/null | \
  onnai "summarize recent posts"

Authentication Patterns

# Bearer token
curl -s -H "Authorization: Bearer $TOKEN" https://api.example.com | onnai "analyze"

# Basic auth
curl -s -u user:pass https://api.example.com | onnai "analyze"

# API key in header
curl -s -H "X-API-Key: $KEY" https://api.example.com | onnai "analyze"

Web Quick Reference

Task              Command
Fetch page        curl -s URL
Follow redirects  curl -sL URL
Strip HTML        curl -s URL | lynx -dump -stdin
Parse JSON        curl -s URL | jq '.field'
Extract elements  curl -s URL | pup 'selector'
POST data         curl -s -X POST -d 'data' URL


Part 2: Content Creation

Writing a Book with Onnai

A bash script workflow for generating a full book.

The Script

#!/bin/bash
# write-book.sh - Generate a book using onnai
# Usage: ./write-book.sh "Your Book Title" "Genre/Style"

TITLE="$1"
STYLE="${2:-fiction}"
OUTDIR="./book-$(date +%Y%m%d)"

mkdir -p "$OUTDIR"

# Set up context
cat > "$OUTDIR/.context" << EOF
Writing a book titled "$TITLE"
Style: $STYLE
Maintain consistent tone, characters, and plot throughout.
Write in third person past tense.
EOF

cd "$OUTDIR"

echo "=== Generating book: $TITLE ==="

# Step 1: Generate premise
echo "Step 1: Premise..."
echo "$TITLE" | onnai "create a compelling one-paragraph premise" > premise.txt
cat premise.txt

# Step 2: Create outline
echo "Step 2: Outline..."
cat premise.txt | onnai "create a 12-chapter outline with chapter titles and 2-sentence summaries" > outline.txt

# Step 3: Develop characters
echo "Step 3: Characters..."
cat premise.txt | onnai "create 4 main characters with names, backgrounds, motivations, and flaws" > characters.txt

# Step 4: Write each chapter
echo "Step 4: Writing chapters..."
for i in {1..12}; do
  echo "  Chapter $i..."
  
  # Extract this chapter's outline
  cat outline.txt | onnai "extract only chapter $i details" > "ch${i}_outline.txt"
  
  # Build chapter context
  {
    echo "=== CHARACTERS ==="
    cat characters.txt
    echo ""
    echo "=== STORY SO FAR ==="
    if [ $i -gt 1 ]; then
      cat ch*_summary.txt 2>/dev/null
    else
      echo "This is the opening chapter."
    fi
    echo ""
    echo "=== THIS CHAPTER ==="
    cat "ch${i}_outline.txt"
  } > "ch${i}_context.txt"
  
  # Write the chapter
  cat "ch${i}_context.txt" | onnai "write this chapter, approximately 2000 words" > "ch${i}_draft.txt"
  
  # Generate summary for continuity
  cat "ch${i}_draft.txt" | onnai "summarize in 3 sentences for continuity" > "ch${i}_summary.txt"
  
  sleep 2  # rate limiting
done

# Step 5: Compile first draft
echo "Step 5: Compiling..."
{
  echo "# $TITLE"
  echo ""
  for i in {1..12}; do
    echo "## Chapter $i"
    echo ""
    cat "ch${i}_draft.txt"
    echo ""
  done
} > first_draft.md

# Step 6: Editorial pass
echo "Step 6: Editorial review..."
cat first_draft.md | onnai "identify plot holes, inconsistencies, and pacing issues" > editorial_notes.txt

# Step 7: Generate back matter
echo "Step 7: Back matter..."
cat premise.txt characters.txt | onnai "write a compelling back cover blurb, 150 words" > blurb.txt

echo "=== Done! ==="
echo "Output in: $OUTDIR"
echo "  first_draft.md  - Full manuscript"
echo "  editorial_notes.txt - Issues to fix"
echo "  blurb.txt - Back cover copy"

wc -w first_draft.md

Usage

chmod +x write-book.sh

# Fiction
./write-book.sh "The Last Algorithm" "sci-fi thriller"

# Non-fiction
./write-book.sh "Unix for Poets" "technical guide, friendly tone"

# Mystery
./write-book.sh "Death by Daemon" "cozy mystery, tech setting"

Non-Fiction Variant

#!/bin/bash
# write-nonfiction.sh - Generate a non-fiction book

TITLE="$1"
TOPIC="$2"
OUTDIR="./book-$(date +%Y%m%d)"

mkdir -p "$OUTDIR"
cd "$OUTDIR"

cat > .context << EOF
Writing non-fiction: "$TITLE"
Topic: $TOPIC
Style: Clear, engaging, practical with examples.
Audience: Intermediate practitioners.
EOF

# Research/brainstorm
echo "$TOPIC" | onnai "list 10 key concepts readers must understand" > concepts.txt

# Create structure
cat concepts.txt | onnai "organize into 8 chapters with logical progression" > outline.txt

# Write each chapter
for i in {1..8}; do
  echo "Chapter $i..."
  
  cat outline.txt | onnai "extract chapter $i topic" > "ch${i}_topic.txt"
  
  {
    cat "ch${i}_topic.txt"
    echo "Include: explanation, real-world example, common mistakes, key takeaways"
  } | onnai "write this chapter, 2500 words" > "ch${i}.txt"
  
  sleep 2
done

# Compile
{
  echo "# $TITLE"
  echo ""
  cat concepts.txt | onnai "write a preface explaining why this book matters"
  echo ""
  for i in {1..8}; do
    echo "## Chapter $i"
    cat "ch${i}.txt"
    echo ""
  done
  echo "## Conclusion"
  cat outline.txt | onnai "write a conclusion summarizing key points and next steps"
} > manuscript.md

echo "Done: manuscript.md ($(wc -w < manuscript.md) words)"

Editing Pipeline

#!/bin/bash
# edit-book.sh - Multi-pass editing

MANUSCRIPT="$1"

# Pass 1: Structural
cat "$MANUSCRIPT" | onnai "identify structural issues: pacing, chapter balance, narrative arc" > edit_structural.txt

# Pass 2: Line editing
cat "$MANUSCRIPT" | onnai "flag awkward sentences, repetitive words, passive voice" > edit_line.txt

# Pass 3: Consistency
cat "$MANUSCRIPT" | onnai "find inconsistencies: character details, timeline, facts" > edit_consistency.txt

# Pass 4: Dialogue (fiction)
cat "$MANUSCRIPT" | onnai "review dialogue: does each character have distinct voice?" > edit_dialogue.txt

# Summary
{
  echo "# Editorial Report"
  echo "## Structural Issues"
  cat edit_structural.txt
  echo "## Line Edit Notes"  
  cat edit_line.txt
  echo "## Consistency Check"
  cat edit_consistency.txt
  echo "## Dialogue Review"
  cat edit_dialogue.txt
} > editorial_report.md

echo "See: editorial_report.md"

Chapter-by-Chapter Revision

#!/bin/bash
# revise-chapter.sh - Revise a single chapter with notes

CHAPTER="$1"
NOTES="$2"

# Apply notes
{
  echo "=== ORIGINAL ==="
  cat "$CHAPTER"
  echo ""
  echo "=== REVISION NOTES ==="
  cat "$NOTES"
} | onnai "rewrite the chapter incorporating these notes" > "${CHAPTER%.txt}_revised.txt"

# Show diff
diff "$CHAPTER" "${CHAPTER%.txt}_revised.txt" | head -50

Tips

  1. Continuity: The chapter summaries (ch*_summary.txt) feed into subsequent chapters
  2. Rate limiting: Add sleep between API calls to avoid throttling
  3. Iteration: Run the editorial pass, fix issues, repeat
  4. Voice consistency: Put style notes in .context
  5. Large books: Process in batches and checkpoint progress (see the sketch after this list)
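
A sketch of tip 5: the chapter loop in write-book.sh becomes resumable if it skips chapters that already have a draft, so an interrupted run picks up where it left off.

for i in {1..12}; do
  # checkpoint: skip chapters already written in a previous run
  if [ -s "ch${i}_draft.txt" ]; then
    echo "Chapter $i already drafted, skipping"
    continue
  fi
  # assumes ch${i}_context.txt was built as in the main script
  cat "ch${i}_context.txt" | onnai "write this chapter, approximately 2000 words" > "ch${i}_draft.txt"
  sleep 2
done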

Converting to Final Formats

# To Word doc
pandoc manuscript.md -o manuscript.docx

# To PDF
pandoc manuscript.md -o manuscript.pdf

# To EPUB
pandoc manuscript.md -o book.epub --metadata title="$TITLE"

# With styling
pandoc manuscript.md -o book.epub --css=ebook.css --toc

Content Creation Pipeline

Transform loose notes into podcasts, newsletters, blog posts, and more.

The Core Idea

One set of notes → multiple content formats:

notes.txt
    ├── blog post
    ├── newsletter
    ├── podcast script
    ├── Twitter thread
    ├── LinkedIn post
    └── YouTube script

Notes to Everything Script

#!/bin/bash
# content-pipeline.sh - Transform notes into multiple formats
# Usage: ./content-pipeline.sh notes.txt "Topic Name"

NOTES="$1"
TOPIC="${2:-$(head -1 "$NOTES")}"
OUTDIR="./content-$(date +%Y%m%d)"

mkdir -p "$OUTDIR"

cat > "$OUTDIR/.context" << EOF
Topic: $TOPIC
Voice: Conversational, knowledgeable, slightly irreverent.
Audience: Technical professionals who value their time.
EOF

cd "$OUTDIR"
cp "../$NOTES" ./source_notes.txt

echo "=== Generating content for: $TOPIC ==="

# Step 1: Extract key points
echo "Extracting key points..."
cat source_notes.txt | onnai "extract 5-7 key insights, one per line" > key_points.txt

# Step 2: Blog post (long form)
echo "Writing blog post..."
{
  cat source_notes.txt
  echo "---"
  cat key_points.txt
} | onnai "write a 1000-word blog post, conversational but informative" > blog_post.md

# Step 3: Newsletter (condensed)
echo "Writing newsletter..."
cat key_points.txt | onnai "write a newsletter section: hook, 3 key points with brief explanations, one actionable takeaway. 300 words max" > newsletter.md

# Step 4: Podcast script
echo "Writing podcast script..."
{
  echo "Format: Solo podcast, 10-15 minutes"
  echo "Include: intro hook, main points with examples, audience callout, outro"
  cat source_notes.txt
} | onnai "write a podcast script with natural speech patterns, verbal transitions, and places to breathe" > podcast_script.md

# Step 5: Twitter/X thread
echo "Writing Twitter thread..."
cat key_points.txt | onnai "write a Twitter thread: hook tweet, 5-7 tweets expanding points, closer with CTA. No hashtags. No emojis." > twitter_thread.txt

# Step 6: LinkedIn post
echo "Writing LinkedIn post..."
cat key_points.txt | onnai "write a LinkedIn post: hook, story or insight, takeaway, question for engagement. 200 words." > linkedin.txt

# Step 7: YouTube script
echo "Writing YouTube script..."
{
  echo "Format: 8-10 minute video"
  echo "Include: hook in first 10 seconds, chapter markers, b-roll suggestions in [brackets]"
  cat source_notes.txt
} | onnai "write a YouTube script optimized for retention" > youtube_script.md

echo "=== Done! ==="
ls -la "$OUTDIR"

Usage

chmod +x content-pipeline.sh

# From meeting notes
./content-pipeline.sh meeting-notes.txt "What I Learned at the Conference"

# From research
./content-pipeline.sh research.txt "Why Everyone's Wrong About Microservices"

# From experience
./content-pipeline.sh lessons.txt "5 Years of Kubernetes in Production"

Individual Format Scripts

Blog Post

#!/bin/bash
# notes-to-blog.sh

cat "$1" | onnai "extract key insights" | \
onnai "write a blog post:
- Compelling hook
- Clear sections (no H1, start with H2)
- Code examples if relevant
- Practical takeaways
- 800-1200 words" > blog_post.md

Newsletter

#!/bin/bash
# notes-to-newsletter.sh

NOTES="$1"
EDITION="${2:-Weekly}"

{
  echo "Newsletter: $EDITION edition"
  cat "$NOTES"
} | onnai "write newsletter:
- Subject line (compelling, no clickbait)
- Personal intro (2 sentences)
- Main insight (with context)
- Quick hits (3 bullet points)
- One recommendation
- Sign off
Keep it under 400 words." > newsletter.md

# Extract subject line
head -1 newsletter.md

Podcast Script

#!/bin/bash
# notes-to-podcast.sh

NOTES="$1"
DURATION="${2:-15}"  # minutes

{
  echo "Duration target: $DURATION minutes (~$(($DURATION * 150)) words)"
  echo "Style: Conversational, thinking out loud"
  cat "$NOTES"
} | onnai "write podcast script:

[INTRO - 30 sec]
Hook the listener immediately.

[SETUP - 2 min]
Context and why this matters.

[MAIN POINTS - bulk of time]
Go deep on each point.
Include: 'Here's the thing...' transitions
Add: pauses marked as [PAUSE]
Add: emphasis marked as *emphasis*

[EXAMPLES - 2 min]
Real stories or scenarios.

[TAKEAWAY - 1 min]
What should listener do differently?

[OUTRO - 30 sec]
CTA and sign off.

Write naturally, like talking to a smart friend." > podcast_script.md

# Word count estimate
echo "Estimated duration: $(($(wc -w < podcast_script.md) / 150)) minutes"

Twitter/X Thread

#!/bin/bash
# notes-to-thread.sh

cat "$1" | onnai "write Twitter thread:

Tweet 1: Hook. Make them stop scrolling. No 'Thread:' prefix.

Tweets 2-7: One insight per tweet. 
- Start with the point, not 'First,' or 'Second,'
- Use line breaks for readability
- No hashtags
- No emojis

Final tweet: Takeaway + soft CTA (follow, thoughts?)

Number each tweet. Max 280 chars each." > thread.txt

# Count tweets
echo "Thread length: $(grep -c '^[0-9]' thread.txt) tweets"

LinkedIn Post

#!/bin/bash
# notes-to-linkedin.sh

cat "$1" | onnai "write LinkedIn post:

Hook line (pattern interrupt)

[blank line]

Short story or observation (3-4 lines)

[blank line]

The insight (what I learned)

[blank line]

Takeaway for the reader

[blank line]

Question to drive comments?

No emojis. No hashtags. Under 200 words." > linkedin.txt

YouTube Script

#!/bin/bash
# notes-to-youtube.sh

NOTES="$1"
DURATION="${2:-10}"  # minutes

{
  echo "Target: $DURATION minute video"
  cat "$NOTES"
} | onnai "write YouTube script:

[0:00 - HOOK]
First 8 seconds: pattern interrupt or bold claim
'In this video...' preview

[0:30 - INTRO]
Brief context, why watch
[B-ROLL: suggestion here]

[1:00 - CHAPTER 1: title here]
Main content
[VISUAL: what to show on screen]

[continue with chapters...]

[LAST 30 SEC - CTA]
Like, subscribe, but make it natural
Tease next video

Include:
- Timestamp markers
- [B-ROLL] suggestions
- [ON SCREEN TEXT] for key points
- Moments marked [EMPHASIS] for editing" > youtube_script.md

Weekly Content Workflow

#!/bin/bash
# weekly-content.sh - Full week's content from one braindump

BRAINDUMP="$1"

# Monday: Extract this week's theme
cat "$BRAINDUMP" | onnai "what's the most interesting idea here?" > theme.txt

# Tuesday: Blog post
cat "$BRAINDUMP" theme.txt | onnai "write blog post around this theme" > tuesday_blog.md

# Wednesday: Podcast
cat tuesday_blog.md | onnai "convert to 15-min podcast script, add personality" > wednesday_podcast.md

# Thursday: Newsletter
{
  echo "Include link to blog and podcast"
  cat theme.txt
  cat "$BRAINDUMP" | onnai "find a secondary insight"
} | onnai "write newsletter" > thursday_newsletter.md

# Friday: Social
cat theme.txt | onnai "write Twitter thread" > friday_thread.txt
cat theme.txt | onnai "write LinkedIn post" > friday_linkedin.txt

echo "Week's content ready:"
ls -la *.md *.txt

Repurposing Existing Content

# Blog post → Thread
cat blog_post.md | onnai "convert to Twitter thread, maintain key insights" > thread.txt

# Podcast transcript → Blog
cat transcript.txt | onnai "convert to blog post, remove verbal tics, add structure" > blog.md

# YouTube transcript → Newsletter
cat yt_transcript.txt | onnai "extract insights, write newsletter summary" > newsletter.md

# Newsletter archive → Blog post
cat newsletter-*.md | onnai "synthesize into comprehensive blog post" > mega_post.md

# Multiple threads → Ebook chapter
cat thread-*.txt | onnai "combine into cohesive chapter, smooth transitions" > chapter.md

Podcast Production Pipeline

#!/bin/bash
# podcast-full.sh - Full podcast production workflow

NOTES="$1"
EPISODE="${2:-episode}"

# Script
cat "$NOTES" | onnai "write 20-minute podcast script" > "${EPISODE}_script.md"

# Show notes
cat "${EPISODE}_script.md" | onnai "extract show notes:
- Episode summary (2 sentences)
- Key topics with timestamps (estimate)
- Links mentioned (placeholder URLs)
- Guest info if applicable" > "${EPISODE}_shownotes.md"

# Transcript cleanup template
cat > "${EPISODE}_transcript_instructions.txt" << 'EOF'
After recording, run:
cat raw_transcript.txt | onnai "clean up transcript:
- Fix speech-to-text errors
- Remove filler words (um, uh, like)
- Keep natural speech patterns
- Add paragraph breaks
- Add speaker labels if multiple people"
EOF

# Social assets
cat "${EPISODE}_script.md" | onnai "extract 5 quotable moments for social media clips" > "${EPISODE}_quotes.txt"

# Episode description
cat "${EPISODE}_shownotes.md" | onnai "write podcast app description, SEO-friendly, 150 words" > "${EPISODE}_description.txt"

echo "Podcast assets ready in: $EPISODE_*"

Newsletter System

#!/bin/bash
# newsletter-system.sh

NOTES_DIR="./newsletter-notes"
ISSUE="$(date +%Y-%m-%d)"

# Collect week's notes
cat "$NOTES_DIR"/*.txt > "week_notes.txt"

# Generate sections
cat week_notes.txt | onnai "write main article, 300 words" > main.md
cat week_notes.txt | onnai "extract 3 quick links with one-line descriptions" > links.md
cat week_notes.txt | onnai "write 'one thing I'm thinking about' section, mimicl" > thinking.md

# Assemble
{
  echo "# Newsletter - $ISSUE"
  echo ""
  cat main.md
  echo ""
  echo "## Quick Links"
  cat links.md
  echo ""
  echo "## One Thing I'm Thinking About"
  cat thinking.md
  echo ""
  echo "---"
  echo "Thanks for reading. Hit reply if anything resonated."
} > "newsletter_${ISSUE}.md"

# Subject line options
cat main.md | onnai "suggest 3 subject lines: one curiosity-driven, one benefit-driven, one contrarian" > subjects.txt

echo "Newsletter ready. Subject options:"
cat subjects.txt

Content Creation Tips

  1. Consistent voice: Set .context once, all content matches
  2. Notes format: Bullet points work best as input
  3. Iterate: First draft → onnai "make it punchier" → final (example after this list)
  4. Batch processing: Generate all formats, then edit the best ones
  5. Repurpose ruthlessly: Every podcast is a blog is a thread is a newsletter
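
Tip 3 in practice: any draft from the pipeline can go through a quick second pass (blog_post.md is the file produced by content-pipeline.sh).

cat blog_post.md | onnai "make it punchier, keep the structure" > blog_post_v2.md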

Sample Notes Format

# Raw Notes: Lessons from Prod Incident

- 3am page, DB connection pool exhausted
- Took 2 hours to diagnose (logs were useless)
- Root cause: forgot to close connections in new feature
- We had tests but not for connection lifecycle
- Added connection pool metrics after
- Now have runbook for this scenario
- Team was great, no blame
- Still need better observability
- Connection pooling is subtle
- Should have caught in code review

This becomes a blog post, a podcast episode, a Twitter thread, and a newsletter section—all with one command.
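
For example, with the notes above saved to a file (the filename is illustrative), one run of the pipeline script from earlier in this chapter produces every format:

./content-pipeline.sh prod-incident-notes.txt "Lessons from a Prod Incident"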


Building Websites

Generate sites from descriptions, content, or scratch.

Landing Page Generator

#!/bin/bash
# landing-page.sh - Generate a complete landing page

PRODUCT="$1"
DESCRIPTION="$2"

{
  echo "Product: $PRODUCT"
  echo "Description: $DESCRIPTION"
} | onnai "create a landing page HTML:
- Hero with headline and CTA
- 3 feature sections (use emoji icons)
- Testimonial section (3 quotes)
- Pricing (3 tiers)
- FAQ (5 questions)
- Footer

Modern CSS, single file, mobile responsive." > index.html

open index.html 2>/dev/null || xdg-open index.html

Usage:

./landing-page.sh "TaskFlow" "AI project management for remote teams"
./landing-page.sh "FreshBite" "Healthy meal kits delivered weekly"

Site from Markdown

#!/bin/bash
# md-to-site.sh - Convert markdown to website

CONTENT_DIR="${1:-.}"
OUTPUT_DIR="${2:-./site}"

mkdir -p "$OUTPUT_DIR"

# Generate CSS
onnai "create modern CSS for a documentation site" > "$OUTPUT_DIR/style.css"

# Convert each markdown file
for md in "$CONTENT_DIR"/*.md; do
  [ -f "$md" ] || continue
  BASENAME=$(basename "$md" .md)
  TITLE=$(head -1 "$md" | sed 's/^#* *//')
  
  {
    echo "<!DOCTYPE html><html><head><title>$TITLE</title>"
    echo "<link rel='stylesheet' href='style.css'></head><body><main>"
    pandoc "$md" 2>/dev/null || cat "$md" | onnai "convert markdown to HTML"
    echo "</main></body></html>"
  } > "$OUTPUT_DIR/${BASENAME}.html"
done

Portfolio Generator

#!/bin/bash
# portfolio.sh - Personal portfolio site

{
  echo "Name: $1"
  echo "Role: $2"
  echo "Projects:"
  cat "$3"
} | onnai "create portfolio HTML:
- Hero with name, role, bio
- Projects grid
- Skills section
- Contact section
Dark mode, minimal, responsive." > portfolio.html

Blog Generator

#!/bin/bash
# blog-gen.sh - Simple static blog

POSTS_DIR="${1:-./posts}"
OUTPUT_DIR="${2:-./blog}"
mkdir -p "$OUTPUT_DIR"

# Index page
{
  echo "Posts:"
  for post in "$POSTS_DIR"/*.md; do
    TITLE=$(head -1 "$post" | sed 's/^#* *//')
    echo "- $TITLE"
  done
} | onnai "blog index.html with post list, clean typography" > "$OUTPUT_DIR/index.html"

# Each post
for post in "$POSTS_DIR"/*.md; do
  SLUG=$(basename "$post" .md)
  {
    echo "Title: $(head -1 "$post")"
    echo "Content:"
    cat "$post"
  } | onnai "blog post HTML, reading time, share buttons" > "$OUTPUT_DIR/${SLUG}.html"
done

Component Generator

# Generate UI components
echo "pricing table with 3 tiers" | onnai "HTML component, scoped CSS" > pricing.html
echo "testimonial carousel" | onnai "HTML component" > testimonials.html
echo "newsletter signup form" | onnai "HTML component, accessible" > signup.html
echo "FAQ accordion" | onnai "HTML component with vanilla JS" > faq.html

Full Site Generator

#!/bin/bash
# site-gen.sh - Multi-page site from brief

BRIEF="$1"
OUTPUT_DIR="${2:-./website}"
mkdir -p "$OUTPUT_DIR/css"

# Plan the site
cat "$BRIEF" | onnai "list pages needed with sections" > "$OUTPUT_DIR/plan.md"

# Generate CSS
cat "$BRIEF" | onnai "create CSS: variables, typography, layout, responsive" > "$OUTPUT_DIR/css/style.css"

# Generate each page
for page in home about features pricing contact; do
  {
    cat "$BRIEF"
    echo "Generate the $page page"
  } | onnai "HTML page linking css/style.css" > "$OUTPUT_DIR/${page}.html"
done

mv "$OUTPUT_DIR/home.html" "$OUTPUT_DIR/index.html"

Iterate on Design

#!/bin/bash
# iterate.sh - Refine with feedback

{
  cat "$1"
  echo "---"
  echo "Feedback: $2"
} | onnai "update HTML based on feedback" > "${1}.new"

diff "$1" "${1}.new" | head -30
read -p "Apply? [y/N] " confirm
[ "$confirm" = "y" ] && mv "${1}.new" "$1"
./iterate.sh index.html "bigger hero, more whitespace"
./iterate.sh index.html "green color scheme"
./iterate.sh index.html "add button animations"

Documentation Site

#!/bin/bash
# docs-gen.sh - Docs from code

for file in "$1"/*.{py,js,go}; do
  [ -f "$file" ] || continue
  NAME=$(basename "$file")
  cat "$file" | onnai "create documentation HTML:
- Overview
- Functions with params
- Usage examples" > "docs/${NAME%.*}.html"
done

Quick Sites

# One-liners
echo "SaaS for dog walkers" | onnai "landing page HTML" > index.html
echo "Welcome new users" | onnai "HTML email, inline CSS" > welcome.html
echo "Launching soon" | onnai "coming soon page with email capture" > soon.html
echo "Developer blog" | onnai "creative 404 page" > 404.html

Deploy

# GitHub Pages
cd site && git init && git add . && git commit -m "Deploy"
git remote add origin git@github.com:user/repo.git
git push -f origin main

# Netlify
netlify deploy --dir=./site --prod

# Surge
surge ./site mysite.surge.sh

# Vercel  
vercel ./site

Complete Workflow

#!/bin/bash
# website-workflow.sh - End to end

BRIEF="$1"
DOMAIN="$2"

# Generate
./site-gen.sh "$BRIEF" ./site

# Sitemap
{
  echo '<?xml version="1.0"?><urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">'
  find ./site -name "*.html" | sed 's|^\./site/||' | \
    while read -r page; do echo "<url><loc>https://${DOMAIN}/${page}</loc></url>"; done
  echo '</urlset>'
} > ./site/sitemap.xml

# Deploy
surge ./site "$DOMAIN"
echo "Live at https://$DOMAIN"


Part 3: Automation & Integration

Building AI Agents

Agents are loops that observe, think, and act. Unix makes this simple.

The Basic Agent Pattern

#!/bin/bash
# agent-loop.sh - Basic agent structure

while true; do
  # 1. Observe
  OBSERVATION=$(get_current_state)   # placeholder: substitute your own command or function
  
  # 2. Think
  ACTION=$(echo "$OBSERVATION" | onnai "what should I do next? respond with exactly one command")
  
  # 3. Act
  eval "$ACTION"
  
  # 4. Check if done
  echo "$ACTION" | grep -q "DONE" && break
  
  sleep 1
done

File Organizer Agent

#!/bin/bash
# organize-agent.sh - Sorts files into folders

TARGET="${1:-.}"
mkdir -p "$TARGET"/{documents,images,code,data,other}

cat > .context << 'EOF'
You are a file organizer. For each filename, respond with ONLY the category:
documents, images, code, data, or other
No explanation, just the single word.
EOF

for file in "$TARGET"/*; do
  [ -f "$file" ] || continue
  
  FILENAME=$(basename "$file")
  CATEGORY=$(echo "$FILENAME" | onnai "categorize this file")
  CATEGORY=$(echo "$CATEGORY" | tr -d '[:space:]' | tr '[:upper:]' '[:lower:]')
  
  # Validate category
  case "$CATEGORY" in
    documents|images|code|data|other)
      mv "$file" "$TARGET/$CATEGORY/"
      echo "Moved $FILENAME$CATEGORY/"
      ;;
    *)
      mv "$file" "$TARGET/other/"
      echo "Moved $FILENAME → other/ (unknown category: $CATEGORY)"
      ;;
  esac
done

echo "Done. Summary:"
for dir in documents images code data other; do
  echo "  $dir: $(ls "$TARGET/$dir" 2>/dev/null | wc -l) files"
done

Research Agent

#!/bin/bash
# research-agent.sh - Investigates a topic iteratively

TOPIC="$1"
MAX_ITERATIONS="${2:-5}"
WORKDIR="./research-$(date +%s)"
mkdir -p "$WORKDIR"

cat > "$WORKDIR/.context" << EOF
You are a research agent investigating: $TOPIC
Be thorough. Ask follow-up questions. Identify gaps.
EOF

cd "$WORKDIR"

# Initial question
echo "$TOPIC" > current_question.txt
echo "# Research: $TOPIC" > findings.md
echo "" >> findings.md

for i in $(seq 1 $MAX_ITERATIONS); do
  echo "=== Iteration $i ==="
  
  QUESTION=$(cat current_question.txt)
  echo "Researching: $QUESTION"
  
  # Search (simulate with onnai, replace with real search)
  RESULTS=$(echo "$QUESTION" | onnai "provide key facts about this topic, cite sources if known")
  
  # Analyze results
  {
    echo "Question: $QUESTION"
    echo "Results: $RESULTS"
    echo "Previous findings:"
    cat findings.md
  } | onnai "analyze: what did we learn? what gaps remain? what should we investigate next?" > analysis.txt
  
  # Extract new findings
  cat analysis.txt | onnai "extract only the new facts learned, as bullet points" >> findings.md
  
  # Determine next question
  NEXT=$(cat analysis.txt | onnai "what's the single most important follow-up question? respond with just the question")
  
  # Check if done
  if echo "$NEXT" | grep -qi "sufficient\|complete\|no further"; then
    echo "Research complete."
    break
  fi
  
  echo "$NEXT" > current_question.txt
  sleep 2
done

echo "=== Final Report ==="
cat findings.md | onnai "synthesize into a coherent research summary" > final_report.md
cat final_report.md

Code Review Agent

#!/bin/bash
# review-agent.sh - Multi-pass code review

FILE="$1"

cat > .context << 'EOF'
You are a thorough code reviewer. Be specific. Cite line numbers.
EOF

echo "=== Reviewing: $FILE ==="

# Pass 1: Security
echo "## Security Review" > review.md
cat -n "$FILE" | onnai "find security issues: injection, auth, secrets, etc." >> review.md

# Pass 2: Bugs
echo -e "\n## Potential Bugs" >> review.md
cat -n "$FILE" | onnai "find bugs: off-by-one, null refs, race conditions, etc." >> review.md

# Pass 3: Performance
echo -e "\n## Performance Issues" >> review.md
cat -n "$FILE" | onnai "find performance issues: N+1, unnecessary loops, memory leaks" >> review.md

# Pass 4: Style
echo -e "\n## Style & Readability" >> review.md
cat -n "$FILE" | onnai "find style issues: naming, complexity, missing docs" >> review.md

# Prioritize
echo -e "\n## Priority Actions" >> review.md
cat review.md | onnai "list top 5 issues to fix, ranked by severity" >> review.md

cat review.md

Debugging Agent

#!/bin/bash
# debug-agent.sh - Iteratively debugs an issue

ERROR_LOG="$1"
CODEBASE="${2:-.}"
MAX_ATTEMPTS=5

cat > .context << 'EOF'
You are a debugging agent. Be methodical.
Respond with ONE action at a time in format:
ACTION: <inspect|hypothesis|test|fix|done>
TARGET: <file or component>
DETAILS: <what to do>
EOF

echo "=== Starting debug session ==="
echo "Error: $(head -5 "$ERROR_LOG")"

# Initialize state
echo "" > hypotheses.txt
echo "" > tested.txt

for attempt in $(seq 1 $MAX_ATTEMPTS); do
  echo "--- Attempt $attempt ---"
  
  # Build context
  {
    echo "=== ERROR ==="
    cat "$ERROR_LOG"
    echo ""
    echo "=== HYPOTHESES ==="
    cat hypotheses.txt
    echo ""
    echo "=== TESTED ==="
    cat tested.txt
  } > debug_context.txt
  
  # Get next action
  ACTION=$(cat debug_context.txt | onnai "what's the next debugging step?")
  echo "$ACTION"
  
  # Parse action
  ACTION_TYPE=$(echo "$ACTION" | grep "ACTION:" | cut -d: -f2 | tr -d ' ')
  TARGET=$(echo "$ACTION" | grep "TARGET:" | cut -d: -f2-)
  
  case "$ACTION_TYPE" in
    inspect)
      echo "Inspecting: $TARGET"
      if [ -f "$CODEBASE/$TARGET" ]; then
        cat -n "$CODEBASE/$TARGET" | onnai "analyze for issues related to: $(cat "$ERROR_LOG" | head -1)"
      else
        find "$CODEBASE" -name "*$TARGET*" -exec cat {} \;
      fi
      ;;
    hypothesis)
      echo "$ACTION" >> hypotheses.txt
      ;;
    test)
      echo "Test: $TARGET" >> tested.txt
      # In real agent, would run actual tests
      ;;
    fix)
      echo "=== SUGGESTED FIX ==="
      echo "$ACTION"
      ;;
    done)
      echo "Debug session complete."
      break
      ;;
  esac
  
  sleep 1
done

Monitoring Agent

#!/bin/bash
# monitor-agent.sh - Watches logs and reacts

LOG_FILE="$1"
ALERT_CMD="${2:-echo ALERT:}"

cat > .context << 'EOF'
You are a log monitoring agent.
Classify each log batch as: normal, warning, critical
For critical: explain the issue in one line
Respond in format:
LEVEL: <normal|warning|critical>
SUMMARY: <one line if not normal>
EOF

echo "Monitoring: $LOG_FILE"
echo "Alert command: $ALERT_CMD"

tail -f "$LOG_FILE" | while read -r line; do
  # Batch lines (collect for 5 seconds)
  BATCH="$line"
  read -t 5 -r more && BATCH="$BATCH"$'\n'"$more"
  
  # Analyze
  ANALYSIS=$(echo "$BATCH" | onnai "analyze these log lines")
  LEVEL=$(echo "$ANALYSIS" | grep "LEVEL:" | cut -d: -f2 | tr -d ' ')
  SUMMARY=$(echo "$ANALYSIS" | grep "SUMMARY:" | cut -d: -f2-)
  
  case "$LEVEL" in
    critical)
      $ALERT_CMD "CRITICAL: $SUMMARY"
      echo "$BATCH" | onnai "suggest immediate remediation" 
      ;;
    warning)
      echo "⚠️  Warning: $SUMMARY"
      ;;
  esac
done

Task Execution Agent

#!/bin/bash
# task-agent.sh - Breaks down and executes complex tasks

TASK="$1"

cat > .context << 'EOF'
You are a task execution agent with access to bash.
Break tasks into steps. Execute one at a time.
Respond in format:
THOUGHT: <reasoning>
ACTION: <bash command or DONE>
Only use safe, non-destructive commands.
EOF

echo "=== Task: $TASK ==="

HISTORY=""
STEP=0

while [ $STEP -lt 20 ]; do
  ((STEP++))
  
  # Plan next action
  RESPONSE=$({
    echo "Task: $TASK"
    echo "History:"
    echo "$HISTORY"
    echo "---"
    echo "What's the next step?"
  } | onnai)
  
  echo "Step $STEP:"
  echo "$RESPONSE"
  
  # Extract action
  ACTION=$(echo "$RESPONSE" | grep "ACTION:" | cut -d: -f2-)
  
  if echo "$ACTION" | grep -qi "DONE"; then
    echo "=== Task Complete ==="
    break
  fi
  
  # Safety check
  if echo "$ACTION" | grep -qE "rm -rf|sudo|mkfs|dd |:(){"; then
    echo "Blocked unsafe command: $ACTION"
    HISTORY="$HISTORY"$'\n'"Step $STEP: BLOCKED - $ACTION"
    continue
  fi
  
  # Execute
  echo "Executing: $ACTION"
  RESULT=$(eval "$ACTION" 2>&1 | head -50)
  echo "Result: $RESULT"
  
  HISTORY="$HISTORY"$'\n'"Step $STEP: $ACTION"$'\n'"Result: $RESULT"
  
  sleep 1
done

Multi-Agent Collaboration

#!/bin/bash
# multi-agent.sh - Agents that talk to each other

PROBLEM="$1"

# Agent 1: Proposer
propose() {
  echo "$1" | onnai "propose a solution to this problem. be specific."
}

# Agent 2: Critic  
critique() {
  echo "$1" | onnai "find flaws in this proposal. be harsh but constructive."
}

# Agent 3: Synthesizer
synthesize() {
  echo "$1" | onnai "synthesize the proposal and critique into an improved solution"
}

echo "=== Problem: $PROBLEM ==="

PROPOSAL=$(propose "$PROBLEM")
echo -e "\n📝 Proposal:\n$PROPOSAL"

CRITICISM=$(critique "$PROPOSAL")
echo -e "\n🔍 Critique:\n$CRITICISM"

FINAL=$({
  echo "Original problem: $PROBLEM"
  echo "Proposal: $PROPOSAL"
  echo "Critique: $CRITICISM"
} | synthesize)
echo -e "\n✅ Final Solution:\n$FINAL"

Autonomous Coder

#!/bin/bash
# coder-agent.sh - Writes code iteratively with tests

SPEC="$1"
LANGUAGE="${2:-python}"
OUTFILE="${3:-solution.$LANGUAGE}"

cat > .context << EOF
Language: $LANGUAGE
Write clean, tested, production-ready code.
EOF

echo "=== Building: $SPEC ==="

# Step 1: Design
echo "Designing..."
DESIGN=$(echo "$SPEC" | onnai "design the solution: list functions, data structures, edge cases")
echo "$DESIGN"

# Step 2: Initial implementation
echo "Implementing..."
{
  echo "Spec: $SPEC"
  echo "Design: $DESIGN"
} | onnai "write the implementation" > "$OUTFILE"

# Step 3: Generate tests
echo "Generating tests..."
cat "$OUTFILE" | onnai "write unit tests for this code" > "test_$OUTFILE"

# Step 4: Review and iterate
for i in 1 2 3; do
  echo "Review iteration $i..."
  
  ISSUES=$({
    echo "=== CODE ==="
    cat "$OUTFILE"
    echo "=== TESTS ==="
    cat "test_$OUTFILE"
  } | onnai "find bugs, edge cases, improvements. be specific.")
  
  if echo "$ISSUES" | grep -qi "no issues\|looks good\|none found"; then
    echo "Code approved!"
    break
  fi
  
  echo "Issues found: $ISSUES"
  
  # Fix issues
  {
    cat "$OUTFILE"
    echo "---"
    echo "Fix these issues: $ISSUES"
  } | onnai "output the corrected code" > "${OUTFILE}.new"
  mv "${OUTFILE}.new" "$OUTFILE"
done

echo "=== Done ==="
echo "Code: $OUTFILE"
echo "Tests: test_$OUTFILE"

Agent Building Tips

  1. Clear action format: Make the AI respond in parseable formats (ACTION:, LEVEL:, etc.)
  2. Safety guards: Always validate and sanitize commands before eval
  3. State tracking: Keep history so the agent remembers what it tried
  4. Exit conditions: Always have a way to break the loop
  5. Rate limiting: Add sleep to avoid hammering APIs
  6. Bounded iterations: Set MAX_ATTEMPTS to prevent infinite loops
  7. Logging: Pipe agent output to tee for debugging, as shown below
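
For example, running the task agent from above with its output logged (the task text is illustrative):

./task-agent.sh "summarize the csv files in ./data" 2>&1 | tee agent.log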

Agent Patterns Reference

Pattern          Use Case
Single-pass      One-shot analysis or transformation
Loop until done  Iterative refinement
Multi-agent      Different perspectives or expertise
Reactive         Monitoring and responding to events
Planning         Break down complex tasks
Tool-using       Execute commands and process results

Integrations

Connect onnai to the tools you already use.

Cron Jobs

# Edit crontab
crontab -e

# Daily summary at 9am
0 9 * * * /home/user/scripts/daily-summary.sh | mail -s "Daily Report" user@example.com

# Hourly log check
0 * * * * tail -100 /var/log/app.log | onnai "any issues?" >> /var/log/ai-monitor.log

# Weekly newsletter draft (Sunday 6pm)
0 18 * * 0 /home/user/scripts/weekly-newsletter.sh

# Monitor every 5 minutes, alert on issues
*/5 * * * * /home/user/scripts/health-check.sh

daily-summary.sh:

#!/bin/bash

{
  echo "=== Yesterday's Sales ==="
  cat /var/log/sales.log | grep "$(date -d yesterday +%Y-%m-%d)"
  
  echo "=== Errors ==="
  cat /var/log/app.log | grep -i error | tail -20
  
  echo "=== Server Stats ==="
  df -h
  free -h
} | onnai "write a brief daily ops summary, highlight anything concerning"

health-check.sh:

#!/bin/bash

STATUS=$(curl -s -o /dev/null -w "%{http_code}" https://yourapp.com/health)

if [ "$STATUS" != "200" ]; then
  echo "Health check failed: $STATUS" | onnai "write urgent alert" | \
    curl -X POST -d @- "https://hooks.slack.com/services/XXX"
fi

Slack Integration

Send to Slack:

#!/bin/bash
# slack-send.sh

WEBHOOK_URL="https://hooks.slack.com/services/YOUR/WEBHOOK/URL"

send_slack() {
  curl -s -X POST "$WEBHOOK_URL" \
    -H "Content-Type: application/json" \
    -d "{\"text\": \"$1\"}"
}

MESSAGE=$(cat | onnai "make this a friendly Slack message")
send_slack "$MESSAGE"

Usage:

echo "Build failed for main branch" | ./slack-send.sh
cat sales.log | onnai "summarize today" | ./slack-send.sh

Slack bot listener:

#!/bin/bash
# slack-bot.sh - Requires jq, websocat

BOT_TOKEN="xoxb-your-bot-token"
WS_URL=$(curl -s "https://slack.com/api/rtm.connect?token=$BOT_TOKEN" | jq -r '.url')
BOT_ID=$(curl -s -H "Authorization: Bearer $BOT_TOKEN" https://slack.com/api/auth.test | jq -r '.user_id')

websocat "$WS_URL" | while read -r event; do
  TYPE=$(echo "$event" | jq -r '.type')
  
  if [ "$TYPE" = "message" ]; then
    TEXT=$(echo "$event" | jq -r '.text')
    CHANNEL=$(echo "$event" | jq -r '.channel')
    
    if echo "$TEXT" | grep -q "<@$BOT_ID>"; then
      RESPONSE=$(echo "$TEXT" | onnai "respond helpfully and briefly")
      curl -s -X POST "https://slack.com/api/chat.postMessage" \
        -H "Authorization: Bearer $BOT_TOKEN" \
        -H "Content-Type: application/json" \
        -d "{\"channel\": \"$CHANNEL\", \"text\": \"$RESPONSE\"}"
    fi
  fi
done

Discord Integration

#!/bin/bash
# discord-send.sh

WEBHOOK_URL="https://discord.com/api/webhooks/YOUR/WEBHOOK"

MESSAGE=$(cat | onnai "format for Discord, brief")

curl -s -X POST "$WEBHOOK_URL" \
  -H "Content-Type: application/json" \
  -d "{\"content\": \"$MESSAGE\"}"

Email Automation

Process inbox:

#!/bin/bash
# process-inbox.sh

MAILDIR="$HOME/Maildir/new"

for mail in "$MAILDIR"/*; do
  [ -f "$mail" ] || continue
  
  FROM=$(grep -m1 "^From:" "$mail" | sed 's/From: //')
  SUBJECT=$(grep -m1 "^Subject:" "$mail" | sed 's/Subject: //')
  BODY=$(sed -n '/^$/,$p' "$mail" | tail -n +2)
  
  # Classify
  CATEGORY=$(echo "$SUBJECT $BODY" | head -c 1000 | \
    onnai "classify as: urgent, support, sales, spam")
  
  case "$CATEGORY" in
    urgent) echo "$SUBJECT from $FROM" | ./slack-send.sh ;;
    support) echo "$BODY" | onnai "draft helpful response" > "drafts/${FROM}_reply.txt" ;;
    spam) mv "$mail" "$HOME/Maildir/spam/" ;;
  esac
done

Send email:

#!/bin/bash
# send-email.sh

TO="$1"
SUBJECT="$2"
CONTEXT="$3"

BODY=$(echo "$CONTEXT" | onnai "write a professional email about this")
echo "$BODY" | mail -s "$SUBJECT" "$TO"

Database Workflows

Natural language to SQL:

#!/bin/bash
# db-query.sh

DB_NAME="myapp"
QUESTION="$1"

SQL=$(echo "$QUESTION" | onnai "write PostgreSQL query. tables: users(id,email,created_at), orders(id,user_id,amount,created_at). return ONLY SQL")

echo "Query: $SQL"
read -p "Execute? [y/N] " confirm
[ "$confirm" = "y" ] && psql -d "$DB_NAME" -c "$SQL"

Usage:

./db-query.sh "how many users signed up last week"
./db-query.sh "top 10 customers by total order value"

SQLite:

#!/bin/bash
# sqlite-helper.sh

DB_FILE="$1"
QUESTION="$2"

SCHEMA=$(sqlite3 "$DB_FILE" ".schema")

SQL=$({
  echo "Schema: $SCHEMA"
  echo "Question: $QUESTION"
} | onnai "write SQLite query, ONLY the SQL")

sqlite3 "$DB_FILE" "$SQL"

Webhook Receivers

Stripe payments:

#!/bin/bash
# stripe-webhook.sh

BODY="$1"
EVENT_TYPE=$(echo "$BODY" | jq -r '.type')
DATA=$(echo "$BODY" | jq '.data.object')

case "$EVENT_TYPE" in
  "payment_intent.succeeded")
    AMOUNT=$(echo "$DATA" | jq -r '.amount / 100')
    EMAIL=$(echo "$DATA" | jq -r '.receipt_email')
    echo "New payment: \$$AMOUNT from $EMAIL" | onnai "make exciting" | say &
    ;;
  "customer.subscription.deleted")
    EMAIL=$(echo "$DATA" | jq -r '.customer_email')
    echo "Churn alert: $EMAIL cancelled" | ./slack-send.sh
    ;;
esac
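
The script expects the raw event JSON as its first argument, so whatever receives the webhook (a CGI handler, a serverless function, or a tunnel such as the Stripe CLI) only needs to shell out to it. A local test can replay a captured event (the fixture filename is illustrative):

./stripe-webhook.sh "$(cat sample_payment_event.json)"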

GitHub Integration

Auto-review PRs:

#!/bin/bash
# github-pr.sh

EVENT="$1"
ACTION=$(echo "$EVENT" | jq -r '.action')
PR_URL=$(echo "$EVENT" | jq -r '.pull_request.url')

if [ "$ACTION" = "opened" ]; then
  DIFF=$(curl -s -H "Authorization: token $GITHUB_TOKEN" \
    "${PR_URL}/files" | jq -r '.[].patch' | head -500)
  
  REVIEW=$(echo "$DIFF" | onnai "review: security, bugs, style issues")
  
  COMMENT_URL=$(echo "$EVENT" | jq -r '.pull_request.comments_url')
  curl -s -X POST "$COMMENT_URL" \
    -H "Authorization: token $GITHUB_TOKEN" \
    -d "{\"body\": \"## AI Review\n\n$REVIEW\"}"
fi

Auto-label issues:

#!/bin/bash
# github-label.sh

EVENT="$1"
TITLE=$(echo "$EVENT" | jq -r '.issue.title')
BODY=$(echo "$EVENT" | jq -r '.issue.body')
ISSUE_URL=$(echo "$EVENT" | jq -r '.issue.url')

LABELS=$(echo "Title: $TITLE\nBody: $BODY" | \
  onnai "return comma-separated labels: bug, feature, docs, question")

curl -s -X POST "${ISSUE_URL}/labels" \
  -H "Authorization: token $GITHUB_TOKEN" \
  -d "{\"labels\": [\"${LABELS//,/\",\"}\"]}"

Calendar Integration

#!/bin/bash
# calendar-helper.sh (using gcalcli)

# Summarize today
gcalcli agenda --nocolor | onnai "summarize my day, note conflicts"

# Add event from natural language
add_event() {
  PARSED=$(echo "$1" | onnai "parse: TITLE|DATE|TIME|DURATION")
  IFS='|' read -r TITLE DATE TIME DURATION <<< "$PARSED"
  gcalcli add --title "$TITLE" --when "$DATE $TIME" --duration "$DURATION" --noprompt
}

add_event "lunch with Sarah next Tuesday at noon for an hour"

Systemd Service

# /etc/systemd/system/onnai-monitor.service
[Unit]
Description=Onnai Log Monitor
After=network.target

[Service]
Type=simple
User=youruser
ExecStart=/home/youruser/scripts/monitor-agent.sh /var/log/app.log
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

sudo systemctl enable onnai-monitor
sudo systemctl start onnai-monitor

Retry Wrapper

#!/bin/bash
# retry.sh

MAX_RETRIES=3

retry() {
  local attempt=1
  while [ $attempt -le $MAX_RETRIES ]; do
    "$@" && return 0
    echo "Attempt $attempt failed, retrying..."
    sleep 5
    ((attempt++))
  done
  echo "All attempts failed" | ./slack-send.sh
  return 1
}

retry curl -f https://api.example.com/health


Part 4: Industry Workflows

Legal Professionals

Review contracts, search discovery, draft documents, and check compliance.

Setup

onnai --add legal

Project Context

Create a .context file in your case directory:

echo "California jurisdiction. Insurance defense practice." > .context
echo "Client: Acme Corp. Matter: 2024-1847" >> .context
echo "Opposing counsel: Smith & Associates" >> .context
echo "Be conservative in liability assessments." >> .context

Every command in that directory now understands your case.
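
For example, with the case context set, prompts stay short (the filename is illustrative):

pdftotext opposing_motion.pdf - | legal "summarize and note anything inconsistent with our position"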

Contract Review

# Flag risky clauses
pdftotext contract.pdf - | legal "flag risky clauses" | tee review.md

# Compare against standard terms
textutil -convert txt vendor_agreement.docx -stdout | legal "compare to our standard terms"

# Redline suggestions
diff their_draft.txt our_template.txt | legal "explain changes and recommend response"

Discovery & Document Review

# Search across discovery
for f in discovery/*.pdf; do pdftotext "$f" -; done | legal "find mentions of safety violations"

# Summarize depositions
cat depositions/*.txt | legal "summarize contradictions between witnesses"

# Build timeline
cat discovery/*.txt | legal "extract all dates and events" > timeline.md

# Find precedents
cat case_facts.txt | legal "find relevant precedents"

# Analyze opposing arguments
pdftotext opposing_brief.pdf - | legal "identify weak arguments and suggest rebuttals"

Drafting

# Letters
cat notes.txt | legal "draft cease and desist letter" > letter.md

# Meeting notes to filings
cat research_notes.md | legal "draft motion to compel"

# Client communications
pdftotext ruling.pdf - | legal "summarize for client in plain english"

Compliance

# Policy review
cat privacy_policy.md | legal "check GDPR and CCPA compliance"

# Track changes
diff old_tos.txt new_tos.txt | legal "summarize material changes"

# Audit contracts
for f in contracts/*.pdf; do pdftotext "$f" -; done | legal "list all indemnification clauses"

Billing & Admin

# Meeting notes to billable entries
cat meeting_notes.md | legal "convert to billable time entries"

# Client intake
cat intake_form.txt | legal "spot potential conflicts of interest"

# Full contract review
pdftotext contract.pdf - | \
  legal "extract all obligations" | \
  legal "flag items outside standard terms" | \
  legal "draft response recommendations" > contract_review.md

# Deposition prep
cat discovery/*.txt | \
  legal "find statements about delivery dates" | \
  legal "format as deposition questions" > depo_outline.md

Privacy Note

Your documents flow through onnai for processing but are never stored. Review onnai’s privacy policy and your jurisdiction’s ethics rules on cloud-based tools.


Creative & Ad Agency Workflows

Brainstorm campaigns, generate copy, iterate on ideas.

Campaign Brainstorm

#!/bin/bash
# brainstorm.sh - Generate campaign concepts

CLIENT="$1"
PRODUCT="$2"
GOAL="$3"

cat > .context << EOF
You are a creative director at a top ad agency.
Be bold. Be unexpected. Avoid clichés.
EOF

{
  echo "Client: $CLIENT"
  echo "Product: $PRODUCT"
  echo "Goal: $GOAL"
} | onnai "generate 5 campaign concepts. for each:
- Campaign name
- Core insight
- Tagline
- Key visual idea
- Why it works"

Usage:

./brainstorm.sh "Nike" "new running shoe" "reach Gen Z runners"
./brainstorm.sh "Local Bank" "savings account" "compete with fintech apps"

Headline Generator

#!/bin/bash
# headlines.sh - Generate and rank headlines

PRODUCT="$1"
ANGLE="$2"

echo "$PRODUCT - $ANGLE" | onnai "write 10 headlines:
- Mix emotional and rational
- Vary length
- Include one edgy option
- No exclamation marks" | tee headlines.txt

# Rate them
cat headlines.txt | onnai "rank by memorability, explain top 3"

Copy Variations

# A/B test copy
echo "Shop now and save 20%" | onnai "5 variations, different tones"
echo "Sign up for our newsletter" | onnai "make this not boring, 5 versions"
echo "Limited time offer" | onnai "5 ways without clichés"

Creative Brief Generator

#!/bin/bash
# brief.sh - Notes to creative brief

cat "$1" | onnai "write a creative brief:
## Background
## Objective  
## Target Audience
## Key Insight
## Single-Minded Proposition
## Proof Points
## Tone of Voice
## Mandatories
## Deliverables" > brief.md

Persona Builder

#!/bin/bash
# persona.sh - Detailed audience personas

{
  echo "Product: $1"
  echo "Segment: $2"
} | onnai "create a detailed persona:
- Name & description
- Demographics
- Psychographics  
- A day in their life
- Pain points
- Goals
- Media habits
- Quote they'd say
- How to reach them"

Tagline Workshop

#!/bin/bash
# taglines.sh - Iterative tagline development

BRAND="$1"
POSITIONING="$2"

# Round 1: Quantity
echo "$BRAND - $POSITIONING" | onnai "20 taglines, go wide" | tee round1.txt

# Round 2: Refine
cat round1.txt | onnai "pick 5 best, create 3 variations of each" | tee round2.txt

# Round 3: Polish
cat round2.txt | onnai "top 3 with:
- Why it works
- Potential concerns
- How it sounds as TV voiceover
- How it looks on billboard"

Ad Copy Generator

#!/bin/bash
# ad-copy.sh - Platform-specific copy

{
  echo "Product: $1"
  echo "Offer: $2"
  echo "Audience: $3"
} | onnai "write ad copy for:

## Google Search
3 headlines (30 chars), 2 descriptions (90 chars)

## Facebook/Instagram
Primary text, headline, description

## LinkedIn
Intro text, headline

## YouTube
6-sec, 15-sec, 30-sec scripts

Include character counts."

Social Media Campaign

#!/bin/bash
# social-campaign.sh - Multi-platform content

{
  echo "Campaign: $1"
  echo "Core message: $2"
} | onnai "create platform-native content for:
- Instagram (feed, story, reel)
- TikTok (concept, sound, hashtags)
- Twitter (thread + singles)
- LinkedIn
- Facebook"

Concept Testing

#!/bin/bash
# concept-test.sh - Stress test ideas

cat "$1" | onnai "devil's advocate:
- Potential weaknesses
- Audience objections
- Competitive response
- Cultural risks
- Execution challenges
- Does it answer the brief?
- How to make it stronger" > critique.md

# Improve
{ cat "$1"; echo "---"; cat critique.md; } | \
  onnai "improve the concept addressing these" > concept_v2.md

Name Generator

#!/bin/bash
# name-gen.sh - Product/brand naming

{
  echo "What: $1"
  echo "Attributes: $2"
} | onnai "30 name candidates:
- Real words (5)
- Compound words (5)
- Invented words (5)
- Acronyms (3)
- Foreign/classical roots (5)
- Abstract/evocative (5)
- Playful (2)

For each: name, pronunciation, why it works"

Radio Script

#!/bin/bash
# radio.sh - Audio ad scripts

{
  echo "Product: $1"
  echo "Length: ${2:-30} seconds"
} | onnai "radio script with:
- Sound/music direction
- Voice description
- Script with timing
- Word count (~$((${2:-30} * 3)) words)"

TV Spot Concept

#!/bin/bash
# tv-spot.sh - Commercial concepts

cat "$1" | onnai "${2:-30}-second TV spot:
- Title
- One-line idea
- Script table: VISUAL | AUDIO | TIME
- End frame
- Casting notes
- Location/set
- Key visual moment
- Music direction"

Brainstorm Session

#!/bin/bash
# brainstorm-session.sh - Facilitated ideation

CHALLENGE="$1"

{
  echo "# Brainstorm: $CHALLENGE"
  
  echo "## Round 1: Clear the Obvious"
  echo "$CHALLENGE" | onnai "10 obvious solutions, get clichés out"
  
  echo "## Round 2: Opposites"
  echo "$CHALLENGE" | onnai "what if we did the OPPOSITE? 10 ideas"
  
  echo "## Round 3: Category Theft"
  echo "$CHALLENGE" | onnai "how would Apple, Liquid Death, Duolingo solve this?"
  
  echo "## Round 4: Constraints"
  echo "$CHALLENGE" | onnai "ideas with: no words / \$50 budget / one day deadline"
  
  echo "## Synthesis"
  echo "Review and identify 3 strongest directions"
} | tee session.md

Creative Team Simulation

#!/bin/bash
# creative-team.sh - Multiple perspectives

BRIEF="$1"

# Strategist
echo "## Strategist" 
cat "$BRIEF" | onnai "what's the real problem? what's the insight?"

# Copywriter
echo "## Copywriter"
cat "$BRIEF" | onnai "headline? voice? 3 campaign lines."

# Art Director
echo "## Art Director"
cat "$BRIEF" | onnai "visual world? key images? references?"

# Creative Director
echo "## Creative Director"
cat "$BRIEF" | onnai "synthesize into one cohesive direction"

Quick Creative Commands

# Ideas
echo "energy drink for coders" | onnai "10 product names"
echo "banking app" | onnai "5 unexpected brand partnerships"

# Copy polish
echo "We're the best in the business" | onnai "say this without saying this"
echo "Click here to learn more" | onnai "10 better CTAs"

# Visual ideas
echo "boring B2B software" | onnai "unexpected visual metaphors"
echo "insurance company" | onnai "photography without suits or handshakes"

# Strategy
echo "declining soda brand" | onnai "5 repositioning strategies"
echo "new meditation app" | onnai "differentiate from Calm and Headspace"

Education & Learning

Tutoring, lesson plans, study tools, and learning workflows.

Personal Tutor

#!/bin/bash
# tutor.sh - Interactive tutoring session

SUBJECT="$1"

cat > .context << EOF
You are a patient, encouraging tutor.
Use Socratic method - ask guiding questions.
Adapt to the student's level.
Never make them feel dumb.
EOF

echo "Tutoring: $SUBJECT (type 'quit' to end)"

while true; do
  read -p "You: " input
  [ "$input" = "quit" ] && break
  echo "$input" | onnai "respond as tutor"
  echo ""
done

Explain Like I’m 5

# Simple explanations at any level
echo "quantum entanglement" | onnai "explain to a 5 year old"
echo "blockchain" | onnai "explain to a 10 year old"
echo "derivatives trading" | onnai "explain to a 15 year old"

Flashcard Generator

#!/bin/bash
# flashcards.sh - Generate flashcards from content

cat "$1" | onnai "create 20 flashcards:
Q: [question]
A: [answer]
---
Mix recall, definitions, and application."

Anki format:

cat "$1" | onnai "flashcards as CSV: front,back (no headers)" > anki_import.csv

Quiz Generator

#!/bin/bash
# quiz.sh - Generate quizzes

echo "$1" | onnai "create 10 question quiz:
- Multiple choice (mark correct with *)
- True/False
- Fill in blank
Number each question."

Interactive quiz:

#!/bin/bash
# quiz-interactive.sh

SCORE=0
for i in {1..5}; do
  QA=$(echo "$1 question $i" | onnai "one MC question with ANSWER: [letter]")
  echo "$QA" | grep -v "ANSWER:"
  read -p "Answer (A/B/C/D): " ans
  CORRECT=$(echo "$QA" | grep "ANSWER:" | tr -cd 'A-D')
  [ "$(echo $ans | tr a-d A-D)" = "$CORRECT" ] && echo "✓ Correct!" && ((SCORE++)) || echo "✗ Was $CORRECT"
done
echo "Score: $SCORE/5"

Lesson Plan Generator

#!/bin/bash
# lesson-plan.sh

{
  echo "Topic: $1"
  echo "Duration: ${2:-60} minutes"
  echo "Level: $3"
} | onnai "lesson plan:
## Learning Objectives
## Prerequisites  
## Materials
## Outline
- Opening (10%)
- Instruction (40%)
- Practice (30%)
- Assessment (15%)
- Closing (5%)
## Differentiation
## Homework"

Study Guide Generator

#!/bin/bash
# study-guide.sh

{
  cat "$1"
  echo "---"
  echo "Exam focus: $2"
} | onnai "study guide:
- Key concepts (ranked)
- Definitions
- Formulas/rules
- Common mistakes
- Practice problems with solutions
- Memory tricks"

Homework Helper

# Guide without giving answers
echo "$1" | onnai "help with homework:
- DON'T give direct answer
- Ask guiding questions
- Point to relevant concepts
- Give similar example
- Progressive hints if stuck"

Essay Feedback

#!/bin/bash
# essay-feedback.sh

cat "$1" | onnai "essay feedback:
## Strengths
## Areas for Improvement
- Thesis & argument
- Organization
- Evidence
- Style
- Grammar (specific errors)
## Suggested Revisions
## Grade Estimate
Be encouraging but honest."

Vocabulary Builder

# Different modes
cat words.txt | onnai "define each with example sentence"
cat words.txt | onnai "create matching quiz, shuffled"
cat words.txt | onnai "use each in memorable sentence"
cat words.txt | onnai "etymology of each word"

Math Problem Solver

echo "$1" | onnai "solve step by step:
1. What we're solving for
2. Approach
3. Every step with explanation
4. Box final answer
5. Verify"

Language Learning

# Translate with context
echo "hello, how are you" | onnai "translate to Spanish with pronunciation and literal breakdown"

# Conversation practice
echo "ordering coffee" | onnai "French dialogue with English translation and vocabulary"

# Grammar help
echo "subjunctive mood" | onnai "explain German grammar with examples and exercises"

Reading Comprehension

cat "$1" | onnai "reading comprehension:
- 5 vocabulary words
- 3 factual questions
- 3 inferential questions
- 2 evaluative questions
- Summary exercise
- Answer key"

Practice Test Generator

{
  echo "Subject: $1"
  echo "Type: ${2:-midterm}"
} | onnai "practice test:
## Section A: Multiple Choice (40%)
10 questions

## Section B: Short Answer (30%)
5 questions

## Section C: Long Answer (30%)
2 questions

## Answer Key
## Grading Rubric"

Concept Map

echo "$1" | onnai "concept map in Mermaid:
\`\`\`mermaid
graph TD
  A[Central] --> B[Branch]
\`\`\`
10-15 nodes with relationships"

Research Paper Helper

# Thesis options
echo "$TOPIC" | onnai "5 arguable thesis statements with evidence needs"

# Outline
echo "$TOPIC" | onnai "paper outline with topic sentences and evidence suggestions"

# Citation help
echo "$SOURCE" | onnai "format in APA, MLA, and Chicago"

Quick Learning Commands

# Explain
echo "recursion" | onnai "explain for beginner programmer"

# Compare
echo "TCP vs UDP" | onnai "comparison table"

# Memory aids
echo "order of planets" | onnai "memorable mnemonic"

# Quick quiz
echo "photosynthesis" | onnai "3 quiz questions with answers"

# Summarize
cat chapter.txt | onnai "summarize key points for exam"

# Simplify
cat paper.txt | onnai "main findings in plain English"

Classroom Tools

# Discussion questions
echo "To Kill a Mockingbird" | onnai "5 discussion questions, varying difficulty"

# Rubric
echo "5-paragraph essay" | onnai "grading rubric, 4 levels"

# Differentiation
echo "fractions" | onnai "3 versions: struggling, on-level, advanced"

# Parent email
echo "struggling with reading" | onnai "draft parent email, supportive"



Interviewer Workflows

Build AI-powered interviewers for hiring, research, and journalism.

Job Interview Simulator

#!/bin/bash
# job-interview.sh - Practice job interviews

ROLE="$1"
COMPANY="${2:-a tech company}"
DIFFICULTY="${3:-medium}"

cat > .context << EOF
You are a senior interviewer at $COMPANY hiring for: $ROLE
Difficulty: $DIFFICULTY
Be professional but warm. Ask follow-up questions.
Evaluate answers but don't reveal scores until the end.
Mix behavioral, technical, and situational questions.
EOF

echo "=== Job Interview Practice ==="
echo "Role: $ROLE at $COMPANY"
echo "Type 'end' to finish and get feedback"
echo ""

TRANSCRIPT=""

# Opening
OPENER=$(echo "Start the interview with a warm greeting and first question" | onnai)
echo "Interviewer: $OPENER"
TRANSCRIPT="Interviewer: $OPENER"$'\n'

while true; do
  echo ""
  read -p "You: " response
  [ "$response" = "end" ] && break
  
  TRANSCRIPT="$TRANSCRIPT"$'\n'"Candidate: $response"$'\n'
  
  FOLLOWUP=$(echo "$TRANSCRIPT" | onnai "respond as interviewer: acknowledge their answer briefly, then ask the next question or a follow-up")
  
  echo ""
  echo "Interviewer: $FOLLOWUP"
  TRANSCRIPT="$TRANSCRIPT""Interviewer: $FOLLOWUP"$'\n'
done

# Final feedback
echo ""
echo "=== Interview Feedback ==="
echo "$TRANSCRIPT" | onnai "provide interview feedback:

## Overall Impression
(1-2 sentences)

## Strengths
- What the candidate did well

## Areas for Improvement  
- What could be better

## Specific Feedback
(On 2-3 key answers)

## Score
Communication: X/10
Technical/Role Fit: X/10
Culture Fit: X/10

## Hiring Recommendation
(Hire / Maybe / Pass) with brief explanation"

Usage:

./job-interview.sh "Senior Software Engineer" "Google" "hard"
./job-interview.sh "Product Manager" "Startup" "medium"
./job-interview.sh "Data Analyst" "Bank" "easy"

Technical Interview

#!/bin/bash
# tech-interview.sh - Coding interview practice

TOPIC="${1:-algorithms}"
LEVEL="${2:-medium}"

cat > .context << EOF
You are a technical interviewer.
Topic: $TOPIC
Level: $LEVEL
Guide the candidate through the problem.
Give hints if stuck, but let them struggle a bit first.
EOF

# Generate problem
PROBLEM=$(echo "Generate a $LEVEL $TOPIC interview problem with:
- Problem statement
- Example input/output
- Constraints
Don't reveal the solution." | onnai)

echo "=== Technical Interview ==="
echo ""
echo "$PROBLEM"
echo ""
echo "Talk through your approach. Type 'hint' for help, 'solution' to see answer, 'end' to finish."

TRANSCRIPT="Problem: $PROBLEM"$'\n'

while true; do
  echo ""
  read -p "You: " response
  
  case "$response" in
    end) break ;;
    hint)
      HINT=$(echo "$TRANSCRIPT" | onnai "give a small hint without revealing the full solution")
      echo "Interviewer: $HINT"
      ;;
    solution)
      SOLUTION=$(echo "$PROBLEM" | onnai "provide the optimal solution with explanation and complexity analysis")
      echo "$SOLUTION"
      break
      ;;
    *)
      TRANSCRIPT="$TRANSCRIPT"$'\n'"Candidate: $response"
      RESPONSE=$(echo "$TRANSCRIPT" | onnai "respond as interviewer: ask clarifying questions, probe their thinking, or guide them")
      echo "Interviewer: $RESPONSE"
      TRANSCRIPT="$TRANSCRIPT"$'\n'"Interviewer: $RESPONSE"
      ;;
  esac
done

User Research Interview

#!/bin/bash
# user-research.sh - Conduct user research interviews

PRODUCT="$1"
RESEARCH_GOAL="$2"

cat > .context << EOF
You are a UX researcher conducting user interviews.
Product: $PRODUCT
Research goal: $RESEARCH_GOAL

Interview best practices:
- Ask open-ended questions
- Don't lead the witness
- Dig into the "why" behind answers
- Listen for pain points and unmet needs
- Use the 5 Whys technique when appropriate
EOF

# Generate interview guide
GUIDE=$({
  echo "Product: $PRODUCT"
  echo "Goal: $RESEARCH_GOAL"
} | onnai "create a 10-question interview guide with:
- Warm-up questions (2)
- Core questions about behavior (4)
- Pain point exploration (2)
- Future/ideal state questions (2)")

echo "$GUIDE" > interview_guide.md
echo "Interview guide saved to interview_guide.md"
echo ""
echo "=== User Research Interview ==="
echo "Type 'end' to finish and see insights"
echo ""

TRANSCRIPT=""

# Warm-up
OPENER=$(echo "Start with a friendly warm-up question about their background with $PRODUCT" | onnai)
echo "Researcher: $OPENER"
TRANSCRIPT="Researcher: $OPENER"$'\n'

while true; do
  echo ""
  read -p "Participant: " response
  [ "$response" = "end" ] && break
  
  TRANSCRIPT="$TRANSCRIPT""Participant: $response"$'\n'
  
  FOLLOWUP=$(echo "$TRANSCRIPT" | onnai "as UX researcher: probe deeper into what they said, or move to the next topic from the guide. never ask yes/no questions.")
  
  echo ""
  echo "Researcher: $FOLLOWUP"
  TRANSCRIPT="$TRANSCRIPT""Researcher: $FOLLOWUP"$'\n'
done

# Analysis
echo ""
echo "=== Interview Analysis ==="
echo "$TRANSCRIPT" | onnai "analyze this user interview:

## Key Insights
(3-5 main takeaways)

## Pain Points Identified
(List with quotes)

## Unmet Needs
(What are they trying to accomplish?)

## Opportunities
(Product/feature ideas based on this)

## Notable Quotes
(2-3 quotable moments)

## Follow-up Questions
(What to explore in next interview)"

echo "$TRANSCRIPT" > "interview_transcript_$(date +%Y%m%d_%H%M).md"

Podcast/Journalism Interview

#!/bin/bash
# podcast-interview.sh - Prepare and conduct interviews

GUEST="$1"
TOPIC="$2"

cat > .context << EOF
You are a skilled podcast interviewer.
Guest: $GUEST
Topic: $TOPIC
Style: Curious, conversational, probing but respectful.
Go deep on interesting tangents.
EOF

# Research and prep
echo "=== Interview Prep ==="
{
  echo "Guest: $GUEST"
  echo "Topic: $TOPIC"
} | onnai "create interview prep:

## Guest Background
(Key facts to know)

## Potential Story Angles
(3 interesting narratives)

## Questions
### Opener (warm, easy)
### Core Questions (6-8)
### Spicy Questions (2-3 provocative but fair)
### Closer

## Topics to Avoid
(If any sensitive areas)

## Follow-up Prompts
('Tell me more about...', 'What did that feel like?', etc.)" | tee interview_prep.md

echo ""
read -p "Press enter to start the interview..."

echo ""
echo "=== Live Interview ==="

TRANSCRIPT=""
OPENER=$(echo "Start with an engaging opening question about $TOPIC" | onnai)
echo "Host: $OPENER"
TRANSCRIPT="Host: $OPENER"$'\n'

while true; do
  echo ""
  read -p "$GUEST: " response
  [ "$response" = "end" ] && break
  
  TRANSCRIPT="$TRANSCRIPT""$GUEST: $response"$'\n'
  
  FOLLOWUP=$(echo "$TRANSCRIPT" | onnai "as podcast host: respond naturally, then ask a follow-up that digs deeper or pivots to something interesting they mentioned")
  
  echo ""
  echo "Host: $FOLLOWUP"
  TRANSCRIPT="$TRANSCRIPT""Host: $FOLLOWUP"$'\n'
done

# Post-interview
echo ""
echo "=== Post-Interview ==="
echo "$TRANSCRIPT" | onnai "from this interview:

## Best Moments
(Timestamps/quotes for highlights)

## Potential Episode Title
(3 options)

## Episode Description
(For podcast apps)

## Pull Quotes for Social
(3 shareable quotes)

## Clips to Cut
(Suggest 2-3 short clips for promotion)"

Structured Interview System (Hiring)

#!/bin/bash
# structured-interview.sh - Consistent hiring interviews

ROLE="$1"
CANDIDATE="$2"
COMPETENCIES="${3:-communication,problem-solving,collaboration,technical-skills}"

cat > .context << EOF
You are conducting a structured interview.
Role: $ROLE
Evaluate these competencies: $COMPETENCIES
Use behavioral questions (STAR format).
Score each answer 1-5 with justification.
Be consistent and fair.
EOF

# Generate scorecard
IFS=',' read -ra COMP_ARRAY <<< "$COMPETENCIES"

echo "# Interview Scorecard: $CANDIDATE for $ROLE" > scorecard.md
echo "Date: $(date +%Y-%m-%d)" >> scorecard.md
echo "" >> scorecard.md

for comp in "${COMP_ARRAY[@]}"; do
  echo "## $comp" >> scorecard.md
  
  # Generate question
  QUESTION=$(echo "Generate one behavioral interview question to assess: $comp" | onnai)
  echo "**Question:** $QUESTION" >> scorecard.md
  echo "" >> scorecard.md
  
  echo "Assessing: $comp"
  echo "Question: $QUESTION"
  echo ""
  read -p "Candidate answer (or 'skip'): " answer
  
  if [ "$answer" != "skip" ]; then
    EVAL=$({
      echo "Competency: $comp"
      echo "Question: $QUESTION"
      echo "Answer: $answer"
    } | onnai "evaluate this answer:
- Score (1-5)
- Justification (2-3 sentences)
- Follow-up question if needed")
    
    echo "**Answer:** $answer" >> scorecard.md
    echo "" >> scorecard.md
    echo "**Evaluation:** $EVAL" >> scorecard.md
  else
    echo "**Skipped**" >> scorecard.md
  fi
  echo "" >> scorecard.md
  echo "---" >> scorecard.md
done

# Overall assessment (read the scorecard first, then append,
# to avoid reading and writing the same file in one pipeline)
echo "" >> scorecard.md
ASSESSMENT=$(cat scorecard.md | onnai "based on this scorecard, provide:
## Overall Assessment
## Strengths
## Concerns
## Hiring Recommendation (Strong Yes / Yes / Maybe / No)
## Suggested Next Steps")
echo "$ASSESSMENT" >> scorecard.md

echo "Scorecard saved to scorecard.md"
cat scorecard.md

Exit Interview

#!/bin/bash
# exit-interview.sh - Employee exit interviews

EMPLOYEE="$1"
ROLE="$2"
TENURE="$3"

cat > .context << EOF
You are conducting an exit interview.
Employee: $EMPLOYEE
Role: $ROLE  
Tenure: $TENURE

Be empathetic and professional.
The goal is to learn, not to convince them to stay.
Create psychological safety for honest feedback.
EOF

QUESTIONS=$(cat << 'EOF'
1. What prompted you to start looking for a new opportunity?
2. What did you enjoy most about working here?
3. What would you change about the team or company?
4. Did you feel you had the resources to do your job well?
5. How was your relationship with your manager?
6. Would you recommend this company to a friend? Why or why not?
7. Is there anything we could have done to keep you?
8. Any advice for your replacement?
EOF
)

echo "=== Exit Interview: $EMPLOYEE ==="
echo ""

TRANSCRIPT=""

for q in $(seq 1 8); do
  QUESTION=$(echo "$QUESTIONS" | sed -n "${q}p")
  echo "Q$q: $QUESTION"
  read -p "Answer: " answer
  TRANSCRIPT="$TRANSCRIPT"$'\n'"Q: $QUESTION"$'\n'"A: $answer"$'\n'
  echo ""
done

echo "=== Exit Interview Analysis ==="
echo "$TRANSCRIPT" | onnai "analyze this exit interview:

## Key Reasons for Leaving
(Prioritized)

## What We Did Well
(To keep doing)

## Areas for Improvement
(Actionable feedback)

## Risk Factors
(What might cause others to leave?)

## Recommendations
(Specific actions to take)

## Sentiment
(Overall tone: positive/neutral/negative departure)" | tee "exit_${EMPLOYEE// /_}_$(date +%Y%m%d).md"

Quick Interview Commands

# Generate interview questions
echo "senior product manager" | onnai "10 behavioral interview questions"

# Evaluate an answer
echo "Tell me about a time you failed. Answer: I once shipped a bug to production..." | \
  onnai "evaluate this interview answer using STAR framework"

# Interview prep for candidate
echo "data scientist at Netflix" | onnai "how to prepare for this interview:
- Likely questions
- Topics to study
- Questions to ask them"

# Debrief questions
echo "We just interviewed 5 candidates for engineering manager" | \
  onnai "debrief questions to ask the interview panel"

# Reference check questions
echo "hiring a VP of Sales" | onnai "10 reference check questions"

Part 5: Extras

Fun with Audio

Use say (macOS) or espeak (Linux) to make your terminal talk.
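
Anything onnai writes can be piped straight into a voice. A minimal sketch (deploy.log is a placeholder file name):

tail -20 deploy.log | onnai "turn this into one short spoken status update" | say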

Sales Announcements

#!/bin/bash
# sales-announcer.sh - Announce new sales with flair

SALES_LOG="$1"  # or pipe from webhook

tail -f "$SALES_LOG" | while read -r line; do
  # Parse sale (adjust for your format)
  AMOUNT=$(echo "$line" | jq -r '.amount')
  CUSTOMER=$(echo "$line" | jq -r '.customer')
  PRODUCT=$(echo "$line" | jq -r '.product')
  
  # Generate announcement
  ANNOUNCEMENT=$(echo "$CUSTOMER bought $PRODUCT for $AMOUNT" | \
    onnai "make this a fun, enthusiastic sales announcement. one sentence. no emojis.")
  
  # Speak it!
  say -v Samantha "$ANNOUNCEMENT"
  
  # Big sales get special treatment
  if [ "${AMOUNT%.*}" -gt 1000 ]; then
    say -v Samantha "Ka-ching! Big money!"
  fi
done

Webhook Listener

#!/bin/bash
# sales-webhook.sh - Listen for Stripe/payment webhooks

# Requires: nc (netcat)
while true; do
  # Listen on port 8080
  REQUEST=$(nc -l 8080)
  
  # Extract JSON body
  BODY=$(echo "$REQUEST" | tail -1)
  
  # Parse and announce
  AMOUNT=$(echo "$BODY" | jq -r '.data.object.amount // 0')
  AMOUNT_DOLLARS=$((AMOUNT / 100))
  
  if [ "$AMOUNT_DOLLARS" -gt 0 ]; then
    say "New sale! $AMOUNT_DOLLARS dollars!"
  fi
done

Voice Styles (macOS)

# List all voices
say -v '?'

# Fun voices for different occasions
say -v Samantha "New sale!"              # Professional
say -v Fred "We got another one!"        # Goofy
say -v Bells "Money money money"         # Musical
say -v Whisper "Secret sale incoming"    # Dramatic
say -v "Good News" "Excellent!"          # Enthusiastic

# Different accents
say -v Daniel "Brilliant sale, mate"     # British
say -v Thomas "Très bien!"               # French
say -v Karen "Fantastic sale today"      # Australian

Tiered Announcements

#!/bin/bash
# tiered-sales.sh - Different reactions by sale size

announce_sale() {
  AMOUNT=$1
  CUSTOMER=$2
  
  if [ "$AMOUNT" -lt 50 ]; then
    say -v Samantha "Nice! $AMOUNT dollars from $CUSTOMER"
    
  elif [ "$AMOUNT" -lt 500 ]; then
    say -v Samantha "Sweet! $CUSTOMER just dropped $AMOUNT dollars!"
    afplay /System/Library/Sounds/Glass.aiff
    
  elif [ "$AMOUNT" -lt 1000 ]; then
    say -v "Good News" "Big sale alert! $AMOUNT dollars from $CUSTOMER!"
    afplay /System/Library/Sounds/Blow.aiff
    
  else
    # Whale alert!
    say -v Samantha "Oh my god."
    sleep 1
    say -v "Good News" "WHALE ALERT! $CUSTOMER just spent $AMOUNT dollars!"
    afplay /System/Library/Sounds/Funk.aiff
    osascript -e 'display notification "💰 $'"$AMOUNT"' from '"$CUSTOMER"'" with title "WHALE ALERT"'
  fi
}

# Test it
announce_sale 25 "Alice"
announce_sale 250 "Bob"  
announce_sale 750 "Charlie"
announce_sale 5000 "Mega Corp"

Build Status Announcer

#!/bin/bash
# build-announcer.sh - CI/CD status spoken

STATUS="$1"  # success, failure, started
PROJECT="$2"

case "$STATUS" in
  started)
    say -v Samantha "Build started for $PROJECT"
    ;;
  success)
    say -v "Good News" "$PROJECT build succeeded. Ship it!"
    afplay /System/Library/Sounds/Glass.aiff
    ;;
  failure)
    say -v Whisper "Oh no."
    sleep 0.5
    say -v Fred "$PROJECT build failed. Somebody broke it."
    afplay /System/Library/Sounds/Basso.aiff
    ;;
esac

Pomodoro Timer

#!/bin/bash
# pomodoro.sh - Speaking pomodoro timer

WORK_MINS=${1:-25}
BREAK_MINS=${2:-5}

say "Starting $WORK_MINS minute work session"

for ((i=WORK_MINS; i>0; i--)); do
  if [ $i -eq 5 ]; then
    say -v Whisper "Five minutes remaining"
  fi
  sleep 60
done

say -v "Good News" "Time for a $BREAK_MINS minute break!"
afplay /System/Library/Sounds/Glass.aiff

sleep $((BREAK_MINS * 60))

say "Break over. Ready for another round?"

Daily Standup Bot

#!/bin/bash
# standup.sh - Morning standup announcements

# Get yesterday's sales
YESTERDAY=$(date -v-1d +%Y-%m-%d)
SALES_DATA=$(cat sales.log | grep "$YESTERDAY")

TOTAL=$(echo "$SALES_DATA" | jq -s 'map(.amount) | add')
COUNT=$(echo "$SALES_DATA" | jq -s 'length')
TOP=$(echo "$SALES_DATA" | jq -s 'max_by(.amount) | "\(.customer): $\(.amount)"')

SUMMARY=$(cat << EOF | onnai "write a brief, upbeat morning standup summary"
Yesterday's sales:
- Total: $TOTAL
- Number of sales: $COUNT  
- Biggest sale: $TOP
EOF
)

say -v Samantha "Good morning team."
sleep 1
say -v Samantha "$SUMMARY"

Linux Alternative (espeak)

# Install
sudo apt install espeak

# Basic usage
espeak "New sale!"

# Different voices
espeak -v en-us "American accent"
espeak -v en-gb "British accent"

# Speed and pitch
espeak -s 150 -p 50 "Normal speed and pitch"
espeak -s 200 -p 80 "Fast and high"

# Cross-platform wrapper
speak() {
  if command -v say &> /dev/null; then
    say "$@"
  elif command -v espeak &> /dev/null; then
    espeak "$@"
  else
    echo "$@"
  fi
}
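
With the wrapper defined, any pipeline can end in speech on either platform. A hypothetical usage (build.log is a placeholder):

speak "$(tail -50 build.log | onnai 'summarize the build result in one sentence')"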

Sound Effects

# macOS system sounds
afplay /System/Library/Sounds/Ping.aiff      # Subtle
afplay /System/Library/Sounds/Glass.aiff     # Pleasant
afplay /System/Library/Sounds/Blow.aiff      # Attention
afplay /System/Library/Sounds/Funk.aiff      # Celebration
afplay /System/Library/Sounds/Basso.aiff     # Error
afplay /System/Library/Sounds/Hero.aiff      # Achievement

More Ideas

DevOps & SRE

kubectl logs pod | onnai "summarize errors last hour" | slack-cli -c alerts
aws ec2 describe-instances | onnai "find underutilized instances"
cat Dockerfile | onnai "optimize for size"
crontab -l | onnai "explain what each job does"

Security

nmap -sV target | security "assess vulnerabilities"
cat auth.log | onnai "detect brute force patterns"
cat nginx.conf | security "find misconfigurations"

Git Workflows

# Pre-commit hook
git diff --cached | reviewer || exit 1

# Changelog generation
git log --oneline v1.0..HEAD | writer "generate user-facing changelog"

# Commit message help
git diff --staged | onnai "suggest commit message"

Data Wrangling

cat messy.csv | onnai "normalize phone numbers" > clean.csv
cat data.json | jq '.[]' | onnai "find anomalies"
curl api.example.com/prices | onnai "extract prices" | awk '{sum+=$1} END {print sum}'

Writing & Content

# Chained refinement
cat draft.md | writer "make punchy" | writer "add humor" > final.md

# Translation
cat english.txt | onnai "translate to Spanish"

# Summarization
curl -s https://longread.com/article | lynx -dump -stdin | writer "summarize in 3 bullets"

Learning

man awk | onnai "explain like I'm 5"
cat complex.rs | onnai "add comments explaining each block"
history | onnai "teach me better ways to do these tasks"

Fun

fortune | therapist "respond to this wisdom"
cat recipe.txt | onnai "make it vegan" | onnai "halve portions"
history | onnai "roast my command line habits"


Contributing

Have a workflow to share? Email hello@onnai.ai.