The Prompt Object Model (POM) is a structured data format and Python SDK for composing, organizing, and rendering prompt instructions for large language models (LLMs). It models a document as a tree of nested sections. Each section may contain:

- `title` (optional for top-level sections, required for nested ones)
- `body` (string content)
- `bullets` (for itemized points)
- `subsections` (nested child sections)
POM supports both machine-readability (via JSON) and structured rendering (via Markdown), making it ideal for prompt templating, modular editing, traceable documentation, and direct LLM consumption.
Structured prompts are essential when building reliable and maintainable LLM instructions. As your prompts evolve, you may need to insert, remove, or rearrange entire sections, subsections, or even individual bullet points. Without a clean structure, such changes can introduce inconsistencies or reduce the LLM's effectiveness due to unclear or chaotic formatting. POM enforces hierarchy and organization to keep prompts modular, traceable, and performant.
Each section is an object with the following fields:

| Field | Type | Required | Description |
|---|---|---|---|
| `title` | string | No (top-level), Yes (nested) | Heading text. Optional only at the top level. |
| `body` | string | No | Paragraph or long-form instruction text. |
| `bullets` | string[] | No | Bulleted list of short statements or rules. |
| `subsections` | Section[] | No | Nested list of sections. Each must include a `title`. |
The entire POM document is a JSON array of top-level section objects.
```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "$id": "https://example.com/pom.schema.json",
  "title": "Prompt Object Model",
  "type": "array",
  "items": { "$ref": "#/$defs/section" },
  "$defs": {
    "section": {
      "type": "object",
      "properties": {
        "title": { "type": "string" },
        "body": { "type": "string" },
        "bullets": {
          "type": "array",
          "items": { "type": "string" }
        },
        "subsections": {
          "type": "array",
          "items": { "$ref": "#/$defs/nestedSection" }
        }
      },
      "required": [],
      "additionalProperties": false
    },
    "nestedSection": {
      "type": "object",
      "properties": {
        "title": { "type": "string" },
        "body": { "type": "string" },
        "bullets": {
          "type": "array",
          "items": { "type": "string" }
        },
        "subsections": {
          "type": "array",
          "items": { "$ref": "#/$defs/nestedSection" }
        }
      },
      "required": ["title"],
      "additionalProperties": false
    }
  }
}
```
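As an illustration, here is a small document (the content is hypothetical) that conforms to this schema: a JSON array holding one top-level section with bullets and a nested subsection.

```json
[
  {
    "title": "Objective",
    "body": "Define the task.",
    "bullets": ["Be concise", "Avoid repetition"],
    "subsections": [
      { "title": "Main Goal", "body": "Provide helpful and direct responses." }
    ]
  }
]
```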
Each section is rendered as Markdown with heading levels corresponding to its depth: `##` for top-level sections, `###` for their subsections, `####` for the next level, and so on. Example:
```markdown
## Objective
Define the task.
- Be concise
- Avoid repetition

### Main Goal
Provide helpful and direct responses.
```
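The depth-to-heading mapping can be sketched in plain Python. This is a simplified illustration, not the SDK's actual `render_markdown` implementation; the section dicts mirror the JSON structure described above.

```python
# Sketch of depth-based Markdown rendering (illustrative, not the SDK's
# real implementation): heading level = depth + 2 hashes, so top-level
# sections render as "##", their subsections as "###", and so on.
def render_section(section: dict, depth: int = 0) -> str:
    lines = []
    if section.get("title"):
        lines.append("#" * (depth + 2) + " " + section["title"])
    if section.get("body"):
        lines.append(section["body"])
    for bullet in section.get("bullets", []):
        lines.append(f"- {bullet}")
    for sub in section.get("subsections", []):
        lines.append(render_section(sub, depth + 1))
    return "\n".join(lines)

doc = [{
    "title": "Objective",
    "body": "Define the task.",
    "bullets": ["Be concise", "Avoid repetition"],
    "subsections": [
        {"title": "Main Goal", "body": "Provide helpful and direct responses."}
    ],
}]
print("\n".join(render_section(s) for s in doc))
```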
POM documents can also be rendered to XML as an alternative to Markdown. This format is especially useful when your LLM is tuned to expect or parse structured XML data.
Each section becomes a `<section>` element, with optional `<title>`, `<body>`, `<bullets>`, and nested `<subsections>`. Here's an example:
```xml
<prompt>
  <section>
    <title>Key Skills</title>
    <body>You have the following skills.</body>
    <bullets>
      <bullet>You like Star Wars.</bullet>
      <bullet>You can look up the weather at any location (including Star Wars planets).</bullet>
      <bullet>You can dial digits when asked.</bullet>
    </bullets>
    <subsections>
      <section>
        <title>Communication</title>
        <body>You can converse fluently in natural language.</body>
        <bullets>
          <bullet>Use concise responses</bullet>
          <bullet>Avoid over-explaining unless asked</bullet>
        </bullets>
      </section>
    </subsections>
  </section>
</prompt>
```
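This element structure can be produced with the standard library alone. The sketch below is an assumption-laden illustration (the SDK's own XML renderer, if exposed, may differ), mapping each section dict to the `<section>` element shape shown above.

```python
# Sketch of XML rendering using only the stdlib; not the SDK's actual
# implementation. Each section dict becomes a <section> element with
# optional <title>, <body>, <bullets>, and nested <subsections>.
from xml.etree import ElementTree as ET

def section_to_xml(section: dict) -> ET.Element:
    el = ET.Element("section")
    for field in ("title", "body"):
        if section.get(field):
            ET.SubElement(el, field).text = section[field]
    if section.get("bullets"):
        bullets = ET.SubElement(el, "bullets")
        for b in section["bullets"]:
            ET.SubElement(bullets, "bullet").text = b
    if section.get("subsections"):
        subs = ET.SubElement(el, "subsections")
        for sub in section["subsections"]:
            subs.append(section_to_xml(sub))
    return el

def render_xml(doc: list) -> str:
    prompt = ET.Element("prompt")
    for s in doc:
        prompt.append(section_to_xml(s))
    ET.indent(prompt)  # pretty-printing; requires Python 3.9+
    return ET.tostring(prompt, encoding="unicode")

doc = [{
    "title": "Key Skills",
    "body": "You have the following skills.",
    "bullets": ["You like Star Wars."],
    "subsections": [{"title": "Communication"}],
}]
print(render_xml(doc))
```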
While XML is highly structured and easy for LLMs to parse, it uses significantly more input tokens than Markdown or plain text. This may impact performance or cost in token-limited environments. Choose XML rendering when structure and reliability outweigh token efficiency.
```python
from POM import PromptObjectModel

# Build a document programmatically, then render it to Markdown.
pom = PromptObjectModel()
section = pom.add_section(title="Objective", body="Define the LLM's purpose.")
section.add_bullets(["Summarize clearly", "Answer efficiently"])
section.add_subsection(title="Main Goal", body="Be concise and helpful.")

print(pom.render_markdown())
```
To load from JSON:
```python
with open("prompt.json", "r") as f:
    pom = PromptObjectModel.from_json(f.read())

print(pom.render_markdown())
```
The Prompt Object Model is a lightweight but powerful structure for managing rich, reusable LLM instructions.