I didn't invent DIIICE. I just noticed I kept pasting the same six things into every AI prompt, every agent brief, every context window, and eventually I gave them letters so I could stop typing them out.
D for Data. I for Intelligence. I for Instructions. I for Ideas. C for Context. E for Examples. Six letters. One taxonomy for everything an AI needs to know.
The reason it's six, not five, is that Examples showed up late. I was running AI agents on Upwork for almost a year before I realized that models learn from examples faster than from any instruction you can write. You can describe a voice in five paragraphs of instructions and the output will still feel generic. Give the same model three paragraphs written in that voice and the mimicry is uncanny. Examples aren't a nice-to-have. They're a category.
This post walks through each letter — what it is, what it isn't, and one canonical DOT from my own vault for each. At the end you'll have a classifier you can actually use.
The Six Letters, in Order of Discovery
People ask why the order is DIIICE and not ABCDEF or Data-first-Context-last. The honest answer is this is the order the letters arrived in. Data came first because that's what felt like the raw material — rows, facts, numbers, things you can look up. Then Intelligence, because I kept noticing that a bunch of what I was writing down wasn't facts but insights — compressed judgments that had cost me thousands of client-hours to form. Then Instructions, because agents needed a separate category for do-this-then-that procedures. Then Ideas, because I had a notebook full of things that were neither facts nor judgments nor procedures — they were hypotheses, half-formed. Then Context, because everything around the work mattered as much as the work. And finally Examples, the day I handed a model three songs and it wrote a fourth that felt like mine.
The order isn't a hierarchy. All six are equal. But the order tells you something about what agents typically need first and what they typically forget to ask for.
D — Data
Data is the stuff you can look up. Prices, dates, addresses, counts, enumerations, schemas. If it fits cleanly in a spreadsheet, it's probably Data.
Data is the easiest category to get right and the easiest to get wrong. Right: a list of your current product SKUs, last quarter's revenue by channel, your content publishing schedule. Wrong: anything a reasonable person would call opinion dressed up as a table. A table of "the best marketing frameworks" is not Data. It's Intelligence pretending to be Data.
The tell: can two people with the same sources independently produce the same value? If yes, Data. If no, something else.
Brand Architecture Models — Branded House vs. House of Brands vs. Hybrid
A three-row table with examples. 'Branded House: Google, Apple, Virgin.' 'House of Brands: P&G, Unilever.' 'Hybrid: Alphabet, Nestlé.' No opinion. Anyone referencing the same sources builds the same table.
I — Intelligence
Intelligence is compressed judgment. The reason a thing is a certain way. The pattern you only see after looking at a hundred examples. A rule of thumb you'd hand to a smart junior before sending them into the wild.
Most of what experienced operators know is Intelligence. It's also the hardest thing to extract because it lives in people's heads and comes out in side-comments. The test: can you say it in 2–4 sentences and does it end with a consequence? "Agencies with fewer than ten employees fail when they hire their first salesperson before their first systems person, because the sales compounding outruns the delivery" is Intelligence. "Agencies fail often" is not — it's vibes.
Context Engineering Paradigm — From Context Extension to Context Curation
For two years the industry treated 'more context window' as the answer. The real unlock was the opposite — learning to curate *less* context, more precisely. The shift from extension to curation is the hinge the whole field is turning on.
I — Instructions
Instructions are imperative. Do this. Then this. If X, then Y. A procedure you'd hand to an agent and expect literal execution.
The most common Instruction mistake is writing instructions that assume context. "Write the email in our voice" is not an Instruction — it's a pointer to a missing Context DOT. A real Instruction would be: "Open the voice DOT. Apply the three anti-patterns. Run the EduSales structure. Produce three variants and tag them A/B/C." Every word commits the agent to something.
A useful rule: if a tech-savvy junior could follow your Instruction without asking a single clarifying question, it's probably a valid Instruction. If they'd need to ask one more thing, you've got a dependency hiding. That dependency is usually Context or Intelligence.
I — Ideas
Ideas are hypotheses. Suggestions. Maybe we should try X. Things that aren't settled enough to be Intelligence but are too structured to be random thoughts. Ideas become Intelligence when they're validated, become Data when they're measured, and stay Ideas forever if they're interesting but never tested.
The reason Ideas get their own category is that AI agents handle them differently. An Idea DOT is something you'd want the agent to consider, not apply. When I'm planning a week's content and I feed the agent my Ideas DOTs, I want it to surface the three that fit this week's theme — not to execute on all of them.
Product-Integrated Content Strategy for AI and Productivity SaaS
The content IS the product demo. The demo IS the content. Not marketing for the product — proof of the product. This is an Idea, not a framework, because it lives or dies on how aggressively you structure for it.
C — Context
Context is everything around the work. Who the person is. What the history is. Why this matters. What the constraints are. What happened last time. What the reader already knows. What the business model actually rewards.
Context is the single largest category in most real vaults. The DOT Vault I built for my own business is roughly 40% Context by count — because every agent needs to know a little bit more about the company, the founder, the customer, the market, the history, the taste, before it can do a single useful thing. Strip out all the Context DOTs and you've got an agent that can do things but doesn't know why to do them.
Context is also where the multiplier effects hide. The same Context DOT ('John grew up in Philly, moved to Vegas in 2019') can be relevant to a marketing agent, a writing agent, a customer-support agent, and a legal agent — in totally different ways. One DOT, many consumers. That's why Context pays compound interest.
Dots Origin Story: Steve Jobs, Connect the Dots to HIM
The company exists because of one Steve Jobs line — 'you can't connect the dots looking forward' — and the realization that AI changes the direction of the arrow. That sentence is Context. It's the *why* behind every feature decision.
E — Examples
Examples are the category I almost missed. For a year I had DIIIC and kept wondering why my agent outputs felt off. Then I gave the same agent three samples of the voice I wanted, without changing the instructions, and the output snapped into place.
Examples are what instructions point at. An Instruction says 'write in John's voice.' An Example is John's voice, transcribed. A good vault has at least three Examples per pattern — one isn't enough because the model will overfit to the single case. Three triangulates.
The pattern that finally made Examples click for me: treat them like flashcards. Short. Labeled. Plural. Not one long essay of 'here's how I write' — a deck of thirty snippets with a tag on each. The agent pattern-matches better, and you can evict bad examples without rewriting anything.
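If it helps to see the flashcard pattern as a data structure, here's a minimal Python sketch. The deck shape, the `tag` and `id` fields, and the helper names are all my own illustration, not a prescribed format; the snippets are pulled from this post.

```python
# Sketch of an Examples deck: short, labeled, plural.
# Field names and snippets are illustrative, not a required schema.

deck = [
    {"id": 1, "tag": "opener", "text": "I didn't invent this. I just noticed it."},
    {"id": 2, "tag": "opener", "text": "Six letters. One taxonomy."},
    {"id": 3, "tag": "closer", "text": "That's the whole trick."},
]

def examples_for(deck, tag, n=3):
    """Pull up to n examples per pattern; three triangulates, one overfits."""
    return [card["text"] for card in deck if card["tag"] == tag][:n]

def evict(deck, card_id):
    """Drop a bad example without rewriting anything else."""
    return [card for card in deck if card["id"] != card_id]
```

The point of the structure is the second function: eviction is a one-line filter, not a rewrite.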
The Identity Triad — Visual, Verbal, and Experiential Brand Pillars
The three pillars of brand identity, with one canonical Example per pillar. Visual = the site. Verbal = this post. Experiential = the feeling of the product. Intelligence with Examples attached is the most useful kind.
The Classifier (Copy This)
Here's the six-question decision tree I use to classify a new piece of knowledge. Feed this to an AI along with the thing you're trying to classify and it'll get it right about 90% of the time.
# DIIICE Classifier
You will be given a piece of knowledge. Classify it into EXACTLY one of
six categories using the questions below, in order. Stop at the first yes.
1. Is it a procedure — do-this-then-that, imperative? → INSTRUCTIONS
2. Is it a sample of the thing itself (voice, style, layout)? → EXAMPLES
3. Is it a hypothesis — "maybe we should try X", untested? → IDEAS
4. Is it compressed judgment — 2–4 sentences ending in
a consequence, rule-of-thumb flavor? → INTELLIGENCE
5. Is it lookup-able — two people with the same sources
independently produce the same value? → DATA
6. Otherwise — who the person is, why this matters, history,
constraints, taste, surrounding conditions: → CONTEXT
Respond with ONLY the category name. If the input fits two categories
equally well, choose the earlier one in the list.

Note the order. Instructions first because it's the most procedural and easiest to recognize. Context last because it's the catch-all — if nothing else fits, it's almost certainly Context.
The 10% failure cases are almost always things that should be split. 'Our voice is direct, and here are three examples' is both Intelligence and Examples. Split it. Two DOTs. That's the real trick — when the classifier struggles, the knowledge isn't wrong, it's compound. Decompose it.
Why This Taxonomy, Not Another
There are other knowledge taxonomies. DIKW (Data, Information, Knowledge, Wisdom) predates me by decades. Cynefin has its five-domain thing. Every enterprise knowledge-management book has a slightly different pyramid.
DIIICE isn't better than those as a theory. It's better as a practice. I notice the difference every time I have to classify something on deadline and DIKW makes me pause. 'Is this Information or Knowledge?' is a philosophical question. 'Is this a procedure or an example?' is an operational one. DIIICE wins because the questions you ask to apply it map to things an agent will do differently.
If your taxonomy doesn't change what the agent does with the result, you don't have a taxonomy. You have vocabulary.
An agent reads Data and presents it. Reads Intelligence and applies it as a filter. Reads Instructions and executes. Reads Ideas and surfaces. Reads Context and assumes. Reads Examples and imitates. Six verbs. Six buckets. That's the whole reason DIIICE exists — six buckets because there are exactly six agent-behaviors that need a bucket.
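Those six verbs can be written down as a literal dispatch table. A sketch; the handler strings are shorthand for whatever behavior your agent framework actually implements.

```python
# Sketch: the six agent behaviors, one verb per DIIICE category.
# Verb names summarize behavior; they are not a real API.

AGENT_VERBS = {
    "DATA": "present",          # surface the facts as-is
    "INTELLIGENCE": "apply",    # use as a filter on decisions
    "INSTRUCTIONS": "execute",  # follow literally
    "IDEAS": "surface",         # offer for consideration, don't act
    "CONTEXT": "assume",        # load as background, never restate
    "EXAMPLES": "imitate",      # pattern-match the style
}

def verb_for(category: str) -> str:
    """What an agent should do with a DOT of this category."""
    return AGENT_VERBS[category.upper()]
```

If a proposed seventh category maps to one of these six verbs, it's a subcategory, not a new letter.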
What This Unlocks
Once every piece of knowledge in your vault has a DIIICE tag, three things become possible that weren't before.
- Routing. When a new agent request comes in, you can pull the relevant DOTs by type — give a writing agent only Examples and Context, give a research agent only Data and Intelligence. Smaller context windows, sharper results, cheaper runs.
- Validation. You can look at a vault and ask: am I Context-heavy and Example-thin? That's an agent that sounds generic. Am I Instructions-heavy and Ideas-thin? That's an agent that never innovates. The distribution is a diagnostic.
- Composition. A good prompt is almost always a recipe — a specific proportion of each type, in a specific order. Once you have DIIICE tags, prompt templates stop being prose and become structured assemblies.
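All three of those operations fall out of a single tag field. A minimal sketch, assuming a DOT is just a dict with a DIIICE tag; the vault contents and function names here are invented for illustration.

```python
from collections import Counter

# Sketch: routing, validation, and composition over a tagged vault.
# A "DOT" here is a dict with a DIIICE tag; contents are made up.

vault = [
    {"tag": "CONTEXT", "text": "John grew up in Philly, moved to Vegas in 2019."},
    {"tag": "EXAMPLES", "text": "Sample paragraph in John's voice."},
    {"tag": "DATA", "text": "Q3 revenue by channel."},
    {"tag": "INSTRUCTIONS", "text": "Open the voice DOT. Produce three variants."},
]

def route(vault, wanted):
    """Routing: pull only the DOT types a given agent needs."""
    return [d for d in vault if d["tag"] in wanted]

def distribution(vault):
    """Validation: the tag distribution is the diagnostic."""
    return Counter(d["tag"] for d in vault)

def compose(vault, recipe):
    """Composition: a prompt as a structured assembly - n of each type, in order."""
    parts = []
    for tag, n in recipe:
        parts += [d["text"] for d in vault if d["tag"] == tag][:n]
    return "\n\n".join(parts)

writing_context = route(vault, {"EXAMPLES", "CONTEXT"})  # smaller window, sharper results
weekly_prompt = compose(vault, [("CONTEXT", 1), ("EXAMPLES", 1), ("INSTRUCTIONS", 1)])
```

The recipe list is the interesting part: once prompts are assemblies, you can version the recipe separately from the DOTs it pulls.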
I've watched this go from a personal hack to the backbone of an agent platform. Six letters. One of the six dots that connected into Dots. Every structured knowledge unit on this site — and there are 49,000 of them and counting — is classified by these six letters. Every agent we run reads them back. The taxonomy isn't an afterthought. It's the product.
If you've got a vault — or you're about to start one — tag everything you've got by DIIICE. The tags are free. The clarity is worth thousands of billable hours. And the agents you run on top of that vault will, finally, sound like they know you.
Up next in the series: how to write an Instruction DOT that survives a model upgrade, and the one Example pattern that does 80% of the voice-mimicry work.