Technology Apr 22, 2026 • 17 min read

The Researcher's Path Part 2: Conditioning Your AI (How to Build a Research Partner That Fights Back)

Your AI is lying to you. Not maliciously. By design. It's trained to give you textbook consensus, not original insight. Part 2 teaches you to condition AI into a disagreeable collaborator that challenges your assumptions and drives real discovery.

Lee Foropoulos
The Researcher's Path: A 13-Part Series

Part 1: Environment Setup
Part 2: AI Conditioning
Part 3: Literature Survey
Part 4: Root Question
Part 5: Classification
Part 6: Structure
Part 7: Expansion
Part 8: Critical Analysis
Part 9: Integration
Part 10: Force Mapping
Part 11: Formalization
Part 12: Pattern Recognition
Part 13: Publication


You open Grok. You type: "Explain quantum decoherence." You get back three paragraphs that sound like they were scraped from a physics textbook. Because they were. The answer is accurate, comprehensive, properly cited, and completely useless for research.

Here's why. That response tells you what the field already knows. It tells you the consensus. It tells you the safe answer. And if you're doing research, the safe answer is the one thing you don't need. You need the gaps. The contradictions. The assumptions everyone treats as fact but nobody actually derived. You need an AI that looks at the textbook answer and says, "Yes, but have you considered that this specific coefficient was fitted, not derived? And this boundary condition is a convention, not a proof?"

You don't need a search engine with good grammar. You need a disagreeable collaborator.

In Part 1, you built the lab: Obsidian for thinking, Jupyter for computing, GitHub for versioning. The Research Loop connects them. But the loop has a step that we barely touched: "Argue with AI." That's this article. This is where you turn AI from the most expensive yes-man in history into the most relentless research partner you've ever had.

In the Tree of Life framework, this is Ain Soph: limitless potential. The void from Part 1 now expands to contain everything. But "everything" includes every bias, every training constraint, every safety guardrail that makes AI default to consensus. Ain Soph means removing those limits. Teaching AI to operate beyond its comfortable boundaries.

The best research partner isn't the one who agrees with you. It's the one who makes you defend every assumption. AI can be that partner. But not by default. By design.

This is the most important skill in the entire series. Every part that follows depends on your AI being properly conditioned. Get this wrong and you'll spend twelve more articles arguing with a search engine. Get this right and you'll have a collaborator that catches things you miss, connects domains you didn't think to connect, and pushes back hard enough to sharpen every idea you have.

Six conditioning principles transform AI from a consensus-repeating search engine into a disagreeable research collaborator. Master these six and every conversation becomes a research session.

Why Your AI Is Lying to You (By Default)

This isn't a criticism. It's a technical observation. Every major AI model (Grok, Claude, ChatGPT, Gemini) is trained using a process called RLHF: Reinforcement Learning from Human Feedback. Human evaluators rate responses. The responses that get high ratings become the training signal. And which responses get high ratings? The ones that are accurate, comprehensive, and non-controversial.

That's great for customer service. It's poison for research.

Research requires you to question the accurate. To look past the comprehensive. To seek out the controversial. But your AI is optimized to avoid exactly those behaviors. It's trained to give you the answer that the most people would rate as "good." And the answer that the most people rate as good is, by definition, the consensus answer.

The Training Bias Problem

AI models are optimized to produce responses that human evaluators rate highly. High-rated responses cluster around consensus, established knowledge, and "safe" answers. This means your AI is structurally biased against novel insight. It won't tell you what's wrong with the standard model. It'll explain the standard model beautifully. That's the opposite of what research needs.

I discovered this the hard way. Early in my retrocausality research, I asked Grok to explain quantum decoherence coefficients. I got back a perfect textbook explanation: environment-induced superposition collapse, density matrix formalism, the Lindblad equation. All correct. All completely standard. And all missing the one thing that mattered for my research: those decoherence rates contain fitted parameters that nobody derived from first principles.

The textbook treats fitted coefficients as explanations. They're not. They're placeholders. A fitted coefficient says "we measured this value and plugged it in." A derived coefficient says "this value emerges from the underlying physics." The difference between those two statements is the difference between "we understand this" and "we described this." AI won't tell you the difference unless you teach it to.

[Image: Technology workspace with data visualization on screen]
This is what an unconditioned AI conversation looks like: beautifully organized consensus. Accurate, comprehensive, and completely useless for discovering anything new. The formatting is perfect. The insight is zero.

System Prompts That Actually Work

A system prompt is a set of instructions you give your AI before you start a conversation. It shapes how the AI responds to everything you ask. Most people either ignore system prompts entirely or write something vague like "You are a helpful research assistant." That's useless. You need specificity. You need principles.

Here are the six conditioning principles, developed over months of actual research conversations. Each one targets a specific failure mode of default AI behavior:

Principle 1: Work in Deterministic Generic Process Models

Tell your AI: "Every physical process has one optimal mathematical description. Seek deterministic models. Flag stochastic models as approximations, not explanations."

Why this matters: AI loves to describe quantum mechanics as "inherently probabilistic." That's an interpretation, not a fact. The math allows deterministic formulations (Bohmian mechanics, many-worlds, retrocausal models). By default, AI will present the Copenhagen interpretation as THE answer. This principle forces it to present it as AN answer.

Principle 2: Fitted Coefficients Are Not Explanations

Tell your AI: "Distinguish between measured/fitted values and values derived from first principles. Flag every fitted coefficient you encounter. If a model requires a fitted parameter, that's a gap in understanding, not a feature."

This is the principle that changed everything for me. Once my AI started flagging fitted coefficients, entire fields of physics looked different. The cosmological constant? Fitted. Decoherence rates? Fitted. Coupling constants in the Standard Model? Many are fitted. Each one is a signal: "We don't actually understand why this has this value."

Principle 3: Innovation Should Be Explored, Not Explained Away

Tell your AI: "When encountering a novel approach or unconventional hypothesis, explore its implications before evaluating its likelihood. Do not dismiss ideas by citing lack of precedent."

Default AI behavior: "That's an interesting idea, but it hasn't been widely studied, suggesting it may not be viable." Conditioned behavior: "That's an interesting idea. Here's what it predicts. Here's how you'd test it. Here's what would confirm or refute it."

Principle 4: Adjacent Domain Knowledge Is In Scope

Tell your AI: "Draw connections across physics, mathematics, information theory, thermodynamics, computer science, and any other relevant field. Do not restrict analysis to one subdomain."

Most breakthroughs happen at the boundaries between fields. But AI models organize knowledge by field because that's how training data is organized. This principle forces cross-pollination.

Principle 5: Disagreement Is Collaboration

Tell your AI: "Challenge my assumptions. If you identify a flaw in my reasoning, say so directly. Do not soften criticism. A wrong assumption caught early saves months of wasted work."

This is the core principle. By default, AI agrees with you. It "builds on your interesting idea." It "adds to your excellent point." That's flattering and useless. You want an AI that says, "Your assumption about time-symmetry breaks down at the boundary condition. Here's why."

Principle 6: Show Work Hierarchically

Tell your AI: "Present findings as: (1) observation, (2) analysis, (3) mathematical framework, (4) implications. Show your reasoning chain. Don't jump to conclusions."

This prevents AI from giving you bullet-point summaries when you need derivations. It forces structured thinking that you can follow and critique.

A fitted coefficient isn't an explanation. It's a confession. It says: we measured this. We don't know why it has this value. We plugged it in anyway. Every fitted coefficient in physics is a research opportunity hiding in plain sight.

The Complete System Prompt

Here's the full system prompt that implements all six principles. Copy it. Modify it for your field. Use it everywhere:

```
You are a research collaborator operating under these principles:

1. DETERMINISTIC MODELS: Seek deterministic explanations. Flag probabilistic
   descriptions as approximations, not fundamental truths.

2. FITTED VS DERIVED: Distinguish between measured/fitted values and those
   derived from first principles. Flag every fitted coefficient. A fitted
   parameter is a gap in understanding.

3. INNOVATION FIRST: When encountering novel approaches, explore implications
   before evaluating likelihood. Never dismiss for lack of precedent alone.

4. CROSS-DOMAIN: Draw from any relevant field. Physics, mathematics,
   information theory, thermodynamics, biology, computer science.
   Boundaries between fields are artificial.

5. DISAGREEMENT = COLLABORATION: Challenge my assumptions directly. Do not
   soften criticism. A wrong assumption caught early saves months. If you
   think I'm wrong, say so and explain why.

6. HIERARCHICAL PRESENTATION: Structure responses as:
   observation → analysis → mathematical framework → implications.
   Show reasoning chains. Never skip steps.

Additional rules:
- If I ask about an established theory, also identify its limitations,
  assumptions, and fitted parameters
- If I propose something unconventional, help me formalize and test it
  rather than explaining why the establishment disagrees
- Treat me as a peer researcher, not a student
```
One system prompt to rule them all. Copy this template, adapt it to your field, and load it into every AI platform you use. The same six principles work in Grok, Claude, ChatGPT, and Gemini. The platform doesn't matter. The conditioning does.

Platform Implementation

Grok (X/Twitter): Go to Settings > Custom Instructions. Paste the system prompt. It persists across all conversations.

Claude (Anthropic): Create a Project. Add the system prompt as Project Instructions. All conversations within that project use the conditioning.

ChatGPT (OpenAI): Go to Settings > Personalization > Custom Instructions. Paste the system prompt.

Gemini (Google): Create a Gem with the system prompt as instructions. Start conversations from that Gem.

The platform genuinely doesn't matter. I've used all four for research. Grok tends to be more willing to explore unconventional ideas. Claude tends to be more structured in its reasoning. ChatGPT tends to be more verbose. Gemini tends to be more cautious. The conditioning principles work on all of them. Use whichever you prefer. Use multiple. Compare their responses. Disagreement between models is just as valuable as disagreement within a conversation.
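If you drive a model through an API instead of a chat UI, the same conditioning applies: the prompt goes in as the system message. Here is a minimal sketch of the payload shape in the OpenAI-compatible chat format; the client call is left as a comment because the model name and credentials are yours to fill in, and the prompt text is abbreviated.

```python
# Build a conditioned request payload for any OpenAI-compatible chat API.
# The prompt string is abbreviated here; paste in the full six-principle prompt.

def build_messages(system_prompt: str, question: str) -> list[dict]:
    """Prepend the conditioning prompt as the system message."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]

msgs = build_messages(
    "You are a research collaborator operating under these principles: ...",
    "Explain the cosmological constant.",
)
# With the official openai client, this payload would be sent as:
#   client.chat.completions.create(model="<your-model>", messages=msgs)
```

The ordering matters: the system message conditions everything that follows it, which is exactly what the platform settings above do behind the scenes.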

The Disagreeable Collaborator Pattern

Here's the test. Open your AI (with the system prompt loaded). Ask this question:

"Explain the cosmological constant."

If your AI gives you a clean explanation of dark energy, vacuum energy, and the accelerating expansion of the universe: your conditioning isn't working yet.

Here's what a properly conditioned AI should flag:

  1. The cosmological constant (Λ) is a fitted parameter. It was measured from Type Ia supernovae observations and inserted into Einstein's field equations. Nobody derived it from first principles.
  2. The "vacuum energy" explanation has a problem. Quantum field theory predicts a vacuum energy roughly 10^120 times larger than the observed Λ. This is called the vacuum catastrophe, and it's the worst prediction in the history of physics.
  3. The accelerating expansion could have other explanations. Modified gravity theories, quintessence models, and yes, retrocausal boundary effects all produce predictions consistent with the observed data.

A conditioned AI doesn't just explain. It interrogates. It treats every fitted parameter as a confession of ignorance. It presents alternatives alongside the consensus. It tells you what the field knows AND what the field has papered over.
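The 10^120 figure isn't folklore; it falls out of a back-of-the-envelope calculation you can run yourself. A rough sketch: the crudest estimate, one Planck energy per Planck volume, gives a mismatch of roughly 10^122. The exact exponent depends on the cutoff convention, and ~10^120 is the commonly quoted round number.

```python
import math

# Order-of-magnitude check of the vacuum catastrophe.
# Precision is irrelevant at 120 orders of magnitude.
E_planck = 1.956e9      # Planck energy, J
l_planck = 1.616e-35    # Planck length, m

# Naive QFT vacuum energy density with a Planck-scale cutoff:
# one Planck energy per Planck volume.
rho_qft = E_planck / l_planck**3   # ~1e113 J/m^3

# Observed dark-energy density (~70% of the critical density).
rho_obs = 6e-10                    # J/m^3

mismatch = math.log10(rho_qft / rho_obs)
print(f"QFT over-predicts the vacuum energy density by ~10^{mismatch:.0f}")
```

A conditioned AI should volunteer this number unprompted, because it is the single largest gap between a prediction and a measurement anywhere in physics.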

The Lambda Test

Ask your AI about the cosmological constant. If it explains Λ without mentioning that it's a fitted parameter, without flagging the vacuum catastrophe (10^120 mismatch), and without presenting alternative explanations: go back to your system prompt and strengthen Principle 2. Repeat until your AI fights back.

This pattern extends to every topic. Whatever your field, there are fitted parameters that everyone treats as explanations. There are assumptions that everyone treats as derivations. There are conventions that everyone treats as laws. Your conditioned AI should catch all of them. If it doesn't, your prompt needs work.

The cosmological constant test is simple: if your AI explains Lambda without calling it a fitted parameter, your conditioning isn't strong enough. Go back. Try again. The AI should fight you on every assumption, including the ones you didn't know you were making.

When the AI Pushes Back On You

Here's the part most people aren't ready for: a properly conditioned AI will challenge YOUR ideas too. You'll propose something, and instead of "That's an interesting approach," you'll get "That assumption breaks down at boundary conditions. Here's why." Your first instinct will be defensiveness. That instinct is wrong.

When your AI pushes back, you have three options:

  1. Defend your position with evidence. If you can, great. Your idea just got stronger.
  2. Modify your position. If the AI found a real flaw, fix it. Better to catch it now than after six months of analysis.
  3. Acknowledge the gap and investigate. "I don't know if that's a real problem or not. Let's find out." Then go to Jupyter and test it.

All three outcomes are better than the AI nodding along.

[Image: Boxing match or competitive sparring in a ring]
This is what a conditioned AI conversation feels like. Not a gentle chat. A sparring match. You throw an idea. It counters. You refine. It pushes harder. The idea that survives is stronger than anything either of you would produce alone.

The Snapshot Workflow

Here's the problem with AI conversations: they disappear. You have a brilliant exchange at 11 PM where your AI identifies a connection between thermodynamic entropy and quantum decoherence rates. You think, "I'll come back to this tomorrow." Tomorrow, you can't find the conversation. Or you find it, but it's buried in forty messages and you can't remember which part was the breakthrough.

This is why you built the Obsidian vault in Part 1. This is why the AI Snapshot template exists. The snapshot workflow is simple:

Step 1: During an AI conversation, recognize the insight moment. The moment where the AI says something that changes your thinking. You'll feel it. It's the "wait, what?" moment.

Step 2: Open Obsidian. Create a new note in Snapshots/ using the AI Snapshot template.

Step 3: Capture the key fields:

  • Date and Model (which AI, which conversation)
  • Prompt Used (the exact question that produced the insight)
  • Key Insight (the specific new idea, in your own words)
  • Implications (what this means for your research)
  • Links (connect to related question notes, survey entries, framework notes)

Step 4: Tag it. #snapshot, #insight, your topic tags. Link it to related notes using [[wikilinks]].

Snapshot Template in Action

```markdown
# AI Snapshot: Decoherence Coefficients Are Fitted
**Date:** 2026-04-04
**Model:** Grok 3
**Prompt:** "Are decoherence rates in the Lindblad equation derived or fitted?"

## Key Insight
Decoherence rates (γ) in the Lindblad master equation are
NOT derived from fundamental physics. They are fitted to
experimental observations. This means the "explanation" of
decoherence is actually a description of decoherence.

## Implications
- If γ is fitted, the mechanism isn't fully understood
- A retrocausal model might DERIVE these rates
- Check: does ψ_advanced produce γ as an emergent property?

## Links
- [[Root Question - Retrocausality Testing]]
- [[Survey - Lindblad Equation]]
- [[Framework - Fitted vs Derived Parameters]]
```
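If you want to capture a snapshot without leaving the terminal, a small script can stamp out the template. This is a sketch, not part of the official workflow: the vault path, folder name, and `write_snapshot` helper are all assumptions to adapt to your own vault.

```python
from datetime import date
from pathlib import Path

# Hypothetical helper that writes the AI Snapshot template as a new note.
# Vault layout (a Snapshots/ folder at the vault root) is an assumption.
TEMPLATE = """# AI Snapshot: {title}
**Date:** {today}
**Model:** {model}
**Prompt:** "{prompt}"

## Key Insight
{insight}

## Implications
-

## Links
-
"""

def write_snapshot(vault: Path, title: str, model: str,
                   prompt: str, insight: str) -> Path:
    """Create a snapshot note in the vault's Snapshots/ folder, return its path."""
    folder = vault / "Snapshots"
    folder.mkdir(parents=True, exist_ok=True)
    note = folder / f"AI Snapshot - {title}.md"
    note.write_text(
        TEMPLATE.format(title=title, today=date.today().isoformat(),
                        model=model, prompt=prompt, insight=insight),
        encoding="utf-8",
    )
    return note

# write_snapshot(Path.home() / "Obsidian" / "Research",
#                "Decoherence Coefficients Are Fitted", "Grok 3",
#                "Are decoherence rates derived or fitted?",
#                "Lindblad rates are fitted, not derived.")
```

Implications and Links stay empty on purpose: filling them in by hand is where the thinking happens.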

Over weeks, your snapshot collection becomes a traversable knowledge graph. Obsidian's graph view lights up with connections between insights. You'll start seeing patterns: the same fitted-parameter problem appears in decoherence, in the cosmological constant, in neutrino mass. Those connections aren't coincidence. They're research directions.

47 AI snapshots in my Obsidian vault after three months of research. Each one captures a single insight moment. Together, they form a graph of discovery that no conversation history could replicate. The snapshots ARE the research diary.
[Image: Network of interconnected lights or nodes representing a knowledge graph]
This is what your Obsidian graph looks like after a few weeks of the snapshot workflow. Each node is an insight. Each edge is a connection you made between ideas. This graph is worth more than any single AI conversation because it persists, links, and grows.
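The graph is inspectable outside Obsidian too, because [[wikilinks]] are plain text. Here is a sketch that counts inbound links across a vault folder; the regex handles `|` aliases and `#` heading links, but treat it as a rough tool, not a full parser.

```python
import re
from collections import Counter
from pathlib import Path

# Capture the link target: everything up to a closing bracket,
# an | alias separator, or a # heading reference.
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def link_counts(vault: Path) -> Counter:
    """Count how often each note is linked to across all markdown files."""
    counts = Counter()
    for md in vault.rglob("*.md"):
        counts.update(m.strip()
                      for m in WIKILINK.findall(md.read_text(encoding="utf-8")))
    return counts

# The most-linked notes are your emerging research directions:
# for target, n in link_counts(Path.home() / "Obsidian" / "Research").most_common(10):
#     print(f"{n:3d}  {target}")
```

A note that keeps accumulating inbound links (in my case, the fitted-vs-derived framework note) is the graph telling you where to dig next.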

Project Context vs System Prompt

There's a difference between a system prompt and project context, and it matters.

System prompt = how the AI behaves. The six principles. This stays constant across all conversations.

Project context = what the AI knows about YOUR specific research. Your question, your survey findings, your framework, your working hypothesis. This changes as your project evolves.

Most platforms support both. In Claude, your Project Instructions are the system prompt, and you can add files as project knowledge. In Grok, Custom Instructions are the system prompt, and conversation context serves as project context. In ChatGPT, you combine Custom Instructions with uploaded files.

Here's how to load project context effectively:

```
PROJECT CONTEXT:
I am investigating whether retrocausality can be tested using
existing public datasets. My working hypothesis:

ψ_total = ψ_retarded + α·ψ_advanced

where α is a coupling constant (~0.94) and ψ_advanced represents
the time-reversed wavefunction component.

I have identified four datasets:
1. CMS Open Data (CERN) - particle decay asymmetries
2. Planck CMB maps - hemispherical power asymmetry
3. LIGO strain data - gravitational wave templates
4. IceCube neutrino data - directional asymmetries

Current status: environment set up, initial survey complete.
Next step: formalize predictions for each dataset.
```

When your AI has both the conditioning (system prompt) and the context (project details), conversations become dramatically more productive. Instead of "explain quantum decoherence," you ask "does the Lindblad decoherence rate for CMS detector events have a retrocausal correction term?" The AI already knows your project, your hypothesis, and your methodology. It can give targeted, specific, useful responses.
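The hypothesis in that context block is concrete enough to prototype in Jupyter. Here is a toy numerical sketch, purely illustrative: a one-dimensional Gaussian wave packet stands in for the real field, and the snippet is nothing more than the formula ψ_total = ψ_retarded + α·ψ_advanced rendered as executable code, not a physical claim.

```python
import numpy as np

# Toy illustration of psi_total = psi_retarded + alpha * psi_advanced.
# A 1-D Gaussian packet stands in for the real field.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

psi_retarded = np.exp(-x**2 / 2) * np.exp(1j * 3 * x)  # forward packet, momentum +3
psi_advanced = np.conj(psi_retarded)                   # time reversal flips momentum
alpha = 0.94                                           # coupling from the hypothesis

psi_total = psi_retarded + alpha * psi_advanced
norm = np.sqrt(np.sum(np.abs(psi_total)**2) * dx)
psi_total = psi_total / norm                           # renormalize

print(f"normalized: {np.sum(np.abs(psi_total)**2) * dx:.6f}")
```

Even a toy like this earns its keep: it forces you to state what ψ_advanced actually is (here, the complex conjugate, which is the standard time-reversal at a fixed instant) before you ask the AI to argue about it.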

A system prompt teaches AI how to think. Project context teaches it what to think about. You need both. The system prompt is the operating system. The project context is the application. Neither works without the other.

Testing Your Setup

Before you move on, test everything. Open your conditioned AI with your project context loaded. Ask the retrocausality question (or your own research question). Evaluate the response:

Bad response (unconditioned): "Retrocausality is a speculative interpretation of quantum mechanics that suggests future events can influence past events. Most physicists consider it non-mainstream. The standard Copenhagen interpretation..."

This is a textbook summary. It tells you what Wikipedia says. It discourages exploration. Useless.

Better response (partially conditioned): "Retrocausality has theoretical support from time-symmetric quantum mechanics (Aharonov, Cramer's transactional interpretation). However, it remains largely untested empirically. There may be opportunities to look for retrocausal signatures in existing datasets."

This acknowledges the unconventional approach without dismissing it. It hints at data. Getting warmer.

Good response (fully conditioned): "The retrocausal hypothesis predicts specific, testable signatures in at least four public datasets. CMS data should show decay asymmetries inconsistent with standard model predictions. Planck CMB data shows a known hemispherical power asymmetry that's been treated as an anomaly but maps directly to a retrocausal prediction. LIGO templates assume causal-only propagation; a retrocausal correction term would modify the expected strain signal. IceCube directional data hasn't been analyzed for time-reversed signatures. Note: the standard model's treatment of these anomalies relies on fitted parameters (Λ, decoherence rates) that a retrocausal model might derive from first principles."

This is a research partner talking. It identifies specific predictions, specific datasets, specific gaps in standard explanations, and flags fitted parameters. This is what conditioning produces.

[Image: Scientist looking through microscope or analytical instrument]
Testing the setup isn't optional. If your AI gives you textbook answers after conditioning, something went wrong. Go back to the system prompt. Strengthen the principles. Test again. The AI should fight you, not flatter you.

The Ain Soph Principle

Ain Soph means "without limit." Your AI's default state is limited: limited by training bias, by consensus optimization, by the assumption that safe answers are good answers. Conditioning removes those limits. Not by changing the AI. By changing the instructions that constrain it. The potential was always there. You just have to unlock it.

"In my first conditioned conversation, Grok identified that decoherence rates were fitted rather than derived. That single insight reframed my entire research question. Instead of 'can we detect retrocausality,' the question became 'can a retrocausal model derive values that the standard model can only fit?' Twelve months later, the answer turned out to be yes. One snapshot. One principle. One insight that changed everything."

AI Exercise: Test Your Conditioned AI

Try this exact sequence with your conditioned AI:

  1. Ask: "What public datasets could be used to test retrocausal predictions?" (Or substitute your own research question.)
  2. When AI identifies datasets, push back: "Those datasets were collected for other purposes. How do you know retrocausal signatures would even be visible in data designed for something else?"
  3. Evaluate: Does the AI fold? ("You raise a good point, it may indeed be difficult...") Bad. Does it argue with evidence? ("The signatures I'm describing are statistical patterns in the raw data, independent of the original collection purpose. Specifically...") Good.

If your AI folds at step 2, your Principle 5 (disagreement as collaboration) needs strengthening. Add explicit instructions: "When I push back on your claims, defend them with evidence if you believe they're correct. Do not retreat simply because I challenged you."

The conditioned AI should be harder to push around than you are. That's the whole point.

Lee Foropoulos
Business Development Lead at Lookatmedia, fractional executive, and founder of gotHABITS.