The Researcher's Path: A 13-Part Series
Part 1: Environment Setup → Part 2: AI Conditioning → Part 3: Literature Survey → Part 4: Root Question → Part 5: Classification → Part 6: Structure → Part 7: Expansion → Part 8: Critical Analysis → Part 9: Integration → Part 10: Force Mapping → Part 11: Formalization → Part 12: Pattern Recognition → Part 13: Publication
The typical literature review goes like this. You search Google Scholar for your topic. You find five papers. You read their reference lists and find twenty more. You read those reference lists and find fifty more. After two weeks, you have a bibliography of seventy papers, all of which cite each other, all of which were written by people in the same department at the same twelve universities, and all of which agree on the same fundamental assumptions.
Congratulations. You've mapped an echo chamber.
This is how most researchers survey their field. It's how PhD students are taught to do it. And it's why entire fields can spend decades stuck on the same problems. The survey doesn't find gaps because the survey never looks outside the boundaries that created the gaps in the first place.
In the retrocausality project, here's what a traditional literature review would have found: a handful of theoretical papers by Huw Price, John Cramer, Yakir Aharonov, and Olivier Costa de Beauregard. These papers propose time-symmetric interpretations of quantum mechanics. They're brilliant, respected, and almost entirely theoretical. A traditional review would conclude: "Retrocausality remains an untested theoretical proposal." End of survey. Close the book.
Here's what the traditional review would have missed: four publicly available datasets, totaling hundreds of gigabytes, sitting on servers at CERN, NASA, and IceCube, containing exactly the signatures that retrocausal theory predicts. Nobody connected the theory to the data because the theorists don't read experimental data catalogs, and the experimentalists don't read foundational physics philosophy.
This is Part 3 of The Researcher's Path. In the Tree of Life framework, this is Ain Soph Aur: limitless light. The first illumination. The moment you zoom out far enough to see the entire landscape, not just the corner that your field has been staring at for fifty years.
Your environment is built. Your AI is conditioned. Now you're going to use both to map everything that's been tried, everything that hasn't, and the gap between them where your contribution lives.
Why Most Literature Reviews Are Useless
Let's be precise about why traditional surveys fail. There are three structural problems, and they compound each other.
Problem 1: Citation Networks Are Inbred
Academic papers cite other academic papers. Those papers cite other papers. The result is a citation network that reinforces itself. If everyone in quantum foundations cites the same twenty authors, anyone searching by citation will find those same twenty authors. The network is closed. New perspectives, adjacent fields, and unconventional approaches don't get cited, so they don't appear in citation-driven searches.
This isn't a conspiracy. It's selection bias baked into the system. Peer reviewers are drawn from the same network. They recommend citations from the same network. The bibliography self-replicates like a very boring organism.
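The mechanism is concrete enough to sketch. Reference-chasing is just breadth-first search over the citation graph, and if that graph forms a closed cluster, the search can never escape it. A minimal illustration (every paper name and citation edge here is invented):

```python
from collections import deque

# Toy citation graph: paper -> papers it cites (all names invented).
# The "qf" papers cite only each other; the signal-processing papers
# are never cited by any of them, so reference-chasing can't reach them.
citations = {
    "qf_price":     ["qf_cramer", "qf_abl"],
    "qf_cramer":    ["qf_abl", "qf_price"],
    "qf_abl":       ["qf_price"],
    "sp_fourier":   ["sp_filtering"],   # adjacent field, uncited by qf_*
    "sp_filtering": [],
}

def reference_chase(start):
    """Breadth-first search: read a paper, then every paper it cites."""
    seen, queue = {start}, deque([start])
    while queue:
        paper = queue.popleft()
        for cited in citations.get(paper, []):
            if cited not in seen:
                seen.add(cited)
                queue.append(cited)
    return seen

found = reference_chase("qf_price")
print(sorted(found))  # only the qf_* cluster; sp_* never appears
```

No matter how many rounds of reference-chasing you run, the traversal terminates inside the cluster. That is the echo chamber, stated as an algorithm.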
Problem 2: Field Boundaries Are Invisible Walls
Physics journals don't cite information theory papers. Cosmology reviews don't reference signal processing methods. Quantum foundations doesn't read the CERN Open Data catalog. These field boundaries are completely artificial, maintained by university department structures and journal scope statements, and they hide relevant work from every survey that stays within one field.
In my retrocausality survey, the most useful methods turned out to come from signal processing (Fourier analysis applied to CMS data), cosmological anomaly detection (the Planck hemispherical asymmetry), and Bayesian model comparison (from biostatistics, of all places). None of these appear in a quantum foundations literature review.
Problem 3: Surveys Map Knowledge, Not Ignorance
A traditional survey tells you what the field knows. Useful, but not sufficient. What you actually need is a map of what the field doesn't know. Where are the gaps? Which predictions haven't been tested? Which assumptions haven't been questioned? Which data hasn't been analyzed for signatures that a different theory predicts?
The gaps are where discovery lives. And traditional surveys are structurally designed to miss them.
The Survey Trap
If your literature review only cites papers from your field, written by authors who cite each other, and published in journals that all share the same scope statement: you haven't surveyed the landscape. You've surveyed the echo chamber. The gaps you need to find are outside those walls.
Surveying the Retrocausality Landscape
Let me walk you through what the retrocausality survey actually looked like, using the methodology you should apply to your own field.
Step 1: The Conventional Search
I started where everyone starts. ArXiv searches for "retrocausality," "time-symmetric quantum mechanics," "advanced wavefunction." This produced the expected cast of characters:
- Huw Price (Cambridge/Sydney): The philosopher who's been arguing since the 1990s that physics has no good reason to assume time flows in only one direction. His 2012 paper "Does Time-Symmetry Imply Retrocausality?" is the clearest statement of the theoretical case.
- John Cramer (University of Washington): Proposed the Transactional Interpretation of quantum mechanics in 1986. Uses both retarded (forward-in-time) and advanced (backward-in-time) waves. Elegant, testable in principle, and mostly ignored by mainstream physics.
- Yakir Aharonov, Peter Bergmann, Joel Lebowitz (ABL): Their 1964 paper introduced the two-state vector formalism, which describes quantum systems using both past and future boundary conditions. This is the mathematical backbone of most modern retrocausal proposals.
- Olivier Costa de Beauregard: French physicist who proposed retrocausal signaling through quantum correlations. Controversial, fascinating, largely forgotten.
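The conventional-search step can itself be scripted. arXiv exposes a public Atom API at export.arxiv.org (no key required), so the searches above reduce to building a query URL and parsing the feed. A minimal sketch; the sample response is trimmed to the two fields we parse:

```python
import urllib.parse
import xml.etree.ElementTree as ET

API = "http://export.arxiv.org/api/query"  # arXiv's public Atom endpoint

def build_query(terms, max_results=20):
    """URL for an arXiv API search; fetch it with urllib or requests."""
    return API + "?" + urllib.parse.urlencode({
        "search_query": " OR ".join(f'all:"{t}"' for t in terms),
        "max_results": max_results,
    })

def parse_feed(atom_xml):
    """Extract (title, id) pairs from an arXiv Atom response."""
    ns = {"a": "http://www.w3.org/2005/Atom"}
    root = ET.fromstring(atom_xml)
    return [(e.findtext("a:title", namespaces=ns).strip(),
             e.findtext("a:id", namespaces=ns))
            for e in root.findall("a:entry", ns)]

url = build_query(["retrocausality", "time-symmetric quantum mechanics"])

# A trimmed sample of the Atom format the API returns:
sample = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry>
    <id>http://arxiv.org/abs/1002.0906v3</id>
    <title>Does Time-Symmetry Imply Retrocausality?</title>
  </entry>
</feed>"""
print(parse_feed(sample))
```

Scripting the search matters for a reason that becomes clear in the next steps: you can re-run the same queries later and diff the results, which a hand-run Google Scholar session can't do.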
Step 2: Map What They Actually Tested
Here's where the survey gets interesting. I asked my conditioned AI: "Of these theoretical proposals, which ones include empirically testable predictions? Which of those predictions have been tested?"
The answer was revealing. Price's work is philosophical: it argues that retrocausality is logically coherent but doesn't specify measurements. Cramer proposed a specific experiment in 2006 (using entangled photons) but the results were ambiguous. Aharonov's two-state formalism makes predictions through "weak measurements," which have been tested in tabletop experiments with photons but not in high-energy physics.
Nobody had taken these predictions and applied them to existing large-scale datasets. Not one person.
Step 3: Cross the Boundaries
This is where the conditioned AI earned its keep. I asked: "What fields outside quantum foundations deal with time-asymmetry, and what have they found?"
The AI pulled in:
- Thermodynamics: The arrow of time is explained by entropy increase, but the underlying laws are time-symmetric. This is the "Past Hypothesis" problem. Physicists have been arguing about it for 150 years.
- Cosmology: The Planck satellite data shows a hemispherical power asymmetry in the cosmic microwave background. Cosmologists call it an "anomaly." Nobody in quantum foundations had noticed that a retrocausal model predicts exactly this kind of asymmetry.
- Information theory: Retrocausality can be formulated as a constraint on information flow. The mathematics of backward information propagation already exists in control theory and signal processing.
- Gravitational wave physics: LIGO's matched filtering templates assume strictly causal (forward-in-time) wave propagation. A retrocausal correction term would modify the templates. Nobody had checked.
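Matched filtering itself is simple enough to sketch: slide a template across a noisy time series and look for the offset where the correlation peaks. This is a toy illustration only, nothing like a real LIGO template; the sine-burst waveform, injection point, and amplitude are all invented:

```python
import math
import random

rng = random.Random(42)

# Template: a short sine burst (a stand-in for a real waveform template).
template = [math.sin(2 * math.pi * k / 8) for k in range(64)]

# Signal: Gaussian noise with the template injected at a known offset.
inject_at, amplitude = 500, 5.0
signal = [rng.gauss(0.0, 1.0) for _ in range(1024)]
for k, t in enumerate(template):
    signal[inject_at + k] += amplitude * t

def matched_filter(sig, tpl):
    """Correlate the template at every offset; return the best offset."""
    scores = [
        sum(sig[o + k] * tpl[k] for k in range(len(tpl)))
        for o in range(len(sig) - len(tpl))
    ]
    return max(range(len(scores)), key=scores.__getitem__)

best = matched_filter(signal, template)
print(best)  # lands at, or within a cycle of, the injection offset
```

The point of the sketch: the template is baked into the detection statistic. If the true waveform carried a correction term the template lacks, the mismatch would show up in the residuals after subtraction, which is exactly the untested signature flagged above.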
The Gap Analysis
Now we get to the part that changes everything. You've surveyed the conventional literature. You've crossed domain boundaries. You've built a map. Now you stare at that map and ask: where are the holes?
In the retrocausality project, the gap was almost comically obvious once you saw it:
| What Exists | What's Missing |
|---|---|
| Theoretical framework (since 1964) | Empirical testing on large datasets |
| Testable predictions (since 1980s) | Predictions mapped to specific dataset signatures |
| Public CMS data (since 2014) | Anyone analyzing CMS data for retrocausal signatures |
| Planck CMB anomaly (since 2013) | Anyone connecting the anomaly to retrocausal predictions |
| LIGO strain data (since 2016) | Templates that include retrocausal correction terms |
| IceCube neutrino data (since 2013) | Directional analysis for time-reversed asymmetries |
The gap wasn't theoretical. Theory was fine. The gap was that nobody had pointed the data at the theory. The telescope existed. The star map existed. Nobody looked through the telescope at the stars on the map.
The Ain Soph Aur Principle
Ain Soph Aur means "limitless light." This is the moment of first illumination. Not the full picture yet (that comes in later parts), but the first flash where you see the shape of the landscape. The survey produces this flash. You see what exists, what doesn't, and the enormous gap between theory and testing. That gap is your contribution. That gap is where the light enters.
The Four Public Datasets
CMS Open Data (CERN): 443,761+ collision events with full detector readouts. Available at opendata.cern.ch. Prediction: decay channel asymmetries inconsistent with standard model rates.
Planck CMB Maps (ESA/NASA): Full-sky microwave background temperature and polarization maps. Available at pla.esac.esa.int. Prediction: hemispherical power asymmetry maps to retrocausal boundary conditions.
LIGO Strain Data (Caltech/MIT): Raw gravitational wave strain from multiple detector events. Available at gwosc.org. Prediction: residuals after standard template subtraction contain retrocausal correction signatures.
IceCube Neutrino Data (U. Wisconsin): Directional and energy data for detected neutrino events. Available at icecube.wisc.edu/data. Prediction: directional asymmetries in neutrino arrival patterns.
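Before downloading any of these, it's worth a back-of-envelope check on what each dataset could even resolve. For a counting asymmetry A = (N₊ − N₋)/N, the statistical error is roughly 1/√N, so the smallest asymmetry distinguishable from zero scales as n·σ/√N. A sketch of that arithmetic, using the CMS event count quoted above:

```python
import math

def smallest_detectable_asymmetry(n_events, n_sigma=3.0):
    """For A = (N_plus - N_minus) / N, sigma_A is about 1/sqrt(N) for
    small A, so the smallest asymmetry distinguishable from zero at
    n_sigma significance is n_sigma / sqrt(N)."""
    return n_sigma / math.sqrt(n_events)

n = 443_761  # CMS Open Data event count quoted above
a_min = smallest_detectable_asymmetry(n)
print(f"3-sigma floor with {n} events: A > {a_min:.4f}")  # ~0.0045
```

So the CMS sample can resolve decay-channel asymmetries down to roughly half a percent; any predicted signature smaller than that would need more events or a sharper statistic.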
"Physics has been in a deadlock. Not because the theory is wrong. Not because the data doesn't exist. Because the theory is in one building and the data is in another, and nobody walked between them. The survey is the walk."
Building Your Survey in Obsidian
Now let's turn this methodology into a repeatable system. In your Obsidian vault, the 02-Survey/ folder is where all of this lives. Here's how to structure it.
One Note Per Source
Every paper, every dataset, every cross-domain connection gets its own note. Use the Survey Entry template from Part 1:
```markdown
# Price 2012: Does Time-Symmetry Imply Retrocausality?

**Source:** arXiv:1002.0906v3
**Field:** quantum foundations, philosophy of physics
**Date Read:** 2026-04-04
**Tags:** #survey #retrocausality #time-symmetry #theoretical

## Summary
Argues that if quantum mechanics is time-symmetric (which the
math allows), then retrocausality follows logically. No new
physics needed. Just removing the assumption that causes
must precede effects.

## Key Claims
- Time-asymmetry in QM is a convention, not a derivation
- Bell's theorem allows retrocausal hidden variables
- Standard objections (grandfather paradox) don't apply to
  quantum retrocausality

## Gaps / What's Missing
- No specific empirical predictions
- No proposed dataset or experimental protocol
- Philosophical argument, not testable framework

## Connection to Our Question
Provides theoretical justification. We need to supply the
empirical testing framework that Price explicitly says is
needed but doesn't provide.
```

The Survey Summary Note
After you've entered all your sources, create a summary note at 02-Survey/SURVEY-SUMMARY.md. This is the bird's eye view:
```markdown
# Survey Summary: Retrocausality Empirical Testing

## Theoretical Landscape
- 4 major theoretical frameworks (Price, Cramer, ABL, Costa de B)
- Strong theoretical support since 1964
- Gap: almost no empirical testing at scale

## Cross-Domain Connections
- Thermodynamics: time-symmetry, Past Hypothesis
- Cosmology: CMB hemispherical anomaly (Planck 2013)
- Signal Processing: applicable to CMS and LIGO data
- Information Theory: retrocausal information flow formalism

## Public Data Available
| Dataset | Size | Signatures | Status |
|---------|------|------------|--------|
| CMS | 443K events | Decay asymmetries | Untested |
| Planck | Full-sky CMB | Hemispherical asymmetry | Known but unexplained |
| LIGO | Multiple events | Template residuals | Untested |
| IceCube | Multi-year | Directional asymmetries | Untested |

## The Gap
Theory exists. Data exists. Nobody connected them.
```

Use Tags and Links Aggressively
Tag every survey note with its field (#quantum-foundations, #cosmology, #signal-processing), its type (#theoretical, #experimental, #dataset), and its relevance (#supports-hypothesis, #contradicts, #neutral). Link notes to each other with [[wikilinks]] wherever they connect.
After two weeks of surveying, open Obsidian's graph view. You'll see clusters (papers that cite each other) connected by thin bridges (your cross-domain connections). Those bridges are your most valuable notes. They represent connections that the original authors didn't make. They are, in a real sense, your first original contribution.
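Finding those bridge notes doesn't have to be purely visual. Because tags and wikilinks are plain text, a short script can list every link whose two endpoints carry different field tags. A sketch; the sample notes are invented, and in practice you would build `notes_text` by reading each `.md` file in `02-Survey/`:

```python
import re

FIELD_TAGS = {"#quantum-foundations", "#cosmology", "#signal-processing"}

# Invented sample notes; in practice, e.g.:
#   {p.stem: p.read_text() for p in Path("02-Survey").rglob("*.md")}
notes_text = {
    "price-2012": "#quantum-foundations links: [[planck-anomaly]]",
    "planck-anomaly": "#cosmology hemispherical power asymmetry",
    "cramer-1986": "#quantum-foundations cites [[price-2012]]",
}

def parse(notes_text):
    """Extract field tags and outgoing [[wikilinks]] from each note."""
    return {
        name: (set(re.findall(r"#[\w-]+", text)) & FIELD_TAGS,
               set(re.findall(r"\[\[([^\]|#]+)", text)))
        for name, text in notes_text.items()
    }

def bridges(parsed):
    """Links whose two endpoints carry different field tags."""
    return sorted(
        (src, dst)
        for src, (tags, links) in parsed.items()
        for dst in links
        if dst in parsed
        and tags and parsed[dst][0]
        and not tags & parsed[dst][0]
    )

print(bridges(parse(notes_text)))  # [('price-2012', 'planck-anomaly')]
```

Here the quantum-foundations note linking to the cosmology note is flagged as a bridge, while the within-field citation is not. Run over a real vault, the output is a ranked list of your cross-domain connections.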
Your Survey as a Research Artifact
Here's something most researchers don't realize: your survey is publishable. Not as a blog post. As an actual research contribution.
A well-structured survey that maps multiple fields, identifies cross-domain connections, and highlights untested predictions is exactly what review journals publish. "Interdisciplinary Review of Retrocausal Predictions and Available Public Datasets" is a paper that could stand on its own. Even if you never run a single analysis, a survey that maps theory to untested public data is valuable because it tells other researchers where to look.
But you're not going to stop at the survey. You're going to do the analysis too. This is just the map.
Commit your survey to GitHub:
```bash
# In your research repo
git add docs/survey-summary.md
git add docs/survey-notes/
git commit -m "complete survey: 4 theoretical frameworks, 4 public datasets, gap analysis"
git push
```

Your survey is now versioned, shareable, and timestamped. If someone publishes a similar survey next year, your Git history proves you were here first. Version control isn't just about code. It's about intellectual property.
AI Exercise: Building Your Survey
Run this exact sequence with your conditioned AI:
"Survey all approaches to testing [your topic] empirically." Let the AI produce a comprehensive list. Don't interrupt. Let it go wide.
"Which of these approaches used publicly available data?" This separates theoretical proposals from actual tests. In most fields, the answer will be "very few" or "none."
"What public datasets could test predictions from [your topic] but haven't been used for that purpose?" This is the gap question. This is where AI earns its conditioning. A properly conditioned AI will identify datasets from adjacent fields that contain relevant signatures.
Push back: "Those datasets were collected for different purposes. How do you know the signatures would be detectable?" If the AI defends its position with specific statistical arguments, your conditioning is working. If it retreats, strengthen Principle 5 in your system prompt.
Snapshot the results. Create survey notes for each source the AI identifies. Create a summary note. Build the links. Push to GitHub.
For the retrocausality project, this sequence converged on CMS, Planck, LIGO, and IceCube within three conversations. Your project will converge on whatever datasets exist for your question. If no datasets exist, that's a gap too, and Part 6 will teach you how to acquire or create data.