Old library with books and warm light streaming through windows
Technology Apr 29, 2026 • 18 min read

The Researcher's Path Part 3: The Survey (Mapping Everything That's Been Tried and Everything That Hasn't)

Most literature reviews are echo chambers. They cite the same 20 papers, miss entire adjacent fields, and never identify the gaps where real discovery hides. Part 3 teaches you to survey like a researcher, not a student.

Lee Foropoulos
Contents

The Researcher's Path: A 13-Part Series

  • Part 1: Environment Setup
  • Part 2: AI Conditioning
  • Part 3: Literature Survey
  • Part 4: Root Question
  • Part 5: Classification
  • Part 6: Structure
  • Part 7: Expansion
  • Part 8: Critical Analysis
  • Part 9: Integration
  • Part 10: Force Mapping
  • Part 11: Formalization
  • Part 12: Pattern Recognition
  • Part 13: Publication


The typical literature review goes like this. You search Google Scholar for your topic. You find five papers. You read their reference lists and find twenty more. You read those reference lists and find fifty more. After two weeks, you have a bibliography of seventy papers, all of which cite each other, all of which were written by people in the same department at the same twelve universities, and all of which agree on the same fundamental assumptions.

Congratulations. You've mapped an echo chamber.

This is how most researchers survey their field. It's how PhD students are taught to do it. And it's why entire fields can spend decades stuck on the same problems. The survey doesn't find gaps because the survey never looks outside the boundaries that created the gaps in the first place.

In the retrocausality project, here's what a traditional literature review would have found: a handful of theoretical papers by Huw Price, John Cramer, Yakir Aharonov, and Olivier Costa de Beauregard. These papers propose time-symmetric interpretations of quantum mechanics. They're brilliant, respected, and almost entirely theoretical. A traditional review would conclude: "Retrocausality remains an untested theoretical proposal." End of survey. Close the book.

Here's what the traditional review would have missed: four publicly available datasets, totaling hundreds of gigabytes, sitting on servers at CERN, NASA, and IceCube, containing exactly the signatures that retrocausal theory predicts. Nobody connected the theory to the data because the theorists don't read experimental data catalogs, and the experimentalists don't read foundational physics philosophy.

The data was on the server. The theory was in the papers. Nobody connected them because the survey never crossed the boundary between theoretical physics and experimental data catalogs. That boundary is where the discovery was hiding.

This is Part 3 of The Researcher's Path. In the Tree of Life framework, this is Ain Soph Aur: limitless light. The first illumination. The moment you zoom out far enough to see the entire landscape, not just the corner that your field has been staring at for fifty years.

Your environment is built. Your AI is conditioned. Now you're going to use both to map everything that's been tried, everything that hasn't, and the gap between them where your contribution lives.

Four publicly available datasets contain retrocausal signatures that the theoretical literature never mentioned. CMS at CERN. Planck at ESA/NASA. LIGO at Caltech. IceCube at Wisconsin. All free. All sitting there. All missed by every literature review in the field.

Why Most Literature Reviews Are Useless

Let's be precise about why traditional surveys fail. There are three structural problems, and they compound each other.

Problem 1: Citation Networks Are Inbred

Academic papers cite other academic papers. Those papers cite other papers. The result is a citation network that reinforces itself. If everyone in quantum foundations cites the same twenty authors, anyone searching by citation will find those same twenty authors. The network is closed. New perspectives, adjacent fields, and unconventional approaches don't get cited, so they don't appear in citation-driven searches.

This isn't a conspiracy. It's selection bias baked into the system. Peer reviewers are drawn from the same network. They recommend citations from the same network. The bibliography self-replicates like a very boring organism.
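You can even quantify the inbreeding. Below is a toy sketch with an invented mini-bibliography (real citation edges could come from an OpenAlex or Semantic Scholar export): it measures what fraction of a reading list's citations point back into the reading list itself. Values near 1.0 mean you're inside a closed network.

```python
from itertools import chain

# Hypothetical bibliography: paper -> set of papers it cites.
# These entries are illustrative stand-ins, not real citation data.
bibliography = {
    "price2012": {"cramer1986", "abl1964", "price1996"},
    "cramer1986": {"abl1964", "wheeler1945"},
    "abl1964": set(),
    "price1996": {"abl1964", "cramer1986"},
}

def insularity(bib):
    """Fraction of all outgoing citations that land back inside the
    bibliography itself. Near 1.0 = echo chamber; lower = the reading
    list reaches outside its own walls."""
    cited = list(chain.from_iterable(bib.values()))
    if not cited:
        return 0.0
    internal = sum(1 for c in cited if c in bib)
    return internal / len(cited)

print(f"insularity: {insularity(bibliography):.2f}")
```

In this toy example six of the seven citations stay inside the list. If your own survey scores like that, Problems 2 and 3 below are already biting you.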

Problem 2: Field Boundaries Are Invisible Walls

Physics journals don't cite information theory papers. Cosmology reviews don't reference signal processing methods. Quantum foundations doesn't read the CERN Open Data catalog. These field boundaries are completely artificial, maintained by university department structures and journal scope statements, and they hide relevant work from every survey that stays within one field.

In my retrocausality survey, the most useful methods turned out to come from signal processing (Fourier analysis applied to CMS data), cosmological anomaly detection (the Planck hemispherical asymmetry), and Bayesian model comparison (from biostatistics, of all places). None of these appear in a quantum foundations literature review.
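The signal-processing borrow is easy to sketch. Here is a toy Python example on synthetic data (not real CMS files) showing the shape of the Fourier check: inject a small periodic modulation into Poisson event counts, then recover it from the power spectrum. The period and amplitude below are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a binned event-rate time series:
# Poisson noise around a mean of 100 counts per bin, plus a small
# injected sinusoidal modulation with a period of 64 bins.
n = 1024
t = np.arange(n)
rate = 100 + 3.0 * np.sin(2 * np.pi * t / 64)
counts = rng.poisson(rate)

# Power spectrum of the mean-subtracted series: a genuine periodic
# modulation shows up as a peak at its frequency bin.
power = np.abs(np.fft.rfft(counts - counts.mean())) ** 2
freqs = np.fft.rfftfreq(n)

peak = freqs[np.argmax(power)]
print(f"strongest periodicity at frequency {peak:.6f} (injected: {1/64:.6f})")
```

The same ten lines, pointed at real binned event counts instead of synthetic ones, are the skeleton of the method. None of this is exotic; it simply lives in a different aisle of the library.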

Problem 3: Surveys Map Knowledge, Not Ignorance

A traditional survey tells you what the field knows. Useful, but not sufficient. What you actually need is a map of what the field doesn't know. Where are the gaps? Which predictions haven't been tested? Which assumptions haven't been questioned? Which data hasn't been analyzed for signatures that a different theory predicts?

The gaps are where discovery lives. And traditional surveys are structurally designed to miss them.

The Survey Trap

If your literature review only cites papers from your field, written by authors who cite each other, and published in journals that all share the same scope statement: you haven't surveyed the landscape. You've surveyed the echo chamber. The gaps you need to find are outside those walls.

Library stacks stretching into the distance with organized shelves
The traditional literature review: deep, thorough, and blind. Every paper on these shelves cites every other paper on these shelves. What you need is the paper that ISN'T here. The one filed in a different section, written in a different vocabulary, solving the same problem from a direction nobody in this aisle has considered.

Surveying the Retrocausality Landscape

Let me walk you through what the retrocausality survey actually looked like, using the methodology you should apply to your own field.

Step 1: The Conventional Search

I started where everyone starts. ArXiv searches for "retrocausality," "time-symmetric quantum mechanics," "advanced wavefunction." This produced the expected cast of characters:

  • Huw Price (Cambridge/Sydney): The philosopher who's been arguing since the 1990s that physics has no good reason to assume time flows in only one direction. His 2012 paper "Does Time-Symmetry Imply Retrocausality?" is the clearest statement of the theoretical case.
  • John Cramer (University of Washington): Proposed the Transactional Interpretation of quantum mechanics in 1986. Uses both retarded (forward-in-time) and advanced (backward-in-time) waves. Elegant, testable in principle, and mostly ignored by mainstream physics.
  • Yakir Aharonov, Peter Bergmann, Joel Lebowitz (ABL): Their 1964 paper introduced the two-state vector formalism, which describes quantum systems using both past and future boundary conditions. This is the mathematical backbone of most modern retrocausal proposals.
  • Olivier Costa de Beauregard: French physicist who proposed retrocausal signaling through quantum correlations. Controversial, fascinating, largely forgotten.
1964: the year Aharonov, Bergmann, and Lebowitz published the two-state vector formalism. The mathematical foundation for retrocausal quantum mechanics has existed for over sixty years. The empirical testing? Almost none.

Step 2: Map What They Actually Tested

Here's where the survey gets interesting. I asked my conditioned AI: "Of these theoretical proposals, which ones include empirically testable predictions? Which of those predictions have been tested?"

The answer was revealing. Price's work is philosophical: it argues that retrocausality is logically coherent but doesn't specify measurements. Cramer proposed a specific experiment in 2006 (using entangled photons) but the results were ambiguous. Aharonov's two-state formalism makes predictions through "weak measurements," which have been tested in tabletop experiments with photons but not in high-energy physics.

Nobody had taken these predictions and applied them to existing large-scale datasets. Not one person.

The theoretical framework has existed since 1964. Testable predictions have existed since the 1980s. Public datasets containing the relevant signatures have existed since 2014. As of 2025, nobody had connected them. The literature review revealed not a gap in knowledge, but a gap in action.

Step 3: Cross the Boundaries

This is where the conditioned AI earned its keep. I asked: "What fields outside quantum foundations deal with time-asymmetry, and what have they found?"

The AI pulled in:

  • Thermodynamics: The arrow of time is explained by entropy increase, but the underlying laws are time-symmetric. This is the "Past Hypothesis" problem. Physicists have been arguing about it for 150 years.
  • Cosmology: The Planck satellite data shows a hemispherical power asymmetry in the cosmic microwave background. Cosmologists call it an "anomaly." Nobody in quantum foundations had noticed that a retrocausal model predicts exactly this kind of asymmetry.
  • Information theory: Retrocausality can be formulated as a constraint on information flow. The mathematics of backward information propagation already exists in control theory and signal processing.
  • Gravitational wave physics: LIGO's matched filtering templates assume strictly causal (forward-in-time) wave propagation. A retrocausal correction term would modify the templates. Nobody had checked.
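That last check is simple to prototype. The sketch below uses a synthetic waveform and invented signal shapes, not real LIGO strain: fit a "standard" template by least squares, subtract it, and ask whether the residual carries more power than the noise floor. Excess residual power is the generic signature of an incomplete template family. A real analysis would use GWOSC data with established pipelines; this only illustrates the logic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: a "standard" damped sinusoid template, plus a small
# hypothetical correction term the template family doesn't include,
# plus Gaussian noise. All shapes and amplitudes are invented.
t = np.linspace(0, 1, 2048)
template = np.sin(2 * np.pi * 50 * t) * np.exp(-4 * t)
correction = 0.2 * np.sin(2 * np.pi * 80 * t) * np.exp(-4 * t)
noise = 0.05 * rng.standard_normal(t.size)
data = template + correction + noise

# Best-fit template amplitude via least squares, then the residual.
a = np.dot(data, template) / np.dot(template, template)
residual = data - a * template

# Residual power well above the noise power suggests the template
# family is missing a term -- the check the text says nobody ran.
excess = residual.var() / noise.var()
print(f"residual power / noise power: {excess:.2f}")
```

With the injected correction present, the ratio comes out well above 1; remove the `correction` term and it drops back to roughly 1.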
Map or atlas showing interconnected routes and pathways
The cross-domain survey map. Each circle is a field. Each line is a connection that the traditional literature review missed. The retrocausality question doesn't live in quantum foundations. It lives at the intersection of five fields that rarely talk to each other.

The Gap Analysis

Now we get to the part that changes everything. You've surveyed the conventional literature. You've crossed domain boundaries. You've built a map. Now you stare at that map and ask: where are the holes?

In the retrocausality project, the gap was almost comically obvious once you saw it:

| What Exists | What's Missing |
|-------------|----------------|
| Theoretical framework (since 1964) | Empirical testing on large datasets |
| Testable predictions (since 1980s) | Predictions mapped to specific dataset signatures |
| Public CMS data (since 2014) | Anyone analyzing CMS data for retrocausal signatures |
| Planck CMB anomaly (since 2013) | Anyone connecting the anomaly to retrocausal predictions |
| LIGO strain data (since 2016) | Templates that include retrocausal correction terms |
| IceCube neutrino data (since 2013) | Directional analysis for time-reversed asymmetries |

The gap wasn't theoretical. Theory was fine. The gap was that nobody had pointed the data at the theory. The telescope existed. The star map existed. Nobody looked through the telescope at the stars on the map.

The Ain Soph Aur Principle

Ain Soph Aur means "limitless light." This is the moment of first illumination. Not the full picture yet (that comes in later parts), but the first flash where you see the shape of the landscape. The survey produces this flash. You see what exists, what doesn't, and the enormous gap between theory and testing. That gap is your contribution. That gap is where the light enters.

100+ gigabytes of public data across four datasets (CMS, Planck, LIGO, IceCube) contain potential retrocausal signatures. Free to download. Free to analyze. The computational power to process them costs nothing on Google Colab. The total barrier to testing a sixty-year-old theory: zero dollars and one person willing to look.

The Four Public Datasets

  1. CMS Open Data (CERN): 443,761+ collision events with full detector readouts. Available at opendata.cern.ch. Prediction: decay channel asymmetries inconsistent with standard model rates.

  2. Planck CMB Maps (ESA/NASA): Full-sky microwave background temperature and polarization maps. Available at pla.esac.esa.int. Prediction: hemispherical power asymmetry maps to retrocausal boundary conditions.

  3. LIGO Strain Data (Caltech/MIT): Raw gravitational wave strain from multiple detector events. Available at gwosc.org. Prediction: residuals after standard template subtraction contain retrocausal correction signatures.

  4. IceCube Neutrino Data (U. Wisconsin): Directional and energy data for detected neutrino events. Available at icecube.wisc.edu/data. Prediction: directional asymmetries in neutrino arrival patterns.
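Prediction 1 reduces to counting. Here is a minimal sketch, with invented channel counts (the numbers below are hypothetical, not real CMS results), of how a decay-channel asymmetry and its naive statistical significance would be computed; a real analysis would tally the channels from the Open Data files and handle systematics far more carefully.

```python
import math

# Hypothetical counts for two conjugate decay channels.
# These numbers are invented for illustration only.
n_forward, n_backward = 224_103, 219_658

total = n_forward + n_backward
asymmetry = (n_forward - n_backward) / total

# Under a symmetric null hypothesis each event is a fair coin flip,
# so the asymmetry's standard error is 1 / sqrt(total).
sigma = 1 / math.sqrt(total)
significance = asymmetry / sigma

print(f"asymmetry = {asymmetry:.5f}, significance = {significance:.1f} sigma")
```

The point is not the arithmetic, which is trivial; it's that the arithmetic has been trivial since 2014, on data anyone can download.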

"Physics has been in a deadlock. Not because the theory is wrong. Not because the data doesn't exist. Because the theory is in one building and the data is in another, and nobody walked between them. The survey is the walk."

Building Your Survey in Obsidian

Now let's turn this methodology into a repeatable system. In your Obsidian vault, the 02-Survey/ folder is where all of this lives. Here's how to structure it.

One Note Per Source

Every paper, every dataset, every cross-domain connection gets its own note. Use the Survey Entry template from Part 1:

```markdown
# Price 2012: Does Time-Symmetry Imply Retrocausality?

**Source:** arXiv:1002.0906v3
**Field:** quantum foundations, philosophy of physics
**Date Read:** 2026-04-04
**Tags:** #survey #retrocausality #time-symmetry #theoretical

## Summary
Argues that if quantum mechanics is time-symmetric (which the
math allows), then retrocausality follows logically. No new
physics needed. Just removing the assumption that causes
must precede effects.

## Key Claims
- Time-asymmetry in QM is a convention, not a derivation
- Bell's theorem allows retrocausal hidden variables
- Standard objections (grandfather paradox) don't apply to
  quantum retrocausality

## Gaps / What's Missing
- No specific empirical predictions
- No proposed dataset or experimental protocol
- Philosophical argument, not testable framework

## Connection to Our Question
Provides theoretical justification. We need to supply the
empirical testing framework that Price explicitly says is
needed but doesn't provide.
```

The Survey Summary Note

After you've entered all your sources, create a summary note at 02-Survey/SURVEY-SUMMARY.md. This is the bird's eye view:

```markdown
# Survey Summary: Retrocausality Empirical Testing

## Theoretical Landscape
- 4 major theoretical frameworks (Price, Cramer, ABL, Costa de B)
- Strong theoretical support since 1964
- Gap: almost no empirical testing at scale

## Cross-Domain Connections
- Thermodynamics: time-symmetry, Past Hypothesis
- Cosmology: CMB hemispherical anomaly (Planck 2013)
- Signal Processing: applicable to CMS and LIGO data
- Information Theory: retrocausal information flow formalism

## Public Data Available
| Dataset | Size | Signatures | Status |
|---------|------|------------|--------|
| CMS | 443K events | Decay asymmetries | Untested |
| Planck | Full-sky CMB | Hemispherical asymmetry | Known but unexplained |
| LIGO | Multiple events | Template residuals | Untested |
| IceCube | Multi-year | Directional asymmetries | Untested |

## The Gap
Theory exists. Data exists. Nobody connected them.
```

Use Tags and Links Aggressively

Tag every survey note with its field (#quantum-foundations, #cosmology, #signal-processing), its type (#theoretical, #experimental, #dataset), and its relevance (#supports-hypothesis, #contradicts, #neutral). Link notes to each other with [[wikilinks]] wherever they connect.

After two weeks of surveying, open Obsidian's graph view. You'll see clusters (papers that cite each other) connected by thin bridges (your cross-domain connections). Those bridges are your most valuable notes. They represent connections that the original authors didn't make. They are, in a real sense, your first original contribution.
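Because an Obsidian vault is just markdown files, you can also find the bridge notes programmatically. A minimal sketch, using an in-memory stand-in for the vault (in practice you'd glob `02-Survey/*.md` and regex out the `#tags`; all note names and tags below are hypothetical): a bridge note is any note tagged with more than one field.

```python
# Hypothetical vault contents: note name -> tags found in the note.
notes = {
    "price2012": {"quantum-foundations", "theoretical"},
    "cramer1986": {"quantum-foundations", "theoretical"},
    "planck-anomaly": {"cosmology", "dataset"},
    "cms-open-data": {"particle-physics", "dataset"},
    "fourier-methods": {"signal-processing", "particle-physics"},
    "retro-info-flow": {"information-theory", "quantum-foundations"},
}

# Field tags (as opposed to type or relevance tags).
FIELDS = {"quantum-foundations", "cosmology", "particle-physics",
          "signal-processing", "information-theory"}

# A "bridge" note carries tags from more than one field -- the thin
# links between clusters that the graph view surfaces visually.
bridges = sorted(n for n, tags in notes.items() if len(tags & FIELDS) > 1)
print("bridge notes:", bridges)
```

Run this weekly during a survey: if the bridge list isn't growing, you're deepening a cluster instead of crossing boundaries.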

Network visualization with interconnected nodes and glowing connections
Your Obsidian graph after a thorough survey. The dense clusters are fields that cite themselves. The bridges between them are connections you discovered. Those bridges don't exist in any bibliography. They exist in your vault. That's original work.
Your survey, properly executed and documented in Obsidian, is already a contribution. Most researchers never map the full landscape. Most never cross domain boundaries. If you do both, you've produced something that doesn't exist anywhere else: a complete map with the gaps visible.

Your Survey as a Research Artifact

Here's something most researchers don't realize: your survey is publishable. Not as a blog post. As an actual research contribution.

A well-structured survey that maps multiple fields, identifies cross-domain connections, and highlights untested predictions is exactly what review journals publish. "Interdisciplinary Review of Retrocausal Predictions and Available Public Datasets" is a paper that could stand on its own. Even if you never run a single analysis, a survey that maps theory to untested public data is valuable because it tells other researchers where to look.

But you're not going to stop at the survey. You're going to do the analysis too. This is just the map.

Commit your survey to GitHub:

```bash
# In your research repo
git add docs/survey-summary.md
git add docs/survey-notes/
git commit -m "complete survey: 4 theoretical frameworks, 4 public datasets, gap analysis"
git push
```

Your survey is now versioned, shareable, and timestamped. If someone publishes a similar survey next year, your Git history proves you were here first. Version control isn't just about code. It's about intellectual property.

Person standing at elevated viewpoint looking out over expansive landscape
The view from Ain Soph Aur. You can see the whole landscape now. The theories, the data, the gaps. Parts 4 through 13 will take you down into that landscape to do the actual work. But you'll never get lost. Because you have the map.

AI Exercise: Building Your Survey

Run this exact sequence with your conditioned AI:

  1. "Survey all approaches to testing [your topic] empirically." Let the AI produce a comprehensive list. Don't interrupt. Let it go wide.

  2. "Which of these approaches used publicly available data?" This separates theoretical proposals from actual tests. In most fields, the answer will be "very few" or "none."

  3. "What public datasets could test predictions from [your topic] but haven't been used for that purpose?" This is the gap question. This is where AI earns its conditioning. A properly conditioned AI will identify datasets from adjacent fields that contain relevant signatures.

  4. Push back: "Those datasets were collected for different purposes. How do you know the signatures would be detectable?" If the AI defends its position with specific statistical arguments, your conditioning is working. If it retreats, strengthen Principle 5 in your system prompt.

  5. Snapshot the results. Create survey notes for each source the AI identifies. Create a summary note. Build the links. Push to GitHub.

For the retrocausality project, this sequence converged on CMS, Planck, LIGO, and IceCube within three conversations. Your project will converge on whatever datasets exist for your question. If no datasets exist, that's a gap too, and Part 6 will teach you how to acquire or create data.

Lee Foropoulos
Business Development Lead at Lookatmedia, fractional executive, and founder of gotHABITS.