The Researcher's Path: A 13-Part Series
Part 1: Environment Setup → Part 2: AI Conditioning → Part 3: Literature Survey → Part 4: Root Question → Part 5: Classification → Part 6: Structure → Part 7: Expansion → Part 8: Critical Analysis → Part 9: Integration → Part 10: Force Mapping → Part 11: Formalization → Part 12: Pattern Recognition → Part 13: Publication
You're about to start a research project. Maybe it's quantum physics. Maybe it's materials science. Maybe it's understanding why your sourdough starter keeps dying. Doesn't matter. Every research project starts the same way: you ask a question, you start gathering information, and within seventy-two hours your digital life looks like a crime scene.
Forty browser tabs. Notes in three different apps. An AI conversation with a breakthrough insight that you can't find because you had it in a different chat window on a different device three days ago. Screenshots of equations you took at 2 AM that are now buried in your camera roll between a picture of your lunch and a meme your friend sent about cats.
This is how most people "do research." They gather information the way a tornado gathers houses. Efficiently, comprehensively, and with absolutely no regard for retrievability.
The difference between someone who does research and someone who accumulates information is the system. Not the tools. Not the intelligence. Not the credentials. The system.
This is Part 1 of The Researcher's Path. We're going to build your system before you touch a single research question. Because the lab has to exist before the experiment. The void has to exist before creation. In the framework of the Tree of Life that structures this entire series, this is Ain: nothingness. The empty space that must be prepared before anything can fill it.
By the end of this article, you'll have:
- An Obsidian vault architected for research projects
- A Jupyter workspace with the right libraries installed
- A GitHub repository structured for reproducible research
- A workflow loop that connects all three, with AI as your fourth tool
Total cost: zero dollars.
Why Your Setup Determines Your Output
Here's something nobody tells you in grad school: the quality of your research environment determines the quality of your research output. Not because fancy tools make you smarter, but because a well-organized system means you never lose an insight, never duplicate work, never waste an hour searching for something you already found.
I learned this the hard way. When I started my quantum computing research, I was using Google Docs for notes, browser bookmarks for references, a Notes app for quick thoughts, and the search history of three different AI platforms for "that one conversation where Grok explained decoherence coefficients." It was a disaster. Not because the research was bad. Because the research was drowning in its own disorganization.
The day I set up a proper environment (Obsidian vault, Jupyter workspace, GitHub repo, connected workflow) my research productivity roughly tripled. Not because I got smarter overnight. Because I stopped losing things.
The Ain Principle
In the Tree of Life, Ain is the void before creation. Nothing exists yet. That's the point. Before you can fill a space with knowledge, the space must exist and be properly shaped. A messy desk produces messy thinking. A structured vault produces structured discovery. Step one is always preparation.
Obsidian Vault Architecture for Research
If you've read the Obsidian article, you know the basics: free, local-first, Markdown-based, knowledge graph. Now we're going to set it up specifically for a research project.
Create a new vault. Call it whatever your project is about. Mine is called "Quantum Research." If you don't have a specific project yet, call it "Research" and you'll rename it when inspiration hits.
Inside your vault, create this folder structure:
```
Research/
├── 00-Inbox/          # Raw captures, unsorted AI conversations
├── 01-Questions/      # Research questions at every stage
├── 02-Survey/         # Literature and landscape mapping
├── 03-Frameworks/     # Conceptual models and structures
├── 04-Analysis/       # Active analysis notes
├── 05-Formalization/  # LaTeX equations, formal write-ups
├── 06-Publication/    # Paper drafts, figures, submission notes
├── Templates/         # Reusable note templates
├── Snapshots/         # AI conversation captures
└── Daily/             # Daily research log
```

The numbers aren't arbitrary. They map to the research workflow: raw input → questions → survey → frameworks → analysis → formalization → publication. Obsidian will sort them in order automatically because they start with numbers. You'll always know where you are in the process.
Now create four template files in the Templates/ folder:
Question Template (Templates/Question.md):
```markdown
# {{title}}
**Date:** {{date}}
**Status:** open | investigating | answered
**Root Question:**

## Assumptions
-

## What Success Looks Like

## Related Notes
-
```

AI Snapshot Template (Templates/AI-Snapshot.md):
```markdown
# AI Snapshot: {{title}}
**Date:** {{date}}
**Model:** Grok | Claude | ChatGPT | Gemini
**Prompt Used:**
>

## Key Insight

## Implications

## Links
-
```

Survey Entry Template (Templates/Survey-Entry.md):
```markdown
# {{title}}
**Source:**
**Field:**
**Date Read:** {{date}}
**Tags:** #survey

## Summary

## Key Claims

## Gaps / What's Missing

## Connection to Our Question
```

Daily Research Log (Templates/Daily-Log.md):
```markdown
# Research Log: {{date}}

## What I Explored Today

## What Surprised Me

## What Contradicts What I Thought

## Next Steps
```

Why Templates Matter
Templates aren't about being neat. They're about being consistent. When you have 500 notes in six months, consistency is what makes them searchable, linkable, and queryable with Dataview. Every AI snapshot follows the same structure. Every survey entry captures the same fields. Your future self is the primary user of your current notes. Treat them well.
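Consistency pays off even before you install a query plugin like Dataview. Because every question note records the same `**Status:**` field from the template, a few lines of Python can list everything still open. This is a sketch, not part of the official workflow: the vault path is hypothetical, and the field format is the one from the templates above.

```python
from pathlib import Path

def open_questions(vault: str) -> list[str]:
    """Return titles of notes in 01-Questions/ whose Status line contains 'open'."""
    results = []
    for note in Path(vault, "01-Questions").glob("*.md"):
        for line in note.read_text(encoding="utf-8").splitlines():
            # Template line looks like: **Status:** open | investigating | answered
            if line.startswith("**Status:**") and "open" in line:
                results.append(note.stem)
                break
    return results

# Example: print every open question in a vault named "Research"
for title in open_questions("Research"):
    print(title)
```

Dataview can do the same thing inside Obsidian; the point is that consistent fields make your notes machine-readable either way.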
Enable Daily Notes in Obsidian (Settings > Core Plugins > Daily Notes). Set the template to your Daily Research Log. Every day you work on your project, Obsidian creates a dated note from the template. This becomes your research diary: what you explored, what surprised you, what contradicts what you thought. Over weeks and months, this diary becomes one of the most valuable documents in your vault.
Jupyter Workspace Configuration
If you've read the Jupyter article, you know the basics. Now let's configure it for research.
Option A: Local JupyterLab (recommended for serious work)
```bash
pip install jupyterlab numpy scipy matplotlib pandas astropy
```

That one command installs everything you need for most scientific research. If you're doing quantum work like me, add:
```bash
pip install qutip qiskit
```

If you're doing machine learning:
```bash
pip install scikit-learn tensorflow
```

Launch with `jupyter lab` and you have a full IDE in your browser.
Option B: Google Colab (recommended for zero-install start)
Go to colab.research.google.com. Sign in. Click "New Notebook." You now have a free Jupyter environment with GPU access. NumPy, SciPy, Matplotlib, and pandas are pre-installed. This is the fastest path from "I want to compute something" to "I'm computing it."
Your first research notebook: Create a notebook called 00_environment_test.ipynb. Add these cells:
```python
# Cell 1: Verify imports
import numpy as np
import scipy
import matplotlib.pyplot as plt
print(f"NumPy: {np.__version__}")
print(f"SciPy: {scipy.__version__}")
print("Environment ready.")
```

```python
# Cell 2: Your first research plot
x = np.linspace(0, 4*np.pi, 200)
plt.figure(figsize=(10, 4))
plt.plot(x, np.sin(x), label='sin(x)')
plt.plot(x, np.cos(x), label='cos(x)')
plt.title("If you can see this, your lab works")
plt.legend()
plt.grid(True, alpha=0.3)
plt.show()
```

Run both cells. If you see the plot, your computational lab is operational. That took about ninety seconds.
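One habit worth adopting from day one: save figures to disk as well as displaying them, so they can live in your repository's figures/ folder under version control. A minimal sketch, assuming the standard scientific stack is installed; the filename is just an example.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; all we need here is the saved file
import matplotlib.pyplot as plt
import numpy as np
from pathlib import Path

# Make sure the output folder exists, then save a versionable artifact
Path("figures").mkdir(exist_ok=True)
x = np.linspace(0, 4 * np.pi, 200)
plt.figure(figsize=(10, 4))
plt.plot(x, np.sin(x), label="sin(x)")
plt.legend()
plt.savefig("figures/environment_test.png", dpi=150)
```

In a notebook you can keep `plt.show()` and simply add the `savefig` call before it; the PNG then gets committed along with the notebook.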
GitHub Repository Structure
If you've read the GitHub article, you know what repositories, commits, and branches are. Now let's create one for your research.
Go to github.com, click "New Repository." Name it after your project (mine: retrocausality-study). Add a README. Make it public (science should be open) or private (your call).
Clone it to your machine, then create this folder structure:
```
retrocausality-study/
├── README.md      # Project summary and how to reproduce
├── notebooks/     # Jupyter notebooks (analysis)
├── docs/          # Papers, write-ups, LaTeX files
├── data/          # Processed data (or links to raw data)
├── prompts/       # AI system prompts and conditioning
├── figures/       # Generated plots and diagrams
└── .gitignore     # Ignore .ipynb_checkpoints, __pycache__, etc.
```

Your .gitignore should include:
```
.ipynb_checkpoints/
__pycache__/
*.pyc
.DS_Store
```

Write a README that explains what this project is about. One paragraph is fine for now. You'll expand it as the project develops.
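If you'd rather script the scaffold than create folders by hand, here's a minimal sketch. The folder names come from the layout above; the `.gitkeep` files are a common convention (not a Git feature) for making Git track otherwise-empty directories.

```python
from pathlib import Path

# Folder layout from the repository structure above
FOLDERS = ["notebooks", "docs", "data", "prompts", "figures"]
GITIGNORE = ".ipynb_checkpoints/\n__pycache__/\n*.pyc\n.DS_Store\n"

def scaffold(root: str = ".") -> None:
    """Create the research repo skeleton, including a .gitignore."""
    base = Path(root)
    for name in FOLDERS:
        folder = base / name
        folder.mkdir(parents=True, exist_ok=True)
        # Empty marker file so Git tracks the folder before it has content
        (folder / ".gitkeep").touch()
    (base / ".gitignore").write_text(GITIGNORE, encoding="utf-8")

scaffold()  # run inside the cloned repository
```

Running it twice is harmless: `exist_ok=True` makes the script idempotent.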
Make your first commit:
```bash
git add .
git commit -m "initial project structure: notebooks, docs, data, prompts"
git push
```

Congratulations. Your research project now has version control. Every change you make from this point forward is tracked, reversible, and shareable.
Connecting the Pieces: The Research Loop
Here's how these three tools work together. This is the loop you'll use for every research session from now on:
The Research Loop
Think in Obsidian. Open your daily research log. Record what you're exploring today. Review linked notes from previous sessions. Let the graph view remind you of connections you forgot.
Argue with AI. Open Grok, Claude, or ChatGPT. Ask your question. Push back on the answer. Ask "why?" three more times than feels comfortable. Take snapshots of key insights into your Obsidian vault.
Test in Jupyter. Take the AI's claims and test them computationally. Does the equation actually produce the predicted output? Does the data actually show the pattern? Code doesn't lie. Run it and find out.
Record in Obsidian. What did the computation show? Did it confirm or contradict the AI's claims? What new questions emerged? Link to the notebook. Link to the AI snapshot. Update your framework notes.
Commit to GitHub. Push your notebooks, your notes export, your documentation. Every meaningful step gets a commit message that explains WHY, not just what.
Repeat. Tomorrow, start at step 1. Your daily log picks up where yesterday left off. The graph grows. The notebooks accumulate. The repository tracks everything.
This loop is the backbone of the entire series. Parts 2 through 13 will teach you what to DO inside this loop. Part 1 is about making sure the loop exists and works.
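The loop needs very little glue. As a taste, here's a sketch that stamps out today's log from the Daily-Log template, which is essentially what Obsidian's Daily Notes plugin does for you. The paths and the `{{date}}` placeholder follow the vault structure above; the function itself is illustrative, not part of any tool.

```python
from datetime import date
from pathlib import Path

def create_daily_log(vault: str) -> Path:
    """Create Daily/YYYY-MM-DD.md from Templates/Daily-Log.md if it doesn't exist yet."""
    today = date.today().isoformat()
    template = Path(vault, "Templates", "Daily-Log.md").read_text(encoding="utf-8")
    log = Path(vault, "Daily", f"{today}.md")
    log.parent.mkdir(parents=True, exist_ok=True)
    if not log.exists():
        # Fill the template's date placeholder, then write the dated note
        log.write_text(template.replace("{{date}}", today), encoding="utf-8")
    return log
```

Let Obsidian handle this in practice; the point is that every step of the loop is plain files, so every step can be automated when you need it.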
Your First Research Commit
Let's tie it all together. Right now, before you close this article:
In Obsidian: Create a note in `01-Questions/` with your first research question. Use the question template. If you're following along with the retrocausality example: "Can retrocausality be tested using existing public datasets?" If you have your own question, use that.

In Jupyter: Open a new notebook. Write a Markdown cell at the top explaining what this project is about. Write a code cell that imports your key libraries. Run it. Verify everything works.
In your AI of choice: Open Grok (or Claude, or ChatGPT). Ask: "I'm setting up a research environment to investigate [your topic]. What public datasets exist? What file formats should I expect? What Python libraries will I need?" Take a snapshot of the response into your Obsidian vault.
In GitHub: Copy your environment test notebook into the `notebooks/` folder. Copy your AI system prompt (we'll build the real one in Part 2) into `prompts/`. Commit and push.
```bash
git add .
git commit -m "first research session: environment verified, initial question recorded"
git push
```

You now have a research environment that is:
- Organized (Obsidian vault with templates and structure)
- Computational (Jupyter with scientific libraries)
- Versioned (GitHub tracking every change)
- Connected (a workflow loop that ties all three together)
"The void must exist before creation. The lab must exist before the experiment. Everything in this series builds on what you just set up. If your environment is solid, everything downstream flows. If it's chaotic, everything downstream suffers. You just built the foundation."