Prompt Your Way to Linux: A 4-Part Series
Part 1: Picking Your Distro → Part 2: Storage and Encryption → Part 3: Manual Install → Part 4: Services and GPU
If you've ever had a root partition fill up at 3 AM and watched your system grind to a halt, you already know: default partitioning is a trap. But designing a proper encrypted LVM layout for six different workloads across three physical disks? That's a full day of solo planning, cross-referencing docs, calculating sizes, and second-guessing yourself the entire time.
Here's how to let AI do the heavy lifting on storage design so you're not buried in man pages for a full day. You'll feed it your workload requirements, get back a complete architecture with reasoning you can actually follow, and walk away understanding every layer of what you're building.
In Part 1, you picked your distro and mapped the hardware. Now comes the part where most people either accept bad defaults or give up entirely: storage design.
Start With Workloads, Not Partition Sizes
You've got your hardware mapped from Part 1: say an older ThinkPad with an NVMe drive, an SSD, and an HDD. The specs tell you what's possible. But workloads tell you what's necessary.
Here's the move. Don't ask AI for a partition table. Don't specify sizes. Don't say "put /home on its own partition." Instead, describe behaviors and requirements and let it derive the architecture. Something like this:
"This machine will run Docker containers, PostgreSQL with TimescaleDB, Redis, Ollama for local AI models, and store attack artifacts from a honeypot network. It also needs an encrypted analysis workspace separate from the OS, and encrypted archive storage for long-term evidence. Design the best partition and encryption strategy for the three disks we found."
See the difference? You're handing over constraints and objectives, not micromanaging the solution. If the reasoning comes back sound, the design will be too. And if a choice doesn't make sense, you'll spot it because you understand what your machine actually needs to do.
Sorting Workloads by Disk Behavior
The first thing to sort out is workload separation by IO pattern. Not by size, not by importance, but by how each workload actually touches the disk.
The Three Lanes
- High IOPS, low latency = NVMe. This is where the OS, Docker, databases, and AI models live. These workloads do lots of small random reads and writes. They need the fastest storage available.
- Fast scratch space, isolated = SSD. The analysis workspace goes here. It needs decent speed for processing captures and running tools, but more importantly, it needs to be separate from the OS disk. If analysis work corrupts a filesystem or fills a disk, the OS keeps running.
- Bulk capacity, sequential writes = HDD. Long-term archive storage for pcaps, evidence exports, and backups. Sequential write performance is fine. Capacity matters more than speed.
This separation alone prevents the most common failure mode: one workload starving another for disk IO or space.
Why Every Service Gets Its Own Logical Volume
Don't stop at three disks. On the NVMe, break the space into six separate logical volumes, each for a specific reason:
- /var/lib/docker gets isolation because Docker eats disk unpredictably. A runaway container build or forgotten image cache shouldn't be able to fill root.
- /var/lib/postgresql gets isolation for independent sizing, backup snapshots, and future performance tuning. Databases have unique IO patterns that benefit from dedicated space.
- /var/lib/ollama gets isolation because AI models are enormous and grow independently. A single LLM can be 8 GB or more. You don't want model downloads competing with your OS for space.
- /home gets isolation so user data, dotfiles, and personal configs survive OS rebuilds.
- swap as its own logical volume, sized to support hibernation if needed.
- root (/) bounded at a fixed size so nothing outside the designated paths can fill it. If root fills up, the system stops functioning. Bounding it is defensive design.
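The six-way split above maps directly onto a handful of lvcreate calls. This is a sketch, not a finished script: it assumes the LUKS container is already open and registered as a physical volume in a volume group named vgkali (the name used in the final design), and the sizes mirror the layout table later in this part.

```shell
# Sketch only -- assumes an open LUKS container already set up as a PV
# inside a volume group named vgkali. Sizes match the final design.
lvcreate -L 120G -n root     vgkali
lvcreate -L 24G  -n swap     vgkali
lvcreate -L 220G -n docker   vgkali
lvcreate -L 180G -n ollama   vgkali
lvcreate -L 120G -n postgres vgkali
lvcreate -L 200G -n home     vgkali
# Whatever remains in the VG is deliberately left unallocated as headroom.
```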
Encryption Isn't Optional
Encryption isn't optional here, and the reasoning is straightforward. This is a laptop form factor running as a security workstation that stores sensitive data at rest: honeypot captures, attack artifacts, network analysis results. If the machine gets stolen, lost, or decommissioned, every disk needs to be unreadable without the passphrase.
Full-disk encryption with LUKS is baseline for this use case. Period.
Leave Headroom in Your Volume Group
This detail separates good LVM design from amateur work. Leave about 5-10% of your volume group unallocated. Not wasted. Reserved. Here's why:
With LVM, you can grow logical volumes on the fly without rebooting. If Docker needs more space in six months, you extend the LV and resize the filesystem in seconds. If you allocated everything upfront, you'd need to shrink one volume to grow another, and shrinking is slow, risky, and sometimes impossible with certain filesystems.
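That on-the-fly growth is a one-liner. A sketch, assuming the volume group and LV names from this design; the -r flag tells lvextend to resize the filesystem along with the logical volume:

```shell
# Grow the docker LV by 50 GB and resize its ext4 filesystem in one step.
# -r (--resizefs) runs the matching filesystem resize after the extend.
lvextend -r -L +50G /dev/vgkali/docker
# Works while the volume is mounted and in use -- no reboot required.
```

This only works in the grow direction this painlessly, which is exactly why the headroom matters: extending into free VG space is trivial, shrinking a neighbor is not.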
Intentional headroom is a feature, not waste.
The Storage Layer Mental Model
This mental model is the single most valuable thing to take away from this entire part of the series. Not the partition sizes. Not the mount points. This 8-layer stack that explains how Linux storage actually works, top to bottom.
The 8-Layer Storage Stack
Every piece of your storage passes through these layers, top to bottom:
- Raw partition: what fdisk or gdisk creates on the physical disk
- Encryption (LUKS): wraps the raw partition in an encrypted container
- Mapper device: what appears after unlocking: /dev/mapper/cryptnvme
- LVM Physical Volume (PV): the mapper device registered as storage for LVM
- Volume Group (VG): one or more PVs grouped together into a pool
- Logical Volume (LV): a slice of the VG, sized for a specific purpose
- Filesystem: ext4 or swap, formatted on the LV
- Mount point: where the filesystem appears in the directory tree
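Each layer has its own inspection tool, which is how you confirm which layer you're actually looking at. A quick map, using the device names from this design:

```shell
sgdisk -p /dev/nvme0n1              # layer 1: raw partitions on the disk
cryptsetup luksDump /dev/nvme0n1p3  # layer 2: LUKS header and keyslots
ls /dev/mapper/                     # layer 3: unlocked mapper devices
pvs                                 # layer 4: LVM physical volumes
vgs                                 # layer 5: volume groups
lvs                                 # layer 6: logical volumes
blkid /dev/vgkali/root              # layer 7: filesystem type and UUID
findmnt /                           # layer 8: where it's mounted
```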
When something breaks, the fix is almost always at one specific layer. If you can't identify which layer, you'll waste hours troubleshooting the wrong thing.
Most of the failures you'll hit during the actual install, especially with fstab and crypttab in Part 3, come from confusing these layers. Writing a UUID from layer 2 (the LUKS container) where layer 7 (the filesystem) was expected. Referencing a device path from layer 3 in a config that needed layer 1. Every storage error maps back to a layer mismatch.
Bookmark this stack. Tattoo it on your forearm. Once you internalize these eight layers, Linux storage stops being mysterious. It's just eight things, each doing one job, connected in order.
The Final NVMe Layout
Here's the complete design for the primary NVMe drive:
Primary NVMe (nvme0n1)
| Partition | Size | Type | Purpose |
|---|---|---|---|
| nvme0n1p1 | 1 GB | EFI System (FAT32) | Boot firmware |
| nvme0n1p2 | 2 GB | /boot (ext4) | Kernel and initramfs |
| nvme0n1p3 | Remaining | LUKS2 encrypted | Everything else |
Inside the LUKS container on nvme0n1p3:
nvme0n1p3 → cryptnvme (LUKS2) → LVM PV → vgkali (Volume Group)
| Logical Volume | Size | Filesystem | Mount Point |
|---|---|---|---|
| root | 120 GB | ext4 | / |
| swap | 24 GB | swap | [swap] |
| docker | 220 GB | ext4 | /var/lib/docker |
| ollama | 180 GB | ext4 | /var/lib/ollama |
| postgres | 120 GB | ext4 | /var/lib/postgresql |
| home | 200 GB | ext4 | /home |
| (free) | ~65 GB | n/a | n/a |
Secondary Disks
SSD (sdb): sdb1 → cryptanalysis (LUKS2) → ext4 → /srv/analysis
HDD (sda): sda1 → cryptarchive (LUKS2) → ext4 → /srv/archive
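The secondary disks are the simplest possible version of the stack: partition, LUKS, filesystem, mount, with no LVM in between. A sketch for the SSD, assuming sdb1 already exists; the HDD is identical with cryptarchive and /srv/archive swapped in. This wipes the partition, so triple-check the device name first.

```shell
# SSD analysis workspace (destructive -- erases everything on sdb1)
cryptsetup luksFormat --type luks2 /dev/sdb1   # prompts for a passphrase
cryptsetup open /dev/sdb1 cryptanalysis        # unlocks to /dev/mapper/cryptanalysis
mkfs.ext4 -L analysis /dev/mapper/cryptanalysis
mkdir -p /srv/analysis
mount /dev/mapper/cryptanalysis /srv/analysis
```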
Three encrypted disks. Six logical volumes on the primary. Each workload in its own lane. Nothing shares space with anything it shouldn't.
Turning the Design Into a Build Script
Once the design is locked, the next prompt is simple:
"Give me the exact commands to build this entire storage layout from the live Kali environment. Assume all three disks can be wiped."
Ask for a complete shell script: wipe existing signatures, create GPT partition tables, carve partitions with sgdisk, create LUKS2 containers with strong defaults, open them, initialize LVM physical volumes, create the volume group, allocate all six logical volumes, format everything, then repeat the LUKS setup for the SSD and HDD.
Expect roughly 80 lines. Every command should be auditable. If anything in the script doesn't make sense to you, ask AI to explain that specific line before you run it.
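For orientation, here's the shape the NVMe portion of that script should take. This is a condensed sketch under the assumptions of this design (device names, sizes, VG name), not the full 80-line script AI will give you; the trailing lvcreate and mkfs calls follow the same pattern.

```shell
set -euo pipefail
DISK=/dev/nvme0n1

# 1. Wipe old signatures and lay down a fresh GPT
wipefs -a "$DISK"
sgdisk --zap-all "$DISK"
sgdisk -n1:0:+1G -t1:ef00 "$DISK"   # EFI system partition
sgdisk -n2:0:+2G -t2:8300 "$DISK"   # /boot
sgdisk -n3:0:0   -t3:8309 "$DISK"   # LUKS container, rest of disk

# 2. Encrypt and open the big partition
cryptsetup luksFormat --type luks2 "${DISK}p3"
cryptsetup open "${DISK}p3" cryptnvme

# 3. LVM: PV -> VG -> LVs
pvcreate /dev/mapper/cryptnvme
vgcreate vgkali /dev/mapper/cryptnvme
lvcreate -L 120G -n root vgkali
# ...remaining lvcreate calls, then mkfs.ext4 / mkswap on each LV
```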
What Will Go Wrong (And It Won't Be the Design)
Fair warning: the design will be solid. The execution will fight you. Here are the traps to watch for:
Windows line endings. If you write the script on a Windows machine and transfer it to the live environment via USB, every line ends with \r\n instead of \n. Bash chokes on every single command, and the error messages are cryptic. Fix it with sed -i 's/\r$//' script.sh before running anything.
Shebang corruption. Related to the line ending issue. The #!/bin/bash line gets an invisible carriage return, so the kernel can't find the interpreter. The error looks like the script doesn't exist even though it clearly does.
Path confusion. The USB mounts at one path, but you're referencing the script from a different working directory. Tab completion and relative paths in a live environment are unreliable when you're juggling multiple mount points. Use absolute paths.
Wrong execution syntax. Forgetting ./ before the script name, or not setting execute permissions first. Classic mistakes that have nothing to do with storage design and everything to do with muscle memory.
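The first three traps share one three-line fix, which you can demonstrate safely on a throwaway script before touching the real one. The demo.sh file here is purely illustrative:

```shell
# Simulate the Windows line-ending trap on a throwaway script.
printf '#!/bin/bash\r\necho hello\r\n' > demo.sh

sed -i 's/\r$//' demo.sh   # strip the carriage returns in place
chmod +x demo.sh           # set execute permission
./demo.sh                  # run with an explicit ./ path -- prints "hello"
```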
"AI can write the script, but you still need to understand what you're pasting. If a command doesn't make sense to you, stop and ask."
Once the line endings and execution issues are sorted, the storage build completes in under two minutes. Every LUKS container opens. Every LVM structure creates cleanly. Every filesystem formats. The design holds up.
Verify Everything With the Paste-and-Check Pattern
Don't assume success. After the script finishes, paste your terminal output back to AI and ask it to audit the result:
"Here's my
lsblkandlvsoutput. Did everything build correctly?"
Paste the raw terminal output and ask AI to walk through it line by line. It should confirm: cryptnvme active, vgkali present with six logical volumes at the correct sizes, correct filesystems on each, secondary disks encrypted and formatted.
This paste-and-verify pattern is one of the most useful techniques for system administration with AI. You run a command, paste the output, and ask whether reality matches the plan. It catches problems you'd miss because you're too close to the work:
- A logical volume accidentally formatted as the wrong filesystem type
- A volume sized in megabytes when you meant gigabytes
- A missing partition that the script silently skipped
- A LUKS container that didn't actually open
You can do this with almost any system state: lsblk, lvs, pvs, vgs, blkid, fdisk -l, cryptsetup status. Paste it. Ask AI to audit it. It's faster and more thorough than checking everything yourself.
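To make the paste painless, gather all of that state in one shot. A small sketch (the filename is arbitrary); run it as root from the live environment and paste the whole file:

```shell
# Collect the full storage state into one paste-able report.
{
  echo '=== lsblk ===';       lsblk -f
  echo '=== pvs/vgs/lvs ==='; pvs; vgs; lvs
  echo '=== crypt status ==='; cryptsetup status cryptnvme
} > storage-report.txt 2>&1
```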
The Takeaway: Design From Workloads, Not Defaults
The storage layout here wasn't chosen from a template or copied from a forum post. It was derived from actual workload requirements. Docker needs space isolation. Databases need dedicated IO. AI models need room to grow. Security artifacts need encryption at rest. Archives need capacity over speed.
Every decision traces back to a real requirement. That's the difference between a storage layout that survives six months and one that falls apart the first time something unexpected happens.
This approach works for any build, not just security workstations. Tell AI what the machine will do. Describe the services, the data patterns, the growth expectations, the failure scenarios you want to survive. Then let it figure out how to organize the disks. The reasoning it shows you is more valuable than the final partition table, because you'll understand why each choice was made and when it might need to change.
Your disks are partitioned, encrypted, and carved into logical volumes. The architecture is done. Now comes the part that actually breaks people: installing an OS on top of all this without a graphical installer holding your hand.
In Part 3: Manual Install, you'll mount the encrypted volumes, bootstrap Kali from scratch, wire up fstab and crypttab so everything unlocks at boot, and build a working bootloader config. This is where the 8-layer model gets stress-tested for real, and where one wrong UUID can leave you staring at a GRUB rescue prompt. Bring coffee.