CAUTION: before you continue, I'm not responsible for any data loss these steps might cause you. Everyone should have a backup of their data.

Here is how you restore a badly corrupted btrfs filesystem using btrfs restore.

PRE STEPS (realizing the filesystem is really corrupt by trying simple mounts and restores):

* assume your volume name is /dev/md127, that you have an available /root folder to dump temp data to, and that you will be dumping your restore to /USB
* you can change any of those variables if that's not the case

Here we assume the filesystem is so corrupt that it's not mounting regularly (e.g. a plain "mount /dev/md127 /mnt" fails):

And it's not mounting with the recovery and read-only options either (e.g. "mount -o ro,recovery /dev/md127 /mnt"):

dmesg can give more information, and a failed mount like this usually means we should try btrfs restore.

NOTE: "open_ctree failed" can mean many things… you could simply be missing a needed btrfs device. Make sure all of the btrfs devices that are part of the filesystem are present in /dev, try running "btrfs device scan", and repeat the mount commands.

Now try a regular btrfs restore. First I run it dry (-D) to see if it can recover anything; dry meaning it's a test run, it won't actually do any writes:

btrfs restore -D -v -i /dev/md127 /dev/null

If it had worked (it would have listed the files it would restore), I would have run this to continue with the restore:

btrfs restore -v -i /dev/md127 /USB

But since that didn't show anything would be restored, we need to try "btrfs restore" from other tree locations.

STEPS (restoring corrupt filesystem):

The process is similar to this article about undeleting files in btrfs.

Here we will try to restore from another tree location (well block #); we find these by running btrfs-find-root.

Note: if you don't have any well block numbers / tree locations, it's probably because you don't have COW enabled and you also didn't take any snapshots (not having snapshots isn't all that bad, but not having COW enabled is bad, as you probably won't have any well block #s / tree locations). Also, I believe running btrfs balances and btrfs defrags might clear out other tree locations (well block numbers), making this type of recovery impossible.

What I'm doing here is basically:

First, run this to find all of the tree locations, which are the numbers after "Well block":

btrfs-find-root /dev/md127

example output:
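Assuming the usual btrfs-progs line format ("Well block NNN(gen: G level: L) seems to be a valid root"), the output looks something like the sample below (made up for illustration), and you can pull out just the block numbers with grep/awk:

```shell
# sketch: extract well block numbers from btrfs-find-root output
# (sample lines are made up; normally you would pipe the real output:
#   btrfs-find-root /dev/md127 | grep -o 'Well block [0-9]*' | awk '{print $3}')
cat > find-root.txt <<'EOF'
Well block 1066916682137(gen: 4321 level: 1) seems to be a valid root
Well block 1066916600000(gen: 4318 level: 1) seems to be a valid root
EOF
# keep only the number right after "Well block" on each line
grep -o 'Well block [0-9]*' find-root.txt | awk '{print $3}'
```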

 

Then run a dry btrfs restore for each candidate to see how many files and folders would be restored:

btrfs restore -D -t <well block> -v -F -i /dev/md127 /dev/null

Then I pick the biggest output / most recent tree location.

– the most recent tree location has a bigger tree block number (I assume this, as these look like transaction IDs, and all transaction IDs are incremented in filesystems)

– the biggest output of "btrfs restore -D" means more files and folders would be restored.
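One way to do this comparison, assuming you save each dry run's line count next to its well block number (the loop and file names here are a sketch, and the counts below are sample data for illustration):

```shell
# sketch: count the restore entries each candidate would produce
# (commented out because it needs the real device):
#   while read -r b; do
#     echo "$b $(btrfs restore -D -t "$b" -v -F -i /dev/md127 /dev/null 2>&1 | wc -l)"
#   done < wellblocks.txt > counts.txt
# sample counts.txt for illustration:
cat > counts.txt <<'EOF'
1066916682137 3517
1066916600000 120
EOF
# sort by entry count, biggest first; the best candidate ends up on top
sort -k2,2 -rn counts.txt | head -n1 | awk '{print $1}'
```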

Once I find the well block number I like, the one that gives me the most recent output of a lot of data, I then plug in a USB drive and mount it to /USB (or wherever), then I run:

btrfs restore -t <well block> -v -F -i /dev/md127 /USB

-o is optional, and -F is optional (but if you don't use it you will need to type "yes" or "y" a lot, telling the process to continue).

Now you wait for the restore to finish.

Example of picking a tree location / well block number: I ran the above steps and looked for the biggest and most recent output.

At the bottom of 000-btrfs-find-root.1 is the number 1066916682137.

So here I would choose 1066916682137, as it's the most recent well block number and it also gives the biggest output file (it has the most output when running btrfs restore in dry mode, meaning it has the most restore entries; sure enough it has 3517 restore lines).

I would restore it like so:

btrfs restore -t 1066916682137 -v -F -i /dev/md127 /USB

SIDENOTE: how do you find out how much data might need to be dumped?

Use btrfs-show-super

Look (grep) for bytes:

btrfs-show-super /dev/md127 | grep bytes

So we know there is a max of 9,327,327,404,032 bytes of data (8.5 TiB). So I will need to mount a location to /USB that has that much space (two 6 TB USBs LVMed or BTRFSed together can provide 12 TB, which will fit 8.5 TiB). Note that in reality there might be less data, as "bytes_used" includes metadata and snapshots, not just data. When we are running btrfs restore we are not restoring any of the snapshots (you could restore the snapshots if you wanted to). Also, btrfs restore gets the full filesystem (aside from the snapshots), so if you only want a particular subvolume (or folder) you can use the --path-regex option of "btrfs restore" to ask it to only dump data from that subfolder.
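The bytes-to-TiB conversion above can be double-checked with a quick one-liner (the 9,327,327,404,032 figure is the bytes_used value from the superblock; 1 TiB = 1024^4 bytes):

```shell
# convert the superblock's bytes_used figure to TiB
awk 'BEGIN { printf "%.1f TiB\n", 9327327404032 / (1024^4) }'
```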

The end.
