Keeping more than one copy of a large file repository (e.g. a movie or music collection) synchronized is not an easy task. For quite a long time I’ve been happily using Unison to do this and I haven’t found a more versatile tool, but there’s one situation where it still sucks (well, not sucks, but it’s quite annoying): synchronizing after you’ve largely reorganized your collection.
To be fair, most other tools would suck much more. If you rename or move many big files (or just rename a directory near the root) and try to propagate these changes to a remote location with, say, the otherwise perfect rsync, all the moved files will be resent over whatever link you’re using. Even if it’s a nice gigabit LAN, having to encrypt and decrypt terabytes will take ages.
Unison does almost all that can be done here: unless you have switched off its `xferbycopying` option, for every new file that has the same hash as an already existing file, it will copy the file locally instead of transferring the data over the link. Any renamed or moved file will be detected as two changes (a deletion in the original location and a creation in the new location), and since deletions are propagated last, no unnecessary transfers over the link will take place.
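For reference, the relevant part of a Unison profile might look like this (the roots are made up, and `xferbycopying` is on by default anyway):

```
# ~/.unison/media.prf -- hypothetical profile
root = /home/me/media
root = ssh://nas//srv/media
# when a new file's hash matches an already existing local file,
# copy it locally instead of transferring it over the link
xferbycopying = true
```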
Considering Unison’s purpose and architecture, I don’t see how it could do any better. Like rsync, it’s optimized for small changes within files, it works on many operating systems,[1] it has to be absolutely sure not to destroy data under any circumstances, it supports per-file backups etc., so a heuristic “move” operation (instead of the current “delete” and “create”) would not really fit in here. Still, if you run Unison after you’ve made changes involving gigabyte- or terabyte-sized directories, you will suffer, because:
- After starting, it will take ages to detect changes, since all the moved or renamed files have to be completely re-read and their hashes recomputed.
- Once it finishes reconciling the changes, it will present you with tons of file deletions and additions, without any indication of identical file content, so unless all the changes come from one end, you will easily lose track of what’s being updated.
- As you propagate the changes, `xferbycopying` will save you the network transfers, but locally copying and then deleting gigabytes (with constant seeking on a rotary medium) will still take centuries; it will crash if the combined size of the renamed files is greater than your remaining free space (and terribly fragment the files if it’s near that).
- If you change even one byte in a moved file (imagine retagging your music library with corresponding changes to file/directory names), everything will still be transferred over the network.
Preprocessing moved files before syncing with Unison
I wasn’t going to stop using Unison to sync my data, so I thought that if I had just a little tool to replay the big structural changes (moving and renaming only) identically on the other synchronized side, and then continued using Unison as usual, it would solve all the mentioned problems except for the first one (which isn’t present at all when using rsync for simple one-way synchronization). Until recently I did this preprocessing manually, but that’s only suitable when there are just a few changes.
I hoped to be able to log/record/journal file and directory actions in Midnight Commander, where I usually perform directory reorganizations, but such functionality isn’t even on its wishlist. Then I found this little tool to detect moved files, but it only works for unchanged filenames and requires an NFS-mounted replica. And then I thought: hey, that should be easy to implement using just file and directory inode numbers! And so I did:
It’s a small Python script that dumps the inode numbers of every file and directory within a directory tree to a text file; you then make your big changes, and after you run it again, it produces a human-readable Python (or shell) script which you can run on the remote machine to replicate the changes. If there are any remaining changes besides renaming and moving, you can finish them manually or using Unison, rsync or similar.
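The core of the approach is simple enough to sketch. The following is not the actual script, just a minimal illustration of the idea with hypothetical command-line arguments; it assumes a single filesystem (unique inode numbers), and it merely prints the detected moves, whereas the real tool also has to order them correctly (e.g. rename a directory before touching files inside it) and emit a runnable script:

```python
#!/usr/bin/env python3
# Sketch only: record "inode number -> relative path" for a tree,
# then diff an old dump against the current tree to find moves.
import os
import sys

def dump_inodes(top):
    """Return a dict mapping inode number -> path relative to top."""
    inodes = {}
    for dirpath, dirnames, filenames in os.walk(top):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            # lstat() records symlinks themselves, not their targets
            inodes[os.lstat(path).st_ino] = os.path.relpath(path, top)
    return inodes

def save(inodes, dumpfile):
    with open(dumpfile, "w") as f:
        for ino, path in sorted(inodes.items()):
            f.write(f"{ino}\t{path}\n")

def load(dumpfile):
    with open(dumpfile) as f:
        return {int(ino): path
                for ino, path in (line.rstrip("\n").split("\t", 1) for line in f)}

if __name__ == "__main__":
    # hypothetical usage:
    #   sketch.py dump  TREE DUMPFILE    (before reorganizing)
    #   sketch.py moves TREE DUMPFILE    (afterwards; prints detected moves)
    action, top, dumpfile = sys.argv[1:4]
    if action == "dump":
        save(dump_inodes(top), dumpfile)
    else:
        before = load(dumpfile)
        for ino, new_path in sorted(dump_inodes(top).items()):
            old_path = before.get(ino)
            if old_path is not None and old_path != new_path:
                print(f"{old_path!r} -> {new_path!r}")
```

Since an in-place edit (retagging, say) leaves the inode number untouched, a file that was both modified and moved is still matched, which is exactly what makes this preprocessing step cheap compared to rehashing everything.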
Always check the generated script before executing it. It should never overwrite any file or directory and should stop after the first error, but still, manipulating large directory trees is inherently a dangerous task.
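In the generated Python script’s favour, every move can be guarded along these lines (a sketch, not the script’s actual output; the paths are made up):

```python
import os
import sys

def safe_move(src, dst):
    # never overwrite: refuse if anything already exists at the destination
    if os.path.lexists(dst):
        sys.exit(f"refusing to overwrite existing {dst!r}")
    # os.rename() is an atomic rename(2); any failure raises OSError,
    # which stops the script after the first error
    os.rename(src, dst)

safe_move("Music/Old Band Name", "Music/New Band Name")
```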
As usual, good luck and please report any problems!
- If you make your changes using software that, instead of overwriting files in place, creates a new file and then replaces the original, inode detection obviously won’t work. The script can still offer some help with the directories that contain these files, though.
- It may produce incorrect changes if the directory tree crosses multiple filesystems and there are inode number collisions.
- For the generated shell script, there is no way to tell the `mv` command to only do atomic moves (i.e. using rename(2)) and to refuse to fall back to copy+delete. The current version generates a Python script by default, which does not suffer from this (see the snippet below).
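To illustrate that last point: `os.rename()` maps directly to rename(2), so a move across filesystems fails loudly instead of silently degrading to copy+delete (paths made up, error message approximate):

```python
import os
os.rename("/home/me/big.file", "/mnt/other-disk/big.file")
# OSError: [Errno 18] Invalid cross-device link
```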