I worked out how to retroactively annex a large file that had been checked into a git repo some time ago. I thought this might be useful for others, so I am posting it here.

Suppose you have a git repo where somebody checked in a large file that you would like annexed. There are a bunch of commits after it, so you don't want to lose history, but you also don't want everybody to have to retrieve the large file when they clone the repo. This will rewrite history as if the file had been annexed when it was originally added.

This command works for me. It relies on the current behavior of git, which is to use a directory named .git-rewrite/t/ at the top of the git tree for the extracted tree. This will not be fast, and it will rewrite history, so be sure that everybody who has a copy of your repo is OK with accepting the new history. If the behavior of git changes, you can specify the directory to use with the -d option. Currently, the t/ directory is created inside the directory you specify, so "-d ./.git-rewrite/" should be roughly equivalent to the default.
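For instance, to point the scratch directory at a tmpfs for speed (the path here is purely illustrative, and the tree filter is elided):

git filter-branch -d /dev/shm/rewrite --tree-filter '...' --tag-name-filter cat -- --all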

Enough with the explanation, on to the command:

git filter-branch --tree-filter 'for FILE in file1 file2 file3;do if [ -f "$FILE" ] && [ ! -L "$FILE" ];then git rm --cached "$FILE";git annex add "$FILE";ln -sf `readlink "$FILE"|sed -e "s:^../../::"` "$FILE";fi;done' --tag-name-filter cat -- --all

Replace file1 file2 file3 with whatever paths you want retroactively annexed. For example, if you wanted bigfile1.bin in the top directory and subdir1/bigfile2.bin to be retroactively annexed, try:

git filter-branch --tree-filter 'for FILE in bigfile1.bin subdir1/bigfile2.bin;do if [ -f "$FILE" ] && [ ! -L "$FILE" ];then git rm --cached "$FILE";git annex add "$FILE";ln -sf `readlink "$FILE"|sed -e "s:^../../::"` "$FILE";fi;done' --tag-name-filter cat -- --all
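That one-liner is dense, so here is the same tree filter expanded with comments (functionally equivalent to the command above; nothing new is added):

for FILE in bigfile1.bin subdir1/bigfile2.bin; do
    # only touch regular files; skip anything that is already an annex symlink
    if [ -f "$FILE" ] && [ ! -L "$FILE" ]; then
        # unstage the large file so git annex add can take it over
        git rm --cached "$FILE"
        # annex it; the file is replaced by a symlink into .git/annex/objects
        git annex add "$FILE"
        # the filter runs inside .git-rewrite/t/, so the symlink git-annex
        # created points two directories too far up; strip the leading ../../
        # so the link is correct in a normal checkout
        ln -sf "$(readlink "$FILE" | sed -e 's:^\.\./\.\./::')" "$FILE"
    fi
done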

If your repo has tags, you should take a look at the git-filter-branch man page's description of the --tag-name-filter option and decide what you want to do. By default this will rewrite the tags "nearly properly".

You'll probably also want to look at the git-filter-branch man page's section titled "CHECKLIST FOR SHRINKING A REPOSITORY" if you want to free up space in the existing repo whose history you just rewrote.
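The usual steps from that checklist look roughly like this (check the man page for your git version before running them, since they permanently discard the old objects):

# delete the refs/original/ backups that filter-branch leaves behind
git for-each-ref --format='%(refname)' refs/original/ |
    xargs -n 1 git update-ref -d

# expire the reflog and repack so the old objects can actually be pruned
git reflog expire --expire=now --all
git gc --prune=now --aggressive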

Man, I wish you'd written this a couple weeks ago. :) I was never able to figure that incantation out and ended up unannexing and re-annexing the whole thing to get rid of the file I inadvertently checked into git instead of the annex.
Comment by http://edheil.wordpress.com/ Sun Dec 16 00:11:38 2012

Based on the hints given here, I've worked on a filter to both annex files and add URLs via filter-branch:

https://gitorious.org/arand-scripts/arand-scripts/blobs/master/annex-filter

The script above is very specific, but I think a few of its ideas can be used in general. The general structure is:

#!/bin/bash

# links that already exist
links=$(mktemp)
find . -type l >"$links"

# remove from the index first so the files don't block git annex add, then annex them
git rm --cached --ignore-unmatch -r bin*
git annex add -c annex.alwayscommit=false bin*

# compare links before and after annexing, keeping only the newly created ones
newlinks=$(mktemp -u)
mkfifo "$newlinks"
comm -13 <(sort "$links") <(find . -type l | sort) > "$newlinks" &

# rewrite links
while IFS= read -r file
do
    # link is created below .git-rewrite/t/ during filter-branch, strip two parents for correct target
    ln -sf "$(readlink "$file" | sed -e 's%^\.\./\.\./%%')" "$file"
done < "$newlinks"

git annex merge

# clean up the temp files (otherwise one pair is leaked per rewritten commit)
rm -f "$links" "$newlinks"

which would be run using

git filter-branch --tree-filter path/annex-filter --tag-name-filter cat -- --all

or similar.

  • I'm using find to make sure the only rewritten symlinks are those for the newly annexed files; this way it is possible to annex an unknown set of filenames.
  • If running several git annex commands, using -c annex.alwayscommit=false on each and doing a single git annex merge at the end might be faster.
Comment by https://launchpad.net/~arand Wed Mar 13 12:05:49 2013

One thing I noticed is that git-annex needs to checksum each file even if it was previously annexed (rather obviously, since there is no general way to tell whether a file is the same as the old one without checksumming). But in the specific case where we are replacing files that are already in git, we actually do have the sha1 checksum of each file in question, which could be used.

So, trying to work with this, I wrote a filter script that starts out annexing everything in the first commit and continuously writes out sha1<->filename<->git-annex-object triplets to a global file. When it then starts on the next commit, it compares the sha1s of the index with those in the global file, and any matches are manually symlinked directly to the corresponding git-annex object without checksumming.
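A minimal sketch of that caching idea, for illustration only (this is not the linked script: the cache format, the reuse of ANNEX_FFILTER_LOG as its location, and the path handling are my assumptions, and it ignores the .git-rewrite/t/ link-prefix fixup discussed above):

#!/bin/bash
# assumed cache format: one "sha1 link-target" pair per line
CACHE=${ANNEX_FFILTER_LOG:-/tmp/annex-ffilter.log}

# git ls-files -s prints "<mode> <sha1> <stage>\t<path>" for each staged file,
# so the blob sha1 is available without re-reading the file contents
git ls-files -s | while read -r mode sha1 stage path; do
    # mode 120000 is a symlink, i.e. already annexed; skip it
    [ "$mode" = 120000 ] && continue
    target=$(grep "^$sha1 " "$CACHE" 2>/dev/null | head -n1 | cut -d' ' -f2-)
    if [ -n "$target" ]; then
        # blob seen in an earlier commit: link straight to the annex object,
        # no checksumming (a real version must adjust the ../ prefix of
        # $target for files in subdirectories)
        git rm --cached --quiet "$path"
        ln -sf "$target" "$path"
    else
        # new content: let git-annex checksum it, then record the mapping
        git rm --cached --quiet "$path"
        git annex add "$path"
        echo "$sha1 $(readlink "$path")" >>"$CACHE"
    fi
done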

I've done a few tests and this seems to be considerably faster than letting git-annex checksum everything.

This is from a git-svn import of the (free software) Red Eclipse game project; there are approximately 3500 files (images, maps, models, etc.) being annexed in each commit, and around 5300 commits, which is why I really, really care about speed:

10 commits: ~7min

100 commits: ~38min

For comparison of the old and new methods (the difference should increase with the number of commits):

old, 20 commits: ~32min

new, 20 commits: ~11min

The script itself is a bit of a monstrosity in bash (plus grep/sed/awk/git), and the files that are annexed are hardcoded (removed when forming $oldindexfiles), but it should be fairly easy to adapt:

https://gitorious.org/arand-scripts/arand-scripts/blobs/master/annex-ffilter

The usage would be something like:

rm /tmp/annex-ffilter.log; git filter-branch --tree-filter 'ANNEX_FFILTER_LOG=/tmp/annex-ffilter.log ~/utv/scripts/annex-ffilter' --tag-name-filter cat -- branchname

I suggest you use it with at least two orders of magnitude more caution than normal filter-branch.

Hope it might be useful for someone else wrestling with filter-branch and git-annex :)

Comment by arand Mon Mar 18 14:39:52 2013

Thanks for the tip :) One question though: how do I push this new history out to all my other annexes? All I managed to make it do was revert the rewrite, so the raw file appeared again...

I recently had the need to re-kind-of-annex an unusually large repo (one of the largest?). With some tricks and the right code I managed to get it down to 170000 commits in 19 minutes, extracting ~8GB of blobs. I'm attaching the link here as I feel it might be helpful for very large projects (where git-filter-branch can become prohibitively slow):

https://www.primianotucci.com/blog/large-scale-git-history-rewrites
