Conversation moved from recover data from lost+found to a proper bug. --Joey
(Unfortunately, that scrambled the comment creation times and thus their order.)
Added a message. done --Joey
I followed this to re-inject files which git annex fsck listed as missing.
For every one of those files, I get
when trying to copy the files to the remote.
-- Richard
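For reference, the kind of invocation being described would look something like this (the remote name, file path, and exact options are placeholders, not taken from the report; as becomes clear later in the thread, --fast was in use):

    git annex copy --to origin path/to/file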
As my comment from work is stuck in moderation:
I ran this twice:
but nothing changed.
Hmm. Old versions may have forgotten to git add a .git-annex location log file when recovering content with fsck. That could be another reason things are out of sync.
But I'm not clear on which repo is trying to copy files to which.
(NB: If the files were recovered on a bare git repo, fsck cannot update the location log there, which could also explain this.)
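If that is what happened, a minimal sketch of how to check for and commit stale location logs under the old in-tree .git-annex/ layout this version uses (the repository path is a placeholder):

    cd /path/to/annex/repo
    # see whether fsck left modified or untracked location log files behind
    git status --short .git-annex/
    # stage and commit them so other repositories can learn about the recovered content
    git add .git-annex
    git commit -m "commit location logs updated by fsck"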
Version: 0.20110503
My local non-bare repo is copying to a remote bare repo.
I have been recovering in a non-bare repo.
If there is anything I can send you to help... If I removed said files and went through http://git-annex.branchable.com/bugs/No_easy_way_to_re-inject_a_file_into_an_annex/ -- would that help?
What does git annex whereis say about it? Is the content actually present in annex/objects/ on the bare repository? Does that contradict whereis?

It exists locally; whereis tells me it exists locally, and locally only.
The object is not in the bare repo.
The file might have gone missing before I upgraded my annex backend version to 2. Could this be a factor?
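One way to cross-check the two views, the whereis output versus what is physically on disk in the bare repository (host, paths, and key are placeholders; the key for a file can be read from its symlink target):

    # locally: which repositories does git-annex believe have the content?
    git annex whereis path/to/file

    # on the host with the bare repository: is the object actually stored there?
    # (replace $KEY with the key taken from the file's symlink target)
    find /remote/repo.git/annex/objects -type f -name "*$KEY*"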
What you're describing should be impossible; the error message shown can only occur if the object is present in the annex where git-annex-shell recvkey is run. So something strange is going on. Try reproducing it by running, on the remote system,
git-annex-shell recvkey /remote/repo.git $key
If you can reproduce it, I guess the next thing to do will be to strace the command and see why it thinks the object is there.

It seems the objects are in the remote after all, but the remote is unaware of this fact. No idea where or why the remote lost that info, but.. Anyway, with the SHA backends, wouldn't it make sense to simply return "OK" and update the annex logs accordingly, no?
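A rough sketch of the reproduction step suggested above, run on the remote host (the repository path comes from the command above; the key and trace file name are placeholders):

    # run the receiving side by hand and trace its file accesses
    strace -f -o recvkey.trace git-annex-shell recvkey /remote/repo.git $key
    # then look for the checks against annex/objects to see why it decides
    # the object is already present
    grep annex/objects recvkey.trace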
Local:
Remote:
So, it appears that you're using git annex copy --fast. As documented, that assumes the location log is correct. So it avoids directly checking whether the bare repo contains the file, tries to upload it, and the bare repo is all like "but I've already got this file!". The only way to improve that behavior might be to let rsync go ahead and retransfer the file, which, with rsync's recovery, should require sending little data. But I can't say I like the idea much, as the repo already has the content, so unlocking it and letting rsync mess with it is an unnecessary risk. I think it's ok for --fast to blow up if its assumptions turn out to be wrong.
If you use git annex copy without --fast in this situation, it will do the right thing.
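To illustrate the difference (remote name and path are placeholders): --fast trusts the location log, while the plain form checks with the remote before deciding what needs to be sent.

    # trusts the location log; can fail when the log is out of date
    git annex copy --fast --to origin path/to/file

    # verifies whether origin actually has the content before copying
    git annex copy --to origin path/to/file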
Yes, makes sense. I am so used to using --fast, I forgot a non-fast mode existed. I still think it would be a good idea to fall back to non-fast mode if --fast runs into an error from the remote, but as that is well beyond my abilities, how about this patch?
Or, even better, wouldn't it make sense to have the SHA backends always default to --fast and, only when any snags are hit, use non-fast mode for that file?
Though if we continue here, we should probably move this to its own page.