On Fri, Jan 12, 2018 at 4:33 AM, Ævar Arnfjörð Bjarmason <ava...@gmail.com> wrote:
> For those rusty on git-gc's defaults, this is what it looks like in this
> scenario:
>
> 1. User runs "git pull"
> 2. git gc --auto is called, there are >6700 loose objects
> 3. it forks into the background, tries to prune and repack, objects
>    older than gc.pruneExpire (2.weeks.ago) are pruned.
> 4. At the end of all this, we check *again* if we have >6700 objects,
>    if we do we print "run 'git prune'" to .git/gc.log, and will just
>    emit that error for the next day before trying again, at which point
>    we unlink the gc.log and retry, see gc.logExpiry.
>
> Right now I've just worked around this by setting gc.pruneExpire to a
> lower value (4.days.ago). But there's a larger issue to be addressed
> here, and I'm not sure how.
>
> When the warning was added in [1] it didn't know to detach to the
> background yet, that came in [2], shortly after came gc.log in [3].
>
> We could add another gc.auto-like limit, which could be set at some
> higher value than gc.auto. "Hey if I have more than 6700 loose objects,
> prune the <2wks old ones, but if at the end there's still >6700 I don't
> want to hear about it unless there's >6700*N".
Yes, it's about time we make too_many_loose_objects() more accurate and
complain less, especially when the complaint is useless.

> I thought I'd just add that, but the details of how to pass that message
> around get nasty. With that solution we *also* don't want git gc to
> start churning in the background once we reach >6700 objects, so we need
> something like gc.logExpiry which defers the gc until the next day. We
> might need to create .git/gc-waitabit.marker, ew.

Hmm.. could we save the info from the last run to help the next one? If
the last gc --auto (which does try to remove some loose objects) leaves
6700 objects still loose, then it's "clear" that the next run may also
leave them loose. If we save that number somewhere (in gc.log too?),
too_many_loose_objects() can read it back, subtract it from the estimate,
and may decide not to do gc at all since the number of loose-and-prunable
objects is below the threshold.

The problem is of course that these 6700 will gradually become prunable
over time, so we can't just subtract the same constant forever. Perhaps
we can do something based on gc.pruneExpire? Say gc.pruneExpire says to
keep objects for two weeks; we assume the creation times of these objects
are spread out evenly over those 14 days. So after one day, 6700/14 of
them are supposed to be prunable again and counted in the
too_many_loose_objects() estimate, and a gc --auto run two weeks after
the first one would count all of the loose objects as prunable again
(see the sketch at the end of this mail).

> More generally, these hard limits seem contrary to what the user cares
> about. E.g. I suspect that most of these loose objects come from
> branches since deleted in upstream, whose objects could have a different
> retention policy.

Er.. what retention policy? I think gc.pruneExpire is the only thing that
can keep loose objects around?

BTW

> But now I have git-gc on some servers yelling at users on every pull
> command:
>
> warning: There are too many unreachable loose objects; run 'git prune' to
> remove them.

Why do we yell at the users when some maintenance thing is supposed to be
done on the server side? If this is the case, should gc have some way to
yell at the admin instead?
--
Duy
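
The sketch mentioned above, to make the arithmetic concrete. This is
standalone toy code, not a patch against builtin/gc.c: the threshold,
the saved "leftover" count and its timestamp are all hand-waved
stand-ins, and where they would actually be stored is still the open
question.

/*
 * Rough sketch only: "last_leftover" is the number of loose objects the
 * previous gc --auto could not prune, "last_gc_time" is when that run
 * happened. The leftovers are assumed to become prunable linearly over
 * the gc.pruneExpire window, so after one day roughly 1/14 of them stop
 * being subtracted from the estimate, and after two weeks none of them
 * are.
 */
#include <stdio.h>
#include <time.h>

#define GC_AUTO_THRESHOLD 6700             /* gc.auto default */
#define PRUNE_EXPIRE_SECS (14 * 24 * 3600) /* gc.pruneExpire = 2.weeks.ago */

static int too_many_loose_objects(long current_estimate,
				  long last_leftover, time_t last_gc_time)
{
	double elapsed = difftime(time(NULL), last_gc_time);
	double still_unprunable = 0;

	if (elapsed < PRUNE_EXPIRE_SECS)
		still_unprunable = last_leftover *
			(1.0 - elapsed / PRUNE_EXPIRE_SECS);

	return current_estimate - (long)still_unprunable > GC_AUTO_THRESHOLD;
}

int main(void)
{
	time_t now = time(NULL);

	/*
	 * Yesterday's gc left 6700 loose objects; today we estimate 7000.
	 * Only about 7000 - 6700*13/14 = ~779 of them can be new or
	 * prunable, well under the threshold, so don't gc again.
	 */
	printf("%d\n", too_many_loose_objects(7000, 6700, now - 24 * 3600));

	/* Two weeks later the whole 6700 counts against the threshold again. */
	printf("%d\n", too_many_loose_objects(7000, 6700,
					      now - PRUNE_EXPIRE_SECS));
	return 0;
}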