... and we promptly figured out how much time dpb spends waiting for locks
on "large" clusters (4 cores and more).
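(As a rough illustration of how one measures that kind of figure, not dpb's actual Perl code: time the blocking lock acquisition itself, then sum the waits per host. Everything below is a hypothetical Python sketch.)

```python
# Hypothetical sketch (NOT dpb's code): measure how long a process sits
# in a blocking file-lock acquisition, the quantity discussed above.
import fcntl
import os
import tempfile
import time

def timed_lock(path):
    """Open path, block until we hold an exclusive lock, and report
    how long the wait took."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    start = time.monotonic()
    fcntl.flock(fd, fcntl.LOCK_EX)   # blocks while another process holds it
    waited = time.monotonic() - start
    return fd, waited

lockfile = os.path.join(tempfile.gettempdir(), "demo.lock")
fd, waited = timed_lock(lockfile)
print(f"waited {waited:.6f}s for {lockfile}")
os.close(fd)
```

Summing such wait times across all cores is what makes the contention visible in the first place.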
That's a bit too much.
I'm currently checking how I can cheat to avoid that... a first simple
solution is being tested, and I have a more convoluted idea for a bit later...
This time, I tweaked usability a bit, as far as lock contention goes.
People running dpb regularly know that dpb locks the host during
depends/prepare/prepare-results/junk.
Now, that locking is more explicit: dpb tries to obtain the lock,
and if it can't, it will create a separate task that waits for the lock.
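The try-then-wait pattern described here can be sketched as follows. This is a hypothetical Python illustration, not dpb's actual Perl internals; all names (`try_lock`, `WaitForLock`, `acquire`) are invented.

```python
# Hypothetical sketch of "try the lock, else queue a waiting task"
# (names and structure are illustrative, not dpb's actual code).
import fcntl
import os

def try_lock(path):
    """Attempt a non-blocking exclusive lock; return the fd on success,
    None if someone else holds the lock."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd
    except BlockingIOError:
        os.close(fd)
        return None

class WaitForLock:
    """A separate task representing 'waiting for this lock', so the wait
    shows up as a task instead of silently blocking the engine."""
    def __init__(self, path):
        self.path = path
    def run(self):
        fd = os.open(self.path, os.O_RDWR | os.O_CREAT, 0o644)
        fcntl.flock(fd, fcntl.LOCK_EX)   # the blocking wait happens here
        return fd

def acquire(path, queue_task):
    """Grab the lock if free; otherwise hand a waiting task to the queue."""
    fd = try_lock(path)
    if fd is None:
        queue_task(WaitForLock(path))    # don't block; make the wait explicit
        return None
    return fd
```

The point of the design is visibility: a wait that used to look like a hung step becomes an ordinary task the scheduler (and the user) can see.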
More performance changes:
- I've found a stupid bug that prevented some unnecessary info from being
deleted from memory, so dpb's memory consumption when running a fetch
engine (default) is somewhat lower.
- some extra processes are removed: there's no longer a checksum process
if all the distfiles
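The exact condition got cut off above, but the general idea of not forking a worker when there is nothing for it to do can be sketched like this. Again a hypothetical Python illustration (`needs_checksum`, `maybe_start_checksum_worker` are invented names), not dpb's code.

```python
# Hypothetical illustration of "remove an extra process when it has
# nothing to do": only spawn a checksum worker if some distfile still
# needs its checksum verified.
import hashlib

def sha256_file(path):
    """Checksum one file in chunks to keep memory flat."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def needs_checksum(distfiles, verified):
    """Return the distfiles that still need verification."""
    return [f for f in distfiles if f not in verified]

def maybe_start_checksum_worker(distfiles, verified, spawn):
    todo = needs_checksum(distfiles, verified)
    if not todo:
        return None          # nothing to do: no extra process at all
    return spawn(todo)       # otherwise hand the remaining list to a worker
```

Skipping the fork entirely when the to-do list is empty is what saves the process, which matters on boxes where dpb already runs one job per core.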
On Mon, Dec 31, 2012 at 12:29:57AM +0100, Marc Espie wrote:
> On Mon, Dec 24, 2012 at 03:48:58PM +0100, Marc Espie wrote:
> > First commit happened a few days ago. dpb scans for already built packages
> > more aggressively, so that in case of restarts, the I/B information will
> > be more accurate faster: before that change, every pkgpath would go through
> > the queue, and already built paths would move to I/B later.
> > I'm also test-
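The pre-scan described above can be modeled roughly as follows. This is a hypothetical Python sketch (the `.tgz` repository layout and the names `prescan`, `pkgname_for` are assumptions for illustration), not how dpb actually does it.

```python
# Hypothetical sketch of the aggressive pre-scan: split pkgpaths into
# "already built" (goes straight to I/B) and "still to queue", by
# checking whether the package file already exists in the repository.
import os

def prescan(pkgpaths, pkgname_for, repo_dir):
    """Return (already_built, to_queue). pkgname_for maps a pkgpath
    to its package name; repo_dir holds the built packages."""
    built, queue = [], []
    for p in pkgpaths:
        pkgfile = os.path.join(repo_dir, pkgname_for(p) + ".tgz")
        (built if os.path.exists(pkgfile) else queue).append(p)
    return built, queue
```

Doing this once at startup is why, after a restart, the I/B counter reflects reality quickly instead of creeping up as each built path cycles through the queue.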