Comment #2 on issue 203 by [email protected]: frequent crashes of ganeti-masterd and frequent failures to restart it due to errors with the job queue.
http://code.google.com/p/ganeti/issues/detail?id=203

The restart of the masterd is indeed fixed; however:
2011-11-15 21:30:46,650: ganeti-masterd pid=13231/JobQueue3/Job42144 INFO - device disk/0: 100.00% done, 0s remaining (estimated)
2011-11-15 21:30:46,983: ganeti-masterd pid=13231/JobQueue3/Job42144 INFO - device disk/0: 100.00% done, 0s remaining (estimated)
2011-11-15 21:30:47,059: ganeti-masterd pid=13231/ClientReq4 INFO Received job poll request for 42144
2011-11-15 21:30:47,061: ganeti-masterd pid=13231/ClientReq8 INFO Received job poll request for 42144
2011-11-15 21:30:47,465: ganeti-masterd pid=13231/JobQueue3/Job42144 INFO Instance hlt-archive.internal's disks are in sync.
2011-11-15 21:30:47,899: ganeti-masterd pid=13231/JobQueue3/Job42144 INFO Remove logical volumes for 0
2011-11-15 21:30:47,988: ganeti-masterd pid=13231/ClientReq3 INFO Received job poll request for 42144
2011-11-15 21:30:47,991: ganeti-masterd pid=13231/ClientReq16 INFO Received job poll request for 42144
python2.7: ath.c:193: _gcry_ath_mutex_lock: Assertion `*lock == ((ath_mutex_t) 0)' failed.

This is something that keeps happening quite regularly.
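For context on the last line: ath.c is libgcrypt's thread abstraction layer, and this assertion is the usual symptom of libgcrypt being entered from multiple threads before thread locking callbacks were registered, typically via some library the daemon links in indirectly. The following is only a minimal sketch of the classic pre-1.6 libgcrypt initialization pattern, assuming the pthread backend; it is illustrative, not Ganeti's actual code:

    /* Illustrative only: the pre-1.6 libgcrypt thread setup that, when
     * missing or done too late, tends to produce the ath.c assertion
     * seen above.  Build with: cc init.c -lgcrypt -lpthread */
    #include <gcrypt.h>
    #include <pthread.h>
    #include <errno.h>

    /* Defines the gcry_threads_pthread callback structure. */
    GCRY_THREAD_OPTION_PTHREAD_IMPL;

    int main(void)
    {
        /* Register pthread locking callbacks BEFORE any other
         * libgcrypt call, including gcry_check_version(). */
        gcry_control(GCRYCTL_SET_THREAD_CBS, &gcry_threads_pthread);

        if (!gcry_check_version(GCRYPT_VERSION))
            return 1;

        gcry_control(GCRYCTL_INITIALIZATION_FINISHED, 0);
        return 0;
    }

Since masterd is a Python daemon, that registration would have to happen inside whichever C extension first pulls in libgcrypt, which is presumably why the abort is reported against python2.7 rather than against Ganeti's own code.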
