On 04/12/2011 11:23 AM, Todd B Sanders wrote:
On 04/12/2011 11:50 AM, Jeff Ortel wrote:
Todd,

Here's where we are with outstanding QPID related bugs:

692876 - package install hangs because gofer agent became unresponsive

The comments in this bug suggest two different issues, which were split out into
two separate bugs.

What are the two bugs? I assume one is the yum file descriptor leak, which we fixed
on our end?

Yes. The 1st bug was the yum file descriptor leak, which we addressed in our code. There were actually two leaks. The first was fixed by calling YumBase.closeRpmDb(), which stopped the leaks to /var/lib/rpm. The second is in yum itself: it adds a FileHandler on each instantiation of YumBase, which causes a leak to /var/log/yum. In any case, we've addressed this.
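
For reference, roughly what the cleanup looks like (only YumBase and closeRpmDb() come from the actual fix; the helper name and logger names below are illustrative):

    import logging
    import yum

    def install_packages(names):
        # Illustrative helper -- not our actual code -- showing the
        # descriptor cleanup around YumBase.
        yb = yum.YumBase()
        try:
            for name in names:
                yb.install(name=name)
            yb.resolveDeps()
            yb.processTransaction()
        finally:
            # Close the handles YumBase holds on /var/lib/rpm (first leak).
            yb.closeRpmDb()
            yb.close()
            # yum attaches a fresh FileHandler to its loggers on every
            # YumBase instantiation (second leak, to /var/log/yum); detach
            # and close them so the descriptors are released.
            for log_name in ('yum', 'yum.verbose', 'yum.filelogging'):
                logger = logging.getLogger(log_name)
                for handler in list(logger.handlers):
                    if isinstance(handler, logging.FileHandler):
                        logger.removeHandler(handler)
                        handler.close()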

The 2nd bug mentioned here was split out into 692891, discussed below.


This bug remains open to call attention to David's objection to what the user
perceives when, for whatever reason, the agent does not respond. This isn't a
defect, but we need to put some more thought into the flow to mitigate that
perception beyond what we did with the heartbeat.

The heartbeat was introduced to address this, correct?

Yes. The heartbeat is used to warn users that the agent is not online. However, if the heartbeat interval is 10 seconds, a user can still issue a package install and not get a warning if the heartbeat is only 5 seconds overdue.
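
To make the gap concrete: availability is judged purely from the last heartbeat, roughly like the sketch below (class and numbers are illustrative, not the actual pulp/gofer code):

    import time

    HEARTBEAT_INTERVAL = 10  # seconds between agent heartbeats (illustrative)
    GRACE = 10               # tolerance before the agent is flagged offline

    class HeartbeatTracker:
        # Illustrative sketch of heartbeat-based availability, not the real API.

        def __init__(self):
            self.last_seen = {}

        def beat(self, agent_id):
            # Called whenever a heartbeat arrives from the agent.
            self.last_seen[agent_id] = time.time()

        def available(self, agent_id):
            last = self.last_seen.get(agent_id)
            if last is None:
                return False
            # An agent that just died still reads as available until the
            # window expires, so an install issued while the heartbeat is
            # only a few seconds overdue proceeds without any warning.
            return (time.time() - last) < HEARTBEAT_INTERVAL + GRACE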

Do you have any thoughts on how we can expand that to be a fail-safe in all
situations?

Yes, but nothing we have time to do this sprint. I have a few options in mind, but we need to discuss the trade-offs. I really haven't had time this sprint to think through them.


Recommend: We push the bug to next sprint.


692891 - gofer becomes unresponsive w/ Enqueue capacity threshold exceeded error

This bug pertains to a specific exception that caused us to stop using the
qpid-cpp-server-store. It's my understanding that only David has seen this since we
moved away from using the server-store on an F13 box. He's been running some
automated testing in an attempt to reproduce it, without success. So, threat to
RHUI? I /think/ low, but the nature of this bug is such that I really can't say for
sure. There is always the possibility that the pulp qpid consumer stopped consuming
heartbeats when David saw this and it was a legitimate error. Not sure until we can
reproduce it and snoop around.
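
For context, the heartbeat consumer is essentially a fetch/acknowledge loop; the sketch below shows the general shape (broker URL and queue address are made up). If that loop stalls, heartbeats accumulate on the broker, and with a durable queue the store can run out of enqueue capacity:

    from qpid.messaging import Connection

    # Sketch of a qpid.messaging heartbeat consumer; the broker URL and
    # queue address are hypothetical.
    connection = Connection('localhost:5672')
    connection.open()
    try:
        session = connection.session()
        receiver = session.receiver('heartbeat')
        while True:
            message = receiver.fetch()    # blocks until a heartbeat arrives
            # ... record the agent's last-seen timestamp here ...
            session.acknowledge(message)
    finally:
        connection.close()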

Recommend: We push the bug to next sprint and continue debugging.

Ok.


695761 - QPID (python) driver goes crazy when VPN dropped between client and 
broker

This issue was originally reported by bkearney, but I have been able to reproduce
it. I don't think this presents a risk to RHUI because the only way I've found to
reproduce it involves a VPN. I've sent this information to the python-qpid
maintainer Rafael Schloming on several occasions (last email on 2/8, with no
response).

Recommend: We push the bug to next sprint and continue debugging.

Possible that we could submit a patch to python-qpid to address the issue?

Probably, but it could take a fair amount of time to dig through the driver to understand how it works and how this can be fixed.


-Todd

Thanks,

jeff


