On Fri, May 30, 2008 at 12:01 PM, hemant kumar <[EMAIL PROTECTED]> wrote:

> On Fri, 2008-05-30 at 10:25 -0700, Raghu Srinivasan wrote:
> > Hi Hemant - no, I am not using thread_pool right now. I do have two
> > separate workers like you and Ryan Leavengood suggested - one for the
> > batch process and the other for the live/web-user-initiated process -
> > which, by the way, works out great, thanks!
> >
> > How are other folks handling 1000s of RSS refreshes? Via BDRb - or
> > something else? Is BDRb really the best tool for what I am trying to
> > do? I'd really appreciate if others could share their experiences.
> >
>
> okay, then it shouldn't be a problem. Can you try the git version of
> backgroundrb as suggested in the following link and report back to us?
>
>
> http://gnufied.org/2008/05/21/bleeding-edge-version-of-backgroundrb-for-better-memory-usage/
>
>
>
Hemant - Thanks for your help. I'll try this first in development and then
report back. With this, will I be able to queue up 256+ or 1024+ b/g jobs?
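
In case it makes the question clearer, here's roughly the shape of what I'm
doing - just a sketch, with made-up names (FeedRefreshWorker, refresh_feed,
Feed#refresh!), and assuming the newer async_* MiddleMan calls from the git
version you linked:

  # lib/workers/feed_refresh_worker.rb
  class FeedRefreshWorker < BackgrounDRb::MetaWorker
    set_worker_name :feed_refresh_worker

    def create(args = nil)
      # called once when the worker process starts; nothing to set up yet
    end

    # one small job per feed, so the queue is just a list of feed ids
    def refresh_feed(feed_id)
      feed = Feed.find(feed_id)
      feed.refresh!  # assumes the app already has a refresh! on Feed
    end
  end

and then the batch side (or a controller, for the live case) queues one job
per feed:

  Feed.find(:all).each do |feed|
    MiddleMan.worker(:feed_refresh_worker).async_refresh_feed(:arg => feed.id)
  end

The idea being that each refresh is its own small async job, so "queueing
1000+ jobs" is really just 1000 quick calls like the above rather than one
long-running batch - which is why I'm asking whether the queue itself copes
with that many outstanding requests.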
_______________________________________________
Backgroundrb-devel mailing list
[email protected]
http://rubyforge.org/mailman/listinfo/backgroundrb-devel
