Adrian,

I need to update our application to work with the newest code from the
trunk.  We did an update a couple months ago so I don't think it will take
me long.  I'll try to provide some feedback early next week.


Brett

On Sat, Aug 18, 2012 at 12:31 AM, Adrian Crum <
adrian.c...@sandglass-software.com> wrote:

> Brett,
>
> Any results yet?
>
> -Adrian
>
>
> On 8/8/2012 12:57 AM, Brett Palmer wrote:
>
>> Adrian,
>>
>> Thanks, I'll take an update and try it out.
>>
>> Brett
>> On Aug 7, 2012 4:23 PM, "Adrian Crum" <adrian.c...@sandglass-software.com>
>> wrote:
>>
>>> Brett,
>>>
>>> I think I solved your problems with my recent commits, ending with rev
>>> 1370566. Let me know if it helps.
>>>
>>> -Adrian
>>>
>>> On 8/5/2012 4:53 PM, Brett Palmer wrote:
>>>
>>>> Adrian,
>>>>
>>>> Thanks for the update.  Here are some feedback points on your listed
>>>> items:
>>>>
>>>> 1. The JobPoller gets out-of-memory errors.  We've seen this a lot in
>>>> production servers when the JobSandbox table is not constantly pruned of
>>>> old records.  It would be nice if the poller restricted its search to only
>>>> the active records it could process (see the sketch just after this list).
>>>>
>>>> 2. A queue for capturing missing records would be good.  From item 1 above,
>>>> we have had locks on the table when the poller is busy doing a scan, and
>>>> new jobs cannot be added or they time out.
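
For illustration, the kind of restricted lookup item 1 describes might look
roughly like this in plain JDBC. The JOB_SANDBOX column names used below
(JOB_ID, RUN_TIME, START_DATE_TIME, CANCEL_DATE_TIME) are assumptions made for
the sketch, not a reference to the actual schema or to the JobPoller code:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;
import java.util.ArrayList;
import java.util.List;

public class RestrictedJobLookupSketch {

    /** Return only the job IDs that are due now and still runnable. */
    public List<String> findRunnableJobIds(Connection con, int maxCandidates) throws Exception {
        List<String> jobIds = new ArrayList<String>();
        // Old, finished rows are never examined, so an unpruned JobSandbox
        // table does not slow the poll or hold locks longer than necessary.
        PreparedStatement ps = con.prepareStatement(
                "SELECT JOB_ID FROM JOB_SANDBOX"
              + " WHERE RUN_TIME <= ?"
              + "   AND START_DATE_TIME IS NULL"
              + "   AND CANCEL_DATE_TIME IS NULL"
              + " ORDER BY RUN_TIME");
        try {
            ps.setTimestamp(1, new Timestamp(System.currentTimeMillis()));
            ps.setMaxRows(maxCandidates);  // bound the candidate set instead of scanning everything
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                jobIds.add(rs.getString("JOB_ID"));
            }
            rs.close();
        } finally {
            ps.close();
        }
        return jobIds;
    }
}
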
>>>>
>>>> Other wish items:
>>>>
>>>> - Ability to assign different service engines to process specific job
>>>> types.  We often run multiple application servers but want to limit how
>>>> many concurrent jobs are run.  For example, if I had 4 app servers
>>>> connected to the same DB, I may only want one app server to service
>>>> particular jobs.  I thought this feature was possible, but when I tried to
>>>> implement it by changing some of the configuration files it never worked
>>>> correctly.
>>>>
>>>> - JMS support for the service engine.  It would be nice if there were a
>>>> JMS interface for those who want to use JMS as their queuing mechanism for
>>>> jobs (a rough sketch follows this list).
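
To make the JMS wish item concrete, the producing side could look roughly like
the sketch below, using the standard javax.jms API. The queue name "ofbiz.jobs",
the message layout, and the class name are placeholders invented for the
example; no such interface exists in the service engine today:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class JmsJobSubmitterSketch {
    private final ConnectionFactory factory;

    public JmsJobSubmitterSketch(ConnectionFactory factory) {
        this.factory = factory;
    }

    /** Publish a job request to a queue instead of inserting it into JobSandbox directly. */
    public void submit(String serviceName, String jsonParameters) throws Exception {
        Connection con = factory.createConnection();
        try {
            Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("ofbiz.jobs");   // hypothetical queue name
            MessageProducer producer = session.createProducer(queue);
            TextMessage msg = session.createTextMessage(jsonParameters);
            msg.setStringProperty("serviceName", serviceName); // a consumer would dispatch on this
            producer.send(msg);
        } finally {
            con.close();
        }
    }
}

Consumers would then be deployed only on the app servers that should run jobs,
which is also one possible way to get the "only some servers service these
jobs" behavior.
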
>>>>
>>>>
>>>> Brett
>>>>
>>>> On Sun, Aug 5, 2012 at 6:21 AM, Adrian Crum <
>>>> adrian.c...@sandglass-software.com> wrote:
>>>>
>>>>> On 8/5/2012 11:02 AM, Adrian Crum wrote:
>>>>>
>>>>>> I just committed a bunch of changes to the Job Manager group of classes.
>>>>>> The changes help simplify the code and hopefully make the Job Manager
>>>>>> more robust. On the other hand, I might have broken something. ;) I will
>>>>>> monitor the mailing list for problems.
>>>>>>
>>>>>> I believe the JobPoller settings in serviceengine.xml (the <thread-pool>
>>>>>> element) should be changed. I think min-threads should be set to "2" and
>>>>>> max-threads should be set to "5". Creating lots of threads can hurt
>>>>>> throughput because the JVM spends more time managing them. I would be
>>>>>> interested in hearing what others think.
>>>>>>
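
For anyone mapping those numbers onto java.util.concurrent, a minimal sketch of
what min-threads and max-threads would mean for a thread pool is below. This is
illustrative only, not the actual JobPoller code, and the queue capacity of 100
is an assumption made for the sketch:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PollerPoolSketch {
    // Illustrative only: min-threads / max-threads map roughly to the core and
    // maximum pool sizes of a ThreadPoolExecutor.
    static ThreadPoolExecutor newPollerPool() {
        return new ThreadPoolExecutor(
                2,                     // min-threads: workers kept alive even when idle
                5,                     // max-threads: extra workers only created when the queue is full
                60, TimeUnit.SECONDS,  // idle time before surplus workers are reclaimed
                new LinkedBlockingQueue<Runnable>(100)); // pending jobs wait here
    }
}
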
>>>>>
>>>>> Thinking about this more, there are some other things that need to be
>>>>> fixed:
>>>>>
>>>>> 1. The JobPoller uses an unbounded queue. In a busy server, there is the
>>>>> potential the queue will grow in size until it causes an out-of-memory
>>>>> condition.
>>>>>
>>>>> 2. There is no accommodation for when a job cannot be added to the queue -
>>>>> it is just lost. We could add a dequeue method to the Job interface that
>>>>> will allow implementations to recover or reschedule the job when it can't
>>>>> be added to the queue (a rough sketch of the idea follows this list).
>>>>>
>>>>> 3. There is a JobPoller instance per delegator, and each instance contains
>>>>> the number of threads configured in serviceengine.xml. With the current
>>>>> max-threads setting of 15, a multi-tenant installation with 100 tenants
>>>>> will create up to 1500 threads. (!!!) A smarter strategy might be to have
>>>>> a single JobPoller instance that services multiple JobManagers.
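
Points 1 and 2 could work roughly like the sketch below: a bounded queue plus a
rejection hook that hands the job back instead of dropping it. The Job interface
and its deQueue() method are hypothetical stand-ins for whatever the real API
would be, and the pool sizes and queue capacity are placeholders:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedJobPollerSketch {

    /** Hypothetical job abstraction; deQueue() lets an implementation reschedule itself. */
    public interface Job extends Runnable {
        void deQueue();  // e.g. mark the job's JobSandbox record as pending again
    }

    /** Point 1: a bounded queue, so a busy server cannot grow it until memory runs out. */
    private final LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>(100);

    /** Point 2: instead of silently losing a rejected job, hand it back for rescheduling. */
    private final RejectedExecutionHandler requeueHandler = new RejectedExecutionHandler() {
        public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
            if (r instanceof Job) {
                ((Job) r).deQueue();
            }
        }
    };

    // A single executor like this could also be shared by several JobManagers (point 3).
    private final ThreadPoolExecutor executor =
            new ThreadPoolExecutor(2, 5, 60, TimeUnit.SECONDS, queue, requeueHandler);

    public void queueNow(Job job) {
        executor.execute(job);  // requeueHandler fires if the queue is full
    }
}
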
>>>>>
>>>>> -Adrian
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>
