Adrian,

I ported our code over to the newest OFBiz trunk and tried out the changes
for the new service engine.  The changes you made are working very well.  I
configured our server to use multiple pools and then scheduled various jobs
to those pools.  Servers that were not configured to service those pools left
the jobs dormant, and servers that were configured to service the new pools
picked them up as expected.
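
To illustrate, the only difference between the servers in my test was the
list of run-from-pool elements in each server's serviceengine.xml.  This is
just a trimmed-down sketch (most attributes are omitted for brevity, and the
pool names are simply the ones from my test setup):

<!-- Server A: lists "pool2", so jobs scheduled to "pool2" ran here -->
<thread-pool send-to-pool="pool" poll-enabled="true" poll-db-millis="30000">
    <run-from-pool name="pool2"/>
</thread-pool>

<!-- Server B: does not list "pool2", so jobs scheduled to "pool2" stayed
     dormant as far as this server was concerned -->
<thread-pool send-to-pool="pool" poll-enabled="true" poll-db-millis="30000">
    <run-from-pool name="pool"/>
</thread-pool>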

These new changes will work for us in our production environment.  Thanks
for implementing them.  I now need to work on tuning the configuration
settings based on the particular jobs and the capacity of our servers.

I did have another question on the thread-pool configuration. Here is a
sample configuration that I was using to do some of the testing.

<thread-pool send-to-pool="pool"
             purge-job-days="4"
             failed-retry-min="3"
             ttl="120000"
             jobs="100"
             min-threads="2"
             max-threads="5"
             wait-millis="1000"
             poll-enabled="true"
             poll-db-millis="30000">
    <run-from-pool name="pool"/>
    <run-from-pool name="testPool"/>
    <run-from-pool name="pool2"/>
    <run-from-pool name="pool3"/>
    <run-from-pool name="pool4"/>
</thread-pool>

Is the "send-to-pool" attribute the default pool that the service engine
uses for any sync and async requests made through the service engine API?

Is there a relationship between the "send-to-pool" attribute and the
"run-from-pool" names, or are they independent of each other?  For example,
if I don't have a run-from-pool element with name="pool", will the default
"pool" still work?

Thanks again for your work on the service engine.  We really appreciate it.

Let me know if you need more feedback on the new changes.



Brett

On Mon, Sep 17, 2012 at 3:50 PM, Adrian Crum <
adrian.c...@sandglass-software.com> wrote:

> Brett,
>
> Any news on this?
>
> -Adrian
>
>
> On 8/18/2012 4:11 PM, Brett Palmer wrote:
>
>> Adrian,
>>
>> I need to update our application to work with the newest code from the
>> trunk.  We did an update a couple months ago so I don't think it will take
>> me long.  I'll try to provide some feedback early next week.
>>
>>
>> Brett
>>
>> On Sat, Aug 18, 2012 at 12:31 AM, Adrian Crum <
>> adrian.crum@sandglass-software.com> wrote:
>>
>>> Brett,
>>>
>>> Any results yet?
>>>
>>> -Adrian
>>>
>>>
>>> On 8/8/2012 12:57 AM, Brett Palmer wrote:
>>>
>>>> Adrian,
>>>>
>>>> Thanks I'll take an update and try it out.
>>>>
>>>> Brett
>>>> On Aug 7, 2012 4:23 PM, "Adrian Crum" <
>>>> adrian.crum@sandglass-software.com> wrote:
>>>>
>>>>> Brett,
>>>>
>>>>> I think I solved your problems with my recent commits, ending with rev
>>>>> 1370566. Let me know if it helps.
>>>>>
>>>>> -Adrian
>>>>>
>>>>> On 8/5/2012 4:53 PM, Brett Palmer wrote:
>>>>>
>>>>>> Adrian,
>>>>>
>>>>>> Thanks for the update.  Here are some feedback points on your listed
>>>>>> items:
>>>>>>
>>>>>> 1. JobPoller gets out-of-memory errors.  We've seen this a lot in
>>>>>> production servers when the JobSandbox table is not constantly pruned
>>>>>> of old records.  It would be nice if the poller restricted its search
>>>>>> to only the active records it could process.
>>>>>>
>>>>>> 2. A queue for capturing missed records would be good.  From item 1
>>>>>> above, we have had locks on the table when the poller is busy doing a
>>>>>> scan and new jobs cannot be added or they time out.
>>>>>>
>>>>>> Other wish items:
>>>>>>
>>>>>> - Ability to assign different service engines to process specific job
>>>>>> types.  We often run multiple application servers but want to limit
>>>>>> how many concurrent jobs are run.  For example, if I had 4 app servers
>>>>>> connected to the same DB, I may only want one app server to service
>>>>>> particular jobs.  I thought this feature was possible, but when I
>>>>>> tried to implement it by changing some of the configuration files it
>>>>>> never worked correctly.
>>>>>>
>>>>>> - JMS support for the service engine.  It would be nice if there was
>>>>>> a JMS interface for those that want to use JMS as their queuing
>>>>>> mechanism for jobs.
>>>>>>
>>>>>>
>>>>>> Brett
>>>>>>
>>>>>> On Sun, Aug 5, 2012 at 6:21 AM, Adrian Crum <
>>>>>> adrian.crum@sandglass-software.com> wrote:
>>>>>>
>>>>>>> On 8/5/2012 11:02 AM, Adrian Crum wrote:
>>>>>>>
>>>>>>>> I just committed a bunch of changes to the Job Manager group of
>>>>>>>> classes. The changes help simplify the code and hopefully make the
>>>>>>>> Job Manager more robust. On the other hand, I might have broken
>>>>>>>> something. ;) I will monitor the mailing list for problems.
>>>>>>>>
>>>>>>>> I believe the JobPoller settings in serviceengine.xml (the
>>>>>>>> <thread-pool> element) should be changed. I think min-threads should
>>>>>>>> be set to "2" and max-threads should be set to "5". Creating lots of
>>>>>>>> threads can hurt throughput because the JVM spends more time
>>>>>>>> managing them. I would be interested in hearing what others think.
>>>>>>>
>>>>>>> Thinking about this more, there are some other things that need to be
>>>>>>> fixed:
>>>>>>>
>>>>>>> 1. The JobPoller uses an unbounded queue. In a busy server, there is
>>>>>>> the potential the queue will grow in size until it causes an
>>>>>>> out-of-memory condition.
>>>>>>> 2. There is no accommodation for when a job cannot be added to the
>>>>>>> queue - it is just lost. We could add a dequeue method to the Job
>>>>>>> interface that will allow implementations to recover or reschedule
>>>>>>> the job when it can't be added to the queue.
>>>>>>> 3. There is a JobPoller instance per delegator, and each instance
>>>>>>> contains the number of threads configured in serviceengine.xml. With
>>>>>>> the current max-threads setting of 15, a multi-tenant installation
>>>>>>> with 100 tenants will create up to 1500 threads. (!!!) A smarter
>>>>>>> strategy might be to have a single JobPoller instance that services
>>>>>>> multiple JobManagers.
>>>>>>>
>>>>>>> -Adrian
