> On Oct 8, 2017, at 4:08 AM, David Adams via 4D_Tech <[email protected]> 
> wrote:
> 
> I've posted about my various worker log problems here and on some other
> threads and even posted my sample database for review. Lately, John
> Baughman has really been sinking his teeth into this issue and has spent a
> *ton* of time working on it….snip…John might want to explain his point of 
> view. 


    OK, here ya go. Some may find what I learned while testing David's example 
DB useful. When he sent me the LogWriter database, he said it was "blowing up" 
after running for a while, and I took it as a challenge to find out why.

    The test in LogWriter simply starts 10 processes in a for loop. The first 
process creates a worker process, and each of the 10 processes calls this worker 
process 1 million times in a for loop. The worker process's only task is to write 
a log line to the same text file on the hard drive. So if all 10 processes run to 
completion, the worker will have been called 10 million times and the text 
file will have 10 million log lines. All 10 processes call the worker 
simultaneously.
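For readers who don't have 4D handy, here is a scaled-down Python analogue of that test setup (the names and counts are mine, not from David's project): several producer threads flood a single worker, which handles one call at a time from a shared queue.

```python
import queue
import threading

NUM_PRODUCERS = 10
CALLS_PER_PRODUCER = 1000      # the 4D test used 1,000,000 calls per process

work_queue = queue.Queue()     # stands in for the 4D worker's call queue
log_lines = []                 # stands in for the text file on disk

def worker():
    # Like a 4D worker, this handles exactly one queued call at a time.
    while True:
        msg = work_queue.get()
        if msg is None:        # sentinel: shut the worker down
            break
        log_lines.append(msg)

def producer(pid):
    for i in range(CALLS_PER_PRODUCER):
        work_queue.put(f"producer {pid} call {i}")

worker_thread = threading.Thread(target=worker)
worker_thread.start()
producers = [threading.Thread(target=producer, args=(p,))
             for p in range(NUM_PRODUCERS)]
for p in producers:
    p.start()
for p in producers:
    p.join()
work_queue.put(None)
worker_thread.join()
print(len(log_lines))          # 10 * 1000 = 10000 lines logged
```

At full scale the producers run far ahead of the lone consumer, which is exactly the queue build-up described below.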

    A worker process handles only one call at a time, so pending calls form a 
queue and are handled in turn. I found that the LogWriter test was building a 
huge queue almost instantly, quickly growing past 100,000 calls. The crash 
occurred when the queue reached something in the neighborhood of 2,101,000 
calls. This was 100% repeatable.

     You might be asking how I know how many calls are in the queue, since there 
is no 4D function that returns the queue size. I used a global <>CallCount 
variable, which I incremented every time the 4D command Call Worker was executed 
and decremented every time a call worker method completed execution. 
<>CallCount thus represents all the calls that have not yet completed... 
the queue. <>CallCount was then monitored during the test and its value 
posted as the log event to the disk file. When the database crashed, the last 
line in the text file was always 2,101,000 and change.
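The <>CallCount idea can be sketched in Python (again an analogue, not 4D code): bump a shared counter when a call is enqueued, decrement it when the worker finishes the call, and the counter at any instant approximates the queue depth.

```python
import queue
import threading

q = queue.Queue()
lock = threading.Lock()
in_flight = 0                  # plays the role of <>CallCount
peak = 0                       # deepest backlog observed during the run

def call_worker(msg):
    # Increment on every "Call Worker", before the call is enqueued.
    global in_flight, peak
    with lock:
        in_flight += 1
        peak = max(peak, in_flight)
    q.put(msg)

def worker():
    # Decrement each time a queued call finishes executing.
    global in_flight
    while True:
        msg = q.get()
        if msg is None:        # sentinel: shut down
            break
        with lock:
            in_flight -= 1

worker_thread = threading.Thread(target=worker)
worker_thread.start()
for i in range(5000):
    call_worker(i)
q.put(None)
worker_thread.join()
print(in_flight, peak)         # in_flight ends at 0; peak shows the backlog
```

The lock makes the counter safe here; in 4D, a bare interprocess variable has no such protection, which is why the caveat about thread safety below matters.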

    The crash was fixed by letting the queue drain every time it reached 
100,000 calls. This was done by setting a semaphore that caused the 10 
processes to pause calling the worker until the semaphore was cleared; the 
semaphore was cleared when the call count came back down to 50,000 calls. The 
test then ran to completion, taking close to 3 hours. I repeated the test 3 
times without any crashes. Note that 100,000 and 50,000 are numbers I picked 
out of the air. I experimented with different combinations and could not see 
any difference in speed of execution.
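This high-water/low-water throttle can also be sketched in Python. The 4D fix used a semaphore; here a threading.Event plays that role, and the thresholds are scaled-down stand-ins for the 100,000 / 50,000 values above.

```python
import queue
import threading

HIGH, LOW = 1000, 500          # stand-ins for the 100,000 / 50,000 thresholds
q = queue.Queue()
lock = threading.Lock()
in_flight = 0                  # the <>CallCount-style counter
gate = threading.Event()       # set = producers may call the worker
gate.set()
handled = 0

def call_worker(msg):
    global in_flight
    gate.wait()                # paused here while the queue drains
    with lock:
        in_flight += 1
        if in_flight >= HIGH:
            gate.clear()       # queue too deep: pause all producers
    q.put(msg)

def worker():
    global in_flight, handled
    while True:
        msg = q.get()
        if msg is None:        # sentinel: shut down
            break
        handled += 1
        with lock:
            in_flight -= 1
            if not gate.is_set() and in_flight <= LOW:
                gate.set()     # drained to the low mark: resume producers

worker_thread = threading.Thread(target=worker)
worker_thread.start()
producers = [threading.Thread(target=lambda: [call_worker(i) for i in range(500)])
             for _ in range(10)]
for p in producers:
    p.start()
for p in producers:
    p.join()
q.put(None)
worker_thread.join()
print(handled, in_flight)      # all 5000 calls handled, counter back to 0
```

Because only the worker decrements the counter, the gate is always re-opened once the backlog falls to the low mark, so the producers can never deadlock.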

    What about Open/Close Document and Kill Worker which we have been beating 
to death on the NUG? 

1. Open/Close Document: I think we all agree, even David, that it is now and 
has always been our responsibility to close documents as quickly as possible 
after opening them. The test database was closing and reopening the text file 
every 100,000 calls to the worker. I left this in, as I suspected that it had 
nothing to do with the crash, and it did not. Actually, in my modified test the 
file was being closed and reopened when the queue reached 100,000 calls instead 
of after every 100,000 worker calls.

2. Kill Worker: After seeing how quickly the queue builds, I realized that 
killing a worker is a really bad idea unless you can be sure that there are no 
calls waiting in the queue. In LogWriter the worker was being killed and 
restarted after every 100,000 calls, which abandoned hundreds of thousands of 
calls in the queue.
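One safer shutdown pattern (a Python sketch of the general idea, not a 4D API) is to send the worker a "quit" message instead of killing it: because the quit request joins the back of the queue, every call already waiting is handled before the worker stops.

```python
import queue
import threading

q = queue.Queue()
handled = []

def worker():
    while True:
        msg = q.get()
        if msg == "quit":      # the quit request waits its turn in the queue
            break
        handled.append(msg)

worker_thread = threading.Thread(target=worker)
worker_thread.start()
for i in range(100):
    q.put(i)
q.put("quit")                  # nothing queued ahead of it is abandoned
worker_thread.join()
print(len(handled))            # all 100 pending calls were handled first
```

Killing the thread outright (the analogue of OB KILL WORKER) would have thrown away whatever was still queued.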

   OK, what did I learn from this exercise…

1. A worker queue has a finite size limit which must be managed if a worker is 
expected to be flooded with calls.

2. 4D must provide a function call that returns the size of a worker's queue… 
something like OB Worker Queue Size(“WorkerName”)… Using a global variable as I 
did is a non-starter if you want everything to be thread safe. David mentioned 
that Brent Raymond came up with a thread-safe way to do this involving multiple 
worker processes. It should not be that hard. Just give us a 4D function for 
this purpose.
 
3. OB KILL WORKER: use with a great deal of caution.

4. The documentation for Workers needs to contain some discussion of worker 
queue size limits.

5. David certainly has valid issues with 4D’s implementation of Workers. But 
David’s implementation in his LogWriter project is extreme and apparently this 
is only one of his complaints. I for one have found workers to be extremely 
useful. For me, as long as one understands the limits of 4D’s implementation, 
Worker processes in 4D are ready for prime time.


John

John Baughman
Kailua, Hawaii
(808) 262-0328
[email protected]





**********************************************************************
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:[email protected]
**********************************************************************