> Thanks. Nobody ever mentions that when discussing the feature.

Yes. Has anyone heard whether this is an expected feature in a future
version? That would seem like a fair thing to expect.

I'm obviously a lot more into headless and distributed applications than
anything else, so I look at networked 4D systems architecture differently
than some. (But not all.) If I wanted to set up a tolerably stable,
high-performance 4D Server system, it would look like this:

4D Server
4D stand-alone merged apps
Some kind of communications layer

In that setup, all of the machines are preemptive capable.
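
By "preemptive capable" I mean the usual requirements since v15: compiled
code, methods flagged as preemptive-capable, and nothing but thread-safe
commands inside them. As a minimal sketch (the method name is just a
placeholder of mine), a preemptive-capable job method in project mode
looks roughly like this, with the first-line attribute comment doing the
flagging:

  //%attributes = {"preemptive":"capable"}
  // CRUNCH_JOB: placeholder name for the discrete, CPU-bound task.
  // It only truly runs preemptively when the code is compiled and every
  // command it calls is thread-safe; otherwise 4D quietly falls back to
  // cooperative scheduling for that process.

  C_TEXT($1;$jobID)
  $jobID:=$1

  // ...heavy, self-contained number crunching on $jobID goes here...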

With that said, what I would *really* like to hear are real-world stories
from people getting some advantage out of preemptive mode. And details
about their approach.

Taking a page from Miyako's book, I think that a sensible starting strategy
for a regular 4D Server would be:

* Pick a discrete, time-consuming, CPU-bound task, of course. (Nothing else
could ever be worth the bother.)
* Run the code using Execute on server. (A rough sketch follows below.)

That rules out CALL WORKER, but that's okay. I suspect we're all pretty
careful about what sort of code runs on the server machine!
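
To make that second bullet concrete, here is roughly what the launching
side looks like; a minimal sketch only, with CRUNCH_JOB and the job ID
being my own placeholders:

  // Client side: push the heavy method onto the server machine.
  // Execute on server returns the number of the process it creates there.
  C_LONGINT($serverProcess)
  C_TEXT($jobID)

  $jobID:="JOB-0042"  // whatever identifies the work: a document path, a record key...
  $serverProcess:=Execute on server("CRUNCH_JOB";512*1024;"Crunch "+$jobID;$jobID)

Whether that server process actually runs preemptively still depends on
CRUNCH_JOB passing the compiler's thread-safety check, of course.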

To communicate with the process on the other CPU, I'd use any number of
old-fashioned tricks. Honestly, I'd be picking jobs that can be described
in either a document or a record. Then the task runner would periodically
poll for new work in either a watch folder or a job table. If we're talking
about large tasks, the polling overhead should be negligible by comparison.
In fact, if the jobs
are large and rare, just run them on demand in a new process and then let
that process die.
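
For the job-table flavor, a bare-bones poller might look something like
this; the [Jobs] table, its fields, and the five-second nap are all
invented for the example, and the poller itself does not need to be
preemptive since the heavy lifting happens in the spawned processes:

  // JOB_POLLER: started once (for example via Execute on server) and left running.
  C_LONGINT($i;$jobProcess)

  While (True)
    READ WRITE([Jobs])
    QUERY([Jobs];[Jobs]Status="pending")
    FIRST RECORD([Jobs])
    For ($i;1;Records in selection([Jobs]))
      // Hand each job its own process; the process dies when CRUNCH_JOB returns.
      $jobProcess:=New process("CRUNCH_JOB";512*1024;"Job "+[Jobs]JobID;[Jobs]JobID)
      [Jobs]Status:="dispatched"  // CRUNCH_JOB flips it to "done" when it finishes
      SAVE RECORD([Jobs])
      NEXT RECORD([Jobs])
    End for
    DELAY PROCESS(Current process;5*60)  // ticks: roughly a five-second nap
  End while

The watch-folder version is the same shape, just swapping the QUERY for a
DOCUMENT LIST over the folder.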

But how best to proceed would depend on circumstances, of course.