On Wed, Nov 2, 2016 at 10:00 AM, Tim Nevels <timnev...@mac.com> wrote:
On Nov 1, 2016, at 4:49 PM, David Adams wrote:

> There's a note about docrefs being shared in cooperative threads and not
> shared in preemptive threads. I'm not at all clear what that means. I
> would expect you would pass paths anyway...isn't that what everyone does
> now anyway?

>> I think this means you do something like:

>> <>docRef_h:=Open document("logfile.txt")

>> in one process. Then in another process you can do:

>> SEND PACKET(<>docRef_h;"a message for the log file")

>> So you open the log file once in your On Startup method.
>> Then from every process that is running you can just do a
>> SEND PACKET to the already open file.

Who knew? Great trick. A good alternative (but more complicated) is to use
a memory-based message queue and send all of the log data to a single
process that has the file open. This is a perfect use of CALL WORKER.
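
To sketch that (worker name "logger" and method name "LOG_Write" are my
own invention, untested), any process would just post its message:

   CALL WORKER("logger";"LOG_Write";"a message for the log file")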

> Of course the change is easy. Just use APPEND TO DOCUMENT command and,
> as you said, pass a file path instead of a DocRef to the worker.

Hmm. That might cause a lot of thrashing in the file system. You could just
pass the message and let the CALL WORKER keep the file open exclusively
during its lifetime.
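
Roughly, the worker method could look like this (an untested sketch;
"LOG_Write" is my name for it). A worker keeps its process variables
between calls, so the DocRef stays open for the worker's lifetime:

   // LOG_Write -- hypothetical worker method
   C_TEXT($1)  // the log line passed by CALL WORKER
   C_TIME(vhDoc)  // process variable; survives between calls

   If (vhDoc=?00:00:00?)  // first call: open the file once, at the end
      vhDoc:=Append document("logfile.txt")
   End if
   SEND PACKET(vhDoc;$1+Char(Carriage return))

The file gets opened exactly once; you'd only CLOSE DOCUMENT(vhDoc) when
you shut the worker down.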

> I can see how each preemptive worker would not have access to open files
> in other processes, cooperative or preemptive.

4D is delegating to the OS here, so there are real limits it can't work
around. It sounds like they're sheltering us from a whole lot of hassles,
in a good way.

> I have a client that I know for sure will upgrade to v16 early next year.
> The very first thing I will do when I upgrade to v16 is make sure that
> all the triggers are running preemptive. That's step #1. Get those
> triggers to run as fast as possible on the server. Everyone should
> benefit from that.

I gave up on complex triggers in 4D years and versions ago ;-(

> Step #2 will be to try and redesign some of the queries and reports that
> now take many seconds to run. I'll break the queries and report
> calculations into pieces and do each piece in a worker and then at the
> end combine the results. Now it all runs in 1 core. Maybe breaking it
> into 4 parts and running it in 4 cores simultaneously will result in
> faster execution time.

Not necessarily applicable in your case, but do you have a way of storing
summary data for reporting purposes that is accurate (enough) for your
needs? I'm big on well-normalized structures (ever more so)...but I don't
see all tables as relational. Lots of data analysis tools use very
different topologies and rules for the purposes of reporting. I don't mix
extracted data and live data in the same tables, but I definitely have
tables that are pre-renderings of complex calculations. That's about the
only way to make some reports feasible in, for example, a Web environment.
Lots of us have data that is 100% static - like past quarterly data. It
isn't even allowed to change. Grind it up and boil it down!
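
For example (hypothetical table and field names), a nightly method might
grind a closed quarter into a pre-rendered totals table:

   // Pre-render a static quarter into [QuarterTotals] -- sketch only
   QUERY([Sales];[Sales]Quarter="2016-Q3")
   ARRAY REAL($aAmount;0)
   SELECTION TO ARRAY([Sales]Amount;$aAmount)
   CREATE RECORD([QuarterTotals])
   [QuarterTotals]Quarter:="2016-Q3"
   [QuarterTotals]Total:=Sum($aAmount)
   SAVE RECORD([QuarterTotals])

Reports then read [QuarterTotals] instead of re-crunching [Sales] every
time.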

> So the idea is the "CalculateTotals" method at the beginning of the
> report method that builds a bunch of arrays of totals will be rewritten
> to be preemptive. The rest of the report method will not need to be
> changed. "CalculateTotals" is where all the work will be done. I'll set
> it to "execute on server". Then split things into 4 pieces, call 4
> workers, wait for them all to finish and then combine everything into a
> single set of arrays.

Cool, please report back on what you find. There will also be a break-even
point where the extra hassle of marshalling all of those workers outweighs
the gain, so you may get some real-world sensitivity for that.
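
FWIW, here's roughly how I'd picture the v16 plumbing (all method and
worker names invented, and untested). One wrinkle: preemptive code can't
touch interprocess variables, so the pieces hand their results to a
cooperative collector worker instead:

   // SpawnPieces -- fans out the four pieces
   C_LONGINT($i)
   For ($i;1;4)
      // each worker runs preemptively if CalcPiece is thread-safe
      CALL WORKER("calc_"+String($i);"CalcPiece";$i)
   End for

   // CalcPiece -- marked "Can be run in preemptive processes"
   C_LONGINT($1)  // which quarter of the data to total
   C_REAL($vTotal)
   // ...query this slice and accumulate $vTotal...
   CALL WORKER("collector";"CollectPiece";$1;$vTotal)  // thread-safe hand-off

   // CollectPiece -- cooperative worker; combines the four results
   C_LONGINT($1)
   C_REAL($2)
   aPieceTotal{$1}:=$2  // process array owned by the collector worker

How the report method knows the collector has all four pieces is the
fiddly part -- a counter in the collector plus polling, or signals in
later releases.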
**********************************************************************
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**********************************************************************
