On Mon, Sep 25, 2017 at 6:06 PM, Jeremy Roussak via 4D_Tech <
4d_tech@lists.4d.com> wrote:

> Please forgive me if I’m being dim, but isn’t a solution (maybe not the
> best, but a solution) to maintain the log as records in a table, which is
> periodically emptied into a file by a process which opens, writes and
> closes that file, then deletes the records in the table?
>
> It’s not an approach which I’ve tried, and you may have good reasons for
> rejecting it. I’m just curious to know what they are.
>
>
Jeremy,

Using a 4D table as a sort of cache for log lines is a perfectly sensible
idea, but also one I'd like to avoid. I did this some years back in a very
high-volume system and ended up killing a lot of performance as a
consequence. (Just ask Justin Leavens.) Now, you could take steps to reduce
the network activity involved, to be sure, but there are still a few points
that come to mind:

* 4D doesn't do well (or at least hasn't historically done well) with
tables that get thrashed a lot. Add-delete-add-delete. Gets really slow and
horrible. So then you have to do compacts, or hijinks with reusing records,
or all kinds of special magic with never deleting records. Life is too
short for any of that, if at all avoidable.

* There's no _real_ reason to have the data in 4D at any point. 4D isn't an
analysis platform, and it isn't something I'd use for
high-volume/low-data-content material like log entries, so the only role the
table would play here is as a cache. I might consider a small table for
holding lines that failed to write out to the log file, but only if the
failure rate were pretty low.

* With a post-to-table-then-clear log, you also need a process to poll the
table. You can do that relatively cheaply, but it's not free.

* Potentially lots of network traffic for zero gain.

But there are tons of ways to think about this sort of thing, and lots of
different setups and requirements. My judgment calls and current
constraints are likely different from what other people are dealing with, so
there are plenty of good arguments for any particular design. I'm just
trying to be clear that while I'm not keen on the approach you suggest in
this case, it's not because I think it's inherently bad - I've used it
before. It's just that in this case I don't want the log data in 4D, if at
all possible. I see that as pure cost with no gain.

Hopefully, someone will find something silly in what I'm doing, because the
CALL WORKER idea is really very nice. You have n processes doing their thing -
running code, serving Web requests, handling windows, etc. Then they can
send log data (events, behaviors, errors, etc.) over to a central log
worker. Using CALL WORKER, this is quite fast. (4D can stack up tons of
these little requests quite well.) From there, you've got one process
handling all of the log data. A single external file for a particular log
is perfect at this point: it's fast and super easy to use. The whole point
of funneling everything through a single process is to avoid contention on
the file amongst processes, which can otherwise be a *huge* and totally
unnecessary bottleneck. So: one process for logging, which keeps the file
open in read-write all the time. So good. Such a standard design. Simple,
clean, cheap, efficient... doesn't work in 4D. Unless there's a bug in my bug
report ;-)
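
For anyone who wants to picture it, here's a minimal sketch of that design in
classic 4D syntax. The worker name ("log_worker"), the method name
(LOG_APPEND), and the file name are made up for illustration, and this is the
pattern that runs into the problem from my bug report - it's not a fix for it:

  // Caller side - from any process in the application:
  // fire-and-forget one pre-formatted log line to the logging worker.
  // (CALL WORKER creates the "log_worker" process if it isn't running yet.)
CALL WORKER("log_worker";"LOG_APPEND";"2017-09-25 18:06:12 - something worth logging")

  // LOG_APPEND (project method) - only ever executes inside "log_worker",
  // so this is the one process that touches the log file.
  // $1 : Text - one pre-formatted log line
C_TEXT($1;$line)
$line:=$1

  // logDoc and logDoc_open are process variables; the worker process stays
  // alive between CALL WORKER calls, so the document is opened once and then
  // kept open for the life of the worker.
C_TIME(logDoc)  // DocRefs are declared as time
C_BOOLEAN(logDoc_open)
C_TEXT($path)
$path:="app_log.txt"  // real code would build a full path and add error handling

If (Not(logDoc_open))
	If (Test path name($path)=Is a document)
		logDoc:=Append document($path)  // open the existing file, positioned at the end
	Else
		logDoc:=Create document($path)  // first run: create it
	End if
	logDoc_open:=(OK=1)
End if

If (logDoc_open)
	SEND PACKET(logDoc;$line+Char(Carriage return))
End if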
