> Not an anomaly.

John & Neil, thanks for the reports; I find them encouraging.

We're dealing with somewhat different situations. You're going through US
Tech Support, which I have no contact with, and you're using the TAOW,
which we don't currently have access to in AU/NZ. The business case for 4D
devoting resources to rolling their own bulletin boards and help desk
software instead of renting tools from Atlassian (Aussie! Aussie! Aussie!
Oi! Oi! Oi!) or similar escapes me...but that's a different discussion.

Anyway, our tech support is Wayne Stewart. Everyone loves Wayne. If I made
anyone think that I was criticizing Wayne, I am sorry and also sad. Wayne
is helpful *to a fault*. Even now he's trying to come up with a workaround
for my particular bug. But here's the thing: it's a *concurrency problem*
and it is *inside of 4D*. That's not a problem we can fix. The only thing
Wayne can do (and did months ago) is kick it to France. It's their bug
and only they can fix it. Wayne told me that they accepted it as a bug
months ago, and that's the last we heard.

Why do I care about this bug? No one here will have noticed, but I don't
talk about bugs a whole lot. For the most part, I don't care. If 4D can't
do it one way and I can find another, I'll do that. If 4D can't do it, I'll
find another tool. In this case I *do* care. Here's why: It's an important
bug that blocks certain designs and leaves me mistrusting workers and
preemptive mode quite fundamentally.

My problem boils down to how workers deal with file locks. In theory, a
worker processes requests in strict order. (They're not really requests,
they're EXECUTE statements run in the context of the worker, but never
mind.) So, if you close a file in a worker, it should be closed
completely. If you kill the worker and it restarts, the file close done
explicitly in the worker should finish before the worker dies. Managing
file locks across processes is the sort of basic concurrency problem that
was worked out over 50 years ago. And when I say "worked out", I don't
mean "dude wrote some code", I mean the basic reality and mathematics of
concurrency were worked out. In that process, the semaphore was invented.
Why? Because it is necessary in many situations.
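
To make the expectation concrete, here's a minimal sketch of the pattern
that should, by that theory, be safe. The worker name "Logger", the method
name LogWriter, and the file name are made up for illustration:

    // LogWriter -- the worker method (hypothetical)
    // Calls to a worker queue up and run in order, so this close
    // should always complete before the next call's open.
    C_TEXT($1)  // the line of log text to append
    C_TIME($vhDocRef_h)  // a DocRef is time-typed in classic 4D

    $vhDocRef_h:=Append document("HTTPD.log")  // open (or create) the log
    SEND PACKET($vhDocRef_h;$1+Char(Carriage return))
    CLOSE DOCUMENT($vhDocRef_h)  // the OS file lock should release here

Any process that wants to log a line just queues it:

    CALL WORKER("Logger";"LogWriter";"GET /index.html 200")

It's that guarantee, especially across a worker being killed and
restarted, that I'm seeing violated.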

Are semaphores hard to get your head around? Yes. But what's _really_
hard to grasp is concurrent computing itself. I remember, back when V3
introduced semaphores, how many smart people just could not accept that
they were necessary to lock shared resources. Some people _still_ don't
seem to grasp them (there's a sketch of the basic guard after the list
below). Here are the options that you've got for a situation where a race
condition might occur:

1) You develop a system that is *provably* never going to have the problem,
ever.
2) You are inevitably going to encounter the problem.

Because science. There is no third option other than "it doesn't seem
likely, so I'll risk it." See outcome 2.
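
For the record, the guard itself is simple. A minimal sketch, with a
made-up local semaphore name:

    // The classic guard around a shared resource. Semaphore returns
    // False when we acquire the lock, True while someone else holds it;
    // 300 is a tick-count timeout for each attempt.
    While (Semaphore("$MyLog_Lock";300))
    End while
    // ...critical section: touch the shared resource here...
    CLEAR SEMAPHORE("$MyLog_Lock")  // always release, or everyone else waits

The code is the easy part; accepting *why* it has to be there is the part
people fight.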

In my case, I'm architecting some stuff that is going to be very high
volume, so option 2 isn't acceptable; I need a truly safe solution. I
stumbled across the file lock problem *by accident*. I don't want this bug
to exist. But here's the thing: if what we've been told about workers and
preemptive processes is all correct, then this bug is *impossible*. But
it's there. So something is most definitely wrong. Is the problem very
narrow? Is it widespread? I have no way of knowing, and 4D has told me
exactly nothing. (I asked on the Forums but was a) told nothing or b)
accused of trying to "sabotage" the command, whatever that means.) So, I
have to go with what I can see:

    It is not safe to rely on file open/close in workers.

I tried a delay and a bunch of other things, and they don't solve the
problem. I guess never closing the file might work, but that's not an
option. (I don't want a 2 GB HTTPD log file, thanks.) And here's the
thing: there is no workaround to a concurrency bug.
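
For the curious, the delay was a band-aid along these lines (a sketch,
not my actual code):

    // Wait before reopening, hoping the stale lock from the dead
    // worker has cleared. It does not work reliably.
    DELAY PROCESS(Current process;60)  // 60 ticks = 1 second
    $vhDocRef_h:=Append document("HTTPD.log")  // can still hit the lock

A timing patch over a race condition only narrows the window; it can't
close it.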

It's 4D's bug, only they can fix it.

So, in my case, to get my custom log data out I have to avoid files
entirely. Again: high volume + possible race condition = inevitable race
condition. I've got two different architectures (the first is sketched in
code below):

Source process ---> CALL WORKER ---> HTTP Request ---> Logging platform

Or

[Log_Record] ---> Read with a standard process ---> Write to a log file

There is so much data involved that log records aren't a sustainable
solution. (The last time we went down this road we killed 4D and then
InnoDB in MySQL. Postgres, you're next!)
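
Here's the first architecture sketched in code. The worker name, method
name, and URL are placeholders:

    // Anywhere in the app: queue the line, never touch a file.
    CALL WORKER("LogShipper";"LogShip";$logLine_t)

    // LogShip -- the worker method (hypothetical)
    C_TEXT($1)  // the log line to ship
    C_TEXT($response_t)
    C_LONGINT($status_l)

    // POST the line to the logging platform; no file handle, no lock.
    $status_l:=HTTP Request(HTTP POST method;"https://logs.example.com/ingest";$1;$response_t)

No file handles means no file locks, which is the whole point of avoiding
files.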

In case I've been unclear: the lock problem manifests differently in
cooperative and preemptive processes, but occurs in both.

There is no workaround, only alternative architectures and an actual fix
from 4D.

My frustration is not with tech support (Wayne's great), it's with France
not fixing their bugs. And, as it has _always_ been for me, they're an
informational black hole. I don't get any information from them; I've
heard back a few times over the years, ever, and that's it. So, how am I
to know if they're working on this? How do I know anything about it? I
don't.

I think people may at times imagine that I have some special relationship
with France. I most certainly do not and never did. I worked in the US
office for 4 years, a very long time ago. I didn't get help from France. I
wrote a bunch of books, but that was without any help from anyone at 4D; I
only wrote what I could demonstrate. In fact, France actively tried to
suppress my business for many years. (Brendan, Damon, Thomas Maul and many
other regional directors were very kind and helpful to me over the years,
for sure.) So, very possibly, they're ignoring an important bug for no
other reason than that I reported it.

Unfortunate.

On Thu, Oct 5, 2017 at 4:40 PM, Dennis, Neil via 4D_Tech <
4d_tech@lists.4d.com> wrote:

> > Now in the context of this thread, this appears to be an anomaly?
>
> Not an anomaly.
>
> I have found the same great service, even with a tough bug... we had a
> problem with web packets not being delivered. We had Wireshark captures
> and web logs showing the problem but could not reproduce it outside of
> our environment.
>
> I submitted the issue to 4D; they worked with me, and within a few days
> they had a system on their end that could reproduce the problem and
> reported it as a bug. I'm expecting 16.3 to have the fix.
>
> Other bugs I have submitted have also been fixed in a timely manner, and
> I was kept in the loop throughout the process.
>
> I think they are doing a great job.
>
> Neil
>
**********************************************************************
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**********************************************************************
