Re: Benchmark Speed Test Method

2019-04-05 Thread steve simpson via 4D_Tech
On Fri, Apr 5, 2019 at 3:00 PM, Neal Schaefer wrote:

> [snip]
>
> We're getting a new Windows 2016 server for our 4DServer, and before I
> migrate, I'd like to write a benchmark speed test to run before and after.
> I'd like to measure create, edit, delete records, processing, IO, file
> copying, and other relevant functions. We're also migrating from v16.6 to
> v17 later in the year, and I'd like to run it again before and after the
> upgrade. I'm wondering if anyone has a method they've written for this
> purpose that they might be willing to share?
>
We'd be very interested in that too. And Neal, I hope you share your
results when finished.
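
In the meantime, for anyone starting on such a method: the create/save
leg can be timed in a few lines of 4D. A minimal sketch, assuming a
scratch table [Bench] with one text field (names invented):

  C_LONGINT($i;$t0;$elapsed)
  $t0:=Milliseconds
  For ($i;1;10000)
    CREATE RECORD([Bench])
    [Bench]Label:="row "+String($i)
    SAVE RECORD([Bench])
  End for
  $elapsed:=Milliseconds-$t0  // ms to create and save 10,000 records

Run the same method before and after the migration (and before and after
the v16.6 to v17 move) and compare the numbers.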

Also, has anyone moved up to the v17 web server application yet? (Not
"web area", but rather the full "web server application".) Can you tell
how it compares to v15, please? Any issues one should be aware of?
-
Stephen Simpson
Cimarron Software
**
4D Internet Users Group (4D iNUG)
Archive:  http://lists.4d.com/archives.html
Options: https://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**

RE: Server Monitoring...

2018-07-17 Thread steve simpson via 4D_Tech
https://uptimerobot.com/

Pretty robust; all kinds of triggers (IP, port, URL, keyword, etc.); easy
to configure. Up to 50 sites free. Bulletproof over the last few years
for several dozen projects.
-
Stephen Simpson
Cimarron Software
**
4D Internet Users Group (4D iNUG)
Archive:  http://lists.4d.com/archives.html
Options: https://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**

v17 ready for prime time?

2018-07-16 Thread steve simpson via 4D_Tech
I see that a lot of you are using v17, doing active development with it,
testing it out with current projects, etc. I don't normally move production
projects into a new version until the xx.2 or xx.3 iterations, but am
wondering if some of you who are actually using v17 in production could
chime in with your opinion as to whether it is safe to move straight from
v15.5 to v17 now, or best to move first to v16 and wait for a couple more
updates to v17. Is it ready?

-
Stephen Simpson
Cimarron Software
**
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: https://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**

Re: What do you use to monitor your offsite servers?

2018-07-16 Thread steve simpson via 4D_Tech
On Mon, Jul 16, 2018 at 2:01 PM, John Baughman wrote:

> I really like Microsoft Remote Desktop for Mac which can be downloaded for
> free from Microsoft. They regularly update it. Clipboards are shared and
> you can designate one or more folders on your Mac as network drives which
> automatically mount when you log in.
> [snip]
>
>
+1
--
Stephen Simpson
Cimarron Software
**
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: https://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**

Re: Save Record($vptr_Table->)

2018-06-10 Thread steve simpson via 4D_Tech
On Sun, Jun 10, 2018 at 5:01 PM, Jody Bevan wrote:

> The method that this error occurs in is used hundreds of times throughout
> the system. All record saves go through it.
> [snip]
>

Jody, do you happen to have a "mandatory" field in this table that
normally gets filled in but in this one instance is not being populated?

---------
Steve Simpson
Cimarron Software
**
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: https://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**

Re: Document capture

2018-01-25 Thread steve simpson via 4D_Tech
Kenneth, sounds to me like you are on the right path. Much of it depends
on the sheer volume of documents. As some pointed out, a vast number of
documents in a single folder can bog down OS storage systems, although
that is much less of a problem now than it was several years ago. Much
also depends on why you are storing them and what you have to do with
them later. It might come in handy to store the doc type, in case you
need to call a specific routine (or sub-launch a special app, or
whatever) in order to view it directly.

Also, I've found over the years that one can easily create all kinds of
workable routines for the computer (4D) to store and retrieve documents,
but invariably there is a need for a human to go through those folders
trying to find a specific document. So I now try to make my storage
folder mechanism and file naming conventions somewhat human
understandable, or at least logically understandable within the scope of
the application's world, in case something comes up and a human has to
dig through that repository.

Another important consideration is how redundant or robust your storage
needs to be. Depending on volume, I'm a huge fan of storing raw source
documents in AWS S3 buckets. One of our projects is digitizing, indexing,
and providing access to county courthouse documents in both TIFF and PDF
format: millions and millions of documents, many terabytes' worth,
indexed to 4D fields so users can search/sort/report normal 4D digital
records and/or then download the actual digital disk document(s). We
quickly ran out of disk space in normal office and enterprise hard drive
environments, especially with duplicate backups in geo-diverse locations,
since we can't afford to lose that many documents. AWS cloud storage was
the answer for us (and most of the large cloud storage systems are
similar). There are lots of ways to access cloud buckets, depending on
who you go with, but we chose a "CloudBerry Drive" solution
(cloudberrylab.com) so that AWS buckets become available locally as
familiar network mounted drives (and they work with all the large cloud
vendors). Document access and manipulation works as if you are in a local
LAN type world, but with unlimited storage capacity. Surprisingly fast,
efficient, and cost effective.

As for our document naming and placing routines, we obviously start the
folder hierarchy with the county name. Over time we've worked out a
storage system that is well suited to this courthouse world, based on the
year, the month, and the day of the filing date for each document,
combined with the county clerk's "instrument number" (or the vol/page
nomenclature some counties use). Fairly universal stuff in the court
system vertical market, but probably not so meaningful in yours.
Nonetheless, the naming and folder hierarchy logic is an important
element to get right at the beginning.
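
A toy illustration of that hierarchy in 4D code (the county, date, and
instrument number are invented for the example; this is not our
production routine):

  C_TEXT($path)
  C_DATE($filed)
  $filed:=Current date  // in production, the document's filing date
  $path:="/Dallas/"+String(Year of($filed);"0000")+"/"
  $path:=$path+String(Month of($filed);"00")+"/"+String(Day of($filed);"00")+"/"
  $path:=$path+"INST-0012345.pdf"  // e.g. /Dallas/2018/01/25/INST-0012345.pdf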

Steve Simpson
Cimarron Software

On Thu, Jan 25, 2018 at 1:24 PM, Kenneth Geiger wrote:

>
> [snip]
>
> I’ve not started prototyping anything yet but I think I’ve got a viable
> approach. The server will have a shared directory with a sub-directory for
> each of their clients. There will be a dialog where the user enters
> information about the document, including a text box where they can enter a
> brief description of the document. The user would then drag-and-drop a scan
> of the document onto the description text box and an “on drop” event would
> trigger a document capture method. This method will have to rename the
> document (the file-name will be created automatically within 4D without
> changing the extension), check that the relevant sub-directory exists on
> the server (and create it if it does not), and then save the renamed file
> to the server.
>
> If any of you have done something similar, I would really appreciate any
> feedback on my approach and would welcome any suggestions, pseudo-code, or
> code that you would be willing to share.
> [snip]
>
**
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**

Re: NB: Pre-emptive mode hassle

2017-10-11 Thread steve simpson via 4D_Tech
On Wed, Oct 11, 2017 at 3:00 PM, Tim Nevels wrote:

> [snip]
>
>
> I sure can. Remember when C_OBJECT variables were introduced? There were
> several developers here that talked about completely overhauling all their
> code to eliminate process and interprocess variables and replace all of it
> with object variables. Code was working fine. They just wanted to use this
> cool new variable type.
> [snip]
>
> I do a lot of UI work with 4D. I love doing UI work. Windows and dialog
> boxes that users deal with all day every day. So 4D is great for a lot of
> the work that I do.
> [snip]
>

+1
_
Steve Simpson
Cimarron Software
**
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**

Re: UUID vs Longint primary key

2017-08-08 Thread steve simpson via 4D_Tech
On Tue, Aug 8, 2017 at 11:00 AM, Chip Scheide <4d_o...@pghrepository.org> wrote:

>
> Worse, I've found that the same product, from the same vendor in
> differing purchase amounts (1 vs case) is the same part number, but
> different pricing! So.. even a check on part numbers is insufficient to
> stop duplicate entries.
>

Well, I believe there are "dupes" and then there are dupes. There are
unacceptable dupes and there are dupes you have to or need to live with.
And then there are "near dupes", like your example above. That is not
"really" a dupe, because there is one field/col (pack size) that is NOT
duplicated (case vs 1). On and on.

This all started with David making a broad blanket statement about "data
integrity" and "row duplication" and how using "synthetic" record ID keys
ruined the ability to automatically filter out "dupes". (I _think_ that
was your point, David. Please correct me if not.) And in that strict
sense, if a "row" is really actually absolutely duplicated, that is
_probably_ bad. Or maybe not, if you didn't include that "pack size"
field that would have changed the row to unique. Or all the other
examples cited on this thread about duplicate names that were not
_really_ duplications; they just needed a little more information
included in the "row" to better define it. In fact, we all used to
experience it right here nearly daily with our Walt Nelson (Guam) vs
Walt Nelson (Seattle) signatures. Constantly confusing without that one
little added tidbit.

So my point was, it all depends. And sometimes you have to design your own
system differently or provide tools within your current system to suss out
what, how, why and when a "dupe" occurred. And how to - or IF to - fix it
or prevent it or even find it.

I too am definitely a convert to using UUID over longint, and to using
them in preference to some construction using row data itself, which may
well change as the business grows/changes and will NOT play well when you
absorb new data sources (buy a competitor, for example, with nearly
identical inventory items; or combine existing standalone data
installations into one big common enterprise bucket; or decide for all
kinds of business reasons to extract a certain batch and combine it with
another batch in a different bucket; etc., etc.). Data duplication of one
sort or another is bound to occur in many of these "growth" scenarios,
and more often than not the merging and cleaning of those dupes is not
reducible to some sort of algorithm without human hands to help. Or
whatever.
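
(Incidentally, 4D can also mint these keys in code, not just via the
field's auto-UUID attribute; a one-liner, with the field name invented:

  [Orders]UUID_Key:=Generate UUID

Handy when retrofitting a key onto a legacy table.)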

As Neil succinctly described:

>  - UUID is faster (due to "random" data in the index)
>  - UUID solves problems with distributed systems that sync
>  - UUID fixes the home grown sequence problem with transactions
>  - UUID is not easily readable by humans and keeps me from being tempted
>    to expose them :)
>
And to Chip's point, I DO sometimes expose those UUIDs as read-only info
on certain Admin Review pages. I sometimes place a button to "copy to
pasteboard" if it is appropriate that the admin might desire to do some
searching with that UUID, for as we all know, they are hellishly
difficult to type. And in many projects I retain that seq longint idea,
because it really IS a useful human marker that is easier than a
date:time stamp to read and quickly sort on and in general "glom" as you
scan down long lists of rows. But I've been burned way too many times to
ever use it again as a unique recordID. I now consider it a user
interface type aid only, still useful as a ProductID or some such in many
cases. But not as a
Unique-Unvarying-Forever-Regardless-Of-Source-Or-Destination-Record-Key.
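
That "copy to pasteboard" button can be a one-line method; a minimal
sketch, with table and field names invented:

  SET TEXT TO PASTEBOARD([Orders]UUID_Key)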

Steve Simpson
**
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**

Re: Preventing Duplicate Data WAS UUID vs Longint primary key

2017-08-07 Thread steve simpson via 4D_Tech
On Mon, Aug 7, 2017 at 3:00 PM, David Adams wrote:

>
>
> > How do you deal with that problem (preventing duplicate data)?
>

Definitely "carefully program your system to detect and prevent
duplicate rows" as appropriate. Such a dupe check can take many forms
depending on the business needs, the data in question, and the data entry
process/environment. I've not yet found a solution that fits all.
Generally it is much easier to do this if the "data entry user" is coming
in via a browser post, where you have the time and space to do more
complicated look-ups. (More and more of my own projects are web front
ends to 4D on the back end.)

I've used constructions similar to John's ContactsDuplicateManager
example as well, although I steer away from storing extra data if I can.
For less immediate data needs, I've found that after-hours "helper"
routines that fire off and run in the background, working through data to
flag dupes for admin oversight the next day, are popular with some
managers who prefer to make their own decision about whether some stuff
really is a dupe or not. Some duplicate data has to be eliminated "right
now", before the record is saved, whereas some might be interesting to
investigate a little more in depth. The whys and wherefores (what caused
it, who caused it, where did it come from and why, is it legit or a real
mistake) can often lead to better processes, better training, better form
design, better import provisioning, or better import pre-cleaning, etc.
It all depends on the project and biz needs.
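
A minimal sketch of the "right now" flavor, as a pre-save check (the
table, fields, and the two-field definition of a dupe are all invented
for the example):

  C_LONGINT($matches)
    // count matches without disturbing the current selection
  SET QUERY DESTINATION(Into variable;$matches)
  QUERY([Products];[Products]PartNumber=$partNumber;*)
  QUERY([Products]; & ;[Products]PackSize=$packSize)
  SET QUERY DESTINATION(Into current selection)
  If ($matches>0)
    ALERT("Possible duplicate: same part number and pack size.")
  End if

$partNumber and $packSize would come from the entry form or web post.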

Steve Simpson
**
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**

Re: UUID vs Longint primary key

2017-08-07 Thread steve simpson via 4D_Tech
On Mon, Aug 7, 2017 at 11:51 AM, <npden...@greatext.com> wrote:

>
> [snip]
>
> On the other hand, you do model the data after business relations, but the
> keys that tie that relation data need/should never be seen in a well
> designed system. If a user readable key is needed by business, then there
> should be another data piece that the user can read (like an MRN, medical
> record number, or an abbreviation that is unique and human readable) But
> these should never be used to link together data in a structure in primary
> key foreign key relation.
> [snip]
>
>
>
+1. My point exactly, only stated much better.

Steve Simpson
**
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**

Re: UUID vs Longint primary key

2017-08-06 Thread steve simpson via 4D_Tech
David and Chuck, point taken about "rows" being unique because of and
created with the containing data of each row -- up to a ... well, "a
point". Like you said, a real pain when real world reality steps in. I
believe the only truism involved here is your statement that "...it then
leaves the task of doing integrity checks and so on to you." Exactly!
Regardless of the way you decide to handle 4D record uniqueness, you will
surely encounter the ongoing need to continually manage integrity "on
your own". Hoping that your "unique key" will do that for you is likely
not realistic, regardless of your method.

I'm sure that many of us have had to eventually abandon the 4D long int
"auto increment" in favor of some sort of crafted version of it when we
expanded to mixing data sources. For instance, you build and distribute an
application that starts out of the gate with recordID #1, then #2, then
#3, etc. with each install. But 5 years later the client wants everyone -
all 4 thousand installs - to start sending data to a central accounting
processing point -- retaining uniqueness across the enterprise. Oops.
Suddenly, you have to make sure that each and every installation has some
sort of retrofitted unique number, something not envisioned when it was
first developed. Whoa. Big problem.

I must say that the only "real" solution to this so far in the 4D world,
for me at least, has been the auto UUID key field. Finally, a way to
create a record (and its children) that can be moved around willy-nilly
without angst. OK, granted, it does not solve the issue of "integrity"
from row to row. But neither does your own example of finding uniqueness
within each row's actual data (i.e. Keller123,1; Keller123,2; Keller123,
etc.); that is a task you have to deal with regardless of how you handle
your own unique key: logical ways, important realistic vertical market
biz logic ways, issues that may be unique to each project, logic that may
change over time. It has NOTHING to do with the record key in and of
itself. Chuck Miller said in response, "...I am not David but I agree
with his assessment. Relational model databases by definition are not
supposed to use keys that have no meaning. They are supposed to create
relations that have meaning." All I can say is that "meaning" per row,
and meaning contained within children rows, especially meaning to
"humans", may not really exist, may have little to cling to, or may even
change as projects expand and morph.

I'm really just reacting to what I experience as "theory" mismatch with
real world reality. We need to separate "record uniqueness" as an idea
from "meaning" of the data within a row, since no matter how you handle
record uniqueness you will still need to craft your own project-specific
mechanisms to ensure "data integrity", which can take many forms and have
many challenges beyond some simple "row" logic. And realize that these
needs will likely change over time as the project morphs. So the first
need is to handle "4D record ID uniqueness" (and in my experience so far,
ONLY UUIDs do that satisfactorily). Then address your project-specific
"data integrity" needs, and understand that you'll have to deal with
those on an ongoing basis, regardless of how you handle record
uniqueness, as your app grows and morphs and expands and begins to
interact with other outside data sources.

And BTW, we all have users who get a lot of info from the visual feedback
of incremental "record numbers". OK, no problem. Just another field:
"record number", far different from "unique internal record KEYID". And
likely understandable and fixable for human needs if dupes are introduced
with data mergers. Just another "user comprehension data integrity" task.

Steve Simpson
Cimarron Software

On Sun, Aug 6, 2017 at 3:00 PM, <4d_tech-requ...@lists.4d.com> wrote:

>
>
> Message: 3
> Date: Sun, 6 Aug 2017 11:03:57 -0400
> From: Chuck Miller
> To: 4D iNug Technical <4d_tech@lists.4d.com>
> Subject: Re: UUID vs Longint primary key
>
> I am not David but I agree with his assessment. Relational model databases
> by definition are not supposed to use keys that have no meaning. They are
> supposed to create relations that have meaning. Even the use of numbers
> breaks the theoretical rule. The problem is that we have pushed all to use
> the relational model rather than what the logical model proposes. Who out
> there remembers hierarchical, tree, or inverted structure models?
>
> Regards
> Chuck

Re: UUID vs Longint primary key

2017-08-06 Thread steve simpson via 4D_Tech
On Fri, 4 Aug 2017 12:52:28 -0700, David Adams wrote:

>
> [snip]
> 4D's UUIDs function as globally unique row *serial numbers*. That's great
> for backups and convenient for physical relations, but it has exactly zero
> to do with a real "primary key" or relational integrity.
> [snip]
>

Care to elaborate on that statement? I don't get why you'd say that.
_
Steve Simpson
**
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**

Re: Someone help me out, how do you get good information out of 4D

2017-07-11 Thread steve simpson via 4D_Tech
[snip] ...work done and write that much unless I never slept and was a
super fast typist. I admire that about you.

But... to get back to your core point, I think you are demanding that
France come up with a product that does everything for everyone. That has
GOT to be frustrating. 4D is not and cannot and will not be everything for
everyone. Everything about 4D screams "user interface usability". NOT "big
data enterprise repository", which seems to be the thing you are into
these days. Yeah, gotta be mutually frustrating. Especially for 4D
internally when confronted with negative feedback from a very savvy, very
intellectual, obviously advanced, and publicly known and admired user like
you. So... you asked my advice: find the products that are made for
what you need, learn them and master them for the task they were invented
for, and don't try to stuff 4D into a usage it is not suited for. No need
to diss it when it cannot, will not, and need not fit your specific current
need when other products are better fitted, invented, and developed
specially for those needs.
-
Steve Simpson
Cimarron Software
**
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**

Re: How to detect EOL character in text file

2017-05-13 Thread steve simpson via 4D_Tech
On Sat, May 13, 2017 at 9:04 AM, Peter Mew wrote:
>
>
> Replace text($text;lf;cr)
> Replace text($text;crlf;cr)
> Position($text;cr)
>

+1. I find this the fastest and simplest method of handling high volumes
of incoming "unknown", "known to be dirty", or "suspect" text (and not
just to find an EOL). Plus, you can't always use something like "Document
to text" as the initial spigot point. And by the way, some incoming text
contains actual literal text like "/r" that does not immediately equate
to the char(13) assumed above, which is just as easily handled with
"replace text".
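
One wrinkle if you normalize this way in code: do the CRLF pass before
the bare LF pass, or each CRLF becomes two CRs. A minimal sketch (using
Replace string, the shipping 4D command name):

  C_TEXT($text)
  $text:=Replace string($text;Char(13)+Char(10);Char(13))  // CRLF -> CR first
  $text:=Replace string($text;Char(10);Char(13))  // then any bare LF -> CR

Every EOL is then a single Char(13), and Position($text;Char(13)) will
walk the lines.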


Steve Simpson
Cimarron Software
**
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**

Re: Schemes for record level access control

2017-05-13 Thread steve simpson via 4D_Tech
On Sat, May 13, 2017 at 9:04 AM, David Adams wrote:
>
>
> Just as a simple point, it's nice to have access values as a number:
>
> 1 2 3 4 5
>
> Imagine that access increases at each step.
>
> // On after query
> C_LONGINT($1;$user_access_level)
> $user_access_level:=$1
>
> QUERY SELECTION([Foo];[Foo]Minimum_access_score <= $user_access_level)
>

+1, especially when "...then you don't have to jump through any more
hoops when using 4D's built-in editors and tools."



Steve Simpson
Cimarron Software
**
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**

Re: Clean Slate - Modern User Experience

2017-04-17 Thread steve simpson via 4D_Tech
> On Apr 17, 2017, at 8:47 AM, Peter Jakobsson wrote:
>
> > Doesn’t the O/S define ‘modern era’?
> >
> > If you stick to vanilla 4D objects (i.e. just the raw button styles,
> > list styles & fields, etc.) then the O/S will do all the work for you
> > in keeping your screens up to date.
>

+1

----
Steve Simpson
Cimarron Software
**
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**

Re: Please vote for this feature request: Raise the 31 character limit

2017-03-06 Thread steve simpson via 4D_Tech
On Mon, Mar 6, 2017 at 3:00 PM, Keith Goebel wrote:

> [snip]
>
> Curious interface...
>

Understatement!

----
Steve Simpson
Cimarron Software
**
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**

Re: Please vote for this feature request: Raise the 31 character limit

2017-03-05 Thread steve simpson via 4D_Tech
On Sun, Mar 5, 2017 at 3:00 PM, David Adams wrote:

> Please vote for this feature request: Raise the 31 character
>   limit (David Adams)
>

Uh... do you mean for variable names? Nah... no thanks. I can't remember
the last time I felt constrained by a 31 char limit. In fact, reading
code with a bunch of variable names approaching 20 chars makes my blood
pressure go up. The worst for me is meaningless variable names. The next
worst is really long variable names that could use some smart, concise
editing that retains meaning and readability.

--------
Steve Simpson
Cimarron Software
**
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**

Re: About using the same data file in two different structures...

2016-12-04 Thread steve simpson
On Sun, Dec 4, 2016 at 1:36 PM, Kirk Brooks wrote:
>
> Hi Steve,
> Thanks very much for posting this. It's exactly the kind of real-world
> experience I was hoping to hear.
>
> A couple of questions that come to mind:
> So for the exporting it sounds like you're using the basic export record
> function.
>

On the whole, no. Export Record is better suited for new records than
for modified records, and it is not as malleable if I want to embed some
sort of logic based on an internal field's content. So mostly I use a
special "Export_Import" table FORM with every field on it in the order I
want, copy that form to the satellite, and "export text" using that form
and "import text" using the same form. Strangely, I often find that
operation faster than using 4D Export/Import Record. PLUS it gives me a
human readable file if there are some "mysteries" to be solved.
"Sometimes low tech is better tech." Nothing to be ashamed of.

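A rough sketch of that pair, as I understand those commands (table name,
form name, and paths are invented; the assumption is that EXPORT TEXT and
IMPORT TEXT work through the current output and input forms):

  // master side: export the current selection through the shared form
  OUTPUT FORM([Orders];"Export_Import")
  EXPORT TEXT([Orders];"C:\dump\orders_update.txt")

  // satellite side: import through the same form, same field order
  INPUT FORM([Orders];"Export_Import")
  IMPORT TEXT([Orders];"C:\dump\orders_update.txt")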

> Have you upgraded any of these to v15 with the new journaling features?
>

​"Journaling" is not meaningful in our environment of master/satellite the
way we implement them. Or at least not within my understanding of
journaling capability. I like to have more control than that suggests.

> How are you resolving unique record ID issues and related records?
>

That was always a biggie that had to be planned out with caution. Over
time I found that I could put "starting" record IDs for each location in
a range separated by "millions", sometimes 5M or 10M or more. Obviously
not foolproof, but in the "real" world, over 35 years none of those
original master/satellite database pairs have overlapped. (And frankly,
none of the remaining ones still running ever will!) The key was having a
central place where those starting numbers were originated and kept track
of. These days, I don't worry about it. I just use the 4D UUID auto
generated field ID, regardless of its location of origin.
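
As a toy illustration of the range idea (the site number and block size
are invented, not the original registry's values):

  C_LONGINT($siteNumber;$startID)
  $siteNumber:=7  // assigned once, from the central registry
  $startID:=$siteNumber*5000000  // site 7 issues record IDs from 35,000,000 up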


> One thought that comes to mind for keeping track of the changed records is
> to use the On saving an existing record trigger to either add a record to a
> tracking table or simply write a table/record reference to a log file and
> then use that log file to prepare the next export batch.
>

Nah. Much simpler. Just update a "DateTimeStamp" field (in every table,
every record) using a trigger on new and modified records, and keep a
separate list of "deleted" records via a "delete" trigger. When the sync
wakes up, search for every record with an older DateTimeStamp than "right
now", move the "next" updater DateTimeStamp to "right now", and export
the found records. Records in transition (those in the process of being
created or modified by current users) will get a new DateTimeStamp when
they are saved, and therefore will show up in the next sync (anything 1
millisecond after or equal to "right now"). The current deletes list is
added to each "updater" and simultaneously zeroed out locally, ready to
start the next sync operation. On the receiving end, each "deleted"
record will be deleted if it exists. (Retain that deleted list for a
while. It is the weak link in this sort of thing, since the original
record will no longer be found unless you go the more complex "tag
deleted records but don't really delete them" route.)
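
A minimal sketch of that export pass, assuming the stamp is stored as
sortable text ("YYYYMMDD hh:mm:ss") and a [Sync_Info] record holds the
last-export marker (all names invented):

  C_TEXT($now)
  C_DATE($d)
  $d:=Current date
  $now:=String(Year of($d);"0000")+String(Month of($d);"00")+String(Day of($d);"00")
  $now:=$now+" "+String(Current time)
  QUERY([Orders];[Orders]DateTimeStamp>=[Sync_Info]LastExport)
    // export the found selection as sketched earlier, then move the marker
  [Sync_Info]LastExport:=$now
  SAVE RECORD([Sync_Info])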

The central key here is that this is not a "journaling" sync operation,
or a "mirror" operation. It's a simple "updater" operation, with
unlimited logical needs at each end depending on the specific project and
its own unique master/satellite update needs, sometimes in a two-way
manner. I'm a fan of "low tech" for these operations.
--
Steve Simpson
Cimarron Software
**
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**

Re: About using the same data file in two different structures...

2016-12-04 Thread steve simpson
On Sun, Dec 4, 2016 at 12:36 AM, Kirk Brooks wrote:
>
> Does anyone have experience using a scheme like this? The situation is one
> database that is mainly concerned with creating and actually working with
> the data. Then there are these 'satellite' locations that run a different
> database that is based on the same structure and data but has different
> forms and methods. So the "main" (we'll call it) datafile is periodically
> sent to these other locations to be installed. Occasionally the satellite
> locations will move a datafile back to the "main".
>

Kirk, perhaps you can accomplish this just a little differently, yet
still achieve the same goal in the end. We do something similar with
satellite sites that all consume their data from one "master" 4DServer
"data center". Some of the remote sites have unique specialty field/table
combos of their own, but all of them get their core data from the central
source, with field/table combos that 100% mirror the master database. We
"sync" these remote field sites on some sort of schedule, anywhere from 5
minutes to 24 hours normally, by waking up an export routine that finds
all the new/modified records since the last export and exports that data
to that site: one export schedule for each remote involved. The remote
site wakes up on a matching schedule and imports the data. This way we
have some sites that are 100% up to date every 5 minutes; others are up
to date daily, weekly, etc.

We have some projects that require the central master to retain some
data that is created only on the satellite site, so the same applies in
reverse: the remote site exports and the central master imports when each
awakens. We've worked out a scheme over the years to facilitate this
two-way traffic logic that works perfectly.

The difficult part is having a "common dump folder", accessible to both
master and satellite, to deposit and consume these modified-record
exports/imports. Back in the v1.x - v3.x days, before the web or built-in
4DSync or SOAP solutions, we kept worldwide data centers in sync this way
using CompuServe, and later other more robust "email" providers. These
days we've boiled it down to two: SOAP for small exchanges that require
real-time exchange, and an AWS S3 bucket "dump folder" accessible to each
4D installation for those much larger exchanges that can run to thousands
and thousands of records at times. We've found over the years that "real
time" exchange is troublesome when you have large exchanges. These
master/satellite sites are often in very distant locales and time zones,
and we've come to appreciate the elasticity of a method of exchange that
doesn't rely on a real-time connection and response from both ends at the
same time. If something happens at one end or the other, no problem.
Either roll back the "last updated" timestamp (a simple site-specific
record field) and trigger the export/import again, on either or both
sides. Or just let it hit the next sync normally once whatever kink made
the last sync fail is worked out: power went off somewhere, lines were
cut somewhere, somebody in xyz location was working on hardware at 4am
their time and had the computer down during that sync window, etc.

This has been working worldwide for us since 1988 with literally tens of
thousands of database installations over that time. It is low tech, human
eyeball understandable, and reliable. And it is easy to debug if one
archives each exchange instead of deleting the export file once consumed.
AND it allows one to totally build a new site or replace an old site by
simply setting the "last updated" timestamp so that the entire database is
exported at once. There are obviously pitfalls depending on the rules you
embed or the needs you actually have. But it is worth thinking about for
what you have described.


Steve Simpson
Cimarron Software
**
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**

Re: HTML Tag Mismatch Finder!

2016-10-14 Thread steve simpson
On Thu, Oct 13, 2016 at 11:00 PM, Sujit Shah wrote:
>
>
> Is there a magic tool that can find a mismatched tag in several hundred
> lines of HTML Code?
>
> One which can work with embedded 4D Tags??
>

https://www.htmlvalidator.com/

--------
Steve Simpson
Cimarron Software
**
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**

Re: Amazon S3 API

2016-08-13 Thread steve simpson
On Fri, Aug 12, 2016 at 9:10 PM, ADeeg wrote:
>
>
> Hi, I'd like to upload pictures to Amazon S3 storage and hope that I can
> get some time-savers to do this from 4D.
>
> Who can help me?
>

Armin, we went through the whole official AWS command line program via
the LAUNCH EXTERNAL PROCESS route until we discovered a product called
"CloudBerry Drive" from CloudBerry Labs that "...seamlessly integrates
Amazon S3 as an external or network drive with the Windows environment".
(Unfortunately, as you see, that product is Windows only, but I think I
read somewhere they are working on a Mac solution; or there may be very
similar Mac solutions; or, if not, it might be worthwhile moving some of
the processes involved to Windows machines if you find it economically
desirable for this project.) You can name and place your picture into a
folder within that mounted drive (which is really an S3 bucket) directly
from 4D, just as if it were a local folder. This makes handling any files
you work with this way truly 4D native and easy to deal with, without
messing with scripts and external processes.

Further, if you designate your S3 bucket to be an AWS CloudFront type
bucket, you will get a special URL to that bucket (e.g.
"https://edglgb5o46ay9.cloudfront.net"). So if you save your data driven
invented path within a 4D field (e.g.
"/myPictureFolder3/myfilename28.pdf"), you can simply append that field's
contents to your CloudFront URL (e.g.
"https://edglgb5o46ay9.cloudfront.net/myPictureFolder3/myfilename28.pdf")
to provide a web link to that file within any document you generate
(email, web page, message, the 4D database itself, etc.). What you get in
addition is that AWS will immediately distribute "CloudFront" files to
multiple distribution centers, so users are accessing the file from the
hosting center closest to them (for faster delivery), giving you "9 to
the 9s" type backup security, since it now exists in multiple locations
instead of one repository.

But more than that, you can mount that same S3 bucket as a local drive on
multiple servers simultaneously, to give you access to the same material
from many different locations. In our case, for instance, we have
specialized office servers constantly converting millions and millions of
county courthouse scanned TIFFs into PDFs. The PDFs are dropped into the
appropriate county S3 bucket folder using 4D data driven naming
conventions. Separate 4D driven (as well as non-4D) projects can then
have access to the same S3 bucket material, to provide links or actual
download access from "local drive folders" for all kinds of in-house
routines or external web browser user links. For us, it solved multiple
logistical issues and totally simplified (actually eliminated) a complex
AWS S3 bucket access scripting system.
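
A minimal sketch of the drop-and-link idea, assuming the
CloudBerry-mounted bucket shows up as drive S: (the drive letter, paths,
and file names are invented for the example):

  // copy the scan into the mounted bucket, just like a local folder
  C_BLOB($pic)
  DOCUMENT TO BLOB("C:\scans\myfilename28.pdf";$pic)
  BLOB TO DOCUMENT("S:\myPictureFolder3\myfilename28.pdf";$pic)

  // append the stored relative path to the CloudFront base for a web link
  C_TEXT($link)
  $link:="https://edglgb5o46ay9.cloudfront.net"+"/myPictureFolder3/myfilename28.pdf"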

I apologize for describing a Windows solution to a Mac question, but I
hope it helps.


Steve Simpson
Cimarron Software
**
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**