On Wed, Apr 26, 2017 at 10:37 PM, David Adams via 4D_Tech <
4d_tech@lists.4d.com> wrote:
> Gotcha. I've got my main code base in V13 still and like it fine.
>
> I still feel behind on this thread...what turned out to be the source of
> the slowdown? Packing? Unpacking? Transmission? Some combinati
> The app is in V13 now and will be moving to V15 over the Summer so
there's no 4D Object available yet.
Gotcha. I've got my main code base in V13 still and like it fine.
I still feel behind on this thread...what turned out to be the source of
the slowdown? Packing? Unpacking? Transmission? Some
On Wed, Apr 26, 2017 at 3:49 PM, David Adams via 4D_Tech <
4d_tech@lists.4d.com> wrote:
> I just went back to the top of this thread and scanned down...and I think
> that I'm not understanding a key detail. Douglas, you're saying that the
> packed records have 'meta-data', but it sounds like that
I just went back to the top of this thread and scanned down...and I think
that I'm not understanding a key detail. Douglas, you're saying that the
packed records have 'meta-data', but it sounds like that data is a map to
the packing. So, packed data types and offsets, something of that sort.
Would
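A rough Python sketch of what such packing metadata might look like (illustrative only; the field layout, header format, and function names are assumptions, not the actual FTSY code): a header records each field's offset and length, and unpacking walks that map.

```python
import struct

def pack_fields(fields: list) -> bytes:
    # Header: field count, then (offset, length) pairs; body: raw bytes.
    # The header is the "map to the packing" -- offsets into the blob.
    header = struct.pack("<I", len(fields))
    offset = 4 + 8 * len(fields)      # body starts after the header
    body = b""
    for f in fields:
        header += struct.pack("<II", offset, len(f))
        offset += len(f)
        body += f
    return header + body

def unpack_fields(blob: bytes) -> list:
    # Read the count, then use each (offset, length) pair to slice a field.
    (count,) = struct.unpack_from("<I", blob, 0)
    out = []
    for i in range(count):
        off, ln = struct.unpack_from("<II", blob, 4 + 8 * i)
        out.append(blob[off:off + ln])
    return out
```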
On Tue, Apr 25, 2017 at 10:12 AM, Tim Nevels via 4D_Tech <
4d_tech@lists.4d.com> wrote:
> Here’s an idea. I’m assuming all the record processing is done in a single
> process. How much work would it be to modify the code so that it spawns
> multiple processes that can run at the same time? I don’t
Jim:
SSD - I'm a big believer in SSD's, no question of that. I'm using a MacBook
Pro with a 500 GB SSD. It's a "late 2013" model so it's not as fast as the
newer ones (450 MB/s vs > 1000). The server machine is using SSD's running
Win Server 2008 with a single i7-4770 CPU running at 3.4 GHz. RAM i
On Apr 26, 2017, at 5:12 PM, Douglas von Roeder via 4D_Tech
<4d_tech@lists.4d.com> wrote:
> There are many, repetitive method calls. For example, each time the code
> converts a byte range to a longint, it calls a function that returns the
> byte order. As much as I never met a subroutine I didn'
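One cheap fix for that pattern, sketched in Python rather than 4D (names are illustrative): resolve the byte order once, outside the per-value loop, instead of calling a byte-order function for every single conversion.

```python
import struct

def unpack_longints(payload: bytes, count: int, little_endian: bool) -> list:
    # Resolve the byte order ONCE and bake it into the format string,
    # rather than calling a byte-order lookup on every conversion.
    fmt = ("<" if little_endian else ">") + "i" * count
    return list(struct.unpack_from(fmt, payload, 0))
```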
On Tue, Apr 25, 2017 at 6:36 AM, James Crate via 4D_Tech <
4d_tech@lists.4d.com> wrote:
> If you can easily modify the code, you could try commenting the SAVE
> RECORD command(s), and replace any queries for an existing record with
> REDUCE SELECTION($tablePtr->;0). That should be quick and easy a
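The general idea behind Jim's suggestion is to time each phase in isolation. A minimal Python sketch of that kind of instrumentation (hypothetical helper, not from the sync code):

```python
import time

def timed(label, fn, *args):
    # Time one phase (e.g. decode vs. save) separately, analogous to
    # stubbing out SAVE RECORD to isolate the database cost.
    t0 = time.perf_counter()
    result = fn(*args)
    print(f"{label}: {time.perf_counter() - t0:.3f}s")
    return result
```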
On Tue, Apr 25, 2017 at 10:20 AM, Randy Engle via 4D_Tech <
4d_tech@lists.4d.com> wrote:
> Synching is a wonderful thing.
>
> Most users think it's a magic bullet. ;-)
>
It is very slick but things do go off the rails when the same document is
modified on the LAN while it's being modified on th
Tim:
There were delays in the code - for whatever reason, the original programmer
(not Brad!) had delays of up to 15 seconds in some of the processes.
I thought of kicking this out to multiple processes but that's involved.
The data has to follow a strict FIFO sequence so I'd have to examine the
BLOB'
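For what it's worth, strict FIFO and multiple processes aren't mutually exclusive if order only matters per originating source. A Python sketch of that partitioning (illustrative; the queue-per-site structure is an assumption, not the FTSY design):

```python
import queue
import threading

def run_partitioned(records, key):
    # One FIFO queue and one worker per originating site: records from
    # the same site stay in strict arrival order, while different sites
    # are processed in parallel.
    queues, results, threads = {}, {}, []

    def worker(site):
        while True:
            item = queues[site].get()
            if item is None:          # sentinel: no more records
                return
            results[site].append(item)  # stand-in for real processing

    for rec in records:
        site = key(rec)
        if site not in queues:
            queues[site] = queue.Queue()
            results[site] = []
            t = threading.Thread(target=worker, args=(site,))
            t.start()
            threads.append(t)
        queues[site].put(rec)

    for q in queues.values():
        q.put(None)
    for t in threads:
        t.join()
    return results
```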
Sent: Tuesday, April 25, 2017 10:01 AM
To: 4D iNug Technical <4d_tech@lists.4d.com>
Cc: Douglas von Roeder
Subject: Re: Experience with FTSY Sync Code//Speed up Sync Code
Randy:
Good summary. This code is slightly more efficient on the transfer because it
packs multiple records into a
On Apr 25, 2017, at 12:01 PM,Douglas von Roeder wrote:
> Some payloads are pretty good sized but I don't recall if compression is
> used. The transmission time is very reasonable - everything just goes in
> the crapper when it comes to unbundling. I haven't timed the decoding vs
> encoding and the
>
> -Original Message-
> From: 4D_Tech [mailto:4d_tech-boun...@lists.4d.com] On Behalf Of Douglas
> von Roeder via 4D_Tech
> Sent: Monday, April 24, 2017 6:26 PM
> To: 4D iNug Technical <4d_tech@lists.4d.com>
> Cc: Douglas von Roeder
> Subject: Experience with FTSY Sync Code//Speed up Sync Code
From: Douglas von Roeder via 4D_Tech
Sent: Monday, April 24, 2017 6:26 PM
To: 4D iNug Technical <4d_tech@lists.4d.com>
Cc: Douglas von Roeder
Subject: Experience with FTSY Sync Code//Speed up Sync Code
Anyone here have experience with Brad Weber's "FTSY Sync" code?
The code in question was writte
install a UUID
All records now have unique identifiers
if the data is not there now, implement a 'site id' to determine/track
where the data originated.
use send record, or plain text, or xml to export/import
you're done!
:)
Chip
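Chip's recipe, sketched in Python for illustration (field and function names are assumptions, not 4D code): stamp each record with a UUID plus a site id, then import becomes an idempotent upsert.

```python
import uuid

def make_record(site_id, fields):
    # Every record carries a globally unique id plus the site it
    # originated from, so imports can dedupe and trace provenance.
    return {"uuid": str(uuid.uuid4()), "site": site_id, **fields}

def import_records(store, incoming):
    # Upsert keyed on the UUID: re-importing the same payload is
    # harmless, since an existing id is simply overwritten.
    for rec in incoming:
        store[rec["uuid"]] = rec
    return store
```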
On Mon, 24 Apr 2017 18:25:39 -0700, Douglas von Roeder via 4D_Tech
On Apr 24, 2017, at 11:20 PM, Douglas von Roeder via 4D_Tech
<4d_tech@lists.4d.com> wrote:
>
> Updating indexes takes some time but being able to update only 3 - 4
> records per second has got to have some other cause. If you've had positive
> experience with that approach, perhaps I need to look
On Mon, Apr 24, 2017 at 8:30 PM, Wayne Stewart via 4D_Tech <
4d_tech@lists.4d.com> wrote:
> Mine performs similarly slowly (5-6 records per second) but it sends only
> one record per web service call.
>
> A smarter and less lazy person than me would bunch a few records into the
> one call, use com
> The deal breaker is that the code is only updating about 3 records per
> second it can sometimes takes days for the server to catch up.
Ouch, that is slow. I haven't followed closely...if you're using SOAP, it
has to escape binaries to Base64...which is an absolutely hideous wire
format. If you
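The Base64 cost is easy to quantify: every 3 bytes of binary become 4 bytes of text, so the wire payload grows by about a third before compression even enters the picture. A quick Python check:

```python
import base64

raw = bytes(range(256)) * 400        # ~100 KB of binary data
encoded = base64.b64encode(raw)
overhead = len(encoded) / len(raw)   # 4 output bytes per 3 input bytes, ~1.33
```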
Mine performs similarly slowly (5-6 records per second) but it sends only
one record per web service call.
A smarter and less lazy person than me would bunch a few records into the
one call, use compression etc. One day I might implement this but you
never can tell, I think beer is more interesti
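The arithmetic behind bunching records into one call is simple: at one record per web service call, per-call latency dominates, and batching divides the number of round trips by the batch size. A back-of-the-envelope sketch (illustrative, not from Wayne's code):

```python
import math

def calls_needed(n_records, batch_size):
    # One round trip per batch; batching amortizes the per-call
    # overhead (connection, SOAP envelope, etc.) across batch_size records.
    return math.ceil(n_records / batch_size)
```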
Jim:
"I wrote similar code a long time ago, and just replaced it last year (it
stored the metadata in the resource fork, which was going to be problematic
soon). Exports usually contained hundreds of records of 3-4 tables of 100+
fields, and when importing were parsed pretty much instantly.
Unles
David:
The transmission time is very manageable — these folks send in data from
very remote locations and the payload always arrives at the server. The
BLOB's are sent via web services and, IIRC, the BLOB's are pretty good
sized, some being over 100k.
The deal breaker is that the code is only upd
On Apr 24, 2017, at 9:25 PM, Douglas von Roeder via 4D_Tech
<4d_tech@lists.4d.com> wrote:
> Anyone here have experience with Brad Weber's "FTSY Sync" code?
>
> One aspect of the code that's challenging is that the V11+ code (the "new"
> code) could no longer use 4D Open so the design was changed
> The current approach has got to be quite inefficient. The problem I've got
> is that I can't come up with anything other than a WAG as to how much
> faster it will be.
It's been some time since I tested out size v. speed relationships in a
similar setup. And, of course, it depends on your hardw
Ron:
Oh yes, big fan of API Pack! I've used B2R and there's also JSON and a few
other approaches.
The issue I need to resolve is how much, if at all, will a different
encoding/decoding approach impact performance?
The current approach has got to be quite inefficient. The problem I've got
is that
Doug,
I may be misunderstanding your application, but wouldn’t API Pack’s Record to
Blob and Blob to Record functions work? (It’s from pluggers.nl)
We use that, first compressing then converting the Blob that contains the
entire record to text before sending it as a variable via an HTTP Post.
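Not API Pack itself, but a rough Python analogue of that pipeline (serialize, compress, then text-encode so the record travels as an ordinary variable in an HTTP POST; JSON stands in for Record to Blob here):

```python
import base64
import json
import zlib

def record_to_wire(record):
    # Serialize -> compress -> text-encode, mirroring the idea of
    # compressing the record blob before converting it to text.
    blob = json.dumps(record).encode("utf-8")
    return base64.b64encode(zlib.compress(blob)).decode("ascii")

def wire_to_record(text):
    # Reverse the pipeline on the receiving side.
    return json.loads(zlib.decompress(base64.b64decode(text)))
```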
Doug,
I do something similar.
I use Web services (the lazy option). The sync records are in JSON (v13, so I
used NTK; later versions can use C_OBJECT commands) for the "small" fields,
and I pack big fields into a Blob.
I can send code if you're interested.
Regards,
Wayne
Anyone here have experience with Brad Weber's "FTSY Sync" code?
The code in question was written almost 20 years ago to synchronize records
between standalones and a client server system, and I know that it was used
by a couple of companies including Husqvarna in North Carolina.
One aspect of the