Title: RE: EDI & ANY to XML translator - opinions wanted
Ad hoc SQL means it was written on the fly by the translator/mapper, not that it was generic.  Ad hoc queries suffer from not having proper indexes.  Static or compiled queries are optimized once against the available indexes and reuse that execution plan.  Your description of the problem matches your guru's description of the cause.  There is a very large caveat with all this.  Sterling may be able to produce static/compiled SQL, but the issue is closely tied to the number of records in the tables.  No two SQL-compliant databases operate the same in this respect.  Each has its own break points and index optimization techniques, which vary depending on table size.  Even within SQL Server, when you cross an optimization threshold on table size, performance goes in the toilet again until the query is recompiled.  Considering the variability of inbound file sizes, it's going to be a real challenge for Sterling to solve this problem.
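A minimal sketch of the difference, in Python over ODBC (pyodbc); the DSN, table, and column names are hypothetical stand-ins, not Gentran's actual schema:

    import pyodbc

    # Hypothetical DSN and table -- stand-ins, not Gentran's real schema.
    conn = pyodbc.connect("DSN=GentranDB")
    cur = conn.cursor()
    doc_ids = range(1, 1001)  # one batch of documents

    # Ad hoc style: the literal value is baked into each statement, so the
    # server sees 1,000 distinct query texts and plans every one from scratch.
    for doc_id in doc_ids:
        cur.execute("SELECT * FROM Documents WHERE DocID = " + str(doc_id))

    # Parameterized style: one statement text, so the server can compile one
    # plan (against whatever indexes exist) and reuse it for the whole batch.
    for doc_id in doc_ids:
        cur.execute("SELECT * FROM Documents WHERE DocID = ?", doc_id)

Even the parameterized form only helps if DocID is indexed; without an index, both loops scan the whole table on every pass, which is where table size comes in.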
 
Short version: you need to move to a different platform.  The Unix version of Gentran Server and most other Unix-based systems, or TLE or Mercator or PaperFree, or even Gentran Server for NT using something other than SQL Server (I hesitate to recommend Oracle, but that's a personal thing), are worth considering.

Peter Olivola ([EMAIL PROTECTED])

-----Original Message-----
From: Electronic Data Interchange Issues [mailto:[EMAIL PROTECTED]] On Behalf Of Anthony Beecher
Sent: Monday, January 10, 2000 11:27 AM
To: [EMAIL PROTECTED]
Subject: Re: EDI & ANY to XML translator - opinions wanted

>        -- a 450k file took (n) minutes to translate
>        -- a 4500k file took 55 x (n) minutes to translate
>        I can't answer why a file 10x larger took 55x more time to translate.

We use an MS SQL Server backend for Gentran. I had an MS SQL guru here tracing Gentran's SQL statements. His comment was that Gentran was using "ad hoc" SQL statements, which I took to mean totally generic SQL statements intended to support as many flavors of backend database as possible. The tradeoff is performance. There didn't seem to be any optimization for the database, and it used very little if any indexing. So the more documents per batch, the larger the database table for that batch, and each pass through that table takes longer; with a full scan per document, total time grows roughly with the square of the batch size, which would be consistent with a 10x larger file taking 55x longer.
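Back-of-the-envelope arithmetic on that, in Python with made-up document counts:

    # If handling document k means scanning the k rows already sitting in the
    # batch table, total work for n documents is 1 + 2 + ... + n = n(n+1)/2.
    def total_scan_work(n):
        return n * (n + 1) // 2

    small = total_scan_work(1000)   # say the 450k file holds ~1,000 documents
    large = total_scan_work(10000)  # and the 4500k file ~10,000
    print(large / small)            # ~99.9 -- pure quadratic predicts ~100x

Fixed per-document costs would dilute the quadratic term, which could pull the measured ratio down from ~100x toward the 55x we actually saw.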

There was some rumor from Sterling that the awaited version 3.0 would include an MS SQL-optimized version of the program, but I can't pin them down on it. Some employees say yes, some say no.

Interestingly, Bob, I sent them a 27 meg file that defied error diagnosis and said, "Here, you prove to me that your product can do the job for me."  They escalated it to Level 2 tech support. In a couple of follow-up calls, I told the Level 1 techs that I knew what the compliance error was and would tell them if they'd like to know. Both Level 1 guys said, "No, let's let Level 2 figure it out - they need to address this shortcoming. You're not the first one to complain about this."

Very interesting.

I had already gone your route and written a transaction segmenter, but I noticed a small bug in my script that didn't always split on a clean transaction boundary, so I decided not to invest any more energy patching a sinking ship and simply to find another translator.
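For reference, the shape of that segmenter, as a minimal Python sketch; it assumes one interchange per file with "~" as the segment terminator, which won't hold for every trading partner:

    # Split one X12 interchange into chunks of at most max_sets transaction
    # sets, rewrapping each chunk in the original ISA/GS ... GE/IEA envelope.
    def segment_x12(text, max_sets=500):
        segs = [s.strip() for s in text.split("~") if s.strip()]
        elem = segs[0][3]  # 4th character of ISA is the element separator
        header, trailer, sets, current = [], [], [], None
        for seg in segs:
            tag = seg.split(elem)[0]
            if tag in ("ISA", "GS"):
                header.append(seg)
            elif tag in ("GE", "IEA"):
                trailer.append(seg)
            elif tag == "ST":          # start of a transaction set
                current = [seg]
                sets.append(current)
            elif current is not None:  # body segments, including the closing SE
                current.append(seg)
        # Caveat: GE01/IEA01 control counts are copied verbatim, not recomputed;
        # a compliance checker will flag them, so a real version must fix them.
        chunks = []
        for i in range(0, len(sets), max_sets):
            group = [s for ts in sets[i:i + max_sets] for s in ts]
            chunks.append("~".join(header + group + trailer) + "~")
        return chunks

    chunks = segment_x12(open("big_file.x12").read())

The easy bug to hit is exactly the one I hit: splitting somewhere other than a clean ST/SE boundary, so it's worth verifying ST/SE pairing on every output chunk.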

Regarding Archive crashing (another capacity-related issue): they want me to install debug symbols and diagnose the error for them, but I haven't had time yet.  Different people in the company admit or deny the issue, so there's no consistent call on that play.
