Title: RE: EDI & ANY to XML translator - opinions wanted

>-----Original Message-----
>Anthony Beecher takes in EDI batches, which he wants to map to XML
>files; some are too large (20 - 30 Megs) for Gentran:Server NT to
>handle.  We all suspect the performance problems have got something to
>do with the database used by Gentran. Either the SQL calls will have to
>be optimized,

Apparently, the ad hoc nature of the SQL calls makes it impossible for the user to optimize them.

> or the database will have to be isolated on its own path,

Yep, tried that. I put the database on a separate machine; it was still slow. It was a little better to keep the database on the same machine and give it its own physical disk.

>or a RAID will have to be installed,

Nope, I already have that... I don't think even EMC would do it for me.

>or we can add more RAM,

I have 512 megs already, and I doubt that 2 gigs would do the trick. I don't think it's paging, but with NT it's hard to tell; it seems to page somewhat no matter how much RAM you have.

>or we have
>to upgrade to a quad-processor water-cooled Pentium 800 MHz,

I'd settle for even a single Coppermine if I could actually purchase one; it seems they only exist in Intel's press releases. Too bad I don't know of any business-class AMD Athlon machines, but even if I doubled the speed, half of several hours is still a long time.

>or move to a
>different platform altogether, or ....

Ehhh, I think a different product will do just fine.

>I think Laurent Szyster hit the nail on the head - that "[it's] the
>software design."  Laurent goes on:

>   I assume that Gentran [puts] a lot in a database (which
>   implies sorting and retrieving from a sorted collection).
>   The larger the dataset, the slower the processing ;-)

>   Obviously, at one point, Gentran's use of the database
>   generates so much overhead that it accounts for more than
>   4/5 of the total processing time.
 
>As the index for a database grows, especially when random keys are
>added, each subsequent access requires more and more time.  Probably
>this accounts for much of the geometric growth in processing or elapsed
>time for Anthony's EDI to XML translation using Gentran:Server NT.

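If I understand that right, it's the classic random-insert-into-a-growing-index problem. Here's a toy illustration of the effect (SQLite and the "segments" table are just my stand-ins, not anything Gentran actually does):

    # Toy benchmark: insert successive batches of randomly-keyed rows into an
    # indexed table and watch the per-batch time creep up as the index grows.
    # SQLite and the "segments" table are stand-ins for illustration only.
    import random
    import sqlite3
    import string
    import time

    conn = sqlite3.connect("scratch.db")   # assumes a fresh throwaway database file
    conn.execute("CREATE TABLE segments (key TEXT PRIMARY KEY, payload TEXT)")

    def random_key(length=16):
        return "".join(random.choice(string.ascii_uppercase) for _ in range(length))

    for batch in range(10):
        rows = [(random_key(), "x" * 100) for _ in range(50000)]
        start = time.time()
        conn.executemany("INSERT OR IGNORE INTO segments VALUES (?, ?)", rows)
        conn.commit()
        print("batch %d took %.2f seconds" % (batch, time.time() - start))
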
And I don't even want to pay another $20,000 to Sterling for the XML add-on. Who knows if it would even work! I was left high and dry by Sterling and Edifecs on their "supposed" integration (the only reason I got hoodwinked into picking Edifecs over EDISIM). Each side points the finger at the other when it comes to actually getting that feature working :)

>Back in my day, in these circumstances we'd just extract information to
>a flat file, much as Eliot Muir suggested.  The 3M or 20M or 100M or
>6GB or whatever were slightly reformatted or rearranged, but nothing
>requiring database calls or much brain-power.  Then we'd sort by the key
>fields and load the data sequentially into the VSAM keyed database.
>Since the indices were built in key order, it went lickety-split.

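That old recipe still sounds right to me. A rough sketch of it, as I understand it (the file names and the SQLite load step are made up for illustration; the real extract would come out of the EDI translator):

    # Sketch of the extract / sort / sequential-load recipe described above.
    # Assumes the extract step has already produced a tab-delimited flat file
    # of key<TAB>payload records; file names and the SQLite target are made up.
    import csv
    import sqlite3

    # Step 1: extract the reformatted records to a flat file (extract.txt).
    # Step 2: sort the flat file by its key field, e.g. with the OS sort
    #         utility, producing sorted.txt.
    # Step 3: load sequentially, so the index is built in key order rather
    #         than thrashing on random inserts.
    conn = sqlite3.connect("edi.db")
    conn.execute("CREATE TABLE IF NOT EXISTS segments (key TEXT PRIMARY KEY, payload TEXT)")
    with open("sorted.txt", newline="") as f:
        rows = csv.reader(f, delimiter="\t")
        conn.executemany("INSERT INTO segments (key, payload) VALUES (?, ?)", rows)
    conn.commit()
    conn.close()
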
>But that was then, in the olden days of COBOL and VSAM and two digit
>years.  Young whipper-snappers today insist on going directly to the
>database in any old random order across a network with an n-tier
>application design, even for a batch process, because they can always
>scale up the hardware.

>Anthony: how big are the resulting XML files going to be? 

I believe the exact quantification would be: "big".

>And if
>they're proportional in size to the 20-30 megabyte input EDI files,
>whatever are you going to do with them?

Pass them to the database guy!

Anthony
