Skellen, Frank wrote:
>Ordinal Technology's Nsort program has delivered the best commercial sort
>performance on Windows and Unix systems.
>Nsort is a sort/merge program that can quickly sort large amounts of data,
>using large numbers of processors and disks in parallel. Unique in its CPU
>efficiency, Nsort is the only commercial sort program to demonstrate:
>1 Terabyte sorts (33 minutes)
>1 Gigabyte/sec file read and write rates

Please define both of the above demonstration points. Under what hardware mix are those points achieved?

What do you mean by 1 Terabyte? Is it input, workspace, or the total mix of space usage? How long are the records? What sort criteria are used? What character encoding is used?

Please define the read/write rates. Are they a total, per file, or what? Are they spread over different disks? I personally would like sort input on one disk, workspace on a second and output on a third disk, while my page datasets, OK, page files, are spread around a few disks.

Since you're speaking about windoze and Unix, what is the workload/overhead during such a sort? Can you still do work while the sorting is taking place, or do you need a coffee break? ;)

Groete / Greetings

Elardus Engelbrecht
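To make the quoted figures concrete, here is a rough back-of-envelope sketch in Python. It assumes that "1 Terabyte" refers to the input size, that the sort completes in a single pass (input read once, output written once), and that decimal units are used; none of these assumptions come from Ordinal's announcement, they are just illustrative.

# Back-of-envelope check of the quoted figures (assumed values, not Ordinal's data).
TB = 10**12                   # decimal terabyte; assumes the benchmark counts bytes this way
input_bytes = 1 * TB          # assumes "1 Terabyte sort" means the input size
elapsed_s = 33 * 60           # 33 minutes

per_pass = input_bytes / elapsed_s            # rate for reading (or writing) the data once
aggregate_io = 2 * input_bytes / elapsed_s    # read + write combined, one-pass sort assumed

print(f"per-pass rate:      {per_pass / 1e9:.2f} GB/s")       # roughly 0.51 GB/s
print(f"aggregate I/O rate: {aggregate_io / 1e9:.2f} GB/s")   # roughly 1.01 GB/s

Under those assumptions the per-pass rate works out to about 0.5 GB/s and the combined read-plus-write rate to about 1 GB/s, which would line up with the quoted "1 Gigabyte/sec" only if that figure is an aggregate across all files and disks rather than per file.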