On 17 April 2013 15:14, DASDBILL2 <dasdbi...@comcast.net> wrote:

> Your elapsed times in going from 100,000 records to 107 million look
> like linear scaling.  That's the best one can hope for.  Working "fine"
> means running in less than two minutes.  It did "work" for 107 million
> records.  But it didn't "work fine" because it took longer than two
> minutes.  I suppose this developer also expects it to take less than two
> minutes to process 100 billion records.  The application developer needs to
> go to remedial multiplication class.  I learned how to multiply in the
> third grade.  Sixty years later I still remember how to multiply.  It's
> also important to know when, why, and what to multiply.


Right. I like the case because it illustrates some of the issues. In this
particular case the application went from an intimate interaction between
z/OS and DB2 to a remote database on z/Linux. It's not even bad if the
round trip from the application through TCP/IP to the database, now and
then hitting the disk, dispatching the virtual machine, and back to the
application, averages under 1 ms. That's less than 2 minutes for 100K
records, but 27 hrs for 100M...
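
A quick back-of-the-envelope in Python makes the scaling obvious (a
sketch only; the 1 ms per-record round trip is the assumed average from
above, and the record counts are rounded to 100K and 100M):

# Assumed average cost of one remote round trip per record (see above).
ROUND_TRIP_SEC = 0.001

for records in (100_000, 100_000_000):
    elapsed = records * ROUND_TRIP_SEC
    print(f"{records:>11,} records: {elapsed:>8.0f} s (~{elapsed / 3600:.1f} h)")

That prints roughly 100 s for 100K records and about 100,000 s, i.e.
close to 28 hours, for 100M. Same linear scaling, very different wall
clock.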

I think it's not uncommon for people to have trouble absorbing several
orders of magnitude. Many mainframe folks have learned to do that
multiplication despite intuition. And some of us know how long it takes to
copy a 3390-3 and can already do the math during the meeting. I've been
involved in migration projects where people claimed it was "pretty fast"
when in reality it would not even complete 5% of the total migration in
48 hrs. It's the same experience that makes me ask "what about backup and
D/R", and project managers blame the messenger for being negative...
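
For what it's worth, the extrapolation from that observation is a
one-liner (again just a sketch; the only input is the 5%-in-48-hours
figure from the example above):

# If 48 hours of running covers only 5% of the migration, the whole
# job at that rate needs 48 / 0.05 = 960 hours, i.e. about 40 days.
observed_hours = 48
completed_fraction = 0.05
total_hours = observed_hours / completed_fraction
print(f"~{total_hours:.0f} h total, roughly {total_hours / 24:.0f} days")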

Rob
