Knutson, Sam wrote:
We are a large IMS DC/DB and CICS DB2/IMS DBCTL shop.   IMS is still an
order of magnitude more efficient than DB2.  That is not saying anything
bad about DB2; it is designed for more flexible data manipulation and
easier development by offloading more business and data handling logic
into the DBMS.  DB2 is a relational database and IMS a hierarchical one,
though IMS appears to be geared up to take on some new abilities soon
with V10.

IMS is wickedly efficient; ask some of the large banking and delivery
concerns that still use it to process large transaction volumes.

some of this was part of the discourse between the ims group and
the system/r group in the 70s ... i.e. the original relational/sql implementation
http://www.garlic.com/~lynn/subtopic.html#systemr

there was then technology transfer from sjr to endicott for sql/ds ...
and one of the people listed at this meeting claimed to have handled
a lot of the technology transfer from endicott to stl for DB2
http://www.garlic.com/~lynn/95.html#13
http://www.garlic.com/~lynn/96.html#15

an old email with ims & relational reference:
http://www.garlic.com/~lynn/2007.html#email801016

in this post
http://www.garlic.com/~lynn/2007.html#1 "The Elements of Programming Style"

in the discussion between the two groups ... the ims claim was that
ims was significantly more efficient because it had direct pointers
... while relational abstracted away the pointers with an implicit
index. The implicit index (under the covers) tended to double the
amount of disk space required and significantly increased the number
of disk accesses needed to reach the desired record. the relational
counter argument was that it significantly reduced the people/manual
effort required to manage the data.
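a toy cost model of that claim (my illustration only ... the fanout and
record counts are made-up assumptions, not IMS or System/R internals): a
direct pointer reaches a record in one disk access, while an implicit
index pays one access per index level before reaching the record, and
the index itself consumes additional disk space.

```python
# toy model: disk accesses to fetch one record by key under the two
# access methods discussed above (illustrative numbers, not product code)

def direct_pointer_accesses():
    # hierarchical: the parent segment holds a direct pointer to the child
    return 1

def index_levels(num_records, fanout=100):
    # index levels needed when each index block holds `fanout` keys
    # (fanout=100 is an assumption for illustration)
    levels, capacity = 1, fanout
    while capacity < num_records:
        capacity *= fanout
        levels += 1
    return levels

def implicit_index_accesses(num_records, fanout=100):
    # relational: walk the index top-down, then read the data record
    return index_levels(num_records, fanout) + 1

for n in (10_000, 1_000_000):
    print(n, "direct:", direct_pointer_accesses(),
          "indexed:", implicit_index_accesses(n))
```

the gap grows with the number of records ... which is roughly the shape
of the ims argument for large transaction volumes.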

the transition in the 80s was that the economics of disk space
changed significantly ... mitigating the disk space issue ... and
the significant increase in system real storage sizes allowed
much of the relational infrastructure information to be cached
... cutting down on the physical disk operations required. at the
same time there were changes in people costs vis-a-vis hardware costs
... allowing some lower value uses to become practical (hardware
costs dropped below some threshold, and some amount of manual/people
support cost was eliminated).
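a toy sketch of the caching effect (my illustration; assumptions:
unbounded cache, made-up block addressing): once index blocks stay
resident in real storage, repeated lookups pay physical disk I/O only
for the data record itself.

```python
# toy sketch of index-block caching cutting physical disk operations
# (illustrative only ... not any product's buffer manager)

class BufferedIndex:
    def __init__(self, levels):
        self.levels = levels           # index levels on disk
        self.cache = set()             # index blocks held in real storage
        self.physical_reads = 0        # count of actual disk operations

    def fetch(self, key):
        # walk one index block per level, then read the data record
        for level in range(self.levels):
            block = (level, key % 10)  # toy block addressing scheme
            if block not in self.cache:
                self.physical_reads += 1   # index block read from disk
                self.cache.add(block)      # ... now resident in storage
        self.physical_reads += 1           # the data record read itself

idx = BufferedIndex(levels=3)
idx.fetch(42)   # cold: 3 index block reads + 1 record read
idx.fetch(42)   # warm: index cached, only the record read from disk
```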

other posts discussing the theme of changes in system
configurations and relative costs between the 60s and the 80s
and its effect on dbms implementation trade-offs:
http://www.garlic.com/~lynn/2005s.html#12 Flat Query
http://www.garlic.com/~lynn/2006l.html#0 history of computing
http://www.garlic.com/~lynn/2006m.html#32 Old Hashing Routine
http://www.garlic.com/~lynn/2006o.html#22 Cache-Size vs Performance
http://www.garlic.com/~lynn/2007e.html#14 Cycles per ASM instruction
http://www.garlic.com/~lynn/2007e.html#31 Quote from comp.object
http://www.garlic.com/~lynn/2007f.html#66 IBM System z9
http://www.garlic.com/~lynn/2007o.html#17 FORTRAN IV program illustrating assigned GO TO on web site

for other topic drift ... the university had gotten an ONR library automation
grant and was selected as beta-test site for the original CICS (adapting code
that had been developed at a specific customer site and turning it into
a product) ... and i got tasked to provide debugging and deployment support.
misc. past posts mentioning CICS and/or BDAM
http://www.garlic.com/~lynn/subtopic.html#bdam

recent post in another thread discussing relative system disk
thruput
http://www.garlic.com/~lynn/2007o.html#69 ServerPac Installs and dataset allocations

for other drift ... part of what prompted the observation mentioned in
the above post was that the dynamic adaptive resource management work
http://www.garlic.com/~lynn/subtopic.html#fairshare

i had done as an undergraduate in the 60s and at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

in the 70s ... included the general objective of being able to (dynamically)
"schedule to the bottleneck" ... aka dynamically recognize the major
system resource bottleneck(s) and adapt the resource scheduling
policy to the bottleneck resource(s).
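a minimal sketch of the idea (not the actual fair-share scheduler code;
all names and numbers here are illustrative assumptions): sample how busy
each major resource is, identify the current bottleneck, and bias
dispatching toward work that is lightest on that resource.

```python
# sketch of "schedule to the bottleneck" (illustrative, not CP/VM code)

def bottleneck(utilization):
    # utilization: resource name -> fraction busy (0.0 to 1.0)
    return max(utilization, key=utilization.get)

def dispatch_order(ready_tasks, utilization):
    # run tasks with the smallest appetite for the scarce resource first
    scarce = bottleneck(utilization)
    return sorted(ready_tasks, key=lambda t: t["demand"].get(scarce, 0.0))

tasks = [
    {"name": "batch",   "demand": {"cpu": 0.9, "disk": 0.2}},
    {"name": "trivial", "demand": {"cpu": 0.1, "disk": 0.1}},
]
order = dispatch_order(tasks, {"cpu": 0.95, "disk": 0.40})
# while cpu is the bottleneck, the cpu-light "trivial" task runs first
```

if the disk subsystem became the bottleneck instead, the same policy
would automatically favor the disk-light work ... which is the adaptive
part of the objective.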

misc. past posts mentioning "schedule to the bottleneck"
http://www.garlic.com/~lynn/93.html#5 360/67, was Re: IBM's Project F/S ?
http://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
http://www.garlic.com/~lynn/94.html#1 Multitasking question
http://www.garlic.com/~lynn/94.html#50 Rethinking Virtual Memory
http://www.garlic.com/~lynn/98.html#6 OS with no distinction between RAM and HD ?
http://www.garlic.com/~lynn/98.html#17 S/360 operating systems geneaology
http://www.garlic.com/~lynn/99.html#143 OS/360 (and descendents) VM system?
http://www.garlic.com/~lynn/2000.html#86 Ux's good points.
http://www.garlic.com/~lynn/2000f.html#36 Optimal replacement Algorithm
http://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
http://www.garlic.com/~lynn/2003f.html#8 Alpha performance, why?
http://www.garlic.com/~lynn/2004o.html#2 Integer types for 128-bit addressing
http://www.garlic.com/~lynn/2006d.html#11 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006r.html#39 REAL memory column in SDSF
http://www.garlic.com/~lynn/2007i.html#54 John W. Backus, 82, Fortran developer, dies

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
