Ken,  IBM has a couple of modeling tools available to size a workload on
zLinux in terms of IFLs and memory.  If you have a local IBM rep or
business partner rep, you may want to ask about the Size390 or New
Workload sizing that IBM Techline performs.  This service is no cost and
can help you get an estimate of the z900 requirements.

If you do not have these contacts, let me know and we can discuss off-list. 
 
-------Original Message-------
 
From: Linux on 390 Port
Date: 07/13/04 10:14:48
To: [EMAIL PROTECTED]
Subject: need to compare apples to oranges (HP Unix CPU to zVM IFL CPU with
Oracle)
 
Hi,
 
We are looking at a pilot project to test an Oracle database running on
Linux/zVM.  Currently we have about five applications that run on various
HP Unix servers.  Each of these applications connects to its own Oracle
instance.  Each instance is about 300GB, so we have 1.5TB for the
databases.  Our test would be to move the five Oracle instances to a
Linux/zVM server running on an IFL on a z900.  We will have one 300GB copy
of the database, and each of the five instances would appear to have its
own copy, since the updates for each instance will be intercepted and
written to a private area.
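
Conceptually (this is just our rough mental model of the I/O intercept,
not the vendor's actual design; the block size and names below are made
up), the sharing works like a copy-on-write overlay:

    # Illustration only: one shared read-only base, one private overlay
    # per instance.  Reads fall through to the base unless the block has
    # been written; writes never touch the base.
    BLOCK_SIZE = 4096

    class SharedBase:
        """The single 300GB base image shared by all five instances."""
        def __init__(self, path):
            self.f = open(path, "rb")

        def read_block(self, n):
            self.f.seek(n * BLOCK_SIZE)
            return self.f.read(BLOCK_SIZE)

    class InstanceView:
        """What one instance sees: the shared base plus its own writes."""
        def __init__(self, base):
            self.base = base
            self.private = {}          # block number -> private copy

        def read_block(self, n):
            if n in self.private:
                return self.private[n]
            return self.base.read_block(n)

        def write_block(self, n, data):
            self.private[n] = data     # only the delta is stored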
 
The Unix group says that the Oracle instances consume between two and
three CPUs on an HP Superdome 750MHz box.  The Project Office wants to
know how that consumption would compare to a z900 IFL.  We said that we
really need to perform the pilot to get the numbers, but they said they
really need the numbers before we can do the pilot.  Does anyone know how
to compare CPU capacity between the two platforms?
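
The kind of back-of-the-envelope arithmetic we are hoping someone can
help us fill in looks roughly like this (the capacity ratio is a
placeholder to be supplied by a proper sizing exercise; none of the
numbers are real benchmark figures):

    # Placeholder arithmetic only; the ratio below is exactly the
    # number we are missing, it is NOT a real figure.
    hp_cpus_used = 2.5      # "between two and three" 750MHz CPUs
    ifl_per_hp_cpu = None   # capacity of one IFL vs one HP CPU (unknown)

    def ifls_needed(hp_cpus, ratio):
        """Naive estimate: ignores I/O, memory and VM effects."""
        return hp_cpus / ratio

    # Once a credible ratio is known:
    # print(ifls_needed(hp_cpus_used, ifl_per_hp_cpu))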
 
The Project people are also worried that the VM overhead will result in
slow response times.  We can try to run a test via a standard script on
each box.  Does anyone have experience with the performance gained or
lost between the two platforms?
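
For the standard script, something along these lines is what we had in
mind (a minimal sketch; it assumes a Python client with the cx_Oracle
driver is available on both boxes, and the connect string and query are
placeholders, not our real workload):

    # Run the same query N times on each platform and compare timings.
    import time
    import cx_Oracle          # assumes the driver exists on both boxes

    RUNS = 100
    SQL = "select count(*) from some_table"         # placeholder query

    conn = cx_Oracle.connect("user/password@ORA1")  # placeholder
    cur = conn.cursor()

    start = time.time()
    for _ in range(RUNS):
        cur.execute(SQL)
        cur.fetchall()
    elapsed = time.time() - start

    print("%d runs in %.2f sec (%.3f sec/run)" % (RUNS, elapsed,
                                                  elapsed / RUNS))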
 
On the Oracle side, if I have a 300GB database with an instance name of
ORA1 and I want to change the instance name to ORA2, how many records
need to be changed?  If the instance name is tied to every record, then
this project will have trouble, since we are trying to share the 300GB
base with multiple instances.  We would use the I/O intercept software to
write the changes to a private area, and if it needs to update 300GB of
data, then it is not a feasible solution.  The DBA group is dubious about
this concept (read: project), so we need to demonstrate that it will work.
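
Our working assumption, which is exactly what we need the DBAs to confirm
or refute, is that the instance name is configuration rather than
something stamped on every data block.  A quick check like the following
should show where the names actually live (a sketch; the connect string
is a placeholder):

    # Compare the running instance name with the database name that is
    # recorded in the control files / data file headers.
    import cx_Oracle

    conn = cx_Oracle.connect("user/password@ORA1")  # placeholder
    cur = conn.cursor()

    cur.execute("select instance_name from v$instance")
    print("instance_name: %s" % cur.fetchone()[0])

    cur.execute("select name from v$database")
    print("db_name:       %s" % cur.fetchone()[0])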
 
We do not expect to save money on the hardware costs of this project.
The savings should come from the flexibility to deploy new images and the
quicker turnaround on database restores with the I/O intercept software.
However, if we are going to need six IFLs to run this database, it is
unlikely they will let us proceed no matter how much flexibility we could
gain.
 
Any information would be appreciated.
 
Thanks,
 
Ken Vance
Amadeus
 
----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
