Hello again,

 

I have some files I want to send but they are big.

1. Excel output from the iozone program (on my z/VM test environment). The test
is on a Red Hat 5.2 machine with 512 MB of RAM. There are two tests with
different I/O schedulers (1 - cfq and 2 - deadline). I should mention that cmm
and VMRMSVM are installed on the test z/VM system.

2. PERFSVM output for one hour, packed with vmarc, from the z/VM production
environment. It is 5 MB, so I didn't attach it.
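For anyone repeating the scheduler comparison: on a RHEL 5 kernel the scheduler can be checked and switched per device through sysfs. A small sketch follows; the device name dasda is an assumption, substitute your own volume.

```shell
SCHED_FILE=/sys/block/dasda/queue/scheduler   # dasda is an assumed device name

# Show the available schedulers; the active one is shown in brackets,
# e.g. "noop anticipatory deadline [cfq]"
if [ -r "$SCHED_FILE" ]; then
    cat "$SCHED_FILE"
fi

# Switch to deadline for the second run (root required, takes effect at once)
if [ -w "$SCHED_FILE" ]; then
    echo deadline > "$SCHED_FILE"
fi

# Extracting just the active (bracketed) scheduler from the sysfs string:
line='noop anticipatory deadline [cfq]'
active=$(echo "$line" | sed 's/.*\[\(.*\)\].*/\1/')
echo "$active"
```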

 

Attached is a Q SRM output from my prod system.

Note: cmm and VMRMSVM are not installed on the z/VM production system.

 

I would be happy to send the data off list if someone is willing to take a look 
at them.

 

Many thanks!

Offer Baruch.

 

 

From: Offer Baruch [mailto:offerbar...@gmail.com] 
Sent: Thursday, August 06, 2009 1:45 PM
To: Linux on 390 Port
Subject: Re: z/Linux dasd performance issue

 

Hi guys,

 

First of all thanks for the replies...

I will be back at work on Sunday... I will try the benchmark tools and send you 
data from z/VM and Linux.

 

Thanks again!

Offer Baruch

On Wed, Aug 5, 2009 at 9:50 AM, Rob van der Heij <rvdh...@velocitysoftware.com> 
wrote:

2009/8/4 Offer Baruch <offerbar...@gmail.com>:

> Any thoughts?
> Is my test case ok? Am I doing something wrong?
> Is this normal behavior?

If you're able to collect raw monitor data from z/VM, that should tell
us whether there were any issues outside Linux that can explain it.
We're happy to look into the data for you. Send me a note off-list for
details.

For inside Linux, we'd need Linux metrics too. My experience with
measuring disk I/O in Linux is that normally the large difference can
be explained because one of the cases did not actually do I/O (or not
as much as the other). For example, when you have enough memory
compared to the data set (like your 300 MB in 1G) and the test runs
shorter than 30 seconds, there is no I/O at all (at least not before
the dd command completes). In that case you're doing a CPU measurement
and your limithard may be impacting your results.
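To illustrate the point about the page cache: a plain dd into a file that fits in memory can return before any real I/O happens, so the measured rate is a memory rate. One way to make the test actually hit the disk is conv=fdatasync (or oflag=direct); file name and size below are placeholders.

```shell
# Without conv=fdatasync, dd may return once the 300 MB is in the page
# cache of a 1 GB guest, so the elapsed time measures memory, not disk.
# conv=fdatasync makes dd flush the data to disk before it exits, so the
# reported throughput reflects real I/O.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=300 conv=fdatasync

# oflag=direct would bypass the page cache entirely instead.
rm -f /tmp/ddtest
```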

Re: expecting great I/O performance: We don't make the disks spin
faster. Your DASD subsystem is made up of simple consumer-quality disk
drives (well, 15K RPM drives are not as bad as the 4800 RPM drives you put
in your netbook). Once you actually write to disk, that will be equally
slow. Though your 300 MB will nicely go into NVS and be written out
later...
Mainframe I/O performance shines in that you do things in parallel;
that does not mean that without parallelism a single task will run
faster (an empty bus does not drive 50 times as fast as one filled
with passengers).

Rob
--
Rob van der Heij
Velocity Software
http://www.velocitysoftware.com/


----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390

 

q srm
IABIAS : INTENSITY=90%; DURATION=2
LDUBUF : Q1=300% Q2=200% Q3=100%
STORBUF: Q1=250% Q2=200% Q3=150%
DSPBUF : Q1=32767 Q2=32767 Q3=32767
DISPATCHING MINOR TIMESLICE = 5 MS
MAXWSS : LIMIT=9999%
...... : PAGES=999999
XSTORE : 0%
