Odd swap space behavior

2011-11-02 Thread Bauer, Bobby (NIH/CIT) [E]
One of our Redhat servers got a LOT of activity yesterday and the swap space 
looks funny to me.

swapon -s
Filename                                Type            Size    Used    Priority
/dev/dasda2                             partition       1023976 3692    -1
/dev/dasdb1                             partition       194964  420     2
/dev/dasdc1                             partition       64976   152     1
/dev/dasdd1                             partition       196596  25244   3


Why would the system use swap space on dasdc1, dasdb1 and dasda2 if dasdd1 
hasn't run out?
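
(For context on the numbers above: the Priority column reflects how each
device was activated, and -1 just means no explicit priority was given.
If the priorities came from /etc/fstab, the entries would look roughly
like the sketch below; the pri= values are only inferred from the listing,
I haven't checked our actual fstab:

    /dev/dasdd1  swap  swap  pri=3     0 0
    /dev/dasdb1  swap  swap  pri=2     0 0
    /dev/dasdc1  swap  swap  pri=1     0 0
    /dev/dasda2  swap  swap  defaults  0 0   # no pri= given, kernel assigns -1

The highest priority is used first, so dasdd1 fills before the others.)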

Bobby Bauer
Center for Information Technology
National Institutes of Health
Bethesda, MD 20892-5628
301-594-7474


 

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Odd swap space behavior

2011-11-02 Thread Rob van der Heij
On Wed, Nov 2, 2011 at 12:37 PM, Bauer, Bobby (NIH/CIT) [E]
baue...@mail.nih.gov wrote:
 One of our Redhat servers got a LOT of activity yesterday and the swap space 
 looks funny to me.

 swapon -s
 Filename                                Type            Size    Used    Priority
 /dev/dasda2                             partition       1023976 3692    -1
 /dev/dasdb1                             partition       194964  420     2
 /dev/dasdc1                             partition       64976   152     1
 /dev/dasdd1                             partition       196596  25244   3


 Why would the system use swap space on dasdc1, dasdb1 and dasda2 if dasdd1 
 hasn't run out?

It filled the first ones and eventually used the last one. Some
processes were killed in the fight and their pages on swap space got
released. As long as it did not go too quickly, a performance monitor
could show you total swap usage over time and reveal that you (briefly)
had that much swapped out.
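
If you don't have a monitor collecting that, even a crude sample loop over
/proc/meminfo shows the trend; just a sketch, pick your own interval and
log file:

    while true; do
        used=$(awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2} END {print t-f}' /proc/meminfo)
        echo "$(date '+%Y-%m-%d %H:%M:%S') swap used: ${used} kB"
        sleep 60
    done >> /var/log/swap-used.log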

There's nothing in Linux that will migrate things back to the first
swap disks in the list, other than when you swapoff the last ones in
the chain. Remember that when you swapoff a VDISK, z/VM will still
hold the old data (and use memory for that).

Rob



Re: Odd swap space behavior

2011-11-02 Thread Richard Higson
On Wed, Nov 02, 2011 at 07:37:17AM -0400, Bauer, Bobby (NIH/CIT) [E] wrote:
 Date: Wed, 2 Nov 2011 07:37:17 -0400
 From: Bauer, Bobby (NIH/CIT) [E] baue...@mail.nih.gov
 To: LINUX-390@VM.MARIST.EDU
 Subject: Odd swap space behavior

 One of our Redhat servers got a LOT of activity yesterday and the swap space 
 looks funny to me.

 swapon -s
 Filename                                Type            Size    Used    Priority
 /dev/dasda2                             partition       1023976 3692    -1
 /dev/dasdb1                             partition       194964  420     2
 /dev/dasdc1                             partition       64976   152     1
 /dev/dasdd1                             partition       196596  25244   3
I haven't done Linux on Z for a while, but I have always used the same
priority for the swap disks so that Linux could spread the I/O out over
several disks (preferably on separate spindles).
This works well on x86 (real & VMware) and P-Series.

Quote from http://www.vm.ibm.com/perf/tips/linuxper.html:
"Swap extents of equal priority are used in round-robin fashion. Equal
prioritization can be used to spread swap I/O across chpids and controllers,
but if you are doing this, be careful not to put all the swap extents on
minidisks on the same physical DASD volume, for if you do, you will not be
accomplishing any spreading."

I'd be interested to see what today's thinking is.
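
For the archives, the equal-priority layout I mean is just something like
this in /etc/fstab (device names made up):

    /dev/dasdb1  swap  swap  pri=10  0 0
    /dev/dasdc1  swap  swap  pri=10  0 0
    /dev/dasdd1  swap  swap  pri=10  0 0

With equal priorities the kernel round-robins pages across all three, which
is the spreading the IBM tip above is talking about.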

//rhi - now back to lurking
--
... Point and click  ...
... probably means that you forgot to load the gun ...
Have a nice day ;-) Richard Higson mailto:richard.hig...@gt.owl.de



Re: Odd swap space behavior

2011-11-02 Thread RPN01
That's what you want when you're using spindles, but on z, you're usually
talking about v-disks, which are really virtual disks in memory. When
they're not in use, they take up no space at all, but when you start using
them, they start to occupy real memory and become a burden. So you set
priorities on the swap spaces so that they each get used one at a time in
turn.

Ideally, you don't want to use them at all; they're a safeguard to keep the
image from coming down. When they are used, they're an indication that you
need more memory allocated to the image, and they give you a buffer to get
to the moment when you can safely cycle the image to add that memory. Having
four swap spaces allocated seems like a bit of overkill to me. It should be
sufficient to have one to be the buffer, and a second larger one to be the
trigger to increase the size of the image.
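
A minimal two-tier layout along those lines might look like this; device
names and priority numbers are only illustrative:

    /dev/dasdb1  swap  swap  pri=10  0 0   # small VDISK, the buffer, used first
    /dev/dasdc1  swap  swap  pri=1   0 0   # larger device, the trigger

When the second one starts showing non-zero Used in swapon -s, that's your
cue to schedule more memory for the image.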

--
Robert P. Nix          Mayo Foundation        .~.
RO-OC-1-18             200 First Street SW    /V\
507-284-0844           Rochester, MN 55905   /( )\
-                                        ^^-^^
In theory, theory and practice are the same, but
 in practice, theory and practice are different.



On 11/2/11 8:25 AM, Richard Higson richard.hig...@gt.owl.de wrote:

 On Wed, Nov 02, 2011 at 07:37:17AM -0400, Bauer, Bobby (NIH/CIT) [E] wrote:

 haven't done Linux on Z for a while, but I have always used the same
 Priority for the swapdisks
 so that linux could spread out the IO to several disks (preferably on separate
 spindles).
 This works well on x86 (real & VMware) and P-Series




Re: Odd swap space behavior

2011-11-02 Thread Bauer, Bobby (NIH/CIT) [E]
Yes, having 4 is a little odd. We are struggling with this server. It sits
almost idle most of the month, then for 1 or 2 days it gets 60 to 80 thousand
hits/hour.
I'm not sure what to make of this current display of the swap space.

Bobby Bauer
Center for Information Technology
National Institutes of Health
Bethesda, MD 20892-5628
301-594-7474



-Original Message-
From: RPN01 [mailto:nix.rob...@mayo.edu] 
Sent: Wednesday, November 02, 2011 10:14 AM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: Odd swap space behavior

That's what you want when you're using spindles, but on z, you're usually
talking about v-disks, which are really virtual disks in memory. When
they're not in use, they take up no space at all, but when you start using
them, they start to occupy real memory and become a burden. So you set
priorities on the swap spaces so that they each get used one at a time in
turn.

Ideally, you don't want to use them at all; they're a safeguard to keep the
image from coming down. When they are used, they're an indication that you
need more memory allocated to the image, and they give you a buffer to get
to the moment when you can safely cycle the image to add that memory. Having
four swap spaces allocated seems like a bit of overkill to me. It should be
sufficient to have one to be the buffer, and a second larger one to be the
trigger to increase the size of the image.

--
Robert P. Nix          Mayo Foundation        .~.
RO-OC-1-18             200 First Street SW    /V\
507-284-0844           Rochester, MN 55905   /( )\
-                                        ^^-^^
In theory, theory and practice are the same, but
 in practice, theory and practice are different.



On 11/2/11 8:25 AM, Richard Higson richard.hig...@gt.owl.de wrote:

 On Wed, Nov 02, 2011 at 07:37:17AM -0400, Bauer, Bobby (NIH/CIT) [E] wrote:

 haven't done Linux on Z for a while, but I have always used the same
 Priority for the swapdisks
 so that linux could spread out the IO to several disks (preferably on separate
 spindles).
 This works well on x86 (real & VMware) and P-Series




Re: Odd swap space behavior

2011-11-02 Thread Rob van der Heij
On Wed, Nov 2, 2011 at 2:25 PM, Richard Higson richard.hig...@gt.owl.de wrote:

 haven't done Linux on Z for a while, but I have always used the same 
 Priority for the swapdisks
 so that linux could spread out the IO to several disks (preferably on 
 separate spindles).
 This works well on x86 (real & VMware) and P-Series

The OP is correct in using different priorities for the swap devices.

The issue with Linux on z/VM using VDISK as swap is not to spread the
I/O (there is no I/O for VDISK). Multiple swap devices with different
priorities force Linux to reuse blocks rather than take fresh ones,
which reduces the number of pages that must be backed by z/VM. It's
also important when you have a mix of disk types (like VDISK and real
disk).
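
Same thing from the command line, if you are adding devices by hand rather
than through /etc/fstab; the priorities here are only an example:

    swapon -p 10 /dev/dasdb1   # VDISK, filled and reused first
    swapon -p 1  /dev/dasdd1   # real disk, only touched when the VDISK is full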

Rob



Re: Odd swap space behavior

2011-11-02 Thread Richard Troth
You might consider a manual 'swapoff' (then 'swapon') of one large
swap volume after that crunch time.  In any case, this is one where
you should reconsider how much VDISK to use.  Obviously, there's a lot
happening when it gets that end-of-month workload, so remember to
include CPU and other I/O when you profile this server.
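
Roughly like this, run off-peak; untested sketch, substitute your own
device, and make sure free memory plus the remaining swap devices can
absorb what is on the device you take offline, or the swapoff will fail:

    swapoff /dev/dasda2    # pages on it are read back into memory
    swapon  /dev/dasda2    # bring it back online, now empty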

As Rob said, there's no page migration in Linux.  (Other than to force
the issue with a 'swapoff' and 'swapon' cycle.)  So what you're seeing
is random pages which got pushed out at various times during the
stress period.  If not needed, they will sit there forever.  I like to
differentiate between swap occupancy and swap movement.  The
occupancy doesn't really hurt you in terms of response time.

-- R;   
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Wed, Nov 2, 2011 at 10:21, Bauer, Bobby (NIH/CIT) [E]
baue...@mail.nih.gov wrote:
 Yes, having 4 is a little odd. We are struggling with this server. It sits 
 almost idle most of the month then for 1 or 2 days it gets 60 to 80 thousand 
 hits/hour.
 Not sure what to make of this current display of the swap space.

 Bobby Bauer
 Center for Information Technology
 National Institutes of Health
 Bethesda, MD 20892-5628
 301-594-7474



 -Original Message-
 From: RPN01 [mailto:nix.rob...@mayo.edu]
 Sent: Wednesday, November 02, 2011 10:14 AM
 To: LINUX-390@VM.MARIST.EDU
 Subject: Re: Odd swap space behavior

 That's what you want when you're using spindles, but on z, you're usually
 talking about v-disks, which are really virtual disks in memory. When
 they're not in use, they take up no space at all, but when you start using
 them, they start to occupy real memory and become a burden. So you set
 priorities on the swap spaces so that they each get used one at a time in
 turn.

 Ideally, you don't want to use them at all; they're a safeguard to keep the
 image from coming down. When they are used, they're an indication that you
 need more memory allocated to the image, and they give you a buffer to get
 to the moment when you can safely cycle the image to add that memory. Having
 four swap spaces allocated seems like a bit of overkill to me. It should be
 sufficient to have one to be the buffer, and a second larger one to be the
 trigger to increase the size of the image.

 --
 Robert P. Nix          Mayo Foundation        .~.
 RO-OC-1-18             200 First Street SW    /V\
 507-284-0844           Rochester, MN 55905   /( )\
 -                                        ^^-^^
 In theory, theory and practice are the same, but
  in practice, theory and practice are different.



 On 11/2/11 8:25 AM, Richard Higson richard.hig...@gt.owl.de wrote:

 On Wed, Nov 02, 2011 at 07:37:17AM -0400, Bauer, Bobby (NIH/CIT) [E] wrote:

 haven't done Linux on Z for a while, but I have always used the same
 Priority for the swapdisks
 so that linux could spread out the IO to several disks (preferably on 
 separate
 spindles).
 This works well on x86 (real & VMware) and P-Series






Re: Odd swap space behavior

2011-11-02 Thread RPN01
You might also consider using a real disk to back the v-disk for the peak
period swap, so that it doesn't add additional memory pressure to the
underlying z/VM system.


On 11/2/11 10:29 AM, Richard Troth vmcow...@gmail.com wrote:

 You might consider a manual 'swapoff' (then 'swapon') of one large
 swap volume after that crunch time.  In any case, this is one where
 you should reconsider how much VDISK to use.  Obviously, there's a lot
 happening when it gets that end-of-month workload, so remember to
 include CPU and other I/O when you profile this server.



Re: Odd swap space behavior

2011-11-02 Thread David Boyes
 As Rob said, there's no page migration in Linux.

Yet.

 8-)

-- db



Re: Netbackup and SLES 11

2011-11-02 Thread Joe Comitini
We are running Netbackup successfully on SLES11.

NetBackup-IBMzSeriesSuSE2.6 6.5.4

Joe

-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Bern 
VK2KAD
Sent: Thursday, October 06, 2011 6:20 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Netbackup and SLES 11

Hi All

Is anyone deploying Netbackup on SLES11? I am about to embark on this
trail and am looking for footprints to follow.

We already have Master and Media servers deployed in the mid-range SANs - we
are looking to exploit this infrastructure to give us file-level backups of
our zVM guests.

Our preferred direction is to do the backups via fibre channel rather than the
network - our z10 only has OSA Express2 cards so bandwidth is limited.

Early research reveals not all features/functions are available on s390x
architecture. All comments appreciated.

Bern



Re: Netbackup and SLES 11

2011-11-02 Thread Victor Echavarry Diaz
Joe:

Which Oracle version are you running? Our Unix group tested SLES 11 with
Oracle 11 and it doesn't work.

Regards,

Victor Echavarry
System Programmer
Technology Systems & Operations Division
EVERTEC

-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Joe Comitini
Sent: Wednesday, November 02, 2011 2:39 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: Netbackup and SLES 11

We are running Netbackup successfully on SLES11.

NetBackup-IBMzSeriesSuSE2.6 6.5.4

Joe

-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Bern VK2KAD
Sent: Thursday, October 06, 2011 6:20 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Netbackup and SLES 11

Hi All

Is anyone deploying Netbackup on SLES11? I am about to embark on this
trail and am looking for footprints to follow.

We already have Master and Media servers deployed in the mid-range SANs - we
are looking to exploit this infrastructure to give us file-level backups of
our zVM guests.

Our preferred direction is to do the backups via fibre channel rather than the
network - our z10 only has OSA Express2 cards so bandwidth is limited.

Early research reveals not all features/functions are available on s390x
architecture. All comments appreciated.

Bern




Re: Odd swap space behavior

2011-11-02 Thread Shane
On Wed, 2 Nov 2011 11:29:40 -0400 Richard Troth wrote:

 So what you're seeing
 is random pages which got pushed out at various times during the
 stress period.  If not needed, they will sit there forever.

Well, maybe not forever ... ;-)
This lazy (de-)allocation behaviour of Linux is worth remembering. It's
just too expensive to continually run the q's to clean this up. Later
kernels expose the per-pid (actual) swap usage - I haven't figured out
if there is yet a reliable means of discerning disk vs. cache swap
usage.
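
For anyone who wants to poke at that, on kernels new enough to have it
(2.6.34-ish, if I remember right) the figure shows up as VmSwap in
/proc/<pid>/status; a quick and dirty top-ten:

    for f in /proc/[0-9]*/status; do
        awk '/^Name:/ {n=$2} /^VmSwap:/ && $2 > 0 {print $2, "kB", n}' "$f"
    done | sort -rn | head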

 I like to
 differentiate between swap occupancy and swap movement.  The
 occupancy doesn't really hurt you in terms of response time.

Most of the time.
If memory pressure ramps up *really* quickly, kswapd gets kicked into
action to run the q's to free up pages. And it can cycle back through
chasing enough memory to free.
Not likely in this scenario, but if kswapd needs to work, you wait.

Shane ...
