Re: Additional SWAP recommendations

2016-11-21 Thread Willemina Konynenberg

Not necessarily...

It depends on what ends up on the swap disks.
If you barely see any swap in/out activity, you don't much care how much
of your swap space you are actually using.
What you don't want, generally, is having lots & lots of swap in/out
activity slowing the applications down (see vmstat).
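
A quick way to check is vmstat's si/so columns (an illustrative invocation
only; the interval is arbitrary):

    # sample every 5 seconds; sustained non-zero si (swap-in) / so (swap-out)
    # rates mean the guest is actively paging -- a large "swpd" total by
    # itself does not.
    vmstat 5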

So, if the request for more swap is because there are some applications
with large but largely inactive memory footprints, then extending swap
is a feasible solution.  But if the request is because the system is
actively doing I/O on all the swap it already has, then adding more swap
will mostly make it even slower, so increasing RAM would have a more
beneficial effect (at a higher cost).
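
If extending swap is the route taken, a minimal sketch of adding one more
VDisk swap device (the device number 0300, the 1 GB size and the SWAPGEN
call are made-up examples; check the syntax of the SWAPGEN version you
actually have installed):

    # z/VM side: define a 1 GB VDisk at (made-up) virtual device 0300, either
    # directly or via SWAPGEN in the PROFILE EXEC:
    #   CP DEFINE VFB-512 AS 0300 BLK 2097152    (2097152 * 512 bytes = 1 GB)
    #   SWAPGEN 0300 2097152                     (also writes the swap signature)

    # Linux side, without a reboot:
    chccwdev -e 0.0.0300     # bring the new FBA VDisk online
    lsdasd                   # see which /dev/dasdX node it came up as
    mkswap -f /dev/dasdX
    swapon /dev/dasdX

    # To make it permanent, add a swap entry for the device to /etc/fstab and a
    # matching SWAPGEN line to the PROFILE EXEC -- a VDisk comes back empty at
    # every logon, so it must be re-created (and re-formatted) before Linux IPLs.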


Hope this helps,

WF Konynenberg


On 11/21/2016 03:48 PM, Karl Kingston wrote:

I would not do that.  Can you increase the guest machine size?  You
want to be at a point where you barely swap.





From:   "Beesley, Paul" 
To: LINUX-390@VM.MARIST.EDU
Date:   11/21/2016 09:35 AM
Subject: Additional SWAP recommendations
Sent by: Linux on 390 Port 



Hi

A couple of our z/Linux servers running under z/VM are apparently short on
swap space and I've been asked to increase it.
The PROFILE EXEC has 2 SWAPGEN statements to define 2 x 768MB swap VDisks.

I was planning on defining an additional VDisk as follows:
Def vfb-512 - allocate 1GB
Activate using YaST
mkswap -f /dev/dasdx
swapon /dev/dasdx
And then add an additional SWAPGEN to the profile to make it permanent

Is this the recommended method?

Regards and thanks
Paul







Re: The Mainframe vs. the Server Farm: A Comparison

2017-05-25 Thread Willemina Konynenberg
But according to the datasheets, upgrading, say, an H06 to an H13
"requires planned down time", so if you started small and then want to
grow, the only feasible (non-down-time) upgrade path is to buy a 2nd
mainframe, which, as you point out "won't scale painlessly".


With a COTS based system, you work with a cluster configuration from the
start (without requiring additional licenses), and have a rather more
granular and disruption-free upgrade path.
And because it is designed from the ground up as a cluster, it is
designed to be maintainable WHILE WORKING.  Replacing any hardware
component of the cluster (ECC memory, CPU, I/O board, main board,
network component, rack, ...) can be done while the system is running.

So there isn't really any *functional* advantage to using a mainframe.
The question is whether you want to be running a cluster of, say, 2 - 5
mainframes, or, say, 10 - 500 COTS boxen.  I.e. "what do you want to
spend your money on".

And no, you should not then have a bunch of sysadmins running around
manually managing those 500 COTS boxen.  That's supposed to be automated...


WFK

On 05/25/17 16:22, John Campbell wrote:
> As I recall from Appendix A of the "Linux for S/390" redbook, the S/390
> (and, likely, zSeries) is designed to be maintainable WHILE WORKING.
> 
> The multi-dimensional ECC memory allows a memory card to be replaced WHILE
> the system is running.  Likewise, power supplies and the CPs.
> 
> I have to agree that the "second" zSeries box won't scale painlessly;  The
> work to load balance would NOT be fun (and the second box has its own
> issues w/r/t the management team, too).
> 
> I recall dealing with the idea of putting an S/390 into a Universal
> Server Farm in Secaucus, NJ (I had some fun helping define the various
> networks, as this predated the "hyperchannel" within the BFI ("Big Iron") as
> part of this USF integration) when it was killed for non-technical reasons.
> 
> -soup
> 
> On Thu, May 25, 2017 at 8:41 AM, Philipp Kern  wrote:
> 
>> On 24.05.2017 00:03, John Campbell wrote:
>>> Cool...
>>>
>>> Though the real key is that the mainframe is designed for something at or
>>> beyond five 9s (99.999%) uptime.
>>>
>>> [HUMOR]
>>> Heard from a Tandem guy:  "Your application, as critical as it is, is on
>> a
>>> nine 5s (55.555%) platform."
>>> [/HUMOR]
>>
>> Mostly you trade complexity in hardware with complexity in software.
>> Mainframes do not scale limitlessly either, so you trade being able to
>> grow your service by adding hardware with doing it within the boundaries
>> of a sysplex.
>>
>> Your first statement is also imprecise. It's designed for five 9s
>> excluding scheduled downtime. If you use the fact that hardware is
>> unreliable (after subtracting your grossly overstated unreliability) to
>> your advantage, you end up with a system where any component can fail
>> and it doesn't matter. You win.
>>
>> Again, it then comes down to the trade-off question if you're willing to
>> pay for the smart software and the smart brains to maintain it rather
>> than paying IBM to provide service for the mainframe.
>>
>> Kind regards
>> Philipp Kern
>>
> 
> 
> 
> --
> John R. Campbell Speaker to Machines  souperb at gmail dot
> com
> MacOS X proved it was easier to make Unix user-friendly than to fix Windows
> "It doesn't matter how well-crafted a system is to eliminate errors;
> Regardless
>  of any and all checks and balances in place, all systems will fail because,
>  somewhere, there is meat in the loop." - me
> 



Re: The Mainframe vs. the Server Farm: A Comparison

2017-05-25 Thread Willemina Konynenberg
Of course.  You should budget things properly or you're not doing your
job.  Network hardware, power control, redundancy, etc, should all be
taken into consideration.  And to some extent, the IBM maintenance
contract needs to be replaced with manpower to repair/replace broken
parts as & when needed.  However, I suspect that if you do, you will
find that it still easily fits within a comparable overall budget.

I don't have numbers to do a sensible comparison of the power
consumption.  The footprint for COTS is likely somewhat bigger, but that is
generally not a major concern these days.  Space is cheap; power &
cooling are not.

Generally, I would expect that, in most organizations, other
considerations (contracts, personnel, politics, legacy) will play a
larger role in the decision of Mainframe vs COTS, before things like
functional differences, plain !/$, or environment.

WFK


On 05/25/17 18:19, Marcy Cortes wrote:
> And cabling and network ports, etc.  And while you can maintain high
> availability with those 500 things, you still will have failures and people
> costs of repairing and putting that thing back into rotation.  Can be done,
> but it’s a cost people don't account for that I've seen.
> 
> -Original Message-
> From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Tom 
> Huegel
> Sent: Thursday, May 25, 2017 9:09 AM
> To: LINUX-390@VM.MARIST.EDU
> Subject: Re: [LINUX-390] The Mainframe vs. the Server Farm: A Comparison
> 
> Don't forget to consider the mainframe has a much smaller environmental
> footprint than, say, 500 COTS.
> The cost savings in power consumption, air conditioning, and floor space can
> be huge.
> 
> On Thu, May 25, 2017 at 10:21 AM, Willemina Konynenberg wrote:
> 
>> But according to the datasheets, upgrading, say, an H06 to an H13 
>> "requires planned down time", so if you started small and then want to 
>> grow, the only feasible (non-down-time) upgrade path is to buy a 2nd 
>> mainframe, which, as you point out "won't scale painlessly".
>>
>>
>> With a COTS based system, you work with a cluster configuration from 
>> the start (without requiring additional licenses), and have a rather 
>> more granular and disruption-free upgrade path.
>> And because it is designed from the ground up as a cluster, it is 
>> designed to be maintainable WHILE WORKING.  Replacing any hardware 
>> component of the cluster (ECC memory, CPU, I/O board, main board, 
>> network component, rack, ...) can be done while the system is running.
>>
>> So there isn't really any *functional* advantage to using a mainframe.
>> The question is whether you want to be running a cluster of, say, 2 - 
>> 5 mainframes, or, say, 10 - 500 COTS boxen.  I.e. "what do you want to 
>> spend your money on".
>>
>> And no, you should not then have a bunch of sysadmins running around 
>> manually managing those 500 COTS boxen.  That's supposed to be automated...
>>
>>
>> WFK
>>
>> On 05/25/17 16:22, John Campbell wrote:
>>> As I recall from Appendix A of the "Linux for S/390" redbook, the 
>>> S/390 (and, likely, zSeries) is designed to be maintainable WHILE WORKING.
>>>
>>> The multi-dimensional ECC memory allows a memory card to be replaced
>> WHILE
>>> the system is running.  Likewise, power supplies and the CPs.
>>>
>>> I have to agree that the "second" zSeries box won't scale 
>>> painlessly;
>> The
>>> work to load balance would NOT be fun (and the second box has its 
>>> own issues w/r/t the management team, too).
>>>
>>> I recall, when dealing with the idea of putting an S/390 into a 
>>> Universal Server Farm in Secaucus, NJ (I had some fun helping define 
>>> the various networks as this predated the "hyperchannel" within the 
>>> BFI ("Big Iron")
>> as
>>> part of this USF integration) when it was killed for non-technical
>> reasons.
>>>
>>> -soup
>>>
>>> On Thu, May 25, 2017 at 8:41 AM, Philipp Kern  wrote:
>>>
>>>> On 24.05.2017 00:03, John Campbell wrote:
>>>>> Cool...
>>>>>
>>>>> Though the real key is that the mainframe is designed for 
>>>>> something at
>> or
>>>>> beyond five 9s (99.999%) uptime.
>>>>>
>>>>> [HUMOR]
>>>>> Heard from a Tandem guy:  "Your application, as critical as it is, 
>>>>> is
>> on
>>>> a
>>>>> nine 5s (55.555%) platfo

Re: Gold On LUN

2017-09-08 Thread Willemina Konynenberg

To me, all this seems to suggest some weakness in the virtualisation
infrastructure, which seems odd for something as mature as z/VM.

So then the follow-up question would be: is the host infrastructure
being used properly here?  Is there not some other (manageable) way to
set things up such that all the ugly technical details of the underlying
host/SAN infrastructure are completely hidden from the (clone) guests,
and that the guests cannot accidentally end up accessing resources they
shouldn't be allowed to access?  This should be the responsibility of
the host system, not of each and every single guest.

To me, that seems a fairly basic requirement for any sensible virtual
machine host infrastructure, so I would think that would already be
possible in z/VM somehow.

Willemina


On 09/08/17 22:28, Robert J Brenneman wrote:

Ancient history: http://www.redbooks.ibm.com/redpapers/pdfs/redp3871.pdf

Without NPIV you're in that same boat.

Even if you had NPIV you would still have to mount the new clone and fix
the ramdisk so that it points to the new target device instead of the
golden image.

This is especially an issue for DS8000 type storage units that give every
LUN a unique LUN number based on which internal LCU it's on and the order it
gets created. Storwize devices like SVC and V7000 do it differently: each
LUN is numbered starting from  and counts up from there for each host,
so the boot LUN is always LUN 0x for every clone and you
don't have to worry about that part so much.
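
As an aside, a quick way to see which WWPN/LUN combinations a guest has
actually configured (standard s390-tools / lsscsi commands; nothing
environment-specific assumed):

    lszfcp -D     # zfcp units (FCP device, WWPN, LUN) and their SCSI devices
    lsscsi        # SCSI device list, if the lsscsi package is installed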

The gist of your issue is that you need to:
   - mount the new clone volume on a running Linux instance
   - chroot into it so that your commands are 'inside' that cloned linux
     environment
   - fix the udev rules to point to the correct lun number
   - fix the grub kernel parameter to point to the correct lun if needed
   - fix the /etc/fstab records to point to the new lun if needed
   - ?? re-generate the initrd so that it does not contain references to the
     master image ??

( I'm not sure whether that last one is required on SLES 12 )
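
A rough sketch of those steps from a running "helper" guest, assuming
SLES 12-style tooling (chzdev, dracut) and made-up FCP device, WWPN and
LUN values -- every identifier and path below needs adjusting to the real
environment:

    # made-up identifiers; substitute the clone's own FCP subchannel/WWPN/LUN
    FCP=0.0.1b00
    WWPN=0x5005076300c79741
    LUN=0x4010400100000000

    # bring the clone's SCSI LUN online (older distros: echo into sysfs unit_add)
    chzdev -e zfcp-lun $FCP:$WWPN:$LUN

    # mount the clone's root file system and chroot into it
    # (the exact by-path name varies by distro; check /dev/disk/by-path)
    mount /dev/disk/by-path/ccw-$FCP-zfcp-$WWPN:$LUN-part1 /mnt
    for d in dev proc sys; do mount --bind /$d /mnt/$d; done
    chroot /mnt /bin/bash

    # inside the chroot: make the udev rules, /etc/fstab and the kernel
    # parameters (root=, zfcp devices) refer to the clone's LUN, not the gold one
    vi /etc/udev/rules.d/*zfcp*.rules /etc/fstab
    vi /etc/default/grub          # or /etc/zipl.conf, depending on the distro

    # re-generate the initrd and re-run the boot loader so no reference to the
    # master image survives, then clean up
    dracut -f
    grub2-mkconfig -o /boot/grub2/grub.cfg   # or just "zipl" on zipl-based setups
    exit
    umount /mnt/dev /mnt/proc /mnt/sys /mnt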

On Fri, Sep 8, 2017 at 3:30 PM, Alan Altmark 
wrote:


On Friday, 09/08/2017 at 04:46 GMT, Scott Rohling
 wrote:

Completely agree with you ..  I might make an exception if the only FCP
use is for z/VM to supply EDEVICEs

AND the PCHID is configured in the IOCDS as non-shared.

Alan Altmark

Senior Managing z/VM and Linux Consultant
IBM Systems Lab Services
IBM Z Delivery Practice
ibm.com/systems/services/labservices
office: 607.429.3323
mobile; 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott






--
Jay Brenneman



