We used EDEVICEs for 15 years; tired of that. :-) Seriously, the large
EDEVICEs with many zLinux OS disks per device were a bottleneck. We
never moved '/var' off, so log backups (often for a bunch of systems at
once) and Linux backups both had really high wait times on EDEVs, with
1 I/O at a time. [...]
[...] master to the target disk and reboot.
Alan
Sent from my iPhone using IBM Verse
On Sep 9, 2017, 6:14:02 PM, r...@casita.net wrote:
From: r...@casita.net
To: LINUX-390@VM.MARIST.EDU
Date: Sep 9, 2017, 6:14:02 PM
Subject: Re: [LINUX-390] Gold On LUN
Don't clone! Just link!
I do, and have done, a fair bit of cloning. But I don't recommend it for
production systems. (dev, test, recovery, sure)
Instead, _share the OS disk or LUN_. Give each "client" (for lack of a
better term) its own root.
Less to back up, less to push with Puppet, more [...]
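(To illustrate the "just link" approach - the user ID and device
numbers below are invented, not from the original post: each client
gets a read-only link to the shared OS disk and keeps only its own
root writable.)

    # from a client guest: link the gold server's OS disk read-only
    vmcp link goldsrv 0150 0150 rr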
EDEVICEs are z/VM's way to provide that virtualization and hide the ugly
details, but in this case z/VM is simply supplying a path (FCP
subchannel) and it's up to the guest OS to do what it needs to do to
connect to the storage using that path.
So to me the question would be - why aren't [...]
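For anyone less familiar with the distinction being drawn here: an
EDEVICE is defined on the z/VM side with CP SET EDEVICE, which ties an
emulated FBA disk to an FCP path. A sketch only - the device number,
FCP subchannel, WWPN and LUN are all invented, and this needs a
suitably privileged user:

    # emulate an FBA disk at 0200 backed by a SCSI LUN behind FCP b000
    vmcp set edevice 0200 type fba attr 2107 \
        fcp_dev b000 wwpn 5005076300c00000 lun 4010400000000000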
To me, all this seems to suggest some weakness in the virtualisation
infrastructure, which seems odd for something as mature as z/VM.
So then the follow-up question would be: is the host infrastructure
being used properly here? Is there not some other (manageable) way to
set things up such that [...]
Thanks Robert and others,
We figured there would be a learning curve. I think we'll get it sorted
out; we just need to work out what has to change, and then how to do
those things on SLES 12.
On 9/8/2017 3:28 PM, Robert J Brenneman wrote:
Ancient history: http://www.redbooks.ibm.com/redpapers/pdfs/redp3871.pdf
Without NPIV you're in that same boat.
Even if you had NPIV you would still have to mount the new clone and fix
the ramdisk so that it points to the new target device instead of the
golden image.
This is especially an [...]
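A sketch of the fix-up Robert describes, run from the gold (or any
service) guest with the cloned LUN attached. Every device number, WWPN,
LUN and volume-group name below is invented, and SLES 12 specifics may
differ (its boot configuration is tooling-managed):

    # make the cloned LUN visible via zfcp sysfs
    echo 0x4010400100000000 > \
      /sys/bus/ccw/drivers/zfcp/0.0.b000/0x5005076300c00000/unit_add

    # if the clone uses LVM, its VG name clashes with the running
    # system's; vgimportclone renames the clone's VG so it can be
    # activated alongside
    vgimportclone --basevgname clonevg /dev/sdX
    vgchange -ay clonevg

    # mount the clone's root and chroot into it
    mount /dev/clonevg/root /mnt
    for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
    chroot /mnt /bin/bash

    # inside the chroot: repoint boot config at the clone's own device,
    # rebuild the ramdisk, and rewrite the boot record
    vi /etc/zipl.conf   # or the SLES 12 grub2 config, then grub2-mkconfig
    dracut --force      # regenerate the initrd for this root filesystem
    zipl                # reinstall the boot loader
    exit
    umount -R /mnt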
On Friday, 09/08/2017 at 04:46 GMT, Scott Rohling wrote:
> Completely agree with you .. I might make an exception if the only FCP
> use is for z/VM to supply EDEVICEs

AND the PCHID is configured in the IOCDS as non-shared.

Alan Altmark
Senior Managing z/VM and [...]
On Friday, 09/08/2017 at 05:14 GMT, canzon...@verizon.net wrote:
> I'm a proponent of NPIV. But, limitations on how many NPIV wwpns can
> connect to a SAN storage unit is a big problem. Am I right that only
> 64 are allowed with z14 per channel? At that rate, we ran out of slots
> for FICON [...]
>>> On 9/8/2017 at 10:19 AM, Greg Preddy wrote:
> Bingo! No NPIV so our only hope is fixing the clone with it mounted to
> gold server or in recovery mode, but if "It's all tooling, no direct
> editing of any config files with SLES." then how do we fix this?
As others have [...]
>>> On 9/7/2017 at 09:08 AM, Greg Preddy <gpre...@cox.net> wrote:
> All,
>
> We're doing SLES 12 on 100% LUN, with gold copy on a single 60GB LUN.
> This is a new cloning approach for us so we're not sure how to make this
> work. Our Linux SA got the storage admin [...]

From: Scott Rohling <scott.rohl...@gmail.com>
To: LINUX-390 <LINUX-390@VM.MARIST.EDU>
Sent: Fri, Sep 8, 2017 12:48 pm
Subject: Re: Gold On LUN
Completely agree with you .. I might make an exception if the only FCP
use is for z/VM to supply EDEVICEs -- but I haven't seen an EDEVICE-only
implementation yet myself - it's always in combination with some
guest-attached FCPs. I have had a hard time explaining FCP/NPIV to
mainframe [...]
On Friday, 09/08/2017 at 02:20 GMT, Greg Preddy wrote:
> Bingo! No NPIV
*sigh* Folks have GOT to switch to NPIV!! If I audit such a system, it
fails. It's like sharing passwords. No accountability, no separation, no
access control, no assurance that the Golden Master is [...]
On 09/08/2017 04:46 PM, Steffen Maier wrote:
> On 09/08/2017 04:19 PM, Greg Preddy wrote:
>> Bingo! No NPIV so our only hope is fixing the clone with it mounted to
>> gold server or in recovery mode, but if "It's all tooling, no direct
>> editing of any config files with SLES." then how do we fix this?
> You could run the customization in a chroot environment on the cloned
> [...]

NPIV won't solve clone customization. It just solves access control.
Your boot from the un-customized clone disk would fail [...]
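To make that concrete: on a zfcp-attached root the boot record and
ramdisk typically pin the exact device, so an un-customized clone keeps
reaching for the gold LUN. An invented example of what ends up in
/etc/zipl.conf (or its SLES 12 equivalent):

    # kernel parameters naming a specific FCP device/WWPN/LUN; until
    # these point at the clone's own LUN, booting finds the gold disk
    parameters = "root=/dev/mapper/system-root rd.zfcp=0.0.b000,0x5005076300c00000,0x4010400000000000"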
> NB: The book's own tooling/scripting contains the image clone
> customization details.
>
> On 9/7/2017 10:34 AM, Karl Kingston wrote:
>>> Check your FCP definitions on linux. You may find they are still
>>> referencing your gold system.
Yes we use LVM except on /boot. Not clear what needs to be changed -
/etc/multipath.conf on the new LUN?

On 9/7/2017 10:31 AM, Grzegorz Powiedziuk wrote:
> Hi
> What do you mean it still mounts a gold LUN? You boot from a NEW Lun
> but root filesystem ends up being mounted from GOLD Lun [...]
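In case it helps with the multipath angle: a cloned LUN has its own
WWID, so anything still bound to the gold WWID has to be found and
updated. A rough sketch (device names and the WWID are invented):

    multipath -ll                      # list maps and each LUN's WWID
    /lib/udev/scsi_id -g -u /dev/sda   # WWID of one path to the new LUN
    # hunt down stale references to the gold WWID
    grep -r 36005076300c0000000000000000010aa \
        /etc/multipath* /etc/fstab /etc/zipl.conf
    multipath -W   # rewrite the wwids file from currently present devices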
Check your FCP definitions on linux. You may find they are still
referencing your gold system.

On Thu, 2017-09-07 at 11:31 -0400, Grzegorz Powiedziuk wrote:
> Hi
> What do you mean it still mounts a gold LUN? You boot from a NEW Lun
> but root filesystem ends up being mounted from GOLD Lun? [...]
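For the check Karl suggests, s390-tools has ready-made views. A sketch:

    lszfcp -D         # configured FCP device / WWPN / LUN triples
    lsscsi            # which SCSI disks those map to
    lszdev zfcp-lun   # newer SLES 12 service packs: persistent zfcp config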
Hi
What do you mean it still mounts a gold LUN? You boot from a NEW Lun
but root filesystem ends up being mounted from GOLD Lun?
First of all I would make sure that the GOLD lun after cloning is not
accessible in the virtual machine anymore. Just to make it simple.
I can't remember how [...]
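Grzegorz's first step, sketched (all addresses invented): take the gold
LUN away from the clone before debugging anything else.

    # remove the gold LUN from this guest's view (zfcp sysfs)
    echo 0x4010400000000000 > \
      /sys/bus/ccw/drivers/zfcp/0.0.b000/0x5005076300c00000/unit_remove
    # or, more bluntly, detach the whole FCP subchannel at the z/VM level
    vmcp detach b000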
All,
We're doing SLES 12 on 100% LUN, with gold copy on a single 60GB LUN.
This is a new cloning approach for us so we're not sure how to make this
work. Our Linux SA got the storage admin to replicate the LUN, but when
we change the server to boot the copy, it still mounts the gold LUN.