Re: WWN change
On Thu, Oct 27, 2016 at 6:55 PM, Alan Altmark wrote:
> On Thursday, 10/27/2016 at 02:32 GMT, "Cohen, Sam" wrote:
> > If you're not replacing the target SAN disks, then there are no changes
> > to z/VM or Linux connections (as long as the IOADDRS are unchanged).
> > Your SAN fabric has to change for the new WWPNs on the z.
>
> The z13 includes support to retain the NPIV WWPNs during an upgrade. Look
> for the "Update I/O World Wide Port Number" task. (Read it as "Update I/O
> Serial Number Portion of WWPNs".) This was previously an RPQ.
>
> To the extent you keep the same IOCDS definitions and PCHID assignments,
> you can keep the same NPIV WWPNs.

But we are changing the target SAN disks as well. And new SAN switches.

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390
or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit http://wiki.linuxvm.org/
KVM for IBM z Systems v1.1.2 released today
KVM for IBM z Systems v1.1.2 was released today - see
http://kvmonz.blogspot.com/2016/10/kvm-for-ibm-z-systems-v112-released.html
for a list of highlights from a pure KVM perspective.

Mit freundlichen Grüßen / Kind regards

Stefan Raspl
KVM on z Systems
IBM Systems & Technology Group, Systems Software Development / SW Linux on System z Dev & Service
---
IBM Deutschland
Schoenaicher Str. 220
71032 Boeblingen
Phone: +49-7031-16-2177
E-Mail: stefan.ra...@de.ibm.com
---
IBM Deutschland Research & Development GmbH
Chairwoman of the Supervisory Board: Martina Koederitz
Managing Director: Dirk Wittkopp
Registered office: Böblingen / Commercial register: Amtsgericht Stuttgart, HRB 243294
Re: Docker on Z
Mike,

We know how to modify the CPU/memory dynamically. The issue is how do we get
the Docker components to signal that they are about to deploy more workload
than the current memory size can handle, so we can grow it.

Phil

Sent from my iPhone

> On Oct 27, 2016, at 9:52 PM, Mike Friesenegger wrote:
>
> Have you looked at cpuplugd -
> https://www.ibm.com/support/knowledgecenter/linuxonibm/com.ibm.linux.z.lhdd/lhdd_r_cpuplugdcmd.html
> - which uses a set of rules to dynamically enable or disable CPUs and
> also dynamically add or remove memory.
>
> Mike Friesenegger
>
>> On 10/27/2016 06:56 PM, PHILIP TULLY wrote:
>> The issue here is we have multiple Docker engines on multiple LPARs (we
>> still think that, from an economics and manageability point of view,
>> running under VM is better than on the metal). We have been doing the
>> testing to have one engine pick up the workload from another that has
>> failed; this works. We are still trying to make the environment more
>> flexible. These Docker engine VMs are sized at 60G and 8 vCPUs but can
>> grow (without IPL) to 120G and 16 vCPUs. It is the automation piece to
>> exploit the flexibility of the platform that we need to figure out. Yes,
>> we can define full and "waste" resources, but at these sizes the
>> resources are big.
>>
>>> On Tue, Oct 25, 2016 at 04:57 PM, R P Herrold wrote:
>>> On Tue, 25 Oct 2016, PHILIP TULLY wrote:
>>>> We are looking to implement Docker on Z. As we have begun the testing,
>>>> part of the issue is to be able to grow a Docker engine dynamically
>>>> based on its current needs, especially when a node in the Docker
>>>> cluster fails. So the question is: does anyone see a way for the VM
>>>> system to see the memory resource grow, which would allow me to add
>>>> more dynamically?
>>>
>>> I thought one point of Docker was to have 'fast to spin up' instances,
>>> ready to spin up, which then pulled in ephemeral data from a back end
>>> persistent store, so that a swarm of them handled load spikes, and once
>>> the spike passes, that the excess units are shut down
>>>
>>> -- Russ herrold
Re: Docker on Z
Have you looked at cpuplugd -
https://www.ibm.com/support/knowledgecenter/linuxonibm/com.ibm.linux.z.lhdd/lhdd_r_cpuplugdcmd.html
- which uses a set of rules to dynamically enable or disable CPUs and also
dynamically add or remove memory.

Mike Friesenegger

> On 10/27/2016 06:56 PM, PHILIP TULLY wrote:
> The issue here is we have multiple Docker engines on multiple LPARs (we
> still think that, from an economics and manageability point of view,
> running under VM is better than on the metal). We have been doing the
> testing to have one engine pick up the workload from another that has
> failed; this works. We are still trying to make the environment more
> flexible. These Docker engine VMs are sized at 60G and 8 vCPUs but can
> grow (without IPL) to 120G and 16 vCPUs. It is the automation piece to
> exploit the flexibility of the platform that we need to figure out. Yes,
> we can define full and "waste" resources, but at these sizes the
> resources are big.
>
>> On Tue, Oct 25, 2016 at 04:57 PM, R P Herrold wrote:
>> On Tue, 25 Oct 2016, PHILIP TULLY wrote:
>>> We are looking to implement Docker on Z. As we have begun the testing,
>>> part of the issue is to be able to grow a Docker engine dynamically
>>> based on its current needs, especially when a node in the Docker
>>> cluster fails. So the question is: does anyone see a way for the VM
>>> system to see the memory resource grow, which would allow me to add
>>> more dynamically?
>>
>> I thought one point of Docker was to have 'fast to spin up' instances,
>> ready to spin up, which then pulled in ephemeral data from a back end
>> persistent store, so that a swarm of them handled load spikes, and once
>> the spike passes, that the excess units are shut down
>>
>> -- Russ herrold
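For reference, cpuplugd is driven by a rules file (typically /etc/cpuplugd.conf). The sketch below uses rule-variable names from the cpuplugd documentation (loadavg, onumcpus, idle, swaprate, freemem); the thresholds themselves are illustrative, not tuned values:

```
# /etc/cpuplugd.conf -- minimal sketch; thresholds are illustrative only.
UPDATE="1"          # evaluate the rules every second

# CPU hotplug: keep between 2 and 8 CPUs online.
CPU_MIN="2"
CPU_MAX="8"
HOTPLUG="(loadavg > onumcpus + 0.75) & (idle < 10.0)"
HOTUNPLUG="(loadavg < onumcpus - 0.25)"

# Memory via CMM balloon: sizes are in 4K pages.
CMM_MIN="0"
CMM_MAX="131072"    # never take more than 512 MB from the guest
CMM_INC="10240"     # move 40 MB at a time
MEMPLUG="swaprate > 100"                # give memory back when swapping starts
MEMUNPLUG="(swaprate < 1) & (freemem > 512)"
```

Note this grows/shrinks memory reactively from inside the guest; it does not solve Phil's question of Docker signaling its intent ahead of a deployment.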
Re: Docker on Z
The issue here is we have multiple Docker engines on multiple LPARs (we
still think that, from an economics and manageability point of view, running
under VM is better than on the metal). We have been doing the testing to
have one engine pick up the workload from another that has failed; this
works. We are still trying to make the environment more flexible. These
Docker engine VMs are sized at 60G and 8 vCPUs but can grow (without IPL) to
120G and 16 vCPUs. It is the automation piece to exploit the flexibility of
the platform that we need to figure out. Yes, we can define full and "waste"
resources, but at these sizes the resources are big.

> On Tue, Oct 25, 2016 at 04:57 PM, R P Herrold wrote:
> On Tue, 25 Oct 2016, PHILIP TULLY wrote:
>> We are looking to implement Docker on Z. As we have begun the testing,
>> part of the issue is to be able to grow a Docker engine dynamically
>> based on its current needs, especially when a node in the Docker cluster
>> fails. So the question is: does anyone see a way for the VM system to
>> see the memory resource grow, which would allow me to add more
>> dynamically?
>
> I thought one point of Docker was to have 'fast to spin up' instances,
> ready to spin up, which then pulled in ephemeral data from a back end
> persistent store, so that a swarm of them handled load spikes, and once
> the spike passes, that the excess units are shut down
>
> -- Russ herrold
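One z/VM-side approach to growing without an IPL (a sketch, not a tested recipe): define the extra storage as standby in the guest's directory entry, and have the automation bring it online from inside Linux with the s390-tools memory commands:

```
# In the z/VM user directory (sizes match the 60G -> 120G case above):
#   COMMAND DEFINE STORAGE 60G STANDBY 60G

# Inside the Linux guest, standby memory appears as offline blocks:
lsmem

# Bring another 4 GB online without an IPL (chmem from s390-tools):
chmem --enable 4g
```

The missing piece remains the trigger: something (the Docker scheduler, a monitoring hook) still has to decide when to run `chmem` before the new workload lands.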
Re: systemd-analyze
Bob,

Same:

# systemd-analyze critical-chain
Bootup is not yet finished. Please try again later.
The time after the unit is active or started is printed after the "@" character.
The time the unit takes to start is printed after the "+" character.

-Mike

On Thu, Oct 27, 2016 at 2:21 PM, Nix, Robert P. wrote:
> Did you try "systemd-analyze critical-chain"? If it would work, then it
> should show the longest path, which at this point, should be the one
> which is incomplete.
> --
> Robert P. Nix | Sr IT Systems Engineer | Data Center Infrastructure Services
> Mayo Clinic | 200 First Street SW | Rochester, MN 55905
> 507-284-0844 | nix.rob...@mayo.edu
> "quando omni flunkus moritati"
>
> On 10/27/16, 12:24 PM, "Linux on 390 Port on behalf of Michael MacIsaac" wrote:
>
>> Thanks for the replies.
>>
>> # systemctl list-units --failed
>> 0 loaded units listed. Pass --all to see loaded but inactive units, too.
>> # systemctl is-system-running
>> Unknown operation 'is-system-running'.
>>
>> With the systemctl status output sent to a file, I found a service
>> 'waiting'. I stopped it, but still get:
>>
>> # systemd-analyze time
>> Bootup is not yet finished. Please try again later.
>>
>> I don't really need the output of 'systemd-analyze time' that badly.
>> This was more of a curiosity.
>>
>> -Mike
>>
>> On Thu, Oct 27, 2016 at 12:13 PM, Dimitri John Ledkov wrote:
>>
>>> On 27 October 2016 at 15:32, Michael MacIsaac wrote:
>>>> I heard about this new cool command and tried it, but it did not work:
>>>>
>>>> # systemd-analyze time
>>>> Bootup is not yet finished. Please try again later.
>>>>
>>>> How would I analyze systemd to know why 'bootup is not yet finished'?
>>>> This is SLES 12 SP1.
>>>
>>> Generic / architecture-independent systemd commands to try:
>>>
>>> $ systemctl list-units --failed
>>>
>>> Should show some culprits.
>>>
>>> Also look at the full output of $ systemctl list-units and grep/look
>>> for things that are activating or waiting. Hopefully this should give
>>> you enough hints to figure out which components are not ready yet for
>>> the system to be classed as started.
>>>
>>> Ideally, at the end of boot you should be able to see that the system
>>> is in the running state:
>>>
>>> $ systemctl is-system-running
>>> running
>>>
>>> It is for me on machines that I maintain. Loads of things can make
>>> systemd believe things are degraded - e.g. when optional services are
>>> required or wanted by accident, and similar.
>>>
>>> --
>>> Regards,
>>>
>>> Dimitri.
Re: systemd-analyze
Did you try "systemd-analyze critical-chain"? If it would work, then it
should show the longest path, which at this point, should be the one which
is incomplete.
--
Robert P. Nix | Sr IT Systems Engineer | Data Center Infrastructure Services
Mayo Clinic | 200 First Street SW | Rochester, MN 55905
507-284-0844 | nix.rob...@mayo.edu
"quando omni flunkus moritati"

On 10/27/16, 12:24 PM, "Linux on 390 Port on behalf of Michael MacIsaac" wrote:

> Thanks for the replies.
>
> # systemctl list-units --failed
> 0 loaded units listed. Pass --all to see loaded but inactive units, too.
> # systemctl is-system-running
> Unknown operation 'is-system-running'.
>
> With the systemctl status output sent to a file, I found a service
> 'waiting'. I stopped it, but still get:
>
> # systemd-analyze time
> Bootup is not yet finished. Please try again later.
>
> I don't really need the output of 'systemd-analyze time' that badly. This
> was more of a curiosity.
>
> -Mike
>
> On Thu, Oct 27, 2016 at 12:13 PM, Dimitri John Ledkov wrote:
>
>> On 27 October 2016 at 15:32, Michael MacIsaac wrote:
>>> I heard about this new cool command and tried it, but it did not work:
>>>
>>> # systemd-analyze time
>>> Bootup is not yet finished. Please try again later.
>>>
>>> How would I analyze systemd to know why 'bootup is not yet finished'?
>>> This is SLES 12 SP1.
>>
>> Generic / architecture-independent systemd commands to try:
>>
>> $ systemctl list-units --failed
>>
>> Should show some culprits.
>>
>> Also look at the full output of $ systemctl list-units and grep/look
>> for things that are activating or waiting. Hopefully this should give
>> you enough hints to figure out which components are not ready yet for
>> the system to be classed as started.
>>
>> Ideally, at the end of boot you should be able to see that the system
>> is in the running state:
>>
>> $ systemctl is-system-running
>> running
>>
>> It is for me on machines that I maintain. Loads of things can make
>> systemd believe things are degraded - e.g. when optional services are
>> required or wanted by accident, and similar.
>>
>> --
>> Regards,
>>
>> Dimitri.
Re: systemd-analyze
Thanks for the replies.

# systemctl list-units --failed
0 loaded units listed. Pass --all to see loaded but inactive units, too.
# systemctl is-system-running
Unknown operation 'is-system-running'.

With the systemctl status output sent to a file, I found a service
'waiting'. I stopped it, but still get:

# systemd-analyze time
Bootup is not yet finished. Please try again later.

I don't really need the output of 'systemd-analyze time' that badly. This
was more of a curiosity.

-Mike

On Thu, Oct 27, 2016 at 12:13 PM, Dimitri John Ledkov wrote:
> On 27 October 2016 at 15:32, Michael MacIsaac wrote:
>> I heard about this new cool command and tried it, but it did not work:
>>
>> # systemd-analyze time
>> Bootup is not yet finished. Please try again later.
>>
>> How would I analyze systemd to know why 'bootup is not yet finished'?
>> This is SLES 12 SP1.
>
> Generic / architecture independent systemd commands to try:
>
> $ systemctl list-units --failed
>
> Should show some culprits.
>
> Also look at full output of $ systemctl list-units
> and grep/look for things that are activating or waiting. Hopefully
> this should give you enough hints to figure out what components are
> not ready yet, to class system as started.
>
> Ideally, at the end of the boot you should be able to see that system
> is in running state:
>
> $ systemctl is-system-running
> running
>
> It is for me on machines that I maintain. Loads of things can make
> systemd believe things are degraded - e.g. when optional services are
> required or wanted by accident and similar.
>
> --
> Regards,
>
> Dimitri.
Re: WWN change
On Thursday, 10/27/2016 at 02:32 GMT, "Cohen, Sam" wrote:
> If you're not replacing the target SAN disks, then there are no changes
> to z/VM or Linux connections (as long as the IOADDRS are unchanged). Your
> SAN fabric has to change for the new WWPNs on the z.

The z13 includes support to retain the NPIV WWPNs during an upgrade. Look
for the "Update I/O World Wide Port Number" task. (Read it as "Update I/O
Serial Number Portion of WWPNs".) This was previously an RPQ.

To the extent you keep the same IOCDS definitions and PCHID assignments,
you can keep the same NPIV WWPNs.

Alan Altmark
Senior Managing z/VM and Linux Consultant
Lab Services System z Delivery Practice
IBM Systems & Technology Group
ibm.com/systems/services/labservices
office: 607.429.3323 mobile: 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott
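If you want to verify after the move that the NPIV WWPNs really were retained, they can be read from sysfs on the Linux side (a sketch; host numbering depends on your configuration):

```
# WWPN each FCP host presents to the fabric (the NPIV WWPN when NPIV is on):
cat /sys/class/fc_host/host*/port_name

# Or, with s390-tools, list FCP devices and their remote ports in one view:
lszfcp -P
```

Comparing this output before and after the upgrade confirms whether the fabric zoning can stay untouched.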
Re: systemd-analyze
On 27 October 2016 at 15:32, Michael MacIsaac wrote:
> I heard about this new cool command and tried it, but it did not work:
>
> # systemd-analyze time
> Bootup is not yet finished. Please try again later.
>
> How would I analyze systemd to know why 'bootup is not yet finished'? This
> is SLES 12 SP1.

Generic / architecture-independent systemd commands to try:

$ systemctl list-units --failed

Should show some culprits.

Also look at the full output of $ systemctl list-units and grep/look for
things that are activating or waiting. Hopefully this should give you
enough hints to figure out which components are not ready yet for the
system to be classed as started.

Ideally, at the end of boot you should be able to see that the system is in
the running state:

$ systemctl is-system-running
running

It is for me on machines that I maintain. Loads of things can make systemd
believe things are degraded - e.g. when optional services are required or
wanted by accident, and similar.

--
Regards,

Dimitri.
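One more command from the same systemctl toolbox that may help here (not mentioned in the thread): systemd considers startup finished when its job queue is empty, so listing the outstanding jobs usually points straight at the unit everything is waiting on. A sketch:

```
# Jobs still queued or running are exactly what keeps "bootup" unfinished:
systemctl list-jobs

# Cross-check against units stuck mid-activation:
systemctl list-units --all | grep -E 'activating|waiting'
```

On older systemd versions (such as the one on SLES 12 SP1 that rejects `is-system-running`), `list-jobs` is still available.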
Re: systemd-analyze
> On 10/27/2016 at 10:32 AM, Michael MacIsaac wrote:
>> How would I analyze systemd to know why 'bootup is not yet finished'?
>> This is SLES 12 SP1.

It looks like systemd-analyze opens a socket to systemd and a bunch of
"stuff" gets sent back and forth. An strace on systemd during this doesn't
reveal any files being opened, so it appears the information needed is
being kept in memory by systemd. I guess that you would need to start with
"systemctl status" and go from there.

Mark Post
Re: systemd-analyze
Hi Mike,

I just tried that same command and it worked.

Sent from my iPhone

> On Oct 27, 2016, at 10:32 AM, Michael MacIsaac wrote:
>
> I heard about this new cool command and tried it, but it did not work:
>
> # systemd-analyze time
> Bootup is not yet finished. Please try again later.
>
> How would I analyze systemd to know why 'bootup is not yet finished'? This
> is SLES 12 SP1.
>
> Thanks.
>
> -Mike MacIsaac
It is indeed big...
Greetings Alan Altmark,

Thanks.

Paul Flint

On Thu, 27 Oct 2016, Alan Altmark wrote:

> Date: Thu, 27 Oct 2016 15:01:43 +0100
> From: Alan Altmark
> Reply-To: Linux on 390 Port
> To: LINUX-390@VM.MARIST.EDU
> Subject: Re: Docker on Z - One less liar?
>
> On Wednesday, 10/26/2016 at 11:05 GMT, Paul Flint wrote:
>> In zVM land you have a kick ass memory manager that essentially lies to
>> each Virtual Machine and tells it that the memory limit is in the
>> Exabytes (gee, I love that word :^). The guest operating system on the
>> Virtual Machine in turn uses this lie to set the limit the docker engine
>> can operate a docker instance based upon the lie it got from zVM.
>
> While the *architecture* limits the memory to 16EB, the *machine* may
> (and will) establish a smaller value based on construction. You can
> figure it out by setting the MAXSTOR value in your directory entry to 16E
> and then trying to DEFINE STORAGE 16E. If you exceed the machine maximum,
> you get an error like this:
>
> Storage size (16E) exceeds hardware maximum (16T)
>
> That's a hardware statement. From a software point of view, z/VM supports
> virtual machines up to 1TB. You can define larger, but they aren't
> supported.
>
> Alan Altmark
> Senior Managing z/VM and Linux Consultant
> Lab Services System z Delivery Practice
> IBM Systems & Technology Group
> ibm.com/systems/services/labservices
> office: 607.429.3323 mobile: 607.321.7556
> alan_altm...@us.ibm.com
> IBM Endicott

Kindest Regards,
☮
Paul Flint
(802) 479-2360 Home
(802) 595-9365 Cell

Based upon email reliability concerns, please send an acknowledgement in
response to this note.

Paul Flint
17 Averill Street
Barre, VT 05641
Re: WWN change
Chris,

If you're not replacing the target SAN disks, then there are no changes to
z/VM or Linux connections (as long as the IOADDRS are unchanged). Your SAN
fabric has to change for the new WWPNs on the z.

Sam

Sent from my Verizon 4G LTE smartphone

-------- Original message --------
From: Christer Solskogen
Date: 10/27/16 07:23 (GMT-07:00)
To: LINUX-390@VM.MARIST.EDU
Subject: WWN change

Hi!

We are in the process of moving our z/VMs and all of the Linux systems over
to a z13 from a zEC12. And just to make it even more complicated, we are
moving. That also means that the disk system is also moved (DS8870). The
disk system is somehow in sync with the old system; I'm no storage guy, so
I'm not sure how this works. I only know that it works ;-)

But after the move the WWN is going to change (I guess it would have
changed anyway?) - is there an easy way, or do I have to add the LUNs
manually for every server once the Linux system is running on the z13?

Linux system disks are on 3390, but data is on SAN / zfcp.
systemd-analyze
I heard about this new cool command and tried it, but it did not work:

# systemd-analyze time
Bootup is not yet finished. Please try again later.

How would I analyze systemd to know why 'bootup is not yet finished'? This
is SLES 12 SP1.

Thanks.

-Mike MacIsaac
WWN change
Hi!

We are in the process of moving our z/VMs and all of the Linux systems over
to a z13 from a zEC12. And just to make it even more complicated, we are
moving. That also means that the disk system is also moved (DS8870). The
disk system is somehow in sync with the old system; I'm no storage guy, so
I'm not sure how this works. I only know that it works ;-)

But after the move the WWN is going to change (I guess it would have
changed anyway?) - is there an easy way, or do I have to add the LUNs
manually for every server once the Linux system is running on the z13?

Linux system disks are on 3390, but data is on SAN / zfcp.
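For reference, if the target WWPNs/LUNs do change, attaching a LUN on a running guest is a handful of sysfs writes through the zfcp driver; the device number, WWPN, and LUN below are hypothetical placeholders, and your distro's tooling (e.g. zfcp_disk_configure on SLES) can make the attachment persistent:

```
# Bring the FCP subchannel online (device number is a placeholder):
chccwdev -e 0.0.1900

# With NPIV, remote ports are usually discovered automatically;
# otherwise trigger a port rescan:
echo 1 > /sys/bus/ccw/drivers/zfcp/0.0.1900/port_rescan

# Attach one LUN behind a target WWPN (both values are placeholders):
echo 0x4010400000000000 > \
  /sys/bus/ccw/drivers/zfcp/0.0.1900/0x500507630300c562/unit_add

# Verify the SCSI device appeared:
lszfcp -D
```

Scripting this per guest is usually less painful than it sounds, since only the WWPN/LUN pairs differ from server to server.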
Re: Docker on Z - One less liar?
On Wednesday, 10/26/2016 at 11:05 GMT, Paul Flint wrote:
> In zVM land you have a kick ass memory manager that essentially lies to
> each Virtual Machine and tells it that the memory limit is in the
> Exabytes (gee, I love that word :^). The guest operating system on the
> Virtual Machine in turn uses this lie to set the limit the docker engine
> can operate a docker instance based upon the lie it got from zVM.

While the *architecture* limits the memory to 16EB, the *machine* may (and
will) establish a smaller value based on construction. You can figure it
out by setting the MAXSTOR value in your directory entry to 16E and then
trying to DEFINE STORAGE 16E. If you exceed the machine maximum, you get an
error like this:

Storage size (16E) exceeds hardware maximum (16T)

That's a hardware statement. From a software point of view, z/VM supports
virtual machines up to 1TB. You can define larger, but they aren't
supported.

Alan Altmark
Senior Managing z/VM and Linux Consultant
Lab Services System z Delivery Practice
IBM Systems & Technology Group
ibm.com/systems/services/labservices
office: 607.429.3323 mobile: 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott
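As a side note, the size suffixes in those CP messages are binary multiples (16E = 2^64 bytes, 16T = 2^44 bytes). A small POSIX-shell sketch to convert them; the helper name `size_to_bytes` is hypothetical, not a CP or s390-tools command:

```shell
# size_to_bytes: convert a CP-style storage size (e.g. 60G, 16T, 1E)
# to bytes, assuming binary multiples (powers of 1024) as CP uses.
# Note that 16E = 2^64 would overflow 64-bit signed shell arithmetic,
# which is fitting: it is the architectural limit itself.
size_to_bytes() {
    n=${1%[KMGTPE]}          # numeric part
    suffix=${1#"$n"}         # trailing unit letter, if any
    case $suffix in
        K) mult=1024 ;;
        M) mult=1048576 ;;
        G) mult=1073741824 ;;
        T) mult=1099511627776 ;;
        P) mult=1125899906842624 ;;
        E) mult=1152921504606846976 ;;
        *) mult=1 ;;
    esac
    echo $(( n * mult ))
}

size_to_bytes 16T   # 17592186044416 bytes, the hardware maximum above
size_to_bytes 1T    # 1099511627776 bytes, the z/VM-supported guest limit
```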