Thanks guys!
These disks are ECKD 3390-3 DASD on a DS8300.
First, I defined the MDISKs for that Linux guest.
Second, I used the CPFMTXA utility to format these DASDs from cylinder 0 to
END.
Third, I booted that Linux guest. I didn't use fdisk or dasdfmt; instead I used YaST
-> Hardware -> DASD -> Select.
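For reference, the command-line equivalent of those YaST DASD steps would look roughly like the sketch below (a hedged outline only: the device address 0.0.0207 and the node /dev/dasdb are made-up examples and will differ per system):

```shell
# Bring the minidisk online (chccwdev is from s390-tools)
chccwdev -e 0.0.0207
# Low-level format with the compatible disk layout (cdl) and 4 KB blocks
dasdfmt -b 4096 -d cdl /dev/dasdb
# Create one partition covering the whole volume, non-interactively
fdasd -a /dev/dasdb
# Make a filesystem on the new partition
mkfs.ext3 /dev/dasdb1
```

YaST drives the same tools underneath, so either route should leave the DASD in the same state.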
Our z/VM version is 5.4.
The Linux version is SUSE Linux Enterprise Server 11 SP1, running as a guest on
z/VM.
I added 2 more DASDs to this guest's directory entry, like below:
MDISK 0207 3390 0001 END LBE508 MR LNX4VM LNX4VM LNX4VM
MDISK 0208 3390 0001 END LBE509 MR LNX4VM LNX4VM LNX4VM
On Linux, I want to create a volume
The Linux guest moves its memory to the target z/VM member during relocation. Then
it quiesces and resumes on the target z/VM.
How should I understand "quiesce"? What state is Linux in while it is quiesced? Is
it idling? Shut down? Or something else?
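As I understand it, "quiesce" is neither idling nor shutdown: most of the guest's memory is copied while it keeps running, and in the final stage CP briefly stops dispatching the virtual machine while the last changed pages and the machine state are transferred, after which it resumes on the target member. Live guest relocation needs an SSI cluster (z/VM 6.2 and later). A syntax sketch only, with made-up names LNX1 and MEMBER2:

```
VMRELOCATE TEST LNX1 TO MEMBER2
VMRELOCATE MOVE LNX1 TO MEMBER2
```

TEST reports whether the guest is eligible without moving anything; MOVE performs the copy, the brief quiesce, and the resume.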
--
For
>Yes, it is the application. If it doesn't have highly parallel
>multi-threading such that it can keep two CPUs working, then the guest
>ends up doing extra work to deal with two processors without any benefit.
Hello Alan,
Thanks a lot!!
We can control Linux's virtual processors to horizontal an
Based on my understanding, virtual machine guests such as Linux on z/VM get CPU
resources according to their SHARE value.
For example, the LINUXA system has SHARE RELATIVE 100, and the LINUXB system also
has SHARE RELATIVE 100, so LINUXA and LINUXB each get 50% of the real physical
processor resource.
If LINUXA has 2 virtual pr
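The proportional arithmetic described above can be sketched in plain shell (pure arithmetic, no z/VM needed; the 100/100 weights are the values from the example):

```shell
#!/bin/sh
# Proportional CPU entitlement under full contention.
share_a=100   # LINUXA: SHARE RELATIVE 100
share_b=100   # LINUXB: SHARE RELATIVE 100
total=$((share_a + share_b))
pct_a=$((100 * share_a / total))
pct_b=$((100 * share_b / total))
echo "LINUXA gets ${pct_a}%, LINUXB gets ${pct_b}%"   # prints 50% and 50%
```

Note that SHARE RELATIVE is only a weight that matters under contention; if LINUXB is idle, its unused share is redistributed and LINUXA can consume more than 50%.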
How do I dynamically increase/decrease storage for Linux running on z/VM?
Assumption: the LPAR profile is defined with 4G initial and 0 reserved storage.
The z/VM version is 5.4.
The USER DIRECT statement for the Linux guest is "USER LNX1 LNX1 1G 2G EG".
Objective: Dynamically increase/decrease
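A hedged sketch of the usual approach with standby storage (z/VM 5.4 and SLES 11 SP1 both support it; the 1G/1G split mirrors the 1G/2G USER statement above, and the memory block number 8 is a made-up example that varies by system):

```shell
# On the guest's 3270 console, before IPLing Linux (CP commands, not shell):
#   CP DEFINE STORAGE 1G STANDBY 1G
#   CP IPL 200
# Inside Linux, standby memory appears as offline memory blocks.
cat /sys/devices/system/memory/block_size_bytes   # size of one hotplug block
echo online  > /sys/devices/system/memory/memory8/state
# To shrink again, offline the block so CP can reclaim the storage:
echo offline > /sys/devices/system/memory/memory8/state
```

Note that DEFINE STORAGE resets the virtual machine, so it has to be issued before the Linux IPL, not from the running guest.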
SUSE 11 SP1 running as a guest on z/VM 5.4.
The Linux guest has 4 virtual processors.
From the z/VM Performance Toolkit FCX112 (User Resource Usage) screen, we can see
that a Linux guest's %CPU is 55.2.
Based on the description, this value is the percent of total CPU used by this guest.
At the same time, I logged on to this Linux wit
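If that 55.2 is the guest-wide total, then across 4 virtual processors the average load per vCPU is a quarter of it. A quick check (the even spread across vCPUs is an assumption; real workloads are rarely balanced):

```shell
#!/bin/sh
# Average %CPU per virtual processor, assuming an even spread.
awk 'BEGIN { printf "%.1f%% per vCPU\n", 55.2 / 4 }'   # prints 13.8% per vCPU
```

Comparing this figure with what top or vmstat reports inside the guest helps confirm which base (one real CPU vs. the guest total) the FCX112 column actually uses.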
Our z/VM is 5.4. I defined a VSWITCH with the following statements:
DEFINE VSWITCH CPOVSW1 RDEV 0558 0578 IP NON
VMLAN MACPREFIX 020001
And I defined a NIC for the z/Linux guest in the user directory file:
NICDEF 600 TYPE QDIO LAN SYSTEM EWYVSW1
When I IPLed the Linux guest, the following error messages were shown:
qe
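One thing worth checking before the qeth messages themselves: the NICDEF above couples NIC 600 to EWYVSW1, while the DEFINE VSWITCH statement created CPOVSW1. If that is not just a transcription slip, the NIC has no matching VSWITCH to couple to. A hedged sketch of queries that would confirm the coupling (vmcp is from s390-tools; QUERY VSWITCH needs a suitably privileged user):

```shell
modprobe vmcp                # load the CP command interface in the guest
vmcp QUERY NIC               # is NIC 600 defined and coupled, and to what?
vmcp QUERY VSWITCH CPOVSW1   # does the VSWITCH exist, is its RDEV active?
```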
Let me explain: the reason for installing the tape driver is that we are going to
install Tivoli Storage Manager, which will be the only tool for backing up our
systems.
Our Linux is newly installed and was provided by IBM. Nobody has modified it, and
nobody has even begun to use it.
I also downloaded the tape driver from
Our Linux is SUSE Linux Enterprise Server 11 SP1 running on z/VM V5.4.
Our Linux kernel is 2.6.32.12-0.7-default (geeko@buildhost) (gcc version 4.3.4);
the boot log reports "setup.1a06a7: Linux is running as a z/VM guest operating
system in 64-bit mode."
Our tape library is a TS3500; the drives are 3592 E05.
Our SAN switch is a 2498-B
We have finished the multipathing work in our environment. The problem we
previously had was caused by using too many FCP sub-channels on each physical
channel. I summarize our problem resolution below:
(1) The number of FCP sub-channels used (for each FCP physical channel) should
be <= 32.
The total
I have used 4 sub-channels instead of 80 for every Linux, as Steffen and Ralph
recommended.
And the hardware guy also modified the configuration on the storage side; here is
what he did (6 Linux guests share 20 disks):
(1) Allocate 20 volumes.
(2) Define volume group 1, which contains 24 volumes.
(3) Define host
Many thanks for your detailed explanations, everybody.
First, I will use 4 sub-channels instead of 80 for every Linux, as Steffen and
Ralph recommended, then check whether the error occurs again.
Second, I will update the MCF version to the latest one, as Raymond recommended.
I
What is the MCF for a FICON card? How do I get to that level?
What is the key reason for our problem: a missing MCF, or using too many (more
than 32) sub-channels in NPIV mode?
0.0.4e2a host36
Bus = "ccw"
availability= "good"
card_version= "0x0004"
cmb_enable = "0"
cutype = "1731/03"
devtype = "1732/03"
failed = "0"
hardware_version= "0x"
in_recovery = "0"
0.0.3f36 host18
Bus = "ccw"
availability= "good"
card_version= "0x0004"
cmb_enable = "0"
cutype = "1731/03"
devtype = "1732/03"
failed = "0"
hardware_version= "0x"
in_recovery = "0"
(1) We have 4 FCP-type physical channels, and each one has 64 sub-channel
addresses, so in total there are 256 sub-channel addresses we can use.
Physical channel 33: from 3300 to 333F
Physical channel 3F: from 3F00 to 3F3F
Physical channel 43: from 4300 to 433F
Physical channel 4E: from 4E00 to 4E3F
Our goal is
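The address arithmetic in that list can be generated mechanically; a small sketch using the four CHPIDs named above:

```shell
#!/bin/sh
# 4 FCP CHPIDs x 64 sub-channel addresses each = 256 usable addresses.
count=0
for ch in 33 3F 43 4E; do
    printf 'Physical channel %s: from %s00 to %s3F\n' "$ch" "$ch" "$ch"
    count=$((count + 64))
done
echo "usable sub-channel addresses: $count"   # prints 256
```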
We use 4 FCP-type channels on the mainframe side. Every channel has NPIV enabled,
with 64 sub-channels whose WWPNs are available.
We use 4 HBA ports on the DS8000 side.
We will add 20 LUN disks for the zLinux system. Every disk can be reached over two
paths (multipath is enabled).
Usually, when we add zfcp dis
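For comparison, the usual sysfs sequence for one path of one zfcp LUN looks like the sketch below (the device address 0.0.3f00 is taken from the channel list in this thread, but the WWPN and LUN values are made-up placeholders; SLES also ships the zfcp_host_configure and zfcp_disk_configure wrappers for the same steps):

```shell
# Set the FCP sub-channel online
chccwdev -e 0.0.3f00
# Add the LUN under the target port (the 2.6.32 kernel scans ports itself)
echo 0x4010400100000000 > \
    /sys/bus/ccw/drivers/zfcp/0.0.3f00/0x500507630300c562/unit_add
# After repeating this on the second path, both should appear under one map:
multipath -ll
```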
I had already recovered our system when this error occurred, so I cannot show the
output of "ls -l /dev/disk/by-id/" or the contents of "/etc/fstab".
Based on the startup messages, is there a clue that tells us what caused this
error?
>>> On 9/9/2011 at 04:18 AM, Lu GL Gao wrote:
> /sbin/fsck.ext3 (1) -- /vol1 fsck.ext3
> -a /dev/mapper/36005076308ffc6210000_part1
> fsck.ext3: No such file or directory while trying to
> open /dev/mapper/36005076308ffc621_part1
I
I have added sda and sdb to our Linux and multipathed them (because they point to
the same LUN).
Then I logged off this Linux guest and logged back on; everything is fine and we
can access the new LUN disk space.
Then I used the same method to add sdc and sdd for this Linux and multipathed them
(they are the same LUN).
Howeve
Our z/VM is version 5.4; z/Linux is SUSE 11 SP1.
I'm going to add one SCSI disk (through 2 FCP-type sub-channels, to 2 SAN
switches, to 2 HBA ports of the DS8000) and use multipath.
The following is what I did:
(1) Used YaST2 to add 2 zfcp disks (actually they are one SCSI disk):
successful.
(2) Used the mkinitrd comm
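After a disk-configuration change like step (2), the boot-time pieces that usually need refreshing on SLES are, as a hedged sketch:

```shell
mkinitrd        # rebuild the initrd so zfcp and multipath come up at boot
zipl            # rewrite the boot record so the new initrd is actually used
multipath -ll   # confirm the one SCSI disk shows two active paths
```

Skipping the zipl step after mkinitrd is a common cause of the old initrd still being booted.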
We have some clients who plan to migrate their applications running on UNIX to
z/Linux. We have not communicated with those clients yet, so I don't have detailed
info.
Before that, I am just asking whether there are some general considerations about
this migration, so that I can prepare well for our fi
If migrating UNIX applications to z/Linux, are there incompatibilities? In other
words, what are the general considerations for this kind of migration?