expected Label VOL1 not found
Our z/VM version is 5.4. The Linux version is SUSE Linux Enterprise Server 11 SP1 running as a guest on z/VM. I added two more DASDs to this guest's directory entry, like below:

MDISK 0207 3390 0001 END LBE508 MR LNX4VM LNX4VM LNX4VM
MDISK 0208 3390 0001 END LBE509 MR LNX4VM LNX4VM LNX4VM

On Linux, I want to create a volume group from these two DASDs, but the system rejects creating the volume group. A warning dialog pops up showing the following messages:

Failure occurred during following action: Creating volume group testvg from /dev/dasde /dev/dasdk
System error code was: -4010

At the same time, I also found error messages on this guest's console:

dasde: Warning, expected Label VOL1 not found, treating as CDL formated Disk
Jan 29 19:12:03 LXCPOB kernel: Warning, expected Label VOL1 not found, treating as CDL formated Disk
dasdk: Warning, expected Label VOL1 not found, treating as CDL formated Disk
Jan 29 19:12:03 LXCPOB kernel: Warning, expected Label VOL1 not found, treating as CDL formated Disk

Does anyone have an idea about this problem?
Re: expected Label VOL1 not found
Thanks, guys! These disks are ECKD 3390-3 DASD on a DS8300. First, I defined the MDISKs for that Linux guest. Second, I used the CPFMTXA utility to format these DASDs from cylinder 0 to END. Third, after booting the Linux guest, I did not use fdasd or dasdfmt, but used YaST - Hardware - DASD - select target disks - Perform Action - Activate and Format. Last, I added the new volume group. Based on your explanation, the most likely cause of my problem is that I didn't create a partition on each disk before defining the new volume group. My YaST steps didn't execute a pvcreate operation, so when I added the disks to the volume group they were still /dev/dasde rather than /dev/dasde1. Thank you again!
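For the archives, a minimal command-line sketch of the sequence the YaST path skipped (device names are the ones from this thread; dasdfmt is destructive, so only run it on the new volumes):

# low-level format the new DASDs with the compatible disk layout (CDL)
dasdfmt -b 4096 -d cdl -f /dev/dasde
dasdfmt -b 4096 -d cdl -f /dev/dasdk

# create one partition spanning each volume (-a = auto-create)
fdasd -a /dev/dasde
fdasd -a /dev/dasdk

# initialize the partitions, not the whole disks, and build the volume group
pvcreate /dev/dasde1 /dev/dasdk1
vgcreate testvg /dev/dasde1 /dev/dasdk1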
LGR guest quiesce
A Linux guest moves its memory to the target z/VM member during relocation; then it is quiesced and resumed on the target z/VM. How should I understand "quiesce"? What state is Linux in during the quiesce time? Is it idling? Shut down? Or something else?
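For context, a sketch of how the quiesce phase surfaces at the CP level, assuming a z/VM 6.2 SSI cluster (LGR is not available on earlier releases; LNX1 and MEMBER2 are hypothetical names, and the exact syntax should be checked against the CP Commands reference). During the quiesce the guest is briefly stopped — its virtual CPUs simply do not run while the final pass of changed memory pages and the I/O state are moved. It is not shut down; to the applications it looks like a short pause:

VMRELOCATE TEST LNX1 TO MEMBER2
(checks eligibility and reports how long the move is expected to take)

VMRELOCATE MOVE LNX1 TO MEMBER2 MAXQUIESCE 5
(performs the relocation, canceling it rather than keeping the guest stopped for more than 5 seconds)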
Re: zLinux guest multiprocessors
> Yes, it is the application. If it doesn't have highly parallel multi-threading such that it can keep two CPUs working, then the guest ends up doing extra work to deal with two processors without any benefit.

Hello Alan, thanks a lot! We can set a Linux guest's virtual processors to horizontal or vertical polarization. In what situations should we use vertical polarization, and in what situations should we use horizontal polarization?

Best Regards!
Gao Lu (高路)
I/T Specialist, IBM Global Services (China)
zLinux guest multiprocessors
Based on my understanding, guests such as Linux on z/VM get CPU resource according to their SHARE value. For example, LINUXA has SHARE RELATIVE 100 and LINUXB also has SHARE RELATIVE 100, so LINUXA and LINUXB each have 50% of the real physical processor resource. Suppose LINUXA has 2 virtual processors defined in the directory, and LINUXB has only 1 virtual processor. What is the difference between LINUXA and LINUXB?

Question 1: Is my understanding correct? LINUXA and LINUXB each have 50% of the real CPU resource; because LINUXA has 2 virtual processors, each of its virtual processors has 25% of the real CPU resource.
Question 2: As far as CPU resource is concerned, what is the difference between LINUXA and LINUXB?
Question 3: Usually, what factor determines whether to define multiple virtual processors for a Linux guest? Does the application determine it?

Best Regards!
Gao Lu (高路)
I/T Specialist, IBM Global Services (China)
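For reference, a sketch of where the two knobs live (user ID and values are the hypothetical ones from the question). The number of virtual processors comes from CPU statements in the directory entry; the share comes from a SHARE statement or the dynamic SET SHARE command, and CP divides a relative share across the guest's started virtual CPUs:

* directory entry (hypothetical): two virtual CPUs, relative share 100
USER LINUXA LINUXA 1G 2G G
CPU 00 BASE
CPU 01
SHARE RELATIVE 100

CP SET SHARE LINUXA RELATIVE 100
(the dynamic equivalent of the SHARE statement)

On that arithmetic, with both guests at RELATIVE 100 contending for one real processor, each guest is entitled to about 50%, and each of LINUXA's two vCPUs to roughly 25% — matching the reasoning in Question 1, to a first approximation (shares only matter when the processor is constrained).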
Dynamic adjust linux guest's storage
How can I dynamically increase/decrease storage for Linux running on z/VM?

Assumptions: The LPAR profile is defined with 4G initial and 0 reserved storage. The z/VM version is 5.4. The user directory statement for the Linux guest is USER LNX1 LNX1 1G 2G EG.

Objective: Dynamically increase/decrease guest storage. Before booting Linux, use the command "DEFINE STORAGE AS 700M RESERVED 300M" to set 300M of reserved storage for the guest.

Question 1: If the Linux guest is SUSE 11 or above, how do I dynamically increase/decrease storage?
Question 2: If the Linux guest is Red Hat, how do I dynamically increase/decrease storage?

Best Regards!
Gao Lu (高路)
I/T Specialist, IBM Global Services (China)
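On the Linux side, a sketch of the standby-memory mechanism, assuming the kernel was built with memory hotplug (SLES 11 is; for Red Hat it depends on the release). Storage that CP defines as RESERVED appears to the guest as offline memory blocks in sysfs, which root can switch at runtime; the block number below is just an example:

# show each memory block and whether it is online or offline (standby)
grep . /sys/devices/system/memory/memory*/state

# increase usable storage: bring a standby block online
echo online > /sys/devices/system/memory/memory4/state

# decrease usable storage: take a block offline again
echo offline > /sys/devices/system/memory/memory4/state

Where s390-tools is recent enough, lsmem and chmem (e.g. chmem -e 300M) wrap this same sysfs interface.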
zLinux guest cpu question
SUSE 11 SP1 running as a guest on z/VM 5.4. The Linux guest has 4 virtual processors. On the z/VM Performance Toolkit FCX112 User Resource Usage screen, I can see this Linux guest's %CPU is 55.2; based on its description, this value is the percentage of total CPU used by this guest. At the same time, I logged on to this Linux as root and issued the top command. I found:

Cpu(s): 0.8%us, 5.9%sy, 0.0%ni, 80.7%id, 6.7%wa, 0.3%hi, 4.1%si, 1.4%st

We know that Linux CPU usage mainly consists of user CPU and system CPU. But why doesn't the Performance Toolkit value correspond to the top value? What does the 55.2 %CPU on the Performance Toolkit mean? Why is it not equal to %us + %sy?

Best Regards!
Gao Lu (高路)
I/T Specialist, IBM Global Services (China)
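One way to reconcile the two numbers, under the assumption that top's Cpu(s) line is an average across the guest's 4 virtual CPUs while the Toolkit's %CPU is expressed in units of a single processor (this reading of FCX112 is an assumption, not something the screen description confirms):

per-vCPU busy from top:  %us + %sy + %hi + %si = 0.8 + 5.9 + 0.3 + 4.1 = 11.1%
scaled to the whole guest:  11.1% x 4 vCPUs = 44.4% of one processor
steal time, same scaling:    1.4% x 4 vCPUs =  5.6%
total                                       = ~50% of one processor

That lands in the same ballpark as the Toolkit's 55.2; the residual gap could plausibly come from different measurement intervals and CP overhead charged to the guest. If this reading is right, %CPU is not directly comparable to %us + %sy, because top averages over the virtual CPUs and excludes time stolen by the hypervisor.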
network error messages during zlinux boot
Our z/VM is 5.4. I defined a VSWITCH with the following statement:

DEFINE VSWITCH CPOVSW1 RDEV 0558 0578 IP NON VMLAN MACPREFIX 020001

And I defined a NIC for the z/Linux guest in the user directory file:

NICDEF 600 TYPE QDIO LAN SYSTEM EWYVSW1

When I IPL the Linux guest, the following error messages are shown:

qeth.47953b: 0.0.0600: Hardware IP fragmentation not supported on eth0
qeth.066069: 0.0.0600: Inbound source MAC-address not supported on eth0

Are these normal messages?
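For what it's worth, these two messages look like qeth's informational notices that a particular hardware assist is not offered by the (virtual) NIC, not failures; the interface normally comes up regardless. One way to check what the device actually negotiated is lsqeth from s390-tools (a sketch; eth0 and 0.0.0600 are the values from this post):

# summary table of all qeth interfaces
lsqeth -p

# detailed attributes (layer2, checksumming, etc.) for one interface
lsqeth eth0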
Install tape driver error on zlinux
Our Linux is SUSE Linux Enterprise Server 11 SP1 running on z/VM V5.4. Our Linux kernel is 2.6.32.12-0.7-default (geeko@buildhost) (gcc version 4.3.4 gcc-4...), and setup.1a06a7 reports: Linux is running as a z/VM guest operating system in 64-bit mode. Our tape library is a TS3500, and the drives are 3592 E05. Our SAN switch is a 2498-B24. The connection path from the mainframe to the tape library is: FCP channel (A) - input port of SAN switch (B) - output port of SAN switch (C) - port of drive (D). The special thing is that point C connects directly to point D, rather than going through a 3953 tape library controller.

The following is the detail of my work installing the tape driver:

(1) Downloaded the tape driver from the System Storage Interoperation Center website. Before selecting the driver, we selected the following values to narrow the target drivers in the website search:
Product Family - IBM System Storage Enterprise Tape
Product Model - TS3500 (3584) with TS1120 (3592-E05) Drives (I got this info from the HW guy)
Product Version - TS3500 (B150) with TS1120 (3592-E05) Drives (D3I1_EE9) (I selected the last one in the pull-down menu, because I thought it was the most up-to-date version)
Host Platform - IBM System z
Operating System - Novell SUSE Linux Enterprise Server 11 SP1
Adapter Model - FC 3324 (I got this info from the HW guy)
SAN or Networking Model - IBM SAN24B-4 (2498-B24) (I checked the SAN switch label)
The item I am most unsure about is Product Version; I don't know if it is correct for us. I downloaded 2 files: one is lin_tape-1.61.0-1.src.rpm, the other is lin_taped-1.61.0-sles11.s390x.rpm.

(2) I'm sure the zfcp device driver is loaded into the kernel, because we have SCSI disks connected through the zfcp device driver.

(3) rpmbuild --rebuild lin_tape-1.61.0-1.src.rpm
This step produced the lin_tape package at /usr/src/packages/RPMS/s390x/lin_tape-1.61.0-1.s390x.rpm.

(4) rpm -ivh /usr/src/packages/RPMS/s390x/lin_tape-1.61.0-1.s390x.rpm
This step produced the error below:

Preparing... ### [100%]
1:lin_tape ### [100%]
Starting lin_tape: FATAL: module '/lib/modules/2.6.32.12-0.7-default/kernel/drivers/scsi/lin_tape.ko' is unsupported
Use --allow-unsupported or set allow_unsupported_modules to 1 in /etc/modprobe.d/unsupported-modules
lin_tape loaded

I don't know why this error occurred. Does it mean I used an incorrect version of the tape driver?

(5) rpm -ivh /usr/src/lin_taped-1.61.0-sles11.s390x.rpm
This step installs the lin_taped daemon, but I got a similar error as in step 4:

Preparing... ### [100%]
1:lin_taped ### [100%]
Starting lin_tape: FATAL: module '/lib/modules/2.6.32.12-0.7-default/kernel/drivers/scsi/lin_tape.ko' is unsupported
Use --allow-unsupported or set allow_unsupported_modules to 1 in /etc/modprobe.d/unsupported-modules
lin_tape loaded

Does anybody have an idea about this error?
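The FATAL line itself names the standard SLES mechanism involved: SLES 11 refuses to auto-load kernel modules that are not flagged as supported unless the administrator opts in. A minimal sketch of that opt-in (this is generic SLES procedure implied by the message, not lin_tape-specific advice):

# one-off: load the module despite the 'unsupported' flag
modprobe --allow-unsupported lin_tape

# permanent: opt in system-wide, then reinstall/restart so the module loads
sed -i 's/^allow_unsupported_modules 0/allow_unsupported_modules 1/' \
    /etc/modprobe.d/unsupported-modules

Whether IBM's lin_tape is expected to carry the "supported" flag on this service pack is a separate question; if it is, the message could indeed indicate a driver built for a different kernel level.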
Re: Install tape driver error on zlinux
Let me explain: the reason for installing the tape driver is that we are going to install Tivoli Storage Manager, which will be the only tool to back up our systems. Our Linux is newly installed and was provided by IBM; nobody has modified it, and nobody has even begun to use it. I downloaded the tape driver from the IBM storage support website.
Re: zfcp disk error
We have finished the multipathing work in our environment. The problem we previously had was caused by using too many FCP sub-channels per physical channel. I summarize our problem resolution below:

(1) The number of FCP sub-channels used per FCP physical channel should not exceed 32. The total number of connections per FCP physical channel should not exceed 128. (Note: a connection is subchannel - hba_port - LUN.)
(2) After the problem was fixed, we updated the MCF for the FICON Express card from G40938.006 to G40938.007, based on Raymond's advice.
(3) Sometimes, if you add too many zfcp devices through YaST, it occasionally happens that the WWPN list of LUNs cannot be fetched by clicking the Get LUN button. I don't know why this happens, but we can bypass it by defining a few zfcp devices at a time.

Finally, thanks everybody here for sharing your experience with this problem!
Re: zfcp disk error
I used 4 sub-channels instead of 80 for every Linux guest, as Steffen and Ralph recommended. The HW guy also modified the configuration on the storage side; here is what he did (6 Linux guests share 20 disks):

(1) Allocate 20 volumes.
(2) Define volume group 1 that contains 24 volumes.
(3) Define host connections host_1 to host_6; every host connection contains the WWPNs of 4 sub-channels on the mainframe side.
(4) Assign all host connections to volume group 1.

On one Linux guest, I added disk1 to disk8 normally; every disk uses 4 paths (sub-channels 3300, 3F00, 4300, 4E00). When I added disk9, 3300, 4300, and 4E00 were OK, but an error occurred when I input 3F00. This time the error showed a different symptom: last time, after inputting the sub-channel, I could not get the WWPN of the hba_port; this time, after inputting the sub-channel, I can get the WWPN of the hba_port but cannot get the WWPN list of LUNs. On Linux, a dialog popped up showing:

zfcp_san_disc: Unable to activate LUN 0

The strangest thing is that I had already used 3f00 to define disk1 to disk8, and then the error suddenly occurred. Raymond, I have retrieved /console/data/iqyylog.log for the previous error and for this error (which happened at 15:11). Do you know who can help us analyze this file? I cannot recognize the characters in the log. I can send it by email.
Re: zfcp disk error
Many thanks for the detailed explanations, everybody. First, I will use 4 sub-channels instead of 80 for every Linux guest, as Steffen and Ralph recommended, and then check whether the error occurs again. Second, I will update the MCF version to the latest one, as Raymond recommended. I will post our progress and results later.

Raymond: I talked with the HW guy about collecting HW logs, but he is not clear on it. Could you tell me what kind of logs should be collected?
Re: zfcp disk error
What is the MCF for the FICON card? How do I get that level? And what is the key reason for our problem: a missing MCF, or using too many (more than 32) sub-channels in NPIV mode?
Re: zfcp disk error-part1
(1) We have 4 FCP-type physical channels; each one has 64 sub-channel addresses, so in total we can use 256 sub-channel addresses.

Physical channel 33: from 3300 to 333F
Physical channel 3F: from 3F00 to 3F3F
Physical channel 43: from 4300 to 433F
Physical channel 4E: from 4E00 to 4E3F

Our goal is for 6 Linux guests to share 20 LUN disks. If we used 4 paths to access every disk, we would not have enough physical channels, so we use 2 paths to access every disk.

Take Linux A for example:
Path to LUN1: 3300 - WWPN of disk1, 3F00 - WWPN of disk1
Path to LUN2: 3306 - WWPN of disk2, 3F06 - WWPN of disk2
...
Path to LUN20: ...

Take Linux B for example:
Path to LUN1: 3301 - WWPN of disk1, 3F01 - WWPN of disk1
Path to LUN2: 3307 - WWPN of disk2, 3F07 - WWPN of disk2
...
Path to LUN20: ...

Therefore, every Linux guest uses 40 sub-channel addresses to access 20 LUN disks. That is why we use more than one sub-channel of the same FCP channel within the same virtual machine.

(2) Our hardware is a System z9 with FICON Express4. z/VM is version 5.4; z/Linux is SUSE 11 SP1.

Output of lszfcp -Ha:

0.0.3300 host0
Bus = ccw
availability= good card_version= 0x0004 cmb_enable = 0 cutype = 1731/03 devtype = 1732/03 failed = 0 hardware_version= 0x in_recovery = 0 lic_version = 0x0710 modalias= ccw:t1731m03dt1732dm03 online = 1 peer_d_id = 0x00 peer_wwnn = 0x peer_wwpn = 0x status = 0x540a uevent = DRIVER=zfcp
Class = fc_host
maxframe_size = 2112 bytes node_name = 0x5005076400c8ef8d permanent_port_name = 0x5005076401e20610 port_id = 0x010005 port_name = 0xc05076fc9380 port_state = Online port_type = NPIV VPORT serial_number = IBM830008EF8D speed = 4 Gbit supported_classes = Class 2, Class 3 supported_speeds= 4 Gbit tgtid_bind_type = wwpn (World Wide Port Name)
Class = scsi_host
active_mode = Initiator can_queue = 4096 cmd_per_lun = 1 host_busy = 0 megabytes = 1 0 proc_name = zfcp prot_capabilities = 0 prot_guard_type = 0 queue_full = 0 42200455 requests= 43 0 41306 seconds_active = 249316 sg_tablesize= 538 state = running supported_mode = Initiator unchecked_isa_dma = 0 unique_id = 13056 utilization = 0 0 0

0.0.3306 host1
Bus = ccw
availability= good card_version= 0x0004 cmb_enable = 0 cutype = 1731/03 devtype = 1732/03 failed = 0 hardware_version= 0x in_recovery = 0 lic_version = 0x0710 modalias= ccw:t1731m03dt1732dm03 online = 1 peer_d_id = 0x00 peer_wwnn = 0x peer_wwpn = 0x status = 0x540a uevent = DRIVER=zfcp
Class = fc_host
maxframe_size = 2112 bytes node_name = 0x5005076400c8ef8d permanent_port_name = 0x5005076401e20610 port_id = 0x01000d port_name = 0xc05076fc93800018 port_state = Online port_type = NPIV VPORT serial_number = IBM830008EF8D speed = 4 Gbit supported_classes = Class 2, Class 3 supported_speeds= 4 Gbit tgtid_bind_type = wwpn (World Wide Port Name)
Class = scsi_host
active_mode = Initiator can_queue = 4096 cmd_per_lun = 1 host_busy = 0 megabytes = 1 0 proc_name = zfcp prot_capabilities = 0 prot_guard_type = 0 queue_full = 0 38339750 requests= 43 0 41328 seconds_active = 249449 sg_tablesize= 538 state = running supported_mode = Initiator unchecked_isa_dma = 0 unique_id = 13062 utilization = 0 0 0

0.0.330c host2
Bus = ccw
availability= good card_version= 0x0004 cmb_enable = 0 cutype = 1731/03 devtype = 1732/03 failed = 0 hardware_version= 0x in_recovery = 0 lic_version = 0x0710 modalias= ccw:t1731m03dt1732dm03 online = 1 peer_d_id = 0x00 peer_wwnn = 0x peer_wwpn
Re: zfcp disk error-part2
0.0.3f36 host18
Bus = ccw
availability= good card_version= 0x0004 cmb_enable = 0 cutype = 1731/03 devtype = 1732/03 failed = 0 hardware_version= 0x in_recovery = 0 lic_version = 0x0710 modalias= ccw:t1731m03dt1732dm03 online = 1 peer_d_id = 0x00 peer_wwnn = 0x peer_wwpn = 0x status = 0x540a uevent = DRIVER=zfcp
Class = fc_host
maxframe_size = 2112 bytes node_name = 0x5005076400c8ef8d permanent_port_name = 0x5005076401e20c46 port_id = 0x010013 port_name = 0xc05076fc938008d8 port_state = Online port_type = NPIV VPORT serial_number = IBM830008EF8D speed = 4 Gbit supported_classes = Class 2, Class 3 supported_speeds= 4 Gbit tgtid_bind_type = wwpn (World Wide Port Name)
Class = scsi_host
active_mode = Initiator can_queue = 4096 cmd_per_lun = 1 host_busy = 0 megabytes = 2 0 proc_name = zfcp prot_capabilities = 0 prot_guard_type = 0 queue_full = 0 36035443 requests= 64 0 41289 seconds_active = 249214 sg_tablesize= 538 state = running supported_mode = Initiator unchecked_isa_dma = 0 unique_id = 16182 utilization = 0 0 0

0.0.4300 host19
Bus = ccw
availability= good card_version= 0x0004 cmb_enable = 0 cutype = 1731/03 devtype = 1732/03 failed = 0 hardware_version= 0x in_recovery = 0 lic_version = 0x0710 modalias= ccw:t1731m03dt1732dm03 online = 1 peer_d_id = 0x00 peer_wwnn = 0x peer_wwpn = 0x status = 0x540a uevent = DRIVER=zfcp
Class = fc_host
maxframe_size = 2112 bytes node_name = 0x5005076400c8ef8d permanent_port_name = 0x5005076401e207f2 port_id = 0x010103 port_name = 0xc05076fc93800400 port_state = Online port_type = NPIV VPORT serial_number = IBM830008EF8D speed = 4 Gbit supported_classes = Class 2, Class 3 supported_speeds= 4 Gbit tgtid_bind_type = wwpn (World Wide Port Name)
Class = scsi_host
active_mode = Initiator can_queue = 4096 cmd_per_lun = 1 host_busy = 0 megabytes = 1 0 proc_name = zfcp prot_capabilities = 0 prot_guard_type = 0 queue_full = 0 49997967 requests= 47 0 434 seconds_active = 2551 sg_tablesize= 538 state = running supported_mode = Initiator unchecked_isa_dma = 0 unique_id = 17152 utilization = 0 0 0

0.0.4306 host20
Bus = ccw
availability= good card_version= 0x0004 cmb_enable = 0 cutype = 1731/03 devtype = 1732/03 failed = 0 hardware_version= 0x in_recovery = 0 lic_version = 0x0710 modalias= ccw:t1731m03dt1732dm03 online = 1 peer_d_id = 0x00 peer_wwnn = 0x peer_wwpn = 0x status = 0x540a uevent = DRIVER=zfcp
Class = fc_host
maxframe_size = 2112 bytes node_name = 0x5005076400c8ef8d permanent_port_name = 0x5005076401e207f2 port_id = 0x010110 port_name = 0xc05076fc93800418 port_state = Online port_type = NPIV VPORT serial_number = IBM830008EF8D speed = 4 Gbit supported_classes = Class 2, Class 3 supported_speeds= 4 Gbit tgtid_bind_type = wwpn (World Wide Port Name)
Class = scsi_host
active_mode = Initiator can_queue = 4096 cmd_per_lun = 1 host_busy = 0 megabytes = 1 0 proc_name = zfcp prot_capabilities = 0 prot_guard_type = 0 queue_full = 0 41230873 requests= 47 0 437 seconds_active = 2569 sg_tablesize= 538 state = running supported_mode = Initiator unchecked_isa_dma = 0 unique_id = 17158 utilization = 0 0 0

0.0.430c host21
Bus = ccw
availability=
Re: zfcp disk error-part3
0.0.4e2a host36
Bus = ccw
availability= good card_version= 0x0004 cmb_enable = 0 cutype = 1731/03 devtype = 1732/03 failed = 0 hardware_version= 0x in_recovery = 0 lic_version = 0x0710 modalias= ccw:t1731m03dt1732dm03 online = 1 peer_d_id = 0x00 peer_wwnn = 0x peer_wwpn = 0x status = 0x540a uevent = DRIVER=zfcp
Class = fc_host
maxframe_size = 2112 bytes node_name = 0x5005076400c8ef8d permanent_port_name = 0x5005076401a20beb port_id = 0x010111 port_name = 0xc05076fc93800ca8 port_state = Online port_type = NPIV VPORT serial_number = IBM830008EF8D speed = 4 Gbit supported_classes = Class 2, Class 3 supported_speeds= 4 Gbit tgtid_bind_type = wwpn (World Wide Port Name)
Class = scsi_host
active_mode = Initiator can_queue = 4096 cmd_per_lun = 1 host_busy = 0 megabytes = 2 0 proc_name = zfcp prot_capabilities = 0 prot_guard_type = 0 queue_full = 0 23603303 requests= 64 0 377 seconds_active = 2141 sg_tablesize= 538 state = running supported_mode = Initiator unchecked_isa_dma = 0 unique_id = 20010 utilization = 0 0 0

0.0.4e30 host37
Bus = ccw
availability= good card_version= 0x0004 cmb_enable = 0 cutype = 1731/03 devtype = 1732/03 failed = 0 hardware_version= 0x in_recovery = 0 lic_version = 0x0710 modalias= ccw:t1731m03dt1732dm03 online = 1 peer_d_id = 0x00 peer_wwnn = 0x peer_wwpn = 0x status = 0x540a uevent = DRIVER=zfcp
Class = fc_host
maxframe_size = 2112 bytes node_name = 0x5005076400c8ef8d permanent_port_name = 0x5005076401a20beb port_id = 0x01010a port_name = 0xc05076fc93800cc0 port_state = Online port_type = NPIV VPORT serial_number = IBM830008EF8D speed = 4 Gbit supported_classes = Class 2, Class 3 supported_speeds= 4 Gbit tgtid_bind_type = wwpn (World Wide Port Name)
Class = scsi_host
active_mode = Initiator can_queue = 4096 cmd_per_lun = 1 host_busy = 0 megabytes = 2 0 proc_name = zfcp prot_capabilities = 0 prot_guard_type = 0 queue_full = 0 26084100 requests= 64 0 381 seconds_active = 2163 sg_tablesize= 538 state = running supported_mode = Initiator unchecked_isa_dma = 0 unique_id = 20016 utilization = 0 0 0

0.0.4e36 host38
Bus = ccw
availability= good card_version= 0x0004 cmb_enable = 0 cutype = 1731/03 devtype = 1732/03 failed = 0 hardware_version= 0x in_recovery = 0 lic_version = 0x0710 modalias= ccw:t1731m03dt1732dm03 online = 1 peer_d_id = 0x00 peer_wwnn = 0x peer_wwpn = 0x status = 0x540a uevent = DRIVER=zfcp
Class = fc_host
maxframe_size = 2112 bytes node_name = 0x5005076400c8ef8d permanent_port_name = 0x5005076401a20beb port_id = 0x01010d port_name = 0xc05076fc93800cd8 port_state = Online port_type = NPIV VPORT serial_number = IBM830008EF8D speed = 4 Gbit supported_classes = Class 2, Class 3 supported_speeds= 4 Gbit tgtid_bind_type = wwpn (World Wide Port Name)
Class = scsi_host
active_mode = Initiator can_queue = 4096 cmd_per_lun = 1 host_busy = 0 megabytes = 1 0 proc_name = zfcp prot_capabilities = 0 prot_guard_type = 0 queue_full = 0 25762173 requests= 64 0 384 seconds_active = 2186 sg_tablesize= 538 state = running supported_mode = Initiator unchecked_isa_dma = 0 unique_id = 20022 utilization = 0 0 0
zfcp disk error
We use 4 FCP-type channels on the mainframe side. Every channel has NPIV enabled, with 64 sub-channels whose WWPNs are available. We use 4 HBA ports on the DS8000 side. We will add 20 LUN disks for the z/Linux system; every disk can be reached over two paths (multipath is enabled).

Usually, when we add a zfcp disk to a z/Linux system, we input the sub-channel address (for example, 3300) in YaST, we automatically get the WWPN of the HBA port on the storage side, and then we get the accessible WWPNs of the LUN disks. However, in our system I found that some sub-channels can get the WWPN of the HBA port normally and some cannot. The strange thing is that these sub-channels belong to the same physical channel: for example, 3300 can automatically recognize the WWPN of the target storage, but 3301 cannot. That means the physical fiber connection is not the problem. I also checked the configuration on the storage side with the HW guy; the disks are defined with the correct WWPNs of the mainframe-side sub-channels.

I found some error messages shown on the z/Linux guest console from the z/VM side:

zfcp.e78dec: 0.0.3f24: A QDIO problem occurred
zfcp.e78dec: 0.0.3f1e: A QDIO problem occurred
zfcp.e78dec: 0.0.3f18: A QDIO problem occurred
zfcp.e78dec: 0.0.3f00: A QDIO problem occurred
zfcp.e78dec: 0.0.3f36: A QDIO problem occurred
zfcp.e78dec: 0.0.3f0c: A QDIO problem occurred
zfcp.e78dec: 0.0.3f2a: A QDIO problem occurred
zfcp.e78dec: 0.0.3f06: A QDIO problem occurred
zfcp.e78dec: 0.0.3f12: A QDIO problem occurred
zfcp.3dff9c: 0.0.3f24: Setting up the QDIO connection to the FCP adapter failed
zfcp.3dff9c: 0.0.3f12: Setting up the QDIO connection to the FCP adapter failed
zfcp.3dff9c: 0.0.3f0c: Setting up the QDIO connection to the FCP adapter failed
zfcp.3dff9c: 0.0.3f00: Setting up the QDIO connection to the FCP adapter failed
zfcp.3dff9c: 0.0.3f1e: Setting up the QDIO connection to the FCP adapter failed
zfcp.3dff9c: 0.0.3f06: Setting up the QDIO connection to the FCP adapter failed
zfcp.3dff9c: 0.0.3f2a: Setting up the QDIO connection to the FCP adapter failed
zfcp.3dff9c: 0.0.3f18: Setting up the QDIO connection to the FCP adapter failed
zfcp.3dff9c: 0.0.3f36: Setting up the QDIO connection to the FCP adapter failed
Sep 16 11:31:24 LXCPOA kernel: zfcp.e78dec: 0.0.3f24: A QDIO problem occurred
Sep 16 11:31:24 LXCPOA kernel: zfcp.e78dec: 0.0.3f1e: A QDIO problem occurred
Sep 16 11:31:24 LXCPOA kernel: zfcp.e78dec: 0.0.3f18: A QDIO problem occurred
Sep 16 11:31:24 LXCPOA kernel: zfcp.e78dec: 0.0.3f00: A QDIO problem occurred
Sep 16 11:31:24 LXCPOA kernel: zfcp.e78dec: 0.0.3f36: A QDIO problem occurred
Sep 16 11:31:24 LXCPOA kernel: zfcp.e78dec: 0.0.3f0c: A QDIO problem occurred
Sep 16 11:31:24 LXCPOA kernel: zfcp.e78dec: 0.0.3f2a: A QDIO problem occurred
Sep 16 11:31:24 LXCPOA kernel: zfcp.e78dec: 0.0.3f06: A QDIO problem occurred
Sep 16 11:31:24 LXCPOA kernel: zfcp.e78dec: 0.0.3f12: A QDIO problem occurred
Sep 16 11:31:24 LXCPOA kernel: zfcp.3dff9c: 0.0.3f24: Setting up the QDIO conne
Sep 16 11:31:24 LXCPOA kernel: zfcp.3dff9c: 0.0.3f12: Setting up the QDIO conne
Sep 16 11:31:24 LXCPOA kernel: zfcp.3dff9c: 0.0.3f0c: Setting up the QDIO conne
Sep 16 11:31:24 LXCPOA kernel: zfcp.3dff9c: 0.0.3f00: Setting up the QDIO conne
Sep 16 11:31:24 LXCPOA kernel: zfcp.3dff9c: 0.0.3f1e: Setting up the QDIO conne
Sep 16 11:31:24 LXCPOA kernel: zfcp.3dff9c: 0.0.3f06: Setting up the QDIO conne
Sep 16 11:31:24 LXCPOA kernel: 0.0.3f2a: Setting up the QDIO connection to the
Sep 16 11:31:24 LXCPOA kernel: zfcp.3dff9c: 0.0.3f18: Setting up the QDIO conne
Sep 16 11:31:24 LXCPOA kernel: zfcp.3dff9c: 0.0.3f36: Setting up the QDIO conne
qdio: 0.0.3f24 ZFCP on SC 19 using AI:1 QEBSM:1 PCI:1 TDD:1 SIGA: W AO
qdio: 0.0.3f1e ZFCP on SC 18 using AI:1 QEBSM:1 PCI:1 TDD:1 SIGA: W AO
qdio: 0.0.3f06 ZFCP on SC 14 using AI:1 QEBSM:1 PCI:1 TDD:1 SIGA: W AO
qdio: 0.0.3f18 ZFCP on SC 17 using AI:1 QEBSM:1 PCI:1 TDD:1 SIGA: W AO
qdio: 0.0.3f12 ZFCP on SC 16 using AI:1 QEBSM:1 PCI:1 TDD:1 SIGA: W AO
qdio: 0.0.3f0c ZFCP on SC 15 using AI:1 QEBSM:1 PCI:1 TDD:1 SIGA: W AO
qdio: 0.0.3f00 ZFCP on SC 13 using AI:1 QEBSM:1 PCI:1 TDD:1 SIGA: W AO
qdio: 0.0.3f2a ZFCP on SC 1a using AI:1 QEBSM:1 PCI:1 TDD:1 SIGA: W AO
qdio: 0.0.3f36 ZFCP on SC 1c using AI:1 QEBSM:1 PCI:1 TDD:1 SIGA: W AO
INIT: Id 4 respawning too fast: disabled for 5 minutes
zfcp.e78dec: 0.0.3306: A QDIO problem occurred
zfcp.e78dec: 0.0.3318: A QDIO problem occurred
zfcp.e78dec: 0.0.3330: A QDIO problem occurred
zfcp.e78dec: 0.0.331e: A QDIO problem occurred
zfcp.e78dec: 0.0.332a: A QDIO problem occurred
zfcp.e78dec: 0.0.3300: A QDIO problem occurred
zfcp.e78dec: 0.0.330c: A QDIO problem occurred
zfcp.e78dec: 0.0.3312: A QDIO problem occurred
zfcp.e78dec: 0.0.3324: A QDIO problem occurred
zfcp.e78dec: 0.0.3336: A QDIO problem occurred
zfcp.3dff9c: 0.0.3306: Setting up the QDIO connection to the FCP adapter failed
zfcp.3dff9c: 0.0.3336: Setting up the QDIO
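For comparison with the YaST flow, a minimal sketch of activating a single path by hand through sysfs, using the device number, HBA-port WWPN, and LUN that appear elsewhere in this digest (0.0.3300, 0x5005076308034621, LUN 0x40004000; padding the LUN to the 8-byte form sysfs expects is an assumption, since the posted value is truncated). On SLES 11 the remote ports are scanned automatically once the adapter is online:

# set the FCP subchannel online
chccwdev -e 0.0.3300

# attach the LUN behind the storage HBA port
echo 0x4000400000000000 > /sys/bus/ccw/drivers/zfcp/0.0.3300/0x5005076308034621/unit_add

# confirm the SCSI device appeared
lsscsi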
Re: linux startup problem because of multipathing implementation
I had already recovered our system by the time this error occurred, so I cannot display the output of ls -l /dev/disk/by-id/ or /etc/fstab. Based on the startup messages, is there a clue that tells us what caused this error?
linux startup problem because of multipathing implementation
I added sda and sdb to our Linux guest and multipathed them (because they point to the same LUN). Then I logged this Linux guest off and back on; everything was fine and we could access the new LUN disk space. Then I used the same method to add sdc and sdd to this Linux guest and multipathed them (they are also the same LUN). However, Linux booted abnormally; the whole startup log follows. Does anyone know what caused this?

DO YOU WANT TO IPL LINUX FROM MINIDISK 200? Y/N
y
00: zIPL v1.8.0-44.22.5 interactive boot menu
00:
00: 0. default (SLES11_SP1)
00:
00: 1. SLES11_SP1
00: 2. FailsafeV1
00: 3. ipl
00:
00: Note: VM users please use '#cp vi vmsg number kernel-parameters'
00:
00: Please choose (default will boot in 10 seconds):
00: Booting default (SLES11_SP1)...
Initializing cgroup subsys cpuset
Initializing cgroup subsys cpu
Linux version 2.6.32.12-0.7-default (geeko@buildhost) (gcc version 4.3.4 gcc-4_3-branch revision 152973 (SUSE Linux) ) #1 SMP 2010-05-20 11:14:20 +0200
setup.1a06a7: Linux is running as a z/VM guest operating system in 64-bit mode
Zone PFN ranges:
DMA 0x - 0x0008
Normal 0x0008 - 0x0008
Movable zone start PFN for each node
early_node_map 2 active PFN ranges
0: 0x - 0x8800
0: 0xa200 - 0x00041a00
PERCPU: Embedded 11 pages/cpu @01a1e000 s12544 r8192 d24320 u65536
pcpu-alloc: s12544 r8192 d24320 u65536 alloc=16*4096
pcpu-alloc: 0 00 0 01 0 02 0 03 0 04 0 05 0 06 0 07
pcpu-alloc: 0 08 0 09 0 10 0 11 0 12 0 13 0 14 0 15
pcpu-alloc: 0 16 0 17 0 18 0 19 0 20 0 21 0 22 0 23
pcpu-alloc: 0 24 0 25 0 26 0 27 0 28 0 29 0 30 0 31
pcpu-alloc: 0 32 0 33 0 34 0 35 0 36 0 37 0 38 0 39
pcpu-alloc: 0 40 0 41 0 42 0 43 0 44 0 45 0 46 0 47
01: HCPGSP2627I The virtual machine is placed in CP mode due to a SIGP initial CPU reset from CPU 00.
02: HCPGSP2627I The virtual machine is placed in CP mode due to a SIGP initial CPU reset from CPU 00.
03: HCPGSP2627I The virtual machine is placed in CP mode due to a SIGP initial CPU reset from CPU 00.
pcpu-alloc: 0 48 0 49 0 50 0 51 0 52 0 53 0 54 0 55
pcpu-alloc: 0 56 0 57 0 58 0 59 0 60 0 61 0 62 0 63
Built 1 zonelists in Zone order, mobility grouping on. Total pages: 258469
Kernel command line: root=/dev/disk/by-path/ccw-0.0.0200-part1 TERM=dumb BOOT_IMAGE=0
PID hash table entries: 4096 (order: 3, 32768 bytes)
Dentry cache hash table entries: 131072 (order: 8, 1048576 bytes)
Inode-cache hash table entries: 65536 (order: 7, 524288 bytes)
Memory: 1011168k/1075200k available (4382k kernel code, 0k reserved, 2114k data, 228k init)
Write protected kernel read-only data: 0x10 - 0x5f
Hierarchical RCU implementation.
console ttyS0 enabled
allocated 13107200 bytes of page_cgroup
please try 'cgroup_disable=memory' option if you don't want memory cgroups
pid_max: default: 65536 minimum: 512
Security Framework initialized
AppArmor: AppArmor initialized
Mount-cache hash table entries: 256
Initializing cgroup subsys ns
Initializing cgroup subsys cpuacct
Initializing cgroup subsys memory
Initializing cgroup subsys devices
Initializing cgroup subsys freezer
Initializing cgroup subsys net_cls
cpu.33a262: 4 configured CPUs, 0 standby CPUs
cpu.17772b: Processor 0 started, address 0, identification 02EF8D
cpu.17772b: Processor 1 started, address 0, identification 02EF8D
cpu.17772b: Processor 2 started, address 0, identification 02EF8D
cpu.17772b: Processor 3 started, address 0, identification 02EF8D
Brought up 4 CPUs
devtmpfs: initialized
NET: Registered protocol family 16
bio: create slab bio-0 at 0
NetLabel: Initializing
NetLabel: domain hash size = 128
NetLabel: protocols = UNLABELED CIPSOv4
NetLabel: unlabeled traffic allowed by default
AppArmor: AppArmor Filesystem Enabled
NET: Registered protocol family 2
IP route cache hash table entries: 32768 (order: 6, 262144 bytes)
TCP established hash table entries: 65536 (order: 8, 1048576 bytes)
TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
TCP: Hash tables configured (established 65536 bind 65536)
TCP reno registered
NET: Registered protocol family 1
Unpacking initramfs...
Freeing initrd memory: 5621k freed
audit: initializing netlink socket (disabled)
type=2000 audit(1315554916.548:1): initialized
HugeTLB registered 1 MB page size, pre-allocated 0 pages
VFS: Disk quotas dquot_6.5.2
Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
msgmni has been set to 496
alg: No test for stdrng (krng)
Block layer SCSI generic (bsg) driver version 0.4 loaded (major 254)
io scheduler noop registered
io scheduler anticipatory registered
io scheduler deadline registered (default)
io scheduler cfq registered
cio.b5d5f6: Channel measurement facility initialized using format extended (mode autodetected)
TCP cubic registered
Re: linux startup problem because of multipathing implementation
On 9/9/2011 at 04:18 AM, Lu GL Gao lu...@cn.ibm.com wrote:

/sbin/fsck.ext3 (1) -- /vol1 fsck.ext3 -a /dev/mapper/36005076308ffc6210000_part1
fsck.ext3: No such file or directory while trying to open /dev/mapper/36005076308ffc621_part1

I added 2 SCSI disks: one is reached via 3300 (FCP subchannel) - 5005076308034621 (HBA port 1 of the DS8000) - 0x40004000, the other via 3f00 (FCP subchannel) - 5005076308134621 (HBA port 2 of the DS8300) - 0x40004000. They are the same LUN, so after multipathing them this long number name was generated automatically. I didn't add these zfcp disks by command, but by YaST2. After that, I logged this Linux guest off and back on, and booted Linux. Then this error occurred.
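A hedged sketch of the usual repair on SLES 11 when fsck runs before the multipath map exists: rebuild the initrd so multipath is assembled inside it, re-run zipl, and make sure /etc/fstab names a device that actually exists at boot time (the -f feature syntax is SLES mkinitrd; verify it on your system before relying on it):

# rebuild the initrd with multipath support, then rewrite the boot record
mkinitrd -f multipath
zipl

# after reboot, the map and its _part1 device should both exist
ls -l /dev/mapper/
multipath -ll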
Multipath alias problem
Our z/VM is version 5.4; z/Linux is SUSE 11 SP1. I'm going to add one SCSI disk (reached from 2 FCP-type sub-channels, through 2 SAN switches, to 2 HBA ports of a DS8000) and use multipath. The following is what I did:

(1) Used YaST2 to add 2 zfcp disks (actually they are one SCSI disk). Successful.

(2) Used the mkinitrd command. The response messages are different from those of SUSE 10; I'm not sure whether they are correct.

LXCPOA:/ # mkinitrd
Kernel image: /boot/image-2.6.32.12-0.7-default
Initrd image: /boot/initrd-2.6.32.12-0.7-default
Root device: /dev/disk/by-path/ccw-0.0.0200-part1 (/dev/dasda1) (mounted on / as ext3)
Kernel Modules: jbd mbcache ext3 dasd_mod dasd_eckd_mod
Features: block dasd resume.userspace resume.kernel
25502 blocks

(3) Used the zipl command. The response messages are also different from those of SUSE 10; I'm not sure whether they are correct.

LXCPOA:/ # zipl
Using config file '/etc/zipl.conf'
Building bootmap in '/boot/zipl'
Building menu 'menu'
Adding #1: IPL section 'SLES11_SP1' (default)
Adding #2: IPL section 'FailsafeV1'
Adding #3: IPL section 'ipl'
Preparing boot device: dasda (0200).
Done.

(4) Used the lsscsi command to show the available zfcp disks and checked their information. Successful (I think).

LXCPOA:/ # lsscsi
[0:0:0:1073758208] disk IBM 2107900 3.44 /dev/sda
[1:0:0:1073758208] disk IBM 2107900 3.44 /dev/sdb
LXCPOA:/ # cat /sys/class/scsi_device/0:0:0:1073758208/device/fcp_lun
0x40004000
LXCPOA:/ # cat /sys/class/scsi_device/0:0:0:1073758208/device/wwpn
0x5005076308034621
LXCPOA:/ # cat /sys/class/scsi_device/0:0:0:1073758208/device/hba_id
0.0.3300
LXCPOA:/ # cat /sys/class/scsi_device/1:0:0:1073758208/device/fcp_lun
0x40004000
LXCPOA:/ # cat /sys/class/scsi_device/1:0:0:1073758208/device/wwpn
0x5005076308134621
LXCPOA:/ # cat /sys/class/scsi_device/1:0:0:1073758208/device/hba_id
0.0.3f00

(5) At this point there was no multipath.conf file in the system, so I created a new one.

LXCPOA:/etc # cat multipath.conf
multipaths {
    multipath {
        wwid 36005076308ffc621
        alias mpvol1
    }
}

(6) I want to enable the new alias, so I ran multipath again. But nothing is shown; is that an error?

LXCPOA:/etc # multipath
LXCPOA:/etc #

(7) Used YaST2 to create a file system on it and mount it. Successful.

(8) Checked the new directory. I don't know why my alias for this disk is not used!

LXCPOA:/ # df -h
Filesystem                          Size  Used Avail Use% Mounted on
/dev/dasda1                         2.3G  438M  1.8G  20% /
devtmpfs                            497M  204K  497M   1% /dev
tmpfs                               497M  100K  497M   1% /dev/shm
/dev/dasdb1                         2.3G  1.9G  293M  87% /usr
/dev/dasdc1                         2.3G  1.4G  813M  63% /usr/share
/dev/dasdd1                         2.3G  195M  2.0G   9% /var
/dev/mapper/36005076308ffc621_part1 778G  197M  738G   1% /vol1

(9) I uploaded a 1G file by FTP to the new directory to check whether the I/O load is balanced between the 2 paths. Is the balancing successful? Why are the tps values for dm-0 and dm-1 so different?

LXCPOA:/ # iostat
Linux 2.6.32.12-0.7-default (LXCPOA) 09/07/2011 _s390x_

avg-cpu: %user %nice %system %iowait %steal %idle
          2.62  0.01    2.66    1.02   0.36 93.33

Device:    tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
dasda     2.66      116.17       98.00    317808    268112
dasdb     1.22      121.49        0.27    332352       744
dasdc     1.15       83.30        0.97    227896      2664
dasde     0.06        1.74        0.00      4764         0
dasdd     0.97       13.11       15.88     35856     43432
dasdf     0.02        0.46        0.00      1248         0
dasdg     0.02        0.46        0.00      1248         0
dasdh     0.02        0.46        0.00      1248         0
sda       8.68       17.53     5090.43     47962  13926104
dm-0     17.07       17.12    10194.13     46834  27888488
sdb       8.42        0.52     5103.70      1432  13962384
dm-1   1274.58        2.49    10194.13      6818  27888480

Many thanks!!!
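On step (6): a bare `multipath` run prints nothing at its default verbosity even when it works, and an already-assembled map keeps its old WWID-based name until it is flushed and rebuilt. A sketch of forcing the alias to take effect, assuming the map is not mounted while you do this (standard multipath-tools commands and the SLES 11 init script):

umount /vol1                     # the map cannot be renamed while in use
multipath -F                     # flush the existing multipath maps
multipath -v2                    # rebuild them, now honoring multipath.conf
/etc/init.d/multipathd restart   # make sure the daemon rereads the config
multipath -ll                    # the map should now show 'mpvol1'

If the alias should also hold at boot, the initrd has to see the same multipath.conf, so a mkinitrd + zipl afterwards would be consistent with step (2).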
Re: Migrating UNIX application to z/Linux
We have some clients who plan to migrate their applications running on UNIX to z/Linux. We have not communicated with those clients yet, so I don't have detailed information. Before that, I just want to see whether there are some general considerations about migration, so that I can prepare well for our first meeting with the clients.
Migrating UNIX application to z/Linux
If we migrate a UNIX application to z/Linux, are there incompatibilities? In other words, what are the general considerations for this kind of migration?