Re: PIPEDDR and attached DASD

2011-05-05 Thread Brian Nielsen
I have solved this problem and present the resolution for the benefit of 
others: it was an MTU mismatch between the VM TCPIP stacks (set at 8992) 
and what the network between them would support (1500).

Further testing showed that ATTACH vs. MDISK and the versions of 
PIPELINES and PIPEDDR were not the cause.  Instead, success or failure 
depended on the data being transferred and whether or not it traversed 
the external network.  After ripping apart PIPEDDR to understand it, so 
that I could add debugging code and options, I found that the transfer 
for a particular disk always failed at a unique track within that disk.

Manually added pacing (via DELAY stages) would work (although *tediously* 
more slowly than expected based on the delay value used), but not if the 
delay was too small.  Eventually, while looking for pacing information in 
the TCPIP stack documentation, I read the reference that "Selecting an 
MTU size that is too large may cause a client application to hang."  The 
light bulb went on, and after adjusting the MTU size used in the VM TCPIP 
stack everything works fine now.
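
In case it helps anyone else: one place the packet size is set on this 
level of VM TCP/IP is per route in the GATEWAY statement of PROFILE 
TCPIP.  A rough sketch of the kind of entry involved is below -- the 
link name, network, and mask values are just placeholders, and the exact 
column layout depends on your TCP/IP level, so check the TCP/IP Planning 
and Customization book for yours:

GATEWAY
; Network   First hop   Link name   Packet size   Subnet mask   Subnet value
  172.16    =           ETH1        1500          0.0.255.0     0.0.64.0

Dropping the packet size from 8992 to 1500, to match what the external 
network actually supports, is what made the hangs go away.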

Brian Nielsen


On Tue, 5 Apr 2011 12:20:59 -0500, Brian Nielsen bniel...@sco.idaho.gov wrote:

Should PIPEDDR work with attached DASD or does it only support MDISKs? 
The documentation doesn't seem to say.  I get an error with attached DASD 
but it works fine with a full pack MDISK of the same DASD volume when 
doing a dump/restore over TCPIP.

Using attached DASD will avoid label conflicts and also avoids maintaining 
hardcoded MDISKs with DEVNO's in the directory.  Unfortunately, the DEFINE 
MDISK command doesn't have a DEVNO option.


Here is what the failure looks like from the receiving and sending sides 
using attached DASD at both ends:

-

pipeddr restore * 6930 11000 (listen noprompt
Connecting to TCP/IP.  Enter PIPMOD STOP to terminate.
Waiting for connection on port 11000 to restore BNIELSEN 6930.
Sending user is BNIELSEN at VMP2
Receiving data from 172.16.64.45
PIPTCQ1015E ERRNO 54: ECONNRESET.
PIPMSG004I ... Issued from stage 3 of pipeline 3 name iprestore.
PIPMSG001I ... Running tcpdata.
PIPUPK072E Last record not complete.
PIPMSG003I ... Issued from stage 2 of pipeline 1.
PIPMSG001I ... Running unpack.
Data restore failed.
Ready(01015); T=0.01/0.01 08:58:15



pipeddr dump * 9d5e 172.16.64.44 11000
Dumping disk BNIELSEN 9D5E to 172.16.64.44
PIPTCQ1015E ERRNO 32: EPIPE.
PIPMSG004I ... Issued from stage 7 of pipeline 1 name ipread.
PIPMSG001I ... Running tcpclient 172.16.64.44 11000 linger 10 reuseaddr U.
Dump failed.
Ready(01015); T=0.02/0.03 07:54:34




If I create full pack MDISKs via DEFINE MDISK (starting at cyl zero) it 
works fine, as shown below.



pipeddr restore * 6930 11000 (listen noprompt
Connecting to TCP/IP.  Enter PIPMOD STOP to terminate.
Waiting for connection on port 11000 to restore BNIELSEN 6930.
Sending user is BNIELSEN at VMP2
Receiving data from 172.16.64.45
41 MB received.
Data restored successfully.
Ready; T=4.04/4.54 09:34:18


---

pipeddr dump * 9d5e 172.16.64.44 11000
Dumping disk BNIELSEN 9D5E to 172.16.64.44
-- All data sent to BNIELSEN AT VMP1 --
41 MB transmitted.
Dump completed.
Ready; T=6.27/8.21 08:30:37



Brian Nielsen


Re: PIPEDDR and attached DASD

2011-04-06 Thread Dave
Sometimes you have to vary the device off and on before the system detects a 
new VOLSER.


-Original Message-
From: Bruce Hayden bjhay...@gmail.com
Sent: Apr 5, 2011 3:33 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: PIPEDDR and attached DASD

I don't know why the mdisk works and the attached disk doesn't.  If
the label is still SCRTCH, then nothing was written on the disk.
That seems like the TCP/IP connection wasn't established correctly.
We should take this offline to work on it further.



Dave Lewis
Sterling Commerce 
An IBM company
Connect:Direct Level 3 Support


Re: PIPEDDR and attached DASD

2011-04-06 Thread Alan Altmark
On Wednesday, 04/06/2011 at 10:57 EDT, Dave david_v_le...@earthlink.net 
wrote:
 Sometimes you have to vary the device off and on before the system 
 detects a new VOLSER.

When a guest DETACHes a volume, CP re-reads the volser.

(btw, Dave, you have reply-to: set to your id.)

Alan Altmark

z/VM and Linux on System z Consultant
IBM System Lab Services and Training 
ibm.com/systems/services/labservices 
office: 607.429.3323
mobile: 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott


Re: PIPEDDR and attached DASD

2011-04-05 Thread Bruce Hayden
PIPEDDR should work fine with attached DASD.  I just tried it on the
latest version and some older levels and they all worked fine with a
3390-3 with both the source disk and target disk attached.  I have the
latest Pipelines runtime module, if that makes a difference.  PIPEDDR
doesn't really do anything differently for attached DASD, it just
passes the virtual address you specified to the trackread or
trackwrite stage of Pipelines.
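
In essence the transfer is just one pipeline on each side.  Very roughly 
-- this is a simplified sketch from memory, not the actual PIPEDDR 
source, which wraps these stages in headers, prompts, and error handling, 
and the exact operands may differ -- the data path looks like:

/* Sending side: read every track of the disk at virtual address    */
/* 9D5E, compress the track records, and ship them over TCP.        */
'PIPE trackread 9D5E | pack | tcpclient 172.16.64.44 11000 linger 10'

/* Receiving side: wait for a connection on the port, take over the */
/* socket, decompress, and write the tracks to disk 6607.           */
'PIPE tcplisten 11000 | tcpdata | unpack | trackwrite 6607'

The virtual device address goes straight to trackread or trackwrite, 
which is why an attached device and a full-pack minidisk should behave 
the same.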

On Tue, Apr 5, 2011 at 1:20 PM, Brian Nielsen bniel...@sco.idaho.gov wrote:
 Should PIPEDDR work with attached DASD or does it only support MDISKs?
 The documentation doesn't seem to say.  I get an error with attached DASD
 but it works fine with a full pack MDISK of the same DASD volume when
 doing a dump/restore over TCPIP.

 [snip - remainder of quoted message appears in full above]




-- 
Bruce Hayden
z/VM and Linux on System z ATS
IBM, Endicott, NY


Re: PIPEDDR and attached DASD

2011-04-05 Thread Brian Nielsen
On Tue, 5 Apr 2011 13:55:57 -0400, Bruce Hayden bjhay...@gmail.com wrote:

PIPEDDR should work fine with attached DASD.  I just tried it on the
latest version and some older levels and they all worked fine with a
3390-3 with both the source disk and target disk attached.  I have the
latest Pipelines runtime module, if that makes a difference.  PIPEDDR
doesn't really do anything differently for attached DASD, it just
passes the virtual address you specified to the trackread or
trackwrite stage of Pipelines.


I updated both systems' PIPEDDR from 1.4.10 to 1.5.12 and the Pipelines 
runtime from 1.0111 to 1.0112.

pipe query
PIPINX086I CMS/TSO Pipelines, 5654-030/5655-A17 1.0112 
(Version.Release/Mod) - Generated 3 Dec 2010 at 11:10:08.


It still fails the same way (but there are now progress messages on the 
sending side):



q 6607
DASD 6607 ATTACHED TO BNIELSEN 6607 R/W VZ6607
Ready; T=0.01/0.01 13:36:07
pipeddr restore * 6607 11000 (listen noprompt
Connecting to TCP/IP.  Enter PIPMOD STOP to terminate.
Waiting for connection on port 11000 to restore BNIELSEN 6607.
Sending user is BNIELSEN at VMP2
Receiving disk BNIELSEN 9D5E from 172.16.64.45
PIPTCQ1015E ERRNO 54: ECONNRESET.
PIPMSG004I ... Issued from stage 3 of pipeline 3 name iprestore.
PIPMSG001I ... Running tcpdata.
PIPUPK072E Last record not complete.
PIPMSG003I ... Issued from stage 2 of pipeline 1.
PIPMSG001I ... Running unpack.
Data restore failed.
Ready(01015); T=0.01/0.02 13:39:17


-

q 9d5e
DASD 9D5E ATTACHED TO BNIELSEN 9D5E R/W SYSDRL
Ready; T=0.01/0.01 13:32:49
pipeddr dump * 9d5e 172.16.64.44 11000
Dumping disk BNIELSEN 9D5E to 172.16.64.44
PIPTCQ1015E ERRNO 32: EPIPE.
PIPMSG004I ... Issued from stage 9 of pipeline 1 name ipread.
PIPMSG001I ... Running tcpclient 172.16.64.44 11000 linger 10 reuseaddr U.
Cylinder 133 of 3339 completed (3%)
Cylinder 266 of 3339 completed (7%)
Cylinder 400 of 3339 completed (11%)
Cylinder 533 of 3339 completed (15%)
Cylinder 666 of 3339 completed (19%)
Cylinder 800 of 3339 completed (23%)
Cylinder 933 of 3339 completed (27%)
Cylinder 1066 of 3339 completed (31%)
Cylinder 1200 of 3339 completed (35%)
Cylinder 1333 of 3339 completed (39%)
Cylinder 1466 of 3339 completed (43%)
Cylinder 1600 of 3339 completed (47%)
Cylinder 1733 of 3339 completed (51%)
Cylinder 1866 of 3339 completed (55%)
Cylinder 2000 of 3339 completed (59%)
Cylinder 2133 of 3339 completed (63%)
Cylinder 2266 of 3339 completed (67%)
Cylinder 2400 of 3339 completed (71%)
Cylinder 2533 of 3339 completed (75%)
Cylinder 2666 of 3339 completed (79%)
Cylinder 2800 of 3339 completed (83%)
Cylinder 2933 of 3339 completed (87%)
Cylinder 3066 of 3339 completed (91%)
Cylinder 3200 of 3339 completed (95%)
Cylinder 3333 of 3339 completed (99%)
Dump failed.
Ready(01015); T=4.77/6.30 13:37:11




I verified that both DASD volumes are the same size (Mod-3's w/3339 cyls):

---

q dasd details 6607
6607  CUTYPE = 2105-E8, DEVTYPE = 3390-0A, VOLSER = VZ6607, CYLS = 3339
      CACHE DETAILS:  CACHE NVS CFW DFW PINNED CONCOPY
         -SUBSYSTEM     Y    Y   Y   -    N      N
         -DEVICE        Y    -   -   Y    N      N
      DEVICE DETAILS: CCA = 07, DDC = --
      DUPLEX DETAILS: --
      PAV DETAILS: BASE VOLUME WITH 01 ALIAS VOLUMES
      CU DETAILS: SSID = 6600, CUNUM = 6600
Ready; T=0.01/0.01 13:48:14

---

q dasd details 9d5e
9D5E  CUTYPE = 2105-E8, DEVTYPE = 3390-0A, VOLSER = SYSDRL, CYLS = 3339
      CACHE DETAILS:  CACHE NVS CFW DFW PINNED CONCOPY
         -SUBSYSTEM     Y    Y   Y   -    N      N
         -DEVICE        Y    -   -   Y    N      N
      DEVICE DETAILS: CCA = 5E, DDC = --
      DUPLEX DETAILS: --
      CU DETAILS: SSID = 450D, CUNUM = 9D00
Ready; T=0.01/0.01 13:44:11

***


Are there any options that might help?

Brian Nielsen


Re: PIPEDDR and attached DASD

2011-04-05 Thread Rob van der Heij
On Tue, Apr 5, 2011 at 10:07 PM, Brian Nielsen bniel...@sco.idaho.gov wrote:

 PIPTCQ1015E ERRNO 54: ECONNRESET.
 PIPMSG004I ... Issued from stage 3 of pipeline 3 name iprestore.
 PIPMSG001I ... Running tcpdata.
 PIPUPK072E Last record not complete.
 PIPMSG003I ... Issued from stage 2 of pipeline 1.
 PIPMSG001I ... Running unpack.
 Data restore failed.
 Ready(01015); T=0.01/0.02 13:39:17

Sounds like PIPEDDR is not properly handling the termination of the
TCP/IP connection (like the sender going AWOL while the last piece of
data is still in transit). If the pipe leaks, subtle timing changes
may get your feet wet. I never looked at what PIPEDDR does for flow
control, but I do recall that I had to master similar things when I
did mine...

| Rob


Re: PIPEDDR and attached DASD

2011-04-05 Thread Brian Nielsen
Additional note:  After this failed transfer the receiving DASD has a 
label of SCRTCH.  I would expect it to be SYSDRL after cyl 0 is 
transferred.

det 6607
DASD 6607 DETACHED
Ready; T=0.01/0.01 14:15:13
q 6607
DASD 6607 SCRTCH
Ready; T=0.01/0.01 14:15:15


Brian Nielsen

On Tue, 5 Apr 2011 15:07:31 -0500, Brian Nielsen bniel...@sco.idaho.gov wrote:

[snip - message quoted in full above]


Re: PIPEDDR and attached DASD

2011-04-05 Thread Brian Nielsen
On Tue, 5 Apr 2011 22:15:31 +0200, Rob van der Heij rvdh...@gmail.com wrote:

On Tue, Apr 5, 2011 at 10:07 PM, Brian Nielsen bniel...@sco.idaho.gov wrote:

 PIPTCQ1015E ERRNO 54: ECONNRESET.
 PIPMSG004I ... Issued from stage 3 of pipeline 3 name iprestore.
 PIPMSG001I ... Running tcpdata.
 PIPUPK072E Last record not complete.
 PIPMSG003I ... Issued from stage 2 of pipeline 1.
 PIPMSG001I ... Running unpack.
 Data restore failed.
 Ready(01015); T=0.01/0.02 13:39:17

Sounds like PIPEDDR is not properly handling the termination of the
TCP/IP connection (like the sender going AWOL while the last piece of
data is still in transit). If the pipe leaks, subtle timing changes
may get your feet wet. I never looked at what PIPEDDR does for flow
control, but I do recall that I had to master similar things when I
did mine...

Perhaps, but it's curious that it works for a full pack MDISK but not for 
an attached DASD.

Could it be a VM service level related issue?

q cplevel
z/VM Version 5 Release 4.0, service level 0802 (64-bit)
Generated at 02/19/09 10:50:42 MDT
IPL at 02/28/09 11:15:51 MDT
Ready; T=0.01/0.01 14:22:24

netstat
VM TCP/IP Netstat Level 540   TCP/IP Server Name: TCPIP


Both sides are at the same level, and yes, the last time my main VM was 
IPL'd was over 2 years ago.

Brian Nielsen


Re: PIPEDDR and attached DASD

2011-04-05 Thread Bruce Hayden
I don't know why the mdisk works and the attached disk doesn't.  If
the label is still SCRTCH, then nothing was written on the disk.
That seems like the TCP/IP connection wasn't established correctly.
We should take this offline to work on it further.

On Tue, Apr 5, 2011 at 4:27 PM, Brian Nielsen bniel...@sco.idaho.gov wrote:
 [snip - message quoted in full above]




-- 
Bruce Hayden
z/VM and Linux on System z ATS
IBM, Endicott, NY