I have solved this problem and present the resolution for the benefit of
others: it was an MTU mismatch between the VM TCPIP stacks (set at 8992)
and what the network between them would support (1500).
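
For anyone else chasing this: the MTU is set in the stack's PROFILE
TCPIP.  Below is a minimal, illustrative fragment assuming the classic
DEVICE/LINK/GATEWAY style of configuration (newer stacks use
INTERFACE/ROUTE statements instead); the device number, link name, and
addresses are made-up placeholders, not our real setup.  The packet
size column is where an oversized 8992 would live, and NETSTAT GATE
should show the values actually in effect.

; PROFILE TCPIP fragment -- all names and numbers are placeholders
DEVICE OSADEV LCS 600
LINK   ETH0   ETHERNET 0 OSADEV
GATEWAY
; Network    First hop    Link name  Packet size  Subnet mask
  DEFAULTNET 172.16.64.1  ETH0       1500         0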

Further testing showed that ATTACH vs. MDISK and the versions of
PIPELINES and PIPEDDR were not the cause.  Instead, success or failure
was dependent on the data being transferred and whether it traversed
the external network or not.  After ripping apart PIPEDDR to understand
it so that I could add debugging code and options, I found that the
transfer for a particular disk always failed at a unique track within
that disk.  Manually added pacing (via DELAY stages; see the sketch
below) would work (although *tediously* more slowly than expected based
on the delay value used), but not if the delay was too small.
Eventually, while looking for pacing information in the TCPIP stack
documentation I read the reference that "Selecting an MTU size that is
too large may cause a client application to hang."  The light bulb went
on, and after adjusting the MTU size used in the VM TCPIP stack
everything works fine now.
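
For completeness, the pacing hack looked conceptually like the
fragment below.  This is a standalone sketch, *not* PIPEDDR's actual
pipeline: the literal/split/console stages stand in for the real track
I/O and TCP stages, and the "+1" relative-time record format for the
delay stage is as I remember it from the CMS Pipelines reference, so
check it there before trusting it.

/* Illustrative pacing sketch (REXX exec).  Each record is prefixed  */
/* with "+1 " (wait one second); delay blocks on the prefix and then */
/* passes the record on, and spec strips the prefix afterward.       */
'PIPE (name paced)',
   '| literal one two three',   /* stand-in for the real data source  */
   '| split',                   /* one record per blank-delimited word */
   '| spec /+1 / 1 1-* next',   /* prefix each record with the delay  */
   '| delay',                   /* wait 1 second, then pass record on */
   '| spec 4-* 1',              /* strip the 3-byte prefix            */
   '| console'                  /* stand-in for the tcpclient stage   */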

Brian Nielsen


On Tue, 5 Apr 2011 12:20:59 -0500, Brian Nielsen <bniel...@sco.idaho.gov>
wrote:

>Should PIPEDDR work with attached DASD or does it only support MDISKs?
>The documentation doesn't seem to say.  I get an error with attached
>DASD but it works fine with a full pack MDISK of the same DASD volume
>when doing a dump/restore over TCPIP.
>
>Using attached DASD will avoid label conflicts and also avoids
>maintaining hardcoded MDISKs with DEVNO's in the directory.
>Unfortunately, the DEFINE MDISK command doesn't have a DEVNO option.
>
>
>Here is what the failure looks like from the receiving and sending
>sides using attached DASD at both ends:
>
>---------
>
>pipeddr restore * 6930 11000 (listen noprompt
>Connecting to TCP/IP.  Enter PIPMOD STOP to terminate.
>Waiting for connection on port 11000 to restore BNIELSEN 6930.
>Sending user is BNIELSEN at VMP2
>Receiving data from 172.16.64.45
>PIPTCQ1015E ERRNO 54: ECONNRESET.
>PIPMSG004I ... Issued from stage 3 of pipeline 3 name "iprestore".
>PIPMSG001I ... Running "tcpdata".
>PIPUPK072E Last record not complete.
>PIPMSG003I ... Issued from stage 2 of pipeline 1.
>PIPMSG001I ... Running "unpack".
>Data restore failed.
>Ready(01015); T=0.01/0.01 08:58:15
>
>----------------
>
>pipeddr dump * 9d5e 172.16.64.44 11000
>Dumping disk BNIELSEN 9D5E to 172.16.64.44
>PIPTCQ1015E ERRNO 32: EPIPE.
>PIPMSG004I ... Issued from stage 7 of pipeline 1 name "ipread".
>PIPMSG001I ... Running "tcpclient 172.16.64.44 11000 linger 10 reuseaddr U".
>Dump failed.
>Ready(01015); T=0.02/0.03 07:54:34
>
>****************
>
>
>If I create full pack MDISKs via DEFINE MDISK (starting at cyl zero) it
>works fine, as shown below.
>
>------------
>
>pipeddr restore * 6930 11000 (listen noprompt
>Connecting to TCP/IP.  Enter PIPMOD STOP to terminate.
>Waiting for connection on port 11000 to restore BNIELSEN 6930.
>Sending user is BNIELSEN at VMP2
>Receiving data from 172.16.64.45
>41 MB received.
>Data restored successfully.
>Ready; T=4.04/4.54 09:34:18
>
>
>-----------
>
>pipeddr dump * 9d5e 172.16.64.44 11000
>Dumping disk BNIELSEN 9D5E to 172.16.64.44
>-- All data sent to BNIELSEN AT VMP1 --
>41 MB transmitted.
>Dump completed.
>Ready; T=6.27/8.21 08:30:37
>
>------------
>
>Brian Nielsen
>========================================================================
