I’m trying to get better throughput on my VTAPE services going to a PC. I 
performed a number of traces, both on my laptop where the VTAPE server is 
running and on the VSE TCP/IP stack. The delays appear not to be on the PC or 
network side but in getting the data from my application (LIBR, FCOPY, etc., 
and VTAPE) to the TCP/IP stack. For example, I was averaging about 10-20 msecs 
turnaround time between my send and receiving an ack from the PC server, while 
the mainframe application would take 40-50 msecs to present the data to the 
VSE TCP/IP stack. Using the CA FAQS ASO J display, it appears the LIBR or 
FCOPY program is spending most of its time in a task status of VDSBND. Google 
had one hit, which referenced VDSBND as a POWER main task; I'm not sure why 
it appears here. VM explore shows my system running at between 20 and 30 
percent at the time of my backups, so CPU cycles appear not to be the issue.

I’m using a VSWITCH to help reduce the mainframe overhead. My VSE is 3.1, 
TCP/IP is 1.05D, and VM is z/VM 4.4.

I used a TCP/IP window of 64K on both the VSE TCP/IP stack and my Windows XP 
laptop. The VSE TCP/IP trace indicated that after receiving about 32K of data 
it would send it to the PC server. I’ve been told this 32K size is a VSE 
TCP/IP socket API limitation.

My FCOPY is getting about 12 MB per minute over a period of about 1 hour 20 
minutes. LIBR is about the same, but those jobs only take about 5 to 6 minutes 
for 62 MB.
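To put those figures together, here is a back-of-envelope sketch (my own rough arithmetic from the numbers above, not a measurement): even if everything else were perfect, one 32K buffer handed to the stack every 40-50 msecs, plus the 10-20 msec ack turnaround, caps the transfer rate well below the wire speed.

```python
# Rough throughput ceiling implied by the per-buffer delays described above:
# one 32 KB buffer presented to the stack every 40-50 ms (application side),
# plus a 10-20 ms send-to-ack turnaround on the network side.
send_size_kb = 32
app_delay_ms = 50       # worst-case application-to-stack delay
net_delay_ms = 20       # worst-case send-to-ack turnaround

per_buffer_ms = app_delay_ms + net_delay_ms
mb_per_min = (send_size_kb / 1024) * (60000 / per_buffer_ms)
print(f"ceiling: {mb_per_min:.1f} MB/min")      # roughly 27 MB/min

# Observed rates: FCOPY about 12 MB/min; LIBR 62 MB in about 6 minutes:
print(f"LIBR observed: {62 / 6:.1f} MB/min")    # roughly 10 MB/min
```

So the observed 10-12 MB/min is consistent with the application-side delay being the dominant cost, which is why I'm focusing on the VDSBND wait rather than the network.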

We would like to use VTAPE to do some offsite DRP backups, and at this point 
VTAPE is too slow.

Hans

________________________________________________________________
Sent via the WebMail system at hmrconsultants.com
