Re: Increase VTAPE throughput to VTAPE server on PC.
Hi Tom. From my traces the network and PC are responding very well. This includes the OSA Express, the VSWITCH, and the VSE TCP/IP stack connected to the VSWITCH. The problem really appears to be in the FCOPY/VTAPE partitions. The FCOPY partition runs lower than the VTAPE partition, which in turn is lower than the TCP/IP partition, and I believe I have ample mainframe cycles available.

Hans

-----Original Message-----
From: Tom Duerbusch [mailto:[EMAIL PROTECTED]]
Sent: April 3, 2006 3:49 PM
To: [EMAIL PROTECTED]; IBMVM@LISTSERV.UARK.EDU
Subject: Re: Increase VTAPE throughput to VTAPE server on PC.

What speed is the network card in your PC? 10 Mb/s? The OSA card is 1 Gb/s. That seems to be where my VTAPE performance problem lies (as with any other large IP service that involves my PC). I initially thought it was network traffic, but I didn't get any greater throughput on a weekend.

Tom Duerbusch
THD Consulting

>>> Hans Rempel <[EMAIL PROTECTED]> 4/3/2006 2:53 PM >>>

[This E-mail scanned for viruses]
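The latency figures reported in the thread (10-20 ms network turnaround, 40-50 ms for the application to present each block to the stack) put a hard ceiling on throughput when blocks are sent serially. A quick sketch of that arithmetic, using the midpoints of the reported ranges and the ~32K block size mentioned later in the thread:

```python
# If every ~32 KB block must wait on the application (~40-50 ms) plus
# the network round trip (~10-20 ms) before the next block goes out,
# the serial per-block latency caps throughput regardless of link speed.

BLOCK_KB = 32
app_ms, net_ms = 45, 15  # midpoints of the ranges reported in the thread

per_block_s = (app_ms + net_ms) / 1000
kb_per_s = BLOCK_KB / per_block_s
mb_per_min = kb_per_s * 60 / 1024
print(f"ceiling: {kb_per_s:.0f} KB/s, about {mb_per_min:.0f} MB/min")
```

This rough ceiling (~31 MB/min) is in the same range as the observed 12 MB/min, which is consistent with Hans's finding that the delay is on the application side rather than the network.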
Re: Increase VTAPE throughput to VTAPE server on PC.
What speed is the network card in your PC? 10 Mb/s? The OSA card is 1 Gb/s. That seems to be where my VTAPE performance problem lies (as with any other large IP service that involves my PC). I initially thought it was network traffic, but I didn't get any greater throughput on a weekend.

Tom Duerbusch
THD Consulting

>>> Hans Rempel <[EMAIL PROTECTED]> 4/3/2006 2:53 PM >>>
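For reference, the link speeds Tom mentions convert to per-minute rates as follows (a quick illustrative sketch; only the 12 MB/min figure comes from the thread):

```python
# Convert line rates in megabits/s to megabytes/minute (decimal units)
# to compare raw link capacity against the observed VTAPE throughput.

def mbits_to_mb_per_min(mbits_per_sec: float) -> float:
    """Megabits per second -> megabytes per minute."""
    return mbits_per_sec / 8 * 60

nic_10mb = mbits_to_mb_per_min(10)    # 10 Mb/s PC NIC
osa_1gb = mbits_to_mb_per_min(1000)   # 1 Gb/s OSA Express

observed = 12  # FCOPY rate reported in the thread, MB/min
print(f"10 Mb/s NIC ceiling: {nic_10mb:.0f} MB/min")
print(f"1 Gb/s OSA ceiling:  {osa_1gb:.0f} MB/min")
print(f"Observed FCOPY rate: {observed} MB/min")
```

Even a 10 Mb/s NIC could carry about 75 MB/min, so the observed 12 MB/min would be well under the capacity of either link.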
Increase VTAPE throughput to VTAPE server on PC.
I'm trying to get better throughput from my VTAPE services going to a PC. I ran a number of traces, both on my laptop, where the VTAPE server is running, and on the VSE TCP/IP stack. The delays appear to be not on the PC or network side but in getting the data from my application (LIBR, FCOPY, etc., and VTAPE) to the TCP/IP stack. For example, I was averaging about 10-20 ms turnaround between my send and receiving an ACK from the PC server, while the mainframe application would take 40-50 ms to present the data to the VSE TCP/IP stack.

Using the CA FAQS ASO J display, it appears the LIBR or FCOPY program is spending most of its time in a task status of VDSBND. Google had one hit, and the reference says VDSBND is a POWER main task; I'm not sure why it appears here. VM Explore shows my system running at between 20 and 30 percent at the time of my backups, so CPU cycles do not appear to be the issue. I'm using a VSWITCH to help reduce the mainframe overhead. My VSE is 3.1, TCP/IP is 1.05D, and VM is z/VM 4.4.

I used a TCP/IP window of 64K on both the VSE TCP/IP stack and my Windows XP laptop. The VSE TCP/IP trace indicated that after receiving about 32K of data it would send it to the PC server; I've been told this size is a VSE TCP/IP API socket limitation. My FCOPY is getting about 12 MB per minute over a period of about 1 hour 20 minutes. LIBR is about the same, but those jobs take only 5 to 6 minutes for 62 MB. We would like to use VTAPE to do some offsite DRP backups, and at this point VTAPE is too slow.

Hans

Sent via the WebMail system at hmrconsultants.com
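On the 64K window Hans mentions: on the PC side, the advertised TCP receive window is derived from the socket's receive buffer. A minimal sketch of requesting a larger buffer on a listening socket, assuming you control the server-side socket (the actual VTAPE server product may not expose this; the port number here is purely hypothetical):

```python
import socket

# Sketch: request a 64 KB receive buffer on a PC-side TCP listener.
# The buffer must be set before listen()/accept() so the window is
# advertised from the start of the connection.

LISTEN_PORT = 2386  # hypothetical; use your VTAPE server's real port

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024)
srv.bind(("", LISTEN_PORT))
srv.listen(1)

# The OS may grant more than requested (Linux, for example, doubles it).
actual = srv.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"receive buffer granted by the OS: {actual} bytes")
srv.close()
```

Note that even with a 64K window on both ends, the ~32K send size Hans observed would still be governed by the VSE-side API limitation, so tuning the PC buffer alone may not help.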