> From: [email protected] [mailto:[email protected]] On > Behalf Of Tika Mahata > > I am looking for the solution to transfer large oracle database files about 2TB > between two datacenters which have latency of 65ms with 1gpbs p2p link. > What are the options to reduce the transfer time? I have Linux at both ends. > Any software or protocol or tuning the OS parameters?
The latency should be irrelevant for a continuous stream, but I find transfers over sftp are for some reason susceptible to latency. As long as you can use any other protocol - mbuffer, ftp, http, cifs/samba, nfs, ... - and presuming your connection is secured by some other means (such as a VPN), even a single continuous stream should saturate the link.

Also, you mentioned the data is an Oracle database. That should be irrelevant, as long as you stop the db server first; then it's just a simple file transfer.

One more thing: being a database, it's probably compressible (depending on the data in there). But on a 1Gbit link, you'll probably slow yourself down by running it through gzip - even gzip --fast isn't going to be fast enough. You might consider pigz --fast or lzop instead; those should gain you some compression without becoming the bottleneck.

_______________________________________________
bblisa mailing list
[email protected]
http://www.bblisa.org/mailman/listinfo/bblisa
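To make the suggestions above concrete, here is one way the pipeline could look - a sketch only, where the host name (dst), port (2000), path (/u01/oradata), and mbuffer size are assumptions for illustration, not a recommendation from the original post:

```shell
# Hypothetical transfer of stopped Oracle datafiles without sftp.
# Receiver (run first, on the destination host):
#   nc -l 2000 | mbuffer -m 1G | tar -x -C /u01/oradata
# Sender (optionally insert pigz --fast / lzop before mbuffer if the
# data compresses well enough to pay for the CPU at 1Gbit):
#   tar -c -C /u01/oradata . | mbuffer -m 1G | nc dst 2000

# Local round-trip of the same tar pipeline (no network, gzip standing
# in for pigz) to sanity-check that the stream reassembles intact:
mkdir -p /tmp/demo_src /tmp/demo_dst
echo "datafile contents" > /tmp/demo_src/users01.dbf
tar -c -C /tmp/demo_src . | gzip --fast | gunzip | tar -x -C /tmp/demo_dst
cat /tmp/demo_dst/users01.dbf
```

The mbuffer on each side smooths out bursts so the TCP stream stays continuous, which is the point of the advice above: a steady single stream shouldn't care about the 65ms.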
