Mag Gam wrote:
I need to copy over 100TB of data from one server to another over the
network. What is the best option for doing this? I am planning to use
rsync, but is there a better tool or a better way of doing it?
For example, I plan on doing:
rsync -azv /largefs /targetfs
/targetfs is an NFS-mounted filesystem.
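One detail worth knowing about the command as quoted: rsync treats a
source path differently with and without a trailing slash (this is
standard, documented rsync behavior; the paths are the ones from the post):

  # copies the directory itself, ending up as /targetfs/largefs/...
  rsync -azv /largefs /targetfs

  # copies the contents of /largefs directly into /targetfs
  rsync -azv /largefs/ /targetfs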
I have done 700k and 800k file transfers (including hardlinks), but
indeed it can take a while to compute the transfer list. Newer rsync
versions drastically reduce the amount of memory needed. That is
one of the reasons I offer a recent rsync in RPMforge. There is almost
never a [...]
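A sketch of what that looks like in practice (not from the thread, but
documented rsync behavior: -a does not imply -H, so the hardlinks
mentioned above need it explicitly, at some extra memory cost):

  # preserve hard links explicitly; -a alone does not
  rsync -aHv /largefs/ /targetfs/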
On Sun, Jun 22, 2008 at 3:36 AM, Rainer Duffner [EMAIL PROTECTED] wrote:
Now that I know the details - I don't think this is going to work. Not with
100 TB of data. It kind-of-works with 1 TB.
Can anybody comment on the feasibility of rsync on 1 million files?
rsync always broke on my [...] stops sync?
On Sun, Jun 22, 2008 at 4:32 PM, Dag Wieers [EMAIL PROTECTED] wrote:
I have done 700k and 800k file transfers (including hardlinks), but indeed
it can take a while to compute the transfer list. Newer rsync versions
drastically reduce the amount of memory needed. [...]
If you do end up using rsync for something like this via ssh, you
might want to look at some of the Pittsburgh Supercomputing Center's
patches. Their high-performance (HPN-SSH) patches can allow dramatic
increases in throughput.
Or, if it's over a secure network, drop ssh entirely and use [...]
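The message is cut off there, but one common way to drop ssh is rsync's
native daemon mode over plain TCP. A minimal sketch; the module name
"largefs" and the hostname "receiver" are made up for illustration:

  # /etc/rsyncd.conf on the receiving host
  [largefs]
          path = /targetfs
          read only = no

  # on the receiver, start the daemon:
  rsync --daemon

  # on the sender, push over plain TCP with no ssh in the path:
  rsync -aHv /largefs/ rsync://receiver/largefs/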
Rainer Duffner wrote:
...
Can anybody comment on the feasibility of rsync on 1 million files?
I rsync 2.6M files daily. No problem.
It takes 15 minutes if there are only a few changes.
For fast transfer of files between two machines
I usually use ttcp.
On the sending machine:
tar cf - . | ttcp -l5120 [...]
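The quoted command is cut off. A complete sender/receiver pair with
classic ttcp (where -t transmits, -r receives, and -l sets the buffer
length; "desthost" is a placeholder) would look roughly like:

  # on the receiving machine, start the listener first:
  ttcp -r -l5120 | tar xf -

  # on the sending machine, stream the tree into ttcp:
  tar cf - . | ttcp -t -l5120 desthost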
On 21.06.2008 at 15:33, Mag Gam wrote:
I need to copy over 100TB of data from one server to another via
network. What is the best option to do this? I am planning to use
rsync but is there a better tool or better way of doing this?
For example, I plan on doing
rsync -azv /largefs [...]
Network is a 10/100
1 million large files
No SAN, JBOD
On 21.06.2008 at 21:51, Mag Gam wrote:
Network is a 10/100
You're kidding?
1 million large files
No SAN, JBOD
Move the data by moving the storage itself.
It will take months to transfer 100 TB over Fast Ethernet.
cheers,
Rainer
--
Rainer Duffner
CISSP, LPI, MCSE
[EMAIL PROTECTED]
Mag Gam wrote:
Network is a 10/100
1 million large files
No SAN, JBOD
Assuming a 100baseT wire speed of about 10 Mbyte/sec, moving 100TB will
take a minimum of 100TB / 10MB/s = 10,000,000 seconds, or about 2,800
hours, roughly 4 months. Even on a GigE network, this would still take
about 2 weeks.
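The arithmetic is easy to check (decimal units, ignoring protocol overhead):

  # 100 TB at ~10 MB/s (Fast Ethernet) and ~100 MB/s (gigabit), in days
  echo "100 * 10^12 / (10 * 10^6) / 86400" | bc -l    # about 115.7 days
  echo "100 * 10^12 / (100 * 10^6) / 86400" | bc -l   # about 11.6 days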
Can you add a fiber network card to each server, and a fiber switch? If
not, try putting a gigabit ethernet card in each server, connecting them
with a crossover cable, and starting rsync. I did that with 1 TB of
photos and it takes a lot of time. Keep the power supply working and
cross your fingers.
I hope this can help.
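As a sketch of that suggestion (the interface names and addresses are
assumptions; use whatever the new cards come up as):

  # give each end of the crossover cable a private address
  ifconfig eth1 10.0.0.1 netmask 255.255.255.252 up   # on the source server
  ifconfig eth1 10.0.0.2 netmask 255.255.255.252 up   # on the target server

  # then push directly over the dedicated link
  rsync -aHv /largefs/ 10.0.0.2:/targetfs/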
On 21.06.2008 at 23:44, Matt Morgan wrote:
[...]
Then if you get the network sorted out, the fastest, most reliable
way I know to copy lots of files is
star --copy
You can get star with
yum install star
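A minimal example of the copy mode (flag details vary between star
versions, so check star(1); -p preserves permissions and -C changes
into the source tree first):

  # copy /largefs into /targetfs, preserving permissions
  star -copy -p -C /largefs . /targetfs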
Now that I know the details - I don't think this is going to work. Not
with 100 TB.
On Sat, 2008-06-21 at 09:33 -0400, Mag Gam wrote:
I need to copy over 100TB of data from one server to another via
network. What is the best option to do this? I am planning to use
rsync but is there a better tool or better way of doing this?
At gigabit speeds, you're looking at over a week.