Levente,

> Same server because I need the transfer to be quick. Once I start - I
> need to do the migration in max 1-2 hrs... tops. I know... I know...
Let's say you have exactly one terabyte of data, i.e. 1,000,000 megabytes
of it, and you can stretch the migration time to your indicated maximum
of 2 hours, i.e. 7200 seconds. Do the math: your disk system would have
to be able to read at a constant speed of roughly 140 megabytes/second
AND write at the same 140 at the same time. And that's not taking into
account the network traffic and CPU load that the AFS processes cause
you; that's just the hardware.

You can easily test whether you can meet your own requirements:

  # time dd if=/dev/WHATEVERITIS of=/THATPARTITION/BOGUSFILE bs=1048576 count=16384

Keep iostat running in another window to get an idea. That's as simple
as it gets: absolutely sequential reads, since you are reading block by
block from the physical disk, and writing to one file only, on the same
filesystem.

I could meet your goal of 140 MB/s both ways (and exceed it by a safe
margin; it gave me 237) on an old (circa 2008, I'd say?) Supermicro
storage system running CentOS 6, with an Areca ARC-1261 controller and
fourteen 1TB disks in a RAID6 configuration.

What's your storage system like? The question is posed merely so that
you can evaluate for yourself whether your requirements for the data
transfer are realistic, given the hardware you have and the type of
operation you intend to perform. If you have been given 2 hours to
transfer 1 terabyte of data out of and back into a system that does
50 MB/s tops, then by the time you need to go back online you will have
completed only one third of your estimated workload.

So it looks like you need to think of something else...

--
Atro Tossavainen, Chairman of the Board
Infinite Mho Oy, Helsinki, Finland
tel. +358-44-5000 600, http://www.infinitemho.fi/
_______________________________________________
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info
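P.S. The throughput arithmetic above is easy to redo for your own
numbers. A minimal sketch, using the figures from this thread (1 TB,
2-hour window; substitute your own data size and window):

```shell
#!/bin/sh
# Back-of-the-envelope: sustained MB/s needed to move DATA_MB in WINDOW_S.
# These particular values are just the example from this thread.
DATA_MB=1000000   # 1 TB expressed in megabytes
WINDOW_S=7200     # 2 hours in seconds

echo "required sustained rate: $((DATA_MB / WINDOW_S)) MB/s"
```

Remember the result is the rate you need for reads AND writes
simultaneously, before any AFS or network overhead.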