On 7/29/2014 11:48 AM, Juan De Mola wrote:
need to cancel subscription
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
right there, that URL on every message.
--
john r pierce
On 07/28/2014 05:02 PM, Cliff Pratt wrote:
1. Set up inotify (no idea how it would behave with your millions of files).
2. Run one big rsync.
3. Bring it down and copy the few modified files reported by inotify.
Or lsyncd?
lsyncd is interesting, but for our use case isn't nearly as efficient as
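Cliff's three-step cutover might look something like this in practice — a rough sketch only, assuming inotify-tools is installed and using hypothetical paths /data (live tree) and /mnt/new (target):

```shell
# Step 1: start recording paths that change while the system stays live.
# (Caveat from the thread: untested against ~100M watched files.)
inotifywait -m -r -e modify,create,moved_to --format '%w%f' /data \
    > /tmp/changed.list 2>/dev/null &
watch_pid=$!

# Step 2: one big rsync while users keep working.
rsync -a /data/ /mnt/new/

# Step 3: stop the service, then copy only what changed during step 2.
kill "$watch_pid"
sort -u /tmp/changed.list | sed 's|^/data/||' |
    rsync -a --files-from=- /data/ /mnt/new/
```

Deletions that happen during step 2 are not replayed here; a final `rsync -a --delete` pass during the downtime window would cover those.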
From: Benjamin Smith li...@benjamindsmith.com
Thanks for your feedback - it's advice I would have given myself just a
few years ago. We have *literally* in the range of one hundred million
small PDF documents. The simple command
    find /path/to/data > /dev/null
takes between 1 and 2 days,
How about something like this:
Use find to process each file with a script that does something like this:
    if foo is not a soft link:
        if foo is open for output (lsof?):
            add foo to todo list
        else:
            make foo read-only
            if foo is open for output:
                add foo to todo list
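As a concrete rendering of that pseudocode — a sketch only, with a hypothetical data path and scratch list, and using plain lsof (which reports any open descriptor, not strictly open-for-output):

```shell
#!/bin/sh
# Freeze files that are not in use; defer files that are currently open.
TODO=/tmp/todo.list
: > "$TODO"

# -type f already excludes soft links.
find /path/to/data -type f | while IFS= read -r foo; do
    if lsof -- "$foo" >/dev/null 2>&1; then
        echo "$foo" >> "$TODO"          # currently open: handle later
    else
        chmod a-w "$foo"                # make foo read-only
        # Re-check: a writer may have opened foo just before the chmod.
        if lsof -- "$foo" >/dev/null 2>&1; then
            echo "$foo" >> "$TODO"
        fi
    fi
done
```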
rsync breaks silently or sometimes noisily on big directory/file
structures. It depends on how the OP's files are distributed. We organised
our files in a client/year/month/day hierarchy and run a number of rsyncs
on separate parts of the tree. Older stuff doesn't need to be rsynced but
gets backed up
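Splitting the run along that client/year/month/day layout could look like this (source and destination paths are hypothetical; how many rsyncs to run at once is a tuning choice):

```shell
SRC=/data/clients
DST=/backup/clients
for client in "$SRC"/*/; do
    name=$(basename "$client")
    # One independent rsync per client subtree; a failure in one
    # client's tree doesn't abort the others.
    rsync -a "$client" "$DST/$name/" &
done
wait    # block until every per-client rsync has finished
```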
On 07/25/2014 03:33 PM, Benjamin Smith wrote:
takes between 1 and 2 days, system load depending. We had to give up
on rsync for backups in this context a while ago - we just couldn't
get a daily backup more often than about 2x per week. Now we're
using ZFS + send/receive to get daily backups.
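The snapshot + send/receive cycle Benjamin describes could be sketched like this (pool, dataset, and host names are hypothetical, not from the thread):

```shell
# Daily incremental replication: snapshot, then send only the delta.
today=$(date +%F)
yesterday=$(date -d yesterday +%F)

zfs snapshot tank/data@"$today"
# Incremental send against yesterday's snapshot; the very first run
# would be a full send (zfs send tank/data@"$today") instead.
zfs send -i tank/data@"$yesterday" tank/data@"$today" |
    ssh backuphost zfs receive -F backup/data
```

Because a snapshot is atomic and send/receive ships block deltas, the cost scales with changed data rather than with the number of files — which is why it beats a 100M-file rsync walk.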
On 07/24/2014 10:16 PM, Lists wrote:
So... is it possible to convert an EXT4 partition to a RAID1 partition
without having to copy the files over?
You can also try this:
1- Convert your ext4 partition to btrfs.
2- Make it raid1 with btrfs. With btrfs you can convert a bare partition
to almost any raid level, given the proper number of hard disks.
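A sketch of that route with the actual btrfs tooling (device names are hypothetical; btrfs-convert runs offline, and you'd want a tested backout before trying it on real data):

```shell
umount /dev/sdb1                 # conversion must run on an unmounted fs
btrfs-convert /dev/sdb1          # rewrite the ext4 metadata in place
mount /dev/sdb1 /mnt
btrfs device add /dev/sdc1 /mnt  # second disk for the mirror
# Rebalance existing data and metadata into RAID1 across both devices.
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
```

btrfs-convert keeps the original ext4 image in a rollback subvolume until you delete it, so the data itself is never copied file-by-file.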
On 07/25/2014 06:56 AM, Robert Nichols wrote:
Unless you can figure out some way to move the start of the partition back
to make room for the RAID superblock ahead of the existing filesystem, the
answer is, No. The version 1.2 superblock is located 4KB from the start
of the device (partition)
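Robert's point is specific to version 1.2 metadata; version 1.0 metadata sits at the end of the device instead, which is the basis of a commonly cited in-place trick — sketched here with hypothetical devices and sizes, and only sensible with current backups:

```shell
# The v1.0 superblock lives at the END of the partition, so the
# filesystem can stay put if it's first shrunk to leave room for it.
e2fsck -f /dev/sda1
resize2fs /dev/sda1 100G          # hypothetical: a bit under partition size
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --metadata=1.0 /dev/sda1 missing
mount /dev/md0 /mnt               # existing ext4 now lives inside the array
mdadm /dev/md0 --add /dev/sdb1    # attach the mirror half and let it resync
```

Creating the array degraded ("missing") leaves the existing data untouched; the resync to the second disk then happens online, which addresses the OP's downtime constraint.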
Is there some reason that the existing files cannot
be accessed while they are being copied to the raid?
--
Michael henne...@web.cs.ndsu.nodak.edu
SCSI is NOT magic. There are *fundamental technical
reasons* why it is necessary to sacrifice a young
goat to your SCSI chain now and then. --
On 07/25/2014 12:12 PM, Michael Hennebry wrote:
Is there some reason that the existing files cannot
be accessed while they are being copied to the raid?
Sheer volume. With something in the range of 100,000,000 small files, it
takes a good day or two to rsync. This means that getting a
I have a large disk full of data that I'd like to upgrade to SW RAID 1
with a minimum of downtime. Taking it offline for a day or more to rsync
all the files over is a non-starter. Since I've mounted SW RAID1 drives
directly with mount -t ext3 /dev/sdX it would seem possible to flip the