On 8/30/09, Jeffrey J. Kosowsky backu...@kosowsky.org wrote:
Les Mikesell wrote at about 14:26:47 -0500 on Friday, August 28, 2009:
Jim Wilcoxson wrote:
Michael - I have a new Linux/FreeBSD backup program, HashBackup, in
beta that I believe will handle a large backuppc server. In
Another thing about BackupPC is that by my reading, new files are
first written to the PC area, then pool links are created by
BackupPC_link. This suggests that backing up the pool last might
improve performance, because it is likely to be more fragmented.
Let me just say ... huh? What
Michael Stowe wrote:
Another thing about BackupPC is that by my reading, new files are
first written to the PC area, then pool links are created by
BackupPC_link. This suggests that backing up the pool last might
improve performance, because it is likely to be more fragmented.
Let me just
Yohoo!
With backuppc the issue is not so much fragmentation within a file as
the distance between the directory entry, the inode, and the file
content. When creating a new file, filesystems generally attempt to
allocate these close to each other, but when you link an existing file
into a
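To make the mechanism concrete, here is a tiny shell illustration of how a pooled file ends up shared between the pool and a pc/ tree (paths are simplified; a real pool uses BackupPC's hashed directory layout):

    # Create a file in the pool, then hard-link it into a pc/ tree,
    # roughly what BackupPC_link does after a dump finishes.
    mkdir -p pool pc/host1/0
    echo "file contents" > pool/c0ffee
    ln pool/c0ffee pc/host1/0/fsomefile

    # Both names point at one inode (link count 2), so no data is copied,
    # but the new directory entry may sit far from the old inode on disk.
    ls -li pool/c0ffee pc/host1/0/fsomefile

The pc/ directory entry is created long after the pool inode was allocated, which is exactly the directory-entry/inode/data distance being described here.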
On Mon, Aug 31, 2009 at 05:15:19PM +0200, Christian Völker wrote:
With backuppc the issue is not so much fragmentation within a file as
the distance between the directory entry, the inode, and the file
content. When creating a new file, filesystems generally attempt to
allocate these
Christian Völker wrote:
With backuppc the issue is not so much fragmentation within a file as
the distance between the directory entry, the inode, and the file
content. When creating a new file, filesystems generally attempt to
allocate these close to each other, but when you link an
Jeffrey J. Kosowsky wrote:
It's almost as if you guys haven't heard of filesystem-specific dump
utilities. For such utils (vxdump, ufsdump, zfs send/receive, etc.) the
number of hardlinks isn't a problem. You can do both full and
incremental dumps, even across separate machines.
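For instance, a rough sketch of replicating a pool that lives on its own ZFS dataset; the dataset, snapshot, and host names here are invented:

    # Initial full replication of the BackupPC dataset to a second machine
    zfs snapshot tank/backuppc@monday
    zfs send tank/backuppc@monday | ssh backup2 zfs receive tank/backuppc

    # Later, send only the blocks changed since the previous snapshot;
    # hard links are just directory entries, so they add no extra cost here
    zfs snapshot tank/backuppc@tuesday
    zfs send -i tank/backuppc@monday tank/backuppc@tuesday | \
        ssh backup2 zfs receive -F tank/backuppc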
Hello Michael,
No, I haven't implemented Volume Shadow Copy on this user. Getting time on her
laptop is very difficult, even for configuration purposes. She has Outlook open
during every backup with at least two very large, around 1.5 GB, .pst's.
Let me ask you all if this scenario works; The
Thank you Michael,
I will look deeper into VSS and hope it helps in this case, from what you've
said I will trust that it will resolve the hung processing due to open PST's.
Brian
Jim Leonard wrote at about 21:21:08 -0500 on Sunday, August 30, 2009:
dan wrote:
Once the metadata and config move to a database, so many things
become very easy. A single backuppc server could handle many more
concurrent backups because multiple data storage devices can separate
Les Mikesell wrote:
Jeffrey J. Kosowsky wrote:
It's almost as if you guys haven't heard of filesystem-specific dump
utilities. For such utils (vxdump, ufsdump, zfs send/receive, etc.) the
number of hardlinks isn't a problem. You can do both full and
incremental dumps, even
It certainly does in the field here; this is the method I use:
http://www.goodjobsucking.com/?p=62
Thank you Michael,
I will look deeper into VSS and hope it helps in this case, from what
you've said I will trust that it will resolve the hung processing due to
open PST's.
Brian
Peter Walter wrote:
For me, the matter could be resolved if a
way was found to at least backup a backuppc server in a reasonable
fashion without requiring particular filesystems and utilities such as
zfs send/receive.
But there is a reasonable way: unmount the partition and image-copy the
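A minimal sketch of that image-copy approach, assuming the pool sits on its own partition (device and destination paths are illustrative):

    # Stop BackupPC and unmount the pool filesystem so the image is consistent
    /etc/init.d/backuppc stop
    umount /var/lib/backuppc

    # Copy the block device; hard links come along for free because the
    # filesystem is copied below the file level
    dd if=/dev/sdb1 bs=4M | gzip -c > /mnt/external/backuppc-pool.img.gz

    mount /var/lib/backuppc
    /etc/init.d/backuppc start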
Les Mikesell wrote at about 14:05:27 -0500 on Monday, August 31, 2009:
Peter Walter wrote:
For me, the matter could be resolved if a
way was found to at least backup a backuppc server in a reasonable
fashion without requiring particular filesystems and utilities such as
zfs
2009/8/31 Christian Völker chrisc...@knebb.de:
Yohoo!
With backuppc the issue is not so much fragmentation within a file as
the distance between the directory entry, the inode, and the file
content. When creating a new file, filesystems generally attempt to
allocate these close to each
This still is not a solution for all of us. First, I store the backups
on a consumer-level NAS device that does not easily facilitate adding
partitions without additional hacking and risks to data integrity. The
device also does not support LVM. I do not want to have to copy a whole
1 TB
Les Mikesell wrote at about 12:35:49 -0500 on Monday, August 31, 2009:
Jeffrey J. Kosowsky wrote:
It's almost as if you guys haven't heard of filesystem-specific dump
utilities. For such utils (vxdump, ufsdump, zfs send/receive, etc.)
the
number of hardlinks isn't a
Les Mikesell wrote at about 12:35:49 -0500 on Monday, August 31, 2009:
Jeffrey J. Kosowsky wrote:
It's almost as if you guys haven't heard of filesystem-specific dump
utilities. For such utils (vxdump, ufsdump, zfs send/receive, etc.)
the
number of hardlinks isn't a
I don't see why everything needs to worship at the altar of atomic
operations. There are other ways to ensure that things don't go wrong.
There probably isn't, frankly. And is there a better way of ensuring
synchrony?
Again, it would be helpful to know some specific use cases that you
are
Les Mikesell wrote:
Peter Walter wrote:
For me, the matter could be resolved if a
way was found to at least backup a backuppc server in a reasonable
fashion without requiring particular filesystems and utilities such as
zfs send/receive.
But there is a reasonable way: unmount
Jeffrey J. Kosowsky wrote:
This still is not a solution for all of us. First, I store the backups
on a consumer-level NAS device that does not easily facilitate adding
partitions without additional hacking and risks to data integrity.
OK, but when drives are available for around $100/TB
On Sun, Aug 30, 2009 at 8:01 PM,
baradoss backuppc-fo...@backupcentral.com wrote:
Hello,
I just installed backuppc successfully on my server
and the backups for each machine are saved in /var/lib/backuppc/pc/.
I would like this data to be burned automatically to DVD or rewritable DVD, i.e.
when
On Mon, Aug 31, 2009 at 3:23 PM, Jeffrey J.
Kosowsky backu...@kosowsky.org wrote:
I really fail to understand the dogged resistance to finding a viable
solution to a well-known and repeated issue with BackupPC that does
not rely on filesystem level kludges. I could see if this were given
as a
Jeffrey J. Kosowsky wrote:
I see lots of advantage in keeping the database portion relatively
small, fast, replicable, and moveable. Then you can keep and
distribute the files themselves wherever you want them spread across
one or more separate filesystems. Then the database
Peter Walter wrote:
Les Mikesell wrote:
Peter Walter wrote:
For me, the matter could be resolved if a
way was found to at least backup a backuppc server in a reasonable
fashion without requiring particular filesystems and utilities such as
zfs send/receive.
But there is a
Michael Stowe wrote at about 14:41:21 -0500 on Monday, August 31, 2009:
This still is not a solution for all of us. First, I store the backups
on a consumer-level NAS device that does not easily facilitate adding
partitions without additional hacking and risks to data integrity. The
Use of hard links to reduce
disk usage dates back to the inception of hard links. It's not a
kludge, it's an established feature of Unix-based filesystems.
It's also an established feature of Windows' filesystem NTFS, for the record.
Michael Stowe wrote at about 14:56:38 -0500 on Monday, August 31, 2009:
I don't see why everything needs to worship at the altar of atomic
operations. There are other ways to ensure that things don't go wrong.
There probably isn't, frankly. And is there a better way of ensuring
Jon Craig wrote:
Lastly, we wouldn't be having a discussion about replicating the
backuppc server if backuppc wasn't as stable and robust as it is.
BackupPC must first and foremost be a reliable and trustworthy
repository of backup data. It having the ability to replicate itself
for DR
Les Mikesell wrote at about 15:08:24 -0500 on Monday, August 31, 2009:
Jeffrey J. Kosowsky wrote:
This still is not a solution for all of us. First, I store the backups
on a consumer-level NAS device that does not easily facilitate adding
partitions without additional hacking and
Jon Craig wrote:
2009/8/31 Christian Völker chrisc...@knebb.de:
Yohoo!
With backuppc the issue is not so much fragmentation within a file as
the distance between the directory entry, the inode, and the file
content. When creating a new file, filesystems generally attempt to
allocate these
In other words, I'd suggest that working around the limitations of
your consumer-grade NAS is probably beyond the scope of any backup
system.
How nice of you. And please remind me of all the code you have
contributed to BackupPC and to this user group...
I don't think a discussion of scope
Jon Craig wrote at about 16:23:44 -0400 on Monday, August 31, 2009:
On Mon, Aug 31, 2009 at 3:23 PM, Jeffrey J.
Kosowsky backu...@kosowsky.org wrote:
I really fail to understand the dogged resistance to finding a viable
solution to a well-known and repeated issue with BackupPC that
Jeffrey J. Kosowsky wrote:
No one said education. I said warn users of the advisability of
using a dedicated filesystem that can easily be
copied/resized/moved. Because most people don't recognize the problem
of copying/moving/resizing their BackupPC database until they have
been using it
Hi all,
On Mon, Aug 31, 2009 at 04:32:14PM -0400, Jeffrey J. Kosowsky wrote:
In a very real sense, the current implementation already uses an
artificial database structure - albeit a slow, proprietary,
non-extensible, non-optimizable version. To wit, the attrib files
present in each and
Michael Stowe wrote at about 15:48:17 -0500 on Monday, August 31, 2009:
In other words, I'd suggest that working around the limitations of your
consumer-grade NAS is probably beyond the scope of any backup system.
How nice of you. And please remind me of all the code you have
Les Mikesell wrote at about 15:56:16 -0500 on Monday, August 31, 2009:
Jeffrey J. Kosowsky wrote:
No one said education. I said warn users of the advisability of
using a dedicated filesystem that can easily be
copied/resized/moved. Because most people don't recognize the problem
Les Mikesell wrote:
Peter Walter wrote:
Les Mikesell wrote:
Peter Walter wrote:
For me, the matter could be resolved if a
way was found to at least backup a backuppc server in a reasonable
fashion without requiring particular filesystems and utilities such as
zfs
Michael Stowe wrote at about 15:29:42 -0500 on Monday, August 31, 2009:
Use of hard links to reduce
disk usage dates back to the inception of hard links. It's not a
kludge, it's an established feature of Unix-based filesystems.
It's also an established feature of Windows'
baradoss wrote:
Hello,
I just installed backuppc successfully on my server
and the backups for each machine are saved in /var/lib/backuppc/pc/.
I would like this data to be burned automatically to DVD or rewritable DVD, i.e.
when I start the backup from the web interface of backuppc,
Michael Stowe wrote at about 15:48:17 -0500 on Monday, August 31, 2009:
In other words, I'd suggest that working around the limitations of
your
consumer-grade NAS is probably beyond the scope of any backup
system.
How nice of you. And please remind me of all the code you
Jeffrey J. Kosowsky wrote:
I have seen problems where the attrib files are not synchronized with
the backups or when the pc tree is broken. In fact, that is the reason
I wrote several of my routines to identify and fix such problems. Now
true, the cause is typically due to crashes or
Michael Stowe wrote at about 15:29:42 -0500 on Monday, August 31, 2009:
Use of hard links to reduce
disk usage dates back to the inception of hard links. It's not a
kludge, it's an established feature of Unix-based filesystems.
It's also an established feature of Windows'
Michael Stowe wrote at about 16:29:40 -0500 on Monday, August 31, 2009:
Michael Stowe wrote at about 15:48:17 -0500 on Monday, August 31, 2009:
In other words, I'd suggest that working around the limitations of
your
consumer-grade NAS is probably beyond the scope of any
Les Mikesell wrote at about 16:36:37 -0500 on Monday, August 31, 2009:
Jeffrey J. Kosowsky wrote:
I have seen problems where the attrib files are not synchronized with
the backups or when the pc tree is broken. In fact, that is the reason
I wrote several of my routines to identify
Les Mikesell wrote at about 15:23:41 -0500 on Monday, August 31, 2009:
Jeffrey J. Kosowsky wrote:
I see lots of advantage in keeping the database portion relatively
small, fast, replicable, and moveable. Then you can keep and
distribute the files themselves wherever you
Words like "fringe" and "shenanigans" are pejorative - no matter how you
couch it. My response was hardly ad hominem, but rather suggested that if
you went by actual contributions to the BackupPC community then
you would be way more fringe than me -- that's all.
There's a big difference between
OK. Then we have different use cases. For example, I like to use the FUSE
implementation to look for old files or old versions of files.
Would you mind elaborating?
Michael Stowe wrote at about 17:15:47 -0500 on Monday, August 31, 2009:
Words like "fringe" and "shenanigans" are pejorative - no matter how you
couch it. My response was hardly ad hominem, but rather suggested that if
you went by actual contributions to the BackupPC community then
you
Another disadvantage of the current approach is that it is difficult
to perform queries such as:
How many copies of file xyz do I have?
Return the latest version of file xyz across the following hosts?
(and infinite variations and extensions of the above)
Does this really come up much?
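Purely to illustrate the kind of query being discussed, here is a sketch against a hypothetical metadata schema; nothing like this ships with BackupPC today:

    # How many copies of a given file exist across all backups?
    sqlite3 backuppc-meta.db \
        "SELECT COUNT(*) FROM files WHERE name = 'xyz';"

    # Latest version of that file across a set of hosts
    sqlite3 backuppc-meta.db \
        "SELECT host, backup_num, mtime FROM files
         WHERE name = 'xyz' AND host IN ('alpha', 'beta')
         ORDER BY mtime DESC LIMIT 1;"

Whether queries like these come up often enough to justify a database is exactly the disagreement in this thread.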
Les Mikesell wrote at about 17:22:27 -0500 on Monday, August 31, 2009:
Jeffrey J. Kosowsky wrote:
I have seen problems where the attrib files are not synchronized with
the backups or when the pc tree is broken. In fact, that is the reason
I wrote several of my routines to
Michael Stowe wrote at about 17:27:33 -0500 on Monday, August 31, 2009:
OK. Then we have different use cases. For example, I like to use the FUSE
implementation to look for old files or old versions of files.
Would you mind elaborating?
Someone wrote a cute little FUSE filesystem
Michael Stowe wrote at about 17:48:09 -0500 on Monday, August 31, 2009:
Another disadvantage of the current approach is that it is difficult
to perform queries such as:
How many copies of file xyz do I have?
Return the latest version of file xyz across the following hosts?
(and
Michael Stowe wrote:
Another disadvantage of the current approach is that it is difficult
to perform queries such as:
How many copies of file xyz do I have?
Return the latest version of file xyz across the following hosts?
(and infinite variations and extensions of the above)
Does this
Yes, I know you run OpenSolaris. However, 99.99% of computer users
don't, so we don't have access to ZFS. On the other hand, free SQL
database applications are available on just about any OS. Why is it so
hard to understand that OpenSolaris is simply not an option for the
average user? It is also
Hi Jeffrey, hi all,
Jeffrey J. Kosowsky wrote on 2009-08-31 18:41:18 -0400 [Re: [BackupPC-users]
Problems with hardlink-based backups...]:
[...]
This is getting ridiculous. Who cares?
I do (even if I'm quoting out of context).
Frankly, this discussion has been ridiculous from the start ...
Hi all
On Mon, 31 Aug 2009 15:42:50 -0500, Les Mikesell lesmikes...@gmail.com wrote:
And the new directory entry may be all the way across the disk from the
existing inode - and far from any other inode in this directory.
true, but system cache takes care of most directory access, so
All,
Since upgrading a very busy BackupPC server to 3.1, it's been falling
farther and farther behind due to disk contention between the nightly
admin jobs and backups which ran 24x7 on the 2.x set up. I asked for
help here and the only suggestion I got was to carve out a window of
time
Michael Stowe wrote at about 18:02:44 -0500 on Monday, August 31, 2009:
Yes, I know you run OpenSolaris. However, 99.99% of computer users
don't, so we don't have access to ZFS. On the other hand, free SQL
database applications are available on just about any OS. Why is it so
hard to
Peter Walter wrote:
Terabyte image copies between servers are not feasible with the WAN
bandwidth I have available. The second backup server does not (and
cannot) back up the original targets directly - the second backup server
may only access the primary backup servers remotely, not the
Hi,
Jeffrey J. Kosowsky wrote on 2009-08-31 18:15:07 -0400 [Re: [BackupPC-users]
Problems with hardlink-based backups...]:
Les Mikesell wrote at about 15:23:41 -0500 on Monday, August 31, 2009:
Jeffrey J. Kosowsky wrote:
[...]
I guess I can't answer your question without knowing what
I wrote:
issue, and I have used it for my small file pool (220MB), which syncs in
Sorry, I meant 220GB.
Holger Parplies wrote at about 01:25:40 +0200 on Tuesday, September 1, 2009:
*snipped all the irrelevant and patronizing comments*
How do you ensure consistency between database content and file
system content? Please answer that, for once!
How do you ensure consistency between the pool and
Just because a word processor has tables doesn't mean you shouldn't be
using a spreadsheet.
Huh? Your statement is not even logically parallel, let alone
comprehensible. Do you just like to argue for argument's sake, or only
to avoid admitting you were wrong?
Then I'll explain: you can
Jeffrey J. Kosowsky wrote:
Is it self-evident that a BackupPC tree is difficult to
copy/move/resize if not on a dedicated filesystem?
What is a dedicated filesystem? How does it differ from any other
filesystem?
Les Mikesell wrote at about 18:24:20 -0500 on Monday, August 31, 2009:
How's that? You have to install some unix-like OS distribution.
There's not a huge difference.
Here is the difference:
1. SQL database
1. Most Linux distributions already include an SQL database in the
base
ahh yes.. managed to figure out that the share name should not have anything
inside. Couldn't tell what's the difference initially between the 2 specified
directories. Finally got it working. Thanks.
+--
|This was sent by
Then I would suggest you haven't seen enough software. Backup systems
are not trivial systems, and it should go without saying that you would never
set them up without consulting their operation and requirements.
I have it on good authority that if you post to a list copiously enough
for a long
Jim Leonard wrote at about 20:20:59 -0500 on Monday, August 31, 2009:
Jeffrey J. Kosowsky wrote:
Is it self-evident that a BackupPC tree is difficult to
copy/move/resize if not on a dedicated filesystem?
What is a dedicated filesystem? How does it differ from any other
Holger Parplies wrote:
Hi,
Marty wrote on 2009-08-31 19:58:58 -0400 [Re: [BackupPC-users] Problems with
hardlink-based backups...]:
Peter Walter wrote:
[...]
If I had a method of simply backing up the changed files on the
backup server, and a method of dumping the hardlinks in such a
Les Mikesell wrote at about 15:23:41 -0500 on Monday, August 31, 2009:
Jeffrey J. Kosowsky wrote:
More generally, we would need to consider two things:
1. What are the normal ways in which the two could get out of synch
and then address each of those cases
Start with that copy
Three small points:
1) LVM is de rigueur for any substantial Linux-based filesystem (see the sketch below)
2) You don't have to move it anywhere, you can just start a new repository
elsewhere
3) My filesystem isn't dedicated by any means, and I can't think of a good
reason to do so
Jim Leonard wrote at about 20:20:59
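For what it's worth, a rough sketch of what point 1 buys you, assuming the pool sits on a logical volume named /dev/vg0/backuppc (all names are made up):

    # Take a consistent snapshot of the pool volume while BackupPC is idle
    lvcreate --size 5G --snapshot --name backuppc-snap /dev/vg0/backuppc

    # Image the snapshot block device elsewhere at leisure; hard links are
    # irrelevant because the copy happens below the filesystem
    dd if=/dev/vg0/backuppc-snap bs=4M | ssh backup2 'cat > /srv/backuppc-pool.img'

    # Drop the snapshot when done
    lvremove -f /dev/vg0/backuppc-snap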
On Mon, Aug 31, 2009 at 04:25:36PM -0500, Les Mikesell wrote:
baradoss wrote:
I would like this data to be burned automatically to DVD or rewritable DVD,
i.e. when I start the backup from the web interface of backuppc, the
burning will start automatically.
You need to do it manually or
Peter Walter wrote:
I have access to cloud storage I would like to take
advantage of, but can't because of the hardlink issue. My (klugey)
solution at present is to use a backuppc server to backup the backuppc
server, but even incrementals take days to run.
What is the problem with your
Holger Parplies wrote at about 02:05:28 +0200 on Tuesday, September 1, 2009:
Hi,
Jeffrey J. Kosowsky wrote on 2009-08-31 18:15:07 -0400 [Re: [BackupPC-users]
Problems with hardlink-based backups...]:
Les Mikesell wrote at about 15:23:41 -0500 on Monday, August 31, 2009:
Jeffrey
Jeffrey J. Kosowsky wrote:
But a program should not be dependent on volume management. Volume
management is a general tool that can be helpful but should not be
required.
BackupPC isn't dependent on volume management more than any other
program. Volume management is simply one way to get
Les Mikesell wrote:
With backuppc the issue is not so much fragmentation within a file as
the distance between the directory entry, the inode, and the file
content. When creating a new file, filesystems generally attempt to
allocate these close to each other, but when you link an existing
Christian Völker wrote:
Makes sense to me. Is there any FS which would be recommended for best
performance?
OpenSolaris + ZFS. For best performance, 2G of RAM and a dual-core CPU
would be minimum requirements IMO.
No doubt people will complain about such heavy requirements. I would
respond
Peter Walter wrote:
Perhaps - but a very close second. Backuppc is very stable and robust.
But, disasters do happen. I have had my grits saved at least twice by
having a remote backup of the backup server (remember Katrina and New
Orleans?) and I am very nervous about using a backup
Jeffrey J. Kosowsky wrote:
The kludge is not the use per se of hard links
to store the file data but the resulting collapsing of multiple
versions of the same file onto a single inode, when those versions
correspond to different inodes and file attributes in the source data.
You do not have a clear
I don't see the issue here.
- New files are created only when a new file is added to the
pool. Since this happens coincident with the need for a new database
entry, these two operations can be synchronized
Unless there's a database problem. Or the executable crashes. Or a
programming
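For concreteness, a rough sketch of the ordering being proposed above; the schema and file names are hypothetical, and this is not how BackupPC actually works today:

    # 1. Write the incoming file under a temporary name in the pool.
    TMP=$(mktemp pool/.incoming.XXXXXX)
    cat new-file-data > "$TMP"
    HASH=$(md5sum "$TMP" | cut -d' ' -f1)

    # 2. Record it in the database; 3. rename into place only if that succeeds,
    #    so a crash never leaves a pool file the database doesn't know about.
    if sqlite3 backuppc-meta.db \
         "INSERT INTO pool_files(hash, size) VALUES('$HASH', $(stat -c%s "$TMP"));"
    then
        mv "$TMP" "pool/$HASH"
    else
        rm -f "$TMP"
    fi

As the reply points out, a crash between the insert and the rename can still leave an orphaned database row, so some cleanup pass is needed either way.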
Gentlemen (and Ladies, if any are lurking):
I have one (ok, ok, 4) observations regarding the recent conversation on the
list, which you may take or leave. I will not be drawn into your flame-fest
either way:
1. Please be professional. Not only is it considered polite to be
considerate to one
Jeffrey J. Kosowsky wrote:
Jim Leonard wrote at about 20:20:59 -0500 on Monday, August 31, 2009:
Jeffrey J. Kosowsky wrote:
Is it self-evident that a BackupPC tree is difficult to
copy/move/resize if not on a dedicated filesystem?
Hi,
Marty wrote on 2009-08-31 19:58:58 -0400 [Re: [BackupPC-users] Problems with
hardlink-based backups...]:
Peter Walter wrote:
[...]
If I had a method of simply backing up the changed files on the
backup server, and a method of dumping the hardlinks in such a manner
that they could
Jim Leonard wrote:
Peter Walter wrote:
Perhaps - but a very close second. Backuppc is very stable and robust.
But, disasters do happen. I have had my grits saved at least twice by
having a remote backup of the backup server (remember Katrina and New
Orleans?) and I am very nervous
Hi,
higuita wrote on 2009-08-31 23:45:54 +0100 [Re: [BackupPC-users] Which FS?]:
On Mon, 31 Aug 2009 15:42:50 -0500, Les Mikesell lesmikes...@gmail.com
wrote:
[...]
And, assuming you have enough disk activity to keep the cache out of
date, that 'ls -l' will have to move the disk head to
Jeffrey J. Kosowsky wrote:
How's that? You have to install some unix-like OS distribution.
There's not a huge difference.
Here is the difference:
1. SQL database
1. Most Linux distributions already include an SQL database in the
base install
If not, yum install mysql or
Jim Leonard wrote:
Peter Walter wrote:
I have access to cloud storage I would like to take
advantage of, but can't because of the hardlink issue. My (klugey)
solution at present is to use a backuppc server to backup the backuppc
server, but even incrementals take days to run.
-Original Message-
From: Jeffrey J. Kosowsky [mailto:backu...@kosowsky.org]
Sent: Saturday, August 29, 2009 7:40 PM
To: General list for user discussion, questions and support
Subject: Re: [BackupPC-users] BackupPC File::RsyncP issues
Jacob Hydeman wrote at about 18:28:54 -0700 on
Jim Leonard wrote at about 16:55:04 -0500 on Monday, August 31, 2009:
Jeffrey J. Kosowsky wrote:
The kludge is not the use per se of hard links
to store the file data but the resulting collapsing of multiple
versions of the same file onto a single inode, when those versions
correspond to different
Jim Leonard wrote at about 17:17:24 -0500 on Monday, August 31, 2009:
Jeffrey J. Kosowsky wrote:
But a program should not be dependent on volume management. Volume
management is a general tool that can be helpful but should not be
required.
BackupPC isn't dependent on volume
Michael Stowe wrote at about 22:41:06 -0500 on Monday, August 31, 2009:
Then I would suggest you haven't seen enough software. Backup systems
are not trivial systems, and it should be implied that you would never
set them up without consulting their operation and requirements.
I
Jeffrey J. Kosowsky wrote:
it seems that many
people (myself included) initially set up their BackupPC topdir on a
filesystem containing mixed data and without the advantage of things
like LVM or ZFS since they don't realize in advance how hard it is to
copy/move/resize the topdir area due to
Michael Stowe wrote at about 22:53:02 -0500 on Monday, August 31, 2009:
Three small points:
1) LVM is de rigueur for any substantial Linux-based filesystem
Not all Linux installations support LVM - oh yeah, I forgot, you
consider a consumer-NAS to be a fringe case.
2) You don't have
Michael Stowe wrote at about 22:53:02 -0500 on Monday, August 31, 2009:
Three small points:
1) LVM is de rigueur for any substantial Linux-based filesystem
Not all Linux installations support LVM - oh yeah, I forgot, you
consider a consumer-NAS to be a fringe case.
2) You don't have
Peter Walter wrote:
What is the problem with your cloud storage such that you can't use it
to make a backup of BackupPC? What cloud storage do you have access to,
and what operating system and filesystem are you using to run BackupPC?
I have not (yet) come across a cloud storage
Adam Goryachev wrote at about 14:14:49 +1000 on Tuesday, September 1, 2009:
Jeffrey J. Kosowsky wrote:
Jim Leonard wrote at about 20:20:59 -0500 on Monday, August 31, 2009:
Jeffrey J. Kosowsky wrote:
Is it self-evident that a BackupPC tree is difficult to
copy/move/resize if
Jeffrey J. Kosowsky wrote:
In contrast, the normal usage of hard links uses a
single inode to represent the same file albeit differing only in name.
There is nothing abnormal about the use of hard links here. What
operating environment are you basing your definition of normal on?
This is not
Michael Stowe wrote at about 23:15:15 -0500 on Monday, August 31, 2009:
I don't see the issue here.
- New files are created only when a new file is added to the
pool. Since this happens coincident with the need for a new database
entry, these two operations can be synchronized
Peter Walter wrote:
Jim Leonard wrote:
Peter Walter wrote:
I have access to cloud storage I would like to take
advantage of, but can't because of the hardlink issue. My (klugey)
solution at present is to use a backuppc server to backup the backuppc
server, but even incrementals take