Marc MERLIN wrote:
> I'm one of those people who uses cp -al and rsync to do backups. Indeed
> I should likely rework the flow to use subvolumes and snapshots.
> You also mentioned reflinks, and it sounds like I can use
> cp -a --reflink instead of cp -al.
>
> Also, would the dedupe code in bt
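For reference, a rough sketch of the three rotation styles mentioned above, assuming a btrfs filesystem mounted at /mnt/backup and a source tree at /data (hypothetical paths, not taken from this thread):

    # classic hardlink rotation: unchanged files share an inode with yesterday's copy
    cp -al /mnt/backup/2013-12-04 /mnt/backup/2013-12-05
    rsync -a --delete /data/ /mnt/backup/2013-12-05/

    # reflink rotation: same flow, but the copies share data extents instead of inodes
    cp -a --reflink=always /mnt/backup/2013-12-04 /mnt/backup/2013-12-05
    rsync -a --delete /data/ /mnt/backup/2013-12-05/

    # subvolume/snapshot rotation: rsync into one subvolume, snapshot it read-only per run
    btrfs subvolume create /mnt/backup/current            # one-time setup
    rsync -a --delete /data/ /mnt/backup/current/
    btrfs subvolume snapshot -r /mnt/backup/current /mnt/backup/2013-12-05

The hardlink and reflink variants behave much the same for unchanged files; the snapshot variant avoids walking the whole tree to create each day's copy, since the snapshot itself is nearly instantaneous.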
On Thu, Dec 05, 2013 at 07:39:30PM +0000, Duncan wrote:
> John Goerzen posted on Thu, 05 Dec 2013 11:52:04 -0600 as excerpted:
>
> > Hello,
> >
> > I have observed extremely slow metadata performance with btrfs. This may
> > be a bit of a nightmare scenario; it involves untarring a backup of
> > 1.6TB of backuppc data, which contains millions of hardlinks and much
> > data, onto USB 2.0 disks.
On 12/05/2013 05:32 PM, Russell Coker wrote:
> On Thu, 5 Dec 2013 11:52:04 John Goerzen wrote:
> > I have observed extremely slow metadata performance with btrfs. This may
> > be a bit of a nightmare scenario; it involves untarring a backup of
> > 1.6TB of backuppc data, which contains millions of hardlinks and much
> > data, onto USB 2.0 disks.
On Thu, 5 Dec 2013 11:52:04 John Goerzen wrote:
> I have observed extremely slow metadata performance with btrfs. This may
> be a bit of a nightmare scenario; it involves untarring a backup of
> 1.6TB of backuppc data, which contains millions of hardlinks and much
> data, onto USB 2.0 disks.
John Goerzen posted on Thu, 05 Dec 2013 11:52:04 -0600 as excerpted:
> Hello,
>
> I have observed extremely slow metadata performance with btrfs. This may
> be a bit of a nightmare scenario; it involves untarring a backup of
> 1.6TB of backuppc data, which contains millions of hardlinks and much
> data, onto USB 2.0 disks.
Hello,
I have observed extremely slow metadata performance with btrfs. This may
be a bit of a nightmare scenario; it involves untarring a backup of
1.6TB of backuppc data, which contains millions of hardlinks and much
data, onto USB 2.0 disks.
I have run disk monitoring tools such as dstat
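For concreteness, the restore-and-monitoring workflow described here might look roughly like the following; the device name, mount point, and tarball path are placeholders rather than details from the report, and the dstat flags are just one plausible choice:

    # hypothetical target: a btrfs filesystem on a USB 2.0 disk
    mkfs.btrfs /dev/sdX1
    mount /dev/sdX1 /mnt/restore

    # in a second terminal: CPU plus per-disk throughput and utilization every 5 seconds
    dstat -cd --disk-util 5

    # unpack the backuppc archive, preserving permissions and hard links
    tar -C /mnt/restore -xpf backuppc-backup.tar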