Hi,
On 2016-08-26 13:52, Austin S. Hemmelgarn wrote:
Regular 'df' isn't to be trusted when dealing with BTRFS; the only reason we report anything there is because many things break horribly if we don't.
Yeah, I noticed. Seems to produce a reasonable guess, though.
Additionally, while running
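Since plain df is only an approximation on btrfs, the filesystem's own tools give the more trustworthy numbers. A minimal sketch (the mount point /mnt is an assumption):

```shell
# Plain df cannot account for RAID profiles or the data/metadata
# split, so its free-space figure is only a rough guess on btrfs.
df -h /mnt

# btrfs' own accounting, broken down by data/metadata/system
# and by allocation profile:
btrfs filesystem df /mnt

# btrfs-progs >= 3.18 also has a per-device breakdown:
btrfs filesystem usage /mnt
```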
Hi Chris,
first off, thank you for the detailed explanations!
On 2016-08-26 01:04, Chris Murphy wrote:
No, it's not a file-, directory-, or subvolume-specific command. It applies to a whole volume.
You are right, but all I was after in the first place was a way to change the mode for new data on
Hi,
On 2016-08-25 21:50, Chris Murphy wrote:
It's incidental right now. It's not something controllable or intended to have enduring mixed profile block groups.
I see. (Kind of.)
Such a switch doesn't exist; there's no way to define what files, directories, or subvolumes have what profiles.
We
Hi,
On 2016-08-25 20:26, Justin Kilpatrick wrote:
I'm not sure why you want to avoid a balance,
I didn't check, but I imagined it would slow down my rsync significantly.
Once you start this command, all the new data should follow the new rules.
Ah, now that's interesting.
When the balance i
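The command under discussion here is presumably a balance with a convert filter; once such a balance has started, newly allocated chunks already use the target profile. A sketch, assuming raid1 as the target data profile and /mnt as the mount point:

```shell
# Convert the data profile; new block groups follow the target
# profile from the moment the balance starts.
btrfs balance start -dconvert=raid1 /mnt

# The 'soft' modifier skips chunks that already have the target
# profile, which makes resuming after an interruption much cheaper:
btrfs balance start -dconvert=raid1,soft /mnt

# Progress can be watched from another terminal:
btrfs balance status /mnt
```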
Hi,
I recently created a new btrfs on two disks - one 6TB, one 2TB - for temporary backup purposes.
It apparently defaulted to raid0 for data, and I didn't realize at the time that this would become a problem.
Now the 2TB is almost full, and df tells me I only have about 200GB of free space. W
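With raid0 data on two unequal disks, every data chunk needs a stripe on both devices, so usable space is capped by the smaller disk. One way out, sketched under the assumption that redundancy isn't needed for this temporary backup and that the filesystem is mounted at /mnt, is to convert the data profile to single:

```shell
# raid0 stripes each data chunk across both devices, so once the
# 2TB disk is full no further raid0 chunks can be allocated.
# Converting data to 'single' lets btrfs fill the 6TB disk alone:
btrfs balance start -dconvert=single /mnt
```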
Hi,
thank you all for your helpful comments.
From what I've read, I forged the following guidelines (for myself; ymmv):
- Use btrfs for generic data storage on spinning disks and for everything on ssds.
- Use zfs for spinning disks that may be used for cow-unfriendly workloads, like vm images
On 2015-09-18 04:22, Duncan wrote:
one way or another, you're going to have to write two things, one a checksum of the other, and if they are in-place-overwrites, while the race can be narrowed, there's always going to be a point at which either one or the other will have been written, while
On 2015-09-18 04:00, Duncan wrote:
Some users have ameliorated that by scheduling weekly or monthly btrfs defrag, reporting that cow1 issues with temporary snapshots build up slowly enough that the scheduled defrag effectively eliminates the otherwise growing problem, but it's still an addition
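The scheduled defrag mentioned above could look roughly like this (an illustrative cron entry; the path /data and the weekly schedule are assumptions, not from the thread):

```shell
# /etc/cron.d/btrfs-defrag (illustrative): recursively defragment
# every Sunday at 03:00; -r descends into the directory tree.
0 3 * * 0  root  btrfs filesystem defragment -r /data
```

One caveat worth keeping in mind: on kernels of that era, defragment unshares extents, so running it against a filesystem with snapshots can significantly increase space usage.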
On 2015-09-18 00:41, Sean Greenslade wrote:
MD is emulating hardware RAID. In hardware RAID, you are doing work at the block level. Block-level RAID has no understanding of the filesystem(s) running on top of it. Therefore it would have to checksum groups of blocks, and store those checksums on
On 17.09.2015 at 21:43, Hugo Mills wrote:
On Thu, Sep 17, 2015 at 07:56:08PM +0200, Gert Menke wrote:
BTRFS looks really nice feature-wise, but is not (yet) optimized for my use-case, I guess. Disabling COW would certainly help, but I don't want to lose the data checksum
On 17.09.2015 at 20:35, Chris Murphy wrote:
You can use Btrfs in the guest to get at least notification of SDC.
Yes, but I'd rather not depend on all potential guest OSes having btrfs or something similar.
Another way is to put a conventional fs image on e.g. GlusterFS with checksumming enabl
Hi,
thank you for your answers!
So it seems there are several suboptimal alternatives here...
MD+LVM is very close to what I want, but md has no way to cope with silent data corruption. So if I wanted to use a guest filesystem that has no checksums either, I'm out of luck. I'm honestly a bit
Hi everybody,
first off, I'm not 100% sure if this is the right place to ask, so if it's not, I apologize and I'd appreciate a pointer in the right direction.
I want to build a virtualization server to replace my current home server. I'm thinking about a Debian system with libvirt/KVM. The sy