On Saturday, 11 May 2013, 17:57:11, Tim Eggleston wrote:
> > Yes. The command just triggers the defragmentation which takes place
> > in the background. Try a "sync" afterwards :)
>
> Sorry Martin, I should have specified that I wondered if it was like
> the scrub operation in that respect, so I left it several hours before
> running filefrag again (and seeing ...
Yes. The command just triggers the defragmentation which takes place
in the background. Try a "sync" afterwards :)
Sorry Martin, I should have specified that I wondered if it was like
the scrub operation in that respect, so I left it several hours before
running filefrag again (and seeing ...
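For reference, a minimal sketch of the sequence discussed above; the
mount point and file path here are assumptions, not from the thread:

  # trigger defragmentation of one file; the work happens in the background
  btrfs filesystem defragment /mnt/images/disk.vmdk

  # flush dirty data so the new extent layout reaches disk
  sync

  # then re-check the extent count
  filefrag /mnt/images/disk.vmdk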
On Saturday, 11 May 2013, 12:27:09, Tim Eggleston wrote:
> Hi list,
>
> I have a few large image files (VMware Workstation VMDKs and TrueCrypt
> containers) which I routinely back up over the network to a btrfs raid10
> volume via bigsync (https://code.google.com/p/bigsync/).
>
> The VM images in particular get really fragmented due to CoW, which is
> expected. ...
On Sat, May 11, 2013 at 02:27:27PM +0200, Clemens Eisserer wrote:
> Hi,
>
> I frequently get messages like "unlinked 10 orphans" in syslog
> (running Linux 3.9.1), although I have never had a power outage or a
> kernel crash.
> Is this something to worry about, or just routine clean-up information?
Hi,
I frequently get messages like "unlinked 10 orphans" in syslog
(running Linux 3.9.1), although I have never had a power outage or a
kernel crash.
Is this something to worry about, or just routine clean-up information?
Thank you in advance, Clemens
Hi list,
I have a few large image files (VMware Workstation VMDKs and TrueCrypt
containers) which I routinely back up over the network to a btrfs raid10
volume via bigsync (https://code.google.com/p/bigsync/).
The VM images in particular get really fragmented due to CoW, which is
expected. ...
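A quick way to observe that fragmentation, assuming a hypothetical path
for one of the backed-up images (filefrag ships with e2fsprogs):

  # -v lists every extent; the summary line gives the total extent count
  filefrag -v /mnt/backup/vm.vmdk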
Raid5 with 3 devices is well defined, but the old logic allowed
raid5 only with a minimum of 4 devices when converting the block group
profile via btrfs balance. Creating a raid5 with just three devices
using mkfs.btrfs always worked as expected. This is now fixed and the
whole logic is rewritten.
Clean up the format of the definitions of BTRFS_BLOCK_GROUP_RAID5 and
BTRFS_BLOCK_GROUP_RAID6.
Signed-off-by: Andreas Philipp
---
fs/btrfs/ctree.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index e3a4fd7..ea688aa 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
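For reference, a hedged sketch of the two paths the commit message
contrasts; device names and mount point are assumptions:

  # creating raid5 across three devices at mkfs time always worked
  mkfs.btrfs -d raid5 -m raid5 /dev/sdb /dev/sdc /dev/sdd

  # converting the block group profile of an existing filesystem via
  # balance used to insist on four devices; the fix allows three
  btrfs balance start -dconvert=raid5 -mconvert=raid5 /mnt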
Hi,
Over the last few days I have been playing around with Chris Mason's
raid56-experimental branch (Thanks!) and discovered two minor issues.
Thanks,
Andreas
Andreas Philipp (2):
Minor format cleanup.
Correct allowed raid levels on balance.
fs/btrfs/ctree.h | 4 ++--
fs/btrfs/volumes.c | 11 ...
On 05/10/2013 11:46 PM, Hugo Mills wrote:
> On Fri, May 10, 2013 at 11:43:34PM +0200, Marcus Lövgren wrote:
>> Yes, you were right! Adding another drive to the array made it continue
>> without errors. Is this already reported as a bug?
>
> I believe it has been, yes. I think we've even had a ...
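For reference, growing a mounted array is a single command; the device
name and mount point here are assumptions:

  # add another drive so allocation can continue
  btrfs device add /dev/sde /mnt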
Jan Schmidt wrote:
> We can try to debug that further; you can send me / upload the output of
>
> btrfs-image -c9 /dev/whatever blah.img
>
> built from Josef's repository
>
> git://github.com/josefbacik/btrfs-progs.git
>
> It contains all your metadata (like file names); data is omitted.
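A sketch of the steps Jan describes, assuming the branch builds with a
plain make; the device and image names are the placeholders from the mail:

  git clone git://github.com/josefbacik/btrfs-progs.git
  cd btrfs-progs && make

  # -c9 selects maximum compression; the image keeps metadata only
  ./btrfs-image -c9 /dev/whatever blah.img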
On Apr 4, 2013, Alexandre Oliva wrote:
> I've been trying to figure out the btrfs I/O stack to try to understand
> why, sometimes (but not always), after a failure to read a (data
> non-replicated) block from the disk, the file being accessed becomes
> permanently locked, and the filesystem, unmountable.