Threads always use default attributes in all the tools, so pthread
attribute objects and their initialization are of no use. Just pass
NULL as the attr argument to pthread_create to get default attributes.
Signed-off-by: Rakesh Pandit rak...@tuxera.com
---
cmds-scrub.c | 13 ++---
cmds-send.c |
Compressing a small write (= blocksize) doesn't save us any
disk space at all; skipping it can save us some compression time.
This patch also fixes wrongly setting the nocompression flag on the
inode, e.g. a case where @total_in is 4096 and we then get
@total_compressed 52, because we do alignment to the page cache size
Steps to reproduce:
# mkfs.btrfs -f /dev/sda[8-11] -m raid5 -d raid5
# mount /dev/sda8 /mnt
# btrfs scrub start -BR /mnt
# echo $?    -- unverified errors make the return value 3
This is because we don't set up the right mapping between physical
and logical addresses for raid56, which makes checksum
I had something similar with 3.14.0-rc7.
I saw it also with 3.14.0-rc3, as discussed in this thread:
http://thread.gmane.org/gmane.comp.file-systems.btrfs/32938/focus=32977
Here is the trace for 3.14.0-rc7:
[377944.904848] ------------[ cut here ]------------
[377944.904947] kernel BUG at
Marc MERLIN posted on Sun, 23 Mar 2014 09:25:06 -0700 as excerpted:
On Sun, Mar 23, 2014 at 04:18:43PM +, Hugo Mills wrote:
On Sun, Mar 23, 2014 at 08:25:17AM -0700, Marc MERLIN wrote:
What's the syntax for removing a drive that isn't there?
btrfs dev del missing /path
Per Nystrom posted on Sun, 23 Mar 2014 13:38:21 -0700 as excerpted:
I am going through the process of replacing a bad drive in a RAID 1
mirror. The filesystem wouldn't mount because of the missing device,
and the btrfs man pages were not helpful in resolving it. Specifically,
it would have
On 23/03/14 22:56, Marc MERLIN wrote:
Ok, thanks to the help I got from you, and my own experiments, I've
written this:
http://marc.merlins.org/perso/btrfs/post_2014-03-23_Btrfs-Raid5-Status.html
If someone reminds me how to edit the btrfs wiki, I'm happy to copy that
there, or give anyone
Marc MERLIN posted on Sun, 23 Mar 2014 11:58:16 -0700 as excerpted:
On Sun, Mar 23, 2014 at 11:09:07AM -0700, Marc MERLIN wrote:
I found out that with a drive that used to be part of a raid system now
mounted and running without it, btrfs apparently decides that the drive
is part of the
Just an idea:
btrfs Problem:
I've had two systems die with huge load factors (100!) in a case
where a user program had, unexpectedly to me, been doing 'database'-like
operations and caused multiple files to become heavily fragmented. The
system eventually dies when data cannot be added to the
Martin posted on Mon, 24 Mar 2014 19:47:34 + as excerpted:
Possible fix:
btrfs checks the ratio of filesize versus number of fragments and for a
bad ratio either: [...]
3: Automatically defragments the file.
See the autodefrag mount option.
=:^)
--
Duncan - List replies preferred.
Hello,
I read through the FAQ you mentioned, but I must admit that I do not
fully understand it.
My experience is that it takes a bit of time to soak in. Between time,
previous Linux experience, and reading this list for awhile, things do
make more sense now, but my understanding has
On Mon, Mar 24, 2014 at 07:17:12PM +, Martin wrote:
Thanks for the very good summary.
So... In very brief summary, btrfs raid5 is very much a work in progress.
If you know how to use it, which I didn't until now, it's technically very
usable as is. The corner cases are in having a
On Mon, Mar 24, 2014 at 06:38:30PM +, Duncan wrote:
Marc MERLIN posted on Sun, 23 Mar 2014 09:25:06 -0700 as excerpted:
On Sun, Mar 23, 2014 at 04:18:43PM +, Hugo Mills wrote:
On Sun, Mar 23, 2014 at 08:25:17AM -0700, Marc MERLIN wrote:
What's the syntax for removing a drive
Hi together,
I've created a btrfs filesystem on a partition of a qcow2 image with a
GUID partition table, created with the qemu (1.7.0) tool:
qemu-img create -f qcow2 image.qcow2 2T
I'm connecting this image to an NBD with qemu-nbd and mounting the NBD.
I'm experiencing errors which I don't with
The BTRFS_IOC_SNAP_CREATE_V2 ioctl is limited by requiring that a file
descriptor be passed in order to create the snapshot. This means that
snapshots may only be created of trees that are available in the mounted
namespace. We have a need to create snapshots from subvolumes outside
of the
This patch uses the new BTRFS_SUBVOL_CREATE_SUBVOLID flag to create snapshots
by subvolume ID.
usage: btrfs subvolume snapshot [-r] [-q qgroupid] -s subvolid dest/name
Since we don't have a name for the source snapshot, the complete path to
the destination must be specified.
Signed-off-by: Jeff
On Mon, Mar 24, 2014 at 07:19:14PM +, Duncan wrote:
Marc MERLIN posted on Sun, 23 Mar 2014 11:58:16 -0700 as excerpted:
On Sun, Mar 23, 2014 at 11:09:07AM -0700, Marc MERLIN wrote:
I found out that with a drive that used to be part of a raid system now
mounted and running without it,
On 24/03/14 20:19, Duncan wrote:
Martin posted on Mon, 24 Mar 2014 19:47:34 + as excerpted:
Possible fix:
btrfs checks the ratio of filesize versus number of fragments and for a
bad ratio either: [...]
3: Automatically defragments the file.
See the autodefrag mount option.
=:^)
On 24/03/14 21:52, Marc MERLIN wrote:
On Mon, Mar 24, 2014 at 07:17:12PM +, Martin wrote:
Thanks for the very good summary.
So... In very brief summary, btrfs raid5 is very much a work in progress.
If you know how to use it, which I didn't until now, it's technically very
usable as
On Tue, Mar 25, 2014 at 01:11:43AM +, Martin wrote:
Yes, looking good, but for my usage I need the option to run ok with a
failed drive. So, that's one to keep a development eye on for continued
progress...
So it does run with a failed drive, it'll just fill the logs with write
errors,
I had a tree with some thousands of files (less than 1 million)
on top of md raid5.
It took 18 hours to rm it in 3 tries:
gargamel:/mnt/dshelf2/backup/polgara# time rm -rf current.todel/
real    1087m26.491s
user    0m2.448s
sys     4m42.012s
gargamel:/mnt/dshelf2/backup/polgara# btrfs fi show
Hello,
I've been using a single-drive btrfs for some time, and when free space
became too low I added an additional drive and rebalanced the FS with
RAID0 data and RAID1 system and metadata storage.
Now I have the following configuration:
# btrfs fi show /btr
Label: none uuid: