Dear btrfs community,
I am facing several problems with btrfs, and I would be very
thankful if someone could help me with them. Also, while playing with btrfs
I have a few suggestions; it would be nice if someone could comment on those.
While starting the system, /var (which is a btrfs volume) failed to be
I would have folded this and patch 4 earlier if I had written patch 1, but I
didn't feel comfortable modifying Zach's work too much. I can make that change
if it's not really a problem.
Anna
On 10/11/2015 10:22 AM, Christoph Hellwig wrote:
> Needs to be folded.
>
On Wed, Oct 14, 2015 at 11:08:40AM -0700, Andy Lutomirski wrote:
> > So what I'm hearing is that I should drop the reflink and dedup flags and
> > change this system call to only perform a full copy (while preserving
> > sparseness), correct? I can make those changes, but only if everybody is
>
On 10/14/2015 02:25 PM, Christoph Hellwig wrote:
> On Wed, Oct 14, 2015 at 01:37:13PM -0400, Anna Schumaker wrote:
>> I would have folded this and patch 4 earlier if I had written patch 1,
>> but I didn't feel comfortable modifying Zach's work too much. I can
>> make that change if it's not
On Wed, Oct 14, 2015 at 01:37:13PM -0400, Anna Schumaker wrote:
> I would have folded this and patch 4 earlier if I had written patch 1,
> but I didn't feel comfortable modifying Zach's work too much. I can
> make that change if it's not really a problem.
Folding the changes is perfectly fine,
On Wed, Oct 14, 2015 at 11:27 AM, Christoph Hellwig wrote:
> On Wed, Oct 14, 2015 at 11:08:40AM -0700, Andy Lutomirski wrote:
>> > So what I'm hearing is that I should drop the reflink and dedup flags and
>> > change this system call to only perform a full copy (while preserving
On Wed, Oct 14, 2015 at 01:59:40PM -0400, Anna Schumaker wrote:
> On 10/12/2015 07:17 PM, Darrick J. Wong wrote:
> > On Sun, Oct 11, 2015 at 07:22:03AM -0700, Christoph Hellwig wrote:
> >> On Wed, Sep 30, 2015 at 01:26:52PM -0400, Anna Schumaker wrote:
> >>> This allows us to have an in-kernel
On Tue, Oct 13, 2015 at 12:29:59AM -0700, Christoph Hellwig wrote:
> On Mon, Oct 12, 2015 at 04:41:06PM -0700, Darrick J. Wong wrote:
> > One of the patches in last week's XFS reflink patchbomb adds
> > FALLOC_FL_UNSHARE
> > flag; at the moment it _only_ forces copy-on-write of shared blocks, and
mkfs.btrfs allows creation of Btrfs filesystem instances with the mixed
block group feature enabled and a sectorsize different from the nodesize.
For example:
[root@localhost btrfs-progs]# mkfs.btrfs -f -M -s 4096 -n 16384 /dev/loop0
Forcing mixed metadata/data groups
btrfs-progs
On 10/11/2015 10:23 AM, Christoph Hellwig wrote:
> On Wed, Sep 30, 2015 at 01:26:51PM -0400, Anna Schumaker wrote:
>> I still want to do an in-kernel copy even if the files are on different
>> mountpoints, and NFS has a "server to server" copy that expects two
>> files on different mountpoints.
On 10/12/2015 07:17 PM, Darrick J. Wong wrote:
> On Sun, Oct 11, 2015 at 07:22:03AM -0700, Christoph Hellwig wrote:
>> On Wed, Sep 30, 2015 at 01:26:52PM -0400, Anna Schumaker wrote:
>>> This allows us to have an in-kernel copy mechanism that avoids frequent
>>> switches between kernel and user
On Wed, Oct 14, 2015 at 11:38:13AM -0700, Andy Lutomirski wrote:
> One might argue that reflink is like copy + immediate dedupe.
No, it's not. It's all that and more, because it is an operation that
is atomic vs. other writes to the file and it's an operation that either
clones the whole range
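As an aside, the clone-versus-copy distinction discussed in this thread can be poked at from userspace with coreutils `cp`; a minimal sketch (file names are my own, and nothing btrfs-specific is assumed, since `--reflink=auto` falls back to a plain copy on filesystems without clone support):

```shell
# Create a small test file.
printf 'hello btrfs\n' > src.txt

# Ask for a reflink clone. With =auto, cp silently falls back to an
# ordinary copy on filesystems that lack clone support, so this runs
# everywhere; on btrfs it issues the clone ioctl and shares extents.
cp --reflink=auto src.txt clone.txt

# Either way, the clone's contents match the source byte-for-byte.
cmp src.txt clone.txt && echo "contents identical"
```

Note that only `--reflink=always` guarantees extent sharing (and fails where it is unsupported); `=auto` trades that guarantee for portability.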
On Wed, Oct 14, 2015 at 10:59 AM, Anna Schumaker
wrote:
> On 10/12/2015 07:17 PM, Darrick J. Wong wrote:
>> On Sun, Oct 11, 2015 at 07:22:03AM -0700, Christoph Hellwig wrote:
>>> On Wed, Sep 30, 2015 at 01:26:52PM -0400, Anna Schumaker wrote:
This allows us to have
On Tue, Oct 13, 2015 at 06:25:54PM -0500, EJ Parker wrote:
> I rebooted my server last night and discovered that my btrfs
> filesystem (3 disk raid1) would not mount anymore. After doing some
> research and getting nowhere I went to IRC and user darkling asked me
> a few questions and asked for
Hi Chris,
This pull is quite a small one, with 2 small and safe patches.
The first one is a fix that went missing in the 4.2 merge window:
just remove an empty header I created in the qgroup rework.
The other one was found in the qgroup reserve rework, but that's also very
small and safe, with a test case
On Tue, Oct 13, 2015 at 04:25:29PM -0400, Anna Schumaker wrote:
> I haven't tried it, but I think the hole would be expanded :(. I'm having
> splice() handle the pagecache copy part, and (as far as I know) splice()
> doesn't know anything about sparse files. I might be able to put in some
>
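Anna's concern above, that a splice()-driven pagecache copy knows nothing about sparse files, can be illustrated with plain coreutils; a minimal sketch (file names are my own; no btrfs required) showing that a naive byte-for-byte copy materializes a hole:

```shell
# truncate extends the file size without allocating blocks, so the
# whole 1 MiB file is a single hole.
truncate -s 1M sparse.img

# Apparent size is 1 MiB, but 'du' usually reports (close to) zero
# allocated blocks for the sparse original.
stat -c '%s' sparse.img
du -k sparse.img

# A naive read/write copy (analogous to pushing everything through
# the pagecache) writes out the zeroes, so the copy's allocated
# size balloons to the full 1 MiB.
cat sparse.img > dense.img
du -k dense.img
```

This is exactly the "hole would be expanded" effect: the data is identical, but the on-disk footprint is not.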
Hi David,
Any further comment?
Thanks,
Qu
Qu Wenruo wrote on 2015/10/07 09:22 +0800:
Hi David,
I'm sorry that I didn't get the point of your previous comment.
Maybe the parameter/function names don't follow the BTRFS_STACK_GETSET_FUNC
macro, but IMHO that's OK, as btrfs_item_key_to_cpu() is not
Current code will always truncate the trailing page if its alloc_start is
smaller than the inode size.
For example, the file extent layout is like:
0 4K 8K 16K 32K
|<-Extent A>|
|<--Inode size: 18K-->|
But when calling fallocate even for the range [0,4K), it will
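The expected behavior, i.e. the inode size surviving an fallocate of an interior range, can be checked with the util-linux `fallocate` tool on any filesystem; a minimal sketch (file name is my own; this only demonstrates the correct behavior, it is not a reproducer for the btrfs bug):

```shell
# Build a file whose size (18 KiB) is not block-aligned, mirroring
# the layout in the report above.
truncate -s 18K testfile

# Allocate the range [0, 4K) with -n (keep size). A correct
# implementation must leave the file size at 18 KiB; the bug being
# described truncated the trailing page instead.
fallocate -n -o 0 -l 4096 testfile

# Size should still be 18432 bytes (18 KiB).
stat -c '%s' testfile
```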
The empty file was introduced by a careless 'git add' during the qgroup
accounting framework rework.
Just remove it.
Reported-by: David Sterba
Signed-off-by: Qu Wenruo
---
fs/btrfs/extent-tree.h | 0
1 file changed, 0 insertions(+), 0 deletions(-)
On Wed, Oct 14, 2015 at 05:08:17AM +0000, Duncan wrote:
> Carmine Paolino posted on Tue, 13 Oct 2015 23:21:49 +0200 as excerpted:
>
> > I have a home server with 3 hard drives that I added to the same btrfs
> > filesystem. Several hours ago I ran `btrfs balance start -dconvert=raid0
> > /` and
On 2015-10-14 14:53, Andy Lutomirski wrote:
On Wed, Oct 14, 2015 at 11:49 AM, Christoph Hellwig wrote:
On Wed, Oct 14, 2015 at 11:38:13AM -0700, Andy Lutomirski wrote:
One might argue that reflink is like copy + immediate dedupe.
No, it's not. It's all that and more,
[Repost with new SPF rules, should not be classified as spam now.]
Hi,
btrfs send -c stopped working for me several months ago. My setup is
actually very simple. On the "send" side I have:
# btrfs sub list -u / | grep rootfs-snapshot-
ID 2221 gen 93340 top level 5 uuid
I would not use Raid56 in production. I've tried using it a few
different ways but have run into trouble with stability and
performance. Raid10 has been working excellently for me.
On Wed, Oct 14, 2015 at 3:19 PM, Sjoerd wrote:
> Hi all,
>
> Is RAID6 still considered
On 14/10/2015 22:23, Donald Pearson wrote:
> I would not use Raid56 in production. I've tried using it a few
> different ways but have run into trouble with stability and
> performance. Raid10 has been working excellently for me.
Hi, could you elaborate on the stability and performance
On 14/10/2015 16:40, Anand Jain wrote:
>> # mount -o degraded /var
>> Oct 11 18:20:15 kernel: BTRFS: too many missing devices, writeable
>> mount is not allowed
>>
>> # mount -o degraded,ro /var
>> # btrfs device add /dev/sdd1 /var
>> ERROR: error adding the device '/dev/sdd1' - Read-only file
On 14/10/15 20:14, Austin S Hemmelgarn wrote:
> On 2015-10-14 14:53, Andy Lutomirski wrote:
>> On Wed, Oct 14, 2015 at 11:49 AM, Christoph Hellwig
>> wrote:
>>> On Wed, Oct 14, 2015 at 11:38:13AM -0700, Andy Lutomirski wrote:
One might argue that reflink is like copy +
On Wed, Oct 14, 2015 at 11:49 AM, Christoph Hellwig wrote:
> On Wed, Oct 14, 2015 at 11:38:13AM -0700, Andy Lutomirski wrote:
>> One might argue that reflink is like copy + immediate dedupe.
>
> No, it's not. It's all that and more, because it is an operation that
> is
On 2015-10-14 14:27, Christoph Hellwig wrote:
On Wed, Oct 14, 2015 at 11:08:40AM -0700, Andy Lutomirski wrote:
So what I'm hearing is that I should drop the reflink and dedup flags and
change this system call to only perform a full copy (while preserving
sparseness), correct? I can make those
Hi all,
Is RAID6 still considered too unstable to use in production?
The latest I could find about a test scenario is more than a year ago
(http://marc.merlins.org/perso/btrfs/post_2014-03-23_Btrfs-Raid5-Status.html)
I want to build a new NAS (6 disks of 4TB) on RAID6 and prefer to
On 14/10/2015 22:53, Donald Pearson wrote:
> I've used it from 3.8-something to current; it does not handle drive
> failure well at all, which is the point of parity raid. I had a 10-disk
> Raid6 array on 4.1.1 and a drive failure put the filesystem in an
> irrecoverable state. Scrub speeds are
I've used it from 3.8-something to current; it does not handle drive
failure well at all, which is the point of parity raid. I had a 10-disk
Raid6 array on 4.1.1 and a drive failure put the filesystem in an
irrecoverable state. Scrub speeds are also an order of magnitude or
more slower in my own
On Wed, Oct 14, 2015 at 4:53 PM, Donald Pearson
wrote:
>
> Personally I would still recommend zfs on illumos in production,
> because it's nearly unshakeable and the creative things you can do to
> deal with problems are pretty remarkable. The unfortunate reality is
>
Hugo Mills posted on Wed, 14 Oct 2015 09:13:25 +0000 as excerpted:
> On Wed, Oct 14, 2015 at 05:08:17AM +0000, Duncan wrote:
>> Carmine Paolino posted on Tue, 13 Oct 2015 23:21:49 +0200 as excerpted:
>>
>> > I have a home server with 3 hard drives that I added to the same
>> > btrfs filesystem.
btrfs does handle mixed device sizes really well actually. And you're
right, zfs is limited to the smallest drive x vdev width. The rest
goes unused. You can do things like pre-slice the drives with sparse
files and create zfs on those files, but then you'll load up those
larger drives with a
On 10/13/2015 08:48 PM, David Sterba wrote:
On Sat, Oct 10, 2015 at 10:30:55PM +0800, Anand Jain wrote:
This is the btrfs-progs part of the kernel patch
Btrfs: Introduce device delete by devid
Thanks, now in next/delete-by-id-v3, I made some changes so please have
a look. Notably, I've
Dmitry Katsubo posted on Wed, 14 Oct 2015 22:27:29 +0200 as excerpted:
> On 14/10/2015 16:40, Anand Jain wrote:
>>> # mount -o degraded /var
>>> Oct 11 18:20:15 kernel: BTRFS: too many
>>> missing devices, writeable mount is not allowed
>>>
>>> # mount -o degraded,ro /var
>>> # btrfs device add /dev/sdd1
On Wed, Oct 14, 2015 at 1:09 AM, Zygo Blaxell
wrote:
>
> I wouldn't try to use dedup on a kernel older than v4.1 because of these
> fixes in 4.1 and later:
I would assume that these would be ported to the other longterm
kernels like 3.18 at some point?
> Do dedup
Hi
I have 6 x 3TB SATA drives available, with a view to consolidating long
term storage into a single raid array intended to be operational more or
less 24/7. I've done this a few times too many and run into the
inevitable issues, like waiting days to expand raid arrays whilst
running the risk of disk
On Thu, Oct 15, 2015 at 3:11 PM, audio muze wrote:
> Rebuilds and/or expanding the array should be pretty quick given only
> actual data blocks are written on rebuild or expansion as opposed to
> traditional raid systems that write out the entire array.
While that might be
On Tue, Oct 13, 2015 at 11:21:49PM +0200, Carmine Paolino wrote:
> I have a home server with 3 hard drives that I added to the same btrfs
> filesystem. Several hours ago I ran `btrfs balance start -dconvert=raid0
> /` and as soon as I ran `btrfs fi show /` I lost my ssh connection to
> the
When creating small Btrfs filesystem instances (i.e. filesystem size <= 1GiB),
mkfs.btrfs fails if both sectorsize and nodesize are specified on the command
line and sectorsize != nodesize, since mixed block groups involve both data
and metadata blocks sharing the same block group. This is an
On Wed, Oct 14, 2015 at 01:41:23PM -0400, Anna Schumaker wrote:
> > NAK. I think this is a bad idea in general and will only be convinced
> > by a properly audited actual implementation. And even then with a flag
> > where the file system specifically needs to opt into this behavior instead
> > of
On 2015-10-14 05:13, Hugo Mills wrote:
On Wed, Oct 14, 2015 at 05:08:17AM +0000, Duncan wrote:
Carmine Paolino posted on Tue, 13 Oct 2015 23:21:49 +0200 as excerpted:
I have a home server with 3 hard drives that I added to the same btrfs
filesystem. Several hours ago I ran `btrfs balance
Chandan Rajendra wrote on 2015/10/14 23:09 +0530:
When creating small Btrfs filesystem instances (i.e. filesystem size <= 1GiB),
mkfs.btrfs fails if both sectorsize and nodesize are specified on the command
line and sectorsize != nodesize, since mixed block groups involve both data
and
covici posted on Sun, 11 Oct 2015 08:29:27 -0400 as excerpted:
> Thanks, in the ext4 world, I have lvm and lots of things using separate
> lvm's. I don't want to go back to partitions; if btrfs is that fragile,
> maybe I should wait a while yet. Or, I could use lvm and put btrfs on
> top of
On Wed, Oct 14, 2015 at 08:29:20AM -0400, Rich Freeman wrote:
> On Wed, Oct 14, 2015 at 1:09 AM, Zygo Blaxell
> wrote:
> >
> > I wouldn't try to use dedup on a kernel older than v4.1 because of these
> > fixes in 4.1 and later:
>
> I would assume that these would
On Wed, Oct 14, 2015 at 03:26:11PM +0800, Qu Wenruo wrote:
> Hi Chris,
>
> This pull is quite a small one, with 2 small and safe patches.
>
> The first one is a fix that went missing in the 4.2 merge window.
> Just remove an empty header I created in the qgroup rework.
>
> The other one was found in
On Wed, Oct 14, 2015 at 3:15 PM, Rich Freeman
wrote:
> This is the main thing that has kept me away from zfs - you can't
> modify a vdev, like you can with an md array or btrfs.
A possible workaround is ZoL (ZFS on Linux) used as a GlusterFS brick.
For that matter,
Sjoerd posted on Wed, 14 Oct 2015 22:19:50 +0200 as excerpted:
> Is RAID6 still considered too unstable to use in production?
> The latest I could find about a test scenario is more than a year ago
> (http://marc.merlins.org/perso/btrfs/post_2014-03-23_Btrfs-Raid5-Status.html)
>
> I
See the other recent thread on the list "RAID6 stable enough for production?"
A lot of your questions have already been answered in recent previous threads.
While there are advantages to Btrfs raid56, there are some missing
parts that make it incomplete and possibly unworkable for certain use
On Thu, 15 Oct 2015 06:11:49 +0200
audio muze wrote:
> Before I go down this road I'd appreciate thoughts/ suggestions/
> alternatives? Have I left anything out? Most importantly is btrfs
> raid6 now stable enough to use in this fashion?
I would suggest going with Btrfs
Thanks Chris
I should've browsed recent threads, my apologies. Terribly
frustrating, though, that the issues you refer to aren't documented in
the btrfs wiki. Reading the wiki, one is led to believe that the only
real issue is the write hole that can occur as a result of a power
loss. There I
Thanks Roman, but I don't have the appetite to use mdadm, have the
array take forever to build, and take on yet another set of risks only to
ultimately migrate from mdadm to btrfs when raid6 is stable. It seems
to me that the simplest option at present is probably to use each disk
separately, formatted
On Wed, Oct 14, 2015 at 11:53:45AM -0700, Andy Lutomirski wrote:
> Would copy_file_range with the reflink option removed still be
> permitted to link blocks on supported filesystems (btrfs and maybe
> XFS)?
Absolutely. Unless the COPY_FALLOCATE or whatever we call it option is
specified of
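In the meantime, sparseness-preserving copies are already possible from userspace with GNU `cp`; a minimal sketch (file names are my own; coreutils is assumed, and `--sparse=always` re-creates holes on the destination by detecting zero runs, independent of filesystem clone support):

```shell
# A 1 MiB hole followed by a short run of real data.
truncate -s 1M holey.img
printf 'tail data' >> holey.img

# --sparse=always punches holes in the destination wherever the
# source contains runs of zeroes, so the copy stays sparse.
cp --sparse=always holey.img copy.img

# Apparent sizes match; allocated blocks of the copy stay small.
stat -c '%s' holey.img copy.img
du -k holey.img copy.img
```

This is roughly what a full-copy mode of the proposed syscall would need to do in-kernel, minus the round trips through userspace buffers.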