Resending, hopefully correct formatting.
As the title suggests, running the df command on a subvolume doesn't return a
filesystem. I'm not sure where the problem lies or if anyone else has noticed
this. Some programs fail to detect free space as a result.
Example for clarification:
kyle@home:~$ sudo mount -o subvol=@data /mnt/btrfs/
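For what it's worth, df works from the mount table, while most programs ask statfs(2) on a path, which is why some tools cope with subvolume mounts and others don't. A rough sketch of the path-based query using GNU stat (the path is just an example):

```shell
# Free space the statfs way: works on any path, including inside a
# btrfs subvolume mount, and reports whole-filesystem figures.
avail_blocks=$(stat -f -c '%a' /)
block_size=$(stat -f -c '%S' /)
echo "free bytes: $(( avail_blocks * block_size ))"
```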
> -Original Message-
> From: linux-btrfs-ow...@vger.kernel.org [mailto:linux-btrfs-
> ow...@vger.kernel.org] On Behalf Of Austin S. Hemmelgarn
> Sent: Thursday, September 01, 2016 6:18 AM
> To: linux-btrfs@vger.kernel.org
> Subject: Re: your mail
>
> On 2016-09-01 03:44, M G Berberich wrote:
I'll preface this with the fact that I'm just a user and am only posing a
question for a possible enhancement to btrfs.
I'm quite sure it isn't currently allowed but would it be possible to set a
failing device as a seed instead of kicking it out of a multi-device
filesystem? This would make t
What issues would arise if ssd mode is activated because the block layer
reports a rotational flag of zero? This happens for me when running btrfs on
bcache. Would it be beneficial to pass the nossd flag?
Thanks,
Kyle
Others might be thinking this too, so I'd better ask:
Does this just read the first copy in the case of dup, raid1, etc. and plow
on? I'm not sure how you would handle a mismatch due to a hardware error.
Perhaps read all the copies and create another subvolume containing the
mismatched copies?
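The scheme floated above (read every copy, shelve the disagreeing ones instead of plowing on) can be sketched in plain shell; the file names and layout are mine for illustration, not anything btrfs restore actually does:

```shell
# Sketch: compare two "copies" of the same block and set the odd one
# aside for later inspection rather than silently picking one.
work=$(mktemp -d)
printf 'hello' > "$work/copy1"
printf 'hellO' > "$work/copy2"            # simulated bad mirror
if ! cmp -s "$work/copy1" "$work/copy2"; then
    mkdir -p "$work/mismatched"
    cp "$work/copy2" "$work/mismatched/"  # shelve the disagreeing copy
fi
ls "$work/mismatched"
```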
Thanks,
>> If it is unknown, which of these options have been used at btrfs
>> creation time - is it possible to check the state of these options
>> afterwards on a mounted or unmounted filesystem?
>>
> I don't think there is a specific tool for doing this, but some of them
> do show up in dmesg, for example
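For what it's worth, btrfs-progs can also dump the feature bits recorded in the superblock directly; a sketch (the device path is a placeholder, and on older progs the tool was a separate btrfs-show-super binary):

```
# feature flags chosen at mkfs time live in the superblock
sudo btrfs inspect-internal dump-super /dev/sdX | grep -i flags
# mount-time messages also land in the kernel log
dmesg | grep -i btrfs
```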
> From: li...@colorremedies.com
> Date: Tue, 16 Sep 2014 11:26:16 -0600
>
>
> On Sep 16, 2014, at 10:51 AM, Mark Murawski
> wrote:
>
>>
>> Playing around with this filesystem I hot-removed a device from the
>> array and put in a replacement.
>>
>> Label: 'Root' uuid: d71404d4-468e-47d5-8f06-3b65f
> From: dste...@suse.cz
> To: linux-btrfs@vger.kernel.org
> CC: dste...@suse.cz
> Subject: [PATCH] btrfs-progs: mkfs: remove experimental tag
> Date: Thu, 31 Jul 2014 14:21:34 +0200
>
> Make it consistent with kernel status and documentation.
>
> Signed-off-by:
> Date: Tue, 29 Jul 2014 11:18:17 +0900
> From: takeuchi_sat...@jp.fujitsu.com
> To: kylega...@hotmail.com; linux-btrfs@vger.kernel.org
> Subject: Re: [PATCH 2/2] btrfs-progs: Unify the messy error message formats
>
> Hi Kyle,
>
>
Small wording error inline below:
> Date: Fri, 25 Jul 2014 15:17:05 +0900
> From: takeuchi_sat...@jp.fujitsu.com
> To: linux-btrfs@vger.kernel.org
> Subject: [PATCH 2/2] btrfs-progs: Unify the messy error message formats
>
> From: Satoru Takeuchi
>
> - The
>
> Then there's raid10, which takes more drives and is faster, but is still
> limited to two mirrors. But while I haven't actually used raid10 myself,
> I do /not/ believe it's limited to pair-at-a-time additions. I believe
> it'll take, for instance, five devices just fine, staggering chunk
> allocation.
On Thu, 9 Jan 2014 11:40:20 -0700 Chris Murphy wrote:
>
> On Jan 9, 2014, at 3:42 AM, Hugo Mills wrote:
>
>> On Thu, Jan 09, 2014 at 11:26:26AM +0100, Clemens Eisserer wrote:
>>> Hi,
>>>
>>> I am running write-intensive (well sort of, one write every 10s)
>>> workloads on cheap flash media which pr
On 11/14/2013 11:35 AM, Lutz Vieweg wrote:
>
> On 11/14/2013 06:18 PM, George Mitchell wrote:
>> The read only mount issue is by design. It is intended to make sure you
>> know exactly what is going
>> on before you proceed.
>
> Hmmm... but will a server be able to continue its operation (inclu
being balanced.
Thanks,
Kyle
Tested-by: Kyle Gates
Reported-by: Kyle Gates
Signed-off-by: Liu Bo
Signed-off-by: Miao Xie
---
fs/btrfs/relocation.c | 44
1 file changed, 44 insertions(+)
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index 39
The easiest and cleanest one. With this, updating the fs/file tree will at
least make a delayed ref if the file extent is really shared by several
parents; we can make nocow happy again without having to check the confusing
last_snapshot.
Works here. Extents are stable after a balance.
Thanks,
Kyle
Tested-by: Kyle Gates
On Tue, May 28, 2013, Liu Bo wrote:
On Tue, May 28, 2013 at 09:22:11AM -0500, Kyle Gates wrote:
> From: Liu Bo
>
>Subject: [PATCH] Btrfs: fix broken nocow after a normal balance
>
[...]
Sorry for the long wait in replying.
This patch was unsuccessful in fixing the problem (on m
file extent's generation while walking relocated
file extents in data reloc root, and use file extent's generation
instead for checking if we have cross refs for the file extent.
That way we can make nocow happy again and have no impact on others.
Reported-by: Kyle Gates
Signed-off-by:
On Fri, 17 May 2013 15:04:45 +0800, Liu Bo wrote:
On Thu, May 16, 2013 at 02:11:41PM -0500, Kyle Gates wrote:
and mounted with autodefrag
Am I actually just seeing large ranges getting split while remaining
contiguous on disk? This would imply crc calculation on the two
outside ranges. Or
On Fri, May 10, 2013 Liu Bo wrote:
On Thu, May 09, 2013 at 03:41:49PM -0500, Kyle Gates wrote:
I'll preface that I'm running Ubuntu 13.04 with the standard 3.8
series kernel so please disregard if this has been fixed in higher
versions. This is on a btrfs RAID1 with 3 then 4 disks.
I'll preface that I'm running Ubuntu 13.04 with the standard 3.8 series
kernel so please disregard if this has been fixed in higher versions. This
is on a btrfs RAID1 with 3 then 4 disks.
My use case is to set the nocow 'C' flag on a directory and copy in some
files, then make lots of writes (
> So I have ended up in a state where I can't delete files with rm.
>
> the error I get is no space on device. however I'm not even close to empty.
> /dev/sdb1 38G 27G 9.5G 75%
> there is about 800k files/dirs in this filesystem
>
> extra strange is that I can in another directory create and delete
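ENOSPC while df still shows gigabytes free is the classic sign of exhausted metadata chunks on btrfs; df only sees the data side. A hedged sketch of the usual way to inspect and reclaim (the mountpoint is an example, and the balance needs root):

```
# show the data/metadata/system breakdown that plain df can't see
btrfs filesystem df /mnt
# rewrite nearly-empty data chunks so their space can be reused
btrfs balance start -dusage=5 /mnt
```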
> > Wade, thanks.
> >
> > Yes, with the preallocated extent I saw the behavior you describe, and
> > it makes perfect sense to alloc a new EXTENT_DATA in this case.
> > In my case, I did another simple test:
> >
> > Before:
> > item 4 key (257 INODE_ITEM 0) itemoff 3593 itemsize 160
> > inode gener
> To: linux-btrfs@vger.kernel.org
> From: samtyg...@yahoo.co.uk
> Subject: Re: problem replacing failing drive
> Date: Thu, 25 Oct 2012 22:02:23 +0100
>
> On 22/10/12 10:07, sam tygier wrote:
> > hi,
> >
> > I have a 2 drive btrfs raid set up. It was created
I'm currently running a 1GB raid1 btrfs /boot with no problems.
Also, I think the current grub2 has lzo support.
-Original Message-
From: Fajar A. Nugraha
Sent: Sunday, August 12, 2012 5:48 PM
To: Daniel Pocock
Cc: linux-btrfs@vger.kernel.org
Subject: Re: raw partition or LV for btrfs?
On Mon, Jul 30, 2012 at 11:58 PM, Liu Bo wrote:
> On 07/31/2012 12:35 PM, Kyle Gates wrote:
>
>> On Mon, Jul 30, 2012 at 9:00 PM, Liu Bo wrote:
>>> On 07/31/2012 03:55 AM, Kyle Gates wrote:
>>>
>>>> I have a 3 disk raid1 filesystem mounted with nodatacow
On Mon, Jul 30, 2012 at 9:00 PM, Liu Bo wrote:
> On 07/31/2012 03:55 AM, Kyle Gates wrote:
>
>> I have a 3 disk raid1 filesystem mounted with nodatacow. I have a
>> folder in said filesystem with the 'C' NOCOW & 'Z' Not_Compressed
>> flags set f
I set the C (NOCOW) and z (Not_Compressed) flags on a folder but the extent
counts of files contained there keep increasing.
Said files are large and frequently modified but not changing in size. This
does not happen when the filesystem is mounted with nodatacow.
I'm using this as a workaround
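For the record, the usual caveats with this workaround: the NOCOW flag only takes effect on files created after it is set on the directory, and a snapshot still forces one CoW per overwritten extent afterwards. A sketch of the setup order (paths are examples):

```
mkdir -p /mnt/pool/nocow-data
chattr +C /mnt/pool/nocow-data        # 'C' must be set before files exist
cp --sparse=never big.db /mnt/pool/nocow-data/
lsattr /mnt/pool/nocow-data/big.db    # should list the C attribute
```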
> > Actually it is possible. Check out David's response to my question from
> > some time ago:
> > http://permalink.gmane.org/gmane.comp.file-systems.btrfs/14227
>
> this was a quick aid, please see attached file for an updated tool to set
> the file flags, now added 'z' for NOCOMPRESS flag, and
I've been having good luck with my /boot on a separate 1GB RAID1 btrfs
filesystem using grub2 (2 disks only! I wouldn't try it with 3). I
should note, however, that I'm NOT using compression on this volume
because if I remember correctly it may not play well with grub (maybe
that was just lzo though).
> > I have multiple subvolumes on the same filesystem that are mounted with
> > different options in fstab.
> > The problem is the mount options for subsequent subvolume mounts seem to be
> > ignored as reflected in /proc/mounts.
>
> The output of 'mount' and /proc/mounts is different. mount tak
When compiling btrfs-progs (2011-12-01) from
git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git on
3.2.1-030201-generic #201201121644 SMP Thu Jan 12 21:53:24 UTC 2012 i686 athlon
i386 GNU/Linux
I get the following warnings:
ls btrfs_cmds.c
btrfs_cmds.c
gcc -Wp,-MMD,./.btrfs_cm
Greetings all,
I have multiple subvolumes on the same filesystem that are mounted with
different options in fstab.
The problem is the mount options for subsequent subvolume mounts seem to be
ignored as reflected in /proc/mounts.
$ cat /etc/fstab | grep mnt
UUID= /mnt/a btrfs
subvol=a,defaults,
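As I understand it, most btrfs-specific mount options are filesystem-wide, so whichever mount happens first wins and later subvol= mounts inherit its options; only VFS-level per-mount options can truly differ. A sketch of what to expect (UUID left elided as in the report above):

```
# /etc/fstab - sketch; btrfs-specific options (compress, autodefrag, ...)
# are taken from the first mount of this filesystem, while VFS options
# (ro, noatime, nodev, ...) can vary per subvolume mount
UUID=<fs-uuid>  /mnt/a  btrfs  subvol=a,noatime           0  0
UUID=<fs-uuid>  /mnt/b  btrfs  subvol=b,noatime,compress  0  0
```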