BTRFS and databases

2018-07-31 Thread MegaBrutal
Hi all,

I know it's a decade-old question, but I'd like to hear your
thoughts of today. By now, I have become a heavy BTRFS user. I use
BTRFS almost everywhere, except where it obviously provides no
benefit (e.g. /var/log, /boot). At home, my desktop, laptop and
server computers all run mainly on BTRFS, with only a few file
systems on ext4. I have even installed BTRFS on corporate production
systems (in those cases the systems were mainly on ext4, but some
specific file systems exploited BTRFS features).

But there is still one question that I can't get over: if you store a
database (e.g. MySQL), would you prefer having a BTRFS volume mounted
with nodatacow, or would you simply use ext4?

I know that with nodatacow I give up most of the benefits of BTRFS
(the very CoW nature that is a blessing elsewhere is a drawback for
databases). But are there any advantages to sticking with BTRFS for a
database even with CoW disabled, or should I just return to the old
and reliable ext4 for those applications?


Kind regards,
MegaBrutal
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Can rsync use reflink to synchronize directories?

2018-01-17 Thread MegaBrutal
Hi all,

Does rsync support copying files with reflinks on a local btrfs file
system? Of course, it could only work if the necessary conditions for
reflinking are met, but it would be very useful.
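As far as I can tell, mainline rsync has no reflink option (there have been out-of-tree patches); for a purely local copy, GNU cp can attempt one:

```shell
# --reflink=auto reflinks where the file system supports it (e.g. btrfs,
# same mount) and silently falls back to a regular copy elsewhere:
workdir=$(mktemp -d)
printf 'example data\n' > "$workdir/src"
cp --reflink=auto "$workdir/src" "$workdir/dst"
# --reflink=always would instead fail when a reflink is impossible
# (different file systems, nodatacow files, non-CoW FS, etc.).
cmp -s "$workdir/src" "$workdir/dst" && echo identical
```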


Regards,
MegaBrutal


Re: File system is oddly full after kernel upgrade, balance doesn't help

2017-01-28 Thread MegaBrutal
Hello,

Of course I can't retrieve the data from before the balance, but here
is the data from now:

root@vmhost:~# btrfs fi show /tmp/mnt/curlybrace
Label: 'curlybrace'  uuid: f471bfca-51c4-4e44-ac72-c6cd9ccaf535
Total devices 1 FS bytes used 752.38MiB
devid1 size 2.00GiB used 1.90GiB path
/dev/mapper/vmdata--vg-lxc--curlybrace

root@vmhost:~# btrfs fi df /tmp/mnt/curlybrace
Data, single: total=773.62MiB, used=714.82MiB
System, DUP: total=8.00MiB, used=16.00KiB
Metadata, DUP: total=577.50MiB, used=37.55MiB
GlobalReserve, single: total=512.00MiB, used=0.00B
root@vmhost:~# btrfs fi usage /tmp/mnt/curlybrace
Overall:
Device size:   2.00GiB
Device allocated:   1.90GiB
Device unallocated: 103.38MiB
Device missing: 0.00B
Used: 789.94MiB
Free (estimated): 162.18MiB(min: 110.50MiB)
Data ratio:  1.00
Metadata ratio:  2.00
Global reserve: 512.00MiB(used: 0.00B)

Data,single: Size:773.62MiB, Used:714.82MiB
   /dev/mapper/vmdata--vg-lxc--curlybrace 773.62MiB

Metadata,DUP: Size:577.50MiB, Used:37.55MiB
   /dev/mapper/vmdata--vg-lxc--curlybrace   1.13GiB

System,DUP: Size:8.00MiB, Used:16.00KiB
   /dev/mapper/vmdata--vg-lxc--curlybrace  16.00MiB

Unallocated:
   /dev/mapper/vmdata--vg-lxc--curlybrace 103.38MiB


So... if I sum the data, metadata, and global reserve, I see why only
~170 MB is left. I have no idea, however, why the global reserve
sneaked up to 512 MB on such a small file system, or how I could
resolve the situation. Any ideas?


MegaBrutal



2017-01-28 7:46 GMT+01:00 Duncan <1i5t5.dun...@cox.net>:
> MegaBrutal posted on Fri, 27 Jan 2017 19:45:00 +0100 as excerpted:
>
>> Hi,
>>
>> Not sure if it was caused by the upgrade, but I only encountered this
>> problem after I upgraded to Ubuntu Yakkety, which comes with a 4.8
>> kernel.
>> Linux vmhost 4.8.0-34-generic #36-Ubuntu SMP Wed Dec 21 17:24:18 UTC
>> 2016 x86_64 x86_64 x86_64 GNU/Linux
>>
>> This is the 2nd file system that has shown these symptoms, so I think
>> it's more than happenstance. I don't remember exactly what I did with
>> the first one; if I remember correctly, I somehow fixed it with
>> balance, but balance doesn't help with this one.
>>
>> FS state before any attempts to fix:
>> Filesystem  1M-blocks   Used Available Use% Mounted on
>> [...]curlybrace  1024   1024 0 100% /tmp/mnt/curlybrace
>>
>> Resized the LV and ran "btrfs filesystem resize max /tmp/mnt/curlybrace":
>> [...]curlybrace  2048   1303 0 100% /tmp/mnt/curlybrace
>>
>> Notice how the usage magically jumped up to 1303 MB, and although the
>> FS size is 2048 MB, the usage is still displayed as 100%.
>>
>> Tried full balance (other options with -dusage had no result):
>> root@vmhost:~# btrfs balance start -v /tmp/mnt/curlybrace
>
>> Starting balance without any filters.
>> ERROR: error during balancing '/tmp/mnt/curlybrace':
>> No space left on device
>
>> No space left on device? How?
>>
>> But it changed the situation:
>> [...]curlybrace  2048   1302   190  88% /tmp/mnt/curlybrace
>>
>> This is still not acceptable. I need to recover at least 50% free
>> space (since I doubled the FS size).
>>
>> A 2nd balance attempt resulted in this:
>> [...]curlybrace  2048   1302   162  89% /tmp/mnt/curlybrace
>>
>> So... it became slightly worse.
>>
>> What's going on? How can I fix the file system to show real data?
>
> Something seems off, yes, but...
>
> https://btrfs.wiki.kernel.org/index.php/FAQ
>
> Reading the whole thing will likely be useful, but especially 1.3/1.4 and
> 4.6-4.9 discussing the problem of space usage, reporting, and (primarily
> in some of the other space related FAQs beyond the specific ones above)
> how to try and fix it when space runs out, on btrfs.
>
> If you read them before, read them again, because you didn't post the
> btrfs free-space reports covered in 4.7, instead posting what appears to
> be the standard (non-btrfs) df report, which for all the reasons
> explained in the FAQ, is at best only an estimate on btrfs.  That
> estimate is obviously behaving unexpectedly in your case, but without the
> btrfs specific reports, it's nigh impossible to even guess with any
> chance at accuracy what's going on, or how to fix it.
>
> A WAG would be that part of the problem might be that you were into
> global reserve before the resize, so after the filesystem got more space
> to use, the first thing it did was unload that global reserve usage,
> thereby immediately upping apparent usage.  That might explain that

File system is oddly full after kernel upgrade, balance doesn't help

2017-01-27 Thread MegaBrutal
Hi,

Not sure if it was caused by the upgrade, but I only encountered this
problem after I upgraded to Ubuntu Yakkety, which comes with a 4.8
kernel.
Linux vmhost 4.8.0-34-generic #36-Ubuntu SMP Wed Dec 21 17:24:18 UTC
2016 x86_64 x86_64 x86_64 GNU/Linux

This is the 2nd file system that has shown these symptoms, so I think
it's more than happenstance. I don't remember exactly what I did with
the first one; if I remember correctly, I somehow fixed it with
balance, but balance doesn't help with this one.

FS state before any attempts to fix:
Filesystem 1M-blocks   Used Available Use%
Mounted on
/dev/mapper/vmdata--vg-lxc--curlybrace  1024   1024 0 100%
/tmp/mnt/curlybrace

Resized the LV and ran "btrfs filesystem resize max /tmp/mnt/curlybrace":
/dev/mapper/vmdata--vg-lxc--curlybrace  2048   1303 0 100%
/tmp/mnt/curlybrace

Notice how the usage magically jumped up to 1303 MB, and although the
FS size is 2048 MB, the usage is still displayed as 100%.

Tried full balance (other options with -dusage had no result):
root@vmhost:~# btrfs balance start -v /tmp/mnt/curlybrace
Dumping filters: flags 0x7, state 0x0, force is off
  DATA (flags 0x0): balancing
  METADATA (flags 0x0): balancing
  SYSTEM (flags 0x0): balancing
WARNING:

Full balance without filters requested. This operation is very
intense and takes potentially very long. It is recommended to
use the balance filters to narrow down the balanced data.
Use 'btrfs balance start --full-balance' option to skip this
warning. The operation will start in 10 seconds.
Use Ctrl-C to stop it.
10 9 8 7 6 5 4 3 2 1
Starting balance without any filters.
ERROR: error during balancing '/tmp/mnt/curlybrace': No space left on device
There may be more info in syslog - try dmesg | tail

No space left on device? How?

But it changed the situation:
/dev/mapper/vmdata--vg-lxc--curlybrace  2048   1302   190  88%
/tmp/mnt/curlybrace

This is still not acceptable. I need to recover at least 50% free
space (since I doubled the FS size).

A 2nd balance attempt resulted in this:
/dev/mapper/vmdata--vg-lxc--curlybrace  2048   1302   162  89%
/tmp/mnt/curlybrace

So... it became slightly worse.

What's going on? How can I fix the file system to show real data?


Regards,
MegaBrutal


Re: "No space left on device" and balance doesn't work

2016-08-09 Thread MegaBrutal
2016-06-03 14:43 GMT+02:00 Austin S. Hemmelgarn :
>
> Also, since you're on a new enough kernel, try 'lazytime' in the mount 
> options as well, this defers all on-disk timestamp updates for up to 24 hours 
> or until the inode gets written out anyway, but keeps the updated info in 
> memory.  The only downside to this is that mtimes might not be correct after 
> an unclean shutdown, but most software will have no issues with this.
>

Hi all,

Sorry for reviving this old thread, and this is probably not the
best place to ask... but I added the "noatime" option in fstab,
restarted the system, and now I'd like to try "lazytime" too (I like
the idea of having proper atimes with delayed writing to disk). So
now I'd just like to test the "lazytime" mount option without a
restart.

I remounted the file system like this:

mount -o remount,lazytime /

But the FS still has the "noatime" mount option, which I guess
renders "lazytime" ineffective. I thought they were supposed to be
mutually exclusive, so I'm actually surprised that I can have both
mount options at the same time.

Now my mount looks like this:

/dev/mapper/centrevg-rootlv on / type btrfs
(rw,noatime,lazytime,space_cache,subvolid=257,subvol=/@)

I also tried to explicitly add "atime" to negate "noatime" (man mount
says "atime" is the option to disable "noatime"), like this:

mount -o remount,atime,lazytime /

But the "noatime" option still stays. Why? Is this a BTRFS-specific
issue, or does it reside in another layer?

By the way, is it valid to mount BTRFS subvolumes with different atime
policies? Then how do child subvolumes behave?
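For what it's worth, my understanding (an assumption on my part, not verified against the mount sources) is that "atime" is not a flag of its own: it merely means "do not set noatime", and a remount inherits the previously recorded options, so noatime survives. An explicit policy seems to be needed to override it:

```shell
# Force classic atime updates together with lazytime:
mount -o remount,strictatime,lazytime /
# ...or the kernel default, relative atime:
mount -o remount,relatime,lazytime /
# Verify which options are actually in effect:
findmnt -no OPTIONS /
```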


Re: "No space left on device" and balance doesn't work

2016-06-02 Thread MegaBrutal
2016-06-02 0:22 GMT+02:00 Henk Slager :
> What is the kernel version used?
> Is the fs on a mechanical disk or SSD?
> What are the mount options?
> How old is the fs?

Linux 4.4.0-22-generic (Ubuntu 16.04).
Mechanical disks in LVM.
Mount: /dev/mapper/centrevg-rootlv on / type btrfs
(rw,relatime,space_cache,subvolid=257,subvol=/@)
I don't know how to retrieve the exact FS age, but it was created in
August 2014.

Snapshots (their names encode their creation dates):

ID 908 gen 487349 top level 5 path @-snapshot-2016050301
ID 909 gen 488849 top level 5 path @-snapshot-2016050401
ID 910 gen 490313 top level 5 path @-snapshot-2016050501
ID 911 gen 491763 top level 5 path @-snapshot-2016050601
ID 912 gen 493399 top level 5 path @-snapshot-2016050702
ID 913 gen 494996 top level 5 path @-snapshot-2016050802
ID 914 gen 496495 top level 5 path @-snapshot-2016050902
ID 915 gen 498094 top level 5 path @-snapshot-2016051005
ID 916 gen 499688 top level 5 path @-snapshot-2016051102
ID 917 gen 501308 top level 5 path @-snapshot-2016051201
ID 918 gen 503375 top level 5 path @-snapshot-2016051402
ID 919 gen 504356 top level 5 path @-snapshot-2016051501
ID 920 gen 505890 top level 5 path @-snapshot-2016051601
ID 921 gen 506901 top level 5 path @-snapshot-2016051701
ID 922 gen 507313 top level 5 path @-snapshot-2016051802
ID 923 gen 507712 top level 5 path @-snapshot-2016051901
ID 924 gen 508057 top level 5 path @-snapshot-2016052001
ID 925 gen 508882 top level 5 path @-snapshot-2016052101
ID 926 gen 509241 top level 5 path @-snapshot-2016052201
ID 927 gen 509618 top level 5 path @-snapshot-2016052301
ID 928 gen 510277 top level 5 path @-snapshot-2016052402
ID 929 gen 511357 top level 5 path @-snapshot-2016052502
ID 930 gen 512125 top level 5 path @-snapshot-2016052602
ID 931 gen 513292 top level 5 path @-snapshot-2016052701
ID 932 gen 515766 top level 5 path @-snapshot-2016052802
ID 933 gen 517349 top level 5 path @-snapshot-2016052904
ID 934 gen 519004 top level 5 path @-snapshot-2016053002
ID 935 gen 519500 top level 5 path @-snapshot-2016053102
ID 936 gen 519847 top level 5 path @-snapshot-2016060101
ID 937 gen 521829 top level 5 path @-snapshot-2016060201

Removing old snapshots is the most feasible solution, but I can also
increase the FS size. That's easy since it's on LVM and there is
plenty of space in the volume group.
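A sketch of automating that cleanup (only the selection logic is shown runnable; the snapshot names sort chronologically because they embed the date):

```shell
# Select the N oldest @-snapshot-* entries from `btrfs subvolume list /`
# output by taking the last field of each line and sorting.
oldest_snapshots() {
    awk '{ print $NF }' | grep '^@-snapshot-' | sort | head -n "$1"
}

# Example with two lines of the listing above:
sample='ID 908 gen 487349 top level 5 path @-snapshot-2016050301
ID 937 gen 521829 top level 5 path @-snapshot-2016060201'
printf '%s\n' "$sample" | oldest_snapshots 1   # prints @-snapshot-2016050301

# Actual deletion would then be something like (destructive, handle with care):
#   btrfs subvolume list / | oldest_snapshots 5 | \
#       while read -r s; do btrfs subvolume delete "/$s"; done
```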

I should probably rewrite my alert script to check btrfs fi show
instead of plain df.
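A sketch of such a check, parsing the devid line of `btrfs fi show` (field positions are assumed from the btrfs-progs output quoted in this thread, and size/used are assumed to carry the same unit suffix):

```shell
#!/bin/sh
# Alert on chunk allocation rather than on df's estimate.
alloc_pct() {
    # $1 = captured output of `btrfs fi show <mountpoint>`
    printf '%s\n' "$1" | awk '/devid/ {
        for (i = 1; i <= NF; i++) {
            if ($i == "size") size = $(i + 1)
            if ($i == "used") used = $(i + 1)
        }
        gsub(/[A-Za-z]+/, "", size)   # strip the GiB/MiB suffix
        gsub(/[A-Za-z]+/, "", used)
        printf "%d\n", used / size * 100
    }'
}

# Example with the output quoted earlier in this thread:
sample="Label: 'RootFS'  uuid: 3f002b8d-8a1f-41df-ad05-e3c91d7603fb
Total devices 1 FS bytes used 15.42GiB
devid    1 size 20.00GiB used 20.00GiB path /dev/mapper/centrevg-rootlv"
alloc_pct "$sample"   # prints 100: all chunks allocated, so the FS is effectively full
```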


Re: "No space left on device" and balance doesn't work

2016-06-01 Thread MegaBrutal
Hi Peter,

I tried. I either get "Done, had to relocate 0 out of 33 chunks" or
"ERROR: error during balancing '/': No space left on device", and
nothing changes.


2016-06-01 22:29 GMT+02:00 Peter Becker <floyd@gmail.com>:
> try this:
>
> btrfs fi balance start -musage=0 /
> btrfs fi balance start -dusage=0 /
>
> btrfs fi balance start -musage=1 /
> btrfs fi balance start -dusage=1 /
>
> btrfs fi balance start -musage=5 /
> btrfs fi balance start -musage=10 /
> btrfs fi balance start -musage=20 /
>
>
> btrfs fi balance start -dusage=5 /
> btrfs fi balance start -dusage=10 /
> btrfs fi balance start -dusage=20 /
> 
>
> 2016-06-01 20:30 GMT+02:00 MegaBrutal <megabru...@gmail.com>:
>> Hi all,
>>
>> I have a 20 GB file system and df says I have about 2.6 GB free space,
>> yet I can't do anything on the file system because I get "No space
>> left on device" errors. I read that balance may help to remedy the
>> situation, but it actually doesn't.
>>
>>
>> Some data about the FS:
>>
>>
>> root@ReThinkCentre:~# df -h /
>> Filesystem                   Size  Used Avail Use% Mounted on
>> /dev/mapper/centrevg-rootlv   20G   18G  2.6G  88% /
>>
>> root@ReThinkCentre:~# btrfs fi show /
>> Label: 'RootFS'  uuid: 3f002b8d-8a1f-41df-ad05-e3c91d7603fb
>> Total devices 1 FS bytes used 15.42GiB
>> devid1 size 20.00GiB used 20.00GiB path 
>> /dev/mapper/centrevg-rootlv
>>
>> root@ReThinkCentre:~# btrfs fi df /
>> Data, single: total=16.69GiB, used=14.14GiB
>> System, DUP: total=32.00MiB, used=16.00KiB
>> Metadata, DUP: total=1.62GiB, used=1.28GiB
>> GlobalReserve, single: total=352.00MiB, used=0.00B
>>
>> root@ReThinkCentre:~# btrfs version
>> btrfs-progs v4.4
>>
>>
>> This happens when I try to balance:
>>
>> root@ReThinkCentre:~# btrfs fi balance start -dusage=66 /
>> Done, had to relocate 0 out of 33 chunks
>> root@ReThinkCentre:~# btrfs fi balance start -dusage=67 /
>> ERROR: error during balancing '/': No space left on device
>> There may be more info in syslog - try dmesg | tail
>>
>>
>> "dmesg | tail" does not show anything related to this.
>>
>> It is important to note that the file system currently has 32
>> snapshots of /, and snapshots taking up all the free space is a
>> plausible explanation. Deleting some of the oldest snapshots or
>> increasing the file system size would probably help. However, I'm
>> still curious: if the file system is full, why does df show free
>> space, and how could I have detected the situation before resorting
>> to those remedies? I actually have an alert set up that triggers when
>> FS usage reaches 90%, so I know when to delete some old snapshots. It
>> has worked so far: I cleaned snapshots at 90%, FS usage fell back,
>> everyone was happy. But now the alert didn't even trigger, because df
>> reports only 88% usage, so the FS shouldn't be full yet.
>>
>>
>> Best regards and kecske,
>> MegaBrutal


"No space left on device" and balance doesn't work

2016-06-01 Thread MegaBrutal
Hi all,

I have a 20 GB file system and df says I have about 2.6 GB free space,
yet I can't do anything on the file system because I get "No space
left on device" errors. I read that balance may help to remedy the
situation, but it actually doesn't.


Some data about the FS:


root@ReThinkCentre:~# df -h /
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/centrevg-rootlv   20G   18G  2.6G  88% /

root@ReThinkCentre:~# btrfs fi show /
Label: 'RootFS'  uuid: 3f002b8d-8a1f-41df-ad05-e3c91d7603fb
Total devices 1 FS bytes used 15.42GiB
devid1 size 20.00GiB used 20.00GiB path /dev/mapper/centrevg-rootlv

root@ReThinkCentre:~# btrfs fi df /
Data, single: total=16.69GiB, used=14.14GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=1.62GiB, used=1.28GiB
GlobalReserve, single: total=352.00MiB, used=0.00B

root@ReThinkCentre:~# btrfs version
btrfs-progs v4.4


This happens when I try to balance:

root@ReThinkCentre:~# btrfs fi balance start -dusage=66 /
Done, had to relocate 0 out of 33 chunks
root@ReThinkCentre:~# btrfs fi balance start -dusage=67 /
ERROR: error during balancing '/': No space left on device
There may be more info in syslog - try dmesg | tail


"dmesg | tail" does not show anything related to this.

It is important to note that the file system currently has 32
snapshots of /, and snapshots taking up all the free space is a
plausible explanation. Deleting some of the oldest snapshots or
increasing the file system size would probably help. However, I'm
still curious: if the file system is full, why does df show free
space, and how could I have detected the situation before resorting
to those remedies? I actually have an alert set up that triggers when
FS usage reaches 90%, so I know when to delete some old snapshots. It
has worked so far: I cleaned snapshots at 90%, FS usage fell back,
everyone was happy. But now the alert didn't even trigger, because df
reports only 88% usage, so the FS shouldn't be full yet.


Best regards and kecske,
MegaBrutal


Re: PROBLEM: #89121 BTRFS mixes up mounted devices with their snapshots

2014-12-04 Thread MegaBrutal
2014-12-04 6:15 GMT+01:00 Duncan 1i5t5.dun...@cox.net:

 Which is why I'm running an initramfs for the first time since I've
 switched to btrfs raid1 mode root, as I quit with initrds back before
 initramfs was an option.  An initramfs appended to the kernel image beats
 a separate initrd, but I'd still love to see the kernel commandline
 parsing fixed so it broke at the correct = in rootflags=device= (which
 seemed to be the problem, the kernel then didn't seem to recognize
 rootflags at all, as it was apparently seeing it as a parameter called
 rootflags=device, instead of rootflags), so I could be rid of the
 initramfs again.


Are you sure it isn't fixed? At least, it parses rootflags=subvol=@
fine, which also has multiple = signs. And the last time I tried
rootflags=device=/dev/mapper/vg-rootlv,subvol=@, it didn't cause any
problems. Though device= shouldn't have an effect in this case anyway,
I didn't get any complaints about it either. (I do use an initrd,
though.)


Re: PROBLEM: #89121 BTRFS mixes up mounted devices with their snapshots

2014-12-02 Thread MegaBrutal
2014-12-02 8:50 GMT+01:00 Goffredo Baroncelli kreij...@inwind.it:
 On 12/02/2014 01:15 AM, MegaBrutal wrote:
 2014-12-02 0:24 GMT+01:00 Robert White rwh...@pobox.com:
 On 12/01/2014 02:10 PM, MegaBrutal wrote:

 Since having duplicate UUIDs on devices is not a problem for me since
 I can tell them apart by LVM names, the discussion is of little
 relevance to my use case. Of course it's interesting and I like to
 read it along, it is not about the actual problem at hand.


 Which is why you use the device= mount option, which would take LVM names
 and which was repeatedly discussed as solving this very problem.

 Once you decide to duplicate the UUIDs with LVM snapshots you take up the
 burden of disambiguating your storage.

 Which is part of why re-reading was suggested as this was covered in some
 depth and _is_ _exactly_ about the problem at hand.

 Nope.

 root@reproduce-1391429:~# cat /proc/cmdline
 BOOT_IMAGE=/vmlinuz-3.18.0-031800rc5-generic
 root=/dev/mapper/vg-rootlv ro
 rootflags=device=/dev/mapper/vg-rootlv,subvol=@

 Observe, device= mount option is added.

 The device= option is needed only in a btrfs multi-volume scenario.
 If you have only one disk, it is not needed.


I know. I only did this as a demonstration for Robert. He insisted it
would certainly solve the problem. Well, it doesn't.



 root@reproduce-1391429:~# ./reproduce-1391429.sh
 #!/bin/sh -v
 lvs
   LV VG   Attr  LSize   Pool Origin Data%  Move Log Copy%  Convert
   rootlv vg   -wi-ao---   1.00g
   swap0  vg   -wi-ao--- 256.00m

 grub-probe --target=device /
 /dev/mapper/vg-rootlv

 grep  /  /proc/mounts
 rootfs / rootfs rw 0 0
 /dev/dm-1 / btrfs rw,relatime,space_cache 0 0

 lvcreate --snapshot --size=128M --name z vg/rootlv
   Logical volume z created

 lvs
   LV VG   Attr  LSize   Pool Origin Data%  Move Log Copy%  Convert
   rootlv vg   owi-aos--   1.00g
   swap0  vg   -wi-ao--- 256.00m
   z  vg   swi-a-s-- 128.00m  rootlv   0.11

 ls -l /dev/vg/
 total 0
 lrwxrwxrwx 1 root root 7 Dec  2 00:12 rootlv - ../dm-1
 lrwxrwxrwx 1 root root 7 Dec  2 00:12 swap0 - ../dm-0
 lrwxrwxrwx 1 root root 7 Dec  2 00:12 z - ../dm-2

 grub-probe --target=device /
 /dev/mapper/vg-z

 grep  /  /proc/mounts
 rootfs / rootfs rw 0 0
 /dev/dm-2 / btrfs rw,relatime,space_cache 0 0

 What /proc/self/mountinfo contains ?

Before creating snapshot:

15 20 0:15 / /sys rw,nosuid,nodev,noexec,relatime - sysfs sysfs rw
16 20 0:3 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw
17 20 0:5 / /dev rw,relatime - devtmpfs udev
rw,size=241692k,nr_inodes=60423,mode=755
18 17 0:12 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts
rw,gid=5,mode=620,ptmxmode=000
19 20 0:16 / /run rw,nosuid,noexec,relatime - tmpfs tmpfs
rw,size=50084k,mode=755
20 0 0:17 /@ / rw,relatime - btrfs /dev/dm-1 rw,space_cache
<-- THIS!
21 15 0:20 / /sys/fs/cgroup rw,relatime - tmpfs none rw,size=4k,mode=755
22 15 0:21 / /sys/fs/fuse/connections rw,relatime - fusectl none rw
23 15 0:6 / /sys/kernel/debug rw,relatime - debugfs none rw
24 15 0:10 / /sys/kernel/security rw,relatime - securityfs none rw
25 19 0:22 / /run/lock rw,nosuid,nodev,noexec,relatime - tmpfs none
rw,size=5120k
26 19 0:23 / /run/shm rw,nosuid,nodev,relatime - tmpfs none rw
27 19 0:24 / /run/user rw,nosuid,nodev,noexec,relatime - tmpfs none
rw,size=102400k,mode=755
28 15 0:25 / /sys/fs/pstore rw,relatime - pstore none rw
29 20 253:1 / /boot rw,relatime - ext2 /dev/vda1 rw


After creating snapshot:

15 20 0:15 / /sys rw,nosuid,nodev,noexec,relatime - sysfs sysfs rw
16 20 0:3 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw
17 20 0:5 / /dev rw,relatime - devtmpfs udev
rw,size=241692k,nr_inodes=60423,mode=755
18 17 0:12 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts
rw,gid=5,mode=620,ptmxmode=000
19 20 0:16 / /run rw,nosuid,noexec,relatime - tmpfs tmpfs
rw,size=50084k,mode=755
20 0 0:17 /@ / rw,relatime - btrfs /dev/dm-2 rw,space_cache
<-- WTF?!
21 15 0:20 / /sys/fs/cgroup rw,relatime - tmpfs none rw,size=4k,mode=755
22 15 0:21 / /sys/fs/fuse/connections rw,relatime - fusectl none rw
23 15 0:6 / /sys/kernel/debug rw,relatime - debugfs none rw
24 15 0:10 / /sys/kernel/security rw,relatime - securityfs none rw
25 19 0:22 / /run/lock rw,nosuid,nodev,noexec,relatime - tmpfs none
rw,size=5120k
26 19 0:23 / /run/shm rw,nosuid,nodev,relatime - tmpfs none rw
27 19 0:24 / /run/user rw,nosuid,nodev,noexec,relatime - tmpfs none
rw,size=102400k,mode=755
28 15 0:25 / /sys/fs/pstore rw,relatime - pstore none rw
29 20 253:1 / /boot rw,relatime - ext2 /dev/vda1 rw


So it's consistent with what /proc/mounts reports.



 A more important question: is only the value returned by
 /proc/mounts wrong, or is the filesystem content affected
 as well?


I quote my bug report on this:

The information reported in /proc/mounts is certainly bogus; the
origin device is still the one being written, so the kernel does not
actually mix up the devices for write operations, and as such, the
phenomenon does not cause

PROBLEM: #89121 BTRFS mixes up mounted devices with their snapshots

2014-12-01 Thread MegaBrutal
Hi all,

I've reported the bug I previously posted about ("BTRFS messes up
snapshot LV with origin") in the Kernel Bug Tracker:
https://bugzilla.kernel.org/show_bug.cgi?id=89121

Since the other thread went off into theoretical debates about UUIDs
and their generic relation to BTRFS, their everyday use cases, and the
philosophical meaning behind uniqueness of copies and UUIDs; I'd like
to specifically ask you to only post here about the ACTUAL problem at
hand. Don't get me wrong, I find the discussion in the other thread
really interesting and I'm following it, but it is only very remotely
related to the original issue, so please keep it there! If you'd like
to catch up on the actual bug symptoms, please read the bug report
linked above, and (optionally) reproduce the problem yourself!

A virtual machine image on which I've already reproduced the
conditions can be downloaded here:
http://undead.megabrutal.com/kvm-reproduce-1391429.img.xz
(Download size: 113 MB; Unpacked image size: 2 GB.)

Re-tested with mainline kernel 3.18.0-rc7 just today.


Regards,
MegaBrutal


Re: Possible to undo subvol delete?

2014-12-01 Thread MegaBrutal
2014-12-01 14:12 GMT+01:00 Austin S Hemmelgarn ahferro...@gmail.com:

 We might want to consider adding an option to btrfs subvol del to ask for
 confirmation (or make it do so by default and add an option to disable
 asking for confirmation).


I've also noticed that a subvolume can be deleted with an ordinary
"rm -r", just like a directory. I'd consider allowing subvolume
deletions only via explicit "btrfs subvolume delete" commands, with
protection against an ordinary rm. There could also be a tunable FS
feature to allow or disallow ordinary subvolume deletions, settable
with btrfstune. I think a subvolume really deserves to be treated
specially compared to an ordinary directory.

As for undeletion, while I have no idea how to do that, I noticed
subvolumes don't get deleted immediately. With older btrfs tools
(3.12), if I delete a subvolume and then immediately issue "btrfs
subvolume list", the deleted subvolume is still listed with a DELETED
caption, and the allocated space doesn't immediately get freed if I
check with "df -m". It takes a few seconds for the DELETED entry to
disappear and the allocated space to be freed.


Re: Possible to undo subvol delete?

2014-12-01 Thread MegaBrutal
2014-12-01 14:47 GMT+01:00 Roman Mamedov r...@romanrm.net:
 On Mon, 1 Dec 2014 14:38:16 +0100
 MegaBrutal megabru...@gmail.com wrote:

 I've also noticed, a subvolume can just be deleted with an rm -r,
 just like an ordinary directory. I'd consider to only allow subvolume
 deletions with exact btrfs subvolume delete commands, and they

 This is already the case. 'rm -r' will remove all files in a subvolume, but
 the empty subvolume itself is only deletable via the 'btrfs' command.

That's great! There is, however, no way to protect against recursive
deletions (besides setting the subvolume read-only, as you suggested
below), since files are processed individually by rm. But that's OK;
people should always be very careful with rm, and that doesn't change
with btrfs. ;)


 If you want to make snapshots which can't be removed by ordinary tools, use
 the 'read-only' mode when creating them.

Yeah, good idea! Anyway, is it possible to change a read-only
snapshot to read-write and vice versa, or can you only specify
read-only while creating it?


Re: PROBLEM: #89121 BTRFS mixes up mounted devices with their snapshots

2014-12-01 Thread MegaBrutal
2014-12-01 18:27 GMT+01:00 Robert White rwh...@pobox.com:
 On 12/01/2014 04:56 AM, MegaBrutal wrote:

 Since the other thread went off into theoretical debates about UUIDs
 and their generic relation to BTRFS, their everyday use cases, and the
 philosophical meaning behind uniqueness of copies and UUIDs; I'd like
 to specifically ask you to only post here about the ACTUAL problem at
 hand. Don't get me wrong, I find the discussion in the other thread
 really interesting, I'm following it, but it is only very remotely
 related to the original issue, so please keep it there! If you're
 interested to catch up about the actual bug symptoms, please read the
 bug report linked above, and (optionally) reproduce the problem
 yourself!


 That discussion _was_ the actual discussion of the actual problem. A problem
 that is not particularly theoretical, a problem that is common to
 block-level snapshots, and a discussion that contained the actual
 work-arounds.

 I suggest a re-read. 8-)


The majority of the discussion was about how the kernel should react
UPON mounting a file system when more than one device with the same
UUID exists on the system. While that is a very legitimate problem,
worth discussing and mitigating, it is not the same situation as how
the kernel behaves when an identical device appears WHILE the file
system is already mounted.

Actually, I would not identify devices by UUIDs when I know
duplicates could exist due to snapshots; that is why I mount devices
by LVM paths. And when a file system is already mounted with all its
devices, the situation is clear: all devices are open and locked by
the kernel, and any mixup at that point is an error. What about
multiple-device file systems? Supply all their devices with device=
mount options. Just don't identify devices by UUIDs when you know
there could be duplicates; use UUIDs when you don't use LVM.
Identifying file systems by UUIDs was invented because classic
/dev/sdXX device names might change. But LVM names don't change
unless you intentionally change them, e.g. with lvrename.

Since I can tell devices apart by their LVM names, having duplicate
UUIDs on them is not a problem for me, and the discussion is of
little relevance to my use case. It's interesting and I'm reading
along, but it is not about the actual problem at hand.
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: PROBLEM: #89121 BTRFS mixes up mounted devices with their snapshots

2014-12-01 Thread MegaBrutal
2014-12-02 0:24 GMT+01:00 Robert White rwh...@pobox.com:
 On 12/01/2014 02:10 PM, MegaBrutal wrote:

 Since having duplicate UUIDs on devices is not a problem for me since
 I can tell them apart by LVM names, the discussion is of little
 relevance to my use case. Of course it's interesting and I like to
 read it along, it is not about the actual problem at hand.


 Which is why you use the device= mount option, which would take LVM names
 and which was repeatedly discussed as solving this very problem.

 Once you decide to duplicate the UUIDs with LVM snapshots you take up the
 burden of disambiguating your storage.

 Which is part of why re-reading was suggested as this was covered in some
 depth and _is_ _exactly_ about the problem at hand.

Nope.

root@reproduce-1391429:~# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.18.0-031800rc5-generic
root=/dev/mapper/vg-rootlv ro
rootflags=device=/dev/mapper/vg-rootlv,subvol=@

Observe that the device= mount option is added.


root@reproduce-1391429:~# ./reproduce-1391429.sh
#!/bin/sh -v
lvs
  LV VG   Attr  LSize   Pool Origin Data%  Move Log Copy%  Convert
  rootlv vg   -wi-ao---   1.00g
  swap0  vg   -wi-ao--- 256.00m

grub-probe --target=device /
/dev/mapper/vg-rootlv

grep " / " /proc/mounts
rootfs / rootfs rw 0 0
/dev/dm-1 / btrfs rw,relatime,space_cache 0 0

lvcreate --snapshot --size=128M --name z vg/rootlv
  Logical volume "z" created

lvs
  LV VG   Attr  LSize   Pool Origin Data%  Move Log Copy%  Convert
  rootlv vg   owi-aos--   1.00g
  swap0  vg   -wi-ao--- 256.00m
  z  vg   swi-a-s-- 128.00m  rootlv   0.11

ls -l /dev/vg/
total 0
lrwxrwxrwx 1 root root 7 Dec  2 00:12 rootlv -> ../dm-1
lrwxrwxrwx 1 root root 7 Dec  2 00:12 swap0 -> ../dm-0
lrwxrwxrwx 1 root root 7 Dec  2 00:12 z -> ../dm-2

grub-probe --target=device /
/dev/mapper/vg-z

grep " / " /proc/mounts
rootfs / rootfs rw 0 0
/dev/dm-2 / btrfs rw,relatime,space_cache 0 0

lvremove --force vg/z
  Logical volume "z" successfully removed

grub-probe --target=device /
/dev/mapper/vg-rootlv

grep " / " /proc/mounts
rootfs / rootfs rw 0 0
/dev/dm-1 / btrfs rw,relatime,space_cache 0 0


Problem still reproduces.


Re: Possible to undo subvol delete?

2014-12-01 Thread MegaBrutal
2014-12-01 17:39 GMT+01:00 Shriramana Sharma samj...@gmail.com:

 When btrfs has so many features (esp snapshots) to prevent user
 accidentally deleting data (I liked especially
 http://www.youtube.com/v/9H7e6BcI5Fo?start=209) I think there has to
 be *some* modicum of support for warning against deleting a subvolume
 (and it seems others agree too).


Wow, this is pretty neat! How can I do the same actions from the
command line? E.g., I would be curious whether a file has changed since
the last snapshot. Today I have to use traditional methods like a
plain ls -l (in case I trust the time and file size), or diff. But in
the video we could see a directory view showing only the changed
files. How does that YaST application do it? And is there a more
elegant way to restore files to their originals than a plain mv?
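
For what it's worth, a btrfs snapshot is visible as an ordinary
directory tree, so a recursive diff against the live subvolume already
gives a changed-file list from the shell. A minimal sketch: here two
plain directories stand in for the snapshot and the live subvolume so
the commands run anywhere, without a btrfs mount.

```shell
#!/bin/sh
# A snapshot and its origin subvolume are both plain directory trees,
# so "diff -qr" lists the files that changed since the snapshot.
set -e
work=$(mktemp -d)
mkdir "$work/snap" "$work/live"
printf 'original\n'         > "$work/snap/config"
printf 'original\nedited\n' > "$work/live/config"  # changed since the "snapshot"
printf 'same\n' | tee "$work/snap/notes" > "$work/live/notes"  # unchanged file

# -q: report only which files differ; -r: recurse into subdirectories.
# diff exits 1 when differences exist, so don't let set -e abort here.
diff -qr "$work/snap" "$work/live" || true

rm -rf "$work"
```

On a real system this would be e.g. diff -qr /mnt/@snapshot /mnt/@;
restoring a file is then just a copy back, ideally with
cp --reflink=always so no data blocks get duplicated.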


Re: Possible to undo subvol delete?

2014-12-01 Thread MegaBrutal
2014-12-02 4:40 GMT+01:00 Shriramana Sharma samj...@gmail.com:

 Well in office environs, where the root password is with a certain
 person only, then that's fine because that person is going to be wary
 of doing anything that's make others angry at them, but on single-user
 systems, one's regular password *is* the root password and the
 situation is such that because ordinary (and mostly non-destructive)
 things like installing requires entering it, so one gets accustomed to
 entering it without too much thought, leading to the requirement for
 such safety nets.


It reminds me of this accidental deletion:
http://serverfault.com/questions/587102/monday-morning-mistake-sudo-rm-rf-no-preserve-root

LOL at "How do you even type --no-preserve-root accidentally?!" :-o


Re: PROBLEM: #89121 BTRFS mixes up mounted devices with their snapshots

2014-12-01 Thread MegaBrutal
2014-12-01 22:45 GMT+01:00 Konstantin newsbox1...@web.de:

 MegaBrutal wrote on 01.12.2014 at 13:56:
 Hi all,

 I've reported the bug I've previously posted about in "BTRFS messes up
 snapshot LV with origin" in the Kernel Bug Tracker.
 https://bugzilla.kernel.org/show_bug.cgi?id=89121
 Hi MegaBrutal. If I understand your report correctly, I can give you
 another example where this bug is appearing. It is so bad that it leads
 to freezing the system and I'm quite sure it's the same thing. I was
 thinking about filing a bug but didn't have the time for that yet. Maybe
 you could add this case to your bug report as well.

 The bug appears also when using mdadm RAID1 - when one of the drives is
 detached from the array then the OS discovers it and after a while (not
 directly, it takes several minutes) it appears under /proc/mounts:
 instead of /dev/md0p1 I see there /dev/sdb1. And usually after some hour
 or so (depending on system workload) the PC completely freezes. So
 discussion about the uniqueness of UUIDs or not, a crashing kernel is
 telling me that there is a serious bug.


Hmm, I also suspect our symptoms have the same root cause. It seems
the same thing happens: the BTRFS module notices another device with
the same file system and starts to report it as the root device. It
seems like it has no idea that it's part of a RAID configuration or
anything.


BTRFS equivalent for tune2fs?

2014-12-01 Thread MegaBrutal
Hi all,

I know there is a btrfstune, but it doesn't provide all the
functionality I'm thinking of.

For ext2/3/4 file systems I can get a bunch of useful data with
tune2fs -l. How can I retrieve the same type of information about a
BTRFS file system? (E.g., last mount time, last checked time, blocks
reserved for superuser*, etc.)

* Anyway, does BTRFS even have an option to reserve X% for the superuser?
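
In case it helps, with a reasonably recent btrfs-progs at least part
of that information can be dumped; a sketch (needs root, and /dev/sdX1
and /mnt are placeholders for a real btrfs device and mount point):

```shell
# Raw superblock fields: generation, flags, sector size, feature
# bits, and so on (roughly the tune2fs -l counterpart):
btrfs inspect-internal dump-super /dev/sdX1

# Label, UUID, member devices and their sizes:
btrfs filesystem show /mnt

# Space accounting, including the internal global reserve
# (btrfs has no ext4-style reserved-blocks percentage):
btrfs filesystem usage /mnt
```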


Re: BTRFS messes up snapshot LV with origin

2014-11-28 Thread MegaBrutal
2014-11-29 2:25 GMT+01:00 Robert White rwh...@pobox.com:

 On 11/28/2014 09:05 AM, Goffredo Baroncelli wrote:

 For the disk autodetection, I still convinced that it is a sane default
 to skip the lvm-snapshot


 No... please don't...

 Maybe offer an option to select between snapshots or no-snapshots but in much 
 the same way there is no _functional_ difference between a subvolume and a 
 snapshot in btrfs, there is no degenerate status to an LVM snapshot.

 It would be way more useful if the helper dumped a message via stderr or 
 syslog that said something like UUID= ambiguous, must select between 
 /dev/AA and /dev/BB using device= to mount filesystem.



I agree with this. Sometimes people will want to do exactly that:
mount the snapshot devices and not the origins. Listing devices in the
device= mount option sounds perfectly sane.


Re: BTRFS messes up snapshot LV with origin

2014-11-18 Thread MegaBrutal
2014-11-18 16:42 GMT+01:00 Phillip Susi ps...@ubuntu.com:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 11/18/2014 1:16 AM, Chris Murphy wrote:
  If fstab specifies rootfs as UUID, and there are two volumes with
  the same UUID, it’s now ambiguous which one at boot time is the
  intended rootfs. It’s no different than the days of /dev/sdXY where
  X would change designations between boots = ambiguity and why we
  went to UUID.

 He already said he has NOT rebooted, so there is no way that the
 snapshot has actually been mounted, even if it were UUID confusion.


That's right.

Anyway, I've built a system to reproduce the bug. You can download the
image and run it with KVM or other virtualization technology.
Instructions are straightforward – if you start the VM, you'll know
what to do, and you'll see what I was talking about.

http://undead.megabrutal.com/kvm-reproduce-1391429.img.xz

Download size: 113 MB; Unpacked image size: 2 GB.


  So we kinda need a way to distinguish derivative volumes. Maybe
  XFS and ext4 could easily change the volume UUID, but my vague
  recollection is this is difficult on Btrfs? So that led me to the
  idea of a way to create an on-the-fly (but consistent) “virtual
  volume UUID” maybe based on a hash of both the LVM LV and fs
  volume UUID.

 When using LVM, you should be referring to the volume by the LVM name
 rather than UUID.  LVM names are stable, and don't have the duplicate
 uuid problem.


I use LVM names to identify volumes. I initially suspected a UUID
confusion, because I thought grub-probe looks up the volume by
UUID. But now I think the problem has nothing to do with UUIDs.
Probably I should have looked deeper into the problem before I
hypothesized.


Re: BTRFS messes up snapshot LV with origin

2014-11-17 Thread MegaBrutal
2014-11-17 7:59 GMT+01:00 Brendan Hide bren...@swiftspirit.co.za:

 Grub is already a little smart here - it avoids snapshots. But in this case 
 it is relying on the UUID and only finding it in the snapshot. So possibly 
 this is a bug in grub affecting the bug reporter specifically - but perhaps 
 the bug is in btrfs where grub is relying on btrfs code.


Yesterday, when I reproduced the phenomenon on a VM, I found
something rather interesting: even /proc/mounts incorrectly reports
that the snapshot is mounted instead of the root FS. Note, there
was no reboot; just create an LVM snapshot and then check
/proc/mounts.

I couldn't reproduce the same with non-root file systems. It seems
this only happens when the device in question is mounted as the root FS.


 Yes, I'd rather use btrfs' snapshot mechanism - but this is often a choice 
 that is left to the user/admin/distro. I don't think saying LVM snapshots 
 are incompatible with btrfs is the right way to go either.


Before I did a release upgrade, just to be safe, I made both (an LVM
and a btrfs snapshot).



 That leaves two aspects of this issue which I view as two separate bugs:
 a) Btrfs cannot gracefully handle separate filesystems that have the same 
 UUID. At all.
 b) Grub appears to pick the wrong filesystem when presented with two 
 filesystems with the same UUID.

 I feel a) is a btrfs bug.
 I feel b) is a bug that is more about ecosystem design than grub being 
 silly.

 I imagine a couple of aspects that could help fix a):
 - Utilise a unique drive identifier in the btrfs metadata (surely this 
 exists already?). This way, any two filesystems will always have different 
 drive identifiers *except* in cases like a ddrescue'd copy or a block-level 
 snapshot. This will provide a sensible mechanism for defined behaviour, 
 preventing corruption - even if that defined behaviour is to simply give 
 out lots of PEBKAC errors and panic.
 - Utilise a drive list to ensure that two unrelated filesystems with the 
 same UUID cannot get mixed up. Yes, the user/admin would likely be the 
 culprit here (perhaps a VM rollout process that always gives out the same 
 UUID in all its filesystems). Again, does btrfs not already have something 
 like this built-in that we're simply not utilising fully?

 I'm not exactly sure of the correct way to fix b) except that I imagine it 
 would be trivial to fix once a) is fixed.


Note that everything that is written into the file system's metadata
gets duplicated by an LVM snapshot. So a unique drive identifier
wouldn't solve the problem: it would get replicated as well, and BTRFS
would still see two identical devices.

But devices on Linux have major and minor numbers that uniquely
identify them while they are attached. The original and the
snapshot device have different major/minor numbers, and that would be
quite enough to differentiate the devices while they are being
opened/mounted.
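
To illustrate, GNU stat can print those per-device numbers directly
(%t and %T are the major and minor device numbers in hex); a quick
sketch using two well-known character devices:

```shell
#!/bin/sh
# Print the major:minor pair for a couple of device nodes.
# Every attached device has a unique pair, even when two block
# devices carry byte-identical file systems (an LV and its snapshot).
for dev in /dev/null /dev/zero; do
    # GNU stat: %t = major number (hex), %T = minor number (hex)
    printf '%s is device %s\n' "$dev" "$(stat -c '%t:%T' "$dev")"
done
# prints "/dev/null is device 1:3" and "/dev/zero is device 1:5"
```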

By the way, I actually performed an entire release upgrade with the
snapshot being there and being reported incorrectly. That would have
caused enough corruption in the file system that I would surely have
noticed it. But I didn't perceive any data corruption: BTRFS didn't
actually write to the snapshot device. It seems the device is only
mixed up in /proc/mounts, so the problem is probably not as severe as
we think, and fixing it wouldn't require fundamental changes to BTRFS.


Fwd: BTRFS messes up snapshot LV with origin

2014-11-17 Thread MegaBrutal
2014-11-17 20:04 GMT+01:00 Goffredo Baroncelli kreij...@inwind.it:

 Regarding b)
 I am bit confused: if I understood correctly, the root filesystem was
 picked from a LVM-snapshot, so grub-probe *correctly* reported that
 the root device is the snapshot.


This is not what happens. The system isn't even rebooted when the
mix-up happens.

You boot from the original device, create an LVM-snapshot*, and mount
starts to report the snapshot as the root device, while in fact it
isn't.

I know my initial descriptions of the bug were misleading, as I
myself didn't know what the heck was going on.

From this point, please take these comments as reference:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1391429/comments/2
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1391429/comments/4


* I know I shouldn't make an LVM-snapshot of a mounted file system,
but this is not the point.


P.S.: E-mail sent twice, as lists didn't accept it in HTML. Plus I'm
not on the GRUB list, and can't post there.


BTRFS messes up snapshot LV with origin

2014-11-16 Thread MegaBrutal
Hello guys,

I think you'll like this...
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1391429


MegaBrutal


Re: Quota reached: can't delete

2014-08-07 Thread MegaBrutal
Arne Jansen sensille at gmx.net writes:

 
 On 24.01.2013 16:12, Jerome M wrote:
  Hi,
  
  With the current btrfs quota implementation, when you reach a
  subvolume quota limit, you can't delete anything without first
  removing the limit or enlarge it:
  
  rm: cannot remove `testfile.bin': Disk quota exceeded
  
  
  Is there any plan to change that?
 
 Yes, there is. The problem is that even deletion needs space.
 So we need to allow remove to go over quota. The current implementation
 doesn't make this distinction.
 

I've found a workaround that avoids changing the quota: empty the file
first, and then you'll be able to delete it.

echo > testfile.bin
rm testfile.bin

P.S.: Sorry for necromancing this thread, but I couldn't resist telling. :P



BTRFS over LVM remounts read-only

2014-01-11 Thread MegaBrutal
Hello,

I'm using BTRFS over LVM, and after some time of usage (days or hours)
it just remounts itself read-only, and I don't see the reason why,
while it is said to be fault-tolerant.

It is possible that I have bad sectors on the disk, though I don't
find it likely. Even then, I don't think bad sectors should cause so
much of a problem that the FS needs to be remounted because of them.
Is there a way to make BTRFS detect and mark bad sectors like any
other file system does?

Here are the /etc/fstab entries those concern my BTRFS file system:
/dev/mapper/vmhost--vg-vmhost--rootfs /            btrfs defaults,subvol=@     0 1
/dev/mapper/vmhost--vg-vmhost--rootfs /home        btrfs defaults,subvol=@home 0 2
/dev/mapper/vmhost--vg-vmhost--rootfs /media/btrfs btrfs noauto                0 3

I've attached a dmesg output that may help developers see what the problem is.

Do you have any ideas what causes the problem?

Also, if the disaster happens and the FS gets remounted read-only, is
there a way to force-remount it in read-write mode? I've tried mount
-o remount,rw and mount -o force,remount,rw too, but neither works.
Also I can't mount the BTRFS root or any subvolumes elsewhere
until I restart the system.

Probably my description of the problem isn't detailed enough, but
since I'm totally clueless about the cause (not counting the
possible bad sectors), I can't tell more right now. But you can ask
for more details and I'll try to answer.


MegaBrutal


btrfs-failure.dmesg.bz2
Description: BZip2 compressed data


Re: BTRFS over LVM remounts read-only

2014-01-11 Thread MegaBrutal
2014/1/11 Hugo Mills h...@carfax.org.uk:

 [60631.481913] attempt to access beyond end of device
 [60631.481935] dm-1: rw=1073, want=42917896, limit=42917888
 [60631.481941] btrfs_dev_stat_print_on_error: 34 callbacks suppressed
 [60631.481949] btrfs: bdev /dev/mapper/vmhost--vg-vmhost--rootfs errs: wr 
 311347, rd 0, flush 0, corrupt 0, gen 0

This is the problem.

It looks like you've shrunk the LV without first shrinking the
 filesystem. Depending on how much you shrunk it by, the odds are
 fairly good that significant chunks of the FS are now toast.

Hugo.


Thank you very much! Strange, I indeed shrank the LV, but only by a
few megabytes to make space for /boot (as it turned out, GRUB2 couldn't
boot from LVM), and I thought I had shrunk the FS too... but maybe
not. As it started out as a test system, I intentionally wanted to
test some obscure situations on it. Thankfully I don't have
considerable data loss, and I don't think any valuable data was on the
space that was cut off. Strange that the system was running like this
for a month with the problem going undetected.

How can I shrink the FS to the correct size right now, ensuring that I
really shrink it to the exact LV size?
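
A sketch of one way to do that (needs root; the device and mount
point below are this thread's, not universal): read the block
device's exact byte size and hand that number to btrfs filesystem
resize.

```shell
#!/bin/sh
# Shrink the btrfs file system to exactly the size of the underlying
# LV. Run as root; DEV/MNT match the setup described in this thread.
DEV=/dev/mapper/vmhost--vg-vmhost--rootfs
MNT=/

SIZE=$(blockdev --getsize64 "$DEV")     # exact device size in bytes
btrfs filesystem resize "$SIZE" "$MNT"  # a plain number is taken as bytes
```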