On 03/12/2018 at 23:22, Hans van Kranenburg wrote:
> [...]
> Yes, I think that's true. See btrfs_read_block_groups in extent-tree.c:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/fs/btrfs/extent-tree.c#n9982
>
> What the code is doing here is starting at the
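(Aside, not part of the original mail: the code referenced above walks every block group item in the extent tree at mount time, which is why mount gets slower as more chunks are allocated. A rough way to see how many block groups a filesystem has is sketched below in Python; the device path is only an example and btrfs-debug-tree's options differ between btrfs-progs versions.)

#!/usr/bin/env python3
# Sketch: count BLOCK_GROUP_ITEM keys in the extent tree (tree id 2).
# Each allocated chunk has one such item, and mount has to read them all.
import subprocess
import sys

def count_block_groups(device: str) -> int:
    out = subprocess.run(
        ["btrfs-debug-tree", "-t", "2", device],  # dump the extent tree only
        capture_output=True, text=True, check=True,
    ).stdout
    return out.count("BLOCK_GROUP_ITEM")

if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/sdb"
    print(f"{dev}: {count_block_groups(dev)} block groups")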
On 04/12/2018 at 03:52, Chris Murphy wrote:
> On Mon, Dec 3, 2018 at 1:04 PM Lionel Bouton
> wrote:
>> On 03/12/2018 at 20:56, Lionel Bouton wrote:
>>> [...]
>>> Note : recently I tried upgrading from 4.9 to 4.14 kernels, various
>>> tuning of the
On 03/12/2018 at 20:56, Lionel Bouton wrote:
> [...]
> Note : recently I tried upgrading from 4.9 to 4.14 kernels, various
> tuning of the io queue (switching between classic io-schedulers and
> blk-mq ones in the virtual machines) and BTRFS mount options
> (space_cach
Hi,
On 03/12/2018 at 19:20, Wilson, Ellis wrote:
> Hi all,
>
> Many months ago I promised to graph how long it took to mount a BTRFS
> filesystem as it grows. I finally had (made) time for this, and the
> attached is the result of my testing. The image is a fairly
> self-explanatory graph,
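(Aside, not Ellis' actual test harness: a minimal sketch of how such a mount-time measurement can be scripted. The device, mount point and cold-cache drop are assumptions; it needs root.)

#!/usr/bin/env python3
# Sketch: time a cold mount of a btrfs filesystem a few times.
import subprocess
import time

DEVICE = "/dev/sdb"        # example device
MOUNTPOINT = "/mnt/test"   # example mount point, must already exist

def time_mount() -> float:
    start = time.monotonic()
    subprocess.run(["mount", DEVICE, MOUNTPOINT], check=True)
    elapsed = time.monotonic() - start
    subprocess.run(["umount", MOUNTPOINT], check=True)
    return elapsed

if __name__ == "__main__":
    for run in range(3):
        # Drop page/inode/dentry caches so every mount starts cold.
        with open("/proc/sys/vm/drop_caches", "w") as caches:
            caches.write("3\n")
        print(f"run {run}: mount took {time_mount():.2f}s")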
Hi,
On 29/06/2018 09:22, Marc MERLIN wrote:
> On Fri, Jun 29, 2018 at 12:09:54PM +0500, Roman Mamedov wrote:
>> On Thu, 28 Jun 2018 23:59:03 -0700
>> Marc MERLIN wrote:
>>
>>> I don't waste a week recreating the many btrfs send/receive relationships.
>> Consider not using send/receive, and
On 21/11/2017 at 23:04, Andy Leadbetter wrote:
> I have a 4 disk array on top of 120GB bcache setup, arranged as follows
[...]
> Upgraded today to 4.14.1 from their PPA and the
4.14 and 4.14.1 have a nasty bug affecting bcache users. See for example:
On 06/07/2017 at 13:59, Austin S. Hemmelgarn wrote:
> On 2017-07-05 20:25, Nick Terrell wrote:
>> On 7/5/17, 12:57 PM, "Austin S. Hemmelgarn"
>> wrote:
>>> It's the slower compression speed that has me arguing for the
>>> possibility of configurable levels on zlib. 11MB/s
On 06/07/2017 at 13:51, Austin S. Hemmelgarn wrote:
>
> Additionally, when you're referring to extent size, I assume you mean
> the huge number of 128k extents that the FIEMAP ioctl (and at least
> older versions of `filefrag`) shows for compressed files? If that's
> the case, then it's
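(Aside, not from the original mail: a quick way to see the per-file extent counts being discussed is filefrag's summary line. A small sketch follows; paths are examples and the parsing relies on filefrag's "N extents found" output.)

#!/usr/bin/env python3
# Sketch: report the extent count filefrag shows for each file given on the
# command line (compressed files typically show many ~128k extents).
import subprocess
import sys

def extent_count(path: str) -> int:
    out = subprocess.run(["filefrag", path],
                         capture_output=True, text=True, check=True).stdout
    # filefrag prints e.g. "/path/to/file: 42 extents found"
    return int(out.rsplit(":", 1)[1].split()[0])

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(f"{path}: {extent_count(path)} extents")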
On 19/05/2017 at 23:15, Timofey Titovets wrote:
> 2017-05-19 23:19 GMT+03:00 Lionel Bouton
> <lionel-subscript...@bouton.name>:
>> I was too focused on other problems and having a fresh look at what I
>> wrote I'm embarrassed by what I read. Used pages for a given
On 19/05/2017 at 16:17, Lionel Bouton wrote:
> Hi,
>
>> On 19/05/2017 at 15:38, Timofey Titovets wrote:
>> If data compression didn't free at least one PAGE_SIZE, it is useless to store
>> that compressed extent
>>
>> Signed-off-by: Timofey Titovets <nefel
Hi,
On 19/05/2017 at 15:38, Timofey Titovets wrote:
> If data compression didn't free at least one PAGE_SIZE, it is useless to store
> that compressed extent
>
> Signed-off-by: Timofey Titovets
> ---
> fs/btrfs/lzo.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
On 15/05/2017 at 10:14, Hugo Mills wrote:
> [...]
>> As for limit= I'm not sure if it would be helpful since I run this
>> nightly. Anything that doesn't get done tonight due to limit, would be
>> done tomorrow?
> I'm suggesting limit= on its own. It's a fixed amount of work
> compared to
On 14/05/2017 at 23:30, Kai Krakow wrote:
> On Sun, 14 May 2017 22:57:26 +0200,
> Lionel Bouton <lionel-subscript...@bouton.name> wrote:
>
>> I've coded one Ruby script which tries to balance between the cost of
>> reallocating group and the need for it.[...]
>>
On 14/05/2017 at 22:15, Marc MERLIN wrote:
> On Sun, May 14, 2017 at 09:13:35PM +0200, Hans van Kranenburg wrote:
>> On 05/13/2017 10:54 PM, Marc MERLIN wrote:
>>> Kernel 4.11, btrfs-progs v4.7.3
>>>
>>> I run scrub and balance every night, been doing this for 1.5 years on this
>>> filesystem.
Hi,
some news from the coal mine...
On 17/03/2017 at 11:03, Lionel Bouton wrote:
> [...]
> I'm considering trying to use a 4 week old snapshot of the device to
> find out if it was corrupted or not instead. It will still be a pain if
> it works but rsync for less than a m
On 17/03/2017 at 10:51, Roman Mamedov wrote:
> On Fri, 17 Mar 2017 10:27:11 +0100
> Lionel Bouton <lionel-subscript...@bouton.name> wrote:
>
>> Hi,
>>
>> On 17/03/2017 at 09:43, Hans van Kranenburg wrote:
>>> btrfs-debug-tree -b 3415463870464
>
Hi,
On 17/03/2017 at 09:43, Hans van Kranenburg wrote:
> btrfs-debug-tree -b 3415463870464
Here is what it gives me back:
btrfs-debug-tree -b 3415463870464 /dev/sdb
btrfs-progs v4.6.1
checksum verify failed on 3415463870464 found A85405B7 wanted 01010101
checksum verify failed on
On 17/03/2017 at 05:32, Lionel Bouton wrote:
> Hi,
>
> [...]
> I'll catch some sleep right now (it's 5:28 AM here) but I'll be able to
> work on this in 3 or 4 hours.
I woke up to this:
Mar 17 06:56:30 fileserver kernel: btree_readpage_end_io_hook: 104476
callbacks suppressed
Hi,
our largest BTRFS filesystem is damaged but I'm unclear if it is
recoverable or not. This is a 20TB filesystem with ~13TB used in a
virtual machine using virtio-scsi backed by Ceph (Firefly 0.8.10).
The following messages have become more frequent:
fileserver kernel: sd 0:0:1:0: [sdb] tag#
On 07/02/2017 at 21:47, Austin S. Hemmelgarn wrote:
> On 2017-02-07 15:36, Kai Krakow wrote:
>> On Tue, 7 Feb 2017 09:13:25 -0500,
>> Peter Zaitsev wrote:
>>
>>> Hi Hugo,
>>>
>>> For the use case I'm looking for I'm interested in having snapshot(s)
>>> open at all time.
On 07/02/2017 at 21:36, Kai Krakow wrote:
> [...]
> I think I've read that btrfs snapshots do not guarantee single point in
> time snapshots - the snapshot may be smeared across a longer period of
> time while the kernel is still writing data. So parts of your writes
> may still end up in the
Hi Peter,
On 07/02/2017 at 15:13, Peter Zaitsev wrote:
> Hi Hugo,
>
> For the use case I'm looking for I'm interested in having snapshot(s)
> open at all time. Imagine for example snapshot being created every
> hour and several of these snapshots kept at all time providing quick
> recovery
Hi,
On 02/12/2016 at 20:07, Blake Lewis wrote:
> Hi, all, this is my first posting to the mailing list. I am a
> long-time file system guy who is just starting to take a serious
> interest in btrfs.
>
> My company's product uses btrfs for its backing storage. We
> maintain a log file to let
Hi,
On 29/11/2016 at 18:20, Florian Lindner wrote:
> [...]
>
> * Any other advice? ;-)
Don't rely on RAID too much... The degraded mode is unstable even for
RAID10: you can corrupt data simply by writing to a degraded RAID10. I
could reliably reproduce this on a 6-device RAID10 BTRFS
Hi,
here's how I managed to recover from a BTRFS replace panic which
happened even on 4.8.4.
The kernel didn't seem to handle our raid10 filesystem with a missing
device correctly (even though it passed a precautionary scrub before
removing the device):
- replace didn't work and triggered a
: the problem still made the kernel panic. Unless someone comes
up with a somewhat safe way to recover from this situation I'll leave the
filesystem as is (we are building a new platform where redundancy will
be handled by Ceph anyway).
Lionel
On 27/10/2016 at 18:07, Lionel Bouton wrote:
> Hi,
>
>
Hi,
On 27/10/2016 at 02:50, Lionel Bouton wrote:
> [...]
> I'll stop for tonight and see what happens during the day. I'd like to
> try a device add / delete next but I'm worried I could end up with a
> completely unusable filesystem if the device delete hits the same
> probl
Hi,
On 27/10/2016 at 01:54, Lionel Bouton wrote:
>
> I'll post the final result of the btrfs replace later (it's currently at
> 5.6% after 45 minutes).
Result: kernel panic (so 4.8.4 didn't solve my main problem).
Unfortunately I don't have a remote KVM anymore so I couldn't capture
Hi,
On 26/10/2016 at 02:57, Lionel Bouton wrote:
> Hi,
>
> I'm currently trying to recover from a disk failure on a 6-drive Btrfs
> RAID10 filesystem. A "mount -o degraded" auto-resumes a current
> btrfs-replace from a missing dev to a new disk. This eventually
Hi,
I'm currently trying to recover from a disk failure on a 6-drive Btrfs
RAID10 filesystem. A "mount -o degraded" auto-resumes a current
btrfs-replace from a missing dev to a new disk. This eventually triggers
a kernel panic (and the panic seemed faster on each new boot). I
managed to cancel
Hi,
On 12/09/2016 14:59, Michel Bouissou wrote:
> [...]
> I never had problems with lzo compression, although I suspect that it (in
> conjunction with snapshots) adds much fragmentation that may relate to the
> extremely bad performance I get over time with mechanical HDs.
I had about 30 btrfs
Hi,
happy borgbackup user here. This is probably off-topic for most but as
many users probably are evaluating send/receive versus other backup
solutions, I'll keep linux-btrfs in the loop.
On 28/08/2016 20:10, Oliver Freyermuth wrote:
>> Try borgbackup, I'm using it very successfully. It is very
On 21/06/2016 15:17, Graham Cobb wrote:
> On 21/06/16 12:51, Austin S. Hemmelgarn wrote:
>> The scrub design works, but the whole state file thing has some rather
>> irritating side effects and other implications, and developed out of
>> requirements that aren't present for balance (it might be
Hi,
On 09/05/2016 16:53, Niccolò Belli wrote:
> On Sunday, 8 May 2016 20:27:55 CEST, Patrik Lundquist wrote:
>> Are you using any power management tweaks?
>
> Yes, as stated in my very first post I use TLP with
> SATA_LINKPWR_ON_BAT=max_performance, but I managed to reproduce the
> bug
Hi,
On 19/04/2016 11:13, Anand Jain wrote:
>
>>> # btrfs device delete 3 /mnt/store/
>>> ERROR: device delete by id failed: Inappropriate ioctl for device
>>>
>>> Were the patch sets above for btrfs-progs or for the kernel ?
>> [...]
>
> By the way, for Lionel's issue, delete missing should
On 18/04/2016 10:59, Lionel Bouton wrote:
> [...]
> So the obvious thing to do in this circumstance is to delete the drive,
> forcing the filesystem to create the missing replicas in the process and
> only reboot if needed (no hotplug). Unfortunately I'm not sure of the
> c
Hi,
On 10/02/2016 10:00, Anand Jain wrote:
>
>
> Rene,
>
> Thanks for the report. Fixes are in the following patch sets
>
> concern1:
> Btrfs to fail/offline a device for write/flush error:
> [PATCH 00/15] btrfs: Hot spare and Auto replace
>
> concern2:
> User should be able to delete a
On 31/03/2016 22:49, Chris Murray wrote:
> Hi,
>
> I'm trying to troubleshoot a ceph cluster which doesn't seem to be
> honouring BTRFS compression on some OSDs. Can anyone offer some help? Is
> it likely to be a ceph issue or a BTRFS one? Or something else? I've
> asked on ceph-users already,
Hi,
On 25/02/2016 18:44, Hegner Robert wrote:
> On 25.02.2016 at 18:34, Hegner Robert wrote:
>> Hi all!
>>
>> I'm working on a embedded system (ARM) running from a SDcard.
From experience, most SD cards are not to be trusted. They are not
designed for storing an operating system and
On 23/02/2016 19:30, Marc MERLIN wrote:
> On Tue, Feb 23, 2016 at 07:01:52PM +0100, Lionel Bouton wrote:
>> Why don't you use autodefrag ? If you have writable snapshots and do
>> write to them heavily it would not be a good idea (depending on how
>> BTRFS handles this in
On 23/02/2016 18:34, Marc MERLIN wrote:
> On Tue, Feb 23, 2016 at 09:26:35AM -0800, Marc MERLIN wrote:
>> Label: 'dshelf2' uuid: d4a51178-c1e6-4219-95ab-5c5864695bfd
>> Total devices 1 FS bytes used 4.25TiB
>> devid    1 size 7.28TiB used 4.44TiB path /dev/mapper/dshelf2
>>
>>
Hi,
I'm using this Ruby script to maintain my BTRFS filesystems and try to
avoid them getting in a position where they can't allocate space even
though there is still plenty of it.
http://pastebin.com/39567Dun
It seems to work well (it maintains dozens of BTRFS filesystems, running
balance on
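(Aside: the sketch below is not the Ruby script linked above, just a minimal illustration of the usage-filtered balance idea it is built around; the mount point and thresholds are examples.)

#!/usr/bin/env python3
# Sketch: compact lightly-used data chunks with a usage-filtered balance,
# bounded by the limit filter so a nightly run stays short.
import subprocess

MOUNTPOINT = "/mnt/data"   # example filesystem

def balance_lightly_used_chunks(usage_percent: int = 20, limit: int = 10) -> None:
    # Only rewrite data chunks that are at most usage_percent full, and stop
    # after `limit` chunks.
    subprocess.run(
        ["btrfs", "balance", "start",
         f"-dusage={usage_percent},limit={limit}",
         MOUNTPOINT],
        check=True,
    )

if __name__ == "__main__":
    balance_lightly_used_chunks()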
Hi,
On 07/02/2016 14:15, Andreas Hild wrote:
> Dear All,
>
> The file system on a RAID1 Debian server seems corrupted in a major
> way, with 99% of the files not found. This was the result of a
> precarious shutdown after a crash that was preceded by an accidental
> misconfiguration in
On 05/01/2016 14:04, David Goodwin wrote:
> Using btrfs progs 4.3.1 on a Vanilla kernel.org 4.1.15 kernel.
>
> time btrfs device delete /dev/xvdh /backups
>
> real    13936m56.796s
> user    0m0.000s
> sys     1351m48.280s
>
>
> (which is about 9 days).
>
> Where :
>
> /dev/xvdh was 120gb in
On 15/12/2015 02:49, Duncan wrote:
> Christoph Anton Mitterer posted on Tue, 15 Dec 2015 00:25:05 +0100 as
> excerpted:
>
>> On Mon, 2015-12-14 at 22:30 +0100, Lionel Bouton wrote:
>>
>>> I use noatime and nodiratime
>> FYI: noatime implies nodiratime
On 14/12/2015 21:27, Austin S. Hemmelgarn wrote:
> AFAIUI, the _only_ reason that that is still the default is because of
> Mutt, and that won't change as long as some of the kernel developers
> are using Mutt for e-mail and the Mutt developers don't realize that
> what they are doing is
On 08/12/2015 16:06, Marc MERLIN wrote:
> Howdy,
>
> Why would scrub need space and why would it cancel if there isn't enough of
> it?
> (kernel 4.3)
>
> /etc/cron.daily/btrfs-scrub:
> btrfs scrub start -Bd /dev/mapper/cryptroot
> scrub device /dev/mapper/cryptroot (id 1) done
> scrub
On 08/12/2015 16:37, Holger Hoffstätte wrote:
> On 12/08/15 16:06, Marc MERLIN wrote:
>> Howdy,
>>
>> Why would scrub need space and why would it cancel if there isn't enough of
>> it?
>> (kernel 4.3)
>>
>> /etc/cron.daily/btrfs-scrub:
>> btrfs scrub start -Bd /dev/mapper/cryptroot
>> scrub
On 14/10/2015 22:23, Donald Pearson wrote:
> I would not use Raid56 in production. I've tried using it a few
> different ways but have run into trouble with stability and
> performance. Raid10 has been working excellently for me.
Hi, could you elaborate on the stability and performance
On 14/10/2015 22:53, Donald Pearson wrote:
> I've used it from 3.8 something to current, it does not handle drive
> failure well at all, which is the point of parity raid. I had a 10-disk
> Raid6 array on 4.1.1 and a drive failure put the filesystem in an
> irrecoverable state. Scrub speeds are
On 10/10/2015 16:41, cov...@ccs.covici.com wrote:
> Holger Hoffstätte wrote:
>
>> On 10/10/15 14:46, cov...@ccs.covici.com wrote:
>>> Hi. I am having lots of btrfs troubles -- I am using a 4.1.9 kernel
>> Just FYI, both 4.1.9 and .10 have serious
On 11/10/2015 01:32, cov...@ccs.covici.com wrote:
> [...]
> I don't know if the file in question had the correct data, I only did a
> directory listing, but this makes no sense -- I did an rsync just before
> booting and got all kinds of errors and the only difference is the file
> system, this
On 10/10/2015 18:55, cov...@ccs.covici.com wrote:
> [...]
> But do you folks have any idea about my original question, this leads me
> to think that btrfs is too new or something.
I've seen a recent report of a problem with btrfs-progs 4.2 confirmed as
a bug in mkfs. As you created the
On 11/10/2015 01:02, cov...@ccs.covici.com wrote:
> Lionel Bouton <lionel+c...@bouton.name> wrote:
>
>> On 10/10/2015 18:55, cov...@ccs.covici.com wrote:
>>> [...]
>>> But do you folks have any idea about my original question, this leads me
>>>
Hi,
On 04/10/2015 14:03, Lionel Bouton wrote:
> [...]
> This focus on single reader RAID1 performance surprises me.
>
> 1/ AFAIK the kernel md RAID1 code behaves the same (last time I checked
> you need 2 processes to read from 2 devices at once) and I've never seen
Hi,
On 04/10/2015 04:09, Duncan wrote:
> Russell Coker posted on Sat, 03 Oct 2015 18:32:17 +1000 as excerpted:
>
>> Last time I checked a BTRFS RAID-1 filesystem would assign each process
>> to read from one disk based on its PID. Every RAID-1 implementation
>> that has any sort of
On 27/09/2015 17:34, Lionel Bouton wrote:
> [...]
> It's not clear to me that "btrfs fi defrag " can't interfere with
> another process trying to use the file. I assume basic reading and
> writing is OK but there might be restrictions on unlinking/locking/using
> other
On 29/09/2015 16:49, Lionel Bouton wrote:
> On 27/09/2015 17:34, Lionel Bouton wrote:
>> [...]
>> It's not clear to me that "btrfs fi defrag " can't interfere with
>> another process trying to use the file. I assume basic reading and
>> writing
On 28/09/2015 22:52, Duncan wrote:
> Lionel Bouton posted on Mon, 28 Sep 2015 11:55:15 +0200 as excerpted:
>
>> From what I understood, filefrag doesn't know the length of each extent
>> on disk but should have its position. This is enough to have a rough
>> estimation
conclusions on your own),
In fact I was initially aware of (no)CoW/defragmentation/snapshots
performance gotchas (I already used BTRFS for hosting PostgreSQL slaves,
for example...).
But Ceph is filesystem aware: its OSDs detect if they are running on
XFS/BTRFS and automatically activate some filesystem features. So even
though I was aware of the problems that can happen on a CoW filesystem,
I preferred to do actual testing with the default Ceph settings and
filesystem mount options before tuning.
Best regards,
Lionel Bouton
4.0.5 (or better if we have the time to test a more recent kernel before
rebooting: 4.1.8 and 4.2.1 are our candidates for testing right now).
Best regards,
Lionel Bouton