On Mon, Feb 11, 2019 at 5:17 AM Austin S. Hemmelgarn wrote:
>
> Last I knew, it was systemd itself doing the pause, because we provide
> no real device for udev to wait on appearing.
Well there's more than one thing responsible for the net behavior. The
most central thing waiting is the kernel. A
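The wait systemd imposes can at least be bounded per mount point; a minimal sketch, assuming systemd's x-systemd.device-timeout fstab option (the UUID and mount point are placeholders):

    # /etc/fstab -- cap how long systemd waits for the device to appear
    UUID=<fs-uuid>  /data  btrfs  defaults,x-systemd.device-timeout=90s  0  0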
On 2019-02-10 13:34, Chris Murphy wrote:
On Sat, Feb 9, 2019 at 5:13 AM waxhead wrote:
Understood, but that is not quite what I meant - let me rephrase...
If BTRFS still can't mount, why would it blindly accept a previously
non-existing disk to take part in the pool?!
It doesn't do it blindly
On 2/7/19 7:04 PM, Stefan K wrote:
Thanks, with degraded as kernel parameter and also in the fstab it works as
expected
That should be the normal behaviour,
IMO in the long term it will be. But before that we have a few items to
fix around this, such as the serviceability part.
-Anan
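For readers landing here, a minimal sketch of the setup Stefan describes (the UUID and the GRUB variable shown are placeholders for whatever your distro uses), with degraded passed both as a rootflag for the initramfs mount and as an fstab option:

    # kernel command line (e.g. GRUB_CMDLINE_LINUX in /etc/default/grub)
    rootflags=degraded

    # /etc/fstab
    UUID=<fs-uuid>  /  btrfs  defaults,degraded  0  1

As discussed later in the thread, this trades safety for availability: a merely slow device can silently produce a degraded mount.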
On Sat, Feb 9, 2019 at 5:13 AM waxhead wrote:
> Understood, but that is not quite what I meant - let me rephrase...
> If BTRFS still can't mount, why would it blindly accept a previously
> non-existing disk to take part in the pool?!
It doesn't do it blindly. It only ever mounts when the user sp
Austin S. Hemmelgarn wrote:
On 2019-02-08 13:10, waxhead wrote:
Austin S. Hemmelgarn wrote:
On 2019-02-07 13:53, waxhead wrote:
Austin S. Hemmelgarn wrote:
So why does BTRFS hurry to mount itself even if devices are missing? And
if BTRFS still can mount, why would it blindly accept a n
On Fri, Feb 8, 2019 at 11:10 AM waxhead wrote:
> So what you are saying here is that distros that use btrfs by default
> should be responsible enough to make some monitoring solution if they
> allow non-technical users to create a "raid"1 like btrfs filesystem in
> the first place.
None do this
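Such a monitoring solution need not be elaborate; a hypothetical cron-job sketch, assuming a btrfs-progs new enough to have the --check flag for device stats, a working local mail setup, and /data as a placeholder mount point:

    #!/bin/sh
    # --check exits non-zero if any device error counter is non-zero
    btrfs device stats --check /data >/dev/null 2>&1 || \
        echo "btrfs device errors on /data" | mail -s "btrfs alert: $(hostname)" root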
On 2019-02-08 13:10, waxhead wrote:
Austin S. Hemmelgarn wrote:
On 2019-02-07 13:53, waxhead wrote:
Austin S. Hemmelgarn wrote:
On 2019-02-07 06:04, Stefan K wrote:
Thanks, with degraded as kernel parameter and also in the fstab
it works as expected
That should be the normal behaviour
Austin S. Hemmelgarn wrote:
On 2019-02-07 13:53, waxhead wrote:
Austin S. Hemmelgarn wrote:
On 2019-02-07 06:04, Stefan K wrote:
Thanks, with degraded as kernel parameter and also in the fstab it
works as expected
That should be the normal behaviour, because a server must be up and
running
On Fri, Feb 8, 2019 at 12:33 AM Stefan K wrote:
>
> > However the raid1 term only describes replication. It doesn't describe
> > any policy.
> yep you're right, but most sysadmins expect some 'policies'.
A sysadmin expecting policies is fine, but assuming they exist makes
them a questionable s
On Fri, Feb 8, 2019 at 12:15 AM Stefan K wrote:
>
> > * Normal desktop users _never_ look at the log files or boot info, and
> > rarely run monitoring programs, so they as a general rule won't notice
> > until it's already too late. BTRFS isn't just a server filesystem, so
> > it needs to be safe
On 2019-02-08 02:15, Stefan K wrote:
* Normal desktop users _never_ look at the log files or boot info, and
rarely run monitoring programs, so they as a general rule won't notice
until it's already too late. BTRFS isn't just a server filesystem, so
it needs to be safe for regular users too.
I g
On 2019-02-07 23:51, Andrei Borzenkov wrote:
07.02.2019 22:39, Austin S. Hemmelgarn writes:
The issue with systemd is that if you pass 'degraded' on most systemd
systems, and devices are missing when the system tries to mount the
volume, systemd won't mount it because it doesn't see all the devices
> However the raid1 term only describes replication. It doesn't describe
> any policy.
yep you're right, but most sysadmins expect some 'policies'.
If I use RAID1 I expect that if one drive fails, I can still boot _without_
boot issues, just some warnings etc., because I use raid1 to have si
> * Normal desktop users _never_ look at the log files or boot info, and
> rarely run monitoring programs, so they as a general rule won't notice
> until it's already too late. BTRFS isn't just a server filesystem, so
> it needs to be safe for regular users too.
I guess a normal desktop user wo
07.02.2019 22:39, Austin S. Hemmelgarn writes:
> The issue with systemd is that if you pass 'degraded' on most systemd
> systems, and devices are missing when the system tries to mount the
> volume, systemd won't mount it because it doesn't see all the devices.
> It doesn't even _try_ to mount it b
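The workaround sometimes used for this (a sketch only, and explicitly at your own risk): override systemd's stock 64-btrfs.rules so btrfs member devices are marked ready unconditionally, paired with the degraded mount option:

    # /etc/udev/rules.d/64-btrfs.rules -- hypothetical override; skips the
    # stock "are all member devices present" check, so systemd attempts the
    # mount even with a device missing (only sensible with -o degraded)
    SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="btrfs", ENV{SYSTEMD_READY}="1"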
On 2019-02-07 5:19 p.m., Chris Murphy wrote:
> And actually, you could mitigate some decent amount of Btrfs missing
> features with server monitoring tools; including parsing kernel
> messages. Because right now you aren't even informed of read or write
> errors, device or csum mismatches or fixu
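As a sketch of the message parsing Chris is referring to (the pattern list is illustrative, not exhaustive):

    # scan kernel messages for btrfs I/O and checksum problems
    journalctl -k --no-pager | grep -E 'BTRFS (error|warning)|csum failed'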
On Thu, Feb 7, 2019 at 10:37 AM Martin Steigerwald wrote:
>
> Chris Murphy - 07.02.19, 18:15:
> > > So please change the normal behavior
> >
> > In the case of no device loss, but device delay, with 'degraded' set
> > in fstab you risk a non-deterministic degraded mount. And there is no
> > automa
On 2019-02-07 2:39 p.m., Austin S. Hemmelgarn wrote:
> Again, BTRFS mounting degraded is significantly riskier than LVM or MD
> doing the same thing. Most users don't properly research things (When's
> the last time you did a complete cost/benefit analysis before deciding
> to use a particular p
On 2019-02-07 13:53, waxhead wrote:
Austin S. Hemmelgarn wrote:
On 2019-02-07 06:04, Stefan K wrote:
Thanks, with degraded as kernel parameter and also in the fstab it
works as expected
That should be the normal behaviour, because a server must be up and
running, and I don't care about a
Austin S. Hemmelgarn wrote:
On 2019-02-07 06:04, Stefan K wrote:
Thanks, with degraded as kernel parameter and also in the fstab it
works as expected
That should be the normal behaviour, because a server must be up and
running, and I don't care about a device loss, that's why I use a
RAID1
Chris Murphy - 07.02.19, 18:15:
> > So please change the normal behavior
>
> In the case of no device loss, but device delay, with 'degraded' set
> in fstab you risk a non-deterministic degraded mount. And there is no
> automatic balance (sync) after recovering from a degraded mount. And
> as far
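The manual re-sync Chris notes is missing would look something like this (the mount point is a placeholder; the soft filter restripes only chunks that are not already raid1):

    # re-replicate chunks written while degraded, then verify with a scrub
    btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt
    btrfs scrub start -Bd /mnt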
On Thu, Feb 7, 2019 at 4:04 AM Stefan K wrote:
>
> Thanks, with degraded as kernel parameter and also in the fstab it works
> as expected
> That should be the normal behaviour, because a server must be up and running,
> and I don't care about a device loss, that's why I use a RAID1.
You manage
On 2019-02-07 06:04, Stefan K wrote:
Thanks, with degraded as kernel parameter and also in the fstab it works as
expected
That should be the normal behaviour, because a server must be up and running, and
I don't care about a device loss, that's why I use a RAID1. The device-loss
problem I can
Thanks, with degraded as kernel parameter and also in the fstab it works as
expected
That should be the normal behaviour, because a server must be up and running, and
I don't care about a device loss, that's why I use a RAID1. The device-loss
problem I can fix later, but it's important that a s
On Mon, Feb 4, 2019 at 11:46 PM Chris Murphy wrote:
>
> After remounting both devices and scrubbing, it's dog slow. 14 minutes
> to scrub a 4GiB file system, complaining the whole time about
> checksums on the files not replicated. All it appears to be doing is
> replicating metadata at a snail's pace
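For anyone reproducing this, the commands involved are roughly (the mount point is a placeholder):

    btrfs scrub start /mnt      # runs in the background by default
    btrfs scrub status /mnt     # progress, rate, and error counts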
On Mon, Feb 4, 2019 at 3:19 PM Patrik Lundquist wrote:
>
> > On Mon, 4 Feb 2019 at 18:55, Austin S. Hemmelgarn wrote:
> >
> > On 2019-02-04 12:47, Patrik Lundquist wrote:
> > > On Sun, 3 Feb 2019 at 01:24, Chris Murphy wrote:
> > >>
> > >> 1. At least with raid1/10, a particular device can only
On Mon, 4 Feb 2019 at 18:55, Austin S. Hemmelgarn wrote:
>
> On 2019-02-04 12:47, Patrik Lundquist wrote:
> > On Sun, 3 Feb 2019 at 01:24, Chris Murphy wrote:
> >>
> >> 1. At least with raid1/10, a particular device can only be mounted
> >> rw,degraded one time and from then on it fails, and can
On 2019-02-04 12:47, Patrik Lundquist wrote:
On Sun, 3 Feb 2019 at 01:24, Chris Murphy wrote:
1. At least with raid1/10, a particular device can only be mounted
rw,degraded one time and from then on it fails, and can only be ro
mounted. There are patches for this but I don't think they've been
On Sun, 3 Feb 2019 at 01:24, Chris Murphy wrote:
>
> 1. At least with raid1/10, a particular device can only be mounted
> rw,degraded one time and from then on it fails, and can only be ro
> mounted. There are patches for this but I don't think they've been
> merged still.
That should be fixed si
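A sketch of the failure mode being described, with placeholder device names:

    mount -o degraded /dev/sdb2 /mnt      # first degraded rw mount succeeds
    # on affected kernels a later rw attempt is refused and only this works:
    mount -o degraded,ro /dev/sdb2 /mnt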
On Fri, Feb 1, 2019 at 3:28 AM Stefan K wrote:
>
> Hello,
>
> I've installed my Debian Stretch to have / on btrfs with raid1 on 2 SSDs.
> Today I wanted to test if it works. It works fine while the server is running
> and the SSD gets broken and I can replace it, but it looks like it does not
>
Hi Stefan,
On 2/1/19 11:28 AM, Stefan K wrote:
>
> I've installed my Debian Stretch to have / on btrfs with raid1 on 2
> SSDs. Today I wanted to test if it works. It works fine while the server
> is running and the SSD gets broken and I can replace it, but it looks
> like it does not work if the
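For context, a sketch of creating the kind of two-SSD raid1 filesystem being tested (device names are placeholders):

    # mirror both data and metadata across the two devices
    mkfs.btrfs -m raid1 -d raid1 /dev/sda2 /dev/sdb2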