Hello,
if I create a raid10, it looks like this:
mkfs.btrfs -m raid10 -d raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
but if I have different JBODs and I want every mirror of the raid10 to be on a
different JBOD, how can I achieve that? In ZFS it looks like this:
zpool create -o ashift=12 nc_storage m
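For reference, the explicit mirror layout I mean in ZFS looks roughly like this (the device names and pairing are just an example, not from the original command):

zpool create -o ashift=12 nc_storage \
    mirror /dev/sdb /dev/sdc \
    mirror /dev/sdd /dev/sde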
:(
that means when one JBOD fails there is no guarantee that it still works fine,
like it does in ZFS? Well, that sucks.
Didn't anyone think to program it that way?
On Wednesday, January 16, 2019 2:42:08 PM CET Hugo Mills wrote:
> On Wed, Jan 16, 2019 at 03:36:25PM +0100, Stefan K wrote:
Hello,
does a roadmap exist, or something like "what to do first/next"?
I saw the project ideas[1] and there are a lot of interesting things in it (like
read/write caches, per-subvolume mount options, block devices, etc.), but there
is no plan or ordering of the ideas. Does btrfs have something like that?
a raid01 does not fit our use case + I can't
configure which disk is in which mirror.
On Wednesday, January 16, 2019 11:15:02 AM CET Chris Murphy wrote:
> On Wed, Jan 16, 2019 at 7:58 AM Stefan K wrote:
> >
> > :(
> > that means when one JBOD fails there is
Hello,
if I run 'bonnie++ -c4' the system becomes unusable and hangs, and I also get some
call traces in my syslog. Is that normal behavior?
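For reference, this is roughly how I run the benchmark and pull the traces afterwards (the test directory and user are just examples):

bonnie++ -c4 -d /mnt/test -u nobody
# grab the call traces from the kernel log
dmesg | grep -i -B 2 -A 20 'call trace'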
My system is:
uname -a
Linux tani 4.19.0-0.bpo.1-amd64 #1 SMP Debian 4.19.12-1~bpo9+1 (2018-12-30)
x86_64 GNU/Linux
btrfs fi sh
Label: none uuid: 24be286b-ece
so a simple
btrfs fi resize -4k /
does the trick?
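If so, a minimal sketch of doing it on the mounted root would be (assuming / is the filesystem in question):

btrfs filesystem resize -4k /
# verify the new size afterwards
btrfs filesystem show /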
On Friday, January 25, 2019 3:51:12 PM CET Qu Wenruo wrote:
>
> On 2019/1/25 3:44 PM, Stefan K wrote:
> > since it is my / (root) FS, is it not possible to do that online?
> >
> >
>
>
> >> You could resi
Hello,
I've installed my Debian Stretch to have / on btrfs with raid1 on 2 SSDs. Today
I wanted to test whether it works. It works fine as long as the server keeps
running when the SSD gets broken, and I can replace it, but it looks like it does
not work if the SSD has already failed by the time the server restarts. I got the error that
On 2/1/19 11:28 AM, Stefan K wrote:
> >
> > I've installed my Debian Stretch to have / on btrfs with raid1 on 2 SSDs.
> > Today I wanted to test whether it works. It works fine as long as the server
> > keeps running when the SSD gets broken, and I can replace it, but it looks
> >
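For context, replacing a broken SSD while the filesystem is still mounted is the part that does work; a rough sketch of that (the devid of the failed disk and the new device name are assumptions):

# replace the failed device (devid 2) with the new disk
btrfs replace start 2 /dev/sdc /
btrfs replace status /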
019 2:39:34 PM CET Austin S. Hemmelgarn wrote:
> On 2019-02-07 13:53, waxhead wrote:
> >
> >
> > Austin S. Hemmelgarn wrote:
> >> On 2019-02-07 06:04, Stefan K wrote:
> >>> Thanks, with degraded as a kernel parameter and also in the fstab it
> However the raid1 term only describes replication. It doesn't describe
> any policy.
yep, you're right, but most sysadmins expect some 'policies'.
If I use RAID1, I expect that if one drive fails, I can still boot _without_
boot issues, just some warnings etc., because I use raid1 to have si
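For reference, the 'degraded as a kernel parameter and also in the fstab' setup mentioned above looks roughly like this (the UUID is only a placeholder):

# kernel command line, e.g. in GRUB_CMDLINE_LINUX_DEFAULT
rootflags=degraded
# /etc/fstab entry for the root filesystem
UUID=<fs-uuid>  /  btrfs  defaults,degraded  0  0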
sorry to disturb this discussion,
are there any plans/dates to fix the raid5/6 issue? Is somebody working on this
issue? Cause this is, for me, one of the most important things for a fileserver;
with a raid1 config I lose too much disk space.
best regards
Stefan
On Saturday, September 8, 2018 8:40:50 AM CEST Duncan wrote:
> Stefan K posted on Fri, 07 Sep 2018 15:58:36 +0200 as excerpted:
>
> > sorry to disturb this discussion,
> >
> > are there any plans/dates to fix the raid5/6 issue? Is somebody working
> > on this issue? Caus
Dear Maintainer,
the command btrfs fi show takes too much time:
time btrfs fi show
Label: none uuid: 513dc574-e8bc-4336-b181-00d1e9782c1c
        Total devices 2 FS bytes used 2.34GiB
        devid    1 size 927.79GiB used 4.03GiB path /dev/sdv2
        devid    2 size 927.79GiB used 4.03GiB path /dev/sda
> If your primary concern is to make the fs as stable as possible, then
> keep the number of snapshots to a minimum, and avoid any functionality you won't
> use, like qgroup, routine balance, or RAID5/6.
>
> And keep the necessary btrfs-specific operations to a minimum, like
> subvolume/snapshot (and don't keep
Hi,
> You may try to run the show command under strace to see where it blocks.
any recommendations for strace options?
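In case it helps, this is a sketch of what I would run; the strace options are just common choices, not something from the reply above:

# -f follows children, -tt/-T add timestamps and per-syscall durations, -o writes to a file
strace -f -tt -T -o /tmp/btrfs-show.trace btrfs fi show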
On Friday, September 14, 2018 1:25:30 PM CEST David Sterba wrote:
> Hi,
>
> thanks for the report, I've forwarded it to the issue tracker
> https://github.com/kdave/btrfs-progs
Hello,
I've played a little bit with raid1:
my steps were (see the sketch after this list):
1. create a raid1 with btrfs (add device; balance start -mconvert=raid1
-dconvert=raid1 /)
2. after finishing, I shut down the server, removed a device and started it
again,
3. it works (I used the degraded option in fstab)
4. I shut down the
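A minimal sketch of steps 1-3 as commands (device name and mount point are assumptions):

btrfs device add /dev/sdb /
btrfs balance start -mconvert=raid1 -dconvert=raid1 /
# step 3: add "degraded" to the root mount options in /etc/fstab, e.g.
# UUID=<fs-uuid>  /  btrfs  defaults,degraded  0  0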