On Wed, Dec 23, 2015 at 7:21 PM, Duncan <1i5t5.dun...@cox.net> wrote:
> Donald Pearson posted on Wed, 23 Dec 2015 09:53:41 -0600 as excerpted:
>
>> Additionally real Raid10 will run circles around what BTRFS is doing in
>> terms of performance. In the 20 drive array y
On Tue, Dec 22, 2015 at 10:13 PM, Duncan <1i5t5.dun...@cox.net> wrote:
> Donald Pearson posted on Tue, 22 Dec 2015 17:56:29 -0600 as excerpted:
>
>>> Also understand with Btrfs RAID 10 you can't lose more than 1 drive
>>> reliably. It's not like a strict raid1+0 where you can lose all of
>>> the "copy 1" *OR* "copy 2" mirrors.
On Wed, Dec 23, 2015 at 12:20 PM, Goffredo Baroncelli
<kreij...@inwind.it> wrote:
> On 2015-12-23 16:53, Donald Pearson wrote:
> [...]
>>
>> Additionally real Raid10 will run circles around what BTRFS is doing
>> in terms of performance. In the 20 drive array you'r
>
> Also understand with Btrfs RAID 10 you can't lose more than 1 drive
> reliably. It's not like a strict raid1+0 where you can lose all of the
> "copy 1" *OR* "copy 2" mirrors.
Pardon my pea brain but this sounds like a pretty bad design flaw?
I read an implication in a different thread that defrag and autodefrag
behave differently in that autodefrag is more snapshot friendly for
COW data.
Did I understand that correctly? I have not been doing defrag on my
virtual machine image directory because I do use a snapshot schedule
and the
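In command form the distinction looks roughly like this; /mnt and
/mnt/vm-images are illustrative paths, not from the thread:

  mount -o autodefrag /dev/sdX /mnt               # defrag new writes as they land
  btrfs filesystem defragment -r /mnt/vm-images   # one-shot, rewrites extents

The usual caution from this era: a manual defragment of snapshotted
files breaks reflinks and can duplicate the data, which is why people
hesitated to defragment VM image directories.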
> bytenr mismatch, want=12060305965056, have=13820656527619066643
> Couldn't read chunk tree
> Couldn't open file system
> [root@rockstor ~]#
>
> Thanks,
>
> Scotty Edmonds
> sco...@scottyedmonds.com
>
> ____
> From: Donald Pearson <donaldwhpear...@g
I think we need to see what some of the more experienced users think
on this one. But you can try removing sdh and seeing if you can mount
it *read only* and degraded. Just make sure whatever you do or play
with is done read only. Don't try any fixes or repairs.
What does btrfs check without any repair options report?
btrfs check /dev/sdd
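A minimal read-only inspection sequence, assuming the pool normally
mounts at /mnt (the mount point is illustrative):

  btrfs check /dev/sdd                 # report only; no --repair
  mount -o ro,degraded /dev/sdd /mnt   # degraded, read-only attempt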
On Thu, Nov 12, 2015 at 12:48 PM, Scotty Edmonds
wrote:
> Rockstor was running great, I ordered a SuperMicro 24-bay Chassis and decided
> to power down the machine while I was away. When I
> On Mon, Oct 26, 2015 at 2:46 PM, cheater00 . <cheate...@gmail.com> wrote:
>> I don't remember doing that, but just to exclude everything, how do I check?
>>
>> On Mon, Oct 26, 2015 at 2:45 PM, Donald Pearson
>> <donaldwhpear...@gmail.com> wrote:
Accidentally didn't reply to the list the 1st time.
I see the same issue when I have quotas enabled. If you have quotas
on, see if turning them off helps.
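Turning them off is a single command; /mnt stands in for the actual
mount point:

  btrfs quota disable /mnt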
On Mon, Oct 26, 2015 at 7:16 AM, cheater00 . wrote:
> Hi guys,
> I am running into really bad performance. Here's my
> ...the fstab says that - could they be enabled in another way? How
> do I check for sure?
> The man page doesn't say how to check the status:
> https://btrfs.wiki.kernel.org/index.php/Manpage/btrfs-quota
>
> On Mon, Oct 26, 2015 at 2:32 PM, Donald Pearson
> <donaldwhpear...@gma
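One way to check, assuming the filesystem is mounted at /mnt
(illustrative): list the qgroups, which only works while quotas are
enabled.

  btrfs qgroup show /mnt
  # errors out if quotas are off (exact wording varies by
  # btrfs-progs version)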
I would not use Raid56 in production. I've tried using it a few
different ways but have run into trouble with stability and
performance. Raid10 has been working excellently for me.
On Wed, Oct 14, 2015 at 3:19 PM, Sjoerd wrote:
> Hi all,
>
> Is RAID6 still considered
...is, though, that over time your system will probably grow and
expand, and zfs is very locked into the original configuration.
Adding vdevs is a poor solution, IMO.
On Wed, Oct 14, 2015 at 3:34 PM, Lionel Bouton
<lionel-subscript...@bouton.name> wrote:
> On 14/10/2015 22:23, Donald Pearson wrote:
I ultimately decided
to use btrfs on my personal equipment because it promises to be more
organic and my commodity hardware definitely likes to play the organic
role. :)
On Wed, Oct 14, 2015 at 4:15 PM, Rich Freeman
<r-bt...@thefreemanclan.net> wrote:
> On Wed, Oct 14, 2015 at 4:53 P
On Mon, Oct 12, 2015 at 12:33 AM, Duncan <1i5t5.dun...@cox.net> wrote:
> Donald Pearson posted on Sun, 11 Oct 2015 11:46:14 -0500 as excerpted:
>
>> Kernel 4.2.2-1.el7.elrepo btrfs-progs v4.2.1
>>
>> I'm attempting to convert a filesystem from raid6 to raid10. I
Kernel 4.2.2-1.el7.elrepo
btrfs-progs v4.2.1
I'm attempting to convert a filesystem from raid6 to raid10. I didn't
have any functional problems with it, but raid6 performance is abysmal
compared to basically the same arrangement in raid10, so I thought I'd
just get away from raid56 for a while (I also
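The conversion itself is a balance with convert filters; /mnt/pool is
an illustrative mount point:

  btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/pool
  btrfs balance status /mnt/pool    # progress, from another shell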
see corresponding disk i/o, and the
process goes away after a reasonable amount of time.
On Tue, Jul 21, 2015 at 8:29 PM, Donald Pearson
donaldwhpear...@gmail.com wrote:
Thanks for the feedback Duncan.
It doesn't appear to be a big deal to disable quotas so that's what
I'll do for now.
On Tue
Thanks for the feedback Duncan.
It doesn't appear to be a big deal to disable quotas so that's what
I'll do for now.
On Tue, Jul 21, 2015 at 4:29 AM, Duncan 1i5t5.dun...@cox.net wrote:
Donald Pearson posted on Mon, 20 Jul 2015 08:33:47 -0500 as excerpted:
Also, FWIW, the btrfs quota subsystem
On Mon, Jul 20, 2015 at 3:28 AM, Duncan 1i5t5.dun...@cox.net wrote:
Donald Pearson posted on Mon, 20 Jul 2015 00:15:26 -0500 as excerpted:
I'm starting to think there's something wrong with creating and removing
snapshots that leaves btrfs-cleaner either locked up or nearly so. If
the btrfs
backup.virtual_machines.hourly
[root@san01 virtual_machines]# date
Mon Jul 20 00:14:05 CDT 2015
On Wed, Jul 15, 2015 at 4:49 PM, Marc MERLIN m...@merlins.org wrote:
On Wed, Jul 15, 2015 at 01:02:29PM -0500, Donald Pearson wrote:
BTW, is anybody else experiencing btrfs-cleaner consuming heavy
resources for a very long time
  PID USER PR NI VIRT RES SHR S  %CPU %MEM     TIME+ COMMAND
 4134 root 20  0    0   0   0 R 100.0  0.0   2:41.40 btrfs-cleaner
 4183 root 20  0    0   0   0 R  99.7  0.0 191:11.33 btrfs-cleaner
On Wed, Jul 15, 2015 at 9:42 AM, Donald Pearson
donaldwhpear...@gmail.com wrote:
Implementation question about your scripts, Marc...
I've set up some routines for different backup and retention intervals
and periods in cron, but quickly ran into stepping on my own toes
because of the locking mechanism. I could just disable the locking,
but I'm not
sure if that's the best approach and I
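One sketch of an alternative, using flock(1) from util-linux with a
separate lock file per interval so the jobs don't serialize against
each other; the paths and script name here are hypothetical:

  # crontab: -n skips a run if the previous one is still going
  0 * * * * flock -n /run/lock/backup-hourly.lock /usr/local/bin/backup.sh hourly
  0 3 * * * flock -n /run/lock/backup-daily.lock /usr/local/bin/backup.sh daily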
Last time something happened and I poked at it myself I ended up
ruining the pool so I thought I'd ask here before doing anything.
I'm not sure if this really indicates that anything needs doing or
not. The filesystem will mount like normal.
It doesn't look like the core dump was written
On Fri, Jul 10, 2015 at 11:30 PM, Duncan 1i5t5.dun...@cox.net wrote:
Donald Pearson posted on Fri, 10 Jul 2015 15:57:46 -0500 as excerpted:
If I'm reading this right, my most fragmented file
(Training-flat.vmdk) is now almost 3x more fragmented?
[snip to context for brevity]
# filefrag
If I'm reading this right, my most fragmented file
(Training-flat.vmdk) is now almost 3x more fragmented?
[root@san01 tank]# filefrag /mnt2/tank/virtual_machines/virtual_machines/Training/*
/mnt2/tank/virtual_machines/virtual_machines/Training/Training-flat.vmdk: 1444 extents found
Marc,
I thought I'd give yours a try, and I'm probably embarrassing myself
here, but I'm running into this issue. CentOS 7.
[root@san01 tank]# ./btrfs-subvolume-backup store /mnt2/backups
./btrfs-subvolume-backup: line 177: shlock: command not found
/var/run/btrfs-subvolume-backup held for
... and I just found your other blog post about stealing shlock out of
inn. Officially embarrassed!
On Thu, Jul 9, 2015 at 8:35 PM, Donald Pearson
donaldwhpear...@gmail.com wrote:
Marc,
I thought I'd give yours a try, and I'm probably embarrassing myself
here, but I'm running into this issue. CentOS 7
Something I've noticed scrubbing two pools that I have, one is Raid6
and the other is Raid5.
The scrubbing goes along very slowly, and I think it's because there
is always one disk that's operating differently from the rest. Which
disk it is changes.
Here is an iostat of the current scrub, and you can
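Besides iostat, aggregate progress is available from btrfs itself;
/mnt/pool is an illustrative mount point:

  btrfs scrub status /mnt/pool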
Basically I wouldn't trust the drive that's already showing signs of
failure to survive a dd. It isn't completely full, so the recovery is
less load. That's just the way I see it. But I see your point of
trying to get drive images now to hedge against failures.
Unfortunately those errors are
(empty) drive, so that the data on the two original disks is not
touched at all?
Regards,
Hendrik
On 07.07.2015 15:14, Donald Pearson wrote:
That's what it looks like. You may want to try reseating cables, etc.
Instead of mounting and file copy, btrfs restore might be worth a shot
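btrfs restore pulls files off an unmountable filesystem without
writing to it; the device and destination here are illustrative:

  btrfs restore -D /dev/sdX /tmp/ignored     # -D: dry run, list only
  btrfs restore -v /dev/sdX /mnt/recovery-target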
is failing?!
Regards,
Hendrik
On 07.07.2015 00:59, Donald Pearson wrote:
Anything in dmesg?
On Mon, Jul 6, 2015 at 5:07 PM, hend...@friedels.name
hend...@friedels.name wrote:
Hello,
It seems that mounting works, but the system locks up completely soon
after I start backing up.
Greetings
Based on my experience Hugo's advice is critical: get the bad drive
out of the pool when in raid56, and do not try to replace or delete it
while it's still attached and recognized.
If you add a new device, mount degraded and rebalance. If you don't,
mount degraded then device delete missing.
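Spelled out, under the assumption that the pool mounts at /mnt and the
failed disk is already disconnected (device names illustrative):

  mount -o degraded /dev/sdb /mnt
  # with a replacement disk on hand:
  btrfs device add /dev/sdX /mnt && btrfs balance start /mnt
  # without one:
  btrfs device delete missing /mnt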
On
Anything in dmesg?
On Mon, Jul 6, 2015 at 5:07 PM, hend...@friedels.name
hend...@friedels.name wrote:
Hello,
It seems that mounting works, but the system locks up completely soon
after I start backing up.
Greetings,
Hendrik
-- Original message --
From: Donald Pearson
Date: Mon., 6
...for consistency (scrub, I suppose, but is it safe?)?
Regards,
Hendrik
On 06.07.2015 22:52, Omar Sandoval wrote:
On 07/06/2015 01:01 PM, Donald Pearson wrote:
Based on my experience Hugo's advice is critical: get the bad drive
out of the pool when in raid56, and do not try to replace or delete
On Fri, Jul 3, 2015 at 8:29 AM, Martin Steigerwald mar...@lichtvoll.de wrote:
On Friday 03 July 2015 09:31:03 Duncan wrote:
Donald Pearson posted on Thu, 02 Jul 2015 13:19:41 -0500 as excerpted:
btrfs restore complains that every device is missing except the one that
you specify on executing
used 5.00GiB path /dev/loop2
Btrfs v3.16.2
On Thu, Jul 2, 2015 at 11:01 PM, Donald Pearson
donaldwhpear...@gmail.com wrote:
Because this is raid1, I believe you need another device for that to work.
On Fri, Jul 3, 2015 at 12:57 AM, Rich Rauenzahn rraue...@gmail.com wrote:
Yes, I tried
ERROR: error during balancing '/' - No space left on device
There may be more info in syslog - try dmesg | tail
On Thu, Jul 2, 2015 at 10:45 PM, Donald Pearson
donaldwhpear...@gmail.com wrote:
Have you seen this article?
I think the interesting part for you is the balance cannot run
because
, 2015 at 9:31 PM, Chris Murphy li...@colorremedies.com wrote:
On Wed, Jul 1, 2015 at 7:38 PM, Donald Pearson
donaldwhpear...@gmail.com wrote:
Here's the drive vomiting in my logs after it got halfway through the
dd image attempt.
Jul 1 17:05:51 san01 kernel: sd 0:0:6:0: [sdg] FAILED Result
Have you seen this article?
I think the interesting part for you is the "balance cannot run
because the filesystem is full" heading.
http://marc.merlins.org/perso/btrfs/post_2014-05-04_Fixing-Btrfs-Filesystem-Full-Problems.html
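The core of that article's fix is a filtered balance that only
rewrites nearly-empty chunks, so it can proceed even when a full
balance has no room; /mnt is illustrative:

  btrfs balance start -dusage=5 /mnt
  # if nothing is freed, retry with larger values (10, 20, ...)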
On Fri, Jul 3, 2015 at 12:32 AM, Rich Rauenzahn rraue...@gmail.com
assemble all the data that I know how to and follow that link Chris
suggested for filing a bug.
On Thu, Jul 2, 2015 at 12:00 PM, Chris Murphy li...@colorremedies.com wrote:
On Thu, Jul 2, 2015 at 8:49 AM, Donald Pearson
donaldwhpear...@gmail.com wrote:
I do see plenty of complaints about the sdg drive
I think it is. I have another raid5 pool that I've created to test
the restore function on, and it worked.
On Thu, Jul 2, 2015 at 1:26 PM, Chris Murphy li...@colorremedies.com wrote:
On Thu, Jul 2, 2015 at 12:19 PM, Donald Pearson
donaldwhpear...@gmail.com wrote:
Unfortunately btrfs image
That is correct. I'm going to rebalance my raid5 pool as raid6 and
re-test just because.
On Thu, Jul 2, 2015 at 1:37 PM, Chris Murphy li...@colorremedies.com wrote:
On Thu, Jul 2, 2015 at 12:32 PM, Donald Pearson
donaldwhpear...@gmail.com wrote:
I think it is. I have another raid5 pool
13:48 test_file_1gb
[root@san01 btrfs-progs]#
On Thu, Jul 2, 2015 at 1:45 PM, Donald Pearson
donaldwhpear...@gmail.com wrote:
That is correct. I'm going to rebalance my raid5 pool as raid6 and
re-test just because.
On Thu, Jul 2, 2015 at 1:37 PM, Chris Murphy li...@colorremedies.com wrote
...a `screen`, so I expect it's still running.
There are 4 other drives with the same total capacity as sdg, so I
would have expected them all to complete at about the same time.
Regards,
Donald
On Wed, Jul 1, 2015 at 11:09 AM, Donald Pearson
donaldwhpear...@gmail.com wrote:
Thanks Chris
Thanks Chris,
To my shame it turns out darkling didn't drop off IRC after all; I'm
new to all this and learning quickly that I need to sit on my hands.
I admit that, despite darkling's suggestion that my userspace tools
are probably fine, I pulled down a newer kernel from elrepo, so
currently I'm running
Hello,
darkling was helping me on IRC for a while before he had to drop
off; thanks for the help, darkling.
To pick up where we left off...
In summary, I have a 10 disk raid6 pool that I cannot mount.
btrfs fi show output is here - http://pastebin.com/aidGV20e
'tank' is the pool in question.
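On kernels of that era the usual low-risk first attempt was a
read-only mount with the recovery option (later renamed
usebackuproot), which tries older tree roots; the device name is
illustrative:

  mount -o ro,recovery /dev/sdX /mnt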
Buffer I/O error on dev sdg, logical block 189496912, async page read
On Wed, Jul 1, 2015 at 1:58 PM, Donald Pearson
donaldwhpear...@gmail.com wrote:
Small update on this, with no idea if this is useful information or not.
At some point within the last hour iostat shows that /dev/sdg is no
longer under
*** Error in `./btrfs': free(): invalid next size (fast): 0x01332100 ***
Segmentation fault
On Wed, Jul 1, 2015 at 2:05 PM, Donald Pearson
donaldwhpear...@gmail.com wrote:
I should have thought to check this to add earlier. I'm seeing errors
for /dev/sdg in dmesg (not surprised, I wanted this drive out of the
pool to begin with because it's
kernel: sd 0:0:6:0: [sdg] CDB: Read(10) 28 00 5a 5b f2 e0 00 01 00 00
On Wed, Jul 1, 2015 at 6:29 PM, Chris Murphy li...@colorremedies.com wrote:
On Wed, Jul 1, 2015 at 3:35 PM, Donald Pearson
donaldwhpear...@gmail.com wrote:
*** Error in `./btrfs': free(): invalid next size (fast