RAID5 block (6x1Gig=5Gig capacity).
This means the disks are likely to fill up during the convert. To
avoid that, I'm looping a convert with a block-group limit together with a
balance that targets those block groups. The balance pass is pretty quick,
but it does slow the overall process down.
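For reference, the loop is roughly this shape; the mount point, target
profile and limit value below are only examples, not my exact commands:

while true; do
    # convert at most 10 block groups per pass; 'soft' skips ones already in the target profile
    btrfs balance start -dconvert=raid5,soft,limit=10 /media/btrfs
    # then compact nearly-empty block groups so the next pass has unallocated space to write into
    btrfs balance start -dusage=5 /media/btrfs
    # stop the loop by hand once a convert pass reports it had nothing left to relocate
done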
On Thu, Dec 3, 2015 at 9:14 AM,
>>> Various balancing starts, cancels, profile converts etc. worked
>>> surprisingly well, compared to my experience a year back with RAID5
>>> (hitting bugs, crashes).
>>>
>>> A RAID6 full balance with this setup might be very slow, even if the
>>> fs would be not so
a version of 4.5.1 before upgrading; that is my
usual kernel update strategy.
# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.4 LTS"
Are there any other details relevant to this question that people
would like to see?
-
at least the scheduling class listed in iotop is now idle, so I hope
that means it will be friendlier to other processes.
--
Gareth Pye - blog.cerberos.id.au
Level 2 MTG Judge, Melbourne, Australia
> near 90%, thus making it I/O bound.
And yes, I'd love to switch to SSDs, but replacing 12 2TB drives is still a bit pricey.
--
Gareth Pye - blog.cerberos.id.au
Level 2 MTG Judge, Melbourne, Australia
Yeah, RAID5. I'm now doing pause and resume on it to let it take
multiple nights; the idle I/O class should let other processes complete in
a reasonable time.
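The pause/resume dance is just the standard commands, something like this
(mount point is an example):

btrfs balance pause /data     # before the machine is needed for other work in the morning
btrfs balance status /data    # confirm it is paused and see how far it has got
btrfs balance resume /data    # kick it off again at night; it continues from where it stopped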
On Wed, Apr 6, 2016 at 3:34 AM, Henk Slager wrote:
> On Tue, Apr 5, 2016 at 4:37 AM, Duncan <1i5t5.dun...@cox.net> wrote:
>> Gare
re delete (not very practical).
>
> -chris
--
Gareth Pye
Level 2 MTG Judge, Melbourne, Australia
>>>
>>> I still have plenty of free space:
>>>
>>> # df -h /media/btrfs
>>> Filesystem Size Used Avail Use% Mounted on
>>> /dev/sdd 14T 5.8T 2.2T 74% /media/btrfs
>>>
>>> Any idea how I can get out of
have 6 drives mirrored across a local network; this is done with DRBD.
At any one time only a single server has the 6 drives mounted with btrfs.
Is this a ticking time bomb?
--
Gareth Pye - blog.cerberos.id.au
Level 2 MTG Judge, Melbourne, Australia
"Dear God, I would like to file a bug report&
ry very lucky.
Very very lucky doesn't sound likely.
On Fri, Aug 14, 2015 at 8:54 AM, Hugo Mills wrote:
> On Fri, Aug 14, 2015 at 08:32:46AM +1000, Gareth Pye wrote:
>> On Thu, Aug 13, 2015 at 9:44 PM, Austin S Hemmelgarn
>> wrote:
>> > 3. See the warnings about do
ing the default to -o degraded is wise, at
> all.
>
> --
> Duncan - List replies preferred. No HTML msgs.
> "Every nonfree program has a lord, a master --
> and if you use the program, he is your master." Richard Stallman
>
ded final functionality I don't think
balances are anywhere near that optimised currently.
Starting with a relatively green format isn't a great option for a
file system you intend to use forever.
--
Gareth Pye - blog.cerberos.id.au
Level 2 MTG Judge, Melbourne, Australia
"Dear God
dmesg and syslog don't have anything obvious in them.
--
Gareth Pye - blog.cerberos.id.au
Level 2 MTG Judge, Melbourne, Australia
"Dear God, I would like to file a bug report"
Poking around, I just noticed that btrfs dev stats /data shows that
3 of my drives have some read_io_errors. I'm guessing that is a bad
thing. I assume this would indicate bad hardware and would be a likely
cause of system crashes.
:(
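For the record, the checks look roughly like this; the device name is just
an example:

btrfs device stats /data    # per-device read/write/flush/corruption/generation error counters
smartctl -a /dev/sda | grep -iE 'reallocated|pending|uncorrect'    # cross-check the drive's own SMART counters
# the btrfs counters are cumulative; 'btrfs device stats -z' prints and then resets them,
# which makes it easier to tell old errors from new ones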
On Tue, Dec 1, 2015 at 11:38 PM, Gareth Pye wrote:
rnel.org is making a log that looks like it's up to date but
isn't; that's awkward :(
Building now from the github you mentioned.
Also running a scrub, but I'm starting to suspect something else is
responsible. It ran fine overnight but crashed in less than a minute
after I l
Will do that once the scrub finishes/I get home from work.
On Wed, Dec 2, 2015 at 7:30 AM, Austin S Hemmelgarn
wrote:
> On 2015-12-01 15:12, Gareth Pye wrote:
>>
>> On Wed, Dec 2, 2015 at 2:14 AM, Duncan <1i5t5.dun...@cox.net> wrote:
>>>
>>> So if you
Looks like I have some issues. Going to confirm cables are all secure
and run a memtest.
On Wed, Dec 2, 2015 at 9:22 AM, Gareth Pye wrote:
> Will do that once the scrub finishes/I get home from work.
>
> On Wed, Dec 2, 2015 at 7:30 AM, Austin S Hemmelgarn
> wrote:
>> On 2015-1
Thanks for that info. RAM appears to be checking out fine, and smartctl
reported that the drives are old but one had some form of elevated
error count. Looks like I might be buying a new drive.
On Wed, Dec 2, 2015 at 9:01 PM, Duncan <1i5t5.dun...@cox.net> wrote:
> Gareth Pye posted on Wed, 02
. and again if there's a third scan required,
>> etc.
>>
>> I'd say just make it automatic on corrected metadata errors as I can't
>> think of a reason people wouldn't want it, given the time it would save
>> over rerunning a full scrub over and ov
lse you need to be willing to lose everything on
> this volume, without further notice, i.e. you need a backup strategy
> that you're prepared to use without undue stress. If you can't do
> that, you need to look at another arrangement. Both LVM and mdadm
> raid6 + XFS are mo
not the MythBuntu
> installer. I don't remember if that was even an option.
>
> David
>
>
> On Wed, 2015-12-09 at 14:28 -0700, Chris Murphy wrote:
>> On Wed, Dec 9, 2015 at 12:56 PM, Gareth Pye wrote:
>> > I wouldn't blame Ubuntu too much, 14.10 went out o
Before that there is just a long run of near-identical
"relocating block group" messages.
Any ideas on what is going on here?
--
Gareth Pye
Level 2 MTG Judge, Melbourne, Australia
"Dear God, I would like to file a bug report"
less than 2% usage (plus presumably 3 mostly full blocks). That sounds
like a bug.
On Tue, Jan 20, 2015 at 10:45 AM, Gareth Pye wrote:
> Hi,
>
> I'm attempting to convert a btrfs filesystem from raid10 to raid1.
> Things had been going well through a couple of pauses and resumes, but
&
g limit=10 to speed up testing. I have tried without it, and it just takes
longer to complete; the whole time the RAID1 total skyrockets while the
RAID1 used doesn't move.
On Tue, Jan 20, 2015 at 6:38 PM, Chris Murphy wrote:
>> On Mon, Jan 19, 2015 at 5:13 PM, Gareth Pye wrote:
>&
lance to clear up the empty blocks; the flags 65 messages are from the
RAID10->RAID1 balance.
On Wed, Jan 21, 2015 at 8:41 AM, Chris Murphy wrote:
> On Tue, Jan 20, 2015 at 2:25 PM, Gareth Pye wrote:
>> Yeah, I have updated btrfs-progs to 3.18. While it is plausible that
>> the bug
first few results from
logical-resolve have been for files in the 1G~2G range, so that could
be some sticky spaghetti.
On Wed, Jan 21, 2015 at 9:53 AM, Chris Murphy wrote:
> On Tue, Jan 20, 2015 at 2:49 PM, Gareth Pye wrote:
>> The conversion is going the other way (raid10->raid1
What are the chances that splitting all the large files up into sub-gig
pieces, finishing the convert, then recombining them all will work?
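Something like the following is what I have in mind; the file names are
placeholders:

split -b 1G -d -a 3 bigfile bigfile.part.    # break a large file into sub-1GiB pieces
rm bigfile                                   # drop the original so its large extents go away
# ... finish the convert, then put the file back together ...
cat bigfile.part.* > bigfile
rm bigfile.part.*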
On Wed, Jan 21, 2015 at 3:03 PM, Chris Murphy wrote:
> On Tue, Jan 20, 2015 at 4:04 PM, Gareth Pye wrote:
>> Yeah, we don't have that much space spare
PS: the only snapshots are of apt-mirror, which doesn't have large files.
On Fri, Jan 23, 2015 at 8:58 AM, Gareth Pye wrote:
> What are the chances that splitting all the large files up into sub
> gig pieces, finish convert, then recombine them all will work?
>
> On Wed, Jan 21
ncan <1i5t5.dun...@cox.net>:
>
>> Marc Joliet posted on Fri, 23 Jan 2015 08:54:41 +0100 as excerpted:
>>
>> > Am Fri, 23 Jan 2015 04:34:19 + (UTC)
>> > schrieb Duncan <1i5t5.dun...@cox.net>:
>> >
>> >> Gareth Pye posted on Fri, 23
5.dun...@cox.net> wrote:
> Gareth Pye posted on Tue, 27 Jan 2015 14:24:03 +1100 as excerpted:
>
>> Have gone with the move stuff off then finish convert plan. Convert has
>> now finished and I'm 60% of the way through moving all the big files
>> back on.
>>
>>
>>>>> __btrfs_abort_transaction+0x5f/0x130
>>>>> [btrfs]
>>>>> [372668.323339] [] btrfs_finish_ordered_io+0x552/0x5e0
>>>>> [btrfs]
>>>>> [372668.323418] [] finish_ordered_fn+0x15/0x20 [btrfs]
>>>>> [372668.323466] [] n
here?
--
Gareth Pye
Level 2 MTG Judge, Melbourne, Australia
"Dear God, I would like to file a bug report"
1.
# btrfs fi sh /data
Label: none uuid: b2986e1a-0891-4779-960c-e01f7534c6eb
    Total devices 6 FS bytes used 4.41TiB
    devid    1 size 1.81TiB used 1.48TiB path /dev/drbd0
    devid    2 size 1.81TiB used 1.48TiB path /dev/drbd1
    devid    3 size 1.81TiB used 1.48TiB path /d
I guess it might be relevant that this array was originally created as
raid5 back in the early days of raid5 and converted to raid1 over a
year ago.
On Mon, May 25, 2015 at 2:00 PM, Gareth Pye wrote:
> 1.
> # btrfs fi sh /data
> Label: none uuid: b2986e1a-0891-4779-960c-e01
>
> --
> Peter Marheine
> Don't Panic
--
Gareth Pye
Level 2 MTG Judge, Melbourne, Australia
"Dear God, I would like to file a bug report"
After a full balance that is likely to change to 4.41TiB used of
4.41TiB total. Is that going to help anything? Peter is saying it's a
known bug that convert can't do anything about currently.
On Tue, May 26, 2015 at 2:36 AM, Anthony Plack wrote:
>
>> On May 24, 2015, at 11:00 PM
ive this? I want all four drives in
> a RAID10 setup.
>
> Thanks in advance
A) Take the array offline
B) DD the contents of one of the 750G drives to a new 3T drive
C) Remove the 750G from the system
D) btrfs scan
E) Mount array
F) Run a balance
I know that not physically removing the old copy of the drive would
cause massive issues, but as long as I do remove it, everything should
be fine, right?
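In concrete terms I'm picturing roughly this; the device names, mount
point and the <devid> placeholder are all hypothetical:

# with the array unmounted / offline
dd if=/dev/sdOLD of=/dev/sdNEW bs=64M conv=noerror status=progress   # B: copy the 750G member onto the 3T drive
# C: physically remove the old 750G drive before going any further
btrfs device scan                              # D: let btrfs re-find the member devices
mount /dev/sdNEW /data                         # E: mount the array via any member
btrfs filesystem resize <devid>:max /data      # the copied member still looks 750G until resized
btrfs balance start /data                      # F: rebalance across the new layout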
On Fri, Nov 25, 2016 at 3:31 PM, Zygo Blaxell
wrote:
>
> This risk mitigation measure does rely on admins taking a machine in this
> state down immediately, and also somehow knowing not to start a scrub
> while their RAM is failing...which is kind of an annoying requirement
> for the admin.
Attem
> "Every nonfree program has a lord, a master --
> and if you use the program, he is your master." Richard Stallman
>
all again. When I only saw one disk having troubles I was
concerned. Now that I notice both sda and sdc having issues, I'm thinking I
might be about to have a bad time.
What else should I provide?
--
Gareth Pye - blog.cerberos.id.au
Level 2 MTG Judge, Melbourne, Australia
Current status:
Knowing things were bad, I did set the scterc values sanely, but the
box was getting less stable, so I thought a reboot was a good idea.
That reboot failed to mount the partition at all, and everything
triggered my 'is this a PSU issue?' sense, so I've left the box off till
I've got time
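For anyone curious, setting the scterc values looks something like this;
the device name is just an example:

smartctl -l scterc /dev/sda          # show the drive's current error recovery timeouts
smartctl -l scterc,70,70 /dev/sda    # 7.0s read/write timeouts, well under the kernel's 30s default
# the setting is usually lost on a power cycle, so it wants to live in a boot script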
Am I right that the wr: 0 means that the disks should at least be in a
nice consistent state? I know that overlapping read failures can still
cause everything to fail.
When I can get this stupid box to boot from an external drive I'll
have some idea of what is going on
Okay, things aren't looking good. The FS won't mount for me:
http://pastebin.com/sEEdRxsN
On Tue, Aug 30, 2016 at 9:01 AM, Gareth Pye wrote:
> When I can get this stupid box to boot from an external drive I'll
> have some idea of what is going on....
--
Gareth Pye - blog.cerberos.id.au
fix that before at least confirming that the things I partially care
about have a recent backup.
--
Gareth Pye - blog.cerberos.id.au
Level 2 MTG Judge, Melbourne, Australia
Or I could just once again select the right boot device in the BIOS. I
think I want some new hardware :)
On Wed, Aug 31, 2016 at 7:23 AM, Gareth Pye wrote:
> On Wed, Aug 31, 2016 at 4:28 AM, Chris Murphy wrote:
>> But I'd try a newer kernel before you
>> give up on it.
>
far. Am I right, or is there likely to be corrupt data in the files I've
synced off?
On Wed, Aug 31, 2016 at 7:46 AM, Gareth Pye wrote:
> Or I could just once again select the right boot device in the bios. I
> think I want some new hardware :)
>
> On Wed, Aug 31, 2016 at 7:23 A
> On 2016-08-31 19:04, Gareth Pye wrote:
>>
>> ro,degraded has mounted it nicely and my rsync of the more useful data
>> is progressing at the speed of WiFi.
>>
>> There are repeated read errors from one drive still but the rsync
>> hasn't bailed yet, which
PDF doc info dates it at 23/1/2013, which is the best guess that can
easily be found.
> More testing usually means more bugs found etc…
Yes, but releasing code before it's somewhat polished just generates a
mountain of bug reports.
Back in 2010, when I set up a server at work, I was eagerly awaiting the
RAID5 implementation that was just a couple of months away.
Don't worry it doe
>>> + ret = -EINVAL;
>>> + goto out;
>>> +}
>>> +
>>> if (strcmp(device_path, "missing") == 0) {
>>> struct list_head *devices;
>>> struct btrfs_device *tmp;
>>>
>>>
>
> pretty sure
> you've compiled just the master branch of both linux-btrfs and
> btrfs-progs.
>
> On Mon, Feb 4, 2013 at 8:59 PM, Gareth Pye wrote:
>> I felt like having a small play with this stuff, as I've been wanting
>> it for so long :)
>>
>> But apparent
).
Conversely, there is little benefit to putting one stripe of a
raid0/5/6 onto the SSD device without the rest of that data reaching
the same level.
Not that additional reasons to do this work in btrfs were needed, but
some thought is needed about how this implementation interacts with
those fea
e it needs 15G, but because some of the storage is used in RAID1,
df shows 10G free even though the 15G install would work fine. If only
you could force the tool to install where it knows it doesn't have
sufficient space?
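The numbers the installer would actually need are visible with something
like the following; the mount point is just an example:

btrfs filesystem df /mnt      # per-profile view: Data/Metadata/System, total vs used
btrfs filesystem show /mnt    # raw device sizes and how much of each is allocated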
--
Gareth Pye
Level 2 Judge, Melbourne, Australia
Australian MTG Forum: mtgau.com
not panic.
> For more info see: http://en.wikipedia.org/wiki/OpenPGP
--
Gareth Pye
Level 2 Judge, Melbourne, Australia
Australian MTG Forum: mtgau.com
gar...@cerberos.id.au - www.rockpaperdynamite.wordpress.com
"Dear God, I would like to file a bug report"
, Z2, and Z3 do
> because they used the RAID acronym.
>
> On Mon, Feb 20, 2012 at 8:47 PM, Gareth Pye wrote:
>> On Tue, Feb 21, 2012 at 12:07 PM, Tom Cameron wrote:
>>>
>>> It seems from the BTRFS documentation that the RAID1 profile is
>>> actually "mir
o change the 'RAID level' to be the RAID1 analogue
for the new number of disks.
Users will forget that, and they will lose data because of it. At least
with an M=N mode btrfs can say it tried to make it easy to avoid that
pitfall.
(resend in plain text for mailing list, CC list received the
On Tue, Jun 26, 2012 at 8:37 AM, H. Peter Anvin wrote:
> They do? E.g. mdadm doesn't make them...
Hrm, you are right. It is something I always confirm is happening,
though. Without an M=N mode there would need to be two balances, as the
first balance would be doing it wrong :(
--
Gareth Pye
A - fsck in read-only mode to confirm things look good
B - mount read-only, confirm that I can read files well
C - mount read-write, confirm it works
Install the latest OS, upgrade to the latest kernel, then repeat the above steps.
Any likely hiccups with the above procedure, and suggested alternatives?
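In concrete terms the steps would be roughly this; the device name and
mount point are examples:

btrfs check --readonly /dev/sdb    # A: read-only fsck, no repairs attempted
mount -o ro /dev/sdb /mnt          # B: read-only mount, spot-check that files read back cleanly
mount -o remount,rw /mnt           # C: flip to read-write and confirm writes work
btrfs scrub start -Bd /mnt         # optional: verify all checksums while it's mounted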