On December 13, 2018 9:14:46 PM UTC, Lou Picciano wrote:
>The plot thickens, I’m afraid. Since last post, I’ve replaced the
>drive, and throughput remains molasses-in-January slow…
>period indicated below is more than 24 hours:
>
> scan: resilver in progress since Wed Dec 12 15:13:11 2018
On 13.12.18. 22:14, Lou Picciano wrote:
> The plot thickens, I’m afraid. Since last post, I’ve replaced the drive, and
> throughput remains molasses-in-January slow…
> period indicated below is more than 24 hours:
>
> scan: resilver in progress since Wed Dec 12 15:13:11 2018
> 26.9G
On 13.12.18. 22:31, Reginald Beardsley via openindiana-discuss wrote:
>
Hi,
Please hit "reply" when answering on a message thread, so that your
response stays inside the thread instead of creating a new thread with
every answer. This likely depends on the mail client you use.
On Thu, 12/13/18, Lou Picciano wrote:
Subject: Re: [OpenIndiana-discuss] Huge ZFS root pool slowdown - diagnose root cause?
To: "Discussion list for OpenIndiana"
Date: Thursday, December 13, 2018, 3:14 PM
[snip]
What’s next? Could it be as simple as a cable? These cables ha
The plot thickens, I’m afraid. Since last post, I’ve replaced the drive, and
throughput remains molasses-in-January slow…
period indicated below is more than 24 hours:
scan: resilver in progress since Wed Dec 12 15:13:11 2018
26.9G scanned out of 1.36T at 345K/s, (scan is slow, no
On 12/11/18 10:14 AM, John D Groenveld wrote:
And when it's replaced, I believe the OP will need to installboot(1M)
the new drive.
Correct me if I'm wrong, but Illumos ZFS doesn't magically put
the boot code in place with zpool replace.
man installgrub
installgrub /boot/grub/stage1 /boot/grub/stage2
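The installgrub invocation above is missing its final argument: the raw device to write to. A sketch of the full legacy-GRUB form, where c2t1d0s0 is a placeholder, not the OP's actual disk; substitute the replacement disk's slice:

```shell
# Legacy-GRUB (pre-loader) systems: write stage1/stage2 to the new
# disk's slice. c2t1d0s0 is an assumed placeholder device name.
pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t1d0s0
```

On current loader-based installs, installboot(1M) replaces installgrub for this job.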
On Mon, 10 Dec 2018, jason matthews wrote:
Based on this, your disk is just super busy. Perhaps from the scrub? You are
doing about 200 reads/second and 100 writes per second. Identify what is
causing the writes and stop it.
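Jason's per-disk read/write figures come from the standard illumos iostat; a minimal invocation for watching the same pattern yourself:

```shell
# Extended per-device statistics every 5 seconds (illumos/Solaris iostat):
iostat -xn 5
# Watch r/s and w/s (ops per second), asvc_t (average service time, ms)
# and %b (percent of time busy); a disk pinned near %b 100 with a high
# asvc_t is the bottleneck.
```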
Remember that he also saw slowness during boot which should not be
Hi, no, zpool replace does not manage the MBR boot code.
You will need to use bootadm.
Greetings
Till
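Till's bootadm suggestion can be sketched as follows. On loader-based illumos installs, bootadm has an install-bootloader subcommand that writes the boot bits to every device in the root pool in one step:

```shell
# Reinstall the loader on all rpool devices (loader-based illumos):
pfexec bootadm install-bootloader
# Then refresh the boot archive for good measure:
pfexec bootadm update-archive
```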
On 11.12.18 19:14, John D Groenveld wrote:
> In message <2ee0cc78-2f8c-ee34-7371-10fcbfdbc...@broken.net>, jason matthews writes:
>> Life should be better with the sick disk removed.
>
> And when its
In message <2ee0cc78-2f8c-ee34-7371-10fcbfdbc...@broken.net>, jason matthews writes:
>Life should be better with the sick disk removed.
And when it's replaced, I believe the OP will need to installboot(1M)
the new drive.
Correct me if I'm wrong, but Illumos ZFS doesn't magically put
the boot code
This is your offending device:
$ pfexec smartctl -a -d sat,12 /dev/rdsk/c2t0d0s0 | grep Raw_Read
  1 Raw_Read_Error_Rate     0x000b   094   094   016    Pre-fail  Always       -       1376259
Try removing this disk.
The boot manager is in your bios. It currently points to one of your
rpool
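The Raw_Read_Error_Rate line quoted above can be machine-checked. A small awk filter that flags any SMART attribute whose name mentions Error and whose raw value is nonzero, demonstrated here against the quoted line since the real command needs the live disk:

```shell
# Sample line copied from the smartctl output quoted above:
line='  1 Raw_Read_Error_Rate     0x000b   094   094   016    Pre-fail  Always       -       1376259'
# Flag Error attributes with a nonzero raw count ($NF is RAW_VALUE):
echo "$line" |
  awk '$1 ~ /^[0-9]+$/ && $2 ~ /Error/ && $NF + 0 > 0 { print $2 " raw=" $NF }'
# On the live system, pipe the real output through the same filter:
#   pfexec smartctl -a -d sat,12 /dev/rdsk/c2t0d0s0 | awk '...'
```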
John, Jason,
Many thanks for your brainstorming on this…
> On Dec 10, 2018, at 6:19 PM, John D Groenveld wrote:
>
> In message <4ab4a1dd-5a90-4f9a-b26e-9a71028a0...@comcast.net>, Lou Picciano writes:
>> Is this evidence of erroneous attempts to read boot blocks/loader on disk0?
>>
>>
On 12/10/18 8:10 AM, Lou Picciano wrote:
Machine does eventually boot, however - takes about 20 mins! Recent Hipster
updates (2018-11-27) have been applied. System otherwise runs quite well. Most
client data is on datapool; they remain oblivious. (To be honest, they were
oblivious before
In message <4ab4a1dd-5a90-4f9a-b26e-9a71028a0...@comcast.net>, Lou Picciano writes:
>Is this evidence of erroneous attempts to read boot blocks/loader on disk0?
>
>Given the machine BIOS identification of drives, dunno that I can be absolutely certain disk0 is referring to one disk - or is the
Really need some feedback from The Experts here…
We have a root pool which has started to run very slowly…
Evidence?
- originally, the only indication was seemingly continuous
drive controller traffic (the pool is nowhere near full…)
- scrub pool has taken about 5 days to