On Fri, Nov 11, 2022, 7:27 PM Gareth Evans wrote:
> [...]
Hi Nicholas,
in the longer term the load on the RAID members is equal:
DSK | sde | busy 7% | read 102017 | write 217744 | KiB/r 9 | KiB/w 6 | MBr/s 0.0 | MBw/s 0.0 | avq 2.60 | avio 5.91 ms |
DSK | sdb | busy
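(For a longer side-by-side view of the member disks, iostat from the sysstat package shows similar per-device figures; the device names below are only examples:)

    # extended per-device statistics every 10 seconds: r/s, w/s, await, %util
    iostat -dx sdb sdc sdd sde 10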
> On 11 Nov 2022, at 16:59, Vukovics Mihály wrote:
>
> [...]
Hello, but the message was from Nicholas :)
Looking at your first graph, I noticed the upgrade seems to
Hi Gareth,
dmesg is "clean", the disks are not shared in any way and there is no
virtualization layer installed.
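(A quick way to double-check dmesg for disk trouble, using util-linux dmesg and example device names:)

    # look for ATA/SCSI resets, timeouts or medium errors since boot
    dmesg -T | grep -iE 'ata[0-9]|sd[b-e]|error|timeout|reset'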
On 2022. 11. 11. 17:34, Nicholas Geovanis wrote:
[...]
On Fri, Nov 11, 2022, 1:58 AM Vukovics Mihály wrote:
Hi Gareth,
I have already tried to change the queue depth for the physical disks
but that has almost no effect.
There is almost no load on the filesystem; here is a 10s sample from atop.
1-2 write requests, but 30-50 ms of average IO.
DSK | sdc | busy 27% | read 0 | write
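(For reference, the per-device queue depth and the block-layer request queue can be inspected and changed through sysfs; sdc is only a placeholder here:)

    # current SCSI queue depth of one member disk
    cat /sys/block/sdc/device/queue_depth
    # size of the block-layer request queue for the same disk
    cat /sys/block/sdc/queue/nr_requests
    # lower the device queue depth (as root), e.g. to 32
    echo 32 > /sys/block/sdc/device/queue_depth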
Hello Gareth,
the average IO wait state is 3% over the last 1d14h. I have checked the IO
usage with several tools and have not found any processes/threads
generating too many read/write requests. As you can see on my first
graph, only the read wait time increased significantly; the write wait time did not.
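(Per-process checks of that sort can be done, for example, with pidstat from sysstat or with iotop, both packaged in Debian:)

    # per-process read/write throughput, sampled every 5 seconds
    pidstat -d 5
    # only show processes actually doing IO, with accumulated totals (run as root)
    iotop -oPa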
On Thu 10 Nov 2022, at 11:36, Gareth Evans wrote:
[...]
> I might be barking up the wrong tree ...
But simpler inquiries first.
I was wondering if MD might be too high-level to cause what does seem more like
a "scheduley" issue -
On Thu 10 Nov 2022, at 11:36, Gareth Evans wrote:
[...]
> This assumes the identification of the driver in [3] (below) is
> anything to go by.
I meant [1] not [3].
Also potentially of interest:
"Queue depth
The queue depth is a number between 1 and ~128 that shows how many I/O requests
On Thu 10 Nov 2022, at 07:04, Vukovics Mihaly wrote:
Hi Gareth,
- Smartmon/smartctl does not report any hw issues on the HDDs (see the example check below)
- Fragmentation score is 1 (not fragmented at all)
- 18% used only
- RAID status is green (force-resynced)
- rebooted several times
- the IO utilization is almost zero(!) - chart attached
- tried to change the io scheduler
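(The SMART check from the first item can be reproduced with something like the following; /dev/sda is a placeholder device:)

    # overall health verdict plus the raw SMART attributes
    smartctl -H -A /dev/sda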
On Tue 8 Nov 2022, at 09:48, Vukovics Mihály wrote:
> [...]
> Chart attached, you can clearly see the date of the upgrade.
>
> Any ideas?
Hello,
I'm not an
Hello Community,
since I have upgraded my Debian 10 to 11, the read IO wait time of all
disks has increased dramatically.
The hw and sw setup has not changed and there is no extra IO-related
activity on the server.
The scheduler is the same: mq-deadline. I have tried to set it to noop
but
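(One note on the scheduler: the 5.10 kernel in Debian 11 only offers the blk-mq schedulers, so "noop" no longer exists there; the no-op choice is called "none". A quick check and switch, with sda as a placeholder device:)

    # the active scheduler is shown in brackets, e.g. [mq-deadline] none
    cat /sys/block/sda/queue/scheduler
    # switch this device to the no-op scheduler (as root)
    echo none > /sys/block/sda/queue/scheduler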