It's FC/FCoE.
This is with the configuration suggested by EMC/Red Hat:
360060160a62134002818778f949de411 dm-5 DGC,VRAID
size=11T features='2 queue_if_no_path retain_attached_hw_handler'
hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:1:2 sdr 65:16 active ready running
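For reference, the configuration EMC/Red Hat usually recommend for DGC
(CLARiiON/VNX) arrays in ALUA mode boils down to a device stanza in
/etc/multipath.conf along these lines; the values below are the common
RHEL/EMC defaults, so treat them as an illustrative sketch and verify
against your own host connectivity guide:

devices {
    device {
        # illustrative stanza for EMC CLARiiON/VNX in ALUA failover mode
        vendor                "DGC"
        product               ".*"
        product_blacklist     "LUNZ"
        path_grouping_policy  group_by_prio
        path_checker          emc_clariion
        hardware_handler      "1 alua"
        prio                  alua
        failback              immediate
        no_path_retry         60
    }
}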
can you please give the output of:
multipath -ll
and
iscsiadm -m session -P3
JP
2017-05-13 6:48 GMT-03:00 Stefano Bovina :
> Hi,
>
> 2.6.32-696.1.1.el6.x86_64
> 3.10.0-514.10.2.el7.x86_64
>
> I tried an ioping test from different groups of servers using multipath,
> members of different storage groups
Hi,
2.6.32-696.1.1.el6.x86_64
3.10.0-514.10.2.el7.x86_64
I tried an ioping test from different groups of servers using multipath,
members of different storage groups (different LUN, different RAID, etc.),
and every one of them reports latency.
I tried the same test (ioping) on a server with powerpath instead of
multipath.
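For anyone who wants to reproduce the comparison, an invocation along
these lines works on both the multipath and the powerpath host (the
device path below is illustrative; point it at whichever device you want
to test):

    # 10 requests against the multipathed LUN; compare min/avg/max latency
    ioping -c 10 /dev/mapper/360060160a62134002818778f949de411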
Sorry to jump in, but what kernel version are you using? I had a similar
issue with kernels 4.10/4.11.
2017-05-12 16:36 GMT-03:00 Stefano Bovina :
> Hi,
> a little update:
>
> The command multipath -ll hangs when executed on the host while the problem
> occurs (nothing logged in /var/log/messages or dmesg).
Hi,
a little update:
The command multipath -ll hangs when executed on the host while the problem
occurs (nothing logged in /var/log/messages or dmesg).
I tested latency with ioping:
ioping /dev/6a386652-629d-4045-835b-21d2f5c104aa/metadata
Usually it returns "time=15.6 ms", sometimes it returns "time=1
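A side note on the hang itself: wrapping the command in a timeout and
querying the daemon directly can help tell whether the tool or the path
checker is stuck (both commands come with the stock
device-mapper-multipath package):

    # give up after 30 seconds instead of blocking the shell
    timeout 30 multipath -ll
    # ask the running multipathd for its view of the paths
    multipathd -k"show paths"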
On Mon, May 8, 2017 at 11:50 AM, Stefano Bovina wrote:
> Yes,
> this configuration is the one suggested by EMC for EL7.
>
https://access.redhat.com/solutions/139193 suggests that for ALUA, the path
checker needs to be different.
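If the article is right, the change would be confined to the checker/prio
pair in the device stanza, roughly like the lines below; which checker
exactly is the article's call, so "tur" here is just a placeholder, and a
reload via multipathd -k"reconfigure" would be needed to pick it up:

        path_checker  tur
        prio          alua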
Anyway, it is very likely that you have storage issues - they need
Yes,
this configuration is the one suggested by EMC for EL7.
By the way,
"The parameters rr_min_io vs. rr_min_io_rq mean the same thing but are used
for device-mapper-multipath on differing kernel versions." and rr_min_io_rq
default value is 1, rr_min_io default value is 1000, so it should be fine
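Concretely, the split is between BIO-based and request-based
device-mapper-multipath: on the request-based EL6/EL7 kernels the
effective knob is rr_min_io_rq, e.g. (illustrative defaults section):

defaults {
    # request-based dm-multipath (RHEL 6/7): switch paths after 1 I/O
    rr_min_io_rq 1
    # BIO-based dm-multipath on older kernels would use rr_min_io instead
}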
Hi,
thanks for the advice. The upgrade is already scheduled, but I would like
to fix this issue before proceeding with a big upgrade (unless the upgrade
itself fixes the problem).
The problem is on all hypervisors.
We have 2 clusters (both connected to the same storage system):
- the old one with F
On Sun, May 7, 2017 at 1:27 PM, Stefano Bovina wrote:
> Sense data are 0x0/0x0/0x0
Interesting - first time I'm seeing 0/0/0. The 1st is usually 0x2 (see
[1]), and then the rest [2], [3] make sense.
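(For context, if I read the codes right: the triple is the SCSI sense
key / ASC / ASCQ, so 0x0/0x0/0x0 is NO SENSE with no additional sense
information, i.e. the array reported nothing wrong, whereas a transient
ALUA event would typically log something like 0x2/0x4/0xa, NOT READY,
asymmetric access state transition.)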
A Google search found another user with a CLARiiON hitting the exact same
error [4], so I'm leaning to
On Tue, May 2, 2017 at 11:09 PM, Stefano Bovina wrote:
> Hi, the engine logs show high latency on storage domains: "Storage domain
> experienced a high latency of 19.2814 seconds from .. This may
> cause performance and functional issues."
>
> Looking at the host logs, I also found these locking errors:
Hi, the engine logs show high latency on storage domains: "Storage domain
experienced a high latency of 19.2814 seconds from .. This may
cause performance and functional issues."
Looking at the host logs, I also found these locking errors:
2017-05-02 20:52:13+0200 33883 [10098]: s1 renewal error
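When these renewal errors show up, sanlock's own view is worth capturing
alongside the engine log; the sanlock package ships a client for that (a
minimal check, assuming the standard oVirt sanlock setup):

    # lockspaces and resources as sanlock currently sees them
    sanlock client status
    # sanlock's internal debug log, useful around renewal errors
    sanlock client log_dump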