Hi,
blktrace btt report for a 200m write to an IBM DS5020 SAN partition:

Per Process
                Q2Cdm MIN    AVG          MAX          N
glock_workqueu  0.000165159  0.000165159  0.000165159  1

normal node:
glock_workqueu  0.00022
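For context, a report like the one above can be produced with the standard blktrace/blkparse/btt pipeline; a minimal sketch, assuming the device under test is the GFS2 LV /dev/mapper/SAN5020-order (device name and trace duration are assumptions, and this must run as root with debugfs mounted):

```shell
# Capture ~30s of block I/O events on the suspect device.
blktrace -d /dev/mapper/SAN5020-order -o trace -w 30

# Merge the per-CPU trace files into one binary stream for btt
# (-O suppresses the text dump; we only want the binary output).
blkparse -i trace -d trace.bin -O

# Summarize latencies; Q2C (queue-to-completion) times per process
# appear under the "Per Process" section of the report.
btt -i trace.bin | less
```

Comparing the Q2C figures from the slow node against a healthy node, as done above, isolates whether the delay is in the block layer/SAN path or higher up in GFS2.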
Thank you for your reply. Below is some extra info.

GFS2 partition in question:

[root@apps03 ~]# df -h
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/SAN5020-order  800G  435G  366G  55% /sanstorage/data0/images/order  <- one that suffered from slow
Hi,
On 10/11/15 04:56, Dil Lee wrote:
Hi,

I have a CentOS 6.5 cluster connected to a Fibre Channel SAN in a star
topology. All node/SAN storage links are single-pair fibre connections with
no multipathing. A hardware issue has been ruled out, because read/write
between all other node/SAN_storage pairs works perfectly.
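Since the slow writer in the btt report shows up as glock_workqueue, GFS2 lock (glock) contention between cluster nodes is worth checking alongside the SAN path. A sketch, assuming debugfs is available and the kernel exposes the GFS2 glock state there (the exact filesystem directory name under gfs2/ depends on the cluster and filesystem name):

```shell
# Make sure debugfs is mounted (harmless if it already is).
mount -t debugfs none /sys/kernel/debug 2>/dev/null

# Dump glock state for all mounted GFS2 filesystems; entries with
# waiters (W flags on holders) indicate lock contention across nodes.
cat /sys/kernel/debug/gfs2/*/glocks | head -50
```

If one node is repeatedly waiting on glocks held by another, the slow writes may be lock ping-pong between nodes touching the same directories rather than a storage-path problem.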