Would it be difficult to suspend IO and snapshot all the nodes (assuming
ZFS)? Could you be sure that your MDS and OSS are synchronised?
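A rough sketch of what that suspend-and-snapshot sequence could look like (hostnames, pool, and dataset names are all made up here; this assumes ZFS-backed MDT/OST targets and is only an illustration of the question, not a tested procedure):

```shell
#!/bin/bash
# 1. Quiesce client IO first (e.g. unmount clients or pause the workload).
# 2. Snapshot every target as close together in time as possible.
TAG=backup-$(date +%F)
ssh mds  "zfs snapshot mdtpool/mdt0@$TAG"
ssh oss1 "zfs snapshot ostpool/ost0@$TAG"
ssh oss2 "zfs snapshot ostpool/ost1@$TAG"
# Note: these snapshots are NOT atomic across nodes, which is exactly
# why the MDS/OSS synchronisation question above matters.
```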
On 7 February 2017 at 19:52, Mike Selway wrote:
Hello Brett,
Actually, I'm looking for someone who uses a commercialized approach
(one that retains user metadata and Lustre extended metadata), not specifically
the manual approaches of Chapter 17.
Thanks!
Mike
Mike Selway | Sr. Tiered Storage Architect | Cray Inc.
As a continuation of my recent question on traffic compression/caching, I
was wondering what others use to monitor their Lustre performance.
Currently I have collectl running on all clients; the data gets shipped by
filebeat to an ELK+Grafana stack.
Hoping to soon also deploy collectl on the
Because the stat command is “lst stat servers”, the statistics you are seeing
are from the perspective of the server. The “from” and “to” parameters can get
quite confusing for the read case. When reading, you are transferring the bulk
data from the “to” group to the “from” group (yes, seems
Hi Ben,
On Mon, Feb 6, 2017 at 10:51 PM, Ben Evans wrote:
> My initial question is what are you measuring and where are you measuring
> it?
>
The tool I'm using is collectl; it in turn calls perfquery once a
minute and at the end reports back the difference between the
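The counter-delta reporting described above can be sketched as follows. The numbers are made up, and the exact internals of collectl are an assumption; the only hard fact used is that the InfiniBand PortXmitData/PortRcvData counters count in 4-byte words, so the delta is multiplied by 4 to get bytes:

```shell
#!/bin/sh
# Sketch: turn two perfquery counter samples into a throughput rate,
# the way a periodic reporter like collectl would.
prev=1000000        # PortXmitData at the first sample (4-byte words)
curr=4000000        # PortXmitData one interval later
interval=60         # seconds between perfquery calls
delta=$((curr - prev))
bytes_per_sec=$(( delta * 4 / interval ))
echo "$bytes_per_sec bytes/sec"
```

With these sample values the delta is 3,000,000 words, i.e. 12,000,000 bytes over 60 seconds, or 200000 bytes/sec.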
Probably doing something wrong here, but I tried to test only READING
with the following:
#!/bin/bash
export LST_SESSION=$$
lst new_session read
lst add_group servers 10.0.12.12@o2ib
lst add_group readers 10.0.12.11@o2ib
lst add_batch bulk_read
lst add_test --batch bulk_read --concurrency 12
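For comparison, a complete read-only session following the LNet selftest chapter of the Lustre manual might look like the sketch below. The NIDs are copied from the script above; the test type, transfer size, and stat options are my assumptions and should be checked against the manual. `lst add_test` needs `--from`/`--to` groups and a test type; `brw read` has the readers pull bulk data from the servers group:

```shell
#!/bin/bash
export LST_SESSION=$$
lst new_session read
lst add_group servers 10.0.12.12@o2ib
lst add_group readers 10.0.12.11@o2ib
lst add_batch bulk_read
lst add_test --batch bulk_read --concurrency 12 \
    --from readers --to servers brw read size=1M
lst run bulk_read
# server-side statistics, sampled a few times (per the earlier reply,
# "lst stat servers" reports from the server's perspective)
lst stat --delay 5 --count 6 servers
lst end_session
```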