Marco,

One other idea is to give the pool an unfriendly name that users can't
guess, like "myfs.mkpilaxluia" instead of "myfs.flash" or "myfs.ssd", so
that it becomes difficult (though not impossible) for users to use it :).
Users don't have access to the MDS to get the full list of pools defined.
Thanks,
Raj
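A minimal sketch of the idea, assuming a filesystem named "myfs" mounted at /myfs; the pool name, OST range, and directory path are illustrative, and the pool commands are run on the MGS:

```shell
# Create an obscurely named pool and add the fast OSTs to it
# (pool name and OST indices are examples only).
lctl pool_new myfs.mkpilaxluia
lctl pool_add myfs.mkpilaxluia myfs-OST[0-3]

# Point the intended directory at the pool; users who don't know the
# pool name can't easily request it themselves.
lfs setstripe -p mkpilaxluia /myfs/special-project
```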
I do not think this exists yet.
But if every user has an individual area (subfolder) inside the main
project folder, can you create a 'Lustre project' per project-user?
-Raj
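A hedged sketch of the per-user-subfolder approach: each subdirectory gets its own project ID, and quotas are enforced per ID. The IDs, paths, and limits below are assumptions for illustration:

```shell
# Assign a distinct project ID to each user's subfolder; -s sets the ID
# and marks the directory so new files inherit it.
lfs project -p 1001 -s /myfs/bigproject/alice
lfs project -p 1002 -s /myfs/bigproject/bob

# Enforce a hard block limit per project ID (example limit).
lfs setquota -p 1001 -B 10T /myfs
lfs setquota -p 1002 -B 10T /myfs
```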
On Tue, May 17, 2022 at 7:21 AM Kenneth Waegeman via lustre-discuss <
lustre-discuss@lists.lustre.org> wrote:
> Hi all,
>
>
Andreas,

Are there any I/O penalties from enabling project quota? Will I see the
same throughput from the FS?
Thanks
-Raj
On Fri, Apr 15, 2022 at 1:32 PM Andreas Dilger via lustre-discuss <
lustre-discuss@lists.lustre.org> wrote:
> Note that in newer Lustre releases, if you have project IDs enabled
Ellis,

I would also check the peer_credits between the server and the client.
They should be the same.
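Two ways to inspect peer_credits, to be run on both server and client and compared (a sketch; the module parameter path assumes the InfiniBand o2iblnd driver):

```shell
# The ko2iblnd module parameter, if using o2ib networks:
cat /sys/module/ko2iblnd/parameters/peer_credits

# Or via lnetctl, which lists per-NI tunables including peer_credits:
lnetctl net show -v
```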
On Wed, Jan 19, 2022 at 9:27 AM Patrick Farrell via lustre-discuss <
lustre-discuss@lists.lustre.org> wrote:
> Ellis,
>
> As you may have guessed, that function just set looks like a node which is
>
One other way is to install xltop (https://github.com/jhammond/xltop)
and use the xltop client (an ncurses-based, Linux-top-like tool) to watch
for the top client with the most requests per second (xltop -k q h).
You can also use it to track jobs, but you might have to write your own
node-to-job mapping script.
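One possible shape for such a mapping script, assuming a Slurm scheduler (squeue and its format flags are the assumption here; other schedulers would need a different query):

```shell
# Hypothetical node-to-job mapping: print "node jobid" pairs from Slurm,
# which can then be joined against the client hostnames xltop reports.
squeue -h -o '%N %A' | awk '{ map[$1] = $2 } END { for (n in map) print n, map[n] }'
```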
Alastair,

A few scenarios you may consider:
1) Define two LNets, one per IB interface (say o2ib1 and o2ib2), and share
out one OST through o2ib1 and the other through o2ib2. You can map HBA and
disk locality so that they are attached to the same CPU.
2) Same as above, but share the OST(s) from both.
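Scenario 1 could be configured along these lines; the interface names ib0/ib1 are assumptions, so adjust to your fabric:

```shell
# Static route: declare one LNet per IB interface in
# /etc/modprobe.d/lnet.conf:
#   options lnet networks="o2ib1(ib0),o2ib2(ib1)"

# Or configure dynamically with lnetctl:
lnetctl lnet configure
lnetctl net add --net o2ib1 --if ib0
lnetctl net add --net o2ib2 --if ib1
```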