usual 8+2 RAID6 type arrays that align with
Lustre's 1MB RPC properly?
2.
If we used 4K AF drives, with ashift=12, what size chunks (per disk)
will a 9+2 RAIDz2 array with 1MB record size create?
Thanks.
Regards,
Indivar Nair
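For illustration, a rough back-of-the-envelope for question 2, assuming
ashift=12 (4 KiB sectors) and a 1 MiB recordsize:

    1 MiB record             = 1,048,576 bytes = 256 sectors of 4 KiB
    data disks in 9+2 RAIDz2 = 9
    sectors per data disk    = 256 / 9 ≈ 28.4, i.e. 28 or 29 sectors
    chunk per data disk      ≈ 112-116 KiB (plus parity on 2 disks)

Since 256 sectors do not divide evenly across 9 data disks, the per-disk
chunks are uneven, unlike an 8+2 layout where each data disk gets exactly
128 KiB.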
Okay. Got it.
Regards,
Indivar Nair
On Sun, Feb 21, 2021 at 7:55 PM Spitz, Cory James
wrote:
> Right, except I should point out that a mirror sync operation doesn’t have
> to be from one pool to another. The FLR replica can use any specified
> layout. Using an OST Pool
are a lot of mirrored files, the dedicated client will have to
be very fast too.
3. In future it may be possible to move these files using HSM agents.
Regards,
Indivar Nair
On Fri, Feb 19, 2021 at 1:51 AM Spitz, Cory James
wrote:
> Good questions, Indivar.
>
>
>
> 1) The Lust
Regards,
Indivar Nair
On Thu, Feb 18, 2021 at 10:29 AM Sid Young wrote:
> G'day all,
>
> I'm trying to get my head around configuring a new Lustre 2.12.6 cluster
> on Centos 7.9, in particular the correct IP(s) for the MGS.
>
> In a pacemaker based MDS cluster, when I define
command was run?
2. And, in case of delayed mirroring of newly created files, who does the
copying?
3. How can we automate this process so that certain file
types are mirrored automatically?
Regards,
Indivar Nair
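For question 3, a minimal sketch with the standard lfs tooling, assuming a
Lustre release with FLR (2.11+) and a hypothetical OST pool named
"mirror_pool":

    # add a second mirror component to an existing file
    lfs mirror extend -N -p mirror_pool /lustre/fs/file.dat

    # after the file is modified, bring the stale replica back in sync
    lfs mirror resync /lustre/fs/file.dat

The resync copy is performed by the client that runs the command, and a
periodic 'lfs find /lustre/fs -name "*.dat"' feeding these commands would be
one crude way to automate it per file type.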
Thanks, Peter.
Will check the presentation out.
Regards,
Indivar Nair
On Tue, Jan 23, 2018 at 10:33 PM, Jones, Peter A
wrote:
> There was a LUG presentation about using it in production -
> http://cdn.opensfs.org/wp-content/uploads/2017/06/Thur10-Raso-Barnett-Hammond-LUG2017.pd
Hi ...,
Is the Lustre HSM tool, Lemur, production-ready?
And is the project still alive?
There have been no commits in the last 10 months.
https://github.com/intel-hpdd/lemur
Regards,
Indivar Nair
Which OS and Version are you using on the MDS and OSS?
Regards,
Indivar Nair
On Sat, Oct 28, 2017 at 7:38 PM, Ravi Konila wrote:
> Hi
> I have newly installed Lustre 2.8 (2 MDSs and 2 OSSs) and I have mounted
> it on a client.
> It is working fine while mounting and also
Does anyone have experience with 100Gbps Ethernet NICs and
Lustre?
Regards,
Indivar Nair
On Thu, May 11, 2017 at 11:56 PM, Indivar Nair
wrote:
> Thanks for the advice.
> I had a hunch that the development would take time.
>
> Regards,
>
>
> Indivar Nair
>
> On Thu, May 11, 2017 at 11:28
Thanks for the advice.
I had a hunch that the development would take time.
Regards,
Indivar Nair
On Thu, May 11, 2017 at 11:28 PM, Oucharek, Doug S <
doug.s.oucha...@intel.com> wrote:
> As I write this, I am banging my head against this wall trying to figure
> it out. It is relate
Thanks a lot, Michael, Andreas, Simon, Doug,
I have already installed MLNX OFED 4 :-(
I will now have to undo it and install the earlier version.
Roughly when would support for MLNX OFED 4 be available?
Regards,
Indivar Nair
On Thu, May 11, 2017 at 9:35 PM, Oucharek, Doug S wrote
So I should add something like this in lnet.conf -
options lnet networks=o2ib0(p4p1)
That's it, right?
Regards,
Indivar Nair
On Thu, May 11, 2017 at 8:39 PM, Dilger, Andreas
wrote:
> If you have RoCE cards and configure them with OFED, and configure Lustre
> to use o2iblnd then it
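For what it's worth, once that module option is in place, one way to verify
that LNet picked up the RoCE interface (assuming the p4p1 name from above;
output form is illustrative):

    modprobe lnet
    lctl network up
    lctl list_nids
    # expected form: <ip-of-p4p1>@o2ib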
Mellanox OFED drivers, but I can't find a way to tell
Lustre / LNET to use RoCE.
How do I go about it?
Regards,
Indivar Nair
new default and
failover IP without formatting it?
In any case, how do I then tell the MDTs and OSTs that the MGT's default
and failover IPs have changed?
I can't find these in the document.
Thanks and Regards,
Indivar Nair
On Wed, Jul 22, 2015 at 5:25 AM, Dilger, Andreas
wrote:
> I
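A hedged sketch of the usual approach to re-pointing targets at new MGS
NIDs, assuming the targets can be unmounted and a writeconf is acceptable
(NIDs below are placeholders):

    # on every MDT and OST (while unmounted), rewrite the stored MGS NIDs
    tunefs.lustre --erase-params \
        --mgsnode=192.168.1.10@tcp0 --mgsnode=192.168.1.11@tcp0 \
        --writeconf /dev/<target-device>

    # then remount: MGS first, MDTs next, OSTs last

Note that --erase-params drops all previously stored parameters, so any
other options have to be re-specified on the same command line.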
if I am wrong, and tell me if there are other ways / methods
to reduce the recovery time.
Regards,
Indivar Nair
On Wed, Jul 22, 2015 at 8:03 AM, Patrick Farrell wrote:
> Note the other email also seemed to suggest that multiple NFS exports of
> Lustre wouldn't work. I don't
wants *at least* 9GB/s throughput from a single file-system.
But I think, if we architect the Lustre storage correctly, with this many
disks, we should get at least 18GB/s throughput, if not more.
Regards,
Indivar Nair
On Tue, Jul 21, 2015 at 10:15 PM, Scott Nolin
wrote:
> An important que
Controllers do it (as in Option B)
2. Will Option B allow us to have less CPU/RAM than Option A?
Regards,
Indivar Nair
the Gateway.
The Gateway Nodes use Infiniband+RDMA to connect to Lustre.
I am thinking of moving the Linux Native Clients to NFS, connecting them
through this Gateway.
All client nodes are on 1GbE network.
Infiniband is used only to connect the Gateway to Lustre.
Regards,
Indivar Nair
On Tue, J
*So, will having only 2 clients improve the failover-recovery time?*
===
Is there anything else we can do to speed up recovery?
Regards,
Indivar Nair
Hi ...,
I can see quite a few mails on Lustre and OFED.
Are there any known issues with the CentOS 6.5 + Lustre 2.5.3 + Mellanox OFED 2.3
combination?
Regards,
Indivar Nair
Thanks Andreas.
I have a couple of queries on your suggestion.
Please find them inline *(blue color)* in the mail below.
Regards,
Indivar Nair
On Mon, Jun 30, 2014 at 2:12 PM, Dilger, Andreas
wrote:
> On 2014/06/29, 1:24 PM, "Indivar Nair" indivar.n...@techterra.in>> wr
take a point-in-time backup / rsync of the
complete Lustre filesystem?
If not, can one mount a copy of MDT and OST snapshots on another Lustre
cluster to recreate the complete Lustre Volume?
Regards,
Indivar Nair
Thanks Andreas,
Are there any statistics / theory on which approach would be better from a
performance perspective?
i.e., letting ZFS do the striping or letting Lustre do it
Regards,
Indivar Nair
On Tue, Jun 10, 2014 at 3:28 AM, Dilger, Andreas
wrote:
> This should be possible
will allow us to have a single OST per OSS, with ZFS managing the
striping across vdevs.
Regards,
Indivar Nair
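As a rough illustration of that layout (pool name, disk names, and vdev
widths are hypothetical), one pool with several raidz2 vdevs backing a
single OST:

    zpool create ostpool \
        raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk \
        raidz2 sdl sdm sdn sdo sdp sdq sdr sds sdt sdu sdv

    mkfs.lustre --ost --backfstype=zfs --fsname=lustre --index=0 \
        --mgsnode=<mgs-nid> ostpool/ost0

ZFS then dynamically stripes writes across the two vdevs on its own.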
2 switches, the right thing to do is 'stacking'.
And one shouldn't use LACP for server-to-switch connectivity.
Normal non-LACP bonding is the correct way to do it.
Will be able to tell you more if you could post a diagram. Even a rough one
would do.
Indivar Nair
On Thu, Jun 27,
Then follow the instructions in my earlier mail.
No need to have bond0 and bond1.
You will achieve high-availability even with one bonded interface.
Cheers,
Indivar Nair
On Thu, Jun 27, 2013 at 11:55 AM, Alfonso Pardo wrote:
> Yes I have two switches, one to the bond0 interface and ot
switch side, then use bonding mode balance-alb (mode 6) on the Linux side.
No changes need to be done to the cabling in either case.
---
This way you get Load Balancing and H/A across NICs.
Indivar Nair
On Wed, Jun 26, 2013 at 5:47 PM, Michael Shuey wrote:
> That will probably be slow - the
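For reference, a minimal balance-alb bond on RHEL/CentOS-style systems
(interface names and addresses are examples only):

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    BONDING_OPTS="mode=balance-alb miimon=100"
    IPADDR=192.168.1.20
    NETMASK=255.255.255.0
    BOOTPROTO=none
    ONBOOT=yes

    # each slave (e.g. ifcfg-eth0, ifcfg-eth1) carries:
    #   MASTER=bond0
    #   SLAVE=yes
    #   ONBOOT=yes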
'Lustre 2.0 Operations Manual'
(821-2076-10.pdf),
10.1.1 Selecting Storage for the MDS or OSTs, Page 175.
*(May have been moved to some other page in the newer version of the doc)*
Had implemented a similar configuration (with fewer disks) for one of my
clients.
Regards,
Indivar Nair
On M
failover to each other.
Just ensure that each node has enough RAM to support 6 OSTs, in case one of
them fails.
Hope this helps.
Regards,
Indivar Nair
On Mon, Mar 11, 2013 at 6:31 PM, Jerome, Ron wrote:
> I am currently having a debate about the best way to carve up Dell
> MD3200's
192
--
Regards,
Indivar Nair
On Tue, Jan 22, 2013 at 12:55 AM, Lind, Bobbie J wrote:
> Indivar,
>
> I would be very interested to see what tuning parameters you
available for your kind of work.
Hope this helps.
Regards,
Indivar Nair
On Fri, Jan 18, 2013 at 1:51 AM, greg whynott wrote:
> Hi Charles,
>
> I received a few off list challenging email messages along with a few
> fishing ones, but it's all good. It's interesting how a post ask
welcome.
Regards,
Indivar Nair
Hope I am on the right track :-).
Regards,
Indivar Nair
On Wed, Nov 2, 2011 at 9:35 PM, John White wrote:
> I did have that, with the 2nd line commented out. I just uncommented the
> 2nd, unloaded, depmod'd and reloaded only to see the same behavior.
>
>
s probably a code change to implement
async glimpse thread or bulkstat/readdirplus where Lustre could fetch
attributes before userspace requests them so they would be locally cached.'
Plus, setting 'vfs_cache_pressure' to 0 or a very low number is the solution.
Thanks,
Regards,
Indivar Nair
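For anyone following along, the tunable mentioned above is the standard VM
sysctl, set on each OSS, e.g.:

    sysctl -w vm.vfs_cache_pressure=0   # 0 = never reclaim dentry/inode caches
    # persist across reboots in /etc/sysctl.conf:
    # vm.vfs_cache_pressure = 0

Note that 0 can pin a lot of slab memory; a small positive value is the more
cautious variant.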
Size and TTL of file metadata cache in MDTs
and OSTs?
And how does the patch work then? If a request is for only 1 file stat, how
do multiple pages in readdir() help?
Regards,
Indivar Nair
On Mon, Sep 12, 2011 at 10:31 AM, Jeremy Filizetti <
jeremy.filize...@gmail.com> wrote:
>
' that is triggering a 'stat' on each file
on the Samba server?
Regards,
Indivar Nair
On Mon, Sep 12, 2011 at 9:09 AM, Indivar Nair wrote:
> So what you are saying is - The OSC (stat) issues the 1st RPC to the 1st
> OST, waits for its response, then issues the 2nd RPC to the
do extreme
parallelization?
Would patching lstat/stat/fstat to parallelize requests only when accessing
a Lustre store be possible?
Regards,
Indivar Nair
On Mon, Sep 12, 2011 at 6:38 AM, Jeremy Filizetti <
jeremy.filize...@gmail.com> wrote:
>
> From Adrian's explanation, I gath
more than 1 I/O thread, say, it reads from the
buffer of one I/O and then moves to the next I/O buffer? Or is it strictly 1
RPC = 1 I/O?
Regards,
Indivar Nair
On Wed, Sep 7, 2011 at 6:40 PM, Michael Barnes wrote:
>
> Another thing to try is setting vfs_cache_pressure=0 on the OSSes
served 2 OSTs,
thereby reducing the number of OSSes and RPCs?
Regards,
Indivar Nair
On Wed, Sep 7, 2011 at 1:28 AM, Adrian Ulrich wrote:
>
> > While normal file access works fine, the directory listing is extremely
> > slow.
> > Depending on the number of files in a directo
aster?
Could it be an MDS issue?
Some site suggested that this could be caused by the '-o flock' switch. Is
it so?
Kindly help.
The storage is in production, and this is causing a lot of issues.
Regards,
Indivar Nair
___
Lustre-discuss ma
aster?
Could it be an MDS issue?
Some site suggested that this could be caused due to '-o flock' switch. Is
it so?
Kindly Help.
The storage is in Production, and this is causing a lot of issues.
Regards,
Indivar Nair
___
Lustre-discuss ma
42 matches
Mail list logo