Thanks Colin, but how do I get the current parameters of an OSS or OST? I want
to know which failover node IP or service node IP is being used.
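One way to inspect the parameters currently written on a target is tunefs.lustre in dry-run mode; a sketch (the device path is a placeholder, not from this thread):

```shell
# Print the configuration stored on an OST/MDT without modifying anything.
# Replace /dev/sdX with the real target (for ZFS, a dataset like ostpool/ost0).
tunefs.lustre --dryrun /dev/sdX
# The "Parameters:" section of the output lists entries such as
# failover.node=<NID> and mgsnode=<NID>, showing which IPs are configured.
```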
On Wed, Aug 17, 2022 at 5:26 PM Colin Faber wrote:
> See: https://manpages.org/tunefslustre/8
>
> On Wed, Aug 17, 2022 at 4:27 AM Zeeshan Ali Shah via lustre-discuss wrote:
Dear All, I am getting a strange issue: the old OSS was decommissioned and all
of its OSTs were moved to a new OSS with a new IP. Suddenly, on the client
side, we are getting the following error.
Below, 172.100.7.18 is the OLD, dead OSS.
126704:0:(o2iblnd_cb.c:2923:kiblnd_rejected()) 172.100.7.18@o2ib rejected:
consumer de
> T.H.Hsieh
>
>
> On Mon, Mar 08, 2021 at 11:08:26AM +0300, Zeeshan Ali Shah wrote:
> > Did you unmount the OST? Remember to lfs_migrate the data, otherwise the
> > old data would give errors.
> >
> > On Fri, Mar 5, 2021 at 11:59 AM Etienne Aujames via lustre-discuss wrote:
Did you unmount the OST? Remember to lfs_migrate the data, otherwise the old
data would give errors.
On Fri, Mar 5, 2021 at 11:59 AM Etienne Aujames via lustre-discuss <
lustre-discuss@lists.lustre.org> wrote:
> Hello,
>
> There is some work in progress on LU-7668 to remove the OST
> directory inside the mount.
>
> On Thu, Nov 12, 2020 at 2:41 AM Zeeshan Ali Shah
> wrote:
>
>> Dear All, is it possible to get list of files and directories from MDS
>> directly without OSS/OSTs. I am having some power issues and could not
>> fireup OSS/O
Dear All, is it possible to get a list of files and directories from the MDS
directly, without the OSS/OSTs? I am having some power issues and could not
fire up the OSS/OSTs, but I want to query a few files directly from the MDS.
Is that possible, and how? Please advise.
Zeeshan
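One approach that may work here (a sketch, untested against this setup): mount the client and mark the OSC devices inactive so metadata-only operations can complete while the OSTs are down. Attributes stored on the OSTs, such as file size, will not be available.

```shell
# On the client, deactivate all OSCs so requests do not hang on the dead OSTs.
lctl set_param osc.*.active=0

# Name lookups and layout queries are then served by the MDT alone:
ls /mnt/lustre/some/dir          # directory listing (names only)
lfs getstripe /mnt/lustre/file   # striping layout, stored on the MDT
```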
Dear All, we are getting the following errors. What does this mean? Any advice?
[Wed Jul 29 08:17:33 2020] LustreError:
24267:0:(tgt_grant.c:758:tgt_grant_check()) lustre-OST000e: cli
61f1ceb7-b0ab-3607-1d60-69e1f3abc7a5 claims 1703936 GRANT, real grant 0
[Wed Jul 29 08:17:33 2020] LustreError:
24267:
Dear All,
On ZFS-based Lustre we are getting the following:

  pool: ost2-xag
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.

Update: I tried using this:
zfs send -R mdtpool/mdt0@august | gzip > mdt-aug.gz
Zee
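For completeness, the matching restore of a backup taken that way would look roughly like this (a sketch; the pool and dataset names are the ones from the command above):

```shell
# Recreate the MDT dataset from the compressed replication stream.
# -F forces a rollback of the target dataset if it has diverged; use with care.
gunzip -c mdt-aug.gz | zfs receive -F mdtpool/mdt0
```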
On Wed, Aug 21, 2019 at 10:16 PM Zeeshan Ali Shah
wrote:
> Dear All, what are best practices to backup MDT zfs based offline . One
> option is to zfs send/recieve to remote machine. Any other o
Dear All, what are the best practices to back up a ZFS-based MDT offline? One
option is zfs send/receive to a remote machine. Any other options for a weekly
backup? What about tar?
Please advise.
/Zee
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
I have fixed the issue.
Somehow, due to a misconfiguration, two systems had the same client IP; after
changing it, all is OK.
/Zee
On Mon, Oct 8, 2018 at 12:51 PM Zeeshan Ali Shah
wrote:
> We are getting the following error when run rsync .
>
> Background: We have three filesystem on same MDS
We are getting the following error when running rsync.
Background: We have three filesystems on the same MDS; the MDTs are different
ZFS pools. Would that be an issue?
Any advice?
The error is below:
[Mon Oct 8 12:29:11 2018] Lustre: sgp-MDT-mdc-883ffc2c3000:
Connection restored to 172.100.120.
etting ltop. 'zpool iostat' can give you numerous
> performance statistics as well.
>
> On 09/12/2018 04:11 PM, Cameron Harr wrote:
>
> Use 'zpool iostat -v ...'. You can review the man page for lots of
> options, including latency statistics.
>
> Came
Dear All,
Suddenly our Lustre installation has become dead slow; copying data from a
local source to Lustre runs at around 20 MB/s, where before it was above
600 MB/s.
I checked zpool status -xv (all pool healths are OK).
1) How can I check which OSTs are involved during a data write operation?
2) How to check Me
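On question 1: which OSTs a file's writes land on is determined by its striping layout, which lfs getstripe reports; per-OST traffic can also be watched on the servers. A sketch (the file path is a placeholder):

```shell
# Show stripe count, stripe size, and the OST indices holding this file's objects.
lfs getstripe /mnt/lustre/path/to/file

# On an OSS, per-OST I/O counters (read/write operations and bytes) are in:
lctl get_param obdfilter.*.stats
```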
> Cheers, Andreas
>
> On Aug 29, 2018, at 00:14, Zeeshan Ali Shah wrote:
> >
> > Thanks a lot Patrick for detail answer, I tried with gnu parallel with
> dd and over all the throughput was increased locally .. you are right it is
> due to client side single thread issu
ink it just magically spews zeroes at any rate needed,
> but it’s not really designed to be read at high speed and actually isn’t
> that fast. If you really want to test high speed storage, you may need a
> tool that allocates memory and writes that out, not just dd. (ior is one
>
, 2018 at 3:54 PM Patrick Farrell wrote:
> How are you measuring write speed?
>
>
> --
> *From:* lustre-discuss on
> behalf of Zeeshan Ali Shah
> *Sent:* Tuesday, August 28, 2018 1:30:03 AM
> *To:* lustre-discuss@lists.lustre.org
> *Subject:
Dear All, I recently deployed a 10PB+ Lustre solution which is working fine.
Recently, for a genomics pipeline, we acquired more racks with dedicated
compute nodes and a single 24-NVMe-SSD server per rack. Each SSD server is
connected to a compute node via 100G Omni-Path.
Issue 1: is that when I combined S
ace. Fixing it and problem solved.
>
> *From: *Zeeshan Ali Shah
> *Date: *Saturday, August 11, 2018 at 11:20 PM
> *To: *Lixin Liu
> *Cc: *"lustre-discuss@lists.lustre.org"
> *Subject: *Re: [lustre-discuss] strange errors on Lustr
What is the output of opainfo?
Sent from my iPhone
> On 12 Aug 2018, at 04:04, Lixin Liu wrote:
>
> Hi,
>
> I am getting these errors on all our MDS and OSS servers (Lustre 2.10.1):
>
> Aug 11 11:45:52 ndc-oss5b kernel: LNet:
> 24727:0:(o2iblnd_cb.c:2410:kiblnd_passive_connect()) Conn stale
We just finished another 10PB Lustre/ZFS installation. This time I tried to
use IML (in monitor mode) but did not see any benefit from it, IMHO, so I just
uninstalled it; it was on the MGS.
In our experience, installation using commands is far better than using IML,
at least until now.
/Zee
Thanks a lot, Mark, Cory, and Peter.
We moved to a single FS with nodemap.
/Zee
On Fri, Jun 29, 2018 at 2:17 AM, Cory Spitz wrote:
> Oops, right! Mark, thanks for pointing that out. Peter, thanks for the
> update on ZFS 0.8.
> -Cory
>
> --
>
> On 6/28/18, 5:20 PM, "Peter Jones" wrote:
>
>
We have different research groups. I am thinking of having one filesystem
and, beneath it, using ACLs to create per-project folders.
Just curious: what are the pros/cons of having multiple filesystems vs. a
single filesystem with folders?
Any advice?
/Zeeshan
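A minimal sketch of the single-filesystem approach with per-project folders (the group and path names below are made up for illustration):

```shell
# One directory per project, owned by the project's group.
mkdir -p /lustre/projects/genomics
chgrp genomics /lustre/projects/genomics
chmod 2770 /lustre/projects/genomics   # setgid: new files inherit the group
# Default ACL so files created inside stay accessible to the group.
setfacl -d -m g:genomics:rwx /lustre/projects/genomics
```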
Found the answers:
http://doc.lustre.org/lustre_manual.xhtml#dbdoclet.50438194_88063
http://doc.lustre.org/lustre_manual.xhtml#managingsecurity
Thanks for the manual :)
/Zee
On Thu, Jun 28, 2018 at 12:48 AM Zeeshan Ali Shah
wrote:
> Dear All,
> During mdt it ask for --fsname flag
Dear All,
When formatting the MDT it asks for the --fsname flag; the docs mention it is
the name of the filesystem the MDT is part of. That is OK, but on the client,
when I mount /lustre/fsname, it mounts the complete Lustre filesystem.
1) Can an MDS/MDT serve more than one fsname?
2) What is the best practice for maintai
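On question 1: a single MDS node (and a single MGS) can serve several filesystems, but each filesystem needs its own MDT formatted with its own --fsname. A sketch with placeholder devices and NIDs:

```shell
# Two filesystems sharing one MGS, each with its own MDT and fsname.
mkfs.lustre --mdt --fsname=fs1 --index=0 --mgsnode=mgs@o2ib /dev/mdt_fs1
mkfs.lustre --mdt --fsname=fs2 --index=0 --mgsnode=mgs@o2ib /dev/mdt_fs2

# Clients mount each filesystem by its fsname:
mount -t lustre mgs@o2ib:/fs1 /mnt/fs1
mount -t lustre mgs@o2ib:/fs2 /mnt/fs2
```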
ntroduction_to_Lustre_Object_Storage_Devices_(OSDs)
>
> http://wiki.lustre.org/Creating_Lustre_Object_Storage_Services_(OSS)
>
>
>
> There are also sections that cover the MGT and MDTs.
>
>
>
> Malcolm.
>
> *From: *lustre-discuss on
> behalf
s and 2 OSTs into the other OSS. At the same time the HDDs need to be shared
> between all OSS nodes. So in normal conditions one OSS will import 2 OSTs and
> the second OSS will import the other 2 OSTs. In case of HA a single OSS can
> import all 4 OSTs.
>
> Kind Regards
> Dzmitryj Jakavuk
>
> >
We have 2 OSS with 4 OSTs shared. Each OST has 90 disks, so 360 disks in
total. I am in the phase of installing the 2 OSS as active/active, but since a
ZFS pool can only be imported on a single OSS host, how do I achieve
active/active HA in this case?
From what I have read, for active/active both HA hosts should have a
Hi Team,
For a fresh install, what are the benefits of setting up Lustre with Intel
Lustre Manager vs. manually?
Hardware:
We have 2 OSS + 5 OST (3 of those), so in total 6 OSS and 15 OSTs,
2 MDS + 1 MDT, 2 LNet routers, and 2 CIFS routers.
We are looking for ZFS-based OSTs.
Any advice?
/Zee
Thanks, got it.
/Zee
On Mon, Jun 25, 2018 at 5:06 PM, Andreas Dilger
wrote:
> You can also find the manual at: http://lustre.org/documentation/
>
> Cheers, Andreas
>
> On Jun 25, 2018, at 13:19, Zeeshan Ali Shah wrote:
>
> Lustre manual link
> https://wiki.w
Lustre manual link:
https://wiki.whamcloud.com/display/PUB/Documentation
The Lustre manual links (PDF, HTML, epub) are all dead. Is there an alternate
URL?
/Zee
>
> *From:* lustre-discuss [mailto:lustre-discuss-boun...@lists.lustre.org] *On
> Behalf Of *Zeeshan Ali Shah
> *Sent:* Thursday, October 20, 2016 12:25 AM
> *To:* lustre-discuss@lists.lustre.org
> *Subject:* [lustre-discuss] OpenZFS support in lustre
Dear All,
Is the ZFS backend supported in the community edition of Lustre from Intel?
In the Intel docs it seems a bit unclear.
I want to use ZFS for the OSTs.
Any suggestions?
/Zeeshan
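For the record, ZFS has been a supported backend in community Lustre since 2.4, selected at format time with --backfstype=zfs; a formatting sketch (the pool, disk, and NID names are placeholders):

```shell
# mkfs.lustre creates the ZFS pool itself when given a vdev specification.
mkfs.lustre --ost --backfstype=zfs --fsname=lustre --index=0 \
    --mgsnode=mgs@o2ib ostpool/ost0 raidz2 sdb sdc sdd sde
```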