Thanks a lot for the guidance, I will kick off the installation in 1-2 days.

/Zee

On Wed, Jun 27, 2018 at 2:16 AM Cowe, Malcolm J <malcolm.j.c...@intel.com>
wrote:

> You can create pools and format the storage on a single node, provided
> that the correct `--servicenode` parameters are applied to the format
> command (i.e. the NIDs for each OSS in the HA pair). Then export half of
> the ZFS pools from the first node and import them to the other node.
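The export/import handover described above might look like the following sketch; the pool name is hypothetical:

```shell
# On the node that formatted the storage, release the pools destined
# for the partner node:
zpool export ostpool1

# On the partner node, take ownership of each exported pool; -f may be
# needed if a pool was not exported cleanly (e.g. after a crash):
zpool import ostpool1
```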
>
> There is some documentation that describes the process here:
>
> http://wiki.lustre.org/Category:Lustre_Systems_Administration
>
> This includes sections on HA with Pacemaker:
>
> http://wiki.lustre.org/Managing_Lustre_as_a_High_Availability_Service
>
>
> http://wiki.lustre.org/Creating_a_Framework_for_High_Availability_with_Pacemaker
>
>
> http://wiki.lustre.org/Lustre_Server_Fault_Isolation_with_Pacemaker_Node_Fencing
>
>
> http://wiki.lustre.org/Creating_Pacemaker_Resources_for_Lustre_Storage_Services
>
> For OSD and OSS stuff:
>
> http://wiki.lustre.org/ZFS_OSD_Storage_Basics
>
> http://wiki.lustre.org/Introduction_to_Lustre_Object_Storage_Devices_(OSDs)
>
> http://wiki.lustre.org/Creating_Lustre_Object_Storage_Services_(OSS)
>
> There are also sections that cover the MGT and MDTs.
>
> Malcolm.
>
> *From: *lustre-discuss <lustre-discuss-boun...@lists.lustre.org> on
> behalf of Zeeshan Ali Shah <javacli...@gmail.com>
> *Date: *Wednesday, 27 June 2018 at 1:53 am
> *To: *Lustre discussion <lustre-discuss@lists.lustre.org>
> *Subject: *Re: [lustre-discuss] ZFS based OSTs need advice
>
> Our OSTs are based on the Supermicro SSG-J4000-LUSTRE-OST, which is a kind
> of JBOD.
>
> All 360 disks (90 disks x 4 OSTs) appear in /dev/disk on both OSS1 and OSS2.
>
> My idea is to create ZFS pools of raidz2 (9+2 spare), which means around 36
> ZFS pools will be created.
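One such pool might be created along the lines below. This is only a sketch, assuming "9+2" means an 11-disk raidz2 vdev; the pool and device names are placeholders (stable /dev/disk/by-id or by-vdev paths are preferable to sd* names, which can change between boots):

```shell
# One raidz2 vdev of 11 disks per pool; repeat per pool with its own disks.
# canmount=off keeps the pool's root dataset from being mounted as a
# regular ZFS filesystem, since Lustre will manage the datasets.
zpool create -O canmount=off ostpool00 raidz2 \
    /dev/disk/by-id/disk00 /dev/disk/by-id/disk01 /dev/disk/by-id/disk02 \
    /dev/disk/by-id/disk03 /dev/disk/by-id/disk04 /dev/disk/by-id/disk05 \
    /dev/disk/by-id/disk06 /dev/disk/by-id/disk07 /dev/disk/by-id/disk08 \
    /dev/disk/by-id/disk09 /dev/disk/by-id/disk10
```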
>
> Q1) Out of the 36 ZFS pools, shall I create all 36 pools on OSS1? In that
> case those pools can only be imported on OSS1, not on OSS2, so how do I
> achieve active/active HA here?
>
> Q2) The second option is to create 18 ZFS pools on OSS1 and 18 on OSS2, then
> in mkfs.lustre specify OSS1 as primary and OSS2 as secondary (executed on
> OSS1), and the second time run the same command on OSS2, making OSS2 primary
> and OSS1 secondary.
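One note on Q2: with `--servicenode` there is no primary/secondary distinction. Each target is formatted once, listing the NIDs of both servers, and whichever node currently has the pool imported serves it (the primary/secondary style belongs to the older `--failnode` option). A hypothetical invocation for one OST on each side, with fsname, indices, NIDs, and pool names all assumed:

```shell
# Run on OSS1 for one of its 18 pools:
mkfs.lustre --ost --backfstype=zfs --fsname=lustre --index=0 \
    --mgsnode=mgs@tcp --servicenode=oss1@tcp --servicenode=oss2@tcp \
    ostpool00/ost00

# Run on OSS2 for one of its 18 pools; only the index and pool name change:
mkfs.lustre --ost --backfstype=zfs --fsname=lustre --index=18 \
    --mgsnode=mgs@tcp --servicenode=oss1@tcp --servicenode=oss2@tcp \
    ostpool18/ost18
```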
>
> Does it make sense? Am I missing something?
>
> Thanks a lot
>
> /Zee
>
> On Tue, Jun 26, 2018 at 5:38 PM, Dzmitryj Jakavuk <dzmit...@gmail.com>
> wrote:
>
> Hello
>
> You can share the 4 OSTs between the pair of OSS nodes by importing 2 OSTs
> into one OSS and 2 OSTs into the other. At the same time, the HDDs need to
> be shared between all OSS nodes. So under normal conditions one OSS will
> import 2 OSTs and the second OSS will import the other 2 OSTs; in case of HA
> failover, a single OSS can import all 4 OSTs.
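A manual failover of a single OST between the pair might look like the sketch below (mount points and pool names are hypothetical); a Pacemaker configuration, as covered in the wiki pages linked earlier in the thread, automates exactly these steps:

```shell
# On the failing node (if still reachable), stop the OST service and
# release the pool:
umount /mnt/ost0
zpool export ostpool0

# On the surviving node, take over the pool and restart the OST:
zpool import ostpool0
mount -t lustre ostpool0/ost0 /mnt/ost0
```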
>
> Kind Regards
> Dzmitryj Jakavuk
>
>
> > On Jun 26, 2018, at 16:02, Zeeshan Ali Shah <javacli...@gmail.com>
> wrote:
> >
> > We have 2 OSS nodes with 4 shared OSTs. Each OST has 90 disks, so 360
> disks in total.
> >
> > I am in the phase of installing the 2 OSS nodes as active/active, but
> since ZFS pools can only be imported on a single OSS host at a time, how do
> I achieve active/active HA?
> > From what I have read, for active/active both HA hosts should have access
> to the same sets of disks/volumes.
> >
> > Any advice?
> >
> >
> > /Zeeshan
> >
>
> > _______________________________________________
> > lustre-discuss mailing list
> > lustre-discuss@lists.lustre.org
> > http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>
