In this case I prefer to use the LSI Syncro CS HA DAS setup (http://www.storagereview.com/lsi_syncro_cs_ha_das_storage_overview), which you can buy on eBay for about 1,000 USD.
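As for the original LVM question: a minimal sketch of what that could look like with pcs, assuming two volume groups (here hypothetically named vg_a and vg_b) carved from LUNs on the same DAS, each activated exclusively on one node via the ocf:heartbeat:LVM agent (resource and VG names are illustrative, not from the thread):

```shell
# Two VGs on the same shared DAS, each managed as its own cluster resource.
# exclusive=true ensures a VG is active on only one node at a time.
pcs resource create lvm_a ocf:heartbeat:LVM volgrpname=vg_a exclusive=true
pcs resource create lvm_b ocf:heartbeat:LVM volgrpname=vg_b exclusive=true

# Optionally keep the two VGs on different nodes:
pcs constraint colocation add lvm_b with lvm_a score=-INFINITY
```

The cluster then only cares about activating/deactivating each VG; it does not monitor the underlying PVs or LUNs, which matches what Ulrich describes below.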
Best regards,

Kristián Feldsam
Tel.: +420 773 303 353, +421 944 137 535
E-mail: supp...@feldhost.cz
www.feldhost.cz - FeldHost™ – professional hosting and server services at fair prices.

FELDSAM s.r.o.
V rohu 434/3
Praha 4 – Libuš, PSČ 142 00
IČ: 290 60 958, DIČ: CZ290 60 958
C 200350, registered with the Municipal Court in Prague
Bank: Fio banka a.s.
Account number: 2400330446/2010
BIC: FIOBCZPPXX
IBAN: CZ82 2010 0000 0024 0033 0446

> On 27 Jul 2017, at 15:20, Ulrich Windl <ulrich.wi...@rz.uni-regensburg.de> wrote:
>
> Hi!
>
> I think it will work, because the cluster does not monitor the PVs, partitions, or LUNs. It just checks whether you can activate the LVs (i.e. the VG). That's what I know...
>
> Regards,
> Ulrich
>
>>>> lejeczek <pelj...@yahoo.co.uk> wrote on 27.07.2017 at 15:05 in message
> <636398a2-e8ea-644b-046b-ff12358de...@yahoo.co.uk>:
>> hi fellas
>>
>> I realise this might be quite a specialized topic, since it concerns
>> hardware DAS (SAS2), LVM, and the cluster itself, but I'm hoping that
>> with some luck an expert will peep over here and I'll get some or all
>> of the answers.
>>
>> Question:
>> Can the cluster manage two (or more) LVM resources that live on the
>> same single DAS storage, and have those resources (e.g. one LVM runs
>> on LUNs 1 & 2, the other on LUNs 3 & 4) run on different nodes (which
>> naturally all connect to that single DAS)?
>>
>> Now, I guess this might be something many do already, and many will
>> say: trivial. In which case a few firm "yes" confirmations will mean:
>> typical, just do it. Or it could be something unusual and untested
>> that might/should work when done with care and special "preparation"?
>>
>> I understand that a lot depends on what the hardware and kernel do,
>> but if possible I'd leave that out for now and ask only about the
>> cluster itself - do you do it?
>>
>> many thanks.
>> L.
_______________________________________________
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org