I'm trying to configure an active/passive iSCSI gateway on OSD nodes serving
an RBD image. Clustering is done using pacemaker/corosync. Does anyone have a
similar working setup? Anything I should be aware of?
Thanks
Dominik
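A rough sketch of the resource stack such a setup usually involves, assuming
pcs, the ocf:ceph:rbd agent shipped in the Ceph source tree, and the stock
iSCSITarget/iSCSILogicalUnit agents from resource-agents; all names, IPs and
the IQN below are placeholders, and the parameters should be checked against
each agent's metadata:

    # map the RBD image, bring up a floating IP, then export the image over
    # iSCSI; grouping the resources keeps them ordered and colocated on a
    # single node, which is what gives the active/passive behaviour
    pcs resource create rbd-img ocf:ceph:rbd name=iscsi-img pool=rbd
    pcs resource create iscsi-vip ocf:heartbeat:IPaddr2 ip=192.168.0.100 cidr_netmask=24
    pcs resource create iscsi-tgt ocf:heartbeat:iSCSITarget \
        iqn=iqn.2016-01.com.example:rbd implementation=tgt
    pcs resource create iscsi-lun0 ocf:heartbeat:iSCSILogicalUnit \
        target_iqn=iqn.2016-01.com.example:rbd lun=0 path=/dev/rbd/rbd/iscsi-img
    pcs resource group add iscsi-gw rbd-img iscsi-vip iscsi-tgt iscsi-lun0

The group ordering matters: the image must be mapped and the VIP up before
the target and LUN can start.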
On Mon, Jan 18, 2016 at 11:35 AM, Dominik Zalewski <dzalew...@optlink.co.uk> wrote:
Hi,
I'm looking into implementing an iSCSI gateway with MPIO using lrbd:
https://github.com/swiftgist/lrbd
https://www.suse.com/docrep/documents/kgu61iyowz/suse_enterprise_storage_2_and_iscsi.pdf
https://www.susecon.com/doc/2015/sessions/TUT16512.pdf
From the above examples:
For iSCSI failover …
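For anyone unfamiliar with lrbd: it keeps a JSON configuration describing
targets, portals and backing pools/images. A sketch of the layout as
presented in the SUSE documents above, with two gateways for MPIO (hostnames,
addresses, the IQN and the image name are placeholders):

    {
      "portals": [
        { "name": "portal1", "addresses": [ "192.168.0.101" ] },
        { "name": "portal2", "addresses": [ "192.168.0.102" ] }
      ],
      "targets": [
        { "target": "iqn.2016-01.com.example:rbd",
          "hosts": [
            { "host": "igw1", "portal": "portal1" },
            { "host": "igw2", "portal": "portal2" }
          ] }
      ],
      "pools": [
        { "pool": "rbd",
          "gateways": [
            { "target": "iqn.2016-01.com.example:rbd",
              "tpg": [ { "image": "iscsi-img" } ] }
          ] }
      ]
    }

Each gateway host then exposes the same image behind its own portal, and the
initiator's multipathd handles path failover.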
Hi,
I would like to hear from people who use cache tiering in Ceph about best
practices and things I should avoid.
I remember hearing that it wasn't all that stable back then. Has that changed
in the Hammer release?
Any tips and tricks are much appreciated!
Thanks
Dominik
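For context, the basic writeback setup being asked about looks roughly like
this; the commands are the documented cache-tiering ones, but the pool names
and thresholds below are placeholders:

    # create a cache pool and attach it in front of an existing backing pool
    ceph osd pool create cache-pool 128
    ceph osd tier add rbd cache-pool
    ceph osd tier cache-mode cache-pool writeback
    ceph osd tier set-overlay rbd cache-pool
    # hit-set tracking and flush/evict thresholds have to be set explicitly,
    # otherwise the tiering agent has nothing to work with
    ceph osd pool set cache-pool hit_set_type bloom
    ceph osd pool set cache-pool hit_set_count 1
    ceph osd pool set cache-pool hit_set_period 3600
    ceph osd pool set cache-pool target_max_bytes 1000000000000
    ceph osd pool set cache-pool cache_target_dirty_ratio 0.4
    ceph osd pool set cache-pool cache_target_full_ratio 0.8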
With journals on a RAID-1 pair, both SSDs are likely to wear out at around the same
time due to writes happening on both of them.
You are only going to get a journal write performance penalty with RAID-1,
since every journal write has to be committed to both SSDs.
Dominik
On Wed, Aug 5, 2015 at 3:37 PM, Dominik Zalewski dzalew...@optlink.co.uk
wrote:
I would suggest splitting OSDs across two or more SSD journals (depending …
On Wed, Aug 5, 2015 at 4:48 PM, Dominik Zalewski dzalew...@optlink.co.uk
wrote:
Yes, there should be a separate partition per OSD. You are probably looking
at a 10-20GB journal partition per OSD. If you are creating your cluster
using ceph-deploy, it can create the journal partitions for you.
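On sizing: the rule of thumb from the Ceph docs is osd journal size =
2 * (expected throughput * filestore max sync interval), so a disk doing
~100MB/s with the default 5s sync interval needs 2 * 100MB/s * 5s = 1GB,
and 10-20GB leaves generous headroom. With ceph-deploy the journal device is
given after the data disk (host and device names below are placeholders):

    # create an OSD on sdb with its journal partition carved out of the SSD sdc
    ceph-deploy osd create node1:sdb:/dev/sdc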
Hi,
I’ve asked the same question a week or so ago (just search the mailing list
archives for EnhanceIO :) and got some interesting answers.
Looks like the project is pretty much dead since it was bought out by HGST.
Even their website has some broken links with regard to EnhanceIO.
I’m keen to try …
Hi,
I'm wondering if anyone is using NVMe SSDs for journals?
The Intel 750 series 400GB NVMe SSD offers good performance and a good price
in comparison to, say, the Intel S3700 400GB.
http://ark.intel.com/compare/71915,86740
My concern would be the MTBF/TBW rating, which is only 1.2M hours and 70GB of
writes per day for 5 years.
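Spelling those endurance numbers out (vendor ratings, arithmetic only):

    Intel 750 400GB:   70 GB/day * 365 * 5 yr                  ~ 128 TB written
    Intel S3700 400GB: 10 drive writes/day * 400 GB * 365 * 5  ~ 7.3 PB written

That is roughly a 50-60x difference in rated write endurance, which matters
for a dedicated journal device that sees every write to its OSDs.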
Hi,
I came across this blog post mentioning the use of EnhanceIO (a fork of
flashcache) as a cache for OSDs:
http://www.sebastien-han.fr/blog/2014/10/06/ceph-and-enhanceio/
https://github.com/stec-inc/EnhanceIO
I'm planning to test it with 5x 1TB HGST Travelstar 7k1000 2.5-inch OSDs,
using a 256GB … year.
Dominik
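For reference, an EnhanceIO cache is created per block device with eio_cli,
along the lines shown in the blog post above; device names and the cache name
here are placeholders, and the flags are worth double-checking against the
project's README:

    # write-back, LRU-managed cache on SSD partition sdc1, fronting OSD disk sdb
    eio_cli create -d /dev/sdb -s /dev/sdc1 -p lru -m wb -c osd0-cache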
On Fri, Jun 26, 2015 at 10:28 AM, Nick Fisk n...@fisk.me.uk wrote:
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Dominik Zalewski
Sent: 26 June 2015 09:59
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Ceph
Be warned that running SSD-based and HDD-based OSDs in the same server is not
recommended. If you need the storage capacity, I'd stick to the
journals-on-SSDs plan.
Can you please elaborate on why running SSD-based and HDD-based OSDs in the
same server is not recommended?
Thanks
Thanks
Dominik
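For what it's worth, one commonly cited reason is CRUSH map maintenance: with
mixed media in one host, the default hierarchy cannot keep pools on the right
device type, so you end up carrying a hand-edited CRUSH map with separate
roots and per-media host buckets, roughly like this sketch (bucket and rule
names are illustrative):

    # decompiled CRUSH map excerpt: each physical host appears twice,
    # once per media type (host buckets node1-ssd/node1-hdd omitted here)
    root ssd {
        id -10
        alg straw
        hash 0  # rjenkins1
        item node1-ssd weight 1.000
    }
    root hdd {
        id -20
        alg straw
        hash 0  # rjenkins1
        item node1-hdd weight 4.000
    }
    rule ssd-pool {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
    }

Every disk added later means editing the map again, which is the usual
maintenance complaint.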