Re: [ceph-users] Luminous - bad performance

2018-01-25 Thread Steven Vacaroaia
> Sent: Wednesday, 24 January 2018 19:47 > To: David Turner > Cc: ceph-users > Subject: Re: [ceph-users] Luminous - bad performance > > Hi, > > I have bundled the public NICs and added 2 more monitors (running on 2 > of the 3 OSD hosts). This seems to improve things

Re: [ceph-users] Luminous - bad performance

2018-01-24 Thread Marc Roos
ceph osd pool application enable XXX rbd -----Original Message----- From: Steven Vacaroaia [mailto:ste...@gmail.com] Sent: Wednesday, 24 January 2018 19:47 To: David Turner Cc: ceph-users Subject: Re: [ceph-users] Luminous - bad performance Hi, I have bundled the public NICs and added 2 more
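For context, a minimal sketch of that tip on a Luminous cluster, using a hypothetical pool name ("ssdpool") in place of XXX:

  # tag the pool with the rbd application (Luminous warns about pools without an application set)
  ceph osd pool application enable ssdpool rbd
  # confirm which applications are enabled on the pool
  ceph osd pool application get ssdpool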

Re: [ceph-users] Luminous - bad performance

2018-01-24 Thread Steven Vacaroaia
Hi, I have bundled the public NICs and added 2 more monitors (running on 2 of the 3 OSD hosts). This seems to improve things, but I still have high latency. Also, performance of the SSD pool is worse than the HDD pool, which is very confusing. The SSD pool is using one Toshiba PX05SMB040Y per server (for a total
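A common first check when an SSD pool underperforms (not from the thread, just a hedged sketch) is whether the SSD handles small synchronous writes well, since BlueStore's WAL depends on O_DSYNC performance; /dev/sdX below is a placeholder for the Toshiba device and the test overwrites data on it:

  # raw 4k sync-write test with fio; run only against a spare disk or partition
  fio --name=sync-write-test --filename=/dev/sdX --direct=1 --sync=1 \
      --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based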

Re: [ceph-users] Luminous - bad performance

2018-01-22 Thread Steven Vacaroaia
Hi David, I noticed the public interface of the server I am running the test from is heavily used, so I will bond that one too. I doubt, though, that this explains the poor performance. Thanks for your advice. Steven On 22 January 2018 at 12:02, David Turner wrote: > I'm
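As a hedged aside, interface saturation can be confirmed with standard Linux tools before re-cabling; the interface name below is a placeholder:

  # per-interface throughput, one-second samples
  sar -n DEV 1 5
  # or watch a single NIC live
  iftop -i eno1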

Re: [ceph-users] Luminous - bad performance

2018-01-22 Thread David Turner
I'm not speaking to anything other than your configuration. "I am using 2 x 10Gb bonded (BONDING_OPTS="mode=4 miimon=100 xmit_hash_policy=1 lacp_rate=1") for cluster and 1 x 1Gb for public." It might not be a bad idea for you to forgo the public network on the 1Gb interfaces and either put
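A sketch of what dropping the separate 1Gb public network would look like in ceph.conf, assuming the bonded 10Gb subnet is 10.10.10.0/24 (illustrative values, not from the thread):

  [global]
  # put both client and replication traffic on the bonded interfaces
  public network  = 10.10.10.0/24
  cluster network = 10.10.10.0/24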

Re: [ceph-users] Luminous - bad performance

2018-01-22 Thread Steven Vacaroaia
I did test with rados bench; here are the results:
rados bench -p ssdpool 300 -t 12 write --no-cleanup && rados bench -p ssdpool 300 -t 12 seq
Total time run:       300.322608
Total writes made:    10632
Write size:           4194304
Object size:          4194304
Bandwidth (MB/sec):
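For completeness, a hedged sketch of a fuller pass on the same pool, adding a random-read phase and removing the benchmark objects afterwards:

  rados bench -p ssdpool 300 -t 12 write --no-cleanup
  rados bench -p ssdpool 300 -t 12 seq
  rados bench -p ssdpool 300 -t 12 rand
  # remove the objects left behind by --no-cleanup
  rados -p ssdpool cleanup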

Re: [ceph-users] Luminous - bad performance

2018-01-22 Thread Steven Vacaroaia
Sorry, I sent the message too soon. Here is more info:
Vendor Id  : SEAGATE
Product Id : ST600MM0006
State      : Online
Disk Type  : SAS, Hard Disk Device
Capacity   : 558.375 GB
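The listing above looks like output from a Dell PERC/MegaRAID management tool; a hedged alternative for pulling per-disk details through the controller is smartctl, where the device path and the megaraid index N are placeholders:

  # query a SAS disk sitting behind a PERC/MegaRAID controller
  smartctl -a -d megaraid,N /dev/sda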

Re: [ceph-users] Luminous - bad performance

2018-01-22 Thread Steven Vacaroaia
Hi David, Yes, I meant no separate partitions for WAL and DB. I am using 2 x 10Gb bonded (BONDING_OPTS="mode=4 miimon=100 xmit_hash_policy=1 lacp_rate=1") for the cluster network and 1 x 1Gb for the public network. Disks are:
Vendor Id  : TOSHIBA
Product Id : PX05SMB040Y
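For reference, a hedged sketch of a matching RHEL/CentOS ifcfg-bond0; xmit_hash_policy=1 is the numeric form of layer3+4, and the address values are placeholders, not taken from the thread:

  DEVICE=bond0
  TYPE=Bond
  BONDING_MASTER=yes
  BONDING_OPTS="mode=4 miimon=100 xmit_hash_policy=layer3+4 lacp_rate=1"
  BOOTPROTO=none
  IPADDR=10.10.10.11
  PREFIX=24
  ONBOOT=yes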

Re: [ceph-users] Luminous - bad performance

2018-01-22 Thread Sage Weil
On Mon, 22 Jan 2018, Steven Vacaroaia wrote: > Hi, > > I'd appreciate it if you can provide some guidance/suggestions regarding > performance issues on a test cluster (3 x DELL R620, 1 Enterprise SSD, 3 x > 600 GB Enterprise HDD, 8 cores, 64 GB RAM) > > I created 2 pools (replication factor 2)
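A hedged sketch of the kind of quick per-OSD sanity check that helps narrow down whether a single device or the network is the bottleneck (the OSD id is a placeholder):

  # writes 1 GiB to the OSD's object store and reports throughput for that OSD alone
  ceph tell osd.0 bench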

Re: [ceph-users] Luminous - bad performance

2018-01-22 Thread David Turner
Disk models, other hardware information including CPU, network config? You say you're using Luminous, but then say journal on same device. I'm assuming you mean that you just have the bluestore OSD configured without a separate WAL or DB partition? Any more specifics you can give will be
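Two hedged ways to answer that question on a Luminous node; the OSD id and device paths are placeholders:

  # show the bluefs/bluestore device layout recorded for a given OSD
  ceph osd metadata 0 | grep -iE 'bluefs|devices'
  # for comparison, creating an OSD with a separate DB partition would look roughly like:
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1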

[ceph-users] Luminous - bad performance

2018-01-22 Thread Steven Vacaroaia
Hi, I'd appreciate it if you can provide some guidance/suggestions regarding performance issues on a test cluster (3 x DELL R620, 1 Enterprise SSD, 3 x 600 GB Enterprise HDD, 8 cores, 64 GB RAM). I created 2 pools (replication factor 2), one with only SSD and the other with only HDD (journal on
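For readers following along, a hedged sketch of how SSD-only and HDD-only replicated pools are typically built on Luminous with CRUSH device classes; rule names, pool names, and PG counts below are illustrative, not taken from the thread:

  # one CRUSH rule per device class, failure domain = host
  ceph osd crush rule create-replicated ssd_rule default host ssd
  ceph osd crush rule create-replicated hdd_rule default host hdd
  # create the pools against those rules
  ceph osd pool create ssdpool 128 128 replicated ssd_rule
  ceph osd pool create hddpool 128 128 replicated hdd_rule
  # replication factor 2, matching the setup described above
  ceph osd pool set ssdpool size 2
  ceph osd pool set hddpool size 2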