Re: [ceph-users] IO rate-limiting with Ceph RBD (and libvirt)

2018-03-21 Thread Wido den Hollander
On 03/21/2018 06:48 PM, Andre Goree wrote: > I'm trying to determine the best way to go about configuring IO > rate-limiting for individual images within an RBD pool. > > Here [1], I've found that OpenStack appears to use Libvirt's "iotune" > parameter, however I seem to recall reading about bei

Re: [ceph-users] DELL R620 - SSD recommendation

2018-03-21 Thread Nghia Than
If you want speed and IOPS, try: PM863a or SM863a (PM863a is slightly cheaper). If you want high endurance, try the Intel DC S3700 series. Do not use consumer SSDs for caching, or desktop HDDs for OSDs. what is the highest HDD capacity that you were able to use in the R620? This depends on your

Re: [ceph-users] Difference in speed on Copper or Fiber ports on switches

2018-03-21 Thread Subhachandra Chandra
Looking at the latency numbers in this thread, it seems to be a cut-through switch. Subhachandra On Wed, Mar 21, 2018 at 12:58 PM, Subhachandra Chandra < schan...@grailbio.com> wrote: > Latency is a concern if your application is sending one packet at a time > and waiting for a reply. If you are

[ceph-users] DELL R620 - SSD recommendation

2018-03-21 Thread Steven Vacaroaia
Hi, It would be appreciated if you could recommend some SSD models (200GB or less). I am planning to deploy 2 SSDs and 6 HDDs (for a 1:3 ratio) in a few DELL R620 servers with 64GB RAM. Also, what is the highest HDD capacity that you were able to use in the R620? Note: I apologize for asking "research e

Re: [ceph-users] Difference in speed on Copper or Fiber ports on switches

2018-03-21 Thread Subhachandra Chandra
Latency is a concern if your application is sending one packet at a time and waiting for a reply. If you are streaming large blocks of data, the first packet is delayed by the network latency but after that you will receive a 10Gbps stream continuously. The latency for jumbo frames vs 1500 byte fra

Re: [ceph-users] Memory leak in Ceph OSD?

2018-03-21 Thread Kjetil Joergensen
I retract my previous statement(s). My current suspicion is that this isn't a leak so much as it is load-driven; after enough waiting, it generally seems to settle around some equilibrium. We do seem to sit at roughly mempools x 2.4 ~ ceph-osd RSS, which is on the higher side (I see documentation
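To compare the two numbers being discussed here, the OSD's own mempool accounting can be read from its admin socket and set against the RSS the kernel reports; osd.12 below is just an example id:

  # Per-pool and total byte counts tracked by the OSD's mempools
  ceph daemon osd.12 dump_mempools
  # Resident set size of the matching ceph-osd process, for comparison
  ps -o pid,rss,cmd -C ceph-osd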

[ceph-users] Bluestore cluster, bad IO perf on blocksize<64k... could it be throttling ?

2018-03-21 Thread Frederic BRET
Hi all, The context: - Test cluster alongside the production one - Fresh install on Luminous - Choice of Bluestore (coming from Filestore) - Default config (including wpq queuing) - 6 nodes SAS12, 14 OSDs, 2 SSDs, 2 x 10Gb per node, far more Gb at each switch uplink... - R3 pool, 2 nodes per site - separat
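If throttling is the suspicion, the relevant BlueStore throttles and their live counters can be inspected on an OSD's admin socket; a rough sketch, with osd.3 as a placeholder id:

  # Configured throttle limits (bluestore_throttle_bytes,
  # bluestore_throttle_deferred_bytes, cost per IO, ...)
  ceph daemon osd.3 config show | grep bluestore_throttle
  # Live counters; a throttle whose current value keeps sitting at its
  # maximum is a candidate bottleneck for small-block writes
  ceph daemon osd.3 perf dump | grep -A 8 '"throttle-bluestore'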

[ceph-users] IO rate-limiting with Ceph RBD (and libvirt)

2018-03-21 Thread Andre Goree
I'm trying to determine the best way to go about configuring IO rate-limiting for individual images within an RBD pool. Here [1], I've found that OpenStack appears to use Libvirt's "iotune" parameter; however, I seem to recall reading about being able to do so via Ceph's settings. Is there a
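For reference, libvirt can apply such per-disk limits from the command line as well as through the domain XML; a minimal sketch, assuming a guest named "vm01" whose RBD image is attached as target "vdb" (both names are placeholders):

  # Cap the RBD-backed disk at 500 IOPS and ~50 MB/s for the running guest
  # (--live) and persist it in the domain XML <iotune> element (--config)
  virsh blkdeviotune vm01 vdb \
      --total-iops-sec 500 \
      --total-bytes-sec 52428800 \
      --live --config

Limits set this way are enforced by QEMU on the hypervisor, so they only apply to IO issued through that guest, not to other clients of the same image.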

Re: [ceph-users] Difference in speed on Copper or Fiber ports on switches

2018-03-21 Thread Willem Jan Withagen
On 21-3-2018 13:47, Paul Emmerich wrote: > Hi, > > 2.3µs is a typical delay for a 10GBASE-T connection. But fiber or SFP+ > DAC connections should be faster: switches are typically in the range of > ~500ns to 1µs. > > > But you'll find that this small difference in latency induced by the > switc

Re: [ceph-users] Prometheus RADOSGW usage exporter

2018-03-21 Thread Berant Lemmenes
My apologies, I don't seem to be getting notifications on PRs. I'll review this week. Thanks, Berant On Mon, Mar 19, 2018 at 5:55 AM, Konstantin Shalygin wrote: > Hi Berant > > > I've created a Prometheus exporter that scrapes the RADOSGW Admin Ops API and >> exports the usage information for all
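For anyone reproducing this locally: the exporter only has data to scrape if usage logging is enabled on the gateways, and the same information can be pulled by hand; the section name and uid below are placeholders:

  # ceph.conf on the RGW hosts
  [client.rgw.gateway1]
  rgw enable usage log = true

  # Inspect the collected usage directly
  radosgw-admin usage show --uid=someuser --show-log-entries=false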

Re: [ceph-users] Difference in speed on Copper or Fiber ports on switches

2018-03-21 Thread Paul Emmerich
Hi, 2.3µs is a typical delay for a 10GBASE-T connection. But fiber or SFP+ DAC connections should be faster: switches are typically in the range of ~500ns to 1µs. But you'll find that this small difference in latency induced by the switch will be quite irrelevant in the grand scheme of things wh

Re: [ceph-users] Object Gateway - Server Side Encryption

2018-03-21 Thread Vik Tara
On 15/03/18 10:45, Vik Tara wrote: > > On 14/03/18 12:31, Amardeep Singh wrote: > >> Though I have now another issue because I am using Multisite setup >> with one zone for data and second zone for metadata with elastic >> search tier. >> >> http://docs.ceph.com/docs/master/radosgw/elastic-sync-m
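As a point of reference for the SSE part of this thread: Luminous radosgw supports SSE-C through the usual S3 headers, plus a testing-only mode where every object written through a gateway is encrypted with one static key; a sketch of the latter (the instance name and key are placeholders, and this mode is explicitly not for production):

  [client.rgw.gateway1]
  rgw crypt require ssl = false
  rgw crypt default encryption key = <base64-encoded 256-bit key>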

[ceph-users] Difference in speed on Copper or Fiber ports on switches

2018-03-21 Thread Willem Jan Withagen
Hi, I just ran into this table for a 10G Netgear switch we use: Fiber delays: 10 Gbps fiber delay (64-byte packets): 1.827 µs 10 Gbps fiber delay (512-byte packets): 1.919 µs 10 Gbps fiber delay (1024-byte packets): 1.971 µs 10 Gbps fiber delay (1518-byte packets): 1.905 µs Co
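A quick sanity check on those figures: at 10 Gbps the serialization time alone is

  64-byte frame  :   64 * 8 bit / 10 Gbit/s ~ 0.05 µs
  1518-byte frame: 1518 * 8 bit / 10 Gbit/s ~ 1.21 µs

A store-and-forward switch would add roughly that full 1.2 µs difference between the smallest and largest frames; the quoted delays only move from 1.827 µs to 1.905 µs, which fits the cut-through behaviour mentioned elsewhere in this thread.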

Re: [ceph-users] Separate BlueStore WAL/DB : best scenario ?

2018-03-21 Thread Ronny Aasen
On 21 March 2018 11:27, Hervé Ballans wrote: Hi all, I have a question regarding a possible scenario to put both WAL and DB on a separate SSD device for an OSD node composed of 22 OSDs (HDD SAS 10k, 1.8 TB). I'm thinking of 2 options (at about the same price): - add 2 SSD SAS Write Intensiv

Re: [ceph-users] Updating standby mds from 12.2.2 to 12.2.4 caused up:active 12.2.2 mds's to suicide

2018-03-21 Thread Martin Palma
Just ran into this problem on our production cluster. It would have been nice if the 12.2.4 release notes had been updated to inform users about this. Best, Martin On Wed, Mar 14, 2018 at 9:53 PM, Gregory Farnum wrote: > On Wed, Mar 14, 2018 at 12:41 PM, Lars Marowsky-Bree wrote: >> On 20

Re: [ceph-users] wal and db device on SSD partitions?

2018-03-21 Thread Ján Senko
2018-03-21 8:56 GMT+01:00 Caspar Smit : > 2018-03-21 7:20 GMT+01:00 ST Wong (ITSC) : > >> Hi all, >> >> >> >> We got some decommissioned servers from other projects for setting up >> OSDs. They have 10 2TB SAS disks and 4 2TB SSDs. >> >> We are trying to test with BlueStore and hope to place WAL and DB de

[ceph-users] Separate BlueStore WAL/DB : best scenario ?

2018-03-21 Thread Hervé Ballans
Hi all, I have a question regarding a possible scenario to put both WAL and DB on a separate SSD device for an OSD node composed of 22 OSDs (HDD SAS 10k, 1.8 TB). I'm thinking of 2 options (at about the same price): - add 2 SAS Write Intensive SSDs (10 DWPD) - or add a single 800 GB NVMe SSD

Re: [ceph-users] wal and db device on SSD partitions?

2018-03-21 Thread Caspar Smit
2018-03-21 7:20 GMT+01:00 ST Wong (ITSC) : > Hi all, > > > > We got some decommissioned servers from other projects for setting up > OSDs. They have 10 2TB SAS disks and 4 2TB SSDs. > > We are trying to test with BlueStore and hope to place WAL and DB devices on > SSD. Need advice on some newbie question
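One way to lay this out with the tooling shipped in Luminous is to carve a DB (and optionally a WAL) partition per OSD on the SSD and point ceph-volume at them; device names and sizes below are only an example:

  # Two partitions on the SSD for one OSD's DB and WAL
  sgdisk -n 1:0:+30G -c 1:"osd-db-sdb"  /dev/sde
  sgdisk -n 2:0:+2G  -c 2:"osd-wal-sdb" /dev/sde

  # HDD /dev/sdb as the data device, DB and WAL on the SSD partitions
  ceph-volume lvm create --bluestore \
      --data /dev/sdb \
      --block.db /dev/sde1 \
      --block.wal /dev/sde2

If the WAL would live on the same device as the DB anyway, the separate --block.wal is unnecessary: the WAL is placed on the DB device by default.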