Re: [ceph-users] client did not provide supported auth type

2016-06-27 Thread Goncalo Borges
Hi... Just to clarify, you could have just one, but if that one is problematic then your cluster stops working. It is always better to have more than one, and in odd numbers: 3, 5, ... Regarding your specific problem, I am guessing it is related to keys and permissions because of the
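A minimal sketch of the kind of checks implied here, assuming the default client.admin keyring location (paths and names are illustrative, not taken from the thread):

    # is the monitor reachable and reporting status at all?
    ceph -s
    # does the key the client presents match what the cluster has stored?
    ceph auth get client.admin
    cat /etc/ceph/ceph.client.admin.keyring

If the key in the local keyring differs from what `ceph auth get` returns, clients will be rejected at authentication time.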

Re: [ceph-users] RGW AWS4 SignatureDoesNotMatch when requests with port != 80 or != 443

2016-06-27 Thread Khang Nguyễn Nhật
Thanks Javier Muñoz. I will look into it. 2016-06-24 22:30 GMT+07:00 Javier Muñoz: > Hi Khang, > > Today I had a look at a very similar issue... > > http://tracker.ceph.com/issues/16463 > > I guess it could be the same

Re: [ceph-users] client did not provide supported auth type

2016-06-27 Thread Goncalo Borges
Hi XiuCai, Shouldn't you have at least 2 mons? Cheers G. On 06/28/2016 01:12 PM, 秀才 wrote: Hi, there are 1 mon and 7 osds in my cluster now, but it seems something is wrong, because `rbd -p test reate pet --size 1024` never returns, and the status is always as below: cluster
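If the single monitor really is the weak point, a hedged sketch of growing the quorum with ceph-deploy (mon2 and mon3 are hypothetical hostnames):

    ceph-deploy mon add mon2
    ceph-deploy mon add mon3
    # confirm all monitors have joined the quorum
    ceph quorum_status --format json-pretty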

[ceph-users] client did not provide supported auth type

2016-06-27 Thread 秀才
Hi, there are 1 mon and 7 osds in my cluster now, but it seems something is wrong, because `rbd -p test reate pet --size 1024` never returns, and the status is always as below: cluster 41f3f57f-0ca8-4dac-ba10-9359043ae21a health HEALTH_WARN 256 pgs degraded 256 pgs
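A few standard diagnostics for a cluster stuck in HEALTH_WARN with degraded PGs; a sketch only, assuming the pool is called test as in the command above:

    # which PGs are degraded and why
    ceph health detail
    # are all 7 OSDs up and in, and how are they arranged in the CRUSH tree?
    ceph osd tree
    # does the pool ask for more replicas than the cluster can currently place?
    ceph osd pool get test size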

[ceph-users] ceph-mon.target and ceph-mds.target systemd dependencies in centos7

2016-06-27 Thread Goncalo Borges
Hi All... Just upgraded from infernalis 9.2.0 to jewel 10.2.2 in centos7. I do have a question regarding ceph-mon.target and ceph-mds.target systemd dependencies. Before the upgrade, I had the following situation in a mon host, data host with 8 osds and mds host: # systemctl
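A hedged sketch of inspecting the target/unit wiring on one of the hosts; unit names follow the stock jewel packaging, and the hostname substitution is illustrative:

    # which units does each target pull in?
    systemctl list-dependencies ceph.target
    systemctl list-dependencies ceph-mon.target
    # is the per-daemon unit enabled and attached to its target?
    systemctl is-enabled ceph-mon@$(hostname -s)
    systemctl status ceph-mon.target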

Re: [ceph-users] Pinpointing performance bottleneck / would SSD journals help?

2016-06-27 Thread Christian Balzer
Hello, On Mon, 27 Jun 2016 21:35:35 +0100 Nick Fisk wrote: [snip] > > You need to run iostat on the OSD nodes themselves and see what the disks > are doing. You stated that they are doing ~180 IOPS per disk, which > suggests they are highly saturated and likely to be the cause of the > problem.
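For reference, a minimal example of the suggested measurement, run on each OSD node (sysstat package assumed):

    # extended per-device statistics every 2 seconds;
    # %util near 100 and rising await times indicate saturated spindles
    iostat -x -m 2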

Re: [ceph-users] Auto-Tiering

2016-06-27 Thread Christian Balzer
Hello, On Mon, 27 Jun 2016 21:11:02 +0530 Rakesh Parkiti wrote: > Hi All, > Does CEPH support auto tiering? > Thanks, Rakesh Parkiti Googling for "auto tiering ceph" would have answered that question. In short, it depends on how you define auto tiering.
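The closest built-in mechanism is cache tiering, where a fast pool is layered in front of a slow one. A hedged sketch, with hot-pool and cold-pool as hypothetical pool names:

    ceph osd tier add cold-pool hot-pool
    ceph osd tier cache-mode hot-pool writeback
    ceph osd tier set-overlay cold-pool hot-pool

Objects then migrate between the tiers according to the cache pool's hit-set and flush/evict settings, which is as close to "automatic" as it gets without external tooling.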

Re: [ceph-users] ceph not replicating to all osds

2016-06-27 Thread Christian Balzer
Hello, On Mon, 27 Jun 2016 17:00:42 +0200 Ishmael Tsoaela wrote: > Hi ALL, > > Anyone who can help with this issue would be much appreciated. > Your subject line has nothing to do with your "problem". You're alluding to OSD replication problems, obviously assuming that one client would write to

Re: [ceph-users] ceph not replicating to all osds

2016-06-27 Thread Brad Hubbard
On Tue, Jun 28, 2016 at 1:00 AM, Ishmael Tsoaela wrote: > Hi ALL, > > Anyone who can help with this issue would be much appreciated. > > I have created an image on one client and mounted it on both clients I > have set up. > > When I write data on one client, I cannot access the

Re: [ceph-users] Pinpointing performance bottleneck / would SSD journals help?

2016-06-27 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Daniel Schneller > Sent: 27 June 2016 17:33 > To: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] Pinpointing performance bottleneck / would SSD > journals help? > > On 2016-06-27

Re: [ceph-users] osd current.remove.me.somenumber

2016-06-27 Thread Gregory Farnum
On Sat, Jun 25, 2016 at 11:22 AM, Mike Miller wrote: > Hi, > > what is the meaning of the directory "current.remove.me.846930886" is > /var/lib/ceph/osd/ceph-14? If you're using btrfs, I believe that's a no-longer-required snapshot of the current state of the system. If
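A quick, hedged way to confirm whether that OSD is actually backed by btrfs before drawing conclusions (path taken from the message above):

    df -T /var/lib/ceph/osd/ceph-14
    mount | grep ceph-14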

Re: [ceph-users] cephfs mount /etc/fstab

2016-06-27 Thread Michael Hanscho
On 2016-06-27 11:40, John Spray wrote: > On Sun, Jun 26, 2016 at 10:51 AM, Michael Hanscho wrote: >> On 2016-06-26 10:30, Christian Balzer wrote: >>> >>> Hello, >>> >>> On Sun, 26 Jun 2016 09:33:10 +0200 Willi Fehler wrote: >>> Hello, I found an issue. I've added a

Re: [ceph-users] Pinpointing performance bottleneck / would SSD journals help?

2016-06-27 Thread Daniel Schneller
On 2016-06-27 16:01:07, Lionel Bouton said: On 27/06/2016 17:42, Daniel Schneller wrote: Hi! * Network Link saturation. All links / bonds are well below any relevant load (around 35MB/s or less) ... Are you sure? On each server you have 12 OSDs with a theoretical bandwidth of at

Re: [ceph-users] Pinpointing performance bottleneck / would SSD journals help?

2016-06-27 Thread Lionel Bouton
On 27/06/2016 17:42, Daniel Schneller wrote: > Hi! > > We are currently trying to pinpoint a bottleneck and are somewhat stuck. > > First things first, this is the hardware setup: > > 4x DELL PowerEdge R510, 12x4TB OSD HDDs, journal colocated on HDD > 96GB RAM, 2x6 Cores + HT > 2x1GbE bonded
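For context, rough arithmetic on that setup: 12 spinning OSDs at roughly 100-150 MB/s each give on the order of 1.2-1.8 GB/s of raw disk bandwidth per node, while a 2x1GbE bond tops out around 200-235 MB/s; and with journals colocated on the same HDDs, every write lands on the spindle twice, roughly halving effective write throughput per disk. (These are generic ballpark figures, not measurements from this cluster.)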

Re: [ceph-users] Dramatic performance drop at certain number of objects in pool

2016-06-27 Thread Mark Nelson
On 06/27/2016 03:12 AM, Blair Bethwaite wrote: On 25 Jun 2016 6:02 PM, "Kyle Bader" wrote: fdatasync takes longer when you have more inodes in the slab caches, it's the double edged sword of vfs_cache_pressure. That's a bit sad when, iiuc,

[ceph-users] Pinpointing performance bottleneck / would SSD journals help?

2016-06-27 Thread Daniel Schneller
Hi! We are currently trying to pinpoint a bottleneck and are somewhat stuck. First things first, this is the hardware setup: 4x DELL PowerEdge R510, 12x4TB OSD HDDs, journal colocated on HDD 96GB RAM, 2x6 Cores + HT 2x1GbE bonded interfaces for Cluster Network 2x1GbE bonded interfaces for

[ceph-users] Auto-Tiering

2016-06-27 Thread Rakesh Parkiti
Hi All, Does CEPH support auto tiering? Thanks, Rakesh Parkiti

[ceph-users] ceph not replicating to all osds

2016-06-27 Thread Ishmael Tsoaela
Hi ALL, Anyone who can help with this issue would be much appreciated. I have created an image on one client and mounted it on both clients I have set up. When I write data on one client, I cannot access the data on the other client. What could be causing this issue? root@nodeB:/mnt# ceph osd tree
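For what it's worth, a couple of hedged commands to check what replication is actually configured and where a given object lands (pool and object names below are placeholders):

    # per-pool replica count
    ceph osd dump | grep 'replicated size'
    # which OSDs hold a particular object
    ceph osd map rbd some-object

Note, though, that replication happens at the RADOS level; an RBD image formatted with a non-cluster filesystem and mounted on two clients at once will not show one client's writes to the other, regardless of replica count.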

Re: [ceph-users] Should I use different pool?

2016-06-27 Thread Kanchana. P
The Calamari URL displays the error below: New Calamari Installation This appears to be the first time you have started Calamari and there are no clusters currently configured. 3 Ceph servers are connected to Calamari, but no Ceph cluster has been created yet. Please use ceph-deploy to create a cluster;

Re: [ceph-users] Should I use different pool?

2016-06-27 Thread David
Yes, you should definitely create different pools for different HDD types. Another decision you need to make is whether you want dedicated nodes for SSD or want to mix them in the same node. You need to ensure you have sufficient CPU and fat enough network links to get the most out of your SSDs.
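A hedged sketch of how separate pools per media type are usually wired up, assuming separate CRUSH roots (here called ssd-root and hdd-root) already exist in the CRUSH map and using pre-Luminous syntax; all names and numbers are illustrative:

    ceph osd crush rule create-simple ssd-rule ssd-root host
    ceph osd crush rule create-simple hdd-rule hdd-root host
    # new pool bound to the SSD rule
    ceph osd pool create fast-pool 128 128 replicated ssd-rule
    # or point an existing pool at a rule by its id
    ceph osd pool set slow-pool crush_ruleset 1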

Re: [ceph-users] Jewel Multisite RGW Memory Issues

2016-06-27 Thread Ben Agricola
Hi Pritha, Urgh, not sure what happened to the formatting there - let's try again. At the time, the 'primary' cluster (i.e. the one with the active data set) was receiving backup files from a small number of machines, prior to replication being enabled it was using ~10% RAM on the RadosGW boxes.

Re: [ceph-users] fsmap question

2016-06-27 Thread John Spray
On Mon, Jun 27, 2016 at 8:02 AM, Goncalo Borges wrote: > Hi All ... > > just updated from infernalis to jewel 10.2.2 in centos7 > > The procedure worked fine apart from the issue also reported on this thread: > "osds udev rules not triggered on reboot (jewel,

Re: [ceph-users] cephfs mount /etc/fstab

2016-06-27 Thread John Spray
On Sun, Jun 26, 2016 at 10:51 AM, Michael Hanscho wrote: > On 2016-06-26 10:30, Christian Balzer wrote: >> >> Hello, >> >> On Sun, 26 Jun 2016 09:33:10 +0200 Willi Fehler wrote: >> >>> Hello, >>> >>> I found an issue. I've added a ceph mount to my /etc/fstab. But when I >>> boot
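A commonly used /etc/fstab line for a kernel CephFS mount that waits for the network at boot; monitor address, mount point, and secret file path are placeholders:

    mon1:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  2

The _netdev option is what keeps the mount from being attempted before the network is up.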

Re: [ceph-users] Jewel Multisite RGW Memory Issues

2016-06-27 Thread Ben Agricola
Hi Pritha, At the time, the 'primary' cluster (i.e. the one with the active data set) was receiving backup files from a small number of machines, prior to replication being enabled it was using ~10% RAM on the RadosGW boxes. Without replication enabled, neither cluster sees any spikes in

[ceph-users] Regarding GET BUCKET ACL REST call

2016-06-27 Thread Anand Bhat
Hi, When a GET Bucket ACL REST call is issued with X-Auth-Token set, the call fails. This is due to the bucket in question not having CORS settings. Is there a way to set CORS on the S3 bucket with REST APIs? I know a way using boto S3 that works. I am looking for REST APIs for CORS setting. Regards,

Re: [ceph-users] Dramatic performance drop at certain number of objects in pool

2016-06-27 Thread Blair Bethwaite
On 25 Jun 2016 6:02 PM, "Kyle Bader" wrote: > fdatasync takes longer when you have more inodes in the slab caches, it's the double edged sword of vfs_cache_pressure. That's a bit sad when, iiuc, it's only journals doing fdatasync in the Ceph write path. I'd have expected
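A hedged sketch of inspecting and tuning this on an OSD node (the value 200 is purely illustrative):

    # how much memory dentry/inode caches currently occupy
    slabtop -o | head -20
    # current setting (kernel default is 100); higher values reclaim
    # inode/dentry caches more aggressively
    sysctl vm.vfs_cache_pressure
    sysctl -w vm.vfs_cache_pressure=200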

Re: [ceph-users] pg scrub and auto repair in hammer

2016-06-27 Thread Christian Balzer
Hello, On Mon, 27 Jun 2016 09:49:54 +0200 Dan van der Ster wrote: > On Mon, Jun 27, 2016 at 2:14 AM, Christian Balzer wrote: > > On Sun, 26 Jun 2016 19:48:18 +0200 Stefan Priebe wrote: > > > >> Hi, > >> > >> is there any option or chance to have auto repair of pgs in hammer? >

Re: [ceph-users] pg scrub and auto repair in hammer

2016-06-27 Thread Dan van der Ster
On Mon, Jun 27, 2016 at 2:14 AM, Christian Balzer wrote: > On Sun, 26 Jun 2016 19:48:18 +0200 Stefan Priebe wrote: > >> Hi, >> >> is there any option or chance to have auto repair of pgs in hammer? >> > Short answer: > No, in any version of Ceph. Well, jewel has a new option to
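For reference, the manual path available in hammer, plus the jewel-era option alluded to above (the PG id is a placeholder, and the option name should be verified against your version):

    # hammer and later: scrub and repair a single PG by hand
    ceph pg deep-scrub 2.1f
    ceph pg repair 2.1f
    # jewel: opt-in automatic repair of scrub errors (off by default)
    # ceph.conf: osd scrub auto repair = true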

Re: [ceph-users] Jewel Multisite RGW Memory Issues

2016-06-27 Thread Pritha Srivastava
Corrected the formatting of the e-mail sent earlier. - Original Message - > From: "Pritha Srivastava" > To: ceph-users@lists.ceph.com > Sent: Monday, June 27, 2016 9:15:36 AM > Subject: Re: [ceph-users] Jewel Multisite RGW Memory Issues > > > I have 2 distinct

Re: [ceph-users] image map failed

2016-06-27 Thread Ishmael Tsoaela
Hi Rakesh, That works as well. I also disabled the other features. rbd feature disable data/data_01 exclusive-lock Thanks for the response On Fri, Jun 24, 2016 at 6:22 AM, Rakesh Parkiti wrote: > Hi Ishmael > > Once try to create image with image-feature as
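For completeness, a hedged example of creating an image with only the layering feature enabled from the start, so the kernel client can map it without disabling anything afterwards (the image name is illustrative):

    rbd create data/data_02 --size 1024 --image-feature layering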

[ceph-users] fsmap question

2016-06-27 Thread Goncalo Borges
Hi All ... just updated from infernalis to jewel 10.2.2 in centos7 The procedure worked fine apart from the issue also reported on this thread: "osds udev rules not triggered on reboot (jewel, jessie)". Apart from that, I am not understanding the fsmap output provided by 'ceph -s' which is
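A couple of commands that show the same filesystem/MDS information outside of 'ceph -s' (jewel syntax):

    ceph fs ls
    ceph mds stat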