Re: [ceph-users] SIGHUP to ceph processes every morning

2017-01-25 Thread Torsten Casselt
Hi, that makes sense. Thanks for the fast answer! On 26.01.2017 08:04, Paweł Sadowski wrote: > Hi, > > 6:25 points to a daily cron job; it's probably logrotate trying to force > ceph to reopen its logs. > > > On 01/26/2017 07:34 AM, Torsten Casselt wrote: >> Hi, >> >> I get the following line in

Re: [ceph-users] SIGHUP to ceph processes every morning

2017-01-25 Thread Paweł Sadowski
Hi, 6:25 points to a daily cron job; it's probably logrotate trying to force ceph to reopen its logs. On 01/26/2017 07:34 AM, Torsten Casselt wrote: > Hi, > > I get the following line in journalctl: > > Jan 24 06:25:02 ceph01 ceph-osd[28398]: 2017-01-24 06:25:02.302770 > 7f0655516700 -1 received
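
For reference, the SIGHUP comes from the logrotate snippet shipped with the Ceph packages; a rough sketch of what such a postrotate hook looks like (exact path and contents vary by package version, so treat this as illustrative only, not the reporter's actual file):

    # /etc/logrotate.d/ceph -- illustrative sketch only
    /var/log/ceph/*.log {
        daily
        rotate 7
        compress
        sharedscripts
        postrotate
            # ask the daemons to reopen their log files; this is the
            # "killall -q -1 ..." that shows up in journalctl as a Hangup
            killall -q -1 ceph-mon ceph-mds ceph-osd ceph-fuse radosgw || true
        endscript
        missingok
        notifempty
    }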

[ceph-users] SIGHUP to ceph processes every morning

2017-01-25 Thread Torsten Casselt
Hi, I get the following line in journalctl: Jan 24 06:25:02 ceph01 ceph-osd[28398]: 2017-01-24 06:25:02.302770 7f0655516700 -1 received signal: Hangup from PID: 18157 task name: killall -q -1 ceph-mon ceph-mds ceph-osd ceph-fuse radosgw UID: 0 It happens every day at the same time which is

Re: [ceph-users] [Ceph-large] Help with the Hammer to Jewel upgrade procedure without losing write access to the buckets

2017-01-25 Thread David Turner
I just checked the previous thread about this and that was backwards. Creating RBDs was broken when we upgraded the clients before the cluster. The "Welcome to Ceph-Large" thread is where the discussion took place.

Re: [ceph-users] Objects Stuck Degraded

2017-01-25 Thread Richard Bade
Hi Everyone, Just an update to this in case anyone has the same issue. This seems to have been caused by ceph osd reweight-by-utilization. Because we have two pools that map to two separate sets of disks, and one pool was more full than the other, the reweight-by-utilization had reduced the weight
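
For anyone hitting the same thing, a hedged sketch of how to inspect and undo those override weights (the OSD id below is a placeholder, not one from this cluster):

    # show per-OSD utilisation and the current override reweight values
    ceph osd df tree

    # on newer releases, dry-run what reweight-by-utilization would change
    ceph osd test-reweight-by-utilization

    # reset an individual OSD's override reweight back to 1.0
    ceph osd reweight 12 1.0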

Re: [ceph-users] Help with the Hammer to Jewel upgrade procedure without losing write access to the buckets

2017-01-25 Thread Mohammed Naser
George, I believe the supported upgrade order is monitors first, then OSDs, then metadata servers, and finally object gateways. I would suggest trying the supported path; if you're still having issues *with* the correct upgrade sequence, I would look further into it. Thanks, Mohammed > On Jan 25, 2017, at 6:24
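
A rough sketch of that ordering using systemd units (daemon ids are placeholders, and older Hammer installs may still be on sysvinit/upstart, so adjust accordingly):

    ceph osd set noout                         # avoid rebalancing during restarts

    # 1. monitors, one at a time, waiting for quorum between restarts
    systemctl restart ceph-mon@<mon-id>

    # 2. OSDs, host by host, waiting for active+clean
    systemctl restart ceph-osd@<osd-id>

    # 3. metadata servers, then 4. object gateways
    systemctl restart ceph-mds@<mds-id>
    systemctl restart ceph-radosgw@rgw.<name>

    ceph osd unset noout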

[ceph-users] Help with the Hammer to Jewel upgrade procedure without losing write access to the buckets

2017-01-25 Thread George Mihaiescu
Hi, I need your help with upgrading our cluster from Hammer (last version) to Jewel 10.2.5 without losing write access to Radosgw. We have a fairly large cluster (4.3 PB raw) mostly used to store large S3 objects, and we currently have more than 500 TB of data in the ".rgw.buckets" pool, so I'm

Re: [ceph-users] rgw static website docs 404

2017-01-25 Thread Robin H. Johnson
On Fri, Jan 20, 2017 at 11:37:47AM +0100, Wido den Hollander wrote: > Maybe the dev didn't want to write docs, he/she forgot or just didn't get to > it yet. > > It would be very much appreciated if you would send a PR with the updated > documentation :) As the dev, I did write docs, and have

Re: [ceph-users] systemd and ceph-mon autostart on Ubuntu 16.04

2017-01-25 Thread Wido den Hollander
> On 25 January 2017 at 20:25, Patrick Donnelly wrote: > > > On Wed, Jan 25, 2017 at 2:19 PM, Wido den Hollander wrote: > > Hi, > > > > I thought this issue was resolved a while ago, but while testing Kraken > > with BlueStore I ran into the problem
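
The workaround usually suggested for this is to enable the relevant units explicitly (the monitor id below is assumed to be the short hostname; adjust to your setup):

    systemctl enable ceph.target
    systemctl enable ceph-mon.target
    systemctl enable ceph-mon@$(hostname -s)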

Re: [ceph-users] systemd and ceph-mon autostart on Ubuntu 16.04

2017-01-25 Thread Patrick Donnelly
On Wed, Jan 25, 2017 at 2:19 PM, Wido den Hollander wrote: > Hi, > > I thought this issue was resolved a while ago, but while testing Kraken with > BlueStore I ran into the problem again. > > My monitors are not being started on boot: > > Welcome to Ubuntu 16.04.1 LTS (GNU/Linux

Re: [ceph-users] ***Suspected Spam*** dm-crypt journal replacement

2017-01-25 Thread Steve Taylor
No need to re-create the OSD. The easiest way to replace the journal is by creating the new journal partition with the same partition guid. You can use 'sgdisk -n <num>:<start>:<end> --change-name="<num>:ceph journal" --partition-guid=<num>:<journal-guid> --typecode=<num>:45b0969e-9b03-4f30-b4c6-5ec00ceff106 <device>' to create the new journal
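
A sketch of the workflow around that sgdisk call (the OSD id, device and GUID below are placeholders, not values from this thread):

    # stop the OSD and flush its journal before touching the partition
    systemctl stop ceph-osd@12
    ceph-osd -i 12 --flush-journal

    # recreate the journal partition, reusing the old partition GUID so the
    # OSD's journal symlink still resolves
    sgdisk -n 1:0:+10G --change-name="1:ceph journal" \
           --partition-guid=1:<old-journal-guid> \
           --typecode=1:45b0969e-9b03-4f30-b4c6-5ec00ceff106 /dev/sdX

    # initialise the new journal and bring the OSD back
    ceph-osd -i 12 --mkjournal
    systemctl start ceph-osd@12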

Re: [ceph-users] Health_Warn recovery stuck / crushmap problem?

2017-01-25 Thread Jonas Stunkat
Thanks for the response, problem solved. I added "osd crush update on start = false" in my ceph.conf under the [osd] section. I decided to go this way as this environment is just not big enough to use custom hooks. After starting and then inserting my crushmap the recovery started and
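
For reference, the setting in question sits in the [osd] section of ceph.conf:

    [osd]
    # keep OSDs from re-inserting themselves into the default CRUSH location
    # on startup, so a hand-edited CRUSH map survives daemon restarts
    osd crush update on start = false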

Re: [ceph-users] CephFS - PG Count Question

2017-01-25 Thread John Spray
On Wed, Jan 25, 2017 at 12:56 PM, James Wilkins wrote: > Apologies if this is documented but I could not find any clear-cut advice > > > > Is it better to have a higher PG count for the metadata pool, or the data > pool of a CephFS filesystem? > > > > If I look at >
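
For context, a minimal sketch of how the two pools are created and tied to a filesystem (pool names and PG counts below are placeholders, not a recommendation):

    ceph osd pool create cephfs_metadata 64
    ceph osd pool create cephfs_data 1024
    ceph fs new cephfs cephfs_metadata cephfs_data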

Re: [ceph-users] Health_Warn recovery stuck / crushmap problem?

2017-01-25 Thread LOPEZ Jean-Charles
Hi Jonas, In your current CRUSH map your root ssd contains 2 nodes, but those two nodes contain no OSDs, and this is causing the problem. Looks like you forgot to set the parameter osd_crush_update_on_start = false before applying your special CRUSH map. Hence when you restarted the OSDs they wen
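
A hedged sketch of how to check and fix the placement by hand once osd_crush_update_on_start = false is in place (bucket names, OSD id and weight are placeholders):

    # verify which root and host each OSD actually ended up under
    ceph osd tree

    # move an OSD back under the intended host/root in the ssd hierarchy
    ceph osd crush set osd.3 1.0 root=ssd host=node1-ssd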