Re: [ceph-users] Cannot shutdown monitors

2017-02-10 Thread Michael Andersen
I definitely had all the rbd volumes unmounted. I am not sure if they were unmapped. I can try that. On Fri, Feb 10, 2017 at 9:10 PM, Brad Hubbard wrote: > On Sat, Feb 11, 2017 at 2:58 PM, Brad Hubbard wrote: > > Just making sure the list sees this for those that are following. > > > > On Sat,
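
For anyone checking, mapped RBD devices can be listed and released like this before a reboot (the device path below is an example):

    rbd showmapped             # list kernel-mapped RBD images and their /dev/rbdX devices
    sudo rbd unmap /dev/rbd0   # release a mapped device; repeat for each one listed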

Re: [ceph-users] OSD Repeated Failure

2017-02-10 Thread Brad Hubbard
On Sat, Feb 11, 2017 at 2:51 PM, Ashley Merrick wrote: > Hello, > > > > I have a particular OSD (53), which at random will crash with the OSD > process stopping. > > > > OS: Debian 8.x > > CEPH : ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367) > > > > From the logs at the time of th

Re: [ceph-users] Cannot shutdown monitors

2017-02-10 Thread Brad Hubbard
On Sat, Feb 11, 2017 at 2:58 PM, Brad Hubbard wrote: > Just making sure the list sees this for those that are following. > > On Sat, Feb 11, 2017 at 2:49 PM, Michael Andersen > wrote: >> Right, so yes libceph is loaded >> >> root@compound-7:~# lsmod | egrep "ceph|rbd" >> rbd6

Re: [ceph-users] Cannot shutdown monitors

2017-02-10 Thread Brad Hubbard
Just making sure the list sees this for those that are following. On Sat, Feb 11, 2017 at 2:49 PM, Michael Andersen wrote: > Right, so yes libceph is loaded > > root@compound-7:~# lsmod | egrep "ceph|rbd" > rbd 69632 0 > libceph 245760 1 rbd > libcrc32c

[ceph-users] OSD Repeated Failure

2017-02-10 Thread Ashley Merrick
Hello, I have a particular OSD (53), which at random will crash with the OSD process stopping. OS: Debian 8.x CEPH : ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367) From the logs at the time of the OSD being marked as crashed I can only see the following: -4> 2017-02-10 2
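
For context, a first pass on a single OSD that keeps crashing is usually the full backtrace in its log plus a check of the disk underneath it; a rough sketch, assuming default log paths and /dev/sdX as a placeholder for the OSD's disk:

    less /var/log/ceph/ceph-osd.53.log   # read the full backtrace around the crash timestamp
    dmesg | grep -iE 'error|ata|sd'      # look for kernel-level I/O errors
    sudo smartctl -a /dev/sdX            # check SMART health of the underlying disk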

Re: [ceph-users] Cannot shutdown monitors

2017-02-10 Thread Shinobu Kinjo
On Sat, Feb 11, 2017 at 1:08 PM, Michael Andersen wrote: > I believe I did shut down the mon process. Is that not done by the > > sudo systemctl stop ceph\*.service ceph\*.target Oh, that's what I missed. > > command? Also, as I noted, the mon process does not show up in ps after I do > that, but I still

Re: [ceph-users] Cannot shutdown monitors

2017-02-10 Thread Michael Andersen
I believe I did shut down the mon process. Is that not done by the sudo systemctl stop ceph\*.service ceph\*.target command? Also, as I noted, the mon process does not show up in ps after I do that, but I still get the shutdown halting. The libceph kernel module may be installed. I did not do so deli
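
For reference, on a systemd install the stop can be verified directly; unit names below follow the standard ceph packaging:

    systemctl list-units 'ceph*'      # show all ceph units and their current state
    sudo systemctl stop ceph.target   # umbrella target that stops ceph-mon, ceph-osd, etc.
    ps aux | grep ceph-               # confirm no ceph daemons remain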

Re: [ceph-users] Cannot shutdown monitors

2017-02-10 Thread Brad Hubbard
That looks like dmesg output from the libceph kernel module. Do you have the libceph kernel module loaded? If the answer to that question is "yes" the follow-up question is "Why?" as it is not required for a MON or OSD host. On Sat, Feb 11, 2017 at 1:18 PM, Michael Andersen wrote: > Yeah, all th
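
If libceph is loaded and nothing on the host needs it, it can be unloaded once all RBD devices are unmapped and any CephFS mounts are gone; a minimal sketch:

    lsmod | egrep 'ceph|rbd'   # confirm what is loaded and its use count
    rbd showmapped             # must print nothing before unloading
    sudo rmmod rbd             # rbd depends on libceph, so remove it first
    sudo rmmod libceph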

Re: [ceph-users] Cannot shutdown monitors

2017-02-10 Thread Shinobu Kinjo
You may need to stop the MON process (if it's acting as primary). And once you make sure that all OSD sessions have moved to another MON, you should be able to shut down the physical host. Have you tried that? On Sat, Feb 11, 2017 at 12:18 PM, Michael Andersen wrote: > Yeah, all three mons have OSDs on
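
For what it's worth, quorum membership and the sessions a given mon is holding can be checked before taking it down; 'myid' below is a placeholder for the mon's id, and the daemon command runs on the mon host itself:

    ceph mon stat                        # quorum members and the current leader
    sudo ceph daemon mon.myid sessions   # sessions (OSDs, clients) attached to this mon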

Re: [ceph-users] Cannot shutdown monitors

2017-02-10 Thread Michael Andersen
Yeah, all three mons have OSDs on the same machines. On Feb 10, 2017 7:13 PM, "Shinobu Kinjo" wrote: > Is your primary MON running on the host which some OSDs are running on? > > On Sat, Feb 11, 2017 at 11:53 AM, Michael Andersen > wrote: > > Hi > > > > I am running a small cluster of 8 machine

Re: [ceph-users] Cannot shutdown monitors

2017-02-10 Thread Shinobu Kinjo
Is your primary MON running on the host which some OSDs are running on? On Sat, Feb 11, 2017 at 11:53 AM, Michael Andersen wrote: > Hi > > I am running a small cluster of 8 machines (80 osds), with three monitors on > Ubuntu 16.04. Ceph version 10.2.5. > > I cannot reboot the monitors without phy

[ceph-users] Cannot shutdown monitors

2017-02-10 Thread Michael Andersen
Hi I am running a small cluster of 8 machines (80 osds), with three monitors on Ubuntu 16.04. Ceph version 10.2.5. I cannot reboot the monitors without physically going into the datacenter and power cycling them. What happens is that while shutting down, ceph gets stuck trying to contact the othe
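
As a general sanity check while this is happening, the monitors' own view of quorum can be queried from any node with a working admin keyring:

    ceph quorum_status --format json-pretty   # who is in quorum and who leads
    ceph -s                                   # overall health, including mon state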

Re: [ceph-users] CephFS root squash?

2017-02-10 Thread John Spray
On Fri, Feb 10, 2017 at 5:27 PM, Jim Kilborn wrote: > Interesting. I thought cephfs could be a replacement for an NFS server for > holding home directories, but not have a single point of failure. I'm > surprised that is generally frowned upon by the comments. (Sorry, this got a bit long, the tl

Re: [ceph-users] MDS HA failover

2017-02-10 Thread Gregory Farnum
This is odd on several levels, and indeed a failover shouldn't take that long (unless you have a *lot* of metadata that needs to get loaded into memory, which you won't if running standby-replay). Are you sure that it's trying to connect to the other MDS, and not a monitor or OSD on the same host?
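
For reference, Jewel-era standby-replay is enabled in ceph.conf on the standby daemon; a minimal sketch, with the section name and rank as examples:

    [mds.standby-a]
        mds standby replay = true
        mds standby for rank = 0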

Re: [ceph-users] trying to test S3 bucket lifecycles in Kraken

2017-02-10 Thread Robin H. Johnson
On Fri, Feb 10, 2017 at 05:14:51PM +0100, Uwe Mesecke wrote: > Hey, > > just to keep you updated: I was able to add a lifecycle using s3cmd (version > 1.6.1). I think I can live with that because for me changing lifecycles is > some one-time setup task. s3cmd does fall back to V2 signatures. Can

Re: [ceph-users] CephFS root squash?

2017-02-10 Thread Jim Kilborn
Interesting. I thought cephfs could be a replacement for an NFS server for holding home directories, but not have a single point of failure. I'm surprised that is generally frowned upon by the comments.

Re: [ceph-users] trying to test S3 bucket lifecycles in Kraken

2017-02-10 Thread Uwe Mesecke
Hey, just to keep you updated: I was able to add a lifecycle using s3cmd (version 1.6.1). I think I can live with that because for me changing lifecycles is some one-time setup task. Uwe > On 10.02.2017 at 01:32, Uwe Mesecke wrote: > > Hey, > > I am trying to do some testing of S3 bucket l
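
For anyone reproducing this, the s3cmd side looks roughly like the following, assuming a lifecycle.xml in the standard S3 lifecycle format and a bucket name that is only an example:

    s3cmd setlifecycle lifecycle.xml s3://mybucket   # upload the lifecycle configuration
    s3cmd info s3://mybucket                         # read back bucket metadata to verify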

Re: [ceph-users] Migrating data from a Ceph clusters to another

2017-02-10 Thread Eugen Block
I'm not sure if this is what you need, but I recently tried 'rados cppool' to populate a new test-pool with different properties. I simply needed some data in there, and this command worked for me. Regards, Eugen Quoting 林自均: Hi Irek & Craig, Sorry, I misunderstood "RBD mirroring". What I
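
A rough sketch of that approach, with example pool names; note the destination pool has to exist first, and cppool copies objects but not snapshots:

    ceph osd pool create testpool-new 64   # create the destination with the properties you want
    rados cppool testpool testpool-new     # copy every object across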

Re: [ceph-users] I can't create new pool in my cluster.

2017-02-10 Thread choury
I've figured out the reason and committed a patch https://github.com/ceph/ceph/pull/13357 2017-02-10 11:08 GMT+08:00 choury : > I can find some log in ceph-mon.log about this: > >> 2017-02-10 10:47:54.264026 7f6a6eff4700 0 mon.ceph-test2@1(peon) e9 >> handle_command mon_command({"prefix": "osd

Re: [ceph-users] CephFS root squash?

2017-02-10 Thread John Spray
On Fri, Feb 10, 2017 at 8:02 AM, Robert Sander wrote: > On 09.02.2017 20:11, Jim Kilborn wrote: > >> I am trying to figure out how to allow my users to have sudo on their >> workstation, but not have that root access to the ceph kernel mounted volume. > > I do not think that CephFS is meant to be
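
For completeness, the usual substitute for root squash here is a path-restricted cephx cap, available from Jewel on; a sketch with hypothetical client and pool names:

    ceph auth get-or-create client.homes \
        mon 'allow r' \
        mds 'allow rw path=/home' \
        osd 'allow rw pool=cephfs_data'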

Re: [ceph-users] CephFS root squash?

2017-02-10 Thread Wido den Hollander
> On 10 February 2017 at 9:02, Robert Sander wrote: > > > On 09.02.2017 20:11, Jim Kilborn wrote: > > > I am trying to figure out how to allow my users to have sudo on their > > workstation, but not have that root access to the ceph kernel mounted > > volume. > > I do not think that Cep

Re: [ceph-users] CephFS root squash?

2017-02-10 Thread Robert Sander
On 09.02.2017 20:11, Jim Kilborn wrote: > I am trying to figure out how to allow my users to have sudo on their > workstation, but not have that root access to the ceph kernel mounted volume. I do not think that CephFS is meant to be mounted on human users' workstations. Regards -- Robert Sand