[ceph-users] RGW: how to get a list of defined radosgw users?

2017-08-01 Thread Diedrich Ehlerding
Hello,

according to the manpage of radosgw-admin, it is possible to 
suspend, resume, create, and remove a single radosgw user, but I 
haven't yet found a method to see a list of all defined radosgw 
users. Is that possible, and if so, how?
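What I am looking for is something along these lines (the metadata 
subcommand below is only a guess on my part from the built-in help, 
so it may well be the wrong tool):

  radosgw-admin metadata list user          # dump the IDs of all known users
  radosgw-admin user info --uid=<some-uid>  # then inspect one user in detail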

TIA,
Diedrich
-- 
Diedrich Ehlerding, Fujitsu Technology Solutions GmbH, 
MIS ITST CE PS&IS WST, Hildesheimer Str 25, D-30880 Laatzen
Fon +49 511 8489-1806, Fax -251806, Mobil +49 173 2464758
Firmenangaben: http://de.ts.fujitsu.com/imprint.html



Re: [ceph-users] ceph client capabilities for the rados gateway

2017-05-31 Thread Diedrich Ehlerding
Gregory Farnum wrote:
> 
> 
> You've probably noticed the RGW will create pools if it needs them and
> they don't exist. That's why it "needs" the extra monitor capabilities.


Yes, I have noticed that - and yes, automatically creating the pools 
helped a lot in a lab environment when setting up my first gateway. 
However, in a production environment I expect that at least the 
.rgw.buckets.data pool will need more placement groups and/or may 
reside in an EC pool, etc. - so I expect the pools to be created 
manually in advance.
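A minimal sketch of what I have in mind for the data pool (the EC 
profile name, its parameters and the PG count are placeholders from 
my lab, not a recommendation):

  # define an erasure-code profile (parameters are examples only)
  ceph osd erasure-code-profile set rgw-ec k=4 m=2

  # pre-create the bucket data pool as an EC pool with more PGs
  ceph osd pool create .rgw.buckets.data 256 256 erasure rgw-ec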

> The OSD capabilities are because 1) I don't think you could make them
> as fine-grained when that documentation was written, 2) laziness about
> specifying pools. :) 


OK. I was just wondering if there are any reasons to allow the 
gateway more or less global access - i.e. reasons which I did not 
understand. Of course, laziness is a very good reason :-)

Thank you for your comments. 

best regards
-- 
Diedrich Ehlerding, Fujitsu Technology Solutions GmbH, 
MIS ITST CE PS&IS WST, Hildesheimer Str 25, D-30880 Laatzen
Fon +49 511 8489-1806, Fax -251806, Mobil +49 173 2464758
Firmenangaben: http://de.ts.fujitsu.com/imprint.html



Re: [ceph-users] ceph client capabilities for the rados gateway

2017-05-31 Thread Diedrich Ehlerding
Thank you for your response. Yes, as I wrote, the gateway seems to 
work with these settings.

The reason why I am considering the capabilities is: I am trying to 
attach an OpenStack environment and a gateway to the same cluster, 
and I would like to prevent the OpenStack admin from accessing the S3 
gateway data and, vice versa, to prevent the gateway admin from 
accessing the OpenStack data. I just wonder whether there is a reason 
why the documentation suggests these very global capabilities.

Gregory Farnum wrote on Wed, 31 May 2017 20:07:16 +

> 
> I don't work with the gateway but in general that should work. 
> 
> That said, the RGW also sees all your client data going in so I'm not 
> sure how much you buy by locking it down. If you're just trying to 
> protect against accidents with the pools, you might give it write access 
> on the monitor; any failures due to capability mismatches there would 
> likely be pretty annoying to debug!
> -Greg
> 
> 
> On Wed, May 31, 2017 at 12:21 AM Diedrich Ehlerding 
>  wrote:
> Hello.
> 
> The documentation which I found proposes to create the ceph client
> for a rados gateway with very global capabilities, namely
> "mon allow rwx, osd allow rwx".
> 
> Are there any reasons for these very global capabilities (allowing
> this client to access and modify (even remove) all pools, all rbds,
> etc., even those in use by other ceph clients)? I tried to restrict
> the rights, and my rados gateway seems to work with the
> capabilities "mon allow r, osd allow rwx pool=.rgw.root, allow rwx
> pool=a.root, allow rwx pool=am.rgw.control [etc. for all the pools
> which this gateway uses]"
> 
> Are there any reasons not to restrict the capabilities in this way?
-- 
Diedrich Ehlerding, Fujitsu Technology Solutions GmbH, 
MIS ITST CE PS&IS WST, Hildesheimer Str 25, D-30880 Laatzen
Fon +49 511 8489-1806, Fax -251806, Mobil +49 173 2464758
Firmenangaben: http://de.ts.fujitsu.com/imprint.html



[ceph-users] ceph client capabilities for the rados gateway

2017-05-31 Thread Diedrich Ehlerding
Hello.

The documentation which I found proposes to create the ceph client 
for a rados gateway with very global capabilities, namely
"mon allow rwx, osd allow rwx". 

Are there any reasons for these very global capabilities (allowing 
this client to access and modify (even remove) all pools, all rbds, 
etc., even those in use by other ceph clients)? I tried to restrict 
the rights, and my rados gateway seems to work with the capabilities 
"mon allow r, osd allow rwx pool=.rgw.root, allow rwx pool=a.root, 
allow rwx pool=am.rgw.control [etc. for all the pools which this 
gateway uses]".
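For reference, this is roughly how such a restricted client can be 
created (the client name and the pool list are just the ones from my 
lab, so treat them as placeholders):

  ceph auth get-or-create client.radosgw.gateway \
      mon 'allow r' \
      osd 'allow rwx pool=.rgw.root, allow rwx pool=a.root, allow rwx pool=am.rgw.control'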

Are there any reasons not to restrict the capabilities in this way?

Thank you.
-- 
Diedrich Ehlerding, Fujitsu Technology Solutions GmbH, 
MIS ITST CE PS&IS WST, Hildesheimer Str 25, D-30880 Laatzen
Fon +49 511 8489-1806, Fax -251806, Mobil +49 173 2464758
Firmenangaben: http://de.ts.fujitsu.com/imprint.html



Re: [ceph-users] ceph-deploy fails to generate keys

2014-04-08 Thread Diedrich Ehlerding
Hi,

> 
> Have you increased the verbosity for the monitors, restarted them, and
> looked at the log output? 

First of all: the bug is still there, and the logs do not help. But I 
seem to have found a workaround (just for myself, not a general one).

As for the bug:

I appended "debug log=20" to ceph-deploy's generated ceph.conf, but I 
don't see much in the logs (and they do not get larger with this 
option). Here is one of the monitor logs from /var/lib/ceph; the 
other ones look identical.

2014-04-08 08:26:09.405714 7fd1a0a94780  0 ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60), process ceph-mon, pid 28842
2014-04-08 08:26:09.851227 7f66ecd06780  0 ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60), process ceph-mon, pid 28943
2014-04-08 08:26:09.933034 7f66ecd06780  0 mon.hvrrzceph2 does not exist in monmap, will attempt to join an existing cluster
2014-04-08 08:26:09.933417 7f66ecd06780  0 using public_addr 10.111.3.2:0/0 -> 10.111.3.2:6789/0
2014-04-08 08:26:09.934003 7f66ecd06780  1 mon.hvrrzceph2@-1(probing) e0 preinit fsid c847e327-1bc5-445f-9c7e-de0551bfde06
2014-04-08 08:26:09.934149 7f66ecd06780  1 mon.hvrrzceph2@-1(probing) e0  initial_members hvrrzceph1,hvrrzceph2,hvrrzceph3, filtering seed monmap
2014-04-08 08:26:09.937302 7f66ecd06780  0 mon.hvrrzceph2@-1(probing) e0  my rank is now 0 (was -1)
2014-04-08 08:26:09.938254 7f66e63c9700  0 -- 10.111.3.2:6789/0 >> 0.0.0.0:0/2 pipe(0x15fba00 sd=21 :0 s=1 pgs=0 cs=0 l=0 c=0x15c9c60).fault
2014-04-08 08:26:09.938442 7f66e61c7700  0 -- 10.111.3.2:6789/0 >> 10.112.3.2:6789/0 pipe(0x1605280 sd=22 :0 s=1 pgs=0 cs=0 l=0 c=0x15c99a0).fault
2014-04-08 08:26:09.939001 7f66ecd04700  0 -- 10.111.3.2:6789/0 >> 0.0.0.0:0/1 pipe(0x15fb280 sd=25 :0 s=1 pgs=0 cs=0 l=0 c=0x15c9420).fault
2014-04-08 08:26:09.939120 7f66e62c8700  0 -- 10.111.3.2:6789/0 >> 10.112.3.1:6789/0 pipe(0x1605780 sd=24 :0 s=1 pgs=0 cs=0 l=0 c=0x15c9b00).fault
2014-04-08 08:26:09.941140 7f66e60c6700  0 -- 10.111.3.2:6789/0 >> 10.112.3.3:6789/0 pipe(0x1605c80 sd=23 :0 s=1 pgs=0 cs=0 l=0 c=0x15c9840).fault
2014-04-08 08:27:09.934720 7f66e7bcc700  0 mon.hvrrzceph2@0(probing).data_health(0) update_stats avail 70% total 15365520 used 3822172 avail 10762804
2014-04-08 08:28:09.935036 7f66e7bcc700  0 mon.hvrrzceph2@0(probing).data_health(0) update_stats avail 70% total 15365520 used 3822172 avail 10762804

Since ceph-deploy complained about not getting an answer from 
ceph-create-keys:

[hvrrzceph3][DEBUG ] Starting ceph-create-keys on hvrrzceph3...
[hvrrzceph3][WARNIN] No data was received after 7 seconds, 
disconnecting...

I therefore tried to create the keys manually:

hvrrzceph2:~ # ceph-create-keys --id client.admin
admin_socket: exception getting command descriptions: [Errno 2] No 
such file or directory
INFO:ceph-create-keys:ceph-mon admin socket not ready yet.
admin_socket: exception getting command descriptions: [Errno 2] No 
such file or directory
INFO:ceph-create-keys:ceph-mon admin socket not ready yet.
admin_socket: exception getting command descriptions: [Errno 2] No 
such file or directory
INFO:ceph-create-keys:ceph-mon admin socket not ready yet.
admin_socket: exception getting command descriptions: [Errno 2] No 
such file or directory
INFO:ceph-create-keys:ceph-mon admin socket not ready yet.

[etc.]

As for the workaround: what I wanted to do is this: I have three 
servers, two NICs, and three IP addresses per server. The NICs are 
bonded, the bond has an IP address in one network (untagged), and 
additionally, two tagged VLANs are also on the bond. The bug occurred 
when I tried to use a dedicated cluster network (i.e. one of the 
tagged VLANs) and a dedicated public network (the other tagged VLAN). 
At that time, I had both "cluster network" and "public network" 
defined in ceph.conf.

I now tried to leave "cluster network" and "public network" out of 
ceph.conf ... and now I could create the cluster.
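For reference, the settings I removed looked roughly like this 
(reconstructed here from the addresses shown above, so take the 
subnets and their roles as an illustration only):

  [global]
  public network  = 10.112.3.0/24   ; network used by clients and monitors
  cluster network = 10.111.3.0/24   ; network used for OSD replication traffic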

So it seems to be a network problem, as you (and Brian) supposed. 
However, ssh etc. are properly working on all three networks.  I 
don't really understand what's going on there, but at least, I can 
continue to learn.

Thank you.

best regards
Diedrich 




-- 
Diedrich Ehlerding, Fujitsu Technology Solutions GmbH,
FTS CE SC PS&IS W, Hildesheimer Str 25, D-30880 Laatzen
Fon +49 511 8489-1806, Fax -251806, Mobil +49 173 2464758
Firmenangaben: http://de.ts.fujitsu.com/imprint.html



Re: [ceph-users] ceph-deploy fails to generate keys

2014-04-07 Thread Diedrich Ehlerding
[monitors do not start properly with ceph-deploy]
Brian Chandler:

> > thank you for your, response, however:
> >> Including iptables? CentOS/RedHat default to iptables enabled and
> >> closed.
> >>
> >> "iptables -Lvn" to be 100% sure.
> > hvrrzceph1:~ # iptables -Lvn
> > iptables: No chain/target/match by that name.
> > hvrrzceph1:~ #
> >
> Ergh, my mistake: iptables -L -v -n


 
hvrrzceph1:~ # iptables -L -v -n
Chain INPUT (policy ACCEPT 8739 packets, 476K bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 6270 packets, 505K bytes)
 pkts bytes target     prot opt in     out     source               destination
hvrrzceph1:~ #

The servers do not run any firewall, and they are connected to the 
same switch. ssh login works over three networks (one to be used as 
admin network, one as public network, and another one as cluster 
network). 

Any hint is appreciated ...

Diedrich
-- 
Diedrich Ehlerding, Fujitsu Technology Solutions GmbH,
FTS CE SC PS&IS W, Hildesheimer Str 25, D-30880 Laatzen
Fon +49 511 8489-1806, Fax -251806, Mobil +49 173 2464758
Firmenangaben: http://de.ts.fujitsu.com/imprint.html




Re: [ceph-users] ceph-deploy fails to generate keys

2014-04-04 Thread Diedrich Ehlerding
 Alfredo Deza writes:


> Have you ensured that either there is no firewall up or that the ports
> that the monitors need to communicate between each other are open?


Yes, I am sure - the nodes are connected over one single switch, and 
no firewall is active.

 
> If that is not the problem then the next thing I would do is to
> increase the verbosity for the monitors, restart them
> and look at the logs.

My current configuration (ceph.conf) is:

mon_host = 10.112.3.1,10.112.3.2,10.112.3.3
mon_initial_members = hvrrzceph1, hvrrzceph2, hvrrzceph3

(I wanted to check whether I had simply used the wrong network 
interfaces.)

and the networks are: 

hvrrzceph1:~/my-cluster # grep hvrrzceph1 /etc/hosts
10.1.1.239  hvrrzceph1-admin
10.111.3.1  hvrrzceph1-storage
10.112.3.1  hvrrzceph1
 (two more nodes in the same way)

Someone is listening on port 6789:
hvrrzceph1:~/my-cluster # grep 6789 /etc/services
smc-https  6789/tcp # SMC-HTTPS  [Ratnadeep_Bhattachar]
smc-https  6789/udp # SMC-HTTPS  [Ratnadeep_Bhattachar]
hvrrzceph1:~/my-cluster # netstat -a | grep smc-https
tcp0  0 hvrrzceph1-st:smc-https *:*   LISTEN
hvrrzceph1:~/my-cluster #
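(For completeness, a simple way to check from another node whether 
this port is actually reachable would be something along these lines, 
sketched here only:)

  telnet hvrrzceph1 6789      # or, without telnet:
  nc -z -v 10.112.3.1 6789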

The monitor log says:

2014-04-04 15:11:00.673595 7fbb264f2780  0 ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60), process ceph-mon, pid 9354
2014-04-04 15:11:01.017924 7f57b6e0c780  0 ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60), process ceph-mon, pid 9455
2014-04-04 15:11:01.027519 7f57b6e0c780  0 mon.hvrrzceph1 does not exist in monmap, will attempt to join an existing cluster
2014-04-04 15:11:01.027928 7f57b6e0c780  0 using public_addr 10.111.3.1:0/0 -> 10.111.3.1:6789/0
2014-04-04 15:11:01.030407 7f57b6e0c780  1 mon.hvrrzceph1@-1(probing) e0 preinit fsid 8dba6b51-9380-4d32-9393-520dc141a8b6
2014-04-04 15:11:01.030645 7f57b6e0c780  1 mon.hvrrzceph1@-1(probing) e0  initial_members hvrrzceph1,hvrrzceph2,hvrrzceph3, filtering seed monmap
2014-04-04 15:11:01.031918 7f57b6e0c780  0 mon.hvrrzceph1@-1(probing) e0  my rank is now 0 (was -1)
2014-04-04 15:11:01.032909 7f57b04cf700  0 -- 10.111.3.1:6789/0 >> 0.0.0.0:0/2 pipe(0x15fda00 sd=21 :0 s=1 pgs=0 cs=0 l=0 c=0x15bec60).fault
2014-04-04 15:11:01.033772 7f57b03ce700  0 -- 10.111.3.1:6789/0 >> 10.112.3.1:6789/0 pipe(0x1607780 sd=24 :0 s=1 pgs=0 cs=0 l=0 c=0x15beb00).fault
2014-04-04 15:11:01.034079 7f57b6e0a700  0 -- 10.111.3.1:6789/0 >> 0.0.0.0:0/1 pipe(0x15fd280 sd=23 :0 s=1 pgs=0 cs=0 l=0 c=0x15be420).fault
2014-04-04 15:11:01.034627 7f57b01cc700  0 -- 10.111.3.1:6789/0 >> 10.112.3.3:6789/0 pipe(0x1607c80 sd=25 :0 s=1 pgs=0 cs=0 l=0 c=0x15be840).fault

[etc. for some time; and then]

2014-04-04 15:21:01.033997 7f57b1cd2700  0 mon.hvrrzceph1@0(probing).data_health(0) update_stats avail 70% total 15365520 used 3804740 avail 10780236
2014-04-04 15:22:01.034316 7f57b1cd2700  0 mon.hvrrzceph1@0(probing).data_health(0) update_stats avail 70% total 15365520 used 3804740 avail 10780236
2014-04-04 15:23:01.034627 7f57b1cd2700  0 mon.hvrrzceph1@0(probing).data_health(0) update_stats avail 70% total 15365520 used 3804740 avail 10780236
2014-04-04 15:24:01.034917 7f57b1cd2700  0 mon.hvrrzceph1@0(probing).data_health(0) update_stats avail 70% total 15365520 used 3804740 avail 10780236

Diedrich

-- 
Diedrich Ehlerding, Fujitsu Technology Solutions GmbH,
FTS CE SC PS&IS W, Hildesheimer Str 25, D-30880 Laatzen
Fon +49 511 8489-1806, Fax -251806, Mobil +49 173 2464758
Firmenangaben: http://de.ts.fujitsu.com/imprint.html



[ceph-users] ceph-deploy fails to generate keys

2014-04-03 Thread Diedrich Ehlerding
"addr": "10.111.3.1:6789/0",
[hvrrzceph1][DEBUG ] "name": "hvrrzceph1",
[hvrrzceph1][DEBUG ] "rank": 0
[hvrrzceph1][DEBUG ]   }
[hvrrzceph1][DEBUG ] ]
[hvrrzceph1][DEBUG ]   },
[hvrrzceph1][DEBUG ]   "name": "hvrrzceph1",
[hvrrzceph1][DEBUG ]   "outside_quorum": [],
[hvrrzceph1][DEBUG ]   "quorum": [
[hvrrzceph1][DEBUG ] 0
[hvrrzceph1][DEBUG ]   ],
[hvrrzceph1][DEBUG ]   "rank": 0,
[hvrrzceph1][DEBUG ]   "state": "leader",
[hvrrzceph1][DEBUG ]   "sync_provider": []
[hvrrzceph1][DEBUG ] }
[hvrrzceph1][DEBUG ] 
**
**
[hvrrzceph1][INFO  ] monitor: mon.hvrrzceph1 is running
[hvrrzceph1][INFO  ] Running command: ceph --cluster=ceph 
--admin-daemon /var/run/ceph/ceph-mon.hvrrzceph1.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.hvrrzceph1
[hvrrzceph1][DEBUG ] connected to host: hvrrzceph1
[hvrrzceph1][INFO  ] Running command: ceph --cluster=ceph 
--admin-daemon /var/run/ceph/ceph-mon.hvrrzceph1.asok mon_status
[ceph_deploy.mon][INFO  ] mon.hvrrzceph1 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have 
formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][DEBUG ] Checking hvrrzceph1 for 
/etc/ceph/ceph.client.admin.keyring
[hvrrzceph1][DEBUG ] connected to host: hvrrzceph1
[hvrrzceph1][DEBUG ] detect platform information from remote host
[hvrrzceph1][DEBUG ] detect machine type
[hvrrzceph1][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][WARNIN] Unable to find 
/etc/ceph/ceph.client.admin.keyring on ['hvrrzceph1']
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring
[ceph_deploy.gatherkeys][DEBUG ] Checking hvrrzceph1 for 
/var/lib/ceph/bootstrap-osd/ceph.keyring
[hvrrzceph1][DEBUG ] connected to host: hvrrzceph1
[hvrrzceph1][DEBUG ] detect platform information from remote host
[hvrrzceph1][DEBUG ] detect machine type
[hvrrzceph1][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][WARNIN] Unable to find 
/var/lib/ceph/bootstrap-osd/ceph.keyring on ['hvrrzceph1']
[ceph_deploy.gatherkeys][DEBUG ] Checking hvrrzceph1 for 
/var/lib/ceph/bootstrap-mds/ceph.keyring
[hvrrzceph1][DEBUG ] connected to host: hvrrzceph1
[hvrrzceph1][DEBUG ] detect platform information from remote host
[hvrrzceph1][DEBUG ] detect machine type
[hvrrzceph1][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][WARNIN] Unable to find 
/var/lib/ceph/bootstrap-mds/ceph.keyring on ['hvrrzceph1']
hvrrzceph1:~/my-cluster # 

I already had a running cluster, and ceph-deploy had worked ... :-( 
Any hints as to where I can find out what is missing?

TIA.
-- 
Diedrich Ehlerding, Fujitsu Technology Solutions GmbH,
FTS CE SC PS&IS W, Hildesheimer Str 25, D-30880 Laatzen
Fon +49 511 8489-1806, Fax -251806, Mobil +49 173 2464758
Firmenangaben: http://de.ts.fujitsu.com/imprint.html



[ceph-users] what happened to the emperor repositories?

2014-04-03 Thread Diedrich Ehlerding
Hello.

I had successfully set up a first cluster using the SUSE SLES11 
emperor repository at http://ceph.com/rpm/sles11/x86_64/. Everything 
worked fine ... until I decided to reinstall the servers (due to a 
hardware change). Today, I can only find one package, "ceph-deploy", 
with SUSE's "zypper" tool. However, browsing the URL still displays 
all the packages.

Of course, I can download and install the packages manually - but 
something must have happened to the repository's metadata. Indeed, 
the directory "repodata" was modified on March 19; maybe something is 
wrong with that data?

TIA,
Diedrich
-- 
Diedrich Ehlerding, Fujitsu Technology Solutions GmbH,
FTS CE SC PS&IS W, Hildesheimer Str 25, D-30880 Laatzen
Fon +49 511 8489-1806, Fax -251806, Mobil +49 173 2464758
Firmenangaben: http://de.ts.fujitsu.com/imprint.html



Re: [ceph-users] recover from node failure / monitor and osds do not come back

2014-03-02 Thread Diedrich Ehlerding
Gregory Farnum wrote:

> 
> Your OSDs aren't supposed to be listed in the config file, but they
> should show up under /var/lib/ceph. Probably your OSD disks aren't
> being mounted for some reason (that would be the bug). Try mounting
> them and seeing what blocked the mount

Yes, that was my bug; I mounted them now, and everything is fine 
again.

Apparently the cluster remembers which node owns which OSD, and where 
it expects this OSD to be mounted, but does not remember the device. 
I had created my OSDs initially with 
ceph-deploy osd prepare node:/dev/sdX 
and started them with 
ceph-deploy osd activate node:/dev/sdX

So I assume I should manually extend my fstabs to mount these 
filesystems on /var/lib/ceph/osd/ceph-Y. My (wrong) assumption was 
that the osd filesystem is automatically mounted by /etc/init.d/ceph 
start osd.
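Something along these lines per OSD should do it (the device, 
filesystem type and mount options below are placeholders - on my 
nodes ceph-deploy created XFS data partitions, but I will check with 
blkid first):

  # /etc/fstab entry for osd.0 (adjust the device per node/OSD)
  /dev/sdb1  /var/lib/ceph/osd/ceph-0  xfs  noatime,inode64  0 0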

Thank you for your help.

best regards
-- 
Diedrich Ehlerding, Fujitsu Technology Solutions GmbH,
FTS CE SC PS&IS W, Hildesheimer Str 25, D-30880 Laatzen
Fon +49 511 8489-1806, Fax -251806, Mobil +49 173 2464758
Firmenangaben: http://de.ts.fujitsu.com/imprint.html



[ceph-users] recover from node failure / monitor and osds do not come back

2014-02-26 Thread Diedrich Ehlerding
My configuration is: two OSD servers, one admin node, three monitors; 
all running 0.72.2.

I had to switch off one of the OSD servers. The good news is: as 
expected, all clients survived and continued to work with the 
cluster, and the cluster entered a "health warn" state (one monitor 
down, 5 of 10 OSDs down).

The bad news is: I cannot resume this server's operation. When I 
booted the server, the monitor was started automatically - but it did 
not join the cluster. "/etc/init.d/ceph start mon" says "already 
running", but ceph -s still says that one monitor (this one)  is 
missing.

And the OSDs do not come back; nor can I restart them; error message 
is:

# /etc/init.d/ceph start osd.0
/etc/init.d/ceph: osd.0 not found (/etc/ceph/ceph.conf defines 
mon.hvrrzrx301 , /var/lib/ceph defines mon.hvrrzrx301)

As expected, ceph osd tree display the osd as down:

-1  2.7     root default
-2  1.35        host hvrrzrx301
0   0.27            osd.0   down    0
2   0.27            osd.2   down    0
4   0.27            osd.4   down    0
6   0.27            osd.6   down    0
8   0.27            osd.8   down    0
-3  1.35        host hvrrzrx303
1   0.27            osd.1   up      1
3   0.27            osd.3   up      1
5   0.27            osd.5   up      1
7   0.27            osd.7   up      1
9   0.27            osd.9   up      1


my ceph.conf only contains those settings which "ceph-deploy new" 
installed there; i.e. the osds are not mentioned in ceph.conf. I 
assume that this is the problem with my osds? Apparently the cluster 
(the surviving monitors) still knows that osd.0, osd.2 etc. should 
appear on the failed node.

Alas, I couldn't find any description of how to configure OSDs within 
ceph.conf ... I tried to define

[osd.0]
host = my_server
devs = /dev/sdb2
data = /var/lib/ceph/osd/ceph-0

but now it complains that no filesystem type is defined ...

To summarize: where can I find rules and procedures for setting up a 
ceph.conf beyond what ceph-deploy generates? What must I do in 
addition to ceph-deploy so that I can survive a node outage and 
reattach the node to the cluster, with respect to both the monitor on 
that node and the OSDs?


best regards
-- 
Diedrich Ehlerding, Fujitsu Technology Solutions GmbH,
FTS CE SC PS&IS W, Hildesheimer Str 25, D-30880 Laatzen
Fon +49 511 8489-1806, Fax -251806, Mobil +49 173 2464758
Firmenangaben: http://de.ts.fujitsu.com/imprint.html



[ceph-users] extract a complete ceph.conf from running cluster?

2014-02-23 Thread Diedrich Ehlerding
I am rather new to ceph; I apologize if my question is too silly ... 
I just installed a first "playground" cluster (one admin node, two 
osd nodes, 3 monitors, 1 mds). I proceeded according to the "quick 
install" description at http://ceph.com/docs/master/start/: installed 
the software, deployed the initial monitors, added osds, added an 
mds, always with ceph-deploy.

Now I have a cluster and can access it from a client node. My 
ceph.conf just contains the settings which "ceph-deploy new" created 
(i.e. mainly the fsid and the monitor hosts; I only added the public 
and cluster networks). It does not contain any osd sections, nor does 
it contain an mds section.

Apparently, it is possible to keep all this information (i.e. 
information about osd, mds, client keyrings etc.) in ceph.conf. And I 
would appreciate such a complete ceph.conf at least for documentation 
purposes, even if it may be unnecessary for operation. Is it possible 
to extract a complete ceph.conf with all the current settings from 
the running cluster?
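One partial approach, shown here only as a sketch: the settings a 
single running daemon actually uses can be dumped via its admin 
socket (per daemon, so not a complete ceph.conf; the socket path 
follows the pattern used elsewhere in this thread):

  ceph --admin-daemon /var/run/ceph/ceph-mon.<id>.asok config show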

Thank you.
-- 
Diedrich Ehlerding, Fujitsu Technology Solutions GmbH,
FTS CE SC PS&IS W, Hildesheimer Str 25, D-30880 Laatzen
Fon +49 511 8489-1806, Fax -251806, Mobil +49 173 2464758
Firmenangaben: http://de.ts.fujitsu.com/imprint.html



[ceph-users] download site down?

2014-02-18 Thread Diedrich Ehlerding
Hi, 

I tried to download ceph from http://ceph.com. Some days ago, I found 
download URLs under "download", taking me to 
http://ceph.com/resources/downloads/ - but today, nothing is visible 
there. :-(

Any hints?

best regards
-- 
Diedrich Ehlerding, Fujitsu Technology Solutions GmbH,
FTS CE SC PS&IS W, Hildesheimer Str 25, D-30880 Laatzen
Fon +49 511 8489-1806, Fax -251806, Mobil +49 173 2464758
Firmenangaben: http://de.ts.fujitsu.com/imprint.html
