Re: [ceph-users] ceph-deploy fails to generate keys

2014-04-08 Thread Diedrich Ehlerding
Hi,

 
 Have you increased the verbosity for the monitors, restarted them, and
 looked at the log output? 

First of all: The bug is still there, and the logs do not help. But I 
seem to have found a workaround (for my setup, not a general fix).

As for the bug:

I appended debug log = 20 to ceph-deploy's generated ceph.conf, but I 
don't see much in the logs (and they do not get larger with this 
option). Here is one of the monitor logs from /var/lib/ceph; the 
other ones look identical.

2014-04-08 08:26:09.405714 7fd1a0a94780  0 ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60), process ceph-mon, pid 28842
2014-04-08 08:26:09.851227 7f66ecd06780  0 ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60), process ceph-mon, pid 28943
2014-04-08 08:26:09.933034 7f66ecd06780  0 mon.hvrrzceph2 does not exist in monmap, will attempt to join an existing cluster
2014-04-08 08:26:09.933417 7f66ecd06780  0 using public_addr 10.111.3.2:0/0 -> 10.111.3.2:6789/0
2014-04-08 08:26:09.934003 7f66ecd06780  1 mon.hvrrzceph2@-1(probing) e0 preinit fsid c847e327-1bc5-445f-9c7e-de0551bfde06
2014-04-08 08:26:09.934149 7f66ecd06780  1 mon.hvrrzceph2@-1(probing) e0  initial_members hvrrzceph1,hvrrzceph2,hvrrzceph3, filtering seed monmap
2014-04-08 08:26:09.937302 7f66ecd06780  0 mon.hvrrzceph2@-1(probing) e0  my rank is now 0 (was -1)
2014-04-08 08:26:09.938254 7f66e63c9700  0 -- 10.111.3.2:6789/0 >> 0.0.0.0:0/2 pipe(0x15fba00 sd=21 :0 s=1 pgs=0 cs=0 l=0 c=0x15c9c60).fault
2014-04-08 08:26:09.938442 7f66e61c7700  0 -- 10.111.3.2:6789/0 >> 10.112.3.2:6789/0 pipe(0x1605280 sd=22 :0 s=1 pgs=0 cs=0 l=0 c=0x15c99a0).fault
2014-04-08 08:26:09.939001 7f66ecd04700  0 -- 10.111.3.2:6789/0 >> 0.0.0.0:0/1 pipe(0x15fb280 sd=25 :0 s=1 pgs=0 cs=0 l=0 c=0x15c9420).fault
2014-04-08 08:26:09.939120 7f66e62c8700  0 -- 10.111.3.2:6789/0 >> 10.112.3.1:6789/0 pipe(0x1605780 sd=24 :0 s=1 pgs=0 cs=0 l=0 c=0x15c9b00).fault
2014-04-08 08:26:09.941140 7f66e60c6700  0 -- 10.111.3.2:6789/0 >> 10.112.3.3:6789/0 pipe(0x1605c80 sd=23 :0 s=1 pgs=0 cs=0 l=0 c=0x15c9840).fault
2014-04-08 08:27:09.934720 7f66e7bcc700  0 mon.hvrrzceph2@0(probing).data_health(0) update_stats avail 70% total 15365520 used 3822172 avail 10762804
2014-04-08 08:28:09.935036 7f66e7bcc700  0 mon.hvrrzceph2@0(probing).data_health(0) update_stats avail 70% total 15365520 used 3822172 avail 10762804

Since ceph-deploy complained about not getting an answer from 
ceph-create-keys:

[hvrrzceph3][DEBUG ] Starting ceph-create-keys on hvrrzceph3...
[hvrrzceph3][WARNIN] No data was received after 7 seconds, 
disconnecting...

I therefore tried to create the keys manually:

hvrrzceph2:~ # ceph-create-keys --id client.admin
admin_socket: exception getting command descriptions: [Errno 2] No 
such file or directory
INFO:ceph-create-keys:ceph-mon admin socket not ready yet.
admin_socket: exception getting command descriptions: [Errno 2] No 
such file or directory
INFO:ceph-create-keys:ceph-mon admin socket not ready yet.
admin_socket: exception getting command descriptions: [Errno 2] No 
such file or directory
INFO:ceph-create-keys:ceph-mon admin socket not ready yet.
admin_socket: exception getting command descriptions: [Errno 2] No 
such file or directory
INFO:ceph-create-keys:ceph-mon admin socket not ready yet.

[etc.]
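(Editorial note: the [Errno 2] messages above mean ceph-create-keys is polling for the monitor's admin socket, normally /var/run/ceph/ceph-mon.<host>.asok, and keeps giving up because the mon never leaves the probing state. A minimal Python sketch of that wait loop, for illustration only; the function name is mine, and the real tool additionally queries mon_status over the socket:)

```python
import os
import time

def wait_for_admin_socket(path, timeout=10.0, interval=0.5):
    """Poll for a ceph-mon admin socket (e.g.
    /var/run/ceph/ceph-mon.<host>.asok) until it appears or the
    timeout expires.  Returns True if the socket showed up in time."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False
```

If this returns False while the mon process is running, the monitor is up but has not finished forming a quorum, which matches the probing state in the logs above.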

As for the workaround: what I wanted to do is this: I have three 
servers, two NICs, and three IP addresses per server. The NICs are 
bonded, the bond has an IP address in one network (untagged), and 
additionally, two tagged VLANs are also on the bond. The bug occurred 
when I tried to use a dedicated cluster network (i.e. one of the 
tagged VLANs) and another dedicated public network (the other tagged 
VLAN). At that time, I had 

I now tried leaving cluster network and public network out of 
ceph.conf ... and now I could create the cluster.
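(Editorial note: one plausible pre-flight check for such a split-network setup is to verify that every mon_host address actually lies inside public_network, since the monitors bind to the public network. In the configs quoted later in this thread, mon_host uses 10.112.3.x addresses while public_network is 10.111.0.0/16. A small illustrative sketch; the helper name is mine, not a ceph tool:)

```python
import ipaddress

def mons_outside_public_network(mon_hosts, public_network):
    """Return the mon_host addresses that fall outside the configured
    public_network; monitors bind inside public_network, so any address
    listed here would be unreachable at its advertised IP."""
    net = ipaddress.ip_network(public_network)
    return [h for h in mon_hosts if ipaddress.ip_address(h) not in net]

# Addresses as quoted elsewhere in this thread:
print(mons_outside_public_network(
    ["10.112.3.1", "10.112.3.2", "10.112.3.3"], "10.111.0.0/16"))
# -> ['10.112.3.1', '10.112.3.2', '10.112.3.3']  (all three mismatch)
```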

So it seems to be a network problem, as you (and Brian) supposed. 
However, ssh etc. work properly on all three networks. I don't 
really understand what's going on there, but at least I can 
continue to learn.

Thank you.

best regards
Diedrich 




-- 
Diedrich Ehlerding, Fujitsu Technology Solutions GmbH,
FTS CE SC PSIS W, Hildesheimer Str 25, D-30880 Laatzen
Fon +49 511 8489-1806, Fax -251806, Mobil +49 173 2464758
Firmenangaben: http://de.ts.fujitsu.com/imprint.html

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy fails to generate keys

2014-04-08 Thread Alfredo Deza
On Tue, Apr 8, 2014 at 3:33 AM, Diedrich Ehlerding
diedrich.ehlerd...@ts.fujitsu.com wrote:
 Hi,


 Have you increased the verbosity for the monitors, restarted them, and
 looked at the log output?

 First of all: The bug is still there, and the logs do not help. But I
 seem to have found a workaround (for my setup, not a general fix).

 As for the bug:

 I appended debug log = 20 to ceph-deploy's generated ceph.conf, but I
 don't see much in the logs (and they do not get larger with this
 option). Here is one of the monitor logs from /var/lib/ceph; the
 other ones look identical.

You would probably need to up the verbosity for the monitors, so it
would look like this in the global section:

  debug mon = 20
  debug ms = 10

Then restart the mons and check the output.
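(Editorial note: a hedged sketch of scripting that suggestion with Python's configparser, which handles the ini-style ceph.conf that ceph-deploy generates; the function name is mine, and the mons still have to be restarted afterwards for the options to take effect:)

```python
import configparser

def add_mon_debugging(conf_path):
    """Append monitor/messenger debug levels to the [global] section of
    a ceph.conf so the mons log their probing in detail after restart."""
    conf = configparser.ConfigParser()
    conf.read(conf_path)
    if not conf.has_section("global"):
        conf.add_section("global")
    conf.set("global", "debug mon", "20")
    conf.set("global", "debug ms", "10")
    with open(conf_path, "w") as f:
        conf.write(f)
```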

 2014-04-08 08:26:09.405714 7fd1a0a94780  0 ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60), process ceph-mon, pid 28842
 2014-04-08 08:26:09.851227 7f66ecd06780  0 ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60), process ceph-mon, pid 28943
 2014-04-08 08:26:09.933034 7f66ecd06780  0 mon.hvrrzceph2 does not exist in monmap, will attempt to join an existing cluster
 2014-04-08 08:26:09.933417 7f66ecd06780  0 using public_addr 10.111.3.2:0/0 -> 10.111.3.2:6789/0
 2014-04-08 08:26:09.934003 7f66ecd06780  1 mon.hvrrzceph2@-1(probing) e0 preinit fsid c847e327-1bc5-445f-9c7e-de0551bfde06
 2014-04-08 08:26:09.934149 7f66ecd06780  1 mon.hvrrzceph2@-1(probing) e0  initial_members hvrrzceph1,hvrrzceph2,hvrrzceph3, filtering seed monmap
 2014-04-08 08:26:09.937302 7f66ecd06780  0 mon.hvrrzceph2@-1(probing) e0  my rank is now 0 (was -1)
 2014-04-08 08:26:09.938254 7f66e63c9700  0 -- 10.111.3.2:6789/0 >> 0.0.0.0:0/2 pipe(0x15fba00 sd=21 :0 s=1 pgs=0 cs=0 l=0 c=0x15c9c60).fault
 2014-04-08 08:26:09.938442 7f66e61c7700  0 -- 10.111.3.2:6789/0 >> 10.112.3.2:6789/0 pipe(0x1605280 sd=22 :0 s=1 pgs=0 cs=0 l=0 c=0x15c99a0).fault
 2014-04-08 08:26:09.939001 7f66ecd04700  0 -- 10.111.3.2:6789/0 >> 0.0.0.0:0/1 pipe(0x15fb280 sd=25 :0 s=1 pgs=0 cs=0 l=0 c=0x15c9420).fault
 2014-04-08 08:26:09.939120 7f66e62c8700  0 -- 10.111.3.2:6789/0 >> 10.112.3.1:6789/0 pipe(0x1605780 sd=24 :0 s=1 pgs=0 cs=0 l=0 c=0x15c9b00).fault
 2014-04-08 08:26:09.941140 7f66e60c6700  0 -- 10.111.3.2:6789/0 >> 10.112.3.3:6789/0 pipe(0x1605c80 sd=23 :0 s=1 pgs=0 cs=0 l=0 c=0x15c9840).fault
 2014-04-08 08:27:09.934720 7f66e7bcc700  0 mon.hvrrzceph2@0(probing).data_health(0) update_stats avail 70% total 15365520 used 3822172 avail 10762804
 2014-04-08 08:28:09.935036 7f66e7bcc700  0 mon.hvrrzceph2@0(probing).data_health(0) update_stats avail 70% total 15365520 used 3822172 avail 10762804

 Since ceph-deploy complained about not getting an answer from
 ceph-create-keys:

 [hvrrzceph3][DEBUG ] Starting ceph-create-keys on hvrrzceph3...
 [hvrrzceph3][WARNIN] No data was received after 7 seconds,
 disconnecting...

 I therefore tried to create the keys manually:

 hvrrzceph2:~ # ceph-create-keys --id client.admin
 admin_socket: exception getting command descriptions: [Errno 2] No
 such file or directory
 INFO:ceph-create-keys:ceph-mon admin socket not ready yet.
 admin_socket: exception getting command descriptions: [Errno 2] No
 such file or directory
 INFO:ceph-create-keys:ceph-mon admin socket not ready yet.
 admin_socket: exception getting command descriptions: [Errno 2] No
 such file or directory
 INFO:ceph-create-keys:ceph-mon admin socket not ready yet.
 admin_socket: exception getting command descriptions: [Errno 2] No
 such file or directory
 INFO:ceph-create-keys:ceph-mon admin socket not ready yet.

 [etc.]

 As for the workaround: what I wanted to do is this: I have three
 servers, two NICs, and three IP addresses per server. The NICs are
 bonded, the bond has an IP address in one network (untagged), and
 additionally, two tagged VLANs are also on the bond. The bug occurred
 when I tried to use a dedicated cluster network (i.e. one of the
 tagged VLANs) and another dedicated public network (the other tagged
 VLAN). At that time, I had

 I now tried leaving cluster network and public network out of
 ceph.conf ... and now I could create the cluster.

 So it seems to be a network problem, as you (and Brian) supposed.
 However, ssh etc. are properly working on all three networks.  I
 don't really understand what's going on there, but at least, I can
 continue to learn.

 Thank you.

 best regards
 Diedrich







Re: [ceph-users] ceph-deploy fails to generate keys

2014-04-07 Thread Diedrich Ehlerding
[monitors do not start properly with ceph-deploy]
Brian Chandler:

  thank you for your response, however:
  Including iptables? CentOS/RedHat default to iptables enabled and
  closed.
 
  iptables -Lvn to be 100% sure.
  hvrrzceph1:~ # iptables -Lvn
  iptables: No chain/target/match by that name.
  hvrrzceph1:~ #
 
 Ergh, my mistake: iptables -L -v -n


 
hvrrzceph1:~ # iptables -L -v -n
Chain INPUT (policy ACCEPT 8739 packets, 476K bytes)
 pkts bytes target prot opt in out source   destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source   destination

Chain OUTPUT (policy ACCEPT 6270 packets, 505K bytes)
 pkts bytes target prot opt in out source   destination
hvrrzceph1:~ #

The servers do not run any firewall, and they are connected to the 
same switch. ssh login works over three networks (one to be used as 
admin network, one as public network, and another one as cluster 
network). 

Any hint is appreciated ...

Diedrich




Re: [ceph-users] ceph-deploy fails to generate keys

2014-04-07 Thread Alfredo Deza
On Mon, Apr 7, 2014 at 3:50 AM, Diedrich Ehlerding
diedrich.ehlerd...@ts.fujitsu.com wrote:
 [monitors do not start properly with ceph-deploy]
 Brian Chandler:

  thank you for your response, however:
  Including iptables? CentOS/RedHat default to iptables enabled and
  closed.
 
  iptables -Lvn to be 100% sure.
  hvrrzceph1:~ # iptables -Lvn
  iptables: No chain/target/match by that name.
  hvrrzceph1:~ #
 
 Ergh, my mistake: iptables -L -v -n



 hvrrzceph1:~ # iptables -L -v -n
 Chain INPUT (policy ACCEPT 8739 packets, 476K bytes)
  pkts bytes target prot opt in out source
 destination

 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
  pkts bytes target prot opt in out source
 destination

 Chain OUTPUT (policy ACCEPT 6270 packets, 505K bytes)
  pkts bytes target prot opt in out source
 destination
 hvrrzceph1:~ #

 The servers do not run any firewall, and they are connected to the
 same switch. ssh login works over three networks (one to be used as
 admin network, one as public network, and another one as cluster
 network).

 Any hint is appreciated ...

Have you increased the verbosity for the monitors, restarted them, and
looked at the log output?

 Diedrich




Re: [ceph-users] ceph-deploy fails to generate keys

2014-04-07 Thread Neil Levine
Is SELinux enabled?

On Mon, Apr 7, 2014 at 12:50 AM, Diedrich Ehlerding
diedrich.ehlerd...@ts.fujitsu.com wrote:
 [monitors do not start properly with ceph-deploy]
 Brian Chandler:

  thank you for your response, however:
  Including iptables? CentOS/RedHat default to iptables enabled and
  closed.
 
  iptables -Lvn to be 100% sure.
  hvrrzceph1:~ # iptables -Lvn
  iptables: No chain/target/match by that name.
  hvrrzceph1:~ #
 
 Ergh, my mistake: iptables -L -v -n



 hvrrzceph1:~ # iptables -L -v -n
 Chain INPUT (policy ACCEPT 8739 packets, 476K bytes)
  pkts bytes target prot opt in out source
 destination

 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
  pkts bytes target prot opt in out source
 destination

 Chain OUTPUT (policy ACCEPT 6270 packets, 505K bytes)
  pkts bytes target prot opt in out source
 destination
 hvrrzceph1:~ #

 The servers do not run any firewall, and they are connected to the
 same switch. ssh login works over three networks (one to be used as
 admin network, one as public network, and another one as cluster
 network).

 Any hint is appreciated ...

 Diedrich




Re: [ceph-users] ceph-deploy fails to generate keys

2014-04-04 Thread Alfredo Deza
On Fri, Apr 4, 2014 at 2:53 AM, Diedrich Ehlerding
diedrich.ehlerd...@ts.fujitsu.com wrote:
 I am trying to deploy a cluster with ceph-deploy. I installed ceph
 0.72.2 from the rpm repositories. Running ceph-deploy mon
 create-initial creates /var/lib/ceph etc. on all the nodes, but on
 all nodes I get a warning:


 [hvrrzceph2][DEBUG ] Starting Ceph mon.hvrrzceph2 on hvrrzceph2...
 [hvrrzceph2][DEBUG ] Starting ceph-create-keys on hvrrzceph2...
 [hvrrzceph2][WARNING] No data was received after 7 seconds,
 disconnecting...


 and afterwards, ceph -s cannot connect to the cluster. No
 client.admin keyring is created in ceph.conf.

 Then I attempted to create one monitor only on one node. Again, no
 keys for client.admin,  bootstrap-mds and bootstrap-osd were created.
 Here is the complete log of this attempt:

 hvrrzceph1:~/my-cluster # ceph-deploy new hvrrzceph1
 [ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy new
 hvrrzceph1
 [ceph_deploy.new][DEBUG ] Creating new cluster named ceph
 [ceph_deploy.new][DEBUG ] Resolving host hvrrzceph1
 [ceph_deploy.new][DEBUG ] Monitor hvrrzceph1 at 10.1.1.239
 [ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
 [ceph_deploy.new][DEBUG ] Monitor initial members are ['hvrrzceph1']
 [ceph_deploy.new][DEBUG ] Monitor addrs are ['10.1.1.239']
 [ceph_deploy.new][DEBUG ] Creating a random mon key...
 [ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
 [ceph_deploy.new][DEBUG ] Writing monitor keyring to
 ceph.mon.keyring...
 hvrrzceph1:~/my-cluster # vi ceph.conf

 [added cluster network + public network; leaving them out doesn't
 change anything]

 hvrrzceph1:~/my-cluster # cat ceph.conf
 [global]
 auth_service_required = cephx
 filestore_xattr_use_omap = true
 auth_client_required = cephx
 auth_cluster_required = cephx
 mon_host = 10.1.1.239
 mon_initial_members = hvrrzceph1
 fsid = ec73a230-f645-456f-9523-9a03621d18dd
 cluster_network = 10.112.0.0/16
 public_network = 10.111.0.0/16


 hvrrzceph1:~/my-cluster # ceph-deploy mon create-initial
 [ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy mon
 create-initial
 [ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts
 hvrrzceph1
 [ceph_deploy.mon][DEBUG ] detecting platform for host hvrrzceph1 ...
 [hvrrzceph1][DEBUG ] connected to host: hvrrzceph1
 [hvrrzceph1][DEBUG ] detect platform information from remote host
 [hvrrzceph1][DEBUG ] detect machine type
 [ceph_deploy.mon][INFO  ] distro info: SUSE Linux Enterprise Server
 11 x86_64
 [hvrrzceph1][DEBUG ] determining if provided host has same hostname
 in remote
 [hvrrzceph1][DEBUG ] get remote short hostname
 [hvrrzceph1][DEBUG ] deploying mon to hvrrzceph1
 [hvrrzceph1][DEBUG ] get remote short hostname
 [hvrrzceph1][DEBUG ] remote hostname: hvrrzceph1
 [hvrrzceph1][DEBUG ] write cluster configuration to
 /etc/ceph/{cluster}.conf
 [hvrrzceph1][DEBUG ] create the mon path if it does not exist
 [hvrrzceph1][DEBUG ] checking for done path:
 /var/lib/ceph/mon/ceph-hvrrzceph1/done
 [hvrrzceph1][DEBUG ] done path does not exist:
 /var/lib/ceph/mon/ceph-hvrrzceph1/done
 [hvrrzceph1][INFO  ] creating keyring file:
 /var/lib/ceph/tmp/ceph-hvrrzceph1.mon.keyring
 [hvrrzceph1][DEBUG ] create the monitor keyring file
 [hvrrzceph1][INFO  ] Running command: ceph-mon --cluster ceph --mkfs
 -i hvrrzceph1 --keyring /var/lib/ceph/tmp/ceph-hvrrzceph1.mon.keyring
 [hvrrzceph1][DEBUG ] ceph-mon: set fsid to
 ec73a230-f645-456f-9523-9a03621d18dd
 [hvrrzceph1][DEBUG ] ceph-mon: created monfs at
 /var/lib/ceph/mon/ceph-hvrrzceph1 for mon.hvrrzceph1
 [hvrrzceph1][INFO  ] unlinking keyring file
 /var/lib/ceph/tmp/ceph-hvrrzceph1.mon.keyring
 [hvrrzceph1][DEBUG ] create a done file to avoid re-doing the mon
 deployment
 [hvrrzceph1][DEBUG ] create the init path if it does not exist
 [hvrrzceph1][INFO  ] Running command: rcceph -c /etc/ceph/ceph.conf
 start mon.hvrrzceph1
 [hvrrzceph1][DEBUG ] === mon.hvrrzceph1 ===
 [hvrrzceph1][DEBUG ] Starting Ceph mon.hvrrzceph1 on hvrrzceph1...
 [hvrrzceph1][DEBUG ] Starting ceph-create-keys on hvrrzceph1...
 [hvrrzceph1][WARNIN] No data was received after 7 seconds,
 disconnecting...
 [hvrrzceph1][INFO  ] Running command: ceph --cluster=ceph
 --admin-daemon /var/run/ceph/ceph-mon.hvrrzceph1.asok mon_status
 [hvrrzceph1][DEBUG ]
 **
 **
 [hvrrzceph1][DEBUG ] status for monitor: mon.hvrrzceph1
 [hvrrzceph1][DEBUG ] {
 [hvrrzceph1][DEBUG ]   election_epoch: 2,
 [hvrrzceph1][DEBUG ]   extra_probe_peers: [
 [hvrrzceph1][DEBUG ] 10.1.1.239:6789/0
 [hvrrzceph1][DEBUG ]   ],
 [hvrrzceph1][DEBUG ]   monmap: {
 [hvrrzceph1][DEBUG ] created: 0.00,
 [hvrrzceph1][DEBUG ] epoch: 1,
 [hvrrzceph1][DEBUG ] fsid:
 ec73a230-f645-456f-9523-9a03621d18dd,
 [hvrrzceph1][DEBUG ] modified: 0.00,
 [hvrrzceph1][DEBUG ] mons: [
 [hvrrzceph1][DEBUG ]   {
 [hvrrzceph1][DEBUG ] addr: 10.111.3.1:6789/0,
 

Re: [ceph-users] ceph-deploy fails to generate keys

2014-04-04 Thread Diedrich Ehlerding
 Alfredo Deza writes:


 Have you ensured that either there is no firewall up or that the ports
 that the monitors need to communicate between each other are open?


Yes, I am sure - the nodes are connected over one single switch, and 
no firewall is active.

 
 If that is not the problem then the next thing I would do is to
 increase the verbosity for the monitors, restart them
 and look at the logs.

My current configuration (ceph.conf) is:

(i.e. I tried to check whether or not I had used the wrong network 
interfaces.)

mon_host = 10.112.3.1,10.112.3.2,10.112.3.3
mon_initial_members = hvrrzceph1, hvrrzceph2, hvrrzceph3

and the networks are: 

hvrrzceph1:~/my-cluster # grep hvrrzceph1 /etc/hosts
10.1.1.239  hvrrzceph1-admin
10.111.3.1  hvrrzceph1-storage
10.112.3.1  hvrrzceph1
 (two more nodes in the same way)

Someone is listening on port 6789:
hvrrzceph1:~/my-cluster # grep 6789 /etc/services
smc-https  6789/tcp # SMC-HTTPS  [Ratnadeep_Bhattachar]
smc-https  6789/udp # SMC-HTTPS  [Ratnadeep_Bhattachar]
hvrrzceph1:~/my-cluster # netstat -a | grep smc-https
tcp0  0 hvrrzceph1-st:smc-https *:*   LISTEN
hvrrzceph1:~/my-cluster #

The monitor log says:

2014-04-04 15:11:00.673595 7fbb264f2780  0 ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60), process ceph-mon, pid 9354
2014-04-04 15:11:01.017924 7f57b6e0c780  0 ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60), process ceph-mon, pid 9455
2014-04-04 15:11:01.027519 7f57b6e0c780  0 mon.hvrrzceph1 does not exist in monmap, will attempt to join an existing cluster
2014-04-04 15:11:01.027928 7f57b6e0c780  0 using public_addr 10.111.3.1:0/0 -> 10.111.3.1:6789/0
2014-04-04 15:11:01.030407 7f57b6e0c780  1 mon.hvrrzceph1@-1(probing) e0 preinit fsid 8dba6b51-9380-4d32-9393-520dc141a8b6
2014-04-04 15:11:01.030645 7f57b6e0c780  1 mon.hvrrzceph1@-1(probing) e0  initial_members hvrrzceph1,hvrrzceph2,hvrrzceph3, filtering seed monmap
2014-04-04 15:11:01.031918 7f57b6e0c780  0 mon.hvrrzceph1@-1(probing) e0  my rank is now 0 (was -1)
2014-04-04 15:11:01.032909 7f57b04cf700  0 -- 10.111.3.1:6789/0 >> 0.0.0.0:0/2 pipe(0x15fda00 sd=21 :0 s=1 pgs=0 cs=0 l=0 c=0x15bec60).fault
2014-04-04 15:11:01.033772 7f57b03ce700  0 -- 10.111.3.1:6789/0 >> 10.112.3.1:6789/0 pipe(0x1607780 sd=24 :0 s=1 pgs=0 cs=0 l=0 c=0x15beb00).fault
2014-04-04 15:11:01.034079 7f57b6e0a700  0 -- 10.111.3.1:6789/0 >> 0.0.0.0:0/1 pipe(0x15fd280 sd=23 :0 s=1 pgs=0 cs=0 l=0 c=0x15be420).fault
2014-04-04 15:11:01.034627 7f57b01cc700  0 -- 10.111.3.1:6789/0 >> 10.112.3.3:6789/0 pipe(0x1607c80 sd=25 :0 s=1 pgs=0 cs=0 l=0 c=0x15be840).fault

[etc. for some time; and then]

2014-04-04 15:21:01.033997 7f57b1cd2700  0 mon.hvrrzceph1@0(probing).data_health(0) update_stats avail 70% total 15365520 used 3804740 avail 10780236
2014-04-04 15:22:01.034316 7f57b1cd2700  0 mon.hvrrzceph1@0(probing).data_health(0) update_stats avail 70% total 15365520 used 3804740 avail 10780236
2014-04-04 15:23:01.034627 7f57b1cd2700  0 mon.hvrrzceph1@0(probing).data_health(0) update_stats avail 70% total 15365520 used 3804740 avail 10780236
2014-04-04 15:24:01.034917 7f57b1cd2700  0 mon.hvrrzceph1@0(probing).data_health(0) update_stats avail 70% total 15365520 used 3804740 avail 10780236
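(Editorial note: the repeating pipe(...).fault lines can be mined for the peer addresses the probing monitor fails to reach; ceph's messenger logs them as "local >> peer pipe(...).fault", though the ">>" is often swallowed by mail archiving. A throwaway sketch, not a ceph tool:)

```python
import re

# Messenger fault line: "-- <local addr> >> <peer addr> pipe(...).fault"
FAULT_RE = re.compile(r"-- (?P<local>\S+) >> (?P<peer>\S+) pipe\(.*\)\.fault")

def unreachable_peers(log_lines):
    """Return the distinct peer addresses from pipe(...).fault lines."""
    peers = []
    for line in log_lines:
        m = FAULT_RE.search(line)
        if m and m.group("peer") not in peers:
            peers.append(m.group("peer"))
    return peers

# Sample lines modelled on the log above:
sample = [
    "0 -- 10.111.3.1:6789/0 >> 10.112.3.1:6789/0 "
    "pipe(0x1607780 sd=24 :0 s=1 pgs=0 cs=0 l=0 c=0x15beb00).fault",
    "0 -- 10.111.3.1:6789/0 >> 10.112.3.3:6789/0 "
    "pipe(0x1607c80 sd=25 :0 s=1 pgs=0 cs=0 l=0 c=0x15be840).fault",
]
print(unreachable_peers(sample))
# -> ['10.112.3.1:6789/0', '10.112.3.3:6789/0']
```

Note that the local side binds to 10.111.x while the unreachable peers sit on 10.112.x, consistent with the split-network configuration being discussed.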

Diedrich




Re: [ceph-users] ceph-deploy fails to generate keys

2014-04-04 Thread Brian Candler

On 04/04/2014 14:31, Diedrich Ehlerding wrote:

  Alfredo Deza writes:



Have you ensured that either there is no firewall up or that the ports
that the monitors need to communicate between each other are open?


Yes, I am sure - the nodes are connected over one single switch, and
no firewall is active.

Including iptables? CentOS/RedHat default to iptables enabled and closed.

iptables -Lvn to be 100% sure.



Re: [ceph-users] ceph-deploy fails to generate keys

2014-04-04 Thread Brian Candler

On 04/04/2014 15:14, Diedrich Ehlerding wrote:

Hi Brian,

thank you for your response, however:

Including iptables? CentOS/RedHat default to iptables enabled and
closed.

iptables -Lvn to be 100% sure.

hvrrzceph1:~ # iptables -Lvn
iptables: No chain/target/match by that name.
hvrrzceph1:~ #


Ergh, my mistake: iptables -L -v -n

Regards,

Brian.