To see where your OSDs and MONs are listening, you can use various
commands on Linux, e.g.:

'lsof -ni | grep ceph' - you should see one LISTEN line for the monitor,
two LISTEN lines for the OSDs, and a lot of ESTABLISHED lines, which
indicate communication between OSDs as well as between OSDs and clients.
'netstat -atn | grep LIST' - you should see a lot of lines with
port numbers 6800 and upwards (OSDs) and port 6789 (MON).
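
For reference, here is roughly how those checks might look in a shell
session on an OSD host (the 10.0.0.x pattern below just mirrors the
cluster network discussed further down; adjust it to your own addresses):

  # List Ceph processes and their sockets: the monitor should LISTEN on
  # 6789, the OSDs on 6800 and upwards, plus many ESTABLISHED peer
  # connections.
  lsof -ni | grep ceph

  # Same information via netstat: only the listening TCP sockets.
  netstat -atn | grep LIST

  # If a separate cluster network is in use, some of the OSD listeners
  # should be bound to cluster-network addresses, e.g. 10.0.0.x:
  netstat -atn | grep LIST | grep '10\.0\.0\.'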
More comments inline.

HTH,
Kurt
> Gandalf Corvotempesta <mailto:gandalf.corvotempe...@gmail.com>
> 28. April 2014 11:05
> 2014-04-26 12:06 GMT+02:00 Gandalf Corvotempesta
>
> I've added "cluster addr" and "public addr" to each OSD configuration
> but nothing is changed.
> I see all OSDs down except the ones from one server but I'm able to
> ping each other nodes on both interfaces.

What do you mean by "I see all OSDs down"? What does 'ceph osd stat' say?
>
> How can I detect what ceph is doing?
'ceph -w'

> I see tons of debug logs, but they
> are not very easy to understand.
> With "ceph health" I can see that the "pgs down" value is slowly
> decreasing, so I suppose that ceph is recovering. Is that right?
What's the output of 'ceph -s'?
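
For completeness, the status commands mentioned in this thread are all
standard Ceph CLI calls:

  # One-line OSD summary: how many OSDs exist and how many are up/in.
  ceph osd stat

  # Full cluster summary: health, monitor quorum, OSD map, PG states.
  ceph -s

  # Health status with details (HEALTH_OK / HEALTH_WARN / HEALTH_ERR).
  ceph health detail

  # Follow cluster events live (recovery progress etc.) instead of
  # digging through the debug logs.
  ceph -w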

>
> Isn't it possible to add a simplified output like the one coming from
> "mdadm" (cat /proc/mdstat)?
> Gandalf Corvotempesta <mailto:gandalf.corvotempe...@gmail.com>
> 26. April 2014 12:06
> I've not defined cluster IPs for each OSD server, only the whole
> subnet.
> Should I define an IP for each OSD? This isn't written in the docs and
> could be tricky to do in big environments with hundreds of nodes.
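
You do not normally need per-OSD entries: with cluster_network and
public_network set globally, each OSD binds to whichever local interface
falls inside the matching subnet. If you do want to pin addresses per
daemon (as tried with "cluster addr"/"public addr" above), the ceph.conf
sections would look roughly like this - host names and addresses here
are purely illustrative:

  [osd.0]
  host = osd1
  # client-facing address on bond0
  public addr = 192.168.0.1
  # replication address on bond1
  cluster addr = 10.0.0.1

  [osd.1]
  host = osd2
  public addr = 192.168.0.2
  cluster addr = 10.0.0.2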
> McNamara, Bradley <mailto:bradley.mcnam...@seattle.gov>
> 24. April 2014 20:04
> Do you have all of the cluster IPs defined in the hosts file on each
> OSD server? As I understand it, the mons do not use the cluster
> network, only the OSD servers do.
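
For example (purely illustrative names and addresses), /etc/hosts on
every OSD server could carry both sets of addresses so the daemons can
resolve their peers on either network:

  # public network (bond0)
  192.168.0.1   osd1
  192.168.0.2   osd2
  192.168.0.3   osd3
  # cluster network (bond1)
  10.0.0.1      osd1-cluster
  10.0.0.2      osd2-cluster
  10.0.0.3      osd3-cluster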
>
> -----Original Message-----
> From: ceph-users-boun...@lists.ceph.com
> [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Gandalf
> Corvotempesta
> Sent: Thursday, April 24, 2014 8:54 AM
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] cluster_network ignored
>
> I'm trying to configure a small ceph cluster with both public and
> cluster networks.
> This is my conf:
>
> [global]
> public_network = 192.168.0/24
> cluster_network = 10.0.0.0/24
> auth cluster required = cephx
> auth service required = cephx
> auth client required = cephx
> fsid = 004baba0-74dc-4429-84ec-1e376fb7bcad
> osd pool default pg num = 8192
> osd pool default pgp num = 8192
> osd pool default size = 3
>
> [mon]
> mon osd down out interval = 600
> mon osd mon down reporters = 7
> [mon.osd1]
> host = osd1
> mon addr = 192.168.0.1
> [mon.osd2]
> host = osd2
> mon addr = 192.168.0.2
> [mon.osd3]
> host = osd3
> mon addr = 192.168.0.3
>
> [osd]
> osd mkfs type = xfs
> osd journal size = 16384
> osd mon heartbeat interval = 30
> filestore merge threshold = 40
> filestore split multiple = 8
> osd op threads = 8
> osd recovery max active = 5
> osd max backfills = 2
> osd recovery op priority = 2
>
>
> on each node I have bond0 bound to 192.168.0.x and bond1 bound to
> 10.0.0.x. When ceph is doing recovery, I can see replication through
> bond0 (the public interface) and nothing via bond1 (the cluster interface).
>
> Should I configure anything else?
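
A generic way to check whether the cluster network was actually picked
up (not specific to this setup): the OSD map records a public and a
cluster address for every OSD, and the interface counters show where
replication really flows.

  # Each osd.N line lists the OSD's public address and its cluster
  # address; if both sit in 192.168.0.0/24, the cluster_network setting
  # was not picked up.
  ceph osd dump | grep '^osd\.'

  # Compare traffic counters on the two bonds during recovery to see
  # which link carries the replication traffic.
  ip -s link show bond0
  ip -s link show bond1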


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
