[ceph-users] ceph-deploy and debian stretch 9

2017-02-14 Thread Zorg

Hello,

Debian stretch is almost stable, so I wanted to deploy Ceph Jewel on it.

But with

ceph-deploy new mynode

I get this error:

[ceph_deploy][ERROR ] UnsupportedPlatform: Platform is not supported: debian 9.0



I know I can cheat by changing /etc/debian_version to 8.0, but I'm sure
there is a better way to do it.
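For context, ceph-deploy refuses any distro release it does not explicitly know about; the gate is essentially a string comparison on the detected release, so a ceph-deploy version that knows "stretch" presumably avoids the error. A rough sketch of that gate (my own approximation of the logic, not ceph-deploy's actual code; the supported list here is hypothetical):

```python
# Sketch of ceph-deploy's platform gate (an approximation, NOT the real code).
# ceph-deploy keeps an explicit list of releases it knows; anything else
# raises UnsupportedPlatform. The list below is hypothetical.
SUPPORTED_DEBIAN_RELEASES = {'7', '8'}

class UnsupportedPlatform(Exception):
    pass

def check_platform(distro, release):
    """Raise UnsupportedPlatform if this distro/release pair is unknown."""
    major = release.split('.')[0]
    if distro.lower() == 'debian' and major not in SUPPORTED_DEBIAN_RELEASES:
        raise UnsupportedPlatform(
            'Platform is not supported: %s %s' % (distro, release))

check_platform('debian', '8.0')  # jessie: passes silently
try:
    check_platform('debian', '9.0')  # stretch: rejected
except UnsupportedPlatform as e:
    print(e)  # → Platform is not supported: debian 9.0
```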


Thanks for your help


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph and systemd

2016-02-29 Thread zorg

Hi, can someone explain how Ceph is organized with systemd?

I can see that for each OSD there is a unit file like this:
/etc/systemd/system/ceph.target.wants/ceph-osd@0.service
which is a symlink to /lib/systemd/system/ceph-osd@.service

but there are other services started for Ceph, like:
sys-devices-pci:00-:00:01.0-:04:00.0-host0-target0:0:0-0:0:0:1-block-sdb-sdb1.device loaded active plugged   LOGICAL_VOLUME ceph\x20data
sys-devices-pci:00-:00:01.0-:04:00.0-host0-target0:0:0-0:0:0:1-block-sdb-sdb2.device loaded active plugged   LOGICAL_VOLUME ceph\x20journal


and
var-lib-ceph-osd-ceph\x2d0.mount loaded active mounted   /var/lib/ceph/osd/ceph-0


I can't find these anywhere on my system and don't really understand how
they are started.
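Those .mount and .device units are not files on disk: systemd generates .mount units from /etc/fstab or from mounts that appear at runtime, and .device units from udev events, naming each unit by escaping the path. A sketch of that escaping rule (my own approximation of what `systemd-escape --path` does, ignoring edge cases like a leading dot):

```python
def systemd_escape_path(path):
    """Rough sketch of systemd's path escaping (see systemd-escape --path).

    '/' becomes '-', and any character that is not alphanumeric or one of
    ':', '_', '.' is replaced by \\xNN (its hex code). This is why
    /var/lib/ceph/osd/ceph-0 shows up as var-lib-ceph-osd-ceph\\x2d0.mount.
    """
    out = []
    for ch in path.strip('/'):
        if ch == '/':
            out.append('-')
        elif ch.isalnum() or ch in ':_.':
            out.append(ch)
        else:
            out.append('\\x%02x' % ord(ch))
    return ''.join(out)

# The mount unit for an OSD's data directory is named after its path:
print(systemd_escape_path('/var/lib/ceph/osd/ceph-0'))  # → var-lib-ceph-osd-ceph\x2d0
```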




Thanks for your explanation

--
probeSys - GNU/Linux specialist
website: http://www.probesys.com



[ceph-users] ceph debian systemd

2014-09-25 Thread zorg

Hi,
I'm using ceph version 0.80.5.

I'm trying to get a Ceph cluster working using Debian and systemd.

I have already managed to install a Ceph cluster on Debian with sysvinit
without any problem.

But after installing everything with ceph-deploy, without any error, not
all my OSDs start after a reboot (they are not mounted).
What is stranger: at each reboot it's not the same OSDs that start, and
some start 10 minutes later.

I have this in the log:

Sep 25 12:18:23 addceph3 systemd-udevd[437]: '/usr/sbin/ceph-disk-activate /dev/sdh1' [1005] terminated by signal 9 (Killed)
Sep 25 12:18:23 addceph3 systemd-udevd[476]: timeout: killing '/usr/sbin/ceph-disk-activate /dev/sdq1' [1142]
Sep 25 12:18:23 addceph3 systemd-udevd[486]: timeout: killing '/usr/sbin/ceph-disk-activate /dev/sdg1' [998]
Sep 25 12:18:23 addceph3 systemd-udevd[486]: '/usr/sbin/ceph-disk-activate /dev/sdg1' [998] terminated by signal 9 (Killed)
Sep 25 12:18:23 addceph3 systemd-udevd[476]: '/usr/sbin/ceph-disk-activate /dev/sdq1' [1142] terminated by signal 9 (Killed)
Sep 25 12:18:23 addceph3 systemd-udevd[458]: timeout: killing '/usr/sbin/ceph-disk-activate /dev/sdi1' [1001]
Sep 25 12:18:23 addceph3 systemd-udevd[458]: '/usr/sbin/ceph-disk-activate /dev/sdi1' [1001] terminated by signal 9 (Killed)
Sep 25 12:18:23 addceph3 systemd-udevd[444]: timeout: killing '/usr/sbin/ceph-disk-activate /dev/sdj1' [1006]
Sep 25 12:18:23 addceph3 systemd-udevd[460]: timeout: killing '/usr/sbin/ceph-disk-activate /dev/sdk1' [1152]
Sep 25 12:18:23 addceph3 systemd-udevd[444]: '/usr/sbin/ceph-disk-activate /dev/sdj1' [1006] terminated by signal 9 (Killed)
Sep 25 12:18:23 addceph3 systemd-udevd[460]: '/usr/sbin/ceph-disk-activate /dev/sdk1' [1152] terminated by signal 9 (Killed)
Sep 25 12:18:23 addceph3 systemd-udevd[469]: timeout: killing '/usr/sbin/ceph-disk-activate /dev/sdm1' [1110]
Sep 25 12:18:23 addceph3 systemd-udevd[470]: timeout: killing '/usr/sbin/ceph-disk-activate /dev/sdp1' [1189]
Sep 25 12:18:23 addceph3 systemd-udevd[469]: '/usr/sbin/ceph-disk-activate /dev/sdm1' [1110] terminated by signal 9 (Killed)
Sep 25 12:18:23 addceph3 systemd-udevd[470]: '/usr/sbin/ceph-disk-activate /dev/sdp1' [1189] terminated by signal 9 (Killed)
Sep 25 12:18:23 addceph3 systemd-udevd[468]: timeout: killing '/usr/sbin/ceph-disk-activate /dev/sdl1' [1177]
Sep 25 12:18:23 addceph3 systemd-udevd[447]: timeout: killing '/usr/sbin/ceph-disk-activate /dev/sdo1' [1181]
Sep 25 12:18:23 addceph3 systemd-udevd[468]: '/usr/sbin/ceph-disk-activate /dev/sdl1' [1177] terminated by signal 9 (Killed)
Sep 25 12:18:23 addceph3 systemd-udevd[447]: '/usr/sbin/ceph-disk-activate /dev/sdo1' [1181] terminated by signal 9 (Killed)
Sep 25 12:18:23 addceph3 systemd-udevd[490]: timeout: killing '/usr/sbin/ceph-disk-activate /dev/sdr1' [1160]
Sep 25 12:18:23 addceph3 systemd-udevd[490]: '/usr/sbin/ceph-disk-activate /dev/sdr1' [1160] terminated by signal 9 (Killed)
Sep 25 12:18:23 addceph3 systemd-udevd[445]: timeout: killing '/usr/sbin/ceph-disk-activate /dev/sdn1' [1202]
Sep 25 12:18:23 addceph3 systemd-udevd[445]: '/usr/sbin/ceph-disk-activate /dev/sdn1' [1202] terminated by signal 9 (Killed)
Sep 25 12:18:23 addceph3 kernel: [   39.813701] XFS (sdo1): Mounting Filesystem
Sep 25 12:18:23 addceph3 kernel: [   39.854510] XFS (sdo1): Ending clean mount
Sep 25 12:22:59 addceph3 systemd[1]: ceph.service operation timed out. Terminating.
Sep 25 12:22:59 addceph3 systemd[1]: Failed to start LSB: Start Ceph distributed file system daemons at boot time.




I'm actually not very experienced with systemd, and I don't really know
how Ceph handles it.

Could someone give me a bit of information?
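To at least see which disks were killed by the udev event timeout, the log lines above can be filtered; a small sketch (log excerpt shortened to two lines; a real run would read the journal or syslog):

```python
import re

# Two sample lines from the log above; a real run would read journalctl
# or /var/log/syslog output instead of this canned string.
LOG = """\
Sep 25 12:18:23 addceph3 systemd-udevd[476]: timeout: killing '/usr/sbin/ceph-disk-activate /dev/sdq1' [1142]
Sep 25 12:18:23 addceph3 systemd-udevd[486]: timeout: killing '/usr/sbin/ceph-disk-activate /dev/sdg1' [998]
"""

# Extract the device path from each "timeout: killing" line.
killed = re.findall(r"timeout: killing '/usr/sbin/ceph-disk-activate (/dev/\w+)'", LOG)
print(sorted(set(killed)))  # → ['/dev/sdg1', '/dev/sdq1']
```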

thanks





[ceph-users] periodic strange message in log

2014-02-10 Thread zorg

hello,

I have already seen this issue reported in a forum / bug tracker, but I
don't really know what to do about it.
"ceph health" always reports HEALTH_OK,

but in my syslog:
Feb 10 03:07:14 dcceph1 kernel: [1589377.227270] libceph: osd0 192.168.3.22:6809 connect authorization failure
Feb 10 03:22:15 dcceph1 kernel: [1590276.664061] libceph: osd0 192.168.3.22:6809 socket closed
Feb 10 03:22:15 dcceph1 kernel: [1590276.664719] libceph: osd0 192.168.3.22:6809 connect authorization failure
Feb 10 03:37:16 dcceph1 kernel: [1591176.144811] libceph: osd0 192.168.3.22:6809 socket closed
Feb 10 03:37:16 dcceph1 kernel: [1591176.145375] libceph: osd0 192.168.3.22:6809 connect authorization failure
Feb 10 03:52:17 dcceph1 kernel: [1592075.577299] libceph: osd0 192.168.3.22:6809 socket closed
Feb 10 03:52:17 dcceph1 kernel: [1592075.577944] libceph: osd0 192.168.3.22:6809 connect authorization failure
Feb 10 04:07:18 dcceph1 kernel: [1592975.058406] libceph: osd0 192.168.3.22:6809 socket closed
Feb 10 04:07:18 dcceph1 kernel: [1592975.059011] libceph: osd0 192.168.3.22:6809 connect authorization failure
Feb 10 04:22:19 dcceph1 kernel: [1593874.490796] libceph: osd0 192.168.3.22:6809 socket closed
Feb 10 04:22:19 dcceph1 kernel: [1593874.491363] libceph: osd0 192.168.3.22:6809 connect authorization failure
Feb 10 04:37:20 dcceph1 kernel: [1594773.971795] libceph: osd0 192.168.3.22:6809 socket closed
Feb 10 04:37:20 dcceph1 kernel: [1594773.972363] libceph: osd0 192.168.3.22:6809 connect authorization failure
Feb 10 04:52:21 dcceph1 kernel: [1595673.404358] libceph: osd0 192.168.3.22:6809 socket closed
Feb 10 04:52:21 dcceph1 kernel: [1595673.404898] libceph: osd0 192.168.3.22:6809 connect authorization failure
Feb 10 05:07:22 dcceph1 kernel: [1596572.885531] libceph: osd0 192.168.3.22:6809 socket closed
Feb 10 05:07:22 dcceph1 kernel: [1596572.886160] libceph: osd0 192.168.3.22:6809 connect authorization failure
Feb 10 05:22:23 dcceph1 kernel: [1597472.318134] libceph: osd0 192.168.3.22:6809 socket closed
Feb 10 05:22:23 dcceph1 kernel: [1597472.318726] libceph: osd0 192.168.3.22:6809 connect authorization failure
Feb 10 05:37:24 dcceph1 kernel: [1598371.799174] libceph: osd0 192.168.3.22:6809 socket closed
Feb 10 05:37:24 dcceph1 kernel: [1598371.799766] libceph: osd0 192.168.3.22:6809 connect authorization failure
Feb 10 05:52:25 dcceph1 kernel: [1599271.231721] libceph: osd0 192.168.3.22:6809 socket closed
Feb 10 05:52:25 dcceph1 kernel: [1599271.232285] libceph: osd0 192.168.3.22:6809 connect authorization failure
Feb 10 06:07:26 dcceph1 kernel: [1600170.712652] libceph: osd0 192.168.3.22:6809 socket closed
Feb 10 06:07:26 dcceph1 kernel: [1600170.713344] libceph: osd0 192.168.3.22:6809 connect authorization failure
Feb 10 06:22:27 dcceph1 kernel: [1601070.145368] libceph: osd0 192.168.3.22:6809 socket closed
Feb 10 06:22:27 dcceph1 kernel: [1601070.145938] libceph: osd0 192.168.3.22:6809 connect authorization failure


All other OSDs are OK.
In ceph-osd.0.log:
2014-02-10 05:22:23.241291 7f75e7153700  0 -- 192.168.3.22:6809/31236 >> 192.168.3.22:0/3626805593 pipe(0x115e8f00 sd=39 :6809 s=0 pgs=0 cs=0 l=1 c=0x5dab4a0).accept: got bad authorizer
2014-02-10 05:22:23.952829 7f75e7153700  0 -- 192.168.3.22:6809/31236 >> 192.168.3.22:0/3626805593 pipe(0x46ae280 sd=39 :6809 s=0 pgs=0 cs=0 l=0 c=0x5daa2c0).accept peer addr is really 192.168.3.22:0/3626805593 (socket is 192.168.3.22:58105/0)



I'm running
ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)

Do I have to worry, and how do I get rid of this?

thanks

cyril


Re: [ceph-users] periodic strange message in log

2014-02-10 Thread zorg

One more piece of information:
osd0 is used for an rbd mapping,

and for a test, one block device is mapped on dcceph1.

Maybe it's due to this.


On 10/02/2014 20:30, zorg wrote:

[...]




[ceph-users] get virtual size and used

2014-02-03 Thread zorg

hi,
We use an rbd pool,
and I wonder how I can get the real size used by my rbd image.

I can get the virtual size with "rbd info",
but how can I get the real size used by my rbd image?
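One common approach (assuming the three-column "offset length type" output of `rbd diff`) is to sum the extent lengths; a sketch:

```shell
# Sum the extents that "rbd diff" reports to estimate the real space used.
# Assumes rbd diff prints "offset length type" rows; with no snapshot
# given it lists all allocated extents of the image.
rbd_used() {
    awk '{ sum += $2 } END { printf "%d\n", sum }'
}

# On a live cluster one would run:  rbd diff rbd/myimage | rbd_used
# Demonstration with canned rbd diff output (two 4 MiB extents):
printf '0 4194304 data\n8388608 4194304 data\n' | rbd_used
```

This counts bytes the cluster has actually allocated for the image, which for a thin-provisioned image is usually well below the virtual size shown by `rbd info`.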




Re: [ceph-users] Debian wheezy qemu rbd support?

2014-01-29 Thread zorg

Hello,
we use libvirt from wheezy-backports.



On 29/01/2014 04:13, Schlacta, Christ wrote:
Thank you zorg :) In theory it does help; however, I've already got it installed currently from a local repository. I'm planning to throw that local repo into ceph and call it a day here. I did notice that libvirt is noticeably absent from your repository. What do you use in place of libvirt to manage your virtual environments?



On Tue, Jan 28, 2014 at 1:30 AM, zorg <z...@probesys.com> wrote:


Hello,
we have a public repository with qemu-kvm wheezy-backports builds
with rbd:

deb http://deb.probesys.com/debian/ wheezy-backports main

hope it can help


Le 26/01/2014 12:43, Schlacta, Christ a écrit :


So on Debian wheezy, qemu is built without ceph/rbd support. I don't know about everyone else, but I use backported qemu. Does anyone provide a trusted, or official, build of qemu from Debian backports that supports ceph/rbd?








[ceph-users] placing SSDs and SATAs pool in same hosts

2014-01-22 Thread zorg

Hi,
After reading the thread
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-June/002358.html

we have built this CRUSH map to make things work.
srv1 and srv1ssd are the same physical server (likewise for srv2, 3, 4);
we split each server in the CRUSH map to create two parallel hierarchies.
This example works;
I was just wondering if it's the best way to achieve this.

Thanks


# begin crush map

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7 osd.7
device 8 osd.8
device 9 osd.9
device 10 osd.10
device 11 osd.11
device 12 osd.12
device 13 osd.13
device 14 osd.14
device 15 osd.15

# types
type 0 osd
type 1 host
type 2 rack
type 3 row
type 4 room
type 5 datacenter
type 6 root

# buckets
host srv1 {
    id -2        # do not change unnecessarily
    # weight 5.690
    alg straw
    hash 0       # rjenkins1
    item osd.1 weight 1.820
    item osd.2 weight 1.820
    item osd.3 weight 1.820
}
host srv2 {
    id -3        # do not change unnecessarily
    # weight 5.690
    alg straw
    hash 0       # rjenkins1
    item osd.5 weight 1.820
    item osd.14 weight 1.820
    item osd.15 weight 1.820
}
host srv3 {
    id -4        # do not change unnecessarily
    # weight 5.690
    alg straw
    hash 0       # rjenkins1
    item osd.7 weight 1.820
    item osd.8 weight 1.820
    item osd.9 weight 1.820
}
host srv4 {
    id -5        # do not change unnecessarily
    # weight 5.690
    alg straw
    hash 0       # rjenkins1
    item osd.11 weight 1.820
    item osd.12 weight 1.820
    item osd.13 weight 1.820
}
host srv1ssd {
    id -100
    alg straw
    hash 0
    item osd.0 weight 0.230
}
host srv2ssd {
    id -101
    alg straw
    hash 0
    item osd.4 weight 0.230
}
host srv3ssd {
    id -102
    alg straw
    hash 0
    item osd.6 weight 0.230
}
host srv4ssd {
    id -103
    alg straw
    hash 0
    item osd.10 weight 0.230
}

root default {
    id -1        # do not change unnecessarily
    # weight 22.760
    alg straw
    hash 0       # rjenkins1
    item srv1 weight 5.690
    item srv2 weight 5.690
    item srv3 weight 5.690
    item srv4 weight 5.690
}

root ssd {
    id -99
    alg straw
    hash 0
    item srv1ssd weight 0.230
    item srv2ssd weight 0.230
    item srv3ssd weight 0.230
    item srv4ssd weight 0.230
}


# rules
rule data {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
rule metadata {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
rule sata {
    ruleset 2
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
rule ssd {
    ruleset 3
    type replicated
    min_size 1
    max_size 10
    step take ssd
    step chooseleaf firstn 0 type host
    step emit
}

# end crush map
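For completeness, the split only takes effect once pools are pointed at the rules. On a cluster of this era that would look something like the following (pool names and PG counts here are examples, not from the original setup):

```
# Create a pool per tier and attach it to the matching CRUSH ruleset
# (ruleset numbers from the map above; pool names/PG counts are examples).
ceph osd pool create satapool 512 512
ceph osd pool set satapool crush_ruleset 2

ceph osd pool create ssdpool 128 128
ceph osd pool set ssdpool crush_ruleset 3
```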

cyril



[ceph-users] kvm rbd ceph and SGBD

2013-09-24 Thread zorg

Hi,
I want to use Ceph and KVM with rbd to host MySQL and Oracle.
I have already used KVM with iSCSI, but with a database it suffers from I/O limitations.

Is there anyone with good or bad experiences of hosting a DBMS this way?
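For reference, attaching an rbd image to a KVM guest via libvirt typically looks like this (pool, image, monitor host, and secret UUID are all placeholders); the cache mode and the virtio bus are among the knobs that affect database I/O:

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source protocol='rbd' name='rbd/mysql-disk'>
    <host name='mon1.example.com' port='6789'/>
  </source>
  <auth username='libvirt'>
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
  <target dev='vda' bus='virtio'/>
</disk>
```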


Thanks