Re: [ceph-users] Using OS disk (SSD) as journal for OSD

2015-09-12 Thread Stefan Eriksson

Hi

Thanks for the reply.
Some follow-ups.

Den 2015-09-12 kl. 17:30, skrev Christian Balzer:

Hello,

On Sat, 12 Sep 2015 17:11:04 +0200 Stefan Eriksson wrote:


Hi,

I'm reading the documentation about creating new OSDs and I see:

"The foregoing example assumes a disk dedicated to one Ceph OSD Daemon,
and a path to an SSD journal partition. We recommend storing the journal
on a separate drive to maximize throughput. You may dedicate a single
drive for the journal too (which may be expensive) or place the journal
on the same disk as the OSD (not recommended as it impairs performance).
In the foregoing example we store the journal on a partitioned solid
state drive." From:
http://ceph.com/docs/master/rados/deployment/ceph-deploy-osd/

So I would like to create my journals on the same SSD as I have my OS
(RAID1). Is it good practice to initialize a new disk with:


Which SSDs (model)?
Some SSDs are patently unsuited for OSD journals, while others will have
no issues keeping up with the OS and journal duties.
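A quick way to check is a sync 4k write test with fio, something like this (a
sketch; adjust the device path, and note it writes directly to the device):

  fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
      --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting \
      --name=journal-test

SSDs that are unsuitable as journals tend to collapse to a few hundred IOPS or
less on this test.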


I'm using SSDSA2BZ10 in my lab, in hardware raid1.



The sda below suggests that your RAID1 is a HW one?
That's a bad choice on two counts: a HW RAID can't be TRIM'ed, last I
checked.
And you would get a lot more performance out of a software RAID1 with
journals on both SSDs.
A RAID1 still might be OK if you can/want to trade performance for
redundancy.
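For example (a rough sketch, assuming the two SSDs are sda and sdb): mirror only
the OS partitions and leave one set of raw journal partitions on each SSD:

  # sda1/sdb1 -> OS (mirrored); sda2..sdaN / sdb2..sdbN -> raw journal partitions
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

Half of the OSDs would then journal on sda and the other half on sdb.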
  


I can spare some performance for the convenience of using HW RAID on the
OS disk.



ceph-deploy disk zap osdserver1:sdb
ceph-deploy osd prepare osdserver1:sdb:/dev/sda


I'm not a ceph-deploy expert or fan, but I'm pretty sure you will need to
create the partitions beforehand and then assign them accordingly.
And using uuids makes things renumbering proof:

ceph-deploy osd prepare ceph-04:sdb:/dev/disk/by-id/wwn-0x55cd2e404b73d348-part4
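For example, something like this (a sketch; the partition number and size are just
examples, and the typecode shown is what I believe to be the Ceph journal
partition type GUID):

  sgdisk --new=4:0:+5120M --change-name=4:"ceph journal" \
         --typecode=4:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/sda
  ls -l /dev/disk/by-partuuid/    # note the new partition's UUID
  ceph-deploy osd prepare osdserver1:sdb:/dev/disk/by-partuuid/<uuid>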

Christian


I think it should be OK to use just :/dev/sda
(ceph-deploy osd prepare osdserver1:sdb:/dev/sda)

My output is:

[osd03][WARNIN] WARNING:ceph-disk:OSD will not be hot-swappable if 
journal is not the same device as the osd data
[osd03][WARNIN] DEBUG:ceph-disk:Creating journal partition num 4 size 
5120 on /dev/sda
[osd03][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk 
--new=4:0:+5120M --change-name=4:ceph journal 
--partition-guid=4:80836633--8e7d-5cf560e53ef 
--typecode=4:45b09694f-b4b80ceff106 --mbrtogpt -- /dev/sda


So "ceph-deploy osd prepare" seems to recognize the current partitions 
and create a 4th.
Here I ran into bug where the sda did not boot up, but I think its due 
to --mbrtogpt as the sda was MBR (centos 7 defaults to this when 
installed on small disks) and not GPT. So the conversion might have gone 
bad.

So now I will try to force GPT when I install CentOS 7, even on smaller disks.
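Something like this should confirm the label before ceph-deploy touches the disk
(a sketch; the inst.gpt installer option is from memory, so it should be
double-checked):

  parted /dev/sda print | grep 'Partition Table'   # want: gpt, not msdos
  sgdisk --verify /dev/sda                         # sanity-check the GPT structures

Booting the CentOS 7 installer with inst.gpt is supposed to force a GPT label
even on small disks.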

A question remains about ceph-deploy and the "osd prepare" step
working together with a disk that is already used by the OS. Does it
have logic that is sound for this use case? I think it might be a common
design choice to place the journal and OS on the same disk.
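One way I plan to sanity-check the result after "osd prepare" (a sketch, assuming
the standard ceph-disk tooling):

  ceph-disk list                              # shows how each partition was classified
  ls -l /var/lib/ceph/osd/ceph-*/journal      # journal symlink should point at the sda partition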

It would be nice to hear some thoughts about the above.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] CRUSH odd bucket affinity / persistence

2015-09-12 Thread deeepdish
Hello,

I’m having a (strange) issue with OSD bucket persistence / affinity on my test 
cluster..   

The cluster is PoC / test, by no means production.   Consists of a single OSD / 
MON host + another MON running on a KVM VM.  

Out of 12 OSDs I’m trying to get osd.10 and osd.11 to be part of the ssd bucket 
in my CRUSH map.   This works fine when either editing the CRUSH map by hand 
(exporting, decompile, edit, compile, import), or via the ceph osd crush set 
command:

"ceph osd crush set osd.11 0.140 root=ssd”

I’m able to verify that the OSD / MON host and another MON I have running see 
the same CRUSH map. 

After rebooting OSD / MON host, both osd.10 and osd.11 become part of the 
default bucket.   How can I ensure that OSDs persist in their configured 
buckets?

Here’s my desired CRUSH map.   This is a PoC and by no means production ready.. 

——

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable straw_calc_version 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7 osd.7
device 8 osd.8
device 9 osd.9
device 10 osd.10
device 11 osd.11

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host osdhost {
id -2   # do not change unnecessarily
# weight 36.200
alg straw
hash 0  # rjenkins1
item osd.0 weight 3.620
item osd.1 weight 3.620
item osd.2 weight 3.620
item osd.3 weight 3.620
item osd.4 weight 3.620
item osd.5 weight 3.620
item osd.6 weight 3.620
item osd.7 weight 3.620
item osd.8 weight 3.620
item osd.9 weight 3.620
}
root default {
id -1   # do not change unnecessarily
# weight 36.200
alg straw
hash 0  # rjenkins1
item osdhost weight 36.200
}
host osdhost-ssd {
id -3   # do not change unnecessarily
# weight 0.280
alg straw
hash 0  # rjenkins1
item osd.10 weight 0.140
item osd.11 weight 0.140
}
root ssd {
id -4   # do not change unnecessarily
# weight 0.280
alg straw
hash 0  # rjenkins1
item osdhost-ssd weight 0.280
}

# rules
rule replicated_ruleset {
ruleset 0
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type osd
step emit
}
rule ecpool {
ruleset 1
type erasure
min_size 3
max_size 7
step set_chooseleaf_tries 5
step set_choose_tries 100
step take default
step choose indep 0 type osd
step emit
}
rule cachetier {
ruleset 2
type replicated
min_size 1
max_size 10
step set_chooseleaf_tries 5
step set_choose_tries 100
step take ssd
step chooseleaf firstn 0 type osd
step emit
}

# end crush map

——


ceph osd tree (before reboot)

ID WEIGHT   TYPE NAME   UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-4  0.28000 root ssd  
-30 host osdhost-ssd   
10  0.14000 osd.10   up  1.0  1.0 
11  0.14000 osd.11   up  1.0  1.0 
-1 36.19995 root default  
-2 36.19995 host osdhost   
 0  3.62000 osd.0up  1.0  1.0 
 1  3.62000 osd.1up  1.0  1.0 
 2  3.62000 osd.2up  1.0  1.0 
 3  3.62000 osd.3up  1.0  1.0 
 4  3.62000 osd.4up  1.0  1.0 
 5  3.62000 osd.5up  1.0  1.0 
 6  3.62000 osd.6up  1.0  1.0 
 7  3.62000 osd.7up  1.0  1.0 
 8  3.62000 osd.8up  1.0  1.0 
 9  3.62000 osd.9up  1.0  1.0 


ceph osd tree (after reboot)


[root@osdhost tmp]# ceph osd tree
ID WEIGHT   TYPE NAME   UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-40 root ssd  
-30 host osdhost-ssd   
-1 36.47995 root default  
-2 36.47995 host osdhost   
 0  3.62000 osd.0up  1.0  1.0 
 1  3.62000 osd.1up  1.0  1.0 
 2  3.62000 osd.2up  1.0  1.0 
 3  3.62000 osd.3up  1.0  1.0 
 4  

Re: [ceph-users] CRUSH odd bucket affinity / persistence

2015-09-12 Thread Johannes Formann
Hi,

> I’m having a (strange) issue with OSD bucket persistence / affinity on my 
> test cluster..   
> 
> The cluster is PoC / test, by no means production.   Consists of a single OSD 
> / MON host + another MON running on a KVM VM.  
> 
> Out of 12 OSDs I’m trying to get osd.10 and osd.11 to be part of the ssd 
> bucket in my CRUSH map.   This works fine when either editing the CRUSH map 
> by hand (exporting, decompile, edit, compile, import), or via the ceph osd 
> crush set command:
> 
> "ceph osd crush set osd.11 0.140 root=ssd”
> 
> I’m able to verify that the OSD / MON host and another MON I have running see 
> the same CRUSH map. 
> 
> After rebooting OSD / MON host, both osd.10 and osd.11 become part of the 
> default bucket.   How can I ensure that OSDs persist in their configured 
> buckets?

I guess you have set "osd crush update on start = true"
(http://ceph.com/docs/master/rados/operations/crush-map/) and only the default
„root“ entry.

Either fix the „root“ entry in ceph.conf or set "osd crush update on start =
false".
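For example, something like this in ceph.conf (a sketch; the per-OSD
"osd crush location" lines are an alternative if you want to keep the automatic
placement):

  [osd]
      osd crush update on start = false

  # or, instead, pin the two SSD OSDs explicitly:
  [osd.10]
      osd crush location = root=ssd host=osdhost-ssd
  [osd.11]
      osd crush location = root=ssd host=osdhost-ssd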

greetings

Johannes
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-fuse auto down

2015-09-12 Thread Shinobu Kinjo
So you are using the same version on the other clients?
But only one client has the problem?

Can you provide:

  /sys/class/net/<interface>/statistics/*

Just do:

  tar cvf <file>.tar \
  /sys/class/net/<interface>/statistics/*

Can you hold off on rebooting when the same issue happens next?
No reboot is necessary.

But if you have to reboot, of course you can.

Shinobu

- Original Message -
From: "谷枫" 
To: "Shinobu Kinjo" 
Cc: "ceph-users" 
Sent: Sunday, September 13, 2015 11:30:57 AM
Subject: Re: [ceph-users] ceph-fuse auto down

Yes, when a ceph-fuse process crashes, the mount point is gone and can't be
remounted. Rebooting the server is the only thing I can do.
But the other clients with ceph-fuse mounts are working well; they can write /
read data on them.

ceph-fuse --version
ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)

ceph -s
cluster 0fddc8e0-9e64-4049-902a-2f0f6d531630
 health HEALTH_OK
 monmap e1: 3 mons at {ceph01=
10.3.1.11:6789/0,ceph02=10.3.1.12:6789/0,ceph03=10.3.1.13:6789/0}
election epoch 8, quorum 0,1,2 ceph01,ceph02,ceph03
 mdsmap e29: 1/1/1 up {0=ceph04=up:active}, 1 up:standby
 osdmap e26: 4 osds: 4 up, 4 in
  pgmap v94931: 320 pgs, 3 pools, 90235 MB data, 241 kobjects
289 GB used, 1709 GB / 1999 GB avail
 320 active+clean
  client io 1023 kB/s rd, 1210 kB/s wr, 72 op/s

2015-09-13 10:23 GMT+08:00 Shinobu Kinjo :

> Can you give us package version of ceph-fuse?
>
> > Multi ceph-fuse crash just now today.
>
> Did you just mount filesystem or was there any
> activity on filesystem?
>
>   e.g: writing / reading data
>
> Can you give us output of on cluster side:
>
>   ceph -s
>
> Shinobu
>
> - Original Message -
> From: "谷枫" 
> To: "Shinobu Kinjo" 
> Cc: "ceph-users" 
> Sent: Sunday, September 13, 2015 10:51:35 AM
> Subject: Re: [ceph-users] ceph-fuse auto down
>
> sorry Shinobu,
> I don't understand what's the means what you pasted.
> Multi ceph-fuse crash just now today.
> The ceph-fuse completely unusable for me now.
> Maybe i must change the kernal mount with it.
>
> 2015-09-12 20:08 GMT+08:00 Shinobu Kinjo :
>
> > In _usr_bin_ceph-fuse.0.crash.client2.tar
> >
> > What I'm seeing now is:
> >
> >   3 Date: Sat Sep 12 06:37:47 2015
> >  ...
> >   6 ExecutableTimestamp: 1440614242
> >  ...
> >   7 ProcCmdline: ceph-fuse -k /etc/ceph.new/ceph.client.admin.keyring -m
> > 10.3.1.11,10.3.1.12,10.3.1.13 /grdata
> >  ...
> >  30  7f32de7fe000-7f32deffe000 rw-p  00:00 0
> > [stack:17270]
> >  ...
> > 250  7f341021d000-7f3410295000 r-xp  fd:01 267219
> >/usr/lib/x86_64-linux-gnu/nss/libfreebl3.so
> >  ...
> > 255  7f341049b000-7f341054f000 r-xp  fd:01 266443
> >/usr/lib/x86_64-linux-gnu/libsqlite3.so.0.8.6
> >  ...
> > 260  7f3410754000-7f3410794000 r-xp  fd:01 267222
> >/usr/lib/x86_64-linux-gnu/nss/libsoftokn3.so
> >  ...
> > 266  7f3411197000-7f341119a000 r-xp  fd:01 264953
> >/usr/lib/x86_64-linux-gnu/libplds4.so
> >  ...
> > 271  7f341139f000-7f341159e000 ---p 4000 fd:01 264955
> >/usr/lib/x86_64-linux-gnu/libplc4.so
> >  ...
> > 274  7f34115a-7f34115c5000 r-xp  fd:01 267214
> >/usr/lib/x86_64-linux-gnu/libnssutil3.so
> >  ...
> > 278  7f34117cb000-7f34117ce000 r-xp  fd:01 1189512
> > /lib/x86_64-linux-gnu/libdl-2.19.so
> >  ...
> > 287  7f3411d94000-7f3411daa000 r-xp  fd:01 1179825
> > /lib/x86_64-linux-gnu/libgcc_s.so.1
> >  ...
> > 294  7f34122b-7f3412396000 r-xp  fd:01 266069
> >/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19
> >  ...
> > 458  State: D (disk sleep)
> >  ...
> > 359  VmPeak: 5250648 kB
> > 360  VmSize: 4955592 kB
> >  ...
> >
> > What were you trying to do?
> >
> > Shinobu
> >
> >
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] CRUSH odd bucket affinity / persistence

2015-09-12 Thread deeepdish
Johannes,

Thank you — "osd crush update on start = false” did the trick.   I wasn’t aware 
that ceph has automatic placement logic for OSDs 
(http://permalink.gmane.org/gmane.comp.file-systems.ceph.user/9035 
).
   This brings up a best practice question..   

How is the configuration of OSD hosts with multiple storage types (e.g. 
spinners + flash/ssd), typically implemented in the field from a crush map / 
device location perspective?   Preference is for a scale out design.

In addition to the SSDs which are used for an EC cache tier, I'm also planning a 
5:1 ratio of spinners to SSD for journals.   In this case I want to implement 
availability groups within the OSD host itself.   

e.g. in a 26-drive chassis, there will be 6 SSDs + 20 spinners.   [2 SSDs for 
replicated cache tier, 4 SSDs will create 4 availability groups of 5 spinners 
each]   The idea is to have CRUSH take into account SSD journal failure 
(affecting 5 spinners).   
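One way I'm thinking of expressing that in the CRUSH map is to give each journal
SSD and its five spinners their own bucket, and then separate replicas across
those buckets (a rough sketch; bucket names and weights are invented):

chassis jgroup1 {
alg straw
hash 0  # rjenkins1
item osd.0 weight 3.620
item osd.1 weight 3.620
# ... the other spinners behind journal SSD 1
}
# jgroup2..jgroup4 defined the same way, then referenced from the host/root
# bucket instead of the individual OSDs

rule replicated_hdd {
ruleset 3
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type chassis
step emit
}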

Thanks.



> On Sep 12, 2015, at 19:11 , Johannes Formann  wrote:
> 
> Hi,
> 
>> I’m having a (strange) issue with OSD bucket persistence / affinity on my 
>> test cluster..   
>> 
>> The cluster is PoC / test, by no means production.   Consists of a single 
>> OSD / MON host + another MON running on a KVM VM.  
>> 
>> Out of 12 OSDs I’m trying to get osd.10 and osd.11 to be part of the ssd 
>> bucket in my CRUSH map.   This works fine when either editing the CRUSH map 
>> by hand (exporting, decompile, edit, compile, import), or via the ceph osd 
>> crush set command:
>> 
>> "ceph osd crush set osd.11 0.140 root=ssd”
>> 
>> I’m able to verify that the OSD / MON host and another MON I have running 
>> see the same CRUSH map. 
>> 
>> After rebooting OSD / MON host, both osd.10 and osd.11 become part of the 
>> default bucket.   How can I ensure that OSDs persist in their configured 
>> buckets?
> 
> I guess you have set "osd crush update on start = true" 
> (http://ceph.com/docs/master/rados/operations/crush-map/ ) and only the 
> default „root“-entry.
> 
> Either fix the „root“-Entry in the ceph.conf or set osd crush update on start 
> = false.
> 
> greetings
> 
> Johannes

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] 2 replications, flapping can not stop for a very long time

2015-09-12 Thread zhao.ming...@h3c.com
Hi,
I have been testing the reliability of Ceph recently, and I have run into the flapping problem.
I have 2 replicas, and I cut off the cluster network; now the flapping will not
stop. I have waited more than 30 minutes, but the status of the OSDs is still not stable.
I want to know: when the monitor receives reports from OSDs, how does it decide to mark
an OSD down?
(reports && reporters && grace) need to satisfy some conditions; how is the
grace calculated?
And how long will the flapping last? Must the flapping be stopped by
configuration, such as marking an OSD lost?
Can someone help me?
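The options I have found that seem to be involved are roughly the following (a
sketch with Hammer-era names and defaults, so please correct me if these are wrong):

  [mon]
      mon osd min down reporters = 1          # distinct OSDs that must report a peer down
      mon osd min down reports = 3            # total failure reports required
      mon osd adjust heartbeat grace = true   # grace is scaled by the observed laggy history
  [osd]
      osd heartbeat grace = 20                # base grace period in seconds

  # to stop the flapping manually while the cluster network is cut:
  ceph osd set nodown     # and later: ceph osd unset nodown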
Thanks~
-
This e-mail and its attachments contain confidential information from H3C, 
which is
intended only for the person or entity whose address is listed above. Any use 
of the
information contained herein in any way (including, but not limited to, total 
or partial
disclosure, reproduction, or dissemination) by persons other than the intended
recipient(s) is prohibited. If you receive this e-mail in error, please notify 
the sender
by phone or email immediately and delete it!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-fuse auto down

2015-09-12 Thread 谷枫
Sorry Shinobu,
I don't understand what the output you pasted means.
Multiple ceph-fuse instances crashed just today.
ceph-fuse is completely unusable for me now.
Maybe I must switch to the kernel mount instead.

2015-09-12 20:08 GMT+08:00 Shinobu Kinjo :

> In _usr_bin_ceph-fuse.0.crash.client2.tar
>
> What I'm seeing now is:
>
>   3 Date: Sat Sep 12 06:37:47 2015
>  ...
>   6 ExecutableTimestamp: 1440614242
>  ...
>   7 ProcCmdline: ceph-fuse -k /etc/ceph.new/ceph.client.admin.keyring -m
> 10.3.1.11,10.3.1.12,10.3.1.13 /grdata
>  ...
>  30  7f32de7fe000-7f32deffe000 rw-p  00:00 0
> [stack:17270]
>  ...
> 250  7f341021d000-7f3410295000 r-xp  fd:01 267219
>/usr/lib/x86_64-linux-gnu/nss/libfreebl3.so
>  ...
> 255  7f341049b000-7f341054f000 r-xp  fd:01 266443
>/usr/lib/x86_64-linux-gnu/libsqlite3.so.0.8.6
>  ...
> 260  7f3410754000-7f3410794000 r-xp  fd:01 267222
>/usr/lib/x86_64-linux-gnu/nss/libsoftokn3.so
>  ...
> 266  7f3411197000-7f341119a000 r-xp  fd:01 264953
>/usr/lib/x86_64-linux-gnu/libplds4.so
>  ...
> 271  7f341139f000-7f341159e000 ---p 4000 fd:01 264955
>/usr/lib/x86_64-linux-gnu/libplc4.so
>  ...
> 274  7f34115a-7f34115c5000 r-xp  fd:01 267214
>/usr/lib/x86_64-linux-gnu/libnssutil3.so
>  ...
> 278  7f34117cb000-7f34117ce000 r-xp  fd:01 1189512
> /lib/x86_64-linux-gnu/libdl-2.19.so
>  ...
> 287  7f3411d94000-7f3411daa000 r-xp  fd:01 1179825
> /lib/x86_64-linux-gnu/libgcc_s.so.1
>  ...
> 294  7f34122b-7f3412396000 r-xp  fd:01 266069
>/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19
>  ...
> 458  State: D (disk sleep)
>  ...
> 359  VmPeak: 5250648 kB
> 360  VmSize: 4955592 kB
>  ...
>
> What were you trying to do?
>
> Shinobu
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-fuse auto down

2015-09-12 Thread Shinobu Kinjo
Can you give us package version of ceph-fuse?

> Multi ceph-fuse crash just now today.

Did you just mount filesystem or was there any
activity on filesystem?

  e.g: writing / reading data

Can you give us output of on cluster side:

  ceph -s

Shinobu

- Original Message -
From: "谷枫" 
To: "Shinobu Kinjo" 
Cc: "ceph-users" 
Sent: Sunday, September 13, 2015 10:51:35 AM
Subject: Re: [ceph-users] ceph-fuse auto down

Sorry Shinobu,
I don't understand what the output you pasted means.
Multiple ceph-fuse instances crashed just today.
ceph-fuse is completely unusable for me now.
Maybe I must switch to the kernel mount instead.

2015-09-12 20:08 GMT+08:00 Shinobu Kinjo :

> In _usr_bin_ceph-fuse.0.crash.client2.tar
>
> What I'm seeing now is:
>
>   3 Date: Sat Sep 12 06:37:47 2015
>  ...
>   6 ExecutableTimestamp: 1440614242
>  ...
>   7 ProcCmdline: ceph-fuse -k /etc/ceph.new/ceph.client.admin.keyring -m
> 10.3.1.11,10.3.1.12,10.3.1.13 /grdata
>  ...
>  30  7f32de7fe000-7f32deffe000 rw-p  00:00 0
> [stack:17270]
>  ...
> 250  7f341021d000-7f3410295000 r-xp  fd:01 267219
>/usr/lib/x86_64-linux-gnu/nss/libfreebl3.so
>  ...
> 255  7f341049b000-7f341054f000 r-xp  fd:01 266443
>/usr/lib/x86_64-linux-gnu/libsqlite3.so.0.8.6
>  ...
> 260  7f3410754000-7f3410794000 r-xp  fd:01 267222
>/usr/lib/x86_64-linux-gnu/nss/libsoftokn3.so
>  ...
> 266  7f3411197000-7f341119a000 r-xp  fd:01 264953
>/usr/lib/x86_64-linux-gnu/libplds4.so
>  ...
> 271  7f341139f000-7f341159e000 ---p 4000 fd:01 264955
>/usr/lib/x86_64-linux-gnu/libplc4.so
>  ...
> 274  7f34115a-7f34115c5000 r-xp  fd:01 267214
>/usr/lib/x86_64-linux-gnu/libnssutil3.so
>  ...
> 278  7f34117cb000-7f34117ce000 r-xp  fd:01 1189512
> /lib/x86_64-linux-gnu/libdl-2.19.so
>  ...
> 287  7f3411d94000-7f3411daa000 r-xp  fd:01 1179825
> /lib/x86_64-linux-gnu/libgcc_s.so.1
>  ...
> 294  7f34122b-7f3412396000 r-xp  fd:01 266069
>/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19
>  ...
> 458  State: D (disk sleep)
>  ...
> 359  VmPeak: 5250648 kB
> 360  VmSize: 4955592 kB
>  ...
>
> What were you trying to do?
>
> Shinobu
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-fuse auto down

2015-09-12 Thread 谷枫
Yes, when a ceph-fuse process crashes, the mount point is gone and can't be
remounted. Rebooting the server is the only thing I can do.
But the other clients with ceph-fuse mounts are working well; they can write /
read data on them.

ceph-fuse --version
ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)

ceph -s
cluster 0fddc8e0-9e64-4049-902a-2f0f6d531630
 health HEALTH_OK
 monmap e1: 3 mons at {ceph01=
10.3.1.11:6789/0,ceph02=10.3.1.12:6789/0,ceph03=10.3.1.13:6789/0}
election epoch 8, quorum 0,1,2 ceph01,ceph02,ceph03
 mdsmap e29: 1/1/1 up {0=ceph04=up:active}, 1 up:standby
 osdmap e26: 4 osds: 4 up, 4 in
  pgmap v94931: 320 pgs, 3 pools, 90235 MB data, 241 kobjects
289 GB used, 1709 GB / 1999 GB avail
 320 active+clean
  client io 1023 kB/s rd, 1210 kB/s wr, 72 op/s

2015-09-13 10:23 GMT+08:00 Shinobu Kinjo :

> Can you give us package version of ceph-fuse?
>
> > Multi ceph-fuse crash just now today.
>
> Did you just mount filesystem or was there any
> activity on filesystem?
>
>   e.g: writing / reading data
>
> Can you give us output of on cluster side:
>
>   ceph -s
>
> Shinobu
>
> - Original Message -
> From: "谷枫" 
> To: "Shinobu Kinjo" 
> Cc: "ceph-users" 
> Sent: Sunday, September 13, 2015 10:51:35 AM
> Subject: Re: [ceph-users] ceph-fuse auto down
>
> sorry Shinobu,
> I don't understand what's the means what you pasted.
> Multi ceph-fuse crash just now today.
> The ceph-fuse completely unusable for me now.
> Maybe i must change the kernal mount with it.
>
> 2015-09-12 20:08 GMT+08:00 Shinobu Kinjo :
>
> > In _usr_bin_ceph-fuse.0.crash.client2.tar
> >
> > What I'm seeing now is:
> >
> >   3 Date: Sat Sep 12 06:37:47 2015
> >  ...
> >   6 ExecutableTimestamp: 1440614242
> >  ...
> >   7 ProcCmdline: ceph-fuse -k /etc/ceph.new/ceph.client.admin.keyring -m
> > 10.3.1.11,10.3.1.12,10.3.1.13 /grdata
> >  ...
> >  30  7f32de7fe000-7f32deffe000 rw-p  00:00 0
> > [stack:17270]
> >  ...
> > 250  7f341021d000-7f3410295000 r-xp  fd:01 267219
> >/usr/lib/x86_64-linux-gnu/nss/libfreebl3.so
> >  ...
> > 255  7f341049b000-7f341054f000 r-xp  fd:01 266443
> >/usr/lib/x86_64-linux-gnu/libsqlite3.so.0.8.6
> >  ...
> > 260  7f3410754000-7f3410794000 r-xp  fd:01 267222
> >/usr/lib/x86_64-linux-gnu/nss/libsoftokn3.so
> >  ...
> > 266  7f3411197000-7f341119a000 r-xp  fd:01 264953
> >/usr/lib/x86_64-linux-gnu/libplds4.so
> >  ...
> > 271  7f341139f000-7f341159e000 ---p 4000 fd:01 264955
> >/usr/lib/x86_64-linux-gnu/libplc4.so
> >  ...
> > 274  7f34115a-7f34115c5000 r-xp  fd:01 267214
> >/usr/lib/x86_64-linux-gnu/libnssutil3.so
> >  ...
> > 278  7f34117cb000-7f34117ce000 r-xp  fd:01 1189512
> > /lib/x86_64-linux-gnu/libdl-2.19.so
> >  ...
> > 287  7f3411d94000-7f3411daa000 r-xp  fd:01 1179825
> > /lib/x86_64-linux-gnu/libgcc_s.so.1
> >  ...
> > 294  7f34122b-7f3412396000 r-xp  fd:01 266069
> >/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19
> >  ...
> > 458  State: D (disk sleep)
> >  ...
> > 359  VmPeak: 5250648 kB
> > 360  VmSize: 4955592 kB
> >  ...
> >
> > What were you trying to do?
> >
> > Shinobu
> >
> >
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-fuse auto down

2015-09-12 Thread 谷枫
 _usr_bin_ceph-fuse.0.crash.client1.tar.gz

 _usr_bin_ceph-fuse.0.crash.client2.tar.gz

Hi Shinobu,
I checked /var/log/dmesg carefully again, but did not find any useful message
about the ceph-fuse crash.
So I have attached the two _usr_bin_ceph-fuse.0.crash files from the two clients.
Please let me know if you want any other log. Thank you very much!

2015-09-12 13:01 GMT+08:00 Shinobu Kinjo :

> Ah, you are using ubuntu, sorry for that.
> How about:
>
>   /var/log/dmesg
>
> I believe you can attach file not paste.
> Pasting a bunch of logs would not be good for me -;
>
> And when did you notice that cephfs was hung?
>
> Shinobu
>
> - Original Message -
> From: "谷枫" 
> To: "Shinobu Kinjo" 
> Cc: "ceph-users" 
> Sent: Saturday, September 12, 2015 1:50:05 PM
> Subject: Re: [ceph-users] ceph-fuse auto down
>
> hi,Shinobu
> There is no /var/log/messages on my system but i saw the /var/log/syslog
> and no useful messages be found.
> I discover the /var/crash/_usr_bin_ceph-fuse.0.crash with grep the "fuse"
> on the system.
> Below is the message in it :
> ProcStatus:
>  Name:  ceph-fuse
>  State: D (disk sleep)
>  Tgid:  2903
>  Ngid:  0
>  Pid:   2903
>  PPid:  1
>  TracerPid: 0
>  Uid:   0   0   0   0
>  Gid:   0   0   0   0
>  FDSize:64
>  Groups:0
>  VmPeak: 7428552 kB
>  VmSize: 6838728 kB
>  VmLck:0 kB
>  VmPin:0 kB
>  VmHWM:  1175864 kB
>  VmRSS:   343116 kB
>  VmData: 6786232 kB
>  VmStk:  136 kB
>  VmExe: 5628 kB
>  VmLib: 7456 kB
>  VmPTE: 3404 kB
>  VmSwap:   0 kB
>  Threads:   37
>  SigQ:  1/64103
>  SigPnd:
>  ShdPnd:
>  SigBlk:1000
>  SigIgn:1000
>  SigCgt:0001c18040eb
>  CapInh:
>  CapPrm:003f
>  CapEff:003f
>  CapBnd:003f
>  Seccomp:   0
>  Cpus_allowed:  
>  Cpus_allowed_list: 0-15
>  Mems_allowed:  ,0001
>  Mems_allowed_list: 0
>  voluntary_ctxt_switches:   25
>  nonvoluntary_ctxt_switches:2
> Signal: 11
> Uname: Linux 3.19.0-28-generic x86_64
> UserGroups:
> CoreDump: base64
>
> Is this useful information?
>
>
> 2015-09-12 12:33 GMT+08:00 Shinobu Kinjo :
>
> > There should be some complains in /var/log/messages.
> > Can you attach?
> >
> > Shinobu
> >
> > - Original Message -
> > From: "谷枫" 
> > To: "ceph-users" 
> > Sent: Saturday, September 12, 2015 1:30:49 PM
> > Subject: [ceph-users] ceph-fuse auto down
> >
> > Hi,all
> > My cephfs cluster deploy on three nodes with Ceph Hammer 0.94.3 on Ubuntu
> > 14.04 the kernal version is 3.19.0.
> >
> > I mount the cephfs with ceph-fuse on 9 clients,but some of them
> (ceph-fuse
> > process) auto down sometimes and i can't find the reason seems like there
> > is no other logs can be found except this file
> > /var/log/ceph/ceph-client.admin.log that without useful messages for me.
> >
> > When the ceph-fuse down . The mount driver is gone.
> > How can i find the reason of this problem. Can some guys give me good
> > ideas?
> >
> > Regards
> >
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] RGW Keystone interaction (was Ceph.conf)

2015-09-12 Thread Abhishek L
On Thu, Sep 10, 2015 at 3:27 PM, Shinobu Kinjo  wrote:
> Thank you for letting me know your thought, Abhishek!!
>
>
> > The Ceph Object Gateway will query Keystone periodically
> > for a list of revoked tokens. These requests are encoded
> > and signed. Also, Keystone may be configured to provide
> > self-signed tokens, which are also encoded and signed.
>

Looked a bit more into this: the Swift APIs seem to support the use
of an admin tenant, user & token for validating the bearer token,
similar to other OpenStack services, which use service tenant
credentials for authenticating.
Though it needs documentation, the configurables `rgw keystone admin
tenant`, `rgw keystone admin user` and `rgw keystone admin password`
make this possible, so as to avoid configuring the keystone shared
admin password completely.
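For reference, a sketch of the rgw section this refers to (section name and
values are only examples):

  [client.radosgw.gateway]
      rgw keystone url = http://keystone.example.com:35357
      rgw keystone admin user = rgw
      rgw keystone admin password = <secret>
      rgw keystone admin tenant = service
      rgw keystone accepted roles = Member, admin
      # instead of the shared secret:
      # rgw keystone admin token = <shared admin token>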

S3 APIs with keystone seem to be a bit different: apparently the
s3tokens interface does seem to allow authenticating without an
`X-Auth-Token` in the headers, and validates based on the access key and
secret key provided to it. So basically not configuring
`rgw_keystone_admin_password` seems to work; can you also check if this
is the case for you?

> This is completely absolutely out of scope of my original
> question.
>
> But I would like to ask you if above implementation that
> **periodically** talks to keystone with tokens is really
> secure or not.
>
> I'm just asking you. Because I'm just thinking of keysto-
> ne federation.
> But you can ignore me anyhow or point out anything to me -;
> Shinobu
>
> - Original Message -
> From: "Abhishek L" 
> To: "Shinobu Kinjo" 
> Cc: "Gregory Farnum" , "ceph-users" 
> , "ceph-devel" 
> Sent: Thursday, September 10, 2015 6:35:31 PM
> Subject: Re: [ceph-users] Ceph.conf
>
> On Thu, Sep 10, 2015 at 2:51 PM, Shinobu Kinjo  wrote:
>> Thank you for your really really quick reply, Greg.
>>
>>  > Yes. A bunch shouldn't ever be set by users.
>>
>>  Anyhow, this is one of my biggest concern right now -;
>>
>> rgw_keystone_admin_password =
>>
>>
>> MUST not be there.
>
>
> I know the dangers of this (ie keystone admin password being visible);
> but isn't this already visible in ceph/radosgw configuration file as
> well if you configure keystone.[1]
>
> [1]: 
> http://ceph.com/docs/master/radosgw/keystone/#integrating-with-openstack-keystone
>
>> Shinobu
>>
>> - Original Message -
>> From: "Gregory Farnum" 
>> To: "Shinobu Kinjo" 
>> Cc: "ceph-users" , "ceph-devel" 
>> 
>> Sent: Thursday, September 10, 2015 5:57:52 PM
>> Subject: Re: [ceph-users] Ceph.conf
>>
>> On Thu, Sep 10, 2015 at 9:44 AM, Shinobu Kinjo  wrote:
>>> Hello,
>>>
>>> I'm seeing 859 parameters in the output of:
>>>
>>> $ ./ceph --show-config | wc -l
>>> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
>>> 859
>>>
>>> In:
>>>
>>> $ ./ceph --version
>>> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
>>> ceph version 9.0.2-1454-g050e1c5 
>>> (050e1c5c7471f8f237d9fa119af98c1efa9a8479)
>>>
>>> Since I'm quite new to Ceph, so my question is:
>>>
>>> Where can I know what each parameter exactly mean?
>>>
>>> I am probably right. Some parameters are just for tes-
>>> ting purpose.
>>
>> Yes. A bunch shouldn't ever be set by users. A lot of the ones that
>> should be are described as part of various operations in
>> ceph.com/docs, but I don't know which ones of interest are missing
>> from there. It's not very discoverable right now, unfortunately.
>> -Greg
>>
>>>
>>> Thank you for your help in advance.
>>>
>>> Shinobu
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-fuse auto down

2015-09-12 Thread Shinobu Kinjo
Thank you for the log archives.

I went to the dentist -;
Please do not forget to CC ceph-users from now on, because there are a bunch of
really **awesome** guys there;

Can you re-attach the log files again so that they can see them?

Shinobu

- Original Message -
From: "谷枫" 
To: "Shinobu Kinjo" 
Sent: Saturday, September 12, 2015 5:32:31 PM
Subject: Re: [ceph-users] ceph-fuse auto down

The mds log shows that the client session timed out and then the mds closed the
socket, right?
I think this happened after the ceph-fuse crash. So the root cause is the
ceph-fuse crash.
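(The MDS-side timeouts that seem to be involved are roughly these; defaults from
memory, so worth double-checking:)

  [mds]
      mds session timeout = 60      # seconds without renewal before a client session is considered stale
      mds session autoclose = 300   # seconds before a laggy client is evicted and its session closed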

2015-09-12 15:09 GMT+08:00 谷枫 :

> Hi,Shinobu
> I notice some useful message in the mds-log.
>
> 2015-09-12 14:29 GMT+08:00 谷枫 :
>
>> ​
>>  _usr_bin_ceph-fuse.0.crash.client1.tar.gz
>> 
>> ​​
>>  _usr_bin_ceph-fuse.0.crash.client2.tar.gz
>> 
>> ​Hi,Shinobu
>> I check the /var/log/dmesg carefully again, but not find the useful
>> message about ceph-fuse crash.
>> So i attach two _usr_bin_ceph-fuse.0.crash files on two clients to you.
>> Please let me know if you want more about other log. Thank you very much!
>>
>> 2015-09-12 13:01 GMT+08:00 Shinobu Kinjo :
>>
>>> Ah, you are using ubuntu, sorry for that.
>>> How about:
>>>
>>>   /var/log/dmesg
>>>
>>> I believe you can attach file not paste.
>>> Pasting a bunch of logs would not be good for me -;
>>>
>>> And when did you notice that cephfs was hung?
>>>
>>> Shinobu
>>>
>>> - Original Message -
>>> From: "谷枫" 
>>> To: "Shinobu Kinjo" 
>>> Cc: "ceph-users" 
>>> Sent: Saturday, September 12, 2015 1:50:05 PM
>>> Subject: Re: [ceph-users] ceph-fuse auto down
>>>
>>> hi,Shinobu
>>> There is no /var/log/messages on my system but i saw the /var/log/syslog
>>> and no useful messages be found.
>>> I discover the /var/crash/_usr_bin_ceph-fuse.0.crash with grep the "fuse"
>>> on the system.
>>> Below is the message in it :
>>> ProcStatus:
>>>  Name:  ceph-fuse
>>>  State: D (disk sleep)
>>>  Tgid:  2903
>>>  Ngid:  0
>>>  Pid:   2903
>>>  PPid:  1
>>>  TracerPid: 0
>>>  Uid:   0   0   0   0
>>>  Gid:   0   0   0   0
>>>  FDSize:64
>>>  Groups:0
>>>  VmPeak: 7428552 kB
>>>  VmSize: 6838728 kB
>>>  VmLck:0 kB
>>>  VmPin:0 kB
>>>  VmHWM:  1175864 kB
>>>  VmRSS:   343116 kB
>>>  VmData: 6786232 kB
>>>  VmStk:  136 kB
>>>  VmExe: 5628 kB
>>>  VmLib: 7456 kB
>>>  VmPTE: 3404 kB
>>>  VmSwap:   0 kB
>>>  Threads:   37
>>>  SigQ:  1/64103
>>>  SigPnd:
>>>  ShdPnd:
>>>  SigBlk:1000
>>>  SigIgn:1000
>>>  SigCgt:0001c18040eb
>>>  CapInh:
>>>  CapPrm:003f
>>>  CapEff:003f
>>>  CapBnd:003f
>>>  Seccomp:   0
>>>  Cpus_allowed:  
>>>  Cpus_allowed_list: 0-15
>>>  Mems_allowed:  ,0001
>>>  Mems_allowed_list: 0
>>>  voluntary_ctxt_switches:   25
>>>  nonvoluntary_ctxt_switches:2
>>> Signal: 11
>>> Uname: Linux 3.19.0-28-generic x86_64
>>> UserGroups:
>>> CoreDump: base64
>>>
>>> Is this useful infomations?
>>>
>>>
>>> 2015-09-12 12:33 GMT+08:00 Shinobu Kinjo :
>>>
>>> > There should be some complains in /var/log/messages.
>>> > Can you attach?
>>> >
>>> > Shinobu
>>> >
>>> > - Original Message -
>>> > From: "谷枫" 
>>> > To: "ceph-users" 
>>> > Sent: Saturday, September 12, 2015 1:30:49 PM
>>> > Subject: [ceph-users] ceph-fuse auto down
>>> >
>>> > Hi,all
>>> > My cephfs cluster deploy on three nodes with Ceph Hammer 0.94.3 on
>>> Ubuntu
>>> > 14.04 the kernal version is 3.19.0.
>>> >
>>> > I mount the cephfs with ceph-fuse on 9 clients,but some of them
>>> (ceph-fuse
>>> > process) auto down sometimes and i can't find the reason seems like
>>> there
>>> > is no other logs can be found except this file
>>> > /var/log/ceph/ceph-client.admin.log that without useful messages for
>>> me.
>>> >
>>> > When the ceph-fuse down . The mount driver is gone.
>>> > How can i find the reason of this problem. Can some guys give me good
>>> > ideas?
>>> >
>>> > Regards
>>> >
>>> >
>>> > ___
>>> > ceph-users mailing list
>>> > ceph-users@lists.ceph.com
>>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>> >
>>>
>>
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-fuse auto down

2015-09-12 Thread 谷枫
Sorry about that.
I re-attach the crash log files and the mds logs.
The mds log shows that the client session timed out and then the mds closed the
socket, right?
I think this happened after the ceph-fuse crash. So the root cause is the
ceph-fuse crash.

 _usr_bin_ceph-fuse.0.crash.client1.tar.gz

 _usr_bin_ceph-fuse.0.crash.client2.tar.gz

2015-09-12 19:10 GMT+08:00 Shinobu Kinjo :

> Thank you for log archives.
>
> I went to dentist -;
> Please do not forget CCing ceph-users from the next because there is a
> bunch of really **awesome** guys;
>
> Can you re-attach log files again so that they see?
>
> Shinobu
>
> - Original Message -
> From: "谷枫" 
> To: "Shinobu Kinjo" 
> Sent: Saturday, September 12, 2015 5:32:31 PM
> Subject: Re: [ceph-users] ceph-fuse auto down
>
> The mds log shows that the client session timeout then mds close the socket
> right?
> I think this happend behind the ceph-fuse crash. So the root cause is the
> ceph-fuse crash .
>
> 2015-09-12 15:09 GMT+08:00 谷枫 :
>
> > Hi,Shinobu
> > I notice some useful message in the mds-log.
> >
> > 2015-09-12 14:29 GMT+08:00 谷枫 :
> >
> >> ​
> >>  _usr_bin_ceph-fuse.0.crash.client1.tar.gz
> >> <
> https://drive.google.com/file/d/0Bw059OTYnFqAbDA2R0RlS3ZHNlU/view?usp=drive_web
> >
> >> ​​
> >>  _usr_bin_ceph-fuse.0.crash.client2.tar.gz
> >> <
> https://drive.google.com/file/d/0Bw059OTYnFqAMXVCWTV6UXVnRjg/view?usp=drive_web
> >
> >> ​Hi,Shinobu
> >> I check the /var/log/dmesg carefully again, but not find the useful
> >> message about ceph-fuse crash.
> >> So i attach two _usr_bin_ceph-fuse.0.crash files on two clients to you.
> >> Please let me know if you want more about other log. Thank you very
> much!
> >>
> >> 2015-09-12 13:01 GMT+08:00 Shinobu Kinjo :
> >>
> >>> Ah, you are using ubuntu, sorry for that.
> >>> How about:
> >>>
> >>>   /var/log/dmesg
> >>>
> >>> I believe you can attach file not paste.
> >>> Pasting a bunch of logs would not be good for me -;
> >>>
> >>> And when did you notice that cephfs was hung?
> >>>
> >>> Shinobu
> >>>
> >>> - Original Message -
> >>> From: "谷枫" 
> >>> To: "Shinobu Kinjo" 
> >>> Cc: "ceph-users" 
> >>> Sent: Saturday, September 12, 2015 1:50:05 PM
> >>> Subject: Re: [ceph-users] ceph-fuse auto down
> >>>
> >>> hi,Shinobu
> >>> There is no /var/log/messages on my system but i saw the
> /var/log/syslog
> >>> and no useful messages be found.
> >>> I discover the /var/crash/_usr_bin_ceph-fuse.0.crash with grep the
> "fuse"
> >>> on the system.
> >>> Below is the message in it :
> >>> ProcStatus:
> >>>  Name:  ceph-fuse
> >>>  State: D (disk sleep)
> >>>  Tgid:  2903
> >>>  Ngid:  0
> >>>  Pid:   2903
> >>>  PPid:  1
> >>>  TracerPid: 0
> >>>  Uid:   0   0   0   0
> >>>  Gid:   0   0   0   0
> >>>  FDSize:64
> >>>  Groups:0
> >>>  VmPeak: 7428552 kB
> >>>  VmSize: 6838728 kB
> >>>  VmLck:0 kB
> >>>  VmPin:0 kB
> >>>  VmHWM:  1175864 kB
> >>>  VmRSS:   343116 kB
> >>>  VmData: 6786232 kB
> >>>  VmStk:  136 kB
> >>>  VmExe: 5628 kB
> >>>  VmLib: 7456 kB
> >>>  VmPTE: 3404 kB
> >>>  VmSwap:   0 kB
> >>>  Threads:   37
> >>>  SigQ:  1/64103
> >>>  SigPnd:
> >>>  ShdPnd:
> >>>  SigBlk:1000
> >>>  SigIgn:1000
> >>>  SigCgt:0001c18040eb
> >>>  CapInh:
> >>>  CapPrm:003f
> >>>  CapEff:003f
> >>>  CapBnd:003f
> >>>  Seccomp:   0
> >>>  Cpus_allowed:  
> >>>  Cpus_allowed_list: 0-15
> >>>  Mems_allowed:  ,0001
> >>>  Mems_allowed_list: 0
> >>>  voluntary_ctxt_switches:   25
> >>>  nonvoluntary_ctxt_switches:2
> >>> Signal: 11
> >>> Uname: Linux 3.19.0-28-generic x86_64
> >>> UserGroups:
> >>> CoreDump: base64
> >>>
> >>> Is this useful infomations?
> >>>
> >>>
> >>> 2015-09-12 12:33 GMT+08:00 Shinobu Kinjo :
> >>>
> >>> > There should be some complains in /var/log/messages.
> >>> > Can you attach?
> >>> >
> >>> > Shinobu
> >>> >
> >>> > - Original Message -
> >>> > From: "谷枫" 
> >>> > To: "ceph-users" 
> >>> > Sent: Saturday, September 12, 2015 1:30:49 PM
> >>> > Subject: [ceph-users] ceph-fuse auto down
> >>> >
> >>> > Hi,all
> >>> > My cephfs cluster deploy on three nodes with Ceph Hammer 0.94.3 on
> >>> Ubuntu
> >>> > 14.04 the kernal version is 3.19.0.
> >>> >
> >>> > I mount the cephfs with ceph-fuse on 9 clients,but some of them
> >>> (ceph-fuse
> >>> > 

Re: [ceph-users] Using OS disk (SSD) as journal for OSD

2015-09-12 Thread Christian Balzer

Hello,

On Sat, 12 Sep 2015 17:11:04 +0200 Stefan Eriksson wrote:

> Hi,
> 
> I'm reading the documentation about creating new OSDs and I see:
> 
> "The foregoing example assumes a disk dedicated to one Ceph OSD Daemon,
> and a path to an SSD journal partition. We recommend storing the journal
> on a separate drive to maximize throughput. You may dedicate a single
> drive for the journal too (which may be expensive) or place the journal
> on the same disk as the OSD (not recommended as it impairs performance).
> In the foregoing example we store the journal on a partitioned solid
> state drive." From:
> http://ceph.com/docs/master/rados/deployment/ceph-deploy-osd/
> 
> So I would like to create my journals on the same SSD as I have my OS
> (RAID1). Is it good practice to initialize a new disk with:
>
Which SSDs (model)?
Some SSDs are patently unsuited for OSD journals, while others will have
no issues keeping up with the OS and journal duties.

The sda below suggests that your RAID1 is a HW one?
That's a bad choice on two counts: a HW RAID can't be TRIM'ed, last I
checked.
And you would get a lot more performance out of a software RAID1 with
journals on both SSDs.
A RAID1 still might be OK if you can/want to trade performance for
redundancy.
 
> ceph-deploy disk zap osdserver1:sdb
> ceph-deploy osd prepare osdserver1:sdb:/dev/sda
> 
I'm not a ceph-deploy expert or fan, but I'm pretty sure you will need to
create the partitions beforehand and then assign them accordingly.
And using uuids makes things renumbering proof:

ceph-deploy osd prepare ceph-04:sdb:/dev/disk/by-id/wwn-0x55cd2e404b73d348-part4

Christian

> parted info on sda is:
> 
> Number  Start   End SizeFile system Name Flags
>  1  1049kB  211MB   210MB   ext4 boot
>  2  211MB   21.2GB  21.0GB  ext4
>  3  21.2GB  29.6GB  8389MB  linux-swap(v1)
> 
> There is enough space for many 5G journal partitions on sda


-- 
Christian BalzerNetwork/Systems Engineer
ch...@gol.com   Global OnLine Japan/Fusion Communications
http://www.gol.com/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph-disk command execute errors

2015-09-12 Thread darko
Hi, I am working on rebuilding a new cluster. I am using Debian wheezy
and http://ceph.com/debian-hammer/ wheezy main.

I get the error below when running "ceph-deploy osd activate" from the
admin node as my ceph user.

Note: I ran into all sorts of weird issues with keyrings and manually copied
the correct keyrings to the nodes to get past the errors.

[node1][WARNIN] ceph-disk: Error: ceph osd create failed: Command
'/usr/bin/ceph' returned non-zero exit status 1:
[node1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk -v
activate --mark-init sysvinit --mount /var/local/osd-node1



best regards,
dg
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Using OS disk (SSD) as journal for OSD

2015-09-12 Thread Stefan Eriksson
Hi,

I'm reading the documentation about creating new OSDs and I see:

"The foregoing example assumes a disk dedicated to one Ceph OSD Daemon, and a 
path to an SSD journal partition. We recommend storing the journal on a 
separate drive to maximize throughput. You may dedicate a single drive for the 
journal too (which may be expensive) or place the journal on the same disk as 
the OSD (not recommended as it impairs performance). In the foregoing example 
we store the journal on a partitioned solid state drive."
From: http://ceph.com/docs/master/rados/deployment/ceph-deploy-osd/

So I would like to create my journals on the same SSD as I have my OS (RAID1).
Is it good practice to initialize a new disk with:

ceph-deploy disk zap osdserver1:sdb
ceph-deploy osd prepare osdserver1:sdb:/dev/sda

parted info on sda is:

Number  Start   End SizeFile system Name Flags
 1  1049kB  211MB   210MB   ext4 boot
 2  211MB   21.2GB  21.0GB  ext4
 3  21.2GB  29.6GB  8389MB  linux-swap(v1)

There is enough space for many 5G journal partitions on sda
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph-fuse auto down

2015-09-12 Thread Shinobu Kinjo
In _usr_bin_ceph-fuse.0.crash.client2.tar

What I'm seeing now is:

  3 Date: Sat Sep 12 06:37:47 2015
 ...
  6 ExecutableTimestamp: 1440614242
 ...
  7 ProcCmdline: ceph-fuse -k /etc/ceph.new/ceph.client.admin.keyring -m 
10.3.1.11,10.3.1.12,10.3.1.13 /grdata
 ...
 30  7f32de7fe000-7f32deffe000 rw-p  00:00 0  
[stack:17270]
 ...
250  7f341021d000-7f3410295000 r-xp  fd:01 267219 
/usr/lib/x86_64-linux-gnu/nss/libfreebl3.so
 ...
255  7f341049b000-7f341054f000 r-xp  fd:01 266443 
/usr/lib/x86_64-linux-gnu/libsqlite3.so.0.8.6
 ...
260  7f3410754000-7f3410794000 r-xp  fd:01 267222 
/usr/lib/x86_64-linux-gnu/nss/libsoftokn3.so
 ...
266  7f3411197000-7f341119a000 r-xp  fd:01 264953 
/usr/lib/x86_64-linux-gnu/libplds4.so
 ...
271  7f341139f000-7f341159e000 ---p 4000 fd:01 264955 
/usr/lib/x86_64-linux-gnu/libplc4.so
 ...
274  7f34115a-7f34115c5000 r-xp  fd:01 267214 
/usr/lib/x86_64-linux-gnu/libnssutil3.so
 ...
278  7f34117cb000-7f34117ce000 r-xp  fd:01 1189512
/lib/x86_64-linux-gnu/libdl-2.19.so
 ...
287  7f3411d94000-7f3411daa000 r-xp  fd:01 1179825
/lib/x86_64-linux-gnu/libgcc_s.so.1
 ...
294  7f34122b-7f3412396000 r-xp  fd:01 266069 
/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19
 ...
458  State: D (disk sleep)
 ...
359  VmPeak: 5250648 kB
360  VmSize: 4955592 kB
 ...

What were you trying to do?

Shinobu

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Query about contribution regarding monitoring of Ceph Object Storage

2015-09-12 Thread pragya jain
Hello all
I am carrying out research in the area of cloud computing in the Department of
CS, University of Delhi. I would like to contribute my research work regarding
monitoring of Ceph Object Storage to the Ceph community.
Please help me by providing the appropriate link or contact through whom I can
find out whether my work is relevant for contribution. Thank you.

Regards
Pragya Jain
Department of Computer Science
University of Delhi
Delhi, India
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com