Re: [ceph-users] How to remove mds from cluster

2015-01-05 Thread debian Only
I use 0.87; there is no mds.0-related config inside ceph.conf.

I ran:
*root@ceph06-vm:~# ceph mds rm 0 mds.ceph06-vm*
*mds gid 0 dne*

2015-01-05 11:15 GMT+07:00 Lindsay Mathieson lindsay.mathie...@gmail.com:

 Did you remove the mds.0 entry from ceph.conf?

 On 5 January 2015 at 14:13, debian Only onlydeb...@gmail.com wrote:

 i have tried ' ceph mds newfs 1 0 --yes-i-really-mean-it'but not fix
 the problem

 2014-12-30 17:42 GMT+07:00 Lindsay Mathieson lindsay.mathie...@gmail.com
 :

  On Tue, 30 Dec 2014 03:11:25 PM debian Only wrote:

  ceph 0.87 , Debian 7.5,   anyone can help ?

 

  2014-12-29 20:03 GMT+07:00 debian Only onlydeb...@gmail.com:

  i want to move mds from one host to another.

 

  how to do it ?

 

  what did i do as below, but ceph health not ok, mds was not removed :

 

  root@ceph06-vm:~# ceph mds rm 0 mds.ceph06-vm

  mds gid 0 dne

 

  root@ceph06-vm:~# ceph health detail

  HEALTH_WARN mds ceph06-vm is laggy

  mds.ceph06-vm at 192.168.123.248:6800/4350 is laggy/unresponsive



 I removed an mds using this guide:




 http://www.sebastien-han.fr/blog/2012/07/04/remove-a-mds-server-from-a-ceph-cluster/



 and ran into your problem, which is also mentioned there.



 resolved it using the guide suggestion:



 $ ceph mds newfs metadata data --yes-i-really-mean-it



 --

 Lindsay

 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





 --
 Lindsay

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How to remove mds from cluster

2015-01-05 Thread debian Only
I tried this myself; with the commands below the cluster has now recovered to HEALTH_OK.
But I do not know exactly why these commands worked. As I understand it, the key is to
fail the MDS rank first and then remove the failed rank:

root@ceph01-vm:~#* ceph mds fail 0*
failed mds.0
root@ceph01-vm:~# ceph -s
cluster 075f1aae-48de-412e-b024-b0f014dbc8cf
 health HEALTH_ERR mds rank 0 has failed; mds cluster is degraded
 monmap e2: 3 mons at {ceph01-vm=
192.168.123.251:6789/0,ceph02-vm=192.168.123.252:6789/0,ceph04-vm=192.168.123.250:6789/0},
election epoch 128, quorum 0,1,2 ceph04-vm,ceph01-vm,ceph02-vm
 mdsmap e68: 0/1/1 up, 1 failed
 osdmap e588: 8 osds: 8 up, 8 in
  pgmap v285967: 2392 pgs, 21 pools, 4990 MB data, 1391 objects
15173 MB used, 2768 GB / 2790 GB avail
2392 active+clean
root@ceph01-vm:~#  *ceph mds rm 0 mds.ceph06-vm  *
mds gid 0 dne
root@ceph01-vm:~# ceph -s
cluster 075f1aae-48de-412e-b024-b0f014dbc8cf
 health HEALTH_ERR mds rank 0 has failed; mds cluster is degraded
 monmap e2: 3 mons at {ceph01-vm=
192.168.123.251:6789/0,ceph02-vm=192.168.123.252:6789/0,ceph04-vm=192.168.123.250:6789/0},
election epoch 128, quorum 0,1,2 ceph04-vm,ceph01-vm,ceph02-vm
 mdsmap e69: 0/1/1 up, 1 failed
 osdmap e588: 8 osds: 8 up, 8 in
  pgmap v285970: 2392 pgs, 21 pools, 4990 MB data, 1391 objects
15173 MB used, 2768 GB / 2790 GB avail
2392 active+clean
root@ceph01-vm:~# *ceph mds newfs 1 0 --yes-i-really-mean-it *
*filesystem 'cephfs' already exists*
root@ceph01-vm:~# ceph -s
cluster 075f1aae-48de-412e-b024-b0f014dbc8cf
 health HEALTH_ERR mds rank 0 has failed; mds cluster is degraded
 monmap e2: 3 mons at {ceph01-vm=
192.168.123.251:6789/0,ceph02-vm=192.168.123.252:6789/0,ceph04-vm=192.168.123.250:6789/0},
election epoch 128, quorum 0,1,2 ceph04-vm,ceph01-vm,ceph02-vm
 mdsmap e70: 0/1/1 up, 1 failed
 osdmap e588: 8 osds: 8 up, 8 in
  pgmap v285973: 2392 pgs, 21 pools, 4990 MB data, 1391 objects
15173 MB used, 2768 GB / 2790 GB avail
2392 active+clean
root@ceph01-vm:~# *ceph mds rmfailed 0*
root@ceph01-vm:~# ceph -s
cluster 075f1aae-48de-412e-b024-b0f014dbc8cf
 health HEALTH_OK
 monmap e2: 3 mons at {ceph01-vm=
192.168.123.251:6789/0,ceph02-vm=192.168.123.252:6789/0,ceph04-vm=192.168.123.250:6789/0},
election epoch 128, quorum 0,1,2 ceph04-vm,ceph01-vm,ceph02-vm
 *mdsmap e71: 0/1/1 up*
 osdmap e588: 8 osds: 8 up, 8 in
  pgmap v286028: 2392 pgs, 21 pools, 4990 MB data, 1391 objects
15174 MB used, 2768 GB / 2790 GB avail
2392 active+clean
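To summarise the sequence that actually cleared the "mds gid 0 dne" error here (a sketch based on this thread, not an official procedure; host names are the ones used above):

  # mark rank 0 as failed, then remove the failed rank
  ceph mds fail 0
  ceph mds rmfailed 0

  # on the old MDS host (ceph06-vm in this thread): stop the ceph-mds daemon
  /etc/init.d/ceph stop mds

  # if ceph.conf carries an [mds.<name>] section for the old host, drop it before
  # deploying the MDS elsewhere, then check that health returns to HEALTH_OK
  ceph -s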

2015-01-05 15:03 GMT+07:00 debian Only onlydeb...@gmail.com:

 [...]



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How to remove mds from cluster

2015-01-04 Thread debian Only
I have tried 'ceph mds newfs 1 0 --yes-i-really-mean-it', but it did not fix
the problem.

2014-12-30 17:42 GMT+07:00 Lindsay Mathieson lindsay.mathie...@gmail.com:

  On Tue, 30 Dec 2014 03:11:25 PM debian Only wrote:

  ceph 0.87 , Debian 7.5,   anyone can help ?

 

  2014-12-29 20:03 GMT+07:00 debian Only onlydeb...@gmail.com:

  i want to move mds from one host to another.

 

  how to do it ?

 

  what did i do as below, but ceph health not ok, mds was not removed :

 

  root@ceph06-vm:~# ceph mds rm 0 mds.ceph06-vm

  mds gid 0 dne

 

  root@ceph06-vm:~# ceph health detail

  HEALTH_WARN mds ceph06-vm is laggy

  mds.ceph06-vm at 192.168.123.248:6800/4350 is laggy/unresponsive



 I removed an mds using this guide:




 http://www.sebastien-han.fr/blog/2012/07/04/remove-a-mds-server-from-a-ceph-cluster/



 and ran into your problem, which is also mentioned there.



 resolved it using the guide suggestion:



 $ ceph mds newfs metadata data --yes-i-really-mean-it



 --

 Lindsay

 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How to remove mds from cluster

2014-12-30 Thread debian Only
ceph 0.87, Debian 7.5. Can anyone help?

2014-12-29 20:03 GMT+07:00 debian Only onlydeb...@gmail.com:

 i want to move mds from one host to another.

 how to do it ?

 what did i do as below, but ceph health not ok, mds was not removed :

 *root@ceph06-vm:~# ceph mds rm 0 mds.ceph06-vm*
 *mds gid 0 dne*

 *root@ceph06-vm:~# ceph health detail*
 *HEALTH_WARN mds ceph06-vm is laggy*
 *mds.ceph06-vm at 192.168.123.248:6800/4350
 http://192.168.123.248:6800/4350 is laggy/unresponsive*
 *root@ceph06-vm:~# ceph mds dump*
 *dumped mdsmap epoch 62*
 *epoch   62*
 *flags   0*
 *created 2014-08-19 20:57:33.736901*
 *modified2014-12-29 04:43:04.907600*
 *tableserver 0*
 *root0*
 *session_timeout 60*
 *session_autoclose   300*
 *max_file_size   1099511627776*
 *last_failure0*
 *last_failure_osd_epoch  567*
 *compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable
 ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds
 uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table}*
 *max_mds 1*
 *in  0*
 *up  {0=2393640}*
 *failed*
 *stopped*
 *data_pools  0*
 *metadata_pool   1*
 *inline_data disabled*
 *2393640:192.168.123.248:6800/4350
 http://192.168.123.248:6800/4350 'ceph06-vm' mds.0.8 up:active seq 6
 laggy since 2014-12-29 04:25:52.307468*
 *root@ceph06-vm:~# ceph mds newfs 1 0 --yes-i-really-mean-it
  filesystem 'cephfs' already exists*


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] How to remove mds from cluster

2014-12-29 Thread debian Only
I want to move the MDS from one host to another.

How do I do it?

Here is what I did, but ceph health is still not OK and the MDS was not removed:

*root@ceph06-vm:~# ceph mds rm 0 mds.ceph06-vm*
*mds gid 0 dne*

*root@ceph06-vm:~# ceph health detail*
*HEALTH_WARN mds ceph06-vm is laggy*
*mds.ceph06-vm at 192.168.123.248:6800/4350
http://192.168.123.248:6800/4350 is laggy/unresponsive*
*root@ceph06-vm:~# ceph mds dump*
*dumped mdsmap epoch 62*
*epoch   62*
*flags   0*
*created 2014-08-19 20:57:33.736901*
*modified2014-12-29 04:43:04.907600*
*tableserver 0*
*root0*
*session_timeout 60*
*session_autoclose   300*
*max_file_size   1099511627776*
*last_failure0*
*last_failure_osd_epoch  567*
*compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable
ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds
uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table}*
*max_mds 1*
*in  0*
*up  {0=2393640}*
*failed*
*stopped*
*data_pools  0*
*metadata_pool   1*
*inline_data disabled*
*2393640:192.168.123.248:6800/4350
http://192.168.123.248:6800/4350 'ceph06-vm' mds.0.8 up:active seq 6
laggy since 2014-12-29 04:25:52.307468*
*root@ceph06-vm:~# ceph mds newfs 1 0 --yes-i-really-mean-it
 filesystem 'cephfs' already exists*
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How to upgrade ceph from Firefly to Giant on Wheezy smothly?

2014-11-17 Thread debian Only
thanks a lot

2014-11-16 1:44 GMT+07:00 Alexandre DERUMIER aderum...@odiso.com:

 simply change your debian repository to giant

 deb http://ceph.com/debian-giant wheezy main


 then

 apt-get update
 apt-get dist-upgrade

 on each node


 then

 /etc/init.d/ceph restart mon

 on each node


 then

 /etc/init.d/ceph restart osd

 on each node
 ...
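Condensed into one per-node sketch (same repository line and init scripts as above; the order follows the Giant release notes: monitors, then OSDs, then MDS/radosgw):

  # /etc/apt/sources.list.d/ceph.list (or wherever the ceph repo line lives):
  #   deb http://ceph.com/debian-giant wheezy main

  apt-get update
  apt-get dist-upgrade            # pulls in the giant packages

  /etc/init.d/ceph restart mon    # monitors first, on every node
  /etc/init.d/ceph restart osd    # then OSDs
  /etc/init.d/ceph restart mds    # then MDS daemons; radosgw is restarted via its own init script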

 - Mail original -

 De: debian Only onlydeb...@gmail.com
 À: ceph-users@lists.ceph.com
 Envoyé: Samedi 15 Novembre 2014 08:10:30
 Objet: [ceph-users] How to upgrade ceph from Firefly to Giant on Wheezy
 smothly?


 Dear all


 i have one Ceph Firefily test cluster on Debian Wheezy too, i want to
 upgrade ceph from Firefly to Giant, could you tell me how to do upgrade ?


 i saw the release notes like below , bu ti do not know how to upgrade,
 could you give me some guide ?




 Upgrade Sequencing
 --

 * If your existing cluster is running a version older than v0.80.x
 Firefly, please first upgrade to the latest Firefly release before
 moving on to Giant . We have not tested upgrades directly from
 Emperor, Dumpling, or older releases.

 We *have* tested:

 * Firefly to Giant
 * Dumpling to Firefly to Giant

 * Please upgrade daemons in the following order:

 #. Monitors
 #. OSDs
 #. MDSs and/or radosgw

 Note that the relative ordering of OSDs and monitors should not matter, but
 we primarily tested upgrading monitors first.

 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] How to upgrade ceph from Firefly to Giant on Wheezy smothly?

2014-11-14 Thread debian Only
Dear all

I have one Ceph Firefly test cluster on Debian Wheezy and I want to upgrade it from
Firefly to Giant. Could you tell me how to do the upgrade?

I saw the release notes quoted below, but I do not know how to proceed. Could you give
me some guidance?






















*Upgrade Sequencing*

* If your existing cluster is running a version older than v0.80.x
  Firefly, please first upgrade to the latest Firefly release before
  moving on to Giant. We have not tested upgrades directly from
  Emperor, Dumpling, or older releases.

  We *have* tested:

  * Firefly to Giant
  * Dumpling to Firefly to Giant

* Please upgrade daemons in the following order:

  #. Monitors
  #. OSDs
  #. MDSs and/or radosgw

Note that the relative ordering of OSDs and monitors should not matter,
but we primarily tested upgrading monitors first.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] v0.87 Giant released

2014-11-12 Thread debian Only
Dear expert

could you provide some guidance on upgrading Ceph from Firefly to Giant?

many thanks !

2014-10-30 15:37 GMT+07:00 Joao Eduardo Luis joao.l...@inktank.com:

 On 10/30/2014 05:54 AM, Sage Weil wrote:

 On Thu, 30 Oct 2014, Nigel Williams wrote:

 On 30/10/2014 8:56 AM, Sage Weil wrote:

 * *Degraded vs misplaced*: the Ceph health reports from 'ceph -s' and
 related commands now make a distinction between data that is
 degraded (there are fewer than the desired number of copies) and
 data that is misplaced (stored in the wrong location in the
 cluster).


 Is someone able to briefly describe how/why misplaced happens please? Is it
 repaired eventually? I've not seen misplaced (yet).


 Sure.  An easy way to get misplaced objects is to do 'ceph osd
 out N' on an OSD.  Nothing is down, we still have as many copies
 as we had before, but Ceph now wants to move them somewhere
 else. Starting with giant, you will see the misplaced % in 'ceph -s' and
 not degraded.
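A quick way to see the new distinction on a test cluster (the OSD id is arbitrary):

  ceph osd out 3    # copies still exist, but CRUSH now wants them elsewhere
  ceph -s           # on giant this shows a misplaced %, not degraded
  ceph osd in 3     # bring it back; the misplaced objects migrate home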

leveldb_write_buffer_size = 32*1024*1024  = 33554432  // 32MB
   leveldb_cache_size= 512*1024*1204 = 536870912 // 512MB


 I noticed the typo, wondered about the code, but I'm not seeing the same
 values anyway?

 https://github.com/ceph/ceph/blob/giant/src/common/config_opts.h

 OPTION(leveldb_write_buffer_size, OPT_U64, 8 *1024*1024) // leveldb
 write
 buffer size
 OPTION(leveldb_cache_size, OPT_U64, 128 *1024*1024) // leveldb cache size


 Hmm!  Not sure where that 32MB number came from.  I'll fix it, thanks!


 Those just happen to be the values used on the monitors (in ceph_mon.cc).
 Maybe that's where the mix up came from. :)

   -Joao


 --
 Joao Eduardo Luis
 Software Engineer | http://inktank.com | http://ceph.com

 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] One Mon log huge and this Mon down often

2014-08-24 Thread debian Only
The main cause of the mon going down is the huge log.
I have set debug paxos = 0, and I am watching it now.

Before setting this:
# ceph daemon mon.ceph01-vm config get debug_mon
{ "debug_mon": "1\/5"}
# ceph daemon mon.ceph01-vm config get debug_ms
{ "debug_ms": "0\/5"}
# ceph daemon mon.ceph01-vm config get debug_paxos
{ "debug_paxos": "1\/5"}

After setting this:
# ceph daemon mon.ceph01-vm config get debug_mon
{ "debug_mon": "1\/5"}
# ceph daemon mon.ceph01-vm config get debug_ms
{ "debug_ms": "0\/5"}
# ceph daemon mon.ceph01-vm config get debug_paxos
{ "debug_paxos": "0\/0"}
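For reference, the same levels can also be pushed to a monitor at runtime (this is the injectargs form Joao suggests elsewhere in this thread) and pinned in ceph.conf so a restart does not bring the verbose paxos logging back; the [mon] section below is illustrative:

  ceph tell mon.ceph01-vm injectargs '--debug-mon 1/5 --debug-ms 0/5 --debug-paxos 0/0'

  # persistent equivalent in ceph.conf on the monitor hosts:
  #   [mon]
  #       debug mon = 1/5
  #       debug ms = 0/5
  #       debug paxos = 0/0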



2014-08-24 18:58 GMT+07:00 Joao Eduardo Luis joao.l...@inktank.com:

 On 08/24/2014 01:57 AM, debian Only wrote:

 this is happen i use *ceph-deploy create ceph01-vm ceph02-vm ceph04-vm
 *to create 3 Mons member.

 now every 10 hours, one  Mon will down.   every time have this error,
   some time the hardisk have enough space left,such as 30G.

 i deployed Ceph before,  only create one Mon at first step *ceph-deploy
 create ceph01-vm ,  and then ceph-deploy mon add ceph02-vm, *not meet

 this problem.

 i do not know why ?


 Your monitor shutdown because the disk the monitor is sitting on has
 dropped to (or below) 5% of available disk space.  This is meant to prevent
 the monitor from running out of disk space and be unable to store critical
 cluster information.  5% is a rough estimate, which may be adequate for
 some disks, but may be either too small or too large for small disks and
 large disks respectively.  This value can be adjusted if you feel like you
 need to, using the 'mon_data_avail_crit' option (which defaults to 5, as in
 5%, but can be adjusted to whatever suits you best).
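As a concrete illustration of the knob Joao describes, the threshold could be lowered either at runtime (assuming the option is injectable on this release) or in ceph.conf; the 2% value is only an example, not a recommendation:

  ceph tell mon.ceph01-vm injectargs '--mon-data-avail-crit 2'

  # or persistently, in ceph.conf on the monitor hosts:
  #   [mon]
  #       mon data avail crit = 2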

 The big problem here however seems to be that you're running out of space
 due to huge monitor logs. Is that it?

 If so, I would ask you to run the following commands and share the results:

 ceph daemon mon.* config get debug_mon
 ceph daemon mon.* config get debug_ms
 ceph daemon mon.* config get debug_paxos

   -Joao


 2014-08-23 10:19:43.910650 7f3c0028c700  0
 mon.ceph01-vm@1(peon).data_health(56) *update_stats avail 5% total
 15798272 used 12941508 avail 926268*

 2014-08-23 10:19:43.910806 7f3c0028c700 -1
 mon.ceph01-vm@1(peon).data_health(56) reached critical levels of
 available space on local monitor storage -- shutdown!
 2014-08-23 10:19:43.910811 7f3c0028c700  0 ** Shutdown via Data Health
 Service **
 2014-08-23 10:19:43.931427 7f3bffa8b700  1
 mon.ceph01-vm@1(peon).paxos(paxos active c 15814..16493) is_readable
 now=2014-08-23 10:19:43.931433 lease_expire=2014-08-23 10:19:45.989585
 has v0 lc 16493
 2014-08-23 10:19:43.931486 7f3bfe887700 -1 mon.ceph01-vm@1(peon) e2 ***
 Got Signal Interrupt ***
 2014-08-23 10:19:43.931515 7f3bfe887700  1 mon.ceph01-vm@1(peon) e2
 shutdown
 2014-08-23 10:19:43.931725 7f3bfe887700  0 quorum service shutdown
 2014-08-23 10:19:43.931730 7f3bfe887700  0
 mon.ceph01-vm@1(shutdown).health(56) HealthMonitor::service_shutdown 1
 services
 2014-08-23 10:19:43.931735 7f3bfe887700  0 quorum service shutdown



 2014-08-22 21:31 GMT+07:00 debian Only onlydeb...@gmail.com
 mailto:onlydeb...@gmail.com:


 this time ceph01-vm down, no big log happen ,  other 2 ok.do not
 what's the reason,  this is not my first time install Ceph.  but
 this is first time i meet that mon down again and again.

 ceph.conf on each OSDs and MONs
   [global]
 fsid = 075f1aae-48de-412e-b024-b0f014dbc8cf
 mon_initial_members = ceph01-vm, ceph02-vm, ceph04-vm
 mon_host = 192.168.123.251,192.168.123.252,192.168.123.250
 auth_cluster_required = cephx
 auth_service_required = cephx
 auth_client_required = cephx
 filestore_xattr_use_omap = true

 rgw print continue = false
 rgw dns name = ceph-radosgw
 osd pool default pg num = 128
 osd pool default pgp num = 128


 [client.radosgw.gateway]
 host = ceph-radosgw
 keyring = /etc/ceph/ceph.client.radosgw.keyring
 rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
 log file = /var/log/ceph/client.radosgw.gateway.log


 2014-08-22 18:15 GMT+07:00 Joao Eduardo Luis joao.l...@inktank.com
 mailto:joao.l...@inktank.com:


 On 08/22/2014 10:21 AM, debian Only wrote:

 i have  3 mons in Ceph 0.80.5 on Wheezy. have one RadosGW

 when happen this first time, i increase the mon log device.
 this time mon.ceph02-vm down, only this mon down,  other 2
 is ok.

 pls some one give me some guide.

27M Aug 22 02:11 ceph-mon.ceph04-vm.log
43G Aug 22 02:11 ceph-mon.ceph02-vm.log
2G Aug 22 02:11 ceph-mon.ceph01-vm.log


 Depending on the debug level you set, and depending on which
 subsystems you set a higher debug level, the monitor can spit
 out A LOT of information in a short period of time.  43GB is
 nothing compared to some 100+ GB logs I've had churn through in
 the past

Re: [ceph-users] One Mon log huge and this Mon down often

2014-08-23 Thread debian Only
This happened after I used *ceph-deploy create ceph01-vm ceph02-vm ceph04-vm* to
create the 3 mon members.
Now one mon goes down roughly every 10 hours, every time with the error below, even
though the disk sometimes still has plenty of space left, such as 30 GB.

When I deployed Ceph before, I created only one mon in the first step (*ceph-deploy
create ceph01-vm*) and then ran *ceph-deploy mon add ceph02-vm*, and did not meet this
problem.

I do not know why.

2014-08-23 10:19:43.910650 7f3c0028c700  0
mon.ceph01-vm@1(peon).data_health(56)
*update_stats avail 5% total 15798272 used 12941508 avail 926268*
2014-08-23 10:19:43.910806 7f3c0028c700 -1
mon.ceph01-vm@1(peon).data_health(56)
reached critical levels of available space on local monitor storage --
shutdown!
2014-08-23 10:19:43.910811 7f3c0028c700  0 ** Shutdown via Data Health
Service **
2014-08-23 10:19:43.931427 7f3bffa8b700  1 mon.ceph01-vm@1(peon).paxos(paxos
active c 15814..16493) is_readable now=2014-08-23 10:19:43.931433
lease_expire=2014-08-23 10:19:45.989585 has v0 lc 16493
2014-08-23 10:19:43.931486 7f3bfe887700 -1 mon.ceph01-vm@1(peon) e2 *** Got
Signal Interrupt ***
2014-08-23 10:19:43.931515 7f3bfe887700  1 mon.ceph01-vm@1(peon) e2 shutdown
2014-08-23 10:19:43.931725 7f3bfe887700  0 quorum service shutdown
2014-08-23 10:19:43.931730 7f3bfe887700  0 mon.ceph01-vm@1(shutdown).health(56)
HealthMonitor::service_shutdown 1 services
2014-08-23 10:19:43.931735 7f3bfe887700  0 quorum service shutdown



2014-08-22 21:31 GMT+07:00 debian Only onlydeb...@gmail.com:

 this time ceph01-vm down, no big log happen ,  other 2 ok.do not
 what's the reason,  this is not my first time install Ceph.  but this is
 first time i meet that mon down again and again.

 ceph.conf on each OSDs and MONs
  [global]
 fsid = 075f1aae-48de-412e-b024-b0f014dbc8cf
 mon_initial_members = ceph01-vm, ceph02-vm, ceph04-vm
 mon_host = 192.168.123.251,192.168.123.252,192.168.123.250
 auth_cluster_required = cephx
 auth_service_required = cephx
 auth_client_required = cephx
 filestore_xattr_use_omap = true

 rgw print continue = false
 rgw dns name = ceph-radosgw
 osd pool default pg num = 128
 osd pool default pgp num = 128


 [client.radosgw.gateway]
 host = ceph-radosgw
 keyring = /etc/ceph/ceph.client.radosgw.keyring
 rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
 log file = /var/log/ceph/client.radosgw.gateway.log


 2014-08-22 18:15 GMT+07:00 Joao Eduardo Luis joao.l...@inktank.com:

 On 08/22/2014 10:21 AM, debian Only wrote:

 i have  3 mons in Ceph 0.80.5 on Wheezy. have one RadosGW

 when happen this first time, i increase the mon log device.
 this time mon.ceph02-vm down, only this mon down,  other 2 is ok.

 pls some one give me some guide.

   27M Aug 22 02:11 ceph-mon.ceph04-vm.log
   43G Aug 22 02:11 ceph-mon.ceph02-vm.log
   2G Aug 22 02:11 ceph-mon.ceph01-vm.log


 Depending on the debug level you set, and depending on which subsystems
 you set a higher debug level, the monitor can spit out A LOT of information
 in a short period of time.  43GB is nothing compared to some 100+ GB logs
 I've had churn through in the past.

 However, I'm not grasping what kind of help you need.  According to your
 'ceph -s' below the monitors seem okay -- all are in, health is OK.

 If you issue is with having that one monitor spitting out humongous
 amounts of debug info here's what you need to do:

 - If you added one or more 'debug something = X' to that monitor's
 ceph.conf, you will want to remove them so that in a future restart the
 monitor doesn't start with non-default debug levels.

 - You will want to inject default debug levels into that one monitor.

 Depending on what debug levels you have increased, you will want to run a
 version of ceph tell mon.ceph02-vm injectargs '--debug-mon 1/5 --debug-ms
 0/5 --debug-paxos 1/5'

   -Joao


 # ceph -s
  cluster 075f1aae-48de-412e-b024-b0f014dbc8cf
   health HEALTH_OK
   monmap e2: 3 mons at
 {ceph01-vm=192.168.123.251:6789/0,ceph02-vm=192.168.123.
 252:6789/0,ceph04-vm=192.168.123.250:6789/0
  http://192.168.123.251:6789/0,ceph02-vm=192.168.123.252:
 6789/0,ceph04-vm=192.168.123.250:6789/0},

 election epoch 44, quorum 0,1,2 ceph04-vm,ceph01-vm,ceph02-vm
   mdsmap e10: 1/1/1 up {0=ceph06-vm=up:active}
   osdmap e145: 10 osds: 10 up, 10 in
pgmap v4394: 2392 pgs, 21 pools, 4503 MB data, 1250 objects
  13657 MB used, 4908 GB / 4930 GB avail
  2392 active+clean


 /2014-08-22 02:06:34.738828 7ff2b9557700  1

 mon.ceph02-vm@2(peon).paxos(paxos active c 9037..9756) is_readable
 now=2014-08-22 02:06:34.738830 lease_expire=2014-08-22 02:06:39.701305
 has v0 lc 9756/
 /2014-08-22 02:06:36.618805 7ff2b9557700  1

 mon.ceph02-vm@2(peon).paxos(paxos active c 9037..9756) is_readable
 now=2014-08-22 02:06:36.618807 lease_expire=2014-08-22 02:06:39.701305
 has v0 lc 9756/
 /2014-08-22 02:06:36.620019 7ff2b9557700  1

 mon.ceph02-vm@2(peon).paxos(paxos active c 9037..9756) is_readable
 now=2014-08-22 02:06:36.620021

[ceph-users] One Mon log huge and this Mon down often

2014-08-22 Thread debian Only
I have 3 mons in Ceph 0.80.5 on Wheezy, plus one RadosGW.

When this first happened, I increased the mon log level.
This time mon.ceph02-vm went down; only this mon is down, the other two are OK.

Please can someone give me some guidance?

 27M Aug 22 02:11 ceph-mon.ceph04-vm.log
 43G Aug 22 02:11 ceph-mon.ceph02-vm.log
 2G Aug 22 02:11 ceph-mon.ceph01-vm.log

# ceph -s
cluster 075f1aae-48de-412e-b024-b0f014dbc8cf
 health HEALTH_OK
 monmap e2: 3 mons at {ceph01-vm=
192.168.123.251:6789/0,ceph02-vm=192.168.123.252:6789/0,ceph04-vm=192.168.123.250:6789/0},
election epoch 44, quorum 0,1,2 ceph04-vm,ceph01-vm,ceph02-vm
 mdsmap e10: 1/1/1 up {0=ceph06-vm=up:active}
 osdmap e145: 10 osds: 10 up, 10 in
  pgmap v4394: 2392 pgs, 21 pools, 4503 MB data, 1250 objects
13657 MB used, 4908 GB / 4930 GB avail
2392 active+clean


*2014-08-22 02:06:34.738828 7ff2b9557700  1
mon.ceph02-vm@2(peon).paxos(paxos active c 9037..9756) is_readable
now=2014-08-22 02:06:34.738830 lease_expire=2014-08-22 02:06:39.701305 has
v0 lc 9756*
*2014-08-22 02:06:36.618805 7ff2b9557700  1
mon.ceph02-vm@2(peon).paxos(paxos active c 9037..9756) is_readable
now=2014-08-22 02:06:36.618807 lease_expire=2014-08-22 02:06:39.701305 has
v0 lc 9756*
*2014-08-22 02:06:36.620019 7ff2b9557700  1
mon.ceph02-vm@2(peon).paxos(paxos active c 9037..9756) is_readable
now=2014-08-22 02:06:36.620021 lease_expire=2014-08-22 02:06:39.701305 has
v0 lc 9756*
*2014-08-22 02:06:36.620975 7ff2b9557700  1
mon.ceph02-vm@2(peon).paxos(paxos active c 9037..9756) is_readable
now=2014-08-22 02:06:36.620977 lease_expire=2014-08-22 02:06:39.701305 has
v0 lc 9756*
*2014-08-22 02:06:36.629362 7ff2b9557700  0 mon.ceph02-vm@2(peon) e2
handle_command mon_command({"prefix": "mon_status", "format": "json"} v 0) v1*
*2014-08-22 02:06:36.633007 7ff2b9557700  0 mon.ceph02-vm@2(peon) e2
handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1*
*2014-08-22 02:06:36.637002 7ff2b9557700  0 mon.ceph02-vm@2(peon) e2
handle_command mon_command({"prefix": "health", "detail": "", "format":
"json"} v 0) v1*
*2014-08-22 02:06:36.640971 7ff2b9557700  0 mon.ceph02-vm@2(peon) e2
handle_command mon_command({"dumpcontents": ["pgs_brief"], "prefix": "pg
dump", "format": "json"} v 0) v1*
*2014-08-22 02:06:36.641014 7ff2b9557700  1
mon.ceph02-vm@2(peon).paxos(paxos active c 9037..9756) is_readable
now=2014-08-22 02:06:36.641016 lease_expire=2014-08-22 02:06:39.701305 has
v0 lc 9756*
*2014-08-22 02:06:37.520387 7ff2b9557700  1
mon.ceph02-vm@2(peon).paxos(paxos active c 9037..9757) is_readable
now=2014-08-22 02:06:37.520388 lease_expire=2014-08-22 02:06:42.501572 has
v0 lc 9757*
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] One Mon log huge and this Mon down often

2014-08-22 Thread debian Only
This time ceph01-vm went down, and no big log was produced; the other two are OK. I do
not know what the reason is. This is not the first time I have installed Ceph, but it is
the first time I have had a mon go down again and again.

ceph.conf on each OSDs and MONs
 [global]
fsid = 075f1aae-48de-412e-b024-b0f014dbc8cf
mon_initial_members = ceph01-vm, ceph02-vm, ceph04-vm
mon_host = 192.168.123.251,192.168.123.252,192.168.123.250
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true

rgw print continue = false
rgw dns name = ceph-radosgw
osd pool default pg num = 128
osd pool default pgp num = 128


[client.radosgw.gateway]
host = ceph-radosgw
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
log file = /var/log/ceph/client.radosgw.gateway.log


2014-08-22 18:15 GMT+07:00 Joao Eduardo Luis joao.l...@inktank.com:

 On 08/22/2014 10:21 AM, debian Only wrote:

 i have  3 mons in Ceph 0.80.5 on Wheezy. have one RadosGW

 when happen this first time, i increase the mon log device.
 this time mon.ceph02-vm down, only this mon down,  other 2 is ok.

 pls some one give me some guide.

   27M Aug 22 02:11 ceph-mon.ceph04-vm.log
   43G Aug 22 02:11 ceph-mon.ceph02-vm.log
   2G Aug 22 02:11 ceph-mon.ceph01-vm.log


 Depending on the debug level you set, and depending on which subsystems
 you set a higher debug level, the monitor can spit out A LOT of information
 in a short period of time.  43GB is nothing compared to some 100+ GB logs
 I've had churn through in the past.

 However, I'm not grasping what kind of help you need.  According to your
 'ceph -s' below the monitors seem okay -- all are in, health is OK.

 If you issue is with having that one monitor spitting out humongous
 amounts of debug info here's what you need to do:

 - If you added one or more 'debug something = X' to that monitor's
 ceph.conf, you will want to remove them so that in a future restart the
 monitor doesn't start with non-default debug levels.

 - You will want to inject default debug levels into that one monitor.

 Depending on what debug levels you have increased, you will want to run a
 version of ceph tell mon.ceph02-vm injectargs '--debug-mon 1/5 --debug-ms
 0/5 --debug-paxos 1/5'

   -Joao


 # ceph -s
  cluster 075f1aae-48de-412e-b024-b0f014dbc8cf
   health HEALTH_OK
   monmap e2: 3 mons at
 {ceph01-vm=192.168.123.251:6789/0,ceph02-vm=192.168.123.
 252:6789/0,ceph04-vm=192.168.123.250:6789/0
 http://192.168.123.251:6789/0,ceph02-vm=192.168.123.252:
 6789/0,ceph04-vm=192.168.123.250:6789/0},

 election epoch 44, quorum 0,1,2 ceph04-vm,ceph01-vm,ceph02-vm
   mdsmap e10: 1/1/1 up {0=ceph06-vm=up:active}
   osdmap e145: 10 osds: 10 up, 10 in
pgmap v4394: 2392 pgs, 21 pools, 4503 MB data, 1250 objects
  13657 MB used, 4908 GB / 4930 GB avail
  2392 active+clean


 /2014-08-22 02:06:34.738828 7ff2b9557700  1

 mon.ceph02-vm@2(peon).paxos(paxos active c 9037..9756) is_readable
 now=2014-08-22 02:06:34.738830 lease_expire=2014-08-22 02:06:39.701305
 has v0 lc 9756/
 /2014-08-22 02:06:36.618805 7ff2b9557700  1

 mon.ceph02-vm@2(peon).paxos(paxos active c 9037..9756) is_readable
 now=2014-08-22 02:06:36.618807 lease_expire=2014-08-22 02:06:39.701305
 has v0 lc 9756/
 /2014-08-22 02:06:36.620019 7ff2b9557700  1

 mon.ceph02-vm@2(peon).paxos(paxos active c 9037..9756) is_readable
 now=2014-08-22 02:06:36.620021 lease_expire=2014-08-22 02:06:39.701305
 has v0 lc 9756/
 /2014-08-22 02:06:36.620975 7ff2b9557700  1

 mon.ceph02-vm@2(peon).paxos(paxos active c 9037..9756) is_readable
 now=2014-08-22 02:06:36.620977 lease_expire=2014-08-22 02:06:39.701305
 has v0 lc 9756/
 /2014-08-22 02:06:36.629362 7ff2b9557700  0 mon.ceph02-vm@2(peon) e2

 handle_command mon_command({prefix: mon_status, format: json} v
 0) v1/
 /2014-08-22 02:06:36.633007 7ff2b9557700  0 mon.ceph02-vm@2(peon) e2
 handle_command mon_command({prefix: status, format: json} v 0) v1/
 /2014-08-22 02:06:36.637002 7ff2b9557700  0 mon.ceph02-vm@2(peon) e2

 handle_command mon_command({prefix: health, detail: , format:
 json} v 0) v1/
 /2014-08-22 02:06:36.640971 7ff2b9557700  0 mon.ceph02-vm@2(peon) e2

 handle_command mon_command({dumpcontents: [pgs_brief], prefix: pg
 dump, format: json} v 0) v1/
 /2014-08-22 02:06:36.641014 7ff2b9557700  1

 mon.ceph02-vm@2(peon).paxos(paxos active c 9037..9756) is_readable
 now=2014-08-22 02:06:36.641016 lease_expire=2014-08-22 02:06:39.701305
 has v0 lc 9756/
 /2014-08-22 02:06:37.520387 7ff2b9557700  1

 mon.ceph02-vm@2(peon).paxos(paxos active c 9037..9757) is_readable
 now=2014-08-22 02:06:37.520388 lease_expire=2014-08-22 02:06:42.501572
 has v0 lc 9757/



 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



 --
 Joao Eduardo Luis
 Software Engineer | http://inktank.com | http://ceph.com

[ceph-users] fail to upload file from RadosGW by Python+S3

2014-08-21 Thread debian Only
I can upload files to RadosGW with s3cmd and with the DragonDisk client.

The script below can list all buckets and all files in each bucket, but it cannot
upload a file via Python/S3 (boto).
###
#coding=utf-8
__author__ = 'Administrator'

#!/usr/bin/env python
import fnmatch
import os, sys
import boto
import boto.s3.connection

access_key = 'VC8R6C193WDVKNTDCRKA'
secret_key = 'ASUWdUTx6PwVXEf/oJRRmDnvKEWp509o3rl1Xt+h'

pidfile = "copytoceph.pid"


def check_pid(pid):
    try:
        os.kill(pid, 0)
    except OSError:
        return False
    else:
        return True


if os.path.isfile(pidfile):
    pid = long(open(pidfile, 'r').read())
    if check_pid(pid):
        print "%s already exists, doing natting" % pidfile
        sys.exit()

pid = str(os.getpid())
file(pidfile, 'w').write(pid)

conn = boto.connect_s3(
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    host='ceph-radosgw.lab.com',
    port=80,
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

print conn
mybucket = conn.get_bucket('foo')
print mybucket
mylist = mybucket.list()
print mylist
buckets = conn.get_all_buckets()
for bucket in buckets:
    print "{name}\t{created}".format(
        name=bucket.name,
        created=bucket.creation_date,
    )

    for key in bucket.list():
        print "{name}\t{size}\t{modified}".format(
            name=(key.name).encode('utf8'),
            size=key.size,
            modified=key.last_modified,
        )


key = mybucket.new_key('hello.txt')
print key
key.set_contents_from_string('Hello World!')
###

root@ceph-radosgw:~# python rgwupload.py
S3Connection:ceph-radosgw.lab.com
Bucket: foo
boto.s3.bucketlistresultset.BucketListResultSet object at 0x1d6ae10
backup  2014-08-21T10:23:08.000Z
add volume for vms.png  23890   2014-08-21T10:53:43.000Z
foo 2014-08-20T16:11:19.000Z
file0001.txt29  2014-08-21T04:22:25.000Z
galley/DSC_0005.JPG 2142126 2014-08-21T04:24:29.000Z
galley/DSC_0006.JPG 2005662 2014-08-21T04:24:29.000Z
galley/DSC_0009.JPG 1922686 2014-08-21T04:24:29.000Z
galley/DSC_0010.JPG 2067713 2014-08-21T04:24:29.000Z
galley/DSC_0011.JPG 2027689 2014-08-21T04:24:30.000Z
galley/DSC_0012.JPG 2853358 2014-08-21T04:24:30.000Z
galley/DSC_0013.JPG 2844746 2014-08-21T04:24:30.000Z
iso 2014-08-21T04:43:16.000Z
pdf 2014-08-21T09:36:15.000Z
Key: foo,hello.txt

It hangs at this point.

I get the same error when I run the script on the radosgw host itself:

Traceback (most recent call last):
  File "D:/Workspace/S3-Ceph/test.py", line 65, in <module>
    key.set_contents_from_string('Hello World!')
  File "c:\Python27\lib\site-packages\boto\s3\key.py", line 1419, in set_contents_from_string
    encrypt_key=encrypt_key)
  File "c:\Python27\lib\site-packages\boto\s3\key.py", line 1286, in set_contents_from_file
    chunked_transfer=chunked_transfer, size=size)
  File "c:\Python27\lib\site-packages\boto\s3\key.py", line 746, in send_file
    chunked_transfer=chunked_transfer, size=size)
  File "c:\Python27\lib\site-packages\boto\s3\key.py", line 944, in _send_file_internal
    query_args=query_args
  File "c:\Python27\lib\site-packages\boto\s3\connection.py", line 664, in make_request
    retry_handler=retry_handler
  File "c:\Python27\lib\site-packages\boto\connection.py", line 1053, in make_request
    retry_handler=retry_handler)
  File "c:\Python27\lib\site-packages\boto\connection.py", line 1009, in _mexe
    raise BotoServerError(response.status, response.reason, body)
boto.exception.BotoServerError: BotoServerError: 500 Internal Server Error
None
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] fail to upload file from RadosGW by Python+S3

2014-08-21 Thread debian Only
When I use DragonDisk and deselect the "Expect 100-continue" header option, the upload
succeeds; when that option is selected, the upload hangs.

So maybe the Python script cannot upload because of 100-continue? My radosgw Apache2
setup does not use 100-continue.

If my guess is right, how do I disable this in the Python S3 connection so that the
script can upload files?
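One thing that might be worth trying on the Python side, assuming boto honours a caller-supplied Expect header rather than overwriting it (untested here, so treat it as a guess): pass an empty Expect header on the upload call. If boto ignores it, the fix has to stay on the gateway side.

  # hypothetical workaround: suppress Expect: 100-continue on the upload.
  # Whether boto keeps a caller-supplied Expect header is an assumption, not verified.
  key = mybucket.new_key('hello.txt')
  key.set_contents_from_string('Hello World!', headers={'Expect': ''})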



2014-08-21 20:57 GMT+07:00 debian Only onlydeb...@gmail.com:

 [...]

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] fail to upload file from RadosGW by Python+S3

2014-08-21 Thread debian Only
My radosgw has 100-continue disabled:

[global]
fsid = 075f1aae-48de-412e-b024-b0f014dbc8cf
mon_initial_members = ceph01-vm, ceph02-vm, ceph04-vm
mon_host = 192.168.123.251,192.168.123.252,192.168.123.250
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true

*rgw print continue = false*
rgw dns name = ceph-radosgw
osd pool default pg num = 128
osd pool default pgp num = 128

#debug rgw = 20
[client.radosgw.gateway]
host = ceph-radosgw
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
log file = /var/log/ceph/client.radosgw.gateway.log



2014-08-21 22:42 GMT+07:00 debian Only onlydeb...@gmail.com:

 [...]



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] some pgs active+remapped, Ceph can not recover itself.

2014-08-20 Thread debian Only
Thanks, Lewis. And I take the suggestion that it is better to keep the OSD sizes
similar.


2014-08-20 9:24 GMT+07:00 Craig Lewis cle...@centraldesktop.com:

 I believe you need to remove the authorization for osd.4 and osd.6 before
 re-creating them.

 When I re-format disks, I migrate data off of the disk using:
   ceph osd out $OSDID

 Then wait for the remapping to finish.  Once it does:
   stop ceph-osd id=$OSDID
   ceph osd out $OSDID
   ceph auth del osd.$OSDID
   ceph osd crush remove osd.$OSDID
   ceph osd rm $OSDID

 Ceph will migrate the data off of it.  When it's empty, you can delete it
 using the above commands. Since osd.4 and osd.6 are already lost, you can
 just do the part after remapping finishes for them.


 You could be having trouble because the size of the OSDs are so different.
  I wouldn't mix OSDs that are 100GB and 1.8TB.  Most of the stuck PGs are
 on osd.5, osd.7, and one of the small OSDs.  You can migrate data off of
 those small disks the same way I said to do osd.10.
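Put together as one sketch (OSDID is a placeholder; the "wait" step is just watching ceph -s until remapping settles):

  OSDID=10
  ceph osd out $OSDID                 # start draining data off the OSD

  # wait until 'ceph -s' shows all PGs active+clean again, then:
  stop ceph-osd id=$OSDID             # upstart form as above; use the ceph init script otherwise
  ceph auth del osd.$OSDID
  ceph osd crush remove osd.$OSDID
  ceph osd rm $OSDID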



 On Tue, Aug 19, 2014 at 6:34 AM, debian Only onlydeb...@gmail.com wrote:

 [...]

[ceph-users] some pgs active+remapped, Ceph can not recover itself.

2014-08-19 Thread debian Only
This happened after some OSDs failed and I recreated them.

I ran 'ceph osd rm osd.4' to remove osd.4 and osd.6. But when I use ceph-deploy to
install an OSD with 'ceph-deploy osd --zap-disk --fs-type btrfs create ceph0x-vm:sdb',
ceph-deploy reports that the new OSD is ready, yet the OSD cannot start: ceph-disk fails
with an auth error on /var/lib/ceph/bootstrap-osd/ceph.keyring, and I have checked that
the ceph.keyring is the same as on the live OSDs.

When I run ceph-deploy twice, the first run creates osd.4, which fails but still shows
up in the osd tree; then the same happens with osd.6. The next ceph-deploy osd run
creates osd.10, and that OSD starts successfully, but osd.4 and osd.6 show as down in
the osd tree.

After I ran 'ceph osd reweight-by-utilization' once, even more PGs went active+remapped,
and Ceph cannot recover by itself.

The CRUSH map tunables are already set to optimal. I do not know how to solve this.

root@ceph-admin:~# ceph osd crush dump
{ devices: [
{ id: 0,
  name: osd.0},
{ id: 1,
  name: osd.1},
{ id: 2,
  name: osd.2},
{ id: 3,
  name: osd.3},
{ id: 4,
  name: device4},
{ id: 5,
  name: osd.5},
{ id: 6,
  name: device6},
{ id: 7,
  name: osd.7},
{ id: 8,
  name: osd.8},
{ id: 9,
  name: osd.9},
{ id: 10,
  name: osd.10}],
  types: [
{ type_id: 0,
  name: osd},
{ type_id: 1,
  name: host},
{ type_id: 2,
  name: chassis},
{ type_id: 3,
  name: rack},
{ type_id: 4,
  name: row},
{ type_id: 5,
  name: pdu},
{ type_id: 6,
  name: pod},
{ type_id: 7,
  name: room},
{ type_id: 8,
  name: datacenter},
{ type_id: 9,
  name: region},
{ type_id: 10,
  name: root}],
  buckets: [
{ id: -1,
  name: default,
  type_id: 10,
  type_name: root,
  weight: 302773,
  alg: straw,
  hash: rjenkins1,
  items: [
{ id: -2,
  weight: 5898,
  pos: 0},
{ id: -3,
  weight: 5898,
  pos: 1},
{ id: -4,
  weight: 5898,
  pos: 2},
{ id: -5,
  weight: 12451,
  pos: 3},
{ id: -6,
  weight: 13107,
  pos: 4},
{ id: -7,
  weight: 87162,
  pos: 5},
{ id: -8,
  weight: 49807,
  pos: 6},
{ id: -9,
  weight: 116654,
  pos: 7},
{ id: -10,
  weight: 5898,
  pos: 8}]},
{ id: -2,
  name: ceph02-vm,
  type_id: 1,
  type_name: host,
  weight: 5898,
  alg: straw,
  hash: rjenkins1,
  items: [
{ id: 0,
  weight: 5898,
  pos: 0}]},
{ id: -3,
  name: ceph03-vm,
  type_id: 1,
  type_name: host,
  weight: 5898,
  alg: straw,
  hash: rjenkins1,
  items: [
{ id: 1,
  weight: 5898,
  pos: 0}]},
{ id: -4,
  name: ceph01-vm,
  type_id: 1,
  type_name: host,
  weight: 5898,
  alg: straw,
  hash: rjenkins1,
  items: [
{ id: 2,
  weight: 5898,
  pos: 0}]},
{ id: -5,
  name: ceph04-vm,
  type_id: 1,
  type_name: host,
  weight: 12451,
  alg: straw,
  hash: rjenkins1,
  items: [
{ id: 8,
  weight: 12451,
  pos: 0}]},
{ id: -6,
  name: ceph05-vm,
  type_id: 1,
  type_name: host,
  weight: 13107,
  alg: straw,
  hash: rjenkins1,
  items: [
{ id: 3,
  weight: 13107,
  pos: 0}]},
{ id: -7,
  name: ceph06-vm,
  type_id: 1,
  type_name: host,
  weight: 87162,
  alg: straw,
  hash: rjenkins1,
  items: [
{ id: 5,
  weight: 87162,
  pos: 0}]},
{ id: -8,
  name: ceph07-vm,
  type_id: 1,
  type_name: host,
  weight: 49807,
  alg: straw,
  hash: rjenkins1,
  items: [
{ id: 9,
  weight: 49807,
  pos: 0}]},
{ id: -9,
  name: ceph08-vm,
  type_id: 1,
  type_name: host,
  weight: 116654,
  alg: straw,
  hash: rjenkins1,
  items: [
{ id: 7,
 

[ceph-users] Fobidden 403 and fail to create subuser key when use radosgw

2014-08-12 Thread debian Only
Dear all

I have met some issues when accessing radosgw:
Forbidden 403 responses, and failure to create a subuser key.

ceph version 0.80.5 (ceph osd, radosgw), OS: Debian Wheezy

(1) Reference of installation
http://ceph.com/docs/master/radosgw/config/#configuring-print-continue

(2) Config File
root@ceph-radosgw:~# more /etc/ceph/ceph.conf
[global]
fsid = ae3da4d2-eef0-47cf-a872-24df8f2c8df4
mon_initial_members = ceph01-vm
mon_host = 192.168.123.251
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true

rgw print continue = false
rgw dns name = ceph-radosgw
debug rgw = 20


[client.radosgw.gateway]
host = ceph-radosgw
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
log file = /var/log/ceph/client.radosgw.gateway.log

root@ceph-admin:~# rados lspools
 data
 metadata
 rbd
 testpool
 iscsi
 pool-A
 pool-B
 iscsi_pool
 .rgw.root
 .rgw.control
 .rgw
 .rgw.gc
 .users.uid
 .users
 .users.swift
 .users.email
 .rgw.buckets
 .rgw.buckets.index
 .log
 .intent-log
 .usage

 When accessing radosgw at http://192.168.123.191, it seems OK:

   <ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
     <Owner>
       <ID>anonymous</ID>
       <DisplayName/>
     </Owner>
     <Buckets/>
   </ListAllMyBucketsResult>


(3) Errors met when creating a radosgw (Swift) user and generating a key

root@ceph-radosgw:~# radosgw-admin user create --uid=testuser --display-name="First User"
{ user_id: testuser,
  display_name: First User,
  email: ,
  suspended: 0,
  max_buckets: 1000,
  auid: 0,
  subusers: [],
  keys: [
{ user: testuser,
  access_key: SU3L3KCDXQ31KJ6BZ04B,
  secret_key: nhA2XNsqwJN8bZlkOEd2UyexMADC9THOhc7UmW4l}],
  swift_keys: [],
  caps: [],
  op_mask: read, write, delete,
  default_placement: ,
  placement_tags: [],
  bucket_quota: { enabled: false,
  max_size_kb: -1,
  max_objects: -1},
  user_quota: { enabled: false,
  max_size_kb: -1,
  max_objects: -1},
  temp_url_keys: []}
root@ceph-radosgw:~# radosgw-admin usage show --show-log-entries=false
{ summary: []}root@ceph-radosgw:~# radosgw-admin user create
--uid=testuser First User^Ce=F
root@ceph-radosgw:~# radosgw-admin subuser create --uid=testuser
--subuser=testuser:swf0001 --access=full
{ user_id: testuser,
  display_name: First User,
  email: ,
  suspended: 0,
  max_buckets: 1000,
  auid: 0,
  subusers: [
{ id: testuser:swf0001,
  permissions: full-control}],
  keys: [
{ user: testuser:swf0001,
  access_key: 9IN7P6HA6K4JCDO61N67,
  secret_key: },
{ user: testuser,
  access_key: SU3L3KCDXQ31KJ6BZ04B,
  secret_key: nhA2XNsqwJN8bZlkOEd2UyexMADC9THOhc7UmW4l}],
  swift_keys: [],
  caps: [],
  op_mask: read, write, delete,
  default_placement: ,
  placement_tags: [],
  bucket_quota: { enabled: false,
  max_size_kb: -1,
  max_objects: -1},
  user_quota: { enabled: false,
  max_size_kb: -1,
  max_objects: -1},
  temp_url_keys: []}
root@ceph-radosgw:~# radosgw-admin key create
--subuser=testuser:swf0001 --key-type=swift --gen-secret
could not create key: unable to add access key, unable to store user
info
2014-08-11 19:56:35.834507 7f4c4f1af780  0 WARNING: can't store user
info, swift id () already mapped to another user (testuser)

(I think this may be this bug: http://tracker.ceph.com/issues/9002)


root@ceph-radosgw:~# radosgw-admin user create --subuser=testuser:swf0001 --display-name="Test User One" --key-type=swift --access=full
could not create user: unable to create user, user: testuser exists
root@ceph-radosgw:~# radosgw-admin user create --subuser=testuser:swf0001 --display-name="Test User One" --key-type=swift --access=full
could not create user: unable to create user, user: testuser exists
root@ceph-radosgw:~# radosgw-admin user rm --uid=testuser

root@ceph-radosgw:~# radosgw-admin user create --subuser=testuser:swf0001 --display-name="Test User One" --key-type=swift --access=full
{ user_id: testuser,
  display_name: Test User One,
  email: ,
  suspended: 0,
  max_buckets: 1000,
  auid: 0,
  subusers: [],
  keys: [],
  swift_keys: [
{ user: testuser:swf0001,
  secret_key: W\/zZ8T09VPFoPKxnVAJocsmNALoPxEYPmjOwytCj}],
  caps: [],
  op_mask: read, write, delete,
  default_placement: ,
  placement_tags: [],
  bucket_quota: { enabled: false,
  max_size_kb: -1,
  max_objects: -1},
  user_quota: { enabled: false,
  max_size_kb: -1,
  max_objects: -1},
   

Re: [ceph-users] no user info saved after user creation / can't create buckets

2014-08-12 Thread debian Only
I met the same problem as you, but I still cannot get it working even after creating
 .rgw.buckets
 .rgw.buckets.index
 .log
 .intent-log
 .usage

I am still stuck here.
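For reference, the per-zone buckets pools Greg mentions below can be created by hand with something like this (pool names are for the default zone; the PG count of 128 is only an example):

  ceph osd pool create .rgw.buckets 128
  ceph osd pool create .rgw.buckets.index 128
  # for a named zone such as us-west-1 the pools would be
  # .us-west-1.rgw.buckets and .us-west-1.rgw.buckets.index instead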


2014-03-13 7:38 GMT+07:00 Greg Poirier greg.poir...@opower.com:

 And, I figured out the issue.

 The utility I was using to create pools, zones, and regions automatically
 failed to do two things:

 - create rgw.buckets and rgw.buckets.index for each zone
 - setup placement pools for each zone

 I did both of those, and now everything is working.

 Thanks, me, for the commitment to figuring this poo out.
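
 For reference, the two missing pieces described above would look roughly like
 this; the pool names and PG count below are assumptions based on the zone
 name that appears in the logs:

 ceph osd pool create .us-west-1.rgw.buckets 64
 ceph osd pool create .us-west-1.rgw.buckets.index 64
 radosgw-admin zone get --rgw-zone=us-west-1 > zone.json
 # edit placement_pools in zone.json to point at the two pools above, then:
 radosgw-admin zone set --rgw-zone=us-west-1 --infile zone.json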


 On Wed, Mar 12, 2014 at 8:31 PM, Greg Poirier greg.poir...@opower.com
 wrote:

 Increasing the logging further, and I notice the following:

 2014-03-13 00:27:28.617100 7f6036ffd700 20 rgw_create_bucket returned
 ret=-1 bucket=test(@.rgw.buckets[us-west-1.15849318.1])

 But hope that .rgw.buckets doesn't have to exist... and that
 @.rgw.buckets is perhaps telling of something?

 I did notice that .us-west-1.rgw.buckets and .us-west-1.rgw.buckets.index
 weren't created. I created those, restarted radosgw, and still 403 errors.


 On Wed, Mar 12, 2014 at 8:00 PM, Greg Poirier greg.poir...@opower.com
 wrote:

 And the debug log because that last log was obviously not helpful...

 2014-03-12 23:57:49.497780 7ff97e7dd700  1 == starting new request
 req=0x23bc650 =
 2014-03-12 23:57:49.498198 7ff97e7dd700  2 req 1:0.000419::PUT
 /test::initializing
 2014-03-12 23:57:49.498233 7ff97e7dd700 10 host=s3.amazonaws.com
 rgw_dns_name=us-west-1.domain
 2014-03-12 23:57:49.498366 7ff97e7dd700 10 s-object=NULL
 s-bucket=test
 2014-03-12 23:57:49.498437 7ff97e7dd700  2 req 1:0.000659:s3:PUT
 /test::getting op
 2014-03-12 23:57:49.498448 7ff97e7dd700  2 req 1:0.000670:s3:PUT
 /test:create_bucket:authorizing
 2014-03-12 23:57:49.498508 7ff97e7dd700 10 cache get:
 name=.us-west-1.users+BLAHBLAHBLAH : miss
 2014-03-12 23:57:49.500852 7ff97e7dd700 10 cache put:
 name=.us-west-1.users+BLAHBLAHBLAH
 2014-03-12 23:57:49.500865 7ff97e7dd700 10 adding
 .us-west-1.users+BLAHBLAHBLAH to cache LRU end
 2014-03-12 23:57:49.500886 7ff97e7dd700 10 moving
 .us-west-1.users+BLAHBLAHBLAH to cache LRU end
 2014-03-12 23:57:49.500889 7ff97e7dd700 10 cache get:
 name=.us-west-1.users+BLAHBLAHBLAH : type miss (requested=1, cached=6)
 2014-03-12 23:57:49.500907 7ff97e7dd700 10 moving
 .us-west-1.users+BLAHBLAHBLAH to cache LRU end
 2014-03-12 23:57:49.500910 7ff97e7dd700 10 cache get:
 name=.us-west-1.users+BLAHBLAHBLAH : hit
 2014-03-12 23:57:49.502663 7ff97e7dd700 10 cache put:
 name=.us-west-1.users+BLAHBLAHBLAH
 2014-03-12 23:57:49.502667 7ff97e7dd700 10 moving
 .us-west-1.users+BLAHBLAHBLAH to cache LRU end
 2014-03-12 23:57:49.502700 7ff97e7dd700 10 cache get:
 name=.us-west-1.users.uid+test : miss
 2014-03-12 23:57:49.505128 7ff97e7dd700 10 cache put:
 name=.us-west-1.users.uid+test
 2014-03-12 23:57:49.505138 7ff97e7dd700 10 adding
 .us-west-1.users.uid+test to cache LRU end
 2014-03-12 23:57:49.505157 7ff97e7dd700 10 moving
 .us-west-1.users.uid+test to cache LRU end
 2014-03-12 23:57:49.505160 7ff97e7dd700 10 cache get:
 name=.us-west-1.users.uid+test : type miss (requested=1, cached=6)
 2014-03-12 23:57:49.505176 7ff97e7dd700 10 moving
 .us-west-1.users.uid+test to cache LRU end
 2014-03-12 23:57:49.505178 7ff97e7dd700 10 cache get:
 name=.us-west-1.users.uid+test : hit
 2014-03-12 23:57:49.507401 7ff97e7dd700 10 cache put:
 name=.us-west-1.users.uid+test
 2014-03-12 23:57:49.507406 7ff97e7dd700 10 moving
 .us-west-1.users.uid+test to cache LRU end
 2014-03-12 23:57:49.507521 7ff97e7dd700 10 get_canon_resource():
 dest=/test
 2014-03-12 23:57:49.507529 7ff97e7dd700 10 auth_hdr:
 PUT

 binary/octet-stream
 Wed, 12 Mar 2014 23:57:51 GMT
 /test
 2014-03-12 23:57:49.507674 7ff97e7dd700  2 req 1:0.009895:s3:PUT
 /test:create_bucket:reading permissions
 2014-03-12 23:57:49.507682 7ff97e7dd700  2 req 1:0.009904:s3:PUT
 /test:create_bucket:verifying op mask
 2014-03-12 23:57:49.507695 7ff97e7dd700  2 req 1:0.009917:s3:PUT
 /test:create_bucket:verifying op permissions
 2014-03-12 23:57:49.509604 7ff97e7dd700  2 req 1:0.011826:s3:PUT
 /test:create_bucket:verifying op params
 2014-03-12 23:57:49.509615 7ff97e7dd700  2 req 1:0.011836:s3:PUT
 /test:create_bucket:executing
  2014-03-12 23:57:49.509694 7ff97e7dd700 10 cache get:
 name=.us-west-1.domain.rgw+test : miss
 2014-03-12 23:57:49.512229 7ff97e7dd700 10 cache put:
 name=.us-west-1.domain.rgw+test
 2014-03-12 23:57:49.512259 7ff97e7dd700 10 adding
 .us-west-1.domain.rgw+test to cache LRU end
 2014-03-12 23:57:49.512333 7ff97e7dd700 10 cache get:
 name=.us-west-1.domain.rgw+.pools.avail : miss
 2014-03-12 23:57:49.518216 7ff97e7dd700 10 cache put:
 name=.us-west-1.domain.rgw+.pools.avail
 2014-03-12 23:57:49.518228 7ff97e7dd700 10 adding
 .us-west-1.domain.rgw+.pools.avail to cache LRU end
 2014-03-12 23:57:49.518248 7ff97e7dd700 10 moving
 .us-west-1.domain.rgw+.pools.avail to cache LRU end
 2014-03-12 23:57:49.518251 7ff97e7dd700 10 

Re: [ceph-users] Forbidden 403 and failure to create subuser key when using radosgw

2014-08-12 Thread debian Only
root@ceph-radosgw:~# radosgw-admin user create --uid=testuser
--display-name=First User
{ user_id: testuser,
  display_name: First User,
  email: ,
  suspended: 0,
  max_buckets: 1000,
  auid: 0,
  subusers: [],
  keys: [
{ user: testuser,
  access_key: 1YKSB0M9BOJZ23BV2VKB,
  secret_key: JUR2FBZyYbfITVfW+mtcqRzmV879OzSDkIgbjqQi}],
  swift_keys: [],
  caps: [],
  op_mask: read, write, delete,
  default_placement: ,
  placement_tags: [],
  bucket_quota: { enabled: false,
  max_size_kb: -1,
  max_objects: -1},
  user_quota: { enabled: false,
  max_size_kb: -1,
  max_objects: -1},
  temp_url_keys: []}
root@ceph-radosgw:~# radosgw-admin subuser create --uid=testuser
--subuser=testuser:swf0001 --access=full
{ user_id: testuser,
  display_name: First User,
  email: ,
  suspended: 0,
  max_buckets: 1000,
  auid: 0,
  subusers: [
{ id: testuser:swf0001,
  permissions: full-control}],
  keys: [
{ user: testuser,
  access_key: 1YKSB0M9BOJZ23BV2VKB,
  secret_key: JUR2FBZyYbfITVfW+mtcqRzmV879OzSDkIgbjqQi},
{ user: testuser:swf0001,
  access_key: WL058L93OWMSB3XCM0TJ,
  secret_key: }],
  swift_keys: [],
  caps: [],
  op_mask: read, write, delete,
  default_placement: ,
  placement_tags: [],
  bucket_quota: { enabled: false,
  max_size_kb: -1,
  max_objects: -1},
  user_quota: { enabled: false,
  max_size_kb: -1,
  max_objects: -1},
  temp_url_keys: []}
root@ceph-radosgw:~# radosgw-admin key create --subuser=testuser:swf0001
--key-type=swift --gen-secret
could not create key: unable to add access key, unable to store user info
2014-08-12 02:21:04.463267 7f64b9e48780  0 WARNING: can't store user info,
swift id () already mapped to another user (testuser)

###
 Then I used another way to create a key for testuser:swf0001. The key cannot
be removed:

root@ceph-radosgw:~# radosgw-admin user rm --uid=testuser


root@ceph-radosgw:~# radosgw-admin user create --subuser=testuser:swf0001
--display-name=Test User One --key-type=swift --access=full
{ user_id: testuser,
  display_name: Test User One,
  email: ,
  suspended: 0,
  max_buckets: 1000,
  auid: 0,
  subusers: [],
  keys: [],
  swift_keys: [
{ user: testuser:swf0001,
  secret_key: JOgJ+XKcD68Zozs7v2cAaCorRFnZEBG4SwdUnuo8}],
  caps: [],
  op_mask: read, write, delete,
  default_placement: ,
  placement_tags: [],
  bucket_quota: { enabled: false,
  max_size_kb: -1,
  max_objects: -1},
  user_quota: { enabled: false,
  max_size_kb: -1,
  max_objects: -1},
  temp_url_keys: []}
root@ceph-radosgw:~# radosgw-admin key rm --uid=testuser:swf0001
 could not remove key: unable to parse request, user info was not
populated


root@ceph-radosgw:~# radosgw-admin key create --subuser=testuser:swf0001
--key-type=swift --gen-secret
{ user_id: testuser,
  display_name: Test User One,
  email: ,
  suspended: 0,
  max_buckets: 1000,
  auid: 0,
  subusers: [],
  keys: [],
  swift_keys: [
{ user: testuser:swf0001,
  secret_key: r4sHbFyF0A5tE1mW+GSMYovwkNdoqS\/nP8rd1UGO}],
  caps: [],
  op_mask: read, write, delete,
  default_placement: ,
  placement_tags: [],
  bucket_quota: { enabled: false,
  max_size_kb: -1,
  max_objects: -1},
  user_quota: { enabled: false,
  max_size_kb: -1,
  max_objects: -1},
  temp_url_keys: []}



2014-08-12 15:52 GMT+07:00 Karan Singh karan.si...@csc.fi:


 For your item number 3, can you try:

- Removing the keys for the sub user ( testuser:swf0001 )

- Once the key is removed for the sub user, try recreating the key  [ #
radosgw-admin key create --subuser=testuser:swf0001 --key-type=swift
--gen-secret ]
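
(Concretely, that would be something along these lines; the exact flags can
vary by release, so treat this as a sketch rather than the definitive syntax:)

radosgw-admin key rm --subuser=testuser:swf0001 --key-type=swift
radosgw-admin key create --subuser=testuser:swf0001 --key-type=swift --gen-secret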



 - Karan -

 On 12 Aug 2014, at 11:26, debian Only onlydeb...@gmail.com wrote:

 Dear all

 I have met some issues when accessing radosgw:
 Forbidden 403 errors, and failure to create a subuser key when using radosgw.

 ceph version 0.80.5(ceph osd, radosgw), OS Wheezy

 (1) Reference of installation
 http://ceph.com/docs/master/radosgw/config/#configuring-print-continue

 (2) Config File
 root@ceph-radosgw:~# more /etc/ceph/ceph.conf
 [global]
 fsid = ae3da4d2-eef0-47cf-a872-24df8f2c8df4
 mon_initial_members = ceph01-vm
 mon_host = 192.168.123.251
 auth_cluster_required = cephx
 auth_service_required = cephx
 auth_client_required = cephx
 filestore_xattr_use_omap = true

 rgw print continue = false
 rgw dns name = ceph-radosgw
 debug rgw = 20


 [client.radosgw.gateway]
 host = ceph-radosgw
 keyring = /etc/ceph/ceph.client.radosgw.keyring
 rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
 log file = /var/log/ceph/client.radosgw.gateway.log
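
 (The keyring referenced above has to exist on the gateway host and be known
 to the cluster; a minimal sketch, assuming the caps suggested by the upstream
 radosgw install guide:)

 ceph auth get-or-create client.radosgw.gateway osd 'allow rwx' mon 'allow rw' \
     -o /etc/ceph/ceph.client.radosgw.keyring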

 root@ceph-admin:~# rados lspools
  data
  metadata
  rbd
  testpool
  iscsi
  pool-A
  pool-B
  iscsi_pool
  .rgw.root
  .rgw.control
  .rgw
  .rgw.gc
  .users.uid
  .users
  .users.swift

Re: [ceph-users] Forbidden 403 and failure to create subuser key when using radosgw

2014-08-12 Thread debian Only
# my Trouble shooting  #

When I try to use s3cmd to check, using the user johndoe that I created, it
can create a bucket.

###
root@ceph-radosgw:~# more .s3cfg
[default]
access_key = UGM3MB541JI0WG3WJIZ7
bucket_location = US
cloudfront_host = cloudfront.amazonaws.com
default_mime_type = binary/octet-stream
delete_removed = False
dry_run = False
enable_multipart = True
encoding = UTF-8
encrypt = False
follow_symlinks = False
force = False
get_continue = False
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes
--passph
rase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes
--passph
rase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase =
guess_mime_type = True
host_base = ceph-radosgw.lab.com
host_bucket = %(bucket)s.ceph-radosgw.lab.com
human_readable_sizes = False
invalidate_on_cf = False
list_md5 = False
log_target_prefix =
mime_type =
multipart_chunk_size_mb = 15
preserve_attrs = True
progress_meter = True
proxy_host =
proxy_port = 0
recursive = False
recv_chunk = 4096
reduced_redundancy = False
secret_key = gL2txO+bQ3kEdYmqwR9YXYcO0O1gXFXX/+/kdh8Q
send_chunk = 4096
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
urlencoding_mode = normal
use_https = False
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
website_error =
website_index = index.html


root@ceph-radosgw:~# vi .s3cfg
root@ceph-radosgw:~# s3cmd ls
root@ceph-radosgw:~# s3cmd mb s3://foo
Bucket 's3://foo/' created
root@ceph-radosgw:~# s3cmd ls
2014-08-12 10:10  s3://foo
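
(To confirm that object writes work end to end, and not just bucket listing, a
quick follow-up check could be:)

s3cmd put /etc/hosts s3://foo/hosts      # upload any small file
s3cmd ls s3://foo                        # the object should now be listed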


root@ceph-radosgw:~# radosgw-admin metadata list bucket
  [
foo]

root@ceph-radosgw:~# radosgw-admin bucket link --bucket=foo --uid=johndoe
root@ceph-radosgw:~# radosgw-admin bucket stats --bucket=foo
{ bucket: foo,
  pool: .rgw.buckets,
  index_pool: .rgw.buckets.index,
  id: default.441498.1,
  marker: default.441498.1,
  owner: johndoe,
  ver: 1,
  master_ver: 0,
  mtime: 1407839163,
  max_marker: ,
  usage: {},
  bucket_quota: { enabled: false,
  max_size_kb: -1,
  max_objects: -1}}





2014-08-12 16:24 GMT+07:00 debian Only onlydeb...@gmail.com:


 root@ceph-radosgw:~# radosgw-admin user create --uid=testuser
 --display-name=First User
 { user_id: testuser,
   display_name: First User,
   email: ,
   suspended: 0,
   max_buckets: 1000,
   auid: 0,
   subusers: [],
   keys: [
 { user: testuser,
   access_key: 1YKSB0M9BOJZ23BV2VKB,
   secret_key: JUR2FBZyYbfITVfW+mtcqRzmV879OzSDkIgbjqQi}],
   swift_keys: [],
   caps: [],
   op_mask: read, write, delete,
   default_placement: ,
   placement_tags: [],
   bucket_quota: { enabled: false,
   max_size_kb: -1,
   max_objects: -1},
   user_quota: { enabled: false,
   max_size_kb: -1,
   max_objects: -1},
   temp_url_keys: []}
 root@ceph-radosgw:~# radosgw-admin subuser create --uid=testuser
 --subuser=testuser:swf0001 --access=full
 { user_id: testuser,
   display_name: First User,
   email: ,
   suspended: 0,
   max_buckets: 1000,
   auid: 0,
   subusers: [
 { id: testuser:swf0001,
   permissions: full-control}],
   keys: [
 { user: testuser,
   access_key: 1YKSB0M9BOJZ23BV2VKB,
   secret_key: JUR2FBZyYbfITVfW+mtcqRzmV879OzSDkIgbjqQi},
 { user: testuser:swf0001,
   access_key: WL058L93OWMSB3XCM0TJ,
   secret_key: }],
   swift_keys: [],
   caps: [],
   op_mask: read, write, delete,
   default_placement: ,
   placement_tags: [],
   bucket_quota: { enabled: false,
   max_size_kb: -1,
   max_objects: -1},
   user_quota: { enabled: false,
   max_size_kb: -1,
   max_objects: -1},
   temp_url_keys: []}
 root@ceph-radosgw:~# radosgw-admin key create --subuser=testuser:swf0001
 --key-type=swift --gen-secret
 could not create key: unable to add access key, unable to store user info
 2014-08-12 02:21:04.463267 7f64b9e48780  0 WARNING: can't store user info,
 swift id () already mapped to another user (testuser)

 ###
  then i use another way to create key for testuser:swf0001   . can not
 remove key
 
 root@ceph-radosgw:~# radosgw-admin user rm --uid=testuser


 root@ceph-radosgw:~# radosgw-admin user create --subuser=testuser:swf0001
 --display-name=Test User One --key-type=swift --access=full
 { user_id: testuser,
   display_name: Test User One,
   email: ,
   suspended: 0,
   max_buckets: 1000,
   auid: 0,
   subusers: [],
   keys: [],
   swift_keys: [
 { user: testuser:swf0001,
   secret_key: JOgJ+XKcD68Zozs7v2cAaCorRFnZEBG4SwdUnuo8}],
   caps: [],
   op_mask: read, write, delete,
   default_placement: ,
   placement_tags: [],
   bucket_quota: { enabled: false,
   max_size_kb: -1,
   max_objects: -1},
   user_quota: { enabled: false,
   max_size_kb: -1,
   max_objects: -1},
   temp_url_keys: []}
 root@ceph-radosgw:~# radosgw-admin key

Re: [ceph-users] Forbidden 403 and failure to create subuser key when using radosgw

2014-08-12 Thread debian Only
I have tested and hit the same issue on Wheezy and Ubuntu 12.04 with Ceph 0.80.5 too.

It can be successful when using:
radosgw-admin user create --subuser=testuser:swf0001 --display-name=Test
User One --key-type=swift --access=full

and it will create the correct Swift user in the pool .users.swift:
 # rados ls -p .users.swift

 testuser:swf0001

and I can use the Swift user testuser:swf0001 to access the RADOS gateway:

root@ceph-radosgw:~# curl -v -i http://192.168.123.191/auth -X GET -H
X-Auth-User:testuser:swf0001 -H
X-Auth-Key:r4sHbFyF0A5tE1mW+GSMYovwkNdoqS/nP8rd1UGO
* About to connect() to 192.168.123.191 port 80 (#0)
*   Trying 192.168.123.191...
* Connected to 192.168.123.191 (192.168.123.191) port 80 (#0)
 GET /auth HTTP/1.1
 User-Agent: curl/7.29.0
 Host: 192.168.123.191
 Accept: */*
 X-Auth-User:testuser:swf0001
 X-Auth-Key:r4sHbFyF0A5tE1mW+GSMYovwkNdoqS/nP8rd1UGO

 HTTP/1.1 204 No Content
HTTP/1.1 204 No Content
 Date: Tue, 12 Aug 2014 14:32:34 GMT
Date: Tue, 12 Aug 2014 14:32:34 GMT
 Server: Apache/2.2.22 (Debian)
Server: Apache/2.2.22 (Debian)
 X-Storage-Url: http://192.168.123.191/swift/v1
X-Storage-Url: http://192.168.123.191/swift/v1
 X-Storage-Token:
AUTH_rgwtk100074657374757365723a737766303030317f6874150b34862d0277eb5328350129e54690fff014e8a758339af7dc34d895ab6061b9
X-Storage-Token:
AUTH_rgwtk100074657374757365723a737766303030317f6874150b34862d0277eb5328350129e54690fff014e8a758339af7dc34d895ab6061b9
 X-Auth-Token:
AUTH_rgwtk100074657374757365723a737766303030317f6874150b34862d0277eb5328350129e54690fff014e8a758339af7dc34d895ab6061b9
X-Auth-Token:
AUTH_rgwtk100074657374757365723a737766303030317f6874150b34862d0277eb5328350129e54690fff014e8a758339af7dc34d895ab6061b9
 Content-Type: application/json
Content-Type: application/json


* Connection #0 to host 192.168.123.191 left intact

In my environment, johndoe:swift is not in the .users.swift pool. I can test
s3cmd successfully with:
user: johndoe,
access key: UGM3MB541JI0WG3WJIZ7
secret: gL2txO+bQ3kEdYmqwR9YXYcO0O1gXFXX/+/kdh8Q

Due to this issue, a Swift subuser key cannot be created normally, so the
RADOS gateway cannot be used normally.
http://tracker.ceph.com/issues/9002
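
(Once a working Swift secret exists, the gateway can also be exercised with
the python-swiftclient CLI, assuming it is installed; <swift_secret> below is
a placeholder for the generated key:)

swift -A http://192.168.123.191/auth -U testuser:swf0001 -K <swift_secret> stat
swift -A http://192.168.123.191/auth -U testuser:swf0001 -K <swift_secret> post my-container
swift -A http://192.168.123.191/auth -U testuser:swf0001 -K <swift_secret> list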




2014-08-12 17:34 GMT+07:00 debian Only onlydeb...@gmail.com:

 # my Trouble shooting  #

 when i try use s3cmd to check ,  use user johndoe i created.  it can
 create bucket.

 ###
 root@ceph-radosgw:~# more .s3cfg
 [default]
 access_key = UGM3MB541JI0WG3WJIZ7
 bucket_location = US
 cloudfront_host = cloudfront.amazonaws.com
 default_mime_type = binary/octet-stream
 delete_removed = False
 dry_run = False
 enable_multipart = True
 encoding = UTF-8
 encrypt = False
 follow_symlinks = False
 force = False
 get_continue = False
 gpg_command = /usr/bin/gpg
 gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes
 --passph
 rase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
 gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes
 --passph
 rase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
 gpg_passphrase =
 guess_mime_type = True
 host_base = ceph-radosgw.lab.com
 host_bucket = %(bucket)s.ceph-radosgw.lab.com
 human_readable_sizes = False
 invalidate_on_cf = False
 list_md5 = False
 log_target_prefix =
 mime_type =
 multipart_chunk_size_mb = 15
 preserve_attrs = True
 progress_meter = True
 proxy_host =
 proxy_port = 0
 recursive = False
 recv_chunk = 4096
 reduced_redundancy = False
 secret_key = gL2txO+bQ3kEdYmqwR9YXYcO0O1gXFXX/+/kdh8Q
 send_chunk = 4096
 simpledb_host = sdb.amazonaws.com
 skip_existing = False
 socket_timeout = 300
 urlencoding_mode = normal
 use_https = False
 verbosity = WARNING
 website_endpoint = http://%(bucket)s.s3-website-%(location)
 s.amazonaws.com/
 website_error =
 website_index = index.html


 root@ceph-radosgw:~# vi .s3cfg
 root@ceph-radosgw:~# s3cmd ls
 root@ceph-radosgw:~# s3cmd mb s3://foo
 Bucket 's3://foo/' created
 root@ceph-radosgw:~# s3cmd ls
 2014-08-12 10:10  s3://foo


 root@ceph-radosgw:~# radosgw-admin metadata list bucket
   [
 foo]

 root@ceph-radosgw:~# radosgw-admin bucket link --bucket=foo --uid=johndoe
 root@ceph-radosgw:~# radosgw-admin bucket stats --bucket=foo
 { bucket: foo,
   pool: .rgw.buckets,
   index_pool: .rgw.buckets.index,
   id: default.441498.1,
   marker: default.441498.1,
   owner: johndoe,
   ver: 1,
   master_ver: 0,
   mtime: 1407839163,
   max_marker: ,
   usage: {},
   bucket_quota: { enabled: false,
   max_size_kb: -1,
   max_objects: -1}}





 2014-08-12 16:24 GMT+07:00 debian Only onlydeb...@gmail.com:


 root@ceph-radosgw:~# radosgw-admin user create --uid=testuser
 --display-name=First User
 { user_id: testuser,
   display_name: First User,
   email: ,
   suspended: 0,
   max_buckets: 1000,
   auid: 0,
   subusers: [],
   keys: [
 { user: testuser,
   access_key: 1YKSB0M9BOJZ23BV2VKB,
   secret_key: JUR2FBZyYbfITVfW+mtcqRzmV879OzSDkIgbjqQi}],
swift_keys: [],
   caps: [],
   op_mask: read

Re: [ceph-users] Forbidden 403 and failure to create subuser key when using radosgw

2014-08-12 Thread debian Only
I just use s3cmd to test. I plan to use S3/Swift with inkScope or for
OpenStack, so I need to prepare the RADOS Gateway first,
but I have met this issue now.


2014-08-12 22:05 GMT+07:00 Christopher O'Connell c...@sendfaster.com:

 I've had tremendous difficulty using s3cmd when using RGW. I've
 successfully used an older php client, but not s3cmd. For the moment, we're
 no longer using s3cmd with RGW, because it simply doesn't seem to work,
 other than for listing buckets.


 On Tue, Aug 12, 2014 at 10:52 AM, debian Only onlydeb...@gmail.com
 wrote:

 i have test, meet same issue on Wheezy and Ubuntu12.04 with Ceph0.80.5
 too.

 it can be sucessful when use :
 radosgw-admin user create --subuser=testuser:swf0001 --display-name=Test
 User One --key-type=swift --access=full

 and it will create correct swift user in Pool .users.swift
  # rados ls -p .users.swift
 
  testuser:swf0001

 and can use the swift user:  testuser:swf0001  to access the rados gateway

 root@ceph-radosgw:~# curl -v -i http://192.168.123.191/auth -X GET -H
 X-Auth-User:testuser:swf0001 -H
 X-Auth-Key:r4sHbFyF0A5tE1mW+GSMYovwkNdoqS/nP8rd1UGO
  * About to connect() to 192.168.123.191 port 80 (#0)
 *   Trying 192.168.123.191...
 * Connected to 192.168.123.191 (192.168.123.191) port 80 (#0)
  GET /auth HTTP/1.1
  User-Agent: curl/7.29.0
  Host: 192.168.123.191
  Accept: */*
  X-Auth-User:testuser:swf0001
  X-Auth-Key:r4sHbFyF0A5tE1mW+GSMYovwkNdoqS/nP8rd1UGO
 
  HTTP/1.1 204 No Content
 HTTP/1.1 204 No Content
  Date: Tue, 12 Aug 2014 14:32:34 GMT
 Date: Tue, 12 Aug 2014 14:32:34 GMT
  Server: Apache/2.2.22 (Debian)
 Server: Apache/2.2.22 (Debian)
   X-Storage-Url: http://192.168.123.191/swift/v1
 X-Storage-Url: http://192.168.123.191/swift/v1
  X-Storage-Token:
 AUTH_rgwtk100074657374757365723a737766303030317f6874150b34862d0277eb5328350129e54690fff014e8a758339af7dc34d895ab6061b9
 X-Storage-Token:
 AUTH_rgwtk100074657374757365723a737766303030317f6874150b34862d0277eb5328350129e54690fff014e8a758339af7dc34d895ab6061b9
  X-Auth-Token:
 AUTH_rgwtk100074657374757365723a737766303030317f6874150b34862d0277eb5328350129e54690fff014e8a758339af7dc34d895ab6061b9
 X-Auth-Token:
 AUTH_rgwtk100074657374757365723a737766303030317f6874150b34862d0277eb5328350129e54690fff014e8a758339af7dc34d895ab6061b9
  Content-Type: application/json
 Content-Type: application/json

 
 * Connection #0 to host 192.168.123.191 left intact

 in my environment, johndoe:swift not in the .users.swift pool.   i can
 use test s3cmd   successfully with
 user: johndoe,
 access key: UGM3MB541JI0WG3WJIZ7
 secret: gL2txO+bQ3kEdYmqwR9YXYcO0O1gXFXX/+/kdh8Q

 due to the issue can not create swift subuser key normally .  rados
 gateway can not use normally.
 http://tracker.ceph.com/issues/9002




 2014-08-12 17:34 GMT+07:00 debian Only onlydeb...@gmail.com:

 # my Trouble shooting  #

 when i try use s3cmd to check ,  use user johndoe i created.  it can
 create bucket.

 ###
 root@ceph-radosgw:~# more .s3cfg
 [default]
 access_key = UGM3MB541JI0WG3WJIZ7
 bucket_location = US
 cloudfront_host = cloudfront.amazonaws.com
 default_mime_type = binary/octet-stream
 delete_removed = False
 dry_run = False
 enable_multipart = True
 encoding = UTF-8
 encrypt = False
 follow_symlinks = False
 force = False
 get_continue = False
 gpg_command = /usr/bin/gpg
 gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes
 --passph
 rase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
 gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes
 --passph
 rase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
 gpg_passphrase =
 guess_mime_type = True
 host_base = ceph-radosgw.lab.com
 host_bucket = %(bucket)s.ceph-radosgw.lab.com
 human_readable_sizes = False
 invalidate_on_cf = False
 list_md5 = False
 log_target_prefix =
 mime_type =
 multipart_chunk_size_mb = 15
 preserve_attrs = True
 progress_meter = True
 proxy_host =
 proxy_port = 0
 recursive = False
 recv_chunk = 4096
 reduced_redundancy = False
 secret_key = gL2txO+bQ3kEdYmqwR9YXYcO0O1gXFXX/+/kdh8Q
 send_chunk = 4096
 simpledb_host = sdb.amazonaws.com
 skip_existing = False
 socket_timeout = 300
 urlencoding_mode = normal
 use_https = False
 verbosity = WARNING
 website_endpoint = http://%(bucket)s.s3-website-%(location)
 s.amazonaws.com/
 website_error =
 website_index = index.html


 root@ceph-radosgw:~# vi .s3cfg
 root@ceph-radosgw:~# s3cmd ls
 root@ceph-radosgw:~# s3cmd mb s3://foo
 Bucket 's3://foo/' created
 root@ceph-radosgw:~# s3cmd ls
 2014-08-12 10:10  s3://foo


 root@ceph-radosgw:~# radosgw-admin metadata list bucket
   [
 foo]

 root@ceph-radosgw:~# radosgw-admin bucket link --bucket=foo
 --uid=johndoe
 root@ceph-radosgw:~# radosgw-admin bucket stats --bucket=foo
 { bucket: foo,
   pool: .rgw.buckets,
   index_pool: .rgw.buckets.index,
   id: default.441498.1,
   marker: default.441498.1,
   owner: johndoe,
   ver: 1,
   master_ver: 0,
   mtime

Re: [ceph-users] issues with creating Swift users for radosgw

2014-08-11 Thread debian Only
I met the same problem. Maybe it is this bug:
http://tracker.ceph.com/issues/9002

But I still cannot access radosgw:


root@ceph-radosgw:~# radosgw-admin user create --subuser=testuser:swf0001
--display-name=Test User One --key-type=swift --access=full
{ user_id: testuser,
  display_name: Test User One,
  email: ,
  suspended: 0,
  max_buckets: 1000,
  auid: 0,
  subusers: [],
  keys: [],
  swift_keys: [
{ user: testuser:swf0001,
  secret_key: W\/zZ8T09VPFoPKxnVAJocsmNALoPxEYPmjOwytCj}],
  caps: [],
  op_mask: read, write, delete,
  default_placement: ,
  placement_tags: [],
  bucket_quota: { enabled: false,
  max_size_kb: -1,
  max_objects: -1},
  user_quota: { enabled: false,
  max_size_kb: -1,
  max_objects: -1},
  temp_url_keys: []}

root@ceph-radosgw:~# curl -v -i http://192.168.123.191/auth -X GET -H
X-Auth-User:testuser:swf0001 -H
X-Auth-Key:W/zZ8T09VPFoPKxnVAJocsmNALoPxEYPmjOwytCj
* About to connect() to 192.168.123.191 port 80 (#0)
*   Trying 192.168.123.191...
* Connected to 192.168.123.191 (192.168.123.191) port 80 (#0)
 GET /auth HTTP/1.1
 User-Agent: curl/7.29.0
 Host: 192.168.123.191
 Accept: */*
 X-Auth-User:testuser:swf0001
 X-Auth-Key:W/zZ8T09VPFoPKxnVAJocsmNALoPxEYPmjOwytCj

 HTTP/1.1 403 Forbidden
HTTP/1.1 403 Forbidden
 Date: Tue, 12 Aug 2014 03:16:41 GMT
Date: Tue, 12 Aug 2014 03:16:41 GMT
 Server: Apache/2.2.22 (Debian)
Server: Apache/2.2.22 (Debian)
 Accept-Ranges: bytes
Accept-Ranges: bytes
 Content-Length: 23
Content-Length: 23
 Content-Type: application/json
Content-Type: application/json


* Connection #0 to host 192.168.123.191 left intact
{Code:AccessDenied}


2014-05-21 3:14 GMT+07:00 Simon Weald sim...@memset.com:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Hello

 I've been experimenting with radosgw and I've had no issues with the
 S3 interface, however I cannot get a subuser created for use with the
 Swift api.

 First I created a user:

 root@ceph1:~# radosgw-admin user create --uid=shw
 - --display-name=Simon Weald
 { user_id: shw,
   display_name: Simon Weald,
   email: ,
   suspended: 0,
   max_buckets: 1000,
   auid: 0,
   subusers: [],
   keys: [
 { user: shw,
   access_key: 1WFY4I8I152WX8P74NZ7,
   secret_key: AkYBun7GubMaJq+IV4\/Rd904gkThrTVTLnhDATNm}],
   swift_keys: [],
   caps: [],
   op_mask: read, write, delete,
   default_placement: ,
   placement_tags: [],
   bucket_quota: { enabled: false,
   max_size_kb: -1,
   max_objects: -1},
   user_quota: { enabled: false,
   max_size_kb: -1,
   max_objects: -1},
   temp_url_keys: []}

 Then I created a subuser:

 root@ceph1:~# radosgw-admin subuser create --uid=shw
 - --subuser=shw:swift --access=full
 { user_id: shw,
   display_name: Simon Weald,
   email: ,
   suspended: 0,
   max_buckets: 1000,
   auid: 0,
   subusers: [
 { id: shw:swift,
   permissions: full-control}],
   keys: [
 { user: shw,
   access_key: 1WFY4I8I152WX8P74NZ7,
   secret_key: AkYBun7GubMaJq+IV4\/Rd904gkThrTVTLnhDATNm},
 { user: shw:swift,
   access_key: QJDYHDW1E63ZU0B75Z3P,
   secret_key: }],
   swift_keys: [],
   caps: [],
   op_mask: read, write, delete,
   default_placement: ,
   placement_tags: [],
   bucket_quota: { enabled: false,
   max_size_kb: -1,
   max_objects: -1},
   user_quota: { enabled: false,
   max_size_kb: -1,
   max_objects: -1},
   temp_url_keys: []}

 The issue comes when trying to create a secret key for the subuser:

 root@ceph1:~# radosgw-admin key create --subuser=shw:swift
 - --key-type=swift --gen-secret
 2014-05-20 20:13:29.460167 7f579bed5700  0 -- :/1004375 
 10.16.116.14:6789/0 pipe(0x1f94240 sd=3 :0 s=1 pgs=0 cs=0t
 could not create key: unable to add access key, unable to store user
 info2014-05-20 20:13:32.530032 7f57a5e7a780  0 )


 I'm running Firefly on Wheezy.

 Thanks!

 - --

 PGP key - http://www.simonweald.com/simonweald.asc
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.14 (GNU/Linux)
 Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

 iQEcBAEBAgAGBQJTe7e8AAoJEJiOmFh0er6IzJoH/RKeXDCFKiR108zjpnwmd+O2
 b0+u6N3Y+4KoLRRZbq7aJOSxH42lgFuGwwhkIZxXWC/xIHuxwHlwn4zqoBrTtfG3
 BAoOZkFdeEyoVfB3/xnAY8PXQPOCbTq6E2qma3dTxDS30h27ru09uGrWPuSfZV18
 g/cPGuOXpEp+bXHaRVgKBKp98sO+679V3uWrqszgRDV/xkc4h0Z9qicWJCIT+y4u
 niYeRL9zfBg/zQG5urx8GCkmkpVdvQ/L0M29zFpoDrlMORHtBy5Fs/3Wh9zFacNB
 u7KY44JbMrYPnbegbWK+5d5D2nO84d63k498KFkk3ExlFJJ8MC3JmKFlhWEc1K4=
 =Q/Yk
 -END PGP SIGNATURE-
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Introductions

2014-08-08 Thread debian Only
As I know, it is not recommended to run a Ceph OSD (RBD server) on the same
host as the VM host (e.g. KVM).
On the other hand, with more services on the same host it is harder to
maintain, and performance is worse for each service.


2014-08-09 7:33 GMT+07:00 Zach Hill z...@eucalyptus.com:

 Hi all,

 I'm Zach Hill, the storage lead at Eucalyptus http://www.eucalyptus.com.
 We're working on adding Ceph RBD support for our scale-out block storage
 (EBS API). Things are going well, and we've been happy with Ceph thus far.
 We are a RHEL/CentOS shop mostly, so any other tips there would be greatly
 appreciated.

 Our basic architecture is that we have a storage control node that issues
 control-plan operations: create image, delete, snapshot, etc. This
 controller uses librbd directly via JNA bindings. VMs access the Ceph RBD
 Images as block devices exposed via the Qemu/KVM RBD driver on our Node
 Controller hosts. It's similar to OpenStack Cinder in many ways.

 One of the questions we often get is:
 Can I run OSDs on my servers that also host VMs?

 Generally, we recommend strongly against such a deployment in order to
 ensure performance and failure isolation between the compute and storage
 sides of the system. But, I'm curious if anyone is doing this in practice
 and if they've found reasonable ways to make it work in production.

 Thanks for any info in advance, and we're happy to be joining this
 community in a more active way.

 -Zach



 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] What is the difference in storing data between rbd and rados?

2014-08-07 Thread debian Only
I hope an expert can give me some light on this.


2014-08-06 18:01 GMT+07:00 debian Only onlydeb...@gmail.com:

 I am confused about how a file is stored in Ceph.

 I did two tests. Where is the file, or where are the objects for the file?

 ①rados put Python.msi Python.msi -p data
 ②rbd -p testpool create fio_test --size 2048

 Does the rados command in ① mean using Ceph as object storage?
 Does the rbd command in ② mean using Ceph as block storage?

 As I know, an object in Ceph is 4 MB by default, and each object is placed in
 a PG. So I tried the test below: the fio_test image is stored in Ceph as 512
 objects (512 objects * 4 MB = 2048 MB), and I can see the objects in testpool.

 # rbd -p testpool info fio_test
 rbd image 'fio_test':
 size 2048 MB in 512 objects
 order 22 (4096 kB objects)
 block_name_prefix: rb.0.1b6f.2ae8944a
 format: 1
 # rados -p testpool ls |grep rb.0.1b6f.2ae8944a |wc -l
 512


 But when I check the data pool, there is only one object, Python.msi (26 MB).
 Why is Python.msi not split into many 4 MB objects?

 t# rados ls -p pool-B
 python.msi
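
(For what it is worth: rados put stores the whole input as a single RADOS
object, however large it is; striping into 4 MB objects is done by the client
layers such as RBD, RGW, CephFS or libradosstriper, not by RADOS itself. This
can be seen by reusing the commands from this thread:)

rados put Python.msi Python.msi -p data
rados -p data stat Python.msi                 # one object, the full ~26 MB
rbd -p testpool create fio_test --size 2048
rados -p testpool ls | grep rb.0. | wc -l     # many 4 MB objects created by RBD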

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] What is the difference in storing data between rbd and rados?

2014-08-06 Thread debian Only
I am confused about how a file is stored in Ceph.

I did two tests. Where is the file, or where are the objects for the file?

①rados put Python.msi Python.msi -p data
②rbd -p testpool create fio_test --size 2048

Does the rados command in ① mean using Ceph as object storage?
Does the rbd command in ② mean using Ceph as block storage?

As I know, an object in Ceph is 4 MB by default, and each object is placed in
a PG. So I tried the test below: the fio_test image is stored in Ceph as 512
objects (512 objects * 4 MB = 2048 MB), and I can see the objects in testpool.

# rbd -p testpool info fio_test
rbd image 'fio_test':
size 2048 MB in 512 objects
order 22 (4096 kB objects)
block_name_prefix: rb.0.1b6f.2ae8944a
format: 1
# rados -p testpool ls |grep rb.0.1b6f.2ae8944a |wc -l
512


But when I check the data pool, there is only one object, Python.msi (26 MB).
Why is Python.msi not split into many 4 MB objects?

t# rados ls -p pool-B
python.msi
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Is it possible to use a ramdisk for the Ceph journal?

2014-08-06 Thread debian Only
Thanks for your reply.
I have found and tested a way myself, and now I share it with others:


Begin  On Debian 
root@ceph01-vm:~# modprobe brd rd_nr=1 rd_size=4194304 max_part=0
root@ceph01-vm:~# mkdir /mnt/ramdisk
root@ceph01-vm:~# mkfs.btrfs /dev/ram0

WARNING! - Btrfs Btrfs v0.19 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using

fs created label (null) on /dev/ram0
nodesize 4096 leafsize 4096 sectorsize 4096 size 4.00GB
Btrfs Btrfs v0.19
root@ceph01-vm:~# mount /dev/ram0 /mnt/ramdisk/
root@ceph01-vm:~# update-rc.d ramdisk defaults 10 99
cd /etc/rc0.d/
 mv K01ramdisk K99ramdisk
 cd ../rc1.d/
 mv K01ramdisk K99ramdisk
cd ../rc6.d/
mv K01ramdisk K99ramdisk
 cd ../rc2.d/
mv S17ramdisk S08ramdisk
cd ../rc3.d/
mv S17ramdisk S08ramdisk
 cd ../rc4.d/
 mv S17ramdisk S08ramdisk
 cd ../rc5.d/
 mv S17ramdisk S08ramdisk
update-rc.d: using dependency based boot sequencing
root@ceph01-vm:~# cd /etc/rc0.d/
root@ceph01-vm:/etc/rc0.d#  mv K01ramdisk K99ramdisk
root@ceph01-vm:/etc/rc0.d#  cd ../rc1.d/
root@ceph01-vm:/etc/rc1.d#  mv K01ramdisk K99ramdisk
root@ceph01-vm:/etc/rc1.d# cd ../rc6.d/
root@ceph01-vm:/etc/rc6.d# mv K01ramdisk K99ramdisk
root@ceph01-vm:/etc/rc6.d#  cd ../rc2.d/
root@ceph01-vm:/etc/rc2.d# mv S17ramdisk S08ramdisk
root@ceph01-vm:/etc/rc2.d# cd ../rc3.d/
root@ceph01-vm:/etc/rc3.d# mv S17ramdisk S08ramdisk
root@ceph01-vm:/etc/rc3.d#  cd ../rc4.d/
root@ceph01-vm:/etc/rc4.d#  mv S17ramdisk S08ramdisk
root@ceph01-vm:/etc/rc4.d#  cd ../rc5.d/
root@ceph01-vm:/etc/rc5.d#  mv S17ramdisk S08ramdisk
root@ceph01-vm:/etc/rc5.d# service ceph status
=== mon.ceph01-vm ===
mon.ceph01-vm: running {version:0.80.5}
=== osd.2 ===
osd.2: running {version:0.80.5}
=== mds.ceph01-vm ===
mds.ceph01-vm: running {version:0.80.5}
root@ceph01-vm:/etc/rc5.d# service ceph stop osd.2
=== osd.2 ===
Stopping Ceph osd.2 on ceph01-vm...kill 10457...done
root@ceph01-vm:/etc/rc5.d# ceph-osd -i 2 --flush-journal
sh: 1: /sbin/hdparm: not found
2014-08-04 00:40:44.544251 7f5438b7a780 -1 journal _check_disk_write_cache:
pclose failed: (61) No data available
sh: 1: /sbin/hdparm: not found
2014-08-04 00:40:44.568660 7f5438b7a780 -1 journal _check_disk_write_cache:
pclose failed: (61) No data available
2014-08-04 00:40:44.570047 7f5438b7a780 -1 flushed journal
/var/lib/ceph/osd/ceph-2/journal for object store /var/lib/ceph/osd/ceph-2
root@ceph01-vm:/etc/rc5.d# vi /etc/ceph/ceph.conf

Put this config into /etc/ceph/ceph.conf:

[osd]
journal dio = false
osd journal size = 3072
[osd.2]
host = ceph01-vm
osd journal = /mnt/ramdisk/journal


root@ceph01-vm:/etc/rc5.d# ceph-osd -c /etc/ceph/ceph.conf -i 2 --mkjournal
2014-08-04 00:41:37.706925 7fa84b9dd780 -1 journal FileJournal::_open: aio
not supported without directio; disabling aio
2014-08-04 00:41:37.707975 7fa84b9dd780 -1 journal FileJournal::_open_file
: unable to preallocation journal to 5368709120 bytes: (28) No space left
on device
2014-08-04 00:41:37.708020 7fa84b9dd780 -1
filestore(/var/lib/ceph/osd/ceph-2) mkjournal error creating journal on
/mnt/ramdisk/journal: (28) No space left on device
2014-08-04 00:41:37.708050 7fa84b9dd780 -1  ** ERROR: error creating fresh
journal /mnt/ramdisk/journal for object store /var/lib/ceph/osd/ceph-2:
(28) No space left on device
root@ceph01-vm:/etc/rc5.d# ceph-osd -c /etc/ceph/ceph.conf -i 2 --mkjournal
2014-08-04 00:41:39.033908 7fd7e7627780 -1 journal FileJournal::_open: aio
not supported without directio; disabling aio
2014-08-04 00:41:39.034067 7fd7e7627780 -1 journal check: ondisk fsid
00000000-0000-0000-0000-000000000000 doesn't match expected
6b619888-6ce4-4028-b7b3-a3af2cf0c6c9, invalid (someone else's?) journal
2014-08-04 00:41:39.034252 7fd7e7627780 -1 created new journal
/mnt/ramdisk/journal for object store /var/lib/ceph/osd/ceph-2
root@ceph01-vm:/etc/rc5.d# service ceph start osd.2
=== osd.2 ===
create-or-move updated item name 'osd.2' weight 0.09 at location
{host=ceph01-vm,root=default} to crush map
Starting Ceph osd.2 on ceph01-vm...
starting osd.2 at :/0 osd_data /var/lib/ceph/osd/ceph-2 /mnt/ramdisk/journal
root@ceph01-vm:/etc/rc5.d# service ceph status
=== mon.ceph01-vm ===
mon.ceph01-vm: running {version:0.80.5}
=== osd.2 ===
osd.2: running {version:0.80.5}
=== mds.ceph01-vm ===
mds.ceph01-vm: running {version:0.80.5}
=== osd.2 ===
osd.2: running {version:0.80.5}

End


2014-08-06 7:14 GMT+07:00 Craig Lewis cle...@centraldesktop.com:

 Try this (adjust the size param as needed):
 mount -t tmpfs -o size=256m tmpfs /mnt/ramdisk
 ceph-deploy osd  prepare ceph04-vm:/dev/sdb:/mnt/ramdisk/journal.osd0
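
 (One caveat with this approach: tmpfs is empty again after a reboot, so the
 journal has to be recreated before the OSD will start. Roughly, assuming the
 OSD prepared above ends up as osd.0:)

 mount -t tmpfs -o size=256m tmpfs /mnt/ramdisk
 ceph-osd -i 0 --mkjournal
 service ceph start osd.0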



 On Sun, Aug 3, 2014 at 7:13 PM, debian Only onlydeb...@gmail.com wrote:

 anyone can help?


 2014-07-31 23:55 GMT+07:00 debian Only onlydeb...@gmail.com:

 Dear ,

 i have one test environment  Ceph Firefly 0.80.4, on Debian 7.5 .
 i do not have enough  SSD for each OSD.
 I want to test speed Ceph perfermance by put journal in a Ramdisk or
 tmpfs

Re: [ceph-users] v0.83 released

2014-08-04 Thread debian Only
Good news. When will this release be published in the Debian Wheezy package list?
Thanks for your good work.
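
(In the meantime, development releases are published to the ceph.com
repositories rather than to the distro package lists; roughly, with the repo
path being an assumption based on how development releases were shipped at the
time, and see http://ceph.com/docs/master/install/get-packages for details:)

echo deb http://ceph.com/debian-testing/ wheezy main > /etc/apt/sources.list.d/ceph.list
apt-get update && apt-get install ceph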


2014-07-30 8:21 GMT+07:00 Sage Weil s...@inktank.com:

 Another Ceph development release!  This has been a longer cycle, so there
 has been quite a bit of bug fixing and stabilization in this round.
 There is also a bunch of packaging fixes for RPM distros (RHEL/CentOS,
 Fedora, and SUSE) and for systemd.  We've also added a new
 librados-striper library from Sebastien Ponce that provides a generic
 striping API for applications to code to.

 Upgrading
 -

 * The experimental keyvaluestore-dev OSD backend had an on-disk format
   change that prevents existing OSD data from being upgraded.  This
   affects developers and testers only.

 * mon-specific and osd-specific leveldb options have been removed.
   From this point onward users should use 'leveldb_' generic options and
 add
   the options in the appropriate sections of their configuration files.
   Monitors will still maintain the following monitor-specific defaults:

 leveldb_write_buffer_size = 32*1024*1024  = 33554432  // 32MB
 leveldb_cache_size= 512*1024*1024 = 536870912 // 512MB
 leveldb_block_size= 64*1024   = 65536 // 64KB
 leveldb_compression   = false
 leveldb_log   = 

   OSDs will still maintain the following osd-specific defaults:

 leveldb_log   = 
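
 For example, to keep the old monitor-specific values pinned explicitly after
 upgrading, a ceph.conf stanza along these lines (values taken from the
 defaults listed above) would do it:

 [mon]
 leveldb_write_buffer_size = 33554432
 leveldb_cache_size = 536870912
 leveldb_block_size = 65536
 leveldb_compression = false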

 Notable Changes
 ---

 * ceph-disk: fix dmcrypt support (Stephen Taylor)
 * cephtool: fix help (Yilong Zhao)
 * cephtool: test cleanup (Joao Eduardo Luis)
 * doc: librados example fixes (Kevin Dalley)
 * doc: many doc updates (John Wilkins)
 * doc: update erasure docs (Loic Dachary, Venky Shankar)
 * filestore: disable use of XFS hint (buggy on old kernels) (Samuel Just)
 * filestore: fix xattr spillout (Greg Farnum, Haomai Wang)
 * keyvaluestore: header cache (Haomai Wang)
 * librados_striper: striping library for librados (Sebastien Ponce)
 * libs3: update to latest (Danny Al-Gaaf)
 * log: fix derr level (Joao Eduardo Luis)
 * logrotate: fix osd log rotation on ubuntu (Sage Weil)
 * mds: fix xattr bug triggered by ACLs (Yan, Zheng)
 * misc memory leaks, cleanups, fixes (Danny Al-Gaaf, Sahid Ferdjaoui)
 * misc suse fixes (Danny Al-Gaaf)
 * misc word size fixes (Kevin Cox)
 * mon: drop mon- and osd- specific leveldb options (Joao Eduardo Luis)
 * mon: ec pool profile fixes (Loic Dachary)
 * mon: fix health down messages (Sage Weil)
 * mon: fix quorum feature check (#8738, Greg Farnum)
 * mon: 'osd crush reweight-subtree ...' (Sage Weil)
 * mon, osd: relax client EC support requirements (Sage Weil)
 * mon: some instrumentation (Sage Weil)
 * objecter: flag operations that are redirected by caching (Sage Weil)
 * osd: clean up shard_id_t, shard_t (Loic Dachary)
 * osd: fix connection reconnect race (Greg Farnum)
 * osd: fix dumps (Joao Eduardo Luis)
 * osd: fix erasure-code lib initialization (Loic Dachary)
 * osd: fix extent normalization (Adam Crume)
 * osd: fix loopback msgr issue (Ma Jianpeng)
 * osd: fix LSB release parsing (Danny Al-Gaaf)
 * osd: improved backfill priorities (Sage Weil)
 * osd: many many core fixes (Samuel Just)
 * osd, mon: config sanity checks on start (Sage Weil, Joao Eduardo Luis)
 * osd: sharded threadpool to improve parallelism (Somnath Roy)
 * osd: simple io prioritization for scrub (Sage Weil)
 * osd: simple scrub throttling (Sage Weil)
 * osd: tests for bench command (Loic Dachary)
 * osd: use xfs hint less frequently (Ilya Dryomov)
 * pybind/rados: fix small timeouts (John Spray)
 * qa: xfstests updates (Ilya Dryomov)
 * rgw: cache bucket info (Yehuda Sadeh)
 * rgw: cache decoded user info (Yehuda Sadeh)
 * rgw: fix multipart object attr regression (#8452, Yehuda Sadeh)
 * rgw: fix radosgw-admin 'show log' command (#8553, Yehuda Sadeh)
 * rgw: fix URL decoding (#8702, Brian Rak)
 * rgw: handle empty extra pool name (Yehuda Sadeh)
 * rpm: do not restart daemons on upgrade (Alfredo Deza)
 * rpm: misc packaging fixes for rhel7 (Sandon Van Ness)
 * rpm: split ceph-common from ceph (Sandon Van Ness)
 * systemd: wrap started daemons in new systemd environment (Sage Weil, Dan
   Mick)
 * sysvinit: less sensitive to failures (Sage Weil)
 * upstart: increase max open files limit (Sage Weil)

 Getting Ceph
 

 * Git at git://github.com/ceph/ceph.git
 * Tarball at http://ceph.com/download/ceph-0.83.tar.gz
 * For packages, see http://ceph.com/docs/master/install/get-packages
 * For ceph-deploy, see
 http://ceph.com/docs/master/install/install-ceph-deploy
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Is it possible to use a ramdisk for the Ceph journal?

2014-08-03 Thread debian Only
Can anyone help?


2014-07-31 23:55 GMT+07:00 debian Only onlydeb...@gmail.com:

 Dear ,

 i have one test environment  Ceph Firefly 0.80.4, on Debian 7.5 .
 i do not have enough  SSD for each OSD.
 I want to test speed Ceph perfermance by put journal in a Ramdisk or
 tmpfs, but when to add new osd use separate disk for OSD data and journal
 ,it is failure.

 first , i have test Ram mount to a filesystem and made it to persistent.
  , i have tested it, it can recovery data from last archive when system
 boot.
   ramdisk.sh 
 #! /bin/sh
 ### BEGIN INIT INFO
 # Provides: Ramdisk
 # Required-Start:   $remote_fs $syslog
 # Required-Stop:$remote_fs $syslog
 # Default-Start:2 3 4 5
 # Default-Stop: 0 1 6
 # Short-Description:Ramdisk
 ### END INIT INFO
 # /etc/init.d/ramdisk.sh
 #

 case $1 in
  start)
echo Copying files to ramdisk
cd /mnt
mkfs.btrfs /dev/ram0  /var/log/ramdisk_sync.log
mount /dev/ram0 /mnt/ramdisk/
tar --lzop -xvf ramdisk-backup.tar.lzop  /var/log/ramdisk_sync.log
echo [`date +%Y-%m-%d %H:%M`] Ramdisk Synched from HD 
 /var/log/ramdisk_s
 ync.log
;;
  sync)
echo Synching files from ramdisk to Harddisk
echo [`date +%Y-%m-%d %H:%M`] Ramdisk Synched to HD 
 /var/log/ramdisk_syn
 c.log
cd /mnt
mv -f ramdisk-backup.tar.lzop ramdisk-backup-old.tar.lzop
tar --lzop -cvf ramdisk-backup.tar.lzop ramdisk 
 /var/log/ramdisk_sync.log
;;
  stop)
echo Synching logfiles from ramdisk to Harddisk
echo [`date +%Y-%m-%d %H:%M`] Ramdisk Synched to HD 
 /var/log/ramdisk_syn
 c.log
tar --lzop -cvf ramdisk-backup.tar.lzop ramdisk 
 /var/log/ramdisk_sync.log
;;
  *)
echo Usage: /etc/init.d/ramdisk {start|stop|sync}
exit 1
;;
 esac

 exit 0

 #

 then i want to add new OSD use ramdisk for journal.

 i have tried 3 ways.  all failed.
 1. ceph-deploy osd --zap-disk --fs-type btrfs create
 ceph04-vm:/dev/sdb:/dev/ram0 (use device way)
 2. ceph-deploy osd  prepare ceph04-vm:/mnt/osd:/mnt/ramdisk  (use
 direcotry way)
 3. ceph-deploy osd  prepare ceph04-vm:/dev/sdb:/mnt/ramdisk

 could some expert give me some guide on it ???

  some log#
 root@ceph-admin:~/my-cluster# ceph-deploy osd --zap-disk --fs-type btrfs
 create ceph04-vm:/dev/sdb:/dev/ram0
 [ceph_deploy.conf][DEBUG ] found configuration file at:
 /root/.cephdeploy.conf
 [ceph_deploy.cli][INFO  ] Invoked (1.5.9): /usr/bin/ceph-deploy osd
 --zap-disk --fs-type btrfs create ceph04-vm:/dev/sdb:/dev/ram0
 [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks
 ceph04-vm:/dev/sdb:/dev/ram0
 [ceph04-vm][DEBUG ] connected to host: ceph04-vm
 [ceph04-vm][DEBUG ] detect platform information from remote host
 [ceph04-vm][DEBUG ] detect machine type
 [ceph_deploy.osd][INFO  ] Distro info: debian 7.6 wheezy
 [ceph_deploy.osd][DEBUG ] Deploying osd to ceph04-vm
 [ceph04-vm][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
 [ceph04-vm][INFO  ] Running command: udevadm trigger
 --subsystem-match=block --action=add
 [ceph_deploy.osd][DEBUG ] Preparing host ceph04-vm disk /dev/sdb journal
 /dev/ram0 activate True
 [ceph04-vm][INFO  ] Running command: ceph-disk-prepare --zap-disk
 --fs-type btrfs --cluster ceph -- /dev/sdb /dev/ram0
 [ceph04-vm][DEBUG ]
 
 [ceph04-vm][DEBUG ] Caution: Found protective or hybrid MBR and corrupt
 GPT. Using GPT, but disk
 [ceph04-vm][DEBUG ] verification and recovery are STRONGLY recommended.
 [ceph04-vm][DEBUG ]
 
 [ceph04-vm][DEBUG ] GPT data structures destroyed! You may now partition
 the disk using fdisk or
 [ceph04-vm][DEBUG ] other utilities.
 [ceph04-vm][DEBUG ] The operation has completed successfully.
 [ceph04-vm][DEBUG ] Creating new GPT entries.
 [ceph04-vm][DEBUG ] Information: Moved requested sector from 34 to 2048 in
 [ceph04-vm][DEBUG ] order to align on 2048-sector boundaries.
 [ceph04-vm][WARNIN] Caution: invalid backup GPT header, but valid main
 header; regenerating
 [ceph04-vm][WARNIN] backup header from main header.
 [ceph04-vm][WARNIN]
 [ceph04-vm][WARNIN] WARNING:ceph-disk:OSD will not be hot-swappable if
 journal is not the same device as the osd data
 [ceph04-vm][WARNIN] Could not create partition 2 from 34 to 10485793
 [ceph04-vm][WARNIN] Unable to set partition 2's name to 'ceph journal'!
 [ceph04-vm][WARNIN] Could not change partition 2's type code to
 45b0969e-9b03-4f30-b4c6-b4b80ceff106!
 [ceph04-vm][WARNIN] Error encountered; not saving changes.
 [ceph04-vm][WARNIN] ceph-disk: Error: Command '['/sbin/sgdisk',
 '--new=2:0:+5120M', '--change-name=2:ceph journal',
 '--partition-guid=2:ea326680-d389-460d-bef1-3c6bd0ab83c5',
 '--typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106', '--mbrtogpt', '--',
 '/dev/ram0']' returned non-zero exit status 4
 [ceph04-vm][ERROR ] RuntimeError: command returned non-zero exit status: 1

Re: [ceph-users] Persistent Error on osd activation

2014-08-01 Thread debian Only
I have met the same issue when I want to use prepare.
When I use --zap-disk it is OK, but if I use prepare to define the journal
device, it fails:
ceph-disk-prepare --zap-disk --fs-type btrfs --cluster ceph -- /dev/sdb
/dev/sdc



2014-07-01 1:00 GMT+07:00 Iban Cabrillo cabri...@ifca.unican.es:

 Hi Alfredo,
   During this morning, I have purged all the deployment.
   I just prepared 4 SAN Servers with 4 FC-Atacched disk (2.7 TB per disk)
 each one of them.

   Tomorrow I will try to deploy anew installation leaving the VMs machines
 as mons and the OSDs with this physical servers.

   The local disk Is SAS RAID1, Should I put the journal over the local
 disk (SAS RAID1), or should be a better solution used the RAID5 nlSAS disk
 FC-attached (journal + data on the same disk)?

   Which would be the recommended size for the journal? 10GB per disk, for
 example?

 Regards, I


 2014-06-30 18:50 GMT+02:00 Alfredo Deza alfredo.d...@inktank.com:

 On Mon, Jun 30, 2014 at 11:22 AM, Iban Cabrillo cabri...@ifca.unican.es
 wrote:
  Hi Alfredo and folk,
Could you have a look at this?
Someone else has any idea why i am getting this error?
 
  Thanks in advance, I
 
 
 
  2014-06-27 16:37 GMT+02:00 Iban Cabrillo cabri...@ifca.unican.es:
 
  Hi Alfredo,
   This is the complete procedure:
 
 
On OSD node:
 
  [ceph@ceph02 ~]$ sudo parted /dev/xvdb
 
  GNU Parted 2.1
  Using /dev/xvdb
  Welcome to GNU Parted! Type 'help' to view a list of commands.
  (parted) p
  Model: Xen Virtual Block Device (xvd)
  Disk /dev/xvdb: 107GB
  Sector size (logical/physical): 512B/512B
  Partition Table: gpt
 
  Number  Start  End  Size  File system  Name  Flags
 
  [ceph@ceph02 ~]$ sudo ls -la /var/lib/ceph/tmp/
  total 8
  drwxr-xr-x 2 root root 4096 Jun 27 16:30 .
  drwxr-xr-x 7 root root 4096 Jun 26 22:30 ..
  [ceph@ceph02 ~]$ sudo ls -la /var/lib/ceph/osd/
  total 8
  drwxr-xr-x 2 root root 4096 Jun 27 12:14 .
  drwxr-xr-x 7 root root 4096 Jun 26 22:30 ..
 
  On ceph admin node:
 
  [ceph@cephadm ~]$ sudo ceph osd tree
  # idweighttype nameup/downreweight
  -10.14root default
  -20.009995host ceph02
  10.009995osd.1DNE
  -30.03999host ceph04
  40.03999osd.4up1
  -40.09host ceph03
  60.09osd.6up1
 
 
  [ceph@cephadm ceph-cloud]$ ceph-deploy osd prepare ceph02:xvdb
  [ceph_deploy.conf][DEBUG ] found configuration file at:
  /home/ceph/.cephdeploy.conf
  [ceph_deploy.cli][INFO  ] Invoked (1.5.5): /usr/bin/ceph-deploy osd
  prepare ceph02:xvdb
  [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks
 ceph02:/dev/xvdb:
  [ceph02][DEBUG ] connected to host: ceph02
  [ceph02][DEBUG ] detect platform information from remote host
  [ceph02][DEBUG ] detect machine type
  [ceph_deploy.osd][INFO  ] Distro info: Scientific Linux 6.2 Carbon
  [ceph_deploy.osd][DEBUG ] Deploying osd to ceph02
  [ceph02][DEBUG ] write cluster configuration to
 /etc/ceph/{cluster}.conf
  [ceph02][INFO  ] Running command: sudo udevadm trigger
  --subsystem-match=block --action=add
  [ceph_deploy.osd][DEBUG ] Preparing host ceph02 disk /dev/xvdb journal
  None activate False
  [ceph02][INFO  ] Running command: sudo ceph-disk-prepare --fs-type xfs
  --cluster ceph -- /dev/xvdb
  [ceph02][DEBUG ] Setting name!
  [ceph02][DEBUG ] partNum is 1
  [ceph02][DEBUG ] REALLY setting name!
  [ceph02][DEBUG ] The operation has completed successfully.
  [ceph02][DEBUG ] Setting name!
  [ceph02][DEBUG ] partNum is 0
  [ceph02][DEBUG ] REALLY setting name!
  [ceph02][DEBUG ] The operation has completed successfully.
  [ceph02][DEBUG ] meta-data=/dev/xvdb1 isize=2048
 agcount=4,
  agsize=5897919 blks
  [ceph02][DEBUG ]  =   sectsz=512   attr=2
  [ceph02][DEBUG ] data =   bsize=4096
  blocks=23591675, imaxpct=25
  [ceph02][DEBUG ]  =   sunit=0  swidth=0
  blks
  [ceph02][DEBUG ] naming   =version 2  bsize=4096
 ascii-ci=0
  [ceph02][DEBUG ] log  =internal log   bsize=4096
  blocks=11519, version=2
  [ceph02][DEBUG ]  =   sectsz=512   sunit=0
  blks, lazy-count=1
  [ceph02][DEBUG ] realtime =none   extsz=4096
 blocks=0,
  rtextents=0
  [ceph02][DEBUG ] The operation has completed successfully.
  [ceph02][WARNIN] INFO:ceph-disk:Will colocate journal with data on
  /dev/xvdb
  [ceph02][INFO  ] checking OSD status...
  [ceph02][INFO  ] Running command: sudo ceph --cluster=ceph osd stat
  --format=json
  [ceph_deploy.osd][DEBUG ] Host ceph02 is now ready for osd use.
 
 
 
  If i make create instead of prepare do the same (create do not make the
  trick prepare+activate )
 
 
  In the OSD:
 
  [ceph@ceph02 ~]$ sudo parted /dev/xvdb
 
  GNU Parted 2.1
  Using /dev/xvdb
  Welcome to GNU Parted! Type 'help' to view a list of commands.
  (parted) p
  Model: Xen Virtual Block Device (xvd)
  Disk 

Re: [ceph-users] Using Ramdisk wi

2014-08-01 Thread debian Only
I am looking for a method to use a ramdisk with Ceph, just for a test
environment; I do not have enough SSDs for each OSD, but I do not know how to
move the OSD journal to a tmpfs or ramdisk.

I hope someone can give some guidance.


2014-07-31 8:58 GMT+07:00 Christian Balzer ch...@gol.com:


 On Wed, 30 Jul 2014 18:17:16 +0200 Josef Johansson wrote:

  Hi,
 
  Just chippin in,
  As RAM is pretty cheap right now, it could be an idea to fill all the
  memory slots in the OSDs, bigger chance that the data you've requested
  is actually in ram already then.
 
 While that is very, VERY true, it won't help his perceived bad read speeds
 much, as they're not really caused by the OSDs per se.

  You should go with DC S3700 400GB for the journals at least..
 
 That's probably going overboard in the other direction.
 While on paper this would be the first model to handle the sequential
 write speeds of 3 HDDs, that kind of scenario is pretty unrealistic.
 Even with just one client writing they will never reach those speeds due
 to FS overhead, parallel writes caused by replication and so forth.

 The only scenario where this makes some sense is one with short, very high
 write spikes that can be handled by the journal (both in size and ceph
 settings like filestore max/min sync interval), followed by long enough
 pauses to scribble the data to the HDDs.

 In the end for nearly all use cases obsessing over high write speeds is a
 fallacy, one is much more likely to run out of steam due to IOPS caused by
 much smaller transactions.

 What would worry me about the small DC 3500 is the fact that it is only
 rated for about 38GB writes/day over 5 years. Now this could be very well
 within the deployment parameters, but we don't know.

 A 200GB DC S3700 should be fine here, higher endurance, about 3 times the
 speed of the DC 3500 120GB for sequential writes and 8 times for write
 IOPS.

 Christian

  Cheers,
  Josef
 
  On 30/07/14 17:12, Christian Balzer wrote:
   On Wed, 30 Jul 2014 10:50:02 -0400 German Anders wrote:
  
   Hi Christian,
 How are you? Thanks a lot for the answers, mine in red.
  
   Most certainly not in red on my mail client...
  
   --- Original message ---
   Asunto: Re: [ceph-users] Using Ramdisk wi
   De: Christian Balzer ch...@gol.com
   Para: ceph-users@lists.ceph.com
   Cc: German Anders gand...@despegar.com
   Fecha: Wednesday, 30/07/2014 11:42
  
  
   Hello,
  
   On Wed, 30 Jul 2014 09:55:49 -0400 German Anders wrote:
  
   Hi Wido,
  
How are you? Thanks a lot for the quick response. I
   know that is
   heavy cost on using ramdisk, but also i want to try that to see if i
   could get better performance, since I'm using a 10GbE network with
   the following configuration and i can't achieve more than 300MB/s of
   throughput on rbd:
  
   Testing the limits of Ceph with a ramdisk based journal to see what
   is possible in terms of speed (and you will find that it is
   CPU/protocol bound) is fine.
   Anything resembling production is a big no-no.
   Got it, did you try flashcache from facebook or dm-cache?
   No.
  
  
  
   MON Servers (3):
2x Intel Xeon E3-1270v3 @3.5Ghz (8C)
32GB RAM
2x SSD Intel 120G in RAID1 for OS
1x 10GbE port
  
   OSD Servers (4):
2x Intel Xeon E5-2609v2 @2.5Ghz (8C)
64GB RAM
2x SSD Intel 120G in RAID1 for OS
3x SSD Intel 120G for Journals (3 SAS disks: 1 SSD
   Journal)
   You're not telling us WHICH actual Intel SSDs you're using.
   If those are DC3500 ones, then 300MB/s totoal isn't a big surprise
   at all,
   as they are capable of 135MB/s writes at most.
   The SSD model is Intel SSDSC2BB120G4 firm D2010370
   That's not really an answer, but then again Intel could have chosen
   model numbers that resemble their product names.
  
   That is indeed a DC 3500, so my argument stands.
   With those SSDs for your journals, much more than 300MB/s per node is
   simply not possible, never mind how fast or slow the HDDs perform.
  
  
  
9x SAS 3TB 6G for OSD
   That would be somewhere over 1GB/s in theory, but give file system
   and other overheads (what is your replication level?) that's a very
   theoretical value indeed.
   The RF is 2, so perf should be much better, also notice that read
   perf is really poor, around 62MB/s...
  
   A replication factor of 2 means that each write is amplified by 2.
   So half of your theoretical performance is gone already.
  
   Do your tests with atop or iostat running on all storage nodes.
   Determine where the bottleneck is, the journals SSDs or the HDDs or
   (unlikely) something else.
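   As a sketch, something like iostat -xmt 5 (from the sysstat package) on
   each OSD node, watching the %util and await columns, will usually show
   whether the journal SSDs or the HDDs are the ones saturating.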
  
   Read performance sucks balls with RBD (at least individually); it can
   be improved by fondling the readahead value. See:
  
   http://permalink.gmane.org/gmane.comp.file-systems.ceph.user/8817
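   
   (For the kernel rbd client that boils down to something like
   
       echo 4096 > /sys/block/rbd0/queue/read_ahead_kb
   
   where rbd0 and 4096 are just example values.)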
  
   This is something the Ceph developers are aware of and hopefully will
   

[ceph-users] Is possible to use Ramdisk for Ceph journal ?

2014-07-31 Thread debian Only
Dear ,

I have a test environment running Ceph Firefly 0.80.4 on Debian 7.5.
I do not have enough SSDs for each OSD.
I want to test Ceph performance by putting the journal on a ramdisk or tmpfs,
but adding a new OSD with separate devices for OSD data and journal fails.

First, I tested mounting a RAM device as a filesystem and making it persistent;
it can recover its data from the last archive when the system boots.
  ramdisk.sh 
#! /bin/sh
### BEGIN INIT INFO
# Provides:          Ramdisk
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Ramdisk
### END INIT INFO
# /etc/init.d/ramdisk.sh
#

case "$1" in
 start)
   echo "Copying files to ramdisk"
   cd /mnt
   # create a fresh filesystem on the ram device and mount it
   mkfs.btrfs /dev/ram0 >> /var/log/ramdisk_sync.log
   mount /dev/ram0 /mnt/ramdisk/
   # restore the previous contents from the backup archive
   tar --lzop -xvf ramdisk-backup.tar.lzop >> /var/log/ramdisk_sync.log
   echo "[$(date '+%Y-%m-%d %H:%M')] Ramdisk Synched from HD" >> /var/log/ramdisk_sync.log
   ;;
 sync)
   echo "Synching files from ramdisk to Harddisk"
   echo "[$(date '+%Y-%m-%d %H:%M')] Ramdisk Synched to HD" >> /var/log/ramdisk_sync.log
   cd /mnt
   # keep the previous backup, then archive the current ramdisk contents
   mv -f ramdisk-backup.tar.lzop ramdisk-backup-old.tar.lzop
   tar --lzop -cvf ramdisk-backup.tar.lzop ramdisk >> /var/log/ramdisk_sync.log
   ;;
 stop)
   echo "Synching logfiles from ramdisk to Harddisk"
   echo "[$(date '+%Y-%m-%d %H:%M')] Ramdisk Synched to HD" >> /var/log/ramdisk_sync.log
   cd /mnt
   tar --lzop -cvf ramdisk-backup.tar.lzop ramdisk >> /var/log/ramdisk_sync.log
   ;;
 *)
   echo "Usage: /etc/init.d/ramdisk {start|stop|sync}"
   exit 1
   ;;
esac

exit 0

#
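
(On Debian this kind of script would typically be enabled with something like
update-rc.d ramdisk.sh defaults, plus a cron entry that runs the sync action
periodically; the exact schedule is left out here.)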

Then I want to add a new OSD that uses the ramdisk for its journal.

I have tried 3 ways; all failed.
1. ceph-deploy osd --zap-disk --fs-type btrfs create ceph04-vm:/dev/sdb:/dev/ram0  (device way)
2. ceph-deploy osd prepare ceph04-vm:/mnt/osd:/mnt/ramdisk  (directory way)
3. ceph-deploy osd prepare ceph04-vm:/dev/sdb:/mnt/ramdisk

Could some expert give me some guidance on this?

 some log#
root@ceph-admin:~/my-cluster# ceph-deploy osd --zap-disk --fs-type btrfs
create ceph04-vm:/dev/sdb:/dev/ram0
[ceph_deploy.conf][DEBUG ] found configuration file at:
/root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.9): /usr/bin/ceph-deploy osd
--zap-disk --fs-type btrfs create ceph04-vm:/dev/sdb:/dev/ram0
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks
ceph04-vm:/dev/sdb:/dev/ram0
[ceph04-vm][DEBUG ] connected to host: ceph04-vm
[ceph04-vm][DEBUG ] detect platform information from remote host
[ceph04-vm][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: debian 7.6 wheezy
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph04-vm
[ceph04-vm][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph04-vm][INFO  ] Running command: udevadm trigger
--subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph04-vm disk /dev/sdb journal
/dev/ram0 activate True
[ceph04-vm][INFO  ] Running command: ceph-disk-prepare --zap-disk --fs-type
btrfs --cluster ceph -- /dev/sdb /dev/ram0
[ceph04-vm][DEBUG ]

[ceph04-vm][DEBUG ] Caution: Found protective or hybrid MBR and corrupt
GPT. Using GPT, but disk
[ceph04-vm][DEBUG ] verification and recovery are STRONGLY recommended.
[ceph04-vm][DEBUG ]

[ceph04-vm][DEBUG ] GPT data structures destroyed! You may now partition
the disk using fdisk or
[ceph04-vm][DEBUG ] other utilities.
[ceph04-vm][DEBUG ] The operation has completed successfully.
[ceph04-vm][DEBUG ] Creating new GPT entries.
[ceph04-vm][DEBUG ] Information: Moved requested sector from 34 to 2048 in
[ceph04-vm][DEBUG ] order to align on 2048-sector boundaries.
[ceph04-vm][WARNIN] Caution: invalid backup GPT header, but valid main
header; regenerating
[ceph04-vm][WARNIN] backup header from main header.
[ceph04-vm][WARNIN]
[ceph04-vm][WARNIN] WARNING:ceph-disk:OSD will not be hot-swappable if
journal is not the same device as the osd data
[ceph04-vm][WARNIN] Could not create partition 2 from 34 to 10485793
[ceph04-vm][WARNIN] Unable to set partition 2's name to 'ceph journal'!
[ceph04-vm][WARNIN] Could not change partition 2's type code to
45b0969e-9b03-4f30-b4c6-b4b80ceff106!
[ceph04-vm][WARNIN] Error encountered; not saving changes.
[ceph04-vm][WARNIN] ceph-disk: Error: Command '['/sbin/sgdisk',
'--new=2:0:+5120M', '--change-name=2:ceph journal',
'--partition-guid=2:ea326680-d389-460d-bef1-3c6bd0ab83c5',
'--typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106', '--mbrtogpt', '--',
'/dev/ram0']' returned non-zero exit status 4
[ceph04-vm][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare
--zap-disk --fs-type btrfs --cluster ceph -- /dev/sdb /dev/ram0
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
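
The sgdisk failure above is because ceph-disk tries to carve a 5 GB GPT
journal partition out of /dev/ram0, which (unless the ram device was created
much larger than the default) simply does not have the space. A hedged,
untested workaround sketch: skip ceph-disk's journal partitioning and point
the OSD at a plain file on the ramdisk/tmpfs mount via ceph.conf, then set
the OSD up manually (the OSD id, host and paths below are only examples):

[osd.3]
    host = ceph04-vm
    # journal as a plain file on the ramdisk instead of a partition
    osd journal = /mnt/ramdisk/osd.3-journal
    # must fit inside the ramdisk; size is in MB
    osd journal size = 1024

With that in place, ceph-osd -i 3 --mkfs --mkjournal should pick the journal
path up from the config instead of expecting a partition (again, untested
here).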

[ceph-users] how to deploy standalone radosgw with Firefly 0.80.4 on Debian

2014-07-23 Thread debian Only
Dear all

I gather that Firefly 0.80.4 has a new feature that removes the need to
install Apache and fastcgi, am I right?
*Standalone radosgw (experimental): The radosgw process can now run in a
standalone mode without an apache (or similar) web server or fastcgi. This
simplifies deployment and can improve performance. - See more at:
http://ceph.com/releases/v0-80-firefly-released/#sthash.uP9T3U6d.dpuf*

However, I did not find how to deploy the Firefly standalone radosgw and test it.
Could some expert give me some advice?

many thanks.
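
From what I could gather, the standalone mode means running radosgw with the
experimental civetweb frontend instead of apache + fastcgi; a rough, untested
sketch of the ceph.conf section involved (the instance name, host, port and
keyring path are just assumptions):

[client.radosgw.gateway]
    host = ceph01-vm
    keyring = /etc/ceph/ceph.client.radosgw.keyring
    # embedded civetweb web server, no apache/fastcgi needed
    rgw frontends = civetweb port=7480
    log file = /var/log/ceph/client.radosgw.gateway.log

with the daemon then started as something like
radosgw -n client.radosgw.gateway
but I could not find this documented end to end, hence the question above.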

my ceph is ready:

root@ceph-admin:~/my-cluster# ceph -s
cluster ae3da4d2-eef0-47cf-a872-24df8f2c8df4
 health HEALTH_OK
 monmap e3: 3 mons at {ceph01-vm=
192.168.123.251:6789/0,ceph02-vm=192.168.123.252:6789/0,ceph03-vm=192.168.123.253:6789/0},
election epoch 8, quorum 0,1,2 ceph01-vm,ceph02-vm,ceph03-vm
 mdsmap e13: 1/1/1 up {0=ceph01-vm=up:active}
 osdmap e67: 3 osds: 3 up, 3 in
  pgmap v555: 392 pgs, 5 pools, 10893 kB data, 34 objects
48056 kB used, 281 GB / 284 GB avail
 392 active+clean
root@ceph-admin:~/my-cluster# ceph osd tree
# id    weight  type name       up/down reweight
-1      0.27    root default
-2      0.09            host ceph02-vm
0       0.09                    osd.0   up      1
-3      0.09            host ceph03-vm
1       0.09                    osd.1   up      1
-4      0.09            host ceph01-vm
2       0.09                    osd.2   up      1
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com