[ceph-users] Mds lock

2013-08-16 Thread Jun Jun8 Liu
Hi all,
 I am doing some research on the mds.

 There are so many lock types and states, but I couldn't find any
document describing them.

 Could anybody tell me what "loner" and "lock_mix" are?

thanks
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] striped rbd volumes with Cinder

2013-08-16 Thread Dan Van Der Ster
On Aug 16, 2013, at 1:26 AM, Sébastien Han sebastien@enovance.com wrote:

 Hi,
 
 I have a patch on the way to use striping.
 
 The first version is on the operator side (the operator configures it and 
 this will take effect for all created volumes).

Cool! If you share it we'd be happy to help with testing.
Cheers, Dan
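
For anyone who wants to experiment before the Cinder patch lands, RBD's striping parameters can already be set directly with the rbd CLI. A rough sketch, assuming a format 2 image and a pool named volumes (flag support depends on the rbd version in use):

# Create a format 2 image with an explicit stripe layout (illustrative values:
# 64 KB stripe unit, data striped across 16 objects).
rbd create volumes/striped-test --size 10240 --image-format 2 \
    --stripe-unit 65536 --stripe-count 16

# Check the layout that was actually applied.
rbd info volumes/striped-test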


 
 A second patch could also bring this as metadata for the end-user.
 
 That's gonna be for Havana, but could be a potential backport for Grizzly.
 
 Stay tuned.
 
 
 Sébastien Han
 Cloud Engineer
 
 Always give 100%. Unless you're giving blood.
 
 On August 15, 2013 at 4:45:54 PM, Dan Van Der Ster 
 (daniel.vanders...@cern.ch) wrote:
 
 Hi, 
 Did anyone manage to use striped rbd volumes with OpenStack Cinder 
 (Grizzly)? I noticed in the current OpenStack master code that there are 
 options for striping the new _backup_ volumes, but there's still nothing to 
 do with striping in the master Cinder rbd driver. Is there a way to set some 
 defaults in the ceph.conf file, for example? 
 Cheers, Dan 
 ___ 
 ceph-users mailing list 
 ceph-users@lists.ceph.com 
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Flapping osd / continuously reported as failed

2013-08-16 Thread Mostowiec Dominik
Hi,
Thanks for your response.

 It's possible, as deep scrub in particular will add a bit of load (it
 goes through and compares the object contents). 

Is it possible that scrubbing blocks access (RW, or only W) to the bucket index 
while it checks the .dir... file?
When the rgw index is very large, I guess that takes some time.

 Are you not having any
 flapping issues any more, and did you try and find when it started the
 scrub to see if it matched up with your troubles?

No, I didn't.
But on our second cluster with the same problem, disabling scrubbing also helped.

 I'd be hesitant to turn it off as scrubbing can uncover corrupt
 objects etc, but you can configure it with the settings at
 http://ceph.com/docs/master/rados/configuration/osd-config-ref/#scrubbing.
 (Always check the surprisingly-helpful docs when you need to do some
 config or operations work!)

I think changing the scrub timeout or interval settings won't fully remove the issue.
Would changing osd deep scrub stride to a smaller value make scrubbing lighter?
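
For reference, these are roughly the knobs that doc page describes; the values below are illustrative only, not recommendations:

[osd]
osd scrub min interval = 86400        # don't scrub a given PG more than once a day
osd scrub max interval = 604800       # but force a scrub at least once a week
osd deep scrub interval = 2419200     # deep scrub monthly instead of weekly
osd deep scrub stride = 131072        # smaller read size per deep-scrub operation

To confirm whether scrubbing really is the trigger, the cluster-wide flags below (if your release supports them) stop new scrubs until unset:

ceph osd set noscrub
ceph osd set nodeep-scrub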

--
Regards
Dominik
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] [ANN] ceph-deploy 1.2.1 released

2013-08-16 Thread Alfredo Deza
Hi all,

There is a new bug-fix release of ceph-deploy, the easy ceph deployment tool.

ceph-deploy can be installed from three different sources depending on
your package manager and distribution, currently available for RPMs,
DEBs and directly as a Python package from the Python Package Index.

Documentation has been updated with installation instructions:

https://github.com/ceph/ceph-deploy#installation
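
For example (illustrative commands; pick the one that matches your platform and already-configured repos):

# Python Package Index
pip install ceph-deploy

# Debian/Ubuntu with the ceph.com apt repo configured
sudo apt-get update && sudo apt-get install ceph-deploy

# RPM-based distros with the ceph.com yum repo configured
sudo yum install ceph-deploy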

This is the list of all fixes that went into this release, which can
also be found in the CHANGELOG.rst file in ceph-deploy's git repo:

* Print the help when no arguments are passed
* Add a ``--version`` flag
* Show the version in the help menu
* Catch ``DeployError`` exceptions nicely with the logger
* Fix blocked command when calling ``mon create``
* Default to ``dumpling`` for installs
* Halt execution on remote exceptions/errors

Make sure you update!

Thanks,


Alfredo
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Destroyed Ceph Cluster

2013-08-16 Thread Georg Höllrigl

Hello,

I'm still evaluating ceph - now a test cluster with the 0.67 dumpling.
I've created the setup with ceph-deploy from GIT.
I've recreated a bunch of OSDs, to give them another journal.
There already was some test data on these OSDs.
I've already recreated the missing PGs with ceph pg force_create_pg


HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean; 5 requests 
are blocked > 32 sec; mds cluster is degraded; 1 mons down, quorum 0,1,2 
vvx-ceph-m-01,vvx-ceph-m-02,vvx-ceph-m-03


Any idea how to fix the cluster, besides completely rebuilding the 
cluster from scratch? What if such a thing happens in a production 
environment...


The pgs in ceph pg dump have all been showing "creating" for some time now:

2.3d    0   0   0   0   0   0   0   creating    2013-08-16 13:43:08.186537   0'0   0:0   []   []   0'0   0.00   0'0   0.00


Is there a way to just dump the data, that was on the discarded OSDs?




Kind Regards,
Georg
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] large memory leak on scrubbing

2013-08-16 Thread Sylvain Munaut
Hi,


 Is this a known bug in this version?

Yes.

 (Do you know some workaround to fix this?)

Upgrade.


Cheers,

Sylvain
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy and journal on separate disk

2013-08-16 Thread Pavel Timoschenkov
So that is after running `disk zap`. What does it say after using 
ceph-deploy and failing?

After ceph-disk -v prepare /dev/sdaa /dev/sda1:

root@ceph001:~# parted /dev/sdaa print
Model: ATA ST3000DM001-1CH1 (scsi)
Disk /dev/sdaa: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End SizeFile system  Name   Flags
1  1049kB  3001GB  3001GB  xfs  ceph data

And

root@ceph001:~# parted /dev/sda1 print
Model: Unknown (unknown)
Disk /dev/sda1: 10.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End  Size  File system  Name  Flags

With the same errors:

root@ceph001:~# ceph-disk -v prepare /dev/sdaa /dev/sda1
DEBUG:ceph-disk:Journal /dev/sda1 is a partition
WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same 
device as the osd data
DEBUG:ceph-disk:Creating osd partition on /dev/sdaa
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Creating xfs fs on /dev/sdaa1
meta-data=/dev/sdaa1 isize=2048   agcount=32, agsize=22892700 blks
 =   sectsz=512   attr=2, projid32bit=0
data =   bsize=4096   blocks=732566385, imaxpct=5
 =   sunit=0  swidth=0 blks
naming   =version 2  bsize=4096   ascii-ci=0
log  =internal log   bsize=4096   blocks=357698, version=2
 =   sectsz=512   sunit=0 blks, lazy-count=1
realtime =none   extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sdaa1 on /var/lib/ceph/tmp/mnt.UkJbwx with 
options noatime
mount: /dev/sdaa1: more filesystems detected. This should not happen,
   use -t type to explicitly specify the filesystem type or
   use wipefs(8) to clean up the device.

mount: you must specify the filesystem type
ceph-disk: Mounting filesystem failed: Command '['mount', '-o', 'noatime', 
'--', '/dev/sdaa1', '/var/lib/ceph/tmp/mnt.UkJbwx']' returned non-zero exit 
status 32
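
The "more filesystems detected" message suggests stale filesystem signatures are still present on the freshly created data partition. One possible cleanup, sketched under the assumption that /dev/sdaa holds nothing you want to keep:

# List any leftover filesystem signatures wipefs can see on the new partition.
wipefs /dev/sdaa1

# Erase them (destructive!), re-zap the disk and retry the prepare.
wipefs -a /dev/sdaa1
ceph-deploy disk zap ceph001:sdaa
ceph-disk -v prepare /dev/sdaa /dev/sda1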


From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Wednesday, August 14, 2013 7:44 PM
To: Pavel Timoschenkov
Cc: Samuel Just; ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph-deploy and journal on separate disk



On Wed, Aug 14, 2013 at 10:47 AM, Pavel Timoschenkov 
pa...@bayonetteas.onmicrosoft.commailto:pa...@bayonetteas.onmicrosoft.com 
wrote:


From: Alfredo Deza 
[mailto:alfredo.d...@inktank.commailto:alfredo.d...@inktank.com]
Sent: Wednesday, August 14, 2013 5:41 PM
To: Pavel Timoschenkov
Cc: Samuel Just; ceph-us...@ceph.commailto:ceph-us...@ceph.com

Subject: Re: [ceph-users] ceph-deploy and journal on separate disk



On Wed, Aug 14, 2013 at 7:41 AM, Pavel Timoschenkov 
pa...@bayonetteas.onmicrosoft.commailto:pa...@bayonetteas.onmicrosoft.com 
wrote:
It looks like at some point the filesystem is not passed to the options. 
Would you mind running the `ceph-disk-prepare` command again but with
the --verbose flag?
I think from the output above (correct me if I am mistaken) that it would 
be something like:

ceph-disk-prepare --verbose -- /dev/sdaa /dev/sda1

Hi.
If I'm running:
ceph-deploy disk zap ceph001:sdaa ceph001:sda1
and
ceph-disk -v prepare /dev/sdaa /dev/sda1, I get the same errors:
==
root@ceph001:~# ceph-disk -v prepare /dev/sdaa /dev/sda1
DEBUG:ceph-disk:Journal /dev/sda1 is a partition
WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same 
device as the osd data
DEBUG:ceph-disk:Creating osd partition on /dev/sdaa
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Creating xfs fs on /dev/sdaa1
meta-data=/dev/sdaa1 isize=2048   agcount=32, agsize=22892700 blks
 =   sectsz=512   attr=2, projid32bit=0
data =   bsize=4096   blocks=732566385, imaxpct=5
 =   sunit=0  swidth=0 blks
naming   =version 2  bsize=4096   ascii-ci=0
log  =internal log   bsize=4096   blocks=357698, version=2
 =   sectsz=512   sunit=0 blks, lazy-count=1
realtime =none   extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sdaa1 on /var/lib/ceph/tmp/mnt.EGTIq2 with 
options noatime
mount: /dev/sdaa1: more filesystems detected. This should not happen,
   use -t type to explicitly specify the filesystem type or
   use wipefs(8) to clean up the device.

mount: you must specify the filesystem type
ceph-disk: Mounting filesystem failed: Command '['mount', '-o', 'noatime', 
'--', '/dev/sdaa1', '/var/lib/ceph/tmp/mnt.EGTIq2']' returned non-zero exit 
status 32

If I execute this command separately for each disk, it looks OK:

For sdaa:

root@ceph001:~# ceph-disk -v prepare 

Re: [ceph-users] Destroyed Ceph Cluster

2013-08-16 Thread Mark Nelson

Hi Georg,

I'm not an expert on the monitors, but that's probably where I would 
start.  Take a look at your monitor logs and see if you can get a sense 
for why one of your monitors is down.  Some of the other devs will 
probably be around later that might know if there are any known issues 
with recreating the OSDs and missing PGs.


Mark
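
A few commands that may help narrow down which monitor is out and why; a sketch, assuming admin credentials on the node and the default log locations:

# Which mons are in quorum, and which one is missing?
ceph health detail
ceph mon stat

# On the suspect monitor host: is the daemon running, and what does its log say?
# (The log name assumes the mon id is the short hostname, as ceph-deploy sets it up.)
sudo service ceph status
tail -n 200 /var/log/ceph/ceph-mon.$(hostname -s).log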

On 08/16/2013 08:21 AM, Georg Höllrigl wrote:

Hello,

I'm still evaluating ceph - now a test cluster with the 0.67 dumpling.
I've created the setup with ceph-deploy from GIT.
I've recreated a bunch of OSDs, to give them another journal.
There already was some test data on these OSDs.
I've already recreated the missing PGs with ceph pg force_create_pg


HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean; 5 requests 
are blocked > 32 sec; mds cluster is degraded; 1 mons down, quorum 
0,1,2 vvx-ceph-m-01,vvx-ceph-m-02,vvx-ceph-m-03


Any idea how to fix the cluster, besides completely rebuilding the 
cluster from scratch? What if such a thing happens in a production 
environment...


The pgs in ceph pg dump have all been showing "creating" for some time now:

2.3d    0   0   0   0   0   0   0   creating    2013-08-16 13:43:08.186537   0'0   0:0   []   []   0'0   0.00   0'0   0.00


Is there a way to just dump the data, that was on the discarded OSDs?




Kind Regards,
Georg
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] CEPH-DEPLOY TRIALS/EVALUATION RESULT ON CEPH VERSION 61.7

2013-08-16 Thread Alfredo Deza
On Fri, Aug 9, 2013 at 12:05 PM, Aquino, BenX O benx.o.aqu...@intel.com wrote:
 CEPH-DEPLOY EVALUATION ON CEPH VERSION 61.7

 ADMINNODE:

 root@ubuntuceph900athf1:~# ceph -v

 ceph version 0.61.7 (8f010aff684e820ecc837c25ac77c7a05d7191ff)

 root@ubuntuceph900athf1:~#



 SERVERNODE:

 root@ubuntuceph700athf1:/etc/ceph# ceph -v

 ceph version 0.61.7 (8f010aff684e820ecc837c25ac77c7a05d7191ff)

 root@ubuntuceph700athf1:/etc/ceph#



 ===:

 Trial-1 of using ceph-deploy results:
 (http://ceph.com/docs/next/start/quick-ceph-deploy/)



 My trial-1 scenario is using ceph-deploy to replace 2 OSD's (osd.2 and
 OSD.11) of a ceph node.



 Observation:

 ceph-deploy created symbolic links: ceph-0 --> ceph-2 dir and
 ceph-1 --> ceph-11 dir.

 I did not run into any errors or issues in this trial.



 One concern:

 -- ceph-deploy did not update the Linux fstab with the mount point of the osd data.



 ===



 Trial 2: (http://ceph.com/docs/next/start/quick-ceph-deploy/)



 I noticed my node did not have any contents in
 /var/lib/ceph/boostrap-{osd}|{mds}.

 Result: FAILURE TO MOVE FORWARD BEYOND THIS STEP



 Tip from http://ceph.com/docs/next/start/quick-ceph-deploy/

 If you don’t have these keyrings, you may not have created a monitor
 successfully,

 or you may have a problem with your network connection.

 Ensure that you complete this step such that you have the foregoing keyrings
 before proceeding further.



 Tip from (http://ceph.com/docs/next/start/quick-ceph-deploy/:

 You may repeat this procedure. If it fails, check to see if the
 /var/lib/ceph/boostrap-{osd}|{mds} directories on the server node have
 keyrings.

 If they do not have keyrings, try adding the monitor again; then, return to
 this step.



 My WORKAROUND1:

 COPIED CONTENTS OF /var/lib/ceph/boostrap-{osd}|{mds} FROM ANOTHER NODE



 My WORKAROUND2:

 USED CREATE A NEW CLUSTER PROCEDURE with CEPH-DEPLOY to create missing
 keyrings.



 =:

 TRIAL-3: Attempt to build a new 1-node cluster using ceph-deploy:



 RESULT FAILED TO GO BEYOND THE ERROR LOGS BELOW:



 root@ubuntuceph900athf1:~/my-cluster# ceph-deploy osd prepare
 ubuntuceph700athf1:sde1:/var/lib/ceph/journal/osd.0.journal

 ceph-disk-prepare -- /dev/sde1 /var/lib/ceph/journal/osd.0.journal returned
 1

 meta-data=/dev/sde1  isize=2048   agcount=4, agsize=30524098
 blks

  =   sectsz=512   attr=2, projid32bit=0

 data =   bsize=4096   blocks=122096390, imaxpct=25

  =   sunit=0  swidth=0 blks

 naming   =version 2  bsize=4096   ascii-ci=0

 log  =internal log   bsize=4096   blocks=59617, version=2

  =   sectsz=512   sunit=0 blks, lazy-count=1

 realtime =none   extsz=4096   blocks=0, rtextents=0



 WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same
 device as the osd data

 umount: /var/lib/ceph/tmp/mnt.iMsc1G: device is busy.

 (In some cases useful info about processes that use

  the device is found by lsof(8) or fuser(1))

 ceph-disk: Unmounting filesystem failed: Command '['/bin/umount', '--',
 '/var/lib/ceph/tmp/mnt.iMsc1G']' returned non-zero exit status 1



 ceph-deploy: Failed to create 1 OSDs



 root@ubuntuceph900athf1:~/my-cluster# ceph-deploy osd prepare
 ubuntuceph700athf1:sde1

 ceph-disk-prepare -- /dev/sde1 returned 1

 meta-data=/dev/sde1  isize=2048   agcount=4, agsize=30524098
 blks

  =   sectsz=512   attr=2, projid32bit=0

 data =   bsize=4096   blocks=122096390, imaxpct=25

  =   sunit=0  swidth=0 blks

 naming   =version 2  bsize=4096   ascii-ci=0

 log  =internal log   bsize=4096   blocks=59617, version=2

  =   sectsz=512   sunit=0 blks, lazy-count=1

 realtime =none   extsz=4096   blocks=0, rtextents=0



 umount: /var/lib/ceph/tmp/mnt.0JxBp1: device is busy.

 (In some cases useful info about processes that use

  the device is found by lsof(8) or fuser(1))

 ceph-disk: Unmounting filesystem failed: Command '['/bin/umount', '--',
 '/var/lib/ceph/tmp/mnt.0JxBp1']' returned non-zero exit status 1



 ceph-deploy: Failed to create 1 OSDs

 root@ubuntuceph900athf1:~/my-cluster# ceph-deploy osd prepare
 ubuntuceph700athf1:sde1

 ceph-disk-prepare -- /dev/sde1 returned 1



 ceph-disk: Error: Device is mounted: /dev/sde1



 ceph-deploy: Failed to create 1 OSDs





 Attempted on the local node:

 root@ubuntuceph700athf1:/etc/ceph# ceph-deploy osd prepare
 ubuntuceph700athf1:sde1:/var/lib/ceph/journal/osd.0.journal

 ceph-disk-prepare -- /dev/sde1 /var/lib/ceph/journal/osd.0.journal returned
 1

 ceph-disk: Error: Device is 

[ceph-users] ceph-deploy 1.2.1 ceph-create-keys broken?

2013-08-16 Thread Bernhard Glomm
Hi all,

since ceph-deploy/ceph-create-keys is broken
(see bug 4924)
and mkcephfs is deprecated

is there a howto for deploying the system without using
either of these tools? (especially not ceph-create-keys, 
since that keeps running without doing anything ;-)

Since I have only 5 instances I'd like to set up,
I could do a manual configuration and import that
into my cfengine management, so I wouldn't need to rely on ceph-deploy.

TIA

Bernhard

-- 

Bernhard Glomm
IT Administration

Phone: +49 (30) 86880 134
Fax:   +49 (30) 86880 100
Skype: bernhard.glomm.ecologic

Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy missing?

2013-08-16 Thread Ian Colle
Please note, you do not need to specify the version of deb or rpm if you
want the latest. Just continue to point to http://ceph.com/debian or
http://ceph.com/rpm and you'll get the same thing as
http://ceph.com/debian-dumpling and http://ceph.com/rpm-dumpling
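
Concretely, a sources entry like the first line below keeps tracking the latest stable release, while the second pins a named release; the codename (raring) is just an example:

# /etc/apt/sources.list.d/ceph.list
deb http://ceph.com/debian/ raring main
# ...or, pinned to dumpling explicitly:
deb http://ceph.com/debian-dumpling/ raring main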

Ian R. Colle
Director of Engineering

Inktank
Cell: +1.303.601.7713
Email: i...@inktank.com



Delivering the Future of Storage

 http://www.linkedin.com/in/ircolle
 http://www.twitter.com/ircolle

On 8/16/13 8:26 AM, Alfredo Deza alfredo.d...@inktank.com wrote:

 
 
 
 On Thu, Aug 15, 2013 at 11:45 AM, Alfredo Deza alfredo.d...@inktank.com
 wrote:
 
 
 
 On Thu, Aug 15, 2013 at 11:41 AM, Jim Summers jbsumm...@gmail.com wrote:
 
 I ran:
 
 ceph-deploy mon create chost0 chost1
 
 It seemed to be working and then hung at:
 
 [chost0][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-chost0/done
 [chost0][INFO  ] create a done file to avoid re-doing the mon deployment
 [chost0][INFO  ] create the init path if it does not exist
 [chost0][INFO  ] locating `service` executable...
 [chost0][INFO  ] found `service` executable: /sbin/service
 [chost0][INFO  ] Running command: /sbin/service ceph start mon.chost0
 [chost0][INFO  ] === mon.chost0 ===
 [chost0][INFO  ] Starting Ceph mon.chost0 on chost0...
 [chost0][INFO  ] Starting ceph-create-keys on chost0...
 
 Then it just hung there and never moved on to chost1.
 
 
 There is a fix for this high-priority bug that just got merged into
 ceph-deploy today. We should follow up with a release
 soon. 
 
 If you Ctrl-C this it should still work as expected.
 
 
 
 
 
 On Thu, Aug 15, 2013 at 10:32 AM, Alfredo Deza alfredo.d...@inktank.com
 wrote:
 
 
 
 On Thu, Aug 15, 2013 at 11:10 AM, Jim Summers jbsumm...@gmail.com wrote:
 
 Hello All,
 
 Since the release of dumpling, a couple of things are now not working.  I
 can not yum install ceph-deploy and earlier I tried to manually modify the
 ceph.repo file.  That did get ceph-deploy installed but it did not work.
 So then I switched to the ceph-release that would bring me back to
 cuttlefish.  Can not install ceph-deploy again.
 
 Now I saw an email that ceph-deploy can be installed with pip or
 easy_install.
 
 Let me clarify a bit here :)
 
 Yes, you can install ceph-deploy via pip or easy_install if you choose to,
 and that should just work.
 
 It seems to me that you might be having network issues with pip, or maybe
 just a PyPI hiccup. Try using pip with the
 verbose flag so you can see better what is going on.
 
 fwiw I just did `pip install ceph-deploy` and it worked correctly. Output
 should be similar to this:
 
   Downloading/unpacking ceph-deploy
 Downloading ceph-deploy-1.2.tar.gz
 Running setup.py egg_info for package ceph-deploy
 
   no previously-included directories found matching 'ceph_deploy/test'
   Installing collected packages: ceph-deploy
 Running setup.py install for ceph-deploy\
   Installing ceph-deploy script to /Users/alfredo/.virtualenvs/tmp/bin
   Successfully installed ceph-deploy
   Cleaning up...
 
 When you say you tried to modify the ceph.repo file and got ceph-deploy
 installed but it did not work, what do you mean? How did it not work?
 Any output/explanation for that would help understand this a bit better.
 
 Also, keep in mind that the location for ceph-deploy's repo has changed,
 you should look here http://ceph.com/packages/ceph-extras/rpm/ and add the
 correct repo for your OS.
  
 So I redid the ceph-release to dumpling. Then tried to pip ceph-deploy and
 got:
 
 Retrieving 
 http://ceph.com/rpm-dumpling/el6/noarch/ceph-release-1-0.el6.noarch.rpm
 Preparing...###
 [100%]
1:ceph-release   ###
 [100%]
 # pip install ceph-deploy
 Downloading/unpacking ceph-deploy
   Real name of requirement ceph-deploy is ceph-deploy
   Could not find any downloads that satisfy the requirement ceph-deploy
 No distributions at all found for ceph-deploy
 
 Not sure what is happening.
 
 Ideas?
 
 ceph-deploy was released yesterday to the ceph-extras repo,
 ceph.com/rpm-dumpling, ceph.com/debian-dumpling and the Python
 Package Index, so you *should* be able to install it as before.
 
 
 
 
 On Thu, Aug 15, 2013 at 9:18 AM, bernhard glomm
 bernhard.gl...@ecologic.eu wrote:
 Hi all,
 
 I would like to use ceph in our company and had some test setups running.
 Now all of a sudden ceph-deploy is not in the repos anymore
 
 this is my sources list,
 
 ...
 deb http://archive.ubuntu.com/ubuntu raring main restricted universe
 multiverse
 deb http://security.ubuntu.com/ubuntu raring-security main restricted
 universe multiverse
 deb http://archive.ubuntu.com/ubuntu raring-updates main restricted
 universe multiverse
deb http://ceph.com/debian/ raring main
...
 
 up to last week I had no problem installing ceph-deploy
 
 Any 

Re: [ceph-users] ceph-deploy 1.2.1 ceph-create-keys broken?

2013-08-16 Thread Alfredo Deza
On Fri, Aug 16, 2013 at 10:38 AM, Bernhard Glomm bernhard.gl...@ecologic.eu
 wrote:

 Hi all,

 since ceph-deploy/ceph-create-keys is broken
 (see bug 4924)


That may or may not be related to your specific problem.

Have you looked at the mon logs on the hosts where create-keys is not
working?

For example, this is quite common for me when I have configuration issues
and get log output similar to:

-1 no public_addr or public_network specified, and mon.precise64 not
present in monmap or ceph.conf

Where hostname is precise64, after attempting to do a `ceph-deploy mon
create node1`
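
For that particular error, a minimal ceph.conf sketch that satisfies the check; the fsid, hostname and addresses are placeholders for illustration:

[global]
fsid = <your cluster fsid>
mon initial members = precise64
mon host = 192.168.33.10
public network = 192.168.33.0/24

# alternatively, instead of public network, give the monitor an explicit address:
[mon.precise64]
host = precise64
mon addr = 192.168.33.10:6789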



 and mkcephfs is deprecated

 is there a howto for deploying the system without using
 neither of this tools? (especially not ceph-create-keys
 since that won't stop running without doing anything ;-)

 Since I have only 5 instances I like to set up
 I could do a manually configuration an import that
 into my cfengine management, so wouldn't need to rely on ceph-deploy

 TIA

 Bernhard

 --
 Bernhard Glomm
 IT Administration
 Phone: +49 (30) 86880 134 | Fax: +49 (30) 86880 100 | Skype: bernhard.glomm.ecologic
 Website: http://ecologic.eu
 Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
 GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
 Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH
 --

 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] CEPH-DEPLOY TRIALS/EVALUATION RESULT ON CEPH VERSION 61.7

2013-08-16 Thread Aquino, BenX O
Thanks for the response;

Here's the version:

root@ubuntuceph700athf1:/etc/ceph# aptitude versions ceph-deploy
Package ceph-deploy:
i   1.0-1   
 stable   500 



I noticed that you had a ceph-deploy release the day after my evaluation, 
so yes, I'll definitely try this again in the near future.

Regards,
-ben
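
For what it's worth, upgrading from that 1.0-1 package should just be a package refresh once the repo points at the new ceph-deploy location; a sketch:

# Debian/Ubuntu, with the updated repo configured
sudo apt-get update && sudo apt-get install ceph-deploy

# or straight from PyPI
pip install --upgrade ceph-deploy

# confirm the new version (1.2.1 adds a --version flag)
ceph-deploy --version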



From: Alfredo Deza [alfredo.d...@inktank.com]
Sent: Friday, August 16, 2013 7:31 AM
To: Aquino, BenX O
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] CEPH-DEPLOY TRIALS/EVALUATION RESULT ON CEPH VERSION 
61.7

On Fri, Aug 9, 2013 at 12:05 PM, Aquino, BenX O benx.o.aqu...@intel.com wrote:
 CEPH-DEPLOY EVALUATION ON CEPH VERSION 61.7

 ADMINNODE:

 root@ubuntuceph900athf1:~# ceph -v

 ceph version 0.61.7 (8f010aff684e820ecc837c25ac77c7a05d7191ff)

 root@ubuntuceph900athf1:~#



 SERVERNODE:

 root@ubuntuceph700athf1:/etc/ceph# ceph -v

 ceph version 0.61.7 (8f010aff684e820ecc837c25ac77c7a05d7191ff)

 root@ubuntuceph700athf1:/etc/ceph#



 ===:

 Trial-1 of using ceph-deploy results:
 (http://ceph.com/docs/next/start/quick-ceph-deploy/)



 My trial-1 scenario is using ceph-deploy to replace 2 OSD's (osd.2 and
 OSD.11) of a ceph node.



 Observation:

 ceph-deploy created symbolic links: ceph-0 --> ceph-2 dir and
 ceph-1 --> ceph-11 dir.

 I did not run into any errors or issues in this trial.



 One concern:

 -- ceph-deploy did not update the Linux fstab with the mount point of the osd data.



 ===



 Trial 2: (http://ceph.com/docs/next/start/quick-ceph-deploy/)



 I noticed my node did not have any contents in
 /var/lib/ceph/boostrap-{osd}|{mds}.

 Result: FAILURE TO MOVE FORWARD BEYOND THIS STEP



 Tip from http://ceph.com/docs/next/start/quick-ceph-deploy/

 If you don’t have these keyrings, you may not have created a monitor
 successfully,

 or you may have a problem with your network connection.

 Ensure that you complete this step such that you have the foregoing keyrings
 before proceeding further.



 Tip from (http://ceph.com/docs/next/start/quick-ceph-deploy/:

 You may repeat this procedure. If it fails, check to see if the
 /var/lib/ceph/boostrap-{osd}|{mds} directories on the server node have
 keyrings.

 If they do not have keyrings, try adding the monitor again; then, return to
 this step.



 My WORKAROUND1:

 COPIED CONTENTS OF /var/lib/ceph/boostrap-{osd}|{mds} FROM ANOTHER NODE



 My WORKAROUND2:

 USED CREATE A NEW CLUSTER PROCEDURE with CEPH-DEPLOY to create missing
 keyrings.



 =:

 TRIAL-3: Attempt to build a new 1-node cluster using ceph-deploy:



 RESULT FAILED TO GO BEYOND THE ERROR LOGS BELOW:



 root@ubuntuceph900athf1:~/my-cluster# ceph-deploy osd prepare
 ubuntuceph700athf1:sde1:/var/lib/ceph/journal/osd.0.journal

 ceph-disk-prepare -- /dev/sde1 /var/lib/ceph/journal/osd.0.journal returned
 1

 meta-data=/dev/sde1  isize=2048   agcount=4, agsize=30524098
 blks

  =   sectsz=512   attr=2, projid32bit=0

 data =   bsize=4096   blocks=122096390, imaxpct=25

  =   sunit=0  swidth=0 blks

 naming   =version 2  bsize=4096   ascii-ci=0

 log  =internal log   bsize=4096   blocks=59617, version=2

  =   sectsz=512   sunit=0 blks, lazy-count=1

 realtime =none   extsz=4096   blocks=0, rtextents=0



 WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same
 device as the osd data

 umount: /var/lib/ceph/tmp/mnt.iMsc1G: device is busy.

 (In some cases useful info about processes that use

  the device is found by lsof(8) or fuser(1))

 ceph-disk: Unmounting filesystem failed: Command '['/bin/umount', '--',
 '/var/lib/ceph/tmp/mnt.iMsc1G']' returned non-zero exit status 1



 ceph-deploy: Failed to create 1 OSDs



 root@ubuntuceph900athf1:~/my-cluster# ceph-deploy osd prepare
 ubuntuceph700athf1:sde1

 ceph-disk-prepare -- /dev/sde1 returned 1

 meta-data=/dev/sde1  isize=2048   agcount=4, agsize=30524098
 blks

  =   sectsz=512   attr=2, projid32bit=0

 data =   bsize=4096   blocks=122096390, imaxpct=25

  =   sunit=0  swidth=0 blks

 naming   =version 2  bsize=4096   ascii-ci=0

 log  =internal log   bsize=4096   blocks=59617, version=2

  =   sectsz=512   sunit=0 blks, lazy-count=1

 realtime =none   extsz=4096   blocks=0, rtextents=0



 umount: /var/lib/ceph/tmp/mnt.0JxBp1: device is busy.

 (In some cases useful info about processes that use

  

Re: [ceph-users] ceph-deploy missing?

2013-08-16 Thread Sage Weil
On Fri, 16 Aug 2013, Ian Colle wrote:
 Please note, you do not need to specify the version of deb or rpm if you
 want the latest. Just continue to point to http://ceph.com/debian or
 http://ceph.com/rpm and you'll get the same thing as
 http://ceph.com/debian-dumpling and http://ceph.com/rpm-dumpling

Hmm, we should perhaps make a way for ceph-deploy to do this?  Right now 
it selects what it thinks is the latest stable release (currently 
dumpling) and sets up the apt sources that way.

sage

 
 Ian R. Colle
 Director of Engineering
 
 Inktank
 Cell: +1.303.601.7713
 Email: i...@inktank.com
 
 
 Delivering the Future of Storage
 
 On 8/16/13 8:26 AM, Alfredo Deza alfredo.d...@inktank.com wrote:
 
 
 
 
   On Thu, Aug 15, 2013 at 11:45 AM, Alfredo Deza
   alfredo.d...@inktank.com wrote:
 
 
 
 On Thu, Aug 15, 2013 at 11:41 AM, Jim Summers
 jbsumm...@gmail.com wrote:
 
 I ran:
 
 ceph-deploy mon create chost0 chost1
 
 It seemed to be working and then hung at:
 
 [chost0][DEBUG ] checking for done path:
 /var/lib/ceph/mon/ceph-chost0/done
 [chost0][INFO  ] create a done file to avoid re-doing the
 mon deployment
 [chost0][INFO  ] create the init path if it does not exist
 [chost0][INFO  ] locating `service` executable...
 [chost0][INFO  ] found `service` executable: /sbin/service
 [chost0][INFO  ] Running command: /sbin/service ceph start
 mon.chost0
 [chost0][INFO  ] === mon.chost0 ===
 [chost0][INFO  ] Starting Ceph mon.chost0 on chost0...
 [chost0][INFO  ] Starting ceph-create-keys on chost0...
 
 Then it just hung there and never moved on to chost1.
 
 
  There is a fix for this high-priority bug that just got
  merged into ceph-deploy today. We should follow up with a release
  soon.
 
 If you Ctrl-C this it should still work as expected.
 
 
 
 
 
 On Thu, Aug 15, 2013 at 10:32 AM, Alfredo Deza
 alfredo.d...@inktank.com wrote:
 
 
 
   On Thu, Aug 15, 2013 at 11:10 AM, Jim Summers
   jbsumm...@gmail.com wrote:
 
 Hello All,
 
 Since the release of dumpling, a couple of
 things are now not working.  I can not yum
 install ceph-deploy and earlier I tried to
 manually modify the ceph.repo file.  That did
 get ceph-deploy installed but it did not
 work.   So then I switched to the ceph-release
 that would bring me back to cuttlefish.  Can
 not install ceph-deploy again.
 
 Now I saw an email that ceph-deploy can be
  installed with pip or easy_install. 
 
 
 Let me clarify a bit here :)
 
 Yes, you can install ceph-deploy via pip or
 easy_install if you choose to, and that should just
 work.
 
 It seems to me that you might be having network
 issues with pip, or maybe just a PyPI hiccup. Try
 using pip with the
 verbose flag so you can see better what is going on.
 
 fwiw I just did `pip install ceph-deploy` and it
 worked correctly. Output should be similar to this:
 
   Downloading/unpacking ceph-deploy
     Downloading ceph-deploy-1.2.tar.gz
     Running setup.py egg_info for package
 ceph-deploy
 
   no previously-included directories found
 matching 'ceph_deploy/test'
   Installing collected packages: ceph-deploy
     Running setup.py install for ceph-deploy\
   Installing ceph-deploy script to
 /Users/alfredo/.virtualenvs/tmp/bin
   Successfully installed ceph-deploy
   Cleaning up...
 
 When you say you tried to modify the ceph.repo file
 and got ceph-deploy installed but it did not work,
 what do you mean? How did it not work?
 Any output/explanation for that would help
 understand this a bit better.
 
 Also, keep in mind that the location for
 ceph-deploy's repo has changed, you should look here
 http://ceph.com/packages/ceph-extras/rpm/ and add
 the
 correct repo for your OS.
  
   So I redid the ceph-release to dumpling.
   Then tried to pip ceph-deploy and got:
 
   Retrieving
   http://ceph.com/rpm-dumpling/el6/noarch/ceph-release-1-0.el6.noarch.rpm
   Preparing...   
   ###
   [100%]
      1:ceph-release  
   ###
   [100%]
   # pip install ceph-deploy
   Downloading/unpacking ceph-deploy
     Real name of requirement ceph-deploy
   is ceph-deploy
     Could not find any downloads that
   satisfy the requirement ceph-deploy
   No distributions at all found for
   ceph-deploy
 
 Not sure what is happening. 
 
 Ideas?
 
 
 ceph-deploy was released yesterday to the ceph-extras repo, the
  ceph.com/rpm-dumpling, ceph.com/debian-dumpling and the python package
 index, so you *should* be able to install it as before.
 
 
 
 
 On Thu, Aug 15, 2013 at 9:18 AM, bernhard
 glomm bernhard.gl...@ecologic.eu wrote:
   Hi all,
 
 I would like to use ceph in our company
 and had some test setups running.
 Now all of a sudden ceph-deploy is not
 in the repos anymore
 
 this is my sources list,
 
 ...
 deb 

[ceph-users] paxos is_readable spam on monitors?

2013-08-16 Thread Jeppesen, Nelson
Hello Ceph-users,

Running dumpling (upgraded yesterday); several hours after the upgrade, the 
following type of message started repeating over and over in the logs, about 8 
hours ago.

1 mon.1@0(leader).paxos(paxos active c 6005920..6006535) is_readable 
now=2013-08-16 14:35:53.351282 lease_expire=2013-08-16 14:35:58.245728 has v0 
lc 6006535

The only monitor settings I have in ceph.conf besides host and host_addr are:

[mon]
mon cluster log to syslog = false
mon cluster log file = none

Can I safely ignore them? Thanks.
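
If these turn out to be harmless verbosity rather than a sign of trouble, one way to quiet them is to lower the mon/paxos debug levels; a sketch with illustrative values, assuming your release accepts injectargs for monitors:

# At runtime, on the monitor emitting the messages:
ceph tell mon.1 injectargs '--debug-paxos 0/5 --debug-mon 1/5'

# Or persistently, in the [mon] section of ceph.conf:
# debug paxos = 0/5
# debug mon = 1/5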
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] v0.67 Dumpling released

2013-08-16 Thread Dan Mick
That looks interesting, but I cannot browse without making an account; 
can you make your source freely available?


On 08/14/2013 10:54 PM, Mikaël Cluseau wrote:

Hi lists,

in this release I see that the ceph command is not compatible with
Python 3. The changes were not all trivial so I gave up, but for those
using Gentoo, I made my ceph git repository available here with an
ebuild that forces the Python version to 2.6 or 2.7:

git clone https://git.isi.nc/cloud/cloud-overlay.git

I upgraded from cuttlefish without any problem so good job
ceph-contributors :)

Have a nice day.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


--
Dan Mick, Filesystem Engineering
Inktank Storage, Inc.   http://inktank.com
Ceph docs: http://ceph.com/docs
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] v0.67 Dumpling released

2013-08-16 Thread Dan Mick

OK, no worries.  Was just after maximum availability.

On 08/16/2013 08:19 PM, Mikaël Cluseau wrote:

On 08/17/2013 02:06 PM, Dan Mick wrote:

That looks interesting, but I cannot browse without making an account;
can you make your source freely available?


gitlab's policy is the following :

Public access
If checked, this project can be cloned /without any/ authentication. It
will also be listed on the public access directory
https://git.isi.nc/public. /Any/ user will have Guest
https://git.isi.nc/help/permissions permissions on the repository.


We support oauth (gmail has been extensively tested ;)) but accounts are
blocked by default so I don't know if you'll have instant access even
after an oauth login.


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Glance image upload errors after upgrading to Dumpling

2013-08-16 Thread Rongze Zhu
There is the same error in Cinder.



On Thu, Aug 15, 2013 at 10:07 PM, Michael Morgan mmor...@dca.net wrote:

 On Wed, Aug 14, 2013 at 04:24:55PM -0700, Josh Durgin wrote:
  On 08/14/2013 02:22 PM, Michael Morgan wrote:
  Hello Everyone,
  
  I have a Ceph test cluster doing storage for an OpenStack Grizzly platform
  (also testing). Upgrading to 0.67 went fine on the Ceph side, with the
  cluster showing healthy, but suddenly I can't upload images into Glance
  anymore. The upload fails and glance-api throws an error:
  
  2013-08-14 15:19:55.898 ERROR glance.api.v1.images
  [4dcd9de0-af65-4902-a36d-afc5497605e7 3867c65db6cc48398a0f57ce53144e69
  5dbca756421c4a3eb0a1cc2f1ee3c67c] Failed to upload image
  2013-08-14 15:19:55.898 24740 TRACE glance.api.v1.images Traceback (most
  recent call last):
  2013-08-14 15:19:55.898 24740 TRACE glance.api.v1.images   File
  /usr/lib/python2.6/site-packages/glance/api/v1/images.py, line 444, in
  _upload
  2013-08-14 15:19:55.898 24740 TRACE glance.api.v1.images
  image_meta['size'])
  2013-08-14 15:19:55.898 24740 TRACE glance.api.v1.images   File
  /usr/lib/python2.6/site-packages/glance/store/rbd.py, line 241, in add
  2013-08-14 15:19:55.898 24740 TRACE glance.api.v1.images with
  rados.Rados(conffile=self.conf_file, rados_id=self.user) as conn:
  2013-08-14 15:19:55.898 24740 TRACE glance.api.v1.images   File
  /usr/lib/python2.6/site-packages/rados.py, line 195, in __init__
  2013-08-14 15:19:55.898 24740 TRACE glance.api.v1.images raise
  Error(Rados(): can't supply both rados_id and name)
  2013-08-14 15:19:55.898 24740 TRACE glance.api.v1.images Error: Rados():
  can't supply both rados_id and name
  2013-08-14 15:19:55.898 24740 TRACE glance.api.v1.images
 
  This would be a backwards-compatibility regression in the librados
  python bindings - a fix is in the dumpling branch, and a point
  release is in the works. You could add name=None to that rados.Rados()
  call in glance to work around it in the meantime.
 
  Josh
 
   I'm not sure if there's a patch I need to track down for Glance or if I
   missed a change in the necessary Glance/Ceph setup. Is anyone else seeing
   this behavior? Thanks!
  
  -Mike

 That's what I ended up doing and it's working fine now. Thanks for the
 confirmation and all the hard work on OpenStack integration. I look forward
 to the point release.

 -Mike
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




-- 

Rongze Zhu
https://github.com/zhurongze
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com