Re: [ceph-users] PGP signatures for RHEL hammer RPMs for ceph-deploy

2016-01-05 Thread Alfredo Deza
It seems that the metadata didn't get updated.

I just tried it out and got the right version with no issues. Hopefully
*this* time it works for you.

Sorry for all the trouble.
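For anyone hitting the same stale-metadata mismatch, a minimal recovery
sequence looks like this (a sketch, assuming the repo id is Ceph-noarch,
as in the yum suggestion quoted below):

    # drop the cached repo metadata and re-fetch it
    yum --enablerepo=Ceph-noarch clean metadata
    yum clean all && yum makecache
    # then retry the update that previously failed
    yum update ceph-deploy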

On Tue, Jan 5, 2016 at 3:21 PM, Derek Yarnell  wrote:
> Hi Alfredo,
>
> I am still having a bit of trouble though with what looks like the
> 1.5.31 release.  With a `yum update ceph-deploy` I get the following
> even after a full `yum clean all`.
>
> http://ceph.com/rpm-hammer/el6/noarch/ceph-deploy-1.5.31-0.noarch.rpm:
> [Errno -1] Package does not match intended download. Suggestion: run yum
> --enablerepo=Ceph-noarch clean metadata
>
> Thanks,
> derek
>
> On 1/5/16 1:25 PM, Alfredo Deza wrote:
>> It looks like this was only for ceph-deploy in Hammer. I verified that
>> this wasn't the case in, e.g., Infernalis.
>>
>> I have ensured that the ceph-deploy packages in hammer are in fact
>> signed and coming from our builds.
>>
>> Thanks again for reporting this!
>>
>> On Tue, Jan 5, 2016 at 12:27 PM, Alfredo Deza  wrote:
>>> This is odd. We are signing all packages before publishing them to the
>>> repository. These ceph-deploy releases follow a new release process, so
>>> I will have to investigate where the disconnect is.
>>>
>>> Thanks for letting us know.
>>>
>>> On Tue, Jan 5, 2016 at 10:31 AM, Derek Yarnell  wrote:
>>>> It looks like the ceph-deploy > 1.5.28 packages in the
>>>> http://download.ceph.com/rpm-hammer/el6 and
>>>> http://download.ceph.com/rpm-hammer/el7 repositories are not being PGP
>>>> signed.  What happened?  This is causing our yum updates to fail, but
>>>> could it be a sign of something much more nefarious?
>>>>
>>>> # rpm -qp --queryformat %{SIGPGP}
>>>> http://download.ceph.com/rpm-hammer/el6/noarch/ceph-deploy-1.5.28-0.noarch.rpm
>>>> 89021c040001020006050255fae0d5000a0910e84ac2c0460f3994203610009e284c0c6749f9d1ccd54aca8668e5f4148eb60f0ade762a5cb316926060d73a82490c41b8a5e9a5ebb8a7136a5ce294565cf8548dce160f7a577b623f12fb841b1656fba0b139404b4a074c076abf8c38f176bbecfc551567d22826d6c3ac2a67d8c8f4db67e3a2566272f492f3a1461b2c80bfc56f0c29e3a0c0e03fe50ee877d2d2b99963ea876914f5d85ae6fcf60c7c372040fcc82591552af21e152a37ab4103c3116ccd3a5f10992dc9ec483922212ef8ad8c37abbb6a751f6da2cc79567ed45e7bcb83d92aecc2a61d7584699183622714376bf3766e8781c7675834cce7d3e6c349bee6992872248fe7dd9f00248806e0c99f1a7010a8e77d13fefffeb142c1ee4ee8e55e53043fb89b7127a1c2282f4ab0fa3d19eccaa38194aa42310860bdd7746de8512b106d7923e9da9d1ad84b4ba1f8a3175b808d08f99ca5b737d4a7cba1f165b815187bec9ff1e0b5627e435ed869ae0bb16419e928e1a64413bb4dd62a6b1b049faa02eaa14bd6636b5f835bfef16acfd2daad82c1fed57a5e635971281367d2fe99c3b2b542490559d9b9b3f4295c86185aa3c4b4014da55c1b0ff68bc42c869729fee29472c413c911ea9bc5d58957bfb670ddc54d28fd8f30444969b790e53f9d34a1b2df9be2afe9d26d5be57b9fcd659c4880fad613ba5f175e4e3466dba4919a4656ffd228688a9c81d865e6df870ba33bbfc000
>>>>
>>>> # rpm -qp --queryformat %{SIGPGP}
>>>> http://download.ceph.com/rpm-hammer/el6/noarch/ceph-deploy-1.5.29-0.noarch.rpm
>>>> (none)
>>>>
>>>> # rpm -qp --queryformat %{SIGPGP}
>>>> http://download.ceph.com/rpm-hammer/el6/noarch/ceph-deploy-1.5.30-0.noarch.rpm
>>>> (none)
>>>>
>>>> # rpm -qp --queryformat %{SIGPGP}
>>>> http://download.ceph.com/rpm-hammer/el7/noarch/ceph-deploy-1.5.28-0.noarch.rpm
>>>> 89021c040001020006050255fadf42000a0910e84ac2c0460f39943b131000cb7f253c91019b2f5993fd232c4369003d521538aa19f996717d2eee780fe2d7ed4e969418ce92d6ad4be69b3c5421b80d2241a9d6e72e758ba86f0360e24aadd63d89165b47a566bcd8bed39d7b37e809d7afdf6b38e5e014f98caca6df7da6278822e2457c627cdba505febc23edb32447e11c2878e79bf5f5690def708ed7d79d261a839d5808b177cb3d6a8bc62317441f3e1b5cf986aeb5cde98fc986c42af2761418e7e83309df9b8703648a8e6eefe83f9d3cbcfe371bc336320657f86343ab25df8bd578203b6f312746ebbe0da195adeb1087487d12d530281b5328731c54240b0c5c01f1648c8802231876a33a0835a553e1b84e6d8a15acdd5db6b6bf9c6dee84b22ae0e70dc0cf2acdd5779e510a248844bba0af87ae8d5a874502ec0e48b235926222cf3386c44e30e3af14dea6134a5873784013297fa19a09f439bc8a2b73f563fc6e5cfa60767629a37f3cd24762f7b14e5f7ce08adeed82da3effc59298359a9f7f0efab0e4e808a33ceb07431530e0c279462da043bbece02d3fdf6a96e5a813eea0bf0f73e84b7fac6e28449e1bf15ddc2fa692f641ce8d4d9ed4261ba2824adee47dad90993ebc46d6ee083e92c8f76aaf8428e274e48cb1a91d0a2eb15e8779289b3771ef711cd9cc7f28f7a3cde708e4577b0aad546024ee98646f4f543ee1e33d8c96a93cff9b48deefa5b3996f659b16786ff016
>>>>
>>>> # rpm -qp --queryformat %{SIGPGP}

Re: [ceph-users] PGP signatures for RHEL hammer RPMs for ceph-deploy

2016-01-05 Thread Alfredo Deza
It looks like this was only for ceph-deploy in Hammer. I verified that
this wasn't the case in, e.g., Infernalis.

I have ensured that the ceph-deploy packages in hammer are in fact
signed and coming from our builds.

Thanks again for reporting this!
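For anyone who wants to spot-check this, rpm can query remote packages
directly (as the reports below show), and the pgpsig formatting of the
SIGPGP tag is more readable than the raw hex. A sketch; the 1.5.31 el7
URL is an assumption based on the paths quoted below:

    # prints "(none)" for an unsigned package
    rpm -qp --queryformat '%{SIGPGP:pgpsig}\n' \
        http://download.ceph.com/rpm-hammer/el7/noarch/ceph-deploy-1.5.31-0.noarch.rpm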

On Tue, Jan 5, 2016 at 12:27 PM, Alfredo Deza  wrote:
> This is odd. We are signing all packages before publishing them to the
> repository. These ceph-deploy releases follow a new release process, so
> I will have to investigate where the disconnect is.
>
> Thanks for letting us know.
>
> On Tue, Jan 5, 2016 at 10:31 AM, Derek Yarnell  wrote:
>> It looks like the ceph-deploy > 1.5.28 packages in the
>> http://download.ceph.com/rpm-hammer/el6 and
>> http://download.ceph.com/rpm-hammer/el7 repositories are not being PGP
>> signed.  What happened?  This is causing our yum updates to fail, but
>> could it be a sign of something much more nefarious?
>>
>> # rpm -qp --queryformat %{SIGPGP}
>> http://download.ceph.com/rpm-hammer/el6/noarch/ceph-deploy-1.5.28-0.noarch.rpm
>> 89021c040001020006050255fae0d5000a0910e84ac2c0460f3994203610009e284c0c6749f9d1ccd54aca8668e5f4148eb60f0ade762a5cb316926060d73a82490c41b8a5e9a5ebb8a7136a5ce294565cf8548dce160f7a577b623f12fb841b1656fba0b139404b4a074c076abf8c38f176bbecfc551567d22826d6c3ac2a67d8c8f4db67e3a2566272f492f3a1461b2c80bfc56f0c29e3a0c0e03fe50ee877d2d2b99963ea876914f5d85ae6fcf60c7c372040fcc82591552af21e152a37ab4103c3116ccd3a5f10992dc9ec483922212ef8ad8c37abbb6a751f6da2cc79567ed45e7bcb83d92aecc2a61d7584699183622714376bf3766e8781c7675834cce7d3e6c349bee6992872248fe7dd9f00248806e0c99f1a7010a8e77d13fefffeb142c1ee4ee8e55e53043fb89b7127a1c2282f4ab0fa3d19eccaa38194aa42310860bdd7746de8512b106d7923e9da9d1ad84b4ba1f8a3175b808d08f99ca5b737d4a7cba1f165b815187bec9ff1e0b5627e435ed869ae0bb16419e928e1a64413bb4dd62a6b1b049faa02eaa14bd6636b5f835bfef16acfd2daad82c1fed57a5e635971281367d2fe99c3b2b542490559d9b9b3f4295c86185aa3c4b4014da55c1b0ff68bc42c869729fee29472c413c911ea9bc5d58957bfb670ddc54d28fd8f30444969b790e53f9d34a1b2df9be2afe9d26d5be57b9fcd659c4880fad613ba5f175e4e3466dba4919a4656ffd228688a9c81d865e6df870ba33bbfc000
>>
>> # rpm -qp --queryformat %{SIGPGP}
>> http://download.ceph.com/rpm-hammer/el6/noarch/ceph-deploy-1.5.29-0.noarch.rpm
>> (none)
>>
>> # rpm -qp --queryformat %{SIGPGP}
>> http://download.ceph.com/rpm-hammer/el6/noarch/ceph-deploy-1.5.30-0.noarch.rpm
>> (none)
>>
>> # rpm -qp --queryformat %{SIGPGP}
>> http://download.ceph.com/rpm-hammer/el7/noarch/ceph-deploy-1.5.28-0.noarch.rpm
>> 89021c040001020006050255fadf42000a0910e84ac2c0460f39943b131000cb7f253c91019b2f5993fd232c4369003d521538aa19f996717d2eee780fe2d7ed4e969418ce92d6ad4be69b3c5421b80d2241a9d6e72e758ba86f0360e24aadd63d89165b47a566bcd8bed39d7b37e809d7afdf6b38e5e014f98caca6df7da6278822e2457c627cdba505febc23edb32447e11c2878e79bf5f5690def708ed7d79d261a839d5808b177cb3d6a8bc62317441f3e1b5cf986aeb5cde98fc986c42af2761418e7e83309df9b8703648a8e6eefe83f9d3cbcfe371bc336320657f86343ab25df8bd578203b6f312746ebbe0da195adeb1087487d12d530281b5328731c54240b0c5c01f1648c8802231876a33a0835a553e1b84e6d8a15acdd5db6b6bf9c6dee84b22ae0e70dc0cf2acdd5779e510a248844bba0af87ae8d5a874502ec0e48b235926222cf3386c44e30e3af14dea6134a5873784013297fa19a09f439bc8a2b73f563fc6e5cfa60767629a37f3cd24762f7b14e5f7ce08adeed82da3effc59298359a9f7f0efab0e4e808a33ceb07431530e0c279462da043bbece02d3fdf6a96e5a813eea0bf0f73e84b7fac6e28449e1bf15ddc2fa692f641ce8d4d9ed4261ba2824adee47dad90993ebc46d6ee083e92c8f76aaf8428e274e48cb1a91d0a2eb15e8779289b3771ef711cd9cc7f28f7a3cde708e4577b0aad546024ee98646f4f543ee1e33d8c96a93cff9b48deefa5b3996f659b16786ff016
>>
>> # rpm -qp --queryformat %{SIGPGP}
>> http://download.ceph.com/rpm-hammer/el7/noarch/ceph-deploy-1.5.29-0.noarch.rpm
>> (none)
>>
>> # rpm -qp --queryformat %{SIGPGP}
>> http://download.ceph.com/rpm-hammer/el7/noarch/ceph-deploy-1.5.30-0.noarch.rpm
>> (none)
>>
>> --
>> Derek T. Yarnell
>> University of Maryland
>> Institute for Advanced Computer Studies
>> ___
>> ceph-users mailing list
>> ceph-us...@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [ceph-users] PGP signatures for RHEL hammer RPMs for ceph-deploy

2016-01-05 Thread Alfredo Deza
This is odd. We are signing all packages before publishing them to the
repository. These ceph-deploy releases follow a new release process, so
I will have to investigate where the disconnect is.

Thanks for letting us know.

On Tue, Jan 5, 2016 at 10:31 AM, Derek Yarnell  wrote:
> It looks like the ceph-deploy > 1.5.28 packages in the
> http://download.ceph.com/rpm-hammer/el6 and
> http://download.ceph.com/rpm-hammer/el7 repositories are not being PGP
> signed.  What happened?  This is causing our yum updates to fail, but
> could it be a sign of something much more nefarious?
>
> # rpm -qp --queryformat %{SIGPGP}
> http://download.ceph.com/rpm-hammer/el6/noarch/ceph-deploy-1.5.28-0.noarch.rpm
> 89021c040001020006050255fae0d5000a0910e84ac2c0460f3994203610009e284c0c6749f9d1ccd54aca8668e5f4148eb60f0ade762a5cb316926060d73a82490c41b8a5e9a5ebb8a7136a5ce294565cf8548dce160f7a577b623f12fb841b1656fba0b139404b4a074c076abf8c38f176bbecfc551567d22826d6c3ac2a67d8c8f4db67e3a2566272f492f3a1461b2c80bfc56f0c29e3a0c0e03fe50ee877d2d2b99963ea876914f5d85ae6fcf60c7c372040fcc82591552af21e152a37ab4103c3116ccd3a5f10992dc9ec483922212ef8ad8c37abbb6a751f6da2cc79567ed45e7bcb83d92aecc2a61d7584699183622714376bf3766e8781c7675834cce7d3e6c349bee6992872248fe7dd9f00248806e0c99f1a7010a8e77d13fefffeb142c1ee4ee8e55e53043fb89b7127a1c2282f4ab0fa3d19eccaa38194aa42310860bdd7746de8512b106d7923e9da9d1ad84b4ba1f8a3175b808d08f99ca5b737d4a7cba1f165b815187bec9ff1e0b5627e435ed869ae0bb16419e928e1a64413bb4dd62a6b1b049faa02eaa14bd6636b5f835bfef16acfd2daad82c1fed57a5e635971281367d2fe99c3b2b542490559d9b9b3f4295c86185aa3c4b4014da55c1b0ff68bc42c869729fee29472c413c911ea9bc5d58957bfb670ddc54d28fd8f30444969b790e53f9d34a1b2df9be2afe9d26d5be57b9fcd659c4880fad613ba5f175e4e3466dba4919a4656ffd228688a9c81d865e6df870ba33bbfc000
>
> # rpm -qp --queryformat %{SIGPGP}
> http://download.ceph.com/rpm-hammer/el6/noarch/ceph-deploy-1.5.29-0.noarch.rpm
> (none)
>
> # rpm -qp --queryformat %{SIGPGP}
> http://download.ceph.com/rpm-hammer/el6/noarch/ceph-deploy-1.5.30-0.noarch.rpm
> (none)
>
> # rpm -qp --queryformat %{SIGPGP}
> http://download.ceph.com/rpm-hammer/el7/noarch/ceph-deploy-1.5.28-0.noarch.rpm
> 89021c040001020006050255fadf42000a0910e84ac2c0460f39943b131000cb7f253c91019b2f5993fd232c4369003d521538aa19f996717d2eee780fe2d7ed4e969418ce92d6ad4be69b3c5421b80d2241a9d6e72e758ba86f0360e24aadd63d89165b47a566bcd8bed39d7b37e809d7afdf6b38e5e014f98caca6df7da6278822e2457c627cdba505febc23edb32447e11c2878e79bf5f5690def708ed7d79d261a839d5808b177cb3d6a8bc62317441f3e1b5cf986aeb5cde98fc986c42af2761418e7e83309df9b8703648a8e6eefe83f9d3cbcfe371bc336320657f86343ab25df8bd578203b6f312746ebbe0da195adeb1087487d12d530281b5328731c54240b0c5c01f1648c8802231876a33a0835a553e1b84e6d8a15acdd5db6b6bf9c6dee84b22ae0e70dc0cf2acdd5779e510a248844bba0af87ae8d5a874502ec0e48b235926222cf3386c44e30e3af14dea6134a5873784013297fa19a09f439bc8a2b73f563fc6e5cfa60767629a37f3cd24762f7b14e5f7ce08adeed82da3effc59298359a9f7f0efab0e4e808a33ceb07431530e0c279462da043bbece02d3fdf6a96e5a813eea0bf0f73e84b7fac6e28449e1bf15ddc2fa692f641ce8d4d9ed4261ba2824adee47dad90993ebc46d6ee083e92c8f76aaf8428e274e48cb1a91d0a2eb15e8779289b3771ef711cd9cc7f28f7a3cde708e4577b0aad546024ee98646f4f543ee1e33d8c96a93cff9b48deefa5b3996f659b16786ff016
>
> # rpm -qp --queryformat %{SIGPGP}
> http://download.ceph.com/rpm-hammer/el7/noarch/ceph-deploy-1.5.29-0.noarch.rpm
> (none)
>
> # rpm -qp --queryformat %{SIGPGP}
> http://download.ceph.com/rpm-hammer/el7/noarch/ceph-deploy-1.5.30-0.noarch.rpm
> (none)
>
> --
> Derek T. Yarnell
> University of Maryland
> Institute for Advanced Computer Studies
> ___
> ceph-users mailing list
> ceph-us...@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


New "make check" job for Ceph pull requests

2015-12-23 Thread Alfredo Deza
Hi all,

As of yesterday (Tuesday Dec 22nd) we have the "make check" job
running within our CI infrastructure, working very similarly to the
previous check, with a few differences:

* there are no longer comments added to the pull requests
* notifications of success (or failure) are done inline in the same
notification box for "This branch has no conflicts with the base
branch"
* All members of the Ceph organization can trigger a job with the
following comment:
test this please

Changes to the job should be done following our new process: anyone can open
a pull request that configures or modifies the "ceph-pull-requests" job.
This process is fairly minimal:

1) *Jobs no longer require changes in the Jenkins UI*; they are
plain-text YAML files that live in the ceph/ceph-build.git
repository and have a specific structure. Job changes (including
scripts) are made directly in that repository via pull requests.

2) As soon as a PR is merged, the changes are automatically pushed to
Jenkins, regardless of whether the job is new or old. All one needs for a
new job to appear is a directory with a working YAML file (see the links
at the end on what this means, and the sketch right below).
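As a rough sketch of that workflow (the job name and file layout here are
made up for illustration; see the links below for the real structure):

    git clone https://github.com/ceph/ceph-build.git
    cd ceph-build && mkdir my-new-job        # hypothetical job name
    $EDITOR my-new-job/my-new-job.yml        # add a JJB definition, e.g.:

    # minimal JJB job definition, assuming JJB's job/builders syntax:
    # - job:
    #     name: my-new-job
    #     builders:
    #       - shell: |
    #           echo "running my new job"

    # then commit on a branch and open a pull request against ceph/ceph-build;
    # once merged, the job is pushed to Jenkins automatically.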

Below is a list of resources on how to make changes to a Jenkins
job, and examples of how almost anyone can contribute changes:

* The YAML files are consumed by JJB (Jenkins
Job Builder); full docs are here:
http://docs.openstack.org/infra/jenkins-job-builder/definition.html
* Where does the make-check configuration live?
https://github.com/ceph/ceph-build/tree/master/ceph-pull-requests
* Full documentation on Job structure and configuration:
https://github.com/ceph/ceph-build#ceph-build
* Everyone has READ permissions on jenkins.ceph.com (you can 'login'
with your github account), current admin members (WRITE permissions)
are: ktdreyer, alfredodeza, gregmeno, dmick, zmc, andrewschoen,
ceph-jenkins, dachary, ldachary

If you have any questions, we can help and provide guidance and feedback. We
highly encourage contributors to take ownership of this new tool and make it
awesome!

Thanks,


Alfredo
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Issues installing on CentOS 7.1

2015-12-15 Thread Alfredo Deza
Oh, and it's also worth pointing out that we don't do RHEL builds anymore
(the existing rhel repos are no longer getting updated), so in this case
"el7" is the right one to use.


On Tue, Dec 15, 2015 at 7:29 AM, Alfredo Deza  wrote:
> We released 1.5.30 yesterday and had some issues with the repo
> metadata for CentOS. It should now be resolved; can you try again
> and let me know if that is not the case?
>
> On Mon, Dec 14, 2015 at 5:07 PM, Luis Pabon  wrote:
>> Just wanted to check whether it is me or something else going on.  Is anyone
>> seeing the following when installing on CentOS 7.1?
>>
>> --- INFERNALIS ---
>>
>> [vagrant@ceph ~]$ sudo yum install 
>> http://download.ceph.com/rpm-infernalis/el7/noarch/ceph-release-1-1.el7.noarch.rpm
>> Loaded plugins: fastestmirror
>> ceph-release-1-1.el7.noarch.rpm                             | 3.9 kB  00:00:00
>> Examining /var/tmp/yum-root-r1Dzps/ceph-release-1-1.el7.noarch.rpm: ceph-release-1-1.el7.noarch
>> Marking /var/tmp/yum-root-r1Dzps/ceph-release-1-1.el7.noarch.rpm to be installed
>> Resolving Dependencies
>> --> Running transaction check
>> ---> Package ceph-release.noarch 0:1-1.el7 will be installed
>> --> Finished Dependency Resolution
>>
>> Dependencies Resolved
>>
>> ==============================================================================
>>  Package            Arch      Version    Repository                      Size
>> ==============================================================================
>> Installing:
>>  ceph-release       noarch    1-1.el7    /ceph-release-1-1.el7.noarch     550
>>
>> Transaction Summary
>> ==
>> Install  1 Package
>>
>> Total size: 550
>> Installed size: 550
>> Is this ok [y/d/N]: y
>> Downloading packages:
>> Running transaction check
>> Running transaction test
>> Transaction test succeeded
>> Running transaction
>>   Installing : ceph-release-1-1.el7.noarch                                1/1
>>   Verifying  : ceph-release-1-1.el7.noarch                                1/1
>>
>> Installed:
>>   ceph-release.noarch 0:1-1.el7
>>
>> Complete!
>>
>>
>> [vagrant@ceph ~]$ sudo yum install ceph-deploy
>> Loaded plugins: fastestmirror
>> Ceph                                                        | 2.9 kB  00:00:00
>> Ceph-noarch                                                 | 2.9 kB  00:00:00
>> base                                                        | 3.6 kB  00:00:00
>> ceph-source                                                 | 2.9 kB  00:00:00
>> extras                                                      | 3.4 kB  00:00:00
>> updates

Re: Issues installing on CentOS 7.1

2015-12-15 Thread Alfredo Deza
We released 1.5.30 yesterday and had some issues with the repo
metadata for CentOS. It should now be resolved; can you try again
and let me know if that is not the case?
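A minimal way to pick up the fixed metadata on an affected host (a sketch,
assuming the repos installed by the ceph-release package shown below):

    yum clean all
    yum makecache
    yum install ceph-deploy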

On Mon, Dec 14, 2015 at 5:07 PM, Luis Pabon  wrote:
> Just wanted to check whether it is me or something else going on.  Is anyone
> seeing the following when installing on CentOS 7.1?
>
> --- INFERNALIS ---
>
> [vagrant@ceph ~]$ sudo yum install 
> http://download.ceph.com/rpm-infernalis/el7/noarch/ceph-release-1-1.el7.noarch.rpm
> Loaded plugins: fastestmirror
> ceph-release-1-1.el7.noarch.rpm                              | 3.9 kB  00:00:00
> Examining /var/tmp/yum-root-r1Dzps/ceph-release-1-1.el7.noarch.rpm: ceph-release-1-1.el7.noarch
> Marking /var/tmp/yum-root-r1Dzps/ceph-release-1-1.el7.noarch.rpm to be installed
> Resolving Dependencies
> --> Running transaction check
> ---> Package ceph-release.noarch 0:1-1.el7 will be installed
> --> Finished Dependency Resolution
>
> Dependencies Resolved
>
> ==============================================================================
>  Package            Arch      Version    Repository                      Size
> ==============================================================================
> Installing:
>  ceph-release       noarch    1-1.el7    /ceph-release-1-1.el7.noarch     550
>
> Transaction Summary
> ==
> Install  1 Package
>
> Total size: 550
> Installed size: 550
> Is this ok [y/d/N]: y
> Downloading packages:
> Running transaction check
> Running transaction test
> Transaction test succeeded
> Running transaction
>   Installing : ceph-release-1-1.el7.noarch                                 1/1
>   Verifying  : ceph-release-1-1.el7.noarch                                 1/1
>
> Installed:
>   ceph-release.noarch 0:1-1.el7
>
> Complete!
>
>
> [vagrant@ceph ~]$ sudo yum install ceph-deploy
> Loaded plugins: fastestmirror
> Ceph                                                         | 2.9 kB  00:00:00
> Ceph-noarch                                                  | 2.9 kB  00:00:00
> base                                                         | 3.6 kB  00:00:00
> ceph-source                                                  | 2.9 kB  00:00:00
> extras                                                       | 3.4 kB  00:00:00
> updates                                                      | 3.4 kB  00:00:00
> (1/7): Ceph-noarch/primary_db                                | 2.5 kB  00:00:00
> (2/7): Ceph/x86_64/primary_db                                |  25 kB  00:00:00
> (3/7): ceph-source/primary_db                                | 3.7 kB  00:00:00
> (4/7): extras/7/x86_64/primary_db                            |  90 kB  00:00:00
> (5/7): base/7/x86_64/group_gz

Re: Stable release HOWTO: hunt for lost page

2015-12-14 Thread Alfredo Deza
On Mon, Dec 14, 2015 at 9:58 AM, Loic Dachary  wrote:
> Hi Abhishek,
>
> On 14/12/2015 15:40, Abhishek Varshney wrote:
>> Hi Loic,
>>
>> I have revived the page at
>> http://tracker.ceph.com/projects/ceph-releases/wiki/HOWTO_triage_incoming_backport_pull_requests
>> from https://web.archive.org with a version cached on 4th October.
>
> Perfect! It has not changed much in the past few months; I can't think of
> anything we added that would be missing.
>
>> Please review it, in case there are any changes to it.
>>
>> Disabling the page deletion sounds like a good idea.
>
> Done.
>
> Cheers
>
>>
>> Thanks
>> Abhishek
>>
>> On Mon, Dec 14, 2015 at 7:50 PM, Loic Dachary  wrote:
>>> Hi,
>>>
>>> It looks like we've lost 
>>> http://tracker.ceph.com/projects/ceph-releases/wiki/HOWTO_triage_incoming_backport_pull_requests
>>>  . We can re-write it, of course, it's not that complex. But looking at the 
>>> index I can't find it ( 
>>> http://tracker.ceph.com/projects/ceph-releases/wiki/index ) and the 
>>> activity backlog ( 
>>> http://tracker.ceph.com/projects/ceph-releases/activity?utf8=%E2%9C%93&show_issues=1&show_wiki_edits=1
>>>  ) does not show an event when a page is deleted.
>>>
>>> I suppose it's an accident and I propose we disable the page deletion 
>>> feature to avoid such mistakes in the future (pages can be renamed to 
>>> things like REMOVE-ME instead ;-).

Maybe this should be in source control? There are plenty of
repositories in the ceph org where this could live. If that doesn't
accommodate this use case, we could certainly have a distinct one
created.

>>>
>>> What do you think ?
>>>
>>> --
>>> Loïc Dachary, Artisan Logiciel Libre
>>>
>
> --
> Loïc Dachary, Artisan Logiciel Libre
>
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [ceph-users] v9.2.0 Infernalis released

2015-11-10 Thread Alfredo Deza
> * build: cmake: misc fixes (Orit Wasserman, Casey Bodley)
> * build: disable LTTNG by default (#11333 Josh Durgin)
> * build: do not build ceph-dencoder with tcmalloc (#10691 Boris Ranto)
> * build: fix junit detection on Fedora 22 (Ira Cooper)
> * build: fix pg ref disabling (William A. Kennington III)
> * build: fix ppc build (James Page)
> * build: install-deps: misc fixes (Loic Dachary)
> * build: install-deps.sh improvements (Loic Dachary)
> * build: install-deps: support OpenSUSE (Loic Dachary)
> * build: make_dist_tarball.sh (Sage Weil)
> * build: many cmake improvements
> * build: misc cmake fixes (Matt Benjamin)
> * build: misc fixes (Boris Ranto, Ken Dreyer, Owen Synge)
> * build: OSX build fixes (Yan, Zheng)
> * build: remove rest-bench
> * ceph-authtool: fix return code on error (Gerhard Muntingh)
> * ceph-detect-init: added Linux Mint (Michal Jarzabek)
> * ceph-detect-init: robust init system detection (Owen Synge)
> * ceph-disk: ensure 'zap' only operates on a full disk (#11272 Loic Dachary)
> * ceph-disk: fix zap sgdisk invocation (Owen Synge, Thorsten Behrens)
> * ceph-disk: follow ceph-osd hints when creating journal (#9580 Sage Weil)
> * ceph-disk: handle re-using existing partition (#10987 Loic Dachary)
> * ceph-disk: improve parted output parsing (#10983 Loic Dachary)
> * ceph-disk: install pip > 6.1 (#11952 Loic Dachary)
> * ceph-disk: make suppression work for activate-all and activate-journal (Dan 
> van der Ster)
> * ceph-disk: many fixes (Loic Dachary, Alfredo Deza)
> * ceph-disk: fixes to respect init system (Loic Dachary, Owen Synge)
> * ceph-disk: pass --cluster arg on prepare subcommand (Kefu Chai)
> * ceph-disk: support for multipath devices (Loic Dachary)
> * ceph-disk: support NVMe device partitions (#11612 Ilja Slepnev)
> * ceph: fix 'df' units (Zhe Zhang)
> * ceph: fix parsing in interactive cli mode (#11279 Kefu Chai)
> * cephfs-data-scan: many additions, improvements (John Spray)
> * ceph-fuse: do not require successful remount when unmounting (#10982 Greg 
> Farnum)
> * ceph-fuse, libcephfs: don't clear COMPLETE when trimming null (Yan, Zheng)
> * ceph-fuse, libcephfs: drop inode when rmdir finishes (#11339 Yan, Zheng)
> * ceph-fuse,libcephfs: fix uninline (#11356 Yan, Zheng)
> * ceph-fuse, libcephfs: hold exclusive caps on dirs we "own" (#11226 Greg 
> Farnum)
> * ceph-fuse: mostly behave on 32-bit hosts (Yan, Zheng)
> * ceph: improve error output for 'tell' (#11101 Kefu Chai)
> * ceph-monstore-tool: fix store-copy (Huangjun)
> * ceph: new 'ceph daemonperf' command (John Spray, Mykola Golub)
> * ceph-objectstore-tool: many many improvements (David Zafman)
> * ceph-objectstore-tool: refactoring and cleanup (John Spray)
> * ceph-post-file: misc fixes (Joey McDonald, Sage Weil)
> * ceph_test_rados: test pipelined reads (Zhiqiang Wang)
> * client: avoid sending unnecessary FLUSHSNAP messages (Yan, Zheng)
> * client: exclude setfilelock when calculating oldest tid (Yan, Zheng)
> * client: fix error handling in check_pool_perm (John Spray)
> * client: fsync waits only for inode's caps to flush (Yan, Zheng)
> * client: invalidate kernel dcache when cache size exceeds limits (Yan, Zheng)
> * client: make fsync wait for unsafe dir operations (Yan, Zheng)
> * client: pin lookup dentry to avoid inode being freed (Yan, Zheng)
> * common: add descriptions to perfcounters (Kiseleva Alyona)
> * common: add perf counter descriptions (Alyona Kiseleva)
> * common: bufferlist performance tuning (Piotr Dalek, Sage Weil)
> * common: detect overflow of int config values (#11484 Kefu Chai)
> * common: fix bit_vector extent calc (#12611 Jason Dillaman)
> * common: fix json parsing of utf8 (#7387 Tim Serong)
> * common: fix leak of pthread_mutexattr (#11762 Ketor Meng)
> * common: fix LTTNG vs fork issue (Josh Durgin)
> * common: fix throttle max change (Henry Chang)
> * common: make mutex more efficient
> * common: make work queue addition/removal thread safe (#12662 Jason Dillaman)
> * common: optracker improvements (Zhiqiang Wang, Jianpeng Ma)
> * common: PriorityQueue tests (Kefu Chai)
> * common: some async compression infrastructure (Haomai Wang)
> * crush: add --check to validate dangling names, max osd id (Kefu Chai)
> * crush: cleanup, sync with kernel (Ilya Dryomov)
> * crush: fix crash from invalid 'take' argument (#11602 Shiva Rkreddy, Sage 
> Weil)
> * crush: fix divide-by-2 in straw2 (#11357 Yann Dupont, Sage Weil)
> * crush: fix has_v4_buckets (#11364 Sage Weil)
> * crush: fix subtree base weight on adjust_subtree_weight (#11855 Sage Weil)
> * crush: respect default replicated ruleset config on map creation (Ilya 
> Dryomov)
> * crushtool: fix order of operations, usage 

Package requirements for upcoming Infernalis Repository

2015-10-20 Thread Alfredo Deza
Hi all,

I am in the process of setting up the new Infernalis repository, and going
over the notes I had from when the Hammer repository was created, I see it
included a few packages that didn't come from a Ceph build and had to be
added manually.

I don't know which of these packages are still needed for Infernalis,
so hopefully someone on this list can help rule out the ones that are
no longer needed:

* xfsprogs
* snappy
* snappy-devel
* leveldb
* leveldb-dev
* gperftools
* gdisk
* fcgi
* curl
* libcurl
* libleveldb
* python-requests
* urllib3

*Hopefully* we don't need any of these anymore. Any help would be great.
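One rough way to check is to see whether anything in the Ceph packages still
depends on each candidate; a sketch, assuming a host with the Infernalis repo
configured and yum-utils installed (repoquery is part of yum-utils):

    for pkg in xfsprogs snappy leveldb gperftools gdisk fcgi curl; do
        echo "== $pkg =="
        # list packages that require $pkg, then look for ceph among them
        repoquery --whatrequires "$pkg" | grep -i ceph
    done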

Thanks,


Alfredo
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [ceph-deploy] Ambiguous Debug Message

2015-09-03 Thread Alfredo Deza
On Thu, Sep 3, 2015 at 8:04 AM, Alfredo Deza  wrote:
>
>
> On Tue, Sep 1, 2015 at 7:48 PM, Shinobu Kinjo  wrote:
>>
>> Hi,
>>
>> When I ran:
>>
>> $ sudo ./ceph-deploy --username ceph install ceph01
>>
>> I ended up with:
>>
>> [ceph_deploy][ERROR ] RuntimeError: connecting to host: ceph@ceph01
>> resulted in errors: HostNotFound ceph@ceph01
>
>
 This is a problem in the underlying library that ceph-deploy uses: it
 treats the inability to connect to a host as a "HostNotFound" error,
 which may not be correct.

 I opened an issue to follow up on this:
 
https://bitbucket.org/hpk42/execnet/issues/48/hostnotfound-is-raised-when-ssh-reports-a

 But in the meantime, I think that ceph-deploy could report that it couldn't
 connect and suggest that the user should try to
 SSH directly to the host. Would that be an acceptable report? (given that we
 don't have access to what SSH returned as the error message)
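 As a sketch, the manual check would be something along these lines (using
 the ceph@ceph01 user/host from the report below):

     # bypass ceph-deploy and surface the real SSH error
     ssh -v ceph@ceph01 true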
>
>>
>>
>> That's fine. I need to do something.
>> But the actual reason for the failure was:
>>
>> Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
>>
>> I think we should change such an ambiguous [ERROR] message to a more
>> meaningful one, so that nobody mistakes this for a hostname
>> configuration problem.
>>
>> Thoughts?
>>
>> -
>> $ ./ceph-deploy --version
>> 1.5.27
>>
>> Shinobu
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>> the body of a message to majord...@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>
>
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: debian jessie gitbuilder repositories ?

2015-07-20 Thread Alfredo Deza


- Original Message -
> From: "Sage Weil" 
> To: "Alexandre DERUMIER" 
> Cc: "ceph-devel" , se...@ceph.com
> Sent: Monday, July 20, 2015 10:19:49 AM
> Subject: Re: debian jessie gitbuilder repositories ?
> 
> On Mon, 20 Jul 2015, Alexandre DERUMIER wrote:
> > Hi,
> > 
> > debian jessie gitbuilder is ok since 2 weeks now,
> > 
> > http://gitbuilder.sepia.ceph.com/gitbuilder-ceph-deb-jessie-amd64-basic
> > 
> > 
> > It is possible to push packages to repositories ?
> > 
> > http://gitbuilder.ceph.com/ceph-deb-jessie-x86_64-basic/
> 
> 
> The builds are failing with this:
> 
> + GNUPGHOME=/srv/gnupg reprepro --ask-passphrase -b
> ../out/output/sha1/6ffb1c4ae43bcde9f5fde40dd97959399135ed86.tmp -C main
> --ignore=undef
> inedtarget --ignore=wrongdistribution include jessie
> out~/ceph_0.94.2-50-g6ffb1c4-1jessie_amd64.changes
> Cannot find definition of distribution 'jessie'!
> There have been errors!
> 
> 
> I've seen it before a long time ago, but I forget what the resolution is.

I am not 100% sure how the gitbuilders are set up, but the DEB builders are
meant to be generic, and they can only be generic when some setup is run to
hold the environments for each distro.

The environments are created/updated with pbuilder, so one needs to exist for
jessie. There is a script that can do that, called update_pbuilder.sh, but it
is missing `jessie`:

https://github.com/ceph/ceph-build/blob/master/update_pbuilder.sh#L22-30

A manual call to create the jessie environment for pbuilder would suffice
here, I think.
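Something along these lines should do it (the basetgz path is an assumption
based on pbuilder defaults):

    # create the jessie base environment that update_pbuilder.sh skips
    sudo pbuilder create --distribution jessie \
        --basetgz /var/cache/pbuilder/jessie.tgz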
> 
> sage
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: pulpito slowness

2015-07-20 Thread Alfredo Deza


- Original Message -
> From: "Loic Dachary" 
> To: "Alfredo Deza" 
> Cc: "Ceph Development" 
> Sent: Sunday, July 19, 2015 12:56:12 PM
> Subject: pulpito slowness
> 
> Hi Alfredo,
> 
> After installing pulpito and run from sources with:
> 
> virtualenv ./virtualenv
> source ./virtualenv/bin/activate
> pip install -r requirements.txt
> python run.py &
> 
> I ran a rados suite with 40 workers and 218 jobs. All is well except a
> slowness from pulpito that I don't quite understand. It takes 9 seconds to
> load although the load average of the machine is low, the CPUs are not all
> busy, and there is plenty of free RAM.

There are pieces of the setup that might be causing this. Pulpito on its own
doesn't do much; it is stateless and just serves HTML.

I would look into paddles (pulpito feeds from it) and see how that is doing.
Ideally, paddles would be set up with PostgreSQL as well; I remember that at
some point the queries in paddles became very complex and some investigation
was done to improve their speed.
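A quick way to tell the two apart is to time paddles directly and compare
that with the pulpito page load (the port and endpoint here are assumptions;
adjust them to the local paddles config):

    # if this also takes ~9 seconds, the slowness is in paddles, not pulpito
    curl -o /dev/null -s -w '%{time_total}\n' http://localhost:8080/runs/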



> 
> ubuntu@teuthology:~$ curl
> http://localhost:8081/ubuntu-2015-07-19_15:57:13-rados-hammer---basic-openstack/ > /dev/null
>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                  Dload  Upload   Total   Spent    Left  Speed
> 100  391k  100  391k    0     0  42774      0  0:00:09  0:00:09 --:--:-- 96305
> 
> Do you have an idea of the reason for this slowness ?
> 
> Cheers
> 
> --
> Loïc Dachary, Artisan Logiciel Libre
> 
> 
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: s3-tests development

2015-02-27 Thread Alfredo Deza


- Original Message -
From: "Andrew Gaul" 
To: "Yehuda Sadeh-Weinraub" 
Cc: ceph-devel@vger.kernel.org, "Alfredo Deza" 
Sent: Friday, February 27, 2015 1:26:23 PM
Subject: Re: s3-tests development

On Fri, Feb 27, 2015 at 12:11:45PM -0500, Yehuda Sadeh-Weinraub wrote:
> Andrew Gaul wrote:
> > While s3-tests has good functionality as it
> > exists, the project has not progressed much over the last six months.  I
> > have submitted over 20 pull requests to fix incorrect tests and for
> > additional test coverage but most remain unreviewed[2].
> 
> Right. We do need to be more responsive. The main reason we took our time was
> that these tests are used for the ceph nightly qa tests, and any change in
> them that exposes a new incompatibility will fail those tests. We'd rather
> first have the issue fixed, then merge the change. An alternative way of
> doing it is to open a ceph tracker issue about the incompatibility and mark
> the test as 'fails_on_rgw', in which case we could merge it immediately.

Instead of testing against master, perhaps you can tag a s3-tests 1.0.0
release and Ceph can use that?  Alternatively you can run your tests
against a specific commit hash.  In addition to running s3-tests against
Ceph, it would be good to run it against AWS, although we are blocked as
discussed below.

> > s3-tests remains biased towards Ceph and not the AWS de facto standard,
> > failing to run against the latter due to TooManyBuckets failures[3].
> 
> Not sure what's the appropriate way to attack this one. Maybe the
> create_bucket function could keep a list of created buckets and then remove
> them, using an LRU scheme, if it encounters this error.

jclouds uses an LRU scheme to recycle buckets between tests which works
well.  Alternatively s3-tests could remove buckets immediately after
each test although this has some tooling challenges discussed in the
issue.

> > Finally some features like V4 signature support[4] will require more
> > extensive changes.  We are at-risk of diverging; how can we best move
> > forward together?
> 
> We'd be happy with more tests, and with more features tested even if these
> weren't implemented yet. It can later help with the development of these
> features. E.g., I looked a few weeks back at v4 signatures, and having such a
> test would have helped. The only thing we need, though, would be a way to
> easily disable such tests. So adding 'fails_on_rgw', or some other way to
> detect them, would help a lot.

Annotating every failing test will not scale with additional providers;
I have almost 100 annotations for S3Proxy presently[1].  Could we
implement an external profile mechanism which tracks which tests should
or should not run?  Perhaps nosetests has a way to do this already?

Like I mentioned, we are working to port these to py.test, with a
reorganization included that should allow configuring which tests are run
and how failures are treated.
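For what it's worth, nose's attribute plugin already covers the simple case;
the part that doesn't scale is maintaining the per-provider annotations
themselves. A sketch of the usual s3-tests invocation:

    # run the suite, skipping tests annotated as failing on the rgw backend
    S3TEST_CONF=your.conf nosetests -a '!fails_on_rgw'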



[1] 
https://github.com/andrewgaul/s3-tests/commit/7891cb4d9a1cd6e6f4d8119f33c4f0bdb35348d3

-- 
Andrew Gaul
http://gaul.org/
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: s3-tests development

2015-02-27 Thread Alfredo Deza


- Original Message -
From: "Yehuda Sadeh-Weinraub" 
To: "Andrew Gaul" 
Cc: ceph-devel@vger.kernel.org, "Alfredo Deza" 
Sent: Friday, February 27, 2015 12:11:45 PM
Subject: Re: s3-tests development



- Original Message -
> From: "Andrew Gaul" 
> To: ceph-devel@vger.kernel.org
> Sent: Thursday, February 26, 2015 6:37:35 PM
> Subject: s3-tests development
> 
> Hello cephalopods, I use s3-tests to test S3Proxy[1], Apache jclouds,
> and an internal project.  While s3-tests has good functionality as it
> exists, the project has not progressed much over the last six months.  I
> have submitted over 20 pull requests to fix incorrect tests and for
> additional test coverage but most remain unreviewed[2].  Further

Right. We do need to be more responsive. The main reason we took our time was
that these tests are used for the ceph nightly qa tests, and any change in
them that exposes a new incompatibility will fail those tests. We'd rather
first have the issue fixed, then merge the change. An alternative way of doing
it is to open a ceph tracker issue about the incompatibility and mark the test
as 'fails_on_rgw', in which case we could merge it immediately.

> s3-tests remains biased towards Ceph and not the AWS de facto standard,
> failing to run against the latter due to TooManyBuckets failures[3].

I have started some work to have a much cleaner way to identify documented API
vs actual tests (making it clear where new tests should go) and to be able
to have leaner tests (vs. the current approach that stacks decorators).

This will also mean that we would move away from the current test runner,
as it doesn't allow for such abstractions, in favor of py.test.

It will take some time to accomplish this; however, we still think that new
tests and fixes should be submitted while the new implementation is being
worked on.

Not sure what's the appropriate way to attack this one. Maybe the create_bucket
function could keep a list of created buckets and then remove them, using an
LRU scheme, if it encounters this error.

> Finally some features like V4 signature support[4] will require more
> extensive changes.  We are at-risk of diverging; how can we best move
> forward together?

We'd be happy with more tests, and with more features tested even if these
weren't implemented yet. It can later help with the development of these
features. E.g., I looked a few weeks back at v4 signatures, and having such a
test would have helped. The only thing we need, though, would be a way to
easily disable such tests. So adding 'fails_on_rgw', or some other way to
detect them, would help a lot.

Thanks,
Yehuda

> 
> [1] https://github.com/andrewgaul/s3proxy
> [2] https://github.com/ceph/s3-tests/pulls?q=is%3Aopen+is%3Apr
> [3] https://github.com/ceph/s3-tests/issues/25
> [4] https://github.com/ceph/s3-tests/issues/35
> 


--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Ceph-devel] radosgw-agent failed to parse

2015-02-10 Thread Alfredo Deza
1.2.1 was just released, bringing parity between the RPM/DEB and PyPI versions.

You should now be able to get it either way. Let me know if you are still 
seeing the same issues.
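A minimal upgrade path, assuming the agent was previously installed from the
distro packages as described below:

    apt-get purge radosgw-agent
    pip install --upgrade radosgw-agent
    pip show radosgw-agent    # confirm that 1.2.1 is what got installed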



- Original Message -
From: "ghislain chevalier" 
To: "Alfredo Deza" 
Cc: ceph-devel@vger.kernel.org
Sent: Thursday, February 5, 2015 9:42:09 AM
Subject: RE: [Ceph-devel] radosgw-agent failed to parse

Hi,

Thanks a lot.

I installed the latest version using `pip install radosgw-agent`, as
recommended, after removing the current version (apt-get purge radosgw-agent).
The Python scripts are now installed in /usr/local/bin/./radosgw_agent
instead of /usr/bin/../radosgw_agent

Can you tell me what the requirement on the 'requests' library is?

So I will test it ASAP.

Do you plan to update the package so that it is again compliant with apt-get
install radosgw-agent?

Best regards 

-Original Message-
From: alfredo.d...@inktank.com [mailto:alfredo.d...@inktank.com] On behalf of
Alfredo Deza
Sent: Tuesday, 3 February 2015 14:30
To: CHEVALIER Ghislain IMT/OLPS
Cc: ceph-devel@vger.kernel.org
Subject: Re: [Ceph-devel] radosgw-agent failed to parse

This tells me that you are not using the latest released version of the agent 
because we dropped the requirement for the `requests` library that is showing 
up in your tracebacks.

You will need to make sure that the agent is completely removed, including the
`requests` library, and then install the latest agent available from the Python
Package Index (PyPI) with a Python installer (e.g.
`pip install radosgw-agent`), or if you feel comfortable, use the latest tip
from the master branch (I advise using the former).


On Tue, Feb 3, 2015 at 5:12 AM,   wrote:
> Hi Alfredo,
>
> Here are the logs you requested using the original client.py python script.
>
> --
> DIRECT LAUNCH USING CLI:KO
> radosgw-agent -v -c 
> /etc/ceph/radosgw-agent/fr-rennes-radosgw1-sync.conf_direct
> the standard output is also written in the log file configured in the 
> synchro file
>
> DEBUG:boto:Using access key provided by client.
> DEBUG:boto:Using secret key provided by client.
> DEBUG:boto:StringToSign:
> GET
>
>
> Tue, 03 Feb 2015 09:37:54 GMT
> /admin/config
> DEBUG:boto:Signature:
> AWS SBF7DQZ1ESR6Y34225H8:9i6/4do7tDpViolN1XdxgefMs3U=
> DEBUG:boto:url = 'http://openstorage.ushttp://openstorage.us/admin/config'
> params={}
> headers={'Date': 'Tue, 03 Feb 2015 09:37:54 GMT', 'Content-Length': 
> '0', 'Authorization': 'AWS 
> SBF7DQZ1ESR6Y34225H8:9i6/4do7tDpViolN1XdxgefMs3U=', 'User-Agent': 
> 'Boto/2.20.1 Python/2.7.6 Linux/3.11.0-26-generic'} data=None 
> ERROR:root:Could not retrieve region map from destination Traceback (most 
> recent call last):
>   File "/usr/lib/python2.7/dist-packages/radosgw_agent/cli.py", line 269, in 
> main
> region_map = client.get_region_map(dest_conn)
>   File "/usr/lib/python2.7/dist-packages/radosgw_agent/client.py", line 391, 
> in get_region_map
> region_map = request(connection, 'get', 'admin/config')
>   File "/usr/lib/python2.7/dist-packages/radosgw_agent/client.py", line 153, 
> in request
> result = handler(url, params=params, headers=request.headers, data=data)
>   File "/usr/lib/python2.7/dist-packages/requests/api.py", line 55, in get
> return request('get', url, **kwargs)
>   File "/usr/lib/python2.7/dist-packages/requests/api.py", line 44, in request
> return session.request(method=method, url=url, **kwargs)
>   File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 421, in 
> request
> prep = self.prepare_request(req)
>   File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 359, in 
> prepare_request
> hooks=merge_hooks(request.hooks, self.hooks),
>   File "/usr/lib/python2.7/dist-packages/requests/models.py", line 287, in 
> prepare
> self.prepare_url(url, params)
>   File "/usr/lib/python2.7/dist-packages/requests/models.py", line 334, in 
> prepare_url
> scheme, auth, host, port, path, query, fragment = parse_url(url)
>   File "/usr/lib/python2.7/dist-packages/urllib3/util.py", line 390, in 
> parse_url
> raise LocationParseError("Failed to parse: %s" % url)
> LocationParseError: Failed to parse: Failed to parse: openstorage.ushttp:
>
> --
> -- LAUNCH USING /etc/init.d/radosgw-agent shell script:KO 
> /etc/init.d/radosgw-agent start 
> /etc/ceph/radosgw-agent/fr-rennes-radosgw1-sync.con

Re: [Ceph-devel] radosgw-agent failed to parse

2015-02-03 Thread Alfredo Deza
--
> LAUNCH AS A SERVICE:KO
> service radosgw-agent start 
> /etc/ceph/radosgw-agent/fr-rennes-radosgw1-sync.conf_direct
> standard output:
> Starting radosgw-agent fr-rennes-radosgw1-sync.conf_direct
>
> Log file:
> 2015-02-03T11:04:25.435 6872:ERROR:root:Could not retrieve region map from 
> destination
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/dist-packages/radosgw_agent/cli.py", line 269, in 
> main
> region_map = client.get_region_map(dest_conn)
>   File "/usr/lib/python2.7/dist-packages/radosgw_agent/client.py", line 396, 
> in get_region_map
> region_map = request(connection, 'get', 'admin/config')
>   File "/usr/lib/python2.7/dist-packages/radosgw_agent/client.py", line 158, 
> in request
> result = handler(url, params=params, headers=request.headers, data=data)
>   File "/usr/lib/python2.7/dist-packages/requests/api.py", line 55, in get
> return request('get', url, **kwargs)
>   File "/usr/lib/python2.7/dist-packages/requests/api.py", line 44, in request
> return session.request(method=method, url=url, **kwargs)
>   File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 421, in 
> request
> prep = self.prepare_request(req)
>   File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 359, in 
> prepare_request
> hooks=merge_hooks(request.hooks, self.hooks),
>   File "/usr/lib/python2.7/dist-packages/requests/models.py", line 287, in 
> prepare
> self.prepare_url(url, params)
>   File "/usr/lib/python2.7/dist-packages/requests/models.py", line 338, in 
> prepare_url
> "Perhaps you meant http://{0}?".format(url))
> MissingSchema: Invalid URL u'/admin/config': No schema supplied. Perhaps you 
> meant http:///admin/config?
>
>
>
> Best regards
>
> -Original Message-
> From: alfredo.d...@inktank.com [mailto:alfredo.d...@inktank.com] On behalf of
> Alfredo Deza
> Sent: Monday, 2 February 2015 19:42
> To: CHEVALIER Ghislain IMT/OLPS
> Cc: ceph-devel@vger.kernel.org
> Subject: Re: [Ceph-devel] radosgw-agent failed to parse
>
> If you use the `-v` flag you will get very verbose output.
>
> On Mon, Feb 2, 2015 at 12:12 PM,   wrote:
>> OK, I will send them ASAP.
>>
>> Logs are not very verbose... Can I set a debug mode?
>>
>> -Original Message-
>> From: alfredo.d...@inktank.com [mailto:alfredo.d...@inktank.com] On behalf
>> of Alfredo Deza  Sent: Monday, 2 February 2015 16:59  To: CHEVALIER
>> Ghislain IMT/OLPS  Cc:
>> Subject: Re: [Ceph-devel] radosgw-agent failed to parse
>>
>> That sounds suspicious because we haven't packaged a 1.2.1 release.
>>
>> The latest on PyPI is 1.2, which is the same as in the master branch.
>>
>> Would you mind sending logs that show *how* the url is malformed (both when 
>> it is OK and when it is not) ?
>>
>> On Mon, Feb 2, 2015 at 9:53 AM,   wrote:
>>> HI,
>>>
>>> Thx for replying
>>>
>>> According to dpkg -l, it's 1.2.1.
>>>
>>> I noticed that the URL is malformed when launching directly with
>>> radosgw-agent -c  but well formed when launching with
>>> service radosgw-agent start 
>>>
>>> best regards
>>>
>>> -Original Message-
>>> From: alfredo.d...@inktank.com [mailto:alfredo.d...@inktank.com] On behalf
>>> of Alfredo Deza  Sent: Monday, 2 February 2015 15:18  To:
>>> CHEVALIER Ghislain IMT/OLPS  Cc: ceph-devel@vger.kernel.org  Subject: Re:
>>> [Ceph-devel] radosgw-agent failed to parse
>>>
>>> What version of the agent are you using? And when it fails, how does the 
>>> log output look?
>>>
>>> On Mon, Feb 2, 2015 at 8:56 AM,   wrote:
>>>> Hi all,
>>>>
>>>> Context : Ubuntu 14.04 TLS firefly 0.80.8
>>>>
>>>> I sent this post in ceph-users (identical subject) because I recently 
>>>> encountered the same issue.
>>>> Maybe I missed something between July and January...
>>>>
>>>> I found that the http request wasn't correctly formed by
>>>> /usr/lib/python2.7/dist-packages/radosgw_agent/client.py
>>>>
>>>> I did the changes below
>>>> #url = '{protocol}://{host}{path}'.format(protocol=request.protocol,
>>>> # host=request.host,
>>>> #   

Re: [Ceph-devel] radosgw-agent failed to parse

2015-02-02 Thread Alfredo Deza
If you use the `-v` flag you will get very verbose output.

On Mon, Feb 2, 2015 at 12:12 PM,   wrote:
> OK, I will send them ASAP.
>
> Logs are not very verbose... Can I set a debug mode?
>
> -Original Message-
> From: alfredo.d...@inktank.com [mailto:alfredo.d...@inktank.com] On behalf of
> Alfredo Deza
> Sent: Monday, 2 February 2015 16:59
> To: CHEVALIER Ghislain IMT/OLPS
> Cc:
> Subject: Re: [Ceph-devel] radosgw-agent failed to parse
>
> That sounds suspicious because we haven't packaged a 1.2.1 release.
>
> The latest on PyPI is 1.2, which is the same as in the master branch.
>
> Would you mind sending logs that show *how* the url is malformed (both when 
> it is OK and when it is not) ?
>
> On Mon, Feb 2, 2015 at 9:53 AM,   wrote:
>> HI,
>>
>> Thx for replying
>>
>> According to dpkg -l, it's 1.2.1.
>>
>> I noticed that the URL is malformed when launching directly with
>> radosgw-agent -c  but well formed when launching with
>> service radosgw-agent start 
>>
>> best regards
>>
>> -Original Message-
>> From: alfredo.d...@inktank.com [mailto:alfredo.d...@inktank.com] On behalf
>> of Alfredo Deza  Sent: Monday, 2 February 2015 15:18  To: CHEVALIER
>> Ghislain IMT/OLPS  Cc: ceph-devel@vger.kernel.org  Subject: Re:
>> [Ceph-devel] radosgw-agent failed to parse
>>
>> What version of the agent are you using? And when it fails, how does the log 
>> output look?
>>
>> On Mon, Feb 2, 2015 at 8:56 AM,   wrote:
>>> Hi all,
>>>
>>> Context : Ubuntu 14.04 TLS firefly 0.80.8
>>>
>>> I sent this post in ceph-users (identical subject) because I recently 
>>> encountered the same issue.
>>> Maybe I missed something between July and January...
>>>
>>> I found that the http request wasn't correctly formed by
>>> /usr/lib/python2.7/dist-packages/radosgw_agent/client.py
>>>
>>> I did the changes below
>>> #url = '{protocol}://{host}{path}'.format(protocol=request.protocol,
>>> # host=request.host,
>>> # path=request.path)
>>>  url = '{path}'.format(protocol="", host="", path=request.path)
>>>
>>> The request is then correctly formed and sent.
>>>
>>> I still have problems setting up the federation between 2 zones of 2 ceph
>>> clusters in 2 regions.
>>>
>>> I will keep investigating.
>>>
>>> Best regards
>>>
>>>
>>> _
>>>
>>> This message and its attachments may contain confidential or
>>> privileged information that may be protected by law; they should not be 
>>> distributed, used or copied without authorisation.
>>> If you have received this email in error, please notify the sender and 
>>> delete this message and its attachments.
>>> As emails may be altered, Orange is not liable for messages that have been 
>>> modified, changed or falsified.
>>> Thank you.
>>>
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel"
>>> in the body of a message to majord...@vger.kernel.org More majordomo
>>> info at  http://vger.kernel.org/majordomo-info.html
>>
>> __

Re: [Ceph-devel] radosgw-agent failed to parse

2015-02-02 Thread Alfredo Deza
That sounds suspicious because we haven't packaged a 1.2.1 release.

The latest on PyPI is 1.2, which is the same as in the master branch.

Would you mind sending logs that show *how* the url is malformed (both
when it is OK and when it is not)?

On Mon, Feb 2, 2015 at 9:53 AM,   wrote:
> HI,
>
> Thx for replying
>
> According to dpkg -l, it's 1.2.1.
>
> I noticed that the URL is malformed when launching directly with
> radosgw-agent -c 
> but well formed when launching with service radosgw-agent start 
>
> best regards
>
> -Original Message-
> From: alfredo.d...@inktank.com [mailto:alfredo.d...@inktank.com] On behalf of
> Alfredo Deza
> Sent: Monday, 2 February 2015 15:18
> To: CHEVALIER Ghislain IMT/OLPS
> Cc: ceph-devel@vger.kernel.org
> Subject: Re: [Ceph-devel] radosgw-agent failed to parse
>
> What version of the agent are you using? And when it fails, how does the log 
> output look?
>
> On Mon, Feb 2, 2015 at 8:56 AM,   wrote:
>> Hi all,
>>
>> Context : Ubuntu 14.04 TLS firefly 0.80.8
>>
>> I sent this post in ceph-users (identical subject) because I recently 
>> encountered the same issue.
>> Maybe I missed something between July and January...
>>
>> I found that the http request wasn't correctly formed by
>> /usr/lib/python2.7/dist-packages/radosgw_agent/client.py
>>
>> I did the changes below
>> #url = '{protocol}://{host}{path}'.format(protocol=request.protocol,
>> # host=request.host,
>> # path=request.path)
>>  url = '{path}'.format(protocol="", host="", path=request.path)
>>
>> The request is then correctly formed and sent.
>>
>> I still have problems setting up the federation between 2 zones of 2 ceph
>> clusters in 2 regions.
>>
>> I will keep investigating.
>>
>> Best regards
>>
>>
>> __
>>
>> This message and its attachments may contain confidential or
>> privileged information that may be protected by law; they should not be 
>> distributed, used or copied without authorisation.
>> If you have received this email in error, please notify the sender and 
>> delete this message and its attachments.
>> As emails may be altered, Orange is not liable for messages that have been 
>> modified, changed or falsified.
>> Thank you.
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel"
>> in the body of a message to majord...@vger.kernel.org More majordomo
>> info at  http://vger.kernel.org/majordomo-info.html
>
> _
>
>
> This message and its attachments may contain confidential or privileged 
> information that may be protected by law;
> they should not be distributed, used or copied without authorisation.
> If you have received this email in error, please notify the sender and delete 
> this message and its attachments.
> As emails may be altered, Orange is not liable for messages that have been 
> modified, changed or falsified.
> Thank you.
>
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Ceph-devel] radosgw-agent failed to parse

2015-02-02 Thread Alfredo Deza
What version of the agent are you using? And when it fails, how does
the log output look?

On Mon, Feb 2, 2015 at 8:56 AM,   wrote:
> Hi all,
>
> Context : Ubuntu 14.04 TLS firefly 0.80.8
>
> I sent this post in ceph-users (identical subject) because I recently 
> encountered the same issue.
> Maybe I missed something between July and January...
>
> I found that the http request wasn't correctly formed by 
> /usr/lib/python2.7/dist-packages/radosgw_agent/client.py
>
> I did the changes below
> #url = '{protocol}://{host}{path}'.format(protocol=request.protocol,
> # host=request.host,
> # path=request.path)
>  url = '{path}'.format(protocol="", host="", path=request.path)
>
> The request is then correctly formed and sent.
>
> I still have problems setting up the federation between 2 zones of 2 ceph
> clusters in 2 regions.
>
> I will keep investigating.
>
> Best regards
>
>
> _
>
>
> This message and its attachments may contain confidential or privileged 
> information that may be protected by law;
> they should not be distributed, used or copied without authorisation.
> If you have received this email in error, please notify the sender and delete 
> this message and its attachments.
> As emails may be altered, Orange is not liable for messages that have been 
> modified, changed or falsified.
> Thank you.
>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
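
For reference, the workaround described in this thread boils down to
signing and sending the request line with the path alone; a minimal
sketch, with a stand-in request object (the real one lives in
radosgw_agent/client.py):

    class FakeRequest(object):
        # stand-in for the request object used in radosgw_agent/client.py
        protocol = 'http'
        host = 'radosgw.example.com'
        path = '/admin/log'

    request = FakeRequest()

    # before: '{protocol}://{host}{path}' produced an absolute URI;
    # after the reporter's change only the path is used (the unused
    # protocol/host kwargs can simply be dropped)
    url = '{path}'.format(path=request.path)
    assert url == '/admin/log'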


Re: [ceph-users] v0.90 released

2015-01-05 Thread Alfredo Deza
On Sat, Dec 20, 2014 at 1:15 AM, Anthony Alba  wrote:
> Hi Sage,
>
> Has the repo metadata been regenerated?
>
> One of my reposync jobs can only see up to 0.89, using
> http://ceph.com/rpm-testing.

It was generated but we somehow missed out on syncing it properly. You
should now see 0.90.


>
> Thanks
>
> Anthony
>
>
>
> On Sat, Dec 20, 2014 at 6:22 AM, Sage Weil  wrote:
>> This is the last development release before Christmas.  There are some API
>> cleanups for librados and librbd, and lots of bug fixes across the board
>> for the OSD, MDS, RGW, and CRUSH.  The OSD also gets support for discard
>> (potentially helpful on SSDs, although it is off by default), and there
>> are several improvements to ceph-disk.
>>
>> The next two development releases will be getting a slew of new
>> functionality for hammer.  Stay tuned!
>>
>> Upgrading
>> ---------
>>
>> * Previously, the formatted output of 'ceph pg stat -f ...' was a full
>>   pg dump that included all metadata about all PGs in the system.  It
>>   is now a concise summary of high-level PG stats, just like the
>>   unformatted 'ceph pg stat' command.
>>
>> * All JSON dumps of floating point values were incorrectly surrounding the
>>   value with quotes.  These quotes have been removed.  Any consumer of
>>   structured JSON output that was consuming the floating point values
>>   previously had to interpret the quoted string and will most likely
>>   need to be fixed to take the unquoted number (see the sketch below).
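
A minimal sketch of coping with that change, assuming a hypothetical
consumer reading a dumped full_ratio value; float() accepts both the old
quoted form and the new unquoted one:

    import json

    old_dump = '{"full_ratio": "0.95"}'   # pre-0.90: float quoted as a string
    new_dump = '{"full_ratio": 0.95}'     # 0.90 onwards: plain JSON number

    def full_ratio(dump):
        # float() handles both str and float, easing mixed-version clusters
        return float(json.loads(dump)['full_ratio'])

    assert full_ratio(old_dump) == full_ratio(new_dump) == 0.95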
>>
>> Notable Changes
>> ---------------
>>
>> * arch: fix NEON feature detection (#10185 Loic Dachary)
>> * build: adjust build deps for yasm, virtualenv (Jianpeng Ma)
>> * build: improve build dependency tooling (Loic Dachary)
>> * ceph-disk: call partx/partprobe consistently (#9721 Loic Dachary)
>> * ceph-disk: fix dmcrypt key permissions (Loic Dachary)
>> * ceph-disk: fix umount race condition (#10096 Blaine Gardner)
>> * ceph-disk: init=none option (Loic Dachary)
>> * ceph-monstore-tool: fix shutdown (#10093 Loic Dachary)
>> * ceph-objectstore-tool: fix import (#10090 David Zafman)
>> * ceph-objectstore-tool: many improvements and tests (David Zafman)
>> * ceph.spec: package rbd-replay-prep (Ken Dreyer)
>> * common: add 'perf reset ...' admin command (Jianpeng Ma)
>> * common: do not unlock rwlock on destruction (Federico Simoncelli)
>> * common: fix block device discard check (#10296 Sage Weil)
>> * common: remove broken CEPH_LOCKDEP option (Kefu Chai)
>> * crush: fix tree bucket behavior (Rongze Zhu)
>> * doc: add build-doc guidelines for Fedora and CentOS/RHEL (Nilamdyuti
>>   Goswami)
>> * doc: enable rbd cache on openstack deployments (Sebastien Han)
>> * doc: improved installation notes on CentOS/RHEL installs (John Wilkins)
>> * doc: misc cleanups (Adam Spiers, Sebastien Han, Nilamdyuti Goswami, Ken
>>   Dreyer, John Wilkins)
>> * doc: new man pages (Nilamdyuti Goswami)
>> * doc: update release descriptions (Ken Dreyer)
>> * doc: update sepia hardware inventory (Sandon Van Ness)
>> * librados: only export public API symbols (Jason Dillaman)
>> * libradosstriper: fix stat strtoll (Dongmao Zhang)
>> * libradosstriper: fix trunc method (#10129 Sebastien Ponce)
>> * librbd: fix list_children from invalid pool ioctxs (#10123 Jason
>>   Dillaman)
>> * librbd: only export public API symbols (Jason Dillaman)
>> * many coverity fixes (Danny Al-Gaaf)
>> * mds: 'flush journal' admin command (John Spray)
>> * mds: fix MDLog IO callback deadlock (John Spray)
>> * mds: fix deadlock during journal probe vs purge (#10229 Yan, Zheng)
>> * mds: fix race trimming log segments (Yan, Zheng)
>> * mds: store backtrace for stray dir (Yan, Zheng)
>> * mds: verify backtrace when fetching dirfrag (#9557 Yan, Zheng)
>> * mon: add max pgs per osd warning (Sage Weil)
>> * mon: fix *_ratio units and types (Sage Weil)
>> * mon: fix JSON dumps to dump floats as floats and not strings (Sage Weil)
>> * mon: fix formatter 'pg stat' command output (Sage Weil)
>> * msgr: async: several fixes (Haomai Wang)
>> * msgr: simple: fix rare deadlock (Greg Farnum)
>> * osd: batch pg log trim (Xinze Chi)
>> * osd: clean up internal ObjectStore interface (Sage Weil)
>> * osd: do not abort deep scrub on missing hinfo (#10018 Loic Dachary)
>> * osd: fix ghobject_t formatted output to include shard (#10063 Loic
>>   Dachary)
>> * osd: fix osd peer check on scrub messages (#9555 Sage Weil)
>> * osd: fix pgls filter ops (#9439 David Zafman)
>> * osd: flush snapshots from cache tier immediately (Sage Weil)
>> * osd: keyvaluestore: fix getattr semantics (Haomai Wang)
>> * osd: keyvaluestore: fix key ordering (#10119 Haomai Wang)
>> * osd: limit in-flight read requests (Jason Dillaman)
>> * osd: log when scrub or repair starts (Loic Dachary)
>> * osd: support for discard for journal trim (Jianpeng Ma)
>> * qa: fix osd create dup tests (#10083 Loic Dachary)
>> * rgw: add location header when object is in another region (VRan Liu)
>> * rgw: check timestamp on s3 keystone auth (#10062 Abhishek 

[ANN] ceph-deploy 1.5.18 released

2014-10-09 Thread Alfredo Deza
Hi All,

There is a new release of ceph-deploy that includes a fix for a failure
when enabling the OSD service on certain distros.

There is also a new improvement: a monitor keyring is now created if one
is not found when deploying monitors.

The full changelog can be seen here:
http://ceph.com/ceph-deploy/docs/changelog.html#id1

Make sure you upgrade!


-Alfredo


Re: v0.67.11 dumpling released

2014-09-25 Thread Alfredo Deza
On Thu, Sep 25, 2014 at 1:27 PM, Mike Dawson  wrote:
> Looks like the packages have partially hit the repo, but at least the
> following are missing:
>
> Failed to fetch
> http://ceph.com/debian-dumpling/pool/main/c/ceph/librbd1_0.67.11-1precise_amd64.deb
> 404  Not Found
> Failed to fetch
> http://ceph.com/debian-dumpling/pool/main/c/ceph/librados2_0.67.11-1precise_amd64.deb
> 404  Not Found
> Failed to fetch
> http://ceph.com/debian-dumpling/pool/main/c/ceph/python-ceph_0.67.11-1precise_amd64.deb
> 404  Not Found
> Failed to fetch
> http://ceph.com/debian-dumpling/pool/main/c/ceph/ceph_0.67.11-1precise_amd64.deb
> 404  Not Found
> Failed to fetch
> http://ceph.com/debian-dumpling/pool/main/c/ceph/libcephfs1_0.67.11-1precise_amd64.deb
> 404  Not Found
>
> Based on the timestamps of the files that made it, it looks like the process
> to publish the packages isn't still in progress, but rather failed yesterday.

That is odd. I just went ahead and re-pushed the packages and they are
now showing up.

Thanks for letting us know!
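
For anyone wanting to verify a repository push, a quick sketch that
checks the URLs from the report above (Python 2, matching the era;
package names are taken from the quoted errors):

    import urllib2

    base = 'http://ceph.com/debian-dumpling/pool/main/c/ceph/'
    packages = [
        'librbd1_0.67.11-1precise_amd64.deb',
        'librados2_0.67.11-1precise_amd64.deb',
    ]

    for pkg in packages:
        try:
            urllib2.urlopen(base + pkg).close()
            print('%s: OK' % pkg)
        except urllib2.HTTPError as err:
            # a 404 here means the publish process did not finish
            print('%s: HTTP %s' % (pkg, err.code))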


>
> Thanks,
> Mike Dawson
>
>
> On 9/25/2014 11:09 AM, Sage Weil wrote:
>>
>> v0.67.11 "Dumpling"
>> ===================
>>
>> This stable update for Dumpling fixes several important bugs that affect a
>> small set of users.
>>
>> We recommend that all Dumpling users upgrade at their convenience.  If
>> none of these issues are affecting your deployment there is no urgency.
>>
>>
>> Notable Changes
>> ---------------
>>
>> * common: fix sending dup cluster log items (#9080 Sage Weil)
>> * doc: several doc updates (Alfredo Deza)
>> * libcephfs-java: fix build against older JNI headers (Greg Farnum)
>> * librados: fix crash in op timeout path (#9362 Matthias Kiefer, Sage
>> Weil)
>> * librbd: fix crash using clone of flattened image (#8845 Josh Durgin)
>> * librbd: fix error path cleanup when failing to open image (#8912 Josh
>> Durgin)
>> * mon: fix crash when adjusting pg_num before any OSDs are added (#9052
>>Sage Weil)
>> * mon: reduce log noise from paxos (Aanchal Agrawal, Sage Weil)
>> * osd: allow scrub and snap trim thread pool IO priority to be adjusted
>>(Sage Weil)
>> * osd: fix mount/remount sync race (#9144 Sage Weil)
>>
>> Getting Ceph
>> ------------
>>
>> * Git at git://github.com/ceph/ceph.git
>> * Tarball at http://ceph.com/download/ceph-0.67.11.tar.gz
>> * For packages, see http://ceph.com/docs/master/install/get-packages
>> * For ceph-deploy, see
>> http://ceph.com/docs/master/install/install-ceph-deploy


[ANN] ceph-deploy 1.5.14 released

2014-09-10 Thread Alfredo Deza
Hi All,

There is a new bug-fix release of ceph-deploy that helps prevent the
environment-variable issues that can sometimes occur when depending on
them on remote hosts.

It is also now possible to specify public and cluster networks when
creating a new ceph.conf file.
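
For reference, the two options land in the [global] section of the
generated ceph.conf and look like this (subnets are illustrative):

    [global]
    public network = 192.168.1.0/24
    cluster network = 10.0.0.0/24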

Make sure you update!


-Alfredo


[ANN] ceph-deploy 1.5.11 released

2014-08-13 Thread Alfredo Deza
Hi All,

This is a bug-fix release of ceph-deploy; it primarily addresses the
issue of failing to install Ceph on CentOS 7 distros.

Make sure you update!


-Alfredo


[ANN] ceph-deploy 1.5.10 released

2014-08-01 Thread Alfredo Deza
Hi All,

There is a new release of ceph-deploy, the easy deployment tool
for Ceph.

This release comes with a few improvements to how ceph-disk is used on
remote nodes, with more verbose output so it is clearer what is being
executed.

The full list of fixes for this release can be found in the changelog:

http://ceph.com/ceph-deploy/docs/changelog.html#id1


Make sure you update!


Re: some bugs when cluster name is not 'ceph'

2014-07-17 Thread Alfredo Deza
On Wed, Jul 16, 2014 at 12:37 PM, yy  wrote:
> Hi, we found some bugs when cluster name is not 'ceph' in version 0.80.1,
> -
> diff --git a/src/ceph-disk b/src/ceph-disk
> index f79e341..153e344 100755
> --- a/src/ceph-disk
> +++ b/src/ceph-disk
> @@ -1611,6 +1611,8 @@ def start_daemon(
>  [
>  svc,
>  'ceph',
> +'-c',
> +'/etc/ceph/{cluster}.conf'.format(cluster=cluster),
>  'start',
>  'osd.{osd_id}'.format(osd_id=osd_id),
>  ],
> diff --git a/src/ceph_common.sh b/src/ceph_common.sh
> index 01781b7..8d14a3c 100644
> --- a/src/ceph_common.sh
> +++ b/src/ceph_common.sh
> @@ -49,13 +49,13 @@ check_host() {
>  get_conf user "" "user"
>
>  #echo host for $name is $host, i am $hostname
> -
> -if [ -e "/var/lib/ceph/$type/ceph-$id/upstart" ]; then
> +cluster=$1

Are we always passing `$1` here? What happens when `check_host` is
called with no arguments? It seems to me that we should default to
`ceph`, but reading this, it doesn't look like we do.


> +if [ -e "/var/lib/ceph/$type/$cluster-$id/upstart" ]; then
> return 1
>  fi
>
>  # sysvinit managed instance in standard location?
> -if [ -e "/var/lib/ceph/$type/ceph-$id/sysvinit" ]; then
> +if [ -e "/var/lib/ceph/$type/$cluster-$id/sysvinit" ]; then
> host="$hostname"
> echo "=== $type.$id === "
> return 0
> diff --git a/src/init-ceph.in b/src/init-ceph.in
> index 846bd57..24c52d9 100644
> --- a/src/init-ceph.in
> +++ b/src/init-ceph.in
> @@ -189,7 +189,7 @@ for name in $what; do
>  num=$id
>  name="$type.$id"
>
> -check_host || continue
> +check_host $cluster|| continue
>
>  binary="$BINDIR/ceph-$type"
>  cmd="$binary -i $id"
> @@ -231,7 +231,7 @@ for name in $what; do
>  cmd="$cmd -c $conf"
>
>  if echo $name | grep -q ^osd; then
> -   get_conf osd_data "/var/lib/ceph/osd/ceph-$id" "osd data"
> +   get_conf osd_data "/var/lib/ceph/osd/$cluster-$id" "osd data"
> get_conf fs_path "$osd_data" "fs path"  # mount point defaults so osd 
> data
>  get_conf fs_devs "" "devs"
> if [ -z "$fs_devs" ]; then
> @@ -323,7 +323,7 @@ for name in $what; do
> if [ "${update_crush:-1}" = "1" -o "${update_crush:-1}" = 
> "true" ]; then
> # update location in crush
> get_conf osd_location_hook "$BINDIR/ceph-crush-location" 
> "osd crush location hook"
> -   osd_location=`$osd_location_hook --cluster ceph --id $id 
> --type osd`
> +   osd_location=`$osd_location_hook --cluster $cluster --id 
> $id --type osd`
> get_conf osd_weight "" "osd crush initial weight"
> defaultweight="$(df -P -k $osd_data/. | tail -1 | awk '{ 
> print sprintf("%.2f",$2/1073741824) }')"
> get_conf osd_keyring "$osd_data/keyring" "keyring"
> @@ -354,7 +354,7 @@ for name in $what; do
> get_conf mon_data "/var/lib/ceph/mon/ceph-$id" "mon data"
> if [ "$mon_data" = "/var/lib/ceph/mon/ceph-$id" -a "$asok" = 
> "/var/run/ceph/ceph-mon.$id.asok" ]; then
> echo Starting ceph-create-keys on $host...
> -   cmd2="$SBINDIR/ceph-create-keys -i $id 2> /dev/null &"
> +   cmd2="$SBINDIR/ceph-create-keys --cluster $cluster -i $id 
> 2> /dev/null &"
> do_cmd "$cmd2"
> fi
> fi
>
> ---
> Best regards,
> yy,
> eXtreme Spring Network Technology Limited. Co.
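
To make the defaulting question concrete, here is an illustrative
Python sketch of the behaviour under discussion (the real logic lives in
the shell scripts patched above):

    def resolve_cluster(name=None):
        # fall back to the default cluster name when none is passed
        return name or 'ceph'

    def daemon_dir(daemon_type, daemon_id, cluster=None):
        cluster = resolve_cluster(cluster)
        return '/var/lib/ceph/{type}/{cluster}-{id}'.format(
            type=daemon_type, cluster=cluster, id=daemon_id)

    assert daemon_dir('osd', '0') == '/var/lib/ceph/osd/ceph-0'
    assert daemon_dir('osd', '0', 'prod') == '/var/lib/ceph/osd/prod-0'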


[ANN] ceph-deploy 1.5.9 released

2014-07-15 Thread Alfredo Deza
Hi All,

There is a new release of ceph-deploy, the easy deployment tool
for Ceph.

There is a minor cleanup of how ceph-deploy disconnects from remote hosts,
which was creating some tracebacks. And there is a new flag for the `new`
subcommand that allows you to specify an fsid for the cluster.

The full list of fixes for this release can be found in the changelog:

http://ceph.com/ceph-deploy/docs/changelog.html#id1


Make sure you update!


Alfredo


[ANN] ceph-deploy 1.5.8 released

2014-07-10 Thread Alfredo Deza
Hi All,

There is a new bug-fix release of ceph-deploy, the easy deployment tool
for Ceph.

The full list of fixes for this release can be found in the changelog:

http://ceph.com/ceph-deploy/docs/changelog.html#id1


Make sure you update!


[ANN] ceph-deploy 1.5.7 released

2014-07-02 Thread Alfredo Deza
Hi All,

There is a new bug-fix release of ceph-deploy, the easy deployment tool
for Ceph.

The full list of fixes for this release can be found in the changelog:

http://ceph.com/ceph-deploy/docs/changelog.html#id1


Make sure you update!


-Alfredo


Re: [ceph-users] Problem installing ceph from package manager / ceph repositories

2014-06-10 Thread Alfredo Deza
On Tue, Jun 10, 2014 at 4:34 AM, Dan Van Der Ster
 wrote:
> Hi,
>
> On 10 Jun 2014, at 10:30, Karan Singh  wrote:
>
> Hello Cephers
>
> First of all, this problem is not related to ceph-deploy; ceph-deploy 1.5.4
> works like a charm :-), thanks to Alfredo
>
> Problem :
>
> 1. When installing Ceph using the package manager (# yum install ceph or
> # yum update ceph) with the ceph repositories (ceph.repo) configured, the
> package manager does not respect the ceph.repo file and takes the ceph
> package directly from EPEL. But when I disable the EPEL repo and try to
> install ceph again, it takes it from the ceph repo. This is a new problem
> I have not faced before, though I have done several ceph cluster
> installations with the package manager. I don't want the EPEL version of Ceph.
>
>
> You probably need to tweak the repo priorities. We use priority=30 for
> epel.repo, priority=5 for ceph.repo.
>

This is exactly what ceph-deploy is now doing. It is installing the
priorities plugin for yum and then setting a very low number for
ceph.repo (1, I believe).

This is explained in the last release announcement for ceph-deploy and
for more details
you can see this ticket:  http://tracker.ceph.com/issues/8533
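
Concretely, the resulting stanza in /etc/yum.repos.d/ceph.repo looks
something like this (values are illustrative; the yum priorities plugin
must be installed for the priority key to take effect):

    [ceph]
    name=Ceph packages
    baseurl=http://ceph.com/rpm-firefly/el6/$basearch
    enabled=1
    gpgcheck=1
    priority=1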

> Cheers, Dan
>
>
> -- Dan van der Ster || Data & Storage Services || CERN IT Department --
>
>
> 2. When I uninstall ceph from a system, its repository file (i.e. ceph.repo)
> is removed. Is this normal? I have not seen this before.
>
>
> Anyone else facing the same problem, or have a solution to this?
>
>
> Regards
> Karan Singh


ceph-deploy 1.5.4 (addressing packages coming from EPEL)

2014-06-09 Thread Alfredo Deza
Hi All,

We've experienced a lot of issues since EPEL started packaging a
0.80.1-2 version that YUM
will see as higher than 0.80.1 and therefore will choose to install
the EPEL one.

That package has some issues from what we have seen and in most cases
will break the installation
process.

There is a new version of ceph-deploy (1.5.4) that addresses this
problem by setting the priorities
so that the ceph.repo will be considered before the EPEL one.

Some improvements were made to how ceph-deploy parses cephdeploy.conf
files so that priorities can be correctly set (and honored) from there
as well.

The changelog with the details of this release can be found here:

http://ceph.com/ceph-deploy/docs/changelog.html#id1

Make sure you update!


-Alfredo


missing 0.81 from ceph.com/downloads/

2014-06-03 Thread Alfredo Deza
It looks like we missed out on a step to get the 0.81 tarballs to
ceph.com/downloads/

They have just been uploaded. Apologies if you got bit by that!


-Alfredo


[ANN] ceph-deploy 1.5.3 released

2014-06-02 Thread Alfredo Deza
Hi All,

There is a new bug-fix release of ceph-deploy, the easy deployment tool
for Ceph.

The full list of fixes for this release can be found in the changelog:

http://ceph.com/ceph-deploy/docs/changelog.html#id1


Make sure you update!


-Alfredo


[ANN] ceph-deploy 1.5.2 released

2014-05-07 Thread Alfredo Deza
Hi All,

There is a new bug-fix release of ceph-deploy, the easy deployment
tool for Ceph.

This release comes with two important changes:

* fix usage of `--` when removing packages in Debian/Ubuntu
* Default to Firefly when installing Ceph.

Make sure you upgrade!


-Alfredo


Re: [ANN] ceph-deploy 1.5.0 released!

2014-05-01 Thread Alfredo Deza
A minor issue found while attempting to activate OSDs has just been
fixed and released as ceph-deploy v1.5.1.

Even if you haven't encountered this particular issue I do recommend
an upgrade anyway.


Thanks!


Alfredo

On Mon, Apr 28, 2014 at 3:17 PM, Alfredo Deza  wrote:
> Hi All,
>
> There is a new release of ceph-deploy, the easy deployment tool for Ceph.
>
> This release comes with a few bug fixes and a few features:
>
> * implement `osd list`
> * add a status check on OSDs when deploying
> * sync local mirrors to remote hosts when installing
> * support flags and options set in cephdeploy.conf
>
> The full list of changes and fixes is documented at:
>
> http://ceph.com/ceph-deploy/docs/changelog.html#id1
>
> Make sure you update!


[ANN] ceph-deploy 1.5.0 released!

2014-04-28 Thread Alfredo Deza
Hi All,

There is a new release of ceph-deploy, the easy deployment tool for Ceph.

This release comes with a few bug fixes and a few features:

* implement `osd list`
* add a status check on OSDs when deploying
* sync local mirrors to remote hosts when installing
* support flags and options set in cephdeploy.conf

The full list of changes and fixes is documented at:

http://ceph.com/ceph-deploy/docs/changelog.html#id1

Make sure you update!


0.79 dependency issue on RPM packages

2014-04-09 Thread Alfredo Deza
Yesterday we found out that there was a dependency issue for the init
script on CentOS/RHEL
distros where we depend on some functions that are available through
redhat-lsb-core but were
not declared in the ceph.spec file.

This will cause daemons not to start at all since the init script will
attempt to source a file that is not
there (if that package is not installed).

The workaround for this issue is to just install that one package:

sudo yum install redhat-lsb-core

And make sure that `/lib/lsb/init-functions` is present.
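
The same check, as a small Python sketch that can be scripted across
nodes:

    import os

    # the init script sources this file; it ships with redhat-lsb-core
    path = '/lib/lsb/init-functions'
    if not os.path.exists(path):
        raise SystemExit('%s missing: install redhat-lsb-core' % path)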

This should not affect Debian (and Debian based) distros.

Ticket reference: http://tracker.ceph.com/issues/8028


[ANN] ceph-deploy 1.4.0 released!

2014-03-20 Thread Alfredo Deza
Hi All,

There is a new release of ceph-deploy, the easy deployment tool for Ceph.

This release comes with two new features: the ability to add a new
monitor to an existing cluster
and a configuration file to manage custom repositories/mirrors.

As always, you can find all changes documented in the changelog:

http://ceph.com/ceph-deploy/docs/changelog.html#id1

Make sure you update!


-Alfredo


[ANN] ceph-deploy 1.3.5 released!

2014-02-07 Thread Alfredo Deza
Hi All,

There is a new release of ceph-deploy, the easy deployment tool for Ceph.

Although this is primarily a bug-fix release, the library that
ceph-deploy uses to connect to remote hosts (execnet) was updated to the
latest stable release.

A full list of changes can be found in the changelog:

http://ceph.com/ceph-deploy/docs/changelog.html#id1

Thanks!


Alfredo


[ANN] ceph-deploy 1.3.4 released!

2014-01-03 Thread Alfredo Deza
Hi All,

There is a new release of ceph-deploy, the easy deployment tool for Ceph.

This is mostly a bug-fix release, although one minor feature was
added: the ability to
install/remove packages from remote hosts with a new sub-command: `pkg`

As we continue to add features (or improve old ones) we are also
making sure proper
documentation goes hand in hand with those changes too. For `pkg` this
is now documented
in the ceph-deploy docs page: http://ceph.com/ceph-deploy/docs/pkg.html

The complete changelog, including 1.3.4 changes can be found here:
http://ceph.com/ceph-deploy/docs/changelog.html#id1


Make sure you update!


Thanks,


Alfredo


[ANN] ceph-deploy 1.3.3 released!

2013-11-26 Thread Alfredo Deza
Hi All,

There is a new release of ceph-deploy, the easy deployment tool for Ceph.

The most important (non-bug) change for this release is the ability to
specify repository mirrors when installing ceph. This can be done with
environment variables or flags in the `install` subcommand.

Full documentation on that feature can be found in the new location for docs:

http://ceph.com/ceph-deploy/docs/install.html#behind-firewall

The complete changelog can be found here:
http://ceph.com/ceph-deploy/docs/changelog.html#id1

Make sure you update!

Thanks,


Alfredo


[ANN] ceph-deploy 1.3.2 released!

2013-11-13 Thread Alfredo Deza
Hi All,

I'm happy to announce a new release of ceph-deploy, the easy
deployment tool for Ceph.

The only two (very important) changes made for this release are:

* Automatic SSH key copying/generation for hosts that do not have keys
setup when using `ceph-deploy new`

* All installs will now use the latest stable release of Ceph:
Emperor, unless otherwise specified.

With this release we are now also building documentation (and
publishing it!) with every change landing in master. Docs were not a
high priority in the past few months because we wanted to make
ceph-deploy more robust and easier to use, but we are now in a better
place and so docs will get better as we make changes.

The new documentation home is at:  http://ceph.com/ceph-deploy/docs/

And the changelog for this release is:
http://ceph.com/ceph-deploy/docs/changelog.html#id1

The new SSH magic was also documented as part of the release and can
be found here: http://ceph.com/ceph-deploy/docs/index.html#ssh-keys


Thanks!


Alfredo


Re: Mourning the demise of mkcephfs

2013-11-12 Thread Alfredo Deza
On Tue, Nov 12, 2013 at 2:22 AM, Wido den Hollander  wrote:
> On 11/11/2013 06:51 PM, Dave (Bob) wrote:
>>
>> The utility mkcephfs seemed to work, it was very simple to use and
>> apparently effective.
>>
>> It has been deprecated in favour of something called ceph-deploy, which
>> does not work for me.
>>
>> I've ignored the deprecation messages until now, but in going from 70 to
>> 72 I find that mkcephfs has finally gone.
>>
>> I have tried ceph-deploy, and it seems to be tied in to specific
>> 'distributions' in some way.
>>
>> It is unusable for me at present, because it reports:
>>
>> [ceph_deploy][ERROR ] UnsupportedPlatform: Platform is not supported:
>>
>>
>> I therefore need to go back to first principles, but the documentation
>> seems to have dropped descriptions of driving ceph without smoke and
>> mirrors.
>>
>> The direct approach may be more laborious, but at least it would not
>> depend on anything except ceph itself.
>>
>
> I myself am not a very big fan of ceph-deploy either.

Why not? I definitely want to make it better for users, and any
feedback is super useful. What kind of caveats have you found that led
you to not use it (or to use something completely different)?

>  Most installations I
> do are done by bootstrapping the monitors and osds manually.
>
> I have some homebrew scripts for this, but I mainly use Puppet to make sure
> all the packages and configuration is present on the nodes and afterwards
> it's just a matter of adding the OSDs and formatting their disks once.
>
> The guide to bootstrapping a monitor:
> http://eu.ceph.com/docs/master/dev/mon-bootstrap/
>
> When the monitor cluster is running you can start generating cephx keys for
> the OSDs and add them to the cluster:
> http://eu.ceph.com/docs/master/rados/operations/add-or-rm-osds/
>
> I don't know if the docs are 100% correct. I've done this so many times that
> I do a lot of things without even reading the docs, so there might be a typo
> in it somewhere. If so, report it so it can be fixed.
>
> Where I think that ceph-deploy works for a lot of people I fully understand
> that some people just want to manually bootstrap a Ceph cluster from
> scratch.
>
> Wido
>
>
>> Maybe I need to step back a version or two, set up my cluster with
>> mkcephfs, then switch back to the latest to use it.
>>
>> I'll search the documentation again.
>>
>
>
> --
> Wido den Hollander
> 42on B.V.
>
> Phone: +31 (0)20 700 9902
> Skype: contact42on
>


Re: Mourning the demise of mkcephfs

2013-11-12 Thread Alfredo Deza
On Mon, Nov 11, 2013 at 12:51 PM, Dave (Bob)  wrote:
> The utility mkcephfs seemed to work, it was very simple to use and
> apparently effective.
>
> It has been deprecated in favour of something called ceph-deploy, which
> does not work for me.
>
> I've ignored the deprecation messages until now, but in going from 70 to
> 72 I find that mkcephfs has finally gone.
>
> I have tried ceph-deploy, and it seems to be tied in to specific
> 'distributions' in some way.

It is! ceph-deploy needs to know what distribution it is talking to
because different distros will install/uninstall packages with different
package managers. Init scripts will also be different, so this "distro
detection" is a must to provide the same functionality regardless of
which supported distro you might be using.

>
> It is unusable for me at present, because it reports:
>
> [ceph_deploy][ERROR ] UnsupportedPlatform: Platform is not supported:
>

That looks like a bug. For the past few months we have been improving
the log output of ceph-deploy to give as much useful information as
possible.

To get to the bottom of this it would be super helpful to know what
distro you were attempting to connect to, what the actual command was,
and what version of ceph-deploy you were using.

Hopefully, with that information and the (possible) bug fix, more and
more people will find ceph-deploy a robust solution for getting a Ceph
cluster up and running.

>
> I therefore need to go back to first principles, but the documentation
> seems to have dropped descriptions of driving ceph without smoke and
> mirrors.
>
> The direct approach may be more laborious, but at least it would not
> depend on anything except ceph itself.
>
> Maybe I need to step back a version or two, set up my cluster with
> mkcephfs, then switch back to the latest to use it.
>
> I'll search the documentation again.


[ANN] ceph-deploy 1.3.1 released

2013-11-07 Thread Alfredo Deza
Hi All,

There is a new (bug-fix) release of ceph-deploy, the easy deployment
tool for Ceph.

A couple of issues related to GPG keys when installing on Debian and
Debian-based distros were addressed.

A fix was added to how temporary files are moved when overwriting files
like ceph.conf, which was preventing some OSD operations.

The full changelog can be found at:
https://github.com/ceph/ceph-deploy/blob/master/docs/source/changelog.rst

Make sure you update!

Thanks,


Alfredo


Re: [ceph-users] testing ceph

2013-11-04 Thread Alfredo Deza
On Mon, Nov 4, 2013 at 10:56 AM, Trivedi, Narendra
 wrote:
> Bingo! A lot of people are getting this dreadful GenericError and "Failed to
> create 1 OSD". Does anyone know why, despite /etc/ceph being there on each
> node?

/etc/ceph is created by installing ceph on a node, and purgedata will
remove the contents of /etc/ceph/
and not the actual directory in the latest (1.3) version.
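
A sketch of that behaviour (illustrative only, not ceph-deploy's actual
code): empty the directory, but leave the directory itself in place:

    import os
    import shutil

    def purge_contents(path='/etc/ceph'):
        # remove everything inside `path` without removing `path` itself
        for name in os.listdir(path):
            target = os.path.join(path, name)
            if os.path.isdir(target):
                shutil.rmtree(target)
            else:
                os.remove(target)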

> Also, FYI purgedata on multiple nodes doesn't work sometimes, i.e. it
> says it is uninstalled ceph and removed /etc/ceph from all nodes but they
> are there on all nodes except the first one (i.e. the first argument to the
> purgedata command ). Hence sometimes, I have to issue purgedata to
> individual nodes.

That does sound like unexpected behavior from ceph-deploy. Can you share
some logs that demonstrate this? Like I said, /etc/ceph itself is
actually no longer removed in the latest version, just its contents.

And you say "sometimes" as in, this doesn't happen consistently? Or do
you mean something else?

Again, log output and how you got there would be useful to try and
determine what is going on.

>
>
>
> From: ceph-users-boun...@lists.ceph.com
> [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of charles L
> Sent: Monday, November 04, 2013 9:26 AM
> To: ceph-devel@vger.kernel.org; ceph-us...@ceph.com
>
>
> Subject: Re: [ceph-users] testing ceph
>
>
>
>
>
>  Please can somebody help? I'm getting this error.
>
>
>
> ceph@CephAdmin:~$ ceph-deploy osd create server1:sda:/dev/sdj1
>
> [ceph_deploy.cli][INFO  ] Invoked (1.3): /usr/bin/ceph-deploy osd create
> server1:sda:/dev/sdj1
>
> [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks
> server1:/dev/sda:/dev/sdj1
>
> [server1][DEBUG ] connected to host: server1
>
> [server1][DEBUG ] detect platform information from remote host
>
> [server1][DEBUG ] detect machine type
>
> [ceph_deploy.osd][INFO  ] Distro info: Ubuntu 12.04 precise
>
> [ceph_deploy.osd][DEBUG ] Deploying osd to server1
>
> [server1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
>
> [server1][INFO  ] Running command: sudo udevadm trigger
> --subsystem-match=block --action=add
>
> [ceph_deploy.osd][DEBUG ] Preparing host server1 disk /dev/sda journal
> /dev/sdj1 activate True
>
> [server1][INFO  ] Running command: sudo ceph-disk-prepare --fs-type xfs
> --cluster ceph -- /dev/sda /dev/sdj1
>
> [server1][ERROR ] WARNING:ceph-disk:OSD will not be hot-swappable if journal
> is not the same device as the osd data
>
> [server1][ERROR ] Could not create partition 1 from 34 to 2047
>
> [server1][ERROR ] Error encountered; not saving changes.
>
> [server1][ERROR ] ceph-disk: Error: Command '['sgdisk', '--largest-new=1',
> '--change-name=1:ceph data',
> '--partition-guid=1:d3ca8a92-7ba5-412e-abf5-06af958b788d',
> '--typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be', '--', '/dev/sda']'
> returned non-zero exit status 4
>
> [server1][ERROR ] Traceback (most recent call last):
>
> [server1][ERROR ]   File
> "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/process.py", line
> 68, in run
>
> [server1][ERROR ] reporting(conn, result, timeout)
>
> [server1][ERROR ]   File
> "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/log.py", line 13,
> in reporting
>
> [server1][ERROR ] received = result.receive(timeout)
>
> [server1][ERROR ]   File
> "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py",
> line 455, in receive
>
> [server1][ERROR ] raise self._getremoteerror() or EOFError()
>
> [server1][ERROR ] RemoteError: Traceback (most recent call last):
>
> [server1][ERROR ]   File "", line 806, in executetask
>
> [server1][ERROR ]   File "", line 35, in _remote_run
>
> [server1][ERROR ] RuntimeError: command returned non-zero exit status: 1
>
> [server1][ERROR ]
>
> [server1][ERROR ]
>
> [ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare
> --fs-type xfs --cluster ceph -- /dev/sda /dev/sdj1
>
> [ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
>
>
>
>
>
>
>
>
>
>
>
>
>
>> Date: Thu, 31 Oct 2013 10:55:56 +
>> From: joao.l...@inktank.com
>> To: charlesboy...@hotmail.com; ceph-devel@vger.kernel.org
>> Subject: Re: testing ceph
>>
>> On 10/31/2013 04:54 AM, charles L wrote:
>> > Hi,
>> > Please, is this a good setup for a production environment test of ceph? My
>> > focus is on the SSD... should it be partitioned (sdf1,2,3,4) and shared by
>> > the four OSDs on a host? Or is it a better configuration for the SSD to be
>> > just one partition (sdf1) while all OSDs use that one partition?
>> > my setup:
>> > - 6 Servers with one 250gb boot disk for OS(sda),
>> > four-2Tb Disks each for the OSDs i.e Total disks = 6x4 = 24 disks (sdb
>> > -sde)
>> > and one-60GB SSD for Osd Journal(sdf).
>> > -RAM = 32GB on each server with 2 GB network link.
>> > hostname for servers: Server1 -Server6
>>
>> Charles,
>>
>> What you are describing on the ceph.conf below is definitely not a good
>> idea. If you really want to use just one SSD and share it

[ANN] ceph-deploy 1.3 released!

2013-11-01 Thread Alfredo Deza
Hi all,

A new version (1.3) of ceph-deploy is now out. A lot of fixes went into
this release, including the addition of a more robust library to connect
to remote hosts, and it removed the one extra dependency we used.
Installation should be simpler.

The complete changelog can be found at:

https://github.com/ceph/ceph-deploy/blob/master/docs/source/changelog.rst


The highlights for this release are:


* We now allow the use of `--username` to connect to remote hosts,
specifying something different from the current user or what is in the
SSH config.

* Global timeouts for remote commands, so we can disconnect if no input
is received (defaults to 5 minutes), while still allowing more granular
timeouts for commands that just need to run without any expected output.


Please make sure you update (install instructions:
http://github.com/ceph/ceph-deploy/#installation) and use the latest
version!


Thanks,


Alfredo


Re: [ceph-users] Missing Dependency for ceph-deploy 1.2.7

2013-10-16 Thread Alfredo Deza
On Tue, Oct 15, 2013 at 9:54 PM, Luke Jing Yuan  wrote:
> Hi,
>
> I am trying to install/upgrade to 1.2.7 but Ubuntu (Precise) is complaining
> about an unmet dependency, which seems to be python-pushy 0.5.3, which
> appears to be missing. Am I correct to assume so?

That is odd, we still have pushy packages available for the version
that you are having issues with, see:
http://ceph.com/debian-dumpling/pool/main/p/python-pushy/

It might be that you need to update your repos?
>
> Regards,
> Luke
>


Re: OpenStack and ceph integration with puppet

2013-10-10 Thread Alfredo Deza
On Thu, Oct 10, 2013 at 11:43 AM, Loic Dachary  wrote:
>
>
> On 09/10/2013 22:46, Loic Dachary wrote:
>>
>>
>> On 08/10/2013 16:20, Don Talton (dotalton) wrote:> Hi Loic,
>>>
>>
>>> We utilize stackforge's puppet modules to do our heavy lifting, including 
>>> p-openstack, p-cinder, p-glance. There are dependency chains so that 
>>> services will be restarted after configuration changes are made. Since many 
>>> of our customers don't allow their baremetal  nodes Internet access, we've 
>>> added the packages to our APT repo to avoid the version issues with using 
>>> either stock or public packages.
>>>
>>> You can probably find some other useful code the 
>>> https://github.com/CiscoSystems/ repo, including what is needed to 
>>> cohabitate MON/OSD nodes with OpenStack service nodes 
>>> (https://github.com/CiscoSystems/puppet-coe/tree/grizzly/manifests/ceph) 
>>> and more. The primary orchestration is in grizzly-manifests. You can see 
>>> HOWTOs for different deployment scenarios here: 
>>> http://docwiki.cisco.com/wiki/OpenStack:Ceph-COI-Installation.
>>>
>>> Hope this helps some!
>>
>> It does and it's great that all this is documented :-) Although there are a 
>> few modules around, re-using ceph-deploy seems to be the preferred method. I 
>> wonder what Alfredo would suggest. From a previous discussion we had I think 
>> he will suggest to use ceph-disk directly + cli / rest call instead. Looking 
>> at
>>
>> https://github.com/ceph/ceph-deploy/blob/master/ceph_deploy/new.py
>> https://github.com/ceph/ceph-deploy/blob/master/ceph_deploy/mon.py
>> etc.
>>
>> the layer provided by ceph-deploy is indeed thin. But is it something that 
>> needs to be duplicated in a puppet module ?
>>
>
> I took a look at ceph-deploy and it won't rely on sudo if run from root
>
> ceph_deploy/sudo_pushy.py
> def needs_sudo():
> if getpass.getuser() == 'root':
> return False
> return True
>
> and that it won't rely on ssh if the target host is localhost:
>
> ceph_deploy/lib/remoto/connection.py
> def needs_ssh(hostname, _socket=None):
> """
> Obtains remote hostname of the socket and cuts off the domain part
> of its FQDN.
> """
> _socket = _socket or socket
> local_hostname = _socket.gethostname()
> local_short_hostname = local_hostname.split('.')[0]
> if local_hostname == hostname or local_short_hostname == hostname:
> return False
> return True
>
> Since puppet-cephdeploy runs on the target host as root, it means that
>
> puppet-cephdeploy/manifests/init.pp
>   file {"/home/$user/.ssh/authorized_keys":
> ...
> etc.
>
> could probably be avoided since puppet-cephdeploy/manifests/mon.pp runs
>
> command => "/usr/local/bin/ceph-deploy mon create $::hostname",
>
> runs as root, on the target host.

Loic is spot on here. Yes, ceph-deploy will avoid all of those things
described if they are not necessary. The one possible caveat is when
there is an ~/.ssh/config that changes the login user for a remote host
to something else, which ceph-deploy would not be able to detect.

So say you have a `node1` defined in the ssh config with user `foo`
but you are executing ceph-deploy as `root`; that would mean that
ceph-deploy would not run `sudo` commands on the remote host because it
assumes the ssh login is happening as root.

If the manifest is doing all of this work locally, then this is not a
problem, but something to be aware of.
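
An illustrative condensation of the two checks quoted above, showing why
the ssh-config case is invisible to them (hostname node1 and user foo
are examples):

    import getpass
    import socket

    def needs_sudo():
        # local check only: who is running ceph-deploy here
        return getpass.getuser() != 'root'

    def needs_ssh(hostname):
        # local check only: does the target name match this host
        local = socket.gethostname()
        return hostname not in (local, local.split('.')[0])

    # Run as root against a remote 'node1': ssh is used, sudo is skipped,
    # even if ~/.ssh/config silently logs us in as 'foo' on node1.
    print(needs_ssh('node1'), needs_sudo())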
>
> I'm not sure if the distribution of the keys would work though as it relies 
> on files collected by "gatherkeys" which is still a little mysterious for me 
> :-)

gatherkeys will just go to standard locations, look for the generated
keys, and copy them back to where ceph-deploy is executing from. Really,
nothing complex is happening there.


>
> Cheers
>
> --
> Loïc Dachary, Artisan Logiciel Libre
> All that is necessary for the triumph of evil is that good people do nothing.
>
>
>
>
>


[ANN] ceph-deploy 1.2.7 released!

2013-10-08 Thread Alfredo Deza
Hi all,

Version 1.2.7 of ceph-deploy has been released, the easy ceph deployment tool.

As always, a good number of bug fixes went into this release, along with
a wealth of improvements. 1.2.6 was not announced as it was a small bug
fix on top of the previous release.

Installation instructions: https://github.com/ceph/ceph-deploy#installation

This is the list of all fixes that went into this release which can
also be found in the CHANGELOG.rst file in ceph-deploy's git repo:

* Ensure local calls to ceph-deploy do not attempt to ssh.
* ``mon create-initial`` command to deploy all defined mons, wait for them to
  form quorum and finally to ``gatherkeys``.
* Improve help menu for mon commands.
* Add ``--fs-type`` option to ``disk`` and ``osd`` commands (Thanks Benoit
  Knecht)
* Make sure we are using ``--cluster`` for remote configs when starting ceph
* Fix broken ``mon destroy`` calls using the new hostname resolution helper
* Add a helper to catch common monitor errors (reporting the status of a mon)
* Normalize all configuration options in ceph-deploy (Thanks Andrew Woodward)
* Use a ``cuttlefish`` compatible ``mon_status`` command
* Make ``osd activate`` use the new remote connection libraries for improved
  readability.
* Make ``disk zap`` also use the new remote connection libraries.
* Handle any connection errors that may come up when attempting to get into
  remote hosts.

Highlights
----------
* Cuttlefish users will benefit from all the monitor work in ceph-deploy: catch
  common pitfalls and errors when deploying monitors.
* A new sub-command to create monitors was added: ``ceph-deploy mon
  create-initial`` which will deploy all initially defined monitors, will wait
  for them to form quorum and finally will gather the keys when they do,
  *greatly simplifying the steps needed for this*.
* Disk zapping will have **much** better output and should help debugging.


Make sure you update!

Thanks,


Alfredo


(guidelines for) Unit Testing in Python

2013-09-20 Thread Alfredo Deza
Hi all,

Below is a summary of some general guidelines we are going to be
following for unit testing the teuthology code with Python; hopefully
it makes for a good read as well :)

This should be used as a general guideline as to what are the
expectations when writing unit tests.

Before Writing Anything
-----------------------
Make sure you have installed a test runner which you will be using to
run the tests (making sure they pass). We are going to be using
py.test for this and that is the same tool Jenkins will be using when
running unit tests for teuthology (it does the same for ceph-deploy).

To install py.test, as with any library in Python, make sure you have
a virtualenv created for your work on teuthology and have activated
it. There are tools that help you manage virtualenvs (and you can
certainly use those), but for now this will assume you are creating
one in your home directory:

mkdir venvs
virtualenv venvs/teuthology

And now "activate" it:

source ~/venvs/teuthology/bin/activate

This is more or less what the bootstrap script does, making sure you
have all dependencies installed.

Now that you have your virtualenv created and activated, install py.test:

pip install pytest

Note there is no dot in the name for the package, but there is one for
calling the tool:

py.test --version

That should work and tell you it is installed.

How it works
------------
With everything installed and ready to go, change directories to
teuthology/teuthology/test and run
py.test:

py.test

You should see output similar to this:

collected 19 items

test_get_distro.py 
test_misc.py ..
test_safepath.py .


19 tests! The dots signal a pass, when there is a fail you would see
an "F" instead of a dot and the tracebacks for each failed test.


Writing your first test
-----------------------

The Python convention for naming test files is to prefix them with
`test_`. The test runner will then be able to collect them
automatically.

If there is a file not prefixed with `test_` it will not be considered a test.

Similarly, in test files, everything that is a test (a class, method
or function) should be prefixed with `test`.

This is the general convention if you are testing "foo":

Functions:def test_foo():

Classes:class TestFoo(object):

Methods:   def test_foo(self):

Let's assume you have a small function that does some path alterations
(very common in teuthology); this function is called `get_absolute_path`
and returns an absolute path built from 2 arguments: base_path and
trailing_dir.

This is the function:

import os

def get_absolute_path(base_path, trailing_dir=None):
    """
    given a base_path, try to join it with trailing_dir (if any)
    and return an absolute path
    """
    absolute_base_path = os.path.abspath(base_path)
    if trailing_dir:
        # strip leading slashes so os.path.join does not discard base_path
        trailing_dir = trailing_dir.lstrip('/')
        return os.path.join(absolute_base_path, trailing_dir)
    return absolute_base_path

Pretty simple function; now let's go ahead and test it.

Create a test file for it that tells you the location/module you are
testing. Let's assume that our function lives in safepath.py in
teuthology, so we are going to call this new file `test_safepath.py`
and create a class in it.

This is how the most basic test would look:

from teuthology import safepath

class TestGetAbsolutePath(object):

    def test_trailing_dir_gets_joined(self):
        absolute_path = safepath.get_absolute_path('/foo', 'bar')
        assert absolute_path == '/foo/bar'


We imported our module for testing, created a test class and added a
single test method. If we run py.test against it (using the test file
name as the first argument) we would get a pass!
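
Following the same pattern, a second test method in the same class (an
illustrative addition, not from the original module) could pin down the
leading-slash behaviour:

    def test_leading_slash_in_trailing_dir_is_stripped(self):
        absolute_path = safepath.get_absolute_path('/foo', '/bar')
        assert absolute_path == '/foo/bar'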

With py.test you can get very nice readability with plain assertions
like the test method.

The goal is to have test methods that are *meaningful* and concise.
When that test method fails it will be apparent that something about
the trailing dir is failing.

Avoid *like the plague* generic testing names. Just as an example,
this is something you should not do:

class TestSimple(object):

    def test_path(self):
        absolute_path = safepath.get_absolute_path('/foo', 'bar')
        assert absolute_path == '/foo/bar'

"TestSimple" doesn't tell us what you are testing, and "test_path" is
misleading. By properly naming your tests, you can get meaningful
failures!

Also important to note is that you should keep your test methods as
short and concise as possible.

If your test method is huge, that is a sign that something is not right,
and it will be very hard to fix it when it fails.

By making test methods short and concise you are making the intent
clear. The same goes for assertions: try not to "compound" tests by
making a bunch of calls and assertions in the same test method.

If it lo

[ANN] ceph-deploy 1.2.5 released!

2013-09-18 Thread Alfredo Deza
Hi all,

There is a new release of ceph-deploy, the easy ceph deployment tool.

A good number of bug fixes went into this release, along with a wealth
of improvements. Thanks
to all of you who contributed patches and issues, and thanks to Dmitry
Borodaenko and
Andrew Woodward for extensively testing and reporting problems before
this release went out.

Installation instructions: https://github.com/ceph/ceph-deploy#installation

This is the list of all fixes that went into this release which can
also be found in the CHANGELOG.rst file in ceph-deploy's git repo:

* Improve ``osd help`` menu with path information
* Really discourage the use of ``ceph-deploy new [IP]``
* Fix hanging remote requests
* Add ``mon status`` output when creating monitors
* Fix Debian install issue (wrong parameter order) (Thanks Sayid Munawar)
* ``osd`` commands will be more verbose when deploying them
* Issue a warning when provided hosts do not match ``hostname -s`` remotely
* Create two flags for altering/not-altering source repos at install time:
  ``--adjust-repos`` and ``--no-adjust-repos``
* Do not do any ``sudo`` commands if user is root
* Use ``mon status`` for every ``mon`` deployment and detect problems with
  monitors.
* Allow specifying ``host:fqdn/ip`` for all mon commands (Thanks Dmitry
  Borodaenko)
* Be consistent for hostname detection (Thanks Dmitry Borodaenko)
* Fix hanging problem on remote hosts

Highlights
----------
* Dumpling users will benefit from better status information from each
monitor when deploying.
* 2 new flags to enable/disable adding source repositories when
installing (``--adjust-repos`` and ``--no-adjust-repos``) that
  should help environments that have custom repos (or that are
prevented from going to the internet).
* Much better OSD output! This will improve debugging, as every step on
remote hosts is now captured.


Make sure you update!

Thanks,


Alfredo

p.s. What happened to 1.2.4? As we released it we found a problem with
mon commands that could hang, so even though 1.2.4 was released we
decided to cut another release with the fix.


Re: Ceph-deploy (git from today) fails to create osd on host that does not have a mon

2013-09-05 Thread Alfredo Deza
On Thu, Sep 5, 2013 at 2:27 AM, Mark Kirkwood
 wrote:
> On 05/09/13 17:56, Mark Kirkwood wrote:
>>
>>
>>
>> [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks
>> ceph2:/dev/vdb:/dev/vdc
>> [ceph_deploy.osd][INFO  ] Distro info: Ubuntu 12.04 precise
>> [ceph_deploy.osd][DEBUG ] Deploying osd to ceph2
>> [ceph2][INFO  ] write cluster configuration to /etc/ceph/{cluster}.conf
>> [ceph2][INFO  ] keyring file does not exist, creating one at:
>> /var/lib/ceph/bootstrap-osd/ceph.keyring
>> [ceph2][INFO  ] create mon keyring file
>> [ceph2][ERROR ] Traceback (most recent call last):
>> [ceph2][ERROR ]   File
>> "/home/markir/develop/python/ceph-deploy/ceph_deploy/util/decorators.py",
>> line 10, in inner
>> [ceph2][ERROR ]   File
>> "/home/markir/develop/python/ceph-deploy/ceph_deploy/osd.py", line 14, in
>> write_keyring
>> [ceph2][ERROR ] NameError: global name 'key' is not defined
>>
>
> The attached patch seems to fix it.


Woah, good catch Mark. The osd module recently had some changes to
improve the logging on the remote host, and it seems that variable was
left out.

I opened http://tracker.ceph.com/issues/6237 and this should be fixed
(with your patch) today.
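
For context, the failure is the classic pattern reduced below (a
hypothetical reduction, not the actual osd.py code):

    # after a refactor, a helper keeps using a name that is no longer
    # defined in its scope, raising NameError at call time
    def write_keyring(path):
        with open(path, 'wb') as f:
            f.write(key)  # NameError: global name 'key' is not defined

    # the fix is to pass the value in explicitly
    def write_keyring_fixed(path, key):
        with open(path, 'wb') as f:
            f.write(key)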
>
> Cheers
>
> Mark


[ANN] ceph-deploy 1.2.2 released!

2013-08-23 Thread Alfredo Deza
Hi all,

There is a new bug-fix release of ceph-deploy, the easy ceph deployment tool.

Installation instructions: https://github.com/ceph/ceph-deploy#installation

This is the list of all fixes that went into this release which can
also be found in the CHANGELOG.rst file in ceph-deploy's git repo:

* Do not force usage of lsb_release; fall back to
  ``platform.linux_distribution()`` (see the sketch after this list)
* Ease installation in CentOS/Scientific by adding the EPEL repo
  before attempting to install Ceph.
* Graceful handling of pushy connection issues due to host
  address resolution
* Honor the usage of ``--cluster`` when calling osd prepare.
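
The first bullet amounts to something like this sketch (Python 2 era,
where platform.linux_distribution() is still available; the exact
commands ceph-deploy runs may differ):

    import platform
    import subprocess

    def distro_info():
        # prefer lsb_release, but fall back to the stdlib when it is absent
        try:
            name = subprocess.check_output(['lsb_release', '-s', '-i']).strip()
            codename = subprocess.check_output(['lsb_release', '-s', '-c']).strip()
        except OSError:
            name, _version, codename = platform.linux_distribution()
        return name, codename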


Make sure you update!

Thanks,


Alfredo


[ANN] ceph-deploy 1.2.1 released

2013-08-16 Thread Alfredo Deza
Hi all,

There is a new bug-fix release of ceph-deploy, the easy ceph deployment tool.

ceph-deploy can be installed from three different sources depending on
your package manager and distribution, currently available for RPMs,
DEBs and directly as a Python package from the Python Package Index.

Documentation has been updated with installation instructions:

https://github.com/ceph/ceph-deploy#installation

This is the list of all fixes that went into this release which can
also be found in the CHANGELOG.rst file in ceph-deploy's git repo:

* Print the help when no arguments are passed
* Add a ``--version`` flag
* Show the version in the help menu
* Catch ``DeployError`` exceptions nicely with the logger
* Fix blocked command when calling ``mon create``
* Default to ``dumpling`` for installs
* Halt execution on remote exceptions/errors

Make sure you update!

Thanks,


Alfredo