Re: [ceph-users] v0.67 Dumpling released

2013-08-16 Thread Dan Mick
That looks interesting, but I cannot browse without making an account; 
can you make your source freely available?


On 08/14/2013 10:54 PM, Mikaël Cluseau wrote:

Hi lists,

in this release I see that the ceph command is not compatible with
python 3. The changes were not all trivial so I gave up, but for those
using gentoo, I made my ceph git repository available here with an
ebuild that forces the python version to 2.6 or 2.7:

git clone https://git.isi.nc/cloud/cloud-overlay.git

I upgraded from cuttlefish without any problem so good job
ceph-contributors :)

Have a nice day.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


--
Dan Mick, Filesystem Engineering
Inktank Storage, Inc.   http://inktank.com
Ceph docs: http://ceph.com/docs


Re: [ceph-users] v0.67 Dumpling released

2013-08-16 Thread Dan Mick

OK, no worries.  I was just after maximum availability.

On 08/16/2013 08:19 PM, Mikaël Cluseau wrote:

On 08/17/2013 02:06 PM, Dan Mick wrote:

That loosk interesting, but I cannot browse without making an account;
can you make your source freely available?


GitLab's policy is the following:

Public access
If checked, this project can be cloned /without any/ authentication. It
will also be listed on the public access directory
https://git.isi.nc/public. /Any/ user will have Guest
https://git.isi.nc/help/permissions permissions on the repository.


We support oauth (gmail has been extensively tested ;)) but accounts are
blocked by default so I don't know if you'll have instant access even
after an oauth login.




Re: [ceph-users] v0.67 Dumpling released

2013-08-14 Thread Markus Goldberg

Hi,
is it ok to upgrade from 0.66 to 0.67 by just running 'apt-get upgrade' 
and rebooting the nodes one by one?

Thanks.
Regards,
  Markus
On 14.08.2013 07:32, Sage Weil wrote:

Another three months have gone by, and the next stable release of Ceph is
ready: Dumpling!  Thank you to everyone who has contributed to this
release!

This release focuses on a few major themes since v0.61 (Cuttlefish):

  * rgw: multi-site, multi-datacenter support for S3/Swift object storage
  * new RESTful API endpoint for administering the cluster, based on a new
and improved management API and updated CLI
  * mon: stability and performance
  * osd: stability and performance
  * cephfs: open-by-ino support (for improved NFS reexport)
  * improved support for Red Hat platforms
  * use of the Intel CRC32c instruction when available

As with previous stable releases, you can upgrade from previous versions
of Ceph without taking the entire cluster online, as long as a few simple
guidelines are followed.

  * For Dumpling, we have tested upgrades from both Bobtail and Cuttlefish.
If you are running Argonaut, please upgrade to Bobtail and then to
Dumpling.
  * Please upgrade daemons/hosts in the following order:
1. Upgrade ceph-common on all nodes that will use the command line ceph
   utility.
2. Upgrade all monitors (upgrade ceph package, restart ceph-mon
   daemons). This can happen one daemon or host at a time. Note that
    because cuttlefish and dumpling monitors can't talk to each other,
   all monitors should be upgraded in relatively short succession to
   minimize the risk that an untimely failure will reduce availability.
3. Upgrade all osds (upgrade ceph package, restart ceph-osd daemons).
   This can happen one daemon or host at a time.
4. Upgrade radosgw (upgrade radosgw package, restart radosgw daemons).

There are several small compatibility changes between Cuttlefish and
Dumpling, particularly with the CLI interface.  Please see the complete
release notes for a summary of the changes since v0.66 and v0.61
Cuttlefish, and other possible issues that should be considered before
upgrading:

http://ceph.com/docs/master/release-notes/#v0-67-dumpling

Dumpling is the second Ceph release on our new three-month stable release
cycle.  We are very pleased to have pulled everything together on
schedule.  The next stable release, which will be code-named Emperor, is
slated for three months from now (beginning of November).

You can download v0.67 Dumpling from the usual locations:

  * Git at git://github.com/ceph/ceph.git
  * Tarball at http://ceph.com/download/ceph-0.67.tar.gz
  * For Debian/Ubuntu packages, see http://ceph.com/docs/master/install/debian
  * For RPMs, see http://ceph.com/docs/master/install/rpm



--
MfG,
  Markus Goldberg


Markus Goldberg | Universität Hildesheim
| Rechenzentrum
Tel +49 5121 883212 | Marienburger Platz 22, D-31141 Hildesheim, Germany
Fax +49 5121 883205 | email goldb...@uni-hildesheim.de





Re: [ceph-users] v0.67 Dumpling released

2013-08-14 Thread James Harper
 Hi,
 is it ok to upgrade from 0.66 to 0.67 by just running 'apt-get upgrade'
 and rebooting the nodes one by one ?

Is a full reboot required?

James


Re: [ceph-users] v0.67 Dumpling released

2013-08-14 Thread Dan van der Ster
On Wed, Aug 14, 2013 at 11:35 AM, Markus Goldberg
goldb...@uni-hildesheim.de wrote:
 is it ok to upgrade from 0.66 to 0.67 by just running 'apt-get upgrade' and
 rebooting the nodes one by one ?

Did you see http://ceph.com/docs/master/release-notes/#upgrading-from-v0-66 ?


Re: [ceph-users] v0.67 Dumpling released

2013-08-14 Thread peter


Hi Sage,

I just upgraded and everything went quite smoothly with osds, mons and 
mds, good work guys! :)


The only problem I have run into is with radosgw: it is unable to start 
after the upgrade with the following message:


2013-08-14 11:57:25.841310 7ffd0d2ae780  0 ceph version 0.67 (e3b7bc5bce8ab330ec1661381072368af3c218a0), process radosgw, pid 5612
2013-08-14 11:57:25.841328 7ffd0d2ae780 -1 WARNING: libcurl doesn't support curl_multi_wait()
2013-08-14 11:57:25.841335 7ffd0d2ae780 -1 WARNING: cross zone / region transfer performance may be affected
2013-08-14 11:57:25.855427 7ffcef7fe700  2 RGWDataChangesLog::ChangesRenewThread: start
2013-08-14 11:57:25.856138 7ffd0d2ae780 -1 Couldn't init storage provider (RADOS)


ceph auth list returns:

client.radosgw.gateway
key: xx
caps: [mon] allow r
caps: [osd] allow rwx

my config:

[client.radosgw.gateway]
keyring = /etc/ceph/keyring.radosgw.gateway
rgw socket path = /tmp/radosgw.sock
log file = /var/log/ceph/radosgw.log
rgw enable ops log = false
rgw print continue = true
rgw keystone url = http://xx:5000
rgw keystone admin token = password
rgw keystone accepted roles = admin,Member
rgw keystone token cache size = 500
rgw keystone revocation interval = 600
#nss db path = /var/lib/ceph/nss

Also, is the libcurl warning a problem? It seems the libcurl package is 
a bit old on Ubuntu 12.04 LTS:


curl --version
curl 7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap pop3 pop3s rtmp rtsp smtp smtps telnet tftp
Features: GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP

Cheers,

Peter


Re: [ceph-users] v0.67 Dumpling released

2013-08-14 Thread Gregory Farnum
On Wednesday, August 14, 2013, wrote:

 Hi Sage,

 I just upgraded and everything went quite smoothly with osds, mons and
 mds, good work guys! :)

 The only problem I have run into is with radosgw. It is unable to start
 after the upgrade with the following message:

 2013-08-14 11:57:25.841310 7ffd0d2ae780  0 ceph version 0.67 (e3b7bc5bce8ab330ec1661381072368af3c218a0), process radosgw, pid 5612
 2013-08-14 11:57:25.841328 7ffd0d2ae780 -1 WARNING: libcurl doesn't support curl_multi_wait()
 2013-08-14 11:57:25.841335 7ffd0d2ae780 -1 WARNING: cross zone / region transfer performance may be affected
 2013-08-14 11:57:25.855427 7ffcef7fe700  2 RGWDataChangesLog::ChangesRenewThread: start
 2013-08-14 11:57:25.856138 7ffd0d2ae780 -1 Couldn't init storage provider (RADOS)

 ceph auth list returns:

 client.radosgw.gateway
 key: xx
 caps: [mon] allow r
 caps: [osd] allow rwx


That's your problem: the gateway needs rw on the monitor so that it can create
some new pools. That has always been a soft requirement (it would complain
loudly if it needed a pool that didn't already exist), but it became a hard one in
Dumpling. I think that should have been in the extended release notes...?
Anyway, if you update the permissions on the monitor it should be all good.
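
For the record, the fix can be sketched as below. The entity name `client.radosgw.gateway` is taken from Peter's config earlier in the thread, and the exact caps to grant are an assumption based on Greg's description; verify against your own setup:

```
# Give the gateway key rw on the monitors (it previously had only r),
# keeping its existing osd caps, then restart the gateway:
ceph auth caps client.radosgw.gateway mon 'allow rw' osd 'allow rwx'
ceph auth get client.radosgw.gateway   # verify the new caps took effect
service radosgw restart
```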
-Greg


-- 
Software Engineer #42 @ http://inktank.com | http://ceph.com


Re: [ceph-users] v0.67 Dumpling released

2013-08-14 Thread Ian Colle
There are version-specific repos, but you shouldn't need them if you want
the latest.

In fact, http://ceph.com/rpm/ is simply a link to
http://ceph.com/rpm-dumpling

Ian R. Colle
Director of Engineering

Inktank
Cell: +1.303.601.7713
Email: i...@inktank.com



Delivering the Future of Storage

 http://www.linkedin.com/in/ircolle
 http://www.twitter.com/ircolle

On 8/14/13 8:28 AM, Kyle Hutson kylehut...@k-state.edu wrote:

 Ah, didn't realize the repos were version-specific. Thanks Dan!
 
 
 On Wed, Aug 14, 2013 at 9:20 AM, Dan van der Ster daniel.vanders...@cern.ch
 wrote:
 http://ceph.com/rpm-dumpling/el6/x86_64/
 
 
 --
 Dan van der Ster
 CERN IT-DSS
 
 
 On Wednesday, August 14, 2013 at 4:17 PM, Kyle Hutson wrote:
 
  Any suggestions for upgrading CentOS/RHEL? The yum repos don't appear to
 have been updated yet.
 
  I thought maybe with the improved support for Red Hat platforms that
 would be the easy way of going about it.
 
 

Re: [ceph-users] v0.67 Dumpling released

2013-08-14 Thread Kyle Hutson
Thanks for that bit, too, Ian.

For what it's worth, I updated /etc/yum.repos.d/ceph.repo, installed the
latest version (coming from cuttlefish), restarted (monitors first, then
everything else), and everything looks great.
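
For reference, such a repo file can be sketched as follows. The baseurl matches the rpm-dumpling link Dan posted; the section name and gpgkey URL are assumptions to check against the install docs at http://ceph.com/docs/master/install/rpm:

```ini
# /etc/yum.repos.d/ceph.repo -- illustrative sketch, not an official file
[ceph]
name=Ceph packages
baseurl=http://ceph.com/rpm-dumpling/el6/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
```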



Re: [ceph-users] v0.67 Dumpling released

2013-08-14 Thread Terekhov, Mikhail
Would it be possible to generate rpms for the latest OpenSuSE-12.3?

Regards,
Mikhail

From: ceph-users-boun...@lists.ceph.com 
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Ian Colle
Sent: Wednesday, August 14, 2013 2:29 PM
To: Kyle Hutson
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] v0.67 Dumpling released

There are version specific repos, but you shouldn't need them if you want the 
latest.

In fact, http://ceph.com/rpm/ is simply a link to http://ceph.com/rpm-dumpling



Re: [ceph-users] v0.67 Dumpling released

2013-08-14 Thread Jeppesen, Nelson
Sage et al,

This is an exciting release but I must say I'm a bit confused about some of the 
new rgw details.

Questions:

1) I'd like to understand how regions work. I assume that's how you get 
multi-site, multi-datacenter support working, but must the zones still be part of 
the same ceph cluster? 

2) I have two independent zones (intranet and internet). Should they be put in 
the same region by setting 'rgw region root pool = blabla' ? I wasn't sure how 
placement_targets work.

3) When I upgraded my rgw from .61 to .67 I lost access to my data. I used 
'rgw_zone_root_pool' and noticed the zone object changed from zone_info to 
zone_info.default. I did a 'rados cp zone_info zone_info.default --pool blabla'. 
That fixed it, but I'm not sure whether that's the correct fix.

4) In zone_info.default I see the following at the end:

..."system_key": { "access_key": "",
  "secret_key": ""},
  "placement_pools": []}

What are these for exactly, and should they be set? Or are they just 
placeholders for the E release?

Thanks and keep up the great work!


Re: [ceph-users] v0.67 Dumpling released

2013-08-14 Thread Mikaël Cluseau

Hi lists,

in this release I see that the ceph command is not compatible with 
python 3. The changes were not all trivial so I gave up, but for those 
using gentoo, I made my ceph git repository available here with an 
ebuild that forces the python version to 2.6 or 2.7:


git clone https://git.isi.nc/cloud/cloud-overlay.git
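
To give a sense of why such a port is non-trivial, here is an illustrative contrast of Python 2 idioms of that era with their Python 3 equivalents. This is an invented example, not code from the ceph tree:

```python
# Hypothetical snippet showing common Python 2 -> 3 migration points.
caps = {"mon": "allow r", "osd": "allow rwx"}

# Python 2 code commonly used dict.iteritems() and the print statement;
# both are errors on Python 3:
for entity, cap in sorted(caps.items()):   # was: caps.iteritems()
    print("%s: %s" % (entity, cap))        # was: print "%s: %s" % (...)

# The harder part is bytes vs str: data read from sockets is bytes in
# Python 3 and must be decoded explicitly before string handling.
reply = b"HEALTH_OK"
status = reply.decode("utf-8")
```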

I upgraded from cuttlefish without any problem so good job 
ceph-contributors :)


Have a nice day.


[ceph-users] v0.67 Dumpling released

2013-08-13 Thread Sage Weil
Another three months have gone by, and the next stable release of Ceph is 
ready: Dumpling!  Thank you to everyone who has contributed to this 
release!

This release focuses on a few major themes since v0.61 (Cuttlefish):

 * rgw: multi-site, multi-datacenter support for S3/Swift object storage
 * new RESTful API endpoint for administering the cluster, based on a new 
   and improved management API and updated CLI
 * mon: stability and performance
 * osd: stability and performance
 * cephfs: open-by-ino support (for improved NFS reexport)
 * improved support for Red Hat platforms
 * use of the Intel CRC32c instruction when available

As with previous stable releases, you can upgrade from previous versions 
of Ceph without taking the entire cluster online, as long as a few simple 
guidelines are followed.

 * For Dumpling, we have tested upgrades from both Bobtail and Cuttlefish.  
   If you are running Argonaut, please upgrade to Bobtail and then to 
   Dumpling.
 * Please upgrade daemons/hosts in the following order:
   1. Upgrade ceph-common on all nodes that will use the command line ceph 
  utility.
   2. Upgrade all monitors (upgrade ceph package, restart ceph-mon 
  daemons). This can happen one daemon or host at a time. Note that 
  because cuttlefish and dumpling monitors can't talk to each other, 
  all monitors should be upgraded in relatively short succession to
  minimize the risk that an untimely failure will reduce availability.
   3. Upgrade all osds (upgrade ceph package, restart ceph-osd daemons). 
  This can happen one daemon or host at a time.
   4. Upgrade radosgw (upgrade radosgw package, restart radosgw daemons).
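
On Debian/Ubuntu, the per-host steps above can be sketched as follows. The sysvinit-style service commands are an assumption; adapt them to your distribution and init system:

```
# On every node that uses the CLI:
apt-get update && apt-get install -y ceph-common

# Then per host, in mon -> osd -> radosgw order:
apt-get install -y ceph            # upgrade the ceph package
service ceph restart mon           # monitor hosts: restart mons in quick succession
service ceph restart osd           # OSD hosts: one host at a time
service radosgw restart            # gateway hosts last
```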

There are several small compatibility changes between Cuttlefish and 
Dumpling, particularly with the CLI interface.  Please see the complete 
release notes for a summary of the changes since v0.66 and v0.61 
Cuttlefish, and other possible issues that should be considered before 
upgrading:

   http://ceph.com/docs/master/release-notes/#v0-67-dumpling

Dumpling is the second Ceph release on our new three-month stable release 
cycle.  We are very pleased to have pulled everything together on 
schedule.  The next stable release, which will be code-named Emperor, is 
slated for three months from now (beginning of November).

You can download v0.67 Dumpling from the usual locations:

 * Git at git://github.com/ceph/ceph.git
 * Tarball at http://ceph.com/download/ceph-0.67.tar.gz
 * For Debian/Ubuntu packages, see http://ceph.com/docs/master/install/debian
 * For RPMs, see http://ceph.com/docs/master/install/rpm