Re: [ceph-users] v9.1.0 Infernalis release candidate released

2015-10-16 Thread Alfredo Deza
Trusty packages have just been pushed out and should be ready to use right
away. We didn't realize they were missing until today, sorry!

-Alfredo

On Wed, Oct 14, 2015 at 9:24 PM, Sage Weil  wrote:
> On Thu, 15 Oct 2015, Francois Lafont wrote:
>
>> Sorry, another remark.
>>
>> On 13/10/2015 23:01, Sage Weil wrote:
>>
>> > The v9.1.0 packages are pushed to the development release repositories::
>> >
>> >   http://download.ceph.com/rpm-testing
>> >   http://download.ceph.com/debian-testing
>>
>> I don't see 9.1.0 available for Ubuntu Trusty:
>>
>> 
>> http://download.ceph.com/debian-testing/dists/trusty/main/binary-amd64/Packages
>> (the string "9.1" is not currently present on this page)
>>
>> The 9.0.3 is available but, after a quick test, this version of
>> the package doesn't create the ceph unix account.
>
> You're right.. I see jessie but not trusty in the archive.  Alfredo, can
> you verify it synced properly?
>
> Thanks!
> sage


Re: [ceph-users] v9.1.0 Infernalis release candidate released

2015-10-14 Thread Sage Weil
On Wed, 14 Oct 2015, Kyle Hutson wrote:
> > Which bug?  We want to fix hammer, too!
> 
> This
> one: https://www.mail-archive.com/ceph-users@lists.ceph.com/msg23915.html
> 
> (Adam sits about 5' from me.)

Oh... that fix is already in the hammer branch and will be in 0.94.4.  
Since you have to go to that anyway before infernalis you may as well stop 
there (unless there is something else you want from infernalis!).

sage


Re: [ceph-users] v9.1.0 Infernalis release candidate released

2015-10-14 Thread Kyle Hutson
Nice! Thanks!

On Wed, Oct 14, 2015 at 1:23 PM, Sage Weil  wrote:

> On Wed, 14 Oct 2015, Kyle Hutson wrote:
> > > Which bug?  We want to fix hammer, too!
> >
> > This
> > one:
> https://www.mail-archive.com/ceph-users@lists.ceph.com/msg23915.html
> >
> > (Adam sits about 5' from me.)
>
> Oh... that fix is already in the hammer branch and will be in 0.94.4.
> Since you have to go to that anyway before infernalis you may as well stop
> there (unless there is something else you want from infernalis!).
>
> sage


Re: [ceph-users] v9.1.0 Infernalis release candidate released

2015-10-14 Thread Kyle Hutson
A couple of questions related to this, especially since we have a hammer
bug that's biting us so we're anxious to upgrade to Infernalis.

1) RE: librbd and librados ABI compatibility is broken.  Be careful installing
this RC on client machines (e.g., those running qemu). It will be fixed in
the final v9.2.0 release.

We have several qemu clients. If we upgrade the ceph servers (and not the
qemu clients), will this affect us?

2) RE: Upgrading directly from Firefly v0.80.z is not possible.  All
clusters must first upgrade to Hammer v0.94.4 or a later v0.94.z release;
only then is it possible to upgrade to Infernalis 9.2.z.

I think I understand this, but want to verify. We're on 0.94.3. Can we
upgrade to the RC 9.1.0 and then safely upgrade to 9.2.z when it is
finalized? Any foreseen issues with this upgrade path?

On Wed, Oct 14, 2015 at 7:30 AM, Sage Weil  wrote:

> On Wed, 14 Oct 2015, Dan van der Ster wrote:
> > Hi Goncalo,
> >
> > On Wed, Oct 14, 2015 at 6:51 AM, Goncalo Borges
> >  wrote:
> > > Hi Sage...
> > >
> > > I've seen that the rh6 derivatives have been ruled out.
> > >
> > > This is a problem in our case since the OS choice in our systems is,
> > > somehow, imposed by CERN. The experiments software is certified for
> SL6 and
> > > the transition to SL7 will take some time.
> >
> > Are you accessing Ceph directly from "physics" machines? Here at CERN
> > we run CentOS 7 on the native clients (e.g. qemu-kvm hosts) and by the
> > time we upgrade to Infernalis the servers will all be CentOS 7 as
> > well. Batch nodes running SL6 don't (currently) talk to Ceph directly
> > (in the future they might talk to Ceph-based storage via an xroot
> > gateway). But if there are use-cases then perhaps we could find a
> > place to build and distribute the newer ceph clients.
> >
> > There's a ML ceph-t...@cern.ch where we could take this discussion.
> > Mail me if you have trouble joining that e-Group.
>
> Also note that it *is* possible to build infernalis on el6, but it
> requires a lot more effort... enough that we would rather spend our time
> elsewhere (at least as far as ceph.com packages go).  If someone else
> wants to do that work we'd be happy to take patches to update the build and/or
> release process.
>
> IIRC the thing that eventually made me stop going down this path was the
> fact that the newer gcc had a runtime dependency on the newer libstdc++,
> which wasn't part of the base distro... which means we'd need also to
> publish those packages in the ceph.com repos, or users would have to
> add some backport repo or ppa or whatever to get things running.  Bleh.
>
> sage
>
>
> >
> > Cheers, Dan
> > CERN IT-DSS
> >
> > > This is kind of a showstopper, especially if we can't deploy clients in
> SL6 /
> > > Centos6.
> > >
> > > Is there any alternative?
> > >
> > > TIA
> > > Goncalo
> > >
> > >
> > >
> > > On 10/14/2015 08:01 AM, Sage Weil wrote:
> > >>
> > >> This is the first Infernalis release candidate.  There have been some
> > >> major changes since hammer, and the upgrade process is non-trivial.
> > >> Please read carefully.
> > >>
> > >> Getting the release candidate
> > >> -
> > >>
> > >> The v9.1.0 packages are pushed to the development release
> repositories::
> > >>
> > >>http://download.ceph.com/rpm-testing
> > >>http://download.ceph.com/debian-testing
> > >>
> > >> For more info, see::
> > >>
> > >>http://docs.ceph.com/docs/master/install/get-packages/
> > >>
> > >> Or install with ceph-deploy via::
> > >>
> > >>ceph-deploy install --testing HOST
> > >>
> > >> Known issues
> > >> 
> > >>
> > >> * librbd and librados ABI compatibility is broken.  Be careful
> > >>installing this RC on client machines (e.g., those running qemu).
> > >>It will be fixed in the final v9.2.0 release.
> > >>
> > >> Major Changes from Hammer
> > >> -
> > >>
> > >> * *General*:
> > >>* Ceph daemons are now managed via systemd (with the exception of
> > >>  Ubuntu Trusty, which still uses upstart).
> > >>* Ceph daemons run as 'ceph' user instead of root.
> > >>* On Red Hat distros, there is also an SELinux policy.
> > >> * *RADOS*:
> > >>* The RADOS cache tier can now proxy write operations to the base
> > >>  tier, allowing writes to be handled without forcing migration of
> > >>  an object into the cache.
> > >>* The SHEC erasure coding support is no longer flagged as
> > >>  experimental. SHEC trades some additional storage space for
> faster
> > >>  repair.
> > >>* There is now a unified queue (and thus prioritization) of client
> > >>  IO, recovery, scrubbing, and snapshot trimming.
> > >>* There have been many improvements to low-level repair tooling
> > >>  (ceph-objectstore-tool).
> > >>* The internal ObjectStore API has been significantly cleaned up in
> > >> order
> > >>  to facilitate new storage backends like NewStore.

Re: [ceph-users] v9.1.0 Infernalis release candidate released

2015-10-14 Thread Kyle Hutson
> Which bug?  We want to fix hammer, too!

This one:
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg23915.html

(Adam sits about 5' from me.)


Re: [ceph-users] v9.1.0 Infernalis release candidate released

2015-10-14 Thread Sage Weil
On Wed, 14 Oct 2015, Kyle Hutson wrote:
> A couple of questions related to this, especially since we have a hammer
> bug that's biting us so we're anxious to upgrade to Infernalis.

Which bug?  We want to fix hammer, too!

> 1) RE: librbd and librados ABI compatibility is broken.  Be careful installing
> this RC on client machines (e.g., those running qemu). It will be fixed in
> the final v9.2.0 release.
> 
> We have several qemu clients. If we upgrade the ceph servers (and not the
> qemu clients), will this affect us?

Nope! That will be fine.
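(If you want to double-check what the qemu hosts are actually running,
something like the following works; the package names below assume
Debian/Ubuntu, on RPM-based distros use rpm -q instead::

  dpkg -l librbd1 librados2

i.e., the clients should keep reporting 0.94.x while the servers move to
9.1.0.)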

> 2) RE: Upgrading directly from Firefly v0.80.z is not possible.  All
> clusters must first upgrade to Hammer v0.94.4 or a later v0.94.z release;
> only then is it possible to upgrade to Infernalis 9.2.z.
> 
> I think I understand this, but want to verify. We're on 0.94.3. Can we
> upgrade to the RC 9.1.0 and then safely upgrade to 9.2.z when it is
> finalized? Any foreseen issues with this upgrade path?

You need to first upgrade to 0.94.4 (or the latest hammer branch) before going 
to 9.1.0.  You'll of course be able to upgrade from there to 9.2.z.
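Roughly, the sequence would look like this (a sketch using the ceph-deploy
invocations from the release notes; HOST is a placeholder for each node, and
the usual per-release upgrade notes still apply)::

  # 1. hammer 0.94.3 -> 0.94.4 (or the latest hammer build until 0.94.4 is out)
  ceph-deploy install --dev hammer HOST

  # 2. hammer 0.94.4 -> the 9.1.0 release candidate
  ceph-deploy install --testing HOST

  # 3. 9.1.0 -> the final infernalis 9.2.z, once it is released
  ceph-deploy install --stable infernalis HOST

Restart the daemons (mons first, then osds) after each step.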

sage

> 
> On Wed, Oct 14, 2015 at 7:30 AM, Sage Weil  wrote:
> 
> > On Wed, 14 Oct 2015, Dan van der Ster wrote:
> > > Hi Goncalo,
> > >
> > > On Wed, Oct 14, 2015 at 6:51 AM, Goncalo Borges
> > >  wrote:
> > > > Hi Sage...
> > > >
> > > > I've seen that the rh6 derivatives have been ruled out.
> > > >
> > > > This is a problem in our case since the OS choice in our systems is,
> > > > somehow, imposed by CERN. The experiments software is certified for
> > SL6 and
> > > > the transition to SL7 will take some time.
> > >
> > > Are you accessing Ceph directly from "physics" machines? Here at CERN
> > > we run CentOS 7 on the native clients (e.g. qemu-kvm hosts) and by the
> > > time we upgrade to Infernalis the servers will all be CentOS 7 as
> > > well. Batch nodes running SL6 don't (currently) talk to Ceph directly
> > > (in the future they might talk to Ceph-based storage via an xroot
> > > gateway). But if there are use-cases then perhaps we could find a
> > > place to build and distribute the newer ceph clients.
> > >
> > > There's a ML ceph-t...@cern.ch where we could take this discussion.
> > > Mail me if you have trouble joining that e-Group.
> >
> > Also note that it *is* possible to build infernalis on el6, but it
> > requires a lot more effort... enough that we would rather spend our time
> > elsewhere (at least as far as ceph.com packages go).  If someone else
> > wants to do that work we'd be happy to take patches to update the build and/or
> > release process.
> >
> > IIRC the thing that eventually made me stop going down this path was the
> > fact that the newer gcc had a runtime dependency on the newer libstdc++,
> > which wasn't part of the base distro... which means we'd need also to
> > publish those packages in the ceph.com repos, or users would have to
> > add some backport repo or ppa or whatever to get things running.  Bleh.
> >
> > sage
> >
> >
> > >
> > > Cheers, Dan
> > > CERN IT-DSS
> > >
> > > > This is kind of a showstopper, especially if we can't deploy clients in
> > SL6 /
> > > > Centos6.
> > > >
> > > > Is there any alternative?
> > > >
> > > > TIA
> > > > Goncalo
> > > >
> > > >
> > > >
> > > > On 10/14/2015 08:01 AM, Sage Weil wrote:
> > > >>
> > > >> This is the first Infernalis release candidate.  There have been some
> > > >> major changes since hammer, and the upgrade process is non-trivial.
> > > >> Please read carefully.
> > > >>
> > > >> Getting the release candidate
> > > >> -
> > > >>
> > > >> The v9.1.0 packages are pushed to the development release
> > repositories::
> > > >>
> > > >>http://download.ceph.com/rpm-testing
> > > >>http://download.ceph.com/debian-testing
> > > >>
> > > >> For more info, see::
> > > >>
> > > >>http://docs.ceph.com/docs/master/install/get-packages/
> > > >>
> > > >> Or install with ceph-deploy via::
> > > >>
> > > >>ceph-deploy install --testing HOST
> > > >>
> > > >> Known issues
> > > >> 
> > > >>
> > > >> * librbd and librados ABI compatibility is broken.  Be careful
> > > >>installing this RC on client machines (e.g., those running qemu).
> > > >>It will be fixed in the final v9.2.0 release.
> > > >>
> > > >> Major Changes from Hammer
> > > >> -
> > > >>
> > > >> * *General*:
> > > >>* Ceph daemons are now managed via systemd (with the exception of
> > > >>  Ubuntu Trusty, which still uses upstart).
> > > >>* Ceph daemons run as 'ceph' user instead of root.
> > > >>* On Red Hat distros, there is also an SELinux policy.
> > > >> * *RADOS*:
> > > >>* The RADOS cache tier can now proxy write operations to the base
> > > >>  tier, allowing writes to be handled without forcing migration of
> > > >>  an object into the cache.
> > > >>* The SHEC erasure coding support is no longer flagged as
> > > 

Re: [ceph-users] v9.1.0 Infernalis release candidate released

2015-10-14 Thread Dan van der Ster
Hi Goncalo,

On Wed, Oct 14, 2015 at 6:51 AM, Goncalo Borges
 wrote:
> Hi Sage...
>
> I've seen that the rh6 derivatives have been ruled out.
>
> This is a problem in our case since the OS choice in our systems is,
> somehow, imposed by CERN. The experiments software is certified for SL6 and
> the transition to SL7 will take some time.

Are you accessing Ceph directly from "physics" machines? Here at CERN
we run CentOS 7 on the native clients (e.g. qemu-kvm hosts) and by the
time we upgrade to Infernalis the servers will all be CentOS 7 as
well. Batch nodes running SL6 don't (currently) talk to Ceph directly
(in the future they might talk to Ceph-based storage via an xroot
gateway). But if there are use-cases then perhaps we could find a
place to build and distribute the newer ceph clients.

There's a ML ceph-t...@cern.ch where we could take this discussion.
Mail me if you have trouble joining that e-Group.

Cheers, Dan
CERN IT-DSS

> This is kind of a showstopper, especially if we can't deploy clients in SL6 /
> Centos6.
>
> Is there any alternative?
>
> TIA
> Goncalo
>
>
>
> On 10/14/2015 08:01 AM, Sage Weil wrote:
>>
>> This is the first Infernalis release candidate.  There have been some
>> major changes since hammer, and the upgrade process is non-trivial.
>> Please read carefully.
>>
>> Getting the release candidate
>> -
>>
>> The v9.1.0 packages are pushed to the development release repositories::
>>
>>http://download.ceph.com/rpm-testing
>>http://download.ceph.com/debian-testing
>>
>> For more info, see::
>>
>>http://docs.ceph.com/docs/master/install/get-packages/
>>
>> Or install with ceph-deploy via::
>>
>>ceph-deploy install --testing HOST
>>
>> Known issues
>> 
>>
>> * librbd and librados ABI compatibility is broken.  Be careful
>>installing this RC on client machines (e.g., those running qemu).
>>It will be fixed in the final v9.2.0 release.
>>
>> Major Changes from Hammer
>> -
>>
>> * *General*:
>>* Ceph daemons are now managed via systemd (with the exception of
>>  Ubuntu Trusty, which still uses upstart).
>>* Ceph daemons run as 'ceph' user instead of root.
>>* On Red Hat distros, there is also an SELinux policy.
>> * *RADOS*:
>>* The RADOS cache tier can now proxy write operations to the base
>>  tier, allowing writes to be handled without forcing migration of
>>  an object into the cache.
>>* The SHEC erasure coding support is no longer flagged as
>>  experimental. SHEC trades some additional storage space for faster
>>  repair.
>>* There is now a unified queue (and thus prioritization) of client
>>  IO, recovery, scrubbing, and snapshot trimming.
>>* There have been many improvements to low-level repair tooling
>>  (ceph-objectstore-tool).
>>* The internal ObjectStore API has been significantly cleaned up in
>> order
>>  to facilitate new storage backends like NewStore.
>> * *RGW*:
>>* The Swift API now supports object expiration.
>>* There are many Swift API compatibility improvements.
>> * *RBD*:
>>* The ``rbd du`` command shows actual usage (quickly, when
>>  object-map is enabled).
>>* The object-map feature has seen many stability improvements.
>>* Object-map and exclusive-lock features can be enabled or disabled
>>  dynamically.
>>* You can now store user metadata and set persistent librbd options
>>  associated with individual images.
>>* The new deep-flatten feature allows flattening of a clone and all
>>  of its snapshots.  (Previously snapshots could not be flattened.)
>>* The export-diff command is now faster (it uses aio).  There
>> is also
>>  a new fast-diff feature.
>>* The --size argument can be specified with a suffix for units
>>  (e.g., ``--size 64G``).
>>* There is a new ``rbd status`` command that, for now, shows who has
>>  the image open/mapped.
>> * *CephFS*:
>>* You can now rename snapshots.
>>* There have been ongoing improvements around administration,
>> diagnostics,
>>  and the check and repair tools.
>>* The caching and revocation of client cache state due to unused
>>  inodes has been dramatically improved.
>>* The ceph-fuse client behaves better on 32-bit hosts.
>>
>> Distro compatibility
>> 
>>
>> We have decided to drop support for many older distributions so that we
>> can
>> move to a newer compiler toolchain (e.g., C++11).  Although it is still
>> possible
>> to build Ceph on older distributions by installing backported development
>> tools,
>> we are not building and publishing release packages for ceph.com.
>>
>> In particular,
>>
>> * CentOS 7 or later; we have dropped support for CentOS 6 (and other
>>RHEL 6 derivatives, like Scientific Linux 6).
>> * Debian Jessie 8.x or later; Debian Wheezy 7.x's g++ has incomplete
>>    support for C++11 (and no systemd).

Re: [ceph-users] v9.1.0 Infernalis release candidate released

2015-10-14 Thread Francois Lafont
Hi, and thanks to all for this good news. ;)

On 13/10/2015 23:01, Sage Weil wrote:

>#. Fix the data ownership during the upgrade.  This is the preferred 
> option,
>   but is more work.  The process for each host would be to:
> 
>   #. Upgrade the ceph package.  This creates the ceph user and group.  For
>example::
> 
>  ceph-deploy install --stable infernalis HOST
> 
>   #. Stop the daemon(s).::
> 
>  service ceph stop   # fedora, centos, rhel, debian
>  stop ceph-all   # ubuntu
>  
>   #. Fix the ownership::
> 
>  chown -R ceph:ceph /var/lib/ceph
> 
>   #. Restart the daemon(s).::
> 
>  start ceph-all# ubuntu
>  systemctl start ceph.target   # debian, centos, fedora, rhel

With this (preferred) option, if I understand correctly, I should
repeat the commands above host by host. Personally, my monitors
are hosted on the OSD servers (I have no dedicated monitor server).
So, with this option, I will have osd daemons upgraded before
monitor daemons. Is that a problem?

I ask because, during a migration to a new release, it is
generally recommended to upgrade _all_ the monitors before
upgrading the first osd daemon.

-- 
François Lafont


Re: [ceph-users] v9.1.0 Infernalis release candidate released

2015-10-14 Thread Sage Weil
On Thu, 15 Oct 2015, Goncalo Borges wrote:
> Hi Sage, Dan...
> 
> In our case, we have strongly invested in the testing of CephFS. It seems like a
> good solution to some of the issues we currently experience regarding the use
> cases from our researchers.
> 
> While I do not see a problem in deploying a Ceph cluster in SL7, I suspect that
> we will need CephFS clients in SL6 for quite some time. The problem here is
> that our researchers use a whole bunch of software provided by the CERN
> experiments to generate MC data or analyse experimental data. This software is
> currently certified for SL6 and I think that a SL7 version will take a
> considerable amount of time. So we need a CephFS client that allows our
> researchers to access and analyse the data in that environment.
> 
> If you guys did not think it was worth the effort to build for those
> flavors, that tells me this is a complicated task that, most
> probably, I cannot do on my own.

I don't think it will be much of a problem.

First, if you're using the CephFS kernel client, the important bit is the 
kernel--you'll want something quite recent.  The OS doesn't really matter 
much.  The only piece that is of any use is mount.ceph, but it is 
optional.  It only does two semi-useful things: it resolves DNS if you 
identify your monitor(s) with something other than an IP (and actually the 
kernel can do this too if it's built with the right options) and it will 
turn a '-o secretfile=<file>' into a '-o 
secret=<key>'.  In other words, it's optional, although without it it is 
awkward to avoid putting the ceph key directly in /etc/fstab.  In any case, 
it's trivial to build that binary and install/distribute it in some other 
manner.
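For illustration, the two fstab forms look something like this (monitor
names, the mount point, and the key are placeholders)::

  # with mount.ceph: DNS names work and the key can live in a root-only file
  mon1,mon2,mon3:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0 0

  # without mount.ceph: use monitor IPs and put the key directly in the options
  192.168.0.1,192.168.0.2,192.168.0.3:/  /mnt/cephfs  ceph  name=admin,secret=<base64 key>,noatime,_netdev  0 0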

Or, you can build the ceph packages with the newer gcc.. it isn't 
that painful.  I stopped because I didn't want to have us distributing 
newer versions of the libstdc++ libraries in the ceph repositories.

If you're talking about using libcephfs or ceph-fuse, then building those 
packages is inevitable... but probably not that onerous.

sage



> 
> I am currently interacting with Dan and other colleagues in a CERN mailing
> list. Let us see what would be the outcome of that discussion.
> 
> But at the moment I am open to suggestions.
> 
> TIA
> Goncalo
> 
> On 10/14/2015 11:30 PM, Sage Weil wrote:
> > On Wed, 14 Oct 2015, Dan van der Ster wrote:
> > > Hi Goncalo,
> > > 
> > > On Wed, Oct 14, 2015 at 6:51 AM, Goncalo Borges
> > >  wrote:
> > > > Hi Sage...
> > > > 
> > > > I've seen that the rh6 derivatives have been ruled out.
> > > > 
> > > > This is a problem in our case since the OS choice in our systems is,
> > > > somehow, imposed by CERN. The experiments software is certified for SL6
> > > > and
> > > > the transition to SL7 will take some time.
> > > Are you accessing Ceph directly from "physics" machines? Here at CERN
> > > we run CentOS 7 on the native clients (e.g. qemu-kvm hosts) and by the
> > > time we upgrade to Infernalis the servers will all be CentOS 7 as
> > > well. Batch nodes running SL6 don't (currently) talk to Ceph directly
> > > (in the future they might talk to Ceph-based storage via an xroot
> > > gateway). But if there are use-cases then perhaps we could find a
> > > place to build and distribute the newer ceph clients.
> > > 
> > > There's a ML ceph-t...@cern.ch where we could take this discussion.
> > > Mail me if you have trouble joining that e-Group.
> > Also note that it *is* possible to build infernalis on el6, but it
> > requires a lot more effort... enough that we would rather spend our time
> > elsewhere (at least as far as ceph.com packages go).  If someone else
> > wants to do that work we'd be happy to take patches to update the build and/or
> > release process.
> > 
> > IIRC the thing that eventually made me stop going down this path was the
> > fact that the newer gcc had a runtime dependency on the newer libstdc++,
> > which wasn't part of the base distro... which means we'd need also to
> > publish those packages in the ceph.com repos, or users would have to
> > add some backport repo or ppa or whatever to get things running.  Bleh.
> > 
> > sage
> > 
> > 
> > > Cheers, Dan
> > > CERN IT-DSS
> > > 
> > > > This is kind of a showstopper, especially if we can't deploy clients in
> > > > SL6 /
> > > > Centos6.
> > > > 
> > > > Is there any alternative?
> > > > 
> > > > TIA
> > > > Goncalo
> > > > 
> > > > 
> > > > 
> > > > On 10/14/2015 08:01 AM, Sage Weil wrote:
> > > > > This is the first Infernalis release candidate.  There have been some
> > > > > major changes since hammer, and the upgrade process is non-trivial.
> > > > > Please read carefully.
> > > > > 
> > > > > Getting the release candidate
> > > > > -
> > > > > 
> > > > > The v9.1.0 packages are pushed to the development release
> > > > > repositories::
> > > > > 
> > > > > http://download.ceph.com/rpm-testing
> > > > > http://download.ceph.com/debian-testing
> > > > 

Re: [ceph-users] v9.1.0 Infernalis release candidate released

2015-10-14 Thread Francois Lafont
Sorry, another remark.

On 13/10/2015 23:01, Sage Weil wrote:

> The v9.1.0 packages are pushed to the development release repositories::
> 
>   http://download.ceph.com/rpm-testing
>   http://download.ceph.com/debian-testing

I don't see 9.1.0 available for Ubuntu Trusty:


http://download.ceph.com/debian-testing/dists/trusty/main/binary-amd64/Packages
(the string "9.1" is not currently present on this page)

The 9.0.3 is available but, after a quick test, this version of
the package doesn't create the ceph unix account.
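For reference, this is roughly how one can check (the grep against the
Packages index works from anywhere; the apt-cache call assumes the
debian-testing repo is already in sources.list)::

  curl -s http://download.ceph.com/debian-testing/dists/trusty/main/binary-amd64/Packages \
      | grep '^Version:' | sort -u

  apt-get update && apt-cache policy ceph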

Have I forgotten something?

-- 
François Lafont


Re: [ceph-users] v9.1.0 Infernalis release candidate released

2015-10-14 Thread Goncalo Borges

Hi Sage, Dan...

In our case, we have strongly invested in the testing of CephFS. It 
seems like a good solution to some of the issues we currently experience 
regarding the use cases from our researchers.


While I do not see a problem in deploying a Ceph cluster in SL7, I suspect 
that we will need CephFS clients in SL6 for quite some time. The problem 
here is that our researchers use a whole bunch of software provided by 
the CERN experiments to generate MC data or analyse experimental data. 
This software is currently certified for SL6 and I think that a SL7 
version will take a considerable amount of time. So we need a CephFS 
client that allows our researchers to access and analyse the data in 
that environment.


If you guys did not think it was worth the effort to build for 
those flavors, that tells me this is a complicated task that, 
most probably, I cannot do on my own.


I am currently interacting with Dan and other colleagues in a CERN 
mailing list. Let us see what would be the outcome of that discussion.


But at the moment I am open to suggestions.

TIA
Goncalo

On 10/14/2015 11:30 PM, Sage Weil wrote:

On Wed, 14 Oct 2015, Dan van der Ster wrote:

Hi Goncalo,

On Wed, Oct 14, 2015 at 6:51 AM, Goncalo Borges
 wrote:

Hi Sage...

I've seen that the rh6 derivatives have been ruled out.

This is a problem in our case since the OS choice in our systems is,
somehow, imposed by CERN. The experiments software is certified for SL6 and
the transition to SL7 will take some time.

Are you accessing Ceph directly from "physics" machines? Here at CERN
we run CentOS 7 on the native clients (e.g. qemu-kvm hosts) and by the
time we upgrade to Infernalis the servers will all be CentOS 7 as
well. Batch nodes running SL6 don't (currently) talk to Ceph directly
(in the future they might talk to Ceph-based storage via an xroot
gateway). But if there are use-cases then perhaps we could find a
place to build and distribute the newer ceph clients.

There's a ML ceph-t...@cern.ch where we could take this discussion.
Mail me if you have trouble joining that e-Group.

Also note that it *is* possible to build infernalis on el6, but it
requires a lot more effort... enough that we would rather spend our time
elsewhere (at least as far as ceph.com packages go).  If someone else
wants to do that work we'd be happy to take patches to update the build and/or
release process.

IIRC the thing that eventually made me stop going down this path was the
fact that the newer gcc had a runtime dependency on the newer libstdc++,
which wasn't part of the base distro... which means we'd need also to
publish those packages in the ceph.com repos, or users would have to
add some backport repo or ppa or whatever to get things running.  Bleh.

sage



Cheers, Dan
CERN IT-DSS


This is kind of a showstopper, especially if we can't deploy clients in SL6 /
Centos6.

Is there any alternative?

TIA
Goncalo



On 10/14/2015 08:01 AM, Sage Weil wrote:

This is the first Infernalis release candidate.  There have been some
major changes since hammer, and the upgrade process is non-trivial.
Please read carefully.

Getting the release candidate
-

The v9.1.0 packages are pushed to the development release repositories::

http://download.ceph.com/rpm-testing
http://download.ceph.com/debian-testing

For more info, see::

http://docs.ceph.com/docs/master/install/get-packages/

Or install with ceph-deploy via::

ceph-deploy install --testing HOST

Known issues


* librbd and librados ABI compatibility is broken.  Be careful
installing this RC on client machines (e.g., those running qemu).
It will be fixed in the final v9.2.0 release.

Major Changes from Hammer
-

* *General*:
* Ceph daemons are now managed via systemd (with the exception of
  Ubuntu Trusty, which still uses upstart).
* Ceph daemons run as 'ceph' user instead of root.
* On Red Hat distros, there is also an SELinux policy.
* *RADOS*:
* The RADOS cache tier can now proxy write operations to the base
  tier, allowing writes to be handled without forcing migration of
  an object into the cache.
* The SHEC erasure coding support is no longer flagged as
  experimental. SHEC trades some additional storage space for faster
  repair.
* There is now a unified queue (and thus prioritization) of client
  IO, recovery, scrubbing, and snapshot trimming.
* There have been many improvements to low-level repair tooling
  (ceph-objectstore-tool).
* The internal ObjectStore API has been significantly cleaned up in
order
  to facilitate new storage backends like NewStore.
* *RGW*:
* The Swift API now supports object expiration.
* There are many Swift API compatibility improvements.
* *RBD*:
* The ``rbd du`` command shows actual usage (quickly, when
  object-map is enabled).
* The object-map feature has seen many stability improvements.

Re: [ceph-users] v9.1.0 Infernalis release candidate released

2015-10-14 Thread Sage Weil
On Thu, 15 Oct 2015, Francois Lafont wrote:
> Hi, and thanks to all for this good news. ;)
> 
> On 13/10/2015 23:01, Sage Weil wrote:
> 
> >#. Fix the data ownership during the upgrade.  This is the preferred 
> > option,
> >   but is more work.  The process for each host would be to:
> > 
> >   #. Upgrade the ceph package.  This creates the ceph user and group.  
> > For
> >  example::
> > 
> >ceph-deploy install --stable infernalis HOST
> > 
> >   #. Stop the daemon(s).::
> > 
> >service ceph stop   # fedora, centos, rhel, debian
> >stop ceph-all   # ubuntu
> >
> >   #. Fix the ownership::
> > 
> >chown -R ceph:ceph /var/lib/ceph
> > 
> >   #. Restart the daemon(s).::
> > 
> >start ceph-all# ubuntu
> >systemctl start ceph.target   # debian, centos, fedora, rhel
> 
> With this (preferred) option, if I understand correctly, I should
> repeat the commands above host by host. Personally, my monitors
> are hosted on the OSD servers (I have no dedicated monitor server).
> So, with this option, I will have osd daemons upgraded before
> monitor daemons. Is that a problem?

No.  You can also chown -R /var/lib/ceph/mon and /var/lib/ceph/osd 
separately. 
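For example, per host, something along these lines (service names vary by
distro and init system, as in the release notes)::

  # monitors first
  service ceph stop mon            # upstart on trusty: stop ceph-mon-all
  chown -R ceph:ceph /var/lib/ceph/mon
  service ceph start mon           # upstart: start ceph-mon-all

  # then the OSDs on that host
  service ceph stop osd            # upstart: stop ceph-osd-all
  chown -R ceph:ceph /var/lib/ceph/osd
  service ceph start osd           # upstart: start ceph-osd-all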

> I ask because, during a migration to a new release, it is
> generally recommended to upgrade _all_ the monitors before
> upgrading the first osd daemon.

Doing all the monitors is recommended, but not strictly required.

Also note that the chown on the OSD dirs can take a very long time 
(hours).  I suspect we should revise the recommendation to do it for the 
mons and not the osds... or at least give a better warning about how long 
it takes.  (And I'm very interested in hearing what people's experiences 
are here...)
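For what it's worth, one way to speed up the per-OSD chown while that OSD is
stopped is to parallelize it, e.g. with GNU find/xargs (osd.12's data
directory here is just an example)::

  find /var/lib/ceph/osd/ceph-12 -print0 | xargs -0 -P8 -n500 chown ceph:ceph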

Thanks!
sage


Re: [ceph-users] v9.1.0 Infernalis release candidate released

2015-10-14 Thread Sage Weil
On Thu, 15 Oct 2015, Francois Lafont wrote:

> Sorry, another remark.
> 
> On 13/10/2015 23:01, Sage Weil wrote:
> 
> > The v9.1.0 packages are pushed to the development release repositories::
> > 
> >   http://download.ceph.com/rpm-testing
> >   http://download.ceph.com/debian-testing
> 
> I don't see 9.1.0 available for Ubuntu Trusty:
> 
> 
> http://download.ceph.com/debian-testing/dists/trusty/main/binary-amd64/Packages
> (the string "9.1" is not currently present on this page)
> 
> The 9.0.3 is available but, after a quick test, this version of
> the package doesn't create the ceph unix account.

You're right.. I see jessie but not trusty in the archive.  Alfredo, can 
you verify it synced properly?

Thanks!
sage


Re: [ceph-users] v9.1.0 Infernalis release candidate released

2015-10-14 Thread Sage Weil
On Wed, 14 Oct 2015, Dan van der Ster wrote:
> Hi Goncalo,
> 
> On Wed, Oct 14, 2015 at 6:51 AM, Goncalo Borges
>  wrote:
> > Hi Sage...
> >
> > I've seen that the rh6 derivatives have been ruled out.
> >
> > This is a problem in our case since the OS choice in our systems is,
> > somehow, imposed by CERN. The experiments software is certified for SL6 and
> > the transition to SL7 will take some time.
> 
> Are you accessing Ceph directly from "physics" machines? Here at CERN
> we run CentOS 7 on the native clients (e.g. qemu-kvm hosts) and by the
> time we upgrade to Infernalis the servers will all be CentOS 7 as
> well. Batch nodes running SL6 don't (currently) talk to Ceph directly
> (in the future they might talk to Ceph-based storage via an xroot
> gateway). But if there are use-cases then perhaps we could find a
> place to build and distribute the newer ceph clients.
> 
> There's a ML ceph-t...@cern.ch where we could take this discussion.
> Mail me if you have trouble joining that e-Group.

Also note that it *is* possible to build infernalis on el6, but it 
requires a lot more effort... enough that we would rather spend our time 
elsewhere (at least as far as ceph.com packages go).  If someone else 
wants to do that work we'd be happy to take patches to update the build and/or 
release process.

IIRC the thing that eventually made me stop going down this path was the 
fact that the newer gcc had a runtime dependency on the newer libstdc++, 
which wasn't part of the base distro... which means we'd need also to 
publish those packages in the ceph.com repos, or users would have to 
add some backport repo or ppa or whatever to get things running.  Bleh.

sage


> 
> Cheers, Dan
> CERN IT-DSS
> 
> > This is kind of a showstopper, especially if we can't deploy clients in SL6 /
> > Centos6.
> >
> > Is there any alternative?
> >
> > TIA
> > Goncalo
> >
> >
> >
> > On 10/14/2015 08:01 AM, Sage Weil wrote:
> >>
> >> This is the first Infernalis release candidate.  There have been some
> >> major changes since hammer, and the upgrade process is non-trivial.
> >> Please read carefully.
> >>
> >> Getting the release candidate
> >> -
> >>
> >> The v9.1.0 packages are pushed to the development release repositories::
> >>
> >>http://download.ceph.com/rpm-testing
> >>http://download.ceph.com/debian-testing
> >>
> >> For more info, see::
> >>
> >>http://docs.ceph.com/docs/master/install/get-packages/
> >>
> >> Or install with ceph-deploy via::
> >>
> >>ceph-deploy install --testing HOST
> >>
> >> Known issues
> >> 
> >>
> >> * librbd and librados ABI compatibility is broken.  Be careful
> >>installing this RC on client machines (e.g., those running qemu).
> >>It will be fixed in the final v9.2.0 release.
> >>
> >> Major Changes from Hammer
> >> -
> >>
> >> * *General*:
> >>* Ceph daemons are now managed via systemd (with the exception of
> >>  Ubuntu Trusty, which still uses upstart).
> >>* Ceph daemons run as 'ceph' user instead of root.
> >>* On Red Hat distros, there is also an SELinux policy.
> >> * *RADOS*:
> >>* The RADOS cache tier can now proxy write operations to the base
> >>  tier, allowing writes to be handled without forcing migration of
> >>  an object into the cache.
> >>* The SHEC erasure coding support is no longer flagged as
> >>  experimental. SHEC trades some additional storage space for faster
> >>  repair.
> >>* There is now a unified queue (and thus prioritization) of client
> >>  IO, recovery, scrubbing, and snapshot trimming.
> >>* There have been many improvements to low-level repair tooling
> >>  (ceph-objectstore-tool).
> >>* The internal ObjectStore API has been significantly cleaned up in
> >> order
> >>  to facilitate new storage backends like NewStore.
> >> * *RGW*:
> >>* The Swift API now supports object expiration.
> >>* There are many Swift API compatibility improvements.
> >> * *RBD*:
> >>* The ``rbd du`` command shows actual usage (quickly, when
> >>  object-map is enabled).
> >>* The object-map feature has seen many stability improvements.
> >>* Object-map and exclusive-lock features can be enabled or disabled
> >>  dynamically.
> >>* You can now store user metadata and set persistent librbd options
> >>  associated with individual images.
> >>* The new deep-flatten feature allows flattening of a clone and all
> >>  of its snapshots.  (Previously snapshots could not be flattened.)
> >>* The export-diff command is now faster (it uses aio).  There
> >> is also
> >>  a new fast-diff feature.
> >>* The --size argument can be specified with a suffix for units
> >>  (e.g., ``--size 64G``).
> >>* There is a new ``rbd status`` command that, for now, shows who has
> >>  the image open/mapped.
> >> * *CephFS*:
> >>* You can now rename snapshots.
> 

Re: [ceph-users] v9.1.0 Infernalis release candidate released

2015-10-13 Thread Goncalo Borges

Hi Sage...

I've seen that the rh6 derivatives have been ruled out.

This is a problem in our case since the OS choice in our systems is, 
somehow, imposed by CERN. The experiments software is certified for SL6 
and the transition to SL7 will take some time.


This is kind of a showstopper, especially if we can't deploy clients in 
SL6 / Centos6.


Is there any alternative?

TIA
Goncalo


On 10/14/2015 08:01 AM, Sage Weil wrote:

This is the first Infernalis release candidate.  There have been some
major changes since hammer, and the upgrade process is non-trivial.
Please read carefully.

Getting the release candidate
-

The v9.1.0 packages are pushed to the development release repositories::

   http://download.ceph.com/rpm-testing
   http://download.ceph.com/debian-testing

For more info, see::

   http://docs.ceph.com/docs/master/install/get-packages/

Or install with ceph-deploy via::

   ceph-deploy install --testing HOST

Known issues


* librbd and librados ABI compatibility is broken.  Be careful
   installing this RC on client machines (e.g., those running qemu).
   It will be fixed in the final v9.2.0 release.

Major Changes from Hammer
-

* *General*:
   * Ceph daemons are now managed via systemd (with the exception of
 Ubuntu Trusty, which still uses upstart).
   * Ceph daemons run as 'ceph' user instead of root.
   * On Red Hat distros, there is also an SELinux policy.
* *RADOS*:
   * The RADOS cache tier can now proxy write operations to the base
 tier, allowing writes to be handled without forcing migration of
 an object into the cache.
   * The SHEC erasure coding support is no longer flagged as
 experimental. SHEC trades some additional storage space for faster
 repair.
   * There is now a unified queue (and thus prioritization) of client
 IO, recovery, scrubbing, and snapshot trimming.
   * There have been many improvements to low-level repair tooling
 (ceph-objectstore-tool).
   * The internal ObjectStore API has been significantly cleaned up in order
 to facilitate new storage backends like NewStore.
* *RGW*:
   * The Swift API now supports object expiration.
   * There are many Swift API compatibility improvements.
* *RBD*:
   * The ``rbd du`` command shows actual usage (quickly, when
 object-map is enabled).
   * The object-map feature has seen many stability improvements.
   * Object-map and exclusive-lock features can be enabled or disabled
 dynamically.
   * You can now store user metadata and set persistent librbd options
 associated with individual images.
   * The new deep-flatten feature allows flattening of a clone and all
 of its snapshots.  (Previously snapshots could not be flattened.)
   * The export-diff command is now faster (it uses aio).  There is also
 a new fast-diff feature.
   * The --size argument can be specified with a suffix for units
 (e.g., ``--size 64G``).
   * There is a new ``rbd status`` command that, for now, shows who has
 the image open/mapped.
* *CephFS*:
   * You can now rename snapshots.
   * There have been ongoing improvements around administration, diagnostics,
 and the check and repair tools.
   * The caching and revocation of client cache state due to unused
 inodes has been dramatically improved.
   * The ceph-fuse client behaves better on 32-bit hosts.

Distro compatibility


We have decided to drop support for many older distributions so that we can
move to a newer compiler toolchain (e.g., C++11).  Although it is still possible
to build Ceph on older distributions by installing backported development tools,
we are not building and publishing release packages for ceph.com.

In particular,

* CentOS 7 or later; we have dropped support for CentOS 6 (and other
   RHEL 6 derivatives, like Scientific Linux 6).
* Debian Jessie 8.x or later; Debian Wheezy 7.x's g++ has incomplete
   support for C++11 (and no systemd).
* Ubuntu Trusty 14.04 or later; Ubuntu Precise 12.04 is no longer
   supported.
* Fedora 22 or later.

Upgrading from Firefly
--

Upgrading directly from Firefly v0.80.z is not possible.  All clusters
must first upgrade to Hammer v0.94.4 or a later v0.94.z release; only
then is it possible to upgrade to Infernalis 9.2.z.

Note that v0.94.4 isn't released yet, but you can upgrade to a test build
from gitbuilder with::

   ceph-deploy install --dev hammer HOST

The v0.94.4 Hammer point release will be out before v9.2.0 Infernalis
is.

Upgrading from Hammer
-

* For all distributions that support systemd (CentOS 7, Fedora, Debian
   Jessie 8.x, OpenSUSE), ceph daemons are now managed using native systemd
   files instead of the legacy sysvinit scripts.  For example,::

 systemctl start ceph.target   # start all daemons
 systemctl status ceph-osd@12  # check status of osd.12

  The main notable distro that is *not* yet using systemd is Ubuntu trusty 14.04.

[ceph-users] v9.1.0 Infernalis release candidate released

2015-10-13 Thread Sage Weil
This is the first Infernalis release candidate.  There have been some
major changes since hammer, and the upgrade process is non-trivial.
Please read carefully.

Getting the release candidate
-----------------------------

The v9.1.0 packages are pushed to the development release repositories::

  http://download.ceph.com/rpm-testing
  http://download.ceph.com/debian-testing

For more info, see::

  http://docs.ceph.com/docs/master/install/get-packages/

Or install with ceph-deploy via::

  ceph-deploy install --testing HOST

Known issues


* librbd and librados ABI compatibility is broken.  Be careful
  installing this RC on client machines (e.g., those running qemu).
  It will be fixed in the final v9.2.0 release.

Major Changes from Hammer
-------------------------

* *General*:
  * Ceph daemons are now managed via systemd (with the exception of
Ubuntu Trusty, which still uses upstart).
  * Ceph daemons run as 'ceph' user instead of root.
  * On Red Hat distros, there is also an SELinux policy.
* *RADOS*:
  * The RADOS cache tier can now proxy write operations to the base
tier, allowing writes to be handled without forcing migration of
an object into the cache.
  * The SHEC erasure coding support is no longer flagged as
experimental. SHEC trades some additional storage space for faster
repair.
  * There is now a unified queue (and thus prioritization) of client
IO, recovery, scrubbing, and snapshot trimming.
  * There have been many improvements to low-level repair tooling
(ceph-objectstore-tool).
  * The internal ObjectStore API has been significantly cleaned up in order
    to facilitate new storage backends like NewStore.
* *RGW*:
  * The Swift API now supports object expiration.
  * There are many Swift API compatibility improvements.
* *RBD*:
  * The ``rbd du`` command shows actual usage (quickly, when
object-map is enabled).
  * The object-map feature has seen many stability improvements.
  * Object-map and exclusive-lock features can be enabled or disabled
dynamically.
  * You can now store user metadata and set persistent librbd options
associated with individual images.
  * The new deep-flatten feature allows flattening of a clone and all
of its snapshots.  (Previously snapshots could not be flattened.)
  * The export-diff command is now faster (it uses aio).  There is also
a new fast-diff feature.
  * The --size argument can be specified with a suffix for units
(e.g., ``--size 64G``).
  * There is a new ``rbd status`` command that, for now, shows who has
the image open/mapped.
* *CephFS*:
  * You can now rename snapshots.
  * There have been ongoing improvements around administration, diagnostics,
and the check and repair tools.
  * The caching and revocation of client cache state due to unused
inodes has been dramatically improved.
  * The ceph-fuse client behaves better on 32-bit hosts.

Distro compatibility


We have decided to drop support for many older distributions so that we can
move to a newer compiler toolchain (e.g., C++11).  Although it is still possible
to build Ceph on older distributions by installing backported development tools,
we are not building and publishing release packages for ceph.com.

In particular,

* CentOS 7 or later; we have dropped support for CentOS 6 (and other
  RHEL 6 derivatives, like Scientific Linux 6).
* Debian Jessie 8.x or later; Debian Wheezy 7.x's g++ has incomplete
  support for C++11 (and no systemd).
* Ubuntu Trusty 14.04 or later; Ubuntu Precise 12.04 is no longer
  supported.
* Fedora 22 or later.

Upgrading from Firefly
----------------------

Upgrading directly from Firefly v0.80.z is not possible.  All clusters
must first upgrade to Hammer v0.94.4 or a later v0.94.z release; only
then is it possible to upgrade to Infernalis 9.2.z.

Note that v0.94.4 isn't released yet, but you can upgrade to a test build
from gitbuilder with::

  ceph-deploy install --dev hammer HOST

The v0.94.4 Hammer point release will be out before v9.2.0 Infernalis
is.

Upgrading from Hammer
---------------------

* For all distributions that support systemd (CentOS 7, Fedora, Debian
  Jessie 8.x, OpenSUSE), ceph daemons are now managed using native systemd
  files instead of the legacy sysvinit scripts.  For example,::

systemctl start ceph.target   # start all daemons
systemctl status ceph-osd@12  # check status of osd.12

  The main notable distro that is *not* yet using systemd is Ubuntu trusty
  14.04.  (The next Ubuntu LTS, 16.04, will use systemd instead of upstart.)

* Ceph daemons now run as user and group ``ceph`` by default.  The
  ceph user has a static UID assigned by Fedora and Debian (also used
  by derivative distributions like RHEL/CentOS and Ubuntu).  On SUSE
  the ceph user will currently get a dynamically assigned UID when the
  user is created.

  If your systems already have a ceph user, upgrading the package will cause
  

Re: [ceph-users] v9.1.0 Infernalis release candidate released

2015-10-13 Thread Nick Fisk
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Sage Weil
> Sent: 13 October 2015 22:02
> To: ceph-annou...@ceph.com; ceph-de...@vger.kernel.org; ceph-
> us...@ceph.com; ceph-maintain...@ceph.com
> Subject: [ceph-users] v9.1.0 Infernalis release candidate released
>
> This is the first Infernalis release candidate.  There have been some major
> changes since hammer, and the upgrade process is non-trivial.
> Please read carefully.
>
> Getting the release candidate
> -
>
> The v9.1.0 packages are pushed to the development release repositories::
>
>   http://download.ceph.com/rpm-testing
>   http://download.ceph.com/debian-testing
>
> For more info, see::
>
>   http://docs.ceph.com/docs/master/install/get-packages/
>
> Or install with ceph-deploy via::
>
>   ceph-deploy install --testing HOST
>
> Known issues
> 
>
> * librbd and librados ABI compatibility is broken.  Be careful
>   installing this RC on client machines (e.g., those running qemu).
>   It will be fixed in the final v9.2.0 release.
>
> Major Changes from Hammer
> -
>
> * *General*:
>   * Ceph daemons are now managed via systemd (with the exception of
> Ubuntu Trusty, which still uses upstart).
>   * Ceph daemons run as 'ceph' user instead of root.
>   * On Red Hat distros, there is also an SELinux policy.
> * *RADOS*:
>   * The RADOS cache tier can now proxy write operations to the base
> tier, allowing writes to be handled without forcing migration of
> an object into the cache.
>   * The SHEC erasure coding support is no longer flagged as
> experimental. SHEC trades some additional storage space for faster
> repair.
>   * There is now a unified queue (and thus prioritization) of client
> IO, recovery, scrubbing, and snapshot trimming.
>   * There have been many improvements to low-level repair tooling
> (ceph-objectstore-tool).
>   * The internal ObjectStore API has been significantly cleaned up in order
> to facilitate new storage backends like NewStore.
> * *RGW*:
>   * The Swift API now supports object expiration.
>   * There are many Swift API compatibility improvements.
> * *RBD*:
>   * The ``rbd du`` command shows actual usage (quickly, when
> object-map is enabled).
>   * The object-map feature has seen many stability improvements.
>   * Object-map and exclusive-lock features can be enabled or disabled
> dynamically.
>   * You can now store user metadata and set persistent librbd options
> associated with individual images.
>   * The new deep-flatten feature allows flattening of a clone and all
> of its snapshots.  (Previously snapshots could not be flattened.)
>   * The export-diff command is now faster (it uses aio).  There is
> also
> a new fast-diff feature.
>   * The --size argument can be specified with a suffix for units
> (e.g., ``--size 64G``).
>   * There is a new ``rbd status`` command that, for now, shows who has
> the image open/mapped.
> * *CephFS*:
>   * You can now rename snapshots.
>   * There have been ongoing improvements around administration,
> diagnostics,
> and the check and repair tools.
>   * The caching and revocation of client cache state due to unused
> inodes has been dramatically improved.
>   * The ceph-fuse client behaves better on 32-bit hosts.
>
> Distro compatibility
> 
>
> We have decided to drop support for many older distributions so that we can
> move to a newer compiler toolchain (e.g., C++11).  Although it is still 
> possible
> to build Ceph on older distributions by installing backported development
> tools, we are not building and publishing release packages for ceph.com.
>
> In particular,
>
> * CentOS 7 or later; we have dropped support for CentOS 6 (and other
>   RHEL 6 derivatives, like Scientific Linux 6).
> * Debian Jessie 8.x or later; Debian Wheezy 7.x's g++ has incomplete
>   support for C++11 (and no systemd).
> * Ubuntu Trusty 14.04 or later; Ubuntu Precise 12.04 is no longer
>   supported.
> * Fedora 22 or later.
>
> Upgrading from Firefly
> --
>
> Upgrading directly from Firefly v0.80.z is not possible.  All clusters must 
> first
> upgrade to Hammer v0.94.4 or a later v0.94.z release; only then is it possible
> to upgrade to Infernalis 9.2.z.
>
> Note that v0.94.4 isn't released yet, but you can upgrade to a test build from
> gitbuilder with::
>
>   ceph-deploy install --dev hammer HOST
>
> The v0.94.4 Hammer point release will be out before v9.2.0 Infernalis is.
>
> Upgrading from Hammer
> --

Re: [ceph-users] v9.1.0 Infernalis release candidate released

2015-10-13 Thread Sage Weil
On Tue, 13 Oct 2015, Nick Fisk wrote:
> Do you know if any of the Tiering + EC performance improvements 
> currently waiting to merge will make the final release or is it likely 
> they will get pushed back to Jewel?
> 
> Specifically:-
> https://github.com/ceph/ceph/pull/5486
> https://github.com/ceph/ceph/pull/4467

Those will both target Jewel.

sage


Re: [ceph-users] v9.1.0 Infernalis release candidate released

2015-10-13 Thread Joao Eduardo Luis
On 13/10/15 22:01, Sage Weil wrote:
> * *RADOS*:
>   * The RADOS cache tier can now proxy write operations to the base
> tier, allowing writes to be handled without forcing migration of
> an object into the cache.
>   * The SHEC erasure coding support is no longer flagged as
> experimental. SHEC trades some additional storage space for faster
> repair.
>   * There is now a unified queue (and thus prioritization) of client
> IO, recovery, scrubbing, and snapshot trimming.
>   * There have been many improvements to low-level repair tooling
> (ceph-objectstore-tool).
>   * The internal ObjectStore API has been significantly cleaned up in order
> to facilitate new storage backends like NewStore.

It may also be worth mentioning that we dropped a few options from the
monitor that people may have customized in their configurations (I guess
we forgot to add them to the PendingReleaseNotes).

These options are:

- mon_lease_renew_interval (default: 3)
- mon_lease_ack_timeout (default: 10)
- mon_accept_timeout (default: 10)

If you are using these in your configuration, please use instead

- mon_lease_renew_interval_factor (default: 0.6)
- mon_lease_ack_timeout_factor (default: 2.0)
- mon_accept_timeout_factor (default: 2.0)

These are now used as a factor of 'mon_lease'. If you also have
'mon_lease' (default: 5) adjusted and your previous options match the
new factors' defaults, you need only drop the old
options from your configuration file. Otherwise, please consider
adjusting the factors to match whatever is working for you.
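For example, if you previously had something like this in ceph.conf::

  [mon]
  mon lease = 5
  mon lease renew interval = 3
  mon lease ack timeout = 10
  mon accept timeout = 10

the equivalent with the new factor-based options would be::

  [mon]
  mon lease = 5
  mon lease renew interval factor = 0.6
  mon lease ack timeout factor = 2.0
  mon accept timeout factor = 2.0

(The values shown are just the defaults; if your old absolute values were
different, divide them by your mon_lease to get the factors.)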

  -Joao