Re: [ceph-users] CEPH/CEPHFS upgrade questions (9.2.0 ---> 10.2.1)

2016-05-25 Thread Ken Dreyer
On Wed, May 25, 2016 at 8:05 AM, Gregory Farnum wrote:
> On Tue, May 24, 2016 at 9:54 PM, Goncalo Borges wrote:
>> Thank you Greg...
>>
>> There is one further thing which is not explained in the release notes and that may be worthwhile to

Re: [ceph-users] CEPH/CEPHFS upgrade questions (9.2.0 ---> 10.2.1)

2016-05-25 Thread Gregory Farnum
On Tue, May 24, 2016 at 9:54 PM, Goncalo Borges wrote:
> Thank you Greg...
>
> There is one further thing which is not explained in the release notes and that may be worthwhile to say.
>
> The rpm structure (for redhat compatible releases) changed in Jewel where

Re: [ceph-users] CEPH/CEPHFS upgrade questions (9.2.0 ---> 10.2.1)

2016-05-24 Thread Goncalo Borges
Thank you Greg...

There is one further thing which is not explained in the release notes and that may be worthwhile to say.

The rpm structure (for redhat compatible releases) changed in Jewel, where there are now ( ceph + ceph-common + ceph-base + ceph-mon/osd/mds + other ) packages, while
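[Not from the thread itself: a small sketch of how one could check which of the split Jewel packages ended up installed on a RedHat-compatible node. The package names below are just the ones mentioned above; the exact set shipped may vary by release.]

    #!/usr/bin/env python
    # Sketch: report which of the Jewel-era ceph packages are installed
    # on an RPM-based node, using plain `rpm -q` queries.
    import subprocess

    PACKAGES = ["ceph", "ceph-common", "ceph-base", "ceph-mon", "ceph-osd", "ceph-mds"]

    for pkg in PACKAGES:
        # `rpm -q <name>` exits 0 and prints the full name-version-release if installed.
        proc = subprocess.Popen(["rpm", "-q", pkg],
                                stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        out = proc.communicate()[0].decode().strip()
        status = out if proc.returncode == 0 else "not installed"
        print("%-12s %s" % (pkg, status))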

Re: [ceph-users] CEPH/CEPHFS upgrade questions (9.2.0 ---> 10.2.1)

2016-05-24 Thread Gregory Farnum
On Wed, May 18, 2016 at 6:04 PM, Goncalo Borges wrote:
> Dear All...
>
> Our infrastructure is the following:
>
> - We use CEPH/CEPHFS (9.2.0)
> - We have 3 mons and 8 storage servers supporting 8 OSDs each.
> - We use SSDs for journals (2 SSDs per storage server,

[ceph-users] CEPH/CEPHFS upgrade questions (9.2.0 ---> 10.2.1)

2016-05-18 Thread Goncalo Borges
Dear All...

Our infrastructure is the following:

- We use CEPH/CEPHFS (9.2.0)
- We have 3 mons and 8 storage servers supporting 8 OSDs each.
- We use SSDs for journals (2 SSDs per storage server, each serving 4 OSDs).
- We have one main mds and one standby-replay mds.
- We are
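[Not part of the original message: since the thread is about a rolling 9.2.0 -> 10.2.1 upgrade (the usual order being monitors, then OSDs, then MDS daemons), here is a minimal sketch of the kind of health check one might run between daemon restarts. It assumes "ceph status --format json" is available and that health is reported under the Jewel-era "health"/"overall_status" fields, so treat it as illustrative rather than authoritative.]

    #!/usr/bin/env python
    # Sketch: poll cluster health between daemon restarts during a rolling
    # upgrade, so the next node is only touched once the cluster has settled.
    import json
    import subprocess
    import time

    def cluster_health():
        out = subprocess.check_output(["ceph", "status", "--format", "json"])
        status = json.loads(out.decode())
        health = status.get("health", {})
        # "overall_status" is the Jewel-era field; fall back to "status"
        # for newer releases that rename it.
        return health.get("overall_status") or health.get("status")

    def wait_for_health_ok(timeout=600, interval=10):
        deadline = time.time() + timeout
        while time.time() < deadline:
            state = cluster_health()
            print("cluster health: %s" % state)
            if state == "HEALTH_OK":
                return True
            time.sleep(interval)
        return False

    if __name__ == "__main__":
        if not wait_for_health_ok():
            raise SystemExit("cluster did not return to HEALTH_OK; stopping here")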