On 17-09-06 18:23, Sage Weil wrote:
Hi everyone,
Traditionally, we have done a major named "stable" release twice a year,
and every other such release has been an "LTS" release, with fixes
backported for 1-2 years.
With kraken and luminous we missed our schedule by a lot: instead of
releasing i
> * Drop the odd releases, and aim for a ~9 month cadence. This splits the
> difference between the current even/odd pattern we've been doing.
>
> + eliminate the confusing odd releases with dubious value
> + waiting for the next release isn't quite as bad
> - required upgrades every 9 months
Hello,
Am 07.09.2017 um 03:53 schrieb Christian Balzer:
>
> Hello,
>
> On Wed, 6 Sep 2017 09:09:54 -0400 Alex Gorbachev wrote:
>
>> We are planning a Jewel filestore based cluster for a performance
>> sensitive healthcare client, and the conservative OSD choice is
>> Samsung SM863A.
>>
>
> Whil
On 17-09-07 02:42, Deepak Naidu wrote:
Hope collective feedback helps. So here's one.
- Not a lot of people seem to run the "odd" releases (e.g., infernalis, kraken).
I think the more obvious reason is that companies/users wanting to use Ceph will stick
with LTS versions, as they match the 3-year support
These error logs look like they are being generated here,
https://github.com/ceph/ceph/blob/master/src/os/bluestore/BlueStore.cc#L8987-L8993
or possibly here,
https://github.com/ceph/ceph/blob/master/src/os/bluestore/BlueStore.cc#L9230-L9236.
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs cep
Yehuda,
Is there any way to create snapshots of individual buckets? I can't find this
feature at the moment.
Can you give me some ideas?
Thanks a lot.
donglifec...@gmail.com
Hello,
On Wed, 6 Sep 2017 09:09:54 -0400 Alex Gorbachev wrote:
> We are planning a Jewel filestore based cluster for a performance
> sensitive healthcare client, and the conservative OSD choice is
> Samsung SM863A.
>
While I totally see where you're coming from, and despite having stated that
I'll g
Hope collective feedback helps. So here's one.
>>- Not a lot of people seem to run the "odd" releases (e.g., infernalis,
>>kraken).
I think the more obvious reason is that companies/users wanting to use Ceph will stick
with LTS versions, as they match the 3-year support cycle.
>>* Drop the odd releases,
I was reading this post by Josh Durgin today and was pretty happy to see we can
get a summary of features that clients are using with the 'ceph features'
command:
http://ceph.com/community/new-luminous-upgrade-complete/
However, I haven't found an option to display the IP address of those clien
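The closest I have come so far (untested sketch, assuming the mon admin socket is reachable) is dumping the mon's client sessions, which list each client's address and feature bits side by side:
ceph daemon mon.$(hostname -s) sessions    # mon id here is just an example; use a local mon's id
ceph features                              # summary of connected clients grouped by release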
On Wed, Sep 06, 2017 at 02:08:14PM +, Engelmann Florian wrote:
> we are running a luminous cluster and three radosgw instances to serve an S3-compatible
> object store. As we are (currently) not using OpenStack, we have to use the
> RadosGW Admin API to get our billing data. I tried to access the API with
I have been working with Ceph for the last several years and I help
support multiple Ceph clusters. I would like to have the team drop the
even/odd release schedule and go to an all-production release
schedule. I would like releases on no more than a 9-month schedule,
with smaller incremental cha
No support for that yet -- it's being tracked by a backlog ticket [1].
[1] https://trello.com/c/npmsOgM5
On Wed, Sep 6, 2017 at 12:27 PM, Christoph Adomeit
wrote:
> Now that we are two years and several Ceph releases further along and have BlueStore:
>
> Are there, in the meantime, any better ways to find out the m
Now that we are two years and several Ceph releases further along and have BlueStore:
Are there, in the meantime, any better ways to find out the mtime of an RBD image?
Thanks
Christoph
On Thu, Nov 26, 2015 at 06:50:46PM +0100, Jan Schermer wrote:
> Find in which block the filesystem on your RBD image stores jou
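One rough approximation that needs no special support (sketch only; it assumes a plain replicated pool named "rbd", "myimage" is a placeholder, and it can be slow on large images) is to stat the image's data objects and take the newest mtime:
rbd info rbd/myimage | grep block_name_prefix    # e.g. rbd_data.101274b0dc51
rados -p rbd ls | grep rbd_data.101274b0dc51 | while read obj; do
    rados -p rbd stat "$obj"                     # prints mtime and size per object
done | sort -k3 | tail -1
Note this reflects the time of the last write, not the image's creation time.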
Thank you. I was able to replace the dmcrypt journal successfully.
On Sep 5, 2017 18:14, "David Turner" wrote:
> Did the journal drive fail during operation? Or was it taken out during
> pre-failure? If it fully failed, then most likely you can't guarantee the
> consistency of the underlying OSDs.
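For reference, the generic filestore journal swap (when the old journal is still readable) goes roughly like this. This is only a sketch, the OSD id is an example, and the dmcrypt case additionally needs the new journal partition prepared with the OSD's dmcrypt key, which is glossed over here:
ceph osd set noout
systemctl stop ceph-osd@12
ceph-osd -i 12 --flush-journal     # only safe while the old journal is still readable
# point the OSD's journal symlink at the new (decrypted) journal device, then:
ceph-osd -i 12 --mkjournal
systemctl start ceph-osd@12
ceph osd unset noout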
On 09/06/2017 04:23 PM, Sage Weil wrote:
* Keep even/odd pattern, but force a 'train' model with a more regular
cadence
+ predictable schedule
- some features will miss the target and be delayed a year
Personally, I think a predictable schedule is the way to go. Two major
reasons come t
On Wed, Sep 6, 2017 at 9:23 AM, Sage Weil wrote:
> * Keep even/odd pattern, but force a 'train' model with a more regular
> cadence
>
> + predictable schedule
> - some features will miss the target and be delayed a year
This one (#2, regular release cadence) is the one I will value the most.
Hi Sage,
The one option I do not want for Ceph is the last one: supporting upgrades
across multiple LTS versions.
I'd rather wait 3 months for a better release (both in terms of
features and quality) than see the Ceph team exhausted, having to
maintain many more releases and much more code for years.
Othe
On Wed, Sep 6, 2017 at 11:23 AM Sage Weil wrote:
> Hi everyone,
>
> Traditionally, we have done a major named "stable" release twice a year,
> and every other such release has been an "LTS" release, with fixes
> backported for 1-2 years.
>
> With kraken and luminous we missed our schedule by a lo
On Wed, 2017-09-06 at 15:23 +, Sage Weil wrote:
> Hi everyone,
>
> Traditionally, we have done a major named "stable" release twice a year,
> and every other such release has been an "LTS" release, with fixes
> backported for 1-2 years.
>
> With kraken and luminous we missed our schedule by
Hi Greg,
thanks for your insight! I do have a few follow-up questions.
On 09/05/2017 11:39 PM, Gregory Farnum wrote:
>> It seems to me that there still isn't a good recommendation along the
>> lines of "try not to have more than X snapshots per RBD image" or "try
>> not to have more than Y snapsh
Very new to Ceph, but a long-time sys admin who is jaded/opinionated.
My 2 cents:
1) This sounds like a perfect thing to put in a poll and ask/beg people to
vote. Hopefully that will get you more of a response from a larger number of
users.
2) Given that the value of the odd releases
Hi everyone,
Traditionally, we have done a major named "stable" release twice a year,
and every other such release has been an "LTS" release, with fixes
backported for 1-2 years.
With kraken and luminous we missed our schedule by a lot: instead of
releasing in October and April we released in
Hello friends, I have a question.
Is it possible to rename the default.rgw pools used by radosgw when they
already contain stored data?
Tested on ceph versions 10.2.7 and 10.2.9.
I already tried changing the region and zone metadata and renaming the
pools, I
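For context, the kind of sequence I mean is roughly the following (untested sketch; "newprefix" is only a placeholder and only two of the pools are shown):
ceph osd pool rename default.rgw.buckets.data newprefix.rgw.buckets.data
ceph osd pool rename default.rgw.buckets.index newprefix.rgw.buckets.index
# dump the zone, edit every pool name in it to the new names, then load it back
radosgw-admin zone get --rgw-zone=default > zone.json
radosgw-admin zone set --rgw-zone=default --infile zone.json
radosgw-admin period update --commit   # if running a jewel multisite configuration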
Oh, I see that this is probably a bug: http://tracker.ceph.com/issues/21260
I also noticed the following errors in the mgr logs:
2017-09-06 16:41:08.537577 7f34c0a7a700 1 mgr send_beacon active
2017-09-06 16:41:08.539161 7f34c0a7a700 1 mgr[restful] Unknown request ''
2017-09-06 16:41:08.54383
Oh, I'll be on a flight at that time.
On Wed, Sep 6, 2017 at 6:28 PM, Joao Eduardo Luis wrote:
> On 09/06/2017 06:06 AM, Leonardo Vaz wrote:
>>
>> Hey cephers,
>>
>> The Ceph Developer Monthly is confirmed for tonight, September 6 at 9pm
>> Eastern Time (EDT), in an APAC-friendly time slot.
>
>
> As much
Hi,
I am running a small test two-node Ceph cluster, version 12.2.0. It has 28
OSDs, 1 mon and 2 mgrs. It runs fine; however, I noticed this strange
thing in the output of the ceph versions command:
# ceph versions
{
    "mon": {
        "ceph version 12.2.0
(32ce2a3ae5239ee33d6150705cdb24d43bab
Hi,
I have the same problem. A bug [1] has been open for months, but
unfortunately it has not been fixed yet. I hope that if more people are having
this problem, the developers can reproduce and fix it.
I was using Kernel-RBD with a Cache Tier.
so long
Thomas Coelho
[1] http://tracker.ceph.com/issues/20
Hi,
we are running a luminous cluster and three radosgw instances to serve an S3-compatible
object store. As we are (currently) not using OpenStack, we have to use the
RadosGW Admin API to get our billing data. I tried to access the API with
Python like:
[...]
import rgwadmin
[...]
Users = radosgw.get_user
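In the meantime, the same billing numbers can apparently also be pulled with the radosgw-admin CLI, which might work as a fallback (sketch only; it assumes usage logging is enabled via "rgw enable usage log = true", and "someuser" and the dates are placeholders):
radosgw-admin usage show --show-log-entries=false            # summary for all users
radosgw-admin usage show --uid=someuser --start-date=2017-09-01 --end-date=2017-09-07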
On 17-09-06 16:24, Jean-Francois Nadeau wrote:
Hi,
On a 4-node / 48-OSD Luminous cluster I'm giving RBD on EC
pools + BlueStore a try.
Setup went fine, but after a few bench runs several OSDs are failing and
many won't even restart.
ceph osd erasure-code-profile set myprofile \
    k=2 \
Hi,
On a 4-node / 48-OSD Luminous cluster I'm giving RBD on EC pools +
BlueStore a try.
Setup went fine, but after a few bench runs several OSDs are failing and many
won't even restart.
ceph osd erasure-code-profile set myprofile \
    k=2 \
    m=1 \
    crush-failure-domain=host
ceph osd pool crea
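The pool creation continued along the usual luminous RBD-on-EC lines, roughly like this (pool names, size, and PG counts here are placeholders, not the exact values used):
ceph osd pool create ecpool 128 128 erasure myprofile
ceph osd pool set ecpool allow_ec_overwrites true     # required for RBD on EC pools
ceph osd pool create rbdmeta 64                       # replicated pool for image metadata
rbd create --size 100G --data-pool ecpool rbdmeta/testimage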
We are planning a Jewel filestore based cluster for a performance
sensitive healthcare client, and the conservative OSD choice is
Samsung SM863A.
I am going to put an 8GB Areca HBA in front of it to cache small
metadata operations, but was wondering if anyone has seen a positive
impact from also u
Hi,
I was deleting a lot of hard-linked files when "something" happened.
Now my MDS starts for a few seconds, writes a lot of these lines:
-43> 2017-09-06 13:51:43.396588 7f9047b21700 10 log_client will send
2017-09-06 13:51:40.531563 mds.0 10.210.32.12:6802/2735447218 4963 : cluster [ERR
On 09/06/2017 06:06 AM, Leonardo Vaz wrote:
Hey cephers,
The Ceph Developer Monthly is confirmed for tonight, September 6 at 9pm
Eastern Time (EDT), in an APAC-friendly time slot.
As much as I would love to attend and discuss some topics (especially
the RADOS replication stuff), this is an un
Okie thanks all, will hold off 😊
-Original Message-
From: Ilya Dryomov [mailto:idryo...@gmail.com]
Sent: 06 September 2017 17:58
To: Ashley Merrick
Cc: Henrik Korkuc ; ceph-us...@ceph.com
Subject: Re: [ceph-users] Luminous Upgrade KRBD
On Wed, Sep 6, 2017 at 11:23 AM, Ashley Merrick wr
On Wed, Sep 6, 2017 at 11:23 AM, Ashley Merrick wrote:
> The only driver for it was to be able to use this:
>
> http://docs.ceph.com/docs/master/rados/operations/upmap/
>
> To see if it would help with the current very uneven PG map across 100+ OSDs,
> something that can wait if the current kernel isn't rea
The only driver for it was to be able to use this:
http://docs.ceph.com/docs/master/rados/operations/upmap/
To see if it would help with the current very uneven PG map across 100+ OSDs,
something that can wait if the current kernel isn't ready.
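For reference, my understanding is that once every client reports as luminous-capable, the sequence would be roughly (sketch):
ceph features                                     # confirm no pre-luminous clients remain
ceph osd set-require-min-compat-client luminous   # refuses if older clients are connected
ceph mgr module enable balancer                   # if not already enabled
ceph balancer mode upmap
ceph balancer on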
,Ashley
-Original Message-
From: Ilya Dryomov [mailt
Quick drop-in, if this is a suitable solution: rbd-nbd.
This will give you, for a small performance cost, a block device using
librbd (in userspace).
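A minimal usage sketch (the image spec is just an example):
rbd-nbd map rbd/myimage          # prints the nbd device it attaches, e.g. /dev/nbd0
mkfs.xfs /dev/nbd0 && mount /dev/nbd0 /mnt
rbd-nbd unmap /dev/nbd0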
On 06/09/2017 11:08, Ilya Dryomov wrote:
> On Wed, Sep 6, 2017 at 9:16 AM, Henrik Korkuc wrote:
>> On 17-09-06 09:10, Ashley Merrick wrote:
>>
>> I w
On Wed, Sep 6, 2017 at 9:16 AM, Henrik Korkuc wrote:
> On 17-09-06 09:10, Ashley Merrick wrote:
>
> I was just going by : docs.ceph.com/docs/master/start/os-recommendations/
>
>
> Which states 4.9
>
>
> docs.ceph.com/docs/master/rados/operations/crush-map
>
>
> Only goes as far as Jewel and states
On 17-09-06 09:10, Ashley Merrick wrote:
I was just going by: docs.ceph.com/docs/master/start/os-recommendations/
Which states 4.9.
docs.ceph.com/docs/master/rados/operations/crush-map
Only goes as far as Jewel and states 4.5.
Not sure where else I can find a concrete answer as to whether 4.10 is
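One thing that can at least be checked from the cluster side (sketch) is what the CRUSH map currently requires and what connected clients advertise:
ceph osd crush show-tunables | grep required    # shows the required tunables/minimum release
ceph features                                   # groups connected clients by release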