Quoting Kashif Mumtaz (kashif.mum...@yahoo.com):
>
> Dear User,
> I am striving hard to install the Ceph Luminous release on Ubuntu 16.04.3
> (xenial).
> Its repo is available at https://download.ceph.com/debian-luminous/
> I added it like sudo apt-add-repository 'deb
> https://download.ceph.com/
Hello,
On Thu, 28 Sep 2017 22:36:22 +0000 Gregory Farnum wrote:
> Also, realize the deep scrub interval is a per-PG thing and (unfortunately)
> the OSD doesn't use a global view of its PG deep scrub ages to try and
> schedule them intelligently across that time. If you really want to try and
> f
This looks similar to
https://bugzilla.redhat.com/show_bug.cgi?id=1458007 or one of the
bugs/trackers attached to that.
On Thu, Sep 28, 2017 at 11:14 PM, Sean Purdy wrote:
> On Thu, 28 Sep 2017, Matthew Vernon said:
>> Hi,
>>
>> TL;DR - the timeout setting in ceph-disk@.service is (far) too small
Have you tried running a Luminous OSD with filestore instead of BlueStore?
As BlueStore is all new code and uses a lot of optimizations and tricks for
fast and efficient use of memory, some 64-bit assumptions may have snuck in
there. I'm not sure how much interest there is in making sure that work
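For reference, a minimal sketch of provisioning a filestore OSD with the
Luminous-era ceph-disk (flag names from memory, device name illustrative;
verify with ceph-disk prepare --help):

    # Luminous defaults to BlueStore; --filestore selects the old backend.
    sudo ceph-disk prepare --filestore --fs-type xfs /dev/sdb
    sudo ceph-disk activate /dev/sdb1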
I often schedule the deep scrubs for a cluster so that none of them happen
on their own and they are always run by my cron/scripts. For
instance, set the deep scrub interval to 2 months and schedule a cron that
will take care of all of the deep scrubs within a month. If for any reason
the
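A sketch of the interval half of that approach, assuming the stock
osd_deep_scrub_interval option (value illustrative, roughly 60 days in
seconds):

    # raise the automatic interval so the cron job always wins the race
    ceph tell osd.* injectargs '--osd_deep_scrub_interval 5184000'
    # and persist the same value in ceph.conf under [osd]:
    #   osd deep scrub interval = 5184000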
Also, realize the deep scrub interval is a per-PG thing and (unfortunately)
the OSD doesn't use a global view of its PG deep scrub ages to try and
schedule them intelligently across that time. If you really want to try and
force this out, I believe a few sites have written scripts to do it by
turni
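One rough sketch of such a script, assuming jq is available and that the
Jewel/Luminous pg dump JSON exposes pg_stats entries with
last_deep_scrub_stamp:

    # deep-scrub the ten PGs with the oldest deep scrub stamps
    ceph pg dump -f json 2>/dev/null |
      jq -r '.pg_stats[] | [.last_deep_scrub_stamp, .pgid] | @tsv' |
      sort | head -n 10 | cut -f2 |
      while read pg; do ceph pg deep-scrub "$pg"; done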
I'm pretty sure the orphan find command does exactly that -
finding orphans. I remember some emails on the dev list where Yehuda
said he wasn't 100% comfortable with automating the delete just yet.
So the purpose is to run the orphan find tool and then delete the
orphaned objects once you're hap
On Thu, Sep 28, 2017 at 5:16 AM Micha Krause wrote:
> Hi,
>
> I had a chance to catch John Spray at the Ceph Day, and he suggested that
> I try to reproduce this bug in Luminous.
>
> To fix my immediate problem we discussed 2 ideas:
>
> 1. Manually edit the metadata; unfortunately I was not able
Dear User,
I am striving hard to install the Ceph Luminous release on Ubuntu 16.04.3
(xenial).
Its repo is available at https://download.ceph.com/debian-luminous/
I added it like sudo apt-add-repository 'deb
https://download.ceph.com/debian-luminous/ xenial main'
# more sources.list
deb https://
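For what it's worth, the usual sequence imports the release key before
adding the repo; a sketch:

    wget -q -O- https://download.ceph.com/keys/release.asc | sudo apt-key add -
    sudo apt-add-repository 'deb https://download.ceph.com/debian-luminous/ xenial main'
    sudo apt-get update && sudo apt-get install ceph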
When I had to use that I just took it for granted that it worked, so I can't
really tell you if that's just it.
:|
Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
On Thu, Sep 28, 2017 at 1:31 PM, Andreas Calminder <
andreas.calmin...@klarna.com> wrote:
> Hi,
> Ye
On 28. sep. 2017 18:53, hjcho616 wrote:
Yay! After almost exactly one month I am finally able to mount
the drive! Now it's time to see how my data is doing. =P Doesn't look
too bad though.
Got to love open source. =) I downloaded the Ceph source code, built it, then tried to run ce
Yay! After almost exactly one month I am finally able to mount the
drive! Now it's time to see how my data is doing. =P Doesn't look too bad
though.
Got to love open source. =) I downloaded the Ceph source code, built it,
then tried to run ceph-objectstore-export on that osd.4. The
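For anyone following along, the export side of that is done with
ceph-objectstore-tool; a sketch with illustrative paths and pgid (the OSD
must be stopped first):

    # export one PG from a stopped filestore OSD
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-4 \
        --journal-path /var/lib/ceph/osd/ceph-4/journal \
        --op export --pgid 1.2f --file /tmp/pg.1.2f.export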
Hi,
Yes I'm able to run these commands, however it is unclear both in the man page
and the docs what's supposed to happen with the orphans: will they be
deleted once I run finish, or will that just throw away the job? What will
orphans find actually produce? At the moment it just outputs a lot of text
This is the first bugfix release of the Luminous v12.2.x long-term stable
release series. It contains a range of bug fixes and a few features
across CephFS, RBD & RGW. We recommend that all users of the 12.2.x series
update.
For more details, refer to the release notes entry at the official
blog[1] and th
Hi
it looks like OpenStack (Pike) has deprecated the Ceilometer API.
Is this a problem for RadosGW when it pushes stats to the OpenStack Telemetry
service?
Thanks,
J.
On Thu, 28 Sep 2017, Matthew Vernon said:
> Hi,
>
> TL;DR - the timeout setting in ceph-disk@.service is (far) too small - it
> needs increasing and/or removing entirely. Should I copy this to ceph-devel?
Just a note. Looks like the Debian stretch Luminous packages have a 10,000 second
timeout:
fr
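If your packaged unit still hardcodes a short timeout, a systemd drop-in is
one way to raise it without editing the packaged file. A sketch, assuming a
unit that reads CEPH_DISK_TIMEOUT (older units hardcode the value in
ExecStart, so check with systemctl cat ceph-disk@.service first):

    sudo mkdir -p /etc/systemd/system/ceph-disk@.service.d
    sudo tee /etc/systemd/system/ceph-disk@.service.d/override.conf <<'EOF'
    [Service]
    Environment=CEPH_DISK_TIMEOUT=10000
    EOF
    sudo systemctl daemon-reload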
Hello,
not an expert here but I think the answer is something like:
radosgw-admin orphans find --pool=_DATA_POOL_ --job-id=_JOB_ID_
radosgw-admin orphans finish --job-id=_JOB_ID_
_JOB_ID_ being anything.
Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
On Thu
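Put together, the workflow being discussed looks roughly like this (pool
name illustrative; as the thread notes, whether finish removes the orphans
themselves or just the scan job's state is exactly the open question):

    radosgw-admin orphans find --pool=default.rgw.buckets.data --job-id=orphans1
    radosgw-admin orphans list-jobs                  # show known scan jobs
    radosgw-admin orphans finish --job-id=orphans1   # clean up the job data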
Hello,
running Jewel on some nodes with rados gateway I've managed to get a
lot of leaked multipart objects, most of them belonging to buckets
that do not even exist anymore. We estimated these objects to occupy
somewhere around 60 TB, which would be great to reclaim. The question is
how, since trying t
Hi,
I had a chance to catch John Spray at the Ceph Day, and he suggested that I try
to reproduce this bug in Luminous.
To fix my immediate problem we discussed 2 ideas:
1. Manually edit the metadata; unfortunately I was not able to find any
information on how the metadata is structured :-(
Hi,
TL;DR - the timeout setting in ceph-disk@.service is (far) too small -
it needs increasing and/or removing entirely. Should I copy this to
ceph-devel?
On 15/09/17 16:48, Matthew Vernon wrote:
On 14/09/17 16:26, Götz Reinicke wrote:
After that, 10 OSDs did not come up like the others. The
On Thu, Sep 28, 2017 at 11:51 AM, Richard Hesketh
wrote:
> On 27/09/17 19:35, John Spray wrote:
>> On Wed, Sep 27, 2017 at 1:18 PM, Richard Hesketh
>> wrote:
>>> On 27/09/17 12:32, John Spray wrote:
On Wed, Sep 27, 2017 at 12:15 PM, Richard Hesketh
wrote:
> As the subject says... a
So the problem you faced has been completely solved?
On Thu, Sep 28, 2017 at 7:51 PM, Richard Hesketh
wrote:
> On 27/09/17 19:35, John Spray wrote:
>> On Wed, Sep 27, 2017 at 1:18 PM, Richard Hesketh
>> wrote:
>>> On 27/09/17 12:32, John Spray wrote:
On Wed, Sep 27, 2017 at 12:15 PM, Richar
On 27/09/17 19:35, John Spray wrote:
> On Wed, Sep 27, 2017 at 1:18 PM, Richard Hesketh
> wrote:
>> On 27/09/17 12:32, John Spray wrote:
>>> On Wed, Sep 27, 2017 at 12:15 PM, Richard Hesketh
>>> wrote:
As the subject says... any ceph fs administrative command I try to run
hangs forever
Hi Haomai,
can you please guide me to a running cluster with RDMA ?
regards
Gerhard W. Recher
net4sec UG (haftungsbeschränkt)
Leitenweg 6
86929 Penzing
+49 171 4802507
Am 28.09.2017 um 04:21 schrieb Haomai Wang:
> previously we had an InfiniBand cluster; recently we deployed a RoCE
> cluster. th
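For the archives, the Luminous-era RDMA messenger settings look roughly like
this in ceph.conf (option names from memory, device name illustrative; RDMA
support was still experimental at the time):

    [global]
    ms_type = async+rdma
    ms_async_rdma_device_name = mlx5_0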
Hi Dan, list,
Our cluster is small: three nodes, 24 4 TB platter OSDs in total, SSD
journals. Using rbd for VMs. That's it. Runs nicely though :-)
The fact that "tunable optimal" for jewel would result in "significantly
fewer mappings change when an OSD is marked out of the cluster" is what
at
On 09/28/2017 04:08 AM, Leonardo Vaz wrote:
Hey Cephers,
This is just a friendly reminder that the next Ceph Developer Monthly
meeting is coming up:
http://wiki.ceph.com/Planning
If you have work that you're doing that is feature work, significant
backports, or anything you would like to di
Are we going to have the next CDM in an APAC-friendly time slot again?
On Thu, Sep 28, 2017 at 12:08 PM, Leonardo Vaz wrote:
> Hey Cephers,
>
> This is just a friendly reminder that the next Ceph Developer Monthly
> meeting is coming up:
>
> http://wiki.ceph.com/Planning
>
> If you have work that y
David,
Thank you so much for your reply. I'm not entirely satisfied though. I'm
expecting the PG states "degraded" and "undersized". Those should result in
a HEALTH_WARN. I'm particularly worried about the "stuck inactive" part.
Please correct me if I'm wrong, but I was under the impression that a
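A quick way to see what the cluster actually reports for those states, using
the standard commands:

    ceph health detail
    ceph pg dump_stuck inactive
    ceph pg dump_stuck degraded undersized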
On 28. sep. 2017 09:27, Olivier Migeot wrote:
Greetings,
we're in the process of recovering a cluster after an electrical
disaster. It hasn't gone badly so far; we managed to clear most of the errors.
All that prevents a return to HEALTH_OK now is a bunch (6) of scrub
errors, apparently from a PG that's
Hi,
How big is your cluster and what is your use case?
For us, we'll likely never enable the recent tunables that need to
remap *all* PGs -- it would simply be too disruptive for marginal
benefit.
Cheers, Dan
On Thu, Sep 28, 2017 at 9:21 AM, mj wrote:
> Hi,
>
> We have completed the upgrade t
Greetings,
we're in the process of recovering a cluster after an electrical
disaster. It hasn't gone badly so far; we managed to clear most of the errors.
All that prevents a return to HEALTH_OK now is a bunch (6) of scrub
errors, apparently from a PG that's marked as active+clean+inconsistent.
Thing i
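The usual sequence for an inconsistent PG, sketched with a <pgid>
placeholder (inspect before repairing, since repair trusts the primary
copy):

    ceph health detail | grep inconsistent            # find the PG id
    rados list-inconsistent-obj <pgid> --format=json-pretty
    ceph pg repair <pgid>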
Hi,
We have completed the upgrade to jewel, and we set tunables to hammer.
Cluster again HEALTH_OK. :-)
But now, we would like to proceed in the direction of luminous and
bluestore OSDs, and we would like to ask for some feedback first.
From the jewel ceph docs on tunables: "Changing tunabl
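For reference, the relevant commands (the last is the disruptive one being
weighed here):

    ceph osd crush show-tunables     # inspect the current profile
    ceph osd crush tunables hammer   # the profile set after the jewel upgrade
    ceph osd crush tunables optimal  # would trigger the large remap discussed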