There seems to be a more fundamental confusion here. "ceph osd map" asks
the cluster where a single *object* is located. On a pool of size 2, that
will return 2 OSDs, but it DOES NOT check to see if the object actually
exists — it just outputs the CRUSH mapping!
Files in CephFS are composed of many RADOS objects, so mapping a single
object tells you very little about where a whole file lives.
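To illustrate the distinction, a minimal sketch (the pool and object names
here are made up):

  # prints the CRUSH mapping whether or not the object exists
  ceph osd map mypool myobject

  # actually contacts the OSDs and returns an error if the object is missing
  rados -p mypool stat myobject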
This isn't very complete as it just indicates that something went wrong
with a read. Since I presume it happens on every startup, it may help if
you set "debug bluestore = 20" in the OSD's config and provide that log
(perhaps with ceph-post-file if it's large).
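As a concrete sketch (the OSD id and log path here are assumptions), put it
in that OSD's section of ceph.conf and restart so the startup read is
captured:

  [osd.3]
      debug bluestore = 20

and then upload the resulting log, e.g.:

  ceph-post-file /var/log/ceph/ceph-osd.3.log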
I also went through my email and see
after emptying the bucket, it cannot be deleted since there are some aborted
multipart uploads
radosgw-admin bucket check --bucket=weird_bucket
[
"_multipart_DISK_P/collection_1/anonymous/GRLIX/GRLIX_001069.pdf.VOeGNgr-gvhXCrf6dlnhAqhjaFHIF7t.1",
On Thu, Aug 02, 2018 at 01:04:46PM +0200, Ilya Dryomov wrote:
> On Thu, Aug 2, 2018 at 12:49 PM wrote:
> >
> > I created an RBD image named dx-app with 500G and mapped it as rbd0.
> >
> > But I find the size reported differs between commands:
> >
> > [root@dx-app docker]# rbd info dx-app
> > rbd image 'dx-app':
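As an aside, a few ways to read the size back and compare (assuming the
image is in the default pool and mapped at /dev/rbd0):

  rbd info dx-app                   # provisioned size of the image
  rbd du dx-app                     # space actually allocated so far
  blockdev --getsize64 /dev/rbd0    # size the kernel reports for the mapped device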
Hi,
Anyone who has real experience with this case, could you give me more
information and an estimate?
Thanks.
2018-08-05 15:00 GMT+07:00 Sam Huracan :
> Thanks Saludos!
>
> As far as I know, we should keep the FileStore SSD Journal after
> upgrading, because BlueStore will affect the write performance?
Hello,
I am having a problem with the default.rgw.buckets.data pool.
There are about 10+ buckets within the pool.
The buckets hold around 100K+ objects each, and resharding blocks them.
However, the resharding process also gets blocked by problems within
the buckets.
# ceph version
ceph version
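For reference, the reshard state can usually be inspected and retried with
radosgw-admin; a sketch where the bucket name and shard count are
placeholders:

  radosgw-admin reshard list
  radosgw-admin reshard status --bucket=<bucket>

  # check whether any bucket exceeds the per-shard object limit
  radosgw-admin bucket limit check

  # manually reshard a single bucket to a given number of shards
  radosgw-admin bucket reshard --bucket=<bucket> --num-shards=<n>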
Thanks for the reply! Ok I understand :-)
But the page still shows a 403 even now...
On 5 August 2018 at 06:42:33 GMT+03:00, Gregory Farnum wrote:
>On Sun, Aug 5, 2018 at 1:25 AM Виталий Филиппов
>wrote:
>
>> Hi!
>>
>> I wanted to report a bug in ceph, but I found out that visiting
>>
Hi,
We are starting to see core dumps occurring with Luminous 12.2.7. Any idea where
this is coming from? We started having issues with BlueStore core dumping
when we moved to 12.2.6 and hoped that 12.2.7 would have fixed it. We might
need to revert to 12.2.5 as it seems a lot more stable.
Thanks Saludos!
As far as I know, we should keep the FileStore SSD journal after upgrading,
because BlueStore will affect the write performance?
I think I'll choose Luminous, which is currently the most stable version.
On Sat, Aug 4, 2018, 03:31 Xavier Trilla wrote:
> Hi Sam,