Hello,
Luminous 12.2.2
There were several discussions on this list concerning Bluestore migration,
as the official documentation does not quite work yet. In particular this
one:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-January/024190.html
Is it possible to update the official documentation?
[striping diagram fragment: stripe units 0x47..0x7f laid out across Obj5..Obj8, each object 8M]
Alexander.
On Wed, Oct 11, 2017 at 3:19 PM, Alexander Kushnirenko <
kushnire...@gmail.com> wrote:
> Oh! I put a wrong link
Alexander Kushnirenko <kushnire...@gmail.com> wrote:
Hi, Ian!
Thank you for your reference!
Could you comment on the following rule:
object_size = stripe_unit * stripe_count
Or is it not necessarily so?
I refer to page 8 in this report:
https://indico.cern.ch/event/531810/contributions/2298934/attachments/1358128/2053937/Ceph-Experience-at-RAL-f
> RadosStriper also has:
> set_object_layout_object_size(unsigned int object_size);
>
> So I imagine you specify it with that setter, the same way you've set the
> stripe unit and count.
>
> On Sat, Oct 7, 2017 at 12:38 PM Alexander Kushnirenko <
> kushnire...@gmail.com> wrote:
>
Hi,
Are there any recommendations on the limit at which OSD performance starts
to decline because of a large number of objects? Or perhaps a procedure for
finding this number (Luminous)? My understanding is that the recommended
object size is 10-100 MB, but is there any performance hit due to high
object counts?
> I would expect you need to make sure that
> the size is an integer multiple of the stripe unit. And it probably
> defaults to a 4MB object if you don't specify one?
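
For concreteness, a minimal sketch of how those layout calls fit together in
the libradosstriper C API (function names are from the public
libradosstriper.h; the pool name is made up and error handling is trimmed,
so treat it as an illustration only):

#include <rados/librados.h>
#include <radosstriper/libradosstriper.h>

/* Sketch: layout where object_size = stripe_unit * stripe_count
 * (4MB * 8 = 32MB).  Assumes 'cluster' is already connected. */
int setup_striper(rados_t cluster, rados_striper_t *striper)
{
    rados_ioctx_t ioctx;

    if (rados_ioctx_create(cluster, "backup-pool", &ioctx) < 0)  /* example pool */
        return -1;
    if (rados_striper_create(ioctx, striper) < 0)
        return -1;

    rados_striper_set_object_layout_stripe_unit(*striper, 4u * 1024 * 1024);
    rados_striper_set_object_layout_stripe_count(*striper, 8);
    /* object size must be an integer multiple of the stripe unit: */
    rados_striper_set_object_layout_object_size(*striper, 32u * 1024 * 1024);

    /* data written through the striper now spreads over 32MB objects, e.g.
     * rados_striper_write(*striper, "volume-0001", buf, len, off); */
    return 0;
}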
>
> On Fri, Sep 29, 2017 at 2:09 AM Alexander Kushnirenko <
> kushnire...@gmail.com> wrote:
>
Hello,
I'm working on third-party code (the Bareos storage daemon) which gives very
low write speeds with CEPH. The code was written to demonstrate that it is
possible, but the speed is about 3-9 MB/s, which is too slow. I modified
the routine to use rados_aio_write instead of rados_write, and was able to
improve the speed.
> It sounds like you are dominated by per-op latency already rather than the
> throughput of your cluster. Using aio or multiple threads will let you
> parallelize requests.
> -Greg
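
As a rough illustration of the advice above, a sketch that keeps several
writes in flight with rados_aio_write instead of blocking on rados_write for
each chunk (chunk size and queue depth are arbitrary; error handling is
trimmed):

#include <rados/librados.h>

#define CHUNK (4 * 1024 * 1024)  /* per-op size, arbitrary */
#define DEPTH 8                  /* ops kept in flight, arbitrary */

/* Write 'len' bytes of 'buf' to object 'oid' with up to DEPTH
 * concurrent ops, hiding per-op latency behind parallelism. */
int write_parallel(rados_ioctx_t io, const char *oid,
                   const char *buf, size_t len)
{
    rados_completion_t c[DEPTH];
    size_t off = 0;
    int n = 0, i;

    while (off < len) {
        size_t sz = (len - off < CHUNK) ? len - off : CHUNK;
        rados_aio_create_completion(NULL, NULL, NULL, &c[n]);
        rados_aio_write(io, oid, c[n], buf + off, sz, off);
        off += sz;
        if (++n == DEPTH) {            /* window full: drain it */
            for (i = 0; i < n; i++) {
                rados_aio_wait_for_safe(c[i]);
                rados_aio_release(c[i]);
            }
            n = 0;
        }
    }
    for (i = 0; i < n; i++) {          /* drain the remainder */
        rados_aio_wait_for_safe(c[i]);
        rados_aio_release(c[i]);
    }
    return 0;
}

Draining in whole batches is cruder than a true sliding window, but it
already amortizes the round trips that dominate a single-threaded
rados_write loop.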
> On Fri, Sep 29, 2017 at 3:33 AM Alexander Kushnirenko <
> kushnire...@gmail.com> wrote:
>
>> Hello,
>
Hello,
We see very poor performance when reading/writing rados objects. The speed
is only 3-4 MB/s, compared to 95 MB/s in rados benchmarking.
When you look at the underlying code, it uses the librados and
libradosstriper libraries (both show poor performance), and the code uses
the rados_read and rados_write functions.
Hi,
I'm trying to use CEPH-12.2.0 as storage for Bareos-16.2.4 backups with
libradosstriper1 support.
libradosstriper was suggested on this list to solve the problem that current
CEPH-12 discourages users from using objects with a very big size (>128MB).
Bareos treats a Rados object as a volume, and typical volumes are far larger
than that limit.
> Objects shouldn't be
> stored as large as that, and performance will also suffer.
>
>
>
> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf
> Of *Alexander Kushnirenko
> *Sent:* 26 September 2017 13:50
> *To:* ceph-users@lists.ceph.com
> *Subject:*
Hello,
We successfully use rados to store backup volumes with the Jewel version of
CEPH. Typical volume size is 25-50GB. The backup software (Bareos) uses
Rados objects as backup volumes, and it works fine. Recently we tried
Luminous for the same purpose.
In Luminous the developers reduced osd_max_object_size from 100GB to 128MB.
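
For anyone hitting this, a hedged sketch of checking the limit from the
client side before attempting a single-object write. Note that
rados_conf_get only reports the client's local view of the option; the value
the OSDs actually enforce can differ, and striping remains the real fix for
large volumes:

#include <rados/librados.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch: refuse to write a volume as one object if it exceeds the
 * client's configured osd_max_object_size (128MB default on Luminous). */
int volume_fits_one_object(rados_t cluster, unsigned long long volume_size)
{
    char buf[32];
    unsigned long long max;

    if (rados_conf_get(cluster, "osd_max_object_size", buf, sizeof(buf)) < 0)
        return -1;
    max = strtoull(buf, NULL, 10);
    if (volume_size > max) {
        fprintf(stderr, "volume of %llu bytes exceeds osd_max_object_size "
                        "(%llu): stripe it across smaller objects\n",
                volume_size, max);
        return 0;  /* does not fit */
    }
    return 1;      /* fits */
}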