Awesome, that did it.

I'm considering creating a separate Bareos device with striping, testing there, and
then phasing out the old non-striped pool... Maybe that would also fix the
suboptimal throughput...

But from the Ceph side of things, it looks like I'm good now.

Thanks again :)

Cheers,

Martin 

-----Original Message-----
From: Jens Rosenboom [mailto:j.rosenb...@x-ion.de]
Sent: Tuesday, 4 July 2017 14:42
To: Martin Emrich <martin.emr...@empolis.com>
Cc: Gregory Farnum <gfar...@redhat.com>; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Rados maximum object size issue since Luminous?

2017-07-04 12:10 GMT+00:00 Martin Emrich <martin.emr...@empolis.com>:
...
> So as striping is not backwards-compatible (and this pool is indeed for
> backup/archival purposes where large objects are no problem):
>
> How can I restore the behaviour of jewel (allowing 50GB objects)?
>
> The only option I found was "osd max write size", but that doesn't seem to be
> the right one, as its default of 90MB is lower than my observed 128MB limit.

That should be osd_max_object_size, see https://github.com/ceph/ceph/pull/15520
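For reference, a minimal sketch of raising that limit (the 50G value below is an assumption matching the 50GB objects mentioned above; Luminous lowered the default to 128MB):

```shell
# In ceph.conf, [osd] section, raise the per-object cap to ~50 GiB:
#   osd max object size = 53687091200
#
# Or apply it at runtime to all OSDs without a restart:
ceph tell osd.* injectargs '--osd_max_object_size=53687091200'
```

Note that writes above the limit are rejected at the OSD, so the setting must be raised on the OSD side, not just on the client.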