On Sunday, 23 September 2018 at 20:28 +0200, mj wrote:
> XFS has *always* treated us nicely, and we have been using it for a VERY
> long time, ever since the pre-2000 suse 5.2 days on pretty much all our
> machines.
>
> We have seen only very few corruptions on xfs, and the few times we
On Sunday, 23 September 2018 at 17:49 -0700, solarflow99 wrote:
> ya, sadly it looks like btrfs will never materialize as the next
> filesystem of the future. Redhat as an example even dropped it from
> its future, as others probably will and have too.
Too bad, since this FS has a lot of
Hi list,
We have two Luminous RGWs running behind an F5 load balancer. Every couple
of seconds the F5 sends a keep-alive request to the RGWs, saturating the
Civetweb log with HTTP entries and making it very difficult to troubleshoot
user connections. Example:
172.16.212.86 - - [24/Sep/2018:11:
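Not a definitive fix, but one commonly used knob is passing civetweb frontend options through `rgw frontends` and turning down the civetweb debug level; the section name and log path below are illustrative assumptions, not taken from this thread:

```ini
# Sketch only: move civetweb's access log out of the main RGW log and
# silence the per-request HTTP entries. Section name and file path are
# hypothetical examples.
[client.rgw.gateway1]
rgw frontends = civetweb port=7480 access_log_file=/var/log/ceph/civetweb.access.log
debug civetweb = 0/0
```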
ya, sadly it looks like btrfs will never materialize as the next filesystem
of the future. Redhat as an example even dropped it from its future, as
others probably will and have too.
On Sun, Sep 23, 2018 at 11:28 AM mj wrote:
> Hi,
>
> Just a very quick and simple reply:
>
> XFS has *always* t
Hi,
Just a very quick and simple reply:
XFS has *always* treated us nicely, and we have been using it for a VERY
long time, ever since the pre-2000 suse 5.2 days on pretty much all our
machines.
We have seen only very few corruptions on xfs, and the few times we
tried btrfs, (almost) always
On Fri, Sep 21, 2018 at 04:17:35PM -0400, Jin Mao wrote:
> I am looking for an API equivalent of 'radosgw-admin log list' and
> 'radosgw-admin log show'. Existing /usage API only reports bucket level
> numbers like 'radosgw-admin usage show' does. Does anyone know if this is
> possible from rest AP
Hi Paul,
thanks for the hint, I just checked and it works perfectly.
I found this guide:
https://www.reddit.com/r/ceph/comments/72yc9m/ceph_openstack_with_ec/
This works well with a single meta/data setup, but not with multiple
(like device-class based pools).
The link above uses client-auth, is there
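For reference, device-class based pools like the ones mentioned here are usually built from a CRUSH rule restricted to a device class; the rule name, pool name, and PG counts below are illustrative assumptions:

```shell
# Sketch, not from the thread: a replicated pool limited to SSD OSDs.
# Create a CRUSH rule that only selects OSDs with device class "ssd",
# then create a pool using that rule.
ceph osd crush rule create-replicated ssd-rule default host ssd
ceph osd pool create ssd-pool 64 64 replicated ssd-rule
```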
The usual trick for clients not supporting this natively is the option
"rbd_default_data_pool" in ceph.conf which should also work here.
Paul
On Sun, 23 Sep 2018 at 18:03, Kevin Olbrich wrote:
>
> Hi!
>
> Is it possible to set data-pool for ec-pools on qemu-img?
> For repl-pools I used
Hi!
Is it possible to set data-pool for ec-pools on qemu-img?
For repl-pools I used "qemu-img convert" to convert from e.g. vmdk to raw
and write to rbd/ceph directly.
The rbd utility is able to do this for raw or empty images but without
convert (converting 800G and writing it again would now take a
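Following the `rbd_default_data_pool` hint given elsewhere in this thread, a rough sketch of how that could look with qemu-img; the pool and image names are made up for illustration:

```shell
# Sketch under assumptions: the image header goes to the replicated
# pool named in the rbd: target, while data is redirected to the EC
# pool by the client-side option. All names are placeholders.
cat >> /etc/ceph/ceph.conf <<'EOF'
[client]
rbd_default_data_pool = my-ec-datapool
EOF

# -f selects the input format, -O the output format; the target uses
# qemu's rbd: syntax (pool/image).
qemu-img convert -p -f vmdk -O raw disk.vmdk rbd:rbd-meta-pool/myimage
```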
Short answer: no and no.
Long:
1. having size = 2 is safe *if you also keep min_size at 2*. But
that's not highly available so you usually don't want this. min_size =
1 (or reducing min size on an ec pool) is basically a guarantee to
lose at least some data/writes in the long run.
2. It's no lon
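A minimal illustration of checking point 1 on a live cluster; the pool name is a placeholder:

```shell
# Sketch: verify that min_size matches size on a 2-replica pool, as
# recommended above. "mypool" is an example name.
ceph osd pool get mypool size
ceph osd pool get mypool min_size
ceph osd pool set mypool min_size 2
```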