Hi Felix,
What happens when you restart the server - is the numbering identical then?
I see this behavior on an Intel server as well: when I add additional disks at
runtime the numbering just keeps increasing, but I would expect to get a number
between the two slots (of course, only when I put a disk into a slot between them).
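For what it's worth, the persistent links udev creates are independent of the enumeration order, so they are a better way to tie a device to a physical slot; a rough sketch (the device name below is a placeholder):

# /dev/sdX order can change across hot-plugs and reboots, so check the
# stable identifiers instead of counting on the kernel's numbering
ls -l /dev/disk/by-path/
ls -l /dev/disk/by-id/
# show what udev recorded for one disk (replace /dev/sdb with your device)
udevadm info --query=all --name=/dev/sdb | grep -E 'ID_PATH|ID_SERIAL'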
Hi Ben,
On Thu, Apr 20, 2017 at 6:08 PM, Ben Morrice wrote:
> Hi all,
>
> I have tried upgrading one of our RGW servers from 10.2.5 to 10.2.7 (RHEL7)
> and authentication is in a very bad state. This installation is part of a
> multigw configuration, and I have just updated one host in the second
On Thu, Apr 20, 2017 at 2:31 AM, mj wrote:
> Hi Gregory,
>
> Reading your reply with great interest, thanks.
>
> Can you confirm my understanding now:
>
> - live snapshots are more expensive for the cluster as a whole, than taking
> the snapshot when the VM is switched off?
No, it doesn't matter
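For what it's worth, the snapshot call itself is identical either way; only the consistency of the guest filesystem differs. A minimal sketch with placeholder pool/image/snapshot names:

# same command whether the VM is running or shut down
rbd snap create rbd/vm-disk-1@before-maintenance
rbd snap ls rbd/vm-disk-1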
Hi all,
I have tried upgrading one of our RGW servers from 10.2.5 to 10.2.7
(RHEL7) and authentication is in a very bad state. This installation is
part of a multigw configuration, and I have just updated one host in the
secondary zone (all other hosts/zones are running 10.2.5).
On the 10.2.
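For reference, the multisite state can be compared between the upgraded host and a 10.2.5 host with something like the following (the uid below is a placeholder):

# confirm the realm/zonegroup/zone view still matches the other gateways
radosgw-admin period get
radosgw-admin zone get
# make sure the system user used for multisite sync still has its keys
radosgw-admin user info --uid=sync-user
# replication state as seen from the secondary zone
radosgw-admin sync status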
On Thu, Apr 20, 2017 at 2:13 AM, Maxime Guyot
wrote:
> >2) Why did you choose to run the ceph nodes on loopback interfaces as
> opposed to the /24 for the "public" interface?
>
> I can’t speak for this example, but in a clos fabric you generally want to
> assign the routed IPs on loopback rather
Hi all,
hope you are all doing well, and maybe some of you can help me with a problem
I have been focusing on recently.
I started to evaluate Ceph a couple of months ago and I now have a very strange
problem while formatting rbd images.
The problem only occurs when using rbd images directly with the kernel
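One thing worth ruling out: the kernel client only supports a subset of the image features that Jewel enables by default, so a minimal-feature image is a useful baseline. A sketch with placeholder pool/image names and size:

# create an image with only the 'layering' feature, map it and format it
rbd create rbd/testimg --size 10240 --image-feature layering
DEV=$(rbd map rbd/testimg)
mkfs.xfs "$DEV"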
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Jogi
> Hofmüller
> Sent: 20 April 2017 13:51
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] slow requests and short OSD failures in small
> cluster
>
> Hi,
>
> Am Dienstag, den 1
No, they are not empty, but I compared the sizes with my records from yesterday.
You are right, they are getting smaller, it is just incredibly slow. The pool
was an OpenStack Gnocchi pool that created several million small files on each
OSD. (Happy to get rid of Gnocchi now…) The OSD is slowly del
Hi,
Am Dienstag, den 18.04.2017, 18:34 + schrieb Peter Maloney:
> The 'slower with every snapshot even after CoW totally flattens it'
> issue I just find easy to test, and I didn't test it on hammer or
> earlier, and others confirmed it, but didn't keep track of the
> versions. Just make an r
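If I understand the test right, it is roughly this (image name, size and block counts are placeholders):

# write, snapshot, then rewrite the same blocks and compare throughput
rbd create rbd/snaptest --size 10240
DEV=$(rbd map rbd/snaptest)
dd if=/dev/zero of="$DEV" bs=4M count=1000 oflag=direct
rbd snap create rbd/snaptest@s1
dd if=/dev/zero of="$DEV" bs=4M count=1000 oflag=direct   # second run pays the CoW cost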
Is there any data in those directories? Do a du on them and see if they are
getting smaller over time, or maybe they are already empty and it failed to
delete the main folder.
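On a FileStore OSD that check looks roughly like this (OSD id and pool id are placeholders):

# PG directories of the deleted pool; repeat after a few minutes and compare
du -sch /var/lib/ceph/osd/ceph-0/current/123.*_head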
On Thu, Apr 20, 2017, 2:46 AM Daniel Marks
wrote:
> Hi all,
>
> I am wondering when the PGs for a deleted pool get removed
Richard, it would be simple to require that an SSD and an HDD on the same
host are not both used. Set your failure domain to rack instead of host and put
the SSD host and the HDD host in the same rack. Ceph has no way of actually
telling what a rack is, so it's whatever you set it to in your crush map.
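Roughly, with placeholder bucket and host names:

# put one SSD host and one HDD host into the same logical 'rack'
ceph osd crush add-bucket rack1 rack
ceph osd crush move rack1 root=default
ceph osd crush move ssd-host1 rack=rack1
ceph osd crush move hdd-host1 rack=rack1
# then use a crush rule whose failure domain (chooseleaf type) is 'rack'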
On
Hi,
>>2) Why did you choose to run the ceph nodes on loopback interfaces as opposed
>>to the /24 for the "public" interface?
>
> I can’t speak for this example, but in a clos fabric you generally want
> to assign the routed IPs on loopback rather than physical interfaces.
> This way if one of th
Hi Gregory,
Reading your reply with great interest, thanks.
Can you confirm my understanding now:
- live snapshots are more expensive for the cluster as a whole, than
taking the snapshot when the VM is switched off?
- using fstrim in VMs is (much?) more expensive when the VM has existing
sn
On 19/04/17 21:08, Reed Dier wrote:
> Hi Maxime,
>
> This is a very interesting concept. Instead of the primary affinity being
> used to choose SSD for primary copy, you set crush rule to first choose an
> osd in the ‘ssd-root’, then the ‘hdd-root’ for the second set.
>
> And with 'step choosel
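For reference, a sketch of such a rule appended to a decompiled crush map (the ruleset id and pool name are placeholders; ssd-root/hdd-root as named above):

ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
cat >> crush.txt <<'EOF'
rule ssd_primary {
        ruleset 5
        type replicated
        min_size 1
        max_size 10
        step take ssd-root
        step chooseleaf firstn 1 type host
        step emit
        step take hdd-root
        step chooseleaf firstn -1 type host
        step emit
}
EOF
crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new
# then: ceph osd pool set <pool> crush_ruleset 5

Expect data movement when the new map is injected, so test it on a quiet cluster first.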
Hi,
>2) Why did you choose to run the ceph nodes on loopback interfaces as opposed
>to the /24 for the "public" interface?
I can’t speak for this example, but in a clos fabric you generally want to
assign the routed IPs on loopback rather than physical interfaces. This way if
one of the link go
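For completeness, on one node that looks roughly like this (addresses are placeholders):

ip addr add 192.0.2.11/32 dev lo
# the routing protocol (e.g. BGP) then advertises this /32 over whichever
# physical uplink is still alive; ceph binds to it via ceph.conf, e.g.
#   [osd]
#   public addr = 192.0.2.11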
On 04/20/2017 02:25 AM, Donny Davis wrote:
> In reading the docs, I am curious if I can change the chooseleaf parameter as
> my cluster expands. I currently only have one node and used this parameter in
> ceph.conf
>
> osd crush chooseleaf type = 0
>
> Can this be changed after I expand nodes
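For what it's worth, that setting only influences the default rule generated when the cluster is first created; after you add more nodes you change the behaviour by editing the rule in the crush map, roughly (file names are placeholders):

ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
# in the existing rule, change
#   step chooseleaf firstn 0 type osd
# to
#   step chooseleaf firstn 0 type host
crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new

Injecting the new map will move data, since replicas are then spread across hosts instead of OSDs.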
Hello cephers,
is anyone using Fujitsu hardware for Ceph OSDs with the PRAID EP400i
RAID controller in JBOD mode? We have three identical servers with
identical disk placement. The first three slots hold SSDs for journaling and
the remaining nine slots hold SATA disks. The problem is that in Ubuntu (and