-----Original Message-----
From: Cary [mailto:dynamic.c...@gmail.com]
Sent: Friday, December 15, 2017 5:56 PM
To: David Turner
Cc: James Okken; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] add hard drives to 3 CEPH servers (3 server cluster)

James,

You can set these values [...]
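(It is not clear from this fragment which values are meant. Settings that often get tuned while a small cluster rebalances after adding OSDs are the backfill and recovery throttles, for example:

  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'

or the equivalent entries under [osd] in ceph.conf. This is a general suggestion, not a reconstruction of the original advice.)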
>> > [...] No such file or directory
>> > 2017-12-15 17:28:22.893443 7fd2f9e928c0 -1 created object store /var/lib/ceph/osd/ceph-4 for osd.4 fsid 2b9f7957-d0db-481e-923e-89972f6c594f
>> > 2017-12-15 17:28:22.893484 7fd2f9e928c0 -1 auth: error reading fil [...]
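(Output like the above typically comes from initializing the OSD data directory by hand; a plausible invocation, assuming osd.4 and the fsid shown, would be:

  ceph-osd -i 4 --mkfs --mkkey --osd-uuid 2b9f7957-d0db-481e-923e-89972f6c594f

The "auth: error reading file" message is normal on the first run with --mkkey, since the keyring does not exist yet; the "created new key in keyring" line quoted later in the thread shows it being generated.)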
> [...]
> 3 3.7 1.0 3723G 202G 3521G 5.43 1.09 291
> TOTAL 13680G 682G 12998G 4.99
> MIN/MAX VAR: 0.79/1.16 STDDEV: 0.67
>
> Thanks!
>
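(The figures above look like truncated "ceph osd df" output; the columns in that listing are ID, WEIGHT, REWEIGHT, SIZE, USE, AVAIL, %USE, VAR and PGS, so the surviving row would be osd.3 with a CRUSH weight of 3.7, 3723G of capacity, 202G used and 291 PGs. The full listing can be reproduced with:

  ceph osd df
)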
-----Original Message-----
From: Cary [mailto:dynamic.c...@gmail.com]
Sent: Friday, December 15, 2017 4:05 PM
To: James Okken
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] add hard drives to 3 CEPH servers (3 server cluster)

James,

[...]
> 2017-12-15 17:28:22.893662 7fd2f9e928c0 -1 created new key in keyring /var/lib/ceph/osd/ceph-4/keyring
>
> thanks
>
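(After the key has been created, the usual remaining steps are to register it with the monitors, add the OSD to the CRUSH map, and start the daemon. This is a sketch of the standard manual procedure, not necessarily what was run here; the 3.7 weight mirrors the 4TB OSDs in the df output above and host=<hostname> is a placeholder:

  ceph auth add osd.4 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-4/keyring
  ceph osd crush add osd.4 3.7 host=<hostname>
  systemctl start ceph-osd@4
)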
-----Original Message-----
From: Cary [mailto:dynamic.c...@gmail.com]
Sent: Thursday, December 14, 2017 7:13 PM
To: James Okken
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] add hard drives to 3 CEPH servers (3 server cluster)
James,

Usually, once the misplaced data has balanced out, the cluster should reach a
healthy state. If you run "ceph health detail", Ceph will show you [...]
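(General commands for watching that process, as a suggestion rather than anything specific from the truncated message:

  ceph health detail
  ceph -s
  ceph -w

"ceph health detail" lists the PGs that are still degraded or misplaced, "ceph -s" gives a one-shot summary, and "ceph -w" streams recovery progress.)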
> [...] 233 GB avail; 0 B/s rd, 334 kB/s wr
>
>
-----Original Message-----
From: Cary [mailto:dynamic.c...@gmail.com]
Sent: Thursday, December 14, 2017 4:21 PM
To: James Okken
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] add hard drives to 3 CEPH servers (3 server cluster)
Jim,

I am not an expert, but I believe I can assist.

Normally you will only have 1 OSD per drive. I have heard discussions about
using multiple OSDs per disk when using SSDs, though.

Once your drives have been installed you will have to format them, unless you
are using Bluestore. My steps [...]
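(The steps themselves are cut off in this snippet. For reference, a typical manual sequence for preparing a new FileStore OSD looks roughly like the following; the device name is a placeholder and this is a sketch of the common procedure, not necessarily Cary's exact steps:

  ceph osd create                  # allocates the next free OSD id, e.g. 4
  mkfs.xfs /dev/sdX1               # format the new drive
  mkdir -p /var/lib/ceph/osd/ceph-4
  mount /dev/sdX1 /var/lib/ceph/osd/ceph-4
  ceph-osd -i 4 --mkfs --mkkey     # initialize the data directory and key

The registration and startup commands shown earlier in the thread (ceph auth add, ceph osd crush add, starting the ceph-osd service) then follow.)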
On 14.12.2017 18:34, James Okken wrote:

Hi all,

Please let me know if I am missing steps or using the wrong steps.

I'm hoping to expand my small CEPH cluster by adding 4TB hard drives to each of
the 3 servers in the cluster.
I also need to change my replication factor from 1 to 3.
This is part of an Openstack environment deployed by [...]
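(For the replication change, the usual per-pool commands are, with the pool name as a placeholder:

  ceph osd pool set <pool> size 3
  ceph osd pool set <pool> min_size 2

Raising size from 1 to 3 copies every object two more times, so it is normally done after the new OSDs are in place and the cluster is otherwise healthy.)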