Thanks for the explanation.
On 07.09.2017 at 12:06, John Spray wrote:
On Wed, Sep 6, 2017 at 4:47 PM, Piotr Dzionek wrote:
Oh, I see that this is probably a bug: http://tracker.ceph.com/issues/21260
I also noticed the following error in the mgr logs:
2017-09-06 16:41:08.537577 7f34c0a7a700  1 mgr
    raise RuntimeError('no certificate configured')
RuntimeError: no certificate configured
Probably not related, but what kind of certificate might it refer to?
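For what it is worth, that message looks to me like it comes from the restful mgr module, which refuses to start its HTTPS endpoint without a certificate; that is only a guess from the message text, not something I have confirmed. If that is what it is, generating a throwaway self-signed certificate for a test cluster and restarting the mgr (daemon name is a placeholder) should make it go away:

  ceph restful create-self-signed-cert
  systemctl restart ceph-mgr@<name>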
On 06.09.2017 at 16:31, Piotr Dzionek wrote:
Hi,
I am running a small test two-node Ceph cluster, version 12.2.0. It has 28
osds, 1 mon and 2 mgrs.
As you can see, one ceph manager is in an unknown state. Why is that? FYI,
I checked the rpm versions and restarted all the mgr daemons, and I still get
the same result.
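In case it helps, what I would compare before blaming a bug (daemon/host names are placeholders):

  ceph -s                          # summary line for mgr: active and standbys
  ceph mgr dump                    # how the mons currently see the managers
  systemctl status ceph-mgr@<host> # on each node, is the daemon actually running?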
Kind regards,
Piotr Dzionek
On 31.08.2017 at 18:14, Hervé Ballans wrote:
Hi Piotr,
Just to verify one point: how are your disks connected (physically),
in non-RAID or RAID0 mode?
rv
On 31/08/2017 at 16:24, Piotr Dzionek wrote:
For the last 3 weeks I have been running the latest LTS Luminous Ceph
release on Ce
volume (the part with metadata) is not mounted
yet. My question here: what mounts it, and why does it take so long? Maybe
there is a setting that randomizes the startup process of osds running
on the same node?
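My rough understanding, offered as a sketch rather than a definitive answer: with ceph-disk style OSDs on Luminous nothing in fstab mounts the data partition; udev fires the rule for the Ceph partition type, which runs ceph-disk, and ceph-disk mounts the partition and starts the osd, so the timing depends on when the kernel and udev get around to the device. Useful commands while chasing this (device names are examples):

  ceph-disk list                # how ceph-disk classifies each partition
  ceph-disk activate /dev/sdb1  # mount the data partition and start its osd by hand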
Kind regards,
Piotr Dzionek
e 45B0969E-9B03-4F30-B4C6-B4B80CEFF106, not the journal_uuid. I guess
this tutorial is for old Ceph, which did not run as the ceph user but
as root. Thanks for your help.
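For anyone else who lands here: the GUID above is the partition type GUID (typecode) that the udev rules match in order to chown the partition to the ceph user. Purely as an illustration, assuming the journal is partition 2 of /dev/sdb, it can be set with sgdisk and the partition table re-read so udev re-evaluates the rules:

  sgdisk --typecode=2:45B0969E-9B03-4F30-B4C6-B4B80CEFF106 /dev/sdb
  partprobe /dev/sdb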
Kind regards,
Piotr Dzionek
On 13.02.2017 at 16:38, ulem...@polarzone.de wrote:
Hi Piotr,
is your parti
7; succeeded.
OWNER 64045 /lib/udev/rules.d/95-ceph-osd.rules:16
GROUP 64045 /lib/udev/rules.d/95-ceph-osd.rules:16
MODE 0660 /lib/udev/rules.d/95-ceph-osd.rules:16
RUN '/usr/sbin/ceph-disk --log-stdout -v trigger /dev/$name' /lib/udev/rules.d/95-ceph-osd.rules:16
...
Then /dev/sdb2 wi
udev rules for these disks?
FYI, I have this issue after every restart now.
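Two things that helped me narrow this down, offered only as a sketch (device names are examples; the 64045 in the rule output above is presumably the ceph user's uid/gid on this distro): re-run the udev test for the partition to confirm the rule really fires, and, as a stop-gap, fix the ownership by hand, which of course does not survive a reboot:

  udevadm test /sys/block/sdb/sdb2 2>&1 | grep 95-ceph-osd
  chown ceph:ceph /dev/sdb2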
Kind regards,
Piotr Dzionek
OK, you convinced me to increase size to 3 and min_size to 2. During my
time running Ceph I have only had issues like single disk or host failures,
nothing exotic, but I think it is better to be safe than sorry.
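For the archives, the change itself is just (using the pool name 'data' from earlier in the thread):

  ceph osd pool set data size 3
  ceph osd pool set data min_size 2

Be aware that raising size from 2 to 3 makes the cluster backfill a third copy of every object, so expect some recovery traffic while it settles.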
Kind regards,
Piotr Dzionek
On 30.11.2016 at 12:16, Nick Fisk wrote:
blocked IO? Maybe you mean a faster rebuild? Or maybe that if there is no IO,
the likelihood of another disk failure drops?
On 30.11.2016 at 04:39, Brad Hubbard wrote:
On Tue, Nov 29, 2016 at 11:37 PM, Piotr Dzionek wrote:
Hi,
As far as I understand, if I set pool size 2, there is a chance to lose
On Mon, Nov 28, 2016 at 9:54 PM, Piotr Dzionek wrote:
Hi,
I recently installed a 3-node Ceph cluster, v10.2.3. It has 3 mons and 12
osds. I removed the default pool and created the following one:
pool 7 'data' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 10
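That line looks like truncated output of the osd dump; for reference, the pool definitions and individual settings can be re-checked with, for example:

  ceph osd dump | grep "^pool"
  ceph osd pool get data pg_num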
Hi,
You are right, I missed that there is a default timeout for a down osd to
change state from in to out: "mon osd down out interval" = 300, and I
didn't wait long enough before starting it again.
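A quick way to confirm what a running mon actually uses, and to bump it temporarily if you want more time before down OSDs are marked out (mon id is a placeholder, and the value is just an example):

  ceph daemon mon.<id> config get mon_osd_down_out_interval
  ceph tell mon.* injectargs '--mon-osd-down-out-interval 600'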
Kind regards,
Piotr Dzionek
On 28.11.2016 at 16:12, David Turner wrote:
Is 1024 too big? Or maybe there is some misconfiguration in the crushmap?
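As a sanity check only, using the usual rule of thumb from the docs rather than any hard rule: total PGs ≈ (number of OSDs × 100) / replica size, rounded to a power of two. With 12 osds and size 2 that gives 12 × 100 / 2 = 600, so something in the 512-1024 range for the dominant pool; 1024 is on the generous side but not obviously "too big".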
Kind regards,
Piotr Dzionek