Hi,
I am also looking into tuning single-thread performance.
You can try the following parameters:
[osd]
osd mount options xfs = rw,noatime,inode64,logbsize=256k,delaylog,allocsize=4M
osd_op_threads = 4
osd_disk_threads = 4
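To confirm what a running OSD actually picked up, the settings can be checked on the node. A sketch (osd.0 and the admin-socket path are assumptions; the commands are printed here for review, run them on a ceph node):

```shell
# Print the verification commands for review (osd.0 and the socket path
# are assumptions; adjust for your cluster before running)
cat <<'EOF'
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show
mount | grep /var/lib/ceph/osd
EOF
```

The first command dumps the config the daemon is really using; the second shows whether the mount options took effect.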
Udo
On 25.06.2014 08:52, wsnote wrote:
OS: CentOS 6.5
Version:
Hi,
AFAIK, ceph osd down osd.29 should mark osd.29 as down.
But what can I do if this doesn't happen?
I got the following:
root@ceph-02:~# ceph osd down osd.29
marked down osd.29.
root@ceph-02:~# ceph osd tree
2014-06-23 08:51:00.588042 7f15747f5700 0 -- :/1018258 172.20.2.11:6789/0
to force use of aio anyway
2014-06-23 09:08:05.313059 7f1ecb5d6780 -1 flushed journal
/srv/journal/osd.29.journal for object store /var/lib/ceph/osd/ceph-29
root@ceph-02:~# umount /var/lib/ceph/osd/ceph-29
But why doesn't ceph osd down osd.29 work?
Udo
On 23.06.2014 09:01, Udo Lembke wrote:
Hi
Hi Henrik,
On 23.06.2014 09:16, Henrik Korkuc wrote:
ceph osd set noup will prevent OSDs from coming back up. Remember to
run ceph osd unset noup afterwards.
You can stop an OSD with stop ceph-osd id=29.
thanks for the hint!
Udo
___
ceph-users mailing list
Hi all,
I know the formula ( num osds * 100 / replica ) for pg_num and pgp_num
(rounded up to the next power of 2).
But does anything change with two (or three) active pools?
E.g. we have two pools which would each get a pg_num of 4096 by that
formula. Should we use 4096, or 2048 because there are two pools?
best
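For reference, the per-pool formula can be sketched in shell. The 52 OSDs and replica size 3 are hypothetical example values, and whether the result should additionally be divided by the number of pools is exactly the open question above:

```shell
# Per-pool pg_num by the (num osds * 100 / replica) rule, rounded up
# to the next power of two (52 OSDs and size 3 are assumed examples)
osds=52
size=3
raw=$(( osds * 100 / size ))
pg=1
while [ "$pg" -lt "$raw" ]; do
  pg=$(( pg * 2 ))
done
echo "$pg"   # 5200/3 = 1733 -> 2048
```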
Hi,
I don't think that's related, but how full is your ceph cluster? Perhaps
it has something to do with fragmentation on the XFS filesystem
(xfs_db -c frag -r device)?
Udo
On 08.05.2014 02:57, Christian Balzer wrote:
Hello,
ceph 0.72 on Debian Jessie, 2 storage nodes with 2 OSDs
Hi again,
sorry, too fast - but this can't be a problem given your 4GB cache...
Udo
On 08.05.2014 17:20, Udo Lembke wrote:
Hi,
I don't think that's related, but how full is your ceph cluster? Perhaps
it has something to do with fragmentation on the XFS filesystem
(xfs_db -c frag -r
Hi,
perhaps due to IOs from the journal?
You can test with iostat (e.g. iostat -dm 5 sdg).
On Debian, iostat is in the package sysstat.
Udo
On 28.04.2014 07:38, Indra Pramana wrote:
Hi Craig,
Good day to you, and thank you for your enquiry.
As per your suggestion, I have created a 3rd
Hi,
Is the mon process running?
netstat -an | grep 6789 | grep -i listen
Is the filesystem nearly full?
df -k
Any error output if you start the mon in the foreground (here mon b)?
ceph-mon -i b -d -c /etc/ceph/ceph.conf
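The fullness check can be turned into a small threshold test. A sketch: the 90% threshold is arbitrary, and / stands in for the filesystem holding the mon data:

```shell
# Warn when the filesystem holding the mon data is nearly full
# (replace / with the mon's data path, e.g. /var/lib/ceph/mon;
# the 90% threshold is an assumption)
use=$(df -P / | awk 'NR==2 { gsub("%", "", $5); print $5 }')
if [ "$use" -gt 90 ]; then
  echo "mon fs ${use}% full"
else
  echo "mon fs ok (${use}%)"
fi
```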
Udo
On 15.04.2014 16:11, Jonathan Gowar wrote:
Hi,
I had an OSD
Hi,
On 10.04.2014 20:03, Russell E. Glaue wrote:
I am seeing the same thing, and was wondering the same.
We have 16 OSDs on 4 hosts. The filesystem is XFS. The OS is CentOS 6.4.
ceph version 0.72.2
I am importing a 3.3TB disk image into a rbd image.
At 2.6TB, and still importing, 5.197TB
Hi all,
I started the ceph cluster with a weight of 1 for all OSD disks (4TB).
Later I switched to ceph-deploy, and ceph-deploy normally uses a weight of
3.64 for these disks, which makes much more sense!
Now I want to change the weight of all 52 OSDs (on 4 nodes) to 3.64, and
the question is,
Hi Sage,
thanks for the info! I will try it at the weekend.
Udo
On 04.03.2014 15:16, Sage Weil wrote:
The goal should be to increase the weights in unison, which should prevent
any actual data movement (modulo some rounding error, perhaps). At the
moment that can't be done via the CLI, but
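The naive per-OSD CLI way can be sketched as generated commands (the 52 OSDs and the 3.64 weight are from the thread; the loop only prints, pipe its output to sh to apply). Note this sequential way is exactly what Sage advises against, since data moves between each step; the "in unison" change would instead be a single edit of the decompiled CRUSH map:

```shell
# Print one reweight command per OSD for review; pipe the output to sh
# to actually apply (applied one at a time, this does move data)
for i in $(seq 0 51); do
  echo "ceph osd crush reweight osd.$i 3.64"
done
```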
Hi Greg,
I have used the last-resort approach with
ceph osd lost 42 --yes-i-really-mean-it
but the PG is still down:
ceph -s
cluster 591db070-15c1-4c7a-b107-67717bdb87d9
health HEALTH_WARN 206 pgs degraded; 1 pgs down; 57 pgs incomplete;
1 pgs peering; 31 pgs stuck inactive; 145 pgs stuck
Hi,
I switched some disks from manual format to ceph-deploy (because of
slightly different XFS parameters) - all disks are on a single node of a
4-node cluster.
After rebuilding the OSD disks, one PG is incomplete:
ceph -s
cluster 591db070-15c1-4c7a-b107-67717bdb87d9
health HEALTH_WARN 1 pgs
On Sun, Feb 16, 2014 at 12:32 AM, Udo Lembke ulem...@polarzone.de wrote:
Hi,
I switched some disks from manual format to ceph-deploy (because of
slightly different XFS parameters) - all disks are on a single node of a
4-node cluster.
After rebuilding the OSD disks, one PG is incomplete:
ceph -s
Hi,
Perhaps your filesystem is too full?
df -k
du -hs /var/lib/ceph/mon/ceph-st3/store.db
What output/error message do you get if you start the mon in the foreground?
ceph-mon -i st3 -d -c /etc/ceph/ceph.conf
Udo
On 15.02.2014 09:30, Vadim Vatlin wrote:
Hello
Could you help me please
ceph
Hi,
does ceph -s also get stuck on a missing keyring?
Do you have a keyring like:
cat /etc/ceph/keyring
[client.admin]
key = AQCdkHZR2NBYMBAATe/rqIwCI96LTuyS3gmMXp==
Or do you have another keyring defined in ceph.conf?
(global section - keyring = /etc/ceph/keyring)
The key is in ceph - see
ceph
On 14.02.2014 17:58, Karan Singh wrote:
mds cluster is degraded
Hi,
have you tried creating two more MDS daemons?
About degraded: have you changed the weight of the OSDs after the cluster
was healthy?
Udo
Hi Aaron,
thanks for the very useful hint! With ceph osd set noout it works
without trouble. A typical beginner's mistake.
regards
Udo
On 21.01.2014 20:45, Aaron Ten Clay wrote:
Udo,
I think you might have better luck using ceph osd set noout before
doing maintenance, rather than ceph
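Putting the suggestion together, a maintenance sketch (osd id 29 is taken from the thread, and stop/start use the upstart syntax seen earlier; the sequence is printed for review, pipe it to sh on the node to run):

```shell
# Print the maintenance sequence for review; pipe to sh to execute
cat <<'EOF'
ceph osd set noout
stop ceph-osd id=29
# ... do the maintenance, then bring the OSD back ...
start ceph-osd id=29
ceph osd unset noout
EOF
```

With noout set, the cluster tolerates the OSD being down without starting a rebalance; forgetting the final unset leaves the flag in place.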
Hi,
I need a little bit of help.
We have a 4-node ceph cluster, and the clients run into trouble if one
node is down (due to maintenance).
After the node is switched on again, ceph health shows (for a short time):
HEALTH_WARN 4 pgs incomplete; 14 pgs peering; 370 pgs stale; 12 pgs
stuck unclean; 36
Hi,
perhaps the disk has a problem?
Have you looked with smartctl?
(apt-get install smartmontools; smartctl -A /dev/sdX )
Udo
On 15.01.2014 10:49, Rottmann, Jonas (centron GmbH) wrote:
Hi,
I now did an upgrade to dumpling (ceph version 0.67.5
(a60ac9194718083a4b6a225fc17cad6096c69bd1)),
Hi,
yesterday I expanded our 3-node ceph cluster with a fourth node
(13 additional OSDs - all OSDs have the same size (4TB)).
I used the same command as before to add OSDs and change the weight:
ceph osd crush set 44 0.2 pool=default rack=unknownrack host=ceph-04
But ceph osd tree shows all OSDs
On 19.11.2013 06:56, Robert van Leeuwen wrote:
Hi,
...
It looks like it is just using /dev/sdX for this instead of the
/dev/disk/by-id /by-path given by ceph-deploy.
...
Hi Robert,
I'm using disk labels:
fstab:
LABEL=osd.0 /var/lib/ceph/osd/ceph-0 xfs noatime,nodiratime 0
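For completeness, a sketch of how such a label setup might look (the device name and the trailing fstab dump/pass fields here are assumptions, not from the thread):

```
# the label is set when the filesystem is created, e.g.:
#   mkfs.xfs -L osd.0 /dev/sdb1
# or later, on an unmounted XFS filesystem:
#   xfs_admin -L osd.0 /dev/sdb1
# fstab then mounts by label, independent of /dev/sdX renumbering:
LABEL=osd.0  /var/lib/ceph/osd/ceph-0  xfs  noatime,nodiratime  0  0
```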