Hi,
it looks like some OSDs are down?!
What is the output of "ceph osd tree"?
Udo
On 25.09.2014 04:29, Aegeaner wrote:
> The cluster healthy state is WARN:
>
> health HEALTH_WARN 118 pgs degraded; 8 pgs down; 59 pgs
> incomplete; 28 pgs peering; 292 pgs stale; 87 pgs stuck inactive;
Hi Christian,
On 22.09.2014 05:36, Christian Balzer wrote:
> Hello,
>
> On Sun, 21 Sep 2014 21:00:48 +0200 Udo Lembke wrote:
>
>> Hi Christian,
>>
>> On 21.09.2014 07:18, Christian Balzer wrote:
>>> ...
>>> Personally I found ext4 to be faster than
Hi Christian,
On 21.09.2014 07:18, Christian Balzer wrote:
> ...
> Personally I found ext4 to be faster than XFS in nearly all use cases and
> the lack of full, real kernel integration of ZFS is something that doesn't
> appeal to me either.
a little bit OT... what kind of ext4 mount options do you
Hi list,
over the weekend one of our five OSD nodes failed (hung with a kernel panic).
The cluster degraded (12 of 60 OSDs), but our monitoring host sets the
noout flag in this case.
But around three hours later the KVM guests, which use storage on the
ceph cluster (and do writes), became inaccessible.
Hi,
I don't see an improvement with tcp_window_scaling=0 in my configuration.
Rather the opposite: the iperf performance is much lower:
root@ceph-03:~# iperf -c 172.20.2.14
Client connecting to 172.20.2.14, TCP port 5001
TCP window size:
Hi again,
forgot to say - I'm still on 0.72.2!
Udo
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hi Steve,
I'm also looking for improvements in single-thread reads.
Somewhat higher values (twice as high?) should be possible with your config.
I have 5 nodes with 60 4 TB HDDs and got the following:
rados -p test bench -b 4194304 60 seq -t 1 --no-cleanup
Total time run:    60.066934
Total reads made
Hi,
which values are changed by "ceph osd crush tunables optimal"?
Is it perhaps possible to change some parameters on the weekends before the
upgrade runs, to have more time?
(Depends on whether the parameters are available in 0.72...)
The warning said it can take days... we have a cluster w
Hi Erich,
I'm also searching for improvements.
You should use the "right" mount options to prevent fragmentation (for XFS):
[osd]
osd mount options xfs = "rw,noatime,inode64,logbsize=256k,delaylog,allocsize=4M"
osd_op_threads = 4
osd_disk_threads = 4
With 45 OSDs per node you need a powerful
Hi,
On 25.06.2014 16:48, Aronesty, Erik wrote:
> I'm assuming you're testing the speed of cephfs (the file system) and not
> ceph "object storage".
for my part I mean object storage (VM disk via rbd).
Udo
Hi,
I am also trying to tune single-thread performance.
You can try the following parameters:
[osd]
osd mount options xfs =
"rw,noatime,inode64,logbsize=256k,delaylog,allocsize=4M"
osd_op_threads = 4
osd_disk_threads = 4
Udo
On 25.06.2014 08:52, wsnote wrote:
> OS: CentOS 6.5
> Version:
Hi Henrik,
On 23.06.2014 09:16, Henrik Korkuc wrote:
> "ceph osd set noup" will prevent osd's from becoming up. Later remember
> to run "ceph osd unset noup".
>
> You can stop OSD with "stop ceph-osd id=29".
>
>
thanks for the hint!
Udo
force use
of aio anyway
2014-06-23 09:08:05.313059 7f1ecb5d6780 -1 flushed journal
/srv/journal/osd.29.journal for object store /var/lib/ceph/osd/ceph-29
root@ceph-02:~# umount /var/lib/ceph/osd/ceph-29
But why doesn't "ceph osd down osd.29" work?
Udo
On 23.06.2014 09:01, U
Hi,
AFAIK "ceph osd down osd.29" should mark osd.29 as down.
But what can you do if this doesn't happen?
I got the following:
root@ceph-02:~# ceph osd down osd.29
marked down osd.29.
root@ceph-02:~# ceph osd tree
2014-06-23 08:51:00.588042 7f15747f5700 0 -- :/1018258 >>
172.20.2.11:6789/0 pipe(0x7
Hi,
take a look with:
rados df
rbd -p <pool> ls
and with -l option for long output like
rbd -p rbd ls -l
NAME SIZE PARENT FMT PROT LOCK
vm-127-disk-1 35000M 2
vm-131-disk-1 8192M 2
vm-135-disk-1 8192M 2
...
Udo
On 19.06.2014 09:42, wsno
Hi all,
I know the formula ( num osds * 100 / replica ) for pg_num and pgp_num
(rounded up to the next power of 2).
But does something change with two (or three) active pools?
E.g. we have two pools which should each have a pg_num of 4096. Should we
use 4096, or 2048 because of the two pools?
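In case it helps, the single-pool formula can be checked quickly in the shell (just a sketch; the 60-OSD / 3-replica numbers are only an example, not from a real cluster):

```shell
# pg_num = (num osds * 100 / replica), rounded up to the next power of 2
# example values only - adjust num_osds and replica to your cluster
num_osds=60
replica=3
raw=$(( num_osds * 100 / replica ))    # 60 * 100 / 3 = 2000
pg_num=1
while [ "$pg_num" -lt "$raw" ]; do
  pg_num=$(( pg_num * 2 ))
done
echo "$pg_num"    # next power of 2 >= 2000 -> 2048
```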
best reg
Hi again,
sorry, too fast - but this can't be a problem given your 4 GB cache...
Udo
On 08.05.2014 17:20, Udo Lembke wrote:
> Hi,
> I don't think that's related, but how full is your ceph cluster? Perhaps
> it has something to do with fragmentation on the xfs-file
Hi,
I don't think that's related, but how full is your ceph cluster? Perhaps
it has something to do with fragmentation on the XFS filesystem
(xfs_db -c frag -r device)?
Udo
On 08.05.2014 02:57, Christian Balzer wrote:
>
> Hello,
>
> ceph 0.72 on Debian Jessie, 2 storage nodes with 2 OSDs
Hi,
perhaps due to IOs from the journal?
You can check with iostat (e.g. "iostat -dm 5 sdg").
On Debian, iostat is in the sysstat package.
Udo
On 28.04.2014 07:38, Indra Pramana wrote:
> Hi Craig,
>
> Good day to you, and thank you for your enquiry.
>
> As per your suggestion, I have created a 3r
Hi,
is the mon process running?
netstat -an | grep 6789 | grep -i listen
Is the filesystem nearly full?
df -k
Is there any error output if you start the mon in the foreground (here mon "b")?
ceph-mon -i b -d -c /etc/ceph/ceph.conf
Udo
On 15.04.2014 16:11, Jonathan Gowar wrote:
> Hi,
>
> I had an OSD
Hi,
On 10.04.2014 20:03, Russell E. Glaue wrote:
> I am seeing the same thing, and was wondering the same.
>
> We have 16 OSDs on 4 hosts. The File system is Xfs. The OS is CentOS 6.4.
> ceph version 0.72.2
>
> I am importing a 3.3TB disk image into a rbd image.
> At 2.6TB, and still importing, 5
Hi Sage,
thanks for the info! I will try it at the weekend.
Udo
On 04.03.2014 15:16, Sage Weil wrote:
> The goal should be to increase the weights in unison, which should prevent
> any actual data movement (modulo some rounding error, perhaps). At the
> moment that can't be done via the CLI, but
Hi all,
I started the ceph cluster with a weight of 1 for all OSD disks (4 TB).
Later I switched to ceph-deploy, and ceph-deploy normally uses a weight of
3.64 for these disks, which makes much more sense!
Now I want to change the weight of all 52 OSDs (on 4 nodes) to 3.64, and
the question is,
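For the reweighting itself, a loop like the following should do it (just a sketch; it assumes the OSD ids run 0..51 - the echo makes it a dry run, pipe the output to sh to actually apply it):

```shell
# build one "ceph osd crush reweight" command per OSD (ids assumed 0..51)
# printed first as a dry run; pipe to sh to actually run them
cmds=$(for i in $(seq 0 51); do
  echo "ceph osd crush reweight osd.$i 3.64"
done)
echo "$cmds"
```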
Hi Greg,
I have used the last-resort way with
ceph osd lost 42 --yes-i-really-mean-it
but the PG is still down:
ceph -s
cluster 591db070-15c1-4c7a-b107-67717bdb87d9
health HEALTH_WARN 206 pgs degraded; 1 pgs down; 57 pgs incomplete;
1 pgs peering; 31 pgs stuck inactive; 145 pgs stuck unc
"last": 21598,
"maybe_went_rw": 1,
"up": [
47,
31],
"acting": [
47,
31]},
{ "first": 2
Hi,
I switched some disks from manual formatting to ceph-deploy (because of
slightly different XFS parameters) - all disks are on a single node of a
4-node cluster.
After rebuilding the OSD disks, one PG is incomplete:
ceph -s
cluster 591db070-15c1-4c7a-b107-67717bdb87d9
health HEALTH_WARN 1 pgs in
Hi,
does "ceph -s" also get stuck on a missing keyring?
Do you have a keyring like:
cat /etc/ceph/keyring
[client.admin]
key = AQCdkHZR2NBYMBAATe/rqIwCI96LTuyS3gmMXp==
Or do you have another keyring defined in ceph.conf?
global section -> keyring = /etc/ceph/keyring
The key is in ceph - see
ce
Hi,
perhaps your filesystem is too full?
df -k
du -hs /var/lib/ceph/mon/ceph-st3/store.db
What output/error message do you get if you start the mon in the foreground?
ceph-mon -i st3 -d -c /etc/ceph/ceph.conf
Udo
On 15.02.2014 09:30, Vadim Vatlin wrote:
> Hello
> Could you help me please
>
> ce
On 14.02.2014 17:58, Karan Singh wrote:
> mds cluster is degraded
Hi,
have you tried creating two more MDS?
About degraded: did you change the weight of the OSDs after the cluster
was healthy?
Udo
qual:
ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)
How can I convert the .ldb files to .sst?
Any hints?
Best regards
Udo Lembke
Hi Aaron,
thanks for the very useful hint! With "ceph osd set noout" it works
without trouble. A typical beginner's mistake.
regards
Udo
On 21.01.2014 20:45, Aaron Ten Clay wrote:
> Udo,
>
> I think you might have better luck using "ceph osd set noout" before
> doing maintenance, rather than
Hi,
I need a little help.
We have a 4-node ceph cluster, and the clients run into trouble if one
node is down (due to maintenance).
After the node is switched on again, ceph health shows (for a short time):
HEALTH_WARN 4 pgs incomplete; 14 pgs peering; 370 pgs stale; 12 pgs
stuck unclean; 36 req
Hi,
perhaps the disk has a problem?
Have you looked with smartctl?
(apt-get install smartmontools; smartctl -A /dev/sdX )
Udo
On 15.01.2014 10:49, Rottmann, Jonas (centron GmbH) wrote:
>
> Hi,
>
>
>
> I now did an upgrade to dumpling (ceph version 0.67.5
> (a60ac9194718083a4b6a225fc17cad6096c69
Hi,
yesterday I expanded our 3-node ceph cluster with a fourth node
(13 additional OSDs - all OSDs have the same size (4 TB)).
I used the same command as before to add the OSDs and change the weight:
ceph osd crush set 44 0.2 pool=default rack=unknownrack host=ceph-04
But ceph osd tree shows all OSDs n
On 19.11.2013 06:56, Robert van Leeuwen wrote:
> Hi,
>
> ...
> It looks like it is just using /dev/sdX for this instead of the
> /dev/disk/by-id /by-path given by ceph-deploy.
>
> ...
Hi Robert,
I'm using disk labels:
fstab:
LABEL=osd.0 /var/lib/ceph/osd/ceph-0 xfs noatime,nodiratime
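In case it helps: such a label can be set at format time (or added later with xfs_admin). The device name below is only an example - adjust it to your disk before running anything:

```shell
# set an XFS label so fstab can mount by LABEL instead of /dev/sdX
# /dev/sdb1 is an example device - this reformats it, so double-check!
mkfs.xfs -L osd.0 /dev/sdb1
# or, non-destructively on an existing (unmounted) XFS filesystem:
# xfs_admin -L osd.0 /dev/sdb1
```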