Just tried the 4.0 kernel, still do not encounter any problem. Please run the
test again; when the test hangs, check /sys/kernel/debug/ceph/*/mdsc
and /sys/kernel/debug/ceph/*/osdc
to find which request is hung.
By the way, do you have a cephfs mount on a host which runs ceph-osd/ceph-mds?
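For example (a minimal sketch; the wildcard expands to the client's fsid.client-id directory, and any lines listed are still-outstanding requests with their tid and target):
# cat /sys/kernel/debug/ceph/*/mdsc
# cat /sys/kernel/debug/ceph/*/osdc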
On Wed, Aug
Thanks Nick for your suggestion.
Can you also tell me how I can reduce the RBD block size to 512K or 1M? Do I
need to put something in the clients' ceph.conf (what parameter do I need to set)?
Thanks once again
- Vickey
On Wed, Aug 12, 2015 at 4:49 PM, Nick Fisk n...@fisk.me.uk wrote:
-Original
On Thu, Aug 13, 2015 at 5:12 AM, Bob Ababurko b...@ababurko.net wrote:
I am actually looking for the most stable way to implement cephfs at this
point. My cephfs cluster contains millions of small files, so many inodes
if that needs to be taken into account. Perhaps I should only be
On Thu, Aug 13, 2015 at 3:29 AM, yangyongp...@bwstor.com.cn
yangyongp...@bwstor.com.cn wrote:
I also encounter a problem: the standby MDS cannot be promoted to active when
the active MDS service is stopped, which has bothered me for several days. Maybe an MDS cluster
can solve those problems, but the ceph team hasn't
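(Not part of this thread, but for what it's worth: on hammer-era CephFS a standby daemon can be configured to follow the active MDS via ceph.conf; a sketch, where the mds id 'b' and rank 0 are placeholders:)
[mds.b]
mds standby replay = true
mds standby for rank = 0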
Hi.
This document applies only to RadosGW.
You need to read the correct document:
https://wiki.ceph.com/Planning/Blueprints/Hammer/RBD%3A_Mirroring
Best regards, Fasikhov Irek Nurgayazovich
Mobile: +79229045757
2015-08-13 11:40 GMT+03:00 Özhan Rüzgar Karaman oruzgarkara...@gmail.com:
Hi;
I like to
Hi;
I would like to learn about Ceph's geographical replication and disaster recovery
options. I know that we currently do not have a built-in, official geo-replication
or disaster recovery feature; there are some third-party tools like
DRBD, but they are not the kind of solution that businesses need.
I also read the
So, after testing the SSD (I wiped one SSD and used it for tests):
root@ix-s2:~# sudo fio --filename=/dev/sda --direct=1 --sync=1 --rw=write
--bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting
--name=journal-test
journal-test: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K,
Hi, Igor.
Try applying the patch here:
http://www.theirek.com/blog/2014/02/16/patch-dlia-raboty-s-enierghoniezavisimym-keshiem-ssd-diskov
P.S. I no longer track changes in this area (the kernel), because we
already use the recommended SSDs.
Best regards, Fasikhov Irek Nurgayazovich
Mobile: +79229045757
Hello everyone,
Today I have a cluster with 4 hosts, and I created a pool that uses the erasure
code profile below:
##
directory=/usr/lib/ceph/erasure-code
k=3
m=1
plugin=jerasure
ruleset-failure-domain=host
technique=reed_sol_van
##
This cluster is used only for RGW and I’m planning
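(For reference, a profile like the one above would typically be created and attached to a pool with commands along these lines; 'ecprofile', 'ecpool', and the PG count of 128 are placeholders:)
# ceph osd erasure-code-profile set ecprofile k=3 m=1 plugin=jerasure ruleset-failure-domain=host technique=reed_sol_van
# ceph osd pool create ecpool 128 128 erasure ecprofile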
Hi,
I have a Ceph cluster running on 4 physical servers; the cluster is up and
healthy.
So far I have been unable to connect any client to the cluster using krbd or the fio rbd
plugin.
My clients can see and create images in the rbd pool but cannot map them:
root@r-dcs68 ~ # rbd ls
fio_test
foo
foo1
foo_test
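(Not from this thread, but a generic first step when mapping fails is to capture the exact error and the kernel's complaint, e.g. using the 'foo' image listed above:)
# rbd map rbd/foo
# dmesg | tail
(On older kernels, a format-2 image with newer features enabled is a common reason krbd refuses to map.)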
I tested and can recommend the Samsung 845 DC PRO (make sure it is DC PRO and
not just PRO or DC EVO!).
Those were very cheap but are out of stock at the moment (here).
Faster than the Intels, cheaper, and built on slightly different technology (3D V-NAND),
which IMO makes them superior without needing many
So, good, but the price of the 845 DC PRO 400 GB is about 2x higher than the
Intel S3500 240 GB (((
Any other models? (((
2015-08-13 15:45 GMT+03:00 Jan Schermer j...@schermer.cz:
I tested and can recommend the Samsung 845 DC PRO (make sure it is DC PRO
and not just PRO or DC EVO!).
Those were very
Hello,
I'm having an issue where disk usage between OSDs isn't well balanced,
causing disk space to be wasted. Ceph is the latest 0.94.2, used
exclusively through cephfs. Re-weighting helps, but only slightly, and
it has to be done on a daily basis, causing constant refills. In the end
I get OSD
Dear all,
I tried to create OSDs and got an error message (old/different cluster
instance?).
The OSDs can be created but do not become active. This server has built OSDs before.
Please give me some advice.
OS: RHEL 7
Ceph: 0.80 Firefly
Best wishes,
Mika
Dear Team,
We are using two Ceph OSDs with replica 2 and it is working properly.
My doubt is this: suppose Pool A's image size is 10GB and it is replicated across the two
OSDs; what will happen if the size reaches the limit? Is there any
chance to make the data continue writing in
I decided to set OSD 76 out and let the cluster shuffle the data off
that disk and then brought the OSD back in. For the most part this
seemed to be working, but then I had 1 object degraded and 88xxx
objects misplaced:
# ceph health detail
HEALTH_WARN 11 pgs stuck unclean; recovery 1/66089446
Use the order parameter when creating an RBD image: 22 = 4MB objects, 20 = 1MB.
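For example (a sketch; image name and size are placeholders):
# rbd create --size 10240 --order 20 rbd/testimage
An order of 20 gives 2^20-byte (1MB) objects; the default of 22 gives 4MB objects.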
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Vickey
Singh
Sent: 13 August 2015 09:31
To: Nick Fisk n...@fisk.me.uk
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] Cache tier best practices
Thanks
I think you're looking for this.
http://ceph.com/docs/master/man/8/rbd/#cmdoption-rbd--order
It's used when you create the RBD images. 1MB is order=20, 512KB is order=19.
Thanks,
Bill Sanders
On Thu, Aug 13, 2015 at 1:31 AM, Vickey Singh vickey.singh22...@gmail.com
wrote:
Thanks Nick for
Hi,
I'm trying to use an RBD to act as a staging area for some data before
pushing it down to some LTO6 tapes. As I cannot use striping with the kernel
client, I tend to max out at around 80MB/s reads when testing with dd. Has
anyone got any clever suggestions for giving this a bit of a boost, I
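(One generic tweak, not from this thread: raise the readahead on the mapped device so sequential reads fetch further ahead; a sketch assuming the image is mapped as /dev/rbd0:)
# echo 4096 > /sys/block/rbd0/queue/read_ahead_kb
# blockdev --setra 8192 /dev/rbd0
(blockdev takes the value in 512-byte sectors, so both lines request roughly 4MB of readahead.)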
On 13.08.2015 18:01, GuangYang wrote:
Try 'ceph osd reweight-by-pg int' right after creating the pools?
Would it do any good now, when the pool is in use and nearly full, as I can't
re-create it? Also, what's the integer argument in the command
above? I failed to find a proper explanation in the
I don't see anything obvious, sorry..
Looks like something with osd.{5, 76, 38}, which are absent from the *up* set
even though they are up. How about increasing the log level with 'debug_osd = 20' on osd.76
and restarting the OSD?
Thanks,
Guang
Date: Thu, 13 Aug
There are three factors that impact the disk utilization of an OSD:
1. the number of PGs on the OSD (determined by CRUSH)
2. the number of objects within each PG (it is better to pick a power-of-two PG count to make
this more even)
3. object size deviation
With 'ceph osd reweight-by-pg', you can tune (1). And if
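(Regarding the integer argument asked about earlier: it appears to be an overload threshold in percent of the average PG count; a sketch, where 120 and the pool name are illustrative:)
# ceph osd reweight-by-pg 120
# ceph osd reweight-by-pg 120 cephfs_dt
Only OSDs holding more than 120% of the average number of PGs would be reweighted down.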
You can add one or more OSDs, and Ceph will rebalance the distribution of PGs. Your data
will not run out of space as long as you have enough capacity.
yangyongp...@bwstor.com.cn
From: gjprabu
Date: 2015-08-13 22:42
To: ceph-users
CC: Kamala Subramani; Siva Sokkumuthu
Subject: [ceph-users] ceph distributed osd
Hi Goncalo,
On Fri, 14 Aug 2015 13:30:35 +1000
Goncalo Borges gonc...@physics.usyd.edu.au wrote:
Is this expected? Are those PGs actually assigned to something that
does not exist?
Objects are mapped to PGs algorithmically, based on their names. You can
think of that result as telling you
Hi Ceph gurus...
I am using 0.94.2 in all my Ceph / CephFS installations.
While trying to understand how files are translated into objects, it
seems that 'ceph osd map' returns a valid answer even for objects that
do not exist.
# ceph osd map cephfs_dt thisobjectdoesnotexist
osdmap e341
Hi,
When setting up teuthology in my own environment, I found the following
problem:
In the file teuthology/__init__.py, importing
gevent.monkey conflicts with paramiko, and if
create_nodes.py is used to connect to the paddles/pulpito node, it
hangs.
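(A quick way to see the import-order interaction; a sketch, not from the thread. gevent's monkey-patching has to happen before paramiko is imported, otherwise paramiko's threading/SSL internals stay unpatched and can hang later:)
# problematic order: paramiko imported before patching
python -c "import paramiko; from gevent import monkey; monkey.patch_all()"
# supported order: patch first, then import paramiko
python -c "from gevent import monkey; monkey.patch_all(); import paramiko"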
OSD tree: http://pastebin.com/3z333DP4
Crushmap: http://pastebin.com/DBd9k56m
I realize these nodes are quite large; I have plans to break them out
into 12 OSDs/node.
On Thu, Aug 13, 2015 at 9:02 AM, GuangYang yguan...@outlook.com wrote:
Could you share the 'ceph osd tree dump' and CRUSH map
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Nick Fisk
Sent: 13 August 2015 18:04
To: ceph-users@lists.ceph.com
Subject: [ceph-users] How to improve single thread sequential reads?
Hi,
I'm trying to use a RBD to act as a staging
Try 'ceph osd reweight-by-pg int' right after creating the pools? What is the
typical object size in the cluster?
Thanks,
Guang
To: ceph-users@lists.ceph.com
From: vedran.fu...@gmail.com
Date: Thu, 13 Aug 2015 14:58:11 +0200
Subject: [ceph-users]
Could you share the 'ceph osd tree' dump and CRUSH map dump?
Thanks,
Guang
Date: Thu, 13 Aug 2015 08:16:09 -0700
From: sdain...@spd1.com
To: yangyongp...@bwstor.com.cn; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Cluster health_warn 1