EP,
Try setting the crush map to use legacy tunables. I've had the same issue with
the "feature mismatch" errors when using krbd that didn't support format 2 and
running jewel 10.2.2 on the storage nodes.
From the command line:
ceph osd crush tunables legacy
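You can double-check what the cluster is actually using afterwards with:

ceph osd crush show-tunables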
Bruce
> On Aug 16, 2016, at 4:21
You'll need to set it on the monitor too.
Sent from my iPhone
> On Aug 15, 2016, at 2:24 PM, EP Komarla wrote:
>
> Team,
>
> I am trying to configure the rbd readahead value. Before I increase this
> value, I am trying to find out the current value that is set.
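One way to check the value currently in effect is to dump the config the client would use, e.g.:

ceph --show-config | grep rbd_readahead

or, if the client has an admin socket enabled (the socket path below is only an example), ask the running process:

ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok config show | grep rbd_readahead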
I've been asked to look at the performance of RHEL 7.1/RHCS 1.3. I keep running
into these errors on one of my RHEL 7.1 client systems. The rbd devices are still
present, but ceph-rbdnamer is not in /usr/bin, though it is in /usr/bin on trusty.
Much like the rbdmap init script that ships with RHEL 7.1,
Dainard [mailto:sdain...@spd1.com]
Sent: Friday, July 17, 2015 1:59 PM
To: Bruce McFarland
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Workaround for RHEL/CentOS 7.1 rbdmap service
start warnings?
Other than those errors, do you find RBDs will not be unmapped on system
restart?
When starting the rbdmap.service to provide map/unmap of rbd devices across
boot/shutdown cycles, the /etc/init.d/rbdmap script includes /lib/lsb/init-functions.
This is not a problem except that the rbdmap script makes calls to the
log_daemon_*, log_progress_*, and log_action_* functions, which are not defined on RHEL/CentOS 7.1.
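A possible workaround (just a sketch I haven't verified; the file name and messages are placeholders) is to give the script no-op stand-ins for whichever log_* helpers it actually calls, for example by sourcing something like this before the service runs:

# /usr/local/lib/lsb-log-stubs.sh (hypothetical) - minimal stand-ins for the
# Debian LSB logging helpers that RHEL/CentOS 7.1 does not provide
log_daemon_msg()   { echo -n "$*"; }
log_progress_msg() { echo -n " $*"; }
log_action_msg()   { echo "$*"; }
log_end_msg()      { if [ "${1:-0}" -eq 0 ]; then echo " ... done."; else echo " ... failed!"; fi; return "${1:-0}"; }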
Is there a classic Ceph cluster test matrix? I'm wondering what's done for
releases, i.e. block sizes of 4k, 128k, 1M, 4M; sequential, random, 80/20 mix; number of
concurrent IOs? I've seen some spreadsheets in the past, but can't find them.
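The kind of sweep I have in mind looks roughly like this fio invocation (only a sketch; the device and values are placeholders, repeated with bs=128k/1M/4M and rw=read/write/randread/randwrite for the other cases):

fio --name=rbd-sweep --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
    --rw=randrw --rwmixread=80 --bs=4k --iodepth=32 --runtime=300 --time_based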
Thanks,
Bruce
/cosbench.README
Mark
On 07/08/2015 02:55 PM, Bruce McFarland wrote:
Is there a classic Ceph cluster test matrix? I'm wondering what's
done for releases, i.e. block sizes of 4k, 128k, 1M, 4M; sequential, random,
80/20 mix; number of concurrent IOs? I've seen some spreadsheets in the past,
but can't find them.
creating empty
object store in /var/lib/ceph/osd/ceph-0: (22) Invalid argument
[root@ceph0 ceph]#
-Original Message-
From: Bruce McFarland
Sent: Monday, June 29, 2015 11:39 AM
To: 'Loic Dachary'; ceph-users@lists.ceph.com
Subject: RE: [ceph-users] RHEL 7.1 ceph-disk failures creating OSD
partx: /dev/sdc: error adding partition 1
[root@ceph0 ceph]#
-Original Message-
From: Loic Dachary [mailto:l...@dachary.org]
Sent: Saturday, June 27, 2015 1:08 AM
To: Bruce McFarland; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] RHEL 7.1 ceph-disk failures creating OSD
Hi Bruce
Do these issues occur in Centos 7 also?
-Original Message-
From: Bruce McFarland
Sent: Monday, June 29, 2015 12:06 PM
To: 'Loic Dachary'; 'ceph-users@lists.ceph.com'
Subject: RE: [ceph-users] RHEL 7.1 ceph-disk failures creating OSD with ver
0.94.2
Using the manual method
...@dachary.org]
Sent: Friday, June 26, 2015 3:29 PM
To: Bruce McFarland; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] RHEL 7.1 ceph-disk failures creating OSD
Hi,
Prior to firefly v0.80.8, ceph-disk zap did not call partprobe, and that was
causing the kind of problems you're experiencing.
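If you're on an affected version, running partprobe yourself after the zap should work around it, roughly along these lines (the device name is just an example):

ceph-disk zap /dev/sdc
partprobe /dev/sdc
ceph-disk prepare /dev/sdc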
I've always used Ubuntu for my Ceph client OS and found out in the lab that
CentOS/RHEL 6.x doesn't have the kernel rbd support. I wanted to investigate
using RHEL 7.1 for the client OS. Is there a kernel rbd module that installs
with RHEL 7.1? If not, are there 7.1 RPMs or source tarballs?
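(I assume I can check any given kernel for the module directly with something like:

modinfo rbd
modprobe rbd && lsmod | grep rbd

but I'd like to know what ships with 7.1.)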
I followed the Calamari build instructions here:
http://ceph.com/category/ceph-step-by-step/
I used an Ubuntu 14.04 system to build all of the Calamari client and server
packages for CentOS 6.5 and Ubuntu Trusty (14.04).
Once the packages were built I also referenced the Calamari instructions
PyZMQ: 14.5.0
RAET: Not Installed
ZMQ: 4.0.5
Mako: Not Installed
root@KVDrive11:~#
-Original Message-
From: Gregory Meno [mailto:gm...@redhat.com]
Sent: Wednesday, May 13, 2015 3:52 PM
To: Bruce McFarland
Cc: Michael Kuriger; ceph-calam...@lists.ceph.com
. The calamari
master is running on Ubuntu 14.04.
From: Michael Kuriger [mailto:mk7...@yp.com]
Sent: Wednesday, May 13, 2015 2:00 PM
To: Bruce McFarland; ceph-calam...@lists.ceph.com; ceph-us...@ceph.com;
ceph-devel (ceph-de...@vger.kernel.org)
Subject: Re: [ceph-users] Does anyone understand
I am having a similar issue. The cluster is up, and salt is running on and has
accepted keys from all nodes, including the monitor. I can issue salt and
salt/ceph.py commands from the Calamari master, including 'salt \* ceph.get_heartbeats',
which returns from all nodes including the monitor with the
[salt.crypt ][DEBUG
][5912] Re-using SAuth for ('/etc/salt/pki/minion', 'octeon109',
'tcp://209.243.160.35:4506')
-Original Message-
From: Bruce McFarland
Sent: Tuesday, May 12, 2015 6:11 PM
To: 'Gregory Meno'
Cc: ceph-calam...@lists.ceph.com; ceph-us
supervisorctl to start/stop Cthulhu.
I've performed the calamari-ctl clear/init sequence more than twice, along with
stopping/starting apache2 and Cthulhu.
-Original Message-
From: Gregory Meno [mailto:gm...@redhat.com]
Sent: Tuesday, May 12, 2015 5:58 PM
To: Bruce McFarland
Cc: ceph-calam
I've run into an issue starting OSDs where I'm running out of ports. I've
increased the port range with ms bind port max, and on the next attempt to
start the OSD it reports no ports in the new range. I am only running one OSD on
the node and rarely restart it. I've increased the debug level
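For reference, what I'm adjusting looks like this in ceph.conf (the values here are only examples):

[osd]
    ms bind port min = 6800
    ms bind port max = 7300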
You won't get a PG warning message from ceph -s unless you have fewer than 20 PGs per
OSD in your cluster.
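That threshold comes from mon_pg_warn_min_per_osd; if in doubt, you can check what your build uses with something like:

ceph --show-config | grep mon_pg_warn_min_per_osd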
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Bruce
McFarland
Sent: Tuesday, April 14, 2015 10:00 AM
To: Giuseppe Civitella; Saverio Proto
Cc: ceph-users@lists.ceph.com
I use this to quickly check pool stats:
[root@ceph-mon01 ceph]# ceph osd dump | grep pool
pool 0 'data' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins
pg_num 64 pgp_num 64 last_change 1 flags hashpspool crash_replay_interval 45
stripe_width 0
pool 1 'metadata' replicated size
system to use APT (Ubuntu) or
RPM (Centos) and the code for the ceph.repo file. There are also package
dependency lists, trusted keys, etc.
Bruce
-Original Message-
From: Loic Dachary [mailto:l...@dachary.org]
Sent: Tuesday, April 07, 2015 1:32 AM
To: Bruce McFarland; ceph-users
I'm not sure about CentOS 7.0, but Ceph is not part of the 6.5 distro.
Sent from my iPhone
On Apr 7, 2015, at 12:26 PM, Loic Dachary l...@dachary.org wrote:
On 07/04/2015 18:51, Bruce McFarland wrote:
Loic,
You're not mistaken, the pages are listed under the Installation (Manual)
link.
I'm not sure exactly what your steps were, but I reinstalled a monitor
yesterday on CentOS 6.5 using ceph-deploy with the /etc/yum.repos.d/ceph.repo
from ceph.com, which I've included below.
Bruce
[root@essperf13 ceph-mon01]# ceph -v
ceph version 0.80.9
to the calamari master.
Thanks,
Bruce
From: Quentin Hartman [mailto:qhart...@direwolfdigital.com]
Sent: Wednesday, April 01, 2015 1:56 PM
To: Bruce McFarland
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Calamari Questions
You should have a config page in the Calamari UI where you can accept OSD nodes
I've built the Calamari client, server, and diamond packages from source for
trusty and CentOS and installed them on the trusty master. I installed the diamond and
salt packages on the storage nodes. I can connect to the calamari master,
accept salt keys from the ceph nodes, but then Calamari reports 3
of ceph.conf? rbd admin
sockets? Nothing at ceph.com/docs on either topic.
Thanks,
Bruce
-Original Message-
From: Mykola Golub [mailto:to.my.troc...@gmail.com]
Sent: Sunday, February 01, 2015 1:24 PM
To: Udo Lembke
Cc: Bruce McFarland; ceph-us...@ceph.com; Prashanth Nednoor
Subject: Re: [ceph
-Original Message-
From: Nicheal [mailto:zay11...@gmail.com]
Sent: Monday, February 02, 2015 7:35 PM
To: Bruce McFarland
Cc: ceph-us...@ceph.com; Prashanth Nednoor
Subject: Re: [ceph-users] RBD caching on 4K reads???
It seems you use the kernel rbd. So rbd_cache does not work, which is just
and
physically disabling kernel caching.
I have a cluster and have created an rbd device - /dev/rbd1. It shows up as
expected with 'rbd --image test info' and 'rbd showmapped'. I have been looking at
cluster performance with the usual Linux block device tools - fio and vdbench.
When I look at writes and large block sequential reads I'm
[mailto:ulem...@polarzone.de]
Sent: Friday, January 30, 2015 1:00 PM
To: Bruce McFarland; ceph-us...@ceph.com
Cc: Prashanth Nednoor
Subject: Re: [ceph-users] RBD caching on 4K reads???
Hi Bruce,
Hmm, that sounds to me like the rbd cache.
Can you check whether the cache is really disabled in the running config?
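For example via the admin socket, something along these lines (the socket path is only an example):

ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep rbd_cache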
+clean; 0 bytes data, 135 GB used, 327 TB / 327
TB avail
-Original Message-
From: Christian Balzer [mailto:ch...@gol.com]
Sent: Monday, August 25, 2014 1:15 AM
To: ceph-us...@ceph.com
Cc: Bruce McFarland
Subject: Re: [ceph-users] Monitor/OSD report tuning question
Hello,
On Sat, 23 Aug
osd_heartbeat_grace: 35,
[root@ceph-mon01 ceph]#
-Original Message-
From: Bruce McFarland
Sent: Monday, August 25, 2014 10:46 AM
To: 'Gregory Farnum'
Cc: ceph-us...@ceph.com
Subject: RE: [ceph-users] osd_heartbeat_grace set to 30 but osd's still fail
for grace 20
That's something that has been puzzling
settings can handle?
-Original Message-
From: Gregory Farnum [mailto:g...@inktank.com]
Sent: Monday, August 25, 2014 11:01 AM
To: Bruce McFarland
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] osd_heartbeat_grace set to 30 but osd's still fail
for grace 20
On Mon, Aug 25, 2014 at 10:56
I see OSDs being failed for heartbeat reporting with the default osd_heartbeat_grace
of 20, but the runtime config shows that the grace is set to 30. Is there
another variable for the osd or the mon I need to set for the non-default
osd_heartbeat_grace of 30 to take effect?
2014-08-23 23:03:08.982590
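For what it's worth, my understanding is that the grace has to be visible to the monitors as well as the OSDs, so the usual approach is to set it cluster-wide in ceph.conf (the value below is only an example) and restart, or inject it into the running OSDs:

[global]
    osd heartbeat grace = 30

ceph tell osd.* injectargs '--osd_heartbeat_grace 30'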
Hello,
I have a cluster with 30 OSDs distributed over 3 storage servers connected by a
10G cluster link and connected to the monitor over 1G. I still have a lot to
understand with Ceph. Observing the cluster messages in a 'ceph -w' watch window
I see a lot of OSD flapping when it is sitting in a
]
Sent: Thursday, August 21, 2014 1:17 AM
To: Bruce McFarland
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] MON running 'ceph -w' doesn't see OSD's booting
Hi,
You only have one OSD? I've seen similar strange things in test pools having
only one OSD - and I kinda explained it by assuming
://ceph.com
On Thu, Aug 21, 2014 at 7:11 AM, Bruce McFarland
bruce.mcfarl...@taec.toshiba.com wrote:
I have 3 storage servers each with 30 osds. Each osd has a journal
that is a partition on a virtual drive that is a raid0 of 6 ssds. I
brought up a 3 osd
(1 per storage server) cluster to bring up
I've tried using ceph-deploy but it wants to assign the same id for each osd,
and I end up with a bunch of prepared ceph-disks and only 1 active. If I
use the manual short-form method, the activate step fails and there are no xfs
mount points on the ceph-disks. If I use the manual long form it
[ceph_deploy.osd][DEBUG ] Host ceph0 is now ready for osd use.
From: Bruce McFarland
Sent: Thursday, August 14, 2014 11:45 AM
To: 'ceph-us...@ceph.com'
Subject: How to create multiple OSD's per host?
I've tried using ceph-deploy but it wants to assign the same id for each osd
and I end up
...@ceph.com
Subject: Re: [ceph-users] How to create multiple OSD's per host?
2014-08-15 7:56 GMT+08:00 Bruce McFarland
bruce.mcfarl...@taec.toshiba.com:
This is an example of the output from ‘ceph-deploy osd create [data] [journal]’.
I’ve noticed that all of the ‘ceph
209.243.160.51 - osd.0
209.243.160.52 - osd.3
209.243.160.59 - osd.2
-Original Message-
From: Sage Weil [mailto:sw...@redhat.com]
Sent: Sunday, August 03, 2014 11:15 AM
To: Bruce McFarland
Cc: Brian Rak; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Firefly OSDs stuck in creating state
Message-
From: Sage Weil [mailto:sw...@redhat.com]
Sent: Monday, August 04, 2014 10:09 AM
To: Bruce McFarland
Cc: Brian Rak; ceph-users@lists.ceph.com
Subject: RE: [ceph-users] Firefly OSDs stuck in creating state forever
Okay, looks like the mon went down then.
Was there a stack trace
/physical chassis? Limits?
Thank you very much for all of your help.
Bruce
-Original Message-
From: Sage Weil [mailto:sw...@redhat.com]
Sent: Monday, August 04, 2014 12:25 PM
To: Bruce McFarland
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Firefly OSDs stuck in creating state
This is going to sound odd, and if I hadn't been issuing all commands on the
monitor I would swear I issued 'rm -rf' from the shell of the osd in the
/var/lib/osd/ceph-s/ directory. After creating the pool/rbd and getting an
error from 'rbd info' I saw an osd down/out, so I went to its shell and
avail
96 creating+peering
10 active+clean
96 creating+incomplete
[root@essperf3 Ceph]#
From: Brian Rak [mailto:b...@gameservers.com]
Sent: Friday, August 01, 2014 2:54 PM
To: Bruce McFarland; ceph-users@lists.ceph.com
Subject: Re: [ceph