Re: [ceph-users] Rbd map command doesn't work

2016-08-16 Thread Bruce McFarland
EP, Try setting the crush map to use legacy tunables. I've had the same issue with the "feature mismatch" errors when using krbd that didn't support format 2 while running jewel 10.2.2 on the storage nodes. From the command line: ceph osd crush tunables legacy Bruce > On Aug 16, 2016, at 4:21
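A minimal sketch of that workflow, assuming the kernel client logs the mismatch to dmesg; the output and cluster details are illustrative:

    # look for the kernel client complaint about unsupported feature bits
    dmesg | grep -i "feature set mismatch"
    # inspect the current tunables profile, then fall back to legacy
    ceph osd crush show-tunables
    ceph osd crush tunables legacy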

Re: [ceph-users] rbd readahead settings

2016-08-15 Thread Bruce McFarland
You'll need to set it on the monitor too. Sent from my iPhone > On Aug 15, 2016, at 2:24 PM, EP Komarla wrote: > > Team, > > I am trying to configure the rbd readahead value. Before I increase this > value, I am trying to find out the current value that is set
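A hedged sketch of how the librbd readahead values can be inspected, assuming a client admin socket is configured; the socket path shown is an example, and the option names are the jewel-era ones:

    # compiled-in defaults
    ceph --show-config | grep rbd_readahead
    # values actually in effect for a running librbd client
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok config show | grep rbd_readahead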

[ceph-users] systemd-udevd: failed to execute '/usr/bin/ceph-rbdnamer'

2015-08-05 Thread Bruce McFarland
I've been asked to look at the performance of RHEL 7.1/RHCS 1.3. I keep running into these errors on one of my RHEL 7.1 client systems. The rbd devices are still present, but ceph-rbdnamer is not in /usr/bin on RHEL 7.1, although it is in /usr/bin on Trusty. Much like the rbdmap init script that ships with RHEL 7.1,
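A quick, hedged way to compare what the udev rule expects with what the installed packages actually provide (package and rule paths may differ between distros):

    # which udev rule invokes the namer, and from where
    grep -r ceph-rbdnamer /usr/lib/udev/rules.d/ /lib/udev/rules.d/ 2>/dev/null
    # where the installed ceph packages put the binary
    rpm -ql ceph-common | grep rbdnamer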

Re: [ceph-users] Workaround for RHEL/CentOS 7.1 rbdmap service start warnings?

2015-07-17 Thread Bruce McFarland
Dainard [mailto:sdain...@spd1.com] Sent: Friday, July 17, 2015 1:59 PM To: Bruce McFarland Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Workaround for RHEL/CentOS 7.1 rbdmap service start warnings? Other than those errors, do you find RBD's will not be unmapped on system restart

[ceph-users] Workaround for RHEL/CentOS 7.1 rbdmap service start warnings?

2015-07-14 Thread Bruce McFarland
When starting the rbdmap.service to provide map/unmap of rbd devices across boot/shutdown cycles, the /etc/init.d/rbdmap script includes /lib/lsb/init-functions. This is not a problem except that the rbdmap script is making calls to the log_daemon_*, log_progress_*, and log_action_* functions that are
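One possible workaround (a sketch, not the shipped fix) is to give the script no-op fallbacks when the Debian-style LSB logging helpers are missing, sourced near the top of /etc/init.d/rbdmap:

    # fallback stubs for RHEL/CentOS, where /lib/lsb/init-functions
    # does not define the Debian log_* helpers
    type log_daemon_msg >/dev/null 2>&1 || log_daemon_msg() { echo -n "$*"; }
    type log_progress_msg >/dev/null 2>&1 || log_progress_msg() { echo -n " $*"; }
    type log_action_begin_msg >/dev/null 2>&1 || log_action_begin_msg() { echo -n "$*"; }
    type log_end_msg >/dev/null 2>&1 || log_end_msg() { echo; }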

[ceph-users] Performance test matrix?

2015-07-08 Thread Bruce McFarland
Is there a classic Ceph cluster test matrix? I'm wondering what's done for releases, i.e. sector sizes 4k, 128k, 1M, 4M; sequential, random, 80/20 mix; number of concurrent IOs? I've seen some spreadsheets in the past, but can't find them. Thanks, Bruce
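For reference, a hypothetical fio sweep over those parameters against a mapped rbd device; device name, depths, and runtimes are placeholders, not an official Ceph test matrix:

    for bs in 4k 128k 1m 4m; do
      for rw in read randread randrw; do
        fio --name=sweep-$bs-$rw --filename=/dev/rbd0 --rw=$rw --rwmixread=80 \
            --bs=$bs --direct=1 --ioengine=libaio --iodepth=16 \
            --runtime=60 --time_based
      done
    done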

Re: [ceph-users] Performance test matrix?

2015-07-08 Thread Bruce McFarland
/cosbench.README Mark On 07/08/2015 02:55 PM, Bruce McFarland wrote: Is there a classic Ceph cluster test matrix? I'm wondering what's done for releases, i.e. sector sizes 4k, 128k, 1M, 4M; sequential, random, 80/20 mix; number of concurrent IOs? I've seen some spreadsheets in the past, but can't find them

Re: [ceph-users] RHEL 7.1 ceph-disk failures creating OSD with ver 0.94.2

2015-06-29 Thread Bruce McFarland
creating empty object store in /var/lib/ceph/osd/ceph-0: (22) Invalid argument [root@ceph0 ceph]# -Original Message- From: Bruce McFarland Sent: Monday, June 29, 2015 11:39 AM To: 'Loic Dachary'; ceph-users@lists.ceph.com Subject: RE: [ceph-users] RHEL 7.1 ceph-disk failures creating OSD

Re: [ceph-users] RHEL 7.1 ceph-disk failures creating OSD with ver 0.94.2

2015-06-29 Thread Bruce McFarland
partx: /dev/sdc: error adding partition 1 [root@ceph0 ceph]# -Original Message- From: Loic Dachary [mailto:l...@dachary.org] Sent: Saturday, June 27, 2015 1:08 AM To: Bruce McFarland; ceph-users@lists.ceph.com Subject: Re: [ceph-users] RHEL 7.1 ceph-disk failures creating OSD Hi Bruce

Re: [ceph-users] RHEL 7.1 ceph-disk failures creating OSD with ver 0.94.2

2015-06-29 Thread Bruce McFarland
Do these issues occur in Centos 7 also? -Original Message- From: Bruce McFarland Sent: Monday, June 29, 2015 12:06 PM To: 'Loic Dachary'; 'ceph-users@lists.ceph.com' Subject: RE: [ceph-users] RHEL 7.1 ceph-disk failures creating OSD with ver 0.94.2 Using the manual method

Re: [ceph-users] RHEL 7.1 ceph-disk failures creating OSD

2015-06-26 Thread Bruce McFarland
...@dachary.org] Sent: Friday, June 26, 2015 3:29 PM To: Bruce McFarland; ceph-users@lists.ceph.com Subject: Re: [ceph-users] RHEL 7.1 ceph-disk failures creating OSD Hi, Prior to firefly v0.80.8 ceph-disk zap did not call partprobe and that was causing the kind of problems you're experiencing

[ceph-users] Ceph Client OS - RHEL 7.1??

2015-06-04 Thread Bruce McFarland
I've always used Ubuntu for my Ceph client OS and found out in the lab that CentOS/RHEL 6.x doesn't have kernel rbd support. I wanted to investigate using RHEL 7.1 for the client OS. Is there a kernel rbd module that installs with RHEL 7.1? If not, are there 7.1 RPMs or source tarballs
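A quick probe on the client (whether a given distro kernel build enables krbd varies, so this only checks the machine at hand):

    # try to load the kernel rbd module and confirm it is present
    modprobe rbd && lsmod | grep rbd
    modinfo rbd | head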

Re: [ceph-users] Installing calamari on centos 7

2015-05-26 Thread Bruce McFarland
I followed the Calamari build instructions here: http://ceph.com/category/ceph-step-by-step/ I used an Ubuntu 14.04 system to build all of the Calamari client and server packages for CentOS 6.5 and Ubuntu Trusty (14.04). Once the packages were built I also referenced the Calamari instructions

Re: [ceph-users] [ceph-calamari] Does anyone understand Calamari??

2015-05-13 Thread Bruce McFarland
PyZMQ: 14.5.0 RAET: Not Installed ZMQ: 4.0.5 Mako: Not Installed root@KVDrive11:~# -Original Message- From: Gregory Meno [mailto:gm...@redhat.com] Sent: Wednesday, May 13, 2015 3:52 PM To: Bruce McFarland Cc: Michael Kuriger; ceph-calam...@lists.ceph.com

Re: [ceph-users] Does anyone understand Calamari??

2015-05-13 Thread Bruce McFarland
. The calamari master is running on Ubuntu 14.04. From: Michael Kuriger [mailto:mk7...@yp.com] Sent: Wednesday, May 13, 2015 2:00 PM To: Bruce McFarland; ceph-calam...@lists.ceph.com; ceph-us...@ceph.com; ceph-devel (ceph-de...@vger.kernel.org) Subject: Re: [ceph-users] Does anyone understand

Re: [ceph-users] New Calamari server

2015-05-12 Thread Bruce McFarland
I am having a similar issue. The cluster is up, and salt is running on all nodes and has accepted keys from all of them, including the monitor. I can issue salt and salt/ceph.py commands from the Calamari master, including 'salt \* ceph.get_heartbeats', which returns from all nodes including the monitor with the

Re: [ceph-users] [ceph-calamari] Does anyone understand Calamari??

2015-05-12 Thread Bruce McFarland
[salt.crypt ][DEBUG ][5912] Re-using SAuth for ('/etc/salt/pki/minion', 'octeon109', 'tcp://209.243.160.35:4506') -Original Message- From: Bruce McFarland Sent: Tuesday, May 12, 2015 6:11 PM To: 'Gregory Meno' Cc: ceph-calam...@lists.ceph.com; ceph-us

Re: [ceph-users] [ceph-calamari] Does anyone understand Calamari??

2015-05-12 Thread Bruce McFarland
supervisorctl to start/stop Cthulhu. I've performed the calamari-ctl clear/init sequence more than twice with also stopping/starting apache2 and Cthulhu. -Original Message- From: Gregory Meno [mailto:gm...@redhat.com] Sent: Tuesday, May 12, 2015 5:58 PM To: Bruce McFarland Cc: ceph-calam

[ceph-users] accepter.accepter.bind unable to bind to IP on any port in range 6800-7300:

2015-05-08 Thread Bruce McFarland
I've run into an issue starting OSDs where I'm running out of ports. I've increased the port range with ms bind port max, and on the next attempt to start the osd it reports no ports in the new range. I am only running 1 osd on the node and rarely restart the osd. I've increased the debug level
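For reference, a hedged ceph.conf sketch of the relevant options; each OSD consumes several ports from this range, and the values shown are examples only:

    [osd]
        ms bind port min = 6800
        ms bind port max = 7300
        # widen the range if many OSDs run per host, or if restarts leave
        # old ports lingering in TIME_WAIT:
        # ms bind port max = 7568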

Re: [ceph-users] Binding a pool to certain OSDs

2015-04-14 Thread Bruce McFarland
You won’t get a PG warning message from 'ceph -s' unless you have fewer than 20 PGs per OSD in your cluster. From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Bruce McFarland Sent: Tuesday, April 14, 2015 10:00 AM To: Giuseppe Civitella; Saverio Proto Cc: ceph-users@lists.ceph.com

Re: [ceph-users] Binding a pool to certain OSDs

2015-04-14 Thread Bruce McFarland
I use this to quickly check pool stats: [root@ceph-mon01 ceph]# ceph osd dump | grep pool pool 0 'data' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool crash_replay_interval 45 stripe_width 0 pool 1 'metadata' replicated size
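The usual way to bind a pool to a subset of OSDs is a dedicated CRUSH root plus a rule, then pointing the pool's crush_ruleset at it; a sketch with hypothetical names (ssd-root, ssd-rule, mypool), pre-Luminous syntax:

    # create a rule that only selects from the ssd-root hierarchy
    ceph osd crush rule create-simple ssd-rule ssd-root host
    # find the new rule's ruleset number, then point the pool at it
    ceph osd crush rule dump
    ceph osd pool set mypool crush_ruleset 1
    ceph osd dump | grep mypool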

Re: [ceph-users] Installing firefly v0.80.9 on RHEL 6.5

2015-04-07 Thread Bruce McFarland
system to use APT (Ubuntu) or RPM (Centos) and the code for the ceph.repo file. There are also package dependency lists, trusted keys, etc. Bruce -Original Message- From: Loic Dachary [mailto:l...@dachary.org] Sent: Tuesday, April 07, 2015 1:32 AM To: Bruce McFarland; ceph-users

Re: [ceph-users] Installing firefly v0.80.9 on RHEL 6.5

2015-04-07 Thread Bruce McFarland
I'm not sure about Centos 7.0 but Ceph is not part of the 6.5 distro. Sent from my iPhone On Apr 7, 2015, at 12:26 PM, Loic Dachary l...@dachary.org wrote: On 07/04/2015 18:51, Bruce McFarland wrote: Loic, You're not mistaken the pages are listed under the Installation (Manual) link

Re: [ceph-users] Installing firefly v0.80.9 on RHEL 6.5

2015-04-06 Thread Bruce McFarland
I'm not sure exactly what your steps were, but I reinstalled a monitor yesterday on CentOS 6.5 using ceph-deploy with the /etc/yum.repos.d/ceph.repo from ceph.com, which I've included below. Bruce [root@essperf13 ceph-mon01]# ceph -v ceph version 0.80.9
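A firefly-era sketch of what such an /etc/yum.repos.d/ceph.repo typically looked like; the URLs reflect the old ceph.com layout and may have moved since:

    [ceph]
    name=Ceph packages for $basearch
    baseurl=http://ceph.com/rpm-firefly/el6/$basearch
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc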

Re: [ceph-users] Calamari Questions

2015-04-01 Thread Bruce McFarland
to the calamari master. Thanks, Bruce From: Quentin Hartman [mailto:qhart...@direwolfdigital.com] Sent: Wednesday, April 01, 2015 1:56 PM To: Bruce McFarland Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Calamari Questions You should have a config page in calamari UI where you can accept osd nodes

[ceph-users] Calamari Questions

2015-04-01 Thread Bruce McFarland
I've built the Calamari client, server, and diamond packages from source for Trusty and CentOS and installed them on the Trusty master. I installed the diamond and salt packages on the storage nodes. I can connect to the calamari master, accept salt keys from the ceph nodes, but then Calamari reports 3

Re: [ceph-users] RBD caching on 4K reads???

2015-02-02 Thread Bruce McFarland
of ceph.conf? rbd admin sockets? Nothing at ceph.com/docs on either topic. Thanks, Bruce -Original Message- From: Mykola Golub [mailto:to.my.troc...@gmail.com] Sent: Sunday, February 01, 2015 1:24 PM To: Udo Lembke Cc: Bruce McFarland; ceph-us...@ceph.com; Prashanth Nednoor Subject: Re: [ceph
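For the ceph.conf / admin-socket part of the question, a hedged sketch; the paths and the example pid in the socket name are illustrative, not necessarily what this cluster used:

    [client]
        admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
        rbd cache = false

    # from the client, once librbd has an image open, verify what it loaded:
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.asok config show | grep rbd_cache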

Re: [ceph-users] RBD caching on 4K reads???

2015-02-02 Thread Bruce McFarland
. -Original Message- From: Nicheal [mailto:zay11...@gmail.com] Sent: Monday, February 02, 2015 7:35 PM To: Bruce McFarland Cc: ceph-us...@ceph.com; Prashanth Nednoor Subject: Re: [ceph-users] RBD caching on 4K reads??? It seems you use the kernel rbd. So rbd_cache does not work, which is just

Re: [ceph-users] RBD caching on 4K reads???

2015-02-02 Thread Bruce McFarland
and physically disabling kernel caching. -Original Message- From: Nicheal [mailto:zay11...@gmail.com] Sent: Monday, February 02, 2015 7:35 PM To: Bruce McFarland Cc: ceph-us...@ceph.com; Prashanth Nednoor Subject: Re: [ceph-users] RBD caching on 4K reads??? It seems you use the kernel rbd
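Since krbd bypasses librbd's rbd_cache, what does apply to a mapped /dev/rbdX is the block-layer readahead and the page cache; a small sketch, with the device name as an example:

    # current readahead for the mapped device, in KiB
    cat /sys/block/rbd0/queue/read_ahead_kb
    # raise it (example value), or use blockdev, which works in 512-byte sectors
    echo 4096 > /sys/block/rbd0/queue/read_ahead_kb
    blockdev --getra /dev/rbd0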

[ceph-users] RBD caching on 4K reads???

2015-01-30 Thread Bruce McFarland
I have a cluster and have created an rbd device - /dev/rbd1. It shows up as expected with 'rbd --image test info' and 'rbd showmapped'. I have been looking at cluster performance with the usual Linux block device tools - fio and vdbench. When I look at writes and large block sequential reads I'm
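To rule out the client page cache when measuring those 4K reads, a direct-I/O fio run is the usual check; a sketch, with device, depth, and runtime as examples:

    fio --name=randread-4k --filename=/dev/rbd1 --rw=randread --bs=4k \
        --direct=1 --ioengine=libaio --iodepth=32 --numjobs=1 \
        --runtime=60 --time_based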

Re: [ceph-users] RBD caching on 4K reads???

2015-01-30 Thread Bruce McFarland
[mailto:ulem...@polarzone.de] Sent: Friday, January 30, 2015 1:00 PM To: Bruce McFarland; ceph-us...@ceph.com Cc: Prashanth Nednoor Subject: Re: [ceph-users] RBD caching on 4K reads??? Hi Bruce, hmm, sounds for me like the rbd cache. Can you look, if the cache is realy disabled in the running config

Re: [ceph-users] Monitor/OSD report tuning question

2014-08-25 Thread Bruce McFarland
+clean; 0 bytes data, 135 GB used, 327 TB / 327 TB avail -Original Message- From: Christian Balzer [mailto:ch...@gol.com] Sent: Monday, August 25, 2014 1:15 AM To: ceph-us...@ceph.com Cc: Bruce McFarland Subject: Re: [ceph-users] Monitor/OSD report tuning question Hello, On Sat, 23 Aug

Re: [ceph-users] osd_heartbeat_grace set to 30 but osd's still fail for grace 20

2014-08-25 Thread Bruce McFarland
osd_heartbeat_grace: 35, [root@ceph-mon01 ceph]# -Original Message- From: Bruce McFarland Sent: Monday, August 25, 2014 10:46 AM To: 'Gregory Farnum' Cc: ceph-us...@ceph.com Subject: RE: [ceph-users] osd_heartbeat_grace set to 30 but osd's still fail for grace 20 That's something that has been puzzling

Re: [ceph-users] osd_heartbeat_grace set to 30 but osd's still fail for grace 20

2014-08-25 Thread Bruce McFarland
settings can handle? -Original Message- From: Gregory Farnum [mailto:g...@inktank.com] Sent: Monday, August 25, 2014 11:01 AM To: Bruce McFarland Cc: ceph-us...@ceph.com Subject: Re: [ceph-users] osd_heartbeat_grace set to 30 but osd's still fail for grace 20 On Mon, Aug 25, 2014 at 10:56

[ceph-users] osd_heartbeat_grace set to 30 but osd's still fail for grace 20

2014-08-24 Thread Bruce McFarland
I see OSDs being failed for the default osd_heartbeat_grace of 20, but the runtime config shows that the grace is set to 30. Is there another variable for the osd or the mon that I need to set for the non-default osd_heartbeat_grace of 30 to take effect? 2014-08-23 23:03:08.982590
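A hedged sketch of setting the grace so the OSDs and the monitors agree on it (the monitors evaluate the failure reports, so an [osd]-only setting is not enough); the value is the one from the thread:

    # ceph.conf — simplest is [global], which covers both daemon types:
    [global]
        osd heartbeat grace = 30

    # inject into already-running daemons without a restart:
    ceph tell osd.* injectargs '--osd_heartbeat_grace 30'
    ceph tell mon.* injectargs '--osd_heartbeat_grace 30'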

[ceph-users] Monitor/OSD report tuning question

2014-08-23 Thread Bruce McFarland
Hello, I have a cluster with 30 OSDs distributed over 3 storage servers connected by a 10G cluster link and connected to the monitor over 1G. I still have a lot to understand with Ceph. Observing the cluster messages in a 'ceph -w' watch window, I see a lot of osd flapping while it is sitting in a
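A sketch of the mon/osd options usually involved in that kind of flapping; the values are examples, not a recommendation for this cluster:

    [mon]
        # how many distinct OSDs must report a peer down before the mon marks it
        mon osd min down reporters = 3
    [osd]
        osd heartbeat interval = 6
        osd heartbeat grace = 30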

Re: [ceph-users] MON running 'ceph -w' doesn't see OSD's booting

2014-08-21 Thread Bruce McFarland
] Sent: Thursday, August 21, 2014 1:17 AM To: Bruce McFarland Cc: ceph-us...@ceph.com Subject: Re: [ceph-users] MON running 'ceph -w' doesn't see OSD's booting Hi, You only have one OSD? I've seen similar strange things in test pools having only one OSD - and I kinda explained it by assuming

Re: [ceph-users] MON running 'ceph -w' doesn't see OSD's booting

2014-08-21 Thread Bruce McFarland
://ceph.com On Thu, Aug 21, 2014 at 7:11 AM, Bruce McFarland bruce.mcfarl...@taec.toshiba.com wrote: I have 3 storage servers each with 30 osds. Each osd has a journal that is a partition on a virtual drive that is a raid0 of 6 ssds. I brought up a 3 osd (1 per storage server) cluster to bring up

[ceph-users] How to create multiple OSD's per host?

2014-08-14 Thread Bruce McFarland
I've tried using ceph-deploy, but it wants to assign the same id to each osd and I end up with a bunch of prepared ceph-disks and only 1 active. If I use the manual short-form method, the activate step fails and there are no xfs mount points on the ceph-disks. If I use the manual long form it
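For comparison, the ceph-deploy of that era took host:data[:journal] triplets, one per OSD; a sketch with hypothetical host, disk, and journal-partition names:

    ceph-deploy disk zap ceph0:sdb ceph0:sdc
    ceph-deploy osd create ceph0:sdb:/dev/sda5 ceph0:sdc:/dev/sda6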

Re: [ceph-users] How to create multiple OSD's per host?

2014-08-14 Thread Bruce McFarland
[ceph_deploy.osd][DEBUG ] Host ceph0 is now ready for osd use. From: Bruce McFarland Sent: Thursday, August 14, 2014 11:45 AM To: 'ceph-us...@ceph.com' Subject: How to create multiple OSD's per host? I've tried using ceph-deploy but it wants to assign the same id for each osd and I end up

Re: [ceph-users] How to create multiple OSD's per host?

2014-08-14 Thread Bruce McFarland
...@ceph.com Subject: Re: [ceph-users] How to create multiple OSD's per host? 2014-08-15 7:56 GMT+08:00 Bruce McFarland bruce.mcfarl...@taec.toshiba.com: This is an example of the output from 'ceph-deploy osd create [data] [journal]' I've noticed that all of the 'ceph

Re: [ceph-users] Firefly OSDs stuck in creating state forever

2014-08-04 Thread Bruce McFarland
209.243.160.51 - osd.0 209.243.160.52 - osd.3 209.243.160.59 - osd.2 -Original Message- From: Sage Weil [mailto:sw...@redhat.com] Sent: Sunday, August 03, 2014 11:15 AM To: Bruce McFarland Cc: Brian Rak; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Firefly OSDs stuck in creating state

Re: [ceph-users] Firefly OSDs stuck in creating state forever

2014-08-04 Thread Bruce McFarland
Message- From: Sage Weil [mailto:sw...@redhat.com] Sent: Monday, August 04, 2014 10:09 AM To: Bruce McFarland Cc: Brian Rak; ceph-users@lists.ceph.com Subject: RE: [ceph-users] Firefly OSDs stuck in creating state forever Okay, looks like the mon went down then. Was there a stack trace

Re: [ceph-users] Firefly OSDs stuck in creating state forever

2014-08-04 Thread Bruce McFarland
/physical chassis? Limits? Thank you very much for all of your help. Bruce -Original Message- From: Sage Weil [mailto:sw...@redhat.com] Sent: Monday, August 04, 2014 12:25 PM To: Bruce McFarland Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Firefly OSDs stuck in creating state

[ceph-users] OSD daemon code in /var/lib/ceph/osd/ceph-2/ dissapears after creating pool/rbd -

2014-08-04 Thread Bruce McFarland
This is going to sound odd, and if I hadn't been issuing all commands on the monitor I would swear I had issued 'rm -rf' from the shell of the osd in the /var/lib/ceph/osd/ceph-2/ directory. After creating the pool/rbd and getting an error from 'rbd info' I saw an osd down/out, so I went to its shell and

Re: [ceph-users] Firefly OSDs stuck in creating state forever

2014-08-01 Thread Bruce McFarland
avail 96 creating+peering 10 active+clean 96 creating+incomplete [root@essperf3 Ceph]# From: Brian Rak [mailto:b...@gameservers.com] Sent: Friday, August 01, 2014 2:54 PM To: Bruce McFarland; ceph-users@lists.ceph.com Subject: Re: [ceph