Hey folks. Running RHEL7.1 with stock 3.10.0 kernel and trying to deploy
Infernalis. Haven't done this since Firefly but I used to know what I was
doing. My problem is "ceph-deploy new" and "ceph-deploy install" seem to go
well but "ceph-deploy mon create-initial" reliably fails when
Augh, never mind, firewall problem. Thanks anyway.
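For anyone hitting the same thing on RHEL 7 with firewalld, a sketch of the ports to open (monitor port plus the default OSD range, per the Ceph network docs; adjust the zone to your setup):
sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent       # monitors
sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent  # OSDs/MDS
sudo firewall-cmd --reload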
From: Gruher, Joseph R
Sent: Thursday, June 11, 2015 10:55 PM
To: ceph-users@lists.ceph.com
Cc: Gruher, Joseph R
Subject: MONs not forming quorum
Hi folks-
I'm trying to deploy 0.94.2 (Hammer) onto CentOS7. I used to be pretty good at
this on Ubuntu but it has been a while. Anyway, my monitors are not forming
quorum, and I'm not sure why. They can definitely all ping each other and
such. Any thoughts on specific problems in the
Hi folks-
Was this ever resolved? I’m not finding a resolution in the email chain,
apologies if I am missing it. I am experiencing this same problem. Cluster
works fine for object traffic, can’t seem to get rbd to work in 0.78. Worked
fine in 0.72.2 for me. Running Ubuntu 13.04 with 3.12
, 427 kobjects
2439 GB used, 12643 GB / 15083 GB avail
2784 active+clean
From: Gruher, Joseph R
Sent: Friday, April 04, 2014 11:44 AM
To: 'Ирек Фасихов'; Ilya Dryomov
Cc: ceph-users@lists.ceph.com; Gruher, Joseph R
Subject: RE: [ceph-users] Ceph RBD 0.78 Bug or feature
Aha – upgrade of kernel from 3.13 to 3.14 appears to have resolved the problem.
Thanks,
Joe
From: Gruher, Joseph R
Sent: Friday, April 04, 2014 11:48 AM
To: Ирек Фасихов; Ilya Dryomov
Cc: ceph-users@lists.ceph.com; Gruher, Joseph R
Subject: RE: [ceph-users] Ceph RBD 0.78 Bug or feature?
Meant
Hi Folks-
Having a bit of trouble with EC setup on 0.78. Hoping someone can help me out.
I've got most of the pieces in place, I think I'm just having a problem with
the ruleset.
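For context, a minimal erasure-coded pool with its own ruleset is set up roughly like this (profile and pool names are examples, and the 0.78 syntax may differ slightly from the later Firefly documentation):
ceph osd erasure-code-profile set myprofile k=2 m=1 ruleset-failure-domain=host
ceph osd pool create ecpool 256 256 erasure myprofile   # a crush ruleset is generated from the profile
ceph osd pool get ecpool crush_ruleset                  # confirm which ruleset the pool uses
ceph osd crush rule dump                                # inspect the generated rule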
I am running 0.78:
ceph --version
ceph version 0.78 (f6c746c314d7b87b8419b6e584c94bfe4511dbd4)
I created a new
Great, thanks! I'll watch (hope) for an update later this week. Appreciate
the rapid response.
-Joe
From: Ian Colle [mailto:ian.co...@inktank.com]
Sent: Sunday, March 16, 2014 7:22 PM
To: Gruher, Joseph R; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] erasure coding testing
Joe,
We're
-Original Message-
From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
boun...@lists.ceph.com] On Behalf Of Mark Nelson
Sent: Monday, February 03, 2014 6:48 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Low RBD Performance
On 02/03/2014 07:29 PM, Gruher, Joseph R
-Original Message-
From: Gregory Farnum [mailto:g...@inktank.com]
Sent: Tuesday, February 04, 2014 9:46 AM
To: Gruher, Joseph R
Cc: Mark Nelson; ceph-users@lists.ceph.com; Ilya Dryomov
Subject: Re: [ceph-users] Low RBD Performance
On Tue, Feb 4, 2014 at 9:29 AM, Gruher, Joseph R
Ultimately this seems to be an FIO issue. If I use --iodepth X or --
iodepth=X on the FIO command line I always get queue depth 1. After
switching to specifying iodepth=X in the body of the FIO workload file I do
get the desired queue depth and I can immediately see performance is much
higher
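For reference, a minimal job file of that form looks like this (device path, block size, and queue depth are placeholders):
[global]
ioengine=libaio
direct=1
runtime=60

[rbd-randread]
filename=/dev/rbd0
rw=randread
bs=4k
iodepth=32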
Hi folks-
I'm having trouble demonstrating reasonable performance of RBDs. I'm running
Ceph 0.72.2 on Ubuntu 13.04 with the 3.12 kernel. I have four dual-Xeon
servers, each with 24GB RAM, and an Intel 320 SSD for journals and four WD 10K
RPM SAS drives for OSDs, all connected with an LSI
Hi all-
I'm creating some scripted performance testing for my Ceph cluster. The part
relevant to my questions works like this:
1. Create some pools
2. Create and map some RBDs
3. Write-in the RBDs using DD or FIO
4. Run FIO testing on the RBDs (small block random and
I don't know how rbd works internally, but I think rbd here returns zeros
without a real OSD disk read if the block/sector of the RBD image is unused. That
would explain the graph you see. You can try adding a second RBD image, leaving it
unformatted and unused, and benchmarking that disk, then make a filesystem on
For ~$67 you get a mini-itx motherboard with a soldered on 17W dual core
1.8GHz Ivy Bridge-based Celeron (supports SSE4.2 CRC32 instructions!).
It has 2 standard DIMM slots so no compromising on memory, on-board gigabit
ethernet, 3 3Gb/s + 1 6Gb/s SATA ports, and a single PCIe slot for an additional
Hi Alfredo-
Have you looked at adding the ability to specify a proxy on the ceph-deploy
command line? Something like:
ceph-deploy install --proxy {http_proxy}
That would then need to run all the remote commands (rpm, curl, wget, etc) with
the proxy. Not sure how complex that would
Those aren't really errors. When ceph-deploy runs commands on the host, anything
that gets printed to stderr is relayed back through ceph-deploy
with the [ERROR] tag. If you look at the content of the errors, it is just the
output of the commands that were run in the step
-Original Message-
From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Wednesday, November 20, 2013 7:17 AM
To: Gruher, Joseph R
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph-deploy disk zap fails but succeeds on retry
On Mon, Nov 18, 2013 at 1:12 PM, Gruher
So is there any size limit on RBD images? I had a failure this morning
mounting 1TB RBD. Deleting now (why does it take so long to delete if it was
never even mapped, much less written to?) and will retry with smaller images.
See output below. This is 0.72 on Ubuntu 13.04 with 3.12 kernel.
-Original Message-
From: Gruher, Joseph R
Sent: Tuesday, November 19, 2013 12:24 PM
To: 'Wolfgang Hennerbichler'; Bernhard Glomm
Cc: ceph-users@lists.ceph.com
Subject: RE: [ceph-users] Size of RBD images
So is there any size limit on RBD images? I had a failure this morning
mounting
-Original Message-
From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Monday, November 18, 2013 6:34 AM
To: Gruher, Joseph R
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph-deploy disk zap fails but succeeds on retry
I went ahead and created a ticket to track
Using ceph-deploy 1.3.2 with ceph 0.72.1. Ceph-deploy disk zap will fail and
exit with error, but then on retry will succeed. This is repeatable as I go
through each of the OSD disks in my cluster. See output below.
I am guessing the first attempt to run changes something about the initial
-Original Message-
From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
boun...@lists.ceph.com] On Behalf Of Dinu Vlad
Sent: Thursday, November 07, 2013 3:30 AM
To: ja...@peacon.co.uk; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph cluster performance
In this case
From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
boun...@lists.ceph.com] On Behalf Of ??
Sent: Wednesday, November 06, 2013 10:04 PM
To: ceph-users
Subject: [ceph-users] please help me.problem with my ceph
1. I have installed ceph with one mon/mds and one osd. When I use 'ceph -
Is there any plan to implement some kind of QoS in Ceph? Say I want to provide
service level assurance to my OpenStack VMs and I might have to throttle
bandwidth to some to provide adequate bandwidth to others - is anything like
that planned for Ceph? Generally with regard to block storage
Why can't radosgw start? Details below.
Thanks!
-Original Message-
From: Gruher, Joseph R
Sent: Friday, November 01, 2013 11:50 AM
To: Gruher, Joseph R
Subject: RE: radosgw fails to start
Adding some debug arguments has generated output which I believe
indicates the problem is my keyring
-Original Message-
From: Yehuda Sadeh [mailto:yeh...@inktank.com]
Sent: Monday, November 04, 2013 12:40 PM
To: Gruher, Joseph R
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] radosgw fails to start
Not sure why you're able to run the 'rados' and 'ceph' command, and not
'radosgw
osd_journal_size = 1024
mon_initial_members = joceph01, joceph02, joceph03, joceph04
fsid = 74d808db-aaa7-41d2-8a84-7d590327a3c7
From: Gruher, Joseph R
Sent: Wednesday, October 30, 2013 12:24 PM
To: ceph-users@lists.ceph.com
Subject: radosgw fails to start, leaves no clues why
Hi all-
Trying to set up
-Original Message-
From: Derek Yarnell [mailto:de...@umiacs.umd.edu]
Sent: Friday, November 01, 2013 12:20 PM
To: Gruher, Joseph R; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] radosgw fails to start
On 11/1/13, 2:07 PM, Gruher, Joseph R wrote:
Adding some debug arguments has
Hi all-
Trying to set up object storage on CentOS. I've done this successfully on
Ubuntu but I'm having some trouble on CentOS. I think I have everything
configured but when I try to start the radosgw service it reports starting, but
then the status is not running, with no helpful output as
I have CentOS 6.4 running with the 3.11.6 kernel from elrepo and it includes
the rbd module. I think you could make the same update on RHEL 6.4 and get
rbd. From there it is very simple to mount an rbd device. Here are a few
notes on what I did.
Update kernel:
sudo rpm --import
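A sketch of the full elrepo sequence (key URL and package names per the elrepo instructions of the time; the release RPM version may differ):
sudo rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
sudo rpm -Uvh http://www.elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm   # check elrepo.org for the current version
sudo yum --enablerepo=elrepo-kernel install kernel-ml
# make the new kernel the default entry in /boot/grub/grub.conf, then reboot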
If you are behind a proxy try configuring the wget proxy through /etc/wgetrc.
I had a similar problem where I could complete wget commands manually but they
would fail in ceph-deploy until I configured the wget proxy in that manner.
From: ceph-users-boun...@lists.ceph.com
Try configuring the curl proxy in /root/.curlrc. I had a similar problem
earlier this week.
Overall I have to be sure to set all these proxies individually for ceph-deploy
to work on CentOS (Ubuntu is easier):
Curl: /root/.curlrc
rpm: /root/.rpmmacros
wget: /etc/wgetrc
yum: /etc/yum.conf
-Joe
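For reference, the sort of entries that go in each (proxy host and port are placeholders):
# /root/.curlrc
proxy = http://proxy.example.com:8080

# /root/.rpmmacros
%_httpproxy proxy.example.com
%_httpport 8080

# /etc/wgetrc
http_proxy = http://proxy.example.com:8080/
https_proxy = http://proxy.example.com:8080/

# /etc/yum.conf (in the [main] section)
proxy=http://proxy.example.com:8080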
Should osd_pool_default_pg_num and osd_pool_default_pgp_num apply to the
default pools? I put them in ceph.conf before creating any OSDs but after
bringing up the OSDs the default pools are using a value of 64.
Ceph.conf contains these lines in [global]:
osd_pool_default_pgp_num = 800
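One possible explanation is that the default pools (data, metadata, rbd) are created when the monitors first form, so those settings may not be applied to them; in any case, pools that already exist can be raised afterwards, e.g.:
ceph osd pool set rbd pg_num 800
ceph osd pool set rbd pgp_num 800    # set pg_num first, then pgp_num
# repeat for the data and metadata pools as needed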
Hi all,
I have CentOS 6.4 with 3.11.6 kernel running (built from latest stable on
kernel.org) and I cannot load the rbd client module. Should I have to do
anything to enable/install it? Shouldn't it be present in this kernel?
[ceph@joceph05 /]$ cat /etc/centos-release
CentOS release 6.4
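Since the kernel was built from source, it is worth checking whether the RBD driver was enabled in the build config at all, e.g.:
grep BLK_DEV_RBD /boot/config-$(uname -r)    # or check .config in the build tree
# CONFIG_BLK_DEV_RBD=m means it was built as a module and 'modprobe rbd' should work;
# if it is unset, rebuild with CONFIG_BLK_DEV_RBD (and CONFIG_CEPH_LIB) enabled
sudo modprobe rbd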
Speculating, but it seems possible that the ':' in the path is problematic,
since that is also the separator between disk and journal (HOST:DISK:JOURNAL)?
Perhaps if you enclose it in quotes, or use /dev/disk/by-id?
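For example (hostname and device names are placeholders; by-id names avoid ':'):
ls -l /dev/disk/by-id/ | grep -v part    # pick stable whole-disk names
ceph-deploy osd prepare node1:/dev/disk/by-id/ata-DISK_SERIAL:/dev/disk/by-id/ata-JOURNAL_SERIAL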
-Original Message-
From: ceph-users-boun...@lists.ceph.com
From: Gregory Farnum [mailto:g...@inktank.com]
Sent: Monday, October 07, 2013 1:27 PM
To: Gruher, Joseph R
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Client Timeout on Rados Gateway
The ping tests you're running are connecting to different interfaces
(10.23.37.175) than those you specify in the mon_hosts option (10.0.0.2,
10.0.0.3, 10.0.0.4
Along the lines of this thread, if I have OSD(s) on rotational HDD(s), but have
the journal(s) going to an SSD, I am curious about the best procedure for
replacing the SSD should it fail.
-Joe
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
On my system my user is named ceph so I modified /home/ceph/.ssh/config.
That seemed to work fine for me. ~/ is shorthand for your user's home folder.
I think SSH will default to the current username so if you just use the same
username everywhere this may not even be necessary.
My file:
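For illustration only (hostnames and domain are placeholders, not the actual file), such a config usually has one block per node:
Host cephtest02
    Hostname cephtest02.example.com
    User ceph
Host cephtest03
    Hostname cephtest03.example.com
    User ceph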
Can anyone provide me a sample ceph.conf with multiple rados gateways? I must
not be configuring it correctly and I can't seem to Google up an example or
find one in the docs. Thanks!
-Joe
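A sketch of the usual shape, with one [client.radosgw.*] section per gateway host (instance names, hosts, and paths are examples):
[client.radosgw.gateway1]
host = gw01
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /var/run/ceph/ceph.radosgw.gateway1.fastcgi.sock
log file = /var/log/ceph/radosgw.gateway1.log

[client.radosgw.gateway2]
host = gw02
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /var/run/ceph/ceph.radosgw.gateway2.fastcgi.sock
log file = /var/log/ceph/radosgw.gateway2.log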
Hello-
I've set up a rados gateway but I'm having trouble accessing it from clients.
I can access it using rados command line just fine from any system in my ceph
deployment, including my monitors and OSDs, the gateway system, and even the
admin system I used to run ceph-deploy. However,
-Original Message-
From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
boun...@lists.ceph.com] On Behalf Of Gruher, Joseph R
Sent: Monday, September 30, 2013 10:27 AM
To: Yehuda Sadeh
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] failure starting radosgw after setting
Hi all-
I am following the object storage quick start guide. I have a cluster with two
OSDs and have followed the steps on both. Both are failing to start radosgw
but each in a different manner. All the previous steps in the quick start
guide appeared to complete successfully. Any tips on
/var/lib/ceph/tmp/ceph-cephtest02.mon.keyring
[ceph_deploy][ERROR ] GenericError: Failed to create 1 monitors
ceph@cephtest01:/my-cluster$
-Joe
-Original Message-
From: Gruher, Joseph R
Sent: Thursday, September 19, 2013 11:14 AM
To: ceph-users@lists.ceph.com
Cc: Gruher, Joseph R
Subject
-Original Message-
From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Can you try running ceph-deploy *without* sudo ?
Ah, OK, sure. Without sudo I end up hung here again:
ceph@cephtest01:~$ ceph-deploy install cephtest03 cephtest04 cephtest05
cephtest06
cut
[cephtest03][INFO ]
Could someone make a quick clarification on the quick start guide for me? On
this page: http://ceph.com/docs/next/start/quick-ceph-deploy/. After I do
ceph-deploy new to a system is that system then a monitor from that point
forward? Or do I then have to do ceph-deploy mon create to that
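For reference, the usual sequence is (hostnames are placeholders):
ceph-deploy new mon01              # only writes ceph.conf and the initial monitor keyring
ceph-deploy install mon01 osd01    # installs the packages on the nodes
ceph-deploy mon create mon01       # only now is the monitor daemon actually created and started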
-Original Message-
From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
boun...@lists.ceph.com] On Behalf Of Mike Dawson
you need to understand losing an SSD will cause
the loss of ALL of the OSDs which had their journal on the failed SSD.
First, you probably don't want RAID1
-Original Message-
From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Again, in this next coming release, you will be able to tell
ceph-deploy to just install the packages without mangling your repos
(or installing keys)
Updated to new ceph-deploy release 1.2.6 today but I still see
Using latest ceph-deploy:
ceph@cephtest01:/my-cluster$ sudo ceph-deploy --version
1.2.6
I get this failure:
ceph@cephtest01:/my-cluster$ sudo ceph-deploy install cephtest03 cephtest04
cephtest05 cephtest06
[ceph_deploy.install][DEBUG ] Installing stable version dumpling on cluster
ceph hosts
-Original Message-
From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
boun...@lists.ceph.com] On Behalf Of Gilles Mocellin
So you can add something like this in all ceph nodes' /etc/sudoers (use
visudo) :
Defaults env_keep += "http_proxy https_proxy ftp_proxy no_proxy"
Hope it
-Original Message-
From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Subject: Re: [ceph-users] problem with ceph-deploy hanging
ceph-deploy will use the user as you are currently executing. That is why, if
you are calling ceph-deploy as root, it will log in remotely as root.
So by a
From: Gruher, Joseph R
From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
On Fri, Sep 13, 2013 at 5:06 PM, Gruher, Joseph R
joseph.r.gru...@intel.com wrote:
root@cephtest01:~# ssh cephtest02 wget -q -O-
'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' |
apt-key add -
gpg
Hello all-
I'm setting up a new Ceph cluster (my first time - just a lab experiment, not
for production) by following the docs on the ceph.com website. The preflight
checklist went fine, I installed and updated Ubuntu 12.04.2, set up my user and
set up passwordless SSH, etc. I ran
...
ceph.git
RSS Atom
root@cephtest01:~#
Is this URL wrong, or is the data at the URL incorrect?
Thanks,
Joe
From: Gruher, Joseph R
Sent: Friday, September 13, 2013 1:17 PM
To: ceph-users@lists.ceph.com
Cc: Gruher, Joseph R
Subject: problem with ceph user
Hello all-
I'm setting up a new Ceph cluster (my
-Original Message-
From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Friday, September 13, 2013 3:17 PM
To: Gruher, Joseph R
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] problem with ceph-deploy hanging
On Fri, Sep 13, 2013 at 5:06 PM, Gruher, Joseph R
joseph.r.gru