Hello,
Is there a consistent, reliable way to identify a Ceph client? I'm looking
for a string/ID (UUID, for example) that can be traced back to a client
doing RBD maps.
There are a couple of possibilities out there, but they aren't quite what
I'm looking for. When checking "rbd status", for exa
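For example, the watcher list from "rbd status" looks roughly like this
(the pool/image name and the IDs are made up):

    $ rbd status rbd/vm-disk-1
    Watchers:
            watcher=192.168.10.21:0/1234567890 client.84137 cookie=1

That gives an address and a client id for whoever has the image mapped,
but nothing I can see that is stable across re-maps.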
Hi Noah,
What is the ownership of /var/lib/ceph?
ceph-deploy should only be trying to use --setgroup if /var/lib/ceph is
owned by non-root.
On a fresh install of Hammer, this should be root:root.
The --setgroup flag was added to ceph-deploy in 1.5.26.
- Travis
On Wed, Sep 2, 2015 at 1:59 PM
A couple of things here...
Looks like you are on RHEL. If you are on RHEL, but *not* trying to install
RHCS (Red Hat Ceph Storage), a few extra flags are required. You must use
"--release". For example, "ceph-deploy install --release hammer " in
order to get the Hammer upstream release.
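Concretely, something along these lines (the hostnames are just
placeholders):

    # pull the upstream Hammer packages onto the listed nodes
    ceph-deploy install --release hammer node1 node2 node3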
The
Hi everyone,
A new version of ceph-deploy has been released. Version 1.5.28
includes the following:
- A fix for a regression introduced in 1.5.27 that prevented
importing GPG keys on CentOS 6 only.
- Will prevent Ceph daemon deployment on nodes that don't have Ceph
installed on them.
- Makes i
Hi Nigel,
On Wed, Aug 5, 2015 at 9:00 PM, Nigel Williams
wrote:
> On 6/08/2015 9:45 AM, Travis Rhoden wrote:
>>
>> A new version of ceph-deploy has been released. Version 1.5.27
>> includes the following:
>
>
> Has the syntax for use of --zap-disk changed? I moved i
Hi everyone,
A new version of ceph-deploy has been released. Version 1.5.27
includes the following:
- a new "ceph-deploy repo" command that allows for adding and
removing custom repo definitions
- Makes commands like "ceph-deploy install --rgw" only install the
RGW component of Ceph.
This work
On Fri, Jul 31, 2015 at 3:09 AM, David Disseldorp wrote:
> On Wed, 29 Jul 2015 15:41:29 -0700, Travis Rhoden wrote:
>
>> There are the following flags in ceph-deploy today that can control
>> what daemons get installed:
>>
>> ceph-deploy install --help
>> ..
Hi Quentin,
It may be the specific option you are trying to tweak.
osd-scrub-begin-hour was first introduced in development release
v0.93, which means it would be in 0.94.x (Hammer), but your cluster is
0.87.1 (Giant).
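For reference, on a Hammer cluster the setting would go roughly like this
(a sketch only, and the hours are just examples):

    # ceph.conf on the OSD nodes -- only recognized from v0.93 onward
    [osd]
    osd scrub begin hour = 1
    osd scrub end hour = 7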
Cheers,
- Travis
On Wed, Jul 29, 2015 at 4:28 PM, Quentin Hartman
wrote:
>
Right now when a cluster is installed by ceph-deploy, ceph-deploy
installs *everything*. By this I mean all packages needed to run
ceph-mon, ceph-osd, ceph-mds, and radosgw.
The way this is actually split out into packages depends on the
distro. For RPM-distros, the 'ceph' package includes ceph-mo
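To give a rough idea of the more granular behavior being discussed in this
thread, the per-component flags on "ceph-deploy install" look roughly like
this (hostnames are placeholders):

    ceph-deploy install --mon mon1        # monitor packages only
    ceph-deploy install --osd osd1 osd2   # OSD packages only
    ceph-deploy install --rgw gateway1    # radosgw packages only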
On Tue, Jul 28, 2015 at 12:13 PM, Sage Weil wrote:
> Hey,
>
> I've finally had some time to play with the systemd integration branch on
> fedora 22. It's in wip-systemd and my current list of issues includes:
>
> - after mon creation ceph-create-keys isn't run automagically
> - Personally I kin
el.repo) though. That's why I thought about
breaking it into two phases.
>
>
> On Fri, Jul 24, 2015 at 4:28 PM Pete Zaitcev wrote:
>>
>> On Thu, 23 Jul 2015 16:12:59 -0700
>> Travis Rhoden wrote:
>>
>> > I’m working on ways to improve Ceph installati
Hi Pete,
Thanks for the input. I think Ken can probably help shed some light
on this as well for you, but some things I can add...
On Fri, Jul 24, 2015 at 4:24 PM, Pete Zaitcev wrote:
> On Thu, 23 Jul 2015 16:12:59 -0700
> Travis Rhoden wrote:
>
>> I’m working on ways
Hi Noah,
It does look like the two things are unrelated. But you are right,
ceph-deploy stopped accepting that trailing hostname with the
"ceph-deploy mon create-initial" command with 1.5.26. It was never a
needed argument, and accepting it led to confusion. I tightened up
the argument parsing
Hi Bernhard,
Thanks for your email. systemd support for Ceph in general is still a
work in progress. It is actively being worked on, but the packages
hosted on ceph.com are still using sysvinit (for RPM systems), and
Upstart on Ubuntu. It is definitely a known issue.
Along those lines, ceph.co
pkgs.fedoraproject.org/cgit/ceph.git/commit/?h=epel7&id=c9a91bad2f3c3083b8dad7a1feb9f84994c2f35c
- Travis
>
> - Shinobu
>
>
> On Fri, Jul 24, 2015 at 8:12 AM, Travis Rhoden wrote:
>> HI Everyone,
>>
>> I’m working on ways to improve Ceph installation
Hi Everyone,
I’m working on ways to improve Ceph installation with ceph-deploy, and a common
hurdle we have hit involves dependency issues between ceph.com hosted RPM
repos and packages within EPEL. For a while we were able to manage this with
the priorities plugin, but then EPEL shipped pac
Hi everyone,
This is announcing a new release of ceph-deploy that focuses on usability
improvements.
- Most of the help menus for ceph-deploy subcommands (e.g. “ceph-deploy mon”
and “ceph-deploy osd”) have been improved to be more context aware, such that
help for “ceph-deploy osd create --h
it defined? is it up?) and to re-sync the model. It seems
like a lot of back and forth interaction to keep the model up to date,
and ultimately we lose all that information when the application
exits.
That's my initial feedback.
- Travis
On Fri, Jul 17, 2015 at 2:31 AM, Owen Synge wrote
Hi Owen,
I'm still giving this one some thought. I've gone back and reviewed
https://github.com/ceph/ceph-deploy/pull/320 a few more times. I do
understand how it works (it took a couple times through it), and
cosmetic things notwithstanding I can appreciate what it is doing. I
also fully get th
> On Jul 14, 2015, at 3:41 AM, Owen Synge wrote:
>
> Dear Travis,
>
> We clearly disagree in this area.
>
> I hope me explaining my perspective is not seen as unhelpful.
>
> On 07/09/2015 07:00 PM, Travis Rhoden wrote:
>>> (2B) inflexible / complex i
> On Jul 14, 2015, at 1:47 AM, Owen Synge wrote:
>
> On 07/09/2015 09:58 PM, Travis Rhoden wrote:
>>
>>> On Jul 9, 2015, at 4:59 AM, Owen Synge wrote:
>>>
>>> On 07/09/2015 12:46 PM, John Spray wrote:
>>>> Owen,
>>>
>>>
still apply to
>> ceph and ceph-deploy.
>>
>> From what you said, in my opinion the "boat anchor" in ceph-deploy is
>> redefined, as coupling of facade pattern, where all data is available,
>> to the ssh loop in a connection. This is probably the biggest single
>
> On Jul 9, 2015, at 4:59 AM, Owen Synge wrote:
>
> On 07/09/2015 12:46 PM, John Spray wrote:
>> Owen,
>
> Hi John,
>
> thanks for your reasonable mail.
>
>> Please can you say what your overall goal is with recent ceph-deploy
>> patches?
>
> To give ceph-deploy code features we have in dow
Hi Owen,
There are quite a few emails in this thread already, but there are points in
each of them I would like to address.
Up front let me say I hear and recognize your frustration. If you’ve felt
ignored, I apologize. I think you’ve turned around a lot of patches, PR
comments, and emails f
Hi Pankaj,
While there have been times in the past where ARM binaries were hosted
on ceph.com, there is not currently any ARM hardware for builds. I
don't think you will see any ARM binaries in
http://ceph.com/debian-hammer/pool/main/c/ceph/, for example.
Combine that with the fact that ceph-dep
Hi everyone,
This is announcing a new release of ceph-deploy that fixes a security
related issue, improves SUSE support, and improves support for RGW on
RPM systems. ceph-deploy can be installed from ceph.com hosted repos
for Firefly, Giant, Hammer, and testing, and is also available on
PyPI.
Ea
Hi Mark,
Thanks for the detective work! I'll confirm whether the constraint on
the client name is really intended -- I believe that it is. Once
confirmed, I completely agree that ceph-deploy should not allow you to
create rgw daemons with names that we know Ceph is going to reject.
- Travis
O
I did also confirm that, as Ken mentioned, this is not a problem on Hammer
since Hammer includes the package split (python-ceph became python-rados
and python-rbd).
- Travis
On Wed, Apr 8, 2015 at 5:00 PM, Travis Rhoden wrote:
> Hi Vickey,
>
> The easiest way I know of to get ar
Hi Vickey,
The easiest way I know of to get around this right now is to add the
following line to the [epel] section of /etc/yum.repos.d/epel.repo
exclude=python-rados python-rbd
So this is what my epel.repo file looks like: http://fpaste.org/208681/
It is those two packages in EPEL that are causing the conflict.
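In context the [epel] stanza ends up looking roughly like this -- only the
exclude line is the addition, the rest is the stock definition (yours may
differ slightly):

    [epel]
    name=Extra Packages for Enterprise Linux 7 - $basearch
    mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
    enabled=1
    gpgcheck=1
    # keep yum from pulling the Ceph python bindings out of EPEL
    exclude=python-rados python-rbd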
Hi Frederic,
Thanks for the report! Do you mind throwing these details into a bug
report at http://tracker.ceph.com/ ?
I have seen the same thing once before, but at the time didn't have
the chance to check if the inconsistency was coming from ceph-deploy
or from ceph-disk. This certainly seems
Hi All,
This is a new release of ceph-deploy that includes a new feature for
Hammer and bugfixes. ceph-deploy can be installed from the ceph.com
hosted repos for Firefly, Giant, Hammer, or testing, and is also
available on PyPI.
ceph-deploy now defaults to installing the Hammer release. If you n
Hi All,
This is a new release of ceph-deploy that changes a couple of behaviors.
On RPM-based distros, ceph-deploy will now automatically enable
check_obsoletes in the Yum priorities plugin. This resolves an issue
many community members hit where package dependency resolution was
breaking due to
Hi Khyati,
On Sat, Mar 7, 2015 at 5:18 AM, khyati joshi wrote:
> Hello ceph-users,
>
> I am new to ceph. I am using centos-5.11 (i386) for deploying ceph,
> and epel-release-5.4.noarch.rpm is successfully installed.
ceph (and ceph-deploy) is not packaged for CentOS 5. You'll need to use 6 or 7.
On Wed, Mar 4, 2015 at 4:43 PM, Lionel Bouton
wrote:
> On 03/04/15 22:18, John Spray wrote:
>> On 04/03/2015 20:27, Datatone Lists wrote:
>>> [...] [Please don't mention ceph-deploy]
>> This kind of comment isn't very helpful unless there is a specific
>> issue with ceph-deploy that is preventing
vis
>
> Thanks & Regards
> Somnath
>
> -Original Message-
> From: Travis Rhoden [mailto:trho...@gmail.com]
> Sent: Wednesday, February 25, 2015 4:46 PM
> To: Somnath Roy
> Cc: Sage Weil; Ceph Development
> Subject: Re: librados2 and librbd1 dependency on libvirt-
Somnath
>
> -Original Message-----
> From: ceph-devel-ow...@vger.kernel.org
> [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Travis Rhoden
> Sent: Wednesday, February 25, 2015 4:23 PM
> To: Somnath Roy
> Cc: Sage Weil; Ceph Development
> Subject: Re: librados2 and librbd1 dep
Somnath,
Just so I can try to recreate fully, can you also tell me the version
of Ceph you are using, the version of ceph-deploy (previously
mentioned), and the version of libvirt? For example, if you have the
Juno cloud-archive enabled or anything.
- Travis
On Wed, Feb 25, 2015 at 3:36 PM, So
Hi Pankaj,
I can't say that it will fix the issue, but the first thing I would
encourage is to use the latest ceph-deploy.
You are using 1.4.0, which is quite old. The latest is 1.5.21.
- Travis
On Wed, Feb 25, 2015 at 3:38 PM, Garg, Pankaj
wrote:
> Hi,
>
> I had a successful ceph cluster th
On Wed, Feb 25, 2015 at 3:33 PM, Somnath Roy wrote:
> We did ‘ceph-deploy purge ’
> According to the following link
> http://ceph.com/docs/master/rados/deployment/ceph-deploy-purge/
>
> -Original Message-
> From: Travis Rhoden [mailto:trho...@gmail.com]
> Sent: Wednesd
On Debian/Ubuntu, the uninstall/purge will remove the following packages:
ceph
ceph-mds
ceph-common
ceph-fs-common.
Ironically, ceph-deploy will spit out on the CLI that it does *not*
remove librados2 and librbd1, because yes that would break
qemu/libvirt. The intent is to leave those libraries in place.
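Roughly, what gets run on the remote host amounts to this (a sketch; the
exact apt invocation varies by ceph-deploy version):

    # note that librados2 and librbd1 are deliberately not in this list
    apt-get --yes -q remove --purge ceph ceph-mds ceph-common ceph-fs-common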
Also, did you successfully start your monitor(s), and define/create the
OSDs within the Ceph cluster itself?
There are several steps to creating a Ceph cluster manually. I'm unsure if
you have done the steps to actually create and register the OSDs with the
cluster.
- Travis
On Wed, Feb 25, 20
Note that ceph-deploy would enable EPEL for you automatically on
CentOS. When doing a manual installation, the requirement for EPEL is
called out here:
http://ceph.com/docs/master/install/get-packages/#id8
Though looking at that, we could probably update it to use the now
much easier to use "yum
t; gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
> gpgcheck=1
>
> [epel-testing-source]
> name=Extra Packages for Enterprise Linux 7 - Testing - $basearch - Source
> #baseurl=http://download.fedoraproject.org/pub/epel/testing/7/SRPMS
> mirrorlist=https://mirrors.fedoraprojec
Hi Paul,
Would you mind sharing/posting the contents of your .repo files for
ceph, ceph-el7, and ceph-noarch repos?
I see that python-rbd is getting pulled in from EPEL, which I don't
think is what you want.
My guess is that you need the fix documented in
http://tracker.ceph.com/issues/10476, th
Hi Karl,
Sorry that I missed this go by. If you are still hitting this issue,
I'd like to help you and figure this one out, especially since you are
not the only person to have hit it.
Can you pass along your system details (OS, version, etc.)?
I'd also like to know how you installed ceph-depl
Hi John,
For the last part, there being two different versions of packages in
Giant, I don't think that's the actual problem.
What's really happening there is that python-ceph has been obsoleted
by other packages that are getting picked up by Yum. See the line
that says "Package python-ceph is o
at 4:05 PM, Travis Rhoden wrote:
> Hi Noah,
>
> I'll try to recreate this on a fresh FC20 install as well. Looks to
> me like there might be a repo priority issue. It's mixing packages
> from Fedora downstream repos and the ceph.com upstream repos. That's
>
Hi Noah,
I'll try to recreate this on a fresh FC20 install as well. Looks to
me like there might be a repo priority issue. It's mixing packages
from Fedora downstream repos and the ceph.com upstream repos. That's
not supposed to happen.
- Travis
On Wed, Jan 7, 2015 at 2:15 PM, Noah Watkins
Hello,
Can you give the link to the exact instructions you followed?
For CentOS7 (EL7) ceph-extras should not be necessary. The instructions at
[1] do not have you enable the ceph-extras repo. You will find that there
are EL7 packages at [2]. I recently found a README that was incorrectly
refere
On Tue, Jan 6, 2015 at 11:23 AM, Sage Weil wrote:
> On Tue, 6 Jan 2015, Travis Rhoden wrote:
>> On Tue, Jan 6, 2015 at 9:28 AM, Sage Weil wrote:
>> > On Tue, 6 Jan 2015, Wei-Chung Cheng wrote:
>> >> 2015-01-06 13:08 GMT+08:00 Sage Weil :
>> >> &
015-01-06 8:42 GMT+08:00 Robert LeBlanc :
>> >> > I do think the "find a journal partition" code isn't particularly
>> >> > robust.
>> >> > I've had experiences with ceph-disk trying to create a new partition
>> >>
On Mon, Jan 5, 2015 at 12:27 PM, Sage Weil wrote:
> On Mon, 5 Jan 2015, Travis Rhoden wrote:
>> Hi Loic and Wido,
>>
>> Loic - I agree with you that it makes more sense to implement the core
>> of the logic in ceph-disk where it can be re-used by other tools
out, stop service, remove marker files, umount) and (2)
remove: which would undefine the OSD within the cluster (remove from
CRUSH, remove cephx key, deallocate OSD ID).
I'm mostly talking out loud here. Looking for more ideas, input. :)
- Travis
On Sun, Jan 4, 2015 at 6:07 AM, Wido den
Hi Giuseppe,
ceph-deploy does try to do some pinning for the Ceph packages. Those
settings should be found at /etc/apt/preferences.d/ceph.pref
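The file is just a short pin stanza, something along these lines (a sketch;
the origin and priority depend on the ceph-deploy version and the repo you
chose):

    # /etc/apt/preferences.d/ceph.pref
    Package: *
    Pin: origin ceph.com
    Pin-Priority: 1001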
If you find something is incorrect there, please let us know what it is and
we can look into it!
- Travis
On Sat, Dec 20, 2014 at 11:32 AM, Giusep
Hello,
I believe this is a problem specific to Fedora packaging. The Fedora
package for ceph-deploy is a bit different than the ones hosted at ceph.com.
Can you please tell me the output of "rpm -q python-remoto"?
I believe the problem is that the python-remoto package is too old, and
there is n
Hi everyone,
There has been a long-standing request [1] to implement an OSD
"destroy" capability to ceph-deploy. A community user has submitted a
pull request implementing this feature [2]. While the code needs a
bit of work (there are a few things to work out before it would be
ready to merge),
One question re: discard support for kRBD -- does it matter which format
the RBD is? Are Format 1 and Format 2 both okay, or just Format 2?
- Travis
On Mon, Dec 15, 2014 at 8:58 AM, Max Power <
mailli...@ferienwohnung-altenbeken.de> wrote:
>
> > Ilya Dryomov wrote on 12 December 2014 at
> 18:00
Hi All,
This is a new release of ceph-deploy that defaults to installing the
Giant release of Ceph.
Additionally, there are a couple of bug fixes that makes sure that
calls to 'gatherkeys' returns non-zero upon failure, and that the EPEL
repo is properly enabled as a prerequisite to installation
.
If you visit http://tracker.ceph.com, in the top right-hand corner is a
link for "Register".
Hope that helps!
- Travis
>
> Thanks,
> Massimiliano Cuttini
>
>
> On 18/11/2014 23:03, Travis Rhoden wrote:
>
> I've captured this at http://tracker.cep
o format again?
>
That will depend on your goals. A mixed version cluster is viable, but if
you want Giant everywhere, you'll need to upgrade the packages on the node
running your OSDs and restart the OSDs themselves. An actual disk
re-format is not necessary.
- Travis
>
>
>
I've captured this at http://tracker.ceph.com/issues/10133
On Tue, Nov 18, 2014 at 4:48 PM, Travis Rhoden wrote:
> Hi Massimiliano,
>
> I just recreated this bug myself. Ceph-deploy is supposed to install EPEL
> automatically on the platforms that need it. I just confirmed
Hi Massimiliano,
I just recreated this bug myself. Ceph-deploy is supposed to install EPEL
automatically on the platforms that need it. I just confirmed that it is
not doing so, and will be opening up a bug in the Ceph tracker. I'll paste
it here when I do so you can follow it. Thanks for the
Hi Andrija,
I'm running a cluster with both CentOS and Ubuntu machines in it. I just
did some upgrades to 0.80.4, and I can confirm that doing "yum update ceph"
on the CentOS machine did result in having all OSDs on that machine
restarted automatically. I actually did not know that would happen,
.
So, I'm just trusting Ceph to do the right thing, and so far it seems to, but
the comments here about needing to determine the correct object and place
it on the primary PG make me wonder if I've been missing something.
- Travis
On Thu, Jul 10, 2014 at 10:19 AM, Travis Rhoden wrote:
>
I can also say that after a recent upgrade to Firefly, I have experienced
a massive uptick in scrub errors. The cluster was on cuttlefish for about a
year, and had maybe one or two scrub errors. After upgrading to Firefly,
we've probably seen 3 to 4 dozen in the last month or so (was getting 2-3 a
Hi George,
I actually asked Sage about a similar scenario at the OpenStack summit in
Atlanta this year -- namely if I could use the new pool quota functionality
to enforce quotas on CephFS. The answer was no, that the pool quota
functionality is mostly intended for radosgw and that the existing c
- public-leaf2: 10.2.2.0/24
>
> ceph.conf would be:
>
> cluster_network: 10.1.0.0/255.255.0.0
> public_network: 10.2.0.0/255.255.0.0
>
> - Mike Dawson
>
>
> On 5/28/2014 1:01 PM, Travis Rhoden wrote:
>
>> Hi folks,
>>
>> Does anybody know if there
Hi folks,
Does anybody know if there are any issues running Ceph with multiple L2 LAN
segments? I'm picturing a large multi-rack/multi-row deployment where you
may give each rack (or row) its own L2 segment, then connect them all with
L3/ECMP in a leaf-spine architecture.
I'm wondering how clu
You can define the UUID in the secret.xml file. That way you can generate
one yourself, or let it autogenerate the first one for you and then use the
same one on all the other compute nodes.
In the Ceph docs, it actually generates one using "uuidgen", then puts that
UUID in the secret.xml file it
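A rough sketch of that flow (the UUID, key name, and file name below are
just examples):

    # secret.xml -- reuse the same UUID on every compute node
    <secret ephemeral='no' private='no'>
      <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
      <usage type='ceph'>
        <name>client.cinder secret</name>
      </usage>
    </secret>

    # then, on each node:
    virsh secret-define --file secret.xml
    virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 \
        --base64 $(ceph auth get-key client.cinder)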
Sage,
Congrats to you and Inktank!
- Travis
On Wed, Apr 30, 2014 at 9:27 AM, Haomai Wang wrote:
> Congratulation!
>
> On Wed, Apr 30, 2014 at 8:18 PM, Sage Weil wrote:
> > Today we are announcing some very big news: Red Hat is acquiring Inktank.
> > We are very excited about what this means
On Apr 25, 2014, at 21:07, Drew Weaver wrote:
>
> You can actually just install it using the Ubuntu packages. I did it
> yesterday on Trusty.
>
>
>
> Thanks,
>
> -Drew
>
>
>
>
>
> *From:* ceph-users-boun...@lists.ceph.com [
> mailto:ceph-users-boun...
Are there packages for Trusty being built yet?
I don't see Trusty listed at http://ceph.com/debian-emperor/dists/
Thanks,
- Travis
mbitious I'll submit changes for the docs...
Thanks for the help!
- Travis
On Wed, Apr 2, 2014 at 12:00 PM, Travis Rhoden wrote:
> Thanks for the response Greg.
>
> Unfortunately, I appear to be missing something. If I use my "cephfs" key
> with these perms:
>
>
s
> somebody else recently pointed out, but [3] will give you the
> filesystem access you're looking for.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Fri, Mar 28, 2014 at 9:40 AM, Travis Rhoden wrote:
> > Hi Folks,
> >
> >
Hi Folks,
What would be the right set of capabilities to set for a new client key
that has access to CephFS only? I've seen a few different examples:
[1] mds 'allow *' mon 'allow r' osd 'allow rwx pool=data'
[2] mon 'allow r' osd 'allow rwx pool=data'
[3] mds 'allow rwx' mon 'allow r' osd 'allow
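For concreteness, [1] spelled out as a full command would be something like
this (the client name "cephfs" is just an example):

    ceph auth get-or-create client.cephfs \
        mds 'allow *' mon 'allow r' osd 'allow rwx pool=data' \
        -o /etc/ceph/ceph.client.cephfs.keyring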
, Travis Rhoden wrote:
> Thanks for the feedback -- I'll post back with more detailed logs if
> anything looks fishy!
>
>
> On Tue, Mar 25, 2014 at 1:10 PM, Gregory Farnum wrote:
>
>> Well, you could try running with messenger debugging cranked all the
>> way
Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Tue, Mar 25, 2014 at 10:05 AM, Travis Rhoden wrote:
> >
> >
> >
> > On Tue, Mar 25, 2014 at 12:53 PM, Gregory Farnum
> wrote:
> >>
> >> On Tue, Mar 25, 2014 at 9:24 AM, Travis Rh
On Tue, Mar 25, 2014 at 12:53 PM, Gregory Farnum wrote:
> On Tue, Mar 25, 2014 at 9:24 AM, Travis Rhoden wrote:
> > Okay, last one until I get some guidance. Sorry for the spam, but
> wanted to
> > paint a full picture. Here are debug logs from all three mons, capturing
>
> What is your netmask? (as I see 10.10.30.0 for mon1)
>
Everything is on 10.10.0.0/16
Did you really see something that said 10.10.30.0 for mon1? Because it
should be .1, not .0... .0 for mon0, .1 for mon1, etc.
Thanks for the response!
>
>
>
>
> - Mail original -
>
2014-03-25 16:17:30.354040 7f80d0013700 5 mon.ceph2@2(electing).elector(35)
election timer expired
Oddly, it looks to me like mon.2 (ceph2) never handles/receives the
proposal from mon.0 (ceph0). But I admit I have no clue how monitor
election works.
- Travis
On Tue, Mar 25, 2014 at 12:04 PM,
Tue, Mar 25, 2014 at 11:48 AM, Travis Rhoden wrote:
> Just to emphasize that I don't think it's clock skew, here is the NTP
> state of all three monitors:
>
> # ansible ceph_mons -m command -a "ntpq -p" -kK
> SSH password:
> sudo password [defa
     remote           refid      st t  when poll reach   delay   offset  jitter
==============================================================================
*controller-10g 198.60.73.8       2 u    30   64  377    0.201   -0.063   0.063
I think they are pretty well in synch.
- Travis
On Tue, Mar 25, 2014 at 11:09 AM, Travis Rhoden wrote:
> Hello,
>
> I just deployed a new Emperor cluster
Hello,
I just deployed a new Emperor cluster using ceph-deploy 1.4. All went very
smoothly, until I rebooted all the nodes. After reboot, the monitors no
longer form a quorum.
I followed the troubleshooting steps here:
http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-mon/
Specif
I had been working on something similar to Seb's. Alfredo's (which I
had looked at before) seems to be a wrapper around ceph-deploy.
Useful, but not as fully featured. It also hasn't been updated in 7
months.
I would certainly like to see a more fully-featured set of Ansible
playbooks in Ceph's
On Thu, Jan 9, 2014 at 9:48 AM, Alfredo Deza wrote:
> On Thu, Jan 9, 2014 at 9:45 AM, Travis Rhoden wrote:
>> HI Mordur,
>>
>> I'm definitely straining my memory on this one, but happy to help if I can?
>>
>> I'm pretty sure I did not figure it out
Hi Mordur,
I'm definitely straining my memory on this one, but happy to help if I can?
I'm pretty sure I did not figure it out -- you can see I didn't get
any feedback from the list. What I did do, however, was uninstall
everything and try the same setup with mkcephfs, which worked fine at
the t
Eric,
Yeah, your OSD weights are a little crazy...
For example, looking at one host from your output of "ceph osd tree"...
-3   31.5       host tca23
 1    3.63          osd.1   up   1
 7    0.26          osd.7   up   1
13    2.72          osd.13
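If those drives are actually about the same size, the usual fix would be
along these lines (the target weight is just an example, roughly the drive
size in TB):

    ceph osd crush reweight osd.7 3.63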
>
> I think this family of issues speaks to the need for Ceph to have more
> visibility into the underlying storage's limitations (especially spindle
> contention) when performing known expensive maintenance operations.
>
> Thanks,
> Mike Dawson
>
>
> On 9/27/2
Hello everyone,
I'm running a Cuttlefish cluster that hosts a lot of RBDs. I recently
removed a snapshot of a large one (rbd snap rm -- 12TB), and I noticed
that all of the clients had markedly decreased performance. Looking
at iostat on the OSD nodes showed most disks pegged at 100% util.
I know
On Tue, Sep 24, 2013 at 5:16 PM, Sage Weil wrote:
> On Tue, 24 Sep 2013, Travis Rhoden wrote:
>> This "noshare" option may have just helped me a ton -- I sure wish I would
>> have asked similar questions sooner, because I have seen the same failure to
>> scale.