https://github.com/ceph/ceph/pull/7119 fixed an issue preventing docs
from building. Master is fixed; merge that into your branches if you
want working docs again.
--
Dan Mick
Red Hat, Inc.
Ceph docs: http://ceph.com/docs
--
To unsubscribe from this list: send the line "unsubscribe ceph-
On 12/21/2015 11:29 PM, Gregory Farnum wrote:
> On Mon, Dec 21, 2015 at 9:59 PM, Dan Mick <dm...@redhat.com> wrote:
>> I needed something to fetch current config values from all OSDs (sorta
the opposite of 'injectargs --key value'), so I hacked it, and then
>>
to use [default: ./ceph.conf]
-u USER    user to connect with ssh
-f FILE    get names and osds from yaml
COMMAND    command other than "config get" to execute
-k KEY     config key to retrieve with config get
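A minimal sketch of the idea behind such a tool (the "ceph daemon osd.N config get KEY" admin-socket command is real; the function name, host/OSD mapping, and ssh invocation style are assumptions, not the actual script):

```python
# Hypothetical sketch: build the per-OSD ssh command lines that a
# "fetch config from all OSDs" tool might run.
def build_commands(host_osds, key, user="root"):
    cmds = []
    for host, osd_ids in sorted(host_osds.items()):
        for osd in osd_ids:
            cmds.append("ssh %s@%s ceph daemon osd.%d config get %s"
                        % (user, host, osd, key))
    return cmds

for cmd in build_commands({"node1": [0, 1]}, "osd_max_backfills", user="ubuntu"):
    print(cmd)
```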
On 12/02/2015 09:26 PM, Dan Mick wrote:
> On 12/01/2015 04:44 AM, Nathan Cutler wrote:
>> So I go to http://ceph.com and click on the "Documentation" link. In the
>> HTML source code the linked URL is http://ceph.com/docs but the HTTP
>> server rewrites this to:
a Google search for "troubleshoot pgs in ceph":
> http://tinyurl.com/nfuohss )
>
Yes, agreed; we were in the process of relocating that webserver, so had
been neglecting the old configuration. Hopefully we'll get it redone
tomorrow (one in a list of a zillion things arou
On 11/30/2015 11:57 AM, Deneau, Tom wrote:
> I did not see the source tarball for 10.0.0 at
> http://download.ceph.com/tarballs/ceph-10.0.0.tar.gz
>
It's there now, FWIW.
tracker.ceph.com will be brought down today for upgrade and move to a
new host. I plan to do this at about 4PM PST (40 minutes from now).
Expect a downtime of about 15-20 minutes. More notification to follow.
It's back. New DNS info is propagating its way around. If you
absolutely must get to it, newtracker.ceph.com is the new address, but
please don't bookmark that, as it will be going away after the transition.
Please let me know of any problems you have.
On 10/22/2015 04:09 PM, Dan Mick wrote
Fixed a configuration problem preventing updating issues, and switched
the mailer to use ipv4; if you updated and failed, or missed an email
notification, that may have been why.
On 10/22/2015 04:51 PM, Dan Mick wrote:
> It's back. New DNS info is propagating its way around. If you
> abso
tracker.ceph.com down now
On 10/22/2015 03:20 PM, Dan Mick wrote:
> tracker.ceph.com will be brought down today for upgrade and move to a
> new host. I plan to do this at about 4PM PST (40 minutes from now).
> Expect a downtime of about 15-20 minutes. More notification to follow.
>
Found that issue; reverted the database to the non-backlog-plugin state,
created a test bug. Retry?
On 10/22/2015 06:54 PM, Dan Mick wrote:
> I see that too. I suspect this is because of leftover database columns
> from the backlogs plugin, which is removed. Looking into it.
>
> O
you were trying to access.
> If you continue to experience problems please contact your Redmine
> administrator for assistance.
>
> If you are the Redmine administrator, check your log files for details
> about the error.
>
>
> On Thu, Oct 22, 2015 at 6:15 PM, Dan
the Ceph
> cluster, would make sense from performance, resource handling including
> networking resource point of views.
>
> So, do you remember?
>
> Shinobu
>
> - Original Message -
> From: "Dan Mick" <dm...@redhat.com>
> To: "Shi
to have more of them?
currently needed.
On 07/23/2015 09:44 AM, Sage Weil wrote:
On Thu, 23 Jul 2015, Deneau, Tom wrote:
I wanted to register for tracker.ceph.com to enter a few issues but never
got the confirming email and my registration is now in some stuck state
(not complete but name/email in use so can't re-register). Any
/resize commands was explicitly added by commit 08f47a4. Dan Mick or
Josh Durgin could probably better explain the history behind the change since
it was before my time.
can just make sure the block
size is larger than zero
-----Original Message-----
From: Dan Mick [mailto:dm...@redhat.com]
Sent: July 24, 2015 5:25
To: Jason Dillaman; zhengbin 08747 (RD)
Cc: ceph-devel
Subject: Re: hello, I am confused about a question of rbd
Why not zero?
If the answer is it can't
On 07/20/2015 07:19 AM, Sage Weil wrote:
On Mon, 20 Jul 2015, Alexandre DERUMIER wrote:
Hi,
debian jessie gitbuilder has been ok for two weeks now,
http://gitbuilder.sepia.ceph.com/gitbuilder-ceph-deb-jessie-amd64-basic
It is possible to push packages to repositories ?
The overall issue here is, of course, there's a lot of vars that cephlab
relies on that are not generated by the task itself (in the case of no
inventory file). I'm not sure what the plan was to cope with all those
vars.
On 07/07/2015 05:41 PM, Loic Dachary wrote:
Hi Zack Andrew,
With Dan's
.
Joe
-Original Message-
From: John Spray [mailto:john.sp...@redhat.com]
Sent: Friday, June 05, 2015 7:39 PM
To: Handzik, Joe; Sage Weil
Cc: gm...@redhat.com; ceph-devel@vger.kernel.org; Dan Mick (dm...@redhat.com)
Subject: Re: Thoughts about metadata exposure in Calamari
be nice to add an epoch/version to metadata as well (that would
probably be independent of the other versions, unless I'm missing some
coordination). It ends up being an optimization, but probably a very
useful one.
Joe
-Original Message-
From: Dan Mick [mailto:dm...@redhat.com]
Sent
: tracker.ceph.com
Accept: */*
Content-Length: 239
* We are completely uploaded and fine
* Empty reply from server
* Connection #0 to host tracker.ceph.com left intact
Am I doing something wrong ?
Cheers
a kernel stanza to the yaml for the suite
in question).
All cron jobs on teuthology and magna002 have already been changed to
explicitly specify a kernel branch.
Dan Mick, Filesystem Engineering
Inktank Storage, Inc. http://inktank.com
Ceph docs: http://ceph.com/docs
On 10/09/2014 02:16 PM, Fabian Frederick wrote:
Fix some coccinelle warnings:
fs/ceph/caps.c:2400:6-10: WARNING: Assignment of bool to 0/1
- bool wake = 0;
+ bool wake = false;
FWIW, that message is backwards: it should say WARNING: Assignment of
0/1 to bool (I know, it's a
Just adding a note in case you hadn't noticed that updatedb itself has a
CLI for managing the .conf: --add-prune{fs,names,paths}. Sadly, there
is no --remove, but at least it lets the conf file format be abstract.
+1 on everything has a .d/ dir though.
On 02/20/2014 10:47 AM, Sage Weil wrote:
One comment; otherwise looks rational to me
On 04/01/2014 02:44 PM, Sage Weil wrote:
Can someone please review
https://github.com/ceph/ceph/pull/1581
There are a couple of minor packages changes here that could use a second
look. Mostly just shifting things into ceph-common from
...osd, mon, mds, client. It's not
really normal operation, but you can find stats there, and often
things like status, version of the software, etc.
Ah, yes, that's in the validator for CephEntityAddr; it's not checking
for the case that no nonce was supplied. I've filed
http://tracker.ceph.com/issues/6425.
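The validation gap described here, reduced to a sketch (an entity address is "ip:port" optionally followed by "/nonce"; function name and return shape are illustrative, not the actual validator code):

```python
# Hypothetical sketch: accept the nonce-less form instead of assuming
# the "/nonce" suffix is always present, which was the bug.
def parse_entity_addr(addr):
    if "/" in addr:
        ipport, nonce = addr.rsplit("/", 1)
        nonce = int(nonce)
    else:
        ipport, nonce = addr, 0  # nonce omitted: default rather than fail
    ip, port = ipport.rsplit(":", 1)
    return ip, int(port), nonce

print(parse_entity_addr("10.0.0.1:6789/1234"))
print(parse_entity_addr("10.0.0.1:6789"))
```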
On 09/24/2013 04:57 PM, Mandell Degerness wrote:
See trace below. We run this command on system restart in order to
clear any
Pull request created with fix.
On 09/26/2013 04:51 PM, Dan Mick wrote:
Ah, yes, that's in the validator for CephEntityAddr; it's not checking
for the case that no nonce was supplied. I've filed
http://tracker.ceph.com/issues/6425.
On 09/24/2013 04:57 PM, Mandell Degerness wrote:
See trace
-complete
Paper here: http://web.eecs.utk.edu/~plank/plank/papers/FAST-2013-GF.pdf
looks very much like the presentation Ethan gave at the conference.
Where are you looking? ceph.com/debian-testing has 0.68
On 09/04/2013 07:12 PM, 이주헌 wrote:
Debian/Ubuntu packages are still 0.67.2.
It's a bit surprising that it broke with cuttlefish; something might
have happened in dumpling, but we wouldn't expect changes in cuttlefish.
It looks like collectd just couldn't talk to the monitor properly.
Maybe look at the mon's log and see what it thinks it saw?
On 08/30/2013 05:04 AM,
Ah, there's another we apply universally to our test systems, apparently:
'/etc/security/limits.d/ubuntu.conf'
ubuntu hard nofile 16384
and the tests run as user ubuntu. Line 4 of the script is the nofile
setting.
On 08/10/2013 01:34 AM, Loic Dachary wrote:
On 10/08/2013 07:35, Dan Mick
IIRC we had to adjust settings in /etc/security to allow ulimit
adjustment of at least core:
sed -i 's/^#\*.*soft.*core.*0/\* soft core unlimited/g' /etc/security/limits.conf
or something like that. That seems to apply to centos/fedora/redhat
systems.
On 08/08/2013
There are a number of subsystems that the clients use, so a number of
knobs that matter. The logging/authentication name (the type.id
name), by default, is 'client.admin'. Some of the relevant logging
knobs are ms/monc; of course there are usually effects caused by the
daemons too, so the
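For concreteness, knobs like those go in a ceph.conf section for the client; this fragment is illustrative only (the ms/monc subsystem names appear above, but the debug levels here are arbitrary examples):

```ini
[client.admin]
    debug ms = 1
    debug monc = 10
```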
On 08/04/2013 10:51 PM, Noah Watkins wrote:
On Fri, Aug 2, 2013 at 1:58 AM, Niklas Goerke nik...@niklasgoerke.de wrote:
As for the documentation you referenced: I didn't find a documentation of
the RADOS Protocol which could be used to base an implementation of librados
upon. Does anything
I would get the cluster up and running and do some experiments before I
spent any time on optimization, much less all this.
On 07/20/2013 09:35 AM, Ta Ba Tuan wrote:
Please help me!
On 07/20/2013 02:11 AM, Ta Ba Tuan wrote:
Hi everyone,
I have *3 nodes (running MON and MDS)*
and *6 data
e5184ea95031b7bea4264062de083045767d5dc3 in master.
--
Dan Mick, Filesystem Engineering
Inktank Storage, Inc. http://inktank.com
Ceph docs: http://ceph.com/docs
--
To unsubscribe from this list: send the line unsubscribe ceph-devel in
the body of a message to majord...@vger.kernel.org
More majordomo info
On 05/29/2013 01:24 AM, Wido den Hollander wrote:
On 05/29/2013 10:14 AM, Wido den Hollander wrote:
Hi,
Is there a way to find out if a cluster is near full via librados?
Yes, there is. (Thanks tnt on IRC!)
There is of course rados_cluster_stat which will give you:
struct
the next entry
should be ceph-deploy-purge.
Ah. That's because the TOC *also* says that the next is MDS
Configuration (under CephFS). I guess we're sharing the Add/Remove MDS
page at two places in the hierarchy.
On 05/24/2013 02:16 PM, Travis Rhoden wrote:
On Fri, May 24, 2013 at 4:55 PM, Dan Mick dan.m...@inktank.com wrote:
On 05/24/2013 10:05 AM, Travis Rhoden wrote:
I stumbled across a weird TOC/navigation error in the docs.
On the page:
http://ceph.com/docs/master/rados/deployment/ceph
On 05/20/2013 05:00 PM, ymorita...@gmail.com wrote:
Hi,
I have found some issues on ceph v0.61.2 on Ubuntu 12.10.
(1) ceph-deploy osd create command fails when using --cluster name option.
[root@host3 yuji_ceph]# ceph-deploy --cluster yuji osd create host1:sdb
Traceback (most recent call
I also just pushed a fix to the cuttlefish branch, so if you want
packages that fix this, you can get them from gitbuilders using the
testing versions, branch cuttlefish.
Thanks, Mike, for pointing this out!
On 05/10/2013 08:27 PM, Mike Dawson wrote:
Anyone running 0.61.1,
Watch out for
On 04/22/2013 11:09 AM, Scott Sullivan wrote:
Referring to this:
http://ceph.com/dev-notes/adding-support-for-rbd-to-stgt/
I compiled the latest tgt with RBD support. My question is when using
this method to access RBD volumes, where do you tell it what user to
authenticate to the cluster
It's arguable, but we wanted to treat source and destination pools
separately in general.
Note that you can also specify images as POOLNAME/a and POOLNAME/b.
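The spec resolution described here, as a sketch (helper name and default are illustrative, not the rbd CLI's own code; "rbd" is the tool's default pool):

```python
# Hypothetical sketch: a spec containing "/" carries its own pool;
# otherwise the -p/--pool default applies.
def resolve_image_spec(spec, default_pool="rbd"):
    if "/" in spec:
        pool, image = spec.split("/", 1)
        return pool, image
    return default_pool, spec

print(resolve_image_spec("mypool/a"))
print(resolve_image_spec("b", default_pool="POOLNAME"))
```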
On 04/19/2013 12:28 AM, Stefan Priebe - Profihost AG wrote:
Hi,
if i issue rbd -p POOLNAME ... rename a b
It uses the POOL for a but
On 04/15/2013 01:06 PM, Gandalf Corvotempesta wrote:
2013/4/12 Mark Nelson mark.nel...@inktank.com
Currently reads always come from the primary OSD in the placement group
rather than a secondary even if the secondary is closer to the client.
In this way, only one OSD will be involved in
On 04/15/2013 01:24 PM, Stefan Priebe wrote:
is there a possibility to disable / skip the journal? Stuff like
FlashCache, BCache and others seem to work much better but sequential
I/O becomes a bottleneck if the ceph journal is on a single ssd.
As far as I know, the journal is a required
Indeed. http://tracker.ceph.com/issues/4678. Thanks for the report.
On 4/5/2013 5:49 PM, Lorieri wrote:
Hi,
My colleague run ceph pg dump --format with no other arguments.
It crashes all mon daemons in the cluster.
happens in ceph 0.60 and 0.56
3 mon, 3 osd, default crush map
# ceph pg
This is pretty cool, Sébastien.
On 03/28/2013 02:34 AM, Sebastien Han wrote:
Hello everybody,
Quite recently François Charlier and I worked together on the Puppet
modules for Ceph on behalf of our employer eNovance. In fact, François
started to work on them last summer, back then he achieved
Everything else was coming from github.com; ceph-object-corpus was the
one thing talking to ceph.com.
I just did
git clone git://ceph.com/git/ceph-object-corpus.git
and it worked for me.
On 03/27/2013 12:52 AM, charles L wrote:
Hi Joao,
I am able to access ceph.com without any issue and if
It looks like it attempts to behave as documented (somewhat surprisingly
and fragile-ly, IMO; I would have made it all-or-nothing and return
nothing in buf if len is too small).
The Python binding retries/resizes until it can return them all.
What do you think is wrong, Wido?
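The grow-and-retry behavior attributed to the Python binding, as a generic sketch (names are invented; fetch() stands in for a C-style call that needs a caller-supplied buffer and reports the size it actually needed):

```python
# Hypothetical sketch of the retry/resize pattern: if the result didn't
# fit the buffer, grow the buffer and call again until it does.
def read_all(fetch, initial=8):
    size = initial
    while True:
        needed, data = fetch(size)
        if needed <= size:
            return data
        size = max(needed, size * 2)
```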
On 03/27/2013
There is a typo though: List objects in a pool
That should be: List all pools
yep.
That's right. The remove versus unlink verbs make that pretty clear
to me, at least... Are you suggesting this be clarified in the docs, or
that the command set change? I think once we settle on the CLI, John can
make a pass through the crush docs and make sure these commands are
explained.
On 03/22/2013 05:37 AM, Jerker Nyberg wrote:
There seem to be a missing argument to ceph osd lost (also in help for
the command).
http://ceph.com/docs/master/rados/operations/control/#osd-subsystem
Indeed, it seems to be missing the id. The CLI is getting a big rework
right now, but the
As a point of comparison, mysql removes the config files but not
/var/lib/mysql.
The question is, is that okay/typical/desirable/recommended/a bad idea?
I should have asked this sooner. Do you know _any_ program that removes
your favorite music collection, your family photos or your
That depends on what functionality you want to test...
On 03/11/2013 04:03 AM, waed Albataineh wrote:
Hello there,
For the quick start of Ceph, do i need to continue the RESTful
Gateway quick start ??
even when i will just be testing the basic functions of Ceph!
called after throwing an instance of 'ceph::FailedAssertion'
Any clue why that happened?
This looks like
http://tracker.ceph.com/issues/4271
You're aware of the just-added ceph df? I don't know it well enough to know if
it's a solution, but it's in that space...
On Mar 6, 2013, at 6:48 AM, Patrick McGarry patr...@inktank.com wrote:
Proxy-ing this in for a user I had a discussion with on irc this morning:
The question is is
leveldb is required. What filesystem are you using that doesn't support
mmap?
On 02/07/2013 02:11 PM, sheng qiu wrote:
Is it possible comment the leveldb setup codes during mkfs call path?
i find the problem is because i am trying to use a bunch of memory as
the OSD, when the leveldb mmap()
On 02/06/2013 10:15 AM, Yehuda Sadeh wrote:
On Wed, Feb 6, 2013 at 9:25 AM, Sage Weil s...@inktank.com wrote:
One of the goals for cuttlefish is to improve manageability of the
system. This will involve both cleaning up the CLI and adding a REST API
to do everything the CLI currently does.
My take on this is to keep the current behaviour (client issues a
command and the monitor handles it as it sees fit), but all
communication should be done in json, either to or from the monitors.
This would allow us to provide more information on each result, getting
rid of all the annoying
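As a sketch of that "everything in JSON" shape (the "prefix" field naming the command matches the mon command format Ceph eventually adopted; the specific arguments shown are made up for the example):

```python
import json

# Hypothetical sketch: a monitor command as a JSON envelope whose
# "prefix" field names the command and whose other fields are arguments.
def build_mon_command(prefix, **kwargs):
    cmd = {"prefix": prefix}
    cmd.update(kwargs)
    return json.dumps(cmd, sort_keys=True)

print(build_mon_command("osd lost", id=2, sure=True))
```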
On 02/06/2013 12:14 PM, Sage Weil wrote:
On Wed, 6 Feb 2013, Dimitri Maziuk wrote:
On 02/06/2013 01:34 PM, Sage Weil wrote:
I think the one caveat here is that having a single registry for commands
in the monitor means that commands can come in two flavors: vectorstring
(cli) and URL
Thanks for the work, Adam!
On 02/06/2013 03:18 PM, Michel, Adam K - (amichel) wrote:
I did some testing against an RBD device in my local environment to suss out
differences between a few filesystem options. Folks in #ceph thought this data
might be interesting to some on the list, so I'm
Yes; as Martin said last night, you don't have the ceph module.
Did you build your own kernel?
See
http://ceph.com/docs/master/install/os-recommendations/#linux-kernel
On 02/05/2013 09:37 PM, femi anjorin wrote:
Linux 2.6.32-279.19.1.el6.x86_64 x86_64 CentOS 6.3
Pls can somebody help ... This
...and/or do you have the corepath set interestingly, or one of the
core-trapping mechanisms turned on?
On 02/04/2013 11:29 AM, Sage Weil wrote:
On Mon, 4 Feb 2013, Sébastien Han wrote:
Hum just tried several times on my test cluster and I can't get any
core dump. Does Ceph commit suicide or
On 02/04/2013 03:26 PM, Yasuhiro Ohara wrote:
3264 pgs: 1088 creating, 2176 active+clean
This means, I believe, that the cluster has healed 2176 of 3264 PGs, and
is working on the remaining 1088. You can use 'ceph -w' to observe the
progress, but I think your cluster is backfilling the
it is in the status.
ceph -w can show us the other progress (like backfilling on the
down/up osds), but has not shown any progress on the 'creating'.
regards,
Yasu
From: Dan Mick dan.m...@inktank.com
Subject: Re: Trigger to create PGs ?
Date: Mon, 04 Feb 2013 15:53:25 -0800
Message-ID: 511049f5
(rbd was set to 2, which meant it didn't match, which I'm sure is what
Sage meant. Just correcting the record for those scoring at home.)
On 02/04/2013 06:36 PM, Yasuhiro Ohara wrote:
Thanks Sage, it instantly fixed the problem.
:)
regards,
Yasu
From: Sage Weil s...@inktank.com
Subject:
You might want to ask a more-specific question.
On 01/31/2013 08:24 AM, charles L wrote:
I need some help and guide on compiling ceph client on Eclipse..
On 01/30/2013 11:55 AM, Sage Weil wrote:
I think the end goal is to build an rbd-fuse package, just like ceph-fuse.
I'm not sure it matters how mature the tool is before it goes into the
package; as long as it is separate, people can not install it, and the
distros can keep it out of their
On 01/30/2013 09:08 PM, Dan Mick wrote:
On 01/30/2013 11:55 AM, Sage Weil wrote:
I think the end goal is to build an rbd-fuse package, just like
ceph-fuse.
I'm not sure it matters how mature the tool is before it goes into the
package; as long as it is separate, people can not install
ignore
Thanks Danny, I'll look at these today.
On Jan 28, 2013, at 7:33 AM, Danny Al-Gaaf danny.al-g...@bisect.de wrote:
Here are three patches to fix some issues with the new rbd-fuse
code and an issue with the fuse handling in configure.
Danny Al-Gaaf (3):
configure: fix check for
I'd just noticed utime on my laptop 32-bit build and was trying to figure out
why our 32-bit nightly didn't see it. And Greg had seen the system build
problem where I didn't, and I was isolating differences there as well.
I purposely didn't spend time on the system() error handling because I
Actually Sage merged them into master. Thanks again.
On 01/28/2013 09:45 AM, Dan Mick wrote:
Thanks Danny, I'll look at these today.
On Jan 28, 2013, at 7:33 AM, Danny Al-Gaaf danny.al-g...@bisect.de wrote:
Here are three patches to fix some issues with the new rbd-fuse
code and an issue
Sage merged these into master. Thanks!
On 01/27/2013 12:57 PM, Danny Al-Gaaf wrote:
Attached two patches to fix some compiler warnings.
Danny Al-Gaaf (2):
utime: fix narrowing conversion compiler warning in sleep()
rbd: don't ignore return value of system()
src/include/utime.h | 2
On 1/25/2013 9:35 PM, Dan Mick wrote:
If the S3 API is not well suited to my scenario, then my effort should
be better directed to porting or writing a native ceph client for
Windows. I just need an API to read and write/append blocks to files.
Any comments are really appreciated.
Hopefully
If the S3 API is not well suited to my scenario, then my effort should
be better directed to porting or writing a native ceph client for
Windows. I just need an API to read and write/append blocks to files.
Any comments are really appreciated.
Hopefully someone with more windows experience
On 01/24/2013 07:28 AM, Dimitri Maziuk wrote:
On 1/24/2013 8:20 AM, Sam Lang wrote:
Yep it means that you only have one OSD with replication level of 2.
If you had a rep level of 3, you would see degraded (66.667%). If you
just want to make the message go away (for testing purposes), you can
On 01/20/2013 08:32 AM, Dimitri Maziuk wrote:
On 1/19/2013 11:13 AM, Sage Weil wrote:
If you want to use the kernel client(s), that is true: there are no
plans
to backport the client code to the ancient RHEL kernels. Nothing
prevents
you from running the server side, though, or the userland
You'd think that only one [osd] section in ceph.conf implies
nrep = 1, though. (And then you can go on adding OSDs and changing nrep
accordingly -- that was my plan.)
Yeah; it's probably mostly just that one-OSD configurations are so
uncommon that we never special-cased that small user set.
what is pool 2 (rbd) for? looks like it's absolutely empty.
by default it's for rbd images (see the rbd command etc.). It being
empty or not has no effect on the other pools.
On 01/22/2013 11:18 PM, Chen, Xiaoxi wrote:
Hi List,
Here is part of /etc/init.d/ceph script:
case $command in
start)
# Increase max_open_files, if the configuration calls for it.
get_conf max_open_files 8192 max open files
if [
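The pattern the script is applying there, reduced to a standalone sketch (variable name and the 8192 default are taken from the snippet; the conditional surrounding it in the real script is truncated above):

```shell
# Sketch of the get_conf pattern: use the configured value if set,
# otherwise fall back to the default, then raise the open-file limit.
max_open_files="${max_open_files:-8192}"
echo "$max_open_files"
ulimit -n "$max_open_files" 2>/dev/null || true
```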
On 01/21/2013 12:19 AM, Gandalf Corvotempesta wrote:
2013/1/21 Gregory Farnum g...@inktank.com:
I'm not quite sure what you mean…the use of the cluster network and public
network are really just intended as conveniences for people with multiple NICs on their box.
There's nothing preventing
The '-a/--allhosts' parameter is to spread the command across the
cluster...that is, service ceph -a start will start across the cluster.
On 01/22/2013 01:01 PM, Xing Lin wrote:
I like the current approach. I think it is more convenient to run
commands once at one host to do all the setup
Reviewed-by: Dan Mick dan.m...@inktank.com
On 01/22/2013 01:57 PM, Alex Elder wrote:
A few very minor changes to the rbd code:
- RBD_MAX_OPT_LEN is unused, so get rid of it
- Consolidate rbd options definitions
- Make rbd_segment_name() return pointer to const char
Signed-off
Reviewed-by: Dan Mick dan.m...@inktank.com
On 01/22/2013 01:58 PM, Alex Elder wrote:
The return type of rbd_get_num_segments() is int, but the values it
operates on are u64. Although it's not likely, there's no guarantee
the result won't exceed what can be represented in an int. The
function
It would not surprise me at all if gcov files are *highly* version
dependent. I don't know one way or the other, but it seems very possible.
On 01/15/2013 09:21 AM, Josh Durgin wrote:
On 01/15/2013 02:10 AM, Loic Dachary wrote:
On 01/14/2013 06:26 PM, Josh Durgin wrote:
Looking at how it's
Reviewed-by: Dan Mick dan.m...@inktank.com
On 01/14/2013 10:50 AM, Alex Elder wrote:
Define a new rbd device flags field, manipulated using atomic bit
operations. Replace the use of the current exists flag with a
bit in this new flags field.
Signed-off-by: Alex Elder el...@inktank.com
I see that set_bit is atomic, but I don't see that test_bit is. Am I
missing a subtlety?
On 01/14/2013 10:50 AM, Alex Elder wrote:
Define a new rbd device flags field, manipulated using atomic bit
operations. Replace the use of the current exists flag with a
bit in this new flags field.
I think I agree that the claim is that the onus is on the set, and so
I think the proposed code is safe.
On 01/14/2013 01:23 PM, Alex Elder wrote:
On 01/14/2013 02:32 PM, Dan Mick wrote:
I see that set_bit is atomic, but I don't see that test_bit is. Am I
missing a subtlety?
That's
A couple of very simple commits to rbd to help with krbd map/unmap requests.
On 1/10/2013 7:03 PM, Dan Mick wrote:
A couple of very simple commits to rbd to help with krbd map/unmap
requests.
thank you Mr. Elder.
Reviewed-by: Dan Mick dan.m...@inktank.com
On 01/03/2013 11:07 AM, Alex Elder wrote:
It's kind of a silly macro, but ceph_encode_8_safe() is the only one
missing from an otherwise pretty complete set. It's not used, but
neither are a couple of the others in this set.
While in there, insert
I personally dislike spaces after cast, but I haven't checked the
kernel style guide. Otherwise:
Reviewed-by: Dan Mick dan.m...@inktank.com
On 01/03/2013 02:40 PM, Alex Elder wrote:
The result field in a ceph osd reply header is a signed 32-bit type,
but rbd code often casually uses int
Thanks for the help, Roberto. Sure, send a pull request.
On 01/03/2013 01:31 PM, Roberto Aguilar wrote:
Thanks for clarifying, Mark. I made a quick change to the docs:
https://github.com/rca/ceph/commit/37b57cdf0fdc5c03eeff3f5eb58ff4010ce581f6
Can I send you a pull request?
Thanks,
Reviewed-by: Dan Mick dan.m...@inktank.com
On 01/03/2013 02:38 PM, Alex Elder wrote:
There are two names used for items of rbd_request structure type:
req and req_data. The former name is also used to represent
items of pointers to struct ceph_osd_request.
Change all variables that have