On 08/04/2013 10:51 PM, Noah Watkins wrote:
On Fri, Aug 2, 2013 at 1:58 AM, Niklas Goerke nik...@niklasgoerke.de wrote:
As for the documentation you referenced: I didn't find any documentation of
the RADOS protocol upon which an implementation of librados
could be based. Does anything
There are a number of subsystems that the clients use, so a number of
knobs that matter. The logging/authentication name (the type.id
name), by default, is 'client.admin'. Some of the relevant logging
knobs are ms/monc; of course there are usually effects caused by the
daemons too, so the
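For reference, raising those client-side subsystems in ceph.conf usually looks
something like this (a sketch; pick debug levels to taste):

[client]
        debug ms = 1                            ; messenger (wire) traffic
        debug monc = 10                         ; mon client
        log file = /var/log/ceph/$name.$pid.log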
IIRC we had to adjust settings in /etc/security to allow ulimit
adjustment of at least core:
sed -i 's/^#\*.*soft.*core.*0/\* soft core unlimited/g' /etc/security/limits.conf
or something like that. That seems to apply to centos/fedora/redhat
systems.
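For reference, the uncommented line that sed leaves behind should look like
the one below, and a fresh login shell should then report an unlimited core
size (a sketch, assuming PAM applies limits.conf at login):

* soft core unlimited

$ ulimit -c
unlimited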
On 08/08/2013
Ah, there's another we apply universally to our test systems, apparently:
'/etc/security/limits.d/ubuntu.conf'
ubuntu hard nofile 16384
and the tests run as user ubuntu. Line 4 of the script is the nofile
setting.
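A quick check that the hard limit took effect, assuming a login shell for the
ubuntu user (note -Hn: the limits.d entry above sets the hard limit):

$ sudo su - ubuntu -c 'ulimit -Hn'
16384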
On 08/10/2013 01:34 AM, Loic Dachary wrote:
On 10/08/2013 07:35, Dan Mick
It's a bit surprising that it broke with cuttlefish; something might
have happened in dumpling, but we wouldn't expect changes in cuttlefish.
It looks like collectd just couldn't talk to the monitor properly.
Maybe look at the mon's log and see what it thinks it saw?
On 08/30/2013 05:04 AM,
Where are you looking? ceph.com/debian-testing has 0.68
On 09/04/2013 07:12 PM, 이주헌 wrote:
Debian/Ubuntu packages are still 0.67.2.
Paper here: http://web.eecs.utk.edu/~plank/plank/papers/FAST-2013-GF.pdf
looks very much like the presentation Ethan gave at the conference.
Ah, yes, that's in the validator for CephEntityAddr; it's not checking
for the case that no nonce was supplied. I've filed
http://tracker.ceph.com/issues/6425.
On 09/24/2013 04:57 PM, Mandell Degerness wrote:
See trace below. We run this command on system restart in order to
clear any
Pull request created with fix.
On 09/26/2013 04:51 PM, Dan Mick wrote:
Ah, yes, that's in the validator for CephEntityAddr; it's not checking
for the case that no nonce was supplied. I've filed
http://tracker.ceph.com/issues/6425.
On 09/24/2013 04:57 PM, Mandell Degerness wrote:
See trace
...osd, mon, mds, client. It's not
really normal operation, but you can find stats there, and often
things like status, version of the software, etc.
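For example, against a daemon's admin socket (paths assume the default
/var/run/ceph layout):

$ ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok version
$ ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump   # the stats
$ ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok help        # list commands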
One comment; otherwise looks rational to me
On 04/01/2014 02:44 PM, Sage Weil wrote:
Can someone please review
https://github.com/ceph/ceph/pull/1581
There are a couple of minor packaging changes here that could use a second
look. Mostly just shifting things into ceph-common from
On 01/21/2013 12:19 AM, Gandalf Corvotempesta wrote:
2013/1/21 Gregory Farnum g...@inktank.com:
I'm not quite sure what you mean…the use of the cluster network and public
network are really just intended as conveniences for people with multiple NICs on their box.
There's nothing preventing
The '-a/--allhosts' parameter is to spread the command across the
cluster...that is, service ceph -a start will start across the cluster.
On 01/22/2013 01:01 PM, Xing Lin wrote:
I like the current approach. I think it is more convenient to run
commands once at one host to do all the setup
Reviewed-by: Dan Mick dan.m...@inktank.com
On 01/22/2013 01:57 PM, Alex Elder wrote:
A few very minor changes to the rbd code:
- RBD_MAX_OPT_LEN is unused, so get rid of it
- Consolidate rbd options definitions
- Make rbd_segment_name() return pointer to const char
Signed-off
Reviewed-by: Dan Mick dan.m...@inktank.com
On 01/22/2013 01:58 PM, Alex Elder wrote:
The return type of rbd_get_num_segments() is int, but the values it
operates on are u64. Although it's not likely, there's no guarantee
the result won't exceed what can be represented in an int. The
function
On 01/22/2013 11:18 PM, Chen, Xiaoxi wrote:
Hi List,
Here is part of /etc/init.d/ceph script:
case $command in
start)
# Increase max_open_files, if the configuration calls for it.
get_conf max_open_files "8192" "max open files"
if [
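(The snippet is cut off above; the usual pattern it continues into is roughly
this sketch, not the verbatim source:)

get_conf max_open_files "8192" "max open files"
if [ "$max_open_files" != "0" ]; then
    # raise the fd limit before the daemon starts
    ulimit -n $max_open_files
fi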
On 01/24/2013 07:28 AM, Dimitri Maziuk wrote:
On 1/24/2013 8:20 AM, Sam Lang wrote:
Yep, it means that you only have one OSD with a replication level of 2.
If you had a rep level of 3, you would see degraded (66.667%). If you
just want to make the message go away (for testing purposes), you can
On 01/20/2013 08:32 AM, Dimitri Maziuk wrote:
On 1/19/2013 11:13 AM, Sage Weil wrote:
If you want to use the kernel client(s), that is true: there are no
plans
to backport the client code to the ancient RHEL kernels. Nothing
prevents
you from running the server side, though, or the userland
You'd think that only one [osd] section in ceph.conf implies
nrep = 1, though. (And then you can go on adding OSDs and changing nrep
accordingly -- that was my plan.)
Yeah; it's probably mostly just that one-OSD configurations are so
uncommon that we never special-cased that small user set.
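For a single-OSD test cluster, dropping replication to 1 on the default pools
clears the warning (a sketch; these are the default pool names of that era):

$ ceph osd pool set data size 1
$ ceph osd pool set metadata size 1
$ ceph osd pool set rbd size 1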
what is pool 2 (rbd) for? looks like it's absolutely empty.
by default it's for rbd images (see the rbd command etc.). It being
empty or not has no effect on the other pools.
If the S3 API is not well suited to my scenario, then my effort should
be better directed to porting or writing a native ceph client for
Windows. I just need an API to read and write/append blocks to files.
Any comments are really appreciated.
Hopefully someone with more windows experience
On 1/25/2013 9:35 PM, Dan Mick wrote:
If the S3 API is not well suited to my scenario, then my effort should
be better directed to porting or writing a native ceph client for
Windows. I just need an API to read and write/append blocks to files.
Any comments are really appreciated.
Hopefully
Thanks Danny, I'll look at these today.
On Jan 28, 2013, at 7:33 AM, Danny Al-Gaaf danny.al-g...@bisect.de wrote:
Here are three patches to fix some issues with the new rbd-fuse
code and an issue with the fuse handling in configure.
Danny Al-Gaaf (3):
configure: fix check for
I'd just noticed utime on my laptop 32-bit build and was trying to figure out
why our 32-bit nightly didn't see it. And Greg had seen the system build
problem where I didn't, and I was isolating differences there as well.
I purposely didn't spend time on the system() error handling because I
Actually Sage merged them into master. Thanks again.
On 01/28/2013 09:45 AM, Dan Mick wrote:
Thanks Danny, I'll look at these today.
On Jan 28, 2013, at 7:33 AM, Danny Al-Gaaf danny.al-g...@bisect.de wrote:
Here are three patches to fix some issues with the new rbd-fuse
code and an issue
Sage merged these into master. Thanks!
On 01/27/2013 12:57 PM, Danny Al-Gaaf wrote:
Attached two patches to fix some compiler warnings.
Danny Al-Gaaf (2):
utime: fix narrowing conversion compiler warning in sleep()
rbd: don't ignore return value of system()
src/include/utime.h | 2
ignore
On 01/30/2013 11:55 AM, Sage Weil wrote:
I think the end goal is to build an rbd-fuse package, just like ceph-fuse.
I'm not sure it matters how mature the tool is before it goes into the
package; as long as it is separate, people can not install it, and the
distros can keep it out of their
On 01/30/2013 09:08 PM, Dan Mick wrote:
On 01/30/2013 11:55 AM, Sage Weil wrote:
I think the end goal is to build an rbd-fuse package, just like
ceph-fuse.
I'm not sure it matters how mature the tool is before it goes into the
package; as long as it is separate, people can not install
You might want to ask a more-specific question.
On 01/31/2013 08:24 AM, charles L wrote:
I need some help and guidance on compiling the ceph client in Eclipse.
...and/or do you have the corepath set interestingly, or one of the
core-trapping mechanisms turned on?
On 02/04/2013 11:29 AM, Sage Weil wrote:
On Mon, 4 Feb 2013, Sébastien Han wrote:
Hum just tried several times on my test cluster and I can't get any
core dump. Does Ceph commit suicide or
On 02/04/2013 03:26 PM, Yasuhiro Ohara wrote:
3264 pgs: 1088 creating, 2176 active+clean
This means, I believe, that the cluster has healed 2176 of 3264 PGs, and
is working on the remaining 1088. You can use 'ceph -w' to observe the
progress, but I think your cluster is backfilling the
it is in the status.
ceph -w can show us the other progress (like backfilling on the
down/up osds), but has not shown any progress on the 'creating'.
regards,
Yasu
From: Dan Mick dan.m...@inktank.com
Subject: Re: Trigger to create PGs ?
Date: Mon, 04 Feb 2013 15:53:25 -0800
Message-ID: 511049f5
(rbd was set to 2, which meant it didn't match, which I'm sure is what
Sage meant. Just correcting the record for those scoring at home.)
On 02/04/2013 06:36 PM, Yasuhiro Ohara wrote:
Thanks Sage, it instantly fixed the problem.
:)
regards,
Yasu
From: Sage Weil s...@inktank.com
Subject:
Yes; as Martin said last night, you don't have the ceph module.
Did you build your own kernel?
See
http://ceph.com/docs/master/install/os-recommendations/#linux-kernel
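A quick check on a machine whose kernel does ship the clients (module names as
upstream):

$ modprobe ceph          # the cephfs client; 'rbd' is the block client
$ lsmod | grep ceph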
On 02/05/2013 09:37 PM, femi anjorin wrote:
Linux 2.6.32-279.19.1.el6.x86_64 x86_64 CentOS 6.3
Pls can somebody help ... This
On 02/06/2013 10:15 AM, Yehuda Sadeh wrote:
On Wed, Feb 6, 2013 at 9:25 AM, Sage Weil s...@inktank.com wrote:
One of the goals for cuttlefish is to improve the manageability of the
system. This will involve both cleaning up the CLI and adding a REST API
to do everything the CLI currently does.
My take on this is to keep the current behaviour (client issues a
command and the monitor handles it as it sees fit), but all
communication should be done in json, either to or from the monitors.
This would allow us to provide more information on each result, getting
rid of all the annoying
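To illustrate the shape being discussed, a JSON-framed reply might look like
this (the envelope below is hypothetical, not a settled format):

$ ceph --format json osd pool create mypool 128
{"retval": 0, "status": "pool 'mypool' created"}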
On 02/06/2013 12:14 PM, Sage Weil wrote:
On Wed, 6 Feb 2013, Dimitri Maziuk wrote:
On 02/06/2013 01:34 PM, Sage Weil wrote:
I think the one caveat here is that having a single registry for commands
in the monitor means that commands can come in two flavors: vectorstring
(cli) and URL
Thanks for the work, Adam!
On 02/06/2013 03:18 PM, Michel, Adam K - (amichel) wrote:
I did some testing against an RBD device in my local environment to suss out
differences between a few filesystem options. Folks in #ceph thought this data
might be interesting to some on the list, so I'm
leveldb is required. What filesystem are you using that doesn't support
mmap?
On 02/07/2013 02:11 PM, sheng qiu wrote:
Is it possible to comment out the leveldb setup code during the mkfs call path?
I find the problem is because I am trying to use a bunch of memory as
the OSD; when the leveldb mmap()
You're aware of the just-added ceph df? I don't know it well enough to know if
it's a solution, but it's in that space...
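That is (assuming a build recent enough to have it):

$ ceph df    # summarizes raw cluster usage and per-pool usage
(and 'ceph df detail' adds more, in builds that have it)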
On Mar 6, 2013, at 6:48 AM, Patrick McGarry patr...@inktank.com wrote:
Proxy-ing this in for a user I had a discussion with on irc this morning:
The question is is
called after throwing an instance of 'ceph::FailedAssertion'
Any clue why that happened?
This looks like
http://tracker.ceph.com/issues/4271
That depends on what functionality you want to test...
On 03/11/2013 04:03 AM, waed Albataineh wrote:
Hello there,
For the quick start of Ceph, do I need to continue with the RESTful
Gateway quick start, even when I will just be testing the basic
functions of Ceph?
As a point of comparison, mysql removes the config files but not
/var/lib/mysql.
The question is, is that okay/typical/desirable/recommended/a bad idea?
I should have asked this sooner. Do you know _any_ program that removes
your favorite music collection, your family photos or your
On 03/22/2013 05:37 AM, Jerker Nyberg wrote:
There seems to be a missing argument to ceph osd lost (also in the help for
the command).
http://ceph.com/docs/master/rados/operations/control/#osd-subsystem
Indeed, it seems to be missing the id. The CLI is getting a big rework
right now, but the
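With the id supplied, the invocation looks like this (the OSD id is
illustrative):

$ ceph osd lost 1 --yes-i-really-mean-it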
That's right. The remove versus unlink verbs make that pretty clear
to me, at least... Are you suggesting this be clarified in the docs, or
that the command set change? I think once we settle on the CLI, John can
make a pass through the crush docs and make sure these commands are
explained.
Everything else was coming from github.com; ceph-object-corpus was the
one thing talking to ceph.com.
I just did
git clone git://ceph.com/git/ceph-object-corpus.git
and it worked for me.
On 03/27/2013 12:52 AM, charles L wrote:
Hi Joao,
I am able to access ceph.com without any issue and if
It looks like it attempts to behave as documented (somewhat surprisingly
and fragile-ly, IMO; I would have made it all-or-nothing and return
nothing in buf if len is too small).
The Python binding retries/resizes until it can return them all.
What do you think is wrong, Wido?
On 03/27/2013
There is a typo though: "List objects in a pool"
That should be: "List all pools"
yep.
This is pretty cool, Sébastien.
On 03/28/2013 02:34 AM, Sebastien Han wrote:
Hello everybody,
Quite recently François Charlier and I worked together on the Puppet
modules for Ceph on behalf of our employer eNovance. In fact, François
started to work on them last summer, back then he achieved
Indeed. http://tracker.ceph.com/issues/4678. Thanks for the report.
On 4/5/2013 5:49 PM, Lorieri wrote:
Hi,
My colleague ran ceph pg dump --format with no other arguments.
It crashes all mon daemons in the cluster.
happens in ceph 0.60 and 0.56
3 mon, 3 osd, default crush map
# ceph pg
On 04/15/2013 01:06 PM, Gandalf Corvotempesta wrote:
2013/4/12 Mark Nelson mark.nel...@inktank.com
Currently reads always come from the primary OSD in the placement group
rather than a secondary even if the secondary is closer to the client.
In this way, only one OSD will be involved in
On 04/15/2013 01:24 PM, Stefan Priebe wrote:
is there a possibility to disable / skip the journal? Stuff like
FlashCache, BCache and others seem to work much better but sequential
I/O becomes a bottleneck if the ceph journal is on a single ssd.
As far as I know, the journal is a required
It's arguable, but we wanted to treat source and destination pools
separately in general.
Note that you can also specify images as POOLNAME/a and POOLNAME/b.
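That is, these two spellings address the same image (names illustrative):

$ rbd -p mypool rename a b
$ rbd rename mypool/a mypool/b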
On 04/19/2013 12:28 AM, Stefan Priebe - Profihost AG wrote:
Hi,
if I issue rbd -p POOLNAME ... rename a b
it uses the POOL for a but
On 04/22/2013 11:09 AM, Scott Sullivan wrote:
Referring to this:
http://ceph.com/dev-notes/adding-support-for-rbd-to-stgt/
I compiled the latest tgt with RBD support. My question is when using
this method to access RBD volumes, where do you tell it what user to
authenticate to the cluster
I also just pushed a fix to the cuttlefish branch, so if you want
packages that fix this, you can get them from gitbuilders using the
testing versions, branch cuttlefish.
Thanks, Mike, for pointing this out!
On 05/10/2013 08:27 PM, Mike Dawson wrote:
Anyone running 0.61.1,
Watch out for
On 05/20/2013 05:00 PM, ymorita...@gmail.com wrote:
Hi,
I have found some issues on ceph v0.61.2 on Ubuntu 12.10.
(1) ceph-deploy osd create command fails when using --cluster name option.
[root@host3 yuji_ceph]# ceph-deploy --cluster yuji osd create host1:sdb
Traceback (most recent call
the next entry
should be ceph-deploy-purge.
Ah. That's because the TOC *also* says that the next is MDS
Configuration (under CephFS). I guess we're sharing the Add/Remove MDS
page at two places in the hierarchy.
On 05/24/2013 02:16 PM, Travis Rhoden wrote:
On Fri, May 24, 2013 at 4:55 PM, Dan Mick dan.m...@inktank.com wrote:
On 05/24/2013 10:05 AM, Travis Rhoden wrote:
I stumbled across a weird TOC/navigation error in the docs.
On the page:
http://ceph.com/docs/master/rados/deployment/ceph
On 05/29/2013 01:24 AM, Wido den Hollander wrote:
On 05/29/2013 10:14 AM, Wido den Hollander wrote:
Hi,
Is there a way to find out if a cluster is near full via librados?
Yes, there is. (Thanks tnt on IRC!)
There is of course rados_cluster_stat which will give you:
struct
e5184ea95031b7bea4264062de083045767d5dc3 in master.
I would get the cluster up and running and do some experiments before I
spent any time on optimization, much less all this.
On 07/20/2013 09:35 AM, Ta Ba Tuan wrote:
Please help me!
On 07/20/2013 02:11 AM, Ta Ba Tuan wrote:
Hi everyone,
I have *3 nodes (running MON and MDS)*
and *6 data
On 08/08/2012 08:19 PM, eric_yh_c...@wiwynn.com wrote:
Dear all:
My Environment: two servers, with 12 hard disks on each server.
Version: Ceph 0.48, Kernel: 3.2.0-27
We created a ceph cluster with 24 osds, 3 monitors
Osd.0 ~ osd.11 are on server1
Osd.12 ~ osd.23 are on
Reviewed-by: Dan Mick dan.m...@inktank.com
On 09/04/2012 11:08 AM, Alex Elder wrote:
In the on-disk image header structure there is a field block_name
which represents what we now call the object prefix for an rbd
image. Rename this field object_prefix to be consistent with
modern usage
We're looking into this, Christian.
On 09/24/2012 03:23 AM, Christian Huang wrote:
Hi,
we met the following issue while testing ceph cluster HA.
We'd appreciate it if anyone can shed some light.
Could this be related to the configuration? (i.e., 2 OSD nodes only)
Issue description:
Hemant:
Yes, you can. Use ceph osd getmap -o file to get the OSD map, and
then use osdmaptool --find-object-map objectname file to output the
PG the object hashes to and the list of OSDs that PG maps to (primary
first):
$ ceph osd getmap -o osdmap
got osdmap epoch 59
$ osdmaptool
The --test-map-object is currently somewhat useless because it assumes
pool 0 ('data'), and your object is probably in a different pool.
sage
-
Hemant Surale.
On Wed, Sep 26, 2012 at 2:04 AM, Dan Mick dan.m...@inktank.com wrote:
Hemant:
Yes, you can. Use ceph osd getmap -o file to get the OSD
http://ceph.com/docs/master/rbd/rbd-openstack/
On 09/26/2012 09:45 PM, ramu eppa wrote:
Hi all,
I want to create volumes inside the rbd pool; please help me create volumes
inside the rbd pool.
Thanks,
Ramu.
AGH! So sorry Hemant. I really was thinking 'map' when I typed that.
On 09/26/2012 04:41 PM, Sage Weil wrote:
On Wed, 26 Sep 2012, Dan Mick wrote:
Ah, yeah, that assumption would be a problem.
So, Hemant, does
ceph osd dump poolname objectname
Ahem,
ceph osd map poolname
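For the record, the working form takes a pool and an object name (both
illustrative here):

$ ceph osd map rbd myobject   # prints the pg and the up/acting OSD sets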
The command you show below is not creating a pool; it's creating an rbd
image named testpool inside the RADOS pool named nova.
Creating and managing images inside nova is done with OpenStack
administration tools, typically, and not with the rbd command directly.
Did you have questions about
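For contrast, creating a pool versus creating an image (names illustrative):

$ ceph osd pool create testpool 128              # a RADOS pool with 128 PGs
$ rbd create --pool nova --size 1024 testpool    # an image named testpool
                                                 # inside the pool nova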
Don't understand the question. rbd images live, by default, in the rbd
pool; the image namespace is flat, and there is no path.
On 10/02/2012 06:13 AM, ramu eppa wrote:
Hi all,
How do I find the rbd volume path?
Thanks,
Ramu.
What tool/library/context are you using the string nova/volume1/data
in? What does data represent? How was volume1 created?
On 10/02/2012 10:09 AM, ramu eppa wrote:
Hi Tommi Virtanen,
Actually we know the rbd full path; we write some data directly to the volume:
nova/volume1/data.
I'm not Tommi, but I'm not aware of a way to convert a snapshot into a
writable image. You can make a copy-on-write clone of a snapshot, which
is writable, but the snapshot remains readonly.
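Roughly, the clone workflow (format 2 images; names illustrative):

$ rbd snap create mypool/img@snap
$ rbd snap protect mypool/img@snap        # clones require a protected snapshot
$ rbd clone mypool/img@snap mypool/child  # child is writable; snap stays read-only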
I don't know what the rest of your sentence means.
Are you trying to accomplish something in
On 10/09/2012 10:58 PM, ramu eppa wrote:
Hi Dan Mick,
Thank you for the reply. Actually,
1. I want to create a writable snapshot.
There is no such thing as a writable snapshot.
2. In that snapshot I want to insert data.
3. Is it possible to reuse a snapshot?
4. And for this, what version of ceph can I
The immediate cause of the problem is that the osd's commit_op_seq
file is reading back '0',
which is invalid; it's created with an initial value of 1. Try removing
the osd data dir
(/var/lib/ceph/osd/ceph-0) completely and let it be recreated; perhaps
something got there somehow by mistake.
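Roughly, and only under the stated assumption that nothing on that OSD needs
to be kept:

$ service ceph stop osd.0
$ rm -rf /var/lib/ceph/osd/ceph-0/*   # wipe the suspect data dir
$ ceph-osd -i 0 --mkfs --mkkey        # recreate it from scratch
$ service ceph start osd.0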
Nothing like that exists at the moment; see
http://tracker.newdream.net/issues/3283 for the other side of it.
On 10/15/2012 12:52 AM, Alexandre DERUMIER wrote:
Hi,
I'm looking for a way to retrieve the free space of an rbd cluster with the rbd
command.
Any hint ?
(something like ceph -w
On 10/17/2012 09:58 PM, hemant surale wrote:
Hi Community,
I have tried to build ceph from source code, i.e. the v0.48 tarball.
After completing all the steps given on the official site, when I execute
service ceph start it gives the following error
-//Error
On 10/21/2012 02:35 PM, Sage Weil wrote:
On Sun, 21 Oct 2012, Joe Buck wrote:
It looks like vstart.sh does not work without cephx enabled. Given that, I'd
propose to change the default to having cephx be enabled and then use the -x
flag to disable cephx.
Interestingly enough, the help output
OK, assuming the comment is changed to reflect the "we expect a data
length back here" indication, Alex explained this offline and I agree.
On 10/22/2012 10:18 AM, Dan Mick wrote:
I really feel like we ought to root-cause this before we patch the
kernel client. Something isn't working the way
On 10/22/2012 08:10 PM, jie sun wrote:
Hi,
I create an image and map it to a virtual machine, and then mkfs and mount it.
I want to remove the image from the mon server forcibly, but it says
Removing image: 99% complete...failed.
delete error: image still has watchers
This means the image is
Possibly related: I just pushed a patch to master that enables cephx auth
by default. For the -X case we need to put in ceph.conf:
auth cluster required =
auth service required =
This was not added, and thus vstart.sh -X is currently broken.
it.
2012/10/23 Dan Mick dan.m...@inktank.com:
On 10/22/2012 08:10 PM, jie sun wrote:
Hi,
I create an image and map it to a virtual machine, and then mkfs and mount
it.
I want to remove the image from the mon server forcibly, but it says
Removing image: 99% complete...failed.
delete error: image still
So, I've discovered that to make no cephx work, you need to explicitly
set none for the three options (thanks to Yehuda for the tip):
auth cluster required = none
auth service required = none
auth supported = none
Since blank is not an error, but leads to a disagreement
Thanks! Making sure this gets incorporated in our next update.
On 10/22/2012 09:06 AM, Masanari Iida wrote:
Correct spelling typo in debug message
Signed-off-by: Masanari Iida standby2...@gmail.com
---
fs/ceph/xattr.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git
Reviewed-by: Dan Mick dan.m...@inktank.com
On 10/22/2012 09:51 AM, Alex Elder wrote:
Change RBD_MAX_SNAP_NAME_LEN to be based on NAME_MAX. That is a
practical limit for the length of a snapshot name (based on the
presence of a directory using the name under /sys/bus/rbd to
represent
to make me think I'm missing a subdivision of the RADOS objects
that make up an rbd image that I didn't know about.
Otherwise,
Reviewed-by: Dan Mick dan.m...@inktank.com
On 10/22/2012 09:51 AM, Alex Elder wrote:
The aim of this patch is to make what's going on in rbd_merge_bvec() a
bit more
HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean; recovery 21/42 degraded (50.000%)
This is simply because you only have 1 osd but the default policy is 2x
replication. As such, all PGs are 'degraded' because they are only
replicated once.
If you add another OSD to your cluster the
LGTM. I might emphasize user rather than owner, but that's clearly
a nit. Also, does this now obviate the #TODO?
On 10/24/2012 06:20 PM, Joe Buck wrote:
I submitted a pull request for teuthology/task/workunit.py that uses the
configured owner rather than using the hard-coded ubuntu username
static void ceph_fault(struct ceph_connection *con)
	__releases(con->mutex)
{
	pr_err("%s%lld %s %s\n", ENTITY_NAME(con->peer_name),
	       ceph_pr_addr(&con->peer_addr.in_addr), con->error_msg);
Perhaps this should become pr_info() or something. Sage?
Yeah, I
On 10/25/2012 09:46 PM, Raghunandhan wrote:
Hi Sage,
Thanks for replying back. Once a zpool is created, if I mount it on
/var/lib/ceph/osd/ceph-0 the cephfs doesn't recognize it as a superblock
and hence it fails,
I assume you mean once a zfs is created? One can't mount zpools, can one?
I'm
Last time you asked this, I responded (on 18 Oct):
So, clearly something went wrong with the installation step, and this
file was not created. How did you install after building?
On 10/29/2012 02:27 AM, hemant surale wrote:
Hi Folks ,
I have executed all required steps to build Ceph from
On 10/26/2012 02:52 PM, Gandalf Corvotempesta wrote:
Hi all, I'm new to ceph.
Are RBD and REST API production ready?
There are sites using them in production now.
Do you have any use case to share? we are looking for a distributed
block storage for an HP C7000 blade with 16 dual processor
On 10/30/2012 07:59 AM, Gandalf Corvotempesta wrote:
2012/10/30 袁冬 yuandong1...@gmail.com:
Yes, but network (and many other issues) must be considered.
Obviously
3 is suggested.
Any contraindication running mon in the same OSD server?
Generally that's considered OK. ceph-mon
Gary, were you also going to update README? (I know, it's imperfect,
but...)
On 10/31/2012 10:25 AM, Gary Lowell wrote:
Hi Sage -
Sam may have the build machines updated. I'll double check that, and take care
of any packaging changes.
Cheers,
Gary
On Oct 31, 2012, at 9:03 AM, Sage Weil
I've had a long private thread with Hemant, and I believe he's past this
problem (in case anyone scans archives looking for open questions).
Hemant, it would be best to keep the thread on ceph-devel; you get more
people looking and answering.
It's a mystery, still, how /usr/bin/ceph-osd ended
I was somewhat surprised to note that we don't build with -I include, so
that files that userland programs would find with
relative-to-/usr/include paths have to be modified for building in the tree.
Was this conscious, or does anyone else think it would be smoother to
-I include/ so that
On 11/02/2012 02:14 PM, Dan Mick wrote:
I was somewhat surprised to note that we don't build with -I include, so
that files that userland programs would find with
relative-to-/usr/include paths have to be modified for building in the
tree.
Was this conscious, or does anyone else think
Resolution: installing the packages built for precise, rather than
squeeze, got versions that use syncfs.
On 11/06/2012 08:31 AM, Oliver Francke wrote:
In answer to myself,
On 11/06/2012 05:14 PM, Oliver Francke wrote:
Hi *,
anybody out there who's in Ubuntu 12.04.1/ in connection with libc
Hi Alex:
did you install the ceph packages before trying to build qemu? It
sounds like qemu is looking for the Ceph libraries and not finding them.
On 11/12/2012 09:38 PM, Alex Jiang wrote:
Hi, All
Has somebody used Ceph RBD in CloudStack as primary storage? I see
that in the new features
On 11/12/2012 02:47 PM, Josh Durgin wrote:
On 11/12/2012 08:30 AM, Andrey Korolyov wrote:
Hi,
For this version, rbd cp assumes that destination pool is the same as
source, not 'rbd', if pool in the destination path is omitted.
rbd cp install/img testimg
rbd ls install
img testimg
Is this