On 01/27/2015 08:01 AM, Vickie ch wrote:
> Hello cephers,
> After input command "ceph osd map rbd abcde-no-file". I can get
> the result like this:
> *"osdmap e42 pool 'rbd' (0) object '*
> *abcde-no-file' -> pg 0.2844d191 (0.11) -> up ([3], p3) acting ([3], p3)"*
>
> But the object "abcd
Hello cephers,
After running the command "ceph osd map rbd abcde-no-file", I get a result like this:
"osdmap e42 pool 'rbd' (0) object 'abcde-no-file' -> pg 0.2844d191 (0.11) -> up ([3], p3) acting ([3], p3)"
But the object "abcde-no-file" does not exist. Why can ceph osd map still map it to a PG?
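For reference, "ceph osd map" only runs the CRUSH placement calculation for whatever name you give it; it never checks whether the object has actually been written. A minimal sketch of checking existence from the Python binding (assuming a readable /etc/ceph/ceph.conf and a client keyring; the pool and object names are just the ones from the example above):

import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')
try:
    size, mtime = ioctx.stat('abcde-no-file')   # raises if the object was never written
    print('object exists, size=%d bytes' % size)
except rados.ObjectNotFound:
    print('the mapping is only a calculation; the object is not stored on any OSD')
finally:
    ioctx.close()
    cluster.shutdown()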
Hi all,
Indeed, there is a problem. After removing 1 TB of data, the space on the cluster is not freed. Is this expected behaviour or a bug? And how long will it take to be cleaned up?
Sat Sep 20 2014 at 8:19:24 AM, Mikaël Cluseau :
> Hi all,
>
> I have weird behaviour on my firefly "test + convenience storage" clu
Hi all, hi Loic,
I have exactly the same error. Do I understand correctly that the problem is in 0.80.9? Thank you.
Sat Jan 17 2015 at 2:21:09 AM, Loic Dachary :
>
>
> On 14/01/2015 18:33, Udo Lembke wrote:
> > Hi Loic,
> > thanks for the answer. I hope it's not like in
> > http://tracker.ceph.com/issues/8747 where
On Mon, Jan 26, 2015 at 6:47 PM, Kim Vandry wrote:
> Hello Ceph users,
>
> In our application, we found that we have a use case for appending to a
> rados object in such a way that the client knows afterwards at what offset
> the append happened, even while there may be other concurrent clients do
Do you mean cache tiering?
You can refer to http://ceph.com/docs/master/rados/operations/cache-tiering/
for the detailed command-line usage.
PGs won't migrate from pool to pool.
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Chad
William Seys
Sent: Thu
Hello Ceph users,
In our application, we found that we have a use case for appending to a
rados object in such a way that the client knows afterwards at what
offset the append happened, even while there may be other concurrent
clients doing the same thing.
At first I thought the client might
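For the archives, one way to get append-and-return-offset semantics is to serialize appenders with a RADOS advisory lock and read the object size just before appending. This is only a sketch under assumptions: it presumes your version of the Python binding exposes the lock_exclusive()/unlock() wrappers around rados_lock_exclusive()/rados_unlock(), and the object and lock names are made up. The lock serializes writers, so it trades throughput for a known offset.

import uuid
import rados

def locked_append(ioctx, obj, data):
    # Take an exclusive advisory lock so no other appender can race between
    # the stat (which gives us the offset) and the append itself.
    cookie = str(uuid.uuid4())
    ioctx.lock_exclusive(obj, 'append-lock', cookie, desc='append', duration=None, flags=0)
    try:
        try:
            offset, _mtime = ioctx.stat(obj)
        except rados.ObjectNotFound:
            offset = 0
        ioctx.append(obj, data)
        return offset
    finally:
        ioctx.unlock(obj, 'append-lock', cookie)

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')
print('appended at offset', locked_append(ioctx, 'append-test', b'hello\n'))
ioctx.close()
cluster.shutdown()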
Hello Quenten,
On Tue, 27 Jan 2015 02:02:13 + Quenten Grasso wrote:
> Hi Christian,
>
> Ahh yes, The overall host weight changed when removing the OSD as all
> OSD's make up the host weight in turn removal of the OSD then decreased
> the host weight which then triggered the rebalancing.
>
Hi Christian,
Ahh yes, the overall host weight changed when removing the OSD: all the OSDs' weights make up the host weight, so removing the OSD decreased the host weight, which in turn triggered the rebalancing.
I guess it would have made more sense if setting the OSD as "out" caused the same effect.
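For what it's worth, the usual way to avoid that second round of data movement is to drain the OSD first by setting its CRUSH weight to 0, wait for the cluster to settle, and only then mark it out, stop it, and remove it: the final "ceph osd crush remove" then takes out an item of weight 0, so the host weight does not change again. A rough sketch of the reweight step through the Python binding; note that the argument form mon_command() expects has varied between binding versions (some want [json.dumps(cmd)] rather than the plain string), so treat this as an assumption, not gospel:

import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
# Equivalent of "ceph osd crush reweight osd.0 0": data migrates off osd.0 now,
# so removing the OSD from the CRUSH map later moves nothing further.
cmd = json.dumps({'prefix': 'osd crush reweight', 'name': 'osd.0', 'weight': 0.0})
ret, outbuf, outs = cluster.mon_command(cmd, b'')   # some versions: ([cmd], b'')
print(ret, outs)
cluster.shutdown()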
On Tue, 27 Jan 2015 01:37:52 + Quenten Grasso wrote:
> Hi Christian,
>
> As you'll probably notice we have 11,22,33,44 marked as out as well. but
> here's our tree.
>
> all of the OSD's in question had already been rebalanced/emptied from
> the hosts. osd.0 existed on pbnerbd01
>
Ah, lemme
Hi Christian,
As you'll probably notice, we have 11, 22, 33 and 44 marked as out as well, but here's our tree.
All of the OSDs in question had already been rebalanced/emptied from the hosts. osd.0 existed on pbnerbd01.
# ceph osd tree
# id    weight  type name       up/down reweight
-1 54
Hello,
A "ceph -s" and "ceph osd tree" would have been nice, but my guess is that
osd.0 was the only osd on that particular storage server?
In that case the removal of the bucket (host) by removing the last OSD in
it also triggered a re-balancing.
Not really/well documented AFAIK and annoying, b
Hi All,
I just removed an OSD from our cluster following the steps on
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/
First I set the OSD as out,
ceph osd out osd.0
This emptied the OSD and eventually the health of the cluster came back to normal/OK, and the OSD was up and out. (took abo
Hi All,
We are planning to use Ceph File System in our data center.
I was reading the ceph documentation and they do not recommend this for
production
Is this still valid?
Please advise.
Thanks
Raj
On Mon, 26 Jan 2015 11:16:34 -0800 Gregory Farnum wrote:
> Hmm, I'm looking at the actual code here and I'm wrong. Those values
> should be used whenever you create a pool via the API, and it doesn't
> look like anything external to the monitor can change that.
>
> So you probably set these value
Hi,
We are getting a lot of connection timeouts and connection resets while
using tengine as an SSL proxy for civetweb.
This is our tengine configuration:
server
{
listen 443 default ssl;
access_log /tmp/nginx_reverse_access.log;
error_log /tmp/nginx_reverse_error.log;
On Mon, Jan 26, 2015 at 2:13 PM, Brian Rak wrote:
> I have an existing cluster where all the hosts were just added directly, for
> example:
>
> # ceph osd tree
> # idweight type name up/down reweight
> -1 60.06 root default
> ...
> -14 1.82host OSD75
> 12 1.8
I have an existing cluster where all the hosts were just added directly,
for example:
# ceph osd tree
# id    weight  type name       up/down reweight
-1      60.06   root default
...
-14     1.82            host OSD75
12      1.82                    osd.12  up      1
-15     1.82            hos
With the following configuration file, the create_pool method in the python
librados library creates the pool with the correct pg_num value (256):
[global]
auth_service_required = cephx
filestore_xattr_use_omap = true
auth_client_required = cephx
auth_cluster_required = cephx
mon_host = 64.0.1.210
Hello,
I uploaded a file using the S3 protocol to an empty pool:
Dislexia:s3cmd-master italosantos$ ./s3cmd -c s3test.cfg ls s3://bucket
2015-01-26 20:23846217 s3://bucket/bash.jpg
Also, I’m able to see the object on rados:
root@cephgw0001:~# rados -p .rgw.buckets.replicated ls
defau
Thanks for your answer.
But what I'd like to understand is whether these numbers are per pool or per cluster. If they are per cluster, then when planning the cluster deployment I'll need to decide how many pools I'd like to have on that cluster and their replicas.
Regards.
Italo Santos
http://italosantos.co
Hmm, I'm looking at the actual code here and I'm wrong. Those values
should be used whenever you create a pool via the API, and it doesn't
look like anything external to the monitor can change that.
So you probably set these values in the [osd] section rather than the
[mon] or [global] section, at
Hello Gregory,
Thanks for the quick response. Does this mean that the rados python library is
out of date? create_pool in rados.py
(https://github.com/ceph/ceph/blob/master/src/pybind/rados.py#L535) only
requires a pool_name... it doesn't even offer pg_num as an optional argument.
Thank you,
-Jason
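One workaround in the meantime is to issue the monitor command yourself from Python, so pg_num/pgp_num are passed explicitly rather than picked up from ceph.conf. A rough sketch; the pool name is made up, and the argument form mon_command() expects has varied between binding versions (some want [json.dumps(cmd)] rather than the plain string):

import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
# Equivalent of "ceph osd pool create mypool 256 256", with pg_num pinned
# explicitly instead of relying on osd_pool_default_pg_num.
cmd = json.dumps({'prefix': 'osd pool create', 'pool': 'mypool',
                  'pg_num': 256, 'pgp_num': 256})
ret, outbuf, outs = cluster.mon_command(cmd, b'')   # some versions: ([cmd], b'')
print(ret, outs)
cluster.shutdown()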
Just from memory, I think these values are only used for the initial pool
creations when the cluster is first set up.
We have been moving for a while to making users specify pg_num explicitly
on every pool create, and you should do so. :)
-Greg
On Mon, Jan 26, 2015 at 7:38 AM Jason Anderson <
jaso
Hello fellow Cephers,
I am running Ceph Giant on Centos 6.6 and am running into an issue where the
create_pool() method in the librados python library isn't using ceph.conf's
pg_num and pgp_num settings during pool creation.
My ceph.conf:
[global]
auth_service_required = cephx
filestore_xattr_u
Hi Robert,
I don't see any reply to your email, so I'll send you my thoughts.
Ceph is all about using cheap local disks to build large, performant, and resilient storage. Your use case with SAN and storwise doesn't seem to fit Ceph very well. (I'm not saying it can't be done.)
Why are you p