Hi,
It works fine.
Thanks a lot
It's already integrated into inkscope.
We will develop a similar feature for buckets in order to display quota and
specific fields related to placement
How can we get info about region and zone by using the admin API?
Best regards
-Original Message-
From
Hi,
Context: Firefly 0.80.10
I use the rgw admin API to get information about a user, per
http://docs.ceph.com/docs/master/radosgw/adminops/#get-user-info
The GET request doesn't return all the information that we can get with
radosgw-admin user info --uid=
Metadata such as op_mask, system,
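For what it's worth, the admin API uses the same S3-style (AWS v2) request signing as the rest of rgw. Here is a minimal sketch of building the Authorization header; the credentials are placeholders, and the admin user is assumed to have the relevant admin caps (e.g. users=read):

```python
import base64
import hashlib
import hmac
from email.utils import formatdate

# Placeholder credentials for an admin user (illustrative only).
ACCESS_KEY = "ADMIN_ACCESS_KEY"
SECRET_KEY = "ADMIN_SECRET_KEY"

def sign_admin_request(method, resource, date=None):
    """Return (Date, Authorization) headers for an rgw admin API call,
    using AWS v2 signing: HMAC-SHA1 over the canonical string."""
    date = date or formatdate(usegmt=True)
    # For a bodyless GET, Content-MD5 and Content-Type are empty.
    string_to_sign = "{}\n\n\n{}\n{}".format(method, date, resource)
    digest = hmac.new(SECRET_KEY.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode("ascii")
    return date, "AWS {}:{}".format(ACCESS_KEY, signature)

# The request itself would then look like:
#   GET /admin/user?format=json&uid=<uid> HTTP/1.1
#   Date: <date>
#   Authorization: <auth>
date, auth = sign_admin_request("GET", "/admin/user")
print(auth)
```

Note that in v2 signing the query string (apart from subresources) is not part of the canonical resource, so only /admin/user is signed here.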
Hi,
Thanks for the advice
Brgds
-Original Message-
From: Sage Weil [mailto:s...@newdream.net]
Sent: Tuesday, 17 November 2015 14:41
To: CHEVALIER Ghislain IMT/OLPS
Cc: ceph-devel@vger.kernel.org
Subject: Re: [CEPH] OSD daemons running with a large number of threads
On Tue, 17 Nov 2015
Hi,
Context:
Firefly 0.80.9
Ubuntu 14.04.1
Almost a production platform in an openstack environment
176 OSD (SAS and SSD), 2 crushmap-oriented storage classes, 8 servers in 2
rooms, 3 monitors on openstack controllers
Usage: Rados Gateway for object service and RBD as back-end for Cinder and
G
Thx Sage
It's clear now
Best regards
-Original Message-
From: Sage Weil [mailto:s...@newdream.net]
Sent: Thursday, 12 November 2015 16:01
To: CHEVALIER Ghislain IMT/OLPS
Cc: ceph-devel@vger.kernel.org
Subject: RE: [CEPH][Crush][Tunables] issue when updating tunables
On Thu, 12 Nov 2015,
Hi Sage,
Thanks for the reply
You said
" You actually want straw_calc_version 1. This is just confusing output from
the 'firefly' tunable detection... the straw_calc_version does not have any
client dependencies."
My objective is to have the most relevant tunables for a firefly platform.
I d
Hi all,
Context:
Firefly 0.80.9
Ubuntu 14.04.1
Almost a production platform in an openstack environment
176 OSD (SAS and SSD), 2 crushmap-oriented storage classes , 8 servers in 2
rooms, 3 monitors on openstack controllers
Usage: Rados Gateway for object service and RBD as back-end for Cinder an
Hi Alfredo,
Sorry for replying very, very late. I had to temporarily descope my actions on
rados gateway federation.
The radosgw-agent is installed properly; that's fine.
We still have problems setting up federation between two sites.
One is at Orange Silicon Valley (OSV@San Francisco), the other
Hi all,
Context: Firefly 0.80.8, Ubuntu 14.04 LTS, lab cluster
Yesterday, I successfully deleted an S3 bucket "Bucket001ghis" after removing
the contents that were in it.
Today, as I was browsing the radosgw system metadata, I discovered a
difference between the bucket metadata and the bucket.
Hi,
Thanks a lot.
I installed the latest version using pip install radosgw-agent, as recommended,
after removing the current version (apt-get purge radosgw-agent).
The python scripts are now installed in /usr/local/bin/./radosgw_agent
instead of /usr/bin/../radosgw_agent
Can you tell me wha
Hi Alfredo,
Here are the logs you requested using the original client.py python script.
--
DIRECT LAUNCH USING CLI: KO
radosgw-agent -v -c /etc/ceph/radosgw-agent/fr-rennes-radosgw1-sync.conf_direct
the standard output is also wri
OK, I will send them ASAP.
The logs are not very verbose... Can I set a debug mode?
-Original Message-
From: alfredo.d...@inktank.com [mailto:alfredo.d...@inktank.com] On behalf of
Alfredo Deza
Sent: Monday, 2 February 2015 16:59
To: CHEVALIER Ghislain IMT/OLPS
Cc:
Subject: Re: [Ceph-devel] r
Hi,
Thanks for replying.
According to dpkg -l, it's 1.2.1.
I noticed that the URL is malformed when launching directly with radosgw-agent
-c
but well formed when launching with service radosgw-agent start
best regards
-Original Message-
From: alfredo.d...@inktank.com [mailto:alfredo.
Hi all,
Context: Ubuntu 14.04 LTS, Firefly 0.80.8
I sent this post in ceph-users (identical subject) because I recently
encountered the same issue.
Maybe I missed something between July and January...
I found that the http request wasn't correctly formed by
/usr/lib/python2.7/dist-packages/
Hi all,
I've just noticed that the erasure code profile parameter is also set on a
replicated pool (set to default in my case).
Wouldn't it be better to set it to "none"?
Best regards
Ghislain
Hi,
I think that Bug #8599 is more relevant (the one we originated).
Otherwise, managing rules and rulesets is confusing in Ceph.
First of all, it's curious to create an erasure rule by giving its name and to get
a confirmation giving a ruleset name and a ruleset_id.
root@p-sbceph11:~# ceph
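Since the rule name vs. ruleset id confusion keeps coming back, one quick way to compare them is to parse the decompiled crushmap. A minimal sketch follows; the crushmap excerpt is made up for illustration, not taken from our platform:

```python
import re

# Made-up excerpt of a decompiled crushmap (crushtool -d output style).
crushmap_txt = """
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
}
rule ecpool {
        ruleset 1
        type erasure
        min_size 3
        max_size 20
}
"""

def list_rules(text):
    """Map each rule name to its ruleset id, to spot mismatches."""
    rules = {}
    for name, body in re.findall(r"rule (\S+) \{([^}]*)\}", text):
        m = re.search(r"ruleset (\d+)", body)
        rules[name] = int(m.group(1)) if m else None
    return rules

print(list_rules(crushmap_txt))  # {'replicated_ruleset': 0, 'ecpool': 1}
```

A mismatch between the name you created the rule under and the ruleset id the pool actually references then shows up immediately.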
Hi Loic,
Eureka...
Remember the bug related to the rule_id and ruleset_id that we (Alain and I)
detected some weeks ago.
It always exists for erasure-coded pool creation.
We altered the crushmap by updating the ruleset_id 52 (set by the system, i.e.
last ruleset_id + 1) to 7 in order to be equal to
Hi Loic,
Excuse me for replying late.
First of all, I upgraded the platform to 0.80.7.
I turned osd and mon into debug mode as mentioned.
I re-created the erasure-coded pool ecpool.
At pool creation, no "create_lock_pg" in the osd logs; no message in the mon log.
At object creation (rados put) I got
2014
Hi,
Oops...
Nothing relevant in the mon logs.
This message appears in some osd logs:
2014-10-15 17:03:45.303295 7fb296a21700 0 -- 10.192.134.122:6804/16878 >>
10.192.134.123:6809/21505 pipe(0x2219c80 sd=36 :41933 s=2 pgs=626 cs=355 l=0
c=0x398a580).fault with nothing to send, going to standby
FYI, I can
Hi...
Strange, you said strange...
I created a replicated pool (if that's what you asked for) as follows:
root@p-sbceph11:~# ceph osd pool create strangepool 128 128 replicated
pool 'strangepool' created
root@p-sbceph11:~# ceph osd pool set strangepool crush_ruleset 53
set pool 108 crush_ruleset
Hi,
Here is the list of the types. host is type 1
"types": [
  { "type_id": 0,
    "name": "osd" },
  { "type_id": 1,
    "name": "host" },
  { "type_id": 2,
    "name": "platform" },
  { "type_id": 3,
    "name": "datacenter" },
  { "type_id": 4
Hi,
Thanks, Loïc, for your quick reply.
Here is the result of ceph osd tree.
As shown at the last Ceph Day in Paris, we have multiple roots, but the ruleset
52 entered the crushmap on root default.
# id     weight    type name        up/down reweight
-100     0.09998   root diskroot
-110     0.04999
Hi all,
Context:
Ceph : Firefly 0.80.6
Sandbox Platform : Ubuntu 12.04 LTS, 5 VM (VMware), 3 mons, 10 osd
Issue:
I created an erasure-coded pool using the default profile
--> ceph osd pool create ecpool 128 128 erasure default
the erasure-code rule was dynamically created and associated to th
Hi all,
I just want to know what the field "stat_cat_sum" means in the output of the
ceph pg dump -f json-pretty command.
Sometimes there's a value (looking like a path name, e.g. foo.txt), sometimes
not...
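In case it helps anyone inspecting the dump programmatically, the per-PG sums can be pulled straight out of the JSON output. A minimal sketch, where the JSON fragment is a made-up illustration (not real dump output) and the pg_stats/stat_sum layout is an assumption based on Firefly-era dumps:

```python
import json

# Made-up fragment shaped like `ceph pg dump -f json-pretty` output
# (only the fields used below are shown).
dump = json.loads("""
{ "pg_stats": [
    { "pgid": "1.0", "stat_sum": { "num_bytes": 4096, "num_objects": 1 } },
    { "pgid": "1.1", "stat_sum": { "num_bytes": 0,    "num_objects": 0 } }
] }
""")

def total_bytes(dump):
    """Sum num_bytes over every PG's stat_sum."""
    return sum(pg["stat_sum"]["num_bytes"] for pg in dump["pg_stats"])

print(total_bytes(dump))  # 4096
```

The same pattern works for any other counter under stat_sum, which makes it easy to see which PGs carry a value and which don't.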
- - - - - - - - - - - - - - - - -
Ghislain Chevalier
Storage Service Architect
Hi Greg,
the information you requested:
dumpling platform : ceph version 0.67.9
(ba340a97c3dafc9155023da8d515eecc675c619a)
See attached the decompiled crushmap
(created today by "ceph osd getcrushmap -o crushmap_current" and crushtool -d
crushmap_current -o crushmap_current.txt)
and the result
Hi all,
Context:
Lab Platform
Ceph dumpling and firefly
Ubuntu 12.04 LTS
I encountered a strange behavior managing the crushmap on a dumpling and a
firefly ceph platform.
I built a crushmap, adding 2 specific rules (fastrule and slowrule) in order to
experiment with tiering.
I used "ceph osd get|
Hi,
I set the --debug-rgw=20 parameter and the log shows:
2014-03-13 18:11:58.748840 7f24475d17c0 0 ceph version 0.72.2
(a913ded2ff138aefb8cb84d347d72164099cfd60), process radosgw, pid 20791
2014-03-13 18:11:58.765029 7f2435ffb700 2
RGWDataChangesLog::ChangesRenewThread: start
2014-0
Hi Yehuda,
and thanks for answering quickly...
I suppose you want me to add --debug_ms 20 to the radosgw command.
Is that right?
Or is it an entry (not documented) in ceph.conf: rgw_debug = 20?
I trapped some exchanges between the radosgw server and all the osd servers, but
it's very verbose...
20
Hi All,
I am currently trying to install a radosgw/Emperor on a dedicated server under Ubuntu
13.10.
This server is an exception in the cluster, which is on Emperor under Ubuntu
12.04 LTS.
The apache2 and radosgw installation seems to be correct; I created an S3 user
with a swift subuser, but I get an
Hi Eric,
Great!! The cluster is totally up now.
There was a wrong key in the ceph.bootstrap-osd.keyring file in /etc/ceph on the
ceph-deploy server.
From: eric mourgaya [mailto:eric.mourg...@gmail.com]
Sent: Friday, 21 February 2014 15:39
To: CHEVALIER Ghislain IMT/OLPS
Cc: ceph-devel@vger.kernel
Hi all,
I'd like to report a strange behavior...
Context: lab platform
Ceph Emperor
ceph-deploy 1.3.4
Ubuntu 12.04 LTS
FYI: the problem occurs for another OSD with ceph-deploy 1.3.5 and Ubuntu
13.10; I upgraded the server in order to install the Rados Gateway, which
requires 13.04 minimum.
I