Hi Dan,
I have set the replication factor of pool 'newbyh' to 3. Then when I
tried to execute the commands you suggested, it reported the following:
root@third-virtual-machine:~# osdmaptool --test-map-object Obj1 osdmap
osdmaptool: osdmap file 'osdmap'
On Tue, Sep 25, 2012 at 2:15 PM, Gregory Farnum wrote:
> Hi Tren,
> Sorry your last message got dropped — we've all been really busy!
>
No worries! I know you guys are busy, and I appreciate any assistance
you're able to provide.
> On Tue, Sep 25, 2012 at 10:22 AM, Tren Blackburn wrote:
On Tue, 25 Sep 2012, Dan Mick wrote:
> Hemant:
>
> Yes, you can. Use ceph osd getmap -o <file> to get the OSD map, and then use
> osdmaptool --test-map-object <object> <file> to output the
> PG the object hashes to and the list of OSDs that PG maps to (primary first):
>
> $ ceph osd getmap -o osdmap
> got osdmap ep
Hi Tren,
Sorry your last message got dropped — we've all been really busy!
On Tue, Sep 25, 2012 at 10:22 AM, Tren Blackburn wrote:
> All ceph servers are running ceph-0.51. Here is the output of ceph -s:
>
> ocr31-ire ~ # ceph -s
>    health HEALTH_OK
>    monmap e1: 3 mons at
> {fern=10.87.1.8
Hemant:
Yes, you can. Use ceph osd getmap -o <file> to get the OSD map, and
then use osdmaptool --test-map-object <object> <file> to output the
PG the object hashes to and the list of OSDs that PG maps to (primary
first):
$ ceph osd getmap -o osdmap
got osdmap epoch 59
$ osdmaptool --test-map-object dmick.rbd o
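A full pass end to end looks roughly like the following. The object name,
pool id, and the exact shape of the mapping line are illustrative only, not
copied from a real run; newer osdmaptool builds also accept --pool <id> to
test against a specific pool, so check osdmaptool --help on your version.

$ ceph osd getmap -o osdmap
got osdmap epoch 59
$ osdmaptool --test-map-object Obj1 osdmap
osdmaptool: osdmap file 'osdmap'
 object 'Obj1' -> 3.6 -> [2,0,1]

Read the last line as: the object hashes into PG 3.6, which currently maps
to OSDs 2, 0 and 1, with osd.2 as the primary.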
Hi Nick,
On Tue, 25 Sep 2012, Nick Bartos wrote:
> I need to figure out some way of determining when it's OK to safely
> reboot a single node. I believe this involves making sure that at
> least one other monitor is running and up to date, and all the PGs on
> the local OSDs have up to date copie
On 09/25/2012 07:12 PM, Nick Bartos wrote:
I need to figure out some way of determining when it's OK to safely
reboot a single node. I believe this involves making sure that at
least one other monitor is running and up to date, and all the PGs on
the local OSDs have up to date copies somewhere
Hi List,
I'm having an issue where the mds failed over between two nodes, and is
now stuck in the "clientreplay" state. It's been like this for several
hours. Here are some details about the environment:
mds/mon server sap:
sap ceph # emerge --info
Portage 2.1.10.65 (default/linux/amd64/10.0, gcc-4.
I need to figure out some way of determining when it's OK to safely
reboot a single node. I believe this involves making sure that at
least one other monitor is running and up to date, and all the PGs on
the local OSDs have up to date copies somewhere else in the cluster.
We're not concerned about
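Not a complete answer, but here is a rough sketch of the checks that
paragraph describes, using commands from roughly that era (the noout flag
may not be available on every release, so treat this as a starting point):

# monitors: confirm a quorum will survive without this node
$ ceph quorum_status

# PGs: confirm everything is active+clean, i.e. every PG already has
# current copies on OSDs other than the ones on this node
$ ceph health
$ ceph pg stat

# optional: keep CRUSH from rebalancing data while the node is down
$ ceph osd set noout
# ... reboot the node ...
$ ceph osd unset noout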
On 09/25/2012 09:38 AM, Christian Huang wrote:
> Hi Alex,
> just realized I made a mistake on the provided environment
> information for the additional verification, it's Ubuntu 12.04, not
> 12.10.
> we use 12.04 with either stock kernel(3.2 series), or 3.5 from
> ubuntu-quantal or 3.5/3.
On Tue, Sep 25, 2012 at 6:24 AM, Guilhem LETTRON
wrote:
> Hi,
>
> I use ceph in production with radosgw.
> I just noticed a configuration problem with my radosgw pool: the number of
> pgs is really too low.
>
> To correct this, I want to copy all my data to a new pool with "rados
> cppool" and switch betwe
Applied, thanks!
sage
On Tue, 25 Sep 2012, Yan, Zheng wrote:
> From: "Yan, Zheng"
>
> When starting an MDS that was stopped cleanly, we need to manually
> adjust mydir's auth. This is because the MDS log is empty in this case,
> so mydir's auth cannot be adjusted during log replay.
>
> Signed-off-by: Y
Hi Alex,
Just realized I made a mistake in the environment information provided
for the additional verification: it's Ubuntu 12.04, not 12.10.
We use 12.04 with either the stock kernel (3.2 series), 3.5 from
ubuntu-quantal, or 3.5/3.6 from the kernel PPA.
Best Regards,
Chris.
On Tue, Sep 25, 2
Sure sounds like the MDS servers are having problems. Is it any better
or worse with a single MDS instead of 2? It might be worth turning MDS
debugging up and seeing if anything interesting pops up in the MDS logs.
http://ceph.com/wiki/Debugging
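Something along these lines should do it; the debug levels below are
typical values rather than anything official:

# in ceph.conf on the MDS host, then restart the mds
[mds]
    debug mds = 20
    debug ms = 1

# or inject into the running daemon without a restart (repeat per MDS)
$ ceph mds tell 0 injectargs '--debug-mds 20 --debug-ms 1'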
Mark
On 09/25/2012 12:29 AM, peacebull wrote:
On 09/25/2012 04:26 AM, Damien Churchill wrote:
> On 25 September 2012 07:09, Christian Huang wrote:
>> we used Ubuntu 12.10 as base OS
>
> Just a heads up, the 3.5.0-15.22 kernel in 12.10 has that patch already.
>
>
Thanks, I've been working with the 3.2.0 kernel from the logs provided
an
# begin crush map
# devices
device 0 device0
device 1 device1
device 2 device2
# types
type 0 osd
type 1 domain
type 2 pool
type 3 host
type 4 ghost
host hone {
	id -2
	alg straw
	hash 0
	item device0 weight 1.000
	item device1 weight 1.000
}
ghost hsec {
On 09/25/2012 11:32 AM, hemant surale wrote:
> I have created two different types, "host" (osd0, osd1 in it) and "ghost"
> (osd2 in it), and used two separate crush rules to use them.
Could you send your crushmap? Attachment or pastebin?
> so can i create two different pools utilizing two different
> cr
On 25/09/12 02:37, Sage Weil wrote:
> Hi John,
>
> It looks like we aren't encoding the old format for the pool_stat_t
> structure (which changed in v0.42). Can you try with the patch from
> wip-3212 applied? You can get debs from the gitbuilders, see
>
> http://ceph.com/docs/master/ins
I have created two different types, "host" (osd0, osd1 in it) and "ghost"
(osd2 in it), and used two separate crush rules to use them.
So can I create two different pools utilizing two different
crush_ruleset values for data placement? Right now I am using Ceph v0.36
with it; when I do so it goes unresponsi
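Two rulesets for two pools should work in principle; a rough sketch of the
mechanics follows. The rule names, ruleset numbers, and pg counts are
invented for illustration, not taken from the thread:

# in the decompiled crushmap, one rule per root bucket:
rule host_rule {
	ruleset 3
	type replicated
	min_size 1
	max_size 10
	step take hone
	step choose firstn 0 type osd
	step emit
}
rule ghost_rule {
	ruleset 4
	type replicated
	min_size 1
	max_size 10
	step take hsec
	step choose firstn 0 type osd
	step emit
}

# after recompiling and injecting the map, point one pool at each ruleset:
$ ceph osd pool create hostpool 128
$ ceph osd pool set hostpool crush_ruleset 3
$ ceph osd pool create ghostpool 128
$ ceph osd pool set ghostpool crush_ruleset 4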
Hi Alex,
[resend]
Some updates on the patch:
unfortunately, it is still reproducible after the patch is
applied in 3.2.0-30.48 of the precise tree
git://kernel.ubuntu.com/ubuntu/ubuntu-precise.git
we also found the patch was already included in
Ubuntu-3.5.0-15.22, from the quan
On 25 September 2012 07:09, Christian Huang wrote:
> we used Ubuntu 12.10 as base OS
Just a heads up, the 3.5.0-15.22 kernel in 12.10 has that patch already.
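For anyone who wants to confirm that, checking the tag in the Ubuntu kernel
git tree should show the commit; the repository URL and paths below are a
guess at where to look, while the tag name comes from the thread:

$ git clone git://kernel.ubuntu.com/ubuntu/ubuntu-quantal.git
$ cd ubuntu-quantal
$ git log --oneline Ubuntu-3.5.0-15.22 -- net/ceph drivers/block/rbd.c | head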