Hi Josh,
that was the right answer! Thank you! :)
Norbert
On 12.12.2012 09:57, Josh Durgin wrote:
On 12/11/2012 11:48 PM, norbi wrote:
Hi Ceph-List,
i have set up a Ceph-Cluster with 3 OSDs, 3 Mons, 2 MDS over three
server.
Server 1 has 2 ODSs (osd0,osd2) and one MON/MDS and Server 2 has
On 11/12/12 23:00, Gary Lowell wrote:
On Dec 11, 2012, at 2:06 AM, James Page wrote:
On 11/12/12 06:32, Gary Lowell wrote:
I assume you are building with dpkg-buildpackage?
The manpage
On 12/11/2012 06:37 PM, Liu Bo wrote:
On Tue, Dec 11, 2012 at 09:33:15AM -0700, Jim Schutt wrote:
On 12/09/2012 07:04 AM, Liu Bo wrote:
On Wed, Dec 05, 2012 at 09:07:05AM -0700, Jim Schutt wrote:
Hi Jim,
Could you please apply the following patch to test if it works?
Hi,
So far, with
Some OSD nodes can not boot on a host of my cluster; the reasons in osd.log may differ, but it looks like the journal is broken.
logs attached, I can attach gziped core dump if needed.
BTW, does Ceph have an fsck-like tool to check the filesystem
consistency by hand?
osd.22.log.gz
On Wed, 12 Dec 2012, James Page wrote:
On 11/12/12 23:00, Gary Lowell wrote:
On Dec 11, 2012, at 2:06 AM, James Page wrote:
On 11/12/12 06:32, Gary Lowell wrote:
I assume you are
Below is a list of the patches I've posted in the last
six weeks or so. Most of them are still in need of a
review.
I'll say what I said last time: there's not a lot of magic in here, and if you can read C code you can offer a review, and I would love to hear from you.
If you think it looks
Hello Bryant,
On 12/11/2012 08:23 PM, Bryant Ng wrote:
Hi,
I'm pretty new to Ceph and am just learning about it.
Where are the CRUSH maps stored in Ceph? In the documentation I see you
use the 'crushtool' to compile and decompile the crush map.
The crushmap is kept alongside the
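For anyone following along, the compiled crushmap can be pulled from the cluster and round-tripped with crushtool roughly like this (output paths are illustrative):

```shell
# Fetch the compiled crushmap from the monitors
ceph osd getcrushmap -o /tmp/crushmap.bin
# Decompile it to editable text
crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt
# After editing, compile and inject it back into the cluster
crushtool -c /tmp/crushmap.txt -o /tmp/crushmap.new
ceph osd setcrushmap -i /tmp/crushmap.new
```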
Hi,
Today during a planned kernel upgrade one of the osds (which I have not
touched yet) started to complain about ``misdirected client'':
2012-12-12 21:22:59.107648 osd.20 [WRN] client.2774043
10.5.0.33:0/1013711 misdirected client.2774043.0:114 pg 5.ad140d42 to
osd.20 in e23834, client e23834 pg 5.542
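When debugging a warning like this, one thing worth checking is what the cluster currently thinks the PG maps to, compared with the epochs in the message (the pg id below is taken from the log line above):

```shell
# Show the up/acting OSD set for the pg named in the warning
ceph pg map 5.542
# Check the current osdmap epoch against e23834 from the log
ceph osd dump | head -n 5
```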
I guess my question was where is the crushmap (and osdmap) persisted on
the monitor node?
If the entire cluster goes down, I assume the monitor is reading the
crushmap from some persistent file stored on disk or a db? Is that why
the minimum recommended storage for monitors is 10GB? Is the
On 12/12/2012 07:02 PM, Bryant Ng wrote:
I guess my question was where is the crushmap (and osdmap) persisted on
the monitor node?
If the entire cluster goes down, I assume the monitor is reading the
crushmap from some persistent file stored on disk or a db? Is that why
the minimum recommended
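As a side note, one way to see what the monitors are persisting, without poking at their on-disk store directly, is to pull the current osdmap (which embeds the crushmap) over the wire; the output path is illustrative:

```shell
ceph osd getmap -o /tmp/osdmap
osdmaptool --print /tmp/osdmap   # dumps the epoch, pools, and embedded crush rules
```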
On Wed, Dec 12, 2012 at 03:19:22PM +0100, Maciej Gałkiewicz wrote:
Hello
I can't map rbd volume after upgrading to 0.55 from 0.53.
# ceph auth list
...
client.test
key: AQAaAshQ6NakBRAAHbOaK7DvO9NIKDwTV5REMw==
caps: [mon] allow r
caps: [osd] allow rwx
On 12/12/12 15:44, Sage Weil wrote:
I thought this was going to be the easy solution, but on
running a
quick test, we are already calling pbuilder with the
--binary-arch option and it is building the java package
anyway. It looks like there
Hello List,
ceph osd create $NUM
does not seem to work anymore ;-(
# ceph osd createosd. 62
unknown command createosd
The crushmap is already changed and imported; ceph.conf is altered and reloaded.
Greets
Stefan
--
To unsubscribe from this list: send the line unsubscribe ceph-devel in
the body
On Wed, Dec 12, 2012 at 1:31 PM, Stefan Priebe s.pri...@profihost.ag wrote:
Hello List,
ceph osd create $NUM
does not seem to work anymore ;-(
# ceph osd createosd. 62
unknown command createosd
Read those two lines again. Very slowly. :)
The correct syntax is
ceph osd create [uuid]
The
HI Greg,
sorry just a copy paste error.
[cloud1-ceph1: ~]# ceph osd create 61
(22) Invalid argument
Read those two lines again. Very slowly. :)
The correct syntax is
ceph osd create [uuid]
The uuid is optional, but you don't specify IDs; it gives you an ID back.
Stefan
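To spell out the workflow Greg describes: the cluster hands back the next free id, so a new OSD is typically added by capturing that returned id rather than choosing one (a hedged sketch; the uuid argument is optional):

```shell
# The cluster allocates the id; you don't pass 61 yourself
OSD_ID=$(ceph osd create)        # or: ceph osd create <some-uuid>
echo "cluster allocated osd.${OSD_ID}"
```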
Yeah; 61 is not a valid UUID and you can't specify anything else on that line.
On Wed, Dec 12, 2012 at 1:38 PM, Stefan Priebe s.pri...@profihost.ag wrote:
HI Greg,
sorry just a copy paste error.
[cloud1-ceph1: ~]# ceph osd create 61
(22) Invalid argument
Read those two lines again. Very
Hi Greg,
thanks for the explanation. I'm using the current next branch.
I'm using:
host1:
osd 11 .. 14
host2:
osd 21 .. 24
host3:
osd 31 .. 34
host4:
osd 41 .. 44
host5:
osd 51 .. 54
Right now I want to add host6, but even with your explanation I still
don't know how to add osd 61-64 to the osdmap.
On Wed, Dec 12, 2012 at 2:00 PM, Stefan Priebe s.pri...@profihost.ag wrote:
Hi Greg,
thanks for the explanation. I'm using the current next branch.
I'm using:
host1:
osd 11 .. 14
host2:
osd 21 .. 24
host3:
osd 31 .. 34
host4:
osd 41 .. 44
host5:
osd 51 .. 54
Right now I want to add host6.
Try removing the = from the osd cap, i.e.
Already tried and it did not help.
regards
Maciej
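For anyone hitting similar cap trouble after an upgrade, the caps on a client key can be rewritten in place with `ceph auth caps` (the client name is the one from this thread; the exact cap strings your version expects may differ):

```shell
ceph auth caps client.test mon 'allow r' osd 'allow rwx'
ceph auth get client.test    # verify the new caps before retrying rbd map
```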
Ah OK, good to know.
UUIDs are a good decision but right now the journal devices (if you use
a block device) are still absolute.
[osd.23]
host = cloud1-ceph2
public addr = 10.255.0.101
cluster addr = 10.255.0.101
osd journal = /dev/sdd1
Greets,
Stefan
Am
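One way around the absolute-path problem, sketched here under the assumption that udev's by-partuuid symlinks are available on the host (the uuid below is a placeholder, not from this cluster):

```ini
[osd.23]
    host = cloud1-ceph2
    public addr = 10.255.0.101
    cluster addr = 10.255.0.101
    # /dev/sdd1 can change across reboots; a by-partuuid symlink stays stable
    osd journal = /dev/disk/by-partuuid/0f3a9c1e-0000-0000-0000-000000000000
```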
I've often wondered if cluster and client would have been better choices of
name for those nets.
On Dec 11, 2012, at 12:58 AM, Wido den Hollander w...@widodh.nl wrote:
Hi Chuanyu,
On 12/11/2012 02:00 PM, Chuanyu Tsai wrote:
Hi cephers,
I know that we can setup cluster addr and public
On Dec 12, 2012, at 2:17 PM, James Page wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
On 12/12/12 15:44, Sage Weil wrote:
Gah - this will bite when I do the next upload to Ubuntu as
well then. Can I suggest that we rework debian/rules for
debhelper >= 7 and use overrides rather
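For context, the debhelper >= 7 override style being suggested looks roughly like this in debian/rules (the override target and its body are illustrative, not taken from the ceph packaging):

```make
#!/usr/bin/make -f
# Let dh drive the whole build...
%:
	dh $@

# ...and override only the steps that need custom behaviour
override_dh_auto_build:
	dh_auto_build
	# extra build steps would go here
```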
2012/12/13 Dan Mick dan.m...@inktank.com
I've often wondered if cluster and client would have been better choices
of name for those nets.
it looks good!
On Dec 11, 2012, at 12:58 AM, Wido den Hollander w...@widodh.nl wrote:
Hi Chuanyu,
On 12/11/2012 02:00 PM, Chuanyu Tsai wrote:
Hi
Using wip-nick-newer, the problem still presented itself after 4
successful runs (so it may be a fluke, but it got slightly further
than before). The log is here:
https://gist.github.com/raw/4273114/9085ed00d5bdd5ebab9a94b48f4a562d1fbac431/rbd-hang-1355359129.log
Unfortunately I forgot to enable
Hi,
Alright, I did some work on this. My time was limited last week, so it
took a while. Nonetheless, I have some changes pushed to github
(github.com/roaldvanloon, ceph, branch wip-config).
Most of the time went into thinking about a nice clean interface
which A) would make porting the
Apologies, I missed your reply on Monday. Any attempt to read or
write the object will hit the file on the primary (the smaller one
with the newer syslog entries). If you take down both OSDs (12 and
40) while performing the repair, the vm in question will hang if it
tries to access that block,