is printed in big bold letters on the front page of the sales brochure. ;)
--
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
signature.asc
Description: OpenPGP digital signature
and gives you the map you could plug into a lookup table or
something to get right to the function call. My C++ is way rusty and I've
no idea what's available in boost & co. -- if you have to roll your own
JSON parser then you indeed don't care how that vector<string> is encoded.
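To make the round-trip concrete, here is a minimal sketch of the idea: the CLI side JSON-encodes the command vector, the receiving side decodes it and dispatches through a lookup table keyed on the command prefix. All names here (handle_osd, DISPATCH, etc.) are made up for illustration; this is not the actual Ceph monitor interface.

```python
import json

def handle_osd(args):
    # placeholder handler; a real one would act on the sub-command
    return "osd handler got %r" % (args,)

# lookup table mapping command prefix -> handler function
DISPATCH = {"osd": handle_osd}

def encode_command(argv):
    # vector<string> -> JSON array on the sending (CLI) side
    return json.dumps(argv)

def dispatch(payload):
    argv = json.loads(payload)   # JSON array -> list of strings
    handler = DISPATCH[argv[0]]  # jump straight to the function call
    return handler(argv[1:])

print(dispatch(encode_command(["osd", "pool", "set", "data", "size", "2"])))
```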
On 2/6/2013 5:54 AM, Dennis Jacobfeuerborn wrote:
...
To mount cephfs like that you need to have kernel support. As the Linux
kernel on CentOS 6.3 is version 2.6.32 and Ceph support wasn't added until
2.6.34, you need to compile your own kernel.
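The version gate above is easy to sanity-check mechanically; a small sketch (the release strings are examples of the form `uname -r` prints):

```python
# CephFS client support landed in mainline 2.6.34, so the stock
# CentOS 6.3 kernel (2.6.32) predates it.

def has_cephfs_client(release):
    """release like '2.6.32-279.el6.x86_64'; compares the x.y.z part."""
    version = tuple(int(p) for p in release.split("-")[0].split("."))
    return version >= (2, 6, 34)

print(has_cephfs_client("2.6.32-279.el6.x86_64"))  # stock CentOS 6.3
print(has_cephfs_client("3.7.0"))
```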
The better alternative is probably to install a ...
Any reason you can't have your CLI json-encode the commands (or,
conversely, your cgi/wsgi/php/servlet URL handler decode them into
vector<string>) before passing them on to the monitor?
On 02/06/2013 02:14 PM, Sage Weil wrote:
On Wed, 6 Feb 2013, Dimitri Maziuk wrote:
Any reason you can't have your CLI json-encode the commands (or,
conversely, your cgi/wsgi/php/servlet URL handler decode them into
vector<string>) before passing them on to the monitor?
We can, but they won't
?
On 1/24/2013 2:49 AM, Gandalf Corvotempesta wrote:
2013/1/24 Dimitri Maziuk dmaz...@bmrb.wisc.edu:
So I'm stuck at a point way before those guides become relevant: once I
had one OSD/MDS/MON box up, I got HEALTH_WARN 384 pgs degraded; 384 pgs
stuck unclean; recovery 21/42 degraded (50.000%)
On 1/24/2013 8:20 AM, Sam Lang wrote:
Yep it means that you only have one OSD with replication level of 2.
If you had a rep level of 3, you would see degraded (66.667%). If you
just want to make the message go away (for testing purposes), you can
set the rep level to 1
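The percentages follow directly from the replication level: with a single OSD, each object has 1 copy out of the `size` copies the pool wants, so (size - 1) / size of the copies are missing. A quick check of the arithmetic:

```python
def degraded_pct(osds_available, pool_size):
    # copies that cannot be placed anywhere = wanted - placeable
    missing = max(pool_size - min(osds_available, pool_size), 0)
    return 100.0 * missing / pool_size

print("%.3f" % degraded_pct(1, 2))  # 50.000, as in the HEALTH_WARN above
print("%.3f" % degraded_pct(1, 3))  # 66.667
print("%.3f" % degraded_pct(2, 2))  # 0.000
```

For the Ceph of that era the rep level was a per-pool setting, changed with something like `ceph osd pool set data size 1` (command from memory, pool name is an example).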
One other question I have left (so far) is: I read and tried to follow
http://ceph.com/docs/master/install/rpm/ and
http://ceph.com/docs/master/start/quick-start/ on CentOS 6.3.
mkcephfs step fails without rbd kernel module.
I just tried to find libvirt, kernel, module, and qemu on those
On 1/24/2013 9:58 AM, Wido den Hollander wrote:
On 01/24/2013 04:53 PM, Jens Kristian Søgaard wrote:
Hi Dimitri,
Where in ceph.conf do I tell it to use qemu and librbd instead of
kernel module?
You do not need to specify that in ceph.conf.
When you run qemu then specify the disk for
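The sentence above is cut off; for reference, the qemu-side disk specification of that era looked roughly like the following (from memory of the qemu rbd support of the time; "rbd" pool and "myimage" are placeholder names). The rbd: prefix makes qemu open the image through librbd, so no kernel module is involved:

```shell
qemu -m 1024 -drive format=raw,file=rbd:rbd/myimage
```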
On 1/24/2013 10:22 AM, Sam Lang wrote:
... Does that make sense?
Yes, but when I'm trying to set up a ceph server using the quick start
guide, mkcephfs is failing with an error message I didn't write down,
but the complaint was along the lines of missing rbd.ko. Booting a 3.7
kernel made it
On 01/24/2013 12:15 PM, Dan Mick wrote:
On 01/24/2013 07:28 AM, Dimitri Maziuk wrote:
On 1/24/2013 8:20 AM, Sam Lang wrote:
Yep it means that you only have one OSD with replication level of 2.
If you had a rep level of 3, you would see degraded (66.667%). If you
just want to make the message
- there should be no space between "rw," and "noatime" in
  osd mount options {fs-type} = {mount options}  # default mount option is rw,noatime
- for ext4, you need to specify user_xattr there or mkcephfs will fail
  (with --mkfs at least).
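Putting the two points together, an illustrative ceph.conf fragment (the option name follows the docs quoted above; the values are examples, not a recommendation):

```ini
[osd]
osd mount options ext4 = rw,noatime,user_xattr
```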
On 01/24/2013 12:16 PM, Dan Mick wrote:
This is an apparently-unique problem, and we'd love to see details.
I hate it when it makes a liar out of me, but this time around it worked on
2.6.32 -- FSVO "worked": I did get it to the 384 pgs stuck unclean stage.
to the quick start,
b) more importantly, if there are any plans to write more quickstart
pages, I'd love to see an "add another OSD (MDS, MON) to an existing
pool in 5 minutes" one.
Thanks all,
On 01/24/2013 03:48 PM, Sage Weil wrote:
On Thu, 24 Jan 2013, Dimitri Maziuk wrote:
So I re-did it with 2 OSDs and got the expected HEALTH_OK right from
the start.
There may be a related issue at work here: the default crush rules now
replicate across hosts instead of across osds, so
the device path should end with /dev/rbd0 instead of /dev/rbd/rbd/foo.
? That I only have one OSD? Or is it genuinely unhealthy?
have one OSD? Or is it genuinely unhealthy?
Assuming you have more than one host ...
I just said I have one host. So is that expected when I only have one host?
timeout, reset the link, resend the command,
rinse, lather, repeat. (You usually get "slow to respond, please be
patient" and/or "resetting link" in syslog/console.) It's at a low
enough level to freeze the whole system for minutes.
On 1/19/2013 12:16 PM, Sage Weil wrote:
We generally recommend the KVM+librbd route, as it is easier to manage the
dependencies, and is well integrated with libvirt. FWIW this is what
OpenStack and CloudStack normally use.
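For the libvirt integration mentioned, the guest disk is declared as a network disk backed by rbd, roughly like the fragment below (from memory of the libvirt domain XML of the time; pool/image and target names are placeholders, and the cephx auth element is omitted):

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='rbd/myimage'/>
  <target dev='vda' bus='virtio'/>
</disk>
```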
OK, so is there a quick start document for that configuration?
(Oh, ...; that'll only scale to a few dozen hosts at best.)
TIA,
On 01/15/2013 12:36 PM, Gregory Farnum wrote:
On Tue, Jan 15, 2013 at 10:33 AM, Dimitri Maziuk dmaz...@bmrb.wisc.edu
wrote:
At the start of the batch #cores-in-the-cluster processes try to mmap
the same 2GB and start reading it from SEEK_SET at the same time. I
won't know until I try but I
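The access pattern described (many processes mapping the same file read-only and scanning it from offset zero) can be sketched per-process like this; the file here is tiny, the real case is ~2 GB:

```python
import mmap
import os
import tempfile

# stand-in for the shared 2 GB file
path = tempfile.mkstemp()[1]
with open(path, "wb") as f:
    f.write(b"x" * 4096)

# each worker maps the file read-only and reads from the start
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    mm.seek(0)          # the SEEK_SET-from-zero scan
    first = mm.read(16)
    mm.close()

os.unlink(path)
print(first)
```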