Hi! Sorry for the dumb question, could you point me to the Python
API reference docs for the object store?
Do you have an example to share for reading files/dirs?
Thanks,
Giuseppe
On 06/10/2013 07:35 PM, Stephane Boisvert wrote:
Hi,
I am wondering how safe it is to use rbd cache = true with libvirt/qemu.
I did read the documentation and it says: When the OS sends a barrier
or a flush request, all dirty data is written to the OSDs. This means
that using write-back
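(For context, the write-back cache in question is enabled per client in ceph.conf; a minimal sketch of such a [client] section is below. The numeric values are the library defaults as I understand them, so treat them as illustrative rather than prescriptive.)

[client]
    rbd cache = true
    rbd cache size = 33554432          # 32 MB cache per image (default)
    rbd cache max dirty = 25165824     # writes stall once this much data is dirty
    rbd cache target dirty = 16777216  # background flushing starts at this point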
Hi all,
I want to connect an OpenStack Folsom Glance service to Ceph.
The first option is setting up the glance-api.conf with 'default_store=rbd' and
the user and pool.
The second option is defined in
https://blueprints.launchpad.net/glance/+spec/ceph-s3-gateway (An OpenStack
installation
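(For the first option, the Glance side is just a handful of settings in glance-api.conf. A rough sketch follows; the pool name "images" and the cephx user "glance" are assumptions, substitute whatever was created on the Ceph side.)

default_store = rbd
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
rbd_store_pool = images
rbd_store_chunk_size = 8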
Hi,
I have problems with ceph-deploy gatherkeys in cuttlefish. When I run
ceph-deploy gatherkeys mon01
on my admin node, I get
Unable to find /var/lib/ceph/bootstrap-osd/ceph.keyring on ['mon01']
Unable to find /var/lib/ceph/bootstrap-mds/ceph.keyring on ['mon01']
In an attempt to
Thanks for your answer, that was exactly
what I was looking for!
We'll go forward with that cache setting!
Stephane
On 13-06-11 05:24 AM, Wolfgang Hennerbichler wrote:
On 06/10/2013 07:35 PM, Stephane Boisvert wrote:
These keys are created by the ceph-create-keys script, which should be
launched when your monitors are. It requires a monitor quorum to have
formed first.
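(If it helps, something along these lines can be run on the monitor host to confirm quorum and then look for the bootstrap keyrings; the admin socket name assumes the monitor id matches the hostname mon01, as ceph-deploy normally sets it up.)

ceph --admin-daemon /var/run/ceph/ceph-mon.mon01.asok mon_status
ls /var/lib/ceph/bootstrap-osd/ /var/lib/ceph/bootstrap-mds/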
-Greg
On Tuesday, June 11, 2013, Peter Wienemann wrote:
Hi,
I have problems with ceph-deploy gatherkeys in cuttlefish. When I run
Howdy, y'all.
We are testing Ceph and all of its features. We love RBD! However CephFS,
though clearly stated as not production ready, has been stonewalling us. In an
attempt to get rolling quickly, we followed some guides on CephFS (
http://goo.gl/BmVxG, http://goo.gl/1VtNk).
When I mount CephFS,
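(For reference, a kernel-client CephFS mount typically looks like the line below; the monitor address, user name and secret file path are placeholders, not values from this thread.)

mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret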
On Tue, Jun 11, 2013 at 9:39 AM, Bo b...@samware.com wrote:
Howdy, y'all.
We are testing Ceph and all of its features. We love RBD! However CephFS,
though clearly stated as not production ready, has been stonewalling us. In an
attempt to get rolling quickly, we followed some guides on CephFS
Holy cow.
Thank you for pointing out what should have been obvious. So glad these
emails are kept on the web for future searchers like me ;)
-bo
On Tue, Jun 11, 2013 at 11:46 AM, Gregory Farnum g...@inktank.com wrote:
On Tue, Jun 11, 2013 at 9:39 AM, Bo b...@samware.com wrote:
howdy,
Hi,
We are currently testing performance with rbd caching enabled in
write-back mode on our OpenStack (Grizzly) nova nodes. By default, nova fires
up the rbd volumes in if=none mode, as evidenced by the following command line from
ps | grep:
-drive
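(The full -drive argument nova generates looks roughly like the sketch below; the pool, volume id and cephx user are placeholders. The cache= parameter at the end is the qemu-side cache mode being discussed, cache=writeback for write-back behaviour.)

-drive file=rbd:volumes/volume-0000abcd:id=volumes:conf=/etc/ceph/ceph.conf,if=none,id=drive-virtio-disk0,format=raw,cache=writeback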
Hi,
On 11.06.2013 at 19:14, w sun ws...@hotmail.com wrote:
Hi,
We are currently testing performance with rbd caching enabled in
write-back mode on our OpenStack (Grizzly) nova nodes. By default, nova fires
up the rbd volumes in if=none mode, as evidenced by the following command line
Here are the libraries for the Ceph Object Store.
http://ceph.com/docs/master/radosgw/s3/python/
http://ceph.com/docs/master/radosgw/swift/python/
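(To give a concrete starting point for the "reading files/dirs" question, here is a minimal sketch along the lines of the S3 Python examples in those docs, using boto; the endpoint host, keys, and bucket/object names are placeholders.)

import boto
import boto.s3.connection

# Connect to the radosgw S3 endpoint (keys and hostname are placeholders).
conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    host='objects.example.com',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

# "Dirs" map to buckets, "files" to objects (keys).
for bucket in conn.get_all_buckets():
    print bucket.name

bucket = conn.get_bucket('my-bucket')
for key in bucket.list():
    print key.name, key.size

# Read the contents of a single object.
data = bucket.get_key('hello.txt').get_contents_as_string()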
On Tue, Jun 11, 2013 at 2:17 AM, Giuseppe "Gippa" Paternò
gpate...@gpaterno.com wrote:
Hi! Sorry for the dumb question, could you point me to
Hi Greg,
thanks for this very useful hint. I found the origin of the problem and
will open a bug report in Redmine.
Cheers, Peter
On 06/11/2013 05:58 PM, Gregory Farnum wrote:
These keys are created by the ceph-create-keys script, which should be
launched when your monitors are. It requires
Gary,
I've added that instruction to the docs. It should be up shortly. Let
me know if you have other feedback for the docs.
Regards,
John
On Mon, Jun 10, 2013 at 9:13 AM, Gary Bruce garyofscotl...@gmail.com wrote:
Hi again,
I don't see anything in http://ceph.com/docs/master/start/ that
I have a cluster I originally built on argonaut and have since
upgraded it to bobtail and then cuttlefish. I originally configured
it with one node serving as both the mds and mon node, and 4 other nodes
for hosting OSDs:
a1: mon.a/mds.a
b1: osd.0, osd.1, osd.2, osd.3, osd.4, osd.20
b2: osd.5,
On Tue, Jun 11, 2013 at 2:35 PM, Bryan Stillwell
bstillw...@photobucket.com wrote:
I have a cluster I originally built on argonaut and have since
upgraded it to bobtail and then cuttlefish. I originally configured
it with one node for both the mds node and mon node, and 4 other nodes
for
On Tue, Jun 11, 2013 at 3:50 PM, Gregory Farnum g...@inktank.com wrote:
You should not run more than one active MDS (less stable than a
single-MDS configuration, bla bla bla), but you can run multiple
daemons and let the extras serve as a backup in case of failure. The
process for moving an
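(A sketch of what that looks like in ceph.conf: define more than one mds section and start the extra daemons; with the default of a single active MDS, the others simply sit as standbys. The host names here are placeholders, not taken from this cluster.)

[mds.a]
    host = a1
[mds.b]
    host = b1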
On Tue, Jun 11, 2013 at 3:04 PM, Bryan Stillwell
bstillw...@photobucket.com wrote:
On Tue, Jun 11, 2013 at 3:50 PM, Gregory Farnum g...@inktank.com wrote:
You should not run more than one active MDS (less stable than a
single-MDS configuration, bla bla bla), but you can run multiple
daemons
Hi,
I am now trying to set up ceph 0.61 built from source.
I have built it and I have defined the config file in /etc/ceph/ceph.conf.
[mon]
mon data = /mnt/mon$id
[mon.0]
host = dsi
mon addr = 10.217.242.28:6789
I created the directory /mnt/mon0. The hostname dsi
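(For a source build without ceph-deploy or mkcephfs, the monitor can be initialised and started by hand roughly as below, matching the paths in the ceph.conf above; the keyring path is a placeholder for whatever was generated with ceph-authtool.)

mkdir -p /mnt/mon0
ceph-mon --mkfs -i 0 -c /etc/ceph/ceph.conf --keyring /tmp/ceph.mon.keyring
ceph-mon -i 0 -c /etc/ceph/ceph.conf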
Hi,
I had not been able to use ceph-deploy to prepare the OSDs. It seemed every
time I executed this particular command (assuming the data and journal run on
the same disk), I ended up with the message:
ceph-disk: Error: Command '['partprobe','/dev/cciss/c0d1']' returned non-zero
exit status 1
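(A first debugging step, sketched under the assumption that the device path in the error is correct: run the failing call by hand and check whether anything still holds the disk open.)

partprobe /dev/cciss/c0d1 ; echo $?
parted /dev/cciss/c0d1 print
mount | grep c0d1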