The behavior you both are seeing is fixed by making flush requests
asynchronous in the qemu driver. This was fixed upstream in qemu 1.4.2
and 1.5.0. If you've installed from ceph-extras, make sure you're using
the .async rpms [1] (we should probably remove the non-async ones at
this point).
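For what it's worth, a quick way to check which build actually got installed is to look at the qemu package names; the ceph-extras async builds carry ".async" in the package name/release (exact package names vary by distro):

  # list the installed qemu packages and look for the .async suffix
  rpm -qa | grep -i qemu
  # if the suffix is missing, reinstall qemu-kvm/qemu-img from the
  # ceph-extras .async rpms mentioned above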
If I remember right, the init script looks for a hostname match with
client.radosgw.{hostname}; if this matches, it starts the gateway.
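Something along these lines should show whether the section the init script is looking for actually exists (whether it uses the short or the full hostname depends on the script version):

  # the rgw init script matches the local hostname against a
  # [client.radosgw.<hostname>] section in ceph.conf
  hostname -s
  grep -A5 "client.radosgw.$(hostname -s)" /etc/ceph/ceph.conf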
Andi
Hi Wolfgang,
On 2013-10-02 09:01, Wolfgang Hennerbichler wrote:
On 10/01/2013 05:08 PM, Jogi Hofmüller wrote:
Is this [1] outdated? If not, why are the links to chef-* not
working? Is chef-* still recommended/used?
I believe this is a matter of taste. I cannot say if this is
Hi Amit,
It can, but at the moment there is some issue with keystone token caching
(in Dumpling), so every auth call hits keystone and does not cache the
token.
See here:
http://www.spinics.net/lists/ceph-users/msg04531.html
and here:
http://tracker.ceph.com/issues/6360
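For reference, the Keystone-related settings live in the radosgw section of ceph.conf, roughly like this (the values are placeholders and the section name assumes the default gateway instance):

  [client.radosgw.gateway]
      rgw keystone url = http://keystone-host:35357
      rgw keystone admin token = {admin-token}
      rgw keystone accepted roles = Member, admin
      rgw keystone token cache size = 500

Until the caching bug is fixed, the token cache size setting won't help much; that is what the tracker issue above is about.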
Thanks
Darren
Dear all,
This is getting weird now ...
On 2013-10-03 11:18, Jogi Hofmüller wrote:
root@ceph-server1:~# service ceph start
=== osd.0 ===
No filesystem type defined!
This message is generated by /etc/init.d/ceph (OK, most of you know that
I guess), which is looking for osd mkfs type in
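If your ceph.conf lists devs for the OSDs, the init script also needs to know the filesystem type; the usual fix is something like this in ceph.conf (xfs is only an example, adjust to whatever you actually use):

  [osd]
      osd mkfs type = xfs
      osd mount options xfs = rw,noatime
  [osd.0]
      devs = /dev/sdb1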
Hi All,
I have tried setting up a Ceph cluster with 3 nodes (3 monitors). I am
using RHEL 6.4 as the OS with the Dumpling (0.67.3) release. During cluster
creation (using ceph-deploy as well as mkcephfs), ceph-create-keys doesn't
return on any of the servers. Whereas, if I create a cluster with only 1 node
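ceph-create-keys only returns once the monitors have formed a quorum, so one thing to check on each node is the monitor state via the admin socket (the path below assumes the default socket location and that the mon id is the short hostname):

  ceph --admin-daemon /var/run/ceph/ceph-mon.$(hostname -s).asok mon_status
  # "state" should end up as leader or peon; if it stays in probing or
  # electing, the three mons cannot reach each other and ceph-create-keys
  # will hang waiting for quorum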
I am also having problems getting the latest version of ceph-deploy to
install on Raring.
I was able to install the updated ceph-deploy about two months ago for
Ubuntu 12.04 and Cuttlefish using the following two lines in the
'/etc/apt/sources.list.d/ceph.list' apt sources file on my 'Admin'
Thanks Josh. Sharding the keys over many buckets does make sense, but then
the question is over how many buckets? Every Amazon user has a default limit
(1,000) on the number of buckets they can create. Is there any good reason for
restricting the number of buckets created by a user?
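On the radosgw side, at least, the per-user limit is adjustable, e.g. (the uid is just an example, and this assumes your radosgw-admin supports --max-buckets):

  # raise the bucket limit for a single user (the default is 1000)
  radosgw-admin user modify --uid=johndoe --max-buckets=10000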
kernel-3.4.59-8.el6.centos.alt.x86_64 is what you want; it has the Ceph rbd.ko
driver.
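Once you're running that kernel, a quick sanity check would be something like this (the image and pool names are only examples):

  modprobe rbd
  lsmod | grep rbd
  # map a test image through the kernel client
  rbd map test-image --pool rbd
  ls /dev/rbd*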
Regards,
-Ben
What is the take on such a configuration?
Is it worth the effort of tracking rebalancing at two layers (the RAID
mirror, and possibly Ceph if the pool has a redundancy policy), or is it
better to just let Ceph rebalance itself when you lose a non-mirrored disk?
If following the raid mirror approach,
An additional side to the RAID question: when you have a box with more
drives than you can front with OSDs due to memory or CPU constraints, is
some form of RAID advisable? At the moment one OSD per drive is the
recommendation, but from my perspective this does not scale at high drive
densities
I've run into this before too. I think with broken packages, you have to
uninstall the previous version and do apt-get autoremove as well. Sometimes
you have to manually uninstall whatever it lists as the broken packages and
then do autoremove. Then, reinstall.
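Roughly the sequence I mean, in case it helps (the package names below are just the usual suspects; go by whatever dpkg actually reports as broken):

  # see what is in a broken or half-installed state
  dpkg -l | grep -E 'ceph|rados|rbd'
  # remove the offending packages, clean up leftovers, then reinstall
  apt-get remove --purge ceph ceph-common librados2 librbd1
  apt-get autoremove
  apt-get update
  apt-get install ceph-deploy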
Jogi,
I'm working on updating the documentation for manual installation. Most users
who want that level of detail are using it to incorporate Ceph into another
deployment system like Chef, Puppet, Juju, etc. What you are working
on is pre-ceph-deploy documentation, with some Chef commentary too.
The links are
Does Ceph really halve your storage like that?
If you specify N+1, does it really store two copies, or just compute
checksums across MxN stripes? I guess RAID5+Ceph with a large array (say 12
disks) would not be too bad (about 2.2 TB of raw disk for each usable TB).
But it would be nicer if I had 12 storage units
Currently Ceph uses replication. Each pool is set with a replication
factor. A replication factor of 1 obviously offers no redundancy.
Replication factors of 2 or 3 are common. So Ceph currently halves or
thirds your usable storage, accordingly. Also, note you can co-mingle
pools of various
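For example, checking and changing a pool's replication factor (the pool name is just an example):

  # show the current replication factor of a pool
  ceph osd pool get rbd size
  # store 3 copies, and require at least 2 to be up for I/O
  ceph osd pool set rbd size 3
  ceph osd pool set rbd min_size 2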
On 10/03/2013 12:40 PM, Andy Paluch wrote:
Don't you have to take down a Ceph node to replace a defective drive? If I
have a Ceph node with 12 disks and one goes bad, would I not have to take the
entire node down to replace it and then reformat?
If I have a hotswap chassis but using just an
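With one OSD per disk you normally don't touch the rest of the node: the failed disk's OSD is removed and a new one is created on the replacement, roughly like this (osd.12 is just an example id):

  # take the failed OSD out so data rebalances off it, then stop it
  ceph osd out 12
  /etc/init.d/ceph stop osd.12
  # remove it from the CRUSH map, auth database, and OSD map
  ceph osd crush remove osd.12
  ceph auth del osd.12
  ceph osd rm 12
  # after swapping the disk, create a fresh OSD on it, e.g. with
  # ceph-deploy osd create <host>:<new-device>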
If following the RAID mirror approach, would you then skip redundancy
at the Ceph layer to keep your total overhead the same? It seems that
would be risky in the event you lose your storage server with the
RAID-1'd drives. No Ceph-level redundancy would then be fatal. But if
you do RAID-1
Hi all, I'm having a problem uploading through the RADOS GW. I'm getting
the following error, and searches haven't led me to a solution.
[Fri Oct 04 04:05:11 2013] [error] [client xxx.xxx.xxx.xxx] chunked
Transfer-Encoding forbidden: /swift/v1/wwang-container/test
FastCGI version:
ii
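If I remember right, that error comes from mod_fastcgi rather than from radosgw itself: the stock mod_fastcgi refuses chunked request bodies, which is why the Ceph-patched apache2/mod_fastcgi builds were usually recommended for the gateway. Worth checking which build and module you actually have loaded:

  # see which apache2 / mod_fastcgi packages are installed
  dpkg -l | grep -E 'apache2|fastcgi'
  # and which fastcgi module apache has actually loaded
  apache2ctl -M | grep -i fastcgi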