Store dir layouts not only in the journal, but also in inode
information for the directory, so that they aren't dropped on the
floor when dirs are evicted from the MDS cache and dropped from the
journal. Also restore dir layouts when fetching dirs back into the
MDS cache.
This is supposed to fix
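The change described above can be sketched in miniature. This is a hedged Python sketch with hypothetical names (`Inode`, `store_dir`, `fetch_dir`, `backing`), not the actual MDS code: the layout is recorded both in the journal (as before) and with the inode's own backing metadata, so it can be restored when the directory is fetched back after cache eviction.

```python
# Hedged sketch (hypothetical names, not the real MDS data structures)
# of persisting a dir layout with the inode so it survives cache
# eviction and journal expiry, and restoring it on fetch.
class Inode:
    def __init__(self):
        self.layout = None

def store_dir(inode, layout, journal, backing):
    journal.append(("layout", layout))    # as before: journaled
    inode.layout = layout                 # new: kept with the inode
    backing[id(inode)] = layout           # written with inode metadata

def fetch_dir(inode, backing):
    inode.layout = backing.get(id(inode))  # new: restored on fetch
    return inode.layout

journal, backing = [], {}
d = Inode()
store_dir(d, {"stripe_unit": 4194304}, journal, backing)
d.layout = None                  # simulate eviction from the MDS cache
print(fetch_dir(d, backing))
```

With only the journal copy, the eviction step would lose the layout for good; the extra copy in the inode's backing store is what makes the fetch recover it.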
On 16/08/12 16:44, hemant surale wrote:
Hello Tommi, Ceph community,
I did mkdir the directory. In fact, I created a new partition with the
same name and formatted it using ext3. I also executed the following
command for the partition/directory:
"mount -o remount,user_xattr /"
Still I am getting the same error:
>> 2012-08-14 14:31:29.390
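Whether the `user_xattr` remount actually took effect can be checked directly: a filesystem that accepts the `user_xattr` option will allow `user.*` extended attributes on its files. A minimal Python probe (the helper name `user_xattr_supported` is made up for illustration):

```python
import os
import tempfile

def user_xattr_supported(path):
    """Probe whether the filesystem holding `path` accepts user.*
    extended attributes -- the capability that the user_xattr mount
    option enables on ext3."""
    probe = os.path.join(path, ".xattr_probe")
    with open(probe, "w"):
        pass
    try:
        os.setxattr(probe, b"user.test", b"1")
        return os.getxattr(probe, b"user.test") == b"1"
    except OSError:
        return False  # e.g. EOPNOTSUPP when user_xattr is not enabled
    finally:
        os.remove(probe)

print(user_xattr_supported(tempfile.gettempdir()))
```

If this returns False on the mount in question, the remount did not take, and the xattr-related error above would be expected.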
master/next/testing should now have the intrusive_ptr errors fixed.
-Sam
On Wed, Aug 15, 2012 at 1:12 PM, alphe salas wrote:
>
>
>
> /usr/include/boost/smart_ptr/intrusive_ptr.hpp: In instantiation of
> 'boost::intrusive_ptr::intrusive_ptr(T*, bool) [with T = MOSDPGLog]':
> osd/PG.h:871:26: req
On Wed, 15 Aug 2012, Atchley, Scott wrote:
> On Aug 15, 2012, at 3:46 PM, Sage Weil wrote:
> > I'm experiencing a stall with Ceph daemons communicating over TCP that
> > occurs reliably with 3.6-rc1 (and linus/master) but not 3.5. The basic
> > situation is:
> >
> > - the socket is two processes communicating over TCP on the same host, e.g.
> >   tcp 0 2164849 10.214.132.38:6801 10.214.132.38:5
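The shape of the reported situation, two processes on one host where unread data piles up in the kernel send queue (the large Send-Q value in the netstat line above), can be reproduced in a few lines of Python. This is only a sketch of the symptom, not of the 3.6-rc1 regression itself:

```python
import socket

# Two endpoints of one TCP connection on the same host. The writer
# sends without the reader ever reading, so data accumulates in the
# kernel buffers until a non-blocking send can make no progress --
# the state netstat reports as a large Send-Q.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
writer = socket.create_connection(srv.getsockname())
reader, _ = srv.accept()

writer.setblocking(False)
sent = 0
try:
    while True:
        sent += writer.send(b"x" * 65536)
except BlockingIOError:
    pass  # send and receive buffers are now full; a blocking
          # writer would stall here until the reader drains them

writer.close(); reader.close(); srv.close()
```

In the healthy case the stall clears as soon as the peer reads; the bug report is that with 3.6-rc1 the queue stays stuck even though both processes are live.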
Well,

On 08/14/2012 09:29 PM, Sage Weil wrote:
> On Tue, 14 Aug 2012, Oliver Francke wrote:
>> Hi Sage,
>> I just updated to debian-testing/0.50 this afternoon, after some hint:
>> * osd: better tracking of recent slow operations
> This is actually about the admin socket command to dump operations in
> flight
Yeah I actually figured it out through trial and error but thanks!

On Wed, Aug 15, 2012 at 1:18 PM, Wido den Hollander wrote:
> On 08/15/2012 12:21 PM, John Axel Eriksson wrote:
>> I found somewhere that it's supposed to be
>> /var/lib/ceph/radosgw/ceph-$id. Ok in my case I guess that would mean:
>> /var/lib/ceph/radosgw/ceph-client.radosgw.gateway would that be
>> correct? Since I need to store the keyring in that directory for
>> example and I want to use the defaults.
>> John
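The naming convention being worked out in that thread, a data directory of the form `ceph-$id` under `/var/lib/ceph/radosgw`, is simple enough to express directly. A small sketch (the helper name `radosgw_data_dir` is made up; the path components come from the email itself):

```python
import os.path

def radosgw_data_dir(auth_id, cluster="ceph",
                     base="/var/lib/ceph/radosgw"):
    """Compose the default radosgw data directory: base/cluster-$id,
    where $id is the full auth name (e.g. client.radosgw.gateway)."""
    return os.path.join(base, "%s-%s" % (cluster, auth_id))

print(radosgw_data_dir("client.radosgw.gateway"))
# -> /var/lib/ceph/radosgw/ceph-client.radosgw.gateway
```

That matches the path John arrived at, with the keyring then stored inside that directory.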
Signed-off-by: Wido den Hollander
---
 doc/install/debian.rst        |    8 ++++++++
 doc/source/build-packages.rst |    4 ++++
 doc/source/get-tarballs.rst   |    3 ++-
 3 files changed, 14 insertions(+), 1 deletion(-)
diff --git a/doc/install/debian.rst b/doc/install/debian.rst
index be39827..
From: "Yan, Zheng"
The global_id for ceph_auth_client is always zero. It prevents the
kernel from creating more than one ceph_clients that are connected
to the same cluster. (ceph_debugfs_client_init() fails to create
debugfs directory)
Without this patch, I can't use rbd and cephfs on the same
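The failure mode described, a constant-zero global_id preventing a second client, can be sketched abstractly. This is a hedged Python model with hypothetical names, not the kernel code: if each client's debugfs directory is named after its global_id, two clients that both report id 0 collide on the same directory name, so the second `mkdir` fails.

```python
# Hypothetical model of debugfs directory registration keyed on
# global_id. With a unique id per client both registrations succeed;
# with a constant zero id the second one collides and fails, which is
# the symptom the patch above fixes.
existing = set()

def debugfs_dirname(global_id):
    return "client%d" % global_id

def register(global_id):
    name = debugfs_dirname(global_id)
    if name in existing:
        return False  # mkdir fails: directory already exists
    existing.add(name)
    return True

print(register(0))  # first client: directory created
print(register(0))  # second client, same zero id: collision
```

With distinct non-zero global_ids the two clients would get distinct directories and both could connect to the same cluster.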
Hi guys,
Thank you for the tremendous answers :D
How far are we from seeing this feature in the stable branch? Will it
be part of 0.48.x, or is it further away than that?
Cheers!
On Mon, Aug 13, 2012 at 7:49 PM, Yehuda Sadeh wrote:
> On Mon, Aug 13, 2012 at 10:22 AM, Josh Durgin wrote:
>> On 08/13/2012 09:55 AM, Gre