-- All Branches --
Adam C. Emerson
2015-10-16 13:49:09 -0400 wip-cxx11time
2015-10-17 13:20:15 -0400 wip-cxx11concurrency
Adam Crume
2014-12-01 20:45:58 -0800 wip-doc-rbd-replay
Alfredo Deza
Maybe, I figured that the call to DBObjectMap::sync in FileStore::sync
should take care of it though?
-Sam
On Sat, Oct 31, 2015 at 11:41 PM, Chen, Xiaoxi wrote:
> As we use submit_transaction (instead of submit_transaction_sync) in
> DBObjectMap, and we also don't use a
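The distinction being discussed can be illustrated with a toy key-value store (names are illustrative only, not the actual DBObjectMap/leveldb code): submit_transaction queues writes without forcing them to disk, submit_transaction_sync forces durability immediately, and a later sync() makes everything queued durable.

```python
class ToyKVStore:
    """Illustrative stand-in for a leveldb-backed store (not real Ceph code)."""

    def __init__(self):
        self.durable = {}    # what would survive a crash
        self.pending = {}    # submitted but not yet synced

    def submit_transaction(self, updates):
        # Fast path: acknowledge without forcing to disk.
        self.pending.update(updates)

    def submit_transaction_sync(self, updates):
        # Slow path: force these updates (and anything already pending) to disk.
        self.pending.update(updates)
        self.sync()

    def sync(self):
        # What a periodic sync (e.g. FileStore::sync -> DBObjectMap::sync)
        # would do: make all pending writes durable.
        self.durable.update(self.pending)
        self.pending.clear()
```

Under this model, relying on submit_transaction alone is safe only if some later sync() is guaranteed to run, which is exactly the question raised above.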
As giant was declared EOL, all related suites have been removed from the schedule:
#giant EOL 15 18 * * 3,6 teuthology-suite -v -c giant -k distro -m vps
-s upgrade/dumpling-firefly-x ~/vps.yaml
#giant EOL 18 18 * * 3,6 teuthology-suite -v -c giant -k distro -m vps
-s upgrade/firefly-x ~/vps.yaml
The osd keeps some metadata in the leveldb store, so you don't want to
delete it. I'm still not clear on why pg data being there causes
trouble.
-Sam
On Mon, Nov 2, 2015 at 10:26 AM, Samuel Just wrote:
> Maybe, I figured that the call to DBObjectMap::sync in FileStore::sync
>
On Mon, 2 Nov 2015 21:25:31 -0800
Guang Yang wrote:
> Is this a valid feature request we can put into radosgw? The way I am
> thinking of implementing it is like a symbolic link: the link object just
> contains a pointer to the original object.
It's not going to be sufficient. What
Just found this:
https://www.usenix.org/conference/fast13/technical-sessions/presentation/koller
which should be helpful in constructing a persistent client-side writeback
cache for RBD that preserves consistency.
sage
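A minimal sketch of the ordering idea such a cache relies on (toy code, not an actual RBD cache implementation): dirty writes are flushed strictly in write order, so the backing image always reflects the state after some prefix of the write stream, i.e. a crash-consistent point in time.

```python
class OrderedWritebackCache:
    """Toy write-back cache that flushes in write order (illustrative only)."""

    def __init__(self, backing):
        self.backing = backing   # dict standing in for the RBD image
        self.log = []            # (block, data) in write order, not coalesced

    def write(self, block, data):
        # Every write is appended; overwrites are NOT coalesced, otherwise a
        # partial flush could expose a state that never existed.
        self.log.append((block, data))

    def flush(self, n=None):
        # Apply the n oldest writes in order; the backing store then matches
        # the state after some prefix of the write stream.
        n = len(self.log) if n is None else n
        for block, data in self.log[:n]:
            self.backing[block] = data
        del self.log[:n]
```

A real client-side cache would persist the log itself and coalesce within flush epochs, but the prefix-consistency invariant is the same one the paper above is after.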
On 11/02/2015 06:01 PM, Martin Millnert wrote:
> Minimum the documentation at
> http://docs.ceph.com/docs/master/radosgw/config-ref/ could be blessed
> with an entry on 'rgw frontends', including notes on how to configure it
> for loopback-binding access only.
Agreed:
Hi Yehuda,
We have a user requirement for a symbolic-link-like feature on
radosgw - two object IDs pointing to the same object (ideally it could
cross buckets, but same bucket is fine).
The closest feature on Amazon S3 I could find is [1], but it's not exactly
the same, the one from Amazon S3 API
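The "link object" idea could be modeled like this (toy sketch with hypothetical names, not a radosgw implementation): the link object stores only a pointer to the target key, and reads dereference it.

```python
class ToyObjectStore:
    """Toy model of symlink-like objects (illustrative, not radosgw code)."""

    LINK_PREFIX = "link:"   # marker distinguishing link objects from data

    def __init__(self):
        self.objects = {}

    def put(self, key, data):
        # Regular object: data is stored as bytes.
        self.objects[key] = data

    def link(self, link_key, target_key):
        # The link object contains only a pointer to the original object.
        self.objects[link_key] = self.LINK_PREFIX + target_key

    def get(self, key):
        data = self.objects[key]
        if isinstance(data, str) and data.startswith(self.LINK_PREFIX):
            return self.get(data[len(self.LINK_PREFIX):])  # dereference
        return data
```

As the reply notes, this alone is not sufficient in radosgw: lifecycle questions (what happens when the original is deleted or overwritten, ACLs, multi-site replication) are where the real complexity lives.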
I don't see a way for auth->authorizer to be NULL in
ceph_x_sign_message() or ceph_x_check_message_signature().
Signed-off-by: Ilya Dryomov
---
net/ceph/auth_x.c | 5 +
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/net/ceph/auth_x.c b/net/ceph/auth_x.c
We can use msg->con instead - at the point we sign an outgoing message
or check the signature on the incoming one, msg->con is always set. We
wouldn't know how to sign a message without an associated session (i.e.
msg->con == NULL) and being able to sign a message using an explicitly
provided
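The reasoning above can be modeled with a small toy (hypothetical names, not the actual net/ceph code): the signing key is reached through the message's connection, so a message without an associated session simply has nothing to sign with.

```python
class ToyAuthorizer:
    """Stand-in for the cephx authorizer holding the session secret."""
    def __init__(self, secret):
        self.secret = secret

class ToyConnection:
    """Stand-in for the messenger connection (msg->con)."""
    def __init__(self, auth):
        self.auth = auth

class ToyMsg:
    def __init__(self, con, payload):
        self.con = con          # always set at sign/check time, per the mail
        self.payload = payload

def toy_sign_message(msg):
    # Sign via msg.con: a message is signed in the context of its session,
    # so without a connection (or its authorizer) signing is impossible.
    if msg.con is None or msg.con.auth is None:
        raise ValueError("no session to sign with")
    return msg.payload ^ msg.con.auth.secret  # stand-in "signature"
```

This mirrors the argument that the NULL-authorizer branch is dead code once signing always goes through msg->con.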
Hello,
This adds a nocephx_sign_messages libceph option (the lack of which is
something people are running into, see [1]), plus a couple of related
cleanups.
[1] https://forum.proxmox.com/threads/24116-new-krbd-option-on-pve4-don-t-work
Thanks,
Ilya
Ilya Dryomov (4):
libceph:
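For reference, using the new option might look like the following (hypothetical usage sketch; the option name follows the series above, and exact availability depends on the kernel version):

```shell
# Disable cephx message signing when mapping an RBD image with the
# kernel client (assumes the option ships as described in this series).
rbd map mypool/myimage -o nocephx_sign_messages
```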
On 2/25/15 2:31 PM, Sage Weil wrote:
> Hey,
>
> We are considering switching to civetweb (the embedded/standalone rgw web
> server) as the primary supported RGW frontend instead of the current
> apache + mod-fastcgi or mod-proxy-fcgi approach. "Supported" here means
> both the primary
On Mon, Nov 2, 2015 at 12:28 PM, Jaze Lee wrote:
> Hello,
> I find that only ceph.client.admin can mount cephfs.
>
> [root@ceph-base-0 ceph]# ceph auth get client.cephfs_user
> exported keyring for client.cephfs_user
> [client.cephfs_user]
> key =
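For a non-admin CephFS mount, the client generally needs mon, mds, and osd caps; something along these lines (pool name and exact cap strings are assumptions, check them against your release):

```shell
# Hypothetical example: grant client.cephfs_user the caps a CephFS
# mount typically needs (adjust the pool name for your cluster).
ceph auth get-or-create client.cephfs_user \
    mon 'allow r' \
    mds 'allow' \
    osd 'allow rwx pool=data'
```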
Hi,
Thank you, that makes sense for testing, but I'm afraid it's not the case for me.
Even if I test on a volume that has already been tested many times, the IOPS
will not grow again. Yeah, I mean, this VM is broken; the IOPS of this VM will
never grow again.
Thanks!
--
The problem is that peering shouldn't hang for no reason. If you are
seeing peering hang for a long time, either:
1) you are hitting a peering bug, which we need to track down and fix, or
2) peering actually cannot make progress.
In case 1, it can be nice to have a workaround to force peering to
Temporary network failures should be handled correctly. The best
solution is to actually fix that bug then. Capture logging on all
involved osds while it is hung and open a bug:
debug osd = 20
debug filestore = 20
debug ms = 1
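In ceph.conf these settings would sit under the osd section; they can also be applied to running daemons without a restart (the injectargs form below is the usual mechanism, though exact syntax can vary by release):

```ini
[osd]
debug osd = 20
debug filestore = 20
debug ms = 1
```

At runtime: `ceph tell osd.* injectargs '--debug-osd 20 --debug-filestore 20 --debug-ms 1'`.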
-Sam
On Mon, Nov 2, 2015 at 5:24 PM, yangruifeng.09...@h3c.com
I will try my best to get the detailed log.
In the current version, can we ensure that the messages related to peering
are correctly received by peers?
thanks
Ruifeng Yang.
-----Original Message-----
From: Samuel Just [mailto:sj...@redhat.com]
Sent: 3 November 2015 9:28
To: yangruifeng 09209 (RD)
Cc:
root@ceph:~# uname -a
Linux ceph 3.16.0-44-generic #59~14.04.1-Ubuntu SMP Tue Jul 7 15:07:27 UTC 2015
x86_64 x86_64 x86_64 GNU/Linux
root@ceph:~# cat /etc/issue
Ubuntu 14.04.2 LTS \n \l
thanks
Ruifeng Yang.
-----Original Message-----
From: ceph-devel-ow...@vger.kernel.org
Yeah, there's a heartbeat system, and the messenger provides reliable delivery.
-Sam
On Mon, Nov 2, 2015 at 5:41 PM, yangruifeng.09...@h3c.com
wrote:
> I will try my best to get the detailed log.
> In the current version, we can ensure the messages that are related to
>
> On Nov 3, 2015, at 06:44, Ilya Dryomov wrote:
>
> Hello,
>
> This adds nocephx_sign_messages libceph option (a lack of which is
> something people are running into, see [1]), plus a couple of related
> cleanups.
>
> [1]