On 08/04/2015 10:53 PM, Sage Weil wrote:
I rebased the wip-user patches from wip-selinux-policy onto
wip-selinux-policy-no-user + merge to master so that it sits on top of the
newly-merged systemd changes.
Great, so if it is in a build-ready state, I can try it with our virtual cluster
install.
Hi Sam,
How does this proposal sound? It would be great if that was done before the
feature freeze.
Cheers
On 29/07/2015 11:16, Loic Dachary wrote:
Hi Sam,
The SHEC plugin[0] has been running in the rados runs[1] for the past few
months. It also has a matching corpus verification which
On Tue, 2015-08-04 at 00:38 +0100, John Spray wrote:
OK, here are vstart+ceph.in changes that work well enough in my out of
tree build:
https://github.com/ceph/ceph/pull/5457
Great!
John
On Mon, Aug 3, 2015 at 11:09 AM, John Spray jsp...@redhat.com wrote:
On Sat, Aug 1, 2015 at 8:24
Hi,
comments inline.
On 05 Aug 2015, at 05:45, Jevon Qiao qiaojianf...@unitedstack.com wrote:
Hi Jan,
Thank you for the detailed suggestion. Please see my reply in-line.
On 5/8/15 01:23, Jan Schermer wrote:
I think I wrote about my experience with this about 3 months ago, including
On Wed, 5 Aug 2015, Ding Dinghua wrote:
2015-08-05 0:13 GMT+08:00 Somnath Roy somnath@sandisk.com:
Yes, it has to re-acquire the pg_lock today.
But, between the journal write and initiating the ondisk ack, there is one
context switch in the code path. So, I guess the pg_lock is not the only
Dear Ali,
my point is no longer relevant, but your reassurance is still very
relevant.
Thanks
Owen
On 08/04/2015 08:26 PM, Ali Maredia wrote:
Owen,
I understand your concern, and don't think any transition will be made to
CMake until all the functionality is in it and until it has been
So, I've got this very-cephfs-specific piece of pgls filtering code in
ReplicatedPG:
https://github.com/ceph/ceph/commit/907a3c5a2ba8e3edda18d7edf89ccae7b9d91dc5
I'm not sure I'm sufficiently motivated to create some whole new
plugin framework for these, but what about piggy-backing on object
On Wed, 5 Aug 2015, John Spray wrote:
So, I've got this very-cephfs-specific piece of pgls filtering code in
ReplicatedPG:
https://github.com/ceph/ceph/commit/907a3c5a2ba8e3edda18d7edf89ccae7b9d91dc5
I'm not sure I'm sufficiently motivated to create some whole new
plugin framework for
On Tue, Aug 4, 2015 at 3:23 PM, GuangYang yguan...@outlook.com wrote:
Thanks to Sage, Yehuda and Sam for the quick replies.
Given the discussion so far, could I summarize it into the following bullet
points:
1. The first step we would like to pursue is to implement the following
mechanism to avoid
8AM PST as usual (that's in 13 minutes, folks!). No specific topics for
this week; please feel free to add your own!
Here are the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To join via Browser:
Hi,
We are planning to move our radosgw setup from apache to civetweb. We
were successfully able to set up and run civetweb on a test cluster.
The radosgw instances are fronted by a VIP which currently checks the
health by getting the /status.html file; after moving to civetweb the VIP
is unable to
On Wed, 5 Aug 2015, Loic Dachary wrote:
Hi Sam,
How does this proposal sound? It would be great if that was done before
the feature freeze.
I think it's a good time.
Takeshi, note that what this really means is that the on-disk encoding
needs to remain fixed. If we decide to change it
On Tue, Aug 4, 2015 at 3:41 AM, Loic Dachary l...@dachary.org wrote:
Hi Yehuda,
The next hammer release, as found at https://github.com/ceph/ceph/tree/hammer,
passed the rgw suite (http://tracker.ceph.com/issues/11990#rgw and
http://tracker.ceph.com/issues/12502#note-6).
Do you think the
Hi,
Here is another make check failure. They don't seem to be related. To the best of
my knowledge these are the only two rbd-related failures in make check during
the past week.
http://jenkins.ceph.dachary.org/job/ceph/LABELS=ubuntu-14.04x86_64/6884/console
[ RUN ]
Hi Mark,
I missed today's call :-(
Could you please point me to the recording? The Etherpad link only shows
recordings until 07/08.
__Neo
On Wed, Aug 5, 2015 at 7:47 AM, Mark Nelson mnel...@redhat.com wrote:
8AM PST as usual (that's in 13 minutes, folks!). No specific topics for this
week; please
Today I learned that syncfs(2) does an O(n) search of the superblock's
inode list searching for dirty items. I've always assumed that it was
only traversing dirty inodes (e.g., a list of dirty inodes), but that
appears not to be the case, even on the latest kernels.
That means that the more
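As a point of reference, here is a minimal sketch of the syscall being discussed; it is illustrative only, not Ceph code, and the OSD data path used is a hypothetical example. syncfs(2) flushes the whole filesystem that contains the given descriptor, which is where the per-superblock walk of every cached inode, clean or dirty, gets paid.

/* Illustrative only: sync the filesystem containing one directory.
 * syncfs() is Linux-specific (>= 2.6.39) and needs _GNU_SOURCE. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical OSD data path; any file on the target filesystem works. */
    int fd = open("/var/lib/ceph/osd/ceph-0", O_RDONLY | O_DIRECTORY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* The kernel walks the superblock's inode list here, not just the
     * dirty inodes -- the O(n) behaviour described above. */
    if (syncfs(fd) < 0)
        perror("syncfs");
    close(fd);
    return 0;
}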
Thanks, Sage, for digging into this. I was suspecting something similar. As I
mentioned in today's call, even when idle, syncfs is taking ~60 ms. I have 64
GB of RAM in the system.
The workaround I was talking about today is working pretty well so far. In
this implementation, I am not giving much
On 08/05/2015 04:26 PM, Sage Weil wrote:
Today I learned that syncfs(2) does an O(n) search of the superblock's
inode list searching for dirty items. I've always assumed that it was
only traversing dirty inodes (e.g., a list of dirty inodes), but that
appears not to be the case, even on the
Dear developers,
My name is Cai Yi, and I am a graduate student majoring in CS at Xi’an Jiaotong
University in China. From Ceph’s homepage, I know Sage is the author of Ceph,
and I got the email address from your GitHub and Ceph’s official website.
Because Ceph is an excellent distributed file
Hi Nigel,
On Wed, Aug 5, 2015 at 9:00 PM, Nigel Williams
nigel.willi...@utas.edu.au wrote:
On 6/08/2015 9:45 AM, Travis Rhoden wrote:
A new version of ceph-deploy has been released. Version 1.5.27
includes the following:
Has the syntax for use of --zap-disk changed? I moved it around but
Agree
On Thu, Aug 6, 2015 at 5:38 AM, Somnath Roy somnath@sandisk.com wrote:
Thanks, Sage, for digging into this. I was suspecting something similar. As I
mentioned in today's call, even when idle, syncfs is taking ~60 ms. I have
64 GB of RAM in the system.
The workaround I was talking about
On 6/08/2015 2:22 PM, Travis Rhoden wrote:
A few things in this area changed with 1.5.26. ceph-deploy's options
are much more strictly attached only to the commands where they make
sense.
Oh, much better, thanks. I did wonder about that, but as it worked I didn't
revisit it.
On 6/08/2015 9:45 AM, Travis Rhoden wrote:
A new version of ceph-deploy has been released. Version 1.5.27
includes the following:
Has the syntax for use of --zap-disk changed? I moved it around but it is no longer
recognised; I worked around it by doing a ceph-disk zap before running ceph-deploy.
Hi Yuri,
The hammer branch for v0.94.3, as found at
https://github.com/ceph/ceph/commits/hammer, is approved by the leads
(Sam, Greg, Josh, Yehuda) and is ready for QE. The
backport itself is tracked at http://tracker.ceph.com/issues/11990,
which has a record of all the test runs so far
the tip of
Hi everyone,
A new version of ceph-deploy has been released. Version 1.5.27
includes the following:
- a new ceph-deploy repo command that allows for adding and
removing custom repo definitions
- Makes commands like ceph-deploy install --rgw only install the
RGW component of Ceph.
This works
Hi,
After upgrading to Hammer and moving from apache to civetweb, we
started seeing high PUT latency, on the order of 2 sec for every PUT
request. The GET request lo
Attaching the radosgw logs for a single request. The ceph.conf has the
following configuration for civetweb.
Dear Sage,
note that what this really means is that the on-disk encoding needs to remain
fixed.
Thank you for letting us know about this important point.
We have no plan to change shec's format at the moment, but we will keep the
comment in mind for any future changes.
Best Regards,
Takeshi Miyamae